Talk:Huber loss

Corrections needed
As far as I can tell this article is wrong, and the notation is a mess.

+ Please don't use $L$ for every loss function.

+ The suggested criteria seem to be missing the important constraint of convexity.

+ A continuous function $f$ satisfies condition 1 iff $f(x)\geq 1 \, \forall x$. This is not what you want.

+ From the perspective of SVM-style learning, the ideal loss function in condition 1 should be the 0–1 step $\delta(x)=\begin{cases} 0&\text{if } x\leq 0\\1& \text{otherwise.}\end{cases}$ Then the hinge loss $L^1(x)=\max(x+1,0)$ and the quadratic hinge loss $L^2(x)=(\max(x+1,0))^2$ form upper bounds satisfying condition 1.

Then, taking $H$ to be the Huber function $H(x)=\begin{cases}x^2/2&x<1\\x-1/2 &\text{otherwise,}\end{cases}$ an appropriate Huber-style loss function would be either $H(\max(x+2,0))$ or $2H(\max(x+1,0))$, as both of these would satisfy the corrected conditions 1–3 and convexity.
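The upper-bound claims above are easy to check numerically. The sketch below (my own illustration, not from the article) uses the standard Huber function with the $x-1/2$ linear branch at threshold 1, and compares each proposed surrogate against the 0–1 step on a grid; the function names `step`, `hinge`, and `huber` are just labels chosen here.

```python
import numpy as np

def step(x):
    # Ideal 0-1 loss: 0 for x <= 0, 1 otherwise.
    return np.where(x <= 0, 0.0, 1.0)

def hinge(x):
    # Hinge loss L^1(x) = max(x + 1, 0).
    return np.maximum(x + 1, 0.0)

def huber(x):
    # Standard Huber function (threshold 1): quadratic near 0, linear beyond.
    return np.where(np.abs(x) < 1, 0.5 * x**2, np.abs(x) - 0.5)

x = np.linspace(-5, 5, 2001)
# Each candidate surrogate should dominate the 0-1 step everywhere.
print(np.all(hinge(x) >= step(x)))                      # hinge loss
print(np.all(hinge(x)**2 >= step(x)))                   # quadratic hinge loss
print(np.all(huber(np.maximum(x + 2, 0)) >= step(x)))   # H(max(x+2, 0))
print(np.all(2 * huber(np.maximum(x + 1, 0)) >= step(x)))  # 2H(max(x+1, 0))
```

All four checks come out true on the grid, consistent with the claim that these convex surrogates upper-bound the ideal step loss.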

I haven't made the above corrections as I'm unfamiliar with the Huber loss, and it presumably has uses outside of SVMs in continuous optimization. For those cases criterion 1 will need to be fixed. Hopefully someone who is familiar with Huber's loss can make some corrections. 86.31.244.195 (talk) 17:08, 6 September 2010 (UTC)

Some corrections
I agree with the previous writer. This article was poorly sourced, made a lot of unqualified and unreferenced claims, and suffered from imbalance, being written from the POV of an enthusiast for "machine learning". I tried to make the most important corrections. Kiefer.Wolfowitz (talk) 13:50, 30 October 2010 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Huber loss. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links or the page altogether, please visit this simple FAQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20150126123924/http://statweb.stanford.edu/~tibs/ElemStatLearn/ to http://statweb.stanford.edu/~tibs/ElemStatLearn/

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 00:07, 8 November 2017 (UTC)

An error in a formula
The factor delta squared in the smooth version should be delta. Perhaps one should then add the condition delta > 0 for good measure. 87.52.15.99 (talk) 11:17, 2 September 2023 (UTC)

Pseudo-Huber loss function (redundant scale factor in loss function)
The delta^2 multiplier is redundant, right? 162.246.139.210 (talk) 18:21, 30 October 2023 (UTC)
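A quick numerical check may help here. Assuming the standard pseudo-Huber form $L_\delta(a)=\delta^2\left(\sqrt{1+(a/\delta)^2}-1\right)$, the $\delta^2$ prefactor is what makes the function reduce to $a^2/2$ for small residuals and to slope $\delta$ for large ones, matching the two branches of the piecewise Huber loss; with a single factor of $\delta$ the small-residual limit would instead be $a^2/(2\delta)$. A sketch (my own, not from the article):

```python
import numpy as np

def pseudo_huber(a, d):
    # Smooth (pseudo-)Huber loss with the delta^2 prefactor.
    return d**2 * (np.sqrt(1 + (a / d)**2) - 1)

delta = 3.0

# Small-residual regime: should behave like a^2 / 2, the quadratic
# branch of the piecewise Huber loss. Ratio should be ~1.
a_small = 1e-3
print(pseudo_huber(a_small, delta) / (a_small**2 / 2))

# Large-residual regime: slope should approach delta, matching the
# linear branch of the piecewise Huber loss. Ratio should be ~1.
a_large = 1e6
print(pseudo_huber(a_large, delta) / (delta * a_large))
```

Both ratios come out close to 1, which suggests the $\delta^2$ factor is doing real work rather than being a redundant scale; dropping it changes which Huber regime the small-residual limit matches.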