Talk:Lasso (statistics)

Capitalization
In the article "Lasso (statistics)" the main object is written throughout as LASSO. In the literature, three ways of writing it are used, namely "lasso", "Lasso" and "LASSO". I think it should be changed to "lasso", the style introduced by Tibshirani (1996), to be neutral. But all three options should be mentioned. — Preceding unsigned comment added by 87.139.244.85 (talk) 07:13, 19 January 2016 (UTC)


 * Using "lasso" without any capitalisation is wrong, because it is the form appropriate for a common noun, not a proper noun. The form "Lasso" evokes a misleading analogy to a rope used for catching animals. All uppercase is the proper form for an acronym, which LASSO is. Following the most accepted style in the professional literature is more important than paying tribute to the pioneer researcher. As far as I can see from this selection, "LASSO" is the most widespread style, while "Lasso" is used less often. AVM2019 (talk) 20:02, 10 September 2020 (UTC)

Average vs sum
I think the 1/N prefactors throughout the article shouldn't be there. They're not in the LASSO, as far as I understand. They lead to confusion. I don't want to edit the article myself, I think an expert should step in and change things. Hope this comment is ok, I've never commented on Wikipedia before. 128.218.42.15 (talk) 02:11, 22 March 2017 (UTC)
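For concreteness, the two parameterizations under discussion differ only by a rescaling of the penalty weight (a sketch in standard linear-model notation, not copied from the article):

$$ \min_{\beta} \; \frac{1}{2N} \sum_{i=1}^{N} (y_i - x_i^{T}\beta)^2 + \lambda \|\beta\|_1 \qquad \text{vs.} \qquad \min_{\beta} \; \sum_{i=1}^{N} (y_i - x_i^{T}\beta)^2 + \tilde{\lambda} \|\beta\|_1 $$

Multiplying the left objective by 2N shows the two problems have the same minimizer whenever $$\tilde{\lambda} = 2N\lambda$$, so the 1/N factor only changes how the penalty weight scales with sample size, not the set of solutions. Tibshirani (1996) states the problem without the 1/N factor, in constrained form (minimize the residual sum of squares subject to $$\|\beta\|_1 \le t$$).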

What does "LASSO" refer to exactly?
Several different nouns are applied to LASSO throughout the article, including:
 * operator (coming from the O in LASSO), which suggests that LASSO is some sort of a transformation mapping on a vector space or similar structure;
 * estimator, which puts LASSO in the context of inferential statistics;
 * method, which is a very broad notion, potentially suggesting that LASSO is a computational method;
 * regularization, which presents LASSO as a regularisation method.

But most often it is referred to as "the lasso", which is grammatically incorrect and avoids putting it in any wider scientific context.

I find all this confusing (if not inconsistent). The article lacks clarity as to where exactly LASSO stands (a concept, a class of optimisation problems, an approach, or a method) and what category its analogues fall into.

Someone with sufficient depth and breadth of subject knowledge should address this issue. AVM2019 (talk) 19:24, 10 September 2020 (UTC)


 * I agree, this article is a mess, and is extremely confusing. It is one of the worst available sources of information on this topic, and significantly overcomplicates things. It is written in a very inaccessible and pedantic manner. To answer your question, from my level of understanding as a PhD student (certainly no expert), all of the terms you have listed apply to the LASSO in different ways. I think by far the easiest way of understanding it is as a regularization of ordinary least squares regression. "The lasso" or "the LASSO" is commonly used to describe it; I'm not sure why you mentioned that it was grammatically incorrect. Let me know if I can answer any questions. I'm thinking about rewriting a lot of this article. Saimouer (talk) 20:48, 25 March 2021 (UTC)
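To make "regularization of ordinary least squares" concrete, here is a small sketch (not from the article; the toy data, penalty weight, and the simple coordinate-descent solver are my own illustrative choices). It compares OLS with a lasso fit of the objective $$\frac{1}{2n}\|y - X\beta\|_2^2 + \lambda\|\beta\|_1$$ on data with a sparse true coefficient vector:

```python
import numpy as np
from numpy.linalg import lstsq

# Toy problem: only coefficients 0 and 3 are truly nonzero.
rng = np.random.default_rng(0)
n, p = 100, 5
X = rng.normal(size=(n, p))
beta_true = np.array([3.0, 0.0, 0.0, -2.0, 0.0])
y = X @ beta_true + 0.1 * rng.normal(size=n)

def soft_threshold(z, t):
    """Proximal operator of the L1 penalty."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate descent for min_b (1/(2n))||y - Xb||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ b + X[:, j] * b[j]   # partial residual without feature j
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

b_ols = lstsq(X, y, rcond=None)[0]     # plain OLS: all coefficients nonzero
b_lasso = lasso_cd(X, y, lam=0.5)      # lasso: irrelevant coefficients set exactly to 0
```

The point of the sketch is the qualitative behaviour: OLS returns small but nonzero estimates for the irrelevant features, while the L1 penalty shrinks them to exactly zero, which is the variable-selection property the article should foreground.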

Data-optimized OLS?
In the section "Making λ easier to interpret with an accuracy-simplicity tradeoff" the term "data-optimized OLS" is used, which I haven't seen in any scientific literature on the subject yet. Searching the net for this term, I only found it in the given reference by Victor Hoornweg, who introduced this section into the article himself as user Beos123. However, this reference does not seem to be peer-reviewed or to reflect general scientific consensus, so I've marked the article as containing original research. I can't judge whether the content of the section is good, or correct, or helpful (maybe it is, maybe it's not), but I think it violates WP's original research principle. Especially the terminology (data-optimized OLS, hypothesized coefficients, ...) seems very idiosyncratic to me, and I think the section should be removed. — Preceding unsigned comment added by Ezander (talk • contribs) 09:18, 2 March 2021 (UTC)

Dear Ezander, thank you for asking questions about the source of this section. It is based on the PhD dissertation of Victor Hoornweg, peer-reviewed by the doctoral committee of Erasmus University Rotterdam. In the article, 'Good ridge estimators based on prior information' by B. F. Swindel (1976), the author explains that $$\beta_0$$ 'might well be chosen to reflect as well as possible the prior information or hypotheses on b'. Hence the term 'hypothesized regression parameters'. The term 'data-optimized OLS' is used for explanatory purposes. If λ equals 0, the hypotheses have no influence on the estimated coefficients, so that the estimates are only based on data-optimization, which results in the OLS solutions. This helps, I hope, to explain that λ balances between prior hypotheses and the data (see "prior lasso" below). Greetings, Victor Hoornweg (Beos123)
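For readers following this exchange, Swindel's ridge estimator with a prior vector $$\beta_0$$ can be sketched as follows (standard ridge notation, my own rendering rather than a quotation from Swindel 1976):

$$ \hat{\beta}(\lambda) = (X^{T}X + \lambda I)^{-1}(X^{T}y + \lambda \beta_0) $$

Setting λ = 0 recovers the ordinary least squares solution $$(X^{T}X)^{-1}X^{T}y$$, so the data alone determine the estimate, while letting λ grow pulls the estimate toward the hypothesized vector $$\beta_0$$. This is the sense in which λ balances the prior hypotheses against the data.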