Talk:Nonlinear programming

Applications
Kindly add more real-life applications of NLP in the main article. For example, in Finance (portfolio optimization), Operations (cutting and packing), Engineering (electricity generation & distribution), Transportation (traffic flow models), Economics (econometric and equilibrium models), Medical science (protocols in cancer radiotherapy), Nonlinear network flow models, and Training machine learning models. Amit 08:37, 15 June 2022 (UTC) — Preceding unsigned comment added by Qx2020 (talk • contribs)

Examples
In the Examples section, the text describes things in terms of x1, x2, and (in the case of 3 dimensions) x3, while the illustrations depict things in terms of x, y and (in the case of 3 dimensions) z. For the sake of clarity, I think that the text and the images should agree in the naming of variables. Are there any objections to replacing x1, x2 and x3 in the example text with x, y and z? --134.173.66.89 22:52, 20 October 2007 (UTC)

I too would like to see this fixed. Since I'm a newcomer to this topic I don't presume to know the best manner of notation, but the easy fix from where I'm standing is to just alter the text. At least that may prevent another 2 years from elapsing before the next revisit. Flatus (talk) 00:49, 24 January 2009 (UTC)

I linked to this page from Inverse Kinematics. Any chance of any applied examples for those of us who aren't in pure math, preferably from the computer or operations end of the spectrum? The Linear Programming page had some. To be frank, as of right now, I'm substantially more confused and still don't know how this connects, at all, in inverse kinematics or to the real world. Guinness4life (talk) 00:25, 16 August 2009 (UTC)

Kuhn-Tucker
Should a discussion of the optimality conditions (i.e. the Kuhn-Tucker conditions) belong here?


 * Hello User:Kendrick Hang (I guessed your name from the history of the page). To answer your question, I don't know, I am not familiar with this particular technique (although I am doing some optimization). My opinion would be that you go ahead and try it. If what you want to add is quite long, you could also consider making a separate article. Let's see how it goes. Be bold in updating pages! And it would be nice if you have some references. --Oleg Alexandrov 06:23, 26 Dec 2004 (UTC)


 * Optimality conditions should most definitely be mentioned. Please do add them. We can always reorganize later if necessary. Jitse Niesen 11:39, 26 Dec 2004 (UTC)

Optimality conditions are definitely something that has to be mentioned. I personally prefer the order as in Bazaraa/Sherali/Shetty (why is the older book given as a reference in the article?), proceeding from the geometric idea (non-existence of a feasible descent direction) via the Fritz John conditions to the KKT conditions, which require regularity/convexity.

BTW: can anyone tell me how a convex problem could be solved with the same techniques as linear programming (where I suppose the simplex method is referred to)? I would agree that there are similarities if you use an active index set strategy, but IMHO it's by no means the same method.


 * I am not sure the same techniques that work for linear programming will work for convex problems. For linear problems the optimal solution is guaranteed to be at the boundary, while for convex problems most of the time it will be inside the domain.


 * Gradient descent works fine for convex problems, as long as you know the derivative of the objective function. Oleg Alexandrov 23:08, 15 Feb 2005 (UTC)


 * But I don't know much about what you mention above (active index set strategies) so I could be wrong. Oleg Alexandrov 23:08, 15 Feb 2005 (UTC)
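As a minimal sketch of the gradient descent mentioned above, assuming a simple convex quadratic objective with a known analytic gradient (a hypothetical example chosen for illustration; `f`, `grad_f`, the step size, and the starting point are all made up here):

```python
# Hypothetical convex objective f(x, y) = (x - 1)^2 + 2*(y + 3)^2,
# whose unique minimizer is (1, -3).

def grad_f(x, y):
    # Analytic gradient of f
    return 2 * (x - 1), 4 * (y + 3)

def gradient_descent(step=0.1, iters=200):
    x, y = 0.0, 0.0  # arbitrary starting point
    for _ in range(iters):
        gx, gy = grad_f(x, y)
        x -= step * gx  # move against the gradient
        y -= step * gy
    return x, y

x, y = gradient_descent()  # converges close to (1, -3)
```

For a convex, differentiable objective a small enough fixed step size is guaranteed to converge; for constrained problems one would additionally project each iterate back onto the feasible set.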

Formatting issues
To User: Frau Holle

I noticed that you split the long introduction of the Nonlinear programming page into several sub-sections, like Mathematical formulation of the problem and Methods for solving the problem, etc. But you left some text in Mathematical formulation of the problem that clearly belonged in Methods for solving the problem. I fixed that. Please let me know if you think I am wrong about the change.

Oh, and I don't think that the part about the least squares minimization you included in the Examples subsection of the Nonlinear programming page is a true example. What do you think?

Thanks,

--User: Olegalexandrov


 * Please feel free to make further improvements. I think we both agree that this article is still far from being a good one. And yes, least squares minimization is rather a subclass than an example. - Greetings from hinter den Bergen Frau Holle 16:32, 22 Nov 2004 (UTC)


 * I removed the least squares minimization part. Wikipedia is an interesting thing. On one hand, it is a collaborative process, so whatever work we do, it will be improved upon. On the other hand, it can be risky to add stuff, hoping that later it will be harmonized/improved by other people. So there is a tricky balancing act. --User: Olegalexandrov


 * I restored it in the minimal version of a see also link under the heading related topics. The present article seems to be written with economic applications in mind. However, given the state of the optimization (mathematics) article, many readers following the link to nonlinear programming will expect information about nonlinear curve fitting. So the least we can do for them is to redirect them to more pertinent articles. -- Frau Holle 20:30, 22 Nov 2004 (UTC)


 * Thanks! I was not sure whether to keep the link or not. You are right, it is good to have it in there. By the way, how do you insert the current date? I mean, the time you wrote above? Is there an instruction in Wikipedia? --User: Olegalexandrov

Convex
If the objective function is convex in all cost functions (when looking from the "bottom")

Can the meaning of this condition be clarified? This is especially confusing because both "objective function" and "cost function" have been used above this to refer to the function to be minimized. Josh Cherry 04:22, 25 Apr 2005 (UTC)

Lower Bound
The statement "at some point an actual solution will be obtained whose cost is equal to or lower than the best lower bound obtained for any of the approximate solutions" seems misleading. How can an actual solution be cheaper than the best lower bound?

A question
Given the optimization problem $$\min_{x,y} \; x^T y \quad \text{subject to} \quad x\in\Omega_1,\ y\in\Omega_2,$$ suppose we reach its solution by solving the following two problems one by one, iteratively:
 * Problem 1: $$\min_{x} \; x^T y \quad \text{subject to} \quad x\in\Omega_1$$
 * Problem 2: $$\min_{y} \; x^T y \quad \text{subject to} \quad y\in\Omega_2$$
What is this method called? And is there an article concerning this algorithm in Wikipedia? Thank you! --虞海 (Yú Hǎi) (talk) 08:35, 13 July 2010 (UTC)
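The scheme described in the question is usually called alternating minimization, a two-block case of block coordinate descent (covered in the Wikipedia article Coordinate descent). A minimal sketch, assuming (hypothetically, for illustration only) that x and y are scalars and Ω1, Ω2 are closed intervals, so each subproblem minimizes a linear function over an interval and just picks an endpoint:

```python
# Alternating minimization for min x*y with x in [a, b], y in [c, d].
# With one variable fixed, the objective is linear in the other, so
# each subproblem is solved by choosing an interval endpoint.

def argmin_linear(coeff, lo, hi):
    # Minimize coeff * t over t in [lo, hi]
    return lo if coeff >= 0 else hi

def alternating_minimization(omega1, omega2, iters=10):
    x, y = omega1[0], omega2[0]  # start at arbitrary feasible point
    for _ in range(iters):
        x = argmin_linear(y, *omega1)  # Problem 1: fix y, minimize over x
        y = argmin_linear(x, *omega2)  # Problem 2: fix x, minimize over y
    return x, y

x, y = alternating_minimization((-1.0, 2.0), (-3.0, 4.0))
```

In this toy run the iteration settles at the corner (2, -3), which happens to be the global minimum, but in general a bilinear objective is not jointly convex in (x, y), so alternating minimization is only guaranteed to stop at a point that is optimal in each block separately, not a global optimum.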