Talk:Mathematical optimization/Archive 1

Global optimization
A more general question that came up: this page is about global optimization. I propose to merge Optimization (mathematics) and global optimization and store it in global optimization. The entry Optimization (mathematics) should be changed to give links to global optimization and local optimization and describe the difference (like NP-completeness for most GO problems; local optimization is more or less a topic in numerics, with standard algorithms like conjugate gradient, etc.).

I strongly argue in this direction, as optimization has to be distinguished according to the targets that one follows.


 * I disagree that this page is about global optimization, but I agree that they should be merged. I think that global optimization should redirect to here, and that there should be a section in this article on local vs global optimization, and what kind of algorithms find which or are likely to find which etc.  moink 17:08, 31 Mar 2004 (UTC)


 * Well, the definition at the top of the page talks about finding x_m in the domain A, so that f(x_m)<=f(x) for every x in A. A local version would be: x_m is a local optimum if there is a neighborhood U of x_m in which f(x_m)<=f(x) for all x in U. That was my reasoning behind classifying the page to be about GO. KaHa242 08:12, 2 Apr 2004 (UTC)
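In standard notation (a sketch of the usual textbook definitions, not taken from the article), the two concepts being contrasted are:

```latex
\begin{aligned}
&\text{global minimum: } x_m \in A \text{ such that } f(x_m) \le f(x) \quad \forall x \in A,\\
&\text{local minimum: } \exists\, \delta > 0 \text{ such that } f(x_m) \le f(x) \quad \forall x \in A \text{ with } \|x - x_m\| < \delta.
\end{aligned}
```

Every global minimum is a local minimum, but not conversely, which is the distinction at issue in this thread.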


 * You're right, the introduction is poor and should be fixed. moink 18:52, 2 Apr 2004 (UTC)

I disagree with many points of view expressed in this comment. First, a clear distinction must be made between the concept of an optimization problem and the capabilities and limitations of the available numerical methods to solve it. An optimization problem and its optimal solution are defined precisely as in the beginning of the article. Rigorously, the criterion that defines a local optimal solution does not necessarily meet the requirements for it to be regarded as an optimal solution, although many numerical methods have the strong limitation of not being able to guarantee rigorous (or global) optimality and treat local solutions as de facto solutions to the problem. In other words, a local optimal solution is not necessarily the actual solution to the problem, but it’s the best we can get using common NLP and MINLP solvers. Global optimization, on the other hand, is a field of study that aims at the development of deterministic algorithms capable of guaranteeing convergence in finite time to the actual optimal solution of a non-linear (and non-convex) problem. Therefore, Global Optimization defines a set of methodologies and algorithms designed to obtain a proven optimal solution to an optimization problem, and it is not, by any means, the definition of the problem at hand. --Jonnat 20:30, 17 March 2006 (UTC)

Mathematical programming
This article seems to be about mathematical programming, which is not the same as, but rather one type of, optimization. Am I right? Should it be moved to mathematical programming then? I mean: the Lagrange multiplier is not even mentioned. And the convex optimization section assumes that there exists a local minimum within the constraints. I am not an expert in this field so ... what do you think? --Rade


 * Mathematical Programming and Optimization are synonyms. However, the term optimization is sometimes viewed as having a broader sense that includes non-deterministic numerical procedures and alternative problem formulations such as genetic algorithms and tabu-search. --Jonnat 20:38, 17 March 2006 (UTC)

Notations and other alterations
I would suggest major modifications to the notations section of the article. Specifically, I suggest replacing the entire section containing all the illustrative examples with a more rigorous exposition of general instances of mathematical programming problems. From the mathematical representation of the general problem, most “subfields” specified in the next section can be derived by specifying simple characteristics of the functions in the general model, such as linearity or convexity.

Moreover, the remaining sections also require major alterations. The information under the “techniques” section only covers methods for solving non-linear programming problems. The criterion for a local minimum to also be a global one is not only the convexity of the objective function, but also that of the feasible region. The other popular methods linkified are considered heuristic methods and are commonly regarded as separate from deterministic optimization procedures.--Jonnat 21:38, 17 March 2006 (UTC)
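The convexity point above can be stated precisely (a standard textbook result, sketched here rather than quoted from the article):

```latex
\text{If } S \subseteq \mathbb{R}^n \text{ is a convex set and } f : S \to \mathbb{R} \text{ is a convex function, then}\\
\text{every local minimum of } f \text{ on } S \text{ is also a global minimum of } f \text{ on } S.
```

Convexity of the objective alone is not enough: a convex function minimized over a non-convex feasible region can have local minima that are not global.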


 * This is a nice article, and rather prominent, so whatever you want to do has to be done slowly, and with discussion. :)


 * I like the existing "Notation" section. I think you suggest replacing it with something more complicated, and as such, I would think harder to understand for the general public. I would like to see more details on what you want to do before you get down to big rewrites. Could you write in here your version of the "Notation" section? Would be nice to see. Oleg Alexandrov (talk) 02:00, 18 March 2006 (UTC)


 * I completely agree that any major changes should be thoroughly discussed. And I appreciate the suggestion of posting major changes in the discussion prior to a modification of the article. I am new to Wikipedia and still getting used to the dynamics of the site :)


 * However, I do believe that the article should be consistently reviewed. I believe that it should present the Mathematical Programming problem in a way that is closer to the ones that acknowledged textbooks in the field use. But primarily, I think that the information contained in the article should be concise. If methods for solving nonlinear programming problems are described, so must be methods for solving LPs and IPs. And a clear distinction between proven finite-time methods and heuristics should be made.


 * Your point about the understanding of the general public is crucial, and I admit that I have little idea about who Wikipedia’s public is. I would really appreciate it if you could share some insights on that matter. All I can do now, however, is base this analysis on myself, as a Wikipedia user. I use Wikipedia differently when I’m looking for mathematical theories or general information. In the latter case, I’d rather find a simplified article that would give me some basic definitions to start understanding a topic, whereas in the former case, I like to see more rigorous articles that could give me a more complete understanding of the topic and give me some pointers to further study. I don’t know if I’m off the scope of the site, but please tell me if so. --Jonnat 15:40, 20 March 2006 (UTC)


 * As often, when Oleg says that an article is "nice", I think it is merely a good start.
 * The public depends on the article under consideration. This article is probably also read by people that do not know much mathematics. So I think a good idea is to start at the most basic level possible. However, mathematicians will also read the article so it's okay to get technical later. The important thing is to increase the difficulty slowly throughout the article (see Manual of Style (mathematics) and Make technical articles accessible for more discussion on this). Regarding textbooks, think of a text book used by non-maths (e.g., economics) students.
 * This makes me a bit anxious to support your changes to the notation section, though I'm not quite sure what you want. I think a nice outline for the article is to start with an informal definition (what is optimization?), then examples / applications (including notation) and then a discussion of the subfields (LP, IP, QP, etc.). The subfields should be treated in a few paragraphs (what is a linear problem? what are the main techniques for solving them?), with references to articles like linear programming for details (see Summary style).
 * I agree that the article, as it stands currently, gives an undue stress on nonlinear problems, and especially heuristic algorithms for them.
 * The question of the level at which an article should address its readers is one of the most difficult questions on Wikipedia and often generates a lot of discussion, so feel free to disagree. But you're most certainly not off-scope for the site. There are some highly technical articles around here.
 * Finally, welcome! I look forward to your future contributions. -- Jitse Niesen (talk) 23:56, 20 March 2006 (UTC)


 * Thank you for the insights. The linked pages were very useful. --Jonnat 22:15, 21 March 2006 (UTC)

Below are the discussed suggestions. The next section would replace the current sections of “notations” and “major subfields”. --Jonnat 16:32, 18 April 2006 (UTC)

logical programming
I always thought that logical programming is a special case of mathematical programming (where the search is for a model that satisfies a given theory), but the page on logical programming is written from a different viewpoint, like it is just another programming paradigm. I wonder if I am correct, maybe the log. progr. page could be improved by people here. Samohyl Jan 16:58, 14 June 2006 (UTC)

Shorter introduction, promote discussion of applications, history
I support the comments above about placing the more technical content later in the article, and therefore propose a short introduction quickly followed by an expanded section on Uses. The ones given at the moment seem relatively specialized. It's important to mention that optimization lies at the heart of traditional (neo-classical) economics, which works out the consequences of a major hypothesis about human behaviour (i.e. that people make optimizing choices). One should also mention its role in statistical estimation – least-squares methods, maximum likelihood.

After Uses, History would be relatively non-technical and a good rationale for the more technical content that follows. Indeed, the history would provide a good way to organize the technical content.

One also needs to mention, or link to, extreme-value functions and duality theory.

Rwb001 16:07, 16 October 2006 (UTC)


 * I absolutely agree. Of all concepts used in mainstream economics this is the most fundamental and the article should mention it. I am going to add a short paragraph on it, then we can work on that together. MartinDK 17:29, 8 November 2006 (UTC)

unrelated to computer programming
hi. AFAIK mathematical programming is completely unrelated to computer programming; however, the article is a bit ambiguous with regard to this. I've not changed it since I'm not truly sure... Jabbba (talk) 21:27, 3 February 2008 (UTC)

How about a much shorter introduction

 * The concepts of Maxima_and_minima have been nicely described in the article about Maxima_and_minima, but the description in the present article is a little vague. Perhaps it is better to provide a very short description and a link.
 * The description of convexity is important, but it should be put in a subsection rather than in the introduction of the article.


 * I do agree convexity should be placed in another section. Xerenio (talk) 18:51, 23 May 2008 (UTC)

Lagrange multipliers
This article currently states: Constrained problems can often be transformed into unconstrained problems with the help of Lagrange multipliers. This statement is misleading in this article. Lagrange multipliers create saddle points, not stable optima, where the constraints are satisfied. Since (nearly?) all of the optimization techniques listed in this article seek stable optima, the implied procedure (use Lagrange multipliers to create an unconstrained problem, and then solve with your favorite optimization technique) will not work--the optimizer will just converge to values of +/- infinity. If there is a known way to use Lagrange multipliers to create an unconstrained optimization problem that is suitable for these optimization techniques, then more details are needed. —Preceding unsigned comment added by 128.187.80.2 (talk) 18:40, 28 May 2009 (UTC)
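The saddle-point behaviour described here is easy to reproduce numerically. Below is a minimal sketch (the toy problem, step size, and iteration counts are made up for the demo): plain gradient descent on the Lagrangian is repelled by the saddle, while the classic fix, descent in the primal variables combined with ascent in the multiplier, converges to the constrained optimum.

```python
# Toy problem: minimize f(x, y) = x^2 + y^2  subject to  x + y = 1.
# Lagrangian: L(x, y, lam) = x^2 + y^2 + lam * (x + y - 1).
# Its stationary point (0.5, 0.5, -1) is a *saddle* of L, so naive
# gradient descent on L in all three variables diverges; descent in
# (x, y) combined with ascent in lam converges.

def grad(x, y, lam):
    """Gradient of the Lagrangian w.r.t. (x, y, lam)."""
    return (2 * x + lam, 2 * y + lam, x + y - 1)

eta = 0.1  # step size (arbitrary demo choice)

# 1) Naive descent on all three variables: repelled by the saddle.
x = y = lam = 0.0
for _ in range(200):
    gx, gy, gl = grad(x, y, lam)
    x, y, lam = x - eta * gx, y - eta * gy, lam - eta * gl
naive_x = x

# 2) Descent in (x, y), ascent in lam: converges to the optimum.
x = y = lam = 0.0
for _ in range(2000):
    gx, gy, gl = grad(x, y, lam)
    x, y, lam = x - eta * gx, y - eta * gy, lam + eta * gl  # sign flip on lam

print(abs(naive_x) > 10)         # True: naive descent has blown up
print(round(x, 3), round(y, 3))  # close to the constrained optimum (0.5, 0.5)
```

So the quoted sentence in the article is indeed misleading as stated: turning a constrained problem into an unconstrained one via the Lagrangian requires a saddle-point method (or an augmented/penalty reformulation), not a plain minimizer.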

external link/modeling languages
Hi. I miss the link there to the high-level modeling language 'lpl', which was designed and developed at the University of Fribourg over the last 20 years and is one of the leading (mathematical) modeling languages by all means Crisgh (talk) 15:12, 29 October 2009 (UTC)

A reference
User Bonnans would like his books added to this article's reference section. I'm not familiar with these books (nor whether they were used in editing this article).  Jwesley 78 18:07, 21 March 2010 (UTC)
 * I saw your note here after replying on the page of Bonnans. His books are among the world's best books on optimization. His co-author, Professor Gilbert, has recently joined the English-language Wikipedia, also; Professor Gilbert's discussion of French articles notes many problems with our English articles. Sincerely, Kiefer.Wolfowitz (talk) 01:58, 9 November 2010 (UTC)

Minor Optimization Methods
Some authors of the papers edit these important pages of wikipedia and put their recent methods, which are not verified in the literature, in these articles. For example, someone has put his algorithm called cuckoo in all the pages about optimization and evolutionary computation. I think wikipedia is not a tool for advertising methods and doing science marketing. If we are going to list minor algorithms like these in the list of important optimization methods, this list should include more than 100 algorithms.

Let's use wikipedia to share the knowledge, not to highlight our works. —Preceding unsigned comment added by 74.194.214.87 (talk) 22:59, 23 November 2010 (UTC)


 * I removed the least important heuristics, and classified the others as heuristics, providing some context. Please be bold, and remove those links and gently alert the author (informally or formally) about WP guidelines for notability, and COI, if you believe the violation is serious. (I don't know what is worse, the proliferation of heuristics in optimization on WP, or the cult of machine learning in statistics articles on WP!) Kiefer.Wolfowitz (talk) 01:13, 24 November 2010 (UTC)

Dynamic Optimization
I think there should be a section in this article considering the case when at least one of the following components changes over time: (a) the objective functions; (b) the problem instance; or (c) the variable constraints. Obviously, many real world optimization problems share this property of temporal change. If no one objects to this proposal, I'll create a section on dynamic optimization in the following weeks. —Preceding unsigned comment added by Crbazevedo (talk • contribs) 13:20, 3 December 2010 (UTC)


 * The following dynamic topics already appear:


 * Stochastic programming studies the case in which some of the constraints or parameters depend on random variables.
 * Infinite-dimensional optimization studies the case when the set of feasible solutions is a subset of an infinite-dimensional space, such as a space of functions.
 * Trajectory optimization is the specialty of optimizing trajectories for air and space vehicles.
 * In a number of subfields, the techniques are designed primarily for optimization in dynamic contexts (that is, decision making over time):
 * Calculus of variations seeks to optimize an objective defined over many points in time, by considering how the objective function changes if there is a small change in the choice path.
 * Optimal control theory is a generalization of the calculus of variations.
 * Dynamic programming studies the case in which the optimization strategy is based on splitting the problem into smaller subproblems. The equation that describes the relationship between these subproblems is called the Bellman equation.
 * Kiefer.Wolfowitz (talk) 14:47, 3 December 2010 (UTC)
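The Bellman-equation idea in the last bullet can be illustrated with a tiny sketch (the graph and edge costs below are made up for the example): the cost-to-go of each state is defined recursively in terms of the costs-to-go of its successor states.

```python
# Shortest-path costs on a small DAG, computed by solving the Bellman
# recursion  V(u) = min over edges (u, v) of [ c(u, v) + V(v) ]
# backwards from the goal node "D".

edges = {            # node -> list of (successor, edge cost)
    "A": [("B", 1), ("C", 4)],
    "B": [("C", 2), ("D", 6)],
    "C": [("D", 3)],
    "D": [],         # goal node
}

V = {"D": 0}                        # cost-to-go at the goal is zero
for u in ["C", "B", "A"]:           # reverse topological order
    V[u] = min(c + V[v] for v, c in edges[u])

print(V["A"])  # cheapest total cost from A to D
```

Each subproblem ("cheapest cost from node u") is solved once and reused, which is the defining feature of dynamic programming mentioned above.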

Example section
This article does a pretty good job of providing an advanced overview of optimization methods, but it seems to me it's missing an example section, perhaps between sections 3 and 4. Just work through a simple example of minimizing (or maximizing) a smooth, convex (or concave) function by calculating the first-order condition associated with the optimum. And show a graph. And explain how the second-order condition verifies optimality. I will do this myself if I find time, but it would be great if someone beat me to it. Rinconsoleao (talk) 10:51, 12 May 2011 (UTC)
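A worked example of the kind proposed might look like this (the function is made up for illustration):

```latex
f(x) = x^2 - 4x + 5, \qquad
f'(x) = 2x - 4 = 0 \;\Longrightarrow\; x^* = 2, \qquad
f''(x^*) = 2 > 0,
```

so the first-order condition yields the single stationary point x* = 2, and the positive second derivative verifies that it is a minimum (global, since f is convex), with optimal value f(x*) = 1.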

Search algorithms are sloppily classified
I recently did some changes to section "Computational optimization techniques" that Kiefer Wolfowitz reverted. Please see the comparison between my version and the current one (my version is on the left). Here's my main edit summary:

"Every search method is an algorithm. And every search method is iterative or uses an iteration in some step (even the Simplex algorithm). Thus, this classification needs to be rewritten."

In the current version, optimization techniques are classified as:
 * 1) Algorithms
 * 2) Iterative methods
 * 3) Heuristics.

That's sloppy terminology. Both the "iterative" and the heuristic methods are algorithms (indeed the main article for "Heuristics" is called Heuristic algorithms), and obviously all of them have a "finite number of steps" (even if some of these steps are iterated).

All the optimization techniques are search algorithms, and the link to the relevant article is not even provided in the current version. The very important distinction between techniques for global optimization and techniques which can only search for local extrema is not even mentioned in the current version.

Moreover, the phrase "algorithms that terminate in a finite number of steps", which is currently used to define one of the three types of optimization techniques is misleading. All algorithms, by definition, have a finite number of steps. If they are iterative algorithms, some of these steps are repeated again and again. And no search method, not even the most complex one, is supposed to have an infinite number of steps or an infinite number of iterations.

My edits may not be perfect (I was stopped before I could complete them), but I think they are a much better starting point than the current version.

Paolo.dL (talk) 11:17, 4 August 2011 (UTC)


 * Did you look at Knuth, as I suggested? Did you ask for help from the computer scientists at the Wikiproject Computer Science? Your edits are not based on any references and seem to have problems with OR/POV. We can cite Knuth's TAOCP Volume One if you want, which cites Kolmogorov. Kiefer .Wolfowitz 11:25, 4 August 2011 (UTC)


 * I do not have Knuth's book. Can you explain how my statements conflict with Knuth's definition of "algorithm"? Is there any optimization technique which involves an infinite number of steps? Do you deny the existence of iterative algorithms? If iterative algorithms exist (see figure in algorithm), how can you distinguish between optimization techniques 1 and 2? Do you believe that heuristic algorithms are not algorithms? Paolo.dL (talk) 12:32, 4 August 2011 (UTC)


 * That's not the point. You have not cited anybody, and you are changing an established definition, not just here, but by Knuth and by Kolmogorov. It's hard to think of what reliable source you could cite that might trump them. Maybe Smale might be arguable? You need to cite sources. Kiefer .Wolfowitz 13:01, 4 August 2011 (UTC)

I confess to being a little bit confused by all this. The page (either version) doesn't actually give an explicit definition of "algorithm" (and the algorithm page leaves room for ambiguity), so I'm not sure which established definition is being disputed here. It strikes me that the distinction being made in the current version between "algorithms" and "iterative methods" is really a distinction between methods that give an exact answer in finite time (e.g. the simplex algorithm), and methods that give successively better approximations at each iteration (e.g. Newton's method). The distinction is real, but I agree that it could be described in a better way. Jowa fan (talk) 13:50, 4 August 2011 (UTC)


 * Regarding "algorithm", a definition from CS101 should suffice. Knuth has a formal definition.
 * Kantorovich, in "Functional Analysis and Applied Mathematics", discusses a hierarchy of mathematical procedures: limiting theorems, approximation results that hold asymptotically, finite-step bounds (approximation algorithms), etc. However, functional analysis is more relevant to numerical analysis than to combinatorial optimization. Please let us focus on reliable sources rather than OR. Kiefer .Wolfowitz 14:07, 4 August 2011 (UTC)


 * Kiefer, would you mind confirming that Knuth divides the optimization techniques into three distinct groups called Algorithms, Iterative methods, and Heuristics?


 * Jowa fan, I like your distinction. In the context of this article a distinction between "exact" and "approximate" is useful.


 * To whoever it may concern: I am not interested at all in arguing about the definition of the term "algorithm". A distinction between "algorithm" and "non-algorithm" is irrelevant in this context.
 * Paolo.dL (talk) 15:03, 4 August 2011 (UTC)


 * I'm afraid that I just don't see Kiefer's point. Paolo correctly says
 * "In the current version, optimization techniques are classified as:
 * Algorithms
 * Iterative methods
 * Heuristics.
 * "That's sloppy terminology. Both the "iterative" and the Euristic methods are algorithms (indeed the main article for "Heuristics" is called Heuristic algorithms), and obviously all of them have a "finite number of steps" (even if some of these steps are iterated)."
 * How can one possibly justify retaining a set of section subheadings that says that some (!) optimization techniques are algorithms and some are iterative methods and some are heuristics? The article iterative method says: "A specific implementation of an iterative method, including the termination criteria, is an algorithm of the iterative method. ....heuristic-based iterative methods are also common." The article algorithm says, with sources, "In mathematics and computer science, an algorithm (/ˈælɡərɪðəm/) is an effective method expressed as a finite list[1] of well-defined instructions[2] for calculating a function.[3]"
 * Paolo's correct set of section headings is:
 * Algorithms for special cases
 * Iterative methods for local optimization
 * Non-differentiable optimization
 * Heuristic algorithms for global optimization


 * Makes sense to me. Duoduoduo (talk) 16:29, 4 August 2011 (UTC)


 * Please stop spouting off POV/OR opinion without reliable sources.
 * KNUTH defines Algorithm in Chapter 1.1. Citing Markov, not Kolmogorov (as I had written above), Knuth defines a "computational (iterative) method" on page 7. Heuristic does not appear in the index of Volume 1. Papadimitriou/Steiglitz define approximation algorithms, probabilistic algorithms, exponential algorithms, and local search on page 401; heuristics are defined as "any of the approaches above without a formal guarantee of performance can be considered a 'heuristic'" (also on page 401).
 * As I said.
 * So either all of you are wrong or Knuth and Papadimitriou and Steiglitz are wrong. Kiefer .Wolfowitz 17:52, 4 August 2011 (UTC)
 * If anybody wants to try to classify algorithms, you should adopt Minoux (or Magnanti's survey article or Fisher ...), which are serious optimization references. Kiefer .Wolfowitz 17:54, 4 August 2011 (UTC)


 * Kiefer, we're all here for the same purpose -- to make Wikipedia as good as possible. There's no need to say things like "please stop spouting off".
 * (1) You say "Papadimitriou/Steiglitz define approximation algorithms, probabilistic algorithms, exponential algorithms, and local search on page 401; heuristics are defined as "any of the approaches above without a formal guarantee of performance can be considered a 'heuristic'" (also on page 401)." That is, a heuristic is an algorithm, just like we've been saying, and contrary to the categorization that you restored, which lists heuristics as an alternative to algorithms. Do you read that passage as saying that heuristics are not algorithms?
 * (2) Instead of saying that a source defines algorithm or iterative method, please quote their definition. That way maybe we can see what your objection is.
 * (3) Please answer Paolo's question: "Kiefer, would you mind to confirm that Knuth divides the optimization techniques in three distinct groups called Algorithms, Iterative methods, and Heuristics?" Please give a quote from the source saying that.
 * (4) Does Knuth say that iterative methods are not algorithms (as your restored version implies by its organization)? Please quote.
 * (5) Does Knuth say that heuristic methods are not algorithms (as your restored version implies by its organization)? Please quote. Duoduoduo (talk) 18:40, 4 August 2011 (UTC)


 * I could have complained more sweetly, granted. But look at what you guys wrote. Where are the references? And you guys sounded very confident of yourselves. How about a dash of mea culpas to garnish the sua culpa main course?
 * (1) In common usage, an algorithm does something for a class of problems. In contrast, a heuristic lacks a performance guarantee: Thus a computation-scheme does not ascend from the  inferno of heuristics to the purgatory of iterative methods or approximation algorithms until there is some theorem describing its partial or limiting success on a problem class.  With the power of prayer, and grace, a scheme may then ascend to the paradise of algorithms.
 * (2) You guys can go to a library. Do you see how much volunteer work I've been doing? Please don't take that tone with me. If I were quoting from a rare book, then yes, I would feel obligated to give you quotes. But Knuth and Papadimitriou/Steiglitz?
 * (3) I answered Paolo. Knuth has the first two; as I wrote, Volume I's index does not describe heuristics. Heuristics were studied in monographs after he wrote his first volume in the 1960s. (Although Herb Simon probably did something great in the 1950s, as usual!)
 * (4)--(5) Again, go to a library and look it up yourselves, while you wait for your order of these two books to arrive.
 * My role was to defend the article, not to perfect it. I am dealing with bigger clean-up problems now. (Also I am exhausted, as you can tell from my typos and spelling.)  Kiefer .Wolfowitz 19:41, 4 August 2011 (UTC)

Exact or approximate
Algorithm explains that algorithms may be recursive or iterative, ... deterministic or non-deterministic (including Heuristic algorithms) and even exact or approximate ("While many algorithms reach an exact solution, approximation algorithms seek an approximation that is close to the true solution. Approximation may use either a deterministic or a random strategy"). "Performance guarantee" is not listed as a criterion for classification. Moreover, according to Algorithm characterizations, both Knuth (1968, 1973) and Stone (1972) define an algorithm as a program that "must terminate in a finite number of steps... a reasonable number" (see also note 11 in Algorithm). This definition is repeated in this article, at the beginning of the section we are discussing about, and even Kiefer supports it (see his comments on my talk page). Since no optimization technique is supposed to run forever (i.e. all of them are supposed to terminate according to precise rules or "stopping criteria", including a threshold for number of iterations), I am convinced that in this context the terms "algorithm", "technique" and "method" are equivalent. In other words, the only optimization techniques that are not algorithms are those with bugs.

Based on these considerations, and according to what Jowa fan suggested, I propose using this classification:
 * 1) Exact algorithms
 * 2) Approximate algorithms:
 *  * Iterative algorithms for local optimization
 *  * Non-differentiable optimization
 *  * Heuristic algorithms for global optimization
Paolo.dL (talk) 22:53, 4 August 2011 (UTC)


 * Looks good to me. Duoduoduo (talk) 01:22, 5 August 2011 (UTC)


 * References? Has this page become an interesting blog now? Kiefer .Wolfowitz 22:16, 5 August 2011 (UTC)


 * "Heuristic" and "approximation algorithm" have a fairly well-defined meaning in computer science (A heuristic is a computational method which may or may not terminate and does not have any performance guarantee, an approximation algorithm is an algorithm which terminates, does not find the exact solution, but does have a performance guarantee. See e.g. Kleinberg & Tardos, Algorithms Design, p. 599). The classification should be based on reliable sources, not on any editor's (flawed) re-interpretation of those terms.
 * If you are somewhat familiar with the field you'll know there's a big division between computer scientists doing algorithms (both exact and approximate) and computer scientists doing heuristics. (I'm not very familiar with iterative methods, so I guess this is done by another group of people, mainly mathematicians.) In that respect the current division into algorithms, iterative methods and heuristics seems to reflect reality better than Paolo's proposed hierarchy. (Although you should not have to take my word for this; there should be references. That might be a bit more difficult though. I have books which do algorithms (both exact and approximate), I have books which do heuristics and my numerical analysis lecture notes might have some iterative methods, but I don't think I have any which discuss at least two of those.) —Ruud 00:18, 6 August 2011 (UTC)
 * I think Paolo's point is that an iterative method is a type of algorithm, by any reasonable definition of the word "algorithm". So the current headings looks a little illogical, since they imply that "algorithms" and "methods" are distinct things.  Since there are no sources for the current version of this section of the page, it seems reasonable to relabel some things.  (Saying "you can't make these edits because you don't have a source" lacks conviction if the original version was unsourced.)  I don't see anyone trying to redefine the terms; it's just a matter of choosing sensible section headings and subheadings.
 * Regarding sources, I think there's a misunderstanding here. We're not looking for sources that say "an algorithm is ..." or "a heuristic is ..."  Rather, what's required is a source that says "optimization techniques fall into three categories, called ...".  If something like that can be found, then we can settle this easily.  If not, then it's a matter of applying common sense. Jowa fan (talk) 08:06, 6 August 2011 (UTC)
 * Jowa fan, please rephrase "lacks conviction" in clear English.
 * I didn't source definitions which are known by anybody who does research in computational science. When asked about "algorithm", "iterative method", and "heuristic", I provided page references to some of the best references. (How can people pass a first course in programming without knowing the definition of an algorithm?)
 * None of the "revisionists" have provided any references (although Paolo recently cited Kuh/Knuth? and Stone, which may be a less reliable resource than Knuth or P/S), not even to the weakest works in heuristics in business administration, for example, as yet. (Paolo's other recent edits seem similarly problematic, but I don't have time to fix every problem.) Kiefer .Wolfowitz 08:27, 6 August 2011 (UTC)
 * "Lacks conviction" means "is not convincing". I'm not sure how to make it any clearer.  Note that I'm not referring to sources for definitions of words such as "algorithm", but to the lack of sources regarding a classification of optimization techniques.  (See Ruud's point 1 below.) Jowa fan (talk) 13:43, 6 August 2011 (UTC)
 * Yes, it should be sourced, but this might be difficult due to the way the communities and literature are divided. This is, however, implicit evidence that the current characterisation is correct and not Paolo's.
 * Most computer scientists wouldn't consider Newton's method an algorithm. This is the distinction Knuth tries to make with "A procedure that has all of the characteristics of an algorithm except that it possibly lacks finiteness may be called a computational method." In particular, you can't do a Big-O analysis of Newton's method; you have to use techniques from numerical analysis to say something about its effectiveness. Similarly for heuristics. On algorithms, exact and approximate, you can.
 * Paolo's hierarchy might look attractive to the layman, distinguishing between methods giving exact and inexact solutions, but it does not account for how these methods work in practice. Both exact algorithms and approximation algorithms draw from the same set of tools and techniques for their design and correctness proofs. They are very closely related, and an exact algorithm is little more than the limiting case of an approximation algorithm with approximation ratio 1. Heuristics and iterative methods draw on an entirely different set of mechanisms for their design and to argue their effectiveness. —Ruud 12:53, 6 August 2011 (UTC)
 * Thanks, this is constructive. Certainly Newton's method as taught in a mathematics class may not be an algorithm (if you agree that an algorithm terminates within finitely many steps), but most practical implementations of Newton's method will be in the form of an algorithm (terminating when some condition is met).  So the boundaries are a little fuzzy as far as the terminology goes.  In terms of the classification, the article as it currently stands proposes that there's a fundamental distinction between things like the simplex algorithm or various combinatorial algorithms and things like Newton's method.  (I don't think the simplex algorithm can really be seen as a limiting case of something else.)  Paolo's proposed version, as far as I can tell, makes exactly the same distinction. Jowa fan (talk) 13:43, 6 August 2011 (UTC)
 * No, Newton's method doesn't have input in the sense that all exact and approximation algorithms have, and therefore you cannot express its runtime in terms of the size of the input. You have to take a different approach and say something about the rate of convergence. Then you're doing numerical analysis, not algorithm analysis. Paolo's hierarchy contains more elementary mistakes. Approximation algorithms have by definition a performance guarantee (see the book I cited above). So approximation algorithms can be heuristics, but a heuristic is generally not an approximation algorithm. Paolo seems to be using this term in a very different sense than a computer scientist would: a method which might not give an exact solution. The inexactness of iterative methods and heuristics is also of a different nature. Iterative methods converge to an exact solution, and you get an approximation by terminating at some point, while heuristics don't even have a guarantee of convergence. —Ruud 15:28, 6 August 2011 (UTC)
 * Thanks, this gives a basis for distinguishing between "approximation algorithms" and "iterative methods", something that wasn't clear to me before. On the other hand, the current version of the page seems to include approximation under the heading of iterative methods; perhaps this needs to be changed? Jowa fan (talk) 03:03, 7 August 2011 (UTC)
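To make the distinction discussed above concrete for other readers: here is a minimal sketch (my own illustration, not code from the article or any cited source) of how a practical implementation of Newton's method becomes a finite procedure once a tolerance and an iteration cap are imposed, as Jowa fan notes.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's method with a termination condition: the iteration
    stops when the residual is small or after max_iter steps, so in
    practice it always terminates, unlike the textbook infinite iteration."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:  # terminate: residual small enough
            return x
        x = x - fx / fprime(x)
    return x  # gave up after max_iter steps; convergence not guaranteed

# Example: find sqrt(2) as the root of f(x) = x^2 - 2
root = newton(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

The quality of the result is then a question of rate of convergence (numerical analysis), not of input-size runtime (algorithm analysis), which is exactly Ruud's point.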

Search algorithms are sloppily classified (continued)
Thank you for the interesting discussion about the strict definition of the term "algorithm". This definition, however, is not explained in this article, and a reader (even an experienced one) is not supposed to know it. What a reader would be glad to know is that some techniques, as Jowa fan noted, give an exact answer and some others do not. Also, some can only find local extrema (and hence are not always appropriate for operating on non-convex or non-concave goal functions), while others can find global extrema. Conversely, the current version explains that the techniques classified as "algorithms" terminate in a finite number of steps. It follows that either
 * 1) all the others are not supposed to terminate (because the antonym of "finite" is "infinite") or
 * 2) the definition is misleading.
Isn't that a perfect example of a sloppy classification? I already explained this, and I believe that my explanation was simply ignored. Paolo.dL (talk) 19:59, 10 August 2011 (UTC)

Classification
This article is not the typology for Computing Reviews. Rather, it is supposed to inform the public. Therefore, the article should emphasize the most common problems, algorithms, and methods that are used in basic courses and scientific/computational practice. We should agree to follow a leading textbook's organization. Such an organization must be a compromise, of course.

Such an effort should be led only by a person who has moved a mathematics/computer science article to "good article" status on English Wikipedia (or by researchers of international standing, like several Wikipedians). Kiefer .Wolfowitz 14:16, 6 August 2011 (UTC)


 * I agree with the first paragraph (thanks to people patiently answering my questions above) but not the second. Wikipedia already has enough rules; we don't need to make up extra ones that will discourage new contributors.  I would encourage people to be bold.  (Replace "only" by "ideally" and you have my full agreement.) Jowa fan (talk) 03:03, 7 August 2011 (UTC)


 * I believe that in Wikipedia nobody should consider himself a moderator, or a leader. Everybody should treat everybody else respectfully as a peer. A moderator may be freely chosen by those editors who wish to be moderated. Typically, consensus should be sought, with or without a moderator. Unfortunately, experienced editors are not always right and not always unbiased, and although their contribution is precious they are not supposed to lead discussions, if that means they are allowed to impose their opinion without seeking consensus. By the way, the same is true for everybody else, including newcomers. Paolo.dL (talk) 19:59, 10 August 2011 (UTC)


 * The article needs an energetic professional to lead its expansion and clarification. Of course, everybody is welcome to contribute, as you rightly note.
 * We could do worse than just follow the outline suggested in the template on optimization algorithms: It lists important algorithms/methods/heuristics that have decent articles. That template needs expansion in combinatorial optimization. (Of course, mathematical objects and theory need discussion also.) Kiefer .Wolfowitz 23:39, 10 August 2011 (UTC)

Additional opinions requested on scope of convex optimization
There is an ongoing debate about the scope of convex optimization (or convex programming) on Talk:Convex optimization. Additional views are welcome. Isheden (talk) 21:19, 3 February 2012 (UTC)

Conjugate gradient
Conjugate gradient is an iterative method for solving a system of linear equations whose system matrix is symmetric and positive-definite, so I don't think it's a good idea to list it among the iterative methods for optimization, although many optimization problems can be converted into solving linear equations. — Preceding unsigned comment added by Chaohuang (talk • contribs) 02:10, 14 June 2012 (UTC)
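For readers unfamiliar with the method under discussion, here is a minimal pure-Python sketch (my own illustration; the function name and tolerance are invented) of conjugate gradient for a symmetric positive-definite system Ax = b. Its connection to optimization, which may explain the listing Chaohuang questions, is that the same iterates minimize the quadratic f(x) = ½xᵀAx − bᵀx.

```python
def conjugate_gradient(A, b, tol=1e-12, max_iter=None):
    """Solve A x = b for symmetric positive-definite A (list of rows)."""
    n = len(b)
    if max_iter is None:
        max_iter = n  # in exact arithmetic CG terminates in at most n steps
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    matvec = lambda M, v: [dot(row, v) for row in M]
    x = [0.0] * n
    r = [bi - Axi for bi, Axi in zip(b, matvec(A, x))]  # residual b - A x
    p = r[:]                                            # search direction
    rs_old = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs_old / dot(p, Ap)        # step length along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:                   # squared residual norm small enough
            break
        # next direction is A-conjugate to the previous ones
        p = [ri + (rs_new / rs_old) * pi for ri, pi in zip(r, p)]
        rs_old = rs_new
    return x

# A is symmetric positive-definite; the solution of this 2x2 system is (1/11, 7/11)
x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0])
```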

The Nonlinear programming article
There is a Wikipedia article called Nonlinear programming. It is much shorter than this article. I think that it is a subtopic of this article's subject, but a large one, probably covering the majority of cases in science and engineering. It seems to be more simplified and didactically oriented than this one, so there may be advantages to its being separate, but it is an odd arrangement for an article to contain more information on a subtopic than the article on that subtopic does. (I myself am new to these articles but not to this subject.) David R. Ingham (talk) 00:53, 21 February 2013 (UTC)

Combinatorial optimization
Is it true that combinatorial optimization is considered a subfield of mathematical optimization? In optimization (disambiguation), mathematical optimization is defined as "the theory and computation of extrema or stationary points of functions". This would be in line with the definition in note 1: "Mathematical programming is the study or use of the mathematical program" and "A mathematical program is an optimization problem of the form: Maximize f(x): x in X, g(x) <= 0, h(x) = 0, where..." Everything else on this page seems to be concerned with optimization of continuous functions, so why is combinatorial optimization suddenly mentioned as a subfield? Isheden (talk) 11:56, 25 April 2012 (UTC)


 * I agree. This article, reachable also from the REDIRECT "Numerical optimization", is about optimization of a function f: A → R when the set A bears at least a metric and f is at least continuous. In contrast, combinatorial optimization deals with functions on finite sets A, typically without a metric, and often using symbolic (as opposed to numeric) computation methods. As far as I know, the intersection between the two areas is very small. Therefore I think the two topics shouldn't be conflated into a single article. Instead, appropriate "see also" links should suffice. - Jochen Burghardt (talk) 21:16, 29 November 2013 (UTC)

Assessment comment
Substituted at 18:30, 17 July 2016 (UTC)

Numerical Optimization in Statistics might be a big mistake
Statistics is not quite like mathematics. The mathematical theory of extreme values cannot simply be applied in statistics. This is because an optimizer constructed from sample data is a random variable, and the extreme value of the optimizer (minimum or maximum) cannot be more significant than its other values. We should take the expectation of the optimizer to make statistical decisions, e.g. in model selection.

I think that an optimizer has the properties of a random variable. If the minimum or maximum of a random variable is not more significant than other sampling points, why should the minimum or maximum of an optimizer do more than other values in a statistical analysis? People might say that an optimizer will converge to its minimum or maximum. This is wrong, since any random variable will converge not to its extreme value but only to its expectation. Yuanfangdelang (talk) 21:02, 30 August 2016 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified 3 external links on Mathematical optimization. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20140305080324/http://glossary.computing.society.informs.org/index.php?page=nature.html to http://glossary.computing.society.informs.org/index.php?page=nature.html
 * Added archive https://web.archive.org/web/20141218090504/http://www.seophonist-wahl.de/ to http://www.seophonist-wahl.de/
 * Added archive https://web.archive.org/web/20090204025624/http://see.stanford.edu/SEE/courseinfo.aspx?coll=2db7ced4-39d1-4fdb-90e8-364129597c87 to http://see.stanford.edu/see/courseinfo.aspx?coll=2db7ced4-39d1-4fdb-90e8-364129597c87

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 14:47, 5 June 2017 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Mathematical optimization. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20120316104607/http://reactive-search.org/thebook to http://reactive-search.org/thebook

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 01:33, 27 July 2017 (UTC)

Other names of "Mathematical Optimi(s/z)ation"
The first paragraph seems rather badly written in how it presents the alternative names of this discipline.

104.228.101.152 (talk) 15:19, 30 August 2017 (UTC)

External links modified (January 2018)
Hello fellow Wikipedians,

I have just modified one external link on Mathematical optimization. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20020822203831/http://www-fp.mcs.anl.gov/otc/GUIDE/index.html to http://www-fp.mcs.anl.gov/otc/Guide/index.html

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 11:24, 21 January 2018 (UTC)

Book of Boyd and Vandenberghe
As far as I can see, the book of Boyd and Vandenberghe cannot be downloaded from the website, in contradiction to the recent edit. Did I perhaps miss the correct link? -- Jitse Niesen 12:22, 8 Nov 2004 (UTC)


 * No response, so I removed the link. -- Jitse Niesen 23:14, 23 Nov 2004 (UTC)

The book can be downloaded from http://www.stanford.edu/~boyd/bv_cvxbook.pdf. Right?


 * Correct, so I added the book again to the article. -- Jitse Niesen 13:38, 26 Nov 2004 (UTC)

Terminology: Computational optimization techniques
The "Computational optimization techniques" section presents three categories of techniques: finite-termination "optimization algorithms", convergent "iterative methods", and heuristics. The iterative category is further broken down into "Hessian", "gradient" and "function values" subcategories.

Having this kind of nomenclature/categorization clearly defined is great and very important! There are a few points that I think could be improved to make this section really useful:

- There are no references given where a reader can see the categories explained in more detail, or to support this categorization scheme. Is this categorization standard in the field? Can references be added to support this?

- It is confusing how "Global Convergence" fits into the categories. Are all algorithmic and iterative methods local, and are all heuristics global?

- Isn't the content of the Metaheuristic page more relevant to this topic than the Heuristic algorithm page? What is the difference in terminology here?

Maybe someone with a broad scope of knowledge in the field can weigh in on this and clarify the terminology/categorization of the techniques? ReadEuler (talk) 23:45, 21 March 2021 (UTC)

math
module 1 120.29.69.185 (talk) 07:22, 22 September 2021 (UTC)

Physical theories
This article doesn't mention that most (if not all) physical theories are expressed as optimization problems. My suspicion is that optimization problems are the reverse of dynamic problems (Dirichlet problems). 41.190.245.207 (talk) 23:11, 18 December 2021 (UTC)

Maths literacy
Plan 102.132.225.41 (talk) 15:13, 30 May 2022 (UTC)

equal
There are a fair number of optimization problems where the optimum occurs when two quantities are equal. For example, maximum power transfer occurs when the load impedance equals the source impedance. The actual reason I am writing this is that there is no article on Kelvin's law for conductor size, where the optimum is when the annual cost of power losses equals the annual cost of the power line. But there are so many problems that have a similar form, or at least close enough, that there should be a page for that. Gah4 (talk) 07:40, 2 June 2022 (UTC)
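The maximum-power-transfer example above can be checked numerically. This is a hypothetical sketch (the source voltage, resistances, and function name are all made up for illustration): the power delivered to a resistive load peaks when the load resistance equals the source resistance.

```python
def load_power(V, R_source, R_load):
    """Power dissipated in R_load, driven by an ideal source V
    with internal (source) resistance R_source in series."""
    current = V / (R_source + R_load)
    return current ** 2 * R_load

R_source = 50.0
# Scan candidate load resistances; the maximum should occur at R_load == R_source
candidates = [10.0, 25.0, 50.0, 75.0, 100.0]
best = max(candidates, key=lambda R: load_power(12.0, R_source, R))
```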

One thing I hoped to find in the article (but did not):
One thing I hoped to find in the article is the answer to this question:


 * Who first recognized that by solving an equation of the form f′(x) = 0 for x, one finds the critical points of the function f, which can then be used to determine the maximum and minimum values taken by f(x) for x in an interval [a, b] of the real line?

Was it clearly either Newton or Leibniz before the other one knew this?

Or did they both claim priority on that point in their reported quarrel about who did what first?

Or, perhaps history just doesn't know the answer.

But there must be an earliest known publication of that method of finding maxima and minima!

(This optimization technique, which every first-semester calculus student learns, has undoubtedly saved the world many billions of dollars, and I suspect I am underestimating.)

If anyone knows the answer, it would be good to include that information in the article. 2601:200:C000:1A0:D0B:17BA:E5BF:AD9F (talk) 23:40, 20 June 2022 (UTC)
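Whatever its history, the technique described above (sometimes called the closed-interval method) is easy to sketch. A minimal illustration, with an invented function name and example of my own: evaluate f at the interior critical points, where f′(x) = 0, and at the endpoints, then compare.

```python
def extrema_on_interval(f, critical_points, a, b):
    """Closed-interval method: the min and max of a continuous,
    differentiable f on [a, b] are attained either at an interior
    critical point (a solution of f'(x) = 0) or at an endpoint."""
    candidates = [a, b] + [x for x in critical_points if a < x < b]
    values = [f(x) for x in candidates]
    return min(values), max(values)

# Example: f(x) = x*(10 - x) on [0, 10]; f'(x) = 10 - 2x = 0 gives x = 5
lo, hi = extrema_on_interval(lambda x: x * (10 - x), [5.0], 0.0, 10.0)
```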

Mathematical programming as a type of declarative programming
The page on Programming paradigm says that "mathematical" programming is a type of "declarative programming" "in which the desired result is declared as the solution of an optimization problem", and the link on the word "mathematical" links here because it is a link to "Mathematical programming", which redirects to here. As far as I can tell, this page doesn't give any indications or examples as to how a fully fledged programming language could be made up of solving optimization problems, though. (Here, by "fully fledged" I'm thinking "Turing complete/equivalent", but I have to admit that it doesn't that on the "programming paradigm" page, and the word "programming language" can refer to languages that are not Turing complete, even if it usually doesn't.) The closest thing I could find was the "Machine learning" section under "Applications", which consists of a link to Machine learning. DubleH (talk) 18:22, 15 July 2022 (UTC)