Talk:Dynamic programming/Archive 1

Memoization vs. Dynamic Programming
These are two different approaches, not a tool and a class of problems. This page is perpetuating serious misconceptions. --67.188.230.235 (talk) 19:08, 24 November 2015 (UTC)


 * I'm not really sure what you mean. In the lede, at least, DP is presented as "a method for solving a complex problem by ...", while memoization is presented as a "technique". (I do agree that memoization, which has its own article, gets a lot of coverage in this one.) Q VVERTYVS (hm?) 10:51, 14 February 2016 (UTC)

Memoization can't do the egg dropping puzzle??
This page makes a very bold statement: that there are dynamic programming problems which memoization cannot be used to solve. I suppose there might exist examples of this (though I can't think of any), but the page goes on to claim that the egg dropping puzzle is one of them. The recurrence is:


 * W(n, k) = 1 + min{ max(W(n − 1, x − 1), W(n, k − x)) : x = 1, 2, ..., k }

with base cases at k=1 and n=1. This is straightforwardly (if inefficiently) implemented using a recursive function:

function W(n, k):
    if n = 1 return k
    else if k = 1 return 1
    else
        mn = inf
        for x from 1 to k:
            mn = min(mn, max(W(n − 1, x − 1), W(n, k − x)))
        return mn + 1
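For what it's worth, the recurrence memoizes directly. A minimal Python sketch (the function name W and the base cases W(1, k) = k and W(n, 1) = 1 follow the discussion above; everything else is my own illustration, not text from the article):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def W(n, k):
    """Minimum worst-case number of trials with n eggs and k floors."""
    if n == 1:
        return k  # one egg: must test floors 1, 2, ..., k in order
    if k <= 1:
        return k  # 0 floors need 0 trials, 1 floor needs 1 trial
    # Drop from floor x: either the egg breaks (n-1 eggs, x-1 floors below)
    # or it survives (n eggs, k-x floors above); the adversary picks the worse.
    return 1 + min(max(W(n - 1, x - 1), W(n, k - x)) for x in range(1, k + 1))
```

With memoization this takes O(n·k²) time, since there are n·k states and each state tries up to k split points.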

Thus it's also trivially memoized. What's the issue? What am I missing? — Preceding unsigned comment added by 129.174.124.66 (talk) 16:43, 12 January 2015 (UTC)


 * The issue is that you did not read the paragraph carefully. It was talking about optimal solutions, specifically algorithms with optimal time complexity (space complexity is usually a secondary concern). Your memoized recursive algorithm is indeed the naive one referred to by the paragraph, which works but does not have optimal time complexity. The section on the egg dropping problem itself gives your solution (though in words rather than pseudo-code) and notes that it takes O(n*k^2) time with a DP solution (which is exactly what yours would take if you use memoization). It even goes on to state how it can be directly improved a little, before giving a faster (but I don't know whether it is optimal) algorithm that uses a totally different parametrization. Lim Wei Quan (talk) 12:50, 9 July 2015 (UTC)

This is a misunderstanding perpetuated, unfortunately, by the way CLRS talks about DP. DP is an algorithm design paradigm, not a cure-all. There is no guarantee that the time required by a DP solution will be polynomial--indeed, the DP solution to the TSP of Held and Karp (SIAM J., vol. 10 (1962), pp. 196-210) is still the best general TSP algorithm; it is, of course, exponential. I wish CLRS were clearer on this point: it gives the impression that the DP solution, if done right, leads to algorithms in P. I do not understand Lim Wei Quan's reference to "optimal solutions"; how does one know, for example, that the O(n^3) DP solution to matrix chain product is optimal?! I have never seen a proof of that (but would like to). — Preceding unsigned comment added by 73.44.176.246 (talk) 16:02, 18 February 2016 (UTC)


 * The point I hope was clearly conveyed by the article is that any recursive algorithm is trivially memoized. (In fact some programming languages do that automatically!) As for my statement about optimal solutions, notice that nowhere in the article or on the comments page did I say that any of the presented algorithms are optimal. In my above comment I even expressly state that I do not know whether my solution for the egg-drop puzzle is optimal. But we do know that the naive DP algorithms are often not optimal, and that is the point. In some rare cases we can of course prove optimality, such as if we are required to print out all Fibonacci numbers, in which case it is clear that the memoized recursive algorithm takes space and time linear in the output, which is best possible. In many other cases no optimality results are known. Another simple example is finding an element in an n-item sorted list, which requires Ω(log(n)) key-comparisons, and so binary search is optimal, which is not a memoized recursive algorithm in any reasonably natural way. That is the point of the sentence "Others can be more complicated and cannot be implemented as a recursive function with memoization." Arguably this sentence is technically false since every algorithm can be written as a recursive function that does not really do any recursion, but it is an attempt to stop students' common misconceptions before they begin, one of which is that DP can solve a lot of problems. No, it does not, just as we cannot quite say that recursion solves a lot of problems. They are both just generic techniques and usually need to be used in conjunction with some crucial insights specific to the problem to be solved. I hope that it is all clear now! =) Lim Wei Quan (talk) 10:58, 9 March 2016 (UTC)
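To illustrate the point that any recursive function is trivially memoized: in Python, for instance, a single standard-library decorator turns the exponential-time naive Fibonacci recursion into one with a linear number of distinct calls. This is a sketch of the general idea, not code from the article:

```python
from functools import lru_cache

def fib_plain(n):
    """Naive recursion: exponential time, recomputes subproblems."""
    return n if n < 2 else fib_plain(n - 1) + fib_plain(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Identical recursion, memoized by the decorator: each n computed once."""
    return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)
```

The recursion itself is untouched; only the caching wrapper changes, which is exactly why memoization alone cannot improve on the naive recurrence's state space.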

Discrete vs. Continuous, Deterministic vs. Stochastic
If I remember well from the courses I took a long time ago, dynamic programming comes in different flavors: discrete vs. continuous and stochastic vs. deterministic. Usually, in computer science we just reduce dynamic programming to the deterministic, discrete case.

An example of continuous dynamic programming that I would like to see in Wikipedia is the solution of the problem of landing on the Moon, where one has to minimize the amount of rocket fuel used for the lunar landing of a spaceship. Tomo 12:38, 16 Apr 2005 (UTC)


 * It's been some time since the proposal of adding emphasis or an example for continuous problems treated by dynamic programming, but I support this. Hardmath (talk) 14:34, 3 April 2016 (UTC)

Assessment comment
Substituted at 13:57, 29 April 2016 (UTC)

Dr. Bambi's comment on this article
Dr. Bambi has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:

"As it stands, the article is more on Dynamic Programming Algorithm than on Dynamic Programming.

The introduction to the article is really biased in this sense and, therefore, misleading.

Broadly speaking, dynamic programming is an analytical method to solve multi-stage problems. This method can be used in 1) continuous time or discrete time models (e.g. Fleming and Rishel [1]); 2) deterministic or stochastic models (e.g. Fleming and Rishel [1], and Yong and Zhou [3]); 3) finite or infinite dimensional models (e.g. Li and Yong [6], and Fabbri and Gozzi [2]). It should be noted that applications in economics, biology, demography etc. exist for all the combinations of these groups of models.

The introduction and, really, the whole article focus on discrete time deterministic models. In addition, the emphasis is on how to implement such a method using a computer even though that is not necessarily important to understand the method itself.

Overall the article gives a very partial view of what dynamic programming really means.

In my opinion, the introduction is also quite obscure and too rich in detail for a reader who has never heard of this method before.

In the article, there is no reference to the theory on dynamic programming. For example a clear explanation of the Bellman Principle and the Bellman Equation is missing. In fact, some of these concepts pop up during the applications but are not previously discussed or clarified to the reader.

There is also a MISTAKE at the very beginning of the Introduction, i.e. "dynamic programming (also known as dynamic optimization)" is clearly incorrect. Dynamic optimization is an area of mathematics which includes several methods of optimization and dynamic programming is one of these.

Regarding this last point, it could be nice to see the advantages or drawbacks in using the dynamic programming approach instead of, for example, the maximum principle, which is another method to solve optimal control problems in continuous time models. A discussion about this issue can be found, for example, in the introduction of Bambi et al. [4].

Therefore the article could be drastically improved by: 1) rewriting the introduction completely, without the emphasis on the algorithm and the computational details; 2) explaining carefully that dynamic programming can be applied not only to solve discrete time, finite dimensional deterministic models but also to solve models where time can be continuous (reference), and which could be stochastic and infinite dimensional; 3) introducing and explaining the main theoretical concepts and results of the dynamic programming approach.

TO UPDATE: The example: Economic Optimization, should be updated. After the last paragraph, starting with “We see that it is optimal…” the following sentence should be added. “The same problem has been investigated by Bambi et al. [4] and [5] in continuous time and under the assumption that capital takes time to become productive. With these two features, a dynamic programming approach for solving an infinite dimensional, continuous time, deterministic model is used.”

REFERENCES:
1) Fleming, W. H. and Rishel, R. W. (1975). Deterministic and stochastic optimal control. Springer.
2) Fabbri, G. and Gozzi, F. (2016). Stochastic optimal control in infinite dimensions: dynamic programming and HJB equations. Springer (forthcoming).
3) Yong, J. and Zhou, X. Y. (1999). Stochastic Controls. Springer.
4) Bambi, M., Fabbri, G. and Gozzi, F. (2012). “Optimal policy and consumption smoothing effects in the time-to-build AK model”. Economic Theory, 50(3), 635-669.
5) Bambi, M., Di Girolami, C., Federico, S. and Gozzi, F. (2016). “On the consequences of generically distributed investments on flexible projects in an endogenous growth model”. Economic Theory (forthcoming).
6) Li, X. and Yong, J. (1995). Optimal control theory for infinite dimensional systems. Springer."

We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

Dr. Bambi has published scholarly research which seems to be relevant to this Wikipedia article:


 * Reference : Mauro Bambi & Cristina Di Girolami & Salvatore Federico & Fausto Gozzi, 2014. "On the Consequences of Generically Distributed Investments on Flexible Projects in an Endogenous Growth Model," Discussion Papers 14/15, Department of Economics, University of York.

ExpertIdeasBot (talk) 18:17, 27 June 2016 (UTC)
 * Broadly speaking, the assertion "dynamic programming is an analytical method to solve multi-stage problems" is false. Rather, this is only one of the senses in which dynamic programming is now used, and very far from the sense that it is widely used in computer science. —David Eppstein (talk) 00:53, 28 June 2016 (UTC)

Dr. Fabbri's comment on this article
Dr. Fabbri has reviewed this Wikipedia page, and provided us with the following comments to improve its quality:

"I would like first of all to clarify that I am giving some comments on this Wikipedia article only because I have been invited to do so, and I feel a little embarrassed criticising the work of other people who have generously spent their time for the community.

My overall impression is that the article is well done but it only concerns a part of the story (and, to tell the truth, the part on which I am less expert).

Dynamic programming (an expression that is not equivalent to "dynamic optimization", as stated in the second line) can also be applied to continuous time optimisation problems, both in a deterministic and in a stochastic framework, and both in finite and infinite dimensions.

The use of dynamic programming for continuous time optimization problems is strictly related to the study of the Hamilton-Jacobi-Bellman (HJB) equation associated with the problem. If the problem is deterministic (resp. stochastic), the HJB equation is of the first (resp. second) order. Under suitable hypotheses the value function of the optimization problem can be identified with the solution of the HJB equation. A particularly suitable notion of solution for HJB equations appears to be that of viscosity solution.

I mention some books on the subject:

Deterministic problems, finite dimension:

- M. Bardi and I. Capuzzo-Dolcetta, Optimal control and viscosity solutions of Hamilton-Jacobi-Bellman equations, Systems and Control: Foundations and Applications, Birkhäuser, Boston, 1997.

- W. H. Fleming and R. W. Rishel, Deterministic and stochastic optimal control, Applications of Mathematics, vol. 1, Springer, Berlin, 1975. (also stochastic)

Stochastic problems, finite dimension:

- J. Yong and X. Y. Zhou, Stochastic controls, Hamiltonian systems and HJB equations, Applications of Mathematics, vol. 43, Springer, New York, 1999.

Deterministic problems, infinite dimensions:

- X. J. Li and J. M. Yong, Optimal control theory for infinite-dimensional systems, Systems and Control: Foundations and Applications, Birkhäuser, Boston, 1995.

Stochastic problems, infinite dimensions:

Stochastic Optimal Control in Infinite Dimensions: Dynamic Programming and HJB Equations (to appear).

This said, again, I would not like to give marks to anybody. I love Wikipedia and I only need to thank the people who work for it.

Kind regards"

We hope Wikipedians on this talk page can take advantage of these comments and improve the quality of the article accordingly.

We believe Dr. Fabbri has expertise on the topic of this article, since he has published relevant scholarly research:


 * Reference : Bambi, Mauro & Fabbri, Giorgio & Gozzi, Fausto, 2009. "Optimal policy and consumption smoothing effects in the time-to-build AK model," MPRA Paper 17128, University Library of Munich, Germany.

ExpertIdeasBot (talk) 16:07, 11 July 2016 (UTC)

Factual accuracy tag
This article has been tagged for factual accuracy, yet there is no discussion on the talk page. I've introduced this section to allow for such conversation.--Work permit (talk) 19:06, 5 November 2016 (UTC)

Lead rewrite needed
Having read some of the above comments, and clicked on a few things, it strikes me that the material at Bellman equation is far more appropriate for the lead than what we presently have. The present lead almost seems to suggest that dynamic programming is a specialized form of divide and conquer, but I personally regard its core motivation as something quite different.

The present lead dives into memoization rather briskly, which I regard as primarily an accounting trick. Neither the word "discrete" nor the word "continuous" is presently found in the lead. Yuck.

If someone wishes to rewrite the lead from scratch, beginning with a short statement of what characterizes the problem class as motivated from the Bellman page, then providing a paragraph about discrete/continuous, deterministic/stochastic, finite/infinite domain distinctions, and only then diving into the accounting trick of memoization and other algorithmically motivated material, consider this my earnest vote to fill your boots. — MaxEnt 20:46, 12 November 2016 (UTC)
 * The material from Bellman equation is suitable only for the economic/engineering version of dynamic programming. It does not even approximate the way dynamic programming is taught in computer science. —David Eppstein (talk) 21:22, 12 November 2016 (UTC)


 * As an academic with a foot in Computer Science and another in Management Science/Operations Research, I tend to agree with MaxEnt about the fact that the original motivation of dynamic programming has little to do with divide and conquer, memoization, or the dynamic programming algorithms taught in computer science. Probably there should be different pages: dynamic programming (computer science) and dynamic programming (management science). The term programming in dynamic programming is clearly linked to the origins of other programming techniques such as linear programming and stochastic programming. As Dantzig explains in Reminiscences about the origins of linear programming, the term programming is related to mathematical formulations (linear or otherwise) of military programs, i.e. plans for allocating resources, scheduling operations, etc. In his seminal work (Bellman 1957), Bellman is also mainly concerned with multi-stage (deterministic or stochastic) decision problems and their applications in "economics, physics, biology, or otherwise" (page 7); for instance, the book begins at page 3 with a motivating example involving a multi-stage resource allocation process. Most importantly, Bellman explicitly states at page 7 that [...] we must realise that the essential purpose in formulating many of these mathematical models [...], is not so much to calculate numbers [...] but rather to determine the structure of the solution, and also at page ix, The problem is not to be considered solved in a mathematical sense until the structure of the optimal policy is understood. With this statement, Bellman clearly positions his research in the same club as the work carried out in the previous decade by Dantzig, Arrow and von Neumann. The structural properties Bellman is interested in are mostly analytical characterisations of the optimal policy for a given system (e.g. Scarf 1960, "The optimality of (S,s) policies in the dynamic inventory problem"), rather than efficient numerical procedures. In Chapter I, Section 4, pages 7-9, Bellman illustrates the essence of a backward recursion algorithm; he then goes on to state that "a digital computer may be programmed to print out the sequence of values" (i.e. the value function and the policy function). Bellman is aware of the computational advantages, and he mentions them, but he seems to be also clearly aware of the curse of dimensionality. He believes numerical approaches should be used only to gain insights. His chief concern is ultimately to determine analytically the structure of the optimal policy for a given class of problems, so that closed form solutions can be obtained, or dedicated algorithms - not necessarily dynamic programming ones - can be developed. --Dr.roberto.rossi (talk) 23:55, 27 March 2017 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Dynamic programming. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20100619011046/https://netfiles.uiuc.edu/meyn/www/spm_files/CTCN/CTCN.html to https://netfiles.uiuc.edu/meyn/www/spm_files/CTCN/CTCN.html

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 07:15, 15 September 2017 (UTC)

Opening Summary
The summary of this article needs to be overhauled. The first paragraph and a half reads as if it's describing the concept of caching, and it's not apparent that it's a different concept entirely. In particular, the importance of subdivision of the cached solutions needs to be brought front and center, if indeed that's as crucial to the definition as the rest of the article seems to imply. Ketura01 (talk) 02:35, 19 September 2017 (UTC)

Smith-Waterman Algorithm
I don't see any mention of the Smith-Waterman algorithm, which is a dynamic programming algorithm just like the Needleman-Wunsch. Would it be appropriate to add that one as well? --scskowron 13:57, 13 December 2006 (UTC)


 * I know this is an old post, but this problem has been resolved and there is now a mention of the Smith Waterman algorithm in the Sequence Alignment section. Neilc314 (talk) 03:58, 10 October 2017 (UTC)

Mild correction of pseudocode.
The article's pseudocode was inconsistent. At times, it used a Python-like indenting to represent the code structure. But it also included brackets at times. I have added brackets so that the pseudocode follows a C-like syntax: if the result of a loop, conditional, or whatnot is longer than one line, it is surrounded by brackets.

A more aggressive editor might prefer that no brackets be used. I don't know whether a bracket-less pseudocode would be clearer, but any consistent use of brackets would be clearer than what was there. (Would always using brackets be clearer -- for people who use a text-to-speech synthesizer, for example -- or would it just put in too much punctuation?) Chip Unicorn


 * In pseudocode I like to prefer indentation to braces where it's not unclear, to avoid eye-clutter, but I don't disagree with your changes - it's still sparse enough to be readable. Deco 01:10, 23 Jun 2005 (UTC)


 * I tried it without braces, and it looks visually clearer. If Wikipedia gets complaints that this is un-understandable by some group, we can put in some way to distinguish the levels.  Chip Unicorn 12:41, 23 Jun 2005 (UTC)


 * Just to clarify, are there any set Wikipedia standards on formatting pseudocode? Neilc314 (talk) 04:00, 10 October 2017 (UTC)

Checkerboard
The current "Checkerboard" section has many problems that make it confusing or outright incorrect. Here are just a few:


 * 1) What is the meaning of the dashes ("–") at positions ,  ,    ,   and   in the checkerboard?  The text of the section says "start at any square on the first rank (i.e., row)". So if an implementation starts at one of the squares that contains dashes, what cost should a dash be given? Or should those positions be ignored when computing the cost? The current explanation does not make that clear.
 * 2) The explanation goes on to say "before computing the cost of a path, we check the array   to see if the path cost is already there". But the pseudocode does not check to see if the path cost is already there. Instead, the pseudocode unconditionally computes the path cost and assigns it to the   array within the various   loops, none of which ever checks whether or not any costs were computed already.
 * 3) The text refers to "the previous node on the shortest path to  ". The word "node" implies a graph data structure. But the data structure being modeled in the example is explicitly introduced as an array of squares. In my opinion, it is confusing to refer to an array's squares as nodes if it is never explicitly mentioned that a graph data structure is what's being modeled.
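For comparison, a bottom-up version of the checkerboard computation can be sketched as follows. This assumes the usual movement rule (a checker on square (i, j) may have come from (i−1, j−1), (i−1, j) or (i−1, j+1)), computes path costs only (no predecessor tracking), and uses my own names throughout; it is an illustration, not the article's pseudocode:

```python
def min_checker_path_cost(cost):
    """cost[i][j] is the cost of entering square (i, j); row 0 is the first rank.
    Returns the minimum total cost of any path from the first rank to the last."""
    n = len(cost)
    q = [row[:] for row in cost]  # q[i][j] = min cost of a path ending at (i, j)
    for i in range(1, n):
        for j in range(n):
            # A square is reachable from the squares diagonally below-left,
            # directly below, and diagonally below-right (when on the board).
            q[i][j] = cost[i][j] + min(q[i - 1][jj]
                                       for jj in (j - 1, j, j + 1)
                                       if 0 <= jj < n)
    return min(q[n - 1])  # cheapest square on the last rank
```

Because every q[i][j] depends only on row i−1, each square's value is computed exactly once, which is the check-the-array behavior the prose describes.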

I implemented the pseudo code to the best of my understanding. However, it isn't clear whether or not the path my implementation computed is the expected result for the specific checkerboard given in the explanation.

Wouldn't there be some value in including within the explanation the expected solution to the problem that's posed in the explanation?

algoHolic (talk) 17:15, 4 February 2019 (UTC)

Break this article into two pieces, one for dynamic programming (Bellman equation), the other for dynamic programming (recursion and memoizing)?
The two topics are different enough that they deserve separate articles. A CS student learning dynamic programming in an introductory algorithms course will be unnecessarily confused when he or she reads the Bellman equation part. Golopotw (talk) 02:11, 7 January 2020 (UTC)

About the Fibonacci Sequence
It is possible to do a direct calculation of any Fibonacci sequence term using matrix calculation:

The vector (x_n, x_(n−1)) is equal to the matrix ((1, 1), (1, 0)) multiplied by the vector (x_(n−1), x_(n−2)).

Using that recursively means that you can compute any term using matrix exponentiation. By replacing the matrix ((1, 1), (1, 0)) with the product of 3 matrices (the eigenvector matrix, its inverse, and a diagonal matrix), the exponentiation only has to be done on the diagonal matrix, which is trivial. Vapula (talk) 23:54, 15 February 2020 (UTC)
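The matrix-power idea can also be carried out exactly in integer arithmetic by exponentiation by squaring, which avoids the floating-point error an explicit eigendecomposition would introduce (the eigenvalues are irrational). A sketch, with names of my choosing:

```python
def fib(n):
    """n-th Fibonacci number via the identity M^n = ((F(n+1), F(n)), (F(n), F(n-1)))
    for M = ((1, 1), (1, 0)), computed by repeated squaring: O(log n) multiplications."""
    def mat_mul(a, b):
        # Product of two 2x2 matrices stored as nested lists.
        return [[a[0][0] * b[0][0] + a[0][1] * b[1][0],
                 a[0][0] * b[0][1] + a[0][1] * b[1][1]],
                [a[1][0] * b[0][0] + a[1][1] * b[1][0],
                 a[1][0] * b[0][1] + a[1][1] * b[1][1]]]
    result = [[1, 0], [0, 1]]  # identity matrix
    m = [[1, 1], [1, 0]]
    while n:
        if n & 1:
            result = mat_mul(result, m)
        m = mat_mul(m, m)
        n >>= 1
    return result[0][1]
```

This also illustrates the point of the thread: the fast algorithm is not a memoized version of the naive recurrence but a different parametrization entirely.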

Memoization
It would help to clean up usage of this term, possibly with its own subsection. This term is not clearly defined until the section on the Fibonacci sequence, which is the 6th occurrence on this page. The first occurrence is in the Computer Programming section, where it is explained differently ("store the solutions to the sub-problems in a table"). The second occurrence on the page says, "Some languages make it possible portably," which I don't understand. The portability statement is backed by links to four Wikipedia pages (Perl, Scheme, Lisp, and D) that do not address portable memoization or even dynamic programming. Jason.Rafe.Miller (talk) 01:32, 11 August 2020 (UTC)

Example
I (FvdP 21:51 Oct 22, 2002 (UTC)) had begun an example but do not feel the motivation to finish it cleanly right now (and perhaps not in the immediate future either). Here it is if anyone wants to complete it:

Suppose that we have to compute the product of n matrices A1 · A2 · ... · An, and we want to minimize the total number of operations by executing the matrix products in the best possible order.

The product of two matrices of size i × j and j × k

........

This example is published in Cormen's Introduction to Algorithms textbook. Will we run into copyright issues by putting it here?
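For anyone who wants to complete the example in their own words: the standard recurrence defines cost(i, j) as the minimum number of scalar multiplications needed for the sub-chain Ai · ... · Aj, trying every split point k. A memoized Python sketch (names are mine; Ai is assumed to have dimensions dims[i−1] × dims[i]):

```python
from functools import lru_cache

def matrix_chain_cost(dims):
    """Minimum number of scalar multiplications to compute A1 * ... * An,
    where Ai has dimensions dims[i-1] x dims[i], so len(dims) == n + 1."""
    @lru_cache(maxsize=None)
    def cost(i, j):  # best cost for the sub-chain Ai ... Aj (1-indexed)
        if i == j:
            return 0  # a single matrix needs no multiplication
        # Split after Ak: multiply the two halves, then combine them at
        # cost dims[i-1] * dims[k] * dims[j].
        return min(cost(i, k) + cost(k + 1, j) + dims[i - 1] * dims[k] * dims[j]
                   for k in range(i, j))
    return cost(1, len(dims) - 1)
```

There are O(n²) subproblems and each tries O(n) splits, giving the O(n³) running time discussed earlier on this page.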


 * Absolutely not! Ideas cannot be copyrighted, only a particular way of expressing those ideas. CLRS itself based this section on various papers. In other words, we can write an explanation of the problem and its solution in our own words, but should be careful not to copy text or even the precise presentation of the problem. Derrick Coetzee 20:09, 5 Oct 2004 (UTC)

Checkerboard
I don't think the Checkerboard example has optimal substructure. Consider the following example: if I understood the problem correctly, the shortest path to row 4 goes to the right, and the shortest path to row 5 goes to the left. W3ricardo (talk) 19:55, 28 January 2022 (UTC)
 * The structure involves shortest paths to individual squares, not to whole rows. A bigger problem with this section is that it is totally unsourced and looks like original research. —David Eppstein (talk) 22:04, 28 January 2022 (UTC)