
Talk:Dynamic programming

From Wikipedia, the free encyclopedia



Wrong solution

I think the solution to the 0-1 matrix problem is wrong because it doesn't say anything about the constraint on the number of ones and zeroes in each column. I can fill the first column entirely with ones and the solution would still be valid according to the explanation on the page. You have to specify that the number of ones and the number of zeroes in every line has to be n/2 each. —Preceding unsigned comment added by 92.82.163.126 (talk) 20:16, 31 August 2008 (UTC)[reply]

Example

I (FvdP 21:51 Oct 22, 2002 (UTC)) had begun an example but do not feel the motivation to finish it cleanly right now (and perhaps not in the immediate future either). Here it is, if anyone wants to complete it:

Suppose that we have to compute the product of n matrices A1.A2. ... .An, and we want to minimize the total number of operations by executing the matrix products in the best possible order.

The product of two matrices of size i × j and j × k

........
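
The standard fact the draft is building toward: multiplying an i × j matrix by a j × k matrix takes i·j·k scalar multiplications. For anyone who wants to finish the example, here is a minimal bottom-up sketch of the usual solution in Python (the name dims and the sample chain are illustrative, not from the article):

    def matrix_chain_order(dims):
        # Minimum number of scalar multiplications needed to compute
        # A1.A2. ... .An, where matrix Ai has dimensions dims[i-1] x dims[i].
        # m[i][j] is the cheapest cost of the subchain Ai..Aj; each entry is
        # built from strictly smaller subchains (bottom-up dynamic programming).
        n = len(dims) - 1
        m = [[0] * (n + 1) for _ in range(n + 1)]
        for length in range(2, n + 1):          # subchain length
            for i in range(1, n - length + 2):
                j = i + length - 1
                m[i][j] = min(m[i][k] + m[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                              for k in range(i, j))
        return m[1][n]

    # A 10x30, 30x5, 5x60 chain: (A1.A2).A3 costs 1500 + 3000 = 4500,
    # while A1.(A2.A3) would cost 27000.
    print(matrix_chain_order([10, 30, 5, 60]))  # 4500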


This example is published in Cormen's Introduction to Algorithms textbook. Will we run into copyright issues by putting it here?

Absolutely not! Ideas cannot be copyrighted, only a particular way of expressing those ideas. CLRS itself based this section on various papers. In other words, we can write an explanation of the problem and its solution in our own words, but should be careful not to copy text or even the precise presentation of the problem. Derrick Coetzee 20:09, 5 Oct 2004 (UTC)

Confusion

The first paragraph of the second part and the article on algorithms state that dynamic programming is a bottom-up approach, but later this article says dynamic programming may use bottom-up or top-down approaches. --zeno 11:46, 9 Feb 2004 (UTC)

I've revised that section. It was very vague and somewhat incorrect. I hope there's less confusion now. Derrick Coetzee 20:18, 5 Oct 2004 (UTC)

Added example

I added an example at the Implementations page, and I need people to check it. It concerns a checkerboard (not matrices) and I think it's correct, but I can't be sure. Should I put it on the main page? Well, read it and give it an honest opinion.

Gkhan 23:45, Jul 26, 2004 (UTC)

I think the example should be moved to Wikibooks. It is a bit long-winded, and not very encyclopedic stylistically. Fredrik | talk 20:46, 22 May 2005 (UTC)[reply]

Weak analysis of Fibonacci running time

The analysis of the naïve Fibonacci computation is rather loose. It's not quite as bad as O(2^n); it's really Θ(phi^n) = Θ(fib(n)), and since phi < 2, the computation grows strictly slower than 2^n.

Granted, it is still exponential time. I don't know if it is worth the tangent off of the main topic of dynamic programming to make the statement tighter, however. Rrenaud 01:56, 3 Dec 2004 (UTC)

Thanks, I took care of this by just making it say "exponential time" and not be too specific. Deco 19:26, 17 Apr 2005 (UTC)
I don't want to be nitpicky (is that a word?) but there is nothing wrong with saying that it's an O(2^n) algorithm, since big-O is just an upper bound. It would be wrong to say Θ(2^n). I personally preferred it the other way, but I guess this way is more accurate. Gkhan 22:15, Apr 17, 2005 (UTC)
To say it's O(2^n) would be a bit silly though, when the point is that it takes a lot of time. We really want to give a lower bound in this context, not an upper bound. Deco 07:07, 23 May 2005 (UTC)[reply]
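
To make the bound concrete: the total number of calls made by the naive recursion obeys the same recurrence as the Fibonacci numbers themselves. A tiny illustrative sketch in Python (the function name is made up):

    def naive_calls(n):
        # Calls made by naive fib(n): C(n) = C(n-1) + C(n-2) + 1 with
        # C(0) = C(1) = 1, which solves to C(n) = 2*fib(n+1) - 1, i.e.
        # Theta(phi^n) -- exponential, but with base phi < 2.
        return 1 if n < 2 else 1 + naive_calls(n - 1) + naive_calls(n - 2)

    print(naive_calls(10))  # 177 = 2*fib(11) - 1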

Fleizach: the example says m = min(a, b, c). Then it later says if m = a, path = 1; if m = b, path = 2; and so on. This doesn't take into account the case where, for example, a and b are the same: the path taken may not be the same path. Fixing this would make the code a lot more verbose, unfortunately.
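
One way to fix that without much extra verbosity is to select the index of the minimum directly, so ties resolve consistently. A sketch in Python, with made-up values for a, b, c:

    a, b, c = 5, 5, 7          # example costs; a == b is the problematic tie
    costs = [a, b, c]
    # argmin instead of min-then-compare: ties go to the first minimum
    path = 1 + min(range(len(costs)), key=costs.__getitem__)
    print(path)                # 1, deterministically, even though a == b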

The note about there being an O(1) algorithm for computing fib(n) is sloppy at best. Since all of the analysis in this section is done with respect to the input n, not the size of the input log(n), it may at first glance seem that there is an O(1) algorithm using the closed-form expression. However, since the Fibonacci numbers grow as phi^n, no matter what algorithm is used, even writing down the result must take at least O(n) time. In fact, this is exponential in the size of the input. As implemented on most hardware, the closed-form calculation of fib(n) uses floating-point arithmetic, and is therefore arguably an O(1) approximation, but not an exact solution. I think this misleading and distracting comment should be removed from the article. —Preceding unsigned comment added by 108.65.30.164 (talk) 04:26, 1 February 2011 (UTC)[reply]

Discrete vs. Continuous, Deterministic vs. Stochastic

If I remember well from the courses I took a long time ago, dynamic programming comes in different flavors: discrete vs. continuous and stochastic vs. deterministic. Usually, in computer science, we just reduce dynamic programming to the deterministic, discrete case.

An example of continuous dynamic programming that I would like to see in Wikipedia is the solution of the problem of landing on the Moon, where one has to minimize the amount of rocket fuel for the lunar landing of a spaceship. Tomo 12:38, 16 Apr 2005 (UTC)

Figure 1

"Figure 1. Finding the shortest path using optimal substructure; wavy lines indicate shortest path"

Surely it's bold lines that indicate shortest path in this diagram? What are wavy lines representing here? - Chris Wood 15:23, 3 Jun 2005 (UTC)

The bold lines indicate the shortest path from the start to the goal. The wavy lines indicate the shortest path between the two vertices they connect. Clarifying now. Deco 18:55, 3 Jun 2005 (UTC)
Just a thought—my edit (reverted, thanks Pm215) should be some indication that the graphic still doesn't make much sense. There's only one path between each pair of vertices, that is, the edge that connects them, and it must be the shortest of, um, itself, right? And just to add confusion, the wavy edges are the three longest in the graph! At the very least, the extra-short edge labeled '2' is clearly short, but it doesn't get any special treatment. I don't think I'm being dense, but this doesn't really make sense to me, and I wonder if I'm the only person staring at the graph who's lost. Patrick O'Leary 04:23, 15 November 2006 (UTC)[reply]
The idea (I believe) is that the right hand side of the figure is the 'solved' subpart of the problem, so the wavy lines indicate the length of the (determined in previous iterations) shortest path through a set of nodes which aren't shown, whereas the straight edges indicate real connections in the small part of the graph we're trying to solve next. I agree that it's not very clear, and given the other problems noted below perhaps we should just drop this example? pm215 12:45, 18 November 2006 (UTC)[reply]
Ahh, that sort of makes sense. Call it left-to-right language bias, but there's no indication that the solution is proceeding leftward, and since this is the English-language Wikipedia, some explicit mention of that would seem justified. But at any rate, removing it does seem like a viable option. Patrick O'Leary 18:46, 20 November 2006 (UTC)[reply]

This figure makes me a little nervous, since it just falls short of implying that one can search for shortest paths in a graph using a straightforward DP (say, memoization: shortestpath(i)). This is not possible, except in directed, acyclic graphs. I suggest altering the image so that the edges are directed (have an arrowhead at one end). Ruberik 02:05, 10 June 2006 (UTC)[reply]

Hmm, you're right about that. For now I'll note it in the text. Deco 02:08, 10 June 2006 (UTC)[reply]
Wow, you're speedy. Thanks; also, I've just realized that the example that refers to the image has a similar issue (at best) or is wrong (at worst). It talks about finding shortest paths from adjacent vertices first, which makes some sense if the graph is a DAG. The problem is that no matter how you slice it, you don't find shortest paths "from all adjacent vertices" first -- if I have a graph where the destination is T and I have nodes A and B neighbouring it, A->B costs 1, A->T costs 20, B->T costs 1, then computing all adjacent vertices in no particular order is not useful. Ruberik 02:26, 10 June 2006 (UTC)[reply]
I'm not really sure how to fix this example; while its heart is in the right place, I don't think it has anything to do with DP. Changing the graph to a DAG helps, but the explanation would need changing to something much more complex (unless someone has a succinct way of doing it). Can anyone come up with a succinct example that fills the same "very brief introduction" purpose as the current one? Coin Changer is a good simple example of DP, but it still takes some space to explain. Ruberik 21:06, 12 June 2006 (UTC)[reply]
Our best bet may be to stick with Fibonacci throughout. Sorry about that. Deco 03:10, 13 June 2006 (UTC)[reply]
Agreed with Ruberik. It's misleading to start off with an "example" that has nothing to do with DP at all. The diagram is not correct unless the graph is directed and acyclic. Fixing it is no easy task. igor_nav
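
For reference, here is how small the example becomes once the graph is a DAG: a memoized sketch in Python over a made-up graph. The recursion below is well-founded only because the graph is acyclic, which is exactly the property the current figure fails to guarantee:

    from functools import lru_cache

    # Made-up DAG: node -> list of (successor, edge length), target 't'.
    graph = {
        's': [('a', 2), ('b', 5)],
        'a': [('b', 1), ('t', 20)],
        'b': [('t', 1)],
        't': [],
    }

    @lru_cache(maxsize=None)
    def shortest(u):
        # Shortest distance from u to 't'; the recursion terminates only
        # because the graph is acyclic (no path can revisit u).
        if u == 't':
            return 0
        return min(w + shortest(v) for v, w in graph[u])

    print(shortest('s'))  # 4, via s -> a -> b -> t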

Mild correction of pseudocode.

The article's pseudocode was inconsistent. In places it used Python-like indenting to represent the code structure, but it also included brackets at times. I have added brackets so that the pseudocode follows a C-like syntax: if the body of a loop, conditional, or whatnot is longer than one line, it is surrounded by brackets.

A more aggressive editor might prefer that no brackets be used. I don't know whether a bracket-less pseudocode would be clearer, but any consistent use of brackets would be clearer than what was there. (Would always using brackets be clearer -- for people who use a text-to-speech synthesizer, for example -- or would it just put in too much punctuation?) Chip Unicorn

In pseudocode I like to prefer indentation to braces where it's not unclear, to avoid eye-clutter, but I don't disagree with your changes - it's still sparse enough to be readable. Deco 01:10, 23 Jun 2005 (UTC)
I tried it without braces, and it looks visually clearer. If Wikipedia gets complaints that this is un-understandable by some group, we can put in some way to distinguish the levels. Chip Unicorn 12:41, 23 Jun 2005 (UTC)

Haskell memoization?

The article makes the claim that Haskell automatically memoizes the results of every function call, so that dynamic programming is never necessary. Having just tried to find the 300th Fibonacci number by a naively recursive approach in Hugs, I can assure you that such is not the case, at least not in all implementations. Similar doubts are expressed on the Talk page for Levenshtein distance. In fact, I came across someone's blog entry which talks about memoization in Haskell with an example of memoizing the Fibonacci numbers, contradicting the article's claim that this is done automatically. -- Anonymous, 11 March 2006

No documents found confirming memoization in Haskell

I would like to add that

  • The Haskell 98 Standard does not say anything about memoization.
  • A Google search on haskell memoization does not point to any such documentation; it only gives documents talking about techniques to achieve this by programming. (Of course this is not a very good argument!)
  • Even the most optimizing Glasgow Haskell Compiler does not perform this kind of memoization, as one can verify by checking that the time to evaluate the following naïve Fibonacci function grows exponentially in n:
        -- Naive doubly recursive definition: with no automatic
        -- memoization, each call fans out into two more calls.
        fib 0 = 0
        fib 1 = 1
        fib n = fib (n-1) + fib (n-2)

Thus I think the mention of Haskell memoizing should be removed.

(Of course it is possible to write memoizing functions in Haskell; the point is that there seems to be no automatic memoization.)


-- Abhay Parvate 06:20, 13 July 2006 (UTC)[reply]

Old-fashioned Terminology

The word "Dynamic" makes most programmers think of dynamic programming language and/or Dynamic typing, or even dynamic compilation. Of course, Dynamic Programming was developed and named long before any of these (in fact, before any kind of compilation!) Maybe we should say something about this? Here's my first draft; improvements welcome!

Dynamic Programming is not related to dynamic programming languages nor to dynamic compilation, which were developed later.

This could be appended to either of the introductory paragraphs. I'd like some feedback before I insert this. Chris Chittleborough 08:53, 26 March 2006 (UTC)[reply]


I'd like to mention that Sean R. Eddy described how the term was coined. It was meant to shield Richard Bellman's work from US Secretary of Defense Charles Wilson, who was hostile to math research. http://www.nature.com/nbt/journal/v22/n7/full/nbt0704-909.html -Huggie


How is this different from the Divide & Conquer strategy, which is quite popular in the computing world? Can someone explain, please? It's just that the word 'dynamic' has more implications in computer software, and is just confusing! Cgeorge1 (talk) 18:45, 24 February 2009 (UTC)[reply]

Dynamic programming is mentioned on the Divide and Conquer Wikipedia page. So I think the defining characteristic of DP is the memoization of subproblems. If there is no common subproblem and hence no memoization, it cannot be called DP. Don't get too focused on the words "dynamic" or "programming". To me it is completely uninformative.--Huggie (talk) 04:37, 13 October 2009 (UTC)[reply]

I added a citation needed tag to the sentence that says "programming" is a reference to "mathematical programming". According to the pdf linked in the article [1] and [2], the word "dynamic" seems to refer to time series, and "programming" refers to planning. I don't know if that has to do with programming in "mathematical programming". Well, I don't know what the word programming means in that context. --Huggie (talk) 05:18, 13 October 2009 (UTC)[reply]

Greedy vs Dynamic Programming

A few of the algorithms listed here, I believe, are not referred to as dynamic programming.

Certainly Dijkstra's shortest-path algorithm is not; it's more of a greedy algorithm. I believe Bellman-Ford might also be wrongly listed here, but I'm not as confident about that.


The n^2 version of Dijkstra's could be considered dynamic programming, though the far more common m + n log n version is surely not. I would term Bellman-Ford DP because it can be thought of as filling in a two-dimensional table, where T[i][j] is the shortest path to i using less than or equal to j edges. Since we don't care about the exact value of j, we tend to squash out that dimension of the array. One can think of Floyd-Warshall in a similar way.


According to section 24.3 of the CLRS Introduction to Algorithms book, MIT Press, Dijkstra's single-source shortest-paths algorithm uses a greedy approach. This is quite logical given the fact that we continually add the locally optimal vertex to the set of solved vertices at each step. I have thus removed it from the list of DP algorithms. Also, the n^2 version of Dijkstra's algorithm just doesn't use a priority queue to sort the vertices (it has an O(n) extract-min operation instead). As for Bellman-Ford, the fact that it fills in a table T[i][j] doesn't make it a dynamic programming algorithm. In fact, I would almost call Bellman-Ford a brute-force algorithm, because for each vertex it goes through each edge (O(VE)), relaxing every single edge V times... it seems almost crude.

Dijkstra is not DP because it fills out a table. It is DP because it recursively solves smaller sub-problems (one node less). It is greedy in the way it chooses which sub-problem to solve (kind of backwards: it chooses superproblems). Because of the way it chooses, it needs to do less work (one relax for each node). -- Nils Grimsmo 21:29, 3 August 2006 (UTC)[reply]
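
For reference, the "squashed table" reading of Bellman-Ford described above, written out with the table kept explicit. A Python sketch with an illustrative edge-list format:

    INF = float('inf')

    def bellman_ford_table(n, edges, source):
        # dist[j][v] = shortest distance from source to v using at most
        # j edges -- the two-dimensional table T[i][j] described above,
        # before the edge-count dimension is squashed out.
        dist = [[INF] * n for _ in range(n)]
        dist[0][source] = 0
        for j in range(1, n):
            dist[j] = dist[j - 1][:]            # reuse any path with < j edges...
            for u, v, w in edges:               # ...or extend one by a final edge (u, v)
                if dist[j - 1][u] + w < dist[j][v]:
                    dist[j][v] = dist[j - 1][u] + w
        return dist[n - 1]                      # simple paths use at most n-1 edges

    # Nodes 0..3, edges given as (u, v, weight):
    print(bellman_ford_table(4, [(0, 1, 4), (0, 2, 1), (2, 1, 2), (1, 3, 1)], 0))
    # [0, 3, 1, 4]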

Weirdness

The page says "i dont know" in those terms, which is not only weirdly misplaced, but also improperly formatted. Someone check.

Smith-Waterman Algorithm

I don't see any mention of the Smith-Waterman algorithm, which is a dynamic programming algorithm just like Needleman-Wunsch. Would it be appropriate to add that one as well? --scskowron 13:57, 13 December 2006 (UTC)

Bellman equation

Principle of optimality links here, but there is no mention of what the principle is. The principle, as I understand it, is basically the equivalence of certain dynamic programming problems to the Bellman equation (which should probably be mentioned here as well, or the Hamilton-Jacobi-Bellman equation). Smmurphy(Talk) 00:53, 26 March 2007 (UTC)[reply]

I put the BE into this article, and the PoO into the BE article. I'll change the PoO redirect to BE. Smmurphy(Talk) 02:27, 26 March 2007 (UTC)[reply]

Figure 1?

The caption on Figure 1 still doesn't make any sense. There's at most one path between any two vertices. Why aren't all the lines wavy, then? It seems like the figure should be replaced; it doesn't do much to support the article as is, and has been confusing readers for almost two years! -- Mikeblas 06:29, 27 March 2007 (UTC)[reply]

I'd like to clarify the diagram, but I don't understand your objections at all. Why would there be at most one path between any two vertices? In most graphs there are many paths between any two chosen vertices. Why should all the lines be wavy? The other lines represent single edges, not paths. I also disagree with the claim that it doesn't add much; there are no other simple examples of optimal substructure presented here. The text provides some additional explanation, but if there's anything I can do to clarify it further I'd be happy to. Dcoetzee 20:27, 31 March 2007 (UTC)[reply]
I think I might know what you're not clear about now. This is not a graph on 5 vertices; the wavy lines indicate paths that may include any number of nodes. The nodes on those paths are diagrammatically omitted because they don't matter; only the total length of the path matters. Only the neighbours of the start node, those immediately adjacent to it, are shown. Dcoetzee 21:00, 31 March 2007 (UTC)[reply]

Problem with Checkerboard Example

In the checkerboard example, the indices are reversed. As indicated c(i,j) should reference the ith row and the jth column, however, as far as I can tell the ith column and jth row are referenced instead. This error is propagated throughout the example.

I don't think the problem is a reversal of the indices but a reversal of i = 1 and i = n (the rows). If the definition q(i,j) = c(i,j) holds for i = 1, then things will work out. 145.33.124.14 (talk) 13:36, 11 May 2011 (UTC)[reply]
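
For whoever revises the example, here is a minimal Python sketch of the recurrence with i consistently the row and j consistently the column, as the example intends; the cost function c and the toy board are illustrative:

    INF = float('inf')

    def min_checker_cost(c, n):
        # q[i][j] = cheapest cost of reaching row i, column j (1-based),
        # entering from row i-1 at column j-1, j, or j+1.
        q = [[INF] * (n + 2) for _ in range(n + 1)]   # cols 0, n+1 are off-board
        for j in range(1, n + 1):
            q[1][j] = c(1, j)                          # base case: the first row
        for i in range(2, n + 1):
            for j in range(1, n + 1):
                q[i][j] = min(q[i-1][j-1], q[i-1][j], q[i-1][j+1]) + c(i, j)
        return min(q[n][1:n+1])

    # Toy board where the cost of square (i, j) is just j: the cheapest
    # walk hugs column 1, so four rows cost 4.
    print(min_checker_cost(lambda i, j: j, 4))  # 4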

Misplaced section

The part following "The steps for using dynamic program goes as follows:" is out of place. Out of nowhere the article starts using terms like 'gap penalties' and 'sequence alignment.' Seems like this was copied from some article on a specific application, such as protien sequencing. Anon, Wed Jul 11 14:11:43 EDT 2007

Top-down approach

[quote] Top-down approach: The problem is broken into subproblems, and these subproblems are solved and the solutions remembered, in case they need to be solved again. This is recursion and memoization combined together. [/quote]

I thought the top-down approach wasn't part of dynamic programming. I thought it was used by Greedy Algorithms —Preceding unsigned comment added by 84.192.157.164 (talk) 18:12, 5 January 2008 (UTC)[reply]

Matrix Example

The matrix example is a particularly bad one. Not only is it poorly explained, but it's impractical. Why bother writing an algorithm to count all possible assignments of such a matrix when you can get the answer using a combinatorics formula? A different example might be more appropriate. —Preceding unsigned comment added by 66.207.196.66 (talk) 18:02, 10 July 2008 (UTC)[reply]

Would you kindly provide a reference for the combinatorics formula, as the OEIS entry (sequence A058527 in the OEIS) does not list it. -Zahlentheorie (talk) 23:05, 11 July 2008 (UTC)[reply]

IMHO, the confusion comes from the fact that the four examples given for n=4 are, somehow, equivalent. Each one can be transformed into any of the others by permuting rows and then permuting columns. Thus, one might get the wrong idea that the number of solutions is , or something similar. I would replace one of these four with:

--Como (talk) 13:08, 14 January 2012 (UTC)[reply]

If you sort the pairs before memoizing, then you will save a lot of space and time. The following two cases (mentioned in the article) are completely equivalent:

((1, 2) (2, 1) (1, 2) (2, 1))       ((1, 2) (1, 2) (2, 1) (2, 1))     k = 3

Permuting the pairs does not change the number of ways to arrange the ones and zeroes in the remaining rows.

Oh, by the way, you don't need to store both elements of the pair, since for every pair the two elements add up to the number of remaining rows. In addition, the number of remaining ones is always equal to the number of remaining rows times n/2. Therefore, the memoization key can be as simple as the sorted vector of remaining ones per column. There will be no conflict between different values of k.

The previous two cases would be memoized under the key:

(1, 1, 2, 2)

The text might be simplified by taking all of this into account. --Como (talk) 10:56, 21 January 2012 (UTC)[reply]
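
The whole scheme sketched in Python, for whoever reworks the text (the function names are made up). The memoization key is just the sorted vector of remaining ones per column, as suggested above:

    from functools import lru_cache
    from itertools import combinations

    def count_01_matrices(n):
        # n x n 0-1 matrices with exactly n/2 ones in every row and column,
        # filled in one row at a time.
        half = n // 2

        @lru_cache(maxsize=None)
        def fill(remaining, rows_left):
            # remaining: sorted tuple of ones still owed to each column.
            # Sorting merges states that differ only by a column permutation,
            # which is exactly the saving described above.
            if rows_left == 0:
                return 1 if not any(remaining) else 0
            if any(r > rows_left for r in remaining):
                return 0          # some column can no longer be satisfied
            total = 0
            open_cols = [i for i, r in enumerate(remaining) if r > 0]
            for chosen in combinations(open_cols, half):
                nxt = list(remaining)
                for i in chosen:
                    nxt[i] -= 1
                total += fill(tuple(sorted(nxt)), rows_left - 1)
            return total

        return fill(tuple([half] * n), n)

    print(count_01_matrices(4))  # 90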

But what is Dynamic Programming?

The article does not say what Dynamic Programming is! The introduction only says what it is used for ("solving problems exhibiting [certain properties]"). Can someone write a more descriptive definition? Magical Orb (talk) 01:01, 20 July 2008 (UTC)[reply]


^ I agree -- it's very difficult to determine exactly what dynamic programming is from this article. It seems somewhat circular, too. At the beginning, it says it is for solving problems exhibiting overlapping subproblems and optimal substructure. How does it solve these problems? By using overlapping subproblems and optimal substructure. That makes no sense. This article really needs cleaning up - and the formatting needs some help too. —Preceding unsigned comment added by SparkyCola (talkcontribs) 16:57, 22 March 2009 (UTC)[reply]

Tropical Matrix Multiplication

Dynamic programming can be succinctly and elegantly expressed as tropical matrix multiplication. It avoids most of the clutter that plagues this page. —Preceding unsigned comment added by 69.245.186.56 (talk) 08:21, 27 July 2008 (UTC)[reply]
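
For the curious, a minimal Python sketch of what that means. Over the tropical (min, +) semiring, "matrix multiplication" composes shortest-path subproblems, so repeated tropical products carry out a Bellman-Ford-style DP; the names below are illustrative:

    INF = float('inf')

    def min_plus(A, B):
        # Tropical (min, +) matrix product: C[i][j] = min over k of A[i][k] + B[k][j].
        # Taking tropical powers of a graph's weight matrix fills in the
        # "shortest path using at most k edges" DP table, one power at a time.
        return [[min(A[i][k] + B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))]
                for i in range(len(A))]

    # Weight matrix of a small made-up digraph (0 on the diagonal).
    W = [[0,   4,   1,   INF],
         [INF, 0,   INF, 1],
         [INF, 2,   0,   INF],
         [INF, INF, INF, 0]]
    print(min_plus(min_plus(W, W), W)[0])  # [0, 3, 1, 4]: shortest <=3-edge paths from node 0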

Error in Fibonacci sequence example?

The example says:

var m := map(0 → 1, 1 → 1)

It should be:
var m := map(0 → 0, 1 → 1)

Yes?

This is the first time I've written anything on Wikipedia. I'm scared of doing something wrong :) If I'm correct, maybe someone else can change it. —Preceding unsigned comment added by BrusNO (talkcontribs) 12:25, 19 September 2008 (UTC)[reply]

"Programming" versus "Computer Programming" in introduction section

The last sentence of the introduction says: "The word 'programming' in 'dynamic programming' has no particular connection to computer programming ... the 'program' is the optimal plan for action that is produced ... Programming, in this sense, means finding an acceptable plan of action, an algorithm." This entire paragraph contradicts itself, because what is described is exactly the same as computer programming. Developing an algorithm is computer programming, and a computer program is an algorithm, or a "plan of action." This paragraph is either totally incorrect, or requires much better explanation. An example of 'dynamic programming' that couldn't be implemented as a computer program would make this section much more clear. I doubt that such an example exists, however. A computer program is just a sequence ("plan") of instructions ("actions") that produce a result. Also, if there really is no connection to computer programming, then why does the first sentence of this same section say: "In mathematics and computer science, dynamic programming is a method of ..."? -LesPaul75 (talk) 08:53, 29 October 2008 (UTC)[reply]

The point is correct, but confusingly stated. The term "programming" in "dynamic programming" is analogous to the term "programming" in "linear programming", a more general usage that predates the more specific sense now typically used (computer programming). Television programming is another example of the more general usage. The point is to emphasize that dynamic programming is not a method for writing code - it may be a tool used when writing code, but it operates by a mechanism that is independent of computer representation. I don't know if this makes any more sense. Dcoetzee 09:03, 29 October 2008 (UTC)[reply]
I understand, but it still seems false to say that there is "no particular connection to computer programming." You just gave an example of a connection, in fact. It may be a tool used when writing code. The two seem very connected. It might be more clear to say that "dynamic programming" is not limited to computer programming, because it does not require computer representation (i.e. a computer program). Similarly, one could say that "addition" is not limited to computer programming, because you can add physical things without computer representation, like bananas. But you can't say that addition has no connection to computer programming, because "ADD" is one of the most fundamental computer instructions and is present on (probably) every digital computer ever made. -LesPaul75talk 09:28, 29 October 2008 (UTC)[reply]
  • I wasn't watching the page so I don't know exactly when it happened, but the change made to the introduction (discussed above) is excellent. Much more clear than the original. Kudos! -LesPaul75talk 07:35, 23 April 2009 (UTC)[reply]

Field founded as topic

The sentence in the intro,

The field was founded as a systems analysis and engineering topic that is recognized by the IEEE.

is unclear. Let's see: "The field was founded as a topic recognized by IEEE." If it was founded as a topic, is it really a field? Did IEEE recognize the topic or the field? If it was founded as a field by the IEEE, then what did Richard Bellman create?--Christopher King (talk) 19:20, 15 January 2009 (UTC)[reply]

And Prolog

As an old (in both senses) Prolog programmer, it finally clicked with me what dynamic programming is when I realized it's implemented with 'assert' in Prolog (see the article on Prolog). For me, this article would have been clearer if it said that up front. But I suppose there aren't too many of us Prolog programmers around, so not many people would benefit from that piece of information :-(. Mcswell (talk) 22:29, 30 January 2009 (UTC)[reply]

New Introduction

I have added a paragraph on the general structure of the dynamic programming algorithm. I have also added the book by Dreyfus and Law, which I consider an excellent introduction to dynamic programming. May 2009

Retrieved from "http://en.wikipedia.org/wiki/User_talk:Shuroo" —Preceding unsigned comment added by Shuroo (talkcontribs) 19:38, 13 May 2009 (UTC)[reply]

Bottom - top ???

Currently the definitions of bottom-up and top-down in the introduction are SWITCHED relative to the definitions given in the overview section. Which ones are right?? --Rinconsoleao (talk) 09:44, 20 May 2009 (UTC)[reply]

I just noticed that the introduction states bottom-up twice, but never top-down. --JulienC (talk) 22:29, 14 October 2009 (UTC)[reply]

definition of dynamic programming?

I assume the usual meaning of dynamic programming is dynamic optimization. When I read this entry, it is really confusing to me to see a category of 'dynamic programming in computer programming'. In fact, the shortest path algorithm in that category is basically an example of the Bellman equation. I suggest reorganizing this article to make it clearer. For example, add a formal mathematical definition of dynamic programming, which should only be a type of optimization. And some examples, like the Fibonacci sequence and the 0-1 matrix, should be removed. —Preceding unsigned comment added by SeldonPlan (talkcontribs) 03:49, 8 November 2009 (UTC)[reply]

What you describe sounds close to what I've been told is the definition of dynamic programming in certain kinds of engineering, but farther from the standard usage in computer science. Perhaps someone who understands both should clarify the distinction between these two meanings of dynamic programming. —David Eppstein (talk) 07:16, 8 November 2009 (UTC)[reply]
You're right. The clarification of the two meanings is needed. I took a look at some dynamic programming descriptions from computer scientists, and it seems to me that these two meanings are fundamentally the same thing but in different forms. A dynamic programming method, whether in mathematics or computer science, finds the solution of a problem from the solutions of sub-problems. We can formulate it in the following way. For a set of problems, connect them in a directed acyclic graph and sort them by topological sort. The graph is constructed in such a way that the solution of any problem P in the set can easily be worked out from the solutions of the problems after P. So we can tackle P by tackling problems from right to left. -- SeldonPlan
Your comments are correct but are limited to FINITE horizon dynamic programming.Shuroo (talk) 10:35, 9 November 2009 (UTC)[reply]

I also was taught that dynamic programming is just dynamic optimization, a particular subfield or application of control theory - and that's from the point of view of economics (in addition to engineering mentioned above). It seems that this article is based on a computer programming definition of the term which may not be applicable across fields. Volunteer Marek (talk) 00:13, 3 December 2010 (UTC)[reply]

It seems to me that the current article is pretty clear about the fact that dynamic programming refers to methods in two different fields -- mathematical optimization and computer programming. In math (including mathematical economics), it means recursive methods for dynamic optimization. In computer programming, it refers to writing algorithms in a recursive way. At some deep abstract level, they are both the same thing -- in particular, both are just ways of breaking a complex problem into simpler pieces. But the applications are sufficiently different that mathematicians use very different notation and vocabulary when talking about dynamic programming than computer scientists do. I think the article makes clear that DP is applied in two different fields, and gives examples of both, using the notation and vocabulary of those fields.
Still, maybe the lead paragraph should include a bit of math vocabulary too, to make it more readable by people outside computer science. And maybe it would be useful to write a section explaining the relationship between these two areas of application. Unfortunately, I don't know of any textbook that discusses DP both from the math perspective and the computer science perspective. So if I tried to explain the relation between the two, it would feel like 'original research' to me.Rinconsoleao (talk) 08:08, 3 December 2010 (UTC)[reply]

Polynomial/exponential?

Recent edits keep going back and forth on whether DP is mostly used for problems solvable in polynomial or exponential time. Since there is no clear agreement, I am just going to delete the claim.

Obviously, DP is easier for problems solvable in polynomial time. But at least in my experience (economics), DP is often used for problems solvable in exponential time. Therefore attention is often restricted to small problems, in order to avoid the curse of dimensionality. --Rinconsoleao (talk) 11:05, 9 February 2010 (UTC)

"Dynamic" and "programming"

The origin of the term has been discussed before under #Old-fashioned Terminology and #"Programming" versus "Computer Programming" in introduction section, but I think the current article is not very clear. It primarily distances the term from "computer programming", instead of saying why the words were chosen.

I became much clearer about dynamic programming after I read in an article by Eddy that the term does not explain the method; I stopped trying to understand what was "dynamic" about it. So now I am revising one paragraph of the article, using the article by Eddy as a reference and with further information from the Wikipedia article Optimization (mathematics). I am keeping the reference to the book by Nocedal and Wright even though I have not seen it; I hope it is still relevant.

For the benefit of editors who have no access to the article by Eddy, I include a quotation here:

Don't expect much enlightenment from the etymology of the term 'dynamic programming,' though. Dynamic programming was formalized in the early 1950s by mathematician Richard Bellman, who was working at RAND Corporation on optimal decision processes. He wanted to concoct an impressive name that would shield his work from US Secretary of Defense Charles Wilson, a man known to be hostile to mathematics research. His work involved time series and planning — thus 'dynamic' and 'programming' (note, nothing particularly to do with computer programming). Bellman especially liked 'dynamic' because "it's impossible to use the word dynamic in a pejorative sense"; he figured dynamic programming was "something not even a Congressman could object to"'.

JonH (talk) 21:52, 7 April 2010 (UTC)[reply]

Congressmen also don't object to "memoization" or "bottoms up" Maghnus (talk) 15:44, 8 April 2010 (UTC)[reply]
The current page reads "The word dynamic was chosen by Bellman because it sounded impressive, not because it described how the method worked." That's leaving out the "involved time series" (from your above quote) reasoning behind the name. I'm going to edit the page to include this facet of the name. 216.195.28.24 (talk) 04:30, 8 February 2011 (UTC)[reply]

What is dynamic programming? revisited

Much of the confusion regarding this important issue -- here and elsewhere -- has to do with the fact that a distinction should be made between various aspects of dynamic programming. In particular, a discussion of the distinction between the modeling and algorithmic aspects of dynamic programming is completely missing from this article. It is "hidden" in the distinction made between "computer science" and "mathematical optimization" perspectives on dynamic programming.

I shall try to restructure the article with this in mind.

Preliminary comments/suggestions will be useful!! Sniedo (talk) 07:53, 25 September 2010 (UTC)[reply]

Section needs to be revised

The section http://en.wikipedia.org/wiki/Dynamic_programming#Dynamic_programming_in_computer_programming says:

There are two key attributes that a problem must have in order for dynamic programming to be applicable: optimal substructure and overlapping subproblems which are only slightly smaller. When the overlapping problems are, say, half the size of the original problem the strategy is called "divide and conquer" rather than "dynamic programming". This is why mergesort, quicksort, and finding all matches of a regular expression are not classified as dynamic programming problems.

I think this is confusing to say the least, if not outright incorrect. There are no overlapping subproblems when it comes to merge- and quicksort. If the array to be sorted is special (e.g. [2,1,2,1]), there might be recurring subproblem instances (e.g. sorting [2,1]), but this depends on the input. When we say that a problem exhibits overlapping subproblems, we mean that solutions to subproblem instances will be needed more than once, regardless of the input. Merge- and quicksort are not classified as dynamic programming solutions because they don't remember (don't memoize) solutions to subproblem instances in order to be more efficient. Dynamic programming is about trading storage for speed. (See the instrumented sketch after this comment.)

Then it says:

Overlapping subproblems means that the space of subproblems must be small, that is, any recursive algorithm solving the problem should solve the same subproblems over and over, rather than generating new subproblems.

I think this statement confuses the set of subproblem instances that are actually needed with the space of subproblems in general.

Matt Kovacs (talk) 05:33, 17 November 2010 (UTC)[reply]
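
The input-independence point is easy to demonstrate by instrumenting the naive Fibonacci recursion, which re-solves the same instances for every input. An illustrative Python sketch:

    from collections import Counter

    times_solved = Counter()

    def fib(n):
        times_solved[n] += 1        # record every time this instance is solved
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    fib(10)
    print(times_solved[2])  # 34: fib(2) is re-solved 34 times, and this
                            # overlap occurs for every input, not just special ones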

O(1) formula claim is not true

(Note the calculation of the Fibonacci sequence is used to demonstrate dynamic programming. An O(1) formula exists from which an arbitrary term can be calculated, which is more efficient than any dynamic programming technique.)

It is true that there exists a direct formula for the Fibonacci numbers, but the formula isn't O(1) time to compute. It is easy to see that the computation must take at least O(n) time, as the n-th Fibonacci number has O(n) digits (and in fact using the formula to compute Fibonacci numbers results in an algorithm with even worse than linear complexity, depending on which algorithm you use to multiply numbers). —Preceding unsigned comment added by 85.191.32.132 (talk) 01:31, 13 May 2011 (UTC)[reply]
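
Both halves of that point are easy to check in Python (function names are made up): the exact bottom-up computation works on Θ(n)-digit integers, while the floating-point closed form silently stops being exact:

    import math

    def fib_exact(n):
        # Bottom-up and exact; the operands grow to Theta(n) digits,
        # so even this cannot run in O(1) time.
        a, b = 0, 1
        for _ in range(n):
            a, b = b, a + b
        return a

    def fib_binet(n):
        # Closed-form ("Binet") expression in double precision:
        # fast, but only an approximation once n is large.
        phi = (1 + math.sqrt(5)) / 2
        return round(phi ** n / math.sqrt(5))

    print(next(k for k in range(100) if fib_exact(k) != fib_binet(k)))
    # first wrong value -- typically around n = 70 with IEEE doubles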

Mergesort and Quicksort are dynamic programming???

Under the header "Dynamic programming in computer programming" there is a claim that mergesort and quicksort are a type of dynamic programming, though, not labeled as such. However, I feel that assertion is incorrect, because dynamic programming has overlapping subproblems, but the "answer" to a part of mergesort or quicksort does get reused. Once an section is sorted, it is not resorted (and only revisited for combining). Thus, I feel the claim that they are dynamic programming to be false, because they do not satisfy the "overlapping subproblem" requirement, though, they do satisfy optimal substructure. Please correct me if I am wrong, or the article if I am not!

Edit: I did not notice that the comment two before mine was about the same thing, but I believe this only adds credence to my statement — Preceding unsigned comment added by 99.181.61.118 (talk) 15:20, 15 August 2011 (UTC)[reply]

"Exponentially Large" in first paragraph. Arrgh!

I apologise for wasting people's time with a pet annoyance of mine, but I really think that any article that aspires to even the most rudimentary level of accuracy must avoid otiose phrases like "exponentially large". I know this phrase has become popular with the public and in the news, but that does not mean that it makes sense.

To clarify, any positive number a is "exponentially larger" than any other positive number b, because a / b = 10^x, where x = log a - log b in base-10 logarithms.

If the phrase actually means a = K^b (cf. exponential growth, y = exp x), then this is true of any pair of positive numbers a and b that are greater than 1. (Set K = exp((1/b) log a), using natural logarithms.)

Unless there some other, more meaningful definition of "exponentially large" that I'm unaware of, I think we should avoid using the phrase completely. StandardPerson (talk) 05:03, 7 March 2012 (UTC)[reply]

In this context, it means that it can grow as an exponential function of the number of input items. It is used with the correct technical meaning, not as a colloquialism. A standard example is the computation of the Fibonacci numbers — a recursive algorithm for the nth Fibonacci number that is based directly on the recurrence relation would solve a number of subproblems proportional to φ^n, where φ is the golden ratio. I.e., exponentially large as a function of n. Dynamic programming reduces the number of subproblems to linear by only considering each distinct subproblem once. —David Eppstein (talk) 05:18, 7 March 2012 (UTC)[reply]

Thank you for that clarification. Sadly, while you were writing it, I was floundering around with the Wikipedia interface, trying to find how I could contact you, and writing the note that is now on your User Talk page. Ships in the night. I remain (a little) concerned that your meaning may be unclear to visitors to this page, like myself, who hear the phrase misused in public and then get confused when the same phrase is used with a technical meaning.

I can see that you prefer "exponentially larger" to (say) "vastly larger" because it allows you to convey two concepts (magnitude and functional dependence on data size) with one word; being concise is great, but I think this is a situation where the text should be unpacked more.

In the case of a recursive program to calculate fib(n), I can see that the number of calculations ("subproblems"?) and clearly the magnitude of fib(n) both grow exponentially (as you said, specifically as φ^n) as n increases, but here the functional dependence of fib: N -> N is clear. I'm not sure that the functional dependence is nearly so clear in the opening paragraph as it stands. StandardPerson (talk) 06:33, 7 March 2012 (UTC)[reply]

Bottom-Up and Top-Down switched?

Is it just me, or are the meanings of the phrases "bottom-up" and "top-down" reversed in the 3rd paragraph of the intro? For the famous Fibonacci problem, I would think bottom-up computation of the n-th number, F_n, would entail computing F_0, then F_1, then F_2, etc., storing each one along the way. Top-down would be the recursive computation, starting with F_n and splitting it into F_(n-1) and F_(n-2), and so on. In the context of dynamic programming, the latter would use memoization. These meanings of the phrases seem consistent with the discussion in Cormen, Leiserson, Rivest, & Stein. Joshua Davis (talk) 20:58, 6 July 2012 (UTC)[reply]
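
For comparison, the two readings described in the comment above, as a minimal Python sketch (the function names are made up):

    from functools import lru_cache

    def fib_bottom_up(n):
        # Bottom-up: compute F_0, F_1, ..., F_n in order, storing each.
        table = [0, 1] + [0] * max(0, n - 1)
        for i in range(2, n + 1):
            table[i] = table[i - 1] + table[i - 2]
        return table[n]

    @lru_cache(maxsize=None)
    def fib_top_down(n):
        # Top-down: start from F_n, recurse, and memoize along the way.
        return n if n < 2 else fib_top_down(n - 1) + fib_top_down(n - 2)

    print(fib_bottom_up(10), fib_top_down(10))  # 55 55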