
I have used the technique of dynamic programming multiple times; however, today a friend asked me how I go about defining my sub-problems, and I realized I had no way of providing an objective, formal answer. How do you formally define a sub-problem for a problem that you would solve using dynamic programming?

asked Mar 22, 2012 at 3:30
  • It looks like none of the existing answers (as of April 2019) are good enough, especially for beginners. I would recommend tutorials on other sites. Commented Apr 10, 2019 at 1:27

3 Answers


The principle of dynamic programming is to think top-down (i.e., recursively) but solve bottom-up. So a good strategy for designing a DP is to formulate the problem recursively and generate the sub-problems that way.
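A minimal sketch of this strategy (my own illustration, not part of the answer), using Fibonacci numbers as the example: the top-down recursion defines the sub-problems, and the bottom-up loop then solves the same recurrence smallest-first.

```python
def fib_recursive(n: int) -> int:
    """Top-down formulation: the recursion defines the sub-problems."""
    if n < 2:
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)


def fib_bottom_up(n: int) -> int:
    """Bottom-up DP: solve each sub-problem exactly once, smallest first."""
    if n < 2:
        return n
    prev, curr = 0, 1
    for _ in range(n - 1):
        prev, curr = curr, prev + curr
    return curr
```

Both compute the same recurrence; the bottom-up version just evaluates it in an order that never revisits a sub-problem.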

answered Mar 22, 2012 at 4:36
  • I claim that's the only strategy. Commented Mar 22, 2012 at 9:35
  • @JeffE Yes, I read and used your notes when teaching my algorithms classes, and they were effective. Quote: "Dynamic programming is not about filling in tables. It's about smart recursion!" Commented Mar 22, 2012 at 14:06
  • My understanding of how to teach DP is STRONGLY influenced by @JeffE's notes :) Commented Mar 22, 2012 at 16:47
  • It's also possible to automatically turn a top-down recursive procedure into a dynamic programming algorithm. When you are about to return, store the answer in a hash table. At the start of each call, check whether the answer is already in the hash table, and if so, return it immediately. Many algorithms become easy with this idea. For example, with such a table, recursive algorithms on tries automatically work on DAWGs. By storing a sentinel value in the table at the start of a call, the same algorithms can even work on DFAs. Algorithms on BDDs become very easy to specify recursively. Commented Mar 23, 2012 at 19:21
  • Last but not least, this can have huge performance advantages. For example, the traditional bottom-up subset-sum algorithm can compute tons of unneeded table entries. With this method, only the necessary table entries are computed. Commented Mar 23, 2012 at 19:26
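The hash-table idea from the comments above can be sketched as a generic wrapper (a hedged illustration of my own; `memoize` and `subset_sum` are names I made up for this sketch, not from the thread):

```python
from functools import wraps


def memoize(f):
    """Turn a top-down recursive procedure into a DP algorithm, as the
    comment describes: check the hash table at the start of each call,
    and store the answer before returning."""
    table = {}

    @wraps(f)
    def wrapper(*args):
        if args not in table:
            table[args] = f(*args)
        return table[args]

    return wrapper


@memoize
def subset_sum(items: tuple, target: int) -> bool:
    """Naive exponential recursion; with the table, only the distinct
    (items, target) sub-problems actually reached are ever computed."""
    if target == 0:
        return True
    if not items or target < 0:
        return False
    # Either use the first item or skip it.
    return subset_sum(items[1:], target - items[0]) or subset_sum(items[1:], target)
```

Python's standard library offers the same idea as `functools.lru_cache`; the manual version above just makes the table explicit. Note that the arguments must be hashable (hence the tuple).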

As @Suresh pointed out, once you know that your problem can be solved by DP (i.e., it exhibits optimal substructure and overlapping subproblems), you can think of a divide-and-conquer recursive solution.

Of course, divide and conquer will be highly inefficient, because every subproblem encountered in the associated recursion tree will be solved again even if it has already been found and solved. This is where DP differs: each time you encounter a subproblem, you solve it and store its solution in a table. Later, when you encounter that subproblem again, you access its solution in $\mathcal{O}(1)$ time instead of solving it again. Since the number of overlapping subproblems is typically bounded by a polynomial, and the time required to solve one subproblem is polynomial as well (otherwise DP cannot provide a cost-efficient solution), you achieve in general a polynomial solution.

Therefore, thinking about a divide-and-conquer solution will give you insight into what a subproblem may be for your particular problem.
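The difference the answer describes can be made concrete by counting how many times a sub-problem is solved with and without the table (my own sketch, using the binomial-coefficient recurrence as an arbitrary example):

```python
def binom_naive(n: int, k: int, counter: list) -> int:
    """Plain divide and conquer: every call re-solves its sub-problem."""
    counter[0] += 1
    if k == 0 or k == n:
        return 1
    return binom_naive(n - 1, k - 1, counter) + binom_naive(n - 1, k, counter)


def binom_dp(n: int, k: int, table: dict, counter: list) -> int:
    """Same recurrence with a table: each sub-problem is solved once."""
    if (n, k) in table:
        return table[(n, k)]  # O(1) lookup instead of re-solving
    counter[0] += 1
    if k == 0 or k == n:
        result = 1
    else:
        result = (binom_dp(n - 1, k - 1, table, counter)
                  + binom_dp(n - 1, k, table, counter))
    table[(n, k)] = result
    return result
```

Both return the same value, but the naive version makes exponentially many calls while the DP version solves only the polynomially many distinct sub-problems.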

answered Mar 22, 2012 at 9:16
  • "Optimal substructure" (whatever that means) is probably not a sufficient condition for DP-solvability. "Overlapping subproblems" is certainly not a necessary one. Commented Mar 22, 2012 at 11:08
  • Optimal substructure and overlapping subproblems are both exhibited by problems that can be efficiently solved by DP. Of course, optimal substructure alone is not enough for DP solvability. However, if you do not have overlapping subproblems, then you can solve the problem by ordinary divide and conquer with the same cost: indeed, the advantage of DP over divide and conquer is that each subproblem is solved exactly once when encountered in the recursion tree. Commented Mar 22, 2012 at 12:16
  • It's not my formulation: you will find it in "Introduction to Algorithms" by Cormen, Leiserson, Rivest and Stein, and in many other textbooks on algorithms. Commented Mar 22, 2012 at 15:00
  • Yup, and most are at least partially wrong. I am happy to elaborate if you post a suitable question. Commented Mar 22, 2012 at 15:35
  • I am not sure I understand your last comment correctly. To show that this kind of characterization is wrong (it can't be partially wrong: either it is correct or it is wrong), you can simply exhibit, as a counterexample, a problem which does not exhibit both optimal substructure and overlapping subproblems, yet is amenable to a polynomial DP solution. But note that, in this context, that means a solution which is provably better than ordinary divide and conquer. Commented Mar 22, 2012 at 16:23

My experience is finding a way to "cut down redundant enumeration with the help of storing values already enumerated". By the way, dynamic programming is really popular in the ICPC (International Collegiate Programming Contest). Anyone can develop their own feeling for DP after practicing several ICPC problems.
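A sketch of "storing values already enumerated" in a classic contest-style problem (my own example, not from the answer): counting the ways to make change, where each stored entry spares re-enumerating all combinations for that sub-amount.

```python
def count_change(amount: int, coins: list) -> int:
    """Number of ways to form `amount` from unlimited coins of the
    given denominations, order-insensitive."""
    ways = [0] * (amount + 1)  # ways[v] stores the answer for sub-amount v
    ways[0] = 1                # one way to make 0: use no coins
    for coin in coins:
        for v in range(coin, amount + 1):
            ways[v] += ways[v - coin]  # reuse stored values, no re-enumeration
    return ways[amount]
```

For example, with coins 1, 2, and 3 there are four ways to make 4: 1+1+1+1, 1+1+2, 2+2, and 1+3.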

answered Mar 22, 2012 at 9:32