Concept
Dynamic Programming (DP) is just a problem-solving technique. It's nothing fancy. The key idea is really very simple: instead of doing the same work multiple times, we can be, err, lazy, and "store" the result of a computation we did previously and reuse it directly.
If I asked you to solve 437 x 89 without a calculator, you might take some time (after getting annoyed at me for a while) to solve the question using a pen and paper.
And if I asked you the same question 5 minutes later, are you going to actually do the multiplication again? I should hope not. You would just look at the piece of paper you had used earlier, and tell me the answer (almost immediately).
That's it. That's all dynamic programming is. It's just a way to "cache" the result. Everything that follows is just a way of expressing this idea in programming lingo.
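To make this concrete, here is a minimal sketch in Python, with a plain dictionary standing in for the piece of paper (the names here are illustrative, not from any library):

```python
# Remember answers on "a piece of paper" (here, a dictionary) so that
# repeated questions are answered by a lookup instead of recomputation.
cache = {}

def multiply(a, b):
    if (a, b) not in cache:      # haven't solved this one before
        cache[(a, b)] = a * b    # do the work once (the pen-and-paper step)
    return cache[(a, b)]         # every later call is just a lookup

print(multiply(437, 89))  # computed the slow way: 38893
print(multiply(437, 89))  # answered from the cache, no multiplication done
```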
When to use DP?
Optimal Substructure: An optimal solution can be constructed from optimal solutions to smaller subproblems. That is, to solve a larger problem, we can solve the smaller problems and combine their solutions to get the solution to the original problem.
Optimal substructure basically allows us to break the problem down into smaller pieces, solve them individually / independently, and then use the solutions of these smaller pieces to get the solution to the bigger problem.
If this were not the case, it wouldn't make sense to split the problem in the first place, since we wouldn't get the correct answer using this approach anyway. So, we would have to use some other approach.
DP is basically brute force (solve all subproblems in order to solve the original problem) + memoization (store the results of the subproblems you solve so you only ever have to solve each of them once).
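The classic illustration of this (a sketch, not an example from these notes) is the Fibonacci sequence: the naive recursion solves the same subproblems over and over, and simply caching the results turns it into a DP algorithm.

```python
from functools import lru_cache

# Brute force: solves the same subproblems over and over (exponential time),
# e.g. fib_naive(n - 2) is computed once by fib_naive(n) and again by fib_naive(n - 1).
def fib_naive(n):
    if n <= 1:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Brute force + memoization = DP: the exact same recursion, but every
# subproblem is solved at most once and its result is stored.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 1:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```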
Examples of problems that have an optimal substructure include:
Sorting - We have seen in MergeSort that we sort two smaller arrays and then merge them to get a sorted array.
Reversing a string - To reverse a string of length n, we can reverse the last n - 1 characters and then add the first character to the end of this reversed string (a sketch of this recursion follows the list). Actually, this is equivalent to reversing an array, which can be done by splitting the array into two halves, recursively reversing each half, and joining them in the other order.
Merging two arrays - We can split the problem into merging two smaller arrays
Shortest paths - We know that if P is a shortest path from u to v and it passes through w, then P contains a shortest path from u to w and a shortest path from w to v. So, a shortest path between two nodes is composed of shortest paths between intermediary nodes too.
Minimum Spanning Tree - We know that if we cut an MST (by removing one of its edges), we get 2 smaller MSTs, one for each of the two resulting components.
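As a quick sketch of the string-reversal recursion mentioned above (written in Python for concreteness):

```python
# reverse(s) = reverse(s[1:]) + s[0]: reverse the last n - 1 characters,
# then append the first character to the end.
def reverse(s):
    if len(s) <= 1:              # base case: empty/single-char strings reverse to themselves
        return s
    return reverse(s[1:]) + s[0]

assert reverse("dynamic") == "cimanyd"
```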
So, it becomes clear that optimal substructure is a very common property. In fact, a great many problems have an optimal substructure. There are two kinds of algorithms used to solve such problems:
Greedy Algorithms: e.g. Dijkstra, Prim's MST, Kruskal's MST
Divide-and-Conquer: e.g. MergeSort, Fast Fourier Transform
But having an optimal substructure is not the only requirement to be able to use DP effectively.
DP should be used when the problem has an optimal substructure with overlapping subproblems. That is, the same subproblem is used in solving multiple bigger problems. Then it is worth storing the result (memoization) to avoid recomputing the same subproblem.
This is just a fancy way of saying "it only makes sense to save the result if we're going to need it again."
So, both Divide-and-Conquer and DP algorithms have an optimal substructure but the key difference lies in whether or not the problem has overlapping subproblems.
There are two main strategies to solve DP problems:
Bottom-up approach: start from the smallest problems and "build" the solution upwards until you reach your required case. This is similar to topologically sorting the DAG representation of the problem and solving it in reverse order.
Top-down approach: try to solve the main problem by breaking it down into smaller problems and then recursively trying to solve those, while memoizing the answers to solved problems. (Both strategies are sketched below on the Fibonacci example.)
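A sketch of both strategies on the Fibonacci example (illustrative code, as before):

```python
# Bottom-up: fill a table from the smallest subproblems up to the required one.
def fib_bottom_up(n):
    if n <= 1:
        return n
    table = [0] * (n + 1)      # table[i] will hold fib(i)
    table[1] = 1
    for i in range(2, n + 1):  # each entry depends only on already-filled entries
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

# Top-down: start from the main problem, recurse into subproblems, memoize.
def fib_top_down(n, memo=None):
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib_top_down(n - 1, memo) + fib_top_down(n - 2, memo)
    return memo[n]
```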
DP as a table
People often view DP strategies as trying to fill up a table. Each entry in the table corresponds to a subproblem that is being solved.
The key requirement is that filling any entry (i.e., solving any subproblem) should depend only on entries that have already been filled (preferably just nearby, recently filled ones). So, it is necessary to fill the DP table in the right order.
What is more important (and generally the difficult part) is deriving the relation/formula between the entries of the table - i.e., given the solution to smaller subproblems, how do you combine them to get your larger solution?
Often it is also difficult to decide what the subproblems are and how they can be effectively used to solve the original problem. All these questions must be answered before a DP algorithm can work.
It is worth mentioning that this "table" can be an array, a 2-D matrix, or even a 3-D or 4-D matrix. It all depends on the question.
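As a concrete sketch of the table view (using the classic grid-paths problem, which is my example rather than one from the notes above): each entry depends only on the entry above it and the entry to its left, so filling the table row by row is a valid order.

```python
# paths[i][j] = number of ways to walk from the top-left cell to cell (i, j),
# moving only right or down.
def count_paths(rows, cols):
    paths = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            if i == 0 or j == 0:
                paths[i][j] = 1  # base cases: exactly one way along the top/left edge
            else:
                # depends only on entries filled earlier in row-by-row order
                paths[i][j] = paths[i - 1][j] + paths[i][j - 1]
    return paths[rows - 1][cols - 1]

assert count_paths(3, 3) == 6
```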
DP Recipe
In general, we will follow these steps to solve a problem using DP (a worked sketch follows the list):
Define subproblems (in a clever way, so that the problems have an optimal substructure)
Identify the optimal substructure (i.e., if you're given the solutions to the subproblems, can you solve the original problem?)
Identify the base cases (i.e., what's the smallest problem you can solve without knowing the solutions to any other problems?)
Solve the problem using the subproblems, usually in terms of a recurrence relation (i.e., HOW would you construct the solution of the main problem using the solutions of the subproblems?)
Write (pseudo)code
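Here is the recipe applied to a classic problem (an illustrative sketch; the problem and code are my example, not from these notes): the minimum number of coins needed to make a given amount.

```python
# 1. Subproblems: best[a] = fewest coins needed to make amount a
# 2. Optimal substructure: an optimal way to make a ends with some coin c,
#    and what remains must be an optimal way to make a - c
# 3. Base case: best[0] = 0 (zero coins make amount zero)
# 4. Recurrence: best[a] = 1 + min(best[a - c] for every coin c <= a)
# 5. (Pseudo)code:
def min_coins(coins, amount):
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1  # use coin c on top of an optimal way to make a - c
    return best[amount] if best[amount] != INF else -1  # -1 if amount is unreachable

assert min_coins([1, 3, 4], 6) == 2  # 3 + 3
```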
Analysing Time Complexity
When analysing the time complexity of a DP algorithm, it is useful to split the analysis into:
The number of subproblems (or even a coarser unit, e.g., a row of the DP table)
The cost of solving each subproblem (e.g., the cost of filling one row of the DP table)
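For example, memoized Fibonacci has O(n) subproblems, each solved in O(1) time once its smaller subproblems are known, giving O(n) overall. Similarly, a 2-D table with n rows and m columns, at O(1) work per entry, costs O(nm) in total.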