All about Dynamic Programming

Shubham Mittal
5 min read · Mar 24, 2021

What is Dynamic Programming?

Dynamic Programming is essentially an optimization over plain recursion. Wherever we see a recursive solution that makes repeated calls for the same inputs, we can optimize it with Dynamic Programming. The idea is simply to store the solutions of subproblems so that we do not need to re-compute them when they are required later.

With approaches like Divide-and-Conquer, the same sub-problem may be solved multiple times, so those methods end up doing extra work. Dynamic Programming solves each sub-problem just once and saves the result, avoiding the cost of recomputing it when the solution to that sub-problem is needed again later.

Why is it so important?

This simple core idea of Dynamic Programming, i.e., just storing the solutions of subproblems, can reduce time complexity from exponential to polynomial.

For example, the naive recursive computation of Fibonacci numbers takes exponential time, but just by storing the solutions of subproblems we reduce it to linear time.

Now you might wonder: where should we use Dynamic Programming?

The answer is simple: we can use Dynamic Programming on problems that can be divided into simpler subproblems whose stored results can be re-used. In other words, it is useful wherever the solution to one subproblem is required repeatedly. Hence, we can say that Dynamic Programming applies where overlapping subproblems exist.

For example, we cannot use it in Binary Search because Binary Search has no overlapping subproblems, whereas the recursive program for Fibonacci numbers has many overlapping subproblems.

Methods of Dynamic Programming

Dynamic Programming offers two methods to solve a particular problem.

1. Top-down with Memoization

In this approach, we try to solve the bigger problem by recursively finding the solutions to smaller sub-problems. Whenever we solve a sub-problem, we store its result so that we do not have to solve it again and again if it’s called multiple times. Instead, we can just return the saved result. This technique of storing the results of already solved subproblems is called Memoization.
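As an illustration, here is a minimal Python sketch of top-down Fibonacci with memoization (the function name and the memo dictionary are my own choices):

```python
# Top-down Fibonacci with memoization.
# Without the memo, fib(n) makes an exponential number of recursive
# calls; storing each result once brings the running time down to O(n).
def fib(n, memo=None):
    if memo is None:
        memo = {}
    if n <= 1:
        return n
    if n not in memo:
        memo[n] = fib(n - 1, memo) + fib(n - 2, memo)
    return memo[n]

print(fib(40))  # 102334155
```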

2. Bottom-up with Tabulation

Tabulation is the opposite of the top-down approach and avoids recursion. In this approach, we solve the problem “bottom-up” (i.e. by solving all the related sub-problems first). This is typically done by filling up an n-dimensional table. Based on the results in the table, the solution to the top/original problem is then computed.

Tabulation is the opposite of Memoization, as in Memoization we solve the problem and maintain a map of already solved sub-problems. In other words, in memoization, we do it top-down in the sense that we solve the top problem first (which typically recurses down to solve the sub-problems).
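For comparison, here is the same computation done bottom-up with tabulation; again, this is a sketch with names of my own choosing:

```python
# Bottom-up Fibonacci with tabulation.
# The table is filled from the smallest subproblem (fib(0)) upward,
# so no recursion is needed; the answer is read off the last entry.
def fib(n):
    if n <= 1:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(fib(40))  # 102334155
```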

What’s the difference between Greedy Algorithm and Dynamic Programming?

Both greedy and dynamic programming algorithms construct an optimal solution to a subproblem from optimal solutions of smaller subproblems. The main difference is that a greedy algorithm commits to one locally best choice of subproblem and hopes it leads to an optimal answer, whereas dynamic programming solves all dependent subproblems and then selects the one that leads to an optimal solution. Both require that an optimal solution of the current subproblem is built from optimal solutions of its dependent subproblems, which is referred to as the optimal substructure property.

In dynamic programming, we need to identify the following:

· the smallest subproblems and their solutions.

· a description of the solution of a subproblem in terms of solutions of smaller subproblems (usually expressed as a recurrence).

· an ordering of the subproblems from smallest to largest, so that solutions of all subproblems can be constructed.

It is not easy to prove that a greedy algorithm is optimal; however, greedy algorithms have much simpler implementations. As an example, consider the following coin change problem. You have a set of coins of denominations 1, 2, 5, 10, and you must find the minimum number of coins to construct an amount of money X. A natural greedy algorithm tries the denominations from largest to smallest until the amount is constructed. For example, constructing 29 would need two 10s, one 5, and two 2s.

The previous greedy algorithm is optimal for this set of denominations. However, if we change the denominations to 1, 3, 4, 10, it is no longer optimal. For example, to construct 6 the greedy algorithm would use one 4 and two 1s, while an optimal answer requires only two 3s.
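Here is a minimal Python sketch of this greedy strategy (the function name greedy_change is my own):

```python
# Greedy coin change: repeatedly take the largest denomination that
# still fits. Optimal for {1, 2, 5, 10}, but not for {1, 3, 4, 10}.
def greedy_change(amount, denominations):
    coins = []
    for c in sorted(denominations, reverse=True):
        while amount >= c:
            amount -= c
            coins.append(c)
    return coins

print(greedy_change(29, [1, 2, 5, 10]))  # [10, 10, 5, 2, 2] -- optimal
print(greedy_change(6, [1, 3, 4, 10]))   # [4, 1, 1] -- suboptimal; [3, 3] is better
```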

A dynamic programming algorithm for the same problem works as follows. The subproblem f(i) is the minimum number of coins needed to construct an amount of money i. The elements of the algorithm are highlighted below, followed by a short sketch:

· (Smallest subproblem) It is clear that f(0) = 0, as we need zero coins to construct zero amount of money.

· (Recurrence) f(i) = min over c ≤ i of (f(i − c) + 1), where c is a denomination no larger than the amount of money i.

· (Ordering) Order the subproblems by amount of money: 0 ≤ 1 ≤ 2 ≤ … ≤ X.
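Putting these elements together, here is a minimal bottom-up Python sketch (min_coins is my own name; the list f mirrors the recurrence above):

```python
# Bottom-up DP for minimum coin change, following the recurrence:
# f(0) = 0, and f(i) = min over denominations c <= i of f(i - c) + 1.
def min_coins(amount, denominations):
    INF = float("inf")
    f = [0] + [INF] * amount          # f[i] = min coins to make amount i
    for i in range(1, amount + 1):
        for c in denominations:
            if c <= i and f[i - c] + 1 < f[i]:
                f[i] = f[i - c] + 1
    return f[amount]                  # INF if the amount cannot be made

print(min_coins(6, [1, 3, 4, 10]))   # 2 (two 3s)
print(min_coins(29, [1, 2, 5, 10]))  # 5 (10 + 10 + 5 + 2 + 2)
```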

The dynamic programming algorithm is generally slower; however, it guarantees an optimal answer.

Steps to solve a dynamic programming problem

Unfortunately, there is no universal recipe for solving a dynamic programming problem. You need to go through many problems until you start getting the hang of it. Do not get discouraged. This is hard, maybe the hardest type of problem you will face in an interview. It is about modeling a problem with relatively simple tools; no need for fancy data structures or algorithms.

I have solved tons of them and still, sometimes I find it difficult to get to the solution. The more you practice, the easier it will be. This is the closest to a recipe to solve dynamic programming problems:

  • Prove the overlapping subproblems and optimal substructure properties.
  • Define subproblems.
  • Define recursion.
  • Code your top-down or bottom-up dynamic programming solution.

Complexity analysis varies from problem to problem, but in general, the time complexity can be expressed as:

Time ~ Number of subproblems * time per subproblem
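For example, in the coin change problem above there are X + 1 subproblems, and each one tries every denomination, so with k denominations the total running time is roughly O(X · k).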

It is straightforward to compute the space complexity for a bottom-up solution since it is equal to the space required to store solutions to the subproblems (multidimensional array).

Conclusion

Thus we can now say that Dynamic Programming reduces time complexity in exchange for increased space complexity.

Thanks for reading!
