
Algorithmic Paradigms – Dynamic Programming

by nikoo28

Dynamic programming always reminds me of a favorite quotation:

Those who cannot remember the past are condemned to repeat it.
Fig: Quotation on Dynamic Programming

Of the several different ways to solve a problem, dynamic programming is a paradigm where we solve the problem as we move ahead and keep learning along the way. It is one of the techniques that even seasoned programmers find hard to implement. In this post we will try to address some of these fears and aim for a better understanding of this approach. If you are new to problem solving and somehow landed on this post, it is recommended to go through some easier paradigms first.

What on earth is Dynamic Programming?

This is a question that people usually fail to answer. What do we generally do in school while growing up? We start in the first standard, then move on to the second, and so on, until we graduate. Suppose you learn some specific topics in your first standard; you pass your exams and move on to the second standard. What do you do in the second standard? You don't learn everything from the beginning all over again. You remember whatever you had learnt and add the new stuff on top.

This is exactly what dynamic programming is. Some problems in computer science are made up of several successive parts. Instead of solving the same sub-problems again and again, we store the results of our previous computations and use them again. This technique is called memoization.

We don't want to repeat the work for which we already have the answer. It is just like how we don't want to relearn how to solve 1 + 1, because we already know it. In dynamic programming, we always remember the answers to problems that we have already solved.

A classroom example:
Fig: Dynamic Programming in a classroom.

Teacher:
What is "1 + 1 + 1 + 1"?

Student:
(After some counting)
The answer would be 4.

Teacher:
Okay, what if now I add 1 more to the right of this equation?
What is "1 + 1 + 1 + 1 + 1"?

Student:
(Almost immediately)
The answer would be 5.

Teacher:
How could you say that so fast? Didn’t you have to count it all again?

Student:
No, I had the previous result stored in my brain.

This would be the easiest example of how dynamic programming works. A problem that fits this paradigm has two properties:
– It has Overlapping Sub-problems. This means that the sub-problems of the given problem are not independent; they are related to each other and come up repeatedly.
– It follows the Optimal sub-structure property. This means that the solutions to the sub-problems can be combined to construct the solution to the overall problem.

Dynamic Programming Patterns:

Suppose you want to become a musician using dynamic programming. You have two ways:

Memoization (Top-down)

I will be an amazing musician. How? I will work hard like crazy. How? I’ll practice more instruments and try to improve. How? I’ll start taking part in contests. Then? I’ll be practicing. What? I’m going to learn chords.

Hence, this is an approach in which we look up the answer to a sub-problem in a lookup table before computing its solution.

Fig: The top down memoization approach starts from a large problem and breaks it down to smaller parts.
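
To make this concrete, here is a minimal top-down sketch in Java (the class and method names are my own, purely for illustration), using the Fibonacci numbers that we discuss in detail later in this post. Notice how we check the lookup table before computing anything:

```java
import java.util.HashMap;
import java.util.Map;

public class TopDownFibonacci {

    // Lookup table: maps n -> fib(n), filled lazily as we recurse.
    private static final Map<Integer, Long> memo = new HashMap<>();

    static long fib(int n) {
        // Base cases: fib(0) = 0 and fib(1) = 1.
        if (n <= 1) {
            return n;
        }
        // Look for the answer in the lookup table before computing it.
        if (memo.containsKey(n)) {
            return memo.get(n);
        }
        // Break the large problem down into smaller parts.
        long result = fib(n - 1) + fib(n - 2);
        memo.put(n, result); // remember this answer for next time
        return result;
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // prints 12586269025
    }
}
```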

Tabulation (Bottom-up)

I’m going to learn chords. After that, I will start practicing. Then, I will start taking part in contests. After that, I’ll practice even more and try to improve. After working hard like crazy, I’ll be an amazing musician.

In this approach, we solve the problem from the bottom up. This is typically done by filling up the lookup table, starting from the smallest sub-problems, and then computing the solution to the top/original problem from the results in the table.

Fig: The bottom up tabulation approach starts from a small problem and builds it up to a larger one.
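
Here is the matching bottom-up sketch in Java (again, the names are just illustrative). We fill the lookup table starting from the smallest sub-problems, and the answer to the original problem is the last entry we fill:

```java
public class BottomUpFibonacci {

    static long fib(int n) {
        if (n <= 1) {
            return n; // fib(0) = 0 and fib(1) = 1
        }
        // Lookup table: table[i] holds fib(i).
        long[] table = new long[n + 1];
        table[0] = 0;
        table[1] = 1;
        // Build up from the smallest sub-problem to the original one.
        for (int i = 2; i <= n; i++) {
            table[i] = table[i - 1] + table[i - 2];
        }
        return table[n];
    }

    public static void main(String[] args) {
        System.out.println(fib(50)); // prints 12586269025
    }
}
```
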
Fibonacci Numbers:

One cannot talk about dynamic programming and forget the popular Fibonacci series. It is one of the most popular sequences, and it is used in a lot of programming applications and cryptographic techniques. The nth Fibonacci number is given by:

fib(n) = fib(n-1) + fib(n-2)

where:
fib(0) = 0
fib(1) = 1

Given this, we can calculate fib(2) as the sum of fib(1) and fib(0).

Fig: Calculating fib(2)
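
For reference, a direct translation of this recurrence into Java, with no stored results, would look something like this sketch:

```java
public class NaiveFibonacci {

    // Plain recursion: re-derives every sub-problem from scratch.
    static long fib(int n) {
        if (n <= 1) {
            return n; // fib(0) = 0 and fib(1) = 1
        }
        return fib(n - 1) + fib(n - 2);
    }
}
```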

This seems pretty straightforward, but what happens when we try to calculate a bigger Fibonacci number? fib(4) would look something like:

Fig: Calculating fib(4)

In this scenario, we are calculating the value of fib(2) twice. What happens when we try to calculate even higher Fibonacci numbers? We would end up with ever more branched trees and even more repetitive calculations. Referring to the quotation at the start of this post, we must try to avoid unnecessary computations. Hence, dynamic programming comes to the rescue.

We can save all this effort by computing each value only once and re-using it every other time. We achieve that by storing the results the first time we compute them; instead of doing the same computation again, we can just look the answer up from the stored memory.
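
To get a feel for the savings, here is a small, hypothetical experiment (the call counters are added purely for demonstration) comparing the plain recursion with the memoized version:

```java
import java.util.Arrays;

public class CallCounter {

    static long naiveCalls = 0;
    static long memoCalls = 0;
    static long[] memo; // memo[i] == -1 means "not computed yet"

    static long naiveFib(int n) {
        naiveCalls++;
        if (n <= 1) return n;
        return naiveFib(n - 1) + naiveFib(n - 2);
    }

    static long memoFib(int n) {
        memoCalls++;
        if (n <= 1) return n;
        if (memo[n] != -1) return memo[n]; // answer already stored
        return memo[n] = memoFib(n - 1) + memoFib(n - 2);
    }

    public static void main(String[] args) {
        int n = 30;
        memo = new long[n + 1];
        Arrays.fill(memo, -1);
        naiveFib(n);
        memoFib(n);
        // Prints: 2692537 calls vs. 59 calls
        System.out.println(naiveCalls + " calls vs. " + memoCalls + " calls");
    }
}
```

For n = 30, the plain recursion makes 2,692,537 calls while the memoized version makes only 59, because every sub-problem is computed exactly once.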

Further Reading:

Some more examples where we use dynamic programming are:

Video Explanation:
Part 1: (YouTube video)
Part 2: (YouTube video)

