
Dynamic Programming

6 Minutes Reading Time · 01.04.2021

Dynamic programming is a method that solves complex problems by breaking them into recurring sub-problems, storing the results of those sub-problems, and reusing the stored results to build up the solution to the main problem. Because it keeps the solutions of sub-problems, it avoids recomputing the same operations and so reduces the cost of the code. For example, consider calculating a factorial. The factorial of a number equals the product of all numbers from that number down to 1, which is the same as multiplying the number by the factorial of the number before it. Solved with dynamic programming, keeping the factorial of each number starting from 1 lets us obtain each next result with a single multiplication of two values. Dynamic programming has two methods, which can be described as top-down and bottom-up. Starting from a brute-force recursive solution, we try to find the most efficient solution by reworking it with memoization (top-down) or tabulation (bottom-up, using a table).
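To make the factorial example concrete, here is a minimal Python sketch (the function name `factorials_up_to` is just illustrative): each table entry is obtained from the previously stored one with a single multiplication, so nothing is recomputed.

```python
def factorials_up_to(n):
    """Return a table where table[k] == k! for k in 0..n."""
    table = [1] * (n + 1)            # 0! = 1 is the base case
    for k in range(1, n + 1):
        table[k] = table[k - 1] * k  # reuse the stored (k-1)!
    return table

table = factorials_up_to(5)
print(table[5])  # 120
```

Asking for the next factorial only requires one more multiplication against the last stored entry, which is exactly the saving the text describes.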

About Dynamic Programming Methods

The dynamic programming technique in which the solutions of small sub-problems are cached as they are computed, so that known results can be reused to facilitate the overall solution, is called memoization; it is a top-down approach. The technique in which the small pieces of the solution are filled into a table and worked through from the smallest cases upward is called tabulation; it is a bottom-up approach. Together they let us bring different interpretations to different question types and make our job much easier.
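The two methods can be sketched side by side on the Fibonacci numbers (a standard illustration, not from the text above; `lru_cache` from Python's standard library handles the caching for the top-down version):

```python
from functools import lru_cache

# Top-down (memoization): start from n and recurse,
# caching each sub-result the first time it is computed.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n <= 2:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up (tabulation): fill the table from the
# smallest cases upward until position n is reached.
def fib_tab(n):
    table = [0, 1, 1] + [0] * (n - 2)
    for i in range(3, n + 1):
        table[i] = table[i - 1] + table[i - 2]
    return table[n]
```

Both compute the same values; the difference is only in the direction the sub-problems are visited.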

How Is A Dynamic Approach Formed?

Setting up a dynamic programming solution takes a few steps. These can be summarized as follows:

  • As the basic logic of dynamic programming suggests, first break the problem into small pieces (sub-problems).

  • Work out solutions to the sub-problems, and in doing so identify the decision to be made at each step.

  • Interpret the main problem by repeating the steps above: reduce it into small pieces and express the decisions in mathematical form (a recurrence). Consider how the main problem can be solved from these data.

  • Choose between memoization and tabulation.

  • Put all of this into code.

Dynamic programming established along these lines may differ across questions, approaches, and cases on this roadmap. I will try to explain some well-known dynamic programming examples in the next section.

Well-Known Dynamic Programming Solutions

Let's start with a topic where the optimum solution is obtained thanks to dynamic programming, and the solution is then formalized:

Matrix chain multiplication. Given a chain of matrices to multiply, we need to calculate the parenthesization, i.e. decide the order in which the pairwise multiplications are carried out. Matrices can only be multiplied in pairs, and a poor choice of operation order can cause far more scalar multiplications, slowing the computation considerably. With dynamic programming we reach the optimal parenthesization by filling in a table of sub-chain costs; checking the valid and invalid split points at each step is all the repeated solution requires. It is also worth thinking a little about memoization here: although tabulation works bottom-up, memoized solutions are usually more convenient where a top-down strategy is more natural.

Our second example is finding the longest common subsequence. I can almost hear you asking what that is. Imagine two arrays of letters, and we are asked to find the longest series of elements they share, in order. We also need an efficient solution, and dynamic programming serves us well here. Using the general solution approach we keep in the dynamic solution, we can find the longest such series. Our general solution scheme for this family of dynamic programming problems is as follows:

C[i,j] =
  0                                      invalid cases (i = 0 or j = 0)
  C[i-1,j]                               can't take
  Max/Min(Vi + C[i-1,j-Wi], C[i-1,j])    decision

The "can't take" case is the state in which the previous value in the table must be used: we inevitably continue from the previous table value without making a choice, because the current amount, money, or load allowance (depending on the question) is not enough to take the item. The "decision" case compares the previous table value with the result of taking the current item: the larger is kept if the maximum is wanted, and the smaller if the minimum is wanted.
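The matrix-chain parenthesization mentioned earlier follows the same table-filling idea, only over sub-chains instead of prefixes. A minimal sketch (the function name is illustrative), where matrix i has shape dims[i-1] x dims[i]:

```python
def matrix_chain_cost(dims):
    """Minimum number of scalar multiplications for a matrix chain."""
    n = len(dims) - 1  # number of matrices in the chain
    # cost[i][j]: best cost of multiplying matrices i..j (1-indexed);
    # a chain of length 1 costs 0, so the table starts at 0.
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):            # grow the sub-chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            # try every split point k and keep the cheapest parenthesization
            cost[i][j] = min(
                cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                for k in range(i, j)
            )
    return cost[1][n]

# Shapes 10x30, 30x5, 5x60: ((AB)C) costs 4500, (A(BC)) costs 27000.
print(matrix_chain_cost([10, 30, 5, 60]))  # 4500
```

The wrong parenthesization here is six times more expensive, which is exactly why the table fill pays off.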

We can integrate this approach into our question, for example, for series similarity,

C[i,j] =
  0                          invalid cases (i = 0 or j = 0)
  C[i-1,j-1] + 1             if the list items are the same
  Max(C[i,j-1], C[i-1,j])    decision (items differ)

Thus our table is created: the vertical axis holds the elements of one list and the horizontal axis the elements of the other, and the table is filled in by comparing them.
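That recurrence translates almost line for line into code. A minimal Python sketch (assuming the inputs are strings or lists):

```python
def lcs_length(a, b):
    """Length of the longest common subsequence of sequences a and b."""
    rows, cols = len(a), len(b)
    # C[i][j]: LCS length of a[:i] and b[:j];
    # row 0 and column 0 are the invalid (empty-prefix) cases and stay 0.
    C = [[0] * (cols + 1) for _ in range(rows + 1)]
    for i in range(1, rows + 1):
        for j in range(1, cols + 1):
            if a[i - 1] == b[j - 1]:
                C[i][j] = C[i - 1][j - 1] + 1            # items match
            else:
                C[i][j] = max(C[i][j - 1], C[i - 1][j])  # decision
    return C[rows][cols]

print(lcs_length("ABCBDAB", "BDCABA"))  # 4, e.g. the subsequence "BCBA"
```

Each cell is computed once from at most three neighbors, so the whole table costs O(rows x cols).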

And of course, we cannot go without mentioning the famous Knapsack problem. The problem is modeled as choosing among a set of available objects to form the most valuable load in a bag of limited capacity. Each object has a volume that takes up space in the bag and a value that it returns to us. There are two variants of this question: the 0-1 (binary) version, which is solved with dynamic programming, and the fractional version, solved with a greedy approach, where objects can be split and taken partially. In the 0-1 version, 0 represents not taking the object and 1 represents taking it. The dynamic programming solution can be stated as follows:

C[i,W] =
  0                                    invalid cases (i = 0 or W = 0)
  C[i-1,W]                             if the product weight exceeds the remaining allowance
  Max(Vi + C[i-1,W-Wi], C[i-1,W])      otherwise, to compare the benefits

A value of 0 is entered for the invalid cases, that is, when there are no objects (i = 0) or no capacity (W = 0). When the weight of the product exceeds the remaining weight allowance in the bag, we must carry over the previous value stored in the table, so C[i-1,W] is returned. In the decision part, the incoming object fits within the remaining weight limit, but should it be taken? Is it worthwhile? This distinction is made by adding the product's value (Vi) to the best result without it and comparing the two options; the condition yielding the maximum value wins.
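The 0-1 recurrence above can be sketched directly as a table fill in Python (the names `values`, `weights`, and `capacity` are illustrative):

```python
def knapsack(values, weights, capacity):
    """0-1 knapsack: maximum total value within the weight capacity."""
    n = len(values)
    # C[i][W]: best value using the first i objects with allowance W;
    # the invalid cases (i = 0 or W = 0) stay 0.
    C = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        Vi, Wi = values[i - 1], weights[i - 1]
        for W in range(capacity + 1):
            if Wi > W:                 # product does not fit: can't take
                C[i][W] = C[i - 1][W]
            else:                      # decision: take it or leave it
                C[i][W] = max(Vi + C[i - 1][W - Wi], C[i - 1][W])
    return C[n][capacity]

print(knapsack([60, 100, 120], [10, 20, 30], 50))  # 220
```

In the example the best load takes the second and third objects (weights 20 + 30 = 50, values 100 + 120 = 220).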

Visual example representing the Knapsack problem.

Comparing the Divide and Conquer Approach with Dynamic Programming

The Divide and Conquer approach could be the subject of another article in detail, but briefly it can be summarized as follows:

Divide and Conquer is an approach based on solving problems by splitting them into smaller pieces and handling each one. A recursive approach is used: the problem is broken into relatively easy, small sub-problems, each sub-problem is solved recursively, and the partial solutions are then combined to obtain the desired overall solution.

In Dynamic Programming, efficiency is also taken into account: each small problem is solved only once. The problem is still thought of in small parts, as in Divide and Conquer, so the approach is similar. However, the outputs of the sub-problems are kept in a table and reused. In code, this table usually takes the form of an array, list, or hash table. Small solutions are thus built up once, and the best choice for the requested result can be read off cheaply; with a memoized interpretation, much faster results can be obtained. For example, take the famous Fibonacci numbers. For Divide and Conquer, after initializing the first two values (F1 = F2 = 1), we can implement the recurrence directly. Analyzing this naive algorithm gives the complexity recurrence T(n) = T(n-1) + T(n-2) + O(1).

So in Divide and Conquer, the complexity analysis is:

T(n) = T(n-1) + T(n-2) + O(1) >= 2T(n-2) + O(1), which is interpreted as exponential time, meaning it runs very slowly.

As extra information, in such a Divide and Conquer recurrence the 2T(n-2) part represents the complexity of the divided sub-problems, while the O(1) part represents the time to conquer, i.e. combine the partial results.

For Dynamic Programming with the memoized interpretation, a single if-else structure around a cache lets us calculate the Fibonacci series, since each value is computed at most once. Our solution's complexity analysis becomes:

T(n) = T(n-1) + O(1), which is equal to O(n) for this case: linear time, meaning it runs very fast.
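The difference can be checked empirically by counting calls. A small sketch comparing the naive Divide and Conquer Fibonacci with the memoized one:

```python
from functools import lru_cache

calls = 0  # how many times the naive version is entered

def fib_naive(n):
    """Divide and Conquer: T(n) = T(n-1) + T(n-2) + O(1), exponential."""
    global calls
    calls += 1
    if n <= 2:
        return 1
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    """Memoized DP: each sub-problem is solved once, so T(n) is O(n)."""
    if n <= 2:
        return 1
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_naive(20), calls)          # 6765 computed with 13529 calls
fib_memo(20)
print(fib_memo.cache_info().misses)  # only 20 distinct sub-problems solved
```

Thousands of redundant calls versus twenty: the table (here, `lru_cache`'s internal dictionary) is what turns the exponential recurrence into a linear one.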

To summarize, Divide and Conquer does more work on the sub-problems and therefore consumes more time; its sub-problem solutions are also independent of each other. In Dynamic Programming, by contrast, the sub-problems overlap and depend on each other; each is solved only once, recorded in the table, and reused in the solution, which increases efficiency.
