In this topic, you will learn about dynamic programming. You'll explore key problem-solving techniques and how to implement iterative and recursive approaches.
When to use dynamic programming
If you solve a problem with an iterative or recursive approach, dynamic programming can be a valuable addition because it optimizes the solution: it saves the result of each subproblem and reuses it in later computations instead of recalculating it.
For instance, consider breaking down a complex problem into its subproblems. These subproblems are further divided into smaller ones. As you solve the problem, you might notice many repeated calculations. In such situations, to cut down on the time spent on repeated calculations with the same outcomes, you can apply the dynamic programming (DP) approach. This is an example of overlapping subproblems.
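To see overlapping subproblems concretely, here is a minimal Python sketch (not part of the original lesson; names are illustrative) that counts how often a naive recursive Fibonacci function is called:

```python
# Naive recursive Fibonacci: the same subproblems are recomputed many times.
call_count = 0

def fib(n):
    global call_count
    call_count += 1        # count every call, including repeated ones
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

result = fib(10)
print(result, call_count)  # prints "55 177"
```

Computing fib(10) triggers 177 calls even though there are only 11 distinct subproblems; removing exactly this redundancy is what dynamic programming is for.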
Dynamic programming vs recursion
To prevent any kind of misunderstanding, let's define the difference between two terms: dynamic programming and recursion. The table below provides a concise overview of the key differences, including their purpose, approach, solution storage, efficiency, and examples of problems that each technique can solve.
| Feature | Dynamic Programming (DP) | Recursion |
|---|---|---|
| Purpose | Solves optimization problems by breaking them into simpler subproblems and reusing solutions. | Solves problems by breaking them into smaller, similar subproblems and solving each recursively. |
| Approach | Both top-down (with memoization) and bottom-up. | Typically top-down. |
| Solution Storage | Solutions to subproblems are stored in a table or array for efficient lookup and reuse. | Solutions are not explicitly stored; each recursive call recalculates solutions. |
| Efficiency | Typically more efficient, as it avoids redundant computation by storing solutions to subproblems. | May be less efficient due to the repeated computation of solutions and function call overhead. |
| Base Cases | Base cases are explicitly defined and used to terminate recursion (in the top-down approach). | Base cases are explicitly defined and used to terminate recursion; recursion may continue indefinitely without proper termination conditions. |
| Example Problems | Fibonacci sequence, shortest path problems, knapsack problems. | Factorial calculation, Fibonacci sequence, graph and tree traversal algorithms like Depth First Search (DFS). |
Problem-solving techniques: memoization and tabulation
When we talk about dynamic programming, our aim is to save solutions and avoid performing the same calculations more than once. There are two main techniques, which differ in direction: memoization is a top-down approach, while tabulation is a bottom-up approach.
For a clearer picture, consider this everyday example. Imagine that you want to learn a new language.
Using the top-down approach:
You begin by trying to translate a novel in a language you don't know (starting with a complex problem).
You look up new words in the dictionary one by one (solving smaller parts of the problem).
When you come across different cases, verb forms, or sentence structures, you figure out the rules (memoizing them).
As you learn each new word, phrase, and rule, the process of translation becomes easier.
By the end, you've learned the language by working through the book.
Using the bottom-up approach:
You start with the basics, like the alphabet (the simplest part of the problem).
Then you learn simple words and phrases, such as greetings and polite expressions (building upon simple parts).
Next, you learn to make sentences about the past, present, and future.
Practicing with a teacher or classmates, you gradually gain the confidence to speak outside the classroom (addressing the complete problem).
The language becomes easier as you practice because you're building upon the basic skills you've already learned.
Both the top-down and the bottom-up approaches can help you reach your goal, though they take different paths and may vary in their space and time requirements.
Now we are ready to put the theory into practice.
Recursive approach with memoization
Let's start with the top-down approach called memoization. This method involves storing the results of function calls and reusing them when the same inputs occur again. You can do this by keeping a lookup table. You typically use a map or an array to store the computed results. When a function is called with certain inputs, it first checks if the result is in the lookup table. If it is, the function returns the stored result, which saves time by avoiding redundant calculations.
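As a small illustration of such a lookup table, here is a hedged Python sketch using a dictionary (memo) as the storage; the names are illustrative and not from the lesson:

```python
# Memoized Fibonacci: a dictionary serves as the lookup table.
memo = {}

def fib_memo(n):
    if n in memo:       # result already computed: reuse it
        return memo[n]
    if n < 2:           # base cases
        return n
    memo[n] = fib_memo(n - 1) + fib_memo(n - 2)  # store before returning
    return memo[n]

print(fib_memo(50))  # prints 12586269025, with only one computation per n
```

Without the table, fib_memo(50) would take on the order of 2^50 calls; with it, each value from 0 to 50 is computed exactly once.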
For example, consider the well-known coin change problem. You have an integer array of coins of size N, representing different coin denominations. Your goal is to count the number of distinct ways to make a certain value sum. Imagine you want to find how many ways there are to make the sum of 10 using coins of denominations 1, 2, and 3.
To tackle this problem, you should examine all possible combinations of coins to get the sum you want. Each time you go deeper into the recursion, you have two options:
1. Include the current coin: call the function again with the sum decreased by the value of the current coin.
2. Skip the current coin, which means keeping the sum unchanged and calling the function with one less coin to choose from.
Let's first solve the problem without using memoization.
Coin change without memoization
function countRecursive(coins, availableCoinsCount, targetSum){
// if the targetSum is 0, then there is only one solution.
if (targetSum == 0)
return 1
// if there are no coins left or the targetSum is negative,
// there are no ways to make that sum, return 0.
if (availableCoinsCount == 0 || targetSum < 0)
return 0
// recursive call, including current coin
currentCoin = coins[availableCoinsCount - 1]
subtractedTargetSum = targetSum - currentCoin
sumIncludingCurrentCoin = countRecursive(coins, availableCoinsCount, subtractedTargetSum)
// recursive call, excluding current coin
sumExcludingCurrentCoin = countRecursive(coins, availableCoinsCount - 1, targetSum)
return sumIncludingCurrentCoin + sumExcludingCurrentCoin
}
// main function to call countRecursive function
function main() {
targetSum = input() // initialize the desired sum (for example, 10)
coins[] = input() // array of coin values (for example, 1,2,3)
availableCoinsCount = coins.length // initialize the number of coin types
result = countRecursive(coins, availableCoinsCount, targetSum)
print("Coin change without Memoization")
print("Available coins: " + coins)
print("Number of ways to exchange " + targetSum + " is: " + result)
}
The output, if targetSum is 10 and coins are 1, 2, 3:
Coin change without Memoization
Available coins: 1,2,3
Number of ways to exchange 10 is 14
Complexity analysis without memoization
For the current solution, you have:
time complexity: O(2^S)
auxiliary space: O(S)
where S is the target sum.
The time complexity of this recursive solution is exponential because there are two additional calls with each recursive step: one includes the current coin, and the other excludes it. The process creates a recursion tree that can go up to S levels deep.
Since each level of the tree has twice as many calls as the one above it, the number of calls roughly doubles at each level, leading to exponential growth.
The memory complexity here is linear, due to the maximum depth of the call stack during recursion. In the worst case, you might use the smallest denomination, like 1, which means the deepest recursive call could be S levels deep.
As you are aware, exponential complexity can hinder performance, especially with large input values. Fortunately, you can address the coin problem using dynamic programming techniques.
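Before moving on, a direct Python translation of the recursive pseudocode above (a sketch with hypothetical snake_case names) can confirm the count of 14 ways:

```python
# Count the ways to make target_sum from the given denominations,
# mirroring the recursive pseudocode: include or exclude the last coin.
def count_recursive(coins, n, target_sum):
    if target_sum == 0:
        return 1                    # exactly one way: take no more coins
    if n == 0 or target_sum < 0:
        return 0                    # no coins left, or the sum was overshot
    # branch 1: include coin n-1; branch 2: exclude it
    return (count_recursive(coins, n, target_sum - coins[n - 1])
            + count_recursive(coins, n - 1, target_sum))

print(count_recursive([1, 2, 3], 3, 10))  # prints 14
```

Running it with a sum of 5 instead gives 5 ways, which matches the smaller example used later in the comparison table.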
Coin change with memoization
Now let's add a tiny detail that allows us to improve performance. The detail is the creation of an array to cache the results of already calculated subproblems.
First, in the main function, we create a memoizationTable and fill it with -1. Entry memoizationTable[i][j] will represent the count of unique ways to make the sum j using the first i types of coins. The table with initial values is provided as an argument to the recursive function.
// main function, where we create initial values for recursive function
function main() {
targetSum = input() // initialize the desired sum (for example, 5).
coins[] = input() // array of coin values (for example, 1,2,3)
availableCoinsCount = coins.length // initialize the number of coin types
// initialize a memoization table to store values of calculation
memoizationTable = [availableCoinsCount + 1][targetSum + 1]
// fill the table with initial values
for (i = 0; i < availableCoinsCount + 1; i++) {
for (j = 0; j < targetSum + 1; j++) {
memoizationTable[i][j] = -1
}
}
result = countRecursiveMemo(coins, availableCoinsCount, targetSum, memoizationTable)
print("Coin change with Memoization")
print("Available coins: " + coins)
print("Number of ways to exchange " + targetSum + " is: " + result)
}
In the countRecursiveMemo function, we follow these steps:
Implement a check within the recursive function to identify if a particular state (availableCoinsCount, targetSum) has been computed previously.
If a state has been encountered before, immediately return the stored result for that state to avoid redundant calculations.
// recursive function with memoization table
function countRecursiveMemo(coins, availableCoinsCount, targetSum, memoizationTable){
// if the targetSum is 0, then there is only one solution.
if (targetSum == 0)
return memoizationTable[availableCoinsCount][targetSum] = 1
// if there are no coins left or the targetSum is negative,
// there are no ways to make that sum, return 0.
if (availableCoinsCount == 0 || targetSum < 0)
return 0
// check if the value has already been computed
// if so, return the stored value from the memoization table
if (memoizationTable[availableCoinsCount][targetSum] != -1)
return memoizationTable[availableCoinsCount][targetSum]
// recursive call, including current coin
currentCoin = coins[availableCoinsCount - 1]
subtractedTargetSum = targetSum - currentCoin
sumIncludingCurrentCoin = countRecursiveMemo(coins, availableCoinsCount, subtractedTargetSum, memoizationTable)
// recursive call, excluding current coin
sumExcludingCurrentCoin = countRecursiveMemo(coins, availableCoinsCount - 1, targetSum, memoizationTable)
// put a new value in the table
memoizationTable[availableCoinsCount][targetSum] = sumIncludingCurrentCoin + sumExcludingCurrentCoin
return memoizationTable[availableCoinsCount][targetSum]
}
Complexity analysis with memoization
Let's analyze the time and memory complexity of the previous code.
The time complexity of the memoized solution depends on the number of unique states that the recursive function can reach. In this solution, each state is calculated only once and retrieving it from storage requires constant time. The states are defined by two parameters:
The remaining amount of money, which ranges from 0 up to the target sum S.
The number of coin types considered, which ranges from 0 up to the number of coin types N.
So the question is, how many unique states exist for changing S using N coin types? To get the answer, we multiply N by S.
As a result, we can skip adding constant values, so the time complexity is O(N × S), where N is the number of coin types and S is the target sum.
For memory complexity, we have two components:
Memoization table
We use a table with size (N + 1) × (S + 1).
So, it needs space equal to O(N × S), where N is the number of coin types and S is the target sum.
Depth of recursion stack
As with the solution without memoization, in the worst-case scenario, the depth is equal to S when we use the smallest coin.
The recursion stack requires O(S) space, where S is the target sum.
Therefore, the overall memory complexity includes the memoization table plus the recursion stack: O(N × S) + O(S).
We can skip the O(S) component because the O(N × S) component dominates, and we get a memory complexity of O(N × S).
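To tie the analysis back to code, here is a Python sketch of the memoized approach; as an assumption on my part, it caches states in a dictionary keyed by (n, target_sum) instead of the -1-filled table from the pseudocode:

```python
# Memoized coin-change count: each (n, target_sum) state is computed once.
def count_memo(coins, n, target_sum, table=None):
    if table is None:
        table = {}                        # dictionary plays the role of the table
    if target_sum == 0:
        return 1
    if n == 0 or target_sum < 0:
        return 0
    if (n, target_sum) in table:          # state seen before: reuse it
        return table[(n, target_sum)]
    table[(n, target_sum)] = (count_memo(coins, n, target_sum - coins[n - 1], table)
                              + count_memo(coins, n - 1, target_sum, table))
    return table[(n, target_sum)]

print(count_memo([1, 2, 3], 3, 10))  # prints 14, same answer with O(N * S) work
```

A dictionary avoids preallocating the whole (N + 1) × (S + 1) grid, but the asymptotic bounds are the same as in the analysis above.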
Comparing approaches with and without memoization
We can review the results in the table below:
| | Without Memo | With Memo |
|---|---|---|
| Time complexity | O(2^S) | O(N × S) |
| Memory complexity | O(S) | O(N × S) |
In the second table, two examples are provided. The first example has a target sum of 5 with 3 coin types. The second example has a target sum of 10 with 5 coin types.
| | Without Memo (sum 5) | With Memo (sum 5) | Without Memo (sum 10) | With Memo (sum 10) |
|---|---|---|---|---|
| Time complexity | 2^5 = 32 | 3 × 5 = 15 | 2^10 = 1024 | 5 × 10 = 50 |
| Memory complexity | 5 | 3 × 5 = 15 | 10 | 5 × 10 = 50 |
As you can see, memory usage increased. At the same time, we significantly decreased the time complexity from exponential to polynomial by using the memoization technique.
Iterative approach with tabulation
Tabulation is a bottom-up approach in dynamic programming, where we solve smaller subproblems first and use their solutions to build solutions for bigger subproblems.
Tabulation involves building a table or an array to store solutions. Each entry in the table represents the solution to a specific subproblem, with the final result obtained from the last entry in the table.
Let's illustrate tabulation with the following problem. Starting from 0, you need to find the minimum number of steps to reach a given number N by performing either addition of 1 or multiplication by 2.
To apply dynamic programming here, we would use an array to remember the minimum steps for each number up to N, with each iteration using the results of previous calculations.
Here are the steps for implementing the algorithm:
You start from 0; so the number of steps to achieve it is 0.
For each subsequent number up to N, you figure out the best way to get there. You look at the previous number and add one to its step count.
If the current number is even (like 2, 4, 6, etc.), you also consider the scenario of reaching it from its half, that is, the number divided by 2.
For example, to get to 4 from 2, you could multiply by 2 or add 1 twice. If multiplication takes fewer moves than repeated addition, then multiplying is the best choice. You keep track of the number of moves it takes to reach each number in a list. When you finally reach N, you'll have the answer to the fastest way to get there.
The code for this approach is below:
// function to find the minimum number of steps to reach a target number
function calculateMinimumSteps(targetNumber) {
// array to store the minimum steps for each number
minimumStepsToReach = new array[targetNumber + 1];
// initiate value: it takes 0 steps to reach 0
minimumStepsToReach[0] = 0;
// calculate the minimum steps for each number from 1 to targetNumber
for (currentNumber = 1; currentNumber <= targetNumber; currentNumber++) {
// to reach currentNumber, it requires one more step than the number of steps to reach the previous number
minimumStepsToReach[currentNumber] = minimumStepsToReach[currentNumber - 1] + 1;
// If currentNumber is even, we check if this way takes fewer steps than the current number of steps
if (currentNumber % 2 == 0) {
halfOfCurrent = currentNumber / 2;
minimumStepsToReach[currentNumber] = min(minimumStepsToReach[currentNumber], minimumStepsToReach[halfOfCurrent] + 1);
}
}
// Return the final calculated minimum number of steps to achieve the target number
return minimumStepsToReach[targetNumber];
}
// method to call the calculateMinimumSteps function
function main() {
targetNumber = input() // target number (for example, 10)
minimumSteps = calculateMinimumSteps(targetNumber)
print("Minimum steps to reach " + targetNumber + " is: " + minimumSteps)
}
The output for the input of 10 is:
Minimum steps to reach 10 is 5
In this code, the minimumStepsToReach array is used to store the minimum number of steps required for each number up to targetNumber. By building up the table iteratively, we avoid redundant computations and obtain the solution efficiently.
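The same tabulation idea can be expressed as a self-contained Python sketch (names are illustrative, mirroring the pseudocode above):

```python
# Bottom-up tabulation: minimum steps to reach target from 0 using +1 or *2.
def calculate_minimum_steps(target):
    steps = [0] * (target + 1)       # steps[0] = 0: zero steps to reach 0
    for current in range(1, target + 1):
        # reaching current via +1 from the previous number
        steps[current] = steps[current - 1] + 1
        # if current is even, it can also be reached via *2 from its half
        if current % 2 == 0:
            steps[current] = min(steps[current], steps[current // 2] + 1)
    return steps[target]

print(calculate_minimum_steps(10))  # prints 5
```

Note the bottom-up order: by the time the loop reaches any number, the entries for all smaller numbers are already final, so no recursion is needed.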
Conclusion
You have just explored a small part of dynamic programming. However, it is enough to help you identify which problems you can solve with this technique and which methods to apply. Here's a summary of the main points:
Dynamic programming is a method that reduces future calculations by reusing already found solutions.
You've looked at two common strategies: memoization and tabulation. Memoization saves results to prevent unnecessary recalculations, while tabulation systematically builds solutions from the ground up.
The recursive method seems intuitive but may be inefficient if you don't use memoization; the iterative method, on the other hand, is efficient because it builds the final solution from the solutions of smaller subproblems.