Dynamic Programming: The Ultimate Cheatsheet

1. Detailed Explanation
Dynamic programming (DP) is a powerful algorithmic technique used to solve optimization problems by breaking them down into smaller, overlapping subproblems, solving each subproblem only once, and storing their solutions to avoid redundant computations. This avoids the exponential time complexity often associated with naive recursive approaches by exploiting the optimal substructure and overlapping subproblems properties.
Why is it important? DP drastically improves efficiency for problems exhibiting overlapping subproblems. Instead of recalculating solutions repeatedly, DP stores and reuses them, leading to polynomial-time solutions where brute-force approaches would take exponential time.
Core Concepts:
- Optimal Substructure: An optimal solution to the problem can be constructed from optimal solutions to its subproblems.
- Overlapping Subproblems: The same subproblems are encountered multiple times during a naive recursive solution.
- Memoization (Top-Down): Storing the results of expensive function calls and returning the cached result when the same inputs occur again.
- Tabulation (Bottom-Up): Building a table (usually an array or matrix) to store solutions to subproblems, starting from the base cases and working up to the final solution.
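To see why overlapping subproblems matter, consider the Fibonacci numbers: a naive recursion recomputes the same values exponentially many times, while memoization caches each result once. A minimal sketch (this example is illustrative and not part of the original text):

```python
from functools import lru_cache

def fib_naive(n):
    # Recomputes the same subproblems repeatedly: exponential number of calls.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Each fib(k) is computed once and cached afterwards: O(n) calls total.
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(30))  # 832040
```

Both functions return the same values, but `fib_memo(30)` finishes in microseconds while `fib_naive(30)` already makes over a million recursive calls.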
2. When to Use Dynamic Programming (and When Not To)
Use DP when:
- The problem can be broken down into smaller, overlapping subproblems.
- The problem exhibits optimal substructure (optimal solution is composed of optimal sub-solutions).
- The number of subproblems is relatively small (polynomial in the input size).
Don’t use DP when:
- The problem doesn’t exhibit optimal substructure.
- Subproblems are independent (no overlapping).
- The number of subproblems is exponentially large. In such cases, other techniques like greedy algorithms or branch and bound might be more appropriate.
3. Core Algorithm / Data Structure Template
General Approach (Tabulation):
- Identify the subproblems: Determine how the problem can be broken down into smaller, overlapping subproblems.
- Define the base cases: Determine the solutions for the smallest subproblems.
- Create a table (usually a multi-dimensional array): This table will store the solutions to the subproblems.
- Iteratively fill the table: Start from the base cases and work your way up to the final solution, using the solutions of smaller subproblems to compute the solutions of larger ones.
- Retrieve the final solution: The solution to the original problem will be stored in a specific cell of the table.
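The tabulation steps above can be sketched on a small, self-contained example: minimum coins needed to reach a target amount (the problem choice and function name here are our illustration, not part of the cheatsheet's main example):

```python
def min_coins(coins, amount):
    # Steps 1-2: subproblem dp[a] = fewest coins summing to a; base case dp[0] = 0.
    INF = float("inf")
    dp = [0] + [INF] * amount              # Step 3: the table.
    for a in range(1, amount + 1):         # Step 4: fill from small amounts up.
        for c in coins:
            if c <= a and dp[a - c] + 1 < dp[a]:
                dp[a] = dp[a - c] + 1
    # Step 5: the answer for the original problem sits in the last cell.
    return dp[amount] if dp[amount] != INF else -1

print(min_coins([1, 5, 10, 25], 63))  # 6  (25 + 25 + 10 + 1 + 1 + 1)
```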
General Approach (Memoization):
- Define a recursive function: This function will solve the problem recursively, but with a cache to store the results of subproblems.
- Check the cache: Before making a recursive call, check if the solution for the current subproblem is already in the cache. If it is, return the cached solution.
- Make recursive calls: If the solution is not in the cache, make recursive calls to solve the subproblems.
- Store the result in the cache: After solving a subproblem, store its solution in the cache.
- Return the final solution: The final solution will be the result of the initial recursive call.
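The same recipe, sketched for the 0/1 knapsack problem in memoized (top-down) form; the function name `knapsack_memo` and the use of `functools.lru_cache` as the cache are our choices for illustration:

```python
from functools import lru_cache

def knapsack_memo(capacity, weights, values):
    @lru_cache(maxsize=None)          # the cache: stores each (i, w) result once
    def best(i, w):
        # best(i, w): max value achievable with items i.. and remaining capacity w.
        if i == len(values) or w == 0:            # base cases
            return 0
        skip = best(i + 1, w)                     # recursive call (checked in cache first)
        if weights[i] <= w:
            take = values[i] + best(i + 1, w - weights[i])
            return max(skip, take)
        return skip

    return best(0, capacity)         # final solution = initial recursive call

print(knapsack_memo(50, (10, 20, 30), (60, 100, 120)))  # 220
```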
4. Code Implementations (Python, Java, C++)
This example demonstrates the classic 0/1 Knapsack problem using tabulation.
Python

def knapsack(capacity, weights, values):
    n = len(values)
    dp = [[0 for _ in range(capacity + 1)] for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(1, capacity + 1):
            if weights[i - 1] <= w:
                dp[i][w] = max(dp[i - 1][w],
                               values[i - 1] + dp[i - 1][w - weights[i - 1]])
            else:
                dp[i][w] = dp[i - 1][w]
    return dp[n][capacity]

values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50
max_value = knapsack(capacity, weights, values)
print(f"Maximum value: {max_value}")

Java

public class Knapsack {
    public static int knapsack(int capacity, int[] weights, int[] values) {
        int n = values.length;
        int[][] dp = new int[n + 1][capacity + 1];
        for (int i = 1; i <= n; i++) {
            for (int w = 1; w <= capacity; w++) {
                if (weights[i - 1] <= w) {
                    dp[i][w] = Math.max(dp[i - 1][w],
                            values[i - 1] + dp[i - 1][w - weights[i - 1]]);
                } else {
                    dp[i][w] = dp[i - 1][w];
                }
            }
        }
        return dp[n][capacity];
    }

    public static void main(String[] args) {
        int[] values = {60, 100, 120};
        int[] weights = {10, 20, 30};
        int capacity = 50;
        int maxValue = knapsack(capacity, weights, values);
        System.out.println("Maximum value: " + maxValue);
    }
}

C++

#include <iostream>
#include <vector>
#include <algorithm>

using namespace std;

int knapsack(int capacity, const vector<int>& weights, const vector<int>& values) {
    int n = values.size();
    vector<vector<int>> dp(n + 1, vector<int>(capacity + 1, 0));
    for (int i = 1; i <= n; ++i) {
        for (int w = 1; w <= capacity; ++w) {
            if (weights[i - 1] <= w) {
                dp[i][w] = max(dp[i - 1][w],
                               values[i - 1] + dp[i - 1][w - weights[i - 1]]);
            } else {
                dp[i][w] = dp[i - 1][w];
            }
        }
    }
    return dp[n][capacity];
}

int main() {
    vector<int> values = {60, 100, 120};
    vector<int> weights = {10, 20, 30};
    int capacity = 50;
    int maxValue = knapsack(capacity, weights, values);
    cout << "Maximum value: " << maxValue << endl;
    return 0;
}

5. Complexity Analysis
The complexity of dynamic programming varies greatly depending on the specific problem. However, for the 0/1 Knapsack problem example above:
| Algorithm | Time Complexity | Space Complexity |
|---|---|---|
| Tabulation | O(n*W) | O(n*W) |
| Memoization | O(n*W) | O(n*W) |
Where ‘n’ is the number of items and ‘W’ is the knapsack capacity. Both approaches have the same time and space complexity in this case. The space complexity can sometimes be optimized depending on the specific problem.
6. Pro Tips, Tricks, and Common Pitfalls
- Space Optimization: For some DP problems, you can reduce space complexity from O(n*W) to O(W) by only storing the previous row or column of the DP table.
- State Representation: Choosing the right state representation is crucial. A well-defined state should uniquely identify a subproblem.
- Base Cases: Carefully define and handle base cases. Incorrect base cases can lead to wrong results.
- Order of Iteration: The order in which you iterate through the DP table matters. Ensure that you’re always using previously computed values.
- Debugging: Print intermediate results to debug your DP solution. Tracing the values in the DP table can help identify errors.
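The space-optimization tip applies directly to the knapsack code above: row i of the table depends only on row i-1, so a single one-dimensional array suffices, provided the capacity loop runs downward. A sketch (the function name `knapsack_1d` is ours):

```python
def knapsack_1d(capacity, weights, values):
    dp = [0] * (capacity + 1)  # dp[w]: best value achievable with capacity w so far
    for i in range(len(values)):
        # Iterate w downward so dp[w - weights[i]] still holds the previous
        # item's row; an upward loop would let the same item be taken twice.
        for w in range(capacity, weights[i] - 1, -1):
            dp[w] = max(dp[w], values[i] + dp[w - weights[i]])
    return dp[capacity]

print(knapsack_1d(50, [10, 20, 30], [60, 100, 120]))  # 220, same as the 2-D version
```

This cuts the space complexity from O(n*W) to O(W) without changing the O(n*W) running time.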
7. Classic Problem Examples
Example: Regular Expression Matching
High-Level Approach:
Regular expression matching can be solved using dynamic programming by creating a DP table where dp[i][j] represents whether the first i characters of the string s match the first j characters of the pattern p. The base cases are dp[0][0] = true (empty string matches empty pattern) and dp[i][0] = false for i > 0 (a non-empty string doesn't match the empty pattern); note that dp[0][j] can still be true for j > 0 when the pattern itself can match the empty string, as with "a*". The recursive relation considers different scenarios:
- If p[j-1] is a literal character, it must match s[i-1].
- If p[j-1] is '.', it matches any character.
- If p[j-1] is '*', it represents zero or more occurrences of the preceding character. This requires considering both cases (zero occurrences and one or more occurrences).
The final result is stored in dp[s.length()][p.length()]. This approach avoids redundant calculations by storing and reusing the results of subproblems. Both tabulation and memoization can be effectively used for this problem.
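A tabulated sketch of this recurrence, supporting only the '.' and '*' operators described above (the function name `is_match` is our choice):

```python
def is_match(s, p):
    m, n = len(s), len(p)
    # dp[i][j]: does s[:i] match p[:j]?
    dp = [[False] * (n + 1) for _ in range(m + 1)]
    dp[0][0] = True  # empty string matches empty pattern
    # Base cases for the empty string: patterns like "a*b*" can match it.
    for j in range(1, n + 1):
        if p[j - 1] == '*':
            dp[0][j] = dp[0][j - 2]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if p[j - 1] == '*':
                # Zero occurrences of p[j-2], or one more occurrence of it.
                dp[i][j] = dp[i][j - 2] or (
                    (p[j - 2] == '.' or p[j - 2] == s[i - 1]) and dp[i - 1][j]
                )
            else:
                # Literal character or '.': consume one character from each.
                dp[i][j] = (p[j - 1] == '.' or p[j - 1] == s[i - 1]) and dp[i - 1][j - 1]
    return dp[m][n]

print(is_match("aab", "c*a*b"))  # True: "c*" matches zero chars, "a*" matches "aa"
```

Time and space are both O(m*n) for string length m and pattern length n.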