Overview
This page organizes coding problems by their underlying algorithm design technique. Understanding these techniques is crucial for solving complex algorithmic challenges efficiently.

Source code location: /workspace/source/algorithms/design-techniques/

Dynamic Programming (1 Problem)
Dynamic programming solves problems by breaking them into overlapping subproblems and storing solutions to avoid redundant computation. It’s ideal for optimization problems with optimal substructure.

Characteristics of DP Problems
When to use Dynamic Programming:
- Problem has optimal substructure: optimal solution contains optimal solutions to subproblems
- Problem has overlapping subproblems: same subproblems are solved multiple times
- Can be solved by combining solutions to smaller versions of the same problem
Two implementation strategies:
- Memoization (Top-Down): Recursive solution with caching
- Tabulation (Bottom-Up): Iterative solution building a table
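The two strategies can be contrasted on Fibonacci, the canonical overlapping-subproblems example. This is an illustrative sketch, not code from this repository; the function names are hypothetical:

```typescript
// Top-down: recurse, but cache each result so every subproblem is solved once.
function fibMemo(n: number, cache: Map<number, number> = new Map()): number {
  if (n <= 1) return n;
  if (!cache.has(n)) {
    cache.set(n, fibMemo(n - 1, cache) + fibMemo(n - 2, cache));
  }
  return cache.get(n)!;
}

// Bottom-up: build the table from the base cases upward, no recursion.
function fibTab(n: number): number {
  const table = [0, 1];
  for (let i = 2; i <= n; i++) table.push(table[i - 1] + table[i - 2]);
  return table[n];
}

console.log(fibMemo(10), fibTab(10)); // 55 55
```

Both run in O(n); memoization keeps the recursive shape of the recurrence, while tabulation makes the evaluation order explicit.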
Problem 1: Longest Common Subsequence (LCS)
Difficulty: Hard

Problem Statement:
Given two sequences, find the length of the longest subsequence present in both of them. A subsequence is a sequence that appears in the same relative order but is not necessarily contiguous.

Example: the LCS of "ABCBDAB" and "BDCABA" is "BCBA", with length 4.
Approach: Build a 2D table where dp[i][j] represents the LCS length of the first i characters of strA and the first j characters of strB.

Time Complexity: O(m × n), where m and n are the string lengths
Space Complexity: O(m × n)

Algorithm:
- Create an (m+1) × (n+1) matrix initialized to 0
- For each character pair:
  - If the characters match: dp[i][j] = dp[i-1][j-1] + 1
  - If they don’t match: dp[i][j] = max(dp[i-1][j], dp[i][j-1])
- The result is in dp[m][n]
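The table-filling steps above can be sketched as follows. This is an illustrative implementation, not the repository’s test code, and `lcsLength` is a hypothetical name:

```typescript
// Bottom-up LCS: dp[i][j] = LCS length of strA[0..i) and strB[0..j).
function lcsLength(strA: string, strB: string): number {
  const m = strA.length;
  const n = strB.length;
  const dp: number[][] = Array.from({ length: m + 1 }, () =>
    new Array<number>(n + 1).fill(0)
  );
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      if (strA[i - 1] === strB[j - 1]) {
        dp[i][j] = dp[i - 1][j - 1] + 1; // matching chars extend the LCS
      } else {
        dp[i][j] = Math.max(dp[i - 1][j], dp[i][j - 1]); // drop one char
      }
    }
  }
  return dp[m][n];
}

console.log(lcsLength("ABCBDAB", "BDCABA")); // 4
```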
Source: /workspace/source/algorithms/design-techniques/dynamic-programming/dynamic-programming.problems.test.ts:2-41

Learning Resources:
- Dynamic programming is about “trading time for space” by storing computed results
- Start with simpler DP problems (Fibonacci, climbing stairs) before tackling LCS
- Practice identifying the recurrence relation; this is the key to DP solutions
Divide and Conquer (2 Problems)
Divide and conquer breaks problems into independent subproblems, solves them recursively, and combines the results. Unlike DP, the subproblems don’t overlap.

Characteristics of D&C Problems
When to use Divide and Conquer:
- Problem can be broken into independent subproblems
- Subproblems are of the same type as original problem
- Solutions to subproblems can be combined to solve the original
- Base case is simple enough to solve directly
The three-step process:
- Divide: Break the problem into smaller subproblems
- Conquer: Solve the subproblems recursively
- Combine: Merge the solutions to get the final answer
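Merge sort is the textbook instance of these three steps. A minimal sketch (not the repository’s code):

```typescript
function mergeSort(arr: number[]): number[] {
  if (arr.length <= 1) return arr; // base case: trivially sorted
  const mid = Math.floor(arr.length / 2);
  // Divide into halves; Conquer each half recursively.
  const left = mergeSort(arr.slice(0, mid));
  const right = mergeSort(arr.slice(mid));
  // Combine: merge the two sorted halves.
  const merged: number[] = [];
  let i = 0, j = 0;
  while (i < left.length && j < right.length) {
    merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
  }
  return merged.concat(left.slice(i), right.slice(j));
}

console.log(mergeSort([5, 2, 4, 6, 1, 3])); // [1, 2, 3, 4, 5, 6]
```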
Problem 1: Maximum Subarray Sum
Difficulty: Medium-Hard

Problem Statement:
Find the contiguous subarray within a one-dimensional array of numbers that has the largest sum.

Example: for [-2, 1, -3, 4, -1, 2, 1, -5, 4], the maximum subarray is [4, -1, 2, 1] with sum 6.
Approach: At each division, the maximum sum is either in the left half, in the right half, or crosses the middle.

Time Complexity: O(n log n)
Space Complexity: O(log n) for the recursion stack

Algorithm:
- Divide array into left and right halves
- Recursively find max sum in left half
- Recursively find max sum in right half
- Find max sum crossing the middle
- Return maximum of the three
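The five steps above can be sketched as follows (an illustrative implementation, not the repository’s test code; it assumes a non-empty array):

```typescript
// Best sum of a subarray that crosses the middle: extend left from mid,
// extend right from mid+1, and add the two best extensions.
function maxCrossingSum(arr: number[], lo: number, mid: number, hi: number): number {
  let leftBest = -Infinity, sum = 0;
  for (let i = mid; i >= lo; i--) { sum += arr[i]; leftBest = Math.max(leftBest, sum); }
  let rightBest = -Infinity;
  sum = 0;
  for (let i = mid + 1; i <= hi; i++) { sum += arr[i]; rightBest = Math.max(rightBest, sum); }
  return leftBest + rightBest;
}

function maxSubarraySum(arr: number[], lo = 0, hi = arr.length - 1): number {
  if (arr.length === 0) throw new Error("array must be non-empty");
  if (lo === hi) return arr[lo]; // base case: single element
  const mid = Math.floor((lo + hi) / 2);
  return Math.max(
    maxSubarraySum(arr, lo, mid),     // entirely in left half
    maxSubarraySum(arr, mid + 1, hi), // entirely in right half
    maxCrossingSum(arr, lo, mid, hi)  // crossing the middle
  );
}

console.log(maxSubarraySum([-2, 1, -3, 4, -1, 2, 1, -5, 4])); // 6
```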
Source: /workspace/source/algorithms/design-techniques/divide-and-conquer/divide-and-conquer.problems.test.ts:3-53

Problem 2: Inversion Count
Difficulty: Hard

Problem Statement:
Count the number of inversions in an array. An inversion is a pair of indices (i, j) where i < j but arr[i] > arr[j]. The count indicates how far the array is from being sorted.

Example: [2, 4, 1, 3, 5] has 3 inversions: (2, 1), (4, 1), and (4, 3).
Approach: Count inversions while performing merge sort.

Time Complexity: O(n log n)
Space Complexity: O(n)

Key Insight: When merging two sorted subarrays, if an element from the right subarray is placed before the remaining elements of the left subarray, it forms an inversion with each of those remaining left elements.

Implementation:
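A minimal sketch of the modified merge sort (illustrative, not the repository’s test code):

```typescript
function countInversions(arr: number[]): number {
  const work = [...arr]; // don't mutate the caller's array

  // Sort work[lo..hi) and return the number of inversions inside it.
  function sortAndCount(lo: number, hi: number): number {
    if (hi - lo < 2) return 0; // 0 or 1 elements: no inversions
    const mid = Math.floor((lo + hi) / 2);
    let count = sortAndCount(lo, mid) + sortAndCount(mid, hi);
    // Merge the sorted halves, counting cross inversions.
    const merged: number[] = [];
    let i = lo, j = mid;
    while (i < mid && j < hi) {
      if (work[i] <= work[j]) {
        merged.push(work[i++]);
      } else {
        count += mid - i; // work[j] inverts with every remaining left element
        merged.push(work[j++]);
      }
    }
    while (i < mid) merged.push(work[i++]);
    while (j < hi) merged.push(work[j++]);
    for (let k = 0; k < merged.length; k++) work[lo + k] = merged[k];
    return count;
  }

  return sortAndCount(0, arr.length);
}

console.log(countInversions([2, 4, 1, 3, 5])); // 3
```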
Source: /workspace/source/algorithms/design-techniques/divide-and-conquer/divide-and-conquer.problems.test.ts:55-116

D&C vs DP: Divide and conquer splits a problem into independent subproblems (like splitting an array in half), while dynamic programming handles overlapping subproblems (like computing Fibonacci numbers, where F(n) needs F(n-1) and F(n-2), and F(n-1) also needs F(n-2)).
Greedy Algorithms (1 Problem)
Greedy algorithms make locally optimal choices at each step, hoping to reach a global optimum. They’re fast but don’t always guarantee the optimal solution.

Characteristics of Greedy Problems
When to use Greedy Algorithms:
- Problem has greedy choice property: local optimum leads to global optimum
- Problem has optimal substructure: optimal solution contains optimal solutions to subproblems
- No need to reconsider previous choices
Greedy vs DP:
- Greedy: Make the best choice now, never look back
- DP: Consider all choices, pick the best based on future implications
Problem 1: Minimum Coin Change
Difficulty: Easy-Medium

Problem Statement:
Given a value V and an infinite supply of coins with denominations [c1, c2, …, cn], find the minimum number of coins needed to make change for V.

Example: with denominations [25, 10, 5, 1] and V = 63, the greedy answer uses 6 coins: 25 + 25 + 10 + 1 + 1 + 1.
Approach: Always pick the largest coin that doesn’t exceed the remaining amount.

Time Complexity: O(n × k), where n is the number of coin types and k is value/largest coin
Space Complexity: O(n) for storing coin counts

Algorithm:
- Sort coins in descending order
- While amount remains:
- Take as many of the largest coin as possible
- Move to next smaller coin
- Count total coins used
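The steps above can be sketched as follows. This is an illustrative implementation (not the repository’s test code) and assumes a canonical coin system such as [25, 10, 5, 1]; for arbitrary denominations the greedy choice can be suboptimal:

```typescript
// Greedy coin change: repeatedly take the largest coin that still fits.
function minCoinsGreedy(coins: number[], amount: number): number[] {
  const sorted = [...coins].sort((a, b) => b - a); // largest first
  const used: number[] = [];
  let remaining = amount;
  for (const coin of sorted) {
    while (remaining >= coin) { // take as many of this coin as possible
      used.push(coin);
      remaining -= coin;
    }
  }
  if (remaining !== 0) throw new Error("cannot make exact change");
  return used;
}

console.log(minCoinsGreedy([25, 10, 5, 1], 63)); // [25, 25, 10, 1, 1, 1]
```

With denominations like [1, 3, 4] and amount 6, this greedy picks 4 + 1 + 1 (3 coins) while the optimum is 3 + 3 (2 coins), which is why the general problem needs DP.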
Source: /workspace/source/algorithms/design-techniques/greedy/greedy.problems.test.ts:2-28

Backtracking (1 Problem)
Backtracking explores solution spaces by trying different paths and abandoning paths that don’t lead to valid solutions. It’s a refined brute force with pruning.

Characteristics of Backtracking Problems
When to use Backtracking:
- Need to explore all possible solutions
- Can eliminate invalid paths early (pruning)
- Problem involves making a sequence of choices
- Constraints can be checked incrementally
The three components of a backtracking solution:
- Choice: What decision to make at this step
- Constraint: Rules that determine if the current path is valid
- Goal: Condition that defines a complete solution

The general pattern is recursion with state:
- Try a choice
- Check if it’s valid
- If not, undo the choice (backtrack) and try another
Problem 1: Rat in a Maze
Difficulty: Medium

Problem Statement:
A maze is represented as an n×n binary matrix where 0 is a dead end and 1 is a valid path. A rat starts at the top-left corner [0,0] and must reach the bottom-right corner [n-1,n-1]. The rat can only move forward (right) or down.

Example: in the maze [[1, 0, 0], [1, 1, 0], [0, 1, 1]], the rat reaches the exit via down, right, down, right.
Approach: Try moving right or down. If a path fails, backtrack and try the other direction.

Time Complexity: O(2^(2n)) in the worst case, since each step along a path of length at most 2n-2 chooses between right and down
Space Complexity: O(n²) for the solution matrix plus O(n) for the recursion stack

Algorithm:
- Start at (0, 0)
- Mark current cell as part of solution
- Try moving right: if it leads to solution, return true
- Try moving down: if it leads to solution, return true
- If neither works, unmark cell (backtrack) and return false
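The steps above can be sketched as follows (an illustrative implementation, not the repository’s test code; `solveMaze` is a hypothetical name):

```typescript
// Rat in a Maze via backtracking. Allowed moves: right and down only.
// Returns the solution matrix (1 = on the path), or null if no path exists.
function solveMaze(maze: number[][]): number[][] | null {
  const n = maze.length;
  const path: number[][] = Array.from({ length: n }, () =>
    new Array<number>(n).fill(0)
  );

  function explore(row: number, col: number): boolean {
    // Out of bounds or a dead end: prune this branch.
    if (row >= n || col >= n || maze[row][col] === 0) return false;
    path[row][col] = 1; // tentatively include this cell
    if (row === n - 1 && col === n - 1) return true; // reached the exit
    if (explore(row, col + 1) || explore(row + 1, col)) return true;
    path[row][col] = 0; // backtrack: neither direction worked
    return false;
  }

  return explore(0, 0) ? path : null;
}

console.log(solveMaze([[1, 0, 0], [1, 1, 0], [0, 1, 1]]));
// [[1, 0, 0], [1, 1, 0], [0, 1, 1]]
```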
Source: /workspace/source/algorithms/design-techniques/backtracking/backtracking.problems.test.ts:2-63

Backtracking vs Brute Force: Backtracking is more efficient than pure brute force because it prunes the search space early. As soon as a path is found to violate a constraint, that branch is abandoned.
Technique Comparison
| Technique | Subproblems | When to Use | Time Complexity | Examples |
|---|---|---|---|---|
| Dynamic Programming | Overlapping | Optimization with overlapping subproblems | O(n²) to O(n³) typically | LCS, Knapsack, Edit Distance |
| Divide & Conquer | Independent | Can split into non-overlapping parts | O(n log n) often | Merge Sort, Binary Search, Max Subarray |
| Greedy | None (single pass) | Local optimum = global optimum | O(n) to O(n log n) | Activity Selection, Huffman, Coin Change* |
| Backtracking | Explored via search tree | Need all solutions or constraint satisfaction | Exponential (with pruning) | N-Queens, Sudoku, Maze |

*Greedy coin change is optimal for canonical coin systems (e.g., standard currency denominations), but not for arbitrary ones.
Learning Path
Learn Divide & Conquer
Study merge sort and binary search thoroughly. Understand the three-step process.
Master Dynamic Programming
Start with Fibonacci and climbing stairs. Progress to LCS and knapsack problems.