
Overview

This page organizes coding problems by their underlying algorithm design technique. Understanding these techniques is crucial for solving complex algorithmic challenges efficiently.
Source code location: /workspace/source/algorithms/design-techniques/

Dynamic Programming (1 Problem)

Dynamic programming solves problems by breaking them into overlapping subproblems and storing solutions to avoid redundant computation. It’s ideal for optimization problems with optimal substructure.
When to use Dynamic Programming:
  • Problem has optimal substructure: optimal solution contains optimal solutions to subproblems
  • Problem has overlapping subproblems: same subproblems are solved multiple times
  • Can be solved by combining solutions to smaller versions of the same problem
Two main approaches:
  1. Memoization (Top-Down): Recursive solution with caching
  2. Tabulation (Bottom-Up): Iterative solution building a table
Common DP patterns: Fibonacci, knapsack, longest common subsequence, matrix chain multiplication, edit distance
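The two approaches can be contrasted on the simplest of those patterns, Fibonacci. A minimal sketch (not from the source files):

```typescript
// Memoization (top-down): recurse, caching each result the first time it is computed
function fibMemo(n: number, cache: Map<number, number> = new Map()): number {
  if (n <= 1) return n;
  if (cache.has(n)) return cache.get(n)!;
  const result = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
  cache.set(n, result);
  return result;
}

// Tabulation (bottom-up): fill a table from the base cases upward
function fibTab(n: number): number {
  if (n <= 1) return n;
  const table = [0, 1];
  for (let i = 2; i <= n; i++) {
    table[i] = table[i - 1] + table[i - 2];
  }
  return table[n];
}

fibMemo(10); // 55
fibTab(10);  // 55
```

Both run in O(n); without the cache, the naive recursion re-solves the same subproblems exponentially many times.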
Difficulty: Hard
Problem Statement: Given two sequences, find the length of the longest subsequence present in both of them. A subsequence is a sequence that appears in the same relative order but is not necessarily contiguous.
Example:
Input: strA = "ABCDGH", strB = "AEDFHR"
Output: 3
Explanation: LCS is "ADH"

Subsequences of "ABCDEFG":
- "ABC", "ABG", "BDF", "AEG", "ACEFG" (and many more)
Approach: Build a 2D table where dp[i][j] represents the LCS length of the first i characters of strA and the first j characters of strB.
Time Complexity: O(m × n), where m and n are the string lengths
Space Complexity: O(m × n)
Algorithm:
  1. Create an (m+1) × (n+1) matrix initialized to 0
  2. For each character pair:
    • If characters match: dp[i][j] = dp[i-1][j-1] + 1
    • If they don’t match: dp[i][j] = max(dp[i-1][j], dp[i][j-1])
  3. Result is in dp[m][n]
DP Table Visualization:
//    "" A  B  C  D  G  H
// "" [ 0  0  0  0  0  0  0]
// A  [ 0  1  1  1  1  1  1]
// E  [ 0  1  1  1  1  1  1]
// D  [ 0  1  1  1  2  2  2]
// F  [ 0  1  1  1  2  2  2]
// H  [ 0  1  1  1  2  2  3]
// R  [ 0  1  1  1  2  2  3] ← Result
Implementation:
const strA = 'ABCDGH';
const strB = 'AEDFHR';
const matrix: number[][] = [];

// Initialize matrix
for (let i = 0; i <= strA.length; i++) {
  matrix[i] = [];
  for (let j = 0; j <= strB.length; j++) {
    matrix[i][j] = 0;
  }
}

// Fill matrix
for (let i = 1; i <= strA.length; i++) {
  for (let j = 1; j <= strB.length; j++) {
    if (strA[i - 1] === strB[j - 1]) {
      // Characters match: extend previous LCS
      matrix[i][j] = matrix[i-1][j-1] + 1;
    } else {
      // Characters don't match: take max from left or top
      matrix[i][j] = Math.max(
        matrix[i-1][j],
        matrix[i][j-1]
      );
    }
  }
}

// Result is bottom-right cell
const lcsLength = matrix[strA.length][strB.length]; // 3
Source: /workspace/source/algorithms/design-techniques/dynamic-programming/dynamic-programming.problems.test.ts:2-41
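The table encodes the subsequence itself, not just its length: walking back from dp[m][n] recovers "ADH". A self-contained sketch of the reconstruction (the `lcs` helper name is illustrative, not from the source file):

```typescript
function lcs(strA: string, strB: string): string {
  const m = strA.length;
  const n = strB.length;
  // Build the same (m+1) x (n+1) table as above
  const dp: number[][] = Array.from({ length: m + 1 }, () =>
    Array.from({ length: n + 1 }, () => 0)
  );
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      dp[i][j] =
        strA[i - 1] === strB[j - 1]
          ? dp[i - 1][j - 1] + 1
          : Math.max(dp[i - 1][j], dp[i][j - 1]);
    }
  }
  // Walk backwards: a match contributes a character,
  // otherwise follow whichever neighbor holds the larger value
  let result = '';
  let i = m;
  let j = n;
  while (i > 0 && j > 0) {
    if (strA[i - 1] === strB[j - 1]) {
      result = strA[i - 1] + result;
      i--;
      j--;
    } else if (dp[i - 1][j] >= dp[i][j - 1]) {
      i--;
    } else {
      j--;
    }
  }
  return result;
}

lcs('ABCDGH', 'AEDFHR'); // 'ADH'
```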
Learning Resources:
  • Dynamic programming is about “trading time for space” by storing computed results
  • Start with simpler DP problems (Fibonacci, climbing stairs) before tackling LCS
  • Practice identifying the recurrence relation - this is the key to DP solutions

Divide and Conquer (2 Problems)

Divide and conquer breaks problems into independent subproblems, solves them recursively, and combines results. Unlike DP, subproblems don’t overlap.
When to use Divide and Conquer:
  • Problem can be broken into independent subproblems
  • Subproblems are of the same type as original problem
  • Solutions to subproblems can be combined to solve the original
  • Base case is simple enough to solve directly
Three steps:
  1. Divide: Break problem into smaller subproblems
  2. Conquer: Solve subproblems recursively
  3. Combine: Merge solutions to get final answer
Classic examples: Merge sort, quick sort, binary search, Karatsuba multiplication
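Binary search is the smallest of these examples: divide (pick the middle), conquer (recurse into one half), and a trivial combine step. A minimal recursive sketch:

```typescript
function binarySearch(
  arr: number[],
  target: number,
  lo = 0,
  hi = arr.length - 1
): number {
  if (lo > hi) return -1;                // base case: not found
  const mid = Math.floor((lo + hi) / 2); // divide
  if (arr[mid] === target) return mid;
  return arr[mid] < target
    ? binarySearch(arr, target, mid + 1, hi)  // conquer: right half
    : binarySearch(arr, target, lo, mid - 1); // conquer: left half
}

binarySearch([1, 3, 5, 7, 9], 7); // 3
```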
Difficulty: Medium-Hard
Problem Statement: Find the contiguous subarray within a one-dimensional array of numbers that has the largest sum.
Example:
Input: [-2, -5, 6, -2, -3, 1, 5, -6]
Output: 7
Explanation: Subarray [6, -2, -3, 1, 5] has sum = 7
Approach: At each division, the maximum sum is either in the left half, in the right half, or crosses the middle.
Time Complexity: O(n log n)
Space Complexity: O(log n) for the recursion stack
Algorithm:
  1. Divide array into left and right halves
  2. Recursively find max sum in left half
  3. Recursively find max sum in right half
  4. Find max sum crossing the middle
  5. Return maximum of the three
Implementation:
function middleMaxSum(
  arr: number[],
  left: number,
  middle: number,
  right: number
): number {
  // Find max sum in left half ending at middle
  let leftSum = 0;
  let maxLeftSum = Number.MIN_SAFE_INTEGER;
  for (let i = middle; i >= left; i--) {
    leftSum += arr[i];
    if (leftSum > maxLeftSum) {
      maxLeftSum = leftSum;
    }
  }
  
  // Find max sum in right half starting at middle+1
  let rightSum = 0;
  let maxRightSum = Number.MIN_SAFE_INTEGER;
  for (let i = middle + 1; i <= right; i++) {
    rightSum += arr[i];
    if (rightSum > maxRightSum) {
      maxRightSum = rightSum;
    }
  }
  
  // Return max of: crossing sum, left only, right only
  return Math.max(
    maxLeftSum + maxRightSum,
    maxLeftSum,
    maxRightSum
  );
}

function maxSum(arr: number[], left: number, right: number): number {
  // Base case: single element
  if (left === right) {
    return arr[left];
  }
  
  const middle = Math.floor((left + right) / 2);
  
  // Return max of: left half, crossing middle, right half
  return Math.max(
    maxSum(arr, left, middle),
    middleMaxSum(arr, left, middle, right),
    maxSum(arr, middle + 1, right)
  );
}

const arr = [-2, -5, 6, -2, -3, 1, 5, -6];
const result = maxSum(arr, 0, arr.length - 1); // 7
Source: /workspace/source/algorithms/design-techniques/divide-and-conquer/divide-and-conquer.problems.test.ts:3-53
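For contrast: the divide-and-conquer version runs in O(n log n), but this particular problem also admits an O(n) single-pass solution, Kadane's algorithm, which is itself a small dynamic program. A sketch (not from the source files):

```typescript
function maxSubarraySum(arr: number[]): number {
  let best = arr[0];
  let current = arr[0];
  for (let i = 1; i < arr.length; i++) {
    // Either extend the running subarray or start fresh at arr[i]
    current = Math.max(arr[i], current + arr[i]);
    best = Math.max(best, current);
  }
  return best;
}

maxSubarraySum([-2, -5, 6, -2, -3, 1, 5, -6]); // 7
```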
Difficulty: Hard
Problem Statement: Count the number of inversions in an array. An inversion is a pair of indices (i, j) where i < j but arr[i] > arr[j]. This indicates how far the array is from being sorted.
Example:
Input: [2, 4, 1, 3, 5]
Output: 3
Explanation: Inversions are (2,1), (4,1), (4,3)
Approach: Count inversions while performing merge sort.
Time Complexity: O(n log n)
Space Complexity: O(n)
Key Insight: When merging two sorted subarrays, if an element from the right subarray is placed before elements from the left subarray, all remaining left elements form inversions with it.
Implementation:
function merge(
  [leftSubArr, leftInv]: [number[], number],
  [rightSubArr, rightInv]: [number[], number]
): [number[], number] {
  const arr = [];
  let i = 0;
  let j = 0;
  let swaps = leftInv + rightInv;
  
  while (i < leftSubArr.length && j < rightSubArr.length) {
    if (leftSubArr[i] <= rightSubArr[j]) {
      arr.push(leftSubArr[i++]);
    } else {
      arr.push(rightSubArr[j++]);
      
      // Key insight: all remaining elements in left subarray
      // are greater than rightSubArr[j], so they form inversions
      swaps += leftSubArr.length - i;
    }
  }
  
  return [
    [...arr, ...leftSubArr.slice(i), ...rightSubArr.slice(j)],
    swaps,
  ];
}

function getInversionCount([arr, swaps]: [number[], number]): [
  number[],
  number
] {
  // Base case
  if (arr.length === 1) {
    return [arr, swaps];
  }
  
  const mid = Math.floor(arr.length / 2);
  const left = getInversionCount([arr.slice(0, mid), swaps]);
  const right = getInversionCount([arr.slice(mid), swaps]);
  
  return merge(left, right);
}

const [sortedArr, inversions] = getInversionCount([[2, 4, 1, 3, 5], 0]);
// sortedArr: [1, 2, 3, 4, 5]
// inversions: 3
Source: /workspace/source/algorithms/design-techniques/divide-and-conquer/divide-and-conquer.problems.test.ts:55-116
D&C vs DP: Divide and conquer splits into independent subproblems (like splitting an array in half), while dynamic programming handles overlapping subproblems (like computing Fibonacci numbers where F(n) needs F(n-1) and F(n-2), and F(n-1) also needs F(n-2)).

Greedy Algorithms (1 Problem)

Greedy algorithms make locally optimal choices at each step, hoping to find a global optimum. They’re fast but don’t always guarantee the optimal solution.
When to use Greedy Algorithms:
  • Problem has greedy choice property: local optimum leads to global optimum
  • Problem has optimal substructure: optimal solution contains optimal solutions to subproblems
  • No need to reconsider previous choices
Key difference from DP:
  • Greedy: Make best choice now, never look back
  • DP: Consider all choices, pick best based on future implications
Classic examples: Activity selection, Huffman coding, Dijkstra's algorithm, Prim's MST, Kruskal's MST
Pitfall: Greedy doesn't always work! You must prove the greedy choice property holds.
Difficulty: Easy-Medium
Problem Statement: Given a value V and an infinite supply of coins with denominations [c1, c2, …, cn], find the minimum number of coins needed to make change for V.
Example:
Input: value = 186, coins = [1, 5, 10, 25]
Output: 9 coins
Breakdown: 7×25 + 1×10 + 1×1 = 175 + 10 + 1 = 186
Approach: Always pick the largest coin that doesn't exceed the remaining amount.
Time Complexity: O(n), where n is the number of coin denominations (integer division takes all copies of a coin at once)
Space Complexity: O(n) for storing coin counts
Algorithm:
  1. Sort coins in descending order
  2. While amount remains:
    • Take as many of the largest coin as possible
    • Move to next smaller coin
  3. Count total coins used
Implementation:
const value = 186;
const coins = [1, 5, 10, 25];

const result: Record<number, number> = {};
let amount = value;
let count = 0;

// Process from largest to smallest
for (let i = coins.length - 1; i >= 0; i--) {
  while (coins[i] <= amount) {
    // Take as many of this coin as possible
    const coinCount = Math.floor(amount / coins[i]);
    
    result[coins[i]] = result[coins[i]]
      ? result[coins[i]] + coinCount
      : coinCount;
    
    count += coinCount;
    amount = amount % coins[i];
  }
}

// Result: { '1': 1, '10': 1, '25': 7 }
// Total: 9 coins
Source: /workspace/source/algorithms/design-techniques/greedy/greedy.problems.test.ts:2-28
Greedy Algorithm Pitfall: Always verify that the greedy choice property holds for your specific problem! Many problems seem like they’d work with greedy but actually require dynamic programming for the optimal solution.
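A concrete failure case, using hypothetical denominations [1, 3, 4] (not the coin system from the problem above): for amount 6, greedy picks 4 + 1 + 1 (3 coins), while 3 + 3 (2 coins) is optimal. A sketch comparing greedy against a DP check:

```typescript
// Greedy: repeatedly take the largest coin that fits
function greedyCoinCount(coins: number[], amount: number): number {
  const sorted = [...coins].sort((a, b) => b - a);
  let count = 0;
  for (const coin of sorted) {
    count += Math.floor(amount / coin);
    amount %= coin;
  }
  return count;
}

// DP: dp[v] = minimum coins needed to make value v
function dpCoinCount(coins: number[], amount: number): number {
  const dp = new Array<number>(amount + 1).fill(Infinity);
  dp[0] = 0;
  for (let v = 1; v <= amount; v++) {
    for (const coin of coins) {
      if (coin <= v) dp[v] = Math.min(dp[v], dp[v - coin] + 1);
    }
  }
  return dp[amount];
}

greedyCoinCount([1, 3, 4], 6); // 3  (4 + 1 + 1)
dpCoinCount([1, 3, 4], 6);     // 2  (3 + 3)
```

Canonical systems like [1, 5, 10, 25] happen to satisfy the greedy choice property, which is why the problem above works out; [1, 3, 4] does not.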

Backtracking (1 Problem)

Backtracking explores solution spaces by trying different paths and abandoning paths that don’t lead to valid solutions. It’s a refined brute force with pruning.
When to use Backtracking:
  • Need to explore all possible solutions
  • Can eliminate invalid paths early (pruning)
  • Problem involves making a sequence of choices
  • Constraints can be checked incrementally
Three key components:
  1. Choice: What decision to make at this step
  2. Constraint: Rules that determine if current path is valid
  3. Goal: Condition that defines a complete solution
Common patterns:
  • Recursion with state
  • Try a choice
  • Check if it’s valid
  • If not, undo (backtrack) and try another choice
Classic examples: N-Queens, Sudoku solver, maze solving, subset sum, permutations, combinations
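The three components map directly onto one of those classic examples, generating permutations. A minimal sketch (not from the source files):

```typescript
function permutations(items: number[]): number[][] {
  const results: number[][] = [];
  const current: number[] = [];
  const used = new Array<boolean>(items.length).fill(false);

  function backtrack(): void {
    // Goal: every item has been placed
    if (current.length === items.length) {
      results.push([...current]);
      return;
    }
    for (let i = 0; i < items.length; i++) {
      if (used[i]) continue;  // Constraint: don't reuse an item
      used[i] = true;         // Choice: place items[i] next
      current.push(items[i]);
      backtrack();
      current.pop();          // Backtrack: undo the choice
      used[i] = false;
    }
  }

  backtrack();
  return results;
}

permutations([1, 2, 3]).length; // 6
```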
Difficulty: Medium
Problem Statement: A maze is represented as an n×n binary matrix where 0 is a dead end and 1 is a valid path. A rat starts at the top-left corner [0,0] and must reach the bottom-right corner [n-1,n-1]. The rat can only move forward (right) or down.
Example:
Input:
[1, 0, 0, 0]
[1, 1, 1, 1]
[0, 0, 1, 0]
[0, 1, 1, 1]

Output (path marked with 1):
[1, 0, 0, 0]
[1, 1, 1, 0]
[0, 0, 1, 0]
[0, 0, 1, 1]

Path: (0,0) → (1,0) → (1,1) → (1,2) → (2,2) → (3,2) → (3,3)
Approach: Try moving right or down. If a path fails, backtrack and try the other direction.
Time Complexity: O(2^(2n)) in the worst case; with only right/down moves, every monotone path of length up to 2n − 1 may be explored
Space Complexity: O(n²) for the solution matrix + O(n) for the recursion stack (a path visits at most 2n − 1 cells)
Algorithm:
  1. Start at (0, 0)
  2. Mark current cell as part of solution
  3. Try moving right: if it leads to solution, return true
  4. Try moving down: if it leads to solution, return true
  5. If neither works, unmark cell (backtrack) and return false
Implementation:
const maze = [
  [1, 0, 0, 0],
  [1, 1, 1, 1],
  [0, 0, 1, 0],
  [0, 1, 1, 1],
];

const n = maze.length;
const solution: number[][] = [];

// Initialize solution matrix
for (let i = 0; i < n; i++) {
  solution[i] = Array.from({ length: n }, () => 0);
}

function solveMaze(
  maze: number[][],
  solution: number[][],
  i: number,
  j: number
): boolean {
  const n = maze.length;
  
  // Out of bounds
  if (i > n - 1 || j > n - 1) {
    return false;
  }
  
  // Reached destination
  if (i === n - 1 && j === n - 1) {
    solution[i][j] = 1;
    return true;
  }
  
  // Current cell is valid path
  if (maze[i][j] !== 0) {
    solution[i][j] = 1;
    
    // Try moving right
    if (solveMaze(maze, solution, i, j + 1)) {
      return true;
    }
    
    // Try moving down
    if (solveMaze(maze, solution, i + 1, j)) {
      return true;
    }
    
    // Backtrack: neither direction worked
    solution[i][j] = 0;
  }
  
  return false;
}

solveMaze(maze, solution, 0, 0); // returns true; solution now marks the path
Source: /workspace/source/algorithms/design-techniques/backtracking/backtracking.problems.test.ts:2-63
Backtracking vs Brute Force: Backtracking is more efficient than pure brute force because it prunes the search space early. As soon as we detect a path violates constraints, we stop exploring that branch.

Technique Comparison

| Technique           | Subproblems              | When to Use                                   | Time Complexity          | Examples                                  |
| ------------------- | ------------------------ | --------------------------------------------- | ------------------------ | ----------------------------------------- |
| Dynamic Programming | Overlapping              | Optimization with overlapping subproblems     | O(n²) to O(n³) typically | LCS, Knapsack, Edit Distance              |
| Divide & Conquer    | Independent              | Can split into non-overlapping parts          | O(n log n) often         | Merge Sort, Binary Search, Max Subarray   |
| Greedy              | None (single pass)       | Local optimum = global optimum                | O(n) to O(n log n)       | Activity Selection, Huffman, Coin Change* |
| Backtracking        | Explored via search tree | Need all solutions or constraint satisfaction | Exponential (with pruning) | N-Queens, Sudoku, Maze                  |
*Coin change with greedy only works for certain coin systems

Learning Path

  1. Start with Greedy: Easiest to understand. Practice recognizing when greedy works vs when it doesn't.
  2. Learn Divide & Conquer: Study merge sort and binary search thoroughly. Understand the three-step process.
  3. Master Dynamic Programming: Start with Fibonacci and climbing stairs. Progress to LCS and knapsack problems.
  4. Practice Backtracking: Begin with simple maze problems. Advance to N-Queens and Sudoku solvers.
