The essential problems every software engineer should master, with core patterns and variations
These aren’t just interview problems—they’re fundamental patterns that appear in production code daily. Master these and you’ll recognize variations instantly.
Difficulty: Easy · Frequency: 76.4%
Given an array and a target, return the indices of two numbers that sum to the target.
```python
from typing import List

def twoSum(nums: List[int], target: int) -> List[int]:
    seen = {}
    for i, num in enumerate(nums):
        complement = target - num
        if complement in seen:
            return [seen[complement], i]
        seen[num] = i
    return []
```
Time: O(n) · Space: O(n)
Why It’s Essential:
The hash table pattern for O(1) complement lookup is fundamental to:
Pair sum problems (3Sum, 4Sum, Two Sum II)
Anagram detection
Frequency counting
Duplicate detection
Caching and memoization
Core Pattern: For each element, check if its complement exists in a hash table. Store elements as you iterate.
Common Variations:
Two Sum II - Sorted Array
Input is sorted → Use two pointers instead of hash table
```python
def twoSum(numbers, target):
    left, right = 0, len(numbers) - 1
    while left < right:
        total = numbers[left] + numbers[right]
        if total == target:
            return [left + 1, right + 1]
        elif total < target:
            left += 1
        else:
            right -= 1
    return []  # no pair found
```
Time: O(n) · Space: O(1)
3Sum - Find All Unique Triplets
Find three numbers that sum to zero → Fix one element, use two pointers for the rest
Key insight: Sort the array, fix the first element, then use two pointers for the remaining pair.
Time: O(n²) · Space: O(1)
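A sketch of that approach, with duplicate-skipping at both the fixed element and the left pointer:

```python
def threeSum(nums):
    nums.sort()
    result = []
    for i in range(len(nums) - 2):
        if i > 0 and nums[i] == nums[i - 1]:
            continue  # skip duplicate fixed elements
        left, right = i + 1, len(nums) - 1
        while left < right:
            total = nums[i] + nums[left] + nums[right]
            if total < 0:
                left += 1
            elif total > 0:
                right -= 1
            else:
                result.append([nums[i], nums[left], nums[right]])
                left += 1
                right -= 1
                while left < right and nums[left] == nums[left - 1]:
                    left += 1  # skip duplicate second elements
    return result
```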
4Sum - Generalize to K Numbers
Find four numbers → Recursive reduction or two pointers inside nested loops
Pattern extends to K-Sum using recursion or iterative reduction.
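The K-Sum reduction can be sketched as a recursive helper that peels off one fixed element per level until it reaches the two-pointer base case. `kSum` here is an illustrative name, not a standard signature:

```python
def kSum(nums, target, k):
    nums = sorted(nums)

    def helper(start, target, k):
        results = []
        if k == 2:  # base case: two-pointer Two Sum on the sorted suffix
            left, right = start, len(nums) - 1
            while left < right:
                total = nums[left] + nums[right]
                if total < target:
                    left += 1
                elif total > target:
                    right -= 1
                else:
                    results.append([nums[left], nums[right]])
                    left += 1
                    while left < right and nums[left] == nums[left - 1]:
                        left += 1  # skip duplicates
            return results
        for i in range(start, len(nums) - k + 1):
            if i > start and nums[i] == nums[i - 1]:
                continue  # skip duplicate fixed elements
            for rest in helper(i + 1, target - nums[i], k - 1):
                results.append([nums[i]] + rest)
        return results

    return helper(0, target, k)
```

With k=2 this degenerates to sorted Two Sum; with k=4 it reproduces the 4Sum nested-loop structure.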
Interview Follow-Ups:
“What if the array is sorted?” → Two pointers
“What if we need all pairs?” → Handle duplicates carefully
“What if input is too large for memory?” → External merge sort
Difficulty: Easy · Frequency: 74.9%
Determine if a string of brackets is valid (properly nested and matched).
```python
def isValid(s: str) -> bool:
    stack = []
    mapping = {')': '(', '}': '{', ']': '['}
    for char in s:
        if char in mapping:
            top = stack.pop() if stack else '#'
            if mapping[char] != top:
                return False
        else:
            stack.append(char)
    return not stack
```
Time: O(n) · Space: O(n)
Why It’s Essential:
Stacks are the go-to structure for:
Matching/balancing problems (parentheses, tags)
Expression evaluation (calculators, parsers)
Backtracking and undo operations
Monotonic stack problems (next greater element)
DFS traversal (iterative)
Core Pattern: Use a stack to track opening brackets. When you see a closing bracket, check whether it matches the top of the stack.
Common Variations:
Minimum Remove to Make Valid Parentheses
Remove minimum brackets to make valid → Track invalid indices
```python
def minRemoveToMakeValid(s):
    to_remove = set()
    stack = []
    for i, char in enumerate(s):
        if char == '(':
            stack.append(i)
        elif char == ')':
            if stack:
                stack.pop()
            else:
                to_remove.add(i)  # unmatched ')'
    to_remove.update(stack)  # unmatched '(' indices
    return ''.join(char for i, char in enumerate(s) if i not in to_remove)
```
Frequency: 100% - The most asked parentheses variant!
Basic Calculator II
Evaluate an expression with +, -, *, / → Stack for precedence
Key insight: Use a stack to handle operator precedence. Push numbers, apply high-precedence ops immediately.
Frequency: 84% - Classic stack application
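One common way to implement that idea, as a sketch that assumes non-negative integer operands and division that truncates toward zero:

```python
def calculate(s):
    s = s.replace(' ', '')  # ignore whitespace
    stack = []
    num = 0
    op = '+'  # pending operator before the current number
    for i, ch in enumerate(s):
        if ch.isdigit():
            num = num * 10 + int(ch)
        if ch in '+-*/' or i == len(s) - 1:
            if op == '+':
                stack.append(num)
            elif op == '-':
                stack.append(-num)
            elif op == '*':
                stack.append(stack.pop() * num)  # high precedence: apply now
            else:
                stack.append(int(stack.pop() / num))  # truncate toward zero
            num, op = 0, ch
    return sum(stack)  # only +/- terms remain
```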
Remove All Adjacent Duplicates II
Remove k consecutive duplicates → Stack with counts
Pattern: Store (char, count) pairs in the stack, pop when the count reaches k.
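That pattern in code, as a minimal sketch:

```python
def removeDuplicates(s, k):
    stack = []  # list of [char, count] pairs
    for ch in s:
        if stack and stack[-1][0] == ch:
            stack[-1][1] += 1
            if stack[-1][1] == k:
                stack.pop()  # k in a row: remove the whole run
        else:
            stack.append([ch, 1])
    return ''.join(ch * count for ch, count in stack)
```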
Interview Follow-Ups:
“What if we need to count minimum insertions?” → Counter approach
“Can you do it with O(1) space?” → Counter for simple cases
“What about nested tag validation?” → Stack with state
Difficulty: Medium · Frequency: 77.9%
Merge all overlapping intervals into non-overlapping intervals.
```python
from typing import List

def merge(intervals: List[List[int]]) -> List[List[int]]:
    if not intervals:
        return []
    intervals.sort(key=lambda x: x[0])
    merged = [intervals[0]]
    for current in intervals[1:]:
        last = merged[-1]
        if current[0] <= last[1]:
            last[1] = max(last[1], current[1])
        else:
            merged.append(current)
    return merged
```
Time: O(n log n) · Space: O(n)
Why It’s Essential:
This pattern solves:
Scheduling conflicts (meeting rooms, calendar)
Range queries and updates
Resource allocation
Time-based analytics
Interval tree problems
Core Pattern: Sort by start time, then iterate and merge overlapping intervals by extending the end time.
Common Variations:
Insert Interval
Insert into sorted intervals → Three phases: before, overlap, after
```python
def insert(intervals, newInterval):
    result = []
    i = 0
    # Add all intervals that end before newInterval starts
    while i < len(intervals) and intervals[i][1] < newInterval[0]:
        result.append(intervals[i])
        i += 1
    # Merge every interval that overlaps newInterval
    while i < len(intervals) and intervals[i][0] <= newInterval[1]:
        newInterval[0] = min(newInterval[0], intervals[i][0])
        newInterval[1] = max(newInterval[1], intervals[i][1])
        i += 1
    result.append(newInterval)
    # Add the remaining intervals
    result.extend(intervals[i:])
    return result
```
Meeting Rooms II
Minimum meeting rooms needed → Min heap or sweep line
Key insight: Track start/end times separately, or use a min heap to track end times.
Pattern extends to resource allocation problems.
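A sketch of the min-heap version: the heap holds end times of rooms in use, and a room is reused whenever the earliest-ending meeting finishes before the next one starts:

```python
import heapq

def minMeetingRooms(intervals):
    intervals.sort(key=lambda x: x[0])
    end_times = []  # min heap of end times of occupied rooms
    for start, end in intervals:
        if end_times and end_times[0] <= start:
            heapq.heapreplace(end_times, end)  # reuse the earliest-freed room
        else:
            heapq.heappush(end_times, end)  # need a new room
    return len(end_times)
```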
Non-overlapping Intervals
Minimum removals to make non-overlapping → Greedy by end time
Sort by end time, greedily select intervals that don’t overlap.
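A sketch of the greedy approach: keep each interval whose start is at or after the last kept end, and count everything else as removed:

```python
def eraseOverlapIntervals(intervals):
    intervals.sort(key=lambda x: x[1])  # sort by end time
    removals = 0
    prev_end = float('-inf')
    for start, end in intervals:
        if start >= prev_end:
            prev_end = end  # keep this interval
        else:
            removals += 1  # overlaps the last kept interval; remove it
    return removals
```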
Interview Follow-Ups:
“What if intervals aren’t sorted?” → Must sort first
“How to handle infinite streams?” → Sweep line with priority queue
Why It’s Essential:
The LRU Cache design problem teaches:
Combining data structures (hash table + linked list)
O(1) operation design
System design principles
Cache invalidation strategies
Real-world data structure implementation
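A minimal sketch of the design using Python’s OrderedDict, which internally combines a hash map with a doubly linked list; a from-scratch implementation would build those two structures explicitly:

```python
from collections import OrderedDict

class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self.cache:
            return -1
        self.cache.move_to_end(key)  # mark as most recently used
        return self.cache[key]

    def put(self, key, value):
        if key in self.cache:
            self.cache.move_to_end(key)
        self.cache[key] = value
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least recently used
```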
Core Pattern: Hash table for O(1) access, doubly linked list for O(1) reordering and eviction.
Common Variations:
LFU Cache - Least Frequently Used
Evict by frequency instead of recency → Two hash maps + frequency buckets
More complex: Track frequency counts and maintain frequency-ordered buckets.
Real-world: Better for access patterns with hot/cold data.
LRU Cache with Expiration
Add TTL (time to live) → Additional timestamp tracking
Pattern: Store timestamps, check expiration on get, run periodic cleanup.
Snapshot Array
Design with versioning → Hash map per snapshot or delta tracking
Extension of state management with historical queries.
Interview Follow-Ups:
“How to handle concurrent access?” → Locks or lock-free structures
“What if cache is distributed?” → Consistent hashing
“Memory vs performance tradeoffs?” → Discuss eviction policies
Why It’s Essential:
This simple problem introduces:
State-based dynamic programming
Tracking optimal solutions incrementally
Space optimization (reducing from O(n) to O(1))
Maximum subarray variants
Financial modeling patterns
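For reference, the base single-transaction problem can be sketched as:

```python
def maxProfit(prices):
    min_price = float('inf')
    best = 0
    for price in prices:
        min_price = min(min_price, price)       # cheapest buy so far
        best = max(best, price - min_price)     # profit if we sell today
    return best
```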
Core Pattern: Track the minimum value seen so far, calculate the profit at each step, and maintain the maximum.
Common Variations:
Best Time to Buy/Sell Stock II
Multiple transactions allowed → Sum all positive differences
```python
def maxProfit(prices):
    profit = 0
    for i in range(1, len(prices)):
        if prices[i] > prices[i - 1]:
            profit += prices[i] - prices[i - 1]
    return profit
```
Key insight: Add profit for every upward move.
Best Time to Buy/Sell Stock III
At most 2 transactions → State machine DP
Track 4 states: buy1, sell1, buy2, sell2. Classic state machine problem.
Difficulty: Hard, but follows the same pattern with more states.
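A sketch of the four-state machine, where each variable holds the best cash balance after that action:

```python
def maxProfit(prices):
    buy1 = buy2 = float('-inf')
    sell1 = sell2 = 0
    for price in prices:
        buy1 = max(buy1, -price)           # best cash after first buy
        sell1 = max(sell1, buy1 + price)   # best cash after first sell
        buy2 = max(buy2, sell1 - price)    # best cash after second buy
        sell2 = max(sell2, buy2 + price)   # best cash after second sell
    return sell2
```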
Best Time to Buy/Sell Stock with Cooldown
Cooldown after selling → Three-state DP
States: holding stock, cooldown, available to buy. Pattern for state transitions.
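A sketch of those three states rolled into variables, each holding the best cash balance in that state after processing the current day:

```python
def maxProfit(prices):
    hold = float('-inf')  # holding stock
    cooldown = 0          # sold today, must rest tomorrow
    free = 0              # available to buy
    for price in prices:
        hold, cooldown, free = (
            max(hold, free - price),   # keep holding, or buy today
            hold + price,              # sell today, enter cooldown
            max(free, cooldown),       # rest; yesterday's cooldown ends
        )
    return max(cooldown, free)
```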
Interview Follow-Ups:
“What if we can do k transactions?” → Generalized DP
“With transaction fees?” → Subtract fee in profit calculation
Difficulty: Hard · Frequency: 76.4%
Merge k sorted linked lists into one sorted list.
```python
import heapq
from typing import List, Optional

def mergeKLists(lists: List[Optional[ListNode]]) -> Optional[ListNode]:
    min_heap = []
    # Initialize heap with the first node from each list
    # (the list index i breaks ties so nodes are never compared directly)
    for i, node in enumerate(lists):
        if node:
            heapq.heappush(min_heap, (node.val, i, node))
    dummy = ListNode(0)
    current = dummy
    while min_heap:
        val, i, node = heapq.heappop(min_heap)
        current.next = node
        current = current.next
        if node.next:
            heapq.heappush(min_heap, (node.next.val, i, node.next))
    return dummy.next
```
Time: O(N log k) · Space: O(k)
Why It’s Essential:
Heaps are critical for:
Top-K problems
Stream processing (finding median, top elements)
Priority scheduling
Merge operations
External sorting
Core Pattern: Maintain a min heap of size k; always pop the smallest and add the next element from the same source.
Common Variations:
Kth Largest Element
Find kth largest → Min heap of size k
```python
import heapq

def findKthLargest(nums, k):
    heap = nums[:k]
    heapq.heapify(heap)
    for num in nums[k:]:
        if num > heap[0]:
            heapq.heapreplace(heap, num)
    return heap[0]
```
Maintain heap of k largest elements, smallest is at top.
Top K Frequent Elements
Most frequent k elements → Heap or bucket sort
Can use a min heap of size k, or bucket sort for an O(n) solution.
Frequency: 77.9% - Essential for analytics queries.
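A sketch of the heap version, using `heapq.nlargest` over (count, value) pairs built from a frequency counter:

```python
import heapq
from collections import Counter

def topKFrequent(nums, k):
    counts = Counter(nums)
    # nlargest over (count, value) pairs keeps the k most frequent
    pairs = heapq.nlargest(k, ((count, num) for num, count in counts.items()))
    return [num for count, num in pairs]
```

The bucket-sort alternative groups values by count into an array indexed 0..n, then reads buckets from the high end for O(n) total work.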
Find Median from Data Stream
Running median → Two heaps (max heap + min heap)
Max heap for the lower half, min heap for the upper half. Balance sizes to find the median in O(1).
Difficulty: Hard, but a fundamental pattern for stream processing.
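A sketch of the two-heap design; since Python’s heapq is a min heap, the lower half stores negated values to simulate a max heap:

```python
import heapq

class MedianFinder:
    def __init__(self):
        self.lo = []  # max heap (negated values) for the lower half
        self.hi = []  # min heap for the upper half

    def addNum(self, num):
        heapq.heappush(self.lo, -num)
        # keep every element of lo <= every element of hi
        heapq.heappush(self.hi, -heapq.heappop(self.lo))
        # rebalance so lo has at least as many elements as hi
        if len(self.hi) > len(self.lo):
            heapq.heappush(self.lo, -heapq.heappop(self.hi))

    def findMedian(self):
        if len(self.lo) > len(self.hi):
            return -self.lo[0]
        return (-self.lo[0] + self.hi[0]) / 2
```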
Interview Follow-Ups:
“What if lists are very long?” → External merge sort
“Divide and conquer alternative?” → Merge pairs recursively
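The Clone Graph pattern described below can be sketched as follows, assuming the standard `Node` class with `val` and `neighbors`:

```python
class Node:
    def __init__(self, val=0, neighbors=None):
        self.val = val
        self.neighbors = neighbors if neighbors is not None else []

def cloneGraph(node):
    if not node:
        return None
    clones = {}  # original node -> its clone

    def dfs(n):
        if n in clones:
            return clones[n]
        copy = Node(n.val)
        clones[n] = copy  # register before recursing to break cycles
        copy.neighbors = [dfs(nb) for nb in n.neighbors]
        return copy

    return dfs(node)
```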
Core Pattern: Use a hash map to track visited nodes (and their clones). DFS or BFS to traverse, creating clones as you go.
Common Variations:
Course Schedule (Cycle Detection)
Detect cycles in directed graph → DFS with three states
```python
def canFinish(numCourses, prerequisites):
    graph = [[] for _ in range(numCourses)]
    for course, prereq in prerequisites:
        graph[course].append(prereq)
    # 0: unvisited, 1: visiting, 2: visited
    state = [0] * numCourses

    def hasCycle(course):
        if state[course] == 1:
            return True   # back edge: cycle found
        if state[course] == 2:
            return False  # already fully explored
        state[course] = 1
        for prereq in graph[course]:
            if hasCycle(prereq):
                return True
        state[course] = 2
        return False

    return not any(hasCycle(i) for i in range(numCourses))
```
Number of Islands
Count connected components → DFS/BFS on grid
Classic pattern: Iterate through the grid, DFS/BFS on each unvisited land cell.
Extension: Union-find for dynamic connectivity queries.
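A DFS sketch that "sinks" each island as it is counted; it mutates the grid in place and assumes string cells '1'/'0':

```python
def numIslands(grid):
    if not grid:
        return 0
    rows, cols = len(grid), len(grid[0])

    def sink(r, c):
        if r < 0 or r >= rows or c < 0 or c >= cols or grid[r][c] != '1':
            return
        grid[r][c] = '0'  # mark visited by sinking the cell
        sink(r + 1, c)
        sink(r - 1, c)
        sink(r, c + 1)
        sink(r, c - 1)

    count = 0
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == '1':
                count += 1
                sink(r, c)  # erase the whole island so it isn't recounted
    return count
```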
Word Ladder
Shortest path in word graph → BFSBuild graph of word transformations, use BFS to find shortest path.Difficulty: Hard - combines graph + BFS + string manipulation.
Interview Follow-Ups:
“BFS vs DFS - when to use which?” → BFS for shortest path, DFS for all paths
“How to detect cycles?” → Three-state coloring or visited set with recursion stack
“Optimize for sparse vs dense graphs?” → Adjacency list vs matrix
These problems aren’t arbitrary—they were chosen because:
High Frequency: Appear in 62-77% of interviews at top companies
Fundamental Patterns: Each teaches a pattern that appears in 50+ other problems
Building Blocks: More complex problems combine these patterns
Real-World Relevance: These patterns appear daily in production code
Interview Signals: Success on these indicates strong foundational knowledge
Master these seven problems and their core patterns, and you’ll be able to solve 80% of interview problems you encounter. The remaining 20% are just combinations or minor variations of these fundamentals.