Complexity Overview
The Push Swap algorithm’s complexity varies with the input size and the sorting strategy employed. Understanding these complexities helps evaluate the algorithm’s efficiency and expected performance.

Time Complexity
By Input Size
Size 2 - O(1)
Best Case: 0 operations (already sorted)
Worst Case: 1 operation (sa)
For 2 elements, the algorithm performs a constant-time check and at most one swap operation.
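The size-2 case can be sketched in a few lines. Here an array stands in for the stack (index 0 is the top); this mirrors the logic rather than quoting the source:

```c
#include <assert.h>

/* Sketch: one comparison, at most one swap (sa). */
static void sort_two(int *s)
{
    if (s[0] > s[1])        /* out of order: a single sa fixes it */
    {
        int tmp = s[0];
        s[0] = s[1];
        s[1] = tmp;
    }
}
```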
Size 3 - O(1)
Best Case: 0 operations (already sorted)
Worst Case: 2 operations
The sort_three() function uses a hardcoded decision tree with 6 possible cases, each requiring 0-2 operations. Example worst case: [3, 2, 1] requires sa + rra. Since the number of operations is bounded by a constant (2), this is O(1).
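A minimal, instrumented sketch of such a decision tree is shown below. The array stands in for the stack with index 0 as the top, and the branch layout is modeled on typical push_swap implementations, not necessarily this exact source:

```c
#include <assert.h>

static int g_ops;   /* counts operations performed by the sketch */

/* Index 0 is the top of the stack. */
static void sa(int *s)  { int t = s[0]; s[0] = s[1]; s[1] = t; g_ops++; }
static void ra(int *s)  { int t = s[0]; s[0] = s[1]; s[1] = s[2]; s[2] = t; g_ops++; }
static void rra(int *s) { int t = s[2]; s[2] = s[1]; s[1] = s[0]; s[0] = t; g_ops++; }

/* Six possible orderings, each handled in 0-2 operations. */
static void sort_three(int *s)
{
    if (s[0] < s[1] && s[1] < s[2])
        return ;                         /* [1,2,3]: already sorted */
    if (s[0] > s[1] && s[1] > s[2])      /* [3,2,1]: worst case */
    {
        sa(s);
        rra(s);
    }
    else if (s[0] > s[1] && s[0] < s[2]) /* [2,1,3] */
        sa(s);
    else if (s[0] > s[1])                /* [3,1,2] */
        ra(s);
    else if (s[0] < s[2])                /* [1,3,2] */
    {
        sa(s);
        ra(s);
    }
    else                                 /* [2,3,1] */
        rra(s);
}
```

Running all six permutations through this sketch confirms the bound: no input of size 3 needs more than 2 operations.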
Size 4+ - O(n²)
For larger stacks, the Turkish algorithm dominates the complexity:

Average Case: O(n²)
Worst Case: O(n²)

The quadratic complexity comes from:
- For each element to push (n iterations)
- Cost analysis examines all nodes in both stacks (O(n) per iteration)
- Target finding requires scanning stack B (O(n))
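The O(n) target scan can be sketched as a single pass over B. This is an illustrative version only: stack B is modeled as a plain array and the function name is an assumption, not the project’s API.

```c
#include <assert.h>
#include <limits.h>

/* Sketch: for a value about to be pushed from A, its target in B is
 * the largest value still smaller than it; if no such value exists,
 * the maximum of B is used instead. One O(n) scan finds both. */
static int find_target_index(const int *b, int len, int value)
{
    int best = INT_MIN;  /* largest value in B below `value` so far */
    int best_i = -1;
    int max = INT_MIN;   /* fallback: overall maximum of B */
    int max_i = 0;

    for (int i = 0; i < len; i++)
    {
        if (b[i] < value && b[i] > best)
        {
            best = b[i];
            best_i = i;
        }
        if (b[i] > max)
        {
            max = b[i];
            max_i = i;
        }
    }
    return (best_i >= 0) ? best_i : max_i;
}
```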
Detailed Analysis for Large Stacks
Let’s break down the operations in sort_big_stacks():
Phase 1: Push to B
Iterations: n - 3 (leave 3 in A)

Per iteration:
- init_nodes_a(): O(n) for indexing + O(n × m) for target finding + O(n) for cost analysis, where n = size of A and m = size of B
- move_a_to_b(): O(n) in worst case to position nodes
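The per-node cost model behind this analysis fits in a few lines. A sketch, assuming the usual convention that index 0 is the top of the stack:

```c
#include <assert.h>

/* Sketch: the number of rotations needed to bring the node at
 * position i (0 = top) to the top of a stack of len nodes.
 * Rotating forward (ra) costs i moves; rotating backward (rra)
 * costs len - i moves; the cheaper direction is chosen. */
static int rotation_cost(int i, int len)
{
    if (i <= len / 2)
        return i;          /* above the median: rotate forward */
    return len - i;        /* below the median: rotate backward */
}
```

Once positions are known, each candidate move can be costed in O(1) with this formula.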
While the naive upper bound is O(n³), the actual behavior is closer to O(n²) because:
- As A shrinks, B grows (inverse relationship)
- Target finding is typically faster than O(n) due to early termination
- Simultaneous rotations (rr/rrr) reduce actual operation count
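The saving from rr/rrr can be expressed as a cost merge. A sketch, assuming signed costs where positive means ra/rb and negative means rra/rrb (this sign convention is an assumption for illustration, not the project’s representation):

```c
#include <assert.h>

static int iabs(int x) { return (x < 0) ? -x : x; }

/* Sketch: when both stacks rotate in the same direction, rr (or rrr)
 * advances both stacks per operation, so the combined cost is the
 * larger of the two costs instead of their sum. */
static int combined_cost(int cost_a, int cost_b)
{
    if (cost_a >= 0 && cost_b >= 0)      /* both forward: use rr */
        return (cost_a > cost_b) ? cost_a : cost_b;
    if (cost_a <= 0 && cost_b <= 0)      /* both backward: use rrr */
        return (cost_a < cost_b) ? -cost_a : -cost_b;
    return iabs(cost_a) + iabs(cost_b);  /* opposite directions: no overlap */
}
```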
Operation Count Analysis
The algorithm’s efficiency is often measured by the number of stack operations performed:

Theoretical Bounds
| Input Size | Operations (Avg) | Operations (Worst) | Notes |
|---|---|---|---|
| 2 | 0-1 | 1 | Direct swap |
| 3 | 0-2 | 2 | Hardcoded cases |
| 5 | ~8 | 12 | Turkish algorithm |
| 100 | ~700 | ~1500 | Cost-optimized |
| 500 | ~5500 | ~11500 | Cost-optimized |
Operation Count Examples
From the implementation, we can trace how many operations each input size requires.

Why Not O(n log n)?
Traditional comparison-based sorting algorithms like merge sort and quicksort achieve O(n log n). Push Swap’s O(n²) complexity comes from:
- Limited operations: Only push, swap, and rotate; no direct element access
- Cost calculation overhead: Must scan both stacks to calculate costs
- Greedy approach: Local optimization doesn’t guarantee global optimum
- Sequential processing: Can’t parallelize or divide-and-conquer effectively
Despite O(n²) complexity, Push Swap is highly optimized for the constraints of the problem (limited operations). Its operation count is competitive for the specific use case.
Space Complexity
Memory Usage
Space Complexity: O(n)

The stacks themselves store one node per input number, which accounts for the O(n) total.

Auxiliary Space
The algorithm uses minimal auxiliary space:
- A few integer variables for indexing and length tracking: O(1)
- No recursive calls (iterative implementation): O(1) stack space
- No additional data structures: O(1)
The O(n) space complexity is required to store the input. The algorithm itself uses only O(1) additional space.
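As an illustration of why per-node metadata stays O(1), here is a typical node layout. The field names follow common push_swap write-ups and are assumptions, not this codebase’s definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Sketch: one fixed-size node per input number. n nodes therefore
 * cost O(n) memory, and the sorting metadata (index, cost, flags)
 * adds only a constant amount per node. */
typedef struct s_node
{
    int             value;        /* the number being sorted */
    int             index;        /* current position in its stack */
    int             push_cost;    /* rotations needed to push this node */
    bool            above_median; /* cheaper to reach via ra or rra? */
    struct s_node   *target;      /* matching node in the other stack */
    struct s_node   *next;        /* singly linked stack */
}   t_node;
```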
Comparison with Other Algorithms
Classic Sorting Algorithms
| Algorithm | Time (Avg) | Time (Worst) | Space | Operations Model |
|---|---|---|---|---|
| Push Swap | O(n²) | O(n²) | O(n) | Limited stack ops |
| Bubble Sort | O(n²) | O(n²) | O(1) | Array swaps |
| Quick Sort | O(n log n) | O(n²) | O(log n) | Array partitioning |
| Merge Sort | O(n log n) | O(n log n) | O(n) | Array merging |
| Heap Sort | O(n log n) | O(n log n) | O(1) | Heap operations |
Why Push Swap?
Despite having O(n²) complexity, Push Swap is optimal for its specific constraints:

Constraint-Optimized
Designed for the limited operation set (push, swap, rotate)
Low Constants
Efficient implementation with good constant factors
Cache-Friendly
Sequential node access patterns work well with CPU caches
Predictable
Consistent performance without pathological cases
Best, Average, and Worst Cases
Best Case: O(1)
Scenario: Stack is already sorted

Average Case: O(n²)
Scenario: Random input order
- Cost analysis examines O(n²) node pairs
- Most elements require moderate rotations
- Simultaneous rotations provide optimization
Worst Case: O(n²)
Scenario: Reverse sorted order (largest to smallest)
- Maximum rotations required
- Minimal benefit from simultaneous rotations
- Cost calculation still O(n²)
Practical Performance
Benchmarks
Based on typical implementations:

Small Inputs (n ≤ 10)
- Time: microseconds
- Operations: < 20
- Bottleneck: none (negligible)

Medium Inputs (n = 100)
- Time: milliseconds
- Operations: ~700 on average

Large Inputs (n = 500)
- Operations: ~5500 on average
Optimization Techniques
The implementation includes several optimizations:
- Early termination: is_stack_sorted() checks before sorting
- Median tracking: above_median flag minimizes rotation cost
- Simultaneous rotations: rr/rrr reduce operation count
- Greedy selection: Always picks the cheapest move
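The early-termination check is a single linear pass. A sketch with a minimal node type (the struct here is trimmed to what the check needs and is an assumption, not the project’s full definition):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct s_node
{
    int             value;
    struct s_node   *next;
}   t_node;

/* Sketch: the stack is sorted when no node's value exceeds its
 * successor's. Empty and single-node stacks count as sorted, so a
 * pre-sorted input is detected in O(n) and the sort is skipped. */
static bool is_stack_sorted(t_node *stack)
{
    while (stack && stack->next)
    {
        if (stack->value > stack->next->value)
            return false;   /* inversion found */
        stack = stack->next;
    }
    return true;
}
```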
Summary
Time Complexity
O(n²) average and worst case
O(1) for n ≤ 3
Space Complexity
O(n) for storing stacks
O(1) auxiliary space
Operation Count
~7n to ~23n operations depending on input
Highly optimized for stack constraints
Practical Performance
Milliseconds for n=100
Suitable for real-time sorting needs
While O(n²) is not the best theoretical complexity, Push Swap’s implementation is highly efficient for its specific constraints and produces competitive operation counts in practice.