
Complexity Overview

The Push Swap algorithm’s complexity varies based on the input size and the sorting strategy employed. Understanding these complexities helps evaluate the algorithm’s efficiency and expected performance.

Time Complexity

By Input Size

2 Elements (n = 2)

Best Case: 0 operations (already sorted)
Worst Case: 1 operation (sa)
For 2 elements, the algorithm performs a constant-time check and at most one swap operation.
if (size == 2)
    sa(a, false);  // Single operation
3 Elements (n = 3)

Best Case: 0 operations (already sorted)
Worst Case: 2 operations
The sort_three() function uses a hardcoded decision tree covering all 6 possible orderings; each case requires 0-2 operations. Example worst case: [3, 2, 1] requires sa + rra. Since the number of operations is bounded by a constant (2), this is O(1).
Larger Stacks (n > 3)

For larger stacks, the Turkish algorithm dominates the complexity:
Average Case: O(n²)
Worst Case: O(n²)
The quadratic complexity comes from:
  1. For each element to push (n iterations)
  2. Cost analysis examines all nodes in both stacks (O(n) per iteration)
  3. Target finding requires scanning stack B (O(n))
Total: n × (O(n) + O(n)) = O(n²)

Detailed Analysis for Large Stacks

Let’s break down the operations in sort_big_stacks():

Phase 1: Push to B

Iterations: n - 3 (leave 3 in A)
Per iteration:
  • init_nodes_a(): O(n) for indexing + O(n × m) for target finding + O(n) for cost analysis
    • Where n = size of A, m = size of B
  • move_a_to_b(): O(n) in worst case to position nodes
Complexity: O(n) iterations × O(n²) per iteration = O(n³) by naive analysis. However, as B grows, A shrinks proportionally, so the average is closer to O(n²).

Phase 2: Sort 3 elements

Complexity: O(1) - constant time with sort_three()

Phase 3: Push back to A

Iterations: n - 3 (all elements in B)
Per iteration:
  • init_nodes_b(): O(n) for indexing + O(n × m) for target finding
  • move_b_to_a(): O(n) to position target
Complexity: O(n) iterations × O(n) per iteration = O(n²)
Overall Time Complexity: O(n²) amortized
While the naive upper bound is O(n³), the actual behavior is closer to O(n²) because:
  • As A shrinks, B grows (inverse relationship)
  • Target finding is typically faster than O(n) due to early termination
  • Simultaneous rotations (rr/rrr) reduce actual operation count

Operation Count Analysis

The algorithm’s efficiency is often measured by the number of stack operations performed:

Theoretical Bounds

| Input Size | Operations (Avg) | Operations (Worst) | Notes |
|------------|------------------|--------------------|-------|
| 2          | 0-1              | 1                  | Direct swap |
| 3          | 0-2              | 2                  | Hardcoded cases |
| 5          | ~8               | 12                 | Turkish algorithm |
| 100        | ~700             | ~1500              | Cost-optimized |
| 500        | ~5500            | ~11500             | Cost-optimized |
These are approximate values. Actual operation counts depend on the specific input sequence.

Operation Count Examples

From the implementation, we can trace the operations:
Initial: [5, 2, 8, 1, 3]

PB: [2, 8, 1, 3] | [5]        // 1 op
PB: [8, 1, 3] | [2, 5]        // 2 ops

... cost analysis and moves ...

Total operations: ~8-12

Why Not O(n log n)?

Traditional comparison-based sorting algorithms like merge sort and quicksort achieve O(n log n). Push Swap’s O(n²) complexity comes from:
  1. Limited operations: Only push, swap, and rotate - no direct element access
  2. Cost calculation overhead: Must scan both stacks to calculate costs
  3. Greedy approach: Local optimization doesn’t guarantee global optimum
  4. Sequential processing: Can’t parallelize or divide-and-conquer effectively
Despite O(n²) complexity, Push Swap is highly optimized for the constraints of the problem (limited operations). Its operation count is competitive for the specific use case.

Space Complexity

Memory Usage

Space Complexity: O(n)
The algorithm uses:
// Primary storage
Stack A: n nodes × sizeof(t_list) = n × 48 bytes
Stack B: n nodes × sizeof(t_list) = n × 48 bytes (max)

// Node structure
typedef struct s_list
{
    int nbr;              // 4 bytes
    int index;            // 4 bytes
    int push_cost;        // 4 bytes
    int above_median;     // 4 bytes
    int cheapest;         // 4 bytes
    struct s_list *target_node;  // 8 bytes
    struct s_list *next;         // 8 bytes
    struct s_list *prev;         // 8 bytes
}   t_list;              // Total: 44 bytes + padding = 48 bytes
Total Memory: at most n nodes exist at any time (push moves nodes between A and B, never duplicating them), so peak usage is n × 48 bytes = O(n)

Auxiliary Space

The algorithm uses minimal auxiliary space:
  • A few integer variables for indexing and length tracking: O(1)
  • No recursive calls (iterative implementation): O(1) stack space
  • No additional data structures: O(1)
Auxiliary Space Complexity: O(1)
The O(n) space complexity is required to store the input. The algorithm itself uses only O(1) additional space.

Comparison with Other Algorithms

Classic Sorting Algorithms

| Algorithm   | Time (Avg) | Time (Worst) | Space    | Operations Model |
|-------------|------------|--------------|----------|------------------|
| Push Swap   | O(n²)      | O(n²)        | O(n)     | Limited stack ops |
| Bubble Sort | O(n²)      | O(n²)        | O(1)     | Array swaps |
| Quick Sort  | O(n log n) | O(n²)        | O(log n) | Array partitioning |
| Merge Sort  | O(n log n) | O(n log n)   | O(n)     | Array merging |
| Heap Sort   | O(n log n) | O(n log n)   | O(1)     | Heap operations |

Why Push Swap?

Despite having O(n²) complexity, Push Swap is optimal for its specific constraints:

Constraint-Optimized

Designed for the limited operation set (push, swap, rotate)

Low Constants

Efficient implementation with good constant factors

Cache-Friendly

Sequential node access patterns work well with CPU caches

Predictable

Consistent performance without pathological cases

Best, Average, and Worst Cases

Best Case: O(1)

Scenario: Stack is already sorted
is_stack_sorted(*a) returns true, so no operations are performed.
Total: 0 operations

Average Case: O(n²)

Scenario: Random input order
  • Cost analysis examines O(n²) node pairs
  • Most elements require moderate rotations
  • Simultaneous rotations provide optimization
Operation count: ~7n for n=100, ~11n for n=500

Worst Case: O(n²)

Scenario: Reverse sorted order (largest to smallest)
  • Maximum rotations required
  • Minimal benefit from simultaneous rotations
  • Cost calculation still O(n²)
Operation count: ~15n for n=100, ~23n for n=500
The “worst case” operation count is higher than average, but the time complexity remains O(n²) in both cases.

Practical Performance

Benchmarks

Based on typical implementations, for small inputs (n ≤ 5):
  • Time: microseconds
  • Operations: fewer than 20
  • Bottleneck: none (negligible at this scale)

Optimization Techniques

The implementation includes several optimizations:
  1. Early termination: is_stack_sorted() checks before sorting
  2. Median tracking: above_median flag minimizes rotation cost
  3. Simultaneous rotations: rr/rrr reduce operation count
  4. Greedy selection: Always picks the cheapest move

Summary

Time Complexity

O(n²) average and worst case
O(1) for n ≤ 3

Space Complexity

O(n) for storing stacks
O(1) auxiliary space

Operation Count

~7n to ~23n operations depending on input
Highly optimized for stack constraints

Practical Performance

Milliseconds for n=100
Suitable for real-time sorting needs
While O(n²) is not the best theoretical complexity, Push Swap’s implementation is highly efficient for its specific constraints and produces competitive operation counts in practice.
