## Overview
Quick Sort is a highly efficient divide-and-conquer sorting algorithm. It works by selecting a ‘pivot’ element from the array and partitioning the other elements into two sub-arrays according to whether they are less than or greater than the pivot. The sub-arrays are then sorted recursively. Quick Sort is often faster in practice than other O(n log n) algorithms.

## Implementation
## How It Works
1. **Choose Pivot**: Select a pivot element (this implementation uses the middle element)
2. **Partition**: Rearrange the array so elements smaller than the pivot come before it, and elements greater come after
3. **Recursively Sort**: Apply the same process to the sub-arrays on either side of the pivot
4. **Base Case**: Arrays with fewer than 2 elements are already sorted
The choice of pivot is crucial to Quick Sort’s performance. This implementation uses the middle element to avoid worst-case behavior on already-sorted arrays.
## Complexity Analysis

### Time Complexity
- Best Case: O(n log n) - balanced partitions
- Average Case: O(n log n)
- Worst Case: O(n²) - poor pivot choices
### Space Complexity

- Auxiliary Space: O(1) - sorts in place, no extra arrays
- Call Stack: O(log n) average, O(n) worst case
## Usage Example
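The original snippet is not included in this chunk. The sketch below assumes the `quickSort()`/`quick()` API named in the walkthrough, with a middle-element pivot and Hoare-style partitioning as described above; treat it as an illustration, not the original source:

```typescript
// Sketch (assumed names): in-place Quick Sort with a middle-element pivot.
function quickSort(arr: number[]): number[] {
  quick(arr, 0, arr.length - 1);
  return arr;
}

function quick(arr: number[], lo: number, hi: number): void {
  if (lo >= hi) return; // base case: fewer than 2 elements
  const pivot = arr[Math.floor((lo + hi) / 2)];
  let i = lo;
  let j = hi;
  while (i <= j) {
    while (arr[i] < pivot) i++; // find a left-side element >= pivot
    while (arr[j] > pivot) j--; // find a right-side element <= pivot
    if (i <= j) {
      [arr[i], arr[j]] = [arr[j], arr[i]]; // swap and advance both pointers
      i++;
      j--;
    }
  }
  quick(arr, lo, j); // sort left partition
  quick(arr, i, hi); // sort right partition
}

console.log(quickSort([10, 7, 8, 9, 1, 5])); // → [1, 5, 7, 8, 9, 10]
```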
## Visual Example
Let’s trace through sorting [10, 7, 8, 9, 1, 5]:
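The step-by-step diagram is not reproduced in this chunk. As a stand-in, this instrumented sketch (assumed names, not the original code) prints the array after each partition pass so the same input can be followed:

```typescript
// Instrumented sketch: logs the array after each partition step.
function quickTrace(arr: number[], lo = 0, hi = arr.length - 1): number[] {
  if (lo >= hi) return arr;
  const pivot = arr[Math.floor((lo + hi) / 2)];
  let i = lo;
  let j = hi;
  while (i <= j) {
    while (arr[i] < pivot) i++;
    while (arr[j] > pivot) j--;
    if (i <= j) {
      [arr[i], arr[j]] = [arr[j], arr[i]];
      i++;
      j--;
    }
  }
  console.log(`pivot ${pivot}, range [${lo}..${hi}] →`, [...arr]);
  quickTrace(arr, lo, j);
  quickTrace(arr, i, hi);
  return arr;
}

quickTrace([10, 7, 8, 9, 1, 5]);
// first pass picks pivot 8 and yields [5, 7, 1, 9, 8, 10];
// each recursive call then partitions a smaller range until [1, 5, 7, 8, 9, 10]
```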
## Characteristics

### Stability
Quick Sort is not stable by default. Equal elements may not maintain their relative order. Stable variants exist but require additional space or complexity.
### In-Place
Yes, Quick Sort is in-place. It only requires O(log n) stack space for recursion in the average case.
### Adaptive
No, standard Quick Sort is not adaptive. It doesn’t significantly benefit from pre-sorted data, though pivot selection strategies can help.
### Online
No, Quick Sort is not online. It needs access to all elements for partitioning.
## When to Use

Quick Sort is often the fastest sorting algorithm in practice and is widely used in standard libraries. Use it for:
- Large datasets (typically the fastest in practice)
- General-purpose sorting (used in many standard libraries)
- When average-case performance is more important than worst-case
- Memory-constrained environments (in-place sorting)
- When a stable sort is not required
Avoid it when:

- Worst-case performance guarantees are critical
- Stable sorting is required
- Data is already sorted or nearly sorted (without good pivot selection)
- Stack space is extremely limited
## Performance Insights

### Partition Strategy
This implementation uses the Hoare partition scheme:

- Two pointers start at the ends and move toward each other
- Elements are swapped when pointers find elements on the wrong side of the pivot
- Generally more efficient than the Lomuto partition scheme
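In isolation, the scheme looks roughly like this (a sketch; `hoarePartition` is an illustrative name, not taken from the original code):

```typescript
// Hoare partition sketch: two pointers close in from the ends, swapping
// out-of-place elements; returns j such that every element of arr[lo..j]
// is <= every element of arr[j+1..hi].
function hoarePartition(arr: number[], lo: number, hi: number): number {
  const pivot = arr[Math.floor((lo + hi) / 2)];
  let i = lo - 1;
  let j = hi + 1;
  while (true) {
    do { i++; } while (arr[i] < pivot); // advance until arr[i] >= pivot
    do { j--; } while (arr[j] > pivot); // retreat until arr[j] <= pivot
    if (i >= j) return j; // pointers crossed: partition point found
    [arr[i], arr[j]] = [arr[j], arr[i]];
  }
}

const a = [10, 7, 8, 9, 1, 5];
const p = hoarePartition(a, 0, a.length - 1);
// p === 2, a is now [5, 7, 1, 9, 8, 10]: everything left of the split
// point is <= everything right of it, but neither half is sorted yet
```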
### Pivot Selection Strategies

Different pivot selection methods affect performance:

**Middle Element** (used here):
- Avoids worst case on sorted/reverse-sorted data
- Simple to implement
- Good for many real-world scenarios
**First/Last Element:**
- Simplest, but worst case on sorted data
- O(n²) performance on already-sorted arrays
**Random Pivot:**
- Probabilistically avoids worst case
- Expected O(n log n) even on adversarial input
**Median-of-Three:**
- Take median of first, middle, and last elements
- Better pivot choice, fewer comparisons
- Used in many optimized implementations
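A median-of-three chooser can be sketched as follows (`medianOfThree` is a hypothetical helper, not part of the original implementation):

```typescript
// Median-of-three sketch: returns whichever of the indices lo, mid, hi
// holds the median of those three values, to be used as the pivot index.
function medianOfThree(arr: number[], lo: number, hi: number): number {
  const mid = Math.floor((lo + hi) / 2);
  const a = arr[lo];
  const b = arr[mid];
  const c = arr[hi];
  if ((a <= b && b <= c) || (c <= b && b <= a)) return mid;
  if ((b <= a && a <= c) || (c <= a && a <= b)) return lo;
  return hi;
}

medianOfThree([9, 2, 5, 7, 1], 0, 4); // candidates 9, 5, 1 → median 5 at index 2
```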
## Code Walkthrough
### Helper Function: `quick()`
### Main Function: `quickSort()`
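The walkthrough's code listings are not included in this chunk. A commented sketch consistent with the description above (assumed signatures; middle pivot, Hoare-style pointers) might look like:

```typescript
// quick() recursively sorts arr[lo..hi] in place.
function quick(arr: number[], lo: number, hi: number): void {
  // Base case: ranges with fewer than 2 elements are already sorted.
  if (lo >= hi) return;

  // Choose pivot: the middle element, to avoid worst-case behavior
  // on already-sorted input.
  const pivot = arr[Math.floor((lo + hi) / 2)];

  // Partition: pointers i and j start at the ends and move inward,
  // swapping elements that sit on the wrong side of the pivot.
  let i = lo;
  let j = hi;
  while (i <= j) {
    while (arr[i] < pivot) i++;
    while (arr[j] > pivot) j--;
    if (i <= j) {
      [arr[i], arr[j]] = [arr[j], arr[i]];
      i++;
      j--;
    }
  }

  // Recursively sort the partitions on either side of the split.
  quick(arr, lo, j);
  quick(arr, i, hi);
}

// quickSort() is the public entry point: it kicks off the recursion
// over the whole array and returns the array for convenience.
function quickSort(arr: number[]): number[] {
  quick(arr, 0, arr.length - 1);
  return arr;
}

quickSort([10, 7, 8, 9, 1, 5]); // → [1, 5, 7, 8, 9, 10]
```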
## Comparison with Other Sorts
| Algorithm | Best Case | Average Case | Worst Case | Space | Stable | In Practice |
|---|---|---|---|---|---|---|
| Quick Sort | O(n log n) | O(n log n) | O(n²) | O(log n) | No | Fastest |
| Merge Sort | O(n log n) | O(n log n) | O(n log n) | O(n) | Yes | Predictable |
| Heap Sort | O(n log n) | O(n log n) | O(n log n) | O(1) | No | Consistent |
| Insertion Sort | O(n) | O(n²) | O(n²) | O(1) | Yes | Small arrays |