4. Quick Sort
- Lecture: Lecture 3.4 (slides)
- Objectives: Understand how quick sort works, and why it sorts fast
- Concepts: Sorting, recursion, pivot element
We have already looked at three sorting algorithms: insertion sort, selection sort, and bubble sort. All are rather slow, as they run in quadratic time (see Lecture 2.5). We will now scrutinize one of the faster ones, namely quick sort, implemented among others by the qsort function in the C standard library.
4.1. Quick Sort
4.1.1. The Idea: Partitioning
The idea of quick sort relies on partitioning the given items around a pivot item. (Quick sort is attributed to C. A. R. Hoare in 1959; he later received the Turing Award, in 1980, for his contribution to formal verification.) We pick the pivot at random, and then every item that is smaller than the pivot goes on one side, and the rest on the other side. Fig. 4.5 illustrates this idea. We will look later at what algorithm one can use to do that.
Fig. 4.5 Partitioning a sequence around a chosen pivot, here 22
Note that partitioning is a property of a sorted sequence: any item in a sorted sequence is preceded by smaller items and followed by larger items.
4.1.2. The “Quick-sort” Algorithm
Equipped with partitioning the sequence around a chosen pivot, we can describe quick sort as follows:
1. Choose a pivot item, randomly.
2. Partition the sequence: items that are smaller than the pivot go to its left, and the rest to its right.
3. Apply quick sort on the left of the pivot (recursion).
4. Apply quick sort on the right of the pivot (recursion).
Listing 4.6 shows a Python program that implements quick sort using recursion. Fig. 4.6 shows how this would unfold on a sequence of integers: the sequence is gradually broken down into smaller sequences, until it reaches sequences of length 1, which are sorted by definition.
Fig. 4.6 Unfolding quick sort on a small sequence of integers.
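As a rough sketch, such a recursive implementation could look as follows (a minimal version using out-of-place partitioning; the name quick_sort is illustrative):

def quick_sort(sequence):
    # Sequences of length 0 or 1 are sorted by definition
    if len(sequence) <= 1:
        return sequence
    # Choose the middle item as pivot, then partition around it
    pivot = sequence[len(sequence) // 2]
    smaller = [item for item in sequence if item < pivot]
    equal = [item for item in sequence if item == pivot]
    greater = [item for item in sequence if item > pivot]
    # Recursively sort each side, then reassemble
    return quick_sort(smaller) + equal + quick_sort(greater)

For instance, quick_sort([25, 11, 22, 7, 30]) returns [7, 11, 22, 25, 30].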
Caution
Quick sort, when written recursively, is a fairly short algorithm. Its behavior, however, is not trivial. This is often the case with recursive algorithms.
4.1.3. Out-of-place Partitioning
How do we partition a sequence around a chosen pivot? The simplest solution is the “out-of-place” approach, where we create a new sequence that holds the result of the partitioning. If we accept using this extra memory, we can proceed as follows:
1. We create a new, empty sequence that will hold the result of the partitioning.
2. We choose a pivot index, for instance the middle item. Other strategies are possible.
3. We copy the pivot item into our result sequence.
4. We traverse the given sequence, and for every item but the pivot:
   - If the current item is smaller than the pivot, we insert it in front of the result, that is, before the pivot.
   - If the current item is greater than or equal to the pivot, we insert it at the end of the result, that is, after the pivot.
Listing 4.7 shows how that looks in Python. Here we choose the middle item as pivot. The append() function inserts an item at the end of a sequence.
def partition(sequence):
    # Choose the middle item as the pivot
    pivot_index = len(sequence) // 2
    pivot = sequence[pivot_index]
    # The result sequence initially contains only the pivot
    result = [pivot]
    for index in range(len(sequence)):
        # Skip the pivot itself
        if index == pivot_index:
            continue
        current = sequence[index]
        if current >= pivot:
            # Larger items go at the end, after the pivot
            result.append(current)
        else:
            # Smaller items go in front, before the pivot
            result.insert(0, current)
    return pivot, result
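For instance, partitioning a small sequence around its middle item, 22:

pivot, result = partition([25, 11, 22, 7, 30])
print(pivot)   # 22
print(result)  # [7, 11, 22, 25, 30]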
This approach is not ideal, because it requires allocating a new sequence each time we partition. Keep in mind that quick sort will partition sub-sequences again and again. Besides, insertion in front of a sequence runs in \(O(n)\), so the runtime would not be great either. A better way is “in-place” partitioning, where we only swap items, without any extra memory cost.
4.1.4. In-place Partitioning
To partition “in-place” we rely on the swap operation, which exchanges the positions of two items in a sequence and runs in \(O(1)\). To do that, we organize our sequence as shown in Fig. 4.7: we temporarily place the pivot in front, and we divide the rest into smaller items on the left and larger items on the right, with the items yet to be partitioned in between.
Fig. 4.7 Setup used to partition a sequence: The pivot is placed in front (temporarily), while the rest is split between the smaller items on the left and the larger items on the right.
As we progress, we move items from the middle to either the smaller or the greater region. The variables first and last keep track of the remaining items yet to be partitioned. Overall, we proceed as follows:
1. We choose a pivot item, and we swap it with the first item, to put it in a “safe” place.
2. Initially, first and last point to the second and the last item, respectively.
3. As long as last is not smaller than first:
   - If the item at first is greater than the pivot, we swap it with the item at last, and we decrement last.
   - If the item at first is smaller than or equal to the pivot, we simply increment first.
4. We swap back the pivot with the item just before first, to put it back in its right place.
def swap(sequence, left, right):
    sequence[left], sequence[right] = sequence[right], sequence[left]

def partition(sequence, lower, upper):
    # Choose the middle item as pivot, and move it in front, in a "safe" place
    pivot_index = (lower + upper) // 2
    swap(sequence, lower, pivot_index)
    first, last = lower + 1, upper - 1
    while first <= last:
        if sequence[first] <= sequence[lower]:
            # Smaller than (or equal to) the pivot: grow the left region
            first += 1
        else:
            # Greater than the pivot: move it to the right region
            swap(sequence, first, last)
            last -= 1
    # Put the pivot back in its final place, and return its index
    swap(sequence, lower, first - 1)
    return first - 1
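Combining this in-place partitioning with the recursive scheme of Section 4.1.2 gives a complete in-place quick sort. A possible sketch (the quick_sort wrapper below is illustrative; upper denotes the index just past the last item):

def quick_sort(sequence, lower=0, upper=None):
    if upper is None:
        upper = len(sequence)
    # Sub-sequences of length 0 or 1 are sorted by definition
    if upper - lower <= 1:
        return
    pivot_index = partition(sequence, lower, upper)
    quick_sort(sequence, lower, pivot_index)      # left of the pivot
    quick_sort(sequence, pivot_index + 1, upper)  # right of the pivot

items = [25, 11, 22, 7, 30]
quick_sort(items)
print(items)  # [7, 11, 22, 25, 30]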
4.2. Efficiency
Let’s now look at how fast quick sort is. First, we have to distinguish the best case from the worst case, before trying to calculate their growth orders.
4.2.1. Best Case
Fig. 4.8 Unfolding the best case scenario for quick sort: The chosen pivot index turns out to always hold the median value, which yields a “perfect” split in halves.
How fast would that be? Again, for such a recursive algorithm, we have to model the runtime using a recurrence relationship. At a high level, the time spent sorting is the time spent partitioning, plus the time spent sorting the left- and right-hand sides of the pivot.
If we assume the time spent partitioning is proportional to the length of the given sequence, we get:
\(T(n) = n + 2 \cdot T(n/2)\), with \(T(1) = c\) for some constant \(c\).
We can expand it to see a pattern emerge, as follows:
\(T(n) = n + 2T(n/2) = n + 2\left(n/2 + 2T(n/4)\right) = 2n + 4T(n/4) = \dots = kn + 2^k \, T(n/2^k)\)
Now the question becomes: when does the term \(n/2^k\) become smaller than 2? Put another way, how many times can one recursively divide \(n\) by 2? The answer is \(\log_2 n\).
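Substituting \(k = \log_2 n\) into the expanded form gives:
\(T(n) = n \log_2 n + n \cdot c \in O(n \log n)\)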
Important
In the best case, quick sort runs in \(O(n \log n)\)
4.2.2. Worst Case
What is the worst case? When we pick a pivot, we pick an index, here the middle one, and the hope is that its value is the “median” value of the sequence, that is, it has as many items on its left as it has on its right. That way, we get a “perfect” split of the sequence into two halves. By contrast, the worst case occurs when the pivot turns out to be the minimum (or the maximum). As shown in Fig. 4.9, that yields broken splits where one side is empty, and, in turn, \(n\) recursion levels.
Fig. 4.9 Unfolding the worst case: At every recursion, the chosen pivot turns out to be the minimum (resp. the maximum) of the sequence.
How many operations would that require? To simplify the calculation, let's count only the swap operations, and assume that partitioning a sequence of length \(\ell\) runs in \(O(\ell)\).
As for other recursive algorithms, we have to formulate the efficiency using a recurrence relationship:
\(T(n) = n + T(n-1)\), with \(T(1) = c\).
If we unfold this recurrence, we can see that it yields the sum of the first \(n\) integers.
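Concretely:
\(T(n) = n + T(n-1) = n + (n-1) + T(n-2) = \dots = n + (n-1) + \dots + 1 = \frac{n(n+1)}{2} \in O(n^2)\)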
Important
In the worst case, quick sort runs in \(O(n^2)\), just like insertion, selection, and bubble sort, but in practice this case is rare enough.