Listing 3-1. Gnome Sort, An Example Sorting Algorithm
def gnomesort(seq):
    i = 0
    while i < len(seq):
        if i == 0 or seq[i-1] <= seq[i]:
            i += 1                               # in order so far; keep scanning
        else:
            seq[i], seq[i-1] = seq[i-1], seq[i]  # swap the misplaced pair
            i -= 1                               # follow the element downward
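As a quick usage sketch (my addition, not part of the original listing): gnomesort rearranges the list in place and returns None, so you inspect the argument afterward.

seq = [3, 1, 4, 1, 5, 9, 2, 6]
gnomesort(seq)   # sorts in place; the call itself returns None
print(seq)       # [1, 1, 2, 3, 4, 5, 6, 9]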
Listing 3-2. Merge Sort, Another Example Sorting Algorithm
def mergesort(seq):
    mid = len(seq)//2                       # split the sequence down the middle
    lft, rgt = seq[:mid], seq[mid:]
    if len(lft) > 1: lft = mergesort(lft)   # sort each half recursively,
    if len(rgt) > 1: rgt = mergesort(rgt)   # unless it is trivially sorted
    res = []
    while lft and rgt:                      # merge: move the larger of the two
        if lft[-1] >= rgt[-1]:              # last elements over to res
            res.append(lft.pop())
        else:
            res.append(rgt.pop())
    res.reverse()                           # res was built in reverse order
    return (lft or rgt) + res               # prepend whatever remains
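A matching sketch for Listing 3-2 (again, my addition): unlike gnomesort, mergesort builds and returns a new sorted list, leaving its argument unchanged, because the slicing copies the input before the halves are consumed by pop.

print(mergesort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]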
Table 3-2. The Three Cases of the Master Theorem

Case  Condition                     Solution                      Example
1     f(n) ∈ O(n^(log_b a − ε))     T(n) ∈ Θ(n^(log_b a))         T(n) = 2T(n/2) + lg n
2     f(n) ∈ Θ(n^(log_b a))         T(n) ∈ Θ(n^(log_b a) lg n)    T(n) = 2T(n/4) + √n
3     f(n) ∈ Ω(n^(log_b a + ε))     T(n) ∈ Θ(f(n))                T(n) = 2T(n/3) + n

(In cases 1 and 3, ε is some constant greater than zero.)
⁷ Merge sort is a classic, first implemented by computer science legend John von Neumann on the EDVAC in 1945. You'll learn more about that and other similar algorithms in Chapter 6. Gnome sort was invented in 2000 by Hamid Sarbazi-Azad, under the name Stupid sort.
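To see the table in action, consider its case 2 example: with a = 2 and b = 4 we get log_b a = 1/2, so f(n) = √n matches n^(log_b a) exactly, and the solution is Θ(√n lg n). Here is a small numeric sanity check of that prediction (my own sketch, not from the original text):

from functools import lru_cache
from math import log2, sqrt

# A sketch (not from the book): evaluate T(n) = 2T(n/4) + sqrt(n) and
# compare it to the predicted order, sqrt(n) * lg n. If the master
# theorem is right, the ratio should flatten out toward a constant.
@lru_cache(maxsize=None)
def T(n):
    if n < 4: return 1  # assume the base case T(k) is Theta(1)
    return 2*T(n//4) + sqrt(n)

for n in [4**k for k in range(5, 10)]:
    print(n, T(n) / (sqrt(n) * log2(n)))  # ratios creep toward roughly 0.5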
Gnome sort contains a single while loop and an index variable that goes from 0 to len(seq)-1, which might tempt us to conclude that it has a linear running time, but the statement i -= 1 in the last line indicates otherwise. To figure out how long it runs, you need to understand something about how it works. Initially, it scans through seq from the left (repeatedly incrementing i), looking for a position i where seq[i-1] is greater than seq[i], that is, two values that are in the wrong order. At this point, the else part kicks in.

The else clause swaps seq[i] and seq[i-1] and decrements i. This behavior will continue until, once again, seq[i-1] <= seq[i] (or we reach position 0) and order is restored. In other words, the algorithm alternately scans upward in the sequence for an out-of-place (that is, too small) element and moves that element down to a valid position by repeated swapping. What's the cost of all this? Let's ignore the average case and focus on the best and worst. The best case occurs when the sequence is sorted: gnomesort will just scan through seq without finding anything out of place and then terminate, yielding a running time of Θ(n).
The worst case is a little less straightforward, but not much. Note that once we find an element that is out of place, all elements before that point are already sorted, and moving the new element into a correct position won't scramble them. That means the number of sorted elements will increase by one each time we discover a misplaced element, and the next misplaced element will have to be further to the right. The worst possible cost of finding and moving a misplaced element into place is proportional to its position, so the worst the running time can possibly get is 1 + 2 + ... + (n−1), which is Θ(n²). This is a bit hypothetical at the moment: I've shown it can't get worse than this, but can it ever get this bad? Indeed it can. Consider the case when the elements are sorted in descending order (that is, reversed with respect to what we want). Then every element is in the wrong place and will have to be moved all the way to the start, giving us the quadratic running time. So, in general, the running time of gnome sort is Ω(n) and O(n²), and these are tight bounds representing the best and worst cases, respectively.
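To make these bounds concrete, here is a small counting experiment (my own sketch, not from the original text). It instruments the loop from Listing 3-1 and counts iterations on a sorted and on a reversed input:

# A sketch (not from the book): count the while-loop iterations of
# gnome sort on a sorted input versus a reversed one.
def gnomesort_steps(seq):
    i, steps = 0, 0
    while i < len(seq):
        steps += 1
        if i == 0 or seq[i-1] <= seq[i]:
            i += 1
        else:
            seq[i], seq[i-1] = seq[i-1], seq[i]
            i -= 1
    return steps

n = 100
print(gnomesort_steps(list(range(n))))         # sorted: 100 iterations, i.e., n
print(gnomesort_steps(list(range(n, 0, -1))))  # reversed: 10000 iterations, i.e., n*n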
Now, take a look at merge sort (Listing 3-2). It is a bit more complicated than gnome sort, so I'll postpone explaining how it manages to sort things until Chapter 6. Luckily, we can analyze its running time without understanding how it works! Just look at the overall structure. The input (seq) has a size of n. There are two recursive calls, each on a subproblem of size n/2 (or as close as we can get with integer sizes). In addition, there is some work performed in a while loop and in res.reverse(); Exercise 3-11 asks you to show that this work is Θ(n). (Exercise 3-12 asks you what happens if you use pop(0) instead of pop().) This gives us the well-known recurrence number 8 from Table 3-1, T(n) = 2T(n/2) + Θ(n), which means that the running time of merge sort is Θ(n lg n), regardless of the input. This means that if we're expecting the data to be almost sorted, we might prefer gnome sort, but in general we'd probably be much better off scrapping it in favor of merge sort.
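The difference shows up readily in practice. A rough timing sketch (mine, not from the original text; exact numbers will vary by machine):

# A sketch (not from the book): time both sorts on the same random input.
from random import random
from timeit import timeit

data = [random() for _ in range(2000)]
print(timeit(lambda: gnomesort(list(data)), number=1))  # quadratic: noticeably slower
print(timeit(lambda: mergesort(list(data)), number=1))  # loglinear: much faster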
■ Note Python's sorting algorithm, timsort, is a naturally adaptive version of merge sort. It manages to achieve the linear best-case running time while keeping the loglinear worst case. You can find some more details in the "Black Box" sidebar on timsort in Chapter 6.
Summary
The sum of the n first integers is quadratic, and the sum of the lg n first powers of two is linear. The first of these
identities can be illustrated as a round-robin tournament, with all possible pairings of n elements; the second is
related to a knockout tournament, with lg n rounds, where all but the winner must be knocked out. The number of
permutations of n is n!, while the number of k-combinations (subsets of size k) from n, written C(n, k), is n!/(k!·(n–k)!).
This is also known as the binomial coefficient.
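For instance (a quick check of these formulas using the standard library; my addition, and math.comb requires Python 3.8 or later):

from math import comb, factorial

n, k = 10, 3
print(factorial(n))                                       # permutations of 10 elements: 3628800
print(comb(n, k))                                         # C(10, 3): 120
print(factorial(n) // (factorial(k) * factorial(n - k)))  # the same, via n!/(k!(n-k)!): 120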
A function is recursive if it calls itself (directly or via other functions). A recurrence relation is an equation that
relates a function to itself, in a recursive way (such as T(n) = T(n/2) + 1). These equations are often used to describe
the running times of recursive algorithms, and to be able to solve them, we need to assume something about the
base case of the recursion; normally, we assume that T(k) is Θ(1), for some constant k. This chapter presents three
main ways of solving recurrences: (1) repeatedly apply the original equation to unravel the recursive occurrences of
T until you find a pattern; (2) guess a solution, and try to prove that it’s correct using induction; and (3) for divide and
conquer recurrences that fit one of the cases of the master theorem, simply use the corresponding solution.
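For example, method 1 applied to the recurrence mentioned above, T(n) = T(n/2) + 1 (a sketch of my own, assuming T(1) is Θ(1)), unravels like this:

T(n) = T(n/2) + 1 = T(n/4) + 2 = ... = T(n/2^k) + k = T(1) + lg n

so T(n) is Θ(lg n).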
If You’re Curious ...
The topics of this chapter (and the previous, for that matter) are commonly classified as part of what's called discrete mathematics.⁸ There are plenty of books on this topic, and most of the ones I've seen have been pretty cool. If you like that sort of thing, knock yourself out at the library or at a local or online bookstore. I'm sure you'll find plenty to keep you occupied for a long time.
One book I like that deals with counting and proofs (but not discrete math in general) is Proofs That Really
Count, by Benjamin and Quinn. It’s worth a look. If you want a solid reference that deals with sums, combinatorics,
recurrences, and lots of other meaty stuff, specifically written for computer scientists, you should check out the classic
Concrete Mathematics, by Graham, Knuth, and Patashnik. (Yeah, it’s that Knuth, so you know it’s good.) If you just
need some place to look up the solution for a sum, you could try Wolfram Alpha (http://wolframalpha.com), as mentioned earlier, or get one of those pocket references full of formulas (again, probably available from your favorite bookstore).
If you want more details on recurrences, you could look up the standard methods in one of the algorithm
textbooks I mentioned in Chapter 1, or you could research some of the more advanced methods, which let you deal
with more recurrence types than those I’ve dealt with here. For example, Concrete Mathematics explains how to use
so-called generating functions. If you look around online, you’re also bound to find lots of interesting stuff on solving
recurrences with annihilators or using the Akra-Bazzi theorem.
The sidebar on pseudopolynomiality earlier in this chapter used primality checking as an example. Many (older)
textbooks claim that this is an unsolved problem (that is, that there are no known polynomial algorithms for solving
it). Just so you know—that’s not true anymore: In 2002, Agrawal, Kayal, and Saxena published their groundbreaking
paper “PRIMES is in P,” describing how to do polynomial primality checking. (Oddly enough, factoring numbers is still
an unsolved problem.)
Exercises
3-1. Show that the properties described in the section “Working with Sums” are correct.
3-2. Use the rules from Chapter 2 to show that n(n–1)/2 is Θ(n²).
3-3. The sum of the first k non-negative integer powers of 2 is 2^(k+1) – 1. Show how this property lets you represent any positive integer as a binary number.
3-4. In the section “The Hare and the Tortoise,” two methods of looking for a number are sketched.
Turn these methods into number-guessing algorithms, and implement them as Python programs.
3-5. Show that C(n, k) = C(n, n–k).
3-6. In the recursive function S early in the section "Recursion and Recurrences," assume that instead of using a position parameter, i, the function simply returned seq[0] + S(seq[1:]). What would the asymptotic running time be now?
3-7. Solve recurrence 2 in Table 3-1 using repeated substitution.
⁸ If you're not sure about the difference between discrete and discreet, you might want to look it up.
3-8. Solve recurrence 3 in Table 3-1 using repeated substitution.
3-9. Solve recurrence 4 in Table 3-1 using repeated substitution.
3-10. Show that x^(log y) = y^(log x), no matter the base of the logarithm.
3-11. Show that f(n) is Θ(n) for the implementation of merge sort in Listing 3-2.
3-12. In merge sort in Listing 3-2, objects are popped from the end of each half of the sequence (with
pop()). It might be more intuitive to pop from the beginning, with pop(0), to avoid having to reverse
res afterward (I’ve seen this done in real life), but pop(0), just like insert(0), is a linear operation, as
opposed to pop(), which is constant. What would such a switch mean for the overall running time?
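To see the stated cost difference for yourself, here is a quick sketch of mine (not from the book; timings vary by machine):

# A sketch (not from the book): empty a list from the front (linear pops)
# and from the back (constant pops), and compare the total time.
from timeit import timeit

print(timeit("while data: data.pop(0)", setup="data = list(range(20000))", number=1))
print(timeit("while data: data.pop()",  setup="data = list(range(20000))", number=1))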
References
Agrawal, M., Kayal, N., and Saxena, N. (2004). PRIMES is in P. The Annals of Mathematics, 160(2):781–793.
Akra, M. and Bazzi, L. (1998). On the solution of linear recurrence equations. Computational Optimization and
Applications, 10(2):195–210.
Benjamin, A. T. and Quinn, J. (2003). Proofs that Really Count: The Art of Combinatorial Proof. The Mathematical
Association of America.
Graham, R. L., Knuth, D. E., and Patashnik, O. (1994). Concrete Mathematics: A Foundation for Computer Science,
second edition. Addison-Wesley Professional.