Table 3-1. Some Basic Recurrences with Solutions, as Well as Some Sample Applications

#  Recurrence            Solution     Example Applications
1  T(n) = T(n–1) + 1     Θ(n)         Processing a sequence, for example, with reduce
2  T(n) = T(n–1) + n     Θ(n^2)       Handshake problem
3  T(n) = 2T(n–1) + 1    Θ(2^n)       Towers of Hanoi
4  T(n) = 2T(n–1) + n    Θ(2^n)
5  T(n) = T(n/2) + 1     Θ(lg n)      Binary search (see the “Black Box” sidebar on bisect in Chapter 6)
6  T(n) = T(n/2) + n     Θ(n)         Randomized select, average case (see Chapter 6)
7  T(n) = 2T(n/2) + 1    Θ(n)         Tree traversal (see Chapter 5)
8  T(n) = 2T(n/2) + n    Θ(n lg n)    Sorting by divide and conquer (see Chapter 6)
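The closed forms in the table can be spot-checked numerically. Here is a small sketch of mine (not from the book) that evaluates recurrence 3, the Towers of Hanoi count, directly; the base case T(1) = 1 is my assumption:

```python
def hanoi(n):
    # Recurrence 3: T(n) = 2T(n-1) + 1, with T(1) = 1 -- the number
    # of moves needed to solve the Towers of Hanoi with n discs.
    return 1 if n == 1 else 2 * hanoi(n - 1) + 1

# The solution 2**n - 1 matches exactly, confirming the Theta(2**n) entry:
for n in range(1, 20):
    assert hanoi(n) == 2 ** n - 1
```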
[Figure 3-5 shows a perfectly balanced binary tree. The root is labeled n, its two children n/2, and so on, down to the leaves, which are labeled 1; the tree has n − 1 internal nodes, n = 2^h leaves, and height h = lg n.]

Figure 3-5. A summary of some important properties of perfectly balanced binary trees
Note ■ I’ve already mentioned the assumption that the base case has constant time (T(k) = t₀, for k ≤ n₀, for some constants t₀ and n₀). In recurrences where the argument to T is n/b, for some constant b, we run up against another technicality: the argument really should be an integer. We could achieve that by rounding (using floor and ceil all over the place), but it’s common to simply ignore this detail (really assuming that n is a power of b). To remedy the sloppiness, you should check your answers with the method described in “Guessing and Checking” later in this chapter.
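To illustrate that the rounding technicality is usually harmless, here is a quick sketch of my own (with an assumed base case T(1) = 1) showing that recurrence 5 with the argument rounded down stays logarithmic for every n, not just powers of two:

```python
from math import floor, log2

def T(n):
    # Recurrence 5 with rounding: T(n) = T(floor(n/2)) + 1, with T(1) = 1
    return 1 if n == 1 else T(n // 2) + 1

# For arbitrary n, the result is floor(lg n) + 1 -- still logarithmic:
for n in range(1, 1024):
    assert T(n) == floor(log2(n)) + 1
```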
Look at recurrence 5. There’s only one recursive call, on half the problem, and a constant amount of work in
addition. If we see the full recursion as a tree (a recursion tree), this extra work (f (n)) is performed in each node, while
the structure of the recursive calls is represented by the edges. The total amount of work (T (n)) is the sum of f (n) over
Chapter 3 ■ Counting 101
all the nodes (or those involved). In this case, the work in each node is constant, so we need to count only the number
of nodes. Also, we have only one recursive call, so the full work is equivalent to a path from the root to a leaf. It should
be obvious that T (n) is logarithmic, but let’s see how this looks if we try to unravel the recurrence, step by step:
T(n) = T(n/2) + 1
     = {T(n/4) + 1} + 1
     = {T(n/8) + 1} + 1 + 1
The curly braces enclose the part that is equivalent to the recursive call (T (...)) in the previous line. This stepwise
unraveling (or repeated substitution) is just the first step of our solution method. The general approach is as follows:
1. Unravel the recurrence until you see a pattern.
2. Express the pattern (usually involving a sum), using a line number variable, i.
3. Choose i so the recursion reaches its base case (and solve the sum).
The first step is what we have done already. Let’s have a go at step 2:
T(n) = T(n/2^i) + ∑_{k=1}^{i} 1
I hope you agree that this general form captures the pattern emerging from our unraveling: For each unraveling
(each line further down), we halve the problem size (that is, double the divisor) and add another unit of work (another 1).
The sum at the end is a bit silly. We know we have i ones, so the sum is clearly just i. I’ve written it as a sum to show the
general pattern of the method here.
To get to the base case of the recursion, we must get T(n/2^i) to become, say, T(1). That just means we have to halve our way from n to 1, which should be familiar by now: the recursion height is logarithmic, or i = lg n. Insert that into the pattern, and you get that T(n) is, indeed, Θ(lg n).
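The three steps can be checked numerically as well. This sketch (my own, assuming T(1) = 1 and n a power of two, per our standing assumption) verifies both the unraveled pattern and the final choice i = lg n:

```python
def T(n):
    # Recurrence 5: T(n) = T(n/2) + 1, with T(1) = 1
    return 1 if n == 1 else T(n // 2) + 1

n = 2 ** 10  # keep n a power of two

# Step 2, the general pattern: T(n) = T(n/2**i) + i, for any i up to lg n:
for i in range(11):
    assert T(n) == T(n // 2 ** i) + i

# Step 3, choosing i = lg n reaches the base case: T(n) = T(1) + lg n:
assert T(n) == 1 + 10
```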
The unraveling for recurrence 6 is quite similar, but here the sum is slightly more interesting:
T(n) = T(n/2) + n
     = {T(n/4) + n/2} + n
     = {T(n/8) + n/4} + n/2 + n
     = T(n/2^i) + ∑_{k=0}^{i−1} n/2^k
If you’re having trouble seeing how I got to the general pattern, you might want to ponder it for a minute. Basically, I’ve just used the sigma notation to express the sum n + n/2 + ··· + n/2^(i−1), which you can see emerging in the early unraveling steps. Before worrying about solving the sum, we once again set i = lg n. Assuming that T(1) = 1, we get the following:
T(n) = 1 + ∑_{k=0}^{lg n − 1} n/2^k = ∑_{k=0}^{lg n} n/2^k
The last step there is just because n/2^(lg n) = 1, so we can include the lonely 1 into the sum.
Now: Does this sum look familiar? Once again, take a look at Figure 3-5: If k is a height, then n/2^k is the number of nodes at that height (we’re halving our way from the leaves to the root). That means the sum is equal to the number of nodes, which is Θ(n).
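Here is a quick numeric confirmation of mine (again assuming T(1) = 1 and n a power of two) that the sum for recurrence 6 really counts the nodes of the tree in Figure 3-5:

```python
def T(n):
    # Recurrence 6: T(n) = T(n/2) + n, with T(1) = 1
    return 1 if n == 1 else T(n // 2) + n

n = 2 ** 10
# The unraveled sum of n/2**k, for k = 0 .. lg n, equals the total node
# count of a perfectly balanced binary tree with n leaves: 2n - 1.
node_count = sum(n // 2 ** k for k in range(10 + 1))
assert T(n) == node_count == 2 * n - 1
```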
Recurrences 7 and 8 introduce a wrinkle: multiple recursive calls. Recurrence 7 is similar to recurrence 5: Instead
of counting the nodes on one path from root to leaves, we now follow both child edges from each node, so the count
is equal to the number of nodes, or Θ(n). Can you see how recurrences 6 and 7 are just counting the same nodes in
two different ways? I’ll use our solution method on recurrence 8; the procedure for number 7 is very similar but worth
checking:
T(n) = 2T(n/2) + n
     = 2{2T(n/4) + n/2} + n
     = 2(2{2T(n/8) + n/4} + n/2) + n
     = 2^i T(n/2^i) + n·i
As you can see, the twos keep piling up in front, resulting in the factor of 2^i. The situation does seem a bit messy inside the parentheses, but luckily, the halvings and doublings even out perfectly: The n/2 is inside the first parentheses and is multiplied by 2; n/4 is multiplied by 4; and, in general, n/2^i is multiplied by 2^i, meaning that we’re left with a sum of i repetitions of n, or simply n·i. Once again, to get the base case, we choose i = lg n:
T(n) = 2^(lg n) T(n/2^(lg n)) + n·lg n = n + n·lg n
In other words, the running time is Θ(n lg n). Can even this result be seen in Figure 3-5? You bet! The work in the root node of the recursion tree is n; in each of the two recursive calls (the child nodes), this is halved. In other words, the work in each node is equal to the labels in Figure 3-5. We know that each row then sums to n, and we know there are lg n + 1 rows of nodes, giving us a grand sum of n lg n + n, or Θ(n lg n).
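The grand sum is easy to confirm numerically. In this sketch (mine, with an assumed base case T(1) = 1), the exact value for powers of two matches n lg n + n:

```python
from math import log2

def T(n):
    # Recurrence 8: T(n) = 2T(n/2) + n, with T(1) = 1
    return 1 if n == 1 else 2 * T(n // 2) + n

# For n a power of two, the exact solution is n*lg n + n, i.e., Theta(n lg n):
for k in range(1, 16):
    n = 2 ** k
    assert T(n) == n * log2(n) + n
```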
Guessing and Checking
Both recursion and induction will be discussed in depth in Chapter 4. One of my main theses there is that they are
like mirror images of one another; one perspective is that induction shows you why recursion works. In this section,
I restrict the discussion to showing that our solutions to recurrences are correct (rather than discussing the recursive
algorithms themselves), but it should still give you a glimpse of how these things are connected.
As I said earlier in this chapter, the process of unraveling a recurrence and “finding” a pattern is somewhat subject to unwarranted assumptions. For example, we often assume that n is an integer power of two so that a recursion depth of exactly lg n is attainable. In most common cases, these assumptions work out just fine, but to be
sure that a solution is correct, you should check it. The nice thing about being able to check the solution is that you
can just conjure up a solution by guesswork or intuition and then (ideally) show that it’s right.
Note ■ To keep things simple, I’ll stick to the Big Oh in the following and work with upper limits. You can show the lower limits (and get Ω or Θ) in a similar manner.
Let’s take our first recurrence, T(n) = T(n–1) + 1. We want to check whether it’s correct that T(n) is O(n). As with experiments (discussed in Chapter 1), we can’t really get where we want with asymptotic notation; we have to be more specific and insert some constants, so we try to verify that T(n) ≤ cn, for an arbitrary c ≥ 1. Per our standard assumptions, we set T(1) = 1. So far, so good. But what about larger values of n?
This is where the induction comes in. The idea is quite simple: We start with T (1), where we know our solution
is correct, and then we try to show that it also applies to T (2), T (3), and so forth. We do this generically by proving an
induction step, showing that if our solution is correct for T (n–1), it will also be true for T (n), for n > 1. This step would
let us go from T (1) to T (2), from T (2) to T (3), and so forth, just like we want.
The key to proving an inductive step is the assumption (in this case) that we’ve got it right for T (n–1). This is
precisely what we use to get to T (n), and it’s called the inductive hypothesis. In our case, the inductive hypothesis is
that T(n–1) ≤ c(n–1) (for some c), and we want to show that this carries over to T(n):
T(n) = T(n–1) + 1
     ≤ c(n–1) + 1      (we assume that T(n–1) ≤ c(n–1))
     = cn − c + 1
     ≤ cn              (we know that c ≥ 1, so −c + 1 ≤ 0)
I’ve marked the use of the induction hypothesis here: I replace T(n–1) with c(n–1), which (by the induction hypothesis) I know is a greater (or equally great) value. This makes the replacement safe, as long as I switch from an equality sign to “less than or equal” between the first and second lines. Some basic algebra later, and I’ve shown that the assumption T(n–1) ≤ c(n–1) leads to T(n) ≤ cn, which (consequently) leads to T(n+1) ≤ c(n+1), and so forth. Starting at our base case, T(1), we have now shown that T(n) is, in general, O(n).
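The inductive bound can also be tested empirically. This sketch of mine checks T(n) ≤ cn for c = 1 directly, which is as tight as the bound gets, since unraveling gives T(n) = n exactly:

```python
def T(n):
    # Recurrence 1: T(n) = T(n-1) + 1, with T(1) = 1
    return 1 if n == 1 else T(n - 1) + 1

# The induction proves T(n) <= c*n for any c >= 1; c = 1 is tight,
# because T(n) = n exactly. (Range kept modest to stay well within
# Python's default recursion limit.)
c = 1
for n in range(1, 800):
    assert T(n) <= c * n
```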
The basic divide and conquer recurrences aren’t much harder. Let’s do recurrence 8 (from Table 3-1). This time, let’s use something called strong induction. In the previous example, I only assumed something about the previous value (n–1, so-called weak induction); now, my induction hypothesis will be about all smaller numbers. More specifically, I’ll assume that T(k) ≤ ck lg k for all positive integers k < n and show that this leads to T(n) ≤ cn lg n. The basic idea is still the same (our solution will still “rub off” from T(1) to T(2), and so forth); it’s just that we get a little bit more to work with. In particular, we now hypothesize something about T(n/2) as well, not just T(n–1). Let’s have a go:
T(n) = 2T(n/2) + n
     ≤ 2(c(n/2) lg(n/2)) + n      (assuming T(k) ≤ ck lg k for k < n)
     = cn(lg n − lg 2) + n        (lg(n/2) = lg n − lg 2)
     = cn lg n − cn + n           (lg 2 = 1)
     ≤ cn lg n                    (just set c ≥ 1)
As before, by assuming that we’ve already shown our result for smaller parameters, we show that it also holds
for T(n).
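Since the power-of-two assumption was part of the sloppiness we set out to remedy, here is a sketch of my own that checks the guessed bound for every n, with the argument rounded down; the constant c = 2 is my choice, not the book’s:

```python
from math import log2

def T(n):
    # Recurrence 8 with rounding: T(n) = 2T(floor(n/2)) + n, with T(1) = 1
    return 1 if n == 1 else 2 * T(n // 2) + n

# Checking the guess T(n) <= c*n*lg n for all n >= 2, not just powers
# of two; c = 2 turns out to be big enough here:
c = 2
for n in range(2, 2048):
    assert T(n) <= c * n * log2(n)
```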
Caution ■ Be wary of asymptotic notation in recurrences, especially for the recursive part. Consider the following (false) “proof” that T(n) = 2T(n/2) + n means that T(n) is O(n), using the Big Oh directly in our induction hypothesis:

T(n) = 2 · T(n/2) + n = 2 · O(n/2) + n = O(n)

There are many things wrong with this, but the most glaring problem is, perhaps, that the induction hypothesis needs to be specific to individual values of the parameter (k = 1, 2, ...), but asymptotic notation necessarily applies to the entire function.