Chapter 4 ■ Induction and Recursion ... and Reduction
Oh, That’s Easy!
Simply put, reducing a problem A to another problem B involves some form of transformation, after which a solution to B gives you (directly or with some massaging) a solution to A. Once you’ve learned a bunch of standard algorithms (you’ll encounter many in this book), this is what you’ll usually do when you come across a new problem: can you change it in some way so that it can be solved with one of the methods you know? In many ways, this is the core process of all problem solving.

Let’s take an example. You have a list of numbers, and you want to find the two (nonidentical) numbers that are closest to each other (that is, the two with the smallest absolute difference):
>>> from random import randrange
>>> seq = [randrange(10**10) for i in range(100)]
>>> dd = float("inf")
>>> for x in seq:
...     for y in seq:
...         if x == y: continue
...         d = abs(x-y)
...         if d < dd:
...             xx, yy, dd = x, y, d
...
>>> xx, yy
(15743, 15774)
Two nested loops, both over seq; it should be obvious that this is quadratic, which is generally not a good thing. Let’s say you’ve worked with algorithms a bit, and you know that sequences can often be easier to deal with if they’re sorted. You also know that sorting is, in general, loglinear, or Θ(n lg n). See how this can help? The insight here is that the two closest numbers must be next to each other in the sorted sequence:
>>> seq.sort()
>>> dd = float("inf")
>>> for i in range(len(seq)-1):
...     x, y = seq[i], seq[i+1]
...     if x == y: continue
...     d = abs(x-y)
...     if d < dd:
...         xx, yy, dd = x, y, d
...
>>> xx, yy
(15743, 15774)
Faster algorithm, same solution. The new running time is loglinear, dominated by the sorting. Our original problem was “Find the two closest numbers in a sequence,” and we reduced it to “Find the two closest numbers in a sorted sequence,” by sorting seq. In this case, our reduction (the sorting) won’t affect which answers we get. In general, we may need to transform the answer so it fits the original problem.
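For instance, if the problem had asked for the *positions* of the two closest numbers rather than their values, sorting would destroy the indexing, and the answer to the reduced problem would have to be mapped back. A minimal sketch of that situation (the function name closest_pair_indices is my own, not from the text):

```python
def closest_pair_indices(seq):
    """Return indices (i, j) in seq of the two closest distinct values.

    Reduction: sort (value, index) pairs, scan adjacent entries,
    then translate the answer back into original positions.
    """
    srt = sorted((x, i) for i, x in enumerate(seq))
    best = None
    for (x, i), (y, j) in zip(srt, srt[1:]):
        if x == y:
            continue                       # skip identical values
        d = abs(x - y)
        if best is None or d < best[0]:
            best = (d, i, j)
    _, i, j = best                         # keep only the indices
    return i, j

seq = [17, 4, 25, 9, 6]
i, j = closest_pair_indices(seq)
print(seq[i], seq[j])                      # prints: 4 6
```

Here the sort solves the reduced problem, and the final unpacking step transforms its answer (positions in the sorted order) into an answer to the original one (positions in seq).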
■ Note  In a way, we just split the problem into two parts: sorting and scanning the sorted sequence. You could also say that the scanning is a way of reducing the original problem to the problem of sorting a sequence. It’s all a matter of perspective.
Reducing A to B is a bit like saying “You want to solve A? Oh, that’s easy, as long as you can solve B.” See Figure 4-1 for an illustration of how reductions work.
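As a toy illustration of this pattern (the names solve_A and solve_B are mine, chosen to match the A/B phrasing above): finding the maximum of a list reduces to sorting, because once you can solve the sorting problem, the last element of the sorted list answers the maximum problem.

```python
def solve_B(seq):
    """Problem B: sort a sequence (the problem we assume we can solve)."""
    return sorted(seq)

def solve_A(seq):
    """Problem A: find the maximum, by reduction to B."""
    return solve_B(seq)[-1]    # B's answer, massaged into A's answer

print(solve_A([3, 1, 4, 1, 5, 9, 2, 6]))   # prints: 9
```

This particular reduction is wasteful (a single scan would do), but it shows the shape of the idea: transform the input, solve the easier or better-understood problem, and read off the answer.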
One, Two, Many
I’ve already used induction to solve some problems in Chapter 3, but let’s recap and work through a couple of examples. When describing induction in the abstract, we say that we have a proposition, or statement, P(n), and we want to show that it’s true for any natural number n. For example, let’s say we’re investigating the sum of the first n odd numbers; P(n) could then be the following statement:

1 + 3 + 5 + ... + (2n – 1) = n²
This is eerily familiar—it’s almost the same as the handshake sum we worked with in the previous chapter. You could easily get this new result by tweaking the handshake formula, but let’s see how we’d prove it by induction instead. The idea in induction is to make our proof “sweep” over all the natural numbers, a bit like a row of dominoes falling. We start by establishing P(1), which is quite obvious in this case, and then we need to show that each domino, if it falls, will topple the next. In other words, we must show that if the statement P(n–1) is true, it follows that P(n) is also true.
If we can show this implication, that is, P(n–1) ⇒ P(n), the result will sweep across all values of n, starting with P(1), using P(1) ⇒ P(2) to establish P(2), then move on to P(3), P(4), and so forth. In other words, the crucial thing is to establish the implication that lets us move one step further. We call it the inductive step. In our example, this means that we’re assuming the following (P(n–1)):

1 + 3 + 5 + ... + (2n – 3) = (n – 1)²
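The statement P(n) itself can also be checked empirically for small n. This is no substitute for the proof, of course, but it is a useful sanity check; here is a minimal sketch (the function name P is my own):

```python
def P(n):
    """Does the sum of the first n odd numbers equal n**2?"""
    return sum(2*i - 1 for i in range(1, n + 1)) == n**2

# The statement holds for every n we try:
assert all(P(n) for n in range(1, 1000))
```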