Reverse Induction and Powers of Two
Sometimes it can be useful to restrict the problem sizes we're working with, such as dealing only with powers of
two. This often occurs for divide-and-conquer algorithms, for example (see Chapters 3 and 6 for recurrences and
examples, respectively). In many cases, whatever algorithms or complexities we find will still work for any value
of n, but sometimes, as for the checkerboard covering problem described earlier in this chapter, this just isn't the
case. To be certain, we might need to prove that any value of n is safe. For recurrences, the induction method
in Chapter 3 can be used. For showing correctness, you can use reverse induction: assume that the algorithm
is correct for n and show that this implies correctness for n – 1. This can often be done by simply introducing a
"dummy" element that doesn't affect the solution but that increases the size to n. If you know the algorithm is
correct for an infinite set of sizes (such as all powers of two), reverse induction will let you show that it's true for
all sizes.
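A hypothetical sketch of the dummy-element trick (the function names and the padding value are my own, not from the text): a divide-and-conquer maximum that works only for power-of-two lengths, wrapped so that any length is handled by padding with elements that can't affect the answer.

```python
def max_pow2(seq):
    # Divide-and-conquer max; assumes len(seq) is a power of two
    if len(seq) == 1:
        return seq[0]
    mid = len(seq) // 2
    return max(max_pow2(seq[:mid]), max_pow2(seq[mid:]))

def max_any(seq):
    # Pad size n up to the next power of two with "dummy" elements;
    # -inf can never be the maximum, so padding is safe
    n = len(seq)
    size = 1
    while size < n:
        size *= 2
    padded = list(seq) + [float('-inf')] * (size - n)
    return max_pow2(padded)
```

Each dummy element increases the size by one without affecting the solution, which is exactly the step from n to n – 1 of reverse induction, run in reverse.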
Invariants and Correctness
The main focus of this chapter is on designing algorithms, where correctness follows from the design process. Perhaps 
a more common perspective on induction in computer science is correctness proofs. It’s basically the same thing that 
I’ve been discussing in this chapter but with a slightly different angle of approach. You’re presented with a finished 
algorithm, and you need to show that it works. For a recursive algorithm, the ideas I’ve already shown you can be used 
rather directly. For a loop, you can also think recursively, but there is a concept that applies more directly to induction 
proofs for iteration: loop invariants. A loop invariant is something that is true after each iteration of a loop, given some 
preconditions; it’s called an invariant because it doesn’t vary—it’s true from beginning to end.


Chapter 4: Induction and Recursion ... and Reduction
Usually, the final solution is the special case that the invariant attains after the final iteration, so if the invariant 
always holds (given the preconditions of the algorithm) and you can show that the loop terminates, you’ve shown 
that the algorithm is correct. Let’s try this approach with insertion sort (Listing 4-2). The invariant for the loop is that 
the elements 0...i are sorted (as hinted at by the first comment in the code). If we want to use this invariant to prove 
correctness, we need to do the following:
 
1. Use induction to show that it is, in fact, true after each iteration.
2. Show that we'll get the correct answer if the algorithm terminates.
3. Show that the algorithm terminates.
The induction in step 1 involves showing a base case (that is, before the first iteration) and an inductive step 
(that a single run of the loop preserves the invariant). The second step involves using the invariant at the point of 
termination. The third step is usually easy to prove (perhaps by showing that you eventually "run out" of something).[15]
Steps 2 and 3 should be obvious for insertion sort. The for loop will terminate after n iterations, with i = n–1. The
invariant then says that elements 0...n–1 are sorted, which means that the problem is solved. The base case (i = 0)  
is trivial, so all that remains is the inductive step—to show that the loop preserves the invariant, which it does, by 
inserting the next element in the correct spot among the sorted values (without disrupting the sorting).
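To make the invariant concrete, here is a sketch of insertion sort in the spirit of Listing 4-2 (not a verbatim copy of it), with an assert that checks the invariant after each iteration of the outer loop:

```python
def insertion_sort(seq):
    for i in range(1, len(seq)):
        j = i
        # Move seq[i] left until it sits correctly among seq[0...i-1]
        while j > 0 and seq[j - 1] > seq[j]:
            seq[j - 1], seq[j] = seq[j], seq[j - 1]
            j -= 1
        # Loop invariant: elements 0...i are now sorted
        assert all(seq[k] <= seq[k + 1] for k in range(i))
    return seq
```

The assert is the invariant from the proof: if any iteration failed to preserve it, the program would crash at exactly the iteration where the inductive step broke down.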
Relaxation and Gradual Improvement
The term relaxation is taken from mathematics, where it has several meanings. The term has been picked up by algorists 
and is used to describe the crucial step in several algorithms, particularly shortest-path algorithms based on dynamic 
programming (discussed in Chapters 8 and 9), where we gradually improve our approximations to the optimum. The idea 
of incrementally improving a solution in this way is also central to algorithms finding maximum flow (Chapter 10). I won’t 
go into how these algorithms work just yet, but let’s look at a simple example of something that might be called relaxation.
You are in an airport, and you can reach several other airports by plane. From each of those airports, you can take 
the train to several towns and cities. Let’s say that you have a dict or list of flight times, A, so that A[u] is the time it will 
take you to get to airport u. Similarly, B[u][v] will give you the time it takes to get from airport u to town v by train.  
(B can be a list of lists or a dict of dicts, for example; see Chapter 2.) Consider the following randomized way of 
estimating the time it will take you to get to each town, C[v]:
 
>>> from random import randrange  # A, B, n, and N are assumed defined
>>> C = [float('inf')] * n        # no estimate yet for any town
>>> for i in range(N):
...     u, v = randrange(n), randrange(n)
...     C[v] = min(C[v], A[u] + B[u][v]) # Relax
 
The idea here is to repeatedly see whether we can improve our estimate for C[v] by choosing another route: first
go to u by plane, and then take the train from there to v. If that gives us a better total time, we update C. As long as N is really
large, we will eventually get the right answer for every town.
For relaxation-based algorithms that actually guarantee correct solutions, we need to do better than this. For 
the airplane + train problem, this is fairly easy (see Exercise 4-22). For more complex problems, you may need rather 
subtle approaches. For example, you can show that the value of your solution increases by an integer in every iteration; 
if the algorithm terminates only when you hit the optimal (integer) value, it must be correct. (This is similar to the 
case for maximum flow algorithms.) Or perhaps you need to show how correct estimates spread across elements of 
the problem instance, such as nodes in a graph. If this seems a bit general at the moment, don’t worry—I’ll get plenty 
specific when we encounter algorithms that use the technique.
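For contrast, here is one deterministic way of doing better for the airplane + train problem (a sketch of my own, not necessarily the intended solution to Exercise 4-22): relax every airport-town pair exactly once, which already guarantees correct estimates.

```python
def travel_times(A, B):
    # A[u]: flight time to airport u; B[u][v]: train time from u to town v
    towns = {v for u in B for v in B[u]}
    C = {v: float('inf') for v in towns}
    for u in B:
        for v in B[u]:
            C[v] = min(C[v], A[u] + B[u][v])  # Relax, once per pair
    return C
```

Every route goes plane-then-train, so considering each (u, v) pair once covers all candidates; no randomness or repetition is needed.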
[15] Even though showing termination is usually easy, the general problem is, in fact, not (algorithmically) solvable. See the discussion of the halting problem in Chapter 11 for details.


Tip  Designing algorithms with relaxation can be like a game. Each relaxation is one "move," and you try to get the
optimal solution with as few moves as possible. You can always get there by just relaxing all over the place, but the key
lies in performing your moves in the right order. This idea will be explored further when we deal with shortest paths in
DAGs (Chapter 8), Bellman-Ford, and Dijkstra's algorithm (Chapter 9).
Reduction + Contraposition = Hardness Proof
This section is really just a bit of foreshadowing of what you’ll encounter in Chapter 11. You see, although reductions 
are used to solve problems, the only context in which most textbooks discuss them is problem complexity, where 
they’re used to show that you (probably) can’t solve a given problem. The idea is really quite simple, yet I’ve seen it 
trip up many (perhaps even most) of my students.
The hardness proofs are based on the fact that we only allow easy (that is, fast) reductions.[16] Let's say you're able
to reduce problem A to B (so a solution to B gives you one for A as well; take a look at Figure 4-1 if you need to refresh
your memory on how this works). We then know that if B is easy, A must be easy as well. That follows directly from the
fact that we can use B, along with an easy reduction, to solve A.
For example, let A be finding the longest path between two nodes in a DAG, and let B be finding the shortest path
between two nodes in a DAG. You can then reduce A to B by simply viewing all edges as negative. Now, if you learn of
some efficient algorithm to find shortest paths in DAGs that permits negative edge weights (which you will, in Chapter 8),
you automatically also have an efficient algorithm for finding longest paths with positive edge weights.[17] The
reason for this is that, with asymptotic notation (which is implicitly used here), you could say that "fast + fast = fast." In
other words, fast reduction + fast solution to B = fast solution to A.
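The reduction itself can be a one-liner. In this sketch (my own code, not from the text), the DAG is a dict of dicts of edge weights, and dag_sp is a simple memoized shortest-path routine for DAGs that tolerates negative weights, included only to make the example self-contained:

```python
def dag_sp(G, s, t):
    # Shortest s-to-t distance in a DAG; negative weights are fine
    cache = {}
    def d(u):
        if u == t:
            return 0
        if u not in cache:
            cache[u] = min((G[u][v] + d(v) for v in G.get(u, {})),
                           default=float('inf'))
        return cache[u]
    return d(s)

def dag_longest(G, s, t):
    # The reduction: negate every weight, find the shortest path,
    # and negate the result back
    negated = {u: {v: -w for v, w in G[u].items()} for u in G}
    return -dag_sp(negated, s, t)
```

Note the "fast + fast = fast" accounting: building the negated graph takes linear time, so the whole thing is asymptotically no slower than dag_sp itself.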
Now let's apply our friend contraposition. We've established "If B is easy, then A is easy." The contrapositive is
"If A is hard, then B is hard."[18] This should still be quite easy to understand, intuitively. If we know that A is hard, no
matter how we approach it, we know B can't be easy, because if it were easy, it would supply us with an easy solution
to A, and A wouldn't be hard after all (a contradiction).
I hope the section has made sense so far. Now there’s just one last step to the reasoning. If I come across a new, 
unknown problem X, and I already know that the problem Y is hard, how can I use a reduction to show that X is hard?
There are basically two alternatives, so the odds should be about 50-50. Oddly enough, it seems that more than 
half the people I ask get this wrong before they think about it a bit. The answer is, reduce Y to X. (Did you get it right?) 
If you know Y is hard and you reduce it to X, then X must be hard, because otherwise it could be used to solve Y 
easily—a contradiction.
Reducing in the other direction doesn’t really get you anywhere. For example, fixing a smashed computer is hard, 
but if you want to know whether fixing your (unsmashed) computer is easy or hard, smashing it isn’t going to prove 
anything.
So, to sum up the reasoning here:
•  If you can (easily) reduce A to B, then B is at least as hard as A.
•  If you want to show that X is hard and you know that Y is hard, reduce Y to X.
[16] The most important case in Chapter 11 is when "easy" means polynomial. The logic applies in other cases too.
[17] Only in DAGs, though. Finding longest paths in general graphs is an unsolved problem, as discussed in Chapter 11.
[18] As you may recall, the contrapositive of "If X, then Y" is "If not Y, then not X," and these statements are equivalent. For example, "I think, therefore I am" is equivalent to "I am not, therefore I think not." However, it is not equivalent to "I am, therefore I think."


One of the reasons this is so confusing for many people is that we normally think of reductions as transforming 
a problem to something easier. Even the name “reduction” connotes this. However, if we’re solving A by reducing it 
to B, it only seems like B is easier, because it’s something we already know how to solve. After the reduction, A is just 
as easy—because we can solve it through B (with the addition of an easy, fast reduction). In other words, as long as 
your reduction isn’t doing any heavy lifting, you can never reduce to something easier, because the act of reduction 
automatically evens things out. Reduce A to B, and B is automatically at least as hard as A.
Let’s leave it at that for now. I’ll get into the details in Chapter 11.
Problem Solving Advice
Here is some advice for solving algorithmic problems and designing algorithms, summing up some of the main ideas 
of this chapter:
•  Make sure you really understand the problem. What is the input? The output? What’s the 
precise relationship between the two? Try to represent the problem instances as familiar 
structures, such as sequences or graphs. A direct, brute-force solution can sometimes help 
clarify exactly what the problem is.
•  Look for a reduction. Can you transform the input so it works as input for another problem 
that you can solve? Can you transform the resulting output so that you can use it? Can you 
reduce an instance of size n to an instance of size k < n and extend the recursive solution 
(inductive hypothesis) back to n?
Together, these two form a powerful approach to algorithm design. I’m going to add a third 
item here, as well. It’s not so much a third step as something to keep in mind while working 
through the first two:

