Appendix D ■ Hints for Exercises
4-22. Instead of randomly selecting pairs u, v, simply go through every possible pair, giving a quadratic running
time. (Do you see how this necessarily gives you the right answer for each town?)
4-23. To show that foo was hard, you would have to reduce bar to foo. To show that foo was easy, you would have to
reduce foo to baz.
Chapter 5
5-1. The asymptotic running time would be the same, but you’d probably get more overhead (that is, a higher constant
factor) because instead of adding lots of objects with a built-in operation, you’d run slower, custom Python code for
each object.
5-2. Try turning the induction proof into a recursive algorithm. (You might also want to look up Fleury’s algorithm.)
5-3. Try to reconstruct the inductive argument (and recursive algorithm) that’s used for undirected graphs—it’s
virtually identical. The link to Trémaux’s algorithm is the following: Because you’re allowed to traverse each maze
passage once in each direction, you can treat the passage as two directed edges, with opposite directions.
This means
that all intersections (nodes) will have equal in- and out-degrees, and you’re guaranteed that you can find a tour that
walks every edge twice, once in each direction. (Note that you couldn’t use Trémaux’s algorithm in the more general
case presented in this exercise.)
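For reference, here is one possible sketch of the resulting recursive algorithm (a variant of Hierholzer’s approach),
assuming a balanced digraph represented as a dict mapping each node to a list of out-neighbors:

    def euler_tour(G, u, tour):
        # Consume unused out-edges until we get stuck, splicing in the
        # subtours found along the way; u is recorded on backtracking.
        while G[u]:
            euler_tour(G, G[u].pop(), tour)
        tour.append(u)

    def euler_circuit(G, start):
        G = {u: list(vs) for u, vs in G.items()}  # work on a copy
        tour = []
        euler_tour(G, start, tour)
        tour.reverse()                            # nodes were appended in reverse
        return tour

For example, euler_circuit({'a': ['b'], 'b': ['a']}, 'a') gives ['a', 'b', 'a'].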
5-4. This is just a simple matter of traversing the grid that consists of pixels, with adjacent pixels acting as neighbors. It
is common to use DFS for this, but any traversal would do.
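As a minimal sketch, here is an iterative DFS flood fill, assuming the image is represented as a dict mapping (x, y)
coordinates to colors:

    def flood_fill(pixels, start, new_color):
        old_color = pixels[start]
        if old_color == new_color:
            return                      # nothing to do (and no infinite loop)
        stack = [start]                 # explicit DFS stack
        while stack:
            x, y = stack.pop()
            if pixels.get((x, y)) != old_color:
                continue                # outside the region (or already filled)
            pixels[(x, y)] = new_color
            stack.extend([(x+1, y), (x-1, y), (x, y+1), (x, y-1)])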
5-5. I’m sure there would be many ways of using this thread, but one possibility would be to use it like the stack of DFS
(or IDDFS), if you’re unable to make any other kinds of marks. You would probably end up visiting the same rooms
multiple times, but at least you’d never walk in a cycle.
5-6. It’s not really represented at all in the iterative version. It just implicitly occurs once you’ve popped off all your
“traversal descendants” from the stack.
5-7. As explained in Exercise 5-6, there is no single point in the code where backtracking occurs in the iterative DFS,
so we can’t just set the finish time at some specific place (like in the recursive one). Instead, we’d need to add a
marker to the stack. For example, instead of adding the neighbors of u to the stack, we could add edges of the form
(u, v), and before all of them, we’d push (u, None), indicating the backtracking point for u.
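Here is a sketch of how that might look (the exact bookkeeping is an assumption; G is a dict of neighbor lists):

    def iter_dfs_times(G, s):
        discover, finish = {}, {}
        time = 0
        stack = [(None, s)]
        while stack:
            u, v = stack.pop()
            if v is None:               # marker: backtracking from u
                finish[u] = time
                time += 1
                continue
            if v in discover:
                continue                # already seen via another edge
            discover[v] = time
            time += 1
            stack.append((v, None))     # pushed before v's out-edges
            for w in G[v]:
                stack.append((v, w))
        return discover, finish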
5-8. Let’s say node u must come before v in a topological sorting. If we run DFS from (or through) v first, we could
never reach u, so v would finish before we (at some later point) start a new DFS that is run either from or through u.
So far, we’re safe. If, on the other hand, we pass u first, then, because there is a (direct or indirect) dependency
(that is, a path) between u and v, we will reach v, which will (once again) finish before u.
5-9. You could just supply some functions
as optional arguments here, for example.
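For example (just one possible design, not the book’s official solution):

    def dfs(G, s, visited=None, pre=lambda u: None, post=lambda u: None):
        # pre(u) is called when u is discovered, post(u) when backtracking
        if visited is None:
            visited = set()
        visited.add(s)
        pre(s)
        for v in G[s]:
            if v not in visited:
                dfs(G, v, visited, pre, post)
        post(s)

Passing post=order.append, for instance, collects the nodes by finish time, which (reversed) gives you a
topological sorting.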
5-10. If there is a cycle, DFS will always traverse the cycle as far as it can (possibly after backtracking from some
detours). This means it’ll eventually get to where it entered the cycle, creating a back edge. (Of course, it could already
have traversed this edge by following some
other cycle, but that would still make it a back edge.) So if there are no back
edges, there can’t be any cycles.
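A common way to code this up is to mark nodes as unseen, on the current path, or done; a back edge is then simply an
edge to a node on the current path. A sketch, again assuming a dict-of-neighbor-lists graph:

    def has_cycle(G):
        WHITE, GRAY, BLACK = 0, 1, 2    # unseen, on current path, done
        color = {u: WHITE for u in G}
        def visit(u):
            color[u] = GRAY
            for v in G[u]:
                if color[v] == GRAY:    # back edge -> cycle
                    return True
                if color[v] == WHITE and visit(v):
                    return True
            color[u] = BLACK            # backtrack
            return False
        return any(color[u] == WHITE and visit(u) for u in G)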
5-11. Other traversal algorithms would also be able to detect cycles by finding an edge from a visited node to one of
its ancestors in the traversal tree (a back edge). However, determining when this happens (that is, distinguishing back
edges from cross edges) wouldn’t necessarily be quite as easy. In undirected graphs, however, all you need in order to
find a cycle is to reach a node twice, and detecting that is easy, no matter what traversal algorithm you’re using.
5-12. Let’s say you did find a forward or cross edge to some node u. Because there are no direction restrictions,
DFS would never have backtracked beyond u without exploring all its out-edges, which means it would already have
traversed the hypothetical forward/cross edge in the other direction!
5-13. This is just a matter of keeping track of the distance for each node instead of its predecessor, beginning with
zero for the start node. Instead of remembering a predecessor, you simply add 1 to the predecessor’s distance, and
remember that. (You could certainly do both, of course.)
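In code, the change from the predecessor-based BFS is tiny (a sketch, assuming a dict-of-neighbor-lists graph):

    from collections import deque

    def bfs_distances(G, s):
        dist = {s: 0}                       # start node is at distance zero
        Q = deque([s])
        while Q:
            u = Q.popleft()
            for v in G[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1   # one more than the discoverer
                    Q.append(v)
        return dist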
5-14. The nice thing about this problem is that for an edge uv, if you color u white, v must be black (or vice versa).
This is an idea we’ve seen before: If the constraints of the problem force you to do something, it must be a safe
step to take when building a solution. Therefore, you can simply traverse the graph, making sure you’re coloring
neighbors in different colors; if, at some point, you can’t, there is no solution. Otherwise, you’ve successfully
created a bipartitioning.
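A sketch of the traversal-based two-coloring (BFS here, but any traversal works), under the same graph-representation
assumption as before:

    from collections import deque

    def two_color(G, s):
        color = {s: 0}
        Q = deque([s])
        while Q:
            u = Q.popleft()
            for v in G[u]:
                if v not in color:
                    color[v] = 1 - color[u]  # neighbors get opposite colors
                    Q.append(v)
                elif color[v] == color[u]:
                    return None              # conflict: not bipartite
        return color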
5-15. In a strong component, every node can reach every other, so there must be at least one path in each direction.
If the edges are reversed, there still will be. On the other hand, any pair that is not connected by two such paths
won’t be connected after the reversal either, so no new nodes will be added to the strong components.
5-16. Let’s say the DFS starts somewhere in X. Then, at some point, it will migrate over to Y. We already know it can’t
get back to X without backtracking (the SCC graph is acyclic), so every node in Y must receive a finishing time before
we get back to X. In other words, at least one node in X will be finished
after all the nodes in Y are finished.
5-17. Try finding a simple example where this would give the wrong answer. (You can do it with a really small graph.)
Chapter 6
6-2. The asymptotic running time would be the same. The number of comparisons goes up, however. To see this,
consider the recurrences B(n) = B(n/2) + 1 and T(n) = T(n/3) + 2 for binary and ternary search, respectively (with
base cases B(1) = T(1) = 0 and B(2) = T(2) = 1). You can show (by induction) that B(n) < lg n + 1 < T(n).
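If you’d like a sanity check before doing the induction, you can evaluate the recurrences directly (using integer
division as an approximation, which matches the inequality for the sizes shown here, though very small n can be off):

    from math import log2

    def B(n):                           # binary search comparisons
        return 0 if n == 1 else 1 if n == 2 else B(n // 2) + 1

    def T(n):                           # ternary search comparisons
        return 0 if n == 1 else 1 if n == 2 else T(n // 3) + 2

    for n in (100, 10000, 1000000):
        print(n, B(n), log2(n) + 1, T(n))   # B(n) < lg n + 1 < T(n)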
6-3. As shown in Exercise 6-2, the number of comparisons won’t go down; however, there can be other advantages.
For example, in the 2-3-tree, the 3-nodes help us with balancing. In the more general B-tree, the large nodes help
reduce the number of disk accesses. Note that it is common to use binary search
inside each node in a B-tree.
6-4. You could just traverse the tree and print or yield each node key between the recursive calls to the left and
right subtrees (inorder traversal).
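In code (a sketch, assuming nodes with key, lft, and rgt attributes, and None for missing children):

    def inorder(node):
        if node is None:
            return
        for key in inorder(node.lft):   # everything smaller than node.key
            yield key
        yield node.key
        for key in inorder(node.rgt):   # everything larger
            yield key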
6-5. First you find the node; let’s call it v. If it’s a leaf, just remove it. If it’s an internal node with a single
child, just replace it with its child. If the node has two children, find the largest (rightmost) node in the left
subtree or the smallest (leftmost) in the right subtree—your choice. Now replace the key and value in v with those of
this descendant and then delete the descendant. (To avoid making the tree unnecessarily unbalanced, you should switch
between the left and right versions.)
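Here is a sketch using the left subtree’s largest node (the attribute names lft, rgt, key, and val are assumptions,
and the left/right alternation is left out for brevity):

    def delete_max(node):
        # Remove the rightmost node; return (new subtree, its key, its value)
        if node.rgt is None:
            return node.lft, node.key, node.val
        node.rgt, key, val = delete_max(node.rgt)
        return node, key, val

    def delete(node, key):
        if node is None:
            return None                     # key wasn't there
        if key < node.key:
            node.lft = delete(node.lft, key)
        elif key > node.key:
            node.rgt = delete(node.rgt, key)
        elif node.lft is None:              # at most one child: splice out
            return node.rgt
        elif node.rgt is None:
            return node.lft
        else:                               # two children: replace with the
            node.lft, node.key, node.val = delete_max(node.lft)  # predecessor
        return node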
6-6. We’re inserting
n random values, so each time we insert a value, the probability of it being the smallest among the