Appendix D ■ Hints for Exercises
5-13. This is just a matter of keeping track of the distance for each node instead of its predecessor, beginning with
zero for the start node. Instead of remembering a predecessor, you simply add 1 to the predecessor’s distance, and
remember that. (You could certainly do both, of course.)
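As a sketch, a BFS that records distances rather than predecessors might look like this (the graph representation, a dict mapping nodes to neighbor sets, and all names here are just illustrative):

```python
from collections import deque

def bfs_distances(G, s):
    # Map each node to its distance from s; the start node is at distance 0.
    D = {s: 0}
    Q = deque([s])
    while Q:
        u = Q.popleft()
        for v in G[u]:
            if v not in D:          # First visit, so shortest distance found
                D[v] = D[u] + 1     # One more than the predecessor's distance
                Q.append(v)
    return D

G = {0: {1, 2}, 1: {3}, 2: {3}, 3: set()}
print(bfs_distances(G, 0))  # {0: 0, 1: 1, 2: 1, 3: 2}
```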
5-14. The nice thing about this problem is that for an edge uv, if you color u white, v must be black (or vice versa). This is an idea we’ve seen before: If the constraints of the problem force you to do something, it must be a safe step to take when building a solution. Therefore, you can simply traverse the graph, making sure you’re coloring neighbors in different colors; if, at some point, you can’t, there is no solution. Otherwise, you’ve successfully created a bipartitioning.
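A sketch of this traversal (assuming an undirected graph given as a dict of neighbor sets; the names are illustrative):

```python
def two_color(G):
    # Try to 2-color G; return the coloring, or None if G isn't bipartite.
    color = {}
    for s in G:                            # Cover every component
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for v in G[u]:
                if v not in color:
                    color[v] = 1 - color[u]  # Neighbors get the other color
                    stack.append(v)
                elif color[v] == color[u]:   # Constraint violated: no solution
                    return None
    return color

square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(two_color(square) is not None)  # True: even cycles are bipartite
print(two_color(triangle))            # None: odd cycles are not
```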
5-15. In a strong component, every node can reach every other, so there must be at least one path in each direction. If the edges are reversed, there still will be. On the other hand, any pair that is not connected by two such paths won’t be after the reversal either, so no new nodes will be added to the strong components.
5-16. Let’s say the DFS starts somewhere in X. Then, at some point, it will migrate over to Y. We already know it can’t get back to X without backtracking (the SCC graph is acyclic), so every node in Y must receive a finishing time before we get back to X. In other words, at least one node in X will be finished after all the nodes in Y are finished.
5-17. Try finding a simple example where this would give the wrong answer. (You can do it with a really small graph.)
Chapter 6
6-2. The asymptotic running time would be the same. The number of comparisons goes up, however. To see this, consider the recurrences B(n) = B(n/2) + 1 and T(n) = T(n/3) + 2 for binary and ternary search, respectively (with base cases B(1) = T(1) = 0 and B(2) = T(2) = 1). You can show (by induction) that B(n) < lg n + 1 < T(n).
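If you’d rather see the gap numerically, you can tabulate the two recurrences (a sketch; using ceilings in the recursive calls to handle sizes that aren’t exact powers is an assumption here):

```python
from functools import lru_cache
from math import ceil, log2

@lru_cache(maxsize=None)
def B(n):
    # Worst-case comparisons for binary search: B(n) = B(n/2) + 1
    if n <= 1:
        return 0
    if n == 2:
        return 1
    return B(ceil(n / 2)) + 1

@lru_cache(maxsize=None)
def T(n):
    # Worst-case comparisons for ternary search: T(n) = T(n/3) + 2
    if n <= 1:
        return 0
    if n == 2:
        return 1
    return T(ceil(n / 3)) + 2

for n in (10, 100, 1000):
    print(n, B(n), T(n))   # B(n) stays below lg n + 1, T(n) above it
```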
6-3. As shown in Exercise 6-2, the number of comparisons won’t go down; however, there can be other advantages.
For example, in the 2-3-tree, the 3-nodes help us with balancing. In the more general B-tree, the large nodes help
reduce the number of disk accesses. Note that it is common to use binary search inside each node in a B-tree.
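For instance, locating the right subtree inside a single node can use the standard library’s bisection search (the node layout here, a plain sorted list of separator keys, is a simplified assumption):

```python
from bisect import bisect_left

def find_child(keys, key):
    # Binary search within one B-tree node: return the index of the
    # key slot (or child subtree) where the search should continue.
    return bisect_left(keys, key)

keys = [17, 42, 99]          # Separator keys of a hypothetical B-tree node
print(find_child(keys, 5))   # 0: descend into the leftmost child
print(find_child(keys, 42))  # 1: the key itself sits at index 1
print(find_child(keys, 50))  # 2: between 42 and 99
```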
6-4. You could just traverse the tree and print or yield each node key between the recursive calls to the left and right subtrees (inorder traversal).
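A sketch with a minimal node class (the class itself is just an assumption for illustration):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def inorder(node):
    # Yield the keys of a BST in sorted order.
    if node is None:
        return
    yield from inorder(node.left)    # Everything smaller first
    yield node.key                   # Then the node itself
    yield from inorder(node.right)   # Then everything larger

root = Node(2, Node(1), Node(3))
print(list(inorder(root)))  # [1, 2, 3]
```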
6-5. First you find the node; let’s call it v. If it’s a leaf, just remove it. If it’s an internal node with a single child, just replace it with its child. If the node has two children, find the largest (rightmost) node in the left subtree or the smallest (leftmost) in the right subtree (your choice). Now replace the key and value in v with those of this descendant and then delete the descendant. (To avoid making the tree unnecessarily unbalanced, you should switch between the left and right versions.)
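The cases above might be sketched like this, always using the largest node in the left subtree for the two-children case (a simplified assumption: keys only, no values, and no alternation between the left and right versions):

```python
class Node:
    def __init__(self, key, left=None, right=None):
        self.key, self.left, self.right = key, left, right

def delete(node, key):
    # Delete key from the subtree rooted at node; return the new subtree root.
    if node is None:
        return None
    if key < node.key:
        node.left = delete(node.left, key)
    elif key > node.key:
        node.right = delete(node.right, key)
    else:
        # Leaf or single child: splice the node out.
        if node.left is None:
            return node.right
        if node.right is None:
            return node.left
        # Two children: find the largest (rightmost) node in the left subtree,
        pred = node.left
        while pred.right is not None:
            pred = pred.right
        node.key = pred.key              # copy its key into this node,
        node.left = delete(node.left, pred.key)  # and delete the descendant.
    return node

root = Node(5, Node(3, Node(2), Node(4)), Node(7))
root = delete(root, 5)
print(root.key)  # 4: the predecessor of 5 took its place
```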
6-6. We’re inserting n random values, so each time we insert a value, the probability of it being the smallest among the