Appendix D ■ Hints for Exercises
6-9. It can be a bit tricky to see how building a heap is linear when considering the standard implementation, which
goes through the nodes level by level, from just above the leaves, performing a logarithmic operation on each node
as it goes. This almost looks loglinear. However, we can reformulate this into an equivalent divide-and-conquer
algorithm, which is a bit more familiar: Make a heap out of the left subtree, then of the right subtree, and then repair
the root. The recurrence becomes T(n) = 2T(n/2) + Θ(lg n), which we know (for example, by the master
theorem) is linear.
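As a sketch (not taken from the book's source), the divide-and-conquer formulation of heap building might look like this, with the recurrence visible in the structure of `heapify`:

```python
def heapify(a, i=0):
    # Sketch of the divide-and-conquer build: make a min-heap out of the
    # left subtree, then the right, then repair (sift down) the root.
    # This gives T(n) = 2T(n/2) + Theta(lg n), which is linear.
    n = len(a)
    left, right = 2*i + 1, 2*i + 2
    if left < n:
        heapify(a, left)       # heap out of the left subtree ...
    if right < n:
        heapify(a, right)      # ... then the right subtree ...
    sift_down(a, i)            # ... then repair the root

def sift_down(a, i):
    # Move a[i] down until the heap property holds below it.
    n = len(a)
    while True:
        smallest = i
        for c in (2*i + 1, 2*i + 2):
            if c < n and a[c] < a[smallest]:
                smallest = c
        if smallest == i:
            return
        a[i], a[smallest] = a[smallest], a[i]
        i = smallest
```

Running `heapify` on `[5, 3, 8, 1, 9, 2]` rearranges it so that every node is no greater than its children, with the minimum at index 0.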
6-10. First, heaps give you direct access to the minimum (or maximum) node. This could, of course, be implemented
by maintaining a direct pointer to the leftmost (or rightmost) node in a search tree as well. Second, heaps let
you maintain balance easily, and because a heap is perfectly balanced, it can be represented compactly, leading to very
low overhead (you save one reference per node, and you can keep the values located in the same memory area, for
example). Finally, building a (balanced) search tree takes loglinear time, while building a heap takes linear time.
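The compact representation can be illustrated with a small sketch (not from the book's source): the heap lives in a plain list, and parent/child links are computed arithmetically, so no per-node references are needed. The standard library's `heapq` module uses exactly this layout:

```python
import heapq

# Implicit binary-tree layout in a list: positions are computed,
# not stored, so there is zero pointer overhead per node.
def parent(i): return (i - 1) // 2
def left(i):   return 2*i + 1
def right(i):  return 2*i + 2

h = []
for x in [7, 2, 9, 4]:
    heapq.heappush(h, x)

# Direct access to the minimum: it is always at index 0.
assert h[0] == min(h)
```

The index arithmetic is its own inverse, in the sense that `parent(left(i)) == parent(right(i)) == i` for any node `i`.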
6-13. For random input, it wouldn’t really make any difference (except the cost of the extra function call). In general,
though, it would mean that no single input would be guaranteed to always elicit worst-case behavior.
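Assuming the exercise concerns choosing quicksort's pivot at random (which is what the hint describes), a minimal sketch of the idea might look like this:

```python
from random import randrange

def quicksort(seq):
    # Sketch: quicksort with a randomized pivot. Because the pivot choice
    # is not determined by the input alone, no single input can be
    # constructed in advance to always trigger quadratic behavior.
    if len(seq) <= 1:
        return list(seq)
    pivot = seq[randrange(len(seq))]  # the only change from a fixed pivot
    less    = [x for x in seq if x < pivot]
    equal   = [x for x in seq if x == pivot]
    greater = [x for x in seq if x > pivot]
    return quicksort(less) + equal + quicksort(greater)
```

On random input the expected running time is the same as with a fixed pivot; the randomness only removes the existence of a fixed worst-case input.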
6-15. Here you can use the pigeonhole principle (if you try to fit more than n pigeons into n pigeonholes, at least
one hole will hold at least two pigeons). Divide the square into four smaller squares of side d/2. If you have more
than four points, one of these squares must hold at least two points. By simple geometry, the diagonal of these
squares is less than d, so this is impossible.
6-16. Just do a pass over the data before you start, removing co-located points. They’re already sorted, so finding
duplicates would only be a linear-time operation. Now, when running the algorithm, the slices along the midline
can, at most, hold
six points (do you see why?), so you now need to compare to at most five following points in the
y-sequence.
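The preprocessing pass is straightforward; a minimal sketch (assuming the points arrive as a sorted sequence of tuples) could be:

```python
def dedupe_sorted(points):
    # One linear pass over an already-sorted sequence, dropping
    # co-located (duplicate) points: equal points are adjacent,
    # so comparing each point with the last one kept suffices.
    out = []
    for p in points:
        if not out or p != out[-1]:
            out.append(p)
    return out
```

For example, `dedupe_sorted([(0, 0), (0, 0), (1, 2)])` keeps one copy of each point, in linear time.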
6-17. This is similar to how the lower bound for sorting is used to prove the lower bound for the convex hull problem:
You can reduce the element uniqueness problem for real numbers to the closest pair problem. Just plot your numbers as points
on the
x-axis (linear time, which is asymptotically less than the bound at hand) and find the closest pair. If the two
points are identical, the elements aren’t unique; otherwise, they are. Because uniqueness
cannot be determined in
less than loglinear time, it would be
impossible for the closest pair problem to be any more efficient.
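The reduction can be sketched as follows. Here `closest_pair_dist` is a brute-force stand-in for the chapter's O(n lg n) closest-pair algorithm (only its interface matters for the argument); in the actual reduction you would call the efficient version:

```python
from itertools import combinations
from math import dist

def closest_pair_dist(points):
    # Brute-force stand-in for an O(n lg n) closest-pair algorithm.
    return min(dist(p, q) for p, q in combinations(points, 2))

def all_unique(numbers):
    # The reduction: plot each number on the x-axis (linear time) and
    # find the closest pair; a distance of zero means two points are
    # identical, i.e., some number occurs twice.
    points = [(x, 0) for x in numbers]
    return closest_pair_dist(points) > 0
```

If closest pair could be solved in less than loglinear time, this reduction would solve element uniqueness in less than loglinear time as well, contradicting the known lower bound.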
6-18. The crucial observation is that there’s never any point in including an initial portion of the slice whose sum
is zero or negative (you could always discard it and get the same or a higher sum). Also, there’s never any point in