Appendix D

Hints for Exercises
7-17. It’s all hidden by the logarithmic operations of the heap. In the worst case, if we added each node only once,
these operations would be logarithmic in the number of nodes. Now, they could be logarithmic in the number of
edges, but since the number of edges is polynomial (quadratic) in the number of nodes, that amounts only to a
constant difference: Θ(lg m) = Θ(lg n²) = Θ(lg n).
7-18. The interval with the earliest starting time could, potentially, cover the entire remainder of the set, which could
all be nonoverlapping. If we wanted to go with the highest starting time, we’d be equally doomed to failure, by always
getting only a single element.
7-19. We’d have to sort them all, but after that, the scanning and elimination can be performed in linear time in total
(do you see how?). In other words, the total running time is dominated by the sorting, which is loglinear in general.
Chapter 8
8-1. Instead of checking whether the parameter tuple is already in the cache, just retrieve it and catch the KeyError
that might occur if it’s
not there. Using some nonexistent value (such as None) along with get might give even better
performance.
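For instance, here is a minimal sketch of both variants, assuming a memo decorator along the lines of the one in this
chapter (the names memo_except and memo_get are just for illustration):

    from functools import wraps

    def memo_except(func):
        # EAFP variant: just try the lookup and catch the miss
        cache = {}
        @wraps(func)
        def wrap(*args):
            try:
                return cache[args]
            except KeyError:
                cache[args] = func(*args)
                return cache[args]
        return wrap

    def memo_get(func):
        # get-based variant, with None as the "nonexistent" value
        # (this assumes func never legitimately returns None)
        cache = {}
        @wraps(func)
        def wrap(*args):
            result = cache.get(args)
            if result is None:
                result = cache[args] = func(*args)
            return result
        return wrap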
8-2. One way of viewing this might be counting subsets. Each element is either in the subset or not.
8-3. For fib, you just need the two previous values at each step, while for two_pow, you only need to keep doubling the
value you have.
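In other words, something like the following sketch (iterative versions; the function names are hypothetical):

    def fib_iter(n):
        a, b = 0, 1               # the two previous values
        for _ in range(n):
            a, b = b, a + b       # shift the window one step
        return a

    def two_pow_iter(n):
        value = 1
        for _ in range(n):
            value *= 2            # keep doubling the value you have
        return value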
8-5. Just use the “predecessor pointer” idea from Chapter 5. If you’re doing the forward version, store which choice
you made (that is, which out-edge you followed) in each node. If you’re doing the reverse version, store where you
came from to each node.
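For example, here is a rough sketch that stores which out-edge you followed in each node, assuming the dict-of-dicts
weight representation (W[u][v] is the weight of the edge from u to v) and that t is reachable from s; the function
name is just for illustration:

    def dag_sp_with_path(W, s, t):
        dist = {t: 0}                 # memoized distance estimates
        choice = {}                   # choice[u]: next node on a shortest path

        def d(u):                     # distance from u to t
            if u in dist:
                return dist[u]
            v = min(W[u], key=lambda v: W[u][v] + d(v))  # best out-edge
            dist[u] = W[u][v] + d(v)  # d(v) is cached by now
            choice[u] = v             # remember which out-edge we followed
            return dist[u]

        d(s)
        path, u = [s], s              # walk the recorded choices from s to t
        while u != t:
            u = choice[u]
            path.append(u)
        return dist[s], path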
8-6. Because the topological sorting still has to visit every edge.
8-7. You could let each node observe its predecessors and then explicitly trigger an update in the estimate in the start
node (giving it a value of zero). The observers would be notified of changes and could update their own estimates
accordingly, triggering new updates in their observers. This is in many ways quite similar to the relaxation-based
solution in this chapter. The solution would be a bit “over-eager,” though. Because cascades of updates are triggered
instantly (instead of letting each node finish all of its out- or in-updates before moving on), the solution could, in
fact, have exponential running time. (Do you see how?)
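A tiny sketch of the idea (a hypothetical class, with observers registered as (successor, weight) pairs):

    class Node:
        def __init__(self):
            self.est = float('inf')   # current distance estimate
            self.observers = []       # (successor, edge weight) pairs

        def update(self, value):
            if value < self.est:      # a real improvement...
                self.est = value
                for succ, weight in self.observers:
                    succ.update(value + weight)   # ...cascades eagerly

Calling start.update(0) would then set off exactly the kind of cascade described above.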
8-8. This can be shown in many ways—but one is simply to look at how the list is constructed. Each object is added
(either appended or overwriting an old element) using bisect, which finds the right place to put it, in sorted order. By
induction, end will be sorted. (Can you think of other ways of seeing that this list must be sorted?)
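For reference, here is a sketch of the kind of bisect-based construction the hint is about (a strictly increasing
version; the name lis_length is just for illustration):

    from bisect import bisect_left

    def lis_length(seq):
        end = []                        # end[i]: smallest tail value of an
                                        # increasing subsequence of length i+1
        for val in seq:
            idx = bisect_left(end, val) # val's place in sorted order
            if idx == len(end):
                end.append(val)         # extends the longest one so far
            else:
                end[idx] = val          # overwrite with a smaller tail
        return len(end)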
8-9. You don’t need bisect when the new element is larger than the last element or if end is empty. You could add an
if statement to check for that. It
might make the code faster, but it would probably make it a bit less readable.
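That is, something like this variant of the sketch above, where the check skips the binary search on the common
fast path:

    from bisect import bisect_left

    def lis_length_fast(seq):
        end = []
        for val in seq:
            if not end or val > end[-1]:   # empty, or larger than every tail:
                end.append(val)            # no bisect needed
            else:
                end[bisect_left(end, val)] = val
        return len(end)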
8-10. Just like in the DAG shortest path problem, this would involve remembering “where you came from,” that is,
keeping track of predecessors. For the quadratic version, you could—instead of using predecessor pointers—simply
copy the list of predecessors at each step. It wouldn’t affect the asymptotic running time (copying all those lists would
be quadratic, but that’s what you already have), and the impact on actual running time and memory footprint should
be negligible.
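As a sketch of the quadratic version with list copying (the function name is hypothetical):

    def lis_with_elements(seq):
        # best[i]: the best increasing subsequence ending at position i
        best = []
        for i, val in enumerate(seq):
            prefix = max((best[j] for j in range(i) if seq[j] < val),
                         key=len, default=[])
            best.append(prefix + [val])    # copies the "predecessor list"
        return max(best, key=len, default=[])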
8-11. This is quite similar to the LCS code in many ways. If you need more help, you could do a web search for