discarding an initial portion whose sum is positive (including it will give a higher sum). Thus, we can start summing 
from the left side, always keeping the best sum (and corresponding interval) so far. Once the sum goes negative, we 
move i (the starting index) to the next position and start our sum afresh from there. (You should convince yourself 
that this really works; perhaps prove it by induction?)
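Here is a minimal Python sketch of that scan (the function name and the sample sequence are just for illustration):

def max_interval(seq):
    # scan left to right; once the running sum goes negative, restart at the next index
    best_sum, best = float("-inf"), (0, 0)
    cur_sum, i = 0, 0
    for j, x in enumerate(seq):
        if cur_sum < 0:
            cur_sum, i = 0, j              # a negative prefix can only hurt us
        cur_sum += x
        if cur_sum > best_sum:
            best_sum, best = cur_sum, (i, j + 1)
    return best_sum, best                  # the best sum and its half-open interval

print(max_interval([2, -3, 4, -1, 2, -5, 3]))   # (5, (2, 5)), that is, 4 - 1 + 2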
Chapter 7
7-1. There are many possibilities here (such as dropping a few coins from the U.S. system). One significant example is 
the old British system (1, 2, 6, 12, 24, 48, 60).
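If you want to check such a system, a small script like the following (using the denominations above; the amount 96 is just one example where greedy fails) compares greedy change-making with a simple dynamic programming optimum:

coins = [60, 48, 24, 12, 6, 2, 1]           # the old British denominations, largest first

def greedy_change(amount):
    count = 0
    for c in coins:                         # always grab the largest coin that fits
        count += amount // c
        amount %= c
    return count

def optimal_change(amount):
    best = [0] + [float("inf")] * amount    # simple DP over all smaller amounts
    for a in range(1, amount + 1):
        best[a] = 1 + min(best[a - c] for c in coins if c <= a)
    return best[amount]

print(greedy_change(96), optimal_change(96))    # 3 versus 2 (48 + 48)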
7-2. This is just a way of viewing how a base-k number system works. This is especially easy to see with k = 10.
7-3. When you consider whether to include the greatest remaining element or not, it will always pay to include it 
because if you don’t, the sum of the remaining elements can’t make up for the lost value.
7-4. Let’s say Jack is the first one to get rejected by his best feasible wife, Jill, and that she rejects him for Adam. By 
assumption, Adam has not yet been rejected by his best feasible wife, Alice, which means that he likes Jill at least as well as Alice. Consider a stable pairing where Jack and Jill are together. (This must exist because Jill is a feasible wife for
Jack.) In this pairing, Jill would still prefer Adam, of course. However, we know that Adam prefers Jill over Alice—or 
any other feasible wife—so this matching wouldn’t be stable after all! In other words, we have a contradiction, 
invalidating our assumption that some man was not paired with his best feasible wife.


7-5. Let’s say Jack was married to Alice and Jill to Adam in a stable pairing. Because Jill is Jack’s best feasible wife, he 
will prefer her to Alice. Because the pairing is stable, Jill must prefer Adam. This holds for any stable pairing where Jill 
has another husband—meaning that she’d prefer any other feasible husband to Jack.
7-6. A greedy algorithm would certainly work if the capacity of your knapsack was divisible by all the various 
increments. For example, if one item was breakable in increments of 2.3 and another in increments of 3.6 and your 
knapsack capacity was divisible by 8.28, you’d be OK, because you have a “resolution” that is good enough. (Do you 
see any further variations we could allow? Other implications of this idea?)
7-7. This follows rather directly from the tree structure. Because the codes all give us unique, deterministic 
instructions on how to navigate from the root to a leaf, there is never any doubt when we have arrived, or where we 
have arrived.
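A tiny illustration, with a made-up prefix-free code table, of how following the bits never leaves the decoding in doubt:

code = {"a": "0", "b": "10", "c": "110", "d": "111"}     # no codeword is a prefix of another
decode = {bits: char for char, bits in code.items()}

def decode_bits(bits):
    out, word = [], ""
    for bit in bits:
        word += bit
        if word in decode:                   # we have reached a leaf: the codeword ends here
            out.append(decode[word])
            word = ""
    return "".join(out)

print(decode_bits("0101101110"))             # 'abcda'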
7-8. We know that a and b are the two items with the lowest frequencies; that means the frequency of a is lower than (or equal to) that of c, and the same holds for b and d. If a and d have equal frequencies, we'd sandwich all the inequalities (including a ≤ b and c ≤ d), and all four frequencies are equal.
7-9. Take the case where all files are of equal, constant size. Then a balanced merge tree would give us a loglinear 
merging time (typical divide and conquer). However, if we make the merge tree completely unbalanced, we’d get a 
quadratic running time (just like insertion sort, for example). Now consider a set of files whose sizes are the powers of 
two up to n/2. The last file would have linear size, and in a balanced merge tree, it would be involved in a logarithmic 
number of merges, meaning that we’d get (at least) loglinear time. Now consider what Huffman’s algorithm would 
do: It would always merge the two smallest files, and they’d always sum to about (that is, one less than) the size of the 
next. We get a sum of powers and end up with a linear merge time.
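A small sketch of what Huffman's strategy costs, using the power-of-two file sizes from this hint (heapq is used just for convenience):

from heapq import heapify, heappop, heappush

def huffman_merge_cost(sizes):
    # repeatedly merge the two smallest files; each merge costs the size of the result
    heap = list(sizes)
    heapify(heap)
    total = 0
    while len(heap) > 1:
        a, b = heappop(heap), heappop(heap)
        total += a + b
        heappush(heap, a + b)
    return total

n = 1024
sizes = [1 << k for k in range(n.bit_length() - 1)]      # 1, 2, 4, ..., n/2
print(huffman_merge_cost(sizes), n)                      # 2035 versus 1024: about 2n, so linear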
7-10. You would need to have at least two edges with the same weight that are both viable as part of a solution. For example, if the lowest weight was used twice, on two different edges, you'd have (at least) two solutions.
7-11. Because the number of edges in all spanning trees is the same, we could do this by simply negating the weights 
(that is, if an edge had weight w, we’d change it to –w) and finding the minimum spanning tree.
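A rough sketch of the trick, using a compact Kruskal-style minimum spanning tree with union-find just for illustration (the edge format and example graph are made up):

def max_spanning_tree(nodes, edges):                 # edges given as (weight, u, v)
    parent = {u: u for u in nodes}
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]            # path compression
            u = parent[u]
        return u
    tree = []
    for w, u, v in sorted((-w, u, v) for w, u, v in edges):   # negate, then go for the minimum
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((-w, u, v))                  # undo the negation in the output
    return tree

print(max_spanning_tree("abc", [(4, "a", "b"), (1, "b", "c"), (3, "a", "c")]))
# [(4, 'a', 'b'), (3, 'a', 'c')]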
7-12. We need to show this for the general case where we have a set of edges that we know are going into the solution. 
The subproblem is then the remaining graph, and we want to show that finding a minimum spanning tree in the 
rest that’s compatible with what we have (no cycles) will give us an optimal solution globally. As usual, we show this 
by contradiction, assuming that we could find a nonoptimal solution to this subproblem that would give us a better 
global solution. Both subsolutions would be compatible with what we had, so they would be interchangeable. Clearly, 
swapping out the nonoptimal solution with the optimal one would improve the global sum, which gives us our 
contradiction.
7-13. Kruskal’s algorithm invariably finds a minimum spanning forest, which in the case of connected graphs turns 
into a minimum spanning tree. Prim’s algorithm could be extended with a loop, like depth-first search, so that it 
would restart in all components.
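A sketch of Prim with such an outer loop, assuming the graph is a dict of dicts with edge weights (the small example graph is made up):

from heapq import heapify, heappush, heappop

def prim_forest(G):
    visited, forest = set(), []
    for start in G:                            # restart in every unvisited component
        if start in visited:
            continue
        visited.add(start)
        heap = [(w, start, v) for v, w in G[start].items()]
        heapify(heap)
        while heap:
            w, u, v = heappop(heap)
            if v in visited:
                continue
            visited.add(v)
            forest.append((u, v, w))           # a tree edge in this component
            for x, wx in G[v].items():
                if x not in visited:
                    heappush(heap, (wx, v, x))
    return forest

G = {"a": {"b": 1}, "b": {"a": 1}, "c": {"d": 2}, "d": {"c": 2}}
print(prim_forest(G))                          # one edge in each of the two components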
7-14. It will still run, but it won’t necessarily find the cheapest traversal (or min-cost arborescence).
7-15. Because you can use this to sort real numbers, which has a loglinear lower bound. (This is similar to the case 
with convex hulls.) You just use the numbers as x-coordinates and use identical y-coordinates. The minimum 
spanning tree would then be a path from the first number to the last, giving you the sorted order.
7-16. All we need to show is that the component trees have (at most) logarithmic height. The height of a component 
tree is equal to the highest rank in the component. This rank is increased only if two component trees of equal 
height are merged, and then it is increased by one. The only way to increase some rank in every union would be to 
merge the components in a balanced way, giving a logarithmic final rank (and height). Going some rounds without 
incrementing any ranks won’t help because we’re just “hiding” nodes in trees without changing their ranks, giving us 
less to work with. In other words, there is no way to get more than a logarithmic height for the component trees.
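For reference, a minimal union-by-rank sketch (without path compression), matching the argument that a root's rank, and hence the tree height, only grows when two equal-rank trees are merged:

class UnionFind:
    def __init__(self, items):
        self.parent = {x: x for x in items}
        self.rank = {x: 0 for x in items}
    def find(self, x):
        while self.parent[x] != x:
            x = self.parent[x]
        return x
    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx                    # make rx the higher-ranked root
        self.parent[ry] = rx                   # hang the lower tree under the higher one
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1                 # only equal ranks bump the height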


7-17. It’s all hidden by the logarithmic operations of the heap. In the worst case, if we added each node only once, 
these operations would be logarithmic in the number of nodes. Now, they could be logarithmic in the number 
of edges, but since the number of edges is polynomial (quadratic) in the number of nodes, that amounts only to a 
constant difference: Θ(lg m) = Θ(lg n²) = Θ(lg n).
7-18. The interval with the earliest starting time could, potentially, cover the entire remainder of the set, which could 
all be nonoverlapping. If we wanted to go with the highest starting time, we’d be equally doomed to failure, by always 
getting only a single element.
7-19. We’d have to sort them all, but after that, the scanning and elimination can be performed in linear time in total 
(do you see how?). In other words, the total running time is dominated by the sorting, which is loglinear in general.
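One way this might look, assuming the goal is to keep a maximum set of nonoverlapping intervals: sort once by finishing time, then a single linear scan keeps each interval that starts no earlier than the last one kept ended.

def max_nonoverlapping(intervals):
    result, last_end = [], float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):   # sort by finishing time
        if start >= last_end:                  # compatible with everything kept so far
            result.append((start, end))
            last_end = end
    return result

print(max_nonoverlapping([(1, 4), (2, 3), (3, 5), (6, 7)]))      # [(2, 3), (3, 5), (6, 7)]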
Chapter 8
8-1. Instead of checking whether the parameter tuple is already in the cache, just retrieve it and catch the KeyError 
that might occur if it’s not there. Using some nonexistent value (such as None) along with get might give even better 
performance.
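A sketch of the try/except variant of a memoizing decorator (the name memo is just illustrative):

from functools import wraps

def memo(func):
    cache = {}
    @wraps(func)
    def wrap(*args):
        try:                                   # assume a cache hit; handle the miss instead
            return cache[args]
        except KeyError:
            cache[args] = func(*args)
            return cache[args]
    return wrap

@memo
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(100))                                # 354224848179261915075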
8-2. One way of viewing this might be counting subsets. Each element is either in the subset or not.
8-3. For fib, you just need the two previous values at each step, while for two_pow, you only need to keep doubling the 
value you have.
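Minimal iterative sketches of both, keeping only what the hint says you need:

def fib(n):
    a, b = 0, 1                                # only the two previous values are kept
    for _ in range(n):
        a, b = b, a + b
    return a

def two_pow(n):
    x = 1
    for _ in range(n):                         # just keep doubling the value you have
        x *= 2
    return x

print(fib(10), two_pow(10))                    # 55 1024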
8-5. Just use the “predecessor pointer” idea from Chapter 5. If you’re doing the forward version, store which choice 
you made (that is, which out-edge you followed) in each node. If you’re doing the reverse version, store where you 
came from to each node.
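A sketch of the forward (memoized) version with one stored choice per node, assuming the DAG is a dict of dicts of edge weights (the example graph is made up):

from functools import lru_cache

def dag_sp(W, s, t):
    choice = {}                                # which out-edge we followed from each node
    @lru_cache(maxsize=None)
    def d(u):
        if u == t:
            return 0
        best, nxt = float("inf"), None
        for v, w in W[u].items():
            cand = w + d(v)
            if cand < best:
                best, nxt = cand, v
        choice[u] = nxt
        return best
    dist = d(s)
    path, u = [s], s
    while u != t:                              # follow the stored choices to recover the path
        u = choice[u]
        path.append(u)
    return dist, path

W = {"a": {"b": 2, "c": 5}, "b": {"c": 1, "t": 7}, "c": {"t": 3}, "t": {}}
print(dag_sp(W, "a", "t"))                     # (6, ['a', 'b', 'c', 't'])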
8-6. Because the topological sorting still has to visit every edge.
8-7. You could let each node observe its predecessors and then explicitly trigger an update in the estimate in the start 
node (giving it a value of zero). The observers would be notified of changes and could update their own estimates 
accordingly, triggering new updates in their observers. This is in many ways quite similar to the relaxation-based 
solution in this chapter. The solution would be a bit “over-eager,” though. Because cascades of updates are triggered 
instantly (instead of letting each node finish all of its out- or in-updates, one node at a time), the solution could, in fact, have 
exponential running time. (Do you see how?)
8-8. This can be shown in many ways—but one is simply to look at how the list is constructed. Each object is added 
(either appended or overwriting an old element) using bisect, which finds the right place to put it, in sorted order. By 
induction, end will be sorted. (Can you think of other ways of seeing that this list must be sorted?)
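For reference, a sketch of the bisect-based construction of end (bisect_left is used here, which assumes strictly increasing subsequences):

from bisect import bisect_left

def lis_length(seq):
    end = []                                   # end[i]: smallest tail of an increasing
    for x in seq:                              # subsequence of length i + 1; stays sorted
        idx = bisect_left(end, x)
        if idx == len(end):
            end.append(x)                      # x extends the longest subsequence so far
        else:
            end[idx] = x                       # overwrite: a smaller tail for this length
    return len(end)

print(lis_length([3, 1, 0, 2, 4]))             # 3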
8-9. You don’t need bisect when the new element is larger than the last element or if end is empty. You could add an 
if statement to check for that. It might make the code faster, but it would probably make it a bit less readable.
8-10. Just like in the DAG shortest path problem, this would involve remembering “where you came from,” that is, 
keeping track of predecessors. For the quadratic version, you could—instead of using predecessor pointers—simply 
copy the list of predecessors at each step. It wouldn’t affect the asymptotic running time (copying all those lists would 
be quadratic, but that’s what you already have), and the impact on actual running time and memory footprint should 
be negligible.
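One way to read this, building on the bisect sketch above: for each length, keep a copy of an actual best subsequence instead of predecessor pointers.

from bisect import bisect_left

def lis(seq):
    end, best = [], []                         # best[i]: a subsequence of length i + 1
    for x in seq:
        idx = bisect_left(end, x)
        prefix = best[idx - 1] if idx else []
        if idx == len(end):
            end.append(x)
            best.append(prefix + [x])          # copy the list of predecessors at each step
        else:
            end[idx] = x
            best[idx] = prefix + [x]
    return best[-1] if best else []

print(lis([3, 1, 0, 2, 4]))                    # [0, 2, 4]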
8-11. This is quite similar to the LCS code in many ways. If you need more help, you could do a web search for 
