Appendix D ■ Hints for Exercises
11-11. We can trace things up the chain. Longest paths in DAGs can be used to find Hamilton paths, but only in DAGs. This will, again, let us find directed Hamilton cycles in digraphs that become DAGs when we split them at a single node (or, by fiddling with the reduction, something very close to that). However, the digraph we used for reducing 3-SAT to the directed Hamilton cycle problem was nothing like this. True, we could see a hint of this structure in the s and t nodes, and in the general downward direction of edges from s to t, but every row was full of antiparallel edges, and the ability to go in either direction was crucial to the proof. Therefore, things break down here if we assume acyclicity further down.
11-12. The reasoning here is quite similar to that in Exercise 11-11.
11-13. As discussed in the main text, if the object is bigger than half the knapsack, we’re done. If it’s slightly less (but not as small as a quarter of the knapsack), we can include two and again have filled more than half. The only remaining case is if it’s even smaller. In that case, we can just keep piling on, until we get past the midline—and because the object is so small, it won’t extend far enough across the midline to get us into trouble.
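The piling-on idea can be sketched as follows, under the simplifying assumption that we’re adding identical copies of one object size (the function name and interface are made up for illustration):

```python
def fill_past_half(capacity, size):
    """Keep adding copies of an object of the given size until the total
    passes the halfway point (or no further copy fits).

    A sketch of the piling-on argument, assuming identical object sizes:
    once we cross capacity/2, a small object can't overshoot the capacity.
    """
    count, total = 0, 0
    while total <= capacity / 2 and total + size <= capacity:
        total += size
        count += 1
    return count, total
```

For example, with a knapsack of capacity 10 and objects of size 3, we stop at two objects (total 6): past the midline, but still within capacity.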
11-14. This is actually easy. First, randomly order the nodes. This will give you two DAGs, consisting of the edges going left-to-right and those going right-to-left. The larger of these must contain at least half the edges, giving you a 2-approximation.
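A minimal sketch of this idea (function name hypothetical; edges are assumed to be pairs of distinct nodes):

```python
import random

def acyclic_2_approx(nodes, edges):
    """Order the nodes randomly; the edges pointing 'right' and those
    pointing 'left' each form a DAG, and the larger of the two sets
    must contain at least half the edges."""
    shuffled = random.sample(nodes, len(nodes))
    order = {u: i for i, u in enumerate(shuffled)}
    forward = [(u, v) for u, v in edges if order[u] < order[v]]
    backward = [(u, v) for u, v in edges if order[u] > order[v]]
    return forward if len(forward) >= len(backward) else backward
```

Note that the guarantee holds no matter which random order we draw, which is why no derandomization is needed for the 2-approximation bound.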
11-15. Let’s say all the nodes are of odd degree (which will give the matching as large a weight as possible). That means the cycle will consist only of these nodes, and every second edge of the cycle will be part of the matching. Because we’re choosing the minimum matching, we of course choose the smaller of the two possible alternating sequences, ensuring that the weight is at most half the total of the cycle. Because we know the triangle inequality holds, relaxing our assumption and dropping some nodes won’t make the cycle—or the matching—more expensive.
11-16. Feel free to be creative here. You could, perhaps, just try to add each of the objects individually, or you could
add some random objects? Or you could run the greedy bound initially—although that will happen already in one of
the first expansions …
11-17. Intuitively, you’re getting the most possible value out of the items. See whether you can come up with a more
convincing proof, though.
11-18. This requires some knowledge of probability theory, but it’s not that hard. Let’s look at a single clause, where each literal (either a variable or its negation) is either true or false, and the probability of either outcome is 1/2. This means that the probability of the entire clause being true is 1 − (1/2)³ = 7/8. This is also the expected number of clauses that will be true, if we have only a single clause. If we have m clauses, we can expect to have 7m/8 true clauses. We know that m is an upper bound on the optimum, so our approximation ratio becomes m/(7m/8) = 8/7. Pretty neat, don’t you think?
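The 7/8 expectation can be checked by brute force: averaging the number of satisfied clauses over all assignments is exactly the expectation under a uniformly random assignment. A small sketch (function names and clause encoding are made up for illustration; a literal k stands for variable |k|, negated if k < 0):

```python
from itertools import product

def satisfied(clause, assignment):
    # A clause is true if at least one of its literals is true.
    return any(assignment[abs(lit)] == (lit > 0) for lit in clause)

def avg_true_clauses(clauses, n):
    """Average number of satisfied clauses over all 2**n assignments of
    n variables -- the expectation under a uniformly random assignment."""
    total = count = 0
    for bits in product([False, True], repeat=n):
        assignment = {i + 1: bits[i] for i in range(n)}
        total += sum(satisfied(c, assignment) for c in clauses)
        count += 1
    return total / count
```

For two clauses over three distinct variables each, the average comes out at 2 · 7/8 = 1.75, as predicted.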
11-19. The problem is now expressive enough to solve (for example) the maximum independent set problem, which is
NP-hard. Therefore, your problem is also NP-hard. One reduction goes as follows: Set the compatibility for each guest
to 1 and add conflicts for each edge in the original graph. If you can now maximize the compatibility sum without
inviting guests who dislike each other, you have found the largest independent set.
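The reduction can be illustrated with a (hopelessly exponential) brute-force solver, just to see that the two problems coincide when every compatibility is 1 (function name hypothetical; conflicts are the edges of the original graph):

```python
from itertools import combinations

def best_party(guests, conflicts):
    """Brute-force the largest conflict-free guest list (exponential time;
    for illustration only). With every compatibility set to 1, maximizing
    the compatibility sum is exactly the maximum independent set."""
    best = set()
    for r in range(len(guests) + 1):
        for subset in combinations(guests, r):
            s = set(subset)
            if not any(u in s and v in s for u, v in conflicts):
                if len(s) > len(best):
                    best = s
    return best
```

On a three-node path (conflicts 1–2 and 2–3), the best party is {1, 3}, which is also the graph’s maximum independent set.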
11-20. The NP-hardness can easily be established, even for
m = 2, by reducing from the partition problem. If we can
distribute the jobs so that the machines finish at the same time, that will clearly minimize the completion time—and
if we can minimize the completion time, we will also know whether they can finish simultaneously (that is, whether
the values can be partitioned). The approximation algorithm is easy, too. We consider each job in turn (in some
arbitrary order) and assign it to the machine that currently has the earliest finish time (that is, the lowest workload).
In other words, it’s a straightforward greedy approach. Showing that it’s a 2-approximation is a little bit harder. Let
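The greedy assignment itself is short; a minimal sketch using a heap over the machine loads (function name hypothetical):

```python
import heapq

def greedy_makespan(jobs, m):
    """Assign each job, in the given order, to the machine that currently
    finishes earliest (has the lowest workload); return the completion
    time (makespan) of the resulting schedule."""
    heap = [(0, i) for i in range(m)]  # (current load, machine index)
    for job in jobs:
        load, i = heapq.heappop(heap)
        heapq.heappush(heap, (load + job, i))
    return max(load for load, _ in heap)
```

For example, the jobs [3, 3, 2, 2, 2] on two machines give a greedy makespan of 7, while the optimum is 6 (3 + 3 against 2 + 2 + 2)—within the factor-2 bound, but not optimal.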