Appendix D ■ Hints for Exercises
8-14. You could divide c and all the elements in w by their greatest common divisor.
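In Python, that might look something like this (a minimal sketch; the function name is made up, with c the capacity and w the list of weights, as in the exercise):

from functools import reduce
from math import gcd

def rescale(c, w):
    # Divide the capacity and every weight by their greatest common divisor.
    d = reduce(gcd, w, c)
    return c // d, [x // d for x in w]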
8-16. The running time is pseudopolynomial, meaning that it is still exponential as a function of the instance size (the number of bits in the encoding). You could easily crank up the knapsack capacity so that the running time becomes unacceptable, while keeping the actual problem instance size low.
8-19. You could add a set of dummy leaf nodes representing failed searches. Each leaf node would then represent all
the nonexistent elements between two that were actually in the tree. You’d have to treat these separately in the sums.
Chapter 9
9-1. You have to somehow modify the algorithm or the graph so the detection mechanism for negative additive cycles can be used to find multiplicative cycles where the product of the exchange rates ends up above 1. The easiest solution is simply to transform all the weights by taking their logarithms and negating them. You could then use the standard version of Bellman–Ford, and a negative cycle would give you what you needed. (Do you see how?) Of course, to actually use this for anything, you should work out how to output which nodes are involved in this cycle.
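As a minimal sketch of the detection part (the function name and the input format, a dict mapping currency pairs to exchange rates, are assumptions; recovering the cycle itself is left out, as in the hint):

from math import log

def arbitrage_exists(rates):
    # rates maps (u, v) to the exchange rate from currency u to currency v.
    # With w(u, v) = -log(rate), a cycle whose rates multiply to more than 1
    # becomes a cycle whose weights sum to less than 0.
    nodes = {u for u, v in rates} | {v for u, v in rates}
    w = {e: -log(r) for e, r in rates.items()}
    # All-zero starting distances act like a virtual source connected to
    # every node, so cycles anywhere in the graph will be found.
    dist = {u: 0 for u in nodes}
    for _ in range(len(nodes) - 1):            # the usual n-1 rounds
        for (u, v), wt in w.items():
            if dist[u] + wt < dist[v]:
                dist[v] = dist[u] + wt
    # An edge that can still be relaxed means a negative cycle exists.
    return any(dist[u] + wt < dist[v] for (u, v), wt in w.items())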
9-2. This isn’t a problem, any more than it is in the DAG shortest path problem. It doesn’t matter which of them ends up first in the ordering, because the other one (which then comes later) can’t be used to create a shortcut anyway.
9-3. It suddenly gives you a pseudopolynomial running time (with respect to the original problem instance). Do you see why?
9-4. This can depend on how you do it. Adding nodes multiple times is no longer a good idea, and you should
probably set things up so you can access and modify entries directly in your queue when you run relax. You could
then do this part in constant time, while the extraction from the queue would now be linear, and you’d end up with a
quadratic running time. For a dense graph, that’s actually quite OK.
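A minimal sketch of such a matrix-based version (the function name and input conventions are assumptions, not from the text):

from math import inf

def dijkstra_dense(W, s):
    # W is an n-by-n weight matrix (list of lists) with W[u][v] = inf where
    # there is no edge. The "queue" is just the dist array: updating an entry
    # is O(1), extracting the minimum is an O(n) scan, for O(n**2) in total.
    n = len(W)
    dist = [inf] * n
    dist[s] = 0
    done = [False] * n
    for _ in range(n):
        # "Extract min": scan for the closest unfinished node.
        u = min((v for v in range(n) if not done[v]),
                key=lambda v: dist[v], default=None)
        if u is None or dist[u] == inf:
            break                              # the rest is unreachable
        done[u] = True
        for v in range(n):                     # relax all out-edges of u
            if dist[u] + W[u][v] < dist[v]:
                dist[v] = dist[u] + W[u][v]
    return dist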
9-5. Things can go wrong if there are negative cycles, but the Bellman–Ford algorithm would have raised an exception in that case. Barring this, we can turn to the triangle inequality. We know that h(v) ≤ h(u) + w(u, v) for all nodes u and v. This means that w’(u, v) = w(u, v) + h(u) - h(v) ≥ 0, as required.
9-6. We might preserve the shortest paths, but we couldn’t necessarily guarantee that the weights would be nonnegative.
9-9. This requires only a few changes. You’d use a (binary, Boolean) adjacency matrix instead of a weight matrix. When seeing whether you could improve a path, you would not use addition and minimum; instead, you would check whether there was a new path there. In other words, you would use A[u, v] = A[u, v] or (A[u, k] and A[k, v]).
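In code, the whole thing might look like this (a sketch, assuming A is a list of lists of Booleans):

def transitive_closure(A):
    # Warshall's algorithm: Floyd-Warshall with "or" and "and" taking the
    # place of minimum and addition. Result: A[u][v] is true iff v is
    # reachable from u.
    n = len(A)
    A = [row[:] for row in A]                  # work on a copy of the input
    for k in range(n):
        for u in range(n):
            for v in range(n):
                A[u][v] = A[u][v] or (A[u][k] and A[k][v])
    return A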
9-10. The tighter stopping criterion tells us to stop as soon as l + r is greater than the shortest path we’ve found so far, and we’ve already established that that is correct. At the point when both directions have yielded (and therefore visited) the same node, we know the shortest path going through that node has been explored; because it is itself one of those we have explored, it must be greater than or equal to the smallest one of those we have explored.
9-11. No matter which edge you pick, we know that d(s, u) + w(u, v) + d(v, t) is smaller than the length of the shortest path found so far, which is, again, shorter than or equal to l + r. This means that both l and r would go past the midpoint of the path, wherever that is. If the midpoint is inside an edge, just choose that; if it’s exactly on a node, choosing either adjacent edge on the path would do the trick.
9-14. Consider the shortest path from v to t. The modified cost can be expressed in two ways. The first is as d(v, t) - h(v) + h(t), which is the same as d(v, t) - h(v) because h(t) = 0. The other way of expressing this modified cost is as the sum of the individual modified edge weights; by assumption, these are all nonnegative (that is, h is feasible). Therefore, we get d(v, t) - h(v) ≥ 0, or d(v, t) ≥ h(v).
Chapter 10
10-1. Simply split each node v into two nodes, v and v’, and add the edge vv’ with the desired capacity. All in-edges would then stay with v, while all out-edges from v would be moved to v’.
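A small sketch of the transformation (the dict-of-dicts graph representation and the names are assumptions):

def split_nodes(G, node_cap):
    # G maps every node u to a dict {v: capacity} of its out-edges;
    # node_cap maps each node to its capacity. Each node v becomes
    # (v, 'in') and (v, 'out'), joined by an edge of capacity node_cap[v].
    H = {}
    for v in G:
        H[v, 'in'] = {(v, 'out'): node_cap[v]}   # the new internal edge
        H[v, 'out'] = {}
    for u, edges in G.items():
        for v, cap in edges.items():
            H[u, 'out'][v, 'in'] = cap           # out-edges leave u's copy
    return H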
10-2. You could modify the algorithm, or you could modify your data. For example, you could split each node into two, with a unit-capacity edge between them, and give all remaining edges infinite capacity. Then the maximum flow would let you identify the vertex-disjoint paths.
10-3. We know the running time is O(m²), so all we need to do is construct a situation where the quadratic running time occurs. One possibility is to have m/2 nodes in addition to s and t, each with an edge from s and to t. In the worst case, the traversal would visit all the unsaturated out-edges from s, which (by the handshake sum) gives us a quadratic running time.
10-4. Simply replace each edge uv by uv and vu, both with the same capacity. In this case, you could, of course, end up with flow going both ways at the same time. This isn’t really a problem; to find out the actual flow through the undirected edge, just subtract one from the other. The sign of the result will indicate the direction of flow. (Some books avoid having edges in both directions between nodes in order to simplify the use of residual networks. This can be done by splitting one of the two directed edges in two, with a dummy node.)
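For example (a sketch, assuming the undirected edges come as (u, v, capacity) triples):

def directify(undirected_edges):
    # Each undirected edge becomes a directed edge each way, both with
    # the original capacity.
    cap = {}
    for u, v, c in undirected_edges:
        cap[u, v] = c
        cap[v, u] = c
    return cap

# After finding a max flow, the net flow across the undirected edge {u, v}
# is flow[u, v] - flow[v, u]; the sign gives the direction.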
10-6. For example, you could give the source a capacity (as described in Exercise 10-1) equal to the desired flow value.
If feasible, the maximum flow would then have that value.
10-8. You can solve this by finding a minimum cut, as follows. If guest A will attend only if B also attends, add the edge (A, B) to your network, with an infinite capacity. This edge will then never cross a cut (in the forward direction), if it can be avoided. The friends you invite will be on the source side of the cut, while the others will be on the sink side. Your compatibilities can be modeled as follows: any positive compatibility is used as a capacity for an edge from the source, while any negative compatibility is used, negated, as a capacity for an edge to the sink. The algorithm will then minimize the sum of such edges crossing the cut, keeping the ones you like on the source side and the ones you don’t on the sink side (to the extent possible).
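A sketch of the construction (the input format is an assumption; you would still run a max-flow/min-cut algorithm on the result):

from math import inf

def build_party_network(compat, depends):
    # compat maps each guest to a compatibility score (possibly negative);
    # depends holds (a, b) pairs meaning that a attends only if b does.
    # G[u][v] is the capacity of edge (u, v).
    s, t = 'source', 'sink'
    G = {s: {}, t: {}}
    for guest, score in compat.items():
        G.setdefault(guest, {})
        if score > 0:
            G[s][guest] = score    # crossing the cut = not inviting a friend
        elif score < 0:
            G[guest][t] = -score   # crossing the cut = inviting a bore
    for a, b in depends:
        G.setdefault(a, {})
        G.setdefault(b, {})
        G[a][b] = inf              # may never cross the cut forward
    return G, s, t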
10-9. Because each person has a single favorite seat, each node on the left side has a single edge to the right. That
means that the augmenting paths all consist of single unsaturated edges—so the behavior described is equivalent
to the augmenting path algorithm, which we know will give an optimal answer (that is, it’ll pair as many people as
possible to their favorite seats).
10-10. Represent the groups for both rounds as nodes. Give the first groups in-edges from the source, with a capacity of k. Similarly, give the second groups out-edges to the sink, also with capacity k. Then add edges from all the first groups to all the second groups, all with capacity 1. Each flow unit is then a person, and if you’re able to saturate the edges from the source (or to the sink), you’ve succeeded. Each group will then have k persons, and each of the second groups will have at most one person from each of the first.
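A sketch of the construction (names and input format assumed):

def build_group_network(first, second, k):
    # first and second are the group names for the two rounds; k is the
    # group size. Returns a capacity dict and the two terminals.
    s, t = 'source', 'sink'
    cap = {}
    for g in first:
        cap[s, g] = k              # each first-round group holds k people
    for h in second:
        cap[h, t] = k              # so does each second-round group
    for g in first:
        for h in second:
            cap[g, h] = 1          # at most one person shared by g and h
    return cap, s, t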
10-11. This solution combines the supply/demand idea with min-cost flow. Represent each planet by a node. Also add a node for every passenger type (that is, for each valid combination of passenger origin and destination). Link every planet i < n to planet i + 1 with a capacity equal to the actual carrying capacity of the spaceship. The passenger type nodes are given supplies equal to the number of passengers of that type (that is, the number of passengers wanting to go from i to j). Consider the node v, representing passengers wanting to go from planet i to planet j. These can either make the trip or not. We represent that fact by adding an edge (with infinite capacity) from v to i and another to j. We then add a demand to node j equal to the supply at v. (In other words, we make sure that each planet has a demand that will account for all passengers who want to go there.) Finally, we add a cost on (v, i) equal to the fare for the trip from i to j, except it’s negative. This represents the amount we’ll make for each of the passengers at v that we take on. We now find a min-cost flow that is feasible with respect to these supplies and demands. This flow will make sure that each passenger either is routed to their desired origin (meaning they’ll take the trip) and then via the planet-to-planet edges to their destination, adding to our revenue, or is routed directly to their destination (meaning they won’t take the trip) along a zero-cost edge.
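A sketch of the construction alone (the input format and the (capacity, cost) edge encoding are assumptions; a min-cost flow routine is still needed afterward):

from math import inf

def build_passenger_network(n, ship_cap, passengers, fares):
    # Planets are numbered 1..n; passengers maps (i, j) to the number of
    # travelers from i to j, and fares maps (i, j) to the ticket price.
    # edges maps (u, v) to a (capacity, cost) pair; supply maps nodes to
    # supply (negative means demand).
    edges, supply = {}, {}
    for i in range(1, n):
        edges[i, i + 1] = (ship_cap, 0)        # the ship's route
    for (i, j), count in passengers.items():
        v = ('type', i, j)                     # one node per passenger type
        supply[v] = count                      # the passengers start here
        supply[j] = supply.get(j, 0) - count   # ... and must end up at j
        edges[v, i] = (inf, -fares[i, j])      # taking the trip earns fares
        edges[v, j] = (inf, 0)                 # skipping it costs nothing
    return edges, supply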
Chapter 11
11-1. Because the running time of bisection is logarithmic. Even if the size of the value range in question is exponential as a function of the problem size, the actual running time will only be linear. (Do you see why?)
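For instance (a sketch, assuming feasible is monotone: false up to some point and true from there on, with feasible(hi) true):

def bisect_smallest(lo, hi, feasible):
    # Smallest x in [lo, hi] with feasible(x) true. The loop runs about
    # log2(hi - lo) times, so a value range of size 2**n (exponential in
    # the problem size) means only n feasibility checks (linear).
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid               # the answer is mid or something below
        else:
            lo = mid + 1           # the answer lies strictly above mid
    return lo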
11-2. Because they are all in NP, and all problems in NP can be reduced to any NP-complete problem (by the
definition of NP-completeness).
11-3. Because the running time is O(nW), where W is the capacity. If W is polynomial in n, then so is the running time.
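A minimal sketch of such an O(nW) solution (names assumed):

def knapsack(values, weights, W):
    # 0/1 knapsack in O(nW) time. Pseudopolynomial, because W is a number
    # in the input, not a size; compare Exercise 8-16.
    best = [0] * (W + 1)           # best[c]: max value within capacity c
    for v, w in zip(values, weights):
        # Go through capacities downward so each item is used at most once.
        for c in range(W, w - 1, -1):
            best[c] = max(best[c], best[c - w] + v)
    return best[W]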
11-4. The reduction from the version with an arbitrary k is simple enough: simply add -k as an element.
11-5. It should be clear that we could reduce an unbounded version of the subset sum problem to unbounded knapsack (just set weights and values equal to the numbers in question). The challenge is getting from the unbounded to the bounded case. This is basically a matter of juggling numbers, and there are several elements to this juggling. We still want to keep the weights so that the optimization maximizes them. In addition, however, we need to add some kind of constraints to make sure at most one of each number is used. Let’s look at this constraint thing separately. For n numbers, we can try to create n “slots” using powers of two, representing number i by 2^i.