unit of flow over a given edge.
* This section is a bit hard and is not essential in order to understand the rest of the book. Feel free to skim it or even skip it entirely. You might want to read the first couple of paragraphs, though, to get a feel for the problem.
An immediate application of this is an extension of the bipartite matching problem. We can keep using the zero-
one flow formulation but add costs to each of the edges. We then have a solution to the min-cost bipartite matching
(or assignment) problem, hinted at in the introduction: By finding a maximum flow, we know we have a maximum
matching, and by minimizing the cost, we get the matching we’re looking for.
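To make the reduction concrete, a network for the assignment problem might be set up along the following lines. This is an illustrative sketch rather than one of the book's listings; the dict-based representation (adjacency sets in G, unit capacities in C, edge costs in W) and the node names 's' and 't' are assumptions.

def assignment_network(left, right, cost):
    # left, right: the two (disjoint) vertex classes; cost maps allowed
    # pairs (u, v), with u in left and v in right, to the cost of matching them.
    s, t = 's', 't'                          # Assumes 's' and 't' aren't vertex names
    G = {s: set(left), t: set()}             # Adjacency sets
    C, W = {}, {}                            # Capacities and costs, keyed on edges
    for u in left:
        G[u] = set()
        C[s, u], W[s, u] = 1, 0              # Unit capacity, zero cost from source
    for v in right:
        G[v] = {t}
        C[v, t], W[v, t] = 1, 0              # Unit capacity, zero cost to sink
    for (u, v), c in cost.items():
        G[u].add(v)                          # One candidate pairing per edge
        C[u, v], W[u, v] = 1, c
    return G, C, W, s, t

A cheapest maximum flow in this network then saturates exactly the left-to-right edges of a min-cost maximum matching.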
This general problem is often referred to simply as min-cost flow, meaning that rather than looking for the cheapest maximum flow, we're looking for the cheapest flow of a given magnitude. For example, the problem might be “give me a flow of size k, if such a flow exists, and make sure you construct it as cheaply as possible.” Alternatively, you could construct a flow that is as great as possible, up to the value k; that way, finding the max-flow (or the min-cost max-flow) would simply involve setting k to a sufficiently large value. It turns out that focusing on maximum flow is sufficient, though; we can optimize to a specified flow value by a simple reduction, without modifying the algorithm (see Exercise 10-6).
The idea introduced by Busacker and Gowen for solving the min-cost flow problem was this: Look for the
cheapest augmenting path. That is, use a shortest path algorithm for weighted graphs, rather than just BFS, during the
traversal step. The only wrinkle is that edges traversed backward have their cost negated for the purpose of finding the
shortest path. (They’re used for canceling flow, after all.)
If we could assume that the cost function were positive, we could use Dijkstra's algorithm to find our augmenting paths. The problem is that once we push some flow from u to v, we can suddenly traverse the (fictitious) reverse edge vu, which has a negative cost. In other words, Dijkstra's algorithm would work just fine in the first iteration, but
after that, we’d be doomed. Luckily, Edmonds and Karp thought of a neat trick to get around this problem—one that
is quite similar to the one used in Johnson’s algorithm (see Chapter 9). We can adjust all the weights in a way that (1)
makes them all positive, and (2) forms telescoping sums along all traversal paths, ensuring that the shortest paths are
still shortest.
Let’s say we are in the process of performing the algorithm, and we have established some feasible flow. Let w(u, v)
be the edge weight, adjusted according to the rules of augmenting path traversal (that is, it’s unmodified along edges
with residual capacity, and it’s negated along backward edges with positive flow). Let us once again (that is, just like
in Johnson’s algorithm) set h(v) = d(s, v), where the distance is computed with respect to w. We can then define an
adjusted weight, which we can use for finding our next augmenting path: w’(u, v) = w(u, v) + h(u) - h(v). Using the same
reasoning as in Chapter 9, we see that this adjustment will preserve all the shortest paths and, in particular, the shortest
augmenting paths from s to t.
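As a small illustration (a hypothetical helper, not one of the book's listings), the reduced residual costs could be computed from a dict res_cost of current residual edge costs and the potentials h as follows. Along any path from s to t, the h-terms telescope to h(s) - h(t), a constant, so the ranking of paths is unchanged; and since h(v) ≤ h(u) + w(u, v) holds for every residual edge when h is a shortest-path distance from s, the adjusted weights are nonnegative, which is exactly what Dijkstra's algorithm needs.

def adjust_costs(res_cost, h):
    # Reduced cost of each residual edge: w'(u, v) = w(u, v) + h(u) - h(v).
    # Nonnegative whenever h gives valid shortest-path distances from s
    # under the current residual costs, so Dijkstra's algorithm applies.
    return {(u, v): w + h[u] - h[v] for (u, v), w in res_cost.items()}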
Implementing the basic Busacker-Gowen algorithm is mostly a matter of replacing BFS with, for example, Bellman-Ford (see Listing 9-2) in the code for bfs_aug (Listing 10-3). If you want to use Dijkstra's algorithm, you simply have to use the modified weights, as described earlier (Exercise 10-7). For an implementation based on Bellman-Ford, see Listing 10-5. (The implementation assumes that edge weights are given by a separate map, so W[u,v] is the weight, or cost, of the edge from u to v.) Note that the flow labeling from the Ford-Fulkerson labeling approach has been merged with the relax operation of Bellman-Ford; both are performed in the label function. For the label function to do anything, it must both have found a better path and have some free capacity along the new edge. If that is the case, both the distance estimate and the flow label are updated.
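As a rough, self-contained sketch of the same approach (not the book's Listing 10-5, and with the relaxation written inline rather than in a nested label function), one might write something like this. The representation is an assumption made for illustration: G maps every node to a set of out-neighbors, while C and W hold capacities and costs keyed on edge pairs.

inf = float('inf')

def cheapest_aug(G, C, W, H, s, t):
    # Cheapest augmenting s-t path in the residual graph, by Bellman-Ford.
    # Forward residual edge (u, v): leftover capacity, cost W[u, v].
    # Backward residual edge (v, u): positive flow on (u, v), cost -W[u, v].
    D, P, F = {s: 0}, {s: None}, {s: inf}           # Distances, preds, flow labels
    for _ in range(len(G)):                         # A path has < n edges: n rounds
        changed = False
        for u in G:
            for v in G[u]:                          # Every original edge (u, v)
                if u in D and H[u, v] < C[u, v]:    # Forward residual edge
                    d = D[u] + W[u, v]
                    if d < D.get(v, inf):
                        D[v], P[v] = d, (u, +1)     # Reached v going forward
                        F[v] = min(F[u], C[u, v] - H[u, v])
                        changed = True
                if v in D and H[u, v] > 0:          # Backward residual edge
                    d = D[v] - W[u, v]              # Negated cost: canceling flow
                    if d < D.get(u, inf):
                        D[u], P[u] = d, (v, -1)     # Reached u going backward
                        F[u] = min(F[v], H[u, v])
                        changed = True
        if not changed:
            break                                   # No more improvements
    if t not in D:
        return None, 0                              # No augmenting path exists
    return P, F[t]

def busacker_gowen(G, C, W, s, t):
    # Min-cost maximum flow by repeatedly augmenting along the cheapest path.
    H = {(u, v): 0 for u in G for v in G[u]}        # Start with the zero flow
    while True:
        P, f = cheapest_aug(G, C, W, H, s, t)
        if not f:
            return H                                # No augmenting path left
        v = t
        while v != s:                               # Walk the path back to s
            u, forward = P[v]
            if forward > 0:
                H[u, v] += f                        # Push flow along (u, v)
            else:
                H[v, u] -= f                        # Cancel flow on (v, u)
            v = u

Applied to the assignment network sketched earlier, the saturated left-to-right edges of the returned flow H form a cheapest maximum matching.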
The running time of the Busacker-Gowen method depends on which shortest path algorithm you choose. We're no longer using the Edmonds-Karp approach, so we're losing its running-time guarantees, but if we're using integral capacities and are looking for a flow of value k, we're guaranteed at most k iterations.4 Assuming Dijkstra's algorithm, the total running time becomes O(km lg n). For the min-cost bipartite matching, k would be O(n), so we'd get O(nm lg n).
In a sense, this is a greedy algorithm, where we gradually build the flow but add as little cost as possible in each
step. Intuitively, this seems like it should work, and indeed it does, but proving as much can be a bit challenging—so
much so, in fact, that I’m not going into details here. If you want to read the proof (as well as more details on the
running time), have a look at the chapter on circulations in Graphs, Networks and Algorithms, by Dieter Jungnickel.5 You can find a simpler proof for the special case of min-cost bipartite matching in Algorithm Design, by Kleinberg and Tardos (see “References” in Chapter 1).
4. This is, of course, pseudopolynomial, so choose your capacities wisely.
5. Also available online: http://books.google.com/books?id=NvuFAglxaJkC&pg=PA299