integer programming. This is a version of the technique of linear programming, where a linear function is optimized,
under a set of linear constraints. In integer programming, though, you also require the variables to take on only
integral values—which breaks the efficient algorithms that exist for plain linear programming. It also means that you can reduce from all kinds of problems,
with these knapsack-style ones as an obvious example. In fact, we can show that 0-1 integer programming, which
is a special case, is NP-hard. Just let each item of the knapsack problem be a variable, which can take on the value of
0 or 1. You then make two linear functions over these, with the values and weights as coefficients, respectively. You
optimize the one based on the values and constrain the one based on the weights to be below the capacity. The result
will then give you the optimal solution to the knapsack problem. (This paragraph is probably easier to understand if you already know a little bit about linear programming. If you didn't quite catch all of it, don't worry—it's not really essential.)
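To make the reduction a bit more concrete, here is a minimal sketch of the construction. The function names and the brute-force check are mine, just for illustration; a real integer programming solver would of course not try every assignment.

from itertools import product

def knapsack_to_01_ip(values, weights, capacity):
    """Phrase a knapsack instance as a 0-1 integer program.

    Maximize   sum(values[i] * x[i])
    subject to sum(weights[i] * x[i]) <= capacity, with each x[i] in {0, 1}.
    """
    objective = list(values)                 # objective coefficients
    constraint = (list(weights), capacity)   # constraint coefficients and bound
    return objective, constraint

def brute_force_01_ip(objective, constraint):
    """Solve a tiny 0-1 integer program by trying every assignment."""
    coeffs, bound = constraint
    best, best_x = -1, None
    for x in product([0, 1], repeat=len(objective)):
        if sum(c*xi for c, xi in zip(coeffs, x)) <= bound:   # feasible?
            value = sum(v*xi for v, xi in zip(objective, x))
            if value > best:
                best, best_x = value, x
    return best, best_x

# The optimal value of the program is exactly the optimal knapsack value:
obj, con = knapsack_to_01_ip([60, 100, 120], [10, 20, 30], 50)
print(brute_force_01_ip(obj, con))           # (220, (0, 1, 1))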
What about the unbounded integral knapsack? In Chapter 8, I worked out a pseudopolynomial solution, but is
it really NP-hard? It does seem rather closely related to the 0-1 knapsack, for sure, but the correspondence isn’t really
close enough that a reduction is obvious. In fact, this is a good opportunity to try your hand at crafting a reduction—so
I’m just going to direct you to Exercise 11-5.
Cliques and Colorings
Let’s move on from subsets of numbers to finding structures in graphs. Many of these problems are about conflicts.
For example, you may be writing scheduling software for a university, and you're trying to minimize timing collisions
involving teachers, students, classes, and auditoriums. Good luck with that one. Or perhaps you're writing a compiler,
and you want to minimize the number of registers used by finding out which variables can share a register. As before,
you may find acceptable solutions in practice, but you may not be able to optimally solve large instances in general.
I have talked about bipartite graphs several times already—graphs whose nodes can be partitioned into two sets
so that all edges are between the sets (that is, no edges connect nodes in the same set). Another way of viewing this is
as a two-coloring, where you color every node as either black or white (for example), but you ensure that no neighbors
have the same color. If this is possible, the graph is bipartite.
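Checking for a two-coloring is easy, by the way; a simple traversal does it. Here is a rough sketch, using a plain dict-of-sets graph representation and a function name of my own choosing:

from collections import deque

def two_color(G):
    """Try to 2-color the graph G (a dict mapping nodes to neighbor sets).

    Returns a dict of colors (0 or 1) if G is bipartite, or None if not.
    """
    color = {}
    for s in G:                           # handle disconnected graphs
        if s in color:
            continue
        color[s] = 0
        Q = deque([s])
        while Q:                          # plain BFS from s
            u = Q.popleft()
            for v in G[u]:
                if v not in color:        # first visit: give the opposite color
                    color[v] = 1 - color[u]
                    Q.append(v)
                elif color[v] == color[u]:
                    return None           # same color at both ends: odd cycle
    return color

# A four-cycle is bipartite; a triangle is not:
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
K3 = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(two_color(C4))   # e.g. {0: 0, 1: 1, 2: 0, 3: 1}
print(two_color(K3))   # None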
Now what if you’d like to see whether a graph is tripartite, that is, whether you can manage a three-coloring? As it
turns out, that’s not so easy. (Of course, a k-coloring for k > 3 is no easier; see Exercise 11-6.) Reducing 3-SAT to three-
coloring is, in fact, not so hard. It is, however, a bit involved (like the Hamilton cycle proof, earlier in this chapter), so
I’m just going to give you an idea of how it works.
Basically, you build some specialized components, or widgets, just like the rows used in the Hamilton cycle proof.
The idea here is to first create a triangle (three connected nodes), where one represents true, one false, and one is a
so-called base node. For a variable A, you then create a triangle consisting of one node for A, one for not A, and the
third being the base node. That way, if A gets the same color as the true node, then not A will get the color of the false
node, and vice versa.
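Just to make that first step concrete, here is a sketch of how the variable widgets might be built; only the variable part is shown, the clause widgets are omitted, and the node names and function are my own invention.

def variable_gadgets(variables):
    """Build the true/false/base triangle plus one triangle per variable.

    For each variable A, the nodes 'A' and 'not A' are connected to each
    other and to the base node, so in any three-coloring one of them must
    share a color with 'T' and the other with 'F'.
    """
    G = {}

    def edge(u, v):
        G.setdefault(u, set()).add(v)
        G.setdefault(v, set()).add(u)

    # The reference triangle that fixes the three roles (colors):
    edge('T', 'F'); edge('F', 'base'); edge('base', 'T')

    for A in variables:
        pos, neg = A, 'not ' + A
        edge(pos, neg)          # A and not A must get different colors ...
        edge(pos, 'base')       # ... and neither may use the base color,
        edge(neg, 'base')       # so one is "true"-colored, the other "false"
    return G

G = variable_gadgets(['x', 'y', 'z'])   # clause widgets would be added next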
At this point, a widget is constructed for each clause, linking the nodes for its literals (of the form A or not A) to
some extra nodes, as well as to the true and false nodes, so that the only way to complete a three-coloring is for at
least one of those literal nodes to get the same color as the true node. (If you play around with it, you'll probably find ways of doing
this. If you want the full proof, it can be found in several algorithm books, such as the one by Kleinberg and Tardos;
see “References” in Chapter 1.)
Now, given that k-coloring is NP-complete (for k > 2), so is finding the chromatic number of a graph—how many
colors you need. If the chromatic number is less than or equal to k, the answer to the k-coloring problem is yes;
otherwise, it is no. This kind of problem may seem abstract and pretty useless, but nothing could be further from the
truth. It comes up whenever you need to work out certain kinds of resource requirements, in compilers and in parallel
processing, for example.
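To tie the decision problem and the chromatic number together: if you had a k-coloring checker, you could find the chromatic number simply by trying k = 1, 2, 3, and so on. Here is a brute-force sketch of that idea; it is exponential backtracking, entirely my own construction, and only usable on tiny graphs.

def k_colorable(G, k, color=None, nodes=None):
    """Try to k-color G (a dict of neighbor sets) by simple backtracking."""
    if color is None:
        color, nodes = {}, list(G)
    if len(color) == len(nodes):
        return True                            # everything is colored
    u = nodes[len(color)]                      # next uncolored node
    for c in range(k):
        if all(color.get(v) != c for v in G[u]):   # c clashes with no neighbor
            color[u] = c
            if k_colorable(G, k, color, nodes):
                return True
            del color[u]                       # undo and try the next color
    return False

def chromatic_number(G):
    """Smallest k for which G is k-colorable (exponential time)."""
    k = 1
    while not k_colorable(G, k):
        k += 1
    return k

K4 = {u: {v for v in range(4) if v != u} for u in range(4)}
print(chromatic_number(K4))   # 4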
Let’s take the problem of determining how many registers (a particularly efficient kind of memory slot) a code
segment needs. To do that, you need to figure out which variables will be used at the same time. The variables are
nodes, and any conflicts are represented by edges. A conflict simply means that two variables are (or may be) used at
the same time and therefore can’t share a register. Now, finding the smallest number of registers that can be used is
equivalent to determining the chromatic number of this graph.
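Here is a rough sketch of that idea, with made-up live ranges and a simple greedy heuristic standing in for an exact solution, so it only gives an upper bound on the number of registers.

def interference_graph(live):
    """Build a conflict graph from {variable: set of time steps it is live}."""
    G = {v: set() for v in live}
    for u in live:
        for v in live:
            if u != v and live[u] & live[v]:   # live at the same time: conflict
                G[u].add(v)
    return G

def greedy_coloring(G):
    """Assign each node the smallest color not used by its neighbors.

    This is only a heuristic: it gives an upper bound on the chromatic
    number (and hence on the number of registers), not always the optimum.
    """
    color = {}
    for u in G:
        taken = {color[v] for v in G[u] if v in color}
        color[u] = next(c for c in range(len(G)) if c not in taken)
    return color

live = {'a': {1, 2, 3}, 'b': {2, 3, 4}, 'c': {4, 5}, 'd': {1, 5}}
colors = greedy_coloring(interference_graph(live))
print(colors)                                           # e.g. {'a': 0, 'b': 1, 'c': 0, 'd': 1}
print('registers needed (greedy):', len(set(colors.values())))   # 2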
A close relative of the k-coloring problem is the so-called clique cover problem (also known as partition into cliques).