The fact that every problem (in NP) can be reduced to these tough-nut problems means that they're the hard core—if you can solve them, you can solve any problem in NP (and suddenly, we're not in Kansas anymore). The figure should help make this clear: Solving c means adding a solid arrow from c to the ground (reducing it to nothing), which immediately gives us a path from every other problem in NP to the ground, via c.
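To make that "path to the ground" a little more concrete, here is a minimal sketch in Python, using hypothetical function names: given a polynomial-time solver for c and a polynomial-time reduction from some NP problem a to c, chaining the two solves a in polynomial time as well.

# A minimal sketch with hypothetical names: reduce_a_to_c is assumed to be
# a polynomial-time reduction from some NP problem a to c, and solve_c is
# assumed to decide c in polynomial time. Chaining them decides a, and the
# total running time is still polynomial.
def solve_a(instance, reduce_a_to_c, solve_c):
    c_instance = reduce_a_to_c(instance)  # translate the a-instance into a c-instance
    return solve_c(c_instance)            # the answer for c is the answer for a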
We have now used reductions to define the toughest problems in NP, but we can extend this idea slightly. The right image in Figure 11-2 illustrates how we can use reductions transitively, for hardness proofs such as the ones we've already discussed (like the one on the right in Figure 11-1). We know that c is hard, so reducing it to u proves that u is hard. We already know how this works, but the figure illustrates a slightly more technical reason why it is true in this case. By reducing c to u, we have placed u in the same position that c was in originally. We already knew that every problem in NP could be reduced to c (meaning that c is NP-complete). Now we also know that every problem can be reduced to u, via c. In other words, u also satisfies the definition of NP-completeness—and, as illustrated, if we can solve it in polynomial time, we will have established that P = NP.
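The transitivity argument can be sketched in the same style, again with hypothetical names: composing a reduction from a to c with a reduction from c to u gives a reduction from a to u, and composing two polynomial-time steps still takes polynomial time.

# A sketch of the transitivity argument, with hypothetical names: if every
# NP problem a reduces to c in polynomial time, and c reduces to u in
# polynomial time, then chaining the two gives a polynomial-time reduction
# from a to u—which is exactly what the definition of NP-completeness asks of u.
def compose(reduce_a_to_c, reduce_c_to_u):
    def reduce_a_to_u(instance):
        return reduce_c_to_u(reduce_a_to_c(instance))
    return reduce_a_to_u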
So far, I've only been talking about decision problems. The main reason for this is that it makes quite a few things in the formal reasoning (much of which I won't cover here) a bit easier. Even so, these ideas are relevant for other kinds of problems, too, such as the many optimization problems we've been working with in this book (and will work with later in this chapter).
Consider, for example, the problem of finding the shortest tour of Sweden. Because it’s not a decision problem,
it’s not in NP. Even so, it’s a very difficult problem (in the sense “hard for humans to solve” and “most likely
intractable”), and just like anything in NP, it would suddenly be easy if we found ourselves in Algorithmica.
Let’s consider these two points separately.
The term completeness is reserved for the hardest problems inside a class, so the NP-complete problems are the class bullies of NP. We can use the same hardness criterion for problems that might fall outside the class as well, though: any problem that is at least as hard (as determined by polynomial-time reductions) as every problem in NP, but that need not itself be in NP. Such problems are called NP-hard. This means that another definition of the class NPC, of NP-complete problems, is that it consists of all the NP-hard problems that are themselves in NP. And, yes, finding the shortest tour through a graph (such as one visiting all the towns of Sweden) is an NP-hard problem called the Traveling Salesman (or Salesrep) Problem, or often just TSP. I'll get back to that problem a bit later.
About the other point: Why would an optimization problem such as this be easy if P = NP? There are some technicalities about how a certificate could be used to find the actual route, and so on, but let's just focus on the difference between the yes-no nature of NP and the numerical length we're looking for in the TSP problem. To keep things simple, let's say all edge weights are integers. Also, because P = NP, we can solve both the yes and no instances of our decision problems in polynomial time (see the sidebar "Asymmetry, Co-NP, and the Wonders of Algorithmica"). One way to proceed is then to use the decision problem as a black box and perform a binary search for the optimal answer.
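Here is a minimal sketch of that idea, assuming a hypothetical polynomial-time oracle tsp_decision(graph, k) that answers whether the graph has a tour of total weight at most k. With integer weights, a binary search over the possible tour lengths needs only a logarithmic number of oracle calls, so the whole procedure stays polynomial.

# A minimal sketch, assuming a hypothetical polynomial-time decision oracle
# tsp_decision(graph, k) that returns True if the graph has a tour of total
# weight at most k. The parameter hi should be any upper bound on the optimal
# tour length, such as the sum of all edge weights (assuming a tour exists).
def optimal_tour_length(graph, tsp_decision, hi):
    lo = 0
    while lo < hi:
        mid = (lo + hi) // 2
        if tsp_decision(graph, mid):  # a tour of length at most mid exists,
            hi = mid                  # so the optimum is at most mid
        else:
            lo = mid + 1              # otherwise the optimum is greater than mid
    return lo                         # the smallest k for which a tour exists

Each iteration halves the search interval, so the number of oracle calls is proportional to the number of bits needed to write down the total edge weight—polynomial in the input size.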