Let's say I have an optimization problem called $k$-foo which asks for a solution of size $k$ minimizing some quality criterion.
Now the corresponding decision problem $foo(M)$ would be:
Is there a solution to foo of size $k$ with quality at most $M$?
For problems with a single parameter (for example, vertex cover) it is obvious that solving the optimization problem also solves the decision problem.
But here I do not see such a correspondence between the $k$-foo optimization problem and the $foo(M)$ decision problem. How does, for example, showing that $foo(M)$ is NP-hard imply that $k$-foo is NP-hard?
The $k$-center problem is an example of such a problem: the decision version takes the radius as an additional input and asks whether a solution of size $k$ within that radius exists.
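For concreteness, here is a small brute-force sketch of both versions of $k$-center on points in the plane. The helper names (`k_center_opt`, `k_center_dec`) and the convention that centers must be input points are assumptions of this example; it is only meant to make the two problem versions explicit.

```python
# Brute-force sketch, for illustration only.
from itertools import combinations
import math

def covering_radius(points, centers):
    # Largest distance from any point to its nearest chosen center.
    return max(min(math.dist(p, c) for c in centers) for p in points)

def k_center_opt(points, k):
    # Optimization version: smallest radius achievable with k centers.
    return min(covering_radius(points, cs) for cs in combinations(points, k))

def k_center_dec(points, k, M):
    # Decision version: do k centers covering every point within radius M exist?
    return any(covering_radius(points, cs) <= M for cs in combinations(points, k))

pts = [(0, 0), (1, 0), (10, 0), (11, 1)]
print(k_center_opt(pts, 2))       # optimal radius for k = 2 (about 1.414)
print(k_center_dec(pts, 2, 1.0))  # False: radius 1.0 is not enough for k = 2
```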
- Yuval Filmus (Apr 27, 2013): Only decision problems can be NP-hard. Optimization problems are not languages, they are (partial) functions.
- Yuval Filmus (Apr 27, 2013): NP is a complexity class of languages, not optimization problems. When you say that some optimization problem is in NP, you really mean that the corresponding decision problem is in NP.
- user695652 (Apr 27, 2013): [I was missing a "not" in my previous comment.] For a problem to be NP-hard, it does NOT need to be in NP.
- user695652 (Apr 27, 2013): Thank you for raising this point. There seems to be quite some confusion about this fact. While Wikipedia is on my side ("NP-hard problems may be of any type: decision problems, search problems, or optimization problems." [en.wikipedia.org/wiki/NP-hard]), some lecture notes indeed say "A language is NP-hard, if...". Do you know of any reference to a well-agreed definition?
- Cornelius Brand (Apr 27, 2013): The book by Garey & Johnson (which I highly recommend) has a chapter about NP-hardness. They use the term for decision problems (using many-one reductions) as well as for optimization problems (which they formalize as "string relations") using Turing reductions. Note that these are not two entirely different notions of NP-hardness; the latter is just a generalization of the first. So defining a language to be "NP-hard, if..." is just fine, if you only have to deal with languages.
1 Answer
Having more than one input doesn't change anything:
Consider an optimization problem $MinQ$:
Input: $x, k$
Output: a $y$ satisfying $Q(x,k,y)$ that minimizes $f(x,k,y)$; the optimal value is $$g(x,k) = \min_{y : Q(x,k,y)} f(x,k,y)$$
The corresponding decision problem, call it $DecQ$ to keep it apart from the predicate $Q$, is:
Input: $x, k, l$
Query: is there a $y$ satisfying $Q(x,k,y)$ such that $f(x,k,y) \leq l$?
To solve the decision problem, we can solve the optimization problem and then check whether the value of the solution it returns is at most $l$. That is, there is a very easy reduction from $DecQ$ to $MinQ$:
Input: $x, k, l$
1. $y = MinQ(x,k)$
2. return $f(x,k,y) \leq l$
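As a sanity check, here is a minimal Python sketch of that reduction. Everything in it is an assumption of this example: the feasible $y$ are taken to range over a finite candidate set, `Q`, `f`, and `candidates` are placeholder callables, and the sketch computes the optimal value $g(x,k)$ directly rather than returning a witness $y$; the toy instantiation at the end is likewise made up.

```python
from itertools import combinations

def min_q(x, k, candidates, Q, f):
    # MinQ: the optimal value g(x, k) = min over feasible y of f(x, k, y).
    values = [f(x, k, y) for y in candidates(x, k) if Q(x, k, y)]
    return min(values) if values else float("inf")  # +inf if nothing is feasible

def dec_q(x, k, l, candidates, Q, f):
    # DecQ: is there a feasible y with f(x, k, y) <= l?
    # One call to MinQ plus a comparison -- exactly the reduction above.
    return min_q(x, k, candidates, Q, f) <= l

# Toy instantiation: choose k numbers from the list x, minimizing the largest chosen.
cands = lambda x, k: combinations(x, k)
Q = lambda x, k, y: len(y) == k            # feasibility predicate
f = lambda x, k, y: max(y)                 # objective to minimize
print(min_q([7, 3, 9, 1], 2, cands, Q, f))     # 3
print(dec_q([7, 3, 9, 1], 2, 5, cands, Q, f))  # True, since 3 <= 5
```

The point is only that `dec_q` needs nothing beyond a single call to `min_q`, no matter how many parameters ($x$, $k$) the instance carries.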