Linear Program to solve NP-hard problems - complexity-theory

I'm pretty new to the realm of LP, and this question has been bothering me for a long time. I know that the simplex algorithm takes exponential time in the worst case but is fast in practice, while the ellipsoid method can solve LPs, even ones with exponentially many constraints (given a polynomial-time separation oracle), in polynomial time in the worst case. My question is: when you formulate NP-hard problems as LPs, shouldn't the ellipsoid algorithm then be able to solve them in polynomial time? Do LP formulations of NP-hard problems always require exponentially many variables and constraints?

Related

Classification and complexity of generating all possible combinations: P, NP, NP-Complete or NP-Hard

The algorithm needs to generate all possible combinations from a given list (empty set excluded).
list => [1, 2, 3]
combinations => [{1}, {2}, {3}, {1,2}, {1,3}, {2,3}, {1,2,3}]
This algorithm would take O(2^n) time to generate all combinations, simply because there are 2^n − 1 non-empty subsets. However, I'm not sure if an improved algorithm can bring this time complexity down. If an improved algorithm exists, please do share your knowledge!
In the case that it takes O(2^n), which is exponential, I would like some insight regarding which class this algorithm belongs to: P, NP, NP-complete, or NP-hard. Thanks in advance :)
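For reference, the brute-force enumeration the question describes is a few lines of Python (a sketch; the function name is mine):

```python
from itertools import combinations

def all_combinations(items):
    # Every non-empty subset of items: there are 2^n - 1 of them,
    # so the output size (and hence the running time) is inherently
    # exponential in n -- no algorithm that writes them all out can do better.
    return [set(c)
            for r in range(1, len(items) + 1)
            for c in combinations(items, r)]
```

For [1, 2, 3] this yields the seven subsets listed above; the output-size argument is exactly why no improved algorithm can beat exponential time here.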
P, NP, NP-complete, and NP-hard are all classes of decision problems, none of which contain problems that involve non-binary output (such as this enumeration problem).
Often people refer colloquially to problems in FNP as being in NP. This problem is not in FNP either because the length of the output string for the relation must be bounded by some polynomial function of the input length. It might be FNP-hard, but we're getting into the weeds that even a graduate CS education doesn't cover. Worth asking on the CS Stack Exchange if you care enough.
This problem is in none of them except, arguably, NP-hard.
It is not in P because there is no polynomial time algorithm to do it. You cannot generate an exponential number of things in polynomial time.
It is not in NP because there is no polynomial time algorithm to validate the answer. You cannot process an exponential number of things in polynomial time.
It is not in NP-complete because everything in NP-complete must be in NP and it is not.
The argument for it being NP-hard goes like this: you can say anything you want about the members of the empty set, including that they make monkeys fly out of your nose and can solve any problem in NP in polynomial time. So if we could find a polynomial-time solution, we could solve any NP problem fast, and therefore it meets the definition of NP-hard. But uselessly so: we know that no polynomial-time solution exists.

If a polynomial time algorithm for an NP-Complete problem is found, does this imply that it is the same time complexity for all NP problems?

If a polynomial time algorithm for an NP-complete problem is found, let's say it's O(n^2) hypothetically, does this imply that there is an O(n^2) solution for every NP problem? I know it would imply a polynomial solution for all NP problems, but would it necessarily be O(n^2)?
Not necessarily
A problem x that is in NP is also in NP-complete if and only if every other problem in NP can be quickly (i.e. in polynomial time) transformed into x.
Therefore an algorithm that solves one NP-complete problem lets us solve any other problem in NP by transforming it into an instance of that problem and solving that instance. The transformation can have any complexity, as long as it is polynomial.
So the answer to your question is no: an O(n^2) algorithm for an NP-complete problem does not imply that all NP problems can be solved in O(n^2) time; it only guarantees that some polynomial-time algorithm exists.
The total cost is O(T(n) + m^2), where T(n) is the (polynomial) time to transform the instance and m is the size of the transformed instance. Since m can itself be polynomial in n, say m = Θ(n^3), the overall bound may be O(n^6) rather than O(n^2).

If we can prove that knapsack problems with limited capacity can be solved in polynomial time, do all knapsack problems belong to P?

I found this question in my Optimization Algorithm course, the full question is this:
If we can prove all Knapsack problems with capacity limited to 100 can be solved in polynomial time, then all Knapsack problems belong to P. Is this sentence true or false? Justify.
With my book and some research I came out with something like this:
First of all, KP is an NP-complete problem (in its decision version). With dynamic programming it can be solved in pseudo-polynomial time, but that is not enough for membership in P.
If, for the sake of argument, we could prove that KP with capacity limited to 100 can be solved in polynomial time, then we could conclude that KP belongs to P.
What do you think about my answer? I suspect the contradiction argument in the last sentence is not quite right.
Proving that all knapsack problems with a bounded capacity can be solved in polynomial time does not prove that all knapsack problems are in P. A problem is in P if it can be solved in O(n^k) time for some constant k, where big O is an upper bound as the input size n grows without limit. Restricting the capacity to at most 100 actually makes polynomial-time solvability unsurprising: the standard dynamic program runs in O(nW) time, which is O(n) when the capacity W is bounded by the constant 100. But that guarantee says nothing about instances with much larger capacity, where W can be exponential in the number of bits needed to write it down. Therefore we cannot conclude that the general problem is in P.
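To make the capacity-bounded case concrete, here is the standard dynamic program (a sketch; function and variable names are mine):

```python
def knapsack(values, weights, capacity):
    # Classic 0/1 knapsack DP over capacities, O(n * capacity) time.
    # With capacity bounded by a constant (say 100) this is O(n),
    # i.e. genuinely polynomial -- but in general 'capacity' can be
    # exponential in the number of bits used to encode it, which is
    # why this is only pseudo-polynomial and does not put KP in P.
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # descending: each item used at most once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]
```

The pseudo-polynomial/polynomial distinction lives entirely in that `capacity` factor: fix it to 100 and the loop body runs a constant number of times per item.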

What is a time complexity for Algorithm X for sudoku?

I've found here a statement that Algorithm X for Sudoku has O(N^3) time complexity, where N is the board size.
That seems plausible, since the exact-cover matrix for Sudoku has N^3 rows. But that would make the Sudoku problem solvable in polynomial time, and Sudoku (in its generalized form) is known to be an NP-complete problem, which as I understand it means:
not always solvable in polynomial time (as far as anyone knows)
possible to verify a solution in polynomial time
So what is the time complexity of Algorithm X for sudoku,
and is it possible to solve a sudoku in a polynomial time or not ?
Thank you!
Mathematics of Sudoku explains this pretty well:
The general problem of solving Sudoku puzzles on n^2×n^2 grids of n×n
blocks is known to be NP-complete.
NP-completeness means no algorithm polynomial in n is known, and none exists unless P = NP; all known algorithms take time exponential in n. This does not contradict an O(N^3) bound for a normal Sudoku, where n = 3 is a fixed constant: asymptotic complexity only says something about how the running time grows as n grows without bound.
For a complete analysis of running time see: https://11011110.github.io/blog/2008/01/10/analyzing-algorithm-x.html
There it is stated that
even the stupidest version of Algorithm X, one that avoids any chance to backtrack and always chooses its pivots in such a way as to maximize its running time, takes at most O(3^(n/3))
which confirms that the algorithm runs in exponential time.
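To make the discussion concrete, here is a minimal sketch of Algorithm X for the general exact-cover problem, in the well-known dictionary-of-sets formulation (no dancing links; function names are mine). Sudoku is solved by encoding the board as such an exact-cover instance with N^3 rows:

```python
def exact_cover(universe, sets):
    # X maps each element (column) to the set of row names covering it;
    # Y maps each row name to its list of elements.
    X = {e: set() for e in universe}
    for name, row in sets.items():
        for e in row:
            X[e].add(name)
    yield from solve(X, sets, [])

def solve(X, Y, solution):
    if not X:                                # every column covered exactly once
        yield list(solution)
        return
    c = min(X, key=lambda k: len(X[k]))      # column with fewest candidate rows
    for r in list(X[c]):
        solution.append(r)
        cols = select(X, Y, r)               # cover: remove conflicting rows/columns
        yield from solve(X, Y, solution)
        deselect(X, Y, r, cols)              # uncover: undo on backtrack
        solution.pop()

def select(X, Y, r):
    cols = []
    for j in Y[r]:
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].remove(i)
        cols.append(X.pop(j))
    return cols

def deselect(X, Y, r, cols):
    for j in reversed(Y[r]):
        X[j] = cols.pop()
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].add(i)
```

On Knuth's standard seven-element example (A = {1,4,7}, B = {1,4}, C = {4,5,7}, D = {3,5,6}, E = {2,3,6,7}, F = {2,7}) the unique cover is {B, D, F}. The backtracking over rows is where the exponential worst case comes from.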

Can 1 approximation algorithm be used for multiple NP-Hard problems?

Since any NP-hard problem can be reduced to any other NP-hard problem by a mapping, my question goes one step further:
for example, could every step of such an approximation algorithm also be mapped onto the other NP-hard problem?
Thanks in advance
From http://en.wikipedia.org/wiki/Approximation_algorithm we see that
NP-hard problems vary greatly in their approximability; some, such as the bin packing problem, can be approximated within any factor greater than 1 (such a family of approximation algorithms is often called a polynomial time approximation scheme or PTAS). Others are impossible to approximate within any constant, or even polynomial factor unless P = NP, such as the maximum clique problem.
(end quote)
It follows that a good approximation for one NP-complete problem is not necessarily a good approximation for another. If it were, we could use easily-approximated NP-complete problems to find good approximation algorithms for all other NP-complete problems; that is not the case, since hard-to-approximate NP-complete problems exist.
When proving a problem is NP-Hard, we usually consider the decision version of the problem, whose output is either yes or no. However, when considering approximation algorithms, we consider the optimization version of the problem.
If you use one problem's approximation algorithm to solve another problem via the reduction from the NP-hardness proof, the approximation ratio may change. For example, if you have a 2-approximation algorithm for problem A and use it to solve problem B, you may end up with only an O(n)-approximation algorithm for problem B, since the reduction does not preserve the approximation ratio. Hence, if you want to use an approximation algorithm for one problem to solve another, you need a reduction that does not change the approximation ratio too much, such as an L-reduction or a PTAS reduction.
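As a concrete illustration of ratio distortion (my own example, not from the answer above): minimum vertex cover and maximum independent set reduce to each other trivially, since S is a vertex cover exactly when V \ S is an independent set, yet approximability does not transfer between them. A sketch of the classic maximal-matching 2-approximation for vertex cover:

```python
def vertex_cover_2approx(edges):
    # Greedily take both endpoints of each not-yet-covered edge.
    # The chosen edges form a matching, and any cover must contain
    # at least one endpoint of each matching edge, so |cover| <= 2 * OPT.
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.add(u)
            cover.add(v)
    return cover
```

Complementing the result gives an independent set with no constant-factor guarantee at all: on a perfect matching of n vertices the algorithm returns all n vertices, so the complement is empty, even though an independent set of size n/2 exists.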
