Classification and complexity of generating all possible combinations: P, NP, NP-Complete or NP-Hard - algorithm

The algorithm needs to generate all possible combinations from a given list (empty set excluded).
list => [1, 2, 3]
combinations => [{1}, {2}, {3}, {1,2}, {1,3}, {2,3}, {1,2,3}]
This algorithm would take O(2^n) time to generate all combinations. However, I'm not sure whether an improved algorithm can bring this time complexity down. If an improved algorithm exists, please do share your knowledge!
In the case that it takes O(2^n), which is exponential, I would like some insight into which class this problem belongs to: P, NP, NP-complete, or NP-hard. Thanks in advance :)
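For concreteness, here is a minimal Python sketch of the enumeration described above (names are illustrative, not from the original post). Since the output itself contains 2^n - 1 subsets, simply writing it out already takes exponential time:

    from itertools import combinations

    def all_nonempty_subsets(items):
        """Enumerate every non-empty subset of `items` (2**n - 1 of them)."""
        result = []
        for size in range(1, len(items) + 1):
            for combo in combinations(items, size):
                result.append(set(combo))
        return result

    print(all_nonempty_subsets([1, 2, 3]))
    # [{1}, {2}, {3}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]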

P, NP, NP-complete, and NP-hard are all classes of decision problems, none of which contain problems that involve non-binary output (such as this enumeration problem).
Often people refer colloquially to problems in FNP as being in NP. This problem is not in FNP either because the length of the output string for the relation must be bounded by some polynomial function of the input length. It might be FNP-hard, but we're getting into the weeds that even a graduate CS education doesn't cover. Worth asking on the CS Stack Exchange if you care enough.

This problem is in none of them except, arguably, NP-hard.
It is not in P because there is no polynomial time algorithm to do it. You cannot generate an exponential number of things in polynomial time.
It is not in NP because there is no polynomial time algorithm to validate the answer. You cannot process an exponential number of things in polynomial time.
It is not NP-complete because everything NP-complete must be in NP, and it is not.
The argument for it being NP-hard goes like this: you can say anything you want about the members of the empty set, including that they make monkeys fly out of your nose and can solve any problem in NP in polynomial time. So if we could find a polynomial-time solution, we could solve any NP problem fast, and it therefore meets the definition of NP-hard. But uselessly so: we know that no polynomial-time solution exists.

Related

Understanding Polynomial Time Approximation Scheme

Is an approximation algorithm the same as a polynomial-time approximation scheme (PTAS)? For example, it can be shown that A(I) <= 2 * OPT(I) for vertex cover. Does that mean that vertex cover has a polynomial-time 2-approximation algorithm, or a PTAS?
Thanks!
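For context, the A(I) <= 2 * OPT(I) guarantee for vertex cover mentioned in the question is usually obtained with the classic maximal-matching algorithm. A minimal Python sketch (illustrative, not from the original posts):

    def vertex_cover_2_approx(edges):
        """Greedily build a maximal matching and take both endpoints of
        every matched edge; the result is at most twice the optimum."""
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:
                cover.add(u)
                cover.add(v)
        return cover

    # Path 1-2-3-4: the optimum cover {2, 3} has size 2; this returns size 4.
    print(vertex_cover_2_approx([(1, 2), (2, 3), (3, 4)]))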
No, this isn't necessarily the case. A PTAS is an algorithm where given any ε > 0, you can approximate the answer to a factor of (1 + ε) in polynomial time. In other words, you can get arbitrarily good approximations.
Some problems are known (for example, MAX-3SAT) that have approximation algorithms for specific factors (for example, 5/8), but where it's known that, unless P = NP, there is a hard limit on how well the problem can be approximated in polynomial time. For example, hardness-of-approximation results building on the PCP theorem show that MAX-3SAT doesn't have a polynomial-time (7/8 + ε)-approximation for any ε > 0 unless P = NP. It's therefore possible that MAX-3SAT has a PTAS, but only if P = NP.
Hope this helps!
Vertex cover having a 2-approximation algorithm is not the same as having a PTAS. Sometimes there are problems where a much better approximation is possible; such problems then admit a PTAS.
Such an algorithm takes an instance of the problem as input, together with a parameter epsilon > 0, and produces an output whose value is at most (1 + epsilon)·OPT for a minimisation problem, or at least (1/(1 + epsilon))·OPT for a maximisation problem.
The running time of a PTAS is polynomial in n (the size of the problem instance). If the running time is also polynomial in 1/epsilon, the problem is said to admit an FPTAS (fully polynomial-time approximation scheme).
Example:
The dynamic programming algorithm for KNAPSACK with integer profits gives an optimal solution (in time pseudo-polynomial in the profits).
KNAPSACK with real-valued profits does not admit that algorithm directly, but it does admit an FPTAS: the real-valued profits are scaled and rounded to integers, and the DP algorithm is then run on the "rounded" profits, as sketched below.
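A minimal Python sketch of that rounding idea, assuming positive profits and eps > 0 (the scaling factor eps * max(profits) / n is one standard choice; names are illustrative):

    def knapsack_fptas(profits, weights, capacity, eps):
        """Scale and round profits down to integers, then run the
        profit-indexed dynamic program on the rounded instance.
        The returned profit is within a (1 - eps) factor of optimal."""
        n = len(profits)
        scale = eps * max(profits) / n               # rounding granularity
        rounded = [int(p / scale) for p in profits]

        # dp[q] = minimum weight needed to reach rounded profit exactly q
        total = sum(rounded)
        INF = float("inf")
        dp = [0.0] + [INF] * total
        for i in range(n):
            for q in range(total, rounded[i] - 1, -1):
                dp[q] = min(dp[q], dp[q - rounded[i]] + weights[i])

        best = max(q for q in range(total + 1) if dp[q] <= capacity)
        return best * scale                          # approximate profit value

    print(knapsack_fptas([60, 100, 120], [10, 20, 30], capacity=50, eps=0.1))  # 220.0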
Another example: Max Independent Set does not admit an FPTAS. Since OPT is at most n, choosing epsilon smaller than 1/n would force an FPTAS to return an exactly optimal solution for any graph in polynomial time, which is not possible unless P = NP. (By stronger inapproximability results, Max Independent Set does not admit a PTAS either, unless P = NP.)

Subset Sum theory and solutions [closed]

The subset sum problem is defined in Wikipedia as follows:
Given a set of integers, is there a non-empty subset whose sum is zero? For example, given the set {−7, −3, −2, 5, 8}, the answer is yes because the subset {−3, −2, 5} sums to zero.
or, more generally:
given a set of integers and an integer s, does any non-empty subset sum to s?
The brute-force solution for this problem is exponential (cycle through all subsets of the N numbers and, for each one, check whether the subset sums to the right number, as sketched below); there are also some optimized versions of brute force that still run in exponential time.
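The brute force just described looks roughly like this (a Python sketch, names illustrative):

    from itertools import combinations

    def has_subset_with_sum(numbers, target):
        """Try every non-empty subset; there are 2**n - 1 of them."""
        for size in range(1, len(numbers) + 1):
            for subset in combinations(numbers, size):
                if sum(subset) == target:
                    return True
        return False

    print(has_subset_with_sum([-7, -3, -2, 5, 8], 0))   # True: {-3, -2, 5}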
Let's suppose there is an algorithm that can compute an exact solution to the above questions (the same answer brute force would give) with time complexity somewhere between quadratic and polynomial.
How would it relate to the P = NP question, time complexity, and so on?
Supposing such an algorithm exists, would it be an improvement to the state of the art for the subset sum problem?
(I'm not an expert in this area, so if something does not make sense or is not clear, I'll provide additional input to this question to the extent I'm able to :) )
Since the subset sum problem is NP-complete, if you can find a polynomial-time solution to it, then you can solve all problems in NP in polynomial time, and P = NP.
Now, of course the above statement wouldn't make sense without understanding what NP and NP-completeness are. There are many ways to define NP, but the simplest is that a problem is in NP if and only if there exists a verifier that can check the correctness of a claimed solution in polynomial time. In the case of the subset sum problem, you can clearly verify a claimed solution in polynomial time (see the sketch below). Therefore, it's an NP problem.
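For instance, a verifier for subset sum only has to check a claimed subset (the certificate), which takes time linear in the input. A minimal Python sketch (names illustrative):

    def verify_subset_sum(numbers, target, candidate):
        """Polynomial-time verifier: the candidate must be a non-empty
        sub-multiset of the input that sums to the target."""
        if not candidate:
            return False
        remaining = list(numbers)
        for x in candidate:
            if x not in remaining:
                return False          # uses an element not in the input
            remaining.remove(x)
        return sum(candidate) == target

    print(verify_subset_sum([-7, -3, -2, 5, 8], 0, [-3, -2, 5]))   # True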
The class NP-complete is a special set of problems in NP such that every problem in NP can be reduced to any NP-complete problem in polynomial time. As an example, the first problem proven to be NP-complete, by Cook, is SAT: deciding whether there exists an assignment to a set of boolean variables under which a given boolean formula evaluates to true. With the correct procedure, you can transform any decision problem in NP into SAT in polynomial time, and this makes SAT NP-complete. You can find more details about the original proof here, but it requires some understanding of the Turing machine.
To prove the NP-completeness of a new problem, you can reduce an existing NP-complete problem to the new one. As an example, SAT can easily be reduced to 3-SAT: given a SAT instance, we can transform it into a 3-SAT instance such that solving the 3-SAT instance gives us the answer to the original SAT instance (a sketch of this clause-splitting transformation is below). Since all problems in NP can be reduced to SAT, and SAT can be reduced to 3-SAT, this makes 3-SAT NP-complete.
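The clause-splitting gadget behind that SAT-to-3-SAT reduction is short enough to sketch in Python (literals are signed integers, e.g. -2 means "not x2"; this is my own illustration, not the linked proof):

    def sat_to_3sat(clauses, next_var):
        """Split each clause into clauses of exactly 3 literals, introducing
        fresh variables starting at index `next_var`."""
        out = []
        for clause in clauses:
            k = len(clause)
            if k == 1:
                l, y1, y2 = clause[0], next_var, next_var + 1
                next_var += 2
                out += [[l, y1, y2], [l, y1, -y2], [l, -y1, y2], [l, -y1, -y2]]
            elif k == 2:
                y = next_var
                next_var += 1
                out += [clause + [y], clause + [-y]]
            elif k == 3:
                out.append(list(clause))
            else:
                # chain: (l1 v l2 v y1), (-y1 v l3 v y2), ..., (-y_last v l_{k-1} v l_k)
                ys = list(range(next_var, next_var + k - 3))
                next_var += k - 3
                out.append([clause[0], clause[1], ys[0]])
                for i in range(1, k - 3):
                    out.append([-ys[i - 1], clause[i + 1], ys[i]])
                out.append([-ys[-1], clause[-2], clause[-1]])
        return out, next_var

    # (x1 v x2 v x3 v x4) becomes (x1 v x2 v y5) and (not y5 v x3 v x4)
    print(sat_to_3sat([[1, 2, 3, 4]], next_var=5))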
Here is a nice proof of how you can reduce 3-SAT to the subset sum problem. As a consequence of the proof, the subset sum problem is NP-complete. Hence, if you can find a polynomial time solution to the subset sum problem, you can then solve all NP problems (yes, including problems such as the traveling salesman, graph coloring, knapsack, etc.) in polynomial time (since all reductions are done in polynomial time).

Can 1 approximation algorithm be used for multiple NP-Hard problems?

Since any NP-hard problem can be reduced to any other NP-hard problem by a mapping, my question goes one step further:
for example, could every step of that approximation algorithm also be mapped onto the other NP-hard problem?
Thanks in advance
From http://en.wikipedia.org/wiki/Approximation_algorithm we see that
NP-hard problems vary greatly in their approximability; some, such as the bin packing problem, can be approximated within any factor greater than 1 (such a family of approximation algorithms is often called a polynomial time approximation scheme or PTAS). Others are impossible to approximate within any constant, or even polynomial factor unless P = NP, such as the maximum clique problem.
(end quote)
It follows from this that a good approximation algorithm for one NP-complete problem does not automatically give a good approximation algorithm for another NP-complete problem. If it did, we could use the easily-approximated NP-complete problems to obtain good approximation algorithms for all the others, which is not the case: some NP-complete problems are hard to approximate.
When proving a problem is NP-Hard, we usually consider the decision version of the problem, whose output is either yes or no. However, when considering approximation algorithms, we consider the optimization version of the problem.
If you use one problem's approximation algorithm to solve another problem via the reduction from the NP-hardness proof, the approximation ratio may change. For example, if you have a 2-approximation algorithm for problem A and you use it to solve problem B, you may only get an O(n)-approximation algorithm for problem B, since the reduction does not preserve the approximation ratio. Hence, if you want to use an approximation algorithm for one problem to solve another, you need to ensure that the reduction does not degrade the approximation ratio too much; for example, you can use an L-reduction or a PTAS reduction.

Is it necessary for NP problems to be decision problems ?

Professor Tim Roughgarden from Stanford University, while teaching a MOOC, said that solutions to problems in the class NP must be polynomial in length. But the Wikipedia article says that NP problems are decision problems. So what type of problems are actually in the class NP? And is it unnecessary to say that solutions to such problems have polynomial-length output (since decision problems necessarily output either 0 or 1)?
He was probably talking about witnesses and verifiers.
For every problem in NP, there is a verifier—read: an algorithm/Turing machine—that can verify "yes"-claims in polynomial time.
The idea is that you have some extra piece of information—the witness—to help you do this within the time constraints.
For instance, in the travelling salesman problem:
TSP = {(G, k) : G has a Hamiltonian cycle of cost <= k}
For a given input (G, k), you only need to determine whether or not the problem instance is in TSP. That's a yes/no answer.
Now, if someone comes along and says: This problem instance is in TSP, you will demand a proof. The other person will then probably give you a sequence of cities. You can then simply check whether the cities in that order form a Hamiltonian cycle and whether the total cost of the cycle is ≤ k.
You can perform this procedure in polynomial time—given that the witness is polynomial in length.
Using this sequence of cities, you were thus able to correctly determine that the problem instance was indeed in TSP.
That's the idea of verifiers: They take a proof object/witness that is polynomial in length to check in polynomial time, that a certain problem instance is in the language.
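A minimal Python sketch of such a verifier (the graph representation and names are my own, for illustration):

    def verify_tsp_certificate(costs, k, tour):
        """Polynomial-time check that `tour` (the witness) is a Hamiltonian
        cycle of total cost <= k; `costs` maps undirected edges to costs."""
        cities = {c for edge in costs for c in edge}
        if len(tour) != len(cities) or set(tour) != cities:
            return False                  # must visit every city exactly once
        total = 0
        for i in range(len(tour)):
            edge = frozenset({tour[i], tour[(i + 1) % len(tour)]})
            if edge not in costs:
                return False              # consecutive cities are not adjacent
            total += costs[edge]
        return total <= k

    # Four cities on a unit square, witness tour A-B-C-D
    costs = {frozenset(e): 1 for e in [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A")]}
    print(verify_tsp_certificate(costs, 4, ["A", "B", "C", "D"]))   # True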
The standard definition of NP is that it is a class of decision problems only. Decision problems always produce a yes/no answer and thus have constant-sized output.
Didn't watch the video/course, but I am guessing he was talking about certificates/verification and not solutions. Big difference.

NP-Hard solution question

I have an NP-hard problem. Imagine I have found some polynomial-time algorithm that finds ONLY one of the many existing solutions of that problem, but at least one solution (if any exists). Would that algorithm be considered a resolution of the P = NP question (if the algorithm were turned into a mathematical proof)?
Thanks for answers
NP is a class of decision problems. Your algorithm should answer "yes" or "no" correctly to all possible instances (questions).
For example, the problem: "given graph G and number k, does G contain a clique of size >= k" is NP-hard. If you have a polynomial time algorithm that answers "yes" or "no" correctly each time, then it is a valid proof of P=NP. The algorithm doesn't need to explicitly show the clique - only answer if it exists for all possible G and k.
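To make the decision-versus-search distinction concrete, here is a deliberately brute-force yes/no decider in Python (illustrative only); a hypothetical polynomial-time version of this same yes/no function would prove P = NP:

    from itertools import combinations

    def has_clique(edges, k):
        """Decision problem: does the graph contain a clique of size >= k?
        Answers only yes/no; it never has to exhibit the clique itself."""
        vertices = {v for e in edges for v in e}
        edge_set = {frozenset(e) for e in edges}
        for candidate in combinations(vertices, k):
            if all(frozenset({u, v}) in edge_set
                   for u, v in combinations(candidate, 2)):
                return True
        return False

    print(has_clique([(1, 2), (2, 3), (1, 3), (3, 4)], 3))   # True: {1, 2, 3}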
If you find an NP-hard problem and can detect some cases that you can solve in polynomial time (leaving the others to exponential time), then only if the fraction of remaining cases is on the order of log(N)/N will you change the order of the entire problem, and even then only if you can restrict the exponential case to examining only log(N), not all N, possibilities.
Also, if you find an NP-hard problem where you think you can solve every case in polynomial time, you have probably made a mistake, either in posing the NP-hard problem correctly or in finding the more troublesome examples. Try a larger test set before believing yourself!

Resources