In complexity class P, accepts = decides. Why not NP?

Suppose some problem L is in the complexity class P. Then there is some polynomial-time algorithm A that decides the problem. We have the following theorem: if A accepts L in polynomial time, then some polynomial-time algorithm A' decides L.
The proof works by noting that if A runs in polynomial time, then there are some positive constants c, k such that the running time of A is at most cn^k, where n is the size of the input. So we can construct a polynomial-time algorithm A' that calls A and returns 1 if A returns 1 within cn^k steps, and returns 0 if A takes longer than cn^k steps to return anything. By doing this, we note that even if A goes into an infinite loop, A' halts the simulation in polynomial time and returns 0, which means A' rejects exactly the strings in the complement of L.
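As a rough illustration, here is a minimal Python sketch of A', under the assumption that A is exposed as a single-step function (the names c, k, and step_A are my own, purely for illustration):

def A_prime(x, c, k, step_A):
    # Run A on input x under a step budget of c * n^k, where n = |x|.
    budget = c * len(x) ** k
    state = None  # opaque machine state of A
    for _ in range(budget):
        state, done, answer = step_A(x, state)  # execute one step of A
        if done:
            return answer  # A halted within its time bound
    return 0  # budget exhausted: A would have accepted within cn^k steps, so reject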
My question is: why does this proof not work for the complexity class NP? Can we not just say that if L is in NP, then there is a nondeterministic polynomial-time algorithm A that decides L, and just define A' as above?

"Deciding" a problem means being able to say whether or not a given string is in the language. If a string is not in an NP language, there is no polynomial amount of time you can wait for acceptance to fail before declaring the string is not in the language, which makes your algorithm untenable.
For languages in P, you don't know what the "polynomial amount of time" actually is, but you do know that your algorithm will terminate in a finite amount of time for any input. But for NP, testing an input not in the language may never terminate, so you can never tell whether your input isn't in the language or you just haven't waited long enough.

Related

Nondeterministic Polynomial (NP) vs Polynomial (P)?

I am actually looking for a description of what an NP algorithm actually means and what kind of algorithm/problem can be classified as an NP problem.
I have read many resources on the net. I liked
https://www.quora.com/What-are-P-NP-NP-complete-and-NP-hard
What are the differences between NP, NP-Complete and NP-Hard?
Non deterministic Turing machine
What are NP problems?
What are NP and NP-complete problems?
Polynomial problem:
If the running time is some polynomial function of the size of the input, for instance if the algorithm runs in linear time or quadratic time or cubic time, then we say the algorithm runs in polynomial time. An example is binary search.
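For concreteness, here is a minimal sketch of binary search, the example named above; it makes O(log n) comparisons on a sorted list:

def binary_search(a, target):
    # a must be sorted; returns an index of target, or -1 if absent
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1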
Now I do understand what a polynomial problem is, but I am not able to contrast it with NP.
NP (nondeterministic polynomial time):
Now there are a lot of programs that don't (necessarily) run in polynomial time on a regular computer, but do run in polynomial time on a nondeterministic Turing machine. These programs solve problems in NP, which stands for nondeterministic polynomial time.
I am not able to understand or think of an example that does not run in polynomial time on a regular computer. Per my current understanding, every problem/algorithm can be solved in time that is some polynomial function of the input. I know I am missing something here but really could not grasp this concept. Could someone give an example of a problem which cannot be solved in polynomial time on a regular computer but can be verified in polynomial time?
One of the examples given at the second link mentioned above is that integer factorization is in NP. This is the problem: given integers n and m, is there an integer f with 1 < f < m, such that f divides n (f is a small factor of n)? Why can't this be solved in polynomial time on a regular computer? We can check, for every number from 1 to n, whether it divides n. Right?
Also, where does the verification part come in here? (I mean, if it can be solved in polynomial time, then how is the problem's solution verified in polynomial time?)
Your question touches several points.
First, in the sense relevant to your question, the size of a problem is defined to be the size of the representation of the problem. So, for example, consider the problem of finding a divisor of n. What is the representation of n? It is a series of characters of length q (I don't want to be more specific than that). In general, n is exponential in q. So when you talk about a simple loop from 1 to n, you're talking about something that is exponential in the size of the input. For example, the string "999999999999999" represents the number 999999999999999. That is quite a large number, but it is represented by only 15 characters here.
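A tiny sketch of that mismatch (the function name is mine): the loop below makes up to n iterations, yet its input is only about log10(n) characters long, so the iteration count is exponential in the input length.

def smallest_divisor(n):
    # loops up to n: linear in the value of n, exponential in len(str(n))
    for f in range(2, n):
        if n % f == 0:
            return f
    return None

print(len(str(999999999999999)))  # 15 characters represent a number near 10^15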
Second, while there is more than a single way to define the class NP, perhaps the simplest one for decision problems (which is the type you raise in your question, namely whether something is true or not) is that if the answer is true, then there is a "certificate" that can be verified in polynomial time. For example, consider the Hamilton Path Problem. This is (probably) a hard problem to solve, but if you are given a Hamilton path as an answer, it is very easy to verify that it is one; specifically, it can be done in polynomial time. For the Hamilton Path Problem, the path is a polynomial-time verifiable certificate, and therefore this problem is in NP.
It's probably worth noting how the idea of "checking a solution in polynomial time" relates to a nondeterministic Turing Machine solving a problem: in a normal (deterministic) Turing Machine, there is a well-defined set of instructions telling the machine exactly what to do in any situation ("if you're in state 3 and see an 'a', move left; if you're in state 7 and see a 'c', overwrite it with a 'b'; etc.") whereas in a nondeterministic Turing Machine there is more than one option for what to do in some situations ("if you're in state 3 and see an 'a', either move right or overwrite it with a 'b'"). In terms of algorithms, this lets us "guess" solutions in the sense that if we can encode a problem into a language on an alphabet*, then we can use a nondeterministic Turing Machine to generate strings on this alphabet, and then use a standard (deterministic) Turing Machine to check that they are correct. If we assume that we always make the right guess, then the runtime of our algorithm is simply the runtime of the deterministic checking part, which for NP problems runs in polynomial time. This is what it means for a problem to be 'decidable in polynomial time on a nondeterministic Turing Machine', and why it is often simply phrased as 'checking a solution/certificate in polynomial time'.
*
Example: The Hamiltonian Path problem could be encoded as follows:
Label the vertices of the graph 1 through n, where n is the number of vertices. Our alphabet is then the numbers 1 through n, and our language consists of all words such that
a) every integer from 1 to n appears exactly once
and
b) for every consecutive pair of integers in a word, the vertices with those labels are connected
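To make the polynomial-time verification concrete, here is a minimal sketch of a checker for this encoding (representing the graph as a set of frozenset edges is my own assumption):

def is_hamiltonian_path(word, n, edges):
    # word: sequence of vertex labels 1..n; edges: set of frozensets {u, v}
    if sorted(word) != list(range(1, n + 1)):
        return False  # condition (a): each label must appear exactly once
    return all(
        frozenset((u, v)) in edges
        for u, v in zip(word, word[1:])
    )  # condition (b): consecutive vertices must be connected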
Polynomial time: a problem which can be solved in time polynomial in the input size is called a polynomial problem. In plain simple words: the solution to the problem is fast. For example, sorting or binary search.
Nondeterministic polynomial: theoretically, the problems whose solutions can be verified in polynomial time, irrespective of the actual solving time complexity (which may or may not be polynomial). So every problem in P is also in NP.
But informally, in conversations and posts, people use the term NP in the following sense:
A problem which cannot be solved in time polynomial in the input size. In plain simple words: here the solution to the problem is not fast; you may have to try different permutations/combinations or do guesswork. But the verification part is fast and can be done in polynomial time (see the sketch below). Like:
input some numbers X and divide the numbers into two groups such that the difference of their sums is minimal
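A hedged sketch of the guess-and-verify pattern for that example; finding the best split may require trying exponentially many subsets, but checking that a proposed split achieves a claimed difference takes only polynomial time (the function names are mine):

def verify_split(numbers, group_a, claimed_diff):
    # check that group_a is drawn from numbers and achieves claimed_diff
    group_b = list(numbers)
    for x in group_a:
        if x not in group_b:
            return False  # group_a is not a sub-multiset of numbers
        group_b.remove(x)
    return abs(sum(group_a) - sum(group_b)) == claimed_diff

print(verify_split([3, 1, 4, 2], [3, 2], 0))  # True: both groups sum to 5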
I really liked Alex Flint's answer at https://www.quora.com/What-are-P-NP-NP-complete-and-NP-hard. The above is just the gist of it.

Is an iterative solution for Fibonacci series pseudo-polynomial?

So when we do an iterative solution to find the nth number in a Fibonacci sequence, we run a for loop (n-2) times. This would mean that the time complexity would be O(n). Is this correct or would it actually be pseudo-polynomial depending on the number of bits of the input, much like the Knapsack problem?
Here, I assume Fib(n) is an iterative version of a program that computes Fibonacci numbers. Perhaps something like:
def Fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a
"Fib(n) is pseudo-polynomial" means in this context that computing Fib is bounded by a polynomial of its argument, n, but isn't bounded by a polynomial function of the size of the argument, log(n). That's true in this case.
"Fib(n) is O(n)" is a statement about the running time of Fib with respect to the value of its argument. There's sometimes ambiguity what "n" is, but here there's none -- it's the input to Fib, otherwise "n" would refer to two different things in the original statement. That's true here (although see the technical side-note below).
"Fib is O(n)" is ambiguous. There are people who will tell you that n clearly refers to the argument, and there's others who will tell you that n always refers to the size of the argument. The truth is that it's ambiguous and if it's not clear in context you should say what you mean (or ask what it means if you hear it and are confused). One context where it's not ambiguous is when you're talking about classes of P/NP problems -- there it's assumed that complexities are always relative to the size of the input.
A technical side-note
The iterative version of Fib(n) performs O(n) arithmetic operations, but whether it's O(n) time depends on your computational model, and specifically whether it can perform arbitrary integer arithmetic operations in O(1) time. Personally, I'd be careful and say "Fib(n) performs O(n) arithmetic operations" rather than "Fib(n) is O(n)" -- and if you plot the running time of Fib(n), you'll find it's not linear time in practice, as real bignum implementations are certainly not O(1) for all basic operations.
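A quick empirical sketch of that point; timing the iterative version at doubling values of n will likely show the running time growing faster than linearly, because the k-th addition handles integers with on the order of k bits:

import time

def fib(n):
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

for n in (10_000, 20_000, 40_000):
    t0 = time.perf_counter()
    fib(n)
    # expect roughly 4x per doubling of n, not 2x
    print(n, time.perf_counter() - t0)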
Yes, it is in fact O(n). The time complexity of the Knapsack problem is a really weird one and is an exception.

Minimum Cut in undirected graphs

I would like to quote from Wikipedia
In mathematics, the minimum k-cut is a combinatorial optimization
problem that requires finding a set of edges whose removal would
partition the graph to k connected components.
It is said to be the minimum cut if the set of edges is minimal.
For k = 2, it would mean finding the set of edges whose removal would disconnect the graph into 2 connected components.
However, the same Wikipedia article says that:
For a fixed k, the problem is polynomial time solvable in O(|V|^(k^2))
My question is Does this mean that minimum 2-cut is a problem that belongs to complexity class P?
The min-cut problem is solvable in polynomial time, so yes, it belongs to complexity class P. Another article related to this particular problem is the Max-flow min-cut theorem.
First of all, the time complexity of an algorithm should be evaluated by expressing the number of steps the algorithm requires to finish as a function of the length of the input (see Time complexity). More or less formally: if you vary the length of the input, how does the number of steps required by the algorithm to finish vary?
Second of all, the time complexity of an algorithm is not exactly the same thing as to what complexity class does the problem the algorithm solves belong to. For one problem there can be multiple algorithms to solve it. The primality test problem (i.e. testing if a number is a prime or not) is in P, but some (most) of the algorithms used in practice are actually not polynomial.
Third of all, in the case of most algorithms you'll find on the Internet, evaluating the time complexity is not done by definition (i.e. not as a function of the length of the input, at least not expressed directly as such). Let's take the good old naive primality test algorithm (the one in which you take n as input and check for division by 2, 3, ..., n-1). How many steps does this algorithm take? One way to put it is O(n) steps. This is correct. So is this algorithm polynomial? Well, it is linear in n, so it is polynomial in n. But, if you take a look at what time complexity means, the algorithm is actually exponential. First, what is the length of the input to your problem? Well, if you provide the input n as an array of bits (the usual in practice) then the length of the input is, roughly speaking, L = log n. Your algorithm thus takes O(n) = O(2^log n) = O(2^L) steps, so it is exponential in L. So the naive primality test is at the same time linear in n, but exponential in the length of the input L. Both are correct. Btw, the AKS primality test algorithm is polynomial in the size of the input (thus, the primality test problem is in P).
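For reference, a minimal sketch of that naive test; it performs up to n-2 trial divisions, which is linear in the value n but exponential in the bit-length L of the input:

def is_prime_naive(n):
    if n < 2:
        return False
    for d in range(2, n):  # O(n) = O(2^L) iterations, where L = bit-length of n
        if n % d == 0:
            return False
    return True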
Fourth of all, what is P in the first place? Well, it is a class of problems that contains all decision problems that can be solved in polynomial time. What is a decision problem? A problem that can be answered with yes or no. Check these two Wikipedia pages for more details: P (complexity) and decision problems.
Coming back to your question, the answer is no (but pretty close to yes :p). The minimum 2-cut problem is in P if formulated as a decision problem (your formulation requires an answer that is not just yes-or-no). At the same time, the algorithm that solves the problem in O(|V|^4) steps is a polynomial algorithm in the size of the input. Why? Well, the input to the problem is the graph (i.e. vertices, edges and weights); to keep it simple, let's assume we use an adjacency/weights matrix (i.e. the length of the input is at least quadratic in |V|). So solving the problem in O(|V|^4) steps means polynomial in the size of the input. The algorithm that accomplishes this is a proof that the minimum 2-cut problem (if formulated as a decision problem) is in P.
A class related to P is FP and your problem (as you formulated it) belongs to this class.

Nondeterminism versus polynomial-time verifiability

I have read that an NP problem is verifiable in polynomial time or, equivalently, is solvable in polynomial time by a non-deterministic Turing machine. Why are these definitions equivalent?
First, let's show that anything that can be verified in polynomial time can be solved in nondeterministic polynomial time. Suppose you have some algorithm P(x, y) that decides whether y verifies x, and where P runs in time O(|x|^k) for some constant k. The runtime of this algorithm is polynomial in |x|, meaning that it can only look at polynomially many bits of y. Then you could build this nondeterministic polynomial-time algorithm that solves the problem:
Nondeterministically guess a string y of length O(|x|^k).
Deterministically run P(x, y) and output whatever it says.
This runs in nondeterministic polynomial time because y is constructed in nondeterministic polynomial time and P(x, y) then runs in polynomial time. If there is a y that verifies x, this machine can nondeterministically guess it. Otherwise, no guess works and the machine will output NO.
The other direction is trickier. Suppose there's a nondeterministic algorithm P(x) that runs in nondeterministic time O(|x|^k) for some constant k. This nondeterministic algorithm, at each step, chooses from one of c different options of what to do next. Therefore, you can make a verification program Q(x, y) where y encodes what choices to make at each step of the computation of P. It can then simulate P(x), looking at the choices encoded in y to determine which step to take next. This will run in deterministic time O(|x|^k) because it simply does all of the steps in the nondeterministic computation.
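A rough Python sketch of Q, under the assumption that P is exposed as a single-step function taking the current choice (step_P, time_bound, and the interface are my own illustrative assumptions):

def Q(x, y, step_P, time_bound):
    # Deterministically replay P on x, resolving each nondeterministic
    # branch with the next symbol of the certificate y.
    state = None
    for i in range(time_bound):
        choice = y[i] if i < len(y) else 0
        state, done, answer = step_P(x, state, choice)
        if done:
            return answer
    return False  # ran past the time bound without accepting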
Hope this helps!
Perhaps this example gives some hint:
Given L = {w : expression w is satisfiable}
and the time for n variables:
Guess an assignment of the variables: O(n)
Check if this is a satisfying assignment: O(n)
Total time: O(n)
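For concreteness, a minimal sketch of the "check" step as a polynomial-time verifier (the CNF representation, with clauses as lists of signed variable indices, is my own assumption):

def verify(clauses, assignment):
    # a clause is satisfied if at least one literal agrees with the guess
    return all(
        any((lit > 0) == assignment[abs(lit)] for lit in clause)
        for clause in clauses
    )

clauses = [[1, -2], [2, 3], [-1, -3]]  # (x1 or not x2) and (x2 or x3) and (not x1 or not x3)
assignment = {1: True, 2: True, 3: False}  # the guessed certificate
print(verify(clauses, assignment))  # True: this guess satisfies the formula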
The satisfiability problem is an NP problem and (probably) intractable, but it is widely used in computing applications due to the fact that each guess can be checked in linear time.
The class NP is intended to isolate the notion of polynomial time “verifiability”.
NP is the class of languages that have polynomial time verifiers.

Is the time complexity of the empty algorithm O(0)?

So, given the empty program (one that does nothing at all):
Is the time complexity of this program O(0)? In other words, is 0 O(0)?
I thought answering this in a separate question would shed some light on this question.
EDIT: Lots of good answers here! We all agree that 0 is O(1). The question is, is 0 O(0) as well?
From Wikipedia:
A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
From this description, since the empty algorithm requires 0 time to execute, it has an upper-bound performance of O(0). This means it's also O(1), which happens to be a larger upper bound.
Edit:
More formally from CLR (1ed, pg 26):
For a given function g(n), we denote by O(g(n)) the set of functions
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
The asymptotic time performance of the empty algorithm, executing in 0 time regardless of the input, is therefore a member of O(0).
Edit 2:
We all agree that 0 is O(1). The question is, is 0 O(0) as well?
Based on the definitions, I say yes.
Furthermore, I think there's a bit more significance to the question than many answers indicate. By itself the empty algorithm is probably meaningless. However, whenever a non-trivial algorithm is specified, the empty algorithm could be thought of as lying between consecutive steps of the algorithm being specified as well as before and after the algorithm steps. It's nice to know that "nothingness" does not impact the algorithm's asymptotic time performance.
Edit 3:
Adam Crume makes the following claim:
For any function f(x), f(x) is in O(f(x)).
Proof: let S be a subset of R and T be a subset of R* (the non-negative real numbers), and let f(x): S -> T and c ≥ 1. Then 0 ≤ f(x) ≤ f(x), which leads to 0 ≤ f(x) ≤ cf(x) for all x ∈ S. Therefore f(x) ∈ O(f(x)).
Specifically, if f(x) = 0 then f(x) ∈ O(0).
It takes the same amount of time to run regardless of the input, therefore it is O(1) by definition.
Several answers say that the complexity is O(1) because the time is a constant and the time is bounded by the product of some coefficient and 1. Well, it is true that the time is a constant and it is bounded that way, but that doesn't mean that the best answer is O(1).
Consider an algorithm that runs in linear time. It is ordinarily designated as O(n) but let's play devil's advocate. The time is bounded by the product of some coefficient and n^2. If we consider O(n^2) to be a set, the set of all algorithms whose complexity is small enough, then linear algorithms are in that set. But it doesn't mean that the best answer is O(n^2).
The empty algorithm is in O(n^2) and in O(n) and in O(1) and in O(0). I vote for O(0).
I have a very simple argument for the empty algorithm being O(0): For any function f(x), f(x) is in O(f(x)). Simply let f(x)=0, and we have that 0 (the runtime of the empty algorithm) is in O(0).
On a side note, I hate it when people write f(x) = O(g(x)), when it should be f(x) ∈ O(g(x)).
Big O is asymptotic notation. To use big O, you need a function - in other words, the expression must be parametrized by n, even if n is not used. It makes no sense to say that the number 5 is O(n), it's the constant function f(n) = 5 that is O(n).
So, to analyze time complexity in terms of big O you need a function of n. Your algorithm always makes arguably 0 steps, but without a varying parameter, talking about asymptotic behaviour makes no sense. Assume that your algorithm is parametrized by n. Only now may you use asymptotic notation. It makes no sense to say that it is O(n^2), or even O(1), if you don't specify what n is (or the variable hidden in O(1))!
As soon as you settle on the number of steps, it's a matter of the definition of big O: the function f(n) = 0 is O(0).
Since this is a low-level question it depends on the model of computation.
Under "idealistic" assumptions, it is possible you don't do anything.
But in Python, you cannot say def f(x):, but only def f(x): pass. If you assume that every instruction, even pass (a NOP), takes time, then the complexity is f(n) = c for some constant c, and unless c = 0 you can only say that f is O(1), not O(0).
It's worth noting that big O by itself does not have anything to do with algorithms. For example, you may say sin x = x + O(x^3) when discussing a Taylor expansion. Also, O(1) does not mean constant; it means bounded by a constant.
All of the answers so far address the question as if there is a right and a wrong answer. But there isn't. The question is a matter of definition. Usually in complexity theory the time cost is an integer --- although that too is just a definition. You're free to say that the empty algorithm that quits immediately takes 0 time steps or 1 time step. It's an abstract question because time complexity is an abstract definition. In the real world, you don't even have time steps, you have continuous physical time; it may be true that one CPU has clock cycles, but a parallel computer could easily have asynchronous clocks and in any case a clock cycle is extremely small.
That said, I would say that it's more reasonable to say that the halt operation takes 1 time step rather than that it takes 0 time steps. It does seem more realistic. For many situations it's arguably very conservative, because the overhead of initialization is typically far greater than executing one arithmetic or logical operation. Giving the empty algorithm 0 time steps would only be reasonable to model, for example, a function call that is deleted by an optimizing compiler that knows that the function won't do anything.
It should be O(1). The coefficient is always 1.
Consider:
If something grows like 5n, you don't say O(5n), you say O(n) [in other words, O(1n)]
If something grows like 7n^2, you don't say O(7n^2), you say O(n^2) [in other words, O(1n^2)]
Likewise you should say O(1), not O(some other constant)
There is no such thing as O(0). Even an oracle machine or a hypercomputer requires time for one operation, e.g. solve(the_goldbach_conjecture); ergo:
All machines, theoretical or real, finite or infinite, produce algorithms with a minimum time complexity of O(1).
But then again, this code right here is O(0):
// Hello world!
:)
I would say it's O(1) by definition, but O(0) if you want to get technical about it: since O(k1g(n)) is equivalent to O(k2g(n)) for any constants k1 and k2, it follows that O(1 * 1) is equivalent to O(0 * 1), and therefore O(0) is equivalent to O(1).
However, the empty algorithm is not like, for example, the identity function, whose definition is something like "return your input". The empty algorithm is more like an empty statement, or whatever happens between two statements. Its definition is "do absolutely nothing with your input", presumably without even the implied overhead of simply having input.
Consequently, the complexity of the empty algorithm is unique in that O(0) has a complexity of zero times whatever function strikes your fancy, or simply zero. It follows that since the whole business is so wacky, and since O(0) doesn't already mean something useful, and since it's slightly ridiculous to even discuss such things, a reasonable special case for O(0) is something like this:
The complexity of the empty algorithm is O(0) in time and space. An algorithm with time complexity O(0) is equivalent to the empty algorithm.
So there you go.
Given the formal definition of Big O:
Let f(x) and g(x) be two functions defined over the set of real numbers. Then, we write:
f(x) = O(g(x)) as x approaches infinity iff there exists a real M and a real x0 so that:
|f(x)| <= M * |g(x)| for every x > x0
As I see it, if we substitute g(x) = 0 (in order to have a program with complexity O(0)), we must have:
|f(x)| <= 0, for every x > x0 (the constraint of existence of a real M and x0 is practically lifted here)
which can only be true when f(x) = 0.
So I would say that not only is the empty program O(0), but it is the only one for which that holds. Intuitively, this should have been true since O(1) encompasses all algorithms that require a constant number of steps regardless of the size of the task, including 0. It's essentially useless to talk about O(0); it's already in O(1). I suspect it's purely for simplicity of definition that we use O(1), where it could as well be O(c) or something similar.
0 = O(f) for every function f, since 0 <= |f|, so it is also O(0).
Not only is this a perfectly sensible question, but it is important in certain situations involving amortized analysis, especially when "cost" means something other than "time" (for example, "atomic instructions").
Let's say there is a data structure featuring multiple operation types, for which an amortized analysis is being conducted. It could well happen that one type of operation can always be funded fully using "coins" deposited during previous operations.
There is a simple example of this: the "multipop stack" described in Cormen, Leiserson, Rivest, Stein [CLRS09, 17.2, p. 457], and also on Wikipedia. Each time an item is pushed, a coin is put on the item, for a total amortized cost of 2. When (multi)pops occur, they can be fully paid for by taking one coin from each item popped, so the amortized cost of MULTIPOP(k) is O(0). To wit:
Note that the amortized cost of MULTIPOP is a constant (0)
...
Moreover, we can also charge MULTIPOP operations nothing. To pop the
first plate, we take the dollar of credit off the plate and use it to
pay the actual cost of a POP operation. To pop a second plate, we
again have a dollar of credit on the plate to pay for the POP
operation, and so on. Thus, we have always charged enough up front to
pay for MULTIPOP operations. In other words, since each plate on the
stack has 1 dollar of credit on it, and the stack always has a
nonnegative number of plates, we have ensured that the amount of
credit is always nonnegative.
Thus O(0) is an important "complexity class" for certain amortized operations.
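To make the bookkeeping concrete, here is a minimal Python sketch of that structure (the class and method names are mine); each push banks one extra coin on its item, and every multipop is paid for entirely by banked coins, so its amortized cost is O(0):

class MultipopStack:
    def __init__(self):
        self._items = []

    def push(self, x):
        # amortized cost 2: one coin pays for the append, one is banked on x
        self._items.append(x)

    def multipop(self, k):
        # amortized cost 0: each pop is paid by the coin banked on its item
        popped = []
        while self._items and k > 0:
            popped.append(self._items.pop())
            k -= 1
        return popped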
O(1) means the algorithm's time complexity is always constant.
Let's say we have this algorithm (in C):
int doSomething(int n[])
{
    int x = n[0]; // This line accesses an array position, so it takes time.
    int y = n[1]; // Same here.
    return x + y;
}
I am ignoring the fact that the array could have fewer than 2 positions, just to keep it simple.
If we count the 2 most expensive lines, we have a total time of 2.
2 = O(1), because:
2 <= c * 1, if c = 2, for every n > 1
If we have this code:
void doNothing() {}
And if we count it as having 0 expensive lines, then there is no difference in saying it has O(0), O(1), or O(1000), because for every one of these bounds, we can prove the same kind of theorem.
Normally, if the algorithm takes a constant number of steps to complete, we say it has O(1) time complexity.
I guess this is just a convention, because you could use any constant number to represent the function inside the O().
No. It's O(c) by convention whenever you don't have dependence on input size, where c is any positive constant (typically 1 is used - O(1) = O(12.37)).
