What is pseudopolynomial time? How does it differ from polynomial time?

What is pseudopolynomial time? How does it differ from polynomial time? Some algorithms that run in pseudopolynomial time have runtimes like O(nW) (for the 0/1 Knapsack Problem) or O(√n) (for trial division); why doesn't that count as polynomial time?

To understand the difference between polynomial time and pseudopolynomial time, we need to start off by formalizing what "polynomial time" means.
The common intuition for polynomial time is "time O(n^k) for some k." For example, selection sort runs in time O(n^2), which is polynomial time, while brute-force solving TSP takes time O(n · n!), which isn't polynomial time.
These runtimes all refer to some variable n that tracks the size of the input. For example, in selection sort, n refers to the number of elements in the array, while in TSP n refers to the number of nodes in the graph. In order to standardize the definition of what "n" actually means in this context, the formal definition of time complexity defines the "size" of a problem as follows:
The size of the input to a problem is the number of bits required to write out that input.
For example, if the input to a sorting algorithm is an array of 32-bit integers, then the size of the input would be 32n, where n is the number of entries in the array. In a graph with n nodes and m edges, the input might be specified as a list of all the nodes followed by a list of all the edges, which would require Ω(n + m) bits.
Given this definition, the formal definition of polynomial time is the following:
An algorithm runs in polynomial time if its runtime is O(x^k) for some constant k, where x denotes the number of bits of input given to the algorithm.
When working with algorithms that process graphs, lists, trees, etc., this definition more or less agrees with the conventional definition. For example, suppose you have a sorting algorithm that sorts arrays of 32-bit integers. If you use something like selection sort to do this, the runtime, as a function of the number of input elements in the array, will be O(n^2). But how does n, the number of elements in the input array, correspond to the number of bits of input? As mentioned earlier, the number of bits of input will be x = 32n. Therefore, if we express the runtime of the algorithm in terms of x rather than n, we get that the runtime is O(x^2), and so the algorithm runs in polynomial time.
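As a concrete illustration, here is a minimal selection sort sketch in Python (not part of the original answer, just an example): it does O(n^2) comparisons in the number of elements n, and since each 32-bit element contributes a constant number of input bits, the runtime is also polynomial in the bit count x = 32n.

def selection_sort(a):
    # O(n^2) comparisons, where n = len(a) is the number of elements.
    # With 32-bit elements the input is x = 32n bits, so this is also O(x^2).
    n = len(a)
    for i in range(n):
        smallest = i
        for j in range(i + 1, n):
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a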
Similarly, suppose that you do depth-first search on a graph, which takes time O(m + n), where m is the number of edges in the graph and n is the number of nodes. How does this relate to the number of bits of input given? Well, if we assume that the input is specified as an adjacency list (a list of all the nodes and edges), then as mentioned earlier the number of input bits will be x = Ω(m + n). Therefore, the runtime will be O(x), so the algorithm runs in polynomial time.
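For example, a minimal iterative DFS sketch in Python (the dictionary-of-lists adjacency format is an assumption for illustration): each node is expanded once and each edge is scanned a constant number of times, so the work is O(n + m), linear in the Ω(n + m)-bit input.

def dfs(adj, start):
    # adj maps each node to a list of its neighbours (an adjacency list).
    # Each edge causes at most one push and each node is expanded at most
    # once, so the total work is O(n + m).
    visited, stack, order = set(), [start], []
    while stack:
        node = stack.pop()
        if node in visited:
            continue
        visited.add(node)
        order.append(node)
        for neighbour in adj[node]:
            if neighbour not in visited:
                stack.append(neighbour)
    return order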
Things break down, however, when we start talking about algorithms that operate on numbers. Let's consider the problem of testing whether a number is prime or not. Given a number n, you can test if n is prime using the following algorithm:
function isPrime(n):
    for i from 2 to n - 1:
        if (n mod i) = 0, return false
    return true
So what's the time complexity of this code? Well, the loop runs O(n) times, and each iteration does some amount of work to compute n mod i (as a really conservative upper bound, this can certainly be done in time O(n^3)). Therefore, this overall algorithm runs in time O(n^4), and possibly a lot faster.
In 2004, three computer scientists published a paper called PRIMES is in P giving a polynomial-time algorithm for testing whether a number is prime. It was considered a landmark result. So what's the big deal? Don't we already have a polynomial-time algorithm for this, namely the one above?
Unfortunately, we don't. Remember, the formal definition of time complexity talks about the complexity of the algorithm as a function of the number of bits of input. Our algorithm runs in time O(n^4), but what is that as a function of the number of input bits? Well, writing out the number n takes O(log n) bits. Therefore, if we let x be the number of bits required to write out the input n, the runtime of this algorithm is actually O(2^(4x)), which is not a polynomial in x.
This is the heart of the distinction between polynomial time and pseudopolynomial time. On the one hand, our algorithm is O(n^4), which looks like a polynomial, but on the other hand, under the formal definition of polynomial time, it's not polynomial-time.
To get an intuition for why the algorithm isn't a polynomial-time algorithm, think about the following. Suppose I want the algorithm to have to do a lot of work. If I write out an input like this:
10001010101011
then it will take some worst-case amount of time, say T, to complete. If I now add a single bit to the end of the number, like this:
100010101010111
The runtime will now (in the worst case) be 2T. I can double the amount of work the algorithm does just by adding one more bit!
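To make the doubling concrete, here is a small Python experiment (illustrative, not from the original answer) that counts the divisions the loop above performs on two prime inputs whose lengths differ by one bit.

def isprime_steps(n):
    # Same loop as the pseudocode above, but counting the divisions performed.
    steps = 0
    for i in range(2, n):
        steps += 1
        if n % i == 0:
            return steps, False
    return steps, True

# 16381 (14 bits) and 32749 (15 bits) are both prime, so the loop runs to
# the end in both cases; one extra bit of input roughly doubles the work.
print(isprime_steps(16381))   # (16379, True)
print(isprime_steps(32749))   # (32747, True)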
An algorithm runs in pseudopolynomial time if the runtime is some polynomial in the numeric value of the input, rather than in the number of bits required to represent it. Our prime testing algorithm is a pseudopolynomial time algorithm, since it runs in time O(n^4), but it's not a polynomial-time algorithm because, as a function of the number of bits x required to write out the input, the runtime is O(2^(4x)). The reason that the "PRIMES is in P" paper was so significant was that its runtime was (roughly) O(log^12 n), which as a function of the number of bits is O(x^12).
So why does this matter? Well, we have many pseudopolynomial time algorithms for factoring integers. However, these algorithms are, technically speaking, exponential-time algorithms. This is very useful for cryptography: if you want to use RSA encryption, you need to be able to trust that we can't factor numbers easily. By increasing the number of bits in the numbers to a huge value (say, 1024 bits), you can make the amount of time that the pseudopolynomial-time factoring algorithm must take get so large that it would be completely and utterly infeasible to factor the numbers. If, on the other hand, we can find a polynomial-time factoring algorithm, this isn't necessarily the case. Adding in more bits may cause the work to grow by a lot, but the growth will only be polynomial growth, not exponential growth.
That said, in many cases pseudopolynomial time algorithms are perfectly fine because the size of the numbers won't be too large. For example, counting sort has runtime O(n + U), where U is the largest number in the array. This is pseudopolynomial time (because the numeric value of U requires O(log U) bits to write out, so the runtime is exponential in the input size). If we artificially constrain U so that U isn't too large (say, if we let U be 2), then the runtime is O(n), which actually is polynomial time. This is how radix sort works: by processing the numbers one bit at a time, the runtime of each round is O(n), so the overall runtime is O(n log U). This actually is polynomial time, because writing out n numbers to sort uses Ω(n) bits and the value of log U is directly proportional to the number of bits required to write out the maximum value in the array.
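For reference, here is a minimal counting sort sketch in Python (assuming non-negative integer keys; an illustration, not the original poster's code). Its O(n + U) runtime is pseudopolynomial because U enters as a numeric value, yet only O(log U) bits of input are needed to specify it.

def counting_sort(a):
    # O(n + U) time, where n = len(a) and U = max(a).
    if not a:
        return []
    U = max(a)
    counts = [0] * (U + 1)
    for value in a:
        counts[value] += 1
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result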

Pseudo-polynomial time complexity means polynomial in the value/magnitude of input but exponential in the size of input.
By size we mean the number of bits required to write the input.
From the pseudo-code for the 0/1 knapsack DP below, we can see that the time complexity is O(nW).
// Input:
// Values (stored in array v)
// Weights (stored in array w)
// Number of distinct items (n)
// Knapsack capacity (W)

for j from 0 to W do
    m[0, j] := 0
end for

for i from 1 to n do
    for j from 0 to W do
        if j >= w[i] then
            m[i, j] := max(m[i-1, j], m[i-1, j-w[i]] + v[i])
        else
            m[i, j] := m[i-1, j]
        end if
    end for
end for
Here, W is not polynomial in the length of the input though, which is what makes it pseudo-polynomial.
Let s be the number of bits required to represent W,
i.e. size of input = s = log(W)   (log = log base 2)
-> 2^s = 2^(log(W))
-> 2^s = W   (because 2^(log(x)) = x)
Now, running time of knapsack = O(nW) = O(n * 2^s),
which is not polynomial.
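For concreteness, here is the same table-filling DP as runnable Python (a sketch; unlike the pseudocode above, the lists here are 0-indexed). The table has (n + 1) x (W + 1) entries, which is where the O(nW) = O(n * 2^s) bound comes from.

def knapsack(values, weights, W):
    # m[i][j] = best value using the first i items with capacity j.
    # Filling the (n + 1) x (W + 1) table takes O(nW) time: polynomial in
    # the value of W, exponential in the number of bits used to write W.
    n = len(values)
    m = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            if j >= weights[i - 1]:
                m[i][j] = max(m[i - 1][j],
                              m[i - 1][j - weights[i - 1]] + values[i - 1])
            else:
                m[i][j] = m[i - 1][j]
    return m[n][W]

# Example: knapsack([60, 100, 120], [10, 20, 30], 50) returns 220.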


Why is the knapsack problem not solvable in polynomial time with an algorithm using dynamic programming?

I saw this explanation but I still could not fully understand it.
If we follow this logic: suppose some algorithm works in O(n) time. Then:
Let's assume n = 1000 in binary (4 bits long),
so the time complexity is T(n) = O(8).
Now let's double the size of the input: n = 10000000 (8 bits long), so T(n) = O(128).
The time increases exponentially, so does that mean O(n) is not polynomial time?
The question is: "Polynomial as a function of what?". When we determine complexity of algorithms, we usually, but not always, express it as a function of the length of the input. Often, but not always, this length is represented by the letter n. For instance, if you study problems involving graphs, the letter n is often used to denote the number of vertices in the graph, rather than for the length of the input.
In your case, the variable n is the number of items, and variable W is the capacity of the bag.
Thus, the number n is relevant to the complexity; but it is not, in itself, the entire length of the input. Let's call N the real length of the input. Try not to confuse n and N. The complexity of your algorithms will have to be expressed as functions of N, and terms like "linear complexity", "polynomial complexity", "exponential complexity", etc., will always be in reference to N, not n.
What is the length N of the input?
Your input consists of a list of pairs (weight(i), value(i)), followed by the capacity W. Presumably, each item in the bag is allowed to have a weight ranging from 0 to W, and a value ranging from 0 to some maximum value V. Thus, weights need as many bits as W to be expressed, and values need as many bits as V to be expressed.
If you know a little about logarithms and about writing numbers, you should know that the number of bits required to write a number is proportional to its logarithm. Thus, every weight requires log(W) bits, and every value requires log(V) bits.
Thus: N = n * (log(W) + log(V)).
You can convince yourself that the lengths of W and V are relevant to the complexity. Here all our numbers are integers. But imagine a real world problem. A weight can be expressed in tons, or in grams. 1 ton is the same as 1000000 grams. A value can be expressed in cents, or in tens of thousands of euros. Before inputting our real-world problem into our algorithm, we need to choose our units.
Is your algorithm going to take a longer time to solve the problem, if the weight is expressed in grams rather than in tons?
The answer is yes. Because grams are a million times more precise than tons. So you are asking for a million times more precise answer, when asking the question in grams rather than in tons. Thus the algorithm will take more time to find that solution.
I hope I could convince you that the complexity of the algorithm should be expressed as a function of the actual length of the input, and not just the number of elements.
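To tie the pieces together, here is a tiny worked example of N = n * (log(W) + log(V)) with hypothetical numbers (Python, purely for illustration):

from math import ceil, log2

# Hypothetical instance: n = 4 items, capacity W = 1_000_000 (grams),
# maximum value V = 10_000 (cents).
n, W, V = 4, 1_000_000, 10_000
N = n * (ceil(log2(W)) + ceil(log2(V)))
print(N)   # 4 * (20 + 14) = 136 bits of input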

Algorithm is linear (O(n)) to size of input, but what if input size is exponential

The instructor said that the complexity of an algorithm is typically measured with respect to its input size.
So, when we say an algorithm is linear, then even if you give it an input of size 2^n (say 2^n being the number of nodes in a binary tree), the algorithm is still linear in the input size?
The above seems to be what the instructor means, but I'm having a hard time getting my head around it. If you give it an input of size 2^n, which is exponential in some parameter 'n', but then call this input "x", then, sure, your algorithm is linear in x. But deep down, isn't it still exponential in 'n'? What's the point of saying it's linear in x?
Whenever you see the term "linear," you should ask - linear in what? Usually, when we talk about an algorithm's runtime being "linear time," we mean "the algorithm's runtime is O(n), where n is the size of the input."
You're asking what happens if, say, n = 2^k and we pass an exponentially sized input into the function. In that case, since the runtime is O(n) and n = 2^k, the overall runtime would be O(2^k). There's no contradiction here between this statement and the fact that the algorithm runs in linear time, since "linear time" means "linear as a function of the size of the input."
Notice that I'm explicitly choosing to use a different variable k in the expression 2^k to call attention to the fact that there are indeed two different quantities here - the size of the input as a function of k (which is 2^k) and the variable n representing the size of the input to the function more generally. You sometimes see this combined, as in "if the runtime of the algorithm is O(n), then running the algorithm on an input of size 2^n takes time O(2^n)." That statement is true but a bit tricky to parse, since n is playing two different roles there.
If an algorithm has linear time complexity, then it is linear regardless of the size of the input, whether the input is of fixed size, or quadratic or exponential in some other parameter.
Obviously, running that algorithm on inputs of those different sizes will take different amounts of time, but the complexity is still O(n).
Perhaps this example will help: does running merge sort on an array of size 16 mean merge sort is O(1), because it took a constant number of operations to sort that array? The answer is no.
When we say an algorithm is O(n), we mean that if the input size is n, it is linear with respect to the input size. Hence, if n is exponential in terms of another parameter k (for example n = 2^k), the algorithm is still linear with respect to the input size.
Another example is the time complexity of binary search on an input array of size n. We say that binary search on a sorted array of size n is in O(log(n)): with respect to the input size, it takes asymptotically at most log(n) comparisons to find an item in an array of size n.
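For completeness, a standard binary search sketch in Python (illustrative only); it performs O(log n) comparisons on a sorted array of n elements:

def binary_search(a, target):
    # a must be sorted; at most O(log n) comparisons, where n = len(a).
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1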
Let's say you are printing the first n numbers, and printing each number takes 3 operations:
n-> 10, number of operations -> 3 x 10 = 30
n-> 100, number of operations -> 3 x 100 = 300
n-> 1000, number of operations -> 3 x 1000 = 3000
n -> 10000 (we can also say n = 100^2, i.e. some k^2),
number of operations -> 3 x 10000 = 30,000
Even though n is a power of something (in this case 100), the number of operations depends solely on the value of the input (n, which is 10,000).
So we can say it is a linear-time algorithm.
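The same example in code (a small Python sketch, with the "3 operations per number" treated as an illustrative constant):

def print_first_n(n):
    # Work grows linearly with n: a constant amount of work per number.
    operations = 0
    for i in range(1, n + 1):
        print(i)
        operations += 3   # illustrative: 3 operations per number, as above
    return operations

# print_first_n(10000) reports about 30,000 operations, even though
# 10,000 happens to equal 100**2.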

complexity analysis of RSA algorithm

The security of RSA hinges upon a simple assumption:
Given N, e, and y = x^e mod N, it is computationally intractable to determine x.
This assumption is quite plausible. How might Eve try to guess x?
She could experiment with all possible values of x, each time checking whether x^e is equal to y mod N, but this would take exponential time. Or she could try to factor N to retrieve p and q, and then figure out d by inverting e modulo (p-1)(q-1), but we believe factoring to be hard. Intractability is normally a source of dismay; the insight of RSA lies in using it to advantage.
My question on the above text:
How do we get exponential time for checking each possible value of x in the above context?
https://en.wikipedia.org/wiki/Time_complexity
...the time complexity is generally expressed as a function of the size of the input
The size of the input for this task is proportional to b, the number of bits in N. Thus iterating through all possible values of x has time complexity O(2^b), which is exponential.
Algorithms that have polynomial time complexity in terms of the numeric value of the input are called pseudo-polynomial algorithms. For example, it is well known that the integer factorization problem has no known polynomial algorithm. But we can simply iterate from 2 to sqrt(N) and find all prime factors of a number N in O(sqrt(N)) time. This algorithm has polynomial complexity in terms of N, but the length of the input for this problem is not N; it is approximately log(N). As a result, this iteration is only a pseudo-polynomial solution.
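Such a trial-division factorization might look like this in Python (a sketch): it performs O(sqrt(N)) iterations, which is pseudopolynomial because the input is only about log(N) bits long, so O(sqrt(N)) = O(2^(b/2)) for a b-bit input.

def factorize(N):
    # Trial division up to sqrt(N): O(sqrt(N)) iterations for a b-bit N,
    # i.e. O(2^(b/2)) in terms of the input size b.
    factors = []
    d = 2
    while d * d <= N:
        while N % d == 0:
            factors.append(d)
            N //= d
        d += 1
    if N > 1:
        factors.append(N)
    return factors

# Example: factorize(84) returns [2, 2, 3, 7].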

why exponential time is applied only to size in pseudo-polynomial time complexity

I have been reading about pseudo-polynomial time, and I have an unresolved question about it.
For example, the 0-1 knapsack algorithm's time complexity is O(NW),
where N is the number of items and W is the capacity of the knapsack.
It is pseudo-polynomial because the time complexity is O(N × 2^(bits in W)).
Then I would think O(2^(bits in N) × 2^(bits in W)) is also possible for the time complexity. So why is the 0-1 knapsack algorithm pseudo-polynomial only because of W, and not N?
Because it is about the size of the input, not its value. N is the size of an array. W is just a value.
I like to think of it as pseudo-polynomial because it behaves exponentially if we encode W in binary (or any larger base), but if we encode W in unary representation, it behaves polynomially.
For N this isn't true: no matter which representation we use to encode the input array, the time function will always behave as a polynomial.
In practice
This has a practical consequence for how easy it is to create an input that takes too long to process. If we want to attack via N, we must create an array with N elements. But if we attack via W, it is as easy as writing down a single huge capacity (using only about log2(W) bits).
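The unary-versus-binary point can be made concrete in a couple of lines (illustrative Python): the same W costs W symbols to write in unary but only about log2(W) bits in binary, which is why a huge capacity is so cheap to write down.

from math import ceil, log2

W = 1_000_000
unary_length = W                     # W repetitions of a single symbol
binary_length = ceil(log2(W + 1))    # about 20 bits
print(unary_length, binary_length)   # 1000000 20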
Note that N indicates the number of elements you have; each element is represented as (w_i, c_i) (weight, cost), so the input size is Omega(N), since it must contain all N elements.
However, W is a single number, and since you need only log(W) bits to represent it, its contribution to the input size is Omega(log(W)).
In conclusion, we say knapsack (for example) is pseudo-polynomial because it is NOT polynomial in the size of the INPUT; it is exponential in it: the algorithm takes O(N * 2^b) time, where b = log(W) is the number of bits in W, while the input size is only O(N + b), so the running time is exponential relative to the input size.

Precise Input Size and Time Complexity

When talking about time complexity we usually use n as the input parameter, which is not a precise measure of the actual input size. I am having trouble showing that, when using the specific input size (s), an algorithm remains in the same complexity class.
For instance, take a simple Sequential Search algorithm. In its worst case it takes W(n) time. If we use the specific input size (in base 2), the order should be W(lg L), where L is the largest integer.
How do I show that Sequential Search, or any algorithm, remains in the same complexity class, in this case linear time? I understand that there is some sort of substitution that needs to take place, but I am shaky on how to come to the conclusion.
EDIT
I think I may have found what I was looking for, but I'm not entirely sure.
If you define worst case time complexity as W(s), the maximum number of steps done by an algorithm for an input size of s, then by definition of input size, s = lg n, where n is the input. Then, n = 2^s, leading to the conclusion that the time complexity is W(2^s), an exponential complexity. Therefore, the algorithm's performance with binary encoding is exponential, not linear as it is in terms of magnitude.
When talking about time complexity we usually use n as the input parameter, which is not a precise measure of the actual input size. I am having trouble showing that, when using the specific input size (s), an algorithm remains in the same complexity class.
For instance, take a simple Sequential Search algorithm. In its worst case it takes W(n) time. If we use the specific input size (in base 2), the order should be W(lg L), where L is the largest integer.
L is a variable that represents the largest integer.
n is a variable that represents the size of the input.
L is not a specific value anymore than n is.
When you apply a specific value, you aren't talking about a complexity class anymore, you are talking about an instance of that class.
Let's say you are searching a list of 500 integers. In other words, n = 500
The worst-case complexity class of Sequential Search is O(n)
The complexity is n
The specific instance of worst-case complexity is 500
Edit:
Your values will be uniform in the number of bits required to encode each value. If the input is a list of 32-bit integers, then c = 32, the number of bits per integer. The complexity would be 32*n => O(n).
In terms of L, if L is the largest value and lg L is the number of bits required to encode L, then lg L is the constant c. Your complexity in terms of bits is c*n = O(n), where the constant c = lg L is the per-element input size.
What I know is that the maximum number of steps done by Sequential Search is, obviously, cn^2 + n lg L, cn^2 being the number of steps to increment loops and do branching.
That's not true at all. The maximum number of steps done by a sequential search is going to be c*n, where n is the number of items in the list and c is some constant. That's the worst case. There is no n^2 component or logarithmic component.
For example, a simple sequential search would be:
for (int i = 0; i < NumItems; ++i)
{
    if (Items[i] == query)
        return i;
}
return -1;
With that algorithm, if you search for each item, then half of the searches will require fewer than NumItems/2 iterations and half of the searches will require NumItems/2 or more iterations. If an item you search for isn't in the list, it will require NumItems iterations to determine that. The worst case running time is NumItems iterations. The average case is NumItems/2 iterations.
The actual number of operations performed is some constant, C, multiplied by the number of iterations. On average it's C*NumItems/2.
As Lucia Moura states: "Except for the unary encoding, all the other encodings for natural numbers have lengths that are polynomially related."
Here is the source. Take a look at page 19.

Resources