When talking about time complexity we usually use n as the input, which is not a precise measure of the actual input size. I am having trouble showing that, when using the specific input size s, an algorithm remains in the same complexity class.
For instance, take a simple Sequential Search algorithm. In its worst case it takes W(n) time. If we instead use the specific input size (in base 2), the order should be W(lg L), where L is the largest integer.
How do I show that Sequential Search, or any algorithm, remains the same complexity class, in this case linear time? I understand that there is some sort of substitution that needs to take place, but I am shaky on how to come to the conclusion.
EDIT
I think I may have found what I was looking for, but I'm not entirely sure.
If you define worst case time complexity as W(s), the maximum number of steps done by an algorithm for an input size of s, then by definition of input size, s = lg n, where n is the input. Then, n = 2^s, leading to the conclusion that the time complexity is W(2^s), an exponential complexity. Therefore, the algorithm's performance with binary encoding is exponential, not linear as it is in terms of magnitude.
L is a variable that represents the largest integer.
n is a variable that represents the size of the input.
L is not a specific value any more than n is.
When you apply a specific value, you aren't talking about a complexity class anymore, you are talking about an instance of that class.
Let's say you are searching a list of 500 integers. In other words, n = 500
The worst-case complexity class of Sequential Search is O(n)
The complexity is n
The specific instance of worst-case complexity is 500
Edit:
Your values will be uniform in the number of bits required to encode each value. If the input is a list of 32-bit integers, then c = 32, the number of bits per integer. The complexity would be 32*n => O(n).
In terms of L, if L is the largest value and lg L is the number of bits required to encode L, then lg L is the constant c. Your complexity in terms of bits is c*n = O(n), where c = lg L is a constant determined by the specific input size.
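To make that concrete, here is a small Python sketch (my own, with made-up helper names): for a list of 32-bit integers the encoded input is c*n bits with c = 32, and the worst case of sequential search is n comparisons, so the step count is linear in the bit size too.

# Sketch only: count input bits and worst-case comparisons for a list of
# fixed-width (32-bit) integers. Helper names are made up for illustration.
def input_size_in_bits(items, bits_per_item=32):
    return bits_per_item * len(items)      # s = c * n bits

def worst_case_comparisons(items):
    return len(items)                      # sequential search worst case: n

items = list(range(1000))
s = input_size_in_bits(items)              # 32000 bits
print(worst_case_comparisons(items), s // 32)   # both print 1000 (= n = s/c)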
"What I know is that the maximum number of steps done by Sequential Search is, obviously, cn^2 + n lg L. cn^2 being the number of steps to increment loops and do branching."
That's not true at all. The maximum number of steps done by a sequential search is going to be c*n, where n is the number of items in the list and c is some constant. That's the worst case. There is no n^2 component or logarithmic component.
For example, a simple sequential search would be:
for (int i = 0; i < NumItems; ++i)
{
    if (Items[i] == query)
        return i;
}
return -1;
With that algorithm, if you search for each item, then half of the searches will require fewer than NumItems/2 iterations and half of the searches will require NumItems/2 or more iterations. If an item you search for isn't in the list, it will require NumItems iterations to determine that. The worst case running time is NumItems iterations. The average case is NumItems/2 iterations.
The actual number of operations performed is some constant, C, multiplied by the number of iterations. On average it's C*NumItems/2.
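If it helps to see the counts, here is a small Python sketch (mine, not part of the answer above) that counts comparisons for a sequential search; the worst case is NumItems comparisons and the average over all successful searches comes out to about NumItems/2.

# Count comparisons performed by a sequential search (illustrative sketch).
def sequential_search(items, query):
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == query:
            return i, comparisons
    return -1, comparisons

items = list(range(100))                               # NumItems = 100
counts = [sequential_search(items, q)[1] for q in items]
print(max(counts))                                     # worst case: 100
print(sum(counts) / len(counts))                       # average: 50.5
print(sequential_search(items, -1)[1])                 # missing item: 100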
As Lucia Moura states: "Except for the unary encoding, all the other encodings for natural numbers have lengths that are polynomially related."
Here is the source. Take a look at page 19.
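A quick illustration of that point (my own sketch, not from the source): the binary and decimal encodings of a number have lengths within a constant factor of each other, while the unary encoding is exponentially longer than both.

# Encoding lengths for n = 1,000,000 (illustration only).
n = 1_000_000
binary_len = len(bin(n)) - 2     # about lg n      -> 20 bits
decimal_len = len(str(n))        # about log10 n   -> 7 digits
unary_len = n                    # unary needs n symbols
print(binary_len, decimal_len, unary_len)
# binary and decimal differ by a constant factor (log2(10) ~ 3.3),
# but unary length is exponential in either of them.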
Related
The instructor said that the complexity of an algorithm is typically measured with respect to its input size.
So, when we say an algorithm is linear, then even if you give it an input of size 2^n (say, 2^n being the number of nodes in a binary tree), the algorithm is still linear in the input size?
The above seems to be what the instructor means, but I'm having a hard time turning it over in my head. If you give it a 2^n input, which is exponential in some parameter 'n', but then call this input "x", then, sure, your algorithm is linear in x. But deep down, isn't it still exponential in 'n'? What's the point of saying it's linear in x?
Whenever you see the term "linear," you should ask - linear in what? Usually, when we talk about an algorithm's runtime being "linear time," we mean "the algorithm's runtime is O(n), where n is the size of the input."
You're asking what happens if, say, n = 2^k and we're passing in an exponentially-sized input into the function. In that case, since the runtime is O(n) and n = 2^k, the overall runtime would be O(2^k). There's no contradiction here between this statement and the fact that the algorithm runs in linear time, since "linear time" means "linear as a function of the size of the input."
Notice that I'm explicitly choosing to use a different variable k in the expression 2^k to call attention to the fact that there are indeed two different quantities here - the size of the input as a function of k (which is 2^k) and the variable n representing the size of the input to the function more generally. You sometimes see this combined, as in "if the runtime of the algorithm is O(n), then running the algorithm on an input of size 2^n takes time O(2^n)." That statement is true but a bit tricky to parse, since n is playing two different roles there.
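Here is a tiny Python sketch (mine, just to make the two quantities visible): summing a list takes one step per element, i.e. O(n) in the input size, and if the input happens to have n = 2^k elements, that same count is 2^k when written as a function of k.

# Linear in the input size n, exponential when expressed in terms of k = lg n.
def count_steps_to_sum(values):
    steps = 0
    total = 0
    for v in values:           # one step per element -> n steps
        total += v
        steps += 1
    return steps

k = 10
values = [1] * (2 ** k)        # input of size n = 2**k = 1024
print(count_steps_to_sum(values))   # 1024 steps: O(n), i.e. O(2**k) in terms of k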
If an algorithm has linear time complexity, then it is linear regardless of the size of the input, whether that size is fixed, quadratic, or exponential in some other parameter.
Obviously, running that algorithm on arrays of those different sizes will take different amounts of time, but the complexity is still O(n).
Perhaps this example will help you understand: does running merge sort on an array of size 16 mean merge sort is O(1), because it took a constant number of operations to sort that array? The answer is no.
When we say an algorithm is O(n), we mean that if the input size is n, it is linear with respect to the input size. Hence, if n is exponential in terms of another parameter k (for example n = 2^k), the algorithm is still linear with respect to the input size.
Another example is the time complexity of binary search on an input array of size n. We say that binary search on a sorted array of size n is in O(log(n)): with respect to the input size, it takes asymptotically at most log(n) comparisons to search for an item in an input array of size n.
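For comparison, here is a short Python binary search that counts its comparisons (a sketch of my own); on a sorted array of a million elements it uses about 20, i.e. roughly log2(n).

# Binary search with a comparison counter (illustrative sketch).
import math

def binary_search(sorted_items, query):
    lo, hi, comparisons = 0, len(sorted_items) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if sorted_items[mid] == query:
            return mid, comparisons
        elif sorted_items[mid] < query:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, comparisons

n = 1_000_000
items = list(range(n))
print(binary_search(items, n - 1)[1])     # 20 comparisons
print(math.ceil(math.log2(n)))            # 20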
Let's say you are printing the first n numbers, and printing each number takes 3 operations:
n-> 10, number of operations -> 3 x 10 = 30
n-> 100, number of operations -> 3 x 100 = 300
n-> 1000, number of operations -> 3 x 1000 = 3000
n -> 10000, and we can also write n = 100^2 (say k^2),
number of operations --> 3 x 10000 = 30,000
Even though n can be written as a power of something (in this case 100), the number of operations depends solely on the input value, n, which is 10,000.
So we can say it is a linear-time algorithm.
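The same arithmetic as a small sketch (mine, assuming 3 operations per printed number as above):

# Operation counts for printing the first n numbers, 3 operations each.
def operations_to_print(n, ops_per_number=3):
    return ops_per_number * n

for n in (10, 100, 1000, 10_000):
    print(n, operations_to_print(n))   # 30, 300, 3000, 30000 -> linear in n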
I am comparing two algorithms that determine whether a number is prime. I am looking at the upper bound for time complexity, but I can't understand the time complexity difference between the two, even though in practice one algorithm is faster than the other.
This pseudocode runs in exponential time, O(2^n):
Prime(n):
    for i in range(2, n-1)
        if n % i == 0
            return False
    return True
This pseudocode runs in half the time as the previous example, but I'm struggling to understand if the time complexity is still O(2^n) or not:
Prime(n):
    for i in range(2, (n/2+1))
        if n % i == 0
            return False
    return True
As a simple intuition of what big-O and big-Θ (big-Theta) are about: they describe how the number of operations you need to do changes when you significantly increase the size of the problem (for example, by a factor of 2).
Linear time complexity means that when you increase the size by a factor of 2, the number of steps you need to perform also increases by about 2 times. This is what is called Θ(n), and it is often written, interchangeably but not accurately, as O(n) (the difference between O and Θ is that O provides only an upper bound while Θ guarantees both upper and lower bounds).
Logarithmic time complexity (Θ(log N)) means that when you increase the size by a factor of 2, the number of steps you need to perform increases only by some fixed number of operations. For example, using binary search you can find a given element in a list twice as long using just one more loop iteration.
Similarly, exponential time complexity (Θ(a^N) for some constant a > 1) means that if you increase the size of the problem just by 1, you need a times more operations. (Note that there is a subtle difference between Θ(2^N) and 2^Θ(N); the second one is actually more generic. Both lie inside the exponential time class, but neither of the two covers it entirely; see the wiki for some more details.)
Note that those definitions depend significantly on how you define "the size of the task."
As @DavidEisenstat correctly pointed out, there are two possible contexts in which your algorithm can be seen:
Some fixed-width numbers (for example, 32-bit numbers). In such a context, an obvious measure of the complexity of the prime-testing algorithm is the value being tested itself. In that case your algorithm is linear.
In practice there are many contexts where a prime-testing algorithm should work for really big numbers. For example, many crypto-algorithms used today (such as Diffie–Hellman key exchange or RSA) rely on very big prime numbers: 512 bits, 1024 bits and so on. Also, in those contexts the security is measured in the number of those bits rather than in a particular prime value. So in such contexts a natural way to measure the size of the task is the number of bits. And now the question arises: how many operations do we need to perform to check a value of known size in bits using your algorithm? Obviously, if the value N has m bits, then N ≈ 2^m. So your algorithm goes from linear Θ(N) to exponential 2^Θ(m). In other words, to solve the problem for a value just 1 bit longer, you need to do about 2 times more work.
Exponential versus linear is a question of how the input is represented and the machine model. If the input is represented in unary (e.g., 7 is sent as 1111111) and the machine can do constant time division on numbers, then yes, the algorithm is linear time. A binary representation of n, however, uses about lg n bits, and the quantity n has an exponential relationship to lg n (n = 2^(lg n)).
Given that the number of loop iterations is within a constant factor for both solutions, they are in the same big O class, Theta(n). This is exponential if the input has lg n bits, and linear if it has n.
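To see the 2^Θ(m) behaviour concretely, here is a small Python sketch (my own) that counts the loop iterations of the trial-division test above for primes of increasing bit length m; each extra bit roughly doubles the count.

# Iteration count of the trial-division loop (range(2, n-1)) for prime inputs;
# the count roughly doubles with each extra bit of n.
def trial_division_iterations(n):
    iterations = 0
    for i in range(2, n - 1):
        iterations += 1
        if n % i == 0:
            return iterations          # composites exit early
    return iterations                  # primes: about n - 3 iterations

for p in (127, 251, 509, 1021, 2039):  # primes near 2**7 .. 2**11
    print(p.bit_length(), trial_division_iterations(p))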
I hope this will explain why they are in fact linear.
Suppose you call the function and count how many times each line is executed:
Prime(n):                      # 1 time
    for i in range(2, n-1)     # about n times (n - 3 iterations)
        if n % i == 0          # 1 time per iteration
            return False       # at most 1 time
    return True                # at most 1 time
# overall -> about n steps
Prime(n):                      # 1 time
    for i in range(2, (n/2+1)) # about n/2 times
        if n % i == 0          # 1 time per iteration
            return False       # at most 1 time
    return True                # at most 1 time
# overall -> about n/2 steps, which is still O(n)
This shows that Prime is linear (in the value of n).
If you are seeing O(n^2) behaviour, it is probably because of the code block where this function is called.
I am currently trying to learn time complexity of algorithms, big-O notation and so on, but one point confuses me a lot. I know that most of the time the input size of an array, or whatever we are dealing with, determines the running time of the algorithm. Let's say I have an unsorted array of size N and I want to find the maximum element of this array without using any special algorithm; I just want to iterate over the array and find the maximum element. Since the size of my array is N, this process runs in O(N), or linear time. Now let M be an integer that is the square root of N, so N can be written as the square of M, that is M*M or M^2. So I think there is nothing wrong if I want to replace N with M^2: I know that M^2 is also the size of my array, so my big-O notation could be written as O(M^2). So my new running time looks like it runs in quadratic time. Why does this happen?
You are correct, if it happens to be that you have some variable M such that M^2 ~= N is always true, you could say the algorithm runs in O(M^2).
But note that the algorithm now runs in time quadratic in M, not quadratic in the size of the input; it is still linear in the size of the input.
The important thing here is defining linear/quadratic, etc.
More precisely, you have to specify linear/quadratic, etc. with respect to something (N or M for your example). The most natural choice is to study the complexity with respect to the size of the input (N for your example).
Another trap with big integers is that the size of (the encoding of) n is log(n). So, for instance, if you loop over all smaller integers, your algorithm is not polynomial in the input size.
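A short Python sketch of the example in the question (mine): finding the maximum takes one step per element, so the count is N; calling that same quantity M^2 does not change the amount of work, only the name you give the parameter.

# Linear in the number of elements N; "quadratic in M" only because N = M**2.
def count_steps_to_find_max(values):
    steps = 0
    best = values[0]
    for v in values:       # one step per element -> N steps
        if v > best:
            best = v
        steps += 1
    return steps

M = 100
values = list(range(M * M))             # N = M**2 = 10000 elements
print(count_steps_to_find_max(values))  # 10000 = N = M**2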
What is pseudopolynomial time? How does it differ from polynomial time? Some algorithms that run in pseudopolynomial time have runtimes like O(nW) (for the 0/1 Knapsack Problem) or O(√n) (for trial division); why doesn't that count as polynomial time?
To understand the difference between polynomial time and pseudopolynomial time, we need to start off by formalizing what "polynomial time" means.
The common intuition for polynomial time is "time O(n^k) for some k." For example, selection sort runs in time O(n^2), which is polynomial time, while brute-force solving TSP takes time O(n · n!), which isn't polynomial time.
These runtimes all refer to some variable n that tracks the size of the input. For example, in selection sort, n refers to the number of elements in the array, while in TSP n refers to the number of nodes in the graph. In order to standardize the definition of what "n" actually means in this context, the formal definition of time complexity defines the "size" of a problem as follows:
The size of the input to a problem is the number of bits required to write out that input.
For example, if the input to a sorting algorithm is an array of 32-bit integers, then the size of the input would be 32n, where n is the number of entries in the array. In a graph with n nodes and m edges, the input might be specified as a list of all the nodes followed by a list of all the edges, which would require Ω(n + m) bits.
Given this definition, the formal definition of polynomial time is the following:
An algorithm runs in polynomial time if its runtime is O(x^k) for some constant k, where x denotes the number of bits of input given to the algorithm.
When working with algorithms that process graphs, lists, trees, etc., this definition more or less agrees with the conventional definition. For example, suppose you have a sorting algorithm that sorts arrays of 32-bit integers. If you use something like selection sort to do this, the runtime, as a function of the number of input elements in the array, will be O(n^2). But how does n, the number of elements in the input array, correspond to the number of bits of input? As mentioned earlier, the number of bits of input will be x = 32n. Therefore, if we express the runtime of the algorithm in terms of x rather than n, we get that the runtime is O(x^2), and so the algorithm runs in polynomial time.
Similarly, suppose that you do depth-first search on a graph, which takes time O(m + n), where m is the number of edges in the graph and n is the number of nodes. How does this relate to the number of bits of input given? Well, if we assume that the input is specified as an adjacency list (a list of all the nodes and edges), then as mentioned earlier the number of input bits will be x = Ω(m + n). Therefore, the runtime will be O(x), so the algorithm runs in polynomial time.
Things break down, however, when we start talking about algorithms that operate on numbers. Let's consider the problem of testing whether a number is prime or not. Given a number n, you can test if n is prime using the following algorithm:
function isPrime(n):
    for i from 2 to n - 1:
        if (n mod i) = 0, return false
    return true
So what's the time complexity of this code? Well, that inner loop runs O(n) times and each time does some amount of work to compute n mod i (as a really conservative upper bound, this can certainly be done in time O(n^3)). Therefore, this overall algorithm runs in time O(n^4) and possibly a lot faster.
In 2004, three computer scientists published a paper called PRIMES is in P giving a polynomial-time algorithm for testing whether a number is prime. It was considered a landmark result. So what's the big deal? Don't we already have a polynomial-time algorithm for this, namely the one above?
Unfortunately, we don't. Remember, the formal definition of time complexity talks about the complexity of the algorithm as a function of the number of bits of input. Our algorithm runs in time O(n^4), but what is that as a function of the number of input bits? Well, writing out the number n takes O(log n) bits. Therefore, if we let x be the number of bits required to write out the input n, the runtime of this algorithm is actually O(2^(4x)), which is not a polynomial in x.
This is the heart of the distinction between polynomial time and pseudopolynomial time. On the one hand, our algorithm is O(n^4), which looks like a polynomial, but on the other hand, under the formal definition of polynomial time, it's not polynomial-time.
To get an intuition for why the algorithm isn't a polynomial-time algorithm, think about the following. Suppose I want the algorithm to have to do a lot of work. If I write out an input like this:
10001010101011
then it will take some worst-case amount of time, say T, to complete. If I now add a single bit to the end of the number, like this:
100010101010111
The runtime will now (in the worst case) be 2T. I can double the amount of work the algorithm does just by adding one more bit!
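Here is a small Python sketch of that doubling effect (my own; it uses primes because a prime input is the worst case for the loop above): each extra bit of the input roughly doubles the number of loop iterations.

# Worst-case iteration count of the isPrime loop for inputs of growing bit length.
def is_prime_iterations(n):
    iterations = 0
    for i in range(2, n):       # the loop "for i from 2 to n - 1"
        iterations += 1
        if n % i == 0:
            return iterations   # composites exit early
    return iterations           # primes: about n - 2 iterations

for p in (8191, 16381, 32749, 65521):   # primes with 13, 14, 15, 16 bits
    print(p.bit_length(), is_prime_iterations(p))
# the count roughly doubles each time one bit is added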
An algorithm runs in pseudopolynomial time if the runtime is some polynomial in the numeric value of the input, rather than in the number of bits required to represent it. Our prime testing algorithm is a pseudopolynomial time algorithm, since it runs in time O(n^4), but it's not a polynomial-time algorithm because as a function of the number of bits x required to write out the input, the runtime is O(2^(4x)). The reason that the "PRIMES is in P" paper was so significant was that its runtime was (roughly) O(log^12 n), which as a function of the number of bits is O(x^12).
So why does this matter? Well, we have many pseudopolynomial time algorithms for factoring integers. However, these algorithms are, technically speaking, exponential-time algorithms. This is very useful for cryptography: if you want to use RSA encryption, you need to be able to trust that we can't factor numbers easily. By increasing the number of bits in the numbers to a huge value (say, 1024 bits), you can make the amount of time that the pseudopolynomial-time factoring algorithm must take get so large that it would be completely and utterly infeasible to factor the numbers. If, on the other hand, we can find a polynomial-time factoring algorithm, this isn't necessarily the case. Adding in more bits may cause the work to grow by a lot, but the growth will only be polynomial growth, not exponential growth.
That said, in many cases pseudopolynomial time algorithms are perfectly fine because the size of the numbers won't be too large. For example, counting sort has runtime O(n + U), where U is the largest number in the array. This is pseudopolynomial time (because the numeric value of U requires O(log U) bits to write out, so the runtime is exponential in the input size). If we artificially constrain U so that U isn't too large (say, if we let U be 2), then the runtime is O(n), which actually is polynomial time. This is how radix sort works: by processing the numbers one bit at a time, the runtime of each round is O(n), so the overall runtime is O(n log U). This actually is polynomial time, because writing out n numbers to sort uses Ω(n) bits and the value of log U is directly proportional to the number of bits required to write out the maximum value in the array.
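For concreteness, here is a minimal counting sort sketch in Python (my own, assuming non-negative integer keys). Its running time is O(n + U), which is pseudopolynomial because U can be exponential in the number of bits used to write it.

# Minimal counting sort for non-negative integers: O(n + U) time,
# where U is the largest value in the array.
def counting_sort(values):
    if not values:
        return []
    U = max(values)
    counts = [0] * (U + 1)                    # O(U) time and space
    for v in values:                          # O(n)
        counts[v] += 1
    result = []
    for value, count in enumerate(counts):    # O(U)
        result.extend([value] * count)
    return result

print(counting_sort([3, 1, 4, 1, 5, 9, 2, 6]))   # [1, 1, 2, 3, 4, 5, 6, 9]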
Pseudo-polynomial time complexity means polynomial in the value/magnitude of input but exponential in the size of input.
By size we mean the number of bits required to write the input.
From the pseudo-code of knapsack, we can find the time complexity to be O(nW).
// Input:
// Values (stored in array v)
// Weights (stored in array w)
// Number of distinct items (n)
// Knapsack capacity (W)

for j from 0 to W do
    m[0, j] := 0
end for

for i from 1 to n do
    for j from 0 to W do
        if j >= w[i] then
            m[i, j] := max(m[i-1, j], m[i-1, j-w[i]] + v[i])
        else
            m[i, j] := m[i-1, j]
        end if
    end for
end for
Here, W is not polynomial in the length of the input though, which is what makes it pseudo-polynomial.
Let s be the number of bits required to represent W,
i.e. size of input = s = log(W)   (log = log base 2)
=> 2^s = 2^(log(W))
=> 2^s = W   (because 2^(log(x)) = x)
Now, running time of knapsack = O(nW) = O(n * 2^s),
which is not polynomial.
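Here is a runnable Python translation of that pseudocode (my own sketch, using 0-based indexing). The DP table has (n + 1) * (W + 1) cells, which makes the Θ(nW) = Θ(n * 2^s) work explicit.

# 0/1 knapsack DP: Theta(nW) table cells, i.e. Theta(n * 2^s) for an s-bit W.
def knapsack(values, weights, W):
    n = len(values)
    m = [[0] * (W + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(W + 1):
            if j >= weights[i - 1]:
                m[i][j] = max(m[i - 1][j],
                              m[i - 1][j - weights[i - 1]] + values[i - 1])
            else:
                m[i][j] = m[i - 1][j]
    return m[n][W]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], W=50))   # 220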
Let's say I have a matrix that has X rows and Y columns. The total number of elements is X*Y, correct? So does that make n=X*Y?
for (i = 0; i < X; i++)
{
    for (j = 0; j < Y; j++)
    {
        print(matrix[i][j]);
    }
}
Then wouldn't that mean that this nested for loop is O(n)? Or am I misunderstanding how time complexities work?
Generally, I thought all nested for loops were O(n^2), but if it goes through X*Y calls to print(), doesn't that mean that the time complexity is O(X*Y) and X*Y is equal to n?
If you have a matrix of size rows*columns, then the inner loop (let's say) is O(columns), and the nested loops together are O(rows*columns).
You are confusing a problem size of N for a problem size of N^2. You can either say your matrix is size N or your matrix is size N^2, though unless your matrix is square you should say that you have a matrix of size Rows*Columns.
You are right when you say n = X x Y, but be careful about what the O(n^2) intuition refers to. If you dry-run the code, you will notice that for each iteration of the outer loop the inner loop runs Y times, so the total number of iterations is X x Y. When the matrix is square and each loop runs n times, that is O(n^2); but expressed in terms of n = X x Y it is O(n), exactly as if you had a single loop iterating over all X x Y elements (e.g. for(i = 0; i < (X*Y); i++)), because you never restart your iteration at any point.
Hope this makes sense.
This answer was written hastily and received a few downvotes, so I decided to clarify and rewrite it
Time complexity of an algorithm is an expression of the number of operations of the algorithm in terms of the size of the problem the algorithm is intended to solve.
There are two sizes involved here.
The first size is the number of elements of the matrix, X × Y. This corresponds to what is known in complexity theory as the size of the input. Let k = X × Y denote the number of elements in the matrix. Since the number of operations in the twin loop is X × Y, it is in O(k).
The second size is the number of columns and rows of the matrix. Let m = max(X,Y). The number of operations in the twin loop is in O(m^2). Usually in Linear Algebra this kind of size is used to characterize the complexity of matrix operations on m × m matrices.
When you talk about complexity you have to specify precisely how you encode an instance problem and what parameter you use to specify its size. In Complexity Theory we usually assume that the input to an algorithm is given as a string of characters coming from some finite alphabet and measure the complexity of an algorithm in terms of an upper bound on the number of operations on an instance of a problem given by a string of length n. That is in Complexity Theory n is usually the size of input.
In practical Complexity Analysis of algorithms we often use other measures of the size of an instance that are more meaningful in specific context. For instance if A is a connectivity matrix of a graph, we may use the number of vertices V as a measure of complexity of an instance of a problem, or if A is a matrix of a linear operator acting on a vector space, we may use the dimension of a vector space as such a measure. For square matrices the convention is to specify the complexity in terms of the dimension of the matrix, that is to measure the complexity of algorithms acting upon n × n matrices in terms of n. It often makes practical sense and also agrees with the conventions of a specific application field even if it may contradict the conventions of Complexity Theory.
Let us give the name Matrix Scan to our twin loop. You may legitimately say that the size of an instance of Matrix Scan is the length of a string encoding of the matrix; assuming bounded size of the entries, that is proportional to the number of elements in the matrix, k. Then we can say the complexity of Matrix Scan is in O(k). On the other hand, if we take m = max(X, Y) as the parameter that characterizes the complexity of an instance, as is customary in many applications, then the complexity of Matrix Scan for an X×Y matrix is also in O(m^2). For a square matrix X = Y = m and O(k) = O(m^2).
Notice: Some people in the comments asked whether we can always find an encoding of the problem to reduce any polynomial problem to a linear problem. This is not true. For some algorithms the number of operations grows faster than the length of the string encoding of their input. For instance, there is no algorithm to multiply two m×m matrices with Θ(m^2) operations. Here the size of the input grows as m^2, yet Ran Raz proved that the number of operations grows at least as fast as m^2 log m. If n is in O(m^2) then m^2 log m is in O(n log n), and the complexity of the best known algorithms grows as O(m^(2+c)) = O(n^(1+c/2)), where c is at least 0.372 for versions of the Coppersmith-Winograd algorithm and c = 1 for the common iterative algorithm.
Generally, I thought all nested for loops were O(n^2),
You are wrong about that. What confuses you, I guess, is that people often use a square matrix (X == Y) as an example, so the complexity is n*n (X == n, Y == n).
If you want to practice your O(·) skills, try to figure out why matrix multiplication is O(n^3). If you don't know the algorithm for matrix multiplication, it is easy to find online.
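For reference, a short Python sketch of the common iterative algorithm (mine): three nested loops of n iterations each, which is where the O(n^3) comes from.

# Naive matrix multiplication of two n x n matrices: O(n^3) operations.
def matmul(A, B):
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(matmul(A, B))   # [[19, 22], [43, 50]]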