I am comparing two algorithms that determine whether a number is prime. I am looking at the upper bound for time complexity, but I can't understand the time complexity difference between the two, even though in practice one algorithm is faster than the other.
This pseudocode runs in exponential time, O(2^n):
def Prime(n):
    for i in range(2, n - 1):
        if n % i == 0:
            return False
    return True
This pseudocode runs in half the time of the previous example, but I'm struggling to understand whether the time complexity is still O(2^n) or not:
def Prime(n):
    for i in range(2, n // 2 + 1):
        if n % i == 0:
            return False
    return True
As a simple intuition for what big-O and big-Θ (big-Theta) are about: they describe how the number of operations you need to perform changes when you significantly increase the size of the problem (for example, by a factor of 2).
Linear time complexity means that when you increase the size by a factor of 2, the number of steps you need to perform also increases by about a factor of 2. This is what is called Θ(n), and it is often written, interchangeably but not accurately, as O(n) (the difference between O and Θ is that O provides only an upper bound, while Θ guarantees both upper and lower bounds).
Logarithmic time complexity (Θ(log N)) means that when you increase the size by a factor of 2, the number of steps you need to perform increases only by some fixed amount. For example, using binary search you can find a given element in a list twice as long using just one more loop iteration.
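For example, here is a rough Python sketch (the helper name is just for illustration) that counts binary-search iterations; doubling the list adds only about one iteration:

def binary_search_steps(sorted_list, target):
    # Returns (index or -1, number of loop iterations).
    lo, hi, steps = 0, len(sorted_list) - 1, 0
    while lo <= hi:
        steps += 1
        mid = (lo + hi) // 2
        if sorted_list[mid] == target:
            return mid, steps
        elif sorted_list[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, steps

# Worst case (element missing): doubling the size adds roughly one iteration.
for n in (1000, 2000, 4000):
    _, steps = binary_search_steps(list(range(n)), -1)
    print(n, steps)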
Similarly, exponential time complexity (Θ(a^N) for some constant a > 1) means that if you increase the size of the problem by just 1, you need a times more operations. (Note that there is a subtle difference between Θ(2^N) and 2^Θ(N); the second one is actually more general. Both lie inside exponential time, but neither of the two covers it entirely; see the wiki for more details.)
Note that these definitions depend significantly on how you define "the size of the task".
As #DavidEisenstat correctly pointed out, there are two possible contexts in which your algorithm can be seen:
Some fixed-width numbers (for example, 32-bit numbers). In such a context an obvious measure of the complexity of the prime-testing algorithm is the value being tested itself. In that case your algorithm is linear.
In practice there are many contexts where a prime-testing algorithm should work for really big numbers. For example, many crypto-algorithms used today (such as Diffie–Hellman key exchange or RSA) rely on very big prime numbers: 512 bits, 1024 bits, and so on. Also, in those contexts the security is measured in the number of bits rather than in the particular prime value. So in such contexts a natural way to measure the size of the task is the number of bits. And now the question arises: how many operations do we need to perform to check a value of known size in bits using your algorithm? Obviously, if the value N has m bits, then N ≈ 2^m. So your algorithm goes from linear Θ(N) to exponential 2^Θ(m). In other words, to solve the problem for a value just 1 bit longer, you need to do about 2 times more work.
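To see that doubling concretely, here is a rough Python sketch (the primes are chosen just for illustration) that counts the trial divisions your algorithm performs on primes of growing bit length; each extra bit roughly doubles the work:

def trial_division_steps(n):
    # Counts how many candidate divisors the naive primality test tries.
    steps = 0
    for i in range(2, n - 1):
        steps += 1
        if n % i == 0:
            return steps
    return steps

# Primes of 13, 14 and 15 bits:
for p in (8191, 16381, 32749):
    print(p.bit_length(), "bits ->", trial_division_steps(p), "divisions")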
Exponential versus linear is a question of how the input is represented and the machine model. If the input is represented in unary (e.g., 7 is sent as 1111111) and the machine can do constant time division on numbers, then yes, the algorithm is linear time. A binary representation of n, however, uses about lg n bits, and the quantity n has an exponential relationship to lg n (n = 2^(lg n)).
Given that the number of loop iterations is within a constant factor for both solutions, they are in the same complexity class, Θ(n). This is exponential if the input has lg n bits, and linear if it has n.
I hope this will explain why they are in fact both linear.
Suppose you call the function and count how many times each line is executed:
def Prime(n):                      # called once
    for i in range(2, n - 1):      # runs up to n - 3 times
        if n % i == 0:             # once per iteration
            return False           # at most once
    return True                    # at most once
# overall -> about n iterations
def Prime(n):                      # called once
    for i in range(2, n // 2 + 1): # runs up to n/2 - 1 times
        if n % i == 0:             # once per iteration
            return False           # at most once
    return True                    # at most once
# overall -> about n/2 iterations, which is still on the order of n
This shows that Prime is a linear-time function.
If you are seeing O(n^2) somewhere, it probably comes from the code block in which this function is called.
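If you want to check this empirically, here is a rough Python sketch (instrumentation added just for illustration) that counts the loop iterations of both versions; for prime inputs the second does about half the work, but both grow linearly with n:

def prime_full(n):
    # Version 1: try every divisor up to n - 2; also return the loop count.
    count = 0
    for i in range(2, n - 1):
        count += 1
        if n % i == 0:
            return False, count
    return True, count

def prime_half(n):
    # Version 2: try divisors only up to n // 2; also return the loop count.
    count = 0
    for i in range(2, n // 2 + 1):
        count += 1
        if n % i == 0:
            return False, count
    return True, count

for p in (1009, 2003, 4001):   # prime inputs, so both loops run to completion
    print(p, prime_full(p)[1], prime_half(p)[1])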
Related
Why does each statement in the following code refer to a big-O constant (here I use 1 by convention)?
I mean if the array size gets bigger the time complexity may get larger right? Also the number in total will get larger and larger, won't it affect the complexity?
Pseudocode:
def find_sum(given_array):
    total = 0                    # refers to O(1)
    for each i in given array:   # O(1)
        total += i               # O(1)
    return total                 # O(1)
TL;DR: Because Big O notation is used to characterize an algorithm in terms of how its running time behaves as its input grows.
I mean if the array size gets bigger the time complexity may get
larger right? Also the number in total will get larger and larger,
won't it affect the complexity?
You are confusing the time taken by the algorithm with its time complexity.
Let us start by clarifying what Big O notation means in the current context. From (source) one can read:
Big O notation is a mathematical notation that describes the limiting
behavior of a function when the argument tends towards a particular
value or infinity. (..) In computer science, big O notation is used to classify algorithms
according to how their run time or space requirements grow as the
input size grows.
Informally, in computer-science time-complexity and space-complexity theories, one can think of the Big O notation as a categorization of algorithms with a certain worst-case scenario concerning time and space, respectively. For instance, O(n):
An algorithm is said to take linear time/space, or O(n) time/space, if its time/space complexity is O(n). Informally, this means that the running time/space increases at most linearly with the size of the input (source).
So for this code:
def find_sum(given_array):
    total = 0
    for each i in given array:
        total += i
    return total
the complexity is O(n), because as the input grows the running time grows linearly rather than staying constant. More accurately, it is Θ(n).
IMO it is not very accurate to work out the complexity like this:
def find_sum(given_array):
    total = 0                    # refers to O(1)
    for each i in given array:   # O(1)
        total += i               # O(1)
    return total                 # O(1)
Since the Big O notation represents a set of functions with a certain asymptotic upper bound, as one can read from the source:
Big O notation characterizes functions according to their growth
rates: different functions with the same growth rate may be
represented using the same O notation.
More accurate would be:
def find_sum(given_array):
    total = 0                    # takes c1 time
    for each i in given array:
        total += i               # takes c2 time
    return total                 # takes c3 time
So the total time would be c1 + n * c2 + c3, which is linear in n. And since both the lower and upper bounds of this function are linear, we can use Θ(n) instead of O(n).
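If it helps, here is a rough Python sketch (the operation counter is added purely for illustration) that makes the c1 + n * c2 + c3 bookkeeping explicit:

def find_sum_counted(given_array):
    # Sums the array and also counts the unit-cost operations performed.
    ops = 1                 # total = 0      -> c1
    total = 0
    for i in given_array:
        ops += 1            # total += i     -> c2, once per element
        total += i
    ops += 1                # return total   -> c3
    return total, ops

for n in (10, 100, 1000):
    _, ops = find_sum_counted(list(range(n)))
    print(n, ops)           # ops == n + 2: linear in n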
Why does the following code for each statement refer to big O constant (here I use 1 for the convention)?
Not sure, ask the person who wrote it. It seems clear the overall runtime is not O(1), so if that's the conclusion they arrive at, they are wrong. If they didn't mean to say that, what they wrote is either wrong or confusing.
I mean if the array size gets bigger the time complexity may get larger right?
Yes, it might. Indeed, here, it will, since you are at least iterating over the elements in the array. More elements in the array, more iterations of the loop. Straightforward.
Also the number in total will get larger and larger, won't it affect the complexity?
This is an interesting insight and the answer depends on how you conceive of numbers being represented. If you have fixed-length numeric representations (32-bit unsigned ints, double-precision floats, etc.) then addition is a constant-time operation. If you have variable-length representations (like a big integer library, or doing the algorithm by hand) then the complexity of adding would depend on the addition method used but would necessarily increase with number size (for regular add-with-carry, an upper logarithmic bound would be possible). Indeed, with variable-length representations, your complexity should at least include some parameter related to the size (perhaps max or average) of numbers in the array; otherwise, the runtime might be dominated by adding the numbers rather than looping (e.g., an array of two 1000^1000 bit integers would spend almost all time adding rather than looping).
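As a rough illustration of the variable-length case, here is a Python sketch (the bit sizes are arbitrary) that times additions of ever larger integers; the per-addition cost grows with the operand size instead of staying constant:

import timeit

for bits in (1_000, 100_000, 10_000_000):
    a = (1 << bits) - 1     # an integer with `bits` one-bits
    b = (1 << bits) - 3
    t = timeit.timeit(lambda: a + b, number=1000)
    print(bits, "bits:", round(t, 4), "seconds for 1000 additions")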
No answer so far addresses the second question:
Also the number in total will get larger and larger, won't it affect the complexity?
which is very important and usually not accounted for.
The answer is: it depends on your computational model. If the underlying machine can add arbitrarily large numbers in constant time, then no, it does not affect the time complexity.
A realistic machine, however, operates on values of fixed width. Modern computers happily add 64-bit quantities. Some may only add 16-bit-wide values at a time. A Turing machine - the basis of the whole complexity theory - works with 1 bit at a time. In any case, once our numbers outgrow the register width, we must account for the fact that addition takes time proportional to the number of bits in the addends, which in this case is log(i) (or log(total), but since total grows as i*(i-1)/2, its bit width is approximately log(i*i) = 2 log(i)).
With this in mind, annotating
total += i   # O(log i)
is more prudent. Now the complexity of the loop
for each i in given array:
    total += i   # O(log(i))
is sum[i=1..n] log(i) = log(n!) ~ n log(n). The last step follows from Stirling's approximation of the factorial.
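You can sanity-check the Stirling estimate numerically with a rough Python sketch:

import math

# sum_{i=1..n} log2(i) = log2(n!), which Stirling says is ~ n*log2(n)
for n in (10, 1000, 100000):
    exact = sum(math.log2(i) for i in range(1, n + 1))
    stirling = n * math.log2(n)
    print(n, round(exact), round(stirling), round(exact / stirling, 3))   # ratio approaches 1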
There is no way that the loop:
for each i in given array:
    total += i
will run in O(1) time. Even if the input size n is 1, asymptotic analysis will still indicate that it runs in O(n), not in O(1).
Asymptotic Analysis measures the time/space complexity in relation to the input size, and it does not necessarily show the exact number of operations performed.
The fact that O(1) is constant does not mean that it is just one (1) operation; rather, it means that the cost of that particular block does not change when the input changes, and therefore, having no correlation with the input, it has constant complexity.
O(n), on the other hand, indicates that the complexity depends on n and changes as the input n changes. O(n) is a linear relation, where input size and runtime grow in direct proportion.
Correctly written comments would look like this:
def find_sum(given_array):
    total = 0                    # this is O(1), it doesn't depend on the input
    for each i in given array:   # this is O(n), the loop runs longer as the input gets longer
        total += i               # again, O(1)
    return total                 # again, O(1)
The instructor said that the complexity of an algorithm is typically measured with respect to its input size.
So, when we say an algorithm is linear, then even if you give it an input of size 2^n (say 2^n being the number of nodes in a binary tree), the algorithm is still linear in the input size?
The above seems to be what the instructor means, but I'm having a hard time turning it over in my head. If you give it an input of size 2^n, which is exponential in some parameter 'n', but then call this input "x", then, sure, your algorithm is linear in x. But deep down, isn't it still exponential in 'n'? What's the point of saying it's linear in x?
Whenever you see the term "linear," you should ask - linear in what? Usually, when we talk about an algorithm's runtime being "linear time," we mean "the algorithm's runtime is O(n), where n is the size of the input."
You're asking what happens if, say, n = 2^k and we're passing an exponentially-sized input into the function. In that case, since the runtime is O(n) and n = 2^k, the overall runtime would be O(2^k). There's no contradiction here between this statement and the fact that the algorithm runs in linear time, since "linear time" means "linear as a function of the size of the input."
Notice that I'm explicitly choosing to use a different variable k in the expression 2^k to call attention to the fact that there are indeed two different quantities here - the size of the input as a function of k (which is 2^k) and the variable n representing the size of the input to the function more generally. You sometimes see this combined, as in "if the runtime of the algorithm is O(n), then running the algorithm on an input of size 2^n takes time O(2^n)." That statement is true but a bit tricky to parse, since n is playing two different roles there.
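A tiny Python illustration of the two quantities (the function name is just for illustration): the work is linear in the length of the input, even when that length is 2^k:

def count_items(items):
    # Linear in len(items): one unit of work per element.
    work = 0
    for _ in items:
        work += 1
    return work

for k in (3, 4, 5):
    n = 2 ** k                       # input size, exponential in k
    work = count_items([0] * n)
    print("k =", k, "n = 2^k =", n, "work =", work)   # work == n: linear in n, exponential in k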
If an algorithm has linear time complexity, then it is linear regardless of the size of the input, whether that input is of fixed size, quadratic, or exponential.
Obviously, running that algorithm on arrays of fixed, quadratic, or exponential size will take different amounts of time, but still, the complexity is O(n).
Perhaps this example will help you understand: does running merge sort on an array of size 16 mean merge sort is O(1), because it took a constant number of operations to sort that array? The answer is no.
When we say an algorithm is O(n), we mean that if the input size is n, it is linear with respect to the input size. Hence, if n is exponential in terms of another parameter k (for example n = 2^k), the algorithm is still linear with respect to the input size.
Another example is the time complexity of binary search for an input array of size n. We say that binary search on a sorted array of size n is in O(log(n)). This means that, with respect to the input size, it takes asymptotically at most log(n) comparisons to search for an item inside an input array of size n.
Let's say you are printing the first n numbers, and printing each number takes 3 operations:
n -> 10, number of operations -> 3 x 10 = 30
n -> 100, number of operations -> 3 x 100 = 300
n -> 1000, number of operations -> 3 x 1000 = 3000
n -> 10000 (we can also say n = 100^2, i.e. k^2), number of operations -> 3 x 10000 = 30,000
Even though n is a power of something (in this case 100), the number of operations depends solely on the input n (which here is 10,000). So we can say it is a linear-time algorithm.
Say we have an algorithm that needs to list all possibilities of choosing k elements from n elements (k <= n); is the time complexity of that algorithm exponential, and why?
No.
There are n choose k = n!/(k!(n-k)!) possibilities [1].
Consider that n choose k <= n^k / k! [2].
Assuming you keep k constant, as n grows, the number of possibilities grows polynomially.
For this example, ignore the (1/(k!)) term because it is constant. If k = 2, and you increase n from 2 to 3, then you have a 2^2 to 3^2 change. An exponential change would be from 2^2 to 2^3. This is not the same.
Keeping k constant and changing n results in a big O of O(n^k) (the 1/(k!) term is constant and you ignore it).
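You can check the polynomial growth for fixed k numerically with a rough Python sketch (using math.comb):

import math

k = 2
for n in (10, 20, 40, 80):
    # Doubling n roughly quadruples n choose 2 -- polynomial (n^2) growth, not exponential.
    print(n, math.comb(n, k))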
Thinking carefully about the size of the input instance is required since the input instance contains numbers - a basic familiarity with weak NP-hardness can also be helpful.
Assume that we fix k=1 and encode n in binary. Since the algorithm must visit n choose 1 = n numbers, it takes at least n steps. Since the magnitude of the number n may be exponential in the size of the input (the number of bits used to encode n), the algorithm in the worst case consumes exponential time.
You can get a feel for this exponential-time behavior by writing a simple C program that prints all the numbers from 1 to n with n = 2^64 and seeing how far you get in a minute. While the input is only 64 bits long, it would take you roughly 600,000 years to print all the numbers, assuming that your device can print a million numbers per second.
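For reference, the back-of-the-envelope arithmetic behind that estimate as a small Python sketch:

seconds_per_year = 60 * 60 * 24 * 365
rate = 10 ** 6                             # a million numbers printed per second, as assumed above
numbers = 2 ** 64
print(numbers / rate / seconds_per_year)   # roughly 585,000 years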
An algorithm that finds all possibilities of choosing k elements from n unique elements (k <= n) does NOT have an exponential time complexity, O(k^n), because it instead has a factorial time complexity, O(n!). The relevant formula is:
p = n!/(k!(n-k)!)
In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recursive function:
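log* n = 0                   if n <= 1
log* n = 1 + log*(log n)     if n > 1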
Is there any algorithm with time complexity O(lg* n)?
If you implement the union-find algorithm with path compression and union by rank, both union and find will have amortized complexity O(log*(n)).
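For reference, here is a rough Python sketch of such a structure (class and method names are just for illustration):

class DisjointSet:
    # Union-find with union by rank and path compression.
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path compression: point x directly at the root of its set.
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        # Union by rank: attach the shallower tree under the deeper one.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(1, 2)
print(ds.find(0) == ds.find(2))   # True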
It's rare but not unheard of to see log* n appear in the runtime analysis of algorithms. Here are a couple of cases that tend to cause log* n to appear.
Approach 1: Shrinking By a Log Factor
Many divide-and-conquer algorithms work by converting an input of size n into an input of size n / k. The number of phases of these algorithms is then O(log n), since you can only divide by a constant O(log n) times before you shrink your input down to a constant size. In that sense, when you see "the input is divided by a constant," you should think "so it can only be divided O(log n) times before we run out of things to divide."
In rarer cases, some algorithms work by shrinking the size of the input down by a logarithmic factor. For example, one data structure for the range semigroup query problem works by breaking a larger problem down into blocks of size log n, then recursively subdividing each block of size log n into blocks of size log log n, etc. This process eventually stops once the blocks hit some small constant size, which means that it stops after O(log* n) iterations. (This particular approach can then be improved to give a data structure in which the blocks have size log* n, for an overall number of rounds of O(log** n), eventually converging to an optimal structure with runtime O(α(n)), where α(n) is the inverse Ackermann function.)
Approach 2: Compressing Digits of Numbers
The above section talks about approaches that explicitly break a larger problem down into smaller pieces whose sizes are logarithmic in the size of the original problem. However, there's another way to take an input of size n and reduce it to an input of size O(log n): replace the input with something roughly comparable in size to its number of digits. Since writing out the number n requires only O(log n) digits, this shrinks the size of the number by the amount needed for an O(log* n) term to arise.
As a simple example of this, consider an algorithm to compute the digital root of a number. This is the number you get by repeatedly adding the digits of a number up until you're down to a single digit. For example, the digital root of 78979871 can be found by computing
7 + 8 + 9 + 7 + 9 + 8 + 7 + 1 = 56
5 + 6 = 11
1 + 1 = 2
2
and getting a digital root of two. Each time we sum the digits of the number, we replace the number n with a number that's at most 9 ⌈log10 n⌉, so the number of rounds is O(log* n). (That being said, the total runtime is O(log n), since we have to factor in the work associated with adding up the digits of the number, and adding the digits of the original number dominates the runtime.)
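A rough Python sketch of that procedure (the function name is just for illustration), which also counts the digit-summing rounds:

def digital_root(n):
    # Repeatedly sum the decimal digits of n; return (root, number of rounds).
    rounds = 0
    while n >= 10:
        n = sum(int(d) for d in str(n))
        rounds += 1
    return n, rounds

print(digital_root(78979871))   # (2, 3): 56 -> 11 -> 2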
For a more elaborate example, there is a parallel algorithm for 3-coloring the nodes of a tree described in the paper "Parallel Symmetry-Breaking in Sparse Graphs" by Goldberg et al. The algorithm works by repeatedly replacing numbers with simpler numbers formed by summing up certain bits of the numbers, and the number of rounds needed, like the approach mentioned above, is O(log* n).
Hope this helps!
When talking about time complexity we usually use n as input, which is not a precise measure of the actual input size. I am having trouble showing that, when using specific size for input (s) an algorithm remains in the same complexity class.
For instance, take a simple Sequential Search algorithm. In its worst case it takes W(n) time. If we apply specific input size (in base 2), the order should be W(lg L), where L is the largest integer.
How do I show that Sequential Search, or any algorithm, remains the same complexity class, in this case linear time? I understand that there is some sort of substitution that needs to take place, but I am shaky on how to come to the conclusion.
EDIT
I think I may have found what I was looking for, but I'm not entirely sure.
If you define worst case time complexity as W(s), the maximum number of steps done by an algorithm for an input size of s, then by definition of input size, s = lg n, where n is the input. Then, n = 2^s, leading to the conclusion that the time complexity is W(2^s), an exponential complexity. Therefore, the algorithm's performance with binary encoding is exponential, not linear as it is in terms of magnitude.
When talking about time complexity we usually use n as input, which is not a precise measure of the actual input size. I am having trouble showing that, when using specific size for input (s) an algorithm remains in the same complexity class.
For instance, take a simple Sequential Search algorithm. In its worst case it takes W(n) time. If we apply specific input size (in base 2), the order should be W(lg L), where L is the largest integer.
L is a variable that represents the largest integer.
n is a variable that represents the size of the input.
L is not a specific value anymore than n is.
When you apply a specific value, you aren't talking about a complexity class anymore, you are talking about an instance of that class.
Let's say you are searching a list of 500 integers. In other words, n = 500
The worst-case complexity class of Sequential Search is O(n)
The complexity is n
The specific instance of worst-case complexity is 500
Edit:
Your values will be uniform in the number of bits required to encode each value. If the input is a list of 32-bit integers, then c = 32, the number of bits per integer. The complexity would be 32*n => O(n).
In terms of L, if L is the largest value and lg L is the number of bits required to encode L, then lg L is the constant c. Your complexity in terms of bits is c*n = O(n), where c = lg L is a constant determined by the specific input size.
What I know is that the maximum number
of steps done by Sequential Search is,
obviously, cn^2 + nlg L. cn^2 being
the number of steps to increment loops
and do branching.
That's not true at all. The maximum number of steps done by a sequential search is going to be c*n, where n is the number of items in the list and c is some constant. That's the worst case. There is no n^2 component or logarithmic component.
For example, a simple sequential search would be:
for (int i = 0; i < NumItems; ++i)
{
    if (Items[i] == query)
        return i;
}
return -1;
With that algorithm, if you search for each item, then half of the searches will require fewer than NumItems/2 iterations and half of the searches will require NumItems/2 or more iterations. If an item you search for isn't in the list, it will require NumItems iterations to determine that. The worst case running time is NumItems iterations. The average case is NumItems/2 iterations.
The actual number of operations performed is some constant, C, multiplied by the number of iterations. On average it's C*NumItems/2.
As Lucia Moura states: "Except for the unary encoding, all the other encodings for natural numbers have lengths that are polynomially related."
Here is the source. Take a look at page 19.