asymptotic complexity - algorithm

Suppose a computer executes one instruction per microsecond, and an algorithm is known to have a complexity of O(2^n). If a maximum of 12 hours of computer time is given to this algorithm, determine the largest possible value of n for which the algorithm can be used.

No can do.
O(2^n) means that there exists a constant C such that, for all sufficiently large n, f(n) <= C*2^n.
But this C could just as well be a number like 23945290378569237845692378456923847569283475635463463456, so even 12 hours cannot ensure that the algorithm will finish, even on small inputs.

Insufficient information. An algorithm that is O(2^n) doesn't necessarily take exactly 2^n steps for input of size n; it could take a constant factor of that. In fact, it could take C*2^n + B operations, where C and B are constants (that is, they don't depend on n), both are integers, C >= 1, and B >= 0.

Well, since O(2^n) is an exponential complexity and you're asked for the "largest possible exponent", you're trying to find an N such that 2^N is less than or equal to 12 hours (× 3600 for seconds, × 1,000,000 for microseconds). From there, you can either use logarithms to find the right value or estimate an initial N and iterate until you find it.
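For a concrete sketch, here is that calculation in Python, under the idealized assumption that the algorithm takes exactly 2^n instructions (i.e., the constant factor is 1 and there is no overhead):

import math

# Budget: 12 hours at one instruction per microsecond.
budget = 12 * 3600 * 1_000_000   # total instructions available

# Largest n with 2^n <= budget.
n = math.floor(math.log2(budget))
print(n)                      # 35
print(2**n <= budget)         # True:  2^35 ~ 3.4e10 <= 4.32e10
print(2**(n + 1) <= budget)   # False: 2^36 ~ 6.9e10 >  4.32e10

Under those assumptions the answer comes out to n = 35.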

Related

Is an algorithm that involves enumerating n choose k exponential?

Say we have an algorithm that needs to list all possibilities of choosing k elements from n elements (k <= n). Is the time complexity of this particular algorithm exponential, and why?
No.
There are n choose k = n!/(k!(n-k)!) possibilities [1].
Consider that n choose k <= n^k / k! [2].
Assuming you keep k constant, as n grows, the number of possibilities grows polynomially.
For this example, ignore the 1/k! term because it is constant. If k = 2 and you increase n from 2 to 3, you go from 2^2 to 3^2. An exponential change would be from 2^2 to 2^3. These are not the same.
Keeping k constant and varying n results in a big O of O(n^k) (the 1/k! term is constant, so you ignore it). See the sketch below.
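A quick way to see the polynomial growth for fixed k is to count the combinations directly; a minimal Python sketch:

from itertools import combinations

# For fixed k, the number of combinations is n!/(k!(n-k)!) = Θ(n^k).
k = 2
for n in (10, 20, 40):
    count = sum(1 for _ in combinations(range(n), k))
    print(n, count, n * (n - 1) // 2)   # count equals n(n-1)/2 for k = 2

Doubling n roughly quadruples the count, which is exactly what Θ(n^2) predicts.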
Thinking carefully about the size of the input instance is required here, since the input instance contains numbers; a basic familiarity with weak NP-hardness can also be helpful.
Assume that we fix k=1 and encode n in binary. Since the algorithm must visit n choose 1 = n numbers, it takes at least n steps. Since the magnitude of the number n may be exponential in the size of the input (the number of bits used to encode n), the algorithm in the worst case consumes exponential time.
You can get a feel for this exponential-time behavior by writing a simple C program that prints all the numbers from 1 to n with n = 2^64 and seeing how far you get in a minute. While the input is only 64 bits long, it would take you about 600,000 years to print all the numbers, assuming that your device can print a million numbers per second.
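The answer suggests C; here is a rough Python analogue of the same experiment (Python is slower still, which only strengthens the point):

import time

# See how many numbers we can enumerate in one second, then
# extrapolate to n = 2**64.
start = time.time()
i = 0
while time.time() - start < 1.0:
    i += 1

years = (2**64 / i) / (3600 * 24 * 365)
print(f"{i} numbers/second -> about {years:.0f} years to reach 2**64")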
An algorithm that lists all possibilities of choosing k elements from n unique elements (k <= n) does NOT have factorial time complexity O(n!). The number of possibilities is
p = n!/(k!(n-k)!)
which is at most 2^n (the maximum occurs near k = n/2). So when k is allowed to grow with n, the output size, and hence the running time, is exponential in n; for fixed k it is only polynomial, O(n^k).

What constitutes exponential time complexity?

I am comparing two algorithms that determine whether a number is prime. I am looking at the upper bound for time complexity, but I can't understand the time complexity difference between the two, even though in practice one algorithm is faster than the other.
This pseudocode runs in exponential time, O(2^n):
Prime(n):
    for i in range(2, n-1)
        if n % i == 0
            return False
    return True
This pseudocode runs in half the time of the previous example, but I'm struggling to understand whether the time complexity is still O(2^n) or not:
Prime(n):
    for i in range(2, (n/2+1))
        if n % i == 0
            return False
    return True
As a simple intuition for what big-O and big-Θ (big-Theta) are about: they describe how the number of operations you need to perform changes when you significantly increase the size of the problem (for example, by a factor of 2).
Linear time complexity means that if you increase the size by a factor of 2, the number of steps you need to perform also increases by about 2 times. This is what is called Θ(n), often written, interchangeably but not accurately, as O(n) (the difference between O and Θ is that O provides only an upper bound, while Θ guarantees both upper and lower bounds).
Logarithmic time complexity (Θ(log N)) means that when you increase the size by a factor of 2, the number of steps you need to perform increases only by some fixed number of operations. For example, using binary search you can find a given element in a list twice as long using just one more loop iteration.
Similarly, exponential time complexity (Θ(a^N) for some constant a > 1) means that if you increase the size of the problem just by 1, you need a times more operations. (Note that there is a subtle difference between Θ(2^N) and 2^Θ(N); the second is actually more generic. Both lie inside exponential time, but neither of the two covers it all; see the wiki for some more details.)
Note that those definitions depend significantly on how you define "the size of the task".
As @DavidEisenstat correctly pointed out, there are two possible contexts in which your algorithm can be seen:
Some fixed-width numbers (for example, 32-bit numbers). In such a context, an obvious measure of the complexity of the prime-testing algorithm is the value being tested itself. In that case, your algorithm is linear.
In practice there are many contexts where a prime-testing algorithm must work for really big numbers. For example, many crypto-algorithms used today (such as Diffie–Hellman key exchange or RSA) rely on very big prime numbers: 512 bits, 1024 bits, and so on. Also, in those contexts security is measured in the number of those bits rather than in a particular prime value. So in such contexts a natural way to measure the size of the task is the number of bits. And now the question arises: how many operations do we need to perform to check a value of known size in bits using your algorithm? Obviously, if the value N has m bits, then N ≈ 2^m. So your algorithm converts from linear Θ(N) into exponential 2^Θ(m). In other words, to solve the problem for a value just 1 bit longer, you need to do about 2 times more work.
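You can check this doubling-per-bit behavior empirically. A small Python sketch (the worst case of the trial-division test is a prime input, where the loop never exits early):

# Count the loop iterations of the trial-division test. For a prime
# input n, the loop runs n - 3 times, and measured against the bit
# length m of n the work roughly doubles with every extra bit: 2^Θ(m).
def prime_iterations(n):
    iters = 0
    for i in range(2, n - 1):
        iters += 1
        if n % i == 0:
            break   # composites exit early; primes never do
    return iters

for p in (251, 1021, 4093, 16381):   # primes with 8, 10, 12, 14 bits
    print(p.bit_length(), prime_iterations(p))

Each extra pair of bits multiplies the iteration count by about 4, as 2^Θ(m) predicts.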
Exponential versus linear is a question of how the input is represented and the machine model. If the input is represented in unary (e.g., 7 is sent as 1111111) and the machine can do constant time division on numbers, then yes, the algorithm is linear time. A binary representation of n, however, uses about lg n bits, and the quantity n has an exponential relationship to lg n (n = 2^(lg n)).
Given that the number of loop iterations is within a constant factor for both solutions, they are in the same class, Θ(n). This is exponential if the input has lg n bits, and linear if it has n.
I hope this will explain why they are in fact linear.
Suppose you call the function and count how many times each line is executed:
Prime(n):                          # called once
    for i in range(2, n-1)         # runs about n times
        if n % i == 0              # 1 time per iteration
            return False           # at most 1 time
    return True                    # at most 1 time
# overall -> about n operations
Prime(n):                          # called once
    for i in range(2, (n/2+1))     # runs about n/2 times
        if n % i == 0              # 1 time per iteration
            return False           # at most 1 time
    return True                    # at most 1 time
# overall -> about n/2 operations -> still O(n)
This shows that Prime is a linear function of n.
An O(n^2) bound might come from the code block where this function is called.
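A quick check of that claim, counting actual iterations for prime inputs (the worst case, since no divisor is found); a minimal Python sketch:

# Count loop iterations of both versions. The second does about half
# the work of the first, but doubling n doubles both counts, so both
# are Θ(n).
def count_iters(n, upper):
    iters = 0
    for i in range(2, upper):
        iters += 1
        if n % i == 0:
            break
    return iters

for n in (101, 211, 401):   # primes, so the loops never exit early
    print(n, count_iters(n, n - 1), count_iters(n, n // 2 + 1))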

What would the big O notation be for this function?

What would be the worst-case time complexity, in big O notation, for the following pseudocode? (Assume the function call is O(1).) I'm very new to big O notation, so I'm unsure of an answer, but I was thinking O(log(n)) because the while loop's counter is multiplied by 2 each time. Or would that just be O(log(log(n)))? Or am I wrong on both counts? Any input/help is appreciated; I'm trying to grasp the concept of big O notation for worst-case time complexity, which I just started learning. Thanks!
i ← 1
while (i < n)
    doSomething(...)
    i ← i * 2
done
If i is doubling every time, then the number of times the loop will execute is the number of times you can double i before reaching n. Or, to write it mathematically, if x is the number of times the loop will execute, we have 2^x <= n. Solving for x gives x <= log_2(n). Therefore the number of times the loop will execute is O(log(n)).
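You can confirm that count directly; a small Python sketch comparing the loop's iteration count with ceil(log2(n)):

import math

# i takes the values 1, 2, 4, 8, ..., so the loop body runs
# ceil(log2(n)) times.
def doubling_iterations(n):
    i, iters = 1, 0
    while i < n:
        i *= 2
        iters += 1
    return iters

for n in (10, 1000, 10**6):
    print(n, doubling_iterations(n), math.ceil(math.log2(n)))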
i is growing exponentially; thus the loop will finish in logarithmic time, O(log(n)).
O(log(n)) is correct when you want to state the time complexity of this algorithm in terms of the number n. However, in computer science, complexity is often stated in terms of the size of the input, i.e., the number of bits. Then your algorithm would be linear, i.e., in O(k), where k is the input size.
Typically, other operations like addition are also said to be linear, not logarithmic. A logarithmic complexity usually means that an algorithm does not have to consider the complete input (e.g., binary search).
If this is part of an exercise, or if you want to discuss complexity of algorithms in a computer science context, this difference is important.
Also, if one wanted to be really pedantic: comparison on large integers is not a constant-time operation, and if you are considering the usual integer types, the algorithm is basically constant time, as it only needs up to 32 or 64 iterations.

Understanding Time complexity of algorithm

I am just starting to learn the big O concept. What I learned is that if a function f is less than or equal to a constant multiple of a function g, then f is O(g).
Now I came across an example in which a string of size n takes 2n (double the size of the input) steps of the algorithm. So they say the time taken is O(2n), but then they follow this statement by saying that as O(2n) = O(n), the time complexity is O(n).
I don't understand this. As 2n will always be greater than n, how can we ignore the multiple of 2? Anything less than or equal to 2n will not necessarily be less than n!
Doesn't it mean that we are somehow equating n and 2n? Sounds confusing. Please clarify in simplest possible way as I am just a beginner in this concept.
Best Regards :)
Big-O and related notations are intended to capture the aspects of algorithm performance that are most inherent to the algorithm, independent of how it is being run and measured.
Constant multipliers depend on the unit of measurement, seconds vs. microseconds vs. instructions vs. loop iterations. Even measured in the same units they will be different if measured on different systems. The same algorithm may take 20n instructions in one instruction set, 30n instructions on another. It may take 0.5n microseconds on one, 10n microseconds on another.
Many of the basic algorithm complexities you will see in the literature were calculated decades ago, but remain meaningful across significant changes in processor architecture and even more significant changes in performance.
Similar considerations apply to start-up and similar overheads.
A function f(n) is O(n) if there exist constants N and c such that, for all n >= N, f(n) <= cn. For f(n) = 2n the constants are N = 0 and c = 2. The first constant, N, is about ignoring overhead; the second, c, is about ignoring constant multipliers.
... As 2n will always be greater than n, how can we ignore the multple of 2 then? ...
Simply put, with growing n the multiplier loses its importance. The asymptotic behavior of a function describes what happens when n gets large.
Maybe it helps to consider not just O(n) and O(2n), because they are in the same class, but to contrast them with some other common classes. Example: any O(n^2) algorithm will take longer than any O(n) algorithm in the long run (in the short run, their running times might even be reversed). Say you have two algorithms, one with a linear time complexity of 100n and another with 8n^2. The quadratic algorithm will be faster for all n <= 12, but slower for all n > 12.
This property – that for any fixed positive c and d you'll find an n such that cn < dn^2 – constitutes part of the hierarchy of time complexities.
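The 100n versus 8n^2 crossover is easy to check numerically; a minimal Python sketch:

# Compare a "slow" linear algorithm (100n) with a "fast" quadratic one
# (8n^2): the quadratic wins up to n = 12, the linear one wins after.
for n in (1, 6, 12, 13, 50, 100):
    linear, quadratic = 100 * n, 8 * n * n
    winner = "quadratic" if quadratic < linear else "linear"
    print(n, linear, quadratic, winner)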
As you alluded to in your first paragraph, the time required to execute the algorithm is proportional to a constant multiple of the input size. You can think of O(n) as O(C*n), where C is any constant multiplier.

compare Time complexity of O(2/n) and O(1)

If there are two functions with time complexities of O(2/n) and O(100), which function has the lesser execution time? Is there any real function with time complexity of 2/n?
(found this in some algorithm question paper)
Firstly, in the O notation (as well as Theta and Omega), you can dismiss any constants, because the definition already includes the part "for some constant k".
So, basically, O(100) is equivalent to O(1), while O(2/n) is equivalent to O(1/n). Which has the faster execution time depends on n. If I presume that 100 and 2/n are directly used to calculate execution time, then the execution time is:
100 (units of time) for O(100) in all cases
more than 100 (units of time) for O(2/n) for n < 0.02
less than 100 (units of time) for O(2/n) for n >= 0.02
Now, I hope this question is purely theoretical, because in reality there is no algorithm with O(1/n) complexity -- it would mean that it takes less time (and that's not time per amount of data, that's just total time) the more data it needs to process. I hope it is clear that there is no algorithm that could take 0 time for an infinite amount of data.
The other complexity, O(100), describes an algorithm that takes the same steps no matter what the input data is, and thus always has a constant execution time (actually, it only has to be bounded by a constant; it can run faster than that sometimes). An example would be a program that reads an input from a file of integers and then returns the sum of the first 100 numbers in that file, or of all the numbers if there aren't 100 numbers present. Since it always reads at most 100 numbers (the rest can be ignored) and sums them, it is bounded by a constant number of steps.
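A minimal Python sketch of that constant-time example (the file name is just an illustration; any whitespace-separated list of integers would do):

# Sum the first 100 integers in a file, or all of them if there are
# fewer. At most 100 numbers are ever read, so the work is bounded by
# a constant regardless of the file's length.
def sum_first_100(path):
    total, seen = 0, 0
    with open(path) as f:
        for line in f:
            for token in line.split():
                total += int(token)
                seen += 1
                if seen == 100:
                    return total
    return total

print(sum_first_100("numbers.txt"))   # hypothetical input file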
An O(2/n) algorithm that does anything is practically impossible.
An algorithm is a finite sequence of steps that produces a result. Since a "step" on a computer takes a certain amount of time (for example at least one CPU cycle), the only way an algorithm can have O(2/n) time is if it takes zero time for sufficiently large n. Hence it does nothing.
Leaving aside algorithms and time complexity: an O(2/n) function is "less than" constant, in the sense that a O(2/n) function necessarily tends to 0 as n tends to infinity, whereas an O(1) function doesn't necessarily do that.
A remark on the text of the question from this paper: any function that is O(100) is also O(1), and any function that is O(2/n) is also O(1/n). It doesn't really make much sense to write big-O with unnecessary constants in there, but since it's an examination perhaps it's there to potentially confuse you.
Surely it depends on the value of n. O(100) is basically fixed time, and may run faster or slower than the O(2/n) algorithm depending on what n is.
I can't think of an O(2/n) algorithm offhand; something that gets faster with more data sounds a bit weird.
I am not familiar with any algorithm that has O(2/n) running time and I doubt one can exist, but let's look at the mathematical question.
The mathematical questions should be: (1) Is O(2/n) a subset of O(1)? [1] (2) Is O(1) a subset of O(2/n)?
(1) Yes. Let f(n) be a function in O(2/n); that means there are constants c1, N1 such that f(n) < c1*(2/n) for each n > N1. Since 2/n < 1 for each n > 2, we get f(n) < c1 for each n > max{N1, 2}. So O(2/n) is a subset of O(1).
(2) No. Take f(n) = 1, which is in O(1). Since lim(2/n) = 0 at infinity, for each constant c there is an N such that c*(2/n) < 1 for all n > N, so f(n) = 1 is not in O(2/n).
Conclusion: a function that is O(2/n) is also O(1) - but not the other way around.
It means that each function in O(2/n) scales "smaller" than a constant.
[1] O(1) is identical to O(100), since O(1) = O(100).
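A tiny numeric illustration of both directions (c = 1000 is an arbitrary choice):

# 2/n eventually drops below any constant (so O(2/n) functions are
# O(1)), while the constant 1 eventually exceeds c*(2/n) for every
# fixed c (so 1 is not in O(2/n)).
c = 1000
for n in (10, 100, 10_000, 1_000_000):
    print(n, 2 / n <= 1, 1 <= c * 2 / n)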
