Consider two programs, A and B. Program A requires 1000 * n^2 (1000 multiplied by n to the power of 2) operations, and Program B requires 2^n (2 to the power of n) operations. For which values of n will Program A execute faster than Program B?
The easiest way to compare both of these complexities is just to map them out as graphs.
Just look where the lines intersect: that happens at about n = 18.36. So for n greater than 18.36 (that is, n >= 19 for integer n), Program A will execute faster; below that, Program B needs fewer operations.
For the exact answer, you can set the two expressions equal to each other and solve for n, or just use WolframAlpha: https://www.wolframalpha.com/input/?i=1000x%5E2%3D2%5Ex
For n > 18.3
Solve 1000 * n^2 = 2^n for the exact solution or use WolframAlpha
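If you'd rather check numerically than read a graph, here is a rough C sketch (my own, not from the question) that scans small n and reports which program needs fewer operations:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Compare 1000*n^2 with 2^n for small integer n and report the crossover. */
    for (int n = 1; n <= 25; n++) {
        double a = 1000.0 * n * n;   /* operations for Program A */
        double b = pow(2.0, n);      /* operations for Program B */
        printf("n=%2d  A=%12.0f  B=%12.0f  %s\n",
               n, a, b, a < b ? "A faster" : "B faster");
    }
    return 0;
}

This prints "B faster" for every n up to 18 and "A faster" from n = 19 on, which agrees with the 18.36 crossover.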
Related
Say we have an algorithm that needs to list all possibilities of choosing k elements from n elements (k <= n). Is the time complexity of this particular algorithm exponential, and why?
No.
There are n choose k = n!/(k!(n-k)!) possibilities [1].
Note that n choose k <= n^k / k!, and for fixed k the count grows on the order of n^k / k! [2].
Assuming you keep k constant, the number of possibilities therefore grows polynomially as n grows.
For this example, ignore the 1/k! factor because it is constant. If k = 2 and you increase n from 2 to 3, the count goes from roughly 2^2 to 3^2. An exponential change would be from 2^2 to 2^3; this is not the same.
Keeping k constant and varying n gives a big O of O(n^k) (the 1/k! factor is constant, so you ignore it).
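As a rough illustration (fixing k = 2, as in the example above), you can count the possibilities in C and watch the polynomial growth:

#include <stdio.h>

/* Number of ways to choose k elements from n, computed iteratively
   so the intermediate values stay small and exact. */
unsigned long long choose(unsigned n, unsigned k) {
    unsigned long long result = 1;
    for (unsigned i = 1; i <= k; i++) {
        result = result * (n - k + i) / i;   /* exact at every step */
    }
    return result;
}

int main(void) {
    /* For fixed k = 2 the count grows like n^2 / 2, i.e. polynomially in n. */
    for (unsigned n = 2; n <= 20; n += 2) {
        printf("C(%2u, 2) = %llu\n", n, choose(n, 2));
    }
    return 0;
}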
You need to think carefully about what the size of the input instance is, since the instance consists of numbers; a basic familiarity with weak NP-hardness can also be helpful here.
Assume that we fix k=1 and encode n in binary. Since the algorithm must visit n choose 1 = n numbers, it takes at least n steps. Since the magnitude of the number n may be exponential in the size of the input (the number of bits used to encode n), the algorithm in the worst case consumes exponential time.
You can get a feel for this exponential-time behavior by writing a simple C program that prints all the numbers from 1 to n with n = 2^64 and seeing how far you get in a minute. Although the input is only 64 bits long, printing all the numbers would take roughly 600,000 years, assuming your device can print a million numbers per second.
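A minimal sketch of that experiment (the one-minute cutoff and the occasional clock check are my own choices) could look like this:

#include <stdio.h>
#include <time.h>

int main(void) {
    /* Print the numbers from 1 upward; stop after about a minute and
       report how far we got toward 2^64. */
    unsigned long long n = 0;
    time_t start = time(NULL);
    while (n < 0xFFFFFFFFFFFFFFFFULL) {
        printf("%llu\n", ++n);
        if ((n & 0xFFFFF) == 0 && time(NULL) - start >= 60) {
            break;   /* check the clock only every ~10^6 numbers */
        }
    }
    fprintf(stderr, "Reached %llu of 2^64 after one minute.\n", n);
    return 0;
}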
An algorithm that lists all possibilities of choosing k elements from n unique elements (k <= n) does NOT have an exponential time complexity O(k^n); the number of possibilities it has to produce is given by the factorial-based binomial formula:
p = n!/(k!(n-k)!)
I asked myself whether one can compute the nth Fibonacci number in O(n) or even O(1) time, and why.
Can someone explain please?
Yes. It is called Binet's Formula, or sometimes, incorrectly, De Moivre's Formula (the real De Moivre's formula is a different one, although De Moivre did discover Binet's formula before Binet), and it involves the golden ratio Phi: F(n) = (Phi^n - (1 - Phi)^n) / sqrt(5). The mathematical reasoning behind it (see the link) is a bit involved, but doable.
While it is an approximate formula, Fibonacci numbers are integers -- so, once you achieve a high enough precision (depends on n), you can just approximate the number from Binet's formula to the closest integer.
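A rough C sketch of this approach, using plain doubles (which only stay exact up to roughly n = 70):

#include <stdio.h>
#include <math.h>

/* Binet's formula: F(n) = (phi^n - psi^n) / sqrt(5), rounded to the
   nearest integer.  With 64-bit doubles this stays exact only up to
   roughly n = 70; beyond that you need more precision. */
unsigned long long fib_binet(unsigned n) {
    const double sqrt5 = sqrt(5.0);
    const double phi = (1.0 + sqrt5) / 2.0;
    const double psi = (1.0 - sqrt5) / 2.0;
    return (unsigned long long) llround((pow(phi, n) - pow(psi, n)) / sqrt5);
}

int main(void) {
    for (unsigned n = 0; n <= 10; n++) {
        printf("F(%u) = %llu\n", n, fib_binet(n));
    }
    return 0;
}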
Precision, however, comes in fixed sizes, so you basically have two versions: one with single-precision floats and one with double-precision numbers, the second also running in constant time but slightly slower. For large n you will need an arbitrary-precision number library, and those have processing times that do depend on the numbers involved; as observed by #MattTimmermans, you'll then probably end up with an O(log^2 n) algorithm. This should happen for values of n large enough that you'd be stuck with a large-number library no matter what (but I'd need to test this to be sure).
Otherwise, the Binet formula is mainly made up of two exponentiations and one division (the three sums and the divisions by 2 are probably negligible), while the recursive formula mainly employs function calls and the iterative formula uses a loop. While the first formula is O(1) and the other two are O(n), the actual times are more like a, b*n + c and d*n + e, with values of a, b, c, d and e that depend on the hardware, compiler, implementation, etc. With a modern CPU it is very likely that a is not much larger than b or d, which means that the O(1) formula should be faster for almost every n. But most implementations of the iterative algorithm start with
if (n < 2) {
    return n;
}
which is very likely to be faster for n = 0 and n = 1. I feel confident that Binet's formula is faster for any n beyond the single digits.
Instead of thinking about the recursive method, think of building the sequence from the bottom up, starting at 1+1.
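For instance, a bottom-up O(n) version in C might look like this (it overflows 64-bit integers past F(93), so treat it as a sketch):

#include <stdio.h>

/* Iterative Fibonacci: build the sequence from the bottom up.
   Runs in O(n) time and O(1) space; overflows 64 bits past F(93). */
unsigned long long fib_iter(unsigned n) {
    if (n < 2) {
        return n;
    }
    unsigned long long prev = 0, curr = 1;
    for (unsigned i = 2; i <= n; i++) {
        unsigned long long next = prev + curr;
        prev = curr;
        curr = next;
    }
    return curr;
}

int main(void) {
    printf("F(10) = %llu\n", fib_iter(10));   /* 55 */
    printf("F(50) = %llu\n", fib_iter(50));   /* 12586269025 */
    return 0;
}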
You can also use a matrix m like this:
1 1
1 0
and calculate its nth power (with repeated squaring this takes only O(log n) matrix multiplications). Since m^n = [[F(n+1), F(n)], [F(n), F(n-1)]], output m^n[0][1] to get F(n).
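A sketch of the matrix method in C, using repeated squaring so only O(log n) matrix multiplications are needed (again limited to F(93) by 64-bit overflow):

#include <stdio.h>

/* Multiply two 2x2 matrices of unsigned long long: r = a * b.
   A temporary is used so r may alias a or b. */
static void mat_mul(unsigned long long a[2][2],
                    unsigned long long b[2][2],
                    unsigned long long r[2][2]) {
    unsigned long long t[2][2];
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            t[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j];
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            r[i][j] = t[i][j];
}

/* F(n) via fast exponentiation of [[1,1],[1,0]]: O(log n) multiplications. */
unsigned long long fib_matrix(unsigned n) {
    unsigned long long result[2][2] = {{1, 0}, {0, 1}};   /* identity */
    unsigned long long base[2][2]   = {{1, 1}, {1, 0}};
    while (n > 0) {
        if (n & 1)
            mat_mul(result, base, result);
        mat_mul(base, base, base);
        n >>= 1;
    }
    return result[0][1];   /* m^n[0][1] == F(n) */
}

int main(void) {
    printf("F(10) = %llu\n", fib_matrix(10));   /* 55 */
    printf("F(90) = %llu\n", fib_matrix(90));   /* 2880067194370816120 */
    return 0;
}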
I am currently learning about big O notation, but there is a concept that's confusing me. For 8N^2 + 4N + 3 the complexity class is N^2, because that is the fastest-growing term, and for 5N the complexity class is N.
Then is it correct to say that the complexity class of NLogN is N, since N grows faster than LogN?
The problem I'm trying to solve is this: configuration A consists of a fast algorithm that takes 5NLogN operations to sort a list, running on a computer that performs 10^6 operations per second, and configuration B consists of a slow algorithm that takes N^2 operations to sort a list, running on a computer that performs 10^9 operations per second. For smaller arrays configuration A is faster, but for larger arrays configuration B is better. For what size of array does this transition occur?
What I thought was that if I equated the expressions for the time each configuration takes, I could find the N of the transition point. That yields the equation N^2/10^9 = 5NLogN/10^6, which simplifies to N/5000 = LogN, which I can't solve symbolically.
Thank you
In mathematics, the definition of f = O(g), for two real-valued functions defined on the reals, is that f(n)/g(n) is bounded as n approaches infinity. In other words, there exists a constant A such that, for all sufficiently large n, f(n)/g(n) < A.
In your first example, (8n^2 + 4n + 3)/n^2 = 8 + 4/n + 3/n^2, which is bounded as n approaches infinity (by 15, for example), so 8n^2 + 4n + 3 is O(n^2). On the other hand, nlog(n)/n = log(n), which approaches infinity as n approaches infinity, so nlog(n) is not O(n). It is, however, O(n^2), because nlog(n)/n^2 = log(n)/n, which is bounded (it approaches zero near infinity).
As to your actual problem: remember that when you can't solve an equation symbolically, you can always solve it numerically. The existence of a solution here is clear.
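For example, a small C sketch (assuming the log in 5NLogN is base 2) can simply scan for the first N where configuration A's running time drops below configuration B's:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Configuration A: 5*N*log2(N) operations on a 10^6 ops/sec machine.
       Configuration B: N^2 operations on a 10^9 ops/sec machine.
       Scan for the first N where A's running time drops below B's. */
    for (long long n = 2; n <= 200000; n++) {
        double time_a = 5.0 * n * log2((double) n) / 1e6;
        double time_b = (double) n * n / 1e9;
        if (time_a < time_b) {
            printf("Transition at N = %lld (A: %.3f s, B: %.3f s)\n",
                   n, time_a, time_b);
            break;
        }
    }
    return 0;
}

With base-2 logs this reports a transition around N of roughly 81,600; with natural logs the figure comes out around 55,000, which is why the base matters for the exact number.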
Let's suppose that the base of your logarithm is b, so we are to compare
5N * log(b, N)
with
N^2
5N * log(b, N) = log(b, N^(5N))
N^2 = N^2 * log(b, b) = log(b, b^(N^2))
So we compare
N ^ (5N) with b^(N^2)
Let's compare them by analyzing the ratio N^(5N) / b^(N^2) relative to 1. You will observe that beyond a certain limit it becomes smaller than 1.
Q: Is it correct to say that the complexity class of NLogN is N?
A: No, here is why we can ignore smaller terms:
Consider N^2 + 1000000 N
For small values of N, the second term dominates, but as N grows it matters less and less. Consider the ratio 1000000N / N^2, which shows the relative size of the two terms; it reduces to 1000000/N, which approaches zero as N approaches infinity. So the second term's contribution, relative to the first, literally approaches zero as N grows.
It is not just "smaller," it is irrelevant for sufficiently large N.
That is not true for multiplicands. n log n is always significantly bigger than n, by a margin that continues to increase.
Then is it correct to say that the complexity class of NLogN is N, since N grows faster than LogN?
Nope, because N and log(N) are multiplied together and log(N) isn't constant.
N/5000 = LogN
Roughly 55,000 (taking LogN as the natural logarithm).
Then is it correct to say that the complexity class of NLogN is N, since N grows faster than LogN?
No. When you omit, you should omit a TERM, and NLgN is, as a whole, a single term. By your reasoning, N*N*N = (N^2)*N, and since N^2 has the bigger growth rate we would omit N and call it N^2, which is completely WRONG: the order is N^3, not N^2. NLgN works the same way. You only omit a term when it is added or subtracted.
For example, NLgN + N = O(NLgN), because NLgN grows faster than N.
The problem I'm trying to solve is this: configuration A consists of a fast algorithm that takes 5NLogN operations to sort a list, running on a computer that performs 10^6 operations per second, and configuration B consists of a slow algorithm that takes N^2 operations to sort a list, running on a computer that performs 10^9 operations per second. For smaller arrays configuration A is faster, but for larger arrays configuration B is better. For what size of array does this transition occur?
This CANNOT be true. It is the absolute OPPOSITE. For small N values the faster computer with N^2 is better. For very large N the slower computer with NLgN is better.
Where is the break-even point? Well, the second computer is 1000 times faster than the first one, so they will be equal in speed when N^2 = 1000*N*LgN, which solves to N ≈ 13,750. So for N < 13,750 the N^2 configuration goes faster (since its computer is 1000 times faster), but for N > 13,750 the slower computer wins by a growing margin. Now imagine N = 1,000,000: the faster computer will need about 50 times longer than the slower one, because N^2 = 50,000*N*LgN while its machine is only 1000 times faster.
Note: the calculations were made using Big O, where constant factors are omitted, and the logarithm used is base 2. In algorithm complexity analysis we usually write LgN rather than LogN, where LgN is log to the base 2 and LogN is log to the base 10.
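A quick C check of those figures (dropping the constant 5, exactly as this answer does):

#include <stdio.h>
#include <math.h>

int main(void) {
    /* Find where N^2 = 1000 * N * lg(N), i.e. N = 1000 * lg(N). */
    double n = 2.0;
    for (int i = 0; i < 100; i++) {
        n = 1000.0 * log2(n);   /* fixed-point iteration converges here */
    }
    printf("Break-even near N = %.0f\n", n);   /* about 13,750 */

    /* At N = 1,000,000, how many times more work is N^2 than N*lg(N)? */
    double big = 1e6;
    printf("At N = 1e6, N^2 / (N*lgN) = %.0f\n", big / log2(big));   /* ~50,000 */
    return 0;
}

This lands near 13,750 for the break-even point and a factor of about 50,000, matching the figures above.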
However, referring to CLRS (a good book, I recommend reading it), Big O is defined as follows: f(n) = O(g(n)) if there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0.
[Graph omitted: f(n) lies on or below c*g(n) for every n >= n0.]
It is all about n >= n0. All the rules of Big O notation are valid FOR BIG VALUES OF N; for small N they are not necessarily accurate. For N = 5, say, Big O need not give a close approximation of the running time.
I hope this gives a good answer for the question.
Reference: Chapter 3, Section 1, [CLRS] Introduction to Algorithms, 3rd Edition.
Assuming iterated logarithm is defined as it is here: http://en.wikipedia.org/wiki/Iterated_logarithm
How should I go about comparing its complexity to other functions, for example lg(lg(n))? So far I've done all the comparing by calculating the limits, but how do you calculate a limit of iterated logarithm?
I understand it grows very slowly, slower than lg(n), but is there some function that grows at about the same speed as lg*(n) (where lg* is the iterated logarithm, base 2), so that comparing it to other functions becomes easier? That way I could also compare lg*(lg(n)) to lg(lg*(n)), for example. Any tips on comparing functions to each other by their rate of growth would be appreciated.
The iterated logarithm function log* n isn't easily compared to another function that has similar behavior, the same way that log n isn't easily compared to another function with similar behavior.
To understand the log* function, I've found it helpful to keep a few things in mind:
This function grows extremely slowly. Figure that log* 2^2^2^2^2 = 5, and that tower of 2s (which equals 2^65536) is a quantity far bigger than the number of atoms in the known universe.
It plays with logarithms the way that logarithms play with division. The quotient rule for logarithms says that log (n / 2) = log n - log 2 = log n - 1, assuming our logs are base-2. One intuition for this is that, in a sense, log n represents "how many times do you have to divide n by two to drop n down to 1?," so log (n / 2) "should" be log n - 1 because we did one extra division beforehand. On the other hand, (log n) / 2 is probably a lot smaller than log n - 1. Similarly, log* n counts "how many times do you have to take the log of n before you drop n down to some constant?" so log* log n = log* n - 1 because we've taken one additional log. On the other hand, log log* n is probably much smaller than log* n - 1.
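If it helps to experiment, here is a small C sketch of log* (base 2) printed next to log2(log2 n), so you can see how much more slowly log* grows:

#include <stdio.h>
#include <math.h>

/* Iterated logarithm: how many times must we apply log2 before the
   value drops to 1 or below? */
int log_star(double n) {
    int count = 0;
    while (n > 1.0) {
        n = log2(n);
        count++;
    }
    return count;
}

int main(void) {
    double values[] = {16.0, 65536.0, 1e18, 1e80, 1e300};
    for (int i = 0; i < 5; i++) {
        double n = values[i];
        printf("n = %8.2e   log*(n) = %d   log2(log2(n)) = %.2f\n",
               n, log_star(n), log2(log2(n)));
    }
    return 0;
}

Even at n = 1e300, log*(n) is still only 5, while log2(log2(n)) has already climbed to about 10.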
Hope this helps!
Suppose a computer executes one instruction per microsecond, and an algorithm is known to have a complexity of O(2^n). If a maximum of 12 hours of computer time is given to this algorithm, determine the largest possible value of n for which the algorithm can be used.
No can do.
O(2^n) means that there exists a constant C such that, for all sufficiently large n, f(n) <= C*(2^n).
But this C could just as well be 023945290378569237845692378456923847569283475635463463456, so even 12 hours cannot guarantee that the algorithm finishes, even on small inputs.
Insufficient information. An algorithm that is O(2^n) doesn't necessarily take exactly 2^n steps for an input of size n; it could take a constant factor of that. In fact, it could take C*(2^n) + B operations, where C and B are constants (that is, they don't depend on n), both are integers, C >= 1, and B >= 0.
Well, since O(2^n) is an exponential complexity and you're asked for the largest possible value of n, you're looking for the largest N such that 2^N instructions fit into 12 hours (that is, 12 * 3600 seconds * 1,000,000 microseconds per second). From there, you can either use logarithms to find the right value, or estimate an initial N and iterate until you find it.
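Under the simplifying assumption that the algorithm takes exactly 2^n instructions (which, as the other answers point out, Big O alone does not guarantee), the arithmetic looks like this in C:

#include <stdio.h>
#include <math.h>

int main(void) {
    /* One instruction per microsecond, 12 hours of budget. */
    double budget = 12.0 * 3600.0 * 1e6;   /* instructions we can afford */

    /* Largest n with 2^n <= budget. */
    int n = (int) floor(log2(budget));
    printf("Budget: %.0f instructions, largest n = %d (2^%d = %.0f)\n",
           budget, n, n, pow(2.0, n));
    return 0;
}

With the numbers above this gives n = 35 under that assumption: 2^35 microseconds is about 9.5 hours, while 2^36 would need about 19 hours.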