I have gotten into an argument/debate recently and I am trying to get a clear verdict on the correct solution.
It is well known that n! grows very quickly, but exactly how quickly? Quickly enough to "hide" all additional factors that might be attached to it?
Let's assume I have this silly & simple program (no particular language):
for i from 0 to n! do:
; // nothing
Given that the input is n, the complexity of this is obviously O(n!) (or even Θ(n!), but that isn't relevant here).
Now let's assume this program:
for i from 0 to n do:
for j from 0 to n do:
for k from 0 to n! do:
; // nothing
Bob claims: "This program's complexity is obviously O(n)·O(n)·O(n!) = O(n!·n^2) = O((n+2)!)."
Alice responds: "I agree with you, Bob, but actually it would be sufficient to say that the complexity is O(n!), since O(n!·n^k) = O(n!) for any constant k >= 1."
Is Alice right in her note of Bob's analysis?
Alice is wrong, and Bob is right.
Recall an equivalent definition of big-O notation using limits:
f(n) is in O(g(n)) iff
limsup_{n->infinity} f(n)/g(n) < infinity
For any k > 0:
lim_{n->infinity} (n!·n^k) / n! = lim_{n->infinity} n^k = infinity
and thus n!·n^k is NOT in O(n!).
Amit's solution is perfect; I would only add a more "human" explanation, because the formal definition can be difficult for beginners to digest.
The definition basically says: if you keep increasing n and the functions f(n) and g(n) differ by "only" a factor of k, where k is a constant that does not change (for example, g(n) is always ~100 times larger, no matter whether n = 10,000 or n = 1,000,000), then these functions have the same complexity.
If g(n) is 100 times larger for n = 10,000 but only 80 times larger for n = 1,000,000, then f(n) has the higher complexity! Because as n grows and grows, f(n) would eventually reach g(n) at some point and then grow more and more compared to g(n). In complexity theory you are interested in how it ends up "at infinity" (or, more imaginably, at extremely HIGH values of n).
If you compare n! and n!·n^2, you can see that for n = 10 the second function has a 10^2 = 100 times higher value. For n = 1000 it has a 1000^2 = 1,000,000 times higher value. And as you can imagine, the difference will keep growing; a quick numeric check follows below.
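As a sanity check (this Python snippet is my own illustration, not part of the original answer), you can compute the ratio (n!*n^2)/n! for growing n and watch it blow up instead of settling near a constant:

from math import factorial

# Ratio of n!*n^2 to n!; if n!*n^2 were in O(n!), this ratio
# would have to stay below some fixed constant for all n.
for n in (10, 100, 1000):
    ratio = factorial(n) * n**2 // factorial(n)   # equals n^2 exactly
    print(n, ratio)
# 10 -> 100, 100 -> 10000, 1000 -> 1000000: the ratio grows without bound.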
Related
Could someone please help me understand the notations mentioned in the picture? I am trying to understand "Big O notation". Under the "Family of Bachmann–Landau notations" table there is a "Formal Definition" column containing a lot of notation and equations that I haven't come across before. Is anyone familiar with these? https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann–Landau_notations
The logic behind those definitions is actually quite simple: it basically says that no matter what constant multiplies the result, from some point where n is big enough, one of the functions starts being bigger/smaller and it stays that way.
To see a real difference, I will explain little-o (which says that one function has strictly smaller complexity than another). It says that for every k bigger than zero you can find some value of n, called n_0, such that every n bigger than n_0 follows this pattern: f(n) <= k*g(n).
So you have two functions and you plug n in as a parameter. Then no matter what you choose for k, you can always find a value of n for which f(n) <= k*g(n), and every value bigger than the one you found will also satisfy this inequality.
Consider for example:
f(n) = n * 100
g(n) = n^2
So if you try plugging in, say, n = 5, it does not tell you which has the bigger complexity, because 5*100 = 500 while 5^2 = 25. If you plug in a number big enough, say n = 100, then f(n) = 100*100 = 10000 and g(n) = 100^2 = 100*100 = 10000, so we reach the same value. If you plug in anything bigger than that, g(n) will become bigger and bigger.
It also has to satisfy the inequality f(n) <= k*g(n). For example, if I take k = 0.1, then:
100*n <= 0.1*n^2      (multiply both sides by 10)
1000*n <= n^2         (divide both sides by n)
1000 <= n
So with these functions you can see that for k = 0.1 you need n_0 = 1000 to satisfy the inequality, but that is enough. For all n > 1000 the inequality keeps holding and g(n) stays bigger, therefore it has the higher complexity. (OK, the real proof is not that easy, but you can see the pattern.) The point is, no matter what k is, even if it is as small as k = 0.000000001, there is always a breaking point n_0, and from that point on k*g(n) is at least f(n); see the sketch below.
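Here is a small Python sketch (my own addition, just to make the "breaking point" concrete) that, for a few values of k, searches for the smallest n at which 100*n <= k*n^2 starts to hold; algebraically that point is n = 100/k:

def breaking_point(k, f=lambda n: 100 * n, g=lambda n: n ** 2, limit=10**7):
    """Smallest n up to `limit` with f(n) <= k * g(n), or None."""
    for n in range(1, limit + 1):
        if f(n) <= k * g(n):
            return n
    return None

for k in (0.1, 0.01, 0.001):
    print(k, breaking_point(k))
# 0.1 -> 1000, 0.01 -> 10000, 0.001 -> 100000: a breaking point
# exists for every k, which is exactly what f(n) = o(g(n)) demands.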
We can also try a negative example, to contrast with the O(n) vs O(n^2) case above.
Lets take:
f(n) = n
g(n) = 10*n
So in standard algebra g(n) > f(n), right? But in complexity theory we need to know whether it grows bigger, and if so, whether it grows bigger than just the other function multiplied by a constant.
So if we take k = 0.01, you can see that no matter how big n gets, you will never find an n_0 that satisfies f(n) <= k*g(n), so f(n) != o(g(n)); see the sketch below.
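To see this concretely (again my own illustrative snippet), reuse the same kind of search with f(n) = n and g(n) = 10*n: for k = 0.01 the condition n <= 0.01*10*n = 0.1*n never holds, so the search comes back empty no matter how far you look:

def find_n0(k, f, g, limit=10**6):
    # Smallest n <= limit with f(n) <= k * g(n), or None if there is none.
    for n in range(1, limit + 1):
        if f(n) <= k * g(n):
            return n
    return None

print(find_n0(0.01, lambda n: n, lambda n: 10 * n))        # None: no breaking point,
                                                           # so n is NOT in o(10*n)
print(find_n0(0.01, lambda n: 100 * n, lambda n: n ** 2))  # 10000, as in the example above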
In terms of complexity theory you can take the notations as smaller/bigger, so
f(n) = o(g(n)) -> f(n) < g(n)
f(n) = O(g(n)) -> f(n) <= g(n)
f(n) = Big-Theta(g(n)) -> f(n) === g(n)
//... etc. Remember, these equations are not algebraic; they are just intuition for complexity.
I have been asked the following question by one of my fellow mates.
Which of the following expressions is not sublinear?
O(log log n)
O(n)
O(log n)
O(root(n))
I have gone through https://en.wikipedia.org/wiki/Time_complexity#Sub-linear_time, but I am not sure that I have understood it completely. Could someone point me in the right direction?
A function f(x) is said to grow at least as fast as another function g(x) if the limit of their ratio f(x)/g(x) as x approaches infinity is some positive number (or infinity).
In the case of sublinear, we want to show that a function grows more slowly than c*n, where c is some positive constant.
Thus, for each function f(n) in your list, we take the ratio of f(n) to c*n. If the limit is 0, the function f(n) is sublinear; otherwise it grows at (approximately) the same speed as n, or faster. (A quick numeric check follows the list below.)
lim n->inf (log log n)/(c*n) = 0 (via L'Hôpital's rule): sublinear
lim n->inf (n)/(c*n) = 1/c != 0: linear
lim n->inf (log n)/(c*n) = 0 (via L'Hôpital's rule): sublinear
lim n->inf (sqrt(n))/(c*n) = 0: sublinear
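As a rough numerical illustration (my own addition, with c fixed to 1 for simplicity), you can tabulate f(n)/n for each candidate and watch which ratios shrink toward 0 and which do not:

import math

candidates = {
    "log log n": lambda n: math.log(math.log(n)),
    "n":         lambda n: n,
    "log n":     lambda n: math.log(n),
    "sqrt(n)":   lambda n: math.sqrt(n),
}

# Ratio f(n)/n for growing n; the sublinear ones drive the ratio toward 0.
for n in (10**3, 10**6, 10**9):
    ratios = {name: f"{f(n) / n:.2e}" for name, f in candidates.items()}
    print(n, ratios)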
I think I understand why you're confused: the Wikipedia page you link to uses little-o notation:
Sub-linear time
An algorithm is said to run in sub-linear time (often spelled sublinear time) if T(n) = o(n)
Beware that T(n) = o(n) is a stronger requirement than saying T(n) = O(n).
In particular, for a function in O(n) you can't always have the inequality
f(x) < k g(x) for all x > a
satisfied for every k you choose. f(x) = x with k = 1 proves you wrong, and little-o notation requires the inequality to hold for every k.
So a function can be in O(n) without being in o(n), and f(x) = x is exactly such a function. Thus the non-sublinear expression in your list is O(n).
I recommend reading this answer to continue your studies
Resources I've found on time complexity are unclear about when it is okay to ignore terms in a time complexity equation, specifically with non-polynomial examples.
It's clear to me that given something of the form n^2 + n + 1, the last two terms are insignificant.
Specifically, given the two categorizations 2^n and n*(2^n), is the second in the same order as the first? Does the additional multiplication by n matter? Usually resources just say x^n is exponential and grows much faster... then move on.
I can understand why it might not matter, since 2^n will greatly outpace n, but because they're not being added together it seems it would matter greatly when comparing the two expressions; in fact, the difference between them will always be a factor of n, which seems important to say the least.
You will have to go to the formal definition of big O in order to answer this question.
The definition is that f(x) belongs to O(g(x)) if and only if limsup_{x->inf} f(x)/g(x) is finite, i.e. is not infinity. In short, this means that there exists a constant M such that the value of f(x)/g(x) is never greater than M (for all sufficiently large x).
In the case of your question, let f(n) = n*2^n and let g(n) = 2^n. Then f(n)/g(n) = n, which still grows to infinity. Therefore f(n) does not belong to O(g(n)).
A quick way to see that n⋅2ⁿ is bigger is to make a change of variable. Let m = 2ⁿ. Then n⋅2ⁿ = ( log₂m )⋅m (taking the base-2 logarithm on both sides of m = 2ⁿ gives n = log₂m ), and you can easily show that m log₂m grows faster than m.
I agree that n⋅2ⁿ is not in O(2ⁿ), but I thought it should be made more explicit, since the limit-superior argument does not always carry over.
By the formal definition of Big-O: f(n) is in O(g(n)) if there exist constants c > 0 and n₀ ≥ 0 such that for all n ≥ n₀ we have f(n) ≤ c⋅g(n). It can easily be shown that no such constants exist for f(n) = n⋅2ⁿ and g(n) = 2ⁿ. However, it can be shown that g(n) is in O(f(n)).
In other words, n⋅2ⁿ is lower bounded by 2ⁿ. This is intuitive. Although they are both exponential and thus are equally unlikely to be used in most practical circumstances, we cannot say they are of the same order because 2ⁿ necessarily grows slower than n⋅2ⁿ.
I do not argue with the other answers that say that n⋅2ⁿ grows faster than 2ⁿ. But the growth of n⋅2ⁿ is still only exponential.
When we talk about algorithms, we often say that the time complexity grows exponentially.
So we consider 2ⁿ, 3ⁿ, eⁿ, 2.000001ⁿ, and our n⋅2ⁿ to all belong to the same group of complexities with exponential growth.
To give this a bit of mathematical sense, we consider a function f(x) to grow (no faster than) exponentially if there exists a constant c > 1 such that f(x) = O(c^x).
For n⋅2ⁿ the constant c can be any number greater than 2; let's take 3. Then:
n⋅2ⁿ / 3ⁿ = n ⋅ (2/3)ⁿ, and this is less than 1 for any n.
So 2ⁿ grows slower than n⋅2ⁿ, which in turn grows slower than 2.000001ⁿ. But all three of them grow exponentially.
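A tiny check of that last ratio (my own snippet, not part of the answer): n*(2/3)^n peaks early and then decays toward 0, so it stays bounded by a constant, which is all that the claim n*2^n = O(3^n) needs:

# n * (2/3)^n for n = 1..50: the maximum is reached at small n and
# the sequence then decays toward 0, so it is bounded by a constant.
values = [n * (2 / 3) ** n for n in range(1, 51)]
print(max(values))   # about 0.889 (reached at n = 2 and n = 3)
print(values[-1])    # about 8e-8 at n = 50: the ratio vanishes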
You asked "is the second in the same order as the first? Does the additional n multiplication there matter?" These are two different questions with two different answers.
n*2^n grows asymptotically faster than 2^n. That answers the first question.
But you could also ask: "If algorithm A takes 2^n nanoseconds and algorithm B takes n*2^n nanoseconds, what is the biggest n for which I can find a solution in a second / minute / hour / day / month / year?" The answers are n = 29/35/41/46/51/54 vs. 25/30/36/40/45/49. Not much difference in practice.
The size of the biggest problem that can be solved in time T is O (ln T) in both cases.
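Here is a small sketch (my own, assuming a 30-day month, a 365-day year, and one unit of work per nanosecond) that reproduces those cutoffs by brute force:

# Largest n whose running time (in nanoseconds) still fits in each budget,
# for the cost models 2^n and n * 2^n.
budgets = {
    "second": 10**9,
    "minute": 60 * 10**9,
    "hour":   3600 * 10**9,
    "day":    24 * 3600 * 10**9,
    "month":  30 * 24 * 3600 * 10**9,
    "year":   365 * 24 * 3600 * 10**9,
}

def largest_n(cost, budget):
    n = 0
    while cost(n + 1) <= budget:
        n += 1
    return n

for name, ns in budgets.items():
    a = largest_n(lambda n: 2**n, ns)
    b = largest_n(lambda n: n * 2**n, ns)
    print(f"{name:>6}: 2^n -> n = {a},  n*2^n -> n = {b}")
# second: 29 vs 25, minute: 35 vs 30, ..., year: 54 vs 49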
The very simple answer is "no".
Compare 2^n and n*2^n.
Clearly n*2^n > 2^n for any n > 1.
Or you can even see it by taking the log of both sides; then you get
log(2^n) = n*log(2) < n*log(2) + log(n) = log(n*2^n)
Hence, by both types of analysis, that is by
substituting a number
taking logs
we see that n*2^n is greater than 2^n.
So if you get an expression like
O(2^n + n*2^n), it can be simplified to O(n*2^n); a quick check follows below.
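For completeness, a quick numeric sanity check (my own addition) that the sum 2^n + n*2^n stays within a constant factor (at most 2) of n*2^n, which is why replacing O(2^n + n*2^n) with O(n*2^n) is safe:

# 2^n + n*2^n <= 2 * (n*2^n) for all n >= 1, so the sum is O(n * 2^n).
for n in range(1, 31):
    assert 2**n + n * 2**n <= 2 * (n * 2**n)
print("2^n + n*2^n is bounded by 2 * n*2^n for n = 1..30")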
I have two algorithms.
The complexity of the first one is somewhere between Ω(n^2*(logn)^2) and O(n^3).
The complexity of the second is ω(n*log(logn)).
I know that O(n^3) tells me that it can't be worse than n^3, but I don't know the difference between Ω and ω. Can someone please explain?
Big-O: the asymptotic worst-case performance of an algorithm. The bounding function is the lowest-valued function that always has a higher value than the actual running time of the algorithm. [Constant factors are ignored because they are meaningless as n reaches infinity.]
Big-Ω: the opposite of Big-O. The asymptotic best-case performance of an algorithm. The bounding function is the highest-valued function that always has a lower value than the actual running time of the algorithm. [Constant factors are ignored because they are meaningless as n reaches infinity.]
Big-Θ: the algorithm is so nicely behaved that a single function can describe both the algorithm's upper and lower bounds, within a range defined by some constant factors. An algorithm could then be, for example, Θ(n), with upper bound c1*n and lower bound c2*n, where the same n is used throughout.
Little-o: like Big-O, but looser. A tight Big-O bound and the actual algorithm performance become nearly identical as you head out to infinity; a little-o bound is just some function that always stays strictly above the actual performance. Example: o(n^7) is a valid little-o bound for a function that might actually have linear, O(n), performance.
Little-ω: just the opposite. ω(1) [constant time] would be a valid little-omega bound for the same function above, which might actually exhibit Ω(n) performance.
Big omega (Ω) lower bound:
A function f is an element of the set Ω(g) (which is often written as f(n) = Ω(g(n))) if and only if there exist c > 0 and n0 > 0 such that for every n >= n0 the following inequality is true:
f(n) >= c * g(n)
Little omega (ω) lower bound:
A function f is an element of the set ω(g) (which is often written as f(n) = ω(g(n))) if and only if for each c > 0 we can find n0 > 0 (depending on c), such that for every n >= n0 the following inequality is true:
f(n) >= c * g(n)
You can see that it's actually the same inequality in both cases, the difference is only in how we define or choose the constant c. This slight difference means that the ω(...) is conceptually similar to the little o(...). Even more - if f(n) = ω(g(n)), then g(n) = o(f(n)) and vice versa.
Returning to your two algorithms: algorithm #1 is bounded from both sides, so it looks more promising to me. Algorithm #2 can run longer than c * n * log(log(n)) for any (arbitrarily large) constant c, so it might eventually lose to algorithm #1 for some n. Remember, this is only asymptotic analysis, so everything depends on the actual values of those constants and on the problem sizes that have practical meaning.
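If it helps to see the stated bound functions side by side (my own illustration; these are only the bounds given in the question, not the algorithms' actual running times), you can tabulate them for a few sizes:

import math

bounds = {
    "n^2*(log n)^2 (lower bound, algorithm 1)":       lambda n: n**2 * math.log(n)**2,
    "n^3 (upper bound, algorithm 1)":                 lambda n: n**3,
    "n*log(log n) (strict lower bound, algorithm 2)": lambda n: n * math.log(math.log(n)),
}

for n in (10**3, 10**6):
    print(n, {name: f"{f(n):.2e}" for name, f in bounds.items()})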
I'm trying to figure out whether f(n) = n^(log_b(n)) is in Θ(n^k), and therefore grows polynomially, or is in Θ(k^n), and therefore grows exponentially.
First I tried to simplify the function:
f(n) = n^(logb(n)) = n^(log(n)/log(b)) = n^((1/log(b))*log(n)) and because 1/log(b) is constant we get f(n)=n^log(n).
But now I'm stuck. My guess is that f(n) grows exponentially in Theta(n^log(n)) or even hyper exponentially because the exponent log(n) is also growing.
You can write n^(log(n)) as (k^(log_k(n)))^(log(n)) = k^(log_k(n)*log(n)) = k^(C*(log(n))^2), where C = 1/log(k) is a constant. Since (log(n))^2 < n for n large enough, this means that n^(log(n)) grows slower than k^n.
Try substituting n with b^m, which makes logb(n) = m. That should get you an idea of where to go.
It seems that it is neither Θ(exponential) nor Θ(polynomial); the posters above showed why it is not exponential. The reason why it is not polynomial can be shown with a simple proof by counterexample.
Proof that n^(log n) is not O(n^k):
Suppose there were some k, with some c and n_0, such that for all n > n_0 we have c*n^k > n^(log n); this is the definition of O(n^k).
Given those constants, it is simple to find an n for which this does not hold.
If c > 1, such an n is (ck)^(ck).
If c < 1, such an n is k^k.
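To make the conclusion concrete, here is a small numeric sketch (my own, using base-2 logarithms) that compares the three growth rates on a log2 scale: log2 of n^(log2 n) is (log2 n)^2, log2 of n^10 is 10*log2(n), and log2 of 2^n is n. The quasi-polynomial overtakes the fixed polynomial once log2(n) > 10, but stays far below the exponential:

import math

# Compare growth on a log2 scale:
#   log2(n^(log2 n)) = (log2 n)^2
#   log2(n^10)       = 10 * log2 n
#   log2(2^n)        = n
for n in (2**4, 2**8, 2**16, 2**20):
    lg = math.log2(n)
    print(f"n = 2^{int(lg)}:  (log2 n)^2 = {lg**2:.0f},  10*log2 n = {10*lg:.0f},  n = {n}")
# From n = 2^16 on, (log2 n)^2 exceeds 10*log2 n (so n^(log2 n) beats n^10),
# yet it remains tiny compared to n (so n^(log2 n) never catches 2^n).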