I am working on an assignment, but a friend of mine disagrees with my answer to one part.
f(n) = 2n-2n^3
I find the complexity to be f(n) = O(n^3)
Am I wrong?
You aren't wrong, since O(n^3) does not provide a tight bound. However, typically you assume that f is increasing and try to find the smallest function g for which f=O(g) is true. Consider a simple function f=n+3. It's correct to say that f=O(n^3), since n+3 < n^3 for all n > 2 (just to pick an arbitrary constant). However, it's "more" correct to say that f=O(n), since n+3 < 2n for all n > 3, and this gives you a better feel for how f behaves as n increases.
In your case, f is decreasing as n increases, so it is true that f = O(g) for any function that stays positive as n increases. The "smallest" (or rather, slowest growing) such function is a constant function for some positive constant, and we usually write that as 2n - 2n^3 = O(1), since 2n - 2n^3 < 1 for all n>0.
You could even find some function of n that decreases as n increases, but more slowly than your f does; such usage is rare, though. Big-O notation is most commonly used to describe algorithm running times as the input size increases, so n is almost universally assumed to be positive.
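To make the O(1) claim concrete, here is a quick Python check (illustrative only, not part of the original answer) that 2n - 2n^3 never reaches the constant 1 for positive n:

    # Spot check: f(n) = 2n - 2n^3 stays below the constant 1 for n = 1, 2, ...,
    # which is what the claim 2n - 2n^3 = O(1) relies on.
    def f(n):
        return 2 * n - 2 * n ** 3

    assert all(f(n) < 1 for n in range(1, 10_000))
    print("2n - 2n^3 < 1 for n = 1 .. 9999")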
When we say a method has the time complexity of O(n^2), is it meant in the same way as in 10^2 = 100, or does it mean that the method is at max or closest to that notation? I am really confused about how to understand Big-O. I remember something called an upper bound; would that mean at max?
It means that the running time is bounded above by N².
More precisely, T(N) < C.N², where C is some constant, and the inequality is true as of a certain N*.
For example, 2N²+4N+6 = O(N²), because 2N²+4N+6 < 3N² for all N>5.
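As a quick sanity check (my own sketch, not from the original answer), the stated inequality can be verified numerically:

    # Verify 2N^2 + 4N + 6 < 3N^2 for all N > 5 over a test range.
    def T(N):
        return 2 * N ** 2 + 4 * N + 6

    assert all(T(N) < 3 * N ** 2 for N in range(6, 10_000))
    assert not (T(5) < 3 * 5 ** 2)   # at N = 5 it does not hold yet: 76 vs 75
    print("2N^2 + 4N + 6 < 3N^2 holds for all N > 5 (checked up to 9999)")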
Explanation
If a method f is inside O(g), with g being another function, it means that at some point (exist some n_0 such that for all n > n_0) the function f will always output a smaller value than g for that point. However, g is allowed to have an arbitrary constant k. So f(n) <= k * g(n) for all n above some n_0. So f is allowed to first be bigger, if it then starts to be smaller and keeps being smaller.
We say f is asymptotically bounded by g. Asymptotically means that we do not care how f behaves in the beginning. Only what it will do when approaching infinity. So we discard all inputs below n_0.
Illustration
An illustration would be this:
The blue function is k * g with some constant k, the red one is f. We see that f is greater at first, but then, starting at x_0, it will always be smaller than k * g. Thus f in O(g).
Definition
Mathematically, this can be expressed as:
f is in O(g) if and only if there exist a constant k > 0 and an n_0 such that f(n) <= k * g(n) for all n >= n_0,
which is the usual definition of Big-O. From the explanation above, the definition should be clear. It says that from a certain n_0 on, the function f must be smaller than k * g for all inputs. k is allowed to be some constant.
Both images are taken from Wikipedia.
Examples
Here are a couple of examples to get familiar with the definition (each one is checked numerically in the short script after the list):
n is in O(n) (trivially)
n is in O(n^2) (trivially)
5n is in O(n^2) (starting from n_0 = 5)
25n^2 is in O(n^2) (taking k = 25 or greater)
2n^2 + 4n + 6 is in O(n^2) (take k = 3, starting from n_0 = 5)
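Here is a small Python sketch of my own (the helper name is_bounded is made up for illustration) that spot-checks each claim by testing f(n) <= k * g(n) from the stated starting point up to some finite limit:

    # Finite-range spot check of f(n) <= k * g(n) for n in [n0, n_max); not a proof,
    # but it matches the witnesses (k, n0) given in the examples above.
    def is_bounded(f, g, k, n0, n_max=10_000):
        return all(f(n) <= k * g(n) for n in range(n0, n_max))

    assert is_bounded(lambda n: n,                      lambda n: n,      k=1,  n0=1)
    assert is_bounded(lambda n: n,                      lambda n: n ** 2, k=1,  n0=1)
    assert is_bounded(lambda n: 5 * n,                  lambda n: n ** 2, k=1,  n0=5)
    assert is_bounded(lambda n: 25 * n ** 2,            lambda n: n ** 2, k=25, n0=1)
    assert is_bounded(lambda n: 2 * n ** 2 + 4 * n + 6, lambda n: n ** 2, k=3,  n0=6)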
Notes
Actually, O(g) is a set in the mathematical sense. It contains all functions with the above mentioned property (which are asymptotically bounded by g).
So, although some authors write f = O(g), it is actually wrong and should be f in O(g).
There are also other, similar sets, which differ only in the direction of the bound (formal versions are given right after the list):
Big-O: less equals <=
Small-o: less <
Big-Omega: greater equals >=
Small-omega: greater >
Theta: Big-O and Big-Omega at the same time (equals)
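For reference, here is one standard way to write these sets formally (my own summary, in LaTeX, assuming f and g are eventually non-negative):

    f \in O(g)      \iff \exists k > 0 \;\exists n_0 \;\forall n \ge n_0 : f(n) \le k \cdot g(n)
    f \in o(g)      \iff \forall k > 0 \;\exists n_0 \;\forall n \ge n_0 : f(n) \le k \cdot g(n)
    f \in \Omega(g) \iff \exists k > 0 \;\exists n_0 \;\forall n \ge n_0 : f(n) \ge k \cdot g(n)
    f \in \omega(g) \iff \forall k > 0 \;\exists n_0 \;\forall n \ge n_0 : f(n) \ge k \cdot g(n)
    f \in \Theta(g) \iff f \in O(g) \text{ and } f \in \Omega(g)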
I've looked through the links, and I'm too braindead to understand the mechanical process of figuring them out. I understand the ideas of O, theta and omega, and I understand the "Rules". So let me work on this example with you guys to clear this up in my head :)
f(n) = 100n + log n
g(n) = n + (log n)^2
I need to find: whether f = O(g), or f = Ω(g), or both (in which case f = Θ(g))
So I know that 100n and n are of the same order, and that log(n) grows more slowly than both of them. I just need to figure out whether (log(n))^2 is slower or faster. But I can't really remember anything about logs. If log(n) is bigger, does that mean the number gets bigger or smaller?
Let me add that my real struggle is in figuring out BOTH omega and theta. By definition f = O(g) if there is a constant c that makes c * g(n) at least as big as f(n) from some point on, and the same in reverse for omega. But how do I really test this?
You can usually figure it out from these rules:
Broadly, k < log(n)^k < n^k < k^n. You can replace k at each step with any positive number you want (greater than 1 for the base of k^n) and it remains true for large enough n.
If x is big, then 1/x is very close to 0.
For positive x and y, x < y if and only if log(x) < log(y). (Sometimes taking logs can help with complicated and messy products.)
log(k^n) = n * log(k).
For O, theta, and omega, you can ignore everything except the biggest term that doesn't cancel out.
Rules 1 and 5 suffice for your specific questions. But learn all of the rules.
You don't need to remember rules, but rather learn general principles.
Here, all you need to know is that log(n) is increasing and grows without limit, and the definition of big-O, namely f = O(g) if there's a c such that for all sufficiently large n, f(n) <= c * g(n). You might learn the fact about log by remembering that log(n) grows like the number of digits of n.
Can log^2(n) be O(log(n))? That would mean (using the definition of big-O) that log^2(n) <= c * log(n) for all sufficiently large n, so log^2(n)/log(n) <= c for sufficiently large n (*). But log^2(n)/log(n) = log(n), which grows without limit, so it can't be bounded by c. So log^2(n) is not O(log(n)).
Can log(n) be O(log^2(n))? Well, at some point log(n) > 1 (since it's increasing without limit), and from that point on, log(n) < log^2(n). That proves that log(n) = O(log^2(n)), with the constant c equal to 1.
(*) If you're being extra careful, you need to exclude the possibility that log(n) is zero for the values of n under consideration (it is zero only at n = 1, so it doesn't matter for sufficiently large n).
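Putting the two answers together for the concrete functions in the question, here is a quick numerical sketch of my own. The ratio f(n)/g(n) settles near the constant 100, which is what you would expect from the formal argument: f = O(g) and f = Ω(g), i.e. f = Θ(g):

    import math

    # f(n) = 100n + log n and g(n) = n + (log n)^2 from the question above.
    f = lambda n: 100 * n + math.log(n)
    g = lambda n: n + math.log(n) ** 2

    for n in (10, 10**3, 10**6, 10**9):
        print(n, f(n) / g(n))   # the ratio approaches 100 as n grows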
Resources I've found on time complexity are unclear about when it is okay to ignore terms in a time complexity equation, specifically with non-polynomial examples.
It's clear to me that given something of the form n^2 + n + 1, the last two terms are insignificant.
Specifically, given two categorizations, 2^n, and n*(2^n), is the second in the same order as the first? Does the additional n multiplication there matter? Usually resources just say x^n is exponential and grows much faster... then move on.
I can understand why it wouldn't, since 2^n will greatly outpace n, but because they're not being added together, it would matter greatly when comparing the two expressions; in fact the difference between them will always be a factor of n, which seems important to say the least.
You will have to go to the formal definition of big-O in order to answer this question.
One common characterization is that f(x) belongs to O(g(x)) if and only if limsup_{x → ∞} f(x)/g(x) is finite. In short, this means that there exists a constant M such that the value of f(x)/g(x) is eventually never greater than M.
In the case of your question let f(n) = n ⋅ 2n and let g(n) = 2n. Then f(n)/g(n) is n which will still grow infinitely. Therefore f(n) does not belong to O(g(n)).
A quick way to see that n⋅2ⁿ is bigger is to make a change of variable. Let m = 2ⁿ. Then n⋅2ⁿ = ( log₂m )⋅m (taking the base-2 logarithm on both sides of m = 2ⁿ gives n = log₂m ), and you can easily show that m log₂m grows faster than m.
I agree that n⋅2ⁿ is not in O(2ⁿ), but I thought it should be more explicit since the limit superior usage doesn't always hold.
By the formal definition of Big-O: f(n) is in O(g(n)) if there exist constants c > 0 and n₀ ≥ 0 such that for all n ≥ n₀ we have f(n) ≤ c⋅g(n). It can easily be shown that no such constants exist for f(n) = n⋅2ⁿ and g(n) = 2ⁿ. However, it can be shown that g(n) is in O(f(n)).
In other words, n⋅2ⁿ is lower bounded by 2ⁿ. This is intuitive. Although they are both exponential and thus are equally unlikely to be used in most practical circumstances, we cannot say they are of the same order because 2ⁿ necessarily grows slower than n⋅2ⁿ.
I do not argue with the other answers that say that n⋅2ⁿ grows faster than 2ⁿ. But the growth of n⋅2ⁿ is still only exponential.
When we talk about algorithms, we often say that the time complexity grows exponentially.
So we consider 2ⁿ, 3ⁿ, eⁿ, 2.000001ⁿ, and our n⋅2ⁿ to belong to the same group of complexities with exponential growth.
To give this a bit of mathematical sense, we consider a function f(x) to grow (no faster than) exponentially if there exists a constant c > 1 such that f(x) = O(c^x).
For n⋅2ⁿ the constant c can be any number greater than 2; let's take 3. Then:
n⋅2ⁿ / 3ⁿ = n ⋅ (2/3)ⁿ, which is less than 1 for every n ≥ 1 and in fact tends to 0, so n⋅2ⁿ = O(3ⁿ).
So 2ⁿ grows slower than n⋅2ⁿ, which in turn grows slower than 2.000001ⁿ. But all three of them grow exponentially.
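A small numeric sketch of my own illustrates both points at once: n⋅2ⁿ / 2ⁿ = n grows without bound (so n⋅2ⁿ is not O(2ⁿ)), while n⋅2ⁿ / 3ⁿ = n⋅(2/3)ⁿ shrinks toward 0 (so n⋅2ⁿ is O(3ⁿ), i.e. still "only" exponential):

    for n in (10, 20, 40, 80):
        ratio_to_2n = (n * 2 ** n) / 2 ** n   # equals n, grows without bound
        ratio_to_3n = (n * 2 ** n) / 3 ** n   # equals n * (2/3)^n, tends to 0
        print(n, ratio_to_2n, ratio_to_3n)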
You asked "is the second in the same order as the first? Does the additional n multiplication there matter?" These are two different questions with two different answers.
n 2^n grows asymptotically faster than 2^n. That's that question answered.
But you could ask "if algorithm A takes 2^n nanoseconds, and algorithm B takes n 2^n nanoseconds, what is the biggest n for which I can find a solution in a second / minute / hour / day / month / year?" The answers are n = 29/35/41/46/51/54 vs. 25/30/36/40/45/49. Not much difference in practice.
The size of the biggest problem that can be solved in time T is O (ln T) in both cases.
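Those cutoffs can be reproduced with a short sketch of my own (assuming one elementary step per nanosecond, a 30-day month and a 365-day year):

    # Largest n with T(n) nanoseconds within the budget, for T(n) = 2^n and T(n) = n * 2^n.
    budgets = {
        "second": 10**9,
        "minute": 60 * 10**9,
        "hour":   3600 * 10**9,
        "day":    86400 * 10**9,
        "month":  30 * 86400 * 10**9,
        "year":   365 * 86400 * 10**9,
    }

    def largest_n(T, budget):
        n = 1
        while T(n + 1) <= budget:
            n += 1
        return n

    for name, budget in budgets.items():
        print(name, largest_n(lambda n: 2 ** n, budget),
              largest_n(lambda n: n * 2 ** n, budget))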
The very simple answer is 'no'.
Compare 2^n and n.2^n.
As seen, n.2^n > 2^n for any n > 1.
You can also see it by taking the log of both sides, which gives
n.log(2) < n.log(2) + log(n).
Hence, by both types of analysis, that is, by
substituting a number, and
using logs,
we see that n.2^n is greater than 2^n.
So if you get an expression like
O(2^n + n.2^n), it can be replaced by O(n.2^n).
What does n^b = o(a^n) (o is little-oh) mean, intuitively? I am just beginning to teach myself algorithms and I am having a hard time interpreting such expressions every time I see one. Here, the way I understood it is that the function n^b has the rate of growth a^n. But this is not making sense to me, regardless of whether that's right or wrong.
f(n) = o(g(n)) means that f(n)/g(n) -> 0 as n -> infinity.
For your problem, it must hold that a > 1. Then (n^b)/(a^n) -> 0 as n -> infinity. To see why, write (n^b)/(a^n) = (n^b)/(sqrt(a)^n * sqrt(a)^n) = ((n^b)/sqrt(a)^n) * (1/sqrt(a)^n). The function h(n) = (n^b)/sqrt(a)^n increases at first and then decreases, so it attains some maximum value M; therefore (n^b)/(a^n) <= M/(sqrt(a)^n). Since a > 1, we have sqrt(a) > 1, so sqrt(a)^n -> infinity as n -> infinity, and thus M/(sqrt(a)^n) -> 0. It follows that (n^b)/(a^n) -> 0 as n -> infinity, that is, n^b = o(a^n) by definition.
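For a feel of how quickly the ratio collapses, here is a tiny sketch of my own with the illustrative choices b = 3 and a = 2:

    # n^b / a^n for b = 3, a = 2: the ratio rises at first, peaks (around n = 4),
    # and then heads rapidly to 0.
    b, a = 3, 2
    for n in (1, 5, 10, 20, 40, 80):
        print(n, n ** b / a ** n)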
(For simplicity I'll assume that all functions always return positive values. This is the case for example for functions measuring run-time of an algorithm, as no algorithm runs in "negative" time.)
First, a recap of big-O notation, to clear up a common misunderstanding:
To say that f is O(g) means that f grows asymptotically at most as fast as g. More formally, treating both f and g as functions of a variable n, to say that f(n) is O(g(n)) means that there is a constant K, so that eventually, f(n) < K * g(n). The word "eventually" here means that there is some fixed value N (which is a function of K, f, and g), so that if n > N then f(n) < K * g(n).
For example, the function f(n) = n + 2 is O(n^2). To see why, let K = 1. Then, if n > 10, we have n + 2 < n^2, so our conditions are satisfied. A few things to note:
For n = 1, we have f(n) = 3 and g(n) = 1, so f(n) < K * g(n) actually fails. That's ok! Remember, the inequality only needs to hold eventually, and it does not matter if the inequality fails for some small finite list of n.
We used K = 1, but we didn't need to. For example, K = 2 would also have worked. The important thing is that there is some value of K which gives us the inequality we want eventually.
We saw that n + 2 is O(n^2). This might look confusing, and you might say, "Wait, isn't n + 2 actually O(n)?" The answer is yes. n + 2 is O(n), O(n^2), O(n^3), O(n/3), etc.
Little-o notation is slightly different. Big-O notation, intuitively, says that if f is O(g), then f grows asymptotically at most as fast as g. Little-o notation says that if f is o(g), then f grows asymptotically strictly slower than g.
Formally, f is o(g) if for any (let's say positive) choice of K, eventually the inequality f(n) < K * g(n) holds. So, for instance:
The function f(n) = n is not o(n). This is because, for K = 1, there is no value of n so that f(n) < K * g(n). Intuitively, f and g grow asymptotically at the same rate, so f does not grow strictly slower than g does.
The function f(n) = n is o(n^2). Why is this? Pick your favorite positive value of K. (To see the actual point, try to make K small, for example 0.001.) Imagine graphing the functions f(n) and K * g(n). One is a straight line through the origin of positive slope, and the other is a concave-up parabola through the origin. Eventually the parabola will be higher than the line, and will stay that way. (If you remember your pre-calc/calculus...)
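To see the "for any K" part in action, here is a throwaway sketch of my own that finds, for several small values of K, the point after which K * g(n) overtakes f(n), with f(n) = n and g(n) = n^2:

    # n < K * n^2 is equivalent to n > 1/K, so the threshold is roughly 1/K.
    for K in (1.0, 0.1, 0.001):
        n = 1
        while n >= K * n ** 2:
            n += 1
        print(K, n)   # first n with n < K * n^2; it stays true from here on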
Now we get to your actual question: let f(n) = n^b and g(n) = a^n. You asked why f is o(g).
Presumably, the author of the original statement treats a and b as constant, positive real numbers, and moreover a > 1 (if a <= 1 then the statement is false).
The statement, in English, is:
For any positive real number b, and any real number a > 1, the function n^b grows asymptotically strictly slower than a^n.
This is an important thing to know if you are ever going to deal with algorithmic complexity. Put more simply, one can say "polynomials grow much more slowly than exponential functions." It isn't immediately obvious that this is true, and the proof is too long to write out here, so here is a reference:
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
Probably you will have to have some comfort with math to be able to read any proof of this fact.
Good luck!
The super high level meaning of the statement n^b is o(a^n) is just that exponential functions like a^n grow much faster than polynomial functions like n^b.
The important thing to understand when looking at big O and little o notation is that they are both upper bounds. I'm guessing that's why you're confused. n^b is o(a^n) because the growth rate of a^n is much bigger. You could probably find a tighter little o upper bound on n^b (one where the gap between the bound and the function is smaller) but a^n is still valid. It's also probably worth looking at the difference between Big O and little o.
Remember that a function f is Big O of a function g if for some constant k > 0, you can find a minimum value for n from which f(n) ≤ k * g(n) always holds.
A function f is little o of a function g if for any constant k > 0 you can find such a minimum value for n.
Note that the little o requirement is harder to fulfill, so if a function f is little o of a function g, it is also Big O of g; being little o additionally says that g grows strictly faster than f, not merely at least as fast.
In your example, if b is 3 and a is 2 and we set k to 1, we can work out the minimum value of n so that n^b ≤ k * a^n. In this case, it's between 9 and 10, since
9³ = 729 and 1 * 2⁹ = 512, which means at n = 9, a^n is not yet greater than n^b,
but
10³ = 1000 and 1 * 2¹⁰ = 1024, which means a^n is now greater than n^b.
You can see by graphing these functions that a^n will be greater than n^b for any value of n ≥ 10. At this point we've only shown that n^b is Big O of a^n, since Big O only requires that for some value of k > 0 (we picked 1), a^n ≥ n^b from some minimum n on (in this case it's between 9 and 10).
To show that n^b is little o of a^n, we would have to show that for any k greater than 0 you can still find a minimum value of n so that k * a^n > n^b. For example, if you picked k = .5, the minimum of 10 we found earlier doesn't work, since 10³ = 1000 and .5 * 2¹⁰ = 512. But we can just keep sliding the minimum for n out further and further; the smaller you make k, the bigger the minimum for n will be. Saying n^b is little o of a^n means that no matter how small you make k, we will always be able to find a big enough value of n so that n^b ≤ k * a^n.
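The "keep sliding the minimum out" idea can be made concrete with a small sketch of my own (b = 3, a = 2 as in the example above):

    # For each k, find the smallest n from which n^3 <= k * 2^n stays true
    # (checked over a finite window, so this is a spot check, not a proof).
    def threshold(k, n_max=100):
        for n0 in range(1, n_max):
            if all(n ** 3 <= k * 2 ** n for n in range(n0, n_max)):
                return n0

    for k in (1, 0.5, 0.01, 0.0001):
        print(k, threshold(k))   # the smaller k gets, the later the crossover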
I have two algorithms.
The complexity of the first one is somewhere between Ω(n^2*(logn)^2) and O(n^3).
The complexity of the second is ω(n*log(logn)).
I know that O(n^3) tells me that it can't be worse than n^3, but I don't know the difference between Ω and ω. Can someone please explain?
Big-O: An asymptotic upper bound. The function inside the O is one that, times some constant, is eventually always at least the actual running time of the algorithm. [Constant factors are ignored because they are meaningless as n reaches infinity.]
Big-Ω: The opposite of Big-O: an asymptotic lower bound. The function inside the Ω is one that, times some constant, is eventually always at most the actual running time of the algorithm. [Constant factors are ignored for the same reason.]
Big-Θ: The algorithm is so nicely behaved that a single function n can describe both its upper and lower bounds, up to constant factors. An algorithm could then have something like this: BigTheta(n), with upper bound c1·n and lower bound c2·n for constants c1 ≥ c2 > 0, where n == n throughout.
Little-o: Is like Big-O but deliberately loose. A little-o bound grows strictly faster than the actual performance, so the two never come within a constant factor of each other. Example: o(n^7) is a valid little-o bound for a function that actually has linear, O(n), performance.
Little-ω: Is just the opposite: a strictly loose lower bound. ω(1) [constant time] would be a valid little-omega bound for the same function above, which actually exhibits BigOmega(n) performance.
Big omega (Ω) lower bound:
A function f is an element of the set Ω(g) (which is often written as f(n) = Ω(g(n))) if and only if there exists c > 0, and there exists n0 > 0 (probably depending on the c), such that for every n >= n0 the following inequality is true:
f(n) >= c * g(n)
Little omega (ω) lower bound:
A function f is an element of the set ω(g) (which is often written as f(n) = ω(g(n))) if and only if for each c > 0 we can find n0 > 0 (depending on the c), such that for every n >= n0 the following inequality is true:
f(n) >= c * g(n)
You can see that it's actually the same inequality in both cases, the difference is only in how we define or choose the constant c. This slight difference means that the ω(...) is conceptually similar to the little o(...). Even more - if f(n) = ω(g(n)), then g(n) = o(f(n)) and vice versa.
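A small sketch of my own illustrates the quantifier difference with f(n) = n^2: against g(n) = n you can find an n0 for every c (so f is ω(n), and in particular Ω(n)), but against g(n) = n^2 only the Ω version survives, because for c = 2 the inequality n^2 >= 2 * n^2 never holds:

    # f(n) = n^2 versus g(n) = n: for any c, n^2 >= c * n as soon as n >= c.
    for c in (1, 10, 1000):
        assert all(n ** 2 >= c * n for n in range(c, c + 10_000))

    # f(n) = n^2 versus g(n) = n^2: c = 1 works (Big-Omega), but c = 2 fails for
    # every n >= 1, so n^2 is Omega(n^2) without being omega(n^2).
    assert all(n ** 2 >= 1 * n ** 2 for n in range(1, 10_000))
    assert not any(n ** 2 >= 2 * n ** 2 for n in range(1, 10_000))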
Returning to your two algorithms: algorithm #1 is bounded from both sides, so it looks more promising to me. Algorithm #2 can work longer than c * n * log(log(n)) for any (arbitrarily large) c, so it might eventually lose to algorithm #1 for some n. Remember, this is only asymptotic analysis, so everything depends on the actual values of these constants and on the problem sizes that have practical meaning.