Exponentials: Little Oh [duplicate] - algorithm

This question already has answers here:
Difference between Big-O and Little-O Notation
What does n^b = o(a^n) (o is little oh) mean, intuitively? I am just beginning to teach myself algorithms and I am having a hard time interpreting such expressions every time I see one. Here, the way I understood it is that for the function n^b, the rate of growth is a^n. But this is not making sense to me, regardless of whether that reading is right or wrong.

f(n) = o(g(n)) means that f(n)/g(n) -> 0 as n -> infinity.
For your problem it should hold that a > 1. Then (n^b)/(a^n) -> 0 as n -> infinity. To see why, split the denominator: (n^b)/(a^n) = (n^b)/(sqrt(a)^n * sqrt(a)^n) = ((n^b)/sqrt(a)^n) * (1/sqrt(a)^n). Let h(n) = (n^b)/sqrt(a)^n; this function increases at first and then decreases, so it attains some maximum value M. Then (n^b)/(a^n) <= M/(sqrt(a)^n). Since a > 1, sqrt(a) > 1, so sqrt(a)^n -> infinity as n -> infinity, and therefore M/(sqrt(a)^n) -> 0. Putting it together, (n^b)/(a^n) -> 0 as n -> infinity, which is exactly n^b = o(a^n) by definition.
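If you want to see that limit numerically, here is a minimal Python sketch (a = 1.5 and b = 10 are arbitrary values picked for the demonstration, not anything from the question):

    # Numerically illustrate that (n^b) / (a^n) -> 0 as n grows, assuming a > 1.
    # a = 1.5 and b = 10 are arbitrary example values chosen for the demo.
    a, b = 1.5, 10

    for n in [10, 50, 100, 200, 400, 800]:
        ratio = n**b / a**n
        print(f"n = {n:4d}   n^b / a^n = {ratio:.3e}")
    # The ratio grows at first (the polynomial wins for small n),
    # then collapses toward 0 once the exponential takes over.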

(For simplicity I'll assume that all functions always return positive values. This is the case for example for functions measuring run-time of an algorithm, as no algorithm runs in "negative" time.)
First, a recap of big-O notation, to clear up a common misunderstanding:
To say that f is O(g) means that f grows asymptotically at most as fast as g. More formally, treating both f and g as functions of a variable n, to say that f(n) is O(g(n)) means that there is a constant K, so that eventually, f(n) < K * g(n). The word "eventually" here means that there is some fixed value N (which is a function of K, f, and g), so that if n > N then f(n) < K * g(n).
For example, the function f(n) = n + 2 is O(n^2). To see why, let K = 1. Then, if n > 10, we have n + 2 < n^2, so our conditions are satisfied. A few things to note:
For n = 1, we have f(n) = 3 and g(n) = 1, so f(n) < K * g(n) actually fails. That's ok! Remember, the inequality only needs to hold eventually, and it does not matter if the inequality fails for some small finite list of n.
We used K = 1, but we didn't need to. For example, K = 2 would also have worked. The important thing is that there is some value of K which gives us the inequality we want eventually.
We saw that n + 2 is O(n^2). This might look confusing, and you might say, "Wait, isn't n + 2 actually O(n)?" The answer is yes. n + 2 is O(n), O(n^2), O(n^3), O(n/3), etc.
Little-o notation is slightly different. Big-O notation, intuitively, says that if f is O(g), then f grows asymptotically at most as fast as g. Little-o notation says that if f is o(g), then f grows asymptotically strictly slower than g.
Formally, f is o(g) if for any (let's say positive) choice of K, eventually the inequality f(n) < K * g(n) holds. So, for instance:
The function f(n) = n is not o(n). This is because, for K = 1, there is no value of n so that f(n) < K * g(n). Intuitively, f and g grow asymptotically at the same rate, so f does not grow strictly slower than g does.
The function f(n) = n is o(n^2). Why is this? Pick your favorite positive value of K. (To see the actual point, try to make K small, for example 0.001.) Imagine graphing the functions f(n) and K * g(n). One is a straight line through the origin of positive slope, and the other is a concave-up parabola through the origin. Eventually the parabola will be higher than the line, and will stay that way. (If you remember your pre-calc/calculus...)
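If it helps to see the "pick any K, the parabola still wins eventually" idea numerically, here is a tiny Python check; the K values, including the deliberately small 0.001, are arbitrary choices:

    # f(n) = n, g(n) = n^2: for each K, find the first n from which n < K * n^2 holds.
    # The K values are arbitrary; note that even a tiny K = 0.001 works eventually.
    for K in [1.0, 0.1, 0.001]:
        n = 1
        while not (n < K * n**2):   # n < K*n^2  <=>  1 < K*n  <=>  n > 1/K
            n += 1
        print(f"K = {K}: n < K * n^2 holds for every n >= {n}")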
Now we get to your actual question: let f(n) = n^b and g(n) = a^n. You asked why f is o(g).
Presumably, the author of the original statement treats a and b as constant, positive real numbers, and moreover a > 1 (if a <= 1 then the statement is false).
The statement, in English, is:
For any positive real number b, and any real number a > 1, the function n^b grows asymptotically strictly slower than a^n.
This is an important thing to know if you are ever going to deal with algorithmic complexity. Put more simply, one can say "polynomials grow much slower than exponential functions." It isn't immediately obvious that this is true, and the proof is too long to write out here, so here is a reference:
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
Probably you will have to have some comfort with math to be able to read any proof of this fact.
Good luck!

The super high level meaning of the statement n^b is o(a^n) is just that exponential functions like a^n grow much faster than polynomial functions like n^b.
The important thing to understand when looking at big O and little o notation is that they are both upper bounds. I'm guessing that's why you're confused. n^b is o(a^n) because the growth rate of a^n is much bigger. You could probably find a tighter little o upper bound on n^b (one where the gap between the bound and the function is smaller), but a^n is still valid. It's also probably worth looking at the difference between Big O and little o.
Remember that a function f is Big O of a function g if for some constant k > 0, you can eventually find a minimum value for n so that f(n) ≤ k * g(n).
A function f is little o of a function g if for any constant k > 0 you can eventually find a minimum value for n so that f(n) ≤ k * g(n).
Note that the little o requirement is harder to fulfill: if a function f is little o of a function g, it is also Big O of g, and it means that g grows strictly faster than f rather than merely at least as fast.
In your example, if b is 3 and a is 2 and we set k to 1, we can work out the minimum value for n so that n^b ≤ k * a^n. In this case, it's between 9 and 10, since
9³ = 729 and 1 * 2⁹ = 512, which means at n = 9, a^n is not yet greater than n^b,
but
10³ = 1000 and 1 * 2¹⁰ = 1024, which means a^n is now greater than n^b.
You can see by graphing these functions that a^n will be greater than n^b for any value of n ≥ 10. At this point we've only shown that n^b is Big O of a^n, since Big O only requires that for some value of k > 0 (we picked 1), a^n ≥ n^b for all n past some minimum (in this case, between 9 and 10).
To show that n^b is little o of a^n, we would have to show that for any k greater than 0 you can still find a minimum value of n beyond which k * a^n > n^b. For example, if you picked k = 0.5, the minimum of 10 we found earlier doesn't work, since 10³ = 1000 and 0.5 * 2¹⁰ = 512. But we can just keep sliding the minimum for n out further and further; the smaller you make k, the bigger the minimum for n will be. Saying n^b is little o of a^n means that no matter how small you make k, we will always be able to find a big enough value for n so that n^b ≤ k * a^n from that point on.
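Here is a small Python sketch of that "keep sliding the minimum out" idea, reusing b = 3 and a = 2 from the example; the list of k values and the search cap are arbitrary choices for illustration:

    # f(n) = n^3, g(n) = 2^n (b = 3 and a = 2 are taken from the example above).
    # For each k, find the smallest n after which n^3 <= k * 2^n holds for good.
    b, a = 3, 2

    def min_n(k, search_up_to=100):
        # Return the smallest n such that m**b <= k * a**m for every m >= n
        # (only searched up to a finite cap, which is enough here).
        last_bad = 0
        for m in range(1, search_up_to):
            if m**b > k * a**m:
                last_bad = m
        return last_bad + 1

    for k in [1, 0.5, 0.1, 0.01]:
        print(f"k = {k}: n^3 <= k * 2^n for all n >= {min_n(k)}")

The minimum n it reports grows as k shrinks, which is exactly the behaviour the definition of little o allows.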

Related

When something is Big O, does it mean that it is exactly the result of Big O?

When we say a method has the time complexity of O(n^2), is it meant in the same way as in 10^2 = 100, or does it mean that the method is at max or closest to that notation? I am really confused about how to understand Big O. I remember something called an upper bound; would that mean at max?
It means that the running time is bounded above by N².
More precisely, T(N) < C·N², where C is some constant and the inequality holds from a certain N₀ onward.
For example, 2N²+4N+6 = O(N²), because 2N²+4N+6 < 3N² for all N>5.
Explanation
If a method f is inside O(g), with g being another function, it means that at some point (there exists some n_0 such that for all n > n_0) the function f will always output a smaller value than g from that point on. However, g is allowed to be scaled by an arbitrary constant k. So f(n) <= k * g(n) for all n above some n_0. In other words, f is allowed to be bigger at first, as long as it eventually becomes smaller and stays smaller.
We say f is asymptotically bounded by g. Asymptotically means that we do not care how f behaves in the beginning. Only what it will do when approaching infinity. So we discard all inputs below n_0.
Illustration
An illustration would be this (the figure itself is not reproduced here):
The blue curve is k * g with some constant k, the red one is f. f is greater at first, but starting at x_0 it is always smaller than k * g. Thus f is in O(g).
Definition
Mathematically, this can be expressed as: f is in O(g) iff there exist a constant k > 0 and an n_0 such that f(n) <= k * g(n) for all n > n_0, which is the usual definition of Big-O. From the explanation above, the definition should be clear. It says that from a certain n_0 on, the function f must be smaller than k * g for all inputs, where k is allowed to be some constant.
Examples
Here are a couple of examples to familiarize with the definition:
n is in O(n) (trivially)
n is in O(n^2) (trivially)
5n is in O(n^2) (starting from n_0 = 5)
25n^2 is in O(n^2) (taking k = 25 or greater)
2n^2 + 4n + 6 is in O(n^2) (take k = 3; the inequality holds for all n > 5, i.e. from n_0 = 6; see the quick numeric check below)
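As a quick sanity check of that last example (a finite numeric check, not a proof), a few lines of Python:

    # Finite sanity check that 2n^2 + 4n + 6 <= 3*n^2 from n = 6 onward.
    for n in range(1, 16):
        f = 2*n**2 + 4*n + 6
        bound = 3*n**2
        print(n, f, bound, "OK" if f <= bound else "violated")
    # Violated for n <= 5, holds from n = 6 on: that n = 6 is the n_0 in the definition.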
Notes
Actually, O(g) is a set in the mathematical sense. It contains all functions with the above-mentioned property (which are asymptotically bounded by g).
So, although some authors write f = O(g), it is actually wrong and should be f in O(g).
There are also other, similar, sets, which only differ in the direction of the bound:
Big-O: less equals <=
Small-o: less <
Big-Omega: greater equals >=
Small-omega: greater >
Theta: Big-O and Big-Omega at the same time (equals)

Big O notation for the complexity function of the fourth root of n

I am expected to find the Big O notation for the following complexity function: f(n) = n^(1/4).
I have come up with a few possible answers.
The more accurate answer would seem to be O(n^(1/4)). However, since it contains a root, it isn't a polynomial, and I've never seen a fractional power of n like this in any textbook or online resource.
Using the mathematical definition, I can try to define an upper-bound function with a specified n limit. I tried plotting n^(1/4) in red with log2 n in blue and n in green.
The log2 n curve intersects with n^(1/4) at n=2.361 while n intersects with n^(1/4) at n=1.
Given the formal mathematical definition, we can come up with two additional Big O notations with different limits.
The following shows that O(n) works for n > 1.
f(n) is O(g(n))
Find c and n0 so that
n^(1/4) ≤ cn
where c > 0 and n ≥ n0
c = 1 and n0 = 1
f(n) is O(n) for n > 1
This one shows that O(log2 n) works for n > 3.
f(n) is O(g(n))
Find c and n0 so that
n^(1/4) ≤ clog2 n
where c > 0 and n ≥ n0
c = 1 and n0 = 3
f(n) is O(log2 n) for n > 3
Which Big O description of the complexity function would be typically used? Are all 3 "correct"? Is it up to interpretation?
Using O(n^(1/4)) is perfectly fine for big O notation; fractional exponents do turn up in real-world complexity analyses.
O(n) is also correct (because big O gives only an upper bound), but it is not tight: n^(1/4) is in O(n), but not in Theta(n).
n^(1/4) is NOT in O(log(n)) (proof guidelines follow).
For any value r>0, and for large enough value of n, log(n) < n^r.
Proof:
Have a look at log(log(n)) and r*log(n). The first is clearly smaller than the second for large enough values. In big O notation terminology, we can definitely say that r*log(n) is NOT in O(log(log(n))), while log(log(n)) clearly is (1), so we can say that:
log(log(n)) < r*log(n) = log(n^r) for large enough values of n
Now, exponentiate each side with base e. Note that both the left-hand and right-hand values are positive for large enough n:
e^log(log(n)) < e^log(n^r)
log(n) < n^r
Moreover, in a similar way, we can show that for any constant c, and for large enough values of n:
c*log(n) < n^r
So, by definition it means n^r is NOT in O(log(n)), and your specific case: n^0.25 is NOT in O(log(n)).
Footnotes:
(1) If you are still unsure, create a new variable m = log(n): is it clear that r*m is not in O(log(m))? Proving it is easy, if you want an exercise.
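For numeric intuition only (not a replacement for the proof above), a short Python snippet; the sample values of n are arbitrary:

    import math

    # The ratio log(n) / n^(1/4) tends to 0, i.e. n^(1/4) eventually dominates
    # any constant multiple of log(n).
    for n in [10, 10**3, 10**6, 10**9, 10**12, 10**15]:
        print(f"n = {n:>16}: log(n) / n^0.25 = {math.log(n) / n**0.25:.4f}")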

Finding Big-O, Omega and theta

I've looked through the links, and I'm too braindead to understand the mechanical process of figuring them out. I understand the ideas of O, theta and omega, and I understand the "Rules". So let me work on this example with you guys to clear this up in my head :)
f(n) = 100n + log n
g(n) = n + (log n)^2
I need to find: whether f = O(g), or f = Ω(g), or both (in which case f = Θ(g))
so I know that 100n and n are the same, and they are both slower than log(n). I just need to figure out if (log(n))^2 is slower or faster. but I can't really remember anything about logs. if the log(n) is bigger, does it mean the number gets bigger or smaller?
Let me please add that my real struggle is in figuring out BOTH omega and theta. By definition f = O(g) if there is a constant c that will make c*g(n) bigger, and the same in reverse for omega. But how do I really test this?
You can usually figure it out from these rules:
Broadly, k < log(n)^k < n^k < k^n. You can replace each k with any constant you like (greater than 1 for the k^n term) and it remains true for large enough n.
If x is big, then 1/x is very close to 0.
For positive x and y, x < y if and only if log(x) < log(y). (Sometimes taking logs can help with complicated and messy products.)
log(k^n) = n * log(k).
For O, theta, and omega, you can ignore everything except the biggest term that doesn't cancel out.
Rules 1 and 5 suffice for your specific questions. But learn all of the rules.
You don't need to remember rules, but rather learn general principles.
Here, all you need to know is that log(n) is increasing and grows without limit, and the definition of big-O, namely f = O(g) if there's a c such that for all sufficiently large n, f(n) <= c * g(n). You might learn the fact about log by remembering that log(n) grows like the number of digits of n.
Can log^2(n) be O(log(n))? That would mean (using the definition of big-O) that log^2(n) <= c*log(n) for all sufficiently large n, so log^2(n)/log(n) <= c for sufficiently large n (*). But log^2(n)/log(n) = log(n), which grows without limit, so it can't be bounded by any c. So log^2(n) is not O(log(n)).
Can log(n) be O(log^2(n))? Well, at some point log(n) > 1 (since it's increasing without limit), and from that point on, log(n) < log^2(n). That proves that log(n) = O(log^2(n)), with the constant c equal to 1.
(*) If you're being extra careful, you need to exclude the possibility that log(n) is zero infinitely often.
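If you want a numeric companion to that argument, here is a small Python sketch; the sample values of n are arbitrary:

    import math

    # log^2(n) / log(n) = log(n) grows without bound, so log^2(n) is not O(log n);
    # log(n) / log^2(n) = 1/log(n) shrinks toward 0, consistent with log(n) = O(log^2 n).
    for n in [10, 10**3, 10**6, 10**9]:
        lg = math.log(n)
        print(f"n = {n:>10}: log^2/log = {lg:7.2f}   log/log^2 = {1/lg:.4f}")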

Big-O of `2n-2n^3`

I am working on an assignment, but a friend of mine disagrees with the answer to one part.
f(n) = 2n-2n^3
I find the complexity to be f(n) = O(n^3)
Am I wrong?
You aren't wrong, since O(n^3) does not provide a tight bound. However, typically you assume that f is increasing and try to find the smallest function g for which f=O(g) is true. Consider a simple function f=n+3. It's correct to say that f=O(n^3), since n+3 < n^3 for all n > 2 (just to pick an arbitrary constant). However, it's "more" correct to say that f=O(n), since n+3 < 2n for all n > 3, and this gives you a better feel for how f behaves as n increases.
In your case, f is decreasing as n increases, so it is true that f = O(g) for any function that stays positive as n increases. The "smallest" (or rather, slowest growing) such function is a constant function for some positive constant, and we usually write that as 2n - 2n^3 = O(1), since 2n - 2n^3 < 1 for all n>0.
You could even find some function of n that is decreasing as n increases, but decreases more slowly than your f, but such usage is rare. Big-O notation is most commonly used to describe algorithm running times as the input size increases, so n is almost universally assumed to be positive.
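For a quick look at the O(1) claim over positive integers, a minimal Python check (the range is an arbitrary finite sample):

    # 2n - 2n^3 is 0 at n = 1 and negative for every integer n >= 2,
    # so it never exceeds the constant 1: that is the 2n - 2n^3 = O(1) claim.
    for n in range(1, 11):
        f = 2*n - 2*n**3
        print(n, f, f < 1)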

Order functions of algorithms

Can someone help me understand this question? I may have it on my exam tomorrow but I can't find a similar question on the internet or in my lectures.
First you need to express each function as a Theta(something).
For instance, for the first one: Theta((1-n)(n^3-17)) = Theta(n^4 + ...) = Theta(n^4).
For the second one: Theta(30+log(n^9)) = Theta(30 + 9logn) = Theta(logn).
These are sorted as g1, g2, because n^4 = Omega(logn).
And so on.
For the sorting: saying that g1 = Omega(g2) means that g1 grows at least as fast as g2, that is, we are defining a lower bound. So, sort them from the worst (fastest growth, i.e. the slowest algorithm) to the best (NB: it is strange that the exercise wants "the first to be the most preferable", but the definition of Omega leaves no doubt).
Btw: if you want to be more formal, here is the definition of the Omega notation:
f = Omega(g) iff there exist c and n0 > 0 such that for all n >= n0 we have 0 <= c*g(n) <= f(n) (in words: f grows at least as fast as g).
First, you have to calculate the Theta notation by determining the growth class of each function, e.g. 1, log(n), n, n log(n) and so on. To do that you of course have to expand those functions.
Having the growth class of each function, you have to order them by their goodness.
Last, you have to put these functions into relations, like g1 = Omega(g2). Just keep in mind that a function t(n) is said to be in Omega(g(n)) if t(n) is bounded below by some multiple of g(n), e.g. n³ >= n² and therefore n³ is an element of Omega(n²). This can also be written as n³ = Omega(n²).
For theta, this answer and that one summarize what is to be found in your problem. Which g function can you find such that (say f is one of your 8 functions above)
multiplied by a constant bounds asymptotically above f (called O(g(n)))
multiplied by (usually) another constant bounds asymptotically below f (called Omega(g(n)))
For instance, for iv: 10^5·n, Θ(n) fits, as you can easily find two constants where k1·n bounds 10^5·n below and k2·n bounds it above, asymptotically. (Here f is both O(n) and Omega(n); iv. is an easy one.)
You need to understand that Big O, Big Omega and Big Theta all apply to worst/best/average-case analyses.
for some function:
Big O -> O(..) is an upper limit the function never exceeds, i.e. for large enough values
Big Omega -> is a lower bound the function never goes below, i.e. for large enough values
Big Theta means both at once: there are two constants c1 and c2 such that
c1 * g(n) <= f(n) <= c2 * g(n) for large enough n
So, turning to your sample:
i) it is of order n^4 for both Big Omega and Big O, thus Theta(n^4).
viii) it is constant, so Big O and Big Omega are the same, and thus Big Theta is the same.
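If a rough empirical cross-check helps while studying (it is not a proof, and the functions below are placeholders rather than the exact ones from the exercise), you can evaluate candidates at a large n and sort them:

    import math

    # Rough empirical ordering by growth: evaluate each candidate at one large n
    # and sort descending. A single-point comparison can be fooled by large
    # constant factors, so a real answer still needs the Theta/Omega definitions.
    candidates = {
        "n^4":      lambda n: float(n)**4,
        "n log n":  lambda n: n * math.log(n),
        "log n":    lambda n: math.log(n),
        "1":        lambda n: 1.0,
    }

    n = 10**6
    for name, f in sorted(candidates.items(), key=lambda kv: kv[1](n), reverse=True):
        print(f"{name:>8}: {f(n):.3e}")
    # Printed fastest-growing (worst) first, matching the g1 = Omega(g2) = ... chain.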
