Order functions of algorithms

Can someone help me understand this question? I may have a question like it on my exam tomorrow, but I can't find a similar question on the internet or in my lectures.

First you need to express each function as a Theta(something).
For instance, for the first one: Theta((1-n)(n^3-17)) = Theta(n^4 + ...) = Theta(n^4).
For the second one: Theta(30+log(n^9)) = Theta(30 + 9logn) = Theta(logn).
These are sorted as g1, g2, because n^4 = Omega(logn).
And so on.
For the sorting: saying that g1 = Omega(g2) means that g1 grows at least as fast as g2, that is, we are defining a lower bound. So, sort them from the worst (slowest algorithm, with the fastest-growing function) to the best. (NB: it is strange that the exercise wants "the first to be the most preferable", but the definition of Omega leaves no doubt.)
Btw: if you want to be more formal, here is the definition of the Omega notation:
f = Omega(g) iff there exist c > 0 and n0 > 0 such that for all n >= n0 we have 0 <= c*g(n) <= f(n) (in words: f grows at least as fast as g).
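If you want a quick numerical sanity check of such an ordering (a heuristic, not a proof), you can evaluate the simplified functions at a couple of large values of n and compare magnitudes. A minimal Python sketch, using only the functions quoted above:

```python
import math

# Rank the quoted functions by how fast their magnitude grows.
# This is only a spot check at two large n values, not a proof.
funcs = {
    "(1-n)(n^3-17)": lambda n: (1 - n) * (n**3 - 17),  # magnitude ~ n^4
    "30 + log(n^9)": lambda n: 30 + 9 * math.log(n),   # ~ log n
    "10^5 * n":      lambda n: 10**5 * n,              # ~ n
}

for n in (10**3, 10**6):
    # Largest magnitude first = fastest growth = "worst" function first.
    ranking = sorted(funcs, key=lambda name: abs(funcs[name](n)), reverse=True)
    print(n, ranking)
```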

First, you have to calculate the Theta notations by determining the growth class of each function, e.g. 1, log(n), n, n log(n) and so on. To do that you of course have to expand those functions.
Having the growth class of each function, you have to order them by their goodness.
Last, you have to put these functions into relations, like g1 = Omega(g2). Just keep in mind that a function t(n) is said to be in Omega(g(n)) if t(n) is bounded below by some multiple of g(n), e.g. n³ >= n² and therefore n³ is an element of Omega(n²). This can also be written as n³ = Omega(n²).

For theta, this answer and that one summarize what is to be found in your problem. Which g function can you find such that (say f is one of your 8 functions above)
multiplied by a constant bounds asymptotically above f (called O(g(n)))
multiplied by (usually) another constant bounds asymptotically below f (called Omega(g(n)))
For instance, for the iv: 10^5n, Θ(n) fits, as you can easily find two constants where k1·n bounds 10^5n below and k2·n bounds it above, asymptotically. (Here f is O(n) and Omega(n); the iv. is an easy one.)
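To make the constants concrete for that case: one valid choice is k1 = 1 and k2 = 10^5, since 1·n <= 10^5·n <= 10^5·n for every n >= 1, which witnesses both 10^5·n = Omega(n) and 10^5·n = O(n), hence 10^5·n = Θ(n).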

You need to understand that Big O, Big Omega and Big Theta can each be applied to the worst/best/average case of a function.
For some function:
Big O -> O(..) is an upper bound the function will never exceed, e.g. for large values of n
Big Omega -> is a lower bound; the function never goes below it, again for large values of n
Big Theta means the function is squeezed between constant multiples of the same bound, i.e. there are 2 constants such that:
c1 * g(n) <= f(n) <= c2 * g(n)
so going to your sample:
i) it is of order n^4 for both Big Omega and Big O, so Big Theta is n^4 as well.
viii) it is constant, so Big O and Big Omega are the same, thus Big Theta is the same.

Related

When something is Big O, does it mean that it is exactly the result of Big O?

When we say a method has the time complexity of O(n^2), is it meant in the same way as in 10^2 = 100, or does it mean that the method is at max or closest to that notation? I am really confused about how to understand Big O. I remember something called upper bound; would that mean at max?
It means that the running time is bounded above by N².
More precisely, T(N) < C·N², where C is some constant, and the inequality holds from a certain N* onward.
For example, 2N²+4N+6 = O(N²), because 2N²+4N+6 < 3N² for all N>5.
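A quick numeric spot check of that inequality in Python (an illustration, not a proof):

```python
# Verify 2N^2 + 4N + 6 < 3N^2 over a range of N > 5.
for N in range(6, 1000):
    assert 2*N**2 + 4*N + 6 < 3*N**2, N
print("2N^2 + 4N + 6 < 3N^2 holds for N = 6..999")
```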
Explanation
If a method f is inside O(g), with g being another function, it means that at some point (there exists some n_0 such that for all n > n_0) the function f will always output a smaller value than g from that point on. However, g is allowed to be scaled by an arbitrary constant k. So f(n) <= k * g(n) for all n above some n_0. So f is allowed to be bigger at first, if it then starts to be smaller and keeps being smaller.
We say f is asymptotically bounded by g. Asymptotically means that we do not care how f behaves in the beginning. Only what it will do when approaching infinity. So we discard all inputs below n_0.
Illustration
An illustration (originally a plot from Wikipedia) would be this:
The blue function is k * g with some constant k, the red one is f. We see that f is greater at first, but then, starting at x_0, it will always be smaller than k * g. Thus f is in O(g).
Definition
Mathematically, this can be expressed by
f ∈ O(g) ⟺ there exist k > 0 and n_0 such that f(n) <= k * g(n) for all n > n_0,
which is the usual definition of Big-O. From the explanation above, the definition should be clear. It says that from a certain n_0 on, the function f must be smaller than k * g for all inputs. k is allowed to be some constant.
Examples
Here are a couple of examples to familiarize with the definition:
n is in O(n) (trivially)
n is in O(n^2) (trivially)
5n is in O(n^2) (starting from n_0 = 5)
25n^2 is in O(n^2) (taking k = 25 or greater)
2n^2 + 4n + 6 is in O(n^2) (take k = 3, starting from n_0 = 5)
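A tiny Python spot check of the last three examples (again just an illustration of the definition, not a proof):

```python
# Each f stays below k * n^2 once n is above the stated n_0.
checks = [
    (lambda n: 5*n,               1, 5),  # 5n is in O(n^2), n_0 = 5
    (lambda n: 25*n**2,          25, 0),  # 25n^2 is in O(n^2), k = 25
    (lambda n: 2*n**2 + 4*n + 6,  3, 5),  # 2n^2 + 4n + 6 is in O(n^2), k = 3, n_0 = 5
]
for f, k, n0 in checks:
    assert all(f(n) <= k * n**2 for n in range(n0 + 1, 1000))
print("all example bounds hold")
```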
Notes
Actually, O(g) is a set in the mathematical sense. It contains all functions with the above mentioned property (which are asymptotically bounded by g).
So, although some authors write f = O(g), it is actually wrong and should be f in O(g).
There are also other, similar, sets, which only differ in the direction of the bound:
Big-O: less equals <=
Small-o: less <
Big-Omega: greater equals >=
Small-omega: greater >
Theta: Big-O and Big-Omega at the same time (equals)

Complexity from a recent exam that confused people

Do you think the following information is true?
If Θ(f(n)) = Θ(g(n)) AND g(n) > 0 everywhere THEN f(n)/g(n) ∈ Θ(1)
We are having a bit of an argument with our prof.
f(n) = Θ(g(n)) means there's c, d, n0 such that cg(n) <= f(n) <= dg(n) for n > n0.
Then, since g(n) > 0, c <= f(n)/g(n) <= d for n > n0.
So f(n)/g(n) = Θ(1).
Dividing functions f(n),g(n) is not the same as dividing their Big-O. For example let:
f(n) = n^3 + n^2 + n
g(n) = n^3
so:
O(f(n)) = O(n^3)
O(g(n)) = O(n^3)
but:
f(n)/g(n) = 1 + 1/n + 1/n^2 != constant !!!
[Edit1]
but as kfx pointed out, you are comparing complexities, so you want:
O(f(n)/g(n)) = O(1 + 1/n + 1/n^2) = O(1)
So the answer is Yes.
But beware complexity theory is not really my cup of tea and also I do not have any context to the question of yours.
Using the definitions of Landau notation https://en.wikipedia.org/wiki/Big_O_notation, it's easy to conclude that this is true: the limit of the division must be less than infinity but larger than 0.
It does not have to be exactly 1 but it has to be a finite constant, which is Θ(1).
A counterexample would be nice, and should be easy to give if the statement isn't true. A rigorous positive proof would probably need to go from the definition of the limit with respect to sequences, to prove the equivalence of the formal and limit definitions.
I use this definition and haven't seen it proven wrong. I suppose the disagreement might lie in the exact definition of Θ; it is known that people use these colloquially with minor differences, especially Big O. Or maybe in some tricky cases. For positively defined functions and sequences, I don't think it fails.
Basically there are three options for any pair of functions f, g: either the first grows asymptotically slower and we write f=o(g) (notice I'm using small o), or the first grows asymptotically faster: f=ω(g) (again, small omega), or they are asymptotically tightly bound: f=Θ(g).
What f=o(g) means is stricter than big O in that it doesn't allow for f=Θ(g) to be true; f=Θ(g) implies both f=O(g) and f=Ω(g), but o, Θ and ω are mutually exclusive.
To find out whether f=o(g) it's sufficient to evaluate the limit of f(n)/g(n) as n goes to infinity: if it is zero, f=o(g) is true; if it is infinity, f=ω(g) is true; and if it is any finite positive real number, f=Θ(g) is your answer. This is not a definition, but merely a way to evaluate a statement. (One assumption I made here was that both f and g are positive.)
A special case is if the limit of f(n)/1 = f(n) as n goes to infinity is a finite number: it means f(n)=Θ(1) (basically we chose the constant function for g).
Now we're getting to your problem: since f=Θ(g) implies f=O(g), we know that there exist c>0 and n0 such that f(n) <= c*g(n) for all n>n0. Thus we know that f(n)/g(n) <= (c*g(n))/g(n) = c for all n>n0. The same can be done for Ω, just with the opposite inequality sign. Thus we get that f(n)/g(n) is between c1 and c2 from some n0 on, and those are known to be finite numbers because of how Θ is defined. Because we know our new function stays in that range, we also know it is bounded by finite positive constants, thus proving it is indeed Θ(1).
Conclusion: I believe you were right, and I would like your professor to offer a counterexample to disprove the statement. If something didn't make sense feel free to ask more in the comments, I'll try to clarify.
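As a rough numerical companion to the ratio test described above (a heuristic sketch, not a proof; the helper `classify` and its cut-off thresholds are made up purely for illustration):

```python
def classify(f, g, n=10**6):
    """Guess the relation of f to g from the ratio f(n)/g(n) at one large n."""
    r = f(n) / g(n)
    if r < 1e-6:
        return "f = o(g): ratio tends to 0"
    if r > 1e6:
        return "f = omega(g): ratio tends to infinity"
    return "f = Theta(g): ratio stays between finite positive bounds"

f = lambda n: n**3 + n**2 + n
g = lambda n: n**3
print(classify(f, g))              # Theta: the ratio tends to 1
print(classify(lambda n: n, g))    # o: the ratio tends to 0
```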

Asymptotic Notation: What is n₀ in the formula, and how do we find the constant

I was studying the Asymptotic Notations topic. I reckon that its formula is quite simple, yet it tells me little, and there are a couple of things I don't understand.
When we say
f(n) <= c.g(n) where n >= n₀
We don't know the values of c and n₀ at first, but by dividing f(n) or g(n) we get the value of c. (Here is where the confusion lies.)
First question: how do we decide which side of the equation, f(n) or g(n), has to be divided?
Suppose we have to prove:
2n^2 = O(n^3)
here f(n) = 2n^2 and g(n) = n^3
which will form as:
2n^2 <= c · n^3
Now in the notes I have read, they divide by 2n^2 to get the value of c; by doing that we get c = 1.
But if we divide by n^3 [which I don't know whether we can do or not] we get c = 2.
How do we know which value we have to divide by?
Second question: where does n₀ come from, and what is its task?
Well, by the formula we know n >= n₀, which means whatever n we take should be equal to n₀ or greater than it.
But I am confused about where we use n₀. Why is it needed?
By just finding c and n, can't we get to the conclusion whether
n^2 = O(n^3) or not?
Would anyone like to address this? Many thanks in advance.
Please don't snub me or give -1 if I ask anything stupid. Any useful link which covers all of this would be enough as well. :3
I have gone through the following links before posting this question this is what i understand and here are those links:
http://openclassroom.stanford.edu/MainFolder/VideoPage.php?course=IntroToAlgorithms&video=CS161L2P8&speed=
http://faculty.cse.tamu.edu/djimenez/ut/utsa/cs3343/lecture3.html
https://sites.google.com/sites/algorithmss15
From the second url in your question:
Let's define big-Oh more formally:
O(g(n)) = { the set of all f such that there exist positive constants c and n0 satisfying 0 <= f(n) <= cg(n) for all n >= n0 }.
This means that for f(n) = 4*n*n + 135*n*log(n) + 1e8*n the big-O is O(n*n).
Because for large enough c and n0 this is true:
4*n*n + 135*n*log(n) + 1e8*n = f(n) <= c*n*n
In this particular case the [c,n0] can be for example [6, 1e8], because (this is of course not valid mathematical proof, but I hope it's "obvious" from it):
f(1e8) = 4*1e16 + 135*8*1e8 + 1e16 = 5*1e16 + 1080*1e8 <= 6*1e16 = 6*1e8*1e8 = c*n*n. There are of course many more possible [c,n0] for which f(n) <= c*n*n holds true, but you need to find only one such pair to prove that f(n) is O(n*n).
As you can see, for n=1 you need quite a huge c (like 1e9), so at first look the f(n) may look much bigger than n*n, but in the asymptotic notion you don't care about the first few initial values, as long as the behaviour since some boundary is as desired. That boundary is some [c,n0]. If you can find such boundary ([6, 1e8]), then QED: "f(n) has big-O of n*n".
The n >= n₀ means that whatever you say in the lemma can be false for some first k (countably many) parameters n' : n' < n₀, but from some n₀ on the lemma is true for all the rest of the (bigger) integers.
It says that you don't care about first few integers ("first few" can be as "little" as 1e400, or 1e400000, ...etc... from the theory point of view), and you only care about the bigger (big enough, bigger than n₀) n values.
Ultimately it means, that in the big-O notation you usually write the simplest and lowest function having the same asymptotic notion as the examined f(n).
For example, for any f(n) of polynomial type like f(n) = ∑ a_i·n^i, i = 0..k, we have O(f(n)) = O(n^k).
So I did throw away all the lower 0..(k-1) powers of n, as they stand no chance against n^k in the long run (for large n). And the a_k does lose to some bigger c.
In case you are lost in that i, k, ...:
f(n) = 34n^4 + 23920392n^2 has O(n^4).
As for large enough n that n^4 will "eclipse" any value created from n^2. And 34n^4 is only 34 times bigger than n^4 => 34 is a constant (relates to c) and can be omitted from the big-O notation too.
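To make the [6, 1e8] pair above tangible, here is a small Python spot check. (I am assuming log means log base 10 here, which is what the arithmetic in the answer uses; with the natural log you would just need a slightly different pair.)

```python
import math

def f(n):
    return 4*n*n + 135*n*math.log10(n) + 1e8*n

c, n0 = 6, 10**8
for n in (10**8, 10**9, 10**12):  # a few sample points at and beyond n0
    assert f(n) <= c * n * n, n
print("f(n) <= 6*n*n at the sampled points n >= 1e8")
```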

Role of lower order terms in big O notation

In big O notation, we always say that we should ignore constant factors for most cases. That is, rather than writing,
3n^2-100n+6
we are almost always satisfied with
n^2
since that term is the fastest growing term in the equation.
But I found that many algorithm courses start by comparing functions with many terms,
2n^2+120n+5 = big O of n^2
then finding c and n0 for those long functions, before recommending to ignore low order terms in the end.
My question is, what would I get from trying to understand and analyse these kinds of functions with many terms? Before this month I was comfortable with understanding what O(1), O(n), O(log(n)), O(n^3) mean. But am I missing some important concepts if I just rely on these typically used functions? What will I miss if I skip analysing those long functions?
Let's first of all describe what we mean when we say that f(n) is in O(g(n)):
... we can say that f(n) is O(g(n)) if we can find a constant c such
that f(n) is less than c·g(n) for all n larger than n0, i.e., for all
n > n0.
In equation form: we need to find one set of constants (c, n0) that fulfils
f(n) < c · g(n), for all n > n0, (+)
Now, the result that f(n) is in O(g(n)) is sometimes presented in different forms, e.g. as f(n) = O(g(n)) or f(n) ∈ O(g(n)), but the statement is the same. Hence, from your question, the statement 2n^2+120n+5 = big O of n^2 is just:
f(n) = 2n^2 + 120n + 5
a result after some analysis: f(n) is in O(g(n)), where
g(n) = n^2
OK, with this out of the way, let us look at the constant term in the functions we want to analyse asymptotically, and let's look at it educationally, using your example.
As the result of any big-O analysis is the asymptotic behaviour of a function, in all but some very unusual cases, the constant term has no effect whatsoever on this behaviour. The constant term can, however, affect how we choose the constant pair (c, n0) used to show that f(n) is in O(g(n)) for some functions f(n) and g(n), i.e., the non-unique constant pair (c, n0) used to show that (+) holds. We can say that the constant term will have no effect on the result of the analysis, but it can affect our derivation of this result.
Let's look at your function as well as another related function
f(n) = 2n^2 + 120n + 5 (x)
h(n) = 2n^2 + 120n + 22500 (xx)
Using a similar approach as in this thread, for f(n), we can show:
linear term:
120n < n^2 for n > 120 (verify: 120n = n^2 at n = 120) (i)
constant term:
5 < n^2 for e.g. n > 3 (verify: 3^2 = 9 > 5) (ii)
This means that if we replace both 120n as well as 5 in (x) by n^2 we can state the following inequality result:
Given that n > 120, we have:
2n^2 + n^2 + n^2 = 4n^2 > {by (i) and (ii)} > 2n^2 + 120n + 5 = f(n) (iii)
From (iii), we can choose (c, n0) = (4, 120), and (iii) then shows that these constants fulfil (+) for f(n) with g(n) = n^2, and hence
result: f(n) is in O(n^2)
Now, for h(n), we analogously have:
linear term (same as for f(n))
120n < n^2 for n > 120 (verify: 120n = n^2 at n = 120) (I)
constant term:
22500 < n^2 for e.g. n > 150 (verify: 150^2 = 22500) (II)
In this case, we replace 120n as well as 22500 in (xx) by n^2, but we need a larger lower bound on n for these replacements to hold, namely n > 150. Hence, the following holds:
Given that n > 150, we have:
2n^2 + n^2 + n^2 = 4n^2 > {by (I) and (II)} > 2n^2 + 120n + 22500 = h(n) (III)
In same way as for f(n), we can, here, choose (c, n0) = (4, 150), and (III) then shows that these constants fulfil (+) for h(n), with g(n) = n^2, and hence
result: h(n) is in O(n^2)
Hence, we have the same result for both functions f(n) and h(n), but we had to use different constants (c,n0) to show these (i.e., somewhat different derivation). Note finally that:
Naturally the constants (c,n0) = (4,150) (used for h(n) analysis) are also valid to show that f(n) is in O(n^2), i.e., that (+) holds for f(n) with g(n)=n^2.
However, not the reverse: (c,n0) = (4,120) cannot be used to show that (+) holds for h(n) (with g(n)=n^2).
The core of this discussion is that:
As long as you look at sufficiently large values of n, you will be able to describe the constant terms in relations as constant < dominantTerm(n), where, in our example, we look at the relation with regard to dominant term n^2.
The asymptotic behaviour of a function will not (in all but some very unusual cases) depend on the constant terms, so we might as well skip looking at them at all. However, for a rigorous proof of the asymptotic behaviour of some function, we need to take into account also the constant terms.
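A short Python spot check of the two constant pairs derived above (illustrative only):

```python
f = lambda n: 2*n**2 + 120*n + 5
h = lambda n: 2*n**2 + 120*n + 22500

# (c, n0) = (4, 120) works for f(n), and (c, n0) = (4, 150) works for h(n).
assert all(f(n) < 4*n**2 for n in range(121, 10_000))
assert all(h(n) < 4*n**2 for n in range(151, 10_000))

# As noted, (4, 120) is not enough for h(n): the bound already fails at n = 121.
assert not (h(121) < 4 * 121**2)
print("constant-pair checks passed")
```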
Ever have intermediate steps in your work? That is likely what this is: when you are computing a big O, chances are you don't already know for sure what the highest-order term is, so you keep track of them all and then determine which complexity class makes sense in the end. There is also something to be said for understanding why the lower-order terms can be ignored.
Take some graph algorithms like a minimum spanning tree or shortest path. Now, just by looking at such an algorithm, can you tell what the highest term will be? I know I wouldn't, and so I'd trace through the algorithm and collect a bunch of terms.
If you want another example, consider Sorting Algorithms and whether you want to memorize all the complexities or not. Bubble Sort, Shell Sort, Merge Sort, Quick Sort, Radix Sort and Heap Sort are a few of the more common algorithms out there. You could either memorize both the algorithm and complexity or just the algorithm and derive the complexity from the pseudo code if you know how to trace them.

Exponentials: Little Oh [duplicate]

This question already has answers here: Difference between Big-O and Little-O Notation.
What does n^b = o(a^n) (o is little oh) mean, intuitively? I am just beginning to teach myself algorithms and I am having a hard time interpreting such expressions every time I see one. Here, the way I understood it is that for the function n^b, the rate of growth is a^n. But this is not making sense to me, regardless of being right or wrong.
f(n) = o(g(n)) means that f(n)/g(n) -> 0 as n -> infinity.
For your problem it must hold that a > 1. Then (n^b)/(a^n) -> 0 as n -> infinity, since (n^b)/(a^n) = (n^b)/(sqrt(a)^n * sqrt(a)^n) = ((n^b)/sqrt(a)^n) * (1/sqrt(a)^n). Let f(n) = (n^b)/sqrt(a)^n; this function first increases and then decreases, so it attains a maximum value max(f(n)) = M. Then (n^b)/(a^n) <= M/(sqrt(a)^n). Since a > 1, sqrt(a) > 1, so sqrt(a)^n -> infinity as n -> infinity, that is M/(sqrt(a)^n) -> 0. At last, we get (n^b)/(a^n) -> 0 as n -> infinity, that is n^b = o(a^n) by definition.
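As a small numerical illustration of that limit (not a proof), here is a Python snippet with b = 3 and a = 2:

```python
# n^b / a^n shrinks towards 0 as n grows (here b = 3, a = 2).
b, a = 3, 2
for n in (10, 20, 40, 80):
    print(n, n**b / a**n)
# prints ratios of roughly 0.98, 0.0076, 5.8e-08 and 4.2e-19
```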
(For simplicity I'll assume that all functions always return positive values. This is the case for example for functions measuring run-time of an algorithm, as no algorithm runs in "negative" time.)
First, a recap of big-O notation, to clear up a common misunderstanding:
To say that f is O(g) means that f grows asymptotically at most as fast as g. More formally, treating both f and g as functions of a variable n, to say that f(n) is O(g(n)) means that there is a constant K, so that eventually, f(n) < K * g(n). The word "eventually" here means that there is some fixed value N (which is a function of K, f, and g), so that if n > N then f(n) < K * g(n).
For example, the function f(n) = n + 2 is O(n^2). To see why, let K = 1. Then, if n > 10, we have n + 2 < n^2, so our conditions are satisfied. A few things to note:
For n = 1, we have f(n) = 3 and g(n) = 1, so f(n) < K * g(n) actually fails. That's ok! Remember, the inequality only needs to hold eventually, and it does not matter if the inequality fails for some small finite list of n.
We used K = 1, but we didn't need to. For example, K = 2 would also have worked. The important thing is that there is some value of K which gives us the inequality we want eventually.
We saw that n + 2 is O(n^2). This might look confusing, and you might say, "Wait, isn't n + 2 actually O(n)?" The answer is yes. n + 2 is O(n), O(n^2), O(n^3), O(n/3), etc.
Little-o notation is slightly different. Big-O notation, intuitively, says that if f is O(g), then f grows asymptotically at most as fast as g. Little-o notation says that if f is o(g), then f grows asymptotically strictly slower than g.
Formally, f is o(g) if for any (let's say positive) choice of K, eventually the inequality f(n) < K * g(n) holds. So, for instance:
The function f(n) = n is not o(n). This is because, for K = 1, there is no value of n so that f(n) < K * g(n). Intuitively, f and g grow asymptotically at the same rate, so f does not grow strictly slower than g does.
The function f(n) = n is o(n^2). Why is this? Pick your favorite positive value of K. (To see the actual point, try to make K small, for example 0.001.) Imagine graphing the functions f(n) and K * g(n). One is a straight line through the origin of positive slope, and the other is a concave-up parabola through the origin. Eventually the parabola will be higher than the line, and will stay that way. (If you remember your pre-calc/calculus...)
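A quick worked version of that last claim: for any fixed K > 0, the inequality n < K * n^2 holds exactly when n > 1/K, so any N >= 1/K gives the "eventually" required by the definition, no matter how small K is.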
Now we get to your actual question: let f(n) = n^b and g(n) = a^n. You asked why f is o(g).
Presumably, the author of the original statement treats a and b as constant, positive real numbers, and moreover a > 1 (if a <= 1 then the statement is false).
The statement, in English, is:
For any positive real number b, and any real number a > 1, the function n^b grows asymptotically strictly slower than a^n.
This is an important thing to know if you are ever going to deal with algorithmic complexity. Put more simply, one can say "polynomials grow much more slowly than exponential functions." It isn't immediately obvious that this is true, and the proof is too much to write out here, so here is a reference:
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
Probably you will have to have some comfort with math to be able to read any proof of this fact.
Good luck!
The super high level meaning of the statement n^b is o(a^n) is just that exponential functions like a^n grow much faster than polynomial functions like n^b.
The important thing to understand when looking at big O and little o notation is that they are both upper bounds. I'm guessing that's why you're confused. n^b is o(a^n) because the growth rate of a^n is much bigger. You could probably find a tighter little o upper bound on n^b (one where the gap between the bound and the function is smaller), but a^n is still valid. It's also probably worth looking at the difference between Big O and little o.
Remember that a function f is Big O of a function g if for some constant k > 0, you can eventually find a minimum value for n so that f(n) ≤ k * g(n).
A function f is little o of a function g if for any constant k > 0 you can eventually find a minimum value for n so that f(n) ≤ k * g(n).
Note that the little o requirement is harder to fulfil: if a function f is little o of a function g, then it is also Big O of g, and the statement says that g grows strictly faster than f, which is more than Big O alone guarantees.
In your example, if b is 3 and a is 2 and we set k to 1, we can work out the minimum value for n so that n^b ≤ k * a^n. In this case, it's between 9 and 10 since
9³ = 729 and 1 * 2⁹ = 512, which means at 9, a^n is not yet greater than n^b
but
10³ = 1000 and 1 * 2¹⁰ = 1024, which means a^n is now greater than n^b.
You can see by graphing these functions that a^n will be greater than n^b for any value of n > 10. At this point we've only shown that n^b is Big O of a^n, since Big O only requires that for some value of k > 0 (we picked 1), a^n ≥ n^b from some minimum n onward (in this case it's between 9 and 10).
To show that n^b is little o of a^n, we would have to show that for any k greater than 0 you can still find a minimum value of n so that k * a^n > n^b. For example, if you picked k = .5 the minimum of 10 we found earlier doesn't work, since 10³ = 1000, and .5 * 2¹⁰ = 512. But we can just keep sliding the minimum for n out further and further; the smaller you make k, the bigger the minimum for n will be. Saying n^b is little o of a^n means no matter how small you make k, we will always be able to find a big enough value for n so that n^b ≤ k * a^n.
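Here is a small Python sketch of that last point: as k shrinks, the cutoff beyond which n^b ≤ k * a^n holds (with b = 3, a = 2) moves further out, but it always exists. The helper `cutoff` is just for illustration.

```python
def cutoff(k, limit=200):
    """Return one n0 such that n**3 <= k * 2**n for all n in (n0, limit]."""
    last_fail = 0
    for n in range(1, limit + 1):
        if n**3 > k * 2**n:
            last_fail = n
    return last_fail + 1

for k in (1, 0.5, 0.1, 0.001):
    print(k, cutoff(k))
# prints roughly: 1 -> 10, 0.5 -> 12, 0.1 -> 16, 0.001 -> 24
```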

Resources