Given two functions, is one function the big-O of the other?

My question refers to big-O notation in algorithm analysis. While big-O may look like a pure math topic, it is very useful in algorithm analysis.
Suppose two functions are defined below:
f(n) = 2^n when n is even
f(n) = n when n is odd
g(n) = n when n is even
g(n) = 2^n when n is odd.
For the above two functions, is one big-O of the other? Or is neither function big-O of the other?
Thanks!

In this case,
f ∉ O(g), and
g ∉ O(f).
This is because no matter what constants N and k you pick,
there exists i ≥ N such that f(i) > k g(i), and
there exists j ≥ N such that g(j) > k f(j).
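As a quick numeric illustration (a Python sketch, not a proof), with an arbitrary sample constant k and cutoff N, picking the first even index and the first odd index past N already shows both failures:

def f(n):
    return 2**n if n % 2 == 0 else n

def g(n):
    return n if n % 2 == 0 else 2**n

k, N = 1000, 50                      # arbitrary sample constants; any choice behaves the same
i = N if N % 2 == 0 else N + 1       # first even index >= N
j = N + 1 if N % 2 == 0 else N       # first odd index >= N
print(f(i) > k * g(i))               # True: 2^i dwarfs k*i on even inputs
print(g(j) > k * f(j))               # True: 2^j dwarfs k*j on odd inputs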

The big-O relationship is quite specific in that one function must, beyond some finite n, always be bounded above by a constant multiple of the other.
Is that true here? If so, give such an n and constant. If not, you should prove that it isn't.

Big-O and Big-Theta notations often get confused.
A layman's attempt at a definition: f(n) = O(g(n)) means that g grows at least as fast as f, i.e. that given a large enough n, f(n) <= k*g(n) where k is some constant. That means that if f(x) = 2x^3, then it is in O(x^3), O(x^4), O(2^x), O(x!), etc.
Big-Theta means that one function grows as fast as the other, with neither one being able to "outgrow" the other, i.e. k1*g(n) <= f(n) <= k2*g(n) for some k1 and k2. In programming terms that means the two functions have the same level of complexity. If f(x) = 2x^3, then it is in Θ(x^3): for example, with k1=1 and k2=3, 1*x^3 <= 2*x^3 <= 3*x^3.
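A minimal Python check of that sandwich (sample points only, just to see the constants at work):

f = lambda x: 2 * x**3
for x in [1, 10, 100, 1000]:
    assert 1 * x**3 <= f(x) <= 3 * x**3   # k1 = 1, k2 = 3 from the example above
print("2*x^3 stays between 1*x^3 and 3*x^3 on the sampled points")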
In my experience, whenever programmers talk about Big-O, the discussion is actually about Big-Θ, as we care more about the "as fast as" part than about the "no faster than" part.
That said, if two functions with different Θ's are combined, as in your example, the larger one (Θ(2^n)) swallows the smaller one (Θ(n)), so both f and g have the exact same Big-O and Big-Θ complexities. In this case, it's both correct that
f(n) = O(g(n)), also f(n) = Θ(g(n))
g(n) = O(f(n)), also g(n) = Θ(f(n))
so, as they have the same complexity, they are O- and Θ-bounded by each other.

Related

Family of Bachmann–Landau notations

Could someone please help me understand the notations mentioned in the picture? I am trying to understand "Big O notation". Under the "Family of Bachmann–Landau notations" table there is a "Formal definition" column with a lot of notations and equations that I haven't come across before. Is anyone familiar with these? https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann–Landau_notations
The logic behind these definitions is actually quite simple: it basically says that no matter what constant multiplies the result, from some point where n is big enough, one of the functions starts to be bigger/smaller and it remains that way.
To see the real difference, I will explain little-o (which says that some function has strictly smaller complexity than another). It says that for every k bigger than zero you can find a value of n called n_0 such that all n bigger than n_0 follow this pattern: f(n) <= k*g(n).
So you have two functions and you plug in n as a parameter. Then no matter what you pick as k, you can always find a value of n for which f(n) <= k*g(n), and every value bigger than that one will also satisfy the inequality.
Consider for example:
f(n) = n * 100
g(n) = n^2
So if you try e.g. n=5, it does not tell you which one has bigger complexity, because 5*100=500 and 5^2=25. If you pick a number big enough, e.g. n=100, then f(n)=100*100=10000 and g(n)=100^2=100*100=10000, so we arrive at the same value. For anything bigger than that, g(n) will become bigger and bigger.
It also has to satisfy the inequality f(n) <= k*g(n). For example, if I pick k=0.1, then
100*n <= 0.1*n^2      (multiply both sides by 10)
1000*n <= n^2         (divide both sides by n)
1000 <= n
So with these functions you can see that for k=0.1 there is an n_0 = 1000 that fulfills the inequality, and that is enough: for all n > 1000 it keeps holding, and k*g(n) will always be bigger than f(n), therefore g has the higher complexity. (OK, the real proof is not quite that easy, but you can see the pattern.) The point is that no matter what k is, even if it is k=0.000000001, there will always be a breaking point n_0, and from that point on k*g(n) will be bigger than f(n).
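Here is a small Python sketch of that worked example (it only samples a finite range, so it illustrates rather than proves the claim):

k = 0.1
f = lambda n: 100 * n
g = lambda n: n**2

n0 = 1000
assert all(f(n) <= k * g(n) for n in range(n0, 10 * n0))   # holds from the breaking point on
assert f(999) > k * g(999)                                 # just below n_0 it still fails
print("f(n) <= 0.1*g(n) for every sampled n >= 1000")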
We can also try a negative example, to see the difference between actually growing faster (like n^2 over n) and just being bigger by a constant factor.
Lets take:
f(n) = n
g(n) = 10*n
So in ordinary algebra g(n) > f(n), right? But in complexity theory we need to know whether it grows faster, and if so, whether it grows faster than just by a constant multiple.
So if we pick k=0.01, you can see that no matter how big n gets, you never find an n_0 that fulfills f(n) <= k*g(n), so f(n) != o(g(n)).
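And the negative case in the same style (a Python sketch; again only a finite sample, but the algebra already shows no n can ever work):

k = 0.01
f = lambda n: n
g = lambda n: 10 * n

# f(n) <= k*g(n) would mean n <= 0.1*n, which no positive n satisfies
print(any(f(n) <= k * g(n) for n in range(1, 1_000_000)))   # False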
In terms of complexity theory you can read the notations as comparison operators, so
f(n) = o(g(n)) -> f(n) < g(n)
f(n) = O(g(n)) -> f(n) <= g(n)
f(n) = Θ(g(n)) -> f(n) === g(n)
//... etc.; remember these equations are not algebraic, they are just an analogy for complexity

Complexity from a recent exam that confused people

Do you think the following statement is true?
If Θ(f(n)) = Θ(g(n)) AND g(n) > 0 everywhere THEN f(n)/g(n) ∈ Θ(1)
We are having a bit of an argument with our prof.
f(n) = Θ(g(n)) means there are constants c, d > 0 and n0 such that cg(n) <= f(n) <= dg(n) for n > n0.
Then, since g(n) > 0, c <= f(n)/g(n) <= d for n > n0.
So f(n)/g(n) = Θ(1).
Dividing functions f(n),g(n) is not the same as dividing their Big-O. For example let:
f(n) = n^3 + n^2 + n
g(n) = n^3
so:
O(f(n)) = n^3
O(g(n)) = n^3
but:
f(n)/g(n) = 1 + 1/n + 1/n^2 != constant !!!
[Edit1]
but as kfx pointed out, you are comparing complexities, so you want:
O(f(n)/g(n)) = O(1 + 1/n + 1/n^2) = O(1)
So the answer is Yes.
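A quick Python sketch of why the ratio is Θ(1) even though it is not constant: it stays between fixed bounds (here 1 and 3 for n >= 1) and tends to 1.

f = lambda n: n**3 + n**2 + n
g = lambda n: n**3

for n in [1, 10, 100, 1000, 10**6]:
    r = f(n) / g(n)       # equals 1 + 1/n + 1/n^2
    assert 1 <= r <= 3
    print(n, r)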
But beware: complexity theory is not really my cup of tea, and I do not have any context for your question.
Using the definitions of Landau notation (https://en.wikipedia.org/wiki/Big_O_notation), it's easy to conclude that this is true: the limit of the quotient must be less than infinity but larger than 0.
It does not have to be exactly 1, but it has to be a finite positive constant, which is Θ(1).
A counterexample would be nice, and should be easy to give if the statement isn't true. A rigorous positive proof would probably need to go from the definition of the limit of a sequence and prove the equivalence of the formal and the limit definitions.
I use this definition and haven't seen it proven wrong. I suppose the disagreement might lie in the exact definition of Θ; it is known that people use these colloquially with minor differences, especially Big-O. Or maybe in some tricky cases. For positive functions and sequences, I don't think it fails.
Basically there are three options for any pair of functions f, g: Either the first grows asymptotically slower and we write f=o(g) (notice I'm using small o), the first grows asymptotically faster: f=ω(g) (again, small omega) or they are asymptotically tightly bound: f=Θ(g).
What f=o(g) means is stricter than big-O in that it doesn't allow f=Θ(g) to be true; f=Θ(g) implies both f=O(g) and f=Ω(g), but o, Θ and ω are mutually exclusive.
To find out whether f=o(g), it is sufficient to evaluate the limit of f(n)/g(n) as n goes to infinity: if it is zero, f=o(g) is true; if it is infinity, f=ω(g) is true; and if it is any finite positive number, f=Θ(g) is your answer. This is not a definition, but merely a way to evaluate a statement. (Assumptions I made here are that both f and g are positive and that the limit exists.)
A special case: if the limit, as n goes to infinity, of f(n)/1 = f(n) is a finite positive number, it means f(n)=Θ(1) (basically we chose the constant function for g).
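A rough numeric version of that limit test, as a Python sketch (it samples finitely many points, so it is a heuristic for building intuition, not a proof):

import math

def ratios(f, g, points=(10, 100, 1000, 10**4, 10**5)):
    return [f(n) / g(n) for n in points]

print(ratios(lambda n: n,       lambda n: n**2))             # heads to 0:        n = o(n^2)
print(ratios(lambda n: n**2,    lambda n: n * math.log(n)))  # heads to infinity: n^2 = ω(n log n)
print(ratios(lambda n: 3*n + 7, lambda n: n))                # heads to 3:        3n+7 = Θ(n)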
Now we're getting to your problem: since f=Θ(g) implies f=O(g), we know that there exist c>0 and n0 such that f(n) <= c*g(n) for all n>n0. Thus we know that f(n)/g(n) <= (c*g(n))/g(n) = c for all n>n0. The same can be done for Ω, just with the opposite inequality sign. Thus we get that f(n)/g(n) is between c1 and c2 from some n0 on, and these are finite positive numbers because of how Θ is defined. Because our new function stays in that range, it is bounded above and below by positive constants, which is exactly what f(n)/g(n) = Θ(1) means.
Conclusion: I believe you were right, and I would like your professor to offer a counterexample to disprove the statement. If something didn't make sense, feel free to ask more in the comments; I'll try to clarify.

Compare two complexity functions by taking logarithm?

I found that taking the logarithm of both sides when comparing two functions asymptotically is a common technique (according to some solutions to problems from the CLRS book).
But does it always hold that the asymptotic relation of two functions after taking their logarithm indicates their original asymptotic relation?
I kind of doubt if it works when comparing two exponential functions.
For example log(3^n) = nlog3, log(2^n) = nlog2, then it should indicate that O(2^n) and O(3^n) are on the same level of running time, which is not right.
Asymptotic bounds implicitly include a multiplicative constant which is ignored.
Formally, f(n) = O(g(n)) means that you can find N and C such that n > N => f(n) < C*g(n).
When taking the logarithm, the multiplicative constant becomes an additive one, log(f(n)) < log(C) + log(g(n)), and it isn't true that f(n) = O(g(n)) <=> log(f(n)) = O(log(g(n))).
So if you compare two complexities by their logarithms, you may only drop an additive constant, not a multiplicative one, and n*log(3) indeed differs from n*log(2) by a multiplicative constant.
Similarly, O(n²) and O(n³) differ because 2*log(n) and 3*log(n) don't have the same coefficient.
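A short Python sketch of the failure for exponentials: the logarithms differ only by the multiplicative constant log(3)/log(2), yet the functions themselves drift apart without bound, so 3^n is not O(2^n).

import math

for n in [10, 50, 100]:
    log_ratio = (n * math.log(3)) / (n * math.log(2))   # constant, about 1.585
    true_ratio = 3**n / 2**n                            # 1.5^n, grows without bound
    print(n, round(log_ratio, 3), true_ratio)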

Asymptotic Notations: (an + b) ∈ O(n^2)

I was reading Introduction to Algorithms by Thomas H. Cormen when I encountered this statement (in the section on asymptotic notations):
when a>0, any linear function an+b is in O(n^2), which is essentially verified by taking c = a + |b| and n0 = max(1, -b/a)
I can't understand why O(n^2) and not O(n). When will the O(n) upper bound fail?
For example, for 3n+2, according to the book
3n+2 <= 5n^2 for n >= 1
but this also holds:
3n+2 <= 5n for n >= 1
So why is the upper bound in terms of n^2?
Well I found the relevant part of the book. Indeed the excerpt comes from the chapter introducing big-O notation and relatives.
The formal definition of the big-O is that the function in question does not grow asymptotically faster than the comparison function. It does not say anything about whether the function grows asymptotically slower, so:
f(n) = n is in O(n), O(n^2) and also O(e^n) because n does not grow asymptotically faster than any of these. But n is not in O(1).
Any function in O(n) is also in O(n^2) and O(e^n).
If you want to describe the tight asymptotic bound, you would use the big-Θ notation, which is introduced just before the big-O notation in the book. f(n) ∊ Θ(g(n)) means that f(n) does not grow asymptotically faster than g(n) and the other way around. So f(n) ∊ Θ(g(n)) is equivalent to f(n) ∊ O(g(n)) and g(n) ∊ O(f(n)).
So f(n) = n is in Θ(n) but not in Θ(n^2) or Θ(e^n) or Θ(1).
Another example: f(n) = n^2 + 2 is in O(n^3) but not in Θ(n^3), it is in Θ(n^2).
You need to think of O(...) as a set (which is why the set theoretic "element-of"-symbol is used). O(g(n)) is the set of all functions that do not grow asymptotically faster than g(n), while Θ(g(n)) is the set of functions that neither grow asymptotically faster nor slower than g(n). So a logical consequence is that Θ(g(n)) is a subset of O(g(n)).
Often = is used instead of the ∊ symbol, which really is misleading. It is pure notation and does not share any properties with the actual =. For example 1 = O(1) and 2 = O(1), but not 1 = O(1) = 2. It would be better to avoid using = for the big-O notation. Nonetheless you will later see that the = notation is useful, for example if you want to express the complexity of rest terms, for example: f(n) = 2*n^3 + 1/2*n - sqrt(n) + 3 = 2*n^3 + O(n), meaning that asymptotically the function behaves like 2*n^3 and the neglected part does asymptotically not grow faster than n.
All of this is kind of against the typical usage of big-O notation. You often find the time/memory complexity of an algorithm defined by it, when really it should be defined by big-Θ notation. For example, if you have an algorithm in O(n^2) and one in O(n), then the first one could actually still be asymptotically faster, because it might also be in Θ(1). The reason for this may sometimes be that a tight Θ-bound does not exist or is not known for the given algorithm, so at least the big-O gives you a guarantee that things won't take longer than the given bound. By convention you always try to give the lowest known big-O bound, even though this is not formally necessary.
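To connect the set view with concrete numbers, here is a hedged Python sketch (check_witness is a made-up helper for illustration) that sanity-checks a claimed witness pair (c, n0) for "f is in O(g)" on a finite range. Passing the check is only evidence; failing it refutes that particular witness.

def check_witness(f, g, c, n0, up_to=10**5):
    # Verify f(n) <= c*g(n) for every sampled n in [n0, up_to)
    return all(f(n) <= c * g(n) for n in range(n0, up_to))

print(check_witness(lambda n: n,        lambda n: n**2, c=1, n0=1))   # n is in O(n^2)
print(check_witness(lambda n: n**2 + 2, lambda n: n**2, c=3, n0=1))   # the O-side of n^2+2 in Θ(n^2)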
The formal definition (from Wikipedia) of the big O notation says that:
f(x) = O(g(x)) as x → ∞
if and only if there is a positive constant M such that for all
sufficiently large values of x, f(x) is at most M multiplied by g(x)
in absolute value. That is, f(x) = O(g(x)) if and only if there exists
a positive real number M and a real number x0 such that
|f(x)| ≤ M|g(x)| for all x > x0 (meaning: for x big enough)
In our case, we can easily show that
|an + b| < |an + n| (for n sufficiently big, i.e. when n > |b|)
Then |an + b| < (a+1)|n|
Since a+1 is constant (corresponds to M in the formal definition), definitely
an + b = O(n)
You were right to doubt.
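A small Python sketch of the constants from this answer, with sample values a = 3 and b = 2 (assumed purely for illustration; any a > 0 works the same way): with M = a + 1 the bound holds once n > |b|.

a, b = 3, 2            # sample coefficients, assumed for illustration
M = a + 1
n0 = abs(b)
assert all(abs(a * n + b) <= M * n for n in range(n0 + 1, 10**5))
print(f"|{a}*n + {b}| <= {M}*n for every sampled n > {n0}")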

Which algorithm is better?

I have two algorithms.
The complexity of the first one is somewhere between Ω(n^2*(logn)^2) and O(n^3).
The complexity of the second is ω(n*log(logn)).
I know that O(n^3) tells me that it can't be worse than n^3, but I don't know the difference between Ω and ω. Can someone please explain?
Big-O: An asymptotic upper bound on the algorithm's performance. The bounding function, from some point on and up to a constant factor, always has a value at least as high as the actual running time of the algorithm. [Constant factors are ignored because they are meaningless as n reaches infinity.]
Big-Ω: The opposite of Big-O: an asymptotic lower bound. The bounding function, from some point on and up to a constant factor, always has a value no higher than the actual running time of the algorithm. [Constant factors are ignored because they are meaningless as n reaches infinity.]
Big-Θ: The algorithm is so nicely behaved that a single function can describe both its upper and lower bounds, within the range defined by some constant factors c1 and c2. An algorithm could then be, for example, Θ(n), with O(c1*n) as the upper bound and Ω(c2*n) as the lower bound, for the same n throughout.
Little-o: Like Big-O, but strict. A Big-O bound may be tight (the actual running time can match it up to a constant factor), whereas a little-o bound always grows strictly faster than the actual running time. Example: o(n^7) is a valid little-o bound for a function that actually has linear, O(n), performance.
Little-ω: Just the opposite: a strict lower bound. ω(1) [constant time] would be a valid little-ω bound for the same function that actually exhibits Ω(n) performance.
Big omega (Ω) lower bound:
A function f is an element of the set Ω(g) (which is often written as f(n) = Ω(g(n))) if and only if there exists c > 0, and there exists n0 > 0 (probably depending on the c), such that for every n >= n0 the following inequality is true:
f(n) >= c * g(n)
Little omega (ω) lower bound:
A function f is an element of the set ω(g) (which is often written as f(n) = ω(g(n))) if and only if for each c > 0 we can find an n0 > 0 (depending on c), such that for every n >= n0 the following inequality is true:
f(n) >= c * g(n)
You can see that it's actually the same inequality in both cases, the difference is only in how we define or choose the constant c. This slight difference means that the ω(...) is conceptually similar to the little o(...). Even more - if f(n) = ω(g(n)), then g(n) = o(f(n)) and vice versa.
Returning to your two algorithms: algorithm #1 is bounded from both sides, so it looks more promising to me. Algorithm #2 can work longer than c * n * log(log(n)) for any (arbitrarily large) c, so it might eventually lose to algorithm #1 for some n. Remember, this is only asymptotic analysis, so everything depends on the actual values of these constants and on the problem sizes that have practical meaning.
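As a small numeric sketch of the Ω versus ω distinction (a Python illustration with made-up sample functions, not the actual algorithms from the question):

import math

ns = range(2, 10**5)
print(all(2 * n >= 1 * n for n in ns))    # True:  c = 1 works forever, so 2n = Ω(n)
print(all(2 * n >= 3 * n for n in ns))    # False: c = 3 already fails, so 2n is not ω(n)
print(all(n * math.log(n) >= 5 * n for n in range(150, 10**5)))   # True: for c = 5, n0 = 150 works
# ...and for any larger c a larger n0 would work, which is exactly what n*log(n) = ω(n) requires.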

Resources