I have two algorithms.
The complexity of the first one is somewhere between Ω(n^2*(logn)^2) and O(n^3).
The complexity of the second is ω(n*log(logn)).
I know that O(n^3) tells me that it can't be worse than n^3, but I don't know the difference between Ω and ω. Can someone please explain?
Big-O: An asymptotic upper bound, most often quoted for an algorithm's worst-case running time. Informally, the bounding function is chosen as the lowest-order function that still has a higher value than the algorithm's actual running time once n is large enough. [Constant factors are ignored because they become meaningless as n goes to infinity.]
Big-Ω: The opposite of Big-O: an asymptotic lower bound. Informally, the bounding function is chosen as the highest-order function that still has a lower value than the algorithm's actual running time once n is large enough. [Constant factors are ignored for the same reason.]
Big-Θ: The algorithm is so nicely behaved that a single function g(n) describes both its upper and lower bounds, up to constant factors. An algorithm could then be Θ(n), with an upper bound of c1*n and a lower bound of c2*n, using the same n throughout.
Little-o: Like Big-O, but deliberately loose. A tight Big-O and the actual performance become essentially proportional as you head out to infinity; little-o is just some function that grows strictly faster than the actual performance. Example: o(n^7) is a valid little-o bound for a function that actually has linear, O(n), performance.
Little-ω: Just the opposite: a strict lower bound. ω(1) [constant time] would be a valid little-omega bound for that same function, which actually exhibits Ω(n) performance.
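To make these bounds a bit more concrete, here is a rough numeric sketch (not a proof, and T(n) = 3n^2 + 5n is just a made-up example running-time function) showing how the ratios behave for large n:

```python
# Sampled ratios (not a proof) for an example running time T(n) = 3n^2 + 5n.

def T(n):
    return 3 * n**2 + 5 * n

for n in [10, 100, 1_000, 10_000, 100_000]:
    print(
        f"n={n:>7}  "
        f"T(n)/n^2 = {T(n) / n**2:6.3f}   "   # settles near 3 -> T is Theta(n^2)
        f"T(n)/n^3 = {T(n) / n**3:8.5f}   "   # goes to 0      -> T is o(n^3)
        f"T(n)/n   = {T(n) / n:>9.1f}"        # blows up       -> T is omega(n)
    )
```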
Big omega (Ω) lower bound:
A function f is an element of the set Ω(g) (which is often written as f(n) = Ω(g(n))) if and only if there exists c > 0, and there exists n0 > 0 (probably depending on the c), such that for every n >= n0 the following inequality is true:
f(n) >= c * g(n)
Little omega (ω) lower bound:
A function f is an element of the set ω(g) (which is often written as f(n) = ω(g(n))) if and only if for each c > 0 we can find n0 > 0 (depending on the c), such that for every n >= n0 the following inequality is true:
f(n) >= c * g(n)
You can see that it's actually the same inequality in both cases; the difference is only in how we define or choose the constant c. This slight difference means that ω(...) is conceptually similar to little o(...). Even more: if f(n) = ω(g(n)), then g(n) = o(f(n)), and vice versa.
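To make the quantifier difference concrete, here is a rough numeric sketch (not a proof) with made-up example functions: n*log(n)^2 satisfies n*log(n)^2 >= c*n from some point on for every c you try (so it is ω(n)), while 3n only manages it for c up to 3 (so it is Ω(n) but not ω(n)). The printed thresholds are just observed sample points, not derived n0 values.

```python
import math

def holds_from(f, g, c, limit=200_000, step=199):
    """First sampled n from which f(n) >= c*g(n) stays true up to `limit` (None if never)."""
    ns = list(range(1, limit, step))
    for i, n in enumerate(ns):
        if all(f(m) >= c * g(m) for m in ns[i:]):
            return n
    return None

f_strict = lambda n: n * math.log(n + 1) ** 2   # candidate for omega(n): ratio to n is unbounded
f_loose  = lambda n: 3 * n                      # Omega(n) but NOT omega(n): ratio to n is fixed at 3
g        = lambda n: n

for c in [1, 3, 10, 100]:
    print(f"c = {c:>3}:  n*log(n)^2 >= c*n from n ~ {holds_from(f_strict, g, c)},"
          f"  3n >= c*n from n ~ {holds_from(f_loose, g, c)}")
```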
Returning to your two algorithms: algorithm #1 is bounded from both sides, so it looks more promising to me. Algorithm #2 can work longer than c * n * log(log(n)) for any (arbitrarily large) c, so it might eventually lose to algorithm #1 for some n. Remember, this is only asymptotic analysis, so everything depends on the actual values of those constants and on the problem sizes that matter in practice.
My professor recently brushed over the formal definition of Big O:
To be completely honest, even after he explained it to a few different students, we all still seem not to understand it at its core. The problems in comprehension mostly occurred with the following examples we went through:
So far my reasoning is as follows:
When you multiply a function's highest term by a constant, you get a new function that eventually surpasses the initial function beyond a certain n. He called this n a "witness" to the function being in O(g(n)).
How is this c term created/found? He mentioned bounds a couple of times but didn't really specify what bounds signify or how to find them/use them.
I think I just need a more solid foundation of the formal definition and how these examples back up the definition.
I think that the way this definition is typically presented, in terms of c values and n0's, is needlessly confusing. What f(n) being O(g(n)) really means is that when you disregard constant and lower order terms, g(n) is an asymptotic upper bound for f(n) (for a function g to asymptotically upper bound f just means that past a certain point, g is always greater than or equal to f). Put another way, f(n) grows no faster than g(n) as n goes to infinity.
Big O itself is a little confusing, because f(n) = O(g(n)) doesn't mean that g(n) grows strictly faster than f(n). It means that when you disregard constant and lower order terms, g(n) grows faster than f(n), or it grows at the same rate (strictly faster would be "little o"). A simple, formal way to put this concept is to say that the following limit is positive:
lim (n -> infinity) g(n) / f(n) > 0
That is, for this limit to hold true, the highest order term of f(n) can be at most a constant multiple of the highest order term of g(n). f(n) is O(g(n)) iff it grows no faster than g(n).
For example, f(n) = n is in O(g(n) = n^2), because past a certain point n^2 is always bigger than n. The limit of n^2 over n is positive, so n is in O(n^2)
As another example, f(n) = 5n^2 + 3n is in O(g(n) = n^2), because in the limit, f(n) can only be about 5 times larger than g(n). It's not infinitely bigger: they grow at the same rate. To be precise, the limit of n^2 over 5n^2 + 3n is 1/5, which is more than zero, so 5n^2 + 3n is in O(n^2). Hopefully this limit-based definition provides some intuition, as it is completely equivalent mathematically to the provided definition.
Finding a particular constant value c and threshold n0 for which the provided inequality holds true is just a particular way of showing that in the limit as n goes to infinity, g(n) grows at least as fast as f(n): that f(n) is in O(g(n)). That is, if you've found a value past which c*g(n) is always greater than f(n), you've shown that f(n) grows no more than a constant multiple (c times) faster than g(n) (if f grew faster than g by more than a constant multiple, finding such a c and n0 would be impossible).
There's no real art to finding a particular c and n0 value to demonstrate f(n) = O(g(n)). They can be literally whatever positive values you need them to be to make the inequality true. In fact, if it is true that f(n) = O(g(n)) then you can pick any value you want for c and there will be some sufficiently large n0 value that makes the inequality true, or, similarly you could pick any n0 value you want, and if you make c big enough the inequality will become true (obeying the restrictions that c and n0 are both positive). That's why I don't really like this formalization of big O: it's needlessly particular and proofs involving it are somewhat arbitrary, distracting away from the main concept which is the behavior of f and g as n goes to infinity.
So, as for how to handle this in practice, using one of the example questions: why is n^2 + 3n in O(n^2)?
Answer: because the limit as n goes to infinity of n^2 / (n^2 + 3n) is 1, which is greater than 0.
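If it helps, the limit can also be eyeballed numerically; a tiny sketch (sampled ratios only, not a proof) of the example above:

```python
# Sampled values of the ratio used above: g(n)/f(n) = n^2 / (n^2 + 3n).
# The ratio settles at a positive constant (1), which is the limit-based
# way of saying n^2 + 3n is in O(n^2).
f = lambda n: n**2 + 3 * n
g = lambda n: n**2

for n in [10, 100, 10_000, 1_000_000]:
    print(f"n = {n:>9}   g(n)/f(n) = {g(n) / f(n):.6f}")
```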
Or, if you're wanting/needing to do it the other way, pick any positive value you want for n0, and evaluate f at that value. f(1) will always be easy enough:
f(1) = 1^2 + 3*1 = 4
Then find the constant you could multiply g(1) by to get the same value as f(1) (or, if not using n0 = 1 use whatever n0 for g that you used for f).
c*g(1) = 4
c*1^2 = 4
c = 4
Then, you just combine the statements into an assertion to show that there exists a positive n0 and a constant c such that f(n) <= c*g(n) for all n >= n0.
n^2 + 3n <= (4)n^2 for all n >= 1, implying n^2 + 3n is in O(n^2)
If you're using this method of proof, the above statement you use to demonstrate the inequality should ideally be immediately obvious. If it's not, maybe you want to change your n0 so that the final statement is more clearly true. I think that showing the limit of the ratio g(n)/f(n) is positive is much clearer and more direct if that route is available to you, but it is up to you.
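For a quick, purely mechanical sanity check of the witness pair above (it only covers a finite range of n, so it is not a proof, just a way to catch a bad choice of c or n0):

```python
# Finite sanity check (not a proof) of the witness chosen above:
# n^2 + 3n <= 4*n^2 should hold for every n >= 1.
c, n0 = 4, 1
assert all(n**2 + 3 * n <= c * n**2 for n in range(n0, 100_000)), "witness pair fails"
print("n^2 + 3n <= 4*n^2 held for every n checked in [1, 100000)")
```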
Moving to a negative example, it's quite easy with the limit method to show that f(n) is not in O(g(n)). To do so, you just show that the limit of g(n) / f(n) is 0. Using the third example question: is nlog(n) + 2n in O(n)? It is not, because the limit of n over (nlog(n) + 2n) is 0.
To demonstrate it the other way, you actually have to show that there exists no positive pair of numbers n0, c such that f(n) <= c*g(n) for all n >= n0.
Unfortunately showing that f(n) = nlogn + 2n is in O(nlogn) by using c=2, n0=8 demonstrates nothing about whether f(n) is in O(n) (showing a function is in a higher complexity class implies nothing about it not being a lower complexity class).
To see why this is the case, we could also show a(n) = n is in O(g(n) = nlogn) using those same c and n0 values (n <= 2*nlog(n) for all n >= 8, implying n is in O(nlogn)), and yet a(n) = n clearly is in O(n). That is to say, to show f(n) = nlogn + 2n is not in O(n) with this method, you can't just show that it is in O(nlogn). You would have to show that no matter what n0 you pick, you can never find a c value large enough such that f(n) <= c*n for all n >= n0. Showing that such a pair of numbers does not exist is not impossible, but relatively speaking it's a tricky thing to do (and would probably itself involve limit equations, or a proof by contradiction).
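As a rough numeric illustration of that limit going to 0 (sampled values only, and using the natural log here; the base only changes a constant factor):

```python
import math

# Sampled values of g(n)/f(n) = n / (n*log(n) + 2n). The ratio keeps
# shrinking toward 0, which is the limit-based sign that n*log(n) + 2n
# is NOT in O(n).
f = lambda n: n * math.log(n) + 2 * n
g = lambda n: n

for n in [10, 1_000, 100_000, 10_000_000]:
    print(f"n = {n:>10}   g(n)/f(n) = {g(n) / f(n):.6f}")
```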
To sum things up, f(n) is in O(g(n)) if the limit of g(n) over f(n) is positive, which means f(n) doesn't grow any faster than g(n). Similarly, finding a constant c and threshold n0 beyond which c*g(n) >= f(n) shows that f(n) cannot grow more than a constant factor faster than g(n), implying that when discarding constants and lower order terms, g(n) is a valid upper bound for f(n).
Do you think the following information is true?
If Θ(f(n)) = Θ(g(n)) AND g(n) > 0 everywhere THEN f(n)/g(n) ∈ Θ(1)
We are having a bit of an argument with our prof.
f(n) = Θ(g(n)) means there are constants c, d > 0 and n0 such that c*g(n) <= f(n) <= d*g(n) for n > n0.
Then, since g(n) > 0, c <= f(n)/g(n) <= d for n > n0.
So f(n)/g(n) = Θ(1).
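A quick numeric illustration of this argument with made-up example functions (f(n) = 5n^2 + 3n and g(n) = n^2, which are Θ of each other): the pointwise ratio f(n)/g(n) stays pinned between fixed positive constants.

```python
# f and g are Theta of each other, so the ratio f(n)/g(n) stays between
# two fixed positive constants (here it runs from 8 at n=1 down toward 5).
f = lambda n: 5 * n**2 + 3 * n
g = lambda n: n**2

ratios = [f(n) / g(n) for n in range(1, 100_000)]
print(f"min ratio = {min(ratios):.5f}, max ratio = {max(ratios):.5f}")  # both finite and > 0
```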
Dividing functions f(n),g(n) is not the same as dividing their Big-O. For example let:
f(n) = n^3 + n^2 + n
g(n) = n^3
so:
O(f(n)) = O(n^3)
O(g(n)) = O(n^3)
but:
f(n)/g(n) = 1 + 1/n + 1/n^2 != constant !!!
[Edit1]
but as kfx pointed out, you are comparing complexities, so you want:
O(f(n)/g(n)) = O(1 + 1/n + 1/n^2) = O(1)
So the answer is Yes.
But beware: complexity theory is not really my cup of tea, and I also do not have any context for your question.
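A tiny numeric sketch of the distinction this answer makes, using its own example: the pointwise ratio f(n)/g(n) = 1 + 1/n + 1/n^2 is not a constant function, but it is bounded and tends to 1, so it is O(1).

```python
# Pointwise ratio of the example functions: (n^3 + n^2 + n) / n^3 = 1 + 1/n + 1/n^2.
# The values vary with n (so the ratio is not a constant function), but they are
# bounded, which is all that O(1) / Theta(1) requires.
ratio = lambda n: (n**3 + n**2 + n) / n**3

for n in [1, 10, 1_000, 1_000_000]:
    print(f"n = {n:>8}   f(n)/g(n) = {ratio(n):.6f}")
```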
Using the definitions of Landau notation (https://en.wikipedia.org/wiki/Big_O_notation), it's easy to conclude that this is true: the limit of the ratio must be less than infinity but larger than 0.
It does not have to be exactly 1 but it has to be a finite constant, which is Θ(1).
A counterexample would be nice, and should be easy to give if the statement isn't true. A rigorous positive proof would probably need to go from the definition of limits with respect to sequences, to prove the equivalence of the formal and limit definitions.
I use this definition and haven't seen it proven wrong. I suppose the disagreement might lie in the exact definition of Θ; it is known that people use these colloquially with minor differences, especially Big-O. Or maybe in some tricky cases. For positively defined functions and sequences, I don't think it fails.
Basically there are three options for any pair of functions f, g: Either the first grows asymptotically slower and we write f=o(g) (notice I'm using small o), the first grows asymptotically faster: f=ω(g) (again, small omega) or they are asymptotically tightly bound: f=Θ(g).
What f=o(g) means is stricter than Big-O in that it doesn't allow f=Θ(g) to be true; f=Θ(g) implies both f=O(g) and f=Ω(g), but o, Θ and ω are mutually exclusive.
To find out which case holds, it's sufficient to evaluate the limit of f(n)/g(n) as n goes to infinity: if it is zero, f=o(g) is true; if it is infinity, f=ω(g) is true; and if it is any finite positive real number, f=Θ(g) is your answer. This is not a definition, but merely a way to evaluate a statement. (One assumption I made here was that both f and g are positive.)
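A rough numeric stand-in for this evaluation rule (a sketch under the same positivity assumption; sampling the ratio at one large n is of course not a real limit, and the cut-off values below are arbitrary):

```python
import math

def classify(f, g, n=10**7):
    """Crude numeric stand-in for lim f(n)/g(n): sample the ratio at one large n."""
    r = f(n) / g(n)
    if r < 1e-4:
        return "f = o(g)      (ratio ~ 0)"
    if r > 1e4:
        return "f = omega(g)  (ratio ~ infinity)"
    return f"f = Theta(g)  (ratio ~ {r:.3f})"

print(classify(lambda n: n * math.log(n), lambda n: n**2))    # o: n*log(n) grows slower than n^2
print(classify(lambda n: 5 * n**2 + 3 * n, lambda n: n**2))   # Theta: ratio settles near 5
print(classify(lambda n: n**2, lambda n: n * math.log(n)))    # omega: n^2 grows faster
```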
A special case is when the limit, as n goes to infinity, of f(n)/1 = f(n) is a finite number; that means f(n) = Θ(1) (basically we chose a constant function for g).
Now we're getting to your problem: since f = Θ(g) implies f = O(g), we know that there exist c > 0 and n0 such that f(n) <= c*g(n) for all n > n0. Thus we know that f(n)/g(n) <= (c*g(n))/g(n) = c for all n > n0. The same can be done for Ω, just with the opposite inequality sign. Thus we get that f(n)/g(n) is between c1 and c2 from some n0 on, and these are known to be finite positive numbers because of how Θ is defined. Because our new function stays between those two constants, it is bounded above and below by positive constants, thus proving it is indeed Θ(1).
In conclusion, I believe you were right, and I would like your professor to offer a counterexample to disprove the statement. If something didn't make sense, feel free to ask more in the comments and I'll try to clarify.
I am working on an assignment, but a friend of mine disagrees with the answer to one part.
f(n) = 2n-2n^3
I find the complexity to be f(n) = O(n^3)
Am I wrong?
You aren't wrong, since O(n^3) does not provide a tight bound. However, typically you assume that f is increasing and try to find the smallest function g for which f=O(g) is true. Consider a simple function f=n+3. It's correct to say that f=O(n^3), since n+3 < n^3 for all n > 2 (just to pick an arbitrary constant). However, it's "more" correct to say that f=O(n), since n+3 < 2n for all n > 3, and this gives you a better feel for how f behaves as n increases.
In your case, f is decreasing as n increases, so it is true that f = O(g) for any function that stays positive as n increases. The "smallest" (or rather, slowest growing) such function is a constant function for some positive constant, and we usually write that as 2n - 2n^3 = O(1), since 2n - 2n^3 < 1 for all n>0.
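A small numeric check of that last claim (finite range only, so it's an illustration rather than a proof):

```python
# The function from the question is decreasing (and quickly negative), so a
# constant bounds it from above: 2n - 2n^3 < 1 for every n >= 1 checked here.
f = lambda n: 2 * n - 2 * n**3
assert all(f(n) < 1 for n in range(1, 100_000))
print("2n - 2n^3 stayed below 1 for every n checked, consistent with f = O(1)")
```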
You could even find some function of n that is decreasing as n increases, but decreases more slowly than your f, but such usage is rare. Big-O notation is most commonly used to describe algorithm running times as the input size increases, so n is almost universally assumed to be positive.
The informal idea of Big-O is described as "it's the highest order of growth of a function" ie f(n) = 3n^2 + 5n + 50 is just O(n^2).
I do understand that Big-O is just a way of saying "guaranteed to not be worse than this period". Formally, it appears the definition is f(n) -> O(g(n)) iff f(n) <= c * g(n) where c is positive
First some mathy stuff... if f(n) = 5n^2 and g(n) = n, I should be able to show 5n^2 isn't O(g(n)) by doing
5n^2 <= cn
5n <= c
If the idea is that c isn't a constant (I have no idea if that's a requirement), and that is proof f(n) isn't in O(g(n)), what about if g(n) were n^3 (in which it surely should be contained)?
5n^2 <= cn^3
5/n <= c
I have a misunderstanding of how the math works out for all of this I assume, so I ask:
How does all this fancy stuff work?
How does it connect to the simple definition given in my data structures class?
Thanks for any help
n is a positive integer, which means that 1<=n and therefore 5/n<=5/1=5, so you can pick c=5.
A more complete definition also allows you to pick n0 and a, both constants, and only prove that f(n)<=a+c*g(n) for all n0<n
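A short finite-range check of the c = 5 suggestion (an illustration rather than a proof):

```python
# With c = 5, the inequality 5*n^2 <= c*n^3 (equivalently 5/n <= 5) holds
# for every positive integer n sampled here.
c = 5
assert all(5 * n**2 <= c * n**3 for n in range(1, 100_000))
print("5n^2 <= 5*n^3 held for every n checked, so 5n^2 is O(n^3)")
```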
c is a constant (i.e. independent of n)
In your first example (it's proof by contradiction):
i.e. assume
5n^2 <= cn
5n <= c
But for any fixed constant c, we can find a value of n that makes it untrue.
For example pick c = 1000000, then a value of n = 200001 would be a contradiction.
In your second example, we know that f(n) is O(n^2), therefore it is also O(n^3) and above. If you are bounded by k(n^2), you are also bounded by j(n^3)
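And the contradiction from the first part can be checked directly with the numbers given above:

```python
# Fix c = 1,000,000; the assumed inequality 5*n^2 <= c*n already fails at
# n = 200,001, which is the contradiction: no single constant c works for all n.
c = 1_000_000
n = 200_001
print(5 * n**2, "<=", c * n, "?", 5 * n**2 <= c * n)   # prints False
```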
The informal idea of Big-O is described as "it's the highest order of growth of a function" ie f(n) = 3n^2 + 5n + 50 is just O(n^2).
I wouldn't say that this is the idea behind Big-O. Informally, Big-O is a rough estimate of what a given function cannot exceed, and its usage is mostly about approximating how something will grow for big inputs.
For example, if we take a 6 digit number, we can definitely say that it's less than million without looking for its digits. There are a lot of cases when this is enough and we don't need to analyse all digits.
For analysing function growth two factors play their role:
we are only interested in a function's behaviour for very large numbers
if f is bigger than g but we can fix that by multiplying g by some big constant, that means f's advantage is not due to faster growth
This leads us to two parts of the definition: (1) some constant and (2) for big enough n
And for polynomials, indeed, the highest-order term determines the growth rate.
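Both parts of the definition at work on the polynomial from the question (the constant c = 4 below is just one convenient choice, and the search range is arbitrary):

```python
# Part (1): pick a constant c. Part (2): find the n0 from which c*g(n) >= f(n).
# Here f(n) = 3n^2 + 5n + 50 and g(n) = n^2.
f = lambda n: 3 * n**2 + 5 * n + 50
g = lambda n: n**2
c = 4

n0 = next(n for n in range(1, 10_000) if all(c * g(m) >= f(m) for m in range(n, 10_000)))
print(f"with c = {c}: c*n^2 >= 3n^2 + 5n + 50 for all checked n >= {n0}")   # n0 turns out to be 10
```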
My question refers to the Big-Oh notation in algorithm analysis. While Big-Oh seems like a math topic, it's very useful in algorithm analysis.
Suppose two functions are defined below:
f(n) = 2^n when n is even
f(n) = n when n is odd
g(n) = n when n is even
g(n) = 2^n when n is odd
For the above two functions, which one is Big-Oh of the other? Or is neither function Big-Oh of the other?
Thanks!
In this case,
f ∉ O(g), and
g ∉ O(f).
This is because no matter what constants N and k you pick,
there exists i ≥ N such that f(i) > k g(i), and
there exists j ≥ N such that g(j) > k f(j).
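A small numeric sketch of why no pair (N, k) can work: whatever k you try, f overshoots k*g at large even arguments and g overshoots k*f at large odd ones.

```python
# For the alternating functions in the question, neither direction of the
# Big-Oh inequality can hold from some point on: both keep failing, no matter
# how large a constant k we allow.
def f(n):
    return 2**n if n % 2 == 0 else n

def g(n):
    return n if n % 2 == 0 else 2**n

for k in [1, 1_000, 10**9]:
    i = next(n for n in range(50, 200) if f(n) > k * g(n))   # an even n where f beats k*g
    j = next(n for n in range(50, 200) if g(n) > k * f(n))   # an odd n where g beats k*f
    print(f"k = {k:>12}:  f({i}) > k*g({i})  and  g({j}) > k*f({j})")
```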
The Big-Oh relationship is quite specific in that one function, times some constant, is always at least as large as the other after a finite n.
Is this true here? If so, give such an n. If not, you should prove it.
Usually Big-O and Big-Theta notations get confused.
A layman's attempt at a definition could be that f = O(g) means g is growing as fast as or faster than f, i.e. that given a large enough n, f(n) <= k*g(n) where k is some constant. That means that if f(x) = 2x^3, then it is in O(x^3), O(x^4), O(2^x), O(x!), etc.
Big-Theta means that one function is growing as fast as another one, with neither one being able to "outgrow" the other, or, k1*g(n)<=f(n)<=k2*g(n) for some k1 and k2. In programming terms that means that these two functions have the same level of complexity. If f(x) = 2x^3, then it is in Θ(x^3), as for example, if k1=1, and k2=3, 1*x^3 < 2*x^3 < 3*x^3
In my experience, whenever programmers are talking about Big-O, the discussion is actually about Big-Θ, as we are more concerned with the "as fast as" part than with the "no faster than" part.
That said, if two functions with different Θ's are combined, as in your example, the larger one, Θ(2^n), swallows the smaller, Θ(n), so both f and g have the exact same Big-O and Big-Θ complexities. In this case, it's both correct that
f(n) = O(g(n)), also f(n) = Θ(g(n))
g(n) = O(f(n)), also g(n) = Θ(f(n))
so, as they have the same complexity, they are O and Θ bound by each other.