Finding Big-O, Omega and Theta - algorithm

I've looked through the links, and I'm too braindead to understand the mechanical process of figuring them out. I understand the ideas of O, theta and omega, and I understand the "Rules". So let me work on this example with you guys to clear this up in my head :)
f(n) = 100n + log n
g(n) = n + (log n)^2
I need to find: whether f = O(g), or f = Ω(g), or both (in which case f = Θ(g))
So I know that 100n and n are essentially the same, and that both grow faster than log(n). I just need to figure out whether (log(n))^2 grows slower or faster, but I can't really remember anything about logs. If log(n) is bigger, does it mean the number gets bigger or smaller?
Let me add that my real struggle is figuring out both Omega and Theta. By definition f = O(g) if there is a constant c such that c*g(n) is eventually at least f(n), and the reverse for Omega. But how do I really test this?

You can usually figure it out from these rules:
1. Broadly, k < log(n)^k < n^k < k^n. You can replace k at each step with any positive number you want, and it remains true for large enough n.
2. If x is big, then 1/x is very close to 0.
3. For positive x and y, x < y if and only if log(x) < log(y). (Sometimes taking logs can help with complicated and messy products.)
4. log(k^n) = n log(k).
5. For O, Theta, and Omega, you can ignore everything except the biggest term that doesn't cancel out.
Rules 1 and 5 suffice for your specific questions (there is a quick numerical check of rule 1 below). But learn all of the rules.
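For what it's worth, here is a quick numerical check of rule 1 (my own sketch, not part of the original answer; k = 3 is picked arbitrarily):

```python
import math

# Check the chain k < log(n)^k < n^k < k^n for k = 3. It can fail for very
# small n (e.g. n = 2), but holds once n is large enough.
k = 3
for n in (5, 10, 50, 100):
    chain = [k, math.log(n) ** k, float(n) ** k, float(k) ** n]
    print(f"n={n:>3}: increasing? {all(a < b for a, b in zip(chain, chain[1:]))}")
```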

You don't need to remember rules, but rather learn general principles.
Here, all you need to know is that log(n) is increasing and grows without limit, and the definition of big-O, namely f = O(g) if there's a c such that for all sufficiently large n, f(n) <= c * g(n). You might learn the fact about log by remembering that log(n) grows like the number of digits of n.
Can log^2(n) be O(log(n))? That would mean (using the definition of big-O) that log^2(n) <= c * log(n) for all sufficiently large n, so log^2(n)/log(n) <= c for sufficiently large n (*). But log^2(n)/log(n) = log(n), which grows without limit, so it can't be bounded by c. So log^2(n) is not O(log(n)).
Can log(n) be O(log^2(n))? Well, at some point log(n) > 1 (since it's increasing without limit), and from that point on, log(n) < log^2(n). That proves that log(n) = O(log^2(n)), with the constant c equal to 1.
(*) If you're being extra careful, you need to exclude the possibility that log(n) is zero infinitely often.
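A tiny numerical companion to that argument (my own sketch, not part of the answer):

```python
import math

# Pick any constant c you like: log^2(n)/log(n) = log(n) eventually climbs
# past it, while log(n) <= log^2(n) holds as soon as n > e.
c = 100
for n in (10, 10**10, 10**100):
    ratio = math.log(n) ** 2 / math.log(n)   # this is just log(n)
    print(f"n = 10^{round(math.log10(n))}: ratio = {ratio:.1f}, exceeds c={c}? {ratio > c}")
```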


Family of Bachmann–Landau notations

Could someone please help me understand the notations mentioned in the picture? I am trying to understand "Big O notation", and under the "Family of Bachmann–Landau notations" table there is a "Formal Definition" column containing lots of notations and equations that I haven't come across before. Is anyone familiar with these? https://en.wikipedia.org/wiki/Big_O_notation#Family_of_Bachmann–Landau_notations
The logic behind those definitions is actually quite simple: it basically says that no matter what constants multiply the result, from some point where n is big enough, one of the functions starts being bigger/smaller and it stays that way.
To see the real difference, I will explain little-o (which says that one function has strictly smaller complexity than another). It says that for every k bigger than zero you can find a value of n, called n_0, such that every n bigger than n_0 follows this pattern: f(n) <= k*g(n).
So you have two functions, and you plug in n as a parameter. Then no matter what you pick for k, you can always find a value of n for which f(n) <= k*g(n), and every value bigger than the one you found will also satisfy the inequality.
Consider for example:
f(n) = n * 100
g(n) = n^2
So if you try, say, n=5, it does not tell you which has bigger complexity, because 5*100=500 and 5^2=25. If you put in a number big enough, say n=100, then f(n)=100*100=10000 and g(n)=100^2=10000, so we reach the same value. For anything bigger than that, g(n) becomes bigger and bigger.
It also has to satisfy the inequality f(n) <= k*g(n). For example, if I put k=0.1, then:
100*n <= 0.1*n^2    (multiply both sides by 10)
1000*n <= n^2       (divide both sides by n)
1000 <= n
So with these functions you can see that for k=0.1 you have n_0 = 1000 to fulfill the inequality, and that is enough. For all n > 1000 the inequality keeps holding, and g(n) will always be bigger, therefore it has higher complexity. (OK, the real proof is not that easy, but you can see the pattern.) The point is, no matter what k is, even if it is k=0.000000001, there is always a breaking point n_0, and from that point on k*g(n) stays at least as big as f(n).
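Here is a rough sketch of the same check done mechanically (my own code, not from the answer):

```python
import math

# For f(n) = 100*n and g(n) = n**2, the inequality f(n) <= k*g(n) rearranges
# to n >= 100/k, so a threshold n_0 exists for every positive k; it just
# moves further out as k shrinks.
for k in (0.1, 0.01, 0.001):
    n0 = math.ceil(100 / k)
    holds = all(100 * n <= k * n * n for n in range(n0, n0 + 100))
    print(f"k={k}: n_0 = {n0}, holds from there on (spot check): {holds}")
```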
We can also try a negative example, to see the difference between the O(n) and O(n^2) cases.
Lets take:
f(n) = n
g(n) = 10*n
So in standard algebra g(n) > f(n), right? But in complexity theory we need to know whether it grows bigger, and if so, whether it grows bigger than just being multiplied by a constant.
So if we consider k=0.01, you can see that no matter how big n gets, you never find an n_0 that fulfills f(n) <= k*g(n), so f(n) != o(g(n)).
In terms of complexity theory you can read the notations as smaller/bigger, so:
f(n) = o(g(n)) -> f(n) < g(n)
f(n) = O(g(n)) -> f(n) <= g(n)
f(n) = Θ(g(n)) -> f(n) === g(n)
//... etc. Remember these equations are not algebraic; they are just about complexity.

Big-O of `2n-2n^3`

I am working on an assignment, but a friend of mine disagrees with my answer to one part.
f(n) = 2n-2n^3
I find the complexity to be f(n) = O(n^3)
Am I wrong?
You aren't wrong, since O(n^3) does not provide a tight bound. However, typically you assume that f is increasing and try to find the smallest function g for which f=O(g) is true. Consider a simple function f=n+3. It's correct to say that f=O(n^3), since n+3 < n^3 for all n > 2 (just to pick an arbitrary constant). However, it's "more" correct to say that f=O(n), since n+3 < 2n for all n > 3, and this gives you a better feel for how f behaves as n increases.
In your case, f is decreasing as n increases, so it is true that f = O(g) for any function g that stays positive as n increases. The "smallest" (or rather, slowest growing) such function is a constant function, and we usually write that as 2n - 2n^3 = O(1), since 2n - 2n^3 < 1 for all n > 0.
You could even pick a g that decreases as n increases, as long as it decreases more slowly than your f, but such usage is rare. Big-O notation is most commonly used to describe algorithm running times as the input size increases, so n is almost universally assumed to be positive.
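As a sanity check on that last inequality, here is a tiny sketch (my own code; it treats n as a positive real, and for integer n >= 1 the expression is never positive at all):

```python
# 2n - 2n^3 peaks at n = 1/sqrt(3) with value 4/(3*sqrt(3)) ~ 0.77, so it is
# indeed below 1 for every n > 0 (here checked on a coarse grid).
peak = max(2 * (i / 1000) - 2 * (i / 1000) ** 3 for i in range(1, 10_000))
print(round(peak, 3))  # ~0.77
```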

Are 2^n and n*2^n in the same time complexity?

Resources I've found on time complexity are unclear about when it is okay to ignore terms in a time complexity equation, specifically with non-polynomial examples.
It's clear to me that given something of the form n^2 + n + 1, the last two terms are insignificant.
Specifically, given two categorizations, 2^n and n*(2^n), is the second in the same order as the first? Does the additional n multiplication there matter? Usually resources just say x^n is exponential and grows much faster... then move on.
I can understand why it might not, since 2^n will greatly outpace n, but because they're not being added together it would matter greatly when comparing the two expressions; in fact the difference between them will always be a factor of n, which seems important to say the least.
You will have to go to the formal definition of big O in order to answer this question.
The definition is that f(x) belongs to O(g(x)) if and only if the limit superior of f(x)/g(x) as x → ∞ is finite, i.e. not infinity. In short, this means that there exists a constant M such that the value of f(x)/g(x) is never greater than M from some point on.
In the case of your question, let f(n) = n⋅2ⁿ and let g(n) = 2ⁿ. Then f(n)/g(n) = n, which grows without bound. Therefore f(n) does not belong to O(g(n)).
A quick way to see that n⋅2ⁿ is bigger is to make a change of variable. Let m = 2ⁿ. Then n⋅2ⁿ = ( log₂m )⋅m (taking the base-2 logarithm on both sides of m = 2ⁿ gives n = log₂m ), and you can easily show that m log₂m grows faster than m.
I agree that n⋅2ⁿ is not in O(2ⁿ), but I thought it should be shown more explicitly, since the limit superior argument doesn't always apply.
By the formal definition of Big-O: f(n) is in O(g(n)) if there exist constants c > 0 and n₀ ≥ 0 such that for all n ≥ n₀ we have f(n) ≤ c⋅g(n). It can easily be shown that no such constants exist for f(n) = n⋅2ⁿ and g(n) = 2ⁿ: dividing both sides by 2ⁿ would require n ≤ c for all n ≥ n₀, which fails as soon as n exceeds c. However, it can be shown that g(n) is in O(f(n)).
In other words, n⋅2ⁿ is lower bounded by 2ⁿ. This is intuitive. Although they are both exponential and thus are equally unlikely to be used in most practical circumstances, we cannot say they are of the same order because 2ⁿ necessarily grows slower than n⋅2ⁿ.
I do not argue with the other answers that say that n⋅2ⁿ grows faster than 2ⁿ. But the growth of n⋅2ⁿ is still only exponential.
When we talk about algorithms, we often say that the time complexity grows exponentially.
So we consider 2ⁿ, 3ⁿ, eⁿ, 2.000001ⁿ, and our n⋅2ⁿ to be in the same group of complexity, with exponential growth.
To give it a bit of mathematical sense, we consider a function f(x) to grow (not faster than) exponentially if there exists a constant c > 1 such that f(x) = O(cˣ).
For n⋅2ⁿ the constant c can be any number greater than 2; let's take 3. Then:
n⋅2ⁿ / 3ⁿ = n ⋅ (2/3)ⁿ, and this is less than 1 for every n ≥ 1 (in fact it tends to 0).
So 2ⁿ grows slower than n⋅2ⁿ, which in turn grows slower than 2.000001ⁿ. But all three of them grow exponentially.
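To see that ratio numerically, here is a tiny sketch (my own illustration, not from the answer):

```python
# The ratio n * (2/3)**n from the argument above: it peaks around n = 2..3
# at about 8/9 and then heads to 0, which is why n*2^n = O(3^n).
for n in (1, 2, 3, 5, 10, 50, 100):
    print(n, n * (2 / 3) ** n)
```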
You asked "is the second in the same order as the first? Does the additional n multiplication there matter?" These are two different questions with two different answers.
n 2^n grows asymptotically faster than 2^n. That's that question answered.
But you could ask: "If algorithm A takes 2^n nanoseconds, and algorithm B takes n*2^n nanoseconds, what is the biggest n for which I can find a solution in a second / minute / hour / day / month / year?" And the answers are n = 29/35/41/46/51/54 vs. 25/30/36/40/45/49. Not much difference in practice.
The size of the biggest problem that can be solved in time T is O(ln T) in both cases.
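For what it's worth, here is a short sketch (my own code; it assumes a 30-day month and a 365-day year) that reproduces those figures:

```python
# Biggest n whose running time fits in the budget, for cost functions measured
# in nanoseconds (as in the answer above).
def biggest_n(cost, budget_ns):
    n = 1
    while cost(n + 1) <= budget_ns:
        n += 1
    return n

budgets = {"second": 1e9, "minute": 60e9, "hour": 3600e9,
           "day": 86400e9, "month": 30 * 86400e9, "year": 365 * 86400e9}
for name, ns in budgets.items():
    print(f"{name:>6}: 2^n -> {biggest_n(lambda n: 2 ** n, ns)}, "
          f"n*2^n -> {biggest_n(lambda n: n * 2 ** n, ns)}")
# Prints 29/35/41/46/51/54 for 2^n and 25/30/36/40/45/49 for n*2^n.
```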
The very simple answer is 'no'.
Look at 2^n and n*2^n:
n*2^n > 2^n for any n > 1.
You can also see it by applying log to both sides; then you get
n*log(2) < n*log(2) + log(n).
Hence, by both kinds of analysis (substituting a number, and using log), we see that n*2^n is greater than 2^n.
So if you get an expression like O(2^n + n*2^n), it can be replaced by O(n*2^n).

Exponentials: Little Oh [duplicate]

This question already has answers here:
Difference between Big-O and Little-O Notation
What does n^b = o(a^n) (o is little-oh) mean, intuitively? I am just beginning to teach myself algorithms and I am having a hard time interpreting such expressions every time I see one. Here, the way I understood it is that for the function n^b, the rate of growth is a^n. But this is not making sense to me, regardless of whether that is right or wrong.
f(n) = o(g(n)) means that f(n)/g(n) -> 0 as n -> infinity.
For your problem it must hold that a > 1. Then (n^b)/(a^n) -> 0 as n -> infinity, because (n^b)/(a^n) = (n^b)/(sqrt(a)^n * sqrt(a)^n) = ((n^b)/sqrt(a)^n) * (1/sqrt(a)^n). The function f(n) = (n^b)/sqrt(a)^n first increases and then decreases, so it attains a maximum value M; therefore (n^b)/(a^n) <= M/(sqrt(a)^n). Since a > 1 we have sqrt(a) > 1, so sqrt(a)^n -> infinity as n -> infinity, which means M/(sqrt(a)^n) -> 0. So finally (n^b)/(a^n) -> 0 as n -> infinity, that is, n^b = o(a^n) by definition.
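If you want to see the limit happen numerically, here is a tiny sketch (my own code; a = 2 and b = 10 are arbitrary choices):

```python
# n**b / a**n rises until roughly n = b/ln(a) ~ 14 and then collapses
# toward 0, which is the o(a^n) behaviour described above.
a, b = 2.0, 10
for n in (5, 14, 50, 100, 200):
    print(n, n ** b / a ** n)
```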
(For simplicity I'll assume that all functions always return positive values. This is the case for example for functions measuring run-time of an algorithm, as no algorithm runs in "negative" time.)
First, a recap of big-O notation, to clear up a common misunderstanding:
To say that f is O(g) means that f grows asymptotically at most as fast as g. More formally, treating both f and g as functions of a variable n, to say that f(n) is O(g(n)) means that there is a constant K, so that eventually, f(n) < K * g(n). The word "eventually" here means that there is some fixed value N (which is a function of K, f, and g), so that if n > N then f(n) < K * g(n).
For example, the function f(n) = n + 2 is O(n^2). To see why, let K = 1. Then, if n > 10, we have n + 2 < n^2, so our conditions are satisfied. A few things to note:
For n = 1, we have f(n) = 3 and g(n) = 1, so f(n) < K * g(n) actually fails. That's ok! Remember, the inequality only needs to hold eventually, and it does not matter if the inequality fails for some small finite list of n.
We used K = 1, but we didn't need to. For example, K = 2 would also have worked. The important thing is that there is some value of K which gives us the inequality we want eventually.
We saw that n + 2 is O(n^2). This might look confusing, and you might say, "Wait, isn't n + 2 actually O(n)?" The answer is yes. n + 2 is O(n), O(n^2), O(n^3), O(n/3), etc.
Little-o notation is slightly different. Big-O notation, intuitively, says that if f is O(g), then f grows asymptotically at most as fast as g. Little-o notation says that if f is o(g), then f grows asymptotically strictly slower than g.
Formally, f is o(g) if for any (let's say positive) choice of K, eventually the inequality f(n) < K * g(n) holds. So, for instance:
The function f(n) = n is not o(n). This is because, for K = 1, there is no value of n so that f(n) < K * g(n). Intuitively, f and g grow asymptotically at the same rate, so f does not grow strictly slower than g does.
The function f(n) = n is o(n^2). Why is this? Pick your favorite positive value of K. (To see the actual point, try to make K small, for example 0.001.) Imagine graphing the functions f(n) and K * g(n). One is a straight line through the origin of positive slope, and the other is a concave-up parabola through the origin. Eventually the parabola will be higher than the line, and will stay that way. (If you remember your pre-calc/calculus...)
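If the graph isn't convincing, here is a tiny numeric version of the line-versus-parabola picture (my own sketch, with K = 0.001 as suggested above):

```python
# Even for a tiny K, K*n^2 overtakes n once n > 1/K = 1000 and stays above it.
K = 0.001
for n in (100, 999, 1001, 10_000):
    print(n, n < K * n * n)   # False, False, True, True
```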
Now we get to your actual question: let f(n) = n^b and g(n) = a^n. You asked why f is o(g).
Presumably, the author of the original statement treats a and b as constant, positive real numbers, and moreover a > 1 (if a <= 1 then the statement is false).
The statement, in English, is:
For any positive real number b, and any real number a > 1, the function n^b grows asymptotically strictly slower than a^n.
This is an important thing to know if you are ever going to deal with algorithmic complexity. Put more simply, one can say "polynomials grow much more slowly than exponential functions." It isn't immediately obvious that this is true, and the proof is too long to write out here, so here is a reference:
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
Probably you will have to have some comfort with math to be able to read any proof of this fact.
Good luck!
The super high level meaning of the statement nᵇ is o(aⁿ) is just that exponential functions like aⁿ grow much faster than polynomial functions like nᵇ.
The important thing to understand when looking at big O and little o notation is that they are both upper bounds. I'm guessing that's why you're confused. nᵇ is o(aⁿ) because the growth rate of aⁿ is much bigger. You could probably find a tighter little-o upper bound on nᵇ (one where the gap between the bound and the function is smaller), but aⁿ is still valid. It's also probably worth looking at the difference between Big O and little o.
Remember that a function f is Big O of a function g if for some constant k > 0, you can eventually find a minimum value for n so that f(n) ≤ k * g(n).
A function f is little o of a function g if for any constant k > 0 you can eventually find a minimum value for n so that f(n) ≤ k * g(n).
Note that the little o requirement is harder to fulfill, meaning that if a function f is little o of a function g, it is also Big O of g, and it says something stronger: g grows strictly faster than f, not just at least as fast.
In your example, if b is 3 and a is 2 and we set k to 1, we can work out the minimum value for n so that nᵇ ≤ k * aⁿ. In this case, it's between 9 and 10, since
9³ = 729 and 1 * 2⁹ = 512, which means that at n = 9, aⁿ is not yet greater than nᵇ,
but
10³ = 1000 and 1 * 2¹⁰ = 1024, which means that aⁿ is now greater than nᵇ.
You can see by graphing these functions that aⁿ will be greater than nᵇ for any value of n > 10. At this point we've only shown that nᵇ is Big O of aⁿ, since Big O only requires that for some value of k > 0 (we picked 1), aⁿ ≥ nᵇ for all n past some minimum (in this case it's between 9 and 10).
To show that nᵇ is little o of aⁿ, we would have to show that for any k greater than 0 you can still find a minimum value of n so that nᵇ ≤ k * aⁿ from that point on. For example, if you picked k = 0.5, the minimum of 10 we found earlier doesn't work, since 10³ = 1000 and 0.5 * 2¹⁰ = 512. But we can just keep sliding the minimum for n out further and further; the smaller you make k, the bigger the minimum for n will be. Saying nᵇ is little o of aⁿ means that no matter how small you make k, we will always be able to find a big enough value for n so that nᵇ ≤ k * aⁿ.
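Here is a rough sketch (my own code, not part of the answer) that automates that search for f(n) = n³ and g(n) = 2ⁿ:

```python
# For a given k, find the point past which n**3 <= k * 2**n holds and never
# fails again (searching n up to `probe`, which is plenty here).
def threshold(k, probe=200):
    failures = [n for n in range(1, probe) if n ** 3 > k * 2 ** n]
    return max(failures) + 1 if failures else 1

for k in (1, 0.5, 0.001):
    print(k, threshold(k))   # prints 10, 12, 24
```

However small k gets, the threshold just moves further out; it never disappears, and that is exactly the little-o claim.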

Algorithm analysis, Big O notation homework

Hi, can someone explain to me how to solve this homework problem?
(n + log n)3^n = O((4^n)/n).
I think it's the same as solving this inequality: (n + log n)3^n < c((4^n)/n).
Thanks in advance.
You need to find a c (as you mentioned in your problem), and you need to show that the inequality holds for all n greater than some k.
By showing that you can find the c and k in question, then by definition you've proved the big-O bound.
Conversely, if you can't find such a c and k, this is because the function on the left is not really upper-bounded by the function on the right. That shouldn't be the case here, though (and you'll know you're getting a more intuitive understanding of asymptotic growth/bounding when you can articulate exactly why).
By definition, f(n) = O(g(n)) is true if there exists a constant M such that |f(n)| < M|g(n)| for every sufficiently large n. In computer science the quantities involved are nonnegative, so this amounts to finding an M such that f(n) / g(n) < M.
This, in turn, can be done by proving that f(n) / g(n) has a finite limit as n increases towards infinity (by definition of a limit). Which, in the case of your (n^2 + n log n) * (3/4)^n is pretty obvious by virtue of how exponential functions work.
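To make that concrete, here is a small numerical check (my own sketch; it uses the natural log, though the base only changes the constant):

```python
import math

# f(n)/g(n) for f(n) = (n + log n) * 3**n and g(n) = 4**n / n, i.e.
# (n**2 + n*log(n)) * (3/4)**n: it peaks for small n and then falls to 0,
# so a finite M (and hence the big-O bound) exists.
ratio = lambda n: (n ** 2 + n * math.log(n)) * (3 / 4) ** n
print(max(ratio(n) for n in range(1, 200)))  # the peak, about 8.4; any M above it works
print(ratio(50), ratio(100))                 # already tiny
```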

Resources