If I have the following closed-form solution for a recurrence relation, how can I simplify it under big O:
f(n) = 3^n + n.9^n
I would hazard a guess at:
f(n) is a member of O(9^n) -> I am not sure if this is right? Could someone please let me know how to simplify the above equation under big O and also state which rule you used...
Thanks in advance
http://en.wikipedia.org/wiki/Big_O_notation
If f(x) is a sum of several terms, the one with the largest growth rate is kept, and all others omitted.
So O(n * 9^n), assuming that with n.9^n you meant n * 9^n.
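Here is a quick Python sanity check of that rule (numeric evidence, not a proof) showing that the n * 9^n term dominates 3^n:

```python
# The ratio f(n) / (n * 9**n) tends to 1, so n * 9**n dominates 3**n.
def f(n):
    return 3**n + n * 9**n

for n in (5, 10, 20, 40):
    print(n, f(n) / (n * 9**n))
```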
A simple chain of relations that helps here is:
O(1) < O(log n) < O(n^ε) < O(n) < O(n log n) < O(n^c) < O(c^n) < O(n!) < O(n^n)
for c > 1 and 0 < ε < 1.
See the Big-O article on Wikipedia for a better understanding.
I am not able to show that 3nlogn - 2n is Big-Omega(nlogn).
I have tried trading off different values of c and n, but in every case I have to take n >= 2 (if n < 2, log n becomes 0 and forces c to 0, which is not allowed for Big-Omega, since c > 0). My book states the requirement as f(n) >= c*g(n), but for f(n) = 3nlogn - 2n I keep ending up with f(n) <= c*g(n).
I just need a guideline; I know there must be something simple I am mixing up. Any help is appreciated.
Big Omega is a lower bound, and in your case we have that (since n <= nlog(n) for n >= 2, taking log base 2)
3nlog(n) - 2n >= 3nlog(n) - 2nlog(n) = nlog(n)
and we immediately get that it's in Omega(nlog(n)).
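If you want to convince yourself numerically, here is a quick Python spot-check of that chain (evidence on sampled values, not a proof; it uses log base 2):

```python
import math

# For every n >= 2, check 3*n*log2(n) - 2*n >= n*log2(n).
for n in (2, 4, 16, 1024, 10**6):
    f = 3 * n * math.log2(n) - 2 * n
    g = n * math.log2(n)
    assert f >= g
    print(n, f / g)   # the ratio is >= 1 and approaches 3 as n grows
```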
What is the recurrence relation and time complexity for the following pseudo-code?
temp = 1
repeat
    for i = 1 to n
        temp = temp + 1
    n = n / 2
until n < 1
When we are dealing with asymptotic notations like Big-Oh, Omega and Theta, we don't consider the constants. No doubt your time complexity will go like
n + n/2 + n/4 + ... + 1
but if you add up this decreasing geometric series you will get an exact answer equal to c*n, where c is some constant greater than 1. In asymptotic notation, as I said earlier, constants don't matter, so whether the value of c is 2 or 50 or 100 or 10000, it will be O(n) either way.
Another thing: try not to use the Master Theorem for solving recurrence relations; use the recursion-tree method instead, as it is purely conceptual, will help you build up your understanding, and can be used in every case. The Master Theorem is like a shortcut and also has limitations.
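To see this concretely, here is a short Python simulation of the loop above (assuming the exit condition means the loop keeps running while n >= 1):

```python
# Count the inner-loop iterations and compare with 2*n, the limit
# of the geometric series n + n/2 + n/4 + ... + 1.
def count_ops(n):
    ops = 0
    while n >= 1:
        ops += n      # the inner for-loop runs n times
        n //= 2       # n is halved on every outer iteration
    return ops

for n in (10, 100, 10**4, 10**6):
    print(n, count_ops(n), count_ops(n) / n)   # the ratio stays below 2
```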
The informal idea of Big-O is described as "it's the highest order of growth of a function", i.e. f(n) = 3n^2 + 5n + 50 is just O(n^2).
I do understand that Big-O is just a way of saying "guaranteed not to be worse than this, period". Formally, it appears the definition is: f(n) is O(g(n)) iff f(n) <= c * g(n) where c is positive.
First, some mathy stuff: if f(n) = 5n^2 and g(n) = n, I should be able to show 5n^2 isn't O(g(n)) by doing
5n^2 <= cn
5n <= c
If the idea is that c isn't a constant here (I have no idea if that's a requirement), and that is the proof that f(n) isn't in O(g(n)), then what about if g(n) were n^3 (in which 5n^2 surely should be contained)?
5n^2 <= cn^3
5/n <= c
I have a misunderstanding of how the math works out for all of this, I assume, so I ask:
How does all this fancy stuff work?
How does it connect to the simple definition given in my data structures class?
Thanks for any help
n is a positive integer, which means that 1<=n and therefore 5/n<=5/1=5, so you can pick c=5.
A more complete definition also allows you to pick n0 and a, both constants, and only prove that f(n) <= a + c*g(n) for all n > n0.
c is a constant (i.e. independent of n)
In your first example (it's proof by contradiction):
i.e. assume
5n^2 <= cn
5n <= c
But for any fixed constant c, we can find a value of n that makes it untrue.
For example pick c = 1000000, then a value of n = 200001 would be a contradiction.
In your second example, we know that f(n) is O(n^2); therefore it is also O(n^3) and above. If you are bounded by k*n^2, you are also bounded by j*n^3.
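If it helps to experiment, here is a small hypothetical Python helper (the name looks_bounded and the sampling scheme are mine; sampling gives evidence, not a proof) for testing a candidate pair (c, n0):

```python
# Heuristic check: does f(n) <= c * g(n) hold for every sampled n >= n0?
def looks_bounded(f, g, c, n0, n_max=10**6, step=997):
    return all(f(n) <= c * g(n) for n in range(n0, n_max, step))

# 5n^2 vs c*n: even a huge c eventually fails, so 5n^2 is not O(n).
print(looks_bounded(lambda n: 5 * n * n, lambda n: n, c=10**6, n0=1))    # False
# 5n^2 vs c*n^3: c = 5 and n0 = 1 work, so 5n^2 is O(n^3).
print(looks_bounded(lambda n: 5 * n * n, lambda n: n**3, c=5, n0=1))     # True
```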
The informal idea of Big-O is described as "it's the highest order of growth of a function", i.e. f(n) = 3n^2 + 5n + 50 is just O(n^2).
I wouldn't say that this is the idea behind Big-O. Informally, Big-O is a rough estimate of what a given function cannot exceed, and its usage is mostly about approximating how something will grow for big inputs.
For example, if we take a 6-digit number, we can definitely say that it's less than a million without looking at its digits. In a lot of cases this is enough, and we don't need to analyse all the digits.
For analysing function growth, two factors play a role:
we are only interested in the function's behaviour for very large inputs
if f is bigger than g, but we can fix that by multiplying g by some big constant, then f's advantage is not a matter of growth
This leads us to the two parts of the definition: (1) some constant factor and (2) for big enough n.
And for polynomials indeed, the highest-order term determines the growth rate.
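A tiny Python illustration of both points (the functions 1000*n and n^2 are arbitrary examples, not from the question):

```python
# Constants lose to growth: 1000*n starts out ahead of n**2, but once
# n passes 1000 the faster-growing n**2 stays ahead forever.
for n in (10, 100, 1000, 10_000, 10**6):
    print(n, 1000 * n, n**2, 1000 * n < n**2)
```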
Can someone help me understand this question? I may have it on tomorrow's exam, but I can't find a similar question on the internet or in my lectures.
First you need to express each function as a Theta(something).
For instance, for the first one: Theta((1-n)(n^3-17)) = Theta(n^4 + ...) = Theta(n^4).
For the second one: Theta(30+log(n^9)) = Theta(30 + 9logn) = Theta(logn).
These are sorted as g1, g2, because n^4 = Omega(log n).
And so on.
For the sorting: saying that g1 = Omega(g2) means that g1 grows at least as fast as g2, that is, we are defining a lower bound. So, sort them from the worst (the fastest-growing, hence the slowest to run) to the best. (NB: it is strange that the exercise wants "the first to be the most preferable", but the definition of Omega leaves no doubt.)
Btw: if you want to be more formal, here is the definition of the Omega notation:
f = Omega(g) iff there exist c > 0 and n0 > 0 such that for all n >= n0 we have 0 <= c*g(n) <= f(n) (in words: f grows at least as fast as g).
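For a concrete instance of that definition, here is a quick Python spot-check (a sanity check on sampled values, not a proof) that n^4 = Omega(log n) with, say, c = 1 and n0 = 2:

```python
import math

# Check 0 <= c * log(n) <= n**4 for a few n >= n0.
c, n0 = 1, 2
for n in (2, 10, 100, 10**4):
    assert 0 <= c * math.log(n) <= n**4
print("0 <= c*log(n) <= n^4 held for every sampled n >= n0")
```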
First, you have to calculate the Theta notations by determining the growth class of each function, e.g. 1, log(n), n, n log(n) and so on. To do that you of course have to expand those functions.
Having the growth class of each function, you have to order them by their goodness.
Last, you have to put these functions into relations, like g1 = Omega(g2). Just keep in mind that a function t(n) is said to be in Omega(g(n)) if t(n) is bounded below by some multiple of g(n), e.g. n³ >= n² and therefore n³ is an element of Omega(n²). This can also be written as n³ = Omega(n²).
For Theta, this answer and that one summarize what is to be found in your problem. Which function g can you find such that (say f is one of your 8 functions above)
multiplied by a constant, g bounds f asymptotically from above (then f is O(g(n)))
multiplied by (usually) another constant, g bounds f asymptotically from below (then f is Omega(g(n)))
For instance, for (iv), 10^5 * n: Θ(n) fits, as you can easily find two constants such that k1*n bounds 10^5 * n from below and k2*n bounds it from above, asymptotically (here f is both O(n) and Omega(n); (iv) is an easy one).
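For example, a quick Python spot-check of that sandwich (sampled values only, not a proof):

```python
# With k1 = 1 and k2 = 10^5: k1*n <= 10^5*n <= k2*n for every n >= 1,
# hence f(n) = 10^5 * n is Theta(n).
k1, k2 = 1, 10**5
for n in (1, 10, 10**3, 10**6):
    f = 10**5 * n
    assert k1 * n <= f <= k2 * n
print("k1*n <= 10^5*n <= k2*n held for every sampled n")
```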
You need to understand that Big-O, Big-Omega and Big-Theta can each be applied to the worst/best/average case.
For some function:
Big-O -> O(..) is an upper bound the function will never exceed, for large enough values.
Big-Omega is a lower bound the function never goes below, again for large enough values.
Big-Theta is like: there are 2 constants c1 and c2 such that
c1 * g(n) <= f(n) <= c2 * g(n)
So, going to your sample:
i) it is of order n^4 for both Big-O and Big-Omega, since the n^4 term dominates.
viii) it is constant, so Big-O and Big-Omega are the same, and thus Big-Theta is the same.
Hi, can someone explain to me how to solve this homework problem?
(n + log n)3^n = O((4^n)/n).
I think it's the same as solving this inequality: (n + log n) * 3^n <= c * (4^n / n).
Thanks in advance
You need to find a c (as you mentioned in your problem), and you need to show that the inequality holds for all n greater than some k.
By showing that you can find the c and k in question, then by definition you've proved the big-O bound.
Conversely, if you can't find such a c and k, this is because the function on the left is not really upper-bounded by the function on the right. That shouldn't be the case here, though (and you'll know you're getting a more intuitive understanding of asymptotic growth/bounding when you can articulate exactly why).
By definition, f(n) = O(g(n)) is true if there exists a constant M such that |f(n)| < M * |g(n)| for all sufficiently large n. In computer science the quantities involved are nonnegative, so this amounts to finding an M such that f(n) / g(n) < M.
This, in turn, can be done by proving that f(n) / g(n) has a finite limit as n increases towards infinity (by the definition of a limit). Here f(n) / g(n) = (n + log n) * 3^n / (4^n / n) = (n^2 + n log n) * (3/4)^n, and that this tends to 0 is pretty obvious by virtue of how exponential functions work: the decaying (3/4)^n beats any polynomial factor.
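A quick numeric illustration of that limit in Python (evidence, not a proof):

```python
import math

# (n^2 + n*log(n)) * (3/4)^n shrinks toward 0 as n grows, since the
# exponential decay overwhelms the polynomial factor.
for n in (10, 50, 100, 200):
    print(n, (n**2 + n * math.log(n)) * 0.75**n)
```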