I have been learning about algorithms recently, and I know that good algorithms often already exist so we don't need to write our own. The problem I am asking about is from a past question paper.
The question asks: if a function is O(n), can it also be O(n^2)?
In other words, can we say that if a function is O(n), then it is also O(n^2)?
Big O is an upper bound. So, yes, n is in O(n^2), but not vice-versa. Also, both n and n^2 are in O(n^3).
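To make that concrete, here is a sketch of the definition-based argument, with explicit witness constants c and n_0 (the standard definition, written in LaTeX):

    f \in O(g) \iff \exists\, c > 0,\ n_0 \ge 1 \ \text{such that} \ f(n) \le c \cdot g(n) \ \text{for all } n \ge n_0

    n \le 1 \cdot n^2 \ \text{for all } n \ge 1 \ \Rightarrow \ n \in O(n^2) \quad (c = 1,\ n_0 = 1)

    n^2 \le c \cdot n \ \text{fails whenever } n > c \ \Rightarrow \ n^2 \notin O(n)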
Related
Which of these functions is not O(n^2)?
A. n^2 * log2(n)
B. log2(log2(n))
C. n * log2(n)
D. (log2(n))^2
I believe it is not A, because n^2 dominates in that one, which would make it O(n^2) as well.
My question is: how do you figure out which one is not O(n^2)? I have not worked much with logarithms, and I am still fuzzy on how to work with them.
Thanks.
The answer is A.
Big-O is an upper bound, and in programming it usually describes the worst-case runtime. For example, f(n) = n^2 means f(n) is O(n^2), and also O(n^3) and O(n^100 * log n). So a function that is O(n^2) is also O(n^3), but a function that grows like n^3 is not O(n^2).
In the case of your question, the only option that grows faster than n^2 is A. Graphing the functions can help you visualize it: A (red) is the only one increasing faster than n^2 (black).
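Since the graph is not reproduced here, a quick numeric check makes the same point. This is a minimal sketch (assuming base-2 logarithms, as written in the options; the variable names are just for illustration): a ratio f(n)/n^2 that keeps growing means f is not O(n^2).

    import math

    # Each option from the question, compared against n^2.
    candidates = {
        "A: n^2 * log2(n)": lambda n: n**2 * math.log2(n),
        "B: log2(log2(n))": lambda n: math.log2(math.log2(n)),
        "C: n * log2(n)":   lambda n: n * math.log2(n),
        "D: (log2(n))^2":   lambda n: math.log2(n) ** 2,
    }

    for n in (10**2, 10**4, 10**6):
        print(f"n = {n}")
        for name, f in candidates.items():
            # If this ratio keeps growing as n grows, f(n) is not O(n^2).
            print(f"  {name:18}  f(n)/n^2 = {f(n) / n**2:.3g}")

Only option A's ratio keeps growing (it equals log2(n)); the other three ratios shrink toward zero, so B, C, and D are all O(n^2).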
This article may help you as well: https://en.wikipedia.org/wiki/Big_O_notation (see the section "Orders of common functions").
While taking the Algorithms course on Coursera, I came across a question about Big-O notation that says O(n^2) = O(n). I checked some other answers on Stack Overflow, and some posts said that Big-O notation means an "upper bound". Based on this definition, could I say O(n) = O(2^n), because O(n) <= O(2^n)?
In some cases all polynomials are considered pretty much equivalent when compared with anything exponential.
But most probably the 2 was a scalar: O(2*n) is the same as O(n), because constant factors are ignored in big-O notation.
In any case, no, O(n) and O(2^n) are not equal; O(n) is, however, a subset of any higher-order class such as O(2^n).
It would be inappropriate to say O(n) equals O(n^2). Rather, say that O(n) falls within the bounds of O(n^2).
A function like f(n) = 3n^2 + 2 is O(n^2) because 2 is the largest exponent in the function. However, the function f(n) = n^3 is not O(n^2) because its largest exponent is 3, not 2.
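As a sketch, here are the constants that make the first claim precise under the usual definition (written in LaTeX):

    3n^2 + 2 \le 3n^2 + 2n^2 = 5n^2 \ \text{for all } n \ge 1 \ \Rightarrow \ 3n^2 + 2 \in O(n^2) \quad (c = 5,\ n_0 = 1)

    n^3 \le c \cdot n^2 \ \text{fails once } n > c \ \Rightarrow \ n^3 \notin O(n^2)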
So in order to make a guess like this on Big Omega or Big Theta, what should we look for in the function? Can we do something analogous to what we did for Big O notation above?
For example, let's say the question asks us to find the Big Omega or Big Theta of the function f(n) = 3n^2 + 1. Is f(n) O(n), Big Omega(n), or Big Theta(n)? If I were to take an educated guess on whether this function is Big O(n), I would say no (because the largest exponent of the function is 2, not 1). I would prove this more formally using induction.
So, can we do something analogous to what we did with Big O notation in the first example? What should I look for in the function to guess what the Big Omega and Theta will be, and to determine if the "educated guess" is correct?
Your example uses polynomials, so I will assume that.
Your polynomial is O(n^k) if k is greater than or equal to the order of your polynomial.
Your polynomial is Omega(n^k) if k is less than or equal to the order of your polynomial.
Your polynomial is Theta(n^k) if it is both O(n^k) and Omega(n^k).
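Applied to the function from the question, f(n) = 3n^2 + 1 (a degree-2 polynomial), this gives, as a sketch:

    3n^2 + 1 \in O(n^2),\ O(n^3),\ O(n^{100}) \quad (\text{any } k \ge 2)

    3n^2 + 1 \in \Omega(n^2),\ \Omega(n),\ \Omega(1) \quad (\text{any } k \le 2)

    3n^2 + 1 \in \Theta(n^2) \quad (\text{only } k = 2)

So the function is Omega(n) but not O(n), and therefore not Theta(n).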
So, can we do something analogous to what we did with Big O notation in the first example?
If you're looking for something that allows you to eyeball if something is Big Omega, Big O, or Big Theta for polynomials, you can use the Theorem of Polynomial Orders (pretty much what Patrick87 said).
Basically, the Theorem of Polynomial Orders allows you to solely look at the highest order term and use that as your desired bound for Big O, Big Omega, and Big Theta.
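Stated a bit more formally (a standard formulation; the exact name and wording vary by textbook):

    p(n) = a_k n^k + a_{k-1} n^{k-1} + \dots + a_1 n + a_0, \quad a_k > 0 \ \Rightarrow \ p(n) \in \Theta(n^k)

and hence p(n) is O(n^m) for every m >= k and Omega(n^m) for every m <= k.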
What should I look for in the function to guess what the Big Omega and Theta will be, and to determine if the "educated guess" is correct?
Ideally, your function would be a polynomial, as that would make the problem much simpler. But it can also be a logarithmic function or an exponential function.
To determine if the "educated guess" is correct, you have to first understand what kind of runtime you are looking for. Ask yourself: am I looking for the worst case running time for this algorithm? or am I looking for the best case running time for this algorithm? or am I looking for the general running time of the algorithm?
If you are looking at the worst-case running time of the algorithm, you can simply prove Big Omega using the theorem of polynomial order (if it's a polynomial function) or through an example. However, you must analyze the algorithm to be able to prove Big O and Big Theta.
If you are looking at the best-case running time of the algorithm, you can prove Big O using an example or through the theorem of polynomial order (if it's a polynomial). However, Big Omega and Big Theta can only be proved by analyzing the algorithm.
Basically, you can only prove the least informative bounds for best-case running time and worst-case running time of the algorithm with an example.
For proving the general running time of an algorithm, you have to make sure that the function for the algorithm's running time you have been given is for all input - a single example is not sufficient in this case. When you don't have a function for all input, you have to analyze the algorithm to prove any of the three (Big O, Big Omega, Big Theta) for all inputs for the algorithm.
I have a question regarding my lecture on data structures and algorithms.
I have a problem understanding how an algorithm grows. I don't understand the difference between the O notations, for example between O(lg n) and O(n lg n).
I hope someone can help me. Thank you.
To compare time complexities, you should be able to make some mathematical arguments. In your example:
for every n > 1, multiplying both sides of n > 1 by log n (which is positive for n > 1) gives n log n > log n, so n log n is worse than log n. An easy way to understand this is to compare the graphs of the functions, as suggested in the comments, or even to try some big inputs and observe the asymptotic behavior. For example, for n = 1,000,000:
log(1,000,000) = 6, while 1,000,000 * log(1,000,000) = 6,000,000, which is far greater (base-10 logarithms here; the base only changes a constant factor).
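A quick sketch of the same comparison in code (using base-10 logarithms to match the numbers above; any base only shifts things by a constant factor):

    import math

    for n in (10**2, 10**4, 10**6):
        # log n grows very slowly; n * log n grows much faster.
        print(f"n = {n:>9}   log10(n) = {math.log10(n):.0f}   n*log10(n) = {n * math.log10(n):,.0f}")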
Also notice that you don't count constants in big-O notation: for example, 4n is O(n), n is O(n), and cn + w is O(n) for constants c and w.
I went through many lectures, videos, and sources regarding asymptotic notations. I understood what O, Omega, and Theta are. But in algorithms, why do we always use only Big-O notation, and not Theta and Omega? (I know it sounds noobish, but please help me with this.) What exactly are this upper bound and lower bound with respect to algorithms?
My next question is: how do we find the complexity of an algorithm? Say I have an algorithm; how do I find the recurrence relation T(n) and then compute the complexity from it? How do I form these equations? For example, in the case of recursive linear search, T(n) = T(n-1) + 1. How?
It would be great if someone could explain this to me as if I were a beginner, so I can understand it even better. I found some answers on Stack Overflow, but they weren't convincing enough.
Thank you.
Why we use big-O so much compared to Theta and Omega: This is partly cultural, rather than technical. It is extremely common for people to say big-O when Theta would really be more appropriate. Omega doesn't get used much in practice both because we frequently are more concerned about upper bounds than lower bounds, and also because non-trivial lower bounds are often much more difficult to prove. (Trivial lower bounds are usually the kind that say "You have to look at all of the input, so the running time is at least equal to the size of the input.")
Of course, these comments about lower bounds also partly explain Theta, since Theta involves both an upper bound and a lower bound.
Coming up with a recurrence relation: There's no simple recipe that addresses all cases. Here's a description for relatively simple recursive algorithms.
Let N be the size of the initial input. Suppose there are R recursive calls in your recursive function. (Example: for mergesort, R would be 2.) Further suppose that all the recursive calls reduce the size of the initial input by the same amount, from N to M. (Example: for mergesort, M would be N/2.) And, finally, suppose that the recursive function does W work outside of the recursive calls. (Example: for mergesort, W would be N for the merge.)
Then the recurrence relation would be T(N) = R*T(M) + W. (Example: for mergesort, this would be T(N) = 2*T(N/2) + N.)
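As a sketch, here is a plain merge sort in Python with the R, M, and W from above marked in comments. This is just an illustration of where each piece of the recurrence comes from; the function names are my own, not part of the question.

    def merge_sort(a):
        if len(a) <= 1:              # base case: T(1) is constant
            return a
        mid = len(a) // 2
        left = merge_sort(a[:mid])   # recursive call 1 of R = 2, on input of size M = N/2
        right = merge_sort(a[mid:])  # recursive call 2 of R = 2, on input of size M = N/2
        return merge(left, right)    # W = N work done outside the recursive calls

    def merge(left, right):
        # Linear-time merge of two sorted lists: this is the W = N term.
        out, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                out.append(left[i]); i += 1
            else:
                out.append(right[j]); j += 1
        out.extend(left[i:])
        out.extend(right[j:])
        return out

    # T(N) = 2*T(N/2) + N, which solves to O(N log N).
    print(merge_sort([5, 2, 9, 1, 7, 3]))   # [1, 2, 3, 5, 7, 9]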
When we create an algorithm, it's always in order to be as fast as possible, and we need to consider every case. This is why we use O: we want an upper bound on the complexity, to be sure that our algorithm will never exceed it.
To assess the complexity, you have to count the number of steps. In the equation T(n) = T(n-1) + 1, there will be n steps before computing T(0), so the complexity is linear. (I'm talking about time complexity, not space complexity.)
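For the recursive linear search mentioned in the question, a minimal sketch (the function name and the index-based style are just for illustration):

    def linear_search(items, target, i=0):
        if i == len(items):               # T(0): reached the end, constant work
            return -1
        if items[i] == target:            # constant work at this level: the "+ 1"
            return i
        return linear_search(items, target, i + 1)   # one recursive call on n-1 remaining items: T(n-1)

    # T(n) = T(n-1) + 1, so there are n steps in the worst case: O(n) time.
    print(linear_search([4, 8, 15, 16, 23, 42], 16))   # 3
    print(linear_search([4, 8, 15, 16, 23, 42], 99))   # -1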