Algorithm complexity and big O notation [duplicate]

I am taking an online class on algorithms and I had the following quiz. I got it wrong and am trying to understand the reason for the answer.
Which of the following is O(n^3)?
a) 11n + 15 lg n + 100
b) 1/3 n^2
c) 25000 n^3
d) All of the above.
The correct answer is (d), all of the above. The reason is that Big-O notation provides only an upper bound on the growth rate of a function as n gets large.
I am not sure why the answer is not (c). For example, the upper bound on (b) is less than n^3.

The reason is that formally, big-O notation is an asymptotic upper bound.
So 1/3*n^2 is O(n^2), but it is also O(n^3) and also O(2^n).
While in everyday conversation about complexity, O(...) is used as a tight bound (both an upper and a lower bound), theta notation, Θ(...), is the technically correct term for that.
For more info see What is the difference between Θ(n) and O(n)?
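To see the formal definition at work, here is a small Python spot check (my own illustration; is_bounded is a hypothetical helper, and the finite range stands in for "all sufficiently large n"): it verifies that c = 1 and n0 = 1 witness (1/3)n^2 <= c*g(n) for each candidate bound g.

    # Finite spot check of the witnesses in "f(n) = O(g(n))":
    # f(n) <= c * g(n) for all n0 <= n <= n_max.
    def is_bounded(f, g, c, n0, n_max=1_000):
        return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

    f = lambda n: n * n / 3                       # (1/3) n^2
    print(is_bounded(f, lambda n: n ** 2, 1, 1))  # True: O(n^2), the tight bound
    print(is_bounded(f, lambda n: n ** 3, 1, 1))  # True: O(n^3) as well
    print(is_bounded(f, lambda n: 2 ** n, 1, 1))  # True: even O(2^n)

All three checks pass, which is the answer's point: O(n^2) is the tight bound, while O(n^3) and O(2^n) are merely looser upper bounds.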


How does bigO(5*2^n + 1000n^100) become bigO(2^n)? [duplicate]

I have just started "Cracking the Coding Interview" by Gayle Laakmann McDowell. In the Big O topic, it says we should drop the non-dominant terms.
O(n^2 + n) becomes O(n^2) and O(n + log n) becomes O(n).
Well, I understand that. If we suppose the value of n to be some large number, then we can ignore the smaller term, since it will be comparatively much smaller than the dominant one.
But in this case, how can O(5*2^n + 1000n^100) become O(2^n)?
Isn't n^100 more dominant than 2^n?
n^100, or n raised to any constant power, does not dominate 2^n. An exponential eventually outgrows every fixed-degree polynomial: lim(n^100 / 2^n) = 0 as n tends to infinity, so 2^n is the dominant term.
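To make that concrete, here is a quick sketch of my own (not from the answer) that searches for the crossover point; Python's exact integer arithmetic handles the huge values, and the loop stops a little below n = 1000.

    # Find the first n (past the tiny-n region) where 2^n overtakes n^100.
    n = 2
    while 2 ** n <= n ** 100:
        n += 1
    print(n)  # a bit under 1000; from this point on, 2^n > n^100 forever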

Confused about big O of n^2 vs 2^n [duplicate]

I read in a book that the expression O(2^n + n^100) reduces to O(2^n) when we drop the insignificant parts. I am confused because, as per my understanding, if the value of n is 3, then the part n^100 seems to have a higher count of executions. What am I missing?
Big O notation is asymptotic in nature; that means we consider the expression as n tends to infinity.
You are right that for n = 3, n^100 is greater than 2^n, but once n > 1000, 2^n is always greater than n^100, so we can disregard n^100 in O(2^n + n^100) for sufficiently large n.
For a formal mathematical description of Big O notation, the Wikipedia article does a good job.
For a less mathematical description, this answer also does a good job:
What is a plain English explanation of "Big O" notation?
The big O notation is used to describe asymptotic complexity. The word asymptotic plays a significant role: it means that your n is not going to be 3 or some other small number. You should think of n as being infinitely large.
Even though n^100 grows faster in the beginning, there will be a point where 2^n will outgrow n^100.
You are missing the fact that big O describes asymptotic complexity. Speaking more strictly, you could calculate lim(2^n / n^100) as n -> infinity and you would see that it equals infinity, which means that asymptotically 2^n grows faster than n^100.
When complexity is measured in terms of n, you should consider all sufficiently large values of n, not just one example. For large enough n, 2^n dominates, and that is why n^100 is insignificant.
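A way to compare the two terms without computing astronomically large numbers (a sketch of my own) is to take logarithms: log2(2^n / n^100) = n - 100*log2(n), and the sign of that margin tells you which term is bigger.

    import math

    # Positive margin means 2^n is bigger; negative means n^100 is bigger.
    for n in (3, 100, 1_000, 10_000):
        margin = n - 100 * math.log2(n)
        bigger = "2^n" if margin > 0 else "n^100"
        print(f"n={n}: {bigger} is bigger (log2 of the ratio = {margin:.1f})")

The margin is hugely negative at n = 3, still negative at n = 100, and positive from roughly n = 1000 onward, matching the answers above.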

How are the following functions O(N^3)?

I'm taking the "Intro To Algorithms" course on Coursera, and I've arrived at the video which deals with Big-Theta, Big-Omega and Big-O notation. The end-of-video quiz presents the following question:
Q: Which of the following functions is O(N^3)?
a) 11N + 15lgN + 100
b) (N^2)/3
c) 25,000*(N^3)
d) All of the above
I answered "c" and was told my answer was incorrect, and that the correct answer is actually "d". The explanation provided by the course wasn't much help:
Recall that big-Oh notation provides only an upper bound on the growth rate of a function as N gets large. In this course, we primarily use tilde notation because it more accurately describes the function: it provides both an upper and lower bound on the function as well as the coefficient of the leading term.
I was under the impression that one should drop the lesser-order terms (i.e. "15lgN + 100") and focus only on the highest-order terms. Furthermore, I can't see how N^3 could be the upper bound on a quadratic (as opposed to a cubic) function like N^2.
So my question is, why are "a" and "b" classified as O(N^3) in this case?
Recall that f(n) = O(g(n)) means f(n) <= constant * g(n) for all sufficiently large n, right?
In other words, if you plot the graphs of f(n) and constant * g(n), then after some value of n, the graph of constant * g(n) will always be above that of f(n).
Here g(n) is N^3 and each option plays the role of f(n). For a suitable constant and large enough N, constant * N^3 is >= each of options a, b, and c, hence the answer is (d).
Edit:
The following statements are all true:
n = O(n)
n = O(n^2)
n = O(n^3)
But only n = O(n) is a tight upper bound, and that is what we should use when deriving the time complexity of algorithms. The 2nd and 3rd statements are still valid upper bounds, just not tight ones!
Edit 2: Imagine a function F(x) with two bounding curves: G(x), a tight upper bound for F(x), and H(x), an upper bound of F(x) that is not tight. Still, we would say F(x) = O(G(x)) and F(x) = O(H(x)). When somebody in an exam or interview asks for time complexity, they are asking for the tight bound, not just any upper bound. Unfortunately, the terms "tight upper bound" and "upper bound" are used interchangeably in exams and interviews.
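Here is a small spot check of the quiz options against that definition (my own sketch; the constant c = 25,000 is one convenient choice, not the only one): each option stays below c * N^3 from N = 1 onward.

    import math

    # The quiz options as functions of N (lg is log base 2).
    options = {
        "a": lambda n: 11 * n + 15 * math.log2(n) + 100,
        "b": lambda n: n * n / 3,
        "c": lambda n: 25_000 * n ** 3,
    }

    c, n0 = 25_000, 1
    for name, f in options.items():
        ok = all(f(n) <= c * n ** 3 for n in range(n0, 2_001))
        print(name, ok)  # all True, so each option is O(N^3)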
The explanation says it: "Recall that big-Oh notation provides only an upper bound on the growth rate of a function as N gets large."
In this particular context, the upper bound can be read as "does not grow faster than N³".
It is true that 11N + 15lgN + 100 does not grow faster than N³.
Think of a function that is O(N^2) as also being O(N^3), O(N^4), and so on. For N >= 1, N^2 is always bounded above by N^3, therefore anything that is O(N^2) is indeed O(N^3).
http://en.wikipedia.org/wiki/Big_O_notation#/media/File:Big-O-notation.png
As many have already noted, a function f(n) with an upper bound of, say, O(n) is also O(n^2), O(n^3), O(n^4), and so on.
If that still feels abstract, think of it in layman's terms.
Suppose a process takes at most 10 seconds to execute, whatever the input. We can conclude:
Whatever the input, execution will complete in less than or equal to 10 seconds.
If that is true, the following are also true:
Whatever the input, execution will complete in less than or equal to 100 seconds.
Whatever the input, execution will complete in less than or equal to 1000 seconds.
And so on.
Every looser bound remains true, which is exactly why all of the options are O(N^3). Hope that gives you a glimpse.

log base 2 equals log base 3 when analyzing time complexity? [duplicate]

Most solutions to Exercise 4.4.6 of Introduction to Algorithms, 3rd edition, say
n*log3(n) = Ω(n*lg(n)).
Does this mean log3(n) is equivalent to log2(n) when we are discussing the time complexity of algorithms?
Thanks
As far as big-Oh notation is concerned, the base of the logarithms doesn't make any real difference, because of this important property, called Change of Base.
According to this property, changing the base of the logarithm, in terms of big-oh notation, only affects the complexity by a constant factor.
So, yes. In terms of big-Oh notation, log3(n) is equivalent to log2(n).
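A quick numeric confirmation (my own sketch): the ratio log3(n) / log2(n) is the constant ln(2)/ln(3) ≈ 0.6309 for every n, which is exactly what "differs only by a constant factor" means.

    import math

    # Change of base: log3(n) = log2(n) * (1 / log2(3)) for all n.
    for n in (8, 1_024, 10 ** 9):
        print(n, math.log(n, 3) / math.log2(n))  # always ~0.6309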

List of increasing order of complexity? [duplicate]

Possible Duplicate:
Plain English explanation of Big O
I am reading the "Introduction to Algorithms" book, but don't understand this:
O(100), O(log(n)), O(n*log(n)), O(n^2), O(n^3)
OK, thanks, I didn't even know what it was, so I am going to read that Big O post now.
But if anyone can explain this any further in layman's terms it would be much appreciated.
Thanks
That is big O notation, and the list is in increasing order of algorithmic cost:
O(1), not O(100) - constant time - whatever the input, the algorithm executes in constant time
O(log(n)) - logarithmic time - as the input gets larger, so does the time, but by a decreasing amount
O(n*log(n)) - linear times logarithmic - grows faster than linear, but not as fast as the following
O(n^2), or generally O(n^k) where k is a constant - polynomial time, probably the worst of the feasible classes
There are worse complexities, considered infeasible for anything but small inputs:
O(k^n) - exponential
O(n!) - factorial
Algorithms that grow like an Ackermann function...
This notation is a rough guide, not an exact measure. For example, some algorithms that are O(n^2) in the worst case can run faster on average than O(n*log(n)) ones - see quicksort.
This notation is also an upper bound, and in practice it is most often quoted for the worst-case scenario.
It can be used for space complexity or time complexity, where n is the size of the input provided.
Big O (simplifying) indicates how long a given algorithm will take to complete, with n being the size of the input.
For example:
O(100) -> takes a constant 100 units to complete, no matter the size of the input (so it is really O(1), constant time)
O(log(n)) -> takes on the order of log(n) to complete
O(n^2) -> takes on the order of n^2 (n * n) to complete
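To see the ordering in the lists above, here is a small table of rough step counts (my own illustration; these are the raw formulas, not measured running times):

    import math

    # Rough step counts per complexity class at a few input sizes.
    print(f"{'n':>8} {'1':>4} {'log n':>7} {'n log n':>10} {'n^2':>12} {'n^3':>16}")
    for n in (10, 1_000, 100_000):
        print(f"{n:>8} {1:>4} {round(math.log2(n)):>7} "
              f"{round(n * math.log2(n)):>10} {n ** 2:>12} {n ** 3:>16}")

The gap between adjacent columns widens rapidly as n grows, which is exactly the increasing order of complexity described above.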
