Show that g(n) is O(g(n)) [duplicate] - algorithm

This question already has an answer here:
Show that g(n) is O(g(n)) for each of the following [closed]
(1 answer)
Closed 4 years ago.
I don't get how to show it: I take the log of both sides, and then what?
This question asks me to prove that f(n) is O(g(n)), which I know how to do when both functions have the same base, but not so much here.
2^(sqrt(log(n))) is O(n^(4/3))

For sufficiently large n, sqrt(log(n)) is positive and bounded from above by log(n). Since 2^x is monotonically increasing, 2^sqrt(log(n)) is bounded from above by 2^log(n) = n. Moreover, for large n, n is clearly bounded from above by n^(4/3). Therefore the original function itself is bounded from above by n^(4/3) as well.
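The chain of inequalities above can be sanity-checked numerically; this minimal sketch assumes log means log base 2 (the argument is the same for any fixed base, up to constants):

```python
import math

# Check 2^sqrt(log2 n) <= 2^(log2 n) = n <= n^(4/3) at a few large n.
# Assumes log base 2; a different fixed base only changes constants.
for n in [2, 16, 1024, 2 ** 20, 2 ** 40]:
    lhs = 2 ** math.sqrt(math.log2(n))
    assert lhs <= n           # since sqrt(log2 n) <= log2 n for n >= 2
    assert n <= n ** (4 / 3)  # n <= n^(4/3) for n >= 1
print("all bounds hold")
```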

Related

How does bigO(5*2^n + 1000n^100) become bigO(2^n)? [duplicate]

This question already has answers here:
What is a plain English explanation of "Big O" notation?
(43 answers)
Closed 5 years ago.
I have just started "Cracking the Coding Interview" by Gayle Laakmann McDowell. In the Big-O chapter, it says we should drop the non-dominant term.
O(n^2 + n) becomes O(n^2) and O(n + log n) becomes O(n).
Well, I understand that. If we suppose the value of n to be some large number, then we can ignore the smaller term, since it will be comparatively much smaller than the dominant one.
But in this case, how can O(5*2^n + 1000n^100) become O(2^n)?
Isn't n^100 more dominant than 2^n?
n^100, or n raised to any constant, does not dominate 2^n. Taking logs makes this easy to see: log(n^100) = 100 log n, while log(2^n) = n, and n grows faster than 100 log n. Any exponential with base greater than 1 eventually dominates any polynomial.
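Since exact big-integer arithmetic is built into Python, the claim can be checked directly; this sketch just searches for the crossover point:

```python
# Find the first n at which 2^n exceeds n^100. Python ints are
# arbitrary precision, so the comparison is exact.
n = 2
while 2 ** n <= n ** 100:
    n += 1
print(n)  # the crossover sits just under 1000; past it, 2^n stays ahead for good
```

Up to that point the polynomial is (astronomically) larger, which is why small experiments can be misleading; the exponential wins only asymptotically.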

Algorithm complexity and big O notation [duplicate]

This question already has answers here:
What is the difference between Θ(n) and O(n)?
(9 answers)
Closed 5 years ago.
I am taking an online class on algorithms and I had the following quiz. I got it wrong and am trying to understand the reason for the answer.
Which of the following is O(n^3)?
a) 11n + 15 lg n + 100
b) 1/3 n^2
c) 25000 n^3
d) All of the above.
The correct answer is (d) all of the above. The reason is that Big-O notation provides only an upper bound on the growth rate of a function as n gets large.
I am not sure why the answer is not (c). For example, the upper bound on (b) is less than n^3.
The reason is that formally, big-O notation is an asymptotic upper bound.
So 1/3*n^2 is O(n^2), but it is also O(n^3) and also O(2^n).
While in everyday conversation about complexity O(...) is often used as a tight bound (both upper and lower), the theta notation, Θ(...), is the technically correct term for that.
For more info see What is the difference between Θ(n) and O(n)?
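The witnesses in the definition can be exhibited concretely; a minimal check for option (b), with c = 1 and n0 = 1 chosen here as example constants:

```python
# (1/3) n^2 <= 1 * n^3 for all n >= 1, so (1/3) n^2 is O(n^3) ...
assert all(n * n / 3 <= n ** 3 for n in range(1, 10_000))
# ... and it is simultaneously O(n^2): the upper bound need not be tight.
assert all(n * n / 3 <= n * n for n in range(1, 10_000))
print("both upper bounds verified")
```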

Big Oh Notation Definition? [duplicate]

This question already has answers here:
What is a plain English explanation of "Big O" notation?
(43 answers)
Closed 9 years ago.
Can someone please explain to me what this means:
Definition: Given functions f(n) and g(n), then we say that
f(n) is O( g(n) )
if and only if there exist positive constants c and n0 such that
f(n) <= c g(n) for all n >= n0
It basically means, that for large enough n and ignoring constant factors, f(n) does not grow faster than g(n).
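To make the quantifiers concrete, here is a small check with example functions and hand-picked witnesses (f, g, c, and n0 below are all illustrative choices, not from the original question):

```python
# f(n) = 3n + 10 is O(n): the witnesses c = 4, n0 = 10 work,
# since 3n + 10 <= 4n exactly when n >= 10.
def f(n): return 3 * n + 10
def g(n): return n

c, n0 = 4, 10
assert all(f(n) <= c * g(n) for n in range(n0, 100_000))
print("f(n) <= c*g(n) for all tested n >= n0")
```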

What's the difference between big O and big Omega? [duplicate]

This question already has answers here:
What is the difference between Θ(n) and O(n)?
(9 answers)
Closed 2 months ago.
Big Omega is supposed to be the opposite of Big O, but they can always have the same value, because by definition Big O means:
g(x) so that cg(x) is bigger or equal to f(x)
and Big Omega means
g(x) so that cg(x) is smaller or equal to f(x)
The only thing that changes is the value of c. If c is an arbitrary value (a value we choose to satisfy the inequality), then Big Omega and Big O will be the same. So what's the point of the two? What purpose do they serve?
Big O means f is bounded above by g (up to a constant factor) asymptotically, while Big Omega means f is bounded below by g (up to a constant factor) asymptotically.
Mathematically speaking, f(x) = O(g(x)) (big-oh) means that the growth rate of f(x) is asymptotically less than or equal to the growth rate of g(x).
f(x) = Ω(g(x)) (big-omega) means that the growth rate of f(x) is asymptotically greater than or equal to the growth rate of g(x).
See the Wikipedia reference: Big O notation
http://en.wikipedia.org/wiki/Big_O_notation
Sometimes you want to prove an upper bound (Big Oh); other times you want to prove a lower bound (Big Omega).
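As an illustration of using both bounds on one function, take f(n) = 3n^2 + 5n (a hypothetical function chosen here, not from the question):

```python
def f(n): return 3 * n * n + 5 * n

# Big-Omega witness: 3n^2 <= f(n) for all n >= 1 (lower bound).
assert all(3 * n * n <= f(n) for n in range(1, 10_000))
# Big-O witness: f(n) <= 4n^2 once n >= 5 (upper bound), since 5n <= n^2 there.
assert all(f(n) <= 4 * n * n for n in range(5, 10_000))
print("f(n) is Omega(n^2) and O(n^2), hence Theta(n^2)")
```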
You're correct when you assert that such a g exists, but that doesn't mean it's known.
In addition to talking about the complexity of algorithms you can also talk about the complexity of problems.
It's known that multiplication for example is Ω(n) and O(n log(n) log(log(n))) in the number of bits, but a precise characterization (denoted by Θ) is unknown. It's the same story with integer factorization and NP problems in general which is what the whole P versus NP thing is about.
Furthermore, there are algorithms, some even proven optimal, whose exact complexity is unknown. See http://en.wikipedia.org/wiki/User:Erel_Segal/Optimal_algorithms_with_unknown_runtime_complexity

Understanding the big O notation [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Some standard books on Algorithms produce this:
0 ≤ f(n) ≤ c⋅g(n) for all n > n0
While defining big-O, can anyone explain to me what this means, using a strong example which can help me to visualize and understand big-O more precisely?
Assume you have a function f(n) and you are trying to classify it - is it a big O of some other function g(n).
The definition basically says that f(n) is in O(g(n)) if there exist two constants c and N such that
f(n) <= c * g(n) for each n > N
Now, let's understand what it means.
Start with the n > N part: it means we do not "care" about low values of n, only about high values, and if some (finite number of) low values do not meet the criterion, we can silently ignore them by choosing N bigger than all of them.
Have a look at the following example:
Though for low values of n we have n^2 < 10nlog(n), the second function quickly catches up, and after N = 10 we get that for all n > 10 the claim 10nlog(n) < n^2 holds, and thus 10nlog(n) is in O(n^2).
The constant c means we can also tolerate a multiple by a constant factor and still accept it as the desired behavior. This is useful, for example, to show that 5n is O(n): without it we could never find N such that for each n > N: 5n < n, but with the constant c we can use c = 6, show 5n <= 6n, and conclude that 5n is in O(n).
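Both parts of the explanation can be checked numerically; this sketch assumes the log in the 10nlog(n) example is base 10, with which the stated N = 10 works out:

```python
import math

# The n > N part: 10 * n * log10(n) < n^2 for every n > 10 (base-10 assumption).
assert all(10 * n * math.log10(n) < n * n for n in range(11, 10_000))
# The constant-c part: 5n <= 6n for all n >= 1, so 5n is O(n) with c = 6.
assert all(5 * n <= 6 * n for n in range(1, 10_000))
print("both witnesses verified")
```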
This question is a math problem, not an algorithmic one.
You can find a definition and a good example here: https://math.stackexchange.com/questions/259063/big-o-interpretation
As @Thomas pointed out, Wikipedia also has a good article on this: http://en.wikipedia.org/wiki/Big_O_notation
If you need more details, try to ask a more specific question.
