There is a lot of explanation about Big-O out there, but I'm really confused about this part.
According to the definition, f(n) is Big-O of g(n) when
f(n) ≤ c · g(n), for n ≥ n0.
But a description of a function in terms of Big-O notation usually only provides an upper bound on the growth rate of the function.
So, for example, 34 is an upper bound for the set {5, 10, 34}.
So in this graph, how is f(n) in O(g(n))? If I take the upper bound of the g(n) function by itself, its value would be different from what is shown here for n ≥ n0.
Beyond n0, f(n) will not grow faster than g(n): f(n)'s rate of growth as a function of n is at most that of g(n).
g(n)'s rate of growth is said to be an upper bound on f(n)'s rate of growth if f(n) is Big-O of g(n).
The worst-case rate of growth of f(n) will be at most that of g(n), since f(n) is Big-O of g(n).
This is all about knowing just how big f(n) can grow relative to another, known function.
For example, if f(n) = n^2, and g(n) is n^3, then trivially f(n) is Big-O of g(n) since n^2 will never grow faster than n^3.
"c" is used for mathematical proofs - it's just a linear scaling variable. We can't just go around and claim something is Big-O of something else. If we choose n0 and c for a given g(n), and this equation holds
f(n) ≤ c ·g(n), for n ≥ n0
then we can show that truly f(n) is Big-O of g(n).
Example:
f(n) = n^2
g(n) = n^3
We can choose n0 = 1 and c = 1, such that
f(n) ≤ 1 · g(n), for n ≥ 1
which becomes
n^2 ≤ 1 · n^3, for n ≥ 1
which always holds; thus f(n) is proven to be Big-O of g(n).
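As a quick sanity check, here's a minimal sketch in Python (an illustration, not a proof, since code can only test finitely many values of n):

    # Sanity check of f(n) <= c * g(n) for n >= n0 (illustration, not a proof).
    def f(n):
        return n ** 2

    def g(n):
        return n ** 3

    c, n0 = 1, 1
    assert all(f(n) <= c * g(n) for n in range(n0, 10000))
    print("f(n) <= c * g(n) held for every tested n >= n0")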
Proofs can get more complicated, but this is the gist of it.
Related
I know that f(n) grows more slowly than g(n), but could f(n) have the same growth rate as g(n), since there is an equality sign?
Based on the Big-O definition, yes. For example, n is in O(n) as well. In this case, f(n) = n and g(n) = n are even equal, which is a stronger relation than having the same growth rate.
If a function f(n) grows more slowly than a function g(n), why is f(n) = O(g(n))?
e.g. if f(n) is 4n^4 and g(n) is log((4n)^(n^4))
My book says f = O(g(n)) because g(n) = n^4 * log(4n) = n^4 * (log n + log 4) = O(n^4 * log n). I understand why g = O(n^4 * log n), but I'm not sure how they reached the conclusion that f = O(g(n)) from the Big-O of g.
I understand that f(n) grows more slowly than g(n) just by thinking about the graphs, but I'm having trouble understanding asymptotic behavior and why f=O(g(n)) in general.
The formal definition of big-O notation is that
f(n) = O(g(n)) if there are constants n0 and c such that for any n ≥ n0, we have f(n) ≤ c · g(n).
In other words, f(n) = O(g(n)) if for sufficiently large values of n, the value of f(n) is upper-bounded by some constant multiple of g(n).
Notice that this just says that f(n) is eventually upper-bounded by (a constant multiple of) g(n), not that f(n)'s rate of growth is the same as g(n)'s. In that sense, you can think of f(n) = O(g(n)) as akin to saying something like "f ≤ g": f doesn't grow faster than g, leaving open the possibility that g grows a lot faster than f does.
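As a numerical illustration, using the f and g from the question (this is just a sketch, not a proof):

    import math

    # f(n) = 4n^4 and g(n) = log((4n)^(n^4)) = n^4 * log(4n).
    # The ratio g(n)/f(n) = log(4n)/4 grows without bound, so f(n) is
    # eventually below any constant multiple of g(n): f(n) = O(g(n)),
    # but not the other way around.
    def f(n):
        return 4 * n ** 4

    def g(n):
        return n ** 4 * math.log(4 * n)

    for n in (10, 100, 1000, 10000):
        print(n, g(n) / f(n))  # the ratio keeps increasing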
I'm fairly new to Big-O and I'm wondering what the complexity of this algorithm is.
I understand that every addition, if statement and variable initialization is O(1).
From my understanding, the first 'i' loop will run n times and the second 'j' loop will run n^2 times in total. Now, the third 'k' loop is where I'm having issues.
Is it running (n^3)/2 times, since the average value of 'j' will be half of n?
Does that mean the Big-O is O((n^3)/2)?
We can use Sigma notation to calculate the number of iterations of the inner-most basic operation of your algorithm, where we consider sum = sum + A[k] to be the basic operation.
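The original code isn't reproduced here, so as an assumption, here is one hypothetical loop nest consistent with the question's description and with the count T(n) = (n^3 - n^2)/2 used below:

    # Hypothetical reconstruction of the algorithm (the original code is not
    # shown); the loop bounds are chosen to match T(n) = (n^3 - n^2)/2.
    def count_basic_ops(n):
        count = 0
        for i in range(1, n + 1):      # outer loop: n iterations
            for j in range(1, n + 1):  # middle loop: n iterations per i
                for k in range(1, j):  # inner loop: j - 1 iterations per j
                    count += 1         # stands in for: sum = sum + A[k]
        return count

    for n in (5, 10, 20):
        assert count_basic_ops(n) == (n ** 3 - n ** 2) // 2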
Now, how do we infer that T(n) is in O(n^3) in the last step, you ask?
Let's loosely define what we mean by Big-O notation:
f(n) = O(g(n)) means c · g(n) is an upper bound on f(n). Thus
there exists some constant c such that f(n) is always ≤ c · g(n),
for sufficiently large n (i.e. , n ≥ n0 for some constant n0).
I.e., we want to find some (non-unique) set of positive constants c and n0 such that the following holds
|f(n)| ≤ c · |g(n)|, for some constant c>0 (+)
for n sufficiently large (say, n>n0)
for some function g(n), which will show that f(n) is in O(g(n)).
Now, in our case, f(n) = T(n) = (n^3 - n^2) / 2, and we have:
f(n) = 0.5·n^3 - 0.5·n^2
{ n > 0 } => f(n) = 0.5·n^3 - 0.5·n^2 ≤ 0.5·n^3 ≤ n^3
=> f(n) ≤ 1·n^3 (++)
Now (++) is exactly (+) with c = 1 (choosing n0 as, say, 1, so that it holds for n > n0 = 1), and hence we have shown that f(n) = T(n) is in O(n^3).
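A quick numerical spot-check of (++) (again, an illustration rather than a proof):

    # Check f(n) = (n^3 - n^2)/2 <= 1 * n^3 for a range of n > n0 = 1.
    def f(n):
        return (n ** 3 - n ** 2) / 2

    assert all(f(n) <= n ** 3 for n in range(2, 10000))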
From the somewhat formal derivation above, it's apparent that any constants in the function g(n) can simply be extracted and absorbed into the constant c in (+). Hence you'll never (or at least should never) see a time complexity described as, e.g., O((n^3)/2). When using Big-O notation, we're describing an upper bound on the asymptotic behaviour of the algorithm, so only the dominant term is of interest (and not how it is scaled by constants).
Are there any functions f(n) and g(n) such that both
f(n) != O(g(n)) and
g(n) != O(f(n))?
Are there any functions that fulfill both requirements above?
f(n) = n and g(n) = n^(1 + sin(n)).
f(n) is not O(g(n)) and g(n) is not O(f(n)).
Refer to http://c2.com/cgi/wiki?BigOh
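A numerical sketch of why this pair works, sampling n where sin(n) is near +1 or -1:

    import math

    # g(n)/f(n) = n^sin(n) oscillates: near sin(n) ~ +1 the ratio is ~n
    # (arbitrarily large), near sin(n) ~ -1 it is ~1/n (arbitrarily small),
    # so no single constant c bounds either function by the other.
    for k in range(1, 5):
        up = math.pi / 2 + 2 * math.pi * k        # sin(up) = +1
        down = 3 * math.pi / 2 + 2 * math.pi * k  # sin(down) = -1
        print(round(up ** math.sin(up), 3), round(down ** math.sin(down), 6))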
Consider:
f(n) = 0 if n is odd, else n*n
g(n) = n
Then for odd values, g(n) is more than a constant factor bigger than f(n) (and so g(n) is not O(f(n))), while for even values, f(n) is more than a constant factor bigger than g(n) (and so f(n) is not O(g(n))).
Observe that f(n) does not have a limit as n approaches infinity, so in some sense this is a cheap example. But you could fix that by replacing 0, n, n*n with n, n*n, n*n*n.
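The same failure of both bounds, checked in code for this pair (a small sketch):

    # f(n) = 0 for odd n, n*n for even n; g(n) = n.
    def f(n):
        return 0 if n % 2 else n * n

    def g(n):
        return n

    # Odd n: f(n) = 0, so c*f(n) can never reach g(n) -> g is not O(f).
    # Even n: f(n)/g(n) = n is unbounded -> f is not O(g).
    for n in range(100, 106):
        print(n, f(n), g(n))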
I think if two non-negative functions have the property that f(n)/g(n) has a (perhaps infinite) limit as n approaches infinity, then it follows that one of them is Big-O of the other. If the limit is 0 then f(n) is O(g(n)), if the limit is finite and nonzero then each is Big-O of the other, and if the limit is infinite then g(n) is O(f(n)). But I'm too lazy to confirm this by writing a proof.
I need to figure out the Big-O of the following:
f(n) = 10n^2 + 10n + 20
All I can come up with is 50, and I am just too embarrassed to state how I came up with that.
Can someone explain what it means and how I should calculate it for f(n) above?
Big-O notation is to do with complexity analysis. A function f(n) is O(g(n)) if, for all sufficiently large n, it is upper-bounded by some constant multiple of g(n). More formally:
f(n) is in O(g(n)) iff there exist constants n0 and c such that for all n >= n0, f(n) <= c · g(n)
In this case, f(n) = 10n^2 + 10n + 20, so f(n) is in O(n^2), O(n^3), O(n^4), etc. The tightest upper bound is O(n^2).
In layman's terms, what this means is that f(n) grows no worse than quadratically as n tends to infinity.
There's a corresponding Big-Omega notation which can be used to lower-bound functions in a similar manner. In this case, f(n) is also Omega(n^2): that is, it grows no better than quadratically as n tends to infinity.
Finally, there's a Big-Theta notation which combines the two, i.e. iff f(n) is in O(g(n)) and f(n) is in Omega(g(n)) then f(n) is in Theta(g(n)). In this case, f(n) is in Theta(n^2): that is, it grows exactly quadratically as n tends to infinity.
The point of all this is that as n gets big, the linear (10n) and constant (20) terms become essentially irrelevant, as the value of the function is far more affected by the quadratic term.
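You can see that last point numerically: the ratio f(n)/n^2 settles toward the leading coefficient 10 as n grows (a quick sketch):

    # f(n)/n^2 -> 10 as n grows: the 10n and 20 terms stop mattering.
    def f(n):
        return 10 * n ** 2 + 10 * n + 20

    for n in (10, 100, 1000, 100000):
        print(n, f(n) / n ** 2)  # 11.2, 10.102, 10.01002, 10.0001...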