Can someone please give the mathematical definition of f(n) and O(f(n))?
You can check this page to see a math definition of big-O notation.
Let f and g be two functions defined on some subset of the real numbers. One writes

f(x) = O(g(x)) as x → ∞

if and only if there is a positive constant M such that for all sufficiently large values of x, the absolute value of f(x) is at most M multiplied by the absolute value of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that

|f(x)| ≤ M|g(x)| for all x > x0.

In many contexts, the assumption that we are interested in the growth rate as the variable x goes to infinity is left unstated, and one writes more simply that f(x) = O(g(x)).
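To make that definition concrete, here is a minimal Python sketch (the functions f and g and the witnesses M and x0 are illustrative choices of mine, not part of the definition) that checks the inequality over a finite range:

    # Illustration only: f(x) = 3x^2 + 5x is O(x^2), witnessed by M = 4, x0 = 5.
    def f(x):
        return 3 * x**2 + 5 * x

    def g(x):
        return x**2

    M, x0 = 4, 5
    # The definition requires |f(x)| <= M * |g(x)| for every x > x0;
    # a finite loop can only illustrate that, not prove it.
    assert all(abs(f(x)) <= M * abs(g(x)) for x in range(x0 + 1, 100000))

The finite check is of course not a proof of a statement about all sufficiently large x; the point is just to show what the constants M and x0 are doing.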
In a tutorial on the Big-O notation it is said that if T(n) = 4n² - 2n + 2, then T(n) = O(n²). However, we know that f(x) = O(g(x)) if there exists N and C such that |f(x)| ≤ C|g(x)| for all x > N.
But the thing is that n² < 4n² - 2n + 2 for any n. Shouldn't we say that n² = O(4n² - 2n + 2) in this case?
All of the below statements are true:
n² ∈ O(n²)
n² ∈ O(4n² - 2n + 2)
4n² - 2n + 2 ∈ O(4n² - 2n + 2)
4n² - 2n + 2 ∈ O(n²)
However, talking about O(4n² - 2n + 2) does not make much sense, as it is exactly the same set as O(n²)*; since the latter is simpler, there is no reason to refer to it by the former.
* For every function f such that ∃N,C: ∀n>N: |f(n)| ≤ C|4n² - 2n + 2|, it is also true that ∃N,C: ∀n>N: |f(n)| ≤ C|n²|, and vice versa.
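To make that footnote concrete, here is one valid choice of witnesses (any constants that work will do): for every n ≥ 1, 4n² - 2n + 2 ≤ 4n² + 2n² = 6n², so a bound of C·(4n² - 2n + 2) beyond some N can be replaced by the bound 6C·n² beyond max(N, 1); and for every n, n² ≤ 4n² - 2n + 2 (since 3n² - 2n + 2 > 0), so a bound of C·n² can be replaced by C·(4n² - 2n + 2) unchanged.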
The thing about big-O notation and questions like this is to consider which term of the expression dominates as n (or x, or whatever the variable is called) gets really big. That is, which term contributes most to the overall shape of the graph: which term would you plot against the expression's value to get something close to a straight line (roughly a proportional relationship)?
As to the rest of your question: it doesn't say that C > 1; I presume C > 0. As n grows, the -2n + 2 part becomes tiny in comparison with the squared term.
With regards as to why it’s relevant to coding: how long will your code take to run? Can you make it run faster? Which equation/code is more efficient? How big do your variables need to be (i.e. int or long choices in C). I presume if there is a "big-o" tag, there’s been at least one question on this before.
We have an RNG which generates numbers in the range [0.0, 1000.0).
In a while loop we count how many iterations it takes to produce a number < 1.0.
code:
n = 0;
while (RNG1000() >= 1.0) {
    n += 1;
}
question:
what is the big-O of this loop?
A function f of variable n is said to be big-Oh of a function g of variable n - often denoted f(n) = O(g(n)), or f in O(g) - if there exists an n0 such that for all n greater than n0, f(n) <= g(n). (The usual definition also allows a constant factor c, i.e. f(n) <= c·g(n), but that does not change the argument below.)
Suppose our algorithm's runtime were big-Oh of some function g of n. Then there is an n0 such that for n > n0, our algorithm's runtime is less than or equal to g(n). Let us choose some fixed n' > n0 and consider what this means: it means that on input n', our algorithm is guaranteed to terminate within g(n') steps. We must have that g(n') is some number; let's call it t'. So for input n', our loop cannot take more than t' steps. But this is patently false, since - assuming that a whole loop iteration takes 1 step - the probability of iterating for more than t' steps is small but non-zero (assuming the RNG is a true RNG). This is a contradiction, so our assumption that the runtime is big-Oh of g was wrong - that is, there is no upper bound on the runtime.
It is tempting to say that this is O(1) in terms of n, since the loop condition doesn't depend on n, but that is technically incorrect, given the above analysis. There really is no upper bound here.
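For intuition, here is a quick Python simulation of the loop (RNG1000 is my stand-in for the generator described in the question, assumed uniform over [0.0, 1000.0)). The iteration count has a well-defined average, roughly 999, because each draw falls below 1.0 with probability 1/1000, but it has no deterministic upper bound, which is exactly the point of the argument above:

    import random

    def RNG1000():
        # assumption: uniform over [0.0, 1000.0)
        return random.uniform(0.0, 1000.0)

    def iterations_until_small():
        n = 0
        while RNG1000() >= 1.0:
            n += 1
        return n

    samples = [iterations_until_small() for _ in range(10000)]
    print("mean:", sum(samples) / len(samples))   # around 999 on average
    print("max: ", max(samples))                   # grows the longer you sample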
"Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity."
The variable n in your case does not constrain the upper limit of a function. In fact, there is no function in this case (at least not one that returns repeatable results). I would suggest that there is no Big O notation to describe this and it is undefined. However, some might argue that the worst-case scenario is simply O(∞). What is confusing here is that you are not actually using your variable n to bound the behavior of the loop.
My understanding about "O" and "o" notation is that former is an upper bound and latter is a tight bound. My question is, if a function., f(n) is tight bound by some random function., say o(g(n)). Can this bound be made an upper bound i.e.,(O(g(n)) by multiplying some constant "c" such that it will be an upper bound even when n->infinity.
f ∈ O(g) says, essentially
For at least one choice of a constant k > 0, you can find a constant a
such that the inequality 0 <= f(x) <= k g(x) holds for all x > a.
Note that O(g) is the set of all functions for which this condition holds.
f ∈ o(g) says, essentially
For every choice of a constant k > 0, you can find a constant a such
that the inequality 0 <= f(x) < k g(x) holds for all x > a.
Once again, note that o(g) is a set.
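A concrete pair of examples (mine, for illustration): x ∈ o(x²), because for every k > 0 the inequality x < k·x² holds for all x > 1/k, so a suitable a exists for each k. But x ∉ o(x): with k = 1/2 there is no a beyond which x < x/2. By contrast, x ∈ O(x) trivially, since k = 1 works.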
From Wikipedia:
Note the difference between the earlier formal definition for the
big-O notation, and the present definition of little-o: while the
former has to be true for at least one constant M the latter must hold
for every positive constant ε, however small. In this way, little-o
notation makes a stronger statement than the corresponding big-O
notation: every function that is little-o of g is also big-O of g, but
not every function that is big-O of g is also little-o of g (for
instance g itself is not, unless it is identically zero near ∞).
This link contains good explanation:
http://www.stat.cmu.edu/~cshalizi/uADA/13/lectures/app-b.pdf
Let f(n) and g(n) be complexity functions. Why does this statement hold true? How can I prove it?
f(n) - g(n) is O(min(f(n),g(n)))
The proposition is clearly false. Consider f(n)=n and g(n)=0. min(f(n),g(n)) is zero for n>=0, but f(n)-g(n) = n, which is not O(0).
For every n>=0, f(n)-g(n) <= f(n) so f(n)-g(n) is O(f(n)). I think that is the strongest statement that can be made in general, without a lower bound on g(n) that is a positive function of n.
==========================================================================
The second paragraph above is incorrect because, as @Dukeling pointed out in a comment, g(n) may be so big that f(n)-g(n) is negative, possibly with an absolute magnitude greater than f(n). What happens in that case depends on the definition of big-O.
A NIST web page defines it as follows: "Formal Definition: f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n."
By that definition, a function that for every positive number k has at least one n>=k for which f(n) is negative is not big-O anything.
A Wikipedia page defines it as follows (converted to ASCII): f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x_0 such that
|f(x)| <= M |g(x)| for all x>x_0
This definition does allow use of big-O notation for a function that is negative for large argument values, by working with its absolute value. With this definition, f(n)-g(n) is O(max(f(n),g(n))).
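As a concrete instance of that last claim (the functions are just an illustration): take f(n) = n and g(n) = n². Then f(n) - g(n) = n - n², whose absolute value grows like n², i.e. like max(f(n), g(n)); it is certainly not O(n) = O(min(f(n), g(n))).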
The statement is actually false. First, note that O deals with magnitudes, so the sign of f and/or g is irrelevant. The correct relationship is
O(|f(n)| + |g(n)|)
(see here for instance). If |f| grows faster than |g|, then |f| is going to asymptotically dominate |f-g|, and likewise if the converse is true.
If there are other facts about f and g that are not in your post (e.g., that both functions are always negative), then it may be possible that O(|f| + |g|) = O(min(f, g)) (= O(|min(f, g)|)).
If f(x) = O(g(x)) as x -> infinity, then
A. g is the upper bound of f
B. f is the upper bound of g.
C. g is the lower bound of f.
D. f is the lower bound of g.
Can someone please tell me which one they think it is, and why?
The real answer is that none of these is correct.
The definition of big-O notation is that:
|f(x)| <= k|g(x)|
for all x > x0, for some x0 and k.
In specific cases, k might be less than or equal to 1, in which case it would be correct to say that "|g| is an upper bound of |f|". But in general, that's not true.
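For example (functions chosen only to illustrate the two cases): with f(x) = x/2 and g(x) = x we can take k = 1/2, and g(x) really is a pointwise upper bound of f(x); but in general only k·g(x), for some constant k, is guaranteed to bound f(x).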
Answer
g is the upper bound of f
When x goes towards infinity, worst case scenario is O(g(x)). That means actual exec time can be lower than g(x), but never worse than g(x).
EDIT:
As Oli Charlesworth pointed out, that is only true when the constant k is at most 1, not in general. Please look at his answer for the general case.
The question checks your understanding of the basics of asymptotic algebra, or big-oh notation. In
f(x) = O(g(x)) as x approaches infinity
the statement says that when you feed the function f a value x, the value which f computes from x is of the same order as the value returned by the other function, g(x). As an example, suppose
f(x) = 2x
g(x) = x
then the value g(x) returns when fed x is of the same order as that f(x) returns for x. Specifically, the two functions return a value that is in the order of x; the functions are both linear. It doesn't matter whether f(x) is 2x or ½x; for any constant factor at all f(x) will return a value that is in the order of x. This is because big-oh notation is about ignoring constant factors. Constant factors don't grow as x grows and so we assume they don't matter nearly as much as x does.
We restrict g(x) to a specific set of functions. g(x) can be x, or ln(x), or log(x) and so on and so forth. It may look as if when
f(x) = 2x
g(x) = x
f(x) yields values higher than g(x), and therefore that f, not g, should be called the upper bound. But once again, we ignore the constant factor, and we say that the order-of upper bound, which is what big-oh is all about, is that of g(x).
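A tiny Python sketch of that point (f and g are the same illustrative functions as above): the ratio f(x)/g(x) stays pinned at the constant 2 no matter how large x gets, and a ratio bounded by a constant is exactly the sense in which g(x) = x is the order-of upper bound of f(x) = 2x.

    def f(x):
        return 2 * x

    def g(x):
        return x

    for x in (10, 1000, 1000000):
        print(x, f(x) / g(x))   # always 2.0: bounded by a constant, so f(x) = O(g(x))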