My understanding about "O" and "o" notation is that former is an upper bound and latter is a tight bound. My question is, if a function., f(n) is tight bound by some random function., say o(g(n)). Can this bound be made an upper bound i.e.,(O(g(n)) by multiplying some constant "c" such that it will be an upper bound even when n->infinity.
f ∈ O(g) says, essentially
For at least one choice of a constant k > 0, you can find a constant a
such that the inequality 0 <= f(x) <= k g(x) holds for all x > a.
Note that O(g) is the set of all functions for which this condition holds.
f ∈ o(g) says, essentially
For every choice of a constant k > 0, you can find a constant a such
that the inequality 0 <= f(x) < k g(x) holds for all x > a.
Once again, note that o(g) is a set.
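A minimal numeric sketch of the difference, using a hypothetical pair of my own (f(x) = 2x and g(x) = x, not taken from the answer above): a single choice of k is enough for big-O, while little-o has to survive every k, including k = 1.

# Hedged sketch: f(x) = 2x versus g(x) = x (hypothetical example).
def f(x): return 2 * x
def g(x): return x

xs = range(2, 1000)
print(all(0 <= f(x) <= 3 * g(x) for x in xs))  # True: the single choice k = 3 works, consistent with f in O(g)
print(all(f(x) < 1 * g(x) for x in xs))        # False: the choice k = 1 already fails, so f is not in o(g)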
From Wikipedia:
Note the difference between the earlier formal definition for the
big-O notation, and the present definition of little-o: while the
former has to be true for at least one constant M, the latter must hold
for every positive constant ε, however small. In this way, little-o
notation makes a stronger statement than the corresponding big-O
notation: every function that is little-o of g is also big-O of g, but
not every function that is big-O of g is also little-o of g (for
instance g itself is not, unless it is identically zero near ∞).
This link contains good explanation:
http://www.stat.cmu.edu/~cshalizi/uADA/13/lectures/app-b.pdf
Related
In other words, is o(f(n)) = O(f(n)) - Θ(f(n))?
f ∈ O(g) [big O] says, essentially
For at least one choice of a constant k > 0, you can find a constant y such that the inequality 0 <= f(x) <= k g(x) holds for all x > y.
f ∈ Θ(g) [theta] says, essentially
For at least one choice of constants k1, k2 > 0, you can find a constant y such that the inequality 0 <= k1 g(x) <= f(x) <= k2 g(x) holds for all x > y.
f ∈ o(g) [little o] says, essentially
For every choice of a constant k > 0, you can find a constant y such that the inequality 0 <= f(x) < k g(x) holds for all x > y.
From the definitions it is easy to see that o(g) ⊆ O(g) and Θ(g) ⊆ O(g), and it would make sense for the two to complement each other inside O(g). I couldn't find a counterexample: a function that is in O(f(n)) and not in Θ(f(n)), yet also not in o(f(n)).
Surprisingly, no, this isn’t the case. Intuitively, big-O notation gives an upper bound without making any claims about lower bounds. If you subtract out the big-Θ class, you’ve removed functions that are bounded from above and below by the function. That leaves you with some extra functions that are upper-bounded by the function but not lower bounded by it.
As an example, let f(n) be n if n is even and 0 otherwise. Then f(n) = O(n) but f(n) ≠ Θ(n), since f has no lower bound of the form k·n with k > 0. However, it's also not true that f(n) = o(n): on every even n the ratio f(n)/n equals 1, so the little-o inequality fails for any k ≤ 1.
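A quick numerical illustration of this counterexample (a sketch of my own, not part of the original answer): the ratio f(n)/n keeps jumping between 0 and 1, so it never stays below every k (ruling out o(n)) and never stays above a positive constant (ruling out Θ(n)).

# f(n) = n for even n, 0 for odd n
def f(n):
    return n if n % 2 == 0 else 0

for n in [10, 11, 1000, 1001]:
    print(n, f(n) / n)  # 1.0 on even n, 0.0 on odd n
# f(n) <= 1 * n for all n, so f = O(n), but the ratio never settles:
# f != o(n) (fails for k <= 1 on even n) and f != Θ(n) (no positive lower bound on odd n).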
We have an RNG which generates numbers in [0.0, 1000.0).
In a while loop we count how long it takes to produce a number < 1.0.
code:
n = 0
while (RNG1000() >= 1.0) {
    n += 1
}
question:
what is the big-O complexity of this loop?
A function f of variable n is said to be big-Oh of a function g of variable n - often denoted f(n) = O(g(n)), or f in O(g) - if there exist a constant c > 0 and an n0 such that for all n greater than n0, f(n) <= c*g(n).
Suppose our algorithm's runtime were big-Oh of some function g of n. Then there are a constant c and an n0 such that for n > n0, our algorithm's runtime is at most c*g(n). Let us choose some fixed n' > n0 and consider what this means: it means that on input n', our algorithm is guaranteed to terminate within c*g(n') steps. Now c*g(n') is just some number; let's call it t'. So for input n', our loop cannot take more than t' steps. But this is patently false, since - assuming that a whole loop iteration takes 1 step - the probability of iterating for more than t' steps is (999/1000)^t', which is small but strictly positive (assuming the RNG is a true RNG). This is a contradiction, so our assumption that the runtime is big-Oh of g was wrong - that is, there is no upper bound on the runtime.
It is tempting to say that this is O(1) in terms of n, since the loop condition doesn't depend on n, but that is technically incorrect, given the above analysis. There really is no upper bound here.
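A hedged simulation sketch that makes this concrete (assuming Python's random.uniform stands in for the question's RNG1000): the iteration count follows a geometric distribution with failure probability 999/1000, so its average is about 999, yet the observed maximum keeps growing as you run more trials and no fixed t' bounds it.

import random

def run_once():
    # One run of the loop from the question: count draws until a value < 1.0 appears.
    n = 0
    while random.uniform(0.0, 1000.0) >= 1.0:
        n += 1
    return n

trials = [run_once() for _ in range(10_000)]
print(sum(trials) / len(trials))  # roughly 999 on average (expected failures = (1 - p)/p with p = 1/1000)
print(max(trials))                # tends to grow with the number of trials; no fixed upper bound exists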
"Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity."
The variable n in your case does not constrain the upper limit of any function. In fact, there is no function of n here (at least not one that returns repeatable results). I would suggest that there is no Big-O notation to describe this and that it is undefined. However, some might argue that the worst-case scenario is simply O(∞). What is confusing here is that you are not actually using your variable n to constrain the behavior.
Do you think the following statement is true?
If Θ(f(n)) = Θ(g(n)) AND g(n) > 0 everywhere, THEN f(n)/g(n) ∈ Θ(1)
We are having a bit of an argument with our prof.
f(n) = Θ(g(n)) means there are constants c, d > 0 and an n0 such that cg(n) <= f(n) <= dg(n) for n > n0.
Then, since g(n) > 0, c <= f(n)/g(n) <= d for n > n0.
So f(n)/g(n) = Θ(1).
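A small numeric check of this argument with a hypothetical pair of my own, f(n) = 3n^2 + n and g(n) = n^2: here the Θ-constants c = 3 and d = 4 work for n >= 1, and the quotient is squeezed between them.

def f(n): return 3 * n**2 + n
def g(n): return n**2

# 3*g(n) <= f(n) <= 4*g(n) for n >= 1, so f = Θ(g) with c = 3, d = 4 ...
print(all(3 <= f(n) / g(n) <= 4 for n in range(1, 10_000)))  # True
# ... and therefore f(n)/g(n) stays inside [3, 4], i.e. f(n)/g(n) = Θ(1).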
Dividing the functions f(n), g(n) is not the same as dividing their Big-O bounds. For example, let:
f(n) = n^3 + n^2 + n
g(n) = n^3
so:
f(n) = O(n^3)
g(n) = O(n^3)
but:
f(n)/g(n) = 1 + 1/n + 1/n^2 != constant !!!
[Edit1]
but as kfx pointed out, you are comparing complexities, so you want:
O(f(n)/g(n)) = O(1 + 1/n + 1/n^2) = O(1)
So the answer is Yes.
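As a sanity check on that edit (a sketch of my own, nothing more): for n >= 1 the quotient 1 + 1/n + 1/n^2 is trapped between 1 and 3, which is exactly what O(1) asserts, even though it is not a constant function.

ratios = [1 + 1/n + 1/n**2 for n in range(1, 100_000)]
print(min(ratios), max(ratios))  # stays within (1, 3]: bounded, hence O(1), despite not being constant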
But beware: complexity theory is not really my cup of tea, and I also do not have any context for your question.
Using the definitions of Landau notation (https://en.wikipedia.org/wiki/Big_O_notation), it's easy to conclude that this is true: the limit of the quotient must be less than infinity but larger than 0.
It does not have to be exactly 1, but it has to be a finite positive constant, which is Θ(1).
A counterexample would be nice, and should be easy to give if the statement isn't true. A rigorous positive proof would probably need to start from the definition of the limit of a sequence and prove the equivalence of the formal and limit definitions.
I use this definition and haven't seen it proven wrong. I suppose the disagreement might lie in the exact definition of Θ; it is known that people use these notations colloquially with minor differences, especially Big O. Or maybe in some tricky cases. For positive functions and sequences, I don't think it fails.
Basically there are three options for any pair of functions f, g: either the first grows asymptotically slower and we write f=o(g) (notice I'm using small o), the first grows asymptotically faster and we write f=ω(g) (again, small omega), or they are asymptotically tightly bound: f=Θ(g).
What f=o(g) means is stricter than big O in that it doesn't allow f=Θ(g) to be true; f=Θ(g) implies both f=O(g) and f=Ω(g), but o, Θ and ω are mutually exclusive.
To find out whether f=o(g), it's sufficient to evaluate the limit of f(n)/g(n) as n goes to infinity: if it is zero, f=o(g) is true; if it is infinity, f=ω(g) is true; and if it is any positive finite real number, f=Θ(g) is your answer. This is not a definition, but merely a way to evaluate a statement. (One assumption I made here was that both f and g are positive.)
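For completeness, the limit test can also be run symbolically; here is a sketch using SymPy (my choice of tool, not something from this answer) on three sample pairs.

import sympy as sp

n = sp.symbols('n', positive=True)

print(sp.limit(n / n**2, n, sp.oo))             # 0   -> n = o(n^2)
print(sp.limit(n**2 / n, n, sp.oo))             # oo  -> n^2 = ω(n)
print(sp.limit((3*n**2 + n) / n**2, n, sp.oo))  # 3   -> 3n^2 + n = Θ(n^2)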
A special case is when the limit of f(n)/1 = f(n) as n goes to infinity is a finite positive number; it means f(n)=Θ(1) (basically we chose the constant function for g).
Now we're getting to your problem: since f=Θ(g) implies f=O(g), we know that there exist c>0 and n0 such that f(n) <= c*g(n) for all n>n0. Thus we know that f(n)/g(n) <= (c*g(n))/g(n) = c for all n>n0. The same can be done for Ω, just with the opposite inequality sign. Thus we get that f(n)/g(n) is between c1 and c2 from some n0 on, and c1, c2 are known to be finite positive numbers because of how Θ is defined. Because our new function f(n)/g(n) is squeezed in between them, we know it is bounded above and below by positive constants, thus proving it is indeed Θ(1).
Conclusion: I believe you were right, and I would like your professor to offer a counterexample to disprove the statement. If something didn't make sense, feel free to ask more in the comments; I'll try to clarify.
I came across this sentence in a book about algorithms:
O -notation expresses the upper bound of a function within a constant factor
What does it mean?
Let g(n) be another function taking n as a parameter, e.g. g(n) = n, g(n) = n log n, etc.
If f(n) = O(g(n)),
then there exist constants c and k such that for all n >= k, f(n) <= c*g(n).
It means that there is a point k on the real line and a constant c such that for each n >= k, f(n) <= c*g(n).
Less formal (less true): f won't grow faster than c times g.
This is just an attempted English description of the mathematical definition of big-O notation. If we have that f(n) = O(g(n)), then there exist constants c and k such that for all n >= k, f(n) <= c*g(n).
I believe the constant factor is referring to c.
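A concrete instance of that definition, using a hypothetical pair of my own, f(n) = 5n + 3 and g(n) = n: with c = 6 and k = 3 the inequality holds from k onward, which is what "within a constant factor" refers to.

def f(n): return 5 * n + 3
def g(n): return n

c, k = 6, 3
print(all(f(n) <= c * g(n) for n in range(k, 10_000)))  # True: 5n + 3 <= 6n once n >= 3, so f = O(n)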
O -notation expresses
Big-O refers to a way of describing...
the upper bound of a function
..a "worst case scenario" for how quickly a function can grow based on in its input...
within a constant factor.
..where the guarantee is only up to a constant multiple (i.e., there is some number k such that, no matter what input you enter, the actual function will never be more than k times the Big-O estimate).
Does that help?
Let f(n) and g(n) be complexity functions. Why does this statement hold true? How can I prove it?
f(n) - g(n) is O(min(f(n),g(n)))
The proposition is clearly false. Consider f(n)=n and g(n)=0. min(f(n),g(n)) is zero for n>=0, but f(n)-g(n) = n, which is not O(0).
For every n>=0, f(n)-g(n) <= f(n) so f(n)-g(n) is O(f(n)). I think that is the strongest statement that can be made in general, without a lower bound on g(n) that is a positive function of n.
==========================================================================
The second paragraph above is incorrect, because, as #Dukeling pointed out in a comment, g(n) may be so big that f(n)-g(n) is negative, possibly with an absolute magnitude greater than f(n). What happens in that case depends on the definition of big-O.
A NIST web page defines it as follows: "Formal Definition: f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n."
By that definition, a function that for every positive number k has at least one n>=k for which f(n) is negative is not big-O anything.
A Wikipedia page defines it as follows (converted to ASCII): f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x_0 such that
|f(x)| <= M |g(x)| for all x>x_0
This definition does allow use of big-O notation for a function that is negative for large argument values, by working with its absolute value. With this definition, f(n)-g(n) is O(max(f(n),g(n))).
The statement is actually false. First, note that O deals with magnitudes, so the sign of f and/or g is irrelevant. The correct relationship is
O(|f(n)| + |g(n)|)
(see here for instance). If |f| grows faster than |g|, then |f| is going to asymptotically dominate |f-g|, and likewise if the converse is true.
If there are other facts about f and g that are not in your post (e.g., that both functions are always negative), then it may be possible that O(|f| + |g|) = O(min(f, g)) (= O(|min(f, g)|)).
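A short check of the earlier counterexample against both bounds (a sketch of my own, reusing f(n) = n and g(n) = 0 from the first answer): |f - g| grows like n, which min(f, g) = 0 cannot bound with any constant, while |f| + |g| trivially can.

def f(n): return n
def g(n): return 0

for n in [1, 10, 100, 1000]:
    diff = abs(f(n) - g(n))
    print(n, diff, min(f(n), g(n)), abs(f(n)) + abs(g(n)))
# diff = n while min(f, g) = 0: no constant c gives diff <= c * 0 for n > 0,
# but diff <= 1 * (|f| + |g|) holds here for every n.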