I came across this sentence in a book about algorithms:
O-notation expresses the upper bound of a function within a constant factor
What does it mean?
Let g(n) be another function taking n as a parameter, e.g. g(n) = n, g(n) = n log n, etc. If
f(n) = O(g(n))
then there exist constants c and k such that for all n >= k, f(n) <= c*g(n).
It means that there exists a number k on the real line, and a constant c, such that for each n >= k, f(n) <= c*g(n).
Less formally (and less precisely): f won't grow faster than c times g.
This is just an attempted English description of the mathematical definition of big-O notation. If we have that f(n) = O(g(n)), then there exist constants c and k such that for all n >= k, f(n) <= c*g(n).
I believe the constant factor is referring to c.
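To make the roles of c and k concrete, here is a minimal Python sketch; the functions f and g and the witnesses c = 4, k = 5 are made-up examples, not from the book:

    def f(n):          # example function: 3n^2 + 5n
        return 3 * n**2 + 5 * n

    def g(n):          # candidate bound: n^2
        return n**2

    c, k = 4, 5        # witnesses: 3n^2 + 5n <= 4n^2 whenever n >= 5

    # f(n) <= c*g(n) holds for every tested n >= k,
    # consistent with f(n) = O(g(n))
    assert all(f(n) <= c * g(n) for n in range(k, 10**5))

Of course a finite check is only an illustration; the definition requires the inequality for all n >= k.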
O-notation expresses
Big-O refers to a way of describing...
the upper bound of a function
..a "worst case scenario" for how quickly a function can grow based on in its input...
within a constant factor".
...which only guarantees that the estimate won't diverge indefinitely (i.e., that there is some number k such that, no matter what input you enter, the actual function will not be more than k times worse than the Big-O estimate).
Does that help?
We have an RNG which generates a number in [0.0, 1000.0).
In a while loop, we count how long it takes to produce a number < 1.0.
code:
n = 0
while (RNG1000() >= 1.0) {
    n++
}
Question:
what is the Big-O complexity of this loop?
A function f of variable n is said to be big-Oh of a function g of variable n - often denoted f(n) = O(g(n)), or f in O(g) - if there exist a constant c > 0 and an n0 such that for all n greater than n0, f(n) <= c*g(n).
Suppose our algorithm's runtime were big-Oh of some function g of n. Then there are a constant c and an n0 such that for n > n0, our algorithm's runtime is at most c*g(n). Let us choose some fixed n' > n0 and consider what this means: it means that on input n', our algorithm is guaranteed to terminate within c*g(n') steps. Now c*g(n') is some number; let's call it t'. So for input n', our loop cannot take more than t' steps. But this is patently false, since - assuming that a whole loop iteration takes 1 step - the probability of iterating for more than t' steps is small but non-zero (assuming the RNG is a true RNG). This is a contradiction, so our assumption that the runtime is big-Oh of g was wrong - that is, there is no upper bound on the runtime.
It is tempting to say that this is O(1) in terms of n, since the loop condition doesn't depend on n, but that is technically incorrect, given the above analysis. There really is no upper bound here.
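A quick simulation makes this concrete. The sketch below (Python, with a uniform random source standing in for RNG1000) samples the loop's iteration count many times; each run is geometric with success probability 1/1000, so the empirical mean is close to 1000, but the maximum keeps growing as you take more samples - there is no fixed cutoff t' that every run respects:

    import random

    def run_once():
        """Count iterations until the RNG produces a value < 1.0."""
        n = 0
        while random.uniform(0.0, 1000.0) >= 1.0:
            n += 1
        return n

    samples = [run_once() for _ in range(10_000)]
    print("mean:", sum(samples) / len(samples))  # near 1000 (geometric, p = 1/1000)
    print("max: ", max(samples))                 # grows with the sample count; no fixed bound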
"Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity."
The variable n in your case does not constrain the upper limit of a function. In fact, there is no function here (at least not one that returns repeatable results). I would suggest that there is no Big O notation to describe this and it is undefined. However, some might argue that the worst-case scenario is simply O(∞). What is confusing here is that you are not actually using your variable, n, to constrain your behavior.
Do you think the following information is true?
If Θ(f(n)) = Θ(g(n)) AND g(n) > 0 everywhere THEN f(n)/g(n) ∈ Θ(1)
We are having a bit of an argument with our prof.
f(n) = Θ(g(n)) means there are constants c, d, n0 such that cg(n) <= f(n) <= dg(n) for n > n0.
Then, since g(n) > 0, c <= f(n)/g(n) <= d for n > n0.
So f(n)/g(n) = Θ(1).
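As a sanity check, here is a small Python sketch (with made-up f and g, not from the question) showing the ratio staying inside a constant band:

    def f(n):                 # example: 2n^2 + 3n, which is Θ(n^2)
        return 2 * n**2 + 3 * n

    def g(n):                 # g(n) = n^2, positive everywhere for n >= 1
        return n**2

    # The ratio f(n)/g(n) = 2 + 3/n stays within [2, 5] for all n >= 1,
    # so f(n)/g(n) is Θ(1).
    for n in (1, 10, 100, 1000, 10**6):
        print(n, f(n) / g(n))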
Dividing functions f(n),g(n) is not the same as dividing their Big-O. For example let:
f(n) = n^3 + n^2 + n
g(n) = n^3
so:
f(n) = O(n^3)
g(n) = O(n^3)
but:
f(n)/g(n) = 1 + 1/n + 1/n^2 != constant !!!
[Edit1]
But as kfx pointed out, you are comparing complexities, so you want:
O(f(n)/g(n)) = O(1 + 1/n + 1/n^2) = O(1)
So the answer is Yes.
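Numerically, the ratio from the example above tends to 1, consistent with Θ(1); a minimal Python check:

    # f(n)/g(n) = 1 + 1/n + 1/n^2 is not a constant function,
    # but it is bounded between 1 and 3 for n >= 1, hence Θ(1).
    for n in (1, 10, 100, 10**4, 10**8):
        ratio = 1 + 1/n + 1/n**2
        print(n, ratio)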
But beware: complexity theory is not really my cup of tea, and I also do not have any context for your question.
Using the definitions of Landau notation (https://en.wikipedia.org/wiki/Big_O_notation), it's easy to conclude that this is true: the limit of the quotient must be less than infinity but larger than 0.
It does not have to be exactly 1, but it has to be a finite positive constant, which is Θ(1).
A counterexample would be nice, and should be easy to give if the statement isn't true. A rigorous positive proof would probably need to go from the definition of limits with respect to sequences, to prove the equivalence of the formal and limit definitions.
I use this definition and haven't seen it proven wrong. I suppose the disagreement might lie in the exact definition of Θ; it is known that people use these colloquially with minor differences, especially Big O. Or maybe in some tricky cases. For positively defined functions and sequences, I don't think it fails.
Basically there are three options for any pair of functions f, g: either the first grows asymptotically slower, and we write f=o(g) (notice I'm using small o); the first grows asymptotically faster: f=ω(g) (again, small omega); or they are asymptotically tightly bound: f=Θ(g).
What f=o(g) means is stricter than big O, in that it doesn't allow f=Θ(g) to be true; f=Θ(g) implies both f=O(g) and f=Ω(g), but o, Θ and ω are mutually exclusive.
To find out whether f=o(g), it's sufficient to evaluate the limit of f(n)/g(n) as n goes to infinity: if it is zero, f=o(g) is true; if it is infinity, f=ω(g) is true; and if it is any finite positive number, f=Θ(g) is your answer. This is not a definition, but merely a way to evaluate a statement. (One assumption I made here was that both f and g are positive.)
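This limit test is easy to run with a computer algebra system; here is a sketch using SymPy (assuming, as above, that both functions are positive):

    from sympy import symbols, limit, log, oo

    n = symbols('n', positive=True)

    # lim f/g = 0                       -> f = o(g)
    # lim f/g = infinity                -> f = omega(g)
    # lim f/g = finite positive number  -> f = Theta(g)
    print(limit(n / log(n), n, oo))         # oo, so n = omega(log n)
    print(limit((n**2 + n) / n**2, n, oo))  # 1, so n^2 + n = Theta(n^2)
    print(limit(log(n) / n, n, oo))         # 0, so log n = o(n)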
A special case: if the limit of f(n)/1 = f(n) as n goes to infinity is a finite positive number, then f(n)=Θ(1) (basically we chose the constant function for g).
Now we're getting to your problem: since f=Θ(g) implies f=O(g), we know that there exist c>0 and n0 such that f(n) <= c*g(n) for all n>n0. Thus we know that f(n)/g(n) <= (c*g(n))/g(n) = c for all n>n0. The same can be done for Ω, just with the opposite inequality sign. Thus we get that f(n)/g(n) is between c1 and c2 from some n0 on, and c1 and c2 are known to be finite numbers because of how Θ is defined. Because our new function stays within those constant bounds, it is indeed Θ(1).
Conclusion: I believe you were right, and I would like your professor to offer a counterexample to disprove the statement. If something didn't make sense, feel free to ask more in the comments; I'll try to clarify.
I'm fairly new to the Big-O stuff and I'm wondering what's the complexity of the algorithm.
I understand that every addition, if statement and variable initialization is O(1).
From my understanding, the first 'i' loop will run 'n' times and the second 'j' loop will run 'n^2' times. Now, the third 'k' loop is where I'm having issues.
Is it running '(n^3)/2' times since the average value of 'j' will be half of 'n'?
Does it mean the Big-O is O((n^3)/2)?
We can use Sigma notation to calculate the number of iterations of the inner-most basic operation of your algorithm, where we consider sum = sum + A[k] to be a basic operation.
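(The derivation in the original answer was an image; here is a reconstruction, assuming the loops run i = 1..n, j = 1..n, k = j+1..n, which is consistent with the T(n) used below.)

    T(n) = Σ_{i=1..n} Σ_{j=1..n} Σ_{k=j+1..n} 1
         = Σ_{i=1..n} Σ_{j=1..n} (n - j)
         = Σ_{i=1..n} (n^2 - n)/2
         = n · (n^2 - n)/2
         = (n^3 - n^2)/2, which is in O(n^3).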
Now, how do we infer that T(n) is in O(n^3) in the last step, you ask?
Let's loosely define what we mean by Big-O notation:
f(n) = O(g(n)) means c · g(n) is an upper bound on f(n). Thus
there exists some constant c such that f(n) is always ≤ c · g(n),
for sufficiently large n (i.e. , n ≥ n0 for some constant n0).
I.e., we want to find some (non-unique) set of positive constants c and n0 such that the following holds
|f(n)| ≤ c · |g(n)|, for some constant c>0 (+)
for n sufficiently large (say, n>n0)
for some function g(n), which will show that f(n) is in O(g(n)).
Now, in our case, f(n) = T(n) = (n^3 - n^2) / 2, and we have:
f(n) = 0.5·n^3 - 0.5·n^2
{ n > 0 } => f(n) = 0.5·n^3 - 0.5·n^2 ≤ 0.5·n^3 ≤ n^3
=> f(n) ≤ 1·n^3 (++)
Now (++) is exactly (+) with c=1 (and choose n0 as, say, 1, n>n0=1), and hence, we have shown that f(n) = T(n) is in O(n^3).
From the somewhat formal derivation above it's apparent that any constants in function g(n) can just be extracted and included in the constant c in (+), hence you'll never (at least should not) see time complexity described as e.g. O((n^3)/2). When using Big-O notation, we're describing an upper bound on the asymptotic behaviour of the algorithm, hence only the dominant term is of interest (however not how this is scaled with constants).
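A quick empirical check of both the closed form and the bound (the triple loop below is the assumed reconstruction matching T(n) = (n^3 - n^2)/2, not necessarily the asker's exact code):

    def count_basic_ops(n):
        """Count executions of the basic operation in the assumed triple loop."""
        count = 0
        for i in range(1, n + 1):
            for j in range(1, n + 1):
                for k in range(j + 1, n + 1):
                    count += 1          # stands in for: sum = sum + A[k]
        return count

    for n in (1, 5, 10, 20):
        T = count_basic_ops(n)
        assert T == (n**3 - n**2) // 2  # matches the closed form
        assert T <= n**3                # the bound with c = 1, n0 = 1
        print(n, T)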
Why is ω(n) smaller than O(n)?
I know what little omega is (for example, n = ω(log n)), but I can't understand why ω(n) is smaller than O(n).
Big Oh 'O' is an upper bound, and little omega 'ω' is a strict (never tight) lower bound.
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}
ω(g(n)) = { f(n): for all constants c > 0, there exists a constant n0 such that 0 ≤ cg(n) < f(n) for all n ≥ n0}.
Also: lim_{n→∞} f(n)/g(n) = ∞.
n ∈ O(n) and n ∉ ω(n).
Alternatively:
n ∈ ω(log(n)) and n ∉ O(log(n))
ω(n) and O(n) are at opposite ends of the spectrum, as illustrated below.
[figure: diagram of the asymptotic-bound spectrum, from upper bounds (o, O) through the tight bound (Θ) to lower bounds (Ω, ω)]
For more details, see CSc 345 — Analysis of Discrete Structures (McCann), which is the source of the graph above. It also contains a compact representation of the formal definitions, which makes them easy to remember.
I can't comment, so first of all let me say that n ≠ Θ(log(n)). Big Theta means that for some positive constants c1, c2, and k, for all values of n greater than k, c1*log(n) ≤ n ≤ c2*log(n), which is not true. As n approaches infinity, it will always be larger than log(n), no matter log(n)'s coefficient.
jesse34212 was correct in saying that n = ω(log(n)). n = ω(log(n)) means that n ≠ Θ(log(n)) AND n = Ω(log(n)). In other words, little or small omega is a loose lower bound, whereas big omega can be loose or tight.
Big O notation signifies a loose or tight upper bound. For instance, 12n = O(n) (tight upper bound, because it's as precise as you can get), and 12n = O(n^2) (loose upper bound, because you could be more precise).
12n ≠ ω(n) because n is a tight bound on 12n, and ω only applies to loose bounds. That's why 12n = ω(log(n)), or even 12n = ω(1). I keep using 12n, but that value of the constant does not affect the equality.
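To see n = ω(log(n)) concretely: whatever constant c you pick, n eventually exceeds c·log(n). A minimal Python sketch:

    import math

    # omega: for EVERY c > 0 there is an n0 with c*log(n) < n for all n >= n0.
    for c in (1, 10, 100, 1000):
        n = 2
        while c * math.log(n) >= n:   # search for the crossover point
            n += 1
        # past the crossover, n/log(n) keeps increasing, so the inequality holds on
        print(f"c={c}: c*log(n) < n from n = {n} onward")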
Technically, O(n) is the set of all functions that grow asymptotically no faster than n, so the membership symbol '∈' is most appropriate, but most people write "= O(n)" (instead of "∈ O(n)") as an informal convention.
Algorithmic complexity has a mathematic definition.
If f and g are two functions, f = O(g) if you can find two constants c (> 0) and n such that f(x) < c * g(x) for every x > n.
For Ω, it is the opposite: you can find constants such that f(x) > c * g(x).
f = Θ(g) if there are three constants c, d and n such that c * g(x) < f(x) < d * g(x) for every x > n.
Then, O means your function is dominated, Θ means your function is asymptotically equivalent to the other function, and Ω means your function has a lower limit.
So, when you use Θ, your approximation is better, because you are "wrapping" your function between two edges; whereas O only sets a maximum. Ditto for Ω (minimum).
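A small illustration (the function and constants are made up for this example): f(x) = 2x + 10 is wrapped between 2x and 3x for x > 10, so it is Θ(x), while x^2 is only a loose maximum, i.e. f(x) = O(x^2) tells you much less:

    def f(x):
        return 2 * x + 10

    # Theta: wrapped between c*g and d*g with c=2, d=3, for x > 10
    assert all(2 * x < f(x) < 3 * x for x in range(11, 10**5))

    # O only sets a maximum; x^2 is a valid but loose upper bound for x >= 5
    assert all(f(x) < x * x for x in range(5, 10**5))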
To sum up:
O(n): in the worst case, your algorithm has a complexity of n
Ω(n): in the best case, your algorithm has a complexity of n
Θ(n): in every case, your algorithm has a complexity of n
To conclude, your assumption is wrong: it is Θ, not Ω. As you may know, n > log(n) when n is huge. Then, it is logical to say n = Θ(log(n)), according to the previous definitions.
Let f(n) and g(n) be complexity functions. Why does this statement hold true? How can I prove it?
f(n) - g(n) is O(min(f(n),g(n)))
The proposition is clearly false. Consider f(n)=n and g(n)=0. min(f(n),g(n)) is zero for n>=0, but f(n)-g(n) = n, which is not O(0).
For every n>=0, f(n)-g(n) <= f(n) so f(n)-g(n) is O(f(n)). I think that is the strongest statement that can be made in general, without a lower bound on g(n) that is a positive function of n.
==========================================================================
The second paragraph above is incorrect, because, as @Dukeling pointed out in a comment, g(n) may be so big that f(n)-g(n) is negative, possibly with an absolute magnitude greater than f(n). What happens in that case depends on the definition of big-O.
A NIST web page defines it as follows: "Formal Definition: f(n) = O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n."
By that definition, a function f for which, no matter how large you pick k, there is at least one n >= k with f(n) negative, is not big-O of anything.
A Wikipedia page defines it as follows (converted to ASCII): f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x_0 such that
|f(x)| <= M |g(x)| for all x>x_0
This definition does allow use of big-O notation for a function that is negative for large argument values, by working with its absolute value. With this definition, f(n)-g(n) is O(max(f(n),g(n))).
The statement is actually false. First, note that O deals with magnitudes, so the sign of f and/or g is irrelevant. The correct relationship is
O(|f(n)| + |g(n)|)
(see here for instance). If |f| grows faster than |g|, then |f| is going to asymptotically dominate |f-g|, and likewise if the converse is true.
If there are other facts about f and g that are not in your post (e.g., that both functions are always negative), then it may be possible that O(|f| + |g|) = O(min(f, g)) (= O(|min(f, g)|)).
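A concrete counterexample check in Python (the functions are made up for illustration): with f(n) = n^2 and g(n) = n, the ratio of |f(n) - g(n)| to min(f(n), g(n)) grows without bound, while the ratio to |f(n)| + |g(n)| stays bounded:

    def f(n): return n * n
    def g(n): return n

    for n in (10, 100, 1000, 10**4):
        diff = abs(f(n) - g(n))
        # (n^2 - n)/n = n - 1: unbounded, so f - g is not O(min(f, g))
        print(n, diff / min(f(n), g(n)),
              # (n^2 - n)/(n^2 + n) -> 1: bounded, so f - g is O(|f| + |g|)
              diff / (abs(f(n)) + abs(g(n))))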