Prove small-oh with big-oh - algorithm

If I already know that f(n) is O(g(n)), how do I prove, from the definition of little-oh, that f(n) is o(n * g(n))?

Given: f(n) is in O(g(n)).
Using the definition of big-O notation, we can write this as:
f(n) is in O(g(n))
=> |f(n)| ≤ k*|g(n)|, for some constant k>0 (+)
for n sufficiently large (say, n>N)
For the definition of big-O used as above, see e.g.
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-o-notation
Prove: Given (+), then f(n) is in o(n*g(n)).
Let's first state what little-o notation means:
Formally, f(n) = o(g(n)) (or f(n) ∈ o(g(n))) as n → ∞ means that
for every positive constant ε there exists a constant N such that
|f(n)| ≤ ε*|g(n)|, for all n > N (++)
From https://en.wikipedia.org/wiki/Big_O_notation#Little-o_notation.
Now, using (+), we can write
|f(n)| ≤ k*|g(n)|, for some k>0, n sufficiently large
<=> { n > 0 } n*|f(n)| ≤ k*n*|g(n)|
<=> n*|f(n)| ≤ k*|n*g(n)|
<=> |f(n)| ≤ (k/n)*|n*g(n)| (+++)
Return to the definition of little-o, specifically (++), and let, without loss of generality, k be fixed. Now, every positive constant ε can be described as
ε = k/C, for some constant C>0 (with k fixed, k>0) (*)
Now, assume, without loss of generality, that n is larger than this C, i.e., n>C. Then, (*) and (+++) yields
|f(n)| ≤ (k/n)*|n*g(n)| < (k/C)*|n*g(n)| = ε*|n*g(n)| (**)
(the strict inequality holds since n > C, and the final equality is just (*))
Since we're studying asymptotic behaviour, we are free to place the lower bound on n at any value larger than C (indeed, "n sufficiently large" is part of the definitions of both big-O and little-o), and hence, by the definition of little-o above, we have:
- As shown above, (+) implies (**)
- By the definition of little-o, (**) shows that f(n) is in o(n*g(n))
- Consequently, we've shown that, given (+), f(n) is in o(n*g(n))
Result: If f(n) is in O(g(n)), then f(n) is in o(n*g(n)), where these two relations refer to the big-O and little-o asymptotic bounds, respectively.
Comment: The result is, in fact, quite trivial. Big-O and little-o notation differ only in one of the two constants used in proving the upper bounds; i.e., we can write the definitions of big-O and little-o as:
f(n) is said to be in O(g(n)) if we can find a set of positive constants (k, N), such that f(n) < k*g(n) holds for all n>N.
f(n) is said to be in o(g(n)) if we can find a positive constant N, such that f(n) < ε*g(n) holds for all n>N, and for every positive constant ε.
The latter is obviously a stricter constraint, but if we can make use of one extra power of n on the right-hand side of f(n) < ε*g(n) (i.e., f(n) < ε*n*g(n)), then even for infinitesimal values of ε we can always choose the constant N large enough that ε*n exceeds any constant k used to show that f(n) is in O(g(n)) (as, recall, n > N).
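The result can be spot-checked numerically. The functions below are illustrative choices of mine, not from the question: f(n) = 3n is in O(n), so by the result above it should be in o(n*g(n)) with g(n) = n, i.e. the ratio f(n)/(n*g(n)) = 3/n should shrink toward 0.

```python
# Sketch with illustrative functions (not from the question):
# f(n) = 3n is O(n); the result says the ratio f(n) / (n * g(n)) -> 0,
# i.e. f(n) is o(n * g(n)).

def f(n):
    return 3 * n

def g(n):
    return n

ratios = [f(n) / (n * g(n)) for n in (10, 100, 1000, 10000)]
print(ratios)  # strictly decreasing toward 0
```

For any ε > 0, reading off the ratio 3/n shows directly which N to pick: any N > 3/ε works.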

Related

Proving a single-term function is big Omega

I was given the function 5n^3+2n+8 to prove for big-O and big-Omega. I finished big-O, but for big-Omega I end up with a single-term function. I canceled out 2n and 8 because they're positive and make my function larger, so I just end up with 5n^3. How do I choose C and n_0? or is it simply trivial in this case?
From Big-Ω (Big-Omega) notation (slightly modified):
If a running time of some function f(n) is Ω(g(n)), then for large
enough n, say n > n_0 > 0, the running time of f(n) is at least
C⋅g(n), for some constant C > 0.
Hence, if f(n) is in Ω(g(n)), then there exist some positive constants C and n_0 such that the following holds
f(n) ≥ C⋅g(n), for all n > n_0 (+)
Now, the choice of C and n_0 is not unique; it suffices to exhibit one such pair of constants (such that (+) holds) to describe the running time using Big-Omega notation, as posted above.
Hence, you are indeed almost there:
f(n) = 5n^3+2n+8 > 5n^3 holds for all n larger than say, 1
=> f(n) ≥ 5⋅n^3 for all n > n_0 = 1 (++)
Finally, (++) is just (+) for g(n) = n^3 and C=5, and hence, by (+), f(n) is in Ω(n^3).
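The inequality (++) can be spot-checked numerically, using the same g(n) = n^3, C = 5 and n_0 = 1 as chosen above (a minimal sanity check, not part of the proof):

```python
# Sketch: spot-check (+) with f(n) = 5n^3 + 2n + 8, g(n) = n^3,
# C = 5 and n_0 = 1, the constants chosen in the answer.

def f(n):
    return 5 * n**3 + 2 * n + 8

C, n0 = 5, 1
holds = all(f(n) >= C * n**3 for n in range(n0 + 1, 1000))
print(holds)  # every sampled n > n_0 satisfies (+)
```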

What is the Complexity (BigO) of this Algorithm?

I'm fairly new to the Big-O stuff and I'm wondering what's the complexity of the algorithm.
I understand that every addition, if statement and variable initialization is O(1).
From my understanding first 'i' loop will run 'n' times and the second 'j' loop will run 'n^2' times. Now, the third 'k' loop is where I'm having issues.
Is it running '(n^3)/2' times since the average value of 'j' will be half of 'n'?
Does it mean the Big-O is O((n^3)/2)?
We can use Sigma notation to calculate the number of iterations of the inner-most basic operation of your algorithm, where we consider sum = sum + A[k] to be the basic operation.
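The question's code is not reproduced above, so the following is a hypothetical reconstruction based on the description (an i-loop over n, a nested j-loop over n, and a k-loop whose bound depends on j); the array A and the exact loop bounds are assumptions, chosen so that the basic operation executes (n^3 - n^2)/2 times, matching the T(n) used later in this answer.

```python
# Hypothetical reconstruction (the original code is not shown).
# The basic operation sum = sum + A[k] runs
#   sum_{i=1}^{n} sum_{j=1}^{n} (j-1) = n * n(n-1)/2 = (n^3 - n^2)/2
# times.

def run(A):
    n = len(A)
    total = 0   # the question's `sum`
    count = 0   # number of times the basic operation executes
    for i in range(n):
        for j in range(n):
            for k in range(j):          # inner bound depends on j
                total = total + A[k]    # basic operation
                count += 1
    return total, count

_, count = run([1] * 10)
print(count, (10**3 - 10**2) // 2)  # both print 450
```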
Now, how do we infer from this count that T(n) is in O(n^3), you ask?
Let's loosely define what we mean by Big-O notation:
f(n) = O(g(n)) means c · g(n) is an upper bound on f(n). Thus
there exists some constant c such that f(n) is always ≤ c · g(n),
for sufficiently large n (i.e. , n ≥ n0 for some constant n0).
I.e., we want to find some (non-unique) set of positive constants c and n0 such that the following holds
|f(n)| ≤ c · |g(n)|, for some constant c>0 (+)
for n sufficiently large (say, n>n0)
for some function g(n), which will show that f(n) is in O(g(n)).
Now, in our case, f(n) = T(n) = (n^3 - n^2) / 2, and we have:
f(n) = 0.5·n^3 - 0.5·n^2
{ n > 0 } => f(n) = 0.5·n^3 - 0.5·n^2 ≤ 0.5·n^3 ≤ n^3
=> f(n) ≤ 1·n^3 (++)
Now (++) is exactly (+) with c = 1 (choosing n0 as, say, 1, so that n > n0 = 1), and hence we have shown that f(n) = T(n) is in O(n^3).
From the somewhat formal derivation above it's apparent that any constants in the function g(n) can be extracted and absorbed into the constant c in (+); hence you'll never (or at least should not) see a time complexity described as, e.g., O((n^3)/2). When using Big-O notation, we're describing an upper bound on the asymptotic behaviour of the algorithm, so only the dominant term is of interest (not how it is scaled by constants).
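The choice of constants in (++) can be spot-checked directly (a minimal sketch, using the T(n) from this answer):

```python
# Sketch: check T(n) = (n^3 - n^2)/2 <= c * n^3 with c = 1 for all
# sampled n > n0 = 1, i.e. the constants chosen in (++) above.

def T(n):
    return (n**3 - n**2) / 2

holds = all(T(n) <= 1 * n**3 for n in range(2, 1000))
print(holds)
```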

Big Theta Questions

I don't understand this situation.
f(n) ∈ O(g(n)),
g(n) ∈ Θ(f(n))
For these situations, why are the following the correct answers?
f(n) <= g(n) for all n>1: neither always true nor always false
g(n) ∈ Ω(f(n)), always true
f(n)<= Θ(g(n)), always true
My logic is since g(n) ∈ Θ(f(n)), g(n) and f(n) would have to have the same highest power (for example: n=n, n^2=n^2). In that case, wouldn't all 3 statements be true?
I don't understand why the first one is neither always true nor always false and the third one is always true.
Big-O, Big-Ω and Big-Θ notation, in mathematics, describes the asymptotic behaviour of a function, as an upper bound, lower bound and a tight bound (upper and lower bound), respectively. On SE, in the context of programming, we usually use these notations to describe the asymptotic behaviour of algorithms, with regard to the size of the problem that the algorithm is intended to solve (typically this size is denoted n).
For reference, see e.g.
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/asymptotic-notation
To answer your question, we shall treat this subject in the context of the asymptotic (/limiting) behaviour of functions. Let's cover your two properties and three statements about f(n) and g(n) one by one.
Property i) f(n) ∈ O(g(n))
Given is that
f(n) ∈ O(g(n)).
For some constant k>0, (the dominant term(s) of) g(n) provides an upper bound to the asymptotic behaviour of f(n), i.e.,
f(n) < k*g(n), n sufficiently large. (*)
As a quick example of what I mean by dominant terms: if g(n) is some polynomial, we have
O(g(n)) = O(a_0 + a_1*n + ... + a_j*n^j) = O(n^j),
i.e., asymptotically dominant term n^j.
Property ii) g(n) ∈ Θ(f(n))
Given is that
g(n) ∈ Θ(f(n)).
For some constants k_1>0 and k_2>0, k_1*f(n) and k_2*f(n) provide lower and upper bounds, respectively, on the asymptotic behaviour of g(n), i.e.,
k_1*f(n) < g(n) < k_2*f(n), n sufficiently large. (**)
Again, as we describe asymptotic behaviour, it's really the dominant terms of g(n) and f(n) that are of interest.
Assume from now on that i) and ii) both hold, for all sufficiently large n.
We now move on to your three statements.
Statement a) f(n) <= g(n) for all n>1: neither always true nor always false
First of all, given i) and ii), we cannot draw any conclusions about the behaviour of f(n) and g(n) for "n smaller than sufficiently large", i.e., we cannot make any statement regarding f(n) and g(n) for all n>1. The properties in i) and ii) only describe the asymptotic behaviour of f(n) and g(n). If we adjust the statement to
f(n) <= g(n) for all sufficiently large n: neither always true nor always false,
we can analyse it. Assume the following holds true (for n sufficiently large):
f(n) <= g(n). (a1)
For n sufficiently large, we also know that (*) and (**) holds, that is
(*) f(n) < k*g(n), for some constant k>0, (a2)
(**) f(n) < (1/k_1)*g(n), for some constant k_1>0, (a3)
g(n) < k_2*f(n), for some constant k_2>0, (a4)
Since (a1) holds by assumption, we can treat (a2) and (a3) as redundant (choose, e.g., k = (1/k_1) > 1). This leaves us with
f(n) <= g(n) < k_2*f(n), for some constant k_2>0. (a1, a4)
This is simply property ii) above, g(n) ∈ Θ(f(n)), where we've found that the constant k_1 = 1 (or, strictly, some k_1 just below 1) satisfies the left-hand side of (**).
On the other hand, if we assume that f(n) <= g(n) is always false (for sufficiently large n), we arrive at the result
g(n) < f(n), (a1.false)
g(n) < k_2*f(n), for some constant k_2>0. (a4)
Which, naturally, holds (k_2=1).
To round off: statement a) can hold or fail depending on the particular f(n) and g(n), which is why the answer is "neither always true nor always false".
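Two concrete pairs make this tangible (the functions are illustrative choices, not from the question): both pairs satisfy f(n) ∈ O(g(n)) and g(n) ∈ Θ(f(n)), since both grow linearly, yet f(n) <= g(n) holds for one pair and fails for the other.

```python
# Sketch with illustrative functions: both pairs have the same linear
# growth, so f in O(g) and g in Theta(f) hold for both, yet
# f(n) <= g(n) holds for the first pair and fails for the second.

f1 = lambda n: n
g1 = lambda n: 2 * n   # f1(n) <= g1(n) for all n >= 0
f2 = lambda n: 2 * n
g2 = lambda n: n       # f2(n) >  g2(n) for all n >= 1

ns = range(1, 100)
print(all(f1(n) <= g1(n) for n in ns))  # True
print(all(f2(n) <= g2(n) for n in ns))  # False
```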
Statement b) g(n) ∈ Ω(f(n)), always true
Much like in i) (for upper bound), given that
g(n) ∈ Ω(f(n)),
then, for sufficiently large n,
k*f(n) < g(n), for some constant k>0, (b1)
holds. We already know this to be true from ii), as it is given in the left-hand side of (**):
(**) k_1*f(n) < g(n), for some constant k_1>0. (b2)
Hence, given ii), it is trivial that g(n) ∈ Ω(f(n)) holds.
Statement c) f(n)<= Θ(g(n)), always true
Recall Big-Θ from ii); f(n) ∈ Θ(g(n)) can be described as
k_1*g(n) < f(n) < k_2*g(n), n sufficiently large, (c1)
for some constants k_1>0, k_2>0.
In this context, f(n) <= Θ(g(n)) does not make much sense, as Θ(g(n)), like O(g(n)), denotes either a property of f(n) or the set of functions f(n) that conform to that property (in the context of asymptotic behaviour). From the Wikipedia article on Big O notation (https://en.wikipedia.org/wiki/Big_O_notation):
"the notation O(g(n)) is also used to denote the set of all functions
f(n) that satisfy the relation f(n) = O(g(n))" (and analogously for Θ(g(n)))
Perhaps the leq operator "<=" has some special meaning in the context of Big-...-notation, but it's not something I have ever encountered myself.

Big O vs Small omega

Why is ω(n) smaller than O(n)?
I know what is little omega (for example, n = ω(log n)), but I can't understand why ω(n) is smaller than O(n).
Big Oh 'O' is an upper bound, and little omega 'ω' is a strict (loose) lower bound.
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}
ω(g(n)) = { f(n): for all constants c > 0, there exists a constant n0 such that 0 ≤ cg(n) < f(n) for all n ≥ n0}.
Also: f(n) ∈ ω(g(n)) exactly when lim_{n→∞} f(n)/g(n) = ∞.
n ∈ O(n) and n ∉ ω(n).
Alternatively:
n ∈ ω(log(n)) and n ∉ O(log(n))
ω(n) and O(n) are at opposite ends of the spectrum: O(g(n)) bounds from above, ω(g(n)) strictly from below.
[Graph not reproduced: an illustration ordering o, O, Θ, Ω and ω from strict upper bound to strict lower bound.]
For more details, see CSc 345 — Analysis of Discrete Structures
(McCann), which is the source of the graph. It also contains a compact representation of the definitions, which makes them easy to remember.
I can't comment, so first of all let me say that n ≠ Θ(log(n)). Big Theta means that for some positive constants c1, c2, and k, for all values of n greater than k, c1*log(n) ≤ n ≤ c2*log(n), which is not true. As n approaches infinity, it will always be larger than log(n), no matter the coefficient of log(n).
jesse34212 was correct in saying that n = ω(log(n)). n = ω(log(n)) means that n ≠ Θ(log(n)) AND n = Ω(log(n)). In other words, little or small omega is a loose lower bound, whereas big omega can be loose or tight.
Big O notation signifies a loose or tight upper bound. For instance, 12n = O(n) (tight upper bound, because it's as precise as you can get), and 12n = O(n^2) (loose upper bound, because you could be more precise).
12n ≠ ω(n) because n is a tight bound on 12n, and ω applies only to loose bounds. That's why 12n = ω(log(n)), or even 12n = ω(1). I keep using 12n, but the value of the constant does not affect the relation.
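The tight-vs-loose distinction shows up directly in the ratios (a numeric sketch using the 12n example from above): 12n/n stays at the constant 12, so n is a tight bound and 12n ∉ ω(n), while 12n/log(n) grows without bound, so 12n ∈ ω(log n).

```python
import math

# Sketch: 12n / n is the constant 12 (n is a tight bound on 12n, so
# 12n is not omega(n)), while 12n / log(n) grows without bound
# (so 12n is omega(log n)).

tight = [12 * n / n for n in (10, 1000, 100000)]
loose = [12 * n / math.log(n) for n in (10, 1000, 100000)]
print(tight)  # [12.0, 12.0, 12.0]
print(loose)  # strictly increasing, unbounded
```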
Technically, O(n) is the set of all functions that grow asymptotically no faster than n, so the set-membership symbol '∈' is most appropriate, but most people write "= O(n)" (instead of "∈ O(n)") as an informal shorthand.
Algorithmic complexity has a mathematic definition.
If f and g are two functions, f = O(g) if you can find two constants c (> 0) and n such that f(x) < c * g(x) for every x > n.
For Ω, it is the opposite: you can find constants such that f(x) > c * g(x).
f = Θ(g) if there are three constants c, d and n such that c * g(x) < f(x) < d * g(x) for every x > n.
Then, O means your function is dominated, Θ means it is equivalent to the other function, and Ω means it has a lower limit.
So, when you use Θ, your approximation is better, since you are "wrapping" your function between two edges; whereas O only sets a maximum. Ditto for Ω (a minimum).
To sum up:
O(n): your algorithm's running time grows at most on the order of n (upper bound)
Ω(n): your algorithm's running time grows at least on the order of n (lower bound)
Θ(n): your algorithm's running time grows exactly on the order of n (tight bound)
To conclude, your assumption is wrong: the strict lower bound is given by ω, not Ω. As you may know, n > log(n) when n is large, and in fact n/log(n) grows without bound; hence n = ω(log(n)) (and n ≠ Θ(log(n))), in line with the previous definitions.

Big-O with positive constants

In this Big-O / Computational Complexity problem
given that a and b are positive constants greater than 1 and n is a variable parameter.
I assumed that a^(n+1) = O(a^n), a^(bn) = O(a^n) & a^(n+b) = O(a^n).
First I need to know if I am correct in assuming this.
If so, how would I prove that f(n) = O(f(n))?
Recall the definition of big-O:
f(n) ∈ O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
Let g = f, c = 1 and k = 0; then you have a trivial demonstration of f(n) ∈ O(f(n)).
Similarly, from a^(n+1) = a⋅a^n, let f(n) = a^(n+1), g(n) = a^n, c = a, k = 0; again the proof of O(a^(n+1)) = O(a^n) is trivial. The proof for O(a^(n+b)) = O(a^n) is identical (with c = a^b).
O(a^(bn)) is not equal to O(a^n) for a, b > 1; see Exponential growth in big-o notation.
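The difference between the two cases is visible in the ratios (a sketch with example constants a = 2, b = 2; any a, b > 1 behave alike): a^(n+1)/a^n is the fixed constant a, so one c works for all n, whereas a^(bn)/a^n = a^((b-1)n) grows without bound, so no single constant c can satisfy a^(bn) ≤ c⋅a^n.

```python
# Sketch with example constants a = 2, b = 2 (illustrative choices):
# a^(n+1) / a^n is the fixed constant a, so a^(n+1) = O(a^n),
# while a^(b*n) / a^n = a^((b-1)*n) is unbounded, so a^(b*n) is not O(a^n).

a, b = 2, 2
shift = [a**(n + 1) / a**n for n in range(1, 20)]
scale = [a**(b * n) / a**n for n in range(1, 20)]
print(set(shift))   # {2.0}: bounded by the single constant c = a
print(scale[:4])    # [2.0, 4.0, 8.0, 16.0]: grows without bound
```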
