Big-O with positive constants

In this Big-O / computational complexity problem, we are given that a and b are positive constants greater than 1 and that n is a variable parameter.
I assumed that a^(n+1) = O(a^n), a^(bn) = O(a^n) and a^(n+b) = O(a^n).
First I need to know whether I am correct in assuming this.
If so, how would I prove that f(n) = O(f(n))?

Recall the definition of big-O:
f(n) ∈ O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
Let g = f, c = 1 and k = 0; then you have a trivial demonstration of f(n) ∈ O(f(n)).
Similarly, from a^(n+1) = a⋅a^n, let f(n) = a^(n+1), g(n) = a^n, c = a, k = 0; again, the proof of O(a^(n+1)) = O(a^n) is trivial. The proof for O(a^(n+b)) = O(a^n) is identical: take c = a^b, since a^(n+b) = a^b⋅a^n.
O(a^(bn)) is not equal to O(a^n) for a, b > 1; see "Exponential growth in big-o notation" below.
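As a quick numeric sanity check (an illustration, not a proof): the ratios a^(n+1)/a^n and a^(n+b)/a^n equal the fixed constants a and a^b, while a^(bn)/a^n = a^((b-1)n) grows without bound, so no fixed c can cap it. A minimal Python sketch, with the arbitrary choice a = b = 2:

    # Ratios against a^n for a = b = 2 (arbitrary illustrative values).
    a, b = 2.0, 2.0
    for n in [1, 5, 10, 20]:
        base = a ** n
        print(f"n={n:2d}  a^(n+1)/a^n={a**(n+1)/base:.1f}  "
              f"a^(n+b)/a^n={a**(n+b)/base:.1f}  "
              f"a^(bn)/a^n={a**(b*n)/base:.1f}")
    # The first two ratios stay at 2 and 4; the last is 2^n and diverges,
    # which is why a^(bn) is not O(a^n).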

Proving a single-term function is big Omega

I was given the function 5n^3+2n+8 to bound with big-O and big-Omega. I finished big-O, but for big-Omega I end up with a single-term function. I dropped 2n and 8: since they're positive, the full function is larger than 5n^3 alone. So I just end up with 5n^3. How do I choose C and n_0? Or is it simply trivial in this case?
From Big-Ω (Big-Omega) notation (slightly modified):
If the running time of some function f(n) is Ω(g(n)), then for large
enough n, say n > n_0 > 0, the running time of f(n) is at least
C⋅g(n), for some constant C > 0.
Hence, if f(n) is in Ω(g(n)), then there exist positive constants C and n_0 such that the following holds
f(n) ≥ C⋅g(n), for all n > n_0 (+)
Now, the choice of C and n_0 is not unique, it suffices that you can show one such set of constants (such that (+) holds) to be able to describe the running time using the Big-Omega notation, as posted above.
Hence, you are indeed almost there:
f(n) = 5n^3+2n+8 > 5n^3 holds for all n larger than, say, 1
=> f(n) ≥ 5⋅n^3 for all n > n_0 = 1 (++)
Finally, (++) is just (+) for g(n) = n^3 and C=5, and hence, by (+), f(n) is in Ω(n^3).
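As a quick sanity check of (++) in Python (an illustration only, not part of the proof):

    # Verify f(n) = 5n^3 + 2n + 8 >= 5*n^3 on a sampled range of n > n_0 = 1.
    f = lambda n: 5 * n**3 + 2 * n + 8
    assert all(f(n) >= 5 * n**3 for n in range(2, 10_000))
    print("f(n) >= 5*n^3 holds for all sampled n > 1, consistent with (++)")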

What is the Complexity (BigO) of this Algorithm?

I'm fairly new to the Big-O stuff and I'm wondering what the complexity of this algorithm is.
I understand that every addition, if-statement, and variable initialization is O(1).
From my understanding, the first 'i' loop will run 'n' times and the second 'j' loop will run 'n^2' times. Now, the third 'k' loop is where I'm having issues.
Is it running '(n^3)/2' times since the average value of 'j' will be half of 'n'?
Does it mean the Big-O is O((n^3)/2)?
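(A minimal Python sketch of a loop nest consistent with this description; the question's original code isn't reproduced here, so treat the exact bounds as an assumption.)

    # Hypothetical reconstruction of the algorithm: the i and j loops each
    # run n times, and the k loop runs j times per (i, j) pair, so the
    # basic operation sum = sum + A[k] executes (n^3 - n^2)/2 times.
    def algorithm(A):
        n = len(A)
        total = 0
        for i in range(n):
            for j in range(n):
                for k in range(j):
                    total = total + A[k]
        return total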
We can use Sigma notation to calculate the number of iterations of the inner-most basic operation of your algorithm, where we consider sum = sum + A[k] to be the basic operation.
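Assuming the loop bounds sketched above (the exact indices depend on the original code), the derivation is:

    T(n) = Σ_{i=1}^{n} Σ_{j=1}^{n} Σ_{k=1}^{j-1} 1
         = Σ_{i=1}^{n} Σ_{j=1}^{n} (j - 1)
         = Σ_{i=1}^{n} n(n - 1)/2
         = n^2(n - 1)/2
         = (n^3 - n^2)/2 ∈ O(n^3)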
Now, how do we infer that T(n) is in O(n^3) in the last step, you ask?
Let's loosely define what we mean by Big-O notation:
f(n) = O(g(n)) means c · g(n) is an upper bound on f(n). Thus
there exists some constant c such that f(n) is always ≤ c · g(n),
for sufficiently large n (i.e., n ≥ n0 for some constant n0).
I.e., we want to find some (non-unique) set of positive constants c and n0 such that the following holds
|f(n)| ≤ c · |g(n)|, for some constant c>0 (+)
for n sufficiently large (say, n>n0)
for some function g(n), which will show that f(n) is in O(g(n)).
Now, in our case, f(n) = T(n) = (n^3 - n^2) / 2, and we have:
f(n) = 0.5·n^3 - 0.5·n^2
{ n > 0 } => f(n) = 0.5·n^3 - 0.5·n^2 ≤ 0.5·n^3 ≤ n^3
=> f(n) ≤ 1·n^3 (++)
Now (++) is exactly (+) with c=1 (and choose n0 as, say, 1, n>n0=1), and hence, we have shown that f(n) = T(n) is in O(n^3).
From the somewhat formal derivation above it's apparent that any constants in the function g(n) can simply be extracted and absorbed into the constant c in (+); hence you'll never (or at least should never) see a time complexity described as, e.g., O((n^3)/2). When using Big-O notation, we're describing an upper bound on the asymptotic behaviour of the algorithm, so only the dominant term is of interest (not how it is scaled by constants).
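If you'd like to convince yourself of the count empirically (using the hypothetical loop nest sketched above), a small Python check:

    # Count the basic operation and compare with T(n) = (n^3 - n^2)/2,
    # which is always an integer since n^2(n - 1) is even.
    def count_ops(n):
        ops = 0
        for i in range(n):
            for j in range(n):
                for k in range(j):
                    ops += 1
        return ops

    for n in [5, 10, 50]:
        assert count_ops(n) == (n**3 - n**2) // 2
    print("empirical count matches (n^3 - n^2)/2")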

Prove small-oh with big-oh

Suppose I have already shown that f(n) is O(g(n)). From the definition of little-o, how do I prove that f(n) is o(n * g(n))?
Given: f(n) is in O(g(n)).
Using the definition of big-O notation, we can write this as:
f(n) is in O(g(n))
=> |f(n)| ≤ k*|g(n)|, for some constant k>0 (+)
for n sufficiently large (say, n>N)
For the definition of big-O used as above, see e.g.
https://www.khanacademy.org/computing/computer-science/algorithms/asymptotic-notation/a/big-o-notation
Prove: Given (+), then f(n) is in o(n*g(n)).
Let's first state what little-o notation means:
Formally, f(n) = o(g(n)) (or f(n) ∈ o(g(n))) as n → ∞ means that
for every positive constant ε there exists a constant N such that
|f(n)| ≤ ε*|g(n)|, for all n > N (++)
From https://en.wikipedia.org/wiki/Big_O_notation#Little-o_notation.
Now, using (+), we can write
|f(n)| ≤ k*|g(n)|, for some k>0, n sufficiently large
<=> { n > 0 } <=> n*|f(n)| ≤ k*n*|g(n)|
<=> n*|f(n)| ≤ k*|n*g(n)|
<=> |f(n)| ≤ (k/n)*|n*g(n)| (+++)
Return to the definition of little-o, specifically (++), and let, without loss of generality, k be fixed. Now, every positive constant ε can be described as
ε = k/C, for some constant C>0 (with k fixed, k>0) (*)
Now, assume, without loss of generality, that n is larger than this C, i.e., n>C. Then, (*) and (+++) yields
|f(n)| ≤ (k/n)*|n*g(n)| < (k/C)*|n*g(n)| = ε*|n*g(n)| (**)
where the strict inequality holds since n > C, and the final equality is just (*).
Since we're studying asymptotic behaviour, we can choose the lower bound on n to be any value larger than C (in fact, that's in the definition of both big-O and little-o: "n sufficiently large"), and hence, by the definition of little-o above, we have:
- As shown above, (+) implies (**)
- By the definition of little-o, (**) shows that f(n) is in o(n*g(n))
- Subsequently, we've shown that, given (+), then: f(n) is in o(n*g(n))
Result: If f(n) is in O(g(n)), then f(n) is in o(n*g(n)), where these two relations refer to big-O and little-o asymptotic bounds, respectively.
Comment: The result is, in fact, quite trivial. The big-O and little-o notation differ only in one of the two constants used in proving the upper bounds, i.e., we can write the definitions of big-O and little-O as:
f(n) is said to be in O(g(n)) if we can find a set of positive constants (k, N), such that f(n) < k*g(n) holds for all n>N.
f(n) is said to be in o(g(n)) if we can find a positive constant N, such that f(n) < ε*g(n) holds for all n>N, and for every positive constant ε.
The latter is obviously a stricter constraint, but if we gain one extra factor of n on the right-hand side of f(n) < ε*g(n) (i.e., f(n) < ε*n*g(n)), then even for infinitesimal values of ε we can always choose the other constant N freely to be large enough that ε*n provides any constant k needed to show that f(n) is in O(g(n)) (recall, n > N).
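A numeric illustration of the result (not a proof), with the hypothetical choice f(n) = 3n and g(n) = n, so that f(n) is O(g(n)) with k = 3:

    # f(n)/(n*g(n)) = 3/n tends to 0: for any eps > 0, pick N > 3/eps and
    # then f(n) < eps*n*g(n) for all n > N, which is the little-o condition.
    f = lambda n: 3 * n
    g = lambda n: n
    for n in [10, 100, 1_000, 10_000]:
        print(n, f(n) / (n * g(n)))   # prints 3/n, shrinking toward 0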

Big O vs Small omega

Why is ω(n) smaller than O(n)?
I know what little omega is (for example, n = ω(log n)), but I can't understand why ω(n) is smaller than O(n).
Big Oh 'O' is an upper bound, and little omega 'ω' is a strict lower bound.
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0}
ω(g(n)) = { f(n): for all constants c > 0, there exists a constant n0 such that 0 ≤ cg(n) < f(n) for all n ≥ n0}.
Also: f(n) ∈ ω(g(n)) exactly when lim f(n)/g(n) = +∞ as n → ∞.
For example, n ∈ O(n) and n ∉ ω(n).
Alternatively:
n ∈ ω(log(n)) and n ∉ O(log(n))
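As a numeric illustration of the limit characterization (with f(n) = n and g(n) = log(n)), the ratio n/log(n) grows without bound, consistent with n ∈ ω(log(n)):

    import math

    # n / log(n) is unbounded, so n ∈ ω(log(n)) by the limit test.
    for n in [10, 10**3, 10**6, 10**9]:
        print(n, n / math.log(n))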
ω(n) and O(n) are at opposite ends of the spectrum: the former is a strict lower bound, the latter an upper bound.
[Figure omitted: diagram of the spectrum of asymptotic bounds.]
For more details, see CSc 345 — Analysis of Discrete Structures (McCann), the source of the diagram; it also contains a compact representation of the definitions, which makes them easy to remember.
First of all, let me say that n ≠ Θ(log(n)). Big Theta means that for some positive constants c1, c2, and k, c1*log(n) ≤ n ≤ c2*log(n) for all values of n greater than k, which is not true: as n approaches infinity, n will always be larger than c*log(n), no matter what coefficient c is placed on log(n).
jesse34212 was correct in saying that n = ω(log(n)). n = ω(log(n)) means that n ≠ Θ(log(n)) AND n = Ω(log(n)). In other words, little or small omega is a loose lower bound, whereas big omega can be loose or tight.
Big O notation signifies a loose or tight upper bound. For instance, 12n = O(n) (tight upper bound, because it's as precise as you can get), and 12n = O(n^2) (loose upper bound, because you could be more precise).
12n ≠ ω(n) because n is a tight bound on 12n, and ω only applies to loose bounds. That's why 12n = ω(log(n)), or even 12n = ω(1). I keep using 12n, but the value of the constant does not affect the relationship.
Technically, O(n) is the set of all functions that grow asymptotically equal to or slower than n, so the membership symbol '∈' is most appropriate, but most people write "= O(n)" (instead of "∈ O(n)") as an informal way of expressing it.
Algorithmic complexity has a mathematical definition.
If f and g are two functions, f = O(g) if you can find two constants c (> 0) and n such that f(x) < c * g(x) for every x > n.
For Ω, it is the opposite: you can find constants such that f(x) > c * g(x).
f = Θ(g) if there are three constants c, d, and n such that c * g(x) < f(x) < d * g(x) for every x > n.
Then, O means your function is dominated, Θ means it is equivalent to the other function, and Ω means your function has a lower limit.
So, when you use Θ, your approximation is better, for you are "wrapping" your function between two bounds; whereas O only sets a maximum. Ditto for Ω (a minimum); see the small check after the summary below.
To sum up:
O(n): the complexity is at most of order n (an upper bound)
Ω(n): the complexity is at least of order n (a lower bound)
Θ(n): the complexity is exactly of order n (both bounds at once)
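To make the Θ "wrapping" concrete, here is a small check with the hypothetical function f(n) = 2n + 3 and the constants c = 2, d = 3 (any valid choice works):

    # f(n) = 2n + 3 is Θ(n): 2n <= f(n) <= 3n once n >= 3,
    # because 2n + 3 <= 3n holds exactly when n >= 3.
    f = lambda n: 2 * n + 3
    assert all(2 * n <= f(n) <= 3 * n for n in range(3, 10_000))
    print("2n <= f(n) <= 3n for all sampled n >= 3")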
To conclude, the assumption needs one correction: as you may know, n > log(n) when n is large, and in fact n/log(n) grows without bound, so n = ω(log(n)) (not Θ(log(n))), according to the previous definitions.

Exponential growth in big-o notation

I have a problem understanding the following question. It says:
Prove that exponential functions have different orders of growth for different
values of base.
It looks to me like, for example, considering a^n: if a = 3, its growth rate will be larger than when a = 2. It looks obvious. Is that really what the question wants? How can I do a formal proof of that?
Thanks in advance for your help.
f(n) ∈ O(g(n)) means there are positive constants c and k, such that 0 ≤ f(n) ≤ cg(n) for all n ≥ k. The values of c and k must be fixed for the function f and must not depend on n.
Let b > a > 1 without loss of generality, and suppose b^n ∈ O(a^n). This implies that there are positive constants c and k such that 0 ≤ b^n ≤ c⋅a^n for all n ≥ k, which is impossible:
b^n ≤ c⋅a^n for all n ≥ k implies (b/a)^n ≤ c for all n ≥ k
which contradicts lim (b/a)^n = +∞, which holds because b/a > 1.
So if b > a > 1, then b^n ∉ O(a^n), but a^n ∈ O(b^n), so O(a^n) ⊊ O(b^n): exponentials with different bases have genuinely different orders of growth.
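A numeric illustration of the contradiction, with the arbitrary choice a = 2, b = 3:

    # (b/a)^n grows without bound when b > a > 1, so no constant c can
    # satisfy b^n <= c * a^n for all large n. Arbitrary choice: a=2, b=3.
    a, b = 2.0, 3.0
    for n in [1, 10, 50, 100]:
        print(n, (b / a) ** n)   # 1.5^n: 1.5, ~57.7, ~6.4e8, ~4.1e17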
