Rank the functions in increasing order of growth - big-o

Rank the functions in increasing order of growth:
F1(n) = n^(n/2)
F2(n) = (n/2)^n
F3(n) = (log n)^(log n)
F4(n) = 8^(log n)
F5(n) = n^(4/3)
F6(n) = n^3 - n^2
F7(n) = 2^((log n)^2)
F8(n) = n log n
I have the functions ranked as follows:
F8 < F5 < F6 ~ F4 < F3 < F7 < F1 ~ F2
f(n) < g(n) means f(n) = Little-o(g(n)) and
f(n) ~ g(n) means f(n) = Big-Theta(g(n))
Appreciate any second opinions on this! Particularly, F1 and F2 as well as F6 and F4.
The main intuition I used was linear < polynomial < exponential, plus simplifying certain functions such as F4(n) = 8^(log n) = n^3 and F7(n) = 2^((log n)^2) = n^(log n).

Ranking of the functions should be:
F8(n) < F5(n) < F4(n) ~ F6(n) < F3(n) < F7(n) < F1(n) < F2(n)
Your ranking is almost right; the only correction is that F1 and F2 are not Theta of each other.
F8(n) = n log n grows faster than n but slower than any polynomial n^k with k > 1, so it is the slowest of the list.
F5(n) = n^(4/3) is a polynomial of degree 4/3, which is slower than a degree-3 polynomial.
F4(n) can be rewritten: 8^(log n) = 2^(3 log n) = n^3 (taking log to be base 2), and F6(n) = n^3 - n^2 = Theta(n^3), so F4(n) ~ F6(n).
F3(n) is not a logarithmic function: (log n)^(log n) = 2^(log n * log log n) = n^(log log n), and since log log n eventually exceeds any fixed constant, F3(n) grows faster than every fixed-degree polynomial, including F4(n) and F6(n).
F7(n) = 2^((log n)^2) = n^(log n) grows faster than F3(n) = n^(log log n), because log n dominates log log n in the exponent.
F2(n) > F1(n) - If you take the logarithm of both sides you get log((n/2)^n) = n(log(n) - log(2)) and log(n^(n/2)) = (n/2)log(n); the former dominates the latter, so F1(n) = o(F2(n)) rather than F1(n) ~ F2(n).
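As a quick numeric sanity check (not a proof, and assuming log means log base 2 as in the simplifications above), you can compare the logarithm of each function at a single large n; comparing logs preserves the ordering and avoids overflow:
import math

n = 2 ** 20                      # one large sample point
lg = lambda x: math.log(x, 2)    # log base 2

# log2 of each function, simplified by hand so nothing overflows
logs = {
    "F8 = n log n":          lg(n) + lg(lg(n)),
    "F5 = n^(4/3)":          (4 / 3) * lg(n),
    "F4 = 8^(log n)":        3 * lg(n),
    "F6 = n^3 - n^2":        lg(n ** 3 - n ** 2),
    "F3 = (log n)^(log n)":  lg(n) * lg(lg(n)),
    "F7 = 2^((log n)^2)":    lg(n) ** 2,
    "F1 = n^(n/2)":          (n / 2) * lg(n),
    "F2 = (n/2)^n":          n * (lg(n) - 1),
}
for name, value in sorted(logs.items(), key=lambda item: item[1]):
    print(f"{name:22s} log2(F(n)) ~ {value:.1f}")
At this single sample point the printed order already matches the ranking above (with F6 and F4 essentially tied), but keep in mind one data point only illustrates the result; the asymptotic argument is what proves it.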

Related

Is log(n-f(n)) big theta of log(n)

The problem is that I need to know if log(n-f(n)) is big theta of log(n), where f(n) is a lower order function than n, e.g., log(n) or sqrt(n).
I tried to use some log rules and plotting seems to confirm the bound, but I can't get it exactly.
As f(n) is a lower-order function than n, f(n) = o(n). Hence n - f(n) <= n < 2n for large n, so n - f(n) = O(n). Also, since f(n) = o(n), we have f(n) < 0.01 n for all sufficiently large n (the constant 0.01 is arbitrary and can be made as small as we like), so n - f(n) > 0.99 n. Therefore n - f(n) = Omega(n), and hence n - f(n) = Theta(n).
Since n - f(n) = Theta(n), there are constants c1, c2 > 0 with c1 n <= n - f(n) <= c2 n for large n; because log is increasing, log(c1) + log(n) <= log(n - f(n)) <= log(c2) + log(n), so log(n - f(n)) = Theta(log(n)).
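A small numeric check (an illustration, not part of the proof) with f(n) = sqrt(n) shows the ratio approaching 1, as the Theta bound predicts:
import math

# Ratio log(n - sqrt(n)) / log(n) for increasingly large n
for n in [10 ** 3, 10 ** 6, 10 ** 9, 10 ** 12]:
    ratio = math.log(n - math.sqrt(n)) / math.log(n)
    print(n, round(ratio, 6))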

Asymptotic Complexity comparison

Can anybody explain which one of them has highest asymptotic complexity and why,
10000000n vs 1.000001^n vs n^2
You can use standard domination rules from asymptotic analysis.
Domination rules tell you that when n -> +Inf, n = o(n^2). (Note the difference between the notations O(.) and o(.), the latter meaning f(n) = o(g(n)) iff there exists a sequence e(n) which converges to 0 as n -> +Inf such that f(n) = e(n)g(n). With f(n) = n, g(n) = n^2, you can see that f(n)/g(n) = 1/n -> 0 as n -> +Inf.)
Furthermore, you know that for any integer k and real x > 1, we have n^k/x^n -> 0 as n -> +Inf. x^n (exponential) complexity dominates n^k (polynomial) complexity.
Therefore, in order of increasing complexity, you have:
n << n^2 << 1.000001^n
Note: 10000000n can be written O(n) with the loose conventions used for asymptotic analysis in computer science. Recall that the complexity C(n) of an algorithm is O(n) (C(n) = O(n)) if and only if there exist an integer p >= 0 and a constant K >= 0 such that for all n >= p the relation |C(n)| <= K.n holds.
When comparing asymptotic time complexity, you can ignore constant coefficients and focus on how each term grows with n; for polynomials, the higher the exponent, the higher the complexity.
In this case, dropping the coefficient of 10000000n leaves n, so the comparison is between n, n^2 and 1.000001^n.
n^2 grows faster than n, and 1.000001^n, even though its base is barely above 1, is an exponential and therefore eventually grows faster than any polynomial, including n^2. So 1.000001^n has the highest asymptotic complexity, and the order is n < n^2 < 1.000001^n, in agreement with the answer above.
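A rough numeric illustration (a sketch, not a proof): comparing the natural logarithms of the three functions avoids overflow and shows the exponential overtaking the other two somewhere between n = 10^6 and n = 10^8:
import math

for n in [10 ** 3, 10 ** 6, 10 ** 8, 10 ** 10]:
    log_linear = math.log(10000000) + math.log(n)   # log(10000000 * n)
    log_quad = 2 * math.log(n)                      # log(n^2)
    log_exp = n * math.log(1.000001)                # log(1.000001^n)
    print(n, round(log_linear, 1), round(log_quad, 1), round(log_exp, 1))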

If g(n) = sqrt(n)^sqrt(n), does the complexity of g(n) = O(2^n)?

If g(n) = sqrt(n)^sqrt(n), does the complexity of g(n) = O(2^n)?
Any help is appreciated.
A useful technique when comparing two exponential functions is to get them to have the same base:
√n^√n = (2^(lg √n))^√n = 2^(√n · lg √n)
Now you're comparing 2^(√n · lg √n) against 2^n, and hopefully from that it's easy to see that the former function does not grow as rapidly as the latter, so √n^√n = O(2^n) is indeed true.
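A quick numeric check of those two exponents (just an illustration, with lg meaning log base 2) shows √n · lg √n falling far behind n:
import math

for n in [10 ** 2, 10 ** 4, 10 ** 6, 10 ** 8]:
    lhs = math.sqrt(n) * math.log2(math.sqrt(n))   # exponent of sqrt(n)^sqrt(n)
    rhs = n                                        # exponent of 2^n
    print(n, round(lhs, 1), rhs)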
The other proofs are short and nice, but here is a more detailed proof going back to the definition of the big-Oh notation and the computation of the needed limits.
A function g(n) is upper-bounded by another function f(n) in the big-Oh sense (g(n) = O(f(n))) if the limit superior of g(n)/f(n) as n -> inf is finite.
Putting in the functions, we must compute
lim (n -> inf) sqrt(n)^sqrt(n) / 2^n
First some algebraic massage on the g(n) term. By the root identities it holds that sqrt(n) = n^(1/2), and furthermore (x^a)^b = x^(a*b). With that:
sqrt(n)^sqrt(n) = n^(1/2 * sqrt(n))
Furthermore, 2^n is exp(log(2^n)) by the logarithmic identities, and with log(a^b) = b*log(a) we have 2^n = exp(log(2^n)) = exp(n * log(2)). The same can be applied to n^(1/2 * sqrt(n)): it becomes exp(log(n^(1/2 * sqrt(n)))) = exp(1/2 * sqrt(n) * log(n)). So now we have
lim (n -> inf) exp(1/2 * sqrt(n) * log(n)) / exp(n * log(2))
At this point we can compare the growth of the exponents, i.e. compare
lim (n -> inf) (1/2 * sqrt(n) * log(n)) / (n * log(2))
That limit is 0 because const * n grows faster than sqrt(n) * log(n). This can in turn be shown by calculating the limit explicitly. Put the 1/2 and the log(2) into the denominator; since n = sqrt(n) * sqrt(n), the expression simplifies to
lim (n -> inf) log(n) / (2 * log(2) * sqrt(n))
This limit is indeed zero, because the square root grows faster than the logarithm (see Orders of common functions). Thus the exponent of the lower function grows faster than the exponent of the upper function, and g(n) = O(2^n) is rigorously proven by the definition above.
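That final limit can also be checked numerically (an illustration only; the rigorous argument is the one above):
import math

# log(n) / (2 * log(2) * sqrt(n)) should tend to 0 as n grows
for n in [10 ** 2, 10 ** 6, 10 ** 10, 10 ** 14]:
    print(n, math.log(n) / (2 * math.log(2) * math.sqrt(n)))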
One can assume O(log n) < O(sqrt(n)) (Order of common functions - wikipedia)
The transformation works as follows:
sqrt(n)^sqrt(n) < 2^n                        # a^b = e^(ln(a) * b)
e^(ln(sqrt(n)) * sqrt(n)) < e^(ln(2) * n)    # if e^a < e^b, then a < b
ln(sqrt(n)) * sqrt(n) < ln(2) * n            # / sqrt(n)
ln(sqrt(n)) < ln(2) * sqrt(n)                # ln(a^b) = b * ln(a)
0.5 ln(n) < ln(2) * sqrt(n)                  # ln(a) / ln(b) = log(a base b)
0.5 log(n base 2) < sqrt(n)                  # base and constant factor don't matter
log(n) < sqrt(n)
I've omitted complexity-classes for simplicity. The above should be read bottom to top for a proper proof.

Asymptotic complexity of logarithmic functions

I know that in terms of complexity, O(log n) is faster than O(n), which is faster than O(n log n), which is faster than O(n^2).
But what about O(n^2) and O(n^2 log n), or O(n^2.001) and O(n^2 log n):
T1(n)=n^2 + n^2logn
What is the big Oh and omega of this function? Also, what's little oh?
versus:
T2(n)=n^2.001 + n^2logn
Is there any difference in big Oh now?
I'm having trouble understanding how to compare logn with powers of n. As in, is logn approximately n^0.000000...1 or n^1.000000...1?
O(n^k) is faster (i.e. grows more slowly) than O(n^k') for all k, k' >= 0 with k' > k.
So O(n^2) would be faster than O(n^2 log n).
Note that you can only ignore constants, nothing involving the input size can be ignored.
Thus, complexity of T(n)=n^2 + n^2logn would be the worse of the two, which is O(n^2logn).
Little-oh
Little-oh, in loose terms, is a strict upper bound: f(n) = o(g(n)) means f grows strictly slower than g. Yes, it is called little, and it is more restrictive than big-Oh.
n^2 = O(n^k) for k >= 2 but n^2 = o(n^k) for k > 2
Practically, it is Big-Oh which takes most of the limelight.
What about T(n)= n^2.001 + n^2logn?
We have n^2.001 = n^2 * n^0.001 and n^2 * log(n).
To settle the question, we need to figure out what would eventually be bigger, n^0.001 or log(n).
It turns out that a function of the form n^k with k > 0 will eventually take over log(n) for a sufficiently large n.
Same is the case here, and thus T(n) = O(n^2.001).
Practically though, log(n) will be larger than n^0.001 for any remotely realistic n.
(10^3300)^0.001 < log(10^3300) (1995.6 < 3300), and the sufficiently large n in this case would be just around 10^3550, an astronomical number.
Worth mentioning again, 10^3550. There are about 10^82 atoms in the observable universe.
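If you want to locate that crossover yourself, here is a tiny search (assuming, as in the comparison above, that log means log base 10; writing x = log10(n), the condition n^0.001 > log10(n) becomes 10^(x/1000) > x, and we start past the small-n region where the two sides are still close together):
x = 10
while 10 ** (x / 1000) <= x:
    x += 1
print(f"n^0.001 finally overtakes log10(n) at roughly n = 10^{x}")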
T(n)=n^2 + n^2logn
What is the big Oh and omega of this function? Also, what's little oh?
Quoting a previous answer:
Don't forget that big O notation represents a set. O(g(n)) is the set of all functions f such that f does not grow faster than g; formally, this is the same as saying that there exist C and n0 such that |f(n)| <= C|g(n)| for every n >= n0. The expression f(n) = O(g(n)) is a shorthand for saying that f(n) is in the set O(g(n)).
Also, you can think of big O as ≤ and of little o as <. So you usually care more about finding a relevant big-O bound than a little-o bound. In your case it is even appropriate to use big Theta, which plays the role of =. Since n^2 log n dominates n^2, it's true that
T1(n) = n^2 + n^2 log n = Θ(n^2 log n)
Now the second part. log n grows so slowly that even n^e with e > 0 dominates it; in fact one can prove that lim n^e/(log n)^k = inf as n goes to infinity, for any fixed e > 0 and k. From this you have that n^0.001 dominates log n, and then
T2(n) = n^2.001 + n^2 log n = Θ(n^2.001).
If f(n) = Ө(g(n)) it's also true that f(n) = O(g(n)) so to answer your question:
T1(n)=O(n^2 logn)
T2(n)=O(n^2.001)
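As a numeric illustration (not a proof), the ratio T1(n) / (n^2 log n) settles toward a constant, which is exactly what the Θ(n^2 log n) bound says:
import math

for n in [10 ** 2, 10 ** 4, 10 ** 6, 10 ** 8]:
    t1 = n ** 2 + n ** 2 * math.log(n)
    print(n, t1 / (n ** 2 * math.log(n)))   # tends to 1 as n grows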

How to calculate big-theta

Can someone provide me a real example of how to calculate big theta?
Is big theta something like the average case, (min-max)/2?
I mean (minimum time - big O)/2
Please correct me if I am wrong, thanks
Big-theta notation represents the following rule:
For any two functions f(n), g(n), if f(n)/g(n) and g(n)/f(n) are both bounded as n grows to infinity, then f = Θ(g) and g = Θ(f). In that case, g is both an upper bound and a lower bound on the growth of f.
Here's an example algorithm:
def find_minimum(values):
    minimum = float('inf')       # start with +infinity
    for value in values:
        if value < minimum:      # one comparison per list item
            minimum = value
    return minimum
We wish to evaluate the cost function c(n) where n is the size of the input list. This algorithm will perform one comparison for every item in the list, so c(n) = n.
c(n)/n = 1, which remains bounded as n goes to infinity, so c(n) grows no faster than n. This is what is meant by big-O notation: c(n) = O(n). Conversely, n/c(n) = 1 also remains bounded, so c(n) grows no slower than n. Since it grows neither slower nor faster, it must grow at the same speed. This is what is meant by theta notation: c(n) = Θ(n).
Note that c(n)/n² is also bounded, so c(n) = O(n²) as well — big-O notation is merely an upper bound on the complexity, so any O(n) function is also O(n²), O(n³)...
However, since n²/c(n) = n is not bounded, then c(n) ≠ Θ(n²). This is the interesting property of big-theta notation: it's both an upper bound and a lower bound on the complexity.
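As a rough empirical check, you can instrument the loop with a hypothetical count_comparisons helper (mirroring the find_minimum above) and see c(n) grow exactly linearly with the input size:
def count_comparisons(values):
    # Same loop as find_minimum, but count the comparisons performed
    minimum = float('inf')
    comparisons = 0
    for value in values:
        comparisons += 1
        if value < minimum:
            minimum = value
    return comparisons

for n in [10, 100, 1000]:
    print(n, count_comparisons(list(range(n))))   # prints n comparisons each time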
Big theta is a tight bound for a function T(n): if T(n) = O(f(n)) and T(n) = Omega(f(n)), then T(n) = Theta(f(n)), and Theta(f(n)) is the tight bound for T(n).
In other words, Theta(f(n)) 'describes' a function T(n) if both O [big O] and Omega 'describe' the same T with the same f.
For example, quicksort [with correct median choices] always takes at most O(n log n) time and at least Omega(n log n) time, so quicksort [with good median choices] is Theta(n log n).
EDIT:
added discussion in comments:
Searching an array is still Theta(n). The Theta function does not indicate worst/best case, but the behaviour of the desired case. I.e., when searching an array, let T(n) = number of ops for the worst case. Here, obviously T(n) <= O(n), but also T(n) >= n/2, because in the worst case you need to iterate the whole array, so T(n) >= Omega(n), and therefore Theta(n) is an asymptotic bound.
From http://en.wikipedia.org/wiki/Big_O_notation#Related_asymptotic_notations, we learn that "Big O" denotes an upper bound, whereas "Big Theta" denotes an upper and lower bound, i.e. in the limit as n goes to infinity:
f(n) = O(g(n)) --> |f(n)| < k.g(n)
f(n) = Theta(g(n)) --> k1.g(n) < f(n) < k2.g(n)
So you cannot infer Big Theta from Big O.
Big-Theta (Θ) notation provides an asymptotic upper and lower bound on the growth rate of an algorithm's running time. To calculate the big-Theta notation of a function f(n), you need to find a non-negative function g(n) such that:
There exist positive constants c1, c2 and n0 such that 0 <= c1 * g(n) <= f(n) <= c2 * g(n) for all n >= n0.
f(n) and g(n) have the same asymptotic growth rate.
The big-Theta notation for the function f(n) is then written as Θ(g(n)). The purpose of this notation is to provide a rough estimate of the running time, ignoring lower order terms and constant factors.
For example, consider the function f(n) = 2n^2 + 3n + 1. To calculate its big-Theta notation, we can choose g(n) = n^2. Then we need c1 and c2 such that 0 <= c1 * n^2 <= 2n^2 + 3n + 1 <= c2 * n^2 for all n >= n0; for example, c1 = 1/2, c2 = 6 and n0 = 1 work, since 3n + 1 <= 4n^2 for n >= 1. So f(n) = Θ(n^2).
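A quick sampled check of those constants (not a substitute for the algebra; c1 = 1/2, c2 = 6, n0 = 1 are just one workable choice):
c1, c2, n0 = 0.5, 6, 1
ok = all(c1 * n ** 2 <= 2 * n ** 2 + 3 * n + 1 <= c2 * n ** 2
         for n in range(n0, 10 ** 4))
print("bounds hold on the sampled range:", ok)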

Resources