Arrange the functions according to growth rate using Asymptotic Notation.
Can someone confirm whether the sequence below, listed in ascending order of growth, is true or false?
n^0.01, sqrt(n), 6n log n, 4n^(3/2), 2n log^2 n, 4^(log n), n^2 (log n).
These facts could help:
sqrt(n) = n^(1/2)
O(n^(1/2)) > O(log(n))
O(k*n) = O(n) when k is a constant
4^(log n) = (2^(log n))^2 = n^2 (taking log to base 2)
so you have this order :
n^0.01, sqrt(n), 6n log n, 2n log^2 n, 4n^(3/2), 4^(log n), n^2 log n
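As a sanity check, a minimal Python sketch that evaluates all seven functions (log taken to base 2, as in the hint above) at a single large n and confirms the values come out in the claimed ascending order:

```python
import math

# The seven functions from the question, log taken to base 2.
fs = [
    ("n^0.01",      lambda n: n ** 0.01),
    ("sqrt(n)",     lambda n: math.sqrt(n)),
    ("6 n log n",   lambda n: 6 * n * math.log2(n)),
    ("2 n log^2 n", lambda n: 2 * n * math.log2(n) ** 2),
    ("4 n^(3/2)",   lambda n: 4 * n ** 1.5),
    ("4^(log n)",   lambda n: 4 ** math.log2(n)),   # equals n^2
    ("n^2 log n",   lambda n: n ** 2 * math.log2(n)),
]

n = 10 ** 6
values = [f(n) for _, f in fs]
# At this n the values are already in strictly ascending order,
# matching the asymptotic ranking above.
assert values == sorted(values)
```

A spot check of one value is not a proof, of course, but it makes an ordering mistake easy to catch.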
The problem is that I need to know if log(n-f(n)) is big theta of log(n), where f(n) is a lower order function than n, e.g., log(n) or sqrt(n).
I tried to use some log rules and plotting seems to confirm the bound, but I can't get it exactly.
As f(n) is a lower-order function than n, f(n) = o(n). Hence n - o(n) < 2n, so n - o(n) = O(n). Also, since o(n)/n -> 0, we eventually have o(n) < 0.01n (the 0.01 is arbitrary; any positive constant works), so n - o(n) > n - 0.01n = 0.99n for large enough n. Therefore n - o(n) = Omega(n), and combining the two, n - o(n) = Theta(n).
As the log function is increasing, log(n - o(n)) is squeezed between log(0.99n) = log(n) + log(0.99) and log(2n) = log(n) + log(2), so log(n - o(n)) = Theta(log(n)).
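A quick numeric sketch of the conclusion, using f(n) = sqrt(n) as the lower-order function from the question: the ratio log(n - f(n)) / log(n) should climb toward 1 as n grows, consistent with log(n - o(n)) = Theta(log(n)).

```python
import math

# Ratio log(n - sqrt(n)) / log(n) for increasingly large n.
# If log(n - o(n)) = Theta(log n), this ratio should tend to 1.
ratios = [math.log(n - math.sqrt(n)) / math.log(n)
          for n in (10 ** 3, 10 ** 6, 10 ** 9)]
# The ratios increase monotonically toward 1.
assert ratios[0] < ratios[1] < ratios[2]
```

By n = 10^9 the ratio is already within about 10^-5 of 1.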
Can anybody explain which one of these has the highest asymptotic complexity, and why:
10000000n vs 1.000001^n vs n^2
You can use standard domination rules from asymptotic analysis.
Domination rules tell you that when n -> +Inf, n = o(n^2). (Note the difference between the notations O(.) and o(.), the latter meaning f(n) = o(g(n)) iff there exists a sequence e(n) which converges to 0 as n -> +Inf such that f(n) = e(n)g(n). With f(n) = n, g(n) = n^2, you can see that f(n)/g(n) = 1/n -> 0 as n -> +Inf.)
Furthermore, you know that for any integer k and real x > 1, we have n^k/x^n -> 0 as n -> +Inf. x^n (exponential) complexity dominates n^k (polynomial) complexity.
Therefore, in order of increasing complexity, you have:
n << n^2 << 1.000001^n
Note: 10000000n could be written O(n) with the loose conventions used for asymptotic analysis in computer science. Recall that the complexity C(n) of an algorithm is O(n) (C(n) = O(n)) if and only if (iff) there exist an integer p >= 0 and a constant K >= 0 such that the relation |C(n)| <= K*n holds for all n >= p.
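A small sketch of the dominance claim: even with a base as close to 1 as 1.000001, the exponential eventually overtakes n^2. Comparing in log space avoids overflow: 1.000001^n > n^2 iff n*log(1.000001) > 2*log(n).

```python
import math

def exp_wins(n: int) -> bool:
    # 1.000001**n > n**2, compared in log space to avoid huge numbers.
    return n * math.log(1.000001) > 2 * math.log(n)

# The polynomial is still ahead at n = 10**7, but the exponential
# has taken over by n = 10**8.
assert not exp_wins(10 ** 7)
assert exp_wins(10 ** 8)
```

The crossover point is enormous for a base this close to 1, which is why small-n intuition is misleading here.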
When comparing polynomial running times, you can ignore all coefficients of n and just focus on the exponents: the higher the exponent, the higher the time complexity.
In this case, ignoring the coefficients leaves n, n^2 and 1.000001^n.
Between the two polynomials, n^2 dominates n. But 1.000001^n is exponential, and an exponential with any base greater than 1 eventually dominates every polynomial, so the function with the highest asymptotic complexity is 1.000001^n.
My professor just taught us that any operation that halves the length of the input has an O(log(n)) complexity as a thumb rule. Why is it not O(sqrt(n)), aren't both of them equivalent?
They are not equivalent: sqrt(N) increases a lot more quickly than log2(N). There is no constant C such that sqrt(N) < C*log(N) for all values of N greater than some minimum value.
An easy way to grasp this, is that log2(N) will be a value close to the number of (binary) digits of N, while sqrt(N) will be a number that has itself half the number of digits that N has. Or, to state that with an equality:
log2(N) = 2log2(sqrt(N))
So you need to take the logarithm(!) of sqrt(N) to bring it down to the same order of complexity as log2(N).
For example, for a binary number with 11 digits, 0b10000000000 (= 2^10), the square root is 0b100000, but the logarithm is only 10.
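The digit-count intuition can be checked directly in Python, where `int.bit_length()` gives the number of binary digits:

```python
import math

# N has 11 binary digits; its square root has roughly half as many,
# while its base-2 logarithm is just the digit count minus one.
N = 0b10000000000            # 2**10 = 1024
assert N.bit_length() == 11
assert math.isqrt(N) == 0b100000     # 32, a 6-digit binary number
assert math.log2(N) == 10
```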
Assuming natural logarithms (otherwise just multiply by a constant), we have
lim {n->inf} log n / sqrt(n) = (inf / inf)
= lim {n->inf} (1/n) / (1/(2*sqrt(n)))    (by L'Hospital)
= lim {n->inf} 2*sqrt(n)/n
= lim {n->inf} 2/sqrt(n)
= 0 < inf
Refer to https://en.wikipedia.org/wiki/Big_O_notation for the limit-based definition of O(.); by that definition, the limit above shows log n = O(sqrt(n)).
Also, comparing the growth of the two functions directly: log n is upper bounded by sqrt(n) for all sufficiently large n (for the natural logarithm, ln n < sqrt(n) holds for all n > 0; for base 2 it holds for all n >= 16).
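A minimal numeric sketch of the limit: the ratio log(n)/sqrt(n) keeps shrinking toward 0 as n grows.

```python
import math

# log(n)/sqrt(n) at a few growing values of n; by the limit above,
# this ratio should decrease toward 0.
ratios = [math.log(n) / math.sqrt(n) for n in (10**2, 10**4, 10**6, 10**8)]
assert all(a > b for a, b in zip(ratios, ratios[1:]))   # strictly decreasing
assert ratios[-1] < 0.01
```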
Just compare the two functions:
sqrt(n) ---------- log(n)
n^(1/2) ---------- log(n)
Plug in Log
log( n^(1/2) ) --- log( log(n) )
(1/2) log(n) ----- log( log(n) )
It is clear that for large n: c * log(n) > log(log(n)) for any constant c > 0.
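The log-of-both-sides step above can be spot-checked numerically: (1/2)*log(n) comfortably exceeds log(log(n)) at every tested n.

```python
import math

# After taking logs, sqrt(n) becomes (1/2)*log(n) and log(n) becomes
# log(log(n)); the former dominates, confirming sqrt(n) >> log(n).
for n in (10 ** 3, 10 ** 6, 10 ** 9):
    assert 0.5 * math.log(n) > math.log(math.log(n))
```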
No, it's not equivalent.
@trincot gave an excellent explanation with an example in his answer. I'm adding one more point. Your professor taught you that
any operation that halves the length of the input has an O(log(n)) complexity
It's also true that,
any operation that reduces the length of the input by 2/3 (keeping 1/3) has O(log3(n)) complexity
any operation that reduces the length of the input by 3/4 (keeping 1/4) has O(log4(n)) complexity
any operation that reduces the length of the input by 4/5 (keeping 1/5) has O(log5(n)) complexity
And so on...
In general, any operation that reduces the length of the input by (B-1)/B, keeping 1/B of it, has O(logB(n)) complexity.
N.B.: O(logB(n)) means the base-B logarithm of n.
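A minimal sketch of that rule of thumb: count how many times n can be shrunk to n/B before it reaches 1. The count tracks the base-B logarithm of n.

```python
def shrink_steps(n: int, B: int) -> int:
    """Count halving-style steps: keep 1/B of the input each round."""
    steps = 0
    while n > 1:
        n //= B          # discard (B-1)/B of the input
        steps += 1
    return steps

# The step count equals log_B(n) for exact powers of B.
assert shrink_steps(1024, 2) == 10   # log2(1024)
assert shrink_steps(3 ** 7, 3) == 7  # log3(3^7)
```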
One way to approach the problem is to compare the rates of growth of the two functions, i.e. their derivatives:
d/dn sqrt(n) = 1/(2*sqrt(n))   ...(1)
d/dn ln(n) = 1/n               ...(2)
As n increases we see that (2) becomes less than (1). When n = 10,000, eq. (1) equals 0.005 while eq. (2) equals 0.0001.
Hence log(n) grows more slowly, and is therefore better, as n increases.
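Assuming the two rates being compared are the derivatives of sqrt(n) and ln(n), the quoted values at n = 10,000 check out directly:

```python
import math

# Derivatives of the two functions, evaluated at n = 10,000:
# d/dn sqrt(n) = 1/(2*sqrt(n)) and d/dn ln(n) = 1/n.
n = 10_000
rate_sqrt = 1 / (2 * math.sqrt(n))   # 0.005
rate_log = 1 / n                     # 0.0001
assert rate_sqrt > rate_log          # sqrt is still growing faster here
```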
No, they are not equivalent; you can even prove that
O(n**k) > O(log(n, base))
for any k > 0 and base > 1 (k = 1/2 in case of sqrt).
When talking about O(f(n)) we want to investigate the behaviour for large n; limits are a good means for that. Suppose the two big-O classes were equivalent:
O(n**k) = O(log(n, base))
which would mean there's some finite constant C such that
n**k <= C * log(n, base)
starting from some large enough n. Put in other terms (log(n, base) is not 0 for large n, and both functions are continuously differentiable):
lim(n**k/log(n, base)) = C < +inf
n->+inf
To find out the limit's value we can use L'Hospital's Rule, i.e. take derivatives for numerator and denominator and divide them:
lim(n**k/log(n, base)) =
lim([k*n**(k-1)]/[1/(n*ln(base))]) =
ln(base) * k * lim(n**k) = +infinity
so we can conclude that there's no constant C such that n**k < C*log(n, base) for all large n, or in other words
O(n**k) > O(log(n, base))
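A numeric sketch of that limit for the sqrt case (k = 1/2, base = 10): the ratio n**k / log(n, base) keeps increasing, so no finite C can bound it.

```python
import math

# sqrt(n) / log10(n) at growing n: unbounded growth means there is no
# constant C with sqrt(n) <= C * log10(n) for all large n.
vals = [math.sqrt(n) / math.log(n, 10) for n in (10**2, 10**4, 10**6)]
assert vals[0] < vals[1] < vals[2]   # strictly increasing
```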
No, it isn't.
When we are dealing with time complexity, we think of the input as a very large number. So let's take n = 2^18. The number of operations for sqrt(n) will be 2^9 = 512, while for log(n) it will be only 18 (we consider log with base 2 here). Clearly 2^9 is much, much greater than 18.
So, we can say that O(log n) is smaller than O(sqrt n).
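The n = 2^18 example above, checked directly:

```python
import math

n = 2 ** 18
assert math.isqrt(n) == 2 ** 9   # sqrt(n) = 512 operations
assert math.log2(n) == 18        # log2(n) = 18 operations
assert 2 ** 9 > 18               # sqrt is far larger already
```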
To prove that sqrt(n) grows faster than lg n (log base 2), you can take the limit of the second over the first and show that it approaches 0 as n approaches infinity.
lim(n—>inf) of (lgn/sqrt(n))
Applying L'Hopital's Rule:
= lim(n—>inf) of (2/(sqrt(n)*ln2))
Since sqrt(n) and ln2 will increase infinitely as n increases, and 2 is a constant, this proves
lim(n—>inf) of (2/(sqrt(n)*ln2)) = 0
I am relatively new to Big-O notation and I came across this question:
Sort the following functions by order of growth from slowest to fastest - Big-O Notation. For each pair of adjacent functions in your list, please write a sentence describing why it is ordered the way it is. 7n^3 - 10n, 4n^2, n, n^8621909, 3n, 2^loglog n, n log n, 6n log n, n!, 1.1^n
So I have got this order -
1-> n^8621909
2->7n^3 - 10n
3->4n^2
4->3n
5->6n log n
6->n!
7->n
8->n log n
9-> 1.1^n
10->2^loglogn
I am unsure if this is the correct order, and even if it is, I am unsure how to justify each step, because I produced this ordering by plugging in certain values for n and then arranging the results.
Here is the list ordered from fastest-growing down to slowest (equal ranks mean the same order of growth):
1. n! = O(n!)
2. 1.1^n = O(1.1^n)
3. n^8621909 = O(n^8621909)
4. 7n^3 - 10n = O(n^3)
5. 4n^2 = O(n^2)
6. 6n log n = O(nlogn)
6. n log n = O(nlogn)
8. 3n = O(n)
8. n = O(n)
10. 2^loglog n = O(logn)
Some explanations:
O(c^n) < O(n!) < O(n^n) (for some constant c)
O(n^c) < O(c^n)
2^loglogn can be reduced to logn by setting 2^loglogn = x and taking the log of both sides
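That reduction can be verified numerically: 2^(log2(log2(n))) collapses to log2(n), which is why 2^loglog n lands at the bottom of the list.

```python
import math

# Check the identity 2**(log2(log2(n))) == log2(n) on a few powers of 2.
for n in (2 ** 4, 2 ** 16, 2 ** 64):
    assert math.isclose(2 ** math.log2(math.log2(n)), math.log2(n))
```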
I am learning about algorithm complexity, and I just want to verify my understanding is correct.
1) T(n) = 2n + 1 = O(n)
This is because we drop the constants 2 and 1, and we are left with n. Therefore, we have O(n).
2) T(n) = n * n - 100 = O(n^2)
This is because we drop the constant -100, and are left with n * n, which is n^2. Therefore, we have O(n^2)
Am I correct?
Basically you have these different levels, determined by the "dominant" term of your function, starting from the lowest complexity:
O(1) if your function only contains constants
O(log(n)) if the dominant part is in log, ln...
O(n^p) if the dominant part is polynomial and the highest power is p (e.g. O(n^3) for T(n) = n*(3n^2 + 1) -3 )
O(p^n) if the dominant part is a fixed number to n-th power (e.g. O(3^n) for T(n) = 3 + n^99 + 2*3^n)
O(n!) if the dominant part is factorial
and so on...
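A small sketch of the "dominant term" idea, using the T(n) = 3 + n^99 + 2*3^n example from the list above: once n is large enough, the exponential term dwarfs the rest, so T(n) = O(3^n).

```python
# For large n, the 3**n term of T(n) = 3 + n**99 + 2*3**n dominates,
# so T(n) is bounded by a constant multiple of 3**n.
n = 1000
T = 3 + n ** 99 + 2 * 3 ** n
assert 2 * 3 ** n > n ** 99 + 3    # the exponential part dominates here
assert T < 3 * 3 ** n              # hence T(n) <= C * 3**n with C = 3
```

Note that the polynomial n^99 actually wins for moderate n; the dominance only kicks in past a (large) crossover point, which is exactly what asymptotic notation captures.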