Order of growth of the following functions - big-O

Can someone tell me if the ranking of the following functions by order of growth is correct? (listed from fastest-growing to slowest-growing)
2^n, n^2, (n lg n, lg(n!)), n^(1/lg n), 4

Most of these are right. However, look at
n^(1/lg n).
Notice that, for nonzero n, we have n = 2^(lg n), so
n^(1/lg n) = (2^(lg n))^(1/lg n) = 2^((lg n)/(lg n)) = 2^1 = 2.
So while 2 and 4 do grow at the same (nonexistent) rate, n^(1/lg n) is always smaller than 4 for any nonzero n.
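As a quick numerical sanity check (a small Python sketch of my own, not part of the original answer), you can evaluate n^(1/lg n) directly and see that it is the constant 2:

import math

# n**(1/log2(n)) evaluates to 2 for every n > 1 (up to floating-point error).
for n in [2, 10, 1_000, 10**6, 10**12]:
    print(n, n ** (1 / math.log2(n)))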

Related

Why is √n the optimal value for m in a jump search?

I am currently learning about searching algorithms and I came across jump search, which has a time complexity of O(√n). Why is √n the optimal value for m (jump size) in the jump search algorithm and how does it affect the time complexity?
Let m be the jump size and n be the number of elements.
In the worst case, the maximum number of elements you have to check is the maximum number of jumps (n/m - 1) plus the number of elements between jumps (m), and the time you take is approximately proportional to the total number of elements you check.
The goal in choosing m, therefore, is to minimize: (n/m)+m-1.
The derivative with respect to m is 1 - n/m², and the minimum occurs where the derivative is 0:
1 - n/m² = 0
n/m² = 1
n = m²
m = √n
Assuming the block size is k, the worst case scenario requires roughly n / k iterations for finding the block and k iterations for finding the element in the block.
To minimize n / k + k where n is a constant, we can use differentiation (see Matt's answer) or the AM-GM inequality to get:
n/k + k >= 2*sqrt(n)
We can clearly see that 2*sqrt(n) is the minimal number of iterations, and it is attained when k = sqrt(n).
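For concreteness, here is a small Python sketch of jump search with block size m = floor(sqrt(n)); the function name and the test list are just illustrative, not taken from the question:

import math

def jump_search(arr, target):
    # Jump search on a sorted list; block size m ~ sqrt(n), so both phases
    # (jumping and the final linear scan) take O(sqrt(n)) comparisons.
    n = len(arr)
    if n == 0:
        return -1
    m = max(1, math.isqrt(n))
    prev, curr = 0, m
    # Jump ahead in blocks of m until the current block may contain the target.
    while curr < n and arr[curr - 1] < target:
        prev, curr = curr, curr + m
    # Linear scan inside the block [prev, min(curr, n)).
    for i in range(prev, min(curr, n)):
        if arr[i] == target:
            return i
    return -1

print(jump_search([1, 3, 5, 7, 9, 11, 13], 9))   # -> 4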

How can I prove the following algorithm?

Exp(n)
    If n = 0
        Return 1
    End If
    If n % 2 == 0
        temp = Exp(n/2)
        Return temp × temp
    Else    // n is odd
        temp = Exp((n−1)/2)
        Return temp × temp × 2
    End If
How can I prove by strong induction on n that, for all n ≥ 1, the number of multiplications made by Exp(n) is ≤ 2 log2(n)?
PS: Exp(n) = 2^n.
A simple way is to use strong induction.
First, prove that Exp(0) terminates and returns 2^0.
Let N be some arbitrary even nonnegative number.
Assume the function Exp correctly calculates and returns 2^n for every n in [0, N].
Under this assumption, prove that Exp(N+1) and Exp(N+2) both terminate and correctly return 2^(N+1) and 2^(N+2).
You're done! By induction it follows that for any nonnegative N, Exp(N) correctly returns 2^N.
PS: Note that in this post, 2^N means "two to the power of N" and not "bitwise xor of the binary representations of 2 and N".
The program exactly applies the following recurrence:
P[0] = 1
n even -> P[n] = P[n/2]²
n odd  -> P[n] = P[(n-1)/2]² · 2
The program always terminates because, for n > 0, both n/2 and (n-1)/2 are < n, so the argument of the recursive calls always decreases.
P[n] = 2^n is the solution of the recurrence. Indeed,
n = 0    -> 2^0 = 1
n = 2m   -> 2^n = (2^m)²
n = 2m+1 -> 2^n = 2·(2^m)²
and this covers all cases.
As every call decreases the number of significant bits of n by one and performs one or two multiplications, the total number does not exceed two times the number of significant bits.
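To see both facts at work, here is a small Python sketch (mine, mirroring the pseudocode above) that counts the multiplications and checks them against twice the number of significant bits of n:

def exp2(n):
    # Returns (2**n, number of multiplications), following the recursion above.
    if n == 0:
        return 1, 0
    if n % 2 == 0:
        temp, mults = exp2(n // 2)
        return temp * temp, mults + 1        # one multiplication in the even case
    temp, mults = exp2((n - 1) // 2)
    return temp * temp * 2, mults + 2        # two multiplications in the odd case

for n in [1, 5, 16, 37, 1000]:
    value, mults = exp2(n)
    assert value == 2 ** n
    assert mults <= 2 * n.bit_length()       # at most 2 per significant bit of n
    print(n, mults, 2 * n.bit_length())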

Is complexity O(log(n)) equivalent to O(sqrt(n))?

My professor just taught us, as a rule of thumb, that any operation that halves the length of the input has O(log(n)) complexity. Why is it not O(sqrt(n))? Aren't the two equivalent?
They are not equivalent: sqrt(N) will increase a lot more quickly than log2(N). There is no constant C such that sqrt(N) < C·log(N) for all values of N greater than some minimum value.
An easy way to grasp this, is that log2(N) will be a value close to the number of (binary) digits of N, while sqrt(N) will be a number that has itself half the number of digits that N has. Or, to state that with an equality:
        log2(N) = 2·log2(sqrt(N))
So you need to take the logarithm(!) of sqrt(N) to bring it down to the same order of complexity as log2(N).
For example, for a binary number with 11 digits, 0b10000000000 (= 2^10), the square root is 0b100000, but the logarithm is only 10.
Assuming natural logarithms (otherwise just multiply by a constant), we have
lim {n->inf} log n / sqrt(n) = (inf / inf)
= lim {n->inf} (1/n) / (1/(2*sqrt(n)))   (by L'Hospital)
= lim {n->inf} 2*sqrt(n)/n
= lim {n->inf} 2/sqrt(n)
= 0 < inf
Refer to https://en.wikipedia.org/wiki/Big_O_notation for an alternative definition of O(·); from the limit above we can say log n = O(sqrt(n)).
Also, comparing the growth of the two functions: log n is upper bounded by sqrt(n) for all n > 0 (with the natural logarithm assumed above).
Just compare the two functions:
sqrt(n) ---------- log(n)
n^(1/2) ---------- log(n)
Take the log of both sides:
log( n^(1/2) ) --- log( log(n) )
(1/2) log(n) ----- log( log(n) )
It is clear that: const · log(n) > log(log(n))
No, it's not equivalent.
@trincot gave an excellent explanation with an example in his answer. I'm adding one more point. Your professor taught you that
any operation that halves the length of the input has an O(log(n)) complexity
It's also true that,
any operation that reduces the length of the input by 2/3rds has an O(log3(n)) complexity
any operation that reduces the length of the input by 3/4ths has an O(log4(n)) complexity
any operation that reduces the length of the input by 4/5ths has an O(log5(n)) complexity
and so on...
It's even true for any reduction of the length of the input by (B-1)/B-ths: the operation then has a complexity of O(logB(n)).
N.B.: logB(n) means the base-B logarithm of n.
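A tiny Python sketch (illustrative only, with made-up test values) that counts how many shrink-by-a-factor-of-B steps are needed and compares the count with the base-B logarithm:

import math

def steps_to_shrink(n, B):
    # Count how many times n can be divided by B before it drops below B.
    count = 0
    while n >= B:
        n /= B
        count += 1
    return count

n = 1_000_000
for B in [2, 3, 4, 5]:
    print(B, steps_to_shrink(n, B), math.floor(math.log(n, B)))   # the two counts agree (up to rounding)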
One way to approach the problem is to compare the rates of growth of the two functions, i.e. their derivatives:
(1) d/dn sqrt(n) = 1/(2*sqrt(n))
(2) d/dn log(n) = 1/n
As n increases we see that (2) is less than (1): when n = 10,000, (1) equals 0.005 while (2) equals 0.0001.
Hence log(n) is the better (more slowly growing) function as n increases.
No, they are not equivalent; you can even prove that
O(n**k) > O(log(n, base))
for any k > 0 and base > 1 (k = 1/2 in case of sqrt).
When talking about O(f(n)) we want to investigate the behaviour for large n; limits are a good means for that. Suppose the two big-O classes were equivalent:
O(n**k) = O(log(n, base))
which means there's some finite constant C such that
n**k <= C * log(n, base)
starting from some large enough n; to put it in other terms (log(n, base) is not 0 for large n, and both functions are continuously differentiable):
lim(n**k / log(n, base)) = C
n->+inf
To find the limit's value we can use L'Hospital's Rule, i.e. take the derivatives of the numerator and denominator and divide one by the other:
lim(n**k / log(n, base)) =
lim([k * n**(k-1)] / [1/(n * ln(base))]) =
ln(base) * k * lim(n**k) = +infinity
so we can conclude that there's no finite constant C such that n**k < C * log(n, base) for all large n, or in other words
O(n**k) > O(log(n, base))
No, it isn't.
When we are dealing with time complexity, we think of the input as a very large number. So let's take n = 2^18. Now for sqrt(n) the number of operations will be 2^9, and for log(n) it will be equal to 18 (we consider log base 2 here). Clearly 2^9 is much, much greater than 18.
So we can say that O(log n) is smaller than O(sqrt n).
To prove that sqrt(n) grows faster than lg(n) (base 2), you can take the limit of the second over the first and prove that it approaches 0 as n approaches infinity:
lim(n -> inf) of (lg(n) / sqrt(n))
Applying L'Hospital's Rule:
= lim(n -> inf) of (2 / (sqrt(n) * ln 2))
Since sqrt(n) increases without bound as n increases, while 2 and ln 2 are constants, this proves
lim(n -> inf) of (2 / (sqrt(n) * ln 2)) = 0
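The same conclusion can be seen numerically; a short Python sketch (my own, just for illustration) prints the ratio log2(n)/sqrt(n) as n grows:

import math

# The ratio log2(n) / sqrt(n) shrinks toward 0, so log2(n) grows strictly slower.
for n in [10, 10**3, 10**6, 10**9, 10**12]:
    print(n, math.log2(n) / math.sqrt(n))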

Counting binary digits algorithm || prove big-O

Is the big-O O(log n)?
How can I prove it using a summation?
//Input: A positive decimal integer n
//Output: The number of binary digits in n’s binary representation
count ← 1
while n > 1 do
    count ← count + 1
    n ← ⌊n/2⌋
return count
The value of n is reduced like this:
n + n/2 + n/4 + n/8 + .... + 8 + 4 + 2 + 1
The summation of the above series is 2^(log2(n)+1) - 1 (for n a power of 2), i.e. about 2n.
Now, coming back to the question: the number of times the loop executes is the number of items that appear in the above series, and that determines the time complexity of the algorithm.
The number of items appearing in the above series is about log2(n) (more precisely, ⌊log2(n)⌋ + 1). So the algorithm's time complexity is O(log n).
Example:
n = 8; the corresponding series:
8 + 4 + 2 + 1 = 15 = 2^4 - 1 = 2^(log2(8)+1) - 1
Here, the number of items in the series = 4 = log2(8) + 1
therefore,
time complexity = O(number of iterations)
                = O(number of elements in the series)
                = O(log n)
Just check the returned value count. Since it will be close to log2(N), you can state that the time complexity is O(log N).
Just think of it in reverse (mathematically):
1 -> 2 -> 4 -> 8 -> ... -> N   (the x-th value, using 0-based indexing)
2^x = N
x = log2(N)
You have to consider that larger numbers use more memory and take more processing for each operation. Note: time complexity only cares about what happens for the largest values, not the smaller ones.
The number of iterations is log2(n); however, the cost of the n > 1 test and of n ← ⌊n/2⌋ is proportional to the size of the integer, i.e. 128-bit costs twice as much as 64-bit, and 1024-bit is 16 times as much. So the cost of each operation is about log(m), where m is the maximum unsigned value that the number of bits can store. If you assume a bounded number of wasted bits (e.g. no more than 64), this means the cost is O(log(n) * log(n)), or O(log(n)^2).
If you used Java's BigInteger, that's what the time complexity would be.
Big-O complexity can easily be calculated by counting the number of times the while loop runs, as the operations inside the loop take constant time. In this case N varies as
N, N/2, N/4, N/8, ..., 1
Just counting the number of terms in the above series gives the number of times the loop runs. So,
N / 2^p = 1   (p is the number of times the loop runs)
This gives p = log2(N), thus the complexity is O(log N).
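Here is a small Python sketch (not from the answers above) that runs the same loop and confirms that the returned count equals ⌊log2(n)⌋ + 1, i.e. the number of binary digits:

def binary_digit_count(n):
    # Mirrors the pseudocode: halve n until it reaches 1, counting the passes.
    count = 1
    while n > 1:
        count += 1
        n //= 2                              # floor division, as in n <- floor(n/2)
    return count

for n in [1, 8, 1000, 2**20]:
    assert binary_digit_count(n) == n.bit_length()   # bit_length = floor(log2(n)) + 1
    print(n, binary_digit_count(n))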

Calculate big Theta bound for 2 recursive calls

T(m,n) = 2T(m/2,n)+n, assume T(m,n) is constant if either m<2 or n<2
So what I don't understand is: can this problem be solved using the Master Theorem? If so, how? If not, is this table correct?
level | # of instances | size       | cost per instance | total cost
0     | 1              | m, n       | n                 | n
1     | 2              | m/2, n     | n                 | 2n
2     | 4              | m/4, n     | n                 | 4n
i     | 2^i            | m/(2^i), n | n                 | 2^i * n
k     | m              | 1, n       | n                 | n*m
Thanks!
The Master Theorem might be a little bit of overkill here, and your solution method is not bad (log means logarithm to base 2, c = T(1,n)):
T(m,n) = n + 2T(m/2,n) = n + 2n + 4T(m/4,n) = ... = n*(1+2+4+...+2^(log(m)-1)) + 2^log(m)*c
       = n*(2^log(m) - 1) + m*c = n*(m-1) + m*c = Theta(n*m)
If you use the Master Theorem by treating n as a constant, then you would easily get T(m,n) = Theta(m*C(n)) with a constant C(n) depending on n, but the Master Theorem does not tell you much about this constant C(n). If you get too smart and inattentive you could easily get burned:
T(m,n)=n+2T(m/2,n)=n*(1+2/nT(m/2,n))=n*Theta(2^(log(m/n)))
=n*Theta(m/n)=Theta(m)
And now, because you left out C(n) in the third step, you got a wrong result!
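As a quick check of the Theta(n*m) result (a Python sketch of my own, taking the base-case constant c = 1), you can evaluate the recurrence directly and compare it with the closed form n*(m-1) + m*c from the unrolling above:

def T(m, n, c=1):
    # T(m, n) = 2*T(m/2, n) + n, with T constant (= c) once m < 2.
    if m < 2:
        return c
    return 2 * T(m // 2, n, c) + n

for m, n in [(8, 5), (64, 3), (1024, 7)]:
    print(m, n, T(m, n), n * (m - 1) + m)    # the two values match for m a power of two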

Resources