Counting binary digits algorithm || prove the big-O

Is the big-O O(log n)?
How can I prove it using a summation?
//Input: A positive decimal integer n
//Output: The number of binary digits in n’s binary representation
count ← 1
while n > 1 do
    count ← count + 1
    n ← ⌊n/2⌋
return count
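For reference, here is the same pseudocode as a small runnable Python sketch (the function name and the bit_length() comparison are my own additions, used only as a sanity check):

def count_binary_digits(n):
    count = 1
    while n > 1:
        count += 1
        n //= 2          # floor division mirrors n ← ⌊n/2⌋
    return count

# sanity check against Python's built-in bit_length()
for n in range(1, 100):
    assert count_binary_digits(n) == n.bit_length()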

n shrinks term by term like this:
n + n/2 + n/4 + n/8 + ... + 8 + 4 + 2 + 1
(take n to be a power of two for simplicity). The sum of this series is 2^(log n + 1) - 1 = 2n - 1, but the sum itself is not what matters here.
The loop body executes once per term of the series, so the number of terms in the series is the number of loop iterations, which is the time complexity of the algorithm.
The series has ⌊log n⌋ + 1 terms, so the algorithm's time complexity is O(log n).
Example:
n = 8; the corresponding series is:
8 + 4 + 2 + 1 = 15 = 2^4 - 1 = 2^(log 8 + 1) - 1
Here the number of terms in the series is log 8 + 1 = 4,
therefore
time complexity = O(number of iterations)
                = O(number of terms in the series)
                = O(log n)

Just look at the returned value count. Since it will be close to log N, you can state that the time complexity is O(log N).
Just think about it in reverse (mathematically):
1 -> 2 -> 4 -> 8 -> ... -> N (the x-th value, counting from 0)
2^x = N
x = log N

You have to consider that larger numbers use more memory and take more processing per operation. Note: time complexity only cares about what happens for the largest values, not the smaller ones.
The number of iterations is log2(n); however, the cost of the n > 1 test and the n ← ⌊n/2⌋ update is proportional to the size of the integer, i.e. 128-bit costs twice as much as 64-bit, and 1024-bit 16 times as much. So the cost of each operation is log(m), where m is the maximum unsigned value the number of bits can store. If you assume the representation wastes at most a fixed number of bits (e.g. no more than 64), this means the total cost is O(log(n) * log(n)), or O(log(n)^2).
If you used Java's BigInteger, that's what the time complexity would be.
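To make that concrete, here is a rough model of my own (not from the answer): charge each halving step a cost equal to the current bit length of n, the way an arbitrary-precision division would. The total comes out close to log2(n)^2 / 2, i.e. O(log(n)^2).

import math

def bit_operation_cost(n):
    cost = 0
    while n > 1:
        cost += n.bit_length()   # one halving step touches every bit of n
        n //= 2
    return cost

n = 10**30
print(bit_operation_cost(n))     # about 5000
print(math.log2(n) ** 2 / 2)     # about 4966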

The big-O complexity can easily be calculated by counting the number of times the while loop runs, since the operations inside the loop take constant time. In this case N varies as
N, N/2, N/4, N/8, ..., 1
Just counting the number of terms in this series gives the number of times the loop runs. So,
N / 2^p = 1 (where p is the number of times the loop runs)
which gives p = log N, and thus the complexity is O(log N).

Related

If stack operations are constant time O(1), what is the time complexity of this algorithm?

BinaryConversion:
We are given a positive integer n as input, with the output being the binary representation of n on a stack.
What would the time complexity here be? I'm thinking it's O(n), as the while loop halves n every time, meaning the iterations for an input of size 'n' decrease to n/2, n/4, n/8, etc.
Applying the sum of a geometric series with a = n and r = 1/2, we get 2n.
Any help appreciated! I'm still a noob.
create empty stack S
while n > 0 do
    push (n mod 2) onto S
    n = floor(n / 2)
end while
return S
If the loop were
while n > 0:
    for i in range(n):
        # some action
    n = n // 2
then the complexity would have been O(n + n/2 + n/4 + ... + 1) ~ O(n), and your answer would have been correct.
while n > 0 do
    # some action
    n = n / 2
Here, however, the complexity is just the number of times the loop runs, since the amount of work done in each iteration is O(1). So the answer is O(log(n)) (since n is halved each time).
The number of iterations is the number of times you have to divide n by 2 to get 0, which is O(log n).
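Putting that into runnable form, here is a minimal Python sketch of the BinaryConversion pseudocode, using a list as the stack; the iteration counter is my own addition, there only to show that the loop runs about ⌊log2(n)⌋ + 1 times.

def binary_conversion(n):
    stack = []
    iterations = 0
    while n > 0:
        stack.append(n % 2)   # push (n mod 2) onto S
        n //= 2               # n = floor(n / 2)
        iterations += 1
    return stack, iterations

bits, iters = binary_conversion(37)
print(bits)    # [1, 0, 1, 0, 0, 1]; popping the stack yields 100101
print(iters)   # 6, i.e. floor(log2(37)) + 1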

Why is √n the optimal value for m in a jump search?

I am currently learning about searching algorithms and I came across jump search, which has a time complexity of O(√n). Why is √n the optimal value for m (jump size) in the jump search algorithm and how does it affect the time complexity?
Let m be the jump size and n be the number of elements.
In the worst case, the maximum number of elements you have to check is the maximum number of jumps (n/m - 1) plus the number of elements between jumps (m), and the time you take is approximately proportional to the total number of elements you check.
The goal in choosing m, therefore, is to minimize: (n/m)+m-1.
The derivative with respect to m is 1 - (n/m^2), and the minimum occurs where the derivative is 0:
1 - (n/m^2) = 0
n/m^2 = 1
n = m^2
m = √n
Assuming the block size is k, the worst case requires roughly n/k iterations to find the right block and k more iterations to find the element within that block.
To minimize n/k + k, where n is a constant, we can use differentiation (see Matt's answer) or the AM-GM inequality to get:
n/k + k >= 2√n
We can clearly see that 2√n is the minimal number of iterations, and it is attained when k = √n.
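For illustration, here is a hedged Python sketch of jump search with the block size set to √n as derived above (the function name and test data are mine, not from the question):

import math

def jump_search(arr, target):
    n = len(arr)
    if n == 0:
        return -1
    m = max(1, math.isqrt(n))            # block size ≈ √n
    prev, curr = 0, m
    # jump ahead one block at a time until we reach the block that could contain target
    while curr < n and arr[curr - 1] < target:
        prev, curr = curr, curr + m
    # linear scan inside that block: at most m comparisons
    for i in range(prev, min(curr, n)):
        if arr[i] == target:
            return i
    return -1

print(jump_search([1, 3, 5, 7, 9, 11, 13], 9))   # 4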

Cost analysis for implementing a stack as an array?

Please refer to answer 2 of the material above. I can follow the text up to that point, but I always seem to lose the thread when there's no illustration, maybe because I'm new to math notation.
I understand the cost of an expensive operation (doubling the array when the stack is full):
1 + 2 + 4 + 8 + ... + 2^i, where i is the index in that sequence. So index 0 = 1, index 1 = 2, index 2 = 4 and index 3 = 8.
I can see the sequence of costly operations, but I get confused by the following explanation.
Now, in any sequence of n operations, the total cost for resizing is
1 + 2 + 4 + 8 + ... + 2^i for some 2^i < n (if all operations are pushes
then 2^i will be the largest power of 2 less than n). This sum is at
most 2n − 1. Adding in the additional cost of n for
inserting/removing, we get a total cost < 3n, and so our amortised
cost per operation is < 3
I don't understand that explanation.
the total cost for resizing is 1 + 2 + 4 + 8 + ... + 2^i for some 2^i < n
What does "for some 2^i < n" mean?
Does it say that the number of operations n will always be larger than 2^i? And does n stand for the number of operations or for the length of the array?
And the following I just don't follow:
if all operations are pushes then 2^i will be the largest power of 2
less than n. This sum is at most 2n − 1.
Could someone illustrate this please?
n is the largest stack size; the underlying array size at that moment is the least power of two with 2^(i+1) >= n, so the last expansion operation takes 2^i < n time.
For example, if the stack reaches size n = 11, the last expansion grows the array from 8 to 16, moving 8 items.
About the second question: it is the sum of a geometric progression,
1 + 2 + 4 + 8 + ... + 2^i = 2^(i+1) - 1
which is at most 2n - 1 precisely because 2^i < n.
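If it helps to see the numbers, here is a small simulation of my own of n pushes into a doubling array, charging 1 per push and "capacity" copies per resize; the total always comes out below 3n, so the amortised cost per push is below 3.

def total_cost(n):
    capacity, size, cost = 1, 0, 0
    for _ in range(n):
        if size == capacity:     # array is full: double it, paying capacity copies
            cost += capacity
            capacity *= 2
        cost += 1                # the push itself
        size += 1
    return cost

for n in (11, 100, 1000):
    print(n, total_cost(n), total_cost(n) / n)   # the ratio stays below 3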

Big-O complexity of a piece of code

I have a question about complexity from an algorithm design course. A piece of code is given and I should calculate its complexity.
The pseudo-code is:
for (i = 1; i <= n; i++) {
    j = i
    do {
        k = j;
        j = j / 2;
    } while (k is even);
}
I tried this algorithm for some numbers and got different results. For example, if n = 6 the algorithm's inner-loop counts are as below:
i = 1 -> executes 1 time
i = 2 -> executes 2 times
i = 3 -> executes 1 time
i = 4 -> executes 3 times
i = 5 -> executes 1 time
i = 6 -> executes 2 times
It doesn't follow a regular pattern; how should I calculate this?
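For what it's worth, here is a quick Python reproduction of those counts (assuming j / 2 means integer division):

def inner_iterations(n):
    counts = []
    for i in range(1, n + 1):
        j, passes = i, 0
        while True:              # do ... while (k is even)
            k = j
            j //= 2
            passes += 1
            if k % 2 != 0:
                break
        counts.append(passes)
    return counts

print(inner_iterations(6))        # [1, 2, 1, 3, 1, 2], matching the table above
print(sum(inner_iterations(6)))   # 10 inner-loop iterations in total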
The upper bound given by the other answers is actually too high. This algorithm has an O(n) runtime, which is a tighter upper bound than O(n log n).
Proof: Let's count how many total iterations the inner loop will perform.
The outer loop runs n times. The inner loop runs at least once for each of those.
For even i, the inner loop runs at least twice. This happens n/2 times.
For i divisible by 4, the inner loop runs at least three times. This happens n/4 times.
For i divisible by 8, the inner loop runs at least four times. This happens n/8 times.
...
So the total number of times the inner loop runs is:
n + n/2 + n/4 + n/8 + n/16 + ... <= 2n
The total number of inner-loop iterations is therefore between n and 2n, i.e. it's Θ(n).
You always assume you get the worst scenario at each level.
Now, you iterate over an array with N elements, so we start with O(N) already.
Now let's say your i always equals X and X is always even (remember, worst case every time). How many times do you need to divide X by 2 to get 1? (That is the only way the division stops for even numbers: when they reach 1.)
In other words, we need to solve the equation
X / 2^k = 1, which gives X = 2^k and k = log2(X).
This makes our algorithm take O(n * log2(X)) steps, which can easily be written as O(n log n).
For such a loop, we cannot count the inner loop and the outer loop separately: the variables are tied together!
We thus have to count all the steps.
In fact, for each iteration of the outer loop (on i), we will have
1 + v_2(i) steps
where v_2 is the 2-adic valuation (see for example http://planetmath.org/padicvaluation), i.e. the exponent of 2 in the prime factorization of i.
So adding up the steps for all i, we get a total number of steps of:
n_steps = \sum_{i=1}^{n} (1 + v_2(i))
= n + v_2(n!) // since v_2(i) + v_2(j) = v_2(i*j)
= 2n - s_2(n) // from Legendre formula (see http://en.wikipedia.org/wiki/Legendre%27s_formula with `p = 2`)
We then see that the number of steps is exactly:
n_steps = 2n - s_2(n)
As s_2(n) is the sum of the digits of n in base 2, it is negligible compared to n (it is at most log_2(n), since each base-2 digit is 0 or 1 and there are at most log_2(n) of them).
So the complexity of your algorithm is equivalent to n:
n_steps = O(n)
which is not the O(n log n) stated in many other solutions, but a smaller quantity!
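A quick empirical check of that formula (my own, not part of the answer), computing the step count as the sum of 1 + v_2(i) and comparing it with 2n - s_2(n):

def v2(i):
    # 2-adic valuation: the exponent of 2 in i
    v = 0
    while i % 2 == 0:
        i //= 2
        v += 1
    return v

for n in (6, 100, 1000):
    steps = sum(1 + v2(i) for i in range(1, n + 1))
    assert steps == 2 * n - bin(n).count("1")   # s_2(n) = number of 1-bits of n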
Let's start with the worst case:
if you keep dividing by 2 (integer division), you don't stop until you get down to 1, which makes the number of steps depend on the bit width, something you find using the base-2 logarithm. So the inner part is log n.
The outer part is obviously n, so N log N total.
The do loop halves j until k becomes odd. k is initially a copy of j, which is a copy of i, so the do loop runs 1 + (the power of 2 that divides i) times:
i = 1 is odd, so it makes 1 pass through the do loop,
i = 2 divides by 2 once, so 1 + 1,
i = 4 divides by 2 twice, so 1 + 2, etc.
That makes at most 1 + log(i) do-loop executions (logarithm in base 2).
The for loop iterates i from 1 through n, so the upper bound is n times (1 + log n), which is O(n log n).

Computational complexity in Big-O notation

How can I specify computational complexity of an algorithm in Big-O notation whose execution follows the following pattern, according to input sizes?
Input size: 4
Number of execution steps: 4 + 3 + 2 + 1 = 10
Input size: 5
Number of execution steps: 5 + 4 + 3 + 2 + 1 = 15
Input size: 6
Number of execution steps: 6 + 5 + 4 + 3 + 2 + 1 = 21
Technically, you have not given enough information to determine the complexity. On the information so far, it could take 21 steps for all input sizes greater than 5. In that case, it would be O(1) regardless of the behavior for sizes 4 and 5. Complexity is about limiting behavior for large input sizes, not about how something behaves for three extremely small sizes.
If the number of steps for size n is, in general, the sum of the numbers from 1 through n then the formula for the number of steps is indeed n(n+1)/2 and it is O(n^2).
For any series which follows the pattern 1 + 2 + 3 + 4 + ... + n, the sum of the series is n(n+1)/2, which is O(n^2).
In your case, the number of steps will never reach n^2, because at each level we do one step less (n + (n-1) + (n-2) + ...); however, big-O notation specifies an upper bound on the growth rate of a function, so your big-O will be O(n^2): the step count is more than n but no more than n^2.
I think it should be O(N), because it's just like being given an array of size N and iterating over it (if we don't count the cost of the additions).
By the way, I think (n + 1)*n/2 is just the result of the sum, not the algorithm's complexity.
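To make the pattern concrete, here is a tiny loop of my own that performs exactly n + (n-1) + ... + 1 steps for an input of size n, matching the counts in the question; the work inside the loop is a placeholder.

def count_steps(n):
    steps = 0
    for i in range(n, 0, -1):     # n, n-1, ..., 1
        for _ in range(i):
            steps += 1            # one execution step
    return steps

for n in (4, 5, 6):
    print(n, count_steps(n), n * (n + 1) // 2)   # 10, 15, 21 as in the question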
