Show the O-notation for the following code fragment - big-o

The question is to show the O-notation for the following code fragment (show each line):
for x=1 to n
{
    y=1
    while y < n
        y=y+y
}
The O notation for the first line is n, I believe.
I am unsure what the O notation is for the while loop and why?
The answer given is O(n log₂ n).
Can someone please explain this to me? Thanks!

Let's assume n=64 (i.e. 2^6); then the while loop will run 6 times, with y taking the following values at the end of each pass:
2
4
8
16
32
64
If you repeat this for n=256 (i.e. 2^8), you will find that there are 8 iterations. In more general terms, the number of executions of the inner loop for a given value of n will be log₂(n). As the outer loop runs n times, the total execution time is O(n log₂ n).
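If you want to check this empirically, here is a minimal Python sketch (my own, not from the original post) that counts how many times the inner loop body runs in total:

def count_iterations(n):
    """Count the total number of inner while-loop passes."""
    total = 0
    for x in range(1, n + 1):   # outer loop: runs n times
        y = 1
        while y < n:            # inner loop: y doubles, so about log2(n) passes
            y = y + y
            total += 1
    return total

print(count_iterations(64))    # 384  = 64 * 6
print(count_iterations(256))   # 2048 = 256 * 8

The printed counts match the hand-worked values above: 6 passes per outer iteration for n=64 and 8 for n=256.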

In the inner loop y takes the values 1, 2, 4...
y is multiplied by 2 each time so it is of the form 2^k
This loop stops at the largest value k such that 2^k < n, i.e. k < log_2(n).
There will be no more than log_2(n) iterations of this loop.
With x ranging from 1 to n, the total number of iterations is therefore at most n·log_2(n).

Related

If stack operations are constant time O(1), what is the time complexity of this algorithm?

BinaryConversion:
We are inputting a positive integer n with the output being a binary representation of n on a stack.
What would the time complexity here be? I'm thinking it's O(n), as the while loop halves n every time, meaning the remaining value for an input of size 'n' decreases to n/2, n/4, n/8, etc.
Applying the sum of a geometric series with a = n and r = 1/2, we get 2n.
Any help appreciated ! I'm still a noob.
create empty stack S
while n > 0 do
    push (n mod 2) onto S
    n = floor(n / 2)
end while
return S
If the loop was
while n > 0:
    for i in range(n):
        # some action
    n = n // 2
Then the complexity would have been O(n + n/2 + n/4 + ... + 1) ~ O(n), and your answer would have been correct.
while n > 0 do
    # some action
    n = n / 2
Here, however, the complexity is the number of times the outer loop runs, since the amount of work done in each iteration is O(1). So the answer is O(log(n)) (since n is halved each time).
The number of iterations is the number of times you have to divide n by 2 to get 0, which is O(log n).
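As a sanity check, here is a minimal Python version of the pseudocode (the function name and the use of a list as the stack are my own) that also counts the loop iterations:

def binary_conversion(n):
    """Push the binary digits of n onto a stack and count loop iterations."""
    stack = []
    iterations = 0
    while n > 0:
        stack.append(n % 2)   # push (n mod 2) onto S
        n = n // 2            # n = floor(n / 2)
        iterations += 1
    return stack, iterations

for n in (8, 1024, 10**6):
    _, it = binary_conversion(n)
    print(n, it)   # 8 -> 4, 1024 -> 11, 10**6 -> 20 iterations

The iteration count grows with log2(n), not with n, which is the O(log n) result described above.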

How to analyze the complexity of this algorithm? In terms of T(n) [closed]

Analyze the complexity of the following algorithm. Let T(n) be the running time of the algorithm; determine a function f(n) such that T(n) = O(f(n)). Also, say whether T(n) = Θ(f(n)) holds. The answers must be motivated.
I have never done this kind of exercise before.
Could someone explain what I have to analyze and how I can do it?
j=1,k=0;
while j<=n do
    for l=1 to n-j do
        k=k+j;
    end for
    j=j*4;
end while
Thank you.
Step 1
Following on from the comments, the value of j can be written as a power of 4. Therefore the code can be re-written in the following way:
i=0,k=0;                   // new loop variable i
while (j=pow(4,i)) <= n do // equivalent loop condition which calculates j
    for l=1 to n-j do
        k=k+j;
    end for
    i=i+1;                 // equivalent to j=j*4
end while
The value of i increases as 0, 1, 2, 3, 4 ..., and the value of j as 1, 4, 16, 64, 256 ... (i.e. powers of 4).
Step 2
What is the maximum value of i, i.e. how many times does the outer loop run? Inverting the equivalent loop condition:
pow(4,i) <= n // loop condition inequality
--> i <= log4(n) // take logarithm base-4 of both sides
--> max(i) = floor(log4(n)) // round down
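If you want to compute max(i) in code without worrying about floating-point logarithms, a small Python sketch (mine, not part of the original answer) could look like this:

def max_i(n):
    # Largest i such that 4**i <= n, i.e. floor(log4(n)); assumes n >= 1.
    i = 0
    while 4 ** (i + 1) <= n:
        i += 1
    return i

print(max_i(1), max_i(4), max_i(63), max_i(64))   # 0 1 2 3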
Now that the maximum value of i is known, it's time to re-write the code again:
i=0,k=0;
m=floor(log4(n))   // maximum value of i
while i<=m do      // equivalent loop condition in terms of i only
    j=pow(4,i)     // value of j for each i
    for l=1 to n-j do
        k=k+j;
    end for
    i=i+1;
end while
Step 3
You have correctly deduced that the inner loop runs for n - j times for every outer loop. This can be summed over all values of j to give the total time complexity:
T(n) = \sum_{j ≤ n} (n - j)
     = \sum_{i=0}^{m} (n - pow(4,i))        // using the results of steps 1) and 2)
     = (m+1)*n  -  \sum_{i=0}^{m} pow(4,i)  // separate the sum into two parts
       \_____/     \____________________/
          A                  B
The term A is obviously O(n log n), because m=floor(log4(n)). What about B?
Step 4
B is a geometric series, for which there is a standard formula (source – Wikipedia): a + a*r + ... + a*r^("n"-1) = a*(r^"n" - 1)/(r - 1).
Substituting the relevant numbers "a" = 1, "n" = m+1, "r" = 4:
B = (pow(4,m+1) - 1) / (4 - 1)
  = (pow(4, floor(log4(n))+1) - 1) / 3
If a number is rounded down (floor), the result is always greater than the original value minus 1. Therefore m can be asymptotically written as:
m = log4(n) + O(1)
--> B = (pow(4, log4(n) + O(1)) - 1) / 3
      = (pow(4, O(1)) * n - 1) / 3
         ------------
         this is O(1)
      = O(n)
Step 5
A = O(n log n), B = O(n), so asymptotically A overshadows B.
The total time complexity is O(n log n).
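A quick empirical check of this result (my own sketch, not part of the answer): count the inner-loop iterations directly and compare them with n·log4(n):

import math

def inner_iterations(n):
    # Total number of times k=k+j executes for a given n.
    total, j = 0, 1
    while j <= n:
        total += max(n - j, 0)   # the inner for-loop runs n-j times
        j *= 4
    return total

for n in (10**3, 10**4, 10**5):
    print(n, inner_iterations(n), round(n * math.log(n, 4)))

The measured counts grow in proportion to n·log4(n), consistent with the O(n log n) bound.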
Consider the number of times each instruction is executed depending on n (the variable input). Let's call that the cost of each instruction. Typically, some parts of the algorithm run significantly more often than other parts. Also typically, this "significantly greater number" asymptotically dominates all the others, meaning that as n grows larger, the cost of all other instructions becomes negligible. Once you understand that, you simply have to figure out the cost of the dominant instruction, or at least what it is proportional to.
In your example, two instructions are potentially costly: let k=k+j; have cost x, and j=j*4; have cost y (i.e. the number of times each runs).
j=1,k=0;               // Negligible
while j<=n do
    for l=1 to n-j do
        k=k+j;         // Run x times
    end for
    j=j*4;             // Run y times
end while
Being tied to only one loop, y is easier to determine. The outer loop runs for j from 1 to n, with j growing exponentially: its value follows the sequence [1, 4, 16, 64, ...] (the i-th term is 4^i, starting at 0). That simply means that y is proportional to the logarithm of n (of any base, because all logarithms are proportional). So y = O(log n).
Now for x: it accumulates over the y passes of the outer loop, since it is tied to an inner loop. For each pass of the outer loop, this inner loop runs for l from 1 to n-j, with l growing linearly (it's a for loop). That means it simply runs n-j times, or n - 4^i, with i being the index of the current outer-loop pass, starting at 0.
Since y = O(log n), x is the sum of n - 4^i for i from 0 up to about log4(n), or
(n - 4^0) + (n - 4^1) + (n - 4^2) + ... =
(log4(n)+1) * n - (4^(log4(n)+1) - 1)/(4-1) =
O(n log n) - O(n) =
O(n log n)
And here is your answer: x = O(n log n), which dominates all other costs, so the total complexity of the algorithm is O(n log n).
You need to calculate how many times each line will execute.
j=1,k=0;              // 1
while j<=n do         // floor(log4(n)) + 2 (j takes the values 1, 4, 16, ..., i.e. it quadruples)
    for l=1 to n-j do // about n times per pass of the outer loop
        k=k+j;        // n-j times per pass of the outer loop
    end for
    j=j*4;            // floor(log4(n)) + 1
end while
total complexity [add execution time of all lines]
= 1 + (log4(n)+2) + \sum_{i=0}^{log4(n)} (n - 4^i + 1) + \sum_{i=0}^{log4(n)} (n - 4^i) + (log4(n)+1)
Take the maximum term of the above and skip the constant factors, and you are left with n·log4(n).
The total runtime complexity will be O(n log n).
Looks like a homework question, but to give you a hint: the complexity can be estimated by the number of nested loops. One loop means O(n), two loops O(n^2), and three loops O(n^3).
This only goes for nested loops:
while () {
    while () {
        while () {
        }
    }
}
This is O(n^3).
But...
while () {
}
while() {
}
This is still O(n), because the loops do not run inside each other, and each will stop after n iterations.
EDIT
The correct answer should be O(n*log(n)), because the number of iterations of the inner for-loop depends on the value of j, which can be different on every pass of the outer loop.

counting binary digit algorithm || prove big oh

Is the big-O O(log n)?
How can I prove it using a summation?
//Input: A positive decimal integer n
//Output: The number of binary digits in n's binary representation
count ← 1
while n > 1 do
    count ← count + 1
    n ← ⌊n/2⌋
return count
The value of n reduces like this:
n + n/2 + n/4 + n/8 + ... + 8 + 4 + 2 + 1
The sum of the above series is roughly 2n (exactly 2^(log₂(n)+1) - 1 when n is a power of 2), but what matters here is the number of terms, not their sum.
Now come to the question: the number of times the loop executes is the number of items that appear in the above series, which gives the time complexity of the algorithm.
The number of items in the above series is about log₂(n). So the algorithm's time complexity is O(log n).
Example:
n = 8; the corresponding series:
8 + 4 + 2 + 1 = 15 = 2^4 - 1 ≈ 2^(log₂ n + 1)
Here, the number of items in the series = 4,
therefore,
time complexity = O(number of iterations)
= O(number of elements in series)
= O(log n)
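You can confirm the count against ⌊log₂(n)⌋ + 1 with a short Python sketch (mine, not from the answer):

def binary_digit_count(n):
    # Direct translation of the pseudocode from the question.
    count = 1
    while n > 1:
        count += 1
        n = n // 2    # floor division
    return count

for n in (1, 8, 1000, 2**20):
    print(n, binary_digit_count(n), n.bit_length())   # the two counts agree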
Just check the returned value count. Since it is close to log₂(N), you can state that the time complexity is O(log N).
Just think of it in reverse (mathematically):
1 -> 2 -> 4 -> 8 -> ... -> N (the x-th value, using 0-based indexing)
2^x = N
x = log₂(N)
You have to consider that larger numbers use more memory and take more processing for each operation. Note: time complexity only cares about what happens for the largest values, not the smaller ones.
The number of iterations is log2(n); however, the cost of the n > 0 and n = n / 2 operations is proportional to the size of the integer, i.e. 128-bit costs twice as much as 64-bit, and 1024-bit is 16 times greater. So the cost of each operation is log(m), where m is the maximum unsigned value the number of bits can store. If you assume only a fixed number of wasted bits (e.g. no more than 64), each operation costs O(log(n)), which means the total cost is O(log(n) * log(n)), or O(log(n)^2).
If you used Java's BigInteger, that's what the time complexity would be.
Big-O complexity can easily be calculated by counting the number of times the while loop runs, as the operations inside the while loop take constant time. In this case N varies as:
N, N/2, N/4, N/8, ..., 1
Just counting the number of terms in the above series gives the number of times the loop runs. So,
N/2^p = 1 (p is the number of times the loop runs)
This gives p = log₂N, thus the complexity is O(log N).

Order of growth of following function

Can someone tell me if the following ranking of functions by order of growth is correct? (ordered from fastest-growing to slowest)
2^n, n^2, (n lg n, lg(n!)), n^(1/lg n), 4
Most of these are right. However, look at
n^(1/lg n).
Notice that, for nonzero n, we have n = 2^(lg n), so
n^(1/lg n) = (2^(lg n))^(1/lg n) = 2^((lg n)/(lg n)) = 2^1 = 2.
So while 2 and 4 do grow at the same (nonexistent) rate, n^(1/lg n) is always smaller than 4 for any nonzero n.
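A quick numerical check of that identity (a throwaway Python sketch):

import math

for n in (2, 10, 1000, 10**9):
    print(n, n ** (1 / math.log2(n)))   # prints 2.0 every time (up to rounding)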

Big-O complexity of a piece of code

I have a question in algorithm design about complexity. In this question a piece of code is given and I should calculate this code's complexity.
The pseudo-code is:
for (i=1; i<=n; i++) {
    j = i
    do {
        k = j;
        j = j / 2;
    } while (k is even);
}
I tried this algorithm for some numbers and got different results. For example, if n = 6, the output of this algorithm is as below:
i = 1 -> executes 1 time
i = 2 -> executes 2 times
i = 3 -> executes 1 time
i = 4 -> executes 3 times
i = 5 -> executes 1 time
i = 6 -> executes 2 times
It doesn't follow a regular pattern; how should I calculate this?
The upper bound given by the other answers is actually too high. This algorithm has a O(n) runtime, which is a tighter upper bound than O(n*logn).
Proof: Let's count how many total iterations the inner loop will perform.
The outer loop runs n times. The inner loop runs at least once for each of those.
For even i, the inner loop runs at least twice. This happens n/2 times.
For i divisible by 4, the inner loop runs at least three times. This happens n/4 times.
For i divisible by 8, the inner loop runs at least four times. This happens n/8 times.
...
So the total amount of times the inner loop runs is:
n + n/2 + n/4 + n/8 + n/16 + ... <= 2n
The total amount of inner loop iterations is between n and 2n, i.e. it's Θ(n).
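Here is a small simulation (my own, assuming integer division in the pseudocode) that counts the total number of inner-loop passes and compares it with 2n:

def total_inner_passes(n):
    # Total do-while passes over all i from 1 to n.
    total = 0
    for i in range(1, n + 1):
        j = i
        while True:            # do { k=j; j=j/2 } while (k is even)
            k = j
            j = j // 2
            total += 1
            if k % 2 != 0:
                break
    return total

for n in (10, 1000, 10**5):
    print(n, total_inner_passes(n), 2 * n)   # the total stays between n and 2n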
You always assume you get the worst scenario at each level.
Now, you iterate over N values of i, so we start with O(N) already.
Now let's say your i always equals X and X is always even (remember, worst case every time). How many times do you need to divide X by 2 to get to 1? (which, for a power of two, is when the division stops).
In other words, we need to solve the equation
X/2^k = 1, which gives X = 2^k and k = log₂(X).
This makes our algorithm take O(n·log₂(X)) steps, which can easily be written as O(n log n).
For such a loop, we cannot separate the count of the inner loop from that of the outer loop -> the variables are tied together!
We thus have to count all steps.
In fact, for each iteration of the outer loop (on i), we will have
1 + v_2(i) steps
where v_2 is the 2-adic valuation (see for example : http://planetmath.org/padicvaluation) which corresponds to the power of 2 in the decomposition in prime factor of i.
So if we add steps for all i we get a total number of steps of :
n_steps = \sum_{i=1}^{n} (1 + v_2(i))
= n + v_2(n!) // since v_2(i) + v_2(j) = v_2(i*j)
= 2n - s_2(n) // from Legendre formula (see http://en.wikipedia.org/wiki/Legendre%27s_formula with `p = 2`)
We then see that the number of steps is exactly :
n_steps = 2n - s_2(n)
As s_2(n) is the sum of the digits of n in base 2, it is negligible compared to n (it is at most log_2(n), since each base-2 digit is 0 or 1 and there are at most log_2(n) digits).
So the complexity of your algorithm is equivalent to n:
n_steps = O(n)
which is not the O(n log(n)) stated in many other solutions, but a smaller quantity!
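The exact formula is easy to check numerically; here is a short Python sketch (mine, not part of the answer) comparing the simulated step count with 2n - s_2(n):

def steps(n):
    # Simulate the algorithm and count every pass of the do-while loop.
    total = 0
    for i in range(1, n + 1):
        j = i
        while True:
            k, j = j, j // 2
            total += 1
            if k % 2 != 0:     # stop once k is odd
                break
    return total

for n in (10, 100, 12345):
    s2 = bin(n).count("1")           # s_2(n): sum of the binary digits of n
    print(n, steps(n), 2 * n - s2)   # the two counts match exactly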
Let's start with the worst case:
if you keep dividing by 2 (integer division), you don't stop until you
get to 1, basically making the number of steps dependent on the bit-width,
something you find out using the base-2 logarithm. So the inner part is log n.
The outer part is obviously n, so N log N total.
The do loop halves j until k becomes odd. k is initially a copy of j, which is a copy of i, so the do loop runs 1 + (the power of 2 that divides i) times:
i=1 is odd, so it makes 1 pass through do loop,
i=2 divides by 2 once, so 1+1,
i=4 divides twice by 2, so 1+2, etc.
That makes at most 1+log(i) do executions (logarithm with base 2).
The for loop iterates i from 1 through n, so the upper bound is n times (1+log n), which is O(n log n).

Resources