I have this algorithm and I want to analyse its time complexity, but I am not sure I am correct:
n = int(input("Enter Target Value: "))
x = 1
count = 0
while n != x:
    if n % 2 == 0:
        n //= 2   # integer division, so n stays an int
        count += 1
    else:
        n -= 1
        count += 1
print(count)
For the while loop, the n //= 2 branch will have a time complexity of O(log n) and the n -= 1 branch will be O(n), so O(log n) + O(n) will still be O(log n) for the whole loop. The three initializations will be O(1), so the runtime complexity of this algorithm will be O(log n). Am I correct? Thanks
The outcome is correct, but the reasoning is not. The n -= 1 statement will not be executed O(n) times, and O(log n) + O(n) is actually O(n), not O(log n).
Imagine n in its binary representation. Then the n -= 1 operation will be executed just as many times as there are 1-bits in that representation. The n //= 2 statement will be executed just as many times as there are bits in the representation, regardless of whether they are 0 or 1. This is because a 1-bit is first converted to a 0-bit by the n -= 1 operation, and then the next iteration picks up that same bit (which has become 0) for the n //= 2 operation, which actually drops that bit.
So in the worst case, all the significant bits of n are 1-bits. Then you have O(log n) executions of the n -= 1 operation and O(log n) executions of n //= 2. In total the loop makes O(2 log n) = O(log n) iterations, which gives this algorithm an O(log n) time complexity.
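Here's a minimal sketch (my own check, assuming Python 3) that verifies the bit-counting argument empirically: the loop runs exactly (number of bits - 1) + (number of 1-bits - 1) times, which is O(log n).

def steps(n):
    # Count iterations of the loop from the question (n reduced to 1).
    count = 0
    while n != 1:
        if n % 2 == 0:
            n //= 2   # drops the lowest bit, which is 0
        else:
            n -= 1    # turns the lowest 1-bit into a 0-bit
        count += 1
    return count

for n in range(1, 10000):
    halvings = n.bit_length() - 1       # one per bit position above the lowest
    decrements = bin(n).count("1") - 1  # one per 1-bit except the final 1
    assert steps(n) == halvings + decrements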
Normally when I see a loop, I assume it is O(n). But this was given as an interview question and seems too easy.
let a = 1;
while (a < n) {
    a = a * 2;
}
Am I oversimplifying? It appears to simply compute the powers of 2.
Never assume that a loop is always O(n). Loops that iterate over every element of a sequential container (like an array) normally have a time complexity of O(n), but it ultimately depends on the condition of the loop and how the loop iterates. In your case, a doubles in value until it becomes greater than or equal to n. If you double n a few times, this is what you see:
 n    # iterations
------------------
 1         0
 2         1
 4         2
 8         3
16         4
32         5
64         6
As you can see, the number of iterations is proportional to log(n), making the time complexity O(log n).
Looks like a grows exponentially with the number of iterations, so the loop will likely complete in O(log(n)).
I haven't done all the math, but a is not LINEAR wrt n...
But if you put in a loop counter, that counter would approximate log-base-2(n)
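A quick sketch of exactly that (mine, in Python, assuming the same loop): add a counter and compare it against log2(n).

import math

for n in (2, 3, 10, 100, 1024, 10**6):
    a, count = 1, 0
    while a < n:
        a = a * 2
        count += 1
    print(n, count, math.ceil(math.log2(n)))  # the two counts agree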
BinaryConversion:
We are given a positive integer n as input; the output is the binary representation of n on a stack.
What would the time complexity here be? I'm thinking it's O(n), as the while loop halves n every time, meaning the work for an input of size n decreases to n/2, n/4, n/8, etc.
Applying the sum of a geometric series with a = n and r = 1/2, we get 2n.
Any help appreciated! I'm still a noob.
create empty stack S
while n > 0 do
    push (n mod 2) onto S
    n = floor(n / 2)
end while
return S
If the loop were
while n > 0:
    for i in range(n):
        # some action
    n = n // 2
then the complexity would have been O(n + n/2 + n/4 + ... + 1) ~ O(n), and your answer would have been correct.
while n > 0 do
    # some action
    n = floor(n / 2)
end while
Here, however, the complexity is just the number of times the loop runs, since the amount of work done in each iteration is O(1). So the answer is O(log(n)), since n is halved each time.
The number of iterations is the number of times you have to divide n by 2 to get 0, which is O(log n).
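For reference, here's a direct Python translation of the pseudocode (a sketch; a list stands in for the stack S, and the function name is mine):

def to_binary(n):
    stack = []                 # list used as the stack S
    while n > 0:
        stack.append(n % 2)    # push (n mod 2) onto S
        n //= 2                # n = floor(n / 2)
    return stack

bits = to_binary(100)
print(bits)   # [0, 0, 1, 0, 0, 1, 1]: 7 pushes for the 7-bit number 0b1100100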
What is the worst case time complexity of the following:
def fun(n):
    count = 0
    i = n
    while i > 0:
        for j in range(0, i):
            count += 1
        i //= 2   # integer division; i /= 2 would make i a float and break range()
    return count
Worst case time complexity if n is a really big integer. My guess: O(log(n) * n).
Here's how I came to my conclusion.
while i > 0:
    ...
    i //= 2
This will run log(n) times because i is halved every time through.
for j in range(0, i):
This will run n times first, n/2 times the second time, and so on. So the total running time for this line is n + n/2 + n/4 + ... + 1 = 2n - 1.
count += 1
This is a cheap operation, so it is O(1).
Thus the total running time would seem to be O(log(n)) * O(2n - 1), simplifying to O(log(n) * n). Careful, though: the sum n + n/2 + n/4 + ... + 1 = 2n - 1 already adds up the inner loop's work across all log(n) iterations of the outer loop, so multiplying by log(n) again double-counts. The tight bound is O(2n - 1) = O(n).
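A quick empirical check (my sketch, assuming Python 3) that the returned count grows like 2n rather than n * log(n):

def fun(n):
    count = 0
    i = n
    while i > 0:
        for j in range(0, i):
            count += 1
        i //= 2
    return count

for n in (10**3, 10**4, 10**5):
    total = fun(n)
    print(n, total, total / n)  # the ratio stays near 2, i.e. Theta(n)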
This algorithm appears to have a quadratic efficiency. (Why?)
To analyze complexity, you just have to count the number of operations.
Here, there are two nested loops:
for i in 0 to n - 1 do
    for j in 0 to n - 1 do
        operation() // Do something
    done
done
With i = 0, operation will be run for all j in [0, n-1], that is, n times. Then i is incremented, and this repeats until i > n-1. So operation is run n*n times in the worst case.
So in the end, this code does n^2 operations, which is why it has quadratic efficiency.
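Here's a minimal Python sketch of the same nested loop (the function name is mine), confirming the operation count is exactly n*n:

def count_ops(n):
    ops = 0
    for i in range(n):       # outer loop: n iterations
        for j in range(n):   # inner loop: n iterations for each i
            ops += 1         # stands in for operation()
    return ops

print(count_ops(100))        # 10000 == 100 * 100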
This is a trick question: it appears to be both n^2 and 2^n at the same time, depending on whether the if statement on line j is executed.
Our prof and various materials say Summation(n) = n(n+1)/2 and hence is theta(n^2). But intuitively, we just need one loop to find the sum of the first n terms! So it has to be theta(n). What am I missing here?!
All of these answers misunderstand the problem, just like the original question: the point is not to measure the runtime complexity of an algorithm for summing integers; it's about how to reason about the complexity of an algorithm that takes i steps during each pass, for i in 1..n. Consider insertion sort: on step i, the output list is i elements long, so inserting one member of the original list takes up to i steps. What is the complexity of insertion sort? It's the sum of all of those steps, i.e. the sum of i for i in 1..n. That sum is n(n+1)/2, which has an n^2 in it, so insertion sort is O(n^2).
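To make that concrete, here's a sketch of insertion sort in Python (the function name is mine) instrumented to count element shifts; on reverse-sorted input, the worst case, it performs exactly n(n-1)/2 shifts, matching the summation argument.

def insertion_sort_steps(values):
    a = list(values)
    steps = 0
    for i in range(1, len(a)):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]   # shift one element to the right
            j -= 1
            steps += 1
        a[j + 1] = key
    return a, steps

sorted_a, steps = insertion_sort_steps(range(10, 0, -1))  # reverse-sorted input
print(steps)   # 45 == 10*9/2, the sum of 1..9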
The running time of this code is Θ(1) (assuming addition/subtraction and multiplication are constant-time operations):
result = n*(n + 1)/2 // This statement executes once
The running time of the following pseudocode, which is what you described, is indeed Θ(n):
result = 0
for i from 1 up to n:
    result = result + i // This statement executes exactly n times
Here is another way to compute it which has a running time of Θ(n²):
result = 0
for i from 1 up to n:
    for j from i up to n:
        result = result + 1 // This statement executes exactly n*(n + 1)/2 times
All three of those code blocks compute the natural numbers' sum from 1 to n.
This Θ(n²) loop is probably the type you are being asked to analyse. Whenever you have a loop of the form:
for i from 1 up to n:
    for j from i up to n:
        // Some statements that run in constant time
You have a running time complexity of Θ(n²), because those statements execute exactly summation(n) times.
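Here's a sketch putting the three versions side by side in Python (function names are mine); all three agree on the result, but their running times are Θ(1), Θ(n), and Θ(n²) respectively:

def sum_formula(n):              # Theta(1): a single arithmetic expression
    return n * (n + 1) // 2

def sum_loop(n):                 # Theta(n): the body executes n times
    result = 0
    for i in range(1, n + 1):
        result = result + i
    return result

def sum_nested(n):               # Theta(n^2): the body executes n*(n+1)/2 times
    result = 0
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            result = result + 1
    return result

assert sum_formula(100) == sum_loop(100) == sum_nested(100) == 5050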
I think the problem is that you're incorrectly assuming that the summation formula has time complexity theta(n^2).
The formula has an n^2 in it, but it doesn't require a number of computations or an amount of time proportional to n^2.
Summing everything up to n in a loop would be theta(n), as you say, because you would have to iterate through the loop n times.
However, calculating the result of the equation n(n+1)/2 would just be theta(1), as it's a single calculation that is performed once, regardless of how big n is.
Summation(n) being n(n+1)/2 refers to the sum of the numbers from 1 to n, which is a mathematical formula that can be calculated without a loop, in O(1) time. If you iterate over an array to sum all its values, that is an O(n) algorithm.