What is the worst-case time complexity for this code? - algorithm

I had a quiz in my class and didn't do so well on it. I'm looking to find out if someone can explain to me what I did wrong here - our professor is overwhelmed with office hours as we moved online so I thought I'd post here.
def functionA(n):
    level = n
    total = 0
    while level > 1:
        for i in range(0, n):
            level = level // 2
            total = total + i
    return total
My answer: The above function is O(log n) because the for loop divides the level variable in half on each iteration.
I got 5/10 points but it doesn't really have an explanation as to what was wrong or correct about it. What did I get wrong with this and why?
Image for proof that the quiz was already graded and returned. Just trying to figure it out.

The problem is this line:
for i in range(0,n):
Since level is just a copy of n, and n itself never changes, this inner loop always runs exactly n times, so it is O(n).
Once we've established that the inner loop is O(n), we need to figure out the complexity of the outer loop.
On the first iteration of the outer loop, the inner loop repeatedly executes level = level // 2. After about log2(n) of those n halvings, level has already dropped to 1 or below, so the while condition fails as soon as the inner loop finishes: the outer loop is guaranteed to run exactly once, contributing only constant overhead.
We're left with an overall complexity of O(n): one full pass of the inner for loop.
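A quick way to check this empirically (my addition, not part of the quiz) is to instrument the function with counters for the two loops:

```python
# Count how many times each loop runs for a few values of n.
def functionA_counted(n):
    level = n
    total = 0
    outer = 0  # iterations of the while loop
    steps = 0  # executions of the inner loop body
    while level > 1:
        outer += 1
        for i in range(0, n):
            level = level // 2
            total = total + i
            steps += 1
    return outer, steps

for n in [4, 16, 1024]:
    print(n, functionA_counted(n))  # outer is always 1, steps is always n
```

For every n > 1 the while loop runs once and the inner body runs n times, matching the O(n) answer.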

Related

Time Complexity with increasing queue size

I've searched Google and StackOverflow for the past half hour or so, and, while I've found a lot of interesting information, I have yet to find the solution to my problem. I think this should be fairly simple to answer; I just don't know how to answer it.
I'm using the following loop in a program I'm working on (this is pseudocode of my algorithm, obviously):
while Q is not empty
    x = Q.dequeue()
    for (i = 1 to N)
        if x.s[i] = 0
            y = Combine(x, i)
            Q.add(G[y])
In this loop:
Q is a queue
x is an object that contains an integer array s
N is an integer representing the problem size
y is a new instance of the same type of object as x
Combine is a method that returns a new object of the same type as x
Combine only contains a For loop, so it has a worst-case time complexity of O(N)
My question is how I should be trying to calculate the time complexity of this loop. Because I'm potentially adding N-1 items to the queue on each loop, I'm assuming that it will increase the complexity beyond just the simple O(N) from a normal loop.
If you need any more information to help me with this, please let me know and I'll get you what I can when I can.

Time complexity Big Oh using Summations

Few questions about deriving expressions to find the runtime using summations.
The Big-Oh time complexity is already given, so using summations to find the complexity is what I am focused on.
So I know that there are 2 instructions that must run before the first iteration of the loop, and 2 instructions (the comparison and the increment of i) that run on every iteration. There is only 1 instruction inside the for loop body. Deriving from that I get 2n + 3; dropping the constant 3 and the coefficient 2, the time complexity is O(n).
Here I know how to start writing the summation, but the increment in the for loop is still a little confusing for me.
Here is what I have:
So I know my summation time complexity derivation is wrong.
Any ideas as to where I'm going wrong?
Thank you
Just use n / 2 on the top and i = 1 on the bottom:
The reason it's i = 1 and not i = 0 is that the loop condition is i < n, so you need to account for being off by one: in the summation, i runs all the way up to n / 2, not one short of it.
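The summation images from the original post did not survive here, but assuming the loop being summed is something like `for (i = 0; i < n; i += 2)` (my guess, based on the answer's bounds), the suggested summation would be:

```latex
\sum_{i=1}^{n/2} c \;=\; c \cdot \frac{n}{2} \;=\; O(n)
```

That is, a constant amount of work c done n/2 times, which is still linear.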

Hare and tortoise algorithm. Why is the collision spot and head of linked list at the same distance of the start of the loop?

In the Cracking the Coding interview book, it is given the following explanation about the how to find where the beginning of a loop is in a linked list:
There is a FastRunner which advances two steps, and a SlowRunner which advances one step (per unit of time).
k -> number of nodes before loop
K = mod(k, LOOP_SIZE) -> number of steps the fast runner is inside the loop when the slow runner just hit the first node in the loop.
FastRunner is LOOP_SIZE - K steps behind SlowRunner
FastRunner catches up to SlowRunner at a rate of 1 step per unit of time.
They meet after LOOP_SIZE - K steps; at this point they will be K steps before the head of the loop.
In order to find the start of the loop, the author says that:
Since K = mod(k, LOOP_SIZE), we can say that k = K + M * LOOP_SIZE (for some integer M). Here is where I get lost: she says it is correct to say the collision point is k nodes from the loop start.
I understand that the collision point is K nodes from the loop start, but why is it also k nodes? How does she find that K = k?
Well, K is the number of nodes inside the loop that the fast runner "jumped" through by the time the slow one entered the loop. The fast one moves 2x faster than the slow one, so by that point it has jumped through 2*k nodes in total (because it takes the slow one k steps to reach the loop). The number of jumps it made inside the loop is therefore 2*k - k = k (all its jumps, minus the jumps it made to reach the loop). Since positions inside the loop wrap around, k jumps and K = mod(k, LOOP_SIZE) jumps land on the same node, which is why walking k nodes and walking K nodes from the loop head are equivalent.
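The two-phase argument can be sketched as runnable code; the list shape and names below are my own example, not from the book:

```python
class Node:
    def __init__(self, val):
        self.val = val
        self.next = None

def find_loop_start(head):
    slow = fast = head
    # Phase 1: advance slow by 1 and fast by 2 until they collide.
    while fast and fast.next:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            break
    else:
        return None  # fast ran off the end: no loop
    # Phase 2: the collision point is K steps short of the loop head,
    # and the list head is k = K + M*LOOP_SIZE steps short of it, so
    # advancing both pointers one step at a time makes them meet
    # exactly at the start of the loop.
    slow = head
    while slow is not fast:
        slow = slow.next
        fast = fast.next
    return slow

# Build 1 -> 2 -> 3 -> 4 -> 5 -> (back to 3); loop starts at node 3.
nodes = [Node(v) for v in range(1, 6)]
for a, b in zip(nodes, nodes[1:]):
    a.next = b
nodes[-1].next = nodes[2]
print(find_loop_start(nodes[0]).val)  # 3
```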

Time complexity of an algorithm with nested loops

I've read a ton of questions on here already about finding the time complexity of different algorithms which I THINK I understand until I go to apply it to the outer loop of an algorithm which states:
for i=1 step i←n∗i while i < n^4 do
I can post the full algorithm if necessary but I'd prefer not to as it is for a piece of homework that i'd like to otherwise complete by myself if possible.
I just can't figure out what the complexity of that loop is. I think it's just 4 unless n = 1, but I am blank as to how to express that formally. It's that or I'm totally wrong anyway!
Is anyone able to help with this?
Translating your loop into C (just to make sure I understand your pseudo code):
for (i = 1; i < n*n*n*n; i = i * n) {
    ...
}
(Note that in real C, ^ is bitwise XOR, not exponentiation, hence n*n*n*n.)
The key question is: what is i after the xth iteration? Answer: n^x (you can prove it by induction, since i starts at n^0 = 1 and each iteration multiplies it by n). So when x is 4, i reaches n^4 and the loop exits. It runs in 4 iterations regardless of n (for n > 1), so it is constant time.
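A quick empirical check of that claim (my own sketch, not from the question):

```python
# Count iterations of `for (i = 1; i < n^4; i = i * n)` for several n > 1.
def count_iterations(n):
    count = 0
    i = 1
    while i < n ** 4:
        i = i * n
        count += 1
    return count

for n in [2, 3, 10, 1000]:
    print(n, count_iterations(n))  # always 4 for n > 1
```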

Find the span of this algorithm

I have the following algorithms:
SUM-ARRAY(A,B,C):
    n = A.length
    grain-size = 1
    r = ceil(n/grain-size)
    for k = 0 to r-1:
        spawn ADD-S(A,B,C, k*grain-size+1, min((k+1)*grain-size, n))
    sync

ADD-S(A,B,C,i,j):
    for k = i to j:
        C[k] = A[k] + B[k]
Okay and I have the following discussion with my group:
We want to find the span of this algorithm; some of us think it is theta(1) and others theta(n).
Is there any help out there?
Span, or critical path length, can be defined as "the theoretically fastest time the work could be executed on a computer with an infinite number of processors".
In your case, all spawned iterations are independent, so they can all execute simultaneously if there are enough processors. Each iteration processes a piece of work of size grain-size. So the span is Theta(grain-size), which can come out to Theta(1), Theta(n), or even Theta(sqrt(n)), depending on how you set the grain size. For a grain size of 1, as in your code, the span is Theta(1), i.e. independent of the number of iterations.
I assume you want the complexity of the algorithm.
So, you're essentially adding two arrays A and B into another array C, and you're doing this by spawning r sub-processes, each of which adds a portion of length grain_size of A and B.
I reason like this:
ADD-S adds m = grain_size elements of two arrays, and so its complexity is Theta(m)
SUM-ARRAY spawns r sub-processes, each of which does ADD-S, and so its complexity is Theta(r*m) = Theta(n)
So, my answer is Theta(n).
When grain-size is 1, the span is O(n): the for loop performs n spawn operations, and even though all children can run in parallel, the parent thread still spends O(1) on each spawn, serially.
Source: This is from the solution of CS course Algorithms.
