I've read a ton of questions on here already about finding the time complexity of different algorithms, which I THINK I understand, until I go to apply it to the outer loop of an algorithm which states:
for i=1 step i←n∗i while i < n^4 do
I can post the full algorithm if necessary, but I'd prefer not to, as it is for a piece of homework that I'd like to otherwise complete by myself if possible.
I just can't figure out what the complexity of that loop is. I think it's just 4 unless n=1, but I am blank as to how to express that formally. It's that or I'm totally wrong anyway!
Is anyone able to help with this?
Translating your loop into C (just to make sure I understand your pseudocode; note that ^ is XOR in C, so n^4 has to be spelled out):
for (i = 1; i < n*n*n*n; i = i * n) {
    ...
}
The key question is: what is i after the xth iteration? Answer: n^x (you can prove this by induction). So when x is 4, i reaches n^4 and the loop exits. So it runs in 4 iterations and is constant time (for n > 1; for n = 1 the condition 1 < 1 fails immediately, so it runs zero times).
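If you want to sanity-check that empirically, here is a quick Python sketch (my own, not part of the original exercise) that just counts the iterations:

# Counts how many times the loop body runs, assuming the pseudocode means:
# start at i = 1, multiply i by n each step, continue while i < n**4.
def count_iterations(n):
    i = 1
    count = 0
    while i < n**4:
        i *= n
        count += 1
    return count

for n in [1, 2, 10, 1000]:
    print(n, count_iterations(n))
# Prints 0 for n = 1 and 4 for every n > 1 -- a constant number of iterations.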
I had a quiz in my class and didn't do so well on it. I'm looking to find out if someone can explain to me what I did wrong here - our professor is overwhelmed with office hours as we moved online so I thought I'd post here.
def functionA(n):
    level = n
    total = 0
    while level > 1:
        for i in range(0, n):
            level = level // 2
            total = total + i
    return total
My answer: The above function is O(log n) because the for loop divides the level variable in half on each iteration.
I got 5/10 points but it doesn't really have an explanation as to what was wrong or correct about it. What did I get wrong with this and why?
Image for proof that the quiz was already graded and returned. Just trying to figure it out.
The problem is this line:
for i in range(0,n):
Since level and n are two totally independent variables (level merely starts out as a copy of n) and n itself never changes, this loop always runs n times, i.e. it is O(n).
Once we've established that the inner loop is O(n), we need to figure out the complexity of the outer loop.
On the first iteration of the outer loop, the inner loop repeatedly sets level = level // 2. Since this halving quickly drives level below 1, the outer while loop is guaranteed to terminate after its first pass, so it contributes only a constant number of iterations.
We're left with an overall complexity of O(n): a single pass of the outer loop containing one full O(n) run of the inner for loop.
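To make the counting concrete, here is an instrumented sketch (mine, with an added inner_steps counter that is not in the original quiz code):

def functionA_counted(n):
    level = n
    total = 0
    inner_steps = 0          # counts how many times the inner loop body runs
    while level > 1:
        for i in range(0, n):
            level = level // 2
            total = total + i
            inner_steps += 1
    return total, inner_steps

for n in [4, 16, 1000]:
    print(n, functionA_counted(n)[1])
# For every n > 1 the inner loop body runs exactly n times:
# the outer while loop only ever completes a single pass.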
So I have been reading the Cracking the Coding Interview book, and there is a problem where we have a function that does O(n*n*n!) work. The book then says this can be expressed as O((n+2)!). It says that, similarly, O(n*n!) can be expressed as O((n+1)!). I looked at all the rules of permutations and did not find any way to logically get there. My first step was: cool, I have O(n^2 * n!), now what? I don't know what steps to take next.
You already know (I think) that n! = 1*2*3*...*n.
So n*n*n! = 1*2*3*...*n*n*n.
As n gets really big, adding 1 or 2 to a factor has a vanishingly small relative effect. I'm no specialist, but what matters with O() is either the power of n or, in our case, the argument of the (...)! expression.
Since n <= n+1 and n <= n+2, we can bound 1*2*3*...*n*n*n from above by 1*2*3*...*n*(n+1)*(n+2) = (n+2)!.
So, eventually, O(n*n*n!) can be expressed as O((n+2)!).
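If you want a quick numeric sanity check of that bound (my own sketch, not from the book):

import math

# Verifies n*n*n! <= (n+1)*(n+2)*n! = (n+2)! for small n, which is the
# inequality that lets us write O(n*n*n!) as O((n+2)!).
for n in range(1, 11):
    lhs = n * n * math.factorial(n)
    rhs = math.factorial(n + 2)
    assert lhs <= rhs
    print(n, lhs, rhs)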
To calculate x! you multiply x*(x-1)*(x-2)*...*1, a product of x factors. Multiplying by one extra factor of x gives x*x! = x*x*(x-1)*...*1, which is at most (x+1)*x*(x-1)*...*1 = (x+1)!, so x*x! is O((x+1)!). Similarly, x*x*x! is at most (x+2)*(x+1)*x*(x-1)*...*1 = (x+2)!, hence the O((n+2)!) bound.
I have this pseudocode and I want to analyze the time complexity of this algorithm, but I have no idea how to approach it.
Proc Sort(A,l,r)
    if (r-l+1 < 4)
        then Quicksort(A,l,r)
    else
        Sort(A,l,r-3)
        Sort(A,l+3,r)
So I know that if the array has fewer than 4 elements we pass it to Quicksort; otherwise we recurse on the left part (without the last 3 elements) and the right part (without the first 3), and keep doing this until we reach arrays of size n < 4. The problem is that I can't get to the recurrence, and I am not sure whether this algorithm does better in a worst-case analysis.
Thank you for your help
Well, whether or not this sorting function actually works, the way to figure out the run time is pretty easy here:
You write down a mathematical expression for the run time as a function of array size:
T(N) = ???
Well, if N < 4, then we call Quicksort. Now, we don't have that function's definition available, but regardless, since it will only ever be called with input of size at most 3, we can treat its run time as a constant and just call it C.
And if N >= 4, then we call Sort twice, on arrays that are 3 smaller.
So:
T(N) = 2 * T(N-3) + c for N >= 4 (plus the constant base case T(N) = C for N < 4).
Now at this point that should give you all the information you need for runtime analysis. Why don't you try it from here and get back to us when you get stuck?
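In case it helps to see the growth pattern before solving the recurrence by hand, here is a small Python sketch (mine, with arbitrary constants) that just evaluates it numerically:

# T(N) = C for N < 4 (the Quicksort base case), T(N) = 2*T(N-3) + c for N >= 4.
# The constants c = C = 1 are placeholders; only the growth rate matters.
def T(N):
    if N < 4:
        return 1
    return 2 * T(N - 3) + 1

for N in [10, 20, 30, 40]:
    print(N, T(N))
# T roughly doubles each time N grows by 3, which hints at Theta(2^(N/3)).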
This is an algorithm that I have been given to find the run time of. I know how to do this fairly well, except that he has not explained what to do for while loops, and he said he is not going to. I also do not know what the begin/end syntax is about; he doesn't normally have that after a for loop, so since it is there now I am confused.
procedure f(n)
    s=0;
    for i=1 to 5n do
    begin
        j=4i;
        while j<i^3 do
        begin
            s=s+i-j
            j=5j
        end
    end
Looking at the inner loop, we can see that the while loop starts j at 4i and ends after k iterations, where k is the smallest integer such that 4*i*5^k >= i^3, that is k ≈ log_5(i^2/4). So your run-time is:

sum_{i=1}^{5n} log_5(i^2/4) = 2 * sum_{i=1}^{5n} log_5(i) - 5n*log_5(4)
                            = 2 * log_5((5n)!) - O(n)
                            = O(n log n)

In the last step we used Stirling's approximation, which gives log((5n)!) = Theta(n log n).
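If you want to check that bound empirically, here is a small Python sketch (mine, not part of the exercise) that counts the inner-loop iterations and compares them to n log n:

import math

# Counts how many times the inner while-loop body of f(n) executes.
def inner_iterations(n):
    count = 0
    for i in range(1, 5 * n + 1):
        j = 4 * i
        while j < i ** 3:
            j = 5 * j
            count += 1
    return count

for n in [10, 100, 1000, 10000]:
    print(n, inner_iterations(n), inner_iterations(n) / (n * math.log(n)))
# The count grows on the order of n*log(n): the printed ratio stays in a
# narrow band, consistent with the bound derived above.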
A few questions about deriving expressions to find the runtime using summations.
The Big-Oh time complexity is already given, so using summations to find the complexity is what I am focused on.
So I know that there are 2 instructions that must be run before the first iteration of the loop, and 2 instructions (the comparison and the increment of i) that have to be run for each iteration after the first. Of course, there is only 1 instruction within the for loop. So, deriving, I have 2n + 3; dropping the 3 and the 2, I know the time complexity is O(n).
Here I know how to start writing the summation, but the increment in the for loop is still a little confusing for me.
Here is what I have:
So I know my summation time complexity derivation is wrong.
Any ideas as to where I'm going wrong?
Thank you
Just use n / 2 on the top and i = 1 on the bottom:
The reason it's i = 1 and not i = 0 is that the for loop's condition is i < n, so you need to account for being one off: in the summation, i increases all the way up to n / 2 rather than stopping one short.
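The original loop isn't reproduced above, but based on the answer (condition i < n, summation running from i = 1 to n/2) it is presumably a loop that steps by 2. A purely hypothetical Python version, just to show how the summation counts its iterations:

# Hypothetical reconstruction: a loop with condition i < n and a step of 2,
# which is what makes the corresponding summation run from 1 to n/2.
def loop_count(n):
    count = 0
    i = 0
    while i < n:      # the for loop's condition
        count += 1    # one unit of work per iteration
        i += 2        # assumed step of 2
    return count

for n in [10, 11, 100]:
    print(n, loop_count(n), n // 2)
# count equals n/2 (up to rounding), i.e. sum_{i=1}^{n/2} 1 = n/2, which is O(n).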