Running time of a pseudocode algorithm

This is an algorithm that I have been given to find the running time of. I know how to do this fairly well, except that our instructor has not explained what to do for while loops and says he is not going to. I also do not know what the begin and end syntax is about; he doesn't normally have that after a for loop, so now that it is there I am confused.
procedure f(n)
    s = 0;
    for i = 1 to 5n do
    begin
        j = 4i;
        while j < i^3 do
        begin
            s = s + i - j
            j = 5j
        end
    end

(As for the syntax question: begin and end simply delimit a block of statements, like { and } in C.)

Looking at the inner loop: for a fixed i, j starts at 4i and is multiplied by 5 on each iteration, so the while loop ends after k_i iterations, where k_i is the smallest k such that 4i * 5^k >= i^3, that is k_i = ceil(log_5(i^2/4)) (and 0 for i < 3). So your run time is:

T(n) = sum_{i=1}^{5n} log_5(i^2/4) + O(n)
     = 2 sum_{i=1}^{5n} log_5(i) - 5n log_5(4) + O(n)
     = 2 log_5((5n)!) + O(n)
     ≈ 10n log_5(5n) + O(n)
     = Θ(n log n)

In the second-to-last equation we used Stirling's approximation, log((5n)!) ≈ 5n log(5n).
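As a sanity check, the pseudocode can be transcribed into Python (a sketch with a hypothetical helper name) and the inner-loop iterations counted directly; for n = 1 the five values of i contribute 0 + 0 + 1 + 1 + 2 = 4 iterations, matching k_i = ceil(log_5(i^2/4)).

```python
def count_iterations(n):
    """Simulate f(n) from the question, counting while-loop iterations."""
    count = 0
    for i in range(1, 5 * n + 1):
        j = 4 * i
        while j < i ** 3:  # body runs ceil(log_5(i^2/4)) times once i >= 3
            count += 1
            j = 5 * j
    return count
```

Plotting count_iterations(n) for growing n shows growth proportional to n log n, consistent with the derivation above.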

Related

Can loops affect other loops' complexity without being inside it?

Can a loop affect any other loop without being inside it?
Will the total time complexity for the code be changed?
I found this code in the internet as an example for what I'm talking about:
int i, j, val = 0, c = 0;
for (i = n; i > 1; i = i / 2)
    val++;
for (j = 1; j < val; j = j * 2)
    c++;
I thought that the time complexity for this code was n^2, but it seems I was wrong.
Sorry for my English.
Yes, it can, and in your example, it does. The first loop calculates some value used to determine the number of iterations the second loop will execute. The number of iterations is (related closely to) the complexity.
Currently, the first loop does ~log(n) iterations and the second does ~log(log(n)) iterations. If the first loop were changed to do ~n iterations, the second would do ~log(n). If the first were changed to calculate val in a way that made it ~2^n, the second loop would do ~n iterations.
There's nothing special about this other code being a loop: any code before a loop can affect the complexity of the loop.

What is the worst-case time complexity for this code?

I had a quiz in my class and didn't do so well on it. I'm looking to find out if someone can explain to me what I did wrong here - our professor is overwhelmed with office hours as we moved online so I thought I'd post here.
def functionA(n):
    level = n
    total = 0
    while level > 1:
        for i in range(0, n):
            level = level // 2
            total = total + i
    return total
My answer: The above function is O(log n) because the for loop divides the level variable in half on each iteration.
I got 5/10 points but it doesn't really have an explanation as to what was wrong or correct about it. What did I get wrong with this and why?
Image for proof that the quiz was already graded and returned. Just trying to figure it out.
The problem is this line:
for i in range(0,n):
The loop bound is n, not level. Since level and n are independent variables and n never changes inside the function, this loop always runs exactly n times, no matter what happens to level: it is O(n).
Once we've established that the inner loop is O(n), we need to figure out the complexity of the outer loop.
On the first iteration of the outer loop, the inner loop repeatedly sets level = level // 2. After n halvings, level = n // 2^n, which is 0 for any n >= 2, so the condition level > 1 fails and the outer loop is guaranteed to terminate after its first iteration.
We're left with an overall complexity of O(n): one iteration of the outer loop containing n iterations of the inner for loop.
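An instrumented version of the quiz function (hypothetical name, with an added counter) confirms this: the outer while loop executes exactly once for any n >= 2.

```python
def functionA_counted(n):
    """functionA from the quiz, also reporting outer-loop iterations."""
    level = n
    total = 0
    outer_iterations = 0
    while level > 1:
        outer_iterations += 1
        for i in range(0, n):
            level = level // 2  # after n halvings, level = n // 2**n = 0 for n >= 2
            total = total + i
    return total, outer_iterations
```

For n = 16 this returns total = 0 + 1 + ... + 15 = 120 with a single outer iteration.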

Time complexity Big Oh using Summations

Few questions about deriving expressions to find the runtime using summations.
The Big-Oh time complexity is already given, so using summations to find the complexity is what I am focused on.
So I know that there are 2 instructions that must run before the first iteration of the loop, and 2 instructions (the comparison and the increment of i) that run on every iteration after that. There is only 1 instruction within the for loop. Deriving from this I get 2n + 3; dropping the constant 3 and the coefficient 2, the time complexity is O(n).
Here I know how to start writing the summation, but the increment in the for loop is still a little confusing for me.
Here is what I have:
So I know my summation time complexity derivation is wrong.
Any ideas as to where I'm going wrong?
Thank you
Just use n / 2 on the top and i = 1 on the bottom:

sum_{i=1}^{n/2} 1 = n/2 = O(n)

The reason it's i = 1 and not i = 0 is that the for loop's condition is i < n, so you need to account for being one off: in the summation, i increases all the way up to n / 2, not one short of it.

Time complexity of an algorithm with nested loops

I've read a ton of questions on here already about finding the time complexity of different algorithms which I THINK I understand until I go to apply it to the outer loop of an algorithm which states:
for i=1 step i←n∗i while i < n^4 do
I can post the full algorithm if necessary but I'd prefer not to as it is for a piece of homework that i'd like to otherwise complete by myself if possible.
I just can't figure out what the complexity of that loop is. I think it's just 4 iterations (unless n = 1), but I am blank as to how to express that formally. It's that or I'm totally wrong anyway!
Is anyone able to help with this?
Translating your loop into C (just to make sure I understand your pseudo code):
for (i = 1; i < n*n*n*n; i = i * n) {
    ...
}
The key question is: what is i after the xth iteration? Answer: n^x (you can prove it by induction: i starts at 1 = n^0 and is multiplied by n each time). So when x is 4, i is n^4 and the loop exits. So for any n >= 2 it runs exactly 4 iterations, independent of n, and is therefore constant time, O(1).
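A quick empirical check in Python (a sketch, not part of the original homework) shows the count is 4 for every n >= 2, and 0 for the degenerate case n = 1 where i never changes:

```python
def loop_iterations(n):
    """Count iterations of: for i=1 step i <- n*i while i < n^4."""
    count = 0
    i = 1
    while i < n ** 4:  # i takes the values 1, n, n^2, n^3; then n^4 fails the test
        count += 1
        i = i * n
    return count
```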

How is the complexity of this algorithm logarithmic?

Here is the code:
def intToStr(i):
    digits = '0123456789'
    if i == 0:
        return '0'
    result = ''
    while i > 0:
        result = digits[i % 10] + result
        i = i // 10  # integer division (in Python 3, i / 10 would give a float)
    return result
I understand that with logarithmic complexity you are essentially dividing the remaining work by some value on each iteration (for example, binary search). However, in this example we are not really dividing a search space; instead we remove one digit at a time: dividing i by 10 strips off the last decimal digit. I can't really wrap my head around this algorithm... Is there a name for it so I can better understand why it is logarithmic?
The run time of this algorithm is linear with respect to the size (number of bits) of the input, so it's not logarithmic according to the usual definition. However, the run time is logarithmic with respect to the numerical value of the input, so it could be called "pseudo-logarithmic".
See also: Pseudo-polynomial time.
Well lets look at the steps for 123:
i result
123 ""
12 "3" -- after first iteration
1 "23" -- second iteration
0 "123" -- third iteration
For the number 123 we need 3 steps to convert it to a string. Further tests show that the number of iterations always equals the number of digits of the number we want to convert. So for any n we can say that the algorithm needs floor(log10(n) + 1) steps, which is O(log n) in the value of n.
EDIT:
hammar's answer is much more informative on the details of the complexity (one could say he hit the nail right on the head, pun intended), so if you want to know the exact complexity and be able to refer to it correctly, you should look at his answer; otherwise I think "pseudo-logarithmic" fulfils your needs.
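Tying both answers together, here is an instrumented Python 3 sketch (hypothetical name, using // so the division stays integral) showing that the loop count equals the digit count, floor(log10(i)) + 1:

```python
def int_to_str_counted(i):
    """intToStr from the question, also returning the number of loop iterations."""
    digits = '0123456789'
    if i == 0:
        return '0', 0
    result = ''
    steps = 0
    while i > 0:
        result = digits[i % 10] + result
        i = i // 10  # one decimal digit peeled off per iteration
        steps += 1
    return result, steps
```

The step count is logarithmic in the value of i but linear in the length of its representation, which is exactly the "pseudo-logarithmic" distinction made above.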
