How do you find the runtime of loops that affect each other? - runtime

I am not sure the technical term for these kinds of loops (if one even exists), so I will provide an example:
x = 0
i = 1
while (i < n)
    for (j = 1 to n/i)
        x = x + (i - j)
    i *= 2
return (x)
In this example, the while loop is directly changing the number of times the for loop runs, which is throwing me off for some reason
Normally, I would go line by line and see how many times each line runs, but because that count changes here, I tried doing a summation and got a little lost... What would be a step-by-step way to solve this type of problem?
The answer in the notes is O(n), but when I worked it out I got n log(n)
Any help is appreciated, this is review for my final
Also, if you know of any good places to find practice problems of this sort, I would appreciate it!
Thank you

I think the analysis of this code is very similar to the one in this lecture for finding the running time of building a max heap: the straightforward analysis gives n lg n, but when analysed with summations it turns out to be n, just like your problem.
So back to your question: the outer loop runs about log2(n) times (i doubles from 1 until it reaches n), and the inner loop runs n/i times. Since i grows exponentially, we can introduce another variable j that increases by one per iteration of the outer loop, so it can be used in a summation, and change the bounds according to the relation i = 2^j.
The summation is n/2^0 + n/2^1 + n/2^2 + ... + n/2^(log n) = n * (1 + 1/2 + 1/4 + ... + 1/2^(log n)).
The sum in parentheses is a geometric series whose result is (1 - (1/2)^(log n + 1)) / (1 - 1/2), so when n tends to infinity it converges to a constant (2). Hence that sum contributes only a constant factor and doesn't affect the asymptotic complexity of the code, which is O(n).
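If you want to sanity-check this, here is a small Python sketch (my own, not part of the answer) that counts the inner-loop iterations for a few values of n; the count stays within a factor of about 2 of n:
def count_iterations(n):
    # Count how many times the inner loop body runs for a given n.
    count = 0
    i = 1
    while i < n:
        for j in range(1, n // i + 1):  # the inner loop runs roughly n/i times
            count += 1
        i *= 2
    return count

for n in (10, 100, 1000, 10000):
    c = count_iterations(n)
    print(n, c, c / n)  # the ratio c/n stays below about 2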

Related

Big O notation for inverse exponential algorithm

Let's say you had an algorithm with n^(-1/2) complexity, say a scientific algorithm where one sample doesn't give much information, so it takes ages to process, but many samples to cross-reference make it faster. Would you represent that as O(n^(-1/2))? Is that even possible theoretically? TL;DR: can you have an inverse exponential time complexity?
You could define O(n^(-0.5)) using this set:
O(n^(-0.5)) := {g(n) : There exist positive constants c and N such that 0<=g(n)<=cn^(-0.5), for n > N}.
The function n^(-1), for example, belongs to this set.
None of the elements of the set above, however, could be an upper bound on the running time of an algorithm.
Note that for any constant c:
if n > c^2, then n^(-0.5)*c < 1.
This means that your algorithm does less than one simple operation for a large enough input. Since it must execute a natural number of simple operations, it does exactly 0 operations - nothing at all.
A decreasing running time doesn't make sense in practice (even less so if it decreases to zero). If such a thing existed, you could add dummy elements to increase N artificially and make the algorithm faster.
But most algorithms have at least O(N) complexity (whenever every data element influences the final solution); even if not, just the representation of N gets longer and longer, which will eventually increase the running time (like O(log N)).
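As a quick numerical illustration of the argument above (my own sketch, with an arbitrary constant c), once n exceeds c^2 the bound c*n^(-0.5) drops below one whole operation:
import math

c = 100.0  # an arbitrary constant, chosen just for illustration
for n in (10, 10000, 10001, 1000000):
    bound = c * n ** -0.5
    # once n > c^2 (here 10000), the bound allows fewer than one operation
    print(n, bound, math.floor(bound))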

Calculating Time complexity for a transpose matrix

The code is a summarized version of a piece of code trying to transpose a matrix. My task for this is to find the time complexity of this program.
I am only interested in the time complexity of the number of swaps that occur. I found that the outer loop for the swapping runs n-1 times and the inner loop runs (n^2 - n)/2 times in total.
I derived these by substituting n with a number.
When n=4, my inner loop would loop 1+2+3 times
When n=5, my inner loop would loop 1+2+3+4 times
Therefore innerloop=(n^2 - n) / 2
How would I calculate the time complexity of this code?
I saw somewhere online where the person just took the values from his inner-loop count and determined it was O(n) complexity.
[Edit]
Do I need to account for the other loops when calculating the time complexity as well?
Your swap algorithm is a clear (1+2+3+...+n) case, which translates to n×(n+1)/2.
Since constants and lower degree parts of a calculation don't count in Big O notation, this translates to O(n^2).
Taking in the other loops won't change your time complexity, since they are also O(n^2). In terms of Big O, it doesn't make sense to write O(3(n^2)), because constants are dropped.
But what could help you here is that you don't need to bother with the more complex swap for loop. (I don't mean when you're learning. I mean, when you're dealing with real-world problems.)
On a side note, I would recommend reading example 3 of the Big O section in Cracking the Coding Interview (6th edition, 2015) by Gayle Laakmann McDowell, which addresses this exact situation and many similar ones.
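If it helps, here is a short Python sketch (my own, assuming the usual in-place upper-triangle swap loops described in the question) that counts the swaps and matches the (n^2 - n)/2 figure you derived:
def count_transpose_swaps(n):
    # In-place transpose of an n x n matrix: swap each element above the
    # diagonal with its mirror below the diagonal.
    a = [[i * n + j for j in range(n)] for i in range(n)]
    swaps = 0
    for i in range(n - 1):            # outer loop: n-1 iterations
        for j in range(i + 1, n):     # inner loop: n-1-i iterations
            a[i][j], a[j][i] = a[j][i], a[i][j]
            swaps += 1
    return swaps

for n in (4, 5, 10):
    print(n, count_transpose_swaps(n), (n * n - n) // 2)  # the two counts agree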

Big-theta runtime of two linear nested loops, the inner running half as many times for each iteration of the outer.

I'm having a lot of trouble with this algorithms problem. I'm supposed to find a big-theta analysis of the following algorithm:
function example(n):
    int j = n
    for i = 0; i < n; i++:
        doSomethingA()
        for k = 0; k <= j; k++:
            doSomethingB()
        j /= 2
My approach is to break the entire execution of the algorithm into two parts. One part where doSomethingA() and doSomethingB() are both called, and the second part after j becomes 0 and only doSomethingA() is called until the program halts.
With this approach, part 1 occurs for log n iterations of the outer loop and part 2 for n - log n iterations of the outer loop.
The number of times the inner loop runs is halved on each outer iteration, so in total it should run 2n-1 times. So the runtime for part 1 should be (2n-1)*c, where c is a constant. I'm not entirely sure if this is valid.
For part 2, the work inside the loop is always constant, and the loop repeats (n-logn) times.
So we have ((2n-1) + (n - log n))*c
I'm not sure whether the work I've done up to here is correct, nor am I certain how to continue. My intuition tells me this is O(n), but I'm not sure how to rationalize that in big-theta. Beyond that it's possible my entire approach is flawed. How should I attack a question like this? If my approach is valid, how should I complete it?
Thanks.
It is easier to investigate how often doSomethingA and doSomethingB are executed.
For doSomethingA it is clearly n times.
For doSomethingB we get (n+1) + (n/2+1) + (n/4+1) + ... + 1, so roughly 2n + n: the 2n from n + n/2 + n/4 + ..., and the n from summing up the 1s (one for each of the n outer iterations, since k runs from 0 to j inclusive even once j has reached 0).
Altogether we get O(n), and also Theta(n), since you need at least Omega(n), as can be seen from the n times doSomethingA is executed.
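To see those counts concretely, here is a small Python version of the pseudocode (my own sketch; doSomethingA and doSomethingB are replaced by counters):
def example_counts(n):
    # Count the calls instead of doing any real work.
    a_calls = 0
    b_calls = 0
    j = n
    for i in range(n):
        a_calls += 1                  # doSomethingA()
        for k in range(j + 1):        # k = 0 .. j inclusive
            b_calls += 1              # doSomethingB()
        j //= 2
    return a_calls, b_calls

for n in (8, 64, 1024):
    a, b = example_counts(n)
    print(n, a, b)  # a is exactly n, b stays close to 3n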

Finding the worst-case runtime of a function

I seriously do not understand where to go with the following question.
Any hints or steps would be greatly appreciated.
I really want to know what I'm supposed to do, as opposed to just getting the answers.
I understand why we use Big-Oh (worst case), but I can't wrap my mind around the mathematics. How do you calculate the total runtime?
To calculate Big-Oh notation you need to find the worst case running time.
We do this for the three loops:
1) i varies from 1 to n-1, so the loop runs n-1 = O(n) times in total.
2) j goes from 2 to n, 3 to n, ..., n-1 to n, so the loop runs (n-2) + (n-3) + ... + 1 = (n-1)(n-2)/2 = O(n^2) times in total.
3) Now, for each value of j, say a, k goes from 1 to a. This adds up to another O(n) for each loop (as the value of a can go up to n).
Conclusion
Total loops of j are O(n^2), and inside each loop we have an O(n) operation.
Therefore O((n^2)*n) = O(n^3).
Hope I was able to explain how we get the Big-Oh notation.
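The question's code isn't shown here, so the following Python sketch only mirrors the loop bounds as described in the answer (i from 1 to n-1, j from i+1 to n, k from 1 to j); treat that structure as an assumption. It counts the innermost operations and compares them to n^3:
def count_innermost(n):
    # Loop bounds reconstructed from the answer's description (an assumption).
    count = 0
    for i in range(1, n):                 # i = 1 .. n-1
        for j in range(i + 1, n + 1):     # j = i+1 .. n
            for k in range(1, j + 1):     # k = 1 .. j
                count += 1
    return count

for n in (10, 50, 100):
    c = count_innermost(n)
    print(n, c, c / n ** 3)  # the ratio settles near a constant, i.e. the count grows like n^3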

Analyzing best-case run time of copy algorithm

I'm at a loss as to how to calculate the best case run time - omega f(n) - for this algorithm. In particular, I don't understand how there could be a "best case" - this seems like a very straightforward algorithm. Can someone give me some hints / a general approach for solving these types of questions?
for i = 0 to n do
    for j = n to 0 do
        for k = 1 to j-i do
            print (k)
Have a look at this question and its answers: "Still sort of confused about Big O notation". It may help you sort out some of the confusion between worst case, best case, Big O, Big Omega, etc.
As to your specific problem, you are asked to find some (asymptotic) lower bound on the time complexity of the algorithm (you should also clarify whether you mean Big Omega or small omega).
Otherwise you are right, this problem is quite simple. You just have to think about how many times print (k) will be executed for a given n.
You can see that the first loop goes n-times. The second one also n-times.
For the third one, you can see that if i = 1, then j = n-1, so k goes from 1 to n-1-1 = n-2, which makes me wonder whether your example is correct and whether it should be i-j instead.
In any case, the third loop will execute at least n/2 times. It is n/2 because you subtract j-i, and j decreases while i increases, so when they are both around n/2 the result will be 0 and then negative, at which point the innermost loop will not execute anymore.
Therefore print (k) will execute about n*n*n/2 times, which is Omega(n^3), as you can easily verify from the definition.
Just beware, as @Yves points out in his answer, that this all assumes print(k) is done in constant time, which is probably what was meant in your exercise.
In the real world, it wouldn't be true because printing the number also takes time and the time increases with log(n) if print(k) prints in base 2 or base 10 (it would be n for base 1).
Otherwise, in this case the best case is the same as the worst case. There is just one input of size n, so you cannot find "some best instance" of size n and some "worst instance" of size n. There is just the instance of size n.
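If you want to convince yourself of that count, here is a short Python sketch (mine, not from the answer) that tallies how often the innermost body would run instead of actually printing:
def count_prints(n):
    # Count how often print(k) would be executed.
    count = 0
    for i in range(0, n + 1):              # i = 0 .. n
        for j in range(n, -1, -1):         # j = n .. 0
            for k in range(1, j - i + 1):  # k = 1 .. j-i (empty when j <= i)
                count += 1
    return count

for n in (10, 100, 200):
    c = count_prints(n)
    print(n, c, c / n ** 3)  # the ratio approaches a constant, so the count grows like n^3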
I do not see anything in that function that varies between different executions except for n.
Aside from that, as far as I can tell, there is only one case, and that is both the best case and the worst case at the same time.
It's O(n^3), in case you're wondering.
If this is a sadistic exercise, the complexity is probably Theta(n^3 * log(n)), because of the triple loop but also due to the fact that the number of digits to be printed increases with n.
As n is the only factor in the behavior of your algorithm, the best case is n being 0. One initialization, one test, and then done...
Edit:
The best case describes the algorithm's behavior under optimal conditions. You provided us a code snippet which depends on n. What is the best case? n being zero.
Another example: what is the best case of performing a linear search on an array? It is either that the key matches the first element or that the array is empty.
