I seriously do not understand where to go with the following question.
Any hints or steps would be greatly appreciated.
I really want to know what I'm supposed to do, as opposed to just getting the answers.
I understand why we use Big-Oh (worst case), but I can't wrap my mind around the mathematics. How do you calculate the total runtime?
To find the Big-Oh bound you need to find the worst-case running time.
We do this for each of the three loops:
1) i varies from 1 to n-1, so the loop runs n-1 times, which is O(n).
2) j goes from 2 to n, then 3 to n, ..., then n-1 to n, so in total the loop runs (n-2) + (n-3) + ... + 1 = (n-1)(n-2)/2 times, which is O(n^2).
3) Now, for each value of j, say a, k goes from 1 to a. This adds another O(n) per iteration, since a can be as large as n.
Conclusion
The j loop runs O(n^2) times in total, and inside each iteration we do O(n) work.
Therefore the total is O(n^2 * n) = O(n^3).
Hope I was able to explain how we get the Big-Oh bound.
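The loops being analyzed aren't quoted above, so here is a hypothetical Python reconstruction based on the ranges described (i from 1 to n-1, j from i+1 to n, k from 1 to j), with a counter standing in for the loop body:

def count_ops(n):
    # Hypothetical reconstruction of the loops described above:
    # i runs 1..n-1, j runs i+1..n, k runs 1..j.
    ops = 0
    for i in range(1, n):
        for j in range(i + 1, n + 1):
            for k in range(1, j + 1):
                ops += 1  # stands in for the constant-time loop body
    return ops

for n in (10, 20, 40):
    print(n, count_ops(n))  # doubling n multiplies the count by roughly 8

Doubling n multiplies the count by roughly 8, which is what cubic growth looks like.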
Related
I'm having a lot of trouble with this algorithms problem. I'm supposed to find a big-theta analysis of the following algorithm:
function example(n):
    int j = n
    for i = 0; i < n; i++:
        doSomethingA()
        for k = 0; k <= j; k++:
            doSomethingB()
        j /= 2
My approach is to break the entire execution of the algorithm into two parts. One part where doSomethingA() and doSomethingB() are both called, and the second part after j becomes 0 and only doSomethingA() is called until the program halts.
With this approach, part 1 covers the first log n iterations of the outer loop, and part 2 covers the remaining n - log n iterations.
The number of times the inner loop runs is halved on each pass, so in total it should run about 2n - 1 times. So the runtime for part 1 should be (2n - 1) * c for some constant c. I'm not entirely sure if this is valid.
For part 2, the work inside the loop is always constant, and the loop repeats (n-logn) times.
So we have ((2n-1)+(n-logn))*c
I'm not sure whether the work I've done up to here is correct, nor am I certain how to continue. My intuition tells me this is O(n), but I'm not sure how to rationalize that in big-theta. Beyond that it's possible my entire approach is flawed. How should I attack a question like this? If my approach is valid, how should I complete it?
Thanks.
It is easier to investigate how often doSomethingA and doSomethingB are executed.
For doSomethingA it is clearly n times.
For doSomethingB we get (n+1) + (n/2+1) + (n/4+1) + ... + 1, which is roughly 2n + n = 3n: the 2n comes from n + n/2 + n/4 + ..., and the n comes from summing up the 1s, since the inner loop (k = 0 to j) still runs once per outer iteration even after j has reached 0.
All together we get O(n), and also Theta(n), since we need at least Omega(n), as the n executions of doSomethingA show.
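If you want to check that count empirically, here is the pseudocode above translated to Python with call counters added (the counter names are mine):

def example(n):
    # The pseudocode above, instrumented with call counters.
    count_a = count_b = 0
    j = n
    for i in range(n):
        count_a += 1              # doSomethingA()
        for k in range(j + 1):    # k = 0..j inclusive, runs even when j == 0
            count_b += 1          # doSomethingB()
        j //= 2
    return count_a, count_b

for n in (100, 200, 400):
    a, b = example(n)
    print(n, a, b)  # a == n exactly; b stays close to 3n (the 2n + n above)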
My question
I am currently learning time complexity, and I was presented with the following example and asked to find its time complexity:
add = 0
for j = 1 to 12
    add = add + figures[j]
average = add / 12
print average
total = add
for x = 12 to n-1 {
    total = total - figures[x-11] + figures[x+1]
    average = total / 12
    print average
}
My answer: time complexity = n
Explanation/Thought Process:
The first loop is executed 12 times and the second loop is executed n-12 times. I believe the time complexity is n because (n-12) + 12 = n and the loops are not nested.
Teacher's answer: time complexity = 2n
I am not sure why though. Any help in understanding would be great!
Also, is there a method that helps with recognizing other common complexities?
For example:
n
log n
n log n
n^2
n^3
2^n
I had a very interesting discussion with my professor once on this same topic. I worked for a really cool company that did thermal modelling with Beowulf clusters. Very, very computationally intensive. My professor said if I could take a function from 4n^2 to 2n^2 my boss would not be happy. But if I could take it from 4n^2 to 4n he would be happy. I raised my hand, saying if I could cut the time in half my boss would be very, very happy with that. She said no, he'd only be happy with an order of magnitude improvement - linear improvements in the same order of magnitude are irrelevant.
I called my boss that day. He said if I could cut the runtime in half, he'd fly me home, double my pay, and buy me steak dinners for a year.
Sometimes it matters to know what the coefficients are. Other times it doesn't. So you're absolutely right that when dealing with the order of complexity of these code snippets, 4n or 10000n or n are all the same - they are O(n).
By the way, he may be using 2n instead of n because there are two lookups in the second loop, while the first loop (which is run a constant number of times) has only 1 lookup.
So if he says "Time complexity is 2n" he's correct. If he says "Time complexity is O(2n)" he is incorrect - O(...) means something very specific, and does not have coefficients.
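To make the lookup count concrete, here is a rough Python version of the snippet (variable names regularized, and a dummy 0th element so the 1-based indices can be kept) that tallies the array lookups:

def moving_average(figures, n):
    # figures[1..n] holds the data; index 0 is a dummy so the 1-based
    # indices from the pseudocode can be kept as-is.
    lookups = 0
    add = 0
    for j in range(1, 13):      # runs a constant 12 times
        add += figures[j]
        lookups += 1
    print(add / 12)
    total = add
    for x in range(12, n):      # x = 12 .. n-1, about n iterations
        total = total - figures[x - 11] + figures[x + 1]
        lookups += 2            # two lookups per iteration -> about 2n total
        print(total / 12)
    return lookups

data = [0] + list(range(1, 101))  # dummy 0th element, then n = 100 values
print(moving_average(data, 100))  # 12 + 2*88 = 188, on the order of 2n

Whether you report that as 2n or n, it is O(n) either way; the coefficient just depends on what you decide to count as one operation.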
Yes, his answer is correct: the time complexity will be 2n.
In this sample, each for loop runs up to some bound. Whether it runs from 1 to 12 or from 1 to 1000 does not matter; whatever the bound is, the loop ends once it reaches it.
The 2 in his answer, time complexity = 2n, comes from the fact that there are two for loops that are executed by the time this sample finishes running.
Check this article out. It provides some pretty straightforward explanations.
I am not sure the technical term for these kinds of loops (if one even exists), so I will provide an example:
x = 0
i = 1
while (i < n)
    for (j = 1 to n/i)
        x = x + (i - j)
    i *= 2
return(x)
In this example, the while loop directly changes the number of times the for loop runs, which is throwing me off for some reason.
Normally, I would go line by line and see how many times each line runs, but because that count keeps changing, I tried doing a summation and got a little lost... What would be a step-by-step way to solve this type of problem?
The answer in the notes is O(n), but when I did this I got n log(n).
Any help is appreciated, this is review for my final
Also, if you know of any good places to find practice problems of this sort, I would appreciate it!
Thank you
I think the analysis of this code is very similar to the one in this lecture for finding the running time of building a max-heap: the straightforward analysis gives n lg n, but when analysed using summations it turns out to be n, just like in your problem.
So back to your question: the outer loop runs about log2(n) times, since i doubles on every pass (i = 1, 2, 4, ..., stopping once i >= n), and for a given i the inner loop runs n/i times. Because i grows exponentially, introduce another variable j that increases by one per outer iteration, and change the bounds according to the relation i = 2^j, so j runs from 0 to log2(n).
The summation is
n/2^0 + n/2^1 + ... + n/2^log2(n) = n * (1 + 1/2 + 1/4 + ... + 1/2^log2(n))
The sum in parentheses is a geometric series whose result is
2 - 1/2^log2(n) = 2 - 1/n
so when n tends to infinity, it converges to a constant (2). Hence the geometric part contributes only a constant factor and doesn't affect the asymptotic complexity of the code, which is only O(n).
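As a quick empirical check, here is the loop translated to Python with a counter on the inner body:

def count_inner(n):
    # Counts how many times x = x + (i - j) runs in the snippet above.
    count = 0
    i = 1
    while i < n:
        for j in range(1, n // i + 1):  # the inner loop runs about n/i times
            count += 1
        i *= 2
    return count

for n in (1000, 2000, 4000):
    print(n, count_inner(n))  # stays below 2n: linear, not n log n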
for i ← 1 to 2n do means i takes 2n different values, and for each of them, j takes another i different values.
So overall, s ← s + i is executed at most 2n * 2n times, which is O(n^2).
The same reasoning for the second example gives O(n^2 * n^2) = O(n^4).
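The question's code isn't quoted here, so this Python sketch reconstructs the first example the way the answer describes it (for i ← 1 to 2n, for j ← 1 to i, executing s ← s + i) and counts how often the body runs:

def count_first_example(n):
    # Assumed shape of the first example, as the answer describes it:
    # for i = 1..2n, for j = 1..i, execute s = s + i once.
    count = 0
    for i in range(1, 2 * n + 1):
        for j in range(1, i + 1):
            count += 1  # one execution of s = s + i
    return count

for n in (10, 20, 40):
    print(n, count_first_example(n), (2 * n) ** 2)  # count <= (2n)^2, O(n^2)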
I'm at a loss as to how to calculate the best-case running time, Omega(f(n)), for this algorithm. In particular, I don't understand how there could be a "best case"; this seems like a very straightforward algorithm. Can someone give me some hints / a general approach for solving these types of questions?
for i = 0 to n do
    for j = n to 0 do
        for k = 1 to j-i do
            print (k)
Have a look at this question and its answers, Still sort of confused about Big O notation; it may help you sort out some of the confusion between worst case, best case, Big O, Big Omega, etc.
As to your specific problem, you are asked to find an asymptotic lower bound on the time complexity of the algorithm (you should also clarify whether you mean Big Omega or little omega).
Otherwise you are right, this problem is quite simple. You just have to think about how many times print (k) will be executed for a given n.
You can see that the first loop runs n times, and the second one also runs n times.
For the third one: if i = 1, then j = n-1, so k runs from 1 to n-1-1 = n-2, which makes me wonder whether your example is correct and whether it should be i-j instead.
In any case, the third loop will execute at least n/2 times: you compute j-i, and since j decreases while i increases, the difference reaches 0 around n/2 and is negative after that, at which point the innermost loop no longer executes.
Therefore print (k) will execute on the order of n * n * n/2 times, which is Omega(n^3), as you can easily verify from the definition.
Just beware, as @Yves points out in his answer, that this all assumes print(k) runs in constant time, which is probably what was meant in your exercise.
In the real world that wouldn't be true, because printing the number also takes time, and that time grows with log(n) if print(k) prints in base 2 or base 10 (it would be n for base 1).
Otherwise, in this case the best case is the same as the worst case. There is just one input of size n, so you cannot find "some best instance" of size n and some "worst instance" of size n; there is just the one instance of size n.
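Since the loop bounds depend only on n, the count of print(k) executions is the same on every run, which is exactly why the best and worst cases coincide. A small Python counter makes the cubic growth visible (assuming constant-time print, as above):

def count_prints(n):
    # Counts the executions of print(k) in the triple loop above.
    count = 0
    for i in range(0, n + 1):              # i = 0..n
        for j in range(n, -1, -1):         # j = n..0
            for k in range(1, j - i + 1):  # empty when j - i < 1
                count += 1
    return count

for n in (10, 20, 40):
    print(n, count_prints(n), n ** 3 // 6)  # grows on the order of n^3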
I do not see anything in that function that varies between different executions except for n.
Aside from that, as far as I can tell, there is only one case, and it is both the best case and the worst case at the same time.
It's O(n^3), in case you're wondering.
If this is a sadistic exercise, the complexity is probably Theta(n^3 log n), because of the triple loop but also because the number of digits to be printed grows with n.
As n is the only factor in the behavior of your algorithm, the best case is n being 0: one initialization, one test, and then done...
Edit:
The best case describes the algorithm's behavior under optimal conditions. You gave us a code snippet that depends on n. What is the best case? n being zero.
Another example: what is the best case of performing a linear search on an array? Either the key matches the first element, or the array is empty.
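To illustrate that last example, here is a minimal Python linear search; the best case (key in the first slot, or an empty array) costs at most one comparison, while the worst case examines all n elements:

def linear_search(arr, key):
    # Returns the index of key in arr, or -1 if absent.
    for idx, value in enumerate(arr):
        if value == key:
            return idx
    return -1

print(linear_search([7, 3, 5], 7))  # best case: one comparison, constant time
print(linear_search([3, 5, 7], 9))  # worst case: all n elements examined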