Run time to Theta Notation

After looking at the code:
for (i = n - 1; i >= 0; i -= 2)
    for (j = 15; j < 100; j += 3)
        sum += i + j;
I would say that the run time for this in terms of Theta notation is Θ(n^2), since there are two loops and two counters (i and j). Would this be correct?

I’ll put in another plug for that old asymptotic maxim of
“When in doubt, work inside-out!”
Let’s take another look at that code:
for (i = n - 1; i >= 0; i -= 2)
    for (j = 15; j < 100; j += 3)
        sum += i + j;
Let’s begin with the innermost statement, the one that adds into the variable sum. That statement’s runtime is independent of any of the other variables here, so it does Θ(1) work. So let’s rewrite the code like this:
for (i = n - 1; i >= 0; i -= 2)
    for (j = 15; j < 100; j += 3)
        do Θ(1) work
Now, let's look at that inner for loop. Notice that this loop always runs the exact same number of times (29, since j takes the values 15, 18, ..., 99), regardless of the values of the other variables. That means the loop runs a constant number of times and does a constant amount of work per iteration, so the net effect of the whole loop is Θ(1) work. This is shown here:
for (i = n - 1; i >= 0; i -= 2)
    do Θ(1) work
So now we're left with this final loop. Here, the work done depends directly and linearly on n: the loop does Θ(n) iterations (about n/2 of them) and Θ(1) work per iteration, so the total work done is Θ(n).
Notice that it’s not the number of for loops that determines the runtime, but rather what those loops are doing. Counting the number of loops is a good way to get a rough estimate for the runtime, but the approach I illustrated above of working from the inside outward is more precise.
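If you want to convince yourself empirically, here is a quick counting sketch (the counter and bounds are my own, purely for illustration); if the inside-out analysis is right, count/n should hover near a constant (about 14.5, i.e. roughly n/2 outer iterations times 29 inner iterations):
#include <stdio.h>

int main(void) {
    /* Count executions of the innermost statement for growing n. */
    for (long n = 1000; n <= 1000000; n *= 10) {
        long count = 0;
        for (long i = n - 1; i >= 0; i -= 2)    /* ~n/2 iterations */
            for (int j = 15; j < 100; j += 3)   /* always 29 iterations */
                count++;                        /* stands in for sum += i + j */
        printf("n = %8ld  count = %10ld  count/n = %.2f\n",
               n, count, (double)count / n);
    }
    return 0;
}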

Related

If the termination condition in my for-loop is i < n * n, is my running time then O(n^2)?

So I'm just a bit confused about how to correctly interpret the running time of this for-loop:
for (int i = 0; i < n * n; ++i) {}
I know the basics of O-notation; I'm just unsure how to correctly interpret the running time, and I couldn't find similar examples.
The actual problem is a triple-nested for loop, and I know you just multiply the running times of nested loops, but this one trips me up.
Yes.
n multiplied by itself is n^2, and you perform n^2 iterations.
There are no constant factors and no other considerations in this short example.
The complexity is simply O(n^2).
Note that this does not consider any hypothetical operations performed inside the loop. Also note that, taken exactly at face value, the loop does no meaningful work, so one could say it has no algorithmic complexity at all. You would need to present a real example to say more.
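To make the count concrete, here is a small sketch (the counter stands in for real work inside the loop, and the variable names are mine):
#include <stdio.h>

int main(void) {
    for (int n = 10; n <= 1000; n *= 10) {
        long count = 0;
        for (long i = 0; i < (long)n * n; ++i)  /* the loop in question */
            count++;                            /* placeholder for real work */
        printf("n = %4d  iterations = %8ld  n*n = %8ld\n",
               n, count, (long)n * n);
    }
    return 0;
}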

How do you find the runtime of loops that affect each other?

I am not sure the technical term for these kinds of loops (if one even exists), so I will provide an example:
x = 0
i = 1
while (i < n)
    for (j = 1 to n/i)
        x = x + (i - j)
    i *= 2
return x
In this example, the while loop is directly changing the number of times the for loop runs, which is throwing me off for some reason
Normally, I would go line by line and see how many times each line runs, but because that number keeps changing, I tried doing a summation and got a little lost... What would be a step-by-step way to solve this type of problem?
The answer in the notes is O(n), but when I did this I got n log(n).
Any help is appreciated, this is review for my final
Also, if you know of any good places to find practice problems of this sort, I would appreciate it!
Thank you
I think the analysis of this code is very similar to the one in this lecture for finding the running time of building a max heap: the straightforward analysis gives O(n lg n), but when analysed using summations it turns out to be O(n), just like your problem.
So back to your question: the outer loop runs about log2(n) times, since i doubles until it reaches n, and for a given i the inner loop runs n/i times. But since i grows exponentially, we can use another variable j that increases by one per outer iteration, so it can be used in a summation, and change the bounds according to the relation i = 2^j.
The summation is
n/2^0 + n/2^1 + ... + n/2^log2(n) = n * (1 + 1/2 + 1/4 + ... + 1/2^log2(n))
The parenthesised sum is a geometric series whose value is 2 - (1/2)^log2(n), so as n tends to infinity it converges to a constant (2). Hence the summation contributes only a constant factor and doesn't affect the asymptotic complexity of the code, which is O(n).
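You can check this with a short sketch (counter and bounds are mine, for illustration); total/n should stay below 2, matching the geometric series:
#include <stdio.h>

int main(void) {
    for (long n = 1000; n <= 1000000; n *= 10) {
        long total = 0;                    /* counts inner-loop iterations */
        long i = 1;
        while (i < n) {
            for (long j = 1; j <= n / i; j++)
                total++;                   /* stands in for x = x + (i - j) */
            i *= 2;
        }
        printf("n = %8ld  total = %8ld  total/n = %.3f\n",
               n, total, (double)total / n);
    }
    return 0;
}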

How to arrive at log(n) running time of loop?

A loop whose variable is multiplied/divided by a constant factor at each iteration is considered to run in O(log(n)) time.
for example:
for (i = 1; i <= n; i *= 2) {
    // some O(1) operations...
}
How do I calculate or establish that this loop will run log(n) times?
It was previously explained to me that the factor by which the variable is divided/multiplied becomes the base of the log.
I understand running times and their meanings, I just do not understand the math/reasoning that is required to arrive at this particular solution.
How does one mathematically arrive at this solution from knowing that the loop runs from i=1 to i=n multiplying i by 2 each time?
(I am trying to understand this as a basis for understanding how raising the variable to a constant power each iteration leads to a running time of log(log(n)).)
This is how I make sense of it myself: Try to come up with a function f(x) to model your for loop, such that on the xth iteration of your for loop, your iterator i=f(x). For the simple case of for(i=0;i<n;i++) it is easy to see that for every 1 iteration, i goes up by one, so we can say that f(x)=x, on the xth iteration of the loop, i=x. On the 0th iteration i=0, on the first i=1, on the second i=2, and so on.
For the other case, for (i=1;i<n;i*=2), we need to come up with an f(x) that will model the fact that for every xth iteration, i is doubled. Successive doubling can be expressed as powers of 2, so let f(x)=2^x. On the 0th iteration, i=1, and 2^0=1. On the first, i=2, and 2^1=2, on the second, i=4, and 2^2=4, then i=8, 2^3=8, then i=16, and 2^4=16. So we can say that f(x)=2^x accurately models our loop.
To figure out how many steps the loop takes to reach a certain n, solve the equation f(x) = n. Using an example of n = 16, i.e. for (i = 1; i < 16; i *= 2), this becomes 2^x = 16, so x = log2(16) = 4, which agrees with the fact that our loop completes in 4 iterations.
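Here is a quick sketch (my own illustration) that counts the doubling loop's iterations and compares them with the model's prediction x = log2(n):
#include <stdio.h>
#include <math.h>

int main(void) {
    for (long n = 16; n <= 1048576; n *= 16) {
        int steps = 0;
        for (long i = 1; i < n; i *= 2)   /* the doubling loop */
            steps++;
        printf("n = %8ld  steps = %2d  log2(n) = %.1f\n",
               n, steps, log2((double)n));
    }
    return 0;
}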
According to Wikipedia:
the time complexity of an algorithm quantifies the amount of time taken by an algorithm to run as a function of the length of the string representing the input.
So let's assume we have a function T of the input length N: T(N) means it takes T(N) seconds to run the algorithm on an input of size N.
for (i = 1; i <= n; i *= 2)
When we double the input size in your algorithm, we get a recurrence:
T(2*N) = T(N) + C, which means that doubling the input length takes the same time as T(N) plus a constant time for the extra iteration.
There is a theory for solving such recurrences, but you can also use a simple approach such as the Wolfram Alpha solver; for this particular case the solution is T(N) = C * log2(N) + T(1), i.e. logarithmic time.
When you go through a loop via
for (i = 1; i <= n; i += 3)
then T(N) = T(N-3) + C, which leads to a linear time of execution.
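The contrast between the two recurrences is easy to check by counting iterations (a sketch of my own, with illustrative names):
#include <stdio.h>

int main(void) {
    for (long n = 100; n <= 100000; n *= 10) {
        long additive = 0, doubling = 0;
        for (long i = 1; i <= n; i += 3)   /* T(N) = T(N-3) + C -> linear */
            additive++;
        for (long i = 1; i <= n; i *= 2)   /* T(2N) = T(N) + C -> logarithmic */
            doubling++;
        printf("n = %7ld  additive = %7ld  doubling = %3ld\n",
               n, additive, doubling);
    }
    return 0;
}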

Big O Notation - Growth Rate

I am trying to understand if my reasoning is correct:
If I am given the following snippet of code and asked to find its Big O:
for (int i = 3; i < 1000; i++)
    sum++;
I want to say O(n) because we are dealing with one for loop and a sum++ that is iterated, say, n times. But looking at this again, I realise we are not dealing with n at all, since we are given the number of times this loop iterates. Still, in my mind it seems wrong to say this has a Big O of O(1), because the growth is linear rather than constant and depends on the size of the loop (even though that size is a constant). Would I be correct in saying that this is O(n)?
Also, another one that has me thinking around which has a similar setup:
for (int i = 0; i < n * n * n; i++)
    for (int j = 0; j < i; j++)
        sum++;
Now here again I know that when dealing with a nested loop containing an outer and an inner loop, we use the multiplication rule to derive the Big O. If the inner loop were in fact j < n, then I would say the Big O of this snippet is O(n^4); but since it isn't, and the second loop runs its iterations off i rather than n, would it be correct to call this O(n^3)?
I think what is throwing me is the case where 'n' does not appear and we are given a constant or another variable, and all of a sudden I assume n must not be considered for that section of code. On the other hand, part of my reasoning says that despite not seeing an 'n', I should treat the code as though there were one, since the growth rate would be the same regardless of the variable's name.
It works best if you consider the code to always be within a function, where the function's arguments are used to calculate complexity. Thus:
// this is O(1), since it always takes the same time
void doSomething() {
    for (int i = 3; i < 1000; i++)
        sum++;
}
And
// this is O(n^6), since it takes a single argument n
// and, if you plot it, the curve matches t = k * n^6
void doSomethingElse(int n) {
    for (int i = 0; i < n * n * n; i++)
        for (int j = 0; j < i; j++)
            sum++;
}
In the end, the whole point of big-O is to say what the run-times (or memory footprints; if you don't say otherwise, you are referring to run-times) look like as the problem size increases. It matters not what happens on the inside (although you can use that to estimate complexity) - what really matters is what you would measure from the outside.
Looking closer at your second snippet, it's O(n^6) because:
the outer loop runs exactly n^3 times, and the inner loop runs, on average, n^3 / 2 times per outer iteration;
therefore, sum++ runs n^3 * k * n^3 times (with k a constant). In big-O notation, that's O(n^6).
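If you want to verify the count directly, here is a sketch (mine, with tiny n since the count explodes) comparing the number of sum++ executions to the exact value n^3 * (n^3 - 1) / 2, which is Theta(n^6):
#include <stdio.h>

int main(void) {
    for (long n = 2; n <= 10; n += 2) {
        long long sum = 0;
        long long n3 = (long long)n * n * n;
        for (long long i = 0; i < n3; i++)
            for (long long j = 0; j < i; j++)
                sum++;
        /* exact count: 0 + 1 + ... + (n3 - 1) = n3 * (n3 - 1) / 2 */
        printf("n = %2ld  sum = %12lld  n^3*(n^3-1)/2 = %12lld\n",
               n, sum, n3 * (n3 - 1) / 2);
    }
    return 0;
}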
The first is either O(1) or simply a wrong question, just as you understand it.
The second is O(n^6). Try to imagine the size of the inner loop: on the first iteration it is 1, on the second 2, on the ith it is i, and on the last it is n*n*n. So on average it is n*n*n/2, which is O(n*n*n). That, times the outer O(n^3), is O(n^6) overall.
Although the calculations of O() for your question by others may be correct, here is a little more insight that should help clarify the conceptual outlook behind asymptotic analysis as a whole.
I think what is throwing me is where 'n' is not appearing and we're given a constant or another variable and all of a sudden I'm assuming n must not be considered for that section of code.
The simplest way to understand this one is to identify if the execution of a line of code is affected by/related to the current value of n.
Had the inner loop been, let's say, j < 10 instead of j < i, the complexity would indeed have been O(n^3).
Why is any constant considered O(1)?
This may agreeably sound a little counter-intuitive at first however, here is a small conceptual summary to clear the air.
Let us say that your first loop runs 1000 times. Now set it to run 10^1000 times and notice that, sure enough, it doesn't take the same time anymore.
Fair enough! Even though it may now take your computer noticeably longer to run the same piece of code, the time complexity still remains O(1).
What this practically means is that you can actually measure the time it takes your computer to execute that piece of code, and it will remain constant forever (for the same configuration).
Big-Oh is a function of the input, not a measure of the discrete quantity itself (time/space).
I hope that the above explanation also helps clarify why we actually ignore the constants in the O() notation.
Why is this Big-Oh thing so generalized and why is it used at the first place?
I thought of including this extra info as I myself had this question in mind when learning this topic for the first time.
Asymptotic time complexity is an a priori analysis of an algorithm, used to understand its worst-case (Big-Oh) behavior in time or space, regardless of the size of the input.
E.g. your second code cannot perform worse than O(n^6).
It is generalized because from one computer to another, only the constant changes, not Big-Oh.
With more experience, you will realize that practically, you want your algorithm's time complexity to be as asymptotically small as possible. Up to a polynomial function it is fine, but for large inputs, today's computers start coughing if you try to run an algorithm with exponential time complexity of the order O(k^n) or O(n^n), e.g. the Travelling Salesman and other NP-C/H problems.
Hope this adds to the info. :)

Big Oh Notation and Calculating the Running Time for a Triple-Nested For-Loop

In Computer Science, it is very important for Computer Scientists to know how to calculate the running times of algorithms in order to optimize code. For you Computer Scientists, I pose a question.
I understand that, in terms of n, a double-nested for-loop typically has a running time of n^2 and a triple-nested for-loop typically has a running time of n^3.
However, for a case where the code looks like this, would the running time be n^4?
x = 0;
for (a = 0; a < n; a++)
    for (b = 0; b < 2 * a; b++)
        for (c = 0; c < b * b; c++)
            x++;
I simplified the running time for each line to be roughly (n+1) for the first loop, (2n+1) for the second loop, and (2n)^2 + 1 for the third loop. Assuming the terms are multiplied together, and we extract the highest-order term to find the Big Oh, would the running time be n^4, or would it still follow the usual running time of n^3?
I would appreciate any input. Thank you very much in advance.
You are correct: n * 2n * 4n^2 = O(n^4).
The triple-nested loop only means there will be three numbers to multiply to determine the final Big O; each multiplicand itself depends on how much "processing" its loop does, though.
In your case the first loop does O(n) iterations, the second one O(2n) = O(n), and the inner loop O(n^2), so overall O(n * n * n^2) = O(n^4).
Formally, using sigma notation, you can obtain this:
sum over a = 0 to n-1 of (sum over b = 0 to 2a-1 of (sum over c = 0 to b^2-1 of 1))
  = sum over a of (sum over b of b^2)
  ≈ sum over a of (8/3) * a^3
  ≈ (2/3) * n^4 = Θ(n^4)
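As a sanity check, here is a counting sketch (variable names mine) that compares the increments of x with the leading term (2/3) * n^4 from the sums above:
#include <stdio.h>

int main(void) {
    for (long n = 10; n <= 80; n *= 2) {
        long long x = 0;
        for (long a = 0; a < n; a++)
            for (long b = 0; b < 2 * a; b++)
                for (long c = 0; c < b * b; c++)
                    x++;
        /* x / n^4 should approach 2/3 as n grows */
        printf("n = %3ld  x = %12lld  x / n^4 = %.3f\n",
               n, x, (double)x / ((double)n * n * n * n));
    }
    return 0;
}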
Could this be a question for Mathematics?
My gut feeling, like BrokenGlass's, is that it is O(n^4).
EDIT: Sum of squares and sum of cubes give a pretty good idea of what is involved. The answer is a resounding O(n^4): sum(a=0 to n) of (sum(b=0 to 2a) of b^2). The inner sum is proportional to a^3, and therefore the outer sum is proportional to n^4.
Pity, I thought you might get away with some log instead of n^4. Never mind.
