Running Time in Big-O

I'm having some trouble fully understanding Big-O notation and how nested loops affect running time. I know the time complexity of nested loops equals the number of times the innermost statement is executed, but I just want to check my understanding.
1.
count = 1
for i = 1 to N
    for j = 1 to N
        for k = 1 to N
            count += 1
Since there are three nested loops, each running N times, it would be O(N^3), correct?
2.
count = 1
for i = 1 to N^2
    for j = 1 to N
        count += 1
The outer loop iterates N^2 times and the inner loop runs N times, which makes it O(N^3), right?
3.
count = 1
for (i = 0; i < N; ++i)
    for (j = 0; j < i*i; ++j)
        for (k = 0; k < j; ++k)
            ++count
Would this be O(N), since N only appears in the first for loop?

Big-O notation describes how the algorithm or program scales in performance (running time, memory requirements, etc.) with respect to the amount of input.
For example:
O(1) says it will take constant time no matter the amount of data
O(N) scales linearly: give it five times the input and it will take five times as long to complete
O(N^2) scales quadratically: e.g. ten times the input will take a hundred times as long to complete
Your third example is O(N^5), not O(N): the innermost statement runs j times for each j < i*i, which sums to about i^4/2 iterations per value of i, and summing i^4 over i = 0..N-1 gives a total on the order of N^5.
For these simple cases you can count the innermost number of iterations, but there are certainly more complicated algorithms than that.
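A quick way to check counts like these empirically is to run the loops and tally how often the innermost statement executes. Here is a Python sketch of the three examples above:

```python
def count1(n):
    # Example 1: three nested loops, each running N times -> N^3 iterations.
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                count += 1
    return count

def count2(n):
    # Example 2: outer loop runs N^2 times, inner loop N times -> N^3 iterations.
    count = 0
    for i in range(n * n):
        for j in range(n):
            count += 1
    return count

def count3(n):
    # Example 3: the innermost statement runs j times for each j < i*i,
    # about i^4/2 iterations per i, so roughly N^5/10 overall.
    count = 0
    for i in range(n):
        for j in range(i * i):
            for k in range(j):
                count += 1
    return count

print(count1(10), count2(10), count3(10))
```

Doubling N multiplies `count3` by roughly 2^5 = 32, which is the fingerprint of N^5 growth.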


Simple algorithm Big-O Runtime

I am a beginner, and I have a fundamental question:
What is the Big-O runtime of this simple algorithm?
Is it O(n^2) [because of the two for loops] or O(n^3) [counting the product of two numbers, which itself should be O(n)]?
MaxPairwiseProductNaive(A[1 . . . n]):
    product ← 0
    for i from 1 to n:
        for j from i + 1 to n:
            product ← max(product, A[i] · A[j])
    return product
Both your first and second loops are O(n), so together they are O(n^2).
Inside your loop, neither the max nor the multiplication depends on the number of array elements. For languages like C, C++, C#, Java, etc., getting a value from an array does not add time complexity as n increases, so we say that it is O(1).
While the multiplication and max do take time, they are also O(1), because they always run in constant time, regardless of n. (I will note that the values here will get extremely large for arrays containing values > 1 because of the product, so I'm guessing some languages might start to slow down past a certain value, but that's just speculation.)
All in all, you have:
O(n):
    O(n):
        O(1)
So the total is O(n) · O(n) · O(1) = O(n^2).
Verification:
As a reminder, while time complexity analysis is a vital tool, you can always measure it. I did this in C# and measured both the time and number of inner loop executions.
The iteration count stays just barely under executions = 0.5n^2, which makes sense if you think about small values of n. You can step through your loops on a piece of paper and immediately see the pattern.
n=5: 5 outer loop iterations, 10 total inner loop iterations
n=10: 10 outer loop iterations, 45 total inner loop iterations
n=20: 20 outer loop iterations, 190 total inner loop iterations
For timing, we see an almost identical trend with just a different constant. This indicates that our time, T(n), is directly proportional to the number of inner loop iterations.
Takeaway:
The analysis which gave O(n^2) worked perfectly. The statements within the loop were O(1) as expected. The check didn't end up being necessary, but it's useful to see just how closely the analysis and verification match.
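The inner-iteration counts above can be reproduced with a short count (a Python sketch of the pseudocode, using the same 1-based bounds):

```python
def inner_iterations(n):
    # Counts how many times the loop body of MaxPairwiseProductNaive runs:
    # for i in 1..n, j runs from i+1 to n, giving n*(n-1)/2 total iterations.
    count = 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            count += 1
    return count

for n in (5, 10, 20):
    print(n, inner_iterations(n))
```

The closed form n(n-1)/2 matches the measured 10, 45, and 190, and is exactly the "just barely under 0.5n^2" trend described above.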

What is the complexity of this while loop?

Let m be the size of Array A and n be the size of Array B. What is the complexity of the following while loop?
while (i < n && j < m) {
    if (some condition)
        i++;
    else
        j++;
}
Example: A = [1,2,3,4], B = [1,2,3,4]: every iteration increments either i or j, so the while loop executes at most m + n times, i.e. O(m + n).
Example: A = [1,2,3,4,7,8,9,10], B = [1,2,3,4]: if the condition always increments i, the loop stops after only n = 4 iterations, i.e. O(n).
I am not able to figure out how to represent the complexity of the while loop.
One common approach is to describe the worst-case time complexity. In your example, the worst-case time complexity is O(m + n), because no matter what some condition is during a given loop iteration, the total number of loop iterations is at most m + n.
If it's important to emphasize that the time complexity has a lesser upper bound in some cases, then you'll need to figure out what those cases are, and find a way to express them. (For example, if a given algorithm takes an array of size n and has worst-case O(n^2) time, it might also be possible to describe it as "O(mn) time, where m is the number of distinct values in the array" — only if that's true, of course — where we've introduced an extra variable m to let us capture the impact on the performance of having more vs. fewer duplicate values.)
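Since "some condition" is unspecified, here is a Python sketch with a merge-style comparison standing in for it (a hypothetical choice); whatever the condition is, each iteration advances exactly one of the two indices, so the iteration count can never exceed m + n:

```python
def count_iterations(A, B):
    m, n = len(A), len(B)      # m = size of A, n = size of B, as in the question
    i = j = iterations = 0
    while i < n and j < m:
        if B[i] <= A[j]:       # hypothetical stand-in for "some condition"
            i += 1
        else:
            j += 1
        iterations += 1
    return iterations

# Each iteration advances exactly one index, so iterations <= (n - 1) + (m - 1) + 1.
print(count_iterations([1, 2, 3, 4], [1, 2, 3, 4]))
```

No matter how the condition behaves, the counter i can be bumped at most n times and j at most m times before the loop condition fails, which is why the worst case is O(m + n).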

Having Hard time with Runtime Complexity

I'm very, very new to C# and to programming in general (especially algorithms).
I'm trying to learn the basics of algorithms, and I really do not know how to answer certain questions:
I need to state the complexity of each snippet.
Now I've answered the following:
1) O(2N)
2) O(1)? I guessed here and couldn't tell why O(1)
3) Couldn't tell
4) Couldn't tell
5) O(N^2) ? took a nice guess here.
I would really appreciate any help, along with an explanation.
1) This loop counts up from 0 to n−1, which is n iterations. Each iteration performs 2 basic operations. Hence a total of 2n basic operations are performed. O(2n) is the same as O(n) because we disregard constant factors.
2) This loop counts down from 100 to 1, which is 99 iterations. Each iteration performs 2 basic operations. Hence a total of 198 basic operations are performed. O(198) is the same as O(1) because we disregard constants.
3) The outer loop counts from 100 to floor(n/2)−1. If n < 200, no iterations are executed and the running time is O(1) for the loop initialization and test. Otherwise n ≥ 200, and approximately (n/2)−100 iterations are executed. The inner loop executes n times, performing 1 basic operation each time. Hence a total of about ((n/2)−100)·n·1 = n^2/2 − 100n basic operations are executed, which is O(n^2).
4) The first loop executes n basic operations in total. The second and third loops combined execute n^2 basic operations in total. Thus a total of n + n^2 basic operations are executed. The n^2 term has a higher power than the n term, so the overall complexity is simply O(n^2).
5) The outer loop executes n times. The inner loop executes n^2 times per outer-loop iteration. Hence in total, n^3 basic operations are executed, which is O(n^3).
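The snippets themselves aren't shown here, but the third case as described can be reconstructed and its operation count checked directly (a Python sketch, with the loop bounds taken from the answer above):

```python
def count_case3(n):
    # Outer loop from 100 up to floor(n/2) - 1; inner loop runs n times each.
    count = 0
    for i in range(100, n // 2):
        for j in range(n):
            count += 1
    return count

# For n >= 200 the count is (n/2 - 100) * n; below that, zero iterations run.
print(count_case3(400))
```

Plugging in n = 400 gives (200 − 100) · 400 = 40000 operations, matching the n^2/2 − 100n formula.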

for loop with many iterations vs for loop with fewer iterations

Let's say we have for loop (a) that will have 100 iterations and for loop (b) that will have 50 iterations.
Which is more efficient?
I would think that (b) is more efficient because it has fewer iterations, but Big-O for both (a) and (b) is O(n).
Am I overthinking this and misusing the concept of Big-O?
To learn more about the sense in which fewer iterations per pass can be more efficient, read about loop unrolling (or loop unwinding).
Big-O gives an upper bound on a function's growth rate; it's only an estimate.
In your case both loops (or in fact the same loop with different numbers of iterations) have the same time complexity of O(n), i.e. the running time of the loop is directly proportional to the number of inputs (plotting a linear time-vs-inputs graph).
Two different functions/algorithms having the same complexity for a particular case (worst/best/average) only means that they have the same growth rate for that case, not that they will perform exactly the same in an actual run.
So in actual measurements, the same loop with 50 iterations (n=50) will take less time than the loop with 100 iterations (n=100).
Maybe you are not appreciating that Big-O is a theoretical mechanism for estimating the growth rate/performance of a function/algorithm.
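The distinction can be made concrete: both loops below have the same O(n) growth rate, yet the n=50 run does exactly half the work of the n=100 run (a minimal Python sketch):

```python
def loop(n):
    # A simple O(n) loop: the amount of work grows in direct proportion to n.
    work = 0
    for _ in range(n):
        work += 1
    return work

a = loop(100)   # loop (a): 100 iterations
b = loop(50)    # loop (b): 50 iterations
# Same complexity class (both linear), but (b) does half the actual work.
print(a, b)
```

Big-O compares how the two scale as n grows, not which one finishes first for a fixed n.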

How to find a formula for time complexity of an algorithm

To find the time complexity, I set a value for n, but once I iterated through the algorithm, I was unable to determine what it was. Any suggestions on how to find a formula for it so I can determine the Big-O?
for (int i = 0; i < 2*n; i++) {
    X
    X
}
for (int i = n; i > 0; i--) {
    X
}
X are just operations in the algorithm.
I set n to two, and it increases very fast; every time it goes through the loop, n doubles. It looks like it might be 2^n.
Since i increases by 1 each time through the first loop and n doubles each time through the loop, do you think the loop would ever terminate? (It probably would terminate when 2*n overflows, but if you're operating in an environment that, say, automatically switches to a Big Integer package when numbers exceed the size of a primitive int, the program will probably simply crash when the system runs out of memory.)
But let's say that this is not what's happening and that n is constant. You don't say whether the execution time for X depends on the value of i, but let's say it does not.
The way to approach this is to answer the question: Since i increases by 1 each time through the first loop, how many iterations will be required to reach the loop termination condition (for i to be at least 2*n)? The answer, of course, is O(n) iterations. (Constant coefficients are irrelevant for O() calculations.)
By the same reasoning, the second loop requires O(n) iterations.
Thus the overall program requires O(n + n) = O(n) iterations (and O(n) executions of X).
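Counting iterations directly confirms this (a Python sketch, with each X replaced by a counter increment so the count is observable, and n held constant as the answer assumes):

```python
def count_ops(n):
    count = 0
    for i in range(2 * n):       # first loop: 2n iterations
        count += 1
    for i in range(n, 0, -1):    # second loop: n iterations, counting down
        count += 1
    return count

# Total 2n + n = 3n iterations: linear in n, i.e. O(n).
print(count_ops(10))
```

Doubling n exactly doubles the count, the signature of linear growth, with no sign of the 2^n blow-up the question suspected.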
Your time complexity should be O(n). I assume you don't have any other loops where X appears. The 2*n just doubles your n for this algorithm, so the running time still increases linearly.
If, for example, you use Floyd's algorithm, which consists of 3 nested for loops, you can conclude that it has a time complexity of O(n^3), where n is the number of elements and the exponent 3 corresponds to the 3 nested loops.
Note that you can improve your algorithm (first loop) by avoiding multiplying n by 2 at every iteration: compute 2*n once before the loop.
