Simple algorithm Big-O Runtime

I am a beginner, and I have a fundamental question.
What is the Big-O runtime of this simple algorithm?
Is it O(n^2) [because of the two for loops] or O(n^3) [including the product of two numbers, which itself should be O(n)]?
MaxPairwiseProductNaive(A[1..n]):
    product ← 0
    for i from 1 to n:
        for j from i + 1 to n:
            product ← max(product, A[i] · A[j])
    return product

Both your first and second loops are O(n), so together they are O(n^2).
Inside your loops, neither the max nor the multiplication depends on the number of array elements. For languages like C, C++, C#, Java, etc., getting a value from an array does not add time complexity as n increases, so we say that is O(1).
While the multiplication and max do take time, they are also O(1), because they always run in constant time, regardless of n. (I will note that the products here will get extremely large for arrays containing values > 1, so I'm guessing some languages might start to slow down past a certain point, but that's just speculation.)
All in all, you have:
O(n):
    O(n):
        O(1)
So the total is O(n^2).
Verification:
As a reminder: while time complexity analysis is a vital tool, you can always measure it. I did this in C# and measured both the time and the number of inner loop executions.
Plotting executions against n, the trendline sits just barely under executions = 0.5n^2. That makes sense if you think about small values of n: you can step through your loops on a piece of paper and immediately see the pattern:
n=5: 5 outer loop iterations, 10 total inner loop iterations
n=10: 10 outer loop iterations, 45 total inner loop iterations
n=20: 20 outer loop iterations, 190 total inner loop iterations
For timing, we see an almost identical trend with just a different constant. This indicates that our time, T(n), is directly proportional to the number of inner loop iterations.
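For reference, here is a minimal C# sketch of this kind of measurement harness (not the original benchmark code; the test data and sizes are arbitrary choices):

using System;
using System.Diagnostics;

class MeasureNaiveProduct
{
    static void Main()
    {
        foreach (int n in new[] { 5, 10, 20, 1000, 2000 })
        {
            var a = new long[n];
            for (int i = 0; i < n; i++) a[i] = i + 1;   // arbitrary test data

            long innerExecutions = 0;
            long product = 0;
            var sw = Stopwatch.StartNew();
            for (int i = 0; i < n; i++)                 // 0-based version of the pseudocode
                for (int j = i + 1; j < n; j++)
                {
                    innerExecutions++;                  // count inner loop executions
                    product = Math.Max(product, a[i] * a[j]);
                }
            sw.Stop();

            Console.WriteLine($"n={n}: executions={innerExecutions}, time={sw.Elapsed.TotalMilliseconds:F3} ms");
        }
    }
}

Running something like this reproduces the counts listed above (10, 45, 190, ...), i.e. n(n-1)/2 inner executions.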
Takeaway:
The analysis which gave O(n^2) worked perfectly. The statements within the loop were O(1) as expected. The check didn't end up being necessary, but it's useful to see just how closely the analysis and verification match.

Related

What is the complexity of this while loop?

Let m be the size of array A and n be the size of array B. What is the complexity of the following while loop?
while (i < n && j < m) {
    if (some condition)
        i++;
    else
        j++;
}
Example: A = [1, 2, 3, 4], B = [1, 2, 3, 4]; the while loop executes at most 5 + 4 times, which is O(m + n).
Example: A = [1, 2, 3, 4, 7, 8, 9, 10], B = [1, 2, 3, 4]; the while loop executes at most 4 times, which is O(n).
I am not able to figure out how to represent the complexity of the while loop.
One common approach is to describe the worst-case time complexity. In your example, the worst-case time complexity is O(m + n), because no matter what some condition is during a given loop iteration, the total number of loop iterations is at most m + n.
If it's important to emphasize that the time complexity has a smaller upper bound in some cases, then you'll need to figure out what those cases are and find a way to express them. For example, if a given algorithm takes an array of size n and has worst-case O(n^2) time, it might also be possible to describe it as "O(mn) time, where m is the number of distinct values in the array" (only if that's true, of course). Here we've introduced an extra variable m to let us capture the impact on performance of having more vs. fewer duplicate values.
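To make the O(m + n) bound concrete, here is a hedged C# sketch of the loop with an iteration counter; since the original "some condition" isn't given, a simple element comparison stands in for it (an assumption):

using System;

class WhileLoopBound
{
    static void Main()
    {
        int[] A = { 1, 2, 3, 4, 7, 8, 9, 10 };  // m = size of A
        int[] B = { 1, 2, 3, 4 };                // n = size of B
        int m = A.Length, n = B.Length;

        int i = 0, j = 0, iterations = 0;
        while (i < n && j < m)
        {
            iterations++;
            if (B[i] <= A[j]) i++;   // stand-in for "some condition"
            else j++;
        }

        // Each iteration increments exactly one of i and j, and the loop
        // stops once i reaches n or j reaches m, so iterations <= m + n.
        Console.WriteLine($"iterations = {iterations}, m + n = {m + n}");
    }
}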

Running Time in Big-O

I'm having some trouble fully understanding Big-O notation and how nested loops impact the running time. I know the time complexity of nested loops is equal to the number of times the innermost statement is executed, but I just want to check my understanding.
1.
count=1
for i=1 to N
    for j=1 to N
        for k=1 to N
            count+=1
Since there are 3 nested loops, it would be O(N^3), correct?
2.
count=1
for i=1 to N^2
    for j=1 to N
        count+=1
The outer loop iterates N^2 times and the inner loop runs N times, which makes it O(N^3), right?
3.
count=1
for (i=0; i<N; ++i)
    for (j=0; j<i*i; ++j)
        for (k=0; k<j; ++k)
            ++count
This would be O(N), right, since N only appears in the first for loop?
Big-O notation describes how the algorithm or program scales in performance (running time, memory requirement, etc.) with regard to the amount of input.
For example:
O(1) says it will take constant time no matter the amount of data.
O(N) scales linearly: give it five times the input and it will take five times as long to complete.
O(N^2) scales quadratically: e.g. ten times the input will take a hundred times as long to complete.
Your third example is actually O(N^5), because that is how the total number of innermost iterations scales with N: for each i, the middle loop runs i^2 times and the innermost loop runs j times per middle iteration, so the innermost statement executes the sum over i of i^2(i^2 - 1)/2 times, which grows like N^5.
For simple cases like these you can count the innermost number of iterations, but there are certainly more complicated algorithms than that.
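If you want to convince yourself empirically, here is a small C# sketch that counts the innermost iterations of the third example (the values of N are arbitrary; doubling N should multiply the count by roughly 2^5 = 32):

using System;

class CountInnermost
{
    static void Main()
    {
        foreach (int N in new[] { 10, 20, 40 })
        {
            long count = 0;
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < i * i; ++j)
                    for (int k = 0; k < j; ++k)
                        ++count;    // the innermost statement
            Console.WriteLine($"N={N}: count={count}");   // grows like N^5/10
        }
    }
}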

Why are only n/2 iterations considered for lower bound of selection sort?

From "Algorithms Unlocked" - Cormen, in Chapter 3- Algorithms for Sorting and Searching, under "Selection Sort".
The running time is Omega(n^2). This lower bound is calculated by considering n/2 iterations of the outer loop. Why are only n/2 iterations considered?
Isn't the entire array traversed in all cases, including the lower-bound case? If so, shouldn't n be considered instead of n/2?
Edit 1: The fact that n/2 is considered even for the inner loop is not helping me understand the logic.
The logic is the following: different iterations of the outer loop have different inner-loop lengths. In the first n/2 iterations of the outer loop, we know that the inner loop has length >= n/2. Thus, the total amount of work is greater than the amount of work done in the first n/2 iterations, which in turn is greater than (n/2)*(n/2) = n^2/4, and therefore Omega(n^2).
If we considered the entire outer loop, we would not be able to state that the inner loop has much work to do, because the last iteration of the outer loop has an inner loop of length 1, and we would only get the estimate n*1.
For better understanding, keep in mind the following two points:
The definition of Omega(n^2). It does not say that the complexity is >= n^2. It says that there is a constant C such that the complexity is >= C * n^2. C might equal 1 (then >= n^2), or 100 (then >= 100 * n^2), or 0.000001 (then >= 0.000001 * n^2). Since we've found that selection sort does >= (1/4)*n^2 work, we know that selection sort is Omega(n^2) with C equal to 0.25.
In analyzing the complexity of selection sort, we do not discuss "worst case" or "best case" scenarios. In fact, the algorithm does not depend on the data we are sorting; it always performs exactly the same iterations in the outer and inner loops. (To be precise: the only data-dependent difference is whether we perform the swap operation or not, but we do not count the swap operation in the analysis.) Thus, when we talk about the "first n/2 iterations" of the outer loop, we do not assume that the algorithm was able to complete the job in n/2 iterations. We just evaluate the amount of work done in the first n/2 iterations, leaving the work in the remaining iterations aside.
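To see this concretely, here is a short C# sketch (not from the book) that counts selection sort's inner-loop iterations and, separately, the work done in just the first n/2 outer iterations:

using System;

class SelectionSortWork
{
    static void Main()
    {
        int n = 100;
        long total = 0, firstHalf = 0;
        for (int i = 0; i < n - 1; i++)      // outer loop of selection sort
            for (int j = i + 1; j < n; j++)  // inner loop scans the unsorted tail
            {
                total++;
                if (i < n / 2) firstHalf++;  // work in the first n/2 outer iterations
            }
        Console.WriteLine($"total = {total}");          // n(n-1)/2 = 4950
        Console.WriteLine($"first half = {firstHalf}"); // 3725, already >= n^2/4 = 2500
    }
}

The data being sorted never appears: as the answer says, the iteration counts are the same for every input.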

Having a hard time with runtime complexity

I'm very fresh to C# and programming overall (especially algorithms).
I'm trying to learn the basics of algorithms, and I really do not know how to answer certain questions:
For each of the following snippets, I need to determine the complexity.
So far I've answered:
1) O(2N)
2) O(1)? I guessed here and couldn't tell why it's O(1).
3) Couldn't tell.
4) Couldn't tell.
5) O(N^2)? Took a guess here.
I would really appreciate any help, followed by an explanation.
1) This loop counts up from 0 to n-1, which is n iterations. Each iteration performs 2 basic operations, hence a total of 2n basic operations are performed. O(2n) is the same as O(n) because we disregard constants.
2) This loop counts down from 100 to 1, which is 99 iterations. Each iteration performs 2 basic operations, hence a total of 198 basic operations are performed. O(198) is the same as O(1) because we disregard constants.
3) The outer loop counts from 100 to floor(n/2) - 1. If n < 200, no iterations are executed and the run time is O(1) for the loop initialization and test. Otherwise n >= 200, and approximately (n/2) - 100 iterations are executed. The inner loop executes n times, performing 1 basic operation each time. Hence a total of about ((n/2) - 100) × n × 1 = n^2/2 - 100n basic operations are executed, which is O(n^2).
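Since the original snippets aren't reproduced in the question, here is a hypothetical C# reconstruction of the loop structure that this description of snippet 3 implies:

using System;

class Snippet3Sketch
{
    static void Main()
    {
        int n = 1000;   // arbitrary example size (n >= 200)
        long count = 0;
        for (int i = 100; i < n / 2; i++)   // about n/2 - 100 iterations
            for (int j = 0; j < n; j++)     // n iterations each
                count++;                    // 1 basic operation
        Console.WriteLine(count);           // (n/2 - 100) * n = 400000 for n = 1000
    }
}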
4) The first loop executes n basic operations in total. The second and third loops combined execute n^2 basic operations in total. Thus a total of n + n^2 basic operations are executed. The n^2 term has a higher power than the n term, so the overall complexity is simply O(n^2).
5) The outer loop executes n times. The inner loop executes n^2 times per outer loop iteration. Hence in total, n^3 basic operations are executed, which is O(n^3).
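Likewise, a hypothetical reconstruction of the shape described for snippet 5:

using System;

class Snippet5Sketch
{
    static void Main()
    {
        int n = 100;    // arbitrary example size
        long count = 0;
        for (int i = 0; i < n; i++)                 // outer loop: n iterations
            for (long j = 0; j < (long)n * n; j++)  // inner loop: n^2 iterations
                count++;                            // 1 basic operation
        Console.WriteLine(count);                   // n^3 = 1000000
    }
}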

Determining time complexity of an algorithm

Below is some pseudocode I wrote that, given an array A and an integer value k, returns true if there are two different integers in A that sum to k, and returns false otherwise. I am trying to determine the time complexity of this algorithm.
I'm guessing that the complexity of this algorithm in the worst case is O(n^2). This is because the first for loop runs n times, and the for loop within this loop also runs n times. The if statement makes one comparison and returns a value if true, which are both constant time operations. The final return statement is also a constant time operation.
Am I correct in my guess? I'm new to algorithms and complexity, so please correct me if I went wrong anywhere!
Algorithm ArraySum(A, n, k)
    for (i = 0; i < n; i++)
        for (j = i+1; j < n; j++)
            if (A[i] + A[j] = k)
                return true
    return false
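For reference, a direct C# translation of that pseudocode might look like this (a sketch; names and types follow the pseudocode):

using System;

class ArraySumDemo
{
    // Returns true if two different elements of A sum to k.
    static bool ArraySum(int[] A, int n, int k)
    {
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                if (A[i] + A[j] == k)
                    return true;
        return false;
    }

    static void Main()
    {
        int[] A = { 1, 2, 5, 9 };
        Console.WriteLine(ArraySum(A, A.Length, 11));  // True  (2 + 9)
        Console.WriteLine(ArraySum(A, A.Length, 4));   // False
    }
}

Note that an input whose first two elements already sum to k returns after a single comparison, which is relevant to the best-case discussion below.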
Azodious's reasoning is incorrect. The inner loop does not simply run n-1 times, so you should not use (outer iterations) × (inner iterations) to compute the complexity.
The important thing to observe is that the inner loop's runtime changes with each iteration of the outer loop.
It is correct that the first time the loop runs, it will do n-1 iterations. But after that, the number of iterations always decreases by one:
n - 1
n - 2
n - 3
…
2
1
We can use Gauss's trick (second formula) to sum this series: n(n-1)/2 = (n² - n)/2. This is how many times the comparison runs in total in the worst case.
From this, we can see that the bound cannot get any tighter than O(n²). As you can see, there is no need for guessing.
Note that you cannot provide a meaningful lower bound, because the algorithm may complete after any step; this implies the algorithm's best case is O(1).
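A quick empirical check of that sum (a minimal sketch; n is an arbitrary choice):

using System;

class ComparisonCount
{
    static void Main()
    {
        int n = 10;
        long comparisons = 0;
        for (int i = 0; i < n; i++)
            for (int j = i + 1; j < n; j++)
                comparisons++;            // one comparison per inner iteration
        Console.WriteLine(comparisons);   // 45 = n(n-1)/2 = 10*9/2
    }
}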
Yes. In the worst case, your algorithm is O(n^2).
Your algorithm is O(n^2) because every instance of the input needs at most O(n^2) time.
Your algorithm is Ω(1) because there exists an instance of the input that needs only constant time.
The following appears in Chapter 3, "Growth of Functions", of Introduction to Algorithms, co-authored by Cormen, Leiserson, Rivest, and Stein:
When we say that the running time (no modifier) of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.
Given an input in which the sum of the first two elements equals k, this algorithm takes only one addition and one comparison before returning true.
Therefore, this input costs constant time, which makes the running time of this algorithm Ω(1).
No matter what the input is, this algorithm takes at most n(n-1)/2 additions and n(n-1)/2 comparisons before returning a value.
Therefore, the running time of this algorithm is O(n^2).
In conclusion, we can say that the running time of this algorithm falls between Ω(1) and O(n^2).
We could also say that the worst-case running time of this algorithm is Θ(n^2).
You are right, but let me explain a bit:
This is because the first for loop runs n times, and the for loop within this loop also runs n times.
Actually, the second loop will run (n-i-1) times, but in terms of complexity it is taken as n. (Updated based on phant0m's comment.)
So, in the worst-case scenario, it runs n * (n-i-1) * 1 * 1 times, which is O(n^2).
In the best-case scenario, it runs 1 * 1 * 1 * 1 times, which is O(1), i.e. constant.
