Calculating the run-time of a loop - algorithm

I'm trying to prove that the following algorithm runs in O(n^2) time.
I'm given this code, which is mostly pseudocode:
function generalFunction(value)
    for (i = 1 to value.length - 1)          // value is an array; this runs `n-1` times
        for (j = 1 to value.length - i)      // this runs `(n-1)(n-1)` times?
            if value[j-1] > value[j]         // this runs n^2 times?
                swap value[j-1] and value[j] // how many times would this run?
For the first line, I calculated that it runs n-1 times: the loop would go n times, but since we subtract 1 from the length of the arbitrary array, it runs n-1 times (I believe).
The same can be said for the second line, except we have to multiply it by the count of the outer for loop.
However, I'm not sure about the last two lines. Would the third one run n^2 times? I only wrote n^2 because of the two nested loops. I'm not sure how to approach the last line either; any input would be much appreciated.

Yes, this will run in O(n^2), as per your comments. Note that whether the inner if statement executes the swap is irrelevant to the fact that the double loop runs n^2 times. Also, the minus-one part (n-1) still makes it n^2, since you are looking for an upper bound, and n^2 is the tightest such bound: (n-1)(n-1) = n^2 - 2n + 1 is dominated by the n^2 term.
For a definition and a worked example similar to this one, see the Wikipedia article on Big O notation (the examples section).
P.S. Big O is about the worst-case scenario. In the worst case, the if condition will always be true, so the swap will execute on every loop cycle; if you put a breakpoint there, it will get hit on the order of (n-1)*(n-1) times. (Strictly, since the inner loop shrinks by one each pass, the exact worst-case count is n(n-1)/2, but that is still Θ(n^2).)
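To see this concretely, here is a runnable Python version of the question's pseudocode, with a comparison counter added for illustration (the counter and function name are my additions, not part of the original):

    def general_function(value):
        # Bubble sort from the question, counting how often the comparison runs.
        comparisons = 0
        n = len(value)
        for i in range(1, n):               # outer loop: n-1 passes
            for j in range(1, n - i + 1):   # inner loop shrinks by one each pass
                comparisons += 1
                if value[j - 1] > value[j]:
                    value[j - 1], value[j] = value[j], value[j - 1]
        return comparisons

    # Reverse-sorted input is the worst case: every comparison triggers a swap.
    # For n = 10 this prints 45, i.e. n(n-1)/2, bounded above by (n-1)^2 = 81.
    print(general_function(list(range(10, 0, -1))))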

Related

Big-theta runtime of two linear nested loops, the inner running half as many times for each iteration of the outer.

I'm having a lot of trouble with this algorithms problem. I'm supposed to find a big-theta analysis of the following algorithm:
function example(n):
    int j = n
    for i = 0; i < n; i++:
        doSomethingA()
        for k = 0; k <= j; k++:
            doSomethingB()
        j /= 2
My approach is to break the entire execution of the algorithm into two parts: one part where both doSomethingA() and doSomethingB() are called, and a second part, after j becomes 0, where only doSomethingA() is called until the program halts.
With this approach, part 1 occurs for log n iterations of the outer loop, and part 2 occurs for the remaining n - log n iterations.
The number of times the inner loop runs is halved on each pass, so in total it should run 2n - 1 times. So the runtime for part 1 should be (2n-1)*c for some constant c. I'm not entirely sure if this is valid.
For part 2, the work inside the loop is always constant, and the loop repeats n - log n times.
So we have ((2n-1) + (n - log n))*c.
I'm not sure whether the work I've done up to here is correct, nor am I certain how to continue. My intuition tells me this is O(n), but I'm not sure how to express that in big-theta terms. Beyond that, it's possible my entire approach is flawed. How should I attack a question like this? If my approach is valid, how should I complete it?
Thanks.
It is easier to investigate how often doSomethingA and doSomethingB are executed.
For doSomethingA it is clearly n times.
For doSomethingB we get (n+1) + (n/2+1) + (n/4+1) + ... + 1, so roughly 2n + n: the 2n comes from the geometric series n + n/2 + n/4 + ..., and the n from summing up the 1s over all n outer iterations (note that once j reaches 0, the inner loop k = 0 to 0 still runs once per outer iteration).
Altogether we get O(n), and also Theta(n), since we need at least Omega(n), as can be seen from the n executions of doSomethingA.
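A quick way to check this is to replace doSomethingA() and doSomethingB() with counters and run the loop structure; a minimal Python sketch (the counters stand in for the two calls):

    def count_calls(n):
        # Counters stand in for doSomethingA() and doSomethingB().
        a_calls = b_calls = 0
        j = n
        for i in range(n):
            a_calls += 1                # doSomethingA()
            for k in range(j + 1):      # k = 0 .. j inclusive
                b_calls += 1            # doSomethingB()
            j //= 2                     # j is halved (integer division)
        return a_calls, b_calls

    # For n = 1000 this returns (1000, 2994): doSomethingA runs n times and
    # doSomethingB roughly 3n times, so the total is Theta(n).
    print(count_calls(1000))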

Why are only n/2 iterations considered for lower bound of selection sort?

From "Algorithms Unlocked" - Cormen, in Chapter 3- Algorithms for Sorting and Searching, under "Selection Sort".
The running time is Omega(n^2). This lower bound is calculated by considering n/2 iterations of the outer loop. Why are n/2 iterations considered?
Isn't the entire array traversed in all cases, including the lower-bound case? If so, shouldn't n be considered instead of n/2?
Edit 1: The fact that n/2 is considered even for the inner loop is not helping me understand the logic.
The logic is the following: different iterations of the outer loop have different lengths of the inner loop. In the first n/2 iterations of the outer loop, we know that the inner loop has length >= n/2. Thus, the total amount of work is greater than the amount of work in the first n/2 iterations alone, which is greater than (n/2)*(n/2) = n^2/4, hence Omega(n^2).
If we considered the entire outer loop, we could not claim that the inner loop has much work to do, because the last iteration of the outer loop has an inner loop of length 1, and we would only get the estimate n*1.
For better understanding, keep in mind the following two points:
The definition of Omega(n^2). It does not say that the complexity is >= n^2. It says that there is a constant C such that the complexity is >= C * n^2. C might equal 1 (then >= n^2), or 100 (then >= 100 * n^2), or 0.000001 (then >= 0.000001 * n^2). Since we've found that for selection sort the work is >= (1/4)*n^2, we know that selection sort is Omega(n^2) with C equal to 0.25.
In analyzing the complexity of selection sort we do not discuss "worst case" or "best case" scenarios. In fact, the algorithm does not depend on the data being sorted; it always performs exactly the same iterations of the outer and inner loops. (To be precise, the only data-dependent difference is whether we perform the swap operation or not, but we do not count the swap in this analysis.) Thus, when we talk about the "first n/2 iterations" of the outer loop, we do not assume the algorithm was able to complete the job in n/2 iterations. We simply evaluate the amount of work done in the first n/2 iterations, leaving the work in the remaining iterations aside.
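A short Python sketch of this counting argument (selection sort's comparison count is data-independent, so no actual data is needed; the function is mine, for illustration):

    def selection_sort_comparisons(n):
        # Count the comparisons selection sort performs on n elements,
        # and separately those done in the first n/2 outer iterations.
        comparisons = 0
        first_half = 0
        for i in range(n - 1):            # outer loop
            for j in range(i + 1, n):     # inner loop: n-1-i comparisons
                comparisons += 1
                if i < n // 2:
                    first_half += 1
        return comparisons, first_half

    # For n = 100: 4950 comparisons in total, and the first n/2 outer
    # iterations alone account for 3725 >= (n/2)*(n/2) = 2500, hence Omega(n^2).
    print(selection_sort_comparisons(100))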

How do I figure out the time complexity of these two small pieces of code?

I would like to know how to work out the time complexity of these two loops.
for i ← 1 to 2n do means i takes 2n different values, and for each of them, j takes another i different values.
So overall, s ← s + i is executed at most 2n * 2n times, which is O(n^2).
The same reasoning for the second example gives us O(n^2 * n^2) = O(n^4).
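The question's code isn't shown here, but reconstructing the first snippet from the answer's description (an assumption on my part: for i ← 1 to 2n, an inner loop of i steps executing s ← s + i), a Python check of the count would look like:

    def count_body_executions(n):
        # Reconstruction (assumed) of the first snippet:
        # for i <- 1 to 2n do: for j <- 1 to i do: s <- s + i
        s = 0
        executions = 0
        for i in range(1, 2 * n + 1):
            for j in range(1, i + 1):
                s += i
                executions += 1
        return executions

    # The exact count is 1 + 2 + ... + 2n = 2n(2n + 1)/2, e.g. 20100 for
    # n = 100, which is Theta(n^2), consistent with the O(2n * 2n) bound above.
    print(count_body_executions(100))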

Determining time complexity of an algorithm

Below is some pseudocode I wrote that, given an array A and an integer value k, returns true if there are two different integers in A that sum to k, and returns false otherwise. I am trying to determine the time complexity of this algorithm.
I'm guessing that the complexity of this algorithm in the worst case is O(n^2). This is because the first for loop runs n times, and the for loop within this loop also runs n times. The if statement makes one comparison and returns a value if true, which are both constant time operations. The final return statement is also a constant time operation.
Am I correct in my guess? I'm new to algorithms and complexity, so please correct me if I went wrong anywhere!
Algorithm ArraySum(A, n, k)
    for (i = 0; i < n; i++)
        for (j = i + 1; j < n; j++)
            if (A[i] + A[j] == k)
                return true
    return false
Azodious's reasoning is incorrect. The inner loop does not simply run n-1 times, so you should not use (outer iterations) * (inner iterations) to compute the complexity.
The important thing to observe is that the inner loop's runtime changes with each iteration of the outer loop.
It is correct that the first time the loop runs, it will do n-1 iterations. But after that, the number of iterations always decreases by one:
n - 1
n - 2
n - 3
…
2
1
We can use Gauss's trick (second formula) to sum this series: n(n-1)/2 = (n² - n)/2. This is how many times the comparison runs in total in the worst case.
From this, we can see that the bound cannot get any tighter than O(n²). As you can see, there is no need for guessing.
Note that you cannot provide a meaningful lower bound, because the algorithm may complete after any step; this implies the algorithm's best case is O(1).
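For reference, a direct Python rendering of the ArraySum pseudocode above, with a comparison counter added for illustration (the counter is not part of the original algorithm):

    def array_sum(A, k):
        # Returns True if two different elements of A sum to k.
        n = len(A)
        comparisons = 0
        for i in range(n):
            for j in range(i + 1, n):
                comparisons += 1
                if A[i] + A[j] == k:
                    return True, comparisons
        return False, comparisons

    # Worst case: no pair sums to k, so all n(n-1)/2 comparisons run.
    # For n = 10 this prints (False, 45).
    print(array_sum(list(range(10)), 1000))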
Yes. In the worst case, your algorithm is O(n²).
Your algorithm is O(n²) because every input instance needs at most O(n²) time.
Your algorithm is Ω(1) because there exists an input instance that needs only constant time.
The following appears in Chapter 3, Growth of Functions, of Introduction to Algorithms, co-authored by Cormen, Leiserson, Rivest, and Stein:
When we say that the running time (no modifier) of an algorithm is Ω(g(n)), we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g(n), for sufficiently large n.
Given an input in which the sum of the first two elements equals k, this algorithm takes only one addition and one comparison before returning true.
Therefore, this input costs constant time and makes the running time of this algorithm Ω(1).
No matter what the input is, this algorithm takes at most n(n-1)/2 additions and n(n-1)/2 comparisons before returning a value.
Therefore, the running time of this algorithm is O(n²).
In conclusion, we can say that the running time of this algorithm falls between Ω(1) and O(n²).
We can also say that the worst-case running time of this algorithm is Θ(n²).
You are right, but let me explain a bit:
"This is because the first for loop runs n times, and the for loop within this loop also runs n times."
Actually, the second loop will run (n-i-1) times, but in terms of complexity it is taken as n. (Updated based on phant0m's comment.)
So, in the worst-case scenario, it'll run n * (n-i-1) * 1 * 1 times, which is O(n^2).
In the best-case scenario, it'll run 1 * 1 * 1 * 1 times, which is O(1), i.e. constant.

Runtime of bubble/simple sort

In class, simple sort was used as one of our first examples of O(N) runtimes...
But since it goes through one fewer iteration of the array every time it runs, wouldn't it be something more along the lines of:
Runtime(bubble) = sum(i = 0 to n, (n - i)) ?
And aren't only the biggest processes, when run one after another, counted in asymptotic analysis? That would be the N-element pass, so why isn't this sort O(N) by that definition?
The sum 1 + 2 + ... + N is N*(N+1)/2 (high school maths), and that grows like N²/2 as N goes to infinity. Classic O(N²).
I'm not sure where you (or your professor) got the notion that bubble sort is O(n). If your professor had a guaranteed O(n) sort algorithm, they'd be wise to try to patent it :-)
A bubble sort is, by its very nature, O(n²).
That's because it has to make a full pass over the entire data set to correctly place the first element.
Then a second pass over N-1 elements to correctly place the second, and a third pass over N-2 elements to correctly place the third.
And so on, ending up with close to N*N/2 operations which, dropping the superfluous 0.5 constant factor, is O(n²).
The time complexity of bubble sort is O(n^2).
When considering the complexity, only the largest term is kept, and constant factors are dropped.
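A tiny Python check of this pass-counting argument, comparing the summed pass lengths against the closed form N(N+1)/2 (a minimal sketch, just verifying the arithmetic above):

    # Sum of pass lengths N + (N-1) + ... + 1 versus the closed form.
    for n in (4, 16, 256):
        total = sum(n - i for i in range(n))
        assert total == n * (n + 1) // 2
        print(n, total, n * n / 2)   # total grows like N^2/2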
