I'm very new to C# and to programming in general (especially algorithms).
I'm trying to learn the basics of algorithms, and I really don't know how to answer certain questions:
For each one, I need to state its complexity.
So far I've answered the following:
1) O(2N)
2) O(1)? A guess; I couldn't say why it would be O(1).
3) Couldn't tell.
4) Couldn't tell.
5) O(N^2)? Another guess.
I would really appreciate any help, along with explanations.
This loop counts up from 0 to n-1, which is n iterations. Each iteration performs 2 basic operations. Hence a total of 2n basic operations are performed. O(2n) is the same as O(n) because we disregard constant factors.
This loop counts down from 100 to 1, which is 99 iterations. Each iteration performs 2 basic operations. Hence a total of 198 basic operations are performed. O(198) is the same as O(1) because we disregard constants.
The outer loop counts from 100 to floor(n/2)-1. If n < 200, no iterations are executed and the run time is O(1) for the loop initialization and test. Otherwise n >= 200, and approximately (n/2) - 100 iterations are executed. The inner loop executes n times, performing 1 basic operation each time. Hence a total of about ((n/2) - 100) * n * 1 = (n^2)/2 - 100n basic operations are executed, which is O(n^2).
The first loop executes n basic operations in total. The second and third loops combined execute n^2 basic operations in total. Thus a total of n + n^2 basic operations are executed. The n^2 term has a higher power than the n term, so the overall complexity is simply O(n^2).
The outer loop executes n times. The inner loop executes n^2 times per outer loop iteration. Hence in total, n^3 basic operations are executed, which is O(n^3).
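The original loops aren't quoted in the question, but a quick sketch of what loop 3 might look like, reconstructed from the description above (the bounds are my assumption, not the original code), lets you count the basic operations directly:

```python
def loop3(n):
    # Reconstruction of loop 3: outer loop from 100 up to n//2 - 1,
    # inner loop running n times with one basic operation each.
    ops = 0
    for i in range(100, n // 2):
        for j in range(n):
            ops += 1  # one basic operation
    return ops

# For n >= 200 the count matches ((n/2) - 100) * n; below 200 the
# outer loop never runs, so only the O(1) initialization and test remain.
print(loop3(1000))  # 400000
print(loop3(100))   # 0
```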
I am a beginner, and I have a fundamental doubt:
What is the Big-O runtime of this simple algorithm?
Is it O(n^2) (because of the two for loops), or O(n^3) (if the product of two numbers is itself counted as O(n))?
MaxPairwiseProductNaive(A[1 . . . n]):
    product ← 0
    for i from 1 to n:
        for j from i + 1 to n:
            product ← max(product, A[i] · A[j])
    return product
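A direct Python translation of the pseudocode (shifted to 0-based indexing) makes it easy to experiment with the question:

```python
def max_pairwise_product_naive(a):
    # Try every pair (i, j) with i < j and keep the largest product.
    product = 0
    for i in range(len(a)):
        for j in range(i + 1, len(a)):
            product = max(product, a[i] * a[j])
    return product

print(max_pairwise_product_naive([1, 5, 3, 2]))  # 15, from 5 * 3
```

One caveat: in languages with fixed-width integers the multiplication is O(1), while Python's arbitrary-precision integers make multiplication slower for huge values, which is exactly the kind of caveat the answer speculates about.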
Both your first and second loops are O(n), so together they are O(n^2).
Inside your loop, neither max nor the multiplication depends on the number of array elements. For languages like C, C++, C#, Java, etc., getting a value from an array does not add time complexity as n increases, so we say it is O(1).
While the multiplication and max do take time, they are also O(1), because they run in constant time regardless of n. (I will note that the products can get extremely large for arrays containing values > 1, so I'm guessing some languages might start to slow down past a certain value, but that's just speculation.)
All in all, you have:

O(n):
    O(n):
        O(1)

So the total is O(n) * O(n) * O(1) = O(n^2).
Verification:
As a reminder: while time complexity analysis is a vital tool, you can also always just measure. I did this in C# and measured both the running time and the number of inner loop executions.
Plotting executions against n gives a trendline just barely under executions = 0.5n^2, which makes sense if you think about small values of n. You can step through the loops on a piece of paper and immediately see the pattern:
n=5: 5 outer loop iterations, 10 total inner loop iterations
n=10: 10 outer loop iterations, 45 total inner loop iterations
n=20: 20 outer loop iterations, 190 total inner loop iterations
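Those counts can be reproduced with a small sketch mirroring the double loop (this is my reconstruction, not the original C# measurement code):

```python
def inner_loop_executions(n):
    # Count how many times the inner loop body runs for
    # i in 1..n, j in i+1..n (the same shape as the pseudocode).
    count = 0
    for i in range(1, n + 1):
        for j in range(i + 1, n + 1):
            count += 1
    return count

for n in (5, 10, 20):
    print(n, inner_loop_executions(n))  # 10, 45, 190 -- i.e. n*(n-1)/2
```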
For timing, we see an almost identical trend, just with a different constant factor. This indicates that the running time T(n) is directly proportional to the number of inner loop iterations.
Takeaway:
The analysis that gave O(n^2) held up perfectly: the statements within the loop were O(1) as expected. The measurement wasn't strictly necessary, but it's useful to see just how closely the analysis and the verification match.
I'm having a lot of trouble with this algorithms problem. I'm supposed to find a big-theta analysis of the following algorithm:
function example(n):
    int j = n
    for i = 0; i < n; i++:
        doSomethingA()
        for k = 0; k <= j; k++:
            doSomethingB()
        j /= 2
My approach is to break the execution of the algorithm into two parts: one part where doSomethingA() and doSomethingB() are both called, and a second part, after j becomes 0, where only doSomethingA() is called until the program halts.
With this approach, part 1 covers the first log n iterations of the outer loop, and part 2 covers the remaining n - log n iterations.
The number of times the inner loop runs is halved on each outer iteration, so in total it should run about 2n - 1 times. So the runtime for part 1 should be (2n-1)*c for some constant c. I'm not entirely sure whether this is valid.
For part 2, the work inside the loop is always constant, and the loop repeats (n - log n) times.
So we have ((2n-1) + (n - log n))*c.
I'm not sure whether the work I've done up to here is correct, nor am I certain how to continue. My intuition tells me this is O(n), but I'm not sure how to rationalize that in big-theta terms. Beyond that, it's possible my entire approach is flawed. How should I attack a question like this? And if my approach is valid, how should I complete it?
Thanks.
It is easier to investigate how often doSomethingA and doSomethingB are executed.
For doSomethingA it is clearly n times.
For doSomethingB we get (n+1) + (n/2+1) + (n/4+1) + ... + 1, so roughly 2n + n: the 2n comes from n + n/2 + n/4 + ..., and the extra n comes from summing up the 1s (one per outer iteration, since with k <= j the inner loop runs once even when j = 0).
Altogether we get O(n), and in fact Theta(n), since we need at least Omega(n), as can be seen from the n executions of doSomethingA.
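An instrumented sketch of the pseudocode (my reconstruction, with counters in place of doSomethingA and doSomethingB) confirms the linear growth:

```python
def example_counts(n):
    # Count calls: j starts at n, the inner loop runs k = 0..j
    # inclusive, and j is halved after each outer iteration.
    a_calls = b_calls = 0
    j = n
    for i in range(n):
        a_calls += 1              # doSomethingA()
        for k in range(j + 1):    # k from 0 to j inclusive
            b_calls += 1          # doSomethingB()
        j //= 2
    return a_calls, b_calls

a, b = example_counts(1000)
print(a, b)  # a is exactly n; b stays within a small constant factor of n
```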
From "Algorithms Unlocked" by Cormen, Chapter 3 (Algorithms for Sorting and Searching), under "Selection Sort":
The running time is Omega(n^2). This lower bound is calculated by considering n/2 iterations of the outer loop. Why are n/2 iterations considered?
Isn't the entire array traversed in all cases, including the lower-bound case? If so, shouldn't n be considered instead of n/2?
Edit 1: The fact that n/2 is also considered for the inner loop does not help me understand the logic.
The logic is the following: different iterations of the outer loop have inner loops of different lengths. In each of the first n/2 iterations of the outer loop, we know the inner loop has length >= n/2. Thus the total amount of work is greater than the amount of work in just those first n/2 iterations, which is greater than (n/2)*(n/2) = n^2/4, hence Omega(n^2).
If we considered the entire outer loop, we would not be able to claim that the inner loop has much work to do, because the last iteration of the outer loop has an inner loop of length 1, and we would only get the estimate n*1.
For a better understanding, keep the following two points in mind:
The definition of Omega(n^2) does not say that the complexity is >= n^2. It says that there is a constant C such that the complexity is >= C * n^2. C might equal 1 (then >= n^2), or 100 (then >= 100 * n^2), or 0.000001 (then >= 0.000001 * n^2). Since we've found that selection sort does >= (1/4)*n^2 work, we know that selection sort is Omega(n^2) with C = 0.25.
In analyzing the complexity of selection sort, we do not discuss "worst case" or "best case" scenarios. In fact, the algorithm does not depend on the data being sorted: it always performs exactly the same iterations of the outer and inner loops. (To be precise, the only data-dependent difference is whether the swap operation is performed, but we do not count the swap in this analysis.) Thus, when we talk about the "first n/2 iterations" of the outer loop, we do not assume that the algorithm was able to finish the job in n/2 iterations. We simply evaluate the amount of work done in the first n/2 iterations, leaving the work in the remaining iterations aside.
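The counting argument can be checked numerically. Assuming the usual selection-sort structure (outer iteration i scans the remaining n - i - 1 elements; the book's exact code isn't quoted here), the inner-loop work of just the first n/2 outer iterations already exceeds n^2/4:

```python
def first_half_inner_work(n):
    # Sum the inner-loop lengths of the first n//2 outer iterations:
    # outer iteration i (0-based) scans n - i - 1 remaining elements.
    return sum(n - i - 1 for i in range(n // 2))

n = 1000
print(first_half_inner_work(n), n * n // 4)  # 374750 vs 250000
```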
I'm trying to find out which is the Theta complexity of this algorithm.
(a is a list of integers)
def sttr(a):
    s = []  # assuming s starts as an empty stack; the initialization was missing
    for i in xrange(0, len(a)):
        while s != [] and a[i] >= a[s[-1]]:
            s.pop()
        s.append(i)
    return s
On the one hand, I can say that append is executed n times (where n is the length of a), so pop is too, and the last thing to consider is the while condition, which is evaluated at most about 2n times.
From this I can say that the algorithm does at most about 4n work, so it is Theta(n).
But isn't that amortised analysis?
On the other hand, I can say this:
There are 2 nested loops. The for loop executes exactly n times. The while loop could execute up to n times per iteration, since it removes an item on each pass. So the complexity is Theta(n*n).
I want to compute Theta but don't know which of these two arguments is correct. Could you give me advice?
The answer is Theta(n), and your first argument is correct.
This is not amortized analysis.
To get to amortized analysis, you have to look at the inner loop. You can't easily say how fast the while loop executes if you ignore the rest of the algorithm. The naive bound would be O(n), and that's correct, since that's the maximum number of iterations. However, since we know that the total number of inner-loop executions over the whole run is O(n) (your argument), and that the loop is reached n times, we can say that the complexity of the inner loop is O(1) amortized.
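Counting the pops directly illustrates the amortized bound (this is a Python 3 rewrite of the snippet above, with s initialized to an empty stack):

```python
def sttr_instrumented(a):
    # Stack-based scan; also count the total number of pops.
    s, pops = [], 0
    for i in range(len(a)):
        while s and a[i] >= a[s[-1]]:
            s.pop()
            pops += 1
        s.append(i)
    return s, pops

# Each index is pushed exactly once, so across the whole run the number
# of pops can never exceed len(a), however the while iterations are
# distributed among the for iterations.
s, pops = sttr_instrumented([3, 1, 4, 1, 5, 9, 2, 6])
print(s, pops)  # [5, 7] 6
```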
I'm trying to prove that the following algorithm runs in O(n^2) time.
I'm given the code, which is mostly pseudocode:

function generalFunction(value)
    for (i = 1 to value.length - 1)       // value is an array, and this runs n-1 times
        for (j = 1 to value.length - i)   // this runs (n-1)(n-1) times?
            if value[j-1] > value[j]      // does this run n^2 times?
                swap value[j-1] and value[j]  // how often does this run?
For the first line, I calculated that it runs n-1 times: the loop would go n times, but since we subtract 1 from the length of the array, it runs n-1 times (I believe).
The same can be said for the second line, except we have to multiply it by the outer for loop.
However, I'm not sure about the last two lines. Would the third one run n^2 times? I only wrote n^2 because of the two nested loops. I'm not sure how to approach the last line either; any input would be much appreciated.
Yes, this runs in n^2 time, as per your comments. Note that whether the inner if statement's swap executes is irrelevant to the fact that the double loop runs on the order of n^2 times. Also, the minus one (n-1) still makes it n^2, since you are basically looking for an upper-bound approximation, and n^2 is the tightest such bound: (n-1)(n-1) = n^2 - 2n + 1 is dominated by the n^2 term.
For the definition and a worked example similar to this one, see the examples section of the Wikipedia article on Big O notation.
P.S. Big O is about the worst-case scenario. In the worst case the if condition is always true, so the swap happens on every inner-loop cycle; since the inner loop shrinks by one each pass, a breakpoint on the swap would get hit (n-1) + (n-2) + ... + 1 = n(n-1)/2 times, which is still Theta(n^2).
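To make the count concrete, here is a runnable bubble-sort sketch matching the pseudocode (0-based Python; the comparison counter is my addition):

```python
def general_function(value):
    # Bubble sort; count how many times the comparison executes.
    comparisons = 0
    n = len(value)
    for i in range(1, n):              # outer loop: n - 1 passes
        for j in range(1, n - i + 1):  # inner loop shrinks each pass
            comparisons += 1
            if value[j - 1] > value[j]:
                value[j - 1], value[j] = value[j], value[j - 1]
    return comparisons

data = [5, 2, 4, 1, 3]
print(general_function(data), data)  # 10 comparisons (5*4/2); data is sorted
```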