What is the (a) worst case, (b) best case, and (c) average case complexity of the following function, which calculates the mean?
n = 0
sum = 0
input(x)
while x != -999 do
    n = n + 1
    sum = sum + x
    input(x)
end {while}
mean = sum / n
How would you justify the complexity?
You stop as soon as -999 is found.
Therefore:
the best case is O(1): -999 is the very first input
the worst case is O(n), where n is the size of the input: -999 does not appear until the very end of the input
the average case is O(n/2) = O(n): on average, -999 appears in the middle of the input
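For concreteness, here is a minimal runnable Python sketch of the same sentinel loop (the function name read_mean and the use of a list argument in place of interactive input are my own, purely for illustration):

def read_mean(values):
    """Average all values that appear before the -999 sentinel."""
    n = 0
    total = 0
    for x in values:
        if x == -999:       # sentinel found: stop immediately (best case: it is the first value)
            break
        n += 1
        total += x
    return total / n        # like the pseudocode, this divides by zero if -999 comes first

print(read_mean([4, 8, 6, -999]))   # 6.0  (worst case: the sentinel is the last input)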
I know this algorithm (merge sort) has a time complexity of O(n log n), but if we speak about only the merge step, is this one still O(n log n)? Or is it reduced to O(log n)? I believe the second is the answer, but since we still have to touch every element in the array, I suspect the complexity remains the same.
Cheers!
The "split" step is the one that takes o(logn), and the merge one is o(n), just realized that via a comment.
The split step of Merge Sort will take O(n) instead of O(log(n)).
If we write down the runtime recurrence of the split step:
T(n) = 2T(n/2) + O(1)
where T(n) is the runtime for input size n, 2 is the number of new subproblems, n/2 is the size of each new subproblem, and O(1) is the constant time to split an array in half.
We also have the base cases: T(4) = O(1) and T(3) = O(1).
Unrolling the recurrence (informally), the recursion tree has O(n) nodes in total, each contributing O(1) work, so:
T(n) = O(n) * O(1) = O(n)
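As an informal sanity check, here is a small Python sketch (the helper name count_splits is an assumption, purely for illustration) that counts the split operations of a top-down merge sort; the total comes out to n - 1, in line with the O(n) bound:

def count_splits(n):
    """Count the split operations a top-down merge sort performs on an input of size n."""
    if n <= 1:
        return 0                    # base case: nothing left to split
    half = n // 2
    return 1 + count_splits(half) + count_splits(n - half)

print(count_splits(16))    # 15  (always n - 1 splits)
print(count_splits(1000))  # 999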
Moreover, to understand the time complexity of the merge step (the "finger algorithm"), we should look at the number of sub-arrays.
In the worst case, the number of sub-arrays is O(n/2 + 1) = O(n).
The "finger algorithm" grows linearly with the number of sub-arrays: it loops through each of the O(n) sub-arrays, and at each sub-array, in the worst case, it needs to loop 2 more times, so the time complexity of the merge step (finger algorithm) is O(2n) = O(n).
How does a program's worst case or average case depend on the log function? How does the base of the log come into play?
The log factor appears when you split your problem into k parts of size n/k each, and then "recurse" (or mimic recursion) on some of them.
A simple example is the following loop:
def foo(n):
    while n > 0:
        print(n)        # print the current value, then halve it
        n = n // 2      # integer halving
The above will print n, n/2, n/4, ..., 1 - and there are O(log n) such values.
The complexity of the above program is O(log n), since each print takes a constant amount of time and the number of values n takes along the way is O(log n).
If you are looking for a "real life" example: in quicksort (and for simplicity let's assume it splits into exactly two halves), you split the array of size n into two subarrays of size n/2 and then recurse on both of them - invoking the algorithm on each half.
This gives the recurrence:
T(n) = 2T(n/2) + O(n)
By the master theorem, this is in Theta(n log n).
Similarly, in binary search you split the problem into two parts, and recurse on only one of them:
T(n) = T(n/2) + 1
which will be in Theta(log n).
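For illustration, here is a minimal iterative binary search in Python (the variable names lo/hi/mid are my own); each iteration halves the remaining range, which is exactly the T(n) = T(n/2) + 1 pattern:

def binary_search(a, target):
    """Return an index of target in the sorted list a, or -1; O(log n) iterations."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1           # discard the left half
        else:
            hi = mid - 1           # discard the right half
    return -1

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3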
The base is not a factor in big O complexity, because
log_k(n) = log_2(n)/log_2(k)
and log_2(k) is constant, for any constant k.
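A quick numeric check of that change-of-base identity in Python (the particular values of n and k are arbitrary examples):

import math

n, k = 1_000_000, 8
print(math.log(n, k))                  # ~6.6439
print(math.log2(n) / math.log2(k))     # same value: log_k(n) = log_2(n) / log_2(k)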
I was looking at the bubble sort algorithm on the wiki; it seems that the worst case is O(n^2).
Let's take an array of size n.
int a = [1,2,3,4,5.....n]
For any n elements, the total number of comparisons is therefore (n - 1) + (n - 2) + ... + 2 + 1 = n(n - 1)/2, or O(n^2).
Can anyone explain to me how n(n-1)/2 equals O(n^2)? I am not able to understand how they came to the conclusion that the worst case analysis of this algorithm is O(n^2).
They are looking at the case where N approaches infinity. So n(n-1)/2 is practically the same as n*n/2, i.e. n^2/2.
And since they are only looking at how the running time grows as N increases, constants are irrelevant. In this case, when N doubles, the algorithm takes 4 times longer to execute. So we end up with n^2, i.e. O(n^2).
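To see the n(n-1)/2 count directly, here is a small Python sketch of a textbook bubble sort instrumented to count comparisons (the instrumentation is my own addition, for illustration); with n = 10 it reports 45 = 10*9/2 comparisons:

def bubble_sort_comparisons(a):
    """Bubble sort that returns the number of comparisons performed."""
    a = list(a)
    comparisons = 0
    for i in range(len(a) - 1):
        for j in range(len(a) - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # swap out-of-order neighbours
    return comparisons

print(bubble_sort_comparisons(range(10, 0, -1)))   # 45 comparisons = 10*9/2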
What is the (a) worst case, (b) best case, and (c) average case complexity of the following function, which does simple division?
while n >= 1 do
    n = n DIV 2
end {while}
How would you justify the complexity?
The function is O(log n), since it will require exactly floor(log2(n)) + 1 iterations.
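A minimal Python sketch of the same halving loop, instrumented to count its iterations (the counting is my own addition); the count matches floor(log2(n)) + 1:

import math

def halve_until_zero(n):
    """Repeatedly halve n (integer division) and count the iterations."""
    iterations = 0
    while n >= 1:
        n = n // 2          # the pseudocode's "n = n DIV 2"
        iterations += 1
    return iterations

for n in (1, 7, 8, 1000):
    print(n, halve_until_zero(n), math.floor(math.log2(n)) + 1)
    # both counts agree: 1, 3, 4, and 10 iterations respectively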
Can someone explain to me why this is true? I heard a professor mention this in his lecture.
The two notions are orthogonal.
You can have worst case asymptotics. If f(n) denotes the worst case time taken by a given algorithm on inputs of size n, you can have e.g. f(n) = O(n^3) or other asymptotic upper bounds of the worst case time complexity.
Likewise, you can have g(n) = O(n^2 log n) where g(n) is the average time taken by the same algorithm with (say) uniformly distributed (random) inputs of size n.
Or you can have h(n) = O(n) where h(n) is the average time taken by the same algorithm with particularly distributed random inputs of size n (e.g. almost sorted sequences for a sorting algorithm).
Asymptotic notation is a "measure". You have to specify what you want to count: worst case, best case, average, etc.
Sometimes, you are interested in stating asymptotic lower bounds of (say) the worst case complexity. Then you write f(n) = Omega(n^2) to state that in the worst case, the complexity is at least n^2. The big-Omega notation is opposite to big-O: f = Omega(g) if and only if g = O(f).
Take quicksort as an example. Each recursive call of quicksort on an input of size n has a run-time complexity T(n) of
T(n) = O(n) + 2 T[ (n-1)/2 ]
in the 'best case', where the unsorted input list is split into two equal sublists of size (n-1)/2 in each call. Solving for T(n) gives O(n log n) in this case. If the partition is not perfect and the two sublists are not of equal size, i.e.
T(n) = O(n) + T(k) + T(n - 1 - k),
we still obtain O(n log n) as long as k is at least a constant fraction of n (say k = n/10), just with a larger constant factor. This is because the recursion depth then remains O(log n), while the total work done across all calls at each depth stays O(n).
However, in the 'worst case' the partition does not divide the input list at all, i.e. one of the sublists is empty:
T(n) = O(n) + T(0) + T(n - 1) = O(n) + O(n-1) + T(n-2) = O(n) + O(n-1) + O(n-2) + ... .
This happens e.g. if we take the first element of a sorted list as the pivot element.
Here, T(0) means that one of the resulting sublists is empty and therefore takes no computing time (since it has zero elements). All the remaining work T(n-1) is needed for the second sublist. In this case, we obtain O(n²).
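As a hedged illustration (a simple textbook quicksort, not the answer's own code), the following Python sketch uses the first element as the pivot; on an already sorted list every partition leaves one side empty, which is exactly the degenerate recurrence above:

def quicksort(a):
    """First-element-pivot quicksort: O(n log n) on average, O(n^2) on sorted input."""
    if len(a) <= 1:
        return a
    pivot, rest = a[0], a[1:]
    left = [x for x in rest if x < pivot]       # empty when the input is already sorted
    right = [x for x in rest if x >= pivot]     # gets all n-1 remaining elements in that case
    return quicksort(left) + [pivot] + quicksort(right)

print(quicksort([3, 1, 4, 1, 5, 9, 2, 6]))      # [1, 1, 2, 3, 4, 5, 6, 9]
# On already sorted input the left list is always empty, the recursion depth is ~n,
# and we get the Theta(n^2) worst case described above.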
If an algorithm had no worst case scenario, it would not only be O[f(n)] but also o[f(n)] (big-O vs. little-o notation).
The asymptotic bound is the expected behaviour as n goes to infinity; mathematically, it is just the limit as n goes to infinity. The worst case behaviour, however, applies to a finite number of operations.