I was looking at the bubble sort algorithm on Wikipedia, and it seems that the worst case is O(n^2).
Let's take an array of size n:
int a[] = {1, 2, 3, 4, 5, ..., n};
For n elements, the total number of comparisons is therefore (n - 1) + (n - 2) + ... + 2 + 1 = n(n - 1)/2, which is O(n^2).
Can anyone explain to me how n(n-1)/2 equals O(n^2)? I am not able to understand how they came to the conclusion that the worst-case complexity of this algorithm is O(n^2).
They are looking at the case where n approaches infinity. For large n, n(n-1)/2 is practically the same as n*n/2, i.e. n^2/2.
And since they are only looking at how the running time grows as n increases, constant factors are irrelevant. In this case, when n doubles, the algorithm takes roughly 4 times longer to execute. So we end up with n^2, i.e. O(n^2).
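If it helps to see where the n(n-1)/2 figure comes from, here is a minimal Python sketch (the function name is just for illustration) that counts the comparisons a plain bubble sort makes and checks the count against n(n-1)/2:

def count_bubble_comparisons(arr):
    """Plain bubble sort (no early exit); returns the number of comparisons made."""
    a = list(arr)
    n = len(a)
    comparisons = 0
    for i in range(n - 1):
        # After pass i, the last i elements are already in place, so the inner
        # loop shrinks: (n - 1) + (n - 2) + ... + 1 comparisons in total.
        for j in range(n - 1 - i):
            comparisons += 1
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return comparisons

for n in (10, 100, 1000):
    reversed_input = list(range(n, 0, -1))   # a worst-case (reversed) input
    print(n, count_bubble_comparisons(reversed_input), n * (n - 1) // 2)

Doubling n roughly quadruples the comparison count, which is exactly the O(n^2) behaviour described above.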
After analyzing an algorithm I was working on, this is the running time:
N           Time (s)
1000        0.019123315811157227
10000       0.11949563026428223
100000      1.4074015617370605
1000000     16.07071304321289
The algorithm simply returns the common points of two 2D arrays ('a' and 'b').
This is the code that was used:
import time

def common_points(a, b):
    start = time.time()
    specialSuperSorting(a)   # insertion sort - ~1/4 N^2
    specialSuperSorting(b)   # insertion sort - ~1/4 N^2
    common = []
    for i in range(len(a)):
        x = a[i]
        # specialBinarySearch: returns True if the coordinates (x, y) are found, False if not
        y = specialBinarySearch(b, x)
        if y:
            common.append(x)
    end = time.time()
    print(len(a), ' ', end - start)
    return common
I know I could have used a faster sorting algorithm, but I just took the easier path to save time, since this was just an exercise.
So is this sub-quadratic time? And how can I decide based on the table of N against T(N) alone, without having the algorithm itself?
The runtime of an algorithm will be dominated by its slowest component. Insertion sort is already O(n^2) or quadratic, so your algorithm will be at least this slow. Assuming specialBinarySearch is O(log n), and you run it n times, this part of the algorithm will be O(n log n).
In summary, your algorithm runs in O(1/4 n^2 + 1/4 n^2 + n log n) = O(n^2). It's quadratic; the 1/4 factors don't change that. You can check how your measurements actually grow by plotting N against T(N), for example on a log-log scale, and comparing the slope against linear, n log n, and quadratic growth.
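As for deciding from the table of N against T(N) alone: if T(N) grows roughly like c * N^p, then p is the slope on a log-log plot, so you can estimate it from successive rows. A minimal Python sketch, using the timings copied from the table above (this only estimates the empirical growth rate over the measured range; it cannot tell you the true worst-case bound):

import math

# (N, T) pairs copied from the table in the question
data = [(1000, 0.019123), (10000, 0.119496), (100000, 1.407402), (1000000, 16.070713)]

for (n1, t1), (n2, t2) in zip(data, data[1:]):
    # If T(N) ~ c * N^p, then p = log(T2/T1) / log(N2/N1) for consecutive rows.
    p = math.log(t2 / t1) / math.log(n2 / n1)
    print(f"N: {n1} -> {n2}, estimated exponent p = {p:.2f}")

Exponents near 1 suggest roughly linear (or n log n) behaviour over the measured range, exponents near 2 suggest quadratic behaviour, and small inputs can be dominated by constant overheads, so treat the first rows with caution.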
I would like to know (because I couldn't find any information online) how an algorithm like O(n * m^2), O(n * k), or O(n + k) is supposed to be analysed.
Does only the n count?
Are the other terms superfluous?
So is O(n * m^2) actually O(n)?
No, the k and m terms are not superfluous; they are essential for computing the time complexity. Together they give the concrete complexity of the code.
It may seem like n and k are independent of each other in the code, but together they determine the complexity of the algorithm.
Say you have to iterate over a loop of n elements, and inside it you have another loop of k iterations; then the overall complexity is O(nk).
Complexity of order O(nk); you can't discard k here:
for (i = 0; i < n; i++)
    for (j = 0; j < k; j++)
        // do something
Complexity of order O(n + k); you can't discard k here:
for (i = 0; i < n; i++)
    // do something
for (j = 0; j < k; j++)
    // do something
Complexity of order O(n * m^2); you can't discard m here:
for (i = 0; i < n; i++)
    for (j = 0; j < m; j++)
        for (k = 0; k < m; k++)
            // do something
As for the last question (so is O(n * m^2) actually O(n)?): no, O(n * m^2) can't be reduced to O(n), because that would mean m has no significance, which is not actually the case.
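If you want to convince yourself concretely that m is not superfluous, hold n fixed and grow m: the amount of work grows with m^2. Here is a minimal Python sketch of the triple loop above, with a hypothetical operation counter standing in for "do something":

def count_ops(n, m):
    """Count how many times the innermost body of the O(n * m^2) triple loop runs."""
    ops = 0
    for i in range(n):
        for j in range(m):
            for k in range(m):
                ops += 1          # "do something"
    return ops

n = 10                            # keep n fixed
for m in (10, 20, 40):
    print("n =", n, " m =", m, " ops =", count_ops(n, m))   # ops == n * m * m

Doubling m quadruples the work even though n never changes, so the m term cannot be dropped from O(n * m^2).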
FORMALLY: O(f(n)) is the SET of ALL functions T(n) that satisfy:
There exist positive constants c and N such that, for all n >= N,
T(n) <= c f(n)
Here are some examples of when and why factors other than n matter.
[1] 1,000,000 n is in O(n). Proof: set c = 1,000,000, N = 0.
Big-Oh notation doesn't care about (most) constant factors. We generally leave constants out; it's unnecessary to write O(2n), because O(2n) = O(n). (The 2 is not wrong; just unnecessary.)
[2] n is in O(n^3). [That's n cubed]. Proof: set c = 1, N = 1.
Big-Oh notation can be misleading. Just because an algorithm's running time is in O(n^3) doesn't mean it's slow; it might also be in O(n). Big-Oh notation only gives us an UPPER BOUND on a function.
[3] n^3 + n^2 + n is in O(n^3). Proof: set c = 3, N = 1.
Big-Oh notation is usually used only to indicate the dominating (largest and most displeasing) term in the function. The other terms become insignificant when n is really big.
These examples aren't generalizable, and each case may be different. That is the answer to the questions "Does only the n count? Are the other terms superfluous?"
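If it helps, the witnesses (c, N) in the examples above can be spot-checked numerically. The following Python sketch is only a finite check over a range of n, not a proof, and the helper name holds is just for illustration:

def holds(T, f, c, N, upto=10_000):
    """Finite spot-check of a big-O witness: T(n) <= c * f(n) for all N <= n <= upto."""
    return all(T(n) <= c * f(n) for n in range(N, upto + 1))

# [1] 1,000,000 n is in O(n), with c = 1,000,000 and N = 0
print(holds(lambda n: 1_000_000 * n, lambda n: n, c=1_000_000, N=0))   # True
# [2] n is in O(n^3), with c = 1 and N = 1
print(holds(lambda n: n, lambda n: n**3, c=1, N=1))                    # True
# [3] n^3 + n^2 + n is in O(n^3), with c = 3 and N = 1
print(holds(lambda n: n**3 + n**2 + n, lambda n: n**3, c=3, N=1))      # True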
Although there is already an accepted answer, I'd still like to provide the following input:
O(n * m^2): can be viewed as n*m*m, and assuming that the bounds for n and m are similar, the complexity would be O(n^3).
Similarly:
O(n * k): would be O(n^2) (with the bounds for n and k being similar),
and:
O(n + k): would be O(n) (again, with the bounds for n and k being similar).
PS: It is better not to assume similarity between the variables and to first understand how the variables relate to each other (e.g. m = n/2, k = 2n) before attempting to conclude.
I came across this piece of code that performs merge sort on a linked list.
The author claims that it runs in O(n log n) time.
Here is the link for it:
http://www.geeksforgeeks.org/merge-sort-for-linked-list/
My claim is that it takes at least O(n^2) time, and here is my argument.
Look: you divide the list (be it an array or a linked list) log n times (refer to the recursion tree). During each partition of a list of size i = n, n/2, ..., n/2^k, we take O(i) time to partition the original/already-divided list. Since the sum of the O(i) terms is O(n), we can say (sloppily) that we take O(n) time to partition for any given call of partition. Given the time taken to perform a single partition, the question is how many partitions happen in total. We observe that the number of partition calls at level i is 2^i, so summing 2^0 + 2^1 + ... + 2^(lg n - 1) gives 2^(lg n) - 1, which is just n - 1 on simplification, implying that we call partition n - 1 times (let's approximate it to n). So the complexity is at least Omega(n^2).
If I am wrong, please let me know where. Thanks :)
Then, after some reflection, I applied the master method to the recurrence relation, replacing the Theta(1) divide cost of conventional merge sort on arrays with Theta(n) for this kind of merge sort (because the divide and combine operations each take Theta(n) time), and the running time turned out to be Theta(n lg n).
I also noticed that the cost at each level is n (because 2^i * (n / 2^i) = n is the time taken at each level), so it's Theta(n) at each level times lg n levels, implying Theta(n lg n). Did I just answer my own question? Please help, I'm a bit confused.
The recursive complexity definition for an input list of size n is
T(n) = O(n) + 2 * T(n / 2)
Expanding this we get:
T(n) = O(n) + 2 * (O(n / 2) + 2 * T(n / 4))
= O(n) + O(n) + 4 * T(n / 4)
Expanding again we get:
T(n) = O(n) + O(n) + O(n) + 8 * T(n / 8)
Clearly there is a pattern here. Since we can repeat this expansion exactly O(log n) times, we have
T(n) = O(n) + O(n) + ... + O(n) (O(log n) terms)
= O(n log n)
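If it helps to see this numerically, here is a small Python sketch that evaluates the recurrence directly (assuming the O(n) term is exactly n, T(1) = 1, and rounding n/2 down) and compares it to n log2 n:

import math

def T(n):
    """T(n) = n + 2*T(n/2) with T(1) = 1: the recurrence above with the O(n) term set to n."""
    if n <= 1:
        return 1
    return n + 2 * T(n // 2)

for n in (2**10, 2**15, 2**20):
    print(n, T(n), round(T(n) / (n * math.log2(n)), 3))

The ratio T(n) / (n log2 n) settles close to a constant as n grows, which is the O(n log n) behaviour derived above.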
You are performing the sum twice, which is where the argument goes wrong.
Splitting and merging a linked list of size n takes O(n) time, and the depth of the recursion is O(log n).
Your argument was that a splitting step takes O(i) time and the sum of the split steps becomes O(n), and then you treat that as the time taken to perform only one split.
Instead, consider this: a problem of size n forms two n/2 problems, four n/4 problems, eight n/8 problems, and so on until 2^(log n) subproblems of size n/2^(log n) are formed. Sum these up and you get O(n log n) to perform the splits.
Another O(n log n) is needed to combine the subproblems.
For 3-way Quicksort (dual-pivot quicksort), how would I go about finding the Big-O bound? Could anyone show me how to derive it?
There's a subtle difference between finding the complexity of an algorithm and proving it.
To find the complexity of this algorithm, you can do as amit said in the other answer: you know that on average you split your problem of size n into three smaller problems of size n/3, so after log_3(n) steps on average you get down to problems of size 1. With experience, you will start to get a feel for this approach and be able to deduce the complexity of algorithms just by thinking about them in terms of the subproblems involved.
To prove that this algorithm runs in O(n log n) in the average case, you use the Master Theorem. To use it, you have to write the recurrence giving the time spent sorting your array. As we said, sorting an array of size n can be decomposed into sorting three arrays of size n/3, plus the time spent building them. This can be written as follows:
T(n) = 3T(n/3) + f(n)
Where T(n) is a function giving the resolution "time" for an input of size n (actually the number of elementary operations needed), and f(n) gives the "time" needed to split the problem into subproblems.
For 3-way quicksort, f(n) = c*n, because you go through the array, check where to place each item, and eventually make a swap. This places us in Case 2 of the Master Theorem, which states that if f(n) = Theta(n^(log_b(a)) * log^k(n)) for some k >= 0 (in our case k = 0), then
T(n) = Theta(n^(log_b(a)) * log^(k+1)(n))
As a = 3 and b = 3 (we read these off the recurrence relation T(n) = aT(n/b) + f(n)), this simplifies to
T(n) = O(n log n)
And that's a proof.
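If you also want an empirical sanity check of the average case (not a proof), here is a minimal Python sketch of a 3-way partitioning sort with two pivots that counts the linear partitioning work per call and compares the total to n * log_3(n). The pivot choice and helper names are mine, for illustration only, and this is written for clarity rather than efficiency:

import math
import random

def dual_pivot_sort(arr, stats):
    """Partition around two pivots into three parts; stats['work'] models f(n) = c*n."""
    if len(arr) <= 1:
        return list(arr)
    stats["work"] += len(arr)                  # model the partitioning cost as the call's input size
    p, q = min(arr[0], arr[-1]), max(arr[0], arr[-1])   # two pivots (random for random input)
    rest = arr[1:-1]
    low  = [x for x in rest if x < p]
    mid  = [x for x in rest if p <= x <= q]
    high = [x for x in rest if x > q]
    return (dual_pivot_sort(low, stats) + [p] +
            dual_pivot_sort(mid, stats) + [q] +
            dual_pivot_sort(high, stats))

for n in (1_000, 10_000, 100_000):
    data = random.sample(range(10 * n), n)     # distinct keys in random order
    stats = {"work": 0}
    assert dual_pivot_sort(data, stats) == sorted(data)
    print(n, stats["work"], round(stats["work"] / (n * math.log(n, 3)), 2))

The ratio in the last column should stay roughly constant as n grows, which is what O(n log n) average-case behaviour looks like empirically.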
Well, the same proof actually holds.
Each iteration splits the array into 3 sublists; on average, each of these sublists has size n/3.
Thus the number of iterations needed is log_3(n), because you need to find how many times you can keep dividing by 3, (((n/3)/3)/3)..., until you get down to one. This gives you the equation:
n/(3^i) = 1
which is satisfied for i = log_3(n).
Each iteration still goes over all of the input (just split across different sublists), the same as in quicksort, which gives you O(n * log_3(n)).
Since log_3(n) = log(n)/log(3) = log(n) * CONSTANT, the running time is O(n log n) on average.
Note that even if you take a more pessimistic approach to estimating the sizes of the sublists, by taking the minimum of the uniform distribution, you still get a first sublist of size 1/4, a second sublist of size 1/2, and a last sublist of size 1/4 (the minimum and maximum of the uniform distribution), which again decays in log_k(n) iterations (with a different k > 2) and again yields O(n log n) overall.
Formally, the proof will be something like:
Each iteration takes at most c_1 * n operations to run, for each n > N_1, for some constants c_1, N_1. (This is the definition of big-O notation, together with the claim that each iteration is O(n) excluding the recursion; convince yourself why this is true. Note that here "iteration" means all the work done by the algorithm at a certain "level", not in a single recursive invocation.)
As seen above, you have log_3(n) = log(n)/log(3) iterations in the average case (taking the optimistic version here; the same principle applies to the pessimistic one).
Now we get that the running time T(n) of the algorithm satisfies:
for each n > N_1:
T(n) <= c_1 * n * log(n)/log(3)
T(n) <= (c_1/log(3)) * n * log(n)
By the definition of big-O notation, this means T(n) is in O(n log n), with c = c_1/log(3) and N = N_1.
QED
Can someone explain to me why this is true? I heard a professor mention it in his lecture.
The two notions are orthogonal.
You can have worst-case asymptotics. If f(n) denotes the worst-case time taken by a given algorithm on inputs of size n, you can have e.g. f(n) = O(n^3) or other asymptotic upper bounds on the worst-case time complexity.
Likewise, you can have g(n) = O(n^2 log n), where g(n) is the average time taken by the same algorithm on (say) uniformly distributed (random) inputs of size n.
Or you can have h(n) = O(n), where h(n) is the average time taken by the same algorithm on particularly distributed random inputs of size n (e.g. almost-sorted sequences for a sorting algorithm).
Asymptotic notation is a "measure". You have to specify what you want to count: worst case, best case, average, etc.
Sometimes, you are interested in stating asymptotic lower bounds of (say) the worst case complexity. Then you write f(n) = Omega(n^2) to state that in the worst case, the complexity is at least n^2. The big-Omega notation is opposite to big-O: f = Omega(g) if and only if g = O(f).
Take quicksort as an example. Each recursive call of quicksort on an input of size n has a running time T(n) of
T(n) = O(n) + 2 T((n - 1)/2)
in the 'best case', when the unsorted input list is split into two equal sublists of size (n - 1)/2 in each call. Solving for T(n) gives O(n log n) in this case. If the partition is not perfect and the two sublists are not of equal size, i.e.
T(n) = O(n) + T(k) + T(n - 1 - k),
we still obtain O(n log n) as long as both sublists receive at least a constant fraction of the elements (e.g. at least n/10 each), just with a larger constant factor. This is because the recursion depth remains O(log n) when each partition shrinks the sublists by a constant factor.
However, in the 'worst case' essentially no division of the input list takes place, i.e.:
T(n) = O(n) + T(0) + T(n - 1) = O(n) + O(n - 1) + T(n - 2) = O(n) + O(n - 1) + O(n - 2) + ...
This happens e.g. if we take the first element of a sorted list as the pivot element.
Here, T(0) means that one of the resulting sublists is empty and therefore takes no computing time (since it has zero elements). All the remaining work, T(n - 1), is needed for the second sublist. In this case, we obtain O(n^2).
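If you want to see the two cases side by side, here is a minimal Python sketch (my own illustration, not from any lecture) of a quicksort that always uses the first element as the pivot. It counts comparisons on a shuffled input versus an already-sorted one; the sorted input triggers exactly the T(n) = O(n) + T(0) + T(n - 1) pattern described above:

import random
import sys

sys.setrecursionlimit(5_000)            # the sorted case recurses about n levels deep

def quicksort_first_pivot(arr, stats):
    """Quicksort using the first element as pivot; stats['cmp'] counts comparisons."""
    if len(arr) <= 1:
        return list(arr)
    pivot, rest = arr[0], arr[1:]
    left, right = [], []
    for x in rest:
        stats["cmp"] += 1               # one comparison against the pivot
        (left if x < pivot else right).append(x)
    return (quicksort_first_pivot(left, stats) + [pivot] +
            quicksort_first_pivot(right, stats))

n = 2_000
for name, data in [("shuffled input", random.sample(range(n), n)),
                   ("sorted input", list(range(n)))]:
    stats = {"cmp": 0}
    assert quicksort_first_pivot(data, stats) == sorted(data)
    print(name, stats["cmp"])

On the shuffled input the count grows like n log n, while on the already-sorted input it is close to n(n - 1)/2, i.e. quadratic.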
If an algorithm had no worst-case scenario, it would be not only O(f(n)) but also o(f(n)) (big-O vs. little-o notation).
The asymptotic bound describes the expected behaviour as n goes to infinity; mathematically, it is just the limit as n goes to infinity. The worst-case behaviour, however, applies to a finite number of operations.