What is the order of f = (n log(log n)) / log n? Will it be O(n)? - big-o

I was wondering about the big-O time of my function. I believe it is O(n), since log(log(n)) <= log(n) for large n, so the factor log(log n)/log n is at most 1 and f(n) <= n.
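A quick numeric check (just a sketch, not a proof) supports this: since log(log n) <= log n for n > e, the ratio f(n)/n = log(log n)/log n never exceeds 1, and in fact shrinks as n grows.

#include <stdio.h>
#include <math.h>

/* Sketch: compare f(n) = n*log(log(n))/log(n) against n for growing n.
   Natural log is used; changing the base only changes a constant factor. */
int main(void) {
    double ns[] = {1e2, 1e4, 1e6, 1e9, 1e12};
    for (int i = 0; i < 5; i++) {
        double n = ns[i];
        double f = n * log(log(n)) / log(n);
        printf("n = %.0e   f(n) = %.3e   f(n)/n = %.3f\n", n, f, f / n);
    }
    return 0;
}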

Related

Big O Notation With Separated Procedures

Today marks the first day I begin studying Algorithms and Algorithm Analysis, more specifically asymptotic analysis. But before I dive in, I have one simple question that needs clarification that I can't seem to find answered anywhere else. Given the code snippet below, what would the algorithm's complexity be in Big O notation?
// Linear Computation: O(n)
....
// Merge Sort: O(n log(n))
....
// Merge Sort: O(n log(n))
....
// Nested for loops iterating n times each: O(n^2)
....
My assumption would be Big O: O(n) + 2 * O(n log(n)) + O(n^2), but by the definition of Big O, do we simplify this further? Would we just call this program O(n^2), considering it is the worst of the three and the whole thing can be upper bounded by a constant c * n^2?
When calculating time complexities, we express them in Big-O. Since programs are large, it is not practical to total up every term, and there is also little point in doing so: in any expression made of big and small terms, changing a big term changes the value significantly, while changing a small term barely changes it. For example, take the value 10000021. If we change the leading 1 to 2 we get 20000021 (a huge change); if we change the trailing 1 to 2 we get 10000022 (a tiny change). Similarly, when a program contains an n^2 term, that term is what counts rather than O(n) or O(log n); changes in the smaller terms are not considered. Therefore we keep n^2.
Order: n!, 2^n, n^R, ..., n^3, n^2, n log n, n, log n (from fastest-growing to slowest).
Consider the largest term that is present in the program.
Usually, when calculating time complexities, a large input value is kept in mind; say n is 1,000,000.
When you say,
Big O: O(n) + (2 * O(n log(n))) + O(n^2),
for this large n, the n and 2 * n log(n) terms will not grow anywhere near as much as the n^2 term.
So O(n^2) decides the complexity for large n, and that is why the overall complexity given is O(n^2).
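To see this concretely, here is a minimal sketch (base-2 log assumed; the base only affects a constant) that prints the three terms for n = 1,000,000:

#include <stdio.h>
#include <math.h>

/* Sketch: rough size of each term for n = 1,000,000. */
int main(void) {
    double n = 1e6;
    printf("n          = %.0f\n", n);               /* linear pass     */
    printf("2 n log2 n = %.0f\n", 2 * n * log2(n)); /* two merge sorts */
    printf("n^2        = %.0f\n", n * n);           /* nested loops    */
    return 0;
}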
For given C1, C2, C3, you can find a C such that
C1 n + C2 n log(n) + C3 n² <= C n²
for all n >= 1; for example, C = C1 + C2 + C3 works, since n <= n² and n log(n) <= n² in that range.
The n^2 term dominates, so the big-O is going to be that. The fact that there are two n log n terms doesn't change anything, since there is a fixed number of them. And when n is big, n^2 is bigger than n log n.

Algorithm Running Time for O(n.m^2)

I would like to know, because I couldn't find any information online, how an algorithm like O(n * m^2) or O(n * k) or O(n + k) is supposed to be analysed.
Does only the n count?
The other terms are superfluous?
So O(n * m^2) is actually O(n)?
No, here the k and m terms are not superfluous; they have a valid existence and are essential for computing the time complexity. Together they give a concrete complexity for the code.
It may seem like the terms n and k are independent of each other in the code, but combined they determine the complexity of the algorithm.
Say you have to iterate a loop over n elements and, inside it, you have another loop of k iterations; then the overall complexity becomes O(nk).
Complexity of order O(nk): you can't dump/discard k here.
for (i = 0; i < n; i++)
    for (j = 0; j < k; j++)
        // do something
Complexity of order O(n + k): you can't dump/discard k here.
for (i = 0; i < n; i++)
    // do something
for (j = 0; j < k; j++)
    // do something
Complexity of order O(nm^2): you can't dump/discard m here.
for (i = 0; i < n; i++)
    for (j = 0; j < m; j++)
        for (k = 0; k < m; k++)
            // do something
Answer to the last question ("So O(n * m^2) is actually O(n)?"):
No, O(n * m^2) cannot be reduced further to O(n), because that would mean m has no significance, which is not actually the case.
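A minimal sketch (the counter and the function name count_ops are made up for illustration) shows that the triple loop above performs exactly n*m*m basic operations, so the cost depends on m just as much as on n:

#include <stdio.h>

/* Sketch: count the iterations of the O(n*m^2) loop pattern.
   The count is exactly n*m*m, so m cannot be discarded. */
long long count_ops(int n, int m) {
    long long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = 0; j < m; j++)
            for (int k = 0; k < m; k++)
                count++;            /* "do something" */
    return count;
}

int main(void) {
    printf("%lld\n", count_ops(100, 10));   /* 10,000 */
    printf("%lld\n", count_ops(100, 100));  /* 1,000,000 -- same n, 100x more work */
    return 0;
}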
FORMALLY: O(f(n)) is the SET of ALL functions T(n) that satisfy:
There exist positive constants c and N such that, for all n >= N,
T(n) <= c f(n)
Here are some examples of when and why factors other than n matter.
[1] 1,000,000 n is in O(n). Proof: set c = 1,000,000, N = 1.
Big-Oh notation doesn't care about (most) constant factors. We generally leave constants out; it's unnecessary to write O(2n), because O(2n) = O(n). (The 2 is not wrong; just unnecessary.)
[2] n is in O(n^3). [That's n cubed]. Proof: set c = 1, N = 1.
Big-Oh notation can be misleading. Just because an algorithm's running time is in O(n^3) doesn't mean it's slow; it might also be in O(n). Big-Oh notation only gives us an UPPER BOUND on a function.
[3] n^3 + n^2 + n is in O(n^3). Proof: set c = 3, N = 1.
Big-Oh notation is usually used only to indicate the dominating (largest and most displeasing) term in the function. The other terms become insignificant when n is really big.
These aren't generalizable, and each case may be different. That's the answer to the questions: "Does only the n count? The other terms are superfluous?"
Although there is already an accepted answer, I'd still like to provide the following inputs :
O(n * m^2): can be viewed as n*m*m; assuming that the bounds for n and m are similar, the complexity would be O(n^3).
Similarly -
O(n * k) : Would be O(n^2) (with the bounds for n and k being similar)
and -
O(n + k) : Would be O(n) (again, with the bounds for n and k being similar).
PS: It would be better not to assume similarity between the variables, and to first understand how the variables relate to each other (e.g. m = n/2, k = 2n) before attempting to conclude.
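As a small illustration of that PS (the relations m = n/2 and m = 10 are assumptions made up for the example, not taken from any real code): if m is tied to n, the n*m^2 count grows cubically in n, while if m is a fixed constant it grows only linearly.

#include <stdio.h>

/* Sketch: number of iterations of the O(n*m^2) loop pattern under two
   assumed relations between n and m. */
long long iterations(long long n, long long m) {
    return n * m * m;   /* how many times the innermost body would run */
}

int main(void) {
    for (long long n = 1000; n <= 8000; n *= 2)
        printf("n = %lld: m = n/2 -> %lld iterations, m = 10 -> %lld iterations\n",
               n, iterations(n, n / 2), iterations(n, 10));
    return 0;
}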

Running time complexity of bubble sort

I was looking at the bubble sort algorithm on the wiki, and it seems that the worst case is O(n^2).
Let's take an array size of n.
int a[] = {1, 2, 3, 4, 5, ..., n}
For any n elements, the total number of comparisons, therefore, is (n - 1) + (n - 2) + ... + 2 + 1 = n(n - 1)/2, or O(n^2).
Can anyone explain to me how n(n-1)/2 equals O(n^2)? I am not able to understand how they came to the conclusion that the worst-case analysis of this algorithm is O(n^2).
They are looking at the case when N approaches infinity. So n(n-1)/2 is practically the same as n*n/2, or n^2/2.
And since they are only looking at how the running time grows as N increases, constants are irrelevant. In this case, when N doubles the algorithm takes about 4 times longer to execute. So we end up with n^2, i.e. O(n^2).
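To check the count directly, here is a minimal bubble sort sketch (no early-exit optimization) with a comparison counter; it reports exactly n(n-1)/2 comparisons:

#include <stdio.h>

/* Sketch: plain bubble sort (no early exit) with a comparison counter.
   The counter ends at (n-1) + (n-2) + ... + 1 = n*(n-1)/2. */
long long bubble_sort(int a[], int n) {
    long long comparisons = 0;
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - 1 - i; j++) {
            comparisons++;
            if (a[j] > a[j + 1]) {       /* swap out-of-order neighbours */
                int tmp = a[j];
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
    return comparisons;
}

int main(void) {
    enum { N = 1000 };
    int a[N];
    for (int i = 0; i < N; i++) a[i] = N - i;    /* reverse-sorted input */
    printf("comparisons = %lld, n(n-1)/2 = %d\n", bubble_sort(a, N), N * (N - 1) / 2);
    return 0;
}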

O(n) - the next permutation lexicographically

I'm just wondering what the efficiency (the big-O) of this algorithm is:
Find the largest index k such that a[k] < a[k + 1]. If no such index exists, the permutation is the last permutation.
Find the largest index l such that a[k] < a[l]. Since k + 1 is such an index, l is well defined and satisfies k < l.
Swap a[k] with a[l].
Reverse the sequence from a[k + 1] up to and including the final element a[n].
As I understand it, the worst case is O(n) = n (when k is the first element of the previous permutation), and the best case is O(n) = 1 (when k is the last element of the previous permutation).
Can I say that O(n) = n/2 ?
O(n) = n/2 makes no sense. Let f(n) = n be the running time of your algorithm. Then the right way to say it is that f(n) is in O(n). O(n) is a set of functions that are at most asymptotically linear in n.
Your optimization makes the expected running time g(n) = n/2. g(n) is also in O(n). In fact O(n) = O(n/2) so your saving of half of the time does not change the asymptotic complexity.
All steps in the algorithm take O(n) time asymptotically.
Your averaging is incorrect. Just because the best case is O(1) and the worst case is O(n), you can't say the algorithm takes O(n) = n/2. Big O notation simply gives an upper bound for the algorithm.
So the algorithm is still O(n) irrespective of the best case scenario.
There is no such thing as O(n) = n/2.
When you do O(n) calculations you're just trying to find the functional dependency; you don't care about coefficients. So there's no O(n) = n/2, just like there's no O(n) = 5n.
Asymptotically, O(n) is the same as O(n/2). In any case, the algorithm is performed for each of the n! permutations, so the order is much greater than your estimate (on the order of n!).
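For reference, here is a minimal C sketch of the four steps from the question (the helper name next_permutation and the plain int array are my assumptions). Each step is at most one pass over the array, which is why a single call is O(n):

#include <stdio.h>

static void swap(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Sketch of the algorithm above. Returns 0 if a[] was the last permutation. */
int next_permutation(int a[], int n) {
    /* Step 1: largest k with a[k] < a[k+1] */
    int k = n - 2;
    while (k >= 0 && a[k] >= a[k + 1]) k--;
    if (k < 0) return 0;                      /* last permutation reached */

    /* Step 2: largest l with a[k] < a[l] */
    int l = n - 1;
    while (a[l] <= a[k]) l--;

    /* Step 3: swap a[k] and a[l] */
    swap(&a[k], &a[l]);

    /* Step 4: reverse a[k+1 .. n-1] */
    for (int i = k + 1, j = n - 1; i < j; i++, j--) swap(&a[i], &a[j]);
    return 1;
}

int main(void) {
    int a[] = {1, 2, 3};
    do {
        printf("%d %d %d\n", a[0], a[1], a[2]);
    } while (next_permutation(a, 3));
    return 0;
}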

Is worst case analysis not equal to asymptotic bounds

Can someone explain to me why this is true? I heard a professor mention it in his lecture.
The two notions are orthogonal.
You can have worst case asymptotics. If f(n) denotes the worst case time taken by a given algorithm on an input of size n, you can have e.g. f(n) = O(n^3) or other asymptotic upper bounds on the worst case time complexity.
Likewise, you can have g(n) = O(n^2 log n) where g(n) is the average time taken by the same algorithm with (say) uniformly distributed (random) inputs of size n.
Or you can have h(n) = O(n) where h(n) is the average time taken by the same algorithm with particularly distributed random inputs of size n (eg. almost sorted sequences for a sorting algorithm).
Asymptotic notation is a "measure". You have to specify what you want to count: worst case, best case, average, etc.
Sometimes, you are interested in stating asymptotic lower bounds of (say) the worst case complexity. Then you write f(n) = Omega(n^2) to state that in the worst case, the complexity is at least n^2. The big-Omega notation is opposite to big-O: f = Omega(g) if and only if g = O(f).
Take quicksort as an example. A call of quicksort on a sublist of length n has run-time complexity T(n) of
T(n) = O(n) + 2 T((n-1)/2)
in the 'best case', when the unsorted input list is split into two equal sublists of size (n-1)/2 in each call. Solving this recurrence gives T(n) = O(n log n). If the partition is not perfect and the two sublists are not of equal size, i.e.
T(n) = O(n) + T(k) + T(n - 1 - k),
we still obtain O(n log n) as long as both sublists receive a constant fraction of the elements (say k >= n/100), just with a larger constant factor, because the recursion depth remains O(log n).
However, in the 'worst case' no division of the input list takes place, i.e.:
T(n) = O(n) + T(0) + T(n - 1) = O(n) + O(n-1) + T(n-2) = O(n) + O(n-1) + O(n-2) + ... .
This happens e.g. if we take the first element of a sorted list as the pivot element.
Here, T(0) means that one of the resulting sublists is empty and therefore takes no computing time (it has zero elements). All the remaining work T(n-1) falls on the second sublist. In this case, we obtain O(n²).
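A minimal sketch (first element as pivot, a global comparison counter; the helper names are made up) makes the two cases visible: on an already-sorted input the counter grows like n²/2, while on a random input it stays close to a small multiple of n log n.

#include <stdio.h>
#include <stdlib.h>

static long long comparisons;

static void swap_int(int *x, int *y) { int t = *x; *x = *y; *y = t; }

/* Quicksort with the FIRST element as pivot (Lomuto-style partition). */
static void quicksort(int a[], int lo, int hi) {
    if (lo >= hi) return;
    int pivot = a[lo], p = lo;
    for (int i = lo + 1; i <= hi; i++) {
        comparisons++;
        if (a[i] < pivot) swap_int(&a[++p], &a[i]);
    }
    swap_int(&a[lo], &a[p]);
    quicksort(a, lo, p - 1);      /* T(k): elements smaller than the pivot   */
    quicksort(a, p + 1, hi);      /* T(n-1-k): elements larger than the pivot */
}

int main(void) {
    enum { N = 2000 };
    int a[N];

    for (int i = 0; i < N; i++) a[i] = i;        /* sorted input: worst case for this pivot */
    comparisons = 0;
    quicksort(a, 0, N - 1);
    printf("sorted input:  %lld comparisons (n(n-1)/2 = %d)\n", comparisons, N * (N - 1) / 2);

    for (int i = 0; i < N; i++) a[i] = rand();   /* random input: typical case */
    comparisons = 0;
    quicksort(a, 0, N - 1);
    printf("random input:  %lld comparisons (roughly proportional to n log n)\n", comparisons);
    return 0;
}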
If an algorithm had no worst case scenario, it would not only be O[f(n)] but also o[f(n)] (Big-O vs. little-o notation).
The asymptotic bound is the expected behaviour as the number of operations goes to infinity; mathematically, it is just the limit as n goes to infinity. The worst case behaviour, however, applies to a finite number of operations.
