Having a problem calculating the running time

I have a problem calculating the running time of the following algorithm using Θ.
This is the algorithm:
input: a natural number n
i = 2
while i ≤ n do
i = i^2
end while
I would appreciate it if anyone could help me.

The loop will exit when i's exponent has reached ceil(lg n), because 2^(ceil(lg n)) >= n by definition.
Every iteration, the exponent of i doubles, because we are squaring it. So, the loop executes Θ(lg lg n) times, each incurring constant cost, giving a total running time of Θ(lg lg n).
edit: Thanks to Pete Kirkham for pointing out that the cost of the arithmetic is not constant as n becomes arbitrarily large. The asymptotically fastest known algorithm for integer multiplication appears to be the Harvey-van der Hoeven algorithm, with a bound of O(n lg n), which would give a total running time of O(n lg n lg lg n).
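A quick way to see the Θ(lg lg n) growth empirically (my own illustration, counting loop iterations only and ignoring the cost of the big-integer arithmetic discussed in the edit above):

    import math

    def count_iterations(n):
        """Count how many times the squaring loop i = i*i runs for input n."""
        i, steps = 2, 0
        while i <= n:
            i = i * i
            steps += 1
        return steps

    for n in (4, 256, 2**16, 2**64, 2**1024):
        lg_n = n.bit_length() - 1   # exact lg n for powers of two
        # steps tracks lg lg n (they differ by at most a small constant)
        print(lg_n, count_iterations(n), math.log2(lg_n))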

O(n log n) vs O(m) for algorithm

I am looking for an algorithm for a problem where I have two sets of points, A and B, with n and m points respectively. I have two algorithms for the sets, with complexities O(n log n) and O(m), and I am now wondering whether the complexity of both algorithms combined is O(n log n) or O(m).
Basically, I am wondering whether there is some relation between m and n which would result in O(m).
If m and n are truly independent of one another and neither quantity influences the other, then the runtime of running an O(n log n)-time algorithm followed by an O(m)-time algorithm will be O(n log n + m). Neither term dominates the other: if n gets huge compared to m, the n log n part dominates, and if m is huge relative to n, the m term dominates.
This gets more complicated if you know how m and n relate to one another in some way. Many graph algorithms, for example, use m to denote the number of edges and n to denote the number of nodes. In those cases, you can sometimes simplify these expressions, but sometimes cannot. For example, the cost of implementing Dijkstra’s algorithm with a Fibonacci heap is O(m + n log n), the same as what we have above.
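As a toy illustration (the two steps below are made-up stand-ins, not the asker's actual algorithms): sorting the n points of A is an O(n log n) step, a single pass over the m points of B is an O(m) step, and running them back to back costs O(n log n + m).

    def process(A, B):
        """Toy stand-in: an O(n log n) pass over A followed by an O(m) pass over B."""
        A_sorted = sorted(A)                  # O(n log n) on the n points of A
        b_total = sum(x + y for x, y in B)    # O(m) single scan over the m points of B
        return A_sorted, b_total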
The size of your input is x := m + n.
The complexity of the combined algorithm (if each of the two is performed at most a constant number of times) is:
O(n log n) + O(m) = O(x log x) + O(x) = O(x log x).
Yes, if m ~ n^n, then O(log m) = O(n log n).
There is a log formula:
log(b^c) = c*log(b)
EDIT:
For both algorithms combined, the big O is always the larger of the two, because we are concerned with the asymptotic upper bound.
So it will depend on the values of n and m. E.g., while n^n < m, the complexity is O(log m); after that it becomes O(n log n).
For big-O notation we are only concerned with the larger term, so if n^n >>>> m then it is O(n log n), else if m >>>> n^n then it is O(log m).
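The log identity above in action (a tiny numeric check with natural logs):

    import math

    for n in (5, 8, 10):
        # the two columns match: log(n^n) = n*log(n)
        print(n, math.log(n ** n), n * math.log(n))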

Proof of Ω(n log k) worst-case complexity in a comparison sort algorithm

I'm writing a comparison algorithm that takes n numbers and a number k as input.
It separates the n numbers into k groups so that all numbers in group 1 are smaller than all numbers in group 2, ..., which are smaller than all numbers in group k.
The numbers within the same group are not necessarily sorted.
I'm using selection(A[], left, right, k) to find the k-th element, which in my case is the n/k-th element (to divide the whole array into 2 pieces), and then I repeat for each piece, until the initial array is divided into k parts of n/k numbers each.
It has a complexity of Θ(n log k), as it's a tree of log k levels (depth) that costs at most cn operations per level. This is linear time, as log k is considered a constant.
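Roughly, the splitting step looks like this (a simplified Python sketch, not my exact code; select here is a randomized quickselect, so each level is expected linear time, while a median-of-medians pivot would make it worst-case linear):

    import random

    def select(a, lo, hi, idx):
        """Randomized quickselect: rearrange a[lo:hi] so that a[idx] holds the element
        that would sit at position idx if a[lo:hi] were sorted, with everything to its
        left <= it and everything to its right >= it."""
        while hi - lo > 1:
            pivot = a[random.randrange(lo, hi)]
            lt = [x for x in a[lo:hi] if x < pivot]
            eq = [x for x in a[lo:hi] if x == pivot]
            gt = [x for x in a[lo:hi] if x > pivot]
            a[lo:hi] = lt + eq + gt
            if idx < lo + len(lt):
                hi = lo + len(lt)
            elif idx < lo + len(lt) + len(eq):
                return
            else:
                lo = lo + len(lt) + len(eq)

    def split_into_groups(a, lo, hi, k):
        """Rearrange a[lo:hi] into k consecutive blocks of (hi-lo)/k elements so that
        every element of block j is <= every element of block j+1; blocks stay unsorted."""
        if k <= 1:
            return
        half = k // 2
        mid = lo + (hi - lo) * half // k   # boundary between the first `half` groups and the rest
        select(a, lo, hi, mid)             # now a[lo:mid] <= a[mid] <= a[mid:hi]
        split_into_groups(a, lo, mid, half)
        split_into_groups(a, mid, hi, k - half)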
I am asked to prove that all comparison algorithms that sort an Array[n] into k groups in this way cost Ω(n log k) in the worst case.
I've searched around here, on Google, and in my algorithms book (Jon Kleinberg, Eva Tardos), but I only find proofs for comparison algorithms that sort ALL the elements. Those proofs are not accepted in my case because they all rely on assumptions that do not hold for my problem, nor can they be altered to fit it. (Also consider that regular quicksort with random pivot selection results in Θ(n log n), which is not linear the way Ω(n log k) is when k is constant.)
You can find the general algorithm proof here:
https://www.cs.cmu.edu/~avrim/451f11/lectures/lect0913.pdf
where it is also clearly explained why my problem does not fall into the O(n log n) comparison-sort case.
Sorting requires lg(n!) = Omega(n log n) comparisons because there are n! different output permutations.
For this problem there are n!/((n/k)!)^k equivalence classes of output permutations, because the order within the k independent groups of n/k elements does not matter. We compute
lg( n!/((n/k)!)^k ) = lg(n!) - k lg((n/k)!)
= n lg n - n - k ((n/k) lg (n/k) - n/k) ± O(lg n + k lg (n/k))
(write lg (...!) as a sum, bound with two integrals;
see https://en.wikipedia.org/wiki/Stirling's_approximation)
= n (lg n - lg (n/k)) ± O(lg n + k lg (n/k))
= n lg k ± O(lg n + k lg (n/k))
= Omega(n lg k).
(O(lg n + k lg (n/k)) = O(n), since k <= n)
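As a quick numerical sanity check (not part of the original argument), evaluating lg(n!/((n/k)!)^k) with the log-gamma function shows that it does track n lg k:

    import math

    def lg_factorial(m):
        """lg(m!) via the log-gamma function: ln(m!) = lgamma(m + 1)."""
        return math.lgamma(m + 1) / math.log(2)

    def class_count_bits(n, k):
        """lg( n! / ((n/k)!)^k ): comparisons needed to pin down one equivalence class."""
        return lg_factorial(n) - k * lg_factorial(n // k)

    for n, k in [(1024, 2), (1024, 16), (1 << 20, 256)]:
        # the last two columns stay close: the lower bound is Theta(n lg k)
        print(n, k, round(class_count_bits(n, k)), round(n * math.log2(k)))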
"prove that all comparison algorithms that sort an Array[n] into k groups in this way cost Ω(n log k) in the worst case"
I think the statement is false. If you use quickselect with a poor pivot choice (such as always using the first or last element), then the worst case is probably O(n^2).
Only some comparison algorithms will have a worst case of O(n log k). Using median of medians (the n/5 version) for the pivot solves quickselect's pivot issue. There are other algorithms that would also be O(n log k).

n^2 log n complexity

I am just a bit confused. If the time complexity of an algorithm is given by n^2 log n, what is that in big O notation? Just O(n^2), or do we keep the log?
If that's the time-complexity of the algorithm, then it is in big-O notation already, so, yes, keep the log. Asymptotically, there is a difference between O(n^2) and O((n^2)*log(n)).
A formal mathematical proof would be nice here.
Let's define the following variables and functions:
N - the input length of the algorithm,
f(N) = N^2*ln(N) - a function that gives the algorithm's execution time.
Let's determine whether the growth of this function is asymptotically bounded by N^2, i.e. whether f(N) = O(N^2).
According to the definition of the asymptotic notation [1], g(x) is an asymptotic bound for f(x) if and only if: for all sufficiently large values of x, the absolute value of f(x) is at most a positive constant multiple of g(x). That is, f(x) = O(g(x)) if and only if there exists a positive real number M and a real number x0 such that
|f(x)| <= M*g(x) for all x >= x0 (1)
In our case, there must exist a positive real number M and a real number N0 such that:
|N^2*ln(N)| <= M*N^2 for all N >= N0 (2)
Obviously, such M and N0 do not exist, because for any arbitrarily large M there is an N0 such that
ln(N) > M for all N >= N0 (3)
Thus, we have proved that N^2*ln(N) is not asymptotically bounded by N^2, i.e. it is not O(N^2).
References:
[1] https://en.wikipedia.org/wiki/Big_O_notation
A simple way to understand big O notation is to divide the actual number of atomic steps by the term within the big O and check that you get a constant (or a value that is bounded by some constant).
For example, if your algorithm does 10n²⋅log n steps:
10n²⋅log n / n² = 10⋅log n -> not constant in n -> 10n²⋅log n is not O(n²)
10n²⋅log n / (n²⋅log n) = 10 -> constant in n -> 10n²⋅log n is O(n²⋅log n)
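The same ratio test, run numerically (assuming, as above, a hypothetical algorithm that performs exactly 10n²⋅log n steps):

    import math

    def steps(n):
        """Hypothetical step count: exactly 10 * n^2 * log2(n) atomic operations."""
        return 10 * n * n * math.log2(n)

    for n in (10, 100, 1000, 10_000):
        # the first ratio keeps growing (so not O(n^2)); the second stays at 10 (so O(n^2 log n))
        print(n, steps(n) / (n * n), steps(n) / (n * n * math.log2(n)))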
You do keep the log because log(n) will increase as n increases and will in turn increase your overall complexity since it is multiplied.
As a general rule, you would only remove constants. So for example, if you had O(2 * n^2), you would just say the complexity is O(n^2), because running it on a machine that is twice as powerful shouldn't influence the complexity.
In the same way, if you had complexity O(n^2 + n^2), you would get the above case and just say it's O(n^2). Since O(log(n)) grows more slowly than O(n^2), if you had O(n^2 + log(n)), you would say the complexity is O(n^2), because it's even less than having O(2 * n^2).
O(n^2 * log(n)) does not fall into the above situation so you should not simplify it.
If the complexity of some algorithm is O(n^2), it can be written as O(n*n). Is it O(n)? Absolutely not. So O(n^2*log n) is not O(n^2). What you may want to know is that O(n^2 + log n) = O(n^2).
A simple explanation:
O(n^2 + n) can be written as O(n^2) because, as n increases, the difference between n^2 + n and n^2 becomes negligible relative to n^2. Thus it can be written as O(n^2).
Meanwhile, in O(n^2 log n), as n increases, the gap between n^2 and n^2 log n keeps growing, unlike in the above case.
Therefore, the log n stays.

Big O notation: is O(n lg n) the same as O(n + n lg n) in terms of computational complexity?

I am working on a problem for which I came up with two algorithms: one takes O(n lg n) time but requires extra space, and the other takes O(n + n lg n) time. So I just wanted to ask: is the O(n lg n) time complexity an improvement over O(n + n lg n), or will both be considered equal, given that n lg n is the biggest term?
They are the same:
n + n lg n <= 2 n lg n    -- for n >= the base of the logarithm, so that lg n >= 1
           = O(n lg n)
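A quick numeric check of that inequality (assuming base-2 logarithms):

    import math

    # n + n*lg(n) <= 2*n*lg(n) holds once lg(n) >= 1, i.e. n >= 2
    for n in (2, 16, 1024, 1 << 20):
        lhs = n + n * math.log2(n)
        rhs = 2 * n * math.log2(n)
        print(n, lhs, rhs, lhs <= rhs)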

When c > 0, log(n) = O(n^c)? Not sure why it isn't O(log n)

In my homework, the question asks me to determine the asymptotic complexity of n^0.99999 * log(n). I figured that it would be closer to O(n log n), but the answer key suggests that when c > 0, log n = O(n^c). I'm not quite sure why that is; could someone provide an explanation?
It's also true that lg n = O(n^k) (in fact, it is o(n^k); did the hint actually say that, perhaps?) for any constant k > 0, not just k = 1. Now consider k = 0.00001. Then n^0.99999 lg n = O(n^0.99999 * n^0.00001) = O(n). Note that this bound is not tight, since I could choose an even smaller k, so it's perfectly fine to say that n^0.99999 lg n is O(n^0.99999 lg n), just as we say n lg n is O(n lg n).
