complexity of calling a function n times - complexity-theory

Let's say I have a function sort with O(n log n) complexity. Now if I have a list of k lists, each of a different length, how would I go about calculating the total complexity of calling sort on each list in that list of lists?
It seems it should be the sum over the k lists:
O(n_1 log n_1) + O(n_2 log n_2) + ... + O(n_k log n_k)
where each n_i is the (arbitrary) length of the i-th list.
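As a rough illustration, here is a minimal Python sketch of the situation (the list lengths are made up for the example): sorting each inner list costs on the order of n_i log n_i comparisons, and the total work is the sum of those terms rather than k times any single n log n.

import math
import random

# Three inner lists of different (arbitrary) lengths.
lists_of_lists = [[random.random() for _ in range(n)] for n in (10, 1000, 50)]

estimated_work = 0.0
for inner in lists_of_lists:
    inner.sort()                               # O(n_i log n_i) for this list
    n_i = len(inner)
    estimated_work += n_i * math.log2(n_i)     # add this list's share

print(estimated_work)                          # ~ sum of n_i log n_i over all k lists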

Related

O(n log n) vs O(m) for algorithm

I am looking for an algorithm for a problem where I have two sets of points, A with n points and B with m points. I have one algorithm for each set, with complexities O(n log n) and O(m), and I am now wondering whether the complexity of both algorithms combined is O(n log n) or O(m).
Basically, I am wondering whether there is some relation between m and n which would result in O(m).
If m and n are truly independent of one another and neither quantity influences the other, then the runtime of running an O(n log n)-time algorithm and then an O(m)-time algorithm will be O(n log n + m). Neither term dominates the other in general: if n gets huge compared to m, then the n log n part dominates, and if m is huge relative to n, then the m term dominates.
This gets more complicated if you know how m and n relate to one another in some way. Many graph algorithms, for example, use m to denote the number of edges and n to denote the number of nodes. In those cases, you can sometimes simplify these expressions, but sometimes cannot. For example, the cost of implementing Dijkstra’s algorithm with a Fibonacci heap is O(m + n log n), the same as what we have above.
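As a hedged illustration of the independent case, here is a minimal Python sketch (the two placeholder routines and the point values are assumptions made up for the example): an O(n log n) pass over A followed by an O(m) pass over B, for O(n log n + m) in total.

def process_a(points_a):
    # Assumed O(n log n) step, e.g. sorting the n points of A.
    return sorted(points_a)

def process_b(points_b):
    # Assumed O(m) step, e.g. a single linear scan over the m points of B.
    return max(points_b)

def combined(points_a, points_b):
    a_sorted = process_a(points_a)    # O(n log n)
    b_best = process_b(points_b)      # O(m)
    return a_sorted, b_best           # total: O(n log n + m)

print(combined([3.0, 1.0, 2.0], [5.0, 4.0]))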
The size of your input is x := m + n.
The complexity of the combined algorithm (assuming each of the two algorithms is performed at most a constant number of times) is:
O(n log n) + O(m) = O(x log x) + O(x) = O(x log x).
Yes, if m ~ n^n, then O(log m) = O(n log n).
There is a log formula:
log(b^c) = c*log(b)
EDIT:
For both algorithms combined, the Big O is always the larger of the two, because we are concerned with the asymptotic upper bound.
So it will depend on the values of n and m. E.g. while n^n < m, the complexity is O(log m); after that it becomes O(n log n).
For Big-O notation we are only concerned with the larger values, so if n^n >>>> m then it is O(n log n), else if m >>>> n^n then it is O(log m).

Can this be approximated?

The time complexity of finding the k largest elements using a min-heap is given as
O(k + (n-k) log k), as mentioned here: link. Can it be approximated to O((n-k) log k)?
Since O(N + N log k) = O(N log k), is the above approximation also true?
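For reference, a minimal Python sketch of the min-heap approach the question refers to (heapq-based; the function name is made up for this example): build a heap from the first k elements in O(k), then each of the remaining n-k elements costs at most one O(log k) heap replacement.

import heapq

def k_largest(values, k):
    # Min-heap holding the k largest elements seen so far; smallest at heap[0].
    heap = list(values[:k])
    heapq.heapify(heap)                    # O(k)
    for v in values[k:]:                   # n - k remaining elements
        if v > heap[0]:
            heapq.heapreplace(heap, v)     # O(log k)
    return heap                            # total: O(k + (n-k) log k)

print(k_largest([5, 1, 9, 3, 7, 8, 2], 3))   # the 3 largest, in heap order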
No you can't simplify it like that. This can be shown with a few example values for k that are close to n:
k = n
Now the complexity is defined as: O(n + 0 * log n) = O(n). If you had left out the first term of the sum, you would have ended up with O(0), which obviously is wrong.
k = n - 1
We get: O((n-1) + 1 * log(n-1)) = O(n + log(n)) = O(n). Without the first term, you would get O(log(n)), which again is wrong.
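A quick numeric sketch (Python; n and the k values are made up) shows how the proposed approximation breaks down when k is close to n:

import math

def full_cost(n, k):
    return k + (n - k) * math.log2(k)      # k + (n-k) log k

def approx_cost(n, k):
    return (n - k) * math.log2(k)          # the proposed (n-k) log k

n = 1_000_000
for k in (n, n - 1, n // 2):
    print(k, full_cost(n, k), approx_cost(n, k))
# At k = n the approximation collapses to 0 while the true cost is still about n.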

Is O(K + (N-K) log K) equivalent to O(K + N log K)?

Can we say O(K + (N-K) log K) is equivalent to O(K + N log K) for 1 <= K <= N?
The short answer is that they are not equivalent, and it depends on the value of K. If K is equal to N, then the first complexity is O(N), and the second complexity is O(N + N log N), which is equivalent to O(N log N). However, O(N) is not equivalent to O(N log N).
Moreover, any function that is in O(K + (N-K) log K) is also in O(K + N log K) (for every positive K), and the proof of this is straightforward.
Yes, because in the worst case (N-K) log K is at most N log K, given your constraint that 1 <= K <= N.
Not exactly.
If they are equivalent, then every function in O(k + (n-k)log k) is also in O(k + n log k) and vice-versa.
Let f(n,k) = n log k
This function is certainly in O(k + n log k), but not in O(k + (n-k)log k).
Let g(n,k) = k + (n-k)log k
Then as x approaches infinity, f(x,x)/g(x,x) grows without bound, since:
f(x,x) / g(x,x)
= (x log x) / x
= log x
See the definition of big-O notation for multiple variables: http://mathwiki.cs.ut.ee/asymptotics/04_multiple_variables
Wikipedia provides the same information, but in less accessible notation:
https://en.wikipedia.org/wiki/Big_O_notation#Multiple_variables
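As an informal sanity check (not a proof), a small Python sketch can plug in growing values and watch the ratio f(x,x)/g(x,x) grow like log x:

import math

def f(n, k):
    return n * math.log2(k)                # n log k

def g(n, k):
    return k + (n - k) * math.log2(k)      # k + (n-k) log k

for x in (10, 100, 1000, 10**6):
    print(x, f(x, x) / g(x, x))            # grows like log2(x), i.e. without bound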

Nested time complexity

I hope this is a simple question, but Google did not give me any immediate results.
If I have a function with running time O(n log n) and inside that function is another function, also taking O(n log n), then what is the total running time of the function?
Say I have a list of lists.
It takes n log n time to find the desired list and then n log n time again to find the desired item within that list.
something like
find list in n log n time
find element in list in n log n time
Is the running time still just n log n?
Thank you in advance.
What if the function looks like this:
for each element e1 in list               // (O(N) time)
    if e1 is the one we are looking for
        for each element e2 in e1          // (O(N) time)
            do something
It is O(N) inside O(N), but the inner O(N) loop is only executed once during the outer loop.
It depends how often you call the second function.
If you execute a function that finds a list within a list of lists in O(n log n) time
and then searches just that one list for desired element, which it finds in O(m log m) time,
then the total running time is O(n log n + m log m).
If m=n then the total time is just O(n log n).
If the outer loop performs O(n log n) "steps", and at each step you consider one list from the list of lists and call a function that takes O(m log m) time to find a desired item in that list, then the total running time is O(mn (log m)(log n)). I'm having difficulty imagining what application would use an algorithm like this, however.
If you execute a loop O(N) times, and during at most one of the iterations of the loop you execute an "inner" loop that runs in O(M) time, then the total running time of the outer loop is O(N + M). Note that the reason we say O(M + N) is that we have no other information here about which grows faster, M or N, and O(M + N) covers us in either case. Again, if we knew that M = N, or even if we just knew that M is O(N) (does not grow faster than N), then we could just write the total time as O(N).
Well, mathematically speaking, you just multiply “what's inside” the big Os. You get O(n² log²(n)).
Your notation obscures the truth: there are no "functions inside another". (More precisely, there is no function which calls another several times.)
What is actually done is
find list in n log n time;
find element in list in n log n time
which has complexity of order n log n.
In the second example:
for each element e1 in list               // (O(N) time)
    if e1 is the one we are looking for
        break                              // Found
for each element e2 in e1                  // (O(N) time)
    do something
for a total of O(N).
This is to be contrasted with true nesting:
for each element e1 in list               // (O(N) passes)
    for each element e2 in e1              // (O(N) time)
        do something
for a total of O(N²).
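To make the contrast concrete, here is a minimal Python sketch of the two patterns (the helper names and the match-by-first-element rule are made up for this example); the first does two O(N) passes one after the other, the second runs an O(N) inner loop on every outer iteration.

# Sequential: O(N) search for the right inner list, then O(N) work on it.
def sum_matching(list_of_lists, wanted):
    found = None
    for inner in list_of_lists:            # O(N) passes over the outer list
        if inner and inner[0] == wanted:   # "is this the one we are looking for?"
            found = inner
            break                          # stop searching once found
    return sum(found) if found else 0      # one O(N) pass over the found list
                                           # overall: O(N + N) = O(N)

# Truly nested: the inner loop runs on every outer iteration.
def sum_all(list_of_lists):
    total = 0
    for inner in list_of_lists:            # O(N) passes
        for item in inner:                 # O(N) time on each pass
            total += item
    return total                           # overall: O(N^2)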
For
find list in n log n time
find element in list in n log n time
we get
Time = n log n + n log n = 2 n log n ~ O(n log n)
And for
for each element e1 in list               // (O(N) time)
    if e1 is the one we are looking for
        for each element e2 in e1          // (O(N) time)
            do something
we get
Time = n + k * n = (k+1) * n, where k is the number of matching values.
Best-Case: only 1 matching value exists
Time(Best-Case) = n + 1 * n = 2n ~ n
Worst-Case: all values are identical and matching
Time(Worst-Case) = n + n * n = n + n^2 ~ n^2

Time complexity of O(n log(log n)) + n O(L)

I want to find the overall time complexity of this:
O(n log(log n)) + n O(L)
where n is the number of objects and each object has a string with length L.
L is constant, so you can rewrite it as
O(n log(log n)) + O(n).
Since n grows no faster than n log(log n), the result is
O(n log(log n)).
