I have the following problem. Let's suppose we have a function f(n). The complexity of f(n) is O(n!). However, there is also a parameter k = n*(n-1). My question is: what is the complexity of f(k)? Is it f(k) = O(k!/k^2) or something like that, taking into consideration that there is a quadratic relation between k and n?
Computational complexity is interpreted in terms of the size of the input. Hence, if f(n) = O(n!) when your input is n, then f(k) = O(k!) when your input is k.
Therefore, you don't need to compute the complexity for each particular value of the input to f(n). For example, given f(2) = O(2!), you don't need to express the complexity of f(10) as O((5*2)!) just because 10 = 5*2 and then try to simplify it in terms of 2!. We can simply say f(10) = O(10!).
Anyhow, if you do want to expand (n*(n-1))! = (n^2 - n)!, you can relate it to (n^2)!: the two factorials differ by a product of n factors, each of order n^2, so (n^2 - n)! = (n^2)! / n^(Θ(n)).
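To spell that out (a quick sketch of my own, not part of the original answer):

(n^2 - n)! = (n^2)! / [(n^2 - n + 1)(n^2 - n + 2) ... (n^2)]

The denominator is a product of exactly n consecutive integers, each between n^2 - n + 1 >= n and n^2, so it lies between n^n and n^(2n). Hence

(n^2)! / n^(2n) <= (n^2 - n)! <= (n^2)! / n^n

which is what the n^(Θ(n)) above abbreviates.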
Did you consider that there is an m such that the n you used in your f(n) is equal to m * (m - 1)?
Does that change the complexity?
The n in f(n) = O(n!) represents all the valid inputs.
You are trying to pass a variable k whose actual value, in terms of another variable, is n * (n - 1). That does not change the complexity; it will still be O(k!).
I'm studying for an exam, and I've come across the following question:
Provide a precise (Θ notation) bound for the running time as a
function of n for the following function
for i = 1 to n {
    j = i
    while j < n {
        j = j + 4
    }
}
I believe the answer would be O(n^2), although I'm certainly an amateur at the subject. My reasoning is that the outer loop takes O(n) and the inner loop takes O(n/4), resulting in O(n^2/4); since the constant factor 1/4 can be dropped, it simplifies to O(n^2).
Any clarification would be appreciated.
If you work it out using Sigma notation and obtain an exact expression for T(n), then you get a Big-Theta bound.
If you only show that T(n) is less than or equal to some expression, then it's Big-O.
If you only show that T(n) is greater than or equal to it, then it's Big-Omega.
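As an illustration of that approach on the loop above (my own sketch of the arithmetic): for each i, the inner while loop runs ceil((n - i)/4) times, so

T(n) = sum from i = 1 to n of ceil((n - i)/4)

Since (n - i)/4 <= ceil((n - i)/4) <= (n - i)/4 + 1, summing over i gives

n(n - 1)/8 <= T(n) <= n(n - 1)/8 + n

and therefore T(n) = Θ(n^2).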
I would like to know, because I couldn't find any information online, how an algorithm with a complexity like O(n * m^2) or O(n * k) or O(n + k) is supposed to be analysed.
Does only the n count?
Are the other terms superfluous?
So is O(n * m^2) actually O(n)?
No, here the k and m terms are not superfluous; they have a valid existence and are essential for computing the time complexity. Together they describe the concrete complexity of the code.
It may seem like the terms n and k are independent of each other in the code, but together they determine the complexity of the algorithm.
Say you have to iterate over a loop of n elements, and inside it you have another loop of k iterations; then the overall complexity is O(nk).
Complexity of order O(nk): you can't discard k here.
for (i = 0; i < n; i++)
    for (j = 0; j < k; j++)
        // do something
Complexity of order O(n+k): you can't discard k here.
for (i = 0; i < n; i++)
    // do something
for (j = 0; j < k; j++)
    // do something
Complexity of order O(nm^2): you can't discard m here.
for (i = 0; i < n; i++)
    for (j = 0; j < m; j++)
        for (k = 0; k < m; k++)
            // do something
Answer to the last question ("So O(n * m^2) is actually O(n)?"):
No, O(nm^2) can't be reduced further to O(n), as that would mean m has no significance, which is not actually the case.
FORMALLY: O(f(n)) is the SET of ALL functions T(n) that satisfy:
There exist positive constants c and N such that, for all n >= N,
T(n) <= c f(n)
Here are some examples of when and why factors other than n matter.
[1] 1,000,000 n is in O(n). Proof: set c = 1,000,000, N = 1.
Big-Oh notation doesn't care about (most) constant factors. We generally leave constants out; it's unnecessary to write O(2n), because O(2n) = O(n). (The 2 is not wrong; just unnecessary.)
[2] n is in O(n^3). [That's n cubed]. Proof: set c = 1, N = 1.
Big-Oh notation can be misleading. Just because an algorithm's running time is in O(n^3) doesn't mean it's slow; it might also be in O(n). Big-Oh notation only gives us an UPPER BOUND on a function.
[3] n^3 + n^2 + n is in O(n^3). Proof: set c = 3, N = 1.
Big-Oh notation is usually used only to indicate the dominating (largest
and most displeasing) term in the function. The other terms become
insignificant when n is really big.
These aren't generalizable, and each case may be different. That's the answer to the questions: "Does only the n count? Are the other terms superfluous?"
Although there is already an accepted answer, I'd still like to provide the following inputs:
O(n * m^2): can be viewed as n*m*m, and assuming that the bounds for n and m are similar, the complexity would be O(n^3).
Similarly,
O(n * k): would be O(n^2) (with the bounds for n and k being similar)
and
O(n + k): would be O(n) (again, with the bounds for n and k being similar).
PS: It would be better not to assume similarity between the variables, and instead to first understand how the variables relate to each other (e.g. m = n/2, k = 2n) before attempting to conclude.
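For instance (my own arithmetic, using the example relations from the PS):

if m = n/2: O(n * m^2) = O(n * (n/2)^2) = O(n^3 / 4) = O(n^3)
if k = 2n:  O(n + k) = O(n + 2n) = O(3n) = O(n)

If m or k is genuinely independent of n, neither simplification is valid and the extra variable stays in the bound.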
If I understand Big-O notation correctly, k here stands for a constant amount of time in the running time of an algorithm. Why would a constant time be considered O(1) rather than O(k), considering that the constant can vary? Linear growth (O(n + k)) uses this variable to shift the running time by a specific amount, so why not the same for constant complexity?
There is no such linear-growth asymptotic as O(n + k) where k is a constant. If k were a constant and you went back to the limit representation of algorithmic growth rates, you'd see that O(n + k) = O(n), because constants drop out in limits.
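To spell out the limit argument (a quick sketch): for a constant k,

lim (n -> infinity) of (n + k)/n = lim (n -> infinity) of (1 + k/n) = 1

so n + k and n grow at the same rate, and O(n + k) = O(n).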
Your answer may be O(n + k) due to a variable k that is fundamentally independent of the other input parameter n. You see this commonly in comparisons vs. moves in sorting-algorithm analysis.
To try to answer your question about why we drop k in Big-O notation (which I think is taught poorly, leading to all this confusion), one definition (as I recall) of O() is as follows:
Read: f(n) is in O( g(n) ) iff there exist d and n_0 such that, for all n > n_0,
f(n) <= d * g(n)
Let's try to apply it to our problem here where k is a constant and thus f(x) = k and g(x) = 1.
Is there a d and n_0 that exist to satisfy these requirements?
Trivially, the answer is of course yes. Choose d > k and n_0 = 0; then for all n > 0, the definition holds.
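As a concrete instance (my own numbers, not from the original answer): if k = 5, pick d = 6 and n_0 = 0; then for every n > 0, f(n) = 5 <= 6 * 1 = d * g(n), so f is in O(1).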
My question refers to Big-Oh notation in algorithm analysis. While Big-Oh may seem like a purely mathematical topic, it's very useful in algorithm analysis.
Suppose two functions are defined below:
f(n) = 2^n when n is even
f(n) = n when n is odd
g(n) = n when n is even
g(n) = 2^n when n is odd.
For the above two functions, which one is Big-Oh of the other? Or is neither function Big-Oh of the other?
Thanks!
In this case,
f ∉ O(g), and
g ∉ O(f).
This is because no matter what constants N and k you pick,
there exists i ≥ N such that f(i) > k g(i), and
there exists j ≥ N such that g(j) > k f(j).
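To make the witnesses concrete (my own illustration): given any N and k, choose an even i ≥ N large enough that 2^i > k*i (possible since 2^i/i grows without bound); then f(i) = 2^i > k*i = k g(i). Symmetrically, choose an odd j ≥ N with 2^j > k*j; then g(j) = 2^j > k*j = k f(j).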
The Big-Oh relationship is quite specific: one function, times some constant, must be at least as large as the other for all n beyond some finite point.
Is that true here? If so, give such an n. If not, prove that it is not.
Usually Big-O and Big-Theta notations get confused.
A layman's attempt at a definition could be that f being in Big-O of g means g grows as fast as or faster than f, i.e. given a large enough n, f(n) <= k*g(n) where k is some constant. That means that if f(x) = 2x^3, then it is in O(x^3), O(x^4), O(2^x), O(x!), etc.
Big-Theta means that one function grows as fast as the other, with neither able to "outgrow" the other; that is, k1*g(n) <= f(n) <= k2*g(n) for some constants k1 and k2. In programming terms that means the two functions have the same level of complexity. If f(x) = 2x^3, then it is in Θ(x^3): for example, with k1 = 1 and k2 = 3, 1*x^3 <= 2*x^3 <= 3*x^3.
In my experience, whenever programmers talk about Big-O, the discussion is actually about Big-Θ, as we are concerned more with the "as fast as" part than with the "no faster than" part.
That said, if two functions with different Θ's are combined, as in your example, the larger one (Θ(2^n)) swallows the smaller (Θ(n)), so both f and g have the exact same Big-O and Big-Θ complexities. In this case, it's correct both that
f(n) = O(g(n)), also f(n) = Θ(g(n))
g(n) = O(f(n)), also g(n) = Θ(f(n))
so, as they have the same complexity, they are O and Θ bound by each other.