If I have a function that takes n*g operations for input size n, but g << n, would I be able to say the function is linear wrt n?
Not necessarily. For example, if g = log(n), it is true that g << n yet O(n * g) is not linear in n (it is O(n log(n))).
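To make the counterexample concrete, a one-line sketch of why n * log(n) is not O(n):

$$\frac{n \log n}{n} = \log n \to \infty \quad \text{as } n \to \infty,$$

so no constant c can satisfy n log n <= c * n for all sufficiently large n.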
I am looking for an algorithm for a problem where I have two sets A and B of points, with n and m points respectively. I have two algorithms for the sets, with complexities O(n log n) and O(m), and I am now wondering whether the complexity of both algorithms combined is O(n log n) or O(m).
Basically, I am wondering whether there is some relation between m and n which would result in O(m).
If m and n are truly independent of one another and neither quantity influences the other, then the runtime of running an O(n log n)-time algorithm and then an O(m)-time algorithm will be O(n log n + m). Neither term dominates the other: if n gets huge compared to m, the n log n part dominates, and if m is huge relative to n, the m term dominates.
This gets more complicated if you know how m and n relate to one another in some way. Many graph algorithms, for example, use m to denote the number of edges and n to denote the number of nodes. In those cases, you can sometimes simplify these expressions, but sometimes cannot. For example, the cost of implementing Dijkstra’s algorithm with a Fibonacci heap is O(m + n log n), the same as what we have above.
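As a hedged illustration (the function names, the 1-D coordinate representation, and the use of qsort as the O(n log n) step are my own, not from the question), running an O(n log n) pass over A followed by an O(m) pass over B simply adds the two costs:

#include <stdlib.h>

/* Comparison callback for qsort. */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

/* O(n log n): sort the n points of A (represented here as 1-D coordinates). */
void process_a(double *a, size_t n)
{
    qsort(a, n, sizeof *a, cmp_double);
}

/* O(m): one linear pass over the m points of B. */
double process_b(const double *b, size_t m)
{
    double sum = 0.0;
    for (size_t i = 0; i < m; i++)
        sum += b[i];
    return sum;
}

Running process_a and then process_b costs O(n log n) + O(m) = O(n log n + m); neither term can be dropped unless we know how m and n are related.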
The size of your input is x := m + n.
The complexity of the combined algorithm (provided each of the two algorithms is run at most a constant number of times) is:
O(n log n) + O(m) = O(x log x) + O(x) = O(x log x).
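To spell out the step, a brief sketch using only the facts that n <= x and m <= x:

$$n \log n + m \;\le\; x \log x + x \;\le\; 2\,x \log x \qquad \text{for all sufficiently large } x,$$

so the combined cost is O(x log x) in the total input size x.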
Yes: if m ~ n^n, then O(log m) = O(n log n).
There is a log formula:
log(b^c) = c*log(b)
EDIT:
For both algorithms combined, the Big O is the larger of the two terms, because we are concerned with the asymptotic upper bound.
So it will depend on the values of n and m. E.g., while n^n < m, the complexity is O(log(m)); after that it becomes O(n log(n)).
For Big-O notation we are only concerned with the larger term, so if n^n >> m then it is O(n log(n)), and if m >> n^n then it is O(log(m)).
Just some interesting discussion inspired by a conversation in my class.
There are two algorithms, one has time complexity log n and another log (n+m).
Am I correct to argue that, for average cases, the log(n+m) one will perform faster, while they make no difference in running time when considered asymptotically? Taking the limit of the ratio of the two (f1'/f2') results in a constant, therefore they have the same order of growth.
Thanks!
As I can see from the question, both n and m are independent variables. So
when stating that
O(log(m + n)) = O(log(n))
it should hold for any m, which it does not: a counterexample is
m = exp(n)
O(log(m + n)) = O(log(n + exp(n))) = O(log(exp(n))) = O(n) > O(log(n))
That's why in the general case we can only say that
O(log(m + n)) >= O(log(n))
An interesting question is when O(log(m + n)) = O(log(n)) does hold. If m grows no faster than some polynomial in n, i.e. O(m) <= O(P(n)), then:
O(log(m + n)) = O(log(P(n) + n)) = O(log(P(n))) = k * O(log(n)) = O(log(n))
In the case of (multi)graphs we seldom have so many edges that O(m) > P(n): even the complete graph Kn contains only m = n * (n - 1) / 2 = P(n) edges. That is why
O(log(m + n)) = O(log(n))
holds for ordinary graphs (no parallel/multiple edges, no loops).
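For the graph case, a one-line check (assuming a simple graph, so m <= n(n-1)/2):

$$\log(m + n) \;\le\; \log\!\left(\frac{n(n-1)}{2} + n\right) \;\le\; \log(n^2) \;=\; 2 \log n \;=\; O(\log n).$$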
I am trying to solve the following question on computational complexity:
Compute the computational complexity of the following algorithm and
write down its complexity in Big O, Big Omega and Theta
for i = 1 to m {
    x(i) = 0;
    for j = 1 to n {
        x(i) = x(i) + A(i,j) * b(j)
    }
}
where A is mxn and b is nx1.
I ended up with O(mn^2) for Big O, Omega(1) for Big Omega, and Theta(mn^2) for Theta.
Assuming that the following statement (call it meth1) runs in constant time:
x(i) = x(i) + A(i,j) * b(j)
this is thus done in O(1), and does not depend on the values for i and j. Since you iterate over this statement in the inner for loop, exactly n times, you can say that the following code runs in O(n):
x(i) = 0;
for j = 1 to n {
    meth1
}
(assuming the assignment is done in constant time as well). Again it does not depend on the exact value of i; call this block meth2. Finally we take the outer loop into account:
for i = 1 to m {
    meth2
}
The block meth2 is repeated exactly m times, thus a tight upper bound on the time complexity is O(n m).
Since there are no conditional statements or recursive calls, and the structure of the data A, b and x does not change the execution of the program, the algorithm is also big Omega(m n), and therefore big Theta(m n).
Of course you can over-estimate big oh and under-estimate big omega: for every algorithm you can say it is Ω(1), and for some that they are O(2^n), but the point is that you do not buy much with that.
Recall that f = Theta(g) if and only if f=O(g) and f=Omega(g).
The matrix-vector product can be computed in Theta(mn) time (assuming a naive implementation) and the sum of vectors in O(m), so the total running time is Theta(mn). From this it follows that the time is also O(mn) and Omega(mn).
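For concreteness, here is a minimal C sketch of the naive matrix-vector product (the row-major layout and the names are my own illustration); the outer loop runs m times and the inner loop n times, which is exactly the Theta(mn) operation count:

/* Computes x = A * b, where A is m x n (stored row-major) and b has n entries.
   The statement inside the inner loop executes exactly m * n times. */
void matvec(const double *A, const double *b, double *x, int m, int n)
{
    for (int i = 0; i < m; i++) {           /* outer loop: m iterations */
        x[i] = 0.0;
        for (int j = 0; j < n; j++)         /* inner loop: n iterations per i */
            x[i] = x[i] + A[i * n + j] * b[j];
    }
}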
I'm currently taking a class in algorithms. The following is a question I got wrong from a quiz: Basically, we have to indicate the worst case running time in Big O notation:
int foo(int n)
{
    int m = 0;
    while (n >= 2)
    {
        n = n / 4;
        m = m + 1;
    }
    return m;
}
I don't understand how the worst case time for this just isn't O(n). Would appreciate an explanation. Thanks.
foo computes log base 4 of n by repeatedly dividing n by 4 and using m as a counter; at the end, m holds the number of times n can be divided by 4. The running time is linear in the final value of m, which is about log base 4 of n. The algorithm is therefore O(log n), which is in particular also O(n) (any O(log n) bound is also an O(n) bound, just not a tight one).
Let's suppose that the worst case really were on the order of n, i.e. that the function takes about n steps.
Now look at the loop: n is divided by 4 (i.e. 2^2) at each step. So in the first iteration n is reduced to n/4, in the second to n/16, and so on. It is not being reduced linearly; it is being divided by a constant factor each time, so in the worst case the running time is O(log n).
The computation can be expressed as a recurrence formula:
f(r) = 4*f(r+1)
The solution is
f(r) = k * 4^(1-r)
Where ^ means exponent. In our case we can say f(0) = n
So f(r) = n * 4^(-r)
Solving for r with the end condition f(r) = 2, we have: 2 = n * 4^(-r)
Taking logs on both sides, log(2) = log(n) - r * log(4), we can see that
r = P * log(n)
for some constant P. Having no other branches or inner loops, and assuming division and addition are O(1), we can confidently say the algorithm runs about P * log(n) steps and is therefore O(log(n)).
http://www.wolframalpha.com/input/?i=f%28r%2B1%29+%3D+f%28r%29%2F4%2C+f%280%29+%3D+n
Nitpicker's corner: a C int usually means the largest value is 2^31 - 1, so in practice that means at most 15 iterations, which is of course O(1). But I think your teacher really means O(log(n)).
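As a quick sanity check (the driver below and its test values are my own addition, not part of the quiz), the printed count grows by one each time n is multiplied by 4, i.e. like log base 4 of n rather than like n:

#include <stdio.h>

/* The quiz function: repeatedly divide n by 4, counting the divisions in m. */
int foo(int n)
{
    int m = 0;
    while (n >= 2)
    {
        n = n / 4;
        m = m + 1;
    }
    return m;
}

int main(void)
{
    /* Each time n grows by a factor of 4, the result grows by only 1. */
    for (int n = 2; n <= (1 << 20); n *= 4)
        printf("n = %8d -> foo(n) = %d\n", n, foo(n));
    return 0;
}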
Below is my code.
How can you perform the Big O analysis for intDiv(m-n, n), where m and n are two arbitrary inputs?
int intDiv(int m, int n) {
    if (n > m)
        return 0;
    else
        return 1 + intDiv(m - n, n);
} // this code finds the number of times n goes into m
It depends on the values of m and n. In general, the number of steps for any pair (m, n) such that m >= 0 and n > 0 is integer_ceil(m/n).
Therefore, the time complexity of the above algorithm is O([m/n]), where [] represents the ceiling of the number.
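To see the ceiling behaviour concretely, here is a small sketch (the call counter and the sample inputs are my own additions for illustration):

#include <stdio.h>

static int calls = 0;   /* counts how many times intDiv is invoked */

int intDiv(int m, int n) {
    calls++;
    if (n > m)
        return 0;
    else
        return 1 + intDiv(m - n, n);
}

int main(void) {
    int result;

    calls = 0;
    result = intDiv(10, 3);
    printf("intDiv(10, 3)  = %d after %d calls\n", result, calls);   /* 3 after 4 calls  */

    calls = 0;
    result = intDiv(100, 7);
    printf("intDiv(100, 7) = %d after %d calls\n", result, calls);   /* 14 after 15 calls */

    return 0;
}

The number of calls is about m/n (rounded up), matching the O([m/n]) bound above.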
I think it totally depends on n and m. For example, if n = 1 then it is O(m). And when n = m/2 then it is O(log(m)).