I am relatively new to Big-O notation and I came across this question:
Sort the following functions by order of growth from slowest to fastest in Big-O notation. For each pair of adjacent functions in your list, please write a sentence describing why it is ordered the way it is: 7n^3 - 10n; 4n^2; n; n^8621909; 3n; 2^(log log n); n log n; 6n log n; n!; 1.1^n
So I have got this order:
1 -> n^8621909
2 -> 7n^3 - 10n
3 -> 4n^2
4 -> 3n
5 -> 6n log n
6 -> n!
7 -> n
8 -> n log n
9 -> 1.1^n
10 -> 2^(log log n)
I am unsure whether this is the correct order, and even if it is, I don't know how to justify each adjacent pair, because I arrived at this ordering by plugging in specific values for n and comparing the results.
The list below is ordered from fastest-growing to slowest (reverse it for the slowest-to-fastest order the question asks for); entries sharing a number grow at the same rate up to constant factors:
1. n! = O(n!)
2. 1.1^n = O(1.1^n)
3. n^8621909 = O(n^8621909)
4. 7n^3 - 10n = O(n^3)
5. 4n^2 = O(n^2)
6. 6n log n = O(n log n)
6. n log n = O(n log n)
8. 3n = O(n)
8. n = O(n)
10. 2^(log log n) = O(log n)
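As a one-line LaTeX summary of the same ranking, read in the slowest-to-fastest direction the question asks for (here \prec means "grows strictly slower than"):

2^{\log\log n} = \Theta(\log n) \;\prec\; n = \Theta(3n) \;\prec\; n\log n = \Theta(6n\log n) \;\prec\; 4n^2 \;\prec\; 7n^3 - 10n \;\prec\; n^{8621909} \;\prec\; 1.1^n \;\prec\; n!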
Some explanations:
O(c^n) < O(n!) < O(n^n) for any constant c: a factorial eventually outgrows every exponential with a fixed base, and n^n in turn outgrows n!.
O(n^a) < O(b^n) for any constants a and b > 1: every polynomial is eventually dominated by every exponential.
2^(log log n) can be reduced to log n by setting x = 2^(log log n) and taking the log of both sides (all logs base 2).
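Spelling that last identity out as a short LaTeX derivation (all logs base 2):

\begin{aligned}
x &= 2^{\log \log n} \\
\log x &= \log \log n \\
x &= \log n
\end{aligned}

So 2^(log log n) is just log n in disguise, which is why it sits at the bottom of the ranking.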
Related
In particular, I'm interested in finding the Theta complexity. I can see the algorithm is bounded by log(n) but I'm not sure how to proceed considering the problem size decreases exponentially.
i = n
j = 2
while (i >= 1)
    i = i/j
    j = 2j
The simplest way to answer your question is to look at the algorithm through the eyes of the logarithm (in my case the binary logarithm):
log i_0 = log n
log j_0 = 1
k = 0
while (log i_k >= 0)    # as log increases monotonically
    log i_{k+1} = log i_k - log j_k
    log j_{k+1} = (log j_k) + 1
    k++
This way we see that log i decreases by log j = k + 1 during every step.
Now when will we exit the loop?
This happens for
log i_k = log n - (1 + 2 + ... + k) = log n - k(k+1)/2 < 0
The maximum number of steps is thus the smallest integer k such that
k(k+1)/2 > log n
holds.
Asymptotically, this is equivalent to k = Theta(sqrt(log n)), so your algorithm is in Theta(sqrt(log n)).
Let us denote by i(k) and j(k) the values of i and j at iteration k (so assume that i(1) = n and j(1) = 2). We can easily prove by induction that j(k) = 2^k and that
i(k) = n / 2^(k(k-1)/2)
Knowing the above formula for i(k), you can compute an upper bound on the value of k that is needed in order to have i(k) <= 1, and you will obtain that the complexity is O(sqrt(log n)).
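For intuition, here is a small C sketch (my own addition, not part of either answer) that counts the loop's iterations for a few values of n and prints the count next to sqrt(2 * log2(n)):

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Count iterations of: i = n; j = 2; while (i >= 1) { i = i/j; j = 2*j; }
       (using floating-point division to match the idealized analysis)
       and compare the count with sqrt(2 * log2(n)). */
    for (double n = 1e3; n <= 1e18; n *= 1e3) {
        double i = n, j = 2.0;
        int steps = 0;
        while (i >= 1.0) {
            i = i / j;
            j = 2.0 * j;
            steps++;
        }
        printf("n = %.0e  steps = %d  sqrt(2*log2(n)) = %.2f\n",
               n, steps, sqrt(2.0 * log2(n)));
    }
    return 0;
}

The two columns stay within one of each other, matching the Theta(sqrt(log n)) bound.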
I'm currently taking a class in algorithms. The following is a question I got wrong on a quiz; we have to indicate the worst-case running time in Big-O notation:
int foo(int n)
{
    int m = 0;          /* counts how many times n gets divided by 4 */
    while (n >= 2)
    {
        n = n / 4;
        m = m + 1;
    }
    return m;
}
I don't understand why the worst-case time for this isn't just O(n). I would appreciate an explanation. Thanks.
foo computes roughly log4(n): it repeatedly divides n by 4 and uses m to count how many divisions it takes before n drops below 2. At the end, m is within a constant of log base 4 of n (for example, foo(256) returns 4, and log4(256) = 4). The running time is linear in the final value of m, so the algorithm is O(log n), which is also O(n).
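If it helps to see this numerically, here is a small C sketch (my own, not from the quiz) that prints m next to log base 4 of n; they agree exactly on powers of 4 and stay within about one of each other elsewhere:

#include <stdio.h>
#include <math.h>

int foo(int n)
{
    int m = 0;
    while (n >= 2) {
        n = n / 4;
        m = m + 1;
    }
    return m;
}

int main(void)
{
    /* Quadrupling n adds exactly one iteration, so m grows like log base 4 of n. */
    for (int n = 4; n <= 1 << 28; n *= 4)
        printf("n = %9d  foo(n) = %2d  log4(n) = %5.2f\n",
               n, foo(n), log(n) / log(4.0));
    return 0;
}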
Suppose the worst case really were linear; that would mean the loop has to run on the order of n steps.
Now look at the loop: n is divided by 4 at every step. After the first iteration n has become n/4, after the second n/16, and so on. It isn't being reduced by a constant amount each time; it is divided by a constant factor, so in the worst case the running time is O(log n).
The computation can be expressed as a recurrence formula:
f(r) = 4*f(r+1)
The general solution is
f(r) = k * 4^(-r)
for some constant k, where ^ means exponentiation. In our case f(0) = n, so
f(r) = n * 4^(-r)
Solving for r at the end condition, we have: 2 = n * 4^(-r)
Taking the log of both sides, log(2) = log(n) - r * log(4), we can see that
r = P * log(n)
for some constant P. Since there are no other branches or inner loops, and assuming division and addition are O(1), we can confidently say the algorithm runs about P * log(n) steps and is therefore O(log(n)).
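Put compactly, the derivation above is (same steps, restated in LaTeX):

f(r) = n \cdot 4^{-r}, \qquad f(r) = 2 \;\Rightarrow\; r = \frac{\log n - \log 2}{\log 4} = \Theta(\log n)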
http://www.wolframalpha.com/input/?i=f%28r%2B1%29+%3D+f%28r%29%2F4%2C+f%280%29+%3D+n
Nitpicker's corner: a C int usually means the largest value is 2^31 - 1, so in practice that means at most about 15 iterations, which is of course O(1). But I think your teacher really means O(log(n)).
I know the big-O complexity of this algorithm is O(n^2), but I cannot understand why.
int sum = 0;
int i = 1, j = n * n;
while (i++ < j--)
    sum++;
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
During every iteration you increment i and decrement j, which is equivalent to just incrementing i by 2. Therefore, the total number of iterations is about n^2 / 2, and that is still O(n^2).
big-O complexity ignores coefficients. For example: O(n), O(2n), and O(1000n) are all the same O(n) running time. Likewise, O(n^2) and O(0.5n^2) are both O(n^2) running time.
In your situation, you're essentially incrementing your loop counter by 2 each time through your loop (since j-- has the same effect as i++). So your running time is O(0.5n^2), but that's the same as O(n^2) when you remove the coefficient.
You will have exactly n*n/2 loop iterations (or (n*n-1)/2 if n is odd).
In the big O notation we have O((n*n-1)/2) = O(n*n/2) = O(n*n) because constant factors "don't count".
Your algorithm is equivalent to
while ((i += 2) < n*n)
    ...
which is O(n^2/2), and that is the same as O(n^2) because Big-O complexity does not care about constant factors.
Let m be the number of iterations taken. After m iterations, i has grown from 1 to 1 + m and j has shrunk from n^2 to n^2 - m. The loop stops when they meet, so
1 + m ≈ n^2 - m
which gives
m ≈ (n^2 - 1)/2
In Big-O notation, this implies a complexity of O(n^2).
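If you want to see the halving directly, here is a minimal C sketch (mine, not from the question) that runs the loop for a few values of n and prints sum next to n*n/2:

#include <stdio.h>

int main(void)
{
    /* Run the original loop and compare the iteration count with n*n/2. */
    for (int n = 10; n <= 1000; n *= 10) {
        long long sum = 0;
        long long i = 1, j = (long long)n * n;
        while (i++ < j--)
            sum++;
        printf("n = %4d  sum = %8lld  n*n/2 = %8lld\n",
               n, sum, (long long)n * n / 2);
    }
    return 0;
}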
Yes, this algorithm is O(n^2).
To reason about the complexity, it helps to keep a table of the common complexity classes, from slowest-growing to fastest:
O(1)
O(log n)
O(n)
O(n log n)
O(n²)
O(n^a)
O(a^n)
O(n!)
Each row represents a set of algorithms. An algorithm that is in O(1) is also in O(n), in O(n^2), and so on, but not the other way around. Your algorithm executes n*n/2 statements, so:
O(n) < O(n log n) < O(n*n/2) = O(n²)
The smallest class in the table that contains your algorithm is therefore O(n²), because O(n) and O(n log n) are too small.
For example, for n = 100 the loop runs sum = 5000 times, and 100 (n) < 200 (n·log n, with base-10 logs) < 5000 (n·n/2) < 10000 (n²).
Even though we set j = n * n at the beginning, we increment i and decrement j during each iteration, so shouldn't the resulting number of iterations be a lot less than n*n?
Yes! That's why it's O(n^2). By the same logic, it's a lot less than n * n * n, which makes it O(n^3). It's even O(6^n), by similar logic.
big-O gives you information about upper bounds.
I believe you are trying to ask why the complexity is Theta(n^2) or Omega(n^2), but if you're just trying to understand what Big-O is, you really need to understand first and foremost that it gives upper bounds on functions.
I'm taking Data Structures and Algorithm course and I'm stuck at this recursive equation:
T(n) = log n * T(log n) + n
Obviously this can't be handled with the Master Theorem, so I was wondering if anybody has ideas for solving this recurrence. I'm pretty sure it should be solved with a change of variables, like taking n to be 2^m, but I couldn't find a substitution that works.
The answer is Theta(n). To prove something is Theta(n), you have to show it is both Omega(n) and O(n). Omega(n) is obvious in this case because T(n) >= n. To show that T(n) = O(n):
First, pick a large finite value N such that log(n)^2 < n/100 for all n > N. This is possible because log(n)^2 = o(n).
Second, pick a constant C > 100 such that T(n) < Cn for all n <= N. This is possible because N is finite.
We will show inductively that T(n) < Cn for all n > N. Since log(n) < n, the induction hypothesis gives T(log n) < C log(n), so:
T(n) = log(n) * T(log(n)) + n
     < n + log(n) * C * log(n)
     = n + C log(n)^2
     < n + (C/100) n
     = C * (1/100 + 1/C) * n
     < (C/50) * n
     < C * n
In fact, for this function it is even possible to show that T(n) = n + o(n) using a similar argument.
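Here is a quick numeric check of that stronger claim (a C sketch only; the real-valued recursion and the base case T(n) = 1 for n <= 2 are my own assumptions, since the question doesn't specify one):

#include <stdio.h>
#include <math.h>

/* Evaluate T(n) = log2(n) * T(log2(n)) + n with the assumed base case
   T(n) = 1 for n <= 2, and watch T(n)/n approach 1. */
double T(double n)
{
    if (n <= 2.0)
        return 1.0;
    double l = log2(n);
    return l * T(l) + n;
}

int main(void)
{
    for (double n = 1e3; n <= 1e15; n *= 1e3)
        printf("n = %.0e  T(n)/n = %.6f\n", n, T(n) / n);
    return 0;
}

The printed ratio drops toward 1 as n grows, consistent with T(n) = n + o(n).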
This is by no means an official proof, but I think it goes like this.
The key is the + n part. Because of it, T is bounded below: T(n) = Omega(n). So let's assume that T(n) = O(n) and have a go at that.
Substitute into the original relation
T(n) = (log n)O(log n) + n
= O(log^2(n)) + O(n)
= O(n)
So it still holds.
An algorithm decomposes (divides) a problem of size n into b sub-problems, each of size n/b, where b is an integer. The cost of decomposition is n, and C(1) = 1. Show, using repeated substitution, that for all values of b ≥ 2, the complexity of the algorithm is O(n lg n).
This is what I use for my initial equation: C(n) = C(n/b) + n
After k steps of substitution I get C(n) = C(n/b^k) + n * [sum from i = 0 to k-1 of (1/b)^i]
with k = log_b(n).
I'm not sure I'm getting all of this right, because when I finish I don't get n lg n. Can anybody help me figure out what to do?
I think your recurrence is wrong. Since there are b separate subproblems of size n/b, there should be a coefficient of b in front of the C(n / b) term. The recurrence should be
C(1) = 1
C(n) = b C(n/b) + O(n).
Using the Master Theorem, this solves to O(n log n). Another way to see this is that after expanding the recurrence k times, we get
C(n) = b^k C(n / b^k) + kn
This terminates when k = log_b n. Plugging in that value of k and simplifying yields a value that is O(n log n).
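Spelling out that last substitution in LaTeX (using b^k = n when k = log_b n, and C(1) = 1 from the problem statement):

C(n) = b^k \, C(n/b^k) + kn = n \cdot C(1) + n \log_b n = n + n \log_b n = O(n \log n)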
Hope this helps!