In particular, I'm interested in finding the Theta complexity. I can see the algorithm is bounded above by O(log n), but I'm not sure how to proceed, considering the problem size decreases exponentially.
i = n
j = 2
while (i >= 1)
    i = i / j
    j = 2 * j
The simplest way to answer your question is to look at the algorithm through the eyes of the logarithm (in my case the binary logarithm):
log i_0 = log n
log j_0 = 1
k = 0
while (log i_k >= 0)    # valid since the log is monotonically increasing
    log i_{k+1} = log i_k - log j_k
    log j_{k+1} = log j_k + 1
    k++
This way we see that log i decreases by log j_k = k + 1 in every step.
Now when will we exit the loop? This happens as soon as log i_k < 0. The maximum number of steps is thus the smallest integer k such that

log i_k = log n - (1 + 2 + ... + k) = log n - k(k+1)/2 < 0

holds. Asymptotically, this is equivalent to k^2 > 2 log n, so your algorithm is in Θ(sqrt(log n)).
Let us denote by i(k) and j(k) the values of i and j at iteration k (so assume that i(1) = n and j(1) = 2). We can easily prove by induction that j(k) = 2^k and that

i(k) = n / 2^(k(k-1)/2).

Knowing the above formula for i(k), you can compute an upper bound on the value of k needed in order to have i(k) < 1, and you will obtain that the complexity is Θ(sqrt(log n)).
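To sanity-check the Θ(sqrt(log n)) bound, here is a small Python sketch of mine (not part of either answer; it uses integer division to keep exact big-integer arithmetic, which leaves the asymptotics unchanged) that counts the loop's iterations and compares them against sqrt(log2 n):

import math

def count_iterations(n):
    # Run the loop from the question and count its iterations.
    i, j, steps = n, 2, 0
    while i >= 1:
        i = i // j   # integer division instead of exact division
        j = 2 * j
        steps += 1
    return steps

# The ratio steps / sqrt(log2 n) settles near sqrt(2), matching
# the smallest k with k(k+1)/2 > log2 n.
for exp in (10, 100, 1000, 10000):
    steps = count_iterations(2 ** exp)   # log2(n) = exp
    print(exp, steps, steps / math.sqrt(exp))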
I'm learning how to prove/disprove big-Oh, big-Omega, and little-oh, and I have the following algorithm f(n). However, I'm unsure how to handle this f(n), as it has an if statement, which I've never come across before. How can I prove, for example, that this f(n) is O(n^2)?
if n is even
    4 * sum(n/2, n)
else
    (2n-1) * sum(n-3, n)
where sum(j,k) is a 'partial arithmetic sum' of the integers from j up to k, that is:

sum(j,k) =
    0                             if j > k
    j + (j+1) + (j+2) + ... + k   otherwise

e.g. sum(3,4) = 3 + 4 = 7, etc.
Note that sum(j,k) = sum(1,k) – sum(1,j-1).
OK, got it, no worries. I'll try to help you understand this.

Big-O notation is used to define an upper bound on how much time a program will take in terms of its input size.

Let's try to see how much time each statement will take in this function:
f(n) {
    if n is even              // O(1) .....#1
        4 * sum(n/2, n)       // O(n) .....#2
    else                      // O(1) .....#3
        (2n-1) * sum(n-3, n)  // O(n) .....#4
}
if n is even

This can be done by a check like if ((n % 2) == 0). As you can see, this is a constant-time operation: no loop, nothing, just one computation.
The sum(j, k) function is computed by iterating from j to k whenever j <= k. So it will run (k - j + 1) times, which is linear time.
So the total complexity will be the complexity of the if block or of the else block, whichever is executed. For analyzing complexity, one needs to consider the worst case.

Complexity of if block = #1 + #2 = O(1) + O(n) = O(n)
Similarly for else block = #3 + #4 = O(1) + O(n) = O(n)
Max of both = max(O(n), O(n)) = O(n)

Thus, the overall complexity = O(n)
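Here is a rough Python transcription (my own sketch; the helper names are mine, and I've added a step counter) that makes the linear bound visible:

def arith_sum(j, k):
    # sum(j, k): add the integers from j up to k; 0 if j > k.
    total, steps = 0, 0
    for v in range(j, k + 1):   # runs (k - j + 1) times when j <= k
        total += v
        steps += 1
    return total, steps

def f(n):
    # Returns (value of f(n), number of loop iterations performed).
    if n % 2 == 0:                       # O(1) parity check
        s, steps = arith_sum(n // 2, n)  # about n/2 + 1 iterations
        return 4 * s, steps
    else:
        s, steps = arith_sum(n - 3, n)   # exactly 4 iterations
        return (2 * n - 1) * s, steps

# Even n costs about n/2 + 1 iterations, odd n a constant 4;
# both are O(n), so the overall running time is O(n).
for n in (10, 11, 1000, 1001):
    print(n, f(n))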
In pseudo-code:
j = 5;
while (j <= n) {
    j = j * j * j * j;
}
What is the time complexity of this code?
It is way faster than O(log n); is there even any reason to go lower than that?
Let's trace through the execution of the code. Suppose we start with initial value j0:
0. j ← j0
1. j ← j0^4
2. j ← [j0^4]^4 = j0^(4^2)
3. j ← [j0^(4^2)]^4 = j0^(4^3)
4. j ← [j0^(4^3)]^4 = j0^(4^4)
...
m. j ← [j0^(4^(m-1))]^4 = j0^(4^m)
... after m loops.
The loop terminates when the value exceeds n:
j0^(4^m) > n
→ m > log_4(log_{j0}(n))
Thus the time complexity is O(m) = O(log log n).
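A quick empirical check (my own sketch) confirms the doubly logarithmic growth:

import math

def count_loops(n, j0=5):
    # Count iterations of: while j <= n: j = j**4
    j, m = j0, 0
    while j <= n:
        j = j ** 4   # after m iterations, j = j0^(4^m)
        m += 1
    return m

# m grows like log_4(log_j0(n)), i.e. extremely slowly.
for exp in (2, 4, 16, 256, 65536):
    n = 5 ** exp          # log_5(n) = exp, so m stays within 1 of log_4(exp)
    print(exp, count_loops(n), math.log(exp, 4))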
I used help from MathSE to find out how to solve this. The answer is the same as the one by @meowgoesthedog, but I understand it the following way:

On every iteration, the value of j is raised to its own 4th power. Or, we can look at it from the side of n: on every iteration, n effectively shrinks to its 4th root. Hence, the recurrence will look like:
T(n) = 1 + T(n^(1/4))
For any integer k with 2^(4^k) + 1 <= n <= 2^(4^(k+1)), the recurrence will become:
T(n) = 1 + k
if we go on to assume that the 4th root is always an integer. It won't matter if it is not, as the +/- 1 difference will be absorbed in the Big-O calculation.

Now, since the assumption of the 4th root being an integer simplifies things for us, we can try to solve the following equation:

n = 2^(4^k),

which yields k = (Log(Log(n)) - Log(Log(2))) / Log(4).

This implies that T(n) = O(Log(Log(n))).
I am relatively new to Big-O notation and I came across this question:
Sort the following functions by order of growth from slowest to fastest (Big-O notation). For each pair of adjacent functions in your list, please write a sentence describing why it is ordered the way it is: 7n^3 - 10n; 4n^2; n; n^8621909; 3n; 2^(log log n); n log n; 6n log n; n!; 1.1^n
So I have got this order -
1 -> n^8621909
2 -> 7n^3 - 10n
3 -> 4n^2
4 -> 3n
5 -> 6n log n
6 -> n!
7 -> n
8 -> n log n
9 -> 1.1^n
10 -> 2^(log log n)
I am unsure whether this is the correct order. And even if it is, I am unsure how to justify each adjacent pair, because I arrived at this ordering by plugging in certain values for n and then arranging the results.
Sorted from fastest-growing to slowest-growing:

1. n! = O(n!)
2. 1.1^n = O(1.1^n)
3. n^8621909 = O(n^8621909)
4. 7n^3 - 10n = O(n^3)
5. 4n^2 = O(n^2)
6. 6n log n = O(n log n)
6. n log n = O(n log n)
8. 3n = O(n)
8. n = O(n)
10. 2^(log log n) = O(log n)
Some explanations:
O(c^n) < O(n!) < O(n^n) (for any constant c)
O(n^c) < O(c^n) (for constants c > 1)
2^(log log n) can be reduced to log n by setting x = 2^(log log n) and taking the base-2 logarithm of both sides: log x = log log n, hence x = log n.
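If you want to double-check an ordering like this numerically (the approach the asker used), a sketch of mine along these lines works; n! and n^8621909 are left out because they overflow floating point almost immediately, and small n can be misleading for the rest:

import math

# Each entry pairs a label with its growth function.
# Note that 2^(log2 log2 n) simplifies to exactly log2(n).
funcs = [
    ("2^(log log n)", lambda n: 2 ** math.log2(math.log2(n))),
    ("n",             lambda n: n),
    ("3n",            lambda n: 3 * n),
    ("n log n",       lambda n: n * math.log2(n)),
    ("6n log n",      lambda n: 6 * n * math.log2(n)),
    ("4n^2",          lambda n: 4 * n ** 2),
    ("7n^3 - 10n",    lambda n: 7 * n ** 3 - 10 * n),
    ("1.1^n",         lambda n: 1.1 ** n),
]

# By n = 500 the exponential 1.1^n has overtaken every polynomial here,
# so the printed order already matches the asymptotic one.
n = 500
for label, f in sorted(funcs, key=lambda pair: pair[1](n)):
    print(f"{label:14s} {f(n):.3e}")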
The algorithm below has runtime O(n) according to our professor, however I am confused as to why it is not
O(n log(n)), because the outer loop can run up to log(n) times and the inner loop can run up to n times.
Algorithm Loop5(n)
    i = 1
    while i <= n
        j = 1
        while j <= i
            j = j + 1
        i = i * 2
Your professor was right, the running time is O(n).
In the iteration of the outer while-loop where i = 2^k, for k = 0, 1, ..., floor(log n), the inner while-loop makes O(i) iterations. (When I say log n I mean the base-2 logarithm log_2 n.)
The running time is O(1+2+2^2+2^3+...+2^k) for k=floor(log n). This sums to O(2^{k+1}) which is O(2^{log n}). (This follows from the formula for the partial sum of geometric series.)
Because 2^{log n} = n, the total running time is O(n).
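A short instrumented version (my sketch) makes the geometric-series bound concrete:

def loop5_steps(n):
    # Count how many times the inner loop body of Loop5 runs in total.
    i, steps = 1, 0
    while i <= n:
        j = 1
        while j <= i:
            j += 1
            steps += 1
        i *= 2
    return steps

# Total = 1 + 2 + 4 + ... + 2^floor(log2 n) = 2^(floor(log2 n) + 1) - 1,
# which is at most 2n - 1, hence O(n).
for n in (10, 100, 1000, 10**6):
    print(n, loop5_steps(n), 2 * n - 1)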
For the interested, here's a proof that the powers of two really sum to what I claim they sum to. (This is a very special case of a more general result.)
Claim. For any natural k, we have 1+2+2^2+...+2^k = 2^{k+1}-1.
Proof. Note that (2-1)*(1+2+2^2+...+2^k) = (2 - 1) + (2^2 - 2) + ... + (2^{k+1} - 2^k), where every 2^i with 0 < i < k+1 appears once with a plus sign and once with a minus sign and cancels out, and we are left with 2^{k+1} - 1. QED.
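The identity is also trivial to spot-check in Python:

# Spot-check of 1 + 2 + ... + 2^k = 2^(k+1) - 1 for small k.
for k in range(10):
    assert sum(2 ** i for i in range(k + 1)) == 2 ** (k + 1) - 1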
Got an 'Essential Algorithms' exam so doing a bit of revision.
Came across this question and unsure whether my answer is right.
This imgur link has the question and my working.
http://imgur.com/SfKUrQO
Could someone verify whether I am right / where I've gone wrong?
I can't really follow your handwriting to point out where you went wrong, but here's how I would do it:
T(n) = 2T(n^(1/2)) + c
     = 2(2T(n^(1/4)) + c) + c
     = ...
     = 2^k T(n^(1/2^k)) + (2^k - 1)c
So we need to find the smallest k such that:
n^(1/2^k) = 1 (considering the integer part)
We can apply a logarithm to this expression:
1/(2^k) log n = 0 (remember we're considering integer parts)
=> 2^k >= log n | apply a logarithm again
=> k log 2 >= log log n
=> k = O(log log n) because log 2 is a constant
So we have:
2^(O(log log n)) T(1) + (2^(O(log log n)) - 1)c
= O(2^(log log n))
= O(log n)
I see you got O(sqrt(n)), which isn't wrong either, because log n < sqrt n, so if log n is an upper bound, so is sqrt n. It's just not a tight bound.
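To see the bound numerically, here is a small sketch (mine) that unrolls the recurrence with c = 1, a base case T(n) = 1 for n <= 2, and the floor of the square root; T(n) / log2(n) then settles near a constant:

import math

def T(n):
    # T(n) = 2*T(floor(sqrt(n))) + 1, with T(n) = 1 for n <= 2.
    if n <= 2:
        return 1
    return 2 * T(math.isqrt(n)) + 1

# The ratio approaching a constant (about 2) confirms T(n) = O(log n).
for exp in (4, 16, 64, 256, 1024):
    n = 2 ** exp
    print(exp, T(n), T(n) / math.log2(n))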