Hello, I am trying to solve a question in the above image, but I can't.
Specifically, my question is about C(n) in the image; I got "7 log n + n^(1/3)" at the end.
We know that, for the left term of the + sign, 7 log n <= n for all n > 7 (witness c = 1, k = 7), and for the right term, n^(1/3) <= n.
From my perspective, both terms around the + sign are O(n), and thus the whole C(n) is O(n).
But why is the answer Big-Theta(n^(1/3))?
This only works out if log is the logarithm to base 2 (then log(8) = 3, because 2^3 = 8):
8^(log(n)/9) = (8^log(n))^(1/9) = (n^log(8))^(1/9) = (n^3)^(1/9) = n^(3 * 1/9) = n^(1/3)
n^(1/3) is the same as the 3rd root of n.
It is O(n^(1/3)) and not O(log(n)) because the former term grows faster:
The limit of log(n) / n^(1/3) as n goes to infinity equals 0. If you had to swap the two expressions to get 0, then the other term would be the faster-growing one. E.g. n + log(n) is O(n) because n grows faster; not to be confused with n * log(n), which is O(n * log(n)).
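To see this concretely, here is a small sketch (my own illustration, not from the question; the class name is arbitrary) that prints log2(n) / n^(1/3) for growing n. The ratio heads toward 0, which is why n^(1/3) is the dominant term and why Theta(n^(1/3)) is the tight bound, while O(n) is merely a looser (non-tight) upper bound.

// Sketch: watch log2(n) / n^(1/3) shrink as n grows (illustrative only).
public class GrowthCompare {
    public static void main(String[] args) {
        for (long n = 8; n <= 8_000_000_000L; n *= 10) {
            double log2n = Math.log(n) / Math.log(2);
            double cbrt = Math.cbrt(n);
            System.out.printf("n=%-12d log2(n)=%8.2f n^(1/3)=%10.2f ratio=%.4f%n",
                    n, log2n, cbrt, log2n / cbrt);
        }
    }
}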
I am learning about algorithm complexity, and I just want to verify my understanding is correct.
1) T(n) = 2n + 1 = O(n)
This is because we drop the constant factor 2 and the additive constant 1, and we are left with n. Therefore, we have O(n).
2) T(n) = n * n - 100 = O(n^2)
This is because we drop the constant -100 and are left with n * n, which is n^2. Therefore, we have O(n^2).
Am I correct?
Basically you have these different levels, determined by the "dominant" term of your function, starting from the lowest complexity (a short worked example follows the list):
O(1) if your function only contains constants
O(log(n)) if the dominant part is a logarithm (log, ln, ...)
O(n^p) if the dominant part is polynomial and the highest power is p (e.g. O(n^3) for T(n) = n*(3n^2 + 1) -3 )
O(p^n) if the dominant part is a fixed number p raised to the n-th power (e.g. O(3^n) for T(n) = 3 + n^99 + 2*3^n)
O(n!) if the dominant part is factorial
and so on...
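For instance (a worked example of the rule above, not part of the original answer): T(n) = 5 + 2*log(n) + 3*n^2 + n is O(n^2), because the n^2 term dominates the constant, the logarithm, and the linear term; and T(n) = n^5 + 2^n is O(2^n), because a fixed base raised to the n-th power eventually outgrows any polynomial.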
I know that when you split the size of the problem set into a specified fraction you're dealing with O(log(n)). However, I am confused when there is more than one recursive call that does this. For example, in this function we calculate the value of an exponent.
public static long pow(long x, int n)
{
    if (n == 1)
        return x;                              // base case
    if (isEven(n))
        return pow(x, n / 2) * pow(x, n / 2);  // even: two recursive calls on n/2
    else
        return pow(x * x, n / 2) * x;          // odd: square the base, recurse once
}

// helper assumed by the snippet
private static boolean isEven(int n)
{
    return n % 2 == 0;
}
After doing the analysis, I got that the run time equals O(n). Am I correct? Thanks for your time.
Yes, you are correct, at least under worst case analysis.
Note that for n = 2^k, for some natural k, you get that, except when reaching the stop clause, the condition is always true, and the recursive function will be run twice.
When that is established, it is enough to analyze:
T(n) = T(n/2) + T(n/2) + X
where X is some constant (each recursive call does a constant amount of work, ignoring the nested recursive calls).
From master theorem case 1, with:
f(n) = X
a = 2, b = 2, c = 0 (since X is in O(n^0)=O(1))
And since c=0 < 1 = log_2(2), conditions for case 1 apply, and we can conclude the function T(n) is in Theta(n^log_2(2)) = Theta(n)
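As a quick sanity check on that bound (my own sketch, not part of the original answer; class and method names are arbitrary), the following counts the recursive invocations made by a function with the same call structure as pow() for n = 2^k. The count comes out at roughly 2n, i.e. linear in n:

// Count recursive calls of the pow() structure above for n = 2^k (worst case).
public class WorstCaseCount {
    static long calls(long n) {
        if (n == 1) return 1;                        // stop clause
        if (n % 2 == 0) return 1 + 2 * calls(n / 2); // even: two recursive calls
        return 1 + calls(n / 2);                     // odd: one recursive call
    }

    public static void main(String[] args) {
        for (int k = 1; k <= 20; k++) {
            long n = 1L << k;
            long c = calls(n);
            System.out.printf("n=%-8d calls=%-8d calls/n=%.2f%n", n, c, (double) c / n);
        }
    }
}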
Average case analysis:
For the average case, with a uniformly distributed number n, on average half of the bits in the binary representation of n will be set (1) and half will be clear (0).
Since dividing by 2 is basically an arithmetic shift right, and the condition isEven(n) is true if and only if the least significant bit is 0, the expected complexity function is:
T(n) = 0.5 T(n/2) + 0.5 * 2 T(n/2) + X = 0.5 * 3 * T(n/2) + X
= 1.5 * T(n/2) + X
So
T(n) = 3/2 T(n/2) + X
Case 1 still applies (assuming constant X):
a = 3/2, b=2, c = 0
and you get an average case complexity of Theta(n^(log_2 1.5)) ≈ Theta(n^0.585)
Quick Note:
This assumes all arithmetic operations are O(1). If this is not the case (very large numbers), you should put their complexity in place of the constant in the definition of T(n) and reanalyze. As long as each such operation is sub-linear (in the number, not in the number of bits representing it), the worst case result remains Theta(n), since case 1 of the master theorem still applies. (For the average case, any per-call cost that grows more slowly than ~n^0.58 won't change the result shown above.)
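If you want to see the average-case exponent empirically, here is a rough sketch (my own, not part of the answer; the seed, bit lengths, and sample counts are arbitrary) that averages the call count of the same recursion over random n of a fixed bit length and prints n^(log_2 1.5), roughly n^0.585, next to it. The two columns should grow at about the same rate.

import java.util.Random;

// Rough empirical check: average recursive-call count over random n vs. n^0.585.
public class AverageCaseCheck {
    static long calls(long n) {
        if (n == 1) return 1;
        if (n % 2 == 0) return 1 + 2 * calls(n / 2); // even: two calls
        return 1 + calls(n / 2);                     // odd: one call
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double exponent = Math.log(1.5) / Math.log(2); // ~0.585
        for (int bits = 12; bits <= 24; bits += 4) {
            long lo = 1L << (bits - 1), hi = 1L << bits;
            int samples = 1000;
            double sum = 0;
            for (int i = 0; i < samples; i++) {
                long n = lo + (long) (rng.nextDouble() * (hi - lo));
                sum += calls(n);
            }
            System.out.printf("bits=%d  avg calls=%.0f  n^0.585 approx %.0f%n",
                    bits, sum / samples, Math.pow(lo, exponent));
        }
    }
}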
Varun, you are partly correct. Let's see the two cases:
If n is even, then you are just dividing the task into two halves without optimizing significantly, since pow(x, n/2) is calculated twice.
If n is odd, then you have a special case of decomposition: x will be replaced by x*x, which makes x*x the new base and saves you from recalculating it later.
In the first case we have a slight optimization, since it is easier to repeat smaller products than to do the whole thing at once, as:
(3 * 3 * 3 * 3) * (3 * 3 * 3 * 3) is easier to calculate than (3 * 3 * 3 * 3 * 3 * 3 * 3 * 3), so the first case slightly improves the calculation by using the fact that multiplication is an associative operation. The number of multiplications executed in the first case is not reduced, but the way the computation is organized is better.
In the second case, however, you have significant optimizations. Let's suppose that n = (2^p) - 1. In that case, we reduce the problem to one where n = ((2^p - 1) - 1) / 2 = ((2^p) - 2) / 2 = (2^(p-1)) - 1. So, if p is a natural number and n = (2^p) - 1, then each step roughly halves n. The algorithm is therefore logarithmic in the best case scenario n = (2^p) - 1 and linear in the worst case scenario n = 2^p.
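As a concrete instance of that reduction (my own example, not from the original answer): for p = 4, n = 2^4 - 1 = 15 maps to (15 - 1)/2 = 7 = 2^3 - 1, then to 3 = 2^2 - 1, then to 1, so only three halvings are needed, each with a single recursive call. By contrast, n = 2^4 = 16 takes the even branch at every step (16 → 8 → 4 → 2 → 1), spawning two recursive calls each time, which is what makes the worst case linear.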
We usually analyze worst case time complexity, which here happens when isEven(n) is true at every level of the recursion (i.e. n is a power of two). In that case, we have
T(n) = 2T(n/2) + O(1)
where T(n) means the time complexity of pow(x, n).
Apply Master theorem, Case 1 to get the Big-O notation form of T(n). That is:
T(n) = O(n)
What is (or what should be) the complexity of the (divide and conquer) trominoes algorithm, and why?
I've been given a 2^k * 2^k sized board, and one of the tiles is randomly removed, making it a deficient board. The task is to fill the board with "trominoes", which are L-shaped figures made of 3 tiles.
Tiling Problem
– Input: An n by n square board, with one of the 1 by 1 squares
missing, where n = 2^k for some k ≥ 1.
– Output: A tiling of the board using a tromino, a three square tile
obtained by deleting the upper right 1 by 1 corner from a 2 by 2
square.
– You are allowed to rotate the tromino, for tiling the board.
Base Case: A 2 by 2 square can be tiled.
Induction:
– Divide the square into 4, n/2 by n/2 squares.
– Place the tromino at the “center”, where the tromino does not
overlap the n/2 by n/2 square which already contains the missing 1 by 1
square.
– Solve each of the four n/2 by n/2 boards inductively.
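For reference, here is a compact sketch of that divide-and-conquer tiling (my own illustrative implementation of the steps above, not code from the question; the board size, missing-cell position, and labeling scheme are arbitrary choices):

// Divide-and-conquer tromino tiling of a 2^k x 2^k board with one missing cell.
public class TrominoTiling {
    static int[][] board;
    static int label = 0; // each placed tromino gets its own label

    // Tile the size-by-size sub-board with top-left corner (top, left), whose
    // single already-covered ("missing") cell is at (missR, missC).
    static void tile(int top, int left, int size, int missR, int missC) {
        if (size == 1) return;               // a single cell: it is the missing one
        int half = size / 2;
        int id = ++label;
        int midR = top + half, midC = left + half;
        // Find which quadrant contains the missing cell...
        boolean inTopLeft  = missR <  midR && missC <  midC;
        boolean inTopRight = missR <  midR && missC >= midC;
        boolean inBotLeft  = missR >= midR && missC <  midC;
        boolean inBotRight = missR >= midR && missC >= midC;
        // ...and place one tromino at the center, covering the center-adjacent
        // corner of each of the other three quadrants.
        if (!inTopLeft)  board[midR - 1][midC - 1] = id;
        if (!inTopRight) board[midR - 1][midC]     = id;
        if (!inBotLeft)  board[midR][midC - 1]     = id;
        if (!inBotRight) board[midR][midC]         = id;
        // Recurse into each quadrant; its "missing" cell is either the original
        // missing cell or the corner just covered by the central tromino.
        tile(top,  left, half, inTopLeft  ? missR : midR - 1, inTopLeft  ? missC : midC - 1);
        tile(top,  midC, half, inTopRight ? missR : midR - 1, inTopRight ? missC : midC);
        tile(midR, left, half, inBotLeft  ? missR : midR,     inBotLeft  ? missC : midC - 1);
        tile(midR, midC, half, inBotRight ? missR : midR,     inBotRight ? missC : midC);
    }

    public static void main(String[] args) {
        int n = 8;                // a 2^3 by 2^3 board (arbitrary example size)
        board = new int[n][n];
        board[2][5] = -1;         // arbitrary missing cell, marked for display
        tile(0, 0, n, 2, 5);
        for (int[] row : board) {
            StringBuilder sb = new StringBuilder();
            for (int v : row) sb.append(String.format("%4d", v));
            System.out.println(sb);
        }
    }
}

In the printed board the missing cell stays at -1 and every other label appears exactly three times (one tromino per label); with n = 8 that is (64 - 1)/3 = 21 trominoes.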
This algorithm runs in time O(n^2) = O(4^k). To see why, notice that the algorithm does O(1) work per grid, then makes four subcalls to grids whose width and height are half the original size. If we use n as a parameter denoting the width or height of the grid, we have the following recurrence relation:
T(n) = 4T(n / 2) + O(1)
By the Master Theorem, this solves to O(n^2). Since n = 2^k, we see that n^2 = 4^k, so this is also O(4^k) if you want to use k as your parameter.
We could also let N denote the total number of squares on the board (so N = n^2), in which case the subcalls are to four grids of size N / 4 each. This gives the recurrence
S(N) = 4S(N / 4) + O(1)
This solves to O(N) = O(n^2), confirming the above result.
Hope this helps!
To my understanding, the complexity can be determined as follows. Let T(n) denote the number of steps needed to solve a board of side length n. From the description in the original question above, we have
T(2) = c
where c is a constant and
T(n) = 4*T(n/2) + b
where b is a constant for placing the tromino. Using the master theorem, the runtime bound is
O(n^2)
via case 1.
I'll try to offer less formal solutions but without making use of the Master theorem.
– Place the tromino at the “center”, where the tromino does not overlap the n/2 by n/2 square which already contains the missing 1 by 1 square.
I'm guessing this is an O(1) operation? In that case, if n is the board size (I take n to be the total number of cells; see the note below):
T(1) = O(1)
T(n) = 4T(n / 4) + O(1) =
= 4(4T(n / 4^2) + O(1)) + O(1) =
= 4^2T(n / 4^2) + 4*O(1) + O(1) =
= ... =
= 4^k * T(n / 4^k) + (4^(k-1) + ... + 4 + 1) * O(1)
But the geometric sum 1 + 4 + ... + 4^(k-1) = (4^k - 1)/3 is O(4^k), and n = 2^k x 2^k = 2^(2k) = (2^2)^k = 4^k, so the whole algorithm is O(n).
Note that this does not contradict @Codor's answer, because he took n to be the side length of the board, while I took it to be the entire area.
If the middle step is not O(1) but O(n):
T(n) = 4T(n / 4) + O(n) =
= 4(4*T(n / 4^2) + O(n / 4)) + O(n) =
= 4^2T(n / 4^2) + 2*O(n) =
= ... =
= 4^kT(n / 4^k) + k*O(n)
We have:
k*O(n) = O(n log n), because 4^k = n means k = log_4(n)
So the entire algorithm would be O(n log n).
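As a quick numerical illustration of the two cases above (my own sketch; S1 and S2 are just names for this example), the following evaluates S1(N) = 4*S1(N/4) + 1 and S2(N) = 4*S2(N/4) + N for N a power of 4. S1/N settles near a constant while S2 tracks N*log(N), matching O(N) and O(N log N).

// Evaluate S1(N) = 4*S1(N/4) + 1 and S2(N) = 4*S2(N/4) + N for N = 4^k.
public class RecurrenceGrowth {
    static long s1(long N) { return N == 1 ? 1 : 4 * s1(N / 4) + 1; }
    static long s2(long N) { return N == 1 ? 1 : 4 * s2(N / 4) + N; }

    public static void main(String[] args) {
        for (long N = 4; N <= 1L << 20; N *= 4) {
            double log4N = Math.log(N) / Math.log(4);
            System.out.printf("N=%-9d S1(N)/N=%.2f   S2(N)/(N*log4(N))=%.2f%n",
                    N, (double) s1(N) / N, s2(N) / (N * log4N));
        }
    }
}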
You do O(1) work per tromino placed. Since there are (n^2 - 1)/3 trominoes to place, the algorithm takes O(n^2) time.
I have to do an exercise from my algorithm book. Suppose a mergesort is implemented to split an array at a factor α, which varies in a range from 0.1 to 0.9.
This is the original method to calculate the split point
middle = fromIndex + (toIndex - fromIndex)/2;
I would like to change it to this:
factor = 0.1; // varies in the range from 0.1 to 0.9
middle = fromIndex + (int) ((toIndex - fromIndex) * factor);
So my questions are:
Does this impact the computational complexity?
What's the impact on recursion tree depths?
This does change the actual complexity, but not the asymptotic complexity.
If you think about the new recurrence relation you'll get, it will be
T(1) = 1
T(n) = T(αn) + T((1 - α)n) + Θ(n)
Looking over the recursion tree, each level of the tree still does a total of Θ(n) work, but the number of levels will be greater. Specifically, let's suppose that 0.5 ≤ α < 1. Then after k recursive calls, the largest block remaining in the recursion will have size n α^k. The recursion stops when this hits size one. Solving, we get:
n α^k = 1
α^k = 1/n
k log α = -log n
k = -log n / log α
k = log n / log (1/α)
In other words, varying α varies the constant factor on the logarithmic term of the depth of the recursion. The above equation is minimized when α = 0.5 (since we are subject to the restriction that α ≥ 0.5), so this would be the optimal way to split. However, picking other splits still gives runtime Θ(n log n), though with a higher constant term.
Hope this helps!
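If you want to see the effect directly, here is a small sketch along the lines of the question's code (my own; the array contents, size, and factor values are arbitrary): a merge sort that splits at a configurable factor and reports the maximum recursion depth it reached, next to the predicted log(n) / log(1/α), where α is the larger of factor and 1 - factor.

// Merge sort that splits at a configurable factor and tracks recursion depth.
public class AlphaMergeSort {
    static int maxDepth;

    static void sort(int[] a, int from, int to, double factor, int depth) {
        maxDepth = Math.max(maxDepth, depth);
        if (to - from <= 1) return;                                     // 0 or 1 element: sorted
        int middle = from + Math.max(1, (int) ((to - from) * factor));  // keep both parts non-empty
        sort(a, from, middle, factor, depth + 1);
        sort(a, middle, to, factor, depth + 1);
        merge(a, from, middle, to);
    }

    static void merge(int[] a, int from, int middle, int to) {
        int[] merged = new int[to - from];
        int i = from, j = middle, k = 0;
        while (i < middle && j < to) merged[k++] = a[i] <= a[j] ? a[i++] : a[j++];
        while (i < middle) merged[k++] = a[i++];
        while (j < to) merged[k++] = a[j++];
        System.arraycopy(merged, 0, a, from, merged.length);
    }

    static boolean isSorted(int[] a) {
        for (int i = 1; i < a.length; i++) if (a[i - 1] > a[i]) return false;
        return true;
    }

    public static void main(String[] args) {
        int n = 1 << 16;
        for (double factor : new double[]{0.1, 0.3, 0.5, 0.9}) {
            int[] a = new int[n];
            for (int i = 0; i < n; i++) a[i] = (i * 31337) % n;   // a scrambled permutation
            maxDepth = 0;
            sort(a, 0, n, factor, 0);
            double predicted = Math.log(n) / Math.log(1 / Math.max(factor, 1 - factor));
            System.out.printf("factor=%.1f  sorted=%b  max depth=%d  predicted ~%.1f%n",
                    factor, isSorted(a), maxDepth, predicted);
        }
    }
}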
I am watching Intro to Algorithms (MIT) lecture 1. There is something like the below (analysis of merge sort):
T(n) = 2T(n/2) + O(n)
A few questions:
Why does the work at the bottom level become O(n)? It is said that the boundary case may have a different constant ... but I still don't get it ...
It is said that the total = cn(lg n) + O(n). Where does the O(n) part come from? Is it the original O(n)?
Although this one's been answered a lot, here's one way to reason about it:
If you expand the recursion you get:
t(n) = 2 * t(n/2) + O(n)
t(n) = 2 * (2 * t(n/4) + O(n/2)) + O(n)
t(n) = 2 * (2 * (2 * t(n/8) + O(n/4)) + O(n/2)) + O(n)
...
t(n) = 2^k * t(n / 2^k) + O(n) + 2*O(n/2) + ... + 2^(k-1) * O(n / 2^(k-1))
The above stops when 2^k = n, which means k = log_2(n).
That makes n / 2^k = 1, which makes the first part of the equality simple to express if we consider t(1) = c (a constant).
t(n) = n * c + O(n) + 2*O(n/2) + ... + 2^(k-1) * O(n / 2^(k-1))
If we consider the sum O(n) + 2*O(n/2) + ... + 2^(k-1) * O(n / 2^(k-1)), we can observe that there are exactly k terms and that each term is essentially equivalent to n. So we can rewrite it like so:
t(n) = n * c + {n + n + n + .. + n} <-- where n appears k times
t(n) = n * c + n * k
but since k = log_2(n), we have
t(n) = n * c + n * log_2(n)
And since in Big-Oh notation n * log_2(n) is equivalent to n * log n, and it grows faster than n * c, it follows that the Big-O of the closed form is:
O(n * log n)
I hope this helps!
EDIT
To clarify your first question, regarding why the work at the bottom becomes O(n): it is basically because you have n unit operations that take place (there are 2^k = n leaf nodes in the expansion tree, and each takes a constant time c to complete). In the closed formula, the work at the bottom is expressed as the first term in the sum, 2^k * t(1) = n * c, where the unit operation t(1) takes constant time.
To answer the second question, the O(n) does not actually come from the original O(n); it represents the work at the bottom (see answer to first question above).
The original O(n) is the time complexity required to merge the two sub-solutions t(n/2). Since the time complexity of the merge operation is assumed to grow linearly with the size of the problem, at each level you have a sum of 2^level terms, each of size O(n / 2^level); this is equivalent to one O(n) operation per level. Now, since you have k levels, the merge complexity for the initial problem is {O(n) at each level} * {number of levels}, which is essentially O(n) * k. Since there are k = log(n) levels, it follows that the time complexity of the merge operation is O(n * log n).
Finally, when you examine all the operations performed, you see that the work at the bottom is less than the actual work performed to merge the solutions. Mathematically speaking, the work performed for each of the n items grows asymptotically slower than the work performed to merge the sub-solutions; put differently, for large values of n, the merge operation dominates. So in Big-Oh analysis, the formula becomes O(n * log(n)).
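To tie the algebra back to numbers, here is a small sketch (mine, not from the lecture; it fixes the leaf constant at c = 1) that evaluates t(n) = 2*t(n/2) + n with t(1) = 1 and prints n*log2(n) + n next to it; with c = 1 the two columns agree, which is exactly the closed form derived above.

// Evaluate t(n) = 2*t(n/2) + n, t(1) = 1, and compare with n*log2(n) + n.
public class MergeSortRecurrence {
    static long t(long n) { return n == 1 ? 1 : 2 * t(n / 2) + n; }

    public static void main(String[] args) {
        for (long n = 2; n <= 1L << 20; n *= 2) {
            double log2n = Math.log(n) / Math.log(2);
            System.out.printf("n=%-9d t(n)=%-10d n*log2(n)+n=%.0f%n", n, t(n), n * log2n + n);
        }
    }
}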