How can I prove Big Theta using the quantificational definition? I know that you have to find two constants such that c1*g(n) <= f(n) <= c2*g(n), but how do you find these constants?
Could anyone help me prove the following as an example: 5x^3 − 7x^2 + 5x + 1 = Θ(x^3)?
Let's assume x > 0, which is usually what we have.
5x^3 − 7x^2 + 5x + 1 <= 5x^3 + 5x + 1
<= 5x^3 + 5x^3 + x^3 ; x >= 1
= 11x^3
On the other hand
5x^3 − 7x^2 + 5x + 1 >= 5x^3 - 7x^2
>= 5x^3 - 4x^3 ; if 7x^2 <= 4x^3, i.e. x >= 7/4
= x^3
In conclusion, for x >= 7/4 we have:
x^3 <= (5x^3 − 7x^2 + 5x + 1) <= 11x^3
and we are done.
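As a quick sanity check (not a substitute for the algebra above), you can evaluate both bounds at a few sample points with a short Python sketch:

def f(x):
    return 5*x**3 - 7*x**2 + 5*x + 1

# check x^3 <= f(x) <= 11*x^3 at sample points x >= 7/4
for x in [7/4, 2, 10, 100, 1000]:
    assert x**3 <= f(x) <= 11*x**3
print("bounds hold at all sample points")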
I'm a computer science student with a final exam next week, and I got confused trying to find the time complexity of the following function. Can you explain it to me?
int bss(int n){
    if(n <= 1)
        return n;
    return bss(n/2) + bss(n/2);
}
For problems like this, you should figure out the recurrence relation first (by looking at the code), then solve the recurrence relation (using mathematics).
To do step 1, we need to look at each line and see what it contributes to the overall running time T(n) of our algorithm:
int bss(int n){
    if(n <= 1)                  // contributes a constant time a
        return n;               // contributes a constant time b in the base case only
    return bss(n/2) + bss(n/2); // contributes a constant time c for the two
                                // divisions and one addition, plus 2T(n/2)
}
Adding up, we get two cases:
n <= 1: T(n) = a + b
n > 1: T(n) = a + c + 2T(n/2)
To solve this system, we can start writing out terms for values of n. Because we divide n by 2, we might as well choose powers of two. It would also be nice to have already calculated T(n/2) when we calculate T(n), so we can double our test value of n each time.
n T(n)
---------
1 a + b
2 a + c + 2T(1) = a + c + 2a + 2b = 3a + 2b + c
4 a + c + 2T(2) = a + c + 6a + 4b + 2c = 7a + 4b + 3c
8 a + c + 2T(4) = a + c + 14a + 8b + 6c = 15a + 8b + 7c
16 a + c + 2T(8) = a + c + 30a + 16b + 14c = 31a + 16b + 15c
...
k (2k - 1)a + kb + (k - 1)c
Based on the pattern we saw, it seems as though the solution for n = k is (2k - 1)a + kb + (k - 1)c. We can try to verify this by plugging it into our equations:
k = 1: (2k - 1)a + kb + (k - 1)c = a + b = T(1) ... correct
k > 1:
(2k - 1)a + kb + (k - 1)c ?= a + c + 2[(2k/2 - 1)a + (k/2)b + (k/2 - 1)c]
?= a + c + (2k - 2)a + kb + (k - 2)c
?= a + c + 2ka - 2a + kb + kc - 2c
?= -a -c + 2ka + kb + kc
?= (2k - 1)a + kb + (k - 1)c ... correct
So, we have found a valid solution to our recurrence relation. The solution is:
T(n) = (2n - 1)a + nb + (n - 1)c
Rearranging:
T(n) = (2a + b + c)n - (a + c)
T(n) is the equation of a line, so the running time is linear in n.
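If you want to double-check the closed form, here is a small Python sketch (a, b, and c are arbitrary test constants) comparing it against the recurrence at powers of two:

a, b, c = 3, 5, 7  # arbitrary test constants

def T(n):
    return a + b if n <= 1 else a + c + 2 * T(n // 2)

for n in [1, 2, 4, 8, 16, 32, 64]:
    assert T(n) == (2*n - 1)*a + n*b + (n - 1)*c
print("closed form matches the recurrence")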
I am trying to prove an equation given in the CLRS exercise book. The equation is:
Sigma k=0 to k=infinity (k-1)/2^k = 0
I solved the LHS but my answer is 1 whereas the RHS should be 0
Following is my solution:
Let's say S = Sigma k/2^k = 1/2 + 2/2^2 + 3/2^3 + 4/2^4 ....
2S = 1 + 2/2 + 3/2^2 + 4/2^3 ...
2S - S = 1 + ( 2/2 - 1/2) + (3/2^2 - 2/2^2) + (4/2^3 - 3/2^3)..
S = 1+ 1/2 + 1/2^2 + 1/2^3 + 1/2^4..
S = 2 -- eq 1
Now let's say S1 = Sigma (k-1)/2^k = 0/2 + 1/2^2 + 2/2^3 + 3/2^4...
S - S1 = 1/2 + (2/2^2 - 1/2^2) + (3/2^3 - 2/2^3) + (4/2^4 - 3/2^4)....
S - S1 = 1/2 + 1/2^2 + 1/2^3 + 1/2^4...
= 1
From eq 1
2 - S1 = 1
S1 = 1
But the required RHS is 0. Is there anything wrong with my solution? Thanks.
Yes, there is an issue in your solution.
Your formulation of S is correct, but you calculated S1 incorrectly: you missed the k=0 term of S1, because you started substituting values from k=1. For S this doesn't matter, because its k=0 term is 0 anyway.
Therefore,
S1 = Sigma (k-1)/2^k = -1 + 0/2 + 1/2^2 + 2/2^3 + 3/2^4...
// you missed -1 here because you started substituting values from k=1
S - S1 = -(-1) + 1/2 + (2/2^2 - 1/2^2) + (3/2^3 - 2/2^3) + (4/2^4 - 3/2^4)....
S - S1 = 1 + (1/2 + 1/2^2 + 1/2^3 + 1/2^4...)
= 1 + 1
= 2.
From eq 1
2 - S1 = 2
S1 = 0.
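A quick numeric check in Python (a partial sum, so it only approximates the limit):

# partial sum of (k-1)/2^k for k = 0..59; the remaining tail is negligible
s1 = sum((k - 1) / 2**k for k in range(60))
print(s1)  # ~0.0 up to floating-point error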
for (int i = 0; i < n; i++)
    for (int j = 0; j < i*i; j++)
        x++;
Would the big O be O(n^3)? I'm just confused about how the i's relate to the n.
The required math (summation) is:
0 + 1 + 4 + 9 + ... + n * n = n**3 / 3 + n**2 / 2 + n / 6 = O(n**3)
So, you're right: it's O(n**3); moreover
0**k + 1**k + ... + n**k = O(n**(k + 1))
Methodically, proceeding with Sigma notation gets you where you need to go: Sigma from i=0 to n-1 of i^2 = (n-1)n(2n-1)/6 = O(n^3).
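To see the count empirically, here is a small Python sketch that tallies the inner-loop iterations and compares them with n^3/3:

def count_ops(n):
    x = 0
    for i in range(n):
        for j in range(i * i):
            x += 1
    return x

for n in [10, 100, 500]:
    print(n, count_ops(n), n**3 // 3)  # the two agree to leading order as n grows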
I just heard that x mod (2^32-1) and x / (2^32-1) are easy to compute, but how?
I need to calculate the formula:
x_n = (x_{n-1} + x_{n-1} / b) mod b.
For b = 2^32 it's easy: x % (2^32) == x & (2^32-1), and x / (2^32) == x >> 32 (the ^ here is exponentiation, not XOR). How do you do that when b = 2^32 - 1?
On the page https://en.wikipedia.org/wiki/Multiply-with-carry they say "arithmetic for modulus 2^32 − 1 requires only a simple adjustment from that for 2^32". So what is the "simple adjustment"?
(This answer only handles the mod case.)
I'll assume that the datatype of x is wider than 32 bits (this answer actually works for any positive integer) and that x is positive (the negative case is just -(-x mod (2^32-1))). If x is at most 32 bits, the question can be answered directly:
x mod (2^32-1) = 0 if x == 2^32-1, x otherwise
x / (2^32 - 1) = 1 if x == 2^32-1, 0 otherwise
We can write x in base 2^32, with digits x0, x1, ..., xn. So
x = x0 + 2^32 * x1 + (2^32)^2 * x2 + ... + (2^32)^n * xn
This makes the answer clearer when we do the modulus, since 2^32 == 1 mod 2^32-1. That is
x == x0 + 1 * x1 + 1^2 * x2 + ... + 1^n * xn (mod 2^32-1)
== x0 + x1 + ... + xn (mod 2^32-1)
x mod 2^32-1 is the same as the sum of the base 2^32 digits! (We can't drop the mod 2^32-1 yet.) There are two cases now: either the sum is between 0 and 2^32-1, or it is greater. In the former case we are done; in the latter, we can just recur until the value lands between 0 and 2^32-1. Getting the digits in base 2^32 is fast, since we can use bitwise operations. In Python (this doesn't handle negative numbers):
def mod_2to32sub1(x):
    s = 0                      # the sum
    while x > 0:               # get the digits
        s += x & (2**32 - 1)
        x >>= 32
    if s > 2**32 - 1:
        return mod_2to32sub1(s)
    elif s == 2**32 - 1:
        return 0
    else:
        return s
(This is extremely easy to generalise to x mod 2^n-1: just replace every occurrence of 32 with n in this answer.)
(EDIT: added the elif clause to avoid an infinite loop on mod_2to32sub1(2**32-1). EDIT2: replaced ^ with **... oops.)
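A quick way to gain confidence in the function is to test it against Python's built-in % operator on random inputs (this assumes mod_2to32sub1 from above is in scope):

import random

for _ in range(1000):
    x = random.getrandbits(100)  # random non-negative integers wider than 32 bits
    assert mod_2to32sub1(x) == x % (2**32 - 1)
print("matches % on 1000 random inputs")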
So you compute with the "rule" 2^32 = 1. In general, 2^(32+x) = 2^x, so you can simplify 2^a by taking the exponent modulo 32. Example: 2^66 = 2^2.
You can express any number in binary and then lower the exponents. Example: the number 2^40 + 2^38 + 2^20 + 2 + 1 can be simplified to 2^8 + 2^6 + 2^20 + 2 + 1.
In general, you can group the exponents every 32 powers of 2 and "downgrade" all exponents modulo 32.
For 64-bit words, the number can be expressed as
2^32 A + B
where 0 <= A, B <= 2^32 - 1. Getting A and B is easy with bitwise operations.
So you can simplify that to A + B, which is much smaller: at most 2^33. Then check whether this number is at least 2^32 - 1, and subtract 2^32 - 1 in that case.
This avoids expensive direct division.
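As a concrete illustration, here is a minimal Python sketch of that 64-bit reduction (the function name is mine, and Python's integers stand in for a 64-bit word):

def mod_2e32m1(x):           # hypothetical helper; assumes 0 <= x < 2**64
    M = 2**32 - 1
    s = (x >> 32) + (x & M)  # A + B, which is less than 2**33
    while s >= M:            # at most two subtractions are ever needed
        s -= M
    return s

assert mod_2e32m1(2**64 - 1) == (2**64 - 1) % (2**32 - 1)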
The modulus has already been explained, nevertheless, let's recapitulate.
To find the remainder of k modulo 2^n-1, write
k = a + 2^n*b, 0 <= a < 2^n
Then
k = a + ((2^n-1) + 1) * b
= (a + b) + (2^n-1)*b
≡ (a + b) (mod 2^n-1)
If a + b >= 2^n, repeat until the remainder is less than 2^n, and if that leads you to a + b = 2^n-1, replace that with 0.
Each "shift right by n and add to the last n bits" moves the first set bit right by n or n-1 places (unless k < 2^(2*n-1), when the first set bit after the shift-and-add may be the 2^n bit). So if the width of the type is large compared to n, this will need many shifts: consider a 128-bit type and n = 3; for large k you will need over 40 shifts. To reduce the number of shifts required, you can exploit the fact that
2^(m*n) - 1 = (2^n - 1) * (2^((m-1)*n) + 2^((m-2)*n) + ... + 2^(2*n) + 2^n + 1),
of which we will only use that 2^n - 1 divides 2^(m*n) - 1 for all m > 0. Then you shift by multiples of n that are roughly half the maximal bit-length the value can have at that step. For the above example of a 128-bit type and the remainder modulo 7 (2^3 - 1), the multiples of 3 closest to 128/2 are 63 and 66, so first shift by 63 bits
r_1 = (k & (2^63 - 1)) + (k >> 63) // r_1 < 2^63 + 2^(128-63) < 2^66
to get a number with at most 66 bits, then shift by 66/2 = 33 bits
r_2 = (r_1 & (2^33 - 1)) + (r_1 >> 33) // r_2 < 2^33 + 2^(66-33) = 2^34
to reach at most 34 bits. Next shift by 18 bits, then 9, 6, 3
r_3 = (r_2 & (2^18 - 1)) + (r_2 >> 18) // r_3 < 2^18 + 2^(34-18) < 2^19
r_4 = (r_3 & (2^9 - 1)) + (r_3 >> 9) // r_4 < 2^9 + 2^(19-9) < 2^11
r_5 = (r_4 & (2^6 - 1)) + (r_4 >> 6) // r_5 < 2^6 + 2^(11-6) < 2^7
r_6 = (r_5 & (2^3 - 1)) + (r_5 >> 3) // r_6 < 2^3 + 2^(7-3) < 2^5
r_7 = (r_6 & (2^3 - 1)) + (r_6 >> 3) // r_7 < 2^3 + 2^(5-3) < 2^4
Now a single subtraction if r_7 >= 2^3 - 1 suffices. To calculate k % (2^n -1) in a b-bit type, O(log2 (b/n)) shifts are needed.
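If it helps to see the whole chain in one place, here is a Python sketch of the same shift sequence (Python's unbounded integers stand in for the 128-bit type):

def mod7_128bit(k):  # assumes 0 <= k < 2**128; n = 3, modulus 2**3 - 1 = 7
    for shift in (63, 33, 18, 9, 6, 3, 3):  # the r_1 .. r_7 steps above
        k = (k & ((1 << shift) - 1)) + (k >> shift)
    return k - 7 if k >= 7 else k  # the final single subtraction

assert mod7_128bit(2**128 - 1) == (2**128 - 1) % 7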
The quotient is obtained similarly, again we write
k = a + 2^n*b, 0 <= a < 2^n
= a + ((2^n-1) + 1)*b
= (2^n-1)*b + (a+b),
so k/(2^n-1) = b + (a+b)/(2^n-1), and we continue while a+b > 2^n-1. Here we unfortunately cannot reduce the work by shifting and masking about half the width, so the method is only efficient when n is not much smaller than the width of the type.
Code for the fast cases where n is not too small:
unsigned long long modulus_2n1(unsigned n, unsigned long long k) {
    unsigned long long mask = (1ULL << n) - 1ULL;
    while (k > mask) {
        k = (k & mask) + (k >> n);
    }
    return k == mask ? 0 : k;
}

unsigned long long quotient_2n1(unsigned n, unsigned long long k) {
    unsigned long long mask = (1ULL << n) - 1ULL, quotient = 0;
    while (k > mask) {
        quotient += k >> n;
        k = (k & mask) + (k >> n);
    }
    return k == mask ? quotient + 1 : quotient;
}
For the special case where n is half the width of the type, the loop runs at most twice, so if branches are expensive, it may be better to unroll the loop and unconditionally execute the loop body twice.
It is not. What you must have heard is that x mod 2^n and x / 2^n are easy: x / 2^n can be performed as x >> n, and x mod 2^n as x & ((1 << n) - 1).
Determine the positive constants c and n0 for the following recurrences (using the substitution method):
T(n) = T(ceiling(n/2)) + 1 ... Guess is Big-Oh(log base 2 of n)
T(n) = 3T(floor(n/3)) + n ... Guess is Big-Omega (n * log base 3 of n)
T(n) = 2T(floor(n/2) + 17) + n ... Guess is Big-Oh(n * log base 2 of n).
I am giving my Solution for Problem 1:
Our Guess is: T(n) = O (log_2(n)).
By the induction hypothesis, assume T(k) <= c * log_2(k) for all k < n, where c is a constant and c > 0.
T(n) = T(ceiling(n/2)) + 1
<=> T(n) <= c*log_2(ceiling(n/2)) + 1
<=> " <= c*{log_2(n/2) + 1} + 1
<=> " = c*log_2(n/2) + c + 1
<=> " = c*{log_2(n) - log_2(2)} + c + 1
<=> " = c*log_2(n) - c + c + 1
<=> " = c*log_2(n) + 1
<=> T(n) not_<= c*log_2(n) because c*log_2(n) + 1 not_<= c*log_2(n).
To remedy this, the following trick is used:
T(n) = T(ceiling(n/2)) + 1
<=> " <= c*log(ceiling(n/2)) + 1
<=> " <= c*{log_2 (n/2) + b} + 1 where 0 <= b < 1
<=> " <= c*{log_2 (n) - log_2(2) + b) + 1
<=> " = c*{log_2(n) - 1 + b} + 1
<=> " = c*log_2(n) - c + bc + 1
<=> " = c*log_2(n) - (c - bc - 1) if c - bc -1 >= 0
c >= 1 / (1 - b)
<=> T(n) <= c*log_2(n) for c >= {1 / (1 - b)}
so T(n) = O(log_2(n)).
This solution seems correct to me. My question is: is this the proper approach?
Thanks to all of you.
For the first exercise:
We want to show by induction that T(n) <= ceiling(log(n)) + 1.
Let's assume that T(1) = 1. Then T(1) = 1 <= ceiling(log(1)) + 1 = 1, and the base case of the induction is proved.
Now we assume that T(i) <= ceiling(log(i)) + 1 holds for every 1 <= i < n.
For the inductive step we have to distinguish the cases when n is even and when n is odd.
If n is even: T(n) = T(ceiling(n/2)) + 1 = T(n/2) + 1 <= ceiling(log(n/2)) + 1 + 1 = ceiling(log(n) - 1) + 1 + 1 = ceiling(log(n)) + 1.
If n is odd: T(n) = T(ceiling(n/2)) + 1 = T((n+1)/2) + 1 <= ceiling(log((n+1)/2)) + 1 + 1 = ceiling(log(n+1) - 1) + 1 + 1 = ceiling(log(n+1)) + 1 = ceiling(log(n)) + 1
The last step is tricky, but it is possible because n is odd and therefore cannot be a power of 2.
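To build some confidence in the bound (again, not a proof), you can compute T(n) exactly from the recurrence with T(1) = 1 and compare it with ceiling(log_2(n)) + 1; a small Python sketch:

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    return 1 if n == 1 else T((n + 1) // 2) + 1  # (n + 1) // 2 == ceiling(n/2)

for n in range(1, 10000):
    assert T(n) <= math.ceil(math.log2(n)) + 1
print("bound holds for n = 1 .. 9999")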
Problem #1:
T(1) = t0
T(2) = T(1) + 1 = t0 + 1
T(4) = T(2) + 1 = t0 + 2
T(8) = T(4) + 1 = t0 + 3
...
T(2^(m+1)) = T(2^m) + 1 = t0 + (m + 1)
Letting n = 2^(m+1), we get that T(n) = t0 + log_2(n) = O(log_2(n))
Problem #2:
T(1) = t0
T(3) = 3T(1) + 3 = 3t0 + 3
T(9) = 3T(3) + 9 = 3(3t0 + 3) + 9 = 9t0 + 18
T(27) = 3T(9) + 27 = 3(9t0 + 18) + 27 = 27t0 + 81
...
T(3^(m+1)) = 3T(3^m) + 3^(m+1) = (3^(m+1))t0 + (3^(m+1))(m+1)
Letting n = 3^(m+1), we get that T(n) = nt0 + nlog_3(n) = O(nlog_3(n)).
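Both unrolled formulas are easy to verify in Python (t0 here is an arbitrary test value for T(1)):

t0 = 5  # arbitrary value for T(1)

def T1(n):
    return t0 if n == 1 else T1(n // 2) + 1

def T2(n):
    return t0 if n == 1 else 3 * T2(n // 3) + n

for m in range(1, 12):
    assert T1(2**m) == t0 + m                    # t0 + log_2(n)
    assert T2(3**m) == (3**m)*t0 + (3**m)*m      # n*t0 + n*log_3(n)
print("both closed forms verified")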
Problem #3:
Consider n = 34. T(34) = 2T(17+17) + 34 = 2T(34) + 34. We can solve this to find that T(34) = -34. We can also see that for odd n, T(n) = 1 + T(n - 1). We continue to find what values are fixed:
T(0) = 2T(17) + 0 = 2T(17)
T(17) = 1 + T(16)
T(16) = 2T(25) + 16
T(25) = T(24) + 1
T(24) = 2T(29) + 24
T(29) = T(28) + 1
T(28) = 2T(31) + 28
T(31) = T(30) + 1
T(30) = 2T(32) + 30
T(32) = 2T(33) + 32
T(33) = T(32) + 1
We get T(32) = 2T(33) + 32 = 2T(32) + 34, meaning that T(32) = -34. Working backward, we get
T(32) = -34
T(33) = -33
T(30) = -38
T(31) = -37
T(28) = -46
T(29) = -45
T(24) = -66
T(25) = -65
T(16) = -114
T(17) = -113
T(0) = -226
As you can see, this recurrence is a little more complicated than the others, and as such, you should probably take a hard look at this one. If I get any other ideas, I'll come back; otherwise, you're on your own.
EDIT:
After looking at #3 some more, it looks like you're right in your assessment that it's O(n*log_2(n)). You can try listing a bunch of values - I did it from n=0 to n=45. You notice a pattern: the values go from negative to positive around n=43,44. To get the next even-index element of the sequence, you add powers of two, in the following order: 4, 8, 4, 16, 4, 8, 4, 32, 4, 8, 4, 16, 4, 8, 4, 64, 4, 8, 4, 16, 4, 8, 4, 32, ...
These numbers are essentially where you'd put the marks on an arbitrary-length ruler: halves, quarters, eighths, sixteenths, etc. As such, we can solve the equivalent problem of finding the order of the sum 1 + 2 + 1 + 4 + 1 + 2 + 1 + 8 + ... (the same as ours divided by 4, and ours is shifted, but the order still works out). Observing that the sum of the first n such terms is roughly Sigma from k=0 to log_2(n) of (n / 2^(k+1)) * 2^k = (n/2) * log_2(n), we get that the simple version grows like (n/2)log_2(n). Multiply by 4 to get ours, shift n to the right by 34, and perhaps add a constant value to the result. So we're playing around with y = 2n*log_2(n) + k' for some constant k'.
Phew. That was a tricky one. Note that this recurrence does not admit arbitrary "initial conditions"; in other words, it does not describe a family of sequences but one specific sequence, with no parameterization.
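If you want to poke at it numerically, here is a Python sketch that seeds the fixed point T(34) = -34 found above (for n >= 35 the argument n//2 + 17 strictly decreases, so the recursion terminates) and watches T(n) against n*log_2(n):

import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # valid for n >= 34 only; smaller n lead into the cyclic equations above
    return -34 if n == 34 else 2 * T(n // 2 + 17) + n

for n in [100, 10**4, 10**6, 10**9]:
    print(n, T(n), T(n) / (n * math.log2(n)))  # bounded ratio, consistent with Theta(n log n)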