I have a question about geometric series. Why is
1 + c + c^2 + ... + c^n = Θ(c^n)
when c > 1? I understand why it is Θ(n) if c = 1 and it is Θ(1) if c < 1, but I just can't figure out why it is Θ(c^n) if c > 1.
Thanks!
The sum of the first n terms of the geometric series
c^0 + c^1 + ... + c^(n-1)
is given by the quantity
(c^n - 1) / (c - 1)
Note that if c > 1, then this quantity is bounded from above by c^n / (c - 1) and from below by c^(n-1) - 1/c. Since c is a constant, both bounds are (up to lower-order terms) constant multiples of c^n, so the sum is O(c^n) and Ω(c^n), and therefore Θ(c^n).
Hope this helps!
Let c > 1 and S(n) = 1 + c + c^2 + ... + c^n.
The first thing to realize is that for every n we have S(n) = (c^(n+1) - 1) / (c - 1), the sum of the series.
So we have that (c^(n+1) - c^n) / (c - 1) <= (c^(n+1) - 1) / (c - 1) = S(n), since c^n >= 1.
So (c^(n+1) - c^n) / (c - 1) = (c^n (c - 1)) / (c - 1) = c^n <= S(n).
Thus we have that S(n) >= c^n.
Now that we have found our lower bound, let's look for the upper bound.
Observe that S(n) = (c^(n+1) - 1) / (c - 1) <= c^(n+1) / (c - 1) = (c^n * c) / (c - 1).
To simplify our view of the algebra a bit, let y = c / (c - 1) and substitute it into the inequality above.
Hence, S(n) <= y * c^n, where y is just some constant since c is! This is an important observation, since now the upper bound is just a constant multiple of c^n.
So now we have found our upper bound as well.
Thus we have c^n <= S(n) <= y * c^n.
Therefore, S(n) = Θ(c^n) when c > 1.
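If you want to see the sandwich numerically, here is a small C++ sanity check (my own addition, not part of the proof; the constant c = 1.5 and the range n <= 50 are arbitrary choices):

#include <cstdio>

// Check numerically that c^n <= S(n) <= (c/(c-1)) * c^n for a sample c > 1.
int main() {
    const double c = 1.5;            // arbitrary constant c > 1
    const double y = c / (c - 1.0);  // the constant from the upper bound
    double S = 1.0;                  // S(0) = c^0 = 1
    double cn = 1.0;                 // c^n
    for (int n = 1; n <= 50; ++n) {
        cn *= c;
        S += cn;                     // S(n) = 1 + c + ... + c^n
        if (!(cn <= S && S <= y * cn))
            std::printf("bound violated at n = %d\n", n);
    }
    std::printf("checked n = 1..50 for c = %.1f\n", c);
    return 0;
}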
I am trying to solve a recurrence relation for the Fibonacci Sequence, but the problem is that it is not homogeneous.
The recurrence relation is as follows:
F(n) = F(n-1) + F(n-2) + Θ(n) for n > 1, where Θ(n) = c1*n + c2 and c1, c2 > 0
Initial conditions: F(0) = 0, F(1) = 1
I've tried to solve it by treating it as a homogeneous linear second-order recurrence with constant coefficients, but I'm not sure how to solve it when I have:
F(n) - F(n - 1) - F(n - 2) = c1*n + c2
Instead of:
F(n) - F(n - 1) - F(n - 2) = 0
What is the best method for solving this type of recurrence relation?
You can sandwich F(n) between a lower and an upper bound using the following inequalities:
2F(n-2) + Θ(n) < F(n) < 2F(n-1) + Θ(n)
Both bounds are exponential, but their bases differ (unrolling them gives roughly 2^(n/2) and 2^n respectively), so on their own they only show that F(n) grows exponentially; the tight bound comes from the characteristic roots q1,2 = (1 ± sqrt(5))/2, as in the answer below, giving F(n) = Θ(q1^n).
With the Ansatz F1(n) = a*n + b you get
F1(n) - F1(n-1) - F1(n-2) = -a*n + 3a - b = Θ(n) = c1*n + c2.
So we have a = -c1 and b = -3c1 - c2, i.e.
F1(n) = -c1*n - 3c1 - c2
solves the given recursion without looking at the initial conditions. Combine this with the solution F0 of the homogeneous recursion (see Binet's formula)
F0(n) = d1*q1^n + d2*q2^n
with q1,2 = (1 ± sqrt(5))/2 to get
F(n) = F0(n) + F1(n) = d1*q1^n + d2*q2^n - c1*n - 3c1 - c2 .
Now one can adjust the factors d1, d2 to match the given initial conditions by solving this system of linear equations
F(0) = d1 + d2 - 3c1 - c2 = 0
F(1) = d1*q1 + d2*q2 - 4c1 - c2 = 1
for d1,d2.
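If it helps, here is a small C++ sanity check (my own sketch, not from the answer): it picks arbitrary constants c1 = 2, c2 = 3, solves the 2x2 system for d1, d2, and compares the closed form against the recurrence.

#include <cstdio>
#include <cmath>

// Compare the closed form d1*q1^n + d2*q2^n - c1*n - 3*c1 - c2 against
// the recurrence F(n) = F(n-1) + F(n-2) + c1*n + c2 with F(0)=0, F(1)=1.
int main() {
    const double c1 = 2.0, c2 = 3.0;                 // arbitrary positive constants
    const double q1 = (1.0 + std::sqrt(5.0)) / 2.0;
    const double q2 = (1.0 - std::sqrt(5.0)) / 2.0;
    const double s = 3.0 * c1 + c2;                  // d1 + d2        (from F(0) = 0)
    const double t = 1.0 + 4.0 * c1 + c2;            // d1*q1 + d2*q2  (from F(1) = 1)
    const double d1 = (t - q2 * s) / (q1 - q2);
    const double d2 = s - d1;
    double fPrev = 0.0, fCur = 1.0;                  // F(0), F(1) via the recurrence
    for (int n = 2; n <= 20; ++n) {
        const double fNext = fCur + fPrev + c1 * n + c2;
        fPrev = fCur; fCur = fNext;
        const double closed = d1 * std::pow(q1, n) + d2 * std::pow(q2, n)
                              - c1 * n - 3.0 * c1 - c2;
        std::printf("n=%2d  recurrence=%.1f  closed form=%.1f\n", n, fCur, closed);
    }
    return 0;
}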
Question:
In less than O(n) time, find a number K in the sequence 1, 2, 3, ..., N such that the sum of 1, 2, 3, ..., K is exactly half of the sum of 1, 2, 3, ..., N.
Maths:
I know that the sum of the sequence 1,2,3....N is N(N+1)/2.
Therefore our task is to find K such that:
K(K+1) = 1/2 * (N)(N+1)/2 if such a K exists.
Pseudo-Code:
sum1 = n*(n+1)/2;
sum2 = 0;
for (i = 1; i < n; i++)
{
    sum2 += i;
    if (sum2 == sum1)
    {
        index = i;
        break;
    }
}
Problem: This solution is O(n), but I need something better, such as O(log(n))...
You're close with your equation, but you dropped the divide by 2 from the K side. You actually want
K * (K + 1) / 2 = N * (N + 1) / (2 * 2)
Or
2 * K * (K + 1) = N * (N + 1)
Plugging that into wolfram alpha gives the real solutions:
K = 1/2 * (-sqrt(2N^2 + 2N + 1) - 1)
K = 1/2 * (sqrt(2N^2 + 2N + 1) - 1)
Since you probably don't want the negative value, the second equation is what you're looking for. That should be an O(1) solution.
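For illustration, a minimal C++ sketch of this O(1) approach (the function name findK and the whole-number check are my additions; for very large N an integer square root would be safer than floating point):

#include <cstdio>
#include <cmath>

// Compute the real root K = (sqrt(2N^2 + 2N + 1) - 1) / 2 and verify that
// it is a whole number that really satisfies 2*K*(K+1) == N*(N+1).
long long findK(long long n) {
    const double root = (std::sqrt(2.0 * n * n + 2.0 * n + 1.0) - 1.0) / 2.0;
    const long long k = std::llround(root);
    return (2 * k * (k + 1) == n * (n + 1)) ? k : -1;  // -1: no such K
}

int main() {
    for (long long n : {3LL, 20LL, 119LL, 10LL}) {
        std::printf("N = %lld -> K = %lld\n", n, findK(n));
    }
    return 0;
}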
The other answers show the analytical solutions of the equation
k * (k + 1) = n * (n + 1) / 2, where n is given
The OP needs k to be a whole number, though, and such value may not exist for every chosen n.
We can adapt Newton's method to solve this equation using only integer arithmetic.
sum_n = n * (n + 1) / 2
k = n
repeat indefinitely                     // it usually needs only a few iterations, it's O(log(n))
    f_k = k * (k + 1)
    if f_k == sum_n
        k is the solution, exit
    if f_k < sum_n
        there's no k, exit
    k_n = (f_k - sum_n) / (2 * k + 1)   // Newton step: f(k)/f'(k)
    if k_n == 0
        k_n = 1                         // avoid an infinite loop
    k = k - k_n
Here is a C++ implementation.
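For reference, a self-contained C++ sketch of the integer Newton iteration above (my own illustration, not the linked implementation; it assumes n*(n+1) fits in 64 bits):

#include <cstdio>

// Integer Newton iteration for k*(k+1) == n*(n+1)/2.
// Returns k if it exists, -1 otherwise.
long long solveK(long long n) {
    const long long sum_n = n * (n + 1) / 2;
    long long k = n;
    while (true) {
        const long long f_k = k * (k + 1);
        if (f_k == sum_n) return k;                    // exact solution found
        if (f_k < sum_n) return -1;                    // overshot below: no integer k
        long long step = (f_k - sum_n) / (2 * k + 1);  // Newton step f(k)/f'(k)
        if (step == 0) step = 1;                       // avoid an infinite loop
        k -= step;
    }
}

int main() {
    for (long long n : {3LL, 20LL, 119LL, 696LL, 5LL}) {
        std::printf("n = %lld -> k = %lld\n", n, solveK(n));
    }
    return 0;
}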
We can find all the pairs (n, k) that satisfy the equation for 0 < k < n ≤ N by adapting the algorithm posted in the question.
n = 1                         // This algorithm compares k * (k + 1) and n * (n + 1) / 2,
sum_n = 1                     // i.e. twice the sum of 1..k and the sum of 1..n.
k = 1                         // It finds all the pairs (n, k) where 0 < k < n ≤ N in O(N)
sum_2k = 2
while n <= N                  // Note that n / k → sqrt(2) when n → ∞
    while sum_n < sum_2k
        n = n + 1             // This inner loop requires a couple of iterations,
        sum_n = sum_n + n     // at most.
    if sum_n == sum_2k
        print n and k
    k = k + 1
    sum_2k = sum_2k + 2 * k
Here is an implementation in C++ that finds the first pairs with N < 200,000,000:
        N            K           K * (K + 1)
--------------------------------------------
        3            2                     6
       20           14                   210
      119           84                  7140
      696          492                242556
     4059         2870               8239770
    23660        16730             279909630
   137903        97512            9508687656
   803760       568344          323015470680
  4684659      3312554        10973017315470
 27304196     19306982       372759573255306
159140519    112529340     12662852473364940
Of course it becomes impractical for too large values and eventually overflows.
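For reference, a compact C++ version of the O(N) scan above could look like this (my own sketch; it uses 64-bit arithmetic, so it too overflows for much larger limits):

#include <cstdint>
#include <cstdio>

// Linear scan over n, advancing k whenever k*(k+1) falls behind n*(n+1)/2.
// Prints every pair (n, k) with 2*k*(k+1) == n*(n+1) and n <= limit.
int main() {
    const std::int64_t limit = 200000000;
    std::int64_t n = 1, k = 1;
    std::int64_t sum_n = 1;   // 1 + 2 + ... + n
    std::int64_t sum_2k = 2;  // k * (k + 1) == 2 * (1 + 2 + ... + k)
    while (n <= limit) {
        while (sum_n < sum_2k) { ++n; sum_n += n; }
        if (sum_n == sum_2k)
            std::printf("n = %lld  k = %lld\n", (long long)n, (long long)k);
        ++k;
        sum_2k += 2 * k;
    }
    return 0;
}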
Besides, there's a far better way to find all those pairs (have you noticed the patterns in the sequences of the last digits?).
We can start by manipulating this Diophantine equation:
2k(k + 1) = n(n + 1)
introducing  u = n + 1  →  n = u - 1
             v = k + 1  →  k = v - 1

2(v - 1)v = (u - 1)u
2(v^2 - v) = u^2 - u
2(4v^2 - 4v) = 4u^2 - 4u
2(4v^2 - 4v) + 2 = 4u^2 - 4u + 2
2(4v^2 - 4v + 1) = (4u^2 - 4u + 1) + 1
2(2v - 1)^2 = (2u - 1)^2 + 1

substituting  x = 2u - 1  →  u = (x + 1)/2
              y = 2v - 1  →  v = (y + 1)/2

2y^2 = x^2 + 1
x^2 - 2y^2 = -1
Which is the negative Pell's equation for 2.
It's easy to find its fundamental solution by inspection: x_1 = 1 and y_1 = 1. It corresponds to n = k = 0, a solution of the original Diophantine equation, but not of the original problem (I'm ignoring the sums of 0 terms).
Once that is known, we can calculate all the other solutions with two simple recurrence relations
x_{i+1} = x_i + 2*y_i
y_{i+1} = y_i + x_i
Note that we need to "skip" the even ys, as they would lead to non-integer solutions. So we can directly use these
x_{i+2} = 3*x_i + 4*y_i   →   u_{i+1} = 3*u_i + 4*v_i - 3   →   n_{i+1} = 3*n_i + 4*k_i + 3
y_{i+2} = 2*x_i + 3*y_i   →   v_{i+1} = 2*u_i + 3*v_i - 2   →   k_{i+1} = 2*n_i + 3*k_i + 2
Summing up:
          n                             k
-----------------------------------------------
3* 0 + 4* 0 + 3 =   3     2* 0 + 3* 0 + 2 =  2
3* 3 + 4* 2 + 3 =  20     2* 3 + 3* 2 + 2 = 14
3*20 + 4*14 + 3 = 119     2*20 + 3*14 + 2 = 84
...
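A short C++ sketch of this approach (my own illustration): starting from (n, k) = (0, 0) and applying the recurrences reproduces the table above with no searching at all.

#include <cstdint>
#include <cstdio>

// Generate (n, k) pairs directly from the recurrences derived above:
//   n' = 3n + 4k + 3,  k' = 2n + 3k + 2,  starting from (n, k) = (0, 0).
int main() {
    std::int64_t n = 0, k = 0;
    for (int i = 0; i < 11; ++i) {
        const std::int64_t nNext = 3 * n + 4 * k + 3;
        const std::int64_t kNext = 2 * n + 3 * k + 2;
        n = nNext; k = kNext;
        std::printf("n = %lld  k = %lld  k*(k+1) = %lld\n",
                    (long long)n, (long long)k, (long long)(k * (k + 1)));
    }
    return 0;
}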
It seems that the problem is asking to solve the Diophantine equation
2K(K+1) = N(N+1).
By inspection, K = 2, N = 3 is a solution!
Note that technically this is an O(1) problem, because N has a finite value and does not vary (and if no solution exists, the dependency on N is even meaningless).
The condition you have is that the sum of 1..N is twice the sum of 1..K.
So you have N(N+1) = 2K(K+1), or K^2 + K - (N^2 + N)/2 = 0,
which means K = (-1 ± sqrt(1 + 2(N^2 + N)))/2,
which is O(1) to evaluate.
T(n) = 1/2(T(n − 1) + T(n − 2)) + cn, with c > 0
I am having trouble understanding how to solve recurrences with multiple T terms on the right-hand side. I have done a lot of practice solving recurrences with just one recursive term, and by following the definition I can do those well. But this one is not directly solvable with the Master theorem. How can I start a good approach to this question?
solve the homogeneous recurrence:
T_H(n) = 1/2(T_H(n − 1) + T_H(n − 2))
r^2 - r/2 - 1/2 = 0
r = 1 or r = -1/2
T_H(n) = alpha * 1^n + beta * (-1/2)^n (alpha and beta to be determined by initial conditions)
find a particular solution
(1) we want to find a s(n) such that s(n) = 1/2(s(n-1)+s(n-2)) + cn
we know c*n is a polynomial (in n), so a particular solution can be found as a polynomial too.
Trying s(n) = a*n leads to:
a*n = 1/2*(a*(n-1) + a*(n-2)) + c*n, and the a*n terms cancel out (so the c*n term can never be matched); so try the next degree: s(n) = a*n^2 + b*n
a*n^2 + b*n = 1/2*(a*(n-1)^2 + b*(n-1) + a*(n-2)^2 + b*(n-2)) + c*n
Expanding everything and identifying coefficients, we get
a = c/3
b = 5c/9
A quick check, if we don't trust our algebra:
since s(n) must satisfy (1) for all n, let's arbitrarily pick n = 2 and c = 7 and check whether the relation still holds:
n = 2, c = 7
s(n) - 1/2*(s(n-1) + s(n-2)) - c*n ?= 0
the Octave session below shows that the expression is indeed 0
octave:1> n=2
n = 2
octave:2> c=7
c = 7
octave:3> c/3*n^2 + 5*c/9*n - 1/2*(c/3*(n-1)^2 + 5*c/9*(n-1) +c/3*(n-2)^2 + 5*c/9*(n-2))-c*n
ans = 0
Complexity
T(n) = T_H(n) + sp(n) = alpha + beta*(-1/2)^n + (c/3)*n^2 + (5c/9)*n
so T(n) is in Θ(n^2) (the leading coefficient c/3 is positive)
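As a quick numeric sanity check of the full solution (my own addition; the initial values T(0) = 1, T(1) = 2 and c = 7 are arbitrary choices):

#include <cstdio>
#include <cmath>

// Determine alpha and beta from T(0), T(1), then compare the closed form
// alpha + beta*(-1/2)^n + (c/3)*n^2 + (5c/9)*n against the recurrence
// T(n) = 1/2*(T(n-1) + T(n-2)) + c*n.
int main() {
    const double c = 7.0, t0 = 1.0, t1 = 2.0;        // arbitrary choices
    const double beta = (2.0 / 3.0) * (t0 - t1 + 8.0 * c / 9.0);
    const double alpha = t0 - beta;
    double prev2 = t0, prev1 = t1;                   // T(n-2), T(n-1)
    for (int n = 2; n <= 12; ++n) {
        const double tn = 0.5 * (prev1 + prev2) + c * n;
        const double closed = alpha + beta * std::pow(-0.5, n)
                              + (c / 3.0) * n * n + (5.0 * c / 9.0) * n;
        std::printf("n=%2d  recurrence=%.4f  closed form=%.4f\n", n, tn, closed);
        prev2 = prev1; prev1 = tn;
    }
    return 0;
}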
Please can anyone help me with this:
Solve, using the iteration method: T(n) = T(n - 1) + (n - 1)
and prove that T(n) ∈ Θ(n²).
Please, if you can explain it step by step, I would be grateful.
I solved it an easy way:
T(n) = T(n - 1) + (n - 1)                -----------(1)
// now substitute n-1 for n in (1)
T(n-1) = T((n-1)-1) + ((n-1)-1)
T(n-1) = T(n-2) + n-2                    -----------(2)
now substitute (2) into (1) and you get
T(n) = [T(n-2) + n-2] + (n-1)
T(n) = T(n-2) + 2n - 3   // simplified   -----------(3)
// now substitute n-2 for n in (3)
T(n-2) = T((n-2)-2) + [2(n-2) - 3]
T(n-2) = T(n-4) + 2n - 7                 -----------(4)
now substitute (4) into (3) and you get
T(n) = [T(n-4) + 2n - 7] + (2n - 3)
T(n) = T(n-4) + 4n - 10  // simplified
............
continuing this way, after unrolling down to T(n-k):
T(n) = T(n-k) + k*n - k(k+1)/2
now assume k = n-1
T(n) = T(n-(n-1)) + (n-1)*n - (n-1)*n/2
T(n) = T(1) + n(n-1)/2
T(1) is just a constant,
So, finally, T(n) = O(n^2)
T(n) = T(n - 1) + (n - 1)
= (T(n - 2) + (n - 2)) + (n - 1)
= (T(n - 3) + (n - 3)) + (n - 2) + (n - 1)
= ...
= T(0) + 1 + 2 + ... + (n - 3) + (n - 2) + (n - 1)
= C + n * (n - 1) / 2
= O(n^2)
Hence, for sufficiently large n, we have:
n * (n - 1) / 3 ≤ T(n) ≤ n^2
Therefore we have T(n) = Ω(n²) and T(n) = O(n²), thus T(n) = Θ(n²).
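A quick way to convince yourself of the closed form (my own addition): iterate the recurrence from an arbitrary T(0) and compare against T(0) + n*(n-1)/2.

#include <cstdio>

// Iterate T(n) = T(n-1) + (n-1) and compare with the closed form.
int main() {
    const long long t0 = 5;          // arbitrary constant T(0)
    long long t = t0;
    for (long long n = 1; n <= 10; ++n) {
        t += n - 1;                  // T(n) = T(n-1) + (n-1)
        std::printf("n=%lld  T(n)=%lld  closed form=%lld\n",
                    n, t, t0 + n * (n - 1) / 2);
    }
    return 0;
}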
T(n)-T(n-1) = n-1
T(n-1)-T(n-2) = n-2
By subtraction
T(n)-2T(n-1)+T(n-2) = 1
T(n-1)-2T(n-2)+T(n-3) = 1
Again, by subtraction
T(n)-3T(n-1)+3T(n-2)-T(n-3) = 0
Characteristic equation of the recursion is
x^3-3x^2+3x-1 = 0
or
(x-1)^3 = 0.
It has roots x_1,2,3 = 1,
so general solution of the recursion is
T(n) = C_1 1^n + C_2 n 1^n + C_3 n^2 1^n
or
T(n) = C_1 + C_2 n + C_3 n^2.
So,
T(n) = Θ(n^2).
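For example, you can pin the constants down from the recurrence itself: unrolling T(n) = T(n-1) + (n-1) gives
T(n) = T(0) + (0 + 1 + ... + (n-1)) = T(0) + n(n-1)/2
so C_1 = T(0), C_2 = -1/2 and C_3 = 1/2. Since C_3 = 1/2 > 0, the n^2 term is really present, confirming T(n) = Θ(n^2).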
I have tried determining the running time given by a recurrence relation, but my result is not correct.
Recurrence
T(n) = c + T(n-1) if n >= 1
= d if n = 0
My attempt
I constructed this recursion tree:
n
|
n-1
|
n-2
|
n-3
|
n-4
|
n-5
|
.
.
.
|
Till we get 1
Now at level i, the size of the sub problem should be, n-i
But at last we want a problem of size 1. Thus, at the last level, n-i=1 which gives, i=n-1.
So the depth of the tree becomes n-1 and the height becomes n-1+1= n.
Now the time required to solve this recursion = height of the tree * time required at each level, which is:
n+(n-1)+(n-2)+(n-3)+(n-4)+(n-5)+ ...
==> (n+n+n+n+n+ ... )-(1+2+3+4+5+ ... )
==> n - (n(n+1)/2)
Now the time taken = n * ((n - n^2)/2), which should give the order to be n^2, but that is not the correct answer.
Now at level i, the size of the sub problem should be, n-i
Yes, that is correct. But you're assuming that the runtime equals the sum of all the subproblem sizes. Just think about it: summing only the first two levels already gives n + (n - 1) = 2n - 1; why would the problem size increase? Disclaimer: a bit handwavy and not an entirely accurate statement.
What the formula actually says
T(n) = c + T(n-1)
The formula says that solving the problem for some n takes the same time it takes to solve it for a problem size that is one less, plus an additional constant c: c + T(n - 1)
Another way to put the above statement is this: given that the problem takes some time t for a certain problem size, it will take t + c for a problem size that is bigger by one.
We know that at a problem size of n = 0, this takes time d. According to the second statement, for a size of one more, n = 1, it will take d + c. Applying our rule again, it thus takes d + c + c for n = 2. We conclude that it must take d + n*c time for any n.
This is not a proof. To actually prove this, you must use induction as shown by amit.
A correct recursion tree
Your recursion tree only lists the problem size. That's pretty much useless, I'm afraid. Instead, you need to list the runtime for said problem size.
Every node in the tree corresponds to a certain problem size. What you write into that node is the additional time it takes for the problem size. I.e. you sum over all the descendants of a node plus the node itself to get the runtime for a certain problem size.
A graphical representation of such a tree would look like this
Tree      Corresponding problem size
 c        n
 |
 c        n - 1
 |
 c        n - 2
 |
 c        n - 3
 .
 .
 .
 |
 c        2
 |
 c        1
 |
 d        0
Formalizing: as already mentioned, the label of a node is the additional runtime it takes for that problem size; the total runtime for a problem size is its node's label plus the labels of all its descendants. The uppermost node represents a problem size of n, bearing the label c, because that work comes in addition to T(n-1), to which it is connected using a |.
In a formula, you would only write this relation: T(n) = c + T(n-1). Given that tree, you can see how this applies to every n>=1. You could write this down like this:
T(n) = c + T(n - 1) # This means, `c` plus the previous level
T(n - 1) = c + T(n - 2) # i.e. add the runtime of this one to the one above^
T(n - 2) = c + T(n - 3)
...
T(n - (n - 2)) = c + T(1)
T(n - (n - 1)) = c + T(0)
T(0) = d
You can now expand the terms from bottom to top:
T(n - (n - 1)) = c + T(0)
T(0) = d

T(n - (n - 2)) = c + T(1)
T(n - (n - 1)) = c + d
T(0) = d

T(n - (n - 3)) = c + T(2)
T(n - (n - 2)) = c + (c + d)
T(n - (n - 1)) = c + d
T(0) = d

T(n - (n - 4)) = c + T(3)
T(n - (n - 3)) = c + (2*c + d)
T(n - (n - 2)) = c + (c + d)
...

T(n) = c + T(n - 1)
T(n - 1) = c + ((n-2)*c + d)

T(n) = c + (n-1)*c + d = n*c + d
T(n - 1) = (n-1)*c + d
Summing 1 to n
n+(n-1)+(n-2)+(n-3)+(n-4)+(n-5)+ ...
==> (n+n+n+n+n+ ... )-(1+2+3+4+5+ ... )
==> n - (n(n+1)/2)
From the first line to the second line, you have reduced your problem from summing 1 to n to, well, summing 1 to n-1. That's not very helpful, because you're stuck with the same problem.
I'm not sure what you did on the third line, but your transition from the first to the second is basically correct.
This would have been the correct formula:
T(n) = c + T(n-1)
= c + (c + T(n-2))
= ...
= c*i + T(n-i)
= ...
= c*n + T(0)
= c*n + d
If we assume c, d are constants, this gets you O(n).
To prove it mathematically, one can use mathematical induction:
For each k < n, assume T(k) = c*k + d
Base case: T(0) = c*0 + d = d, which matches the definition for n = 0
T(n) = c + T(n-1) (*)
= c + (n-1)*c + d
= c*n + d
(*) is the induction hypothesis, and is valid since n-1 < n
The complexity would be O(n).
As you described, the function converts the problem for input n into the problem for (n-1) using a constant amount of work c.
So moving down the recursion tree we have n levels in total, and at each level we do the constant work c.
So there is a total of c*n work, resulting in the complexity O(n).
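A tiny C++ illustration of this argument (my own addition; work() is a hypothetical helper, not from the answer): the recursion does one constant-cost step per level, and there are n levels above the base case, so the total is n*c + d.

#include <cstdio>

// T(n) = c + T(n-1) for n >= 1, T(0) = d, evaluated directly by recursion.
long long work(long long n, long long c, long long d) {
    if (n == 0) return d;          // base case: T(0) = d
    return c + work(n - 1, c, d);  // one constant amount of work per level
}

int main() {
    const long long c = 3, d = 7;
    for (long long n : {0LL, 1LL, 5LL, 10LL}) {
        std::printf("n=%lld  T(n)=%lld  n*c+d=%lld\n", n, work(n, c, d), n * c + d);
    }
    return 0;
}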