So we were taught about recurrence relations a day ago and we were given some code to practice with:
int pow(int base, int n) {
    if (n == 0)
        return 1;
    else if (n == 1)
        return base;
    else if (n % 2 == 0)
        return pow(base * base, n / 2);
    else
        return base * pow(base * base, n / 2);
}
The farthest I've gotten toward a closed form is T(n) = T(n/2^k) + 7k.
I'm not sure how to go any further, as the examples given to us were simple and don't help much here.
How do you actually solve for the recurrence relation of this code?
Let us count only the multiplies in a call to pow, denoted M(N), assuming they dominate the cost (an assumption that is strongly invalid on modern hardware).
By inspection of the code we see that:
M(0) = 0 (no multiply for N = 0)
M(1) = 0 (no multiply for N = 1)
M(N) = M(N/2) + 1 for even N > 1 (one multiply, then the recursive call)
M(N) = M(N/2) + 2 for odd N > 1 (one multiply, then the recursive call, then a second multiply)
This recurrence is complicated a bit by the fact that it handles even and odd integers differently. We will work around this by considering sequences of only even or only odd numbers.
Let us first handle the case of N being a power of 2. If we iterate the formula, we get M(N) = M(N/2) + 1 = M(N/4) + 2 = M(N/8) + 3 = M(N/16) + 4. We easily spot the pattern M(N) = M(N/2^k) + k, so that the solution M(2^n) = n follows. We can write this as M(N) = Lg(N) (base 2 logarithm).
Similarly, N = 2^n-1 will always yield odd numbers after divisions by 2. We have M(2^n-1) = M(2^(n-1)-1) + 2 = M(2^(n-2)-1) + 4... = 2(n-1). Or M(N) = 2 Lg(N+1) - 2.
The exact solution for general N can be fairly involved but we can see that Lg(N) <= M(N) <= 2 Lg(N+1) - 2. Thus M(N) is O(Log(N)).
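For a quick empirical check, here is a small Python sketch (mine, not part of the original answer) that counts the multiplies directly and verifies both bounds:

import math

def multiplies(n):
    # Mirror pow(): even n costs 1 multiply per halving, odd n costs 2.
    if n <= 1:
        return 0
    if n % 2 == 0:
        return 1 + multiplies(n // 2)
    return 2 + multiplies(n // 2)

# Verify Lg(N) <= M(N) <= 2 Lg(N+1) - 2 over a range of N.
for n in range(2, 4096):
    m = multiplies(n)
    assert math.log2(n) <= m + 1e-9
    assert m <= 2 * math.log2(n + 1) - 2 + 1e-9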
Note: This may involve a good deal of number theory, but the formula I found online is only an approximation, so I believe an exact solution requires some sort of iterative calculation by a computer.
My goal is to find an efficient algorithm (in terms of time complexity) to solve the following problem for large values of n:
Let R(a,b) be the number of steps that the Euclidean algorithm takes to find the GCD of nonnegative integers a and b. That is, R(a,b) = 1 + R(b, a%b), and R(a,0) = 0. Given a natural number n, find the sum of R(a,b) over all 1 <= a,b <= n.
For example, if n = 2, then the solution is R(1,1) + R(1,2) + R(2,1) + R(2,2) = 1 + 2 + 1 + 1 = 5.
Since there are n^2 pairs corresponding to the numbers to be added together, simply computing R(a,b) for every pair can do no better than O(n^2), regardless of the efficiency of R. Thus, to improve the efficiency of the algorithm, a faster method must somehow calculate the sum of R(a,b) over many values at once. There are a few properties that I suspect might be useful:
If a = b, then R(a,b) = 1
If a < b, then R(a,b) = 1 + R(b,a)
R(a,b) = R(ka,kb) where k is some natural number
If b <= a, then R(a,b) = R(a+b,b)
If b <= a < 2b, then R(a,b) = R(2a-b,a)
Because of the first two properties, it is only necessary to find the sum of R(a,b) over pairs where a > b. Using this together with the third property, I tried a method that computes R(a,b) only for pairs where a and b are coprime and a > b. The total sum is then n plus the sum of (n / a) * ((2 * R(a,b)) + 1) over all such pairs (using integer division for n / a). This algorithm still has time complexity O(n^2), I discovered: Euler's totient function is roughly linear, so the number of coprime pairs below n grows quadratically.
I don't need any specific code solution, I just need to figure out the procedure for a more efficient algorithm. But if the programming language matters, my attempts to solve this problem have used C++.
Side note: I have found that a formula has been discovered that nearly solves this problem, but it is only an approximation. Note that the formula calculates the average rather than the sum, so it would just need to be multiplied by n^2. If the formula could be expanded to reduce the error, it might work, but from what I can tell, I'm not sure if this is possible.
Using the Stern-Brocot tree and its symmetry, we can look at just one of the four subtrees rooted at 1/3, 2/3, 3/2 or 3/1. The time complexity is still O(n^2), but it obviously performs fewer calculations. The version below uses the subtree rooted at 2/3 (or at least that's the one I looked at to think it through). Also note that we only care about the denominators there, since the numerators are lower, and that the code relies on rules 2 and 3 as well.
C++ code (takes about a tenth of a second for n = 10,000):
#include <iostream>
using namespace std;

long g(int n, int l, int mid, int r, int fromL, int turns) {
    long right = 0;
    long left = 0;
    if (mid + r <= n)
        right = g(n, mid, mid + r, r, 1, turns + (1 ^ fromL));
    if (mid + l <= n)
        left = g(n, l, mid + l, mid, 0, turns + fromL);
    // Multiples
    int k = n / mid;
    // This subtree is rooted at 2/3
    return 4 * k * turns + left + right;
}

long f(int n) {
    // 1/1, 2/2, 3/3 etc.
    long total = n;
    // 1/2, 2/4, 3/6 etc.
    if (n > 1)
        total += 3 * (n >> 1);
    if (n > 2)
        // Technically 3 turns for 2/3, but we can avoid a
        // subtraction per call by starting with 2. (I guess that
        // means it could be another subtree, but I haven't
        // thought it through.)
        total += g(n, 2, 3, 1, 1, 2);
    return total;
}

int main() {
    cout << f(10000);
    return 0;
}
I think this is a hard problem. We can at least avoid division and reduce the space usage to linear via the Stern-Brocot tree.
def f(n, a, b, r):
    return r if a + b > n else r + f(n, a + b, b, r) + f(n, a + b, a, r + 1)

def R_sum(n):
    return sum(f(n, d, d, 1) for d in range(1, n + 1))

def R(a, b):
    return 1 + R(b, a % b) if b else 0

def test(n):
    print(R_sum(n))
    print(sum(R(a, b) for a in range(1, n + 1) for b in range(1, n + 1)))

test(100)
How can I turn the following recursive algorithm into an iterative algorithm?
count(integer: n)
    for i = 1...n
        return count(n-i) + count(n-i)
    return 1
Essentially this algorithm computes the following:
count(n-1) + count(n-2) + ... + count(1)
This is not a tail recursion, so it is not trivial to transform it into an iterative algorithm.
However, a recursion can be simulated using a stack and a loop pretty easily, by pushing to the stack rather than recursing:
stack = Stack()
stack.push(n)
count = 0
while (stack.empty() == false):
    current = stack.pop()
    count++
    for i from current-1 to 1 inclusive (and descending):
        stack.push(i)
return count
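For concreteness, here is a runnable Python transcription of that stack simulation (function names are mine), checked against the recursive definition:

def count_recursive(n):
    # count(n) = count(n-1) + ... + count(0), with count(0) = 1.
    return 1 if n == 0 else sum(count_recursive(j) for j in range(n))

def count_iterative(n):
    # Each pop contributes 1, mirroring count(n) = 1 + count(n-1) + ... + count(1).
    stack = [n]
    count = 0
    while stack:
        current = stack.pop()
        count += 1
        stack.extend(range(current - 1, 0, -1))
    return count

for n in range(10):
    assert count_recursive(n) == count_iterative(n)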
Another solution is doing it with Dynamic Programming, since you don't need to calculate the same thing multiple times:
DP = new int[n+1]
DP[0] = 1
for i from 1 to n:
    DP[i] = 0
    for j from 0 to i-1:
        DP[i] += DP[j]
return DP[n]
Note that you can even optimize it to run in O(n) rather than O(n^2), by remembering the "so far sum":
sum = 1
current = 1
for i from 1 to n:
    current = sum
    sum = sum + current
return current
Lastly, this actually sums to something you can easily pre-calculate: count(n) = 2^(n-1), with count(0) = 1 (you can already suspect it from the last iterative solution above).
Base case: count(0) automatically yields 1, as the loop's body is not reached.
Hypothesis: T(k) = 2^(k-1) for all k < n
Proof:
T(n) = T(n-1) + T(n-2) + ... + T(1) + T(0)    (by the induction hypothesis)
     = 2^(n-2) + 2^(n-3) + ... + 2^0 + 1
     = sum { 2^i | i = 0, ..., n-2 } + 1      (sum of a geometric series)
     = (1 - 2^(n-1))/(1 - 2) + 1 = (2^(n-1) - 1) + 1 = 2^(n-1)
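As a quick confirmation, here is the O(n) loop from above transcribed into Python (with sum renamed to total, since sum is a Python builtin), checked against the closed form:

def count_fast(n):
    total = 1
    current = 1
    for _ in range(1, n + 1):
        current = total
        total = total + current
    return current

assert all(count_fast(n) == 2 ** (n - 1) for n in range(1, 30))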
If you define your problem in the following recursive way:
count(integer: n)
    if n == 0 return 1
    return count(n-1) + count(n-1)
Converting to an iterative algorithm is a typical application of backwards induction where you should keep all previous results:
count(integer: n):
    result[0] = 1
    for i = 1..n
        result[i] = result[i-1] + result[i-1]
    return result[n]
It is clear that this is more complicated than it needs to be, but the point is to exemplify backwards induction. I could accumulate into a single variable, but I wanted to present a more general scheme that can be extended to other cases; in my opinion the idea is clearer this way.
The pseudocode can be improved after the key idea is clear. In fact, there are two very simple improvements that are applicable only to this specific case:
instead of keeping all previous values, only the last one is necessary
there is no need for two identical calls as there are no side-effects expected
Going further, it is possible to calculate from the definition of the function that count(n) = 2^n.
The statement return count(n-i) + count(n-i) appears to be equivalent to return 2 * count(n-i). In that case:
count(integer: n)
    result = 1
    for i = 1...n
        result = 2 * result
    return result
What am I missing here?
I searched for an answer to this question and found various useful links, but when I implemented the idea, I got the wrong answer.
This is what I understood:
If m is prime, then it is very simple. The modular inverse of any number a can be calculated as: inverse_mod(a) = (a^(m-2)) % m.
But when m is not prime, we have to find the prime factorization of m,
i.e. m = (p1^a1)*(p2^a2)*...*(pk^ak), where p1, p2, ..., pk are the prime factors of m and a1, a2, ..., ak are their respective powers.
Then we have to calculate:
m1 = a % (p1^a1),
m2 = a % (p2^a2),
...
mk = a % (pk^ak)
and then combine all these remainders using the Chinese Remainder Theorem (https://en.wikipedia.org/wiki/Chinese_remainder_theorem).
I implemented this idea for m = 1,000,000,000, but I am still getting Wrong Answer.
Here is my reasoning for m = 1,000,000,000, which is not prime:
m = (2^9)*(5^9), where 2 and 5 are m's prime factors.
Let a be the number whose inverse modulo m we have to calculate.
m1 = a % (2^9) = a % 512
m2 = a % (5^9) = a % 1953125
Our answer will be m1*e1 + m2*e2, where
e1 = 1 (mod 512) and e1 = 0 (mod 1953125),
e2 = 1 (mod 1953125) and e2 = 0 (mod 512).
Now, to calculate e1 and e2, I used the Extended Euclidean Algorithm:
https://en.wikipedia.org/wiki/Extended_Euclidean_algorithm
The code is:
void extend_euclid(lld a, lld b, lld& x, lld& y)
{
    if (a % b == 0)
    {
        x = 0;
        y = 1;
        return;
    }
    extend_euclid(b, a % b, x, y);
    int tmp = x;
    x = y;
    y = tmp - (a / b) * y;
}
Now e1 = 1953125*y and e2 = 512*y.
So our final answer will be m1*e1 + m2*e2.
But after doing all this, I am still getting the wrong answer.
Please explain and point out any mistakes I have made in my understanding of the Chinese Remainder Theorem.
Thank you very much.
The inverse of a modulo m only exists if a and m are coprime. If they are not coprime, nothing will help. For example: what is the inverse of 2 mod 4?
2*0 = 0 mod 4
2*1 = 2 mod 4
2*2 = 0 mod 4
2*3 = 2 mod 4
So no inverse.
This can indeed be computed using the extended Euclidean algorithm (although I'm not sure you're doing it right), but the simplest way, in my opinion, is to use Euler's theorem:
a^phi(m) = 1 (mod m)
a * a^(phi(m) - 1) = 1 (mod m)
=> a^(phi(m) - 1) is the inverse of a (mod m)
Here phi is Euler's totient function:
phi(x) = x * (1 - 1/p1)(1 - 1/p2)...(1 - 1/pk)
where p1, ..., pk are the distinct prime divisors of x. For example, 36 = 2^2 * 3^2, so
phi(36) = 36 * (1 - 1/2)(1 - 1/3) = 12
Since only the distinct prime factors are needed, phi(m) can be computed in O(sqrt(m)) by trial division.
The exponentiation can then be computed using exponentiation by squaring.
If you want to read about how you can use the extended Euclidean algorithm to find the inverse faster, read this. I don't think the Chinese remainder theorem can help here.
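Here is a minimal Python sketch of the Euler's-theorem route (the helper names are mine; phi uses trial division over the distinct prime factors, and Python's built-in three-argument pow performs the exponentiation by squaring):

def phi(m):
    # Euler's totient: m * prod(1 - 1/p) over the distinct prime factors p.
    result, x, p = m, m, 2
    while p * p <= x:
        if x % p == 0:
            result -= result // p
            while x % p == 0:
                x //= p
        p += 1
    if x > 1:  # one prime factor > sqrt(m) may remain
        result -= result // x
    return result

def mod_inverse(a, m):
    # Valid only when gcd(a, m) == 1; otherwise no inverse exists.
    return pow(a, phi(m) - 1, m)

print(mod_inverse(3, 10**9))  # 666666667, and 3 * 666666667 % 10**9 == 1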
I believe the following function will do what you want. Change from long to int if appropriate. It returns -1 if there is no inverse, otherwise returns a positive number in the range [0..m).
public static long inverse(long a, long m) { // multiplicative inverse of a mod m
    long r = m;
    long nr = a;
    long t = 0;
    long nt = 1;
    long tmp;
    while (nr != 0) {
        long q = r / nr;
        tmp = nt; nt = t - q * nt; t = tmp;
        tmp = nr; nr = r - q * nr; r = tmp;
    }
    if (r > 1) return -1; // no inverse
    if (t < 0) t += m;
    return t;
}
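For example, inverse(3, 1000000000) returns 666666667 (indeed 3 * 666666667 = 2000000001, which is 1 mod 10^9), while inverse(2, 4) returns -1, since gcd(2, 4) = 2 and no inverse exists.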
I can't follow your algorithm to see exactly what is wrong with it, but I have a few general comments. Euler's totient function is rather slow to calculate in general, depending as it does on prime factorization. The Chinese Remainder Theorem is useful in many contexts for combining results modulo coprime factors, but it is not necessary here and overcomplicates this particular problem, because you end up having to factor your modulus, a very slow operation. Finally, it is faster to implement GCD and modular inverse with a loop rather than with recursion, though of course the two methods are equally effective.
If you're trying to compute a^(-1) mod p^k for p prime, first compute a^(-1) mod p. Given an x such that ax = 1 (mod p^(k-1)), you can "Hensel lift" it: you're looking for the y between 0 and p-1 such that a(x + y p^(k-1)) = 1 (mod p^k). Doing some algebra, you find that you need a y p^(k-1) = 1 - ax (mod p^k), i.e. a y = (1 - ax)/p^(k-1) (mod p), where the division by p^(k-1) is exact. You can work this out using a modular inverse of a (mod p).
(Alternatively, simply notice that a^(p^(k-1)(p-1)) = 1 (mod p^k), so a^(p^(k-1)(p-1) - 1) is the inverse. I mention Hensel lifting because it works in much greater generality.)
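Here is a rough Python sketch of that lift (my own transcription of the algebra above; it assumes p is prime and gcd(a, p) = 1, and raises the modulus by one power of p per step):

def inverse_mod_prime_power(a, p, k):
    inv_p = pow(a, p - 2, p)  # a^(-1) mod p, by Fermat's little theorem
    x, pj = inv_p, p
    for _ in range(1, k):
        # Given a*x = 1 (mod p^j), find y with a*(x + y*p^j) = 1 (mod p^(j+1)):
        # y = ((1 - a*x) / p^j) * a^(-1) (mod p), where the division is exact.
        y = ((1 - a * x) // pj) * inv_p % p
        x += y * pj
        pj *= p
    return x

inv = inverse_mod_prime_power(3, 5, 9)
assert 3 * inv % 5**9 == 1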
I am expressing the algorithms in pseudocode. I'm just wondering if my design works as well as the original one displayed below. The algorithm is supposed to compute the sum of the first n odd positive integers.
This is how the algorithm should look:
procedure sumofodds(n: positive integer)
    if n = 1
        return 1
    else
        return sumofodds(n-1) + (2n-1)
This is how I designed my algorithm:
procedure odd(n: positive integer)
    if n = 1
        return 1
    if n % 2 > 0    // this means n is odd
        return n + odd(n-1)
    if n % 2 = 0    // this means it's even
        return 0 + odd(n-1)
Your algorithm is not the same as the original.
The original computes the sum of the first n odd numbers.
Your algorithm computes the sum of all the odd numbers in the range 1..n.
So for an input of n=3, the first algorithm will compute 1+3+5, while your algorithm will compute 1+3.
(If you want a quicker way, then the formula n*n computes the sum of the first n odd numbers)
One small improvement that might help is defining it with tail recursion. Tail recursion happens when the very last thing to execute is the recursive call. To make this tail recursive, use a helper method and pass the running sum as a parameter. I'm pretty sure the pseudocode below is tail recursive since, regardless of the result of the parity check, the final step is the recursive call (the arithmetic happens before the recursive call).
procedure SumOdds(n)
    return SumOddsHelper(n, 0)

procedure SumOddsHelper(n, sum)
    if n = 1 return sum + 1
    if n is odd return SumOddsHelper(n-1, sum + n)
    else return SumOddsHelper(n-1, sum)
Let me suggest that you implement your idea in Python. You may be surprised to see that the working code is very similar to pseudocode.
This is the original algorithm:
def sum_of_n_odds(n):
    if n == 1:
        return 1
    else:
        return sum_of_n_odds(n-1) + (2*n-1)
And this is the one you wrote:
def sum_of_odds_up_to_n(n):
    if n == 1:
        return 1
    if n % 2 > 0:  # this means n is odd
        return n + sum_of_odds_up_to_n(n-1)
    if n % 2 == 0:  # this means it's even
        return 0 + sum_of_odds_up_to_n(n-1)
These two algorithms compute different things. Calling sum_of_n_odds(10) yields the same result as calling sum_of_odds_up_to_n(19) or sum_of_odds_up_to_n(20). In general, sum_of_odds_up_to_n(n) is equivalent to sum_of_n_odds((n+1)//2), where // means integer division.
If you're interested in making your implementation a little more efficient, I suggest that you omit the final if condition, where n % 2 == 0. An integer is either odd or even, so if it isn't odd, it must be even.
You can get another performance gain by making the recursive call sum_of_odds_up_to_n(n-2) when n is odd. Currently you are wasting half of your function calls on even numbers.
With these two improvements, the code becomes:
def sum_of_odds_up_to_n(n):
    if n <= 0:
        return 0
    if n % 2 == 0:
        return sum_of_odds_up_to_n(n-1)
    return n + sum_of_odds_up_to_n(n-2)
And this is the tail-recursive version:
def sum_of_odds_up_to_n(n, partial=0):
    if n <= 0:
        return partial
    if n % 2 == 0:
        return sum_of_odds_up_to_n(n-1, partial)
    return sum_of_odds_up_to_n(n-2, partial+n)
You should not expect performance gains from the above because Python does not optimize for tail recursion. However, you can rewrite tail recursion as iteration, which will run faster because it doesn't spend time allocating a stack frame for each recursive call:
def sum_of_odds_up_to_n(n):
    partial = 0
    if n % 2 == 0:
        n -= 1
    while n > 0:
        partial += n
        n -= 2
    return partial
The fastest implementation of all relies on mathematical insight. Consider the sum:
1 + 3 + 5 + ... + (n-4) + (n-2) + n
Observe that you can pair the first element with the last element, the second element with the second last element, the third element with the third last element, and so on:
(1 + n) + (3 + n-2) + (5 + n-4) + ...
It is easy to see that this is equal to:
(n + 1) + (n + 1) + (n + 1) + ...
How many terms (n + 1) are there? Since we're pairing up two terms at a time from the original sequence, there are half as many terms in the (n + 1) sequence.
You can check for yourself that the original sequence has (n + 1) / 2 terms. (Hint: see what you get if you add 1 to every term.)
The new sequence has half as many terms as that, or (n + 1) / 4. And each term in the sequence is (n + 1), so the sum of the whole sequence is:
(n + 1) * (n + 1) / 4
The resulting Python program is this:
def sum_of_odds_up_to_n(n):
    if n <= 0:
        return 0
    if n % 2 == 0:
        n -= 1
    return (n+1)*(n+1)//4
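As a quick check, sum_of_odds_up_to_n(7) and sum_of_odds_up_to_n(8) both return (7+1)*(7+1)//4 = 16, which matches 1 + 3 + 5 + 7.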
I need to create a recurrence relation to capture the number of comparisons performed in this algorithm:
Func(n)
    if n = 1
        print "!"
        return 1
    else
        return Func(n-1) * Func(n-1) * Func(n-1)
This is what I came up with - but I can't seem to figure out what I did wrong...
Base case: M(1) = 0

M(n) = 3[M(n-1)]
     = 3[3[M(n-2)]]
     = 3[3[3[M(n-3)]]]
     = 3^i [M(n-i)]

i = n-1   // to reach the base case

M(n) = 3^(n-1) [M(n-(n-1))]
     = 3^(n-1) [M(1)]
     = 3^(n-1) * 0
     = 0   // ????????????
Is my base case wrong? If so, why? Thank you for your help.
For the base case (n = 1), M(1) should be taken as 1, since the comparison is still performed; the recurrence M(n) = 3 M(n-1) then gives M(n) = 3^(n-1).
The question is about the number of comparisons.
Every time you call the function, you execute exactly one comparison.
When n = 1 you are then done, and when n > 1 you perform three recursive calls with n-1.
Clearly,
M(1) = 1
M(n) = 3 M(n-1) + 1
(Unlike the recurrence in the question, this one also counts the comparison performed at every level, not just at the base case.)
By computing M for increasing values of n, you easily spot the pattern 1, 4, 13, 40, 121, ..., that is, M(n) = (3^n - 1)/2.
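A quick instrumented check in Python (the counter is mine, and the print is omitted) confirms the closed form:

comparisons = 0

def func(n):
    global comparisons
    comparisons += 1  # the single "if n = 1" test performed by each call
    if n == 1:
        return 1
    return func(n - 1) * func(n - 1) * func(n - 1)

for n in range(1, 8):
    comparisons = 0
    func(n)
    assert comparisons == (3 ** n - 1) // 2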