How to write the recurrence relation of a pseudocode? - algorithm

Foo(A, f, l)
  Precondition: A[f..l] is an array of integers; f, l are two naturals ≥ 1 with f ≤ l.
  if (f = l) then
      return A[f]
  else
      m ← floor((f + l) / 2)
      return min(Foo(A, f, m), Foo(A, m + 1, l))
  end if
Correct me if I'm wrong, but I think this code returns the smallest integer in the array. But how do I figure out the recurrence relation that describes its time complexity in terms of the size of the array A? Could you please guide me to the solution so I can understand? I don't even know where to begin.

We can recover the recurrence relation from the structure of the pseudocode. Let T(n) represent the time taken by the algorithm as a function of the input size. For n = 1, the time is constant, say T(1) = a. Our question now is: for larger n, how can we express T(n)?
We will be in the else clause for n > 1. We do some extra work - let's call it b - and then call the function twice, once for an input of size floor(n/2) and once for an input of size ceiling(n/2). So we can write this part of the recursion as T(n) = b + T(floor(n/2)) + T(ceiling(n/2)). We can now write out some terms.
n    T(n)
1    a
2    b + a + a = b + 2a
3    b + (b + 2a) + a = 2b + 3a
4    b + (b + 2a) + (b + 2a) = 3b + 4a
5    b + (b + 2a) + (2b + 3a) = 4b + 5a
...  ...
k    (k-1)b + ka = kb - b + ka = k(a + b) - b
This suggests the guess T(n) = (a + b)n - b for some constants a and b that stand for the amounts of work we are treating as constant (note that computing (f + l) / 2 is not strictly constant in terms of n, but this does not change the analysis). We can prove the guess using mathematical induction:
T(1) = a = (a + b)(1) - b, so the base case holds.
Assume that T(n) = (a + b)n - b for all n <= k.
Does T(k + 1) = (a + b)(k + 1) - b hold? Remember that T(k + 1) = b + T(floor((k+1)/2)) + T(ceiling((k+1)/2)). Suppose k + 1 is even and m = (k+1)/2. Then T(k+1) = b + 2T(m) = b + 2[(a + b)m - b] = b + 2m(a + b) - 2b = (2m)(a + b) - b = (k + 1)(a + b) - b, as required. The case where k + 1 is odd is left as an exercise.
This is linear.
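For a concrete check, here is a minimal Python sketch of the same divide-and-conquer minimum (the name foo_min and the call counter are my own additions, and it uses 0-based indexing); counting the calls shows they grow linearly with the array size, matching T(n) = (a + b)n - b:

def foo_min(a, f, l, counter):
    """Return the minimum of a[f..l] (inclusive), mirroring the pseudocode."""
    counter[0] += 1          # each call does a constant amount of extra work
    if f == l:
        return a[f]
    m = (f + l) // 2
    return min(foo_min(a, f, m, counter), foo_min(a, m + 1, l, counter))

for n in (1, 2, 4, 8, 16):
    calls = [0]
    foo_min(list(range(n)), 0, n - 1, calls)
    print(n, calls[0])       # 2n - 1 calls: linear in n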

You're right, it returns the smallest integer of the array.
And the complexity is
O(n); n = size of the array
Explanation: each call splits the range in half and recurses until f = l. The recursion tree has n leaves (one per element) and n - 1 internal nodes, and each node does only a constant amount of work, so the total complexity is O(n).

Related

Find formula to describe recursion in method

I am struggling with writing the formula that describes the recursive nature of the foo method.
The problem is that, as far as I can tell, since n is divided by 2 on every call, the usual binary-tree ("divide the data in half") recurrence should apply here. That recurrence, C(N) = C(N/2) + 1 with C(1) = 1, analyzed for a division factor of 2, gives C(N) = log(N) + 1, namely O(log N).
That all makes sense and seems to be the right choice for the foo method, but it can't be, because for
n = 8 I would get 3 + 1 iterations, which is not n + 1 = 8 + 1 = 9 iterations.
So here is your code:
void foo(int n) {
    if (n == 1) System.out.println("Last line I print");
    if (n > 1) {
        System.out.println("I am printing one more line");
        foo(n/2);
    }
}
We can write a recurrence relation down for its runtime T as a function of the value of the parameter passed into it, n:
T(1) = a, a constant
T(n) = b + T(n/2), b constant, n > 1
We can write out some values of T(n) for various values of n to see if a pattern emerges:
n      T(n)
---------------
1      a
2      a + b
4      a + 2b
8      a + 3b
...
2^k    a + kb
So for n = 2^k, T(n) = a + kb. We can solve for k in terms of n as follows:
n = 2^k <=> k = log(n)
Then we recover the expression T(n) = a + b*log(n). We can easily verify this expression works:
a + b*log(1) = a, as required
a + b*log(n) = b + (a + b*log(n/2))
            = b + (a + b*(log(n) - 1))
            = b + a + b*log(n) - b
            = a + b*log(n), as required
You can also use mathematical induction to do the same thing.
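As a sanity check, here is a small Python sketch (mine, not from the question) that counts how many lines foo prints and compares the count with log2(n) + 1:

import math

def foo(n):
    """Count the number of printed lines instead of printing them."""
    if n == 1:
        return 1              # "Last line I print"
    return 1 + foo(n // 2)    # "I am printing one more line", then recurse

for n in (1, 2, 8, 64, 1024):
    print(n, foo(n), int(math.log2(n)) + 1)   # the two counts agree for powers of 2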

Calculating 1^X + 2^X + ... + N^X mod 1000000007

Is there any algorithm to calculate (1^x + 2^x + 3^x + ... + n^x) mod 1000000007?
Note: a^b is the b-th power of a.
The constraints are 1 <= n <= 10^16, 1 <= x <= 1000. So the value of N is very large.
I can only solve it in O(m log m) where m = 1000000007. That is too slow, because the time limit is 2 secs.
Do you have any efficient algorithm?
There was a comment that it might be duplicate of this question, but it is definitely different.
You can sum up the series
1**X + 2**X + ... + N**X
with the help of Faulhaber's formula, and you'll get a polynomial of degree X + 1 to compute for arbitrary N.
If you don't want to compute Bernoulli numbers, you can find the polynomial by solving X + 2 linear equations (for N = 1, N = 2, N = 3, ..., N = X + 2), which is a slower method but easier to implement.
Let's have an example for X = 2. In this case we have an X + 1 = 3 order polynomial:
A*N**3 + B*N**2 + C*N + D
The linear equations are
A + B + C + D = 1 = 1
A*8 + B*4 + C*2 + D = 1 + 4 = 5
A*27 + B*9 + C*3 + D = 1 + 4 + 9 = 14
A*64 + B*16 + C*4 + D = 1 + 4 + 9 + 16 = 30
Having solved the equations we'll get
A = 1/3
B = 1/2
C = 1/6
D = 0
The final formula is
1**2 + 2**2 + ... + N**2 == N**3 / 3 + N**2 / 2 + N / 6
Now, all you have to do is to put an arbitrarily large N into the formula. So far the algorithm has O(X**2) complexity (since it doesn't depend on N).
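Here is a minimal Python sketch of the linear-equations route (the function name power_sum_polynomial is my own; it drops the constant term, since the sum is 0 at N = 0, so only x + 1 equations are needed, and it solves over exact rationals; for the original problem you would solve the same system with modular inverses mod 1000000007 instead):

from fractions import Fraction

def power_sum_polynomial(x):
    """Coefficients [c1, ..., c_(x+1)] with 1**x + ... + N**x == c1*N + c2*N**2 + ... + c_(x+1)*N**(x+1)."""
    deg = x + 1
    # One equation per N = 1 .. x+1: sum_j c_j * N^j equals the known power sum S(N).
    rows, s = [], 0
    for N in range(1, deg + 1):
        s += N ** x
        rows.append([Fraction(N ** j) for j in range(1, deg + 1)] + [Fraction(s)])
    # Gauss-Jordan elimination with exact arithmetic.
    for col in range(deg):
        piv = next(r for r in range(col, deg) if rows[r][col] != 0)
        rows[col], rows[piv] = rows[piv], rows[col]
        rows[col] = [v / rows[col][col] for v in rows[col]]
        for r in range(deg):
            if r != col and rows[r][col] != 0:
                f = rows[r][col]
                rows[r] = [a - f * b for a, b in zip(rows[r], rows[col])]
    return [rows[j][-1] for j in range(deg)]

print(power_sum_polynomial(2))   # [Fraction(1, 6), Fraction(1, 2), Fraction(1, 3)], i.e. N/6 + N^2/2 + N^3/3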
There are a few ways of speeding up modular exponentiation. From here on, I will use ** to denote "exponentiate" and % to denote "modulus".
First a few observations. It is always the case that (a * b) % m is ((a % m) * (b % m)) % m. It is also always the case that a ** n is the same as (a ** floor(n / 2)) * (a ** (n - floor(n / 2))). This means that for an exponent <= 1000, we can always complete the exponentiation in at most 20 multiplications (and 21 mods).
We can also skip quite a few calculations, since (a ** b) % m is the same as ((a % m) ** b) % m and if m is significantly lower than n, we simply have multiple repeating sums, with a "tail" of a partial repeat.
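For reference, a minimal sketch of exponentiation by squaring with the modulus applied after every multiplication (this is what Python's built-in pow(a, b, m) already does):

def pow_mod(a, b, m):
    """Compute (a ** b) % m using exponentiation by squaring."""
    result = 1
    a %= m
    while b > 0:
        if b & 1:                  # low bit of the exponent is set
            result = (result * a) % m
        a = (a * a) % m            # square the base
        b >>= 1                    # move to the next bit
    return result

M = 1000000007
print(pow_mod(2, 1000, M) == pow(2, 1000, M))   # True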
I think Vatine’s answer is probably the way to go, but I already typed this up and it may be useful, for this or for someone else’s similar problem.
I don’t have time this morning for a detailed answer, but consider this. 1^2 + 2^2 + 3^2 + ... + n^2 would take O(n) steps to compute directly. However, it’s equivalent to n(n+1)(2n+1)/6, which can be computed in O(1) time. A similar equivalence exists for any higher integral power x.
There may be a general solution to such problems; I don’t know of one, and Wolfram Alpha doesn’t seem to know of one either. However, in general the equivalent expression is of degree x+1, and can be worked out by computing some sample values and solving a set of linear equations for the coefficients.
However, this would be difficult for large x, such as 1000 as in your problem, and probably could not be done within 2 seconds.
Perhaps someone who knows more math than I do can turn this into a workable solution?
Edit: Whoops, I see Fabian Pijcke had already posted a useful link about Faulhaber's formula before I posted.
If you want something easy to implement and fast, try this:
Function Sum(x: Number, n: Integer) -> Number
    P := PolySum(:x, n)
    return P(x)
End

Function PolySum(x: Variable, n: Integer) -> Polynomial
    C := Sum-Coefficients(n)
    P := 0
    For i from 1 to n + 1
        P += C[i] * x^i
    End
    return P
End

Function Sum-Coefficients(n: Integer) -> Vector of Rationals
    A := Create-Matrix(n)
    R := Reduced-Row-Echelon-Form(A)
    return last column of R
End

Function Create-Matrix(n: Integer) -> Matrix of Integers
    A := New (n + 1) x (n + 2) Matrix of Integers
    Fill A with 0s
    Fill first row of A with 1s
    For i from 2 to n + 1
        For j from i to n + 1
            A[i, j] := A[i-1, j] * (j - i + 2)
        End
        A[i, n+2] := A[i, n]
    End
    A[n+1, n+2] := A[n, n+2]
    return A
End
Explanation
Our goal is to obtain a polynomial Q such that Q(x) = sum i^n for i from 1 to x. Knowing that Q(x) = Q(x - 1) + x^n => Q(x) - Q(x - 1) = x^n, we can then make a system of equations like so:
d^0/dx^0( Q(x) - Q(x - 1) ) = d^0/dx^0( x^n )
d^1/dx^1( Q(x) - Q(x - 1) ) = d^1/dx^1( x^n )
d^2/dx^2( Q(x) - Q(x - 1) ) = d^2/dx^2( x^n )
...
d^n/dx^n( Q(x) - Q(x - 1) ) = d^n/dx^n( x^n )
Assuming that Q(x) = a_1*x + a_2*x^2 + ... + a_(n+1)*x^(n+1), we will then have n+1 linear equations with unknowns a1, ..., a_(n+1), and it turns out the coefficient cj multiplying the unknown aj in equation i follows the pattern (where (k)_p = (k!)/(k - p)!):
if j < i, cj = 0
otherwise, cj = (j)_(i - 1)
and the independent value of the ith equation is (n)_(i - 1). Explaining why gets a bit messy, but you can check the proof here.
The above algorithm is equivalent to solving this system of linear equations.
Plenty of implementations and further explanations can be found in https://github.com/fcard/PolySum. The main drawback of this algorithm is that it consumes a lot of memory; even my most memory-efficient version uses almost 1 GB for n=3000. But it's faster than both SymPy and Mathematica, so I assume it's okay. Compare to Schultz's method, which uses an alternate set of equations.
Examples
It's easy to apply this method by hand for small n. The matrix for n=1 is
| (1)_0  (2)_0  (1)_0 |   | 1  1  1 |
|   0    (2)_1  (1)_1 | = | 0  2  1 |
Applying a Gauss-Jordan elimination we then obtain
| 1 0 1/2 |
| 0 1 1/2 |
Result = {a1 = 1/2, a2 = 1/2} => Q(x) = x/2 + (x^2)/2
Note the matrix is always already in row echelon form, we just need to reduce it.
For n=2:
| (1)_0  (2)_0  (3)_0  (2)_0 |   | 1  1  1  1 |
|   0    (2)_1  (3)_1  (2)_1 | = | 0  2  3  2 |
|   0      0    (3)_2  (2)_2 |   | 0  0  6  2 |
Applying a Gauss-Jordan elimination we then obtain
| 1  1  0  2/3 |    | 1  0  0  1/6 |
| 0  2  0   1  | => | 0  1  0  1/2 |
| 0  0  1  1/3 |    | 0  0  1  1/3 |
Result = {a1 = 1/6, a2 = 1/2, a3 = 1/3} => Q(x) = x/6 + (x^2)/2 + (x^3)/3
The key to the algorithm's speed is that it doesn't calculate a factorial for every element of the matrix; instead it uses the fact that (k)_p = (k)_(p-1) * (k - (p - 1)), therefore A[i,j] = (j)_(i-1) = (j)_(i-2) * (j - (i - 2)) = A[i-1, j] * (j - (i - 2)), so it builds each row from the previous one.
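Here is a compact Python sketch of the same algorithm (0-based indices; the names create_matrix and sum_coefficients are mine, and exact Fractions stand in for a dedicated rational type):

from fractions import Fraction

def create_matrix(n):
    """The (n+1) x (n+2) matrix from the pseudocode above, built row by row."""
    a = [[Fraction(0)] * (n + 2) for _ in range(n + 1)]
    a[0] = [Fraction(1)] * (n + 2)                 # first row: all 1s
    for i in range(1, n + 1):
        for j in range(i, n + 1):
            a[i][j] = a[i - 1][j] * (j - i + 2)    # falling factorials from the previous row
        a[i][n + 1] = a[i][n - 1]
    a[n][n + 1] = a[n - 1][n + 1]
    return a

def sum_coefficients(n):
    """Back-substitute (the matrix is already triangular) and return [a_1, ..., a_(n+1)]."""
    a, rows = create_matrix(n), n + 1
    for col in reversed(range(rows)):
        a[col] = [v / a[col][col] for v in a[col]]
        for r in range(col):
            f = a[r][col]
            a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [a[i][rows] for i in range(rows)]

print(sum_coefficients(1))   # [Fraction(1, 2), Fraction(1, 2)]                  -> Q(x) = x/2 + x^2/2
print(sum_coefficients(2))   # [Fraction(1, 6), Fraction(1, 2), Fraction(1, 3)]  -> Q(x) = x/6 + x^2/2 + x^3/3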

Recursive function runtime

1. Given that T(0) = 1 and T(n) = T([2n/3]) + c (here [2n/3] denotes the floor of 2n/3), what is the big-Θ bound for T(n)? Is it simply log base 3/2 of n? Please tell me how to get the result.
2. Given the code
void mystery(int n) {
    if (n < 2)
        return;
    else {
        int i = 0;
        for (i = 1; i <= 8; i += 2) {
            mystery(n/3);
        }
        int count = 0;
        for (i = 1; i < n*n; i++) {
            count = count + 1;
        }
    }
}
According to the master theorem, the big-O bound is n^2, but my result is n^2 * log(n) (log base 3). I'm not sure of my result, and I don't really know how to deal with the runtime of recursive functions. Is it simply the log function?
Or, as in this code, is the recurrence T(n) = 4*T(n/3) + n^2?
Cheers.
For (1), the recurrence solves to c*log_{3/2}(n) + c. To see this, you can use the iteration method to expand out a few terms of the recurrence and spot a pattern:
T(n) = T(2n/3) + c
     = T(4n/9) + 2c
     = T(8n/27) + 3c
     ...
     = T((2/3)^k * n) + kc
Assuming that T(1) = c and solving for the choice of k that makes the expression inside the parentheses equal to 1, we get that
1 = (2/3)^k * n
(3/2)^k = n
k = log_{3/2}(n)
Plugging in this choice of k into the above expression gives the final result.
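As a quick numeric check (the code is mine), evaluating the recurrence directly with the floor shows the count growing like log base 3/2 of n, as the closed form predicts; the floor only shaves off lower-order terms:

import math

def T(n, c=1):
    """T(n) = T(floor(2n/3)) + c, with T(1) = c as assumed above."""
    return c if n <= 1 else c + T(2 * n // 3, c)

for n in (10, 1000, 10**6):
    print(n, T(n), round(math.log(n, 1.5) + 1, 1))   # e.g. 5 vs 6.7, 16 vs 18.0, 33 vs 35.1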
For (2), you have the recurrence relation
T(n) = 4T(n/3) + n^2
Using the master theorem with a = 4, b = 3, and d = 2, we see that log_b(a) = log_3(4) < d, so this solves to O(n^2). Here's one way to see this. At the top level, you do n^2 work. At the layer below that, you have four calls each doing n^2 / 9 work, so you do 4n^2 / 9 work, less than the top layer. The layer below that does 16 calls that each do n^2 / 81 work for a total of 16n^2 / 81 work, again much less work than the layer above. Overall, each layer does exponentially less work than the layer above it, so the top layer ends up dominating all the other ones asymptotically.
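A short Python sketch of that layer-by-layer argument (the function name is mine): summing the work over the levels of the recursion tree keeps the total within a constant factor of the top level's n^2, namely below 1/(1 - 4/9) = 1.8:

def level_work_total(n, a=4, b=3):
    """Total work of T(n) = a*T(n/b) + n^2, summed level by level over the recursion tree."""
    total, size, calls = 0.0, float(n), 1
    while size >= 1:
        total += calls * size ** 2   # work done at this level
        calls *= a                   # a times more calls one level down
        size /= b                    # each subproblem is b times smaller
    return total

for n in (81, 729, 6561):
    print(n, round(level_work_total(n) / n**2, 3))   # ratios stay below 1.8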
Let's do some complexity analysis, and we'll find that the asymptotic behavior of T(n) depends on the constants of the recursion.
Given T(n) = A T(n*p) + C, with A,C>0 and p<1, let's first try to prove T(n)=O(n log n). We try to find D such that for large enough n
T(n) <= D(n * log(n))
This yields
A * D(n*p * log(n*p)) + C <= D*(n * log(n))
Looking at the higher order terms, this results in
A*D*p <= D
So, if A*p <= 1, this works, and T(n)=O(n log n).
In the special case that A<=1 we can do better, and prove that T(n)=O(log n):
T(n) <= D log(n)
Yields
A * D(log(n*p)) + C <= D*(log(n))
Looking at the higher order terms, this results in
A * D * log(n) + C + A * D *log(p) <= D * log(n)
Which is true for large enough D and n since A<=1 and log(p)<0.
Otherwise, if A*p>1, let's find the minimal value of q such that T(n)=O(n^q). We try to find the minimal q such that there exists D for which
T(n) <= D n^q
This yields
A * D p^q n^q + C <= D*n^q
Looking at the higher order terms, this results in
A*D*p^q <= D
The minimal q that satisfies this is defined by
A*p^q = 1
So we conclude that T(n)=O(n^q) for q = - log(A) / log(p).
Now, given T(n) = A T(n*p) + B n^a + C, with A,B,C>0 and p<1, try to prove that T(n)=O(n^q) for some q. We try to find the minimal q>=a such that for some D>0,
T(n) <= D n^q
This yields
A * D n^q p^q + B n^a + C <= D n^q
Trying q==a, this will work only if
ADp^a + B <= D
I.e. T(n)=O(n^a) if Ap^a < 1.
Otherwise we get to Ap^q = 1 as before, which means T(n)=O(n^q) for q = - log(A) / log(p).
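As a numeric sanity check of the last case (the code and the particular constants are my own), directly evaluating a recurrence with A*p^a > 1 shows T(n)/n^q leveling off at a constant, where q = -log(A)/log(p):

import math

def T(n, A=9, b=3, B=1, a=1, C=1):
    """Directly evaluate T(n) = A*T(n // b) + B*n^a + C, with T(0) = 1 (so p = 1/b)."""
    if n == 0:
        return 1
    return A * T(n // b, A, b, B, a, C) + B * n ** a + C

# With A = 9, p = 1/3, a = 1 we have A*p^a = 3 > 1, so q = -log(9)/log(1/3) = 2.
q = -math.log(9) / math.log(1 / 3)
for n in (3**7, 3**9, 3**11):
    print(n, round(T(n) / n**q, 3))   # the ratio settles near a constant (about 11.6 here)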

Tetranacci Numbers in Log(n)

I have stumbled upon a problem, which requires me to calculate the nth Tetranacci Number in O(log n).
I have seen several solutions for doing this for Fibonacci Numbers
I was looking to follow a similar procedure (matrix multiplication / fast doubling) to achieve this, but I am not sure how to do it exactly (taking a 4-by-4 matrix and a 1-by-4 vector in a similar fashion doesn't seem to work). With dynamic programming/general loops/any other basic idea, I am not able to achieve sub-linear runtime. Any help appreciated!
Matrix multiplication of course works. Here's how to derive the matrix.
What we want is to find the entries that make the equation
[a b c d] [T(n-1)]   [T(n)  ]
[e f g h] [T(n-2)]   [T(n-1)]
[i j k l] [T(n-3)] = [T(n-2)]
[m n o p] [T(n-4)]   [T(n-3)]
true for all n. Expand.
a T(n-1) + b T(n-2) + c T(n-3) + d T(n-4) = T(n)
e T(n-1) + f T(n-2) + g T(n-3) + h T(n-4) = T(n-1)
i T(n-1) + j T(n-2) + k T(n-3) + l T(n-4) = T(n-2)
m T(n-1) + n T(n-2) + o T(n-3) + p T(n-4) = T(n-3)
The obvious settings here are a = b = c = d = 1 (using the recurrence) and e = j = o = 1 and f = g = h = i = k = l = m = n = p = 0 (basic algebra).
The initial vector is
[T(3)]   [1]
[T(2)]   [0]
[T(1)] = [0]
[T(0)]   [0]
by definition.
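Here is a minimal Python sketch of that matrix-power approach (the names are mine); mat_pow uses exponentiation by squaring, so the whole thing costs O(log n) matrix multiplications:

def mat_mul(X, Y):
    """4x4 matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)] for i in range(4)]

def mat_pow(M, e):
    """M**e by repeated squaring (e >= 0)."""
    R = [[int(i == j) for j in range(4)] for i in range(4)]   # identity
    while e:
        if e & 1:
            R = mat_mul(R, M)
        M = mat_mul(M, M)
        e >>= 1
    return R

def tetranacci_matrix(n):
    """T(0) = T(1) = T(2) = 0, T(3) = 1, T(n) = T(n-1) + T(n-2) + T(n-3) + T(n-4)."""
    if n <= 3:
        return 1 if n == 3 else 0
    M = [[1, 1, 1, 1],    # the matrix derived above
         [1, 0, 0, 0],
         [0, 1, 0, 0],
         [0, 0, 1, 0]]
    P = mat_pow(M, n - 3)
    # Applied to the initial vector [T(3), T(2), T(1), T(0)] = [1, 0, 0, 0],
    # the top entry of the result is simply P[0][0].
    return P[0][0]

print([tetranacci_matrix(n) for n in range(11)])   # [0, 0, 0, 1, 1, 2, 4, 8, 15, 29, 56]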
I have derived the Tetranacci doubling formulas from the corresponding matrix as described in the other answers. The formulas are:
T(2n) = T(n+1)*(2*T(n+2) - T(n+1)) + T(n)*(2*T(n+3) - 2*T(n+2) - 2*T(n+1) - T(n))
T(2n+1) = T(n)^2 + T(n+2)^2 + T(n+1)*(2*T(n+3) - 2*T(n+2) - T(n+1))
T(2n+2) = T(n+1)*(2*T(n) + T(n+1)) + T(n+2)*(2*T(n+3) - T(n+2))
T(2n+3) = T(n+1)^2 + T(n+3)^2 + T(n+2)*(2*T(n) + 2*T(n+1) + T(n+2))
With these, we can implement the "fast doubling" method. Here's one such implementation in Python, whose native support for arbitrary-sized integers is very convenient:
def tetranacci_by_doubling(n):
    if n >= 0:
        a, b, c, d = 0, 0, 0, 1  # T(0), T(1), T(2), T(3)
    else:  # n < 0
        a, b, c, d = 1, 0, 0, 0  # T(-1), T(0), T(1), T(2)

    # unroll the last iteration to avoid computing unnecessary values.
    for i in reversed(range(1, abs(n).bit_length())):
        w = b*(2*c - b) + a*(2*(d - c - b) - a)
        x = a*a + c*c + b*(2*(d - c) - b)
        y = b*(2*a + b) + c*(2*d - c)
        z = b*b + d*d + c*(2*(a + b) + c)
        a, b, c, d = w, x, y, z
        if (n >> i) & 1 == 1:
            a, b, c, d = b, c, d, a + b + c + d

    if n & 1 == 0:
        return b*(2*c - b) + a*(2*(d - c - b) - a)  # w
    else:  # n & 1 == 1
        return a*a + c*c + b*(2*(d - c) - b)  # x

def tetranacci(n):
    a, b, c, d = 0, 0, 0, 1  # T(0), T(1), T(2), T(3)
    # offset by 3 to reduce excess computation for large positive `n`
    n -= 3
    if n >= 0:
        for _ in range(+n):
            a, b, c, d = b, c, d, a + b + c + d
    else:  # n < 0
        for _ in range(-n):
            a, b, c, d = d - c - b - a, a, b, c
    return d

# sanity check
print(all(tetranacci_by_doubling(n) == tetranacci(n) for n in range(-1000, 1001)))
I would've liked to adjust the doubling formulas to be T(2n-3),T(2n-2),T(2n-1),T(2n) in terms of T(n-3),T(n-2),T(n-1),T(n) to slightly reduce excess computation for large n, but simplifying the shifted formulas is tedious.
Update
Swapped to an iterative version since I figured out how to make it cleanly handle negative n with minimal duplication. Originally, this was the sole advantage of the recursive version.
Incorporated a technique that's described in several papers about computing Fibonacci & Lucas numbers--which is to perform the final doubling step manually after the loop to avoid computing extra unneeded values. This results in about ~40%-50% speed-up for large n (>= 10^6)! This optimization could also be applied to the recursive version, as well.
The speed-up due to the unrolling of the last iteration is pretty interesting. It suggests that nearly half of the computational work is done in the final step. This kind of makes sense, since the number of digits in T(n) (and therefore the cost of arithmetic) approximately doubles when n doubles, and we know that 2^n ~= 2^0 + 2^1 + ... + 2^(n-1). Applying the optimization to similar Fibonacci/Lucas doubling algorithms produces a similar speed-up of ~40%--although, if you're computing Fibonacci/etc. modulo some 64-bit M, I suspect this optimization isn't as valuable.
From the OEIS, this is the (1,4) entry of the nth power of
1 1 0 0
1 0 1 0
1 0 0 1
1 0 0 0
To compute the nth power of that matrix in O(log n) operations, you can use exponentiation by squaring. There might be a slightly simpler recurrence, but you should be able to implement the general technique.

Order by Recursion tree

I have tried determining the running time given by a recurrence relation, but my result is not correct.
Recurrence
T(n) = c + T(n-1) if n >= 1
= d if n = 0
My attempt
I constructed this recursion tree:
n
|
n-1
|
n-2
|
n-3
|
n-4
|
n-5
|
.
.
.
till we get 1
Now at level i, the size of the sub problem should be, n-i
But in the end we want a problem of size 1. Thus, at the last level, n-i = 1, which gives i = n-1.
So the depth of the tree becomes n-1 and the height becomes n-1+1 = n.
Now the time required to solve this recursion = height of the tree * time required at each level, which is:
n+(n-1)+(n-2)+(n-3)+(n-4)+(n-5)+ ...
==> (n+n+n+n+n+ ... )-(1+2+3+4+5+ ... )
==> n - (n(n+1)/2)
Now the time taken = n * ((n - n^2)/2), which should give the order to be n^2, but that is not the correct answer.
Now at level i, the size of the sub problem should be, n-i
Yes, that is correct. But you're assuming that the runtime equals the sum of all the subproblem sizes. Just think about it: summing just the first two levels already gives n + (n - 1) = 2n - 1; why would the problem size increase? Disclaimer: A bit handwavy and not an entirely accurate statement.
What the formula actually says
T(n) = c + T(n-1)
The formula says that solving the problem for some n takes the same time it takes to solve it for a problem size that is one less, plus an additional constant c: c + T(n - 1).
Another way to put the above statement is this: given that the problem takes some time t for a certain problem size, it will take t + c for a problem size that is bigger by one.
We know that at a problem size of n = 0, this takes time d. According to the second statement, for a size of one more, n = 1, it will take d + c. Applying our rule again, it thus takes d + c + c for n = 2. We conclude that it must take d + n*c time for any n.
This is not a proof. To actually prove this, you must use induction as shown by amit.
A correct recursion tree
Your recursion tree only lists the problem size. That's pretty much useless, I'm afraid. Instead, you need to list the runtime for said problem size.
Every node in the tree corresponds to a certain problem size. What you write into that node is the additional time it takes for the problem size. I.e. you sum over all the descendants of a node plus the node itself to get the runtime for a certain problem size.
A graphical representation of such a tree would look like this
Tree      Corresponding problem size

c         n
|
c         n - 1
|
c         n - 2
|
c         n - 3
.
.
.
|
c         2
|
c         1
|
d         0
Formalizing: As already mentioned, the label of a node is the additional runtime it takes to solve for that problem size, plus all its descendants. The uppermost node represents a problem size of n, bearing the label c because that's in addition to T(n-1), to which it is connected using a |.
In a formula, you would only write this relation: T(n) = c + T(n-1). Given that tree, you can see how this applies to every n>=1. You could write this down like this:
T(n) = c + T(n - 1) # This means, `c` plus the previous level
T(n - 1) = c + T(n - 2) # i.e. add the runtime of this one to the one above^
T(n - 2) = c + T(n - 3)
...
T(n - (n - 2)) = c + T(1)
T(n - (n - 1)) = c + T(0)
T(0) = d
You can now expand the terms from bottom to top:
T(n - (n - 1)) = c + T(0)
T(0) = d

T(n - (n - 2)) = c + T(1)
T(n - (n - 1)) = c + d
T(0) = d

T(n - (n - 3)) = c + T(2)
T(n - (n - 2)) = c + (c + d)
T(n - (n - 1)) = c + d
T(0) = d

T(n - (n - 4)) = c + T(3)
T(n - (n - 3)) = c + (2*c + d)
T(n - (n - 2)) = c + (c + d)
...

T(n) = c + T(n - 1)
T(n - 1) = c + ((n-2)c + d)

T(n) = c + (n-1)c + d = n*c + d
T(n - 1) = (n-1)c + d
Summing 1 to n
n+(n-1)+(n-2)+(n-3)+(n-4)+(n-5)+ ...
==> (n+n+n+n+n+ ... )-(1+2+3+4+5+ ... )
==> n - (n(n+1)/2)
From the first line to the second line, you have reduced your problem from summing 1 to n to, well, summing 1 to n-1. That's not very helpful, because you're stuck with the same problem.
I'm not sure what you did on the third line, but your transition from the first to the second is basically correct.
This would have been the correct formula:
T(n) = c + T(n-1)
= c + (c + T(n-2))
= ...
= c*i + T(n-i)
= ...
= c*n + T(0)
= c*n + d
If we assume c, d are constants, this gives O(n).
To prove it mathematically, one can use mathematical induction:
For each k < n, assume T(k) = c*k + d.
Base: T(0) = c*0 + d = d, which matches the definition.
T(n) = c + T(n-1) (*)
     = c + (n-1)*c + d
     = c*n + d
(*) uses the induction hypothesis, which is valid since n-1 < n
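To make the closed form concrete, here is a tiny Python sketch (mine) that evaluates the recurrence directly and checks it against c*n + d:

def T(n, c=3, d=5):
    """T(0) = d; T(n) = c + T(n-1) for n >= 1."""
    return d if n == 0 else c + T(n - 1, c, d)

print(all(T(n) == 3 * n + 5 for n in range(200)))   # True: T(n) = c*n + d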
The complexity would be O(n).
As you described, the function converts the problem for input n into a problem for (n-1) using a constant amount of work 'c'.
So moving down the recursion tree we will have n levels in total, and at each step we require some constant work 'c'.
So there will be c*n operations in total, resulting in the complexity O(n).

Resources