Tribonacci series with different initial values - algorithm

How do I find the nth Tribonacci number with the matrix multiplication method if the initial values are some arbitrary numbers, say 1, 2, 3, i.e. T(1) = 1, T(2) = 2 and T(3) = 3?
If T(n) = T(n-1) + T(n-2) + T(n-3), how do I find T(n) when n is very, very large? I would appreciate it if anyone could explain the matrix multiplication method and how to construct the initial matrix.

The matrix multiplication method involves using the matrix recurrence relation.
For the Fibonacci series, we can define a vector of length 2 that holds two adjacent Fibonacci numbers. Using this vector, we can write the recurrence relation as a matrix multiplication:
[F(n+1)]   [1 1] [F(n)  ]
[F(n)  ] = [1 0] [F(n-1)]
Similarly, the Tribonacci series recurrence relation can be written in this way:
[T(n+1)]   [1 1 1] [T(n)  ]
[T(n)  ] = [1 0 0] [T(n-1)]
[T(n-1)]   [0 1 0] [T(n-2)]
The only difference is that the vector and matrix sizes are different.
Now, to calculate a large Tribonacci number, we just apply the matrix multiplication n times, and we get:
[T(n+3)]         [T(3)]
[T(n+2)] = M^n * [T(2)]
[T(n+1)]         [T(1)]
The matrix to the power of n (M^n) can be efficiently calculated, because we can use an exponentiation algorithm.
Many efficient exponentiation algorithms for scalars are described by Wikipedia in Exponentiation by Squaring. We can use the same idea for matrix exponentiation.
I will describe a simple way to do this. First we write n as a binary number, eg:
n = 37 = 100101
Then, calculate M raised to each power of 2 by squaring the previous power of 2: M^1, M^2 = M^1*M^1, M^4 = M^2*M^2, M^8 = M^4*M^4, M^16 = M^8*M^8, M^32 = M^16*M^16, ...
And finally, multiply the powers of M corresponding to the binary digits of n. In this case, M^37 = M^1*M^4*M^32.
After calculating M^n, we multiply it by the Tribonacci vector holding the first 3 values, i.e. [T(3), T(2), T(1)] = [3, 2, 1].
Because the matrices have fixed size, each matrix multiplication takes constant time. We must do O(log n) matrix multiplications. Thus, we can calculate the nth Tribonacci number in O(log n) time.
Compare this to the normal dynamic programming method, where it takes O(n) time, by calculating each Tribonacci number up to the nth Tribonacci number (ie. for (i = 3 to n) {T[i] = T[i-1]+T[i-2]+T[i-3];} return T[n];).
I will assume that you know how to code up matrix multiplication in the language of your choice.
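In case it helps, here is a minimal Python sketch of the whole approach (the function and variable names are my own, not from the question). It raises the 3x3 Tribonacci matrix to a power by repeated squaring and applies it to the starting vector [T(3), T(2), T(1)] = [3, 2, 1]; Python's arbitrary-precision integers keep the result exact, so the O(log n) count refers to the number of matrix multiplications, not bit operations:

def mat_mul(A, B):
    # multiply two 3x3 matrices of Python ints
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(M, e):
    # raise a 3x3 matrix to the power e by repeated squaring
    result = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    while e > 0:
        if e & 1:
            result = mat_mul(result, M)
        M = mat_mul(M, M)
        e >>= 1
    return result

def tribonacci(n, t1=1, t2=2, t3=3):
    # T(n) for T(1)=t1, T(2)=t2, T(3)=t3 and T(n) = T(n-1) + T(n-2) + T(n-3)
    if n == 1: return t1
    if n == 2: return t2
    if n == 3: return t3
    M = [[1, 1, 1],
         [1, 0, 0],
         [0, 1, 0]]
    P = mat_pow(M, n - 3)
    # P applied to the column vector [T(3), T(2), T(1)] gives [T(n), T(n-1), T(n-2)]
    return P[0][0] * t3 + P[0][1] * t2 + P[0][2] * t1

# quick check against the straightforward O(n) recurrence
vals = [1, 2, 3]
for i in range(3, 20):
    vals.append(vals[-1] + vals[-2] + vals[-3])
assert all(tribonacci(i + 1) == vals[i] for i in range(20))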

Consider:
| a1 b1 c1 |
[f(n) f(n - 1) f(n - 2)] * | a2 b2 c2 | = [f(n + 1) f(n) f(n - 1)]
| a3 b3 c3 |
Find the unknowns in the matrix based on that and that will be the matrix you want.
The answer in this case is:
1 1 0
1 0 1
1 0 0
The method is general, however: it works even if you sum the k previous terms, and even if they have constant coefficients in front of them, etc.
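As a quick sanity check (this snippet and its names are mine, not part of the answer), multiplying the row vector [f(n), f(n-1), f(n-2)] by that matrix in Python should shift the window forward by one term:

def step(v, M):
    # row vector [f(n), f(n-1), f(n-2)] times a 3x3 matrix M
    return [sum(v[i] * M[i][j] for i in range(3)) for j in range(3)]

M = [[1, 1, 0],
     [1, 0, 1],
     [1, 0, 0]]

v = [3, 2, 1]        # [f(3), f(2), f(1)] for the initial values 1, 2, 3
for _ in range(5):
    v = step(v, M)   # becomes [f(n+1), f(n), f(n-1)]
print(v[0])          # f(8) for this starting sequence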

Related

Is the following way correct for determining the complexity of the Fibonacci function? [duplicate]

I understand Big-O notation, but I don't know how to calculate it for many functions. In particular, I've been trying to figure out the computational complexity of the naive version of the Fibonacci sequence:
int Fibonacci(int n)
{
    if (n <= 1)
        return n;
    else
        return Fibonacci(n - 1) + Fibonacci(n - 2);
}
What is the computational complexity of the Fibonacci sequence and how is it calculated?
You model the time function for calculating Fib(n) as the sum of the time to calculate Fib(n-1), plus the time to calculate Fib(n-2), plus the time to add them together (O(1)). This assumes that repeated evaluations of the same Fib(n) take the same time - i.e. no memoization is used.
T(n<=1) = O(1)
T(n) = T(n-1) + T(n-2) + O(1)
You solve this recurrence relation (using generating functions, for instance) and you'll end up with the answer.
Alternatively, you can draw the recursion tree, which will have depth n, and intuitively figure out that this function is asymptotically O(2^n). You can then prove your conjecture by induction.
Base: n = 1 is obvious
Assume T(n-1) = O(2^(n-1)), therefore
T(n) = T(n-1) + T(n-2) + O(1) which is equal to
T(n) = O(2^(n-1)) + O(2^(n-2)) + O(1) = O(2^n)
However, as noted in a comment, this is not the tight bound. An interesting fact about this function is that T(n) is asymptotically the same as the value of Fib(n), since both are defined as
f(n) = f(n-1) + f(n-2).
The leaves of the recursion tree will always return 1. The value of Fib(n) is the sum of all values returned by the leaves in the recursion tree, which is equal to the count of leaves. Since each leaf takes O(1) to compute, T(n) is equal to Fib(n) x O(1). Consequently, the tight bound for this function is the Fibonacci sequence itself (~Θ(1.6^n)). You can find this tight bound by using generating functions as I mentioned above.
Just ask yourself how many statements need to execute for F(n) to complete.
For F(1), the answer is 1 (the first part of the conditional).
For F(n), the answer is F(n-1) + F(n-2).
So what function satisfies these rules? Try a^n (a > 1):
a^n == a^(n-1) + a^(n-2)
Divide through by a^(n-2):
a^2 == a + 1
Solve for a and you get (1+sqrt(5))/2 = 1.6180339887, otherwise known as the golden ratio.
So it takes exponential time.
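If you want to convince yourself numerically (a small Python sketch of my own, not part of the answer), you can count the calls the naive recursion makes and watch the ratio of successive counts approach the golden ratio:

def fib_count(n):
    # return (Fibonacci(n), number of calls made by the naive recursion)
    if n <= 1:
        return n, 1
    a, ca = fib_count(n - 1)
    b, cb = fib_count(n - 2)
    return a + b, ca + cb + 1

prev = None
for n in range(5, 25):
    _, calls = fib_count(n)
    if prev:
        print(n, calls, round(calls / prev, 4))  # the ratio tends towards ~1.618
    prev = calls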
I agree with pgaur and rickerbh, recursive-fibonacci's complexity is O(2^n).
I came to the same conclusion by a rather simplistic but I believe still valid reasoning.
First, it's all about figuring out how many times recursive fibonacci function ( F() from now on ) gets called when calculating the Nth fibonacci number. If it gets called once per number in the sequence 0 to n, then we have O(n), if it gets called n times for each number, then we get O(n*n), or O(n^2), and so on.
So, when F() is called for a number n, the number of times F() is called for a given number between 0 and n-1 grows as we approach 0.
As a first impression, it seems to me that if we put it in a visual way, drawing a unit per time F() is called for a given number, we get a sort of pyramid shape (that is, if we center the units horizontally). Something like this:
n *
n-1 **
n-2 ****
...
2 ***********
1 ******************
0 ***************************
Now, the question is, how fast is the base of this pyramid enlarging as n grows?
Let's take a real case, for instance F(6)
F(6) * <-- only once
F(5) * <-- only once too
F(4) **
F(3) ****
F(2) ********
F(1) **************** <-- 16
F(0) ******************************** <-- 32
We see F(0) gets called 32 times, which is 2^5, which for this sample case is 2^(n-1).
Now, we want to know how many times F(x) gets called at all, and we can see the number of times F(0) is called is only a part of that.
If we mentally move all the *'s from F(6) to F(2) lines into F(1) line, we see that F(1) and F(0) lines are now equal in length. Which means, total times F() gets called when n=6 is 2x32=64=2^6.
Now, in terms of complexity:
O( F(6) ) = O(2^6)
O( F(n) ) = O(2^n)
There's a very nice discussion of this specific problem over at MIT. On page 5, they make the point that, if you assume that an addition takes one computational unit, the time required to compute Fib(N) is very closely related to the result of Fib(N).
As a result, you can skip directly to the very close approximation of the Fibonacci series:
Fib(N) = (1/sqrt(5)) * 1.618^(N+1) (approximately)
and say, therefore, that the worst case performance of the naive algorithm is
O((1/sqrt(5)) * 1.618^(N+1)) = O(1.618^(N+1))
PS: There is a discussion of the closed form expression of the Nth Fibonacci number over at Wikipedia if you'd like more information.
You can expand it and have a visualization:
T(n) = T(n-1) + T(n-2)
     < T(n-1) + T(n-1)
     = 2*T(n-1)
     < 2*2*T(n-2)
     < 2*2*2*T(n-3)
     ...
     < 2^i*T(n-i)
     ...
==> O(2^n)
A recursive algorithm's time complexity can be better estimated by drawing the recursion tree. In this case the recurrence relation for drawing the recursion tree would be T(n) = T(n-1) + T(n-2) + O(1).
Note that each step takes O(1), meaning constant time, since it does only one comparison to check the value of n in the if block. The recursion tree would look like:
                      n
             (n-1)          (n-2)
        (n-2)   (n-3)   (n-3)   (n-4)   ...and so on
Here, let's say each level of the above tree is denoted by i, hence:
i = 0:   n
i = 1:   (n-1) (n-2)
i = 2:   (n-2) (n-3) (n-3) (n-4)
i = 3:   (n-3) (n-4) (n-4) (n-5) (n-4) (n-5) (n-5) (n-6)
Let's say that at a particular value of i the tree ends; that is the case when n-i = 1, hence i = n-1, meaning that the height of the tree is n-1.
Now let's see how much work is done for each of the n layers in the tree. Note that each step takes O(1) time, as stated in the recurrence relation.
2^0 = 1   n
2^1 = 2   (n-1) (n-2)
2^2 = 4   (n-2) (n-3) (n-3) (n-4)
2^3 = 8   (n-3) (n-4) (n-4) (n-5) (n-4) (n-5) (n-5) (n-6)   ...and so on
2^i nodes at the ith level
Since i = n-1 is the height of the tree, the work done at each level will be:
i   work
1   2^1
2   2^2
3   2^3   ...and so on
Hence the total work done is the sum of the work done at each level: 2^0 + 2^1 + 2^2 + 2^3 + ... + 2^(n-1), since i = n-1.
By the geometric series this sum is 2^n - 1, hence the total time complexity here is O(2^n).
The proof answers are good, but I always have to do a few iterations by hand to really convince myself. So I drew out a small calling tree on my whiteboard, and started counting the nodes. I split my counts out into total nodes, leaf nodes, and interior nodes. Here's what I got:
IN | OUT | TOT | LEAF | INT
1 | 1 | 1 | 1 | 0
2 | 1 | 1 | 1 | 0
3 | 2 | 3 | 2 | 1
4 | 3 | 5 | 3 | 2
5 | 5 | 9 | 5 | 4
6 | 8 | 15 | 8 | 7
7 | 13 | 25 | 13 | 12
8 | 21 | 41 | 21 | 20
9 | 34 | 67 | 34 | 33
10 | 55 | 109 | 55 | 54
What immediately leaps out is that the number of leaf nodes is fib(n). What took a few more iterations to notice is that the number of interior nodes is fib(n) - 1. Therefore the total number of nodes is 2 * fib(n) - 1.
Since you drop the coefficients when classifying computational complexity, the final answer is θ(fib(n)).
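For what it's worth, here is a small Python sketch (names are mine) that reproduces this kind of table by counting nodes in the call tree, using the same base-case convention as the table above (n <= 2 counts as a leaf returning 1):

def count_nodes(n):
    # return (total, leaf, internal) node counts of the naive call tree
    if n <= 2:
        return 1, 1, 0
    t1, l1, i1 = count_nodes(n - 1)
    t2, l2, i2 = count_nodes(n - 2)
    return t1 + t2 + 1, l1 + l2, i1 + i2 + 1

for n in range(1, 11):
    total, leaf, internal = count_nodes(n)
    print(n, total, leaf, internal)   # leaf = fib(n), internal = fib(n) - 1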
It is bounded on the lower end by 2^(n/2) and on the upper end by 2^n (as noted in other comments). And an interesting fact of that recursive implementation is that it has a tight asymptotic bound of Fib(n) itself. These facts can be summarized:
T(n) = Ω(2^(n/2)) (lower bound)
T(n) = O(2^n) (upper bound)
T(n) = Θ(Fib(n)) (tight bound)
The tight bound can be reduced further using its closed form if you like.
It is simple to calculate by diagramming function calls. Simply add the function calls for each value of n and look at how the number grows.
The Big O is O(Z^n) where Z is the golden ratio or about 1.62.
Both the Leonardo numbers and the Fibonacci numbers approach this ratio as we increase n.
Unlike other Big O questions there is no variability in the input and both the algorithm and implementation of the algorithm are clearly defined.
There is no need for a bunch of complex math. Simply diagram out the function calls below and fit a function to the numbers.
Or if you are familiar with the golden ratio you will recognize it as such.
This answer is more correct than the accepted answer which claims that it will approach f(n) = 2^n. It never will. It will approach f(n) = golden_ratio^n.
2 (2 -> 1, 0)
4 (3 -> 2, 1) (2 -> 1, 0)
8 (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
14 (5 -> 4, 3) (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
(3 -> 2, 1) (2 -> 1, 0)
22 (6 -> 5, 4)
(5 -> 4, 3) (4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
(3 -> 2, 1) (2 -> 1, 0)
(4 -> 3, 2) (3 -> 2, 1) (2 -> 1, 0)
(2 -> 1, 0)
The naive recursion version of Fibonacci is exponential by design due to repetition in the computation:
At the root you are computing:
F(n) depends on F(n-1) and F(n-2)
F(n-1) depends on F(n-2) again and F(n-3)
F(n-2) depends on F(n-3) again and F(n-4)
Then at each level you have 2 recursive calls that are recomputing a lot of the same values; the time function will look like this:
T(n) = T(n-1) + T(n-2) + C, with C constant
T(n-1) = T(n-2) + T(n-3) > T(n-2) then
T(n) > 2*T(n-2)
...
T(n) > 2^(n/2) * T(1), i.e. T(n) = Ω(2^(n/2))
This is just a lower bound, which for the purpose of your analysis should be enough, but the real time function is a constant factor times the same Fibonacci formula, and the closed form is known to be exponential in the golden ratio.
In addition, you can find optimized versions of Fibonacci using dynamic programming like this:
static int fib(int n)
{
    if (n < 2)
        return n;   /* guard so the table below is never indexed out of bounds */
    /* memory */
    int f[] = new int[n+1];
    int i;
    /* Init */
    f[0] = 0;
    f[1] = 1;
    /* Fill */
    for (i = 2; i <= n; i++)
    {
        f[i] = f[i-1] + f[i-2];
    }
    return f[n];
}
That is optimized and does only n steps, but it is also exponential (in the size of the input, as explained below).
Cost functions are defined from input size to the number of steps to solve the problem. When you see the dynamic version of Fibonacci (n steps to compute the table) or the easiest algorithm to decide whether a number is prime (sqrt(n) steps to check the potential divisors of the number), you may think that these algorithms are O(n) or O(sqrt(n)), but this is simply not true for the following reason:
The input to your algorithm is a number n. In binary notation, the input size of an integer n is log2(n), so do the variable change
m = log2(n) // your real input size
Let's find the number of steps as a function of the input size:
m = log2(n)
2^m = 2^log2(n) = n
Then the cost of your algorithm as a function of the input size is:
T(m) = n steps = 2^m steps
and this is why the cost is exponential.
Well, in my opinion it is O(2^n), as in this function only the recursion takes considerable time (divide and conquer). We see that the above function will continue in a tree until the leaves are reached, when we get to the level F(n-(n-1)), i.e. F(1). So, when we jot down the time complexity encountered at each depth of the tree, the summation series is:
1 + 2 + 4 + ... + 2^(n-1)
= 1*((2^n) - 1)/(2 - 1)
= 2^n - 1
which is of the order of 2^n [ O(2^n) ].
No answer emphasizes probably the fastest and most memory efficient way to calculate the sequence. There is a closed form exact expression for the Fibonacci sequence. It can be found by using generating functions or by using linear algebra as I will now do.
Let f_1,f_2, ... be the Fibonacci sequence with f_1 = f_2 = 1. Now consider a sequence of two dimensional vectors
f_1 , f_2 , f_3 , ...
f_2 , f_3 , f_4 , ...
Observe that the next element v_{n+1} in the vector sequence is M.v_{n} where M is a 2x2 matrix given by
M = [0 1]
[1 1]
due to f_{n+1} = f_{n+1} and f_{n+2} = f_{n} + f_{n+1}
M is diagonalizable over complex numbers (in fact diagonalizable over the reals as well, but this is not usually the case). There are two distinct eigenvectors of M given by
1 1
x_1 x_2
where x_1 = (1+sqrt(5))/2 and x_2 = (1-sqrt(5))/2 are the distinct solutions to the polynomial equation x*x-x-1 = 0. The corresponding eigenvalues are x_1 and x_2. Think of M as a linear transformation and change your basis to see that it is equivalent to
D = [x_1 0]
[0 x_2]
In order to find f_n find v_n and look at the first coordinate. To find v_n apply M n-1 times to v_1. But applying M n-1 times is easy, just think of it as D. Then using linearity one can find
f_n = 1/sqrt(5)*(x_1^n-x_2^n)
Since the norm of x_2 is smaller than 1, the corresponding term vanishes as n tends to infinity; therefore, obtaining the greatest integer smaller than (x_1^n)/sqrt(5) is enough to find the answer exactly. By making use of the trick of repeated squaring, this can be done using only O(log_2(n)) multiplication (and addition) operations. Memory complexity is even more impressive, because it can be implemented in a way that always holds at most one number in memory, whose value is smaller than the answer. However, since this number is not a natural number, the memory complexity changes depending on whether you use fixed bits to represent each number (and hence do calculations with error), which gives O(1) memory complexity, or use a better model like Turing machines, in which case some more analysis is needed.
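For illustration, a rough Python sketch of the repeated-squaring idea (using the exact integer matrix M rather than the floating-point closed form, which loses precision for large n; the names are mine):

def fib_matrix(n):
    # return f_n (with f_1 = f_2 = 1) via repeated squaring of M = [[0, 1], [1, 1]]
    def mul(A, B):
        return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
                [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]
    result = [[1, 0], [0, 1]]   # identity
    M = [[0, 1], [1, 1]]
    e = n - 1                   # apply M n-1 times to v_1 = (f_1, f_2)
    while e > 0:
        if e & 1:
            result = mul(result, M)
        M = mul(M, M)
        e >>= 1
    # result * (f_1, f_2)^T = (f_n, f_{n+1})^T, and f_1 = f_2 = 1
    return result[0][0] + result[0][1]

print([fib_matrix(n) for n in range(1, 11)])   # 1, 1, 2, 3, 5, 8, 13, 21, 34, 55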

Does the asymptotic complexity of a multiplication algorithm only rely on the larger of the two operands?

I'm taking an algorithms class and I repeatedly have trouble when I'm asked to analyze the runtime of code when there is a line with multiplication or division. How can I find big-theta of multiplying an n digit number with an m digit number (where n>m)? Is it the same as multiplying two n digit numbers?
For example, right now I'm attempting to analyze the following line of code:
return n*count/100
where count is at most 100. Is the asymptotic complexity of this any different from n*n/100? or n*n/n?
You can always look it up here: Computational complexity of mathematical operations.
In your case, the complexity of n*count/100 is O(length(n)), as 100 is a constant and length(count) is at most 3.
In general, multiplication of two numbers of n and m digits takes O(nm), which is also the time required for division. Here I assume we are talking about long division. There are many sophisticated algorithms which beat this complexity.
To make things clearer, I will provide an example. Suppose you have three numbers:
A - n digits length
B - m digits length
C - p digits length
Find complexity of the following formula:
A * B / C
Multiply first. The complexity of A * B is O(nm), and as a result we have a number D, which is n+m digits long. Now consider D / C; here the complexity is O((n+m)p), so the overall complexity is the sum of the two: O(nm + (n+m)p) = O(m(n+p) + np).
Divide first. We divide B / C; the complexity is O(mp) and we get a number E of at most m digits. Now we calculate A * E; here the complexity is O(nm). Again the overall complexity is O(mp + nm) = O(m(n+p)).
From the analysis you can see that it is beneficial to divide first. Of course in real life situation you would account for numerical stability as well.
From Modern Computer Arithmetic:
Assume the larger operand has size m, and the smaller has size n ≤ m, and denote by M(m,n) the corresponding multiplication cost.
When m is an exact multiple of n, say m = kn, a trivial strategy is to cut the larger operand into k pieces, giving M(kn,n) = kM(n) + O(kn).
Suppose m ≥ n and n is large. To use an evaluation-interpolation scheme, we need to evaluate the product at m + n points, whereas balanced k by k multiplication needs 2k points. Taking k ≈ (m+n)/2, we see that M(m,n) ≤ M((m + n)/2)(1 + o(1)) as n → ∞. On the other hand, from the discussion above, we have M(m,n) ≤ ⌈m/n⌉M(n)(1 + o(1)).

What is the recurrence equation for this multiplication algorithm?

The multiplication algorithm is for multiplying two radix r numbers:
0 <= x,y < r^n
x = x1 * r^(n/2) + x0
y = y1 * r^(n/2) + y0
where x0 is the half of x that contains the least significant digits, and x1 is the half with the most significant digits, and similarly for y.
So if r = 10 and n = 4, we have that x = 9723 = 97 * 10^2 + 23, where x1 = 97 and x0 = 23.
The multiplication can be done as:
z = x*y = x1*y1*r^n + (x0*y1 + x1*y0)*r^(n/2) + x0*y0
So we now have four multiplications of half-sized numbers (we initially had a multiplication of n-digit numbers, and now we have four multiplications of n/2-digit numbers).
As I see it the recurrence for this algorithm is:
T(n) = O(1) + 4*T(n/2)
But apparently it is T(n) = O(n) + 3T(n/2)
Either way, the solution is T(n) = O(n^2), and I can see this, but I am wondering why there is an O(n) term instead of an O(1) term?
You are right, if you'll compute the term x0*y1 + x1*y0 naively, with two products, the time complexity is quadratic. This is because we do four products and the recurrence is, as you suggest, T(n) = O(n) + 4T(n/2), which solves to O(n^2).
However, Karatsuba observed that x*y = z2 * r^n + z1 * r^(n/2) + z0, where we let z2 = x1*y1, z0 = x0*y0, and z1 = x0*y1 + x1*y0, and that one can express the last term as z1 = (x1+x0)*(y1+y0) - z2 - z0, which involves only one product. Using this trick, the recurrence does become T(n) = O(n) + 3T(n/2), because we do three products altogether (as opposed to four if we don't use the trick).
Because the numbers are of order r^n we will need n digits to represent the numbers (in general, for a fixed r>=2, we need O(log N) digits to represent the number N). To add two numbers of that order, you need to "touch" all the digits. Since there are n digits, you need O(n) (formally I'd say Omega(n), meaning "at least order of n time", but let's leave the details aside) time to compute their sum.
For example, when computing the product N*M, the number of bits n will be max(log N, log M) (assuming the base r>=2 is constant).
The algebraic trick is explained in more detail on the Wiki page for the Karatsuba algorithm.
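For concreteness, here is a minimal Karatsuba sketch over Python integers (working in base 2 rather than a general radix r, falling back to the builtin product for small inputs; the names are mine), just to make the three-product recursion visible:

def karatsuba(x, y):
    # multiply non-negative integers x and y with three recursive half-size products
    if x < 16 or y < 16:                         # small base case
        return x * y
    n = max(x.bit_length(), y.bit_length())
    half = n // 2
    x1, x0 = x >> half, x & ((1 << half) - 1)    # x = x1*2^half + x0
    y1, y0 = y >> half, y & ((1 << half) - 1)    # y = y1*2^half + y0
    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0   # = x0*y1 + x1*y0, one product
    return (z2 << (2 * half)) + (z1 << half) + z0

assert karatsuba(12345678901234567890, 98765432109876543210) == \
       12345678901234567890 * 98765432109876543210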

Divide and conquer for function compositions

I was searching for an algorithm to compute the composition of 2 linear functions n times (where n can be as large as 10^18) in O(log n) time. I found a pdf describing polynomial composition of 2 functions with large degrees using a divide and conquer algorithm.
I was wondering whether my problem of composing a linear function n times can also be solved using a divide and conquer algorithm in O(log n) complexity?
If yes, please explain the algorithm.
Thanks in advance.
EDIT 1: The composition of the function f(x) n times is f o f o f ... n times. Here the function is to be composed with itself n times. There are not 2 different functions.
You can represent the application of a linear function f(x) = ax + b as a 2-by-2 matrix multiplied by the vector (x, 1).
( f(x) )   ( a b ) ( x )
(   1  ) = ( 0 1 ) ( 1 )
Applying f n times to x is multiplying the matrix n times to (x, 1), or equivalently, multiplying the matrix raised to the power of n to (x, 1).
( f^n(x) )   ( a b )^n ( x )
(    1   ) = ( 0 1 )   ( 1 )
You can compute the matrix power using exponentiation by squaring.
This works whether you're working over the real numbers, the integers, or the integers modulo some number M.
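A small sketch of this idea (names are mine): instead of the 2x2 matrix you can equivalently track just the pair (a, b), because composing a1*x + b1 after a2*x + b2 gives (a1*a2)*x + (a1*b2 + b1), and you raise that composition to the nth power by squaring:

def compose(f, g):
    # compose two linear functions given as (a, b) pairs: (f o g)(x) = f(g(x))
    a1, b1 = f
    a2, b2 = g
    return (a1 * a2, a1 * b2 + b1)

def compose_n(f, n):
    # the (a, b) pair of f composed with itself n times, by repeated squaring
    result = (1, 0)              # identity function x -> x
    while n > 0:
        if n & 1:
            result = compose(result, f)
        f = compose(f, f)
        n >>= 1
    return result

a, b = compose_n((2, 3), 10)     # f(x) = 2x + 3 composed 10 times
x, v = 5, 5
for _ in range(10):              # apply f ten times directly as a check
    v = 2 * v + 3
assert a * x + b == v

For n up to 10^18 the coefficients grow huge, so in practice you would usually do all of this modulo some number M, exactly as the answer notes.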

What is the BigO of linear regression?

How large a system is it reasonable to attempt to do a linear regression on?
Specifically: I have a system with ~300K sample points and ~1200 linear terms. Is this computationally feasible?
The linear regression is computed as (X'X)^-1 X'Y.
If X is an (n x k) matrix:
(X' X) takes O(n*k^2) time and produces a (k x k) matrix
The matrix inversion of a (k x k) matrix takes O(k^3) time
(X' Y) takes O(n*k) time and produces a (k x 1) vector
The final matrix multiplication of a (k x k) matrix and a (k x 1) vector takes O(k^2) time
So the Big-O running time is O(k^2*(n + k)).
See also: http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra
If you get fancy it looks like you can get the time down to O(k^2*(n+k^0.376)) with the Coppersmith–Winograd algorithm.
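As a rough NumPy sketch of the normal-equations computation described above (sizes scaled down from the question's 300K x 1200 so it runs quickly; np.linalg.solve is used rather than forming the inverse explicitly, which is numerically preferable):

import numpy as np

rng = np.random.default_rng(0)
n, k = 30_000, 120                 # scaled-down stand-ins for 300K samples, 1200 terms
X = rng.standard_normal((n, k))
y = rng.standard_normal(n)

# normal equations: beta = (X'X)^-1 X'y
XtX = X.T @ X                      # O(n*k^2), the dominant cost when n >> k
Xty = X.T @ y                      # O(n*k)
beta = np.linalg.solve(XtX, Xty)   # O(k^3)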
You can express this as a matrix equation A x = b,
where the matrix A is 300K rows and 1200 columns, the coefficient vector x is 1200x1, and the RHS vector b is 300Kx1.
If you multiply both sides by the transpose of the matrix A, you have a system of equations for the unknowns that's 1200x1200. You can use LU decomposition or any other algorithm you like to solve for the coefficients. (This is what least squares is doing.)
So the Big-O behavior is something like O(m*n^2), where m = 300K and n = 1200. You'd account for the transpose, the matrix multiplication, the LU decomposition, and the forward-back substitution to get the coefficients.
The linear regression is computed as (X'X)^-1 X'y.
As far as I learned, y is a vector of results (or in other words: dependant variables).
Therefore, if X is an (n × m) matrix and y is an (n × 1) matrix:
The transposing of a (n × m) matrix takes O(n⋅m) time and produces a (m × n) matrix
(X' X) takes O(n⋅m²) time and produces a (m × m) matrix
The matrix inversion of a (m × m) matrix takes O(m³) time
(X' y) takes O(n⋅m) time and produces a (m × 1) matrix
The final matrix multiplication of a (m × m) and a (m x 1) matrices takes O(m²) time
So the Big-O running time is O(n⋅m + n⋅m² + m³ + n⋅m + m²).
Now, we know that:
m² ≤ m³
n⋅m ≤ n⋅m²
so asymptotically, the actual Big-O running time is O(n⋅m² + m³) = O(m²(n + m)).
And that's what we have from
http://en.wikipedia.org/wiki/Computational_complexity_of_mathematical_operations#Matrix_algebra
But, we know that there's a significant difference between the case n → ∞ and m → ∞.
https://en.wikipedia.org/wiki/Big_O_notation#Multiple_variables
So which one should we choose? Obviously it's the number of observations which is more likely to grow, rather than the number of attributes.
So my conclusion is that if we assume the number of attributes remains constant, we can ignore the m terms; that's a relief, because then the time complexity of multivariate linear regression is a mere linear O(n). On the other hand, we can expect our computing time to explode when the number of attributes increases substantially.
The closed-form solution of linear regression is computed as follows:
The derivative of RSS(W) is
-2H^t (y - HW)
So, we solve
-2H^t (y - HW) = 0
Then, the W value is
W = (H^t H)^-1 H^t y
where:
W: is the vector of expected weights
H: is the features matrix N*D where N is the number of observations, and D is the number of features
y: is the actual value
Then, the complexity of
H^t H is O(N D^2)
The complexity of the matrix inversion is O(D^3)
So, the complexity of
(H^t H)^-1 H^t y is O(N D^2 + D^3)
