recurrence relation dependent inversely on n - algorithm

I recently learnt how to find the nth number of the Fibonacci series by matrix exponentiation,
but I am stuck on two relations:
1) F(n) = F(n−1) + n
2) F(n) = F(n−1) + 1/n
Is there any efficient way to solve these in O(log n) time, like we have matrix exponentiation for the Fibonacci series?

The first one is obviously equal to:
F(n) = F(0) + n*(n+1)/2
and can be computed in O(1) time. For the second, look here.
Supposing that you want to compute the first one with matrix exponentiation, in the same way that you did with the Fibonacci series, here's the matrix you should use:
    | 1 1 1 |
A = | 0 1 1 |
    | 0 0 1 |
The choice of matrix is obvious if you think of the following equation:
| F(n+1) |   | 1 1 1 |   | F(n) |
| n+1    | = | 0 1 1 | * | n    |
| 1      |   | 0 0 1 |   | 1    |
(The top row encodes F(n+1) = F(n) + n + 1, which is exactly the recurrence, and the middle row keeps the counter up to date.)
Of course, the starting vector has to be: (F(0), 0, 1).
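As a sanity check, here is a short Python sketch (function names are mine) that exponentiates by squaring the 3×3 matrix whose top row is (1, 1, 1), so that the first component satisfies F(n+1) = F(n) + n + 1, and reads F(n) off A^n · (F(0), 0, 1):

```python
def mat_mul(A, B):
    # 3x3 integer matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_pow(A, e):
    # exponentiation by squaring: O(log e) matrix products
    R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # identity
    while e:
        if e & 1:
            R = mat_mul(R, A)
        A = mat_mul(A, A)
        e >>= 1
    return R

def f(n, f0=0):
    # F(n) = F(n-1) + n, as the first component of A^n * (F(0), 0, 1)
    A = [[1, 1, 1],
         [0, 1, 1],
         [0, 0, 1]]
    M = mat_pow(A, n)
    return M[0][0] * f0 + M[0][2]
```

The result agrees with the closed form F(0) + n*(n+1)/2, so the O(log n) route is only useful here as an exercise.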
For the second series this is not so easy: you would need to gradually produce the value 1/n, which cannot be generated by a fixed linear map in this way. I suspect it cannot be done, but I won't try to prove it.

The first one can be calculated in O(1) just because F(n) − F(0) is the sum of an arithmetic progression, n*(n+1)/2.
The second one is the harmonic series, which has no known closed form, but you can approximate it in O(1) with:
F(n) ≈ F(0) + ln(n) + γ + 1/(2n)
where γ ≈ 0.57721566490153286060 is the Euler–Mascheroni constant and the 1/(2n) term is a small correction.
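A quick Python sketch of that approximation against the exact O(n) sum (constant and names are mine):

```python
import math

EULER_GAMMA = 0.57721566490153286060  # Euler-Mascheroni constant

def harmonic_approx(n):
    # O(1) approximation of H(n) = 1 + 1/2 + ... + 1/n
    return math.log(n) + EULER_GAMMA + 1.0 / (2 * n)

def harmonic_exact(n):
    # O(n) reference sum
    return sum(1.0 / k for k in range(1, n + 1))
```

The error of the approximation is on the order of 1/(12n^2), so it becomes very accurate very quickly.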

Related

Finding Nth Fibonacci Number in O(logn) time and space complexity?

I am trying to find the Nth Fibonacci number in O(logn) time. We can get an O(logn) solution by using the Fibonacci Q-matrix.
But there is another solution, using the Cassini and Catalan identities, to find the Nth Fibonacci number in O(logn) time. It states:
If n is even then k = n/2: F(n) = [2*F(k-1) + F(k)]*F(k)
If n is odd then k = (n + 1)/2: F(n) = F(k)*F(k) + F(k-1)*F(k-1)
I could understand the proof only up to the marked equation, but I am not understanding the equations below it and the final derived equations.
Can anyone please explain in detail? Thanks in advance.
I believe the idea is to take this matrix product
|1 1|^n   |1 1|^m   |1 1|^(m+n)
|1 0|   x |1 0|   = |1 0|
and use the fact that this can be rewritten as
|F_{n+1}  F_n    |     |F_{m+1}  F_m    |     |F_{m+n+1}  F_{m+n}  |
|                |  x  |                |  =  |                    |
|F_n      F_{n-1}|     |F_m      F_{m-1}|     |F_{m+n}    F_{m+n-1}|
Now, multiply the matrices on the left using the standard matrix product formula. You get this result:
|F_{n+1}F_{m+1} + F_n F_m      F_{n+1}F_m + F_n F_{m-1}|     |F_{m+n+1}  F_{m+n}  |
|                                                      |  =  |                    |
|F_n F_{m+1} + F_{n-1}F_m      F_n F_m + F_{n-1}F_{m-1}|     |F_{m+n}    F_{m+n-1}|
This gives these four equations:
F_{n+1}F_{m+1} + F_n F_m        = F_{m+n+1}
F_{n+1}F_m     + F_n F_{m-1}    = F_{m+n}
F_n F_{m+1}    + F_{n-1}F_m     = F_{m+n}
F_n F_m        + F_{n-1}F_{m-1} = F_{m+n-1}
The remaining equalities fall out from just substituting in specific values of m and n.
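For instance, setting m = n in the second equation gives F(2n) = F(n)·[F(n+1) + F(n-1)] = F(n)·[2F(n+1) − F(n)], which, since F(n+1) = F(n) + F(n-1), is the even case [2F(k-1) + F(k)]·F(k) from the question; setting m = n in the last equation gives the odd case. The two identities together yield the usual fast-doubling routine, sketched here in Python (names are mine):

```python
def fib_pair(n):
    # returns (F(n), F(n+1)) in O(log n) arithmetic operations
    if n == 0:
        return (0, 1)
    a, b = fib_pair(n // 2)      # a = F(k), b = F(k+1), with k = n // 2
    c = a * (2 * b - a)          # F(2k)   = F(k) * [2*F(k+1) - F(k)]
    d = a * a + b * b            # F(2k+1) = F(k)^2 + F(k+1)^2
    return (c, d) if n % 2 == 0 else (d, c + d)

def fib(n):
    return fib_pair(n)[0]
```

Each level of recursion halves n, so the depth (and hence the number of multiplications) is O(log n).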

Finding a path through a checkerboard which is closest to a given cost

I've been stuck on a problem for a while. I am in an algorithm course right now but this isn't a homework problem. We haven't gotten to dynamic programming in class yet, I'm just doing this on my own.
Given a NxN sized checkerboard where every coordinate has a cost and another integer M, find the cost of a path from the top left of the checkerboard to the bottom right of the checkerboard (only allowed moves are right or down 1 square) such that the total cost of the path is below M but as close to M as possible. All elements of NxN and M are positive.
If this asked me to find the minimum or maximum path, I could use the standard dynamic programming algorithms, but since I'm bounded by M, I think I have to use another strategy. I've been trying to use memoization and construct an array where each entry holds the set of costs of all possible paths from the start to that element. To construct the set for (i, j), I add the cost value of (i, j) to every element in the union of the sets for (i-1, j) and (i, j-1) (if they exist, else just use the set {0} in its place). Once I complete this for all elements in the checkerboard, choosing the right path is trivial: just pick the element in the set for (N, N) which is below M but closest to M.
For example:
+---+---+---+
| 0 | 1 | 3 |
| 3 | 2 | 1 |
| 5 | 2 | 1 |
+---+---+---+
Cost of paths to a given point:
+---+----------+----------------+
| 0 | 1        | 4              |
| 3 | 3, 5     | 4, 5, 6        |
| 8 | 5, 7, 10 | 5, 6, 7, 8, 11 |
+---+----------+----------------+
This is a really space inefficient way of doing things. If I did the math right, the worst case scenario for the number of elements in the set of the (N, N) node is (N+1)!/((N+1)/2)!. Is there a faster (space or time) way of approaching this problem that I'm missing?
You don't need that much space. If all the costs are integers, at each cell you need to store at most O(M) distinct sums, so O(MN^2) memory suffices overall; whenever a partial sum exceeds M you just discard it.
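A sketch of that bounded-set DP in Python (names are mine); each cell keeps only the distinct path sums that are still ≤ M, so the per-cell sets never blow up:

```python
def closest_path_cost(grid, M):
    # best total cost <= M of a right/down path from (0,0) to (n-1,n-1),
    # or None if every path exceeds M
    n = len(grid)
    sums = [[set() for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            prev = {0} if i == 0 and j == 0 else set()
            if i > 0:
                prev |= sums[i - 1][j]
            if j > 0:
                prev |= sums[i][j - 1]
            # extend each reachable sum, discarding anything over the budget
            sums[i][j] = {s + grid[i][j] for s in prev if s + grid[i][j] <= M}
    return max(sums[n - 1][n - 1], default=None)
```

On the 3×3 example above, the final cell's set is {5, 6, 7, 8, 11} (minus anything over M), matching the table in the question.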
In this paper there is a mention of a pseudo-polynomial algorithm for a similar problem (exact cost). You can either run the same algorithm repeatedly with exact cost = M..1, or read the algorithm and find a variation that solves your problem directly.
Unfortunately that paper is paywalled :(

Number of Binary Search Trees with given right arm stretch

For a given array of distinct (unique) integers, I want to know the number of BSTs over all permutations whose rightmost arm has length k.
(If k = 3, root->right->right is a leaf node.)
(For my current requirement, I cannot afford an algorithm with cost greater than N^3.)
Two identical BSTs generated from different permutations are considered different.
My approach so far is:
Assume a function:
F(arr) = {a1, a2, a3...}
where a1 is the count of arrays with k = 1, a2 the count with k = 2, etc.
F(arr[1:n]) = for i in range 1 to n (1 + df * F(subarr where each element is larger than arr[i]))
Where df is dynamic factor (n-1)C(count of elements smaller than arr[i])
I am trying to create a dp to solve the problem
Sort the array
Start from largest number to smaller number
dp[i][i] = 1
for(j in range i-1 to 1) dp[j][i] = some func of dp[j][i-1], but I am unable to formulate
For ex: for arr{4, 3, 2, 1}, I expect the following dp
arr[i]             4   3   2   1
                 +---+---+---+---+
k = 1            | 1 | 1 | 2 | 6 |
                 +---+---+---+---+
k = 2            | - | 1 | 3 |11 |
                 +---+---+---+---+
k = 3            | - | - | 1 | 6 |
                 +---+---+---+---+
k = 4            | - | - | - | 1 |
                 +---+---+---+---+
verification(n!)   1   2   6   24
Any hint, suggestions, pointers or redirection to a good source where I can meet my curiosity is welcome.
Thank you.
edit: It seems I may need 3D dp array. I am working on the same.
edit: Corrected col 3 of dp
The good news is that, if you don't want the permutations themselves but only their number, there is a formula for that: these are known as the (unsigned) Stirling numbers of the first kind. The reason is that the numbers appearing on the right arm of a binary search tree are the left-to-right minima of the insertion sequence, that is, the values i such that every number appearing before i is greater than i. Here is an example where the minima are underlined:
6 8 3 5 4 2 7 1 9
_   _     _   _
This gives the tree
        6
    3       8
  2   5   7   9
 1   4
Those numbers are known to count permutations according to various statistics (number of cycles, ...), and left-to-right maxima or minima are among those statistics. You can find more information in entry A008275 of the On-Line Encyclopedia of Integer Sequences.
Now to answer the question of computing them. Let S(n,k) be the number of permutations of n numbers with k left to right minima. You can use the recurrence:
S(0, 0) = 1 and S(n, 0) = 0 for all n > 0
S(n+1, k) = n*S(n, k) + S(n, k-1) for all n >= 0 and k > 0
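A direct bottom-up translation of that recurrence into Python (my own sketch; S[m][j] here is S(m, j) above):

```python
def stirling_first(n, k):
    # unsigned Stirling numbers of the first kind:
    # S(m, j) = (m-1)*S(m-1, j) + S(m-1, j-1), with S(0, 0) = 1
    S = [[0] * (k + 1) for _ in range(n + 1)]
    S[0][0] = 1
    for m in range(1, n + 1):
        for j in range(1, k + 1):
            S[m][j] = (m - 1) * S[m - 1][j] + S[m - 1][j - 1]
    return S[n][k]
```

For n = 4 this reproduces the last column of the dp table in the question: 6, 11, 6, 1, which sum to 4! = 24.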
If I understand your problem correctly:
You do not need to sort the array. Since all numbers in your array are unique, you can assume that every possible subtree is a unique one.
Therefore you just need to count how many unique trees you can build from N - k unique elements, where N is the length of your array and k is the length of the rightmost arm. In other words, it is the number of permutations of your left subtree if you fix your right subtree to the fixed structure (root (node1 (node2 ... nodeK)))
Here is a way to calculate the number of binary trees of size N:
public int numTrees(int n) {
    // ut[i] = number of binary trees with i nodes (the Catalan numbers)
    int[] ut = new int[Math.max(n + 1, 3)];
    ut[0] = 1; // the empty tree
    ut[1] = 1;
    ut[2] = 2;
    for (int i = 3; i <= n; i++) {
        int u = 0;
        // pick each node as root: j nodes go left, i - j - 1 go right
        for (int j = 0; j < i; j++) {
            u += ut[j] * ut[i - j - 1];
        }
        ut[i] = u;
    }
    return ut[n];
}
It has O(n^2) time complexity and O(n) space complexity.

Time Complexity of Insertion Sort

Could anyone explain why insertion sort has a time complexity of Θ(n²)?
I'm fairly certain that I understand time complexity as a concept, but I don't really understand how to apply it to this sorting algorithm. Should I just look to mathematical proofs to find this answer?
On average each insertion must traverse half the currently sorted list while making one comparison per step. The list grows by one each time.
So starting with a list of length 1 and inserting the first item to get a list of length 2, we have an average traversal of 0.5 (0 or 1) places. The rest are 1.5 (0, 1, or 2 places), 2.5, 3.5, ..., n − 0.5 for a list of length n + 1.
This is, by simple algebra, 1 + 2 + 3 + ... + n − n*0.5 = (n(n+1) − n)/2 = n^2/2 = O(n^2)
Note that this is the average case. In the worst case the list must be fully traversed (you are always inserting the next-smallest item into the ascending list). Then you have 1 + 2 + ... n, which is still O(n^2).
In the best case you find the insertion point at the top element with one comparison, so you have 1 + 1 + 1 + ... (n times) = O(n).
It only applies to arrays/lists - i.e. structures with O(n) time for insertions/deletions. It can be different for other data structures. For example, for skiplists it will be O(n * log(n)), because binary search is possible in O(log(n)) in skiplist, but insert/delete will be constant.
Worst case time complexity of Insertion Sort algorithm is O(n^2).
The worst case of insertion sort occurs when the elements in the array are already stored in decreasing order and you want to sort the array in increasing order.
Suppose you have an array
Step 1 => | 4 | 3 | 2 | 1 | No. of comparisons = 1 | No. of movements = 1
Step 2 => | 3 | 4 | 2 | 1 | No. of comparisons = 2 | No. of movements = 2
Step 3 => | 2 | 3 | 4 | 1 | No. of comparisons = 3 | No. of movements = 3
Step 4 => | 1 | 2 | 3 | 4 | No. of comparisons = 4 | No. of movements = 4
T(n) = 2 + 4 + 6 + 8 + ---------- + 2(n-1)
T(n) = 2 * ( 1 + 2 + 3 + 4 + -------- + (n-1))
T(n) = 2 * (n(n-1))/2 = n(n-1)
T(n) = O(n^2)
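To see the counts concretely, here is a small Python sketch of insertion sort that tallies comparisons (names are mine): on a reversed array it performs exactly n(n-1)/2 comparisons, and on an already-sorted array only n-1.

```python
def insertion_sort(a):
    # sorts a in place; returns the number of comparisons performed
    comparisons = 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comparisons += 1
            if a[j] <= key:          # found the insertion point
                break
            a[j + 1] = a[j]          # shift the larger element right
            j -= 1
        a[j + 1] = key
    return comparisons
```

The number of element movements in the worst case is quadratic as well, which is what the T(n) sum above is counting (comparisons plus movements per pass).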

dynamic programming and the use of matrices

I'm always confused about how dynamic programming uses the matrix to solve a problem. I understand roughly that the matrix is used to store the results from previous subproblems, so that it can be used in later computation of a bigger problem.
But, how does one determine the dimension of the matrix, and how do we know what value each row/column of the matrix should represent? ie, is there like a generic procedure of constructing the matrix?
For example, if we're interested in making changes for S amount of money using coins of value c1,c2,....cn, what should be the dimension of the matrix, and what should each column/row represent?
Any directional guidance will help. Thank you!
A problem becomes eligible for dynamic programming when it exhibits both Overlapping Sub-problems as well as Optimal Substructure.
Secondly, dynamic programming comes in two variations:
Tabulation or the Bottom-up approach
Memoization or the Top-down approach (not MemoRization!)
Dynamic Programming stems from the ideology that a large problem can be broken down into sub-problems. The bottom-up version simply starts with solving these sub-problems first and gradually building up the target solution. The top-down approach relies on auxiliary storage to do away with re-computation.
is there like a generic procedure of constructing the matrix?
It really depends on what problem you're solving and how you're solving it! Matrices are typically used in tabulation, but it need not always be a matrix. The main goal here is to have the solutions to the sub-problems readily available on demand; they could be stored in an array, a matrix or even a hash-table.
The classic book Introduction to Algorithms demonstrates the solution to the rod-cutting problem in both ways where a 1D array is used as auxiliary storage.
For example, if we're interested in making changes for S amount of money using coins of value c1,c2,....cn, what should be the dimension of the matrix and what should each column/row represent?
If I'm not wrong, you're referring to the "total unique ways to make change" variant of the coin-change problem. You need to find the total ways a given amount can be constructed using given set of coins.
There is a great video on this that breaks it down pretty well. It uses a bottom-up approach: https://www.youtube.com/watch?v=DJ4a7cmjZY0
Assume you need to construct amount n = 10 from the given subset of coins c = {1, 2, 10}
Take an empty set and add the coins to it one per row from c, so each row adds one more coin to the set. The columns represent the sub-problems: for i in 1..10, the ith column holds the total number of ways amount i can be constructed using the coins in that row:
----------------------------------------------------------
|          | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
----------------------------------------------------------
|{}        |   |   |   |   |   |   |   |   |   |   |    |
----------------------------------------------------------
|{1}       |   | X |   |   |   |   |   |   |   |   |    |
----------------------------------------------------------
|{1, 2}    |   |   |   |   |   |   |   |   |   |   |    |
----------------------------------------------------------
|{1, 2, 10}|   |   |   | Y |   |   |   |   |   |   | Z  |
----------------------------------------------------------
In this table, X represents the number of ways amount 1 can be constructed using the coin {1}, Y represents the number of ways amount 3 can be represented using the coins {1, 2, 10} and Z represents the number of ways amount 10 can be represented using the coins {1, 2, 10}.
How are the cells populated?
Initially, the entire first column headed by 0 is filled with 1s because no matter how many coins you have, for the amount 0 you have exactly one way to make change that is to make no change.
The rest of the first row with the empty subset {} is filled with 0s because you can't make a change for any positive amount with no coins.
Now the matrix looks like this:
----------------------------------------------------------
|          | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
----------------------------------------------------------
|{}        | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0  |
----------------------------------------------------------
|{1}       | 1 | X |   |   |   |   |   |   |   |   |    |
----------------------------------------------------------
|{1, 2}    | 1 |   |   |   |   |   |   |   |   |   |    |
----------------------------------------------------------
|{1, 2, 10}| 1 |   |   | Y |   |   |   |   |   |   | Z  |
----------------------------------------------------------
Now, how do we fill X? You have two alternatives: use the coin 1 in this new superset, or don't. If you don't use the coin, the number of ways is the same as in the row above, which is 0. If you do use it, subtract the coin's value from the amount, leaving 0, and look up the ways for amount 0 in the same row: that is the column just before X, which holds 1. Adding the two gives 0 + 1 = 1, so you fill this cell with 1.
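The same row-by-row filling rule, written out as a small Python sketch (names are mine; row 0 is the empty set, and each cell is "value from the row above" plus "value at amount − coin in the same row"):

```python
def ways_table(coins, amount):
    # table[r][a] = number of ways to make amount a using the first r coins
    table = [[0] * (amount + 1) for _ in range(len(coins) + 1)]
    for r in range(len(coins) + 1):
        table[r][0] = 1                        # one way to make 0: give no coins
    for r in range(1, len(coins) + 1):
        coin = coins[r - 1]
        for a in range(1, amount + 1):
            table[r][a] = table[r - 1][a]      # without this coin: row above
            if a >= coin:
                table[r][a] += table[r][a - coin]  # with it: same row, a - coin
    return table

t = ways_table([1, 2, 10], 10)
# X = t[1][1], Y = t[3][3], Z = t[3][10]
```

Filling the whole matrix gives X = 1, Y = 2 and Z = 7.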
But, how does one determine the dimension of the matrix, and how do we know what value each row/column of the matrix should represent? ie, is there like a generic procedure of constructing the matrix?
You need to find the recurrence relation and the state (number of parameters) required to represent a subproblem. The whole idea of DP is to avoid re-computation of a subproblem: you compute a subproblem only once, the first time you require it, store it in memory and refer to the stored value when required.

To refer to the stored result of a subproblem later, you need a key that uniquely identifies the subproblem. The state of the subproblem is usually a good choice for this key. If a subproblem has 3 parameters x, y, z, then the tuple (value of x, value of y, value of z) is a good key to store the result of the subproblem in a hash table, for example. If these values are positive integers, you can use a matrix, i.e., a multi-dimensional array, instead of a hash table. Let's develop the ideas of finding the recurrence relation and identifying the state required to uniquely represent a subproblem, so that your confusion about the matrix dimensions is cleared.
The most important step in being able to solve a DP problem(any recursive problem in general) is identifying and being able to write down the recurrence relationship. Once the recurrence relation is identified, I'd say 90% of the work is done. Let's first see how to write down the recurrence relation.
Three important ideas in any recursive problem are:
identifying the trivial cases (the base cases whose answers are known),
identifying how to divide the problem into subproblems
knowing how to combine the results of the subproblems.
Let's take merge sort as an example. It is not a DP problem, as there are no overlapping subproblems, but for the purpose of introducing recurrence relations it is a good choice, as it is famous and easy to understand. As you might already know, the trivial case in merge sort is an array of size 0 or 1. The recursion step divides the problem into two subproblems of half the size of the current problem, and the combination step is the merging algorithm. Finally we can write the recurrence relation for merge sort as follows:
sort(0, n) = merge(sort(0, n/2), sort(n/2, n))
In the above recurrence relation for the sort algorithm, the problem of range (0, n) is divided into two subproblems (0, n/2) and (n/2, n). The combination step is the merge algorithm.
Now let's try to deduce the recurrence relation for some DP problems. You should be able to derive the dimensions of the state(and hence your confusion about dimensions of matrix) from the recurrence relation.
Remember that to find the recurrence relation, we need to identify the subproblems. Identifying subproblems is not always straightforward; it takes practice on more DP problems, spotting the recurring patterns, and some trial and error to build intuition.
Let's identify the recurrence relations for two problems that look almost similar but require different approaches. I chose these problems only because the question was about confusion regarding the dimensions of the matrix.
Given coins of different denominations and an amount, find the minimum number of coins required to make the amount.
Let's represent the problem/algorithm of finding the minimum number of coins required for a given amount n as F(n), and suppose the denominations are p, q and r.
If we know the answer for F(n-p), F(n-q) and F(n-r) i.e., the minimum number of coins required to make amounts n-p, n-q and n-r respectively, we can take the minimum of these and 1 to get the number of coins required to make the amount n.
The subproblems here are F(n-p), F(n-q) and F(n-r) and the combination step is to take the minimum of these values and adding one.
So the recurrence relation is:
F(n) = min(F(n-p), F(n-q), F(n-r)) + 1
# Base conditions
F(0) = 0
F(n) = infinity if n < 0
There is optimal substructure and there are repeated subproblems (if that is not obvious, take a sample input and draw the recursion tree), so we can use some storage to avoid repeated computation. Each subproblem is a node in the recursion tree.
From the recurrence relation you can see that the function F takes only one parameter i.e., one parameter is enough to represent the subproblem/node in the recursion tree and hence a 1D array or a hash table keyed by single value can be used to store the result of the subproblems.
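A memoized top-down sketch of this one-parameter recurrence in Python (my own names; the cache is keyed by the single state variable, the amount):

```python
from functools import lru_cache

def min_coins(n, denominations):
    # F(n) = min(F(n - d) for each denomination d) + 1
    # F(0) = 0, F(n) = infinity for n < 0
    @lru_cache(maxsize=None)
    def F(amount):
        if amount == 0:
            return 0
        if amount < 0:
            return float('inf')
        return min(F(amount - d) for d in denominations) + 1
    return F(n)
```

An infinite result means the amount cannot be formed at all from the given denominations.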
Given coins of different denominations and an amount, find total number of combination of coins required to make the amount.
This problem is more subtle. Pause and think for moment and try to identify the recurrence relation.
Let's use the same terminology as above problem i.e., let's say the amount is n and p, q, r are the denominations.
Does the same recurrence as the above problem work? If F(n) represents the total number of combinations of coins to make n out of given denominations, can we combine F(n-p), F(n-q) and F(n-r) in some way to get F(n)? How about just adding them? Does F(n) = F(n-p) + F(n-q) + F(n-r) hold?
Take n = 3 and two denominations p, q = 1, 2
With the above recurrence relation we get the answer 3, corresponding to the splits [1, 1, 1], [1, 2] and [2, 1], which is incorrect: [1, 2] and [2, 1] are the same combination of denominations. The recurrence is calculating the number of permutations instead of combinations. To avoid the repeated results, we need to impose an order on the coins. We can choose one ourselves by mandating that p comes before q and q comes before r, and then count combinations denomination by denomination among [p, q, r].
Let's start with p and solve the following recurrence.
F(n, only p allowed) = F(n-p, only p allowed)
## Base condition
F(0) = 1 # There is only one way to select 0 coins, which is to not select any coins
Now let's allow the next denomination q and then solve the following recurrence.
F(n, p and q allowed) = F(n-q, p and q allowed) + F(n, only p allowed)
Finally,
F(n, p q and r allowed) = F(n-r, p q and r allowed) + F(n, p and q allowed)
The above three recurrence relations in general can be written as follows where i is the index in the denominations.
# F(n, i) = with denominations[i] + without denominations[i]
F(n, i) = F(n - denominations[i], i) + F(n, i-1)
## Base conditions
F(n, i) = 1 if n == 0
F(n, i) = 0 if n < 0 or i < 0
From the recurrence relation, we can see that you need two state variables to represent a subproblem and hence a 2D array or a hash table keyed by combination of these two values(a tuple for example) is needed to cache the results of subproblems.
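A direct memoized transcription of this two-state recurrence in Python (my sketch; note the cache key is the pair (amount, i), which is why a 2D table would work just as well):

```python
from functools import lru_cache

def count_combinations(n, denominations):
    # F(amount, i) = F(amount - denominations[i], i) + F(amount, i - 1)
    # F(amount, i) = 1 if amount == 0; 0 if amount < 0 or i < 0
    @lru_cache(maxsize=None)
    def F(amount, i):
        if amount == 0:
            return 1
        if amount < 0 or i < 0:
            return 0
        return F(amount - denominations[i], i) + F(amount, i - 1)
    return F(n, len(denominations) - 1)
```

For n = 3 and denominations (1, 2) this returns 2, counting [1, 1, 1] and [1, 2] once each, as desired.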
Also see Thought process for arriving at dynamic programming solution of Coins change problem
This chapter explains it very well: http://www.cs.berkeley.edu/~vazirani/algorithms/chap6.pdf
At page 178 it gives some approaches to identify the sub problems that allow you to apply dynamic programming.
An array used by a DP solution is almost always based on the dimensions of the state space of the problem - that is, the valid values for each of its parameters.
For example
fib[i+2] = fib[i+1] + fib[i]
Is the same as
def fib(i):
    return fib(i-1) + fib(i-2)
You can make this more apparent by implementing memoization in your recursive functions
def fib(i):
    if memo[i] is None:
        memo[i] = fib(i-1) + fib(i-2)
    return memo[i]
If your recursive function has K parameters, you'll likely need a K-dimensional matrix.
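For instance, counting monotone lattice paths needs two parameters, so the memo is a 2-D matrix (a hypothetical example of mine, not from the answers above):

```python
def grid_paths(rows, cols):
    # number of right/down paths across a rows x cols lattice;
    # the recursion has two parameters, hence a 2-D memo table
    memo = [[None] * (cols + 1) for _ in range(rows + 1)]
    def paths(r, c):
        if r == 0 or c == 0:
            return 1            # only one way along an edge
        if memo[r][c] is None:
            memo[r][c] = paths(r - 1, c) + paths(r, c - 1)
        return memo[r][c]
    return paths(rows, cols)
```

The memo table's dimensions, (rows + 1) by (cols + 1), are exactly the state space of the two parameters.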
