How to find recurrence relation from recursive algorithm

I know how to find the recurrence relation for simple recursive algorithms.
For example:
QuickSort(A, LB, UB){
    key = A[LB]
    i = LB
    j = UB + 1
    do{
        do{
            i = i + 1
        }while(A[i] < key)
        do{
            j = j - 1
        }while(A[j] > key)
        if(i < j){
            swap(A[i] <-> A[j])
        }
    }while(i <= j)
    swap(key <-> A[j])
    QuickSort(A, LB, j-1)
    QuickSort(A, j+1, UB)
}
T(n) = T(n - a) + T(a) + n
In the above recursive algorithm it was quite easy to see how the input size shrinks after each recursive call. But how do you find the recurrence relation for an arbitrary algorithm, which might be iterative rather than recursive? So I started learning how to convert iterative algorithms to recursive ones, just to make finding the recurrence relation easier.
I found this link http://refactoring.com/catalog/replaceIterationWithRecursion.html.
I used it to convert my linear search algorithm to a recursive one.
LinearSearch(A,x,LB,UB){
    PTR = LB
    while(PTR <= UB && A[PTR] != x){
        PTR = PTR + 1
    }
    if(PTR == UB + 1){
        print("Element does not exist")
    }
    else{
        print("Location " + PTR)
    }
}
got converted to
LinearSearch(A,x,LB,UB){
    PTR = LB
    print("Location " + search(A,PTR,UB,x))
}

search(A,PTR,UB,x){
    if(PTR > UB){
        return -1
    }
    else if(A[PTR] != x){
        return search(A,PTR+1,UB,x)
    }
    else{
        return PTR
    }
}
This gives the recurrence relation T(n) = T(n-1) + 1.
But I was wondering: is this the right approach to find the recurrence relation for any algorithm?
Also, I don't know how to find the recurrence relation for algorithms where more than one parameter is increasing or decreasing.
e.g.
unsigned greatest_common_divisor (const unsigned a, const unsigned b)
{
    if (a > b)
    {
        return greatest_common_divisor(a-b, b);
    }
    else if (b > a)
    {
        return greatest_common_divisor(a, b-a);
    }
    else // a == b
    {
        return a;
    }
}

First of all, algorithms are very flexible, so you should not expect to have a simple rule that covers all of them.
That said, one thing I think will help is to pay more attention to the structure of the input you pass to your algorithm than to the algorithm itself. For example, consider the QuickSort you showed in your post. If you only glance at those nested do-whiles, you will probably guess the partitioning step is O(N^2), when in reality it is O(N). The real answer is easier to find by looking at the indices: i always increases and j always decreases, and by the time they finally meet, each of the N indices of the array has been visited exactly once.
Also, I don't know how to find the recurrence relation for algorithms where more than one parameter is increasing or decreasing.
Well, those algorithms are certainly harder than ones with a single variable. For the Euclidean algorithm you used as an example, the complexity is actually not trivial to work out, and the analysis involves reasoning about greatest common divisors rather than just reading the source code of the implementation.
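One way to get a feel for why it is not trivial is to count the recursive calls directly. Here is a small Python sketch (my own illustration; the helper name gcd_subtractive is made up) showing that the subtraction-based version can take a number of steps linear in the input, e.g. for gcd(1, n), even though "typical" inputs finish much faster:

```python
def gcd_subtractive(a, b, calls=0):
    # Subtraction-based Euclidean algorithm; also counts recursive calls.
    if a > b:
        return gcd_subtractive(a - b, b, calls + 1)
    elif b > a:
        return gcd_subtractive(a, b - a, calls + 1)
    else:  # a == b
        return a, calls

print(gcd_subtractive(1, 500))   # worst case: b - 1 subtractions
print(gcd_subtractive(48, 18))   # gcd 6 after only a handful of calls
```

The modulo-based variant replaces runs of subtractions with a single step, which is why its bound (O(log min(a, b)) steps) is so much better.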

Related

Time and Space Complexity of This Algorithm

Despite reading some previous questions here on stackoverflow and watching a few videos including this
one, time and space complexity are going straight over my head. I need to find the time and space complexity of this algorithm
public static int aPowB(int a, int b){
    if(b == 0){
        return 1;
    }
    int halfResult = aPowB(a, b/2);
    if(b%2 == 0){
        return halfResult * halfResult;
    }
    return a * halfResult * halfResult;
}
An explanation of the answer would be appreciated so I can try to understand. Thank you.
First of all, the inputs are a and b, so we can expect the time/space complexity to be dependent on these two parameters.
With recursive algorithms, always try to write down the recurrence relation for the time complexity T first. Here it's
T(a, 0) = O(1) // base case
T(a, b) = T(a, b/2) + O(1) // recursive call + some O(1) stuff at the end
This equation is one of the standard ones that you should just know by heart, so we can immediately give the solution
T(a, b) = O(log b)
(If you don't know the solution by heart, just ask yourself how many times you can divide b by 2 until you hit 0.)
The space complexity is also O(log b) because that's the depth of the recursion stack.
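If you want to check this empirically, here is the same function transcribed to Python (my own port, not from the question) that also reports how deep the recursion goes:

```python
def a_pow_b(a, b, depth=1):
    # Python port of aPowB; returns (result, recursion depth reached).
    if b == 0:
        return 1, depth
    half, depth = a_pow_b(a, b // 2, depth + 1)
    result = half * half if b % 2 == 0 else a * half * half
    return result, depth

print(a_pow_b(2, 10))        # (1024, 5): depth grows like log2(b)
print(a_pow_b(3, 1000)[1])   # depth stays tiny even for large b
```

Doubling b adds only one level of recursion, which is exactly the O(log b) behaviour.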

Big-O & Running Time of this algorithm, and how to convert it to an iterative algorithm

What is the running time of this algorithm in Big-O, and how do I convert it to an iterative algorithm?
public static int RecursiveMaxOfArray(int[] array) {
    int array1[] = new int[array.length/2];
    int array2[] = new int[array.length - (array.length/2)];
    for (int index = 0; index < array.length/2; index++) {
        array1[index] = array[index];
    }
    for (int index = array.length/2; index < array.length; index++) {
        array2[index - array.length/2] = array[index];
    }
    if (array.length > 1) {
        if (RecursiveMaxOfArray(array1) > RecursiveMaxOfArray(array2)) {
            return RecursiveMaxOfArray(array1);
        }
        else {
            return RecursiveMaxOfArray(array2);
        }
    }
    return array[0];
}
At each stage, an array of size N is divided into two halves. The function is then recursively called three times on arrays of size N/2. Why three instead of the four that are written? Because the if statement only enters one of its branches. Therefore the recurrence relation is T(N) = 3T(N/2) + O(N), which (using the Master theorem) gives O(N^log2(3)) ≈ O(N^1.58).
However, you don't need to call it for the third time; just cache the return result of each recursive call in a local variable. The coefficient 3 in the recurrence relation becomes 2; I'll leave it to you to apply the Master theorem on the new recurrence.
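A sketch of the cached version described above (in Python rather than the question's Java; the names are my own): each half is computed exactly once and stored in a local variable, so there are two recursive calls per level instead of three.

```python
def recursive_max(array):
    # Divide-and-conquer maximum; assumes a non-empty list.
    if len(array) == 1:
        return array[0]
    mid = len(array) // 2
    left = recursive_max(array[:mid])    # computed once, cached locally
    right = recursive_max(array[mid:])   # computed once, cached locally
    return left if left > right else right

print(recursive_max([3, 7, 1, 9, 4]))  # 9
```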
There's another answer that accurately describes your algorithm's runtime complexity, how to determine it, and how to improve it, so I won't focus on that. Instead, let's look at the other part of your question:
how [do] i convert this to [an] iterative algorithm?
Well, there's a straightforward solution to that which you hopefully could have gotten yourself - loop over the list and track the largest value you've seen so far.
However, I'm guessing your question is better phrased as this:
How do I convert a recursive algorithm into an iterative algorithm?
There are plenty of questions and answers on this, not just here on StackOverflow, so I suggest you do some more research on this subject. These blog posts on converting recursion to iteration may be an excellent place to start if this is the approach you take, though I can't vouch for them because I haven't read them. I just googled "convert recursion to iteration," picked the first result, and found a page which links to all four of the blog posts.
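For completeness, the straightforward loop mentioned above might look like this (a Python sketch of my own, not the poster's code):

```python
def iterative_max(array):
    # Loop over the list, tracking the largest value seen so far.
    best = array[0]
    for value in array[1:]:
        if value > best:
            best = value
    return best

print(iterative_max([3, 7, 1, 9, 4]))  # 9
```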

Alternative approach for Rod Cutting Algorithm ( Recursive )

In the book Introduction To Algorithms, the naive approach to solving the rod cutting problem can be described by the following recurrence:
Let q be the maximum price that can be obtained from a rod of length n.
Let array price[1..n] store the given prices. price[i] is the given price for a rod of length i.
rodCut(int n)
{
    if(n == 0)
        return 0
    initialize q as q = INT_MIN
    for i = 1 to n
        q = max(q, price[i] + rodCut(n-i))
    return q
}
What if I solve it using the below approach:
rodCutv2(int n)
{
    if(n == 0)
        return 0
    initialize q = price[n]
    for i = 1 to n/2
        q = max(q, rodCutv2(i) + rodCutv2(n-i))
    return q
}
Is this approach correct? If yes, why do we generally use the first one? Why is it better?
NOTE:
I am just concerned with the approach to solving this problem . I know that this problem exhibits optimal substructure and overlapping subproblems and can be solved efficiently using dynamic programming.
The problem with the second version is it's not making use of the price array. There is no base case of the recursion so it'll never stop. Even if you add a condition to return price[i] when n == 1 it'll always return the result of cutting the rod into pieces of size 1.
Your 2nd approach is absolutely correct, and its time complexity is the same as the 1st one's.
In dynamic programming, we can also build a tabulation on the same approach. Here is my solution using recursion:
int rodCut (int price[],int n){
if(n<=0) return 0;
int ans = price[n-1];
for(int i=1; i<=n/2 ; ++i){
ans=max(ans, (rodCut(price , i) + rodCut(price , n-i)));
}
return ans;
}
And here is the solution using dynamic programming:
int rodCut(int *price,int n){
int ans[n+1];
ans[0]=0; // if length of rod is zero
for(int i=1;i<=n;++i){
int max_value=price[i-1];
for(int j=1;j<=i/2;++j){
max_value=max(max_value,ans[j]+ans[i-j]);
}
ans[i]=max_value;
}
return ans[n];
}
Your algorithm looks almost correct - you need to be a bit careful when n is odd.
However, it's also exponential in time complexity - you make two recursive calls in each call to rodCutv2. The first algorithm uses memoisation (the price array), so avoids computing the same thing multiple times, and so is faster (it's polynomial-time).
Edit: Actually, the first algorithm isn't correct! It never stores values in prices, but I suspect that's just a typo and not intentional.
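To make the memoisation point concrete, here is a Python sketch (my own, with hypothetical names) of the second approach with a cache added; storing each sub-result once brings the exponential recursion down to polynomial time:

```python
def rod_cut(price, n, memo=None):
    # price[i-1] is the price of a rod of length i (lengths are 1-based).
    if memo is None:
        memo = {}
    if n == 0:
        return 0
    if n in memo:
        return memo[n]
    best = price[n - 1]                  # option: sell the rod uncut
    for i in range(1, n // 2 + 1):       # try every split point
        best = max(best, rod_cut(price, i, memo) + rod_cut(price, n - i, memo))
    memo[n] = best
    return best

# Classic CLRS price table for lengths 1..8; optimal revenue for n=8 is 22.
print(rod_cut([1, 5, 8, 9, 10, 17, 17, 20], 8))  # 22
```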

Example of Big O of 2^n

So I can picture what an algorithm with complexity n^c looks like: it's just c nested for loops.
for (var i = 0; i < dataset.len; i++) {
    for (var j = 0; j < dataset.len; j++) {
        //do stuff with i and j
    }
}
Log is something that splits the data set in half every time; binary search does this (I'm not entirely sure what the code for this looks like).
But what is a simple example of an algorithm that is c^n, or more specifically 2^n? Is O(2^n) based on loops through the data? On how the data is split? Or something else entirely?
Algorithms with running time O(2^N) are often recursive algorithms that solve a problem of size N by recursively solving two smaller problems of size N-1.
This program, for instance, prints out (in pseudo-code) all the moves necessary to solve the famous "Towers of Hanoi" problem for N disks:
void solve_hanoi(int N, string from_peg, string to_peg, string spare_peg)
{
    if (N < 1) {
        return;
    }
    if (N > 1) {
        solve_hanoi(N-1, from_peg, spare_peg, to_peg);
    }
    print "move from " + from_peg + " to " + to_peg;
    if (N > 1) {
        solve_hanoi(N-1, spare_peg, to_peg, from_peg);
    }
}
Let T(N) be the time it takes for N disks.
We have:
T(1) = O(1)
and
T(N) = O(1) + 2*T(N-1) when N>1
If you repeatedly expand the last term, you get:
T(N) = 3*O(1) + 4*T(N-2)
T(N) = 7*O(1) + 8*T(N-3)
...
T(N) = (2^(N-1)-1)*O(1) + (2^(N-1))*T(1)
T(N) = (2^N - 1)*O(1)
T(N) = O(2^N)
To actually figure this out, you just have to know that certain patterns in the recurrence relation lead to exponential results. Generally, T(N) = ... + C*T(N-1) with C > 1 means O(C^N). See:
https://en.wikipedia.org/wiki/Recurrence_relation
Think of, for example, iterating over all possible subsets of a set. This kind of algorithm is used, for instance, for a generalized knapsack problem.
If you find it hard to understand how iterating over subsets translates to O(2^n), imagine a set of n switches, each corresponding to one element of the set. Each switch can be turned on or off; think of "on" as the element being in the subset. Note how many combinations are possible: 2^n.
If you want to see an example in code, it's usually easier to think about recursion here, but I can't think of another nice and understandable example right now.
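Since a code example is asked for above, here is one possible recursive sketch in Python (my own illustration): it enumerates all subsets by deciding, for each element in turn, whether it is in or out, so the call tree branches in two at every element and produces 2^n subsets.

```python
def all_subsets(items):
    # Each element is either excluded or included: 2 choices per element.
    if not items:
        return [[]]
    rest = all_subsets(items[1:])              # subsets without items[0]
    return rest + [[items[0]] + s for s in rest]  # ...and with items[0]

subsets = all_subsets([1, 2, 3])
print(len(subsets))  # 8 = 2^3
```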
Consider that you want to guess the PIN of a smartphone, where the PIN is a 4-digit integer. The maximum number of bits needed to hold a 4-digit number is 14 (2^14 = 16384 ≥ 10000), so you may have to guess the correct combination out of 2^14 = 16384 possible values!
The only way is brute force. For simplicity, consider a simple 2-bit word that you want to guess; each bit has 2 possible values, 0 or 1, so all the possibilities are:
00
01
10
11
We know that all possibilities of an n-bit word will be 2^n possible combinations. So, 2^2 is 4 possible combinations as we saw earlier.
The same applies to the 14-bit integer PIN, so guessing the PIN would require you to solve a 2^14 possible outcome puzzle, hence an algorithm of time complexity O(2^n).
So, those types of problems, where the combinations of elements in a set S differ and you have to try all possible combinations, will have this O(2^n) time complexity. But the exponentiation base does not have to be 2. In the example above it is 2 because each element (each bit) has two possible values, which will not be the case in other problems.
Another good example of O(2^n) algorithms is the recursive knapsack. Where you have to try different combinations to maximize the value, where each element in the set, has two possible values, whether we take it or not.
The Edit Distance problem has O(3^n) time complexity, since for each of the n characters of the string you have 3 decisions to choose from: deletion, insertion, or replacement.
int Fibonacci(int number)
{
    if (number <= 1) return number;
    return Fibonacci(number - 2) + Fibonacci(number - 1);
}
Growth doubles with each addition to the input data set. The growth curve of an O(2^N) function is exponential - starting off very shallow, then rising meteorically.
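You can check the "growth roughly doubles" claim by counting calls; strictly, the naive Fibonacci call count grows like φ^n with φ ≈ 1.618, which is still exponential. A Python sketch of my own that counts how many times the recursion is entered:

```python
def fib_calls(n):
    # Returns (fib(n), number of calls made to compute it).
    if n <= 1:
        return n, 1
    f1, c1 = fib_calls(n - 1)
    f2, c2 = fib_calls(n - 2)
    return f1 + f2, c1 + c2 + 1

for n in (10, 15, 20):
    print(n, fib_calls(n))  # call count grows exponentially with n
```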
My example of big O(2^n) is this:
public void solve(int n, String start, String auxiliary, String end) {
    if (n == 1) {
        System.out.println(start + " -> " + end);
    } else {
        solve(n - 1, start, end, auxiliary);
        System.out.println(start + " -> " + end);
        solve(n - 1, auxiliary, start, end);
    }
}
This method prints all the moves needed to solve the "Tower of Hanoi" problem.
Both examples use recursion to solve the problem and have O(2^n) running time.
c^N = all combinations of N elements from a c-sized alphabet.
More specifically, 2^N is all numbers representable with N bits.
The common cases are implemented recursively, something like:
vector<int> bits;
int N;

void find_solution(int pos) {
    if (pos == N) {
        check_solution();
        return;
    }
    bits[pos] = 0;
    find_solution(pos + 1);
    bits[pos] = 1;
    find_solution(pos + 1);
}
Here is a code clip that computes the value sum of every combination of values in a goods array (values is a global array variable):
fun boom(idx: Int, pre: Int, include: Boolean) {
    if (idx < 0) return
    boom(idx - 1, pre + if (include) values[idx] else 0, true)
    boom(idx - 1, pre + if (include) values[idx] else 0, false)
    println(pre + if (include) values[idx] else 0)
}
As you can see, it's recursive. We can nest loops to get polynomial complexity, and use recursion to get exponential complexity.
Here are two simple examples in Python with Big O/Landau (2^N):

# Fibonacci
def fib(num):
    if num == 0 or num == 1:
        return num
    else:
        return fib(num - 1) + fib(num - 2)

num = 10
for i in range(0, num):
    print(fib(i))

# Tower of Hanoi
def move(disk, from_rod, to_rod, aux_rod):
    if disk >= 1:
        move(disk - 1, from_rod, aux_rod, to_rod)
        print("Move disk", disk, "from rod", from_rod, "to rod", to_rod)
        move(disk - 1, aux_rod, to_rod, from_rod)

n = 3
move(n, 'A', 'B', 'C')
Counting a set as a subset of itself, there are 2ⁿ possible subsets for a set with n elements.
Think of it this way: to make a subset, take one element. This element has two possibilities in the subset you're creating: present or absent. The same applies to all the other elements in the set. Multiplying all these possibilities together, you arrive at 2ⁿ.

Coin change algorithm and pseudocode: Need clarification

I'm trying to understand the coin change problem solution, but am having some difficulty.
At the Algorithmist, there is a pseudocode solution for the dynamic programming solution, shown below:
n = goal number
S = [S1, S2, S3 ... Sm]

function sequence(n, m)
    //initialize base cases
    for i = 0 to n
        for j = 0 to m
            table[i][j] = table[i-S[j]][j] + table[i][j-1]
This is a pretty standard O(n·m) dynamic-programming algorithm that avoids recalculating the same answer multiple times by using a 2-D array.
My issue is two-fold:
How to define the base cases and incorporate them in table[][] as initial values
How to extract out the different sequences from the table
Regarding issue 1, there are three base cases with this algorithm:
if n==0, return 1
if n < 0, return 0
if n >= 1 && m <= 0, return 0
How to incorporate them into table[][], I am not sure. Finally, I have no idea how to extract out the solution set from the array.
We can implement a dynamic programming algorithm in at least two different approaches. One is the top-down approach using memoization, the other is the bottom-up iterative approach.
For a beginner to dynamic programming, I would always recommend using the top-down approach first since this will help them understand the recurrence relationships in dynamic programming.
So in order to solve the coin changing problem, you've already understood what the recurrence relationship says:
table[i][j] = table[i-S[j]][j] + table[i][j-1]
Such a recurrence relationship is good but is not that well-defined since it doesn't have any boundary conditions. Therefore, we need to define boundary conditions in order to ensure the recurrence relationship could successfully terminate without going into an infinite loop.
So what will happen when we try to go down the recursive tree?
If we need to calculate table[i][j], which means the number of approaches to change i using coins from type 0 to j, there are several corner cases we need to handle:
1) What if j == 0?
If j == 0 we will try to solve the sub-problem table(i,j-1), which is not a valid sub-problem. Therefore, one boundary condition is:
if(j == 0) {
    if(i == 0) table[i][j] = 1;
    else table[i][j] = 0;
}
2) What if i - S[j] < 0?
We also need to handle this boundary case and we know in such a condition we should either not try to solve this sub-problem or initialize table(i-S[j],j) = 0 for all of these cases.
So in all, if we are going to implement this dynamic programming from a top-down memoization approach, we can do something like this:
int f(int i, int j) {
    if(calc[i][j]) return table[i][j];
    calc[i][j] = true;
    if(j == 0) {
        if(i == 0) return table[i][j] = 1;
        else return table[i][j] = 0;
    }
    if(i >= S[j])
        return table[i][j] = f(i - S[j], j) + f(i, j - 1);
    else
        return table[i][j] = f(i, j - 1);
}
In practice, it's also possible to use the values of the table array itself to track whether a sub-problem has been calculated before (e.g. initialize every entry to -1 to mean the sub-problem hasn't been calculated yet).
Hope the answer is clear. :)
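For reference, the same top-down scheme transcribed into Python (my own sketch; a dictionary plays the role of both calc and table, and S is a 0-based Python list while j counts the first j coin types):

```python
def count_ways(i, j, S, memo=None):
    # Number of ways to make amount i using the first j coin types of S.
    if memo is None:
        memo = {}
    if (i, j) in memo:
        return memo[(i, j)]
    if j == 0:
        result = 1 if i == 0 else 0  # no coins: only amount 0 is makeable
    elif i >= S[j - 1]:
        # Either use coin S[j-1] again, or drop it from consideration.
        result = count_ways(i - S[j - 1], j, S, memo) + count_ways(i, j - 1, S, memo)
    else:
        result = count_ways(i, j - 1, S, memo)
    memo[(i, j)] = result
    return result

print(count_ways(4, 3, [1, 2, 3]))  # 4 ways: 1+1+1+1, 1+1+2, 2+2, 1+3
```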
