Forming a recursive equation from a function - algorithm

I've been asked to form a recurrence equation from a recursive function and solve it for T(n). This function divides an array of n elements into two halves, finds the highest value in each half, and then returns the higher of the two. However, I am having some trouble understanding how to form a recurrence equation from this function.
I've looked at some similar questions here and elsewhere on the internet. From what I think I've understood, this function makes two recursive calls and splits the data in half each time, and the size should be n; however, I am unsure about the other elements in the function and how to write them correctly.
π‘ π‘’π‘Žπ‘Ÿπ‘β„Žπ‘€π‘Žπ‘₯(𝐴[], π‘ π‘‘π‘Žπ‘Ÿπ‘‘πΌπ‘‘π‘₯, 𝑠𝑖𝑧𝑒)
{
𝑖𝑓 (𝑠𝑖𝑧𝑒 == 1)
π‘Ÿπ‘’π‘‘π‘’π‘Ÿπ‘› A[π‘ π‘‘π‘Žπ‘Ÿπ‘‘πΌπ‘‘π‘₯];
π‘›π‘’π‘š1 = π‘ π‘’π‘Žπ‘Ÿπ‘β„ŽMax(𝐴[], π‘ π‘‘π‘Žπ‘Ÿπ‘‘πΌπ‘‘π‘₯, βŒŠπ‘ π‘–π‘§π‘’/2βŒ‹);
π‘›π‘’π‘š2 = π‘ π‘’π‘Žπ‘Ÿπ‘β„ŽMax(𝐴[], π‘ π‘‘π‘Žπ‘Ÿπ‘‘πΌπ‘‘π‘₯ + βŒŠπ‘ π‘–π‘§π‘’/2βŒ‹, 𝑠𝑖𝑧𝑒 βˆ’ βŒŠπ‘ π‘–π‘§π‘’/2βŒ‹);
if (π‘›π‘’π‘š1 β‰₯ π‘›π‘’π‘š2)
π‘Ÿπ‘’π‘‘π‘’π‘Ÿπ‘› π‘›π‘’π‘š1;
𝑒𝑙𝑠𝑒
π‘Ÿπ‘’π‘‘π‘’π‘Ÿπ‘› π‘›π‘’π‘š2;
}

T(n) = 2T(n/2) + c
Time complexity - O(n)
The function makes 2 recursive calls on subarrays of size n/2 and does a constant amount of work outside the recursive calls.
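
If it helps, here is a minimal C sketch of the same function (my own translation of the pseudocode; the variable names are taken from the question), with comments marking which part of the code produces each term of the recurrence:

int searchMax(int A[], int startIdx, int size)
{
    if (size == 1)
        return A[startIdx];                              /* base case: constant time */

    int half = size / 2;                                 /* integer division = floor(size/2) */
    int num1 = searchMax(A, startIdx, half);             /* T(floor(n/2)) */
    int num2 = searchMax(A, startIdx + half, size - half); /* T(ceil(n/2)) */

    if (num1 >= num2)                                    /* comparison and return: the constant c */
        return num1;
    else
        return num2;
}

Ignoring the floor/ceiling difference between the two halves, the two calls contribute the 2T(n/2) term, and everything else (the size check, the comparison, and the return) is the constant c.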

Related

Counting primitive operations on recursive functions

I'm reading Algorithm Design and Applications, by Michael T. Goodrich and Roberto Tamassia, published by Wiley. They teach the concept of primitive operations and how to count them in a given algorithm. Everything was clear to me until the moment they showed a recursive function (a simple recursive way to calculate the maximum value of an array) and its primitive operation count.
The function (in pseudo-code) is this:
Algorithm recursiveMax(A, n):
Input: An array A storing n β‰₯ 1 integers.
Output: The maximum element in A.
if n = 1 then
return A[0]
return max{recursiveMax(A, n βˆ’ 1), A[n βˆ’ 1]}
where A is an array and n is its length. The authors state the following about how we calculate the number of primitive operations this function performs:
As with this example, recursive algorithms are often quite elegant. Analyzing the running time of a recursive algorithm takes a bit of additional work, however. In particular, to analyze such a running time, we use a recurrence equation, which defines mathematical statements that the running time of a recursive algorithm must satisfy. We introduce a function T(n) that denotes the running time of the algorithm on an input of size n, and we write equations that T(n) must satisfy. For example, we can characterize the running time, T(n), of the recursiveMax algorithm as

T(n) = 3             if n = 1
T(n) = T(n - 1) + 7  otherwise

assuming that we count each comparison, array reference, recursive call, max calculation, or return as a single primitive operation. Ideally, we would like to characterize a recurrence equation like that above in closed form, where no references to the function T appear on the right-hand side. For the recursiveMax algorithm, it isn't too hard to see that a closed form would be T(n) = 7(n - 1) + 3 = 7n - 4.
I can clearly understand that in the case of a single-item array, our T(n) would be just 3 (only 3 primitive operations occur, i.e. the comparison n = 1, the array reference A[0], and the return operation), but I cannot understand why in the case where n is not 1 we have T(n-1) + 7. Why + 7? Where did this constant come from?
Also, I cannot comprehend this closed form: how did he get that T(n) = 7(n - 1) + 3?
I appreciate any help.
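
For what it's worth, here is one way to account for the 7, written as a C sketch of recursiveMax. The book does not itemize the count, so treating the two n - 1 computations as primitive operations is an assumption on my part:

int recursiveMax(int A[], int n)
{
    if (n == 1)                          /* 1 op: the comparison n == 1 */
        return A[0];                     /* base case: comparison + array reference + return = 3 */

    int rest = recursiveMax(A, n - 1);   /* 2 ops: the subtraction n - 1 and the recursive call */
    int last = A[n - 1];                 /* 2 ops: the subtraction n - 1 and the array reference */
    return rest > last ? rest : last;    /* 2 ops: the max calculation and the return */
}

Under that accounting, every call with n > 1 costs 7 operations plus the cost of the recursive call, so T(n) = T(n - 1) + 7. Unrolling gives T(n) = T(n - 2) + 2*7 = ... = T(1) + 7(n - 1) = 3 + 7(n - 1) = 7n - 4, which is the closed form quoted above.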

Big O Recursive Method

I have a method called binary sum
Algorithm BinarySum(A, i, n):
Input: An array A and integers i and n
Output: The sum of the n integers in A starting at index i
if n = 1 then
return A[i]
return BinarySum(A, i, n/2) + BinarySum(A, i + n/2, n/2)
Ignoring the fact that this makes a simple problem complicated, I have been asked to find the Big O. Here is my thought process: for an array of size N I will be making 1 + 2 + 4 + ... + N recursive calls. This is close to half the sum from 1 to N, so I will say it is about N(N + 1)/4. After making this many calls I now need to add them together, so once again I need to perform N(N + 1)/4 additions. Adding them together, we are left with N^2 as the dominant term.
So would the big O of this algorithm be O(N^2)? Or am I doing something wrong? It feels strange to have binary recursion and not have a 2^n or log n term in the final answer.
There are in fact 2^n and log n terms in the final result... sort of.
For each call on a sub-array of length n, two recursive calls are made, one to each half of that array, plus a constant amount of work (the if-statement, the addition, pushing onto the call stack, etc.). Thus the recurrence relation is given by:

T(1) = c
T(n) = 2T(n/2) + c

At this point we could just use the Master theorem to arrive directly at the final result, O(n). But let's instead derive it by repeated expansion:

T(n) = 2T(n/2) + c
     = 4T(n/4) + 2c + c
     = 8T(n/8) + 4c + 2c + c
     = ...
     = 2^m T(n/2^m) + (2^m - 1)c        (*)

The stopping condition n = 1 gives the maximum value of m (ignoring rounding): n/2^m = 1, i.e. m = log2(n) and 2^m = n, so

T(n) = n*T(1) + (n - 1)c = (2n - 1)c = O(n)

In step (*) we used the standard formula for a geometric series, 1 + 2 + 4 + ... + 2^(m-1) = 2^m - 1. So as you can see the answer does involve log n and 2^n terms in a sense, but they "cancel" out to give a simple linear term, which is the same as for a simple loop.
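
As a side note, here is a hedged C sketch of BinarySum (my own translation; I give the second call a size of n - n/2 so that odd sizes are also covered, whereas the book's version assumes n is a power of two):

int binarySum(int A[], int i, int n)
{
    if (n == 1)
        return A[i];                          /* base case: one element */

    int half = n / 2;
    return binarySum(A, i, half)              /* T(n/2) */
         + binarySum(A, i + half, n - half);  /* T(n - n/2), plus O(1) for the addition */
}

The two recursive calls and the single addition per call are exactly what the recurrence above models.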

Dynamic Programming : True or False

I have a conceptual doubt regarding Dynamic Programming:
In a dynamic programming solution, the space requirement is always at least as big as the number of unique subproblems.
I thought about it in terms of Fibonacci numbers:
f(n) = f(n-1) + f(n-2)
Here there are two subproblems, so the space required will be at least O(n) if the input is n.
Right?
But, the answer is False.
Can someone explain this?
The answer is indeed false.
For example, for your Fibonacci series you can use Dynamic Programming with O(1) space by remembering only the last 2 numbers:
fib(n):
    prev = current = 1
    i = 2
    while i < n:
        next = prev + current
        prev = current
        current = next
        i = i + 1
    return current
This is a common practice where you don't need all smaller subproblems to solve the bigger one, and you can discard most of them and save some space.
If you implement Fibonacci calculation using bottom-up DP, you can discard earlier results which you don't need. This is an example:
fib = [0, 1]
for i in xrange(n):
    fib = [fib[1], fib[0] + fib[1]]
print fib[1]
As this example shows, you only need to keep the last two elements in the array.
This statement is not correct. But it's almost correct.
Generally, a dynamic programming solution needs O(number of subproblems) space. In other words, if there is a dynamic programming solution to the problem, it can be implemented using O(number of subproblems) memory.
In your particular problem "calculation of Fibonacci numbers", if you write down straightforward dynamic programming solution:
Integer F(Integer n) {
if (n == 0 || n == 1) return 1;
if (memorized[n]) return memorized_value[n];
memorized_value[n] = F(n - 1) + F(n - 2);
memorized[n] = true;
return memorized_value[n];
}
it will use O(number of subproblems) memory. But as you mentioned, by analyzing the recurrence you can come up with a more optimal solution that uses O(1) memory.
P.S. The recurrence for Fibonacci numbers that you've mentioned has n + 1 subproblems. Usually, by subproblems people mean all the f values you need to compute in order to compute a particular f value. Here you need to calculate f(0), f(1), f(2), ..., f(n) in order to compute f(n).

how to write a recurrence relation for a given piece of code

In my algorithms and data structures class we were given a few recurrence relations, either to solve or to use to determine the complexity of an algorithm.
At first, I thought that the sole purpose of these relations was to write down the complexity of a recursive divide-and-conquer algorithm. Then I came across a question in the MIT assignments where one is asked to provide a recurrence relation for an iterative algorithm.
How would I actually come up with a recurrence relation myself, given some code? What are the necessary steps?
Is it actually correct that I can describe any case, i.e. worst, best, or average case, with such a relation?
Could someone possibly give a simple example of how a piece of code is turned into a recurrence relation?
Cheers,
Andrew
Okay, so in algorithm analysis, a recurrence relation is a function relating the amount of work needed to solve a problem of size n to that needed to solve smaller problems (this is closely related to its meaning in math).
For example, consider a Fibonacci function below:
Fib(a)
{
if(a==1 || a==0)
return 1;
return Fib(a-1) + Fib(a-2);
}
This does three operations (comparison, comparison, addition), and also calls itself recursively. So the recurrence relation is T(n) = 3 + T(n-1) + T(n-2). To solve this, you would use the iterative method: start expanding the terms until you find the pattern. For this example, you would expand T(n-1) to get

T(n) = 6 + 2*T(n-2) + T(n-3)

Then expand T(n-2) to get

T(n) = 12 + 3*T(n-3) + 2*T(n-4)

One more time, expand T(n-3) to get

T(n) = 21 + 5*T(n-4) + 3*T(n-5)

Notice that the coefficient of the first T term follows the Fibonacci numbers, and the constant term is the sum of them times three: looking it up, that is 3*(Fib(n+2) - 1). More importantly, we notice that the sequence increases exponentially; that is, the complexity of the algorithm is O(2^n).
Then consider this function for merge sort:
Merge(ary)
{
    n = length(ary);
    if(n <= 1)
        return ary;    // base case: nothing to sort
    ary_start = Merge(ary[0:n/2]);
    ary_end = Merge(ary[n/2:n]);
    return MergeArrays(ary_start, ary_end);
}
This function calls itself on half the input twice, then merges the two halves (using O(n) work). That is, T(n) = T(n/2) + T(n/2) + O(n). To solve recurrence relations of this type, you should use the Master Theorem: here there are a = 2 subproblems of size n/b = n/2 each, and the non-recursive work is f(n) = Θ(n) = Θ(n^(log_2 2)), so the "equal" case of the theorem gives T(n) = O(n log n).
Finally, consider this function to calculate Fibonacci:
Fib2(n)
{
two = one = 1;
for(i from 2 to n)
{
temp = two + one;
one = two;
two = temp;
}
return two;
}
This function calls itself no times, and it iterates O(n) times. Therefore, its recurrence relation is T(n) = O(n). This is the case you asked about. It is a special case of recurrence relations with no recurrence; therefore, it is very easy to solve.
To find the running time of an algorithm, we first need to be able to write an expression for the algorithm, where that expression tells us the running time of each step. So you need to walk through each of the steps of the algorithm to find the expression.
For example, suppose we defined a predicate, isSorted, which would take as input an array a and the size, n, of the array and would return true if and only if the array was sorted in increasing order.
bool isSorted(int *a, int n) {
if (n == 1)
return true; // a 1-element array is always sorted
for (int i = 0; i < n-1; i++) {
if (a[i] > a[i+1]) // found two adjacent elements out of order
return false;
}
return true; // everything's in sorted order
}
Clearly, the size of the input here will simply be n, the size of the array. How many steps will be performed in the worst case, for input n?
The first if statement counts as 1 step
The for loop will execute nβˆ’1 times in the worst case (assuming the internal test doesn't kick us out), for a total time of nβˆ’1 for the loop test and the increment of the index.
Inside the loop, there's another if statement which will be executed once per iteration for a total of nβˆ’1 time, at worst.
The last return will be executed once.
So, in the worst case, we'll have done 1+(nβˆ’1)+(nβˆ’1)+1
computations, for a total run time T(n)≀1+(nβˆ’1)+(nβˆ’1)+1=2n and so we have the timing function T(n)=O(n).
So, in brief, what we have done is:
1. For a parameter n that gives the size of the input, we assume that each simple statement executed once takes constant time; for simplicity, assume one unit.
2. Iterative statements such as loops, and the bodies inside them, take a variable amount of time depending on the input.
3. So your task is to go step by step and write down the function in terms of n to calculate the time complexity.
For recursive algorithms, you do the same thing, only this time you add the time taken by each recursive call, expressed as a function of the time it takes on its input.
For example, let's rewrite, isSorted as a recursive algorithm:
bool isSorted(int *a, int n) {
if (n == 1)
return true;
if (a[n-2] > a[n-1]) // are the last two elements out of order?
return false;
else
return isSorted(a, n-1); // is the initial part of the array sorted?
}
In this case we still walk through the algorithm, counting: 1 step for the first if plus 1 step for the second if, plus the time isSorted will take on an input of size nβˆ’1, which will be T(nβˆ’1), giving a recurrence relation
T(n)≀1+1+T(nβˆ’1)=T(nβˆ’1)+O(1)
Which has solution T(n)=O(n), just as with the non-recursive version, as it happens.
Simple enough! Practice writing the recurrence relations of various algorithms, keeping in mind how many times each step in the algorithm is executed.

Finding time complexity of partition by quick sort method

Here is an algorithm for finding the kth smallest number in an n-element array using the partition algorithm of Quicksort.
small(a,i,j,k)
{
if(i==j) return(a[i]);
else
{
m=partition(a,i,j);
if(m==k) return(a[m]);
else
{
if(m>k) return small(a,i,m-1,k);
else return small(a,m+1,j,k);
}
}
}
where i and j are the starting and ending indices of the array (j - i = n, the number of elements in the array) and k specifies the kth smallest number to be found.
I want to know, in brief, what the best case and the average case of the above algorithm are, and how they are obtained. I know that we should not count the termination condition in the best case, and also that the partition algorithm takes O(n). I do not want asymptotic notation but an exact mathematical result, if possible.
First of all, I'm assuming the array is sorted - something you didn't mention - because that code wouldn't otherwise work. And, well, this looks to me like a regular binary search.
Anyway...
The best case scenario is when either the array is one element long (you return immediately because i == j), or, for large values of n, if the middle position, m, is the same as k; in that case, no recursive calls are made and it returns immediately as well. That makes it O(1) in best case.
For the general case, consider that T(n) denotes the time taken to solve a problem of size n using your algorithm. We know that:
T(1) = c
T(n) = T(n/2) + c
Where c is a constant time operation (for example, the time to compare if i is the same as j, etc.). The general idea is that to solve a problem of size n, we consume some constant time c (to decide if m == k, if m > k, to calculate m, etc.), and then we consume the time taken to solve a problem of half the size.
Expanding the recurrence can help you derive a general formula, although it is pretty intuitive that this is O(log(n)):
T(n) = T(n/2) + c = T(n/4) + c + c = T(n/8) + c + c + c = ... = T(1) + c*log(n) = c*(log(n) + 1)
That should be the exact mathematical result. The algorithm runs in O(log(n)) time. An average case analysis is harder because you need to know the conditions in which the algorithm will be used. What is the typical size of the array? The typical value of k? What is the most likely position of k in the array? If it's in the middle, for example, the average case may be O(1). It really depends on how you use this.
