I'm trying to determine the recurrence relation of the following recursive function. I think I have done it correctly, but I would like some input on my method of solving it.
Solve for C(n) the number of additions that this function does:
//precondition: n>0
int fct(const int A[], int n) {
    if (n == 1)
        return A[0] + A[0];                    // one addition in the base case
    else
        return A[n-1] + fct(A, n-1) + A[n-1];  // two additions per recursive step
}
Here, exactly two additions occur in each recursive step, as well as a recursive call on n-1; the base case performs one addition.
C(1)=1
C(n)=2+C(n-1) // 2 additions per step, plus whatever the recursive call C(n-1) costs
Therefore
C(2)=C(1)+2=1+2=3
C(3)=C(2)+2=3+2=5
C(4)=C(3)+2=5+2=7
C(n)=2n-1
so the big-O complexity is O(n)?
Correct. Keep in mind that the number of recursive steps depends on n: the function recurses n-1 times before it reaches the base case.
Each addition takes constant time, so you are performing a constant-time operation n times, which results in the O(n) time complexity you got.
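If you want a sanity check on the closed form, here is a minimal instrumented sketch (the additions counter is my own addition for illustration, not part of the original function):

#include <iostream>

static int additions = 0;  // global counter, added for instrumentation only

int fct(const int A[], int n) {
    if (n == 1) {
        additions += 1;                        // one addition in the base case
        return A[0] + A[0];
    }
    additions += 2;                            // two additions per recursive step
    return A[n-1] + fct(A, n-1) + A[n-1];
}

int main() {
    int A[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    for (int n = 1; n <= 8; n++) {
        additions = 0;
        fct(A, n);
        std::cout << "n=" << n << "  C(n)=" << additions << '\n';  // prints C(n) = 2n-1
    }
    return 0;
}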
I'm trying to understand the time and space complexity of an algorithm for generating an array's permutations. Given a partially built permutation where k out of n elements are already selected, the algorithm selects element k+1 from the remaining n-k elements and calls itself to select the remaining n-k-1 elements:
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public static List<List<Integer>> permutations(List<Integer> A) {
    List<List<Integer>> result = new ArrayList<>();
    permutations(A, 0, result);
    return result;
}

public static void permutations(List<Integer> A, int start, List<List<Integer>> result) {
    if (A.size() - 1 == start) {
        result.add(new ArrayList<>(A));   // copy the finished permutation: O(n)
        return;
    }
    for (int i = start; i < A.size(); i++) {
        Collections.swap(A, start, i);    // choose element i for position start
        permutations(A, start + 1, result);
        Collections.swap(A, start, i);    // undo the choice (backtrack)
    }
}
My thoughts are that in each call we swap the collection's elements up to 2n times, where n is the number of elements to permute, and make n recursive calls. So the running time seems to fit the recurrence relation

T(n) = nT(n-1) + n
     = n[(n-1)T(n-2) + (n-1)] + n
     = ...
     = n + n(n-1) + n(n-1)(n-2) + ... + n!
     = n![1/(n-1)! + 1/(n-2)! + ... + 1]
     ≈ n!e,

hence the time complexity is O(n!) and the space complexity is O(max(n!, n)), where n! is the total number of permutations and n is the height of the recursion tree.
This problem is taken from the Elements of Programming Interviews book, and they're saying that the time complexity is O(n*n!) because "The number of function calls C(n)=1+nC(n-1) ... [which solves to] O(n!) ... [and] ... we do O(n) computation per call outside of the recursive calls".
Which time complexity is correct?
The time complexity of this algorithm, counted by the number of basic operations performed, is Θ(n * n!). Think about the size of the result list when the algorithm terminates: it contains n! permutations, each of length n, and we cannot create a list with n * n! total elements in less than that amount of time. The space complexity is the same, since the recursion stack only ever holds O(n) calls at a time, so the size of the output list dominates the space complexity.
If you count only the number of recursive calls to permutations(), the function is called O(n!) times, although this is usually not what is meant by 'time complexity' without further specification. In other words, you can generate all permutations in O(n!) time, as long as you don't read or write those permutations after they are generated.
The part where your derivation of the run-time breaks down is in the definition of T(n). If you define T(n) as 'the run-time of permutations(A, start) when the input A has length n', then you cannot define it recursively in terms of T(n-1) or any other function of T(), because the length of the input in all recursive calls is n, the length of A.
A more useful way to define T(n) is by specifying it as the run-time of permutations(A', start), when A' is any permutation of a fixed, initial array A, and A.length - start == n. It's easy to write the recurrence relation here:
T(x) = x * T(x-1) + O(x) if x > 1
T(1) = A.length
This takes into account the fact that the last recursive call, T(1), has to perform O(A.length) work to copy that array to the output, and this new recurrence gives the result from the textbook.
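As a quick empirical check of the Θ(n * n!) bound, you can count swaps plus elements copied to the output and compare against n * n!. This is my own instrumented sketch, not the book's code:

#include <cstdio>
#include <utility>
#include <vector>
using namespace std;

static long long work = 0;   // swaps performed + elements copied to the output

void perms(vector<int>& a, size_t start, vector<vector<int>>& out) {
    if (start + 1 == a.size()) {
        work += (long long)a.size();   // copying the finished permutation costs n
        out.push_back(a);
        return;
    }
    for (size_t i = start; i < a.size(); i++) {
        swap(a[start], a[i]); work++;
        perms(a, start + 1, out);
        swap(a[start], a[i]); work++;
    }
}

int main() {
    for (int n = 1; n <= 7; n++) {
        vector<int> a(n);
        for (int i = 0; i < n; i++) a[i] = i;
        long long fact = 1;
        for (int i = 2; i <= n; i++) fact *= i;
        vector<vector<int>> out;
        work = 0;
        perms(a, 0, out);
        // work / (n * n!) settles toward a constant, consistent with Theta(n * n!)
        printf("n=%d  work=%lld  n*n!=%lld\n", n, work, (long long)n * fact);
    }
    return 0;
}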
Most recursive functions I have seen being asked about (e.g. Fibonacci or Hanoi) have had O(1) base cases, but what would the time complexity be if the base case wasn't O(1) but O(n) instead?
For example, a recursive Fibonacci with O(n) base case:
class fibonacci {
    static int fib(int n) {
        if (n <= 1) {                        // braces added: without them, "return n;"
            for (int i = 0; i < n; i++) {    // is unconditional and the recursive case
                // something                 // becomes unreachable (a compile error)
            }
            return n;
        }
        return fib(n-1) + fib(n-2);
    }

    public static void main(String args[]) {
        int n = 9;
        System.out.println(fib(n));
    }
}
The base case of the function you've written actually still has time complexity O(1). The reason is that if the base case triggers, then n ≤ 1, so the for loop will run at most once.
Because base cases trigger when the input size is small, it's comparatively rare to see a base case whose runtime is, say, O(n), where n is the size of the original input to the algorithm. That would mean the base case's work is independent of the size of the small subproblem it actually receives, which can happen but is somewhat unusual.
A more common occurrence - albeit one I think is still pretty uncommon - would be for a recursive function to have two different parameters to it (say, n and k), where the recursion reduces n but leaves k unmodified. For example, imagine taking the code you have here and replacing the for loop on n in the base case with a for loop on k in the base case. What happens then?
This turns out to be an interesting question. In the case of this particular problem, it means that the total work done will be given by O(k) times the number of base cases triggered, plus O(1) times the number of non-base-case recursive calls. For the Fibonacci recursion, the number of base cases triggered when computing F(n) is F(n+1), and there are F(n+1) - 1 non-base-case calls, so the overall runtime would be Θ(k F(n+1) + F(n+1)) = Θ(k φ^n). For the Towers of Hanoi, you'd similarly see a scaling effect where the overall runtime would be Θ(k 2^n). But for other recursive functions the runtime might vary in different ways, depending on how those functions were structured.
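To make that concrete, here is a hedged sketch of the two-parameter variant (the fib(n, k) signature is my invention for illustration): the recursion reduces n, while the base case loops over k, which never shrinks.

#include <cstdio>

static long long baseWork = 0;   // loop iterations performed inside base cases
static long long callCount = 0;  // non-base-case recursive calls

int fib(int n, int k) {
    if (n <= 1) {
        for (int i = 0; i < k; i++)
            baseWork++;          // O(k) work per base case hit
        return n;
    }
    callCount++;
    return fib(n - 1, k) + fib(n - 2, k);
}

int main() {
    for (int k = 1; k <= 4; k++) {
        baseWork = callCount = 0;
        fib(20, k);
        // baseWork grows linearly in k; callCount is independent of k
        printf("k=%d  baseWork=%lld  callCount=%lld\n", k, baseWork, callCount);
    }
    return 0;
}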
Hope this helps!
I have the following three algorithms that compute the Fibonacci numbers. I would like to know how to find each of their orders of complexity. Does anyone know how I might be able to determine this?
Method 1
------------------------------------------------------------------------
long long fibb(long long a, long long b, int n) {
return (--n>0)?(fibb(b, a+b, n)):(a);
}
Method 2
------------------------------------------------------------------------
long long fibb(int n) {
    long long fnow = 0, fnext = 1, tempf;   // long long locals, so the values
    while (--n > 0) {                       // don't overflow int before returning
        tempf = fnow + fnext;
        fnow = fnext;
        fnext = tempf;
    }
    return fnext;
}
Method 3
------------------------------------------------------------------------
// PHI is assumed to be the golden ratio, (1 + sqrt(5)) / 2
long long unsigned fib(unsigned n) {
    return floor( (pow(PHI, n) - pow(1 - PHI, n)) / sqrt(5) );
}
Correct me if I am wrong, but my guess would be that method 1 is O(2^n) since it is recursive, method 2 will be O(n), and the last one will be O(1).
Methods 1 and 2 have complexity O(n). The reasons are straightforward:
Method 1 recurses exactly n-1 times, and each call performs a simple arithmetic operation.
Method 2 iterates exactly n-1 times, and each iteration does constant-time arithmetic.
Method 3 does indeed have a complexity of O(1), but it may not compute the correct value, merely an approximation.
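To see the "merely an approximation" caveat in practice, here is a small comparison sketch of mine (assuming PHI is the golden ratio computed in double precision): it pits the closed form against the exact O(n) iterative method.

#include <cmath>
#include <cstdio>

const double PHI = (1.0 + sqrt(5.0)) / 2.0;   // golden ratio

unsigned long long fibClosed(unsigned n) {     // Method 3: Binet's formula
    return (unsigned long long)floor((pow(PHI, n) - pow(1 - PHI, n)) / sqrt(5.0));
}

unsigned long long fibExact(unsigned n) {      // Method 2 style: exact, O(n)
    unsigned long long fnow = 0, fnext = 1;
    while (n-- > 0) {
        unsigned long long tempf = fnow + fnext;
        fnow = fnext;
        fnext = tempf;
    }
    return fnow;
}

int main() {
    for (unsigned n = 65; n <= 80; n++)
        printf("n=%u  closed=%llu  exact=%llu%s\n",
               n, fibClosed(n), fibExact(n),
               fibClosed(n) == fibExact(n) ? "" : "   <-- mismatch");
    // on typical IEEE-754 doubles the closed form starts to drift for n in the 70s
    return 0;
}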
My code is:
#include <vector>
using namespace std;

const int N = 4;            // illustrative value; the original assumes N is defined elsewhere
vector<int> permutation(N);
vector<int> used(N, 0);
void outputPermutation();   // prints the current permutation; defined elsewhere

// 'try' is a reserved word in C++, so the function is named tryPlace here
void tryPlace(int which, int what) {
    // try taking the number "what" as the "which"-th element
    permutation[which] = what;
    used[what] = 1;
    if (which == N-1)
        outputPermutation();
    else
        // try all possibilities for the next element
        for (int next = 0; next < N; next++)
            if (!used[next])
                tryPlace(which+1, next);
    used[what] = 0;
}

int main() {
    // try all possibilities for the first element
    for (int first = 0; first < N; first++)
        tryPlace(0, first);
}
I was learning complexity from some website where I came across this code. As per my understanding, the following line iterates N times. So the complexity is O(N).
for (int first=0; first<N; first++)
Next I am considering the recursive call.
for (int next = 0; next < N; next++)
    if (!used[next])
        tryPlace(which+1, next);
So this recursive call involves a number of steps T(N) = N*c + T(0) (where c is some constant cost per step).
For this step, then, the complexity is O(N).
Thus the total complexity is O(N*N) = O(N^2).
Is my understanding right?
Thanks!
The complexity of this algorithm is O(N!) (or even O(N! * N) if outputPermutation takes O(N), which seems possible).
This algorithm outputs all permutations of the natural numbers 0..N-1, without repetitions.
The recursive function tryPlace sequentially tries to put each still-unused element into position which, and for each choice it recursively invokes itself for position which+1, until which reaches N-1. At depth which, the loop makes (N - which) recursive calls, because at each level one more element has been marked as used in order to eliminate repetitions. Thus the algorithm takes N * (N - 1) * (N - 2) ... 1 steps.
It is a recursive function. The function tryPlace calls itself recursively, so there is a loop in main(), a loop in tryPlace(), a loop in the recursive call to tryPlace(), a loop in the next recursive call to tryPlace(), and so on.
You need to analyse very carefully what this function does, or you will get a totally wrong result (as you did). You might consider actually running this code with values of N = 1 to 20 and measuring the time. You will see that it is most definitely not O(N^2). Actually, don't skip any of the values of N; you will see why.
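If you'd rather count than time (timing gets impractical quickly), here is a sketch with my own call counter and the output suppressed, so only the recursion itself is measured:

#include <cstdio>

const int MAXN = 10;
int N;                            // current problem size
int permutation[MAXN];
int used[MAXN];                   // global arrays are zero-initialized
long long calls = 0;              // how many times tryPlace is invoked

void tryPlace(int which, int what) {
    calls++;
    permutation[which] = what;
    used[what] = 1;
    if (which < N - 1)            // output omitted; just walk the recursion tree
        for (int next = 0; next < N; next++)
            if (!used[next])
                tryPlace(which + 1, next);
    used[what] = 0;
}

int main() {
    for (N = 1; N <= 9; N++) {
        long long fact = 1;
        for (int i = 2; i <= N; i++) fact *= i;
        calls = 0;
        for (int first = 0; first < N; first++)
            tryPlace(0, first);
        // calls/N! approaches e = 2.718..., i.e. the call count is Theta(N!)
        printf("N=%d  calls=%lld  N!=%lld  calls/N!=%.3f\n",
               N, calls, fact, (double)calls / fact);
    }
    return 0;
}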
In my algorithm and data structures class we were given a few recurrence relations either to solve or that we can see the complexity of an algorithm.
At first, I thought that the mere purpose of these relations is to jot down the complexity of a recursive divide-and-conquer algorithm. Then I came across a question in the MIT assignments, where one is asked to provide a recurrence relation for an iterative algorithm.
How would I actually come up with a recurrence relation myself, given some code? What are the necessary steps?
Is it actually correct that I can express any case, i.e. worst, best, or average, with such a relation?
Could possibly someone give a simple example on how a piece of code is turned into a recurrence relation?
Cheers,
Andrew
Okay, so in algorithm analysis, a recurrence relation is a function relating the amount of work needed to solve a problem of size n to that needed to solve smaller problems (this is closely related to its meaning in math).
For example, consider a Fibonacci function below:
Fib(a)
{
if(a==1 || a==0)
return 1;
return Fib(a-1) + Fib(a-2);
}
This does three operations (comparison, comparison, addition), and also calls itself recursively. So the recurrence relation is T(n) = 3 + T(n-1) + T(n-2). To solve this, you would use the iterative method: start expanding the terms until you find the pattern. For this example:

expand T(n-1):  T(n) = 6 + 2T(n-2) + T(n-3)
expand T(n-2):  T(n) = 12 + 3T(n-3) + 2T(n-4)
expand T(n-3):  T(n) = 21 + 5T(n-4) + 3T(n-5)

Notice that the coefficient of the first T term follows the Fibonacci numbers, and the constant term is the sum of them times three; looking it up, that is 3*(Fib(n+2)-1). More importantly, we notice that the sequence increases exponentially; that is, the complexity of the algorithm is O(2^n).
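You can check the exponential growth directly by evaluating the recurrence. This is a sketch of mine; it assumes, for simplicity, that the base cases cost nothing:

#include <cstdio>

// T(n) = 3 + T(n-1) + T(n-2), taking the base cases as free: T(0) = T(1) = 0.
long long T(int n) {
    if (n <= 1) return 0;
    return 3 + T(n - 1) + T(n - 2);
}

int main() {
    for (int n = 3; n <= 25; n++)
        // the ratio settles near the golden ratio 1.618..., i.e. exponential growth
        printf("n=%2d  T(n)=%8lld  T(n)/T(n-1)=%.4f\n",
               n, T(n), (double)T(n) / T(n - 1));
    return 0;
}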
Then consider this function for merge sort:
MergeSort(ary)
{
    n = length(ary);
    if (n <= 1)
        return ary;                // base case: nothing to sort
    ary_start = MergeSort(ary[0 : n/2]);
    ary_end   = MergeSort(ary[n/2 : n]);
    return MergeArrays(ary_start, ary_end);
}
This function calls itself on half the input twice, then merges the two halves (using O(n) work). That is, T(n) = T(n/2) + T(n/2) + O(n). To solve recurrence relations of this type, you should use the Master Theorem. By this theorem, this expands to T(n) = O(n log n).
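Here is a quick numeric check of that Master Theorem result (my own sketch, evaluating the recurrence for powers of two with T(1) = 1 assumed):

#include <cmath>
#include <cstdio>

// Evaluate T(n) = 2*T(n/2) + n for n a power of two, with T(1) = 1 assumed.
long long T(long long n) {
    if (n <= 1) return 1;
    return 2 * T(n / 2) + n;
}

int main() {
    for (long long n = 2; n <= (1LL << 20); n *= 2)
        // for these inputs T(n) = n*log2(n) + n, so T(n)/(n*log2(n)) tends to 1
        printf("n=%8lld  T(n)=%10lld  n*log2(n)=%10.0f\n",
               n, T(n), (double)n * log2((double)n));
    return 0;
}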
Finally, consider this function to calculate Fibonacci:
Fib2(n)
{
two = one = 1;
for(i from 2 to n)
{
temp = two + one;
one = two;
two = temp;
}
return two;
}
This function makes no recursive calls, and it iterates O(n) times. Therefore, its recurrence relation is T(n) = O(n). This is the case you asked about: a special case of a recurrence relation with no recurrence, and therefore it is very easy to solve.
To find the running time of an algorithm, we first need to be able to write an expression for that algorithm, where the expression tells us the running time of each step. So you need to walk through each of the steps of the algorithm to find the expression.
For example, suppose we defined a predicate, isSorted, which would take as input an array a and the size, n, of the array and would return true if and only if the array was sorted in increasing order.
bool isSorted(int *a, int n) {
if (n == 1)
return true; // a 1-element array is always sorted
for (int i = 0; i < n-1; i++) {
if (a[i] > a[i+1]) // found two adjacent elements out of order
return false;
}
return true; // everything's in sorted order
}
Clearly, the size of the input here will simply be n, the size of the array. How many steps will be performed in the worst case, for input n?
The first if statement counts as 1 step
The for loop will execute n−1 times in the worst case (assuming the internal test doesn't kick us out), for a total time of n−1 for the loop test and the increment of the index.
Inside the loop, there's another if statement which will be executed once per iteration for a total of n−1 time, at worst.
The last return will be executed once.
So, in the worst case, we'll have done 1 + (n−1) + (n−1) + 1 computations, for a total run time T(n) ≤ 1 + (n−1) + (n−1) + 1 = 2n, and so we have the timing function T(n) = O(n).
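You can confirm this count empirically; here is an instrumented sketch (the steps counter mirrors the tally above and is my addition):

#include <cstdio>

static long long steps = 0;   // counts the operations tallied above

bool isSorted(int *a, int n) {
    steps++;                  // the first if statement
    if (n == 1)
        return true;
    for (int i = 0; i < n - 1; i++) {
        steps++;              // loop test and index increment, once per iteration
        steps++;              // the comparison inside the loop
        if (a[i] > a[i+1])
            return false;
    }
    steps++;                  // the final return
    return true;
}

int main() {
    int a[10] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};   // sorted input: worst case here
    for (int n = 2; n <= 10; n++) {
        steps = 0;
        isSorted(a, n);
        printf("n=%2d  steps=%lld  2n=%d\n", n, steps, 2 * n);
    }
    return 0;
}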
So, in brief, what we have done is:
1. For a parameter n that gives the size of the input, assume that each simple statement executed once takes constant time; for simplicity, assume one unit.
2. Iterative statements such as loops, together with their bodies, take variable time depending on the input.
3. Go step by step through the algorithm and write down the function in terms of n to calculate the time complexity.
For recursive algorithms, you do the same thing, only this time you add the time taken by each recursive call, expressed as a function of the time it takes on its input.
For example, let's rewrite isSorted as a recursive algorithm:
bool isSorted(int *a, int n) {
if (n == 1)
return true;
if (a[n-2] > a[n-1]) // are the last two elements out of order?
return false;
else
return isSorted(a, n-1); // is the initial part of the array sorted?
}
In this case we still walk through the algorithm, counting: 1 step for the first if plus 1 step for the second if, plus the time isSorted will take on an input of size n−1, which will be T(n−1), giving a recurrence relation
T(n) ≤ 1 + 1 + T(n−1) = T(n−1) + O(1)
Which has solution T(n)=O(n), just as with the non-recursive version, as it happens.
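To see why, unroll it: T(n) ≤ 2 + T(n−1) ≤ 4 + T(n−2) ≤ ... ≤ 2(n−1) + T(1), and T(1) is a constant, so T(n) = O(n).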
Simple enough! Practice writing the recurrence relations of various algorithms, keeping in mind how many times each step executes.