Measuring code complexity of a recursive function

According to my professor, this code is Θ(n^n).
Analyzing it line by line, I can't work out myself why its complexity is n^n.
This is the code:
any(v[], n, degree) {
    for (i = 0; i < degree; i++) {
        any(v, n-1, degree)
    }
}
Here is the line-by-line costing I have been working out myself:
any(v[], n, degree) {
    for (i = 0;              // c
         i < degree;         // c(n+1)
         i++) {              // cn
        any(v, n-1, degree)  // n * T(n-1)
    }
}
Adding these up, I get 2c + 2cn + n·T(n-1).

To start, it looks like this would actually run forever, since it never breaks or returns at n == 0. Assuming that the algorithm does return at n == 0 (it would have to, in an if statement that is currently missing):
T(n) = degree * T(n-1), where T(0) = 1 and T(1) = degree
Unrolling gives T(n) = degree * T(n-1) = degree^2 * T(n-2) = ... = degree^n * T(0), so this reduces to O(degree^n).
I'm not really sure where the n^n comes from, unless I did the maths wrong.
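If you want to check the degree^n growth empirically, here is a minimal C++ sketch of my own (the unused v[] parameter dropped, and the missing base case at n == 0 added) that counts how often the function is entered:

#include <iostream>

long long calls = 0;  // how many times any() is entered

void any(int n, int degree) {
    ++calls;
    if (n == 0) return;  // the base case missing from the original
    for (int i = 0; i < degree; i++) {
        any(n - 1, degree);
    }
}

int main() {
    for (int n = 1; n <= 6; n++) {
        calls = 0;
        any(n, 3);  // degree fixed at 3 for the experiment
        std::cout << "n=" << n << " calls=" << calls << "\n";
    }
    // The count grows by roughly a factor of degree each time n
    // increases by 1, i.e. Theta(degree^n), matching T(n) = degree * T(n-1).
}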

Your professor is right; as written, that code would run forever, recursively calling itself with n growing more and more negative. If that is not what you want, you would have to implement a condition to end the recursion, e.g. on the value of n:
any(v[], n, degree) {
    if (n > -1) {
        for (i = 0; i < degree; i++) {
            any(v, n-1, degree)
        }
    }
}

Related

Alternative approach for Rod Cutting Algorithm (Recursive)

In the book Introduction to Algorithms, the naive approach to solving the rod cutting problem can be described by the following recursion:
Let q be the maximum price that can be obtained from a rod of length n.
Let the array price[1..n] store the given prices, where price[i] is the given price for a rod of length i.
rodCut(int n)
{
    initialize q as q = INT_MIN
    for i = 1 to n
        q = max(q, price[i] + rodCut(n-i))
    return q
}
What if I solve it using the approach below:
rodCutv2(int n)
{
    if (n == 0)
        return 0
    initialize q = price[n]
    for i = 1 to n/2
        q = max(q, rodCutv2(i) + rodCutv2(n-i))
    return q
}
Is this approach correct? If yes, why do we generally use the first one? Why is it better?
NOTE:
I am just concerned with the approach to solving this problem. I know that this problem exhibits optimal substructure and overlapping subproblems and can be solved efficiently using dynamic programming.
The problem with the second version is that it's not making use of the price array. There is no base case for the recursion, so it'll never stop. Even if you add a condition to return price[i] when n == 1, it'll always return the result of cutting the rod into pieces of size 1.
Your 2nd approach is absolutely correct, and its time complexity is the same as the 1st one's.
We can also build a dynamic-programming tabulation on the same approach. Here is my solution using recursion:
int rodCut(int price[], int n) {
    if (n <= 0) return 0;
    int ans = price[n-1];
    for (int i = 1; i <= n/2; ++i) {
        ans = max(ans, rodCut(price, i) + rodCut(price, n-i));
    }
    return ans;
}
And the solution using dynamic programming:
int rodCut(int *price, int n) {
    int ans[n+1];
    ans[0] = 0;  // if the length of the rod is zero
    for (int i = 1; i <= n; ++i) {
        int max_value = price[i-1];
        for (int j = 1; j <= i/2; ++j) {
            max_value = max(max_value, ans[j] + ans[i-j]);
        }
        ans[i] = max_value;
    }
    return ans[n];
}
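As a quick sanity check of my own (sample prices taken from CLRS; this assumes one of the two rodCut definitions above is in scope, along with using std::max and #include <algorithm>):

#include <iostream>

int main() {
    int price[] = {1, 5, 8, 9, 10, 17, 17, 20};  // price[i-1] = price of a rod of length i
    // The best cut for n = 4 is two pieces of length 2: 5 + 5 = 10.
    std::cout << rodCut(price, 4) << "\n";  // prints 10
}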
Your algorithm looks almost correct - you need to be a bit careful when n is odd.
However, it's also exponential in time complexity - you make two recursive calls in each call to rodCutv2. The first algorithm uses memoisation (the price array), so avoids computing the same thing multiple times, and so is faster (it's polynomial-time).
Edit: Actually, the first algorithm isn't correct! It never stores values in prices, but I suspect that's just a typo and not intentional.

Recurrence function's runtime

int foo(int n)
{
    if (n == 0)
        return 1;
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += foo(n-1);
    return sum;
}
I have recently been learning Big-O notation.
Could someone give me an idea of how to determine this recursive function's runtime using Big-O notation, and how to present that runtime?
So think about what happens if you run foo(n) for a big n. The sum then consists of n calls to foo(n-1). At the next level of the recursion tree we have foo(n-1), and it in turn calls foo(n-2), this time n-1 times, for each of the n foo(n-1) branches of our tree. We know that the tree has height n, since we have to keep going until foo(n-n). So at every recursion step you turn one instance of foo(n) into n instances of foo(n-1).
I am not certain whether I should reveal the answer, as this seems to be an exercise, but it looks quite obvious: just draw a few levels of the recursion tree and you'll find your answer.
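If drawing the tree feels too abstract, here is an instrumented sketch of my own that counts the calls; watch how the count for n is roughly n times the count for n-1, which is exactly the branching described above:

#include <iostream>

long long calls = 0;  // total number of calls to foo()

int foo(int n) {
    ++calls;
    if (n == 0)
        return 1;
    int sum = 0;
    for (int i = 0; i < n; i++)
        sum += foo(n - 1);
    return sum;
}

int main() {
    long long prev = 1;
    for (int n = 1; n <= 8; n++) {
        calls = 0;
        foo(n);
        std::cout << "n=" << n << " calls=" << calls
                  << " ratio=" << (double)calls / prev << "\n";
        prev = calls;
    }
    // The ratio between consecutive counts tends to n, confirming the
    // recursion tree argument.
}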

Finding the time complexity of an exponential algorithm

Problem: Find the best way to cut a rod of length n.
Each cut has integer length.
Assume that each rod of length i has a price p(i).
Given: a rod of length n, and a list of prices p, which provides the price of each possible integer length between 0 and n.
Find the best set of cuts to get the maximum price.
Any number of cuts may be used, from 0 to n−1.
There is no cost for making a cut.
Below I present a naive algorithm for this problem.
CUT-ROD(p, n)
    if (n == 0)
        return 0
    q = -infinity
    for i = 1 to n
        q = max(q, p[i] + CUT-ROD(p, n-1))
    return q
How can I prove, step by step, that this algorithm is exponential?
I can see that it is exponential, but I'm not able to prove it.
Let's translate the code to C++ for clarity:
int prices[MAXN];  // the price table; MAXN is some upper bound on n, added so this compiles

int cut_rod(int n) {
    if (n == 0) {
        return 0;
    }
    int q = -1;
    int res = cut_rod(n-1);  // compute the recursive result once and reuse it
    for (int i = 0; i < n; i++) {
        q = max(q, prices[i] + res);
    }
    return q;
}
Note: we are caching the result of cut_rod(n-1) to avoid unnecessarily increasing the complexity of the algorithm. Here we can see that cut_rod(n) calls cut_rod(n-1), which calls cut_rod(n-2), and so on down to cut_rod(0). For cut_rod(n), the function iterates over the array n times. Therefore the time complexity of this version is O(n + (n-1) + (n-2) + ... + 1) = O(n(n+1)/2) = O(n^2).
EDIT:
If we use the exact same algorithm as the one in the question, its time complexity is O(n!), since CUT-ROD(p, n) calls CUT-ROD(p, n-1) n times, CUT-ROD(p, n-1) calls CUT-ROD(p, n-2) n-1 times, and so on. Therefore the time complexity is O(n * (n-1) * (n-2) * ... * 1) = O(n!).
I am unsure if this counts as a step-by-step solution, but it can be shown easily by induction/substitution. (This assumes the recurrence the book intends, where the i-th loop iteration calls CUT-ROD(p, n-i), giving T(n) = 1 + Σ_{j=0}^{n-1} T(j).) Assume T(i) = 2^i for all i < n; then we show that it holds for n: T(n) = 1 + Σ_{j=0}^{n-1} 2^j = 1 + (2^n − 1) = 2^n.

How to find a recurrence relation from a recursive algorithm

I know how to find the recurrence relation for simple recursive algorithms.
For example:
QuickSort(A, LB, UB) {
    key = A[LB]
    i = LB
    j = UB + 1
    do {
        do {
            i = i + 1
        } while (A[i] < key)
        do {
            j = j - 1
        } while (A[j] > key)
        if (i < j) {
            swap(A[i] <-> A[j])
        }
    } while (i <= j)
    swap(key <-> A[j])
    QuickSort(A, LB, j-1)
    QuickSort(A, j+1, UB)
}
T(n) = T(n - a) + T(a) + n
In the above recursive algorithm it was quite easy to understand how the input size shrinks after each recursive call. But how do you find the recurrence relation for an algorithm in general, which might be iterative rather than recursive? To make finding recurrence relations easier, I started learning how to convert iterative algorithms into recursive ones.
I found this link: http://refactoring.com/catalog/replaceIterationWithRecursion.html
I used it to convert my linear search algorithm to a recursive one.
LinearSearch(A, x, LB, UB) {
    PTR = LB
    while (A[PTR] != x && PTR <= UB) {
        PTR = PTR + 1
    }
    if (PTR == UB + 1) {
        print("Element does not exist")
    }
    else {
        print("Location " + PTR)
    }
}
got converted to
LinearSearch(A, x, LB, UB) {
    PTR = LB
    print("Location " + search(A, PTR, UB, x))
}
search(A, PTR, UB, x) {
    if (PTR > UB) {
        return -1
    }
    else if (A[PTR] != x) {
        return search(A, PTR+1, UB, x)
    }
    else {
        return PTR
    }
}
This gives the recurrence relation T(n) = T(n-1) + 1, which solves to O(n).
But I was wondering: is this the right approach to finding the recurrence relation for any algorithm?
Also, I don't know how to find the recurrence relation for algorithms where more than one parameter is increasing or decreasing.
For example:
unsigned greatest_common_divisor(const unsigned a, const unsigned b)
{
    if (a > b)
    {
        return greatest_common_divisor(a-b, b);
    }
    else if (b > a)
    {
        return greatest_common_divisor(a, b-a);
    }
    else // a == b
    {
        return a;
    }
}
First of all, algorithms are very flexible, so you should not expect a simple rule that covers all of them.
That said, one thing that I think will help you is to pay more attention to the structure of the input you pass to your algorithm than to the algorithm itself. For example, consider the QuickSort you showed in your post. If you glance at those nested do-whiles, you will probably guess it's O(N^2), when in reality it's O(N). The real answer is easier to find by looking at the inputs: i always increases and j always decreases, and when they finally meet, each of the N indices of the array will have been visited exactly once.
Plus I don't know how to find recurrence relation for algorithms where more than one parameter is increasing or decreasing.
Well, those algorithms are certainly harder to analyse than ones with a single variable. For the Euclidean algorithm you used as an example, the complexity is actually not trivial to figure out, and it involves thinking about greatest common divisors instead of just looking at the source code of the algorithm's implementation.
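As an illustration (a sketch of my own, not part of the original answer): for this subtraction-based variant you can at least bound the worst case by counting steps. gcd(n, 1) performs n - 1 subtractions, so this version is linear in the magnitude of its arguments, unlike the modulo-based Euclidean algorithm, which is logarithmic:

#include <iostream>

unsigned steps = 0;  // counts the subtractions performed

unsigned gcd_sub(unsigned a, unsigned b) {
    if (a > b) { ++steps; return gcd_sub(a - b, b); }
    if (b > a) { ++steps; return gcd_sub(a, b - a); }
    return a;  // a == b
}

int main() {
    gcd_sub(10000, 1);
    std::cout << steps << "\n";  // prints 9999: gcd(n, 1) takes n - 1 steps
}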

What Big-O notation would this fall under?

What Big-O notation would this fall under? I know seqSearch() and removeAt() are of order O(n) (assume they are, either way). I know that without the for loop it'd certainly be O(n), but I get confused about how to calculate what it becomes with a for loop thrown in. I'm not all that great at math... so, would it be O(n^2)?
public void removeAll(DataElement clearElement)
{
    if (length == 0)
        System.err.println("Cannot delete from an empty list.");
    else
    {
        for (int i = 0; i < list.length; i++)
        {
            loc = seqSearch(clearElement);
            if (loc != -1)
            {
                removeAt(loc);
                --i;
            }
        }
    }
}
If removeAt() and seqSearch() are O(n) with respect to the length of the list, then yes, this algorithm is O(n^2). Within the for loop you call seqSearch() every time, possibly followed by a call to removeAt(loc). That means each iteration does either n or 2n operations. Taking the worst case, you have n iterations of 2n operations each, i.e. 2n^2 operations, which is O(n^2).
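To make the accounting concrete, here is a rough C++ mock-up of my own, with hypothetical linear-time stand-ins for seqSearch() and removeAt() (the original Java for loop with --i is replaced by an equivalent while loop):

#include <vector>

// Hypothetical O(n) helpers standing in for the ones in the question.
int seqSearch(const std::vector<int>& list, int x) {
    for (int i = 0; i < (int)list.size(); i++)
        if (list[i] == x) return i;  // linear scan
    return -1;
}

void removeAt(std::vector<int>& list, int loc) {
    list.erase(list.begin() + loc);  // linear shift of the tail
}

void removeAll(std::vector<int>& list, int clearElement) {
    // At most n iterations (each removes one element), each doing one
    // O(n) search and one O(n) removal: n * 2n = 2n^2 in the worst case,
    // which is O(n^2).
    int loc;
    while ((loc = seqSearch(list, clearElement)) != -1)
        removeAt(list, loc);
}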
