Trouble trying to find the time efficiency of a recursive algorithm

I'm learning about the time efficiency of algorithms and have become stuck trying to analyse recursive ones. I currently have an algorithm that traverses a binary search tree and puts each node into an array.
int placeIntoArray(Node root, Node[] a, int i) {
    // in-order traversal: left subtree, this node, right subtree
    if (root.left != null) {
        i = placeIntoArray(root.left, a, i);
    }
    a[i] = root;   // place this node at the next free slot
    i++;
    if (root.right != null) {
        i = placeIntoArray(root.right, a, i);
    }
    return i;      // index of the next free slot
}
If I had to guess I'd say it's in the class O(n), since it just touches each node once and places it into the array, but I'm not sure how to analyse it properly. Any help would be appreciated.

For a tree with n nodes, the running time satisfies T(n) = T(number of elements in root.left) + T(number of elements in root.right) + c, where c is a constant. In the two extreme cases this becomes T(n) = 2T(n/2) + c (completely balanced), which gives T(n) = Theta(n), or T(n) = T(n-1) + c (completely unbalanced), which also gives T(n) = Theta(n). Every other shape falls in between, so T(n) = Theta(n) in general: each node triggers exactly one call doing a constant amount of work, so the total work is proportional to the number of nodes.
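If you want to convince yourself empirically, here is a small harness (a sketch; the Node class and the right-spine tree built in main are assumptions, not part of the question) that counts the recursive calls and confirms there is exactly one per node:

class Node {
    Node left, right;
}

public class TraversalCount {
    static int calls = 0;

    static int placeIntoArray(Node root, Node[] a, int i) {
        calls++;  // one constant-cost visit per node
        if (root.left != null) {
            i = placeIntoArray(root.left, a, i);
        }
        a[i] = root;
        i++;
        if (root.right != null) {
            i = placeIntoArray(root.right, a, i);
        }
        return i;
    }

    public static void main(String[] args) {
        // build a deliberately unbalanced (right-spine) tree of 5 nodes
        Node root = new Node();
        Node cur = root;
        for (int k = 1; k < 5; k++) {
            cur.right = new Node();
            cur = cur.right;
        }
        placeIntoArray(root, new Node[5], 0);
        System.out.println(calls);  // prints 5: one call per node
    }
}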

Related

Recurrence Relation and Time Complexity for finding height of binary tree

I am working on a simple problem of finding the height of a binary tree on HackerRank. My solution below passed all test cases, and after running the code by hand I believe it is an O(n) solution. Most other solutions I found online (I think) say it is O(n) as well, but I am unable to solve the recurrence relation to reach O(n) from either solution.
Assuming the tree is not balanced, below is my solution:
public static int height(Node root) {
    if (root.left == null && root.right == null) {
        return 0;                       // a leaf has height 0
    } else if (root.left == null && root.right != null) {
        return 1 + height(root.right);  // only a right child
    } else if (root.left != null && root.right == null) {
        return 1 + height(root.left);   // only a left child
    } else {
        return 1 + Math.max(height(root.left), height(root.right)); // both children
    }
}
I found that the number of function calls is almost exactly the number of nodes, which suggests it should be O(n). Based on my last case, which I think is the worst case at runtime since height(node) is called on both branches, I tried to derive the following recurrence relation:
Let n be the number of nodes and k be the number of levels.
T(n) = 2 T(n/2)
Base cases: n = 0, n = 1
T(0) = -1
T(1) = T(2^0) = 0
Recursive case:
k = 1: T(2^1) = 2 T(1)
k = 2: T(2^2) = 2 T(2) = 2 * 2 * T(1) = 2^2 T(1)
k = 3: T(2^3) = 2^3 T(1)
...
2^k = n => k = log n => T(2^k) = 2^k * T(1) = 2^(log n) * T(1)
So to me, apparently the time complexity is O(2^(log n)), and I am confused why people say it is O(n). I read this article (Is O(n) greater than O(2^log n)) and I guess it makes sense because O(n) > O(2^(log n)), but I have two questions:
Is my recurrence relation correct, and is the result correct? If it is, why do I still get O(n) in practice (counting the number of times the function is called) instead of O(2^(log n))?
How do you derive a recurrence relation for a recursive function like this? Do you take the worst case (in my case the last condition) and derive the recurrence relation from that?
Since 2^(log n) = n by the definition of the logarithm (with base 2), the two are the same: O(n) and O(2^(log n)) are equivalent classes.
Also, if you need to find the height of the tree repeatedly, you can preprocess the tree and store the height of the subtree rooted at each node; after that O(n) preprocessing phase, the height of the tree is available in O(1), as sketched below.
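A minimal sketch of that preprocessing idea (the height field is an assumption; the HackerRank Node class does not have one):

class Node {
    Node left, right;
    int height;  // cached height of the subtree rooted here
}

// One O(n) bottom-up pass fills in every node's cached height;
// afterwards the root's height (or any node's) is read in O(1).
static int preprocess(Node root) {
    if (root == null) return -1;  // an empty subtree has height -1, so a leaf gets 0
    root.height = 1 + Math.max(preprocess(root.left), preprocess(root.right));
    return root.height;
}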

How do you get the complexity of a sequence alignment algorithm?

// m, n are the lengths of sequences x and y (assumed fields);
// the original compared x[i] == x[j], presumably a typo for x[i] == y[j]
int opt(int i, int j) {
    if (i == m)
        return 2 * (n - j);  // x exhausted: pay the gap cost for the rest of y
    else if (j == n)
        return 2 * (m - i);  // y exhausted: pay the gap cost for the rest of x
    else {
        int penalty = (x[i] == y[j]) ? 0 : 1;           // mismatch penalty
        return Math.min(opt(i + 1, j + 1) + penalty,    // align x[i] with y[j]
               Math.min(opt(i + 1, j) + 2,              // gap in y
                        opt(i, j + 1) + 2));            // gap in x
    }
}
Why is the complexity of this algorithm 3^n?
The task is: Analyze the time complexity of Algorithm opt.
You should specify how you are calling this function in the first place.
Regarding the big-O analysis, you can get it by drawing the recursion tree. Do it for small n samples and you will notice that the height of the tree is n. Notice also that each instance of the function calls the same function 3 times again, so you have a tree that expands by a factor of 3 at every level. Hence your complexity is O(3^n).
Bonus: analogy with Fibonacci
Check the basic (without memoization) recursive version of the Fibonacci algorithm and you will see a similar structure, except that 2 calls are made each time instead of 3; hence its complexity is O(2^n).
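For instance, here is a minimal sketch of that naive Fibonacci:

// Two recursive calls per invocation: the recursion tree branches by a
// factor of 2 at each level and has depth n, giving O(2^n) calls
// (compare with the three-way branching of opt above).
static long fib(int n) {
    if (n <= 1) return n;  // base cases: fib(0) = 0, fib(1) = 1
    return fib(n - 1) + fib(n - 2);
}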

Selection Sort Recurrence Relation

Up front: this is a homework question, but I am having a difficult time understanding recurrence relations. I've scoured the internet for examples and they are very vague to me. I understand that recurrence relations for recursive algorithms don't have one set way of being handled, but I am lost as to how to understand these. Here's the algorithm I have to work with:
void selectionSort(int[] array) {
    sort(array, 0);
}

void sort(int[] array, int i) {
    if (i < array.length - 1) {
        int j = smallest(array, i); // T(n)
        int temp = array[i];
        array[i] = array[j];
        array[j] = temp;
        sort(array, i + 1);         // T(n)
    }
}

int smallest(int[] array, int j) {  // T(n - k)
    if (j == array.length - 1)
        return array.length - 1;
    int k = smallest(array, j + 1);
    return array[j] < array[k] ? j : k;
}
So from what I understand, this is what I'm coming up with: T(n) = T(n-1) + cn + c.
The T(n-1) represents the recursive call in sort, and the cn represents the recursive function smallest, whose cost decreases as n decreases, since it is only called on the part of the array that remains each time. The constant multiplied by n is the time to run the rest of the code in smallest, and the additional constant is the time to run the rest of the code in sort. Is this right? Am I completely off? Am I not explaining it correctly?
Also, the next step is to create a recursion tree out of this, but my equation doesn't have the form T(n) = aT(n/b) + c, which is the form needed for the tree if I understand this right. And I don't see how my recurrence relation would get to n^2 if it is correct. This is my first post too, so I apologize if I did something incorrect here. Thanks for the help!
The easiest way to compute the time complexity is to model the time complexity of each function with a separate recurrence relation.
We can model the time complexity of the function smallest with the recurrence relation S(n) = S(n-1) + O(1), S(1) = O(1). This obviously solves to S(n) = O(n).
We can model the time complexity of the sort function with T(n) = T(n-1) + S(n) + O(1), T(1) = O(1). The S(n) term comes in because we call smallest within the function sort. Because we know that S(n) = O(n), we can write T(n) = T(n-1) + O(n), and writing out the recurrence gives T(n) = O(n) + O(n-1) + ... + O(1) = O(n^2).
So the total running time is O(n^2), as expected.
In the selection sort algorithm, the outer loop runs n - 1 times (n is the length of the array), so n - 1 passes are made, and in each pass the current element is compared with the remaining elements, giving up to n - 1 comparisons:
T(n) = T(n-1) + n - 1
which can be shown to be O(n^2) by solving that relation.
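For reference, here is the same algorithm written iteratively (a sketch, not from either post); the nested loops make the (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparison count explicit:

static void selectionSortIterative(int[] array) {
    for (int i = 0; i < array.length - 1; i++) {      // n - 1 passes
        int min = i;
        for (int j = i + 1; j < array.length; j++) {  // n - 1 - i comparisons
            if (array[j] < array[min]) min = j;
        }
        int temp = array[i];  // move the smallest remaining element into place
        array[i] = array[min];
        array[min] = temp;
    }
}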

What is the difference between O(N) + O(M) and O(N + M)? Is there any?

I'm solving problems for interview practice and I can't seem to work out the time and space complexity of the following problem:
Given two sorted Linked Lists, merge them into a third list in sorted order. Let's assume we are using descending ordering.
One of the answers I came across, which is clearly not the most efficient one, is the following recursive solution:
Node mergeLists(Node head1, Node head2) {
    // base cases: once one list is exhausted, the rest of the other is the result
    if (head1 == null) {
        return head2;
    } else if (head2 == null) {
        return head1;
    }
    // take the smaller head and recurse on the remainder;
    // each call consumes exactly one node from one of the lists
    Node newHead;
    if (head1.data < head2.data) {
        newHead = head1;
        newHead.next = mergeLists(head1.next, head2);
    } else {
        newHead = head2;
        newHead.next = mergeLists(head1, head2.next);
    }
    return newHead;
}
Now, when I was analyzing the complexity of this function, I ran into an issue: I wasn't sure whether it was O(M + N) or O(M) + O(N). I just cannot get an intuitive answer. It seems logical to me that the run-time and space complexities of this function are O(N) + O(M), or O(max(N, M)), since the larger value would drive the asymptotic curve (and the number of recursive calls and stack frames created).
To sum up:
In big-O notation, what is the difference between O(N + M) and O(N) + O(M)? Is there any? If they differ, I would appreciate it if somebody could provide simple examples of both.
O(N) + O(M) describes functions that are bounded by cN + dM for some constants c and d.
O(N + M) describes functions that are bounded by e(N + M) for some constant e.
They are equivalent because:
cN + dM <= (c + d)(N + M), so anything bounded by cN + dM is bounded by e(N + M) with e = c + d (for example, 3N + 5M <= 8(N + M));
and
e(N + M) = eN + eM, which already has the form cN + dM with c = d = e.
(The same squeeze shows O(max(N, M)) is the same class too, since max(N, M) <= N + M <= 2 max(N, M).)
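To see the bound concretely on the question's code, here is a hypothetical harness (the Node class and the fromArray helper are assumptions): every call either hits a base case or consumes exactly one node, so there are at most N + M + 1 calls in total.

class Node {
    int data;
    Node next;
    Node(int data) { this.data = data; }
}

public class MergeCount {
    static int calls = 0;

    static Node mergeLists(Node head1, Node head2) {
        calls++;  // count every invocation
        if (head1 == null) return head2;
        if (head2 == null) return head1;
        Node newHead;
        if (head1.data < head2.data) {
            newHead = head1;
            newHead.next = mergeLists(head1.next, head2);
        } else {
            newHead = head2;
            newHead.next = mergeLists(head1, head2.next);
        }
        return newHead;
    }

    // build an ascending list from the given values
    static Node fromArray(int... values) {
        Node head = null;
        for (int i = values.length - 1; i >= 0; i--) {
            Node node = new Node(values[i]);
            node.next = head;
            head = node;
        }
        return head;
    }

    public static void main(String[] args) {
        Node a = fromArray(1, 3, 5);     // N = 3
        Node b = fromArray(2, 4, 6, 8);  // M = 4
        mergeLists(a, b);
        System.out.println(calls);       // prints 6 here, never more than N + M + 1
    }
}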

Design and Analysis of Algorithms: Recurrence Relation

I am having real trouble solving this recurrence relation. Can anyone provide a solution?
The relation:
T(n) = Sum(i = 1 to n-1) T(i) + 1
What is the big-O order?
Taking the first-order difference gets rid of the summation:
T(n) - T(n-1) = (Sum(1 <= i < n: T(i)) + 1) - (Sum(1 <= i < n-1: T(i)) + 1) = T(n-1)
Hence
T(n) = 2 T(n-1)
which unrolls to T(n) = 2^(n-1) T(1), i.e. T(n) = Theta(2^n).
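A quick numeric check (a sketch, assuming T(1) = 1 from the empty sum) that builds T(n) straight from the summation definition and confirms the doubling:

static void checkDoubling() {
    long[] t = new long[16];
    t[1] = 1;  // empty sum, plus 1
    for (int n = 2; n < t.length; n++) {
        long sum = 1;  // the "+ 1" term
        for (int i = 1; i < n; i++) sum += t[i];
        t[n] = sum;
        if (t[n] != 2 * t[n - 1]) throw new AssertionError("doubling failed");
    }
    System.out.println(t[10]);  // prints 512 = 2^9, i.e. T(n) = 2^(n-1)
}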
A recurrence relation describes a sequence of numbers: early terms are specified explicitly, and later terms are expressed as a function of their predecessors. As a trivial example, the running time of this function satisfies the recurrence T(n) = T(n-1) + 1, which describes the sequence 1, 2, 3, etc.:
void Sample(int n) {
    if (n > 0) {
        Sample(n - 1);
        System.out.println("here");
    } else {
        return;  // base case: constant work
    }
}

Sample(3);
Here, the first term is defined to be 1 and each subsequent term is one more than its predecessor. In order to analyze a recurrence relation, we need to know the execution time of each line of code. In the above example:
void Sample(int n) {
    if (n > 0) {
        Sample(n - 1);               // T(n-1)
        System.out.println("here");  // 1
    } else {
        return;                      // 1
    }
}
We define T(n) as:
T(n) = T(n-1) + 1, with base case T(0) = 1.
To solve T(n) = T(n-1) + 1: if we know T(n-1), we can substitute it and get the answer. Substituting repeatedly:
T(n-1) = T(n-1-1) + 1 = T(n-2) + 1
// and in turn
T(n) = [T(n-2) + 1] + 1 = T(n-2) + 2
// and again
T(n) = [T(n-3) + 2] + 1 = T(n-3) + 3
.
.
.
// after repeating this k times
T(n) = T(n-k) + k
As you can see, it increases by 1 at each step; after k steps, T(n) = T(n-k) + k. Now we need to know the smallest value, i.e. the point where the recursion stops (there must always be such a point). In this problem, zero is the end of the recursion stack. Since we assume it takes k steps to reach zero, the solution is:
// if n-k is the final step
n - k = 0 => k = n
// replace k with n
T(n) = T(n-n) + n => T(n) = T(0) + n
// we know
T(0) = 1
T(n) = 1 + n => O(n)
The big-O is n: in the worst case, this recursive algorithm makes n recursive calls.
