Say we have a method that takes n steps per call, but that in the worst case calls itself n times. Would the Big O then be n*n? In other words, is a recursive call generally n^2, similar to two nested for loops?
Now take a binary-recursive algorithm such as the naive recursive Fibonacci. One call of that algorithm takes n steps, but it can recurse up to n levels deep. Would the run-time of that algorithm be 2^n?
Let f() be a function which calls itself n times. Consider the following C code for f():
void f(int n)
{
    int i;
    if (n == 0)
    {
        printf("\n Recursion Stopped");
    }
    else
    {
        for (i = 0; i < n; i++)   /* n iterations per call, matching the counts below */
        {
            printf("\n Hello");
        }
        f(n - 1);
    }
}
For n = 5, the message Hello will be printed 15 times.
For n = 10, the message Hello will be printed 55 times.
In general the message will be printed n*(n+1)/2 times.
Thus the complexity of the function f() is O(n^2). Remember: f() does O(n) work per call and is recursively called n times. The complexity of such a function equals that of the following nested loops, provided the inner loop contains only constant-time operations such as addition or subtraction:
for (i = 0; i <= n; i++)
{
    for (j = i; j <= n; j++)
    {
        /* some constant-time operation */
    }
}
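To sanity-check the n*(n+1)/2 count, here is a small Java sketch (mine, not from the original answer) that mirrors the C function f() above but counts the prints instead of producing output:

```java
public class HelloCount {
    // Mirrors f(n): an O(n) loop per call, followed by a recursive call on n - 1.
    public static int count(int n) {
        if (n == 0) return 0;          // base case: nothing printed
        int printed = 0;
        for (int i = 0; i < n; i++) {  // the O(n) loop body
            printed++;
        }
        return printed + count(n - 1); // the recursive call f(n - 1)
    }

    public static void main(String[] args) {
        System.out.println(count(5));  // n(n+1)/2 for n = 5
        System.out.println(count(10)); // n(n+1)/2 for n = 10
    }
}
```

Running it confirms 15 prints for n = 5 and 55 for n = 10, i.e. the triangular numbers behind the O(n^2) bound.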
For a binary recursion the time complexity is O(2^n).
A binary recursive function calls itself twice. The following function g() is an example of binary recursion (a Fibonacci-style binary recursion):
void g(int n)
{
    if (n <= 1)   /* the base case must cover n == 1, or g(1) would recurse on negative n forever */
    {
        printf("\n Recursion Stopped");
    }
    else
    {
        printf("\n Hello");
        g(n - 1);
        g(n - 2);
    }
}
The recurrence for the running time is T(n) = T(n-1) + T(n-2) + O(1), for n > 1.
Solving it gives an upper bound of O(2^n).
More precisely, g() is Θ(φ^n),
where φ is the golden ratio, φ = (1 + √5)/2.
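One way to see the exponential growth concretely is to count the total number of calls g(n) makes; the count obeys C(n) = C(n-1) + C(n-2) + 1, which grows like φ^n. A Java sketch of that counter (mine, assuming the base case covers n <= 1):

```java
public class BinaryRecursionCalls {
    // Total number of invocations made by g(n): 1 for this call,
    // plus the calls made by the two recursive branches.
    public static long calls(int n) {
        if (n <= 1) return 1;
        return 1 + calls(n - 1) + calls(n - 2);
    }

    public static void main(String[] args) {
        // Successive ratios approach the golden ratio phi ~ 1.618.
        for (int n = 2; n <= 20; n += 2) {
            System.out.println(n + ": " + calls(n));
        }
    }
}
```

For example, g(5) makes 15 calls and g(10) makes 177, and the ratio between consecutive counts tends to φ.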
Related
Question 1
public void guessWhat1(int N){
    for (int i = N; i > 0; i = i / 2){
        for (int j = 0; j < i * 2; j += 1){
            System.out.println("Hello World");
        }
    }
}
The first loop will run log(N) times.
The second loop will run log(N) times.
The upper bound is O(log^2(N)). What would be Big Θ?
Question 2
public void guessWhat2(int N) {
    int i = 1, s = 1;
    while (s <= N) {
        i += 1;
        s = s + i;
    }
}
The upper bound for this is O(N). I am not quite sure about the Big Θ.
It would be great if someone could clarify these. Thank you.
Let's get clear on the definitions of the notations first.
Big O: it denotes an upper bound on the running time of the algorithm.
Big Theta: it denotes a tight bound: the running time is bounded above and below by the same function, up to constant factors. (It is not an "average" bound; average-case analysis is a separate notion.)
For your first question
public void guessWhat1(int N){
    for (int i = N; i > 0; i = i / 2){
        for (int j = 0; j < i * 2; j += 1){
            System.out.println("Hello World");
        }
    }
}
For i = N the inner loop runs 2N times, for i = N/2 it runs N times, for i = N/4 it runs N/2 times, and so on.
So the total work is
2N + N + N/2 + N/4 + ... + 1
= 2N (1 + 1/2 + 1/4 + ...)
< 4N.
So the complexity is O(N). It is also Θ(N), since the loops take the same time for every input of size N.
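The geometric-series argument can be checked directly by summing the inner-loop iterations per value of i; this Java sketch (mine) does exactly that and compares against the 4N bound:

```java
public class GuessWhat1Count {
    // Sums the inner-loop iterations of guessWhat1(N): for each i the
    // inner loop runs 2*i times, and i is halved each outer iteration.
    public static long iterations(int N) {
        long total = 0;
        for (int i = N; i > 0; i = i / 2) {
            total += 2L * i;
        }
        return total;
    }

    public static void main(String[] args) {
        for (int N : new int[]{10, 100, 1000, 1000000}) {
            System.out.println(N + ": " + iterations(N) + " (bound 4N = " + 4L * N + ")");
        }
    }
}
```

For every N the total stays below 4N, confirming the linear bound.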
For your second question
public void guessWhat2(int N) {
    int i = 1, s = 1;
    while (s <= N) {
        i += 1;
        s = s + i;
    }
}
The while loop runs O(√N) times: after k iterations, s = 1 + 2 + ... + (k + 1) = (k + 1)(k + 2)/2, so the loop stops once k ≈ √(2N). As above, the Θ bound matches the O bound here, so it is Θ(√N).
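Instrumenting the loop with a counter makes the √N behavior visible; this Java sketch (mine) counts the iterations and prints √(2N) next to them for comparison:

```java
public class GuessWhat2Count {
    // Counts how many times the body of guessWhat2's while loop executes.
    // s accumulates 1 + 2 + 3 + ..., so it passes N after about sqrt(2N) steps.
    public static int iterations(int N) {
        int i = 1, s = 1, count = 0;
        while (s <= N) {
            i += 1;
            s = s + i;
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        for (int N : new int[]{100, 10000, 1000000}) {
            System.out.println(N + ": " + iterations(N)
                + " iterations, sqrt(2N) ~ " + Math.round(Math.sqrt(2.0 * N)));
        }
    }
}
```

For N = 100 the loop runs 13 times while √200 ≈ 14.1; the gap stays within a constant as N grows.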
Big Θ differs from big O when different inputs of the same size take different amounts of time. Take insertion sort (https://en.wikipedia.org/wiki/Insertion_sort), where N is the size of the input array: if the array is already sorted it takes linear time, but if it is reverse-sorted it takes N^2 time.
So for insertion sort the time complexity is O(N^2),
while the best case is Θ(N) and the worst case is Θ(N^2).
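The best-case/worst-case gap is easy to observe by counting the element shifts insertion sort performs; a self-contained Java sketch (mine, a standard insertion sort with a counter added):

```java
public class InsertionSortCost {
    // Insertion sort that returns the number of element shifts performed.
    // Sorted input: 0 shifts (linear scan only). Reverse-sorted input:
    // every element shifts past all previous ones, ~N^2/2 shifts.
    public static long sort(int[] a) {
        long shifts = 0;
        for (int i = 1; i < a.length; i++) {
            int key = a[i];
            int j = i - 1;
            while (j >= 0 && a[j] > key) {
                a[j + 1] = a[j];  // shift larger element right
                j--;
                shifts++;
            }
            a[j + 1] = key;
        }
        return shifts;
    }

    public static void main(String[] args) {
        int n = 1000;
        int[] sorted = new int[n], reversed = new int[n];
        for (int i = 0; i < n; i++) { sorted[i] = i; reversed[i] = n - i; }
        System.out.println("sorted input:   " + sort(sorted) + " shifts");
        System.out.println("reversed input: " + sort(reversed) + " shifts");
    }
}
```

Sorted input produces 0 shifts (Θ(N) overall), reverse-sorted input produces N(N-1)/2 shifts (Θ(N^2)).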
Can someone explain how to calculate the worst-case time complexity of h3 here,
given this code:
int g3(int n) {
    if (n <= 1)
        return 2;
    int goo = g3(n / 2);
    return goo * goo; // I have trouble with this line
}

int h3(int n) {
    return g3(g3(n)); // and with this one too
}
I tried to calculate the complexity and got O(n log n), but apparently that's wrong.
Is there a quick, systematic method to solve these kinds of problems correctly?
(I usually use the recursion-tree method to calculate time complexity.)
g3 has O(log n) complexity: n is halved on every call, so the recursion depth is logarithmic. For h3, though, the running time depends not only on g3's cost but on the size of its return value. Each level squares the result, starting from 2, so for n a power of two g3(n) returns 2^n. Hence h3(n) = g3(g3(n)) spends O(log n) computing g3(n) ≈ 2^n, and then O(log 2^n) = O(n) evaluating g3 on that value. The worst-case complexity of h3 is therefore Θ(n).
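To see both effects at once (how large g3's result is, and how many calls the composition makes), here is a Java sketch of the code above with a call counter added (the counter and class name are mine; the returned value overflows long for the outer call, but we only care about the call count there):

```java
public class G3Analysis {
    static long calls = 0;  // counts recursive invocations of g3

    // Same structure as the g3 in the question, with a counter added.
    static long g3(long n) {
        calls++;
        if (n <= 1) return 2;
        long goo = g3(n / 2);
        return goo * goo;   // squares the result at every level
    }

    public static void main(String[] args) {
        calls = 0;
        long v = g3(16);    // depth ~ log2(16), result 2^16
        System.out.println("g3(16) = " + v + " in " + calls + " calls");

        calls = 0;
        g3(g3(16));         // outer call runs on 2^16, costing ~16 more calls
        System.out.println("h3(16) used " + calls + " calls total");
    }
}
```

g3(16) returns 65536 = 2^16 using only 5 calls, but h3(16) = g3(g3(16)) needs 22 calls, because the outer g3 must halve a number of size 2^16 all the way down: the cost is linear in n, not logarithmic.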
I've computed the complexity of the algorithm below as O(m * n):
for i = 0 to m
    for j = 0 to n
        // process of O(1)
Complexity: O(m * n)
This is a simple example of O(m * n). But I'm not able to figure out how O(m + n) comes about. Any sample example?
O(m+n) means O(max(m,n)). A code example:
for i = 0 to max(m, n)
    // process
The time complexity of this example is linear to the maximum of m and n.
for i = 0 to m
    // process of O(1)
for i = 0 to n
    // process of O(1)
The time complexity of this procedure is O(m + n).
You often get O(m+n) complexity for graph algorithms. It is the complexity for example of a simple graph traversal such as BFS or DFS. Then n = |V| stands for the number of vertices, m = |E| for the number of edges, where the graph is G=(V,E).
The Knuth-Morris-Pratt string-searching algorithm is an example.
http://en.wikipedia.org/wiki/Knuth%E2%80%93Morris%E2%80%93Pratt_algorithm#Efficiency_of_the_KMP_algorithm
The string you're looking for (the needle or the pattern) is length m and the text you're searching through is length n. There is preprocessing done on the pattern which is O(m) and then the search, with the preprocessed data, is O(n), giving O(m + n).
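As a concrete illustration of the graph-traversal case, here is a small BFS sketch (mine, a standard adjacency-list BFS with an operation counter) showing why the work is O(|V| + |E|): each vertex is dequeued once and each edge is scanned once:

```java
import java.util.ArrayDeque;
import java.util.List;

public class BfsCost {
    // BFS over a directed adjacency list, counting basic operations:
    // one per vertex dequeued plus one per edge scanned -> V + E total.
    public static long bfs(List<List<Integer>> adj, int start) {
        boolean[] seen = new boolean[adj.size()];
        ArrayDeque<Integer> queue = new ArrayDeque<>();
        long ops = 0;
        seen[start] = true;
        queue.add(start);
        while (!queue.isEmpty()) {
            int u = queue.poll();
            ops++;                       // each vertex is dequeued exactly once
            for (int v : adj.get(u)) {
                ops++;                   // each edge is scanned exactly once
                if (!seen[v]) {
                    seen[v] = true;
                    queue.add(v);
                }
            }
        }
        return ops;
    }
}
```

On a connected directed graph with V vertices and E edges reachable from the start, the counter comes out to exactly V + E, which is where the additive O(m + n) bound comes from.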
The example in the question is a nested for loop. When you have nested loops over two different inputs m and n (both considered very large), the complexity is multiplicative: the outer loop contributes O(m), the inner loop O(n), and because they are nested you write O(m) * O(n) = O(m * n).
static void MultiplicativeComplexity(int[] arr1, int[] arr2)
{
    int i = 0;
    while (i < arr1.Length)
    {
        Console.WriteLine(arr1[i]);
        int j = 0; // reset j each pass so the inner loop really runs arr2.Length times
        while (j < arr2.Length)
        {
            Console.WriteLine(arr2[j]);
            j++;
        }
        i++;
    }
}
Similarly, when you have two loops that are not nested, over two different inputs m and n (both considered very large), the complexity is additive.
The first loop has complexity O(m) and the second O(n); since they run one after the other, you write O(m) + O(n) = O(m + n).
static void AdditiveComplexity(int[] arr1, int[] arr2)
{
    int i = 0;
    int j = 0;
    while (i < arr1.Length)
    {
        Console.WriteLine(arr1[i]);
        i++;
    }
    while (j < arr2.Length)
    {
        Console.WriteLine(arr2[j]);
        j++;
    }
}
Note: the code above uses int arrays purely for illustration. Also, it doesn't matter for the complexity whether you use a while loop or a for loop.
Hope this helps.
Consider the following Java method:
public static void f(int n) {
    if (n <= 1) {
        System.out.print(n);
        return;
    } else {
        f(n / 2);
        System.out.print(n);
        f(n / 2);
    }
} // end of method
Question 3. Let S(n) denote the space complexity of f(n). Which of the
following statements is correct?
A: S(n) = Θ(2^n)
B: S(n) = Θ(n)
C: S(n) = Θ(log n) <- Correct answer, anyone know why?
D: none of the above
Whenever the function calls itself recursively, all local variables remain on the stack and a new set is pushed for the new call.
This means that what matters is how many calls are active at once, in other words the maximum depth of the recursion.
It's clear that this depth is log n, because the successive arguments are n, n/2, n/4, ..., 1.
Each call needs stack space for only a constant number of local variables, so the overall space complexity is O(log n).
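The "maximum depth, not total calls" point can be verified by tracking the depth explicitly; a Java sketch (mine) with the printing replaced by depth bookkeeping:

```java
public class RecursionDepth {
    static int depth = 0, maxDepth = 0;

    // Same call structure as f(n) above: two recursive calls on n / 2.
    static void f(int n) {
        depth++;                                // entering a call
        maxDepth = Math.max(maxDepth, depth);
        if (n > 1) {
            f(n / 2);
            f(n / 2);
        }
        depth--;                                // leaving: its stack frame is freed
    }

    public static void main(String[] args) {
        for (int n : new int[]{8, 64, 1024}) {
            depth = 0; maxDepth = 0;
            f(n);
            System.out.println("n=" + n + ": max depth " + maxDepth);
        }
    }
}
```

Even though the total number of calls is about 2n, at most log2(n) + 1 frames are ever on the stack at once, which is why S(n) = Θ(log n) while the running time is Θ(n).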
What is the time and space complexity of:
int superFactorial4(int n, int m)
{
    if (n <= 1)
    {
        if (m <= 1)
            return 1;
        else
            n = m -= 1;
    }
    return n * superFactorial4(n - 1, m);
}
It runs recursively, decreasing n by 1 until n reaches 1; then it either resets n to m - 1 (decreasing m by 1 as well) or returns 1 once m reaches 1.
I think the complexity depends on both n and m, so maybe it's O(n*m).
Actually, it looks closer to O(n + m^2) to me; n is only used for the first "cycle".
Also, in any language that doesn't do tail-call optimization the space complexity is likely to be "a stack overflow" for large inputs. In languages that do support the optimization, the space complexity is more like O(1).
The time complexity is O(n + m^2); the space complexity is the same.
Reasoning: with a fixed value of m, the function makes n recursive calls to itself, each doing constant work, so the cost of a phase with fixed m is n. When n reaches 1, both n and m become m - 1. So the next fixed-m phase takes m - 1 steps, the next m - 2, and so on, giving the sum (m-1) + (m-2) + ... + 1, which is O(m^2).
The space complexity is the same, because each recursive call takes constant stack space and the function only returns at the very end; there is no tail-call optimization here.
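The phase structure is easy to confirm with a call counter; a Java sketch of the function above with bookkeeping added (the counter and class name are mine):

```java
public class SuperFactorialCost {
    static long calls = 0;  // total number of recursive invocations

    // Same logic as superFactorial4 above, with a call counter.
    static long superFactorial4(long n, long m) {
        calls++;
        if (n <= 1) {
            if (m <= 1) return 1;
            else n = m -= 1;  // start the next phase with n = m - 1
        }
        return n * superFactorial4(n - 1, m);
    }

    public static void main(String[] args) {
        calls = 0;
        long v = superFactorial4(5, 3);
        System.out.println("value=" + v + " calls=" + calls);
    }
}
```

For (n, m) = (5, 3) the call sequence is (5,3) → (4,3) → (3,3) → (2,3) → (1,3) → (1,2) → (0,1): the first phase contributes n calls and the later phases contribute the (m-1) + (m-2) + ... tail, matching the O(n + m^2) count.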
The time complexity of a Factorial function using recursion
pseudo code:
int fact(int n)
{
    if (n == 0)
    {
        return 1;
    }
    else if (n == 1)
    {
        return 1;
    }
    else
    {
        return n * fact(n - 1);
    }
}
time complexity:
Let T(n) be the number of steps taken to compute fact(n).
Each call does a constant amount of work, say c (a comparison and one multiplication), and makes a single recursive call on n - 1, so
T(n) = T(n-1) + c
     = T(n-2) + 2c
     = ...
     = T(1) + (n-1)c
Since T(1) = 1, this gives T(n) = O(n).
Note the common pitfall: the recurrence F(n) = n * F(n-1) describes the value fact returns, not the number of steps it takes. The result n! grows factorially, but the chain of calls has length n, so the time complexity of factorial using recursion is
T(n) = O(n)
(assuming fixed-size integers; with arbitrary-precision arithmetic the multiplications themselves get more expensive).
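Counting the actual calls makes the linear bound concrete; a Java sketch (mine) of the factorial above with a call counter:

```java
public class FactorialCost {
    static int calls = 0;  // number of recursive invocations

    // Recursive factorial: one call per value of n down to the base case.
    static long fact(int n) {
        calls++;
        if (n <= 1) return 1;
        return n * fact(n - 1);
    }

    public static void main(String[] args) {
        calls = 0;
        long v = fact(10);
        System.out.println("10! = " + v + " computed in " + calls + " calls");
    }
}
```

fact(10) returns 3628800 (a factorially large value) using only 10 calls: the value grows like n!, the work like n.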