Consider the following C function:
double foo(int n) {
    int i;
    double sum;
    if (n == 0)
        return 1.0;
    else {
        sum = 0.0;
        for (i = 0; i < n; i++)
            sum += foo(i);
        return sum;
    }
}
The space complexity of the above function is:
(a) O(1) (b) O(n) (c) O(n!) (d) O(n^n)
What I've done is calculate the recurrence relation for the above code, but I'm still not able to solve it. I know this is not a homework site, but any help would be appreciated.
This is my recurrence.
T(n) = T(n-1) + T(n-2) + T(n-3) + ... + T(1) + S
where S is some constant.
That would depend on whether you're talking about stack or heap space complexity.
For the heap, it's O(1) (or O(0), if you like), since you're using no heap memory aside from the basic system/program overhead.
For the stack, it's O(n), because the recursion gets up to n levels deep.
The deepest point is:
foo(n)
  foo(n - 1)
    foo(n - 2)
      ...
        foo(0)
Space complexity describes how much space your program needs. Since foo does not declare arrays, each level requires O(1) space. Now all you need to do is to figure out how many nested levels can be active at the most at any given time.
Edit: ...so much for letting you figure out the solution for yourself :)
You don't explain how you derived your recurrence relation. I would do it like this:
If n == 0, then foo uses constant space (there is no recursion).
If n > 0, then foo recurses once for each i from 0 to n-1 (inclusive). Each recursive call uses constant space (for the call itself) plus T(i) space. But these calls occur one after the other; the space used by each call is released before the next call begins. Therefore the space costs should not be added up; you simply take the maximum of them, which is T(n-1), since T is non-decreasing.
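Spelled out (writing c for the constant per-call cost, which plays the role of your S), that gives the recurrence
T(0) = c
T(n) = c + max(T(0), T(1), ..., T(n-1)) = c + T(n-1) for n > 0
which unrolls to T(n) = (n+1)*c, i.e. O(n).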
The space complexity would be O(n). As you have mentioned, it might seem like O(n*n), but one should remember that once the call for, say, i=1 in the loop is done, the stack space it used is freed. So you only have to consider the worst case, when i=n-1; that is when the maximum number of recursive function calls is on the stack simultaneously.
The following code reverses an array. What is its runtime?
My heart says it is O(n/2), but my friend says O(n). Which is correct? Please answer with reasons. Thank you so much.
void reverse(int[] array) {
    for (int i = 0; i < array.length / 2; i++) {
        int other = array.length - i - 1;
        int temp = array[i];
        array[i] = array[other];
        array[other] = temp;
    }
}
Big-O complexity captures how the run-time scales with n as n gets arbitrarily large; it isn't a direct measure of performance. f(n) = 1000n and f(n) = n/128 + 10^100 are both O(n), because they both scale linearly with n, even though the first grows much more quickly than the second, and the second is prohibitively slow for all n because of its huge constant term. Nonetheless, they are in the same complexity class. For these sorts of reasons, if you want to compare the actual performance of algorithms, or characterize the performance of one particular algorithm (rather than how its performance scales with n), asymptotic complexity is not the best tool. If you want to measure performance, count the exact number of operations performed by the algorithm, or better yet, provide a representative set of inputs and just measure the execution time on those inputs.
As for the particular problem: yes, the for loop runs n/2 times, but you also do some constant number of operations, c, in each of those iterations (subtractions, array accesses, variable assignments, the conditional check on i). Maybe c = 10; it's not really important to count it precisely to determine the complexity class, just to know that it's constant. The run-time is then f(n) = c*n/2, which is O(n): the fact that you only do n/2 loop iterations doesn't change the complexity class.
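If you do want raw numbers rather than a complexity class, a minimal timing sketch could look like the following (the reverse method is the one from the question; the sizes and single-shot timing are illustrative only, since a proper benchmark needs warm-up and repetition):

public class ReverseTiming {
    static void reverse(int[] array) {
        for (int i = 0; i < array.length / 2; i++) {
            int other = array.length - i - 1;
            int temp = array[i];
            array[i] = array[other];
            array[other] = temp;
        }
    }

    public static void main(String[] args) {
        // Doubling n should roughly double the elapsed time for an O(n) algorithm.
        for (int n : new int[] {1_000_000, 2_000_000, 4_000_000}) {
            int[] a = new int[n];
            long start = System.nanoTime();
            reverse(a);
            System.out.println("n = " + n + ": " + (System.nanoTime() - start) / 1000 + " us");
        }
    }
}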
Most recursive functions I have seen asked about (e.g. Fibonacci or Hanoi) have had O(1) base cases, but what would the time complexity be if the base case wasn't O(1) but O(n) instead?
For example, a recursive Fibonacci with O(n) base case:
class fibonacci {
    static int fib(int n) {
        if (n <= 1) {
            for (int i = 0; i < n; i++) {
                // something
            }
            return n;
        }
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        int n = 9;
        System.out.println(fib(n));
    }
}
The base case for the function that you’ve written here actually still has time complexity O(1). The reason for this is that if the base case triggers here, then n ≤ 1, so the for loop here will run at most once.
Because base cases trigger when the input size is small, it's comparatively rare to see a base case whose runtime is, say, O(n) where n is the size of the original input to the algorithm. That would mean the cost of the base case is tied to the original input size rather than to the small input on which it actually triggers, which can happen but is somewhat unusual.
A more common occurrence - albeit one I think is still pretty uncommon - would be for a recursive function to have two different parameters to it (say, n and k), where the recursion reduces n but leaves k unmodified. For example, imagine taking the code you have here and replacing the for loop on n in the base case with a for loop on k in the base case. What happens then?
This turns out to be an interesting question. In the case of this particular problem, it means that the total work done will be given by O(k) times the number of base cases triggered, plus O(1) times the number of non-base-case recursive calls. For the Fibonacci recursion, the number of base cases triggered when computing F(n) is F(n+1), and there are F(n+1) - 1 non-base-case calls, so the overall runtime would be Θ(k · F(n+1) + F(n+1)) = Θ(k · φ^n). For the Towers of Hanoi, you'd similarly see a scaling effect where the overall runtime would be Θ(k · 2^n). But for other recursive functions the runtime might vary in different ways, depending on how those functions are structured.
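To make that concrete, a sketch of such a two-parameter variant might look like this, as a drop-in replacement for the fib method above (my illustration, not code from the question):

static int fib(int n, int k) {
    if (n <= 1) {
        // O(k) work in the base case; note that k is never reduced by the recursion
        for (int i = 0; i < k; i++) {
            // something
        }
        return n;
    }
    return fib(n - 1, k) + fib(n - 2, k);
}

Each of the Θ(φ^n) base cases now pays for the O(k) loop, which is where the k · φ^n term comes from.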
Hope this helps!
This algorithm essentially finds the stars (*) inside a given binary string and replaces each with 0 and also with 1 to output all the different combinations of the binary string.
I originally thought this algorithm was O(2^n), but it seems to me that that only takes into account the number of stars (*) in the string. What about the length of the string? If there are no stars in the given string, the algorithm should technically still be linear, because the number of recursive calls depends on the string length, but my original O(2^n) does not account for that, as it would become O(1) if n = 0.
How should I go about finding out its time and space complexity? Thanks.
Code:
static void RevealStr(StringBuilder str, int i) {
    // base case: prints each possibility when reached
    if (str.length() == i) {
        System.out.println(str);
        return;
    }
    // recursive step if wild card (*) found
    if (str.charAt(i) == '*') {
        // exploring permutations for 0 and 1 in the stack frame
        for (char ch = '0'; ch <= '1'; ch++) {
            // making the change to our string
            str.setCharAt(i, ch);
            // recur to reach permutations
            RevealStr(str, i + 1);
            // undo changes to backtrack
            str.setCharAt(i, '*');
        }
        return;
    } else {
        // if no wild card found, recur on the next char
        RevealStr(str, i + 1);
    }
}
Edit: I am currently thinking of something like O(2^s + l), where s is the number of stars and l is the length of the string.
The idea of Big-O notation is to give an upper bound, i.e. if an algorithm is O(N^4) runtime, it simply means that the algorithm can't do any worse than that.
Say there is an algorithm that is O(N); we can still say it is O(N^2), since O(N) never does any worse than O(N^2). But in a computational sense we want the estimate to be as close and tight as possible, since that gives a better idea of how well an algorithm actually performs.
In your current example, both O(2^N) and O(2^L), where N is the length of the string and L is the number of * characters, are valid upper bounds. But since O(2^L) captures the algorithm's dependence on the presence of * characters, O(2^L) is the better and tighter estimate (as L <= N).
Update: The space complexity is implementation-dependent. In your current implementation, assuming the StringBuilder is passed by reference and no copies of the string are made in each recursive call, the space complexity is indeed O(N), i.e. the size of the recursive call stack. If it were passed by value and copied onto the stack before every call, the overall complexity would be O(N * N), i.e. O(max_number_of_recursive_calls * size_of_string), since each copy operation costs O(size_of_string).
To resolve this we can do a manual run:
Base: n=1
RevealStr("*", 1)
It meets the criterion of the first if; we only run this once, for output *.
Next: n=2
RevealStr("**", 1)
RevealStr("0*", 2)
RevealStr("00", 2)
RevealStr("01", 2)
RevealStr("1*", 2)
RevealStr("10", 2)
RevealStr("11", 2)
Next: n=3
RevealStr("***", 1)
RevealStr("0**", 2)
RevealStr("00*", 2)
RevealStr("000", 3)
RevealStr("001", 3)
RevealStr("01*", 2)
RevealStr("010", 3)
RevealStr("011", 3)
RevealStr("1**", 2)
RevealStr("10*", 2)
RevealStr("100", 3)
RevealStr("101", 3)
RevealStr("11*", 2)
RevealStr("110", 3)
RevealStr("111", 3)
You can see that with n=2, RevealStr was called 7 times, while with n=3 it was called 15 times. This follows the function F(n) = 2^(n+1) - 1.
For the worst-case scenario, the complexity seems to be O(2^n), where n is the number of stars.
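You can double-check that count empirically by wrapping the function with a call counter, along these lines (a hypothetical harness I've added, not part of the original code):

class RevealStrCount {
    static int calls = 0;

    static void revealStrCounted(StringBuilder str, int i) {
        calls++;
        if (str.length() == i) return;   // base case; printing omitted to keep output short
        if (str.charAt(i) == '*') {
            for (char ch = '0'; ch <= '1'; ch++) {
                str.setCharAt(i, ch);
                revealStrCounted(str, i + 1);
                str.setCharAt(i, '*');
            }
        } else {
            revealStrCounted(str, i + 1);
        }
    }

    public static void main(String[] args) {
        for (String s : new String[] {"*", "**", "***"}) {
            calls = 0;
            revealStrCounted(new StringBuilder(s), 0);
            System.out.println(s + " -> " + calls + " calls");   // 3, 7, 15: matches 2^(n+1) - 1
        }
    }
}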
Example 1: Given an input array A with n elements.
See the algo below:
Algo(A, I, n)
{
    int i, j = 100;
    for (i = 1 to j)
        A[i] = 0;
}
Space complexity = extra space required by variable i + variable j.
In this case my space complexity is O(1) => constant.
Example 2: An array of size n is given as input.
A(A, I, n)
{
    int i;
    create B[n];   // create a new array B with the same number of elements
    for (i = 1 to n)
        B[i] = A[i];
}
Space complexity in this case: extra space taken by i + the new array B
=> 1 + n => O(n)
Even if I used 5 variables here, the space complexity would still be O(n).
If, as per computer science, my space complexity is constant for the first and O(n) for the second even if I were using 10 variables in the above algo, why is it always advised to write programs using fewer variables?
I do understand that in practical scenarios it makes the code more readable and easier to debug etc.
But looking for an answer in terms of space complexity only here.
Big O complexity is not the be-all end-all consideration in analysis of performance. It is all about the constants that you are dropping when you look at asymptotic (big O) complexity. Two algorithms can have the same big-O complexity and yet one can be thousands of times more expensive than the other.
E.g. if one approach to solving some problem always takes 10s flat, and another approach takes 3000s flat, regardless of input size, they both have O(1) time complexity. Of course, that doesn't really mean both are equally good; using the latter approach if there is no real benefit is simply a massive waste of time.
This is not to say performance is the only, or even the primary consideration when someone advises you to be economical with your use of local variables. Other considerations like readability, or avoidance of subtle bugs are also factors.
For this code snippet
Algo(A, I, n)
{
    int i, j = 100;
    for (i = 1 to j)
        A[i] = 0;
}
Space complexity is O(1): the array is the input (so it doesn't count as extra space), and the two variables i and j take constant space.
It is always advised to use fewer variables because each variable occupies constant space. If you have k variables, they use k * constant space; say each variable is an int occupying 2 bytes, that's k * 2 bytes, so with k = 10 it's 20 bytes here.
That is similar to using int A[10] => 20 bytes of space.
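For instance (the 2-byte int above is an assumption for the arithmetic; on a typical JVM an int is 4 bytes):

int a, b, c, d, e, f, g, h, i, j;    // 10 scalar ints: a fixed, constant amount of space
int[] A = new int[10];               // one 10-element array: the same fixed amount of space

Both are constant with respect to the input size, so both are O(1); the variable count only changes the constant.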
I hope this helps.
I'm reading Cracking the Coding Interview, and in the Big O section of the book we are posed with this piece of code:
int pairSumSequence(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += pairSum(i, i + 1);
    }
    return sum;
}

int pairSum(int a, int b) {
    return a + b;
}
Now, I understand that the time complexity is O(n), because it's clear that the execution time increases as the size of the int n increases. But the book goes on to state that:
"...However, those calls [referring to calls of pairSum] do not exist simultaneously on the call
stack, so you only need O(1) space."
This is the part I do not understand. Why is the space complexity of this algorithm O(1)? What exactly does the author mean by this? My initial analysis assumed that because pairSum() is called N times, those calls would be added to the call stack back-to-back and would thus occupy N space on the call stack. Thanks very much.
It means that the amount of space used by this algorithm is constant with respect to the input size.
Yes, pairSum is called N times. However, it occupies O(1) space because, as the book says, no two of those calls exist simultaneously.
Roughly speaking, at each iteration of the loop:
pairSum is called. It uses a constant amount of stack space.
It returns. It no longer occupies any stack space after that.
Thus, this algorithm uses only a fixed amount of space at any point (it doesn't depend on n).
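If you want to see this concretely, you can instrument the code to track the maximum call-stack depth (a hypothetical harness I've added, not from the book):

public class PairSumDepth {
    static int depth = 0, maxDepth = 0;

    static int pairSum(int a, int b) {
        depth++;
        maxDepth = Math.max(maxDepth, depth);
        int result = a + b;
        depth--;
        return result;
    }

    static int pairSumSequence(int n) {
        depth++;
        maxDepth = Math.max(maxDepth, depth);
        int sum = 0;
        for (int i = 0; i < n; i++) {
            sum += pairSum(i, i + 1);   // each call returns before the next begins
        }
        depth--;
        return sum;
    }

    public static void main(String[] args) {
        pairSumSequence(1_000_000);
        System.out.println("max depth: " + maxDepth);   // prints 2, no matter how large n is
    }
}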