I'm reading Cracking the Coding Interview, and in the Big O section of the book we are posed with this piece of code:
int pairSumSequence(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += pairSum(i, i + 1);
    }
    return sum;
}

int pairSum(int a, int b) {
    return a + b;
}
Now, I understand that the time complexity is O(n), because the running time clearly grows as n grows. But the book goes on to state that:
"...However, those calls [referring to calls of pairSum] do not exist simultaneously on the call
stack, so you only need O(1) space."
This is the part I do not understand. Why is the space complexity of this algorithm O(1)? What exactly does the author mean by this? In my initial analysis, I assumed that because pairSum() is called N times, those calls would be added to the call stack back-to-back and would thus occupy N space on the call stack. Thanks very much.
It means that the amount of space used by this algorithm is constant with respect to the input size.
Yes, pairSum is called N times. However, it occupies only O(1) space because, as the book says, no two of those calls exist on the stack simultaneously.
Roughly speaking, at each iteration of the loop:
pairSum is called. It uses a constant amount of stack space.
It returns. It occupies no stack space after that.
Thus, this algorithm uses only a fixed amount of space at any point (it doesn't depend on n).
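To make the contrast concrete, here is a small sketch. The recursive variant, pairSumRecursive, is my own addition for illustration, not from the book: the iterative version never has more than one pairSum frame on the stack at a time, while a recursive formulation keeps n frames alive at once.

```c
int pairSum(int a, int b) {
    return a + b;
}

/* Iterative version from the book: each pairSum call returns before
   the next one begins, so at most one pairSum frame is ever on the
   stack at a time -> O(1) space. */
int pairSumSequence(int n) {
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += pairSum(i, i + 1); /* frame pushed, then popped */
    }
    return sum;
}

/* Hypothetical recursive variant: each call waits for the next one
   to finish, so n frames are live at once -> O(n) stack space. */
int pairSumRecursive(int n) {
    if (n == 0) return 0;
    return pairSum(n - 1, n) + pairSumRecursive(n - 1);
}
```

Both compute the same sum (25 for n = 5); only the maximum stack depth differs.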
Related
I'm a little confused about the space complexity.
int fn_sum(int a[], int n) {
    int result = 0;
    for (int i = 0; i < n; i++) {
        result += a[i];
    }
    return result;
}
In this case, is the space complexity O(n) or O(1)?
I think it uses only the result and i variables, so it is O(1). What's the answer?
(1) Space Complexity: how much memory does your algorithm allocate as a function of the input size?
int fn_sum(int a[], int n) {
    int result = 0; // here you have 1 variable allocated
    for (int i = 0; i < n; i++) {
        result += a[i];
    }
    return result;
}
As the variable you created (result) is a single value (not a list, an array, etc.), your space complexity is O(1): the space usage is constant and does not change with the size of the input.
(2) Time Complexity: how does the number of operations your algorithm performs relate to the size of the input?
int fn_sum(int a[], int n) {      // the input is an array of size n
    int result = 0;               // 1 variable definition = O(1)
    for (int i = 0; i < n; i++) { // loop that runs n times, whatever it has inside
        result += a[i];           // 1 sum operation = O(1), run n times = n * O(1) = O(n)
    }
    return result;                // 1 return operation = O(1)
}
All the operations together take O(1) + O(n) + O(1) = O(n + 2) = O(n) time, following the rule of dropping additive and multiplicative constants.
I'll answer a bit differently:
Since the memory consumed by int fn_sum(int a[], int n) doesn't grow with the number of input items, its space complexity is O(1).
However, its runtime complexity is O(n), since it iterates over n items.
And yes, there are algorithms that consume more memory in order to run faster. A classic example is caching.
https://en.wikipedia.org/wiki/Space_complexity
If int means the 32-bit signed integer type, the space complexity is O(1) since you always allocate, use and return the same number of bits.
If this is just pseudocode and int means integers represented in their binary representations with no leading zeroes and maybe an extra sign bit (imagine doing this algorithm by hand), the analysis is more complicated.
If negatives are allowed, the best case is alternating positive and negative numbers so that the result never grows beyond a constant size - O(1) space.
If zero is allowed, an equally good case is to put zero in the whole array. This is also O(1).
If only positive numbers are allowed, the best case is more complicated. I expect the best case will see some number repeated n times. For the best case, we'll want the smallest representable number for the number of bits involved; so, I expect the number to be a power of 2. We can work out the sum in terms of n and the repeated number:
result = n * val
result size = log(result) = log(n * val) = log(n) + log(val)
input size = n*log(val) + log(n)
As val grows without bound, the log(val) term dominates the result size, and the n*log(val) term dominates the input size; the ratio of result size to input size thus shrinks like 1/n, so the best case is O(1).
The worst case should be had by choosing val to be as small as possible (we choose val = 1) and letting n grow without bound. In that case:
result = n
result size = log(n)
input size = 2 * log(n)
This time, the result size grows like half the input size as n grows, so the worst-case space complexity is linear.
Another way to calculate space complexity is to analyze whether the memory required by your code scales/increases according to the input given.
Your input is int a[] with size being n. The only variable you have declared is result.
No matter what the size of n is, result is declared only once. It does not depend on the size of your input n.
Hence you can conclude your space complexity to be O(1).
Example 1: Given an input array A with n elements.
See the algo below:
Algo(A, I, n)
{
    int i, j = 100;
    for (i = 1 to j)
        A[i] = 0;
}
Space complexity = extra space required by variable i + variable j
In this case my space complexity is O(1), i.e. constant.
Example 2: An array of size n is given as input.
A(A, I, n)
{
    int i;
    create B[n]; // create a new array B with the same number of elements
    for (i = 1 to n)
        B[i] = A[i];
}
Space complexity in this case: Extra space taken by i + new Array B
=> 1 + n => O(n)
Even if I used 5 variables here space complexity will still be O(n).
If, as per computer science, my space complexity is always constant for the first and O(n) for the second (even if I were using 10 variables in the above algorithm), why is it always advised to write programs using fewer variables?
I do understand that in practical scenarios it makes the code more readable and easier to debug etc.
But looking for an answer in terms of space complexity only here.
Big O complexity is not the be-all end-all consideration in analysis of performance. It is all about the constants that you are dropping when you look at asymptotic (big O) complexity. Two algorithms can have the same big-O complexity and yet one can be thousands of times more expensive than the other.
E.g. if one approach to solving some problem always takes 10s flat, and another approach takes 3000s flat, regardless of input size, they both have O(1) time complexity. Of course, that doesn't really mean both are equally good; using the latter approach if there is no real benefit is simply a massive waste of time.
This is not to say performance is the only, or even the primary consideration when someone advises you to be economical with your use of local variables. Other considerations like readability, or avoidance of subtle bugs are also factors.
For this code snippet
Algo(A, I, n)
{
    int i, j = 100;
    for (i = 1 to j)
        A[i] = 0;
}
The space complexity is O(1): the array A is part of the input, and the two variables i and j take only constant extra space.
It is always advised to use fewer variables because each variable occupies constant space. If you have k variables, they use k * constant space. Say each variable is an int occupying 2 bytes; then k variables take k * 2 bytes, so with k = 10 that is 20 bytes.
That is similar to using int A[10], which is also 20 bytes of space.
I hope that makes it clear.
I have a question regarding the space (memory) complexity of this particular piece of pseudocode:
int b(int n, int x) {
    int sol = x;
    if (n > 1) {
        for (int i = 1; i <= n; i++) {
            sol = sol + i;
        }
        for (int k = 0; k < 3; k++) {
            sol = sol + b(n/3, sol/9);
        }
    }
    return sol;
}
The code is called as b(n, 0).
My opinion is that the space complexity grows linearly, that is O(n), because as the input n grows, so does the number of variable declarations (sol).
Whereas a friend of mine insists it must be log(n). I didn't quite get his explanation, but he said something about the second for loop and the fact that the three recursive calls happen in sequence.
So, is n or log(n) correct?
The total number times function b is called is O(n), but space complexity is O(log(n)).
Recursive calls in your program cause the execution stack to grow. Every time a recursive call takes place, all local variables are pushed onto the stack (the stack grows), and when the function returns from the recursion, the local variables are popped off (the stack shrinks).
So what you want to calculate here is the maximum size of the execution stack, which is the maximum depth of the recursion. Since n is divided by 3 at each level, that depth is clearly O(log(n)).
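One way to check this is to instrument the function with a depth counter (the depth and max_depth globals are my own additions for illustration, not part of the original code) and observe that the deepest chain of simultaneously live calls is b(n) -> b(n/3) -> b(n/9) -> ..., which has about log3(n) links:

```c
static int depth = 0;     /* current number of live calls to b */
static int max_depth = 0; /* deepest the call stack ever got */

int b(int n, int x) {
    int sol = x;
    depth++;
    if (depth > max_depth) max_depth = depth;
    if (n > 1) {
        for (int i = 1; i <= n; i++) {
            sol = sol + i;
        }
        for (int k = 0; k < 3; k++) {
            /* the three calls run one after another, so they reuse
               the same stack space rather than piling up */
            sol = sol + b(n / 3, sol / 9);
        }
    }
    depth--;
    return sol;
}
```

For n = 81 the maximum depth is 5 (b(81), b(27), b(9), b(3), b(1)); tripling n to 243 adds only one more level.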
I think the complexity is O(log3(n)), i.e. log base 3 of n.
I am confused when it comes to the space complexity of an algorithm. In theory, it corresponds to the extra space an algorithm uses, i.e. space other than the input. However, I have trouble pinning down what exactly is meant by that.
If, for instance, I have the following brute-force algorithm that checks whether there are no duplicates in an array, would that mean it uses O(1) extra storage space, because it only uses int j and int k?
public static boolean distinctBruteForce(int[] myArray) {
    for (int j = 0; j < myArray.length; j++) {
        for (int k = j + 1; k < myArray.length; k++) {
            if (myArray[k] == myArray[j]) {
                return false; // duplicate found
            }
        }
    }
    return true; // all elements are distinct
}
Yes, according to your definition (which is correct), your algorithm uses constant, or O(1), auxiliary space: the loop indices, possibly some constant stack space needed to set up the function call itself, etc.
It is true that one could argue the loop indices are logarithmic in the size of the input (they need O(log n) bits), but they are usually approximated as constant.
According to the Wikipedia entry:
In computational complexity theory, DSPACE or SPACE is the computational resource describing the resource of memory space for a deterministic Turing machine. It represents the total amount of memory space that a "normal" physical computer would need to solve a given computational problem with a given algorithm
So, in a "normal" computer, the indices would be considered each to be 64 bits, or O(1).
would that mean that it uses O(1) extra storage space, because it uses int j and int k?
Yes.
Extra storage space means space used for something other than the input itself. And, just as with time complexity, if that extra space does not depend on the size of the input (i.e. does not grow when the input grows), then the space complexity is O(1).
Yes, your algorithm does indeed use O(1) storage space (1), since the auxiliary space you use has a strict upper bound that is independent of the input.
(1) Assuming the integers used for iteration are in a restricted range, usually up to 2^32 - 1.
Consider the following C-function:
double foo(int n) {
    int i;
    double sum;
    if (n == 0)
        return 1.0;
    else {
        sum = 0.0;
        for (i = 0; i < n; i++)
            sum += foo(i);
        return sum;
    }
}
The space complexity of the above function is:
(a) O(1) (b) O(n) (c) O(n!) (d) O(n^n)
What I've done is calculate the recurrence relation for the above code, but I'm still not able to solve it. I know this is not a homework help site, but any help would be appreciated.
This is my recurrence.
T(n) = T(n-1) + T(n-2) + T(n-3) + T(n-4) +........+ T(1)+ S
Where S is some constant.
That would depend on whether you're talking about stack, or heap space complexity.
For the heap, it's O(1), since you're using no heap memory (aside from basic system/program overhead).
For the stack, it's O(n). This is because the recursion goes up to n levels deep.
The deepest point is:
foo(n)
foo(n - 1)
foo(n - 2)
...
foo(0)
Space complexity describes how much space your program needs. Since foo does not declare arrays, each level requires O(1) space. Now all you need to do is to figure out how many nested levels can be active at the most at any given time.
Edit: ...so much for letting you figure out the solution for yourself :)
You don't explain how you derived your recurrence relation. I would do it like this:
If n == 0, then foo uses constant space (there is no recursion).
If n > 0, then foo recurses once for each i from 0 to n-1 (inclusive). Each recursion uses constant space (for the call itself) plus T(i) space for the recursive call. But these calls occur one after the other; the space used by each call is released before the next call begins. Therefore their space costs should not be added; instead take the maximum, which is T(n-1), since T is non-decreasing.
The space complexity would be O(n). As you have mentioned, it might seem like O(n*n), but remember that once the call for, say, i = 1 in the loop is done, the stack space it used is freed. So you have to consider the worst case, when i = n-1: that is when the maximum number of recursive calls are on the stack simultaneously.
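The same depth-counting trick from above can be applied here (the depth and max_depth globals are my instrumentation, not part of the original code): the deepest chain of simultaneously live frames is foo(n) -> foo(n-1) -> ... -> foo(0), which holds n+1 frames, consistent with answer (b), O(n).

```c
static int depth = 0;     /* current number of live foo frames */
static int max_depth = 0; /* deepest the stack ever got */

double foo(int n) {
    int i;
    double sum;
    depth++;
    if (depth > max_depth) max_depth = depth;
    if (n == 0) {
        depth--;
        return 1.0;
    }
    sum = 0.0;
    for (i = 0; i < n; i++) {
        sum += foo(i); /* foo(i) fully returns before foo(i+1) starts */
    }
    depth--;
    return sum;
}
```

Calling foo(4) reaches a maximum depth of 5, via the chain foo(4) -> foo(3) -> foo(2) -> foo(1) -> foo(0).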