Why does the following code only have space complexity O(N)? [duplicate] - algorithm

This question already has answers here:
What is the space complexity of this code?
(3 answers)
Closed 6 years ago.
This is taken straight from Cracking the Coding Interview by Gayle Laakmann McDowell.
She lists the following code:
int f(int n) {
    if (n <= 1) {
        return 1;
    }
    return f(n-1) + f(n-1);
}
She has an articulate explanation about why the time complexity is O(2^n), but why is the space complexity here only O(N)?

The space complexity takes the call stack usage into account. The function nests at most N calls deep before returning, so the call stack will be O(N) deep.
Update (thanks @MatthewWetmore):
To clarify, the two recursive calls in the f(n-1) + f(n-1); expression are not executed in parallel. First one call completes, consuming linear stack space, and only then does the second call run, reusing the same stack space. So the space never doubles, unlike the running time, which accumulates with every call.
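If you want to see this concretely, here is a minimal sketch (my own instrumentation, not from the book) that counts both the total number of calls and the deepest simultaneous nesting; the counter names are made up for illustration:
#include <stdio.h>

/* Sketch: instrument f(n) with a depth counter and a call counter.
   The globals depth, max_depth and calls are illustrative additions. */
static int depth = 0, max_depth = 0;
static long calls = 0;

int f(int n) {
    calls++;
    depth++;
    if (depth > max_depth) max_depth = depth;

    int result;
    if (n <= 1) {
        result = 1;
    } else {
        result = f(n - 1) + f(n - 1);  /* the second call starts only after the first returns */
    }

    depth--;
    return result;
}

int main(void) {
    int n = 20;
    f(n);
    printf("n = %d: %ld calls, max depth %d\n", n, calls, max_depth);
    /* Expect roughly 2^n calls but a maximum depth of only n. */
    return 0;
}
Running it for n = 20 should report about a million calls but a depth of only 20, which is the O(2^n) time versus O(N) space distinction in action.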

It uses that much space on the system stack for the function calls.
For example, F(5) will call F(4), which will call F(3), and so on:
F(5) -> F(4) -> F(3) -> F(2) -> F(1) -> the return values then close the calls in reverse order.
So at its deepest point F(5) needs 5 stack frames, and in general F(n) needs n frames, which is why the space is O(N).

The O notation is an asymptotic notation. It states that the time to complete the algorithm will never surpass a certain limit; it can be less, but that is not specified, only the bound is.
The time complexity of an algorithm depends on the size of the input. This can be {1}, an input of one number, or {1,2}, an input of double that size. That size is what the n in O(n) means. O(n) means the algorithm has linear complexity: the time to complete it is n times some constant, i.e. C * n.
You may ask yourself how we determine C. Well, that is why we use asymptotic notation: we can omit the exact value of C. This is useful because an algorithm's C can vary depending on the order of the input or some other condition. You might see a best case scenario where the complexity is Cn and a worst case where it is Cnlogn; in that case we say the algorithm has O(nlogn) complexity, because we take the worst case scenario as the boundary. We still omit the C, and we omit the best case scenario Cn. You might consider reading an algorithms book for more information, such as The Art of Computer Programming, Volume 1, or Algorithms by Robert Sedgewick. The book by Sedgewick uses Java and is very accessible.
Space complexity works the same way, but this time we are talking about memory space. There is a technique called dynamic programming that stores previous results so you can use them later as solutions to subproblems. That way we skip the time it takes to recompute a subproblem many times; however, the amount of memory needed to solve the problem increases. The classic example is the change-making problem: you store many solutions, and the space they take in system memory increases. The problem has linear space complexity because the amount of memory needed to solve it behaves like a function Cn.
As an example, this function uses n, which can be said to take one space in memory, and the constant 1 takes another, so we can say the space in memory would be 1 + n. In this case we say the linear function has an offset a: the function is Cn + a where C = 1 and a = 1, which is still a linear function. The n comes from the recursion, as we will need n spaces in memory to store the n numbers as the recursion goes all the way down to 1. So, for example, let's say n = 3.
The first time we have 3 in memory; as the recursion goes down we store 2, and last 1. Whether we say the 1 is already in memory or not depends on whether the space where the 1 resides is on the stack or on the heap. That is why we use O notation: this is dependent on the architecture implementation. So while sometimes the exact count can be 1+n, it can also be n, but by saying O(n) we cover both cases.
Also if you have doubts about the line:
f(n-1) + f(n-1);
Depending on the architecture you may have a stack, or you could send the values to main memory (no example of such an architecture comes to mind) and have a garbage collector clean up the references as they go out of scope; it would still be linear space, as you store 3 * 2, 2 * 2, 1 * 2. In that case it would be 2n+1 or 2n, still linear. But I doubt you will see anything like this: most likely you will have a stack, and as soon as the recursion returns it will start popping all the variables from the stack, using at most 1+n or n+1.
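As a small aside on the dynamic-programming idea mentioned above, here is a minimal sketch of memoization, using Fibonacci rather than change-making for brevity; the function and array names are my own illustration:
#include <stdio.h>

/* Sketch: trade O(n) extra memory for time. The memo array stores each
   subproblem's answer so it is computed only once. */
#define MAX_N 90
static long long memo[MAX_N + 1];   /* 0 means "not computed yet" */

long long fib(int n) {
    if (n <= 1) return n;
    if (memo[n] != 0) return memo[n];      /* reuse a stored subproblem */
    memo[n] = fib(n - 1) + fib(n - 2);     /* compute once, keep for later */
    return memo[n];
}

int main(void) {
    printf("fib(50) = %lld\n", fib(50));   /* fast: O(n) time, O(n) extra space */
    return 0;
}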

Related

Time Complexity (Big O) - Can the value of N decide whether the time complexity is O(1) or O(N) when we have 2 nested FOR loops?

Suppose that I have 2 nested for loops, and 1 array of size N as shown in my code below:
int result = 0;
for (int i = 0; i < N; i++)
{
    for (int j = i; j < N; j++)
    {
        result = array[i] + array[j]; // just some funny operation
    }
}
Here are 2 cases:
(1) if the constraint is that N >= 1,000,000 strictly, then we can definitely say that the time complexity is O(N^2). This is true for sure as we all know.
(2) Now, if the constraint is that N < 25 strictly, then people could probably say that because we know that definitely, N is always too small, the time complexity is estimated to be O(1) since it takes very little time to run and complete these 2 for loops WITH MODERN COMPUTERS ? Does that sound right ?
Please tell me if the value of N plays a role in deciding the outcome of the time complexity O(N) ? If yes, then how big the value N needs to be in order to play that role (1,000 ? 5,000 ? 20,000 ? 500,000 ?) In other words, what is the general rule of thumb here ?
INTERESTING THEORETICAL QUESTION: If 15 years from now, the computer is so fast that even if N = 25,000,000, these 2 for loops can be completed in 1 second. At that time, can we say that the time complexity would be O(1) even for N = 25,000,000 ? I suppose the answer would be YES at that time. Do you agree ?
tl;dr: No. The value of N has no effect on time complexity. O(1) versus O(N) is a statement about "all N", or about how the amount of computation increases when N increases.
Great question! It reminds me of when I was first trying to understand time complexity. I think many people have to go through a similar journey before it ever starts to make sense so I hope this discussion can help others.
First of all, your "funny operation" is actually funnier than you think since your entire nested for-loops can be replaced with:
result = array[N - 1] + array[N - 1]; // just some hilarious operation hahaha ha ha
Since result is overwritten each time, only the last iteration affects the outcome. We'll come back to this.
As far as what you're really asking here, the purpose of Big-O is to provide a meaningful way to compare algorithms in a way that is independent of input size and independent of the computer's processing speed. In other words, O(1) versus O(N) has nothing to do with the size of N and nothing to do with how "modern" your computer is. All of that affects the execution time of the algorithm on a particular machine with a particular input, but it does not affect the time complexity, i.e. O(1) versus O(N).
It is actually a statement about the algorithm itself, so a math discussion is unavoidable, as dxiv has so graciously alluded to in his comment. Disclaimer: I'm going to omit certain nuances in the math since the critical stuff is already a lot to explain and I'll defer to the mountains of complete explanations elsewhere on the web and textbooks.
Your code is a great example to understand what Big-O does tell us. The way you wrote it, its complexity is O(N^2). That means that no matter what machine or what era you run your code in, if you were to count the number of operations the computer has to do, for each N, and graph it as a function, say f(N), there exists some quadratic function, say g(N)=9999N^2+99999N+999 that is greater than f(N) for all N.
But wait, if we just need to find big enough coefficients in order for g(N) to be an upper bound, can't we just claim that the algorithm is O(N) and find some g(N)=aN+b with gigantic enough coefficients that it's an upper bound of f(N)??? THE ANSWER TO THIS IS THE MOST IMPORTANT MATH OBSERVATION YOU NEED TO UNDERSTAND TO REALLY UNDERSTAND BIG-O NOTATION. Spoiler alert. The answer is no.
For visuals, try this graph on Desmos where you can adjust the coefficients: https://www.desmos.com/calculator/3ppk6shwem
No matter what coefficients you choose, a function of the form aN^2+bN+c will ALWAYS eventually outgrow a function of the form aN+b (both having positive a). You can push a line as high as you want like g(N)=99999N+99999, but even the function f(N)=0.01N^2+0.01N+0.01 crosses that line and grows past it after N=9999900. There is no linear function that is an upper bound to a quadratic. Similarly, there is no constant function that is an upper bound to a linear function or quadratic function. Yet, we can find a quadratic upper bound to this f(N) such as h(N)=0.01N^2+0.01N+0.02, so f(N) is in O(N^2). This observation is what allows us to just say O(1) and O(N^2) without having to distinguish between O(1), O(3), O(999), O(4N+3), O(23N+2), O(34N^2+4+e^N), etc. By using phrases like "there exists a function such that" we can brush all the constant coefficients under the rug.
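If you would rather see that crossover numerically than on Desmos, here is a minimal sketch; the coefficients are the ones from the paragraph above and nothing else is assumed:
#include <stdio.h>

/* Sketch: find where the "small" quadratic from the answer overtakes the
   "huge" linear function, to back up the claim that every quadratic
   eventually outgrows every line. */
int main(void) {
    double N = 1;
    while (0.01 * N * N + 0.01 * N + 0.01 <= 99999.0 * N + 99999.0) {
        N += 1;
    }
    printf("the quadratic exceeds the line from N = %.0f onward\n", N);
    return 0;
}
It should print a value right around the N=9999900 mentioned above; pushing the line's coefficients even higher only delays the crossover, it never prevents it.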
So having a quadratic upper bound, aka being in O(N^2), means that the function f(N) is no bigger than quadratic and in this case happens to be exactly quadratic. It sounds like this just comes down to comparing the degree of polynomials, why not just say that the algorithm is a degree-2 algorithm? Why do we need this super abstract "there exists an upper bound function such that bla bla bla..."? This is the generalization necessary for Big-O to account for non-polynomial functions, some common ones being logN, NlogN, and e^N.
For example if the number of operations required by your algorithm is given by f(N)=floor(50+50*sin(N)), we would say that it's O(1) because there is a constant function, e.g. g(N)=101 that is an upper bound to f(N). In this example, you have some bizarre algorithm with oscillating execution times, but you can convey to someone else how much it doesn't slow down for large inputs by simply saying that it's O(1). Neat. Plus we have a way to meaningfully say that this algorithm with trigonometric execution time is more efficient than one with linear complexity O(N). Neat. Notice how it doesn't matter how fast the computer is because we're not measuring in seconds, we're measuring in operations. So you can evaluate the algorithm by hand on paper and it's still O(1) even if it takes you all day.
As for the example in your question, we know it's O(N^2) because there are aN^2+bN+c operations involved for some a, b, c. It can't be O(N), let alone O(1), because no matter what aN+b you pick, I can find a large enough input size N such that your algorithm requires more than aN+b operations. On any computer, in any time zone, with any chance of rain outside. Nothing physical affects O(1) versus O(N) versus O(N^2). What changes it to O(1) is changing the algorithm itself to the one-liner that I provided above where you just add two numbers and spit out the result no matter what N is. Let's say for N=10 it takes 4 operations to do both array lookups, the addition, and the variable assignment. If you run it again on the same machine with N=10000000 it's still doing the same 4 operations. The amount of operations required by the algorithm doesn't grow with N. That's why the algorithm is O(1).
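Here is a minimal sketch of that operation-counting idea, using your loops as written; the counter variable is mine and is only there to make the growth visible:
#include <stdio.h>

/* Sketch: count the "funny operations" by hand instead of timing anything.
   The nested-loop version grows with N; the one-liner always does one. */
int main(void) {
    int array[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    for (int N = 2; N <= 8; N++) {
        long ops = 0;
        int result = 0;
        for (int i = 0; i < N; i++)
            for (int j = i; j < N; j++) {
                result = array[i] + array[j]; /* just some funny operation */
                ops++;
            }
        printf("N = %d: %ld operations (result %d); the one-liner would do 1\n", N, ops, result);
    }
    return 0;
}
The count you get is N(N+1)/2, which is exactly the kind of growth O(N^2) describes, no matter what machine runs it.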
It's why problems like finding a O(NlogN) algorithm to sort an array are math problems and not nano-technology problems. Big-O doesn't even assume you have a computer with electronics.
Hopefully this rant gives you a hint as to what you don't understand so you can do more effective studying for a complete understanding. There's no way to cover everything needed in one post here. It was some good soul-searching for me, so thanks.

Differences between time complexity and space complexity?

I have seen that in most cases the time complexity is related to the space complexity and vice versa. For example in an array traversal:
for i=1 to length(v)
print (v[i])
endfor
Here it is easy to see that the algorithm complexity in terms of time is O(n), but it looks to me like the space complexity is also n (also represented as O(n)?).
My question: is it possible that an algorithm has different time complexity than space complexity?
The time and space complexities are not related to each other. They are used to describe how much space/time your algorithm takes based on the input.
For example when the algorithm has space complexity of:
O(1) - constant - the algorithm uses a fixed (small) amount of space which doesn't depend on the input. For every size of the input the algorithm will take the same (constant) amount of space. This is the case in your example as the input is not taken into account and what matters is the time/space of the print command.
O(n), O(n^2), O(log(n))... - these indicate that you create additional objects based on the length of your input. For example creating a copy of each object of v storing it in an array and printing it after that takes O(n) space as you create n additional objects.
In contrast the time complexity describes how much time your algorithm consumes based on the length of the input. Again:
O(1) - no matter how big the input is, it always takes constant time - for example, only one instruction. Like
function(list l) {
print("i got a list");
}
O(n), O(n^2), O(log(n)) - again it's based on the length of the input. For example
function(list l) {
for (node in l) {
print(node);
}
}
Note that both last examples take O(1) space as you don't create anything. Compare them to
function(list l) {
list c;
for (node in l) {
c.add(node);
}
}
which takes O(n) space because you create a new list whose size depends on the size of the input in linear way.
Your example shows that time and space complexity might be different. It takes v.length * print.time to print all the elements. But the space is always the same - O(1) because you don't create additional objects. So, yes, it is possible that an algorithm has different time and space complexity, as they are not dependent on each other.
Time and Space complexity are different aspects of calculating the efficiency of an algorithm.
Time complexity deals with finding out how the computational time of
an algorithm changes with the change in size of the input.
On the other hand, space complexity deals with finding out how much
(extra)space would be required by the algorithm with change in the
input size.
To calculate the time complexity of an algorithm, the best way is to check whether the number of comparisons (or computational steps) increases as we increase the size of the input; to calculate the space complexity, the best bet is to see whether the additional memory required by the algorithm also changes with the size of the input.
A good example is bubble sort.
Let's say you tried to sort an array of 5 elements.
In the first pass you will compare the 1st element with the next 4 elements. In the second pass you will compare the 2nd element with the next 3 elements, and you continue this procedure until you fully exhaust the list.
Now what will happen if you try to sort 10 elements? In this case you start by comparing the 1st element with the next 9 elements, then the 2nd with the next 8 elements, and so on. In other words, if you have an N element array you start off by comparing the 1st element with N-1 elements, then the 2nd element with N-2 elements, and so on. This results in O(N^2) time complexity.
But what about space? When you sorted the 5 element or 10 element array, did you use any additional buffer or memory space? You might say: yes, I did use a temporary variable to make the swap. But did the number of variables change when you increased the size of the array from 5 to 10? No. Irrespective of the size of the input you will always use a single variable to do the swap. This means that the size of the input has nothing to do with the additional space you will require, resulting in O(1), or constant, space complexity.
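To make that concrete, here is a minimal bubble sort sketch (my own illustration) that counts the comparisons and shows the single temporary variable:
#include <stdio.h>

/* Sketch: bubble sort. The comparison count grows roughly as N^2, while the
   only extra storage is the single tmp variable used for the swap. */
void bubble_sort(int a[], int n) {
    long comparisons = 0;
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - 1 - i; j++) {
            comparisons++;
            if (a[j] > a[j + 1]) {
                int tmp = a[j];        /* the one and only extra variable */
                a[j] = a[j + 1];
                a[j + 1] = tmp;
            }
        }
    }
    printf("n = %d: %ld comparisons, 1 temporary variable\n", n, comparisons);
}

int main(void) {
    int five[5] = {5, 4, 3, 2, 1};
    int ten[10] = {10, 9, 8, 7, 6, 5, 4, 3, 2, 1};
    bubble_sort(five, 5);   /* 10 comparisons */
    bubble_sort(ten, 10);   /* 45 comparisons */
    return 0;
}
Doubling the input roughly quadruples the comparisons, but the extra memory stays at one variable either way.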
Now, as an exercise, research the time and space complexity of merge sort.
First of all, the space complexity of this loop is O(1) (the input is customarily not included when calculating how much storage is required by an algorithm).
So the question that I have is: is it possible that an algorithm has different time complexity from space complexity?
Yes, it is. In general, the time and the space complexity of an algorithm are not related to each other.
Sometimes one can be increased at the expense of the other. This is called space-time tradeoff.
There is a well-known relation between time and space complexity.
First of all, time is an obvious bound to space consumption: in time t
you cannot reach more than O(t) memory cells. This is usually expressed
by the inclusion
DTime(f) ⊆ DSpace(f)
where DTime(f) and DSpace(f) are the set of languages
recognizable by a deterministic Turing machine in time
(respectively, space) O(f). That is to say that if a problem can
be solved in time O(f), then it can also be solved in space O(f).
Less evident is the fact that space provides a bound to time. Suppose
that, on an input of size n, you have at your disposal f(n) memory cells,
comprising registers, caches and everything. After having written these cells
in all possible ways you may eventually stop your computation,
since otherwise you would reenter a configuration you
already went through, starting to loop. Now, on a binary alphabet,
f(n) cells can be written in 2^f(n) different ways, that gives our
time upper bound: either the computation will stop within this bound,
or you may force termination, since the computation will never stop.
This is usually expressed in the inclusion
DSpace(f) ⊆ DTime(2^(cf))
for some constant c. The reason for the constant c is that if L is in DSpace(f) you only
know that it will be recognized in space O(f), while in the previous
reasoning, f was an actual bound.
The above relations are subsumed by stronger versions, involving
nondeterministic models of computation, that is the way they are
frequently stated in textbooks (see e.g. Theorem 7.4 in Computational
Complexity by Papadimitriou).
Yes, this is definitely possible. For example, sorting n real numbers requires O(n) space, but O(n log n) time. It is true that space complexity is always a lower bound on time complexity, as the time to initialize the space is included in the running time.
Sometimes yes, they are related, and sometimes no, they are not related.
Actually, we sometimes use more space to get faster algorithms, as in dynamic programming: https://www.codechef.com/wiki/tutorial-dynamic-programming
Dynamic programming uses memoization or a bottom-up approach. The first technique uses memory to remember solutions to repeated subproblems, so the algorithm need not recompute them and can simply look them up in a list of solutions; the bottom-up approach starts with the small solutions and builds on them to reach the final solution.
Here are two simple examples: one shows a relation between time and space, and the other shows no relation.
Suppose we want to find the sum of all integers from 1 to a given integer n:
code1:
sum=0
for i=1 to n
    sum=sum+i
print sum
This code uses only 6 bytes of memory: i => 2, n => 2, and sum => 2 bytes.
Therefore the time complexity is O(n), while the space complexity is O(1).
code2:
array a[n]
a[1]=1
for i=2 to n
    a[i]=a[i-1]+i
print a[n]
This code uses at least n*2 bytes of memory for the array.
Therefore the space complexity is O(n), and the time complexity is also O(n).
Space complexity is the way in which the amount of storage space required by an algorithm varies with the size of the problem it is solving. Space complexity is normally expressed as an order of magnitude, e.g. O(N^2) means that if the size of the problem (N) doubles then four times as much working storage will be needed.
Space complexity is the total amount of memory space used by an algorithm/program, including the space for input values during execution, whereas time complexity is the number of operations an algorithm performs to complete its task. These are two different concepts: a single algorithm can have low time complexity but still take up a lot of memory. For example, hash maps take more memory than arrays but take less time.

What does it mean to find big o notation for memory

I have a question, what does it mean to find the big-o order of the memory required by an algorithm?
Like what's the difference between that and the big o operations?
E.g
a question asks
Given the following pseudo-code, with an initialized two dimensional array A, with both dimensions of size n:
for i <- 1 to n do
    for j <- 1 to n-i do
        A[i][j] = i + j
Wouldn't the big o notation for memory just be n^2 and the computations also be n^2?
Big-Oh is about how something grows according to something else (technically the limit on how something grows). The most common introductory usage is for the something to be how fast an algorithm runs according to the size of inputs.
There is nothing that says you can't have the something be how much memory is used according to the size of the input.
In your example, since there is a bucket in the array for every combination of i and j, the space requirements grow as O(i*j), which is O(n^2).
But if your algorithm was instead keeping track of the largest sum, and not the sums of every number in each array, the runtime complexity would still be O(n^2) while the space complexity would be constant, as the algorithm only ever needs to keep track of current i, current j, current max, and the max being tested.
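Here is a minimal sketch of that constant-space variant (the size n is made up for illustration): the loops are the same, but only a running maximum is kept instead of an n-by-n table:
#include <stdio.h>

/* Sketch: same pair of loops as the pseudo-code, but only the maximum of
   i + j is kept, so the extra space stays O(1) while the time is O(n^2). */
int main(void) {
    int n = 6;                      /* illustrative size */
    int max = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n - i; j++)
            if (i + j > max)
                max = i + j;        /* one int instead of an n*n array */
    printf("largest i + j seen: %d\n", max);
    return 0;
}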
Big-O order of memory means how does the number of bytes needed to execute the algorithm vary as the number of elements processed increases. In your example, I think the Big-O order is n squared, because the data is stored in a square array of size nxn.
The big-O order of operations means how does the number of calculations needed to execute the algorithm vary as the number of elements processed increases.
Yes, you are correct: the space and time complexity for the above pseudo-code are both n^2.
But for the code below, the space (memory) complexity is 1, while the time complexity is still n^2.
I usually go by the assignments and so on done within the code, which gives you the memory complexity.
for i <- 1 to n do
    for j <- 1 to n-i do
        A[0][0] = i + j
I honestly never heard of "big O for memory", but I can easily guess it is only loosely related to the computation time - probably only setting a lower bound.
As an example, it is easy to design an algorithm which uses n^2 memory and n^3 computation, but I think it is impossible to do it the other way round - you cannot process n^2 data with n computational complexity.
Your algorithm has complexity 1/2 * n^2, thus O(n^2).
If A is given to your algorithm, then the space complexity is O(1). Iterating over an existing 2D array and writing values to existing memory locations uses no additional memory.
However, if the algorithm allocates A, then the space complexity is O(n^2).
The time complexity is O(n^2) either way.

time and space complexity

I have a doubt related to time and space complexity in the following 2 cases:
Case I:
Recursion: factorial calculation.
int fact(int n)
{
    if (n == 0)
        return 1;
    else
        return (n * fact(n-1));
}
Here, how come the time complexity becomes 2*n and the space complexity proportional to n?
and
Case II:
Iterative:-
int fact(int n)
{
    int i, result = 1;
    if (n == 0)
        result = 1;
    else
    {
        for (i = 1; i <= n; i++)
            result *= i;
    }
    return (result);
}
Time complexity is proportional to n and space complexity is constant.
This has always been confusing to me.
If my reasoning is faulty, somebody please correct me :)
For the space complexity, here's my explanation:
For the recursive solution, there will be n recursive calls, so there will be n stack frames in use, one for each call. Hence the O(n) space. This is not the case for the iterative solution: there is just one stack frame and you're not even using an array; there is only one variable. So the space complexity is O(1).
For the time complexity of the iterative solution, you have n multiplications in the for loop, so a loose bound will be O(n). Every other operation can be assumed to be unit time or constant time, with no bearing on the overall efficiency of the algorithm. For the recursive solution, I am not so sure. If there were two recursive calls at each step, you could think of the entire call tree as a balanced binary tree, and the total number of nodes would be 2^n - 1, but in this case I am not so sure.
From: https://cs.nyu.edu/~acase/fa14/CS2/module_extras.php
Space Complexity
Below we're going to compare three different calls to both the iterative and recursive factorial functions and see how memory gets used. Keep in mind that every variable we declare has to have space reserved in memory to store its data. So the space complexity of an algorithm in its simplest form is the number of variables in use. In this simplest situation we can calculate approximate space complexity using this equation:
space complexity = number of function calls * number of variables per call
Time Complexity: The number of (machine) instructions which a program executes during its running time is called its time complexity in computer science.
Space Complexity:This is essentially the number of memory cells which an algorithm needs.
Case 1: The program calculates the factorial recursively, so there is one chain of direct calls to the function and then the backtracking, so the time complexity becomes 2*n.
As for the space complexity, there will be n stack frames in use at the deepest point of execution, so it is n.
Case 2: This case is pretty simple: you have n iterations inside the for loop, so the time complexity is n.

Is the time complexity of the empty algorithm O(0)?

So given the following program:
Is the time complexity of this program O(0)? In other words, is 0 O(0)?
I thought answering this in a separate question would shed some light on this question.
EDIT: Lots of good answers here! We all agree that 0 is O(1). The question is, is 0 O(0) as well?
From Wikipedia:
A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
From this description, since the empty algorithm requires 0 time to execute, it has an upper bound performance of O(0). This means, it's also O(1), which happens to be a larger upper bound.
Edit:
More formally from CLR (1ed, pg 26):
For a given function g(n), we denote O(g(n)) the set of functions
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
The asymptotic time performance of the empty algorithm, executing in 0 time regardless of the input, is therefore a member of O(0).
Edit 2:
We all agree that 0 is O(1). The question is, is 0 O(0) as well?
Based on the definitions, I say yes.
Furthermore, I think there's a bit more significance to the question than many answers indicate. By itself the empty algorithm is probably meaningless. However, whenever a non-trivial algorithm is specified, the empty algorithm could be thought of as lying between consecutive steps of the algorithm being specified as well as before and after the algorithm steps. It's nice to know that "nothingness" does not impact the algorithm's asymptotic time performance.
Edit 3:
Adam Crume makes the following claim:
For any function f(x), f(x) is in O(f(x)).
Proof: let S be a subset of R and T be a subset of R* (the non-negative real numbers) and let f(x):S ->T and c ≥ 1. Then 0 ≤ f(x) ≤ f(x) which leads to 0 ≤ f(x) ≤ cf(x) for all x∈S. Therefore f(x) ∈ O(f(x)).
Specifically, if f(x) = 0 then f(x) ∈ O(0).
It takes the same amount of time to run regardless of the input, therefore it is O(1) by definition.
Several answers say that the complexity is O(1) because the time is a constant and the time is bounded by the product of some coefficient and 1. Well, it is true that the time is a constant and it is bounded that way, but that doesn't mean that the best answer is O(1).
Consider an algorithm that runs in linear time. It is ordinarily designated as O(n) but let's play devil's advocate. The time is bounded by the product of some coefficient and n^2. If we consider O(n^2) to be a set, the set of all algorithms whose complexity is small enough, then linear algorithms are in that set. But it doesn't mean that the best answer is O(n^2).
The empty algorithm is in O(n^2) and in O(n) and in O(1) and in O(0). I vote for O(0).
I have a very simple argument for the empty algorithm being O(0): For any function f(x), f(x) is in O(f(x)). Simply let f(x)=0, and we have that 0 (the runtime of the empty algorithm) is in O(0).
On a side note, I hate it when people write f(x) = O(g(x)), when it should be f(x) ∈ O(g(x)).
Big O is asymptotic notation. To use big O, you need a function - in other words, the expression must be parametrized by n, even if n is not used. It makes no sense to say that the number 5 is O(n), it's the constant function f(n) = 5 that is O(n).
So, to analyze time complexity in terms of big O you need a function of n. Your algorithm always makes arguably 0 steps, but without a varying parameter, talking about asymptotic behaviour makes no sense. Assume that your algorithm is parametrized by n. Only now may you use asymptotic notation. It makes no sense to say that it is O(n^2), or even O(1), if you don't specify what n is (or the variable hidden in O(1))!
As soon as you settle on the number of steps, it's a matter of the definition of big O: the function f(n) = 0 is O(0).
Since this is a low-level question it depends on the model of computation.
Under "idealistic" assumptions, it is possible you don't do anything.
But in Python, you cannot write def f(x): with an empty body, only def f(x): pass. If you assume that every instruction, even pass (a NOP), takes time, then the complexity is f(n) = c for some constant c != 0, and you can only say that f is O(1), not O(0).
It's worth noting that big O by itself does not have anything to do with algorithms. For example, you may say sin x = x + O(x^3) when discussing the Taylor expansion. Also, O(1) does not mean constant; it means bounded by a constant.
All of the answers so far address the question as if there is a right and a wrong answer. But there isn't. The question is a matter of definition. Usually in complexity theory the time cost is an integer, although that too is just a definition. You're free to say that the empty algorithm that quits immediately takes 0 time steps or 1 time step. It's an abstract question because time complexity is an abstract definition. In the real world, you don't even have time steps; you have continuous physical time. It may be true that one CPU has clock cycles, but a parallel computer could easily have asynchronous clocks, and in any case a clock cycle is extremely small.
That said, I would say that it's more reasonable to say that the halt operation takes 1 time step rather than that it takes 0 time steps. It does seem more realistic. For many situations it's arguably very conservative, because the overhead of initialization is typically far greater than executing one arithmetic or logical operation. Giving the empty algorithm 0 time steps would only be reasonable to model, for example, a function call that is deleted by an optimizing compiler that knows that the function won't do anything.
It should be O(1). The coefficient is always 1.
Consider:
If something grows like 5n, you don't say O(5n), you say O(n) [in other words, O(1n)]
If something grows like 7n^2, you don't say O(7n^2), you say O(n^2) [in other words, O(1n^2)]
Likewise you should say O(1), not O(some other constant)
There is no such thing as O(0). Even an oracle machine or a hypercomputer require the time for one operation, i.e. solve(the_goldbach_conjecture), ergo:
All machines, theoretical or real, finite or infinite produce algorithms with a minimum time complexity of O(1).
But then again, this code right here is O(0):
// Hello world!
:)
I would say it's O(1) by definition, but O(0) if you want to get technical about it: since O(k1g(n)) is equivalent to O(k2g(n)) for any constants k1 and k2, it follows that O(1 * 1) is equivalent to O(0 * 1), and therefore O(0) is equivalent to O(1).
However, the empty algorithm is not like, for example, the identity function, whose definition is something like "return your input". The empty algorithm is more like an empty statement, or whatever happens between two statements. Its definition is "do absolutely nothing with your input", presumably without even the implied overhead of simply having input.
Consequently, the complexity of the empty algorithm is unique in that O(0) has a complexity of zero times whatever function strikes your fancy, or simply zero. It follows that since the whole business is so wacky, and since O(0) doesn't already mean something useful, and since it's slightly ridiculous to even discuss such things, a reasonable special case for O(0) is something like this:
The complexity of the empty algorithm is O(0) in time and space. An algorithm with time complexity O(0) is equivalent to the empty algorithm.
So there you go.
Given the formal definition of Big O:
Let f(x) and g(x) be two functions defined over the set of real numbers. Then, we write:
f(x) = O(g(x)) as x approaches infinity iff there exists a real M and a real x0 so that:
|f(x)| <= M * |g(x)| for every x > x0
As I see it, if we substitute g(x) = 0 (in order to have a program with complexity O(0)), we must have:
|f(x)| <= 0, for every x > x0 (the constraint of existence of a real M and x0 is practically lifted here)
which can only be true when f(x) = 0.
So I would say that not only the empty program is O(0), but it is the only one for which that holds. Intuitively, this should've been true since O(1) encompasses all algorithms that require a constant number of steps regardless of the size of its task, including 0. It's essentially useless to talk about O(0); it's already in O(1). I suspect it's purely out of simplicity of definition that we use O(1), where it could as well be O(c) or something similar.
0 = O(f) for all functions f, since 0 <= |f|, so it is also O(0).
Not only is this a perfectly sensible question, but it is important in certain situations involving amortized analysis, especially when "cost" means something other than "time" (for example, "atomic instructions").
Let's say there is a datastructure featuring multiple operation types, for which an amortized analysis is being conducted. It could well happen that one type of operation can always be funded fully using "coins" deposited during previous operations.
There is a simple example of this: the "multipop stack" described in Cormen, Leiserson, Rivest, Stein [CLRS09, 17.2, p. 457], and also on Wikipedia. Each time an item is pushed, a coin is put on the item, for a total amortized cost of 2. When (multi) pops occur, they can be fully paid for by taking one coin from each item popped, so the amortized cost of MULTIPOP(k) is O(0). To wit:
Note that the amortized cost of MULTIPOP is a constant (0)
...
Moreover, we can also charge MULTIPOP operations nothing. To pop the
first plate, we take the dollar of credit off the plate and use it to
pay the actual cost of a POP operation. To pop a second plate, we
again have a dollar of credit on the plate to pay for the POP
operation, and so on. Thus, we have always charged enough up front to
pay for MULTIPOP operations. In other words, since each plate on the
stack has 1 dollar of credit on it, and the stack always has a
nonnegative number of plates, we have ensured that the amount of
credit is always nonnegative.
Thus O(0) is an important "complexity class" for certain amortized operations.
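For concreteness, here is a minimal sketch of that multipop stack with the coin accounting written as comments; the fixed capacity and variable names are just for illustration:
#include <stdio.h>

/* Sketch: multipop stack. Each PUSH is charged 2 (1 for the push itself,
   1 coin left on the item); MULTIPOP spends those coins, so its amortized
   cost is 0. */
#define CAP 100
static int stack[CAP];
static int top = 0;          /* number of items currently on the stack */

void push(int x) {
    if (top < CAP) stack[top++] = x;   /* amortized cost 2: actual 1 + 1 coin deposited */
}

void multipop(int k) {
    while (k > 0 && top > 0) {         /* each pop is paid by the coin on that item */
        top--;
        k--;
    }                                  /* amortized cost 0 */
}

int main(void) {
    for (int i = 0; i < 10; i++) push(i);
    multipop(7);                       /* pops 7 items, all of them "prepaid" */
    printf("items left: %d\n", top);   /* prints 3 */
    return 0;
}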
O(1) means the algorithm's time complexity is always constant.
Let's say we have this algorithm (in C):
int doSomething(int n[])
{
    int x = n[0]; // This line accesses an array position, so it is time consuming.
    int y = n[1]; // Same here.
    return x + y;
}
I am ignoring the fact that the array could have fewer than 2 positions, just to keep it simple.
If we count the 2 most expensive lines, we have a total time of 2.
2 = O(1), because:
2 <= c * 1, if c = 2, for every n > 1
If we have this code:
public void doNothing(){}
And if we count it as having 0 expensive lines, there is no difference between saying it has O(0), O(1), or O(1000), because for every one of these functions we can prove the same theorem.
Normally, if the algorithm takes a constant number of steps to complete, we say it has O(1) time complexity.
I guess this is just a convention, because you could use any constant number to represent the function inside the O().
No. It's O(c) by convention whenever you don't have dependence on input size, where c is any positive constant (typically 1 is used - O(1) = O(12.37)).
