Compare time complexity of O(2/n) and O(1) - algorithm

If there are two functions with time complexities of O(2/n) and O(100), which function has the lesser execution time? Is there any real function with time complexity of 2/n?
(found this in some algorithm question paper)

Firstly, in O notation (as well as Theta and Omega), you can dismiss any constant factors, because the definition already includes the part "for some constant k".
So, basically, O(100) is equivalent to O(1), while O(2/n) is equivalent to O(1/n). Which has the faster execution time depends on n. If I presume that 100 and 2/n are directly used to calculate execution time, then the execution time is:
100 (units of time) for O(100) in all cases
more than 100 (units of time) for O(2/n) for n < 0.02
less than 100 (units of time) for O(2/n) for n >= 0.02
Now, I hope this question is purely theoretical, because in reality there is no algorithm with O(1/n) complexity -- it would mean that it takes less time (and that's not time per amount of data, that's just time) the more data it needs to process. I hope this is clear: there is no algorithm that could take 0 time for an infinite amount of data.
The other complexity, O(100), is an algorithm that takes the same steps no matter what the input data is, and thus always has a constant time of execution (actually, it only has to be bounded by a constant; it can run faster than that sometimes). An example would be a program that reads an input from a file of integers, and then returns the sum of the first 100 numbers in that file, or of all the numbers if there aren't 100 numbers present. Since it always reads at most 100 numbers (the rest can be ignored) and sums them, it is bounded by a constant number of steps.
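As a rough sketch of that bounded example (the file layout, one integer per line, and the name sum_first_100 are my assumptions for illustration):

def sum_first_100(path):
    # Sum at most the first 100 integers in the file; extra lines are ignored,
    # so the loop body runs a constant-bounded number of times.
    total, count = 0, 0
    with open(path) as f:
        for line in f:
            if count == 100:
                break
            total += int(line)
            count += 1
    return total

However long the file is, the loop executes at most 100 times, which is exactly the "bounded by a constant number of steps" behaviour described above.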

An O(2/n) algorithm that does anything is practically impossible.
An algorithm is a finite sequence of steps that produces a result. Since a "step" on a computer takes a certain amount of time (for example at least one CPU cycle), the only way an algorithm can have O(2/n) time is if it takes zero time for sufficiently large n. Hence it does nothing.
Leaving aside algorithms and time complexity: an O(2/n) function is "less than" constant, in the sense that a O(2/n) function necessarily tends to 0 as n tends to infinity, whereas an O(1) function doesn't necessarily do that.
A remark on the text of the question from this paper: any function that is O(100) is also O(1), and any function that is O(2/n) is also O(1/n). It doesn't really make much sense to write big-O with unnecessary constants in there, but since it's an examination perhaps it's there to potentially confuse you.

Surely it depends on the value of n. The O(100) algorithm is basically fixed time, and may run faster or slower than the O(2/n) one depending on what n is.
I can't think of an O(2/N) algorithm off hand, something that gets faster with more data... sounds a bit weird.

I am not familiar with any algorithm that has O(2/n) running time and I doubt one can exist, but let's look at the mathematical question.
The mathematical questions should be: (1) Is O(2/n) a subset of O(1)?* (2) Is O(1) a subset of O(2/n)?
(1) Yes. Let f(n) be a function in O(2/n); that means there are constants c1, N1 such that f(n) < c1 * (2/n) for each n > N1. There are also constants c2, N2 such that 2/n < c2 * 1 for each n > N2, and thus f(n) < c1 * c2 * 1 for each n > max{N1, N2}. So O(2/n) is a subset of O(1).
(2) No. Since 2/n tends to 0 as n tends to infinity, for any constants c, N there is an n > N such that c * (2/n) < 1. Thus the constant function g(n) = 1 is not in O(2/n), although it is clearly in O(1).
Conclusion: a function that is O(2/n) is also O(1) - but not the other way around.
It means each function in O(2/n) scales "smaller" than 1.
* It is identical to O(100), since O(1) = O(100).
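To make the two directions above concrete, here is a small numeric check (the constants and ranges are arbitrary choices for the demo, not part of a formal proof):

# (1) f(n) = 2/n behaves as an O(1) function: 2/n <= 1 for every n > 2.
assert all(2 / n <= 1 for n in range(3, 10_000))

# (2) g(n) = 1 is not O(2/n): whatever constant c you pick, c * (2/n)
#     eventually drops below 1, so 1 cannot be bounded by c * (2/n).
for c in (1, 10, 1000):
    n = 10 * c
    assert c * (2 / n) < 1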

Related

Determine running time for multiple calls of a function

Assume I have a function f(K) that runs in amortised logarithmic time in K, but linear worst case time. What is the running time of:
for i in range(N): f(N)
(Choose the smallest correct estimate.)
A. Linearithmic in N
B. Amortised linear in N
C. Quadratic in N
D. Impossible to say from the information given
Let's say f(N) just prints "Hello World" so it doesn't depend on how big the parameter is. Can we say the total running time is amortised linear in N ?
This kinda looks like a test question, so instead of just saying the answer, allow me to explain what each of these algorithmic complexity concepts mean.
Let's start with the claim that a function f(n) runs in constant time. I am aware it's not even mentioned in the question, but it's really the basis for understanding all other algorithmic complexities. If some function runs in constant time, it means that its runtime does not depend on its input parameters. Note that it could be as simple as print Hello World or return n, or as complex as finding the first 1,000,000,000 prime numbers (which obviously takes a while, but takes the same amount of time on every invocation). However, this last example is more of an abuse of the mathematical notation; usually constant-time functions are fast.
Now, what does it mean if a function f(n) runs in amortized constant time? It means that if you call the function once, there is no guarantee on how fast it will finish; but if you call it n times, the sum of time spent will be O(n) (and thus, each invocation on average took O(1)). Here is a lengthier explanation from another StackOverflow answer. I can't think of any extremely simple functions that run in amortized constant time (but not constant time), but here is one non-trivial example:
called = 0
next_heavy = 1

def f(n):
    global called, next_heavy
    called += 1
    if called == next_heavy:
        # Heavy step: print as many numbers as there have been calls so far
        # (this is what makes the "2n - 1 prints in total" count below work).
        for i in range(called):
            print(i)
        next_heavy *= 2
On the 512th call, this function prints 512 numbers; however, before that it printed only 511 in total, so after n calls its total number of prints is about 2*n - 1, which is O(n). (Why 511? Because the sum of the powers of two from 1 to 2^k equals 2^(k+1) - 1.)
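As a quick sanity check of that count (this assumes the f defined above and simply captures its output), n calls produce about 2n - 1 prints in total, i.e. amortized O(1) prints per call:

import io
from contextlib import redirect_stdout

n_calls = 512
buf = io.StringIO()
with redirect_stdout(buf):
    for _ in range(n_calls):
        f(n_calls)                 # the argument does not affect the heavy step
total_prints = buf.getvalue().count("\n")
print(total_prints)                # 1023 == 2 * 512 - 1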
Note that every constant time function is also an amortized constant time function, because it takes O(n) time for n calls. So non-amortized complexity is a bit stricter than amortized complexity.
Now your question mentions a function with amortized logarithmic time, which similarly to the above means that after n calls to this function, the total runtime is O(n log n) (and the average runtime per call is O(log n)). And then, per the question, this function is called in a loop from 1 to N, and we just said that by definition those N calls together run in O(N log N). This is linearithmic.
As for the second part of your question, can you deduce what's the total running time of the loop based on our previous observations?

What constitutes exponential time complexity?

I am comparing two algorithms that determine whether a number is prime. I am looking at the upper bound for time complexity, but I can't understand the time complexity difference between the two, even though in practice one algorithm is faster than the other.
This pseudocode runs in exponential time, O(2^n):
Prime(n):
    for i in range(2, n-1)
        if n % i == 0
            return False
    return True
This pseudocode runs in half the time as the previous example, but I'm struggling to understand if the time complexity is still O(2^n) or not:
Prime(n):
    for i in range(2, (n/2+1))
        if n % i == 0
            return False
    return True
As a simple intuition of what big-O and big-Θ (big-Theta) are about: they describe how the number of operations you need to perform changes when you significantly increase the size of the problem (for example, by a factor of 2).
Linear time complexity means that when you increase the size by a factor of 2, the number of steps you need to perform also increases by about a factor of 2. This is what is called Θ(n), often used interchangeably, though not accurately, with O(n) (the difference between O and Θ is that O provides only an upper bound, while Θ guarantees both upper and lower bounds).
Logarithmic time complexity (Θ(log N)) means that when you increase the size by a factor of 2, the number of steps you need to perform increases only by some fixed number of operations. For example, using binary search you can find a given element in a list twice as long using just one more loop iteration.
Similarly, exponential time complexity (Θ(a^N) for some constant a > 1) means that if you increase the size of the problem just by 1, you need a times more operations. (Note that there is a subtle difference between Θ(2^N) and 2^Θ(N); the second is actually more generic. Both lie inside exponential time, but neither of the two covers it all; see the wiki for some more details.)
Note that those definitions depend significantly on how you define "the size of the task".
As @DavidEisenstat correctly pointed out, there are two possible contexts in which your algorithm can be seen:
Some fixed-width numbers (for example, 32-bit numbers). In such a context, an obvious measure of the complexity of the prime-testing algorithm is the value being tested itself. In that case your algorithm is linear.
In practice there are many contexts where a prime-testing algorithm should work for really big numbers. For example, many crypto-algorithms used today (such as Diffie–Hellman key exchange or RSA) rely on very big prime numbers, like 512 bits, 1024 bits and so on. Also, in those contexts the security is measured in the number of those bits rather than in the particular prime value. So in such contexts a natural way to measure the size of the task is the number of bits. And now the question arises: how many operations do we need to perform to check a value of known size in bits using your algorithm? Obviously, if the value N has m bits, then N ≈ 2^m. So your algorithm goes from linear Θ(N) to exponential 2^Θ(m). In other words, to solve the problem for a value just 1 bit longer, you need to do about 2 times more work.
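Here is a small sketch of that contrast, counting loop iterations of the trial-division test as a proxy for running time (the sample primes are just convenient worst-case inputs of roughly 8, 12, and 16 bits):

def trial_division_ops(n):
    # Count the iterations performed by the Prime pseudocode above.
    ops = 0
    for i in range(2, n - 1):
        ops += 1
        if n % i == 0:
            return ops
    return ops

for p in (251, 4093, 65521):       # primes of about 8, 12 and 16 bits
    print(p.bit_length(), p, trial_division_ops(p))

The iteration count grows roughly linearly with the value p, but every extra 4 bits multiplies it by about 16, i.e. it grows exponentially in the bit length m.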
Exponential versus linear is a question of how the input is represented and the machine model. If the input is represented in unary (e.g., 7 is sent as 1111111) and the machine can do constant time division on numbers, then yes, the algorithm is linear time. A binary representation of n, however, uses about lg n bits, and the quantity n has an exponential relationship to lg n (n = 2^(lg n)).
Given that the number of loop iterations is within a constant factor for both solutions, they are in the same big O class, Theta(n). This is exponential if the input has lg n bits, and linear if it has n.
I hope this will explain why they are in fact linear.
Suppose you call the function and count how many times each line is executed:
Prime(n):                      # called once
    for i in range(2, n-1)     # loop runs about n - 3 times
        if n % i == 0          # once per iteration
            return False       # at most once
    return True                # at most once
# overall -> about n steps
Prime(n):                      # called once
    for i in range(2, (n/2+1)) # loop runs about n/2 - 1 times
        if n % i == 0          # once per iteration
            return False       # at most once
    return True                # at most once
# overall -> about n/2 steps -> still O(n)
This shows that Prime is a linear function of the value n.
An O(n^2) cost might come from the surrounding code block where this function is called.

Polynomial in n code? What does it mean? [duplicate]

Could someone explain the difference between polynomial-time, non-polynomial-time, and exponential-time algorithms?
For example, if an algorithm takes O(n^2) time, then which category is it in?
Below are some common Big-O functions while analyzing algorithms.
O(1) - Constant time
O(log(n)) - Logarithmic time
O(n log(n)) - Linearithmic time
O((log(n))^c) - Polylogarithmic time
O(n) - Linear time
O(n^2) - Quadratic time
O(n^c) - Polynomial time
O(c^n) - Exponential time
O(n!) - Factorial time
(n = size of input, c = some constant)
[Model graph of the Big-O complexity of these functions omitted; graph credits: http://bigocheatsheet.com/]
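For a rough numeric feel of a few of these classes (the values are illustrative only, with "operations" in abstract units), a short sketch:

import math

for n in (10, 20, 40):
    print(f"n={n:>3}  log n={math.log2(n):5.1f}  n log n={n * math.log2(n):7.1f}  "
          f"n^2={n**2:5}  2^n={2**n:14}  n!={math.factorial(n):.3e}")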
Exponential is worse than polynomial.
O(n^2) falls into the quadratic category, which is a type of polynomial (the special case of the exponent being equal to 2) and better than exponential.
Exponential is much worse than polynomial. Look at how the functions grow
n = 10 | 100 | 1000
n^2 = 100 | 10000 | 1000000
k^n = k^10 | k^100 | k^1000
k^1000 is exceptionally huge unless k is smaller than something like 1.1. Like, something like every particle in the universe would have to do 100 billion billion billion operations per second for trillions of billions of billions of years to get that done.
I didn't calculate it out, but IT'S THAT BIG.
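If you want to see for yourself how big these numbers get (Python's integers are exact, so this is a cheap check):

print(len(str(2 ** 1000)))    # 302 digits
print(len(str(10 ** 1000)))   # 1001 digits
print(1.1 ** 1000)            # roughly 2.5e41, already astronomically large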
O(n^2) is polynomial time. The polynomial is f(n) = n^2. On the other hand, O(2^n) is exponential time, where the exponential function implied is f(n) = 2^n. The difference is whether the function of n places n in the base of an exponentiation, or in the exponent itself.
Any exponential growth function will grow significantly faster (long term) than any polynomial function, so the distinction is relevant to the efficiency of an algorithm, especially for large values of n.
Polynomial time.
A polynomial is a sum of terms that look like Constant * x^k
Exponential means something like Constant * k^x
(in both cases, k is a constant and x is a variable).
The execution time of exponential algorithms grows much faster than that of polynomial ones.
Exponential (you have an exponential function if AT LEAST ONE exponent depends on a parameter):
E.g. f(x) = constant ^ x
Polynomial (you have a polynomial function if NO exponent depends on a function parameter):
E.g. f(x) = x ^ constant
More precise definition of exponential
The definition of polynomial is pretty much universal and straightforward so I won't discuss it further.
The definition of Big O is also quite universal, you just have to think carefully about the M and the x0 in the Wikipedia definition and work through some examples.
So in this answer I would like to focus on the precise definition of exponential, as it requires a bit more thought, is less well known, and is less universal, especially when you start to think about some edge cases. I will then contrast it with polynomials a bit further below.
The most common definition of exponential time is:
2^{polynomial(n)}
where polynomial is a polynomial that:
is not constant, e.g. 1, otherwise the time is also constant
the highest order term has a positive coefficient, otherwise it goes to zero at infinity, e.g. 2^{-n^2 + 2n + 1}
so a polynomial such as n^2 + 2n + 1 would be good, giving:
2^{n^2 + 2n + 1}
Note that the base 2 could be any number > 1 and the definition would still be valid, because we can transform the base by multiplying the exponent, e.g.:
8^{polynomial(n)} = (2^3)^{polynomial(n)} = 2^{3 * polynomial(n)}
and 3 * polynomial(n) is also a polynomial.
Also note that constant addition does not matter, e.g. 2^{n + 1} = 2 * 2^{n} and so the + 1 does not matter for big O notation.
Therefore, two possible nice big O equivalent choices for a canonical "smallest exponential" would be for any small positive e either of:
(1 + e)^{n}
2^{en}
for very small e.
The highest order term of the polynomial in the exponent in both cases is n^1, order one, and therefore the smallest possible non-constant polynomial.
Those two choices are equivalent, because, as we saw earlier, we can transform base changes into an exponent multiplier.
Superpolynomial and sub-exponential
But note that the above definition excludes some still very big things that show up in practice and that we would be tempted to call "exponential", e.g.:
2^{n^{1/2}}. The exponent n^{1/2} is a bit like a polynomial, but it is not one, because polynomial powers must be integers, and here we have 1/2
2^{log_2(n)^2}
Those functions are still very large, because they grow faster than any polynomial.
But strictly speaking, they are big O smaller than the exponentials in our strict definition of exponential!
This motivates the following definitions:
superpolynomial: grows faster than any polynomial
subexponential: grows less fast than any exponential, i.e. (1 + e)^{n}
and all the examples given above in this section fall into both of those categories. TODO proof.
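One way to see these orderings numerically is to compare logarithms instead of the (astronomically large) values themselves. The sketch below compares log2 of 2^sqrt(n) (the superpolynomial example above), of n^10 (a fairly big polynomial), and of 1.01^n (a "small" true exponential); the choices of 10 and 1.01 are arbitrary.

import math

for n in (10**3, 10**6, 10**9):
    log_superpoly   = math.isqrt(n)          # log2 of 2^sqrt(n)
    log_polynomial  = 10 * math.log2(n)      # log2 of n^10
    log_exponential = n * math.log2(1.01)    # log2 of 1.01^n
    print(n, log_superpoly, round(log_polynomial), round(log_exponential))

For large n, 2^sqrt(n) overtakes every polynomial but is itself overtaken by every true exponential, which is exactly the superpolynomial-but-subexponential behaviour described above.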
Keep in mind that if you put something very small in the exponent, it might go back to polynomial of course, e.g.:
2^{log_2(n)} = n
And that is also true for anything smaller than log_2, e.g.:
2^{log_2(log_2(n))} = log_2(n)
is sub-polynomial.
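A quick numeric check of those two identities (n = 1024 is an arbitrary choice):

import math

n = 1024
print(2 ** math.log2(n))                # 1024.0, i.e. n
print(2 ** math.log2(math.log2(n)))     # ~10.0, i.e. log2(n)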
Important superpolynomial and sub-exponential examples
the general number field sieve, the fastest known algorithm (as of 2020) for integer factorization, see also: What is the fastest integer factorization algorithm? That algorithm has complexity of the form:
e^{(k + o(1)) * ln(n)^(1/3) * ln(ln(n))^(2/3)}
where n is the factored number, and the little-o notation o(1) means a term that goes to 0 at infinity.
That complexity even has a named generalization as it presumably occurs in other analyses: L-notation.
Note that the above expression itself is clearly polynomial in n, because it is smaller than e^{ln(n)^(1/3) * ln(n)^(2/3)} = e^{ln(n)} = n.
However, in the context of factorization, what really matters is not n, but rather "the number of digits of n", because cryptography parties can easily generate crypto keys that are twice as large. And the number of digits grows as log_2(n). So in that complexity, what we really care about is something like:
e^{(k + o(1)) * n^(1/3) * ln(n)^(2/3)}
which is of course both superpolynomial and sub-exponential.
The fantastic answer at: What would cause an algorithm to have O(log log n) complexity? gives an intuitive explanation of where the O(log log n) comes from: log n comes from an algorithm that removes half of the options at each step, while log log n comes from an algorithm that reduces the options to the square root of the total at each step!
https://quantumalgorithmzoo.org/ contains a list of algorithms which might be of interest to quantum computers, and in most cases, the quantum speedup relative to a classical computer is not strictly exponential, but rather superpolynomial. However, as this answer will have hopefully highlighted, this is still extremely significant and revolutionary. Understanding that repository is what originally motivated this answer :-)
It is also worth noting that we currently do not expect quantum computers to solve NP-complete problems, which are also generally expected to require exponential time to solve. But there is no proof otherwise either. See also: https://cs.stackexchange.com/questions/130470/can-quantum-computing-help-solve-np-complete-problems
https://math.stackexchange.com/questions/3975382/what-problems-are-known-to-be-require-superpolynomial-time-or-greater-to-solve asks about any interesting algorithms that have been proven superpolynomial (and presumably with a proof of optimality, otherwise the general number field sieve would be an obvious choice, but as of 2020 we don't know if it is optimal or not)
Proof that exponential is always larger than polynomial at infinity
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
Discussions of different possible definitions of sub-exponential
https://cstheory.stackexchange.com/questions/22588/is-it-right-to-call-2-sqrtn-exponential
https://math.stackexchange.com/questions/55468/how-to-prove-that-exponential-grows-faster-than-polynomial
https://en.wikipedia.org/w/index.php?title=Time_complexity&oldid=1026049783#Sub-exponential_time
Polynomial time, O(n^k), means the number of operations is proportional to power k of the size of the input.
Exponential time, O(k^n), means the number of operations is proportional to a constant raised to the power of the size of the input.
Polynomial examples: n^2, n^3, n^100, 5n^7, etc….
Exponential examples: 2^n, 3^n, 100^n, 5^(7n), etc….
O(n^2) is a polynomial time complexity, while O(2^n) is an exponential time complexity.
The P vs NP question is tied to this distinction: for problems such as SAT, clique, independent set, and their optimization versions, the known exact algorithms take exponential time, so the growth rate depends heavily on the input size n. For small inputs the cost looks manageable, but as n gets larger and larger the exponential growth dominates; whether these problems can instead be solved in polynomial time is exactly what P = NP asks, and that is not known.
Here's the simplest explanation for newbies:
A Polynomial:
an expression in which a variable is raised to a constant power, e.g.
f(n) = n ^ 2
while
An Exponential:
an expression in which a constant is raised to a variable power, e.g.
f(n) = 2 ^ n

Understanding the time complexity of an algorithm

I am just starting to learn the big O concept. What I learned is that if a function f is less than or equal to another constant multiple of function g, then f is O(g).
Now I came across an example in which a string of size "n" takes "2n" (double the size of input) steps of the algorithm. So they say the time taken is O(2n), but then they follow this statement by saying that since O(2n) = O(n), the time complexity is O(n).
I don't understand this. As 2n will always be greater than n, how can we ignore the multiple of 2 then? Anything less than or equal to 2n will not necessarily be less than n!
Doesn't it mean that we are somehow equating n and 2n? Sounds confusing. Please clarify in simplest possible way as I am just a beginner in this concept.
Best Regards :)
Big-O and related notations are intended to capture the aspects of algorithm performance that are most inherent to the algorithm, independent of how it is being run and measured.
Constant multipliers depend on the unit of measurement, seconds vs. microseconds vs. instructions vs. loop iterations. Even measured in the same units they will be different if measured on different systems. The same algorithm may take 20n instructions in one instruction set, 30n instructions on another. It may take 0.5n microseconds on one, 10n microseconds on another.
Many of the basic algorithm complexities you will see in the literature were calculated decades ago, but remain meaningful across significant changes in processor architecture and even more significant changes in performance.
Similar considerations apply to start-up and similar overheads.
A function f(n) is O(n) if there exist constants N and c such that, for all n >= N, f(n) <= cn. For f(n) = 2n the constants are N = 0 and c = 2. The first constant, N, is about ignoring overhead; the second, c, is about ignoring constant multipliers.
... As 2n will always be greater than n, how can we ignore the multiple of 2 then? ...
Simply put, with growing n the multiplier loses its importance. The asymptotic behavior of a function describes what happens when n gets large.
Maybe it helps to consider not just O(n) and O(2n), because they are in the same class, but to contrast it with some other common classes. Example: Any O(n^2) algorithm will take longer than any O(n), in the long run (in the short run, their running times might even be reversed). Say you have two algorithms, one with linear time complexity of 100n and another with 8n^2. The quadratic algorithm will be faster for all n <= 12, but slower for all n > 12.
This property – that for any fixed positive c and d you'll find an n, so that cn < dn^2 – constitutes a part of the hierarchy of time complexities.
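A tiny sketch of that 100n versus 8n^2 example, locating the crossover point by brute force (the two functions are just the cost models named above):

def linear(n):
    return 100 * n          # cost model of the "100n" algorithm

def quadratic(n):
    return 8 * n * n        # cost model of the "8n^2" algorithm

crossover = next(n for n in range(1, 100) if quadratic(n) > linear(n))
print(crossover)                      # 13: the quadratic one is slower from n = 13 on
print(quadratic(12), linear(12))      # 1152 1200 -> quadratic still faster at n = 12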
As you alluded to in your first paragraph, the time required to execute the algorithm is proportional to a constant multiple of the input size. You can think of O(n) as O(C*n), where C is any constant multiplier.

Is the time complexity of the empty algorithm O(0)?

So, given an empty program (one whose body does nothing at all):
Is the time complexity of this program O(0)? In other words, is 0 O(0)?
I thought answering this in a separate question would shed some light on this question.
EDIT: Lots of good answers here! We all agree that 0 is O(1). The question is, is 0 O(0) as well?
From Wikipedia:
A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function.
From this description, since the empty algorithm requires 0 time to execute, it has an upper bound performance of O(0). This means, it's also O(1), which happens to be a larger upper bound.
Edit:
More formally from CLR (1ed, pg 26):
For a given function g(n), we denote O(g(n)) the set of functions
O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
The asymptotic time performance of the empty algorithm, executing in 0 time regardless of the input, is therefore a member of O(0).
Edit 2:
We all agree that 0 is O(1). The question is, is 0 O(0) as well?
Based on the definitions, I say yes.
Furthermore, I think there's a bit more significance to the question than many answers indicate. By itself the empty algorithm is probably meaningless. However, whenever a non-trivial algorithm is specified, the empty algorithm could be thought of as lying between consecutive steps of the algorithm being specified as well as before and after the algorithm steps. It's nice to know that "nothingness" does not impact the algorithm's asymptotic time performance.
Edit 3:
Adam Crume makes the following claim:
For any function f(x), f(x) is in O(f(x)).
Proof: let S be a subset of R and T be a subset of R* (the non-negative real numbers) and let f(x):S ->T and c ≥ 1. Then 0 ≤ f(x) ≤ f(x) which leads to 0 ≤ f(x) ≤ cf(x) for all x∈S. Therefore f(x) ∈ O(f(x)).
Specifically, if f(x) = 0 then f(x) ∈ O(0).
It takes the same amount of time to run regardless of the input, therefore it is O(1) by definition.
Several answers say that the complexity is O(1) because the time is a constant and the time is bounded by the product of some coefficient and 1. Well, it is true that the time is a constant and it is bounded that way, but that doesn't mean that the best answer is O(1).
Consider an algorithm that runs in linear time. It is ordinarily designated as O(n) but let's play devil's advocate. The time is bounded by the product of some coefficient and n^2. If we consider O(n^2) to be a set, the set of all algorithms whose complexity is small enough, then linear algorithms are in that set. But it doesn't mean that the best answer is O(n^2).
The empty algorithm is in O(n^2) and in O(n) and in O(1) and in O(0). I vote for O(0).
I have a very simple argument for the empty algorithm being O(0): For any function f(x), f(x) is in O(f(x)). Simply let f(x)=0, and we have that 0 (the runtime of the empty algorithm) is in O(0).
On a side note, I hate it when people write f(x) = O(g(x)), when it should be f(x) ∈ O(g(x)).
Big O is asymptotic notation. To use big O, you need a function - in other words, the expression must be parametrized by n, even if n is not used. It makes no sense to say that the number 5 is O(n), it's the constant function f(n) = 5 that is O(n).
So, to analyze time complexity in terms of big O you need a function of n. Your algorithm always makes arguably 0 steps, but without a varying parameter talking about asymptotic behaviour makes no sense. Assume that your algorithm is parametrized by n. Only now you may use asymptotic notation. It makes no sense to say that it is O(n^2), or even O(1), if you don't specify what is n (or the variable hidden in O(1))!
As soon as you settle on the number of steps, it's a matter of the definition of big O: the function f(n) = 0 is O(0).
Since this is a low-level question it depends on the model of computation.
Under "idealistic" assumptions, it is possible you don't do anything.
But in Python, you cannot say def f(x):, but only def f(x): pass. If you assume that every instruction, even pass (NOP), takes time, then the complexity is f(n) = c for some constant c, and since c != 0 you can only say that f is O(1), not O(0).
It's worth noting big O by itself does not have anything to do with algorithms. For example, you may say sin x = x + O(x^3) when discussing Taylor expansion. Also, O(1) does not mean constant, it means bounded by a constant.
All of the answers so far address the question as if there is a right and a wrong answer. But there isn't. The question is a matter of definition. Usually in complexity theory the time cost is an integer --- although that too is just a definition. You're free to say that the empty algorithm that quits immediately takes 0 time steps or 1 time step. It's an abstract question because time complexity is an abstract definition. In the real world, you don't even have time steps, you have continuous physical time; it may be true that one CPU has clock cycles, but a parallel computer could easily have asynchronous clocks and in any case a clock cycle is extremely small.
That said, I would say that it's more reasonable to say that the halt operation takes 1 time step rather than that it takes 0 time steps. It does seem more realistic. For many situations it's arguably very conservative, because the overhead of initialization is typically far greater than executing one arithmetic or logical operation. Giving the empty algorithm 0 time steps would only be reasonable to model, for example, a function call that is deleted by an optimizing compiler that knows that the function won't do anything.
It should be O(1). The coefficient is always 1.
Consider:
If something grows like 5n, you don't say O(5n), you say O(n) [in other words, O(1n)]
If something grows like 7n^2, you don't say O(7n^2), you say O(n^2) [in other words, O(1n^2)]
Likewise you should say O(1), not O(some other constant)
There is no such thing as O(0). Even an oracle machine or a hypercomputer requires time for at least one operation, e.g. solve(the_goldbach_conjecture); ergo:
All machines, theoretical or real, finite or infinite, produce algorithms with a minimum time complexity of O(1).
But then again, this code right here is O(0):
// Hello world!
:)
I would say it's O(1) by definition, but O(0) if you want to get technical about it: since O(k1g(n)) is equivalent to O(k2g(n)) for any constants k1 and k2, it follows that O(1 * 1) is equivalent to O(0 * 1), and therefore O(0) is equivalent to O(1).
However, the empty algorithm is not like, for example, the identity function, whose definition is something like "return your input". The empty algorithm is more like an empty statement, or whatever happens between two statements. Its definition is "do absolutely nothing with your input", presumably without even the implied overhead of simply having input.
Consequently, the complexity of the empty algorithm is unique in that O(0) has a complexity of zero times whatever function strikes your fancy, or simply zero. It follows that since the whole business is so wacky, and since O(0) doesn't already mean something useful, and since it's slightly ridiculous to even discuss such things, a reasonable special case for O(0) is something like this:
The complexity of the empty algorithm is O(0) in time and space. An algorithm with time complexity O(0) is equivalent to the empty algorithm.
So there you go.
Given the formal definition of Big O:
Let f(x) and g(x) be two functions defined over the set of real numbers. Then, we write:
f(x) = O(g(x)) as x approaches infinity iff there exists a real M and a real x0 so that:
|f(x)| <= M * |g(x)| for every x > x0
As I see it, if we substitute g(x) = 0 (in order to have a program with complexity O(0)), we must have:
|f(x)| <= 0, for every x > x0 (the constraint of existence of a real M and x0 is practically lifted here)
which can only be true when f(x) = 0.
So I would say that not only the empty program is O(0), but it is the only one for which that holds. Intuitively, this should've been true since O(1) encompasses all algorithms that require a constant number of steps regardless of the size of its task, including 0. It's essentially useless to talk about O(0); it's already in O(1). I suspect it's purely out of simplicity of definition that we use O(1), where it could as well be O(c) or something similar.
0 = O(f) for every function f, since 0 <= |f|; so it is also O(0).
Not only is this a perfectly sensible question, but it is important in certain situations involving amortized analysis, especially when "cost" means something other than "time" (for example, "atomic instructions").
Let's say there is a datastructure featuring multiple operation types, for which an amortized analysis is being conducted. It could well happen that one type of operation can always be funded fully using "coins" deposited during previous operations.
There is a simple example of this: the "multipop stack" described in Cormen, Leiserson, Rivest, Stein [CLRS09, 17.2, p. 457], and also on Wikipedia. Each time an item is pushed, a coin is put on the item, for a total amortized cost of 2. When (multi)pops occur, they can be fully paid for by taking one coin from each item popped, so the amortized cost of MULTIPOP(k) is O(0). To wit:
Note that the amortized cost of MULTIPOP is a constant (0)
...
Moreover, we can also charge MULTIPOP operations nothing. To pop the
first plate, we take the dollar of credit off the plate and use it to
pay the actual cost of a POP operation. To pop a second plate, we
again have a dollar of credit on the plate to pay for the POP
operation, and so on. Thus, we have always charged enough up front to
pay for MULTIPOP operations. In other words, since each plate on the
stack has 1 dollar of credit on it, and the stack always has a
nonnegative number of plates, we have ensured that the amount of
credit is always nonnegative.
Thus O(0) is an important "complexity class" for certain amortized operations.
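A minimal Python sketch of such a multipop stack, with a credit counter added purely to illustrate the accounting (the class and method names are mine, not CLRS's):

class MultipopStack:
    def __init__(self):
        self.items = []
        self.credit = 0           # coins banked by earlier PUSH operations

    def push(self, x):
        # Amortized cost 2: one unit of real work plus one banked coin.
        self.items.append(x)
        self.credit += 1

    def multipop(self, k):
        # Amortized cost 0: every real pop is paid for by a banked coin.
        popped = []
        while k > 0 and self.items:
            popped.append(self.items.pop())
            self.credit -= 1
            k -= 1
        assert self.credit >= 0   # the credit never goes negative
        return popped

s = MultipopStack()
for i in range(10):
    s.push(i)
print(s.multipop(7), s.credit)    # 7 items popped, 3 coins still banked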
O(1) means the algorithm's time complexity is always constant.
Let's say we have this algorithm (in C):
int doSomething(int n[])
{
    int x = n[0]; // This line is accessing an array position, so it is time consuming.
    int y = n[1]; // Same here.
    return x + y;
}
I am ignoring the fact that the array could have less than 2 positions, just to keep it simple.
If we count the 2 most expensive lines, we have a total time of 2.
2 = O(1), because:
2 <= c * 1, if c = 2, for every n > 1
If we have this code:
public void doNothing(){}
And if we count it as having 0 expensive lines, there is no difference in saying it has O(0), O(1), or O(1000), because for every one of these functions we can prove the same theorem.
Normally, if the algorithm takes a constant number of steps to complete, we say it has O(1) time complexity.
I guess this is just a convention, because you could use any constant number to represent the function inside the O().
No. It's O(c) by convention whenever you don't have dependence on input size, where c is any positive constant (typically 1 is used - O(1) = O(12.37)).

Resources