What does "O(1) access time" mean? [duplicate] - big-o

This question already has answers here:
What is a plain English explanation of "Big O" notation?
I have seen this term "O(1) access time" used to mean "quickly" but I don't understand what it means. The other term that I see with it in the same context is "O(n) access time". Could someone please explain in a simple way what these terms mean?
See Also
What is Big O notation? Do you use it?
Big-O for Eight Year Olds?

You're going to want to read up on Order of complexity.
http://en.wikipedia.org/wiki/Big_O_notation
In short, O(1) means that it takes a constant amount of time, like 14 nanoseconds or three minutes, no matter the amount of data in the set.
O(n) means it takes an amount of time linear with the size of the set, so a set twice the size will take twice the time. You probably don't want to put a million objects into one of these.

In essence, it means that it takes the same amount of time to look up a value in your collection whether you have a small number of items or very, very many (within the constraints of your hardware).
O(n) would mean that the time it takes to look up an item is proportional to the number of items in the collection.
Typical examples of these are arrays, which can be accessed directly, regardless of their size, and linked lists, which must be traversed in order from the beginning to access a given item.
The other operation usually discussed is insert. A collection can be O(1) for access but O(n) for insert. In fact an array has exactly this behavior: to insert an item in the middle, you would have to shift each later item to the right by copying it into the following slot.
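To make that concrete, here is a minimal C sketch (hypothetical helper names, plain arrays rather than any particular library): reading a slot costs the same regardless of size, while inserting in the middle has to shift everything after the insertion point.

#include <string.h>

/* O(1): one memory read, regardless of how large the array is. */
int get_at(const int *a, int i) {
    return a[i];
}

/* O(n): inserting at position pos shifts every later element one slot to
 * the right, so the cost grows with the number of elements after pos.
 * Assumes the caller has allocated room for len + 1 elements. */
void insert_at(int *a, int len, int pos, int value) {
    memmove(&a[pos + 1], &a[pos], (size_t)(len - pos) * sizeof a[0]);
    a[pos] = value;
}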

O(1) means the time to access something is independent of the number of items in the collection.
O(N) would mean the time to access an item is proportional to the number (N) of items in the collection.

Every answer currently responding to this question tells you that O(1) means constant time (whatever it happens to be measuring; it could be runtime, number of operations, etc.). This is not accurate.
To say that runtime is O(1) means that there is a constant c such that the runtime is bounded above by c, independent of the input. For example, returning the first element of an array of n integers is O(1):
int firstElement(int *a, int n) {
    return a[0];
}
But this function is O(1) too:
int identity(int i) {
    if (i == 0) {
        sleep(60 * 60 * 24 * 365);
    }
    return i;
}
The runtime here is bounded above by 1 year, but most of the time the runtime is on the order of nanoseconds.
To say that runtime is O(n) means that there is a constant c such that the runtime is bounded above by c * n, where n measures the size of the input. For example, finding the number of occurrences of a particular integer in an unsorted array of n integers by the following algorithm is O(n):
int count(int *a, int n, int item) {
    int c = 0;
    for (int i = 0; i < n; i++) {
        if (a[i] == item) c++;
    }
    return c;
}
This is because we have to iterate through the array inspecting each element one at a time.

O(1) does not necessarily mean "quickly". It means that the time it takes is constant, and not based on the size of the input to the function. Constant could be fast or slow. O(n) means that the time the function takes will change in direct proportion to the size of the input to the function, denoted by n. Again, it could be fast or slow, but it will get slower as the size of n increases.

Here is a simple analogy.
Imagine you are downloading movies online. With O(1), if it takes 5 minutes to download one movie, it will still take the same time to download 20 movies. So it doesn't matter how many movies you are downloading, it will take the same time (5 minutes) whether it's one movie or 20. An everyday version of this analogy: when you go to a movie library, whether you are taking one movie or 5, you simply pick them up at once, so you spend the same amount of time.
However, with O(n), if it takes 5 minutes to download one movie, it will take about 50 minutes to download 10 movies. So the time is not constant; it is proportional to the number of movies you are downloading.

It's called Big O notation, and it describes how run time (here, search time) grows for various algorithms.
O(1) means that the worst-case run time is constant. For most situations it means that you don't actually need to search the collection; you can find what you are searching for right away.

"Big O notation" is a way to express the speed of algorithms. n is the amount of data the algorithm is working with. O(1) means that, no matter how much data, it will execute in constant time. O(n) means that it is proportional to the amount of data.

Basically, O(1) means its computation time is constant, while O(n) means it will depend linearly on the size of the input - e.g. looping through an array is O(n), because it depends on the number of items, while calculating the maximum of two ordinary numbers is O(1).
Wikipedia might help as well: http://en.wikipedia.org/wiki/Computational_complexity_theory

An O(1) operation always executes in the same time, regardless of the size of the dataset n.
An example of O(1) would be an ArrayList accessing an element by index.
O(n), also known as linear order, means the cost grows linearly and in direct proportion to the size of the input data.
An example of O(n) would be ArrayList insertion and deletion at a random position. Each such insertion or deletion causes the elements of the internal array to shift left or right to maintain the linear structure, not to mention the occasional creation of a new array and the copying of elements from the old array to the new one, which takes expensive processing time and hence hurts performance.

The easiest way to differentiate O(1) and O(n) is to compare an array and a linked list.
For an array, if you have the right index value, you can access the data instantly.
(If you don't know the index and have to loop through the array, then it won't be O(1) anymore.)
For a linked list, you always need to traverse it from the start, whether you know the position or not.
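A minimal sketch of the linked-list side of that comparison (hypothetical node type, not taken from any particular library): even when you know the position you want, you still have to walk the links to get there.

struct node {
    int value;
    struct node *next;
};

/* O(n) in the worst case: reaching the i-th element requires following
 * i links from the head, so the time grows with the position (and, for
 * the last element, with the length of the list). */
int list_get(const struct node *head, int i) {
    while (i-- > 0)
        head = head->next;
    return head->value;
}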

It means that access time is constant. Whether you're accessing from 100 or 100,000 records, the retrieval time will be the same.
In contrast, O(n) access time would indicate that the retrieval time is directly proportional to the number of records you're accessing from.

It means that the access takes constant time i.e. does not depend on the size of the dataset. O(n) means that the access will depend on the size of the dataset linearly.
The O is also known as big-O.

Introduction to Algorithms: Second Edition by Cormen, Leiserson, Rivest & Stein says on page 44 that
Since any constant is a degree-0 polynomial, we can express any constant function as Theta(n^0), or Theta(1). This latter notation is a minor abuse, however, because it is not clear what variable is tending to infinity. We shall often use the notation Theta(1) to mean either a constant or a constant function with respect to some variable. ... We denote by O(g(n)) ... the set of functions f(n) such that there exist positive constants c and n0 such that 0 <= f(n) <= c*g(n) for all n >= n0. ... Note that f(n) = Theta(g(n)) implies f(n) = O(g(n)), since Theta notation is stronger than O notation.
If an algorithm runs in O(1) time, it means that, asymptotically, its runtime doesn't depend on any variable: there exists at least one positive constant c such that c (i.e. c multiplied by 1) bounds the runtime of the function from above for all inputs of size n above a certain amount. Technically, it's O(n^0) time.

O(1) is like random access. In any random access memory, the time taken to access an element at any location is the same. The time itself can be any value, but the only thing to remember is that the time taken to retrieve the element at the (n-1)th or nth location will be the same (i.e. constant).
Whereas O(n) is dependent on the size of n.

From my perspective,
O(1) means the time to execute an operation is bounded by a constant and does not grow with the input; in time complexity analysis this is what the best case of many algorithms looks like.

Related

Do problem constraints change the time complexity of algorithms?

Let's say that the algorithm involves iterating through a string character by character.
If I know for sure that the length of the string is less than, say, 15 characters, will the time complexity be O(1) or will it remain as O(n)?
There are two aspects to this question - the core of the question is, can problem constraints change the asymptotic complexity of an algorithm? The answer to that is yes. But then you give an example of a constraint (strings limited to 15 characters) where the answer is: the question doesn't make sense. A lot of the other answers here are misleading because they address only the second aspect but try to reach a conclusion about the first one.
Formally, the asymptotic complexity of an algorithm is measured by considering a set of inputs where the input sizes (i.e. what we call n) are unbounded. The reason n must be unbounded is because the definition of asymptotic complexity is a statement like "there is some n0 such that for all n ≥ n0, ...", so if the set doesn't contain any inputs of size n ≥ n0 then this statement is vacuous.
Since algorithms can have different running times depending on which inputs of each size we consider, we often distinguish between "average", "worst case" and "best case" time complexity. Take for example insertion sort:
In the average case, insertion sort has to compare the current element with half of the elements in the sorted portion of the array, so the algorithm does about n^2/4 comparisons.
In the worst case, when the array is in descending order, insertion sort has to compare the current element with every element in the sorted portion (because it's less than all of them), so the algorithm does about n^2/2 comparisons.
In the best case, when the array is in ascending order, insertion sort only has to compare the current element with the largest element in the sorted portion, so the algorithm does about n comparisons.
However, now suppose we add the constraint that the input array is always in ascending order except for its smallest element:
Now the average case does about 3n/2 comparisons,
The worst case does about 2n comparisons,
And the best case does about n comparisons.
Note that it's the same algorithm (insertion sort), but because we're considering a different set of inputs, the algorithm has different performance characteristics: we get a different average-case complexity because we're taking an average over a different set, and likewise a different worst-case complexity because we're choosing the worst inputs from a different set. Hence, yes, adding a problem constraint can change the time complexity even if the algorithm itself is not changed.
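One way to see the "same algorithm, different input set" effect is to instrument insertion sort and count comparisons on inputs drawn from the constrained set described above. This is my own sketch, not code from the answer; for the constrained "worst case" input the count comes out near the 2n figure mentioned above.

#include <stdio.h>

/* Insertion sort that reports how many element comparisons it performed. */
long insertion_sort(int *a, int n) {
    long comparisons = 0;
    for (int i = 1; i < n; i++) {
        int key = a[i];
        int j = i - 1;
        while (j >= 0) {
            comparisons++;
            if (a[j] <= key)
                break;           /* found the insertion point */
            a[j + 1] = a[j];     /* shift larger element right */
            j--;
        }
        a[j + 1] = key;
    }
    return comparisons;
}

int main(void) {
    /* Ascending order except for the smallest element at the end: the
     * worst case of the constrained input set, roughly 2n comparisons. */
    int a[] = {2, 3, 4, 5, 6, 7, 8, 9, 1};
    int n = (int)(sizeof a / sizeof a[0]);
    printf("n = %d, comparisons = %ld\n", n, insertion_sort(a, n));
    return 0;
}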
However, now let's consider your example of an algorithm which iterates over each character in a string, with the added constraint that the string's length is at most 15 characters. Here, it does not make sense to talk about the asymptotic complexity, because the input sizes n in your set are not unbounded. This particular set of inputs is not valid for doing such an analysis with.
In the mathematical sense, yes. Big-O notation describes the behavior of an algorithm in the limit, and if you have a fixed upper bound on the input size, that implies it has a maximum constant complexity.
That said, context is important. All computers have a realistic limit to the amount of input they can accept (a technical upper bound). Just because nothing in the world can store a yottabyte of data doesn't mean saying every algorithm is O(1) is useful! It's about applying the mathematics in a way that makes sense for the situation.
Here are two contexts for your example, one where it makes sense to call it O(1), and one where it does not.
"I decided I won't put strings of length more than 15 into my program, therefore it is O(1)". This is not a super useful interpretation of the runtime. The actual time is still strongly tied to the size of the string; a string of size 1 will run much faster than one of size 15 even if there is technically a constant bound. In other words, within the constraints of your problem there is still a strong correlation to n.
"My algorithm will process a list of n strings, each with maximum size 15". Here we have a different story; the runtime is dominated by having to run through the list! There's a point where n is so large that the time to process a single string doesn't change the correlation. Now it makes sense to consider the time to process a single string O(1), and therefore the time to process the whole list O(n)
That said, Big-O notation doesn't have to only use one variable! There are problems where upper bounds are intrinsic to the algorithm, but you wouldn't put a bound on the input arbitrarily. Instead, you can describe each dimension of your input as a different variable:
n = list length
s = maximum string length
=> O(n*s)
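As a sketch of an algorithm whose cost naturally carries both variables (a hypothetical function, assuming NUL-terminated C strings):

#include <stddef.h>

/* Count the characters across n strings.  The outer loop runs n times and
 * each inner scan is bounded by the maximum string length s, so the total
 * work is O(n * s). */
size_t total_chars(const char *strings[], size_t n) {
    size_t total = 0;
    for (size_t i = 0; i < n; i++)
        for (const char *p = strings[i]; *p != '\0'; p++)
            total++;
    return total;
}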
It depends.
If your algorithm's requirements would grow if larger inputs were provided, then the algorithmic complexity can (and should) be evaluated independently of the inputs. So iterating over all the elements of a list, array, string, etc., is O(n) in relation to the length of the input.
If your algorithm is tied to the limited input size, then that fact becomes part of your algorithmic complexity. For example, maybe your algorithm only iterates over the first 15 characters of the input string, regardless of how long it is. Or maybe your business case simply indicates that a larger input would be an indication of a bug in the calling code, so you opt to immediately exit with an error whenever the input size is larger than a fixed number. In those cases, the algorithm will have constant requirements as the input length tends toward very large numbers.
From Wikipedia
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.
...
In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
In practice, almost all inputs have limits: you cannot input a number larger than what's representable by the numeric type, or a string that's larger than the available memory space. So it would be silly to say that any limits change an algorithm's asymptotic complexity. You could, in theory, use 15 as your asymptote (or "particular value"), and therefore use Big-O notation to define how an algorithm grows as the input approaches that size. There are some algorithms with such terrible complexity (or some execution environments with limited-enough resources) that this would be meaningful.
But if your argument (string length) does not tend toward a large enough value for some aspect of your algorithm's complexity to define the growth of its resource requirements, it's arguably not appropriate to use asymptotic notation at all.
NO!
The time complexity of an algorithm is independent of program constraints. Here is (a simple) way of thinking about it:
Say your algorithm iterates over the string and appends all consonants to a list.
Now, the time complexity of the iteration is O(n). This means that the time taken will increase roughly in proportion to the length of the string. (The absolute time would still vary depending on the time taken by the if statement and branch prediction.)
The fact that you know that the string is between 1 and 15 characters long will not change how the program runs, it merely tells you what to expect.
For example, knowing that your values are going to be less than 65000 you could store them in a 16-bit integer and not worry about Integer overflow.
Do problem constraints change the time complexity of algorithms?
No.
"If I know for sure that the length of the string is less than, say, 15 characters ..."
We already know the length of the string is less than SIZE_MAX. Knowing an upper fixed bound for string length does not make the time complexity O(1).
Time complexity remains O(n).
Big-O measures the complexity of algorithms, not of code. It means Big-O does not know the physical limitations of computers. A Big-O measure today will be the same in 1 million years when computers, and programmers alike, have evolved beyond recognition.
So restrictions imposed by today's computers are irrelevant for Big-O. Even though any loop is finite in code, that need not be the case in algorithmic terms. The loop may be bounded by a constant or may grow with the input; it is up to the programmer/Big-O analyst to decide, because only s/he knows which algorithm the code intends to implement. If the number of loop iterations is bounded by a constant, the loop has a Big-O complexity of O(1), because there is no asymptotic growth with N. If, on the other hand, the number of loop iterations grows with N, the Big-O complexity is O(N), because there is asymptotic growth with N.
The above is straight from the definition of Big-O complexity. There are no ifs or buts. The way the OP describes the loop makes it O(1).
A fundamental requirement of big-O notation is that parameters do not have an upper limit. Suppose performing an operation on N elements takes a time precisely equal to 3E24*N*N*N / (1E24+N*N*N) microseconds. For small values of N, the execution time would be proportional to N^3, but as N gets larger the N^3 term in the denominator would start to play an increasing role in the computation.
If N is 1, the time would be 3 microseconds.
If N is 1E3, the time would be about 3E33/1E24, i.e. 3.0E9.
If N is 1E6, the time would be about 3E42/1E24, i.e. 3.0E18
If N is 1E7, the time would be 3E45/1.001E24, i.e. ~2.997E21
If N is 1E8, the time would be about 3E48/2E24, i.e. 1.5E24
If N is 1E9, the time would be 3E51/1.001E27, i.e. ~2.997E24
If N is 1E10, the time would be about 3E54/1.000001E30, i.e. 2.999997E24
As N gets bigger, the time would continue to grow, but no matter how big N gets the time would always be less than 3.000E24 microseconds. Thus, the time required for this algorithm would be O(1), because one could specify a constant k such that the time necessary to perform the computation with size N would be less than k.
For any practical value of N, the time required would be proportional to N^3, but from a big-O standpoint the worst-case time requirement is constant. The fact that the time changes rapidly in response to small values of N is irrelevant to the "big picture" behaviour, which is what big-O notation measures.
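If it helps to see the saturation numerically, here is a throwaway sketch (my own, just evaluating the formula from this answer) that prints the modelled time for a few values of N:

#include <stdio.h>

/* Runtime model from the answer: 3E24 * N^3 / (1E24 + N^3) microseconds.
 * The value grows with N but can never exceed 3E24. */
static double model_time(double n) {
    double n3 = n * n * n;
    return 3e24 * n3 / (1e24 + n3);
}

int main(void) {
    const double ns[] = {1, 1e3, 1e6, 1e8, 1e10};
    for (int i = 0; i < 5; i++)
        printf("N = %.0e -> %.6e microseconds\n", ns[i], model_time(ns[i]));
    return 0;
}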
It will be O(1) i.e. constant.
This is because for calculating time complexity or worst-case time complexity (to be precise), we think of the input as a huge chunk of data and the length of this data is assumed to be n.
Let us say, we do some maximum work C on each part of this input data, which we will consider as a constant.
In order to get the worst-case time complexity, we need to loop through each part of the input data i.e. we need to loop n times.
So, the time complexity will be:
n x C.
Since you fixed n to be less than 15 characters, n can also be assumed as a constant number.
Hence in this case:
n = constant and,
(maximum constant work done) = C = constant
So time complexity is n x C = constant x constant = constant i.e. O(1)
Edit
The reason why I have said n = constant and C = constant for this case is that the time difference for calculations on such a small n is so insignificant (compared to n being a very large number) on modern computers that we can treat it as constant.
Otherwise, every function ever built would take some time, and we couldn't say things like:
lookup time is constant for hashmaps

What is the overall O(n) time complexity of O(sum(a)) if a is an array of integers and n is the length of the array?

I’m having a hard time using O(n) principles to generalize the time complexity of an algorithm whose more specific time complexity is O(sum(a)) where a is an array of integers.
My intuition is that this time complexity should generalize to O(n), as you can think of it as a "linear" sum of k_i values that occur n times, where k_i is the integer value in the array, making it O(n) (k=1 for a straight-up O(n) case).
But it doesn’t seem to be exactly the same as O(n) - the value of k could be much larger than n, and if all these k values are larger you have something that could be O(n^2) or O(n^3) depending on how large that value is.
Is this something to take into account for O(n) complexity where n is the length of the array? Should I actually be defining n as the sum of all elements in the array instead of the length of the array?
In general, what would be the best way to think about this?
Fundamentally, we want to describe the runtime of an algorithm based on the input. The "runtime" is a vague term, that is often swept under the rug. For example, the "runtime" of a sorting algorithm or a hashtable operation is measured in number of comparisons, but using "runtime" to mean the number of basic operations (which are also usually only vaguely defined) is also possible.
There are two choices (or simplifications) often made when calculating runtime. The first is to ignore the actual input, and to use the size of the input (measured somehow) instead. This size is usually denoted n. The second is to use big-O notation to describe the worst case (or best case, or average, or amortized...).
Neither of these choices is always necessary, and sometimes, they won't make sense. To repeat, since this is the crux of the answer: describing runtimes in big-O of n is not the only way to describe runtimes and sometimes it makes no sense to do so.
For example, in the case of an algorithm that runs in O(sum(a)) time:
func f(a) {
    t = 0
    for x in a {
        for i = 1..x {
            t += 1
        }
    }
}
It's not useful to describe the runtime of this using the length of the input array a. It's not useful because the length of a doesn't say anything about the worst-case runtime.
Saying that t is incremented sum(a) times is a useful statement about the runtime of the program. It doesn't use big-O complexity notation.
And if you do want to express that in big-O notation, you can say that the runtime of this code is O(sum(a)). This blurs exactly what you're measuring in the runtime, because you can be including the cost of performing the statements other than incrementing t.
And going back to the example, you could (and if you were studying complexity classes, you probably would) say n is the size (in bits) of the input array. Then you could say something about the runtime (measured in basic operations): it's O(2^n), since the worst case input is an array with one element which takes the value 2^n-1 (*note).
*note: this ignores some technical details about how to encode an array using bits.
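(For concreteness, the pseudocode above could be rendered in C roughly as follows; this is my transcription, assuming the elements are non-negative.)

/* t is incremented once per unit in every element, so the work done is
 * proportional to sum(a), not to the array length n. */
long f(const int *a, int n) {
    long t = 0;
    for (int i = 0; i < n; i++)
        for (int x = 1; x <= a[i]; x++)
            t += 1;
    return t;
}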

How can the worst case for an algorithm have different bounds?

I've been trying to figure this out all day. Some other threads address this, but I really don't understand the answers. There are also many answers that contradict one another.
I understand that an algorithm will never take longer than the upper bound and never be faster than the lower bound. However, I didn't know an upper bound existed for best case time and a lower bound existed for worst case time. This question really threw me for a loop. I can't wrap my head around this... a given run time can have a different upper and lower bound?
For example, if someone asked: "Show that the worst-case running time of some algorithm on a heap of size n is Big Omega(lg(n))". How do you possibly get a lower bound, any bound for that matter, when given a run time?
So, in summation, an algorithm's worst case upper bound can be different than its worst case lower bound? How can this be? Once given the case, don't bounds become irrelevant? Trying to independent study algorithms and I really need to wrap my head around this first.
The meat of my accepted answer to that question is a function whose running time oscillates between n^2 and n^3 depending on whether n is odd. The point that I was trying to make is that sometimes bounds of the form O(n^k) and Omega(n^k) aren't sufficiently descriptive, even though the worst case running time is a perfectly well defined function (which, like all functions, is its own best lower and upper bound). This happens with more natural functions like n log n, which is Omega(n^k) but not O(n^k) for k ≤ 1, and O(n^k) but not Omega(n^k) for k > 1 (and hence not Theta(n^k) regardless of how we choose a constant k).
Suppose you write a program like this to find the smallest prime factor of an integer:
function lpf(n):
    for i = 2 to n
        if n % i == 0 then return i
If you run the function on the number 10^11 + 3, it will take 10^11 + 2 steps. If you run it on the number 10^11 + 4 it will take just one step. So the function's best-case time is O(1) steps and its worst-case time is O(n) steps.
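A hedged C rendering of that function (my transcription of the pseudocode above), with the same behaviour: even inputs return after one loop iteration, while a prime input forces the loop to run all the way to n.

/* Smallest prime factor by trial division: best case one iteration (any
 * even n), worst case about n iterations (n prime). */
long lpf(long n) {
    for (long i = 2; i <= n; i++)
        if (n % i == 0)
            return i;
    return n;  /* reached only for n < 2 */
}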
Big O notation describes efficiency in terms of runtime iterations, generally based on the size of the input data set.
The notation is written in its simplest form, ignoring constant multipliers and additive terms but keeping the dominant (e.g. polynomial or exponential) term. If you have an operation that is O(1), it is executed in constant time, no matter the input data.
However if you have something such as O(N) or O(log(N)), they will execute at different rates depending on input data.
The upper and lower bounds describe the largest and smallest numbers of iterations, respectively, that an algorithm can take.
Example: for O(N), the upper bound corresponds to the largest input data and the lower bound to the smallest.
Extra sources:
Big O Cheat Sheet and MIT Lecture Notes
UPDATE:
Looking at the Stack Overflow question mentioned above, that algorithm is broken into three parts, so it has 3 possible types of runtime, depending on the data. Really, this is three different algorithms designed to handle different data values. An algorithm is generally classified with just one notation of efficiency, namely the bound that holds for ALL possible values of N.
In the case of O(N^2), larger data will take quadratically longer, while a smaller input will be processed quickly. The algorithm determines how quickly a data set will be run, yet bounds are given depending on the range of data the algorithm is designed to handle.
I will try to explain it in the quicksort algorithm.
In quicksort you have an array and choose an element as pivot. The next step is to partition the input array into two arrays. The first one will contain elements < pivot and the second one elements > pivot.
Now assume you apply quicksort to an already sorted list, and the pivot element is always the last element of the array. The result of the partition will be an array of size n-1 and an array of size 1 (the pivot element). This results in a runtime of O(n*n). Now assume instead that the pivot element always splits the array into two equally sized arrays. Then in every step the array size is cut in half, which results in O(n log n). I hope this example makes it a bit clearer for you.
Another well known sorting algorithm is mergesort. Mergesort always has a runtime of O(n log n). In mergesort you cut the array down until only one element is left, and then climb up the call stack, merging the one-element arrays, after that the arrays of size two, and so on.
Let's say you implement a set using an array. To insert an element you simply put it in the next available bucket. If there is no available bucket you increase the capacity of the array by a value m.
For the insert algorithm, "there is not enough space" is the worst case.
insert(S, e)
    if size(S) >= capacity(S)
        reserve(S, size(S) + m)
    put(S, e)
Assume we never delete elements. By keeping track of the last available position, put, size and capacity are Θ(1) in time and space.
What about reserve? If it is implemented like realloc in C, in the best case you just allocate new memory at the end of the existing memory (the best case for reserve), and in the worst case you have to move all existing elements as well (the worst case for reserve).
The worst-case lower bound for insert is the best case of reserve(), which is linear in m if we don't nitpick. insert in the worst case is Ω(m) in space and time.
The worst-case upper bound for insert is the worst case of reserve(), which is linear in m+n. insert in the worst case is O(m+n) in space and time.
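A minimal C sketch of such a set built on realloc (the growth step m and the struct layout are my own assumptions, not code from the answer):

#include <stdlib.h>

#define M 16  /* growth step m: a hypothetical constant */

struct set {
    int *data;
    size_t size;
    size_t capacity;
};

/* insert: O(1) when a bucket is free; otherwise reserve() may have to move
 * all `size` existing elements to a new block, giving the O(m+n) worst-case
 * upper bound (and Omega(m) lower bound) discussed above. */
int set_insert(struct set *s, int e) {
    if (s->size >= s->capacity) {
        int *grown = realloc(s->data, (s->capacity + M) * sizeof *grown);
        if (grown == NULL)
            return -1;              /* out of memory */
        s->data = grown;
        s->capacity += M;
    }
    s->data[s->size++] = e;         /* put(S, e) */
    return 0;
}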

why O(1) != O(log(n)) ? for n=[integer, long, ...]

for example, say n = Integer.MAX_VALUE or 2^123; then log(n) is 32 or 123, so a small integer. Isn't it O(1)?
What is the difference? I think the reason is that O(1) is constant but O(log(n)) is not. Any other ideas?
If n is bounded above, then complexity classes involving n make no sense. There is no such thing as "in the limit as 2^123 approaches infinity", except in the old joke that "a pentagon approximates a circle, for sufficiently large values of 5".
Generally, when analysing the complexity of code, we pretend that the input size isn't bounded above by the resource limits of the machine, even though it is. This does lead to some slightly odd things going on around log n, since if n has to fit into a fixed-size int type, then log n has quite a small bound, so the bound is more likely to be useful/relevant.
So sometimes, we're analysing a slightly idealised version of the algorithm, because the actual code written cannot accept arbitrarily large input.
For example, your average quicksort formally uses Theta(log n) stack in the worst case, obviously so with the fairly common implementation that call-recurses on the "small" side of the partition and loop-recurses on the "big" side. But on a 32 bit machine you can arrange to in fact use a fixed-size array of about 240 bytes to store the "todo list", which might be less than some other function you've written based on an algorithm that formally has O(1) stack use. The morals are that implementation != algorithm, complexity doesn't tell you anything about small numbers, and any specific number is "small".
If you want to account for bounds, you could say that, for example, your code to sort an array is O(1) running time, because the array has to be below the size that fits in your PC's address space, and hence the time to sort it is bounded. However, you will fail your CS assignment if you do, and you won't be providing anyone with any useful information :-)
Obviously if you know that the input will always have a fixed number of elements, the algorithm will always run in constant time. Big-O notation is used to denote worst-case running time, which describes the limit when the number of elements grows infinitely large.
The difference is that n isn't fixed. The idea behind Big-O notation is to get an idea of how the size of the input affects the running time (or memory usage). So if an algorithm always takes the same amount of time, whether n = 1 or n = Integer.MAX_VALUE, we say it is O(1). If the algorithm takes a unit of time longer each time the input size doubles, then we say it is O(log n).
Edit: to answer your specific question on the difference between O(1) and O(logn), I'll give you an example. Let's say we want an algorithm that will find the min element in an unsorted array. One approach is to go through each element and keep track of the current min. Another approach is to sort the array and then return the first element.
The first algorithm is O(n), and the second algorithm is O(n log n). So let's say we start with an array of 16 elements. The first algorithm will run in time 16, the second algorithm will run in time 16*4. If we increase it to 17, then it becomes 17 and 17*4. We might naively say that the second algorithm takes 4 times as long as the first algorithm (if we treat the log n component as constant).
But let's look at what happens when our array contains 2^32 elements. Now the first algorithm takes 2^32 time to complete, where our second algorithm takes 32*2^32 time to complete. It takes 32 times as long. Yes, it's a small difference, but it is still a difference. If the first algorithm takes 1 minute, the second algorithm will take over half an hour!
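For reference, the first (O(n)) approach is just a single pass; a small sketch, assuming a non-empty array:

/* O(n): one pass over the array, remembering the smallest value seen.
 * Doubling the number of elements doubles the work; there is no extra
 * log(n) factor as there is with the sort-first approach. */
int find_min(const int *a, int n) {
    int min = a[0];
    for (int i = 1; i < n; i++)
        if (a[i] < min)
            min = a[i];
    return min;
}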
I think you will get a better idea if it is called O(n^0).
It is a scaling function depending on the input variable N. It is a function, not a number; you should never assume any particular number for the variable N.
It is just like saying that a function f(x) is 3 because f(100) = 3; that is wrong. It is a function, not any particular number. A constant function f(x) = 1 is still a function, and it will never be equal to another function g(x) = N, i.e. g(x) ≠ f(x).
It's the growth rate that you want to look at. O(1) implies no growth at all, while O(log n) does have growth. Even though the growth is small, it is still growth.
You're not thinking big enough. Any algorithm that runs on a computer will either run forever or terminate after some finite number of steps; since the computer is only a finite state machine, you cannot write algorithms that run for an arbitrary amount of time and then terminate. By that argument, Big-O notation is only theoretical and has no purpose in a real-life computer program. Even O(2^n) hits an upper limit at O(2^INT_MAX), which is equivalent to O(1).
Realistically, though, Big-O can help you out if you know the constant factors. Even if an algorithm has an upper bound of O(log n), and n can have 32 bits, that could mean the difference between a request taking 1 second and 32 seconds.
Big-O shows how running time (or memory, etc) changes as the size of problem changes.
When size of the problem gets 10 times bigger, an O(n) solution takes 10 times as long, an O(log(n)) solution takes a bit longer, and an O(1) solution takes the same time: O(1) means 'changes as fast as constant 1', but constants don't change.
Familiarize yourself with the big-O notation in a bit more detail.
There is a reason why you leave "O(n)" in, and consider dropping "O(log n)". They both are "constants": the former is less than 2^32, and the latter is less than 32. But you nevertheless have a natural feeling that you can't call O(n) O(1).
However, if log(n) < 32, it means that an O(n*log n) algorithm works at most thirty-two times slower than its O(n) version. Big enough to keep writing the "log n"s?

what does O(N) mean [duplicate]

Possible Duplicate:
What is Big O notation? Do you use it?
Hi all,
fairly basic scalability notation question.
I recently received a comment on a post about my Python ordered-list implementation:
"but beware that your 'ordered set' implementation is O(N) for insertions"
Which is great to know, but I'm not sure what this means.
I've seen notation such as n(o) o(N), N(o-1) or N(o*o)
what does the above notation refer to?
The comment was referring to the Big-O Notation.
Briefly:
O(1) means in constant time - independent of the number of items.
O(N) means in proportion to the number of items.
O(log N) means a time proportional to log(N).
Basically any 'O' notation means an operation will take time up to a maximum of k*f(N)
where:
k is a constant multiplier
f() is a function that depends on N
O(n) is Big O Notation and refers to the complexity of a given algorithm. n refers to the size of the input; in your case it's the number of items in your list.
O(n) means that your algorithm will take on the order of n operations to insert an item. e.g. looping through the list once (or a constant number of times such as twice or only looping through half).
O(1) means it takes a constant time, that it is not dependent on how many items are in the list.
O(n^2) means that for every insert, it takes n*n operations, i.e. 1 operation for 1 item, 4 operations for 2 items, 9 operations for 3 items. As you can see, O(n^2) algorithms become inefficient for handling large numbers of items.
For lists O(n) is not bad for insertion, but not the quickest. Also note that O(n/2) is considered as being the same as O(n) because they both grow at the same rate with n.
It's called Big O Notation: http://en.wikipedia.org/wiki/Big_O_notation
So saying that insertion is O(n) means that you have to walk through the whole list (or half of it -- big O notation ignores constant factors) to perform the insertion.
This looks like a nice introduction: http://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/
Specifically, O(n) means that if there are 2x as many items in the list, it'll take no more than twice as long; if there are 50 times as many, it'll take no more than 50 times as long. See the Wikipedia article dreeves pointed out for more details.
Edit (the "no more than" qualifiers above): It was pointed out that Big-O represents only the upper bound, so if there's twice as many elements in the list, insertion will take at most twice as long, and if there's 50 times as many elements, it would take at most 50 times as long.
If it was additionally Ω(n) (Big Omega of n) then it would take at least twice as long for a list that is twice as big. If your implementation is both O(n) and Ω(n), meaning that it'll take both at least and at most twice as long for a list twice as big, then it can be said to be Θ(n) (Big Theta of n), meaning that it'll take exactly twice as long if there are twice as many elements.
According to Wikipedia (and personal experience, being guilty of it myself) Big-O is often used where Big-Theta is what is meant. It would be technically correct to call your function O(n^n^n^n) because all Big-O says is that your function is no slower than that, but no one would actually say that other than to prove a point because it's not very useful and misleading information, despite it being technically accurate.
It refers to how complex your program is, i.e., how many operations it takes to actually solve a problem. O(n) means that an operation takes as many steps as there are items in your list, which for insertion is slow. Likewise, O(n^2) means that an operation takes "n squared" steps to accomplish, and so on... The "O" is for Order of Magnitude, and the expression in the parentheses is always related to the number of items being manipulated in the procedure.
Short answer: It means that the processing time is in linear relation to the size of the input. E.g. if the size of the input (the length of the list) triples, the processing time (roughly) triples. And if it increases a thousandfold, the processing time also increases by the same magnitude.
Long answer: See the links provided by Ian P and dreeves
This may help:
http://en.wikipedia.org/wiki/Big_O_notation#Orders_of_common_functions
O(n): Finding an item in an unsorted list or a malformed tree (worst case); adding two n-digit numbers.
Good luck!
Wikipedia explains it far better than I can; however, it means that if your list size is N, it takes at most N loops/iterations to insert an item. (In effect, you have to iterate over the whole list.)
If you want a better understanding, there is a free book from Berkeley that goes more in-depth about the notation.
