Do problem constraints change the time complexity of algorithms?

Let's say that the algorithm involves iterating through a string character by character.
If I know for sure that the length of the string is less than, say, 15 characters, will the time complexity be O(1) or will it remain as O(n)?

There are two aspects to this question - the core of the question is, can problem constraints change the asymptotic complexity of an algorithm? The answer to that is yes. But then you give an example of a constraint (strings limited to 15 characters) where the answer is: the question doesn't make sense. A lot of the other answers here are misleading because they address only the second aspect but try to reach a conclusion about the first one.
Formally, the asymptotic complexity of an algorithm is measured by considering a set of inputs where the input sizes (i.e. what we call n) are unbounded. The reason n must be unbounded is because the definition of asymptotic complexity is a statement like "there is some n0 such that for all n ≥ n0, ...", so if the set doesn't contain any inputs of size n ≥ n0 then this statement is vacuous.
Since algorithms can have different running times depending on which inputs of each size we consider, we often distinguish between "average", "worst case" and "best case" time complexity. Take for example insertion sort:
In the average case, insertion sort has to compare the current element with half of the elements in the sorted portion of the array, so the algorithm does about n^2/4 comparisons.
In the worst case, when the array is in descending order, insertion sort has to compare the current element with every element in the sorted portion (because it's less than all of them), so the algorithm does about n^2/2 comparisons.
In the best case, when the array is in ascending order, insertion sort only has to compare the current element with the largest element in the sorted portion, so the algorithm does about n comparisons.
However, now suppose we add the constraint that the input array is always in ascending order except for its smallest element:
Now the average case does about 3n/2 comparisons,
The worst case does about 2n comparisons,
And the best case does about n comparisons.
Note that it's the same algorithm, insertion sort; but because we're considering a different set of inputs, on which the algorithm has different performance characteristics, we end up with a different average-case complexity (we're taking an average over a different set) and a different worst-case complexity (we're choosing the worst inputs from a different set). Hence, yes, adding a problem constraint can change the time complexity even if the algorithm itself is not changed.
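To make this concrete, here is a minimal Python sketch (not part of the original answer; the function name and the choice of n = 1000 are illustrative only) that counts insertion-sort comparisons on a random permutation and on an input satisfying the constraint above:
import random

def insertion_sort_comparisons(a):
    # Standard insertion sort on a copy of the input; returns the number
    # of element comparisons performed.
    a = list(a)
    comparisons = 0
    for i in range(1, len(a)):
        j = i
        while j > 0:
            comparisons += 1              # compare a[j-1] with a[j]
            if a[j - 1] > a[j]:
                a[j - 1], a[j] = a[j], a[j - 1]
                j -= 1
            else:
                break
    return comparisons

n = 1000
unconstrained = random.sample(range(n), n)   # random permutation: about n^2/4 comparisons
constrained = list(range(1, n)) + [0]        # ascending except the smallest element: about 2n comparisons
print(insertion_sort_comparisons(unconstrained))
print(insertion_sort_comparisons(constrained))
On a typical run this prints something near 250,000 (about n^2/4) for the random input and 1,997 (about 2n) for the constrained one.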
However, now let's consider your example of an algorithm which iterates over each character in a string, with the added constraint that the string's length is at most 15 characters. Here it does not make sense to talk about the asymptotic complexity, because the input sizes n in your set are bounded. This particular set of inputs is not valid for such an analysis.

In the mathematical sense, yes. Big-O notation describes the behavior of an algorithm in the limit, and if you have a fixed upper bound on the input size, that implies it has a maximum constant complexity.
That said, context is important. All computers have a realistic limit to the amount of input they can accept (a technical upper bound). Just because nothing in the world can store a yottabyte of data doesn't mean saying every algorithm is O(1) is useful! It's about applying the mathematics in a way that makes sense for the situation.
Here are two contexts for your example, one where it makes sense to call it O(1), and one where it does not.
"I decided I won't put strings of length more than 15 into my program, therefore it is O(1)". This is not a super useful interpretation of the runtime. The actual time is still strongly tied to the size of the string; a string of size 1 will run much faster than one of size 15 even if there is technically a constant bound. In other words, within the constraints of your problem there is still a strong correlation to n.
"My algorithm will process a list of n strings, each with maximum size 15". Here we have a different story; the runtime is dominated by having to run through the list! There's a point where n is so large that the time to process a single string doesn't change the correlation. Now it makes sense to consider the time to process a single string O(1), and therefore the time to process the whole list O(n)
That said, Big-O notation doesn't have to only use one variable! There are problems where upper bounds are intrinsic to the algorithm, but you wouldn't put a bound on the input arbitrarily. Instead, you can describe each dimension of your input as a different variable:
n = list length
s = maximum string length
=> O(n*s)
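For instance, here is a minimal sketch (the function and variable names are just for illustration) where the two dimensions show up naturally:
def count_vowels(strings):
    # Touches every character of every string, so the work is O(n*s),
    # where n = len(strings) and s = the maximum string length.
    total = 0
    for s in strings:            # n iterations
        for ch in s:             # at most s iterations each
            if ch in "aeiou":
                total += 1
    return total
If s is guaranteed to be at most 15, the inner loop contributes only a constant factor and the whole thing collapses to O(n).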

It depends.
If your algorithm's requirements would grow if larger inputs were provided, then the algorithmic complexity can (and should) be evaluated independently of any practical limit on the inputs. So iterating over all the elements of a list, array, string, etc., is O(n) in relation to the length of the input.
If your algorithm is tied to the limited input size, then that fact becomes part of your algorithmic complexity. For example, maybe your algorithm only iterates over the first 15 characters of the input string, regardless of how long it is. Or maybe your business case simply indicates that a larger input would be an indication of a bug in the calling code, so you opt to immediately exit with an error whenever the input size is larger than a fixed number. In those cases, the algorithm will have constant requirements as the input length tends toward very large numbers.
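As an illustration of those two cases, here is a hedged sketch (the names and the limit of 15 are placeholders for whatever your business rule actually is):
PREFIX_LIMIT = 15  # hypothetical constraint from the business case

def process_prefix(s):
    # Only ever looks at the first PREFIX_LIMIT characters, no matter how
    # long s is, so the work is bounded by a constant: O(1).
    checksum = 0
    for ch in s[:PREFIX_LIMIT]:
        checksum = (checksum * 31 + ord(ch)) & 0xFFFFFFFF
    return checksum

def process_validated(s):
    # Treats an oversized input as a bug in the caller and exits early;
    # the remaining work is bounded by the constant limit, so O(1).
    if len(s) > PREFIX_LIMIT:
        raise ValueError("input longer than the agreed maximum")
    return sum(ord(ch) for ch in s)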
From Wikipedia
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.
...
In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
In practice, almost all inputs have limits: you cannot input a number larger than what's representable by the numeric type, or a string that's larger than the available memory space. So it would be silly to say that any limits change an algorithm's asymptotic complexity. You could, in theory, use 15 as your asymptote (or "particular value"), and therefore use Big-O notation to define how an algorithm grows as the input approaches that size. There are some algorithms with such terrible complexity (or some execution environments with limited-enough resources) that this would be meaningful.
But if your argument (string length) does not tend toward a large enough value for some aspect of your algorithm's complexity to define the growth of its resource requirements, it's arguably not appropriate to use asymptotic notation at all.

NO!
The time complexity of an algorithm is independent of problem constraints. Here is a (simple) way of thinking about it:
Say your algorithm iterates over the string and appends all consonants to a list.
Now, the time complexity of that iteration is O(n). This means that the time taken will increase roughly in proportion to the length of the string. (The actual time would still vary with things like the cost of the if statement and branch prediction.)
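A sketch of the algorithm being described might look like this (purely illustrative):
def consonants(s):
    # One pass over the string, so the number of iterations grows
    # linearly with len(s): O(n).
    result = []
    for ch in s:
        if ch.isalpha() and ch.lower() not in "aeiou":
            result.append(ch)
    return result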
The fact that you know that the string is between 1 and 15 characters long will not change how the program runs, it merely tells you what to expect.
For example, knowing that your values are going to be less than 65000 you could store them in a 16-bit integer and not worry about Integer overflow.

Do problem constraints change the time complexity of algorithms?
No.
"If I know for sure that the length of the string is less than, say, 15 characters ..."
We already know the length of the string is less than SIZE_MAX. Knowing a fixed upper bound for the string length does not make the time complexity O(1).
Time complexity remains O(n).

Big-O measures the complexity of algorithms, not of code. It means Big-O does not know the physical limitations of computers. A Big-O measure today will be the same in 1 million years when computers, and programmers alike, have evolved beyond recognition.
So restrictions imposed by today's computers are irrelevant for Big-O. Even though any loop is finite in code, that need not be the case in algorithmic terms. The loop may be bounded or unbounded. It is up to the programmer/Big-O analyst to decide. Only s/he knows which algorithm the code intends to implement. If the number of loop iterations is bounded by a constant, the loop has a Big-O complexity of O(1) because there is no asymptotic growth with N. If, on the other hand, the number of loop iterations grows with the input, the Big-O complexity is O(N) because there is asymptotic growth with N.
The above is straight from the definition of Big-O complexity. There are no ifs or buts. The way the OP describes the loop makes it O(1).
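The distinction being drawn is between a loop whose bound is a fixed constant and one whose bound grows with the input; a minimal sketch of the two (names are illustrative):
def bounded_loop(s):
    # The number of iterations is capped by a fixed constant (15),
    # independent of len(s): O(1).
    acc = 0
    for ch in s[:15]:
        acc += ord(ch)
    return acc

def unbounded_loop(s):
    # The number of iterations is len(s), which grows with the input: O(n).
    acc = 0
    for ch in s:
        acc += ord(ch)
    return acc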

A fundamental requirement of big-O notation is that parameters do not have an upper limit. Suppose performing an operation on N elements takes a time precisely equal to 3E24*N*N*N / (1E24+N*N*N) microseconds. For small values of N, the execution time would be proportional to N^3, but as N gets larger the N^3 term in the denominator would start to play an increasing role in the computation.
If N is 1, the time would be 3 microseconds.
If N is 1E3, the time would be about 3E33/1E24, i.e. 3.0E9.
If N is 1E6, the time would be about 3E42/1E24, i.e. 3.0E18
If N is 1E7, the time would be 3E45/1.001E24, i.e. ~2.997E21
If N is 1E8, the time would be about 3E48/2E24, i.e. 1.5E24
If N is 1E9, the time would be 3E51/1.001E27, i.e. ~2.997E24
If N is 1E10, the time would be about 3E54/1.000001E30, i.e. 2.999997E24
As N gets bigger, the time would continue to grow, but no matter how big N gets the time would always be less than 3.000E24 microseconds. Thus, the time required for this algorithm would be O(1), because one could specify a constant k such that the time necessary to perform the computation with size N would be less than k.
For any practical value of N, the time required would be proportional to N^3, but from a big-O standpoint the worst-case time requirement is constant. The fact that the time changes rapidly in response to small values of N is irrelevant to the "big picture" behaviour, which is what big-O notation measures.
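Here is a small sketch that simply evaluates the cost function given above, to show the plateau numerically:
def runtime_microseconds(n):
    # The hypothetical cost function from this answer:
    # 3E24 * N^3 / (1E24 + N^3) microseconds.
    return 3e24 * n**3 / (1e24 + n**3)

for n in (1, 1e3, 1e6, 1e7, 1e8, 1e9, 1e10):
    print(f"N = {n:.0e}: {runtime_microseconds(n):.6e} microseconds")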

It will be O(1) i.e. constant.
This is because for calculating time complexity or worst-case time complexity (to be precise), we think of the input as a huge chunk of data and the length of this data is assumed to be n.
Let us say, we do some maximum work C on each part of this input data, which we will consider as a constant.
In order to get the worst-case time complexity, we need to loop through each part of the input data i.e. we need to loop n times.
So, the time complexity will be:
n x C.
Since you fixed n to be less than 15 characters, n can also be assumed as a constant number.
Hence in this case:
n = constant and,
(maximum constant work done) = C = constant
So time complexity is n x C = constant x constant = constant i.e. O(1)
Edit
The reason I have said n = constant and C = constant for this case is that the time difference for doing the calculations for such a small n is so insignificant (compared to n being a very large number) on modern computers that we can assume it to be constant.
Otherwise, every function ever built would take some non-constant time, and we couldn't say things like:
lookup time is constant for hashmaps

Big O (constant) time complexity

Why does the following code for each statement refer to big O constant (here I use 1 for the convention)?
I mean if the array size gets bigger the time complexity may get larger right? Also the number in total will get larger and larger, won't it affect the complexity?
Pseudocode:
def find_sum(given_array)
    total = 0 # refers to O(1)
    for each i in given array: #O(1)
        total+=i #O(1)
    return total #O(1)
TL;DR: Because Big O notation is used to quantify an algorithm with regard to how it behaves as its input grows.
I mean if the array size gets bigger the time complexity may get larger right? Also the number in total will get larger and larger, won't it affect the complexity?
You are mistaking the time taken by the algorithm for its time complexity.
Let us start by clarifying what is Big O notation in the current context. From (source) one can read:
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. (..) In computer science, big O notation is used to classify algorithms according to how their run time or space requirements grow as the input size grows.
Informally, in computer-science time-complexity and space-complexity theories, one can think of the Big O notation as a categorization of algorithms with a certain worst-case scenario concerning time and space, respectively. For instance, O(n):
An algorithm is said to take linear time/space, or O(n) time/space, if its time/space complexity is O(n). Informally, this means that the running time/space increases at most linearly with the size of the input (source).
So for this code:
def find_sum(given_array)
    total = 0
    for each i in given array:
        total+=i
    return total
the complexity is O(n), because as the input grows the running time grows linearly rather than staying constant. More accurately, it is Θ(n).
IMO it is not very accurate to work out the complexity like this:
def find_sum(given_array)
    total = 0 # refers to O(1)
    for each i in given array: #O(1)
        total+=i #O(1)
    return total #O(1)
This is because Big O notation represents a set of functions with a certain asymptotic upper bound; as one can read from the source:
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
More accurate would be:
def find_sum(given_array)
    total = 0 # takes c1 time
    for each i in given array:
        total+=i # takes c2 time
    return total # takes c3 time
So the running time would be c1 + n * c2 + c3, which grows linearly with n. And since both the lower and upper asymptotic bounds of this function are linear, we can use Θ(n) instead of O(n).
Why does the following code for each statement refer to big O constant (here I use 1 for the convention)?
Not sure, ask the person who wrote it. It seems clear the overall runtime is not O(1), so if that's the conclusion they arrive at, they are wrong. If they didn't mean to say that, what they wrote is either wrong or confusing.
I mean if the array size gets bigger the time complexity may get larger right?
Yes, it might. Indeed, here, it will, since you are at least iterating over the elements in the array. More elements in the array, more iterations of the loop. Straightforward.
Also the number in total will get larger and larger, won't it affect the complexity?
This is an interesting insight and the answer depends on how you conceive of numbers being represented. If you have fixed-length numeric representations (32-bit unsigned ints, double-precision floats, etc.) then addition is a constant-time operation. If you have variable-length representations (like a big integer library, or doing the algorithm by hand) then the complexity of adding would depend on the addition method used but would necessarily increase with number size (for regular add-with-carry, an upper logarithmic bound would be possible). Indeed, with variable-length representations, your complexity should at least include some parameter related to the size (perhaps max or average) of numbers in the array; otherwise, the runtime might be dominated by adding the numbers rather than looping (e.g., an array of two 1000^1000 bit integers would spend almost all time adding rather than looping).
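A quick way to see the variable-length effect in practice (Python integers are arbitrary precision, so they make a convenient test bed; the chosen sizes are arbitrary):
import time

def time_additions(value, repetitions=1000):
    # Repeatedly add `value` and report the elapsed wall-clock time;
    # with arbitrary-precision integers the cost grows with bit length.
    total = 0
    start = time.perf_counter()
    for _ in range(repetitions):
        total += value
    return time.perf_counter() - start

print(time_additions(1 << 64))          # a small number, close to machine word size
print(time_additions(1 << 10_000_000))  # a ten-million-bit number: noticeably slower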
No answer so far addresses the second question:
Also the number in total will get larger and larger, won't it affect the complexity?
which is very important and usually not accounted for.
The answer is: it depends on your computational model. If the underlying machine can add arbitrarily large numbers in constant time, then no, it does not affect the time complexity.
A realistic machine, however, operates on values of fixed width. Modern computers happily add 64 bit quantities. Some may only add 16 bit-wide values at a time. A Turing machine - which is a base of the whole complexity theory - works with 1 bit at a time. In any case, once our numbers outgrow the register width, we must account for the fact that addition takes time proportional to the number of bits in the addends, which in this case is log(i) (or log(total), but since total grows as i*(i-1)/2, its bit width is approximately log(i*i) = 2 log(i)).
With this in mind, annotating
total+=i # O(log i)
is more prudent. Now the complexity of the loop
for each i in given array:
    total+=i # O(log(i))
is sum[1..n] log(i) = log(n!) ~ n log(n). The last equality comes from the Stirling approximation of a factorial.
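A quick numeric check of that last step (purely illustrative):
import math

n = 10**6
log_factorial = sum(math.log2(i) for i in range(1, n + 1))   # = log2(n!)
print(log_factorial, n * math.log2(n))   # roughly 1.85e7 vs 1.99e7: log2(n!) grows like n*log2(n)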
There is no way that the loop:
for each i in given array:
    total+=i
will run in O(1) time. Even if the size of the input n is 1, asymptotic analysis will still indicate that it runs in O(n), and not in O(1).
Asymptotic Analysis measures the time/space complexity in relation to the input size, and it does not necessarily show the exact number of operations performed.
The point that O(1) is constant does not mean that it is just one (1) operation; rather, it means that this particular block (which takes O(1)) does not change when the input changes, so it has no correlation to the input and therefore has constant complexity.
O(n), on the other hand, indicates that the complexity depends on n and changes as the input n changes. O(n) is a linear relation, where input size and runtime have a one-to-one correlation.
Correctly written comments would look like this:
def find_sum(given_array)
    total = 0 #this is O(1), it doesn't depend on input
    for each i in given array: #this is O(n), as loop will get longer as the input size gets longer
        total+=i #again, O(1)
    return total #again, O(1)

What is the overall O(n) time complexity of O(sum(a)) if a is an array of integers and n is the length of the array?

I’m having a hard time using O(n) principles to generalize the time complexity of an algorithm whose more specific time complexity is O(sum(a)) where a is an array of integers.
My intuition is that this time complexity should generalize to O(n), as you can think of this as a “linear” sum of k_i values that occur n times, where k_i is the integer value in the array, making it O(n) (k = 1 for a straight-up O(n) case).
But it doesn’t seem to be exactly the same as O(n) - the value of k could be much larger than n, and if all these k values are larger you have something that could be O(n^2) or O(n^3) depending on how large that value is.
Is this something to take into account for O(n) complexity where n is the length of the array? Should I actually be defining n as the sum of all elements in the array instead of the length of the array?
In general, what would be the best way to think about this?
Fundamentally, we want to describe the runtime of an algorithm based on the input. The "runtime" is a vague term that is often swept under the rug. For example, the "runtime" of a sorting algorithm or a hashtable operation is measured in the number of comparisons, but using "runtime" to mean the number of basic operations (which are also usually only vaguely defined) is also possible.
There are two choices (or simplifications) often made when calculating runtime. The first is to ignore the actual input, and to use the size of the input (measured somehow) instead. This size is usually denoted n. The second is to use big-O notation to describe the worst case (or best case, or average, or amortized...).
Neither of these choices is always necessary, and sometimes, they won't make sense. To repeat, since this is the crux of the answer: describing runtimes in big-O of n is not the only way to describe runtimes and sometimes it makes no sense to do so.
For example, in the case of an algorithm that runs in O(sum(a)) time:
func f(a) {
    t = 0
    for x in a {
        for i = 1..x {
            t += 1
        }
    }
}
It's not useful to describe the runtime of this using the length of the input array a. It's not useful because the length of a doesn't say anything about the worst-case runtime.
Saying that t is incremented sum(a) times is a useful statement about the runtime of the program. It doesn't use big-O complexity notation.
And if you do want to express that in big-O notation, you can say that the runtime of this code is O(sum(a)). This blurs exactly what you're measuring in the runtime, because you can be including the cost of performing the statements other than incrementing t.
And going back to the example, you could (and if you were studying complexity classes, you probably would) say n is the size (in bits) of the input array. Then you could say something about the runtime (measured in basic operations): it's O(2^n), since the worst case input is an array with one element which takes the value 2^n-1 (*note).
*note: this ignores some technical details about how to encode an array using bits.

why O(1) != O(log(n)) ? for n=[integer, long, ...]

For example, say n = Integer.MAX_VALUE or 2^123; then log(n) is 32 or 123, a small integer. Isn't that O(1)?
What is the difference? I think the reason is that O(1) is constant but O(log(n)) is not. Any other ideas?
If n is bounded above, then complexity classes involving n make no sense. There is no such thing as "in the limit as 2^123 approaches infinity", except in the old joke that "a pentagon approximates a circle, for sufficiently large values of 5".
Generally, when analysing the complexity of code, we pretend that the input size isn't bounded above by the resource limits of the machine, even though it is. This does lead to some slightly odd things going on around log n, since if n has to fit into a fixed-size int type, then log n has quite a small bound, so the bound is more likely to be useful/relevant.
So sometimes, we're analysing a slightly idealised version of the algorithm, because the actual code written cannot accept arbitrarily large input.
For example, your average quicksort formally uses Theta(log n) stack in the worst case, obviously so with the fairly common implementation that call-recurses on the "small" side of the partition and loop-recurses on the "big" side. But on a 32 bit machine you can arrange to in fact use a fixed-size array of about 240 bytes to store the "todo list", which might be less than some other function you've written based on an algorithm that formally has O(1) stack use. The morals are that implementation != algorithm, complexity doesn't tell you anything about small numbers, and any specific number is "small".
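For reference, the call-recurse-on-the-small-side pattern being described looks roughly like this (a sketch of the idea, not the canonical implementation; the partition helper is a plain Lomuto partition):
def _partition(a, lo, hi):
    # Lomuto partition around a[hi]; returns the pivot's final index.
    pivot = a[hi]
    i = lo
    for j in range(lo, hi):
        if a[j] <= pivot:
            a[i], a[j] = a[j], a[i]
            i += 1
    a[i], a[hi] = a[hi], a[i]
    return i

def quicksort(a, lo=0, hi=None):
    # Call-recurses on the smaller partition and loop-recurses on the
    # larger one, so the recursion depth is O(log n) even in the worst case.
    if hi is None:
        hi = len(a) - 1
    while lo < hi:
        p = _partition(a, lo, hi)
        if p - lo < hi - p:
            quicksort(a, lo, p - 1)   # small side: recurse
            lo = p + 1                # big side: loop
        else:
            quicksort(a, p + 1, hi)   # small side: recurse
            hi = p - 1                # big side: loop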
If you want to account for bounds, you could say that, for example, your code to sort an array is O(1) running time, because the array has to be below the size that fits in your PC's address space, and hence the time to sort it is bounded. However, you will fail your CS assignment if you do, and you won't be providing anyone with any useful information :-)
Obviously, if you know that the input will always have a fixed number of elements, the algorithm will always run in constant time. Big-O notation is used to denote worst-case running time, which describes the limit as the number of elements grows infinitely large.
The difference is that n isn't fixed. The idea behind Big-O notation is to get an idea of how the size of the input affects the running time (or memory usage). So if an algorithm always takes the same amount of time, whether n = 1 or n = Integer.MAX_VALUE, we say it is O(1). If the algorithm takes a unit of time longer each time the input size doubles, then we say it is O(log n).
Edit: to answer your specific question on the difference between O(1) and O(logn), I'll give you an example. Let's say we want an algorithm that will find the min element in an unsorted array. One approach is to go through each element and keep track of the current min. Another approach is to sort the array and then return the first element.
The first algorithm is O(n), and the second algorithm is O(nlogn). So let's say we start with an array of 16 elements. The first algorithm will run in time 16, the second algorithm will run in time 16*4. If we increase it to 17, then it becomes 17 and 17*4. We might naively say that the second algorithm takes 4 times as long as the first algorithm (if we treat the logn component as constant).
But let's look at what happens when our array contains 2^32 elements. Now the first algorithm takes 2^32 time to complete, where our second algorithm takes 32*2^32 time to complete. It takes 32 times as long. Yes, it's a small difference, but it is still a difference. If the first algorithm takes 1 minute, the second algorithm will take over half an hour!
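A minimal sketch of the two approaches in that example (names are illustrative; assumes a non-empty array):
def min_by_scan(a):
    # O(n): one pass, tracking the smallest element seen so far.
    smallest = a[0]
    for x in a[1:]:
        if x < smallest:
            smallest = x
    return smallest

def min_by_sorting(a):
    # O(n log n): sort a copy first, then take the first element.
    return sorted(a)[0]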
I think you will get a better idea if you call it O(n^0).
It is a scaling function depending on the input variable N. It is a function, not a number; you should never assume any particular value for the variable N.
It is just like saying that a function f(x) is 3 because f(100) = 3: that is wrong. It is a function, not any particular number. A constant function f(x) = 1 is still a function, and it will never equal another function g(x) = N, i.e. g(x) != f(x).
It's the growth rate that you want to look at. O(1) implies no growth at all, while O(log n) does have growth. Even though the growth is small, it is still growth.
You're not thinking big enough. Any algorithm that runs on a computer will either run forever or terminate after some finite number of steps; since the computer is only a finite state machine, you cannot write algorithms that run for an arbitrary amount of time and then terminate. By that argument, Big-O notation is only theoretical and has no purpose in a real-life computer program. Even O(2^n) hits an upper limit at O(2^INT_MAX), which is equivalent to O(1).
Realistically, though, Big-O can help you out if you know the constant factors. Even if an algorithm has an upper bound of O(log n), and n can have 32 bits, that could mean the difference between a request taking 1 second and 32 seconds.
Big-O shows how running time (or memory, etc) changes as the size of problem changes.
When size of the problem gets 10 times bigger, an O(n) solution takes 10 times as long, an O(log(n)) solution takes a bit longer, and an O(1) solution takes the same time: O(1) means 'changes as fast as constant 1', but constants don't change.
Familiarize yourself with the big-O notation in a bit more detail.
There is a reason why you keep the "O(n)" and consider dropping the "O(log n)". They both are "constants": the former is less than 2^32, and the latter is less than 32. But you nevertheless have a natural feeling that you can't call O(n) O(1).
However, if log(n) < 32, it means that an O(n*log n) algorithm runs up to thirty-two times slower than its O(n) version. Big enough to keep writing the "log n"s?

Meaning of average complexity when using Big-O notation

While answering to this question a debate began in comments about complexity of QuickSort. What I remember from my university time is that QuickSort is O(n^2) in worst case, O(n log(n)) in average case and O(n log(n)) (but with tighter bound) in best case.
What I need is a correct mathematical explanation of the meaning of average complexity, to explain clearly what it is about to someone who believes the big-O notation can only be used for the worst case.
What I remember is that to define average complexity you should consider the complexity of the algorithm for all possible inputs, and count how many are degenerate versus normal cases. If the number of degenerate cases divided by n tends towards 0 as n gets big, then you can speak of the average complexity of the overall function for the normal cases.
Is this definition right, or is the definition of average complexity different? And if it's correct, can someone state it more rigorously than I have?
You're right.
Big O (big Theta etc.) is used to measure functions. When you write f=O(g) it doesn't matter what f and g mean. They could be average time complexity, worst time complexity, space complexities, denote distribution of primes etc.
Worst-case complexity is a function that takes size n, and tells you what is maximum number of steps of an algorithm given input of size n.
Average-case complexity is a function that takes size n, and tells you what is expected number of steps of an algorithm given input of size n.
As you see worst-case and average-case complexity are functions, so you can use big O to express their growth.
If you're looking for a formal definition, then:
Average complexity is the expected running time for a random input.
Let's refer to Big O Notation in Wikipedia:
Let f and g be two functions defined on some subset of the real numbers. One writes f(x)=O(g(x)) as x --> infinity if ...
So what the premise of the definition states is that the function f should take a number as an input and yield a number as an output. What input number are we talking about? It's supposedly a number of elements in the sequence to be sorted. What output number could we be talking about? It could be a number of operations done to order the sequence. But stop. What is a function? Function in Wikipedia:
a function is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output.
Are we producing exactly one output with our prior definition? No, we are not. For a given size of a sequence we can get a wide variation in the number of operations. So to ensure the definition is applicable to our case, we need to reduce the set of possible outcomes (numbers of operations) to a single value. It can be a maximum ("the worst case"), a minimum ("the best case") or an average.
The conclusion is that talking about best/worst/average case is mathematically correct and using big O notation without those in context of sorting complexity is somewhat sloppy.
On the other hand, we could be more precise and use big Theta notation instead of big O notation.
I think your definition is correct, but your conclusions are wrong.
It's not necessarily true that if the proportion of "bad" cases tends to 0, then the average complexity is equal to the complexity of the "normal" cases.
For example, suppose that 1/(n^2) cases are "bad" and the rest "normal", and that "bad" cases take exactly (n^4) operations, whereas "normal" cases take exactly n operations.
Then the average number of operations required is equal to:
(n^4/n^2) + n(n^2-1)/(n^2) = n^2 + n - 1/n
This function is O(n^2), but not O(n).
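A small sketch to check that average numerically (it just evaluates the formula from the example above):
def average_ops(n):
    # A fraction 1/n^2 of the inputs cost n^4 operations; the rest cost n.
    bad_fraction = 1 / n**2
    return bad_fraction * n**4 + (1 - bad_fraction) * n

for n in (10, 100, 1000):
    print(n, average_ops(n), n**2)   # the average tracks n^2, not n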
In practice, though, you might find that time is polynomial in all cases, and the proportion of "bad" cases shrinks exponentially. That's when you'd ignore the bad cases in calculating an average.
Average case analysis does the following:
Take all inputs of a fixed length (say n), sum up the running times of the algorithm on all of those inputs, and divide by the number of inputs to get the average.
The problem is you will probably have to enumerate all inputs of length n in order to come up with an average complexity.
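As a toy example of that procedure (the algorithm and input family are chosen only for illustration), one can enumerate every input of a small fixed length, measure each run, and average:
from itertools import permutations

def linear_search_steps(a, target):
    # Number of comparisons a linear search makes before finding target.
    for steps, x in enumerate(a, start=1):
        if x == target:
            return steps
    return len(a)

def average_case(n):
    # Enumerate all inputs of length n (here: permutations of 0..n-1),
    # sum the running times, and divide by the number of inputs.
    inputs = list(permutations(range(n)))
    total = sum(linear_search_steps(p, 0) for p in inputs)
    return total / len(inputs)

print(average_case(6))   # (n + 1) / 2 = 3.5 comparisons on average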

What is big-O notation? How do you come up with figures like O(n)? [duplicate]

Possible Duplicate:
Plain english explanation of Big O
I'd imagine this is probably something taught in classes, but as a self-taught programmer, I've only seen it rarely.
I've gathered it is something to do with the time, and O(1) is the best, while stuff like O(n^n) is very bad, but could someone point me to a basic explanation of what it actually represents, and where these numbers come from?
Big O refers to the worst case run-time order. It is used to show how well an algorithm scales based on the size of the data set (n->number of items).
Since we are only concerned with the order, constant multipliers are ignored, and any terms which increase less quickly than the dominant term are also removed. Some examples:
A single operation or set of operations is O(1), since it takes some constant time (does not vary based on data set size).
A loop is O(n). Each element in the data set is looped over.
A nested loop is O(n^2). A doubly nested loop is O(n^3), and so on (see the sketch after this list).
Things like binary tree searching are O(log n), which is more difficult to show, but at every level in the tree the number of possible solutions is halved, so the number of levels is log(n) (provided the tree is balanced).
Something like finding the subset of a set of numbers whose sum is closest to a given value is O(2^n) by brute force, since the sum of each of the 2^n subsets needs to be calculated. This is very bad.
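To make the first three examples concrete, here is a minimal illustrative sketch:
def constant_example(data):
    # O(1): the work does not depend on len(data).
    return data[0] if data else None

def linear_example(data):
    # O(n): a single loop over the data.
    total = 0
    for x in data:
        total += x
    return total

def quadratic_example(data):
    # O(n^2): a nested loop over the data.
    pairs = 0
    for x in data:
        for y in data:
            pairs += 1
    return pairs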
It's a way of expressing time complexity.
O(n) means for n elements in a list, it takes n computations to sort the list. Which isn't bad at all. Each increase in n increases time complexity linearly.
O(n^n) is bad, because the amount of computation required to perform a sort (or whatever you are doing) will exponentially increase as you increase n.
O(1) is the best, as it means a constant amount of computation regardless of input size; think of hash tables: looking up a value in a hash table has O(1) time complexity.
Big O notation as applied to an algorithm refers to how the run time of the algorithm depends on the amount of input data. For example, a sorting algorithm will take longer to sort a large data set than a small data set. If for the sorting algorithm example you graph the run time (vertical-axis) vs the number of values to sort (horizontal-axis), for numbers of values from zero to a large number, the nature of the line or curve that results will depend on the sorting algorithm used. Big O notation is a shorthand method for describing the line or curve.
In big O notation, the expression in the brackets is the function that is graphed. If a variable (say n) is included in the expression, this variable refers to the size of the input data set. You say O(1) is the best. This is true because the graph f(n) = 1 does not vary with n. An O(1) algorithm takes the same amount of time to complete regardless of the size of the input data set. By contrast, the run time of an O(n^n) algorithm grows extremely quickly as the size of the input data set increases.
That is the basic idea, for a detailed explanation, consult the wikipedia page titled 'Big O Notation'.
