Trying to understand Big-oh notation - big-o

Hi, I would really appreciate some help with Big-O notation. I have an exam in it tomorrow, and while I can define what it means for f(x) to be O(g(x)), I can't say I thoroughly understand it.
The following question ALWAYS comes up on the exam and I really need to try and figure it out. The first part seems easy (I think): do you just pick a value for n, compute them all on a calculator and put them in order? That seems too easy though, so I'm not sure. I'm finding it very hard to find examples online.
From lowest to highest, what is the correct order of the complexities O(n^2), O(log2 n), O(1), O(2n), O(n!), O(n log2 n)?
What is the worst-case computational complexity of the Binary Search algorithm on an ordered list of length n = 2^k?

That guy should help you.
From lowest to highest, what is the correct order of the complexities O(n^2), O(log2 n), O(1), O(2n), O(n!), O(n log2 n)?
The order is the same as what you get by comparing their limits at infinity: look at lim(f/g) as n goes to infinity. If the limit is a finite nonzero constant (for example 1), they are the same order; if it is infinity or 0, one of them grows faster.
What is the worst-case computational complexity of the Binary Search algorithm on an ordered list of length n = 2^k?
Find binary search best/worst Big-O.
Find linked list access by index best/worst Big-O.
Make conclusions.

Hey there. Big-O notation is tough to figure out if you don't really understand what the "n" means. You've already seen people talking about how O(n) == O(2n), so I'll try to explain exactly why that is.
When we describe an algorithm as having "order-n space complexity", we mean that the size of the storage space used by the algorithm gets larger with a linear relationship to the size of the problem that it's working on (referred to as n). If we had an algorithm that, say, sorted an array, and in order to do that sort operation the largest thing we did in memory was to create an exact copy of that array, we'd say it had "order-n space complexity" because as the size of the array (call it n elements) got larger, the algorithm would take up proportionally more space to match the size of the input array. Hence, the algorithm uses "O(n)" space in memory.
Why does O(2n) = O(n)? Because when we talk in terms of O(n), we're only concerned with the behavior of the algorithm as n gets as large as it could possibly be. If n was to become infinite, the O(2n) algorithm would take up two times infinity spaces of memory, and the O(n) algorithm would take up one times infinity spaces of memory. Since two times infinity is just infinity, both algorithms are considered to take up a similar-enough amount of room to be both called O(n) algorithms.
You're probably thinking to yourself "An algorithm that takes up twice as much space as another algorithm is still relatively inefficient. Why are they referred to using the same notation when one is much more efficient?" Because the gain in efficiency for arbitrarily large n when going from O(2n) to O(n) is absolutely dwarfed by the gain in efficiency for arbitrarily large n when going from O(n^2) to O(500n). When n is 10, n^2 is 10 times 10, or 100, and 500n is 500 times 10, or 5000. But we're interested in n as n becomes as large as possible. They cross over and become equal at n = 500, but once more, we're not even interested in an n as small as 500. When n is 1000, n^2 is one MILLION while 500n is a "mere" half million. When n is one million, n^2 is one thousand billion - 1,000,000,000,000 - while 500n looks on in awe with the simplicity of its five hundred million - 500,000,000 - points of complexity. And once more, we can keep making n larger, because when using O(n) logic, we're only concerned with the largest possible n.
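If you want to check those numbers yourself rather than trusting the arithmetic, here is a throwaway Python sketch (my own illustration, not part of the original answer):

    # Rough sketch: compare n^2 against 500n at a few sizes to see where the
    # quadratic term takes over (the two costs cross at n = 500).
    for n in [10, 500, 1_000, 1_000_000]:
        print(f"n={n:>9,}: n^2 = {n ** 2:>16,}   500n = {500 * n:>13,}")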
(You may argue that when n reaches infinity, n^2 is infinity times infinity, while 500n is five hundred times infinity, and didn't you just say that anything times infinity is infinity? That doesn't actually work for infinity times infinity. I think. It just doesn't. Can a mathematician back me up on this?)
This gives us the weirdly counterintuitive result where O(Seventy-five hundred billion spillion kajillion n) is considered an improvement on O(n * log n). Due to the fact that we're working with arbitrarily large "n", all that matters is how many times and where n appears in the O(). The rules of thumb mentioned in Julia Hayward's post will help you out, but here's some additional information to give you a hand.
One, because n gets as big as possible, O(n^2+61n+1682) = O(n^2), because the n^2 contributes so much more than the 61n as n gets arbitrarily large that the 61n is simply ignored, and the 61n term already dominates the 1682 term. If you see addition inside a O(), only concern yourself with the n with the highest degree.
Two, O(log10 n) = O(log_b n) for any base b, because of the change-of-base formula: log10(n) = log_b(n) / log_b(10). Hence O(log10 n) = O(log_b(n) * 1/log_b(10)). That 1/log_b(10) factor is just a constant, and we've already shown that constant factors drop out of O() notation.
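A quick way to convince yourself of that (my own sketch, not from the answer): compute the ratio of a base-10 log to a base-2 log for several n and watch it stay at the constant 1/log2(10), roughly 0.30103.

    import math

    # The ratio of log10(n) to log2(n) is the constant log(2)/log(10) for every n,
    # which is why the base of the logarithm doesn't matter inside O().
    for n in [10, 1_000, 1_000_000]:
        print(f"n={n:>9,}: log10(n) / log2(n) = {math.log10(n) / math.log2(n):.6f}")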

Very loosely, you could imagine picking extremely large values of n, and calculating them. Might exceed your calculator's range for large factorials, though.
If the definition isn't clear, a more intuitive description is that "higher order" means "grows faster than, as n grows". Some rules of thumb:
O(n^a) is a higher order than O(n^b) if a > b.
log(n) grows more slowly than any positive power of n
exp(n) grows more quickly than any power of n
n! grows more quickly than exp(kn)
Oh, and as far as complexity goes, ignore the constant multipliers.
That's enough to deduce that the correct order is O(1), O(log n), O(2n) = O(n), O(n log n), O(n^2), O(n!)
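If you want to sanity-check that ordering numerically, here is a small Python sketch of my own (not part of the answer above) that evaluates each complexity at a fixed n and sorts the results:

    import math

    # Evaluate each complexity class at a fixed n; sorting by value reproduces
    # the order O(1), O(log2 n), O(2n), O(n log2 n), O(n^2), O(n!).
    n = 20
    values = {
        "O(1)":        1,
        "O(log2 n)":   math.log2(n),
        "O(2n)":       2 * n,
        "O(n log2 n)": n * math.log2(n),
        "O(n^2)":      n ** 2,
        "O(n!)":       math.factorial(n),   # a calculator overflows here; Python ints don't
    }
    for name, value in sorted(values.items(), key=lambda item: item[1]):
        print(f"{name:<12} {value:,}")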

For big-O complexities, the rule is that if two things vary only by constant factors, then they are the same. If one grows faster than another ignoring constant factors, then it is bigger.
So O(2n) and O(n) are the same -- they only vary by a constant factor (2). One way to think about it is to just drop the constants, since they don't impact the complexity.
The other problem with picking n and using a calculator is that it will give you the wrong answer for certain n. Big O is a measure of how fast something grows as n increases, but at any given n the complexities might not be in the right order. For instance, at n=2, n^2 is 4 and n! is 2, but n! grows quite a bit faster than n^2.
It's important to get that right, because for running times with multiple terms, you can drop the lesser terms -- i.e., if f(n) is 3n^2 + 2n + 5, you can drop the 5 (constant), drop the 2n (3n^2 grows faster), then drop the 3 (constant factor) to get O(n^2)... but if you don't know that n^2 is bigger, you won't get the right answer.
In practice, you can just know that n is linear, log(n) grows more slowly than linear, n^a > n^b if a > b, 2^n grows faster than any n^a, and n! grows faster still. (Hint: try to avoid algorithms that have n in the exponent, and especially avoid ones that are n!.)
For the second part of your question, what happens with a binary search in the worst case? At each step, you cut the search space in half until eventually you find your item (or run out of places to look). That takes log2(2^k) = k steps. A search where you just walk through the list to find your item would take n steps. And we know from the first part that O(log(n)) < O(n), which is why binary search is faster than a plain linear search.
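As a sanity check on that claim, here is a minimal binary-search sketch of my own in Python (assuming a plain sorted Python list) that counts the halving steps; for a list of length n = 2^k you should see roughly k comparisons in the worst case.

    def binary_search(sorted_items, target):
        """Return (index or None, number of halving steps taken)."""
        low, high, steps = 0, len(sorted_items) - 1, 0
        while low <= high:
            steps += 1
            mid = (low + high) // 2
            if sorted_items[mid] == target:
                return mid, steps
            elif sorted_items[mid] < target:
                low = mid + 1
            else:
                high = mid - 1
        return None, steps

    k = 10
    items = list(range(2 ** k))           # ordered list of length n = 2^k = 1024
    _, steps = binary_search(items, -1)   # worst case: target is not in the list
    print(steps)                          # roughly k = log2(n) steps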
Good luck with the exam!

In easy to understand terms the Big-O notation defines how quickly a particular function grows. Although it has its roots in pure mathematics its most popular application is the analysis of algorithms which can be analyzed on the basis of input size to determine the approximate number of operations that must be performed.
The benefit of using the notation is that you can categorize function growth rates by their complexity. Many different functions (an infinite number really) could all be expressed with the same complexity using this notation. For example, n+5, 2*n, and 4*n + 1/n all have O(n) complexity because the function g(n)=n most simply represents how these functions grow.
I put an emphasis on most simply because the focus of the notation is on the dominating term of the function. For example, O(2*n + 5) = O(2*n) = O(n) because n is the dominating term in the growth. This is because the notation assumes that n goes to infinity, which causes the remaining terms to play less of a role in the growth rate. And, by convention, any constant terms or multiplicative factors are omitted.
Read Big O notation and Time complexity for a more in-depth overview.


Related

Can O(k * n) be considered as linear complexity (O(n))?

When talking about complexity in general, things like O(3n) tend to be simplified to O(n) and so on. This is merely theoretical, so how does complexity work in reality? Can O(3n) also be simplified to O(n)?
For example, suppose a task requires the solution to be of O(n) complexity, and in our code we do two linear searches of an array, which is O(n) + O(n). So, in reality, would that solution be considered linear complexity, or not fast enough?
Note that this question is asking about real implementations, not theory. I'm already aware that O(n) + O(n) is simplified to O(n).
Bear in mind that O(f(n)) does not give you the amount of real-world time that something takes: only the rate of growth as n grows. O(n) only indicates that if n doubles, the runtime doubles as well, which lumps functions together that take one second per iteration or one millennium per iteration.
For this reason, O(n) + O(n) and O(2n) are both equivalent to O(n), which is the set of functions of linear complexity, and which should be sufficient for your purposes.
Though for an algorithm that takes arbitrarily sized inputs you will usually want the best growth rate you can get as represented by O(f(n)), an algorithm with a faster-growing bound (e.g. O(n²)) may still be faster in practice, especially when the data set size n is limited or fixed in practice. However, learning to reason about O(f(n)) representations can help you compose algorithms to have a predictable upper bound that is optimal for your use case.
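As a concrete illustration of why two passes still count as linear (my own sketch, not from the answers above): the function below scans the array twice, doing roughly 2n units of work, and O(2n) = O(n).

    def min_and_max(values):
        # Two separate linear scans: O(n) + O(n) work, which is still O(n) overall.
        smallest = min(values)   # first pass over all n elements
        largest = max(values)    # second pass over all n elements
        return smallest, largest

    print(min_and_max([5, 3, 9, 1, 7]))   # (1, 9)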
Yes, as long as k is a constant, you can write O(kn) = O(n).
The intuition behind this is that the constant k doesn't grow with the size of the input, and at some point it becomes negligible compared to n, so it doesn't have much influence on the overall complexity.
Yes - as long as the number k of array searches is not affected by the input size, even for inputs that are too big to be possible in practice, O(kn) = O(n). The main idea of the O notation is to emphasize how the computation time increases with the size of the input, and so constant factors that stay the same no matter how big the input is aren't of interest.
An example of an incorrect way to apply this is to say that you can perform selection sort in linear time because you can only fit about one billion numbers in memory, and so selection sort is merely one billion array searches. However, with an ideal computer with infinite memory, your algorithm would not be able to handle more than one billion numbers, and so it is not a correct sorting algorithm (algorithms must be able to handle arbitrarily large inputs unless you specify a limit as a part of the problem statement); it is merely a correct algorithm for sorting up to one billion numbers.
(As a matter of fact, once you put a limit on the input size, most algorithms will become constant-time because for all inputs within your limit, the algorithm will solve it using at most the amount of time that is required for the biggest / most difficult input.)

Why do we ignore coefficients in Big O notation?

While searching for answers relating to "Big O" notation, I have seen many SO answers such as this, this, or this, but still I have not clearly understood some points.
Why do we ignore the coefficients?
For example, this answer says that the final complexity of 2N + 2 is O(N); we remove the leading coefficient 2 and the final constant 2 as well.
Removing the final constant of 2 is perhaps understandable. After all, N may be very large, so "forgetting" the final 2 may only change the grand total by a small percentage.
However, I cannot clearly understand how removing the leading coefficient does not make a difference. If the leading 2 above became a 1 or a 3, the percentage change to the grand total would be large.
Similarly, apparently 2N^3 + 99N^2 + 500 is O(N^3). How do we ignore the 99N^2 along with the 500?
The purpose of the Big-O notation is to find the dominant factor in the asymptotic behavior of a function as its argument tends towards infinity.
As we walk through the function domain, some factors become more important than others.
Imagine f(n) = n^3+n^2. As n goes to infinity, n^2 becomes less and less relevant when compared with n^3.
But that's just the intuition behind the definition. In practice we ignore some portions of the function because of the formal definition:
f(x) = O(g(x)) as x->infinity
if and only if there is a positive real M and a real x_0 such that
|f(x)| <= M|g(x)| for all x > x_0.
That's in wikipedia. What that actually means is that there is a point (after x_0) after which some multiple of g(x) dominates f(x). That definition acts like a loose upper bound on the value of f(x).
From that we can derive many other properties, like f(x) + K = O(f(x)), x^n + x^(n-1) = O(x^n), etc. It's just a matter of using the definition to prove those.
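For example (a worked sketch of my own, not part of the answer): to show 2n + 2 = O(n) straight from the definition, pick M = 3 and x_0 = 2; then 2n + 2 <= 3n for every n > 2. The snippet below just checks the inequality numerically.

    # Check the Big-O definition directly for f(n) = 2n + 2 and g(n) = n:
    # with M = 3 and x0 = 2, |f(n)| <= M * |g(n)| holds for all n > x0.
    M, x0 = 3, 2
    assert all(2 * n + 2 <= M * n for n in range(x0 + 1, 10_000))
    print("2n + 2 <= 3n for every tested n > 2, so 2n + 2 is O(n)")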
In particular, the intuition behind removing the coefficient (K*f(x) = O(f(x))) lies in what we try to measure with computational complexity. Ultimately it's all about time (or any resource, actually). But it's hard to know how much time each operation takes. One algorithm may perform 2n operations and the other n, but the latter may have a large constant time associated with each one. So, for this purpose, it isn't easy to reason about the difference between n and 2n.
From a (complexity) theory point of view, the coefficients represent hardware details that we can ignore. Specifically, the Linear Speedup Theorem dictates that for any problem we can always throw an exponentially increasing amount of hardware (money) at a computer to get a linear boost in speed.
Therefore, modulo expensive hardware purchases, two algorithms that solve the same problem, one at twice the speed of the other for all input sizes, are considered essentially the same.
Big-O (Landau) notation has its origins independently in number theory, where one of its uses is to create a kind of equivalence between functions: if a given function is bounded above by another and simultaneously is bounded below by a scaled version of that same other function, then the two functions are essentially the same from an asymptotic point of view. The definition of Big-O (actually, "Big-Theta") captures this situation: the "Big-O" (Theta) of the two functions are exactly equal.
The fact that Big-O notation allows us to disregard the leading constant when comparing the growth of functions makes Big-O an ideal vehicle to measure various qualities of algorithms while respecting (ignoring) the "freebie" optimizations offered by the Linear Speedup Theorem.
Big O provides a good estimate of which algorithms are more efficient for larger inputs, all things being equal; this is why for an algorithm with an n^3 term and an n^2 term we ignore the n^2 term, because even if the n^2 term has a large constant it will eventually be dominated by the n^3 term.
However, real algorithms incorporate more than simple Big O analysis. For example, a sorting routine will often start with an O(n * log(n)) divide-and-conquer algorithm like quicksort or mergesort, and when the partitions become small enough it will switch to a simpler O(n^2) algorithm like insertion sort - for small inputs insertion sort is generally faster, although a basic Big O analysis doesn't reveal this.
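A rough sketch of that hybrid idea in Python (my own illustration, not a production sort): a merge sort that hands small slices off to insertion sort, because insertion sort's tiny constant factors win on small inputs.

    CUTOFF = 16   # below this size, insertion sort's low overhead tends to win

    def insertion_sort(items):
        # O(n^2) worst case, but very fast in practice on tiny or nearly-sorted lists.
        for i in range(1, len(items)):
            key, j = items[i], i - 1
            while j >= 0 and items[j] > key:
                items[j + 1] = items[j]
                j -= 1
            items[j + 1] = key
        return items

    def hybrid_merge_sort(items):
        # O(n log n) overall; switches to insertion sort for small slices.
        if len(items) <= CUTOFF:
            return insertion_sort(items)
        mid = len(items) // 2
        left = hybrid_merge_sort(items[:mid])
        right = hybrid_merge_sort(items[mid:])
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(hybrid_merge_sort([5, 2, 9, 1, 7, 3, 8, 6, 4, 0]))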
The constant factors often aren't very interesting, and so they're omitted - certainly a difference in factors on the order of 1000 is interesting, but usually the difference in factors are smaller, and then there are many more constant factors to consider that may dominate the algorithms' constants. Let's say I've got two algorithms, the first with running time 3*n and the second with running time 2*n, each with comparable space complexity. This analysis assumes uniform memory access; what if the first algorithm interacts better with the cache, and this more than makes up for the worse constant factor? What if more compiler optimizations can be applied to it, or it behaves better with the memory management subsystem, or requires less expensive IO (e.g. fewer disk seeks or fewer database joins or whatever) and so on? The constant factor for the algorithm is relevant, but there are many more constants that need to be considered. Often the easiest way to determine which algorithm is best is just to run them both on some sample inputs and time the results; over-relying on the algorithms' constant factors would hide this step.
Another thing is that, from what I have understood, the complexity of 2N^3 + 99N^2 + 500 will be O(N^3). So how do we even ignore/remove the 99N^2 portion? Will it not make a difference when, let's say, N is one million?
That's right, in that case the 99N^2 term is far overshadowed by the 2N^3 term. The point where they cross is at N=49.5, much less than one million.
But you bring up a good point. Asymptotic computational complexity analysis is in fact often criticized for ignoring constant factors that can make a huge difference in real-world applications. However, big-O is still a useful tool for capturing the efficiency of an algorithm in a few syllables. It's often the case that an n^2 algorithm will be faster in real life than an n^3 algorithm for nontrivial n, and it's almost always the case that a log(n) algorithm will be much faster than an n^2 algorithm.
In addition to being a handy yardstick for approximating practical efficiency, it's also an important tool for the theoretical analysis of algorithm complexity. Many useful properties arise from the composability of polynomials - this makes sense because nested looping is fundamental to computation, and those correspond to polynomial numbers of steps. Using asymptotic complexity analysis, you can prove a rich set of relationships between different categories of algorithms, and that teaches us things about exactly how efficiently certain problems can be solved.
Big O notation is not an absolute measure of complexity.
Rather, it is a description of how the complexity changes as the variable changes: as N increases, the cost grows on the order of f(N), which is what we write as Big O(f(N)).
To explain why some terms are dropped, we look at how fast each term increases.
So, Big O(2n + 2) has two terms, 2n and 2. Looking at the rate of increase: the 2 term never increases, so it does not contribute to the rate of increase at all, and it goes away. Also, since 2n increases faster than 2, the 2 turns into noise as n gets very large.
Similarly, Big O(2n^3 + 99n^2) compares Big O(2n^3) and Big O(99n^2). For small values, say n < 50, the 99n^2 contributes a larger nominal share than 2n^3. However, if n gets very large, say 1,000,000, then 99n^2, although nominally large, is insignificant compared to 2n^3 (about one twenty-thousandth of it).
As a consequence Big O(n^i) < Big O(n^(i+1)).
Coefficients are removed because of the mathematical definition of Big O.
To simplify, the definition says that Big O(c*f(n)) = Big O(f(n)) for a constant c. This needs to be taken on faith, because the reason for it is purely mathematical, and as such the proof would be too complex and dry to explain in simple terms.
The mathematical reason:
The real reason why we do this is the way Big-O notation is defined:
A series (or let's use the word function) f(n) is in O(g(n)) when the series f(n)/g(n) is bounded. Example:
f(n)= 2*n^2
g(n)= n^2
f(n) is in O(g(n)) because (2*n^2)/(n^2) = 2 for every n, including as n approaches infinity. The term (2*n^2)/(n^2) doesn't become infinitely large (it's always 2), so the quotient is bounded and thus 2*n^2 is in O(n^2).
Another one:
f(n) = n^2
g(n) = n
The term n^2/n (= n) becomes infinitely large as n goes to infinity, so n^2 is not in O(n).
The same principle applies, when you have
f(n) = n^2 + 2*n + 20
g(n) = n^2
(n^2 + 2*n + 20)/(n^2) is also bounded, because it tends to 1, as n goes to infinity.
Big-O notation basically says that your function f(n) is, from some value of n onward, smaller than a function g(n) multiplied by a constant. With the previous example:
2*n^2 is in O(n^2) because we can find a value C such that 2*n^2 is smaller than C*n^2. In this example we can pick C to be 5 or 10, and the condition will be satisfied.
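To see that boundedness concretely (a quick check of my own, not part of the answer): the quotient (n^2 + 2*n + 20) / n^2 from the earlier example tends to 1 as n grows, so it never blows up, which is exactly what the definition asks for.

    # The quotient f(n)/g(n) for f(n) = n^2 + 2n + 20 and g(n) = n^2 stays bounded
    # (it tends to 1), so f(n) is in O(n^2).
    for n in [1, 10, 100, 10_000]:
        ratio = (n ** 2 + 2 * n + 20) / n ** 2
        print(f"n={n:>6}: (n^2 + 2n + 20) / n^2 = {ratio:.4f}")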
So what do you get out of this? If you know your algorithm has complexity O(10^n) and you input a list of 4 numbers, it may take only a short time. If you input 10 numbers, it will take a million times longer! Whether it's one million times longer or 5 million times longer doesn't really matter here. You can always use 5 more computers and have it run in the same amount of time; the real problem is that it scales incredibly badly with input size.
For practical applications the constants do matter, so O(2 n^3) will be better than O(1000 n^2) for inputs with n smaller than 500.
There are two main ideas here: 1) if your algorithm should be great for any input, it should have a low time complexity, and 2) n^3 grows so much faster than n^2 that preferring n^3 over n^2 almost never makes sense.

what is order of complexity in Big O notation?

Question
Hi, I am trying to understand what "order of complexity" means in terms of Big O notation. I have read many articles and have yet to find anything explaining exactly what "order of complexity" is, even among the useful descriptions of Big O on here.
What I already understand about big O
The part which I already understand about Big O notation is that we are measuring the time and space complexity of an algorithm in terms of the growth of the input size n. I also understand that certain sorting methods have best, worst and average case Big O such as O(n), O(n^2) etc., where n is the input size (number of elements to be sorted).
Any simple definitions or examples would be greatly appreciated thanks.
Big-O analysis is a form of runtime analysis that measures the efficiency of an algorithm in terms of the time it takes for the algorithm to run as a function of the input size. It's not a formal benchmark, just a simple way to classify algorithms by relative efficiency when dealing with very large input sizes.
Update:
The fastest possible running time for any runtime analysis is O(1), commonly referred to as constant running time. An algorithm with constant running time always takes the same amount of time to execute, regardless of the input size. This is the ideal run time for an algorithm, but it's rarely achievable.
The performance of most algorithms depends on n, the size of the input. The algorithms can be classified as follows, from best to worst performance:
O(log n) — An algorithm is said to be logarithmic if its running time increases logarithmically in proportion to the input size.
O(n) — A linear algorithm’s running time increases in direct proportion to the input size.
O(n log n) — A superlinear algorithm is midway between a linear algorithm and a polynomial algorithm.
O(n^c) — A polynomial algorithm grows quickly based on the size of the input.
O(c^n) — An exponential algorithm grows even faster than a polynomial algorithm.
O(n!) — A factorial algorithm grows the fastest and becomes quickly unusable for even small values of n.
The run times of different orders of algorithms separate rapidly as n gets larger. Consider the run time for each of these algorithm classes with n = 10:
log 10 = 1
10 = 10
10 log 10 = 10
10^2 = 100
2^10 = 1,024
10! = 3,628,800
Now double it to n = 20:
log 20 = 1.30
20 = 20
20 log 20 = 26.02
20^2 = 400
2^20 = 1,048,576
20! = 2.43×10^18
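Those values are easy to recompute for any n you like; here is a quick sketch of my own (using base-10 logs, as the list above does):

    import math

    # Recompute the growth comparison above for a given n.
    def growth_table(n):
        return {
            "log n":   math.log10(n),
            "n":       n,
            "n log n": n * math.log10(n),
            "n^2":     n ** 2,
            "2^n":     2 ** n,
            "n!":      math.factorial(n),
        }

    for n in (10, 20):
        print(n, growth_table(n))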
Finding an algorithm that works in superlinear time or better can make a huge difference in how well an application performs.
We say f(n) is in O(g(n)) if and only if there exists a C and an n0 such that f(n) < C*g(n) for all n greater than n0.
Now that's a rather mathematical approach. So I'll give some examples. The simplest case is O(1). This means "constant". So no matter how large the input (n) of a program, it will take the same time to finish. An example of a constant program is one that takes a list of integers, and returns the first one. No matter how long the list is, you can just take the first and return it right away.
The next is linear, O(n). This means that if the input size of your program doubles, so will your execution time. An example of a linear program is the sum of a list of integers. You'll have to look at each integer once. So if the input is an list of size n, you'll have to look at n integers.
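For instance (a tiny example of my own): summing a list touches each of the n elements exactly once, so the work grows linearly with the input size.

    def total(numbers):
        # One pass over the list: n additions for n elements, so O(n) time.
        running_sum = 0
        for value in numbers:
            running_sum += value
        return running_sum

    print(total([1, 2, 3, 4, 5]))   # 15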
An intuitive definition could define the order of your program as the relation between the input size and the execution time.
Others have explained big O notation well here. I would like to point out that sometimes too much emphasis is given to big O notation.
Consider matrix multiplication: the naïve algorithm has O(n^3). Using the Strassen algorithm it can be done in O(n^2.807). Now there are even algorithms that get O(n^2.3727).
One might be tempted to choose the algorithm with the lowest big O, but it turns out that for all practical purposes the naïve O(n^3) method wins out. This is because the constant for the dominating term is much larger for the other methods.
Therefore just looking at the dominating term in the complexity can be misleading. Sometimes one has to consider all terms.
Big O is about finding an upper limit for the growth of some function. See the formal definition on Wikipedia http://en.wikipedia.org/wiki/Big_O_notation
So if you've got an algorithm that sorts an array of size n and it requires only a constant amount of extra space and it takes (for example) 2 n² + n steps to complete, then you would say its space complexity is O(n) or O(1) (depending on whether you count the size of the input array or not) and its time complexity is O(n²).
Knowing only those O numbers, you could roughly determine how much more space and time is needed to go from n to n + 100 or 2 n or whatever you are interested in. That is how well an algorithm "scales".
Update
Big O and complexity are really just two terms for the same thing. You can say "linear complexity" instead of O(n), quadratic complexity instead of O(n²), etc...
I see that you are commenting on several answers wanting to know the specific term of order as it relates to Big-O.
Suppose f(n) = O(n^2); then we say that the order is n^2.
Be careful here, there are some subtleties. You stated "we are measuring the time and space complexity of an algorithm in terms of the growth of input size n," and that's how people often treat it, but it's not actually correct. Rather, with O(g(n)) we are determining that g(n), scaled suitably, is an upper bound for the time and space complexity of an algorithm for all input of size n bigger than some particular n'. Similarly, with Omega(h(n)) we are determining that h(n), scaled suitably, is a lower bound for the time and space complexity of an algorithm for all input of size n bigger than some particular n'. Finally, if both the lower and upper bound are the same complexity g(n), the complexity is Theta(g(n)). In other words, Theta represents the degree of complexity of the algorithm while big-O and big-Omega bound it above and below.
Constant Growth: O(1)
Linear Growth: O(n)
Quadratic Growth: O(n^2)
Cubic Growth: O(n^3)
Logarithmic Growth: O(log(n))
Linearithmic Growth: O(n*log(n))
Big O uses the mathematical definition of complexity; "order of" is the informal, industry way of describing the same thing.

Which is bigger: O(n*logn) or O(1)?

We are going over the master theorem in my algorithms class, and for one problem, I'm trying to compare nlogn vs 1 to figure out which case of the MT it falls under. But I'm having a hard time figuring out which is bigger.
Edit: This is for solving a recurrence problem. The equation is T(n) = 2T(n/4) + N*LogN. Just threw this in in case it helps.
Think about it this way:
O(N*LogN) will increase with N in such a way that for any X, no matter how large, you can find a value of N such that N*LogN is greater than X.
O(1) will stay the same, no matter what N is.
This means that O(1) is asymptotically better, i.e. for some (perhaps very high) value of N the O(N*LogN) will become slower.
If an algorithm is O(NlogN) that means that there exists a number A and a quantity of execution time B, such that for any input size N greater than A, the execution time will be less than B times NlogN.
If an algorithm is O(1), that would mean that there exists some fixed amount of time C in which the algorithm would be guaranteed to complete regardless of the input size.
In comparing two algorithms, one of which is O(NlgN) and one of which is O(1), one will generally discover that the O(1) algorithm is faster for values of N that are sufficiently large, but in many cases the O(NlgN) algorithm may be faster for small values of N.
Indeed, while something like an O(N^3) or O(N^4) algorithm would generally seem pretty bad, it's possible that even an O(N^4) algorithm may outperform an O(1) algorithm if N is usually a small number (e.g. 1-5 or so) and never gets very big (even an occasional value of 50 could seriously dog performance).
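As a rough numerical illustration of that crossover (my own made-up cost model, not from the answer): give the "constant-time" algorithm a large fixed cost and compare it against N*log2(N).

    import math

    # Made-up cost model: a constant-time algorithm with a large fixed cost versus
    # an N*log2(N) algorithm. The crossover shows why "O(1) is better" is only an
    # asymptotic statement, not a statement about every input size.
    FIXED_COST = 1_000
    for n in [10, 100, 1_000, 10_000]:
        nlogn_cost = n * math.log2(n)
        cheaper = "O(1)" if FIXED_COST < nlogn_cost else "O(N log N)"
        print(f"N={n:>6}: O(1) cost = {FIXED_COST:,}   N*log2(N) = {nlogn_cost:,.0f}   cheaper: {cheaper}")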

Analysis of algorithms

I am reading algorithm analysis topic. Here is the text snippet from the book
When n doubles, the running time goes up by a factor of 2 for linear
programs, 4 for quadratic programs, and 8 for cubic programs.
Programs that run in logarithmic time take only an additive constant
longer when n doubles, and programs that run in O(n log n) take
slightly more than twice as long to run under the same circumstances.
These increases can be hard to spot if the lower-order terms have
relatively large coefficients and n is not large enough.
My question is: what does the author mean by lower-order terms having relatively large coefficients? Can anyone explain with an example?
Thanks!
Suppose your algorithm actually executes n^2 + 1000 n computations when run on n elements. Now for n = 1 you need 1001 computations, and for n = 2 you need 2004. The difference from linear growth is tiny, and you can hardly spot the quadratic contribution!
Asymptotically, however, your algorithm takes O(n^2) steps, so asymptotically (as n gets large) doubling the input size quadruples your runtime. But for our small value, doubling from 1 to 2 did not quadruple the runtime! The lower-order term is n, and its coefficient (1000) is large compared to the coefficient of the leading-order term n^2 (which is 1).
This shows how the asymptotic complexity does not say anything about particular, especially small values. It's merely a limiting statement about the behaviour as n gets large.
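To make that concrete (a quick sketch of my own, using the same cost function as above): the ratio cost(2n)/cost(n) starts out near 2, which looks linear, and only approaches 4, clearly quadratic, once n gets large.

    def cost(n):
        # The example cost function from above: a quadratic term plus a
        # lower-order term with a large coefficient.
        return n ** 2 + 1000 * n

    for n in [1, 10, 100, 10_000, 1_000_000]:
        print(f"n={n:>9,}: cost(2n)/cost(n) = {cost(2 * n) / cost(n):.2f}")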
When using O notation, you specify the largest term of the function that is your performance bound. For example, if the performance was always bounded by f = c3*n^3 + c2*n^2 + c1*n + c0, you would say that it is O(n^3). The author is saying that when n is small, the coefficients may have a larger impact than n on the performance; for example, if c2 were very large and c3 very small, the performance may appear to be O(n^2) until the size of n dominates the coefficients, if you only go by the relative performance for specific small instances of n.
Asymptotic notation refers to the bounds of the runtime as n->infinity. So, a function that is O(n log n) may have an actual runtime of .1*n log n + 100000*n.
In this case, the 100000*n term is the "lower-order term". As n->infinity, this term is overpowered by the .1*n log n term.
However, as you can see, for small n, the 100000*n term will dominate the runtime.
For instance, if you have an O(n) algorithm, at lower scales you could have T(n) = 490239*n + (insert ridiculous constant here), which means that the performance would look bad, but as the scale increases you see that the increase is always linear.
A real-world example is merge sort, which is O(n log n). The problem is that recursion has a computational cost or overhead that is a factor of n, which is a smaller order than n log n, so it gets discarded in the Big-O; the trouble is that this factor can get quite large as well and affects performance.

Resources