Why are constants ignored in asymptotic analysis?
Constant factors are ignored because running time and memory consumption (the two properties most often measured using the O-notation) are much harder to reason about when considering constant factors.
If we define U( f(n) ) to be the set of all functions g for which there exists an N such that for all n > N: g(n) <= f(n) (i.e. the same as O, but without the constant factor), it is much harder to show that an algorithm's running time is in U( f(n) ) than in O( f(n) ).
For one thing, we'd need an exact unit for measuring running time. Using a CPU instruction as the basic unit would work, but that'd depend on the exact implementation of the algorithm as well as the processor architecture it runs on.
It's similar for memory consumption: different implementations of the same algorithm will differ in their memory consumption (by a constant factor). Further, if an implementation uses a lot of pointers, the same implementation will use about twice as much memory on a 64-bit machine as on a 32-bit machine. But saying things like "this algorithm's memory consumption, when implemented using this C code, is in O(23 * n) on a 32-bit Intel PC, and in O(42 * n) on a 64-bit PC" is just not useful.
Ignoring constants allows us to reason about the properties of an algorithm in an implementation- and platform-independent manner.
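To illustrate, here is a rough Python sketch: two implementations of the same O(n) task differ by a roughly constant factor that depends on the implementation and platform, yet both scale linearly, so they land in the same O class.

    # Two implementations of the same O(n) task: an interpreted loop and the
    # built-in sum(). Their absolute times differ by a constant factor that
    # depends on the implementation and platform, but both scale linearly.
    import timeit

    def sum_loop(xs):
        total = 0
        for x in xs:
            total += x
        return total

    for n in (10_000, 20_000, 40_000):
        xs = list(range(n))
        t_loop = timeit.timeit(lambda: sum_loop(xs), number=100)
        t_builtin = timeit.timeit(lambda: sum(xs), number=100)
        # The ratio stays roughly constant as n doubles, while each time
        # itself roughly doubles: same O(n), different constant factors.
        print(n, round(t_loop / t_builtin, 1))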
It's because of the linear speedup theorem for Turing machines.
If you show me a Turing machine that solves a problem of size n in f(n) steps, and specify a constant c > 0, I can make a Turing machine that solves the same problem in c f(n) steps (or one step, if c f(n) < 1). For example, by taking c = ½ my machine can solve the same problem in half as many steps. Or, by taking c = 1/1000000, my machine can solve the same problem in only a millionth as many steps!
This result makes constant factors uninteresting (theoretically speaking: obviously in practice they still have some interest).
If you are talking about this
http://www.cs.cornell.edu/courses/cs312/2004fa/lectures/lecture16.htm
When you analyze the running time (or some other aspect) of an algorithm and find that it is something like
n ^ 2 + k
Then, when you are figuring out the big-O running time, it makes no sense to look at k, because you want to know the running time as n gets large. n ^ 2 is so much larger than k that you can safely ignore k.
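A tiny numeric check makes the same point (k = 1000 is just an arbitrary example value):

    # For f(n) = n**2 + k, the additive constant k quickly becomes negligible.
    k = 1000
    for n in (10, 100, 1_000, 10_000):
        print(n, n**2 + k, (n**2 + k) / n**2)
    # The ratio (n**2 + k) / n**2 tends to 1, which is why the k gets dropped.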
Related
Multiplicative constants are not considered in analysis of algorithm time efficiency because
A) they cancel out when computing efficiency functions
B) constant functions grow very slowly with input size growth
C) they have a small effect when input size is small
D) they can be overcome by faster machines
E) they do not affect the actual run time of the algorithm
My guess is "B", but I don't know the correct answer. Are all of the options incorrect?
So here's my comment extended to an answer:
B) constant functions grow very slowly with input size growth
This doesn't make sense. A constant function doesn't grow at all; however, here we are not talking about constant run-time functions, but about constant coefficients that may occur when estimating the actual number of "steps" given the asymptotic complexity of an algorithm.
In asymptotic analysis, however, we do not care about the exact number of steps, only the limit of the ratio of running times as a function of the input size as the input size goes to infinity.
E.g. O(n ^ 2) means that if you double the input size, the running time will be approximately 4 times the original; if you triple the input size, it will be 9 times the original, etc. It does not say that the execution will take exactly "4 steps" or "9 steps".
C) they have a small effect when input size is small
No, they have rather significant effects when the input size is small. Again, we are considering the limit as the input size approaches infinity. Any constant is asymptotically negligible compared to any non-constant monotonically growing function of n as n goes to infinity.
When n is small, constants can have a tremendous effect on execution times. For example, there are all sorts of interesting and clever data structures, but if we only have small amounts of data, we often prefer arrays over, e.g., a binary tree or a linked list, even for frequent insertion, because the good cache locality properties of the array make its constant factor so small that the theoretically O(n) insertion may well be a lot faster than an O(log n) insertion into a tree (see the rough timing sketch at the end of this answer).
D) they can be overcome by faster machines
This answer completely misses the point: asymptotic analysis of algorithms has nothing to do with how fast physical machines are. Yes, machines are becoming faster over time, but again, that's just a constant factor. If you run a program for an O(n ^ 2) algorithm on a faster machine, it will still take 4 times the CPU time to execute it with a doubled input size.
E) they do not affect the actual run time of the algorithm
That's also wrong, they absolutely do.
So the only remaining answer is A, which may be correct if interpreted as in my explanation above (relating to ratios of execution times), but I would have phrased it quite differently for sure.
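Here is the rough timing sketch promised above. It is only a toy comparison under arbitrary assumptions (Python, 5,000 random float keys, an unbalanced pure-Python binary search tree versus a sorted list maintained with bisect.insort), and the exact numbers will vary by machine, but it shows how a small constant factor can let the theoretically worse structure win.

    # Toy comparison: keep n random keys in sorted order, either in a flat
    # Python list via bisect.insort (O(n) per insertion in theory) or in an
    # unbalanced binary search tree (O(log n) expected per insertion).
    import bisect
    import random
    import timeit

    class Node:
        __slots__ = ("key", "left", "right")
        def __init__(self, key):
            self.key, self.left, self.right = key, None, None

    def bst_insert(root, key):
        if root is None:
            return Node(key)
        cur = root
        while True:
            if key < cur.key:
                if cur.left is None:
                    cur.left = Node(key)
                    return root
                cur = cur.left
            else:
                if cur.right is None:
                    cur.right = Node(key)
                    return root
                cur = cur.right

    def build_sorted_list(keys):
        xs = []
        for k in keys:
            bisect.insort(xs, k)   # tiny constant factor: contiguous memory
        return xs

    def build_bst(keys):
        root = None
        for k in keys:
            root = bst_insert(root, k)   # big constant factor: pointer chasing
        return root

    keys = [random.random() for _ in range(5_000)]
    print("sorted list:", timeit.timeit(lambda: build_sorted_list(keys), number=5))
    print("binary tree:", timeit.timeit(lambda: build_bst(keys), number=5))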
I think the answer is D:
Multiplicative constants are not considered in analysis of algorithm time efficiency because
D) they can be overcome by faster machines
Machines are becoming faster, giving constant-factor speedups, which overcome the multiplicative constants; hence we can ignore the multiplicative constants in the analysis.
I'd rather say we ignore multiplicative constants because they depend on the particular machine, but for multiple choice we have to pick the best answer offered.
I am really confused with Big(O) notation. Is Big(O) machine dependent or machine independent? (Machine in the sense of the computer on which we run the algorithm.)
Will sorting 1000 numbers with quicksort take the same time on an i3 processor and an i7 processor? Why don't we consider the machine and its processor speed when calculating the time complexity? I am a neophyte in this stuff.
Big-O is a measure of scalability, not of speed. It shows you what effect on time and memory it has when you e.g. double the amount of data - does it double the execution time, or quadruple it?
Whether you use i7 or i3, double is double. Whether a linear algorithm is fast or slow, double is double.
This also has another implication many people ignore. A complex algorithm such as O(n^3) can be faster than a simple algorithm such as O(n) for a given n that is below a certain limit. Example:
loop n times:
    loop n times:
        loop n times:
            sleep 1 second
is O(n^3), as it has 3 nested loops.
loop n times:
    sleep 10 seconds
is O(n), as it only has one loop. For n = 10 the first program runs for 1000 seconds, and the second one for only 100. "So O(n) is good!" one would be tempted to say. But if you have n = 2, the first, complex program executes in only 8 seconds, while the second, simpler one runs for 20! Even for n = 3, the first takes 27 seconds, the second one 30. So while n is low, a complex program might be able to outperform the simpler one. It's just that as n rises, the complex program slows down much faster than the simple one (if that makes sense). For n = 1000, the simple code has risen to only 10000 seconds, but the complex one is now at 1000000000 seconds!
Also, this clearly shows you that complexity is not processor-dependent. A second is a second.
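A quick Python check reproduces the arithmetic of the sleep example and shows where the crossover sits:

    # n**3 seconds (three nested loops sleeping 1 s each time)
    # versus 10 * n seconds (one loop sleeping 10 s each time).
    for n in (1, 2, 3, 4, 10, 1000):
        cubic, linear = n**3, 10 * n
        print(n, cubic, linear, "cubic wins" if cubic < linear else "linear wins")
    # The "complex" O(n^3) version is faster only for n <= 3.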
EDIT: Also, you might want to read this question, where Big-O is explained in a number of very high-quality answers.
Big(O) notation is a method of describing the complexity of an algorithm, and hence the relative time it will take to run. The same algorithm, for the same data, will run faster on a faster processor, but will still take the same number of operations. It's used as a way of evaluating the relative efficiency of different algorithms to achieve the same result.
Big O notation is not architecture-dependent in any way; it is a mathematical construct. It is a very limited measure of algorithmic complexity: it only gives you a rough upper bound for how performance changes with data size.
Big(O) is algorithm dependent. Its job is to help compare the relative costs of various algorithms, without the need to consider machine dependencies.
Linear search through an array will, on average, look at about 1/2 of the elements if the item is found. For all practical purposes that is O(N/2), which is the same as O(1/2 * N). For comparison, you toss away the coefficient; hence it is O(N) in use.
A binary tree can hold N elements for searching as well. On average it will look through about log base 2 of N elements to find something, hence you will see it described as costing O(log2(N)).
Pop in small values for N, and there isn't a whole lot of difference between the algorithms. Pop in a large value of N, and it will be clear that the binary tree lookup is much faster.
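A small illustrative table shows how the two comparison counts behave:

    # Roughly N/2 comparisons for a linear search on a hit, versus roughly
    # log2(N) comparisons for a balanced binary tree lookup.
    import math

    for n in (10, 100, 10_000, 1_000_000):
        print(n, n / 2, round(math.log2(n), 1))
    # For small N the gap is modest; for large N the logarithmic lookup wins by far.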
Big(O) is not machine dependent. It is mathematical notation used to denote the complexity of an algorithm. Usually we use these notations in theory to compare algorithms' performance.
I'm trying to understand a particular aspect of Big O analysis in the context of running programs on a PC.
Suppose I have an algorithm that has a performance of O(n + 2). Here if n gets really large the 2 becomes insignificant. In this case it's perfectly clear the real performance is O(n).
However, say another algorithm has an average performance of O(n^2 / 2). The book where I saw this example says the real performance is O(n^2). I'm not sure I get why, I mean the 2 in this case seems not completely insignificant. So I was looking for a nice clear explanation from the book. The book explains it this way:
"Consider though what the 1/2 means. The actual time to check each value
is highly dependent on the machine instruction that the code
translates to and then on the speed at which the CPU can execute the instructions. Therefore the 1/2 doesn't mean very much."
And my reaction is... huh? I literally have no clue what that says or, more precisely, what that statement has to do with their conclusion. Can somebody spell it out for me, please?
Thanks for any help.
There's a distinction between "are these constants meaningful or relevant?" and "does big-O notation care about them?" The answer to that second question is "no," while the answer to that first question is "absolutely!"
Big-O notation doesn't care about constants because big-O notation only describes the long-term growth rate of functions, rather than their absolute magnitudes. Multiplying a function by a constant only influences its growth rate by a constant amount, so linear functions still grow linearly, logarithmic functions still grow logarithmically, exponential functions still grow exponentially, etc. Since these categories aren't affected by constants, it doesn't matter that we drop the constants.
That said, those constants are absolutely significant! A function whose runtime is 10^100 * n will be way slower than a function whose runtime is just n. A function whose runtime is n^2 / 2 will be faster than a function whose runtime is just n^2. The fact that the first two functions are both O(n) and the second two are O(n^2) doesn't change the fact that they don't run in the same amount of time, since that's not what big-O notation is designed for. O notation is good for determining whether in the long term one function will be bigger than another. Even though 10^100 * n is a colossally huge value for any n > 0, that function is O(n) and so for large enough n eventually it will beat the function whose runtime is n^2 / 2 because that function is O(n^2).
In summary - since big-O only talks about relative classes of growth rates, it ignores the constant factor. However, those constants are absolutely significant; they just aren't relevant to an asymptotic analysis.
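As a rough check of the 10^100 example, using Python's exact integers:

    # The O(n) function 10**100 * n eventually drops below the O(n^2) function
    # n**2 // 2, but only once n exceeds 2 * 10**100.
    def slow_linear(n):
        return 10**100 * n

    def fast_quadratic(n):
        return n * n // 2

    for n in (10**6, 2 * 10**100, 2 * 10**100 + 2):
        print(slow_linear(n) < fast_quadratic(n))
    # Prints False, False, True: the linear function wins only for astronomically large n.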
Big O notation is most commonly used to describe an algorithm's running time. In this context, I would argue that specific constant values are essentially meaningless. Imagine the following conversation:
Alice: What is the running time of your algorithm?
Bob: 7n^2
Alice: What do you mean by 7n^2?
What are the units? Microseconds? Milliseconds? Nanoseconds?
What CPU are you running it on? Intel i9-9900K? Qualcomm Snapdragon 845? (Or are you using a GPU, an FPGA, or other hardware?)
What type of RAM are you using?
What programming language did you implement the algorithm in? What is the source code?
What compiler / VM are you using? What flags are you passing to the compiler / VM?
What is the operating system?
etc.
So as you can see, any attempt to indicate a specific constant value is inherently problematic. But once we set aside constant factors, we are able to clearly describe an algorithm's running time. Big O notation gives us a robust and useful description of how long an algorithm takes, while abstracting away from the technical features of its implementation and execution.
Now it is possible to specify the constant factor when describing the number of operations (suitably defined) or CPU instructions an algorithm executes, the number of comparisons a sorting algorithm performs, and so forth. But typically, what we're really interested in is the running time.
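As a sketch of what counting operations can look like, here is an insertion sort instrumented to report its number of key comparisons; unlike seconds, the count does not depend on the CPU or compiler (though it still depends on the implementation chosen):

    # Count the key comparisons an insertion sort performs; the count is
    # machine-independent, unlike a running time measured in seconds.
    import random

    def insertion_sort_comparisons(xs):
        xs = list(xs)
        comparisons = 0
        for i in range(1, len(xs)):
            j = i
            while j > 0:
                comparisons += 1              # one key comparison
                if xs[j - 1] <= xs[j]:
                    break
                xs[j - 1], xs[j] = xs[j], xs[j - 1]
                j -= 1
        return comparisons

    for n in (100, 200, 400):
        data = [random.random() for _ in range(n)]
        # On random input the count grows roughly like n**2 / 4.
        print(n, insertion_sort_comparisons(data))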
None of this is meant to suggest that the real-world performance characteristics of an algorithm are unimportant. For example, if you need an algorithm for matrix multiplication, the Coppersmith-Winograd algorithm is inadvisable. It's true that this algorithm takes O(n^2.376) time, whereas the Strassen algorithm, its strongest competitor, takes O(n^2.808) time. However, according to Wikipedia, Coppersmith-Winograd is slow in practice, and "it only provides an advantage for matrices so large that they cannot be processed by modern hardware." This is usually explained by saying that the constant factor for Coppersmith-Winograd is very large. But to reiterate, if we're talking about the running time of Coppersmith-Winograd, it doesn't make sense to give a specific number for the constant factor.
Despite its limitations, big O notation is a pretty good measure of running time. And in many cases, it tells us which algorithms are fastest for sufficiently large input sizes, before we even write a single line of code.
Big-O notation only describes the growth rate of algorithms in terms of mathematical function, rather than the actual running time of algorithms on some machine.
Mathematically, let f(x) and g(x) be positive for x sufficiently large.
We say that f(x) and g(x) grow at the same rate as x tends to infinity if
lim(x->infinity) f(x)/g(x) = c, for some finite, nonzero constant c.
Now let f(x) = x^2 and g(x) = x^2/2; then lim(x->infinity) f(x)/g(x) = 2, so x^2 and x^2/2 have the same growth rate, and we can say O(x^2/2) = O(x^2).
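A quick numeric check of that limit:

    # f(x) = x**2 and g(x) = x**2 / 2: the ratio is exactly 2 for every x > 0,
    # so the two functions grow at the same rate.
    def f(x):
        return x**2

    def g(x):
        return x**2 / 2

    for x in (10, 1_000, 1_000_000):
        print(x, f(x) / g(x))   # always 2.0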
As templatetypedef said, hidden constants in asymptotic notations are absolutely significant. As an example: merge sort runs in O(n log n) worst-case time and insertion sort runs in O(n^2) worst-case time. But since the hidden constant factors in insertion sort are smaller than those of merge sort, in practice insertion sort can be faster than merge sort for small problem sizes on many machines.
You are completely right that constants matter. In comparing many different algorithms for the same problem, the O numbers without constants give you an overview of how they compare to each other. If you then have two algorithms in the same O class, you would compare them using the constants involved.
But even for different O classes the constants are important. For instance, for multidigit or big integer multiplication, the naive algorithm is O(n^2), Karatsuba is O(n^log_2(3)), Toom-Cook is O(n^log_3(5)) and Schönhage-Strassen is O(n*log(n)*log(log(n))). However, each of the faster algorithms has an increasingly large overhead reflected in large constants. So to get approximate cross-over points, one needs valid estimates of those constants. Thus one gets, as a rough estimate (a SWAG), that up to n=16 the naive multiplication is fastest, up to n=50 Karatsuba is, and the cross-over from Toom-Cook to Schönhage-Strassen happens around n=200.
In reality, the cross-over points not only depend on the constants, but also on processor-caching and other hardware-related issues.
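To sketch how such cross-over estimates could be made once constants are known, here is a toy model. The constants below are invented placeholders, not measured values, so the printed cross-over points are only illustrative, not the real ones quoted above.

    # Toy cost models: constant * n**exponent, with made-up constants.
    import math

    def naive(n):
        return 1.0 * n**2                      # hypothetical constant 1.0

    def karatsuba(n):
        return 3.0 * n**math.log2(3)           # hypothetical larger overhead

    def toom_cook(n):
        return 5.0 * n**math.log(5, 3)         # hypothetical even larger overhead

    for n in (8, 16, 32, 64, 256, 1024):
        costs = {"naive": naive(n), "karatsuba": karatsuba(n), "toom-cook": toom_cook(n)}
        print(n, min(costs, key=costs.get))
    # With these toy constants, each algorithm is cheapest in some range of n.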
Big O without constants is enough for algorithm analysis.
First, the actual time depends not only on how many instructions execute but also on the time each instruction takes, which is closely connected to the platform where the code runs. That goes beyond theoretical analysis, so the constants are unnecessary in most cases.
Second, Big O is mainly used to measure how the run time will increase as the problem becomes larger, or how the run time decreases as the performance of the hardware improves.
Third, in situations of high-performance optimization, constants will also be taken into consideration.
Nowadays, a computer does not need a large amount of time for a particular task unless the input value is very large.
Suppose we want to multiply two matrices of size 10*10: we will not have a problem unless we want to do this operation many times, and that is where asymptotic notations become prevalent. When the value of n becomes very big, the constants don't really make any difference to the answer and are almost negligible, so we tend to leave them out while calculating the complexity.
Time complexity for O(n+n) reduces to O(2n). Now 2 is a constant. So the time complexity will essentially depend on n.
Hence the time complexity of O(2n) equates to O(n).
Also, if there is something like O(2n + 3), it will still be O(n), as essentially the time will depend on the size of n.
Now suppose there is a code which is O(n^2 + n); it will be O(n^2), as when the value of n increases, the effect of n becomes less significant compared to the effect of n^2.
Eg:
n = 2 => 4 + 2 = 6
n = 100 => 10000 + 100 = 10100
n = 10000 => 100000000 + 10000 = 100010000
As you can see, the second term has less and less effect as the value of n keeps increasing. Hence the time complexity evaluates to O(n^2).
Everyone knows that bubble sort is O(n^2), but this is based on the number of comparisons needed to sort the input. I have a question: if I don't care about the number of comparisons, but about the output time, how do you do the analysis? Is there a way to do analysis on output time instead of comparisons?
For example, if you had bubble sort with parallel comparisons happening for all pairs (alternating even and odd pairs), then the throughput time would be something like 2n-1. The number of comparisons would be high, but I don't care, as the final throughput time is quick.
So in essence, is there a common analysis for overall parallel performance time? If so, just give me some key terms and I'll learn the rest from google.
Parallel programming is a bit of a red herring here. Making assumptions about run time based only on big O notation can be misleading. To compare run times of algorithms you need the full equation, not just the big O notation.
The problem is that big O notation tells you the dominating term as n goes to infinity. But the run time is on finite ranges of n. This is easy to understand from continuous mathematics (my background).
Consider y = Ax and y = Bx^2. Big O notation would tell you that y = Bx^2 is slower. However, between 0 and A/B it's less than y = Ax. In this case it could be faster to use the O(x^2) algorithm than the O(x) algorithm for x < A/B.
In fact, I have heard of sorting algorithms which start off with an O(n log n) algorithm and then switch to an O(n^2) algorithm when n is sufficiently small.
The best example is matrix multiplication. The naïve algorithm is O(n^3), but there are algorithms that get that down to O(n^2.3727). However, every algorithm I have seen has such a large constant that the naïve O(n^3) is still the fastest algorithm for all practical values of n, and that does not look likely to change any time soon.
So really what you need is the full equation to compare. Something like An^3 (let's ignore lower order terms) and Bn^2.3727. In this case B is so incredibly large that the O(n^3) method always wins.
Parallel programming usually just lowers the constant. For example when I do matrix multiplication using four cores my time goes from An^3 to A/4 n^3. The same thing will happen with your parallel bubble sort. You will decrease the constant. So it's possible that for some range of values of n that your parallel bubble sort will beat a non-parallel (or possibly even parallel) merge sort. Though, unlike matrix multiplication, I think the range will be pretty small.
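Here is a toy model of that last point; all constants are invented for illustration, not measurements. Running bubble sort on p cores divides its constant by p, but the growth stays quadratic, so an O(n log n) sort still takes over beyond a fairly small n.

    # Cost models: parallel bubble sort (n**2 / p) versus merge sort (c * n * log2 n).
    import math

    def bubble_parallel(n, p=4):
        return (1.0 / p) * n**2        # parallelism only shrinks the constant

    def merge_sort(n, c=2.0):          # hypothetical larger per-element constant
        return c * n * math.log2(n)

    for n in (8, 16, 32, 64, 128, 512):
        print(n, "bubble" if bubble_parallel(n) < merge_sort(n) else "merge")
    # With these toy numbers the parallel bubble sort only wins up to a few dozen elements.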
Algorithm analysis is not meant to give actual run times. That's what benchmarks are for. Analysis tells you how much relative complexity is in a program, but the actual run time for that program depends on so many other factors that strict analysis can't guarantee real-world times. For example, what happens if your OS decides to suspend your program to install updates? Your run time will be shot. Even running the same program over and over yields different results due to the complexity of computer systems (memory page faults, virtual machine garbage collection, IO interrupts, etc). Analysis can't take these into account.
This is why parallel processing doesn't usually come under consideration during analysis. The mechanism for "parallelizing" a program's components is usually external to your code, and usually based on a probabilistic algorithm for scheduling. I don't know of a good way to do static analysis on that. Once again, you can run a bunch of benchmarks and that will give you an average run time.
The time efficiency we get from parallel steps can be measured by round complexity, where each round consists of parallel steps occurring at the same time. By doing so, we can see how effective the throughput time is, with an analysis similar to the one we are used to.
I am just starting to learn the big O concept. What I learned is that if a function f is less than or equal to a constant multiple of a function g, then f is O(g).
Now I came across an example in which a string of size "n" takes "2n" steps (double the size of the input) in the algorithm. So they say the time taken is O(2n), but then they follow this statement by saying that as O(2n) = O(n), the time complexity is O(n).
I don't understand this. As 2n will always be greater than n, how can we ignore the multiple of 2 then? Anything less than or equal to 2n will not necessarily be less than n!
Doesn't it mean that we are somehow equating n and 2n? Sounds confusing. Please clarify in simplest possible way as I am just a beginner in this concept.
Best Regards :)
Big-O and related notations are intended to capture the aspects of algorithm performance that are most inherent to the algorithm, independent of how it is being run and measured.
Constant multipliers depend on the unit of measurement, seconds vs. microseconds vs. instructions vs. loop iterations. Even measured in the same units they will be different if measured on different systems. The same algorithm may take 20n instructions in one instruction set, 30n instructions on another. It may take 0.5n microseconds on one, 10n microseconds on another.
Many of the basic algorithm complexities you will see in the literature were calculated decades ago, but remain meaningful across significant changes in processor architecture and even more significant changes in performance.
Similar considerations apply to start-up and similar overheads.
A function f(n) is O(n) if there exist constants N and c such that, for all n >= N, f(n) <= cn. For f(n) = 2n the constants are N = 0 and c = 2. The first constant, N, is about ignoring overhead; the second, c, is about ignoring constant multipliers.
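A quick finite check of that definition for f(n) = 2n with the witnesses N = 0 and c = 2:

    def f(n):
        return 2 * n

    N, c = 0, 2
    # f(n) <= c * n holds for every sampled n >= N, as the definition requires.
    print(all(f(n) <= c * n for n in range(N, 10_000)))   # True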
... As 2n will always be greater than n, how can we ignore the multiple of 2 then? ...
Simply put, with growing n the multiplier loses its importance. The asymptotic behavior of a function describes what happens when n gets large.
Maybe it helps to consider not just O(n) and O(2n), because they are in the same class, but to contrast them with some other common classes. Example: any O(n^2) algorithm will take longer than any O(n) algorithm in the long run (in the short run, their running times might even be reversed). Say you have two algorithms, one with a linear time complexity of 100n and another with 8n^2. The quadratic algorithm will be faster for all n <= 12, but slower for all n > 12.
This property (that for any fixed positive c and d you'll find an n so that cn < dn^2) constitutes a part of the hierarchy of time complexities.
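A few lines of Python verify the 100n versus 8n^2 example:

    for n in range(10, 16):
        linear, quadratic = 100 * n, 8 * n**2
        print(n, linear, quadratic,
              "quadratic wins" if quadratic < linear else "linear wins")
    # The quadratic algorithm is cheaper up to n = 12 and loses from n = 13 on.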
As you alluded to in your first paragraph, the time required to execute the algorithm is proportional to a constant multiple of the input size. You can think of O(n) as O(C*n), where C is any constant multiplier.