Know running time of algorithm

I want to know, given a computer and the big O running time of an algorithm, how to estimate the approximate actual time the algorithm will take on that computer.
For example, let's say I have an algorithm of complexity O(n) and a computer with a single-core, 32-bit, 3.00 GHz processor and 4 GB of RAM. How can I estimate the actual number of seconds this algorithm will take?
Thanks

There is no good answer to this question, simply because big O notation was never meant to answer that sort of question.
What big O notation does tell us is the following:
Big O notation characterizes functions according to their growth rates: different functions with the same growth rate may be represented using the same O notation.
Note that this means that functions that could hold very different values can be assigned the same big O value.
In other words, big O notation doesn't tell you much as to how fast an algorithm runs on a particular input, but rather compares the run times on inputs as their sizes approach infinity.
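If what you actually want is a rough wall-clock figure, the usual practical workaround (a sketch of my own, not something big O itself provides; `my_algorithm`, the input sizes, and the linear scaling assumption are all placeholders) is to time the algorithm on a smaller input and scale by its growth rate:

```python
import time

def my_algorithm(n):                  # stand-in for your O(n) algorithm
    return sum(range(n))

n_small, n_target = 1_000_000, 100_000_000

start = time.perf_counter()
my_algorithm(n_small)
elapsed = time.perf_counter() - start

# For an O(n) algorithm we assume runtime scales roughly linearly with n
# (ignoring caches, memory limits, and other hardware effects).
estimate = elapsed * (n_target / n_small)
print(f"measured {elapsed:.4f}s for n={n_small}, "
      f"estimated {estimate:.2f}s for n={n_target}")
```

This gives an estimate for one particular machine and implementation only; the big O part merely tells you which scaling rule to apply.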

Related

Why is the constant always dropped from big O analysis?

I'm trying to understand a particular aspect of Big O analysis in the context of running programs on a PC.
Suppose I have an algorithm that has a performance of O(n + 2). Here if n gets really large the 2 becomes insignificant. In this case it's perfectly clear the real performance is O(n).
However, say another algorithm has an average performance of O(n^2 / 2). The book where I saw this example says the real performance is O(n^2). I'm not sure I get why; the 2 in this case doesn't seem completely insignificant. So I was looking for a nice clear explanation from the book. The book explains it this way:
"Consider though what the 1/2 means. The actual time to check each value
is highly dependent on the machine instruction that the code
translates to and then on the speed at which the CPU can execute the instructions. Therefore the 1/2 doesn't mean very much."
And my reaction is... huh? I literally have no clue what that says or more precisely what that statement has to do with their conclusion. Can somebody spell it out for me please.
Thanks for any help.
There's a distinction between "are these constants meaningful or relevant?" and "does big-O notation care about them?" The answer to that second question is "no," while the answer to that first question is "absolutely!"
Big-O notation doesn't care about constants because big-O notation only describes the long-term growth rate of functions, rather than their absolute magnitudes. Multiplying a function by a constant only influences its growth rate by a constant amount, so linear functions still grow linearly, logarithmic functions still grow logarithmically, exponential functions still grow exponentially, etc. Since these categories aren't affected by constants, it doesn't matter that we drop the constants.
That said, those constants are absolutely significant! A function whose runtime is 10^100 n will be way slower than a function whose runtime is just n. A function whose runtime is n^2 / 2 will be faster than a function whose runtime is just n^2. The fact that the first two functions are both O(n) and the second two are O(n^2) doesn't change the fact that they don't run in the same amount of time, since that's not what big-O notation is designed for. O notation is good for determining whether in the long term one function will be bigger than another. Even though 10^100 n is a colossally huge value for any n > 0, that function is O(n) and so for large enough n eventually it will beat the function whose runtime is n^2 / 2 because that function is O(n^2).
In summary - since big-O only talks about relative classes of growth rates, it ignores the constant factor. However, those constants are absolutely significant; they just aren't relevant to an asymptotic analysis.
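As a rough, machine-dependent illustration of this point (both functions below are toy examples of my own), two algorithms can share the same O(n) class while differing in speed by a large constant factor:

```python
import timeit

def linear_cheap(n):
    total = 0
    for i in range(n):                 # ~1 unit of work per element
        total += i
    return total

def linear_expensive(n):
    total = 0
    for i in range(n):                 # ~100 units of work per element
        for _ in range(100):
            total += 1
    return total

n = 100_000
print(timeit.timeit(lambda: linear_cheap(n), number=5))
print(timeit.timeit(lambda: linear_expensive(n), number=5))
# Both are O(n) and grow linearly, but the second is roughly 100x slower.
```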
Big O notation is most commonly used to describe an algorithm's running time. In this context, I would argue that specific constant values are essentially meaningless. Imagine the following conversation:
Alice: What is the running time of your algorithm?
Bob: 7n^2
Alice: What do you mean by 7n^2?
What are the units? Microseconds? Milliseconds? Nanoseconds?
What CPU are you running it on? Intel i9-9900K? Qualcomm Snapdragon 845? (Or are you using a GPU, an FPGA, or other hardware?)
What type of RAM are you using?
What programming language did you implement the algorithm in? What is the source code?
What compiler / VM are you using? What flags are you passing to the compiler / VM?
What is the operating system?
etc.
So as you can see, any attempt to indicate a specific constant value is inherently problematic. But once we set aside constant factors, we are able to clearly describe an algorithm's running time. Big O notation gives us a robust and useful description of how long an algorithm takes, while abstracting away from the technical features of its implementation and execution.
Now it is possible to specify the constant factor when describing the number of operations (suitably defined) or CPU instructions an algorithm executes, the number of comparisons a sorting algorithm performs, and so forth. But typically, what we're really interested in is the running time.
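As a small sketch of that last point (my example, counting comparisons as the operation): for a concrete operation count you can state the constant exactly, even though wall-clock time still depends on all the factors Alice listed. Selection sort on any input of size n performs exactly n(n-1)/2 comparisons, i.e. about n^2/2:

```python
def selection_sort_with_count(a):
    a = list(a)
    comparisons = 0
    for i in range(len(a)):
        smallest = i
        for j in range(i + 1, len(a)):
            comparisons += 1           # one comparison per pair examined
            if a[j] < a[smallest]:
                smallest = j
        a[i], a[smallest] = a[smallest], a[i]
    return a, comparisons

_, count = selection_sort_with_count(list(range(100, 0, -1)))
print(count)   # 4950 = 100*99/2 comparisons: here the 1/2 constant is meaningful
```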
None of this is meant to suggest that the real-world performance characteristics of an algorithm are unimportant. For example, if you need an algorithm for matrix multiplication, the Coppersmith-Winograd algorithm is inadvisable. It's true that this algorithm takes O(n^2.376) time, whereas the Strassen algorithm, its strongest competitor, takes O(n^2.808) time. However, according to Wikipedia, Coppersmith-Winograd is slow in practice, and "it only provides an advantage for matrices so large that they cannot be processed by modern hardware." This is usually explained by saying that the constant factor for Coppersmith-Winograd is very large. But to reiterate, if we're talking about the running time of Coppersmith-Winograd, it doesn't make sense to give a specific number for the constant factor.
Despite its limitations, big O notation is a pretty good measure of running time. And in many cases, it tells us which algorithms are fastest for sufficiently large input sizes, before we even write a single line of code.
Big-O notation only describes the growth rate of algorithms in terms of a mathematical function, rather than the actual running time of algorithms on some machine.
Mathematically, let f(x) and g(x) be positive for x sufficiently large.
We say that f(x) and g(x) grow at the same rate as x tends to infinity if lim(x->infinity) f(x)/g(x) = c for some finite constant c > 0.
Now let f(x) = x^2 and g(x) = x^2/2; then lim(x->infinity) f(x)/g(x) = 2, so x^2 and x^2/2 have the same growth rate, and we can say O(x^2/2) = O(x^2).
As templatetypedef said, hidden constants in asymptotic notations are absolutely significant. As an example: merge sort runs in O(n log n) worst-case time and insertion sort runs in O(n^2) worst-case time. But since the hidden constant factors in insertion sort are smaller than those of merge sort, in practice insertion sort can be faster than merge sort for small problem sizes on many machines.
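A rough, machine-dependent sketch of that claim (the input size of 30 and the straightforward implementations below are my own choices): timing both sorts on a small array will typically show insertion sort ahead, despite its worse asymptotic class.

```python
import random
import timeit

def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def merge_sort(a):
    if len(a) <= 1:
        return list(a)
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    out, i, j = [], 0, 0
    while i < len(left) and j < len(right):   # merge the two sorted halves
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

data = [random.random() for _ in range(30)]   # deliberately small input
print(timeit.timeit(lambda: insertion_sort(data), number=10_000))
print(timeit.timeit(lambda: merge_sort(data), number=10_000))
```

For large inputs the ordering flips, exactly as the asymptotic analysis predicts.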
You are completely right that constants matter. In comparing many different algorithms for the same problem, the O numbers without constants give you an overview of how they compare to each other. If you then have two algorithms in the same O class, you would compare them using the constants involved.
But even for different O classes the constants are important. For instance, for multidigit or big integer multiplication, the naive algorithm is O(n^2), Karatsuba is O(n^log_2(3)), Toom-Cook O(n^log_3(5)) and Schönhage-Strassen O(n*log(n)*log(log(n))). However, each of the faster algorithms has an increasingly large overhead reflected in large constants. So to get approximate cross-over points, one needs valid estimates of those constants. Thus one gets, as SWAG, that up to n=16 the naive multiplication is fastest, up to n=50 Karatsuba and the cross-over from Toom-Cook to Schönhage-Strassen happens for n=200.
In reality, the cross-over points not only depend on the constants, but also on processor-caching and other hardware-related issues.
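To make the cross-over idea concrete, here is a minimal sketch (my own, not a tuned big-integer library; the 64-bit cutoff is an arbitrary assumption, and Python's built-in multiplication stands in for the "naive" method below the cutoff) of Karatsuba with a fallback threshold:

```python
import random

def karatsuba(x, y, cutoff_bits=64):
    """Multiply non-negative integers x and y, recursing only above the cutoff."""
    if x.bit_length() <= cutoff_bits or y.bit_length() <= cutoff_bits:
        return x * y                   # small operands: the cheap method wins here
    m = max(x.bit_length(), y.bit_length()) // 2
    hi_x, lo_x = x >> m, x & ((1 << m) - 1)
    hi_y, lo_y = y >> m, y & ((1 << m) - 1)
    z0 = karatsuba(lo_x, lo_y, cutoff_bits)                          # low parts
    z2 = karatsuba(hi_x, hi_y, cutoff_bits)                          # high parts
    z1 = karatsuba(lo_x + hi_x, lo_y + hi_y, cutoff_bits) - z0 - z2  # cross term
    return (z2 << (2 * m)) + (z1 << m) + z0

a, b = random.getrandbits(4096), random.getrandbits(4096)
assert karatsuba(a, b) == a * b
```

Raising or lowering `cutoff_bits` shifts the cross-over point, which is exactly the role the constant factors (the recursion and splitting overhead) play here.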
Big O without the constant is enough for algorithm analysis.
First, the actual time depends not only on how many instructions there are but also on the time for each instruction, which is closely tied to the platform where the code runs. That goes beyond theoretical analysis, so the constant is not necessary in most cases.
Second, Big O is mainly used to measure how the run time will increase as the problem becomes larger, or how the run time will decrease as hardware performance improves.
Third, in situations of high-performance optimization, the constant will also be taken into consideration.
Nowadays, the time required for a computer to do a particular task is not large unless the input value is very big.
Suppose we want to multiply two 10*10 matrices: we will not have a problem unless we want to do this operation many times, and that is where asymptotic notation becomes relevant. When the value of n becomes very big, the constants don't really make any difference to the answer and are almost negligible, so we tend to leave them out while calculating the complexity.
Time complexity for O(n+n) reduces to O(2n). Now 2 is a constant. So the time complexity will essentially depend on n.
Hence the time complexity of O(2n) equates to O(n).
Also if there is something like this O(2n + 3) it will still be O(n) as essentially the time will depend on the size of n.
Now suppose there is code which is O(n^2 + n); it will be O(n^2), because as the value of n increases the effect of n becomes less significant compared to the effect of n^2.
Eg:
n = 2 => 4 + 2 = 6
n = 100 => 10000 + 100 => 10100
n = 10000 => 100000000 + 10000 => 100010000
As you can see, the second term has a lesser and lesser effect as the value of n keeps increasing. Hence the time complexity evaluates to O(n^2).
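The same arithmetic in a short loop (just re-running the numbers above for a few values of n): the share of the total contributed by the n term shrinks toward zero, which is why it gets dropped.

```python
for n in (2, 100, 10_000, 1_000_000):
    total = n**2 + n
    print(n, total, f"n term is {n / total:.6%} of the total")
```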

What unit of time does big oh measure?

I'm measuring the execution time of a program that should run in O(n^2). To get the expected running time I would calculate n^2 from the input size, I assume. But when I take the execution time using another program, I get the time in milliseconds. So my question is how to compare that to n^2. For n^2 I get a larger number. How would I convert this to milliseconds? I know this question may not be worded as well as you might like. Hopefully, you know what I mean.
It doesn't measure time in any unit. It describes how the time will change if n changes.
For example, O (n^2) is defined to mean: "There is a constant c such that the time is at most c * n^2 for all but the first few n". When you run the algorithm on different computers, it could be 5 n^2 nanoseconds on one, and 17 n^2 milliseconds on another computer. You just have c = 5 nanoseconds in one case, c = 17 milliseconds in the other case.
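A small sketch of that idea in practice (the function, the input sizes, and the measured constant below are all placeholders, and c is only valid for the machine that measured it): time the algorithm once, solve for c in "time ≈ c * n^2", then extrapolate.

```python
import time

def quadratic_algorithm(n):            # stand-in for some O(n^2) algorithm
    total = 0
    for i in range(n):
        for j in range(n):
            total += i * j
    return total

n = 1_000
start = time.perf_counter()
quadratic_algorithm(n)
c = (time.perf_counter() - start) / n**2   # seconds per "n^2 step" on this machine

for n_pred in (2_000, 5_000):
    print(f"predicted time for n={n_pred}: {c * n_pred**2:.2f}s")
```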
It bounds (to within a constant factor) the asymptotic runtime in numbers of steps of an abstract model of a computer, often a Register Machine.
Big O doesn't measure a unit of time. It expresses the runtime of the code in terms of input of size n. You can't accurately use a big O notation to compute runtime in milliseconds as this may vary per machine that you run the code on.
You can estimate how long it takes if you know the runtime of each of the operations in the algorithm but that's not really what the big O notation is meant for.

Estimation of program execution time from complexity

I want to know how I can estimate the time my program will take to execute on my machine (for example, a 2.5 GHz machine), if I have an estimate of its worst-case time complexity.
For example: if I have a program which is O(n^2) in the worst case, and n < 100000, how can I know/estimate, before writing the actual program/procedure, the time it will take to execute in seconds?
Wouldn't it be good to know how a program will actually perform? It would also save writing code which eventually turns out to be inefficient.
Help greatly appreciated.
Since big O complexity ignores constant coefficients and smaller terms, it is impossible to estimate the performance of an algorithm given only its big O complexity.
In fact, for any specific N, you cannot predict which of two given algorithms will execute faster.
For example, O(N) is not always faster than O(N*N), since an algorithm that takes 100000000*N steps is O(N) but is slower than an algorithm that takes N*N steps for many small values of N.
These constant coefficients and asymptotically smaller terms vary from platform to platform and even amongst algorithms of the same equivalence class (in terms of the big O measure).
The problem you are trying to use big O notation for is not the one it is designed to solve.
Instead of dealing with complexity, you might want to have a look at Worst Case Execution Time (WCET). This area of research most likely corresponds to what you are looking for.
http://en.wikipedia.org/wiki/Worst-case_execution_time
Multiply N^2 by the time you spend in an iteration of the innermost loop, and you have a ballpark estimate.
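A back-of-the-envelope version of that suggestion (every number here is an assumption, roughly in line with the 2.5 GHz machine in the question): with n = 100000 and a couple of nanoseconds per inner-loop iteration, an O(n^2) loop nest lands in the tens of seconds.

```python
n = 100_000
seconds_per_iteration = 2e-9            # assumed cost of one simple inner-loop pass
estimate = n**2 * seconds_per_iteration
print(f"ballpark: ~{estimate:.0f} seconds")   # ~20 seconds, order of magnitude only
```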

Could anyone explain Big O versus Big Omega vs Big Theta? [duplicate]

Possible Duplicate:
Big Theta Notation - what exactly does big Theta represent?
I understand it in theory, I guess, but what I'm having trouble grasping is the application of the three.
In school, we always used Big O to denote the complexity of an algorithm. Bubble sort was O(n^2) for example.
Now after reading some more theory I get that Big Oh is not the only measure; there are at least two other interesting ones.
But here's my question:
Big O is the upper-bound, Big Omega is the lower bound, and Big Theta is a mix of the two. But what does that mean conceptually? I understand what it means on a graph; I've seen a million examples of that. But what does it mean for algorithm complexity? How does an "upper bound" or a "lower bound" mix with that?
I guess I just don't get its application. I understand that if g(x), multiplied by some constant c, is greater than f(x) for all x after some value n_0, then f(x) is considered O(g(x)). But what does that mean practically? Why would we be multiplying by some value c? Hell, I thought with Big O notation multiples didn't matter.
The big O notation, and its relatives, the big Theta, the big Omega, the small o and the small omega are ways of saying something about how a function behaves at a limit point (for example, when approaching infinity, but also when approaching 0, etc.) without saying much else about the function. They are commonly used to describe running space and time of algorithms, but can also be seen in other areas of mathematics regarding asymptotic behavior.
The semi-intuitive definition is as follows:
A function g(x) is said to be O(f(x)) if "from some point on", g(x) is lower than c*f(x), where c is some constant.
The other definitions are similar, Theta demanding that g(x) be between two constant multiples of f(x), Omega demanding g(x)>c*f(x), and the small versions demand that this is true for all such constants.
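For reference, the standard formal definitions behind those intuitions (textbook material, written here in the same g-versus-f orientation as above):

```latex
g(x) = O(f(x))      \iff \exists\, c > 0,\ x_0 \ \text{such that}\ g(x) \le c \cdot f(x) \ \text{for all}\ x \ge x_0
g(x) = \Omega(f(x)) \iff \exists\, c > 0,\ x_0 \ \text{such that}\ g(x) \ge c \cdot f(x) \ \text{for all}\ x \ge x_0
g(x) = \Theta(f(x)) \iff g(x) = O(f(x)) \ \text{and}\ g(x) = \Omega(f(x))
```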
But why is it interesting to say, for example, that an algorithm has run time of O(n^2)?
It's interesting mainly because, in theoretical computer science, we are most interested in how algorithms behave for large inputs. This is true because on small inputs algorithm run times can vary greatly depending on implementation, compilation, hardware, and other such things that are not really interesting when analyzing an algorithm theoretically.
The rate of growth, however, usually depends on the nature of the algorithm, and to improve it you need deeper insights on the problem you're trying to solve. This is the case, for example, with sorting algorithms, where you can get a simple algorithm (Bubble Sort) to run in O(n^2), but to improve this to O(n log n) you need a truly new idea, such as that introduced in Merge Sort or Heap Sort.
On the other hand, if you have an algorithm that runs in exactly 5n seconds, and another that runs in 1000n seconds (which is the difference between a long yawn and a lunch break for n=3, for example), when you get to n=1000000000000, the difference in scale seems less important. If you have an algorithm that takes O(log n), though, you'd have to wait log(1000000000000)=12 seconds, perhaps multiplied by some constant, instead of the almost 317,098 years, which, no matter how big the constant is, is a completely different scale.
I hope this makes things a little clearer. Good luck with your studies!

Does Big O Measure Memory Requirements Or Just Speed?

I often hear people talk about Big O, which measures algorithms against each other.
Does this measure clock cycles or space requirements?
If people want to contrast algorithms based on memory usage, what measure would they use?
If someone says "This algorithm runs in O(n) time", he's talking about speed. If someone says "This algorithm runs in O(n) space", he's talking about memory.
If he just says "This algorithm is O(n)", he's usually talking about speed (though if he says it during a discussion about memory, he's probably talking about memory).
If you're not sure which one someone's talking about, ask him.
Short answer : you have 'Big O in space" and "Big O in time".
Long answer: Big O is just a notation, you can use it in whatever context you want.
Big O is just a mathematical tool that can be used to describe any function. Usually people use it to describe speed, but it can just as well be used to describe memory usage.
Also, when we use Big O for time, we're usually not talking directly about clock cycles. Instead, we count "basic operations" (that are implicitly assumed to take a constant number of cycles).
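As a small sketch of what counting "basic operations" looks like (my example, taking a comparison as the basic operation): a linear search performs O(n) comparisons in the worst case while using only O(1) extra space, which is exactly the time-versus-space distinction above.

```python
def linear_search(items, target):
    comparisons = 0                    # the "basic operation" we count
    for index, value in enumerate(items):
        comparisons += 1
        if value == target:
            return index, comparisons  # O(n) comparisons worst case, O(1) extra space
    return -1, comparisons

print(linear_search(list(range(1000)), 999))   # (999, 1000): n comparisons
```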
Typically it's the number of operations, which translates to speed. Usually, algorithms differ more in speed than in memory usage. However, you will see the O() notation used for memory use, when appropriate.
Big O is really just a measure of the growth of complexity based on growth of input.
Two algorithms which are both O(n) may execute in vastly different times, but their growth is linear in relation to the growth of the input.
Big-O can be used to describe the relationship between any two quantities. Although it is generally only used in computer science, the concept may also be applicable in other fields like physics. For example, the amount of power that must be put into an antenna of a given size to yield a unit-strength signal at some distance is O(d^2), regardless of the antenna shape. If the size of the antenna is large relative to the distance, the increase in strength required may be linear or even sub-linear rather than quadratic, but as the distance gets larger, the quadratic term will dominate.
Big O and others are used to measure the growth of something.
When someone says that something is O(N), that thing grows no faster than a linear rate. If something is Ω(N^2), that thing grows no slower than a quadratic rate. When something is Θ(2^N), that thing grows at an exponential rate.
What that thing is can be the time requirement for an algorithm. It can also be the space i.e. memory requirement for an algorithm. It can also be pretty much anything, related to neither space nor time.
For example, some massively parallel algorithms often measure the scalability in the number of processors that it can run on. A particular algorithm may run on O(N) processors in O(N^2) time. Another algorithm may run on O(N^2) processors in O(N) time.
Related questions
Big-oh vs big-theta
See also
Wikipedia/Big O Notation
Since algorithms normally compete on time, I would normally assume that any O statement referred to time. However, it's perfectly valid to also compare space. O can be used for any measurement; we just normally use speed because it's normally the most important.

Resources