Running time of computing mathematical functions - algorithm

Where can I turn for information regarding computing times of mathematical functions? Has any (general) study with any amount of rigor been made?
For instance, computing
constant + constant
generally takes O(1) time.
Suppose I want to start using math like integrals, and I'd like to get an asymptotic approximation to various integrals. Has there been a standard study of this, or must I take the information I have and figure out my own approximation? I'd be very interested in a standard approach to this, and I'd like to know if it already exists.
Here's my motivation:
I'm in the middle of writing a paper that points out the equivalence between NP-hard problems and certain types of mathematical equations. It seems there might be use for a generalized study of mathematical computing times, almost as a new science.
EDIT:
I guess I'm wondering if there is a standard computational complexity to any given math that cannot be avoided. I'm wondering if anyone has studied this question. I'd love to see what others have tried.
EDIT 2:
Wikipedia lists "Computational Complexity Theory" in their encyclopedia, which I think may fit the bill. I'm still wondering if someone who has studied this could affirm this.

"Standard" math has no notion of algorithmic complexity. That's reserved for computer algorithms.
There are ways to analyze the dynamic behavior of solutions of equations. Things like convergence matter a great deal to mathematicians.
You can ask what the algorithmic complexity of Euler integration is versus, say, a fifth-order Runge-Kutta method. They would be compared on the number of function evaluations required and on time-step stability.
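For concreteness, here is a minimal sketch of that kind of comparison, counting right-hand-side evaluations for explicit Euler against the classical fourth-order Runge-Kutta scheme (RK4 stands in for the fifth-order method mentioned above; the test equation dy/dt = -y and the step counts are illustrative choices):

    import math

    def rhs_counter(f):
        # Wrap f so we can count how many times the ODE right-hand side is called.
        calls = {"n": 0}
        def wrapped(t, y):
            calls["n"] += 1
            return f(t, y)
        return wrapped, calls

    def euler(f, y0, t0, t1, steps):
        h, y, t = (t1 - t0) / steps, y0, t0
        for _ in range(steps):
            y += h * f(t, y)                  # 1 evaluation per step
            t += h
        return y

    def rk4(f, y0, t0, t1, steps):
        h, y, t = (t1 - t0) / steps, y0, t0
        for _ in range(steps):
            k1 = f(t, y)                      # 4 evaluations per step
            k2 = f(t + h / 2, y + h / 2 * k1)
            k3 = f(t + h / 2, y + h / 2 * k2)
            k4 = f(t + h, y + h * k3)
            y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
            t += h
        return y

    exact = math.exp(-1.0)
    for name, method, steps in [("Euler", euler, 100000), ("RK4", rk4, 10)]:
        f, calls = rhs_counter(lambda t, y: -y)
        y = method(f, 1.0, 0.0, 1.0, steps)
        print(f"{name:5s}: error={abs(y - exact):.2e}, evaluations={calls['n']}")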
But what's the "running time" of the solution to Fermat's Last Theorem? What about the last of David Hilbert's challenge problems? Is the "running time" for those a century and counting? What's your running time for solving a partial differential equation using separation of variables?
When you think about it that way, do you have a better understanding of why people would be put off by your question?

Yes, for various mathematical functions, the computational complexity (running time) of computing the function has been studied. This can differ depending on the model of computation.
For example, adding two n-bit numbers takes Θ(n) time, multiplying them takes O(n log n) time (using FFT-based methods), finding their gcd takes Θ(n^2) time with the usual Euclidean algorithm and Θ(n (log n)^2 log log n) with better algorithms, etc. For more complicated stuff like integrals, obviously it depends on what algorithm you use to do it.
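As a rough empirical companion to those bounds, here is a small timing sketch (my own illustration). Note that CPython's built-in integer multiplication uses Karatsuba rather than FFT methods, and math.gcd is not the asymptotically fastest gcd, so the measured growth will not match the best theoretical bounds exactly.

    import math, random, time

    def best_time(f, repeats=3):
        # Return the best of a few runs to reduce timing noise.
        best = float("inf")
        for _ in range(repeats):
            start = time.perf_counter()
            f()
            best = min(best, time.perf_counter() - start)
        return best

    for bits in (50_000, 100_000, 200_000, 400_000):
        a = random.getrandbits(bits)
        b = random.getrandbits(bits)
        t_mul = best_time(lambda: a * b)
        t_gcd = best_time(lambda: math.gcd(a, b))
        print(f"{bits:>7} bits: multiply {t_mul:.4f}s, gcd {t_gcd:.4f}s")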

There isn't a collected body of work, but work on approximating functions comes close. For example, you'd like to know that approximating sin(x) to within an error epsilon can be done in time proportional to some polynomial in log(x) and 1/epsilon. There isn't a general theory here (though you should look up information-based complexity), and focusing on specific functions might help.
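As a toy illustration of that cost-versus-accuracy view, here is a sketch that approximates sin(x) with a plain Taylor series after crude argument reduction and counts the terms needed for a requested epsilon. This is my own illustrative construction, not how real math libraries compute sin:

    import math

    def sin_approx(x, eps):
        x = math.fmod(x, 2 * math.pi)                 # crude argument reduction
        term, total, k = x, x, 1
        while abs(term) > eps:
            term *= -x * x / ((2 * k) * (2 * k + 1))  # next Taylor term
            total += term
            k += 1
        return total, k                               # approximation and terms used

    for eps in (1e-3, 1e-6, 1e-12):
        value, terms = sin_approx(1234.5678, eps)
        true_error = abs(value - math.sin(1234.5678))
        print(f"eps={eps:>6}: terms={terms:2d}, error={true_error:.2e}")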

user389117,
I think that, subconsciously, you want to deduce the complexity of computing a mathematical expression from the form of that expression.
E.g. for an expression involving the square of a variable (x^2), you think (at least subconsciously) that the complexity of computing it should be analogous to x^2, so the complexity should be something like O(n^2), or that there is a standard process to deduce the form of the complexity from the form of the mathematical equation.
These are two different qualities, and one cannot be deduced from the other.
I will give you an example: in papers, all algorithms are written in pseudocode, and then the scientists deduce the complexity of that pseudocode.
The pseudocode must inevitably be written first, and only then can you compute the complexity.
There is no magical way to derive the complexity from the form of the thing you want to compute.
Even if you compute the complexity and find that its form is analogous to the form of the equation being computed, I think it would be hard, at least at first, for you to turn that observation from pseudo-science into science.
Good Luck!

Related

How efficient is efficient when it comes to Polynomial Time Algorithms?

I hope this is the right place for this question.
Polynomial time algorithms! How do polynomial time algorithms (PTAs) actually relate to the processing power, memory size (RAM) and storage of computers?
We consider PTAs to be efficient. We know that even for a PTA, the time complexity increases with the input size n. For example, there already exists a PTA that determines whether a number is prime. But what happens if I want to check a number as big as this one: https://justpaste.it/3fnj2? Is the PTA for primality checking still considered efficient? Is there a computer that can determine whether such a big number is prime?
Whether yes or no (maybe no, idk), how does the concept of polynomial time algorithms actually apply in the real world? Is there some computing bound or something for so-called polynomial time algorithms?
I've tried Google searches on this but all I find are mathematical Big O related explanations. I don't find articles that actually relate the concept of PTAs to computing power. I would appreciate some explanation or links to some resources.
There are a few things to explain.
Regarding polynomial time as efficient is just an arbitrary convention. Mathematicians have essentially defined a set Efficient_Algorithms = {algorithms whose running time is bounded by a polynomial}. That is only a mathematical definition. Mathematicians don't see your actual hardware and don't care about it; they work with abstract concepts. Yes, by that definition even O(n^100) counts as efficient.
But you cannot compare statements from theoretical computer science one-to-one with computer programs running on hardware. Theorists work with formulas and theorems, while computer programs execute on electric circuits.
The Big-O notation does not help you compare implementations of an algorithm. Big-O compares algorithms, not implementations of them. This can be illustrated as follows. Suppose you have a prime-checking algorithm with a high polynomial complexity. You implement it and see that it does not perform well for practical use cases. So you use a profiler, which tells you where the bottleneck is. You find out that 98% of the computation time is spent on matrix multiplications. So you develop a processor that does exactly such calculations extremely fast. Or you buy the most modern graphics card for this purpose. Or you wait 150 years for a new hardware generation. Or you manage to make most of these multiplications parallel. Imagine you somehow reduce the time for matrix multiplications by 95%. With this wonderful hardware you run your algorithm, and suddenly it performs well. So your algorithm is actually efficient; it was only your hardware that was not powerful enough. This is not a thought experiment: such dramatic improvements in computation power happen quite often.
Most algorithms that have polynomial complexity have it because the problems they solve are themselves of polynomial complexity. Consider matrix multiplication: if you do it on paper it is O(n^3); it is the nature of the problem that it has polynomial complexity. In practice and in daily life, (I think) most problems for which you have a polynomial algorithm are actually polynomial problems. If you have a polynomial problem, then a polynomial algorithm is efficient.
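For concreteness, the "on paper" method is the schoolbook triple loop, sketched here in Python (asymptotically faster algorithms such as Strassen's exist, but this is the O(n^3) one being referred to):

    def matmul(A, B):
        # n*p result entries, each needing m multiply-adds: Theta(n*m*p),
        # i.e. n^3 for square matrices.
        n, m, p = len(A), len(B), len(B[0])
        C = [[0.0] * p for _ in range(n)]
        for i in range(n):
            for j in range(p):
                s = 0.0
                for k in range(m):
                    s += A[i][k] * B[k][j]
                C[i][j] = s
        return C

    print(matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19.0, 22.0], [43.0, 50.0]]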
Why do we talk about polynomial algorithms, and why do we consider them efficient? As already said, this is quite arbitrary. But as motivation the following may be helpful. When talking about "polynomial algorithms", we can say there are two types of them.
The algorithms whose complexity is even lower than polynomial (e.g. linear or logarithmic). I think we can agree that these are efficient.
The algorithms that are genuinely polynomial and not lower than polynomial. As illustrated above, in practice these algorithms are often polynomial because they solve problems that are of a polynomial nature and therefore require polynomial complexity. If you see it this way, then of course we can say these algorithms are efficient.
In practice, if you have a linear problem you will normally recognise it as a linear problem, and you would not normally apply an algorithm with worse complexity to it. This is just practical experience. If, for example, you search for an element in a list, you would not expect more comparisons than the number of elements in the list. If in such a case you apply an algorithm with complexity O(n^2), then of course that polynomial algorithm is not efficient. But as said, such mistakes are usually so obvious that they don't happen.
So this is my final answer to your question: in practice, software developers have a good feeling for linear complexity, and good developers also have a feeling for logarithmic complexity. As a consequence, you don't have to worry about complexity theory too much. If you have a polynomial algorithm, you normally have a good enough feeling to tell whether the problem itself is actually linear or not; if it is not, then your algorithm is efficient. If you have an exponential algorithm, it may not be obvious what is going on, but in practice you see the computation time, do some experiments, or get complaints from users. Exponential complexity is normally hard to miss.

Big Theta of Factorial Multiplied by a Coefficient

For a function with a running time of (cn)!, where c is a coefficient >= 0 and c != n, would the tight bound of the running time be Θ(n!) or Θ((cn)!)? Right now, I believe it would be Θ((cn)!) since they would differ by a coefficient >= n since cn != n.
Thanks!
Edit: A more specific example to clarify what I'm asking:
Will (7n)!, (5n/16)! and n! all be Θ(n!)?
You can use Stirling's approximation to get that if c > 1 then (cn)! is asymptotically larger than c^n * n!, which is not O(n!) since the quotient diverges. As a more elementary approach consider this example for c = 2: (2n)! = (2n)(2n-1)...(n+1) * n! > n! * n!, and (n! * n!)/n! = n! diverges, so (2n)! is NOT O(n!).
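A quick numeric check of that elementary argument, using exact integer arithmetic via math.factorial (the values of n are arbitrary):

    from math import factorial

    for n in (1, 2, 5, 10, 20):
        quotient = factorial(2 * n) // (factorial(n) * factorial(n))
        print(f"n={n:>2}: (2n)! / (n! * n!) = {quotient}")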
Will (7n)!, (5n/16)! and n! all be Θ(n!)?
I think there are two answers to your question.
The shorter one is from the purely theoretical point of view. Of those 3, only n! lies in the class Θ(n!). The second, (5n/16)!, lies in O(n!) (note big-O instead of big-Theta), and (7n)! grows faster than n!; it lies in Θ((7n)!).
There is also a longer but more practical answer. To get to it, we first need to understand what the big deal is with this whole big-O and big-Theta business in the first place.
The thing is that for many practical tasks there are many algorithms, and not all of them are equally or even similarly efficient. So the practical question is: can we somehow capture this difference in performance in a way that is easy to understand and compare? This is the problem that big-O/big-Theta notation is trying to solve. The idea behind this method is that if we look at an algorithm's complicated exact-time formula, there is usually only one term that grows faster than all the others and thus dominates the time as the problem gets bigger. So let's compress the big formula down to that dominant term. Then we can compare those terms, and if they are different, we can easily say which algorithm is better (7*n^2 is clearly better than 2*n^3).
Another idea is that the term "operation" is usually not that well defined at the level at which people usually think about algorithms. Which "operation" maps to a single CPU instruction and which to a few depends on many factors, such as the particular hardware. The instructions themselves can also take different amounts of time to execute. Moreover, sometimes the algorithm's running time is dominated by memory access rather than CPU instructions, and those components are not easily additive. The moral of this story is that if two algorithms differ only by a scalar coefficient, you can't really compare them purely theoretically; you need to compare implementations in some particular environment. This is why algorithm complexity measures typically boil down to something like O(n^k) where k is a constant.
There is one more consideration: practicality. If the algorithm is polynomial, there is a huge practical difference between the cases a=3 and a=4 in O(n^a). But if it is something like O(2^(n^a)), then it does not matter much what exactly a is, as long as a > 1. This is because 2^n grows fast enough to make the algorithm impractical for almost any realistic n, irrespective of a. So in practical terms it is often a good enough approximation to put all such algorithms into a single "exponential algorithms" bucket and say they are all impractical, even though there is a huge difference between them. This is where some mathematically unconventional notations like 2^O(n) come from.
From this last practical perspective, the difference between Θ(n!) and Θ((7n)!) is also very small: both are totally impractical, because both lie beyond even the exponential bucket of 2^O(n) (see Stirling's formula, which shows that n! grows a bit faster than (n/e)^n). So it makes sense to put all such algorithms in yet another bucket of "factorial complexity" and mark them as impractical as well.

How to translate algorithm complexity to time necessary for computation

If I know complexity of an algorithm, can I predict how long it will compute in real life?
A bit more context:
I have been trying to solve a university assignment which has to find the best possible result in a game from a given position. I have written an algorithm and it works, but it is very slow. The complexity is O(5^n). For 24 elements it takes a few minutes to compute. I'm not sure if it's because my implementation is wrong, or if this algorithm is simply very slow. Is there a way for me to approximate how much time any algorithm should take?
At worst you can rely on extrapolation. Having the times for N = 1, 2, 3, 4 elements (the more the better) and the O-notation estimate of the algorithm's complexity, you can estimate the time for any finite N. Keep in mind that the precision of this estimate gets lower and lower as N increases.
What can you do with it? Look up error-estimation techniques for such approaches. In practice this usually gives a good enough result.
Also, please don't forget about model adequacy checks. Having results for N = 1..10 and the O-notation complexity, you should check how well your results correlate with your O-model (whether you can pick constants for the O-notation formula that fit your results). If you cannot find such constants, you either need more data points to get a wider picture, or ... OK, your complexity estimate may simply be wrong :-).
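A rough sketch of this extrapolation idea in Python, assuming the O(5^n) estimate from the question; solve() below is a stand-in for the real algorithm, and the constant c is fitted by least squares:

    import time

    def solve(n):
        # Placeholder doing roughly 5^n units of work; replace with the real solver.
        total = 0
        for _ in range(5 ** n):
            total += 1
        return total

    measurements = []
    for n in range(4, 10):
        start = time.perf_counter()
        solve(n)
        measurements.append((n, time.perf_counter() - start))

    # Least-squares fit of c in t = c * 5^n:  c = sum(t * 5^n) / sum(5^(2n))
    c = (sum(t * 5 ** n for n, t in measurements)
         / sum(5 ** (2 * n) for n, _ in measurements))

    for n in (12, 18, 24):
        print(f"predicted time for n={n}: {c * 5 ** n:.3g} seconds")

Of course, the fit only means something if the O(5^n) model is right in the first place, which is exactly the adequacy check mentioned above.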
Useful links:
Brief review on algorithm complexity.
Time complexity catalogue
A really good place to start - look for examples based on code as input.
You cannot predict running time based on time complexity alone. There are many factors involved: hardware speed, programming language, implementation details, etc. The only thing you can predict using the complexity is expected time increase when the size of the input increases.
For example, personally, I've seen O(N^2) algorithms take longer than O(N^3) ones, especially for small values of N, as in your case. And, by the way, 5^24 is a huge number (about 6e16). I wouldn't be surprised if that took a few hours on a supercomputer, let alone on the mid-range personal PC most of us are using.

How does one calculate the runtime of an algorithm?

I've read about algorithm run time in some algorithm books, where it's expressed as, say, O(n). For example, a given piece of code might run in O(n) time in the best case and O(n^3) in the worst case. What does that mean, and how does one calculate it for one's own code? Is it like linear time, and does each predefined library function have its own run time which should be kept in mind before calling it? Thanks...
A Beginner's Guide to Big O Notation might be a good place to start:
http://rob-bell.net/2009/06/a-beginners-guide-to-big-o-notation/
also take a look at Wikipedia
http://en.wikipedia.org/wiki/Big_O_notation
there are several related questions and good answers on stackoverflow
What is a plain English explanation of "Big O" notation?
and
Big-O for Eight Year Olds?
Shouldn't this be in math?
If you are sorting an already-sorted array with bubble sort, then
you can check whether a pass along the array swapped anything. If not, all okay -- we're done.
So, for the best case you will have O(n) comparisons (n-1, to be exact); for the worst case (the array is reversed) you will have O(n^2) comparisons (n(n-1)/2, to be exact).
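A small sketch of that early-exit bubble sort with a comparison counter (the array size of 100 is an arbitrary choice):

    def bubble_sort(a):
        a = list(a)
        comparisons = 0
        for end in range(len(a) - 1, 0, -1):
            swapped = False
            for i in range(end):
                comparisons += 1
                if a[i] > a[i + 1]:
                    a[i], a[i + 1] = a[i + 1], a[i]
                    swapped = True
            if not swapped:          # nothing moved on this pass: already sorted
                break
        return a, comparisons

    n = 100
    print(bubble_sort(range(n))[1])          # 99   = n - 1 (best case)
    print(bubble_sort(range(n, 0, -1))[1])   # 4950 = n(n-1)/2 (worst case)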
A more complicated example: let's find the maximum element of an array.
Obviously, you will always do n-1 comparisons, but how many assignments on average?
The answer takes more complicated math: H(n) - 1, where H(n) is the n-th harmonic number.
Usually, it is easy to get the best and worst scenarios, but the average case requires a lot of math.
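If you would rather see the H(n) - 1 figure empirically than derive it, a quick simulation over random permutations (the size and trial count are arbitrary choices) gets close:

    import random

    def count_updates(a):
        # Number of times the running maximum is reassigned after the first element.
        best, updates = a[0], 0
        for x in a[1:]:
            if x > best:
                best = x
                updates += 1
        return updates

    n, trials = 50, 20000
    total = sum(count_updates(random.sample(range(n), n)) for _ in range(trials))
    harmonic = sum(1.0 / k for k in range(1, n + 1))
    print(f"average updates: {total / trials:.3f}, H(n) - 1 = {harmonic - 1:.3f}")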
I would suggest you read Knuth, Volume 1. But who would not?
And, the formal definition:
f(n) ∈ O(g(n)) means: there exist a constant c > 0 and n0 ∈ N such that f(m) <= c*g(m) for all m > n0.
In fact, you should read more about O-notation on the wiki.
The big-O notation is one kind of asymptotic notation. Asymptotic notation is an idea from mathematics, which describes the behavior of functions "in the limit" - as you approach infinity.
The problem with defining how long an algorithm takes to run is that you usually can't give an answer in milliseconds because it depends on the machine, and you can't give an answer in clock cycles or as an operation count because that would be too specific to particular data to be useful.
The simple way of looking at asymptotic notation is that it discards all the constant factors in a function. Basically, a*n^2 will always be bigger than b*n if n is sufficiently large (assuming everything is positive). Changing the constant factors a and b doesn't change that - it changes the specific value of n where a*n^2 becomes bigger, but doesn't change that it happens. So we say that O(n^2) is bigger than O(n), and forget about those constants that we probably can't know anyway.
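A tiny numeric illustration of that point (the constant pairs below are arbitrary): a*n^2 overtakes b*n in every case; the constants only move the crossover point.

    def crossover(a, b):
        # Smallest n with a*n^2 > b*n (mathematically, the first n above b/a).
        n = 1
        while a * n * n <= b * n:
            n += 1
        return n

    for a, b in [(1, 1), (1, 1000), (2, 5000)]:
        print(f"a={a}, b={b}: a*n^2 exceeds b*n from n = {crossover(a, b)}")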
That's useful because the problems with bigger n are usually the ones where things slow down enough that we really care. If n is small enough, the time taken is small, and the gains available from choosing different algorithms are small. When n gets big, choosing a different algorithm can make a huge difference. Also, the problems that come up in the real world are often much bigger than the ones we can easily test against - and often they keep growing over time (e.g. as databases accumulate more data).
It's a useful mathematical model that abstracts away enough awkward-to-handle detail that useful results can be found, but it's not a perfect system. We don't deal with infinite problems in the real world, and there are plenty of times when problems are small enough that those constants are relevant for real-world performance, and sometimes you just have to time things with a clock.
The MIT OCW Introduction to Algorithms course is very good for this kind of thing. The videos and other materials are available for free, and the course book (not free) is among the best books available for algorithms.

How to compute exact complexity of an algorithm?

Without resorting to asymptotic notation, is tedious step counting the only way to get the time complexity of an algorithm? And without a step count for each line of code, can we arrive at a big-O representation of any program?
Details: trying to find out the complexity of several numerical analysis algorithms to decide which will be best suited for solving a particular problem.
E.g. - choosing between the Regula Falsi and Newton-Raphson methods for solving equations, the intention is to evaluate the exact complexity of each method and then decide (plugging in the value of 'n' or whatever arguments there are) which method is less complex.
The only way --- not the "easy" or hard way but the only reasonable way --- to find the exact complexity of a complicated algorithm is to profile it. A modern implementation of an algorithm has a complex interaction with numerical libraries and with the CPU and its floating point unit. For instance in-cache memory access is much faster than out-of-cache memory access, and on top of that there may be more than one level of cache. Counting steps is really much more suitable to the asymptotic complexity that you say isn't enough for your purpose.
But, if you did want to count steps automatically, there are also ways to do that. You can add a counter increment command (like "bloof++;" in C) to every line of code, and then display the value at the end.
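A Python analogue of that counter idea, with insertion sort as a purely illustrative target (what exactly to count per line is a judgment call):

    def insertion_sort_counted(a):
        a = list(a)
        steps = 0                              # the "bloof" counter
        for i in range(1, len(a)):
            key = a[i]
            j = i - 1
            while j >= 0 and a[j] > key:
                steps += 1                     # count each shift
                a[j + 1] = a[j]
                j -= 1
            steps += 1                         # count the placement of key
            a[j + 1] = key
        return a, steps

    for data in ([1, 2, 3, 4, 5], [5, 4, 3, 2, 1]):
        print(data, "->", insertion_sort_counted(data)[1], "steps")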
You should also know about the more refined time complexity expression, f(n)*(1+o(1)), that is also useful for analytical calculations. For instance n^2+2*n+7 simplifies to n^2*(1+o(1)). If the constant factor is what bothers you about usual asymptotic notation O(f(n)), this refinement is a way to keep track of it and still throw out negligible terms.
The 'easy way' is to simulate it. Try your algorithms with lots of values of n and lots of different data, plot the results, then match the curve on the graph to an equation.
Your results may not be strictly correct, and they're only as valid as your ability to generate good test data, but for most cases this will work.
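One concrete way to do that curve matching, sketched in Python: time the code at a few sizes and estimate the exponent from the slope on a log-log scale. The quadratic pair-counting function here is only a stand-in for the code under test.

    import math, time

    def algorithm(data):
        # Deliberately O(n^2): counts ordered pairs. Stand-in for the real code.
        return sum(1 for x in data for y in data if x < y)

    sizes, times = [], []
    for n in (200, 400, 800, 1600):
        data = list(range(n))
        start = time.perf_counter()
        algorithm(data)
        sizes.append(n)
        times.append(time.perf_counter() - start)

    # Least-squares slope of log(time) against log(n) estimates the exponent k.
    xs = [math.log(n) for n in sizes]
    ys = [math.log(t) for t in times]
    x_bar, y_bar = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
             / sum((x - x_bar) ** 2 for x in xs))
    print(f"estimated exponent: {slope:.2f} (expect about 2 for this stand-in)")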
E.g. - choosing between the Regula Falsi and Newton-Raphson methods for solving equations, the intention is to evaluate the exact complexity of each method and then decide (plugging in the value of 'n' or whatever arguments there are) which method is less complex.
I don't think it's possible to answer this question in general for nonlinear solvers. You could count the exact number of computations per iteration, but you're never going to know in general how many iterations it will take for each solver to converge. There are other complications, like needing the Jacobian for Newton's method, which could make computing the complexity even more difficult.
To sum up, the most efficient nonlinear solver is always dependent on the problem you're solving. If the variety of problems you're solving is very limited, doing a bunch of experiments with different solvers and measuring the number of iterations and CPU time will probably give you more useful information.
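A minimal sketch of such an experiment, assuming one illustrative test equation f(x) = x^3 - 2x - 5 and a simple tolerance; it only counts iterations, not the cost per iteration:

    def newton(f, fprime, x0, tol=1e-12, max_iter=100):
        x = x0
        for i in range(1, max_iter + 1):
            x_new = x - f(x) / fprime(x)
            if abs(x_new - x) < tol:
                return x_new, i
            x = x_new
        return x, max_iter

    def regula_falsi(f, a, b, tol=1e-12, max_iter=10000):
        fa, fb = f(a), f(b)                      # requires f(a), f(b) of opposite sign
        for i in range(1, max_iter + 1):
            c = b - fb * (b - a) / (fb - fa)     # secant through (a, fa), (b, fb)
            fc = f(c)
            if abs(fc) < tol:
                return c, i
            if fa * fc < 0:
                b, fb = c, fc
            else:
                a, fa = c, fc
        return c, max_iter

    f = lambda x: x**3 - 2*x - 5
    fprime = lambda x: 3*x**2 - 2
    print("Newton-Raphson: root=%.10f, iterations=%d" % newton(f, fprime, 2.0))
    print("Regula Falsi:   root=%.10f, iterations=%d" % regula_falsi(f, 1.0, 3.0))

On this equation Newton's method typically converges in a handful of iterations while regula falsi needs noticeably more, which is exactly the kind of problem-dependent information the experiment is meant to reveal.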
