I am trying to find out which division algorithm NTL (the Number Theory Library) uses, and the complexity of that algorithm. (I need the algorithms in the GF2X module and in ZZ.)
The implementation of these functions is a lot of code, and it is difficult to tell what is going on in the algorithm.
Has anyone used NTL? Maybe someone knows the standard division and remainder algorithms from number theory and their complexity, and can help?
I can copy-paste listings of some of the functions if that helps.
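For concreteness, this is the kind of classical baseline I mean: restoring binary long division, which is quadratic in the bit length. This is only a word-sized sketch, not NTL's actual code (NTL works limb by limb on arbitrary-precision numbers, and I don't know which algorithm it actually uses):

```cpp
#include <cassert>
#include <cstdint>
#include <iostream>

// Restoring binary long division: computes q and r with a = q*b + r,
// 0 <= r < b. One dividend bit is processed per iteration, so for
// n-bit operands this is O(n) word operations here; the same idea on
// multi-limb bignums gives the classical O(n^2) bit complexity.
void div_rem(uint64_t a, uint64_t b, uint64_t &q, uint64_t &r) {
    assert(b != 0 && b < (UINT64_C(1) << 63)); // keep r << 1 from overflowing
    q = 0;
    r = 0;
    for (int i = 63; i >= 0; --i) {
        r = (r << 1) | ((a >> i) & 1); // bring down the next bit of a
        if (r >= b) {                  // subtract the divisor when it fits
            r -= b;
            q |= UINT64_C(1) << i;
        }
    }
}

int main() {
    uint64_t q, r;
    div_rem(100, 7, q, r);
    std::cout << q << " remainder " << r << "\n"; // prints: 14 remainder 2
}
```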
Related
I'm interested in the justification of the following line from the Wikipedia article:
"This algorithm [extended euclidean algorithm] runs in time O(log(m)^2), assuming |a| < m, and is generally more efficient than exponentiation."
http://en.wikipedia.org/wiki/Modular_multiplicative_inverse
Why is this so? Can anyone explain this to me? I understand the algorithm and all the maths completely; it is just that I do not see how to determine the complexity of such algorithms. Any more general hints?
Also: is log meant to be the natural logarithm (ln) or the one with base 2?
The popular Introduction to Algorithms book (http://mitpress.mit.edu/books/introduction-algorithms) has a whole chapter on proving algorithm complexity (though there's much more to the topic than what's in this book). You can read it if you're generally interested in this matter.
You might also try to follow this paper's references: http://itee.uq.edu.au/~havas/cats03.pdf
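To make the O(log(m)^2) bound concrete (this is the standard counting argument, not something specific to those references): the extended Euclidean algorithm performs O(log m) division steps, because the remainder at least halves every two steps, and each step manipulates numbers of O(log m) bits, so with schoolbook arithmetic each step costs O(log m), giving O(log(m)^2) overall. As for the base: inside Big-O it doesn't matter, since logarithms in different bases differ only by a constant factor. A minimal C++ sketch (the inputs 3 and 11 are just an illustrative example):

```cpp
#include <cstdint>
#include <iostream>

// Extended Euclidean algorithm: returns g = gcd(a, m) and fills in x, y
// such that a*x + m*y = g. When g == 1, x (mod m) is the inverse of a.
int64_t ext_gcd(int64_t a, int64_t m, int64_t &x, int64_t &y) {
    if (m == 0) { x = 1; y = 0; return a; }
    int64_t x1, y1;
    int64_t g = ext_gcd(m, a % m, x1, y1);  // recurse on (m, a mod m)
    x = y1;
    y = x1 - (a / m) * y1;
    return g;
}

int main() {
    int64_t x, y;
    int64_t g = ext_gcd(3, 11, x, y);               // inverse of 3 modulo 11
    if (g == 1)
        std::cout << ((x % 11) + 11) % 11 << "\n";  // prints 4: 3*4 = 12 = 1 (mod 11)
}
```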
From Wikipedia:
"Anindya De, Chandan Saha, Piyush Kurur and Ramprasad Saptharishi[11] gave a similar algorithm using modular arithmetic in 2008 achieving the same running time. However, these latter algorithms are only faster than Schönhage–Strassen for impractically large inputs."
I would be very interested in the size of such impractically large integers.
Maybe someone did implement both algorithms in a certain way and could do some benchmarks?
Thanks
Fürer's algorithm and its modular equivalent (DKSS) are very deep research topics and, for now, remain mainly of academic interest. Nobody actually knows where the cross-over point is. And in all likelihood it doesn't matter, because that cross-over point is likely to be well beyond 64-bit computing limits.
I've implemented Schönhage-Strassen before and I understand how Fürer's algorithm works. So I'm quite familiar with both of them. I can say it's very possible that the cross-over point between Schönhage-Strassen and Fürer's algorithm is so high that a computer capable of holding the parameters will be larger than the size of the observable universe.
That's the problem when you have complexities that differ by less than a logarithm. It takes exponentially large input sizes to compensate even for small differences in the Big-O constant.
In this case, Fürer's algorithm is known to have a very very very large Big-O constant.
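To see the shape of the problem, here is a toy comparison with deliberately invented constants (the real ones are unknown). Schönhage-Strassen runs in O(n log n log log n) and Fürer's in O(n log n 2^(O(log* n))), so the race is between log log n and a constant times 2^(log* n):

```cpp
#include <cmath>
#include <cstdio>

// Iterated logarithm: how many times log2 must be applied before the
// value drops to <= 1. It is at most 5 for anything below 2^65536.
int log_star(double x) {
    int k = 0;
    while (x > 1.0) { x = std::log2(x); ++k; }
    return k;
}

int main() {
    // Compare the factors multiplying n*log(n) in each bound for n = 2^b.
    // K is an invented constant; it is here purely to show how little it
    // takes to push the cross-over away.
    const double K = 16.0;
    for (double b = 8.0; b <= 1e300; b *= b) {               // n = 2^b
        double ss    = std::log2(b);                         // log log n
        double furer = K * std::pow(2.0, log_star(b) + 1);   // K * 2^(log* n)
        std::printf("n = 2^%.3g: SS factor %.0f, Furer factor %.0f\n",
                    b, ss, furer);
    }
}
```

Even at n = 2^(10^231) the log log n factor has only reached a few hundred, which is why a modest constant on the Fürer side pushes the cross-over beyond anything physically representable.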
Does anyone know of any program/script to calculate the computational complexity of code (e.g., a method or function) automatically?
If not, is there a good way (e.g. a design pattern, algorithm, etc) that supports it?
I'm not trying to do this in general.
In most cases, I know the input, the algorithm running it, and what constitutes a halt. I'm trying to compare 2 or more algorithms this way.
E.g.
algo #1 - 2x^2 + 10x + 5
algo #2 - 5x^2 + 1x + 3
Both algorithms are O(N^2). But algo #2 is better in the short run, while algo #1 is better in the long run.
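For instance, a quick scan shows where these two cost models cross, near x ≈ 3.2 (the positive root of 5x^2 + x + 3 = 2x^2 + 10x + 5):

```cpp
#include <cstdio>

// The two cost models from above.
double algo1(double x) { return 2 * x * x + 10 * x + 5; }
double algo2(double x) { return 5 * x * x + 1 * x + 3; }

int main() {
    // algo #2 wins for small x; algo #1 takes over between x = 3 and x = 4.
    for (int x = 0; x <= 5; ++x)
        std::printf("x=%d  algo1=%g  algo2=%g\n", x, algo1(x), algo2(x));
}
```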
While it is impossible to develop an algorithm that solves your problem generally, you can write an algorithm that will calculate the complexity of a piece of software for a few example inputs.
The only software I can find reference to is called Trend-Profiler. But, if you are more interested in the algorithm than the result, there is a paper here that describes the software and its algorithm.
Wouldn't it be better to sample the algorithm with different input sizes?
With the time measured for each input size, you could approximate the complexity function and thereby determine which algorithm is better, and from which point on.
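A minimal C++ sketch of that idea (the quadratic work() function is just a hypothetical stand-in for whatever algorithm you want to profile): time the code at doubling input sizes and estimate the exponent k in T(n) ≈ c·n^k from adjacent ratios, since T(2n)/T(n) ≈ 2^k.

```cpp
#include <chrono>
#include <cmath>
#include <cstdio>

// Hypothetical stand-in for the algorithm under test; deliberately O(n^2).
long long work(int n) {
    long long s = 0;
    for (int i = 0; i < n; ++i)
        for (int j = 0; j < n; ++j)
            s += i ^ j;
    return s;
}

int main() {
    // Time the routine at doubling input sizes. If T(n) ~ c*n^k, then
    // T(2n)/T(n) ~ 2^k, so log2 of the ratio estimates the exponent k.
    double prev = 0.0;
    for (int n = 1000; n <= 16000; n *= 2) {
        auto t0 = std::chrono::steady_clock::now();
        volatile long long sink = work(n);  // volatile: keep the call from being optimized out
        (void)sink;
        auto t1 = std::chrono::steady_clock::now();
        double secs = std::chrono::duration<double>(t1 - t0).count();
        if (prev > 0.0)
            std::printf("n=%d  time=%.4fs  estimated exponent=%.2f\n",
                        n, secs, std::log2(secs / prev));
        prev = secs;
    }
}
```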
Where can I turn for information regarding computing times of mathematical functions? Has any (general) study with any amount of rigor been made?
For instance, the computing time of
constant + constant
generally takes O(1).
Suppose I want to start using math like integrals, and I'd like to get an asymptotic approximation to various integrals. Has there been a standard study of this, or must I take the information I have and figure out my own approximation. I'd be very interested in a standard approach to this, and I'd like to know if it already exists.
Here's my motivation:
I'm in the middle of writing a paper that points out the equivalence between NP hard problems and certain types of mathematical equations. It seems that there might be use for a study of math computing times that is generalized like a new science.
EDIT:
I guess I'm wondering if there is a standard computational complexity to any given math that cannot be avoided. I'm wondering if anyone has studied this question. I'd love to see what others have tried.
EDIT 2:
Wikipedia has an article on "Computational Complexity Theory", which I think may fit the bill. I'm still wondering whether someone who has studied this could confirm it.
"Standard" math has no notion of algorithmic complexity. That's reserved for computer algorithms.
There are ways to analyze the dynamic behavior of solutions of equations. Things like convergence matter a great deal to mathematicians.
You can ask about the algorithmic complexity of Euler integration versus fifth-order Runge-Kutta. They would be compared based on the number of function evaluations required and on time-step stability.
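For instance, forward Euler costs exactly one function evaluation per step, while a fifth-order Runge-Kutta scheme pays several (typically six) per step but tolerates much larger steps for the same accuracy. A minimal sketch, where the test problem dy/dt = y is just an illustration:

```cpp
#include <cstdio>
#include <functional>

// Forward Euler for dy/dt = f(t, y): one function evaluation per step,
// so integrating with n steps costs exactly n evaluations.
double euler(const std::function<double(double, double)> &f,
             double t0, double y0, double t1, int n) {
    double h = (t1 - t0) / n, t = t0, y = y0;
    for (int i = 0; i < n; ++i) {
        y += h * f(t, y);
        t += h;
    }
    return y;
}

int main() {
    // dy/dt = y with y(0) = 1; the exact value at t = 1 is e = 2.71828...
    auto f = [](double, double y) { return y; };
    std::printf("n=100:   %f\n", euler(f, 0.0, 1.0, 1.0, 100));    // ~2.704814
    std::printf("n=10000: %f\n", euler(f, 0.0, 1.0, 1.0, 10000));  // ~2.718146
}
```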
But what's the "running time" of the solution to Fermat's Last Theorem? What about the last of David Hilbert's challenge problems? Is the "running time" for those a century and counting? What's your running time for solving a partial differential equation using separation of variables?
When you think about it that way, do you have a better understanding of why people would be put off by your question?
Yes, for various mathematical functions, the computational complexity (running time) of computing the function has been studied. This can differ depending on the model of computation.
For example, adding two n-bit numbers takes Θ(n) time, multiplying them takes Θ(n log n log log n) time (with FFT-based Schönhage-Strassen multiplication), finding their gcd takes Θ(n^2) time with the usual Euclidean algorithm and Θ(n (log n)^2 log log n) with better algorithms, etc. For more complicated stuff like integrals, it obviously depends on what algorithm you use to do it.
There isn't a collected body of work, but work on approximating functions comes close. For example, you'd like to know that approximating sin(x) to within an epsilon error can be done in time proportional to some polynomial in log(x) and 1/epsilon. There isn't a general theory here (you should look up information complexity though), and focusing on specific functions might help.
user389117,
I think that, subconsciously, you want to deduce the complexity of computing a mathematical expression from the form of that expression.
E.g., for an expression involving the square of a variable (x^2), you think (at least subconsciously) that the complexity of the computation is analogous to x^2, so the complexity should be something like O(n^2), or that there is some standard process for deducing the form of the complexity from the form of the mathematical equation.
These are two different qualities, and one cannot be deduced from the other.
I will give you an example: in papers, all algorithms are written in pseudocode, and the scientists then deduce the complexity from that pseudocode.
The pseudocode must inevitably be written first, and only then can you compute the complexity.
There is no magical way to derive the complexity from the form of the thing you want to compute.
Even if you compute the complexity and find that its form is analogous to the form of the equation being computed, I think it would be hard, at least at first, to turn that observation from pseudo-science into science.
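A concrete illustration of why the form doesn't fix the complexity: the same mathematical object x^n can be computed with O(n) multiplications or with O(log n) multiplications, and nothing in the expression "x^n" tells you which.

```cpp
#include <cstdint>
#include <iostream>

// O(n) multiplications.
uint64_t pow_naive(uint64_t x, unsigned n) {
    uint64_t r = 1;
    for (unsigned i = 0; i < n; ++i) r *= x;
    return r;
}

// O(log n) multiplications (square-and-multiply).
uint64_t pow_fast(uint64_t x, unsigned n) {
    uint64_t r = 1;
    while (n > 0) {
        if (n & 1) r *= x;  // multiply in the current power when the bit is set
        x *= x;             // square for the next bit
        n >>= 1;
    }
    return r;
}

int main() {
    // Same value, very different operation counts.
    std::cout << pow_naive(3, 13) << " " << pow_fast(3, 13) << "\n"; // both 1594323
}
```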
Good Luck!
As far as I know, there are four ways to solve recurrence equations:
1. Recursion trees
2. Substitution
3. Iteration
4. Derivative
We are asked to use substitution, for which we need to guess a formula for the output. I read in the CLRS book that there is no magic way to do this; I was curious whether there are any heuristics for it.
I can certainly get an idea by drawing a recursion tree or using iteration, but because the output will be in Big-O or Theta form, the formulas don't necessarily match exactly.
Does any one have any recommendation for solving recurrence equations using substitution?
Please note that the list of possible ways to solve recurrence equations is definitely not complete; it's merely the set of tools they teach computer scientists, because it will most likely solve most of your problems.
For exact solutions of recurrence equations mathematicians use a tool called generating functions. Generating functions give you exact solutions, and in general are more powerful than the master theorem.
There is a great resource online to learn about them here: http://www.math.upenn.edu/~wilf/DownldGF.html
If you go through the first couple examples you should get the hang of it in no time.
You need some math background and an understanding of rudimentary Taylor series: http://en.wikipedia.org/wiki/Taylor_series
Generating functions are also extremely useful in probability.
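As a tiny worked example of the technique (standard material, along the lines of the examples in Wilf's book above), take the recurrence a_n = 2a_{n-1} + 1 with a_0 = 1:

```latex
A(x) = \sum_{n \ge 0} a_n x^n .
\text{Multiply the recurrence by } x^n \text{ and sum over } n \ge 1:
A(x) - 1 = 2x\,A(x) + \frac{x}{1-x}
\;\Longrightarrow\;
A(x) = \frac{1}{(1-x)(1-2x)} = \frac{2}{1-2x} - \frac{1}{1-x},
\text{so reading off coefficients: } a_n = 2 \cdot 2^n - 1 = 2^{n+1} - 1 .
```

Note that this is the exact solution, not just a Theta-bound.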
For simple ones, just take a "reasonable" guess.
For more complicated ones, I would go ahead and use a recurrence tree — it seems to me to be the easiest "algorithm" for generating a guess. Note that it can be difficult to use a recurrence tree to prove a bound (the details are tough to get right). Recurrence trees are highly useful for forming guesses which are then proven by substitution.
I'm not sure why you're saying the formulas won't match the Big-O or Theta output. They typically don't match exactly, but that's part of the point of Big-O. Part of the trick of going back to substitution is knowing how to plug in the Big-O solution to make the substitution algebra work out. IIRC, CLRS works out an example or two of this.
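Here is the classic worked example of that trick: for T(n) = 2T(n/2) + n, guess T(n) = O(n lg n) and substitute the inductive hypothesis T(k) <= c k lg k for all k < n:

```latex
T(n) \le 2\Bigl(c\,\tfrac{n}{2}\lg\tfrac{n}{2}\Bigr) + n
     = c\,n\lg n - c\,n + n
     \le c\,n\lg n \qquad \text{for every } c \ge 1 .
```

The slack term (-cn + n) is exactly the room that makes the algebra work out; when a bare guess leaves no slack, the standard fix is to subtract a lower-order term from the guess (e.g. T(n) <= c n lg n - d n) before the induction goes through.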