How can I get an algorithm to do exponentiation with a float exponent? - exponentiation

I'm developing a small application in Deluge (zoho.com). There is no "^" operator or "pow" function to do exponentiation. Worse still, I'm supposed to do exponentiation with float exponents, not just integer exponents. I've found a lot of algorithms for integer exponentiation, but none for float exponents. Thank you for helping.

They are basically the same. If you want something simple, then repeated multiplication will do.
If you want to make the multiplication process efficient, you can go for a divide-and-conquer algorithm (exponentiation by squaring).
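For the fractional part of the exponent, which the answer doesn't spell out, one common trick is to walk the binary expansion of the fraction using repeated square roots, since x^(1/2^k) is just k nested square roots. A minimal Python sketch (the function names are mine), assuming x > 0, e >= 0, and that a square root is available; sqrt can itself be built from Newton's method if the platform lacks one:

```python
import math

def ipow(x, n):
    """Integer power by divide and conquer (exponentiation by squaring)."""
    if n < 0:
        return 1.0 / ipow(x, -n)
    result = 1.0
    while n:
        if n & 1:          # this bit of the exponent is set: fold the base in
            result *= x
        x *= x             # square the base for the next bit
        n >>= 1
    return result

def fpow(x, e, bits=40):
    """x**e for x > 0 and e >= 0: integer part by squaring, fractional
    part via the binary expansion of the fraction using square roots."""
    n = int(e)
    frac = e - n
    result = ipow(x, n)
    root = x
    for _ in range(bits):           # one bit of the fraction per pass
        root = math.sqrt(root)      # x**(1/2), x**(1/4), x**(1/8), ...
        frac *= 2
        if frac >= 1:               # this bit of the fraction is set
            result *= root
            frac -= 1
    return result
```

For example, fpow(4.0, 2.5) returns 32.0; taking 40 bits of the fraction gives roughly float-level accuracy.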

Related

How does one actually implement polynomial multiplication using FFT?

I have been studying this topic in an Algorithms textbook.
The clever usage of the complex roots of unity seems to be mathematically working. However, I do not understand how one could actually represent this in a computer.
I can think of two things:
1. Use the real/imaginary decomposition to represent the complex numbers. But this means using floats, which means I open up my algorithm to numerical error and I would lose precision even if I want to multiply two polynomials with integer coefficients.
2. Represent exp(i·2π/n) symbolically as ω. So I'd eventually get a tuple in ω, and if I have to keep it in this form, I'd essentially be doing polynomial multiplication in ω again, taking us back to square one.
I'd really like to see an implementation of this algorithm in a familiar programming language.
Indeed, as you identify, the roots of unity are typically not nice numbers that can be represented exactly in a computer. Since the numerical error is small, if you know the output should be integers, rounding usually produces the right result.
If you don't want to (or cannot) rely on that, an exact option is the Number Theoretic Transform. It substitutes the roots of unity in the complex plane with roots of unity in a finite field ℤ/pℤ where p is a suitable prime. p has to be large enough for all the necessary roots to exist, and the efficiency is affected by properties of p. If you choose a Fermat prime then the roots of unity have convenient forms and there is a trick to do reduction modulo p more efficiently than usual. That is all exact integer arithmetic and the values stay small, so there is no problem implementing it in a computer.
That technique is used in the Schönhage–Strassen algorithm so you can look up the specifics there.
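For concreteness, here is a minimal NTT sketch in Python (an illustration of the idea, not the Schönhage–Strassen algorithm itself), using the Fermat prime p = 2^16 + 1 = 65537, for which 3 is a primitive root. It multiplies integer polynomials exactly, assuming the product's coefficients stay below p and the transform size is at most 2^16:

```python
P = 65537   # Fermat prime 2**16 + 1; 3 is a primitive root mod P
G = 3

def ntt(a, invert=False):
    """In-place iterative radix-2 NTT over Z/PZ. len(a) must be a
    power of two, at most 2**16."""
    n = len(a)
    j = 0
    for i in range(1, n):               # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w_len = pow(G, (P - 1) // length, P)   # primitive length-th root
        if invert:
            w_len = pow(w_len, P - 2, P)       # its modular inverse
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u = a[k]
                v = a[k + length // 2] * w % P
                a[k] = (u + v) % P
                a[k + length // 2] = (u - v) % P
                w = w * w_len % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        for i in range(n):
            a[i] = a[i] * n_inv % P

def poly_mul(f, g):
    """Exact product of integer polynomials (coefficient lists,
    lowest degree first), assuming result coefficients stay below P."""
    n = 1
    while n < len(f) + len(g) - 1:
        n <<= 1
    fa = list(f) + [0] * (n - len(f))
    ga = list(g) + [0] * (n - len(g))
    ntt(fa)
    ntt(ga)
    prod = [x * y % P for x, y in zip(fa, ga)]
    ntt(prod, invert=True)
    return prod[:len(f) + len(g) - 1]
```

For example, poly_mul([1, 2, 3], [4, 5]) returns [4, 13, 22, 15], i.e. (1 + 2x + 3x²)(4 + 5x).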

Division equivalent to the peasant multiplication algorithm

I'm looking for a division algorithm that is equivalent to the peasant multiplication algorithm, but I couldn't find anything except the Fourier division algorithm. Could someone tell me about some other algorithm that uses only +, - and shifting operations?
If you do ordinary long division and write the numbers in base two, you only need addition and subtraction because you work out the result one digit at a time. When microprocessors didn't have multiplication or division instructions, this sort of thing was fairly common - see e.g. http://6502org.wikidot.com/software-math-intdiv
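A sketch of that shift-and-subtract long division in Python (Python ints stand in for whatever word size you have; only shifts, a comparison, and subtraction are used):

```python
def divmod_shift(n, d):
    """Restoring binary long division. Assumes n >= 0 and d > 0."""
    q = r = 0
    for i in range(n.bit_length() - 1, -1, -1):
        r = (r << 1) | ((n >> i) & 1)   # bring down the next bit of n
        q <<= 1
        if r >= d:                      # divisor fits: this quotient bit is 1
            r -= d
            q |= 1
    return q, r
```

divmod_shift(100, 7) gives (14, 2), matching divmod(100, 7).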

How to divide large numbers and what algorithm to use

I want to manipulate really big numbers and I am trying to work with arrays. I have already implemented the multiplication operation, but now I want to implement the division operation.
I was wondering which algorithm(s) I should use. Is it possible to use the Newton–Raphson division algorithm, or should I use the algorithm we learned in school?
PS: I know that there are many libraries that work with big numbers, but I want to do this for practice.
These are my favorite algorithms I use:
Binary division
Look here: http://courses.cs.vt.edu/~cs1104/BuildingBlocks/divide.030.html
This is what you should start with. It's not that slow and it is simple. Do not forget to properly test your +, -, <<, >> operations before you start. They should work flawlessly on any given input
Division by half-bit-width arithmetic
Look here: https://stackoverflow.com/a/19381045/2521214
With a little tweaking you can adapt it to arrays. It uses +, -, *, /, %. If you code it properly, it should be much faster than binary division.
Approximation of division
Look here: https://stackoverflow.com/a/18398246/2521214
Or for some speed up of x^2, x*y here: Fast bignum square computation
This is better suited for floating/fixed-point division. It's a little harder to understand, but the speed and accuracy are worth the effort. There are many other approximation algorithms out there too, so google!
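As an illustration of the approximation route (and of the Newton–Raphson division the question asks about), here is a toy sketch on Python's built-in big ints rather than digit arrays; the function name is mine. It approximates the fixed-point reciprocal floor(2^k / d) by Newton iteration, then derives and corrects the quotient:

```python
def nr_divmod(n, d):
    """Division via a Newton-Raphson reciprocal (a sketch, not a tuned
    bignum routine). Assumes n >= 0 and d > 0."""
    if n < d:
        return 0, n
    k = 2 * n.bit_length()                # fixed-point scale: want x ~ 2**k / d
    x = 3 << (k - d.bit_length() - 1)     # initial guess, relative error <= 1/2
    for _ in range(k.bit_length() + 1):   # precision roughly doubles per step
        x = (x * ((1 << (k + 1)) - d * x)) >> k   # x <- x * (2 - d*x / 2**k)
    q = (n * x) >> k                      # approximate quotient
    r = n - q * d
    while r < 0:                          # small final correction
        q -= 1
        r += d
    while r >= d:
        q += 1
        r -= d
    return q, r
```

nr_divmod(100, 7) returns (14, 2). A real array-based version does the same thing, with the shifts becoming limb operations; the payoff is that the expensive work is multiplication, which you have already implemented.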

Fastest and most reliable factorization method

Which is the fastest and most reliable factorization method used nowadays? I have gone through Fermat's factorization and Pollard's rho factorization method, and I was wondering whether there are any better methods to code and implement.
Please check the Wikipedia article. It has almost everything you want to find: http://en.wikipedia.org/wiki/Integer_factorization
The solution really depends on the range of the number, and sometimes the property of the number.
For big numbers of around 100 digits or fewer, according to Wikipedia, the quadratic sieve is the best. For larger numbers, the general number field sieve is better.
I won't discuss small cases; since you already mention Pollard's rho, those should be trivial.
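For those small cases, a Pollard's rho sketch in Python (Floyd cycle detection; n must be composite, and unlucky parameter choices are simply retried):

```python
import math
import random

def pollard_rho(n):
    """Return a nontrivial factor of composite n."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        f = lambda x: (x * x + c) % n   # pseudo-random iteration map
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)                    # tortoise: one step
            y = f(f(y))                 # hare: two steps
            d = math.gcd(abs(x - y), n)
        if d != n:                      # d == n means this c failed; retry
            return d
```

pollard_rho(8051) returns 83 or 97; combine with a primality test and recursion to get a full factorization.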

Compile time optimization of Math.pow

I've read that it's possible to optimize multiplication by a known constant at compile time by generating code which makes clever use of bit shifting and compiler-generated magic constants.
I'm interested in possibilities for optimizing exponentiation in a similar hacky manner. I know about exponentiation by squaring, so I guess you could aggressively optimize
pow(CONSTANT, n)
by embedding precomputed successive squares of CONSTANT into the executable. I'm not sure whether this is actually a good idea.
But when it comes to
pow(n, CONSTANT)
I can't think of anything. Is there a known way to do this efficiently? Do the minds of StackOverflow have ideas on either problem?
Assuming pow(a,b) is implemented as exp(b * log(a)) (which it probably is), if a is a constant then you can precompute its log. If b is a constant, it only helps if it is also an integer.
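A sketch of that first case in Python, with a hypothetical constant base a = 1.5 whose log is folded in ahead of time:

```python
import math

LOG_A = math.log(1.5)   # precomputed "at compile time" for the constant base a = 1.5

def pow_const_base(b):
    """pow(1.5, b) computed as exp(b * log a), with log a already a constant."""
    return math.exp(b * LOG_A)
```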
Exponentiation by squaring is ideal for the second case: basically just unroll the loop and embed the constants. But only if CONSTANT is an integer, of course.
You could use addition-chain exponentiation; it is a smart way to reduce the number of multiplications for known integer exponents: https://en.wikipedia.org/wiki/Addition-chain_exponentiation
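To illustrate the difference, here is x^15 both ways in Python (a classic example: plain square-and-multiply needs 6 multiplications, while the addition chain 1, 2, 3, 6, 12, 15 needs only 5):

```python
def pow15_squaring(x):
    """Square-and-multiply, unrolled for exponent 15 = 0b1111: 6 multiplications."""
    x2 = x * x            # x^2
    x3 = x2 * x           # x^3
    x7 = x3 * x3 * x      # x^7   (2 multiplications)
    return x7 * x7 * x    # x^15  (2 multiplications)

def pow15_chain(x):
    """Addition chain 1, 2, 3, 6, 12, 15: 5 multiplications."""
    x2 = x * x            # exponent 2
    x3 = x2 * x           # exponent 3
    x6 = x3 * x3          # exponent 6
    x12 = x6 * x6         # exponent 12
    return x12 * x3       # exponent 15
```

Finding the shortest chain for a given exponent is hard in general, but for a compile-time constant the search cost is paid once.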
