Simultaneous approximation of a polynomial system's roots - algorithm

The Wikipedia article describes how to derive the Durand-Kerner root-finding method from Newton's method. The method is attractive because of its good convergence and simplicity, as demonstrated in ACM Algorithm 283 by Immo Kerner himself ("translated to PL/I by R. A. Vowels"):
Prrs: procedure (A, X, n, epsin) options (reorder);
   declare (A(*), X(*), epsin) float, n fixed binary;
   declare (i, k, j, nits) fixed binary, (xx, P, Q, eps) float;
   eps = epsin*epsin;
   nits = 1;
W: Q = 0;
   do i = 1 to n;
      xx = A(n); P = A(n);
      do k = 1 to n;
         xx = xx * X(i) + A(n-k);
         if k ^= i then P = P * (X(i) - X(k));
      end;
      X(i) = X(i) - xx/P;
      Q = Q + (xx/P)*(xx/P);
   end;
   nits = nits + 1;
   if Q > eps then go to W;
end Prrs;
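For readers who don't speak PL/I, here is a rough Python sketch of the same iteration (my own transliteration, not part of Algorithm 283). The coefficient list a represents a[0] + a[1]*x + ... + a[n]*x^n, and the initial guesses are spread over a circle in the complex plane, a common default:

def durand_kerner(a, eps=1e-12, max_iter=1000):
    """Simultaneously approximate all roots of a[0] + a[1]*x + ... + a[n]*x^n."""
    n = len(a) - 1
    # Distinct, non-real starting points spread around a circle.
    x = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(max_iter):
        q = 0.0
        for i in range(n):
            # Horner evaluation of the polynomial at x[i] (xx in the PL/I).
            p = a[n]
            for k in range(n - 1, -1, -1):
                p = p * x[i] + a[k]
            # Weierstrass denominator: a_n * prod over k != i of (x[i] - x[k]).
            d = a[n]
            for k in range(n):
                if k != i:
                    d *= x[i] - x[k]
            w = p / d
            x[i] -= w
            q += abs(w) ** 2
        if q <= eps * eps:
            break
    return x

# x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3.
print(durand_kerner([-6, 11, -6, 1]))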
Is it somehow possible to derive a similar method for simultaneously finding approximations to all the roots of a system of n polynomial equations in n variables?
The line of thinking was: to find a root of a polynomial in one variable, one can use Newton's method. It is simple and fast, but which root it converges to depends on the initial guess, so it is difficult to find all the roots.
To approximate all the roots simultaneously, there are several generalizations of Newton's method that employ the so-called Weierstrass correction, for example the Durand-Kerner and Aberth methods mentioned above.
For systems of n multivariate polynomial equations in n variables there exists another generalization of Newton's method which allows one to find a root (i.e. a set of n values where the system becomes zero). It uses the Jacobian matrix.
So my question is: would it be possible to use the Jacobian in the corrections, kind of like combining Durand-Kerner with multivariate Newton? And by any chance, does anyone know of an implementation of such an algorithm?

Related

Best complexity to evaluate coefficients of polynomial

I want to find the coefficients of the degree-n polynomial with roots 0, 1, 2, ..., n-1. Can anybody suggest a good algorithm? I tried using the FFT, but it didn't work fast enough.
The simple solution that I would use is to write a function like this:
def poly_with_root_sequence(start, end, gap):
    # Builds the polynomial with roots start, start+gap, start+2*gap, ...
    # (up to end), splitting the roots so that multiplications of large
    # polynomials happen only near the top of the recursion.
    # Assumes a Polynomial class with exact integer coefficients,
    # listed from highest degree down, so [1, -start] is (x - start).
    if end < start + gap:
        return Polynomial([1, -start])
    else:
        p1 = poly_with_root_sequence(start, end, gap*2)
        p2 = poly_with_root_sequence(start+gap, end, gap*2)
        return p1 * p2

answer = poly_with_root_sequence(1, n, 1)
With a naive algorithm this will take O(n^2) arithmetic operations. However, some of the operations involve very large numbers. (Note that n! has more than n digits for large n.) The recursion above arranges that very few of the operations involve very large numbers.
There is still no chance of producing answers as quickly as you want unless you are using a polynomial implementation with a very fast multiplication algorithm.
https://gist.github.com/ksenobojca/dc492206f8a8c7e9c75b155b5bd7a099 advertises itself as an implementation of the FFT algorithm for multiplying polynomials in Python. I can't verify that. But it gives you a shot at going fairly fast.
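The snippet above assumes some Polynomial class; a minimal exact-integer version with naive multiplication (my own sketch, storing coefficients highest degree first to match the constructor calls above) could look like this:

class Polynomial:
    """Exact-integer polynomial; coefficients from highest degree down."""
    def __init__(self, coeffs):
        self.coeffs = list(coeffs)

    def __mul__(self, other):
        # Naive convolution; exact thanks to Python's bignum integers.
        out = [0] * (len(self.coeffs) + len(other.coeffs) - 1)
        for i, a in enumerate(self.coeffs):
            for j, b in enumerate(other.coeffs):
                out[i + j] += a * b
        return Polynomial(out)

# (x-1)(x-2)(x-3) = x^3 - 6x^2 + 11x - 6
print(poly_with_root_sequence(1, 3, 1).coeffs)   # [1, -6, 11, -6]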
As answered on Evaluating Polynomial coefficients, you can do this in a simple way:
def poly(lst, x):
    # Evaluate lst[0] + lst[1]*x + lst[2]*x**2 + ... at the given x.
    n, tmp = 0, 0
    for a in lst:
        tmp = tmp + (a * (x**n))
        n += 1
    return tmp

print(poly([1, 2, 3], 2))   # 1 + 2*2 + 3*4 = 17

Calculate integer powers with a given loop invariant

I need to derive an algorithm in C++ to calculate integer powers m^n that uses the loop invariant r = y^n and the loop condition y != m.
I tried using the instruction y = y+1 to advance, but I don't know how to obtain (y+1)^n from y^n, and it shouldn't be difficult to find. So this probably isn't the correct path to follow.
Could you help me to derive the program?
EDIT: this is a problem from the course Data Structures and Algorithms. The difficulty (if there is any) shouldn't be mathematical.
EDIT2: Just to clarify, the difficulty of the problem is using the invariant r = y^n and the loop condition y != m. If I vary n instead, I'm not maintaining that invariant.
Given w and P such that 2^w > m, P > 2^(wn), and 2^((P-1)/2) = -1 mod P,
then 2 is a generator mod P, and there will be some x such that 2^x = m mod P, so:
if (m <= 1 || n == 1)
    return m;
if (n == 0)
    return 1;

let y = 2;
let r = 1 << n;              // r = 2^n = y^n
while (y != m)
{
    y = (y * 2) % P;         // advance y through the powers of 2 mod P
    r = (r * (1 << n)) % P;  // keep the invariant r = y^n mod P
}
return r;                    // exact, since m^n < P
Unless your function needs to produce bignum results, you can just pick the largest P that fits into an integer in your language.
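To make this concrete, here is a Python sketch of the same idea. The prime P = 131 is an assumption chosen for illustration: 2 is a primitive root mod 131 (so the powers of 2 mod P eventually hit m), and 5^3 = 125 < 131 keeps the result exact:

def power_by_invariant(m, n, P):
    # Invariant: r == y**n (mod P); loop condition: y != m.
    # Assumes P prime, P > m**n, and 2 a primitive root mod P.
    if m <= 1 or n == 1:
        return m
    if n == 0:
        return 1
    y = 2
    r = pow(2, n, P)
    while y != m:
        y = (y * 2) % P               # step y through the powers of 2 mod P
        r = (r * pow(2, n, P)) % P    # maintain r = y**n mod P
    return r                          # exact because m**n < P

print(power_by_invariant(5, 3, 131))  # 125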
There is no useful relation between (y+1)^n and y^n (you can write (y+1)^n = ((y^n)^(1/n) + 1)^n or (y+1)^n = (1 + 1/y)^n * y^n, but this leads you nowhere).
If y were factored, you could exploit (a*b)^n = (a^n)*(b^n), but you would need a table of the n-th powers of the primes.
I can't see an answer that makes sense.
You can also think of the binomial theorem,
(y+1)^n = y^n + n y^(n-1) + n(n-1)/2 y^(n-2) + ... + 1
but this is even worse: you need to compute n binomial coefficients and update all the powers of y from 0 to n. The total cost of the computation would be ridiculously high.

Making a customizable LCG that travels backward and forward

How would I go about making an LCG (a type of pseudorandom number generator) travel in both directions?
I know that travelling forward is (a*x+c)%m, but how would I be able to reverse it?
I am using this so I can store the seed at the player's position in a map and generate things around it by propagating backward and forward in the LCG (like some sort of randomized number line).
All LCGs cycle. In an LCG which achieves maximal cycle length there is a unique predecessor and a unique successor for each value x (which won't necessarily be true for LCGs that don't achieve maximal cycle length, or for other algorithms with subcycle behaviors such as von Neumann's middle-square method).
Suppose our LCG has cycle length L. Since the behavior is cyclic, that means that after L iterations we are back to the starting value. Finding the predecessor value by taking one step backwards is mathematically equivalent to taking (L-1) steps forward.
The big question is whether that can be converted into a single step. If you're using a Prime Modulus Multiplicative LCG (where the additive constant is zero), it turns out to be pretty easy to do. If x_(i+1) = a * x_i % m, then x_(i+n) = a^n * x_i % m. As a concrete example, consider the PMMLCG with a = 16807 and m = 2^31 - 1. This has a maximal cycle length of m-1 (it can never yield 0 for obvious reasons), so our goal is to iterate m-2 times. We can precalculate a^(m-2) % m = 1407677000 using readily available exponentiation/mod libraries. Consequently, a forward step is found as x_(i+1) = 16807 * x_i % (2^31 - 1), while a backwards step is found as x_(i-1) = 1407677000 * x_i % (2^31 - 1).
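In Python, that precalculation is a single call to the built-in three-argument pow (a quick check of the value quoted above):

m = 2 ** 31 - 1                    # 2147483647, prime
a_back = pow(16807, m - 2, m)      # a^(m-2) mod m, i.e. the inverse of a mod m
print(a_back)                      # 1407677000
assert (16807 * a_back) % m == 1   # one step forward then one step back is a no-op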
ADDITIONAL
The same concept can be extended to generic full-cycle LCGs by casting the transition in matrix form and doing fast matrix exponentiation to come up with the equivalent one-stage transform. The matrix formulation for x_(i+1) = (a * x_i + c) % m is X_(i+1) = T * X_i % m, where T is the matrix [[a c],[0 1]] and X_i is the column vector (x_i, 1) transposed. Multiple iterations of the LCG can be quickly calculated by raising T to any desired power through fast exponentiation techniques using squaring and halving the power. After noticing that powers of matrix T never alter the second row, I was able to focus on just the first row calculations and produced the following implementation in Ruby:
def power_mod(ary, mod, power)
  # ary = [a, c] represents the transform x -> (a*x + c) % mod.
  # Returns the coefficients of that transform applied `power` times.
  return ary.map { |x| x % mod } if power < 2
  # Compose the transform with itself: x -> a*(a*x + c) + c.
  square = [ary[0] * ary[0] % mod, (ary[0] + 1) * ary[1] % mod]
  square = power_mod(square, mod, power / 2)
  return square if power.even?
  # Odd power: compose the squared result with ary once more.
  return [square[0] * ary[0] % mod, (square[0] * ary[1] + square[1]) % mod]
end
where ary is a vector containing a and c, the multiplicative and additive coefficients.
Using this with power set to the cycle length - 1, I was able to determine coefficients which yield the predecessor for various LCGs listed in Wikipedia. For example, to "reverse" the LCG with a = 1664525, c = 1013904223, and m = 2^32, use a = 4276115653 and c = 634785765. You can easily confirm that the latter set of coefficients reverses the sequence produced by using the original coefficients.
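A quick sanity check of that last claim, in Python:

M = 2 ** 32
fwd  = lambda x: (1664525 * x + 1013904223) % M      # original LCG
back = lambda x: (4276115653 * x + 634785765) % M    # "reversed" LCG

x = 123456789                # arbitrary test seed
assert back(fwd(x)) == x
assert fwd(back(x)) == x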

Fastest algorithm for computing the determinant of a matrix?

For a research paper, I have been assigned to research the fastest algorithm for computing the determinant of a matrix.
I already know about LU decomposition and the Bareiss algorithm, which both run in O(n^3), but after doing some digging, it seems there are algorithms that run somewhere between n^2 and n^3.
This source (see pages 113-114) and this source (see page 198) say that an algorithm exists that runs in O(n^2.376) because it is based on the Coppersmith-Winograd algorithm for multiplying matrices. However, I have not been able to find any details on such an algorithm.
My questions are:
What is the fastest existing (non-theoretical) algorithm for computing the determinant of a matrix?
Where can I find information about this fastest algorithm?
Thanks so much.
I believe the fastest-in-practice (and commonly used) algorithm is Strassen's algorithm. You can find an explanation on Wikipedia along with sample C code.
Algorithms based on the Coppersmith-Winograd multiplication algorithm are too complex to be practical, though they have the best asymptotic complexity known so far.
I know this is not a direct answer to my question, but for the purposes of completing my research paper, it is enough.
I just ended up asking my professor and I will summarize what he said:
Summary:
The fastest matrix-multiplication algorithms (e.g., Coppersmith-Winograd and more recent improvements) can be used to compute the determinant in O(n^2.376) arithmetic operations, but they rely on heavy mathematical tools and are often impractical.
LU decomposition and Bareiss use O(n^3) operations, but are more practical.
In short, even though LU decomposition and Bareiss are not as fast as the most efficient algorithms, they are more practical, and I should focus my research paper on these two.
Thanks to all who commented and helped!
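For what it's worth, the practical O(n^3) route is only a few lines on top of an LU factorization. A sketch using numpy/scipy (det(L) = 1 because L has a unit diagonal, and det(P) is the permutation's sign):

import numpy as np
from scipy.linalg import lu

def det_via_lu(A):
    P, L, U = lu(A)                    # A = P @ L @ U
    sign = round(np.linalg.det(P))     # +1 or -1 for a permutation matrix
    return sign * np.prod(np.diag(U))  # det(A) = det(P) * det(U)

A = np.random.randn(6, 6)
print(det_via_lu(A), np.linalg.det(A))  # should agree up to rounding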
See the following Matlab test script, which computes determinants of arbitrary square matrices (a comparison to Matlab's built-in det function is also included):
nMin = 2;       % Start with 2-by-2 matrices
nMax = 50;      % Quit with 50-by-50 matrices
nTests = 10000;
detsOfL = NaN*zeros(nTests, nMax - nMin + 1);
detsOfA = NaN*zeros(nTests, nMax - nMin + 1);
disp(' ');
for n = nMin:nMax
    tStart = tic;
    for j = 1:nTests
        A = randn(n, n);
        detA1 = det(A);  % Matlab's built-in function
        if n == 1
            detsOfL(j, 1) = 1;
            detsOfA(j, 1) = A;
            continue;    % Trivial case => Quick return
        end
        [L, U, P] = lu(A);
        PtL = P'*L;
        realEigenvaluesOfPtL = real(eig(PtL));
        if min(prod(realEigenvaluesOfPtL)) < 0  % det(L) is always +1 or -1
            detL = -1;
        else
            detL = 1;
        end
        detU = prod(diag(U));
        detA2 = detL * detU;  % Determinant of A using LU decomposition
        if detA1 ~= detA2
            error(['Determinant computation failed at n = ' num2str(n) ', j = ' num2str(j)]);
        end
        detsOfL(j, n - nMin + 1) = detL;
        detsOfA(j, n - nMin + 1) = detA2;
    end
    tElapsed = toc(tStart);
    disp(sprintf('Determinant computation was successful for n = %d!! Elapsed time was %.3f seconds', n, tElapsed));
end
disp(' ');
disp(' ');

Print a polynomial using minimum number of calls

I keep getting these hard interview questions. This one really baffles me.
You're given a function poly that takes and returns an int. It's actually a polynomial with nonnegative integer coefficients, but you don't know what the coefficients are.
You have to write a function that determines the coefficients using as few calls to poly as possible.
My idea is to use recursion, knowing that I can get the constant coefficient as poly(0). So I want to replace poly with (poly - poly(0))/x, but I don't know how to do this in code, since I can only call poly. Anyone have an idea how to do this?
Here's a neat trick.
int N = poly(1)
Now we know that every coefficient in the polynomial is at most N (the coefficients are nonnegative, and poly(1) is their sum).
int B = poly(N+1)
Now expand B in base N+1 and you have the coefficients.
Attempted explanation: Algebraically, the polynomial is
poly = p_0 + p_1 * x + p_2 * x^2 + ... + p_k * x^k
If you have a number b and expand it in base n, then you get
b = b_0 + b_1 * n + b_2 * n^2 + ...
where each b_i is uniquely determined and b_i < n. Since every coefficient satisfies p_i <= N < N+1, taking b = poly(N+1) and n = N+1 makes the digits b_i exactly the coefficients p_i.
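A Python sketch of the whole trick (the function name and the example polynomial are mine, for illustration; Python's bignum integers matter, since poly(N+1) can be huge):

def coefficients(poly):
    N = poly(1)                  # sum of the nonnegative coefficients
    if N == 0:
        return [0]               # the zero polynomial
    b, base = poly(N + 1), N + 1
    coeffs = []
    while b:
        coeffs.append(b % base)  # base-(N+1) digits = the coefficients
        b //= base
    return coeffs                # coeffs[i] is the coefficient of x**i

print(coefficients(lambda x: 3 * x**2 + 2 * x + 1))   # [1, 2, 3]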
