This is part of a bigger question; it's actually a mathematical problem. It would be really great if someone could direct me to an algorithm for solving it, or pseudocode would also help.
The question: given an equation, check whether it has an integral solution.
For example:
(26a+5)/32=b
Here a is an integer. Is there an algorithm to predict or find whether b can be an integer? I need a general solution, not one specific to this question; the equation can vary. Thanks.
Your problem is an example of a linear Diophantine equation. About that, Wikipedia says:
This Diophantine equation [i.e., a x + b y = c] has a solution (where x and y are integers) if and only if c is a multiple of the greatest common divisor of a and b. Moreover, if (x, y) is a solution, then the other solutions have the form (x + k v, y - k u), where k is an arbitrary integer, and u and v are the quotients of a and b (respectively) by the greatest common divisor of a and b.
In this case, (26 a + 5)/32 = b is equivalent to 26 a - 32 b = -5. The gcd of the coefficients of the unknowns is gcd(26, -32) = 2. Since -5 is not a multiple of 2, there is no solution.
A general Diophantine equation is a polynomial in the unknowns, and can only be solved (if at all) by more complex methods. A web search might turn up specialized software for that problem.
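For a quick check of that criterion in code, here is a minimal sketch in Python (using only the standard library's math.gcd):

```python
# Does 26*a - 32*b = -5 have integer solutions? Only if gcd(26, 32) divides -5.
from math import gcd

print((-5) % gcd(26, 32) == 0)   # False: no integer solutions exist
```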
Linear Diophantine equations take the form ax + by = c. When c equals gcd(a, b), the equation is exactly Bézout's identity, and it has an infinite number of solutions. More generally, ax + by = c has integer solutions if and only if gcd(a, b) divides c, so instead of a trial-search method you can simply check whether c is a multiple of the greatest common divisor (GCD) of a and b.
If it is, x and y can be computed using the extended Euclidean algorithm, which finds integers x and y (one of which is typically negative) satisfying Bézout's identity; scaling that pair by c / gcd(a, b) then solves your equation.
(As a side note: this also holds in any other Euclidean domain, e.g. a polynomial ring, and every Euclidean domain is a unique factorization domain.) You can use the iterative extended Euclidean method to find these solutions.
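Here is a minimal Python sketch of that iterative method (the function name and the worked example are illustrative):

```python
# Iterative extended Euclidean algorithm: returns (g, x, y) with a*x + b*y = g = gcd(a, b).
def extended_gcd(a, b):
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

g, x, y = extended_gcd(240, 46)
print(g, x, y)   # 2 -9 47, i.e. 240*(-9) + 46*47 = 2
```

To solve a*x + b*y = c when gcd(a, b) divides c, scale the returned pair by c / gcd(a, b).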
I'm having trouble with a homework exercise.
I need to describe an efficient algorithm which solves the polynomial interpolation problem:
Let P[i,j] be the polynomial interpolation of the points (x_i, y_i), ..., (x_j, y_j). Find 3 simple polynomials q(x), r(x), s(x) of degree 0 or 1 such that:
P[i,j+1]={q(x)P[i,j](x)-r(x)P[i+1,j+1](x)}/s(x)
Given the points (x_1, y_1), ..., (x_n, y_n), describe an efficient dynamic programming algorithm, based on the recurrence relation found in section 1, for computing the coefficients a_0, ..., a_{n-1} of the polynomial interpolation.
Well, I know how to solve the polynomial interpolation problem using the Newton polynomial, which looks quite similar to the recurrence relation above, but I don't see how it helps me find q(x), r(x), s(x) of degree 0 or 1. And assuming I have the correct q(x), r(x), s(x), how do I solve this problem using dynamic programming?
Any help will be much appreciated.
q(x) = (x at {j+1}) - x
r(x) = (x at i) - x
s(x) = (x at {j+1}) - (x at i)
Here "x at i" or "x at j" means the x-coordinate of the ith (or jth) point in the ordered list of input points.
Some explanations:
First we need to understand what P[i,j](x) means.
Put all your initial (x,y) pairs in the main diagonal of an n x n matrix.
Now you can extract P[0,0](x) to be the y value of the point in your matrix at (0,0).
P[0,1] is the linear interpolation of the points in your matrix at (0,0) and (1,1). This will be a straight line function.
((x at 1 - x)(y at 0) - (x at 0 - x)(y at 1))
---------------------------------------------
(x at 1 - x at 0)
P[0,2] is the linear interpolation of two previous linear interpolations, which means that your ys now will be the linear functions which you calculated at the previous step.
This is also the dynamic programming recurrence which builds the full polynomial.
I highly recommend you have a look at this very good lecture, and the full lecture notes.
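For concreteness, here is a minimal dynamic-programming sketch of this recurrence in Python (the function name and the use of numpy polynomials are my choices, not part of the exercise):

```python
# Build P[i][j], the interpolating polynomial of points i..j, bottom-up using
# P[i,j] = ((x_j - x)*P[i,j-1] - (x_i - x)*P[i+1,j]) / (x_j - x_i).
from numpy.polynomial import Polynomial as Poly

def interpolate(xs, ys):
    n = len(xs)
    P = [[None] * n for _ in range(n)]
    for i in range(n):
        P[i][i] = Poly([ys[i]])              # base case: P[i,i](x) = y_i
    for length in range(1, n):               # fill diagonals of increasing length
        for i in range(n - length):
            j = i + length
            q = Poly([xs[j], -1.0])          # q(x) = x_j - x
            r = Poly([xs[i], -1.0])          # r(x) = x_i - x
            s = xs[j] - xs[i]                # s(x) = x_j - x_i (a constant)
            P[i][j] = (q * P[i][j - 1] - r * P[i + 1][j]) / s
    return P[0][n - 1]

p = interpolate([0.0, 1.0, 2.0], [1.0, 3.0, 2.0])
print(p.coef)                                # coefficients a0, a1, a2
```

Each table entry is filled once from two previously computed entries, which is exactly the dynamic-programming structure the exercise asks for.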
I'm looking for a speedy algorithm to find the roots of a univariate polynomial in a prime finite field.
That is, if f = a_0 + a_1 x + a_2 x^2 + ... + a_n x^n (n > 0), then an algorithm that finds all r < p satisfying f(r) = 0 mod p, for a given prime p.
I found Chien's search algorithm https://en.wikipedia.org/wiki/Chien_search but I can't imagine this being that fast for primes greater than 20 bits. Does anyone have experience with Chien's search algorithm or know a faster way? Is there a sympy module for this?
This is pretty well studied, as mcdowella's comment indicates. Here is how the Cantor-Zassenhaus random algorithm works for the case where you want to find the roots of a polynomial, instead of the more general factorization.
Note that in the ring of polynomials with coefficients mod p, the product x(x-1)(x-2)...(x-p+1) has all possible roots, and equals x^p-x by Fermat's Little Theorem and unique factorization in this ring.
Set g = GCD(f,x^p-x). Using Euclid's algorithm to compute the GCD of two polynomials is fast in general, taking a number of steps that is logarithmic in the maximum degree. It does not require you to factor the polynomials. g has the same roots as f in the field, and no repeated factors.
Because of the special form of x^p-x, with only two nonzero terms, the first step of Euclid's algorithm can be done by repeated squaring, in about 2 log_2 (p) steps involving only polynomials of degree no more than twice the degree of f, with coefficients mod p. We may compute x mod f, x^2 mod f, x^4 mod f, etc, then multiply together the terms corresponding to nonzero places in the binary expansion of p to compute x^p mod f, and finally subtract x.
Repeatedly do the following: Choose a random d in Z/p. Compute the GCD of g with r_d = (x+d)^((p-1)/2)-1, which we can again compute rapidly by Euclid's algorithm, using repeated squaring on the first step. If the degree of this GCD is strictly between 0 and the degree of g, we have found a nontrivial factor of g, and we can recurse until we have found the linear factors hence roots of g and thus f.
How often does this work? r_d has as roots the numbers that are d less than a nonzero square mod p. Consider two distinct roots of g, a and b, so (x-a) and (x-b) are factors of g. If a+d is a nonzero square, and b+d is not, then (x-a) is a common factor of g and r_d, while (x-b) is not, which means GCD(g,r_d) is a nontrivial factor of g. Similarly, if b+d is a nonzero square while a+d is not, then (x-b) is a common factor of g and r_d while (x-a) is not. By number theory, one case or the other happens close to half of the possible choices for d, which means that on average it takes a constant number of choices of d before we find a nontrivial factor of g, in fact one separating (x-a) from (x-b).
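Here is a rough, unoptimized Python sketch of the procedure described above (Cantor-Zassenhaus restricted to finding roots). Polynomials are coefficient lists, lowest degree first, with coefficients mod p; it assumes p is an odd prime, and all helper names are mine:

```python
import random

def trim(a):
    # drop trailing zero coefficients, but keep at least [0]
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def sub(a, b, p):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return trim([(x - y) % p for x, y in zip(a, b)])

def divmod_poly(a, b, p):
    # polynomial long division mod p: returns (quotient, remainder)
    a, q = a[:], [0] * max(len(a) - len(b) + 1, 1)
    inv = pow(b[-1], -1, p)
    while len(a) >= len(b) and a != [0]:
        c, s = a[-1] * inv % p, len(a) - len(b)
        q[s] = c
        for i, bi in enumerate(b):
            a[s + i] = (a[s + i] - c * bi) % p
        trim(a)
    return trim(q), a

def mulmod(a, b, f, p):
    # multiply two polynomials, then reduce modulo f
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return divmod_poly(trim(res), f, p)[1]

def powmod(base, e, f, p):
    # repeated squaring of a polynomial modulo f, as described above
    result = [1]
    while e:
        if e & 1:
            result = mulmod(result, base, f, p)
        base = mulmod(base, base, f, p)
        e >>= 1
    return result

def gcd_poly(a, b, p):
    while b != [0]:
        a, b = b, divmod_poly(a, b, p)[1]
    inv = pow(a[-1], -1, p)
    return [c * inv % p for c in a]          # normalize the gcd to be monic

def split(g, p):
    # g is a squarefree product of linear factors; recursively split it into roots
    if len(g) == 1:
        return []
    if len(g) == 2:
        return [(-g[0]) * pow(g[1], -1, p) % p]
    while True:
        d = random.randrange(p)
        r = sub(powmod([d, 1], (p - 1) // 2, g, p), [1], p)   # (x+d)^((p-1)/2) - 1 mod g
        h = gcd_poly(g, r, p)
        if 1 < len(h) < len(g):              # nontrivial factor found
            return split(h, p) + split(divmod_poly(g, h, p)[0], p)

def roots_mod_p(f, p):
    # g = gcd(f, x^p - x) keeps exactly the distinct linear factors of f
    g = gcd_poly(f, sub(powmod([0, 1], p, f, p), [0, 1], p), p)
    return sorted(split(g, p))

# Example: (x-2)(x-3)(x-5) = x^3 + x^2 + 9x + 3 over GF(11)
print(roots_mod_p([3, 9, 1, 1], 11))         # [2, 3, 5]
```

For large p and large degrees you would want faster polynomial multiplication than the quadratic loop used here, but the structure (gcd with x^p - x, then random splitting) is the one described above.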
Your answers are good, but I think I found a wonderful method to find the roots modulo any number, based on lattices. Let r ≤ R be a root of f mod p. We must find another polynomial h(x) such that h is not too large and r is a root of h; the lattice method finds such an h. First we build a basis of polynomials for the lattice, and then, with the LLL algorithm, we find a "shortest vector" that has the root r without the modulus p. In fact, we eliminate the modulus p this way.
For more explanation, refer to Coppersmith, D., "Finding small solutions to small degree polynomials", in Cryptography and Lattices.
I'm trying to write an algorithm to determine whether a linear equation, specifically of the form ax + by = c, has positive integer solutions for given a, b, c. It needs to be efficient, since the numbers a, b, and c can be in the range 0 <= a, b, c <= 10^16. How should I approach this problem?
Since it's a Diophantine equation, I tried checking whether the GCD of a and b divides c, but that way I can't differentiate between positive, negative, or zero solutions.
Algorithm to determine non-negative-values solution existance for linear diophantine equation
I found a solution here, but I didn't quite understand it. Maybe someone can simplify it for me, since that one is pretty general and I'm only interested in equations with 2 variables.
Existence of integer solutions
Bézout's identity tells you, with gcd(a,b) denoting the greatest common divisor of a and b, that:
i) gcd(a,b) is the smallest positive integer that can be written as ax + by, and
ii) every integer of the form ax + by is a multiple of gcd(a,b).
So there you go: if c is divisible by gcd(a,b), you have solutions.
Finding positive solutions
From any one pair of solutions we can get all the others, so we can check whether any of them is positive. Still from the same identity, we get:
When one pair of Bézout coefficients (x0, y0) has been computed (e.g., using the extended Euclidean algorithm), all pairs can be represented in the form (x0 - k * b / d, y0 + k * a / d), where d = gcd(a,b) and k is an arbitrary integer.
And now we're done. All you have to do is :
Use the extended Euclidean algorithm, that will give you
gcd(a,b) and
a pair of (x0,y0) such that a * x0 + b * y0 = gcd(a,b)
Check if gcd(a,b) divides c.
If not, no solutions exist.
If it does, multiply x0 and y0 by c / gcd(a,b) to get solutions for your equation.
If x0 and y0 both have the same sign, you're done.
If they are both positive, you have positive solutions,
If they are both negative, you don't.
If x0 and y0 have different signs, choose the smallest (in absolute value) k to change the sign of the negative integer.
That is, if x0 is negative, then take k = floor(d * x0 / b) (rounding to -infinity)
and if y0 is negative, take k = ceil(-d * y0 / a).
Compute (x1,y1) = (x0 - k * b / d , y0 + k * a / d)
If x1 and y1 are both positive, you have just found a positive integer solution.
If flipping the sign of the negative one flipped the sign of the other one, you cannot find positive solutions.
Note that this is related to the question you linked, but the number of variables is different; this case is solvable directly because you only have two variables.
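Putting those steps together, a sketch in Python (assuming a, b > 0; instead of adjusting by a single k as above, it computes the whole range of k for which both unknowns stay positive, which amounts to the same check and also handles the boundary cases where one unknown would land on zero):

```python
def extended_gcd(a, b):
    # returns (g, x, y) with a*x + b*y = g = gcd(a, b)
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

def positive_solution(a, b, c):
    d, x0, y0 = extended_gcd(a, b)
    if c % d != 0:
        return None                        # gcd(a, b) does not divide c: no solutions
    x0, y0 = x0 * (c // d), y0 * (c // d)  # one integer solution of a*x + b*y = c
    aa, bb = a // d, b // d
    # general solution: (x0 - k*bb, y0 + k*aa); find a k keeping both >= 1
    k_hi = (x0 - 1) // bb                  # largest k with x still positive
    k_lo = -((y0 - 1) // aa)               # smallest k with y still positive
    if k_lo > k_hi:
        return None                        # integer solutions exist, but none positive
    return x0 - k_lo * bb, y0 + k_lo * aa

print(positive_solution(3, 5, 22))         # a positive solution, e.g. (4, 2)
```

Everything is integer arithmetic, so it stays exact even for values around 10^16.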
Thank you for the attention you're paying to my question :)
My question is about finding an (efficient enough) algorithm for finding orthogonal polynomials of a given weight function f.
I've tried simply applying the Gram-Schmidt algorithm, but it is not efficient enough: it requires O(n^2) integrals. My goal is to use this algorithm to find Hankel determinants of a function f, and a "direct" computation, which consists of simply computing the matrix and taking its determinant, requires only 2*n - 1 integrals.
But I want to use the theorem stating that the Hankel determinant of order n of f is a product of the first n leading coefficients of the orthogonal polynomials of f. The reason is that when n gets large (say about 20), the Hankel determinant gets really big, and my goal is to divide it by another big constant (for n = 20, the constant is of order 10^103). My idea is then to "dilute" the computation of the constant in the product of the leading coefficients.
I hope there is an O(n) algorithm to compute the first n orthogonal polynomials :) I've done some digging and found nothing in that direction for a general function f (f can be any smooth function, actually).
EDIT: I'll make precise here what the objects I'm talking about are.
1) A Hankel determinant of order n is the determinant of a square matrix which is constant on the skew diagonals. Thus for example
a b c
b c d
c d e
is a Hankel matrix of size 3 by 3.
2) If you have a function f : R -> R, you can associate to f its "kth moment" which is defined as (I'll write it in tex) f_k := \int_{\mathbb{R}} f(x) x^k dx
With this, you can create a Hankel matrix A_n(f) whose entries are (A_n(f))_{ij} = f_{i+j-2}, that is, something like
f_0 f_1 f_2
f_1 f_2 f_3
f_2 f_3 f_4
With this in mind, it is easy to define the Hankel determinant of f which is simply
H_n(f) := det(A_n(f)). (Of course, it is understood that f has sufficient decay at infinity, so that all the moments are well defined. A typical choice for f could be the Gaussian f(x) = exp(-x^2), or any continuous function on a compact subset of R...)
3) What I call orthogonal polynomials of f is a set of polynomials (p_n) such that
\int_{\mathbb{R}} f(x) p_j(x) p_k(x) dx is 1 if j = k and 0 otherwise.
(They are called like that since they form an orthonormal basis of the vector space of polynomials with respect to the scalar product
(p|q) = \int_{\mathbb{R}} f(x) p(x) q(x) dx.)
4) Now, it is basic linear algebra that from any basis of a vector space equipped with a scalar product, you can build an orthonormal basis thanks to the Gram-Schmidt algorithm. This is where the n^2 integrations come from: you start from the basis 1, x, x^2, ..., x^n; you then need n(n-1) integrals to make the family orthogonal, and n more to normalize them.
5) There is a theorem saying that if f : R -> R is a function having sufficient decay at infinity, then we have that its Hankel determinant H_n(f) is equal to
H_n(f) = \prod_{j = 0}^{n-1} \kappa_j^{-2}
where \kappa_j is the leading coefficient of the j+1th orthogonal polynomial of f.
Thank you for your answer!
(PS: I tagged octave because I work in Octave, so, with a bit of luck (but I doubt it), there is a built-in function or an existing package managing this kind of thing.)
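For reference, the "direct" computation described above (2n - 1 moment integrals, then one determinant) is only a few lines. A Python sketch (the question mentions Octave, where hankel, numerical integration and det would look very similar; the names here are illustrative):

```python
# Direct computation of H_n(f): build the moments f_0, ..., f_{2n-2}, fill the
# Hankel matrix A_n(f) with (A)_{ij} = f_{i+j} (indices starting at 0), take det.
import numpy as np
from scipy.integrate import quad
from scipy.linalg import hankel

def hankel_det(f, n, lo=-np.inf, hi=np.inf):
    moments = [quad(lambda x: f(x) * x**k, lo, hi)[0] for k in range(2 * n - 1)]
    A = hankel(moments[:n], moments[n - 1:])
    return np.linalg.det(A)

print(hankel_det(lambda x: np.exp(-x * x), 3))   # H_3 for the Gaussian weight
```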
Orthogonal polynomials obey a recurrence relation, which we can write as
P[n+1] = (X-a[n])*P[n] - b[n-1]*P[n-1]
P[0] = 1
P[1] = X-a[0]
and we can compute the a, b coefficients by
a[n] = <X*P[n]|P[n]> / c[n]
b[n-1] = c[n]/c[n-1]
where
c[n] = <P[n]|P[n]>
(Here < | > is your inner product).
However I cannot vouch for the stability of this process at large n.
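To make the integral count explicit, here is a rough Python sketch of this recurrence (the question mentions Octave; the translation is direct, and the names are mine). It builds the monic polynomials P[0..n] with only O(n) numerical integrals and, as a check, recovers the Hankel determinant as the product of the norms c[j], which by the theorem quoted in the question equals the product of the kappa_j^(-2):

```python
import numpy as np
from numpy.polynomial import Polynomial as Poly
from scipy.integrate import quad

def inner(p, q, f, lo=-np.inf, hi=np.inf):
    # <p|q> = integral of f(x) * p(x) * q(x) dx
    return quad(lambda x: f(x) * p(x) * q(x), lo, hi)[0]

def monic_orthogonal(n, f):
    X = Poly([0.0, 1.0])
    P = [Poly([1.0])]                                # P[0] = 1
    c = [inner(P[0], P[0], f)]                       # c[0] = <P[0]|P[0]>
    P.append(X - inner(X * P[0], P[0], f) / c[0])    # P[1] = X - a[0]
    for k in range(1, n):
        c.append(inner(P[k], P[k], f))
        a = inner(X * P[k], P[k], f) / c[k]          # a[k] = <X*P[k]|P[k]> / c[k]
        b = c[k] / c[k - 1]                          # coefficient of P[k-1]
        P.append((X - a) * P[k] - b * P[k - 1])
    c.append(inner(P[n], P[n], f))
    return P, c

f = lambda x: np.exp(-x * x)                         # Gaussian weight as an example
P, c = monic_orthogonal(4, f)
print("H_3 =", np.prod(c[:3]))                       # Hankel determinant of order 3
```

Each step costs two integrals (one for c[k], one for a[k]), so n polynomials cost about 2n integrals instead of O(n^2).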
How do I find the first perfect square value of the function f(n) = An² + Bn + C? B and C are given. A, B, C, and n are always integers, and A is always 1. The problem is finding n.
Example: A=1, B=2182, C=3248
The answer for the first perfect square is n=16, because sqrt(f(16))=196.
My algorithm increments n and tests whether the square root is an integer.
This algorithm is very slow when B or C is large, because it takes n calculations to find the answer.
Is there a faster way to do this calculation? Is there a simple formula that can produce an answer?
What you are looking for are integer solutions to a special case of the general quadratic Diophantine equation [1]
Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0
where you have
ax^2 + bx + c = y^2
so that A = a, B = 0, C = -1, D = b, E = 0, F = c where a, b, c are known integers and you are looking for unknown x and y that satisfy this equation. Once you recognize this, solutions to this general problem are in abundance. Mathematica can do it (use Reduce[eqn && Element[x|y, Integers], x, y]) and you can even find one implementation here including source code and an explanation of the method of solution.
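For the special case in the question (A = 1), there is also an elementary route: completing the square gives (2n + B)^2 - (2y)^2 = B^2 - 4C, so every solution corresponds to a factorization d1·d2 = B^2 - 4C with d1 and d2 of the same parity. A hedged Python sketch (it assumes B >= 0 and B^2 - 4C > 0, and looks for the smallest n >= 0):

```python
# Enumerate factor pairs d1*d2 = B^2 - 4C with d1 <= d2; then
# u = (d1 + d2)/2 = 2n + B and v = (d2 - d1)/2 = 2y, so n = (u - B)/2.
from math import isqrt

def first_perfect_square_n(B, C):
    D = B * B - 4 * C                       # assumed positive here
    best = None
    for d1 in range(1, isqrt(D) + 1):
        if D % d1:
            continue
        d2 = D // d1
        if (d1 + d2) % 2:                   # u must be an integer
            continue
        u = (d1 + d2) // 2
        if u < B or (u - B) % 2:            # n must be a non-negative integer
            continue
        n = (u - B) // 2
        best = n if best is None else min(best, n)
    return best

print(first_perfect_square_n(2182, 3248))   # 16, matching the example above
```

This takes O(sqrt(B^2 - 4C)) trial divisions instead of incrementing n one value at a time.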
[1]: You might recognize this as a conic section. It is, and people have been studying them for thousands of years. As such, our understanding of them is very deep and your problem is actually quite famous. The study of them is an immensely deep and still active area of mathematics.