Finding an intersection between something and a line - algorithm

I have a set of points which are interpolated with an unknown method. Or, to be more precise, the method is known, but it can be one of several: polynomial interpolation, spline, simple linear interpolation, and so on. I also have a line which, for now, let's imagine is given in the simple form y = ax + b.
Since I don't know which interpolation method is used (i.e. the function is hidden), I can only determine y for a given x and, equally, x for a given y value.
What is the usual way to go about finding an intersection between the two?

Say your unknown function is y = f(x) and the line is y = g(x) = ax + b. The intersections of these curves are the zeroes of Δy = f(x) - g(x). Just use any iterative method to find the roots of Δy; the simplest is the bisection method.
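For illustration, a minimal C sketch of that approach, assuming the hidden interpolant is exposed as a black-box function f and that you can supply a bracket [lo, hi] on which Δy changes sign (the function names here are placeholders):

#include <math.h>

double f(double x);                    /* the hidden interpolant, queried as a black box */
double g(double x, double a, double b) { return a * x + b; }   /* the line */

/* Bisection: root of delta(x) = f(x) - g(x) on [lo, hi],
   assuming delta(lo) and delta(hi) have opposite signs. */
double intersect(double lo, double hi, double a, double b, double tol)
{
    double dlo = f(lo) - g(lo, a, b);
    while (hi - lo > tol) {
        double mid = 0.5 * (lo + hi);
        double dmid = f(mid) - g(mid, a, b);
        if ((dlo < 0) == (dmid < 0)) { /* same sign: the root lies in the right half */
            lo = mid;
            dlo = dmid;
        } else {
            hi = mid;
        }
    }
    return 0.5 * (lo + hi);
}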

You have (an interpolation polynomial) f1(x) and (a line) f2(x), and you want to solve f(x) = f1(x) - f2(x) = 0. Use any method for solving this equation, e.g. Newton-Raphson or even bisection. This may not be optimal for your case; pay attention to convergence guarantees and possible multiple roots.
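Since the interpolant is a black box, the derivative Newton-Raphson needs is not directly available; a derivative-free secant iteration is a common substitute. A minimal sketch (delta is the function f1 - f2, and the starting guesses and tolerance are placeholders to be chosen for your data):

#include <math.h>

/* Secant method: refine a root estimate of delta without derivatives. */
double secant_root(double (*delta)(double), double x0, double x1, double tol, int maxit)
{
    for (int i = 0; i < maxit; i++) {
        double d0 = delta(x0), d1 = delta(x1);
        if (d1 == d0) break;                        /* flat secant: give up */
        double x2 = x1 - d1 * (x1 - x0) / (d1 - d0);
        if (fabs(x2 - x1) < tol) return x2;
        x0 = x1;
        x1 = x2;
    }
    return x1;   /* best estimate so far; convergence is not guaranteed */
}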

Spline: Bézier clipping.
Polynomial: Viète's formulas (to get the zeroes, I think).
Line: line-line.
Not a trivial question (or solution) under any circumstance.

Related

Algorithm for plotting polar equations in general form

I'm looking for an algorithm which can decide whether a given point (x, y) satisfies some equation written in polar form, like r - φ = 0. The main problem is that the angle φ is bounded to (0, 2π), so in this example I only get one cycle of the spiral. How can I get all the possible solutions for any polar equation written in such a form?
I tried bounding the r value to the (0, 2π) range, which didn't work on some more complicated examples like logarithmic spirals.
You can use the following transformation equations:
r = √(x² + y²)
φ = arctan(y, x) + 2kπ
where arctan(y, x) is the four-quadrant arctangent (atan2) and k is an integer.
In the case of your Archimedean spiral, check that
(√(x² + y²) - arctan(y, x)) / 2π
is an integer.
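In C, that membership test might look like this (a sketch; the tolerance eps absorbs floating-point error and must be tuned):

#include <math.h>

/* Is (x, y) on the Archimedean spiral r = phi for some branch k? */
int on_spiral(double x, double y, double eps)
{
    double k = (sqrt(x * x + y * y) - atan2(y, x)) / (2.0 * M_PI);
    return fabs(k - round(k)) < eps;   /* k must be (numerically) an integer */
}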

Interpolation using dynamic programming

I'm having trouble with a homework exercise.
I need to describe an efficient algorithm which solves the polynomial interpolation problem:
Let P[i,j](x) be the polynomial interpolation of the points (x_i, y_i), ..., (x_j, y_j). Find 3 simple polynomials q(x), r(x), s(x) of degree 0 or 1 such that:
P[i,j+1](x) = (q(x)·P[i,j](x) - r(x)·P[i+1,j+1](x)) / s(x)
Given the points (x_1, y_1), ..., (x_n, y_n), describe an efficient dynamic programming algorithm based on the recurrence relation you found in part 1 for computing the coefficients a_0, ..., a_(n-1) of the polynomial interpolation.
Well, I know how to solve the polynomial interpolation problem using the Newton polynomial, which looks quite similar to the above recurrence relation, but I don't see how it helps me find q(x), r(x), s(x) of degree 0 or 1. And assuming I have the correct q(x), r(x), s(x), how do I solve this problem using dynamic programming?
Any help will be much appreciated.
q(x) = x_(j+1) - x
r(x) = x_i - x
s(x) = x_(j+1) - x_i
Here x_i and x_(j+1) denote the x-coordinates of the points at positions i and j+1 in the ordered list of input points.
Some explanations:
First we need to understand what P[i,j](x) means.
Put all your initial (x, y) pairs on the main diagonal of an n x n matrix.
Now you can extract P[0,0](x) as the y value of the point at (0,0) in your matrix.
P[0,1] is the linear interpolation of the points at (0,0) and (1,1) in your matrix. This will be a straight-line function:
P[0,1](x) = ((x_1 - x)·y_0 - (x_0 - x)·y_1) / (x_1 - x_0)
P[0,2] is the linear interpolation of the two previous linear interpolations, which means that your y values now are the linear functions you calculated in the previous step.
This recurrence is also the dynamic programming algorithm which builds the full polynomial (it is Neville's algorithm).
I highly recommend you have a look at this very good lecture, and the full lecture notes.
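For concreteness, here is a C sketch of that dynamic programming table (Neville's algorithm). It evaluates P[0, n-1] at a query point x; extracting the coefficients a_0, ..., a_(n-1), as the exercise asks, additionally requires storing each P[i,j] as a coefficient vector rather than a single value:

/* Neville's algorithm: P[i][j] is the value at x of the interpolating
   polynomial through the points i..j. Fills the table diagonal by diagonal. */
double neville(const double xs[], const double ys[], int n, double x)
{
    double P[n][n];                        /* C99 variable-length array; fine for small n */
    for (int i = 0; i < n; i++)
        P[i][i] = ys[i];                   /* base case: constant polynomials */
    for (int len = 1; len < n; len++)
        for (int i = 0; i + len < n; i++) {
            int j = i + len;
            /* the recurrence above, with q = xs[j]-x, r = xs[i]-x, s = xs[j]-xs[i] */
            P[i][j] = ((xs[j] - x) * P[i][j - 1] - (xs[i] - x) * P[i + 1][j])
                      / (xs[j] - xs[i]);
        }
    return P[0][n - 1];
}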

Finding integral solution of an equation

This is part of a bigger question. It's actually a mathematical problem, so it would be really great if someone could direct me to an algorithm for obtaining the solution of this problem; pseudocode would also help.
The question: given an equation, check whether it has an integral solution.
For example:
(26a+5)/32=b
Here a is an integer. Is there an algorithm to predict or find whether b can be an integer? I need a general solution, not one specific to this question; the equation can vary. Thanks.
Your problem is an example of a linear Diophantine equation. About that, Wikipedia says:
This Diophantine equation [i.e., a x + b y = c] has a solution (where x and y are integers) if and only if c is a multiple of the greatest common divisor of a and b. Moreover, if (x, y) is a solution, then the other solutions have the form (x + k v, y - k u), where k is an arbitrary integer, and u and v are the quotients of a and b (respectively) by the greatest common divisor of a and b.
In this case, (26 a + 5)/32 = b is equivalent to 26 a - 32 b = -5. The gcd of the coefficients of the unknowns is gcd(26, -32) = 2. Since -5 is not a multiple of 2, there is no solution.
A general Diophantine equation is a polynomial in the unknowns, and can only be solved (if at all) by more complex methods. A web search might turn up specialized software for that problem.
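For the linear case, the divisibility test is a few lines of C (a sketch; the helper names are illustrative):

#include <stdlib.h>

int gcd(int a, int b) { return b == 0 ? abs(a) : gcd(b, a % b); }

/* Does a*x + b*y = c have an integral solution? */
int solvable(int a, int b, int c) { return c % gcd(a, b) == 0; }

/* solvable(26, -32, -5) returns 0: no integral solution, as argued above. */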
Linear Diophantine equations take the form ax + by = c. If c is the greatest common divisor of a and b, this means a = z'c and b = z''c, and then the equation is Bézout's identity of the form
ax + by = gcd(a, b)
and it has an infinite number of solutions. So instead of a trial-search method you can check whether c is the greatest common divisor (GCD) of a and b.
If indeed a and b are multiples of c, then x and y can be computed using the extended Euclidean algorithm, which finds integers x and y (one of which is typically negative) that satisfy Bézout's identity above. (As a side note: this also holds for any other Euclidean domain, e.g. a polynomial ring; every Euclidean domain is a unique factorization domain.) You can use the iterative method to find these solutions:
Integral solution to equation `a + bx = c + dy`
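A sketch of that iterative extended Euclidean algorithm in C; on return, x and y hold a Bézout pair with a·x + b·y = gcd(a, b):

/* Iterative extended Euclid. Invariant: a0*x0 + b0*y0 == a and a0*x1 + b0*y1 == b
   for the original inputs a0, b0. */
int ext_gcd(int a, int b, int *x, int *y)
{
    int x0 = 1, y0 = 0;
    int x1 = 0, y1 = 1;
    while (b != 0) {
        int q = a / b, t;
        t = a - q * b;   a = b;    b = t;
        t = x0 - q * x1; x0 = x1;  x1 = t;
        t = y0 - q * y1; y0 = y1;  y1 = t;
    }
    *x = x0;
    *y = y0;
    return a;               /* the gcd (up to sign for negative inputs) */
}
/* To solve a*x + b*y = c when gcd(a, b) divides c, scale the returned pair by c / gcd(a, b). */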

Compute cosines and sines of a sequence of angles

I need to write a program which computes the cosines and sines of a sequence of angles k·α, where k is a growing natural number (i.e., 0, 1, 2, ...) and α is a constant angle which lies between 0 and π. I would like to make this program as fast as possible.
Hence, I want to compute first the cosine of each angle, and then the related sine with sqrt(1 - cos(k·α)^2). The problem is the sign of the sine, which should be determined by the position of the angle k·α on the real line.
I would like to know how I could implement this sign determination as fast as possible, or whether the fastest way to proceed is to compute the sine directly, too.
After some time, I thought again about this problem and I found a really simple solution:
n = floor(k*alpha/pi);
if (n % 2 == 0)
    sin_alpha = +sqrt(1 - pow(cos(k*alpha), 2));
else
    sin_alpha = -sqrt(1 - pow(cos(k*alpha), 2));
Problem solved. :)
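For completeness, the same logic wrapped in a runnable loop (alpha and N are placeholder values):

#include <math.h>
#include <stdio.h>

int main(void)
{
    const double alpha = 0.7;            /* constant angle in (0, pi) */
    const int N = 10;                    /* length of the sequence */
    for (int k = 0; k < N; k++) {
        double c = cos(k * alpha);
        double s = sqrt(1.0 - c * c);    /* |sin(k*alpha)| */
        if ((long)floor(k * alpha / M_PI) % 2 != 0)
            s = -s;                      /* odd number of half-turns: sine is negative */
        printf("k=%d cos=%f sin=%f\n", k, c, s);
    }
    return 0;
}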

How can I transform a polynomial to another coordinate system?

Using assorted matrix math, I've solved a system of equations, resulting in coefficients for a polynomial of degree n-1:
Ax^(n-1) + Bx^(n-2) + ... + Z
I then evaluate the polynomial over a given x range; essentially, I'm rendering the polynomial curve. Now here's the catch: I've done this work in one coordinate system we'll call "data space". Now I need to present the same curve in another coordinate space. It is easy to transform input/output to and from the coordinate spaces, but the end user is only interested in the coefficients [A, B, ..., Z], since they can reconstruct the polynomial on their own. How can I present a second set of coefficients [A', B', ..., Z'] which represent the same shaped curve in a different coordinate system?
If it helps, I'm working in 2D space. Plain old x's and y's. I also feel like this may involve multiplying the coefficients by a transformation matrix? Would it somehow incorporate the scale/translation factor between the coordinate systems? Would it be the inverse of this matrix? I feel like I'm headed in the right direction...
Update: Coordinate systems are linearly related. Would have been useful info eh?
The problem statement is slightly unclear, so first I will clarify my own interpretation of it:
You have a polynomial function
f(x) = C_n x^n + C_(n-1) x^(n-1) + ... + C_0
[I changed A, B, ..., Z into C_n, C_(n-1), ..., C_0 to more easily work with the linear algebra below.]
Then you also have a transformation such as: z = ax + b that you want to use to find coefficients for the same polynomial, but in terms of z:
f(z) = D_n z^n + D_(n-1) z^(n-1) + ... + D_0
This can be done pretty easily with some linear algebra. In particular, you can define an (n+1)×(n+1) matrix T which allows us to do the matrix multiplication
d = T * c,
where d is a column vector with top entry D_0 down to last entry D_n, the column vector c is similar for the C_i coefficients, and the matrix T has (i,j)-th entry [i-th row, j-th column] t_ij given by
t_ij = (j choose i) · a^i · b^(j-i),
where (j choose i) is the binomial coefficient, taken to be 0 when i > j. Also, unlike standard matrices, i and j each range from 0 to n (usually indexing starts at 1).
This is basically a nice way to write out the expansion and re-compression of the polynomial when you plug in z=ax+b by hand and use the binomial theorem.
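A C sketch of that construction (illustrative names; c and d are the coefficient arrays C_0..C_n and D_0..D_n), folding the matrix T directly into the product d = T·c:

#include <math.h>

/* d[i] = sum_j (j choose i) * a^i * b^(j-i) * c[j], i.e. d = T * c */
void transform_coeffs(int n, double a, double b, const double c[], double d[])
{
    for (int i = 0; i <= n; i++) {
        d[i] = 0.0;
        for (int j = i; j <= n; j++) {     /* t_ij = 0 for i > j */
            double t = 1.0;
            for (int m = 0; m < i; m++)    /* build (j choose i) incrementally */
                t = t * (j - m) / (m + 1);
            t *= pow(a, i) * pow(b, j - i);
            d[i] += t * c[j];
        }
    }
}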
If I understand your question correctly, there is no guarantee that the function will remain polynomial after you change coordinates. For example, let y=x^2, and the new coordinate system x'=y, y'=x. Now the equation becomes y' = sqrt(x'), which isn't polynomial.
Tyler's answer is the right one if you have to compute this change of variable z = ax + b many times (I mean for many different polynomials). On the other hand, if you have to do it just once, it is much faster to combine the computation of the matrix coefficients with the final evaluation. The best way to do this is to symbolically evaluate your polynomial at the point (ax + b) by Horner's method:
you store the polynomial coefficients in a vector V (at the beginning, all coefficients are zero), and for i = n down to 0, you multiply it by (ax + b) and add C_i.
Adding C_i means adding it to the constant term.
Multiplying by (ax + b) means multiplying all coefficients by b into a vector K1, multiplying all coefficients by a and shifting them one place up from the constant term into a vector K2, and putting K1 + K2 back into V.
This will be easier to program, and faster to compute.
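That procedure in C (a sketch; c[0..n] holds C_0, ..., C_n, and v[0..n] receives the coefficients of the polynomial rewritten in terms of x after substituting ax + b):

/* Symbolic Horner: v ends up holding the coefficients of sum_i c[i]*(a*x + b)^i. */
void horner_substitute(int n, double a, double b, const double c[], double v[])
{
    for (int k = 0; k <= n; k++)
        v[k] = 0.0;
    for (int i = n; i >= 0; i--) {
        /* multiply V by (a*x + b): K1 is the b-scaled copy, K2 the a-scaled shift,
           combined in place from the top coefficient down */
        for (int k = n; k >= 1; k--)
            v[k] = b * v[k] + a * v[k - 1];
        v[0] = b * v[0];
        v[0] += c[i];                       /* add C_i to the constant term */
    }
}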
Note that changing y into w = cy+d is really easy. Finally, as mattiast points out, a general change of coordinates will not give you a polynomial.
Technical note: if you still want to compute the matrix T (as defined by Tyler), you should compute it by using a weighted version of Pascal's rule (this is what the Horner computation does implicitly):
t_(i,j) = b · t_(i,j-1) + a · t_(i-1,j-1)
This way, you compute it simply, column after column, from left to right.
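In C, that column-by-column fill might look like this (a sketch; T is (n+1)×(n+1) and indexed from 0, as above):

/* Weighted Pascal's rule: t[i][j] = b*t[i][j-1] + a*t[i-1][j-1]. */
void fill_T(int n, double a, double b, double t[n + 1][n + 1])
{
    for (int i = 0; i <= n; i++)
        for (int j = 0; j <= n; j++)
            t[i][j] = 0.0;
    t[0][0] = 1.0;                                    /* (0 choose 0) * a^0 * b^0 */
    for (int j = 1; j <= n; j++)                      /* left to right */
        for (int i = 0; i <= j; i++)
            t[i][j] = b * t[i][j - 1] + (i > 0 ? a * t[i - 1][j - 1] : 0.0);
}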
You have the equation:
y = Ax^(n-1) + Bx^(n-2) + ... + Z
In xy space, and you want it in some x'y' space. What you need are transformation functions f(x) = x' and g(y) = y' (or h(x') = x and j(y') = y). In the first case you need to solve for x and solve for y. Once you have x and y, you can substitute those results into your original equation and solve for y'.
Whether or not this is trivial depends on the complexity of the functions used to transform from one space to another. For example, equations such as:
5x = x' and 10y = y'
are extremely easy to solve, giving the result
y' = 10A(x'/5)^(n-1) + 10B(x'/5)^(n-2) + ... + 10Z
If the input spaces are linearly related, then yes, a matrix should be able to transform one set of coefficients to another. For example, if you had your polynomial in your "original" x-space:
ax^3 + bx^2 + cx + d
and you wanted to transform into a different w-space where w = px+q
then you want to find a', b', c', and d' such that
ax^3 + bx^2 + cx + d = a'w^3 + b'w^2 + c'w + d'
and with some algebra,
a'w^3 + b'w^2 + c'w + d' = a'p^3x^3 + 3a'p^2qx^2 + 3a'pq^2x + a'q^3 + b'p^2x^2 + 2b'pqx + b'q^2 + c'px + c'q + d'
therefore
a = a'p^3
b = 3a'p^2q + b'p^2
c = 3a'pq^2 + 2b'pq + c'p
d = a'q^3 + b'q^2 + c'q + d'
which can be rewritten as a matrix problem and solved.
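Because that system is triangular, back-substitution is enough; no general linear solver is needed. A sketch for this cubic case (w = px + q, with p != 0):

/* Solve the triangular system above for a', b', c', d' given a, b, c, d. */
void cubic_change_of_variable(double a, double b, double c, double d,
                              double p, double q,
                              double *ap, double *bp, double *cp, double *dp)
{
    *ap = a / (p * p * p);
    *bp = (b - 3.0 * (*ap) * p * p * q) / (p * p);
    *cp = (c - 3.0 * (*ap) * p * q * q - 2.0 * (*bp) * p * q) / p;
    *dp = d - (*ap) * q * q * q - (*bp) * q * q - (*cp) * q;
}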
