Algorithm for plotting polar equations in general form

I'm looking for an algorithm that can decide whether a given point (x, y) satisfies some equation written in polar form, like r - phi = 0. The main problem is that the angle phi is restricted to [0, 2pi), so in this example I only get one turn of the spiral. So how can I get all the possible solutions for any polar equation written in this form?
I tried bounding the r value to the [0, 2pi) range, but that didn't work on more complicated examples like logarithmic spirals.

You can use the following transformation equations:
r = √(x² + y²)
φ = arctan(y, x) + 2kπ
where arctan(y, x) is the four-quadrant arctangent (atan2) and k is any integer.
In the case of your Archimedean spiral r = φ, check whether
(√(x² + y²) - arctan(y, x)) / 2π
is an integer.
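This membership test is only a few lines of code; here is a sketch in Python (the function name and tolerance are my own choices):

```python
import math

def on_archimedean_spiral(x, y, tol=1e-9):
    """Test whether (x, y) satisfies r - phi = 0 for SOME branch of phi,
    i.e. whether (r - atan2(y, x)) / (2*pi) is an integer."""
    r = math.hypot(x, y)
    phi = math.atan2(y, x)         # principal value in (-pi, pi]
    k = (r - phi) / (2 * math.pi)  # branch index; integral iff on the spiral
    return abs(k - round(k)) < tol
```

The same idea works for other polar equations: substitute phi + 2*k*pi and test whether some integer k satisfies the equation. For the logarithmic spiral r = e^phi, for example, the test becomes whether (log(r) - atan2(y, x)) / (2*pi) is an integer (for r > 0).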


Computer representation of quaternions that's exact for 90-degree rotations?

Unit quaternions have several advantages over 3x3 orthogonal matrices
for representing 3d rotations on a computer.
However, one thing that has been disappointing me about the unit quaternion
representation is that axis-aligned 90 degree rotations
aren't exactly representable. For example, a 90-degree rotation around the z axis, taking the +x axis to the +y axis, is represented as [w=sqrt(1/2), x=0, y=0, z=sqrt(1/2)].
Surprising/unpleasant consequences include:
applying a floating-point-quaternion-represented axis-aligned 90 degree rotation to a vector v
often doesn't rotate v by exactly 90 degrees
applying a floating-point-quaternion-represented axis-aligned 90 degree rotation to a vector v four times
often doesn't yield exactly v
squaring a floating-point-quaternion representing a 90 degree rotation around a coordinate axis
doesn't exactly yield the (exactly representable) 180 degree rotation
around that coordinate axis,
and raising it to the eighth power doesn't yield the identity quaternion.
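These consequences are easy to reproduce; the sketch below (with a hand-rolled Hamilton product, names mine) squares the floating-point quaternion for the 90-degree z rotation and does not land exactly on the 180-degree rotation (0, 0, 0, 1):

```python
import math

def qmul(a, b):
    """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

q90 = (math.sqrt(0.5), 0.0, 0.0, math.sqrt(0.5))  # 90 degrees about z
q180 = qmul(q90, q90)
# the z component is 2*sqrt(0.5)**2, which is slightly above 1.0 in doubles,
# so q180 is close to (0, 0, 0, 1) but not exactly equal to it
```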
Because of this unfortunate lossiness of the quaternion representation on "nice" rotations,
I still sometimes choose 3x3 matrices for applications in which I'd like axis-aligned
90 degree rotations, and compositions of them,
to be exact and floating-point-roundoff-error-free.
But the matrix representation isn't ideal either,
since it loses the sometimes-needed double covering property
(i.e. quaternions distinguish between the identity and a 360-degree rotation,
but 3x3 rotation matrices don't) as well as other familiar desirable numerical properties
of the quaternion representation, such as lack of need for re-orthogonalization.
My question: is there a computer representation of unit quaternions that does not suffer this
imprecision, and also doesn't lose the double covering property?
One solution I can think of is to represent each of the 4 elements of the quaternion
as a pair of machine-representable floating-point numbers [a,b], meaning a + b √2.
So the representation of a quaternion would consist of eight floating-point numbers.
I think that works, but it seems rather heavyweight;
e.g. when computing the product of a long sequence of quaternions,
each multiplication in the simple quaternion calculation would turn into
4 floating-point multiplications and 2 floating-point additions,
and each addition would turn into 2 floating-point additions. From the point of view of trying to write a general-purpose library implementation, all that extra computation and storage seems pointless as soon as there's a factor that's not one of these "nice" rotations.
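For what it's worth, the pair representation is straightforward to prototype with exact rational scalars; here is a minimal sketch (the class name `Rt2` is my own) of the a + b√2 arithmetic the scheme needs:

```python
from fractions import Fraction

class Rt2:
    """Exact a + b*sqrt(2) with rational a, b; enough to represent
    components like sqrt(1/2) = 0 + (1/2)*sqrt(2)."""
    def __init__(self, a, b=0):
        self.a, self.b = Fraction(a), Fraction(b)
    def __add__(self, other):
        return Rt2(self.a + other.a, self.b + other.b)
    def __mul__(self, other):
        # (a + b*sqrt2)(c + d*sqrt2) = (ac + 2bd) + (ad + bc)*sqrt2:
        # 4 multiplications and 2 additions per component product
        return Rt2(self.a * other.a + 2 * self.b * other.b,
                   self.a * other.b + self.b * other.a)
    def __eq__(self, other):
        return self.a == other.a and self.b == other.b

# sqrt(1/2) squared is exactly 1/2 in this representation
half_sqrt2 = Rt2(0, Fraction(1, 2))
assert half_sqrt2 * half_sqrt2 == Rt2(Fraction(1, 2))
```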
Another possible solution would be to represent each quaternion q = w + xi + yj + zk
as the 4-tuple [sign(w)·w², sign(x)·x², sign(y)·y², sign(z)·z²].
This representation is concise and has the desired non-lossiness for the subgroup of
interest, but I don't know how to multiply together two quaternions in this representation.
Yet another possible approach would be to store the quaternion q²
instead of the usual q. This seems promising at first,
but, again, I don't know how to non-lossily multiply
two of these representations together on the computer, and furthermore
the double-cover property is evidently lost.
You probably want to check the paper "Algorithms for manipulating quaternions in floating-point arithmetic", published in 2020 and available online:
https://hal.archives-ouvertes.fr/hal-02470766/file/quaternions.pdf
It shows how to perform exact computations and avoid unbounded numerical errors.
EDIT:
You can get rid of the square roots by using an unnormalized (i.e., non-unit) quaternion. Let me explain the idea.
Given two unit 3D vectors X and Y, represented as pure quaternions, a quaternion Q rotating X to Y is
Q = -(X + Y) * X / |(X + Y) * X|
The denominator, which takes the norm, is the problem, since it involves a square root.
You can see this by expanding the expression as:
Q = (1 - Y * X) / sqrt(2 + 2*(X • Y))
Substituting X = i and Y = j to get the 90-degree rotation:
Q = (1 + k) / sqrt(2)
So |1 + k| = sqrt(2). But we can actually use the unnormalized quaternion Q = 1 + k to perform rotations; all we need to do is divide the rotated result by the
SQUARED norm of the quaternion.
For example, the squared norm of Q = 1 + k is |1 + k|² = 2 (and that is exact, since you never took a square root). Let's apply the unnormalized quaternion to the vector X = i:
= (1 + k) i (1 - k)
= (i + k * i - i * k - k * i * k)
= (i + 2 j - i)
= 2 j
To get the correct result, we divide by the squared norm.
I haven't tested it, but I believe you would get exact results by applying unnormalized quaternions to your vectors and then dividing the result by the squared norm.
The algorithm would be:
Create the unnormalized quaternion Q = -(X + Y) * X
Apply Q to your vectors: v' = Q * v * ~Q
Normalize by the squared norm: v'' = v' / |Q|²
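The steps above can be sketched in Python as follows; with integer inputs every intermediate product is exact, and the only division is by the squared norm (done here with `Fraction` to stay exact; helper names are mine):

```python
from fractions import Fraction

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def conj(q):
    return (q[0], -q[1], -q[2], -q[3])

def rotate(q, v):
    """Apply unnormalized quaternion q to 3-vector v: (q v ~q) / |q|^2."""
    norm2 = sum(c * c for c in q)
    r = qmul(qmul(q, (0,) + tuple(v)), conj(q))
    return tuple(Fraction(c, norm2) for c in r[1:])

# Q = 1 + k rotates i to j exactly, as in the worked example above
assert rotate((1, 0, 0, 1), (1, 0, 0)) == (0, 1, 0)
```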

Genetic Algorithm : Find curve that fits points

I am working on a genetic algorithm. Here is how it works :
Input : a list of 2D points
Input : the degree of the curve
Output : the equation of the curve that passes through points the best way (try to minimize the sum of vertical distances from point's Ys to the curve)
The algorithm finds good equations for simple straight lines and for 2-degree equations.
But for 4 points and 3 degree equations and more, it gets more complicated. I cannot find the right combination of parameters : sometimes I have to wait 5 minutes and the curve found is still very bad. I tried modifying many parameters, from population size to number of parents selected...
Are there well-known parameter combinations or results in GA programming that could help me?
Thank you! :)
Based on what is given, you would need polynomial interpolation, in which the degree of the equation is the number of points minus 1:
n = (number of points) - 1
Now, having said that, let's assume you have 5 points that need to be fitted, which I am going to define in a variable:
var points = [[0,0], [2,3], [4,-1], [5,7], [6,9]]
Note that the array of points has been ordered by x value, which you need to ensure.
Then the equation would be:
f(x) = a1*x^4 + a2*x^3 + a3*x^2 + a4*x + a5
Based on the definition (https://en.wikipedia.org/wiki/Polynomial_interpolation#Constructing_the_interpolation_polynomial), the coefficients are obtained by solving a linear system; use the referenced page to come up with the coefficients.
It is not that complicated; for polynomial interpolation of degree n you get the following equation:
p(x) = c0 + c1 * x + c2 * x^2 + ... + cn * x^n = y
This means we need n + 1 genes for the coefficients c0 to cn.
The fitness function is the sum of all squared distances from the points to the curve; below is the formula for the squared distance. This way a smaller value is obviously better; if you don't want that, you can take the inverse (1 / sum of squared distances):
d_squared(xi, yi) = (yi - p(xi))^2
I think for faster convergence you could limit the mutation, e.g. when mutating, choose a new value between min and max (e.g. -1000 and 1000) with 20% probability, and with 80% probability multiply the old value by a random factor between 0.8 and 1.2.
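A sketch of that fitness function and the suggested mutation in Python (function names, default ranges, and the highest-degree-first coefficient ordering are my own choices):

```python
import random

def polyval(coeffs, x):
    """Evaluate p(x) by Horner's rule; coeffs are cn..c0, highest degree first."""
    y = 0.0
    for c in coeffs:
        y = y * x + c
    return y

def fitness(coeffs, points):
    """Sum of squared vertical distances from the points to the curve."""
    return sum((yi - polyval(coeffs, xi)) ** 2 for xi, yi in points)

def mutate(coeffs, lo=-1000.0, hi=1000.0):
    """Mutate one gene: 20% a fresh uniform value, 80% a small multiplicative nudge."""
    out = list(coeffs)
    i = random.randrange(len(out))
    if random.random() < 0.2:
        out[i] = random.uniform(lo, hi)
    else:
        out[i] *= random.uniform(0.8, 1.2)
    return out
```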

Fit a curve parallel to line segments

I have a set of (2-dimensional) line segments. I want to fit a curve of second degree that is parallel to the line segments.
I would like to do this using an implicit function like this: f(x,y) = a x^2 + b xy + c y^2 + d x + e y + f = 0.
I also have a couple of points where the curve should start, so it is possible to determine f.
Here is what I tried so far:
I computed lines that are perpendicular to my line segments. Then I would like to make the gradient of the curve at the intersection point parallel to these perpendicular lines (so that the curve's tangent equals the line segment's direction). Unfortunately there are two problems: 1) there might be two solutions (intersections); 2) I do not know the intersection points as long as I don't know the coefficients of f. Eventually I get nonlinear systems of equations which I do not know how to solve.
So far, I have used singular value decomposition to solve my linear systems of equations.
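The linear part of such a setup (a least-squares fit of the implicit conic through a set of points, with the coefficient vector taken from the null space via SVD) can be sketched as below; the parallelism/gradient constraints on top of it are the nonlinear part the question is about (function name mine):

```python
import numpy as np

def fit_implicit_conic(points):
    """Fit a*x^2 + b*xy + c*y^2 + d*x + e*y + f = 0 through points in a
    least-squares sense: the coefficient vector is the right singular
    vector of the design matrix belonging to its smallest singular value."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(A)
    return vt[-1]   # (a, b, c, d, e, f), determined up to scale
```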

Finding an intersection between something and a line

I have a set of points that are interpolated with an unknown method (to be more precise, the method is known, but it can be one of several: polynomial interpolation, spline, simple linear, ...), and a line, which, let's for now imagine, is given in the simple form y = ax + b.
For the interpolation, I don't know which method is used (i.e. the function is hidden), so I can only determine y for a given x, and equally, x for a given y value.
What is the usual way to go about finding an intersection between the two?
Say your unknown function is y = f(x) and the line is y = g(x) = ax + b. The intersection of these curves will be the zeroes of Δy = f(x) - g(x). Just use any iterative method to find the roots of Δy - the simplest would be to use the bisection method.
You have (an interpolation polynomial) f1(x) and (a line) f2(x) and you want to solve f(x) = f1(x)-f2(x) = 0. Use any method for solving this equation, e.g. Newton-Raphson or even bisection. This may not be the most optimal for your case. Pay attention to convergence guarantees and possible multiple roots.
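A minimal bisection sketch in Python, treating the hidden interpolant as a black-box callable (finding a bracket [lo, hi] with a sign change by sampling is assumed):

```python
def intersect(f, g, lo, hi, tol=1e-12, max_iter=200):
    """Find x in [lo, hi] with f(x) = g(x) by bisection on h = f - g.
    Requires h(lo) and h(hi) to have opposite signs."""
    h = lambda x: f(x) - g(x)
    a, b, ha = lo, hi, h(lo)
    if ha * h(b) > 0:
        raise ValueError("no sign change on [lo, hi]")
    for _ in range(max_iter):
        m = 0.5 * (a + b)
        hm = h(m)
        if hm == 0 or b - a < tol:
            return m
        if ha * hm < 0:   # root is in the left half
            b = m
        else:             # root is in the right half
            a, ha = m, hm
    return 0.5 * (a + b)
```

For example, intersecting y = x^2 with y = x + 1 on [1, 2] converges to the golden ratio.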
Spline: bezier clipping.
Polynomial: Viète's formulas (to get the zeroes, I think).
Line: line-line.
Not a trivial question (or solution) under any circumstance.

How can I transform a polynomial to another coordinate system?

Using assorted matrix math, I've solved a system of equations, resulting in coefficients for a polynomial of degree 'n':
Ax^(n-1) + Bx^(n-2) + ... + Z
I then evaluate the polynomial over a given x range; essentially, I'm rendering the polynomial curve. Now here's the catch: I've done this work in one coordinate system, which we'll call "data space". Now I need to present the same curve in another coordinate space. It is easy to transform input/output to and from the coordinate spaces, but the end user is only interested in the coefficients [A, B, ..., Z], since they can reconstruct the polynomial on their own. How can I present a second set of coefficients [A', B', ..., Z'] that represents the same shaped curve in a different coordinate system?
If it helps, I'm working in 2D space, plain old x's and y's. I also feel like this may involve multiplying the coefficients by a transformation matrix. Would it somehow incorporate the scale/translation factor between the coordinate systems? Would it be the inverse of this matrix? I feel like I'm headed in the right direction...
Update: the coordinate systems are linearly related. Would have been useful info, eh?
The problem statement is slightly unclear, so first I will clarify my own interpretation of it:
You have a polynomial function
f(x) = C_n x^n + C_{n-1} x^(n-1) + ... + C_0
[I changed A, B, ..., Z into C_n, C_{n-1}, ..., C_0 to more easily work with the linear algebra below.]
Then you also have a transformation such as z = ax + b that you want to use to find coefficients for the same polynomial, but in terms of z:
f(z) = D_n z^n + D_{n-1} z^(n-1) + ... + D_0
This can be done pretty easily with some linear algebra. In particular, you can define an (n+1)×(n+1) matrix T which allows us to do the matrix multiplication
d = T * c,
where d is a column vector with top entry D_0 and last entry D_n, the column vector c is similar for the C_i coefficients, and the matrix T has (i,j)-th [ith row, jth column] entry t_{i,j} given by
t_{i,j} = (j choose i) * a^i * b^(j-i),
where (j choose i) is the binomial coefficient, which is 0 when i > j. Also, unlike standard matrices, i and j each range from 0 to n (usually you start at 1).
This is basically a nice way to write out the expansion and re-compression of the polynomial when you plug in z=ax+b by hand and use the binomial theorem.
If I understand your question correctly, there is no guarantee that the function will remain polynomial after you change coordinates. For example, let y=x^2, and the new coordinate system x'=y, y'=x. Now the equation becomes y' = sqrt(x'), which isn't polynomial.
Tyler's answer is the right one if you have to compute this change of variable z = ax + b many times (I mean for many different polynomials). On the other hand, if you have to do it just once, it is much faster to combine the computation of the matrix coefficients with the final evaluation. The best way to do this is to symbolically evaluate your polynomial at the point (ax + b) by Horner's method:
you store the polynomial coefficients in a vector V (at the beginning, all coefficients are zero), and for i = n down to 0, you multiply it by (ax + b) and add C_i;
adding C_i means adding it to the constant term;
multiplying by (ax + b) means multiplying all coefficients by b into a vector K1, multiplying all coefficients by a and shifting them one place away from the constant term into a vector K2, and putting K1 + K2 back into V.
This will be easier to program and faster to compute.
Note that changing y into w = cy+d is really easy. Finally, as mattiast points out, a general change of coordinates will not give you a polynomial.
Technical note: if you still want to compute the matrix T (as defined by Tyler), you should compute it using a weighted version of Pascal's rule (this is what the Horner computation does implicitly):
t_{i,j} = b * t_{i,j-1} + a * t_{i-1,j-1}
This way, you compute it simply, column after column, from left to right.
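The symbolic Horner procedure translates to just a few lines; this sketch (coefficients stored highest degree first, function name mine) returns the coefficients of p(a*x + b):

```python
def substitute_linear(coeffs, a, b):
    """Coefficients of p(a*x + b), given the coefficients of p (highest
    degree first), via symbolic Horner: V <- V*(a*x + b) + c_i."""
    v = []                                  # coefficients of the running polynomial
    for c in coeffs:
        shifted = [a * t for t in v] + [0]  # a*x times V (degrees shifted up)
        scaled = [0] + [b * t for t in v]   # b times V
        v = [s + t for s, t in zip(shifted, scaled)]
        v[-1] += c                          # add c_i to the constant term
    return v
```

For example, substituting x + 1 into x^2 yields x^2 + 2x + 1.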
You have the equation:
y = Ax^(n-1) + Bx^(n-2) + ... + Z
in xy space, and you want it in some x'y' space. What you need are transformation functions f(x) = x' and g(y) = y' (or h(x') = x and j(y') = y). In the first case, solve for x and for y, then substitute those results into your original equation and solve for y'.
Whether or not this is trivial depends on the complexity of the functions used to transform from one space to another. For example, equations such as
5x = x' and 10y = y'
are extremely easy to solve, giving the result
y' = (10/5^(n-1))*A*x'^(n-1) + (10/5^(n-2))*B*x'^(n-2) + ... + 10Z
If the input spaces are linearly related, then yes, a matrix should be able to transform one set of coefficients to another. For example, if you had your polynomial in your "original" x-space:
ax^3 + bx^2 + cx + d
and you wanted to transform into a different w-space where w = px+q
then you want to find a', b', c', and d' such that
ax^3 + bx^2 + cx + d = a'w^3 + b'w^2 + c'w + d'
and with some algebra,
a'w^3 + b'w^2 + c'w + d' = a'p^3x^3 + 3a'p^2qx^2 + 3a'pq^2x + a'q^3 + b'p^2x^2 + 2b'pqx + b'q^2 + c'px + c'q + d'
therefore
a = a'p^3
b = 3a'p^2q + b'p^2
c = 3a'pq^2 + 2b'pq + c'p
d = a'q^3 + b'q^2 + c'q + d'
which can be rewritten as a matrix problem and solved.
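Since the system is triangular, it can also be solved for the primed coefficients by simple back-substitution; a sketch for the cubic case (function name mine):

```python
def primed_coeffs(a, b, c, d, p, q):
    """Solve the triangular system for (a', b', c', d'), where
    a*x^3 + b*x^2 + c*x + d = a'*w^3 + b'*w^2 + c'*w + d' and w = p*x + q."""
    ap = a / p**3
    bp = (b - 3 * ap * p * p * q) / (p * p)
    cp = (c - 3 * ap * p * q * q - 2 * bp * p * q) / p
    dp = d - ap * q**3 - bp * q * q - cp * q
    return ap, bp, cp, dp
```

As a check: with w = x + 1 (p = 1, q = 1), the polynomial x^3 + 3x^2 + 3x + 1 should come back as simply w^3.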
