Avoiding sin() calls in audio synth - algorithm

A naive sine wave generator steps through a set of n angle values and calls the sin function on each of them:
for i = 0; i < 2*pi; i = i + step {
    output = append(output, sin(i))
}
However, this makes a lot of calls to a potentially expensive sin function, and it ignores the facts that the samples are sequential, that each one follows from a sample already calculated, and that the output will be rounded to integer PCM samples anyway. So, what alternatives are there?
I'm imagining something along the lines of Bresenham's circle algorithm or pre-computing a high-res sample, and then downsizing by taking every n'th entry, but if there's an 'industrial strength' solution to this problem, I'd love to hear it.

You can compute cos(theta) and sin(theta) just once, where theta is the angle increment between successive samples, and then generate the whole sequence by rotation: start from the vector (1, 0) = (cos 0, sin 0), and at each step multiply the current vector (cos k*theta, sin k*theta) by the 2x2 rotation matrix through angle theta to get (cos (k+1)*theta, sin (k+1)*theta). The next sin value is the y-coordinate of the result. Each update is a 2x2 matrix-vector multiplication (four multiplications and two additions), which is much faster than computing sin() using the power series expansion.
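For illustration, here is a minimal Python sketch of that recurrence (sine_wave and num_samples are illustrative names, not from the question):

import math

def sine_wave(num_samples):
    """One cycle of sine values via a rotation recurrence: a single
    cos/sin call up front, then one 2x2 rotation per sample."""
    theta = 2 * math.pi / num_samples          # per-sample angle step
    c, s = math.cos(theta), math.sin(theta)    # rotation matrix entries
    x, y = 1.0, 0.0                            # start at (cos 0, sin 0)
    out = []
    for _ in range(num_samples):
        out.append(y)                          # sin of the current angle
        x, y = x * c - y * s, x * s + y * c    # rotate by theta
    return out

Note that rounding error accumulates slowly in any such recurrence, so a long-running oscillator would typically renormalize (x, y) back to unit length every few thousand samples.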

Related

Algorithm for plotting polar equations in general form

I'm looking for an algorithm which can decide whether a given point (x, y) satisfies some equation written in polar form, like r - φ = 0. The main problem is that the angle φ is bounded to [0, 2π), so in this example I only get one cycle of the spiral. How can I get all the possible solutions for any polar equation written in this form?
I tried bounding the r value to the [0, 2π) range, which didn't work on more complicated examples like logarithmic spirals.
You can use the following transformation equations:
r = √(x² + y²)
φ = arctan(y, x) + 2kπ
where arctan is the four-quadrant arctangent (atan2) and k is any integer.
In the case of your Archimedean spiral r = φ, check that
(√(x² + y²) - arctan(y, x)) / 2π
is an integer.
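As a concrete illustration, a small Python sketch of that test (the function name and the tolerance tol are mine, not from the answer):

import math

def on_archimedean_spiral(x, y, tol=1e-9):
    """True if (x, y) lies on some branch of the spiral r = phi."""
    r = math.hypot(x, y)                        # r = sqrt(x^2 + y^2)
    k = (r - math.atan2(y, x)) / (2 * math.pi)  # candidate branch index
    return abs(k - round(k)) < tol              # integer k => on the spiral

print(on_archimedean_spiral(2 * math.pi, 0.0))  # True: second turn of the spiral
print(on_archimedean_spiral(1.0, 1.0))          # False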

Computer representation of quaternions that's exact for 90-degree rotations?

Unit quaternions have several advantages over 3x3 orthogonal matrices
for representing 3d rotations on a computer.
However, one thing that has been disappointing me about the unit quaternion
representation is that axis-aligned 90 degree rotations
aren't exactly representable. For example, a 90-degree rotation around the z axis, taking the +x axis to the +y axis, is represented as [w=sqrt(1/2), x=0, y=0, z=sqrt(1/2)].
Surprising/unpleasant consequences include:
applying a floating-point-quaternion-represented axis-aligned 90 degree rotation to a vector v often doesn't rotate v by exactly 90 degrees;
applying such a rotation to a vector v four times often doesn't yield exactly v;
squaring a floating-point quaternion representing a 90 degree rotation around a coordinate axis doesn't exactly yield the (exactly representable) 180 degree rotation around that axis, and raising it to the eighth power doesn't yield the identity quaternion.
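For instance, a quick Python check of the squaring claim, written out component-wise (results shown for typical IEEE-754 doubles):

import math

# q = sqrt(1/2) + sqrt(1/2) k, the 90 degree rotation about z.
w = z = math.sqrt(0.5)
# Its square (w + z k)^2 = (w*w - z*z) + (2*w*z) k should be exactly k,
# the 180 degree rotation, but the k component comes out slightly off 1.0:
print(w * w - z * z, 2 * w * z)   # 0.0 1.0000000000000002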
Because of this unfortunate lossiness of the quaternion representation on "nice" rotations,
I still sometimes choose 3x3 matrices for applications in which I'd like axis-aligned
90 degree rotations, and compositions of them,
to be exact and floating-point-roundoff-error-free.
But the matrix representation isn't ideal either,
since it loses the sometimes-needed double covering property
(i.e. quaternions distinguish between the identity and a 360-degree rotation,
but 3x3 rotation matrices don't) as well as other familiar desirable numerical properties
of the quaternion representation, such as lack of need for re-orthogonalization.
My question: is there a computer representation of unit quaternions that does not suffer this
imprecision, and also doesn't lose the double covering property?
One solution I can think of is to represent each of the 4 elements of the quaternion
as a pair of machine-representable floating-point numbers [a,b], meaning a + b √2.
So the representation of a quaternion would consist of eight floating-point numbers.
I think that works, but it seems rather heavyweight;
e.g. when computing the product of a long sequence of quaternions,
each multiplication in the simple quaternion calculation would turn into
4 floating-point multiplications and 2 floating-point additions,
and each addition would turn into 2 floating-point additions. From the point of view of trying to write a general-purpose library implementation, all that extra computation and storage seems pointless as soon as there's a factor that's not one of these "nice" rotations.
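To make the cost claim concrete, here is a hedged Python sketch of the a + b √2 coefficient arithmetic (Root2Num is an illustrative name; a full quaternion would hold four of these):

from dataclasses import dataclass

@dataclass(frozen=True)
class Root2Num:
    """A number a + b*sqrt(2), with a and b stored as floats."""
    a: float
    b: float

    def __add__(self, other):
        # component-wise: 2 floating-point additions
        return Root2Num(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a + b*r2)(c + d*r2) = (ac + 2bd) + (ad + bc)*r2:
        # 4 multiplications and 2 additions (counting 2*bd as a doubling)
        return Root2Num(self.a * other.a + 2 * self.b * other.b,
                        self.a * other.b + self.b * other.a)

# sqrt(1/2) = 0 + (1/2)*sqrt(2), so it is exactly representable:
half_root2 = Root2Num(0.0, 0.5)
print(half_root2 * half_root2)   # Root2Num(a=0.5, b=0.0), i.e. exactly 1/2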
Another possible solution would be to represent each quaternion q = w + xi + yj + zk
as a 4-tuple [sign(w)·w², sign(x)·x², sign(y)·y², sign(z)·z²].
This representation is concise and has the desired non-lossiness for the subgroup of
interest, but I don't know how to multiply together two quaternions in this representation.
Yet another possible approach would be to store the quaternion q² instead of the usual q. This seems promising at first, but, again, I don't know how to non-lossily multiply two of these representations together on the computer, and furthermore the double-cover property is evidently lost.
You probably want to check the paper "Algorithms for manipulating quaternions in floating-point arithmetic", published in 2020 and available online:
https://hal.archives-ouvertes.fr/hal-02470766/file/quaternions.pdf
It shows how to perform exact computations and avoid unbounded numerical errors.
EDIT:
You can get rid of the square roots by using an unnormalized (i.e., non-unit) quaternion. Let me explain the idea.
Given two unit 3D vectors X and Y, represented as pure quaternions, a quaternion Q rotating X to Y is
Q = (X + Y) * X / |(X + Y) * X|
(the overall sign of Q is irrelevant, since Q and -Q represent the same rotation).
The denominator, which takes the norm, is the problem, since it involves a square root.
You can see this by expanding the expression, using X² = -1 for a unit pure quaternion; up to that irrelevant overall sign,
Q = (1 + X • Y + X × Y) / sqrt(2 + 2*(X • Y))
Substituting X = i and Y = j for the 90 degree rotation gives:
Q = (1 + k) / sqrt(2)
So |1 + k| = sqrt(2). But we can actually use the unnormalized quaternion Q = 1 + k to perform rotations; all we need to do is normalize the rotated result by the SQUARED norm of the quaternion.
For example, the squared norm of Q = 1 + k is |1 + k|² = 2 (and that is exact, since you never took a square root). Let's apply the unnormalized quaternion to the vector X = i:
(1 + k) i (1 - k)
= i + k i - i k - k i k
= i + j + j - i        (using k i = j, i k = -j, k i k = i)
= 2 j
To get the correct result, we divide by the squared norm: 2j / 2 = j.
I haven't tested it, but I believe you would get exact results by applying unnormalized quaternions to your vectors and then dividing the result by the squared norm.
The algorithm would be:
Create the unnormalized quaternion Q = (X + Y) * X
Apply Q to your vectors as v' = Q * v * ~Q
Normalize by the squared norm: v'' = v' / |Q|²
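Here is a minimal Python sketch of those three steps on integer coordinates (qmul, qconj, and rotate are illustrative helpers; Fraction just keeps the final division by |Q|² exact):

from fractions import Fraction

def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def qconj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def rotate(q, v):
    """Rotate v by the unnormalized quaternion q, dividing by |q|^2."""
    n2 = sum(c * c for c in q)                  # squared norm: no sqrt
    _, x, y, z = qmul(qmul(q, (0, *v)), qconj(q))
    return (Fraction(x, n2), Fraction(y, n2), Fraction(z, n2))

Q = (1, 0, 0, 1)                 # the unnormalized 1 + k from above
print(rotate(Q, (1, 0, 0)))      # exactly (0, 1, 0): +x maps to +y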

Strassen matrix multiplication to store linear equations

One of the questions I've come across in my textbook is:
In Computer Graphics transformations are applied on many vertices on the screen. Translation, Rotations
and Scaling.
Assume you’re operating on a vertex with 3 values (X, Y, 1). X, Y being the X Y coordinates and 1 is always
constant
A Translation is done on X as X = X + X’ and on Y as Y = Y + Y’
X’ and Y’ being the values to translate by
A scaling is done on X as X = aX and on Y as Y = bY
a and b being the scaling factors
Propose the best way to store these linear equations and an optimal way to calculate them on each vertex
It is hinted that this involves matrix multiplication and Strassen's algorithm. However, I'm not sure where to start. The question says it doesn't involve complex code and that I should come up with something simple to showcase my idea, but all the Strassen implementations I've come across are definitely large enough to call complex. What should my thought process be here?
Would my matrix look like this? 3x3 for each equation or do I combine them all in one?
[ X X X']
[ Y Y Y']
[ 1 1 1 ]
What you're trying to find is a transformation matrix, which you can then use to transform some current (x, y) point into the next (nx, ny) point. In other words, we want
start = Vec([x, y, 1])
matrix = Matrix(...)
next = matrix * start // * is matrix multiplication
Now, if next is supposed to look something like Vec([a * x + x', b * y + y', 1]), we can work our way backwards to figure out the matrix. First, look at just the x component. We're going to effectively take the dot product of the topmost row of our matrix with our start vector, yielding a * x + x'.
If we write it out more explicitly, we want a * x + 0 * y + x' * 1. Hopefully that makes it easier to see that the row we want to dot start with is Vec([a, 0, x']). We can repeat this for the remaining two rows of the matrix, and obtain the following matrix:
matrix = Matrix(
[[a, 0, x'],
[0, b, y'],
[0, 0, 1]])
Double check that this makes sense and seems reasonable to you. If we multiply this matrix with our start vector, we'll get the scaled-and-translated vector next as Vec([a * x + x', b * y + y', 1]).
Now for the real beauty of this: the matrix itself doesn't care at all about what our inputs are; it's completely independent of them. So we can apply this matrix over and over again to step forward through more scalings and translations.
next_next_next = matrix * matrix * matrix * start
Knowing this, we can actually compute many steps ahead really quickly, using some mathematical tricks. Multiplying by the matrix n times is the same as multiplying by the matrix raised to the nth power. And fortunately, we have an efficient method for computing a matrix power: exponentiation by squaring (it applies to regular numbers as well, but here we're concerned with matrices, and the logic still applies). In a nutshell, rather than multiplying by the matrix over and over n times, we repeatedly square it and multiply intermediate results by the original matrix at the right times, reaching the desired power in log(n) multiplications.
This is almost certainly what your professor is wanting you to realize. You can simulate n translations / scalings / rotations (yes, there are rotation matrices as well) in log(n) time.
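A short Python sketch of exponentiation by squaring applied to the matrix above (mat_pow is an illustrative name, and a, b, tx, ty stand in for a, b, x', y' with made-up values):

import numpy as np

def mat_pow(m, n):
    """Raise a square matrix to the nth power in O(log n) multiplications."""
    result = np.eye(m.shape[0])
    while n > 0:
        if n & 1:                 # fold in the current squared factor
            result = result @ m
        m = m @ m                 # square
        n >>= 1
    return result

a, b, tx, ty = 2.0, 3.0, 1.0, -1.0
M = np.array([[a, 0, tx],
              [0, b, ty],
              [0, 0, 1.0]])
start = np.array([5.0, 7.0, 1.0])
print(mat_pow(M, 4) @ start)      # four scale+translate steps at once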
Extra Mile
What's even cooler is that using some more advanced linear algebra, you can do it even faster when your matrix is diagonalizable. You can diagonalize the matrix, meaning you rewrite it as P * D * P^-1: the product of some matrix P, a matrix D whose only non-zero elements lie along the main diagonal, and the inverse of P. You can then raise this diagonalized matrix to a power really quickly, because (P * D * P^-1) * (P * D * P^-1) simplifies to P * D * D * P^-1, and this generalizes to:
M^N = (P * D * P^-1)^N = (P * D^N * P^-1)
Since D only has non-zero elements along its diagonal, you can raise it to any power by raising each diagonal element to that power individually: one scalar exponentiation per element, across as many elements as the matrix is wide. This is stupidly fast; then you just do one matrix multiplication on either side to arrive at M^N, and finally multiply your start vector by the result.
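And a quick NumPy check of the diagonalization identity (the 2x2 matrix is an arbitrary diagonalizable example, not one of the transform matrices above):

import numpy as np

M = np.array([[2.0, 0.0],
              [1.0, 3.0]])                 # eigenvalues 2 and 3
w, P = np.linalg.eig(M)                    # M = P @ diag(w) @ P^-1
N = 10
M_N = P @ np.diag(w ** N) @ np.linalg.inv(P)   # D^N is elementwise
print(np.allclose(M_N, np.linalg.matrix_power(M, N)))  # True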

Compute cosines and sines of a sequence of angles

I need to write a program which computes the cosines and sines of a sequence of angles k*α, where k is a growing natural number (i.e., 0, 1, 2, ...) and α is a constant angle lying between 0 and π. I would like to make this program as fast as possible.
Hence, I want to first compute the cosine of each angle, and then derive the related sine as sqrt(1 - cos(k*α)²). The problem is the sign of the sine, which should be determined by the position of the angle k*α on the real line.
I would like to know how to implement this sign determination as fast as possible, or whether the fastest approach is to just compute the sine directly, too.
After some time, I thought again about this problem and I found a really simple solution:
n = (int) floor(k*alpha / pi);
if (n % 2 == 0)
    sin_alpha = +sqrt(1 - pow(cos(k*alpha), 2));
else
    sin_alpha = -sqrt(1 - pow(cos(k*alpha), 2));
Problem solved. :)
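For reference, the same parity trick as a small Python sketch (sin_from_cos is an illustrative name; the max(0.0, ...) clamp guards against 1 - c*c going slightly negative through rounding):

import math

def sin_from_cos(k, alpha):
    """sin(k*alpha) recovered from cos(k*alpha): the sign flips on every
    half-turn, i.e. with the parity of floor(k*alpha / pi)."""
    c = math.cos(k * alpha)
    s = math.sqrt(max(0.0, 1.0 - c * c))
    return s if math.floor(k * alpha / math.pi) % 2 == 0 else -s

print(sin_from_cos(3, 2.0))   # approx -0.2794, matching math.sin(6.0)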

Fast Calculation of Pairwise Cosine Directional Distance Between Points in a (n x d x t) matrix

I am aware of pdist(X, distance) in Matlab, which takes an (n x d) matrix of points and calculates the pairwise distances between them. I am also aware that it has an option to calculate the cosine distance if the matrix contains vectors rather than points.
What I would like to do is take an (n x d x t) matrix, which holds the varying positions of n samples over time t, and efficiently calculate the cosine directional distance between all pairs across all frames, where the direction v(t) is defined as p(t+1) - p(t), and p(t) refers to the row M(p,:,t).
Obviously I don't want to use loops if it can be helped. Any suggestions?
Any help much appreciated.
Recall that, for unit vectors, the cosine distance is half the squared Euclidean distance: |u - v|² = 2 - 2*cos(θ). Normalizing the vectors once up front therefore saves recomputing the norms over and over inside the cosine distance function.
It sounds like you want the distances between the vector difference with each change in time. Is that correct?
data = diff(data,1,3);                          %# direction vectors p(t+1) - p(t)
[m,n,nt] = size(data);
data = permute(data,[1 3 2]);                   %# bring time alongside samples
data = reshape(data,m*nt,n);                    %# one direction vector per row
data = data./repmat(sqrt(sum(data.^2,2)),1,n);  %# normalize each row
d = pdist(data);                                %# Euclidean distances between unit vectors
d = (d.^2)/2;                                   %# convert to cosine distance 1 - cos(theta)
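If you ever need the same pipeline outside Matlab, here is a hedged NumPy/SciPy sketch (pairwise_cosine_directions is an illustrative name; pdist here is scipy.spatial.distance.pdist):

import numpy as np
from scipy.spatial.distance import pdist

def pairwise_cosine_directions(positions):
    """positions: (n, d, t) array of n sample positions in d dimensions
    over t frames. Returns condensed pairwise cosine distances between
    all direction vectors p(t+1) - p(t), pooled over samples and frames."""
    n, d, t = positions.shape
    v = np.diff(positions, axis=2)               # (n, d, t-1) directions
    v = v.transpose(0, 2, 1).reshape(-1, d)      # one direction per row
    v = v / np.linalg.norm(v, axis=1, keepdims=True)  # normalize rows
    dist = pdist(v)                              # Euclidean between unit vectors
    return dist ** 2 / 2                         # = 1 - cos(theta)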
