animating multiple body trajectories in octave - animation

I know the hold on; command in Octave allows me to plot multiple trajectories in the same figure. However, I recently came across the function 'comet', which animates the state of a system over a user-defined time range. I have only used it successfully in a simple script that shows the trajectory of a small body around a fixed massive body. How can I use 'comet' to animate the trajectories of two bodies over the same time range?
PS: If you need an example of how 'comet' works, here is the simple code I mentioned above:
function xdot = f(x,t)
  % State x = [x, y, vx, vy]; the acceleration points from the body towards the fixed mass at the origin.
  G = 1.37;
  M = 10^5;
  [T,r] = cart2pol(x(1),x(2));
  xdot(3) = -((G*M)/((x(1)^2) + (x(2)^2)))*cos(T);
  xdot(4) = -((G*M)/((x(1)^2) + (x(2)^2)))*sin(T);
  xdot(1) = x(3);
  xdot(2) = x(4);
endfunction
X = lsode ("f", [1000,0,5,10],(t = linspace(0,1000,2000)'));
comet(X(:,1),X(:,2),0.01);
This basically plots the trajectory over time. You can copy-paste it into Octave and watch the animation.
Can anyone tell me how I can do the same for a two-body or multiple-body system?

You can't really use comet in that way. You'll have to do the 'animation' manually, but it's not hard. Plus, you get better customisability. Here's one approach.
X1 = lsode ("f", [1000, 0, 5, 10], (t = linspace(0,1000,2000)'));
X2 = lsode ("f", [500, 0, 4, 5 ], (t = linspace(0,1000,2000)'));
x_low = min ([X1(:, 1); X2(:, 1)]); x_high = max ([X1(:, 1); X2(:, 1)]);
y_low = min ([X1(:, 2); X2(:, 2)]); y_high = max ([X1(:, 2); X2(:, 2)]);
for n = 1 : size (X1, 1)
  plot (X1(1:n, 1), X1(1:n, 2), ':', 'color', [0, 0.5, 1], 'linewidth', 2);
  hold on;
  plot (X1(n, 1), X1(n, 2), 'o', 'markerfacecolor', 'g', 'markeredgecolor', 'k', 'markersize', 10);
  plot (X2(1:n, 1), X2(1:n, 2), ':', 'color', [1, 0.5, 0], 'linewidth', 2);
  plot (X2(n, 1), X2(n, 2), 'o', 'markerfacecolor', 'm', 'markeredgecolor', 'k', 'markersize', 10);
  hold off;
  axis ([x_low, x_high, y_low, y_high]); % needed, otherwise the first few frames
                                         % would use automatic axis limits
  drawnow; pause(0.01);
end
This is the most straightforward way, but the effective frame rate may be slower than the 0.01 s pause suggests, since each frame also takes time to draw; you can make it faster by creating the plot objects once and only updating their data at each step instead (a sketch of that follows below).
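Here is a minimal sketch of that faster variant, using the same X1/X2 data and axis limits as above (an untested outline, though set() with 'xdata'/'ydata' is standard Octave handle-graphics usage):
% Create the plot objects once, then only update their data inside the loop.
trail1 = plot (X1(1,1), X1(1,2), ':', 'color', [0, 0.5, 1], 'linewidth', 2); hold on;
body1  = plot (X1(1,1), X1(1,2), 'o', 'markerfacecolor', 'g', 'markeredgecolor', 'k', 'markersize', 10);
trail2 = plot (X2(1,1), X2(1,2), ':', 'color', [1, 0.5, 0], 'linewidth', 2);
body2  = plot (X2(1,1), X2(1,2), 'o', 'markerfacecolor', 'm', 'markeredgecolor', 'k', 'markersize', 10);
hold off;
axis ([x_low, x_high, y_low, y_high]);
for n = 1 : size (X1, 1)
  set (trail1, 'xdata', X1(1:n,1), 'ydata', X1(1:n,2));
  set (body1,  'xdata', X1(n,1),   'ydata', X1(n,2));
  set (trail2, 'xdata', X2(1:n,1), 'ydata', X2(1:n,2));
  set (body2,  'xdata', X2(n,1),   'ydata', X2(n,2));
  drawnow; pause (0.01);
end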
Also, this 'animation' is only for visualising inside an Octave session. If you want to produce a video file from it instead, you'll have to save the frames as images and convert them to a movie / gif format, for instance as sketched below.
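One possible route for that (just a sketch, and it assumes ffmpeg is installed on your system) is to print each frame to a numbered PNG inside the loop and stitch the frames together afterwards:
% inside the animation loop, right after drawnow:
print (sprintf ('frame_%04d.png', n));
% after the loop, assemble the frames into a video:
system ('ffmpeg -framerate 25 -i frame_%04d.png -pix_fmt yuv420p orbit.mp4');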

Related

Faster approach for decomposing a rotation to rotations around arbitrary orthogonal axes

I have a rotation and I want to decompose it into a series of rotations around 3 orthogonal arbitrary axes. It's a bit like a generalisation of Euler decomposition where the rotations are not around the X, Y and Z axes.
I've tried to find a closed-form solution but haven't been successful, so I have produced a numerical solution based on minimising the difference between the rotation I want and the product of 3 quaternions representing the rotations around the 3 axes, with the 3 angles being the unknowns. 'SimplexMinimize' is just an abstraction of the code that finds the 3 angles minimising the error.
double GSUtil::ThreeAxisDecomposition(const Quaternion &target, const Vector &ax1, const Vector &ax2, const Vector &ax3, double *ang1, double *ang2, double *ang3)
{
    DataContainer data = {target, ax1, ax2, ax3};
    VariablesContainer variables = {ang1, ang2, ang3};
    double error = SimplexMinimize(ThreeAxisDecompositionError, data, variables);
    return error;
}
double GSUtil::ThreeAxisDecompositionError(const Quaternion &target, const Vector &ax1, const Vector &ax2, const Vector &ax3, double ang1, double ang2, double ang3)
{
    Quaternion product = MakeQFromAxisAngle(ax3, ang3) * MakeQFromAxisAngle(ax2, ang2) * MakeQFromAxisAngle(ax1, ang1);
    // now we need a distance metric between product and target. I could just calculate the angle between them:
    // theta = acos(2*<q1,q2>^2 - 1), where <q1,q2> is the inner product (n1*n2 + x1*x2 + y1*y2 + z1*z2),
    // but there are other quantities that will do a similar job in less time.
    // 1 - <q1,q2>^2 should be faster to calculate and is 0 when they are identical and 1 when they are 180 degrees apart.
    double innerProduct = target.n * product.n + target.v.x * product.v.x + target.v.y * product.v.y + target.v.z * product.v.z;
    double error = 1 - innerProduct * innerProduct;
    return error;
}
It works (I think) but obviously it is quite slow. My feeling is there ought to be a closed-form solution. At the very least there ought to be a gradient of the function so I can use a faster optimiser.
There is indeed a closed-form solution. Since the axes form an orthonormal basis A (each axis is a column of the matrix), you can decompose a rotation R on the three axes by transforming R into the basis A and then doing an Euler-angle decomposition around the three main axes:
R = A*R'*A^t = A*X*Y*Z*A^t = (A*X*A^t)*(A*Y*A^t)*(A*Z*A^t)
This translates into the following algorithm:
Compute R' = A^t*R*A
Decompose R' into Euler Angles around main axes to obtain matrices X, Y, Z
Compute the three rotations around the given axes:
X' = A*X*A^t
Y' = A*Y*A^t
Z' = A*Z*A^t
As a reference, here's the Mathematica code I used to test my answer
(*Generate random axes and a rotation matrix for testing purposes*)
a = RotationMatrix[RandomReal[{0, \[Pi]}],
Normalize[RandomReal[{-1, 1}, 3]]];
t1 = RandomReal[{0, \[Pi]}];
t2 = RandomReal[{0, \[Pi]}];
t3 = RandomReal[{0, \[Pi]}];
r = RotationMatrix[t1, a[[All, 1]]].
    RotationMatrix[t2, a[[All, 2]]].
    RotationMatrix[t3, a[[All, 3]]];
(*Decompose rotation matrix 'r' into the axes of 'a'*)
rp = Transpose[a].r.a;
{a1, a2, a3} = EulerAngles[rp, {1, 2, 3}];
xp = a.RotationMatrix[a1, {1, 0, 0}].Transpose[a];
yp = a.RotationMatrix[a2, {0, 1, 0}].Transpose[a];
zp = a.RotationMatrix[a3, {0, 0, 1}].Transpose[a];
(*Test that the generated matrix is equal to 'r' (should give 0)*)
xp.yp.zp - r // MatrixForm
(*Test that the individual rotations preserve the axes (should give 0)*)
xp.a[[All, 1]] - a[[All, 1]]
yp.a[[All, 2]] - a[[All, 2]]
zp.a[[All, 3]] - a[[All, 3]]
I was doing the same thing in Python and found @Gilles-PhilippePaillé's answer really helpful, although I had to tweak a couple of things, mostly getting the Euler angles out in reverse. Thought I would add my Python version here for reference anyway in case it helps anyone.
import numpy as np
from numpy.linalg import norm
from scipy.spatial.transform import Rotation

def normalise(v: np.ndarray) -> np.ndarray:
    """Normalise an array along its final dimension."""
    return v / norm(v, axis=-1, keepdims=True)

# Generate random basis
A = Rotation.from_rotvec(normalise(np.random.random(3)) * np.random.rand() * np.pi).as_matrix()
# Generate random rotation matrix
t0 = np.random.rand() * np.pi
t1 = np.random.rand() * np.pi
t2 = np.random.rand() * np.pi
R = Rotation.from_rotvec(A[:, 0] * t0) * Rotation.from_rotvec(A[:, 1] * t1) * Rotation.from_rotvec(A[:, 2] * t2)
R = R.as_matrix()
# Decompose rotation matrix R into the axes of A
rp = Rotation.from_matrix(A.T @ R @ A)
a3, a2, a1 = rp.as_euler('zyx')
xp = A @ Rotation.from_rotvec(a1 * np.array([1, 0, 0])).as_matrix() @ A.T
yp = A @ Rotation.from_rotvec(a2 * np.array([0, 1, 0])).as_matrix() @ A.T
zp = A @ Rotation.from_rotvec(a3 * np.array([0, 0, 1])).as_matrix() @ A.T
# Test that the reconstructed matrix is equal to 'R' (the assert should pass)
assert np.allclose(xp @ yp @ zp, R)
# Test that the individual rotations preserve the axes (the asserts should pass)
assert np.allclose(xp @ A[:, 0], A[:, 0])
assert np.allclose(yp @ A[:, 1], A[:, 1])
assert np.allclose(zp @ A[:, 2], A[:, 2])

cv2 pose estimation using homography matrix

I am trying to calculate the pose of image Y, given image X. Image Y is the same as image X rotated 90 degrees.
1 - So, for starters I find the matches between both images.
2 - Then I store all the good matches.
3 - The homography between the matches from both images is calculated using cv2.RANSAC.
4 - Then, for the X image, I transform the 2D matching points into 3D, adding 0 as the Z coordinate.
5 - Object points contain all points from matches of the original image, while image points contain matches from the training image. Both arrays of points are filtered using the mask returned by the homography step.
6 - After that, I use cv2.calibrateCamera with these object points and image points.
7 - Finally I use cv2.projectPoints to get the projections of the axis.
I know that up to step 5 the results are correct, because I use cv2.drawMatches to see the matches. However, this may not be the way to get what I want to achieve.
matches = flann.knnMatch(query_image.descriptors, descriptors, k=2)
good = []
for m, n in matches:
    if m.distance < 0.70 * n.distance:
        good.append(m)
current_good = good
src_pts = np.float32([selected_image.keypoints[m.queryIdx].pt for m in current_good]).reshape(-1, 1, 2)
dst_pts = np.float32([keypoints[m.trainIdx].pt for m in current_good]).reshape(-1, 1, 2)
homography, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
test = np.zeros(((mask.ravel() > 0).sum(), 3),np.float32) #obj points
test1 = np.zeros(((mask.ravel() > 0).sum(), 2), np.float32) #img points
i=0
counter=0
for m in current_good:
    if mask.ravel()[i] == 1:
        test[counter][0] = selected_image.keypoints[m.queryIdx].pt[0]
        test[counter][1] = selected_image.keypoints[m.queryIdx].pt[1]
        test1[counter][0] = keypoints[m.trainIdx].pt[0]
        test1[counter][1] = keypoints[m.trainIdx].pt[1]
        counter += 1
    i += 1
gray = cv2.cvtColor(self.train_image, cv2.COLOR_BGR2GRAY)
gray = np.float32(gray)
#here start my doubts about what i want to do and if it is possible to do it this way
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera([test], [test1], gray.shape[::-1], None, None)
axis = np.float32([[3, 0, 0], [0, 3, 0], [0, 0, -3]]).reshape(-1, 3)
rvecs = np.array(rvecs, np.float32)
tvecs = np.array(tvecs, np.float32)
imgpts, jac = cv2.projectPoints(axis, rvecs, tvecs, mtx, dist)
However, after all this, imgpts returned by cv2.projectPoints gives results that don't make much sense to me, like:
[[[857.3185 109.317406]]
[[857.2196 108.360954]]
[[857.2846 107.579605]]]
I would like to have a normal to my image like it is shown here https://docs.opencv.org/trunk/d7/d53/tutorial_py_pose.html and I successfully got it to work using the chessboard image. But trying to adapt it to a general image is giving me strange results.

Matlab - Scale down an image using an average of four pixels

I have just started learning image-processing and Matlab and I'm trying to scale down an image using an average of 4 pixels. That means that for every 4 original pixels I calculate the average and produce 1 output pixel.
So far I have the following code:
img = imread('bird.jpg');
row_size = size(img, 1);
col_size = size(img, 2);
res = zeros(floor(row_size/2), floor(col_size/2));
figure, imshow(img);
for i = 1:2:row_size
    for j = 1:2:col_size
        num = mean([img(i, j), img(i, j+1), img(i+1, j), img(i+1, j+1)]);
        res(round(i/2), round(j/2)) = num;
    end
end
figure, imshow(uint8(res));
This code manages to scale down the image but it converts it to grayscale.
I understand that I probably have to calculate the average of the RGB components for the output pixel but I don't know how to access them, calculate the average and insert them to the result matrix.
In Matlab, an RGB image is treated as a 3D array. You can check it with:
depth_size = size(img, 3)
depth_size =
3
The loop solution, as you have done, is explained in Sardar_Usama's answer. However, in Matlab it is recommended to avoid loops whenever you want to gain speed.
This is a vectorized solution to scale down an RGB image by a factor of n:
img = imread('bird.jpg');
n = 2; % n can only be integer
[row_size, col_size] = size(img(:, :, 1));
% getting rid of extra rows and columns that won't be counted in averaging:
I = img(1:n*floor(row_size / n), 1:n*floor(col_size / n), :);
[r, ~] = size(I(:, :, 1));
% separating and re-ordering the three colors of image in a way ...
% that averaging could be done with a single 'mean' command:
R = reshape(permute(reshape(I(:, :, 1), r, n, []), [2, 1, 3]), n*n, [], 1);
G = reshape(permute(reshape(I(:, :, 2), r, n, []), [2, 1, 3]), n*n, [], 1);
B = reshape(permute(reshape(I(:, :, 3), r, n, []), [2, 1, 3]), n*n, [], 1);
% averaging and reshaping the colors back to the image form:
R_avg = reshape(mean(R), r / n, []);
G_avg = reshape(mean(G), r / n, []);
B_avg = reshape(mean(B), r / n, []);
% concatenating the three colors together:
scaled_img = cat(3, R_avg, G_avg, B_avg);
% casting the result to the class of original image
scaled_img = cast(scaled_img, 'like', img);
Benchmarking:
If you want to know why vectorized solutions are more popular, take a look at how long it takes to process an RGB 768 x 1024 image with the two methods:
------------------- With vectorized solution:
Elapsed time is 0.024690 seconds.
------------------- With nested loop solution:
Elapsed time is 6.127933 seconds.
So there is more than two orders of magnitude difference in speed between the two solutions.
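If you want to reproduce the comparison, a rough timing harness could look like the sketch below; scale_vectorized and scale_loop are hypothetical wrappers around the vectorized code above and the loop-based answer further down, not functions that exist anywhere:
img = imread('bird.jpg');
disp('------------------- With vectorized solution:');
tic; out_vec = scale_vectorized(img, 2); toc    % hypothetical wrapper around the reshape/permute code
disp('------------------- With nested loop solution:');
tic; out_loop = scale_loop(img); toc            % hypothetical wrapper around the double for-loop code
% the two results should agree up to rounding:
max(abs(double(out_vec(:)) - double(out_loop(:))))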
Another possible solution is to use the function blockproc, as mentioned at this link. This also avoids for loops; a minimal sketch follows below.
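For instance, a minimal sketch of the blockproc approach (assuming the Image Processing Toolbox is available; treat it as untested):
img = imread('bird.jpg');
% average each non-overlapping 2x2 block; the block function returns one 1x1x3 pixel
fun = @(block_struct) mean(mean(block_struct.data, 1), 2);
scaled_img = blockproc(img, [2 2], fun);
scaled_img = cast(scaled_img, 'like', img);   % back to the class of the original image
Note that imresize(img, 1/n, 'box') should give essentially the same block averaging in a single call.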
You can take care of that using the modified code below:
img = imread('bird.jpg');
row_size = size(img, 1);
col_size = size(img, 2);
figure, imshow(img);
res = zeros(floor(row_size/2), floor(col_size/2), 3); %Pre-allocation
for p = 1:2:row_size
    for q = 1:2:col_size
        num = mean([img(p, q,:), img(p, q+1,:), img(p+1, q,:), img(p+1, q+1,:)]);
        res(round(p/2), round(q/2),:) = num;
    end
end
figure, imshow(uint8(res));
I took a sample image of size 1200x1600x3 (uint8), which is converted to 600x800x3 (uint8) by the above code. This is correct, because (1200*1600)/4 = 480000 and 600*800 = 480000.
P.S.: I changed the variable names i and j to p and q respectively, since i and j are predefined as the imaginary unit in Matlab.

Matlab Image Surface plot across color channels

I am new to image processing and want to plot the color channels using surf, to study the intensities in each color channel, their peaks, and whether they get cut off or saturated at any point. Can someone direct me to where I can learn how to do this?
You can use the surf command directly to do just this. When you pass a 2D array to surf, it uses the values as the height (z) and uses 1:size(data, 2) for the x values and 1:size(data, 1) for the y values.
figure
hax = axes;
hold(hax, 'on');
rsurf = surf(img(:,:,1), 'FaceColor', 'r', 'FaceAlpha', 0.5, 'EdgeColor', 'none');
gsurf = surf(img(:,:,2), 'FaceColor', 'g', 'FaceAlpha', 0.5, 'EdgeColor', 'none');
bsurf = surf(img(:,:,3), 'FaceColor', 'b', 'FaceAlpha', 0.5, 'EdgeColor', 'none');
As an example
img = reshape(parula(16), [4 4 3]);
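To try the same thing on a real photo (a sketch; 'bird.jpg' is just the example file name used elsewhere on this page, and it assumes the Image Processing Toolbox for imresize):
img = im2double(imresize(imread('bird.jpg'), 0.25));   % downscale first, surf gets slow on full-size images
figure;
hax = axes;
hold(hax, 'on');
surf(img(:,:,1), 'FaceColor', 'r', 'FaceAlpha', 0.5, 'EdgeColor', 'none');
surf(img(:,:,2), 'FaceColor', 'g', 'FaceAlpha', 0.5, 'EdgeColor', 'none');
surf(img(:,:,3), 'FaceColor', 'b', 'FaceAlpha', 0.5, 'EdgeColor', 'none');
view(3);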

Why is my performance bad? (Noob scheduling)

I'm mainly a very high level programmer so thinking about things like CPU locality is very new to me.
I'm working on a basic bilinear demosaic (for RGGB sensor data) and I've got the algorithm right (judging by the results) but it's not performing as well as I'd hoped (~210Mpix/s).
Here's my code (the input is a 4640x3472 image with a single channel of RGGB):
def get_bilinear_debayer(input_raw, print_nest=False):
    x, y, c = Var(), Var(), Var()
    # Clamp and move to 32 bit for lots of space for averaging.
    input = Func()
    input[x,y] = cast(
        UInt(32),
        input_raw[
            clamp(x,0,input_raw.width()-1),
            clamp(y,0,input_raw.height()-1)]
    )
    # Interpolate vertically
    vertical = Func()
    vertical[x,y] = (input[x,y-1] + input[x,y+1])/2
    # Interpolate horizontally
    horizontal = Func()
    horizontal[x,y] = (input[x-1,y] + input[x+1,y])/2
    # Interpolate on diagonals
    diagonal_average = Func()
    diagonal_average[x, y] = (
        input[x+1,y-1] +
        input[x+1,y+1] +
        input[x-1,y-1] +
        input[x-1,y+1])/4
    # Interpolate on adjacents
    adjacent_average = Func()
    adjacent_average[x, y] = (horizontal[x,y] + vertical[x,y])/2
    red, green, blue = Func(), Func(), Func()
    # Calculate the red channel
    red[x, y, c] = select(
        # Red photosite
        c == 0, input[x, y],
        # Green photosite
        c == 1, select(x%2 == 0, vertical[x,y],
                       horizontal[x,y]),
        # Blue photosite
        diagonal_average[x,y]
    )
    # Calculate the blue channel
    blue[x, y, c] = select(
        # Blue photosite
        c == 2, input[x, y],
        # Green photosite
        c == 1, select(x%2 == 1, vertical[x,y],
                       horizontal[x,y]),
        # Red photosite
        diagonal_average[x,y]
    )
    # Calculate the green channel
    green[x, y, c] = select(
        # Green photosite
        c == 1, input[x,y],
        # Red/Blue photosite
        adjacent_average[x,y]
    )
    # Switch color interpolator based on requested color.
    # Specify photosite as third argument, calculated as [x, y, z] = (0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 2)
    # Happily works out to a sum of x mod 2 and y mod 2.
    debayer = Func()
    debayer[x, y, c] = select(c == 0, red[x, y, x%2 + y%2],
                              c == 1, green[x, y, x%2 + y%2],
                              blue[x, y, x%2 + y%2])
    # Scheduling
    x_outer, y_outer, x_inner, y_inner, tile_index = Var(), Var(), Var(), Var(), Var()
    bits = input_raw.get().type().bits
    output = Func()
    # Cast back to the original colour space
    output[x,y,c] = cast(UInt(bits), debayer[x,y,c])
    # Reorder so that colours are calculated in order (red runs, then green, then blue)
    output.reorder_storage(c, x, y)
    # Tile in 128x128 squares
    output.tile(x, y, x_outer, y_outer, x_inner, y_inner, 128, 128)
    # Vectorize based on colour
    output.bound(c, 0, 3)
    output.vectorize(c)
    # Fuse and parallelize
    output.fuse(x_outer, y_outer, tile_index)
    output.parallel(tile_index)
    # Debugging
    if print_nest:
        output.print_loop_nest()
        debayer.print_loop_nest()
        red.print_loop_nest()
        green.print_loop_nest()
        blue.print_loop_nest()
    return output
Honestly I have no idea what I'm doing here and I'm too new to this to have any clue where or what to look at.
Any advice on how to improve the scheduling is helpful. I'm still learning but feedback is hard to find.
The schedule I have is the best I've been able to do but it's pretty much entirely trial and error.
EDIT: I added an extra 30Mpix/s by doing the whole adjacent average summation in the function directly and by vectorizing on x_inner instead of colour.
EDIT: New schedule:
# Set input bounds.
output.bound(x, 0, (input_raw.width()/2)*2)
output.bound(y, 0, (input_raw.height()/2)*2)
output.bound(c, 0, 3)
# Reorder so that colours are calculated in order (red runs, then green, then blue)
output.reorder_storage(c, x, y)
output.reorder(c, x, y)
# Tile in 128x128 squares
output.tile(x, y, x_outer, y_outer, x_inner, y_inner, 128, 128)
output.unroll(x_inner, 2).unroll(y_inner,2)
# Vectorize based on colour
output.unroll(c)
output.vectorize(c)
# Fuse and parallelize
output.fuse(x_outer, y_outer, tile_index)
output.parallel(tile_index)
EDIT: Final schedule, which now beats (640Mpix/s) the Intel Performance Primitives benchmark that was run on a CPU twice as powerful as mine:
output = Func()
# Cast back to the original colour space
output[x,y,c] = cast(UInt(bits), debayer[x,y,c])
# Set input bounds.
output.bound(x, 0, (input_raw.width()/2)*2)
output.bound(y, 0, (input_raw.height()/2)*2)
output.bound(c, 0, 3)
# Tile in 128x128 squares
output.tile(x, y, x_outer, y_outer, x_inner, y_inner, 128, 128)
output.unroll(x_inner, 2).unroll(y_inner, 2)
# Vectorize based on colour
output.vectorize(x_inner, 16)
# Fuse and parallelize
output.fuse(x_outer, y_outer, tile_index)
output.parallel(tile_index)
target = Target()
target.arch = X86
target.os = OSX
target.bits = 64
target.set_feature(AVX)
target.set_feature(AVX2)
target.set_feature(SSE41)
output.compile_jit(target)
Make sure that you are using unroll(c) to make the per-channel select logic optimize away. Unrolling by 2 in x and y will also help:
output.unroll(x, 2).unroll(y,2)
The goal there is to optimize out the select logic between even/odd rows and columns. In order to take full advantage of that, you'll likely also need to tell Halide that the min and extent are a multiple of 2:
output.output_buffer().set_bounds(0,
    (output.output_buffer().min(0) / 2) * 2,
    (output.output_buffer().extent(0) / 2) * 2)
output.output_buffer().set_bounds(1,
    (output.output_buffer().min(1) / 2) * 2,
    (output.output_buffer().extent(1) / 2) * 2)
Though it may be worth stating even more stringent constraints, such as using 128 instead of 2 to assert multiples of the tile size or just hardwiring the min and extent to reflect the actual sensor parameters if you are only supporting a single camera.
