Animating a rolling ball [duplicate] - animation

While making a pool game in pygame, I wanted to animate a rolling ball. Here's an example of what I have and what I want it to look like.
Here is my 'solution':
def update_sprite(self):
    sprite_size = np.repeat([self.radius * 2], 2)
    new_sprite = pygame.Surface(sprite_size)
    new_sprite.fill((200, 200, 200))
    new_sprite.set_colorkey((200, 200, 200))
    # this is the actual ball part of the circle
    pygame.draw.circle(new_sprite, self.color, sprite_size / 2, self.radius)
    if self.sprite_visible:
        # sprite_offset is an np array with two coordinates, which contains
        # the displacement of the small circle from the centre of the ball
        pygame.draw.circle(new_sprite, (255, 255, 255),
                           self.sprite_offset.astype(int) + [self.radius, self.radius],
                           self.sprite_size)
        if self.number != 0:
            new_sprite.blit(self.text,
                            (self.radius - self.text_length / 2) + self.sprite_offset.astype(int))
        # used to remove the part of the sprite which is outside the ball
        triag1 = np.array(([0, 0], [self.radius / 2, 0], [0, self.radius / 2]))
        triag2 = np.array(([0, 2 * self.radius], [self.radius / 2, 2 * self.radius], [0, 3 * self.radius / 2]))
        triag3 = np.array(([2 * self.radius, 0], [2 * self.radius, self.radius / 2], [3 * self.radius / 2, 0]))
        triag4 = np.array(([2 * self.radius, 2 * self.radius], [3 * self.radius / 2, 2 * self.radius], [2 * self.radius, 3 * self.radius / 2]))
        pygame.draw.polygon(new_sprite, (200, 200, 200), triag1)
        pygame.draw.polygon(new_sprite, (200, 200, 200), triag2)
        pygame.draw.polygon(new_sprite, (200, 200, 200), triag3)
        pygame.draw.polygon(new_sprite, (200, 200, 200), triag4)
        pygame.draw.circle(new_sprite, (200, 200, 200), sprite_size / 2, self.radius + 6, 6)
    self.image = new_sprite
    self.rect = self.image.get_rect()
    self.top_left = self.pos - self.radius
The line
pygame.draw.circle(new_sprite, (255, 255, 255), self.sprite_offset.astype(int)+[self.radius, self.radius], self.sprite_size)
draws the small circle, and the line after it draws the number. However, if I left it like that, this would happen. So I drew a bigger, hollow circle over the ball to cover the part of the small circle that spills outside. That still leaves small lines around the circle (you can see it here), which is why I needed to draw four triangles around the circle to delete them.
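For comparison, the "keep only the pixels inside the ball" clean-up can be done with a single circular mask instead of four triangles plus a covering ring. A minimal NumPy sketch of the idea, operating on a plain array standing in for the Surface (names and sizes are illustrative):

```python
import numpy as np

# Stand-in for the sprite Surface: a 16x16 RGB array filled with white,
# i.e. everything the small circle and the number might have drawn.
radius = 8
sprite = np.full((2 * radius, 2 * radius, 3), 255, dtype=np.uint8)

# One circular mask replaces the four triangles plus the covering ring:
# every pixel whose centre lies outside the ball is reset to the colorkey colour.
yy, xx = np.mgrid[0:2 * radius, 0:2 * radius]
outside = (xx - radius + 0.5) ** 2 + (yy - radius + 0.5) ** 2 > radius ** 2
sprite[outside] = (200, 200, 200)

print(sprite[0, 0])            # corner: colorkeyed away -> [200 200 200]
print(sprite[radius, radius])  # centre: untouched -> [255 255 255]
```

In pygame the same mask could be applied through `pygame.surfarray.pixels3d(new_sprite)`, or precomputed once as a mask surface and blitted each frame, so the clean-up becomes one operation instead of five draw calls.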
Now, please tell me, is there any other way of doing this?
Link to source code

Related

Halcon - find edge position, draw line and lineintersection

I'm starting from scratch with Halcon, and I'm not able to solve a problem. I have an object, need to extract edges from it, draw a line along the borders, and draw a point on the intersection of the lines.
I've tried thresholding, edge, and color edge, but they extract borders everywhere except the ones I need.
It's just a test I am doing, as it is similar to what I have to do later on a real project, but in two days I didn't manage to solve it.
Here is the base image, and the desired result image:
what I have so far:
open_framegrabber ('GigEVision', 0, 0, 0, 0, 0, 0, 'default', -1, 'default', -1, 'false', 'default', 'S1204667', 0, -1, AcqHandle)
set_framegrabber_param (AcqHandle, 'Gain', 1.0)
set_framegrabber_param (AcqHandle, 'ExposureTime', 20000)
set_framegrabber_param (AcqHandle, 'timerDuration', 1)
set_framegrabber_param (AcqHandle, 'BalanceWhiteAuto', 'Off')
set_framegrabber_param (AcqHandle, 'BalanceRatioSelector', 'Red')
set_framegrabber_param (AcqHandle, 'BalanceRatio', 1.22)
set_framegrabber_param (AcqHandle, 'BalanceRatioSelector', 'Green')
set_framegrabber_param (AcqHandle, 'BalanceRatio', 1.00)
set_framegrabber_param (AcqHandle, 'BalanceRatioSelector', 'Blue')
set_framegrabber_param (AcqHandle, 'BalanceRatio', 1.95)
grab_image (Image, AcqHandle)
threshold (Image, Region, 0, 128)
expand_region (Region, Region, RegionExpanded, 15, 'image')
close_framegrabber (AcqHandle)
Based on the original poster's concern about positional movement, I'm posting another answer which is more involved. This strategy might not be the easiest for this case, but it is a general one that works in a lot of cases. Typically problems like this are solved as follows:
1) Perform a rough location of the part. This usually involves either blob detection or a matching strategy (correlation, shape-based, etc.). The output of this step is a transformation describing the location of the object (translation, orientation).
2) Based on the found location in step 1, the search regions for detecting features (lines, holes, etc.) are transformed or updated to new locations, or the entire image is transformed.
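The transform in step 2 is just a composition of rigid motions, the same translate-rotate-translate chain that the hom_mat2d_* calls build further down. A Python/NumPy sketch of that composition (function and variable names are illustrative, not HALCON API):

```python
import numpy as np

def alignment_mat(ref_row, ref_col, found_row, found_col, angle):
    """Pose matrix: move the model's reference point to the origin,
    rotate by the found angle, then translate to the found position."""
    def translate(tr, tc):
        return np.array([[1.0, 0.0, tr],
                         [0.0, 1.0, tc],
                         [0.0, 0.0, 1.0]])
    c, s = np.cos(angle), np.sin(angle)
    rotate = np.array([[c,  -s,  0.0],
                       [s,   c,  0.0],
                       [0.0, 0.0, 1.0]])
    return translate(found_row, found_col) @ rotate @ translate(-ref_row, -ref_col)

# The reference point itself lands exactly on the found pose.
M = alignment_mat(100, 200, 130, 260, np.pi / 2)
print(M @ [100, 200, 1])  # homogeneous point -> [130, 260, 1]
```

Applying this matrix to each search region's corners is what moves the regions to follow the object.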
I couldn't post all the code since it was too large; you will have to send me a personal message if you want me to email you the full HDevelop script. Here are some snippets to give you an idea:
Step 1: Threshold the image and set up search regions where the lines should be found. I'm only posting code for the first two regions, but the code is identical for the other three.
threshold(Image, RegionThreshold, 0, 100)
region_to_bin(RegionThreshold, ImageBinary, 255, 0, Width, Height)
dev_display(ImageBinary)
*Use the mouse to draw region 1 around first line. Right click when finished.
draw_rectangle2(WindowHandle, Reg1Row, Reg1Column, Reg1Phi, Reg1Length1, Reg1Length2)
gen_rectangle2(Rectangle1, Reg1Row, Reg1Column, Reg1Phi, Reg1Length1, Reg1Length2)
*Use the mouse to draw region 2 around second line. Right click when finished.
draw_rectangle2(WindowHandle, Reg2Row, Reg2Column, Reg2Phi, Reg2Length1, Reg2Length2)
gen_rectangle2(Rectangle2, Reg2Row, Reg2Column, Reg2Phi, Reg2Length1, Reg2Length2)
The search regions look like this:
Step 2: Calculate the intersection of the lines. I'm only posting code for the first two lines, but the code is identical for the other three.
*get line segment 1
reduce_domain(ImageBinary, Rectangle1, ImageReduced)
edges_sub_pix (ImageReduced, EdgesLine1, 'lanser2', 0.1, 20, 40)
fit_line_contour_xld (EdgesLine1, 'regression', -1, 0, 5, 2, RowBeginLine1, \
ColBeginLine1, RowEndLine1, ColEndLine1, Nr, Nc, Dist)
*get line segment 2
reduce_domain(ImageBinary, Rectangle2, ImageReduced)
edges_sub_pix (ImageReduced, EdgesLine2, 'lanser2', 0.1, 20, 40)
fit_line_contour_xld (EdgesLine2, 'regression', -1, 0, 5, 2, RowBeginLine2, \
ColBeginLine2, RowEndLine2, ColEndLine2, Nr, Nc, Dist)
*Calculate and display intersection line 1 to line 2
intersection_lines(RowBeginLine1, ColBeginLine1, RowEndLine1, ColEndLine1, \
                   RowBeginLine2, ColBeginLine2, RowEndLine2, ColEndLine2, \
                   Line1Line2IntersectRow, Line1Line2IntersectCol, \
                   IsOverlappingLine1Line2)
This produces the following output:
Step 3: Create a normalized cross-correlation model for finding the object when it undergoes translation or rotation. Here I chose a simple region at the bottom:
gen_rectangle1 (ModelRegion, 271.583, 200, 349.083, 530)
reduce_domain (ImageBinary, ModelRegion, TemplateImage)
create_ncc_model (TemplateImage, 'auto', rad(0), rad(360), 'auto', 'use_polarity', ModelID)
area_center (ModelRegion, ModelRegionArea, RefRow, RefColumn)
Output Image
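For intuition, normalized cross-correlation compares mean-subtracted, contrast-normalized patches, so the score is immune to uniform brightness and contrast changes. A NumPy sketch of just the scoring (the model-building and search machinery HALCON adds on top is omitted):

```python
import numpy as np

def ncc(template, window):
    """Normalized cross-correlation of two equal-sized patches, in [-1, 1]."""
    a = template - template.mean()
    b = window - window.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

t = np.array([[0.0, 1.0],
              [2.0, 3.0]])
print(ncc(t, t))          # identical patch scores 1.0
print(ncc(t, 2 * t + 5))  # brightness/contrast change still scores 1.0
```

During search, this score is evaluated at candidate positions (and rotations), and the 0.8 passed to find_ncc_model below is the minimum accepted score.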
Step 4: Now we consider what happens when the object is moved. To simulate this I warped the image using an affine transform, then searched for the normalized cross-correlation model created in step 3. Below you can see the object was found. The output is a row, column and angle where it was found, which is converted to a matrix called AlignmentHomMat2D.
Some of the code:
threshold(TransImage, RegionThreshold, 0, 100)
region_to_bin(RegionThreshold, ImageBinaryScene, 255, 0, Width, Height)
* Matching 01: Find the model
find_ncc_model (ImageBinaryScene, ModelID, rad(0), rad(360), 0.8, 1, 0.5, 'true', 0, \
                Row, Column, Angle, Score)
* Matching 01: Display the centers of the matches in the detected positions
dev_display (TransImage)
set_line_width(WindowHandle, 3)
for I := 0 to |Score| - 1 by 1
    * Matching 01: Display the center of the match
    gen_cross_contour_xld (TransContours, Row[I], Column[I], 20, Angle)
    dev_set_color ('green')
    dev_display (TransContours)
    hom_mat2d_identity (AlignmentHomMat2D)
    hom_mat2d_translate (AlignmentHomMat2D, -RefRow, -RefColumn, AlignmentHomMat2D)
    hom_mat2d_rotate (AlignmentHomMat2D, Angle[I], 0, 0, AlignmentHomMat2D)
    hom_mat2d_translate (AlignmentHomMat2D, Row[I], Column[I], AlignmentHomMat2D)
    * Matching 01: Display the aligned model region
    affine_trans_region (ModelRegion, RegionAffineTrans, AlignmentHomMat2D, \
                         'nearest_neighbor')
    dev_display (RegionAffineTrans)
endfor
The output is as follows:
Step 5: Finally, the search regions for locating the original lines are updated based on where the cross-correlation model was found.
Here is the code. Again I'm only showing the first two line segments:
*transform initial search regions
affine_trans_region(Rectangle1, Rectangle1Transformed, \
                    AlignmentHomMat2D, 'nearest_neighbor')
affine_trans_region(Rectangle2, Rectangle2Transformed, \
                    AlignmentHomMat2D, 'nearest_neighbor')
*get line segment 1
reduce_domain(ImageBinaryScene, Rectangle1Transformed, ImageReduced)
edges_sub_pix (ImageReduced, EdgesLine1, 'lanser2', 0.5, 20, 40)
fit_line_contour_xld (EdgesLine1, 'regression', -1, 0, 5, 2, RowBeginLine1, \
ColBeginLine1, RowEndLine1, ColEndLine1, Nr, Nc, Dist)
*get line segment 2
reduce_domain(ImageBinaryScene, Rectangle2Transformed, ImageReduced)
edges_sub_pix (ImageReduced, EdgesLine2, 'lanser2', 0.5, 20, 40)
fit_line_contour_xld (EdgesLine2, 'regression', -1, 0, 5, 2, RowBeginLine2, \
ColBeginLine2, RowEndLine2, ColEndLine2, Nr, Nc, Dist)
*Calculate and display intersection line 1 to line 2
intersection_lines(RowBeginLine1, ColBeginLine1, RowEndLine1, ColEndLine1, \
                   RowBeginLine2, ColBeginLine2, RowEndLine2, ColEndLine2, \
                   Line1Line2IntersectRow, Line1Line2IntersectCol, \
                   IsOverlappingLine1Line2)
This produces the following output:
Halcon has a lot of ways this can be accomplished depending on the requirements. One of the most common techniques for detecting lines is to use the Hough Transform. I've attached a small HDevelop script showing how to get the intersection of two of the lines in your image. The same principle can be used for the others.
One of the most important concepts in Halcon is Regions. The example program first lets you create two regions by drawing rectangles on top of two of the lines. The regions are black in the image below. On line 8 of the program (draw_rectangle2...) you will need to draw a bounding box around the first line; right click when you are finished. Line 10 (draw_rectangle2...) will expect you to draw a bounding box around the second line; again, right click when finished.
The regions are then combined on lines 13-16 by concatenation. On line 19 (reduce_domain) the domain of the image is reduced to the concatenated regions. You can think of this as a mask. Now when we search for the lines we will only search the part of the image where we created the regions.
read_image (Image, 'C:/Users/Jake/Documents/Stack Overflow/Halcon/Find Edge Position, Draw Line and Line Intersection/FMuX1.jpg')
get_image_size (Image, Width, Height)
dev_open_window (0, 0, Width, Height, 'black', WindowHandle)
dev_display(Image)
*Use the mouse to draw region 1 around first line. Right click when finished.
draw_rectangle2(WindowHandle, Reg1Row, Reg1Column, Reg1Phi, Reg1Length1, Reg1Length2)
*Use the mouse to draw region 2 around second line. Right click when finished.
draw_rectangle2(WindowHandle, Reg2Row, Reg2Column, Reg2Phi, Reg2Length1, Reg2Length2)
*Generate a single region to search for two lines
gen_rectangle2(Rectangle1, Reg1Row, Reg1Column, Reg1Phi, Reg1Length1, Reg1Length2)
gen_rectangle2(Rectangle2, Reg2Row, Reg2Column, Reg2Phi, Reg2Length1, Reg2Length2)
concat_obj(Rectangle1, Rectangle2, Regions)
union1(Regions, RegionUnion)
*Reduce the domain of the image to the region created in lines 13-16
reduce_domain(Image, RegionUnion, ImageReduced)
* Detect edges (amplitude) using the Sobel operator
sobel_amp (ImageReduced, EdgeAmplitude1, 'thin_sum_abs', 3)
dev_set_color ('red')
threshold (EdgeAmplitude1, Region1, 100, 255)
hough_lines (Region1, 4, 50, 5, 5, Line1Angle, Line1Dist)
dev_set_color ('blue')
* Store input lines described in HNF
gen_region_hline (LineRegions, Line1Angle, Line1Dist)
*Select Line1
select_obj(LineRegions, Line1, 1)
*Select Line2
select_obj(LineRegions, Line2, 2)
*Calculate and display intersection
intersection(Line1, Line2, Line1Line2Intersection)
area_center(Line1Line2Intersection, Line1Line2IntersectArea, Line1Line2IntersectRow, Line1Line2IntersectCol)
disp_circle (WindowHandle, Line1Line2IntersectRow, Line1Line2IntersectCol, 6)
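Since hough_lines reports each line in Hesse normal form (angle, distance), the intersection point can also be computed analytically instead of intersecting the generated line regions. A Python sketch of the linear solve (the row/column axis convention used here is an assumption):

```python
import numpy as np

def intersect_hnf(angle1, dist1, angle2, dist2):
    """Intersection of two lines in Hesse normal form:
    col * cos(angle) + row * sin(angle) = dist."""
    A = np.array([[np.cos(angle1), np.sin(angle1)],
                  [np.cos(angle2), np.sin(angle2)]])
    d = np.array([dist1, dist2])
    col, row = np.linalg.solve(A, d)  # LinAlgError if the lines are parallel
    return row, col

# A vertical line at col = 50 and a horizontal line at row = 30
row, col = intersect_hnf(0.0, 50.0, np.pi / 2, 30.0)
print(row, col)
```

This gives a sub-pixel point directly, while the region-based intersection above is limited to pixel resolution.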

Matlab - Creating a figure of different sized subplots

I have an array of images and I need to plot them side by side, with each image having a different size. Although the actual image sizes are quite large, I would want to do something like imresize to plot the size that I want.
I have tried doing the subplot strategy like
subplot(1, 4, 1);
imshow(...);
subplot(1, 4, 2);
imshow(...);
subplot(1, 4, 3);
imshow(...);
subplot(1, 4, 4);
imshow(...);
But all the images show up as the same size. I want something like this
For some reason this seems non-trivial. I would really appreciate some help.
It's possible to make subplots of different sizes by specifying a multiple-element vector for the grid position argument p in the syntax subplot(m,n,p).
Your example can be constructed with the following:
subplot(4,10,[1:4 11:14 21:24 31:34]);
subplot(4,10,[5:7 15:17 25:27]);
subplot(4,10,[8:9 18:19]);
subplot(4,10,[10]);
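The vectors are just the cells of the 4-by-10 grid that each subplot spans, numbered row by row. A short Python sketch of how the first one is generated (using MATLAB's 1-based, row-major cell numbering):

```python
# Largest axes: columns 1-4 of all four rows in a 4x10 grid
n_cols = 10
span = [row * n_cols + col for row in range(4) for col in range(1, 5)]
print(span)  # [1, 2, 3, 4, 11, 12, 13, 14, 21, 22, 23, 24, 31, 32, 33, 34]
```

The other three vectors follow the same pattern with narrower column ranges and fewer rows.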
You can add four axes objects to the figure and set the position of each one:
I = imread('cameraman.tif');
scrsz = get(groot, 'ScreenSize'); %Get screen size
f = figure('Position', [scrsz(3)/10, scrsz(4)/5, scrsz(4)/2*2.4, scrsz(4)/2]); %Set figure position by screen size.
positionVector1 = [-0.25, 0.95-0.9, 0.9, 0.9]; %position vector for largest image.
positionVector2 = [0.23, 0.95-0.6, 0.6, 0.6];
positionVector3 = [0.555, 0.95-0.4, 0.4, 0.4];
positionVector4 = [0.775, 0.95-0.267, 0.267, 0.267]; %position vector for smallest image.
axes(f, 'Position', positionVector1);
imshow(I, 'border', 'tight');
axes(f, 'Position', positionVector2);
imshow(I, 'border', 'tight');
axes(f, 'Position', positionVector3);
imshow(I, 'border', 'tight');
axes(f, 'Position', positionVector4);
imshow(I, 'border', 'tight');
Setting the positions manually is not the best solution; there must be a way to compute the position of each axes object.
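One way to compute them is to lay the axes out left to right, top-aligned, with each width proportional to its image's relative size. A Python sketch of the arithmetic (this proportional-width rule is an assumption, not what produced the hand-tuned vectors above):

```python
def layout(sizes, top=0.95):
    """[left, bottom, width, height] rectangles in normalized figure units
    for square, top-aligned axes placed left to right without overlap."""
    total = float(sum(sizes))
    rects, left = [], 0.0
    for s in sizes:
        w = s / total  # widths normalized so the row fills [0, 1]
        rects.append([left, top - w, w, w])
        left += w
    return rects

rects = layout([0.9, 0.6, 0.4, 0.267])
for r in rects:
    print([round(v, 3) for v in r])
```

The same arithmetic translates directly into a MATLAB loop that calls axes(f, 'Position', ...) for each rectangle.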
Result:

Flip Image in Processing 3.4 [duplicate]

How can I flip (mirror) an image along the Y-axis in Processing 3.4? I have tried scale(-1,1) but that just makes my image disappear.
If you call scale(-1, 1) then your X values are flipped, and you have to adjust your arguments accordingly. Here's an example:
size(500, 500);
PImage img = loadImage("my_image.jpg");
scale(-1, 1);
image(img, -500, 0, width, height);
Personally I find this very confusing, so I would avoid calling scale() with negative numbers. There are a number of ways to flip an image: I would probably use the get() function to get the colors from the image and copy them into a PGraphics instance.
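If all that's needed is a mirrored copy of the pixels rather than a flipped coordinate system, reversing each row of the pixel array does it. A NumPy sketch of the idea (in Processing itself, the get()/set() route mentioned above plays the same role):

```python
import numpy as np

img = np.arange(12).reshape(3, 4)  # stand-in for a 3x4 image's pixel values
flipped = img[:, ::-1]             # reverse every row = mirror over the Y-axis
print(img[0])      # [0 1 2 3]
print(flipped[0])  # [3 2 1 0]
```

Because the flip happens in the data, the drawing code keeps its normal, unflipped coordinates.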

animating multiple body trajectories in octave

I know the hold on; command in Octave allows me to plot multiple trajectories in the same figure. However, I recently came across the function 'comet', which animates the state of a system over the time range defined by the user. I have only successfully used it in a simple script which shows the trajectory of a small body around a fixed massive body. How can I use 'comet' to animate the trajectories of two bodies over the same time range?
PS: If you need an example of how 'comet' works, here is the simple script I mentioned above:
function xdot = f(x, t)
  G = 1.37;
  M = 10^5;
  [T, r] = cart2pol(x(1), x(2));
  xdot(3) = -((G*M) / ((x(1)^2) + (x(2)^2))) * cos(T);
  xdot(4) = -((G*M) / ((x(1)^2) + (x(2)^2))) * sin(T);
  xdot(1) = x(3);
  xdot(2) = x(4);
endfunction
X = lsode ("f", [1000,0,5,10],(t = linspace(0,1000,2000)'));
comet(X(:,1),X(:,2),0.01);
This basically plots the trajectory over time; you can copy-paste it into Octave and watch the animation.
Can anyone tell me how I can do the same for a two-body or multiple-body system?
You can't really use comet in that way. You'll have to do the 'animation' manually, but it's not hard. Plus, you get better customisability. Here's one approach.
X1 = lsode ("f", [1000, 0, 5, 10], (t = linspace(0,1000,2000)'));
X2 = lsode ("f", [500, 0, 4, 5 ], (t = linspace(0,1000,2000)'));
x_low = min ([X1(:, 1); X2(:, 1)]); x_high = max ([X1(:, 1); X2(:, 1)]);
y_low = min ([X1(:, 2); X2(:, 2)]); y_high = max ([X1(:, 2); X2(:, 2)]);
for n = 1 : size (X1, 1)
  plot (X1(1:n, 1), X1(1:n, 2), ':', 'color', [0, 0.5, 1], 'linewidth', 2);
  hold on;
  plot (X1(n, 1), X1(n, 2), 'o', 'markerfacecolor', 'g', 'markeredgecolor', 'k', 'markersize', 10);
  plot (X2(1:n, 1), X2(1:n, 2), ':', 'color', [1, 0.5, 0], 'linewidth', 2);
  plot (X2(n, 1), X2(n, 2), 'o', 'markerfacecolor', 'm', 'markeredgecolor', 'k', 'markersize', 10);
  hold off;
  axis ([x_low, x_high, y_low, y_high]); % needed, otherwise the first few frames
                                         % would use automatic axis limits
  drawnow; pause(0.01);
end
This is the most straightforward way, but the effective frame period may be longer than the 0.01 s pause if the redraw itself takes longer than that; you can make it even faster by plotting only once and then changing the data of each plot object at every step instead.
Also, this 'animation' is simply for visualising inside an Octave session. If you want to produce a video file from it, you'll have to write out images and convert them to a movie / GIF format, etc.
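The only per-body bookkeeping above is the shared axis-limit computation; keeping the trajectories in a list makes the same approach work for any number of bodies. A Python/NumPy sketch of that part (the plotting loop generalizes the same way, one dotted line and one marker per body):

```python
import numpy as np

def shared_limits(trajectories):
    """Common [x_low, x_high, y_low, y_high] over any number of state arrays
    whose first two columns are the x and y positions."""
    xs = np.concatenate([T[:, 0] for T in trajectories])
    ys = np.concatenate([T[:, 1] for T in trajectories])
    return [xs.min(), xs.max(), ys.min(), ys.max()]

T1 = np.array([[0.0, 0.0], [10.0, 5.0]])
T2 = np.array([[-2.0, 1.0], [4.0, 8.0]])
print(shared_limits([T1, T2]))  # x and y ranges taken over both bodies
```

In Octave the equivalent is collecting the lsode results in a cell array and taking min/max over the concatenated columns before the loop starts.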

how is the default blending mode (SKBlendModeAlpha) calculated?

In SpriteKit, how is the default blending mode (SKBlendModeAlpha) calculated? Can you please verify that it works on my example points below? All nodes below use the default SKBlendModeAlpha.
My scene has a white background and two identical child nodes of uniform color that partially intersect each other. The true RGB of each node is (16, 195, 117) and alpha = 0.4.
When I look at the blended color of the node sitting on the white background, the color is (158, 221, 190). (This was confirmed by doing a screen capture and checking in gimp).
How was this calculated by SpriteKit?
When I look at the blended color of the intersected area of two nodes on the white background, the RGB is (112, 203, 153).
How was this calculated by SpriteKit?
Thanks!
Thanks Okapi. That was the key to finding the answer. Using "Display native values" helped me see that for each RGB component, it's:
final_color = previous_final_color * (1 - alpha) + new_color * alpha.
For one layer on white, I indeed get an RGB of
(159, 231, 200) = floor(0.6 * (255, 255, 255) + 0.4 * (16, 195, 117))
For the two layers on white, I get an RGB of
(101, 216, 166) = floor(0.6 * (159, 231, 200) + 0.4 * (16, 195, 117))
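The per-channel arithmetic is easy to sanity-check; a small Python sketch of the source-over blend on one channel (whether SpriteKit floors or rounds the final 8-bit value is treated as an open assumption here, so only the pre-quantization values are shown):

```python
def blend(dst, src, alpha):
    """Source-over blend of one 8-bit channel: dst*(1 - alpha) + src*alpha."""
    return dst * (1 - alpha) + src * alpha

# One 40%-alpha layer of green = 195 over a white (255) background
one = blend(255, 195, 0.4)
print(one)  # 231.0
# A second identical layer composited over the first
two = blend(one, 195, 0.4)
print(two)  # ~216.6 before quantizing back to 8 bits
```

Iterating the same blend for each additional overlapping layer reproduces the measurements above, channel by channel.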
