How to detect if a ball will collide with another, and the normal vector of the collision?

So I have a 2D game involving balls (circles) colliding. I want to be able to detect whether two balls will collide before it happens, and the normal vector of the collision if a collision is going to happen. Take a look at the picture below:
Essentially, the normalized vector represented by the red arrow is what I am interested in knowing. How can I figure that out on any given frame as efficiently as possible, given that I know the following:
The blue ball has a known current velocity
The blue ball is pulled down by constant gravity
The green ball does not move
The sizes and location of both balls

Let's assume:
r1 radius of green ball
(x1,y1) the position of green ball
r2 radius of blue ball
(x2,y2) the position of blue ball
The distance between the balls is
d^2 = (x2-x1)^2+(y2-y1)^2
Collision occurs when
d^2 <= (r1+r2)^2
(with equality at the exact moment of contact). The collision normal is just (x2-x1, y2-y1) when d = r1+r2; divide it by d to normalize.
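A minimal sketch of how the "will it collide" part can be checked ahead of time, assuming the blue ball follows simple ballistic motion (constant gravity, nothing else) and that stepping forward with a small fixed time step is accurate enough for your purposes; the function name, the time step and the example numbers are all made up:
import math

def predict_collision(p_blue, v_blue, gravity, p_green, r_blue, r_green,
                      dt=1.0 / 240.0, max_time=5.0):
    """Step the blue ball forward under constant gravity and report the
    first moment it touches the static green ball.
    Returns (time_of_impact, unit_normal) or None if there is no hit
    within max_time. The unit normal points from the green ball's centre
    towards the blue ball, i.e. the normalized (x2-x1, y2-y1) from above."""
    x, y = p_blue
    vx, vy = v_blue
    gx, gy = gravity                      # e.g. (0.0, -9.81)
    r_sum = r_blue + r_green
    t = 0.0
    while t <= max_time:
        dx, dy = x - p_green[0], y - p_green[1]
        d = math.hypot(dx, dy)
        if d <= r_sum:                    # d^2 <= (r1 + r2)^2
            if d == 0.0:                  # degenerate: centres coincide
                return t, (1.0, 0.0)
            return t, (dx / d, dy / d)    # normalized collision normal
        vx += gx * dt                     # semi-implicit Euler step
        vy += gy * dt
        x += vx * dt
        y += vy * dt
        t += dt
    return None

# Example usage (all numbers invented):
# predict_collision(p_blue=(0.0, 5.0), v_blue=(2.0, 0.0), gravity=(0.0, -9.81),
#                   p_green=(3.0, 0.0), r_blue=0.5, r_green=1.0)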

Related

Triangulation: select weights for triangle corners so that they remain constant when crossing a boundary

I have a set of points that can be understood as loudspeaker positions. I want to write an algorithm that always selects three speakers for a given virtual position in two-dimensional space, and assigns them amplitudes so that when the virtual position coincides with a speaker position, only that speaker has an amplitude > 0, and otherwise the three speakers surrounding the point have a weight-balanced amplitude. The three speakers are selected using a Delaunay triangulation.
Now this all works fine when I stay within one triangle, but when I cross the boundary to the next triangle, the two speakers defining the boundary abruptly change their amplitude. I tried this first by projecting the point onto the altitudes of the triangle:
Here the virtual point is a magenta circle, the speakers are small black squares, and the amplitudes are visualised by blue circles. When the magenta circle moves slightly into the next triangle, the amplitudes (blue circles) change size:
Trying the same with median projection doesn't help. One triangle:
Moving to the next triangle (amplitudes jump again):
The amplitude is always based on the relative position of the point projected onto the diagonal that leaves a corner.
Any ideas how to fix this? I'm thinking I need to make sure that at the boundary, the weight of the speakers must only depend on the two speakers forming the boundary. Perhaps with an additional clever multiplication?
Actually, my first approach with the altitude projections was correct. I had a bug in the scaling of the amplitudes:
An alternative, but slightly more complicated variant, is to draw a line from each corner through the virtual point and calculate the relative position to the intersection with the opposite side:
Animated:
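For what it's worth, both working variants above (the corrected altitude projection and the corner-through-point projection) appear to boil down to the barycentric coordinates of the virtual point in the speaker triangle, which is exactly what makes the weights continuous across a shared edge: on the edge, the opposite speaker's weight is zero and the other two depend only on the position along that edge. A minimal sketch, assuming 2D speaker positions and that the raw weights are used directly as amplitudes (any loudness/power normalization is left out):
def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c).
    At a vertex only that speaker gets weight 1; on an edge the opposite
    speaker gets weight 0, so crossing into the neighbouring triangle
    cannot make the amplitudes jump. Points are (x, y) tuples and the
    weights sum to 1 inside the triangle."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    wa = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    wb = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return wa, wb, 1.0 - wa - wb

# Example: the midpoint of edge a-b gets weights (0.5, 0.5, 0.0)
# barycentric_weights((0.5, 0.0), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))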

Reconstructing a 2D shape from its projection in 1D

I have a convex closed shape in 2D space (on the x-y plane). I do not know what it looks like. I rotate this shape about approximately the center of its bounding box 64 times by 5.625 degrees (360/64). For each rotation I have the x-coordinates of the extreme points of the shape. In other words, I know the left and right x extents of the shape for each rotation (assuming an orthographic projection). How do I obtain 64 points on the shape that do not contradict the x projections?
Note that the 2D shape is rotating, but the coordinate axes are not rotating along with it. So if your object were a line, the x projection of each endpoint, plotted against the rotation angle, would essentially trace a sine/cosine wave depending on its original orientation.
The more rotations I use, the closer the reconstruction should get to my actual shape (assuming I have a solution).
In reality I do not know the exact point I am rotating the shape about; however, any solution assuming I do know will still be helpful, as I don't mind the reconstruction being imperfect.
We used a straightforward method to reconstruct it.
A projection is essentially the shadow of the object.
You start with a bounding 2D box. For each projection you cut away from the 2D shape the left and right parts that fall outside of the projection. So the main function computes the intersection of two convex 2D shapes. You calculate these intersections for each projection.
We have several purple projections P1, P2, P3, P4 of the original green object:
Knowing the position of a purple projection, build two red rays coming from the end points of the projection and intersect them with the reconstructed object:
The red object was reconstructed using 4 projections. Compared to the original green one, you can see that they are not the same. The more projections you have, the less error you'll get in the final result.
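A minimal sketch of that cutting step, assuming each projection is given as (rotation angle in degrees, left x, right x), the rotation is counterclockwise about the origin, and the initial bounding box is supplied as a polygon (list of vertices in order); the function names and the argument format are my own:
import math

def clip_halfplane(poly, n, c):
    """Keep the part of the convex polygon poly where n . p <= c
    (Sutherland-Hodgman clipping against a single half-plane)."""
    out = []
    for i in range(len(poly)):
        a, b = poly[i], poly[(i + 1) % len(poly)]
        da = n[0] * a[0] + n[1] * a[1] - c
        db = n[0] * b[0] + n[1] * b[1] - c
        if da <= 0:
            out.append(a)
        if (da <= 0) != (db <= 0):                # edge crosses the boundary
            t = da / (da - db)
            out.append((a[0] + t * (b[0] - a[0]), a[1] + t * (b[1] - a[1])))
    return out

def reconstruct(projections, bbox):
    """Intersect the bounding box with one 'slab' per projection.
    Rotating the shape by +angle and reading its x extents is the same as
    projecting the unrotated shape onto the direction (cos a, -sin a)."""
    poly = list(bbox)
    for angle_deg, x_left, x_right in projections:
        a = math.radians(angle_deg)
        d = (math.cos(a), -math.sin(a))
        poly = clip_halfplane(poly, d, x_right)                 # p . d <= right
        poly = clip_halfplane(poly, (-d[0], -d[1]), -x_left)    # p . d >= left
    return poly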

Detecting 3D collision between a triangle and a rotating sphere

My problem:
Consider a little green sphere constrained to move only on the surface of a large yellow sphere of center O. The green sphere is initially at point C0 and is following a command motion with 2 degrees of freedom.
Then, at point C1, the sphere collides with a red object (e.g. a triangle). This is a non-elastic friction-free collision, which basically removes one degree of freedom from the motion of the green sphere.
However, the green sphere still tries to follow its command trajectory. Since there is still one degree of freedom, the motion of the sphere will be modified and it will go towards C2.
My questions:
How should I approach the problem of designing an algorithm to detect the collision between the green sphere and the red object (in particular, when the object is a 3D triangle), when the motion of the green sphere is constrained to be on the yellow sphere?
Once the collision is detected, how can I compute the residual/reaction trajectory of the green sphere after the collision?
Note that I am aware of algorithms to detect 3D collisions between moving spheres and triangles in the case of 3D translations. How would I handle a constrained motion, such as rotation at fixed length from a fixed point?
According to the flat-earth society, if the yellow sphere is the earth, and the green sphere is a person walking on the earth, then an observer 10 meters above the person sees the person as walking on a 2D plane. Mathematically, the curvature of the yellow sphere is negligible when considering infinitesimal distances around C1, so the motion of the green sphere can be modeled as if moving on a plane. Specifically, the motion appears to be in the plane tangent to the sphere at point C1.
The colliding object can be imagined as a wall embedded in the surface of the earth. When the person reaches the wall, they must follow the wall. Mathematically, the intersection of the triangle with the tangent plane forms a line on the plane which the green sphere must follow.
Thus, to an observer looking down at point C1, the situation looks like this
where
P is the plane tangent to the sphere at point C1
L is the line formed by the intersection of the triangle with P
the dark blue line is the original direction of motion
the light blue line is the new direction of motion
So my approach to the problem would be:
Translate and rotate the coordinate system so that
C1 is at point (0,0,0)
the center of the yellow sphere is at point (0,0,-r)
This means that P is the XY plane, i.e. the plane with z=0.
Determine the equation of the plane of the triangle in the new coordinate system, in the form ax + by + cz + d = 0. The equation for L is then ax + by + d = 0
The new direction of motion is parallel to L, which you must then rotate and translate back to the original coordinate system.
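A sketch of that direction computation which skips the explicit change of coordinates: the line L lies in both the triangle's plane and the tangent plane at C1, so its direction is simply the cross product of the two plane normals. This assumes numpy, that tri_normal is the triangle's plane normal, and that velocity is the commanded velocity at C1; the function name is mine:
import numpy as np

def constrained_direction(velocity, contact_point, sphere_center, tri_normal):
    """Unit direction of motion after the collision (parallel to L above).
    The sign is chosen so the sphere keeps moving in the sense of the
    incoming commanded velocity."""
    n_sphere = contact_point - sphere_center
    n_sphere = n_sphere / np.linalg.norm(n_sphere)   # surface normal at C1
    along_l = np.cross(tri_normal, n_sphere)         # direction of line L
    norm = np.linalg.norm(along_l)
    if norm < 1e-12:                                 # triangle tangent to sphere
        return np.zeros(3)
    along_l /= norm
    if np.dot(along_l, velocity) < 0.0:
        along_l = -along_l
    return along_l

# The residual velocity is the commanded velocity projected onto this
# direction: v_res = np.dot(v_cmd, d) * d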
Use the following trick to address this problem of tangency: deflate the green sphere to a point (radius 0), and at the same time inflate the yellow sphere (radius R+r) and the red triangle (fat triangle of thickness 2r and cylindrical edges/spherical corners).
The green sphere touching one original surface is now understood as the green point belonging to the corresponding inflated surface.
So your point initially runs on the inflated yellow sphere, until it meets the surface of the inflated triangle; then on, it follows the intersection of both surfaces.
The exact curve depends on the particular relative positions of the sphere and the triangle, and you will have to consider the possible intersections of the sphere with two planes (=> circular arcs), three cylinders (=> complex quartic curves) and three spheres (=> circular arcs), and discuss the endpoints of these curves.
All this is tractable analytically, but probably painful.
The intersection curve (see the figure) shows the trajectory of the center.
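The "green point belonging to the inflated triangle" test mentioned here reduces to asking whether the point lies within distance r of the original triangle, which the standard closest-point-on-triangle case analysis answers directly. A sketch (numpy arrays; the function names are mine):
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point to p on triangle (a, b, c): the usual Voronoi-region
    case analysis over the three vertices, three edges and the face."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = np.dot(ab, ap), np.dot(ac, ap)
    if d1 <= 0 and d2 <= 0:
        return a                                           # vertex A
    bp = p - b
    d3, d4 = np.dot(ab, bp), np.dot(ac, bp)
    if d3 >= 0 and d4 <= d3:
        return b                                           # vertex B
    vc = d1 * d4 - d3 * d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1 / (d1 - d3)) * ab                   # edge AB
    cp = p - c
    d5, d6 = np.dot(ab, cp), np.dot(ac, cp)
    if d6 >= 0 and d5 <= d6:
        return c                                           # vertex C
    vb = d5 * d2 - d1 * d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2 / (d2 - d6)) * ac                   # edge AC
    va = d3 * d6 - d5 * d4
    if va <= 0 and (d4 - d3) >= 0 and (d5 - d6) >= 0:
        return b + ((d4 - d3) / ((d4 - d3) + (d5 - d6))) * (c - b)  # edge BC
    denom = va + vb + vc                                   # inside the face
    return a + (vb / denom) * ab + (vc / denom) * ac

def touches_inflated_triangle(p, a, b, c, r, tol=1e-9):
    """True if p lies on or inside the triangle inflated by radius r,
    i.e. if a green sphere of radius r centred at p touches the triangle."""
    return np.linalg.norm(p - closest_point_on_triangle(p, a, b, c)) <= r + tol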

How to find collision center of two rectangles? Rects can be rotated

I've just implemented collision detection using SAT and this article as a reference for my implementation. The detection works as expected, but I need to know where the two rectangles are colliding.
I need to find the center of the intersection, the black point on the image above (but I don't have the intersection area either). I've found some articles about this, but they all involve avoiding the overlap or some kind of velocity, which I don't need.
The information I have about the rectangles is the four points that represent them: the upper-right, upper-left, lower-right and lower-left coordinates. I'm trying to find an algorithm that can give me the intersection from these points.
I just need to put an image on top of it, like two crashed cars, where I put an image on top of the collision center. Any ideas?
There is another way of doing this: finding the center of mass of the collision area by sampling points.
Create the following function:
bool IsPointInsideRectangle(Rectangle r, Point p);
Define a search rectangle as:
TopLeft = (MIN(x), MAX(y))
TopRight = (MAX(x), MAX(y))
LowerLeft = (MIN(x), MIN(y))
LowerRight = (MAX(x), MIN(y))
Where the MIN and MAX are taken over the x and y coordinates of both rectangles' corners.
You will now define a step for dividing the search area into a mesh. I suggest you use AVG(W,H)/2, where W and H are the width and height of the search area.
Then you iterate over the mesh points, checking for each one whether it is inside the collision area:
IsPointInsideRectangle(rectangle1, point) AND IsPointInsideRectangle(rectangle2, point)
Define:
Xi : the ith partition of the mesh along the X axis.
CXi: the count of mesh points that are inside the collision area for Xi.
Then the X coordinate of the collision center is the weighted average
Xcenter = SUM(Xi * CXi) / SUM(CXi)
And you can do the same thing with Y, of course. Here is an illustrative example of this approach:
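A rough sketch of this sampling approach, assuming each rectangle is given as its four corners in order (so the inside test also works for rotated rectangles); the function names are mine, and a fixed grid resolution is used instead of the AVG(W,H)/2 step to keep the sketch short. Averaging all the inside samples directly is the same as the SUM(Xi * CXi) / SUM(CXi) formula above:
def point_in_rect(corners, p):
    """corners: the 4 corners of a (possibly rotated) rectangle in order.
    The point is inside when it lies on the same side of every edge."""
    sign = 0.0
    for i in range(4):
        ax, ay = corners[i]
        bx, by = corners[(i + 1) % 4]
        cross = (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)
        if cross != 0.0:
            if sign == 0.0:
                sign = cross
            elif sign * cross < 0.0:
                return False
    return True

def collision_center(rect1, rect2, steps=64):
    """Approximate centroid of the overlap by sampling a regular grid over
    the combined bounding box of both rectangles."""
    xs = [x for x, _ in rect1 + rect2]
    ys = [y for _, y in rect1 + rect2]
    min_x, max_x, min_y, max_y = min(xs), max(xs), min(ys), max(ys)
    sx = sy = n = 0
    for i in range(steps + 1):
        for j in range(steps + 1):
            p = (min_x + (max_x - min_x) * i / steps,
                 min_y + (max_y - min_y) * j / steps)
            if point_in_rect(rect1, p) and point_in_rect(rect2, p):
                sx, sy, n = sx + p[0], sy + p[1], n + 1
    return (sx / n, sy / n) if n else None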
You need to compute the intersections of the boundaries of the boxes using the line-to-line intersection equation/algorithm.
http://en.wikipedia.org/wiki/Line-line_intersection
Once you have the crossing points, you might be OK with the average of those points, or with the center along a particular direction; "the middle" is a little vague in the question.
Edit: in addition to this, you need to work out whether any of the corners of either rectangle are inside the other (this should be easy enough, even from the intersections). These corners should be included with the intersection points when calculating the "average" center point.
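A sketch of the edge-against-edge part (the standard parametric segment intersection behind the Wikipedia link; the function name is mine). Collect every intersection between an edge of one rectangle and an edge of the other, add the corners of each rectangle that lie inside the other as the edit says, and average the collected points:
def segment_intersection(p1, p2, p3, p4, eps=1e-12):
    """Intersection point of segments p1-p2 and p3-p4, or None if they do
    not cross (parallel or collinear segments are treated as not crossing)."""
    d1x, d1y = p2[0] - p1[0], p2[1] - p1[1]
    d2x, d2y = p4[0] - p3[0], p4[1] - p3[1]
    denom = d1x * d2y - d1y * d2x
    if abs(denom) < eps:
        return None
    t = ((p3[0] - p1[0]) * d2y - (p3[1] - p1[1]) * d2x) / denom
    u = ((p3[0] - p1[0]) * d1y - (p3[1] - p1[1]) * d1x) / denom
    if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0:
        return (p1[0] + t * d1x, p1[1] + t * d1y)
    return None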
This one's tricky because irregular polygons have no defined center. Since your polygons are (in the case of rectangles) guaranteed to be convex, you can probably find the corners of the polygon that comprises the collision (which can include corners of the original shapes or intersections of the edges) and average them to get ... something. It will probably be vaguely close to where you would expect the "center" to be, and for regular polygons it would probably match exactly, but whether it would mean anything mathematically is a bit of a different story.
I've been fiddling mathematically and have come up with the following, which solves the smoothness problem when corners appear and disappear (as can happen when the movement of a hitbox causes the intersection to change from a rectangle to a triangle or vice versa). Without this extra bit, adding and removing corners will cause the centroid to jump.
Here, take this fooplot.
The plot illustrates 2 rectangles, R and B (for Red and Blue). The intersection sweeps out an area G (for Green). The Unweighted and Weighted Centers (both Purple) are calculated via the following methods:
(0.225, -0.45): Average of corners of G
(0.2077, -0.473): Average of weighted corners of G
A weighted corner of a polygon is defined as the coordinates of the corner, weighted by the sin of the angle of the corner.
This polygon has two 90 degree angles, one 59.03 degree angle, and one 120.96 degree angle. (Both of the non-right angles have the same sine, sin(Ɵ) = 0.8574929...)
The coordinates of the weighted center are thus:
( (sin(Ɵ) * (0.3 + 0.6) + 1 - 1) / (2 + 2 * sin(Ɵ)), // x
(sin(Ɵ) * (1.3 - 1.6) + 0 - 1.5) / (2 + 2 * sin(Ɵ)) ) // y
= (0.2077, -0.473)
With the provided example, the difference isn't very noticeable, but if the 4gon were much closer to a 3gon, there would be a significant deviation.
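A small sketch of that weighting as a reusable function, assuming the corners of the intersection polygon are already known and listed in order; the function name is mine:
import math

def sin_weighted_centroid(poly):
    """Centroid of a polygon's corners, each weighted by the sine of its
    corner angle, so a corner whose angle approaches 180 degrees (i.e. a
    corner about to disappear) smoothly loses its influence."""
    n = len(poly)
    wx = wy = wsum = 0.0
    for i in range(n):
        px, py = poly[(i - 1) % n]
        cx, cy = poly[i]
        nx, ny = poly[(i + 1) % n]
        ax, ay = px - cx, py - cy                 # edge towards previous corner
        bx, by = nx - cx, ny - cy                 # edge towards next corner
        denom = math.hypot(ax, ay) * math.hypot(bx, by)
        w = abs(ax * by - ay * bx) / denom if denom > 0.0 else 0.0
        wx, wy, wsum = wx + w * cx, wy + w * cy, wsum + w
    return (wx / wsum, wy / wsum) if wsum > 0.0 else None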
If you don't need to know the actual coordinates of the region, you could make two CALayers whose frames are the rectangles, and use one to mask the other. Then, if you set an image in the one being masked, it will only show up in the area where they overlap.

How to fill circle with increasing radius?

As part of a more complex algorithm I need the following:
Let's say I have a circle with radius R1 drawn on a discrete grid (image) (green in the image below).
I want to draw a circle with radius R2 that is bigger than R1 by one pixel (red in the image below).
At each step of the algorithm I want to draw circles of increasing radius in such a way that each time I have a filled circle.
How can I find the points to fill at each step so that at the end of each step I have a fully filled circle?
I have been thinking of a circle rasterization algorithm, but this will lead to some gaps in the filling. Another way is to use a mathematical morphology operation like dilation, but this seems computationally expensive.
I am generally looking for a way to do this on an arbitrary shape, but initially a circle algorithm will be enough.
Your best option is to draw and fill a slightly larger red circle, and then draw and fill the green circle over it. Then redo this on the next iteration.
To only draw the 1px border is quite tricky. Your sample image is not even quite consistent. At some places a white pixel occurs diagonally to a green pixel, and in other places that pixel is red.
Edit:
borderPixels = emptySet
For each green pixel, p
For each neighbor n to p
If n is white
Add n to borderPixels
Do whatever you like with borderPixels (such as color them red)
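A runnable version of that pseudocode, assuming the image is a 2D numpy array of colour codes (the colour constants and the function name are made up):
import numpy as np

GREEN, WHITE, RED = 1, 0, 2        # invented colour codes for the grid

def border_pixels(image, use_diagonals=True):
    """Return the set of white pixels adjacent to a green pixel, i.e. the
    pixels to colour red to grow the filled shape by one ring."""
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if use_diagonals:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    border = set()
    h, w = image.shape
    for y, x in zip(*np.nonzero(image == GREEN)):
        for dy, dx in offsets:
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and image[ny, nx] == WHITE:
                border.add((ny, nx))
    return border

# e.g.  for y, x in border_pixels(img): img[y, x] = RED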
My current solution for a circle.
Based on the well-known midpoint circle algorithm:
Create the set of points for one octant for radius R1 (light green pixels)
Create the set of points for one octant for radius R2 (dark orange pixels)
For each row in the image, compare the X coordinates of the orange and green pixels and fill the 0, 1, or however many pixels in between (light orange)
Repeat for each octant (for some octants, columns instead of rows have to be compared)
This algorithm can be applied to other types of parametric shapes (Bezier-curve based, for example).
For non-parametric (pixel-based) shapes, use image convolution (dilation) with a centrally symmetric kernel (a circle). In other words, for each pixel in the shape, look for neighbours within a circle of small radius and add them to the set (an expensive computation).
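A simplified sketch of the same per-row idea, using an integer square root per row instead of the midpoint-circle octants described above (the function name is mine and the radii are assumed to be integers); the returned set is exactly the ring of pixels to add, so the union with the old disc stays gap-free:
import math

def ring_pixels(r1, r2, cx=0, cy=0):
    """Pixels to add when a filled circle of radius r1 (already drawn,
    centred at (cx, cy)) grows into a filled circle of radius r2 > r1.
    For each row y, fill every x that is inside the R2 circle but outside
    the R1 circle."""
    pixels = set()
    for y in range(-r2, r2 + 1):
        x_outer = math.isqrt(r2 * r2 - y * y)
        x_inner = math.isqrt(r1 * r1 - y * y) if abs(y) <= r1 else -1
        for x in range(x_inner + 1, x_outer + 1):
            pixels.add((cx + x, cy + y))
            pixels.add((cx - x, cy + y))
    return pixels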
Another option is to draw a circle/shape with a 2-pixel-wide red border, and then draw a green filled circle/shape with no border, which should leave an approximately 1 px wide edge.
It depends on how whatever technique you use resolves lines to pixels.
Circle algorithms tend to be optimised for drawing circles; see the link here.
