A 2D shadow algorithm

I am planning to make a game that uses light (and shadows) as part of the gameplay, but I can't think of an efficient algorithm to implement them, and I'm sure there is an elegant solution.
The white area is directly illuminated by the light, light grey is illuminated by the walls that are directly illuminated, and dark grey is darkness.
I want to find those areas efficiently (in real time, with the light able to move).
Although not realistic, that is the simplest way I could think of to state my problem; any other implementation that handles direct and reflected light is welcome.
...
My first attempt would be to draw lines from the light to the perimeter of the screen and find the first wall each one intersects. But repeating this for every illuminated part of every wall to mark the "ambient" light is not feasible.
Also note, the game is in Flash, so I don't think I can use the GPU.

Inspired by Ryan's answer.
In a 2D grid, mark every point of the screen as lit. Then, for every wall on the screen (in order of closeness to the light), draw a shadow behind it. Before moving to the next wall, check which parts of that wall are lit and which are not, so as not to draw the shadow twice. Mark every wall whose shadow was drawn for the next step, as these are the probably lit walls (we should check again before the next part).
For every lit line segment (wall), first check whether any wall intersects the segment. For every intersection, split the lit segment in two at the intersection point.
For the end points of every line segment, repeat the first part into a temporary array, and at the end add all the lit points to the final array.
The first part goes over all the points on the screen and all the points on the walls once, so it is O(area + length of walls). Depending on the complexity of the scene (number of walls and intersections), the second part should apply the first part about 20 times.
This may work in real time, but make sure to cache the lit areas while the lights are not moving.
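Here is a minimal sketch of the first pass, with a simplification I'm assuming for brevity: instead of carving a shadow quad behind each wall, every grid cell is tested directly against every wall (a cell stays lit only if the segment from the light to the cell crosses no wall). Function and parameter names are illustrative, not from any particular library.

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 properly crosses segment q1-q2 (collinear overlap ignored)."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

def direct_light_grid(width, height, light, walls):
    """First pass: a cell is lit iff no wall blocks the segment light -> cell centre.
    walls is a list of ((x1, y1), (x2, y2)) segments. This is O(area * walls), so keep
    the grid coarse and cache the result while the light is stationary."""
    lit = [[True] * width for _ in range(height)]
    for y in range(height):
        for x in range(width):
            cell = (x + 0.5, y + 0.5)
            for a, b in walls:
                if segments_intersect(light, cell, a, b):
                    lit[y][x] = False
                    break
    return lit

Each wall segment that still has lit cells next to it after this pass is a candidate emitter for the second ("ambient") pass described above.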

You don't need to draw all light lines, just the important ones. For example, with one point light source and one line, you only need to solve two intersections.
For reflected lighting, you would start with a point light of intensity n; then, every time this light intersects a wall, you split the wall into smaller segments and add a linear light source of intensity n-1 along the illuminated segment. You can repeat this as many times as you like.
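The "important" lines for the hard shadow are the two rays through each wall's endpoints. Here is a hedged sketch of projecting one wall's shadow polygon from those rays; the cap distance standing in for "out to the edge of the screen" and the function names are my own assumptions.

def shadow_quad(light, wall, cap=10_000.0):
    """Return the 4-vertex polygon shaded by a wall segment from a point light.
    The wall's endpoints are pushed away from the light by a large 'cap' distance,
    an assumption standing in for 'out to the edge of the screen'."""
    lx, ly = light

    def project(p):
        dx, dy = p[0] - lx, p[1] - ly
        length = (dx * dx + dy * dy) ** 0.5 or 1.0
        return (p[0] + dx / length * cap, p[1] + dy / length * cap)

    a, b = wall
    # Polygon order: near edge along the wall, then the far projected edge.
    return [a, b, project(b), project(a)]

Filling these polygons gives the dark region for one bounce; the still-lit portions of the walls then become the linear n-1 sources for the reflected pass.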

Avoiding collision when placing a circle near other circles

I have a Main circle, and Other nearby circles.
All circles are always the same size.
No two circles may ever overlap.
I'd like to find out where Potential circles could be placed near the Main circle (more detail below).
Ideally, I'd like to do this fast (I know, everyone says so - but trust me I really do!).
I'd like the answer as a general high level description of the algorithm to use.
This might be a duplicate of Placing a circle so that it does not collide with any other circles, but I find both that question and its answers unimpressive.
Here's an approximate image, explanation below:
Red circle is the Main circle.
Black circles are the Other circles placed nearby.
The centre of each Potential circle should be on the yellow band. It is a relatively narrow range, slightly less wide than the radius of the circle, I'm not interested in any circles whose centre lies outside of this range.
Gray are the Potential circles I'd like the algorithm to find:
The one up and slightly to the right is as close to the Main circle as it can be. If any of the three Other circles around it moved any closer to its centre, there would be no Potential circle there.
Lower right, as far from the Main circle as allowed. If the two Other circles next to it were any closer to each other, there would be no Potential circle here.
Four Potential circles drawn on the left. There could be a multitude of these, it'd be fine if the algorithm returned just any one of these.
The teal circle is a bounding box, if a circle's centre lies outside, it has no impact on placing the circles on the yellow band.
Algorithm input: the Main and Other circles.
Desired algorithm output for the image above: three or four Potential circles. One upper slightly right, one lower right, and one or two on the left side (I don't mind too much which it is).
What have I tried? Googling! Looking at Stack Overflow. Thinking.
I'm thinking probably the first thing the algorithm should do is to convert the Other circles from (X, Y) coordinates to (angle, distance) from Main and order by angle. Then split the yellow band into some segments, and for each segment, get all the circles that could influence it.
Here I get lost a bit.
Next step: do some math to try to position the circle. For two circles, I can place a circle so that it just touches them both. Perhaps when there are N circles, I just do this for each pair and see if I'm left with anything? Or for each of the N circles paired with the Main centre?
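For the "just touches two circles" building block: since every circle has the same radius r, the candidate centre must sit at distance 2r from both existing centres, i.e. on the intersection of two circles of radius 2r. A hedged sketch, with illustrative names of my own:

import math

def touching_candidates(c1, c2, r):
    """Centres of radius-r circles that touch both circles centred at c1 and c2
    (all circles share the same radius r). Returns 0, 1 or 2 candidate centres."""
    d = math.dist(c1, c2)
    if d == 0 or d > 4 * r:              # concentric, or too far apart to touch both
        return []
    # Intersect the two circles of radius 2r around c1 and c2.
    a = d / 2                            # equal radii, so the chord midpoint is halfway
    h_sq = (2 * r) ** 2 - a * a
    mx, my = (c1[0] + c2[0]) / 2, (c1[1] + c2[1]) / 2
    if h_sq <= 0:
        return [(mx, my)]
    h = math.sqrt(h_sq)
    ux, uy = (c2[0] - c1[0]) / d, (c2[1] - c1[1]) / d   # unit vector c1 -> c2
    return [(mx - uy * h, my + ux * h), (mx + uy * h, my - ux * h)]

Each candidate then still has to be checked against the yellow band and against every other nearby circle.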
[Edit]: I guess a better approach would be to start with, say, 12 Potential circles around the Main circle, see which ones are illegal and which Other circles are getting in the way, then try nudging the Potential circles out of the Other circles' way. We should know exactly how far to move a Potential circle and in which direction, based on which Other circle it overlaps. Repeat, hopefully avoiding infinite loops, but that part I think I can handle.
This is a nice problem. Let’s focus on one word: potential.
Think about force directed graph layouts, like this one:
https://observablehq.com/#d3/temporal-force-directed-graph
Eventually, the layout converges on a minimum energy solution, where the spring lengths are minimal.
Let’s bring that to your problem. We will define a potential field. The main and other circles provide repulsive forces. The yellow ring provides an attractive force.
Choose a random point on the plane. Determine whether it is feasible, and reject it if not (for example, if it overlaps the Main or an Other circle). Now we are left with a circle in a feasible but non-optimal location. We will move this circle to new candidate positions over several time steps: sum up the attractive and repulsive forces acting on it, and move it a small amount in the indicated direction. At some point it will likely try to move to an infeasible position, perhaps overlapping the Main circle. At that point we are done, and we output its position as a good result.
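A hedged sketch of one such relaxation; the force constants and the attraction toward the yellow band's radius are arbitrary choices of mine, not tuned values:

import math, random

def relax(main, others, r, band_radius, steps=200, step_size=0.05):
    """Nudge a random feasible point toward the band at distance band_radius from the
    Main centre while being pushed away from the Main and Other circles (all radius r)."""
    def feasible(p):
        return all(math.dist(p, c) >= 2 * r for c in [main] + others)

    # Rejection-sample a feasible start near the Main circle (assumes one exists nearby).
    while True:
        p = (main[0] + random.uniform(-4 * r, 4 * r),
             main[1] + random.uniform(-4 * r, 4 * r))
        if feasible(p):
            break

    for _ in range(steps):
        fx = fy = 0.0
        # Attraction: pull the centre toward distance band_radius from the Main centre.
        d = math.dist(p, main) or 1e-9
        pull = band_radius - d
        fx += pull * (p[0] - main[0]) / d
        fy += pull * (p[1] - main[1]) / d
        # Repulsion from every circle, fading with distance squared.
        for c in [main] + others:
            dc = math.dist(p, c) or 1e-9
            push = (2 * r) / (dc * dc)
            fx += push * (p[0] - c[0]) / dc
            fy += push * (p[1] - c[1]) / dc
        q = (p[0] + step_size * fx, p[1] + step_size * fy)
        if not feasible(q):              # about to overlap: stop, report the last good spot
            break
        p = q
    return p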

Fitting a mesh and a drawing together

Suppose you're trying to render a user's freehand drawings using a 2D triangular mesh. Start with a plain regular mesh of triangles and color their edges to match the drawing as closely as possible. To improve the results, you can move the vertices of the mesh slightly, but keep them within a certain distance of where they would be in a regular mesh so the mesh doesn't become a mess. Let's say that 1/4 of the length of an edge is a fair distance, giving the vertices room to move while keeping them out of each other's personal space.
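A hedged sketch of just that movement constraint, assuming each vertex remembers its rest position in the regular mesh and the mesh's edge length is known:

import math

def clamp_vertex(rest, desired, edge_len, max_frac=0.25):
    """Move a vertex toward 'desired', but keep it within max_frac * edge_len of its
    rest position in the regular mesh so neighbouring vertices stay out of each
    other's personal space."""
    dx, dy = desired[0] - rest[0], desired[1] - rest[1]
    d = math.hypot(dx, dy)
    limit = max_frac * edge_len
    if d <= limit:
        return desired
    return (rest[0] + dx / d * limit, rest[1] + dy / d * limit)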
Here is a hand-made representation of roughly what we're trying to do. Since the drawing is coming freehand from the user, it's a series of line segments taken from mouse movements.
The regular mesh is slightly distorted to allow the user's drawing to be better represented by the edges of the mesh. Unfortunately the end result looks quite bad, but perhaps we could have somehow distorted the drawing to better fit the mesh, and the combination of the two distortions would have created something far more recognizable as the original drawing.
The important thing is to preserve angles, so if the user draws a 90-degree corner it ends up looking close to a 90-degree corner, and if the user draws a straight line it doesn't end up looking like a zigzag. Aside from that, there's no reason why we shouldn't change the drawing in other ways, like translating it, scaling it and so on, because we don't need to exactly preserve distances.
One tricky test case is a perfectly vertical line. The triangular mesh in the image above can easily handle horizontal lines, but a naive approach would turn a vertical line into a jagged mess. The best technique seems to be to horizontally translate the line until it passes through each horizontal edge alternating between 1/4 and 3/4 of the way along the edge. That way we can nudge the vertices to the left or right by 1/4 and get a perfect vertical line. That's obvious to a person, but how can an algorithm be made to see that? It involves moving the line further away from vertices, which is the opposite of what we usually want.
Is there some trick to doing this? Does anyone know of a simple algorithm that gives excellent results?

Closest distance to border of shape

I have a shape (in black below) and a point inside the shape (red below). What's the algorithm to find the closest distance between my red point and the border of the shape (the green point on the graph)?
The shape border is not a series of lines but a randomly drawn shape.
Thanks.
So your shape is defined as a bitmap and you can access the pixels.
You could scan ever growing squares around your point for border pixels. First, check the pixel itself. Then check a square of width 2 that covers the point's eight adjacent pixels. Next, width 4 for the next 16 pixels and so on. When you find a border pixel, record its distance and check against the minimum distance found. You can stop searching when half the width of the square is greater than the current minimum distance.
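A hedged sketch of that growing-square scan, assuming is_border(x, y) encodes whichever greyscale criterion you settle on below and handles out-of-range coordinates:

import math

def nearest_border(px, py, is_border, max_half_width):
    """Scan squares of growing half-width around (px, py); stop once the half-width
    exceeds the best Euclidean distance found so far."""
    if is_border(px, py):
        return 0.0
    best = math.inf
    half = 1
    while half <= max_half_width and half < best:
        # Top and bottom rows of the square's perimeter.
        for dx in range(-half, half + 1):
            for dy in (-half, half):
                if is_border(px + dx, py + dy):
                    best = min(best, math.hypot(dx, dy))
        # Left and right columns, excluding the corners already visited.
        for dy in range(-half + 1, half):
            for dx in (-half, half):
                if is_border(px + dx, py + dy):
                    best = min(best, math.hypot(dx, dy))
        half += 1
    return best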
An alternative is to draw Bresenham circles of growing radius around the point. The method is similar to the square method, but you can stop as soon as you have a hit, because all points on the circle are supposed to be at the same distance from your point. The drawback is that this method is somewhat inaccurate, because the circle is only an approximation. You will also miss some pixels along the diagonals, because Bresenham circles have artefacts.
(Both methods are still quite brute force and, in the worst case of a fully black bitmap, will visit every pixel.)
You need a criterion for a pixel being on the border. Your shape is antialiased, so pixels on the border are smoothed into shades of grey. If your criterion is any pixel that isn't black, you will choose a point a bit inside the shape. If you require pure white, you'll land a bit outside. Perhaps it's best to choose a pixel with a grey value greater than 0.5 as the border.
If you have to find the closest border point for many points on the same shape, you can preprocess the data and use other methods of nearest-neighbour search.
As always, it depends on the data, in this case, what your shapes are like and any useful information about your starting point (will it often be close to a border, will it often be near the center of mass, etc).
If they are similar to what you show, I'd probably test the border points individually against the start. Now the problem is how you find the border without having to edge detect the entire shape.
The problem is that you can apparently have sharply concave borders (think of a circle with a tiny spike-like sliver jutting into it). In that case you just need to edge detect the shape and test every point.
I think these will work, but don't hold me to it. Computational geometry seems to be very well understood, so you can probably find a pro at this somewhere:
Method One
If the shape is well behaved, or you don't mind being wrong, try this (a rough sketch follows the steps):
1- Draw four lines, dividing the shape into four quadrants, and check the distance to each border. By "draw" I mean keep going north until you hit a white pixel, then do the same going south, west, and east.
2- Take the two lines drawn so far that have the closest intersection points, bisect the angle they form, and add the new line to your set.
3- Keep repeating step two until you are within a tolerance you can be happy with.
Actually, you can stop before that: once the interval is small enough, just trace the border between two close points, checking each point between them to refine the final answer.
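A hedged sketch of that direction-bisection heuristic, assuming inside(x, y) returns True for shape pixels (rounding coordinates as it sees fit); as warned above, it can settle on a local minimum for badly behaved shapes:

import math

def march(px, py, angle, inside, max_dist=10_000):
    """Step along 'angle' from (px, py) until leaving the shape; return the distance."""
    d = 0
    while d < max_dist and inside(px + d * math.cos(angle), py + d * math.sin(angle)):
        d += 1
    return d

def bisect_search(px, py, inside, iterations=16):
    # Step 1: the four axis directions.
    hits = {a: march(px, py, a, inside)
            for a in (0.0, math.pi / 2, math.pi, 3 * math.pi / 2)}
    for _ in range(iterations):
        # Step 2: bisect the angle between the two shortest hits so far.
        a1, a2 = sorted(hits, key=hits.get)[:2]
        mid = math.atan2((math.sin(a1) + math.sin(a2)) / 2,
                         (math.cos(a1) + math.cos(a2)) / 2)
        if mid in hits:                  # bisector already tried: we have converged
            break
        hits[mid] = march(px, py, mid, inside)
    return min(hits.values())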
Method Two (this will work with poorly behaved shapes and plays well with anti-aliasing):
1- Draw a line in any direction until you hit the border (black to white). This will be your starting distance.
2- Draw a circle at this distance, noting every time you go from black to white or white to black. These are your intersection points.
As long as you have more than two points, divide the radius in half and try again.
If you have no points, increase your radius by 50% and try again (basically a binary search until you get down to two points; if you get one, you got lucky and found your answer).
3- Your closest point lies in the region between your two points. Run along the border, checking each one.
If you want to reduce the cost of step 3, you can keep doing step 2 until you get a small enough range to brute force in step 3.
Also, to prevent a very unlucky start, draw four initial lines (for example north, east, south, and west) and start with the smallest distance. They are easy to draw and greatly reduce your chance of picking exactly the longest distance and accidentally thinking that single pixel is the answer.
Edit: one last optimization: because of the symmetry, you only need to calculate the circle points (the points that make up the border of the circle) for the first quadrant and then mirror them. That should greatly cut down on computation time.
If you define the distance as 'the minimum number of steps needed to get from the start pixel to any pixel on the border', then this problem can be solved with any shortest-path search algorithm, such as breadth-first search, or even better, A* search.
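A hedged sketch of the breadth-first variant, where distance is counted in 4-connected pixel steps rather than Euclidean length; is_border and in_bounds are caller-supplied predicates:

from collections import deque

def bfs_distance(start, is_border, in_bounds):
    """Minimum number of 4-connected steps from 'start' to the nearest border pixel."""
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (x, y), d = queue.popleft()
        if is_border(x, y):
            return d
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if in_bounds(*nxt) and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, d + 1))
    return None                          # no border reachable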

Collision direction detection with vectors [illustration included]

I'm currently working on a jump 'n' run prototype in HTML5 canvas.
The language is actually not that important, I just need a hint on the algorithm.
First look at this illustration: http://i.imgur.com/3CwBI.png
As you can see I have two rectangles that collide with each other (the white one is the player, the gray one a static obstacle).
While precalculating the next frame, I need to fix the player's position if a collision is about to happen. A human can clearly tell that the white rectangle in the image is going to land on top of the platform (assuming linear motion). But how do you tell that to the program?
I'm moving the player with 2d vectors.
Edit: I can already detect the collision; I just need to know the direction, so I can fix the player's position on the corresponding side of the obstacle.
For each corner of your player, draw a line segment between where it is now and where it will be next frame.
For each of these line segments, and for each line segment making up the platform, check whether the two segments intersect.
If any intersections occur, then the player will collide with the platform in the next frame.
Edit:
90% of the time, the red line segments only collide with a single line segment belonging to the platform. If the red segments hit the platform's left edge, the player struck the wall; if they hit the platform's top edge, the player landed on the platform.
One corner case is when a collision occurs both on the top and the side.
In that case, to determine which collision "really" occurs, you need to decide which one happens first in time. The earliest intersection is the one closest to the earlier player rectangle. In the image above, if the upper-right player is the earlier one, the earliest collision is with the top of the platform.
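A hedged sketch of the whole test: sweep each corner along its motion, intersect against named platform edges, and keep the hit with the smallest parameter t along the motion (the earliest in time). The edge-naming scheme is my own illustration.

def intersect_t(p, q, a, b):
    """Parameter t in [0, 1] where the moving segment p->q crosses edge a->b, or None."""
    rx, ry = q[0] - p[0], q[1] - p[1]
    sx, sy = b[0] - a[0], b[1] - a[1]
    denom = rx * sy - ry * sx
    if denom == 0:
        return None                      # parallel (collinear overlap ignored)
    t = ((a[0] - p[0]) * sy - (a[1] - p[1]) * sx) / denom
    u = ((a[0] - p[0]) * ry - (a[1] - p[1]) * rx) / denom
    return t if 0.0 <= t <= 1.0 and 0.0 <= u <= 1.0 else None

def first_hit(corners_now, corners_next, platform_edges):
    """corners_now/next: the player's corners this frame and next frame, in the same order.
    platform_edges: list of (name, a, b) tuples, e.g. ('top', (x0, y0), (x1, y0)).
    Returns (earliest_t, edge_name) or None if no collision happens this frame."""
    best = None
    for p, q in zip(corners_now, corners_next):
        for name, a, b in platform_edges:
            t = intersect_t(p, q, a, b)
            if t is not None and (best is None or t < best[0]):
                best = (t, name)
    return best

If the winning edge is 'top', snap the player onto the platform; if it is 'left' or 'right', stop the horizontal motion at the wall.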

Is a closed polygonal mesh flipped?

I have a 3d modeling application. Right now I'm drawing the meshes double-sided, but I'd like to switch to single sided when the object is closed.
If the polygonal mesh is closed (no boundary edges/completely periodic), it seems like I should always be able to determine if the object is currently flipped, and automatically correct.
Being flipped means that my normals point into the object instead of out of the object. Being flipped is a result of a mismatch between my winding rules and the current frontface setting, but I compute the normals directly from the geometry, so looking at the normals is a simple way to detect it.
One thing I was thinking was to take the bounding box, find the highest point, and see if its normal points up or down - if it's down, then the object is flipped.
But it seems like this solution might be prone to errors with degenerate geometry, or floating point error, as I'd only be looking at a single point. I guess I could get all 6 axis-aligned extents, but that seems like a slightly better kludge, and not a proper solution.
Is there a robust, simple way to do this? Robust and hard would also work.. :)
This is a robust, but slow way to get there:
Take a corner of a bounding box offset from the centroid (force it to be guaranteed outside your closed polygonal mesh), then create a line segment from that to the center point of any triangle on your mesh.
Measure the angle between that line segment and the normal of the triangle.
Intersect that line segment with each triangle face of your mesh (including the tri you used to generate the segment).
If there are an odd number of intersections, the segment reaches that triangle from outside the mesh, so an outward-facing normal should point back toward the corner point: the angle between the normal and the direction from the triangle toward the corner point should be less than 90 degrees.
If there are an even number of intersections, that angle should be greater than 90 degrees. If the measured angles disagree, the mesh is flipped.
This should work for very complex surfaces, but they must be closed, or it breaks down.
"I'm drawing the meshes double-sided"
Why are you doing that? If you're using OpenGL, there is a much easier way that saves you all that work. Use:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, 1);
With this, all the polygons are always two sided.
The only reason you would want one-sided lighting is if you have an open or partially inverted mesh and you want to indicate which parts belong to the inside by leaving them unlit.
Generally, the problem you're posing is an open problem in geometry processing and AFAIK there is no sure-fire general way that can always determine the orientation. As you suggest, there are heuristics that work almost always.
Another approach is reminiscent of a famous point-in-polygon algorithm: choose a vertex on the mesh and shoot a ray from it in the direction of its normal. If the ray hits an even number of faces, the normal points to the outside; if it hits an odd number, the normal points to the inside. Take care not to count the point of origin as an intersection. This approach only works if the mesh is a closed manifold, and it can hit edge cases if the ray happens to pass exactly between two polygons, so you might want to repeat it several times and take a majority vote.
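A hedged sketch of that vertex test, using a standard ray-triangle intersection (Möller-Trumbore); the small offset along the normal is my way of not counting the point of origin, and triangles are assumed to be numpy vertex triples:

import numpy as np

def ray_hits_triangle(orig, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore: distance along the ray to the triangle, or None if it misses."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return None
    inv = 1.0 / det
    s = orig - v0
    u = np.dot(s, p) * inv
    if u < 0 or u > 1:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv
    if v < 0 or u + v > 1:
        return None
    t = np.dot(e2, q) * inv
    return t if t > eps else None

def normal_points_outward(vertex, normal, triangles, offset=1e-4):
    """Shoot a ray from just off the vertex along its normal and count the faces it hits.
    For a closed manifold, an even count means the normal points out of the mesh."""
    orig = vertex + offset * normal      # nudge off the surface so the origin isn't counted
    hits = sum(1 for v0, v1, v2 in triangles
               if ray_hits_triangle(orig, normal, v0, v1, v2) is not None)
    return hits % 2 == 0

Repeating this from a handful of well-separated vertices and taking the majority vote guards against the ray grazing an edge shared by two faces.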
