How can I efficiently implement raycasting through a 2D mesh?

I'm implementing a nav mesh pathfinding system and I need to be able to raycast between two points in the mesh and get a list of all edges crossed by the ray. Obviously I'll need to be able to test for individual line intersections, but I expect there to be an efficient way of picking which lines actually need to be checked rather than brute force iterating over every edge in the entire mesh. Does anyone know how I might go about that?

If your mesh is a rectangular grid, consider the efficient method of Amanatides and Woo from the paper "A Fast Voxel Traversal Algorithm for Ray Tracing".
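A minimal Python sketch of the 2D version of that traversal, assuming a uniform grid of unit-sized cells with the segment endpoints given in grid coordinates (the function name is mine, not from the paper):

```python
import math

def grid_traverse(x0, y0, x1, y1):
    """Yield the (ix, iy) cells visited by a segment in a unit grid
    (2D version of the Amanatides & Woo traversal)."""
    ix, iy = int(math.floor(x0)), int(math.floor(y0))
    end_ix, end_iy = int(math.floor(x1)), int(math.floor(y1))
    dx, dy = x1 - x0, y1 - y0
    step_x = 1 if dx > 0 else -1
    step_y = 1 if dy > 0 else -1
    # Parametric distance to the first vertical/horizontal grid line,
    # and the distance between successive grid lines along the ray.
    t_max_x = ((ix + (step_x > 0)) - x0) / dx if dx != 0 else math.inf
    t_max_y = ((iy + (step_y > 0)) - y0) / dy if dy != 0 else math.inf
    t_delta_x = abs(1 / dx) if dx != 0 else math.inf
    t_delta_y = abs(1 / dy) if dy != 0 else math.inf
    yield ix, iy
    while (ix, iy) != (end_ix, end_iy):
        # Step into whichever neighbouring cell the ray enters first.
        if t_max_x < t_max_y:
            t_max_x += t_delta_x
            ix += step_x
        else:
            t_max_y += t_delta_y
            iy += step_y
        yield ix, iy
```

Only the edges stored in the yielded cells ever need an exact segment-intersection test. For a general (non-grid) nav mesh, the analogous trick is to walk from triangle to triangle: find the triangle containing the start point, test the ray against its three edges, and step across whichever edge it exits through, so only the edges along the ray's corridor are ever checked.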

Related

Generate a 2D mesh from an outline

I have an outline (a list of points) for a plane I want to generate. The plane is quite big and I need evenly distributed vertices inside the outline. Each vertex has a color value from red to green to visualize some data on the plane. I need to visualize the data as precisely as possible, in real time.
My idea was to simply create a grid and adjust all the vertices outside of the outline. This turned out to be quite complex.
This is a quick example of what I want to achieve.
Is there any algorithm that solves this problem?
Is there another way to generate a mesh from an outline with evenly distributed vertices?
It sounds like you want to do something like this:
1) First, triangulate your polygon to create a mesh. There are plenty of options: https://en.wikipedia.org/wiki/Polygon_triangulation
2) Then, while any edge in the mesh is too long (meaning the points at either end may be too far apart), add the midpoint of the longest edge to the mesh, splitting each adjacent triangle in two.
The result is a mesh where every point is within a limited distance of other points in every direction. The mesh will not necessarily be optimal, in that it may have more points than strictly required, but it will probably satisfy your needs.
If you need to reduce the number of points and thin triangles, you can apply Delaunay edge flipping around each candidate edge first: https://en.wikipedia.org/wiki/Delaunay_triangulation#Visual_Delaunay_definition:_Flipping
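A minimal Python sketch of step 2, assuming the mesh is stored as a point list plus triangle index triples (the function name and representation are illustrative):

```python
import math

def refine(points, triangles, max_len):
    """Split the longest edge until no edge exceeds max_len.
    points: list of (x, y); triangles: list of (i, j, k) index triples.
    Splitting an edge splits every triangle that shares it, so the
    mesh stays a valid triangulation."""
    def length(i, j):
        (xa, ya), (xb, yb) = points[i], points[j]
        return math.hypot(xb - xa, yb - ya)

    while True:
        # Find the longest edge over all triangles.
        longest, edge = 0.0, None
        for i, j, k in triangles:
            for a, b in ((i, j), (j, k), (k, i)):
                if length(a, b) > longest:
                    longest, edge = length(a, b), (a, b)
        if longest <= max_len:
            return points, triangles
        # Insert the midpoint and split each triangle containing the edge.
        a, b = edge
        (xa, ya), (xb, yb) = points[a], points[b]
        m = len(points)
        points.append(((xa + xb) / 2, (ya + yb) / 2))
        new_triangles = []
        for tri in triangles:
            if a in tri and b in tri:
                c = next(v for v in tri if v not in (a, b))
                new_triangles += [(a, m, c), (m, b, c)]
            else:
                new_triangles.append(tri)
        triangles = new_triangles
```

The full edge scan makes each split O(n); a priority queue over edges makes this fast enough for large meshes.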
Although it's not totally clear from the question, the marching cubes algorithm adapted to two dimensions (marching squares) comes to mind. A detailed description of the two-dimensional version can be found here.
Delaunay meshing can create evenly distributed vertices inside a shape. The image below shows a combined grid and Delaunay mesh. You may have a look here.

2D geometry push algorithm

I have a rectangular board containing some disjoint 2D shapes: rectangles, polygons, and more complex geometries such as shapes bounded by arc and line edges.
Usually they are packed compactly, but some of the shapes can be rotated or translated.
If we move one geometry in a given direction, the adjacent geometry should also be moved or rotated; it looks like the first geometry pushes the second, and the second might push two other geometries in turn. Finally, we may reach another stable state, or there is no room left to push.
Is there any existing research on this?
Let's first focus on simple polygons, convex and non-convex.
The push might be in any direction.
(example image)
I've done some searching but could not find existing papers on this topic.
Can we simulate it through mechanics or dynamics? Or with a pure geometry algorithm?
Even just some keywords for a paper search would be very useful.
It's similar to the auto-push concept in EDA tools: the user moves one element (a pin or wire) of a circuit, and the software automatically pushes adjacent elements so that the topology is preserved and the design rules are still met.
I think I can use some concepts from mechanics, at least to compute the direction of motion:
If polygons A and B touch at a single point, then pushing A in some direction generates a force on B along the contact normal. But that force may not produce a move; we need to loop over all the contacts, or propagate until we reach the boundary, to check how far each piece can move.
Let's ignore rotation first.
I'm posting an answer because I don't have enough reputation to comment. If I haven't misunderstood the problem, geometrically this sounds like a collision-detection problem: you apply a transformation (translation, rotation) to one geometry and check whether its new position overlaps another geometry; if it does, you apply another transformation to the second one. Collision detection is a big topic in games and simulation.
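As a sketch of that loop, here is a crude translation-only version using shapely (assumed available). Each pushed shape is shoved by the full offset, whereas a real implementation would compute the minimal translation that separates each overlapping pair:

```python
from collections import deque
from shapely.geometry import Polygon
from shapely.affinity import translate

def push(board, shapes, first, dx, dy):
    """Translate shapes[first] by (dx, dy) and propagate the motion to
    every shape it overlaps (single pass, translation only, no rotation).
    Returns the moved shapes, or None if there is no room left to push."""
    moved = list(shapes)
    queue, pushed = deque([first]), {first}
    while queue:
        i = queue.popleft()
        moved[i] = translate(moved[i], xoff=dx, yoff=dy)
        if not board.contains(moved[i]):
            return None  # shoved off the board: no stable state
        for j in range(len(moved)):
            # Positive overlap area means a real collision, not just touching.
            if j not in pushed and moved[i].intersection(moved[j]).area > 0:
                pushed.add(j)
                queue.append(j)
    return moved
```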

3D algorithm for clamping a model to view frustum

What would be an efficient approach for constraining an object so that it's always at least partially intersecting the view frustum?
The use case is that when viewing a model I want to clamp camera panning, as well as model translation, so that the view frustum is never looking at empty space.
One approach I tried was to wrap the model objects in bounding volumes, then enforce the constraint when those fall outside the frustum. I've tried bounding boxes so far, but am considering using a minimal convex hull.
The problem is that when you zoom in close enough, it's still possible to be looking at empty space within the boundary, as shown in the attached diagram.
This is for a WebGL application, so it needs to be fairly efficient in JavaScript, even for thousands of vertices.
Ideally you would have an AABB tree of your mesh; then you can recursively project nodes onto the camera/screen until you get an intersection.
http://www.codersnotes.com/algorithms/projected-area-of-an-aabb
edit: this is just what frustum culling against an AABB tree does anyway, so looking for an optimized solution means looking at optimized frustum-culling techniques:
https://fgiesen.wordpress.com/2010/10/17/view-frustum-culling/
http://www2.in.tu-clausthal.de/~zach/teaching/cg_literatur/vfc_bbox.pdf
If a rough approximation is acceptable, I would try a point cloud. First create a list of points, either from every Nth mesh vertex or every Nth face centre, with N being, for example, 10. Once you have this point array, all you do is check whether any of the points is inside the frustum while updating the camera orientation. If none is, the user has moved or rotated the camera too far and you need to restore the last acceptable orientation.
I know this may seem quite crude, but I think it is fairly easy to implement, and testing a vertex against the frustum is just a couple of multiplications per plane. It will not be perfect, though.
You can also make the frustum a little smaller to ensure there is some border around the object.
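A minimal sketch of that check, in Python for brevity (it translates directly to JavaScript), assuming the six frustum planes are given as (a, b, c, d) coefficients with normals pointing inward:

```python
def point_in_frustum(point, planes):
    """True if a*x + b*y + c*z + d >= 0 for all six frustum planes."""
    x, y, z = point
    return all(a * x + b * y + c * z + d >= 0 for a, b, c, d in planes)

def clamp_camera(sample_points, planes, last_good_pose, new_pose):
    """Accept the new camera pose only while some sample point stays visible."""
    if any(point_in_frustum(p, planes) for p in sample_points):
        return new_pose
    return last_good_pose
```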

Triangulating a set of voxels

I haven't done much research on this yet, but I'm just asking around in case this has been done before.
Here's my problem:
I have a set of cubes in a grid of arbitrary height, width, and depth; each cube is either filled or empty. What I'm looking to do is develop an algorithm that creates an optimal mesh for this set of cubes by combining the faces of neighbouring cubes into one.
My current idea is to step through the set 6 times (twice along each axis, once forwards and once back) and look at the set in cross-section. Ignoring cubes that won't be visible from the outside, I'd like to build a polygonal face for the cubes in each section. At the end of this, I should have (x+y+z)*2 of these faces, and combining them should give me the resulting optimized mesh for the voxel set.
I'm stumped on the triangulation process, however.
If you want to create a mesh from voxel data, the most commonly used algorithm is marching cubes. However, I suggest searching for "isosurface extraction" for more advanced methods.
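Before reaching for marching cubes, the "ignore cubes that won't be visible from the outside" step from the question is just a neighbour test. A minimal sketch, assuming the voxels are stored as a set of filled (x, y, z) integer coordinates:

```python
# The six axis-aligned neighbour offsets of a cube.
NEIGHBOURS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
              (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def exterior_faces(filled):
    """Return (cell, outward normal) pairs for every face that borders
    an empty cell; only these faces belong in the final mesh."""
    faces = []
    for x, y, z in filled:
        for dx, dy, dz in NEIGHBOURS:
            if (x + dx, y + dy, z + dz) not in filled:
                faces.append(((x, y, z), (dx, dy, dz)))
    return faces
```

Merging the resulting coplanar faces into larger rectangles is what voxel-engine circles usually call greedy meshing, and each merged rectangle then triangulates trivially into two triangles.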

How can I efficiently detect the intersection of a ray and a mesh?

I have several 3D models in an OpenGL ES application for iPhone, and at some point I want the user to touch the screen and act on them. The problem is recognizing which of the models rendered on screen has been touched. To achieve this I calculated the picking ray as suggested by the OpenGL FAQ, and now I want to detect whether it intersects any model.
I've looked at the Irrlicht source code and found that I can calculate the intersection between the ray and each individual model triangle (they do this by first checking whether the ray intersects the triangle's plane, then testing whether the intersection point falls inside the triangle; there is a more efficient way to do this, as stated here).
My question is: do I really need to do all this computation for each single triangle of every model? Isn't there a better way (maybe not so precise) to achieve a similar result?
You are quite right, there ARE better ways than testing every triangle. One method is to build an octree around the object: if the ray intersects a node's box, you check which of its 8 children it intersects, and so on, until you are left with only a few triangles to run an intersection test against. Another method is to build a k-d tree.
There are many, many ways to handle this problem efficiently. Look up information on ray-tracing acceleration structures.
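As the building block for any of those structures, here is a minimal Python sketch of the standard ray/AABB slab test: if a node's box fails it, the whole subtree is skipped, and only the triangles in leaf nodes that pass ever get the exact ray-triangle test.

```python
import math

def ray_hits_aabb(origin, direction, box_min, box_max):
    """Slab test: does origin + t * direction (t >= 0) hit the box?"""
    t_near, t_far = 0.0, math.inf
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if not lo <= o <= hi:
                return False  # parallel to this slab and outside it
        else:
            t1, t2 = (lo - o) / d, (hi - o) / d
            # Shrink the interval of t where the ray is inside all slabs.
            t_near = max(t_near, min(t1, t2))
            t_far = min(t_far, max(t1, t2))
            if t_near > t_far:
                return False
    return True
```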
