Real-time vector field interpolation for ballistics simulation - algorithm

I'm working on a real-time ballistics simulation of many particles under the effect of highly non-uniform wind. The wind data is obtained from CFD in the form of a discretized 2D vector field (an unstructured mesh, where each grid point has an associated vector giving the direction and magnitude of the air velocity).
The problem is that I need to be able to extract the wind vector at any position that a particle occupies, so that aerodynamic drag can be computed and injected into ballistics physics. This is trivial if the wind data can be approximated by an analytical/numerical vector field where a vector can be computed with an algebraic expression. However, the wind data I'm working with is quite complex and there doesn't seem to be any way to approximate it.
I have two ideas:
1. Find a way to interpolate the vector field every time each particle's position is updated. This sounds computationally expensive, so I'm not sure it can be done in real time. Also, the mesh is unstructured, and I'm not sure whether 2D interpolation can even be done on such a mesh.
2. Just pick the grid point closest to the particle's position and take the vector from there (given that the mesh is fine enough for this to accurately represent the actual vector field). This then turns into a real-time nearest-neighbor problem with rapid and numerous queries.
I'm not sure whether these are the only two solutions to this problem, or whether either can be done in real time at all. How should I go about solving this?
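For reference, here is roughly what the two ideas look like when sketched with SciPy (the node and particle arrays are made-up stand-ins for the actual CFD data). In both cases the expensive structure is built once up front and then queried in bulk every frame:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator
from scipy.spatial import cKDTree

# Stand-ins for the CFD output: mesh node positions and the wind vector at each node.
nodes = np.random.rand(5000, 2) * 100.0   # (x, y) of each unstructured mesh point
wind = np.random.randn(5000, 2)           # (u, v) air velocity at each point

# Idea 1: piecewise-linear interpolation on a Delaunay triangulation
# of the unstructured mesh (the triangulation is built once, up front).
interp = LinearNDInterpolator(nodes, wind)

# Idea 2: nearest-neighbor lookup through a k-d tree (also built once).
tree = cKDTree(nodes)

particles = np.random.rand(100000, 2) * 100.0   # current particle positions
w_interp = interp(particles)                    # (100000, 2); NaN outside the mesh hull
_, nearest = tree.query(particles)              # index of the closest node per particle
w_nearest = wind[nearest]
```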

Related

Vector image editing by dragging a polygon as a brush

The goal is to do simple vector image editing by dragging the mouse across the screen like a polygon-shaped paintbrush, creating the Minkowski sum of the brush and the path of the mouse. The new polygon would be subtracted from any previously existing polygons of a different color and merged with any existing polygons of the same color.
The plan is to take each mouse movement as a line segment from the mouse's previous position to its current position, calculate the Minkowski sum on that line segment, then use the Weiler–Atherton clipping algorithm to update the existing polygons to include that Minkowski sum.
Since it seems likely that Weiler–Atherton would cause UI delays if run for every mouse movement, we plan to defer that step by putting it into another thread that can take its time catching up to the latest mouse movements, or alternatively to save all Weiler–Atherton calculations until the drawing is finished and then do them as a bulk operation when saving. We worry that this may lead to the accumulation of a very large number of overlapping polygons, to the point that the UI would be delayed by the time required to render them all.
The question is: Is the above plan the way that Inkscape and other serious vector graphics editing software would perform this operation? It seems like a mad plan, both in the trickiness of the algorithm and its computational complexity. What would an expert do?
Another option under consideration: do the painting using simple raster operations and then convert the raster into a vector image as the final step. Conversion from raster to vectors seems no less tricky than Weiler–Atherton, and the quality of the final output might suffer, but could it be the better option?
While the user is holding down the mouse button and drawing, you can remember all the mouse-movement line segments and simultaneously render the brush-segment Minkowski sums to a screen-resolution bitmap.
You can use the bitmap for painting the screen until the user releases the button. At that time you can calculate the union of all the line segment Minkowski sums and add the resulting shape to your drawing.
To calculate the union of so many shapes simultaneously, some kind of sweep-line algorithm would be best. You should be able to do the job in O(N log N) or linear time, which will not cause any noticeable delay.
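For a convex brush this is less daunting than it sounds: the Minkowski sum of a convex polygon with a line segment is just the convex hull of the polygon placed at both segment endpoints. A rough sketch (the brush and path are made up, and Shapely's unary_union stands in for the deferred union step):

```python
import numpy as np
from scipy.spatial import ConvexHull
from shapely.geometry import Polygon
from shapely.ops import unary_union

def swept_segment(brush, p0, p1):
    """Minkowski sum of a convex brush polygon with segment p0 -> p1:
    the convex hull of the brush translated to both endpoints."""
    pts = np.vstack([brush + p0, brush + p1])
    return Polygon(pts[ConvexHull(pts).vertices])

brush = np.array([[-2.0, -2.0], [2.0, -2.0], [2.0, 2.0], [-2.0, 2.0]])  # square brush
path = np.array([[0.0, 0.0], [5.0, 3.0], [9.0, 4.0], [12.0, 10.0]])     # mouse samples

# One swept shape per mouse-movement segment, unioned on mouse release.
shapes = [swept_segment(brush, path[i], path[i + 1]) for i in range(len(path) - 1)]
stroke = unary_union(shapes)   # the polygon to merge into / subtract from the drawing
```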
IMO the bottleneck in Weiler–Atherton is the detection of the intersections, which requires O(N²) operations when applied by brute force. The rest of the processing is a reorganization of the links between the vertices and intersections, which should be limited to O(N), or even O(NI), where NI denotes the number of intersections.
In this particular case, you can probably speed up the search for intersections by means of gridding or a hierarchy of bounding boxes. (Note that gridding parallels Matt's idea of using an auxiliary bitmap rendering.)
Unless you really have a billion edges, I wouldn't fear the running time.
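To make the gridding idea concrete, here is a toy sketch (my own, not implied by the answer above): bucket each edge's bounding box into uniform cells, and run the exact segment-segment intersection test only on pairs that share a cell.

```python
from collections import defaultdict
from itertools import combinations

def candidate_pairs(segments, cell):
    """segments: list of ((x0, y0), (x1, y1)); cell: grid cell size.
    Returns pairs of segment indices whose bounding boxes share a grid
    cell -- only these need the exact intersection test."""
    grid = defaultdict(list)
    for k, ((x0, y0), (x1, y1)) in enumerate(segments):
        for i in range(int(min(x0, x1) // cell), int(max(x0, x1) // cell) + 1):
            for j in range(int(min(y0, y1) // cell), int(max(y0, y1) // cell) + 1):
                grid[(i, j)].append(k)
    pairs = set()
    for bucket in grid.values():
        pairs.update(combinations(bucket, 2))
    return pairs
```

With a cell size near the median edge length, this typically cuts the candidate count from O(N²) to near-linear for well-distributed edges.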
If you want to do this the way professional vector graphics software does, with arbitrary brush shapes (including features such as the brush shape reacting to speed or stylus pressure), it can unfortunately be quite involved.
I was experimenting with the method described in "Ahn, Kim, Lim - Approximate General Sweep Boundary of a 2D Curved Object". It seems to handle a lot of the cases I would expect from a professional drawing application, especially the details: the brush shape can be dynamic while sweeping; the boundary curve is generated adaptively for the resolution you want; variable-width offsetting of 2D curves; etc.
It seems you can simplify this method if you don't need full generality, but I would like to add it here as a reference.
PS: I was searching for a non-paywalled link to include here in the answer and then this came up – Looking for an efficient algorithm to find the boundary of a swept 2d shape. Looks very similar to what you're trying to do :).

How to design an algorithm to linearly interpolate points in 3D space and create an indented surface?

I am trying to develop a function that takes a matrix as input and produces a contour plot as output. There is a function in MATLAB that fits my purpose, but as a self-examination exercise I would like to do this myself. Like I said, my purpose is self-examination; I will not use this in an app or anything.
Here is the deal: I have tried to do this with color maps, but what I got is a graph that looks like an uncalibrated TV channel.
The graph looks like that because my function is discrete. I need to make this discrete function continuous.
I found a solution to this problem. The solution uses linear interpolation to connect the points with one another. A surface created this way will have lots of rough edges, but that is not a problem. Connecting the points in space to create surfaces is fine, but those surfaces also have points on them, and computing all of them would be very expensive. For example, let's say I would like to create a surface from a 20x20 matrix: the rows and columns will be the x and y coordinates, and the value in each cell will be the z coordinate.
I need to connect at least 3 points to make a surface patch, and those patches in turn contain points that have to be evaluated. I want to design an algorithm that reduces this computation. How do I do that?
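For what it's worth, here is a minimal sketch of that idea as plain bilinear interpolation on a regular grid, in pure NumPy with no library interpolators, since the point is self-examination (splitting each grid cell into two triangles instead would give the piecewise-planar patches described above):

```python
import numpy as np

def bilinear_upsample(z, factor):
    """Upsample a 2D grid by linearly blending the four corner values
    of the cell each fine-grid point falls in."""
    n, m = z.shape
    ys = np.linspace(0, n - 1, factor * (n - 1) + 1)  # fine-grid coordinates,
    xs = np.linspace(0, m - 1, factor * (m - 1) + 1)  # in coarse-grid units
    out = np.empty((ys.size, xs.size))
    for i, y in enumerate(ys):
        y0 = min(int(y), n - 2); ty = y - y0
        for j, x in enumerate(xs):
            x0 = min(int(x), m - 2); tx = x - x0
            out[i, j] = ((1 - ty) * (1 - tx) * z[y0, x0]
                         + (1 - ty) * tx * z[y0, x0 + 1]
                         + ty * (1 - tx) * z[y0 + 1, x0]
                         + ty * tx * z[y0 + 1, x0 + 1])
    return out

z = np.random.rand(20, 20)      # the 20x20 matrix from the question
fine = bilinear_upsample(z, 8)  # a 153x153 smoothed version to contour
```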

What's the main difference between B-Rep and mesh representation?

I know that B-Rep (e.g., Parasolid) is the popular solid representation. In my past experience I have always worked with triangle mesh representations such as the OBJ and STL file formats. I am wondering why B-Rep is better than a mesh representation. What's the main difference?
A boundary representation (b-rep) solid modeler uses a combination of precise geometry and boundary topology to represent objects such as solids (3D manifolds), surfaces (2D manifolds) and wires (1D manifolds).
The salient property of a b-rep is that it represents geometry precisely. Faces of the b-rep are defined by the equations of the surfaces associated with the face. Edges are represented with precise curves, often the curve of intersection of its adjacent faces. (Sometimes approximate curves are used when precise curves are too difficult to compute or when faces don't fit together exactly; this is called a "tolerant" model.)
Because the underlying geometry of a b-rep is precise, the model can be queried (in principle) to arbitrary precision. For example, if you have a b-rep of a box with a cylindrical hole through it, you can query the volume of the box to an arbitrary precision. With a tessellated model you can only compute the volume to the precision of the tessellation, which can never represent the cylindrical hole exactly.
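A quick numerical illustration of that point (a throwaway sketch, not from the answer): compute the volume of a UV-tessellated unit sphere by summing signed tetrahedra via the divergence theorem, and compare it with the exact 4/3·pi. The ratio approaches 1 as the tessellation is refined but never reaches it.

```python
import numpy as np

def tessellated_sphere_volume(r, n):
    """Volume of a UV-tessellated sphere of radius r with n bands of
    latitude and longitude, summed as signed tetrahedra from the origin."""
    theta = np.linspace(0, np.pi, n + 1)      # latitude
    phi = np.linspace(0, 2 * np.pi, n + 1)    # longitude
    t, p = np.meshgrid(theta, phi, indexing="ij")
    pts = r * np.stack([np.sin(t) * np.cos(p),
                        np.sin(t) * np.sin(p),
                        np.cos(t)], axis=-1)
    vol = 0.0
    for i in range(n):
        for j in range(n):                    # two triangles per quad
            a, b = pts[i, j], pts[i + 1, j]
            c, d = pts[i + 1, j + 1], pts[i, j + 1]
            vol += np.dot(a, np.cross(b, c)) + np.dot(a, np.cross(c, d))
    return vol / 6.0

exact = 4.0 / 3.0 * np.pi
for n in (8, 16, 64):
    print(n, tessellated_sphere_volume(1.0, n) / exact)  # < 1, approaching 1
```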
Another benefit of b-reps is they tend to be much more compact than tessellated models. As a simple example, a sphere represented as a b-rep has a single face associated with the geometry of the sphere. It only takes a center and radius to define that sphere, and a few bytes more for the b-rep data structure to support it. A tessellated model of a sphere may have many vertices, each with 3 coordinates.
Diving a little deeper, Boolean operations on a tessellation are problematic, since the facets on one of the bodies may not line up with the facets on the other. There needs to be some sort of rectification process which will add complexity and inaccuracy to the combined model. No such problem occurs with b-reps, since new curves can be computed as intersections of the surfaces that underlie the intersecting faces.
On the other hand, tessellated models are becoming more popular now that the technology for manipulating them is maturing. For example, with discrete differential geometry and discrete spectral methods we can manipulate the meshes in a Boolean operation in a way that minimizes the local changes to discrete curvature, or we can manipulate regions of the tessellation with simple controls that move many points.
Another benefit of tessellated models is that they are better for scanned data. If you scan a human face, there is no need to try to find precise surfaces to represent the data; the tessellated image is good enough.
First of all, better for what?
For example, for 3D printing or pure visualization purposes, a mesh representation is better suited.
B-Rep preserves the underlying geometry (surfaces, curves, points) as well as the connectivity between the model's topological items (faces, edges, vertices), thus allowing a richer operation (feature) set: filleting, blending, etc.

Best algorithm to interpolate on a grid

I have a set of points whose coordinates are given by the arrays x, y and z, and the value of the density field at each point is stored in the array d.
I would like to reconstruct the density field on a uniform grid. What's the best algorithm to do that?
I know that in Python the scipy module comes in handy with the griddata function, but I would like to write my own code; I just need a hint.
If you have some sort of scalar field and the points are the origins of the field, you can implement a brute force approach by walking all lattice points and calculating the field intensity given the sources. There are both recursive methods that allow "blanking" wide volumes where the field is more or less constant, and techniques to save some CPU time by calculating the variations from one point to the next.
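A toy version of that brute-force walk, assuming point sources with an inverse-square falloff (the field law here is made up for the example):

```python
import numpy as np

src = np.random.rand(50, 3)     # hypothetical source positions
q = np.random.rand(50)          # hypothetical source strengths

g = np.linspace(0.0, 1.0, 32)
gx, gy, gz = np.meshgrid(g, g, g, indexing="ij")
lattice = np.stack([gx, gy, gz], axis=-1)           # (32, 32, 32, 3)

# Walk every lattice point and sum the contribution of every source.
r2 = ((lattice[..., None, :] - src) ** 2).sum(-1)   # squared distances, (32, 32, 32, 50)
field = (q / (r2 + 1e-9)).sum(-1)                   # field intensity on the grid
```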
If the points you have are samplings of a value, then you will have to decompose your space into volumes and interpolate the values. You can employ a simple Voronoi decomposition (this is usually done in 2D for precipitation measurements) or a Delaunay tetrahedralization (you can look into TetGen's documentation). The first approach assumes that the function is constant throughout each Voronoi volume; the second allows a piecewise-linear interpolation within each tetrahedron.
If you need to smooth a 3D grid, the trilinear interpolation looks like the best approach.
There are also other methods used for fast visualization that involve maintaining a list of 3D points in order of distance from any given point in your regular grid. When moving through the grid, you recalculate distances using quadratic increments. Then you perform a simple interpolation based on a subset of points of chosen cardinality (i.e., if you consider the four nearest points at distances d1..d4, you calculate the value at P by weighing the values v1..v4 in proportion). This approach is fast and easy to implement yourself, but be warned that it underperforms wherever the minimum distance between points is less than the lattice step (you can compensate by considering more points where this happens, and the effect is less evident if the sampled function is smooth at the same scale).
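Here is a sketch of that last scheme (inverse-distance weighting over the k nearest samples), with a k-d tree standing in for the incremental distance bookkeeping; the data is made up, and the caveat about sample spacing below the lattice step still applies:

```python
import numpy as np
from scipy.spatial import cKDTree

def idw(pts, d, queries, k=4, eps=1e-12):
    """Interpolate scattered samples d at pts onto the query points by
    weighing the k nearest samples in inverse proportion to distance."""
    dist, idx = cKDTree(pts).query(queries, k=k)
    w = 1.0 / (dist + eps)                    # closer samples weigh more
    return (w * d[idx]).sum(axis=1) / w.sum(axis=1)

pts = np.random.rand(1000, 3)                 # scattered sample positions
d = np.sin(6 * pts[:, 0]) + pts[:, 1]         # stand-in density values
g = np.linspace(0, 1, 32)
grid = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)
density = idw(pts, d, grid).reshape(32, 32, 32)
```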
If you want to implement a mathematical method yourself, you need to learn the theory, of course. In this case, it's 3D scattered data interpolation.
Wikipedia, MATLAB help and scipy help say there are at least half a dozen different methods. WP has a fairly good description of them and there's a comparison article but I strongly suggest you find something in your native language on such a terminology-intensive subject.
One approach is to form the Delaunay triangulation of the scattered points [x, y, z] (actually a tetrahedralisation in your 3D case!) and perform interpolation within each element using a linear representation of the density field defined at the tetrahedron vertices.
To evaluate the density at each structured grid point you would (i) determine which tetrahedron the point lies within and (ii) evaluate the linear interpolant, as in the sketch below.
Forming the Delaunay triangulation is non-trivial, but there are a few good libraries that can be used for this, depending on your language of choice. One good option is CGAL.
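If a library handles the triangulation, the two steps reduce to a few lines. A sketch using scipy's Qhull wrapper (query points outside the convex hull come back as NaN):

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_interp(pts, d, queries):
    """Steps (i) and (ii): locate the tetrahedron containing each query
    point, then evaluate the linear interpolant via barycentric weights."""
    tri = Delaunay(pts)                  # tetrahedralisation of the samples
    s = tri.find_simplex(queries)        # (i) containing simplex, -1 if outside
    T = tri.transform[s]                 # (ii) affine maps to barycentric coords
    b = np.einsum("nij,nj->ni", T[:, :3], queries - T[:, 3])
    w = np.c_[b, 1.0 - b.sum(axis=1)]    # all four barycentric weights
    vals = (w * d[tri.simplices[s]]).sum(axis=1)
    vals[s == -1] = np.nan               # outside the convex hull
    return vals

pts = np.random.rand(2000, 3)                  # scattered sample positions
d = np.exp(-((pts - 0.5) ** 2).sum(axis=1))    # stand-in density values
g = np.linspace(0, 1, 24)
grid = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)
density = delaunay_interp(pts, d, grid).reshape(24, 24, 24)
```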
Hope this helps.

Minimize RMSD between two sets of points

I need to plot the transformation of a 3D object against time. I have the 3D shapes for each moment in time, but they are not guaranteed to be geometrically well placed, so I cannot just render them and slap the pictures together into a movie. I therefore need to align them so that they are pleasantly and consistently oriented with respect to the camera.
What I would do is take pairs of 3D objects, center them with respect to the geometric center, then perform the proper rotation around some axis so as to minimize the RMSD among the points. That's not hard, but I'd like to know if there's something ready out there, so as not to reinvent the math (and the code). Of course, I'd also accept objections to my method.
I'm working in python, but any code will do, and I will convert it.
The Kabsch algorithm does that. See: http://en.wikipedia.org/wiki/Kabsch_algorithm.
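For completeness, the whole algorithm is only a few lines of NumPy (a sketch; both point sets are assumed to be already centered, as in the plan above):

```python
import numpy as np

def kabsch(P, Q):
    """Rotation matrix R minimizing the RMSD between point sets P and Q,
    both (N, 3) and centered on their geometric centers.
    Apply as P @ R.T to align P onto Q."""
    H = P.T @ Q                             # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
```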
It appears that what I need is the Kabsch algorithm.
