At a given time-step I have sampled a lot of points from a fluid, and I want to extract the points which lie at the surface of the fluid. Does anyone know a good algorithm and any available code to do this?
I am aware of surface reconstruction, but the assumption there is that the sampled points
are on/near the surface. So I guess that would not be too useful here.
Is finding the convex hull of the points adequate? How many points do you have?
Maybe start with a convex hull and then modify it to allow a certain amount of concavity where there are large parts of the surface without any points nearby.
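If SciPy is available, the hull step is only a few lines; a minimal sketch (the random point array is a stand-in for your samples), with the concavity refinement still to be added on top:

```python
# A minimal sketch, assuming NumPy/SciPy. The convex-hull vertices give a
# first cut at the surface points; concave regions of the free surface would
# still need the refinement step described above (e.g. alpha shapes).
import numpy as np
from scipy.spatial import ConvexHull

points = np.random.rand(10000, 3)       # placeholder for the sampled fluid points
hull = ConvexHull(points)
surface_points = points[hull.vertices]  # points lying on the hull boundary
print(f"{len(surface_points)} of {len(points)} points lie on the convex hull")
```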
Otherwise try fitting a spline or similar polynomial function to the points. You need some sort of cost metric to measure how good your fit is, so that you don't bend your surface too much to reach the inner points. (Unless sharp, high-curvature sections are allowed, e.g. for breaking waves?)
Is this a tank that gets moved about, causing sloshing or something similar? If so, there may be some physics model you can use to suggest the shape of the surfaces likely to be found. The pattern of motion and the speed of waves in the fluid may tell you how many waves etc. you'd see.
I have a program that visualizes triangular meshes and allows the users to draw on the meshes using a pen. I want to have a "snapping" mode in my system. The snapping mode performs drawing corrections for the user in the sense that the user-drawn lines are snapped to the nearest edge (or the silhouette) of that part of the mesh.
I'm looking for an algorithm that computes the edges visible on the mesh from a given point of view. By edges, I'm referring to the outlines of the shape: corner points and the lines between them (similar to the definition of an edge in computer vision/image processing, such as Canny edges).
So far I've thought of two approaches for this:
Edge detection: so far I've only found this paper. Their method is understandable, yet the implementation is not trivial (due to tensor computations and some ambiguity in their explanations). The problem with this approach is that it produces an "edge strength" value in the range [0, 1] for every vertex, where a value of 1 indicates an edge vertex with high confidence. This introduces extra thresholding parameters into the system, which I'd rather not have. Their output looks like this (range [0, 1] scaled to [0, 65535]):
Rendering or non-photorealistic methods such as the one asked in this question or this paper. They seem to be able to create the silhouette that I'm after as can be seen below:
I'm not a graphics expert, and as yet I don't know whether their methods can be used to compute the feature lines rather than just render them.
I was wondering if anybody has any ideas about a good algorithm for what I want to do. Since the system is very interactive, performance is important. The snapping feature does not have to be enabled all the time, so if the method is computationally expensive, some delay when the "snapping enabled" mode is toggled can be tolerated while the algorithm computes the edges. Also, if you know of any implementation (preferably open source), I'd be grateful if you could share it with me.
There are two types of edges that you want to detect:
Silhouette edges are viewpoint-dependent: they correspond to the places where the line of sight is tangent to the surface. On a triangulated model they are easy to determine, as they are exactly the edges shared by a front-facing triangle and a back-facing one.
"Angular" edges are viewpoint-independent and formed by a discontinuity in the tangent-plane direction. Since a triangulated model has this kind of discontinuity everywhere, there is no exact criterion to find them; just set a threshold on the angle formed by two adjacent triangles, chosen so that smooth patches do not trigger it. (A sketch of both tests follows below.)
By this approach you will find the wanted edges in 3D.
This is not enough, however, as some of them are hidden by other surfaces. You have the option of integrating them as edges in the 3D model and letting the rendering engine do its job, or, if you have the courage, of implementing a hidden-line removal algorithm. (The Wikipedia article on the topic is a little terse.)
Since posting the question, something else came to mind: 2D edge detection is a very well-studied problem, so one way of tackling this is to perform 2D edge detection on the projected image of the mesh.
In other words, given a specific view of the mesh, one could generate a 2D image. A 2D edge detection algorithm (such as Canny edge detector) could then be run on the 2D image and the results can be back-projected to 3D to determine the silhouettes of the mesh in question. One possible advantage of this is simplicity!
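A sketch of that idea, assuming OpenCV is available; `render_view` and `camera.unproject` below are hypothetical placeholders for your renderer's off-screen pass and back-projection, while `cv2.Canny` is the real OpenCV call:

```python
# A sketch of the render-then-detect idea. `render_view` and `camera.unproject`
# are hypothetical placeholders; only cv2.Canny is a real library call here.
import cv2
import numpy as np

gray, depth = render_view(mesh, camera)   # hypothetical off-screen render pass
edges = cv2.Canny(gray, 50, 150)          # 2D edge map (uint8: 0 or 255)
ys, xs = np.nonzero(edges)                # pixel coordinates of the edge points
# Back-project each edge pixel to 3D using its depth and the camera intrinsics.
points3d = camera.unproject(xs, ys, depth[ys, xs])  # hypothetical helper
```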
Edit (2017):
Even though I moved away from this, I returned to the problem again for a different purpose. To anybody else looking into it: there is a paper about extracting various contours from meshes that is worth reading ("Suggestive Contours for Conveying Shape" by DeCarlo et al.).
Working implementations of the methods discussed in the paper are available here.
I am solving a fourth order non-linear partial differential equation in time and space (t, x) on a square domain with periodic or free boundary conditions with MATHEMATICA.
WITHOUT using conformal mapping, what boundary conditions at the edges or corners could I use to make the square domain "seem" like a circular domain for my non-linear partial differential equation, which is expressed in Cartesian coordinates?
The options I would NOT like to use are:
Conformal mapping
changing my equation to polar/cylindrical coordinates.
This is something I am pursuing purely out of interest, just in case someone screams bloody murder at what might be misconstrued as a homework problem! :P
That question was asked back when people discovered that the world is spherical and wanted to make rectangular maps of its surface...
It is not possible.
The reason it is not possible is that the sphere has intrinsic curvature, while the cube/parallelepiped has none. It can be shown that two surfaces with different intrinsic curvatures cannot be mapped onto each other while preserving infinitesimal distances, nor in such a way that the distance between two points is given by the Euclidean distance.
The easiest way to understand this problem is to pick up a rectangular piece of paper and try to make a sphere out of it without locally stretching or compressing it (folding is allowed). You can't. On the other hand, you can make a cylinder surface, because the cylinder also has no intrinsic curvature.
For maps, people normally use one of two options:
approximate the local surface of the sphere by a tangent plane and make a rectangle out of it (a local map of some region);
make world maps, but draw curved lines everywhere indicating that distances must be measured along those lines.
This is also the main reason why, when traveling from Europe to North America, airplanes seem to fly a curve that passes near Canada. Measuring on the rectangular map, it looks as though a straight line would minimize the distance; however, because we are mapping between two different intrinsic curvatures, the real distance must be measured in a different way (and not along that straight line).
For 2D (in fact for nD) the same reasoning applies.
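To make the airplane example concrete, here is a small sketch of the great-circle (haversine) distance that the curved flight path actually minimizes; the coordinates are illustrative:

```python
# A small illustrative sketch: the great-circle (haversine) distance between
# two points on the sphere, which is what the curved flight path minimizes.
from math import radians, sin, cos, asin, sqrt

def haversine(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance in km between two (lat, lon) points in degrees."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    h = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * R * asin(sqrt(h))

print(haversine(51.5, -0.13, 40.7, -74.0))  # London -> New York, ~5570 km
```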
I have polygons that define the contours of counties in the UK. These shapes are very detailed (10k to 20k points each), which makes the related computations (is point X inside polygon P?) quite computationally expensive.
Thus, I would like to "subsample" my polygons, to obtain a similar shape but with less points. What are the different techniques to do so?
The trivial one would be to take one point in every N (thus subsampling by a factor of N), but this feels too "crude". I would rather do some averaging of points, or something of that flavour. Any pointers?
Two solutions spring to mind:
1) Since the map of the UK is reasonably squarish, you could choose to render a bitmap with the counties. Assign each a specific colour, and then render the borders with a 1- or 2-pixel-thick black line. This means you'll only have to perform the expensive interior/exterior calculation if a sample happens to lie on the border. The larger the bitmap, the less often this will happen.
2) Simplify the county outlines. You can use the recursive Ramer–Douglas–Peucker algorithm to simplify the boundaries; just make sure you cache the results. You may also have to solve this not for entire county boundaries but for shared boundaries only, to ensure no gaps appear. This might be quite tricky.
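A minimal recursive sketch of Ramer–Douglas–Peucker, assuming the boundary is a list of (x, y) tuples; `epsilon` is the maximum allowed deviation, in the same units as the coordinates:

```python
import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the infinite line through a and b."""
    (ax, ay), (bx, by), (px, py) = a, b, p
    dx, dy = bx - ax, by - ay
    norm = math.hypot(dx, dy)
    if norm == 0.0:
        return math.hypot(px - ax, py - ay)  # a == b: plain point distance
    return abs(dx * (ay - py) - dy * (ax - px)) / norm

def rdp(points, epsilon):
    """Ramer-Douglas-Peucker: drop points deviating less than epsilon."""
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    dists = [perpendicular_distance(p, a, b) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[i - 1] > epsilon:
        # The farthest point is significant: split there and recurse.
        return rdp(points[:i + 1], epsilon)[:-1] + rdp(points[i:], epsilon)
    return [a, b]  # everything in between lies within epsilon of the chord
```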
Here you can find a project dealing with exactly this problem. Although it works primarily with an area "filled" by points, you can set it to work with a "perimeter"-type definition like yours.
It uses a k-nearest neighbors approach for calculating the region.
Here you can request a copy of the paper.
Seemingly they planned to offer an online service for requesting calculations, but I didn't test it, and it probably isn't running any more.
HTH!
Polygon triangulation should help here. You'll still have to check many polygons, but these are now triangles, so they are easier to check, and you can use some optimizations to determine only a small subset of triangles to check for a given region or point.
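The per-triangle check is indeed cheap; a minimal sketch of the standard sign-based point-in-triangle test:

```python
def point_in_triangle(p, a, b, c):
    """True if 2D point p lies inside (or on the edge of) triangle abc."""
    def cross(o, u, v):  # z-component of the cross product (u - o) x (v - o)
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
    has_neg = d1 < 0 or d2 < 0 or d3 < 0
    has_pos = d1 > 0 or d2 > 0 or d3 > 0
    return not (has_neg and has_pos)  # p is on the same side of all three edges
```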
Since it seems you have all the algorithms you need for general polygons, not only for triangles, you can also merge several triangles after triangulation if they are too small or if the triangle count gets too high.
Imagine we have a 2D sky (10000x10000 coordinates). Anywhere in this sky we can have an aircraft, identified by its position (x, y). Any aircraft can start moving to other coordinates (in a straight line).
There is a single component that manages all this positioning and movement. When an aircraft wants to move, it sends the component a message of the form (start_pos, speed, end_pos). How can the component tell when one aircraft will move into the line of sight of another (each aircraft has a radius of sight as a property), in order to notify it? Note that many aircraft can be moving at the same time. The algorithm also needs to be efficient, as it may have to handle ~1000 planes.
If some constraint is limiting your solution, it can probably be removed; the problem is not fixed.
Use a line to represent the flight path.
Convert each line to a rectangle embracing it. The width of the rectangle is determined by your definition of "close" (the bigger the safety distance, the wider the rectangle should be).
For each new flight plan:
Check if the new rectangle intersects any existing rectangle (a sketch of this proximity check follows after this list).
If so, calculate when each plane will reach the potential collision point. If the time difference is too small (define "too small" according to the scenario), refuse the new flight plan.
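A sketch of the proximity check. Instead of intersecting explicit rectangles, the equivalent (and simpler) test below treats each inflated path as a capsule: two paths are "too close" when the minimum distance between their segments is below the sum of the safety radii. The segment-distance routine follows the standard closest-point-between-segments derivation:

```python
# A sketch, assuming NumPy arrays. Capsules (segments inflated by a radius)
# stand in for the rectangles described above; the test is equivalent.
import numpy as np

def closest_params(p1, q1, p2, q2):
    """Parameters (s, t) in [0, 1] of the closest points on segments p1q1, p2q2."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a == 0.0 and e == 0.0:
        return 0.0, 0.0                       # both segments are points
    if a == 0.0:
        return 0.0, np.clip(f / e, 0.0, 1.0)
    c = d1 @ r
    if e == 0.0:
        return np.clip(-c / a, 0.0, 1.0), 0.0
    b = d1 @ d2
    denom = a * e - b * b                     # zero iff the segments are parallel
    s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom != 0.0 else 0.0
    t = (b * s + f) / e
    if t < 0.0:
        s, t = np.clip(-c / a, 0.0, 1.0), 0.0
    elif t > 1.0:
        s, t = np.clip((b - c) / a, 0.0, 1.0), 1.0
    return s, t

def paths_too_close(p1, q1, r1, p2, q2, r2):
    """True if two flight paths, inflated by their safety radii, overlap."""
    s, t = closest_params(p1, q1, p2, q2)
    gap = (p1 + s * (q1 - p1)) - (p2 + t * (q2 - p2))
    return np.linalg.norm(gap) <= r1 + r2
```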
If you want to deal with the temporal aspect (i.e. with the fact that the aircraft move), then I think a potential simplification is lifting the problem by the time dimension (adding one more dimension, so the original 2D problem becomes a 3D one).
The problem then becomes a matter of finding the point where a line intersects a (tilted) cylinder. Finding all possible intersections would be O(n^2); I'm not sure whether that is efficient enough.
See Wikipedia:Quadtree for a data structure that will make it easy to find which airplanes are close to a given airplane. It will save you from doing O(N^2) closeness tests.
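If SciPy is available, its k-d tree gives the same kind of neighbor query out of the box (a k-d tree standing in for the quadtree here); a sketch:

```python
# A sketch using scipy's cKDTree in place of a hand-rolled quadtree: both
# avoid the O(N^2) all-pairs closeness tests.
import numpy as np
from scipy.spatial import cKDTree

positions = np.random.rand(1000, 2) * 10000   # placeholder: 1000 planes
tree = cKDTree(positions)
sight_radius = 250.0                          # placeholder sight radius

# For each plane, the indices of all planes within its sight radius.
neighbors = tree.query_ball_point(positions, r=sight_radius)
# Or get the unique close pairs directly:
pairs = tree.query_pairs(r=sight_radius)
```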
You have good answers already; I'll comment on only one aspect, and probably not entirely correctly.
You say that your aircraft move according to (start_pos, speed, end_pos) messages.
If all aircraft have such (let's call them) flight plans, then you should be able to calculate directly when and where they will be within a certain distance of each other, when they will be at their closest point, and whether they will collide or get too near.
So, if they indeed move according to their flight plans and do not deviate from them, your problem is deterministic: it boils down to solving a set of equations, which for ~1000 planes is not such a big task.
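For two planes with positions p_i and constant velocities v_i, the separation is d(t) = (p1 - p2) + (v1 - v2)·t, and |d(t)| is minimized at t* = -(dp·dv)/(dv·dv). A sketch:

```python
# A sketch of the closed-form closest approach for two constant-velocity planes.
import numpy as np

def closest_approach(p1, v1, p2, v2):
    """Return (time, distance) of the closest approach, clamped to the future."""
    dp, dv = p1 - p2, v1 - v2
    dv2 = dv @ dv
    t = 0.0 if dv2 == 0.0 else max(0.0, -(dp @ dv) / dv2)
    return t, np.linalg.norm(dp + dv * t)

t, d = closest_approach(np.array([0.0, 0.0]), np.array([100.0, 0.0]),
                        np.array([5000.0, 400.0]), np.array([-100.0, 0.0]))
# If d is below the sum of the two sight radii, notify both planes at time t.
```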
If you need to solve these equations faster, you can employ the techniques described in the other answers:
using efficient structures that speed up distance calculations (quadtrees, octrees, k-d trees),
splitting the problem to solve the equations only for some relevant future time slice,
prioritizing the pairs for which the distance changes most rapidly.
Of course, converting time into a third dimension turns the aircraft from points into lines, and you end up searching for the closest points between two 3D lines (here's some math).
I actually found an answer to this question.
It is in the book Real-Time Collision Detection, p. 223, where it is also better named: Intersecting Moving Sphere Against Sphere (a 2D sphere being a circle). It's not so simple (and I may also violate some rights) to explain it here, but the basic idea is to fix one of the circles as a point, adding its radius to the radius of the moving one. The moving one then travels with the relative velocity, i.e. the difference of the two original velocity vectors.
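A sketch of that idea (not the book's code): with s = pA - pB, v = vA - vB and r = rA + rB, the circles touch when |s + v·t| = r, a quadratic in t:

```python
# A sketch following the idea described above: shrink circle B to a point,
# grow A's radius by rB, and move A with the relative velocity vA - vB.
import numpy as np

def moving_circles_hit_time(pa, va, ra, pb, vb, rb):
    """Earliest t >= 0 at which the two moving circles touch, or None."""
    s, v, r = pa - pb, va - vb, ra + rb
    c = s @ s - r * r
    if c < 0.0:
        return 0.0               # already overlapping
    a = v @ v
    b = s @ v
    if a == 0.0 or b >= 0.0:
        return None              # no relative motion, or moving apart
    disc = b * b - a * c         # discriminant of a*t^2 + 2*b*t + c = 0
    if disc < 0.0:
        return None              # closest approach still misses
    return (-b - np.sqrt(disc)) / a
```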
I have a set of 3D points that approximate a surface. Each point, however, is subject to some error. Furthermore, the set contains many more points than are actually needed to represent the underlying surface.
What I am looking for is an algorithm to create a new (much smaller) set of points representing a simplified, smoother version of the surface (pardon me for not having a better definition than "simplified, smoother"). The underlying surface is not a mathematical one, so I'm not hoping to fit the data set to some mathematical function.
Instead of dealing with it as a point cloud, I would recommend triangulating a mesh using Delaunay triangulation: http://en.wikipedia.org/wiki/Delaunay_triangulation
Then decimate the mesh. You can research decimation algorithms, but you can get pretty good quick-and-dirty results with an algorithm that just merges adjacent triangles that have similar normals.
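A rough sketch of both steps, assuming the points form a height field (z = f(x, y)) so that a 2D Delaunay triangulation of the xy-coordinates yields a valid mesh; the normal-similarity test then flags candidate pairs for the quick-and-dirty merge:

```python
# A rough sketch, assuming a height-field point cloud; "cloud.xyz" is a
# hypothetical input file with one "x y z" triple per line.
import numpy as np
from scipy.spatial import Delaunay

points = np.loadtxt("cloud.xyz")
tri = Delaunay(points[:, :2])            # triangulate in the xy-plane
faces = tri.simplices                    # (m, 3) vertex indices

# Per-face unit normals.
a, b, c = (points[faces[:, i]] for i in range(3))
n = np.cross(b - a, c - a)
n /= np.linalg.norm(n, axis=1, keepdims=True)

# Adjacent triangle pairs whose normals agree within ~5 degrees are the
# merge candidates for the decimation described above.
cos_thresh = np.cos(np.radians(5.0))
candidates = []
for f, nbrs in enumerate(tri.neighbors):   # (m, 3) neighboring simplex indices
    for g in nbrs:                         # g == -1 marks a boundary edge
        if g > f and n[f] @ n[g] > cos_thresh:
            candidates.append((f, g))
```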
I think you are looking for 'Level of detail' algorithms.
A simple one to implement is to break your volume (surface) into some number of sub-volumes. From the points in each sub-volume, choose a representative point (such as the one closest to the centre, the one closest to the average, the average itself, etc.). Use these points to redraw your surface.
You can tweak the number of sub-volumes to increase or decrease the detail on the fly.
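A minimal NumPy sketch of the sub-volume idea, using a uniform grid and the average of each occupied cell as its representative point:

```python
# A minimal sketch: uniform-grid downsampling with cell averages.
import numpy as np

def grid_downsample(points, cell_size):
    """Replace all points in each occupied grid cell by their centroid."""
    cells = np.floor(points / cell_size).astype(np.int64)  # cell index per point
    _, inverse, counts = np.unique(cells, axis=0,
                                   return_inverse=True, return_counts=True)
    sums = np.zeros((len(counts), points.shape[1]))
    np.add.at(sums, inverse, points)       # accumulate points per cell
    return sums / counts[:, None]          # cell centroids

coarse = grid_downsample(np.random.rand(100000, 3), cell_size=0.05)
```

A larger `cell_size` gives fewer sub-volumes and therefore less detail, which is the on-the-fly knob mentioned above.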
I'd approach this by looking for vertices (points) that contribute little to the curvature of the surface. Find all the edges emerging from each vertex and take the dot products of pairs of them. The points representing very shallow "hills" will subtend huge angles (near 180 degrees) and so have strongly negative dot products.
Those vertices with the most negative values would then be candidates for removal; the vertices around them approximately form a plane.
Or something like that.
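A loose sketch of that criterion, assuming a vertex adjacency list is already available: for each vertex, take the most negative dot product among pairs of unit edge vectors, so values near -1 mean some pair of edges is nearly opposite (an angle near 180 degrees, i.e. a flat spot):

```python
# A loose sketch of the dot-product flatness criterion; `adjacency` (the
# neighbor list per vertex) is assumed to come from your mesh structure.
import numpy as np
from itertools import combinations

def flatness_scores(verts, adjacency):
    """Score each vertex; the most negative scores mark removal candidates."""
    scores = np.zeros(len(verts))
    for i, nbrs in enumerate(adjacency):
        if len(nbrs) < 2:
            continue
        edges = verts[list(nbrs)] - verts[i]
        edges /= np.linalg.norm(edges, axis=1, keepdims=True)
        # Near -1: some pair of edges is nearly opposite, i.e. a flat stretch.
        scores[i] = min(u @ v for u, v in combinations(edges, 2))
    return scores
```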
Google for Hugues Hoppe and his "surface reconstruction" work.
Surface reconstruction is used to find a meshed surface that fits the point cloud; however, this method yields lots of triangles. You can then apply a mesh reduction technique to reduce the polygon count in a way that minimizes error. As an example, you can look at OpenMesh's decimation methods.
OpenMesh
Hugues Hoppe
There exist several different techniques for point-based surface model simplification, including:
clustering;
particle simulation;
iterative simplification.
See the survey:
M. Pauly, M. Gross, and L. P. Kobbelt. Efficient simplification of point-sampled surfaces. In Proceedings of the Conference on Visualization '02, pages 163–170, Washington, DC, 2002. IEEE.
Unless you parametrise your surface in some way, I'm not sure how you can decide which points carry similar information (and can thus be thrown away).
I guess you could choose a bunch of points at random to get rid of, but that doesn't sound like what you want to do.
Maybe points near each other (for some definition of "near") can be considered to contain similar information, and so be reduced to a single representative for each such group.
Could you give some more details?
It's simpler to simplify a point cloud without the constraints of mesh triangles and indices.
Smoothing and simplification are different tasks, though. To simplify the cloud, you should first get rid of noise artefacts by building a profile of the kind of noise you have (its frequency and directional characteristics) and performing a reduction that compares against that noise profile. Good normal vectors are helpful for that.
Here is a document about 5-6 simplification methods using Delaunay, Voronoi, and k-nearest-neighbour maths:
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.10.9640&rep=rep1&type=pdf
A later version from 2008:
http://www.wseas.us/e-library/transactions/research/2008/30-705.pdf
And here is a recent C++ version:
https://github.com/tudelft3d/masbcpp/blob/master/src/simplify.cpp