Interpolating missing contour lines between existing contour lines - algorithm

Contour lines (aka isolines) are curves that trace constant values across a 2D scalar field. For example, in a geographical map you might have contour lines to illustrate the elevation of the terrain by showing where the elevation is constant. In this case, let's store contour lines as lists of points on the map.
Suppose you have a map that has several contour lines at known elevations, and otherwise you know nothing about the elevations of the map. What algorithm would you use to fill in additional contour lines to approximate the unknown elevations of the map, assuming the landscape is continuous and doesn't do anything surprising?
It is easy to find advice about interpolating the elevation of an individual point using contour lines. There are also algorithms like Marching Squares for turning point elevations into contour lines, but none of these exactly capture this use case. We don't need the elevation of any particular point; we just want the contour lines. Certainly we could solve this problem by filling an array with estimated elevations and then using Marching Squares to estimate the contour lines based on the array, but the two steps of that process seem unnecessarily expensive and likely to introduce artifacts. Surely there is a better way.

IMO, just about all methods will amount to somehow reconstructing the 3D surface by interpolation, even if only implicitly.
You may try flattening the curves (turning them into polylines) and triangulating the polygons that they define. (There will be a step of closing the curves that end on the border of the domain.)
By intersecting the triangles with a new level (using linear interpolation along the sides), you will obtain new polylines corresponding to the new isocurves. Notice that intersecting with the old levels recreates the old polylines, which is sound.
You may apply a post-smoothing to the curves, but you will have no guarantee of retrieving the original curves and cannot prevent close curves from crossing each other.
Beware that increasing the density of points along the curves will give you a false feeling of accuracy, as the error due to the spacing of the isolines will remain (indeed the reconstructed surface will be cone-like, with one of the curvatures being null; the surface inside the bottommost and topmost lines will be flat).
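For the flat-triangulation route, the per-triangle slicing step is only a few lines. Here is a rough Python sketch (the function name and the (x, y, z) vertex format are my own choices; vertices lying exactly on the level are ignored):

    def triangle_level_segment(p0, p1, p2, z0):
        """Segment ((x, y), (x, y)) where the linear interpolant over the
        triangle p0, p1, p2 (each an (x, y, z) tuple) crosses the level z0,
        or None if the level does not cross this triangle."""
        pts = [p0, p1, p2]
        crossings = []
        for i in range(3):
            (xa, ya, za), (xb, yb, zb) = pts[i], pts[(i + 1) % 3]
            # the level crosses this side iff z0 lies strictly between the endpoint elevations
            if (za - z0) * (zb - z0) < 0:
                t = (z0 - za) / (zb - za)  # linear interpolation along the side
                crossings.append((xa + t * (xb - xa), ya + t * (yb - ya)))
        return tuple(crossings) if len(crossings) == 2 else None

Running this over every triangle of the triangulation and chaining the segments gives the new iso-polylines; running it at the old levels reproduces the old ones, as noted above.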
As an alternative to flat triangles, one may think of a scheme where you compute a gradient vector at every vertex (for instance from a least-squares fit of a plane to the vertex and its neighbors), and use this information to generate a bivariate polynomial surface in each triangle. You must do this in such a way that the values along a side coincide for the two triangles that share it. (Unfortunately, I have no formula to give you.)
The isolines are then obtained by a further subdivision of the triangle in smaller triangles, with a flat approximation.
Actually, this is not very different from getting sample points, (Delaunay) triangulating them and fitting piecewise continuous patches to the triangles.
Whatever method you use, be it 2D or 3D, it is useful to reason about what happens if you sweep the range of z values in a continuous way. This thought experiment does reconstruct a 3D surface, which will possess continuity and smoothness properties.
A possible improvement over the crude "flat triangulation" model could be to extend every triangle side between two iso-polylines with sides leading to the next iso-polylines. This way, higher-order (cubic) interpolation can be achieved, giving a smoother reconstruction.
Anyway, you can be sure that this will introduce discontinuities or other types of artifacts.

A mixed method:
flatten the isolines to polylines;
triangulate the polygons formed by the polylines and the borders;
on every node, estimate the surface gradient (least-squares fit of a plane to the node and its neighbors);
in every triangle, consider the two sides along which you need to interpolate and compute the derivative at endpoints (from the known gradients and the side directions);
use Hermite interpolation along these sides and solve for the desired iso-levels (sketched at the end of this answer);
join the points obtained on both sides.
This method should be a good tradeoff between complexity and smoothness. It does reconstruct a continuous surface (except maybe for the remark below).
Note that in some cases, you will obtain three solutions of the cubic. If there are three on each side, join them in order. Otherwise, make a decision about which ones to join and use the remaining two to close the curve.
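For the Hermite step, here is a minimal sketch of solving one side for an iso-level, assuming the endpoint elevations and the directional derivatives along the side (scaled by the side length) are already known; the function name is mine:

    import numpy as np

    def hermite_iso_params(z0, z1, m0, m1, level):
        """Parameters t in [0, 1] at which the cubic Hermite interpolant along
        a side reaches `level`; z0, z1 are the endpoint elevations and m0, m1
        the directional derivatives along the side, scaled by its length."""
        # h(t) = (2t^3 - 3t^2 + 1)*z0 + (t^3 - 2t^2 + t)*m0 + (-2t^3 + 3t^2)*z1 + (t^3 - t^2)*m1
        coeffs = [2 * z0 + m0 - 2 * z1 + m1,       # t^3
                  -3 * z0 - 2 * m0 + 3 * z1 - m1,  # t^2
                  m0,                              # t
                  z0 - level]                      # constant
        roots = np.roots(coeffs)
        return sorted(r.real for r in roots
                      if abs(r.imag) < 1e-9 and 0.0 <= r.real <= 1.0)

Up to three real roots can fall in [0, 1], which is exactly the three-solution case mentioned above.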

Related

Point of intersection between Oriented Boxes (or OBB)

I am trying to write a rigid body simulator, and during simulation I am not only interested in finding whether two objects collide, but also in the point and normal of the collision. I have found lots of resources that say whether two OBBs are colliding or not using the separating axis theorem. I am also interested in the 3D representation of an OBB. Now, if I know the axis with the minimum overlap region for two colliding OBBs, is there any way to find the point of collision and the normal of collision? Also, there are two major cases of collision: first point-face, and second edge-edge.
I tried to Google this problem, but almost every solution only detects the collision as true or false.
Kindly somebody help!
Look at the scene in the direction of the motion (in other terms, apply a change of coordinates such that this direction becomes vertical, and drop the altitude). You get a 2D figure.
Considering the faces of the two boxes that face each other, you will see two hexagons each split in three parallelograms.
Then
Detect the intersections between the edges in 2D. From the section ratios along the edges, you can determine the actual z distances.
For all vertices, determine the face they fall on in the other box; and from the 3D equations, the piercing point of the viewing line into the face plane, hence the distance. (Repeat this for the vertices of A and B.)
Comparing the distances will tell you which collision happens first and give you the coordinates of the first meeting point (in the transformed system, then back to absolute coordinates).
The point-in-face problem is easy to implement as the faces are convex polygons.
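Since the faces are convex, the 2D point-in-face test reduces to checking that the point lies on the same side of every edge. A quick sketch (names are mine):

    def point_in_convex_polygon(p, poly):
        """True if the 2D point p lies inside (or on the boundary of) the
        convex polygon `poly`, given as (x, y) vertices in consistent order."""
        signs = []
        n = len(poly)
        for i in range(n):
            (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
            # cross product of the edge vector with the vector from the edge start to p
            cross = (x1 - x0) * (p[1] - y0) - (y1 - y0) * (p[0] - x0)
            if cross != 0:
                signs.append(cross > 0)
        # inside a convex polygon iff all non-zero cross products share one sign
        return all(signs) or not any(signs)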

Algorithm to detect curved lines from list of 2D points

I am trying to extract horizontal lines from a set of 2D points generated from the photo of the model of a human torso:
The points "mostly" form horizontal(ish) lines in a more or less regular way, but with possible gaps/missing-points:
There can be regions where the lines deform a bit:
And regions with background noise:
Of course I would need to tune things so I exclude those parts with defects. What I am looking for with this question is a suggested algorithm to find lines where they are well-behaved, filling eventual gaps and avoiding eventual noise, and also terminating the lines properly upon some discontinuity condition.
I believe there could be some optimizing or voting "flood fill" variant that would score line candidates and yield only well-formed lines, but I am not experienced with this and cannot figure anything by myself.
This dataset is in a gist here, and it is important to note that X coordinates are integers, so points are aligned vertically. The Y coordinates though are decimal numbers.
I would start by finding the nearest neighbor of every dot, then the second nearest neighbor on the other side (I mean only considering the dots in the half plane opposite to the first neighbor).
If the distance to the second neighbor exceeds twice the distance to the first, ignore it.
Just doing that, I bet that you will reconstruct a great deal of the curves, with gaps left unfilled.
By estimating the local curvature along the curve (for instance by computing the circumscribed circle of three dots, taking every other dot), you can discard noisy portions.
Then to fill the gaps, you can detect the curve endpoints and look for the nearest endpoint in an angle around the extrapolated direction.
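Here is a rough sketch of the first two linking rules (nearest neighbour, then nearest neighbour in the opposite half-plane, rejected when more than twice as far), using SciPy's k-d tree; the helper name and the candidate count of 10 are my own choices:

    import numpy as np
    from scipy.spatial import cKDTree

    def link_neighbors(points, ratio=2.0):
        """Link each dot to its nearest neighbour and to the nearest dot in the
        opposite half-plane, skipping the second link when it is more than
        `ratio` times farther than the first.  Returns index pairs (edges)."""
        pts = np.asarray(points, dtype=float)
        tree = cKDTree(pts)
        edges = set()
        for i, p in enumerate(pts):
            dists, idxs = tree.query(p, k=min(len(pts), 10))  # a few candidates
            first = int(idxs[1])                              # idxs[0] is the point itself
            edges.add(tuple(sorted((i, first))))
            v1 = pts[first] - p
            for d, j in zip(dists[2:], idxs[2:]):
                if np.dot(pts[j] - p, v1) < 0:                # opposite half-plane
                    if d <= ratio * dists[1]:
                        edges.add(tuple(sorted((i, int(j)))))
                    break
        return sorted(edges)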
First step in the processing:
These are integral curves to the vector field representing the direction pattern.
So maybe start by finding, for each point, the slope vector (the predominant direction) by taking points from the neighborhood and fitting a line with least squares or performing a PCA. Increasing the neighborhood radius should allow you to deal with the data irregularities, thereby picking up a larger-scale slope trend instead of local noise.
If you decide to do this, could you post the slope field you find here, so that instead of points we could see some tangents?
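In case it helps, a sketch of that per-point direction estimate via PCA of the neighbourhood (the radius value and the function name are my own choices):

    import numpy as np

    def local_directions(points, radius=3.0):
        """Unit tangent (predominant direction) at each dot, estimated from a
        PCA of the dots within `radius` of it; zero where too few neighbours."""
        pts = np.asarray(points, dtype=float)
        tangents = np.zeros_like(pts)
        for i, p in enumerate(pts):
            nbrs = pts[np.linalg.norm(pts - p, axis=1) <= radius]
            if len(nbrs) < 2:
                continue
            centered = nbrs - nbrs.mean(axis=0)
            # the principal right-singular vector is the dominant local direction
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            tangents[i] = vt[0]
        return tangents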

How to find if a 3D object fits in another 3D object (the container)?

Given two 3D objects, how can I find whether one fits inside the other (and find the location of the object in the container)?
The object should be translated and rotated to fit the container - but not modified otherwise.
Additional complications:
The same situation - but look for the best fit solution, even if it's not a proper match (minimize the volume of the object that doesn't fit in the container)
Support for elastic objects - find the best fit while minimizing the "distortion" in the objects
This is a pretty general question - and I don't expect a complete solution.
Any pointers to relevant papers \ articles \ libraries \ tools would be useful
Here is one perhaps less than ideal method.
You could try fixing the position (in 3D space) of one shape and placing the other shape on top of it. Then create links that connect a point on one shape to a point on the other shape, and simulate what happens when the links are pulled equally tight, causing the shape that isn't fixed to rotate and translate until it's stable.
If the fit is loose enough, you could use only 3 links (the bare minimum number of links for 3D) and try every possible combination. However, for tighter fits you'll need more links, perhaps enough to place one on every point of the shape that has fewer points. That means you'll need some method to determine how to place the links, which is not trivial.
This seems like quite a hard problem. A probable approach is to have some heuristic suggest a transformation and then check whether it is a good one. If the transformation moves the object only slightly out of the interior (e.g. on one part), then adjust the transformation slightly and test it again. If the object is far outside (e.g. on the same/all axes on both sides), then make a new heuristic guess.
Just a general idea for a heuristic: rasterise both objects with the same pixel size (it can be an octree of the object volume), build a connectivity graph between the pixels, and check for subgraph isomorphism between the two graphs. If there is a matching subgraph, that position is a candidate for testing.
This approach also supports 90-degree rotations.
Some tests can be done on the graphs alone: if all volume neighbours of the subgraph are present in the larger graph, then the object is inside.
In general this is a 'refined' bounding-box approach.
Another solution is to project an equal number of points onto both objects and do a least-squares best fit on the point sets. The point sets probably will not be ordered the same, so iterate between the least-squares best fit and a reordering of the points so that the points on both objects are in roughly the same order. The equation development for this is a lot of algebra but not conceptually complicated.
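For the least-squares best-fit step itself, the closed-form rigid fit between two already-paired point sets is the SVD-based Kabsch/Procrustes solution; a sketch (the reordering iteration around it is left out, and the names are mine):

    import numpy as np

    def best_fit_rigid(source, target):
        """Least-squares rotation R and translation t mapping `source` onto
        `target`, assuming the two point sets are paired up in matching order."""
        src, tgt = np.asarray(source, float), np.asarray(target, float)
        src_c, tgt_c = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - src_c).T @ (tgt - tgt_c)        # cross-covariance matrix
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:                   # avoid a reflection
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        return R, t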
Consider one polygon (triangle) in the target object. For this polygon, find the equivalent polygon in the other geometry (the source), i.e. the lengths of the sides, the angles between the edges and the area should all be the same. If there's just one match, find the rigid transform matrix that alters the vertices so that X' = M*X. Since X' and X are known for all the points on the matched polygons, this should be doable with linear algebra.
If you want a one-to-one mapping between the vertices of the polygons, traverse the edges of the polygons in the same order, and make a lookup table that maps each vertex on one poly to a vertex on the other. If you have a half-edge data structure for your 3D object, that'll simplify this process a great deal.
If you find more than one matching polygon, traverse the source polygon from both the points, and keep matching their neighbouring polygons with the target polygons. Continue until one of them breaks, after which you can do the same steps as the one-match version.
There are more serious solutions listed here, but I think the method above will work as well.
What a juicy problem! As is typical in computational geometry, this problem can become very complicated with a mismatched geometric abstraction, with all kinds of if-else cases and so on.
But pick the right abstraction and the solution becomes trivial with few sub-cases.
Compute the Distance Transform of your shapes and voilà! Your solution is trivial.
Allow me to elaborate.
The distance map of a shape on a grid (pixels) encodes the distance from each pixel to the closest point on the shape's border. It can be computed in both directions, outwards or inwards into the shape. In this problem, the outward distance map suffices.
Step 1: Compute the distance map of both shapes D_S1, D_S2
Step 2: Subtract the distance maps. Diff = D_S1-D_S2
Step 3: If Diff has values of only one sign, then one shape can be contained in the other (+ve => S1 is bigger than S2, -ve => S2 is bigger than S1).
If Diff has both positive and negative values, the shapes intersect. (A sketch of these steps follows below.)
There you have it. Enjoy !
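A sketch of steps 1-3 with SciPy's Euclidean distance transform, assuming the two shapes have already been rasterised as boolean masks on a common grid in the pose being tested (helper name mine):

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def outward_distance_map(mask):
        """Outward distance map of a rasterised shape: for each pixel outside
        the shape, the distance to the nearest shape pixel (zero inside)."""
        return distance_transform_edt(~np.asarray(mask, dtype=bool))

    # d1, d2 = outward_distance_map(shape1), outward_distance_map(shape2)
    # diff = d1 - d2
    # diff everywhere >= 0 or everywhere <= 0  => one shape can contain the other
    # mixed signs                              => the shapes intersect in this pose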

Averaging vector images to get in-between images

I am looking for an algorithm that takes vector image data (e.g. sets of edges) and interpolates another set of edges which is the "average" of the two (or more) sets.
To put it in another way, it is just like Adobe Flash where you "tween" two vector images and the software automatically computes the in-between images. Therefore you only specify the starting image and end image, then Flash takes care of all the in-between images.
Is there any established algorithm to do this? Especially in cases where the images have different numbers of edges?
What exactly do you mean by edges? Are we talking about smooth vector graphics that use curves?
Well a basic strategy would be to simply do a linear interpolation on the points and directions of your control polygon.
Basically you could simply take two corresponding points (one of each curve/vector form) and interpolate them with:
x(t) = (1-t)*p1 + t*p2 with t in [0,1]
(t=0.5 would then of course give you the average between the two)
Since vector graphics usually use curves you'd need to do the same with the direction vector of each control point to get the direction vector of the averaged curve.
One big problem though is to match the right points of each control polygon, especially if both curves have a different degree. You could try doing a degree elevation on one to match the degree of the other and then one by one assign them to each other and interpolate.
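A minimal sketch of that per-point linear interpolation for two control-point lists that have already been matched up (names are mine; the direction vectors would be interpolated the same way):

    def tween_point(p1, p2, t):
        """x(t) = (1 - t)*p1 + t*p2 for one pair of corresponding 2D points."""
        return ((1 - t) * p1[0] + t * p2[0],
                (1 - t) * p1[1] + t * p2[1])

    def tween_shape(shape_a, shape_b, t=0.5):
        """In-between shape for two point lists of equal length
        (t = 0.5 gives the average of the two)."""
        return [tween_point(a, b, t) for a, b in zip(shape_a, shape_b)]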
Maybe that helps...

Drawing a Topographical Map

I've been working on a visualization project for 2-dimensional continuous data. It's the kind of thing you could use to study elevation data or temperature patterns on a 2D map. At its core, it's really a way of flattening 3-dimensions into two-dimensions-plus-color. In my particular field of study, I'm not actually working with geographical elevation data, but it's a good metaphor, so I'll stick with it throughout this post.
Anyhow, at this point, I have a "continuous color" renderer that I'm very pleased with:
The gradient is the standard color-wheel, where red pixels indicate coordinates with high values, and violet pixels indicate low values.
The underlying data structure uses some very clever (if I do say so myself) interpolation algorithms to enable arbitrarily deep zooming into the details of the map.
At this point, I want to draw some topographical contour lines (using quadratic bezier curves), but I haven't been able to find any good literature describing efficient algorithms for finding those curves.
To give you an idea for what I'm thinking about, here's a poor-man's implementation (where the renderer just uses a black RGB value whenever it encounters a pixel that intersects a contour line):
There are several problems with this approach, though:
Areas of the graph with a steeper slope result in thinner (and often broken) topo lines. Ideally, all topo lines should be continuous.
Areas of the graph with a flatter slope result in wider topo lines (and often entire regions of blackness, especially at the outer perimeter of the rendering region).
So I'm looking at a vector-drawing approach for getting those nice, perfect 1-pixel-thick curves. The basic structure of the algorithm will have to include these steps:
At each discrete elevation where I want to draw a topo line, find a set of coordinates where the elevation at that coordinate is extremely close (given an arbitrary epsilon value) to the desired elevation.
Eliminate redundant points. For example, if three points are in a perfectly-straight line, then the center point is redundant, since it can be eliminated without changing the shape of the curve. Likewise, with bezier curves, it is often possible to eliminate certain anchor points by adjusting the position of adjacent control points.
Assemble the remaining points into a sequence, such that each segment between two points approximates an elevation-neutral trajectory, and such that no two line segments ever cross paths. Each point-sequence must either create a closed polygon, or must intersect the bounding box of the rendering region.
For each vertex, find a pair of control points such that the resultant curve exhibits a minimum error, with respect to the redundant points eliminated in step #2.
Ensure that all features of the topography visible at the current rendering scale are represented by appropriate topo lines. For example, if the data contains a spike with high altitude, but with extremely small diameter, the topo lines should still be drawn. Vertical features should only be ignored if their feature diameter is smaller than the overall rendering granularity of the image.
But even under those constraints, I can still think of several different heuristics for finding the lines:
Find the high-point within the rendering bounding-box. From that high point, travel downhill along several different trajectories. Any time the traversal line crosses an elevation threshold, add that point to an elevation-specific bucket. When the traversal path reaches a local minimum, change course and travel uphill.
Perform a high-resolution traversal along the rectangular bounding-box of the rendering region. At each elevation threshold (and at inflection points, wherever the slope reverses direction), add those points to an elevation-specific bucket. After finishing the boundary traversal, start tracing inward from the boundary points in those buckets.
Scan the entire rendering region, taking an elevation measurement at a sparse regular interval. For each measurement, use its proximity to an elevation threshold as a mechanism to decide whether or not to take an interpolated measurement of its neighbors. Using this technique would provide better guarantees of coverage across the whole rendering region, but it'd be difficult to assemble the resultant points into a sensible order for constructing paths.
So, those are some of my thoughts...
Before diving deep into an implementation, I wanted to see whether anyone else on StackOverflow has experience with this sort of problem and could provide pointers for an accurate and efficient implementation.
Edit:
I'm especially interested in the "Gradient" suggestion made by ellisbben. And my core data structure (ignoring some of the optimizing interpolation shortcuts) can be represented as the summation of a set of 2D gaussian functions, which is totally differentiable.
I suppose I'll need a data structure to represent a three-dimensional slope, and a function for calculating that slope vector at an arbitrary point. Off the top of my head, I don't know how to do that (though it seems like it ought to be easy), but if you have a link explaining the math, I'd be much obliged!
UPDATE:
Thanks to the excellent contributions by ellisbben and Azim, I can now calculate the contour angle for any arbitrary point in the field. Drawing the real topo lines will follow shortly!
Here are updated renderings, with and without the ghetto raster-based topo-renderer that I've been using. Each image includes a thousand random sample points, represented by red dots. The angle-of-contour at that point is represented by a white line. In certain cases, no slope could be measured at the given point (based on the granularity of interpolation), so the red dot occurs without a corresponding angle-of-contour line.
Enjoy!
(NOTE: These renderings use a different surface topography than the previous renderings -- since I randomly generate the data structures on each iteration, while I'm prototyping -- but the core rendering method is the same, so I'm sure you get the idea.)
Here's a fun fact: over on the right-hand-side of these renderings, you'll see a bunch of weird contour lines at perfect horizontal and vertical angles. These are artifacts of the interpolation process, which uses a grid of interpolators to reduce the number of computations (by roughly a factor of five) necessary to perform the core rendering operations. All of those weird contour lines occur on the boundary between two interpolator grid cells.
Luckily, those artifacts don't actually matter. Although the artifacts are detectable during slope calculation, the final renderer won't notice them, since it operates at a different bit depth.
UPDATE AGAIN:
Aaaaaaaand, as one final indulgence before I go to sleep, here's another pair of renderings, one in the old-school "continuous color" style, and one with 20,000 gradient samples. In this set of renderings, I've eliminated the red dot for point-samples, since it unnecessarily clutters the image.
Here, you can really see those interpolation artifacts that I referred to earlier, thanks to the grid-structure of the interpolator collection. I should emphasize that those artifacts will be completely invisible on the final contour rendering (since the difference in magnitude between any two adjacent interpolator cells is less than the bit depth of the rendered image).
Bon appetit!!
The gradient is a mathematical operator that may help you.
If you can turn your interpolation into a differentiable function, the gradient of the height will always point in the direction of steepest ascent. All curves of equal height are perpendicular to the gradient of height evaluated at that point.
Your idea about starting from the highest point is sensible, but might miss features if there is more than one local maximum.
I'd suggest
pick height values at which you will draw lines
create a bunch of points on a fine, regularly spaced grid, then walk each point in small steps in the gradient direction towards the nearest height at which you want to draw a line (sketched below, after this list)
create curves by stepping each point perpendicular to the gradient; eliminate excess points by killing a point when another curve comes too close to it -- but to avoid destroying the center of hourglass-like figures, you might need to check the angle between the oriented vectors perpendicular to the gradient for both of the points. (When I say oriented, I mean make sure that the angle between the gradient and the perpendicular value you calculate is always 90 degrees in the same direction.)
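A sketch of the second step (walking a seed point along the gradient to the nearest chosen height), taking the gradient by central differences since the height function here is differentiable; names, step size and tolerances are my own choices:

    import numpy as np

    def walk_to_level(f, p, levels, step=0.05, eps=1e-4, iters=500):
        """Move a seed point (x, y) along the gradient of the height function f
        until its height reaches the nearest of the requested contour levels."""
        p = np.asarray(p, dtype=float)
        target = min(levels, key=lambda lv: abs(lv - f(*p)))
        for _ in range(iters):
            err = target - f(*p)
            if abs(err) < 1e-4:
                break
            g = np.array([(f(p[0] + eps, p[1]) - f(p[0] - eps, p[1])) / (2 * eps),
                          (f(p[0], p[1] + eps) - f(p[0], p[1] - eps)) / (2 * eps)])
            # step uphill if below the target level, downhill if above it
            p = p + step * np.sign(err) * g / (np.linalg.norm(g) + 1e-12)
        return p, target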
In response to your comment to #erickson, and to answer the point about calculating the gradient of your function: instead of calculating the derivatives of your 300-term function, you could do a numeric differentiation as follows.
Given a point [x,y] in your image, you could calculate the gradient (the direction of steepest ascent) as
g = [ (f(x+dx, y) - f(x-dx, y)) / (2*dx),
      (f(x, y+dy) - f(x, y-dy)) / (2*dy) ]
where dx and dy could be the spacing in your grid. The contour line will run perpendicular to the gradient. So, to get the contour direction, c, we can multiply g = [v, w] by the matrix A = [0 -1; 1 0], giving
c = [-w,v]
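The same thing in code form, assuming f is a callable height function (names are mine):

    def contour_direction(f, x, y, dx=1.0, dy=1.0):
        """Numeric gradient of f at (x, y) by central differences, plus the
        contour direction obtained by rotating the gradient 90 degrees."""
        v = (f(x + dx, y) - f(x - dx, y)) / (2 * dx)
        w = (f(x, y + dy) - f(x, y - dy)) / (2 * dy)
        return (v, w), (-w, v)          # gradient g, contour direction c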
Alternately, there is the marching squares algorithm which seems appropriate to your problem, although you may want to smooth the results if you use a coarse grid.
The topo curves you want to draw are level sets (isolines) of a scalar field over 2 dimensions. For isosurfaces in 3 dimensions, there is the marching cubes algorithm.
I've wanted something like this myself, but haven't found a vector-based solution.
A raster-based solution isn't that bad, though, especially if your data is raster-based. If your data is vector-based too (in other words, you have a 3D model of your surface), you should be able to do some real math to find the intersection curves with horizontal planes at varying elevations.
For a raster-based approach, I look at each pair of neighboring pixels. If one is above a contour level, and one is below, obviously a contour line runs between them. The trick I used to anti-alias the contour line is to mix the contour line color into both pixels, proportional to their closeness to the idealized contour line.
Maybe some examples will help. Suppose that the current pixel is at an "elevation" of 12 ft, a neighbor is at an elevation of 8 ft, and contour lines are every 10 ft. Then, there is a contour line half way between; paint the current pixel with the contour line color at 50% opacity. Another pixel is at 11 feet and has a neighbor at 6 feet. Color the current pixel at 80% opacity.
alpha = (contour - neighbor) / (current - neighbor)
Unfortunately, I don't have the code handy, and there might have been a bit more to it (I vaguely recall looking at diagonal neighbors too, and adjusting by sqrt(2) / 2). I hope this is enough to give you the gist.
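A sketch of that per-pixel blending rule, generalised slightly so it works whichever of the two pixels is higher (the helper name and the single-level assumption are mine):

    def contour_alpha(current, neighbor, interval=10.0):
        """Opacity with which to paint the contour colour onto the current
        pixel, for contour lines every `interval` units; 0.0 when no level
        lies between the two elevations."""
        if current == neighbor:
            return 0.0
        lo, hi = sorted((current, neighbor))
        level = interval * (int(lo // interval) + 1)  # first contour above the lower sample
        if level >= hi:
            return 0.0
        return (level - neighbor) / (current - neighbor)

This reproduces the worked examples above (12 ft vs 8 ft gives 0.5; 11 ft vs 6 ft gives 0.8). It only handles the first level between the two samples; steep pairs spanning several levels would need a small loop.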
It occurred to me that what you're trying to do would be pretty easy to do in MATLAB, using the contour function. Doing things like making low-density approximations to your contours can probably be done with some fairly simple post-processing of the contours.
Fortunately, GNU Octave, a MATLAB clone, has implementations of the various contour plotting functions. You could look at that code for an algorithm and implementation that's almost certainly mathematically sound. Or, you might just be able to offload the processing to Octave. Check out the page on interfacing with other languages to see if that would be easier.
Disclosure: I haven't used Octave very much, and I haven't actually tested its contour plotting. However, from my experience with MATLAB, I can say that it will give you almost everything you're asking for in just a few lines of code, provided you get your data into MATLAB.
Also, congratulations on making a very Van Gogh-esque slope-field plot.
I always check places like http://mathworld.wolfram.com before going too deep on my own :)
Maybe their curves section would help? Or maybe the entry on maps.
Compare what you have rendered with a real-world topo map - they look identical to me! I wouldn't change a thing...
Write the data out as an HGT file (very simple digital elevation data format used by USGS) and use the free and open-source gdal_contour tool to create contours. That works very well for terrestrial maps, the constraint being that the data points are signed 16-bit numbers, which fits the earthly range of heights in metres very well, but may not be enough for your data, which I assume not to be a map of actual terrain - although you do mention terrain maps.
I recommend the CONREC approach:
Create an empty line segment list
Split your data into regular grid squares
For each grid square, split the square into 4 component triangles:
For each triangle, handle the cases (a through j):
If a line segment crosses one of the cases:
Calculate its endpoints
Store the line segment in the list
Draw each line segment in the line segment list
If the lines are too jagged, use a smaller grid. If the lines are smooth enough and the algorithm is taking too long, use a larger grid.
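A compact sketch of that recipe on a regular grid of heights (names are mine; each square is split into four triangles about its centre and each triangle is sliced by linear interpolation, rather than enumerating CONREC's cases a through j explicitly):

    import numpy as np

    def grid_contour_segments(z, levels):
        """Contour segments (level, (r0, c0), (r1, c1)) in grid coordinates
        for a 2D array of heights `z`, CONREC style."""
        z = np.asarray(z, dtype=float)
        segs = []
        for i in range(z.shape[0] - 1):
            for j in range(z.shape[1] - 1):
                corners = [(i, j), (i, j + 1), (i + 1, j + 1), (i + 1, j)]
                centre = (i + 0.5, j + 0.5)
                zc = np.mean([z[c] for c in corners])
                for k in range(4):                      # the four component triangles
                    a, b = corners[k], corners[(k + 1) % 4]
                    tri = [a + (z[a],), b + (z[b],), centre + (zc,)]
                    for level in levels:
                        pts = []
                        for m in range(3):
                            (xa, ya, za), (xb, yb, zb) = tri[m], tri[(m + 1) % 3]
                            if (za - level) * (zb - level) < 0:
                                t = (level - za) / (zb - za)
                                pts.append((xa + t * (xb - xa), ya + t * (yb - ya)))
                        if len(pts) == 2:
                            segs.append((level, pts[0], pts[1]))
        return segs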
