I'm trying to create an inner glow effect for a triangle fan primitive using GLSL ES 2.0, though only the outer edges are to be subject to the effect. I guess there are many ways to do this, but I haven't found any description of one so far.
There is the technique described in Make the edges of a textured polygon glow in OpenGL ES 2.0; however, this doesn't work for me as I'm working purely with primitives at this stage.
My initial thought was to somehow calculate the distance to the nearest edge in the fragment shader, and then set the color according to whether or not that distance falls within some threshold. (Of course, the color and alpha are to be a function of the distance from the nearest edge - the exact gradient profile is not important at this point.)
This approach poses two problems:
1) How do I calculate the distance from a fragment to the nearest edge?
2) How do I exclude common edges in this process, i.e. edges that are common to two (or more) triangles?
Is this a sensible approach, and if so: how do I resolve my two issues? Suggestions for alternative approaches are also greatly appreciated. (For instance, I've been reading that texture data need not be an image, and that it may be utilized for custom purposes. Could a non-image texture be part of the solution?) :)
To answer your two questions: I don't think there is any GLSL magic that will do this for you. By the time you get to the fragment shader, there is no longer any information available about edges, and in particular nothing that would let you segregate true outer edges from internal ones.
What I recommend is to add more vertices to your fan and use a new custom attribute to define the 'glow level'. See the image for an example: I would put a row of vertices around the edge, define these (and the center of the fan) to have maximum glow, define the edge vertices to have zero glow, and then you get an interpolated glow value between the edge and the new vertices.
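To make that concrete, here is a rough host-side sketch in C++ (names, layout, and the inset distance are my own, not from the answer) of building vertex data where each vertex carries a 'glow' attribute. The rasteriser interpolates it across each triangle, so the fragment shader can simply mix the glow colour in by that value; building the actual strip/fan indices for the new ring is a separate step.

#include <utility>
#include <vector>

struct GlowVertex {
    float x, y;    // position
    float glow;    // per-vertex glow level, interpolated by the rasteriser
};

// outer: the original fan boundary; inset: fraction of the way to the centre
// at which to place the new ring (a small value keeps the glow band narrow).
std::vector<GlowVertex> buildGlowVertices(const std::vector<std::pair<float, float>>& outer,
                                          float cx, float cy, float inset)
{
    std::vector<GlowVertex> v;
    v.push_back({cx, cy, 1.0f});                     // fan centre: maximum glow level
    for (const auto& p : outer) {
        v.push_back({p.first, p.second, 0.0f});      // true outer edge: zero glow
        // New ring vertex a small step towards the centre, with maximum glow,
        // so interpolation happens only across the narrow border band.
        v.push_back({p.first + (cx - p.first) * inset,
                     p.second + (cy - p.second) * inset, 1.0f});
    }
    return v;
}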
When rendering 3D rectangles (i.e. rectangles in 3D space), of course, they are specified as a list of vertexes for two triangles. However, that representation contains a lot of extraneous information that gets tiresome to code multiple times. I'd like to create a "Rectangle" object that will allow me to specify its texture, size, position, and orientation in space and export the list of vertexes (and indexes), but I'm not sure of the best way to do it. Should I specify the position of the lower left corner (pre-rotation), or the center of the rectangle? How should I specify the orientation, as a vector containing rotation angles? This is such a simple and standard requirement that I'm sure people have thought about it before, but I can't find anything on this site or elsewhere on the subject. I plan to use these objects a lot, so my primary goal (apart from performance) is ease of use rather than anything to do with the internal representation. It wouldn't be hard for me to simply code the first thing I can think of, but I don't want to miss anything and make it unnecessarily difficult.
So, how should I represent a Rectangle object? Opinions are welcome, but sources would be especially helpful.
Edit: if it helps, I believe I'd primarily be using the rectangles on the faces of cubes, though not necessarily as the entire faces of those cubes.
It would probably be simplest to store the homogeneous matrix that transforms a standard, axis-aligned square into the desired location, along with a separate matrix that determines how to map the texture onto it.
For the location matrix, you can store the 4x3 matrix that doesn't affect the w-coordinate. This is only a bit redundant: it uses 12 values where a general rectangle needs 8, but on the other hand, it will be much easier to convert it back to a form usable for rendering.
Alternately, you can store a point location (edge or center depending on whatever is most convenient), and two direction vectors, describing the direction and length of each edge; you are relying on your rectangle generator to make sure the edge vectors are orthogonal. This will take 9 values, which is almost the best you can do.
For the texture mapping, you can store a 3x2 matrix that defines an affine mapping of the (u,v) coordinates onto the coordinates defined by the edges of the rectangle. You can choose a zero-based (0,1)x(0,1) mapping, or a symmetric (-1,1)x(-1,1) mapping, based on whatever is convenient for your application. In any case, this will require 6 values.
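For what it's worth, here is a minimal sketch of how the point-plus-edge-vectors variant could be combined with the 3x2 texture matrix; all names are hypothetical and there is no validation of orthogonality.

#include <array>

struct Vec3 { float x, y, z; };

struct Rect3D {
    // Point-plus-edge-vectors form: origin is one corner, edgeU and edgeV are
    // the two (ideally orthogonal) edge directions scaled by the edge lengths.
    Vec3 origin, edgeU, edgeV;

    // 3x2 texture matrix, row-major: u = tex[0]*s + tex[1]*t + tex[2],
    //                                v = tex[3]*s + tex[4]*t + tex[5]
    std::array<float, 6> tex { 1, 0, 0,  0, 1, 0 };

    // Corners in counter-clockwise order, ready to split into the two triangles
    // (0,1,2) and (0,2,3) for rendering.
    std::array<Vec3, 4> corners() const {
        auto at = [&](float s, float t) {
            return Vec3{ origin.x + s * edgeU.x + t * edgeV.x,
                         origin.y + s * edgeU.y + t * edgeV.y,
                         origin.z + s * edgeU.z + t * edgeV.z };
        };
        return { at(0, 0), at(1, 0), at(1, 1), at(0, 1) };
    }
};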
As a rectangle is just a bounded plane, what about storing it as an extension of that: a point and a normal vector (defining the centre---or perhaps one of the corners---and orientation); but add in two more components for the width and height bounds?
I think it really depends on how you intend to use the rectangle.
For example: If you have lots of rectangles, storing the three points of one of the two triangles might be best, because then you only have to calculate one more point.
If you typically center your rectangles on something, then the center point, width, height, and rotation angles might be more appropriate.
I'd say: start with whatever seems natural to you. Make sure your class is able to do all the necessary calculations and hide them behind accessors. Have a good suite of tests for that.
That way you can change the implementation at any time. Or you can even have different rectangle implementations for different needs.
If I construct a shape using constructive solid geometry techniques, how can I construct a wireframe mesh for rendering?
I'm aware of algorithms for directly rendering CSG shapes, but I want to convert the shape into a wireframe mesh just once so that I can render it "normally".
To add a little more detail. Given a description of a shape such as "A cube here, intersection with a sphere here, subtract a cylinder here" I want to be able to calculate a polygon mesh.
There are two main approaches. If you have a set of polygonal shapes, it is possible to create a BSP tree for each shape, then the BSP trees can be merged. From Wikipedia,
1990 Naylor, Amanatides, and Thibault provide an algorithm for merging two bsp trees to form a new bsp tree from the two original trees. This provides many benefits including: combining moving objects represented by BSP trees with a static environment (also represented by a BSP tree), very efficient CSG operations on polyhedra, exact collisions detection in O(log n * log n), and proper ordering of transparent surfaces contained in two interpenetrating objects (has been used for an x-ray vision effect).
The paper can be found here: Merging BSP trees yields polyhedral set operations.
Alternatively, each shape can be represented as a function over space (for example signed distance to the surface). As long as the surface is defined as where the function is equal to zero, the functions can then be combined using (MIN == intersection), (MAX == union), and (NEGATION == not) operators to mimic the set operations. The resulting surface can then be extracted as the positions where the combined function is equal to zero using a technique like Marching Cubes. Better surface extraction methods like Dual Marching Cubes or Dual Contouring can also be used. This will, of course, result in a discrete approximation of the true CSG surface. I suggest using Dual Contouring, because it is able to reconstruct sharp features like the corners of cubes.
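As a rough illustration of this implicit approach (the function names and the "positive inside, negative outside" convention are mine; with that convention MAX is union and MIN is intersection, as described), the example from the question could be written like this, with a second sphere standing in for the cylinder to keep it short. A Marching Cubes pass over samples of csg() would then extract the mesh.

#include <algorithm>
#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

// Sphere of radius r centred at c: positive inside, negative outside, zero on the surface.
double sphere(const Vec3& p, const Vec3& c, double r) {
    double dx = p.x - c.x, dy = p.y - c.y, dz = p.z - c.z;
    return r - std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Axis-aligned box [lo, hi]: not a true distance, but its zero level set is the
// box surface, which is all a surface extractor needs.
double box(const Vec3& p, const Vec3& lo, const Vec3& hi) {
    double dx = std::min(p.x - lo.x, hi.x - p.x);
    double dy = std::min(p.y - lo.y, hi.y - p.y);
    double dz = std::min(p.z - lo.z, hi.z - p.z);
    return std::min({dx, dy, dz});
}

// "A cube here, intersected with a sphere here, subtract a shape there":
double csg(const Vec3& p) {
    double cube = box(p, {-1, -1, -1}, {1, 1, 1});
    double ball = sphere(p, {0, 0, 0}, 1.2);
    double hole = sphere(p, {0, 0, 1}, 0.5);      // stand-in for the cylinder
    return std::min(std::min(cube, ball), -hole); // (cube INTERSECT ball) MINUS hole
}

int main() {
    std::printf("origin is %s the solid\n", csg({0, 0, 0}) > 0 ? "inside" : "outside");
}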
These libraries seem to do what you want:
www.solidgraphics.com/SolidKit/
carve-csg.com/
gts.sourceforge.net/
See also "Constructive Solid Geometry for Triangulated Polyhedra" (1990) Philip M. Hubbard doi:10.1.1.34.9374
Here are some Google Scholar links which may be of use.
From what I can tell of the abstracts, the basic idea is to generate a point cloud from the volumetric data available in the CSG model, and then use some more common algorithms to generate a mesh of faces in 3D to fit that point cloud.
Edit: Doing some further research, this kind of operation is called "conversion from CSG to B-Rep (boundary representation)". Searches on that string lead to a useful PDF:
http://www.scielo.br/pdf/jbsmse/v29n4/a01v29n4.pdf
And, for further information, the key algorithm is called the "Marching Cubes Algorithm". Essentially, the CSG model is used to create a volumetric model of the object with voxels, and then the Marching Cubes algorithm is used to create a 3D mesh out of the voxel data.
You could try to triangulate (tetrahedralize) each primitive, then perform the boolean operations on the tetrahedral mesh, which is "easier" since you only need to worry about tetrahedron-tetrahedron operations. Then you can perform boundary extraction to get the B-rep. Since you know the shapes of your primitives analytically, you can construct custom tetrahedralizations of your primitives to suit your needs instead of relying on a mesh generation library.
For example, suppose your object was the union of a cube and a cylinder, and suppose you have a tetrahedralization of both objects. In order to compute the boundary representation of the resulting object, you first label all the boundary facets of the tetrahedra of each primitive object. Then, you perform the union operation: if two tetrahedra are disjoint, then nothing needs to be done; both tetrahedra must exist in the resulting polyhedron. If they intersect, then there are a number of cases (probably on the order of a dozen or so) that need to be handled. In each of these cases, the volume of the two tetrahedra needs to be re-triangulated in a way that respects the surface constraints. This is made somewhat easier by the fact that you only need to worry about tetrahedra, as opposed to more complicated shapes. The boundary facet labels need to be maintained in the process so that in the final collection of tetrahedra, the boundary facets can be extracted to form a triangle mesh of the surface.
I've had some luck with the BRL-CAD application MGED, where I can construct a convex polyhedron by intersecting planes using CSG and then extract the boundary representation using the command-line g-stl command. Check http://brlcad.org/
Malcolm
If you can convert your input primitives to polyhedral meshes then you could use libigl's C++ mesh boolean routines. The following computes the union of a mesh (VA,FA) and another mesh (VB,FB):
igl::mesh_boolean(VA,FA,VB,FB,"union",VC,FC);
where VA is a #VA by 3 matrix of vertex positions and FA is a #FA by 3 matrix of triangle indices into VA, and so on. The technique used in libigl is different from those two mentioned in Joe's answer. All pairs of triangles are intersected against each other (using spatial acceleration) and then resulting sub-triangles are categorized as belonging to the output surface or not.
I have a map that is cut up into a number of regions by borders (contours) like countries on a world map. Each region has a certain surface-cover class S (e.g. 0 for water, 0.03 for grass...). The borders are defined by:
what value of S is on either side of it (0.03 on one side, 0.0 on the other, in the example below)
how many points the border is made of (n=7 in example below), and
n coordinate pairs (x, y).
This is one example.
0.0300 0.0000 7
2660607.5 6332685.5 2660565.0 6332690.5 2660541.5 6332794.5
2660621.7 6332860.5 2660673.8 6332770.5 2660669.0 6332709.5
2660607.5 6332685.5
I want to make a raster map in which each pixel has the value of S corresponding to the region in which the center of the pixel falls.
Note that the borders represent step changes in S. The various values of S represent discrete classes (e.g. grass or water), and are not values that can be averaged (i.e. no wet grass!).
Also note that not all borders are closed loops like the example above. This is a bit like country borders: e.g. the US-Canada border isn't a closed loop, but rather a line joining up at each end with two other borders: the Canada-ocean and the US-ocean "borders". (Closed-loop borders do exist nevertheless!)
Can anyone point me to an algorithm that can do this? I don't want to reinvent the wheel!
The general case for processing this sort of geometry in vector form can be quite difficult, especially since nothing about the structure you describe requires the geometry to be consistent. However, since you just want to rasterize it, then treating the problem as a Voronoi diagram of line segments can be more robust.
Approximating the Voronoi diagram can be done graphically in OpenGL by drawing each line segment as a pair of quads making a tent shape. The z-buffer is used to make the closest quad take precedence, and thus color the pixel based on whichever line is closest. The difference here is that you will want to color the polygons based on which side of the line they are on, instead of which line they represent. A good paper discussing a similar algorithm is Hoff et al's Fast Computation of Generalized Voronoi Diagrams Using Graphics Hardware
The 3d geometry will look something like this sketch with 3 red/yellow segments and 1 blue/green segment:
This procedure doesn't require you to convert anything into a closed loop, and doesn't require any fancy geometry libraries. Everything is handled by the z-buffer, and should be fast enough to run in real time on any modern graphics card. A refinement would be to use homogeneous coordinates to make the bases project to infinity.
I implemented this algorithm in a Python script at http://www.pasteall.org/9062/python. One interesting caveat is that using cones to cap the ends of the lines didn't work without distorting the shape of the cone, because the cones representing the end points of the segments were z-fighting. For the sample geometry you provided, the output looks like this:
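For anyone who would rather see the geometry than read the script, here is a rough C++ sketch of just the tent geometry for one segment (my own structs, no GL calls, and the cone end caps mentioned above are omitted). With a standard less-than depth test and z equal to distance, the nearest segment wins at every pixel, and each quad carries the S value of its side.

#include <array>
#include <cmath>

struct V3 { double x, y, z; };
struct Quad { std::array<V3, 4> corners; double s; };

// Two quads forming a "tent" over the segment (x0,y0)-(x1,y1); z stores the
// distance from the segment, so a less-than depth test keeps the closest one.
// Assumes a non-degenerate segment; reach should exceed the largest distance
// any pixel can be from its nearest segment.
std::array<Quad, 2> tentForSegment(double x0, double y0, double x1, double y1,
                                   double sLeft, double sRight, double reach)
{
    double dx = x1 - x0, dy = y1 - y0, len = std::hypot(dx, dy);
    double nx = -dy / len, ny = dx / len;             // left-hand normal (y up)
    V3 a{x0, y0, 0}, b{x1, y1, 0};                    // ridge of the tent, on the segment
    V3 al{x0 + nx * reach, y0 + ny * reach, reach};   // left base edge, raised by its distance
    V3 bl{x1 + nx * reach, y1 + ny * reach, reach};
    V3 ar{x0 - nx * reach, y0 - ny * reach, reach};   // right base edge
    V3 br{x1 - nx * reach, y1 - ny * reach, reach};
    return { Quad{{a, b, bl, al}, sLeft}, Quad{{a, b, br, ar}, sRight} };
}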
I'd recommend using a geometry algorithm library like CGAL. In particular, the second example on the "2D Polygons" page of the reference manual should provide what you need. You can define each "border" as a polygon and check whether certain points are inside the polygons. So basically it would be something like
for every y in raster grid
    for every x in raster grid
        for each defined polygon p
            if point(x,y) is inside polygon p
                pixel[x][y] = inside_color[p]
I'm not so sure about what to do with the outside_color because the outside regions will overlap, won't they? Anyway, looking at your example, every outside region could be water, so you could just do a final
if pixel[x][y] still undefined then pixel[x][y] = water_value
(or as an alternative, set pixel[x][y] to water_value before iterating through the polygon list)
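If you end up not using CGAL, the inner test itself is small; here is a minimal even-odd ray-casting point-in-polygon sketch (my own code, not CGAL's) that the pseudocode above could call.

#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// True if (x, y) lies inside the polygon (even-odd rule). Points exactly on an
// edge may land on either side; that is usually fine at raster resolution.
bool insidePolygon(const std::vector<Pt>& poly, double x, double y) {
    bool inside = false;
    for (std::size_t i = 0, j = poly.size() - 1; i < poly.size(); j = i++) {
        // Does edge j->i straddle the horizontal line through y?
        if ((poly[i].y > y) != (poly[j].y > y)) {
            double xCross = poly[j].x + (y - poly[j].y) *
                            (poly[i].x - poly[j].x) / (poly[i].y - poly[j].y);
            if (x < xCross) inside = !inside;   // ray to +x crosses this edge
        }
    }
    return inside;
}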
first, convert all your borders into closed loops (possibly including the edges of your map), and identify the inside colour. This has to be possible; otherwise you have an inconsistency in your data
use Bresenham's algorithm to draw all the border lines on your map, in a single unused colour
store a list of all the "border pixels" as you do this
then for each border
triangulate it (Delaunay)
iterate through the triangles till you find one whose centre is inside your border (point-in-polygon test)
floodfill your map at that point in the border's interior colour
once you have filled in all the interior regions, iterate through the list of border pixels, seeing which colour each one should be
choose two unused colors as markers "empty" and "border"
fill all area with "empty" color
draw all region borders by "border" color
iterate through points to find first one with "empty" color
determine which region it belongs to (google "point inside polygon", probably you will need to make your borders closed as Martin DeMello suggested)
perform flood-fill algorithm from this point with color of the region
go to next "empty" point (no need to restart search - just continue)
and so on, until no "empty" points remain
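A minimal sketch of the flood-fill step (queue-based, 4-connected), assuming the map has already been painted with the "empty" and "border" marker colours described above:

#include <queue>
#include <utility>
#include <vector>

// Fills the 4-connected "empty" area containing (x, y) with regionColor.
// Border pixels and already-filled regions act as walls because they no longer
// carry the "empty" marker colour.
void floodFill(std::vector<std::vector<int>>& map, int x, int y,
               int emptyColor, int regionColor)
{
    const int h = static_cast<int>(map.size());
    const int w = h ? static_cast<int>(map[0].size()) : 0;
    if (x < 0 || y < 0 || x >= w || y >= h || map[y][x] != emptyColor) return;

    std::queue<std::pair<int, int>> q;
    map[y][x] = regionColor;
    q.push({x, y});
    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    while (!q.empty()) {
        auto [cx, cy] = q.front();
        q.pop();
        for (int k = 0; k < 4; ++k) {
            int nx = cx + dx[k], ny = cy + dy[k];
            if (nx >= 0 && ny >= 0 && nx < w && ny < h && map[ny][nx] == emptyColor) {
                map[ny][nx] = regionColor;
                q.push({nx, ny});
            }
        }
    }
}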
The way I've solved this is as follows:
March along each segment; stop at regular intervals L.
At each stop, place a tracer point immediately to the left and to the right of the segment (at a certain small distance d from the segment). The tracer points are attributed the left and right S-value, respectively.
Do a nearest-neighbour interpolation. Each point on the raster grid is attributed the S of the nearest tracer point.
This works even when there are non-closed lines, e.g. at the edge of the map.
This is not a "perfect" analytical algorithm. There are two parameters: L and d. The algorithm works beautifully as long as d << L. Otherwise you can get inaccuracies (usually single-pixel) near segment junctions, especially those with acute angles.
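For reference, this is roughly what the tracer-point generation (steps 1 and 2) looks like in code; the names are my own, and "left"/"right" are taken relative to the direction of travel with y up. The nearest-neighbour step then labels each raster pixel with the S value of the closest tracer.

#include <cmath>
#include <vector>

struct Tracer { double x, y, s; };

// March along one border segment at spacing stepL and drop a tracer point at
// distance offsetD on each side, carrying that side's S value.
void traceSegment(double x0, double y0, double x1, double y1,
                  double sLeft, double sRight,
                  double stepL, double offsetD,
                  std::vector<Tracer>& out)
{
    double dx = x1 - x0, dy = y1 - y0;
    double len = std::hypot(dx, dy);
    if (len == 0) return;                     // skip degenerate segments
    double ux = dx / len, uy = dy / len;      // unit direction along the segment
    double nx = -uy, ny = ux;                 // unit normal pointing to the left
    for (double t = 0; t <= len; t += stepL) {
        double px = x0 + ux * t, py = y0 + uy * t;
        out.push_back({px + nx * offsetD, py + ny * offsetD, sLeft});    // left tracer
        out.push_back({px - nx * offsetD, py - ny * offsetD, sRight});   // right tracer
    }
}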
I've been working on a visualization project for 2-dimensional continuous data. It's the kind of thing you could use to study elevation data or temperature patterns on a 2D map. At its core, it's really a way of flattening 3-dimensions into two-dimensions-plus-color. In my particular field of study, I'm not actually working with geographical elevation data, but it's a good metaphor, so I'll stick with it throughout this post.
Anyhow, at this point, I have a "continuous color" renderer that I'm very pleased with:
The gradient is the standard color-wheel, where red pixels indicate coordinates with high values, and violet pixels indicate low values.
The underlying data structure uses some very clever (if I do say so myself) interpolation algorithms to enable arbitrarily deep zooming into the details of the map.
At this point, I want to draw some topographical contour lines (using quadratic bezier curves), but I haven't been able to find any good literature describing efficient algorithms for finding those curves.
To give you an idea for what I'm thinking about, here's a poor-man's implementation (where the renderer just uses a black RGB value whenever it encounters a pixel that intersects a contour line):
There are several problems with this approach, though:
Areas of the graph with a steeper slope result in thinner (and often broken) topo lines. Ideally, all topo lines should be continuous.
Areas of the graph with a flatter slope result in wider topo lines (and often entire regions of blackness, especially at the outer perimeter of the rendering region).
So I'm looking at a vector-drawing approach for getting those nice, perfect 1-pixel-thick curves. The basic structure of the algorithm will have to include these steps:
At each discrete elevation where I want to draw a topo line, find a set of coordinates where the elevation at that coordinate is extremely close (given an arbitrary epsilon value) to the desired elevation.
Eliminate redundant points. For example, if three points are in a perfectly-straight line, then the center point is redundant, since it can be eliminated without changing the shape of the curve. Likewise, with bezier curves, it is often possible to eliminate certain anchor points by adjusting the position of adjacent control points.
Assemble the remaining points into a sequence, such that each segment between two points approximates an elevation-neutral trajectory, and such that no two line segments ever cross paths. Each point-sequence must either create a closed polygon, or must intersect the bounding box of the rendering region.
For each vertex, find a pair of control points such that the resultant curve exhibits a minimum error, with respect to the redundant points eliminated in step #2.
Ensure that all features of the topography visible at the current rendering scale are represented by appropriate topo lines. For example, if the data contains a spike with high altitude, but with extremely small diameter, the topo lines should still be drawn. Vertical features should only be ignored if their feature diameter is smaller than the overall rendering granularity of the image.
But even under those constraints, I can still think of several different heuristics for finding the lines:
Find the high-point within the rendering bounding-box. From that high point, travel downhill along several different trajectories. Any time the traversal line crosses an elevation threshold, add that point to an elevation-specific bucket. When the traversal path reaches a local minimum, change course and travel uphill.
Perform a high-resolution traversal along the rectangular bounding-box of the rendering region. At each elevation threshold (and at inflection points, wherever the slope reverses direction), add those points to an elevation-specific bucket. After finishing the boundary traversal, start tracing inward from the boundary points in those buckets.
Scan the entire rendering region, taking an elevation measurement at a sparse regular interval. For each measurement, use its proximity to an elevation threshold to decide whether or not to take an interpolated measurement of its neighbors. Using this technique would provide better guarantees of coverage across the whole rendering region, but it'd be difficult to assemble the resultant points into a sensible order for constructing paths.
So, those are some of my thoughts...
Before diving deep into an implementation, I wanted to see whether anyone else on StackOverflow has experience with this sort of problem and could provide pointers for an accurate and efficient implementation.
Edit:
I'm especially interested in the "Gradient" suggestion made by ellisbben. And my core data structure (ignoring some of the optimizing interpolation shortcuts) can be represented as the summation of a set of 2D Gaussian functions, which is totally differentiable.
I suppose I'll need a data structure to represent a three-dimensional slope, and a function for calculating that slope vector at an arbitrary point. Off the top of my head, I don't know how to do that (though it seems like it ought to be easy), but if you have a link explaining the math, I'd be much obliged!
UPDATE:
Thanks to the excellent contributions by ellisbben and Azim, I can now calculate the contour angle for any arbitrary point in the field. Drawing the real topo lines will follow shortly!
Here are updated renderings, with and without the ghetto raster-based topo-renderer that I've been using. Each image includes a thousand random sample points, represented by red dots. The angle-of-contour at that point is represented by a white line. In certain cases, no slope could be measured at the given point (based on the granularity of interpolation), so the red dot occurs without a corresponding angle-of-contour line.
Enjoy!
(NOTE: These renderings use a different surface topography than the previous renderings -- since I randomly generate the data structures on each iteration, while I'm prototyping -- but the core rendering method is the same, so I'm sure you get the idea.)
Here's a fun fact: over on the right-hand-side of these renderings, you'll see a bunch of weird contour lines at perfect horizontal and vertical angles. These are artifacts of the interpolation process, which uses a grid of interpolators to reduce the number of computations (by roughly a factor of five) necessary to perform the core rendering operations. All of those weird contour lines occur on the boundary between two interpolator grid cells.
Luckily, those artifacts don't actually matter. Although the artifacts are detectable during slope calculation, the final renderer won't notice them, since it operates at a different bit depth.
UPDATE AGAIN:
Aaaaaaaand, as one final indulgence before I go to sleep, here's another pair of renderings, one in the old-school "continuous color" style, and one with 20,000 gradient samples. In this set of renderings, I've eliminated the red dot for point-samples, since it unnecessarily clutters the image.
Here, you can really see those interpolation artifacts that I referred to earlier, thanks to the grid-structure of the interpolator collection. I should emphasize that those artifacts will be completely invisible on the final contour rendering (since the difference in magnitude between any two adjacent interpolator cells is less than the bit depth of the rendered image).
Bon appetit!!
The gradient is a mathematical operator that may help you.
If you can turn your interpolation into a differentiable function, the gradient of the height will always point in the direction of steepest ascent. All curves of equal height are perpendicular to the gradient of height evaluated at that point.
Your idea about starting from the highest point is sensible, but might miss features if there is more than one local maximum.
I'd suggest
pick height values at which you will draw lines
create a bunch of points on a fine, regularly spaced grid, then walk each point in small steps in the gradient direction towards the nearest height at which you want to draw a line
create curves by stepping each point perpendicular to the gradient; eliminate excess points by killing a point when another curve comes too close to it; but to avoid destroying the center of hourglass-like figures, you might need to check the angle between the oriented vectors perpendicular to the gradient at the two points. (When I say oriented, I mean make sure that the angle between the gradient and the perpendicular value you calculate is always 90 degrees in the same direction.)
In response to your comment to #erickson, and to answer the point about calculating the gradient of your function: instead of calculating the derivatives of your 300-term function, you could do a numeric differentiation as follows.
Given a point [x,y] in your image you could calculate the gradient (the direction of steepest ascent) as
g = [ (f(x+dx,y) - f(x-dx,y)) / (2*dx),
      (f(x,y+dy) - f(x,y-dy)) / (2*dy) ]
where dx and dy could be the spacing in your grid. The contour line will run perpendicular to the gradient. So, to get the contour direction, c, we can multiply g = [v, w] by the matrix A = [0 -1; 1 0], giving
c = [-w,v]
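In code that looks something like the following (assuming f(x, y) is whatever differentiable interpolant the map uses):

#include <functional>
#include <utility>

using Field = std::function<double(double, double)>;

// Central-difference approximation of the gradient g = (df/dx, df/dy).
std::pair<double, double> gradient(const Field& f, double x, double y,
                                   double dx, double dy)
{
    double v = (f(x + dx, y) - f(x - dx, y)) / (2 * dx);
    double w = (f(x, y + dy) - f(x, y - dy)) / (2 * dy);
    return {v, w};
}

// Rotate the gradient by 90 degrees to get the local contour direction c = (-w, v).
std::pair<double, double> contourDirection(const Field& f, double x, double y,
                                           double dx, double dy)
{
    auto [v, w] = gradient(f, x, y, dx, dy);
    return {-w, v};
}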
Alternately, there is the marching squares algorithm which seems appropriate to your problem, although you may want to smooth the results if you use a coarse grid.
The topo curves you want to draw are isosurfaces of a scalar field over 2 dimensions. For isosurfaces in 3 dimensions, there is the marching cubes algorithm.
I've wanted something like this myself, but haven't found a vector-based solution.
A raster-based solution isn't that bad, though, especially if your data is raster-based. If your data is vector-based too (in other words, you have a 3D model of your surface), you should be able to do some real math to find the intersection curves with horizontal planes at varying elevations.
For a raster-based approach, I look at each pair of neighboring pixels. If one is above a contour level, and one is below, obviously a contour line runs between them. The trick I used to anti-alias the contour line is to mix the contour line color into both pixels, proportional to their closeness to the idealized contour line.
Maybe some examples will help. Suppose that the current pixel is at an "elevation" of 12 ft, a neighbor is at an elevation of 8 ft, and contour lines are every 10 ft. Then, there is a contour line half way between; paint the current pixel with the contour line color at 50% opacity. Another pixel is at 11 feet and has a neighbor at 6 feet. Color the current pixel at 80% opacity.
alpha = (contour - neighbor) / (current - neighbor)
Unfortunately, I don't have the code handy, and there might have been a bit more to it (I vaguely recall looking at diagonal neighbors too, and adjusting by sqrt(2) / 2). I hope this is enough to give you the gist.
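From memory, the per-pixel-pair test would be something along these lines (a reconstruction, not the original code); it returns the opacity with which to blend the contour colour into the current pixel.

#include <algorithm>
#include <cmath>

// current, neighbor: elevations of the pixel and one of its neighbours.
// interval: contour spacing (e.g. 10 ft). Returns 0 if no contour level falls
// between the two samples, otherwise the alpha from the formula above.
double contourAlpha(double current, double neighbor, double interval)
{
    if (current == neighbor) return 0.0;
    double lo = std::min(current, neighbor), hi = std::max(current, neighbor);
    double level = std::ceil(lo / interval) * interval;   // lowest contour at or above lo
    if (level > hi) return 0.0;
    return (level - neighbor) / (current - neighbor);
}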
It occurred to me that what you're trying to do would be pretty easy to do in MATLAB, using the contour function. Doing things like making low-density approximations to your contours can probably be done with some fairly simple post-processing of the contours.
Fortunately, GNU Octave, a MATLAB clone, has implementations of the various contour plotting functions. You could look at that code for an algorithm and implementation that's almost certainly mathematically sound. Or, you might just be able to offload the processing to Octave. Check out the page on interfacing with other languages to see if that would be easier.
Disclosure: I haven't used Octave very much, and I haven't actually tested its contour plotting. However, from my experience with MATLAB, I can say that it will give you almost everything you're asking for in just a few lines of code, provided you get your data into MATLAB.
Also, congratulations on making a very Van Gogh-esque slopefield plot.
I always check places like http://mathworld.wolfram.com before going too deep on my own :)
Maybe their curves section would help? Or maybe the entry on maps.
Compare what you have rendered with a real-world topo map - they look identical to me! I wouldn't change a thing...
Write the data out as an HGT file (very simple digital elevation data format used by USGS) and use the free and open-source gdal_contour tool to create contours. That works very well for terrestrial maps, the constraint being that the data points are signed 16-bit numbers, which fits the earthly range of heights in metres very well, but may not be enough for your data, which I assume not to be a map of actual terrain - although you do mention terrain maps.
I recommend the CONREC approach:
Create an empty line segment list
Split your data into regular grid squares
For each grid square, split the square into 4 component triangles:
For each triangle, handle the cases (a through j):
If the case produces a line segment:
Calculate its endpoints
Store the line segment in the list
Draw each line segment in the line segment list
If the lines are too jagged, use a smaller grid. If the lines are smooth enough and the algorithm is taking too long, use a larger grid.
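The per-triangle step is the heart of it; the lettered cases essentially boil down to interpolating along the edges whose endpoints straddle the contour level. A simplified sketch (my own code, ignoring the degenerate cases where a vertex lies exactly on the level):

#include <array>
#include <optional>
#include <vector>

struct P2 { double x, y; };
struct Seg { P2 a, b; };

// Returns the straight segment where the contour `level` crosses a triangle
// with corners p[0..2] and heights h[0..2], or nothing if it misses.
std::optional<Seg> contourTriangle(const std::array<P2, 3>& p,
                                   const std::array<double, 3>& h,
                                   double level)
{
    std::vector<P2> hits;
    for (int i = 0; i < 3; ++i) {
        int j = (i + 1) % 3;
        if ((h[i] < level) != (h[j] < level)) {           // edge i-j straddles the level
            double t = (level - h[i]) / (h[j] - h[i]);    // linear interpolation along the edge
            hits.push_back({p[i].x + t * (p[j].x - p[i].x),
                            p[i].y + t * (p[j].y - p[i].y)});
        }
    }
    if (hits.size() == 2) return Seg{hits[0], hits[1]};
    return std::nullopt;
}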