I need a matrix (pixel) representation of a set of geometric primitives (line, curve, circle, rectangle, and their filled forms). For lines the answer is already on [SO], and rectangles can therefore be pixelated easily. For the remaining primitives, however, two questions remain:
1) How do I pixelate a curve, including a circle (a closed curve)?
2) How do I pixelate a filled simple or complex shape (rectangle, multi-patch)?
The simplest way (currently in use) is to employ a visualization library (such as Matplotlib for Python) to save the result (a map of geometric primitives) as a pixelated image on disk (or in RAM) and then reuse it for the purpose of interest. This method can apparently handle any complexity, since whatever the visualizer does in the background, the output is a 2D image, i.e., a 2D matrix. However, serious problems emerge with this approach:
1) the procedure is very slow;
2) the procedure is not standard but heavily dependent on the settings of the visualizer, and the low-level configuration is often impossible or difficult to control. In other words, the visualizer is a black box that does not give the control over the procedure that is required.
What you are doing is called "scan conversion" of geometric primitives.
For line segments, you already know about the Bresenham algorithm.
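For reference, a minimal sketch of the integer Bresenham line rasterizer in Python (the set_pixel callback that writes into your matrix is an assumed name):

import_free_note = None  # plain function, no imports needed
def bresenham_line(x0, y0, x1, y1, set_pixel):
    # Integer Bresenham: step along both axes, keeping an error term that
    # decides when to advance the minor axis.
    dx = abs(x1 - x0)
    dy = -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        set_pixel(x0, y0)
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy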
There is a similar one for circles (the midpoint circle algorithm), which is a bit trickier as regards the handling of the endpoints.
General curves are a broader topic. You can think of conics, splines or hand-drawn curves. One approach is to approximate them with a polyline.
To fill polygons, there is a scanline algorithm available (consider a sweeping horizontal line and fill between the intersections with the polygon outline).
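A minimal sketch of that scanline fill in Python, assuming poly is a list of (x, y) vertices of a simple polygon and set_pixel writes into your matrix:

def scanline_fill(poly, set_pixel):
    # For each horizontal scanline, intersect it with every polygon edge
    # and fill between successive pairs of crossings.
    ys = [p[1] for p in poly]
    for y in range(int(min(ys)), int(max(ys)) + 1):
        xs = []
        for (x0, y0), (x1, y1) in zip(poly, poly[1:] + poly[:1]):
            if (y0 <= y < y1) or (y1 <= y < y0):      # edge crosses this scanline
                xs.append(x0 + (y - y0) * (x1 - x0) / (y1 - y0))
        xs.sort()
        for xa, xb in zip(xs[::2], xs[1::2]):         # fill between crossing pairs
            for x in range(int(round(xa)), int(round(xb)) + 1):
                set_pixel(x, y)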
To fill arbitrary shapes, an option is to draw the outline and use seed filling (from a given internal point).
You will find relevant material at http://www.cse.ohio-state.edu/~gurari/course/cis681/cis681Ch5.html
Related
I'm trying to write my own CAD program, and it's pretty important that I give the user the ability to select vertices/edges/faces(triangles) of interest by drawing a box or polygon on a 2D screen, and then highlighting whatever is underneath in the 3D view (both ignoring the back faces and also not ignoring the back faces).
How is this done usually? Is there any open-source example I can look at? What's generally the process for this?
This is especially hard when you are trying to handle 1 million+ triangles.
There are many ways to do this; here are the two most often used:
Ray picking
First see:
OpenGL 3D-raypicking with high poly meshes
The idea is to cast a ray (or rays) from the camera focal point in the mouse (or cursor) direction; whatever the ray hits is selected. The link above exploits OpenGL rendering, where you can do this very easily and fast (almost for free) and the result is pixel perfect. In order to use a selection box/polygon you need to read all of its pixels on the CPU side and convert them to a list of selected entities. This is slightly slower but can still be done very fast (regardless of the complexity of the rendered scene). This approach is O(1); if you do the same on the CPU it will be much, much slower, with complexity O(n) where n is the total number of entities (unless a BVH or octree is used). This method will, however, select only what is visible (no back faces or objects behind).
Geometry tests
Basically your 2D rectangle will slice the perspective 3D frustum into a smaller one, and whatever is inside or intersecting it should be selected. You can compute this on the CPU side with geometric tests (object inside a cuboid or box), something like this:
Cone to box collision
The complexity is also O(n) unless a BVH or octree is used. This method will select all objects (even ones that are not visible).
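To illustrate the CPU-side test, here is a simplified screen-space variant (not the full frustum slicing described above): project each vertex with your model-view-projection matrix and test it against the selection rectangle given in normalized device coordinates. The names mvp and verts are placeholders for your own data:

import numpy as np

def select_vertices(verts, mvp, xmin, xmax, ymin, ymax):
    # verts: (N, 3) object-space vertices, mvp: 4x4 model-view-projection matrix.
    v = np.hstack([verts, np.ones((len(verts), 1))])    # homogeneous coordinates
    clip = v @ mvp.T
    ndc = clip[:, :3] / clip[:, 3:4]                     # perspective divide
    inside = ((ndc[:, 0] >= xmin) & (ndc[:, 0] <= xmax) &
              (ndc[:, 1] >= ymin) & (ndc[:, 1] <= ymax) &
              (ndc[:, 2] >= -1.0) & (ndc[:, 2] <= 1.0))  # keep near/far clipping
    return np.nonzero(inside)[0]

Like the geometric test, this selects hidden geometry as well, since no visibility information is used.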
Also I think this might be interesting reading for you:
simple Drag&Drop in C++
It shows a simple C++ app architecture capable of placing and moving objects in 2D. It's a basic starting point for a CAD-like app. For 3D you just add the matrix math and/or editing controls...
Also a hint for selecting in CAD/CAM software (IIRC AutoCAD started with it): if your selection box is dragged from left to right and top to bottom you select all objects that are fully inside or intersecting, and if the selection box is dragged in the reverse direction you select only what is fully inside. This allows for more comfortable editing.
Let's say I have a static object and a movable object that can be moved and rotated. What is the best way to very quickly calculate the difference of those two meshes?
Precision here is not so important, speed is though, since I have to use it in the update phase of the main loop.
Maybe, given the strict time limit, modifying the static object's vertices and triangles directly is to be preferred. Should voxels be preferred here instead?
EDIT: The use case is an interactive viewer of a wood panel (parallelepiped) and a milling tool (a revolved contour, something like these).
The milling tool can be rotated and can work oriented at varying degrees (5 axes).
EDIT 2: The milling tool may not pierce the wood.
EDIT 3: The panel can be as large as 6000x2000mm and the milling tool can be as little as 3x3mm.
If you need the best possible performance then the generic CSG approach may be too slow for you (though it still depends on the meshes and the target hardware).
You may try to find some specialized algorithm, coded for your specific meshes. Let's say you have two cubes - one is a 'wall' and the second is a 'window' - then it's much easier/faster to compute the resulting mesh with your custom code than with full CSG. Unfortunately you don't say anything about your meshes.
You may also try to make it a 2D problem, using simplified meshes to compute a result that 'looks as expected'.
If the movement of your meshes is somehow limited you may be able to precompute full or partial results for different mesh combinations to use at runtime.
You may use some space partitioning like BSP or octrees to divide your meshes during the precomputing stage. This way you could split one big problem into many smaller ones that may be faster to compute, or at least allow the solution to be multi-threaded.
You mentioned voxels - if you're fine with their look and limits, you may voxelize both meshes and just read and combine the two voxel values instead of one. Then you would triangulate the result using an algorithm like Marching Cubes.
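As a rough sketch of that voxel route (assuming you already have both shapes voxelized into boolean numpy arrays panel_vox and tool_vox on the same grid; those names are placeholders):

import numpy as np
from skimage import measure

def voxel_difference(panel_vox, tool_vox):
    # Boolean CSG on the voxel grid: keep panel material not removed by the tool.
    result = panel_vox & ~tool_vox
    # Rebuild a triangle mesh from the boolean volume with Marching Cubes.
    verts, faces, normals, values = measure.marching_cubes(result.astype(np.float32), level=0.5)
    return verts, faces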
Those are all just some general ideas but we'll need better info to help you more.
EDIT:
With your description it looks like you're modeling some bas-relief, so you may use Relief Mapping to fake this effect. It's based on a height map stored as a texture, so you'd only need to update a few pixels of the texture and render a plane. It should be quite fast compared to other approaches; the downside is that, being based on a height map, you can't get the shapes that a T-slot or dovetail cutter would create.
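The height-map update itself is cheap. A sketch with numpy, where height is the 2D depth map of the panel and tool_footprint the surface profile the tool leaves at its current position (both names are assumptions):

import numpy as np

def apply_cut(height, tool_footprint, ix, iy):
    # height: 2D array with the remaining material height per texel.
    # tool_footprint: small 2D array with the surface the tool would leave behind.
    h, w = tool_footprint.shape
    region = height[iy:iy + h, ix:ix + w]
    # Material can only be removed, never added back.
    np.minimum(region, tool_footprint, out=region)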
If you want the real geometry then I'd start from a simple plane as your panel (you don't need full 3D yet, just the front surface) and divide it with a 2D grid. Each grid element should be slightly bigger than the drill size, and every element is a separate mesh. In each frame update you'd cut one, or at most four, elements that are touched by the drill. Thanks to this grid, all your cutting operations will run on a very simple mesh, so they should reach the speed you intend. You can also cut all affected elements in separate threads. After the cutting is done you upload only the currently modified elements to the GPU, so you may end up with a quite complex mesh overall but only small modifications per frame.
To give you some background as to what I'm doing: I'm trying to quantitatively record variations in flow of a compressible fluid via image analysis. One way to do this is to exploit the fact that the index of refraction of the fluid is directly related to its density. If you set up some kind of image behind the flow, the distortion in the image due to refractive index changes throughout the fluid field leads you to a density gradient, which helps to characterize the flow pattern.
I have a set of routines that do this successfully with a regular 2D pattern of dots. The dot pattern is slightly distorted, and by comparing the position of the dots in the distorted image with that in the non-distorted image, I get a displacement field, which is exactly what I need. The problem with this method is resolution. The resolution is limited to the number of dots in the field, and I'm exploring methods that give me more data.
One idea I've had is to use a regular grid of horizontal and vertical lines. This image will distort the same way, but instead of getting only the displacement of a dot, I'll have the continuous distortion of a grid. It seems like there must be some standard algorithm or procedure to compare one geometric grid to another and infer some kind of displacement field. Nonetheless, I haven't found anything like this in my research.
Does anyone have some ideas that might point me in the right direction? FYI, I am not a computer scientist -- I'm an engineer. I say that only because there may be some obvious approach I'm neglecting due to coming from a different field. But I can program. I'm using MATLAB, but I can read Python, C/C++, etc.
Here are examples of the type of images I'm working with:
[Regular and distorted example images]
I think you are looking for the Digital Image Correlation algorithm.
Here you can see a demo.
Here is a Matlab Implementation.
From Wikipedia:
Digital Image Correlation and Tracking (DIC/DDIT) is an optical method that employs tracking & image registration techniques for accurate 2D and 3D measurements of changes in images. This is often used to measure deformation (engineering), displacement, and strain, but it is widely applied in many areas of science and engineering.
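To give a feel for the basic idea (this is not the Mathematica code used below, just a hedged Python sketch): split the reference image into small windows and estimate each window's shift in the distorted image by phase correlation. The image arrays are placeholders:

import numpy as np
from skimage.registration import phase_cross_correlation

def displacement_field(reference, distorted, win=32):
    # Per-window displacement: correlate each reference window with the
    # corresponding window of the distorted image.
    h, w = reference.shape
    field = {}
    for y in range(0, h - win, win):
        for x in range(0, w - win, win):
            shift, error, _ = phase_cross_correlation(
                reference[y:y + win, x:x + win],
                distorted[y:y + win, x:x + win])
            field[(y + win // 2, x + win // 2)] = shift   # (dy, dx) for this window
    return field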
Edit
Here I applied the DIC algorithm to your distorted image using Mathematica, showing the relative displacements.
Edit
You may also easily identify the maximum displacement zone:
Edit
After some work (quite a bit, frankly) you can come up with something like this, representing the "displacement field", showing clearly that you are dealing with a vortex:
(Darker and bigger arrows mean more displacement (velocity))
Post a comment if you are interested in the Mathematica code for this one; I'm omitting it because I don't think it would help anybody else.
I would also suggest that a line tracking algorithm would work well.
Simply start at the first pixel row of the image and start following each of the vertical lines downwards (you only need the first row to find the starting points). Following a line can be done with a simple scheme that moves orthogonally to the gradient of that line, i.e., it follows the line. When you reach a crossing with a horizontal line, you can measure that point (in x, y coordinates) and compare it to the corresponding crossing point in your distorted image.
Since your grid is regular, you know that the n-th measured crossing point on the m-th vertical black line corresponds to the same crossing in both images. Then you simply compare both points by computing their distance. Do this for each line on your grid and you will get how far each crossing point of the grid is distorted.
This kind of line following is also used in basic edge linking algorithms and in the Canny edge detector.
(These are just theoretical ideas and I cannot provide you with a ready-made algorithm, but I guess it should work easily on distorted images like the ones you have there... maybe it is helpful for you.)
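One practical way to get those crossing points (a sketch under the assumption that the distortion is smaller than the grid spacing, not the exact line-following procedure described above): detect corners in both images and match each reference crossing to its nearest neighbour in the distorted image:

import numpy as np
from skimage.feature import corner_harris, corner_peaks

def crossing_displacements(reference, distorted):
    # Grid crossings respond strongly to a corner detector.
    ref_pts = corner_peaks(corner_harris(reference), min_distance=5)
    dis_pts = corner_peaks(corner_harris(distorted), min_distance=5)
    shifts = []
    for p in ref_pts:
        d = np.linalg.norm(dis_pts - p, axis=1)
        shifts.append((p, dis_pts[np.argmin(d)] - p))   # (crossing, displacement vector)
    return shifts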
I have some map files consisting of 'polylines' (each line is just a list of vertices) representing tunnels, and I want to try and find the tunnel 'center line' (shown, roughly, in red below).
I've had some success in the past using Delaunay triangulation but I'd like to avoid that method as it does not (in general) allow for easy/frequent modification of my map data.
Any ideas on how I might be able to do this?
An "algorithm" that works well with localized data changes.
The critic's view
The Good
The nice part is that it uses a mixture of image processing and graph operations available in most libraries, may be parallelized easily, is reasonably fast, may be tuned to use a relatively small memory footprint, and doesn't have to be recalculated outside the modified area if you store the intermediate results.
The Bad
I wrote "algorithm", in quotes, just because I developed it and surely is not robust enough to cope with pathological cases. If your graph has a lot of cycles you may end up with some phantom lines. More on this and examples later.
And The Ugly
The ugly part is that you need to be able to flood fill the map, which is not always possible. I posted a comment a few days ago asking if your graphs can be flood filled, but didn't receive an answer. So I decided to post it anyway.
The Sketch
The idea is:
Use image processing to get a fine line of pixels representing the center path
Partition the image into chunks commensurate with the tunnel's thinnest passages
In each partition, place a point at the "center of mass" of the contained pixels
Use those points as the Vertices of a Graph
Add Edges to the Graph based on a "near neighbour" policy
Remove spurious small cycles in the induced Graph
End- The remaining Edges represent your desired path
The parallelization opportunity arises from the fact that the partitions may be computed in standalone processes, and the resulting graph may be partitioned to find the small cycles that need to be removed. These factors also allow the memory footprint to be reduced by serializing the work instead of doing the calculations in parallel, but I didn't go through this.
The Plot
I'll not provide pseudocode, as the only difficult part is the one not covered by your libraries. Instead of pseudocode I'll post the images resulting from the successive steps.
I wrote the program in Mathematica, and I can post it if it is of some use to you.
A- Start with a nice flood filled tunnel image
B- Apply a Distance Transformation
The Distance Transformation gives the distance transform of the image, where the value of each pixel is replaced by its distance to the nearest background pixel.
You can see that our desired path is the Local Maxima within the tunnel
C- Convolve the image with an appropriate kernel
The selected kernel is a Laplacian-of-Gaussian kernel of pixel radius 2. It has the magic property of enhancing the gray level edges, as you can see below.
D- Cutoff gray levels and Binarize the image
To get a nice view of the center line!
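Steps B-D translate almost directly to a few library calls. A sketch with scipy (the Laplacian-of-Gaussian radius of 2 is approximated with the Gaussian sigma, the cutoff is crude, and the image name is a placeholder):

import numpy as np
from scipy import ndimage

def center_line_mask(filled):
    # filled: boolean image, True inside the flood-filled tunnel.
    dist = ndimage.distance_transform_edt(filled)      # B: distance transform
    log = ndimage.gaussian_laplace(dist, sigma=2)      # C: Laplacian-of-Gaussian
    # D: the ridge of the distance transform shows up as strongly negative LoG values.
    return log < 0.3 * log.min()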
Comment
Perhaps that is enough for you, as you may know how to transform a thin line into an approximate sequence of piecewise segments. As that is not the case for me, I continued along this path to get the desired segments.
E- Image Partition
Here is where some advantages of the algorithm show up: you may start using parallel processing or decide to process each segment one at a time. You may also compare the resulting segments with the previous run and re-use the previous results.
F- Center of Mass detection
All the white points in each sub-image are replaced by only one point at the center of mass
X_CM = (Σ_{i ∈ Points} X_i) / NumPoints
Y_CM = (Σ_{i ∈ Points} Y_i) / NumPoints
The white pixels are difficult to see, but there they are.
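A sketch of this step, assuming mask is the binarized center-line image and tile the partition size:

import numpy as np

def tile_centroids(mask, tile=16):
    # Replace the white pixels of each tile by a single point at their center of mass.
    points = []
    h, w = mask.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            ys, xs = np.nonzero(mask[y:y + tile, x:x + tile])
            if len(xs):
                points.append((x + xs.mean(), y + ys.mean()))
    return points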
G- Graph setup from Vertices
Form a Graph using the selected points as Vertices. Still no Edges.
H- select Candidate Edges
Using the Euclidean distance between points, select candidate edges. A cutoff is used to select an appropriate set of edges; here we are using 1.5 times the sub-image size.
As you can see, the resulting Graph has a few small cycles that we are going to remove in the next step.
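The edge selection by distance cutoff, as a sketch (1.5 times the tile size, as above; O(n^2) but easy to partition):

import numpy as np

def candidate_edges(points, tile=16, factor=1.5):
    # Connect every pair of centroids closer than factor * tile size.
    pts = np.asarray(points)
    edges = []
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            if np.linalg.norm(pts[i] - pts[j]) < factor * tile:
                edges.append((i, j))
    return edges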
H- Remove Small Cycles
Using a cycle detection routine we remove the small cycles up to a certain length. The cutoff length depends on a few parameters and you should determine it empirically for your family of graphs.
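One possible way to do that with networkx (a sketch; max_len is the empirical cutoff mentioned above, and each short cycle is broken by dropping one of its edges):

import networkx as nx

def remove_small_cycles(num_vertices, edges, max_len=4):
    g = nx.Graph()
    g.add_nodes_from(range(num_vertices))
    g.add_edges_from(edges)
    for cycle in nx.cycle_basis(g):          # independent cycles of the graph
        if len(cycle) <= max_len:
            u, v = cycle[0], cycle[1]
            if g.has_edge(u, v):
                g.remove_edge(u, v)
    return g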
I- That's it!
You can see that the resulting center line is shifted a little bit upwards. The reason is that I'm superimposing images of different type in Mathematica ... and I gave up trying to convince the program to do what I want :)
A Few Shots
As I did the testing, I collected a few images. They are probably the most un-tunnelish things in the world, but my Tunnels-101 went astray.
Anyway, here they are. Remember that I have a displacement of a few pixels upwards ...
HTH !
Update
Just in case you have access to Mathematica 8 (I got it today) there is a new function Thinning. Just look:
This is a pretty classic skeletonization problem; there are lots of algorithms available. Some algorithms work in principle on outline contours, but since almost everyone uses them on images, I'm not sure how available such things will be. Anyway, if you can just plot and fill the sewer outlines and then use a skeletonization algorithm, you could get something close to the midline (within pixel resolution).
Then you could walk along those lines and do a binary search with circles until you hit at least two separate line segments (three if you're at a branch point). The midpoint of the two spots you first hit, or the center of a circle touching the three points you first hit, is a good estimate of the center.
Well, in Python, using the package skimage, it is an easy task, as follows.
import pylab as pl
from skimage import morphology as mp

# Load the tunnel image, take the first channel and invert it so the tunnel is
# foreground; threshold to the binary mask that medial_axis expects.
tun = (1 - pl.imread('tunnel.png')[..., 0]) > 0.5   # your tunnel image
skl = mp.medial_axis(tun)                           # skeleton (medial axis)

pl.subplot(121)
pl.imshow(tun, cmap=pl.cm.gray)
pl.subplot(122)
pl.imshow(skl, cmap=pl.cm.gray)
pl.show()
I've always wondered this. In a game like GTA, where there are tens of thousands of objects, how does the game know as soon as you're on a health pack?
There can't possibly be an event listener for each object, can there? Iterating over everything isn't good either. I'm just wondering how it's actually done.
There's no one answer to this, but large worlds are often space-partitioned using something along the lines of a quadtree or kd-tree, which brings search times for finding nearest neighbours below linear time (a fractional power, or at worst O(N^(2/3)) for a 3D game). These methods are often referred to as BSP, for binary space partitioning.
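A toy quadtree in Python to show the idea (real engines use tuned native implementations; insert points, then query a rectangle):

class QuadTree:
    # Each node covers a square (x, y, size) and splits into four children when full.
    def __init__(self, x, y, size, capacity=4):
        self.x, self.y, self.size, self.capacity = x, y, size, capacity
        self.points, self.children = [], None

    def insert(self, px, py):
        if not (self.x <= px < self.x + self.size and self.y <= py < self.y + self.size):
            return False                                  # point not in this node
        if self.children is None and len(self.points) < self.capacity:
            self.points.append((px, py))
            return True
        if self.children is None:                         # split into four quadrants
            h = self.size / 2
            self.children = [QuadTree(self.x + dx, self.y + dy, h, self.capacity)
                             for dx in (0, h) for dy in (0, h)]
            for q in self.points:
                self._insert_child(*q)
            self.points = []
        return self._insert_child(px, py)

    def _insert_child(self, px, py):
        return any(c.insert(px, py) for c in self.children)

    def query(self, qx, qy, qw, qh, out=None):
        # Collect all points inside the query rectangle, skipping disjoint nodes.
        out = [] if out is None else out
        if (qx > self.x + self.size or qx + qw < self.x or
                qy > self.y + self.size or qy + qh < self.y):
            return out
        out.extend(p for p in self.points
                   if qx <= p[0] <= qx + qw and qy <= p[1] <= qy + qh)
        if self.children:
            for c in self.children:
                c.query(qx, qy, qw, qh, out)
        return out

Queries only visit the nodes overlapping the search rectangle, which is what brings the lookup below linear time for reasonably distributed objects.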
With regard to collision detection, each object also generally has a bounding volume mesh (a set of polygons forming a convex hull) associated with it. These highly simplified meshes (sometimes just a cube) aren't drawn but are used in the detection of collisions. The most rudimentary method is to create a plane perpendicular to the line connecting the midpoints of the two objects, with the plane intersecting that line at its midpoint. If an object's bounding volume has points on both sides of this plane, it is a collision (you only need to test one of the two bounding volumes against the plane). Another method is the enhanced GJK distance algorithm. If you want a tutorial to dive through, check out NeHe Productions' OpenGL lesson #30.
Incidentally, bounding volumes can also be used for other optimizations, such as occlusion queries. This is a process of determining which objects are behind other objects (occluders) and therefore do not need to be processed/rendered. Bounding volumes can also be used for frustum culling, which is the process of determining which objects are outside of the perspective viewing volume (too near, too far, or beyond your field-of-view angle) and therefore do not need to be rendered.
As Kylotan noted, using a bounding volume can generate false positives when detecting occlusion and simply does not work at all for some types of objects such as toroids (e.g. looking through the hole in a donut). Having objects like these occlude correctly is a whole other thread on portal-culling.
Quadtrees and octrees are popular ways, using space partitioning, to accomplish this. The latter example shows a 97% reduction in processing over a pair-by-pair brute-force search for collisions.
A common technique in game physics engines is the sweep-and-prune method. This is explained in David Baraff's SIGGRAPH notes (see Motion with Constraints chapter). Havok definitely uses this, I think it's an option in Bullet, but I'm not sure about PhysX.
The idea is that you can look at the overlaps of AABBs (axis-aligned bounding boxes) on each axis; if the projections of two objects' AABBs overlap on all three axes, then the AABBs themselves must overlap. You can check each axis relatively quickly by sorting the start and end points of the AABBs; there's a lot of temporal coherence between frames, since most objects usually aren't moving very fast, so the sort order doesn't change much.
Once sweep-and-prune detects an overlap between AABBs, you can do the more detailed check for the objects, e.g. sphere vs. box. If the detailed check reveals a collision, you can then resolve the collision by applying forces, and/or trigger a game event or play a sound effect.
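The 1D core of sweep-and-prune, as a sketch: sort the interval endpoints on one axis and collect the pairs whose intervals overlap (repeat per axis and intersect the candidate sets before the narrow-phase test). Object ids are assumed to be integer indices:

def sweep_and_prune_1d(intervals):
    # intervals: list of (obj_id, lo, hi) giving each object's AABB extent on one axis.
    events = []
    for oid, lo, hi in intervals:
        events.append((lo, 0, oid))   # 0 = interval starts
        events.append((hi, 1, oid))   # 1 = interval ends
    events.sort()
    active, pairs = set(), set()
    for _, kind, oid in events:
        if kind == 0:
            pairs.update(frozenset((oid, other)) for other in active)
            active.add(oid)
        else:
            active.discard(oid)
    return pairs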
Correct. Normally there is not an event listener for each object. Often there is a non-binary tree structure in memory that mimics your game's map. Imagine a metro/underground map.
This memory structure is a collection of the things in the game: you (the player), monsters, and items that you can pick up or that might blow up and harm you. As the player moves around the game, the player object's pointer is moved within the game/map memory structure.
see How should I have my game entities knowledgeable of the things around them?
I would like to recommend the solid book by Christer Ericson on real-time collision detection. It presents the basics of collision detection while providing references to contemporary research efforts.
Real-Time Collision Detection (The Morgan Kaufmann Series in Interactive 3-D Technology)
There are a lot of optimizations that can be used.
Firstly, every object (say with index i) is bounded by a cube with center coordinates CXi, CYi and size Si.
Secondly, collision detection works with estimations:
a) Find all pairs of cubes i, j satisfying: Abs(CXi - CXj) < (Si + Sj) AND Abs(CYi - CYj) < (Si + Sj)
b) Now we work only with the pairs obtained in a). We calculate the distances between them more accurately, something like Sqrt(Sqr(CXi - CXj) + Sqr(CYi - CYj)); the objects are now represented as sets of a few simple figures - cubes, spheres, cones - and we use geometry formulas to check intersections of these figures.
c) Objects from b) with detected intersections are processed as collisions, with physics calculations etc.
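Step a) as a plain sketch (objects given as centre coordinates and sizes, as above):

def broad_phase_pairs(objects):
    # objects: list of (cx, cy, s) bounding cubes. Returns index pairs passing the
    # cheap test Abs(CXi - CXj) < (Si + Sj) and Abs(CYi - CYj) < (Si + Sj).
    pairs = []
    for i, (cxi, cyi, si) in enumerate(objects):
        for j in range(i + 1, len(objects)):
            cxj, cyj, sj = objects[j]
            if abs(cxi - cxj) < si + sj and abs(cyi - cyj) < si + sj:
                pairs.append((i, j))
    return pairs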