First, I'm sorry this question is rough, but I don't want to go into too much detail, so I'm just asking for related resources such as articles, libraries, or tips.
My program needs to do intensive ray-triangle intersection computation (there are millions of rays and triangles), and my goal is to make it as fast as I can.
What I have done is:
Use the fastest ray-triangle algorithm that I know.
Use an octree (from Game Programming Gems 1, sections 4.10 and 4.11).
Use "An Efficient and Robust Ray-Box Intersection Algorithm" for the ray/box tests during octree traversal.
It is faster than before I applied those algorithms, but I believe it could be faster still. Could you please shed light on anything else that could make it faster?
Thanks.
The place to ask these questions is ompf2.com, a forum with topics about realtime (and also non-realtime) raytracing.
The OMPF forum is the right place for this question, but since I'm here today...
Don't use a ray/box intersection test for octree traversal. You may use it for the root node of the tree, but that's it. Once you know the distances to the entry and exit of the root box, you can calculate the distances to the x, y, and z partition planes - the planes that subdivide the box. If the distances to the front and back are f and b respectively, then you can determine which child nodes of the box are hit by analyzing the f, b, x, y, z distances. You can also determine the order in which to traverse the child nodes and completely reject many of them.
At most 4 of the children can be hit since the ray starts in one octant and only changes octants when it crosses one of the 3 partition planes.
Also, since the traversal is recursive, you'll need the entry and exit distances for the child nodes. These distances are chosen from the set (f, b, x, y, z) which you've already computed.
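A rough sketch of that traversal in C++, just to make the idea concrete. This is not the answerer's actual code; the Ray and Node layouts are placeholders, leaf handling is omitted, and degenerate direction components are only handled implicitly (a division by zero produces infinities that fail the interval test):

```cpp
#include <algorithm>

struct Ray  { float ox, oy, oz, dx, dy, dz; };

struct Node {
    float cx, cy, cz;            // centre of this node's box, i.e. the three partition planes
    Node* child[8] = {};         // children indexed by octant bits (x | y<<1 | z<<2)
    // leaf triangle list omitted
};

// Octant containing the ray point at parameter t.
static int octantAt(const Ray& r, const Node& n, float t) {
    int o = 0;
    if (r.ox + t * r.dx > n.cx) o |= 1;
    if (r.oy + t * r.dy > n.cy) o |= 2;
    if (r.oz + t * r.dz > n.cz) o |= 4;
    return o;
}

// Visit at most 4 children, front to back, between the node's entry f and exit b.
void traverse(const Ray& r, const Node& node, float f, float b) {
    // Distances to the three partition planes (they may lie outside [f, b]).
    float planes[3] = {
        (node.cx - r.ox) / r.dx,
        (node.cy - r.oy) / r.dy,
        (node.cz - r.oz) / r.dz,
    };
    std::sort(planes, planes + 3);

    // Sub-intervals of [f, b]: every endpoint comes from the set {f, b, x, y, z}.
    float ts[5];
    int   n = 0;
    ts[n++] = f;
    for (float t : planes)
        if (t > f && t < b) ts[n++] = t;
    ts[n++] = b;

    // Each sub-interval lies in exactly one octant; recurse front to back.
    for (int i = 0; i + 1 < n; ++i) {
        int oct = octantAt(r, node, 0.5f * (ts[i] + ts[i + 1]));
        if (Node* c = node.child[oct])
            traverse(r, *c, ts[i], ts[i + 1]);
    }
}
```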
I have been optimizing this for a very long time, and can safely say you have about an order of magnitude performance still on the table for trees many levels deep. I started right where you are now.
There are several optimizations you can do, but all of them depend on the exact domain of your problem. As far as general algorithms go, you are on the right track. Depending on the domain, you could:
Introduce a portal system
Move the calculations to a GPU and take advantage of parallel computation
Switch to a bounding volume hierarchy (BVH), a trend that has become quite popular in raytracing recently
You've already gotten a good start using a spatial sort coupled with fast intersection algorithms. For tracing single rays at a time, one of the best structures out there (for static scenes) is a K-d tree built using the Surface Area Heuristic.
However, for truly high-speed ray tracing you need to take advantage of:
Coherent packets of rays
Frusta
SIMD (see the sketch right after this list)
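To make the packet/SIMD idea concrete, here is a minimal sketch of testing four coherent rays against one AABB at once with SSE. The names are illustrative; real packet tracers extend this to whole-tree traversal, frustum culling, and packet ray/triangle tests:

```cpp
#include <xmmintrin.h>

struct RayPacket4 {
    __m128 ox, oy, oz;          // 4 ray origins (SoA layout)
    __m128 idx, idy, idz;       // 4 reciprocal ray directions
};

struct AABB { float min[3], max[3]; };

// Returns a 4-lane mask: lane i is all-ones if ray i hits the box within [tmin, tmax].
__m128 hitMask(const RayPacket4& p, const AABB& b, __m128 tmin, __m128 tmax) {
    const __m128 bx0 = _mm_set1_ps(b.min[0]), bx1 = _mm_set1_ps(b.max[0]);
    const __m128 by0 = _mm_set1_ps(b.min[1]), by1 = _mm_set1_ps(b.max[1]);
    const __m128 bz0 = _mm_set1_ps(b.min[2]), bz1 = _mm_set1_ps(b.max[2]);

    // Slab test per axis, carried out for all 4 rays in parallel.
    __m128 t0 = _mm_mul_ps(_mm_sub_ps(bx0, p.ox), p.idx);
    __m128 t1 = _mm_mul_ps(_mm_sub_ps(bx1, p.ox), p.idx);
    tmin = _mm_max_ps(tmin, _mm_min_ps(t0, t1));
    tmax = _mm_min_ps(tmax, _mm_max_ps(t0, t1));

    t0 = _mm_mul_ps(_mm_sub_ps(by0, p.oy), p.idy);
    t1 = _mm_mul_ps(_mm_sub_ps(by1, p.oy), p.idy);
    tmin = _mm_max_ps(tmin, _mm_min_ps(t0, t1));
    tmax = _mm_min_ps(tmax, _mm_max_ps(t0, t1));

    t0 = _mm_mul_ps(_mm_sub_ps(bz0, p.oz), p.idz);
    t1 = _mm_mul_ps(_mm_sub_ps(bz1, p.oz), p.idz);
    tmin = _mm_max_ps(tmin, _mm_min_ps(t0, t1));
    tmax = _mm_min_ps(tmax, _mm_max_ps(t0, t1));

    return _mm_cmple_ps(tmin, tmax);
}
```

During traversal you would descend into a node whenever `_mm_movemask_ps` of this mask is non-zero, i.e. a node is skipped only if none of the four rays hit it.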
I would suggest you start with "Ray Tracing Animated Scenes using Coherent Grid Traversal". It gives an easy-to-follow example of such a modern approach. You can also follow the references to see how these ideas are applied to K-d trees and BVHs.
On the same page, also check out "State of the Art in Ray Tracing Animated Scenes".
Another great set of resources is the body of SIGGRAPH publications over the years. This is a very competitive conference, so these papers tend to be top-notch.
Finally, if you're willing to use existing code, check out the project page for OpenRT.
A useful resource I've seen is the Journal of Graphics Tools. Depending on your scenes, a BVH might be more appropriate than an octree.
Also, if you haven't looked at your performance with a profiler then you should. Shark is great on OS X, and I've gotten good results with Very Sleepy on Windows.
I'm working on a project with a virtual robot (Turtles in the ComputerCraft mod for Minecraft), where the robot would be in a maze of tunnels and have to navigate around in them. The world is conveniently already divided into tiles (a 2D Cartesian graph of them, with a boolean passable/nonpassable value for each), and the robot building the tunnels will map them as he goes.
In addition, there are teleporter "shortcuts" scattered around in areas where robots need to get between them quickly.
The question is: What's the best way to have the robot pathfind to his destination? How would the system identify areas that need teleporters? A* is the most famous algorithm, but are there others that might suit the application better? Please keep in mind that I have very little experience with pathfinding algorithms, so you might have to break things down into basic terms for me to understand. Any suggestions?
The only problem with using A* is finding an admissible heuristic for your problem. Fortunately, this has already been answered here.
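For reference, a minimal A* sketch on a boolean tile grid with a Manhattan-distance heuristic (admissible for unit-cost moves without teleporters; once teleporters are involved the heuristic has to be adapted, as the linked answer discusses). All names here are illustrative:

```cpp
#include <climits>
#include <cstdlib>
#include <functional>
#include <queue>
#include <utility>
#include <vector>

struct Cell { int x, y; };

// Returns the length of the shortest path from start to goal, or -1 if unreachable.
int astar(const std::vector<std::vector<bool>>& passable, Cell start, Cell goal) {
    const int W = (int)passable.size(), H = (int)passable[0].size();
    auto h = [&](Cell c) { return std::abs(c.x - goal.x) + std::abs(c.y - goal.y); };

    std::vector<std::vector<int>> g(W, std::vector<int>(H, INT_MAX));
    using Entry = std::pair<int, std::pair<int, int>>;                    // (f = g + h, (x, y))
    std::priority_queue<Entry, std::vector<Entry>, std::greater<Entry>> open;

    g[start.x][start.y] = 0;
    open.push({h(start), {start.x, start.y}});

    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        auto [f, xy] = open.top(); open.pop();
        Cell c{xy.first, xy.second};
        if (c.x == goal.x && c.y == goal.y) return g[c.x][c.y];
        if (f - h(c) > g[c.x][c.y]) continue;                             // stale queue entry
        for (int i = 0; i < 4; ++i) {
            Cell nb{c.x + dx[i], c.y + dy[i]};
            if (nb.x < 0 || nb.y < 0 || nb.x >= W || nb.y >= H || !passable[nb.x][nb.y]) continue;
            int ng = g[c.x][c.y] + 1;
            if (ng < g[nb.x][nb.y]) {
                g[nb.x][nb.y] = ng;
                open.push({ng + h(nb), {nb.x, nb.y}});
            }
        }
    }
    return -1;
}
```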
How would the system identify areas that need teleporters?
That depends on where the turtle is actually moving to/from. If he's always moving to/from the same start/end points, the answer is trivial: add teleports at the start and finish. For more complicated setups, my guess would be that this is NP-hard; if true, you'll have to look into global-optimization strategies (or just try a bunch of random positions and take the best one).
I am working on edge detection in images and would like to evaluate the performance of my algorithm. If anyone could give me a reference or a method for how to proceed, it would be really helpful. :)
I do not have ground truth, and the data set includes color as well as grayscale images.
Thank you.
Create a synthetic data set with known edges, for example by 3D rendering, by compositing 2D images with precise masks (as may be obtained in royalty free photosets), or by introducing edges directly (thin/faint lines). Remember to add some confounding non-edges that look like edges, of a type appropriate for what you're tuning for.
Use your (non-synthetic) data set. Run the reference algorithms that you want to compare against, and also produce combinations of the reference algorithms, for example by voting (majority, at least K out of N, etc.). Then calculate statistics on your algorithm versus the reference algorithms, in terms of (a) the number of points your algorithm classifies as edges which a given reference algorithm, or the combination, does not (false positives), and (b) the number of points which a reference algorithm classifies as edges but your algorithm does not (false negatives). You can also calculate a rank-correlation-type number by looking at each point and seeing which algorithms do (or don't) classify it as an edge. (A small sketch of this counting appears below, after the last option.)
Create ground truth manually. Use reference edge-finding algos as a starting point, then fix up by hand. Probably valuable to do for a small number of images in any case.
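A small sketch of the counting step from the second option, assuming each edge map is just a row-major 0/1 buffer of the same size (the names are illustrative):

```cpp
#include <vector>

using EdgeMap = std::vector<unsigned char>;  // 1 = edge, 0 = non-edge

struct EdgeStats { long falsePos = 0, falseNeg = 0, agree = 0; };

EdgeStats compare(const EdgeMap& mine, const EdgeMap& reference) {
    EdgeStats s;
    for (size_t i = 0; i < mine.size(); ++i) {
        if (mine[i] && !reference[i]) ++s.falsePos;       // I say edge, the reference doesn't
        else if (!mine[i] && reference[i]) ++s.falseNeg;  // the reference says edge, I don't
        else ++s.agree;
    }
    return s;
}

// The "reference" map could also be a vote over N algorithms, e.g. a pixel is
// an edge when at least k of the reference outputs mark it as one.
EdgeMap voteAtLeastK(const std::vector<EdgeMap>& refs, int k) {
    EdgeMap out(refs[0].size(), 0);
    for (size_t i = 0; i < out.size(); ++i) {
        int votes = 0;
        for (const EdgeMap& r : refs) votes += r[i];
        out[i] = (votes >= k) ? 1 : 0;
    }
    return out;
}
```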
Good luck!
For comparisons, quantitative measures like the ones @Alex I explained are best. To do so, you need to define what "correct" means with a ground-truth set, plus a way to consistently determine whether a given image is correct or, at a more granular level, how correct it is (some number like a percentage). @Alex I gave a way to do that.
Another option that is often used in graphics research where there is no ground truth is user studies. Usually less desirable as they are time consuming and often more costly. However, if it is a qualitative improvement that you are after or if a quantitative measurement is just too hard to do, a user study is an appropriate solution.
By a user study I mean polling people on how good a result is given the input image. You could give them a scale to rate things on and randomly show them samples from both your results and the results of another algorithm.
And of course, if you still want more ideas, be sure to check out edge detection papers to see how they measured their results (I'd actually look here first, as they've already gone through this same process and determined what was best for them: Google Scholar).
I am currently writing a KDTree for a physics engine (Hobby project).
The KDTree does not contain points.
Instead it contains Axis Aligned bounding boxes which bound the different objects in the environment.
My problem is deciding on how to split the KDTree nodes when they get full.
I am trying 2 methods:
Method 1: Always split the node exactly in half on its biggest axis.
This has the advantage of a pretty evenly spaced out tree.
Big disadvantage: if objects are concentrated in a small area of the node, redundant subdivisions will be created, because all volumes are split exactly in half.
Method 2: Find the area of the node which actually contains objects, and split the node on the plane which divides that area in half along its biggest axis. Example: if all objects are concentrated at the bottom of the node, it splits length-wise, thereby dividing the bottom in two.
This solves the problem with the method above. The disadvantage: when indexing something that lies on a single plane (terrain, for example), it creates long, narrow nodes. If I later add other objects which are not on that plane, these elongated nodes provide very poor indexing.
So what I'm looking for here is a better way to split my KD-Tree node.
Considering that this is going to be a physics engine the decision needs to be simple enough to be made in real time.
The "surface area heuristic" (SAH) is considered the best splitting method for building kd-trees, at least within the raytracing community. The idea is to add the plane so that the surface areas of the two child spaces, weighted by the number of objexts in each child, are equal.
A good reference on the subject is Ingo Wald's thesis, in particular chapter 7.3, "High-quality BSP Construction", which explains SAH better than I can.
I can't find a good link at the moment, but you should look around for papers on "binned" SAH, which is an approximation to the true SAH but much faster.
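To illustrate, here is a rough sketch of evaluating a binned SAH on one axis; it is not taken from any particular paper, just the general shape of the idea (evaluate all axes, keep the cheapest plane, and compare it against the cost of not splitting):

```cpp
#include <algorithm>
#include <vector>

struct AABB {
    float mn[3] = { 1e30f,  1e30f,  1e30f};
    float mx[3] = {-1e30f, -1e30f, -1e30f};
    void grow(const AABB& o) {
        for (int i = 0; i < 3; ++i) { mn[i] = std::min(mn[i], o.mn[i]); mx[i] = std::max(mx[i], o.mx[i]); }
    }
    float area() const {
        float dx = mx[0] - mn[0], dy = mx[1] - mn[1], dz = mx[2] - mn[2];
        return (dx <= 0 || dy <= 0 || dz <= 0) ? 0.f : 2.f * (dx * dy + dy * dz + dz * dx);
    }
    float centroid(int axis) const { return 0.5f * (mn[axis] + mx[axis]); }
};

// Binned SAH on one axis: returns the plane with the lowest estimated cost
//   SA(left) * N(left) + SA(right) * N(right).
float binnedSAH(const std::vector<AABB>& objs, int axis, const AABB& nodeBounds, float& bestCost) {
    constexpr int BINS = 16;
    int  count[BINS] = {};
    AABB bounds[BINS];

    float lo = nodeBounds.mn[axis], hi = nodeBounds.mx[axis];
    float scale = BINS / std::max(hi - lo, 1e-6f);

    // Drop each object into a bin by the centroid of its box.
    for (const AABB& o : objs) {
        int b = std::min(BINS - 1, (int)((o.centroid(axis) - lo) * scale));
        ++count[b];
        bounds[b].grow(o);
    }

    bestCost = 1e30f;
    float bestPlane = lo;
    for (int split = 1; split < BINS; ++split) {            // plane between bins split-1 and split
        AABB lb, rb; int ln = 0, rn = 0;
        for (int i = 0; i < split; ++i)    { lb.grow(bounds[i]); ln += count[i]; }
        for (int i = split; i < BINS; ++i) { rb.grow(bounds[i]); rn += count[i]; }
        float cost = lb.area() * ln + rb.area() * rn;
        if (ln && rn && cost < bestCost) { bestCost = cost; bestPlane = lo + split / scale; }
    }
    return bestPlane;
}
```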
All that being said, bounding volume hierarchies (BVHs), a.k.a. AABB trees, seem to be much more popular than kd-trees these days. Again, Ingo Wald's publication page is a good starting point, probably with the "On fast Construction of SAH based Bounding Volume Hierarchies" paper, although it's been a while since I read it.
The OMPF forums are also a good place to discuss these sorts of things.
Hope that helps. Good luck!
Certainly for a physics engine, where the premise is lots of moving geometry, a BVH is probably the better choice. BVHs don't traverse quite as quickly, but they are much faster to build, are much easier to refit/restructure on a frame-by-frame basis, and often don't need a complete rebuild every frame (a rebuild can be done in parallel over a series of frames while the refitted BVH suffices in the meantime; again, refer to Wald).
An exception to this in physics could be when you're dealing with entities that have no volume, such as particles or photons; building the kd-tree is then simplified by the fact that you don't need to resolve the bounds of each individual primitive. It really depends on the application. A good physics engine should use a balanced combination of spatial acceleration structures. It's common practice to handle broad-phase partitioning with, say, a shallow octree, and then extend the leaf nodes with another scheme that better fits the nature of what you are doing; BSPs, for example, are ideal for static geometry, especially in 2D. When the structure isn't changing, the best thing to do is experiment with as many different schemes and structures as possible and get a feel for how and when they work best.
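As a concrete illustration of the refitting mentioned above, here is a minimal bottom-up BVH refit sketch; the node layout and the source of the per-object bounds are assumptions, not any engine's actual API:

```cpp
#include <algorithm>
#include <vector>

struct AABB {
    float mn[3], mx[3];
    void expand(const AABB& o) {
        for (int i = 0; i < 3; ++i) {
            mn[i] = std::min(mn[i], o.mn[i]);
            mx[i] = std::max(mx[i], o.mx[i]);
        }
    }
};

struct BVHNode {
    AABB box;
    BVHNode* left = nullptr;   // both children null on a leaf
    BVHNode* right = nullptr;
    int objectIndex = -1;      // valid for leaves only
};

// Recompute all boxes bottom-up from the objects' current bounds, keeping the
// tree topology. O(n) per frame and much cheaper than rebuilding from scratch.
AABB refit(BVHNode* node, const std::vector<AABB>& currentObjectBounds) {
    if (!node->left && !node->right) {
        node->box = currentObjectBounds[node->objectIndex];   // fresh bounds after the physics step
    } else {
        node->box = refit(node->left, currentObjectBounds);
        node->box.expand(refit(node->right, currentObjectBounds));
    }
    return node->box;
}
```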
I am writing a map-routing application. Several people have suggested that I use a data structure in which I split the map into a grid. In theory it sounds really good, but I am not so sure because of the bad performance I get when I implement it.
In the worst case you have to draw every road. If you divide the map into a grid, the total number of roads across all the cells in the grid will be much larger than if you put all the roads in one list (each cell has to hold every road that passes through it, even if only a small part of that road is inside the cell).
When zooming in I can see some sense in using a grid, but if I keep everything in a list I can simply decrease the number of roads each time I zoom in.
As it is now (using the list), it is not really fast, so I am all for making it faster. But in practice, dividing into a grid makes it slower for me.
Any suggestions for what data structure I should be using and/or what I might be doing wrong?
See this question for related information:
What algorithms compute directions from point A to point B on a map?
Somebody who writes this kind of software for a living has answered it.
Also for rendering see:
What is the best way to read, represent and render map data?
I'm not quite sure whether you're trying to make routing quick, or rendering!
If you want it to go quickly, you might be better off organizing your roads into major and minor roads.
Use the list of minor roads to find a route to the nearest major road.
Use the major roads to get you near the destination.
Then go back to the minor roads to complete the route.
Without a split like this, there are a heck of a lot of roads to search, most of which are quite slow routes.
Google does not draw each road every time the screen is refreshed. They use pre-drawn tiles of the map and redraw them as needed, e.g. when there is a map update. They even use transparent overlays, stacks of tiles, to add and remove layers of detail.
Very clever, but very simple.
You may want to look at the OpenLayers JavaScript library. It's free and can do just about anything you need to do with a map.
Maptraction JS is also available - it's not as complete as OpenLayers.
Better than a grid as your spatial data structure might be a quadtree, because it breaks the map down logarithmically. From studying the source, my guesstimate is that Google uses that or a similar data structure.
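A rough sketch of such a quadtree, storing each road in the smallest node whose square fully contains its bounding box, so that a viewport query only visits the nodes it overlaps. All names and the maximum depth are illustrative:

```cpp
#include <memory>
#include <vector>

struct Rect {
    float x0, y0, x1, y1;
    bool contains(const Rect& r) const { return r.x0 >= x0 && r.y0 >= y0 && r.x1 <= x1 && r.y1 <= y1; }
    bool overlaps(const Rect& r) const { return r.x0 <= x1 && r.x1 >= x0 && r.y0 <= y1 && r.y1 >= y0; }
};

struct QuadNode {
    Rect bounds;
    std::vector<int> roadIds;              // roads that stop at this level
    std::unique_ptr<QuadNode> child[4];

    explicit QuadNode(Rect b) : bounds(b) {}

    Rect quadrant(int i) const {
        float mx = 0.5f * (bounds.x0 + bounds.x1), my = 0.5f * (bounds.y0 + bounds.y1);
        return { (i & 1) ? mx : bounds.x0, (i & 2) ? my : bounds.y0,
                 (i & 1) ? bounds.x1 : mx, (i & 2) ? bounds.y1 : my };
    }

    void insert(int roadId, const Rect& box, int depth = 0) {
        if (depth < 12) {                   // arbitrary maximum depth for the sketch
            for (int i = 0; i < 4; ++i) {
                Rect q = quadrant(i);
                if (q.contains(box)) {
                    if (!child[i]) child[i] = std::make_unique<QuadNode>(q);
                    child[i]->insert(roadId, box, depth + 1);
                    return;
                }
            }
        }
        roadIds.push_back(roadId);          // spans a split line: keep it at this level
    }

    // Collect candidate roads for a viewport rectangle.
    void query(const Rect& view, std::vector<int>& out) const {
        if (!bounds.overlaps(view)) return;
        out.insert(out.end(), roadIds.begin(), roadIds.end());
        for (const auto& c : child)
            if (c) c->query(view, out);
    }
};
```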
As for getting directions, you might want to look into hierarchical pathfinding to approximate the route at first and to speed up the process; generic pathfinding algorithms tend to be quite slow at that level of complexity.
I'm writing a comparatively straightforward raytracer/path tracer in D (http://dsource.org/projects/stacy), but even with full optimization it still needs several thousand processor cycles per ray. Is there anything else I can do to speed it up? More generally, do you know of good optimizations / faster approaches for ray tracing?
Edit: this is what I'm already doing.
Code is already running highly parallel
Temporary data is structured in a cache-efficient fashion and aligned to 16 bytes
Screen divided into 32x32-tiles
Destination array is arranged in such a way that all subsequent pixels in a tile are sequential in memory
Basic scene graph optimizations are performed
Common combinations of objects (plane-plane CSG as in boxes) are replaced with preoptimized objects
Vector struct capable of taking advantage of GDC's automatic vectorization support
Subsequent hits on a ray are found via lazy evaluation; this prevents needless calculations for CSG
Triangles are neither supported nor a priority; plain primitives only, plus CSG operations and basic material properties
Bounding is supported
The typical first-order improvement to raytracer speed is some sort of spatial partitioning scheme. Based only on your project outline page, it seems you haven't done this.
Probably the most usual approach is an octree, but the best approach may well be a combination of methods (e.g. spatial partitioning trees and things like mailboxing). Bounding box/sphere tests are a quick, cheap-and-nasty approach, but you should note two things: 1) they don't help much in many situations, and 2) if your objects are already simple primitives, you aren't going to gain much (and might even lose). A regular grid for spatial partitioning is easier to implement than an octree, but it will only work really well for scenes whose surfaces are somewhat uniformly distributed.
A lot depends on the complexity of the objects you represent, your internal design (i.e. do you allow local transforms, referenced copies of objects, implicit surfaces, etc), as well as how accurate you're trying to be. If you are writing a global illumination algorithm with implicit surfaces the tradeoffs may be a bit different than if you are writing a basic raytracer for mesh objects or whatever. I haven't looked at your design in detail so I'm not sure what, if any, of the above you've already thought about.
Like any performance optimization process, you're going to have to measure first to find where you're actually spending the time, and then improve things (algorithmically by preference, then by code bumming as necessary).
One thing I learned with my ray tracer is that a lot of the old rules don't apply anymore. For example, many ray tracing algorithms do a lot of testing to get an "early out" of a computationally expensive calculation. In some cases, I found it was much better to eliminate the extra tests and always run the calculation to completion. Arithmetic is fast on a modern machine, but a missed branch prediction is expensive. I got something like a 30% speed-up on my ray-polygon intersection test by rewriting it with minimal conditional branches.
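As an illustration of the branch-reduction point (not the answerer's actual code), here is a Möller-Trumbore style ray/triangle test written so that the whole calculation runs unconditionally and a single combined test decides hit or miss:

```cpp
struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns t >= 0 on a hit, or a negative value on a miss.
float intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    float inv = 1.0f / det;                 // run the whole calculation unconditionally

    Vec3 tv = sub(orig, v0);
    float u = dot(tv, p) * inv;
    Vec3 q  = cross(tv, e1);
    float v = dot(dir, q) * inv;
    float t = dot(e2, q) * inv;

    // One combined test at the end (bitwise & avoids short-circuit branches);
    // on modern hardware this is often cheaper than several separately
    // predicted early-out branches.
    bool hit = (u >= 0.0f) & (v >= 0.0f) & (u + v <= 1.0f) & (t >= 0.0f);
    return hit ? t : -1.0f;
}
```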
Sometimes the best approach is counter-intuitive. For example, I found that many scenes with a few large objects ran much faster when I broke them down into a large number of smaller objects. Depending on the scene geometry, this can allow your spatial subdivision algorithm to throw out a lot of intersection tests. And let's face it, intersection tests can be made only so fast. You have to eliminate them to get a significant speed-up.
Hierarchical bounding volumes help a lot, but I finally grokked the kd-tree, and got a HUGE increase in speed. Of course, building the tree has a cost that may make it prohibitive for real-time animation.
Watch for synchronization bottlenecks.
You've got to profile to be sure to focus your attention in the right place.
Is there anything else I can do to speed it up?
D, depending on the implementation and compiler, gives reasonably good performance. As you haven't explained what ray tracing methods and optimizations you're already using, I can't give you much help there.
The next step, then, is to run a timing analysis on the program, and recode the most frequently used or slowest code that impacts performance the most, possibly in assembly.
More generally, check out the resources in these questions:
Literature and Tutorials for Writing a Ray Tracer
Anyone know of a really good book about Ray Tracing?
Computer Graphics: Raytracing and Programming 3D Renders
raytracing with CUDA
I really like the idea of using a graphics card (a massively parallel computer) to do some of the work.
There are many other raytracing related resources on this site, some of which are listed in the sidebar of this question, most of which can be found in the raytracing tag.
I don't know D at all, so I'm not able to look at the code and find specific optimizations, but I can speak generally.
It really depends on your requirements. One of the simplest optimizations is just to reduce the number of reflections/refractions that any particular ray can follow, but then you start to lose out on the "perfect result".
Raytracing is also an "embarrassingly parallel" problem, so if you have the resources (such as a multi-core processor), you could look into calculating multiple pixels in parallel.
Beyond that, you'll probably just have to profile and figure out what exactly is taking so long, then try to optimize that. Is it the intersection detection? Then work on optimizing the code for that, and so on.
Some suggestions.
Use bounding objects to fail fast.
Project the scene as a first step (as common graphics cards do) and use raytracing only for the lighting calculations.
Parallelize the code.
Raytrace every other pixel and get the color in between by interpolation. If the colors vary greatly (you are on the edge of an object), raytrace the pixel in between as well. It is cheating, but on simple scenes it can almost double the performance while sacrificing some image quality (see the sketch after this list).
Render the scene on GPU, then load it back. This will give you the first ray/scene hit at GPU speeds. If you do not have many reflective surfaces in the scene, this would reduce most of your work to plain old rendering. Rendering CSG on GPU is unfortunately not completely straightforward.
Read source code of PovRay for inspiration. :)
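A rough sketch of the "trace every other pixel" suggestion above: trace a coarse grid first, then interpolate the remaining pixels unless their traced neighbours differ too much, in which case trace them properly. The Color type and the tracePixel callable are placeholders for whatever the renderer provides:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Color { float r, g, b; };

static float diff(Color a, Color b) {
    return std::fabs(a.r - b.r) + std::fabs(a.g - b.g) + std::fabs(a.b - b.b);
}

// tracePixel is any callable (int x, int y) -> Color provided by the renderer.
template <class TraceFn>
void renderAdaptive(int w, int h, std::vector<Color>& img, TraceFn tracePixel,
                    float threshold = 0.05f) {
    img.resize(size_t(w) * h);
    auto at = [&](int x, int y) -> Color& { return img[size_t(y) * w + x]; };

    // Pass 1: trace only pixels with even x and even y.
    for (int y = 0; y < h; y += 2)
        for (int x = 0; x < w; x += 2)
            at(x, y) = tracePixel(x, y);

    // Pass 2: interpolate the remaining pixels from traced neighbours, or
    // trace them for real when the neighbours disagree (likely an object edge).
    const int ex = (w - 1) & ~1, ey = (h - 1) & ~1;          // largest even indices
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            if (x % 2 == 0 && y % 2 == 0) continue;
            int x0 = x & ~1, y0 = y & ~1;
            int x1 = std::min(x0 + 2, ex), y1 = std::min(y0 + 2, ey);
            Color a = at(x0, y0), b = at(x1, y0), c = at(x0, y1), d = at(x1, y1);
            float maxDiff = std::max({diff(a, b), diff(a, c), diff(a, d)});
            if (maxDiff > threshold) {
                at(x, y) = tracePixel(x, y);                 // edge: pay for the real ray
            } else {
                at(x, y) = {(a.r + b.r + c.r + d.r) * 0.25f,
                            (a.g + b.g + c.g + d.g) * 0.25f,
                            (a.b + b.b + c.b + d.b) * 0.25f};
            }
        }
}
```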
First you have to make sure that you use very fast algorithms (implementing them can be a real pain, but it depends on what you want to do, how far you want to go and how fast it should be; it's a kind of tradeoff).
Some more hints from me:
- Don't use mailboxing techniques; papers sometimes note that they don't scale well on current architectures because of the counting overhead.
- Don't use BSPs/octrees; they are relatively slow.
- Don't use the GPU for raytracing; it is far too slow for advanced effects like reflections, shadows, refraction, photon mapping and so on (I use it only for shading, but that's my preference).
For a completely static scene, kd-trees are unbeatable, and for dynamic scenes there are clever algorithms that scale very well on a quad-core (I am not sure about performance beyond that).
And of course, for really good performance you need to use a lot of SSE code (with not too many jumps), but for "not quite that good" performance (I'm talking maybe a 10-15% difference here) compiler intrinsics are enough to implement your SSE code.
And some decent papers about the algorithms I was talking about:
"Fast Ray/Axis-Aligned Bounding Box - Overlap Tests using Ray Slopes"
(a very fast, very parallelizable (SSE) AABB/ray hit test; note that the code in the paper is not complete, just google the title of the paper and you'll find it)
http://graphics.tu-bs.de/publications/Eisemann07RS.pdf
"Ray Tracing Deformable Scenes using Dynamic Bounding Volume Hierarchies"
http://www.sci.utah.edu/~wald/Publications/2007///BVH/download//togbvh.pdf
If you know how the above algorithm works, then this one is even better:
"The Use of Precomputed Triangle Clusters for Accelerated Ray Tracing in Dynamic Scenes"
http://garanzha.com/Documents/UPTC-ART-DS-8-600dpi.pdf
I'm also using the Plücker test to determine quickly (it's not that accurate, but then you can't have everything) whether I hit a polygon; it works very nicely with SSE and above.
So my conclusion is that there are many great papers out there on the many topics related to raytracing (how to build fast, efficient trees, how to shade (BRDF models), and so on). It is a really amazing and interesting field to experiment in, but you also need a lot of spare time because it is so damn complicated, yet fun.
My first question is: are you trying to optimize the tracing of one single still image, or is this about optimizing the tracing of multiple frames in order to render an animation? Optimizing for a single shot is one thing; if you want to calculate successive frames of an animation there are lots of new things to think about and optimize.
You could
use an SAH-optimized bounding volume hierarchy...
...possibly using packet traversal,
introduce importance sampling,
access the tiles ordered by Morton code for better cache coherency, and
much more - but those were the suggestions I could immediately think of. In more words:
You can build an optimized hierarchy based on statistics in order to quickly identify candidate nodes when intersecting geometry. In your case you'll have to combine the automatic hierarchy with the modeling hierarchy, that is, either constrain the build or possibly have it clone modeling information.
"Packet traversal" means you use SIMD instructions to compute 4 parallel scalars, each of an own ray for traversing the hierarchy (which is typically the hot spot) in order to squeeze the most performance out of the hardware.
You can perform some per-ray-statistics in order to control the sampling rate (number of secondary rays shot) based on the contribution to the resulting pixel color.
Using a space-filling curve (such as the Morton order mentioned above) on the tile allows you to decrease the average spatial distance between successively traced pixels, and thus increase the probability that your performance benefits from cache hits.
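For the Morton-ordered tiles, a small sketch of computing the key is enough: interleave the bits of the tile's x and y indices and process the tiles sorted by that key, so that tiles that are close on screen are also close in traversal order:

```cpp
#include <cstdint>

// Spread the low 16 bits of x out to the even bit positions.
uint32_t part1By1(uint32_t x) {
    x &= 0x0000FFFF;
    x = (x | (x << 8)) & 0x00FF00FF;
    x = (x | (x << 4)) & 0x0F0F0F0F;
    x = (x | (x << 2)) & 0x33333333;
    x = (x | (x << 1)) & 0x55555555;
    return x;
}

// Z-order (Morton) key for a tile at integer coordinates (tileX, tileY).
uint32_t mortonKey(uint32_t tileX, uint32_t tileY) {
    return part1By1(tileX) | (part1By1(tileY) << 1);
}
```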