Photon Mapping - Several issues - algorithm

So I'm trying to implement a photon mapping algorithm to simulate global illumination in my ray tracing program. However, I'm running into a few issues that are making it hard to complete the implementation.
My program already successfully traces photons throughout the scene, stores them in a balanced KD-Tree, and can gather the k nearest photons around any given point p, so most of the work is complete. The issues mostly arise when it comes time for the radiance estimate.
First, I can't seem to get the indirect illumination bright enough to make any noticeable difference in my scene. If my light source emits 100,000 photons (which ends up as roughly 500,000 stored photons in my program), then the power of each stored photon must be scaled down by a factor of 100,000, which makes them very dim. I thought I could mitigate this when dividing by the area of the encapsulating circle (pi*rad^2): with a search radius much smaller than 1 the division boosts the estimate, but decreasing the radius that far leaves me with very few photons for the estimate, while a large radius gathers enough photons but loses that "power boost" and might wind up including incorrect photons. So I don't know what to do.
Additionally, if I artificially scale up the photon power to increase the indirect lighting contribution, the resulting illumination is splotchy, uneven, and ugly, and doesn't look at all realistic. I know this problem is vague, but I don't know why it looks this way, since I'm pretty sure I'm doing the radiance estimate and BRDF calculations correctly.

Without knowing the exact calculations you are doing throughout your rendering system, no one will be able to tell you why your indirect illumination is so weak. It could be that your materials are dark enough that there just isn't much indirect light. It could be that you are missing a factor of pi in your indirect illumination calculation somewhere, or you could be missing a divide-by-pi in your direct illumination calculation, so the indirect is dim in comparison.
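For reference, the standard Jensen-style radiance estimate boils down to the sketch below (the names and the purely diffuse BRDF are illustrative, not your code). The two things that usually go wrong are the scaling: each stored photon carries power = light power / number of emitted photons (the ~500,000 stored photons don't enter that scaling at all), and the gathered sum is divided by the area pi*r^2 of the gather disc, where r is the distance to the farthest of the k gathered photons.

#include <vector>
#include <cmath>

struct Vec3 { float x, y, z; };
struct Photon { Vec3 position, incomingDir, power; }; // power = lightPower / numEmittedPhotons

// Minimal sketch of the gather step, assuming a purely diffuse surface with the
// given albedo; a real renderer would call its own BRDF here instead.
Vec3 radianceEstimate(const std::vector<Photon>& nearest, // k nearest photons around the shading point
                      float maxDist2,                     // squared distance to the k-th (farthest) photon
                      const Vec3& albedo)
{
    Vec3 sum = {0.0f, 0.0f, 0.0f};
    const float invPi = 1.0f / 3.14159265f;
    for (const Photon& ph : nearest) {
        // Lambertian BRDF = albedo / pi, multiplied by the photon's power (flux).
        sum.x += albedo.x * invPi * ph.power.x;
        sum.y += albedo.y * invPi * ph.power.y;
        sum.z += albedo.z * invPi * ph.power.z;
    }
    // Divide by the area of the gather disc, pi * r^2, where r is the k-th photon distance.
    // This is the only place the search radius enters; the photon power is never rescaled by hand.
    const float invArea = 1.0f / (3.14159265f * maxDist2);
    return { sum.x * invArea, sum.y * invArea, sum.z * invArea };
}

With that in place, the estimate gets brighter automatically wherever the k-nearest search shrinks the radius in dense regions; there is no separate "power boost" to chase.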
As for splotchiness, that's what photon mapping looks like without a ton of photons. Try 100 million photons (or at least 10 million) instead and see if the issue persists.

Related

Fast 3D mesh generation from pointcloud

I would like to build a simple mesh from a set of points as fast as possible. Hypothetically, my point cloud could contain a fairly low number of points (something like 1,000 to 50,000).
I've read about 3D Delaunay triangulation and some other methods, but most of the time papers don't report speed, and other times I see huge computation times on the order of minutes.
An interesting algorithm I've found is this: https://doc.cgal.org/latest/Poisson_surface_reconstruction_3/index.html
My main concern is that this is meant for reconstructing 2D surfaces embedded in 3D space, while my point cloud contains points that would lie in the interior of the final volume.
Could you suggest some algorithms that could be useful in my scenario, along with a rough estimate of computation times? Is it possible to do this in less than 5 seconds?
Note that I'm not trying to reconstruct human faces, sculptures, or anything like that. The meshes I'm trying to reconstruct are always pretty polyhedral.
Thanks for your attention

Performance: Offline surface for hit testing vs Triangle Intersection

First, a disclaimer. I'm well aware of the std answer for X vs Y - "it depends". However, I'm working on a very general purpose product, and I'm trying to figure out "it depends on what". I'm also not really able to test the wide variety of hardware, so try-and-see is an imperfect measure at best.
I've been doing some googling, and I've found very little reference to using an offline render target/surface for hit testing. I'm not sure of the nomenclature; what I'm talking about is using very simple shaders to render a geometry ID (for example) to a buffer, then reading the pixel value under the mouse to see what geometry is directly under the mouse pointer.
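For what it's worth, the ID-encoding half of that idea is trivial and API-agnostic; here is a minimal, purely illustrative sketch of packing a geometry ID into an RGBA8 pick-buffer colour and recovering it from the pixel read back under the cursor:

#include <cstdint>

// Pack a 24-bit geometry ID into the RGBA8 colour the pick shader would output.
// Alpha is forced to 255 so that ID 0 is distinguishable from "nothing rendered".
uint32_t packPickId(uint32_t geometryId)
{
    return (geometryId & 0x00FFFFFFu) | 0xFF000000u;
}

// Recover the geometry ID from the RGBA8 pixel read back under the mouse.
// Returns UINT32_MAX if nothing was rendered there (alpha still 0 from the clear).
uint32_t unpackPickId(uint32_t pixel)
{
    if ((pixel & 0xFF000000u) == 0) return UINT32_MAX;
    return pixel & 0x00FFFFFFu;
}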
I have however, found 101 different tutorials on doing triangle intersection, a la D3DXIntersect & DirectX sample "Pick".
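For comparison, the per-triangle test those tutorials (and D3DXIntersect) ultimately perform is a ray/triangle intersection such as Moller-Trumbore; a rough, self-contained sketch:

#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore ray/triangle test. Returns true and the hit distance t,
// plus barycentric u,v (handy for the "Node + UV value" mentioned below).
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2,
                 float& t, float& u, float& v)
{
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p  = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;      // ray parallel to the triangle plane
    float invDet = 1.0f / det;
    Vec3 s = sub(orig, v0);
    u = dot(s, p) * invDet;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    v = dot(dir, q) * invDet;
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * invDet;
    return t > eps;                              // hit in front of the ray origin
}

A CPU picker runs this over every triangle (ideally behind a BVH or octree) and keeps the smallest positive t.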
I'm a little curious about this - I would have thought using the HW was the standard method. By all rights, it should be many orders of magnitude faster, and it should scale far better.
I'm relatively new to graphics programming, so here are my assumptions, for you to disabuse.
1) A simple shader that does geometry transform & writes a Node + UV value should be nearly free.
2) The main cost in the HW pick method would be the buffer fetch, when getting the rendered surface back off the GPU for the CPU to read over. I have no idea how costly this is. us? ms? seconds? minutes?
3) This may be obvious, but I am assuming that Triangle Intersection (D3DXIntersect) is only possible on the CPU.
4) A possible cost people want to avoid is the cost of the extra render target(s) (zbuffer+surface). I'm a'guessing about 10 megs for 1024x1280 (std screen size?). This is acceptable to me, although if I could render a smaller surface (trade accuracy for memory) I would do so (is that possible?).
This all leads to a few thoughts.
1) For very simple scenes, triangle intersection may be faster. Quite what is simple/complex is hard to guess at this point. I'm looking at possible 100s of tris to 10000s. Probably not much more than that.
2) The HW buffer needs to be rendered regardless of whether or not it's used (in my case). However, it can be reused without cost (i.e., click-drag, where the mouse tracks across a static scene).
2a) Possibly, triangle intersection may be preferable if my scene updates every frame, or if I have limited mouse interaction.
Now that I've finished writing this, I see a similar question has been asked (3D Graphics Picking - What is the best approach for this scenario). My problem with it is (a) why would you need to re-render your picking surface for click-drag, since your scene hasn't actually changed, and (b) wouldn't it still be faster than triangle intersection?
I welcome thoughts, criticism, and any manner of side-tracking :-)

Raytracing via diffusion algorithm

Many resources about raytracing say things like:
"shoot rays, find the first obstacle to cut it"
"shoot secondary rays..."
"or, do it reverse and approximate/interpolate"
I haven't seen any algorithm that uses a diffusion approach. Let's assume a point light is a cell that has more density than the other cells (all of space is divided into cells). Every step/iteration of lighting/tracing makes that source diffuse into its neighbours using a velocity field, then into their neighbours, and so on. After a satisfactory number of iterations (say 30-40), the density of each cell is used to light the objects in that cell.
(Figure: point light and velocity field)
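To make the proposal concrete, a single diffusion iteration over the density grid might look roughly like the sketch below. This is plain isotropic diffusion only; the directional velocity/advection term described above is left out for brevity, and all names are illustrative.

#include <vector>

// One Jacobi-style diffusion step over an N x N x N density grid.
// 'rate' controls how much density leaks to the 6 face neighbours per iteration;
// 'next' must be a scratch buffer of the same size as 'density'.
void diffuseStep(std::vector<float>& density, std::vector<float>& next,
                 int N, float rate)
{
    auto idx = [N](int x, int y, int z) { return (z * N + y) * N + x; };
    for (int z = 1; z < N - 1; ++z)
        for (int y = 1; y < N - 1; ++y)
            for (int x = 1; x < N - 1; ++x) {
                float neighbours = density[idx(x-1,y,z)] + density[idx(x+1,y,z)]
                                 + density[idx(x,y-1,z)] + density[idx(x,y+1,z)]
                                 + density[idx(x,y,z-1)] + density[idx(x,y,z+1)];
                // Each cell keeps (1 - rate) of its density and averages in its neighbours.
                next[idx(x,y,z)] = (1.0f - rate) * density[idx(x,y,z)]
                                 + rate * neighbours / 6.0f;
            }
    density.swap(next);
}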
But the grid would have to be something like 1000x1000x1000, which would take too much time and memory to compute. Maybe computing just 10x10x10 and, on finding an obstacle, partitioning that area into 100x100x100 (in a dynamic kd-tree fashion) could help generate lighting/shadows at an acceptable resolution? Especially for vertex-based illumination rather than per-triangle.
Has anyone tried this approach?
Note: the velocity field is there to make the light diffuse mostly outwards (not 100% but 99%, so there is still some global illumination). A finite-element method can make this embarrassingly parallel.
Edit: any object hit by a positive density becomes an obstacle and generates a new velocity field around its surface, so light cannot pass through that object but can be mirrored in another direction (if it is a lens-like object, light diffuses through it more slowly). That way the reflected light can affect other objects, given a higher iteration limit.
The same kd-tree can be used in object-collision algorithms :)
Just to take with a grain of salt: a neural network could be trained for advection and diffusion on a 30x30x30 grid and used in a "GPU (OpenCL/CUDA) --> neural network --> finite element method --> shadows" pipeline.
There are a couple of problems with this as it stands.
The first problem is that, fundamentally, a photon in the Newtonian sense doesn't react or change based on the density of the other photons around it. So using a density field and trying to make light follow classic Navier-Stokes-style solutions (which is what you're trying to do, based on the density field explanation you gave) would give incorrect results. It would also, given enough iterations, result in complete entropy over the scene, which is also not what happens to light.
Even if you were to get rid of the density problem, you're still left with the problem of multiple photons going in different directions within the same cell, which is required for global illumination and diffuse lighting.
So, stripping away the problem portions of your idea, what you're left with is a particle system for photons :P
Now, to be fair, pseudo-particle systems are currently used for global illumination solutions. This type of thing is called Photon Mapping, but only a direct lighting solution is simple to implement with it :P

algorithm to control intensities of multiple lights

We have multiple lights in a 10x10 grid, and each light's intensity can be controlled from 1 to 10. The lights are aimed at a wall, and our goal is to achieve uniform intensity, within some range, over the wall image, where the user defines the target intensity value. One restriction is that only the directly adjacent neighbor lights of a given light affect the image intensity of the wall area that light directly sheds on.
I think (and hope) that this is a known problem, but I couldn't find any good reference for solving it. Any tip or clue would be appreciated.
I suppose the resulting intensity is a linear combination of neighboring lamps. For example, I[x,y] = a*L[x,y] + b*(L[x-1,y] + L[x+1,y] + L[x,y-1] + L[x,y+1]) + c*(L[x-1,y-1] + ...), where a, b, c are some coefficients. So there is a linear system of 100 equations with 100 unknown variables. It can be solved if the coefficients are known.
A more complex model is the convolution of the lamp intensity matrix with a point spread function. That may require sophisticated signal-reconstruction methods.
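If the coefficients of the first model are known (or measured), the whole thing is a 100x100 linear solve. A minimal sketch using Eigen, with placeholder values for a, b, and c (in practice they would come from measurements like those described in the next answer):

#include <Eigen/Dense>

// Solve A * L = I_target for the 100 lamp settings, where A encodes how much each
// lamp (and its 8 neighbors) contributes to each wall patch. The coefficient values
// a, b, c are placeholders and would have to be measured.
Eigen::VectorXd solveLampSettings(double targetIntensity,
                                  double a = 1.0, double b = 0.5, double c = 0.25)
{
    const int N = 10;
    Eigen::MatrixXd A = Eigen::MatrixXd::Zero(N * N, N * N);
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    int nx = x + dx, ny = y + dy;
                    if (nx < 0 || nx >= N || ny < 0 || ny >= N) continue;
                    double coeff = (dx == 0 && dy == 0) ? a
                                 : (dx == 0 || dy == 0) ? b : c;   // self, edge or diagonal neighbor
                    A(y * N + x, ny * N + nx) = coeff;
                }
    Eigen::VectorXd target = Eigen::VectorXd::Constant(N * N, targetIntensity);
    // Solve the square system; the result still has to be rounded/clamped to the 1..10 settings.
    return A.colPivHouseholderQr().solve(target);
}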
This cries out for a genetic algorithm approach: without too much trouble you can customize it to take into account your lamp characteristics and any desired function of illumination on the wall.
Update: To be more concrete, if the OP already has some information about the light intensity function due to one lamp, then the programming aspect will be tedious, but straightforward. If not, then what's needed is a way to get that information. One way to do this is to get a photodiode and just measure the light intensity from the center to the periphery, with one lamp turned on mounted the way it will be in the real application. Use whatever sampling interval seems appropriate based on the physical set-up-- an inch, six inches, a foot, whatever. Using that information, the OP can create a function of light intensity based on one lamp.
I have no particular photodiode to recommend, but they can't be that expensive, since Lego Mindstorms can take readings from them. I did speak incorrectly in the comments below, though-- it might actually take one measurement for each of the ten intensity settings on the lamps, and I'm explicitly assuming that all the lamps have roughly the same performance.
From there, we can mathematically build the larger function of a light intensity pattern caused by 100 lamps at arbitrary intensities-- a function into which we can plug 100 numbers (representing the lamp settings) and get out a good approximation of the resulting light intensity. Finally, we can use a genetic algorithm to optimize the inputs of that function such that uniform intensity patterns are highly fit.
Careful, though-- the true optimum of that statement is probably "all lamps turned off."
(If you're more confident in your photography than I am, a camera might work. But either way, without a detailed knowledge of the intensity patterns of the lamp settings, this is not a solvable problem.)
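To make the fitness side of this concrete: once the predicted-intensity function exists, the fitness can simply penalise deviation from the user's target level, which also avoids the "all lamps turned off" optimum mentioned above. A hypothetical sketch, where predictWall stands in for the function built from the measurements:

#include <vector>
#include <functional>

// Fitness for the GA: higher is better. 'predictWall' is whatever function gets built
// from the photodiode measurements (100 lamp settings -> predicted wall intensities).
// Penalising deviation from the target rules out "all lamps off" unless the target is zero.
double fitness(const std::vector<int>& lampSettings, double targetIntensity,
               const std::function<std::vector<double>(const std::vector<int>&)>& predictWall)
{
    double error = 0.0;
    for (double v : predictWall(lampSettings))
        error += (v - targetIntensity) * (v - targetIntensity);
    return -error;   // zero error (perfectly uniform at the target) is the best score
}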

How to speed up marching cubes?

I'm using this marching cubes algorithm to draw 3D isosurfaces (ported into C#, outputting MeshGeometry3Ds, but otherwise the same). The resulting surfaces look great, but they take a long time to calculate.
Are there any ways to speed up marching cubes? The most obvious one is to simply reduce the spatial sampling rate, but this reduces the quality of the resulting mesh. I'd like to avoid this.
I'm considering a two-pass system, where the first pass samples space much more coarsely, eliminating volumes where the field strength is well below my isolevel. Is this wise? What are the pitfalls?
Edit: the code has been profiled, and the bulk of CPU time is split between the marching cubes routine itself and the field strength calculation for each grid cell corner. The field calculations are beyond my control, so speeding up the cubes routine is my only option...
I'm still drawn to the idea of trying to eliminate dead space, since this would reduce the number of calls to both systems considerably.
I know this is a bit old, but I recently implemented Marching Cubes based on much the same source. There is a LOT of inefficiency here. At a minimum, if you were doing something like this:
for (int x = 0; x < densityArrayWidth; x++)
    for (int z = 0; z < densityArrayLength; z++)
        for (int y = 0; y < densityArrayHeight; y++)
            Polygonize(Gridcell, isolevel, Triangles);  // Gridcell rebuilt and tables reallocated on every call
Look at how many times you'd be reallocating the edgeTable and triTable! Those immediately need to move out to the overall class. I ditched the gridCell object as well, going directly from the points/values to the triangles.
In short, it isn't just the algorithmic complexity; memory allocations (and the base code does a huge number of them) take time as well.
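In other words, something along these lines (only a structural sketch; the 256-entry tables come from the standard Marching Cubes reference and are defined once, and the method signature here is illustrative):

// Sketch: the lookup tables become static class-level data initialised once,
// instead of local arrays rebuilt on every Polygonize() call.
class MarchingCubesMesher
{
    static const int edgeTable[256];      // standard 256-entry table, defined once in the .cpp
    static const int triTable[256][16];   // likewise
public:
    // Polygonize one cell; only per-cell data (corner values, output triangles) is
    // created here, nothing that is identical for every cell.
    void Polygonize(const float cornerValues[8], const float cornerPositions[8][3],
                    float isolevel /*, triangle output ... */);
};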
Just in case anyone else ends up here, dead-space elimination through a coarser sampling rate makes virtually no difference at all. Any remotely safe (ie: allowing a border for sampling artifacts) coarser sampling ends up grabbing most of the grid anyway in any remotely non-trivial field.
Speeding up the underlying field evaluation (with heavy memoisation) seemed to mostly solve the performance problems.
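For anyone wondering what "heavy memoisation" might look like in practice, here is a minimal sketch (names are illustrative, and it assumes non-negative grid coordinates below 65536): cache the field value per grid corner, so a corner shared by up to eight cubes is only evaluated once.

#include <unordered_map>
#include <cstdint>
#include <functional>
#include <utility>

// Cache of field-strength samples keyed by integer grid-corner coordinates.
class FieldCache
{
public:
    explicit FieldCache(std::function<float(int, int, int)> eval) : eval_(std::move(eval)) {}

    float sample(int x, int y, int z)
    {
        // Pack the three 16-bit coordinates into one 48-bit key.
        uint64_t key = (uint64_t(uint16_t(x)) << 32) | (uint64_t(uint16_t(y)) << 16) | uint16_t(z);
        auto it = cache_.find(key);
        if (it != cache_.end()) return it->second;
        float value = eval_(x, y, z);            // the expensive field evaluation
        cache_.emplace(key, value);
        return value;
    }

private:
    std::function<float(int, int, int)> eval_;
    std::unordered_map<uint64_t, float> cache_;
};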
Try marching tetrahedra instead -- the math is simpler, allowing you to consider fewer cases per cell.
Each cube has 12 edges; if you go through every cube and compute all 12 intersection points, you are doing roughly 4 times too many intersection calculations. You only need to compute the 3 edges meeting at the bottom-left corner of each cube (with an extra row at the top-right of the zone), and then use a lookup scheme to access the values you have already found. I'm going to write a topic on this because it needs to be discussed and it's complicated.
Also, test for areas in space that actually need polygons by assessing the isolevel with an octree, and skip areas far from the isolevel.
I had a look at propagation, but it isn't all that reliable or efficient.
