Raytracer -- Why are there speckles on the edges of my spheres?

My raytracer is generating the following image:
I've checked the normals many times and I'm quite confident that those are not the problem. Does anyone else have any ideas?

What #Alnitak said in the comments. The speckles usually appear because of self-intersection (often called "shadow acne"). This can also occur when implementing shadows. If you have implemented shadows, check whether the shadow ray starts exactly at the point on the surface; if it does, add a small constant, for example 0.001, to start your ray a little off the surface and avoid self-intersection.
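For illustration, here is a minimal sketch of that offset (the Vec3 type and helper functions are made up for the example, not taken from the asker's code):

interface Vec3 { x: number; y: number; z: number; }

const EPSILON = 0.001; // small bias that pushes the ray origin off the surface

function add(a: Vec3, b: Vec3): Vec3 {
  return { x: a.x + b.x, y: a.y + b.y, z: a.z + b.z };
}

function scale(v: Vec3, s: number): Vec3 {
  return { x: v.x * s, y: v.y * s, z: v.z * s };
}

// Start the shadow (or secondary) ray slightly above the surface, along the
// surface normal, so the intersection test does not immediately hit the very
// surface the ray just left.
function offsetOrigin(hitPoint: Vec3, normal: Vec3): Vec3 {
  return add(hitPoint, scale(normal, EPSILON));
}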

Related

Bidirectional path tracing, algorithm explanation

I'm trying to understand path tracing. So far, I have only dealt with the very basics: a ray is launched from each intersection point in a random direction within the hemisphere, then again, and so on recursively, until the ray hits a light source. As a result, with small light sources this approach produces an extremely noisy image.
The following images show the noise level depending on the number of samples (rays) per pixel.
I am also not sure that I am doing everything correctly, because the "Monte Carlo" method, as far as I understand it, implies that several rays are launched from each intersection point, and their results are then summed and averaged. But with that approach the number of rays grows exponentially and reaches unreasonable values after 6 bounces, so I decided it is better to just run several rays per pixel initially (slightly shifted from the center of the pixel in a random direction) and generate only 1 ray at each intersection. I do not know whether this approach still corresponds to "Monte Carlo", but at least this way the rendering does not last forever.
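A minimal sketch of that scheme, just for reference (every type and helper here is hypothetical, not actual code from my renderer):

// Hypothetical types standing in for whatever the renderer already has.
type Vec3 = { x: number; y: number; z: number };
type Color = { r: number; g: number; b: number };
type Ray = { origin: Vec3; dir: Vec3 };
type Hit = { point: Vec3; normal: Vec3; albedo: Color; emission: Color; isLight: boolean };
interface Scene { intersect(ray: Ray): Hit | null; }

declare function cameraRay(px: number, py: number): Ray;        // jittered primary ray
declare function randomHemisphereDirection(normal: Vec3): Vec3; // random direction in the hemisphere
declare function offsetOrigin(point: Vec3, normal: Vec3): Vec3; // epsilon offset against self-intersection
declare const MAX_DEPTH: number;

const BLACK: Color = { r: 0, g: 0, b: 0 };
const addColor = (a: Color, b: Color): Color => ({ r: a.r + b.r, g: a.g + b.g, b: a.b + b.b });
const mulColor = (a: Color, b: Color): Color => ({ r: a.r * b.r, g: a.g * b.g, b: a.b * b.b });
const scaleColor = (c: Color, s: number): Color => ({ r: c.r * s, g: c.g * s, b: c.b * s });

// Several jittered samples per pixel, averaged ...
function renderPixel(scene: Scene, px: number, py: number, samples: number): Color {
  let sum = BLACK;
  for (let s = 0; s < samples; s++) {
    const ray = cameraRay(px + Math.random(), py + Math.random());
    sum = addColor(sum, trace(scene, ray, 0));
  }
  return scaleColor(sum, 1 / samples);
}

// ... but only one random secondary ray at each bounce.
function trace(scene: Scene, ray: Ray, depth: number): Color {
  if (depth > MAX_DEPTH) return BLACK;
  const hit = scene.intersect(ray);
  if (hit === null) return BLACK;
  if (hit.isLight) return hit.emission;
  const next: Ray = { origin: offsetOrigin(hit.point, hit.normal), dir: randomHemisphereDirection(hit.normal) };
  return mulColor(hit.albedo, trace(scene, next, depth + 1));
}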
Bidirectional path tracing
I started looking for ways to reduce the amount of noise and came across bidirectional path tracing. But unfortunately, I couldn't find a detailed explanation of this algorithm in simple words. All I understood is that rays are generated from both the camera and the light sources, and then there is a check of whether the endpoints of these paths can be connected.
As you can see, if the intersection points of the blue ray from the camera and the white ray from the light source can be connected freely (there are no obstacles along the connection), then we can assume that the ray from the camera can pass through the points y1, y0 directly to the light source.
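As far as I can tell, the connection test itself would look roughly like this (reusing the made-up types from the sketch above; this omits BSDF evaluation at the connection and any multiple importance sampling weights, to show only the structure):

// A vertex stored on a camera or light sub-path.
type PathVertex = { point: Vec3; normal: Vec3; throughput: Color };

declare function occluded(scene: Scene, a: Vec3, b: Vec3): boolean; // shadow-ray test between two points

function connectVertices(scene: Scene, x: PathVertex, y: PathVertex): Color {
  const d: Vec3 = { x: y.point.x - x.point.x, y: y.point.y - x.point.y, z: y.point.z - x.point.z };
  const dist2 = d.x * d.x + d.y * d.y + d.z * d.z;
  const inv = 1 / Math.sqrt(dist2);
  const dir: Vec3 = { x: d.x * inv, y: d.y * inv, z: d.z * inv };

  // visibility: is there an obstacle between the two endpoints?
  if (occluded(scene, x.point, y.point)) return BLACK;

  // geometry term: both surfaces must face each other, with 1/distance^2 falloff
  const cosX = Math.max(0, x.normal.x * dir.x + x.normal.y * dir.y + x.normal.z * dir.z);
  const cosY = Math.max(0, -(y.normal.x * dir.x + y.normal.y * dir.y + y.normal.z * dir.z));
  const g = (cosX * cosY) / dist2;

  // contribution = light carried along both sub-paths, times the connection term
  return scaleColor(mulColor(x.throughput, y.throughput), g);
}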
But there are a lot of questions:
If the light source is not a point but has some shape, should the point from which the ray is launched be randomly selected on the surface of that shape? If you take only the center, there will be no difference from a point light source, right?
Do I need to build a path from the light source for each path from the camera, or should there be only one path from the light source while several paths (samples) are built from the camera for one pixel at once?
Should the number of bounces/reflections/refractions be the same for the path from the camera and the path from the light source, or not?
But the questions don't end there. I have heard that bidirectional path tracing models caustics well (in comparison with regular path tracing), but I do not understand at all how bidirectional path tracing can help with this.
Example 1
Here the path will eventually be built, but the number of bounces will be extremely large, so no caustics will appear here, even though the ray from the camera is directed almost at the same point where the path of the ray from the light source ends.
Example 2
Here the path will not be built, because there is an obstacle between the endpoints of the paths, although it could be built if point x3 were connected to point y1. But according to the algorithm (if I understand it correctly), only the last points of the paths are connected.
Question:
What is the use of such an algorithm if, in a significant number of cases, the paths either cannot be built or are unnecessarily long? Maybe I misunderstand something? I came across many articles and documents where this algorithm was described, but mostly it was described mathematically (using all sorts of magical terms like biased/unbiased, PDF, BSDF and others), and not... algorithmically. I am not that strong in mathematics and all sorts of mathematical notation and wording; I would just like to understand WHAT TO DO, how to implement it correctly in code, how these paths are connected, and in what order. This can be explained in simple words or pseudocode, right? I would be extremely grateful if someone would finally shed some light on all this.
Some references that helped me understand path tracing properly:
https://www.scratchapixel.com/ (every rendering student should begin with this)
https://en.wikipedia.org/wiki/Path_tracing
If you're looking for more references: path tracing is used for "global illumination", which is the opposite of "direct illumination", where only a straight line from the point to the light is considered.
What's more, caustics are well known to be a hard problem, so don't begin with them! The Monte Carlo method is a good, straightforward method to begin with, but it has its limitations (e.g. caustics and tiny lights).
Some advice for rendering newbies
Mathematical notation is surely not the coolest thing, and everyone will of course prefer ready-to-go code. But maths is the most rigorous way to describe the world. It also lets you model a whole physical interaction in a small formula instead of plenty of lines of code that don't quite fit the real problem. I suggest you try to read what you read more carefully, as a good mathematical formula is always fully detailed. If some variables are not specified, don't lose your time and look for another reference.

Raycasting in all directions from the source in three.js

I have been working with raycasting in three.js for a couple of days and have understood how it works.
At the moment I am able to intersect objects using raycasting, but I have run into one problem.
I am trying to find out whether there is any obstacle within 5 meters of my source, and I want to check those 5 meters in all directions (like a boundary of 5 meters around the object).
Right now, what I am doing only checks in one direction.
So can someone help me with how I can achieve this?
Using raycasting is not a precise and performant approach for this kind of collision detection since you would have to cast an infinite number of rays to produce a 100% reliable result.
You should work with bounding volumes instead and perform intersection tests between those. three.js provides bounding sphere and AABB implementations in the core, which are sufficiently tight bounding volumes for many use cases. You can also give the new OBB class a try, which is located in the examples. All three classes provide methods for intersection tests against each other.
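For example, a rough sketch with the core classes (the source object and the obstacle list are assumed to exist in your scene; adapt the names):

import * as THREE from 'three';

// Does any obstacle's bounding box come within `radius` meters of the source?
function anyObstacleWithin(source: THREE.Object3D, obstacles: THREE.Object3D[], radius = 5): boolean {
  const center = new THREE.Vector3();
  source.getWorldPosition(center);
  const range = new THREE.Sphere(center, radius); // sphere of 5 m around the source

  return obstacles.some((obstacle) => {
    const box = new THREE.Box3().setFromObject(obstacle); // world-space AABB of the obstacle
    return box.intersectsSphere(range);
  });
}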

InstancedBufferGeometry negative scale issue

I have InstancedBufferGeometry working in my scene. However, some of the instances are mirrors of the source, so they have a negative scale to represent that geometry.
This flips the winding order of those instances, and they look wrong due to back-face culling (which I want to keep).
I'm fully aware of the limitations of this approach, but I was wondering if there is a way to tackle this that I may not have come across yet? Maybe some trick in the shader to specify which instances are front-facing and which are back-facing? I don't recall this being possible, though...
Or should I be doing two separate loads? (Which will duplicate the draw calls.)
I'm loading a lot of different geometries (which are all instanced), so I'm trying to make sure I get the best performance possible.
Thanks!
Ant
[EDIT: Added a little more detail]
It would help if you provided an example. As far as I can understand your question, the simple answer is: no, you can't do that.
As far as I'm aware, back-facing primitives are culled by the rasterizer before they reach the fragment shader, so it's not in your control there. If you want to use negative scaling and make sure the surfaces are still visible, enable rendering of both faces (front and back).
Alternatively, you might be okay with simply rotating objects and sticking to positive scale; if you have to have mirroring, you're out of luck there.
One more idea: have 2 instanced objects, one with the normal geometry and one with the mirrored geometry, and fix up the normals in the mirrored geometry.
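A rough sketch of that last idea, assuming an indexed BufferGeometry (the details are illustrative, not tested against your setup):

import * as THREE from 'three';

// Build a mirrored copy of the geometry with the winding order restored,
// so back-face culling keeps working; use it for the mirrored instances.
function makeMirroredGeometry(source: THREE.BufferGeometry): THREE.BufferGeometry {
  const mirrored = source.clone();
  // mirror across the X axis (this flips the winding order as a side effect)
  mirrored.applyMatrix4(new THREE.Matrix4().makeScale(-1, 1, 1));

  // restore the winding by swapping two indices of every triangle
  const index = mirrored.index!;
  for (let i = 0; i < index.count; i += 3) {
    const a = index.getX(i);
    index.setX(i, index.getX(i + 2));
    index.setX(i + 2, a);
  }
  index.needsUpdate = true;

  // recompute normals so lighting is consistent with the new winding
  mirrored.computeVertexNormals();
  return mirrored;
}

// Usage sketch: one instanced mesh for the normal instances and another for the
// mirrored ones, e.g. new THREE.InstancedMesh(makeMirroredGeometry(geometry), material, mirrorCount).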

How to avoid hole filling in surface reconstruction?

I am using the Poisson surface reconstruction algorithm to reconstruct a triangulated mesh surface from points. However, Poisson always generates a watertight surface, which fills all holes by interpolation.
For small holes that are the result of missing data, this hole filling is desirable. But for some big holes, I do not want hole filling and just want the surface to remain open.
The figure above shows my idea: the left one is a point set with normals, the right one is the reconstructed surface. I want the top of this surface to remain open rather than the current watertight result.
Can anyone give me some advice on how I can keep these big holes in Poisson surface reconstruction? Or is there any other algorithm that could solve this?
P.S.
Based on the accepted answer to this question, I understand that surface reconstruction algorithms can be categorized as explicit or implicit. Poisson is an implicit one, and explicit ones can naturally handle the big-hole problem. But since the point data I have is mostly sparse and noisy, I would prefer an implicit one like Poisson.
Your screenshots look like you might be using MeshLab's implementation, which is based on an old version of the code. That implementation is not capable of trimming the surface.
The latest implementation, however, contains the SurfaceTrimmer that does exactly what you want. Take a look at the examples at the bottom of the page to see how to use it.
To use the SurfaceTrimmer program, first run the SSDRecon program with --density to reconstruct a mesh surface that stores a density value per vertex; then set a trim value to remove the faces whose density falls below that threshold.
Below is a sample usage of those programs on the demo eagle data:
./SSDRecon --in eagle.points.ply --out eagle.screened.color.ply --depth 10 --density
./SurfaceTrimmer --in eagle.screened.color.ply --out eagle.screened.color.trimmed.ply --trim 7

3D triangulation algorithm

Does anybody know what triangulation algorithm Maya uses? Failing that, what would be the most probable algorithms to try? I tried a few simple ones off the top of my head (shortest/longest resulting edges, smallest minimum angle, smallest/biggest area), but they were all wrong. Is Delaunay the most plausible algorithm?
Edit: by the way, pseudocode on how to implement Delaunay for a 2D quad in 3D space to generate two triangles is more than welcome!
Edit 2: Unfortunately, this is not the answer in 3D-space (only applicable in 2D).
I don't like to second-guess people's intentions, but if you are simply trying to get out of Maya what is shown in the viewport, you can extract Maya's triangulation by starting with MItMeshPolygon::getTriangles.
(The corresponding normals and vertex colours are straightforwardly accessible. UVs require a little more effort -- I don't remember the details (all my Maya code is with my ex-employer), but while at first glance it may seem like you don't have the data, in fact it's all there, just not conveniently.)
(One further note -- if your artists try hard enough, they can create polygons that crash Maya when getTriangles is called, even though they render OK and can be manipulated with the UI. This used to happen every few months, so it's worth bearing in mind but probably not worth worrying about too much.)
If you don't want to use the API or Python, then running polyTriangulate before exporting, then undoing afterwards (to get back the original polygons), would let you examine the triangulated mesh. (You may want or need to save the scene to a temp file, then reload it afterwards and use file to give it its old name back, if your export process does things that are difficult or impossible to undo.)
This is a bit hacky, but you're guaranteed to get the exact triangulation Maya is using. Rather easier than writing your own triangulation code, and almost certainly a LOT easier than trying to work out whatever Maya does internally...
Jonathan Shewchuk has a very popular 2D triangulation tool called Triangle, and a 3D version should appear soon. He also has a number of papers on this topic that might be of use.
You might try looking at Voronoi and Delaunay Techniques by Henrik Zimmer. I don't know if it's what Maya uses, but the paper describes some common techniques.
Here you can find an applet that demonstrates the Incremental, Gift Wrap, Divide and Conquer and QuickHull algorithms computing the Delaunay triangulation in 3D. Pointers to each algorithm are provided.
