Raytracing and light

I want to implement a physical raytracer (i.e. one that traces actual photons with a given wavelength), restricting myself to small scenes (like two spheres and an enclosing box) so I can experiment. It's not meant to be fast; I'll optimize it later.
I'm currently gathering everything I know about how photons interact with surfaces: they are either reflected (absorbed, then emitted again) or refracted, with a probability based on the surface's absorption spectrum and its reflectivity/refractivity indices, and refraction depends on the wavelength (which naturally produces dispersion), and so on.
I understand how shooting photons out of emissive materials (like "lights") and making them bounce around the scene until they happen to land in the camera produces an accurate result, but it is unacceptably slow, hence the need to do it backwards (shoot photons from the camera).
But I'm having trouble understanding how surface interactions can be modelled "backwards". For instance, if a photon coming from the camera hits the side of a red box, and the photon has a wavelength corresponding to red, it will be reflected, while all other wavelengths will be absorbed, which produces a red color. But is the intensity of that color decided by taking many samples of very close photons and checking which of them eventually reach a light and which don't? Because ultimately, either a photon hits a light or it doesn't (after a given number of bounces); there is no notion of a partial collision.
So basically my question is - is the intensity of the light received by a pixel a function of the number of photon samples for that pixel that actually make it to a light source, or is there something else involved?

It sounds like you want to do something called path tracing (http://en.wikipedia.org/wiki/Path_tracing), which is like raytracing except that it does not directly sample light sources when a ray from the camera hits a surface (which makes it quite slow, though not as slow as shooting rays "forwards" from the light sources).
However, you seem to be confusing yourself by thinking of "reverse photons" coming from the camera that you assume already have the properties ("the photon has a wavelength corresponding to red") you are actually trying to determine in the first place. To wrap your mind around this, you might want to read up on "regular" raytracing first: think of rays from the camera that bounce through the scene up to a certain bounce depth, and that at each hit directly sample the light sources to see whether they illuminate the object.
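For intuition, here is a minimal sketch of that "regular" backward pass, with direct light sampling at each hit. Everything in it (the Scene/Hit/Light types, the vector helpers) is made up for illustration, and it uses RGB colours rather than spectral samples:

    // Hypothetical minimal types, for illustration only.
    type Vec3 = { x: number; y: number; z: number };
    type Ray = { origin: Vec3; dir: Vec3 };
    type Hit = { point: Vec3; normal: Vec3; albedo: Vec3 };
    interface Light { position: Vec3; color: Vec3; }
    interface Scene {
      intersect(ray: Ray): Hit | null;          // closest intersection, if any
      visible(from: Vec3, to: Vec3): boolean;   // shadow-ray test
      lights: Light[];
    }

    const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
    const add = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x + b.x, y: a.y + b.y, z: a.z + b.z });
    const mul = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x * b.x, y: a.y * b.y, z: a.z * b.z });
    const scale = (a: Vec3, s: number): Vec3 => ({ x: a.x * s, y: a.y * s, z: a.z * s });
    const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;
    const normalize = (a: Vec3): Vec3 => scale(a, 1 / Math.sqrt(dot(a, a)));

    // Trace one camera ray: at the hit point, sample every light directly
    // instead of hoping that a bounced ray eventually reaches one.
    function trace(scene: Scene, ray: Ray): Vec3 {
      const hit = scene.intersect(ray);
      if (!hit) return { x: 0, y: 0, z: 0 };                      // background
      let color: Vec3 = { x: 0, y: 0, z: 0 };
      for (const light of scene.lights) {
        if (!scene.visible(hit.point, light.position)) continue;  // in shadow
        const l = normalize(sub(light.position, hit.point));
        const nDotL = Math.max(0, dot(hit.normal, l));
        // Diffuse term: the surface colour filters the light colour component-wise.
        color = add(color, scale(mul(hit.albedo, light.color), nDotL));
      }
      return color;
    }

Reflections and refractions would recurse from the hit point up to some depth; path tracing instead (or additionally) picks random bounce directions and averages many such samples per pixel.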
Regarding your final question ("Is the intensity of the light received by a pixel a function of the number of photon samples for that pixel that actually make it to a light source, or is there something else involved?"), I'll refer you to http://en.wikipedia.org/wiki/Rendering_equation, where you will find the rendering equation (the general mathematical problem that all 3D graphics algorithms, raytracing included, try to solve; it is reproduced below the list) together with a list of its limitations. That list answers your question in the negative, i.e. besides the light sources, the following effects are also involved in deciding the ultimate colour and intensity of a pixel:
phosphorescence, which occurs when light is absorbed at one moment in time and emitted at a different time,
fluorescence, where the absorbed and emitted light have different wavelengths,
interference, where the wave properties of light are exhibited, and
subsurface scattering, where the spatial locations for incoming and departing light are different. Surfaces rendered without accounting for subsurface scattering may appear unnaturally opaque.
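For reference, the equation itself (in its common form, ignoring wavelength and time dependence) says that the outgoing radiance from a point is its emitted radiance plus the incoming radiance integrated over the hemisphere, weighted by the BRDF and the cosine of the incidence angle:

    L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o) \, L_i(x, \omega_i) \, (\omega_i \cdot n) \, d\omega_i

So a pixel's intensity is an estimate of an integral, computed by averaging many weighted samples, rather than simply the fraction of samples that happen to reach a light.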

Related

Using three.js, how would you project a globe world to a map on the screen?

I am curious about the limits of three.js. The following question is asked mainly as a challenge, not because I actually need the specific knowledge/code right away.
Say you have a game/simulation world model around a sphere geometry representing a planet, like the worlds of the game Populous. The resolution of polygons and textures is sufficient to look smooth when the globe fills the view of an ordinary camera. There are animated macroscopic objects on the surface.
The challenge is to project everything from the model to a global map projection on the screen in real time. The choice of projection is yours, but it must be seamless/continuous, and it must be possible for the user to rotate it, placing any point on the planet surface in the center of the screen. (It is not an option to maintain an alternative model of the world only for visualization.)
There are no limits on the number of cameras etc. allowed, but the performance must be expected to be "realtime", say double-digit FPS or more.
I don't expect any proof in the form of a running application (although that would be cool), but some explanation as to how it could be done.
My own initial idea is to place a lot of cameras, in fact one for every pixel in the map projection, around the globe, within a Group object that is attached to some kind of orbit controls (with rotation only), but I expect the number of object culling operations to become a huge performance issue. I am sure there must exist more elegant (and faster) solutions. :-)
Why not just use a spherical camera model (think of a 360° camera) and virtually put it at the center of the sphere? So this camera would (if it were physically possible) be wrapped all around the sphere, looking toward the center from all directions.
This camera could be implemented in shaders (instead of the regular projection matrix) and would produce an equirectangular image of the planet surface (or in fact any other projection you want, like the spherical Mercator projection).
As far as I can tell, the vertex shader can implement any projection you want, and it doesn't need to represent a camera that is physically possible. It just needs to produce consistent clip-space coordinates for all vertices. Fragment shaders for lighting would still need to operate on the original coordinates, normals etc., but that should be achievable. So the vertex shader would just need to compute (x, y, z) => (phi, theta, r) and go on from there.
Occlusion culling would need to be disabled, but IIRC three.js doesn't do that anyway.
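As a rough sketch of that idea (the globe mesh is assumed to be centred at the world origin, the maxRadius uniform and the flat white fragment shader are placeholders, and three.js injects the position attribute and modelMatrix uniform for ShaderMaterial):

    import * as THREE from 'three';

    const equirectangularMaterial = new THREE.ShaderMaterial({
      uniforms: { maxRadius: { value: 1.1 } },
      vertexShader: `
        uniform float maxRadius;
        void main() {
          vec3 p = (modelMatrix * vec4(position, 1.0)).xyz;  // world space
          float r = length(p);
          float lon = atan(p.z, p.x);                        // [-PI, PI]
          float lat = asin(clamp(p.y / r, -1.0, 1.0));       // [-PI/2, PI/2]
          // Longitude/latitude become NDC x/y directly; radius becomes depth,
          // so higher terrain ends up nearer and wins the depth test.
          gl_Position = vec4(lon / 3.14159265358979,
                             lat / 1.57079632679490,
                             1.0 - r / maxRadius,
                             1.0);
        }`,
      fragmentShader: `
        void main() { gl_FragColor = vec4(1.0); }  // flat white; real shading goes here
      `,
    });

One caveat: triangles that straddle the ±180° longitude seam will smear across the whole map, so the seam needs special handling (e.g. a cut in the geometry).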

Is it possible to use GIS terrain vector data in three.js?

I'm new to three.js and WebGL in general.
The sample at http://css.dzone.com/articles/threejs-render-real-world shows how to use raster GIS terrain data in three.js
Is it possible to use vector GIS data in a scene? For example, I have a series of points representing locations (including height) stored in real-world coordinates (meters). How would I go about displaying those in three.js?
The basic sample at http://threejs.org/docs/59/#Manual/Introduction/Creating_a_scene shows how to create a geometry using coordinates - could I use a similar approach with real-world coordinates such as
"x" : 339494.5,
"y" : 1294953.7,
"z": 0.75
or do I need to convert these into page units? Could I use my points to create a surface on which to drape an aerial image?
I tried modifying the simple sample but I'm not seeing anything (or any error messages): http://jsfiddle.net/slead/KpCfW/
Thanks for any suggestions on what I'm doing wrong, or whether this is indeed possible.
I did a number of things to get the JSFiddle to show something, here: http://jsfiddle.net/HxnnA/
You did not specify any faces in your geometry. In this case I just hard-coded a face with all three of your data points acting as corners. Alternatively, you can look into using particles to display your data as points instead of faces.
Set the material's side to THREE.DoubleSide. This is not usually needed or recommended, but it helps debugging in the early phases, since you can see both sides of each face.
Your camera was probably looking in the wrong direction. I added a lookAt() to point it at the center and made the field of view wider (this just makes it easier to find things while coding).
Your camera's near and far planes were likely out of range for the camera position and terrain dimensions, so I increased the far plane distance.
Your coordinate values were quite huge, so I just modified them by hand a bit to make sense in relation to the camera, and to make sure they form a big enough triangle to be seen. You could consider dividing your coordinates by something like 100 to make the units smaller, but adjusting the camera to account for the huge scale should be enough too.
Nothing wrong with your approach, just make sure you feed the data so that it makes sense given the camera location, direction and near + far planes. Pay attention to how you make the faces: the parameters to Face3 are the indices of the points in your vertices array. Later on you might need to take winding order, normals and UVs into account. You can study the geometry classes included in Three.js for reference.
Three.js does not assign any meaning to units. It's just floating point numbers, and you can decide yourself what a unit (1.0) represents. Whether it's 1 mm, 1 inch or 1 km depends on what makes the most sense for the application and its scale. Floating point numbers can bring precision problems when the actual values are extremely small or extremely big. My own applications typically deal with stuff in the range from a couple of centimeters to a couple hundred meters, and use units such that 1.0 = 1 meter, which has been working fine.
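A minimal sketch of the approach (using the old, r59-style Geometry/Face3 API referenced above; newer three.js versions would use BufferGeometry instead; the offsets and the second and third points are made up):

    import * as THREE from 'three';

    // Subtract a large offset (e.g. the centroid or minimum of your dataset) so the
    // numbers stay small relative to the camera; these offsets are just made up.
    const offsetX = 339000, offsetY = 1294000;

    const geometry = new THREE.Geometry();
    geometry.vertices.push(
      new THREE.Vector3(339494.5 - offsetX, 0.75, -(1294953.7 - offsetY)),  // GIS z (height) mapped to three.js y
      new THREE.Vector3(339520.0 - offsetX, 1.10, -(1294970.0 - offsetY)),  // hypothetical second point
      new THREE.Vector3(339560.0 - offsetX, 0.90, -(1294940.0 - offsetY))   // hypothetical third point
    );
    geometry.faces.push(new THREE.Face3(0, 1, 2));  // indices into the vertices array

    const mesh = new THREE.Mesh(
      geometry,
      new THREE.MeshBasicMaterial({ color: 0x88aa44, side: THREE.DoubleSide })  // DoubleSide helps debugging
    );
    // scene.add(mesh) as in the basic "creating a scene" example.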

Non predefined multiple light sources in OpenGL ES 2.0

There is a great article about multiple light sources in GLSL
http://en.wikibooks.org/wiki/GLSL_Programming/GLUT/Multiple_Lights
But the light0 and light1 parameters are hard-coded in the shader. What if I have to draw flare gun shots, where every flare has its own position and color and must illuminate its surroundings? How do I write the other objects' shaders to deal with the unknown positions and colors of the flares (there is, of course, a limit to the maximum number of flares on screen)? For example, if there are at most 8 flares on screen, must I pass 8*2 uniforms even if the flares don't exist at that time?
Or imagine you are making a level editor where the user can place lamps: how will the other objects "know" about the new light source and render it once a new lamp has been added?
I think there must be a clever solution, but I can't find one.
Lighting equations usually rely on additive colour. So the output is the colour of light one plus the colour of light two plus the colour of light three, etc.
One of the in-framebuffer blending modes offered by OpenGL is additive blending. So the colour output of anything new that you draw will be added to whatever is already in the buffer.
The most naive solution is therefore to write your shader to do exactly one light. If you have multiple lights, draw the scene that many times, each time with a different nominated light. It's an example of multipass rendering.
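A sketch of that multipass loop, shown with WebGL (which mirrors the OpenGL ES 2.0 API); drawScene(), setLightUniforms() and lights are placeholders for your own code:

    declare const gl: WebGLRenderingContext;
    declare const lights: unknown[];
    declare function setLightUniforms(light: unknown): void;
    declare function drawScene(): void;

    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

    // First light: a normal opaque draw that also fills the depth buffer.
    gl.disable(gl.BLEND);
    gl.depthFunc(gl.LESS);
    setLightUniforms(lights[0]);
    drawScene();

    // Remaining lights: redraw the same geometry and add the results together.
    gl.enable(gl.BLEND);
    gl.blendFunc(gl.ONE, gl.ONE);   // additive blending
    gl.depthFunc(gl.LEQUAL);        // let equal-depth fragments through on later passes
    gl.depthMask(false);            // the depth buffer is already correct
    for (const light of lights.slice(1)) {
      setLightUniforms(light);
      drawScene();
    }
    gl.depthMask(true);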
Better solutions involve writing shaders to do two, four, eight or whatever lights at once, doing, say, 15 lights as an 8-light draw then a 4-light draw then a 2-light draw then a 1-light draw, and including only geometry within reach of each light when you do that pass. Which tends to mean finding intelligent ways to group lights by locality.
EDIT: with a little more thought, I should add that there's another option in deferred shading, though it's not completely useful on most GL ES devices at the moment due to the limited options for output buffers.
Suppose theoretically you could render your geometry exactly once and store whatever you wanted per pixel. So you wouldn't just output a colour, you'd output, say, a position in 3d space, a normal, a diffuse colour, a specular colour and a specular exponent. Those would then all be in a per-pixel buffer.
You could then render each light by (i) working out the maximum possible space it can occupy when projected onto the screen (so, a 2d rectangle that relates directly to pixels); and (ii) rendering the light as a single quad of that size, for each pixel reading the relevant values from the buffer you just set up and outputting an appropriately lit colour.
Then you'd process all the actual geometry in your scene exactly once, and each additional light would cost at most a single full-screen quad.
In practice you can't really do that, because the output buffers you tend to be able to use in ES provide too little storage. But what you can usually do is render to a 32-bit colour buffer with an attached depth buffer. So you can just store depth in the depth buffer and work out the world (x, y, z) from that plus the [uniform] position of the camera in the light shader. You could store 8-bit versions of the normal's x and y in the colour buffer (spending 16 bits) and work out z, because you know that the normal is always of unit length. Then, to pick a concrete example at random, maybe you could store a 16-bit version of the diffuse colour in the remaining space, possibly in YCrCb with extra storage for Y.
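For example, the normal-packing part of that could look like the following sketch; it assumes view-space normals of visible surfaces have a non-negative z component:

    // Store only the normal's x and y (8 bits each) and rebuild z from unit length.
    function encodeNormalXY(n: { x: number; y: number; z: number }): [number, number] {
      // [-1, 1] -> [0, 255]
      return [Math.round((n.x * 0.5 + 0.5) * 255), Math.round((n.y * 0.5 + 0.5) * 255)];
    }

    function decodeNormal(bx: number, by: number): { x: number; y: number; z: number } {
      const x = (bx / 255) * 2 - 1;
      const y = (by / 255) * 2 - 1;
      const z = Math.sqrt(Math.max(0, 1 - x * x - y * y));  // |n| = 1, z assumed >= 0
      return { x, y, z };
    }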
The main disadvantage is that hardware antialiasing then doesn't work, due to much the same sort of concerns as with transparency and depth buffers. But if you get to the point where you save dramatically on lighting, it might still make sense to do manual antialiasing by rendering a larger version of the scene and then scaling it down in a final pass.

How to blend colors

I'm currently coding a raytracer. I wonder how to blend a primitive color with a light color.
I've seen many combinations.
Some just add the two colors. This gives me very strange results.
Some multiply each component. That looks OK, but if the primitive is blue ({0, 0, 1}) and the light is red ({1, 0, 0}), the result is just black. Is that the normal behavior?
I've also seen the screen blending mode (screen(C1, C2) = C1 + C2 - C1 * C2), which is more logical to me since, in the above case, the colors will actually blend.
Same question for the colors of reflected rays: how do I blend them with the local color?
Bonus question: should a point on a primitive that is not illuminated be black? I've seen stuff like "half of the color".
It's actually way more complicated. The light and surface material interact via a formula called a bidirectional reflectance distribution function (BRDF, for short), a 4-dimensional function whose inputs are the direction of the light and direction of the viewing angle (both relative to the surface normal, i.e. the unit vector perpendicular to the surface), and whose output is the ratio of the outgoing radiance in the view direction to the incoming irradiance from the light's direction.
There's not an easy way to explain it in this short space. Perhaps you could check out the Wikipedia article, as well as its links and references, or even crack open a decent computer graphics textbook?
But suffice it to say that for many BRDF's, it is akin to "multiplication", in that a perfectly reflective Lambertian pure red surface illuminated by a perfectly blue light ought to look black.
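In code, that "multiplicative" diffuse term usually looks like the following sketch (RGB triples instead of spectra; nDotL is the cosine of the angle between the surface normal and the direction to the light):

    type RGB = [number, number, number];

    // Lambertian diffuse: the surface colour filters the light colour component-wise,
    // scaled by the cosine of the angle of incidence. A pure red surface ([1, 0, 0])
    // under a pure blue light ([0, 0, 1]) comes out black.
    function lambert(albedo: RGB, lightColor: RGB, nDotL: number): RGB {
      const k = Math.max(0, nDotL);
      return [albedo[0] * lightColor[0] * k,
              albedo[1] * lightColor[1] * k,
              albedo[2] * lightColor[2] * k];
    }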
There are several complicating factors: No real light, other than a laser, emits a pure primary color with the other components being 0.0. (Avoid this if you want a realistic image.) No real surface reflects entirely in one wavelength with the other color components being 0.0. (Avoid this too if you want a realistic image.) And no material is really quite Lambertian (purely diffuse); generally there is some specular component that tends to reflect light at the surface, before it gets to any underlying pigment, and therefore that portion of the reflected light will tend to look like the color of the light, not the surface. Unless it's metallic, in which case the "color" of the metal does influence the specular. Sigh. Like I said, it's complicated, and you need to actually read up on the physics (as well as the 40 years of computer graphics research in which all these problems were solved long before you ever contemplated them).
Bonus answer ("what happens to a point that is not illuminated"): points truly not illuminated should be black. But surely you knew that, as well as the fact that in any real-world situation there isn't any point that gets no illumination at all (and can still be photographed). A more interesting formulation of the question goes like this: my renderer only considers direct light paths from sources, not all the indirect paths (object to object) that illuminate the darker corners, so how do I prevent those from going black? A cheap answer is to add an "ambient" light that just throws a constant amount of light everywhere. This is what people did for a long time, and it looked quite fake, as you would imagine. A better approach is to use various "global illumination" techniques. Look them up. But definitely don't just "illuminate it half as much," unless you're aiming for a very stylistic (as opposed to realistic) appearance.
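The cheap ambient fix mentioned above just adds a constant term on top of that diffuse result (this reuses the RGB type and lambert() helper from the sketch above; the ambient value is made up):

    // Constant ambient fill plus the diffuse term from above.
    const ambient: RGB = [0.1, 0.1, 0.1];

    function shade(albedo: RGB, lightColor: RGB, nDotL: number): RGB {
      const diffuse = lambert(albedo, lightColor, nDotL);
      return [albedo[0] * ambient[0] + diffuse[0],
              albedo[1] * ambient[1] + diffuse[1],
              albedo[2] * ambient[2] + diffuse[2]];
    }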

What algorithms are out there for detecting lights and shadows and their parameters?

So I have a picture (not the best one).
I want to detect where the lights come from and what types of lights they are. What algorithm/framework can do such things with static images?
I mentioned shadows because, in general, if you can separate a shadow from a surface then you can probably determine the light type and its other parameters.
I mean a general shadow search, not one only for the presented image.
With the image that you presented, there are so many sources of error that I'd be surprised if a trained human, let alone an algorithm, could do better than ±20% on any calculations. Here are the problems:
There isn't a known straight line anywhere, since everything is hand hewn. The best bet would be the I-beam above the doorway, but you don't know its orientation.
There's heavy barrel distortion at the edges of the image, which is introduced by the lens and is characteristic of that lens at that zoom and focus. Without precise calibration of that, you can only guess at the degree of distortion.
The image is skewed with regard to the wall it is facing but none of the walls appear to be all that planar anyway.
You want to know the source of lights. Well the obvious primary light is the sun, but latitude, longitude, time and date all affect that. Then there are the diffuse reflections but unless you have the albedo of the materials you can only guess.
What are you hoping to derive from this image? Usually when doing lighting analysis, someone will put known reference targets of different, known reflectivity in the space to be analyzed. Working from a pocket snapshot camera on an unknown scene really limits what you can extrapolate.