I'm currently coding a raytracer. I wonder how to blend a primitive color with a light color.
I've seen many combinations.
Some just add the two colors. This gives me very strange results.
Some multiply each component. It looks OK, but if the primitive is blue ({0, 0, 1}) and the light is red ({1, 0, 0}), the result is just black. Is that the normal behavior?
I've also seen the screen blending mode (screen(C1, C2) = C1 + C2 - C1 * C2), which makes more sense to me, since in the above case the colors will actually blend.
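For reference, here is what those three options do, as a quick sketch in plain Python with colors as RGB tuples in [0, 1]:

```python
# Quick sketch of the three blend modes, with colors as (r, g, b) tuples in [0, 1].

def add_blend(c1, c2):
    # Simple addition, clamped to 1.0 (this is what gives the "strange" washed-out results).
    return tuple(min(a + b, 1.0) for a, b in zip(c1, c2))

def multiply_blend(c1, c2):
    # Component-wise multiplication: a blue surface under a red light goes to black.
    return tuple(a * b for a, b in zip(c1, c2))

def screen_blend(c1, c2):
    # screen(C1, C2) = C1 + C2 - C1 * C2
    return tuple(a + b - a * b for a, b in zip(c1, c2))

blue, red = (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)
print(add_blend(blue, red))       # (1.0, 0.0, 1.0)
print(multiply_blend(blue, red))  # (0.0, 0.0, 0.0)
print(screen_blend(blue, red))    # (1.0, 0.0, 1.0)
```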
Same question for reflected ray colors: how do I blend them with the local color?
Bonus question: should a point on a primitive that is not illuminated be black? I've seen things like "use half of the color".
It's actually way more complicated. The light and the surface material interact via a formula called a bidirectional reflectance distribution function (BRDF for short): a four-dimensional function whose inputs are the direction of the incoming light and the viewing direction (both relative to the surface normal, i.e. the unit vector perpendicular to the surface), and whose output is the ratio of the outgoing radiance in the view direction to the incoming irradiance from the light's direction.
There's not an easy way to explain it in this short space. Perhaps you could check out the Wikipedia article, as well as its links and references, or even crack open a decent computer graphics textbook?
But suffice it to say that for many BRDFs the interaction is akin to "multiplication", in that a perfectly reflective, Lambertian, pure red surface illuminated by a perfectly blue light ought to look black.
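As a minimal sketch of that "multiplication" for a Lambertian surface (assuming unit-length vectors and RGB values in [0, 1]; NumPy is used just for the vector math, and the function name is only illustrative):

```python
import numpy as np

def lambertian_shade(albedo, light_color, normal, light_dir):
    """Diffuse (Lambertian) shading: per-channel product of surface albedo and
    light color, scaled by the cosine of the angle between normal and light."""
    n = np.asarray(normal) / np.linalg.norm(normal)
    l = np.asarray(light_dir) / np.linalg.norm(light_dir)
    cos_theta = max(np.dot(n, l), 0.0)          # no contribution from behind the surface
    return np.asarray(albedo) * np.asarray(light_color) * cos_theta

# A pure red Lambertian surface under a pure blue light comes out black:
print(lambertian_shade([1, 0, 0], [0, 0, 1], [0, 0, 1], [0, 0, 1]))  # [0. 0. 0.]
```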
There are several complicating factors:
No real light, other than a laser, emits a pure primary color with the other components at 0.0. (Avoid this if you want a realistic image.)
No real surface reflects entirely in one wavelength band with the other color components at 0.0. (Avoid this too if you want a realistic image.)
No material is really quite Lambertian (purely diffuse): generally there is some specular component that reflects light right at the surface, before it reaches any underlying pigment, so that portion of the reflected light tends to look like the color of the light, not the surface. Unless the material is metallic, in which case the "color" of the metal does influence the specular term.
Sigh. Like I said, it's complicated, and you need to actually read up on the physics (as well as the 40 years of computer graphics research in which all these problems were solved long before you ever contemplated them).
Bonus answer ("what happens to a point that is not illuminated"): points that are truly not illuminated should be black. But surely you knew that, as well as the fact that in any real-world situation there isn't any point that gets no illumination at all (and can still be photographed). A more interesting formulation of the question goes like this: my renderer only considers direct light paths from sources, not all the indirect paths (object to object) that illuminate the darker corners, so how do I prevent those areas from going black? A cheap answer is to add an "ambient" light that throws a constant amount of light everywhere. This is what people did for a long time, and it looked quite fake, as you would imagine. A better approach is to use the various "global illumination" techniques. Look them up. But definitely don't just "illuminate it half as much", unless you're aiming for a very stylized (as opposed to realistic) appearance.
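To illustrate the cheap ambient hack (not the global-illumination approach), a sketch along these lines, with made-up parameter values:

```python
import numpy as np

def shade_with_ambient(albedo, light_color, normal, light_dir, ambient=(0.1, 0.1, 0.1)):
    """Direct diffuse term plus a constant ambient term, so unlit points come out
    dimly colored instead of pure black. Looks fake, but it's the classic hack."""
    n = np.asarray(normal) / np.linalg.norm(normal)
    l = np.asarray(light_dir) / np.linalg.norm(light_dir)
    diffuse = np.asarray(albedo) * np.asarray(light_color) * max(np.dot(n, l), 0.0)
    return np.clip(diffuse + np.asarray(albedo) * np.asarray(ambient), 0.0, 1.0)

# A point facing away from the light keeps a faint albedo-tinted value instead of black:
print(shade_with_ambient([0.8, 0.2, 0.2], [1, 1, 1], [0, 0, 1], [0, 0, -1]))
```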
Recently I found an amazing app called Photo Lab, and I'm curious about one of its effects, called Paper Rose. In the pictures below, one is the original picture and the other is the processed picture. My question is: what kind of algorithm can produce this effect? It would be even better if you could show me some code or a demo. Thanks in advance!
I am afraid that this is not just an algorithm, but a complex piece of software.
The most difficult part is to model the shape of the rose. The petals are probably a meshed surface. It is not so difficult to give them a curved shape, but the hard issue is to group them in such a way that they do not intersect.
It is not entirely impossible that this could be achieved by first laying the petals out in a flat geometry, where you can control the intersections, and then wrapping that around an axis with a kind of polar transform. But I don't really believe that; I rather think they have a collision-avoiding geometric modeller.
The next steps, which are more classical, are to texture-map the pictures onto the petals and to perform the realistic rendering of the whole scene.
But there's another option, which I'll call the "poor man's rendering".
You can start from a real picture of a paper rose where the petals have an empty, thick black frame. Then, on the picture, you detect (either in some automated way or just by hand) points that correspond to a regular grid on the flattened paper.
As the petals are not wholly visible, the hidden parts must be clipped out from the mesh, possibly by using a polygonal fence.
Now you can take any picture, fit it over the undistorted mesh, clip out the hidden areas and warp to the distorted position. Then by compositing tricks, you will give it a natural shaded appearance on the rose.
Note: the process is made easier by drawing a complete grid inside the frame. Either way, you will need to somehow erase it before doing the compositing, in order to retrieve just the shading information.
I would tend to believe that the second approach was used here, as I see a few mapping anomalies along some edges, which would not arise on a fully synthetic scene.
In any case, hard work.
I'm having an issue with back faces (relative to the light) and shadow mapping that I can't seem to get past. I'm still at a relatively early stage of optimizing my engine, but I can't seem to get there: even with everything hand-tuned for this one piece of geometry, it still looks like garbage.
The geometry is a skinny wall that is "curved" using about 5 different chunks of wall. When I create my depth map I'm culling front faces (relative to the light). This definitely helps, but the front faces on the other side of the wall seem to be what's causing the z-fighting/projective shadowing.
Some notes on the screenshot:
Front faces are culled when the depth texture (from the light) is being drawn
I have the near and far planes tuned just for this chunk of geometry (set at 20 and 25 respectively)
One directional light source, coming down on a slight angle toward the right side of the scene, enough to indicate that wall should be shadowed, but mostly straight down
Using a ludicrously large 4096x4096 shadow map texture
All lighting is disabled, but note that I am doing soft lighting (and hence have per-vertex normals) even on this wall
As mentioned here, the conclusion is that you should not shadow polygons that are back-facing with respect to the light. I'm struggling with this particular issue because I don't want to pass the face normals all the way through to the fragment shader just to rule out the true back faces (to the light) there; however, if anyone feels this is the best/only solution for this geometry, that's what I'll have to do. Considering the pipeline doesn't make it easy or obvious to pass face normals through, it doesn't feel like the path of least resistance. Note also that the normals I am passing are the vertex normals, to allow for softer lighting effects around the edges (which will likely span both non-shadowed and shadowed surfaces).
Note that I am also getting some nasty perspective aliasing. My next step was going to be cascaded shadow maps, but without fixing this I feel like I'm just delaying the inevitable, since I've already hand-tightened the light's view as best I can (or so I think).
Anyways I feel like I'm missing something, so if you have any thoughts or help at all would be most appreciated!
EDIT
To be clear, the wall technically should NOT be in shadow, based on where the light is coming from.
Below is an image with shadowing turned off. This is just using the vertex normals to calculate diffuse lighting; it's not pretty (too much geometry is visible), but it does show that some of the edges are somewhat visible.
So yes, the wall SHOULD be in shadow, but I'm hoping I can get the smoothing working better so the edges can keep some diffuse lighting. If it needs to be completely in shadow, I'm fine with that whether it's the shadow map that puts it there or my own code doing so because the face normal points away from the light; but passing the face normal through to my vertex/fragment shaders still doesn't seem like the path of least resistance.
Perhaps these will help illustrate my problem better, or perhaps bring to light some fundamental understanding I am missing.
EDIT #2
I've included the depth texture below. You can see the wall in question in the bottom left, and from the screenshot you can see how I've trimmed the depth values to roughly 0.4-1.0. This means the depth values for that wall start in the 0.4 range, so it's not PERFECTLY clipped for it, but it's close. Does that seem reasonable? I'm pretty sure it's a full 24- or 32-bit depth buffer, via the DEPTH_COMPONENT extension on iOS. For @starmole: does this help to determine whether it's a scaling error in my projection? Do you think the size/area covered by my map is too large, and that focusing it closer might help?
The problem seems to be that you are:
(1) Culling the front faces
(2) Looking at the back face
(3) Not removing the light contribution from the back face, even though by its normal it is not actually lit - or there is some inaccuracy in that computation
(4) Probably not adding some epsilon
(1) and (2) mean that there will be Z-fighting between the shadow map and the back faces.
Also, the shadow map resolution is not going to help you - just look at the wall in the shadow map, it's one pixel thick.
Recommendations:
Epsilons. Make sure that Z > lightZ + epsilon
Epsilons. Make sure that the surface is facing the light (dot of normal and light direction > epsilon), so that a wall that is very nearly edge-on to the light still counts as shadowed
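To make those two epsilon tests concrete, here is a sketch of the shadow decision, written in Python for clarity rather than GLSL; the bias constants are only illustrative starting points and depend on your depth precision and scene scale:

```python
import math

def in_shadow(frag_depth_light_space, shadow_map_depth, normal, light_dir,
              depth_bias=0.002, slope_bias=0.005, facing_eps=1e-3):
    """Return True if the fragment should be shadowed.

    frag_depth_light_space: this fragment's depth as seen from the light.
    shadow_map_depth:       the depth stored in the shadow map for this texel.
    normal, light_dir:      unit vectors; light_dir points from the surface toward the light.
    """
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))

    # Back-facing (or nearly edge-on) to the light: not lit, so treat as shadowed
    # without even consulting the shadow map. This avoids z-fighting on back faces.
    if n_dot_l <= facing_eps:
        return True

    # Slope-scaled bias: surfaces at steep angles to the light need a larger offset.
    slope = math.sqrt(max(1.0 - n_dot_l * n_dot_l, 0.0)) / max(n_dot_l, facing_eps)
    bias = depth_bias + slope_bias * slope
    return frag_depth_light_space - bias > shadow_map_depth
```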
I want to implement a physical raytracer (i.e. with actual photons with a given wavelength), restricting myself to small scenes (like two spheres and an enclosing box), to do experiments. It's not meant to be fast but I'll optimize it later.
I'm currently gathering all I know about how photons interact with surfaces, i.e. they either reflect (get absorbed, then emitted again) or refract, with a probability based on the surface's absorption spectrum and reflectivity/refractivity indices; refraction depends on the wavelength (which naturally results in dispersion), etc.
I understand how shooting photons out of emissive materials (like "lights") and letting them bounce around the scene until they happen to land in the camera produces an accurate result, but it is unacceptably slow - hence the need to do it backwards (shoot photons from the camera).
But I'm having trouble understanding how surface interactions can be modelled "backwards". For instance, when a photon coming from the camera hits the side of a red box: if the photon has a wavelength corresponding to red, it is reflected, and all other wavelengths are absorbed, which produces a red color. But is the intensity of the color decided by taking many samples of very close photons and checking which of them eventually reach a light and which don't? Because ultimately, either a photon hits a light or it doesn't (after a given number of bounces) - there is no notion of a partial collision.
So basically my question is - is the intensity of the light received by a pixel a function of the number of photon samples for that pixel that actually make it to a light source, or is there something else involved?
It sounds like you want to do something called http://en.wikipedia.org/wiki/Path_tracing which is like raytracing, except it does not directly sample light sources when a direct ray from the camera hits a surface (causing it to be quite slow, but not as slow as shooting rays "forwards" from the light sources).
However you seem to confuse yourself by thinking of "reverse photons" coming from the camera which you assume to already have the properties ("the photon has a wavelength corresponding to red") you are actually trying to decide in the first place. To wrap your mind around this, you might want to read up on "regular" raytracing first. So think of rays from the camera that bounce through a scene up to a certain bounce depth or until they hit an object, at which point they directly sample light sources to see if they illuminate the object.
About your final question "Is the intensity of the light received by a pixel a function of the number of photon samples for that pixel that actually make it to a light source, or is there something else involved?" I'll refer you to http://en.wikipedia.org/wiki/Rendering_equation where you will find the rendering equation (the general mathematical problem all 3D graphics algorithms like raytracing try to solve) and a list with its limitations, which answers your question in the negative (i.e. other than the light source these effects are also involved in deciding the ultimate colour and intensity of a pixel):
phosphorescence, which occurs when light is absorbed at one moment in time and emitted at a different time,
fluorescence, where the absorbed and emitted light have different wavelengths,
interference, where the wave properties of light are exhibited, and
subsurface scattering, where the spatial locations for incoming and departing light are different. Surfaces rendered without accounting for subsurface scattering may appear unnaturally opaque.
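To tie this back to the final question above: in a path tracer, the value of a pixel is not simply the fraction of samples that happen to hit a light. Each camera-path sample carries a weighted contribution (the path throughput times the sampled light radiance at each bounce), and the pixel is the average of many such samples. A very rough sketch of that loop, with the scene-specific routines (trace, sample_direct_light, sample_bounce) left as hypothetical placeholders:

```python
def render_pixel(camera_ray, scene, num_samples=64, max_bounces=4):
    """Average many Monte Carlo samples; each sample accumulates weighted radiance,
    not a binary hit/miss on a light source."""
    total = 0.0
    for _ in range(num_samples):
        radiance, throughput = 0.0, 1.0
        ray = camera_ray
        for _ in range(max_bounces):
            hit = scene.trace(ray)                  # hypothetical intersection routine
            if hit is None:
                break
            # Direct lighting: explicitly sample a light and add its shadow-tested
            # contribution, weighted by the BRDF and the running throughput.
            radiance += throughput * scene.sample_direct_light(hit)
            # Indirect lighting: pick a random bounce direction and carry on,
            # attenuating the throughput by the surface reflectance.
            ray, reflectance = scene.sample_bounce(hit)
            throughput *= reflectance
            if throughput < 1e-4:                   # nothing meaningful left to gather
                break
        total += radiance
    return total / num_samples
```

Radiance is a single scalar here for brevity; in a spectral renderer it would be tracked per wavelength (or per RGB channel).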
So I have a picture (not the best one).
I want to detect where the lights come from and what types of lights they are. What algorithm/framework can do such things with static images?
I mentioned shadows because, in general, if you can separate a shadow from a surface then you can probably determine the light type and its other parameters.
I mean a general shadow search, not just one for the presented image.
With the image that you presented, there are so many sources of error that I'd be surprised if a trained human, let alone an algorithm, could do better than ±20% on any calculation. Here are the problems:
There isn't a known straight line anywhere, since everything is hand hewn. The best bet would be the I-beam above the doorway, but you don't know its orientation.
There's heavy barrel distortion in the edges of the image which are introduced by the lens and are characteristic of that lens at that zoom and focus. Without precise calibration of that, you can only guess at the degree of distortion.
The image is skewed with regard to the wall it is facing but none of the walls appear to be all that planar anyway.
You want to know the source of lights. Well the obvious primary light is the sun, but latitude, longitude, time and date all affect that. Then there are the diffuse reflections but unless you have the albedo of the materials you can only guess.
What are you hoping to derive from this image? Usually when doing lighting analysis, someone will put known reference targets of different, known reflectivity in the space to be analyzed. Working from a pocket snapshot camera on an unknown scene really limits what you can extrapolate.
I need the fastest sphere mapping algorithm. Something like Bresenham's line drawing one.
Something like the implementation that I saw in Star Control 2 (rotating planets).
Are there any already invented and/or implemented techniques for this?
I really don't want to reinvent the wheel. Please help...
Description of the problem.
I have a place on the 2D surface where the sphere has to appear. The sphere (let it be the Earth) has to be textured with a detailed map and must be able to scale and rotate freely. I want to implement it with a lookup map or some simple transformation function of coordinates: each pixel on the 2D image of the sphere is defined by one or more pixels from the cylindrical map of the sphere. This gives me the ability to implement antialiasing of the resulting image. I'm also thinking about using mipmaps when one pixel of the resulting picture corresponds to more than one pixel of the original map (for example, close to the poles of the sphere). Deep down I feel that this can be implemented with some fairly trivial math, but all these thoughts are just my thoughts.
This question is a little bit related to this one: Textured spheres without strong distortion, but there were no answers available on my question.
UPD: Assume that I have no hardware support. I want a cross-platform solution.
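Roughly, the kind of per-pixel transform I have in mind, as a sketch assuming an orthographic projection, an equirectangular (cylindrical) source map, and rotation only about the vertical axis, with no antialiasing or mipmapping yet:

```python
import math

def sphere_pixel_to_map(x, y, radius, map_w, map_h, spin=0.0):
    """Map a screen pixel (x, y), measured from the sphere's center, to integer
    texel coordinates in an equirectangular (cylindrical) map. Returns None
    outside the sphere's disc. 'spin' rotates the sphere about its vertical
    axis, in radians."""
    r2 = x * x + y * y
    if r2 > radius * radius:
        return None                               # outside the visible disc
    z = math.sqrt(radius * radius - r2)           # depth of the front hemisphere point
    # Convert the 3D point (x, y, z) on the sphere to latitude/longitude.
    lat = math.asin(y / radius)                   # -pi/2 .. pi/2
    lon = math.atan2(x, z) + spin
    u = (lon / (2.0 * math.pi)) % 1.0             # wrap longitude around the map
    v = (lat + math.pi / 2.0) / math.pi           # 0 at one pole, 1 at the other
    return int(u * (map_w - 1)), int(v * (map_h - 1))
```

For a fixed radius, the latitude/longitude per pixel could be precomputed into a lookup table once, with only the spin offset applied per frame.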
The standard way to do this kind of mapping is a cube map: the sphere is projected onto the 6 sides of a cube. Modern graphics cards support this kind of texture at the hardware level, including full texture filtering; I believe mipmapping is also supported.
An alternative method (which is not explicitly supported by hardware, but which can be implemented with reasonable performance by procedural shaders) is parabolic mapping, which projects the sphere onto two opposing parabolas (each of which is mapped to a circle in the middle of a square texture). The parabolic projection is not a projective transformation, so you'll need to handle the math "by hand".
In both cases, the distortion is strictly limited. Due to the hardware support, I recommend the cube map.
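For reference, the cube-map lookup itself is simple enough to do in software as well; a sketch of mapping a direction vector to a face index and (u, v) in [0, 1], following the usual major-axis convention (exact face orientations vary between APIs):

```python
def cube_map_lookup(x, y, z):
    """Map a nonzero 3D direction (not necessarily normalized) to (face, u, v),
    with u, v in [0, 1]. Faces are indexed +X, -X, +Y, -Y, +Z, -Z = 0..5."""
    ax, ay, az = abs(x), abs(y), abs(z)
    if ax >= ay and ax >= az:               # X-major
        face, sc, tc, ma = (0, -z, -y, ax) if x > 0 else (1, z, -y, ax)
    elif ay >= ax and ay >= az:             # Y-major
        face, sc, tc, ma = (2, x, z, ay) if y > 0 else (3, x, -z, ay)
    else:                                   # Z-major
        face, sc, tc, ma = (4, x, -y, az) if z > 0 else (5, -x, -y, az)
    u = 0.5 * (sc / ma + 1.0)
    v = 0.5 * (tc / ma + 1.0)
    return face, u, v
```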
There is a nice new way to do this: HEALPix.
Advantages over any other mapping:
The bitmap can be divided into equal parts (very little distortion)
Very simple, recursive geometry of the sphere with arbitrary precision.
Example image.
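If you want to experiment with it, the healpy package (assuming it is available on your platform) exposes the pixelization directly; a tiny sketch:

```python
import numpy as np
import healpy as hp

nside = 16                      # resolution parameter; npix = 12 * nside**2 equal-area pixels
npix = hp.nside2npix(nside)

# Map a direction (colatitude theta, longitude phi, in radians) to its HEALPix pixel index.
theta, phi = np.pi / 3, np.pi / 4
pix = hp.ang2pix(nside, theta, phi)
print(npix, pix)
```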
Did you take a look at Jim Blinn's articles "How to Draw a Sphere"? I do not have access to the full articles, but it sounds like what you need.
I'm a big fan of StarconII, but unfortunately I don't remember the details of what the planet drawing looked like...
The first option is triangulating the sphere and drawing it with standard 3D polygons. This has definite weaknesses as far as verisimilitude is concerned, but it uses the available hardware acceleration and can be made to look reasonably good.
If you want to roll your own, you can rasterize it yourself. Foley, van Dam et al's Computer Graphics -- Principles and Practice has a chapter on Bresenham-style algorithms; you want the section on "Scan Converting Ellipses".
For the point cloud idea I suggested in earlier comments: you could avoid runtime parameterization questions by preselecting and storing the (x,y,z) coordinates of surface points instead of a 2D map. I was thinking of partially randomizing the point locations on the sphere, so that they wouldn't cause structured aliasing when transformed (forwards, backwards, whatever 8^) onto the screen. On the downside, you'd have to deal with the "fill" factor -- summing up the colors as you draw them, and dividing by the number of points. Er, also, you'd have the problem of what to do if there are no points; e.g., if you want to zoom in with extreme magnification, you'll need to do something like look for the nearest point in that case.
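A sketch of that point-cloud idea, assuming NumPy: randomized points on the sphere are rotated, projected orthographically, splatted into an accumulation buffer, and averaged per pixel (the "fill" factor mentioned above); the texture here is faked from position purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_sphere_points(n):
    """Uniformly distributed points on the unit sphere (avoids structured aliasing)."""
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def splat_sphere(points, colors, size=128, spin=0.0):
    """Rotate, project orthographically, and average point colors per pixel."""
    c, s = np.cos(spin), np.sin(spin)
    rot = np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])   # rotation about the vertical axis
    p = points @ rot.T
    front = p[:, 2] > 0                                  # only the hemisphere facing the viewer
    xs = ((p[front, 0] + 1) * 0.5 * (size - 1)).astype(int)
    ys = ((p[front, 1] + 1) * 0.5 * (size - 1)).astype(int)
    accum = np.zeros((size, size, 3))
    count = np.zeros((size, size, 1))
    np.add.at(accum, (ys, xs), colors[front])            # sum colors per pixel
    np.add.at(count, (ys, xs), 1)
    return accum / np.maximum(count, 1)                  # pixels with no points stay black

pts = random_sphere_points(200_000)
cols = 0.5 + 0.5 * pts                                   # fake "texture": color from position
img = splat_sphere(pts, cols, spin=0.3)
```

Pixels that receive no points stay black here; in practice you would either use enough points or fall back to a nearest-point search at high magnification, as described above.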