What algorithms are out there for detecting lights and shadows and their parameters?

So I have a picture (not the best one).
I want to detect where the lights come from and what types of lights they are. What algorithm/framework can do such things with static images?
I mentioned shadows because, in general, if you can separate a shadow from a surface then you can probably determine the light type and its other parameters.
I mean general shadow detection, not only for the presented image.

With the image that you presented, there are so many sources of error that I'd be surprised if a trained human, let alone an algorithm could do better than ±20% on any calculations. Here are the problems:
There isn't a known straight line anywhere, since everything is hand hewn. The best bet would be the I-beam above the doorway, but you don't know its orientation.
There's heavy barrel distortion at the edges of the image, which is introduced by the lens and is characteristic of that lens at that zoom and focus. Without precise calibration of that (see the sketch after this answer), you can only guess at the degree of distortion.
The image is skewed with regard to the wall it is facing but none of the walls appear to be all that planar anyway.
You want to know the source of lights. Well the obvious primary light is the sun, but latitude, longitude, time and date all affect that. Then there are the diffuse reflections but unless you have the albedo of the materials you can only guess.
What are you hoping to derive from this image? Usually when doing lighting analysis, someone will put known reference targets of different, known reflectivity in the space to be analyzed. Working from a pocket snapshot camera on an unknown scene really limits what you can extrapolate.
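To give a sense of what "precise calibration" of the lens distortion would involve, here is a minimal sketch using OpenCV's checkerboard calibration. The file names and board size are hypothetical, and it assumes you can photograph a checkerboard with the same lens, zoom and focus as the original shot, which you usually can't for a found snapshot.

```python
import glob
import cv2
import numpy as np

# Hypothetical setup: several photos of a 9x6 checkerboard taken with the
# same lens, zoom, and focus as the scene you want to analyse.
BOARD = (9, 6)
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for path in glob.glob("calibration_*.jpg"):   # hypothetical file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the camera matrix and the radial/tangential distortion coefficients.
ret, K, dist, _, _ = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Undistort the scene photo so straight edges become straight again.
scene = cv2.imread("scene.jpg")               # the image from the question
undistorted = cv2.undistort(scene, K, dist)
cv2.imwrite("scene_undistorted.jpg", undistorted)
```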

Related

Is there any way to implement this beautiful image effect?

Recently I found an amazing app called Photo Lab, and I'm curious about one effect called Paper Rose. In the pictures below, one is the original picture and the other is the picture with the effect applied. My question is: what kind of algorithm can do this effect? It would be better if you could show me some code or a demo. Thanks in advance!
(Images: the original picture and the Paper Rose result.)
I am afraid that this is not just an algorithm, but a complex piece of software.
The most difficult part is to model the shape of the rose. The petals are probably a meshed surface. It is not so difficult to give them a curved shape, but the hard issue is to group them in such a way that they do not intersect.
It is not quite impossible that this could be achieved by first laying the petals out in a flat geometry where you can control intersections, then wrapping them around an axis with a kind of polar transform. But I don't really believe that; I rather think they have a collision-avoiding geometric modeller.
The next steps, which are more classical, are to texture-map the pictures onto the petals and to perform the realistic rendering of the whole scene.
But there's another option, which I'll call the "poor man's rendering".
You can start from a real picture of a paper rose, where the petals have an empty black, thick frame. Then on the picture, you detect (either in some automated way or just by hand) points that correspond to a regular grid on the flattened paper.
As the petals are not wholly visible, the hidden parts must be clipped out from the mesh, possibly by using a polygonal fence.
Now you can take any picture, fit it over the undistorted mesh, clip out the hidden areas and warp to the distorted position. Then by compositing tricks, you will give it a natural shaded appearance on the rose.
Note: the process is made easier by drawing a complete grid inside the frame. Anyway, you will need to somehow erase it before doing the compositing, in order to retrieve just the shading information.
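As a rough illustration of the warping step only, here is a minimal sketch using scikit-image's piecewise affine transform. The grid coordinates are hypothetical stand-ins for points you would detect (or mark by hand) on the photographed petal; clipping of hidden areas and the shading compositing are left out.

```python
import numpy as np
from skimage import io, transform

# Hypothetical correspondences: src are points on a regular grid over the
# flattened (undistorted) petal, dst are the matching points detected on
# the photographed, curved petal.
src = np.array([[0, 0], [200, 0], [400, 0],
                [0, 200], [200, 200], [400, 200],
                [0, 400], [200, 400], [400, 400]], dtype=float)
dst = np.array([[20, 30], [210, 10], [390, 40],
                [10, 220], [205, 215], [395, 230],
                [30, 390], [200, 420], [380, 400]], dtype=float)

# Fit a piecewise affine warp that maps the flat grid onto the curved petal.
tform = transform.PiecewiseAffineTransform()
tform.estimate(src, dst)

photo = io.imread("portrait.jpg")  # the picture to map onto the petal
# warp() expects the output-to-input mapping, hence tform.inverse.
warped = transform.warp(photo, tform.inverse, output_shape=photo.shape[:2])
io.imsave("petal_mapped.png", (warped * 255).astype(np.uint8))
```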
I would tend to believe that the second approach was used here, as I see a few mapping anomalies along some edges, which would not arise on a fully synthetic scene.
In any case, hard work.

Are cubes not a good geometry for environmental mapping?

I've been building out a scene with many cubes using this example from Three.js for reference: THREE.js Environmental Mapping
I've noticed that spheres, tori, etc. look great with this sort of mapping; however, flat surfaces like the ones on a cube look terrible. Is there a better way of doing environmental mapping for a scene with many cubes?
I think that what you are seeing is that a flat surface has sharp edges, and so the environment map comes suddenly into view and passes suddenly out of view, and the result is jarring because there is no sense of what will come next over the course of a rotation.
With a torus/sphere/anything with rounded edges, we get a distorted preview of whatever will rotate into view, so the experience is less jarring. At least that's my take on it.
Also, a cube's flat face gives a roughly 1:1 reflection of the map's resolution, whereas a sphere compresses something like π/2 : 1 of the pixel data into the same cross-section, so it makes your reflections look higher quality than they are, because it shrinks them.
I'd say that those two factors are probably what you are seeing. Try doubling the resolution of your map when using cubes as any pixelation will be more obvious.

Raytracing and light

I want to implement a physical raytracer (i.e. with actual photons with a given wavelength), restricting myself to small scenes (like two spheres and an enclosing box), to do experiments. It's not meant to be fast but I'll optimize it later.
I'm currently gathering everything I know about how photons interact with surfaces: they either reflect (get absorbed, then emitted again) or refract, with a probability based on the surface's absorption spectrum and reflectivity/refractivity indices; refraction depends on the wavelength (which naturally results in dispersion), etc.
I understand how shooting photons out of emissive materials (like "lights") and making them bounce around the scene until they happen to land in the camera produces an accurate result, but it is unacceptably slow, hence the need to do it backwards (shoot photons from the camera).
But I'm having trouble understanding how surface interactions can be modelled "backwards". For instance, if a photon coming from the camera hits the side of a red box and has a wavelength corresponding to red, it will be reflected, and all other wavelengths will be absorbed, which produces a red color. But is the intensity of that color decided by taking many samples of very close photons and checking which of them eventually hit a light and which don't? Because ultimately, either a photon hits a light or it doesn't (after a given number of bounces); there is no notion of a partial collision.
So basically my question is - is the intensity of the light received by a pixel a function of the number of photon samples for that pixel that actually make it to a light source, or is there something else involved?
It sounds like you want to do something called http://en.wikipedia.org/wiki/Path_tracing which is like raytracing, except it does not directly sample light sources when a direct ray from the camera hits a surface (causing it to be quite slow, but not as slow as shooting rays "forwards" from the light sources).
However, you seem to be confusing yourself by thinking of "reverse photons" coming from the camera that you assume already have the properties ("the photon has a wavelength corresponding to red") you are actually trying to determine in the first place. To wrap your mind around this, you might want to read up on "regular" raytracing first: think of rays from the camera that bounce through the scene up to a certain bounce depth or until they hit an object, at which point they directly sample the light sources to see whether they illuminate the object.
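For intuition, here is a minimal toy sketch of that "regular" step: one sphere, one point light, rays from the camera, and direct sampling of the light with a shadow test. The scene, colours and resolution are made up, and it uses RGB rather than the spectral/photon model you describe.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def hit_sphere(origin, direction, center, radius):
    """Return the distance to the nearest forward intersection, or None."""
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 1e-4 else None

SPHERE_C = np.array([0.0, 0.0, -3.0])
SPHERE_R = 1.0
ALBEDO = np.array([0.8, 0.2, 0.2])        # a red-ish surface (toy value)
LIGHT_POS = np.array([2.0, 2.0, 0.0])
LIGHT_COLOR = np.array([1.0, 1.0, 1.0])

def trace(origin, direction):
    t = hit_sphere(origin, direction, SPHERE_C, SPHERE_R)
    if t is None:
        return np.zeros(3)                # background: black
    point = origin + t * direction
    normal = normalize(point - SPHERE_C)
    to_light = normalize(LIGHT_POS - point)
    # Directly sample the light source: shadow test from the hit point.
    if hit_sphere(point, to_light, SPHERE_C, SPHERE_R) is not None:
        return np.zeros(3)                # the light is blocked
    lambert = max(0.0, np.dot(normal, to_light))
    return ALBEDO * LIGHT_COLOR * lambert

# Shoot one camera ray per pixel through a simple pinhole camera.
W, H = 64, 64
image = np.zeros((H, W, 3))
eye = np.array([0.0, 0.0, 0.0])
for y in range(H):
    for x in range(W):
        u = (x + 0.5) / W * 2.0 - 1.0
        v = 1.0 - (y + 0.5) / H * 2.0
        image[y, x] = trace(eye, normalize(np.array([u, v, -1.0])))
# `image` now holds the rendered framebuffer with values in [0, 1].
```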
About your final question ("Is the intensity of the light received by a pixel a function of the number of photon samples for that pixel that actually make it to a light source, or is there something else involved?"), I'll refer you to http://en.wikipedia.org/wiki/Rendering_equation, where you will find the rendering equation (the general mathematical problem that all 3D graphics algorithms like raytracing try to solve; it is reproduced after the list below) and a list of its limitations, which answers your question in the negative (i.e. besides the light sources, these effects are also involved in deciding the ultimate color and intensity of a pixel):
phosphorescence, which occurs when light is absorbed at one moment in time and emitted at a different time,
fluorescence, where the absorbed and emitted light have different wavelengths,
interference, where the wave properties of light are exhibited, and
subsurface scattering, where the spatial locations for incoming and departing light are different. Surfaces rendered without accounting for subsurface scattering may appear unnaturally opaque.
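For reference, the rendering equation itself, as given on that page (outgoing radiance is the emitted radiance plus the BRDF-weighted integral of incoming radiance over the hemisphere):

$$L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o) + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\, L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i$$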

How to blend colors

I'm currently coding a raytracer. I wonder how to blend a primitive color with a light color.
I've seen many combinations.
Some just add the two colors. This gives me very strange results.
Some multiply the components. It looks OK, but if the primitive is blue ({0, 0, 1}) and the light is red ({1, 0, 0}), the result is just black. Is that the normal behavior?
I've also seen the screen blending mode (screen(C1, C2) = C1 + C2 - C1 * C2), which makes more sense to me since, in the above case, the colors will actually blend.
Same question for the colors of reflected rays: how do I blend them with the local color?
Bonus question: should a point on a primitive that is not illuminated be black? I've seen things like "half of the color".
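For concreteness, here is a tiny sketch of the three combinations mentioned above, applied to the blue-primitive / red-light case (toy values only):

```python
import numpy as np

surface = np.array([0.0, 0.0, 1.0])   # blue primitive
light   = np.array([1.0, 0.0, 0.0])   # red light

added      = np.clip(surface + light, 0.0, 1.0)   # -> [1, 0, 1], magenta
multiplied = surface * light                      # -> [0, 0, 0], black
screened   = surface + light - surface * light    # -> [1, 0, 1], magenta
```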
It's actually way more complicated. The light and the surface material interact via a formula called a bidirectional reflectance distribution function (BRDF for short), a 4-dimensional function whose inputs are the light direction and the viewing direction (both relative to the surface normal, i.e. the unit vector perpendicular to the surface), and whose output is the ratio of the outgoing radiance in the view direction to the incoming irradiance from the light's direction.
There's not an easy way to explain it in this short space. Perhaps you could check out the Wikipedia article, as well as its links and references, or even crack open a decent computer graphics textbook?
But suffice it to say that for many BRDFs it is akin to "multiplication", in that a perfectly reflective, Lambertian, pure red surface illuminated by a perfectly blue light ought to look black.
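A minimal sketch of what that "multiplication" looks like for a Lambertian surface once you include the cosine factor, using non-pure toy colours (all values below are made up; see the advice that follows about avoiding pure primaries):

```python
import numpy as np

albedo      = np.array([0.7, 0.15, 0.1])   # red-ish Lambertian surface
light_color = np.array([0.2, 0.3, 1.0])    # blue-ish light
n_dot_l     = 0.8                          # cosine between normal and light dir

# Diffuse (Lambertian) reflection: component-wise product scaled by the cosine.
diffuse = albedo * light_color * max(n_dot_l, 0.0)
print(diffuse)   # small but non-zero in every channel -> a dim, desaturated tone
```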
There are several complicating factors: No real light, other than a laser, emits a pure primary color with the other components being 0.0 (avoid this if you want a realistic image). No real surface reflects entirely in one wavelength with the other color components being 0.0 (again, avoid this if you want a realistic image). And no material is really quite Lambertian (purely diffuse): generally there is some specular component that tends to reflect light at the surface before it reaches any underlying pigment, and that portion of the reflected light will tend to look like the color of the light, not the surface. Unless it's metallic, in which case the "color" of the metal does influence the specular. Sigh. Like I said, it's complicated, and you need to actually read up on the physics (as well as the 40 years of computer graphics research in which all these problems were solved long before you ever contemplated them).
Bonus answer ("what happens to a point that is not illuminated"): points truly not illuminated should be black. But surely you knew that, as well as the fact that in any real-world situation there isn't any point that gets no illumination at all (and can also be photographed). A more interesting formulation of the question goes like this: my renderer only considers direct light paths from sources, not all the indirect paths (object to object) that illuminate the darker corners, so how do I prevent those from going black? A cheap answer is to add an "ambient" light that just throws a constant amount of light everywhere. This is what people did for a long time, and it looked quite fake, as you would imagine. A better approach is to use various "global illumination" techniques. Look it up. But definitely don't just "illuminate it half as much", unless you're aiming for a very stylistic (as opposed to realistic) appearance.
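To show what the cheap "ambient" fix amounts to, here is a tiny standalone sketch; the 0.05 factor and the colours are arbitrary made-up values:

```python
import numpy as np

surface_albedo = np.array([0.8, 0.2, 0.2])   # toy red-ish surface
light_color    = np.array([1.0, 1.0, 1.0])   # toy white light
n_dot_l        = 0.0                         # the point faces away from the light
ambient        = 0.05 * surface_albedo       # constant fake "fill" light

diffuse = surface_albedo * light_color * max(n_dot_l, 0.0)
color = np.clip(ambient + diffuse, 0.0, 1.0)
print(color)   # not pure black, just a faint, flat version of the albedo
```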

Any ideas on real life rocks 3d Reconstruction from Single View?

So in general, when we think of single-view reconstruction, we think of working with planes, simple textures and so on; generally, simple objects from nature's point of view. But what about something like wet beach stones? I wonder if there are any algorithms that could help with reconstructing 3D from a single picture of stones?
Shape from shading would be my first angle of attack.
Smooth wet rocks, such as those in the first image, may exhibit predictable specular properties allowing one to estimate the surface normal based only on the brightness value and the relative angle between the camera and the light source (the sun).
If you are able to segment individual rocks, like those in the second photo, you could probably estimate the parameters of the ground plane by making some assumptions about all the rocks in the scene being similar in size and lying on said ground plane.
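As a very rough sketch of the Lambertian special case of shape from shading (not the specular analysis suggested above), here is a toy per-pixel normal estimate from a single grey-scale image. The constant albedo, the head-on light assumption, and the gradient heuristic are all made up for illustration, and real wet rocks would violate them badly:

```python
import numpy as np
from skimage import io, filters

# Toy Lambertian shape-from-shading: assume a constant albedo and a light
# pointing straight at the scene, so brightness I is roughly albedo * n_z.
ALBEDO = 0.9                                       # made-up constant albedo

gray = io.imread("rocks.jpg", as_gray=True)        # hypothetical input image
gray = filters.gaussian(gray, sigma=2)             # damp texture and noise

n_z = np.clip(gray / ALBEDO, 1e-3, 1.0)
# Crude heuristic for the tangential components: tilt the normal down the
# brightness gradient ("the surface turns away where the image gets darker").
gy, gx = np.gradient(gray)
n_x, n_y = -gx, -gy

# Normalise into a per-pixel unit normal map of shape (H, W, 3).
normals = np.dstack([n_x, n_y, n_z])
normals /= np.linalg.norm(normals, axis=2, keepdims=True)
```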
