Tileable 3D noise in a texture? - perlin-noise

I've looked high and low for an answer to this, but I haven't found anything yet. Here's my question:
Can you pack pre-calculated noise into a 2D texture in such a way that you can compute a reasonable facsimile of 3D noise, without the overhead of running a full-blown 3D noise algorithm?
My initial idea was to take Z slices of X-by-Y noise and pack them side by side, then, for every pixel, look up the 'low' and 'high' Z samples and do a weighted interpolation between the two slices. Needless to say, this didn't work very well.
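For reference, here is a minimal NumPy sketch of the slice-atlas lookup described above (the approach that turned out not to work well), assuming the precalculated noise is stored as a 2D array of square Z slices packed side by side; the function name and layout are illustrative.

import numpy as np

def sample_noise_atlas(atlas, slices, x, y, z):
    """Approximate 3D noise from a 2D atlas of Z slices laid out side by side.
    atlas   : 2D array of shape (H, H * slices); slice k occupies columns [k*H, (k+1)*H)
    x, y, z : coordinates in [0, 1]"""
    h = atlas.shape[0]
    zf = z * (slices - 1)            # fractional slice index along Z
    z0 = int(zf)
    z1 = min(z0 + 1, slices - 1)
    t = zf - z0                      # blend weight between the two slices

    px = int(x * (h - 1))
    py = int(y * (h - 1))

    lo = atlas[py, z0 * h + px]      # 'low' Z sample
    hi = atlas[py, z1 * h + px]      # 'high' Z sample
    return (1.0 - t) * lo + t * hi   # weighted interpolation along Z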
I know about the various shaders that generate noise, but by and large they have problems on mobile platforms due to low-spec hardware and the various optimizations that manufacturers have put in place, so it's not viable to calculate the noise on the fly.

Related

Fast approximate algorithm for RGB/LAB conversion?

I am working on a data visualization tool using OpenGL, and the LAB color space is the most comprehensible color space for visualization of the data I'm dealing with (3 axes of data are mapped to the 3 axes of the color space). Is there a fast (e.g. no non-integer exponentiation, suitable for execution in a shader) algorithm for approximate conversion of LAB values to and from RGB values?
If doing the actual conversion calculation in a shader is too complex/expensive, you can always use a lookup table. Since both color spaces have 3 components, you can use a 3D RGB texture to represent the lookup table.
Using a 3D texture might sound like a lot of overhead. Since 8 bits/component is often used to represent colors in OpenGL, you would need a 256x256x256 3D texture. At 4 bytes/texel, that's a 64 MByte texture, which is not outrageous, but very substantial.
However, depending on how smooth the values in the translation table are, you might be able to get away with a lower resolution. Keep in mind that texture sampling uses linear interpolation. If piecewise linear interpolation is good enough with a certain base-resolution of the lookup table, you can greatly reduce the size.
If you go in this direction and can't afford to use 64 MBytes for the LUT, you'll have to experiment with the size of the LUT and accept a size/performance vs. quality tradeoff.
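As a rough illustration of the LUT idea, here is a CPU-side NumPy sketch that fills a low-resolution 3D table using scikit-image's lab2rgb and samples it with trilinear interpolation (the same thing GL_LINEAR filtering of a 3D texture would do); the resolution and the LAB ranges are illustrative assumptions.

import numpy as np
from scipy.ndimage import map_coordinates
from skimage.color import lab2rgb        # reference conversion used to fill the table

N = 33                                   # LUT resolution per axis; raise it if the piecewise-linear error is visible

# Regular grid over the LAB ranges of interest
L = np.linspace(0, 100, N)
a = np.linspace(-128, 127, N)
b = np.linspace(-128, 127, N)
grid = np.stack(np.meshgrid(L, a, b, indexing='ij'), axis=-1)      # (N, N, N, 3)
lut = lab2rgb(grid.reshape(-1, 1, 3)).reshape(N, N, N, 3)          # RGB values in [0, 1]

def lab_to_rgb_via_lut(lab):
    """Trilinear lookup, mimicking GL_LINEAR sampling of a 3D texture."""
    lab = np.atleast_2d(np.asarray(lab, dtype=float))
    idx = np.empty_like(lab)
    idx[:, 0] = lab[:, 0] / 100.0 * (N - 1)              # L  in [0, 100]
    idx[:, 1] = (lab[:, 1] + 128.0) / 255.0 * (N - 1)    # a* in [-128, 127]
    idx[:, 2] = (lab[:, 2] + 128.0) / 255.0 * (N - 1)    # b* in [-128, 127]
    return np.stack([map_coordinates(lut[..., c], idx.T, order=1, mode='nearest')
                     for c in range(3)], axis=-1)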

Restoring Image corrupted by Gaussian and Motion Blur

An image is given to us that has been corrupted by:
Gaussian blur
Gaussian noise
Motion blur
in that order. The parameters of all the above (filter size, variance, SNR, etc) are known to us.
How can we restore the image?
I have tried to compute the aggregate degradation function by convolving the above and then used the Wiener filter to restore, but the attempts have failed so far, since the blur still remains.
Could anyone please shed some light?
For Gaussian and motion blur, it is a matter of deducing the convolution kernel. Once it is known, deconvolution can be done in Fourier space. The Fourier transform of the image, divided by the Fourier transform of the kernel, gives the Fourier transform of a (hopefully) improved image.
Gaussians transform into other Gaussians, so there is no problem with division by zero. But Gaussians do fall off rather fast, as exp(-x^2), so you'd be dividing by small numbers and getting large, wacky high-frequency amplitudes. So some sort of constant bias, or another way of keeping the FT of the kernel from getting too small, must be applied. That's where the Wiener filter comes in. The bias is usually chosen in relation to random noise levels, or to quantization.
For motion blur, a typical case is when the clean image is convolved with a short line segment. Unfortunately, the transform of a sharply cut-off line segment has plenty of zeros. Again, the Wiener filter comes to the rescue.
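A minimal NumPy sketch of the Fourier-space division with a Wiener-style bias, assuming the blur kernel is known; the regularisation constant k stands in for the noise-dependent bias mentioned above and is purely illustrative.

import numpy as np

def wiener_deconvolve(blurred, kernel, k=1e-2):
    """Deconvolve `blurred` by `kernel` in Fourier space with a Wiener-style bias.
    k (roughly noise power / signal power) keeps the division stable where the
    kernel's transfer function is close to zero. The kernel's origin is assumed
    to be at index (0, 0); otherwise the result is circularly shifted."""
    H = np.fft.fft2(kernel, s=blurred.shape)          # transfer function of the blur
    G = np.fft.fft2(blurred)
    F_hat = G * np.conj(H) / (np.abs(H) ** 2 + k)     # Wiener filter, not a bare 1/H
    return np.real(np.fft.ifft2(F_hat))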
Additive Gaussian noise cannot be removed, but it can be averaged out. The simplest, quickest way is to blur the image with a Gaussian, box, or other filter. The biggest problem with that: you end up with a blurred image! Median filters are somewhat better at preserving edges and details, as long as the details are not too small. There are many noise-reduction techniques out there.
Sometimes noise reduction is easy for certain types of images. For Cassini imaging work, most image features were either high-contrast hard edges (planet edges, craters) or softly varying (cloud details in atmospheres), so I used an edge detector, fattened (dilated) its output, blurred it, and used that as a mask to protect parts of the image from a small-radius blur filter, in effect applying different filters to different regions.
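A small SciPy sketch of that masked approach, with illustrative thresholds; the Sobel-based edge detector and Gaussian smoothing are stand-ins for whatever detector and filter suit the images.

import numpy as np
from scipy import ndimage

def masked_denoise(img, blur_sigma=1.0, edge_thresh=20.0, fatten=3):
    """Blur only the smooth regions, protecting edges with a dilated, softened edge mask."""
    img = img.astype(float)
    gx = ndimage.sobel(img, axis=1)                   # edge detector: gradient magnitude
    gy = ndimage.sobel(img, axis=0)
    edges = np.hypot(gx, gy) > edge_thresh

    mask = ndimage.binary_dilation(edges, iterations=fatten).astype(float)   # fatten the edges
    mask = ndimage.gaussian_filter(mask, sigma=2.0)                          # soften the mask

    smoothed = ndimage.gaussian_filter(img, sigma=blur_sigma)
    return mask * img + (1.0 - mask) * smoothed       # keep original pixels near edges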
There's a Signal Processing Stack Exchange site (in beta for now) that may have questions and answers about restoring corrupted images: https://dsp.stackexchange.com/questions

Shading mask algorithm for radiation calculations

I am working on software (Ruby - SketchUp) to calculate the radiation (sun, sky and surrounding buildings) within an urban development at pedestrian level. The final goal is to be able to create a contour map that shows the level of total radiation. By total radiation I mean shortwave (light) and longwave (heat). (To give you an idea: http://www.iaacblog.com/maa2011-2012-digitaltools/files/2012/01/Insolation-Analysis-All-Year.jpg)
I know there are several existing software packages that do this, but I need to write my own as this calculation is only part of a more complex workflow.
The (obvious) pseudo code is the following:
Select and mesh the surface for analysis
For each point of the mesh
    Cast the n (see below) precalculated rays into the upper hemisphere
    For each ray, check whether it is in shade
        If in shade => extract properties from the intersected surface
        If not in shade => flag it
    loop
loop
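A minimal Python sketch of this brute-force loop, with the ray/scene intersection left as a hypothetical intersect_scene callback, since its implementation depends on the scene representation; the hemisphere subdivision by constant azimuth/tilt steps is as described in the question.

import math

def shadow_mask(mesh_points, d_azimuth, d_tilt, intersect_scene):
    """Brute-force shading mask: for each mesh point (a tuple), cast one ray per
    sky patch and record what it hits. intersect_scene(origin, direction) is
    assumed to return the hit surface, or None if the ray reaches the sky."""
    # Precalculate the hemisphere directions (constant azimuth/tilt steps)
    directions = []
    az = 0.0
    while az < 2.0 * math.pi:
        tilt = d_tilt / 2.0
        while tilt < math.pi / 2.0:
            directions.append((math.cos(az) * math.cos(tilt),
                               math.sin(az) * math.cos(tilt),
                               math.sin(tilt)))
            tilt += d_tilt
        az += d_azimuth

    mask = {}
    for p in mesh_points:
        # A surface entry means the ray is in shade; None flags open sky
        mask[p] = [intersect_scene(p, d) for d in directions]
    return mask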
The approach above is brute force, but it is the only one I can think of. The calculation time increases with the fourth power of the accuracy (Dx, Dy, Dazimuth, Dtilt). I know that software like Radiance uses a Monte Carlo approach to reduce the number of rays.
As you can imagine, the accuracy of the calculation for a specific point of the mesh is strongly dependent on the accuracy of the skydome subdivision. Similarly, the accuracy across the surface depends on the coarseness of the mesh.
I was thinking of a different approach using adaptive refinement based on the results of the calculations. The refinement could work both for the analysed surface and for the skydome. If the results at two adjacent points differ by more than a threshold value, then a refinement is performed. This is usually done in fluid simulation, but I could not find anything about light simulation.
I also wonder whether there are algorithms, from computer graphics for example, that would allow me to minimize the number of calculations. For example: check the maximum height of the surroundings so as to exclude certain parts of the skydome for certain points.
I don't need extreme accuracy as I am not doing rendering. My priority is speed at this moment.
Any suggestion on the approach?
Thanks
n rays
At the moment I subdivide the sky by constant azimuth and tilt steps; this causes irregular solid angles. There are other subdivisions (e.g. Tregenza) that maintain a constant solid angle.
EDIT: Response to the great questions from Spektre
Time frame. I run one simulation for each hour of the year. The weather data is extracted from an EPW weather file. It contains, for each hour, solar altitude and azimuth, direct radiation, diffuse radiation, and cloudiness (for atmospheric longwave diffuse). My algorithm calculates the shadow mask separately, then uses this shadow mask to calculate the radiation on the surface (and on a typical pedestrian) for each hour of the year. It is in this second step that I add the actual radiation. In the first step I just gather information on the geometry and properties of the various surfaces.
Sun paths. No, I don't. See point 1.
Include reflection from buildings? Not at the moment, but I plan to include it as an overall diffuse shortwave reflection based on sky view factor. I consider only shortwave reflection from the ground now.
Include heat dissipation from buildings? Absolutely yes. That is the reason why I wrote this code myself. Here in Dubai this is key, as building surfaces get very, very hot.
Surface albedo? Yes, I do. In SketchUp I have associated a dictionary with every surface, and in this dictionary I include all the surface properties: temperature, emissivity, etc. At the moment the temperatures are fixed (ambient temperature if not assigned), but I plan, in the future, to combine this with the results from a dynamic building thermal simulation that already calculates all the surface temperatures.
Map resolution. The resolution is chosen by the user and the mesh is generated by the algorithm. In terms of scale, I use this for masterplans; the scale goes from 100m x 100m up to 2000m x 2000m. I usually tend to use a minimum resolution of 2m. The limit is memory and simulation time. I also have the option to refine specific areas with a much finer mesh: for example, areas where there are restaurants or other amenities.
Framerate. I do not need to make an animation. Results are exported in a VTK file and visualized in Paraview and animated there just to show off during presentations :-)
Heat and light. Yes. Shortwave and longwave are handled separately. See point 4. The geolocalization is only used to select the correct weather file. I do not calculate all the radiation components. The weather files I need have measured data. They are not great, but good enough for now.
https://www.lucidchart.com/documents/view/5ca88b92-9a21-40a8-aa3a-0ff7a5968142/0
visible light
For a relatively flat, global ground light map I would use projected shadow-texture techniques instead of ray-traced angular integration. It is way faster, with almost the same result. This will not work on non-flat ground (many bigger bumps which cast bigger shadows and also make the active light-absorption area anisotropic). Urban areas are usually flat enough (inclination does not matter), so the technique is as follows:
camera and viewport
The ground map is the render target, so set the viewpoint underground, looking up towards the Sun direction. The resolution is at least your map resolution, and there is no perspective projection (use an orthographic projection).
rendering light map 1st pass
First clear the map with the full radiation (direct + diffuse; light blue in the diagram), then render the buildings/objects with diffuse radiation only (shadow). This produces the base map, without reflections or soft shadows, in the magenta render target.
rendering light map 2nd pass
Now you need to add reflections from the building faces (walls). For that I would take every outdoor face of a building that is facing the Sun, or is heated enough, compute its reflection footprint on the light map, and render the reflection directly into the map.
In this part you can add ray tracing for the vertices only, to make it more precise and to include multiple reflections (but in that case do not forget to add scattering).
project target screen to destination radiation map
Just project the magenta render-target image onto the ground plane (green in the diagram). It is only a simple linear affine transform.
post processing
You can add soft shadows by blurring/smoothing the light map. To make it more precise, you can tag each pixel with whether it is shadow or wall. Actual walls are just pixels that are at 0 m height above ground, so you can use the Z-buffer values directly for this. The level of blurring depends on the scattering properties of the air, and of course pixels at 0 m ground height are not blurred at all.
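As a rough CPU analogue of the shadow-projection idea (not the GPU passes described above), here is a sketch that computes a ground shadow mask from a heightmap and a sun direction by marching each cell towards the sun; the grid layout, units and step size are illustrative assumptions.

import numpy as np

def ground_shadow_mask(height, sun_dir, cell_size=1.0):
    """Mark ground cells shadowed by the heightmap `height` (same units as cell_size)
    for a unit sun direction (dx, dy, dz), z up. A cell is in shadow if marching
    from it towards the sun passes below the terrain/building height."""
    h, w = height.shape
    shadow = np.zeros((h, w), dtype=bool)
    horiz = np.linalg.norm(sun_dir[:2]) + 1e-12
    step = np.asarray(sun_dir[:2], dtype=float) / horiz      # one grid cell per step
    dz_per_step = sun_dir[2] / horiz * cell_size             # rise of the ray per step

    for y in range(h):
        for x in range(w):
            px, py, z = float(x), float(y), height[y, x]
            while 0.0 <= px < w and 0.0 <= py < h:
                px += step[0]
                py += step[1]
                z += dz_per_step
                if 0 <= int(py) < h and 0 <= int(px) < w and height[int(py), int(px)] > z:
                    shadow[y, x] = True                       # something blocks the sun
                    break
    return shadow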
IR
This can be done in a similar way, but temperature behaves a bit differently, so I would make several layers of the scene at a few altitudes above ground, forming a volume render, and then post-process the energy transfers between pixels and layers. Also do not forget to add the cooling effect of green plants and water evaporation.
I do not have enough experience in this field to make more suggestions; I am more used to temperature maps for very high temperature variations in specific conditions and materials, not outdoor conditions.
PS: I forgot to mention that albedo for IR and visible light is very different for many materials, especially aluminium and some wall paints.

Which way is my yarn oriented?

I have an image processing problem. I have pictures of yarn:
The individual strands are partly (but not completely) aligned. I would like to find the predominant direction in which they are aligned. In the center of the example image, this direction is around 30-34 degrees from horizontal. The result could be the average/median direction for the whole image, or just the average in each local neighborhood (producing a vector map of local directions).
What I've tried: I rotated the image in small steps (1 degree) and calculated statistics in the vertical vs. horizontal direction of the rotated image (for example: standard deviation of summed rows or summed columns). I reasoned that when the strands are oriented exactly vertically or exactly horizontally, the difference in statistics would be greatest, and so that angle of rotation would be the correct direction in the original image. However, for the several kinds of statistical properties I tried, this did not work.
I further thought that perhaps this wasn't working because there were too many different directions at the same time in the whole image, so I tried it in a small neighborhood. In this case, there is always a very clear preferred direction (different for each neighborhood), but it is not the direction that the fibers really go... I can post my sample code but it is basically useless.
I keep thinking there has to be some kind of simple linear algebra/statistical property of the whole image, or some value derived from the 2D FFT that would give the correct direction in one step... but how?
What probably won't work: detecting individual fibers. They are not necessarily the same color, and the image can shade from light to dark, so edge detectors don't work well, and the image may not even be in focus sometimes. Because of that, it is not always possible even for a human to see individual fibers (see top-right in the example); they kinda have to be detected as a preferred direction in a statistical sense.
You might try doing this in the frequency domain. The output of a Fourier Transform is orientation dependent so, if you have some kind of oriented pattern, you can apply a 2D FFT and you will see a clustering around a specific orientation.
For example, making a greyscale out of your image and performing FFT (with ImageJ) gives this:
You can see a distinct cluster that is oriented orthogonally with respect to the orientation of your yarn. With some pre-processing on your source image, to remove noise and maybe enhance the oriented features, you can probably achieve a much stronger signal in the FFT. Once you have a cluster, you can use something like PCA to determine the vector for the major axis.
For info, this is a technique that is often used to enhance oriented features, such as fingerprints, by applying a selective filter in the FFT and then taking the inverse to obtain a clearer image.
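A small NumPy sketch of the FFT-plus-PCA idea, with an illustrative magnitude threshold and file name:

import numpy as np
from skimage import io, color

img = color.rgb2gray(io.imread('yarn.png'))            # file name is illustrative

# Centred log-magnitude spectrum
F = np.fft.fftshift(np.fft.fft2(img - img.mean()))
mag = np.log1p(np.abs(F))

# Keep only the strongest frequencies (the oriented cluster)
ys, xs = np.nonzero(mag > np.percentile(mag, 99.5))
cy, cx = np.array(mag.shape) / 2.0
pts = np.column_stack([xs - cx, ys - cy])

# PCA: principal axis of the cluster
eigvals, eigvecs = np.linalg.eigh(np.cov(pts.T))
major = eigvecs[:, np.argmax(eigvals)]
spectrum_angle = np.degrees(np.arctan2(major[1], major[0]))

# The spectral cluster is orthogonal to the yarn direction
print('estimated yarn orientation:', spectrum_angle + 90.0, 'degrees')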
An alternative approach is to try a series of Gabor filters, pre-built with a selection of orientations and frequencies, and use the resulting features as a metric for identifying the most likely orientation. There is a scikit-image article that gives some examples.
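And a short scikit-image sketch of the Gabor-bank approach; the frequency and the orientation sweep are illustrative assumptions, and note that skimage's theta is the direction of the harmonic, so stripes roughly perpendicular to theta respond most strongly.

import numpy as np
from skimage import io, color
from skimage.filters import gabor

img = color.rgb2gray(io.imread('yarn.png'))            # file name is illustrative

angles = np.deg2rad(np.arange(0, 180, 10))             # candidate filter orientations
responses = []
for theta in angles:
    real, imag = gabor(img, frequency=0.15, theta=theta)
    responses.append(np.mean(real ** 2 + imag ** 2))   # energy of the response

best_theta = np.degrees(angles[int(np.argmax(responses))])
# The dominant stripe direction is roughly perpendicular to the best-responding filter
print('strongest response at theta =', best_theta, 'degrees')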
UPDATE
Just playing with ImageJ to give an idea of some possible approaches to this: I started with the FFT shown above, then, in the following image, performed these operations (clockwise from top left): Threshold => Close => Holefill => Erode x 3:
Finally, rather than using PCA, I calculated the spatial moments of the lower-left blob using this ImageJ plugin, which handily calculates the orientation of the longest axis based on the 2nd-order moment. The result gives an orientation of approximately -38 degrees (with respect to the X axis):
Depending on your frame of reference you can calculate the approximate average orientation of your yarn from this rather than from PCA.
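For completeness, the moment-based orientation estimate is compact enough to write directly; this sketch assumes a binary mask of the blob and uses the standard second-order central-moment formula (remember that image rows increase downwards, which flips the sign relative to the usual mathematical convention).

import numpy as np

def blob_orientation(mask):
    """Orientation (degrees from the X axis) of a binary blob from its
    second-order central moments."""
    ys, xs = np.nonzero(mask)
    x = xs - xs.mean()
    y = ys - ys.mean()
    mu20 = np.sum(x * x)
    mu02 = np.sum(y * y)
    mu11 = np.sum(x * y)
    return np.degrees(0.5 * np.arctan2(2.0 * mu11, mu20 - mu02))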
I tried to use Gabor filters to enhance the orientations of your yarns. The parameters I used are:
phi = x*pi/16; % x = 1, 3, 5, 7
theta = 3;
sigma = 0.65*theta;
filterSize = 3;
The imaginary parts of the convolved images are shown below:
As you mentioned, most orientations lie between 30-34 degrees; thus the filter with phi = 5*pi/16, at the bottom left, yields the best contrast among the four.
I would consider using a Hough transform for this type of problem; there is a nice write-up here.

How can you distribute the color intensity of two images using its gradients?

I am working on an automatic image stitching algorithm using MATLAB. So far, I have downloaded source code much like what I had in mind, and I'm currently studying how the code works.
The problem is that, when stitching two or more images together, their color intensities will most probably differ from each other, so the stitched seams will be visible to the eye... So, right now, I'm trying to find out how to redistribute their color intensity using the images' gradients so that the whole stitched image has a consistent color intensity.
I hope someone can help me out there and if so, thank you very much...
If the images overlap by a significant amount, and the stitching algorithm does a very good job of registering the overlap region, a very simple solution would be to blend the pixel values from the two images together in the overlap region, using a weighted average with weights going from 0-1 depending on the distance from the edge of the overlap region.
blendedPixel = (imageApixel * weightA) + (imageBpixel * weightB)
where weightA approaches 1 as we get closer to the imageA side of the overlap region, weightB approaches 1 as we get closer to the imageB side, and the sum of weightA and weightB is always 1.
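A small NumPy/SciPy sketch of this distance-weighted blend, assuming the two images are already registered onto a common canvas and that boolean masks mark where each has valid pixels; the names are illustrative.

import numpy as np
from scipy.ndimage import distance_transform_edt

def feather_blend(imgA, imgB, maskA, maskB):
    """Blend two registered images with weights that ramp from 0 to 1 across the
    overlap, based on each pixel's distance to the edge of the image footprint."""
    dA = distance_transform_edt(maskA)       # larger = deeper inside image A
    dB = distance_transform_edt(maskB)
    wA = dA / (dA + dB + 1e-12)              # weightA -> 1 towards A's side
    wB = 1.0 - wA                            # the weights always sum to 1
    if imgA.ndim == 3:                       # broadcast over colour channels
        wA = wA[..., None]
        wB = wB[..., None]
    return wA * imgA + wB * imgB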
The above solution is not particularly principled, and does depend on the stitching algorithm doing a very good job of image registration in the overlap region.
Another, more principled solution to the problem would be to remove the source of the intensity difference, attempting to homogenize the response of the pixels across the image plane.
The form of this solution will depend on the source of the intensity difference, which will depend on the optics and the scene lighting conditions.
For example when dealing with photographs of outdoor scenes, taken at the same time from the same location, then the dominant effect will likely be "vignetting" effects, which can be due to a variety of different causes, including differences between the various paths taken by the light through camera optics.
As another example, when dealing with photographs taken through a microscope of a sample illuminated at an oblique angle, the dominant effect will likely be due to the difference in illumination between those parts of the image closest to the light and those far away.
Vignetting generally manifests itself as a radially symmetric function centred around the projection of the optical axis of the lens onto the image plane. To correct for vignetting, you should try to fit a suitable radially symmetric function.
Lighting changes can take different functional forms, but fitting a straightforward linear approximation is sufficient in many cases.
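As one possible (and fairly naive) sketch of the radially symmetric case: assuming a flat-field calibration image, or an average of many frames rather than a single highly structured scene, a low-order polynomial in r^2 can be fitted to the intensities and divided out; the polynomial order is an illustrative choice.

import numpy as np

def flatten_vignetting(img, order=2):
    """Fit a radially symmetric polynomial gain (in r^2, centred on the image)
    and divide it out."""
    img = img.astype(float)
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = (xx - w / 2.0) ** 2 + (yy - h / 2.0) ** 2
    r2 = r2 / r2.max()                                    # normalise radius^2 to [0, 1]

    coeffs = np.polyfit(r2.ravel(), img.ravel(), order)   # least-squares fit
    gain = np.polyval(coeffs, r2)
    gain = np.clip(gain / gain.max(), 1e-3, None)         # avoid division by zero
    return img / gain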
Depending upon the scene, and the number and variability of the images that you have available, you may need to take calibration images to fit these functions properly.
The above approaches make assumptions about the functional forms of the sources of the intensity differences, but not about the scene or its statistics.
Yet another approach might be to make some assumptions about the scene, for example, that all significant information is represented by spatial frequencies above some threshold. You can then remove all low-spatial-frequency components of the image intensity. This will "flatten" the image, removing much of the low-frequency vignetting and lighting issues.
This approach might be applicable to microscopy images, satellite images, or images of other scenes within which most of the interest lies in the detail, rather than in the drama of the composition.
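A minimal sketch of that "flattening" step, implemented here as a simple Gaussian high-pass; the cut-off scale sigma is an illustrative assumption.

import numpy as np
from scipy.ndimage import gaussian_filter

def flatten_low_frequencies(img, sigma=50.0):
    """Remove low spatial frequencies by subtracting a heavily blurred copy
    (a simple Gaussian high-pass). sigma sets the cut-off scale in pixels."""
    img = img.astype(float)
    low = gaussian_filter(img, sigma=sigma)    # the slowly varying component
    return img - low + low.mean()              # keep the overall brightness level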
There are a number of papers that tackle this problem, many at a level of technical sophistication rather beyond the above discussion. For example, see D. Goldman, "Vignette and Exposure Calibration and Compensation", IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 32, no. 12, pp. 2276-2288.
