Why are the basis blocks corresponding to reflected waves given seemingly random priorities in the standard JPEG quantisation matrices? Also, why aren't the priorities monotonic with respect to frequency?
I haven't been able to find any explanation and all I can come up with is possible tiling patterns occurring with symmetric quantisation matrices or an adaptation to the arrangement of photoreceptors in the eye.
The quantization tables are a set of fudge factors that attempt to model human perception.
The specific quantization table values are more art than science, because human perception is quirky and complex, and ideal coefficients depend on specific viewing conditions that can only be roughly guessed in advance.
The tables are not always monotonic with respect to frequency, because basis functions of certain frequencies form patterns that are more useful than others, e.g. for reproducing straight horizontal and vertical lines.
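To make the table's role concrete, here is a minimal sketch (in Python with NumPy/SciPy, not part of the original question) of how a quantisation table is applied to one 8x8 block. The table is the standard luminance table from Annex K of the JPEG specification; note, for example, that its last row (72 92 95 98 112 100 103 99) is not monotonic.

import numpy as np
from scipy.fft import dctn, idctn

# Standard JPEG luminance quantisation table (Annex K of the JPEG spec).
Q = np.array([
    [16, 11, 10, 16,  24,  40,  51,  61],
    [12, 12, 14, 19,  26,  58,  60,  55],
    [14, 13, 16, 24,  40,  57,  69,  56],
    [14, 17, 22, 29,  51,  87,  80,  62],
    [18, 22, 37, 56,  68, 109, 103,  77],
    [24, 35, 55, 64,  81, 104, 113,  92],
    [49, 64, 78, 87, 103, 121, 120, 101],
    [72, 92, 95, 98, 112, 100, 103,  99],
])

def quantise_block(block):
    # Level-shift, take the 2-D DCT-II, then divide by Q and round.
    # A larger Q entry means coarser quantisation, i.e. lower "priority" for that basis block.
    coeffs = dctn(block.astype(float) - 128.0, norm='ortho')
    return np.round(coeffs / Q).astype(int)

def dequantise_block(qcoeffs):
    # Approximate reconstruction: multiply back by Q and take the inverse DCT.
    return idctn(qcoeffs * Q, norm='ortho') + 128.0

Dividing by a larger entry throws away more precision in that basis block, so the table encodes which spatial-frequency patterns the encoder considers perceptually expendable.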
I'm looking for algorithms that can combine images based on a quality factor. For example, you have 50-100 photographs of the same scene, but some areas have bad quality in some images because of artefacts or whatever.
Right now, for each pixel I select the best one using a quality factor based on darkness, but of course there are a lot of possible combinations and a lot of possible quality measures (pixel-, patch- or image-based).
I'm trying to research this topic but I haven't found how to describe it properly. Do you know of some algorithms, or at least the name of this "problem"?
Update: Note that some desired pixels or pixel areas only appear in a few cases, e.g. in 10 of 100 images. Because of this, we can't use simple averaging or similar methods.
One possible solution is averaging the images.
If you have a quality factor for each sample, then you can do weighted averaging.
You can use the following algorithm to improve over plain averaging (a sketch follows below):
Divide the image into blocks of 4x4 or 8x8.
Calculate the autocorrelation within each such block.
Higher autocorrelation means less noise, so give a high quality factor to blocks with high autocorrelation and a low one otherwise.
Do a weighted average of the blocks using the quality factors defined above.
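A minimal sketch of that idea (in Python/NumPy; the lag-1 autocorrelation measure, the 8x8 block size and the function names are just illustrative choices, and the images are assumed to be aligned grayscale arrays of equal size):

import numpy as np

def block_quality(block):
    # Quality factor from lag-1 autocorrelation: smoother (less noisy) blocks score higher.
    b = block - block.mean()
    denom = (b * b).sum()
    if denom == 0:
        return 0.0
    ac_h = (b[:, :-1] * b[:, 1:]).sum() / denom   # horizontal lag-1 autocorrelation
    ac_v = (b[:-1, :] * b[1:, :]).sum() / denom   # vertical lag-1 autocorrelation
    return max(0.0, (ac_h + ac_v) / 2.0)

def fuse(images, bs=8):
    # Block-wise weighted average of the input images.
    images = [np.asarray(img, dtype=float) for img in images]
    h, w = images[0].shape
    out = np.zeros((h, w))
    for y in range(0, h, bs):
        for x in range(0, w, bs):
            blocks = [img[y:y+bs, x:x+bs] for img in images]
            weights = np.array([block_quality(b) for b in blocks])
            if weights.sum() == 0:
                weights = np.ones(len(blocks))    # fall back to a plain average
            out[y:y+bs, x:x+bs] = np.average(blocks, axis=0, weights=weights)
    return out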
I am new to image processing and I don't know the use of basic terms. I know the basic definition of sparsity, but can anyone please elaborate on the definition in terms of image processing?
Well Sajid, I actually was doing image processing a few months ago, and I had found a website that gave me what I thought was the best definition of sparsity.
Sparsity and density are terms used to describe the percentage of
cells in a database table that are not populated and populated,
respectively. The sum of the sparsity and density should equal 100%.
A table that is 10% dense has 10% of its cells populated with non-zero
values. It is therefore 90% sparse – meaning that 90% of its cells are
either not filled with data or are zeros.
I took this in the context of on/off for black and white image processing. If many pixels were off, then the pixels were sparse.
As The Obscure Question said, sparsity is when a vector or matrix is mostly zeros. To see a real world example of this, just look at the wavelet transform, which is known to be sparse for any real-world image.
(all the black values are 0)
Sparsity has powerful impacts. It can transform the multiplication of two NxN matrices, normally an O(N^3) operation, into roughly an O(k) operation (where k is the number of non-zero elements). Why? Because it's a well-known fact that for all x, x * 0 = 0, so the zero entries never have to be touched.
What does sparsity mean? In the problems I've been exposed to, it means similarity in some domain. For example, natural images are largely the same color in areas (the sky is blue, the grass is green, etc). If you take the wavelet transform of that natural image, the output is sparse through the recursive nature of the wavelet (well, at least recursive in the Haar wavelet).
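A tiny sketch of that Haar idea (one level of the transform applied to a piecewise-constant 1-D signal; the function and the example signal are just illustrative):

import numpy as np

def haar_step(x):
    # One level of the Haar wavelet transform on a 1-D signal of even length.
    x = np.asarray(x, dtype=float)
    avg = (x[0::2] + x[1::2]) / np.sqrt(2)   # low-pass: local averages
    diff = (x[0::2] - x[1::2]) / np.sqrt(2)  # high-pass: local differences
    return avg, diff

signal = np.array([4, 4, 4, 4, 9, 9, 9, 9], dtype=float)  # e.g. a flat "sky" region
avg, diff = haar_step(signal)
print(diff)  # [0. 0. 0. 0.] -- the detail coefficients are all zero, i.e. sparse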
What's the randomness quality of the Perlin Noise algorithm and Simplex Noise algorithm?
Which algorithm of the two has better randomness?
Compared with standard pseudo-random generators, does it make sense to use Perlin/Simplex as a random number generator?
Update:
I know what Perlin/Simplex noise is used for. I'm only curious about its randomness properties.
Perlin noise and simplex noise are meant to generate useful noise, not to be completely random. These algorithms are generally used to create procedurally generated landscapes and the like. For example, they can generate terrain such as this (image from here):
In this image, the noise generates a 2D heightmap such as this (image from here):
Each pixel's color represents a height. After producing a heightmap, a renderer is used to create terrain matching the "heights" (colors) of the image.
Therefore, the results of the algorithm are not actually "random"; there are lots of easily discernible patterns, as you can see.
Simplex noise supposedly looks a bit "nicer", which would imply less randomness, but its main selling point is that it produces similar noise while scaling better to higher dimensions. That is, if one were to produce 3D, 4D or 5D noise, simplex noise would outperform Perlin noise and produce similar results.
If you want a general pseudo-random number generator, look at the Mersenne Twister or other PRNGs. Be warned: with respect to cryptography, PRNGs can be full of caveats.
Update:
(response to the OP's updated question)
As for the random properties of these noise functions: I know Perlin noise uses a (very) poor man's PRNG as input, and does some smoothing/interpolation between neighboring "random" pixels. The input randomness is really just pseudorandom indexing into a precomputed random vector.
The index is computed using some simple integer operations, nothing too fancy. For example, the noise++ project uses precomputed "randomVectors" (see here) to obtain its source noise, and interpolates between different values from this vector. It generates a "random" index into this vector with some simple integer operations, adding a small amount of pseudorandomness. Here is a snippet:
int vIndex = (NOISE_X_FACTOR * ix + NOISE_Y_FACTOR * iy + NOISE_Z_FACTOR * iz + NOISE_SEED_FACTOR * seed) & 0xffffffff;
vIndex ^= (vIndex >> NOISE_SHIFT);
vIndex &= 0xff;
const Real xGradient = randomVectors3D[(vIndex<<2)];
...
The somewhat random noise is then smoothed over and in effect blended with neighboring pixels, producing the patterns.
After producing the initial noise, Perlin/simplex noise uses the concept of octaves of noise; that is, re-blending the noise into itself at different scales. This produces yet more patterns. So the initial quality of the noise is probably only as good as the precomputed random arrays, plus the effect of the pseudorandom indexing. But after everything Perlin noise does to it, the apparent randomness decreases significantly (it actually spreads over a wider area, I think).
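As a rough illustration of the octave idea (this is not Ken Perlin's actual gradient noise; it uses a simple hash-based 1-D value noise as the base layer, and all names and constants are made up for the example):

import math

def hash01(ix, seed=0):
    # Cheap integer hash mapped to [0, 1); stands in for the precomputed random vector.
    h = (ix * 374761393 + seed * 668265263) & 0xffffffff
    h = ((h ^ (h >> 13)) * 1274126177) & 0xffffffff
    return (h ^ (h >> 16)) / 2**32

def value_noise(x, seed=0):
    # Smoothly interpolate between "random" values anchored at integer points.
    ix = math.floor(x)
    t = x - ix
    t = t * t * (3.0 - 2.0 * t)              # smoothstep fade between lattice points
    a, b = hash01(ix, seed), hash01(ix + 1, seed)
    return a + t * (b - a)

def octaved_noise(x, octaves=4, lacunarity=2.0, gain=0.5):
    # Sum the base noise at increasing frequencies and decreasing amplitudes.
    total, amp, freq = 0.0, 1.0, 1.0
    for o in range(octaves):
        total += amp * value_noise(x * freq, seed=o)
        amp *= gain
        freq *= lacunarity
    return total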
As stated in "The Statistics of Random Numbers", AI Game Wisdom 2, asking which produces 'better' randomness depends what you're using it for. Generally, the quality of PRNGs are compared via test batteries. At the time of print, the author indicates that the best known & most widely used test batteries for testing the randomness of PRNGs are ENT & Diehard. Also, see related questions of how to test random numbers and why statistical randomness tests seem ad-hoc.
Beyond the standard issues of testing typical PRNGs, testing Perlin Noise or Simplex Noise as PRNGs is more complicated because:
1. Both internally require a PRNG, thus the randomness of their output is influenced by the underlying PRNG.
2. Most PRNGs lack tunable parameters. In contrast, Perlin noise is a summation of one or more coherent-noise functions (octaves) with ever-increasing frequencies and ever-decreasing amplitudes. Since the final image depends on the number and nature of the octaves used, the quality of the randomness will vary accordingly. (libnoise: Modifying the Parameters of the Noise Module)
3. An argument similar to #2 holds for varying the number of dimensions used in simplex noise, as "a 3D section of 4D simplex noise is different from 3D simplex noise." (Stefan Gustavson's Simplex noise demystified)
i think you are confused.
perlin and simplex take random numbers from some other source and make them less random so that they look more like natural landscapes (random numbers alone do not look like natural landscapes).
so they are not a source of random numbers - they are a way of processing random numbers from somewhere else.
and even if they were a source, they would not be a good source (the numbers are strongly correlated).
do NOT use perlin or simplex for randomness. they aren't meant for that. they're an /application/ of randomness.
people choose these for their visual appeal, which hasn't been sufficiently discussed yet, so i'll focus on that.
perlin/simplex with smoothstep are perfectly smooth. no matter how far you zoom, they will always be a gradient, not a vertex or edge.
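for reference, the fade curves being referred to (a quick python sketch; the quintic variant is the fade from perlin's "improved noise", the cubic one is the classic smoothstep):

def smoothstep(t):
    # classic cubic fade: zero slope at t=0 and t=1, so blended gradients join smoothly
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def fade(t):
    # quintic variant from improved perlin noise: zero first AND second derivative at the ends
    return t * t * t * (t * (t * 6.0 - 15.0) + 10.0)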
the output range is (+/- 1/2 x #dimensions), so you need to compensate for this to get it into the range 0 to 1 or -1 to 1 as needed. fixing this is standard. adding octaves will increase this range by the scaling factor of the octave (it's usually half of the larger octave, of course).
perlin/simplex noise have the bizarre quality of being brown noise when zoomed in and blue noise when zoomed out. neither extreme nor a middle zoom is especially good for prng purposes, but they're great for faking natural occurrences (which aren't really random, and /are/ spatially biased).
both perlin and simplex noise tend to have some bias along the axes, with perlin having a few more problems in this area. edit: getting away from even more bias in three dimensions is very complicated. it's difficult (impossible?) to generate a large number of unbiased points upon a sphere.
perlin results tend to be circular with octagonal bias, while simplex tends to generate ovals with hexagonal bias.
a slice of higher dimensional simplex doesn't look like lower dimensional simplex. but a 2d slice of 3d perlin looks pretty much just like 2d perlin.
most people feel that simplex can't actually handle higher dimensions - it tends to "look worse and worse" for higher dimensions. perlin allegedly doesn't have this problem (it still has bias though).
i believe once "octaved" they both have a similar triangular distribution of output when layered (similar to rolling 2 dice), and so both benefit from a smoothstep. this is standard. (i'd love it if someone could double-check this for me.) (it's possible to bias the results toward equal output, but it would still have dimensional biases that would fail prng quality tests due to high spatial correlation, which is /the/ feature, not a bug.)
please note that the octaves technique is not part of the perlin or simplex definition. it is merely a trick frequently used in conjunction with them. perlin and simplex blend gradients at equally distributed points. octaves of this noise are combined to create larger and smaller structures. the same trick is also frequently used with "value noise", which applies basically the white-noise equivalent of this concept instead of the perlin gradients. value noise with octaves will also exhibit /even worse/ octagonal bias, hence why perlin or simplex are preferred.
simplex is faster in all cases - /especially/ in higher dimensions.
so simplex fixes the problems of perlin in both performance and visuals, but introduces its own problems.
I need to compare three 216x216 matrices (data correlation matrices, events, etc.). Can someone suggest a way to plot these in MATLAB, or some other plotting tool, that makes it easy to visualise and compare them? Would a 3D mesh plot be useful? I thought mesh would be good, but I'd like other opinions too.
Thanks in advance,
Sparse matrices
You can use the spy() method to visualize a "sparsity pattern", as Matlab calls it. It plots a dot (or any other marker) where the matrix element is non-zero.
spy() can also be used to visualize non-sparse matrices where a lot of entries are close to zero - just threshold the matrix first:
a=eye(50)+0.01*randn(50);
spy(a) % Not very useful
b=a; b(b<0.02)=0;
figure, spy(b) % Much more useful
More generally, you can apply upper and lower thresholds to visualize the location of matrix entries whose value is within a specific range.
Correlation
It may be useful to just display the matrix using imagesc(). This may give you an idea of the degree of correlation in your data - i.e. an uncorrelated signal will have a correlation matrix with dominant diagonal elements, which will be clearly visible. I find Matlab's default color map distracting, so I usually do something like
colormap(gray);imagesc(a);
Miscellaneous
Of course, there's a whole host of non-visual comparisons you can make - various norm()'s, std(), spectral analysis using eig() for square matrices, or svd() more generally. You can compare eigenvalue magnitudes, or compare the eigenvectors. This may be very useful or complete garbage, depending on what your data is.
Thus, to conclude (for now): if you say more specifically what your matrices contain, you may get more useful suggestions.
Well, I was programming something that required the use of DCT. I found 2 resources for the DCT formula:
Mathworks
Wikipedia
Initially I used the Wikipedia version of DCT-II. In the DCT-II section of the Wikipedia page, it is written that some authors further multiply the X0 term by 1/√2 and multiply the resulting matrix by an overall scale factor, which makes the DCT-II matrix orthogonal but breaks the direct correspondence with a real-even DFT of half-shifted input. The Mathworks site does exactly this.
What is this property being talked about?
I believe they are trying to say that they are concerned about making the DCT-II transform matrix a unitary matrix. It is nice from a signal-processing standpoint to have a unitary matrix, because when we transform the signal back to its original domain we are not adding any more power into the signal.
However, the 1-D DFT,
X[k] = Σ_{n=0}^{N-1} x[n] · e^{-j·2πkn/N},
can be rewritten in terms of sines and cosines (using Euler's identity). If the input is a real, even signal, the even (cosine) terms of the DFT will correspond to the terms of the DCT. Some people like to simplify their algorithms by simply taking the DFT of a signal and concentrating only on the even terms.
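To see the effect of that extra scaling concretely, here is a small sketch using SciPy (norm='ortho' applies the 1/√2 adjustment of the first term plus the overall scale factor, giving the orthonormal matrix used by the Mathworks definition; norm=None gives the unscaled Wikipedia-style DCT-II):

import numpy as np
from scipy.fft import dct

N = 8
# Build the DCT-II matrix by transforming the columns of the identity matrix.
D = dct(np.eye(N), axis=0, norm='ortho')
print(np.allclose(D @ D.T, np.eye(N)))          # True: the matrix is orthogonal/unitary

x = np.random.randn(N)
X = dct(x, norm='ortho')
print(np.allclose(np.sum(x**2), np.sum(X**2)))  # True: power is preserved in the DCT domain

# Without the extra scaling the rows are orthogonal but not unit-norm,
# so the matrix is not unitary and energy is not preserved.
D_plain = dct(np.eye(N), axis=0, norm=None)
print(np.allclose(D_plain @ D_plain.T, np.eye(N)))  # False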