Intensity to pixel conversion

I have data of 20 x 20 pixel images in the form of intensities normalized by anti-aliasing. I took this
data from THE MNIST DATABASE.
I want to convert these intensities to pixel values (0-255) so that I can visualize these grayscale images in Java. Also, why are there negative values (-1 to 1)?

What's hard about converting a range of values from -1 to +1 to a range of 0 to 255? Simply add 1 to each component (shifting the range from [-1, +1] to [0, 2]), multiply by 255/2, and round. The negative values are there because the database stores normalized intensities in [-1, 1] rather than raw pixel values.
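That mapping can be sketched as follows (Python used here for brevity even though the question mentions Java; `intensity_to_pixel` is a name chosen for illustration):

```python
def intensity_to_pixel(value):
    """Map a normalized intensity in [-1, 1] to a grayscale pixel in [0, 255]."""
    # Shift [-1, 1] to [0, 2], scale by 255/2, then round and clamp.
    pixel = round((value + 1.0) * 255.0 / 2.0)
    return max(0, min(255, pixel))

# Example: a few intensities standing in for one 20 x 20 MNIST-style sample.
intensities = [-1.0, 0.0, 1.0]
pixels = [intensity_to_pixel(v) for v in intensities]
print(pixels)  # [0, 128, 255]
```

The same arithmetic carries over directly to Java: `(int) Math.round((value + 1.0) * 255.0 / 2.0)`.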


Extraction of the least significant bit of a pixel

I want to extract the least significant bit (LSB) of each pixel in an image. I want to know what is the value of LSB in each pixel, whether it is 1 or 0. How do I do this in MATLAB?
You can use the bitget function and specify bit position 1, denoting the LSB of each pixel. Assuming your image is stored in A, simply do:
B = bitget(A, 1);
B will be the same size as A, and each element tells you the LSB of the corresponding pixel in the image.
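The same bit extraction is a one-line mask in most languages; a minimal Python sketch (assuming the image is a 2-D list of integer pixel values, with `lsb_plane` a hypothetical helper name):

```python
def lsb_plane(image):
    """Return the least significant bit (0 or 1) of every pixel."""
    # x & 1 picks the lowest-order bit, i.e. bit position 1 in bitget's terms.
    return [[pixel & 1 for pixel in row] for row in image]

# Small hypothetical 2 x 3 image of 8-bit values.
A = [[12, 7, 200],
     [255, 0, 33]]
print(lsb_plane(A))  # [[0, 1, 0], [1, 0, 1]]
```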

Image Mean Normalization

I am wondering if I understood the mean normalization of images correctly.
As far as I know, you calculate the mean value over all pixels (let's assume the image is grayscale). Then, for each pixel, you subtract this mean value.
But how should one deal with negative values which could arise? For example, the whole image has a mean value of 100, but one specific pixel has an intensity of 90. After this normalization, the pixel's value would be -10.
This may not be quite what you're looking for but one option that avoids negative numbers in your output would be to normalize to the range of values present rather than to the image mean.
The equation would be: X' = (X - Xmin)/(Xmax - Xmin). This rescales the image to be between 0 and 1, with no negative values involved. If you'd like to save it as an easily viewable greyscale image, you could multiply the values by 255.
It may also be worth noting that unless the entire image has a constant intensity, there is guaranteed to be some negative values after subtracting the mean (not simply a possibility that they could arise).
You don't have to deal with negative inputs; the model can handle them. It is good practice, for a neural network for example, to have inputs in the range [-1, 1].
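The two options discussed above, min-max rescaling and mean subtraction, can be sketched like this (Python, with hypothetical helper names; the input is a flat list of grayscale values):

```python
def min_max_normalize(pixels):
    """Rescale values to [0, 1] via X' = (X - Xmin) / (Xmax - Xmin)."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        # Constant image: map everything to 0 to avoid dividing by zero.
        return [0.0 for _ in pixels]
    return [(x - lo) / (hi - lo) for x in pixels]

def mean_subtract(pixels):
    """Zero-center: subtract the mean; negative results are expected and fine."""
    mean = sum(pixels) / len(pixels)
    return [x - mean for x in pixels]

values = [90, 100, 110]
print(min_max_normalize(values))  # [0.0, 0.5, 1.0]
print(mean_subtract(values))      # [-10.0, 0.0, 10.0]
```

Note how mean subtraction of the example from the question (mean 100, pixel 90) does yield -10, while min-max rescaling keeps everything non-negative.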

Explanation of Vector Images

I was given a ten thousand by sixty four matrix of integers 1, 2, ..., 16 for identification via some programs I wrote earlier. This matrix represents ten thousand eight by eight vectorized images.
My questions aren't so much about how to identify them (I already got that part covered), but rather, more about the data I'm looking at. A brief .txt file with the matrix gave me a little bit of explanation:
Thirty two by thirty two bitmaps are divided into nonoverlapping blocks of four by four, and the number of on pixels is counted in each block. This generates an input matrix of eight by eight, where each element is an integer in the range 0, ..., 16.
I'm still a bit confused, though. How does a thirty two by thirty two bitmap correspond to an eight by eight image? What does each integer represent? How would I go about converting a given image into a vector image, and then turning it into an array? What is the relationship between a bitmap and the picture in this context?
Thanks for any information or pointers you can give!
A wild guess...
Step 1) Take a 4 pixel x 4 pixel image. Each pixel can be black or white. You count the number of, say, black pixels and you will get a number between 0 and 16.
Step 2) Now, imagine you have a 32 pixel by 32 pixel image, and you divide it into squares of 4 pixels by 4 pixels - each square being like the one in Step 1 above. You will get 64 squares, that is, 8 by 8. For each of the 64 squares, store the number between 0 and 16 from Step 1.
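The two steps above can be sketched as follows (Python; `block_counts` is a hypothetical helper, and the bitmap is assumed to be a 2-D list of 0/1 values):

```python
def block_counts(bitmap, block=4):
    """Count set pixels in each non-overlapping block x block square.

    A 32 x 32 binary bitmap yields an 8 x 8 matrix of counts in 0..16.
    """
    n = len(bitmap)
    out = []
    for r in range(0, n, block):
        row = []
        for c in range(0, n, block):
            count = sum(bitmap[r + i][c + j]
                        for i in range(block) for j in range(block))
            row.append(count)
        out.append(row)
    return out

# Toy 8 x 8 bitmap -> 2 x 2 counts (the same idea scales 32 x 32 -> 8 x 8).
bitmap = [[1] * 4 + [0] * 4 for _ in range(4)] + [[0] * 8 for _ in range(4)]
print(block_counts(bitmap))  # [[16, 0], [0, 0]]
```

Flattening the 8 x 8 result row by row then gives one 64-element row of the ten thousand by sixty four matrix.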

Usage of the gaussian blur

I have now read tons of different explanations of the Gaussian blur and I am really confused.
I roughly understand how the Gaussian blur works.
http://en.wikipedia.org/wiki/Gaussian_blur
I understood that we choose 3*sigma as the maximum size for our mask because beyond that the values get really small.
But my three questions are:
1. How do I create a Gaussian mask with only the sigma?
2. If I understood it correctly, the mask gives me the weights. I place the mask on the top left pixel, multiply each weight by the value of the pixel under it, and sum the results. Then I move the mask to the next pixel. I do this for all pixels. Is this correct?
3. I also know that 1D masks are faster, so I create a mask for x and a mask for y. Let's say my mask looks like this (3x3):
1 2 1
2 4 2
1 2 1
How would my x and y masks look?
1- A solution to create a Gaussian mask is to set up an N by N matrix, with N = 3*sigma (or less if you want a coarser approximation), and fill each entry (i,j) with exp(-((i-N/2)^2 + (j-N/2)^2)/(2*sigma^2)). As a comment mentioned, taking N = 3*sigma just means that you truncate your Gaussian at a "sufficiently small" threshold.
2- Yes, you understood correctly. A small detail is that you'll need to normalize by the sum of your weights (i.e., divide the result of what you said by the sum of all the elements of your matrix). The other option is to build your matrix already normalized, so that you don't need to perform this normalization at the end (the normalized Gaussian formula becomes exp(-((i-N/2)^2 + (j-N/2)^2)/(2*sigma^2))/(2*pi*sigma^2)).
3- In your specific case, the 1D version is [1 2 1] (i.e., both your x and y masks), since you can obtain the matrix you gave with the multiplication transpose([1 2 1]) * [1 2 1]. In general, you can build these 1D Gaussians directly using the 1D Gaussian formula, which is similar to the one above: exp(-((i-N/2)^2)/(2*sigma^2)) (or the normalized version exp(-((i-N/2)^2)/(2*sigma^2))/(sigma*sqrt(2*pi))).
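Points 1 and 3 can be sketched together (Python with only the standard library; the function names are chosen for illustration, and the mask radius is truncated at ceil(3*sigma) as discussed):

```python
import math

def gaussian_mask_2d(sigma):
    """Build a normalized 2-D Gaussian mask, truncated at about 3*sigma."""
    radius = max(1, int(math.ceil(3 * sigma)))   # half-width of the mask
    size = 2 * radius + 1
    mask = [[math.exp(-((i - radius) ** 2 + (j - radius) ** 2)
                      / (2 * sigma ** 2))
             for j in range(size)] for i in range(size)]
    total = sum(sum(row) for row in mask)
    # Normalize so the weights sum to 1 (no brightness change after the blur).
    return [[w / total for w in row] for row in mask]

def gaussian_mask_1d(sigma):
    """Separable 1-D version; the 2-D mask is its outer product with itself."""
    radius = max(1, int(math.ceil(3 * sigma)))
    mask = [math.exp(-(i - radius) ** 2 / (2 * sigma ** 2))
            for i in range(2 * radius + 1)]
    total = sum(mask)
    return [w / total for w in mask]

m = gaussian_mask_2d(1.0)
print(len(m), len(m[0]))  # 7 7 (radius 3 either side of the center)
```

Blurring with the 1-D mask along x and then along y gives the same result as the full 2-D mask, at far less cost per pixel.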

Laplacian and Gaussian Filter

I am trying to do some image processing and I would like to apply the LoG kernel. I know the formula, which is:
LoG(x, y) = -1/(pi*sigma^4) * (1 - (x^2 + y^2)/(2*sigma^2)) * exp(-(x^2 + y^2)/(2*sigma^2))
But I didn't understand how to obtain the kernel matrix with this formula. From what I have read, I have an n x n matrix and I apply this formula to every cell in that matrix, but what should the starting values within that matrix be in the first place?
Also, I have the same question with the Laplacian filter. I know the formula, which is:
∇²f = ∂²f/∂x² + ∂²f/∂y²
and also, from what I have read, the 3 x 3 filter should be the matrix:
x = [1 1 1; 1 -4 1; 1 1 1]
but can you please tell me how to apply the formula in order to obtain the matrix, or at least point me to a tutorial on how to do this.
Basically, we are just going from continuous space to discrete space. The first derivative in continuous time (space) is analogous to the first difference in discrete time (space). To compute the first difference of a discrete-time signal, you convolve [1 -1] over the signal. To compute the second difference, you convolve a signal with [1 -2 1] (which is [1 -1] convolved with itself, or equivalently, convolving the signal with [1 -1] twice).
To calculate the second difference in two dimensions, you convolve the input image with the matrix you mentioned in your question. That means you take the 3-by-3 mask (i.e., the matrix you mentioned), multiply its nine numbers by the nine pixels under it in the image, and sum the products to get one output pixel. Then you shift the mask to the right and do it again. Each shift produces one output pixel. You do that across the entire image.
To get the mask for a Gaussian filter, just sample the two-dimensional Gaussian function for any arbitrary sigma.
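The convolution procedure described above can be sketched as follows (Python, plain nested loops with 'valid' output size; note this example uses the 4-neighbour discrete Laplacian, built from the [1 -2 1] second differences in x and y, which differs slightly from the all-ones matrix in the question):

```python
def convolve2d_valid(image, kernel):
    """'Valid' 2-D correlation: slide the kernel, multiply, and sum.

    For symmetric kernels like the Laplacian mask, correlation and
    convolution give identical results.
    """
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for r in range(ih - kh + 1):
        row = []
        for c in range(iw - kw + 1):
            acc = sum(kernel[i][j] * image[r + i][c + j]
                      for i in range(kh) for j in range(kw))
            row.append(acc)
        out.append(row)
    return out

# 4-neighbour discrete Laplacian: [1 -2 1] in x plus [1 -2 1] in y.
laplacian = [[0, 1, 0],
             [1, -4, 1],
             [0, 1, 0]]

# Flat regions give 0; the bright pixel at (2, 1) produces a strong response.
image = [[10, 10, 10, 10],
         [10, 10, 10, 10],
         [10, 50, 10, 10],
         [10, 10, 10, 10]]
print(convolve2d_valid(image, laplacian))
```

Each output pixel is one placement of the mask, exactly the shift-multiply-sum procedure described above.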
This may help: convolution matrix, Gaussian filter
