I have this equation:
I = (I - min(I(:))) / (max(I(:)) - min(I(:)));
where I is a matrix. I know that min(I(:)) and max(I(:)) compute the minimum and maximum elements of the matrix I, respectively.
When I create a random matrix with rand(5,5) or randi(5,5), I don't see any change before and after applying the above equation,
but when I apply this equation to a gray-scale image, the result is a binary image.
Can anyone explain this equation exactly, please?
The code line
I = (I - min(I(:))) / (max(I(:)) - min(I(:)));
linearly transforms data from the range [min(I(:)), max(I(:))] to the range [0, 1] – a form of min-max normalization (not standardization, which divides by the standard deviation). The subtraction shifts the data so that the minimal value becomes 0; the division then rescales the data so that the maximal value becomes 1.
You can get a feeling for what happens by plotting the original and transformed data against each other:
I = randi(100, 1, 10);
plot(I, (I - min(I(:))) / (max(I(:)) - min(I(:))), '.')
xlabel original
ylabel transformed
In this particular run, the minimum value happened to be 5 and the maximum 75. The data are linearly transformed such that the minimum is mapped to 0 and the maximum to 1.
That you don't see a difference in your matrix plots is probably due to the way you plot it. If you use e.g. imagesc, it does such a transformation internally before plotting (hence the sc part for "scaling") and so you don't see a difference. But the difference is there, just look at the numbers themselves:
Example:
>> I = randi(3, 3, 3)
I =
1 2 2
1 2 2
2 3 3
>> I = (I - min(I(:))) / (max(I(:)) - min(I(:)))
I =
0 0.5 0.5
0 0.5 0.5
0.5 1 1
The gray-scale image that you used, tire.tif from Matlab, is an 8-bit image. If you read it into Matlab
I = imread('tire.tif');
you get an array of uint8 values:
>> whos I
Name Size Bytes Class Attributes
I 205x232 47560 uint8
In Matlab, if you do computations with such an integer data type, in many cases the result stays an integer, too. You scale to [0, 1], but there are only two integers in that range. As a result you get an image that contains only 0 and 1 as values, a binary image. The effect can again be visualized by plotting:
plot(I(:), (I(:) - min(I(:))) / (max(I(:)) - min(I(:))), '.')
xlabel original
ylabel transformed
The original data are integers from 0 to 255, and they are mapped to 0 for the range 0–127, and to 1 for the range 128–255. To avoid that, first convert the data to a floating-point data type:
I = double(I);
For more information on integer arithmetic, see the Matlab documentation.
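As a minimal sketch of the fix (assuming the tire.tif image that ships with the Image Processing Toolbox):

```matlab
% Convert to a floating-point type BEFORE normalizing, so the
% division is not truncated by integer arithmetic
I = imread('tire.tif');                          % uint8, values 0-255
I = double(I);                                   % now floating point
I = (I - min(I(:))) / (max(I(:)) - min(I(:)));   % smooth values in [0, 1]
imshow(I)                                        % full gray-scale range
```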
Related
How should I approach the problem of removing negative pixels from MRI slices (or any image)?
Also, it would really help if someone can briefly explain why they occur.
I have read many online references especially those on MATLAB forum, but they seem to have diverse reasons as to why they occur.
Sorry, I don't have a code to post, since I am figuring out my approach yet.
For our purposes here, an MRI slice is nothing but an image, which in turn is nothing but a matrix. Since a matrix representing an image has only positive values, 'a negative pixel' presumably means a pixel whose value is lower than a certain threshold. Let's consider such a scenario:
load clown loads a matrix X into your workspace representing a clown image; to see it, first do imagesc(X); colormap(gray);. If you want to cut out values lower than a threshold, you can do:
threshold = 10;
newValue = 0;
X(X < threshold) = newValue; % logical indexing; find() is not needed
imagesc(X)
colormap(gray)
Assume for example the following image matrix:
>> Img = [-2, -1, 0, 1, 2];
You could set all negative elements to zero:
>> ImgZeros = Img;
>> ImgZeros(Img<0) = 0
ImgZeros =
0 0 0 1 2
Or any other value useful to you, e.g. NaN:
>> ImgNans = Img;
>> ImgNans(Img<0) = nan
ImgNans =
NaN NaN 0 1 2
You could 'shift' all the values up such that the lowest negative value becomes zero:
>> ImgZeroFloor = Img - min(Img(:))
ImgZeroFloor =
0 1 2 3 4
You could convert the whole thing to a grayscale image in the range (0,1):
>> ImgGray = mat2gray(Img)
ImgGray =
0 0.2500 0.5000 0.7500 1.0000
etc.
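For reference, mat2gray (Image Processing Toolbox) performs exactly the min-max normalization discussed in the first question; a quick check:

```matlab
Img = [-2, -1, 0, 1, 2];
g1 = mat2gray(Img);                                        % toolbox function
g2 = (Img - min(Img(:))) / (max(Img(:)) - min(Img(:)));    % manual version
% both g1 and g2 are [0 0.25 0.5 0.75 1]
```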
As to why you're getting negative values, who knows. It's problem specific. (If I had to guess for MRI I would say it's due to numerical inaccuracy during the conversion process from an MRI signal to pixel intensities.)
Another way to limit your values is to use a sigmoid function:
https://en.wikipedia.org/wiki/Sigmoid_function
It can be scaled so that all outputs range from 0 to 255, and it can be used to limit spikes in the raw data. It's often used in neural networks.
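A rough sketch of that idea (the center and width parameters are illustrative choices, not from the answer above):

```matlab
X = randn(5, 5) * 100;       % some raw data with spikes
center = mean(X(:));         % midpoint of the sigmoid transition
width  = std(X(:));          % controls the steepness
% every value is squashed into the open interval (0, 255)
Xlimited = 255 ./ (1 + exp(-(X - center) / width));
```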
I am trying to do some image processing for which I am given an 8-bit grayscale image. I am supposed to change the contrast of the image by generating a lookup table that increases the contrast for pixel values between 50 and 205. I have generated a look up table using the following MATLAB code.
a = 2;
x = 0:255;
lut = 255 ./ (1+exp(-a*(x-127)/32));
When I plot lut, I get a graph shown below:
So far so good, but how do I go about increasing the contrast for pixel values between 50 and 205? Final plot of the transform mapping should be something like:
Judging from your comments, you simply want a linear map where intensities that are < 50 get mapped to 0, intensities that are > 205 get mapped to 255, and everything else is a linear mapping in between. You can simply do this by:
slope = 255 / (205 - 50); % // Generate equation of the line -
% // y = mx + b - Solve for m
intercept = -50*slope; %// Solve for b --> b = y - m*x, y = 0, x = 50
LUT = uint8(slope*(0:255) + intercept); %// Generate points
LUT(1:51) = 0; %// Anything < intensity 50 set to 0
LUT(206:end) = 255; %// Anything > intensity 205 set to 255
The LUT now looks like:
plot(0:255, LUT);
axis tight;
grid;
Take note of how I truncated the intensities when they're < 50 and > 205. MATLAB starts indexing at 1, so we need to offset the intensities by 1 so that they correctly map to pixel intensities, which start at 0.
To finally apply this to your image, all you have to do is:
out = LUT(double(img) + 1); %// double() so that an intensity of 255 does not saturate at the uint8 limit before the +1
This is assuming that img is your input image. Again, take note that we had to offset the input by +1 as MATLAB starts indexing at location 1, while intensities start at 0.
Minor Note
You can easily do this by using imadjust, which basically does this for you under the hood. You call it like so:
outAdjust = imadjust(in, [low_in; high_in], [low_out; high_out]);
low_in and high_in represent the minimum and maximum input intensities that exist in your image. Note that these are normalized between [0,1]. low_out and high_out adjust the intensities of your image so that low_in maps to low_out, high_in maps to high_out, and everything else is contrast stretched in between. For your case, you would do:
outAdjust = imadjust(img, [50/255; 205/255], [0; 1]);
This should stretch the contrast such that the input intensity 50 maps to the output intensity 0 and the input intensity 205 maps to the output intensity 255. Any intensities < 50 and > 205 get automatically saturated to 0 and 255 respectively.
You need to take each pixel in your image and replace it with the corresponding value in the lookup table. This can be done with some nested for loops, but it is not the most idiomatic way to do it. I would recommend using arrayfun with a function that replaces a pixel.
new_image = arrayfun(@(pixel) lut(double(pixel) + 1), image); % +1 because intensities start at 0 but MATLAB indexing starts at 1
It might be more efficient to use the code that generates lut directly on the image. If performance is a concern and you don't need to use a lookup table, try comparing both methods.
new_image = 255 ./ (1 + exp(-a * (double(image) - 127) / 32));
Note that the new_image variable will no longer be of type uint8. If you need to display it again (say, with imshow) you will need to convert it back by writing uint8(new_image).
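For comparison, plain array indexing applies the same lookup table without arrayfun and is usually the faster, more idiomatic option (assuming image is uint8 with intensities 0 to 255):

```matlab
a = 2;
x = 0:255;
lut = 255 ./ (1 + exp(-a*(x-127)/32));
image = uint8(randi([0 255], 4, 4));   % stand-in for the real image
% double() before the +1 offset so 255 does not saturate at the uint8 limit
new_image = lut(double(image) + 1);
```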
I am interested in adding a single Gaussian shaped object to an existing image, something like in the attached image. The base image that I would like to add the object to is 8-bit unsigned with values ranging from 0-255. The bright object in the attached image is actually a tree represented by normalized difference vegetation index (NDVI) data. The attached script is what I have so far. How can I add a Gaussian shaped object (i.e. a tree) with values ranging from 110-155 to an existing NDVI image?
Sample data available here which can be used with this script to calculate NDVI
file = 'F:\path\to\fourband\image.tif';
[I R] = geotiffread(file);
outputdir = 'F:\path\to\output\directory\'
%% Make NDVI calculations
NIR = im2single(I(:,:,4));
red = im2single(I(:,:,1));
ndvi = (NIR - red) ./ (NIR + red);
ndvi = double(ndvi);
%% Stretch NDVI to 0-255 and convert to 8-bit unsigned integer
ndvi = floor((ndvi + 1) * 128); % [-1 1] -> [0 256]
ndvi(ndvi < 0) = 0; % not really necessary, just in case & for symmetry
ndvi(ndvi > 255) = 255; % in case the original value was exactly 1
ndvi = uint8(ndvi); % change data type from double to uint8
%% Need to add a random tree in the image here
%% Write to geotiff
tiffdata = geotiffinfo(file);
outfilename = [outputdir 'ndvi_' '.tif'];
geotiffwrite(outfilename, ndvi, R, 'GeoKeyDirectoryTag', tiffdata.GeoTIFFTags.GeoKeyDirectoryTag)
Your post is asking how to do three things:
How do we generate a Gaussian shaped object?
How can we do this so that the values range between 110 - 155?
How do we place this in our image?
Let's answer each one separately, where the order of each question builds on the knowledge from the previous questions.
How do we generate a Gaussian shaped object?
You can use fspecial from the Image Processing Toolbox to generate a Gaussian for you:
mask = fspecial('gaussian', hsize, sigma);
hsize specifies the size of your Gaussian. You have not specified it here in your question, so I'm assuming you will want to play around with this yourself. This will produce a hsize x hsize Gaussian matrix. sigma is the standard deviation of your Gaussian distribution. Again, you have also not specified what this is. sigma and hsize go hand-in-hand. Referring to my previous post on how to determine sigma, it is generally a good rule to set the standard deviation of your mask to be set to the 3-sigma rule. As such, once you set hsize, you can calculate sigma to be:
sigma = (hsize-1) / 6;
As such, figure out what hsize is, then calculate your sigma. After, invoke fspecial like I did above. It's generally a good idea to make hsize an odd integer. The reason why is because when we finally place this in your image, the syntax to do this will allow your mask to be symmetrically placed. I'll talk about this when we get to the last question.
How can we do this so that the values range between 110 - 155?
We can do this by adjusting the values within mask so that the minimum is 110 while the maximum is 155. This can be done by:
%// Adjust so that values are between 0 and 1
maskAdjust = (mask - min(mask(:))) / (max(mask(:)) - min(mask(:)));
%//Scale by 45 so the range goes between 0 and 45
%//Cast to uint8 to make this compatible for your image
maskAdjust = uint8(45*maskAdjust);
%// Add 110 to every value so the range goes between 110 - 155
maskAdjust = maskAdjust + 110;
In general, if you want to adjust the values within your Gaussian mask so that it goes from [a,b], you would normalize between 0 and 1 first, then do:
maskAdjust = uint8((b-a)*maskAdjust) + a;
You'll notice that we cast this mask to uint8. The reason we do this is to make the mask compatible to be placed in your image.
How do we place this in our image?
All you have to do is figure out the row and column you would like the centre of the Gaussian mask to be placed. Let's assume these variables are stored in row and col. As such, assuming you want to place this in ndvi, all you have to do is the following:
hsizeHalf = floor(hsize/2); %// hsize being odd is important
%// Place Gaussian shape in our image
ndvi(row - hsizeHalf : row + hsizeHalf, col - hsizeHalf : col + hsizeHalf) = maskAdjust;
The reason why hsize should be odd is to allow an even placement of the shape in the image. For example, if the mask size is 5 x 5, then the above syntax for ndvi simplifies to:
ndvi(row-2:row+2, col-2:col+2) = maskAdjust;
From the centre of the mask, it stretches 2 rows above and 2 rows below. The columns stretch from 2 columns to the left to 2 columns to the right. If the mask size was even, then we would have an ambiguous choice on how we should place the mask. If the mask size was 4 x 4 as an example, should we choose the second row, or third row as the centre axis? As such, to simplify things, make sure that the size of your mask is odd, or mod(hsize,2) == 1.
This should hopefully and adequately answer your questions. Good luck!
I'm trying to create a mozaic image in Matlab. The database consists of mostly RGB images but also some gray scale images.
I need to calculate the histograms - like in the example of the Wikipedia article about color histograms - for the RGB images and thought about using the bitshift operator in Matlab to combine the R,G and B channels.
nbins = 4;
nbits = 8;
index = bitshift(bitshift(image(:,:,1), log2(nbins)-nbits), 2*log2(nbins)) + ...
        bitshift(bitshift(image(:,:,2), log2(nbins)-nbits), log2(nbins)) + ...
        bitshift(image(:,:,3), log2(nbins)-nbits) + 1;
index is now a matrix of the same size as image with the index to the corresponding bin for the pixel value.
How can I sum the occurences of all unique values in this matrix to get the histogram of the RGB image?
Is there a better approach than bitshift to calculate the histogram of an RGB image?
Calculating Indices
The bitshift approach seems OK. What I would personally do is create a lookup relationship that relates an RGB value to a bin value. You first have to figure out how many bins you want in each dimension. For example, let's say we want 8 bins in each channel. This means that we would have a total of 512 bins altogether. Assuming we have 8 bits per channel, you would produce an index like so:
% // Figure out where to split our bins
accessRed = floor(256 / NUM_RED_BINS);
accessGreen = floor(256 / NUM_GREEN_BINS);
accessBlue = floor(256 / NUM_BLUE_BINS);
%// Figures out where to index the histogram
redChan = floor(red / accessRed);
greenChan = floor(green / accessGreen);
blueChan = floor(blue / accessBlue);
%// Find single index
out = 1 + redChan + (NUM_RED_BINS)*greenChan + (NUM_RED_BINS*NUM_GREEN_BINS)*blueChan;
This assumes we have split our channels into red, green and blue. We also offset our indices by 1 as MATLAB indexes arrays starting at 1. This makes more sense to me, but the bitshift operator looks more efficient.
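A small worked example with 8 bins per channel (the pixel values are chosen purely for illustration):

```matlab
NUM_RED_BINS = 8; NUM_GREEN_BINS = 8; NUM_BLUE_BINS = 8;
red = 200; green = 30; blue = 100;          % one example pixel
accessRed   = floor(256 / NUM_RED_BINS);    % 32
accessGreen = floor(256 / NUM_GREEN_BINS);  % 32
accessBlue  = floor(256 / NUM_BLUE_BINS);   % 32
redChan   = floor(red   / accessRed);       % 6
greenChan = floor(green / accessGreen);     % 0
blueChan  = floor(blue  / accessBlue);      % 3
out = 1 + redChan + NUM_RED_BINS*greenChan ...
        + NUM_RED_BINS*NUM_GREEN_BINS*blueChan;   % bin 199 of 512
```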
Onto your histogram question
Now, supposing you have the indices stored in index, you can use the accumarray function to help you do that. accumarray takes in a set of locations in your array, as well as a "weight" for each location. accumarray finds the entries that share a location and aggregates their weights together. In your case, you can use sum, but accumarray isn't limited to sum; you can use any function that reduces a set of values to a single value. As an example, suppose we had the following variables:
index =
1
2
3
4
5
1
1
2
2
3
3
weights =
1
1
1
2
2
2
3
3
3
4
4
What accumarray will do is for each value of weights, take a look at the corresponding value in index, and accumulate this value into its corresponding slot.
As such, by doing this you would get (make sure that index and weights are column vectors):
out = accumarray(index, weights);
out =
6
7
9
2
2
If you take a look, for all positions where index has the value 1, the corresponding values in weights get summed into the first slot of out. Here those values are 1, 2 and 3, giving 6. Similarly, for index 2 the weights are 1, 3 and 3, which give us 7.
Now, to apply this to your application, given your code, your indices look like they start at 1. To calculate the histogram of your image, all we have to do is set all of the weights to 1 and use accumarray to accumulate the entries. Therefore:
%// Make sure these are column vectors
index = index(:);
weights = ones(numel(index), 1);
%// Calculate histogram
h = accumarray(index, weights);
%// You can also do:
%// h = accumarray(index, 1); - This is a special case if every value
%// in weights is the same number
accumarray's behaviour by default invokes sum. This should hopefully give you what you need. Also, should there be any indices that are missing values, (for example, suppose the index of 2 is missing from your index matrix), accumarray will conveniently place a zero in this location when you aggregate. Makes sense right?
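As an aside on the "better approach" part of the question: if the indices are already integers starting at 1, histcounts (available since R2014b) computes the same counts directly; a small check with illustrative indices 1 to 5:

```matlab
index = [1; 1; 2; 3; 3; 3; 5];       % example bin indices
h1 = accumarray(index, 1);           % counts per bin: [2; 1; 3; 0; 1]
h2 = histcounts(index, 1:6).';       % same counts via histogram edges
```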
Good luck!
To perform K means clustering with k = 3 (segments). So I:
1) Converted the RGB img into grayscale
2) Casted the original image into a n X 1, column matrix
3) idx = kmeans(column_matrix, 3)
4) output = idx, casted back into the same dimensions as the original image.
My questions are :
A
When I do imshow(output), I get a plain white image. However, when I do imshow(output, [0 5]), it shows the output image. I understand that [0 5] specifies the display range, but why do I have to do this?
B)
Now the output image is meant to be split into 3 segments, right? How do I threshold it so that I assign a
0 for the clusters of region 1
1 for clusters of region 2
2 for clusters of region 3
As the whole point of me doing this clustering is so that I can segment the image into 3 regions.
Many thanks.
Kind Regards.
A: Your matrix output contains values ranging from 1 to 3 and is of class double. imshow treats a double matrix as a grayscale image with an assumed display range of [0, 1], so every value of 1 or more is displayed as white. This is why constraining the color limits is necessary; otherwise your image is all white or almost all white.
B: output = output - 1
As pointed out by Ryan, your problem is probably just how you display the image. Here's a working example:
snow = rand(256, 256);
figure;
imagesc(snow);
nClusters = 3;
clusterIndices = kmeans(snow(:), nClusters);
figure;
imagesc(reshape(clusterIndices, [256, 256]));