I have a specific question about intensity adjustment for image processing. I need a high threshold value to find the small gaps in the image, which are marked with a red circle in the image. I used a manual threshold value of 0.99 to convert the grayscale image to a binary image for the other processing steps. However, because the illumination on the surface is not evenly distributed, some parts of the image are lost. I tried the adaptive method suggested by MATLAB, but the result is similar to a global graythresh threshold.
I will show my code and result below.
I0 = imread('1_2.jpg');
[R,C,K] = size(I0);
if K==1
I1 = I0;
else
I1 = rgb2gray(I0);
end
%Adjust image to get a standard binary picture
%Adjust image intensity values
I1 = imadjust(I1,[0.1 0.7],[]);
BW0 = im2bw(I1,0.99);
figure;
BW0 = bwareaopen(BW0,10000);
%Fill non-crack holes
BW0 = bwareaopen(1-BW0,500);
BW0 = 1-BW0;
imshow(BW0);
After this process, only half of the image is left. I want the whole image, binarized with a locally adaptive intensity threshold but showing the same features as the high global threshold. What can I do?
Thanks
Try adaptthresh:
I0 = imread('1_2.jpg');
[R,C,K] = size(I0);
if K==1
I1 = I0;
else
I1 = rgb2gray(I0);
end
T = adaptthresh(I1, 0.4); %adaptive thresholding
% Convert image to binary image, specifying the threshold value.
BW = imbinarize(I1,T);
% Display the original image with the binary version, side-by-side.
figure
imshowpair(I1, BW, 'montage')
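If the adaptive result still contains speckle, one option is to reuse the cleanup steps from your original post on the adaptive output. A minimal sketch, with the area limits (10000 and 500) taken from your own code and the 0.4 sensitivity from above; tune both for your images:
T = adaptthresh(I1, 0.4);
BW = imbinarize(I1, T);
BW = bwareaopen(BW, 10000);  % remove small bright specks
BW = ~bwareaopen(~BW, 500);  % fill small non-crack holes
imshow(BW);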
I'm implementing image mosaicing in MATLAB using SURF. The problem is that
outputView = imref2d(size(img1)*2);
Ir = imwarp(img2,tform,'OutputView',outputView);
it produces
I want it to be something like this:
if i change
outputView = imref2d(size(img1)*2);
to
outputView = imref2d(size(img1));
MATLAB crops the second image so that it fits within the first image's size after transforming.
Notice that when you warp the image with respect to the target plane, many of the pixels in this new plane are equal to 0. A very rudimentary algorithm is to simply threshold your image to find values above 0, then find the largest bounding box that encompasses the non-zero pixels, and crop:
[rows,cols] = find(Ir(:,:,1) > 0);
topLeftRow = min(rows);
topLeftCol = min(cols);
bottomRightRow = max(rows);
bottomRightCol = max(cols);
Ir_crop = Ir(topLeftRow:bottomRightRow, topLeftCol:bottomRightCol, :);
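Note that this only checks the first channel. For a colour mosaic, a pixel could be zero in one channel but not in the others; a slightly more robust sketch (assuming the same Ir as above) looks across all channels:
% Treat a pixel as valid if ANY of its channels is non-zero
valid = any(Ir > 0, 3);
[rows, cols] = find(valid);
Ir_crop = Ir(min(rows):max(rows), min(cols):max(cols), :);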
I've been working with a retina image, which I am currently submitting to a wavelet transform, but I have noticed two problems:
The optic disc, which causes noise in the image
And the circle delimiting the retina
The original image is the following:
My plan is to set the background to the tone of the optic disc so as not to lose any detail of the retina's blood vessels (I post code that I have played with, but I still do not understand much, such as how to find the tone of the optic disc and how to apply it to the image without altering the blood vessels).
And with respect to the outer circle of the retina, I don't know what you would recommend (I do not know about masks; I would appreciate any literature you could point me to).
c = [242 134 72]; % Background colour to change
thresh = 50;
A = imread('E:\Prueba.jpg');
B = zeros(size(A));
Ar = A(:,:,1);
Ag = A(:,:,2);
Ab = A(:,:,3);
Br = B(:,:,1);
Bg = B(:,:,2);
Bb = B(:,:,3);
logmap = (Ar > (c(1) - thresh)).*(Ar < (c(1) + thresh)).*...
(Ag > (c(2) - thresh)).*(Ag < (c(2) + thresh)).*...
(Ab > (c(3) - thresh)).*(Ab < (c(3) + thresh));
Ar(logmap == 1) = Br(logmap == 1);
Ag(logmap == 1) = Bg(logmap == 1);
Ab(logmap == 1) = Bb(logmap == 1);
A = cat(3 ,Ar,Ag,Ab);
imshow(A);
courtesy of the question "How can I change the background color of the image?"
The image I get is the following
I need a picture like this, where the optic disc does not cause noise when segmenting the retina's blood vessels.
I want the background to be uniform, with only the veins perceptible.
I have continued working and obtained the following image. As you can see, the optic disc removes some parts of the blood vessels (veins) that lie above it, so I need to eliminate it or make the entire background of the image uniform.
As Wouter said, you should first correct the inhomogeneity of the image. I would do it in my own way:
First, the parameters you can adjust to optimize the output:
gfilt = 3;
thresh = 0.4;
erode = 3;
brighten = 20;
You will see how they are used in the code.
This is the main step: apply a Gaussian filter to the image to smooth it, then subtract the result from the original image. This way you end up with the sharp changes in your data, which happen to be the vessels:
A = imread('Prueba.jpg');
B = imgaussfilt(A, gfilt) - A; % Gaussian filter and subtraction
% figure; imshow(B)
Then I create a binary mask to remove the unwanted area of the image:
% the 'imadjust' makes sure that you get the same result even if you ...
% change the intensity of illumination. "thresh" is the threshold of ...
% conversion to black and white:
circ = im2bw(imadjust(A(:,:,1)), thresh);
% here I am shrinking the "circ" for "erode" pixels:
circ = imerode(circ, strel('disk', erode));
circ3 = repmat(circ, 1, 1, 3); % and here I extended it to 3D.
% figure; imshow(circ)
And finally, I remove everything in the surrounding dark area and show the result:
B(~circ3) = 0; % ignore the surrounding area
figure; imshow(B * brighten) % brighten and show the output
Notes:
I do not see the last image as a final result, but probably you could apply some thresholds to it and separate the vessels from the rest.
The quality of the image you provided is quite low. I expect good results with a better data.
Although the intensity of the blue channel is lower than that of the others, the vessels are expressed better there than in the other channels, because blood is red! (A quick channel-comparison sketch follows these notes.)
If you are acquiring this data or you have access to the person, I suggest you to use blue light for illumination, since it provides you with higher contrast of the vessels.
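To check that on your own data, here is a minimal sketch that displays the three channels side by side (assuming A is the RGB retina image read above):
% Compare the R, G and B channels to see which shows the vessels best
A = imread('Prueba.jpg');
figure;
subplot(1,3,1); imshow(A(:,:,1)); title('Red');
subplot(1,3,2); imshow(A(:,:,2)); title('Green');
subplot(1,3,3); imshow(A(:,:,3)); title('Blue');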
Morphological operations are good for working with spaghetti images.
Original image:
Convert to grayscale:
original = rgb2gray(gavrF); % gavrF is the input RGB image
Estimate the background via morphological closing:
se = strel('disk', 3);
background = imclose(original, se);
Estimate of the background:
You could then for example subtract this background from the original grayscale image. You can do this straight by doing a bottom hat transform on the grayscale image:
flatImage = imbothat(original, strel('disk', 4));
With an output:
Noisy, but now you have access to global thresholding methods. Remember to convert the data type to double if you wish to do some subtraction or division manually.
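For instance, a minimal sketch of doing the background subtraction manually in double precision (reusing original and background from above) to avoid uint8 clipping:
% Subtract in double precision, then rescale to [0, 1] for display
origD = im2double(original);
bgD = im2double(background);
flatD = mat2gray(origD - bgD); % may contain negatives before rescaling
figure; imshow(flatD);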
In my implementation of downsampling an image by a factor of 2, the downsampled image comes out gray. What should I do to carry all of the color components through the downsampling so that the result is a color image?
I = imread('lena.gif','gif');
[j k] = size(I)
x_new = j./2;
y_new = k./2;
x_scale = j./x_new;
y_scale = k./y_new;
M = zeros(x_new,y_new);
for count1 = 1:x_new
for count2 = 1:y_new
M(count1,count2) = I(count1.*x_scale,count2.*y_scale);
end
end
figure,imshow(I);
title('Original Image');
M = uint8(M);
figure,imshow(M);
title('Downsample');
GIF images are what are known as indexed images. This means that what you read in with imread are values that are indices to a colour map. Each index generates a unique colour for you, and that's how GIF images are stored. They choose from a predefined set of colours, and each pixel in the GIF image comes from one of the colours in the colour map.
You first need to convert the image into RGB, and you do that with ind2rgb. However, you need to read in the colour map first with the two-output version of imread. You also will want to convert the images to uint8 as good practice with im2uint8:
[X,map] = imread('lena.gif');
I = im2uint8(ind2rgb(X,map));
What you need to do next is what @NKN suggested. You must apply the algorithm to all channels.
As such, simply make an output matrix that has three channels, and apply the algorithm to each plane independently. If I can make a suggestion, when accessing pixels this way after you downsample, make sure you floor or round the image coordinates so you're not inadvertently specifying locations that aren't defined - things like (13.8, 25.5) for example. Image pixel locations are integer, so you need to make sure the coordinates are integer too.
[X,map] = imread('lena.gif');
I = im2uint8(ind2rgb(X,map));
j = size(I,1); %// Change
k = size(I,2);
x_new = j./2;
y_new = k./2;
x_scale = j./x_new;
y_scale = k./y_new;
M = zeros(x_new,y_new,size(I,3)); %// Change
for jj = 1 : size(I,3) %// Change
for count1 = 1:x_new
for count2 = 1:y_new
M(count1,count2,jj) = I(floor(count1.*x_scale),floor(count2.*y_scale),jj); %// Change
end
end
end
figure,imshow(I);
title('Original Image');
M = uint8(M);
figure,imshow(M);
title('Downsample');
To test this, I'm using the mandrill dataset that's part of MATLAB. It is an indexed image with an associated colour map. These are coincidentally stored in X and map respectively:
load mandrill;
I = im2uint8(ind2rgb(X,map));
Running the modified code, I get these two figures:
When you read the original image it contains 3 layers, R-G-B (as suggested by @rayryeng):
[X,map] = imread('lena.gif');
I = ind2rgb(X,map);
size(I)
ans =
768 1024 3
You should perform the down-sampling process on all the layers.
The code you provided does not down-sample. A simple downsampling example is as follows:
imshow(I(1:2:end,1:2:end,:))
If I have an image, in which there is a page of text shot on a uniform background, how can I auto detect the boundaries between the paper and the background?
An example of the image I want to detect is shown below. The images that I will be dealing with consist of a single page on a uniform background and they can be rotated at any angle.
One simple method would be to threshold the image by some known value once you convert the image to grayscale. The problem with that approach is that we are applying a global threshold and so some of the paper at the bottom of the image will be lost if you make the threshold too high. If you make the threshold too low, then you'll certainly get the paper, but you'll include a lot of the background pixels too and it will probably be difficult to remove those pixels with post-processing.
One thing I can suggest is to use an adaptive threshold algorithm. An algorithm that has worked for me in the past is the Bradley-Roth adaptive thresholding algorithm. You can read up about it here on a post I commented on a while back:
Bradley Adaptive Thresholding -- Confused (questions)
However, if you want the gist of it: an integral image of the grayscale version of the image is computed first. The integral image is important because it allows you to calculate the sum of the pixels within a window in O(1) time. Computing the integral image itself is O(n^2), but you only have to do that once. With the integral image, you scan neighbourhoods of pixels of size s x s, and if a pixel's intensity is more than t% below the average intensity within its s x s window, it is classified as background; otherwise it is classified as foreground. This is adaptive because the threshold is computed from local pixel neighbourhoods rather than being a single global value.
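Before the full vectorized implementation below, the core O(1) window sum for a single pixel looks like this in scalar form (a sketch only: im, s and t are as described above, r and c are a pixel's row and column, and boundary clamping is omitted for brevity):
% Integral image via running sums, then one window sum from 4 lookups
intImg = cumsum(cumsum(double(im), 2), 1);
h = floor(s / 2);
winSum = intImg(r+h, c+h) - intImg(r-h-1, c+h) ...
       - intImg(r+h, c-h-1) + intImg(r-h-1, c-h-1);
count = (2*h + 1)^2; % number of pixels in the window
% Background if the pixel is more than t% below the window mean
isBackground = double(im(r,c)) * count <= winSum * (100 - t) / 100;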
I've coded an implementation of the Bradley-Roth algorithm here for you. The default parameters for the algorithm are s being 1/8th of the width of the image and t being 15%. Therefore, you can just call it this way to invoke the default parameters:
out = adaptiveThreshold(im);
im is the input image and out is a binary image that denotes what belongs to foreground (logical true) or background (logical false). You can play around with the second and third input parameters: s is the size of the thresholding window and t is the percentage we talked about above. You can call the function like so:
out = adaptiveThreshold(im, s, t);
Therefore, the code for the algorithm looks like this:
function [out] = adaptiveThreshold(im, s, t)
%// Error checking of the input
%// Default value for s is 1/8th the width of the image
%// Must make sure that this is a whole number
%// Too few or too many arguments?
if nargin == 0, error('Too few arguments'); end
if nargin >= 4, error('Too many arguments'); end
%// Default value for s is 1/8th the width of the image
%// Must make sure that this is a whole number
if nargin <= 1, s = round(size(im,2) / 8); end
%// Default value for t is 15
%// t is used to determine whether the current pixel is t% lower than the
%// average in the particular neighbourhood
if nargin <= 2, t = 15; end
%// Convert to grayscale if necessary then cast to double to ensure no
%// saturation
if size(im, 3) == 3
im = double(rgb2gray(im));
elseif size(im, 3) == 1
im = double(im);
else
error('Incompatible image: Must be a colour or grayscale image');
end
%// Compute integral image
intImage = cumsum(cumsum(im, 2), 1);
%// Define grid of points
[rows, cols] = size(im);
[X,Y] = meshgrid(1:cols, 1:rows);
%// Ensure s is even so that we are able to index the image properly
s = s + mod(s,2);
%// Access the four corners of each neighbourhood
x1 = X - s/2; x2 = X + s/2;
y1 = Y - s/2; y2 = Y + s/2;
%// Ensure no co-ordinates are out of bounds
x1(x1 < 1) = 1;
x2(x2 > cols) = cols;
y1(y1 < 1) = 1;
y2(y2 > rows) = rows;
%// Count how many pixels there are in each neighbourhood
count = (x2 - x1) .* (y2 - y1);
%// Compute row and column co-ordinates to access each corner of the
%// neighbourhood for the integral image
f1_x = x2; f1_y = y2;
f2_x = x2; f2_y = y1 - 1; f2_y(f2_y < 1) = 1;
f3_x = x1 - 1; f3_x(f3_x < 1) = 1; f3_y = y2;
f4_x = f3_x; f4_y = f2_y;
%// Compute 1D linear indices for each of the corners
ind_f1 = sub2ind([rows cols], f1_y, f1_x);
ind_f2 = sub2ind([rows cols], f2_y, f2_x);
ind_f3 = sub2ind([rows cols], f3_y, f3_x);
ind_f4 = sub2ind([rows cols], f4_y, f4_x);
%// Calculate the areas for each of the neighbourhoods
sums = intImage(ind_f1) - intImage(ind_f2) - intImage(ind_f3) + ...
intImage(ind_f4);
%// Determine whether the summed area surpasses a threshold
%// Set this output to 0 if it doesn't
locs = (im .* count) <= (sums * (100 - t) / 100);
out = true(size(im));
out(locs) = false;
end
When I use your image and I set s = 500 and t = 5, here's the code and this is the image I get:
im = imread('http://i.stack.imgur.com/MEcaz.jpg');
out = adaptiveThreshold(im, 500, 5);
imshow(out);
You can see that there are some spurious white pixels at the bottom of the image, and there are some holes inside the paper we need to fill in. As such, let's use some morphology: declare a structuring element that's a 15 x 15 square, perform an opening to remove the noisy pixels, then fill in the holes when we're done:
se = strel('square', 15);
out = imopen(out, se);
out = imfill(out, 'holes');
imshow(out);
This is what I get after all of that:
Not bad eh? Now if you really want to see what the image looks like with the paper segmented, we can use this mask and multiply it with the original image. This way, any pixels that belong to the paper are kept while those that belong to the background go away:
out_colour = bsxfun(@times, im, uint8(out));
imshow(out_colour);
We get this:
You'll have to play around with the parameters until it works for you, but the above parameters were the ones I used to get it working for the particular page you showed us. Image processing is all about trial and error, and putting processing steps in the right sequence until you get something good enough for your purposes.
Happy image filtering!
I know this thread about converting black to white and white to black simultaneously.
I would like to convert only black to white.
I know this thread covers what I am asking, but I do not understand what goes wrong.
Picture
Code
rgbImage = imread('ecg.png');
grayImage = rgb2gray(rgbImage); % for non-indexed images
level = graythresh(grayImage); % threshold for converting image to binary,
binaryImage = im2bw(grayImage, level);
% Extract the individual red, green, and blue color channels.
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
% Make the black parts pure red.
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 0;
blueChannel(~binaryImage) = 0;
% Now recombine to form the output image.
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);
Which gives
There seems to be something wrong in the red color channel.
Black is just (0,0,0) in RGB, so removing it should mean turning every (0,0,0) pixel to white (255,255,255).
Doing this idea with
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 255;
blueChannel(~binaryImage) = 255;
Gives
So I must have misunderstood something in MATLAB. The blue color should not contain any black, so this last image is strange.
How can you turn only black color to white?
I want to keep the blue color of the ECG.
If I understand you properly, you want to extract out the blue ECG plot while removing the text and axes. The best way to do that would be to examine the HSV colour space of the image. The HSV colour space is great for discerning colours just like the way humans do. We can clearly see that there are two distinct colours in the image.
We can convert the image to HSV using rgb2hsv and we can examine the components separately. The hue component represents the dominant colour of the pixel, the saturation denotes the purity or how much white light there is in the pixel and the value represents the intensity or strength of the pixel.
Try visualizing each channel by doing:
im = imread('http://i.stack.imgur.com/cFOSp.png'); %// Read in your image
hsv = rgb2hsv(im);
figure;
subplot(1,3,1); imshow(hsv(:,:,1)); title('Hue');
subplot(1,3,2); imshow(hsv(:,:,2)); title('Saturation');
subplot(1,3,3); imshow(hsv(:,:,3)); title('Value');
Hmm... well the hue and saturation don't help us at all. It's telling us the dominant colour and saturation are the same... but what sets them apart is the value. If you take a look at the image on the right, we can tell them apart by the strength of the colour itself. So what it's telling us is that the "black" pixels are actually blue but with almost no strength associated to it.
We can actually use this to our advantage. Any pixel whose value component is above a certain threshold is a pixel we want to keep.
Try setting a threshold... something like 0.75. MATLAB's dynamic range for HSV values is [0, 1], so:
mask = hsv(:,:,3) > 0.75;
When we threshold the value component, this is what we get:
There's obviously a bit of quantization noise... especially around the axes and font. What I'm going to do next is perform a morphological erosion so that I can eliminate the quantization noise that's around each of the numbers and the axes. I'm going to use a slightly large structuring element to ensure that I remove this noise. Using the image processing toolbox:
se = strel('square', 5);
mask_erode = imerode(mask, se);
We get this:
Great, so what I'm going to do now is make a copy of your original image, then set any pixel that is black from the mask I derived (above) to white in the final image. All of the other pixels should remain intact. This way, we can remove any text and the axes seen in your image:
im_final = im;
mask_final = repmat(mask_erode, [1 1 3]);
im_final(~mask_final) = 255;
I need to replicate the mask in the third dimension because this is a colour image and I need to set each channel to 255 simultaneously in the same spatial locations.
When I do that, this is what I get:
Now you'll notice that there are gaps in the graph... which is to be expected due to quantization noise. We can do something further by converting this image to grayscale, thresholding it, then joining the edges together by a morphological dilation. This is safe because we have already eliminated the axes and text. We can then use this as a mask to index into the original image to obtain our final graph.
Something like this:
im2 = rgb2gray(im_final);
thresh = im2 < 200;
se = strel('line', 10, 90);
im_dilate = imdilate(thresh, se);
mask2 = repmat(im_dilate, [1 1 3]);
im_final_final = 255*ones(size(im), class(im));
im_final_final(mask2) = im(mask2);
I threshold the previous image that we got without the text and axes after I convert it to grayscale, and then I perform dilation with a line structuring element that is 90 degrees in order to connect those lines that were originally disconnected. This thresholded image will contain the pixels that we ultimately need to sample from the original image so that we can get the graph data we need.
I then take this mask, replicate it, make a completely white image and then sample from the original image and place the locations we want from the original image in the white image.
This is our final image:
Very nice! I had to do all of that image processing because your image basically has quantization noise to begin with, so it's going to be a bit harder to get the graph entirely. Ander Biguri explained colour quantization noise in more detail in his answer, so certainly check out his post.
However, as a qualitative measure, we can subtract this image from the original image and see what is remaining:
imshow(rgb2gray(abs(double(im) - double(im_final_final))));
We get:
So it looks like the axes and text are removed fine, but there are some traces in the graph that we didn't capture from the original image and that makes sense. It all has to do with the proper thresholds you want to select in order to get the graph data. There are some trouble spots near the beginning of the graph, and that's probably due to the morphological processing that I did. This image you provided is quite tricky with the quantization noise, so it's going to be very difficult to get a perfect result. Also, these thresholds unfortunately are all heuristic, so play around with the thresholds until you get something that agrees with you.
Good luck!
What's the problem?
You want to detect all black parts of the image, but they are not really black.
Example:
Your idea (or your code):
You first binarize the image, selecting the pixels that ARE something as opposed to the pixels that are not. In short, you do: if pixel > level, the pixel is something.
Therefore there is a small misconception here! When you write
% Make the black parts pure red.
it should read
% Make every pixel that is something (not background) pure red.
Therefore, when you do
redChannel(~binaryImage) = 255;
greenChannel(~binaryImage) = 255;
blueChannel(~binaryImage) = 255;
You are doing
% Make every pixel that is something (not background) white
% (or what it is the same in this case, delete them).
Therefore what you should get is a completely white image. The image is not completely white because some pixels were labelled as "not something, part of the background" by the value of level, which in the case of your image is around 0.6.
A solution one could think of is manually setting the level to 0.05 or similar, so that only black pixels are selected in the gray-to-binary thresholding. But this will not work 100%: as you can see, the numbers have some decidedly non-black values.
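For example, a quick sketch of that manual-level attempt (the 0.05 level is the guess from above; the anti-aliased digits are grey rather than black, so they survive the cut):
% Only near-black pixels fall below the manually lowered level
grayImage = rgb2gray(imread('ecg.png'));
binaryImage = im2bw(grayImage, 0.05); % 0 = near-black, 1 = everything else
figure; imshow(binaryImage);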
How would I try to solve the problem:
I would try to find the colour you want, extract just that colour from the image, and then delete outliers.
Extract blue using HSV (I believe I answered you somewhere else about how to use HSV).
rgbImage = imread('ecg.png');
hsvImage=rgb2hsv(rgbImage);
I=rgbImage;
R=I(:,:,1);
G=I(:,:,2);
B=I(:,:,3);
th=0.1;
R((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
G((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
B((hsvImage(:,:,1)>(280/360))|(hsvImage(:,:,1)<(200/360)))=255;
I2= cat(3, R, G, B);
imshow(I2)
Once here, we would like to get the biggest blue part, which will be our signal. Therefore the best approach seems to be to first binarize the image, taking all the blue pixels:
% Binarize image, getting all the pixels that are "blue"
bw=im2bw(rgb2gray(I2),0.9999);
And then using bwlabel, label all the independent pixel "islands".
% Label each "blob"
lbl=bwlabel(~bw);
The most repeated label will be the signal, so we find it and separate the background from the signal using that label.
% Find the blob with the highest amount of data. That will be your signal.
r=histc(lbl(:),1:max(lbl(:)));
[~,idxmax]=max(r);
% Profit!
signal=rgbImage;
signal(repmat((lbl~=idxmax),[1 1 3]))=255;
background=rgbImage;
background(repmat((lbl==idxmax),[1 1 3]))=255;
Here is a plot with the signal, background and difference (using the same equation as @rayryeng used).
Here is a variation on @rayryeng's solution to extract the blue signal:
%// retrieve picture
imgRGB = imread('http://i.stack.imgur.com/cFOSp.png');
%// detect axis lines and labels
imgHSV = rgb2hsv(imgRGB);
BW = (imgHSV(:,:,3) < 1);
BW = imclose(imclose(BW, strel('line',40,0)), strel('line',10,90));
%// clear those masked pixels by setting them to background white color
imgRGB2 = imgRGB;
imgRGB2(repmat(BW,[1 1 3])) = 255;
%// show extracted signal
imshow(imgRGB2)
To get a better view, here is the detected mask overlayed on top of the original image (I'm using imoverlay function from the File Exchange):
figure
imshow(imoverlay(imgRGB, BW, uint8([255,0,0])))
Here is a code for this:
rgbImage = imread('ecg.png');
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
black = ~redChannel&~greenChannel&~blueChannel;
redChannel(black) = 255;
greenChannel(black) = 255;
blueChannel(black) = 255;
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);
black is the area containing the black pixels. These pixels are set to white in each color channel.
In your code you use a threshold on a grayscale image, so of course a much bigger area of pixels is set to white or red, respectively. In this code, only pixels that contain absolutely no red, green, or blue are set to white.
The following code does the same with a threshold for each color channel:
rgbImage = imread('ecg.png');
redChannel = rgbImage(:, :, 1);
greenChannel = rgbImage(:, :, 2);
blueChannel = rgbImage(:, :, 3);
black = (redChannel<150)&(greenChannel<150)&(blueChannel<150);
redChannel(black) = 255;
greenChannel(black) = 255;
blueChannel(black) = 255;
rgbImageOut = cat(3, redChannel, greenChannel, blueChannel);
imshow(rgbImageOut);