I'm trying to obtain an image which has everything but several colorful objects grayed out, as shown here:
My original image is this (the caps have slightly different colors than the example above):
I tried to apply a threshold process then binarize the image, which gave me the following result (mask is on the left, result of multiplication is on the right):
And now I'm trying to combine all of these masks. Should I use a loop to combine them into a single image, or is there a better way? I tried using (&,&,&) but it turned into a black image.
Your original image has 7 distinct regions: 5 colorful tips, the hand, and the background. The question then becomes: how do we disregard the background (wall) and the hand, which happen to be the two largest regions, and only keep the colors of the tips?
If your MATLAB license permits, I would recommend using the Color Thresholder App (colorThresholder), which allows you to find a suitable representation of the colors in your image. Experimenting with it for a minute, I can say that the L*a*b* color space allows good separation between the regions/colors:
We can then export this function, yielding the following:
function [BW,maskedRGBImage] = createMask(RGB)
%createMask Threshold RGB image using auto-generated code from colorThresholder app.
% [BW,MASKEDRGBIMAGE] = createMask(RGB) thresholds image RGB using
% auto-generated code from the colorThresholder app. The colorspace and
% range for each channel of the colorspace were set within the app. The
% segmentation mask is returned in BW, and a composite of the mask and
% original RGB images is returned in maskedRGBImage.
% Auto-generated by colorThresholder app on 25-Dec-2018
%------------------------------------------------------
% Convert RGB image to chosen color space
I = rgb2lab(RGB);
% Define thresholds for channel 1 based on histogram settings
channel1Min = 0.040;
channel1Max = 88.466;
% Define thresholds for channel 2 based on histogram settings
channel2Min = -4.428;
channel2Max = 26.417;
% Define thresholds for channel 3 based on histogram settings
channel3Min = -12.019;
channel3Max = 38.908;
% Create mask based on chosen histogram thresholds
sliderBW = (I(:,:,1) >= channel1Min ) & (I(:,:,1) <= channel1Max) & ...
    (I(:,:,2) >= channel2Min ) & (I(:,:,2) <= channel2Max) & ...
    (I(:,:,3) >= channel3Min ) & (I(:,:,3) <= channel3Max);
BW = sliderBW;
% Invert mask
BW = ~BW;
% Initialize output masked image based on input image.
maskedRGBImage = RGB;
% Set background pixels where BW is false to zero.
maskedRGBImage(repmat(~BW,[1 1 3])) = 0;
end
Now that you have the mask, we can easily convert the original image to grayscale, replicate it along the 3rd dimension, and then take the colorful pixels from the original image using logical indexing:
function q53922067
img = imread("https://i.stack.imgur.com/39WNm.jpg");
% Image segmentation:
BW = repmat( createMask(img), 1, 1, 3 ); % Note that this is the function shown above
% Keeping the ROI colorful and the rest gray:
gImg = repmat( rgb2gray(img), 1, 1, 3 ); % This is done for easier assignment later
gImg(BW) = img(BW);
% Final result:
figure(); imshow(gImg);
end
Which yields:
To combine masks, use | (element-wise logical OR), not & (logical AND).
mask = mask1 | mask2 | mask3;
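For example, if you export one createMask-style function per colour from the app (the function names below are hypothetical), the masks can be OR-ed together and used exactly as before:
% Hypothetical per-colour masks exported from the Color Thresholder app:
maskRed   = createMaskRed(img);
maskGreen = createMaskGreen(img);
maskBlue  = createMaskBlue(img);
% Element-wise OR: a pixel is kept if ANY of the masks keeps it
mask = maskRed | maskGreen | maskBlue;
% Same colour-on-grayscale trick as above
BW   = repmat(mask, 1, 1, 3);
gImg = repmat(rgb2gray(img), 1, 1, 3);
gImg(BW) = img(BW);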
I have an image (white background with 1-5 black dots) called main.jpg (the main image).
I am trying to place another image (secondary.jpg) at every black dot found in the main image.
In order to do that, I:
find the black pixels in the main image,
resize the secondary image to the specific size that I want,
and plot the resized image at every coordinate found in step one (the black pixel should be the center coordinate of the secondary image).
Unfortunately, I don't know how to do the third step.
for example:
main image is:
secondary image is:
output:
(The dots are behind the chairs. They are the image center points)
This is my code:
mainImage=imread('main.jpg')
secondaryImage=imread('secondary.jpg')
secondaryImageResized = resizeImage(secondaryImage)
[m n]=size(mainImage)
for i=1:n
for j=1:m
% if it's black pixel
if (mainImage(i,j)==1)
outputImage = plotImageInCoordinates(secondaryImageResized, i, j)
% save this image
imwrite(outputImage,map,'clown.bmp')
end
end
end
% resize the image to (250,350) width, height
function [ Image ] = resizeImage(img)
image = imresize(img, [250 350]);
end
function [outputImage] = plotImageInCoordinates(image, x, y)
% Do something
end
Any help appreciated!
Here's an alternative without convolution. One intricacy that you must take into account is that if you want to place each image at the centre of each dot, you must determine where the top left corner is and index into your output image so that you draw the desired object from the top left corner to the bottom right corner. You can do this by taking each black dot location and subtracting by half the width horizontally and half the height vertically.
Now onto your actual problem. It's much more efficient if you loop through the set of points that are black, not the entire image. You can do this by using the find command to determine the row and column locations that are 0. Once you do this, loop through each pair of row and column coordinates, do the subtraction of the coordinates and then place it on the output image.
I will also handle the case where the objects overlap. To accommodate for this, I will accumulate pixel values, then take the average at each location that was hit at least once.
Your code, modified to accommodate this, is as follows. Take note that because you are using JPEG compression, you will have compression artifacts, so regions that are 0 may not necessarily be 0. I will threshold at an intensity of 128 to ensure that zero regions are actually zero. You will also have the situation where objects may go outside the boundaries of the image. To accommodate for this, pad the image with half the object's height at the top and bottom and half its width on the left and right, then crop the result after you're done placing the objects.
mainImage=imread('https://i.stack.imgur.com/gbhWJ.png');
secondaryImage=imread('https://i.stack.imgur.com/P0meM.png');
secondaryImageResized = imresize(secondaryImage, [250 300]);
% Find half height and width
rows = size(secondaryImageResized, 1);
cols = size(secondaryImageResized, 2);
halfHeight = floor(rows / 2);
halfWidth = floor(cols / 2);
% Create a padded image that contains our main image. Pad with white
% pixels.
rowsMain = size(mainImage, 1);
colsMain = size(mainImage, 2);
outputImage = 255*ones([2*halfHeight + rowsMain, 2*halfWidth + colsMain, size(mainImage, 3)], class(mainImage));
outputImage(halfHeight + 1 : halfHeight + rowsMain, ...
halfWidth + 1 : halfWidth + colsMain, :) = mainImage;
% Find a mask of the black pixels
mask = outputImage(:,:,1) < 128;
% Obtain black pixel locations
[row, col] = find(mask);
% Reset the output image so that they're all zeros now. We use this
% to output our final image. Also cast to ensure accumulation is proper.
outputImage(:) = 0;
outputImage = double(outputImage);
% Keeps track of how many times each pixel was hit by the object
% This is so that we can find the average at each location.
counts = zeros([size(mask), size(mainImage, 3)]);
% For each row and column location in the image
for i = 1 : numel(row)
    % Get the row and column locations
    r = row(i); c = col(i);
    % Offset to get the top left corner
    r = r - halfHeight;
    c = c - halfWidth;
    % Place onto final image
    outputImage(r:r+rows-1, c:c+cols-1, :) = outputImage(r:r+rows-1, c:c+cols-1, :) + double(secondaryImageResized);
    % Accumulate the counts
    counts(r:r+rows-1,c:c+cols-1,:) = counts(r:r+rows-1,c:c+cols-1,:) + 1;
end
% Find average - Any values that were not hit, change to white
outputImage = outputImage ./ counts;
outputImage(counts == 0) = 255;
outputImage = uint8(outputImage);
% Now crop and show
outputImage = outputImage(halfHeight + 1 : halfHeight + rowsMain, ...
halfWidth + 1 : halfWidth + colsMain, :);
close all; imshow(outputImage);
% Write the final output
imwrite(outputImage, 'finalimage.jpg', 'Quality', 100);
We get:
Edit
I wasn't told that your images had transparency. Therefore what you need to do is use imread but ensure that you read in the alpha channel as well. We then check whether one exists and, if it does, set any fully transparent pixels to white. You can do that with the following code. Ensure this gets placed at the very top of your code, replacing the images being loaded in:
mainImage=imread('https://i.stack.imgur.com/gbhWJ.png');
% Change - to accommodate for transparency
[secondaryImage, ~, alpha] = imread('https://i.imgur.com/qYJSzEZ.png');
if ~isempty(alpha)
    m = alpha == 0;
    for i = 1 : size(secondaryImage,3)
        m2 = secondaryImage(:,:,i);
        m2(m) = 255;
        secondaryImage(:,:,i) = m2;
    end
end
secondaryImageResized = imresize(secondaryImage, [250 300]);
% Rest of your code follows...
% ...
The code above has been modified to read in the basketball image. The rest of the code remains the same and we thus get:
You can use convolution to achieve the desired effect. This will place a copy of im everywhere there is a black dot in imz.
% load secondary image
im = double(imread('secondary.jpg'))/255.0;
% create some artificial image with black indicators
imz = ones(500,500,3);
imz(50,50,:) = 0;
imz(400,200,:) = 0;
imz(200,400,:) = 0;
% create output image
imout = zeros(size(imz));
imout(:,:,1) = conv2(1-imz(:,:,1),1-im(:,:,1),'same');
imout(:,:,2) = conv2(1-imz(:,:,2),1-im(:,:,2),'same');
imout(:,:,3) = conv2(1-imz(:,:,3),1-im(:,:,3),'same');
imout = 1-imout;
% output
imshow(imout);
Also, you probably want to avoid saving main.jpg as a .jpg, since it results in lossy compression and will likely cause issues with any method that relies on exact pixel values. I would recommend using .png, which is lossless and will also likely compress better than .jpg for synthetic images where the same colors repeat many times.
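For instance, when you generate the dot image (mainImage here stands for whatever array you build), saving it losslessly is just:
imwrite(mainImage, 'main.png');   % PNG is lossless, so black pixels stay exactly 0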
I've been working with a retina image; currently I am applying a wavelet transform to it, but I have noticed two problems:
The optic disc, which introduces noise into the image
The circle delimiting the retina
The original image is the following:
My plan is to set the background to the tone of the optic disc so as not to lose any detail of the blood vessels of the retina (I post some code I have been playing with, but I still do not understand how to find the tone of the optic disc and apply it to the image without altering the blood vessels).
And with respect to the outer circle of the retina, I don't know what you would recommend (I do not know about masks; I would appreciate any literature you can point me to).
c = [242 134 72];% Background to change
thresh = 50;
A = imread('E:\Prueba.jpg');
B = zeros(size(A));
Ar = A(:,:,1);
Ag = A(:,:,2);
Ab = A(:,:,3);
Br = B(:,:,1);
Bg = B(:,:,2);
Bb = B(:,:,3);
logmap = (Ar > (c(1) - thresh)).*(Ar < (c(1) + thresh)).*...
    (Ag > (c(2) - thresh)).*(Ag < (c(2) + thresh)).*...
    (Ab > (c(3) - thresh)).*(Ab < (c(3) + thresh));
Ar(logmap == 1) = Br(logmap == 1);
Ag(logmap == 1) = Bg(logmap == 1);
Ab(logmap == 1) = Bb(logmap == 1);
A = cat(3 ,Ar,Ag,Ab);
imshow(A);
courtesy of the question How can I change the background color of the image?
The image I get is the following:
I need a picture like this, where the optic disc does not introduce noise when segmenting the blood vessels of the retina.
I want the background to be uniform ... with only the veins visible.
I continued to work and obtained the following image. As you can see, the optic disc removes some parts of the blood vessels (veins) that lie above it, so I need to eliminate it or make the entire background of the image uniform.
As Wouter said, you should first correct the inhomogeneity of the image. I would do it in my own way:
First, the parameters you can adjust to optimize the output:
gfilt = 3;
thresh = 0.4;
erode = 3;
brighten = 20;
You will see how they are used in the code.
This is the main step: to apply a Gaussian filter to the image to make it smooth and then subtract the result from the original image. This way you end up with the sharp changes in your data, which happens to be the vessels:
A = imread('Prueba.jpg');
B = imgaussfilt(A, gfilt) - A; % Gaussian filter and subtraction
% figure; imshow(B)
Then I create a binary mask to remove the unwanted area of the image:
% the 'imadjust' makes sure that you get the same result even if you ...
% change the intensity of illumination. "thresh" is the threshold of ...
% conversion to black and white:
circ = im2bw(imadjust(A(:,:,1)), thresh);
% here I am shrinking the "circ" for "erode" pixels:
circ = imerode(circ, strel('disk', erode));
circ3 = repmat(circ, 1, 1, 3); % and here I extended it to 3D.
% figure; imshow(circ)
And finally, I remove everything on the surrounding dark area and show the result:
B(~circ3) = 0; % ignore the surrounding area
figure; imshow(B * brighten) % brighten and show the output
Notes:
I do not see the last image as a final result, but you could probably apply some thresholds to it and separate the vessels from the rest (see the sketch after these notes).
The quality of the image you provided is quite low; I would expect good results with better data.
Although the intensity of the blue channel is lower than the others, the vessels show up better there than in the other channels, because blood is red!
If you are acquiring this data or have access to the person, I suggest using blue light for illumination, since it gives you higher contrast on the vessels.
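As a rough sketch of that thresholding step (the threshold value and minimum blob size below are assumptions you would tune for your data):
V = rgb2gray(B * brighten);              % collapse the brightened difference image to one channel
vesselMask = imbinarize(V, 0.15);        % global threshold; 0.15 is only a starting guess
vesselMask = bwareaopen(vesselMask, 30); % drop tiny specks
figure; imshow(vesselMask)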
Morphological operations are good for working with spaghetti-like images.
Original image:
Convert to grayscale:
original = rgb2gray(gavrF); % gavrF is the RGB retina image
Estimate the background via morphological closing:
se = strel('disk', 3);
background = imclose(original, se);
Estimate of the background:
You could then, for example, subtract this background from the original grayscale image. You can do this directly with a bottom-hat transform on the grayscale image:
flatImage = imbothat(original, strel('disk', 4));
With an output:
Noisy, but now you have access to global thresholding methods. Remember to change the data type to double if you wish to do some subtraction or division manually.
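For instance, a global Otsu threshold on the bottom-hat result might look like this (purely a sketch; imbinarize requires a fairly recent release, otherwise use im2bw):
flatD = im2double(flatImage);    % switch to double before further arithmetic, as noted above
level = graythresh(flatD);       % global Otsu threshold
bw    = imbinarize(flatD, level);
figure; imshow(bw)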
I am interested in adding a single Gaussian shaped object to an existing image, something like in the attached image. The base image that I would like to add the object to is 8-bit unsigned with values ranging from 0-255. The bright object in the attached image is actually a tree represented by normalized difference vegetation index (NDVI) data. The attached script is what I have so far. How can I add a Gaussian shaped object (i.e. a tree) with values ranging from 110-155 to an existing NDVI image?
Sample data available here which can be used with this script to calculate NDVI
file = 'F:\path\to\fourband\image.tif';
[I R] = geotiffread(file);
outputdir = 'F:\path\to\output\directory\'
%% Make NDVI calculations
NIR = im2single(I(:,:,4));
red = im2single(I(:,:,1));
ndvi = (NIR - red) ./ (NIR + red);
ndvi = double(ndvi);
%% Stretch NDVI to 0-255 and convert to 8-bit unsigned integer
ndvi = floor((ndvi + 1) * 128); % [-1 1] -> [0 256]
ndvi(ndvi < 0) = 0; % not really necessary, just in case & for symmetry
ndvi(ndvi > 255) = 255; % in case the original value was exactly 1
ndvi = uint8(ndvi); % change data type from double to uint8
%% Need to add a random tree in the image here
%% Write to geotiff
tiffdata = geotiffinfo(file);
outfilename = [outputdir 'ndvi_' '.tif'];
geotiffwrite(outfilename, ndvi, R, 'GeoKeyDirectoryTag', tiffdata.GeoTIFFTags.GeoKeyDirectoryTag)
Your post is asking how to do three things:
How do we generate a Gaussian shaped object?
How can we do this so that the values range between 110 - 155?
How do we place this in our image?
Let's answer each one separately, where the order of each question builds on the knowledge from the previous questions.
How do we generate a Gaussian shaped object?
You can use fspecial from the Image Processing Toolbox to generate a Gaussian for you:
mask = fspecial('gaussian', hsize, sigma);
hsize specifies the size of your Gaussian. You have not specified it here in your question, so I'm assuming you will want to play around with this yourself. This will produce a hsize x hsize Gaussian matrix. sigma is the standard deviation of your Gaussian distribution. Again, you have also not specified what this is. sigma and hsize go hand-in-hand. Referring to my previous post on how to determine sigma, it is generally a good rule to set the standard deviation of your mask to be set to the 3-sigma rule. As such, once you set hsize, you can calculate sigma to be:
sigma = (hsize-1) / 6;
As such, figure out what hsize is, then calculate your sigma. Afterwards, invoke fspecial as I did above. It's generally a good idea to make hsize an odd integer. The reason why is that when we finally place this in your image, the syntax to do so will allow your mask to be symmetrically placed. I'll talk about this when we get to the last question.
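For instance, if you settled on a 25 x 25 mask (25 here is just an assumed value for illustration):
hsize = 25;                      % odd, so the mask has a well-defined centre
sigma = (hsize - 1) / 6;         % 3-sigma rule from above
mask  = fspecial('gaussian', hsize, sigma);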
How can we do this so that the values range between 110 - 155?
We can do this by adjusting the values within mask so that the minimum is 110 while the maximum is 155. This can be done by:
%// Adjust so that values are between 0 and 1
maskAdjust = (mask - min(mask(:))) / (max(mask(:)) - min(mask(:)));
%//Scale by 45 so the range goes between 0 and 45
%//Cast to uint8 to make this compatible for your image
maskAdjust = uint8(45*maskAdjust);
%// Add 110 to every value so the range goes between 110 - 155
maskAdjust = maskAdjust + 110;
In general, if you want to adjust the values within your Gaussian mask so that it goes from [a,b], you would normalize between 0 and 1 first, then do:
maskAdjust = uint8((b-a)*maskAdjust) + a;
You'll notice that we cast this mask to uint8. The reason we do this is to make the mask compatible to be placed in your image.
How do we place this in our image?
All you have to do is figure out the row and column you would like the centre of the Gaussian mask to be placed. Let's assume these variables are stored in row and col. As such, assuming you want to place this in ndvi, all you have to do is the following:
hsizeHalf = floor(hsize/2); %// hsize being odd is important
%// Place Gaussian shape in our image
ndvi(row - hsizeHalf : row + hsizeHalf, col - hsizeHalf : col + hsizeHalf) = maskAdjust;
The reason why hsize should be odd is to allow an even placement of the shape in the image. For example, if the mask size is 5 x 5, then the above syntax for ndvi simplifies to:
ndvi(row-2:row+2, col-2:col+2) = maskAdjust;
From the centre of the mask, it stretches 2 rows above and 2 rows below. The columns stretch from 2 columns to the left to 2 columns to the right. If the mask size was even, then we would have an ambiguous choice on how we should place the mask. If the mask size was 4 x 4 as an example, should we choose the second row, or third row as the centre axis? As such, to simplify things, make sure that the size of your mask is odd, or mod(hsize,2) == 1.
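Putting the three steps together, a minimal end-to-end sketch could look like the following (hsize, row and col are assumptions you would choose yourself, keeping the mask inside the image borders):
hsize = 25;                                  % assumed odd mask size
sigma = (hsize - 1) / 6;
mask  = fspecial('gaussian', hsize, sigma);
% Scale to [110, 155] and cast to uint8
maskAdjust = (mask - min(mask(:))) / (max(mask(:)) - min(mask(:)));
maskAdjust = uint8(45 * maskAdjust) + 110;
% Place the tree with its centre at (row, col) -- example coordinates
row = 200; col = 300;
hsizeHalf = floor(hsize / 2);
ndvi(row - hsizeHalf : row + hsizeHalf, col - hsizeHalf : col + hsizeHalf) = maskAdjust;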
This should hopefully and adequately answer your questions. Good luck!
After I did an imclearborder, there are still a few unwanted objects around the barcode. How can I remove those objects to isolate the barcode? I have pasted my code below for your reference.
rgb = imread('barcode2.jpg');
% Resize Image
rgb = imresize(rgb,0.33);
figure(),imshow(rgb);
% Convert from RGB to Gray
Igray = double(rgb2gray(rgb));
% Calculate the Gradients
[dIx, dIy] = gradient(Igray);
B = abs(dIx) - abs(dIy);
% Low-Pass Filtering
H = fspecial('gaussian', 20, 10);
C = imfilter(B, H);
C = imclearborder(C);
figure(),imagesc(C);colorbar;
Well, I have already explained it in your previous question, How to find the location of red region in an image using MATLAB?, but with OpenCV code and output images.
Instead of asking for code, try to implement it yourself.
Below is what to do next.
1) Convert image 'C' in your code to binary.
2) Apply some erosion to remove small noise (the barcode region also shrinks a bit).
3) Apply dilation to compensate for the previous erosion (most of the noise was removed by the erosion, so it won't come back).
4) Find contours in the image.
5) Find their areas. Most probably the contour with the maximum area will be the barcode, because other things like letters and words will be small (you can see this in the grayscale image you have provided).
6) Select the contour with the maximum area and draw a bounding rectangle around it.
The result is already provided in your previous question, and it works very nicely. Try to implement it yourself with the help of the MATLAB documentation, and come back only when you hit an error you don't understand.
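For reference, these steps map quite directly onto MATLAB functions. A rough sketch, continuing from the variable C in your code (the structuring-element sizes are assumptions you would tune), might be:
bw = imbinarize(mat2gray(C));                       % 1) binarise the filtered image
bw = imerode(bw, strel('disk', 3));                 % 2) erosion removes small noise
bw = imdilate(bw, strel('disk', 3));                % 3) dilation restores the barcode region
stats = regionprops(bw, 'Area', 'BoundingBox');     % 4)-5) measure the connected components
[~, idx] = max([stats.Area]);                       % 6) keep the largest one
figure; imshow(rgb); hold on
rectangle('Position', stats(idx).BoundingBox, 'EdgeColor', 'r', 'LineWidth', 2);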
%% Hi, I am adding my code to yours at the end of your code %%%%
clear all;
rgb = imread('barcode.jpeg');
% Resize Image
rgb = imresize(rgb,0.33);
figure(),imshow(rgb);
% Convert from RGB to Gray
Igray = double(rgb2gray(rgb));
Igrayc = Igray;
% Calculate the Gradients
[dIx, dIy] = gradient(Igray);
B = abs(dIx) - abs(dIy);
% Low-Pass Filtering
H = fspecial('gaussian', 10, 5);
C = imfilter(B, H);
C = imclearborder(C);
imshow(Igray,[]);
figure(),imagesc(C);colorbar;
%%%%%%%%%%%%%%%%%%%%%%%%from here my code starts%%%%%%%%%%%%%%%%
bw = im2bw(C);%%%binarising the image
% imshow(bw);
%%%% if letters or any other noise are present around the barcode
%% Note: the size of the noise and letters should be smaller than the
%% barcode size
labelImage = bwlabel(bw,8);
len = 0; labe = 0;
for i = 1:max(max(labelImage))
    a = find(labelImage==i);
    if (len < length(a))
        len = length(a);
        labe = i;
    end
end
imag = zeros(size(labelImage));
imag(find(labelImage==labe))=255;
% imtool(imag);
%%% if necessary, do erosion
% se2 = strel('line',10,0);
% imag= imerode(imag,se2);
% imag= imerode(imag,se2);
[r c]= find(imag==255);
minr = min(r);
maxc = max(c);
minc = min(c);
maxr = max(r);
imag1 = zeros(size(labelImage));
for i = minr:maxr
    for j = minc:maxc
        imag1(i,j) = 255;
    end
end
% figure,imtool(imag1);
varit = find(imag1==0);
Igrayc(varit)=0;
%%%%%result image having only barcode
imshow(Igrayc,[]);
%%%%%original image
figure(),imshow(Igray,[]);
Hope it is useful
How do I convert an image represented as double into an image that I can use to produce a histogram?
(using dohist:)
% computes the histogram of a given image into num bins.
% values less than 0 go into bin 1, values bigger than 255
% go into bin 255
% if show=0, then do not show. Otherwise show in figure(show)
function thehist = dohist(theimage,show)
% set up bin edges for histogram
edges = zeros(256,1);
for i = 1 : 256;
edges(i) = i-1;
end
[R,C] = size(theimage);
imagevec = reshape(theimage,1,R*C); % turn image into long array
thehist = histc(imagevec,edges)'; % do histogram
if show > 0
figure(show)
clf
pause(0.1)
plot(thehist)
axis([0, 256, 0, 1.1*max(thehist)])
end
I am guessing that you just need to normalize your image first; to do this you can use:
theimage = 255*(theimage./max(theimage(:)));
Your code seems fine; you could make sure the bounds get treated correctly with:
theimage(theimage<0) = 0;
theimage(theimage>255) = 255;
But this shouldn't be necessary; usually you either get a double image in the range [0,1] or a uint8 image in [0,255] when you read an image with imread(). Just rescale to [0,255] in the first case if needed.
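For instance, if theimage came back as double in [0,1] (an assumption about your data), a one-line rescale before calling dohist would be:
theimage = theimage * 255;   % now spans [0,255], matching the bin edges in dohist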
Some other tips:
You can make the edges-vector like this:
edges = 0:255;
And theimage(:) gives the same elements as reshape(theimage,1,R*C) (as a column rather than a row vector), which is all you need here since you just want one long vector.
The built-in function hist can be applied directly to images of class double; pass IM(:) so that all pixels are pooled into a single histogram (hist treats each column of a matrix separately).
Matlab documentation link
If you have an image which you suspect to have N bits of resolution on the interval [A,B], you can call hist directly on the image (without conversion) like:
[H,bins] = hist(IM(:),linspace(A,B,2^N));
to retrieve the histogram and bins, or
hist(IM(:),linspace(A,B,2^N));
to simply plot the histogram.
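For example, for an 8-bit image stored as double on [0,255] (N = 8, A = 0, B = 255 are assumptions about your data):
[H, bins] = hist(IM(:), linspace(0, 255, 2^8));   % pool all pixels into one histogram
bar(bins, H);                                     % plot it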