Color only a segment of an image in Matlab

I'm trying to color only a segment of an image in Matlab. For example, I load an RGB image, then I obtain a mask with Otsu's method (graythresh). I want to keep the color only in the pixels that have a value of 1 after applying im2bw with graythresh as the threshold. For example:
image = imread('peppers.png');
thr = graythresh(image);
bw = im2bw(image, thr);
With this code I obtain the following binary image:
My goal is to keep the color in the white pixels.
Thanks!

I have another suggestion on how to replace the pixels we don't care about. This works by creating linear indices for each of the slices where black pixels exist in the bw image. The summation with the result of find is done because bw is the size of just one "slice" of image and this is how we get the indices for the other 2 slices.
Starting with MATLAB R2016b:
image(find(~bw)+[0 numel(bw)*[1 2]]) = NaN;
In older versions:
image(bsxfun(@plus,find(~bw),[0 numel(bw)*[1 2]])) = NaN;
Then imshow(image) gives:
Note that NaN gets converted to 0 for integer classes.
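If you actually want the masked pixels to stay NaN (so they drop out of later statistics), a minimal sketch is to convert to double first; the file name and Otsu threshold mirror the question, so treat this as an assumption-laden outline:
img = im2double(imread('peppers.png'));      % double, so NaN is representable
grayimg = rgb2gray(img);
bw = im2bw(grayimg, graythresh(grayimg));
img(find(~bw) + [0 numel(bw)*[1 2]]) = NaN;  % R2016b+ implicit expansion
mean(img(:), 'omitnan')                      % masked pixels are ignored in statistics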
Following the clarification that the other pixels should be kept in their gray version, see the below code:
% Load image:
img = imread('peppers.png');
% Create a grayscale version:
grayimg = rgb2gray(img);
% Segment image:
if ~verLessThan('matlab','9.0') && exist('imbinarize.m','file') == 2
    % R2016a onward:
    bw = imbinarize(grayimg);
    % Alternatively, work on just one of the color channels, e.g. red:
    % bw = imbinarize(img(:,:,1));
else
    % Before R2016a:
    thr = graythresh(grayimg);
    bw = im2bw(grayimg, thr);
end
output_img = repmat(grayimg,[1 1 3]);
colorpix = bsxfun(@plus,find(bw),[0 numel(bw)*[1 2]]);
output_img(colorpix) = img(colorpix);
figure; imshow(output_img);
The result when binarizing using only the red channel:

Your question leaves out "and replace the rest with black". Here are two ways:
A compact solution: use bsxfun:
newImage = bsxfun(@times, Image, cast(bw, 'like', Image));
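On R2016b or newer, implicit expansion lets you drop bsxfun; an equivalent sketch would be:
% R2016b+ sketch: implicit expansion in place of bsxfun
newImage = Image .* cast(bw, 'like', Image);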
Although I am happy with the compact bsxfun solution, you can also take a look at this step-by-step approach:
% separate the RGB layers:
R = image(:,:,1);
G = image(:,:,2);
B = image(:,:,3);
% change the values to zero or your desired color wherever bw is false:
R(~bw) = 0;
G(~bw) = 0;
B(~bw) = 0;
% concatenate the results:
newImage = cat(3, R, G, B);
Which can give you different replacements for the black region:
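For instance, a sketch that paints the masked-out region a solid color (red here, chosen arbitrarily; assumes a uint8 image) instead of black:
% Sketch: replace the ~bw region with a solid color instead of zeros
R(~bw) = 255;   % use 1 instead of 255 for double images
G(~bw) = 0;
B(~bw) = 0;
newImage = cat(3, R, G, B);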
UPDATE:
According to the comments, the false area of bw should be replaced with grayscale image of the same input. This is how to achieve it:
image = imread('peppers.png');
thr = graythresh(image);
bw = im2bw(image, thr);
gr = rgb2gray(image); % generate grayscale image from RGB
newImage = image; % start from the original RGB image
newImage(repmat(~bw, 1, 1, 3)) = repmat(gr(~bw), 1, 1, 3); % substitute values
% figure; imshow(newImage)
With this result:

Related

Overlapping grayscale and RGB Images

I would like to overlap two images, one grayscale and one RGB image. I would like to impose the RGB image on top of the grayscale image, but ONLY for pixels greater than a certain value. I tried using the double function in MATLAB, but this seems to change the color scheme and I cannot recover the original RGB colors. What should I do in order to retain the original RGB image instead of mapping it to one of the MATLAB colormaps? Below is my attempt at superimposing:
pixelvalues = double(imread('hello.png'));
PixelInt = mean(pixelvalues,3);
I1 = ind2rgb(Brightfield(:,:,1), gray(256)); %Brightfield
I2 = ind2rgb(PixelInt, jet(256)); %RGB Image
imshow(I2,[])
[r,c,d] = size(I2);
I1 = I1(1:r,1:c,1:d);
% Replacing those pixels below threshold with Brightfield Image
threshold = 70;
I2R = I2(:,:,1); I2G = I2(:,:,2); I2B = I2(:,:,3);
I1R = I1(:,:,1); I1G = I1(:,:,2); I1B = I1(:,:,3);
I2R(PixelInt<threshold) = I1R(PixelInt<threshold);
I2G(PixelInt<threshold) = I1G(PixelInt<threshold);
I2B(PixelInt<threshold) = I1B(PixelInt<threshold);
I2(:,:,1) = I2R; I2(:,:,2) = I2G; I2(:,:,3) = I2B;
h = figure;
imshow(I2,[])
Original RGB Image:
Brightfield:
Overlay:
Is the content of pixelvalues what you show in your first image? If so, that image does not use a jet colormap. It has pink and white values above the red values, whereas jet stops at dark red at the upper limits. When you take the mean of those values and then generate a new RGB image with ind2rgb using the jet colormap, you're creating an inherently different image. You probably want to use pixelvalues directly in generating your overlay, like so:
% Load/create your starting images:
pixelvalues = imread('hello.png'); % Color overlay
I1 = repmat(Brightfield(:, :, 1), [1 1 3]); % Grayscale underlay
[r, c, d] = size(pixelvalues);
I1 = I1(1:r, 1:c, 1:d);
% Create image mask:
PixelInt = mean(double(pixelvalues), 3);
threshold = 70;
mask = repmat((PixelInt > threshold), [1 1 3]);
% Combine images:
I1(mask) = pixelvalues(mask);
imshow(I1);
Note that you may need to do some type conversions when loading/creating the starting images. I'm assuming 'hello.png' is a uint8 RGB image and Brightfield is of type uint8. If I load your first image as pixelvalues and your second image as I1, I get the following when running the above code:
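As for the type conversions, a hedged sketch (assuming Brightfield arrives as double or uint16 and needs rescaling to uint8) could be:
% Sketch: bring both inputs to uint8 before combining
Brightfield = im2uint8(mat2gray(Brightfield)); % rescale to [0 1], then to uint8
pixelvalues = im2uint8(pixelvalues);           % no-op if already uint8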
Create a mask and use it to combine the images:
onionOrig = imread('onion.png');
onionGray = rgb2gray(onionOrig);
onionMask = ~(onionOrig(:,:,1)<100 & onionOrig(:,:,2)<100 & onionOrig(:,:,3)<100);
onionMasked(:,:,1) = double(onionOrig(:,:,1)) .* onionMask + double(onionGray) .* ~onionMask;
onionMasked(:,:,2) = double(onionOrig(:,:,2)) .* onionMask + double(onionGray) .* ~onionMask;
onionMasked(:,:,3) = double(onionOrig(:,:,3)) .* onionMask + double(onionGray) .* ~onionMask;
onionFinal = uint8(onionMasked);
imshow(onionFinal)

Matlab - Scale Colorbar of Image

How can I scale the colorbar axis of a false color image?
I read this post and copied the code, but it does not seem to work correctly:
MATLAB Colorbar - Same colors, scaled values
Please see the two images below. In the first (without the scaling) the color axis goes
[1 2 3 4 5 6]*10^4
In the second image, it goes
[0.005 0.01 0.015 0.02 0.025]
The correct scaling (with C = 100000) would be
[0.1 0.2 0.3 0.4 0.5 0.6]
Without scaling
Wrong scaling
I want the color axis to be scaled by 1/C, where I can freely choose C, so that when the pixel value is 10^4 and C = 10^6, the scale shows 10^-2.
The reason I multiply my image by C first is to get more decimal places, because without the C scaling all values below 1 would be displayed as zero.
When I run the code I get yticks as a workspace variable with the following values:
[500 1000 1500 2000 2500]
My code:
RGB = imread('IMG_0043.tif');% Read Image
info = imfinfo('IMG_0043.CR2'); % get Metadata
C = 1000000; % Constant to adjust image
x = info.DigitalCamera; % get EXIF
t = getfield(x, 'ExposureTime');% save ExposureTime
f = getfield(x, 'FNumber'); % save FNumber
S = getfield(x, 'ISOSpeedRatings');% save ISOSpeedRatings
date = getfield(x,'DateTimeOriginal');
I = rgb2gray(RGB); % convert Image to greyscale
K = 480; % camera constant (must be evaluated experimentally)
% N_s = K*(t*S)/power(f,2)*L
L = power(f,2)/(K*t*S)*C; %
J = immultiply(I,L); % multiply each value by the constant, so the image is calibrated to cd/m^2
hFig = figure('Name','False Color Luminance Map', 'ToolBar','none','MenuBar','none');
% Create/initialize default colormap of jet.
cmap = jet(16); % or 256, 64, 32 or whatever.
% Now make lowest values show up as black.
cmap(1,:) = 0;
% Now make highest values show up as white.
cmap(end,:) = 1;
imshow(J,'Colormap',cmap) % show Image in false color
colorbar % add colorbar
h = colorbar; % define colorbar as variable
y_Scl = (1/C);
yticks = get(gca,'YTick');
set(h,'YTickLabel',sprintfc('%g', [yticks.*y_Scl]))
ylabel(h, 'cd/m^2')% add unit label
title(date); % Show date in image
caxis auto % set axis to auto
datacursormode on % enable datacursor
img = getframe(gcf);
nowstr = datestr(now, 'yyyy-mm-dd_HH_MM_SS');
folder = 'C:\Users\Taiko\Desktop\FalseColor\';
ImageFiles = dir( fullfile(folder, '*.jpg') );
if isempty(ImageFiles)
    next_idx = 1;
else
    lastfile = ImageFiles(end).name;
    [~, basename, ~] = fileparts(lastfile);
    file_number_str = regexp(basename, '(?<=.*_)\d+$', 'match');
    last_idx = str2double(file_number_str);
    next_idx = last_idx + 1;
end
newfilename = fullfile( folder, sprintf('%s_%04d.jpg', nowstr, next_idx) );
imwrite(img.cdata, newfilename);
Problems:
1) You are getting YTick of the figure (gca) but not the color bar. That would give you the "pixel" coordinates of the graph, instead of the actual values. Use yticks = get(h,'YTick');.
2) caxis auto should come before overwriting the tick labels (and after enabling the color bar); otherwise the scale and ticks will mismatch.
3) Do you mean C = 100000?
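Putting the three fixes together, the relevant part of the script might look like this (a sketch; variable names and the sprintfc call follow the question):
h = colorbar;                % define colorbar as variable
caxis auto                   % fix the color axis before touching the tick labels
y_Scl = 1/C;
yticks = get(h, 'YTick');    % ticks of the color bar, not of gca
set(h, 'YTickLabel', sprintfc('%g', yticks .* y_Scl))
ylabel(h, 'cd/m^2')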
Result:

Connect disjoint edges in binary image

I performed some operations on an image of a cube and obtained a binary image of the cube's edges, which are disconnected in some places. The image I obtained is shown below:
I want to join the sides to make it a closed figure. I have tried the following:
BW = im2bw(image,0.5);
BW = imdilate(BW,strel('square',5));
figure,imshow(BW);
But this only thickens the image. It does not connect the edges. I have also tried bwmorph() and various other functions, but it is not working. Can anyone please suggest a function or steps to connect the edges? Thank you
This could be one approach -
%// Read in the input image
img = im2bw(imread('http://i.imgur.com/Bl7zhcn.jpg'));
%// There seems to be white border, which seems to be non-intended and
%// therefore could be removed
img = img(5:end-4,5:end-4);
%// Thin input binary image, find the endpoints in it and connect them
im1 = bwmorph(img,'thin',Inf);
[x,y] = find(bwmorph(im1,'endpoints'));
shapeInserter = vision.ShapeInserter('Shape', 'Lines', 'BorderColor', 'White');
for iter = 1:numel(x)-1
    two_pts = int32([y(iter) x(iter) y(iter+1) x(iter+1)]);
    im1 = step(shapeInserter, im1, two_pts);
end
figure,imshow(im1),title('Thinned endpoints connected image')
%// Dilate the output image a bit
se = strel('diamond', 1);
im2 = imdilate(im1,se);
figure,imshow(im2),title('Dilated Thinned endpoints connected image')
%// Get a convex shaped blob from the endpoints connected and dilate image
im3 = bwconvhull(im2,'objects',4);
figure,imshow(im3),title('Convex blob corresponding to previous output')
%// Detect the boundary of the convex shaped blob and
%// "attach" to the input image to get the final output
im4 = bwboundaries(im3);
idx = im4{:};
im5 = false(size(im3));
im5(sub2ind(size(im5),idx(:,1),idx(:,2))) = 1;
img_out = img;
img_out(im5==1 & img==0)=1;
figure,imshow(img_out),title('Final output')
Debug images -
I used the above code to write the following one. I haven't tested it on many images, and it may not be as efficient as the one above, but it executes comparatively faster, so I thought I would post it as a solution.
I = imread('image.jpg'); % your original image
I=im2bw(I);
figure,imshow(I)
I= I(5:end-4,5:end-4);
im1 = bwmorph(I,'thin',Inf);
[x,y] = find(bwmorph(im1,'endpoints'));
for iter = 1:numel(x)-1
    im1 = linept(im1, x(iter), y(iter), x(iter+1), y(iter+1));
end
im2=imfill(im1,'holes');
figure,imshow(im2);
BW = edge(im2);
figure,imshow(BW);
se = strel('diamond', 1);
im3 = imdilate(BW,se);
figure,imshow(im3);
The final result is this:
I got the "linept" function from here:http://in.mathworks.com/matlabcentral/fileexchange/4177-connect-two-pixels

How to limit the raster processing extent using a spatial mask?

I am trying to limit raster processing in MATLAB to include only areas within a shapefile boundary, similar to how ArcGIS Spatial Analyst functions use a mask. Here is some (reproducible) sample data I am working with:
A 4-band NAIP image (WARNING 169MB download)
A shapefile of study area boundaries (A zipped shapefile on File Dropper)
Here is a MATLAB script I use to calculate NDVI:
file = 'C:\path\to\doi1m2011_41111h4nw_usda.tif';
[I R] = geotiffread(file);
outputdir = 'C:\output\'
% Calculate NDVI
NIR = im2single(I(:,:,4));
red = im2single(I(:,:,1));
ndvi = (NIR - red) ./ (NIR + red);
ndvi = double(ndvi);
imshow(ndvi,'DisplayRange',[-1 1]);
% Stretch to 0 - 255 and convert to 8-bit unsigned integer
ndvi = floor((ndvi + 1) * 128); % [-1 1] -> [0 256]
ndvi(ndvi < 0) = 0; % not really necessary, just in case & for symmetry
ndvi(ndvi > 255) = 255; % in case the original value was exactly 1
ndvi = uint8(ndvi); % change data type from double to uint8
% Write NDVI to .tif file (optional)
tiffdata = geotiffinfo(file);
outfilename = [outputdir 'ndvi_' 'temp' '.tif'];
geotiffwrite(outfilename, ndvi, R, 'GeoKeyDirectoryTag', tiffdata.GeoTIFFTags.GeoKeyDirectoryTag)
The following image illustrates what I would like to accomplish using MATLAB. For this example, I used the ArcGIS raster calculator (Float(Band4-Band1)/Float(Band4+Band1)) to produce the NDVI on the right. I also specified the study area shapefile as a mask in the environment settings.
Question:
How can I limit the raster processing extent in MATLAB using a polygon shapefile as a spatial mask to replicate the results shown in the figure?
What I have unsuccessfully tried:
roipoly and poly2mask, although I cannot seem to apply these functions properly (taking into account these are spatial data) to produce the desired effects.
EDIT:
I tried the following to convert the shapefile to a mask, without success. Not sure where I am going wrong here...
s = 'C:\path\to\studyArea.shp'
shp = shaperead(s)
lat = [shp.X];
lon = [shp.Y];
x = shp.BoundingBox(2) - shp.BoundingBox(1)
y = shp.BoundingBox(3) - shp.BoundingBox(1)
x = poly2mask(lat,lon, x, y)
Error messages:
Error using poly2mask
Expected input number 1, X, to be finite.
Error in poly2mask (line 49)
validateattributes(x,{'double'},{'real','vector','finite'},mfilename,'X',1);
Error in createMask (line 13)
x = poly2mask(lat,lon, x, y)
You can read the region of interest by:
roi = shaperead('study_area_shapefile/studyArea.shp');
Chop the trailing NaN:
rx = roi.X(1:end-1);
ry = roi.Y(1:end-1);
If you have several polygons in your shapefile, they are separated by NaNs and you have to treat them separately.
Then use the worldToIntrinsic method of the sat-image's spatial reference object to convert the polygon points into image coordinates:
[ix, iy] = R.worldToIntrinsic(rx,ry);
This assumes both coordinate systems are the same.
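If instead the shapefile is in geographic coordinates (lon/lat) while the raster is projected, one hedged option is to project the vertices first with projfwd, using the projection structure read from the GeoTIFF (file as in the script below):
% Sketch, assuming roi.X/roi.Y hold lon/lat and the raster is projected
proj = geotiffinfo(file);                % projection info of the raster
[rx, ry] = projfwd(proj, roi.Y, roi.X);  % note: projfwd takes (lat, lon)
[ix, iy] = R.worldToIntrinsic(rx, ry);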
Then you can go and make your mask by:
mask = poly2mask(ix,iy,R.RasterSize(1),R.RasterSize(2));
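For the several-polygons case mentioned above, a sketch that ORs one mask per NaN-separated ring (assuming each ring is terminated by a NaN, as shaperead usually returns):
% Sketch: combined mask over all NaN-separated rings in the record
mask = false(R.RasterSize(1), R.RasterSize(2));
breaks = [0 find(isnan(roi.X))];                 % each NaN closes one ring
for k = 1:numel(breaks)-1
    seg = breaks(k)+1 : breaks(k+1)-1;           % vertex indices of ring k
    [ix, iy] = R.worldToIntrinsic(roi.X(seg), roi.Y(seg));
    mask = mask | poly2mask(ix, iy, R.RasterSize(1), R.RasterSize(2));
end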
You can use the mask on your original multilayer image before making any calculation by:
I(repmat(~mask,[1,1,4])) = nan;
Or use it on a single layer (i.e. red) by:
red(~mask) = nan;
If the regions are very small, it could be beneficial (for memory and computational cost) to convert a masked image to a sparse matrix. I have not tested whether that actually makes a speed difference.
red(~mask) = 0;
sred = sparse(double(red));
Unfortunately, sparse matrices are only supported for doubles (and logicals), so your uint8 data needs to be converted first.
Generally you should crop the ROI out of the image. Look in the objects "roi" and "R" to find useful parameters and methods. I haven't done it here.
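One hedged way to do that crop from the mask itself (note this does not adjust the referencing object R, so adapt R as well if the cropped result must stay georeferenced):
% Sketch: crop image and mask to the bounding box of the ROI
rows = find(any(mask, 2));
cols = find(any(mask, 1));
Icrop    = I(rows(1):rows(end), cols(1):cols(end), :);
maskcrop = mask(rows(1):rows(end), cols(1):cols(end));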
Finally my version of your script, with some slight other changes:
file = 'doi1m2011_41111h4nw_usda.tif';
[I R] = geotiffread(file);
outputdir = '';
% Read Region of Interest
roi = shaperead('study_area_shapefile/studyArea.shp');
% Remove trailing nan from shapefile
rx = roi.X(1:end-1);
ry = roi.Y(1:end-1);
% convert to image coordinates
[ix, iy] = R.worldToIntrinsic(rx,ry);
% make the mask
mask = poly2mask(ix,iy,R.RasterSize(1),R.RasterSize(2));
% mask sat-image
I(repmat(~mask,[1,1,4])) = 0;
% convert to sparse matrices
NIR = sparse(double(I(:,:,4)));
red = sparse(double(I(:,:,1)));
% Calculate NDVI
ndvi = (NIR - red) ./ (NIR + red);
% convert back to full matrices
ndvi = full(ndvi);
imshow(ndvi,'DisplayRange',[-1 1]);
% Stretch to 0 - 255 and convert to 8-bit unsigned integer
ndvi = (ndvi + 1) / 2 * 255; % [-1 1] -> [0 255]
ndvi = uint8(ndvi); % change and round data type from double to uint8
% Write NDVI to .tif file (optional)
tiffdata = geotiffinfo(file);
outfilename = [outputdir 'ndvi_' 'temp' '.tif'];
geotiffwrite(outfilename, ndvi, R, 'GeoKeyDirectoryTag', tiffdata.GeoTIFFTags.GeoKeyDirectoryTag);
mapshow(outfilename);
There are three steps here, for which I will create 3 functions:
Compute the NDVI for the complete input image: ndvi = comp_ndvi(nir, red)
Compute the mask from the shapefile: mask = comp_mask(shape)
Combine the NDVI and the mask: output = combine_ndvi_mask(ndvi, mask)
You have the code for comp_ndvi() in your question. The code for combine_ndvi_mask() depends on what you want to do to the masked areas; if you want to make them white, it might look like:
function output = combine_ndvi_mask(ndvi, mask)
output = ndvi;
output(~mask) = 255;
end
In comp_mask() you will want to use poly2mask() to convert the polygon vertices into the raster mask. In order to help here I need to know what you've got already. Have you loaded the vertices into MATLAB? What have you tried with poly2mask?
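Assuming the vertices come from shaperead and R is the raster reference returned by geotiffread (as in the question), comp_mask() might be sketched as:
function mask = comp_mask(shapefile, R)
% Sketch, assuming a single-polygon shapefile and a map raster reference R
shp = shaperead(shapefile);
vx = shp.X(~isnan(shp.X));              % drop the trailing NaN separator
vy = shp.Y(~isnan(shp.Y));
[ix, iy] = worldToIntrinsic(R, vx, vy); % world -> intrinsic image coordinates
mask = poly2mask(ix, iy, R.RasterSize(1), R.RasterSize(2));
end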

Separating Background and Foreground

I am new to Matlab and to Image Processing as well. I am working on separating background and foreground in images like this
I have hundreds of images like this, found here. By trial and error I found a threshold (in RGB space): where the background is, the red layer is always less than 150 and the green and blue layers are greater than 150.
So if my RGB image is I and my red, green, and blue layers are
redMatrix = I(:,:,1);
greenMatrix = I(:,:,2);
blueMatrix = I(:,:,3);
By finding the coordinates where the red value is less than 150 and the green and blue values are greater than 150, I can get the coordinates of the background like this:
[r1 c1] = find(redMatrix < 150);
[r2 c2] = find(greenMatrix > 150);
[r3 c3] = find(blueMatrix > 150);
Now I get the coordinates of thousands of pixels in r1, c1, r2, c2, r3, and c3.
My questions:
How do I find common values, like the coordinates of the pixels where red is less than 150 and green and blue are greater than 150?
I could iterate over every coordinate in r1 and c1 and check whether it occurs in r2, c2 and r3, c3 to see if it is a common point, but that would be very expensive.
Can this be achieved without a loop?
If I somehow come up with common points like [commonR commonC], where commonR and commonC are both of size 5000 x 1, then to access these background pixels of image I, I have to index with commonR and commonC like
I(commonR(i,1),commonC(i,1))
That is expensive too. So again, my question is: can this be done without a loop?
Any help would be appreciated.
I got the solution from @Science_Fiction's answer.
Just elaborating on it, I used
mask = I(:,:,1) < 150 & I(:,:,2) > 150 & I(:,:,3) > 150;
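Then, to whiten the background in all channels (a sketch assuming a uint8 RGB image):
% Sketch: set the background (mask == true) to white in every channel
maskedImg = I;
maskedImg(repmat(mask, [1 1 3])) = 255;
imshow(maskedImg);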
No loop is needed. You could do it like this:
I = imread('image.jpg');
redMatrix = I(:,:,1);
greenMatrix = I(:,:,2);
blueMatrix = I(:,:,3);
J(:,:,1) = redMatrix < 150;
J(:,:,2) = greenMatrix > 150;
J(:,:,3) = blueMatrix > 150;
J = 255 * uint8(J);
imshow(J);
A greyscale image would also suffice to separate the background.
K = ((redMatrix < 150) + (greenMatrix > 150) + (blueMatrix > 150))/3;
imshow(K);
EDIT
I had another look, also using the other images you linked to.
Given the variance in background colors, I thought you would get better results deriving a threshold value from the image histogram instead of hardcoding it.
Occasionally, this algorithm is a little too rigorous, e.g. erasing part of the clothes together with the background. But I think over 90% of the images are separated pretty well, which is more robust than what you could hope to achieve with a fixed threshold.
close all;
path = 'C:\path\to\CUHK_training_cropped_photos\photos';
files = dir(path);
bins = 16;
for f = 3:numel(files)
    fprintf('%i/%i\n', f, numel(files));
    file = files(f);
    if isempty(strfind(file.name, 'jpg'))
        continue
    end
    I = imread([path filesep file.name]);
    % Take the histogram of the blue channel
    B = I(:,:,3);
    h = imhist(B, bins);
    h2 = h(bins/2:end);
    % Find the most common bin in the *upper half*
    % of the histogram
    m = bins/2 + find(h2 == max(h2));
    % Set the threshold value somewhat below
    % the value corresponding to that bin
    thr = m/bins - .25;
    BW = im2bw(B, thr);
    % Pad with ones to ensure background connectivity
    BW = padarray(BW, [1 1], 1);
    % Find connected regions in BW image
    CC = bwconncomp(BW);
    L = labelmatrix(CC);
    % Crop back again
    L = L(2:end-1,2:end-1);
    % Set the largest region in the original image to white
    for c = 1:3
        channel = I(:,:,c);
        channel(L==1) = 255;
        I(:,:,c) = channel;
    end
    % Show the results with a pause every 16 images
    subplot(4,4,mod(f-3,16)+1);
    imshow(I);
    title(sprintf('Img %i, thr %.3f', f, thr));
    if mod(f-3,16)+1 == 16
        pause
        clf
    end
end
pause
close all;
Results:
Your approach seems basic but decent. Since the background of this particular image is composed mainly of blue, you can be crude and do:
mask = img(:,:,3) > 150;
This will set the pixels where the blue value is greater than 150 to 1 (true) and the rest to 0 (false). You will have a black and white image though.
imshow(mask);
To add colour back
mask3d(:,:,1) = mask;
mask3d(:,:,2) = mask;
mask3d(:,:,3) = mask;
img(mask3d) = 255;
imshow(img);
Hopefully this gives you the colour image of the face with a pure white background. All this requires some trial and error.
