Speed up MATLAB code for eliminating white pixels

I have RGB images from a camera which contain white pixels. I wrote the following code to eliminate them. It works, but it takes forever.
% eliminate white pixels
while 1
    maxValue = max(imageRGB(:));
    [maxY maxX] = maxPosition(squeeze(imageRGB(:,:,c)));
    surr = 2;
    x_l = maxX - surr; if x_l < 1, x_l = 1; end
    x_r = maxX + surr; if x_r > size(imageRGB,2), x_r = size(imageRGB,2); end
    y_u = maxY - surr; if y_u < 1, y_u = 1; end
    y_b = maxY + surr; if y_b > size(imageRGB,1), y_b = size(imageRGB,1); end
    meanArea = ((y_b-y_u)+1) * ((x_r-x_l)+1) - 1;
    mean = (sum(sum(imageRGB(y_u:y_b, x_l:x_r,c))) - maxValue)/meanArea;
    if (maxValue/mean > 1.5)
        imageRGB(maxY,maxX,c) = mean;
    else
        break;
    end
end
Any ideas how to speed up this code?

Correct me if I'm wrong, or ignore this 'answer' entirely, but the code posted appears to:
Find the most white pixel in the image (I'm guessing here, imageRGB isn't a Matlab built-in).
Find the position in the image of the most white pixel (another guess, another unknown function maxPosition).
Do some sort of averaging to replace the most-white pixel with an average over its immediate neighbourhood.
Repeat the process until the stopping criterion is satisfied.
If you have the Image Processing Toolbox, you will find that it has all sorts of functions for adjusting pixel intensity which is, I think, what you are trying to do, so you can stop reading this answer now. If you don't have the toolbox, read on.
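For instance, if the white pixels are isolated hot pixels, a single call to the toolbox's median filter per channel is often all you need (a sketch, with an untuned 3 x 3 window):
for c = 1:3
    imageRGB(:,:,c) = medfilt2(imageRGB(:,:,c), [3 3]);
end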
If you can, you should amend your entire approach and decide, from one read of the image, what the threshold for averaging should be. This would lift the computation of maxValue out of the loop, perhaps replacing it with a single computation of a thresholdValue. Then you could lift the calculation of [maxY maxX] out of the loop too.
If you can't do this, there are still some options for increasing the speed of your operations. You could either:
Pad the image with a 2-pixel halo all round before starting operations. Then apply your operation to all the white pixels in the original image. Obviously you'll have to set the halo pixels to the right value to leave your operation unchanged.
or
Operate only on the pixels in the image which are not within 2 pixels of the edge. This will produce an output image which is 4 pixels smaller in each dimension, but on large images this is often not a problem.
Either of these eliminates a whole slew of if statements and the repeated calculation of meanArea (since it becomes a constant).
If you can calculate a threshold once, at the start of processing, rather than recalculating it iteratively, you might find that you can write a function implementing the averaging which you can apply to all the pixels in the image, eliminating the need to find the white pixels first. The function would, of course, have to leave the un-white pixels unchanged. Applying an operation to every pixel, ensuring that it is a null operation (or an identity operation) for the pixels which should not be changed, is sometimes faster than first finding the pixels that need to be changed and then applying the operation only to those pixels.
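To make that last idea concrete, here is a minimal sketch for a single channel c, keeping the fixed ratio of 1.5 from the question; unlike the original loop it replaces all flagged pixels in one pass, and one of the later answers develops a similar conv2-based approach:
channel = double(imageRGB(:,:,c));
surr = 2;
% replicate a surr-pixel halo using plain indexing (no toolbox needed)
rows = [ones(1,surr) 1:size(channel,1) size(channel,1)*ones(1,surr)];
cols = [ones(1,surr) 1:size(channel,2) size(channel,2)*ones(1,surr)];
padded = channel(rows, cols);
% neighbourhood mean excluding the centre pixel; meanArea is now the constant (2*surr+1)^2 - 1
kernel = ones(2*surr+1) / ((2*surr+1)^2 - 1);
kernel(surr+1, surr+1) = 0;
neighbourMean = conv2(padded, kernel, 'valid'); % same size as channel
% identity operation for normal pixels, averaging for the white ones
white = channel > 1.5 * neighbourMean;
channel(white) = neighbourMean(white);
imageRGB(:,:,c) = channel;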

From what I know, if statements perform poorly.
you could replace
x_l = maxX - surr; if x_l < 1, x_l = 1; end
with
x_l = max(maxX - surr,1);
and analogously for the others.
Also, you could move the (maxValue/mean > 1.5) test into the condition of the while loop.
In the lines
maxValue = max(imageRGB(:));
[maxY maxX] = maxPosition(squeeze(imageRGB(:,:,c)));
you search for the maximum twice. I suppose you could save some time if you write it like this:
[maxY maxX] = maxPosition(squeeze(imageRGB(:,:,c)));
maxValue = imageRGB(maxY,maxX,c);
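Putting those suggestions together, the loop might look like this (just a sketch: maxPosition is the asker's own helper, and the mean variable is renamed because mean shadows a built-in function):
surr = 2;
ratio = Inf; % force at least one pass
while ratio > 1.5
    [maxY maxX] = maxPosition(squeeze(imageRGB(:,:,c)));
    maxValue = double(imageRGB(maxY,maxX,c)); % cast so the ratio below is computed in double
    x_l = max(maxX - surr, 1); x_r = min(maxX + surr, size(imageRGB,2));
    y_u = max(maxY - surr, 1); y_b = min(maxY + surr, size(imageRGB,1));
    meanArea = (y_b - y_u + 1) * (x_r - x_l + 1) - 1;
    nbMean = (sum(sum(imageRGB(y_u:y_b, x_l:x_r, c))) - maxValue) / meanArea;
    ratio = maxValue / nbMean;
    if ratio > 1.5 % only overwrite the pixel when we are not about to stop
        imageRGB(maxY,maxX,c) = nbMean;
    end
end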

Another possibility would be to drop the repeated search for the maximum and just calculate the local average over the whole image. This is easily done with conv2, which is a built-in and therefore very fast compared to anything any of us could cook up.
Assuming you are working with double gray-scale images:
% parameters (example values - tune them for your data)
filterSize = 2;     % neighbourhood half-width, as in the original code
relThreshold = 1.5; % pixel/mean ratio above which a pixel counts as white
absThreshold = 0.5; % absolute brightness a white pixel must also exceed
% generate an averaging filter
filterMat = ones(2*filterSize+1);
filterMat = filterMat/sum(filterMat(:));
% convolve with image
meanComplete = conv2(picture,filterMat,'same');
% calculate the decision criterion
changeIndices = picture./meanComplete > relThreshold & picture > absThreshold;
% use logical indexing to replace white pixels with the local mean
newPicture = picture;
newPicture(changeIndices) = meanComplete(changeIndices);
This needs about 50 ms for one Full-HD image.

Related

Histogram of an image but without considering the first k pixels

I would like to create a histogram of an image but without considering the first k pixels.
E.g. for a 50x70 image and k = 40, the histogram is calculated over the last 3460 pixels; the first 40 pixels of the image are ignored.
The k pixels to skip are taken in raster-scan order (starting from the top left and proceeding row by row).
Another example would be k = 3.
Obviously I can't assign a value to those k pixels otherwise the histogram would be incorrect.
Honestly I have no idea how to start.
How can I do that?
Thanks so much
The vectorized solution to your problem would be
function [trimmedHist] = histKtoEnd(image, k)
    imageVec = reshape(image.',[],1); % Transform the image into a vector. Note that the image has to be transposed in order to achieve the correct (raster-scan) counting order
    imageWithoutKPixels = imageVec(k+1:end); % Create vector without first k pixels
    trimmedHist = accumarray(imageWithoutKPixels,1); % Create the histogram using accumarray
end
Once you have that function in your working directory, you can use
image=randi(4,4,4)
k=6;
trimmedHistogram=histKtoEnd(image,k)
to try it.
EDIT: If you just need the plot, you can also use histogram(imageWithoutKPixels) in the fourth line of the function I wrote.
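One caveat: accumarray requires positive integer subscripts, so the above works for an image of values 1..n (like the randi example above) but would error on an image containing zero-valued pixels. For, say, a uint8 image you could shift everything up by one bin (a sketch assuming 256 gray levels, so bin n holds gray level n-1):
trimmedHist = accumarray(double(imageWithoutKPixels(:)) + 1, 1, [256 1]);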
One way to do it is this:
histogram = zeros(1,256);
skipcount = 0;
for i = 1:size(image,1)
    for j = 1:size(image,2)
        skipcount = skipcount + 1;
        if (skipcount > 40) % skip the first k = 40 pixels in raster order
            histogram(1,image(i,j)+1) = histogram(1,image(i,j)+1) + 1;
        end
    end
end
If the skipped pixels correspond to an exact number of top rows, you can drop the costly conditional check and just start the outer loop from the appropriate index, as sketched below.
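For example, when k is an exact multiple of the image width, whole rows can be skipped (a sketch; k is assumed to divide evenly by the row length):
rowsToSkip = k / size(image,2); % assumes k is a multiple of the row length
histogram = zeros(1,256);
for i = rowsToSkip+1 : size(image,1)
    for j = 1:size(image,2)
        histogram(1,image(i,j)+1) = histogram(1,image(i,j)+1) + 1;
    end
end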
Vec = reshape(image.', 1, []); % transpose first so the pixels come out in raster-scan order (row by row)
Vec = Vec(k+1:end);
Hist = zeros(1, 256);
for i = 0:255
    grayI = (Vec == i);
    Hist(1, i+1) = sum(grayI(:));
end
The first two lines arrange the pixels in raster order and drop the first k of them, so they are not considered in the computation.
Then you count how many 0's you have and save that in the array, and the same for all the other gray levels.
In the Hist vector, the i-th cell holds the number of occurrences of gray level (i-1).
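As an aside, if your MATLAB is new enough to have histcounts (introduced in R2014b), the whole counting loop collapses to a single call:
Hist = histcounts(Vec, 0:256); % one bin per gray level 0..255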

How to apply Thresholding in image processing

This is sample code for the K-means algorithm.
k = 5;
[Centroid,new_cluster] = kmeans_algorithm(inv_trans_img,k);
for i_loop = 1:k
    cluster = zeros(size(inv_trans_img));
    pos = find(new_cluster==i_loop);
    cluster(pos) = new_cluster(pos);
    figure; imshow(cluster,[]); title('K-means');
end
I need to get the final image from this K-means algorithm, and I need to pass that image on to the thresholding process. I did it like below.
tumour_image=cluster;
n = 512;
binarized_img = zeros(n,n);
sort_val = sort(tumour_image(:));
mid_val = ceil(length(sort_val)/2);
threshold = tumour_image(mid_val);
binarized_img(find(tumour_image>=threshold)) = 1;
binarized_img(find(tumour_image<threshold)) = 0;
imshow(binarized_img);title('binarized image');
But now the problem is that only a white image is coming out as a result. How can I solve this?
Your threshold should be:
threshold = sort_val(mid_val);
You need to get the median of the sorted values, not the center element of tumour_image.
As @NeilSlater mentions in the comments, the reason that you're getting an all-white image from your existing code is that you are, by chance, selecting a black pixel from the original image, so when you threshold, the entire image is greater than or equal to that pixel in value.
In the case of images in which the majority of the pixels are 0, this will still give you an all-white image as a result. One way around this, and the most analogous to what you're currently doing, is to take the median of the nonzero pixels.
mid_val = ceil((find(sort_val, 1)+length(sort_val))/2);
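Putting the pieces together, the nonzero-median variant reads (a sketch reusing the asker's variable names):
sort_val = sort(tumour_image(:));
mid_val = ceil((find(sort_val, 1) + length(sort_val)) / 2); % median of the nonzero values
threshold = sort_val(mid_val);
binarized_img = tumour_image >= threshold;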
Alternatively, if you know which clusters you're interested in you can simply keep only those clusters.
binarized_image = tumour_image >= 3; % keep clusters 3 and above

Laplacian Image Filtering and Sharpening Images in MATLAB

I am trying to "translate" what's mentioned in Gonzalez and Woods (2nd Edition) about the Laplacian filter.
I've read in the image and created the filter. However, when I try to display the result (by subtraction, since the center element is negative), I don't get the image as in the textbook.
I think the main reason is the "scaling". However, I'm not sure how exactly to do that. From what I understand, some online resources say that the scaling is just so that the values are between 0-255. From my code, I see that the values are already within that range.
I would really appreciate any pointers.
Below is the original image I used:
Below is my code, and the resultant sharpened image.
Thanks!
clc;
close all;
a = rgb2gray(imread('e:\moon.png'));
lap = [1 1 1; 1 -8 1; 1 1 1];
resp = uint8(filter2(lap, a, 'same'));
sharpened = imsubtract(a, resp);
figure;
subplot(1,3,1);imshow(a); title('Original image');
subplot(1,3,2);imshow(resp); title('Laplacian filtered image');
subplot(1,3,3);imshow(sharpened); title('Sharpened image');
I have a few tips for you:
This is just a little thing but filter2 performs correlation. You actually need to perform convolution, which rotates the kernel by 180 degrees before performing the weighted sum between neighbourhoods of pixels and the kernel. However because the kernel is symmetric, convolution and correlation perform the same thing in this case.
I would recommend you use imfilter to facilitate the filtering as you are using methods from the Image Processing Toolbox already. It's faster than filter2 or conv2 and takes advantage of the Intel Integrated Performance Primitives.
I highly recommend you do everything in double precision first, then convert back to uint8 when you're done. Use im2double to convert your image (most likely uint8) to double precision. When performing sharpening, this maintains precision and prematurely casting to uint8 then performing the subtraction will give you unintended side effects. uint8 will cap results that are negative or beyond 255 and this may also be a reason why you're not getting the right results. Therefore, convert the image to double, filter the image, sharpen the result by subtracting the image with the filtered result (via the Laplacian) and then convert back to uint8 by im2uint8.
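To see the saturation problem concretely: in uint8 arithmetic, uint8(100) - uint8(150) evaluates to 0 rather than -50, so negative filter responses are silently destroyed if you cast too early.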
You've also provided a link to the pipeline that you're trying to imitate: http://www.idlcoyote.com/ip_tips/sharpen.html
The differences between your code and the link are:
The kernel has a positive centre. Therefore the 1s are negative while the centre is +8 and you'll have to add the filtered result to the original image.
In the link, they normalize the filtered response so that the minimum is 0 and the maximum is 1.
Once you add the filtered response onto the original image, you also normalize this result so that the minimum is 0 and the maximum is 1.
You perform a linear contrast enhancement so that intensity 60 becomes the new minimum and intensity 200 becomes the new maximum. You can use imadjust to do this. The function takes in an image as well as two arrays - The first array is the input minimum and maximum intensity and the second array is where the minimum and maximum should map to. As such, I'd like to map the input intensity 60 to the output intensity 0 and the input intensity 200 to the output intensity 255. Make sure the intensities specified are between 0 and 1 though so you'll have to divide each quantity by 255 as stated in the documentation.
As such:
clc;
close all;
a = im2double(imread('moon.png')); %// Read in your image
lap = [-1 -1 -1; -1 8 -1; -1 -1 -1]; %// Change - Centre is now positive
resp = imfilter(a, lap, 'conv'); %// Change
%// Change - Normalize the response image
minR = min(resp(:));
maxR = max(resp(:));
resp = (resp - minR) / (maxR - minR);
%// Change - Adding to original image now
sharpened = a + resp;
%// Change - Normalize the sharpened result
minA = min(sharpened(:));
maxA = max(sharpened(:));
sharpened = (sharpened - minA) / (maxA - minA);
%// Change - Perform linear contrast enhancement
sharpened = imadjust(sharpened, [60/255 200/255], [0 1]);
figure;
subplot(1,3,1);imshow(a); title('Original image');
subplot(1,3,2);imshow(resp); title('Laplacian filtered image');
subplot(1,3,3);imshow(sharpened); title('Sharpened image');
I get this figure now... which seems to agree with the figures seen in the link:

How do I efficiently create a BW mask for this microscopic image?

So, some background. I was tasked to write a MATLAB program to count the number of yeast cells in visible-light microscopy images. To do that, I think the first step will be cell segmentation. Before I got the real experimental image set, I developed an algorithm on a test image set, utilizing watershed. It looks like this:
The first step of the watershed is generating a BW mask for the cells. Then I generate a bwdist image with imposed local minima derived from the BW mask. With that I can generate the watershed easily.
As you can see, my algorithm relies on the successful generation of the BW mask, because I need to generate the bwdist image and the markers from it. Originally, I generated the BW mask with the following steps:
Generate the local standard deviation of the image: sdImage = stdfilt(grayImage, ones(9))
Use thresholding to generate the initial BW mask: binaryImage = sdImage < 8;
Use imclearborder to clear the background, and some other code to add the cells on the border back.
Background finished. Here is my problem:
But today I received the new, real data sets. The image resolution is much smaller, and the lighting conditions are different from the test image set. The color depth is also much smaller. These changes make my algorithm useless. Here it is:
Using stdfilt fails to generate a good, clean image. Instead it generates stuff like this (note: I have adjusted the parameters of the stdfilt function and the BW threshold value; the following is the best result I can get):
As you can see, there are light pixels in the centers of the cells that are not necessarily darker than the membrane, which leads the BW thresholding to generate stuff like this:
The new BW images after thresholding have either incomplete membranes or fragmented cell centers, which makes them unsuitable for the other steps.
I only started image processing recently and have no idea how I should proceed. If you have an idea, please help me! Thanks!
For your convenience, I have attached a Dropbox link to a subset of the images.
I think there's a fundamental problem in your approach. Your algorithm uses stdfilt in order to binarize the image. But what that essentially means is you're assuming there is low "texture" in the background and within the cell. This works for your first image. However, in your second image there is "texture" within the cell, so this assumption is broken.
I think a stronger assumption is that there is a "ring" around each cell (valid for both images you posted). So I took the approach of detecting this ring instead.
So my approach is essentially:
Detect these rings (I use a 'log' filter and then binarize based on positive values). However, this results in a lot of "chatter".
Try to remove some of the "chatter" initially by filtering out very small and very large regions
Now, fill in these rings. However, there is still some "chatter" and filled regions between cells left
Again, remove small and large regions, but since the cells are filled, increase the bounds for what is acceptable.
There are still some bad regions; most of the bad areas are regions between cells. Regions between cells are detectable by observing the curvature around the boundary of the region: they "bend inwards" a lot, which is expressed mathematically as a large portion of the boundary having a negative curvature. Also, to remove the rest of the "chatter", note that such regions tend to have a large standard deviation in the curvature of their boundary, so remove boundaries with a large standard deviation as well.
Overall, the most difficult part will be removing regions between cells and the "chatter" without removing the actual cells.
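For reference, the signed curvature of a parametric boundary curve (x(s), y(s)) is k = (x'y'' - y'x'') / ((x')^2 + (y')^2)^(3/2). The code below computes only the numerator x'y'' - y'x'' from finite differences, which is enough here: the fraction-of-one-sign test and the standard-deviation test only need the sign pattern and spread of the samples, not the exact curvature values.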
Anyway, here's the code (note there are a lot of heuristics and also it's very rough and based on code from older projects, homeworks, and stackoverflow answers so it's definitely far from finished):
cell = im2double(imread('cell1.png'));
if (size(cell,3) == 3)
    cell = rgb2gray(cell);
end
figure(1), subplot(3,2,1)
imshow(cell,[]);
% Detect edges
hw = 5;
cell_filt = imfilter(cell, fspecial('log',2*hw+1,1));
subplot(3,2,2)
imshow(cell_filt,[]);
% Binarize on positive filter response, crop an hw-wide border, then filter out non-cell regions
mask = cell_filt > 0;
hw = 5;
mask = mask(hw:end-hw-1,hw:end-hw-1);
subplot(3,2,3)
imshow(mask,[]);
rp = regionprops(mask, 'PixelIdxList', 'Area');
rp = rp(vertcat(rp.Area) > 50 & vertcat(rp.Area) < 2000);
mask(:) = false;
mask(vertcat(rp.PixelIdxList)) = true;
subplot(3,2,4)
imshow(mask,[]);
% Now fill objects
mask1 = true(size(mask) + hw);
mask1(hw+1:end, hw+1:end) = mask;
mask1 = imfill(mask1,'holes');
mask1 = mask1(hw+1:end, hw+1:end);
mask2 = true(size(mask) + hw);
mask2(hw+1:end, 1:end-hw) = mask;
mask2 = imfill(mask2,'holes');
mask2 = mask2(hw+1:end, 1:end-hw);
mask3 = true(size(mask) + hw);
mask3(1:end-hw, 1:end-hw) = mask;
mask3 = imfill(mask3,'holes');
mask3 = mask3(1:end-hw, 1:end-hw);
mask4 = true(size(mask) + hw);
mask4(1:end-hw, hw+1:end) = mask;
mask4 = imfill(mask4,'holes');
mask4 = mask4(1:end-hw, hw+1:end);
mask = mask1 | mask2 | mask3 | mask4;
% Filter out large and small regions again
rp = regionprops(mask, 'PixelIdxList', 'Area');
rp = rp(vertcat(rp.Area) > 100 & vertcat(rp.Area) < 5000);
mask(:) = false;
mask(vertcat(rp.PixelIdxList)) = true;
subplot(3,2,5)
imshow(mask);
% Filter out regions with lots of positive concavity
% Get boundaries
[B,L] = bwboundaries(mask);
% Cycle over boundarys
for i = 1:length(B)
    b = B{i};
    % Filter boundary - use circular convolution
    b(:,1) = cconv(b(:,1),fspecial('gaussian',[1 7],1)',size(b,1));
    b(:,2) = cconv(b(:,2),fspecial('gaussian',[1 7],1)',size(b,1));
    % Find curvature
    curv_vec = zeros(size(b,1),1);
    for j = 1:size(b,1)
        p_b = b(mod(j-2,size(b,1))+1,:); % p_b = point before
        p_m = b(mod(j,size(b,1))+1,:);   % p_m = point middle
        p_a = b(mod(j+2,size(b,1))+1,:); % p_a = point after
        dx_ds = p_a(1)-p_m(1);           % First derivative
        dy_ds = p_a(2)-p_m(2);           % First derivative
        ddx_ds = p_a(1)-2*p_m(1)+p_b(1); % Second derivative
        ddy_ds = p_a(2)-2*p_m(2)+p_b(2); % Second derivative
        curv_vec(j) = dx_ds*ddy_ds-dy_ds*ddx_ds; % one sample per boundary point; only the distribution is used below
    end
    if (sum(curv_vec > 0)/length(curv_vec) > 0.4 || std(curv_vec) > 2.0)
        L(L == i) = 0;
    end
end
mask = L ~= 0;
subplot(3,2,6)
imshow(mask,[])
Output1:
Output2:

Extract a page from a uniform background in an image

If I have an image, in which there is a page of text shot on a uniform background, how can I auto detect the boundaries between the paper and the background?
An example of the image I want to detect is shown below. The images that I will be dealing with consist of a single page on a uniform background and they can be rotated at any angle.
One simple method would be to threshold the image by some known value once you convert the image to grayscale. The problem with that approach is that we are applying a global threshold and so some of the paper at the bottom of the image will be lost if you make the threshold too high. If you make the threshold too low, then you'll certainly get the paper, but you'll include a lot of the background pixels too and it will probably be difficult to remove those pixels with post-processing.
One thing I can suggest is to use an adaptive threshold algorithm. An algorithm that has worked for me in the past is the Bradley-Roth adaptive thresholding algorithm. You can read up about it here on a post I commented on a while back:
Bradley Adaptive Thresholding -- Confused (questions)
However, if you want the gist of it: an integral image of the grayscale version of the image is computed first. The integral image is important because it allows you to calculate the sum of pixels within a window in O(1) complexity. Computing the integral image itself is O(n^2) for an n x n image (linear in the number of pixels), but you only have to do that once. With the integral image, you scan neighbourhoods of size s x s around each pixel and check whether the pixel's intensity is less than (100 - t)% of the average intensity within that s x s window; if it is, the pixel is classified as background. If it's larger, it's classified as part of the foreground. This is adaptive because the thresholding is done using local pixel neighbourhoods rather than a global threshold.
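Concretely, if I is the integral image, the sum over the window with top-left corner (x1, y1) and bottom-right corner (x2, y2) comes from just four lookups: S = I(y2, x2) - I(y1-1, x2) - I(y2, x1-1) + I(y1-1, x1-1). Ignoring the border clamping, this is exactly what the code below does with ind_f1 through ind_f4.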
I've coded an implementation of the Bradley-Roth algorithm here for you. The default parameters for the algorithm are s being 1/8th of the width of the image and t being 15%. Therefore, you can just call it this way to invoke the default parameters:
out = adaptiveThreshold(im);
im is the input image and out is a binary image that denotes what belongs to foreground (logical true) or background (logical false). You can play around with the second and third input parameters: s being the size of the thresholding window and t the percentage we talked about above and can call the function like so:
out = adaptiveThreshold(im, s, t);
Therefore, the code for the algorithm looks like this:
function [out] = adaptiveThreshold(im, s, t)
%// Error checking of the input
%// Default value for s is 1/8th the width of the image
%// Must make sure that this is a whole number
if nargin <= 1, s = round(size(im,2) / 8); end
%// Default value for t is 15
%// t is used to determine whether the current pixel is t% lower than the
%// average in the particular neighbourhood
if nargin <= 2, t = 15; end
%// Too few or too many arguments?
if nargin == 0, error('Too few arguments'); end
if nargin >= 4, error('Too many arguments'); end
%// Convert to grayscale if necessary then cast to double to ensure no
%// saturation
if size(im, 3) == 3
    im = double(rgb2gray(im));
elseif size(im, 3) == 1
    im = double(im);
else
    error('Incompatible image: Must be a colour or grayscale image');
end
%// Compute integral image
intImage = cumsum(cumsum(im, 2), 1);
%// Define grid of points
[rows, cols] = size(im);
[X,Y] = meshgrid(1:cols, 1:rows);
%// Ensure s is even so that we are able to index the image properly
s = s + mod(s,2);
%// Access the four corners of each neighbourhood
x1 = X - s/2; x2 = X + s/2;
y1 = Y - s/2; y2 = Y + s/2;
%// Ensure no co-ordinates are out of bounds
x1(x1 < 1) = 1;
x2(x2 > cols) = cols;
y1(y1 < 1) = 1;
y2(y2 > rows) = rows;
%// Count how many pixels there are in each neighbourhood
count = (x2 - x1) .* (y2 - y1);
%// Compute row and column co-ordinates to access each corner of the
%// neighbourhood for the integral image
f1_x = x2; f1_y = y2;
f2_x = x2; f2_y = y1 - 1; f2_y(f2_y < 1) = 1;
f3_x = x1 - 1; f3_x(f3_x < 1) = 1; f3_y = y2;
f4_x = f3_x; f4_y = f2_y;
%// Compute 1D linear indices for each of the corners
ind_f1 = sub2ind([rows cols], f1_y, f1_x);
ind_f2 = sub2ind([rows cols], f2_y, f2_x);
ind_f3 = sub2ind([rows cols], f3_y, f3_x);
ind_f4 = sub2ind([rows cols], f4_y, f4_x);
%// Calculate the areas for each of the neighbourhoods
sums = intImage(ind_f1) - intImage(ind_f2) - intImage(ind_f3) + ...
intImage(ind_f4);
%// Determine whether the summed area surpasses a threshold
%// Set this output to 0 if it doesn't
locs = (im .* count) <= (sums * (100 - t) / 100);
out = true(size(im));
out(locs) = false;
end
When I use your image and I set s = 500 and t = 5, here's the code and this is the image I get:
im = imread('http://i.stack.imgur.com/MEcaz.jpg');
out = adaptiveThreshold(im, 500, 5);
imshow(out);
You can see that there are some spurious white pixels at the bottom white of the image, and there are some holes we need to fill in inside the paper. As such, let's use some morphology and declare a structuring element that's a 15 x 15 square, perform an opening to remove the noisy pixels, then fill in the holes when we're done:
se = strel('square', 15);
out = imopen(out, se);
out = imfill(out, 'holes');
imshow(out);
This is what I get after all of that:
Not bad eh? Now if you really want to see what the image looks like with the paper segmented, we can use this mask and multiply it with the original image. This way, any pixels that belong to the paper are kept while those that belong to the background go away:
out_colour = bsxfun(@times, im, uint8(out));
imshow(out_colour);
We get this:
You'll have to play around with the parameters until it works for you, but the above parameters were the ones I used to get it working for the particular page you showed us. Image processing is all about trial and error, and putting processing steps in the right sequence until you get something good enough for your purposes.
Happy image filtering!
