MATLAB image processing technique - image

I have a 3D array in MATLAB (V: vertical, H: horizontal, t: time frame).
The figures below show images obtained with the imagesc function after slicing the array along the t axis.
The black area represents the damaged region; the rest is intact.
Each frame looks similar but has a different amplitude.
I am trying to visualize only the defect area and get rid of the intact area.
I tried a threshold method to remove the intact area, as below:
NewSet = zeros(450, 450, 200);
for kk = 1:200
    frame = uwpi(:, :, kk);
    STD = std(frame(:));            % per-frame standard deviation
    Mean = mean(frame(:));          % per-frame mean
    for ii = 1:450
        for jj = 1:450
            if frame(ii, jj) > 2*STD + Mean
                NewSet(ii, jj, kk) = frame(ii, jj);   % keep suspected defect pixel
            else
                NewSet(ii, jj, kk) = NaN;             % blank out intact area
            end
        end
    end
end
However, since each frame has a different amplitude, the result is inconsistent from frame to frame.
Is there any image processing method to get rid of the intact area in this case?
Thanks in advance

You're thresholding based on the mean and standard deviation, basically assuming your data is normally distributed and looking for outliers. But your model should distinguish values around zero (noise) from higher values. Since your data is not normally distributed, the mean and standard deviation are not meaningful.
Look up Otsu thresholding (the MATLAB Image Processing Toolbox has it as graythresh). Its model does not perfectly match your data, but it might give reasonable results. Like most threshold estimation algorithms, it uses the image's histogram to determine the optimal threshold under some model.
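A minimal per-frame sketch of that idea, assuming uwpi is the 450x450x200 array from the question (graythresh is the toolbox's Otsu implementation and expects values in [0, 1], hence the normalization):
NewSet = nan(450, 450, 200);
for kk = 1:200
    frame = uwpi(:, :, kk);
    % rescale the frame to [0, 1] so graythresh can work with it
    nframe = (frame - min(frame(:))) / (max(frame(:)) - min(frame(:)));
    T = graythresh(nframe);               % Otsu threshold, one per frame
    keep = nframe > T;                    % suspected defect pixels
    tmp = nan(450, 450);
    tmp(keep) = frame(keep);
    NewSet(:, :, kk) = tmp;
end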
Ideally you'd model the background peak in the histogram. You can find the mode, fit a Gaussian around it, and then cut off at 2 sigma. Or you can use the "triangle method", which finds the point along the histogram that is furthest from the straight line between the top of the background peak and the upper end of the histogram. It is a little more complex to explain, but trivial to implement. We have this implemented in DIPimage (http://www.diplib.org); the M-file code is visible so you can see how it works (look for the function threshold).
Additionally, I'd suggest getting rid of the loops over x and y. You can write frame(frame < threshold) = NaN, and then copy the whole frame back into NewSet in one operation, as in the sketch below.
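For instance, a sketch of the same mean-plus-two-sigma rule from the question, just without the pixel loops:
NewSet = nan(450, 450, 200);
for kk = 1:200
    frame = uwpi(:, :, kk);
    threshold = mean(frame(:)) + 2 * std(frame(:));
    frame(frame <= threshold) = NaN;      % blank out the intact area
    NewSet(:, :, kk) = frame;             % copy the whole frame at once
end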

Do I understand the question correctly: the ROI is the dark border and everything it surrounds? If so, I'd recommend processing in 3D using some kind of region-growing technique such as watershed or active contours (snakes), with markers from imregionalmin. These methods should produce a segmentation even if the border has small holes. Then just copy the segmented object to a new 3D array via logical indexing, as in the rough sketch below.
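A rough sketch of that idea, assuming uwpi is the 450x450x200 array from the question and that the defect shows up as a regional minimum; defectLabel is a hypothetical label you would pick by inspecting L:
G = imgaussfilt3(uwpi, 2);          % smooth in 3D to suppress noise
markers = imregionalmin(G);         % seeds at the dark regional minima
G2 = imimposemin(G, markers);       % allow minima only at the seeds
L = watershed(G2);                  % 3D watershed labeling
defectLabel = 1;                    % hypothetical: choose the defect's label by inspection
mask = (L == defectLabel);
NewSet = nan(size(uwpi));
NewSet(mask) = uwpi(mask);          % copy the segmented object via logical indexing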

Related

MATLAB Thresholding

I am using the following MATLAB code for Niblack Binarization.
% Niblack: threshold = local mean + k * local standard deviation
localMean = averagefilter2(image1);   % avoid naming this 'mean', which shadows the built-in
meanSquare = averagefilter2(image1.^2);
standardDeviation = (meanSquare - localMean.^2).^0.5;
binaryImage = image1 >= (localMean + k_threshold * standardDeviation);

function img = averagefilter2(image1)
    meanFilter = fspecial('average', [60 60]);   % 60x60 averaging window
    img = imfilter(image1, meanFilter);          % zero padding at the boundary by default
end
But when I implement it, the binarized output develops a white patch along the top and right edges (input and output images omitted here).
(Ignore the black border; it is just there to highlight the white patch on the edge of the image.)
That is, near the edges some data pixels go missing and become white (the white patch at the top and right edges). Am I wrong anywhere in this implementation? Is there a better "MATLAB way" of implementing it, or should I do it manually using nested loops to calculate the average and standard deviation?
Likely this is due to the boundary conditions of the imfilter function, and hence of your own function averagefilter2.
When you filter, near the edges you need to access pixels that lie outside the image. That means you need to make assumptions about what is outside the boundary.
imfilter has a parameter to choose what is assumed to be outside, and it is zero by default. That would definitely cause a smaller value for the mean there, which could make the binarization get "deleted" near the edges.
Try different boundary options, and make sure to apply the same fix in your own function. I suggest starting with 'symmetric':
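For example, the question's helper with mirrored boundaries:
function img = averagefilter2(image1)
    meanFilter = fspecial('average', [60 60]);
    % mirror the image across its border instead of padding with zeros,
    % so the local mean is no longer biased low near the edges
    img = imfilter(image1, meanFilter, 'symmetric');
end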

MATLAB: layer detection, vector combination and selection by tortuosity/arclength

I have a grayscale image similar to the one below, obtained after some post-processing steps (image 0001). I would like a vector corresponding to the bottom of the lower bright strip (as depicted in image 0001b). I can use im2bw with various thresholds to obtain the vectors in image 0002: the higher the threshold, the greater the tendency for the vector line to blip upwards; the lower the threshold, the greater the tendency to blip downwards. My idea was to go through each vector, measure the arc length over some increment (maybe 100 pixels or so), choose the vector with the lowest arc length, and add that 100-pixel stretch to the final vector, building a Frankenstein-like vector from the straightest segments of each thresholded vector. I should also mention that when there are multiple straightish/parallel vectors, the top one is the best fit.
First off, is there some better strategy I should be employing here to find that line in image 0001? (This needs to be fast, so a long-running fitting routine wouldn't work.) If my current Frankenstein's monster solution is workable, any suggestions as to how best to go about it?
Thanks in advance
image = im2bw(image, 0.95); % or 0.85, 0.75, 0.65, 0.55
vec = [];
for v = 1:size(image, 2)          % columns
    for c = 1:size(image, 1)      % rows
        if image(c, v) == 1
            vec(v) = c;           % keeps the bottom-most white row per column
        end
    end
end
vec=fastsmooth(vec,60,20,1);
Here is the modified version of what I originally did. It works well on your images. If you want subpixel resolution, you can implement an active contour model with some fitting function.
files = dir('*.png');
filenames = {files.name};
for ifile=1:length(filenames)
%%
% read image
im0 = double(imread(filenames{ifile}));
%%
% remove background by subtracting a convolution with a mask
lobj=100;
convmask = ones(lobj,1)/lobj;
im=im0-conv2(im0,convmask,'same');
im(im<0)=0;
imagesc(im);colormap gray;axis image;
%%
% use Canny edge filter, allowing extremely weak edges to exist
bw=edge(im,'canny',[0.01,0.3]);
% dilate the image to close gaps between the lines;
% the kernel is a flat rectangle so that it helps to connect
% horizontal gaps
se=strel('rectangle',[10,30]);
bw=imdilate(bw,se);
% thin the lines down to single-pixel width
bw=bwmorph(bw,'thin',inf);
% connect H bridge
bw=bwmorph(bw,'bridge');
imagesc(bw);colormap gray;axis image;
%% smooth the image, find the decreasing region, and apply the mask
imtmp = imgaussfilt(im0,3);
imtmp = diff(imtmp);
imtmp = [imtmp(1,:);imtmp];
intensity_decrease_mask = imtmp < 0;
bw = bw & intensity_decrease_mask;
imagesc(bw);colormap gray;axis image;
%%
% find properties of the lines, and find the longest lines
cc=regionprops(bw,'Area','PixelList','Centroid','MajorAxisLength','PixelIdxList');
% now select any line that is longer than an eighth of the image width
cc=cc([cc.MajorAxisLength]>size(bw,2)/8);
%%
% select lines whose average intensity is above gray level 150
for i=1:length(cc)
cc(i).meanIntensity = mean(im0(sub2ind(size(im0),cc(i).PixelList(:,2), ...
cc(i).PixelList(:,1) )));
end
cc=cc([cc.meanIntensity]>150);
cnts=reshape([cc.Centroid],2,length(cc))';
%%
% calculate the minimum distance to the bottom right of each edge
for i=1:length(cc)
cc(i).distance2bottomright = sqrt(min((cc(i).PixelList(:,2)-size(im,1)).^2 ...
+ (cc(i).PixelList(:,1)-size(im,2)).^2));
end
% select the bottom edge
[~,minindex]=min([cc.distance2bottomright]);
bottomedge = cc(minindex);
%% clean up the lines a little bit
bwtmp = false(size(bw));
bwtmp(bottomedge.PixelIdxList)=1;
% find the end points to the most left and right
endpoints = bwmorph(bwtmp, 'endpoints');
[endy,endx] = find(endpoints);
[~,minind]=min(endx);
[~,maxind]=max(endx);
pos_most_left = [endx(minind),endy(minind)];
pos_most_right = [endx(maxind),endy(maxind)];
% select the shortest path between left and right
dists = bwdistgeodesic(bwtmp,pos_most_left(1),pos_most_left(2)) + ...
bwdistgeodesic(bwtmp,pos_most_right(1),pos_most_right(2));
dists(isnan(dists))=inf;
bwtmp = imregionalmin(dists);
bottomedge=regionprops(bwtmp,'PixelList');
%% plot the lines
imagesc(im0);colormap gray;axis image;hold on;axis off;
for i=1:length(cc)
plot(cc(i).PixelList(:,1),cc(i).PixelList(:,2),'b','linewidth',2);hold on;
end
plot(bottomedge.PixelList(:,1),bottomedge.PixelList(:,2),'r','linewidth',2);hold on;
print(gcf,num2str(ifile),'-djpeg');
% pause
end
I am not sure this answers your question directly, but I have a lot of experience fitting arrays (or matrices, in my case) to 3D raster images. We were using relatively low-power machines (standard i7 processors, 32 GB RAM) and had to perform the fitting very quickly (under 30 seconds). We also had to validate the fit with a variety of parameters (and again, these were 3D rasters fit to a point-cloud matrix).
Anyway, the process we used was the fminsearch function built into MATLAB. Documentation can be found here: http://www.mathworks.com/help/optim/functionlist.html
We would start with a plain point cloud and perform successive manipulations on a per-pixel basis to adjust the point cloud to the raster, essentially walking through each pixel in the raster to produce the lowest offset between the point cloud and the raster.
I will try to search for some code this afternoon and update my answer, but I would explore this option for your case. I imagine you could fit a curve to certain pixels (e.g. white pixels) both rapidly and accurately by setting up an optimization function.
I could also help more if I understood your objective better. Are you just trying to fit a line to the high-albedo/white areas?
By way of example: I can fit a 3D point cloud to the following image by starting with a standard point cloud, the 3D raster, and a minimization function (in this case just the RMS error of each individual point along the z axis). Throw an fmin function on there and in a few seconds you get a modified point cloud that fits much better than the standard one.
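As a hedged sketch of how that idea could look in this 2D case (bw, a binary image of the bright strip, and the polynomial order are assumptions, not the poster's actual code):
[r, c] = find(bw);                                 % coordinates of the white pixels
rmsErr = @(p) sqrt(mean((polyval(p, c) - r).^2));  % RMS vertical offset of a polynomial
p0 = polyfit(c, r, 2);                             % least-squares starting point
pBest = fminsearch(rmsErr, p0);                    % refine the fit with fminsearch
x = 1:size(bw, 2);
y = polyval(pBest, x);                             % one fitted y value per column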

How to average multiple images using Octave and matrix manipulation to reduce noise?

UPDATE
Here is my code, which is meant to add up the two matrices using element-by-element addition and then divide by two.
function [finish] = stackAndMeanImage (initFrame, finalFrame)
    cd 'C:\Users\Disc-1119\Desktop\Internships\Tracking\Octave\highway\highway (6-13-2014 11-13-41 AM)';
    pkg load image;
    i = initFrame;
    f = finalFrame;
    astr = num2str(i);
    tmp = rgb2gray(imread(astr, 'jpg'));   % first frame, converted like the others
    d = f - i                              % frame count (unsuppressed, prints)
    for a = 1:d
        a                                  % prints the loop counter
        astr = num2str(i + a);             % name of the next frame
        read_tmp = imread(astr, 'jpg');
        read_tmp = rgb2gray(read_tmp);
        tmp = tmp + read_tmp;              % element-by-element addition
        tmp = tmp / 2;                     % divide by two on every pass
    end
    imwrite(tmp, 'meanimage.JPG');
    finish = 'done';
end
Here are two example input images
http://imgur.com/5DR1ccS,AWBEI0d#1
And here is one output image
http://imgur.com/aX6b0kj
I am really confused as to what is happening. I have not yet implemented what the other answers suggest, though.
OLD
I am working on an image processing project where I manually choose images that are 'empty' (background only) so that my algorithm can compute the differences and then do some further analysis. I have a simple piece of code that computes the mean of two images, which I have converted to grayscale matrices, but this only works for two images: when I take the mean of two, then take the mean of that result against the next image, and repeat, I end up with a washed-out white image that is absolutely useless. You can't even see anything.
I found that there is a function in MATLAB called imfuse that is able to combine images. I was wondering if anyone knew the process that imfuse uses, as I am happy to implement it in Octave, or if anyone knew of (or has already written) a piece of code that achieves something similar. Again, I am not asking anyone to write code for me, just wondering what the process is and whether there are pre-existing functions out there, which I have not found in my research.
Thanks,
AeroVTP
You should not end up with a washed-out image. Instead, you should end up with an image which is, technically speaking, temporally low-pass filtered. What this means is that half of the information content comes from the last image, one quarter from the second-to-last image, one eighth from the third-to-last, and so on.
Actually, the effect on a moving image is similar to a display with a slow response time.
If you are ending up with a white image, you are doing something wrong. nkjt's guess of type challenges is a good one. Another possibility is that you have forgotten to divide by two after summing the two images.
One more thing: if you are doing linear operations (such as averaging) on images, your image intensity scale should be linear. If you just use the RGB values, or grayscale values simply calculated from them, you may get bitten by the nonlinearity of the image; this is the issue gamma correction addresses. (Admittedly, most image processing programs just ignore the problem, as it is not always a big deal.)
As your project calculates differences of images, you should take this into account. I suggest using linearised floating-point values. Unfortunately, the exact linearisation depends on the source of your image data.
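As a rough illustration only (the exact transfer function depends on your camera, but sRGB is approximately gamma 2.2):
lin = im2double(img) .^ 2.2;    % approximate sRGB -> linear intensity
% ... average or difference the frames in linear space ...
out = lin .^ (1 / 2.2);         % back to gamma-encoded values for display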
On the other hand, averaging is often the most efficient way of reducing noise. So there you are on the right track, assuming the images are similar enough.
However, after having a look at your images, it seems that you may actually want to do something else than to average the image. If I understand your intention correctly, you would like to get rid of the cars in your road cam to give you just the carless background which you could then subtract from the image to get the cars.
If that is what you want to do, you should consider using a median filter instead of averaging. What this means is that you take for example 11 consecutive frames. Then for each pixel you have 11 different values. Now you order (sort) these values and take the middle (6th) one as the background pixel value.
If your road is empty most of the time (at least 6 frames of 11), then the 6th sample will represent the road regardless of the colour of the cars passing your camera.
If you have an empty road, the result from the median filtering is close to averaging. (Averaging is better with Gaussian white noise, but the difference is not very big.) But your averaging will be affected by white or black cars, whereas median filtering is not.
The problem with median filtering is that it is computationally intensive. I am very sorry, I speak very broken and ancient Octave, so I cannot give you polished production code, but the sketch below shows the idea. In MATLAB or PyLab you would stack, say, 11 images into an M x N x 11 array and then use a single median command along the depth axis. (When I say intensive, I do not mean it couldn't be done in real time with your data. It can, but it is much more complicated than averaging.)
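In MATLAB terms, a minimal sketch (the file names are made up for illustration):
nFrames = 11;
stack = [];
for k = 1:nFrames
    frame = rgb2gray(imread(sprintf('frame%03d.jpg', k)));  % hypothetical names
    stack = cat(3, stack, frame);       % build the M x N x 11 array
end
background = median(stack, 3);          % per-pixel median along the depth axis
imshow(background);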
If you have really a lot of traffic, the road is visible behind the cars less than half of the time. Then the median trick will fail. You will need to take more samples and then find the most typical value, because it is likely to be the road (unless all cars have similar colours). There it will help a lot to use the colour image, as cars look more different from each other in RGB or HSV than in grayscale.
Unfortunately, if you need to resort to this type of processing, the path gets slightly slippery and rocky. Averaging is very easy and fast, median is easy (but not as fast), but beyond that things tend to get rather complicated.
Another thought that came to mind: if you want a rolling average, there is a very simple and effective way to calculate it with an arbitrary window length (an arbitrary number of frames to average):
# N is the number of images to average
# P[i] are the input frames
# S is a sum accumulator (sum of N frames)

# calculate the sum of the first N frames
S <- 0
I <- 0
while I < N
    S <- S + P[I]
    I <- I + 1

# save_img() saves an averaged image
while there are images to process
    save_img(S / N)
    S <- S - P[I-N] + P[I]
    I <- I + 1
Of course, you'll probably want to use for loops and the += and -= operators, but the idea is there. For each frame you only need one subtraction, one addition, and one division by a constant (which can be turned into a multiplication, or even a bitwise shift in some cases, if you are in a hurry).
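A MATLAB-style version of the pseudocode above, assuming the frames are already stacked in an M x N x K array P:
N = 8;                                   % number of frames to average
K = size(P, 3);
S = sum(double(P(:, :, 1:N)), 3);        % sum of the first N frames
for I = N+1:K
    imwrite(uint8(S / N), sprintf('avg%03d.jpg', I));   % save an averaged image
    S = S - double(P(:, :, I-N)) + double(P(:, :, I));  % slide the window by one frame
end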
I may have misunderstood your problem, but I think what you're trying to do is the following: basically, read all the images into a matrix and then use mean(). This works provided that you are able to fit them all in memory.
function [finish] = stackAndMeanImage (ini_frame, final_frame)
    pkg load image;
    dir_path = 'C:\Users\Disc-1119\Desktop\Internships\Tracking\Octave\highway\highway (6-13-2014 11-13-41 AM)';
    n_frames = final_frame - ini_frame;   % number of frames to read
    imgs = cell (1, 1, n_frames);
    ## read all images into a cell array
    current_frame = ini_frame;
    for n = 1:n_frames
        fname = fullfile (dir_path, sprintf ("%i", current_frame++));
        imgs{n} = rgb2gray (imread (fname, "jpg"));
    endfor
    ## create 3D matrix out of all frames and calculate mean across 3rd dimension
    imgs = cell2mat (imgs);
    avg = mean (imgs, 3);
    ## mean returns double precision, so we cast it back to uint8 after
    ## rescaling it to the range [0 1]. This assumes the images were all
    ## originally uint8, but since they are jpgs, that's a safe assumption.
    avg = im2uint8 (avg ./ 255);
    imwrite (avg, fullfile (dir_path, "meanimage.jpg"));
    finish = "done";
endfunction

Detect black dots from color background

My short question
How to detect the black dots in the following images? (I paste only one test image to make the question look compact. More images can be found →here←).
My long question
As shown above, the background color is roughly blue and the dot color is "black". If you pick one black pixel and measure its color in RGB, the value may be (0, 44, 65) or (14, 69, 89)... Therefore, we cannot simply set a range to tell whether a pixel belongs to a black dot or to the background.
I tested 10 images with different colors, but I hope to find a method that can detect black dots on more complicated backgrounds made up of three or more colors, as long as human eyes can identify the black dots easily. Some extremely small or blurred dots can be omitted.
Previous work
Last month, I asked a similar question on Stack Overflow but did not get a perfect solution, though there were some excellent answers. See that question for more details about my work, if you are interested.
Here are the methods I have tried:
Converting to grayscale or to the brightness of the image. The difficulty is that I cannot find an adaptive threshold for binarization. Obviously, turning a color image into grayscale or using the brightness (of HSV) loses much useful information. The Otsu algorithm, which calculates an adaptive threshold, does not work either.
Calculating an RGB histogram. In my last question, natan's method was to estimate the black color from the histogram. It is time-saving, but the adaptive threshold is again a problem.
Clustering. I have tried k-means clustering and found it quite effective for a background with only one color (see the sketch below). The shortcoming (see my own answer) is that I need to set the number of cluster centers in advance, but I don't know what the background will be like. What's more, it is too slow! My application captures in real time on iPhone, and it currently processes only 7~8 frames per second using k-means (20 FPS would be good, I think).
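For reference, a minimal sketch of the k-means variant described in item 3, with k = 2, dots vs. a single-color background (needs the Statistics Toolbox; the file name is reused from the answer below):
rgb = im2double(imread('6abIc.jpg'));
X = reshape(rgb, [], 3);                 % one row of RGB values per pixel
idx = kmeans(X, 2);                      % cluster the pixel colors
labels = reshape(idx, size(rgb, 1), size(rgb, 2));
% the darker of the two clusters should be the dots
m = [mean2(X(idx == 1, :)), mean2(X(idx == 2, :))];
[~, dark] = min(m);
mask = (labels == dark);                 % true where a pixel belongs to a dot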
Summary
I think not only similar colors but also adjacent pixels should be "clustered" or "merged" in order to extract the black dots. Please point me toward a proper way to solve my problem. Any advice or algorithm will be appreciated. There is no free lunch, but I hope for a better trade-off between cost and accuracy.
I was able to get some pretty nice first pass results by converting to HSV color space with rgb2hsv, then using the Image Processing Toolbox functions imopen and imregionalmin on the value channel:
rgb = imread('6abIc.jpg');
hsv = rgb2hsv(rgb);
openimg = imopen(hsv(:, :, 3), strel('disk', 11));  % smooth the value channel by opening
mask = imregionalmin(openimg);                      % the dots remain as regional minima
imshow(rgb);
hold on;
[r, c] = find(mask);
plot(c, r, 'r.');                                   % overlay the detected dots
And the resulting images (for the image in the question and one chosen from your link):
You can see a few false positives and missed dots, as well as some dots that are labeled with multiple points, but a few refinements (such as modifying the structure element used in the opening step) could clean these up some.
I was curious to test my old 2D peak finder code on the images, without any threshold or any color considerations. Really crude, don't you think?
im0 = imread('Snap10.jpg');
im = abs(255 - im0);                        % invert so the dark dots become peaks
d = rgb2gray(im);
filter = fspecial('gaussian', 16, 3.5);     % kernel sized like a typical dot
p = FastPeakFind(d, 0, filter);             % File Exchange peak finder
imagesc(im0); hold on
plot(p(1:2:end), p(2:2:end), 'r.')
The code I'm using is a simple 2D local-maxima finder. There are some false positives, but all in all it captures most of the points with no duplication. The filter I used was a 2D Gaussian with width and std similar to a typical blob (the best choice would have been a matched filter for your problem).
A more sophisticated version that does treat the colors (rgb2hsv?) could improve this further...
Here is an extraordinarily simplified version that can be extended to full RGB, and it also does not use the Image Processing Toolbox. Basically, you do a 2-D convolution with a filter image (an example of the dot you are looking for), and the points where the convolution returns the highest values are the best matches for the dots. You can then of course threshold that. Here is a simple binary-image example of just that.
% create a dummy image with a bunch of small white crosses
im = zeros(100,100);
numPoints = 10;
% randomly choose the locations to put those crosses
points = randperm(numel(im));
% keep only a certain number of points
points = points(1:numPoints);
% get the rows and columns (x,y)
[xVals,yVals] = ind2sub(size(im),points);
for ii = 1:numel(points)
    x = xVals(ii);
    y = yVals(ii);
    try
        % create the crosses; the try statement is here to prevent index out of bounds
        % not necessarily the best practice, but it is only for demonstration
        im(x,y) = 1;
        im(x+1,y) = 1;
        im(x-1,y) = 1;
        im(x,y+1) = 1;
        im(x,y-1) = 1;
    catch err
    end
end
% display the randomly generated image
imshow(im)
% create a simple cross filter
filter = [0,1,0;1,1,1;0,1,0];
figure; imshow(filter)
% perform convolution of the random image with the cross template
result = conv2(im,filter,'same');
% get the number of white pixels in the filter
filSum = sum(filter(:));
% look for all points in the convolution result that matched the filter exactly
matches = find(result == filSum);
% validate that all points were found
sort(matches(:)) == sort(points(:))
% get the x and y coordinates of the matches
[xMatch,yMatch] = ind2sub(size(im),matches);
I would highly suggest looking at the conv2 documentation on MATLAB's website.

image enhancement - cleaning given image from writing

I need to clean this picture: delete the writing "clean me" and make it brighter.
As part of my homework in an image processing course, I may use the MATLAB function ginput to find specific points in the image (of course, in the script you should hard-code the coordinates you need).
You may use conv2, fft2, ifft2, fftshift, etc.
You may also use median, mean, max, min, sort, etc.
My basic idea was to take the white and black values from the middle of the picture and insert them into the other parts of the black and white stripes; however, this gives a very synthetic look to the picture.
Can you please give me a direction on what to do? A median filter alone will not give good results.
The general technique for this kind of thing is called inpainting, but in order to apply it you need a mask of the regions that you want to inpaint. So, let us suppose that we managed to get a good mask and inpainted the original image considering a morphological dilation of this mask:
To get that mask, we don't need anything fancy. Start with a binarization of the difference between the original image and the result of median filtering it:
You can remove isolated pixels; join the pixels representing the stars of your flag by a dilation in the horizontal direction followed by another dilation with a small square; remove this just-created largest component; and then perform a geodesic dilation of the result so far against the initial mask. This gives the good mask above.
Now, to inpaint there are many algorithms, but one of the simplest I've found is described in Fast Digital Image Inpainting, which should be easy enough to implement. I didn't use it myself, but you could, and then verify which results you obtain.
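A very rough diffusion-style sketch in the spirit of that paper, not a faithful implementation (img is assumed to be a grayscale double image, mask is true where the writing is; the kernel weights are the paper's suggested values):
kernel = [0.073235 0.176765 0.073235;
          0.176765 0        0.176765;
          0.073235 0.176765 0.073235];   % weights sum to 1, center excluded
out = img;
out(mask) = 0;                           % clear the damaged pixels first
for it = 1:300                           % iterate until the fill settles
    blurred = conv2(out, kernel, 'same');
    out(mask) = blurred(mask);           % update only the damaged pixels
end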
EDIT: I missed that you also wanted to brighten the image.
An easy way to brighten an image without making the brighter areas even brighter is to apply a gamma factor < 1. More specifically for your image, you could first apply a relatively large lowpass filter, negate it, multiply the original image by it, and then apply the gamma factor. In this second case, the final image will likely be darker than the first one, so you multiply it by a simple scalar value. Here are the results for these two cases (the left one is simply gamma 0.6):
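The plain gamma case is essentially a one-liner (img is an assumed variable holding the grayscale picture):
bright = im2double(img) .^ 0.6;   % gamma < 1 lifts dark and midtone values
imshow(bright);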
If you really want to brighten the image, then you can apply a bilateral filter and binarize it:
I see two options for removing "clean me". Both rely on the horizontal similarity.
1) Use a long 1D low-pass filter in the horizontal direction only.
2) Use a 1D median filter maybe 10 pixels long
For both solutions you of course have to exclude the stars part.
When it comes to brightness, you could try histogram equalization. However, that won't fix the unevenness of the brightness. Maybe a high-pass filter before equalization can fix that.
Regards
The simplest way to remove the text is, as KlausCPH said, to use a long 1-D median filter in the region with the stripes. In order not to corrupt the stars, you need to keep a backup of that part and put it back after the median filter has run. To do this, you can use ginput to mark the lower right corner of the star region:
% Mark lower right corner of star-region
figure();imagesc(Im);colormap(gray)
[xCorner,yCorner] = ginput(1);
close
xCorner = round(xCorner); yCorner = round(yCorner);
% Save star region
starBackup = Im(1:yCorner,1:xCorner);
% Clean up stripes
Im = medfilt2(Im,[1,50]);
% Replace star region
Im(1:yCorner,1:xCorner) = starBackup;
This produces
To fix the exposure problem (the middle part being brighter than the corners), you could fit a 2-D Gaussian model to your image and normalize by it. If you want to do this, I suggest looking into fit, although that can be a bit technical if you have not worked with model fitting before.
My fitted 2-D Gaussian looks something like this:
Putting these two things together gives:
I used the gausswin() function to make a Gaussian mask:
Pic_usa_g = abs(1 - gausswin( size(Pic_usa,2) ));  % inverted Gaussian window: high at the edges
Pic_usa_g = Pic_usa_g + 0.6;                       % offset so the center is not zeroed out
Pic_usa_g = Pic_usa_g .* 2;                        % overall gain
Pic_usa_g = Pic_usa_g';                            % make it a row vector
C = repmat(Pic_usa_g, size(Pic_usa,1), 1);         % replicate to the full image height
After multiplying the image by the mask, you get the fixed image.
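For completeness, a sketch of that last step (Pic_usa assumed to be the grayscale flag image):
Pic_fixed = double(Pic_usa) .* C;        % element-wise multiply by the mask
imagesc(Pic_fixed); colormap gray; axis image;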
