I am using the following MATLAB code for Niblack Binarization.
localMean = averagefilter2(image1);   % renamed so it does not shadow the built-in mean
meanSquare = averagefilter2(image1.^2);
standardDeviation = (meanSquare - localMean.^2).^0.5;
binaryImage = image1 >= (localMean + k_threshold * standardDeviation);

function img = averagefilter2(image1)
    meanFilter = fspecial('average', [60 60]);
    img = imfilter(image1, meanFilter);
end
But when I implement it, the output develops white patches near the edges (input and output screenshots omitted).
(Ignore the black border; it is only there to highlight the white patch on the edge of the image.)
That is, near the edges some data pixels go missing and become white (the white patches at the top and right edges). Am I wrong anywhere in this implementation? Is there a better "MATLAB way" of implementing it, or should I do it manually with nested loops that calculate the average and standard deviation?
This is likely due to the boundary conditions of the imfilter call, and hence of your own function averagefilter2.
When you filter, in the edge cases, you need to access pixels that are outside the image. That means that you need to make assumptions on what happens outside the boundary.
imfilter has a parameter that chooses what is assumed to lie outside the image, and by default it is zero. That would definitely produce a smaller mean near the border, and is probably why the binarization gets "deleted" there.
Try different options, and be sure to use the same option inside your own averagefilter2 function as well.
I suggest starting with 'symmetric'.
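For example, averagefilter2 could apply that boundary option directly; a minimal sketch (the 60×60 window size is taken from the question):

```matlab
function img = averagefilter2(image1)
    % Mirror the image at its borders instead of assuming zeros outside,
    % so the local mean is not dragged down near the edges.
    meanFilter = fspecial('average', [60 60]);
    img = imfilter(image1, meanFilter, 'symmetric');
end
```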
I have a 3D array in MATLAB (V: vertical, H: horizontal, t: time frame).
The figures below (omitted) show images obtained with the imagesc function after slicing the array along the t axis. The area in black represents the damaged region and the rest is intact. Each frame looks similar but has a different amplitude.
I am trying to visualize only the defect area and get rid of the intact area.
I tried a threshold method to get rid of the intact area, as below:
NewSet = zeros(450, 450, 200);
for kk = 1:200
    frame = uwpi(:, :, kk);
    STD = std(frame(:));
    Mean = mean(frame(:));
    for ii = 1:450
        for jj = 1:450
            if frame(ii, jj) > 2*STD + Mean
                NewSet(ii, jj, kk) = frame(ii, jj);
            else
                NewSet(ii, jj, kk) = NaN;
            end
        end
    end
end
However, since each frame has a different amplitude, the result is inconsistent across frames (result images omitted).
Is there any image processing method to get rid of intact area in this case?
Thanks in advance
You're thresholding based on the mean and standard deviation, basically assuming your data is normally distributed and looking for outliers. But your model should distinguish values around zero (noise) from higher values. Your data is not normally distributed, so the mean and standard deviation are not meaningful here.
Look up Otsu thresholding (the MATLAB Image Processing Toolbox has it). Its model does not perfectly match your data, but it might give reasonable results. Like most threshold-estimation algorithms, it uses the image's histogram to determine the optimal threshold given some model.
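A minimal sketch of Otsu on a single frame; graythresh is the toolbox's Otsu implementation, and the normalization to [0, 1] is my assumption:

```matlab
frame = uwpi(:, :, kk);       % one time slice, as in the question
frameN = mat2gray(frame);     % scale to [0, 1] so graythresh applies
level = graythresh(frameN);   % Otsu threshold estimated from the histogram
mask = frameN > level;        % defect candidates above the threshold
frame(~mask) = NaN;           % blank out the intact area
```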
Ideally you'd model the background peak in the histogram. You can find the mode, fit a Gaussian around it, then cut off at 2 sigma. Or you can use the "triangle method", which finds the point along the histogram that is furthest from the line between the upper end of the histogram and the top of the background peak. A little more complex to explain, but trivial to implement. We have this implemented in DIPimage (http://www.diplib.org), M-file code is visible so you can see how it works (look for the function threshold)
Additionally, I'd suggest getting rid of the loops over x and y. You can write frame(frame < threshold) = NaN, and then copy the whole frame back into NewSet in one operation.
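A loop-free version of the frame loop might look like this (the per-frame 2-sigma threshold is kept from the question's code):

```matlab
NewSet = nan(450, 450, 200);                 % start from NaN instead of zeros
for kk = 1:200
    frame = uwpi(:, :, kk);
    threshold = mean(frame(:)) + 2 * std(frame(:));
    frame(frame <= threshold) = NaN;         % blank the intact area in one step
    NewSet(:, :, kk) = frame;                % copy the whole frame back at once
end
```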
Do I understand the question correctly: the ROI is the dark border and everything it surrounds? If so, I'd recommend processing in 3D with some kind of region-growing technique such as watershed or active snakes, with markers from imregionalmin. These methods should give a segmentation even if the border has small holes. Then just copy the segmented object to a new 3D array via logical indexing.
I have the following questions:
What is the algorithm that bwareafilt uses?
Weird behaviour: when the input matrix is totally black, I get the following error:
Error using bwpropfilt (line 73)
Internal error: p must be positive.
Error in bwareafilt (line 33)
bw2 = bwpropfilt(bw, 'area', p, direction, conn);
Error in colour_reception (line 95)
Iz=bwareafilt(b,1);
Actually, I am using this function on snapshots taken from a webcam, but when I block the webcam completely, I get the error above.
So I believe it is caused by some internal implementation mistake. Is this the case? How do I work around it?
Let's answer your questions one at a time:
What algorithm does bwareafilt use?
bwareafilt is a function from the Image Processing Toolbox that accepts a binary image and determines the unique objects in it. To find unique objects, a connected-components analysis is performed in which each object is assigned a unique ID. You can think of this as performing a flood fill on each object individually. A flood fill can be performed with a variety of algorithms; among them is depth-first search, where you treat the image as a graph whose neighbouring pixels are connected by edges. Flood fill then visits all of the pixels connected to each other within one object until there are no more pixels to visit, then proceeds to the next object and repeats until it runs out of objects.
After, it determines the "area" for each object by counting how many pixels belong to that object. Once we determine the area for each object, we can either output an image that retains the top n objects or filter the image so that only those objects that are within a certain range of areas get retained.
Given your code above, you are trying to output an image containing the largest object in the binary image. Therefore, you are using the former behaviour with n = 1, not the latter.
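For reference, the two call styles side by side (the input image is illustrative):

```matlab
bw = imbinarize(imread('coins.png'));   % any binary image; file name illustrative
largest = bwareafilt(bw, 1);            % keep only the single largest object
ranged = bwareafilt(bw, [50, 500]);     % keep objects whose area is 50-500 pixels
```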
Weird behaviour with bwareafilt
Given the above description of bwareafilt and your intended application:
Actually, I am using this function on snapshots taken from a webcam, but when I block the webcam completely, I get the error above.
... the error is self-explanatory. When you cover the webcam, the entire frame is black and no objects are found in the image. Because there are no objects, returning the object with the largest area makes no sense. That's why you get the error: you are asking bwareafilt for the largest object, but there are no objects in your image to begin with.
As such, if you want to use bwareafilt, what I suggest is you check to see if the entire image is black first. If it isn't black, then go ahead and use bwareafilt. If it is, then skip it.
Do something like this, assuming that b is the image you're trying to process:
if any(b(:))
    Iz = bwareafilt(b, 1);
else
    Iz = b;
end
The above code uses any to check whether there are any non-zero (white) pixels in your image b. If there are, bwareafilt is called as before. If there aren't, the output is simply set to b itself (which is an all-dark image anyway).
You can add conditions to make your function robust to any input, for example by adding a simple check that first tests whether the input image is all black; based on that condition, you then call your function to filter objects.
Given a shape in a logical image, I am trying to extract the field of view from any point inside the shape in MATLAB:
I tried something that involves testing each line going through the point, but it is really, really slow. (I hope to do it for each point of the shape, or at least each point of its contour, which is still quite a few.)
I think a faster method would work iteratively, by expanding a disk from the considered point, but I am not sure how to do it.
How can I find this field of view in an efficient way?
Any ideas or solution would be appreciated, thanks.
Here is a possible approach (this is the principle behind a function I wrote, available on MATLAB Central):
I created this test image and an arbitrary point of view:
testscene=zeros(500);
testscene(80:120,80:120)=1;
testscene(200:250,400:450)=1;
testscene(380:450,200:270)=1;
viewpoint=[250, 300];
imsize=size(testscene); % checks the size of the image
It looks like this (the circle marks the view point I chose):
The next line computes the longest distance to the edge of the image from the viewpoint:
maxdist=max([norm(viewpoint), norm(viewpoint-[1 imsize(2)]), norm(viewpoint-[imsize(1) 1]), norm(viewpoint-imsize)]);
angles=1:360; % use smaller increment to increase resolution
Then generate a set of end points uniformly distributed around the viewpoint:
endpoints = bsxfun(@plus, maxdist*[cosd(angles)' sind(angles)'], viewpoint);
intersec = zeros(numel(angles), 2);   % preallocate the intersection list
for k = 1:numel(angles)
    [CX, CY, C] = improfile(testscene, [viewpoint(1), endpoints(k,1)], [viewpoint(2), endpoints(k,2)]);
    idx = find(C);
    intersec(k,:) = [CX(idx(1)), CY(idx(1))];
end
This draws a line from the viewpoint in each direction specified in the array angles and finds the position of the first intersection with an obstacle or with the edge of the image.
This should help visualizing the process:
Finally, let's use the built-in roipoly function to create a binary mask from a set of coordinates:
FieldofView = roipoly(testscene,intersec(:,1),intersec(:,2));
Here is what it looks like (obstacles in white, visible field in gray, viewpoint in red):
I need to clean this picture: delete the writing "clean me" and brighten it.
As part of my homework in an image processing course, I may use the MATLAB function ginput to find specific points in the image (of course, in the script you should hard-code the coordinates you need).
You may use conv2, fft2, ifft2, fftshift, etc.
You may also use median, mean, max, min, sort, etc.
My basic idea was to take the white and black values from the middle of the picture and insert them into the other parts of the black and white stripes; however, this gives a very synthetic look to the picture.
Can you please give me a direction what to do? A median filter alone will not give good results.
The general technique for doing this is called inpainting. But in order to do it, you need a mask of the regions you want to inpaint. So, let us suppose that we managed to get a good mask and inpainted the original image, considering a morphological dilation of this mask:
To get that mask, we don't need anything much fancy. Start with a binarization of the difference between the original image and the result of a median filtering of it:
You can remove isolated pixels; join the pixels representing the stars of the flag with a dilation in the horizontal direction followed by another dilation with a small square; remove the largest component just created; and then perform a geodesic dilation of the result so far against the initial mask. This gives the good mask shown above.
Now to inpaint there are many algorithms, but one of the simplest ones I've found is described at Fast Digital Image Inpainting, which should be easy enough to implement. I didn't use it, but you could and verify which results you can obtain.
EDIT: I missed that you also wanted to brighten the image.
An easy way to brighten an image, without making the brighter areas even brighter, is by applying a gamma factor < 1. Being more specific to your image, you could first apply a relatively large lowpass filter, negate it, multiply the original image by it, and then apply the gamma factor. In this second case, the final image will likely be darker than the first one, so you multiply it by a simple scalar value. Here are the results for these two cases (left one is simply a gamma 0.6):
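A rough sketch of both variants (the filter size and gamma value are my guesses):

```matlab
img = im2double(Im);                % the grayscale flag image, assumed to be in Im
gammaOnly = img .^ 0.6;             % gamma < 1 lifts dark areas more than bright ones
% second variant: large lowpass, negate, multiply, then gamma and a scalar rescale
background = imgaussfilt(img, 50);  % heavy blur approximates the uneven illumination
evened = (img .* (1 - background)) .^ 0.6;
evened = evened / max(evened(:));   % stretch back to the full [0, 1] range
```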
If you really want to brighten the image, then you can apply a bilateral filter and binarize it:
I see two options for removing "clean me". Both rely on the horizontal similarity.
1) Use a long 1D low-pass filter in the horizontal direction only.
2) Use a 1D median filter maybe 10 pixels long
For both solutions you of course have to exclude the star part.
When it comes to brightness you could try a histogram equalization. However that won't fix the unevenness of the brightness. Maybe a high-pass before equalization can fix that.
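As a sketch of that suggestion (the averaging-filter size for the high-pass is a guess):

```matlab
eq = histeq(Im);                               % plain histogram equalization
% high-pass first: subtract a blurred copy to flatten the uneven brightness
lp = imfilter(im2double(Im), fspecial('average', [101 101]), 'symmetric');
eqFlat = histeq(mat2gray(im2double(Im) - lp)); % equalize the flattened image
```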
The simplest way to remove the text is, like KlausCPH said, to use a long 1-d median filter in the region with the stripes. In order to not corrupt the stars, you would need to keep a backup of this part and replace it after the median filter has run. To do this, you could use ginput to mark the lower right side of the star part:
% Mark lower right corner of star-region
figure();imagesc(Im);colormap(gray)
[xCorner,yCorner] = ginput(1);
close
xCorner = round(xCorner); yCorner = round(yCorner);
% Save star region
starBackup = Im(1:yCorner,1:xCorner);
% Clean up stripes
Im = medfilt2(Im,[1,50]);
% Replace star region
Im(1:yCorner,1:xCorner) = starBackup;
This produces
To fix the exposure problem (the middle part being brighter than the corners), you could fit a 2-D Gaussian model to your image and do a normalization. If you want to do this, I suggest looking into fit, although this can be a bit technical if you have not been working with model fitting before.
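One possible sketch using fit from the Curve Fitting Toolbox; the model expression and start points are my assumptions, not exact code:

```matlab
[X, Y] = meshgrid(1:size(Im, 2), 1:size(Im, 1));
g = fittype('a*exp(-((x - x0).^2/(2*sx^2) + (y - y0).^2/(2*sy^2)))', ...
            'independent', {'x', 'y'}, 'dependent', 'z');
% StartPoint follows fit's alphabetical coefficient order: a, sx, sy, x0, y0
f = fit([X(:), Y(:)], im2double(Im(:)), g, 'StartPoint', ...
        [1, size(Im,2)/4, size(Im,1)/4, size(Im,2)/2, size(Im,1)/2]);
model = reshape(f(X(:), Y(:)), size(Im));
ImFlat = im2double(Im) ./ (model + eps);   % divide out the fitted illumination
```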
My fitted 2-D Gaussian looks something like this:
Putting these two things together, gives:
I used the gausswin() function to make a Gaussian mask:
Pic_usa_g = abs(1 - gausswin( size(Pic_usa,2) ));
Pic_usa_g = Pic_usa_g + 0.6;
Pic_usa_g = Pic_usa_g .* 2;
Pic_usa_g = Pic_usa_g';
C = repmat(Pic_usa_g, size(Pic_usa,1),1);
After multiplying the image by this mask, you get the fixed image.
I am trying to generate the following "effect" from a basic shape in MATLAB:
But I don't even know what this process is called. Say I have an image containing the brown shape; what I want is to generate contours outside of it that get smoother as they grow bigger.
Is there either a name for this effect, a function to do this in MATLAB or an algorithm that does it from scratch?
thanks
I think you are looking for bwdist.
The image you are displaying looks like the positive part of a distance function from the boundary of your shape. You can compute this easily in MATLAB using the examples on the aforementioned manual page.
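A minimal sketch of the idea (the file name and band spacing are assumptions):

```matlab
I = imread('brown_shape.png');            % illustrative file name
bw = rgb2gray(I) > 0;                     % 1 inside the shape
D = bwdist(bw);                           % distance of every pixel to the shape
bands = mod(floor(D), 10) == 0 & D > 0;   % a one-pixel contour every 10 pixels
imshow(bands)
```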
Try this:
I = imread('brown_image.png');
I_bw = (rgb2gray(I) > 0); % or whatever, just so I_bw is 1 in the 'brown' region
r = 10;
se1 = strel('disk', r);
se2 = strel('disk', r-1);
imshow(imdilate(I_bw, se1) - imdilate(I_bw, se2))
Requires image processing toolbox, but the basic idea is to dilate the image twice with dilation elements that differ by 1 (or however thick you want the contours to be) and subtract the result of the smaller one from the bigger one. You could then color them however you want.