Image Cropping with a Matrix Filter

After some processing, I got a black & white mask of a BMP image.
Now I want to show only the part of the BMP image where the mask is white.
I'm a newbie with MATLAB (but I love it), and I've tried a lot of matrix tricks learned from Google, but none of them works (or I'm not doing them right).
Please give me some tips.
Thanks for your time in advance.

Assuming the mask is the same size as the image, then for a grayscale image you can just do:
maskedImage = yourImage .* mask;   % .* means element-wise (pointwise) multiplication
For color images, do the same operation on each of the three channels:
maskedImage(:,:,1) = yourImage(:,:,1) .* mask;
maskedImage(:,:,2) = yourImage(:,:,2) .* mask;
maskedImage(:,:,3) = yourImage(:,:,3) .* mask;
Then to visualize the result, do:
imshow(maskedImage, []);
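A minimal end-to-end sketch of the same idea, assuming the image comes from imread (typically uint8) and the mask is logical; the file names are placeholders:
% Read the image and a single-channel mask (placeholder file names).
yourImage = imread('image.bmp');
mask = imread('mask.bmp') > 0;                 % force the mask to be logical
% Cast the mask to the image class before multiplying
% (MATLAB is strict about mixing integer classes with other types).
maskCast = cast(mask, 'like', yourImage);
if ndims(yourImage) == 2                       % grayscale
    maskedImage = yourImage .* maskCast;
else                                           % RGB: apply the mask to every channel
    maskedImage = yourImage .* repmat(maskCast, [1 1 size(yourImage, 3)]);
end
imshow(maskedImage);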

Using one of the two MATLAB functions repmat or bsxfun, the masking operation can be performed in a single line of code for a source image with any number of channels.
Assuming that your image I is of size M-by-N-by-C and the mask is of size M-by-N, we can obtain the masked image either using repmat
I2 = I .* repmat(mask, [1, 1, size(I, 3)]);
or using bsxfun
I2 = bsxfun(@times, I, mask);
These are both very handy functions to know about and can be very useful when it comes to vectorizing your code in general. I would also recommend that you look through the answers to this question: In MATLAB, when is it optimal to use bsxfun?
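As a side note (assuming MATLAB R2016b or newer, where implicit expansion was introduced), the same masking can be written without repmat or bsxfun at all:
% Implicit expansion: the M-by-N mask is automatically expanded along the
% third dimension to match the M-by-N-by-C image I.
I2 = I .* cast(mask, 'like', I);   % cast keeps the mask in the same class as I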

Related

MATLAB Kinect sensor depth image (Too Dark)

I am using a Kinect 2 with MATLAB; however, the depth images shown in the video stream are much brighter than the ones I save in MATLAB.
Do you know a solution for this problem?
Firstly, you should provide the code that you are using at the moment so we can see where you are going wrong. This is generic advice for posting on any forum: give all your information so that others can help.
If you use a histogram to check your depth values, you will see that the image is a uint8 image with values from 0 to 255. Since the depth distances are rescaled to grayscale values, the result does not have enough contrast when displayed with imshow.
An easy workaround for displaying such images is to use some form of histogram equalization, for example:
figure(1);
C = adapthisteq(A, 'ClipLimit', 0.02, 'Distribution', 'rayleigh');
imshow(C);
The image will then be contrast-adjusted for display.
I used mat2gray and it solved the problem.
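For completeness, a minimal sketch of the mat2gray approach (the file name is a placeholder for whatever the Kinect acquisition returns):
depthImage = imread('depth_frame.png');   % placeholder for your captured depth frame
% mat2gray rescales the data so its minimum maps to 0 and its maximum maps
% to 1, which gives imshow enough contrast to work with.
figure;
imshow(mat2gray(depthImage));
% An equivalent display-only alternative is imshow(depthImage, []).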

Distance transform of grayscale image

MATLAB has bwdist and graydist. I was looking for something like bwdist but for grayscale images, and I wasn't sure what the mask specified in graydist is.
While working with the graydist function, I have tried:
T = graydist(image,image<=0)
T = graydist(image,image<=254)
I can't find an example of it being used other than that one MATLAB documentation page, and it uses different input arguments. An explanation or suggestions as to how to get the distance transform of a grayscale image in MATLAB would be really helpful!
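For what it's worth, a minimal sketch of how the second argument of graydist is normally used: it is a logical array marking the seed locations that distances are measured from, not a threshold of the whole image. The image and the seed coordinates below are arbitrary placeholders:
img = im2double(imread('coins.png'));   % any grayscale image ('coins.png' ships with the toolbox)
seeds = false(size(img));
seeds(50, 100) = true;                  % arbitrary seed pixel for illustration
T = graydist(img, seeds);               % gray-weighted distance from the seed
imshow(mat2gray(T));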

How to find the entropy of depth images in MATLAB?

I want to compute the depth entropy of a depth image in MATLAB (the same as this work). Unfortunately, the authors don't reply to my emails. There is a function, entropyfilt, that computes the local entropy of a grayscale image. I've used this function with a depth image captured by a Kinect, but it hasn't worked properly. Here is my input depth image:
Here is the code I used for computing the entropy:
J = entropyfilt(Depth);
imshow(mat2gray(J))
Sorry, my reputation isn't enough, so I can't upload my result image.
How can I compute the entropy image of a depth image? I want to obtain an image like figure 4 in the above paper.
Thanks in advance.
As written in the paper, for each pixel you first extract two patches from the image and then calculate the entropy of each patch. The formula for this is also in the paper and is well known in statistics.
If you want to use the function entropyfilt, you need to provide as a second argument a neighborhood that describes the patch (all pixels within the patch need to be 1, the others 0). This is detailed in the documentation of that function; a short sketch follows below.
The authors generate one color image from two entropy images; how they do so they seemingly forgot to mention.
I think the paper is of low quality.
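A sketch of entropyfilt with an explicit neighborhood; this is not the paper's exact method, and the file name and 15-by-15 patch size are assumptions:
Depth = imread('depth_frame.png');   % placeholder for your Kinect depth image
if size(Depth, 3) > 1
    Depth = rgb2gray(Depth);         % entropyfilt expects a single-channel image
end
nhood = true(15);                    % 15-by-15 patch around each pixel
J = entropyfilt(Depth, nhood);       % local entropy inside that patch
imshow(mat2gray(J));                 % rescale for display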
image1 = imread('your_image.png');   % placeholder path -- point it at your image
entropy(image1)                      % note: entropy() returns a single scalar for the whole image
imshow(image1)

Make a mask for each well in a grid

I have a grid of wells in an image that I'm trying to analyze in MATLAB. I want to create a box around each well to use as a mask. The way I'm trying to go about this is to find the offset vectors from the X and Y normals and then use those to make a grid, since I know the size of the wells.
I can mask out some of the wells but not all of them, but this doesn't matter since I know that there is a well in every position (see here). I can use regionprops to get the centers, but I can't figure out how to move to the next step.
Here is an image with the centers I can extract.
Some people have suggested that I do an FFT of the image, but I can't get it to work. Any thoughts or suggestions would be greatly appreciated. Thanks in advance!
Edit: Here is the mask with the centers from the centroid feature of regionprops.
Here's a quick and dirty 2 cents:
First, blur and invert the image so that the well lines have high intensity values compared to the rest and further analysis is less sensitive to noise:
im = double(imread('im.jpg'));
im = conv2(im, fspecial('gaussian', 10, 1), 'same');
im2 = abs(im - max(im(:)));
Then take a local threshold using the average intensity over a neighborhood of (more or less) a well size (~200 pixels):
im3 = imfilter(im2, fspecial('average', 200), 'replicate');
im4 = im2 - im3;
bw = im2bw(im4, 0);
Fill holes (or wells):
bw2 = imfill(bw, 'holes');
Remove objects smaller than some size:
bw3 = bwareaopen(bw2, 2000, 8);
imagesc(bw3);
You can take it from there...
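To "take it from there", one possible sketch for building a rectangular mask around every detected well, assuming bw3 from the steps above and a well size of roughly 200 pixels (the box size and variable names are assumptions):
stats = regionprops(bw3, 'Centroid');
centers = round(cat(1, stats.Centroid));     % one [x y] row per detected well
boxHalf = 100;                               % half of the assumed well size
wellMask = false(size(bw3));
for k = 1:size(centers, 1)
    rows = max(centers(k,2)-boxHalf, 1) : min(centers(k,2)+boxHalf, size(bw3,1));
    cols = max(centers(k,1)-boxHalf, 1) : min(centers(k,1)+boxHalf, size(bw3,2));
    wellMask(rows, cols) = true;             % square box around each centroid
end
imagesc(wellMask);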

Image enhancement - cleaning a given image from writing

I need to clean this picture: delete the writing "clean me" and make it brighter.
As part of my homework in an image processing course, I may use the MATLAB function ginput to find specific points in the image (of course, in the script you should hard-code the coordinates you need).
You may use conv2, fft2, ifft2, fftshift etc.
You may also use median, mean, max, min, sort, etc.
My basic idea was to use the white and black values from the middle of the picture and insert them into the other parts of the black and white stripes. However, this gives a very synthetic look to the picture.
Can you please give me a direction what to do? A median filter will not give good results.
The general technique to do such a thing is called inpainting, but in order to do it you need a mask of the regions that you want to inpaint. So, let us suppose that we managed to get a good mask and inpainted the original image considering a morphological dilation of this mask:
To get that mask, we don't need anything fancy. Start with a binarization of the difference between the original image and the result of median filtering it:
You can remove the isolated pixels; join the pixels representing the stars of the flag by a dilation in the horizontal direction followed by another dilation with a small square; remove the largest component just created; and then perform a geodesic dilation with the result so far against the initial mask. This gives the good mask above.
Now, to inpaint there are many algorithms, but one of the simplest I've found is described in Fast Digital Image Inpainting, which should be easy enough to implement. I didn't use it, but you could, and verify which results you obtain.
EDIT: I missed that you also wanted to brighten the image.
An easy way to brighten an image without making the brighter areas even brighter is to apply a gamma factor < 1. Being more specific to your image, you could first apply a relatively large lowpass filter, negate it, multiply the original image by it, and then apply the gamma factor. In this second case the final image will likely be darker than the first one, so you multiply it by a simple scalar value. Here are the results for these two cases (the left one is simply a gamma of 0.6):
If you really want to brighten the image, then you can apply a bilateral filter and binarize it:
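A rough sketch of the first steps of that mask construction; the 11-by-11 median window, the 0.1 threshold, and the file name are assumptions rather than values from the answer above:
Im = im2double(imread('flag.jpg'));            % placeholder file name
if size(Im, 3) > 1, Im = rgb2gray(Im); end     % medfilt2 needs a 2-D image
med = medfilt2(Im, [11 11]);                   % smooth away the thin writing
mask = abs(Im - med) > 0.1;                    % binarize the difference
mask = bwareaopen(mask, 20);                   % remove isolated pixels
mask = imdilate(mask, strel('square', 3));     % small dilation before inpainting
imshow(mask);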
I see two options for removing "clean me". Both rely on the horizontal similarity.
1) Use a long 1D low-pass filter in the horizontal direction only.
2) Use a 1D median filter maybe 10 pixels long
For both solutions you of course have to exclude the stars part.
When it comes to brightness, you could try histogram equalization. However, that won't fix the unevenness of the brightness. Maybe a high-pass filter before equalization can fix that.
Regards
The simplest way to remove the text is, as KlausCPH said, to use a long 1-D median filter in the region with the stripes. In order not to corrupt the stars, you need to keep a backup of that part and put it back after the median filter has run. To do this, you can use ginput to mark the lower right corner of the star region:
% Mark lower right corner of star-region
figure();imagesc(Im);colormap(gray)
[xCorner,yCorner] = ginput(1);
close
xCorner = round(xCorner); yCorner = round(yCorner);
% Save star region
starBackup = Im(1:yCorner,1:xCorner);
% Clean up stripes
Im = medfilt2(Im,[1,50]);
% Replace star region
Im(1:yCorner,1:xCorner) = starBackup;
This produces
To fix the exposure problem (the middle part being brighter than the corners), you could fit a 2-D Gaussian model to your image and do a normalization. If you want to do this, I suggest looking into fit, although this can be a bit technical if you have not worked with model fitting before.
My fitted 2-D Gaussian looks something like this:
Putting these two things together gives:
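A sketch of that Gaussian-fit idea using fit from the Curve Fitting Toolbox; the model, start values, and variable names are assumptions, not the actual fit used above:
Im = im2double(Im);                              % work in double
[ny, nx] = size(Im);
[X, Y] = meshgrid(1:nx, 1:ny);
% Gaussian-plus-offset surface; coefficients follow the argument order a, c, sx, sy, x0, y0.
gauss2 = @(a, c, sx, sy, x0, y0, x, y) ...
    a * exp(-((x - x0).^2 ./ (2*sx^2) + (y - y0).^2 ./ (2*sy^2))) + c;
ft = fittype(gauss2, 'independent', {'x', 'y'});
g = fit([X(:), Y(:)], Im(:), ft, ...
        'StartPoint', [0.5, 0.1, nx/2, ny/2, nx/2, ny/2]);   % rough initial guesses
background = reshape(g(X(:), Y(:)), ny, nx);     % evaluate the fitted surface
ImNorm = Im ./ background;                       % divide out the uneven exposure
imshow(mat2gray(ImNorm));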
I used the gausswin() function to make a Gaussian mask:
Pic_usa_g = abs(1 - gausswin( size(Pic_usa,2) ));
Pic_usa_g = Pic_usa_g + 0.6;
Pic_usa_g = Pic_usa_g .* 2;
Pic_usa_g = Pic_usa_g';
C = repmat(Pic_usa_g, size(Pic_usa,1),1);
After multiplying the image by this mask, you get the corrected image.
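The multiplication itself is not shown above; assuming Pic_usa is (or has been converted to) double, it might look like:
Pic_fixed = double(Pic_usa) .* C;    % apply the brightness-correction mask
imshow(mat2gray(Pic_fixed));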
