How to do frequency masking of an image in OpenCV

I would like to create a frequency mask in OpenCV but have no idea how to go about it. The frequency mask will be an ideal bandpass filter, so image filtering will be done in the frequency domain. For this example, let's say frequencies between 100 Hz and 200 Hz will be passed.
Anyone know how to do this?
Thanks in advance!

1) Perform the DFT of your image.
2) Make a filter matrix the same size as your image, containing a ring with inner radius R1 = wavelength_min and outer radius R2 = wavelength_max. The ring is filled with ones and the remaining elements are zeros.
3) Multiply the DFT of your image by this matrix.
4) Perform the inverse DFT of the result (see the sketch below).
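A minimal Python/OpenCV sketch of these steps (the file name and the radii r1, r2 are placeholder assumptions; note the radii are in pixels of the spectrum, not Hz):
import cv2
import numpy as np

# Load as grayscale; the file name and radii below are assumptions.
img = cv2.imread('image.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# 1. Forward DFT, with the zero frequency shifted to the center.
dft = cv2.dft(img, flags=cv2.DFT_COMPLEX_OUTPUT)
dft = np.fft.fftshift(dft, axes=(0, 1))

# 2. Ideal bandpass ring: ones where r1 <= radius <= r2, zeros elsewhere.
rows, cols = img.shape
y, x = np.ogrid[:rows, :cols]
radius = np.sqrt((y - rows / 2) ** 2 + (x - cols / 2) ** 2)
r1, r2 = 30, 80
mask = ((radius >= r1) & (radius <= r2)).astype(np.float32)

# 3. Multiply both the real and imaginary planes by the mask.
dft *= mask[:, :, np.newaxis]

# 4. Inverse DFT back to the spatial domain.
dft = np.fft.ifftshift(dft, axes=(0, 1))
filtered = cv2.idft(dft, flags=cv2.DFT_SCALE | cv2.DFT_REAL_OUTPUT)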

Related

entropyfilt in OpenCV

I am working on an image processing project and I have to use entropyfilt (from MATLAB).
I researched and found some information, but not enough. I can calculate the entropy value of an image, but I don't know how to write an entropy filter. There is a similar question on the site, but I didn't understand it either.
Can anybody help me understand the entropy filter?
From the MATLAB documentation:
J = entropyfilt(I) returns the array J, where each output pixel contains the entropy value of the 9-by-9 neighborhood around the corresponding pixel in the input image I. I can have any dimension. If I has more than two dimensions, entropyfilt treats it as a multidimensional grayscale image and not as a truecolor (RGB) image. The output image J is the same size as the input image I.
For each pixel, you look at the 9 by 9 area around the pixel and calculate the entropy. Since the entropy calculation is a nonlinear calculation, it is not something you can do with a simple kernel filter. You have to loop over each pixel and do the calculation on a per-pixel basis.
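A sketch of such a per-pixel loop in Python, using scipy's generic_filter to visit each 9-by-9 neighborhood (the border mode is an assumption; MATLAB pads symmetrically):
import numpy as np
from scipy import ndimage

def local_entropy(values):
    # Histogram the 9x9 neighborhood, then Shannon entropy in bits.
    hist, _ = np.histogram(values, bins=256, range=(0, 256))
    p = hist[hist > 0] / values.size
    return -np.sum(p * np.log2(p))

# Demo on a random 8-bit image; replace with your own data.
img = np.random.randint(0, 256, (64, 64)).astype(np.float64)
entropy_img = ndimage.generic_filter(img, local_entropy, size=9,
                                     mode='nearest')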

make a mask for each well in a grid

I have a grid of wells in an image that I'm trying to analyze in MATLAB. I want to create a box around each well to use as a mask. The way I am trying to go about this is to find the offset vectors from the X and Y normals and then use those to make a grid, since I know the size of the wells.
I can mask out some of the wells, but not all of them. That doesn't matter, though, since I know that there is a well in every position (see here). I can use regionprops to get the centers, but I can't figure out how to move to the next step.
Here is an image with the centers I can extract
Some people have suggested that I do an FFT of the image but I can't get it to work. Any thoughts or suggestions would be greatly appreciated. Thanks in advance!
Edit: Here is the mask with the centers from the centroid feature of regionprops.
Here's a quick and dirty two cents:
First blur and invert the image so that the well lines have high intensity values compared to the rest, and further analysis is less sensitive to noise:
im=double(imread('im.jpg'));
im=conv2(im,fspecial('Gaussian',10,1),'same');
im2=abs(im-max(im(:)));
Then, take a local threshold using the average intensity around a neighborhood of (more or less) a well size (~200 pixels)
im3=imfilter(im2,fspecial('average',200),'replicate');
im4=im2-im3;
bw=im2bw(im4,0);
Fill holes (or wells):
bw2 = imfill(bw,'holes');
Remove objects smaller than some size:
bw3 = bwareaopen(bw2, 2000, 8);
imagesc(bw3);
You can take it from there...
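From there, one way to get a box mask per well is to label the wells and cut a fixed-size box around each centroid. A Python/scipy sketch (the well size and the stand-in image are assumptions):
import numpy as np
from scipy import ndimage

# Stand-in for the cleaned binary image bw3 from the steps above.
bw3 = np.zeros((500, 500), dtype=bool)
bw3[100:180, 100:180] = True   # one fake "well" for the demo

labels, n = ndimage.label(bw3)
centroids = ndimage.center_of_mass(bw3, labels, range(1, n + 1))

half = 100  # half the known well size in pixels (an assumption)
boxes = []
for cy, cx in centroids:
    box = np.zeros(bw3.shape, dtype=bool)
    y0, x0 = int(cy) - half, int(cx) - half
    box[max(y0, 0):y0 + 2 * half, max(x0, 0):x0 + 2 * half] = True
    boxes.append(box)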

Percentage difference between two images

I have two images of the same height and width that look similar, but they are not exactly the same pixel by pixel: one of the images is shifted to the right by a few pixels.
I am currently using the ImageMagick compare command. It reports differences because it compares pixel by pixel. I also tried its fuzz attribute.
Please suggest another tool for comparing this type of image.
I don't know what you're really trying to achieve, but if you want a metric that expresses the similarity between the two images without taking the displacement into account, then maybe you should work in the frequency domain.
For instance, the magnitudes of the DFTs of your images should be nearly identical, since a translation only changes the phase; the difference between the two magnitude spectra should therefore be practically zero.
In fact, according to the Fourier shift theorem, you can even get an estimate of the displacement offset by calculating the inverse DFT of the normalized cross-power spectrum of the two DFTs (phase correlation).
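A short Python/OpenCV sketch of both ideas (the file names are placeholders; cv2.phaseCorrelate implements the offset estimation directly):
import cv2
import numpy as np

a = cv2.imread('a.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)
b = cv2.imread('b.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Translation estimate via phase correlation (Fourier shift theorem).
(dx, dy), response = cv2.phaseCorrelate(a, b)
print('estimated shift:', dx, dy)

# Displacement-insensitive similarity: compare magnitude spectra.
mag_a, mag_b = np.abs(np.fft.fft2(a)), np.abs(np.fft.fft2(b))
diff = np.linalg.norm(mag_a - mag_b) / np.linalg.norm(mag_a)
print('relative spectral difference:', diff)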

image enhancement - cleaning given image from writing

I need to clean this picture: delete the writing "clean me" and make it brighter.
As part of my homework in an image processing course, I may use the MATLAB function ginput to find specific points in the image (of course, in the script you should hard-code the coordinates you need).
You may use conv2, fft2, ifft2, fftshift etc.
You may also use median, mean, max, min, sort, etc.
My basic idea was to take the white and black values from the middle of the picture and insert them into the other parts of the black and white stripes. However, this gives a very synthetic look to the picture.
Can you please give me a direction on what to do? A median filter will not give good results.
The general technique for doing such things is called inpainting, but in order to do it you need a mask of the regions that you want to inpaint. So, let us suppose that we managed to get a good mask and inpainted the original image considering a morphological dilation of this mask:
To get that mask, we don't need anything fancy. Start with a binarization of the difference between the original image and the result of a median filtering of it:
You can remove isolated pixels; join the pixels representing the stars of your flag by a dilation in the horizontal direction followed by another dilation with a small square; remove the largest component just created; and then perform a geodesic dilation of the result so far against the initial mask. This gives the good mask above.
Now, to inpaint there are many algorithms, but one of the simplest I've found is described in Fast Digital Image Inpainting, and it should be easy enough to implement. I didn't use it here, but you could try it and see what results you obtain.
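As a rough Python/OpenCV sketch of the whole mask-and-inpaint pipeline (the threshold and kernel sizes are guesses, and cv2.inpaint uses the fast-marching method rather than the algorithm from that paper):
import cv2
import numpy as np

img = cv2.imread('flag.png', cv2.IMREAD_GRAYSCALE)  # placeholder name

# Mask: binarize the difference between the image and a median filter.
med = cv2.medianBlur(img, 21)
diff = cv2.absdiff(img, med)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Grow the mask slightly so letter edges are covered too.
mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))

# Inpaint the masked region (fast-marching method).
restored = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)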
EDIT: I missed that you also wanted to brighten the image.
An easy way to brighten an image, without making the brighter areas even brighter, is to apply a gamma factor < 1. Being more specific to your image, you could first apply a relatively large lowpass filter, negate it, multiply the original image by it, and then apply the gamma factor. In this second case the final image will likely be darker than the first one, so you multiply it by a simple scalar value. Here are the results for these two cases (the left one is simply gamma 0.6).
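A Python sketch of both variants (gamma 0.6 as above; the sigma and file name are assumptions):
import cv2
import numpy as np

img = cv2.imread('flag.png', cv2.IMREAD_GRAYSCALE).astype(np.float32) / 255.0

# Variant 1: plain gamma < 1 lifts dark regions more than bright ones.
bright = np.power(img, 0.6)

# Variant 2: weight by the negated lowpass first, rescale, then gamma.
lowpass = cv2.GaussianBlur(img, (0, 0), sigmaX=31)
weighted = img * (1.0 - lowpass)
bright2 = np.power(weighted / weighted.max(), 0.6)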
If you really want to brighten the image, then you can apply a bilateral filter and binarize it:
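A sketch of that route in Python (the filter parameters are guesses; Otsu picks the threshold automatically):
import cv2

img = cv2.imread('flag.png', cv2.IMREAD_GRAYSCALE)

# Edge-preserving smoothing, then a global threshold.
smooth = cv2.bilateralFilter(img, 9, 75, 75)
_, bw = cv2.threshold(smooth, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)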
I see two options for removing "clean me". Both rely on the horizontal similarity.
1) Use a long 1D low-pass filter in the horizontal direction only.
2) Use a 1D median filter, maybe 10 pixels long.
For both solutions you of course have to exclude the stars part.
When it comes to brightness, you could try histogram equalization. However, that won't fix the unevenness of the brightness; maybe a high-pass filter before equalization can fix that.
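For example, with OpenCV (a sketch; CLAHE is the tile-based variant that copes better with uneven brightness):
import cv2

img = cv2.imread('flag.png', cv2.IMREAD_GRAYSCALE)

# Global histogram equalization.
eq = cv2.equalizeHist(img)

# CLAHE equalizes per tile, which handles uneven illumination better.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
eq_local = clahe.apply(img)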
Regards
The simplest way to remove the text is, as KlausCPH said, to use a long 1-D median filter in the region with the stripes. In order not to corrupt the stars, you will need to keep a backup of that part and put it back after the median filter has run. To do this, you can use ginput to mark the lower right corner of the star region:
% Mark lower right corner of star-region
figure();imagesc(Im);colormap(gray)
[xCorner,yCorner] = ginput(1);
close
xCorner = round(xCorner); yCorner = round(yCorner);
% Save star region
starBackup = Im(1:yCorner,1:xCorner);
% Clean up stripes
Im = medfilt2(Im,[1,50]);
% Replace star region
Im(1:yCorner,1:xCorner) = starBackup;
This produces
To fix the exposure problem (the middle part being brighter than the corners), you could fit a 2-D Gaussian model to your image and normalize by it. If you want to do this, I suggest looking into fit, although it can be a bit technical if you have not worked with model fitting before.
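If a full model fit is too involved, a rougher normalization in the same spirit (a Python sketch; the large sigma and file name are assumptions) is to estimate the illumination with a heavy blur and divide it out:
import cv2
import numpy as np

img = cv2.imread('flag.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# A heavy Gaussian blur approximates the smooth illumination field.
illum = cv2.GaussianBlur(img, (0, 0), sigmaX=51)

# Divide it out and rescale back to the 8-bit range.
flat = img / (illum + 1e-6)
flat = cv2.normalize(flat, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)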
My fitted 2-D Gaussian looks something like this:
Putting these two things together, gives:
I used the gausswin() function to make a Gaussian mask:
Pic_usa_g = abs(1 - gausswin( size(Pic_usa,2) ));
Pic_usa_g = Pic_usa_g + 0.6;
Pic_usa_g = Pic_usa_g .* 2;
Pic_usa_g = Pic_usa_g';
C = repmat(Pic_usa_g, size(Pic_usa,1),1);
and after multiplying the image by the mask you get the fixed image.

Applying 1D Gaussian blur to a data set

I have some data set where each object has a Value and Price. I want to apply Gaussian Blur to their Price using their Value. Since my data has only 1 component to use in blurring, I am trying to apply 1D Gaussian blur.
My code does this:
totalPrice = 0;
totalValue = 0;
for each object.OtherObjectsWithinPriceRange()
    totalPrice += price;
    totalValue += Math.Exp(-value*value);
price = totalPrice/totalValue;
I see good results, but the 1D Gaussian blur algorithms I see online seem to use standard deviations, sigma, pi, etc. Do I need them, or are they strictly for 2D Gaussian blurs? Those implementations combine the 1D blur passes vertically and horizontally, so they are still accounting for 2D.
Also, I display the results as colors, but the white areas come out a little over 1 (white). How can I normalize this? Should I just clamp the values to 1? That's why I am wondering whether I am using the correct formula.
Your code applies some sort of a blur, though definitely not Gaussian. The Gaussian blur would look something like
kindaSigma = 1;
priceBlurred = object.price;
for each object.OtherObjectsWithinPriceRange()
    priceBlurred += price*Math.Exp(-value*value/kindaSigma/kindaSigma);
and even that is only assuming that value is proportional to some "distance" between the object and the other objects within the price range, whatever this "distance" means in your application.
To your questions.
A 2D Gaussian blur is completely equivalent to a combination of vertical and horizontal 1D Gaussian blurs done one after another. That's how the 2D Gaussian blur is usually implemented in practice.
You don't need any PI or sigmas as a multiplicative factor in front of the Gaussian - those merely scale the image and can be safely ignored.
The sigma (standard deviation) under the exponent has a major impact on the result, but it is not possible for me to tell you if you need it or not. It depends on your application.
Want more blur: use larger kindaSigma in the snippet above.
Want less blur: use smaller kindaSigma.
When kindaSigma is too small, you won't notice any blur at all. When kindaSigma is too large, the Gaussian blur effectively transforms itself into a moving average filter.
Play with it and choose what you need.
I am not sure I understand your normalization question. In image processing it is common to store each color component (R,G,B) as an unsigned char, so black is represented by (0,0,0) and white by (255,255,255). Of course, you are free to choose a different representation and take white as 1. But keep in mind that for visualization packages that use the standard 8-bit representation, a value of 1 means an almost black color. So you will likely need to manipulate and renormalize your image before displaying it.
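To tie the sigma and normalization points together, here is a small Python sketch of a normalized 1-D Gaussian blur over arbitrary "distances" (all names are made up). Dividing by the sum of the weights keeps the output within the input's range, so values never exceed 1:
import numpy as np

def gaussian_blur_1d(values, prices, sigma):
    # values: per-object coordinates; prices: the quantity to blur.
    blurred = np.empty_like(prices, dtype=float)
    for i, v in enumerate(values):
        w = np.exp(-((values - v) ** 2) / (2.0 * sigma ** 2))
        blurred[i] = np.sum(w * prices) / np.sum(w)  # normalized weights
    return blurred

values = np.array([0.0, 1.0, 2.5, 4.0])
prices = np.array([1.0, 0.5, 0.8, 0.2])
print(gaussian_blur_1d(values, prices, sigma=1.0))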
