Applying an image as a mask in MATLAB

I am new to image processing in MATLAB. My first goal is to implement the method from an article and compare my results with the authors' results.
The article can be found here: http://arxiv.org/ftp/arxiv/papers/1306/1306.0139.pdf
First problem, image quality: in Figure 7, masks are defined, but I couldn't find the mask data set, so I used a screenshot and the image quality is low. In my view, this can affect the results. Are there any suggestions?
Second problem, merging images: I want to apply mask 1 to the Lena image, but I don't want to use Paint =) Also, is it possible to merge the images while keeping Lena visible?

You need to create the mask array. The first step is probably to turn your captured image from Figure 7 into a black and white image:
Mask = im2bw(Figure7, 0.5);
Now the background (white) is all 1 and the black line (or text) is 0.
Let's make sure your image of Lena that you got from imread is actually grayscale:
LenaGray = rgb2gray(Lena);
Finally, apply your mask on Lena:
LenaAndMask = LenaGray .* uint8(Mask);   % cast the logical mask so the classes match
Of course, this last line won't work if Lena and Figure7 don't have the same size, but this should be an easy fix.
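If they don't match, one easy fix is to resize the mask to Lena's dimensions before multiplying. A minimal sketch, assuming Mask and LenaGray from above:
% Resize the mask to Lena's size and re-binarize, since
% interpolation produces intermediate values
Mask = imresize(double(Mask), size(LenaGray)) > 0.5;
LenaAndMask = LenaGray .* uint8(Mask);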

First of all, you should know that this paper was published on arXiv, which does not peer-review submissions. When a paper appears only on arXiv, it is always a good idea to find out more about the author and/or the university behind it.
Trust me on that: you do not need to waste your time on this paper.
I understand what you want, but getting the mask via print screen is not a good idea. The pixel values captured from the screen may not be the same as the original values, and zooming may change the size, so you need to be sure the sizes are the same.
If you do go with a print screen, the steps are as follows (see the sketch below):
Paste the image and crop out the mask.
Convert RGB to grayscale.
Threshold the grayscale image to get the binary mask.
Do not save the intermediate image as JPEG: compression distortions around high-frequency edges will change the edge shape, so use a lossless format such as PNG.
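A minimal sketch of those steps (the file name and crop rectangle are hypothetical; adjust them to your screenshot):
shot = imread('figure7_screenshot.png');        % hypothetical file name
maskRGB = imcrop(shot, [50 50 256 256]);        % [xmin ymin width height] of the mask area
maskGray = rgb2gray(maskRGB);
Mask = im2bw(maskGray, graythresh(maskGray));   % Otsu's threshold instead of a fixed 0.5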

Related

Equalize contrast and brightness across multiple images

I have roughly 160 images for an experiment. Some of the images, however, have clearly different levels of brightness and contrast compared to others. For instance, I have something like the two pictures below:
I would like to equalize the two pictures in terms of brightness and contrast (probably find some level in the middle rather than equating one image to the other, though that could be okay if it makes things easier). Would anyone have any suggestions on how to go about this? I'm not really familiar with image analysis in MATLAB, so please bear with my follow-up questions should they arise. There is already a question on here about equalizing luminance, brightness and contrast for a set of images, but the code doesn't make much sense to me (due to my lack of experience working with images in MATLAB).
Currently, I use GIMP to manipulate the images, but it's time-consuming with 160 images, and going by subjective eye judgment isn't very reliable. Thank you!
You can use histeq to perform histogram specification, where the algorithm will try its best to make the intensity distribution / histogram of the input image match that of a reference image. This is also called histogram matching, and you can read up on it in my previous answer.
In effect, the distribution of intensities between the two images should hopefully be the same. If you want to take advantage of this using histeq, you can specify an additional parameter that specifies the target histogram. Therefore, the input image would try and match itself to the target histogram. Something like this would work assuming you have the images stored in im1 and im2:
out = histeq(im1, imhist(im2));
However, imhistmatch is the better option. You call it almost the same way as histeq, except you don't have to compute the histogram manually; you just specify the actual image to match:
out = imhistmatch(im1, im2);
Here's a running example using your two images. Note that I'll opt to use imhistmatch instead. I read the two images directly from Stack Overflow, perform histogram matching so that the first image matches the intensity distribution of the second, and show the results all in one window.
% Read the two images directly from Stack Overflow
im1 = imread('http://i.stack.imgur.com/oaopV.png');
im2 = imread('http://i.stack.imgur.com/4fQPq.png');
% Match the intensity distribution of the first image to the second
out = imhistmatch(im1, im2);
% Show both originals and the matched result in one window
figure;
subplot(1,3,1); imshow(im1);
subplot(1,3,2); imshow(im2);
subplot(1,3,3); imshow(out);
This is what I get:
Note that the first image now more or less matches in distribution with the second image.
We can also flip it around and make the first image the source and we can try and match the second image to the first image. Just flip the two parameters with imhistmatch:
out = imhistmatch(im2, im1);
Repeating the above code to display the figure, I get this:
That looks a little more interesting. We can definitely see the shape of the second image's eyes, and some of the facial features are more pronounced.
As such, what you can finally do in the end is choose a good representative image that has the best brightness and contrast, then loop over each of the other images and call imhistmatch each time using this source image as the reference so that the other images will try and match their distribution of intensities to this source image. I can't really write code for this because I don't know how you are storing these images in MATLAB. If you share some of that code, I'd love to write more.
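For instance, a sketch under the assumption that the images sit in one folder as PNG files and that ref.png is the chosen reference (both names are hypothetical):
ref = imread('ref.png');                        % hypothetical reference image
files = dir('*.png');                           % all images in the current folder
for k = 1:numel(files)
    img = imread(files(k).name);
    matched = imhistmatch(img, ref);            % match intensities to the reference
    imwrite(matched, ['matched_' files(k).name]);
end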

Pulling non-transparent areas to the center of the transparent areas in an image

I am working on an image processing project that has a few steps, and I'm stuck on one of them. Here is the thing: I have segmented an image and subtracted the foreground from the background. Now I need to fill the background.
So far, I have tried inpainting algorithms. They don't work in my case because at least 40% of each background image is missing, and they fail when trying to complete that much of an image. (By the way, these images gave bad results even in Photoshop with the content-aware tool.)
Anyway, I've given up on inpainting and decided to try something else. In my project, I don't need to complete 100% of the background. Let me illustrate my solution:
As you see in the image above, I want to pull the image into the black area (which is transparent) with minimum corruption. Any MATLAB code samples, techniques, keywords or approaches would be great. If you need further explanation, feel free to ask.
I can think of two crude ways to fill the hole:
use roifill: this fills gaps in a 2D image while preserving image smoothness.
Alternatively, you can use bwdist to find the nearest non-hole pixel for every hole pixel and copy that neighbor's color:
% bw is true at known (non-hole) pixels and false inside the hole
[~, nnIdx] = bwdist(bw);          % nnIdx holds the linear index of each pixel's nearest known pixel
fillImg = IMG;
fillImg(~bw) = IMG(nnIdx(~bw));   % copy each hole pixel from its nearest known neighbor
Although this snippet works only for grayscale images, it is quite trivial to extend it to RGB color images (see the sketch below).
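For instance, a sketch of the RGB extension, reusing the same nearest-neighbor index for all three channels (bw and IMG as above, with IMG now M-by-N-by-3):
[~, nnIdx] = bwdist(bw);                % one index map shared by all channels
fillImg = IMG;
for c = 1:3
    channel = IMG(:,:,c);
    channel(~bw) = channel(nnIdx(~bw)); % fill holes channel by channel
    fillImg(:,:,c) = channel;
end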

How to segment part of moving image based on details from fixed image in MATLAB?

I'm currently working on MRI images, and each dataset consists of a series of images. All I need to do is segment part of the moving image(s) based on details from a provided fixed image, strictly by using an image registration method.
I have tried some of the available code and done some tweaking, but all I got was a warped moving image based on features from the fixed image, which was correct but not what I expected.
To help with the idea, here are some of those MRI images [1]:
Fixed image:
Moving image:
The plan is to segment only the total area (quadriceps, inner and outer bone sections) of the moving image as per the details from the fixed image, i.e. morphologically warp the boundary of the moving image according to the fixed image boundary.
Any idea/suggestions as to how this could be done?
[1] As a new user I'm unable to post/attach more than 2 links/images, but do let me know should you need further images.
'All I need to do is to segment part of the moving image(s)': this is certainly not a trivial thing to do. It is called segmentation by deformable models, and there is a lot of literature on the subject. Also, your fixed image is very different from the moving image, which doesn't help.
Here are a couple of ideas to start, but you will probably need to go into more details for your application.
I1=imread('fixed.png');
I2=imread('moving.png');
model=im2bw(I1,0.54);
imshowpair(I1, model);
This is a simple thresholding segmentation to isolate that blob in the middle of the image. The value 0.54 was obtained by fiddling; you can certainly do a better job of segmenting your fixed image.
Here is the segmented fixed image, purple is inside, green is outside.
Now, let's deform this mask to fit the moved image:
masked = activecontour(I2,model, 20, 'Chan-Vese');
imshowpair(I2,masked);
Result:
You can automate this in a loop over all your images, deforming each subsequent mask to fit the next frame (see the sketch below). Try different parameters of activecontour as well.
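A sketch of such a loop, assuming the slices are stored in a cell array called frames (a name I made up) and model is the initial mask from above:
mask = model;                            % seed with the mask from the fixed image
results = cell(size(frames));
for k = 1:numel(frames)
    mask = activecontour(frames{k}, mask, 20, 'Chan-Vese');
    results{k} = mask;                   % each deformed mask seeds the next frame
end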
Edit: here is another way I can think of.
In the following code, Istart is the original fixed image, Mask is the segmented region on that image (the one you called 'fixed' in your question) and Istep is the moved image.
I first turned the segmented region into a binary mask, this is not strictly necessary:
t=graythresh(Mask);
BWmask=im2bw(Mask, t);
Let's display the masked original image:
imshowpair(BWmask, Istart)
The next step was to compute intensity-based registration between the start and step images:
[optimizer, metric] = imregconfig('monomodal');
optimizer.MaximumIterations = 300;
Tform=imregtform(Istart, Istep, 'affine', optimizer, metric);
And warp the mask according to this transformation:
WarpedMask = imwarp(BWmask, Tform, 'bicubic', 'OutputView', imref2d(size(Istart)));
Now let's have a look at the result:
imshowpair(WarpedMask, Istep);
It's not perfect, but it is a start. I think your main issue is that your mask contains elements that are quite different from each other (that middle blob vs. the darker soft tissue in the middle). If I were you, I would try to segment these structures separately.
Good luck!

How to reconstruct Bayer to RGB from Canon RAW data?

I'm trying to reconstruct RGB from RAW Bayer data from a Canon DSLR but am having no luck. I've taken a peek at the dcraw.c source, but its lack of comments makes it a bit tough to get through. Anyway, I have debayering working but I need to then take this debayered data and get something that looks correct. My current code does something like this, in order:
Demosaic/debayer
Apply white balance multipliers (I'm using the following ones: 1.0, 2.045, 1.350. These work perfectly in Adobe Camera Raw as 5500K, 0 Tint.)
Multiply the result by the inverse of the camera's color matrix (see the sketch after this list for applying a 3x3 matrix to an image)
Multiply the result by an XYZ to sRGB matrix from Bruce Lindbloom's site (the D50 sRGB one)
Set the white/black point; I am using an input levels control for this
Adjust gamma
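For steps 3 and 4, one convenient way to apply a 3x3 colour matrix to every pixel is to flatten the image first. A minimal sketch, where C stands for whichever 3x3 matrix you are applying (the variable names are mine):
% Apply a 3x3 colour matrix C to an M-by-N-by-3 double image img
pixels = reshape(img, [], 3);        % one pixel per row, columns are R, G, B
pixels = pixels * C';                % same as out = C * in for each pixel column vector
img = reshape(pixels, size(img));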
Some of what I've read says to apply the white balance and black point correction before the debayer. I've tried, but it's still broken.
Do these steps look correct? I'm trying to determine if the problem is 1.) my sequence of operations, or 2.) the actual math being used.
The first step should be setting the black and saturation points, because when applying white balance you have to take care of saturated pixels in order to avoid magenta highlights:
Then, before demosaicing, apply white balancing. See here (http://www.guillermoluijk.com/tutorial/dcraw/index_en.htm) how applying white balance only after demosaicing introduces artifacts.
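A rough sketch of white balancing on the mosaic itself, before demosaicing. It assumes an RGGB layout and that the question's multipliers are ordered G, R, B; verify both assumptions against your camera:
raw = double(imread('mosaic.tiff'));                    % hypothetical raw mosaic file
raw(1:2:end, 1:2:end) = raw(1:2:end, 1:2:end) * 2.045;  % red sites (assumed red gain)
raw(2:2:end, 2:2:end) = raw(2:2:end, 2:2:end) * 1.350;  % blue sites (assumed blue gain)
rgb = demosaic(uint16(raw), 'rggb');                    % demosaic the balanced mosaic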
After the first step (debayering) you should have a proper RGB image with correct colors. The remaining steps are just cosmetics, so I'm guessing there's something wrong at step one.
One problem could be that the Bayer pattern you're using to generate the RGB image is different from the CFA pattern of the camera. Match the sensor alignment in your code to that of the camera!
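A quick way to check is to demosaic with all four alignments and see which one gives plausible colors. A hypothetical sketch, with raw being the uint16 mosaic:
patterns = {'rggb', 'grbg', 'gbrg', 'bggr'};
figure;
for k = 1:4
    subplot(2, 2, k);
    imshow(demosaic(raw, patterns{k}));  % a wrong alignment shows up as wrong colors
    title(patterns{k});
end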

How do I deal with brightness rescaling after FFT'ing and spatially filtering images?

Louise here. I've recently started experimenting with Fourier transforming images and spatially filtering them. For example, here's one of a fireplace, high-pass filtered to remove everything below ten cycles per image:
http://imgur.com/ECa306n,NBQtMsK,Ngo8eEY#0 - first image (sorry, I can't post images on Stack Overflow because I haven't got enough reputation).
As we can see, the image is very dark. However, if we rescale it to [0,1] we get
http://imgur.com/ECa306n,NBQtMsK,Ngo8eEY#0 - second image
and if we raise everything in the image to the power of -0.5 (we can't raise to powers greater than 1, as the image data is all between 0 and 1 and would thus get smaller), we get this:
same link - third image
My question is: how should we deal with reductions in dynamic range due to high-/low-pass filtering? I've seen lots of filtered images online, and they all seemed to have brightness profiles similar to the original image, without manipulation.
Should I be leaving the centre pixel of the frequency domain (the DC value) alone, and not removing it when high-pass filtering?
Is there a commonplace transform (like histogram equalisation) that I should be using after the filtering?
Or should I just interpret the brightness reduction as normal, because some of the information in the image has been removed?
Thanks for the advice :)
Best,
Louise
I agree with Connor: the best way to preserve brightness is to keep the origin (DC) value unchanged. This is common practice. That way you will get an image similar to your second one, because you do not change the average grey level of the image. Removing the DC value with a high-pass filter sets it to 0, and some rescaling is needed afterwards.
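A minimal sketch of that idea, assuming img is a grayscale image of class double and using the ten-cycle cutoff from the question:
F = fftshift(fft2(img));                          % move the DC term to the centre
[rows, cols] = size(img);
[u, v] = meshgrid(1:cols, 1:rows);
cu = floor(cols/2) + 1;                           % column of the DC term
cv = floor(rows/2) + 1;                           % row of the DC term
mask = sqrt((u - cu).^2 + (v - cv).^2) >= 10;     % keep frequencies >= 10 cycles/image
mask(cv, cu) = true;                              % ...but leave the DC value alone
filtered = real(ifft2(ifftshift(F .* mask)));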
