I am new to MATLAB and am working on a project which will take as input an image like this
As we can see, it has a plain blue background, and the system will generate a passport-size image from it with given ratios. First I am working on separating the background from the person. The approach I found is: if a pixel in the RGB matrices of the image is blue, then it is background, and the rest is the person. But I am a little confused about whether this approach is correct or not. If it is correct, how can I tell whether the current pixel is blue or not, and can I do it with the MATLAB function find? Any help would be appreciated.
If you want to crop your image based on the person's face, then there is no need to separate the background from the foreground. Nowadays you can easily find ready-made implementations of face detection, so unless you want to implement your own method because the ready one fails, this should be a non-issue. See:
Show[img,
Graphics[{EdgeForm[{Yellow, Thick}], Opacity[0],
Rectangle @@@
FindFaces[img = Import["http://i.stack.imgur.com/cSwzj.jpg"]]}]]
Supposing the face is detected correctly, you can expand/retract its bounding box to match the size you are after.
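The snippet above is Mathematica. If you want to stay in MATLAB (as the question asks), the Computer Vision Toolbox provides a ready-made cascade face detector; this is a minimal sketch of my own, not part of the original answer:
% Detect faces and draw their bounding boxes (Computer Vision Toolbox assumed).
img = imread('http://i.stack.imgur.com/cSwzj.jpg');
detector = vision.CascadeObjectDetector();   % default model finds frontal faces
bbox = step(detector, img);                  % one [x y width height] row per face
imshow(insertShape(img, 'Rectangle', bbox, 'LineWidth', 3));
From the returned box you can then grow the crop window to the passport aspect ratio you need.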
I have roughly 160 images for an experiment. Some of the images, however, have clearly different levels of brightness and contrast compared to others. For instance, I have something like the two pictures below:
I would like to equalize the two pictures in terms of brightness and contrast (probably find some level in the middle rather than equate one image to the other, though that would be okay if it makes things easier). Would anyone have any suggestions as to how to go about this? I'm not really familiar with image analysis in MATLAB, so please bear with my follow-up questions should they arise. There is a question about Equalizing luminance, brightness and contrast for a set of images already on here, but the code doesn't make much sense to me (due to my lack of experience working with images in MATLAB).
Currently I use GIMP to manipulate images, but it's time-consuming with 160 images, and going by subjective eye judgment isn't very reliable. Thank you!
You can use histeq to perform histogram specification, where the algorithm will try its best to make the target image match the distribution of intensities / histogram of a source image. This is also called histogram matching, and you can read more about it in my previous answer.
In effect, the distribution of intensities between the two images should hopefully be the same. If you want to take advantage of this using histeq, you can specify an additional parameter that specifies the target histogram. Therefore, the input image would try and match itself to the target histogram. Something like this would work assuming you have the images stored in im1 and im2:
out = histeq(im1, imhist(im2));
However, imhistmatch is the better function to use here. You call it almost the same way you'd call histeq, except you don't have to compute the histogram manually; you just specify the actual image to match:
out = imhistmatch(im1, im2);
Here's a running example using your two images. Note that I'll opt for imhistmatch. I read the two images directly from Stack Overflow, perform histogram matching so that the first image matches the second image's intensity distribution, and show the results in one window:
im1 = imread('http://i.stack.imgur.com/oaopV.png');  % image to adjust
im2 = imread('http://i.stack.imgur.com/4fQPq.png');  % reference image
out = imhistmatch(im1, im2);  % match im1's intensities to im2's histogram
figure;
subplot(1,3,1);
imshow(im1);   % original
subplot(1,3,2);
imshow(im2);   % reference
subplot(1,3,3);
imshow(out);   % matched result
This is what I get:
Note that the first image now more or less matches in distribution with the second image.
We can also flip it around and make the first image the source, trying to match the second image to the first. Just swap the two parameters of imhistmatch:
out = imhistmatch(im2, im1);
Repeating the above code to display the figure, I get this:
That looks a little more interesting. We can definitely see the shape of the second image's eyes, and some of the facial features are more pronounced.
As such, what you can finally do is choose a good representative image that has the best brightness and contrast, then loop over each of the other images and call imhistmatch each time using this representative image as the reference, so that the other images match their intensity distributions to it. I can't write exact code for this because I don't know how you are storing these images in MATLAB; if you share some of that code, I'd love to write more.
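In the meantime, here is a minimal sketch assuming the images are PNG files in a folder named images (the folder names, extension, and reference filename are all placeholders):
% Hypothetical batch loop: match every image to one chosen reference image.
ref   = imread('reference.png');                       % the representative image
files = dir(fullfile('images', '*.png'));              % assumed folder and extension
for k = 1:numel(files)
    img = imread(fullfile('images', files(k).name));
    out = imhistmatch(img, ref);                       % match intensities to ref
    imwrite(out, fullfile('matched', files(k).name));  % 'matched' folder must exist
end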
I am making an image-processing project which has a few steps, and I am stuck on one of them. Here is the thing: I have segmented an image and subtracted the foreground from the background. Now I need to fill the background.
So far I have tried inpainting algorithms. They don't work in my case because at least 40% of each background image is missing, and the inpainting methods fail when they have to reconstruct that much of an image. (By the way, these images gave bad results even in Photoshop with the Content-Aware tool.)
Anyway, I've given up on inpainting and decided on something else. In my project I don't need to complete 100% of the background. Let me illustrate my solution:
As you see in the image above, I want to stretch the image into the black area (which is transparent) with minimum corruption. Any MATLAB code samples, techniques, keywords, or approaches would be great. If you need further explanation, feel free to ask.
I can think of two crude ways to fill the hole:
Use roifill: this fills gaps in a 2D image while preserving image smoothness.
Alternatively, you can use bwdist to compute the nearest neighbor of each hole pixel and assign it its nearest neighbor's color:
[~, nnIdx] = bwdist(bw);           % bw is true for known (non-hole) pixels
fillImg = img;
fillImg(~bw) = img(nnIdx(~bw));    % copy each hole pixel's nearest known value
Although this code snippet works only for gray images, it is quite trivial to extend it to RGB color images, as shown below.
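For instance, a per-channel sketch (assuming img is an M-by-N-by-3 array and bw is the M-by-N mask of known pixels):
% Hypothetical RGB extension: reuse the same nearest-neighbor indices per channel.
[~, nnIdx] = bwdist(bw);
fillImg = img;
for c = 1:3
    chan = img(:,:,c);
    chan(~bw) = chan(nnIdx(~bw));   % fill holes from the nearest known pixel
    fillImg(:,:,c) = chan;
end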
Basically what I'm trying to do is overlay two images using predefined points on each image.
The images will probably be different sizes or scaled differently; I don't know this for sure yet. But the images are of the same thing. So what I want to do is say that this spot on image one is the same as this spot on image two, do this for multiple spots, and then have MATLAB resize or transform the images so that all those points line up and the two images can be overlaid. The thing that's confusing me is having MATLAB automatically adjust the images so that they "fit" together.
I have no idea where to start on this, and was just hoping to get a general idea of what I may be able to do.
Just in case someone else knows how to do this, I'll throw in what else I need to do. After the two images are on top of each other, one image will be a region map and the other a real image. What I need MATLAB to do is count the number of dots from the real image in each region of the map.
Thanks for any help.
What you are trying to do is called image registration, which is a very common image-processing task. You won't need to write much code because MATLAB has built-in functions for this. You use cp2tform to create a transform from the first image to the second, and can then apply the transform to the first image using the imtransform function. The code will look something like this, assuming the x,y coordinates of the control points are in m-by-2 matrices called points1 for image1 and points2 for image2:
tform = cp2tform(points1, points2, 'similarity');
registered = imtransform(image1, tform);   % warp image1 with the estimated transform
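If you don't have the control points yet, MATLAB's cpselect tool lets you click corresponding points in the two images interactively. A sketch of the full workflow (the 'XData'/'YData' options, which place the result in image2's coordinate frame, are my addition):
% Pick matching points by hand, estimate the transform, warp image1
% into image2's frame so the two can be overlaid directly.
[points1, points2] = cpselect(image1, image2, 'Wait', true);  % m-by-2 [x y] each
tform = cp2tform(points1, points2, 'similarity');
registered = imtransform(image1, tform, ...
    'XData', [1 size(image2,2)], 'YData', [1 size(image2,1)]);
imshowpair(registered, image2);   % quick visual check of the alignment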
I'm interested in charcoal-like filters such as the Photoshop Photocopy filter or Note Paper.
Does anyone have a paper or some instructions on how this kind of filter works?
Ideally, I want to create the following:
Input:
Output:
Greetings
I think it's a process akin to pan-sharpening. I could get a quite similar image in GIMP by the following steps (a rough MATLAB sketch of the same pipeline follows the list):
Converting to gray
Duplicating into two layers
Lightly blurring one layer
Edge-detecting in the other layer with a DoG (difference of Gaussians) filter with a large radius
Compositing the two layers, playing a bit with the transparency.
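A rough MATLAB translation of those steps (the filename, sigmas, and blend weights are guesses of mine to tune, not values from the recipe):
% Sketch of the recipe above: gray conversion, light blur, DoG edges, blend.
img   = im2double(rgb2gray(imread('input.jpg')));    % 1. convert to gray
soft  = imgaussfilt(img, 1);                         % 2-3. lightly blurred copy
dog   = imgaussfilt(img, 2) - imgaussfilt(img, 6);   % 4. DoG with a large radius
edges = mat2gray(-dog);                              % dark edges on a light ground
out   = 0.6*soft + 0.4*edges;                        % 5. composite the two layers
imshow(out);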
What this filter does is convert the color picture into a 0-1 bitmap. It typically uses a threshold function which returns 1 (white) for some values and 0 (black) for the others.
One simple approach is to transform the image from color to grayscale, then select a shade of gray above which everything is white and below which everything is black. The actual threshold could be made adaptive depending on the brightness of the picture (you want a certain percentage of pixels to be white).
It can also be adaptive based on the context within the picture (i.e. a dark area may still have some white pixels to show local contrast). The trees behind the house are not all black because the filtering is sensitive to the average darkness of the region.
Also note that the area close to the light gap in the tree has a cluster of dark pixels because of its relative darkness. The edges of the house and the bench are also highlighted: there is an edge-detection element at play.
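As a concrete illustration of both ideas in MATLAB (the 50% white target and the filename are arbitrary choices of mine):
% Global threshold: pick it so roughly half of the pixels end up white.
gray = rgb2gray(imread('input.jpg'));
t    = median(double(gray(:)));                  % ~50% of pixels lie above this
bwGlobal = gray > t;
% Adaptive variant: the threshold follows local brightness.
bwLocal  = imbinarize(gray, adaptthresh(gray, 0.5));
imshowpair(bwGlobal, bwLocal, 'montage');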
I do not know exactly which effect your example uses, but there are a variety of similar ones. As VSOverFlow pointed out, thresholding an image would give something very similar, though I do not think that is what is being used. OpenCV has a function for this; its documentation can be found here. You may also want to look into Otsu's method for thresholding.
Again, as VSOverFlow pointed out, there is an edge-detection element at play as well. You may want to investigate the Sobel and Prewitt filters. Those are three simple options that will give you something similar to the image you provided; perhaps you could threshold the result of the Prewitt filter? I have no knowledge of how Photoshop implements its filters, so if none of these options is close enough to what you are looking for, I would recommend looking for information on the specific implementations of those filters in Photoshop.
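A quick way to try all three options in MATLAB (illustrative only; the filename is a placeholder):
% Compare edge detection against plain Otsu thresholding.
gray      = rgb2gray(imread('input.jpg'));
bwSobel   = edge(gray, 'sobel');                   % Sobel edges
bwPrewitt = edge(gray, 'prewitt');                 % Prewitt edges
bwOtsu    = imbinarize(gray, graythresh(gray));    % Otsu's threshold
figure;
subplot(1,3,1); imshow(bwSobel);   title('Sobel');
subplot(1,3,2); imshow(bwPrewitt); title('Prewitt');
subplot(1,3,3); imshow(bwOtsu);    title('Otsu');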
I need to remove the blur from this image:
Image source: http://www.flickr.com/photos/63036721#N02/5733034767/
Any ideas?
Although previous answers are right when they say that you can't recover lost information, you could investigate a little and make a few guesses.
I downloaded your image at what seems to be the original size (75x75), and here you can see a zoomed segment (one little square = one pixel).
It seems a pretty linear grayscale! Let's verify it by plotting the intensities of the central row. In Mathematica:
ListLinePlot[First /@ ImageData[i][[38]][[1 ;; 15]]]
So, it is effectively linear, starting at zero and ending at one.
So you may guess it was originally a B&W image, linearly blurred.
The easiest way to deblur it (not always giving good results, but good enough in your case) is to binarize the image with a 0.5 threshold. Like this:
And this is a possible way. Just remember we are guessing a lot here!
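For reference, the same binarization in MATLAB is a one-liner (a sketch; the filename is a placeholder and a grayscale image is assumed):
img = im2double(imread('blurred.png'));   % hypothetical filename
bw  = img > 0.5;                          % binarize at the 0.5 threshold
imshow(bw);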
HTH!
You cannot generally retrieve missing information.
If you know what it is an image of (in this case a Gaussian or Airy profile, so it's probably an out-of-focus image of a point source), you can determine the characteristics of the point.
Another technique is to try to determine the characteristics of the blurring, especially if you have many images from the same blurred system, and then iteratively create a possible source image, blur it by that convolution, and compare it to the blurred image.
This is the general technique used to make radio astronomy source maps (images), and it was used for the flawed Hubble Space Telescope images.
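MATLAB's Image Processing Toolbox ships an iterative deconvolution of this flavor; here is a minimal sketch assuming a roughly Gaussian blur (the filename, PSF size, sigma, and iteration count are guesses to tune):
img   = im2double(imread('blurred.png'));   % hypothetical filename
psf   = fspecial('gaussian', 7, 2);         % guessed point-spread function
sharp = deconvlucy(img, psf, 20);           % 20 Richardson-Lucy iterations
imshowpair(img, sharp, 'montage');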
When working with images, one of the most common operations is applying a convolution filter. There is a "sharpen" filter that does what it can to remove blur from an image. An example of a sharpen filter can be found here:
http://www.panoramafactory.com/sharpness/sharpness.html
Some programs like MATLAB make convolution really easy: conv2(A,B).
And most decent photo editors have these filters under some name or another (usually "sharpen").
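For instance, a basic sharpen kernel applied with conv2 (the kernel values are a common textbook choice, not taken from the page linked above):
img    = im2double(imread('blurred.png'));  % hypothetical filename; grayscale assumed
kernel = [0 -1 0; -1 5 -1; 0 -1 0];         % boost the center, subtract the neighbors
sharp  = conv2(img, kernel, 'same');
imshow(sharp, []);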
But keep in mind that filters can only do so much. In theory, the actual information has been lost in the blurring process, and it is impossible to perfectly reconstruct the initial image (no matter what TV would have you believe).
In this case it seems like you have a very simple image with only black and white, so you could always use a simple threshold: set everything above a certain value to white and everything below it to black. Once again, most photo-editing software makes this really easy.
You cannot retrieve missing information, but under certain assumptions you can sharpen.
Try unsharp masking.
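In MATLAB that is the imsharpen function; a quick sketch (the filename and parameter values are illustrative):
img   = imread('blurred.png');                      % hypothetical filename
sharp = imsharpen(img, 'Radius', 2, 'Amount', 1.5); % unsharp masking
imshowpair(img, sharp, 'montage');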