Pulling non-transparent areas to the center of the transparent areas in an image

I am working on an image processing project which has a few steps, and I am stuck on one of them. Here is the thing: I have segmented an image and subtracted the foreground from the background. Now I need to fill the background.
So far, I have tried inpainting algorithms. They don't work in my case because at least 40% of each background image is missing. I mean, they fail when they have to complete 40% of an image. (By the way, these images give bad results even in Photoshop with the content-aware fill tool.)
Anyway, I've given up on inpainting and decided on something else. In my project, I don't need to complete 100% of the background. Let me illustrate my solution:
As you see in the image above, I want to pull the image into the black area (which is transparent) with minimal corruption. Any MATLAB code samples, techniques, keywords, or approaches would be great. If you need further explanation, feel free to ask.

I can think of two crude ways to fill the hole:
use roifill: this fills gaps in a 2D grayscale image by smoothly interpolating inward from the region boundary.
Alternatively, you can use bwdist to find the nearest non-hole neighbor of each black pixel and assign each black pixel its nearest neighbor's color:
% bw is true at the transparent (hole) pixels
[~, nnIdx] = bwdist(~bw);      % linear index of the nearest non-hole pixel
fillImg = img;
fillImg(bw) = img(nnIdx(bw));  % copy each hole pixel from its nearest neighbor
Although this snippet works only for grayscale images, it is quite trivial to extend it to RGB color images, as sketched below.
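For completeness, here is a minimal sketch of that RGB extension, still assuming img is the image and bw is true at the hole pixels:

% Nearest-neighbor fill applied to each color channel separately.
[~, nnIdx] = bwdist(~bw);          % nearest known pixel for every location
fillImg = img;
for c = 1:size(img, 3)
    channel = img(:, :, c);
    channel(bw) = channel(nnIdx(bw));
    fillImg(:, :, c) = channel;
end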

Related

Applying an image as a mask in matlab

I am a new user of image processing via MATLAB. My first aim is to reproduce the article and compare my results with the authors' results.
The article can be found here: http://arxiv.org/ftp/arxiv/papers/1306/1306.0139.pdf
First problem, image quality: In Figure 7, masks are defined, but I couldn't find the mask data set, so I am using a screenshot, and the image quality is low. In my view, this can affect the results. Are there any suggestions?
Second problem, merging images: I want to apply mask 1 to the Lena image, but I don't want to use Paint =) On the other hand, is it possible to merge the images while keeping Lena?
You need to create the mask array. The first step is probably to turn your captured image from Figure 7 into a black and white image:
Mask = im2bw(Figure7, 0.5);
Now the background (white) is all 1s and the black lines (or text) are 0.
Let's make sure your image of Lena that you got from imread is actually grayscale:
LenaGray = rgb2gray(Lena);
Finally, apply your mask on Lena:
LenaAndMask = LenaGray.*Mask;
Of course, this last line won't work if Lena and Figure7 don't have the same size, but this should be an easy fix.
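Putting the steps together, a minimal end-to-end sketch might look like this; the filenames and the imresize step are assumptions:

% Hypothetical end-to-end version of the steps above; filenames are placeholders.
Lena = imread('lena.png');
Figure7 = imread('figure7_screenshot.png');

Mask = im2bw(Figure7, 0.5);              % background -> 1, mask lines -> 0
LenaGray = rgb2gray(Lena);

% If the sizes differ, resize the mask to match Lena.
Mask = imresize(Mask, size(LenaGray));

LenaAndMask = LenaGray .* uint8(Mask);   % cast the logical mask before multiplying
imshow(LenaAndMask);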
First of all, you should know that this paper was published on arXiv. When papers are published on arXiv, it is always a good idea to learn more about the author and/or the university that published the paper.
TRUST me on that: you do not need to waste your time on this paper.
I understand your need, but it is not a good idea to get the mask by taking a screenshot. The pixel values captured from a screenshot may not be the same as the original values, and zoom may change the size, so you need to be sure that the sizes are the same.
If you do use a screenshot, the steps would be:
1. Take the screenshot and paste the image.
2. Crop the mask.
3. Convert RGB to grayscale.
4. Threshold the grayscale image to get the binary mask.
Note that if you save the image as JPEG, compression distortions around high-frequency edges will change the edge shapes.

Image Equalization to compensate for light sources

Currently I am involved in an image processing project where I am dealing with human faces. But I am facing problems in cases where the light source is on either the left or right side of the face: the portion of the image away from the light source is darker. I want to distribute the brightness over the image more evenly, so that the brightness of darker pixels is increased and the brightness of overly bright pixels is decreased at the same time.
I used gamma correction techniques for this, but the results are not desirable. What I actually want is an output in which the brightness is independent of the light source; in other words, increase the brightness of the darker part and decrease the brightness of the brighter part. I am not sure if I have stated the problem correctly, but this seems like a very common problem and I haven't found anything useful about it on the web.
1. Image with the light source on the right side
2. Image after increasing the brightness of the darker pixels [img = cv2.pow(img, 0.5)]
3. Image after decreasing the brightness of the bright pixels [img = cv2.pow(img, 2.0)]
I was thinking of taking the mean of images 2 and 3, but as we can see, the overly bright pixels still persist in image 3, and I want to get rid of those pixels. Any suggestions?
In the end, I need an image with homogeneous brightness, independent of the light source.
Take a look at homomorphic filtering applied to image enhancement, which lets you selectively filter the reflectance and illumination components of an image.
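To make this concrete, here is a minimal homomorphic-filtering sketch in MATLAB; the filename, the cutoff sigma, and the gammaL/gammaH gains are assumptions you would tune, not values from any particular reference:

% Homomorphic filtering: work in the log domain, attenuate low frequencies
% (illumination) and boost high frequencies (reflectance).
img = im2double(rgb2gray(imread('face.jpg')));   % placeholder filename, RGB input
logImg = log(img + eps);                         % log turns illumination*reflectance into a sum

% Gaussian-shaped high-pass filter in the frequency domain.
[rows, cols] = size(img);
[u, v] = meshgrid(-floor(cols/2):ceil(cols/2)-1, -floor(rows/2):ceil(rows/2)-1);
D2 = u.^2 + v.^2;
sigma = 10;                                      % cutoff; tune for your images
gammaL = 0.5; gammaH = 1.5;                      % attenuate illumination, boost reflectance
H = (gammaH - gammaL) * (1 - exp(-D2 / (2*sigma^2))) + gammaL;

F = fftshift(fft2(logImg));
filtered = real(ifft2(ifftshift(H .* F)));

out = mat2gray(exp(filtered));                   % back from the log domain, normalized
imshow(out);

The key design choice is that taking the log turns the multiplicative illumination-times-reflectance model into an additive one, so a simple high-pass filter can suppress the illumination term.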
I found this paper: http://www.mirlab.org/conference_papers/International_Conference/ICASSP%202010/pdfs/0001374.pdf I think it addresses exactly the concern you have.
You will need to compute the "gradient" of the image, i.e. Laplacian derivatives, which you can read up on here: http://docs.opencv.org/trunk/doc/py_tutorials/py_imgproc/py_gradients/py_gradients.html
I'd be very interested to know about your implementation. If you run into trouble, post a comment here and I can try to help.
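For what it's worth, the gradient/Laplacian step mentioned above looks roughly like this in MATLAB (the filename is a placeholder; the linked tutorial shows the OpenCV equivalents):

% First-order gradients and Laplacian of a grayscale image.
img = im2double(rgb2gray(imread('face.jpg')));            % assuming an RGB input
[gx, gy] = imgradientxy(img);                             % horizontal/vertical derivatives
lap = imfilter(img, fspecial('laplacian'), 'replicate');  % Laplacian response
imshow(lap, []);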

Detecting Object/Person in an image

I am new to MATLAB. I am working on a project which takes as input an image like this:
As we can see, it has a plain blue background, and the system should generate a passport-size image from it with the given ratios. First, I am working on separating the background from the person. The approach I found is: if a pixel in the RGB matrices of the image is blue, it is background, and the rest is the person. But I am a little confused about whether this approach is correct, and if it is, how can I tell whether the current pixel is blue or not? Can I do it with the MATLAB function find? Any help would be appreciated.
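For reference, the blue-pixel test described above might be sketched in MATLAB like this; the channel thresholds are pure guesses that would need tuning for the actual photo:

% Rough blue-background test; a pixel is "blue" when the blue channel
% clearly dominates the other two.
img = imread('person.jpg');                 % placeholder filename
r = double(img(:, :, 1));
g = double(img(:, :, 2));
b = double(img(:, :, 3));

bgMask = b > 100 & b > r + 30 & b > g + 30; % assumed thresholds

person = img;
person(repmat(bgMask, [1 1 3])) = 0;        % zero out the background
imshow(person);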
If you want to crop your image based on the person's face, then there is no need to separate the background from the foreground. Nowadays you will easily find ready implementations of face detection, so, unless you want to implement your own method because the ready one fails, this should be a non-issue. See:
Show[img,
Graphics[{EdgeForm[{Yellow, Thick}], Opacity[0],
Rectangle @@@
FindFaces[img = Import["http://i.stack.imgur.com/cSwzj.jpg"]]}]]
Supposing the face is detected correctly, you can expand/retract its bounding box to match the size you are after.
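If you prefer to stay in MATLAB, a possible counterpart of that face-detection idea (requires the Computer Vision Toolbox; the filename and padding factor are assumptions) is:

% Detect a face and expand its bounding box toward passport-style framing.
img = imread('person.jpg');                   % placeholder filename
detector = vision.CascadeObjectDetector();    % default frontal-face model
bbox = double(step(detector, img));           % one [x y width height] row per face

pad = 0.6;                                    % assumed expansion factor
x = max(1, bbox(1,1) - round(pad * bbox(1,3)));
y = max(1, bbox(1,2) - round(pad * bbox(1,4)));
w = round((1 + 2*pad) * bbox(1,3));
h = round((1 + 2*pad) * bbox(1,4));
face = imcrop(img, [x y w h]);                % imcrop clips to the image bounds
imshow(face);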

How can I deblur an image in matlab?

I need to remove the blur from this image:
Image source: http://www.flickr.com/photos/63036721@N02/5733034767/
Any Ideas?
Although previous answers are right when they say that you can't recover lost information, you could investigate a little and make a few guesses.
I downloaded your image in what seems to be the original size (75x75), and you can see here a zoomed segment (one little square = one pixel):
It seems a pretty linear grayscale! Let's verify it by plotting the intensities of the central row. In Mathematica:
ListLinePlot[First /@ ImageData[i][[38]][[1 ;; 15]]]
So, it is effectively linear, starting at zero and ending at one.
So you may guess it was originally a B&W image, linearly blurred.
The easiest way to deblur that (not always giving good results, but enough in your case) is to binarize the image with a 0.5 threshold. Like this:
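In MATLAB terms, that binarization step is essentially a one-liner (the filename is a placeholder):

% Binarize the blurred image at the 0.5 threshold suggested above.
img = im2double(imread('blurred.png'));
bwImg = img > 0.5;
imshow(bwImg);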
And this is a possible way. Just remember we are guessing a lot here!
HTH!
You cannot generally retrieve missing information.
If you know what it is an image of (in this case, a Gaussian or Airy profile suggests an out-of-focus image of a point source), you can determine the characteristics of the point.
Another technique is to try to determine the characteristics of the blurring, especially if you have many images from the same blurred system. Then iteratively create a possible source image, blur it by that convolution, and compare it to the blurred image.
This is the general technique used to make radio astronomy source maps (images), and it was used for the flawed Hubble Space Telescope images.
When working with images, one of the most common operations is applying a convolution filter. There is a "sharpen" filter that does what it can to remove blur from an image. An example of a sharpen filter can be found here:
http://www.panoramafactory.com/sharpness/sharpness.html
Some programs like MATLAB make convolution really easy: conv2(A,B)
And most decent photo editors have these filters under one name or another (usually "sharpen").
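As a rough illustration (this kernel is the classic 3x3 sharpen mask, not the filter from the linked page), conv2 on a grayscale image might be used like this:

% Sharpen via convolution, assuming a grayscale image.
img = im2double(imread('blurred.png'));   % placeholder filename
kernel = [ 0 -1  0;
          -1  5 -1;
           0 -1  0];
sharpened = conv2(img, kernel, 'same');
imshow(sharpened, []);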
But keep in mind that filters can only do so much. In theory, the actual information has been lost by the blurring process and it is impossible to perfectly reconstruct the initial image (no matter what TV will lead you to believe).
In this case it seems like you have a very simple image with only black and white. Knowing this about your image, you could always use a simple threshold: set everything above a certain threshold to white, and everything below to black. Once again, most photo editing software makes this really easy.
You cannot retrieve missing information, but under certain assumptions you can sharpen.
Try unsharp masking.
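A minimal unsharp-masking sketch in MATLAB (the filename, sigma, and amount are guesses to tune):

% Unsharp masking: boost the difference between the image and a blurred copy.
img = im2double(imread('blurred.png'));
blurred = imgaussfilt(img, 2);            % low-pass copy
amount = 1.0;                             % strength of the boost
sharpened = img + amount * (img - blurred);
imshow(sharpened, []);

Recent MATLAB releases also provide imsharpen, which implements the same idea in a single call.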

How do I locate black rectangles in a grid and extract the binary code from that

I'm working on a project to recognize a bit code from an image like this, where a black rectangle represents a 0 bit and white space (not visible) represents a 1 bit.
Does anybody have an idea how to process the image in order to extract this information? My project is written in Java, but any solution is accepted.
Thanks all for the support.
I'm not an expert in image processing. I tried to apply edge detection using a Canny edge detector (a free Java implementation can be found here). I used this complete image [http://img257.imageshack.us/img257/5323/colorimg.png], reduced it (scale factor = 0.4) for fast processing, and this is the result [http://img222.imageshack.us/img222/8255/colorimgout.png]. Now, how can I decode a white rectangle as a 0 bit and no rectangle as a 1?
The image has 10 rows x 16 columns. I don't use Python, but I can try to convert it to Java.
Many thanks for the support.
This is good old OMR (optical mark recognition).
The solution varies depending on the quality and consistency of the data you get, so noise is important.
Using an image processing library will clearly help.
Simple case: No skew in the image and no stretch or shrinkage
Create a horizontal and a vertical profile of the image, i.e. sum up the values in all columns and all rows and store them in arrays. For an image of MxN (width x height), you will have M cells in the horizontal profile and N cells in the vertical profile.
Use thresholding to find out which cells are white (empty) and which are black. This assumes you will get at least a couple of entries in each row or column, so the black cells will define the locations of interest (where you expect the marks).
Based on this, you can define the lozenges in the form (the rectangles where you expect marks) and get their coordinates; then you just add up the pixel values in each lozenge, and based on that sum you can decide whether it has a mark or not.
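As a sketch of this simple no-skew case (the 10x16 grid size comes from the question; the filename and thresholds are guesses):

% Profile computation plus a fixed-grid decode of the bit pattern.
img = imread('code.png');                  % placeholder filename
marks = ~im2bw(img, 0.5);                  % 1 where the image is dark

colProfile = sum(marks, 1);                % one value per column
rowProfile = sum(marks, 2);                % one value per row
% Peaks in these profiles tell you where the rows/columns of marks sit.

% With a known 10x16 grid, split the image into cells and test each one.
[rows, cols] = size(marks);
bits = zeros(10, 16);
for r = 1:10
    for c = 1:16
        block = marks(round((r-1)*rows/10)+1 : round(r*rows/10), ...
                      round((c-1)*cols/16)+1 : round(c*cols/16));
        bits(r, c) = mean(block(:)) < 0.25;  % mostly white -> bit 1
    end
end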
Case 2: Skew (slant in the image)
Use the Fourier transform (FFT) to find the slant angle and then correct it with a rotation (a brute-force alternative is sketched after Case 3).
Case 3: Stretch or shrink
Pretty much the same as Case 1, but the noise is higher and the reliability lower.
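As a rough illustration of the deskew step in Case 2, here is a brute-force variant (used here in place of the FFT approach): try small rotations and keep the one whose row profile is sharpest.

% Aligned rows of marks give a peaky (high-variance) row profile.
marks = double(~im2bw(imread('code.png'), 0.5));  % placeholder filename
bestVar = -Inf; bestAngle = 0;
for angle = -10:0.5:10                            % assumed search range, degrees
    rotated = imrotate(marks, angle, 'bilinear', 'crop');
    profile = sum(rotated, 2);                    % row profile of dark pixels
    if var(profile) > bestVar
        bestVar = var(profile);
        bestAngle = angle;
    end
end
deskewed = imrotate(marks, bestAngle, 'bilinear', 'crop');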
Aliostad has made some good comments.
This is OMR, and you will find it much easier to get good, consistent results with a good image processing library. www.leptonica.com is a free, open-source C library that would be a very good place to start. It can handle the skew and thresholding tasks for you. Thresholding to B/W would be a good start.
Another option would be IEvolution - http://www.hi-components.com/nievolution.asp for .NET.
To be successful you will need some type of reference/registration marks to allow for skew and stretch, especially if you are scanning documents or capturing from a camera.
I am not familiar with Java, but in Python you can use the Python Imaging Library (PIL) to open the image. Then get the height and width, and segment the image into a grid accordingly, by height/rows and width/cols. Then just look for black pixels in those regions, or whatever color PIL registers that black to be. This obviously relies on the grid-like nature of the data.
Edit:
Doing edge detection may also be fruitful. First apply an edge detection method, like something from Wikipedia. I have used the one found at archive.alwaysmovefast.com/basic-edge-detection-in-python.html. Then convert any grayscale value less than 180 into black (if you want the boxes darker, just increase this value) and make everything else completely white. Then create bounding boxes, i.e. lines where the pixels are all white. If the data isn't terribly skewed, this should work pretty well; otherwise you may need to do more work. See here for the results: http://imm.io/2BLd
Edit2:
Denis, how large is your dataset and how large are the images? If you have thousands of these images, then it is not feasible to manually remove the borders (the red background and yellow bars). I think this is important to know before proceeding. Also, I think Prewitt edge detection may prove more useful in this case, since there appears to be less noise:
The previous segmentation method may be applied if you first preprocess and binarize the image in the following manner; in that case you need only count the number of black or white pixels in each cell and pick a threshold after a few training samples.
