Removing a region in an image

I want to eliminate a specific region in my image using MATLAB. For this, I converted my image to binary. Now both the wanted and the unwanted region (the one to be removed) exhibit the same spatial characteristics (when I used the regionprops() function). The only difference is the position of the regions in the image. Is there any command to do this job?
Eliminating the region means setting the pixel intensity to 0 within that specific region.

If you have a mask that is (e.g.) TRUE where you want to remove the region, you can do:
myImage(myMask) = 0;
If it's TRUE where you want to keep the region, you can do:
myImage(~myMask) = 0;
Is that what you mean? (Update your question with a small amount of code reproducing your problem and we'll be able to tailor our answers better).
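Since you say the regions differ only in position, one way to build such a mask is to label the connected components and pick out the one containing a known point. A minimal sketch, assuming BW is your binary image and (r, c) is a point you know lies inside the unwanted region (both names made up for illustration):
L = bwlabel(BW);               % label the connected components
unwantedLabel = L(r, c);       % label of the component at the known point
myMask = (L == unwantedLabel); % TRUE exactly on the unwanted region
BW(myMask) = 0;                % zero it out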

Related

Detect black dots from color background

My short question
How can I detect the black dots in the following images? (I paste only one test image to keep the question compact. More images can be found here.)
My long question
As shown above, the background color is roughly blue, and the dots' color is "black". If we pick one black pixel and measure its color in RGB, the value can be (0, 44, 65) or (14, 69, 89), and so on. Therefore, we cannot set a fixed range to tell whether a pixel belongs to a black dot or to the background.
I tested 10 images with different colors, but I hope to find a method that detects the black dots against more complicated backgrounds made up of three or more colors, as long as human eyes can identify the black dots easily. Some extremely small or blurry dots can be omitted.
Previous work
Last month, I asked a similar question on Stack Overflow but did not get a perfect solution, though there were some excellent answers. Find more details about my work there if you are interested.
Here are the methods I have tried:
Converting to grayscale or to the brightness of the image. The difficulty is that I cannot find an adaptive threshold for binarization. Obviously, turning a color image to grayscale or using the brightness (the V channel of HSV) loses much useful information. Otsu's algorithm, which computes an adaptive threshold, does not work either.
Calculating an RGB histogram. In my last question, natan's method was to estimate the black color from the histogram. It is time-saving, but the adaptive threshold is again a problem.
Clustering. I have tried k-means clustering and found it quite effective for backgrounds with only one color. The shortcoming (see my own answer) is that I need to set the number of cluster centers in advance, but I don't know what the background will be like. What's more, it is too slow! My application captures in real time on an iPhone, and it currently processes 7~8 frames per second using k-means (20 FPS would be good, I think).
Summary
I think not only similar colors but also adjacent pixels should be "clustered" or "merged" in order to extract the black dots. Please point me toward a proper way to solve my problem. Any advice or algorithm will be appreciated. There is no free lunch, but I hope for a better trade-off between cost and accuracy.
I was able to get some pretty nice first pass results by converting to HSV color space with rgb2hsv, then using the Image Processing Toolbox functions imopen and imregionalmin on the value channel:
rgb = imread('6abIc.jpg');                         % load the RGB image
hsv = rgb2hsv(rgb);                                % convert to HSV color space
openimg = imopen(hsv(:, :, 3), strel('disk', 11)); % morphological opening of the value channel
mask = imregionalmin(openimg);                     % the dark dots show up as regional minima
imshow(rgb);
hold on;
[r, c] = find(mask);                               % coordinates of the detected minima
plot(c, r, 'r.');                                  % overlay them on the original image
And the resulting images (for the image in the question and one chosen from your link):
You can see a few false positives and missed dots, as well as some dots that are labeled with multiple points, but a few refinements (such as modifying the structure element used in the opening step) could clean these up some.
I was curious to test my old 2D peak finder code on the images, without any threshold or any color considerations. Really crude, don't you think?
im0 = imread('Snap10.jpg');
im = abs(255 - im0);                     % invert the image so the dark dots become peaks
d = rgb2gray(im);                        % work on a grayscale version
filter = fspecial('gaussian', 16, 3.5);  % 2D Gaussian sized like a typical blob
p = FastPeakFind(d, 0, filter);
imagesc(im0); hold on
plot(p(1:2:end), p(2:2:end), 'r.')       % p interleaves x and y coordinates
The code I'm using is a simple 2D local-maxima finder. There are some false positives, but all in all it captures most of the points with no duplication. The filter I used was a 2D Gaussian of width and std similar to a typical blob (the best would have been a matched filter for your problem).
A more sophisticated version that treats the colors (rgb2hsv?) could improve this further...
Here is an extraordinarily simplified version that can be extended to full RGB, and it does not use the image processing library. Basically, you do a 2-D convolution with a filter image (which is an example of the dot you are looking for); the points where the convolution returns the highest values are the best matches for the dots. You can then of course threshold that. Here is a simple binary-image example of just that.
%creating a dummy image with a bunch of small white crosses
im = zeros(100,100);
numPoints = 10;
% randomly choose the locations to put those crosses
points = randperm(numel(im));
% keep only certain number of points
points = points(1:numPoints);
% get the row and columns (x,y)
[xVals,yVals] = ind2sub(size(im),points);
for ii = 1:numel(points)
    x = xVals(ii);
    y = yVals(ii);
    % create the crosses; the try statement prevents index-out-of-bounds errors
    % at the borders (not best practice, but fine for a demonstration)
    try
        im(x,y)   = 1;
        im(x+1,y) = 1;
        im(x-1,y) = 1;
        im(x,y+1) = 1;
        im(x,y-1) = 1;
    catch err
    end
end
% display the randomly generated image
imshow(im)
% create a simple cross filter
filter = [0,1,0;1,1,1;0,1,0];
figure; imshow(filter)
% perform convolution of the random image with the cross template
result = conv2(im,filter,'same');
% get the number of white pixels in filter
filSum = sum(filter(:));
% look for all points in the convolution results that matched identically to the filter
matches = find(result == filSum);
% validate that all planted points were found
sort(matches(:)) == sort(points(:))
% get x and y coordinate matches
[xMatch,yMatch] = ind2sub(size(im),matches);
I would highly suggest looking at the conv2 documentation on MATLAB's website.

How to segment part of moving image based on details from fixed image in MATLAB?

I'm currently working on MRI images, and each dataset consists of a series of images. All I need to do is segment part of the moving image(s), based on details from a provided fixed image, strictly by using the image registration method.
I have tried some of the available code and done some tweaking, but all I got was a warped transformation of the moving image based on features from the fixed image, which was correct but wasn't what I expected.
To help with the idea, here are some of those MRI images¹:
Fixed image:
Moving image:
The plan is to segment only the total area (quadriceps, inner and outer bone sections) of the moving image as per the details from the fixed image, i.e. morphologically warp the boundary of the moving image according to the fixed image boundary.
Any idea/suggestions as to how this could be done?
1. As a new user I'm unable to post/attach more than 2 links/images but do let me know should you need further images.
'All I need to do is to segment part of the moving image(s)': this is certainly not a trivial thing to do. It is called segmentation by deformable models, and there is a lot of literature on the subject. Also, your fixed image is very different from the moving image, which doesn't help.
Here are a couple of ideas to start, but you will probably need to go into more details for your application.
I1=imread('fixed.png');
I2=imread('moving.png');
model = im2bw(I1, 0.54);
imshowpair(I1, model);
This is a simple thresholding segmentation to isolate that blob in the middle of the image. The value 0.54 was obtained by fiddling; you can certainly do a better job of segmenting your fixed image.
Here is the segmented fixed image, purple is inside, green is outside.
Now, let's deform this mask to fit the moved image:
masked = activecontour(I2,model, 20, 'Chan-Vese');
imshowpair(I2,masked);
Result:
You can automate this in a loop over all your images, deforming each subsequent mask to fit the next frame, as in the sketch below. Try different parameters of activecontour as well.
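A minimal sketch of that loop, assuming the frames are stored in a cell array called frames (a made-up name for illustration):
mask = model;                  % start from the mask segmented on the fixed image
masks = cell(size(frames));
for k = 1:numel(frames)
    % deform the previous mask onto the current frame
    mask = activecontour(frames{k}, mask, 20, 'Chan-Vese');
    masks{k} = mask;
end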
Edit here is another way I can think of:
In the following code, Istart is the original fixed image, Mask is the segmented region on that image (the one you called 'fixed' in your question), and Istep is the moved image.
I first turned the segmented region into a binary mask; this is not strictly necessary:
t=graythresh(Mask);
BWmask=im2bw(Mask, t);
Let's display the masked original image:
imshowpair(BWmask, Istart)
The next step was to compute intensity-based registration between the start and step images:
[optimizer, metric] = imregconfig('monomodal');
optimizer.MaximumIterations = 300;
Tform=imregtform(Istart, Istep, 'affine', optimizer, metric);
And warp the mask according to this transformation:
WarpedMask = imwarp(BWmask, Tform, 'bicubic', 'OutputView', imref2d(size(Istart)));
Now let's have a look at the result:
imshowpair(WarpedMask, Istep);
It's not perfect, but it is a start. I think your main issue is that your mask contains elements that are different from each other (that middle blob vs. the darker soft tissue in the middle). If I were you, I would try to segment these structures separately.
Good luck!

Storing data for levels in a game like RISK or Total War

I'm working on a game which is a bit like the boardgame RISK, or the campaign section of the Total War series. I currently have a working implementation of the region system, but because of bad performance, the game hangs after certain commands. I'm sure it is possible to do it better.
What I want to do
I want to be able to present a map, such as a world map, and divide it up into regions (e.g. countries). I want to be able to select regions by clicking on them, send units to them, and get the adjacent regions.
What I've tried
A map is defined by 3 files:
A text file, which contains data formatted like this:
"Region Name" "Region Color" "Game-related information" ["Adjacent Region 1", "Adjacent Region 2", ...]'
An image file, where each region is separated by a black border and has its own color. So, for example, there could be two regions: one with the RGB values 255, 0, 0 (red) and another with 255, 255, 255 (white). They are separated by a black border (but this is not necessary for the algorithm to work).
Another image file, which is the actual image that is drawn to the screen. It is the "nice looking" map.
An example of such a colour map:
(All the white parts evaluate to the same region in the current implementation. Just imagine they all have different colours).
When I load these files, I first load the colour image. Then I load the text file and go through each line. I create regions with the correct settings, as I want to. There's no real performance hit here, as it's simply reading data. A bunch of Region objects is then made, and given the correct colors.
At this stage, everything works fine. I can click on regions, ask the pixel data of the colour image, and by going through all the Regions in a list I can find the one that matches the colour of that particular pixel.
Issues
However, here's where the performance hit comes in:
Issue 1: Units
Each player has a bunch of units. I want to be able to spawn these units in a region. Let's say I want to spawn a unit in the red region. I go through all the pixels in my file, and when I hit a red one, I place the unit there.
for(int i = 0; i < worldmap.size(); i++) {
    for(int j = 0; j < worldmap[i].size(); j++) {
        if(worldmap[i][j].color == unit_color) {
            // place it here
        }
    }
}
A simple glance at this pseudocode shows that this is not going to work well. Not at a reasonable pace, anyway.
Issue 2: Region colouring
Another issue is that I want to colour the regions owned by players on the "nice looking" map. Let's say player one owns three regions: Blue, Red and Green. I then go through the worldmap, find the blue, red and green pixels on the colour image, and then colour those pixels on the "nice looking" map in a transparent version of the player colour.
However, this is also a very heavy operation and it takes a few seconds.
What I want to ask
Since this is a turn-based game, it's not really a big deal that the game slows down a bit every now and then. However, it is not to my liking that I'm writing this ugly code.
I have considered other options, such as storing each point of a region as a float, but that would be a massive strain on memory (64 bits times a 3000x1000-pixel image is a lot).
I was wondering whether there are algorithms designed for this, or whether I should use more memory to relieve the processor. I've looked at other games and how they do this, but to no avail; I've yet to find source code or an article on this.
I have deliberately not put too much code in this question, since it's already fairly lengthy and the code has a lot of dependencies on other parts of my application. However, if it is needed to solve the problem, I will post some ASAP.
Thanks in advance!
Problem 1: go through the color map with a step size of 10 in both the X and Y directions. This reduces the number of pixels considered by a factor of 100. It works as long as each country contains a square of at least 10x10 pixels.
Problem 2: The best solution here is to do this once, not once per player or once per region. Create a lookup table from region color to player color, iterate over all pixels of the region map, and look up the corresponding player color to apply.
It may help to reduce the region color map to RGB 332 (8 bits total). You probably don't need that many fine shades of lilac, and using one-byte colors makes the lookup table a lot easier: a plain array with 256 elements would work. Considering your maps are 3000x1000 pixels, this would also reduce the map size by 6 MB.
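The idea is language-agnostic; here is a minimal MATLAB-style sketch of the one-pass recoloring, where regionMap, redRegionCode, blueRegionCode, and playerOneColor are made-up names:
% regionMap: HxW uint8 image, one RGB332 code per pixel
% lut: 256x3 array mapping region code -> player overlay color
lut = zeros(256, 3, 'uint8');
lut(double(redRegionCode) + 1, :)  = playerOneColor; % hypothetical region codes
lut(double(blueRegionCode) + 1, :) = playerOneColor;
% recolor the whole map in a single pass over the pixels
overlay = reshape(lut(double(regionMap) + 1, :), [size(regionMap), 3]);
The overlay can then be alpha-blended onto the nice-looking map once per turn, rather than once per region.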
Another thing to consider is whether you really need a region map with 3000x1000-pixel resolution. The nice map may be that big, but the region map could be resampled at 1500x500. Your borders looked thick enough (more than 2 pixels) that a 1-pixel loss of region resolution would not matter, yet it would reduce the region map by another 2.25 MB. At 750 kB, it now probably fits in the CPU cache.
What if you traced the regions (so, one read through the entire data file) and stored the boundaries? For example, in Java there is a Path2D class, which I have used before to store the outlines of states. In fact, if you used this method, your data file wouldn't even need all the pixel data, just the boundaries of the areas. This is especially true since your regions don't seem to be dynamic, so you can simply hard-code the boundary values into the data file.
From here you can simply test a location against the boundaries (most libraries/languages with this concept support some sort of isPointInBoundary(x, y) method); a small sketch follows below. You could even create your own Region class that has a boundary saved to it, along with other information (such as what game pieces are currently on it!).
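To illustrate the point-in-boundary test, here is a small MATLAB sketch using inpolygon (Java's Path2D.contains plays the same role); the boundary data is made up:
% boundaryX, boundaryY: vertices of one region's outline
boundaryX = [10 200 200 10];
boundaryY = [10 10 150 150];
% test whether a clicked point falls inside the region
clickX = 57; clickY = 92;
inside = inpolygon(clickX, clickY, boundaryX, boundaryY);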
Hope that helps you think about it more clearly; it should be pretty nice to code, too.

Image enhancement - cleaning a given image of writing

I need to clean up this picture: delete the writing "clean me" and make it brighter.
As part of my homework in an image processing course, I may use the MATLAB function ginput to find specific points in the image (of course, in the script you should hard-code the coordinates you need).
You may use conv2, fft2, ifft2, fftshift etc.
You may also use median, mean, max, min, sort, etc.
My basic idea was to take the white and black values from the middle of the picture and insert them into the other parts of the black and white stripes. However, this gives the picture a very synthetic look.
Can you please give me a direction on what to do? A median filter will not give good results.
The general technique for this kind of thing is called inpainting, but in order to apply it you need a mask of the regions that you want to inpaint. So, let us suppose that we managed to get a good mask and inpainted the original image considering a morphological dilation of this mask:
To get that mask, we don't need anything very fancy. Start with a binarization of the difference between the original image and the result of median filtering it:
You can then remove isolated pixels; join the pixels representing the stars of the flag by a dilation in the horizontal direction followed by another dilation with a small square; remove the largest component just created; and then perform a geodesic dilation of the result so far against the initial mask. This gives the good mask above.
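A hedged sketch of that mask-building pipeline (the file name, threshold, and structuring-element sizes are guesses to be tuned for the actual image):
I = im2double(rgb2gray(imread('flag.jpg')));  % hypothetical file name
med = medfilt2(I, [21 21]);                   % heavy median filtering
m0 = abs(I - med) > 0.2;                      % binarize the difference
m0 = bwareaopen(m0, 5);                       % remove isolated pixels
% join the star pixels: horizontal dilation, then a small square
joined = imdilate(m0, strel('line', 15, 0));
joined = imdilate(joined, strel('square', 3));
% remove the largest connected component (the joined star block)
cc = bwconncomp(joined);
[~, largest] = max(cellfun(@numel, cc.PixelIdxList));
joined(cc.PixelIdxList{largest}) = false;
% geodesic dilation of what is left against the initial mask
mask = imreconstruct(joined & m0, m0);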
Now, to inpaint there are many algorithms, but one of the simplest I've found is described in Fast Digital Image Inpainting, which should be easy enough to implement. I didn't use it, but you could, and then verify which results you obtain.
EDIT: I missed that you also wanted to brighten the image.
An easy way to brighten an image without making the brighter areas even brighter is to apply a gamma factor < 1. More specifically for your image, you could first apply a relatively large lowpass filter, negate it, multiply the original image by it, and then apply the gamma factor. In this second case, the final image will likely be darker than the first one, so you multiply it by a simple scalar value. Here are the results for these two cases (the left one is simply gamma 0.6):
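A minimal sketch of the two variants, assuming Im holds the grayscale image (the blur size and the scalar are guesses):
I = im2double(Im);
J1 = I .^ 0.6;                      % plain gamma < 1
% illumination-aware variant: negate a heavy lowpass, multiply, then gamma
low = imgaussfilt(I, 50);           % relatively large lowpass filter
J2 = 1.5 * (I .* (1 - low)) .^ 0.6; % the scalar 1.5 compensates the darkening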
If you really want to brighten the image, then you can apply a bilateral filter and binarize it:
I see two options for removing "clean me". Both rely on the horizontal similarity.
1) Use a long 1D low-pass filter in the horizontal direction only.
2) Use a 1D median filter maybe 10 pixels long
For both solutions you of course have to exclude the stars part.
When it comes to brightness, you could try histogram equalization. However, that won't fix the unevenness of the brightness. Maybe a high-pass filter before equalization can fix that.
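For instance, a minimal sketch (adapthisteq is the locally adaptive variant, which may cope better with the uneven brightness):
J = histeq(Im);         % global histogram equalization
J2 = adapthisteq(Im);   % CLAHE: equalizes within local tiles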
Regards
The simplest way to remove the text is, as KlausCPH said, to use a long 1-D median filter in the region with the stripes. In order not to corrupt the stars, you need to keep a backup of that part and restore it after the median filter has run. To do this, you can use ginput to mark the lower-right corner of the star region:
% Mark lower right corner of star-region
figure();imagesc(Im);colormap(gray)
[xCorner,yCorner] = ginput(1);
close
xCorner = round(xCorner); yCorner = round(yCorner);
% Save star region
starBackup = Im(1:yCorner,1:xCorner);
% Clean up stripes
Im = medfilt2(Im,[1,50]);
% Replace star region
Im(1:yCorner,1:xCorner) = starBackup;
This produces
To fix the exposure problem (the middle part being brighter than the corners), you could fit a 2-D Gaussian model to your image and normalize by it. If you want to do this, I suggest looking into fit, although this can be a bit technical if you have not worked with model fitting before.
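A hedged sketch of such a fit using the Curve Fitting Toolbox (the start points and the downsampling step are guesses):
I = im2double(Im);
[rows, cols] = size(I);
[X, Y] = meshgrid(1:cols, 1:rows);
% downsample so the fit stays fast
s = 10;
xd = X(1:s:end, 1:s:end); yd = Y(1:s:end, 1:s:end); zd = I(1:s:end, 1:s:end);
g = fittype('a*exp(-((x-x0).^2./(2*sx^2) + (y-y0).^2./(2*sy^2))) + b', ...
    'independent', {'x', 'y'}, 'dependent', 'z');
f = fit([xd(:), yd(:)], zd(:), g, ...
    'StartPoint', [0.5, 0.2, cols/4, rows/4, cols/2, rows/2]); % a, b, sx, sy, x0, y0
illum = reshape(f(X(:), Y(:)), rows, cols); % evaluate the fitted surface
Ifixed = I ./ max(illum, eps);              % divide out the uneven exposure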
My fitted 2-D Gaussian looks something like this:
Putting these two things together gives:
I used the gausswin() function to make a Gaussian mask:
Pic_usa_g = abs(1 - gausswin(size(Pic_usa, 2))); % inverted Gaussian profile across the width
Pic_usa_g = Pic_usa_g + 0.6;                     % lift the minimum gain
Pic_usa_g = Pic_usa_g .* 2;                      % scale the overall gain
Pic_usa_g = Pic_usa_g';                          % row vector: one gain per column
C = repmat(Pic_usa_g, size(Pic_usa, 1), 1);      % expand to a full-size mask
After multiplying the image by this mask, you get the fixed image.

How do I locate black rectangles in a grid and extract the binary code from that

I'm working on a project to recognize a bit code from an image like this, where a black rectangle represents a 0 bit and white (empty space, nothing visible) represents a 1 bit.
Does somebody have an idea how to process the image in order to extract this information? My project is written in Java, but any solution is accepted.
Thanks all for the support.
I'm not an expert in image processing; I tried to apply edge detection using a Canny edge detector implementation (a free Java implementation can be found here). I used this complete image [http://img257.imageshack.us/img257/5323/colorimg.png] and reduced it (scale factor = 0.4) for faster processing; this is the result [http://img222.imageshack.us/img222/8255/colorimgout.png]. Now, how can I decode a white rectangle as a 0 bit and no rectangle as a 1?
The image has 10 lines x 16 columns. I don't use Python, but I can try to convert it to Java.
Many thanks for the support.
This is recognising good old OMR (optical mark recognition).
The solution varies depending on the quality and consistency of the data you get, so noise is important.
Using an image processing library will clearly help.
Simple case: No skew in the image and no stretch or shrinkage
Create a horizontal and a vertical profile of the image, i.e. sum up the values in all columns and all rows and store them in arrays. For an image of MxN (width x height) you will have M cells in the horizontal profile and N cells in the vertical profile.
Use thresholding to find out which cells are white (empty) and which are black. This assumes you will get at least a couple of entries in each row or column, so the black cells define the locations of interest (where you expect the marks).
Based on this, you can define the lozenges in the form: you get the coordinates of the lozenges (the rectangles where you expect marks), then you just add up the pixel values in each lozenge and, based on that sum, decide whether it contains a mark or not. A sketch of the profile idea follows below.
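The profile idea in a minimal MATLAB sketch (the question is Java, but the technique carries over directly; the file name and thresholds are guesses):
I = im2double(rgb2gray(imread('grid.png'))); % hypothetical file name
bw = I < 0.5;                                % black pixels become true
colProfile = sum(bw, 1);                     % one sum per column (horizontal profile)
rowProfile = sum(bw, 2);                     % one sum per row (vertical profile)
colsOfInterest = colProfile > 0.5 * max(colProfile);
rowsOfInterest = rowProfile > 0.5 * max(rowProfile);
% each (row band, column band) intersection is a lozenge; count the black
% pixels inside it and threshold to decide whether it holds a mark (a 0 bit)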
Case 2: Skew (slant in the image)
Use a Fourier transform (FFT) to find the slant angle, then transform the image to correct it.
Case 3: Stretch or shrink
Pretty much the same as case 1, but the noise is higher and the reliability lower.
Aliostad has made some good comments.
This is OMR, and you will find it much easier to get good, consistent results with a good image processing library. www.leptonica.com is a free, open-source C library that would be a very good place to start. It could handle the skew and thresholding tasks for you. Thresholding to B/W would be a good start.
Another option would be IEvolution - http://www.hi-components.com/nievolution.asp for .NET.
To be successful you will need some type of reference/registration marks to allow for skew and stretch, especially if you are scanning documents or capturing from a camera image.
I am not familiar with Java, but in Python you can use the imaging library (PIL) to open the image. Then load the height and the width, and segment the image into a grid accordingly, by Height/Rows and Width/Cols. Then just look for black pixels in those regions, or whatever color PIL registers that black to be. This obviously relies on the grid-like nature of the data.
Edit:
Doing edge detection may also be fruitful. First apply an edge detection method, like something from Wikipedia. I have used the one found at archive.alwaysmovefast.com/basic-edge-detection-in-python.html. Then convert any grayscale value less than 180 to black (if you want the boxes darker, just increase this value) and make everything else completely white. Then create bounding boxes, i.e. lines where the pixels are all white. If the data isn't terribly skewed, this should work pretty well; otherwise you may need to do more work. See here for the results: http://imm.io/2BLd
Edit2:
Denis, how large is your dataset and how large are the images? If you have thousands of these images, it is not feasible to manually remove the borders (the red background and yellow bars). I think this is important to know before proceeding. Also, I think Prewitt edge detection may prove more useful in this case, since there appears to be less noise:
The previous segmentation method may then be applied, if you preprocess by binning in this manner; in that case you need only count the number of black or white pixels and set a threshold after some training samples.
