I have an image and two points,
and I want to read the pixels between these two points,
and resample them into a small 1x40 array.
I'm using EMGU, which is a C# wrapper for OpenCV.
thanks,
SW
What you are looking for is Bresenham's line algorithm. It will allow you to get the points in the pixel array that best approximate a straight line. The Wikipedia article also contains pseudocode to get you started.
Emgu CV's Image class includes a method for sampling colors along a line, called Sample.
Refer to the manual for the definition. Here's the link to Image.Sample in version 2.3.0.
You will still have to re-sample/interpolate the points in the array returned from Sample to end up with a 40-element array. Since there are a number of ways to re-sample, I'll suggest you look to other questions for that.
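As a rough illustration of that last step, linear interpolation is enough to get from the sampled values to a fixed 40-element array. A minimal Python/NumPy sketch (the function and variable names here are placeholders, not part of EMGU):

import numpy as np

def resample(values, n=40):
    # Map both the source and target sample positions onto [0, 1]
    src = np.linspace(0.0, 1.0, len(values))
    dst = np.linspace(0.0, 1.0, n)
    # Linearly interpolate the sampled profile at the n new positions
    return np.interp(dst, src, values)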
Rotate and crop
I'd first try to do it like this:
calculate the rotation matrix (GetRotationMatrix2D)
warp the image so that this line is horizontal (WarpAffine)
calculate the new positions of your two points (you can use Transform)
extract an image rectangle of suitable width and 1 px height (GetRectSubPix)
Interpolation here and there may affect the results, but you have to interpolate anyway. You may consider cropping the image before rotation.
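For illustration, here is a minimal sketch of this rotate-and-crop idea using the Python OpenCV bindings (EMGU wraps the same functions under similar names; the interpolation mode and the 40-sample default are assumptions):

import numpy as np
import cv2

def line_profile(img, p0, p1, n=40):
    p0, p1 = np.float32(p0), np.float32(p1)
    c = (p0 + p1) / 2
    center = (float(c[0]), float(c[1]))
    # Angle of the segment and its length in pixels
    angle = np.degrees(np.arctan2(p1[1] - p0[1], p1[0] - p0[0]))
    length = int(np.hypot(*(p1 - p0)))
    # Rotate the whole image so the segment becomes horizontal
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    rotated = cv2.warpAffine(img, M, (img.shape[1], img.shape[0]))
    # Cut out a strip 1 px high centred on the segment
    strip = cv2.getRectSubPix(rotated, (length, 1), center)
    # Resample the strip down to n values
    return cv2.resize(strip, (n, 1), interpolation=cv2.INTER_AREA)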
Iterate over the 8-connected pixels of the line
Otherwise you may use the line iterator to iterate over the pixels between two points. See the documentation for InitLineIterator (sorry, the link is to the Python version of OpenCV; I've never heard of EMGU). I suppose that in this case you iterate over the pixels of a line that was not antialiased, but this should be much faster.
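For reference, the all-octant form of Bresenham's algorithm mentioned above is short enough to write directly. A Python generator version, adapted from the usual Wikipedia pseudocode:

def bresenham(x0, y0, x1, y1):
    # Yields the 8-connected integer points from (x0, y0) to (x1, y1)
    dx, dy = abs(x1 - x0), -abs(y1 - y0)
    sx = 1 if x0 < x1 else -1
    sy = 1 if y0 < y1 else -1
    err = dx + dy
    while True:
        yield x0, y0
        if x0 == x1 and y0 == y1:
            return
        e2 = 2 * err
        if e2 >= dy:
            err += dy
            x0 += sx
        if e2 <= dx:
            err += dx
            y0 += sy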
Interpolate manually
Finally, you may convert the image to an array, calculate which elements the line passes through, and subsample and interpolate manually.
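A minimal sketch of this manual approach in Python, assuming SciPy is available: generate 40 evenly spaced fractional positions along the segment and interpolate the image there.

import numpy as np
from scipy.ndimage import map_coordinates

def interp_profile(img, p0, p1, n=40):
    # n evenly spaced (x, y) positions along the segment
    xs = np.linspace(p0[0], p1[0], n)
    ys = np.linspace(p0[1], p1[1], n)
    # Bilinear interpolation at those fractional positions (order=1);
    # map_coordinates expects (row, col), hence ys before xs
    return map_coordinates(img, np.vstack([ys, xs]), order=1)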
Related
I want to match the symmetry of object contours. I tried using matchShapes(), computeDistance(), and HuMoments() from the OpenCV 3.0 library, but none of them is close to what I want.
Following are the images on which I am working.
Good Shape-1
Defected
I expect to get the highest asymmetry value for image 2 (named Defected).
You can do it yourself, by using several simple tools:
Find the centroid
Use PCA (http://docs.opencv.org/master/d1/dee/tutorial_introduction_to_pca.html#gsc.tab=0) to find the main axis
Rotate the shape so that the main axis points up
In each row, count the number of pixels on either side of the center and compare the counts (there should be a single center coordinate, probably the median of all the per-row centers)
You can tune your own thresholds to fit the problem.
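A minimal Python/OpenCV sketch of these steps; the score definition and the upright-rotation convention below are my own assumptions, not a fixed recipe (the mask is assumed to be a uint8 binary image):

import numpy as np
import cv2

def asymmetry_score(mask):
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.float64)
    centroid = pts.mean(axis=0)

    # PCA via the covariance matrix; eigh returns eigenvalues in
    # ascending order, so the last eigenvector is the main axis
    cov = np.cov((pts - centroid).T)
    _, eigvecs = np.linalg.eigh(cov)
    main = eigvecs[:, -1]

    # Rotate the shape so the main axis points up
    angle = np.degrees(np.arctan2(main[1], main[0])) + 90
    M = cv2.getRotationMatrix2D((float(centroid[0]), float(centroid[1])), angle, 1.0)
    upright = cv2.warpAffine(mask, M, (mask.shape[1], mask.shape[0]))

    # Single centre column: median of all per-row centres
    rows = [np.nonzero(r)[0] for r in upright if r.any()]
    cx = np.median([(r[0] + r[-1]) / 2 for r in rows])

    # Sum of per-row differences between left and right pixel counts
    return sum(abs(np.sum(r < cx) - np.sum(r > cx)) for r in rows)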
I have the following code in MATLAB which is supposed to draw a polygon on an image (it has to be a 2-D image matrix, not just a patch).
numCorners = 8;
dotPos = [];
for rr = 1:numCorners
    dotPos(end+1) = cos(rr/numCorners*2*pi)*100;
    dotPos(end+1) = sin(rr/numCorners*2*pi)*100;
end
BaseIm = zeros(1000,1000);
dotpos = [500,500];
imageMatrix = drawpolygon(BaseIm, dotPos, 1); % or how else do I draw a white polygon here?
imshow(imageMatrix);
This doesn't work, as drawpolygon does not appear to exist. Any idea how to do this?
Note that the resulting data must be an image the same size as BaseIm and must be an array of doubles (ints can be converted), as this is test data for another algorithm.
I have since found the inpolygon(xi,yi,xv,yv) function, which I could combine with a for loop if I knew how to call it properly.
If you just need to plot two polygons, you can use the fill function.
t = 0:2*pi;
x = cos(t)*2;
y = sin(t)*2;
fill(x,y,'r')
hold on
fill(x/2,y/2,'g')
As an alternative, you can use the patch function:
figure
t = 0:2*pi;
x = cos(t)*2;
y = sin(t)*2;
patch(x,y,'c')
hold on
patch(x/2,y/2,'k')
Edit
The fill and patch functions also allow you to add polygons over an actual image.
% Load an image on the axes
imshow('Jupiter_New_Horizons.jpg')
hold on
% Get the axis limits (just to center the polygons)
x_lim = get(gca,'xlim');
y_lim = get(gca,'ylim');
% Create the polygon's coords
t = 0:2*pi;
x = cos(t)*50 + x_lim(2)/2;
y = sin(t)*50 + y_lim(2)/2;
% Add the two polygons to the image
f1_h = fill(x,y,'r');
hold on
f2_h = fill(x/2,y/2,'g');
Hope this helps.
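Note that fill and patch draw on a figure rather than producing the numeric matrix the question asks for. If the matrix itself is what you need, rasterizing the polygon directly may be closer to the goal (MATLAB's poly2mask does this). A hedged sketch of the same idea with Python/OpenCV's fillPoly, using the octagon coordinates from the question:

import numpy as np
import cv2

# Eight corners of a regular octagon centred at (500, 500), radius 100
n = 8
angles = np.arange(1, n + 1) / n * 2 * np.pi
pts = np.stack([np.cos(angles), np.sin(angles)], axis=1) * 100 + 500

# Fill the polygon with white (255) on a black 1000x1000 canvas
base = np.zeros((1000, 1000), dtype=np.uint8)
cv2.fillPoly(base, [pts.round().astype(np.int32)], 255)

imageMatrix = base.astype(np.float64)  # array of doubles, same size as base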
1.png just has one contour:
import cv2

img = cv2.imread('1.png', cv2.IMREAD_GRAYSCALE)  # findContours needs a single-channel image
retval, dst = cv2.threshold(img, 120, 255, cv2.THRESH_BINARY)
contours, hierarchy = cv2.findContours(dst, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print cv2.contourArea(contours[0], False)
The image has just one contour, and contours is a list. But when I change contours[0] to contours[3] or other indices, I still get an area. I don't understand this; there should be only one contour.
Why do so many values appear? Is it a problem with the threshold? I need your help.
The issue is that what OpenCV considers to be a contour is only part of what you consider to be a contour. In other words, OpenCV is splitting up your contour into pieces. It's possible you could get the full area by adding the areas of the individual contours, but this method is not reliable.
Something you could try is Image Morphology. Using this, you could "grow" the contours so that they overlap more, which would mean that the chance of OpenCV recognizing them as a single contour would be greater.
However, this method would result in a loss of precision. Therefore, if you need an exact area, you will have to rely on other methods. For complex geometries, this is not a simple task.
One more "quick and dirty" solution I've used is to create a fresh one-channel Mat (Mat::zeros), draw the contours filled with a color value of 255 (drawContours), sum the contents (in C++ this is cv::sum, and for you I think it's cv2.sum), then divide the result by 255. This gives you the area in pixels, and it is more reliable than summing the areas of the individual contours because it accounts for overlap between them.
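A sketch of that last suggestion in Python, under the same assumptions as the question's code (1.png is the question's test image; the sum is done here with NumPy, which does the same job as cv2.sum):

import numpy as np
import cv2

img = cv2.imread('1.png', cv2.IMREAD_GRAYSCALE)
retval, dst = cv2.threshold(img, 120, 255, cv2.THRESH_BINARY)
# [-2] keeps this working whether findContours returns 2 or 3 values
contours = cv2.findContours(dst, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]

# Draw every contour filled (thickness -1) into a fresh one-channel mask
mask = np.zeros(img.shape, dtype=np.uint8)
cv2.drawContours(mask, contours, -1, 255, -1)

# Sum and divide by 255: overlapping pieces are only counted once
area = mask.sum(dtype=np.int64) // 255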
I need to clean this picture: delete the writing "clean me" and make it brighter.
As part of my homework in an image processing course I may use the MATLAB function ginput to find specific points in the image (of course, in the script you should hard-code the coordinates you need).
You may use conv2, fft2, ifft2, fftshift etc.
You may also use median, mean, max, min, sort, etc.
My basic idea was to take the white and black values from the middle of the picture and insert them into the other parts of the black and white stripes; however, this gives the picture a very synthetic look.
Can you please point me in a direction? A median filter will not give good results.
The general technique to do such a thing is called inpainting, but in order to apply it you need a mask of the regions that you want to inpaint. So, let us suppose that we managed to get a good mask and inpainted the original image considering a morphological dilation of this mask:
To get that mask, we don't need anything too fancy. Start with a binarization of the difference between the original image and the result of median filtering it:
You can remove isolated pixels; join the pixels representing the stars of your flag by a horizontal dilation followed by another dilation with a small square; remove the largest component just created; and then perform a geodesic dilation of the result so far against the initial mask. This gives the good mask above.
Now, to inpaint there are many algorithms, but one of the simplest ones I've found is described in Fast Digital Image Inpainting, which should be easy enough to implement. I didn't use it, but you could, and verify which results you can obtain.
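For what it's worth, OpenCV also ships ready-made inpainting (cv2.inpaint). A hedged Python sketch of the whole pipeline described above, where the file name, kernel sizes, and threshold are assumptions to tune:

import numpy as np
import cv2

img = cv2.imread('flag.png')  # hypothetical file name
med = cv2.medianBlur(img, 21)

# Binarize the difference between the image and its median filtering
diff = cv2.cvtColor(cv2.absdiff(img, med), cv2.COLOR_BGR2GRAY)
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Dilate the mask a little, then inpaint the masked region
mask = cv2.dilate(mask, np.ones((5, 5), np.uint8))
clean = cv2.inpaint(img, mask, 5, cv2.INPAINT_TELEA)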
EDIT: I missed that you also wanted to brighten the image.
An easy way to brighten an image, without making the brighter areas even brighter, is to apply a gamma factor < 1. Being more specific to your image, you could first apply a relatively large lowpass filter, negate it, multiply the original image by it, and then apply the gamma factor. In this second case the final image will likely be darker than the first one, so you multiply it by a simple scalar value. Here are the results for these two cases (the left one is simply gamma 0.6):
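The gamma part is a one-liner; a minimal sketch, with the gamma value taken from the example above and the image assumed to be uint8:

import numpy as np

def apply_gamma(img, gamma=0.6):
    # Normalize to [0, 1], raise to gamma < 1 to brighten, scale back
    norm = img.astype(np.float64) / 255.0
    return np.uint8(norm ** gamma * 255)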
If you really want to brighten the image, then you can apply a bilateral filter and binarize it:
I see two options for removing "clean me". Both rely on the horizontal similarity.
1) Use a long 1-D low-pass filter in the horizontal direction only.
2) Use a 1-D median filter, maybe 10 pixels long.
For both solutions you of course have to exclude the stars part.
When it comes to brightness, you could try histogram equalization. However, that won't fix the unevenness of the brightness; maybe a high-pass filter before equalization can fix that.
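Both are single calls in OpenCV; CLAHE, a tile-based variant of histogram equalization, also compensates for some of the unevenness. A sketch, with the file name as a placeholder:

import cv2

gray = cv2.imread('flag.png', cv2.IMREAD_GRAYSCALE)
equalized = cv2.equalizeHist(gray)

# CLAHE equalizes per tile, which also evens out brightness somewhat
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
evened = clahe.apply(gray)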
Regards
The simplest way to remove the text is, as KlausCPH said, to use a long 1-D median filter in the region with the stripes. In order not to corrupt the stars, you would need to keep a backup of that part and restore it after the median filter has run. To do this, you could use ginput to mark the lower right corner of the star region:
% Mark lower right corner of star-region
figure();imagesc(Im);colormap(gray)
[xCorner,yCorner] = ginput(1);
close
xCorner = round(xCorner); yCorner = round(yCorner);
% Save star region
starBackup = Im(1:yCorner,1:xCorner);
% Clean up stripes
Im = medfilt2(Im,[1,50]);
% Replace star region
Im(1:yCorner,1:xCorner) = starBackup;
This produces
To fix the exposure problem (the middle part being brighter than the corners), you could fit a 2-D Gaussian model to your image and normalize by it. If you want to do this, I suggest looking into fit, although this can be a bit technical if you have not worked with model fitting before.
My found 2-D gaussian looks something like this:
Putting these two things together, gives:
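If fitting an explicit 2-D Gaussian model is too involved, a cruder stand-in for the same normalization idea is to estimate the illumination field with a very large Gaussian blur and divide it out. A Python sketch, where the sigma and file name are assumptions:

import numpy as np
import cv2

gray = cv2.imread('flag.png', cv2.IMREAD_GRAYSCALE).astype(np.float64)

# Estimate the smooth illumination field with a very large Gaussian blur
illum = cv2.GaussianBlur(gray, (0, 0), 80)

# Divide out the estimated field and rescale back to 8 bits
flat = gray / (illum + 1e-6)
flat = np.uint8(255 * (flat - flat.min()) / (flat.max() - flat.min()))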
I used the gausswin() function to make a Gaussian mask:
Pic_usa_g = abs(1 - gausswin( size(Pic_usa,2) ));
Pic_usa_g = Pic_usa_g + 0.6;
Pic_usa_g = Pic_usa_g .* 2;
Pic_usa_g = Pic_usa_g';
C = repmat(Pic_usa_g, size(Pic_usa,1),1);
and after multiplying the image by the mask you get the fixed image.
I'm looking for a generic algorithm to calculate a red/cyan anaglyph starting from the original image and its b/w depth map (example: http://www.swell3d.com/2008/07/turn-2d-painting-into-3d-anagl.html)
That algorithm is used, for example, in Photoshop, but I can't find a readable explanation of how to reproduce it.
Thanks
After some research, I found what I was looking for.
First, I read some Photoshop/GIMP tutorials that describe how to make anaglyphs from two inputs: an image and its grayscale depth map. The core of the process is the use of the "Displace Tool" with the depth map as a displacement map.
One of the several YouTube tutorials: http://www.youtube.com/watch?v=gfYMe_vYhu4
So, I gathered some documentation about GIMP's Displace Tool by looking at http://docs.gimp.org/en/plug-in-displace.html and directly at the source code of the tool (the method is very similar to the one proposed by Asgeir).
This lets us produce two stereo images from the input by looking at the depth map. The red and cyan components of each image are calculated following this page: http://3dtv.at/Knowhow/AnaglyphComparison_en.aspx (the "optimized" matrices are the best ones).
Then, summing the two images into one produces the final anaglyph. Thanks everybody.
There are two algorithms involved. The first uses the original image and the depth map to produce a left and a right image. The second combines these images into a red-cyan anaglyph.
There are a couple of ways to accomplish the first part. One is to take the original image and texture-map it onto a fine mesh that lies flat in the XY plane. Then you tweak the Z values of each vertex in the mesh according to the corresponding value in the depth map; you've basically created a textured bas relief. You then use a 3D rendering algorithm to render the image from two vantage points that are offset horizontally by a small amount (essentially the vantage points of a person's left and right eyes as they would view the bas relief).
There is probably a way to directly shift the pixels left and right, which would be a good, fast approximation to what I described above.
Once you have the left and right images, you pass one through a cyan filter and one through a red filter. If you have RGB sources, that's as simple as taking the red channel from one image and combining it with the green and blue channels from the other image.
Anaglyphs work best with muted colors. If you have strong primaries, it won't look as good. You can use an algorithm to reduce the color saturation of the original image before you begin.
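The channel combination itself is trivial once the stereo pair exists. A Python sketch, where the file names are placeholders; note that OpenCV stores channels in BGR order, so index 2 is red:

import cv2

left = cv2.imread('left.png')    # hypothetical stereo pair
right = cv2.imread('right.png')

# Red channel from the left eye, green and blue from the right eye
anaglyph = right.copy()
anaglyph[:, :, 2] = left[:, :, 2]
cv2.imwrite('anaglyph.png', anaglyph)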
From the description in the link you provided, I would assume that it is something like:
for each pixel in depthmap
    x_offset = (depthmap[x][y] / 255.0f) * MAX_PIXEL_OFFSET * DIRECTION
    output[x + x_offset][y] = color_buffer[x][y]
blend output with color_buffer
where MAX_PIXEL_OFFSET is the maximum shift in pixels and DIRECTION is -1 for one color and 1 for the other. This assumes that the depth buffer is one byte per pixel, with range [0..255], and that 0 in the depth buffer represents maximum distance.
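A runnable NumPy version of that pseudocode, under the same assumptions; starting the output from a copy of the color buffer is a cheap stand-in for the final blend step:

import numpy as np

MAX_PIXEL_OFFSET = 20  # maximum shift in pixels, tuned to taste

def shift_by_depth(color, depth, direction):
    # color: (H, W, 3) uint8, depth: (H, W) uint8, direction: -1 or 1
    h, w = depth.shape
    out = color.copy()  # gaps left by the shift keep the original pixels
    xs = np.arange(w)
    for y in range(h):
        offs = (depth[y] / 255.0 * MAX_PIXEL_OFFSET * direction).astype(int)
        tx = np.clip(xs + offs, 0, w - 1)
        out[y, tx] = color[y]
    return out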