What's the use of edge detection of an image? - edge-detection

After getting an edge image using Canny, what is the edge image used for?
Are there any use cases for an edge image?
Finding an object and segmenting it from the image? Or getting the shape, area, and perimeter of the object?

As Wikipedia puts it:
Edge detection is the name for a set of mathematical methods which
aim at identifying points in a digital image at which the image
brightness changes sharply or, more formally, has discontinuities. The
points at which image brightness changes sharply are typically
organized into a set of curved line segments termed edges.
You can use this to find the area of interest in an image programmatically. For example, if you have a laser scan image of an indoor floor map and you want to detect the actual area a robot can visit, edge detection will be useful. You can search Google for more on this; it's just one example of real-world usage.
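As a minimal illustration of that kind of use, here is a hedged OpenCV sketch that runs Canny and then measures each detected outline's area and perimeter (the filename and thresholds below are placeholders, not from the question):

```python
import cv2

# Placeholder input; the Canny thresholds are tuning parameters.
img = cv2.imread("floor_map.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(img, 100, 200)

# Contours extracted from the edge map give each object's outline,
# from which area and perimeter follow directly.
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    area = cv2.contourArea(c)
    perimeter = cv2.arcLength(c, True)  # True = treat contour as closed
    print(f"area={area:.1f}, perimeter={perimeter:.1f}")
```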

Related

Looking for algorithm to map an image to 4 sides polygon

This is more a math question than a programming question, besides the fact that I must implement it using Delphi inside a graphics application.
Assume I have a picture of a sheet of paper. The actual sheet of paper is, of course, a rectangular area. When the picture is shown on a computer screen, the rectangular area is no longer rectangular, because when the picture was taken the camera was not perfectly positioned above the sheet of paper. There are all kinds of perspective effects that result in deformations.
My application needs to tweak the image so that the original rectangular area is displayed as a rectangular area on screen.
Most photo-processing software has an interactive tool to do that. The user draws a rectangular area on screen around the rectangular object and then drags each corner to deform the displayed area until the real area looks rectangular. What I'm looking for is the algorithm to do that computation.
You need to split the problem into two steps: find the edges or corners of the sheet, then remap the pixels.
Finding the corners or edges is a really hard problem in general, since they might be invisible, outside of the picture, obstructed, bent, or deformed. Assuming you have a very simple setup (black uniform background, white paper, very little distortion), you could run an edge-detection kernel over the image and then find the 4 outer edges. If you find the edges you can intersect them to find the corners, and the other way around.
Once you find the corners run an interpolation over the image to map the pixels onto the rectangle you want. You should be able to get the graphics engine to do this for you if you provide the coordinates of the corners as texture coordinates for the rectangle and map the image as a texture.
I made it sound simple, but you will encounter many parameters to set and experiment with.
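Once the four corners are known, a minimal OpenCV sketch of the remapping step could look like this (the corner pixels and output size are placeholder values):

```python
import cv2
import numpy as np

img = cv2.imread("sheet_photo.png")  # placeholder input

# Corners of the sheet as found in the photo (placeholder values),
# ordered top-left, top-right, bottom-right, bottom-left.
src = np.float32([[51, 31], [480, 40], [470, 620], [60, 610]])

# Target rectangle: 400x566 pixels, roughly an A4 aspect ratio.
w, h = 400, 566
dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Compute the 3x3 perspective matrix and remap the pixels;
# warpPerspective handles the interpolation.
M = cv2.getPerspectiveTransform(src, dst)
rectified = cv2.warpPerspective(img, M, (w, h))
cv2.imwrite("sheet_rectified.png", rectified)
```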
It seems (because you mentioned bilinear interpolation) that you need perspective transformations.
There is an implementation of perspective transformations (mapping an arbitrary convex quad to a rectangle and vice versa) in the Anti-Grain Geometry library (exe example). A Delphi port exists.
With agg_trans_perspective one can calculate the perspective transformation matrix and then apply it to map coordinates from one quad to the other.

Find my camera's 3D position and orientation according to a 2D marker

I am currently building an augmented-reality application and am stuck on a problem that seems quite easy but is very hard for me... The problem is as follows:
My device's camera is calibrated and detects a 2D marker (such as a QR code). I know the focal length, the sensor's position, the distance between my camera and the center of the marker, the real size of the marker, and the coordinates of the marker's 4 corners and of its center on the 2D image I got from the camera. See the following image:
On the image, we know the a, b, c, d distances and the coordinates of the red dots.
What I need to know is the position and the orientation of the camera according to the marker (as represented on the image, the origin is the center of the marker).
Is there an easy and fast way to do so? I tried some methods I came up with myself (using Al-Kashi's formula), but they ended up with too many errors :(. Could someone point out a way to get me out of this?
You can find some example code for the EPnP algorithm on this webpage. This code consists of one header file and one source file, plus one file for the usage example, so it shouldn't be too hard to include in your code.
Note that this code is released for research/evaluation purposes only, as mentioned on this page.
EDIT:
I just realized that this code needs OpenCV to work. By the way, although this would add a pretty big dependency to your project, the current version of OpenCV has a built-in function called solvePnP, which does what you want.
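As a rough sketch of how solvePnP could be applied here (the intrinsics, corner coordinates, and marker size below are placeholder values, not from the question):

```python
import cv2
import numpy as np

# Marker corners in the marker's own coordinate system, origin at its
# center; a 10 cm marker is assumed here (placeholder size).
s = 0.05  # half the marker side length, in meters
object_points = np.float32([[-s,  s, 0], [ s,  s, 0],
                            [ s, -s, 0], [-s, -s, 0]])

# The same four corners as detected in the camera image (placeholder pixels).
image_points = np.float32([[320, 180], [420, 185], [415, 290], [315, 285]])

# Camera intrinsics from your calibration (placeholder values),
# assuming negligible lens distortion.
K = np.float64([[800, 0, 320], [0, 800, 240], [0, 0, 1]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)

# rvec/tvec map marker coordinates into camera coordinates; invert the
# transform to get the camera's pose expressed in the marker's frame.
R, _ = cv2.Rodrigues(rvec)
camera_position = -R.T @ tvec
print("camera position in marker frame:", camera_position.ravel())
```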
You can compute the homography between the image points and the corresponding world points. Then from the homography you can compute the rotation and translation mapping a point from the marker's coordinate system into the camera's coordinate system. The math is described in the paper on camera calibration by Zhang.
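Here is a hedged numpy sketch of that decomposition, assuming you already have the homography H (e.g. from OpenCV's findHomography on the four marker correspondences) and the calibration matrix K:

```python
import numpy as np

def pose_from_homography(H, K):
    """Recover rotation and translation from a plane-to-image homography,
    following Zhang: H ~ K [r1 r2 t] up to scale."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])  # normalize: rotation columns are unit
    r1, r2, t = lam * A[:, 0], lam * A[:, 1], lam * A[:, 2]
    r3 = np.cross(r1, r2)
    R = np.column_stack([r1, r2, r3])
    # R is only approximately orthonormal; project onto SO(3) via SVD.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```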
Here's an example in MATLAB using the Computer Vision System Toolbox, which does most of what you need. It uses the extrinsics function, which computes a 3D rotation and a translation from matching image and world points. The points need not come from a checkerboard.

Best Elliptical Fit for irregular shapes in an image

I have an image with arbitrarily shaped regions (call them objects). Assume the background pixels are labeled zero, whereas each object has a unique label (pixels of object 1 are labeled 1, pixels of object 2 are labeled 2, ...). Now, for every object, I need to find the best elliptical fit of its pixels. This requires finding the center of the object, the major and minor axes, and the rotation angle. How can I find these?
Thank you.
Principal Component Analysis (PCA) is one way to go. See Wikipedia here.
The centroid is easy enough to find if your shapes are convex - just a weighted average of intensities over the xy positions - and PCA will give you the major and minor axes, hence the orientation.
Once you have the centre and axes, you have the basis for a set of ellipses that cover your shape. Extending the axes - in proportion - and testing each pixel for in/out, you can find the ellipse that just covers your shape. Or if you prefer, you can project each pixel position onto the major and minor axes and find the rough limits in one pass and then test in/out on "corner" cases.
It may help if you post an example image.
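For a concrete illustration, here is a minimal numpy sketch of the PCA approach on the label image described in the question (the scaling of the axis lengths is a choice, not a standard):

```python
import numpy as np

def ellipse_params(labels, obj_id):
    """PCA ellipse fit for one labeled region: returns the centroid, the
    standard deviations along the principal axes (scale them, e.g. by 4,
    for full axis lengths that cover most pixels), and the rotation
    angle of the major axis."""
    ys, xs = np.nonzero(labels == obj_id)
    pts = np.column_stack([xs, ys]).astype(float)
    center = pts.mean(axis=0)
    cov = np.cov((pts - center).T)       # 2x2 covariance of pixel positions
    evals, evecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    minor_sigma, major_sigma = np.sqrt(evals)
    angle = np.arctan2(evecs[1, 1], evecs[0, 1])  # major-axis direction
    return center, major_sigma, minor_sigma, angle
```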
As you seem to be using Matlab, you can simply use the regionprops command, given that you have the Image Processing Toolbox.
It can extract all the information you need (and many more properties of image regions), and it will do the PCA for you, if the PCA-based approach suits your needs.
The doc is here; look for the 'Centroid', 'Orientation', 'MajorAxisLength', and 'MinorAxisLength' properties specifically.
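If you end up in Python instead, scikit-image's regionprops is the analogous tool (a minimal sketch; the synthetic label image is just for illustration):

```python
import numpy as np
from skimage.measure import regionprops

# Label image as in the question: 0 = background, 1, 2, ... = objects.
labels = np.zeros((50, 50), dtype=int)
labels[10:30, 15:40] = 1  # a synthetic rectangular "object"

for p in regionprops(labels):
    print(p.label, p.centroid, p.orientation,
          p.major_axis_length, p.minor_axis_length)
```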

Map image/texture to a predefined uneven surface (t-shirt with folds, mug, etc.)

Basically I was trying to achieve this: impose an arbitrary image onto a pre-defined uneven surface (see the examples below).
I do not have a lot of experience with image processing or 3D algorithms, so here is the best method I can think of so far:
Predefine a set of coordinates (say, for a 10x10 grid of points, 100 coordinates starting with (0,0), (0,10), (0,20), etc.). That gives 9x9 = 81 cells.
Record the transformation of each individual coordinate on the t-shirt image, e.g. (0,0) becomes (51,31), (0,10) becomes (51,35), etc.
Triangulate the original image into 81x2 = 162 triangles (2 triangles per cell). Transform each triangle of the image based on the coordinate transformations obtained in step 2 and draw it on the t-shirt image.
Problems/questions I have:
I don't know how to smooth out each triangle so that the image on the t-shirt does not look ragged.
Is there a better way to do it? I want to make sure I'm not reinventing the wheels here before I proceed with an implementation.
Thanks!
This is called digital image warping. There was a popular graphics text on it in the 1990s (probably from somebody's thesis). You can also find an article on it from Dr. Dobb's Journal.
Your process is essentially correct. If you work pixel by pixel, rather than trying to use triangles, you'll avoid some of the problems you're facing. Scan across the pixels in the target bitmap and apply the local transformation, based on the cell you're in, to determine the coordinate of the corresponding pixel in the source bitmap. Copy that pixel over.
For a smoother result, do your coordinate transformations in floating point and interpolate the pixel values from the source image using something like bilinear interpolation.
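A minimal numpy sketch of that inverse-mapping loop (inverse_map is a hypothetical helper standing in for whatever your per-cell grid transform provides):

```python
import numpy as np

def warp(src, inverse_map, out_shape):
    """Scan the target bitmap; for each pixel, find its source coordinate
    via inverse_map(x, y) -> (sx, sy) and sample the source image with
    bilinear interpolation."""
    h, w = out_shape
    out = np.zeros((h, w) + src.shape[2:], dtype=src.dtype)
    for y in range(h):
        for x in range(w):
            sx, sy = inverse_map(x, y)
            x0, y0 = int(np.floor(sx)), int(np.floor(sy))
            if not (0 <= x0 < src.shape[1] - 1 and 0 <= y0 < src.shape[0] - 1):
                continue  # falls outside the source image
            fx, fy = sx - x0, sy - y0
            # Blend the four neighboring source pixels.
            top = (1 - fx) * src[y0, x0] + fx * src[y0, x0 + 1]
            bot = (1 - fx) * src[y0 + 1, x0] + fx * src[y0 + 1, x0 + 1]
            out[y, x] = (1 - fy) * top + fy * bot
    return out
```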
This isn't really a solution to the problem, just a workaround:
If you have a 3D model that represents the t-shirt, you can use DirectX/OpenGL and apply your image as a texture of the t-shirt.
Then you can have it render the picture you want from any point of view.

How do you mask an arbitrary area of an image to overlay another image?

I want to mask an arbitrary convex polygon area of an image and put another image into that area. I found this posting, but it wasn't clear to me whether it applies only to rectangular areas and not arbitrary polygons.
The basic flow I am talking about is to have an (x,y) coordinate on the screen which would serve as the center of my polygon (center in terms of an arbitrary point which is consistent for me). I would like to mask this area where the new image (polygonal in nature) would be displayed, while leaving the rest of the screen as is.
Can I do this easily and quickly?
You have to use the stencil buffer. It's basically another type of buffer that has a plethora of awesome applications, and one of the simplest is masking. While I can't recommend any OpenGL ES specific tutorial off the top of my head, I highly recommend reading general tutorials, since it's not that different and is surely fascinating.
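The stencil buffer does this masking on the GPU; just to illustrate the same idea in a form that runs standalone, here is a CPU sketch with OpenCV (not the stencil approach itself; the filenames and polygon coordinates are placeholders):

```python
import cv2
import numpy as np

background = cv2.imread("scene.png")   # placeholder inputs; the overlay
overlay = cv2.imread("overlay.png")    # is assumed the same size here

# The arbitrary convex polygon, as pixel coordinates (placeholder values).
polygon = np.array([[120, 80], [260, 60], [300, 200], [150, 240]],
                   dtype=np.int32)

# Build a binary mask for the polygon (the CPU analogue of what the
# stencil buffer marks on the GPU) and copy overlay pixels inside it.
mask = np.zeros(background.shape[:2], dtype=np.uint8)
cv2.fillConvexPoly(mask, polygon, 255)
background[mask == 255] = overlay[mask == 255]
cv2.imwrite("composited.png", background)
```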
Try glScissor... it might give you the rectangle you want.
