Convert freehand path to shape - algorithm

I am trying to let the user draw a freehand shape and then, using a best-guess algorithm, convert that freehand path into an actual shape. I want to keep it simple at first, probably just ellipses and rectangles. I'm trying to find a good starting point. Is there a library available that does this, or a set of algorithms that would be useful? Any help to get me started would be great; I'm having trouble finding the proper terms to search for.

googling "pattern recognition geometric shapes handwritten" returns hits including A Simple Approach to Recognise Geometric Shapes Interactively

Related

Is there some generic algorithm to calculate the dimensions of a piece of fabric needed to cover a 3D shape

I hope that this is the correct place to ask this kind of question. I am developing a web app to design garden ponds, and I need to calculate the shape and size of the foil needed to cover that pond. The pond will be provided as a 3D model (threeJS). The shape of the pond will be relatively simple (think one or more rectangular boxes, potentially with some stairs).
I am considering folding out the surface of the 3D model into a flat shape, but I do not know how to do that in a generic way. And even if I could do that, it would not be the complete solution (but potentially it would be a starting point). I have been searching for a generic algorithm to do this, but so far have not found anything. Does anyone know of an algorithm that I could use for this, or at least something that I could start with?
Some additional information:
this will be a browser-based solution which should show the pond; one option would be ThreeJS since I am somewhat familiar with it
the foil that should cover the pond needs to be watertight, so it needs to be one piece. That means that when you put it in the pool, it will form wrinkles, especially in the corners.
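For what it's worth, the special case of a single rectangular box (no stairs) can be computed directly with the usual flat-sheet rule of thumb: the foil must span the bottom plus both walls in each direction, with some overlap at the rim, and the excess is folded into the corner wrinkles. This is only a sketch of that special case, not the generic mesh-unfolding algorithm being asked for; the overlap value is an assumption:

```python
def foil_size(length, width, depth, overlap=0.5):
    """Flat foil dimensions for a single rectangular box pond (no stairs).

    The foil has to cover the bottom plus both walls in each direction, with
    `overlap` added at the rim on every side. Units are whatever you pass in
    (e.g. metres).
    """
    foil_length = length + 2 * depth + 2 * overlap
    foil_width = width + 2 * depth + 2 * overlap
    return foil_length, foil_width

# Example: a 3 m x 2 m pond that is 1 m deep needs roughly a 6 m x 5 m foil.
print(foil_size(3.0, 2.0, 1.0))
```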

Removing skew/distortion based on known dimensions of a shape

I have an idea for an app that takes a printed page with four squares, one in each corner, and allows you to measure objects on the paper as long as at least two squares are visible. I want the user to be able to take a picture from less-than-perfect angles and still have the objects be measured accurately.
I'm unable to figure out exactly how to find information on this subject due to my lack of knowledge in the area. I've been able to find examples of OpenCV code that does some interesting transforms and the like, but I've yet to figure out how to phrase what I'm asking in simpler terms.
Does anyone know of papers or mathematical concepts I can lookup to get further into this project?
I'm not quite sure how or who to ask other than people on this forum, sorry for the somewhat vague question.
What you describe is very reminiscent of augmented reality marker tracking. Maybe you can start by searching these words on a search engine of your choice.
A single marker, if done correctly, can be used to identify it without confusing it with other markers AND to determine how the surface is placed in 3D space in front of the camera.
But that's all very difficult and advanced stuff; I'd strongly advise you NOT to try to implement something like this yourself, as it would take years of research. The only realistic way is to use a ready-made open-source library that outputs the data you need for your app.
Such a library may not even exist; in that case you'll have to buy one. Given how niche your problem is, that would be perfectly plausible.
Here I cover only the programming aspect; if you want, you can dig into the mathematical side starting from these examples. Most of what you need can be done with OpenCV. Here are some examples in Python:
To detect the printed paper, you can use the cv2.findContours function. The outermost contour is probably the paper, but you need to test on actual images. https://docs.opencv.org/3.1.0/d4/d73/tutorial_py_contours_begin.html
If the page is skewed (not photographed at a perfect angle), you can find the rotation with cv2.minAreaRect, which returns the angle of the contour found above. https://docs.opencv.org/3.1.0/dd/d49/tutorial_py_contour_features.html (part 7b).
If you want to rotate the paper, use cv2.warpAffine. https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_geometric_transformations/py_geometric_transformations.html
To detect the objects on the paper, there are several methods. The easiest is to use the contours found above. If the objects have distinctive colors, you can detect them with a color filter. https://opencv-python-tutroals.readthedocs.io/en/latest/py_tutorials/py_imgproc/py_colorspaces/py_colorspaces.html
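A rough sketch tying these steps together; the input file name, Otsu binarization, and largest-contour assumption are mine and need tuning on real images. Note that a pure rotation via cv2.warpAffine only fixes in-plane skew; real perspective distortion needs cv2.getPerspectiveTransform and cv2.warpPerspective instead.

```python
import cv2

img = cv2.imread("page.jpg")                    # placeholder input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

# 1. Find contours and take the largest one as the paper candidate.
#    ([-2] keeps this working on both OpenCV 3.x and 4.x return signatures.)
contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
paper = max(contours, key=cv2.contourArea)

# 2. minAreaRect gives the rotated bounding box of the paper and its angle.
center, size, angle = cv2.minAreaRect(paper)

# 3. Rotate the whole image about the paper's centre so it becomes axis-aligned.
h, w = img.shape[:2]
M = cv2.getRotationMatrix2D(center, angle, 1.0)
deskewed = cv2.warpAffine(img, M, (w, h))
cv2.imwrite("deskewed.jpg", deskewed)
```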

Is there some well-known algorithm which turns user's drawings into smoothed shapes?

My requirements:
A user should be able to draw something by hand. Then, after they lift their pen (or finger), an algorithm smooths the drawing and transforms it into some basic shapes.
To get started, I want to transform a drawing into a rectangle that resembles the original as closely as possible. (Naturally this won't work if the user intentionally draws something else.) Right now I'm calculating average x and y positions and distinguishing between horizontal and vertical lines, but the result is not yet a rectangle, just a set of orthogonal lines. (A crude bounding-box variant of this idea is sketched after this question.)
I wondered if there is some well-known algorithm for that, because I saw it a few times at some touchscreen applications. Do you have some reading tip?
Update: Maybe a pattern recognition algorithm would help me. Some phones ask the user to draw a pattern to unlock the screen.
P.S.: I think this question is not related to a particular programming language, but if you're interested, I will build a web application with RaphaelGWT.
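A crude alternative to the averaging approach described above is to snap the stroke to its axis-aligned bounding box. This is a pure NumPy sketch of my own and ignores rotation entirely, so treat it as a baseline rather than a real fitting algorithm:

```python
import numpy as np

def snap_to_bounding_box(points):
    """Snap a freehand stroke to its axis-aligned bounding rectangle.

    `points` is an (N, 2) array of x, y samples of the stroke. Returns the
    four corners in drawing order. For a rotated-rectangle fit, see
    cv2.minAreaRect instead.
    """
    pts = np.asarray(points, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    return [(x_min, y_min), (x_max, y_min), (x_max, y_max), (x_min, y_max)]
```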
The Douglas-Peucker algorithm is used in geography (to simplify a GPS track, for instance); I guess it could be used here as well.
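For reference, a plain-Python sketch of Douglas-Peucker; the point format and the epsilon tolerance are up to you:

```python
import math

def douglas_peucker(points, epsilon):
    """Simplify a polyline with the Douglas-Peucker algorithm.

    `points` is a list of (x, y) tuples; `epsilon` is the maximum allowed
    perpendicular distance from the original points to the simplified line.
    """
    if len(points) < 3:
        return list(points)

    (x1, y1), (x2, y2) = points[0], points[-1]

    def perpendicular_distance(p):
        # Distance from p to the line through the first and last point.
        x0, y0 = p
        num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
        den = math.hypot(x2 - x1, y2 - y1)
        return num / den if den else math.hypot(x0 - x1, y0 - y1)

    # Find the interior point farthest from the chord between the endpoints.
    distances = [perpendicular_distance(p) for p in points[1:-1]]
    max_index, max_dist = max(enumerate(distances, start=1), key=lambda t: t[1])

    if max_dist <= epsilon:
        # Everything is close enough to the chord: keep only the endpoints.
        return [points[0], points[-1]]

    # Otherwise split at the farthest point and recurse on both halves.
    left = douglas_peucker(points[:max_index + 1], epsilon)
    right = douglas_peucker(points[max_index:], epsilon)
    return left[:-1] + right
```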
Based on your description I guess you're looking for a vectorization algorithm. Here are some pointers that might help you:
https://en.wikipedia.org/wiki/Image_tracing
http://outliner.codeplex.com/ - an open-source vectorizer of edges in raster pictures.
http://code.google.com/p/shapelogic/wiki/vectorization - describes different vectorization algorithm implementations
http://cardhouse.com/computer/vector.htm
There are a lot of resources on vectorization algorithms, so I'm sure you'll be able to find something that fits your needs. I don't know how complex these algorithms are to implement, though.

3D triangulation algorithm

Does anybody know what triangulation algorithm Maya uses? Lacking that, what would be the most probable algorithms to try? I tried a few simple ones off the top of my head (shortest/longest resulting edges, smallest minimum angle, smallest/biggest area), but they were all wrong. Is Delaunay the most plausible algorithm?
Edit: by the way, pseudocode on how to implement Delaunay for a planar quad in 3D space to generate two triangles is more than welcome!
Edit 2: Unfortunately, this is not the answer in 3D-space (only applicable in 2D).
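To address the edit: for a convex planar quad, the Delaunay (empty circumcircle) choice of diagonal coincides with the diagonal that maximizes the smallest triangle angle, and angles can be measured directly on the 3D points since the quad lies in a plane. A hedged sketch of that test (my own code, not Maya's):

```python
import numpy as np

def _min_angle(a, b, c):
    """Smallest interior angle (radians) of triangle abc; points are 3D vectors."""
    def angle(p, q, r):
        u, v = q - p, r - p
        cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
        return np.arccos(np.clip(cosang, -1.0, 1.0))
    return min(angle(a, b, c), angle(b, c, a), angle(c, a, b))

def triangulate_quad(p0, p1, p2, p3):
    """Split a convex planar quad p0-p1-p2-p3 (given in order) into two triangles.

    Uses the max-min-angle criterion, which for a convex planar quad agrees
    with the Delaunay choice of diagonal. Returns vertex-index triples.
    """
    p0, p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p0, p1, p2, p3))

    # Option A: diagonal p0-p2 -> triangles (p0, p1, p2) and (p0, p2, p3)
    min_a = min(_min_angle(p0, p1, p2), _min_angle(p0, p2, p3))
    # Option B: diagonal p1-p3 -> triangles (p0, p1, p3) and (p1, p2, p3)
    min_b = min(_min_angle(p0, p1, p3), _min_angle(p1, p2, p3))

    if min_a >= min_b:
        return [(0, 1, 2), (0, 2, 3)]
    return [(0, 1, 3), (1, 2, 3)]
```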
I don't like to second-guess people's intentions, but if you are simply trying to get out of Maya what is shown in the viewport, you can extract Maya's triangulation by starting with MItMeshPolygon::getTriangles.
(The corresponding normals and vertex colours are straightforwardly accessible. UVs require a little more effort -- I don't remember the details (all my Maya code is with my ex-employer), but whilst at first glance it may seem like you don't have the data, in fact it's all there, just not conveniently.)
(One further note -- if your artists try hard enough, they can create polygons that crash Maya when getTriangles is called, even though they render OK and can be manipulated with the UI. This used to happen every few months, so it's worth bearing in mind but probably not worth worrying about too much.)
If you don't want to use the API or Python, then running polyTriangulate before exporting and undoing afterwards (to get back the original polygons) would let you export the triangulated mesh. (You may want or need to save the scene to a temp file, then reload it afterwards and use file to give it its old name back, if your export process does things that are difficult or impossible to undo.)
This is a bit hacky, but you're guaranteed to get the exact triangulation Maya is using. Rather easier than writing your own triangulation code, and almost certainly a LOT easier than trying to work out whatever Maya does internally...
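A rough maya.cmds sketch of that triangulate-export-undo workflow; the mesh name "pCube1" is a placeholder and export_mesh stands in for whatever export code you already have:

```python
import maya.cmds as cmds

mesh = "pCube1"            # placeholder: the mesh you are about to export

# Triangulate so the exported geometry matches what Maya shows in the viewport.
cmds.polyTriangulate(mesh)

export_mesh(mesh)          # hypothetical: your own export routine goes here

# Undo the triangulation to get the original polygons back.
cmds.undo()
```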
Jonathan Shewchuk has a very popular 2D triangulation tool called Triangle, and a 3D version should appear soon. He also has a number of papers on this topic that might be of use.
You might try looking at Voronoi and Delaunay Techniques by Henrik Zimmer. I don't know if it's what Maya uses, but the paper describes some common techniques.
Here you can find an applet that demonstrates the Incremental, Gift Wrap, Divide and Conquer and QuickHull algorithms computing the Delaunay triangulation in 3D. Pointers to each algorithm are provided.
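If you just need a Delaunay triangulation of 3D points in code (which is not necessarily what Maya uses), SciPy wraps Qhull and hands you tetrahedra directly; a tiny example:

```python
import numpy as np
from scipy.spatial import Delaunay

points = np.random.rand(20, 3)      # 20 random points in 3D
tri = Delaunay(points)              # in 3D this yields tetrahedra, not triangles
print(tri.simplices.shape)          # (n_tetrahedra, 4) -- indices into `points`
```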

Template matching algorithms

Please suggest any template matching algorithms that are invariant to size (scale) and rotation.
(Any source code as examples, if possible, please.)
EDIT 1:
Actually, I understand how the algorithm works: we can resize the template and rotate it. That is computationally expensive, but we can use image pyramids. The real problem for me now is when the picture is taken at an angle to the object, so that only a perspective transform can correct the image. I mean that even if we rotate or scale the image, we will not get a good match if the object in the image is perspectively transformed. Of course, it is possible to generate many templates at different perspectives, but I think that is a very bad idea. (A brute-force sketch of the resize-and-rotate approach is included after EDIT 3 below.)
EDIT 2:
One more problem when using template matching based on shape matching: what if the image doesn't have many sharp edges? For example, a plate or dish.
EDIT 3:
I've also heard about camera calibration for object detection. What is the algorithm used for that purpose? I don't understand how it can be used for template matching.
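A brute-force sketch of the resize-and-rotate idea from EDIT 1: no pyramids, the grid of scales and angles is chosen arbitrarily, both images are assumed to be grayscale uint8, and rotating inside the template's own bounding box crops its corners (pad the template first if that matters):

```python
import cv2

def match_scaled_rotated(image, template,
                         scales=(0.5, 0.75, 1.0, 1.25),
                         angles=range(0, 360, 15)):
    """Brute-force template matching over a grid of scales and rotations.

    Returns (best_score, best_location, best_scale, best_angle). Expensive;
    image pyramids and a coarse-to-fine search would speed this up a lot.
    """
    best = (-1.0, None, None, None)
    th, tw = template.shape[:2]
    for s in scales:
        scaled = cv2.resize(template, (int(tw * s), int(th * s)))
        sh, sw = scaled.shape[:2]
        if sh > image.shape[0] or sw > image.shape[1]:
            continue  # template must not be larger than the search image
        center = (sw / 2, sh / 2)
        for a in angles:
            M = cv2.getRotationMatrix2D(center, a, 1.0)
            rotated = cv2.warpAffine(scaled, M, (sw, sh))
            res = cv2.matchTemplate(image, rotated, cv2.TM_CCOEFF_NORMED)
            _, max_val, _, max_loc = cv2.minMaxLoc(res)
            if max_val > best[0]:
                best = (max_val, max_loc, s, a)
    return best
```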
I don't think there is an efficient template matching algorithm that is affine-invariant (rotation+scale+translation).
You can make template matching somewhat robust to scale and rotation by using a distance transform (see Chamfer-style methods). You should probably also look at SIFT and MSER to get a sense of how the research area has evolved over the past decade. But these are not template matching algorithms.
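A rough sketch of the distance-transform (Chamfer-style) idea; the Canny thresholds are arbitrary, and lower scores mean better matches:

```python
import cv2
import numpy as np

def chamfer_score_map(image, template, canny_lo=50, canny_hi=150):
    """Chamfer-style matching: for each placement, average distance from the
    template's edge pixels to the nearest edge in the search image.

    Both inputs are expected to be grayscale uint8. Small shifts and slight
    deformations are tolerated because distances degrade gracefully.
    """
    # Edge maps of both images.
    img_edges = cv2.Canny(image, canny_lo, canny_hi)
    tpl_edges = cv2.Canny(template, canny_lo, canny_hi)

    # Distance transform: each pixel holds its distance to the nearest image edge.
    dist = cv2.distanceTransform(cv2.bitwise_not(img_edges), cv2.DIST_L2, 3)

    # Correlating the distance map with the template's edge mask sums, for each
    # placement, the distances from the template edge pixels to image edges.
    kernel = (tpl_edges > 0).astype(np.float32)
    n_edge_pixels = max(kernel.sum(), 1.0)
    scores = cv2.filter2D(dist, -1, kernel / n_edge_pixels)
    return scores  # the argmin of this map is (roughly) the best match centre
```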
Check out this recent 2013 paper on efficient affine template matching: "Fast-Match". http://www.eng.tau.ac.il/~simonk/FastMatch/
Matlab code is available on that website. The basic idea is to exhaustively search the affine space, but do it in the sparsest way possible based on how smooth the image is. It has a formal approximation guarantee, although it won't always find the absolute best answer.
