I am studying the Boost Polygon library, but I cannot understand how each vertex is generated.
image: http://imm.io/LlIM
What are the rules for the derivative of a polygon?
The original paper is:
http://www.boost.org/doc/libs/1_52_0/libs/polygon/doc/GTL_boostcon2009.pdf
It's a bit easier to follow in the Manhattan-polygon example in the presentation (page 33). The author later submitted a paper to the CAD journal entitled "Industrial strength polygon clipping: A novel algorithm with applications in VLSI CAD" (I found the PDF using this Google search), which details the derivative process as follows:
and the accompanying figure:
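Until you can get hold of the paper, here is a toy numeric sketch of the derivative idea for Manhattan geometry (my own illustration, not the paper's figure or the library's code): differentiating a rectangle's indicator function leaves +1/-1 impulses at its four corners, and integrating the impulses recovers coverage counts.

```python
# Toy sketch of the "derivative of a polygon" idea for axis-aligned
# rectangles only. Each rectangle corner becomes a +1/-1 impulse;
# integrating the impulses over x and y recovers the coverage count at
# a query point. Illustration only, not Boost.Polygon's actual
# data structures or sweepline algorithm.

def rect_derivative(x1, y1, x2, y2):
    # d2/dxdy of the indicator of [x1,x2] x [y1,y2] is +1 at (x1,y1)
    # and (x2,y2), and -1 at (x1,y2) and (x2,y1).
    return [(x1, y1, +1), (x1, y2, -1), (x2, y1, -1), (x2, y2, +1)]

def coverage(impulses, qx, qy):
    # Integrate the impulses over (-inf, qx] x (-inf, qy].
    return sum(s for x, y, s in impulses if x <= qx and y <= qy)

# Two overlapping rectangles
imp = rect_derivative(0, 0, 2, 2) + rect_derivative(1, 1, 3, 3)
print(coverage(imp, 1.5, 1.5))  # point inside both -> 2
print(coverage(imp, 0.5, 0.5))  # inside only the first -> 1
print(coverage(imp, 4, 4))      # outside both -> 0
```

Boolean operations (union, intersection) then become simple threshold tests on these counts, which is the point of the representation.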
I am currently reading Yasutaka Furukawa et al.'s paper "Accurate, Dense, and Robust Multi-View Stereopsis" (PDF available here), where they describe an MVS algorithm for reconstructing a 3D point cloud from images.
I do understand the concepts and the main steps, but there is one detail that I am struggling with. This may be because I am not a native English speaker, so maybe a small hint would be enough.
On page 4 of the linked source, in chapter 3.2 "Expansion", there is the definition of "n-adjacent" patches:
|(c(p)−c(p'))·n(p)|+|(c(p)−c(p'))·n(p')| < 2ρ_2
My question is about ρ_2, which is described as follows:
[...] ρ_2 is determined automatically as the distance at the depth of the
midpoint of c(p) and c(p') corresponding to an image displacement of β1 pixels
in R(p).
I do not understand what "distance" means in this context, and I do not understand the stated correspondence to the image displacement.
I know that this is a very specific question, but since this paper is somewhat popular, I hope that somebody here can help me.
Alright, I think I get it now.
It just means that ρ_2 is the distance you have to move in a plane, located as far away from the camera (in depth) as the midpoint of c(p) and c(p'), such that you get a displacement of β1 pixels in the image showing the scene.
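Under a simple pinhole model that reading can be sketched in a few lines (an assumption for illustration; the paper's cameras are general projective ones): β1 pixels of image displacement, back-projected to a plane at the midpoint's depth, corresponds to a world distance of about β1 · depth / f, with f the focal length in pixels.

```python
import numpy as np

# Hedged sketch of the rho_2 reading above under a pinhole model.
# The focal length in pixels and the use of Euclidean distance to the
# camera centre as "depth" are simplifying assumptions, not the paper's
# exact projective formulation.

def rho2(c_p, c_p2, camera_center, f_pixels, beta1=2.0):
    midpoint = 0.5 * (np.asarray(c_p, float) + np.asarray(c_p2, float))
    # distance of the midpoint from the camera, used as its depth
    depth = np.linalg.norm(midpoint - np.asarray(camera_center, float))
    # beta1 pixels at focal length f span beta1/f radians, i.e. a world
    # distance of roughly beta1 * depth / f at that depth
    return beta1 * depth / f_pixels

# midpoint at depth 11, focal length 1000 px, beta1 = 2 px
print(rho2((0, 0, 10), (0, 0, 12), (0, 0, 0), f_pixels=1000.0))  # 0.022
```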
I have a .xyz file that has irregularly spaced points and gives the position and surface normal of each point (i.e. XYZIJK). Are there algorithms out there that can reconstruct the surface while taking the IJK normal vectors into account? Most algorithms I have found assume that surface normals are unknown.
This would ultimately be used to plot surface error data (from the nominal surface) using Python 3.x, and I'm sure I will have many more follow-on questions once I find a good reconstruction algorithm.
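For what it's worth, getting such a file into arrays with unit normals (which most normal-aware reconstruction codes expect) is a short NumPy sketch; the whitespace-separated six-column layout is an assumption based on the description above.

```python
import io
import numpy as np

# Minimal sketch: parse a whitespace-separated XYZIJK file into point
# positions and unit surface normals. Assumes one point per line with
# six numeric columns; adjust for headers/comments as needed.

def load_xyzijk(f):
    data = np.loadtxt(f)
    points, normals = data[:, :3], data[:, 3:6]
    # normalize the IJK vectors to unit length (reconstruction methods
    # such as Poisson expect consistently oriented unit normals)
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    return points, normals

# tiny in-memory example standing in for a real .xyz file
sample = io.StringIO("0 0 0 0 0 2\n1 0 0 3 0 0\n")
pts, nrm = load_xyzijk(sample)
```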
The state of the art right now is Poisson Surface Reconstruction and its screened variant. Code for both is available, e.g. under http://www.cs.jhu.edu/~misha/Code/PoissonRecon/Version8.0/. It is also implemented in MeshLab if you want to take a quick look.
If you want to take a look at other methods, check out this STAR (state-of-the-art report). Page three has a table of a couple of approaches and their inputs.
I am reading the ellipse-fitting code in OpenCV; the following link gives the source: http://lpaste.net/161378.
I want to know some details about the ellipse-fitting algorithm in OpenCV, but I cannot find any documentation for it. A comment in the code says "New fitellipse algorithm, contributed by Dr. Daniel Weiss", but I cannot find any paper on ellipse fitting by Dr. Daniel Weiss.
I have some questions about the algorithm:
Why does the algorithm need a re-fit? It first fits for parameters A - E, and then re-fits for parameters A - C using the center coordinates just found.
An ellipse must satisfy the constraint 4*a*b - c^2 > 0; how does the algorithm enforce it?
I'm wondering about this myself, since I've discovered that the algorithm is bugged. See this bug report: https://github.com/Itseez/opencv/issues/6544
I tried to find any relevant papers by Dr. Daniel Weiss and failed.
You might find this repo useful (with pip set-up):
https://github.com/bdhammel/least-squares-ellipse-fitting
which works from an upgrade to the Fitzgibbon algorithm (as a starting point), as authored by Halir here:
https://github.com/bdhammel/least-squares-ellipse-fitting/blob/master/media/WSCG98.pdf
I've since tested this a little and it seems to be very effective. Note that the 'example' on the repo home page is out of date -- look at the example.py module in the code itself for working usage (module imports, etc.).
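For reference, the Halir-Flusser refinement of Fitzgibbon's direct least-squares fit is short enough to sketch directly in NumPy (my own illustrative sketch, not the repo's or OpenCV's code). Note how the ellipse constraint 4AC - B^2 > 0 is enforced by construction through the eigenvector selection.

```python
import numpy as np

# Illustrative sketch of the Halir-Flusser stabilized direct
# least-squares ellipse fit (WSCG'98). Returns conic coefficients
# (A, B, C, D, E, F) of A x^2 + B x y + C y^2 + D x + E y + F = 0,
# with the ellipse condition 4AC - B^2 > 0 enforced by picking the
# eigenvector on the ellipse branch. Not the linked repo's code.

def fit_ellipse(x, y):
    D1 = np.column_stack([x * x, x * y, y * y])    # quadratic part
    D2 = np.column_stack([x, y, np.ones_like(x)])  # linear part
    S1, S2, S3 = D1.T @ D1, D1.T @ D2, D2.T @ D2
    T = -np.linalg.solve(S3, S2.T)                 # a2 = T @ a1
    M = S1 + S2 @ T                                # reduced scatter
    M = np.array([M[2] / 2.0, -M[1], M[0] / 2.0])  # premultiply inv(C1)
    _, evecs = np.linalg.eig(M)
    evecs = np.real(evecs)
    # select the eigenvector with 4AC - B^2 > 0 (the ellipse solution)
    cond = 4.0 * evecs[0] * evecs[2] - evecs[1] ** 2
    a1 = evecs[:, cond > 0][:, 0]
    return np.concatenate([a1, T @ a1])

# axis-aligned test ellipse centred at (1, -1) with semi-axes 3 and 2
t = np.linspace(0.0, 2.0 * np.pi, 20, endpoint=False)
A, B, C, D, E, F = fit_ellipse(1 + 3 * np.cos(t), -1 + 2 * np.sin(t))
# centre from the zero-gradient conditions 2Ax + By + D = 0, Bx + 2Cy + E = 0
centre = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
```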
The documentation of the OpenCV function cv::fitEllipse mentions this paper:
Andrew W Fitzgibbon and Robert B Fisher. A buyer's guide to conic fitting. In Proceedings of the 6th British conference on Machine vision (Vol. 2), pages 513–522. BMVA Press, 1995.
Also related link: OpenCV Ellipse fitting: extract parameters.
I want to match a sketch of a face (a drawing) to a color photo. For my research, I want to find out what the challenges are in matching sketch drawings to color face photos. So far I have found:
resolution / pixel difference
texture difference
distance difference
and color (not much effect)
I want to know (in technical terms) what the other challenges are, and what OpenCV and JavaCV methods and algorithms are available to overcome those challenges.
Here is some example of the sketches and the photos that are known to match them:
This problem is called multi-modal face recognition. There has been a lot of interest in comparing a high-quality mugshot (modality 1) to low-quality surveillance images (modality 2); other examples are frontal images to profiles, or photos to sketches, as the OP is interested in. Partial Least Squares (PLS) and Tied Factor Analysis (TFA) have been used for this purpose.
A central difficulty is computing two linear projections, one from each modality, into a common space where two points being close means the individual is the same. This is the key technical step. Here are some papers on this approach:
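As a toy illustration of that "common space" idea, here is a sketch using canonical correlation analysis (CCA), a simpler relative of PLS/TFA chosen only because it fits in a few lines; it is not the algorithm of the papers below.

```python
import numpy as np

# Toy sketch: learn paired linear projections Px, Py that map two
# "modalities" into a common space where matching samples correlate,
# via canonical correlation analysis (CCA). Illustration of the idea
# only, not the PLS or Tied Factor Analysis methods of the papers.

def cca(X, Y, k=1, reg=1e-6):
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])  # regularized covariances
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):  # symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Wx, Wy = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, _, Vt = np.linalg.svd(Wx @ Cxy @ Wy)
    return Wx @ U[:, :k], Wy @ Vt[:k].T  # one projection per modality

# synthetic "two modalities" sharing one latent identity variable z
rng = np.random.default_rng(0)
z = rng.standard_normal(500)
X = np.column_stack([z + 0.1 * rng.standard_normal(500),
                     rng.standard_normal(500)])            # modality 1
Y = np.column_stack([rng.standard_normal(500),
                     z + 0.1 * rng.standard_normal(500)])  # modality 2
Px, Py = cca(X, Y)
r = np.corrcoef((X @ Px).ravel(), (Y @ Py).ravel())[0, 1]
```

After projection, matching samples from the two modalities are highly correlated even though the raw feature spaces differ.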
Abhishek Sharma, David W. Jacobs: Bypassing Synthesis: PLS for Face Recognition with Pose, Low-Resolution and Sketch. CVPR 2011.
S.J.D. Prince, J.H. Elder, J. Warrell, F.M. Felisberti: Tied Factor Analysis for Face Recognition across Large Pose Differences. IEEE Trans. Pattern Anal. Mach. Intell., 30(6), 970-984, 2008. Elder is a specialist in this area and has a variety of papers on the topic.
B. Klare, Z. Li and A. K. Jain: Matching Forensic Sketches to Mugshot Photos. IEEE Trans. Pattern Analysis and Machine Intelligence, 29 Sept. 2010.
As you can understand, this is an active research area/problem. In terms of using OpenCV to overcome the difficulties, let me give you an analogy: you need to build a house (match sketches to photos) and you're asking how having a Stanley hammer (OpenCV) will help. Sure, it will probably help. But you'll also need a lot of other resources: wood, time/money, pipes, cable, etc.
I think that James Elder's old work on the completeness of the edge map (using reconstruction by solving the Laplace equation) is quite relevant here. See the results at the end of this paper: http://elderlab.yorku.ca/~elder/publications/journals/ElderIJCV99.pdf
You could give Eigenfaces a try; though I have never tested them with sketches, I think they could at least be a good starting point for your research.
See Wiki: http://en.wikipedia.org/wiki/Eigenface and the Tutorial for OpenCV: http://docs.opencv.org/modules/contrib/doc/facerec/facerec_tutorial.html (including not only Eigenfaces!)
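At its core, Eigenfaces is just PCA on vectorized images followed by nearest-neighbour matching in the reduced space; here is a self-contained sketch on synthetic data (not OpenCV's facerec implementation).

```python
import numpy as np

# Minimal Eigenfaces-style sketch: PCA on vectorized images, then
# nearest-neighbour matching in the reduced space. Random vectors
# stand in for real face images; not OpenCV's implementation.

def pca_basis(gallery, k):
    mean = gallery.mean(axis=0)
    # SVD of the centred data; rows of Vt are the "eigenfaces"
    _, _, Vt = np.linalg.svd(gallery - mean, full_matrices=False)
    return mean, Vt[:k]

def match(probe, gallery, mean, basis):
    g = (gallery - mean) @ basis.T  # project gallery into face space
    p = (probe - mean) @ basis.T    # project the probe image
    return int(np.argmin(np.linalg.norm(g - p, axis=1)))

rng = np.random.default_rng(1)
gallery = rng.standard_normal((10, 64))  # 10 "identities", 64-pixel images
mean, basis = pca_basis(gallery, k=5)
probe = gallery[3] + 0.05 * rng.standard_normal(64)  # noisy copy of #3
idx = match(probe, gallery, mean, basis)
```

For sketches the raw-pixel assumption is exactly what breaks down, which is why the multi-modal methods in the other answer learn a projection per modality instead of sharing one PCA basis.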
OpenCV can be used for the feature extraction and machine learning required for this task. I guess you can start with the papers in the other answers, try some basic features, and prototype a classifier with OpenCV.
I guess you might also want to detect and match feature points on the faces. If you use this approach, you will have to build the feature point detectors on your own (training the Viola-Jones detector in OpenCV with your own data is an option).
I am interested in reading about and understanding 2D mesh algorithms. A Google search reveals a lot of papers and sources, but most are quite academic and not aimed at beginners.
So, would anyone here recommend any reading sources suitable for beginners, or an open-source implementation that I can learn from? Thanks.
Also, compared to triangular mesh generation, I am more interested in quadrilateral and mixed meshes (quad and tri combined).
I second David's answer regarding Jonathan Shewchuk's site as a good starting point.
In terms of open source software, it depends on what you are looking for exactly.
If you are interested in mesh generation, you can have a look at CGAL's code. Understanding the low level parts of CGAL's code is too much for a beginner. However, having a look at the higher level algorithms can be quite interesting even for a beginner. Also note that the documentation of CGAL is very detailed.
You can also have a look at TetGen, but its source code is monolithic and is not documented (it is more of an end user software rather than a library, even if it can also be called simply from other programs). Still, it is fairly readable, and the user manual contains a short presentation of mesh generation, with some references.
If you are also interested in mesh processing, you can have a look at OpenMesh.
More information about your goals would definitely help in providing more relevant pointers.
The first link on your Google search takes you to Jonathan Shewchuk's site. This is not actually a bad place to start. He has a program called triangle, which you can download for 2D triangulation. On that page there is a link to the references used in creating triangle, including a link to a description of the triangulation algorithm.
There are several approaches to mesh generation. One of the most common is to create a Delaunay triangulation. Triangulating a set of points is fairly simple, and there are several algorithms that do it, including Watson's and Ruppert's, as used in triangle.
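To give a flavour of what is inside such algorithms: the core predicate of Watson-style incremental Delaunay insertion is the empty-circumcircle test, which reduces to the sign of a 3x3 determinant. This is the textbook formulation; triangle uses robust exact-arithmetic versions of these predicates.

```python
import numpy as np

# The "in-circle" predicate at the heart of Watson-style Delaunay
# insertion: does point p lie inside the circumcircle of triangle
# (a, b, c)? Assumes (a, b, c) is in counter-clockwise order. Robust
# implementations (as in triangle) use exact arithmetic instead of
# floating point.

def in_circumcircle(a, b, c, p):
    rows = []
    for q in (a, b, c):
        dx, dy = q[0] - p[0], q[1] - p[1]
        rows.append([dx, dy, dx * dx + dy * dy])
    # positive determinant <=> p strictly inside the circumcircle
    return np.linalg.det(np.array(rows)) > 0.0

# circumcircle of the right triangle (0,0), (1,0), (0,1) has
# centre (0.5, 0.5) and radius sqrt(0.5)
print(in_circumcircle((0, 0), (1, 0), (0, 1), (0.5, 0.5)))  # True
print(in_circumcircle((0, 0), (1, 0), (0, 1), (2.0, 2.0)))  # False
```

In Watson's algorithm, every triangle whose circumcircle contains the newly inserted point is deleted, and the resulting cavity is re-triangulated from the new point.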
When you want to create a constrained triangulation, where the edges of the triangulation match the edges of your input shape, it is a bit harder, because you need to recover certain edges.
I would start by understanding Delaunay triangulation. Then maybe look at some of the other meshing algorithms.
Some of the common topics that you will find in mesh generation papers are
Robustness - that is how to deal with floating point round off errors.
Mesh quality - ensuring the shapes of the triangles/tetrahedra are close to equilateral. Whether this is important depends on why you are creating the mesh; for analysis work it is very important.
How to choose where to insert the nodes in the mesh to give a good mesh distribution.
Meshing speed
Quadrilateral/Hexahedral mesh generation. This is harder than using triangles/tetrahedra.
3D mesh generation is much harder than 2D, so a lot of the papers are on 3D generation.
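As a concrete example of the mesh-quality point above, a simple per-triangle quality measure is the minimum interior angle normalized by 60 degrees, which is 1 for an equilateral triangle and near 0 for a sliver (one of many variants used in the literature).

```python
import math

# Simple per-triangle quality measure: minimum interior angle divided
# by 60 degrees. Equals 1 for an equilateral triangle and approaches 0
# for a degenerate sliver. Meshing papers use many variants (radius
# ratios, aspect ratios, etc.); this is just one common choice.

def min_angle_quality(a, b, c):
    la, lb, lc = math.dist(b, c), math.dist(c, a), math.dist(a, b)

    def angle(opp, s1, s2):  # angle opposite side `opp`, law of cosines
        return math.degrees(math.acos((s1 * s1 + s2 * s2 - opp * opp)
                                      / (2.0 * s1 * s2)))

    return min(angle(la, lb, lc), angle(lb, lc, la), angle(lc, la, lb)) / 60.0

print(round(min_angle_quality((0, 0), (1, 0), (0.5, 3 ** 0.5 / 2)), 3))  # 1.0
print(min_angle_quality((0, 0), (10, 0), (5, 0.1)) < 0.1)  # True (a sliver)
```

Refinement algorithms like Ruppert's insert points precisely to drive measures like this above a chosen bound.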
Mesh generation is a large topic. It would be helpful if you could give some more information on what aspects (e.g. 2D or 3D) you are interested in. If you can give some idea of what you want to do, then maybe I can find some better sources of information.