Algorithm of ellipse fitting in OpenCV

I read the code of ellipse fitting in OpenCV; the following link gives the source code: http://lpaste.net/161378.
I want to know some details about the ellipse-fitting algorithm used in OpenCV, but I cannot find any documentation of it. A comment in the code says "New fitellipse algorithm, contributed by Dr. Daniel Weiss", but I cannot find any paper on ellipse fitting by Dr. Daniel Weiss.
I have some questions about the algorithm:
Why does the algorithm need to re-fit? It first fits for parameters A - E, and then re-fits for parameters A - C using the center coordinates found in the first pass.
An ellipse must satisfy the constraint 4*a*b - c^2 > 0; how does the algorithm enforce it?

I'm wondering about this myself, since I've discovered that the algorithm is bugged. See this bug report: https://github.com/Itseez/opencv/issues/6544
I also tried to find relevant papers by Dr. Daniel Weiss and failed.

You might find this repo useful (it has a pip set-up):
https://github.com/bdhammel/least-squares-ellipse-fitting
It implements an improvement of the Fitzgibbon algorithm (used as a starting point), as described by Halir here:
https://github.com/bdhammel/least-squares-ellipse-fitting/blob/master/media/WSCG98.pdf
I have since tested this a little and it seems to be very effective. Note that the 'example' on the repo home page is out of date; look at the example.py module in the code itself for usage that works (module imports, etc.).
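For reference, here is a minimal NumPy sketch of the Halir & Flusser direct least-squares fit that the repo builds on. This is my own condensed reading of the WSCG98 paper linked above, so treat it as an approximation of the method rather than the repo's exact code:

```python
import numpy as np

def fit_ellipse_direct(x, y):
    """Direct least-squares ellipse fit (Halir & Flusser's numerically stable
    variant of Fitzgibbon's method). Returns the conic coefficients
    [A, B, C, D, E, F] of A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)

    # Quadratic and linear parts of the design matrix.
    D1 = np.column_stack([x * x, x * y, y * y])
    D2 = np.column_stack([x, y, np.ones_like(x)])

    # Scatter matrices.
    S1, S2, S3 = D1.T @ D1, D1.T @ D2, D2.T @ D2

    # Eliminate the linear part and form the reduced 3x3 eigenproblem.
    T = -np.linalg.solve(S3, S2.T)
    M = S1 + S2 @ T
    # Multiply by the inverse of the constraint matrix [[0,0,2],[0,-1,0],[2,0,0]].
    M = np.array([M[2] / 2.0, -M[1], M[0] / 2.0])

    # The eigenvector satisfying 4*A*C - B^2 > 0 is the ellipse solution,
    # which is how this family of fitters guarantees an ellipse rather than
    # another conic.
    eigval, eigvec = np.linalg.eig(M)
    cond = 4 * eigvec[0] * eigvec[2] - eigvec[1] ** 2
    a1 = eigvec[:, cond > 0][:, 0]

    # Recover the linear coefficients from the quadratic ones.
    return np.concatenate([a1, T @ a1])
```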

The documentation of the OpenCV function cv::fitEllipse mentions this paper:
Andrew W Fitzgibbon and Robert B Fisher. A buyer's guide to conic fitting. In Proceedings of the 6th British conference on Machine vision (Vol. 2), pages 513–522. BMVA Press, 1995.
Also related link: OpenCV Ellipse fitting: extract parameters.
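If you only need to call the function rather than understand its internals, a minimal usage sketch looks like this (the points are made up for illustration; fitEllipse requires at least 5 points):

```python
import cv2
import numpy as np

# fitEllipse expects an Nx2 (or Nx1x2) array of int32 or float32 points and
# returns a rotated rectangle: (center, axis lengths, angle in degrees).
pts = np.array([[50, 10], [90, 40], [80, 90], [40, 95], [12, 50], [20, 20]],
               dtype=np.float32)
(cx, cy), (major, minor), angle = cv2.fitEllipse(pts)
print(cx, cy, major, minor, angle)
```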

Related

Image segmentation with watershed thresholding

I have implemented the marker-less watershed algorithm (so not the one in OpenCV) proposed in the 1991 paper by Vincent and Soille.
I have also implemented a distance-transform algorithm, which I apply before watershedding.
It works well in a good number of cases, but sometimes it produces a little over-segmentation. I have already corrected some of this by Gaussian-filtering the distance-transform image.
I am planning to correct the rest by applying thresholding to the watersheds, i.e. keeping only watersheds whose height exceeds a threshold (see the sketch after the notes below).
Considering that this paper is quite old (1991), I am wondering if anyone knows of papers or resources that describe something similar to what I intend to do.
Notes:
1) I am not using OpenCV; I am implementing everything myself from the papers.
2) I am going for marker-less watershedding.
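To make the thresholding idea concrete, here is a rough sketch of the pre-processing described above. SciPy is used only for illustration (the steps are implemented by hand in my case), and the reconstruction-based suppression is one plausible way, under my assumptions, of "keeping only watersheds above a threshold":

```python
import numpy as np
from scipy import ndimage as ndi

def preprocess_for_watershed(binary_mask, sigma=2.0, h=3.0):
    """Distance transform + Gaussian smoothing + h-maxima-style suppression:
    peaks shallower than h are flattened, so the subsequent (marker-less)
    watershed does not open a separate basin for them."""
    dist = ndi.distance_transform_edt(binary_mask)
    dist = ndi.gaussian_filter(dist, sigma=sigma)

    # Grey-scale reconstruction by dilation of (dist - h) under dist.
    seed = np.clip(dist - h, 0, None)
    rec = seed.copy()
    while True:
        dilated = np.minimum(ndi.grey_dilation(rec, size=(3, 3)), dist)
        if np.array_equal(dilated, rec):
            break
        rec = dilated
    return rec  # run the watershed on the inverted relief, e.g. -rec
```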

PMVS definition of "n-adjacent"

I am currently reading Yasutaka Furukawa et al.'s paper "Accurate, Dense, and Robust Multi-View Stereopsis" (PDF available here), where they describe an MVS algorithm for reconstructing a 3D point cloud from images.
I understand the concepts and the main steps, but there is one detail that I am struggling with. This may be because I am not a native English speaker, so maybe a small hint would be enough.
On page 4 of the linked source, in chapter 3.2 "Expansion", there is the definition of "n-adjacent" patches:
|(c(p)−c(p'))·n(p)|+|(c(p)−c(p'))·n(p')| < 2ρ_2
My question is about ρ_2, which is described as follows:
[...] ρ_2 is determined automatically as the distance at the depth of the
midpoint of c(p) and c(p') corresponding to an image displacement of β1 pixels
in R(p).
I do not understand what "distance" means in this context, and I do not understand the stated correspondence to the image displacement.
I know that this is a very specific question, but since this paper is fairly popular I hoped that somebody here might be able to help me.
Alright, I think I get it now.
It simply means that ρ_2 is the distance you have to move within a plane, located as far away from the camera (in depth) as the midpoint of c(p) and c(p'), so that you get a displacement of β1 pixels in the image showing the scene.
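In code, under a simple pinhole-camera assumption (my own simplification of the paper's definition; focal_length_px is a hypothetical parameter meaning the focal length expressed in pixels), the conversion looks like this:

```python
def pixel_displacement_to_world_distance(beta1_pixels, depth, focal_length_px):
    """A displacement of beta1 pixels, observed on a plane at the given depth
    in front of a pinhole camera, corresponds to this world-space distance.
    Illustrative only; not PMVS's exact implementation."""
    return beta1_pixels * depth / focal_length_px

# Example: beta1 = 2 px, depth = 3 m, focal length = 1500 px -> rho_2 = 0.004 m
print(pixel_displacement_to_world_distance(2, 3.0, 1500))
```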

Match Sketch(Drawing) face photo to digital color photo

I am going to match sketched faces (drawing photos) to color photos. For my research, I want to find out what the challenges are in matching sketch drawings to color face photos. So far I have found:
resolution/pixel difference
texture difference
distance difference
and color (not much effect)
I want to know (in technical terms) what other challenges there are, and what OpenCV and JavaCV methods and algorithms are available to overcome those challenges.
Here are some examples of the sketches and the photos that are known to match them:
This problem is called multi-modal face recognition. There has been a lot of interest in comparing a high-quality mugshot (modality 1) to low-quality surveillance images (modality 2); other pairings are frontal images to profiles, or pictures to sketches as the OP is interested in. Partial Least Squares (PLS) and Tied Factor Analysis (TFA) have been used for this purpose.
The key technical step, and a key difficulty, is computing two linear projections, one from each modality, into a common space where two points being close means the images show the same individual. Here are some papers on this approach (a toy sketch of the idea follows the references):
Abhishek Sharma, David W. Jacobs. Bypassing Synthesis: PLS for Face Recognition with Pose, Low-Resolution and Sketch. CVPR 2011.
S. J. D. Prince, J. H. Elder, J. Warrell, F. M. Felisberti. Tied Factor Analysis for Face Recognition across Large Pose Differences. IEEE Trans. Pattern Anal. Mach. Intell., 30(6), 970-984, 2008. (Elder is a specialist in this area and has a variety of papers on the topic.)
B. Klare, Z. Li, A. K. Jain. Matching Forensic Sketches to Mugshot Photos. IEEE Trans. Pattern Analysis and Machine Intelligence, 29 Sept. 2010.
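To make the "two linear projections" idea concrete, here is a toy sketch using scikit-learn's PLSCanonical on made-up feature vectors. It only shows the shape of the approach and does not reproduce any of the papers above:

```python
import numpy as np
from sklearn.cross_decomposition import PLSCanonical
from sklearn.neighbors import NearestNeighbors

# Hypothetical paired training data: sketch_feats[i] and photo_feats[i]
# are feature vectors describing the same person in the two modalities.
rng = np.random.default_rng(0)
sketch_feats = rng.normal(size=(200, 128))
photo_feats = rng.normal(size=(200, 128))

# Learn two linear projections into a shared latent space where paired
# sketch/photo descriptors are maximally correlated.
pls = PLSCanonical(n_components=8)
pls.fit(sketch_feats, photo_feats)

# Project a probe sketch and the photo gallery into that space,
# then match by nearest neighbour.
probe_latent, gallery_latent = pls.transform(sketch_feats[:1], photo_feats)
nn = NearestNeighbors(n_neighbors=1).fit(gallery_latent)
_, idx = nn.kneighbors(probe_latent)
print("best matching gallery photo:", idx[0][0])
```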
As you can understand, this is an active research area/problem. In terms of using OpenCV to overcome the difficulties, let me give you an analogy: you need to build a house (match sketches to photos) and you are asking how having a Stanley hammer (OpenCV) will help. Sure, it will probably help, but you'll also need a lot of other resources: wood, time/money, pipes, cable, etc.
I think that James Elder's old work on the completeness of the edge map (using reconstruction by solving the Laplace equation) is quite relevant here. See the results at the end of this paper: http://elderlab.yorku.ca/~elder/publications/journals/ElderIJCV99.pdf
You could give Eigenfaces a try; though I have never tested them with sketches, I think they could at least be a good starting point for your research.
See the Wikipedia article http://en.wikipedia.org/wiki/Eigenface and the OpenCV tutorial http://docs.opencv.org/modules/contrib/doc/facerec/facerec_tutorial.html (which covers more than just Eigenfaces).
OpenCV can be used for the feature extraction and machine learning required for this task. I guess you can start with the papers in the answers above, pick some basic features, and prototype a classifier with OpenCV.
You might also want to detect and match feature points on the faces. If you take this approach, you will have to build the feature point detectors yourself (training the Viola-Jones detector in OpenCV with your own data is an option).

Derivative of a polygon

I am studying the Boost Polygon library, but I cannot understand how each vertex is generated.
Image: http://imm.io/LlIM
What are the rules for the derivative of a polygon?
The original paper is:
http://www.boost.org/doc/libs/1_52_0/libs/polygon/doc/GTL_boostcon2009.pdf
It's a bit easier to follow in the Manhattan-polygon example in the presentation (page 33). The author later submitted a paper to the CAD journal entitled "Industrial strength polygon clipping: A novel algorithm with applications in VLSI CAD" (I found the PDF using this Google search), which details the derivative process and includes an accompanying figure.
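As a rough illustration of the underlying idea (my own reconstruction using the usual 2D-difference convention, not necessarily the exact sign convention of the paper): the "derivative" of an axis-aligned rectangle is a set of signed impulses at its corners, and "integrating" with cumulative sums recovers the coverage count of the interior.

```python
import numpy as np

W, H = 8, 6
deriv = np.zeros((H, W), dtype=int)

def add_rect_derivative(grid, x1, y1, x2, y2):
    """Place +1/-1 impulses for the half-open rectangle [x1, x2) x [y1, y2)."""
    grid[y1, x1] += 1
    grid[y1, x2] -= 1
    grid[y2, x1] -= 1
    grid[y2, x2] += 1

add_rect_derivative(deriv, 1, 1, 5, 4)

# "Integrate": cumulative sum along x, then along y.
coverage = deriv.cumsum(axis=1).cumsum(axis=0)
print(coverage)  # 1 inside [1, 5) x [1, 4), 0 elsewhere
```

Manhattan polygons decompose into such rectangles, so overlaying the impulses of all rectangles and integrating once gives the overlap count everywhere; as far as I understand it, that is the point of working with the polygon's derivative.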

Face identification with opencv

I'm using the OpenCV libraries for image processing in C++, and this is my question: do you think it is possible to do facial recognition (saying the name of a person based on a database of photos) by comparing a frame from the video camera with the images in a database using histogram comparison? (Note that I compare only the facial region of an image, using an example included in the OpenCV libraries.)
I'm asking this because I have just tried to write such a program, but I have a lot of problems (I often detect the wrong person).
You might want to start by compiling the Face Detection using OpenCV example. As others have pointed out, general facial recognition isn't exactly an easy problem to solve. Eigenfaces is one common technique for face recognition that is fairly easy to understand and implement.
As others have stated, it's a hard problem, but this gives you a place to start.
Some methods I have experience with are:
metric learning for comparing faces
naming video characters: they use SIFT descriptors computed at specific fiducial points on each face. Their code worked quite well for me in the past.
A dataset and benchmark dedicated to this task is Labeled Faces in the Wild. There you can find references to working methods for comparing faces after detection.
UPDATE:
I have a description of an experiment on face clustering: unsupervised face identification.
The experiment is described in Section 4.4 of my thesis.
The basic flow is as follows:
Metric learning: how to determine whether two faces belong to the same person or not.
This part is supervised, in the sense that it requires as input face images labeled with the identity of the person who appears in each photo.
a. Detect fiducial points (eyes, corners of the mouth, nose).
You may use this code, or more recent versions such as this one.
b. Extract SIFT descriptors at the detected fiducial points.
c. Construct a "face descriptor": each face is described by a single vector.
This vector is a concatenation of the square roots of all the SIFT descriptors.
d. Use the method described here to learn a Mahalanobis distance between faces of different persons.
Unsupervised face identification: once a metric has been learned, you may use new photos of new people (these people need not be part of the training set; you may use photos of previously unseen people!).
a. Repeat stages a-c to construct the same "face descriptor" vector for each input face.
b. Compare the descriptor vectors using the learned Mahalanobis distance.
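A minimal sketch of steps (c) and 2(b), assuming the SIFT descriptors and the learned metric matrix M come from the tools linked above:

```python
import numpy as np

def face_descriptor(sift_descriptors):
    """Step (c): concatenate the square roots of all SIFT descriptors
    extracted at the fiducial points into one vector (the square root is
    a Hellinger-style normalisation)."""
    return np.concatenate([np.sqrt(d) for d in sift_descriptors])

def mahalanobis_distance(desc_a, desc_b, M):
    """Step 2(b): compare two face descriptors with a learned Mahalanobis
    metric M (a positive semi-definite matrix from the metric-learning stage)."""
    diff = desc_a - desc_b
    return float(np.sqrt(diff @ M @ diff))
```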
I suggest using an existing algorithm such as the one available in the Luxand FaceSDK: http://www.luxand.com/facesdk/ rather than trying to develop your own.
There are three built-in techniques for face recognition in OpenCV now: PCA (Eigenfaces), LDA (Fisherfaces), and LBPH.
Nice example code:
https://github.com/Itseez/opencv/blob/master/samples/cpp/facerec_demo.cpp
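A minimal Python sketch of using one of the built-in recognizers (the face module lives in the contrib package, e.g. opencv-contrib-python; the training images below are random placeholders that you would replace with cropped, equally sized grayscale faces):

```python
import cv2
import numpy as np

# Made-up training data: 4 grayscale "faces" with labels for 2 people.
faces = [np.random.randint(0, 255, (100, 100), dtype=np.uint8) for _ in range(4)]
labels = np.array([0, 0, 1, 1], dtype=np.int32)

recognizer = cv2.face.LBPHFaceRecognizer_create()  # or EigenFaceRecognizer_create()
recognizer.train(faces, labels)

probe = faces[0]
label, confidence = recognizer.predict(probe)
print(label, confidence)
```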
