I just used an open-source implementation of ORB.
How can I extend ORB by adding new modules?
What can I do on my end to get better results than just running ORB as-is?
I am thinking of using RANSAC to eliminate the outliers and get better results.
I am stuck at this point, waiting for ideas on how to take the ORB pipeline further.
Any ideas on homography estimation for circular and triangular shapes?
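To give an idea of what I mean, this is roughly the pipeline I have in mind: a minimal OpenCV sketch of ORB matching followed by a RANSAC-estimated homography to reject outlier matches (file names and parameter values are placeholders, not anything I have tested):

```python
import cv2
import numpy as np

img1 = cv2.imread("query.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance for ORB's binary descriptors, cross-check for symmetry.
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(des1, des2), key=lambda m: m.distance)

# RANSAC-based homography estimation rejects the remaining outlier matches.
src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
inliers = [m for m, keep in zip(matches, mask.ravel()) if keep]
```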
I am trying to find the equation I would need to use in order to implement a Least Squares Kernel classifier for a dataset with N samples of feature length d. I have the kernel function k(x_i, x_j), and I need the equation to plug it into to get the length-d vector used to classify future data. No matter where I look or google, and although there are dozens of PowerPoints and PDFs that seem to give me almost what I'm looking for, I can't find a resource that gives me a straight answer.
Note: I am not looking for a programming-language tool that computes this for me, such as lsqlin, but for the mathematical formula.
Least-squares kernel SVM (what I assume you're actually asking about) is equivalent to kernelized ridge regression. This is the simplest way to implement it, and the solution can be found here, assuming you have the appropriate background.
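In case a reference is hard to track down, the standard kernel ridge regression solution, written with the kernel matrix K, the training targets y, and a regularization parameter λ, is:

```latex
% Dual coefficients, one per training sample:
\alpha = (K + \lambda I)^{-1} y, \qquad K_{ij} = k(x_i, x_j)

% Prediction for a new point x:
f(x) = \sum_{i=1}^{N} \alpha_i \, k(x_i, x)
```

Note that the solution vector α has length N (one coefficient per training sample), not d; an explicit length-d weight vector only exists for the linear kernel, where w = Σ_i α_i x_i.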
I'm trying to implement my own version in MATLAB of the SIFT algorithm. I know that there are a lot of existing implementations but I would like to develop my own version to better understand the whole process.
At the moment I'm able to find the keypoints in the different scale-space levels and octaves, but I don't understand how to map the keypoints found in the higher octaves back to the original octave. Is it done with a simple interpolation?
I read Lowe's original paper and also other tutorials, but I'm missing this point. Any suggestion is greatly appreciated!
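My current guess is that, since each octave is a factor-of-two downsampling of the previous one, the mapping is just a multiplication by 2^octave (the sub-pixel part coming from the quadratic interpolation of the DoG extremum, not from the octave mapping). Something like this rough sketch, assuming octave 0 is the original resolution:

```python
def to_original_resolution(x, y, sigma, octave):
    """Map a keypoint detected in a downsampled octave back to the
    coordinate frame of the full-resolution image (octave 0 assumed
    to be the original resolution; names are made up)."""
    factor = 2.0 ** octave
    return x * factor, y * factor, sigma * factor
```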
I am working on a project which determines the indoor position of an object which moves in 3D space (e.g. a quadcopter).
I have built some prototypes which use a combination of a gyroscope, an accelerometer and a compass. However, the results were far from satisfactory, especially for the distance travelled, which I calculated using the accelerometer. Determining the orientation by fusing the gyroscope and the compass was close to perfect.
In my opinion I am missing some additional sensors to get acceptable results. Which additional sensors would I need for my purpose? I was thinking about adding one or more infrared cameras / distance sensors. I have never worked with such sensors and I am not sure which would lead to better results.
I appreciate any suggestions, ideas and experiences.
Distance checking would definitely help. The whole algorithm of any land survey is based on the concept of a start/finish check. You know the starting point, then you add error-prone steps, and you arrive at a finishing point that you also know. But along the way you have accumulated some total error. You then distribute that error among all the steps taken, with the opposite sign, of course.
What is interesting is that in most cases you not only reduce the effect of random mistakes, but almost eliminate the systematic ones, because they are mostly linear or close to linear, and such a linear distribution of the found error simply cancels them out.
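As a toy illustration of that closure-error idea (the numbers are made up, and the error is spread uniformly; a real adjustment would weight by step length):

```python
import numpy as np

# Measured 2D displacement of each step (dead reckoning), with drift.
steps = np.array([[1.0, 0.1], [1.1, -0.2], [0.9, 0.15], [1.05, 0.0]])

start = np.array([0.0, 0.0])
known_finish = np.array([4.0, 0.0])   # independently known end point

# Total error accumulated along the way.
closure_error = (start + steps.sum(axis=0)) - known_finish

# Distribute the error over all steps with the opposite sign.
adjusted_steps = steps - closure_error / len(steps)

# After adjustment the path closes exactly on the known finish.
assert np.allclose(start + adjusted_steps.sum(axis=0), known_finish)
```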
That is only the illustrative idea. Any non-trivial task will involve collecting all the data, finding their dependencies, linearizing them, and building parametric or correlate systems of equations. By solving them you get the optimal corrections to the measured values. With the parametric method you can also easily find the approximate errors of these adjusted values.
The foundation of these methods is Gauss's method of least squares. The more concrete procedures can be found in old books on geodesy/geomatics/triangulation/geodetic networks. The books written after the introduction of GPS are of little use, because everything was drastically simplified by it. Look for the books with matrix formulas for least-squares solutions.
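For reference, the matrix form those books give for the parametric (Gauss) least-squares adjustment, with design matrix A, weight matrix P, and misclosure vector l, is the usual normal-equation solution:

```latex
\hat{x} = \left(A^{\top} P A\right)^{-1} A^{\top} P\, l
```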
Sorry if I have translated some of the terms into English incorrectly.
I am wrapping my head around feature detection algorithms. I've studied the options that I have: SIFT, SURF, BRISK, FREAK, etc. All of them seem fairly complex in terms of the underlying mathematics. Instead, I want to take one step at a time, so I am looking for a simple method which doesn't need to be as good as SURF, for example. Which algorithm would you recommend to learn and implement?
The first thing to keep in mind is the difference between a detector and a descriptor. A detector is an algorithm for detecting interest points in an image, which are typically either corners or centers of blob-like structures. Then, if you need to match these points across images, you compute descriptors, which are vectors of values that represent the patches around the interest points.
This should help clear up some confusion. For example, "good features to track", aka the min-eigen corner detector, is an interest point detector. FREAK is a feature descriptor. SIFT, SURF, and BRISK include both a detector and a descriptor. In general, however, you can mix and match detectors and descriptors.
So for starters, you should look at the corner detectors like GFTT and Harris, and also the Laplacian blob detector. Most of the more recent interest point detectors are faster ways of detecting corners or blobs.
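For example, the Shi-Tomasi / GFTT detector is essentially a one-liner in OpenCV; the parameter values below are just illustrative:

```python
import cv2

gray = cv2.imread("image.png", cv2.IMREAD_GRAYSCALE)

# Up to 500 corners, quality threshold relative to the strongest corner,
# and a minimum spacing of 10 pixels between accepted corners.
corners = cv2.goodFeaturesToTrack(gray, maxCorners=500,
                                  qualityLevel=0.01, minDistance=10)
```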
For descriptors, start with SIFT. It may seem a bit scary, but this was the first descriptor that really worked, and it is the inspiration and the benchmark for all the others.
If you are trying to start simple, then perhaps the simplest feature descriptor is to take an NxN square around the detected feature and concatenate all the pixel values. This doesn't work well in practice because it is very sensitive to small changes in lighting, rotation, scale, etc., but you can test your implementation with two translated versions of the same image.
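A toy version of that patch descriptor (assuming a grayscale image, an odd N, and keypoints away from the image border):

```python
import numpy as np

def patch_descriptor(image, x, y, N=9):
    """Return the raw NxN patch around (x, y) flattened into a vector.
    Very sensitive to lighting/rotation/scale, but fine as a sanity
    check on two translated copies of the same image."""
    r = N // 2
    patch = image[y - r:y + r + 1, x - r:x + r + 1].astype(np.float32)
    return patch.ravel()

def match_score(d1, d2):
    # Sum of squared differences: lower means more similar.
    return float(np.sum((d1 - d2) ** 2))
```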
The simplest feature descriptor that "actually works" seems to be the BRIEF descriptor (http://cvlabwww.epfl.ch/~lepetit/papers/calonder_eccv10.pdf), which randomly compares pairs of nearby pixel values to build up a binary descriptor. Note that it isn't scale- or rotation-invariant however: for that you need one of the many extensions such as AKAZE, BRISK, FREAK, or ORB.
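A bare-bones, BRIEF-like test (not the actual sampling pattern from the paper, just the idea of fixed random pixel-pair comparisons producing a binary descriptor):

```python
import numpy as np

rng = np.random.default_rng(0)
# 256 random pairs of offsets inside a 31x31 patch, generated once and reused.
pairs = rng.integers(-15, 16, size=(256, 2, 2))

def brief_like(image, x, y):
    """Binary descriptor: bit i is 1 if the first sampled pixel of pair i
    is darker than the second. The image should be smoothed beforehand."""
    bits = [int(image[y + dy1, x + dx1] < image[y + dy2, x + dx2])
            for (dy1, dx1), (dy2, dx2) in pairs]
    return np.packbits(bits)

def hamming(d1, d2):
    # Descriptors are compared with the Hamming distance.
    return int(np.unpackbits(d1 ^ d2).sum())
```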
I think you can try GFTT: Good Features to Track, based on the Shi-Tomasi definitions and equations. It's very old and, I think, easy to read too.
In my opinion, SIFT. See vlfeat.org for the code they developed which is free to use, and they have several tutorials for an easy implementation.
Hi, I need to perform a singular value decomposition on large dense square matrices using MapReduce.
I have already checked the Mahout project, but what they provide is a TSQR algorithm:
http://arbenson.github.io/portfolio/Math221/AustinBenson-math221-report.pdf
The problem is that I want the full rank, and this method does not work in that case.
The distributed Lanczos SVD implementation they were using before does not suit my case either.
I found that the two-sided Jacobi scheme could be used for this purpose, but I did not manage to find any available implementation.
Does anybody know if and where I can find a reference code?
If it helps, look at the Spark library (MLlib). It has an SVD implementation. You can use it directly, or use it as a reference to write your own.
https://spark.apache.org/docs/latest/mllib-dimensionality-reduction.html
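For example, the RowMatrix.computeSVD call documented at that link looks roughly like this (the data below is just a placeholder, and you should check whether it scales to your dense square case, since MLlib mainly targets tall-and-skinny matrices):

```python
from pyspark import SparkContext
from pyspark.mllib.linalg import Vectors
from pyspark.mllib.linalg.distributed import RowMatrix

sc = SparkContext(appName="svd-example")

# Placeholder data: each row of the matrix becomes one RDD element.
rows = sc.parallelize([
    Vectors.dense([1.0, 2.0, 3.0]),
    Vectors.dense([4.0, 5.0, 6.0]),
    Vectors.dense([7.0, 8.0, 9.0]),
])

mat = RowMatrix(rows)
# k = number of singular values to keep (here the full rank of the toy matrix).
svd = mat.computeSVD(3, computeU=True)
U, s, V = svd.U, svd.s, svd.V   # U: RowMatrix, s: vector, V: local dense matrix
```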