Find Focal Length with DLT Algorithm, MATLAB Paper Implementation - rotation

I'm trying to implement this paper in MATLAB:
Geometry-Based Camera Calibration Using Five-Point Correspondences From a Single Image
I think I have correctly found the camera position, but I'm still doubtful about the rotation matrix, and I can't understand how to apply the DLT algorithm to find the focal length.
Since the focal length is just one parameter, and all the other parameters are known, how should the DLT matrix be formed?
Here is my code, with the images I used.
Code from GitHub
If someone could look into it, or suggest a way to use the DLT, it would be appreciated.
To make it work, I found the focal length by treating equations [5] and [6] of the paper as a linear system.
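Concretely, with the rotation and camera centre fixed, that reduces to a one-unknown linear least-squares problem rather than a full DLT. A minimal MATLAB sketch, assuming R (3x3 rotation), C (3x1 camera centre), principal point (cx, cy), world points X (3xN) and pixel coordinates uv (2xN); these variable names are illustrative, not from the paper:

```matlab
% With everything but f known, each correspondence gives two equations
% linear in f:  u - cx = f * xc/zc  and  v - cy = f * yc/zc,
% where [xc; yc; zc] = R * (X - C) are the camera-frame coordinates.
Xc = R * (X - C);                                % camera-frame points, 3xN
A  = [Xc(1,:) ./ Xc(3,:), Xc(2,:) ./ Xc(3,:)]';  % stacked coefficients, 2Nx1
b  = [uv(1,:) - cx,       uv(2,:) - cy]';        % stacked observations, 2Nx1
f  = A \ b                                       % least-squares focal length (pixels)
```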

Related

Is it accurate to calculate the focal length (in pixels) based on the real focal length (in millimeters) and the sensor size?

I want to find the focal length of a camera. I have its physical sensor size and physical focal length. I know I can use a calibration pattern to estimate the focal length, but I can also compute it as physical_focal_length / (sensor_size / resolution). I want to know which one is more accurate.
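In numbers, the second option is just a unit conversion. A small MATLAB example with made-up values; only the formula itself comes from the question:

```matlab
f_mm      = 4.0;    % physical focal length, millimetres (example value)
sensor_mm = 6.17;   % sensor width, millimetres (example value)
res_px    = 4032;   % image width, pixels (example value)
f_px      = f_mm / (sensor_mm / res_px)  % = f_mm * res_px / sensor_mm, ~2614 px
```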
Thanks.
YL
You only talk about focal length. The camera matrix contains values for the optical center...
And there are lens distortion coefficients. You can rarely calculate those. You'd need to know a lot about the design of your lens.
Calibration can be better or worse than calculating from nominal/design values.
It's better if done right. It can easily be worse.
Nominal design values are a good default and starting state.
When actually building the camera and lens, there can be slight differences to the design parameters. That is why calibration is important.
Either way, you should check any solution: given an object of known size (a ruler) at a known distance (...), does it appear at the size in pixels that you would calculate from the known size and distance? How closely?
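A hedged MATLAB version of that sanity check, with made-up numbers rather than measured values:

```matlab
f_px   = 1200;   % focal length in pixels, from calibration (example value)
obj_m  = 0.30;   % ruler length in metres (example value)
dist_m = 2.0;    % distance from the camera in metres (example value)
expected_px = f_px * obj_m / dist_m  % pinhole model: f * size / distance = 180 px
% Compare expected_px against the measured pixel extent in the image.
```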

Uncertainty on pose estimate when minimizing measurement errors

Let's say I want to estimate the camera pose for a given image I, and I have a set of measurements (e.g. 2D points u_i and their associated 3D coordinates P_i) for which I want to minimize the error (e.g. the sum of squared reprojection errors).
My question is: how do I compute the uncertainty on my final pose estimate?
To make my question more concrete, consider an image I from which I extracted 2D points u_i and matched them with 3D points P_i. Denote by T_w the camera pose for this image, which I will be estimating, and by pi_T the transformation mapping the 3D points to their projected 2D points.
My objective statement is as follows: find the pose that minimizes the sum of squared reprojection errors,
T_w* = argmin_T sum_i || u_i - pi_T(P_i) ||^2
There exist several techniques to solve the corresponding non-linear least-squares problem; suppose I use the following (approximate pseudo-code for the Gauss-Newton algorithm):
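Something like this minimal MATLAB sketch, where residuals_and_jacobian and update_pose are placeholder helpers (not from the original post): the first stacks the 2N reprojection residuals r and their Jacobian Jr with respect to the 6 pose parameters, the second applies the increment on the pose manifold (e.g. an se(3) retraction). T is the current pose estimate, u and P the measurements, max_iters a fixed iteration cap.

```matlab
for iter = 1:max_iters
    [r, Jr] = residuals_and_jacobian(T, u, P);  % residuals at the current pose
    delta   = -(Jr' * Jr) \ (Jr' * r);          % Gauss-Newton normal equations
    T       = update_pose(T, delta);            % apply the increment to the pose
    if norm(delta) < 1e-8, break; end           % stop when the step is tiny
end
```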
I read in several places that Jr^T * Jr could be considered an estimate of the covariance matrix for the pose estimate. Here is a list of more precise questions:
Can anyone explain why this is the case, and/or point to a scientific document explaining it in detail?
Should I be using the value of Jr from the last iteration, or should the successive Jr^T * Jr be combined somehow?
Some people say that this is actually an optimistic estimate of the uncertainty, so what would be a better way to estimate the uncertainty?
Thanks a lot, any insight on this will be appreciated.
The full mathematical argument is rather involved, but in a nutshell it goes like this:
The product J^T * J of the Jacobian matrix of the reprojection error at the optimum with itself is an approximation of the Hessian matrix of the least-squares error. The approximation ignores terms of order three and higher in the Taylor expansion of the error function at the optimum. See here (pp. 800-801) for a proof.
The inverse of that Hessian matrix is an approximation of the covariance matrix of the estimated parameters in a neighborhood of their optimal values, under a local linear approximation of the parameters-to-errors transformation (p. 814 of the above reference).
I do not know where the "optimistic" comment comes from. The main assumption underlying the approximation is that the cost function (the reprojection error) is approximately quadratic in a small neighborhood of the optimum.
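In MATLAB terms, the recipe described above looks like this; Jr and r are assumed to be the Jacobian (2N x 6) and residual vector (2N x 1) from the last iteration, and the i.i.d. Gaussian pixel-noise assumption is mine, not the answer's:

```matlab
[m, n] = size(Jr);                 % m = 2N residuals, n = 6 pose parameters
sigma2 = (r' * r) / (m - n);       % unbiased noise-variance estimate
Sigma  = sigma2 * inv(Jr' * Jr);   % 6x6 approximate covariance of the pose
sigmas = sqrt(diag(Sigma))         % 1-sigma uncertainty per pose parameter
```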

Algorithm to check if a polygon is a projection of a polyhedron

I am trying to develop an algorithm that performs the following :
Given a 2D polygon and a 3D polyhedron, determine whether the 2D polygon is a projection of the 3D polyhedron (a perspective projection, to be precise), without knowing which transformation matrix may have been used for the projection.
input
{2D Polygon}
{3D Polyhedron}
output
{bool} whether or not it's a perspective projection
I am not asking for code, but I would simply like to know if this is feasible in polynomial time.
Any help will be greatly appreciated.
A 3D to 2D perspective projection has 7 degrees of freedom (6 for the relative motion of the scene with respect to the camera, 1 for the focal length).
Select four vertices in the 2D projection and consider all possible correspondences with polyhedron vertices (there is a polynomial number of such associations). Then form a system of 7 equations in the 7 unknown parameters (unfortunately a nonlinear one; maybe the eighth equation can be useful to select among multiple solutions).
Knowing the parameters, you can check a solution by re-projecting the polyhedron and comparing to the polygon (with further search for correspondences with vertices and edges).
All of this will take polynomial time (quartic if I am right), if one admits that the solver takes bounded time (hence bounded precision).
If the focal length is known, then a better approach is possible. Indeed, with only 6 unknowns, you can find the projection parameters from the projection of just three points. This problem is known to have an analytical solution (actually up to 4 of them), as described at length in "New Algorithms for the Perspective-Three-Point Problem, GAO Xiaoshan & CHEN Hangfei, Vol.16 No.3 J. Comput. Sci. & Technol."
This should lead to an O(N³) exact procedure.
More generally, you form putative correspondences between N pairs of points, solve the corresponding Perspective-N-Point problem, and validate the hypothesis by reprojecting the polyhedron and comparing it to the known projection.
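A MATLAB sketch of that verification step, assuming a candidate solution (K, R, t), polyhedron vertices V (3xM) and polygon vertices poly2d (2xK); all names are illustrative, and occlusion/silhouette subtleties are ignored here:

```matlab
tol = 2;                                     % pixel tolerance (illustrative)
P   = K * [R, t];                            % 3x4 candidate camera matrix
Xh  = P * [V; ones(1, size(V, 2))];          % project in homogeneous coordinates
uv  = Xh(1:2, :) ./ Xh(3, :);                % perspective divide
ok  = true;
for i = 1:size(uv, 2)
    d  = min(vecnorm(poly2d - uv(:, i)));    % distance to nearest polygon vertex
    ok = ok && (d < tol);                    % every projected vertex must match
end
```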
Just an idea for an algorithm:
Take a triangle of the projection made of three neighboring points that are not on the same line. Iterate through all corresponding triangles of the original. For each possible projection that solves the pair of triangles, check whether the rest matches.
I must admit I am not sure right now whether there could be infinitely many solutions for triangles (which would be hard to iterate over). If so, start with four points.
I think it is possible, but you have to do a fair amount of reverse engineering. A 2D sketch that represents a 3D object is known as an orthographic projection. The link shows you the transformation matrices you need to apply to transform a 3D point onto its 2D projection. Now, how do you go the opposite way? Inverse matrices, combined with some inverse transformations (translation, scaling, rotation...)? I think this is a good lead to follow.

Essential Matrix from 8 points algorithm

In "Multiple View Geometry in Computer Vision" by R. Hartley and A. Zisserman, chapter 11 (on computing the fundamental matrix), one can read:
"11.7.3 The calibrated case
In the case of calibrated cameras normalized image coordinates may be used, and the essential matrix E computed instead of the fundamental matrix"
Does this mean that if I have proper intrinsic camera matrices (is that what "calibrated" means in this case?) I can compute the essential matrix directly (using the 8-point algorithm), skipping the fundamental matrix?
And can I then get the matrices R and T from the computed essential matrix to reconstruct the 3D model?
Regards,
Artik
Short answer: yes. See also the longer explanation on Wikipedia.
From your correspondences, using the 8-point algorithm you obtain the fundamental matrix F.
From the relation E = K'^T F K, assuming you know both K' and K (if both images were taken by the same camera, K' = K), you can compute E.
From E you get four possible camera pairs (P_0, P_0'), (P_1, P_1'), (P_2, P_2'), (P_3, P_3'). Only one of these pairs satisfies the positive-depth constraint (i.e. the 3D points lie in front of both cameras).
That pair will be your cameras.
Hope this helps!
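A minimal MATLAB sketch of both steps, following the standard SVD decomposition of E (see Hartley & Zisserman, section 9.6); F, K1 and K2 are assumed given, with K1 = K2 for a single camera:

```matlab
E = K2' * F * K1;                 % essential matrix from the relation above
[U, ~, V] = svd(E);
W  = [0 -1 0; 1 0 0; 0 0 1];
R1 = U * W  * V';  if det(R1) < 0, R1 = -R1; end  % enforce proper rotations
R2 = U * W' * V';  if det(R2) < 0, R2 = -R2; end
t  = U(:, 3);                     % translation, up to scale and sign
% The four candidates are (R1, t), (R1, -t), (R2, t), (R2, -t); keep the one
% whose triangulated points have positive depth in both cameras.
```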
In general, a calibrated camera in visual odometry refers to a camera for which the intrinsic matrix is known.
In the case of a stereo visual odometry system, I typically take it to mean that the intrinsic matrix is known for both cameras; however, some of my co-workers take it to mean that the rotation and translation between the two cameras are also known.
In practice, there is hardly any distinction between the two, as you can estimate the intrinsic matrix of a camera using various functions in MATLAB or OpenCV, and given the intrinsic matrix you can determine the rotation and translation between the two cameras.
Furthermore, the derivation of the fundamental matrix relies upon the essential matrix and the intrinsic matrices of the two cameras (which can be the same in the case of monocular visual odometry). This means that it is often the essential matrix that is estimated, and the fundamental matrix is not.
For an explanation of how to get the rotation and translation from the essential matrix, I recommend first watching a YouTube video on Singular Value Decomposition (SVD) and then reading: https://www.researchgate.net/publication/220556161_Visual_Odometry_Tutorial.
Good Luck with Your Studies Young Scholar.

Simplified version of Ray Tracing to find the nearest object extending out from a vector

I have a 2D array consisting of 0s and 1s. I have a vector originating at a 0 cell and pointing in an arbitrary direction, and I need to find the nearest 1 that the ray intersects in that direction, along with its distance.
I've looked into ray tracing, but most materials on the subject seemed rather unintuitive and mostly talked about how to do refraction and color calculations.
Is there something simpler than that algorithm?
Thanks.
You can adapt Bresenham's line algorithm to this task.
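A minimal MATLAB sketch of the idea, using a DDA-style walk in half-cell steps rather than integer Bresenham (equivalent for this purpose); the function name and parameters are mine, not the question's:

```matlab
function d = first_hit(grid, r0, c0, dr, dc)
    % Walk from (r0, c0) along direction (dr, dc) over the 0/1 array `grid`
    % and return the distance to the first cell containing a 1 (Inf if none).
    step = 0.5 * [dr, dc] / norm([dr, dc]);  % half-cell steps avoid skipping cells
    p = [r0, c0];
    while true
        p = p + step;
        r = round(p(1));  c = round(p(2));
        if r < 1 || c < 1 || r > size(grid, 1) || c > size(grid, 2)
            d = Inf;  return;                % the ray left the grid: no hit
        end
        if grid(r, c) == 1
            d = norm(p - [r0, c0]);  return; % distance to the first 1
        end
    end
end
```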
