Calculate distortion parameters using F-theta chart

I have a working fisheye camera for which I have fx, fy values (in pixels and in mm). I also have cx, cy values (in pixels). I have been given an optical distortion value in %, and I also have an F-theta chart.
The F-theta chart contains, for each rotation about the y axis (deg), its corresponding tan displacement and distortion %.
I am not sure how to use this data to undistort the fisheye images, or how to get the k1, k2, k3 and k4 distortion parameters from it.
Best regards
Tahera
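One possible starting point, offered as a hedged sketch rather than a verified recipe: if the chart's distortion column is f-theta distortion relative to the ideal mapping r = f*theta, then each chart row yields a distorted angle theta_d = theta * (1 + distortion/100), and the k1-k4 coefficients of the OpenCV fisheye model theta_d = theta + k1*theta^3 + k2*theta^5 + k3*theta^7 + k4*theta^9 can be fitted by linear least squares. The numeric values below are placeholders, not data from the actual chart:

import numpy as np

# Chart columns (placeholder values): rotation about the y axis in degrees
# and the corresponding distortion in percent.
theta = np.deg2rad([10.0, 20.0, 30.0, 40.0, 50.0, 60.0])
dist_pct = np.array([-0.2, -0.8, -1.9, -3.5, -5.6, -8.2])

# Assumed reading of the chart: distortion% = (theta_d - theta) / theta * 100.
theta_d = theta * (1.0 + dist_pct / 100.0)

# Fit theta_d - theta = k1*theta^3 + k2*theta^5 + k3*theta^7 + k4*theta^9.
A = np.stack([theta**3, theta**5, theta**7, theta**9], axis=1)
k1, k2, k3, k4 = np.linalg.lstsq(A, theta_d - theta, rcond=None)[0]

# The fitted coefficients can then be fed to cv2.fisheye.initUndistortRectifyMap
# together with the camera matrix built from fx, fy, cx, cy.
print(k1, k2, k3, k4)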

Related

Webots camera default parameters like pixel size and focus

I am using two cameras, with no lens or any other settings, in Webots to measure the position of an object. To apply the localization, I need to know the focal length, which is the distance from the camera center to the center of the imaging plane, namely f. I see the focus parameter in the Camera node, but when I leave it NULL as the default, the imaging is still normal, so I assume this parameter has no relation to f. In addition, I need to know the width and height of a pixel in the image, namely dx and dy respectively, but I have no idea how to get this information.
This is the calibration model I used, where c means camera and w means world coordinates. I need to calculate xw, yw, zw from u, v. For an ideal camera, gamma is 0 and u0, v0 are just half of the resolution, so my problem lies in fx and fy.
The first important thing to know is that in Webots pixels are square, therefore dx and dy are equal.
Then in the Camera node you will find a 'fieldOfView' field which gives you the horizontal field of view; using the resolution of the camera you can then compute the vertical field of view too:
2 * atan(tan(fieldOfView * 0.5) / (resolutionX / resolutionY))
Finally, you can also get the near projection plane from the 'near' field of the Camera node.
Note also that Webots cameras are regular OpenGL cameras; you can therefore find more information about the OpenGL projection matrix here, for example: http://www.songho.ca/opengl/gl_projectionmatrix.html
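As a concrete sketch in Python (the fx/fy lines are an assumption on top of this answer: they follow from the standard pinhole model together with the square-pixel note above, and the numbers are example values):

import math

field_of_view = 0.785398                # horizontal FOV in radians (example value)
resolution_x, resolution_y = 640, 480   # example camera resolution

# Vertical field of view, per the formula above:
vertical_fov = 2 * math.atan(math.tan(field_of_view * 0.5)
                             / (resolution_x / resolution_y))

# With square pixels (dx == dy), the focal length in pixels, i.e. the fx and
# fy the asker is looking for, follows from the horizontal FOV:
fx = (resolution_x / 2) / math.tan(field_of_view * 0.5)
fy = fx

print(vertical_fov, fx, fy)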

How to translate a WKT expression to D3 options for an Albers projection?

This is the standard WKT expression (also translatable to a Proj.4 string) of the Albers conic equal-area projection for the official Statistical Grid of Brazil:
PROJCS["Conica_Equivalente_de_Albers_Brasil",
GEOGCS["GCS_SIRGAS2000",
DATUM["D_SIRGAS2000",
SPHEROID["Geodetic_Reference_System_of_1980",6378137,298.2572221009113]],
PRIMEM["Greenwich",0],
UNIT["Degree",0.017453292519943295]],
PROJECTION["Albers"],
PARAMETER["standard_parallel_1",-2],
PARAMETER["standard_parallel_2",-22],
PARAMETER["latitude_of_origin",-12],
PARAMETER["central_meridian",-54],
PARAMETER["false_easting",5000000],
PARAMETER["false_northing",10000000],
UNIT["Meter",1]]
The DATUM is WGS 84 ("SIRGAS2000" is an alias for it).
How do I translate all the details to the D3.js v5 parametrization?
I tried the obvious, such as center and parallels, but it was not sufficient:
var projection = d3.geoConicEqualArea()
.parallels([-2,-22]) // IS IT?
.scale(815)
//.rotate([??,??]) // HERE THE PROBLEM...
.center([-54, -12]) // IS IT?
PS: where is the D3 documentation for it? The D3 source code of geoConicEqualArea() has no clues.
The parts that translate to a d3 Albers projection are as follows:
PROJECTION["Albers"],
PARAMETER["standard_parallel_1",-2],
PARAMETER["standard_parallel_2",-22],
PARAMETER["latitude_of_origin",-12],
PARAMETER["central_meridian",-54],
You have the parallels; now you need to rotate. Also note that for any D3 projection the rotation is applied before centering, so the centering coordinates are relative to the rotated frame. Generally, you'll want to rotate on the x axis and center on the y axis:
d3.geoAlbers()
.parallels([-2,-22])
.center([0,-12])
.rotate([54,0])
.translate([width/2,height/2])
.scale(k)
I've rotated in the opposite direction along the x axis (rotating the earth under me so that I'm over top of the central meridian, hence my rotation by -x: [54, 0] for a central meridian of -54). I've then centered on the y axis. Lastly, I translate so that the intersection of the central meridian and the latitude of origin is centered in the map, and apply an appropriate scale value.
If I want to center on a different area but keep the projection the same, I can modify projection.center(), keeping in mind that the coordinates provided there are relative to the rotation. I can also use projection.fitSize() or projection.fitExtent(), both of which set the translate and scale values of the projection. None of center/scale/translate change the distortion in the D3 projection.
Of course this isn't a true replication of your projection, as the coordinate space units are pixels; you won't be able to measure distances in meters directly without some extra work.
See also

Three.js Image Pixel coordinate to World Coordinate Mapping

I'm creating a 3D object in Three.js with 6 faces. Each face has a mesh which uses a THREE.PlaneGeometry (width and height are both 256). On each mesh I'm using a 256 by 256 JPEG picture as the texture. I'm trying to find the world coordinate corresponding to a pixel coordinate (for example 200, 250) of the picture, on the Object3D's PlaneGeometry where that picture was used as the texture.
Object hierarchy:
Object3D --> face (Object3D, 6 faces in total) --> each face has a mesh (PlaneGeometry) and uses a JPEG file as its texture.
Picture1 pixel coordinate --> used to create the texture for Plane1 --> world coordinate corresponding to that pixel coordinate.
Can someone please help me?
Additional information:
Thanks for the answer. I'm trying to compare two results.
Method 1: one yaw/pitch is obtained by clicking on a specific point in the 3D object (e.g. the center of a particular car headlight on the front face) with the mouse and getting the point of intersection with the front face using raycasting.
Method 2: the other yaw/pitch is obtained by taking the pixel coordinate of the same point (the center of that car headlight) and calculating the world-space coordinate for that pixel. Please note that the pixel coordinate is taken from the JPEG file that was used as the texture to create the PlaneGeometry for the mesh (which is a child of the front face).
Do you think this comparison should produce the same results, assuming all other parameters are identical between the two approaches?
Well, assuming your planes are PlaneGeometry(1, 1), the local X/Y coordinates for a given pixel are pixelX / 256 and pixelY / 256, and Z is 0.5.
So something like:
// Note: PlaneGeometry(1, 1) is centered on its local origin, so depending on
// your setup you may need to offset by -0.5 and flip Y (image rows grow downward):
var localPoint = new THREE.Vector3(px / 256, py / 256, 0.5);
var worldPoint = thePlaneObject.localToWorld(localPoint);

Circular fisheye distort using opencv3 fisheye model

I use the OpenCV fisheye model functions to perform fisheye calibration. My image is a circular fisheye image (example), but I'm getting this result from the OpenCV fisheye model functions.
I have the following problems:
I don't know why the result is an oval and not a perfect circle. Is this expected?
Can the OpenCV fisheye model be calibrated for a circular fisheye image?
I don't understand why the image is not centered when using the cv::fisheye::calibrate function to get the Cx, Cy parameters in K.
What tips (picture number, angle and position...) can be used with the checkerboard to get the correct camera matrix and distortion coefficients?
Expected Result
My Result
First of all, cv::fisheye uses a very simple idea. To remove radial distortion it moves points of the fisheye circle in the direction from the circle center towards the circle edge.
Points near the center are moved a little; points near the edges are moved a much larger distance.
In other words, the distance a point is moved is not constant. It is a function f(x) = x + K1*x^3 + K2*x^5 + K3*x^7 + K4*x^9, where K1-K4 are the coefficients of radial distortion of the OpenCV fisheye undistortion model. In the normal case the undistorted image is always larger than the initial image.
As you can see, your undistorted image is smaller than the initial fisheye image. I think the source of the problem is a bad calibration.
I don't know why the result is an oval and not a perfect circle. Is this expected?
-> The tangential parameters of the calibration model can make it look like an oval. Either your actual lens is tilted or the calibration is incorrect. Try turning off the tangential parameter option.
Can the OpenCV fisheye model be calibrated for a circular fisheye image?
-> No problem as far as I know. Try ocam as well.
I don't understand why the image is not centered when using the cv::fisheye::calibrate function to get the Cx, Cy parameters in K.
-> It is normal that the optical center does not align exactly with the image center; it is a matter of degree, however. Cx, Cy represent the actual optical center. Low-quality fisheye camera manufacturers do not control the quality of this parameter.
What tips (picture number, angle and position...) can be used with the checkerboard to get the correct camera matrix and distortion coefficients?
-> Clear images only; different distances, different angles, different positions. As many as possible.
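For reference, a minimal Python sketch of such a cv::fisheye calibration and undistortion pipeline (the checkerboard size, file names and flag choices are assumptions to adapt, not values from the question):

import glob
import cv2
import numpy as np

CHECKERBOARD = (6, 9)   # inner corners per row/column (placeholder)

# Planar checkerboard corners (z = 0); the (1, N, 3) float32 shape is what
# cv2.fisheye.calibrate expects for each view.
objp = np.zeros((1, CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float32)
objp[0, :, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)

obj_points, img_points = [], []
for fname in glob.glob('calib_*.jpg'):              # hypothetical file pattern
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, CHECKERBOARD)
    if found:                                       # keep only clear detections
        obj_points.append(objp)
        img_points.append(corners)

K = np.zeros((3, 3))
D = np.zeros((4, 1))                                # k1..k4 of the fisheye model
flags = cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC | cv2.fisheye.CALIB_FIX_SKEW
rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
    obj_points, img_points, gray.shape[::-1], K, D, flags=flags)

# Undistort one image with the estimated K and D; reusing K as the new
# camera matrix keeps the original image scale.
img = cv2.imread('fisheye_input.jpg')               # hypothetical input image
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), K, img.shape[1::-1], cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, cv2.INTER_LINEAR)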

Color space to Camera space transformation matrix

I am looking for the transformation matrix to convert color space to camera space.
I know that the point conversion can be done using CoordinateMapper, but I am not using the official Kinect v2 APIs.
I would really appreciate it if someone could share the transformation matrix that converts color space to camera space.
As always, thank you very much.
Important: the raw Kinect RGB image has distortion. Remove it first.
Short answer
The "transformation matrix" you are searching for is called a projection matrix. Its intrinsic parameters for my Kinect's RGB camera are:
rgb.cx:959.5
rgb.cy:539.5
rgb.fx:1081.37
rgb.fy:1081.37
Long answer
First understand how the color image is generated in Kinect. A color pixel follows the pinhole projection s * [u, v, 1]^T = K * (R|t) * [X, Y, Z, 1]^T, where K is the intrinsic matrix built from fx, fy, cx, cy, and:
X, Y, Z: coordinates of the given point in a coordinate space where the Kinect sensor is considered the origin, AKA camera space. Note that camera space is 3D.
u, v: coordinates of the corresponding color pixel in color space. Note that color space is 2D.
fx, fy: focal lengths.
cx, cy: principal point (you can consider the principal point of the Kinect RGB camera to be roughly the center of the image).
(R|t): extrinsic camera matrix. For the Kinect you can consider it to be (I|0), where I is the identity matrix.
s: scalar value. You can set it to 1.
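Putting that together, here is a minimal Python sketch of this projection and its inverse (plain pinhole arithmetic using the calibration values quoted above; note that mapping a color pixel back to camera space also requires the depth Z of that pixel, which the matrix alone cannot supply):

import numpy as np

fx, fy = 1081.37, 1081.37      # focal lengths from the calibration above
cx, cy = 959.5, 539.5          # principal point from the calibration above

# Intrinsic matrix K; with (R|t) = (I|0) and s = 1 the projection reduces
# to the two lines in camera_to_color below.
K = np.array([[fx, 0.0, cx],
              [0.0, fy, cy],
              [0.0, 0.0, 1.0]])

def camera_to_color(X, Y, Z):
    """Project a 3D camera-space point onto a 2D color pixel (u, v)."""
    u = fx * X / Z + cx
    v = fy * Y / Z + cy
    return u, v

def color_to_camera(u, v, Z):
    """Back-project a color pixel into camera space; Z must come from
    the (registered) depth frame."""
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return X, Y, Z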
To get the most accurate values for fx, fy, cx, cy, you need to calibrate your Kinect's RGB camera using a chessboard.
The fx, fy, cx, cy values above come from my own calibration of my Kinect. These values differ from one Kinect to another by a very small margin.
More info and implementation
All Kinect camera matrix
Distort
Registration
I implemented the registration process in CUDA, since the CPU is not fast enough to process that much data (1920 × 1080 × 30 matrix calculations per second) in real time.
