Circular fisheye distortion using the opencv3 fisheye model - opencv3.0

I use the OpenCV fisheye model functions to perform fisheye calibration. My image is a circular fisheye (example), but I'm getting this result from the OpenCV fisheye model function.
I have the following questions:
I don't know why the result is an oval and not a perfect circle. Is this as expected?
Can the OpenCV fisheye model be calibrated for a circular fisheye image?
I don't understand why the image is not centered when using the cv::fisheye::calibrate function to get the Cx, Cy parameters in K?
What tips (number of pictures, angles, positions, ...) for the checkerboard images can be used to get the correct camera matrix and distortion coefficients?
Expected Result
My Result

First of all, cv::fisheye uses a very simple idea. To remove radial distortion it moves points of the fisheye circle in the direction from the circle center to the circle edge.
Points near the center are moved a little; points near the edges are moved a much larger distance.
In other words, the distance a point is moved is not constant. It is a function f(x) = x + K1*x^3 + K2*x^5 + K3*x^7 + K4*x^9 (equivalently x*(1 + K1*x^2 + K2*x^4 + K3*x^6 + K4*x^8)), where K1-K4 are the radial distortion coefficients of the OpenCV fisheye undistortion model. In the normal case the undistorted image is always larger than the initial image.
As you can see, your undistorted image is smaller than the initial fisheye image. I think the source of the problem is a bad calibration.
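For reference, here is a minimal Python sketch of how the undistortion is typically applied, assuming OpenCV's cv2.fisheye module; the K, D values and the file name are placeholders, not results from the question. The balance parameter controls how much of the original fisheye circle survives in the output:

import cv2
import numpy as np

# Placeholder intrinsics and fisheye coefficients; in practice these come
# from cv2.fisheye.calibrate (see the calibration sketch further below).
K = np.array([[600.0,   0.0, 640.0],
              [  0.0, 600.0, 480.0],
              [  0.0,   0.0,   1.0]])
D = np.zeros((4, 1))                      # k1..k4 of the fisheye model

img = cv2.imread("fisheye.jpg")           # hypothetical input image
h, w = img.shape[:2]

# balance=1.0 keeps the whole original view, balance=0.0 crops to valid pixels
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(
    K, D, (w, h), np.eye(3), balance=1.0)

# Build the remap tables and undistort
map1, map2 = cv2.fisheye.initUndistortRectifyMap(
    K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
undistorted = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)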

I don't know why the result is an oval and not a perfect circle. Is this as expected?
-> A tangential parameter in the calibration model can make it look like an oval. Either your actual lens is tilted or the calibration is incorrect. Try turning the tangential parameter option off.
Can the OpenCV fisheye model be calibrated for a circular fisheye image?
-> No problem, as far as I know. Try the OCamCalib (ocam) model as well.
I don't understand why the image is not centered when using the cv::fisheye::calibrate function to get the Cx, Cy parameters in K?
-> It is normal that the optical center does not align exactly with the image center; it is a matter of degree, however. Cx, Cy represent the actual optical center. Low-quality fisheye camera manufacturers do not control this parameter tightly.
What tips (number of pictures, angles, positions, ...) for the checkerboard images can be used to get the correct camera matrix and distortion coefficients?
-> Clear images only, different distances, different angles, different positions. As many as possible.
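To make those tips concrete, here is a rough Python sketch of a checkerboard calibration loop with cv2.fisheye.calibrate; the board size, square size and file pattern are placeholders, not values from the question:

import glob
import cv2
import numpy as np

CHECKERBOARD = (6, 9)        # inner corners per row/column (placeholder)
SQUARE_SIZE = 0.025          # square size in metres (placeholder)

# Template of 3D board points, shaped (1, N, 3) as cv2.fisheye expects
objp = np.zeros((1, CHECKERBOARD[0] * CHECKERBOARD[1], 3), np.float64)
objp[0, :, :2] = np.mgrid[0:CHECKERBOARD[0], 0:CHECKERBOARD[1]].T.reshape(-1, 2)
objp *= SQUARE_SIZE

objpoints, imgpoints = [], []
for fname in glob.glob("calib_*.jpg"):                    # hypothetical image set
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, CHECKERBOARD)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (3, 3), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-6))
        objpoints.append(objp)
        imgpoints.append(corners.reshape(1, -1, 2))

K = np.zeros((3, 3))
D = np.zeros((4, 1))
flags = (cv2.fisheye.CALIB_RECOMPUTE_EXTRINSIC
         | cv2.fisheye.CALIB_FIX_SKEW
         | cv2.fisheye.CALIB_CHECK_COND)                  # rejects degenerate views
rms, K, D, rvecs, tvecs = cv2.fisheye.calibrate(
    objpoints, imgpoints, gray.shape[::-1], K, D, flags=flags)
print("RMS reprojection error:", rms)                     # should be well below 1 px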

Related

Camera Geometry: Algorithm for "object area correction"

A project I've been working on for the past few months calculates the top-surface area of an object captured with a 3D depth camera from a top view.
Workflow of my project:
Capture an image of a group of objects (RGB and depth data) from the top view
Run instance segmentation on the RGB image
Calculate the real area of the segmented mask using the depth data
Some problems with the project:
All given objects have different shapes
The side of an object, not just its top, begins to be visible as the object moves toward the outside of the image.
Because of this, the mask area to be segmented gradually increases.
As a result, the computed area of an object located toward the edge of the image is larger than that of an object located in the center.
In the example image, object 1 is located in the middle of the field of view, so only its top is visible, but object 2 is located at the edge of the field of view, so part of its top is lost and its side is visible.
Because of this, the segmented mask area is larger for objects located on the periphery than for objects located in the center.
I only want to find the area of the top of an object.
Example image of what I want:
Is there a way to geometrically correct the area of an object located near the edge of the image?
I tried to correct the computed area by multiplying it by a factor based on the angle between vector 1, connecting the center of the camera lens to the center point of the floor, and vector 2, connecting the center of the lens to the center of gravity of the target object.
However, I gave up because I couldn't logically justify how much correction was needed.
fig 3:
What I would do is convert your RGB and depth images to a 3D mesh (a surface with bumps) using your camera settings (FOV, focal length), something like this:
Align already captured rgb and depth images
and then project it onto the ground plane (perpendicular to the camera view direction in the middle of the screen). To obtain the ground plane, simply take three 3D positions on the ground, p0, p1, p2 (forming a triangle), and use the cross product to compute the ground normal:
n = normalize(cross(p1-p0,p2-p1))
Now your plane is defined by p0 and n, so just convert each 3D coordinate like this:
by subtracting the normal vector multiplied by the signed distance of the point to the ground plane:
p' = p - n * dot(p-p0,n)
That should eliminate the problem with visible sides at the edges of the FOV. However, you should also take into account that when a side is visible, part of the top is hidden as well. To remedy that, you might also find the axis of symmetry, use just the half of the top that is not partially hidden, and multiply the measured half area by 2.
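Here is a small numpy sketch of that projection step, assuming the depth image has already been back-projected into an N x 3 array of camera-space points and that p0, p1, p2 are three sampled ground points (all names and numbers below are placeholders):

import numpy as np

def project_to_ground(points, p0, p1, p2):
    # Orthogonally project N x 3 points onto the plane through p0, p1, p2
    n = np.cross(p1 - p0, p2 - p1)
    n = n / np.linalg.norm(n)              # unit ground normal
    d = (points - p0) @ n                  # signed distance of each point to the plane
    return points - np.outer(d, n)         # p' = p - n * dot(p - p0, n)

# Toy example: ground is the z = 0 plane, one point sits 0.3 units above it
p0, p1, p2 = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
pts = np.array([[0.2, 0.5, 0.3]])
print(project_to_ground(pts, p0, p1, p2))  # -> [[0.2 0.5 0. ]]
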
Accurate computation is virtually hopeless, because you don't see all sides.
Assuming your depth information is available as a range image, you can take the points inside the segmentation mask of a single chicken, estimate the vertical direction at that location, then rotate and project the points to obtain the top-view silhouette.
But as a part of the surface is occluded, you may have to reconstruct it using symmetry.
There is no way to do this accurately for arbitrary objects, since there can be parts of the object that contribute to the "top area", but which the camera cannot see. Since the camera cannot see these parts, you can't tell how big they are.
Since all your objects are known to be chickens, though, you could get a pretty accurate estimate like this:
Use Principal Component Analysis to determine the orientation of each chicken.
Using many objects in many images, find a best-fit polynomial that estimates apparent chicken size by distance from the image center, and orientation relative to the distance vector.
For any given chicken, then, you can divide its apparent size by the estimated average apparent size for its distance and orientation, to get a normalized chicken size measurement.
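For the orientation step, here is a hedged numpy sketch of PCA over the pixel coordinates of one segmentation mask (mask is assumed to be a boolean array produced by the instance segmentation; the angle returned is the direction of the chicken's long axis in the image):

import numpy as np

def mask_orientation(mask):
    # Dominant axis angle (radians) of a boolean mask via PCA of its pixel coords
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                      # centre the points
    cov = np.cov(pts, rowvar=False)              # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]       # eigenvector of the largest eigenvalue
    return np.arctan2(major[1], major[0])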

How is point cloud data acquired from the structured light 3D scanning?

I am trying to understand 3D reconstruction of an object using a 3D structured-light scanner, and I am stuck at the step where the decoded set of camera-projector correspondences is used to reconstruct a 3D point cloud. How exactly is the 3D point cloud obtained from those correspondences? I want to understand the mathematical formulation, not the code implementation.
Assuming you used a structured-light method based on some sort of lines (vertical or horizontal, like binary coding or De Bruijn patterns), the idea is as follows:
A light plane goes through the projector's perspective center and the line in the pattern.
The light plane normal needs to be rotated by the projector's rotation matrix relative to the camera (or to the world, depending on the calibration). The rotation of the light plane can be avoided if you treat the projector's perspective center as the system origin.
Using the correspondences, you find a pixel in the image that matches the light plane. Now define a vector that goes from the camera's perspective center through that pixel in the image, and rotate this vector by the camera rotation (relative to the projector or to the world, again depending on the calibration).
Intersect the light plane with that vector. How to compute this is described on Wikipedia: https://en.wikipedia.org/wiki/Line%E2%80%93plane_intersection
As you can see, the mathematical problem (3D reconstruction) here is very simple. The hard parts are recognizing the projected pattern in the image (easier than regular stereo, but still hard) and calibrating (finding the relative orientation between the camera and the projector).
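As a numeric illustration of the intersection step above, here is a small Python sketch of the ray-plane computation from the linked Wikipedia page (c is the camera centre, v the rotated viewing ray, q a point on the light plane and m its normal; all symbols and numbers are placeholders):

import numpy as np

def intersect_ray_plane(c, v, q, m):
    # Return the 3D point where the ray c + t*v meets the plane through q with normal m
    denom = np.dot(v, m)
    if abs(denom) < 1e-9:                  # ray (almost) parallel to the light plane
        return None
    t = np.dot(q - c, m) / denom
    return c + t * v

# Toy example: camera at the origin, ray along +z, light plane z = 2
print(intersect_ray_plane(np.zeros(3), np.array([0., 0., 1.]),
                          np.array([0., 0., 2.]), np.array([0., 0., 1.])))   # -> [0. 0. 2.]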

How to get a homography matrix from intrinsic and extrinsic parameters to obtain a top-view image

I'm trying to get a top-view image from an image captured by a camera in perspective. I already have the intrinsic and extrinsic parameters of the camera relative to the plane, the ground. The camera is positioned on a robot, pointed at the ground at a certain height and tilt. For this camera position I got:
Intrinsic matrix (3X3):
KK= [624.2745,0,327.0749;0,622.0777,232.3448;0,0,1]
The translation vector and rotation matrix:
T = [-323.708466;-66.960728;1336.693284]
R =[0.0029,1.0000,-0.0034;0.3850,-0.0042,-0.9229;-0.9229,0.0013,-0.3850]
How can I get a homography matrix to obtain a top-view image of the ground (a chessboard is set on the ground) with the information I have?
I'm using MATLAB. I've already written the code to apply the matrix H to the captured image, but I still haven't found a way to get this H matrix properly.
Thanks in advance.
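For what it's worth, here is a sketch of the standard plane-induced homography under the usual assumptions (pinhole model x ~ K [R | T] X, with the chessboard/ground lying in the world plane Z = 0; these conventions may not match the asker's calibration exactly). It is written in Python/numpy rather than MATLAB, and S is a made-up scaling that maps ground units to output pixels:

import numpy as np

# Values copied from the question (T appears to be in millimetres)
K = np.array([[624.2745, 0, 327.0749],
              [0, 622.0777, 232.3448],
              [0, 0, 1]])
R = np.array([[0.0029, 1.0000, -0.0034],
              [0.3850, -0.0042, -0.9229],
              [-0.9229, 0.0013, -0.3850]])
T = np.array([[-323.708466], [-66.960728], [1336.693284]])

# With the ground as the world plane Z = 0, ground points (X, Y, 1) map to
# pixels through H = K * [r1 r2 t]
H_ground_to_image = K @ np.hstack([R[:, 0:1], R[:, 1:2], T])

# A top view is the inverse mapping, composed with a similarity S (placeholder)
# that scales and offsets ground coordinates into the output image
S = np.array([[0.5, 0, 400], [0, 0.5, 300], [0, 0, 1]])
H_image_to_topview = S @ np.linalg.inv(H_ground_to_image)
# e.g. warp with cv2.warpPerspective(img, H_image_to_topview, (800, 600))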

Camera matching application

I am trying to build a simple camera matching (or match moving) application. The functionality is the same as that in most 3d applications like 3ds Max or Maya. Given an image of a cube and a 3d model of the cube, the user selects points on the image corresponding to each vertex of the model. The application must then generate a camera view that displays the 3d cube model from the same angle as shown in the image.
Can anyone point me in the direction of an algorithm for that?
PS: The camera is calibrated and the camera calibration matrix is available to the program
You can try the algorithm illustrated step by step at http://www.offbytwo.net/camera-matching/. The Octave source code is provided, too.
As a plus, you don't need to start with a cube: any two edges parallel to the x axis and two in the y direction are enough.
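If OpenCV is an option, a standard alternative for exactly this setup (known 3D vertices, user-clicked 2D points, known intrinsics) is solvePnP; the coordinates in the sketch below are made up purely so that it runs:

import cv2
import numpy as np

# Made-up data: six cube vertices in model space and the pixels the user clicked
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                          [0, 0, 1], [1, 0, 1]], dtype=np.float64)
image_points = np.array([[320, 240], [420, 250], [430, 340], [330, 330],
                         [310, 150], [410, 160]], dtype=np.float64)
K = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])   # calibration matrix
dist = np.zeros(5)                                                 # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix of the estimated camera pose
# R and tvec place the model in camera space; invert the pose to position the
# virtual camera in model space and render the cube from that viewpoint.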

Principal point in camera matrix (programming issue)

In a 3x3 camera matrix, what does the principal point do? How is its location determined? Can we visualize it?
It is said that the principal point is the intersection of the optical axis with the image plane. But why is it not always at the center of the image?
We use OpenCV.
The 3x3 camera intrinsics matrix is used to map between coordinates in the image and coordinates in the camera's 3D frame of the physical world. The principal point is where "the intersection of the optical axis with the image plane" lands in that mapping: it gives the pixel coordinates that points on the optical axis project to. Ideally the principal point is at the center of the image, and for most cameras it is close, but this is not always the case in practice. It may be slightly off-center due to tangential distortion, imperfect centering of the lens components, and other manufacturing defects. The Cx, Cy entries of the intrinsics matrix account for this offset.
I have found this site to be helpful when learning about camera calibration. Although it is in MATLAB, it is based on the same camera calibration model used in OpenCV.
The principal point in the 3x3 camera calibration matrix can also, more usefully, represent image cropping. If the image has been cropped around an object, then mapping pixel coordinates to world coordinates requires a translation, which appears as a non-centered principal point in the matrix.
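A minimal numeric sketch of where Cx and Cy act in the pinhole projection (all numbers are made up):

import numpy as np

fx, fy = 800.0, 800.0        # focal lengths in pixels (made-up values)
cx, cy = 310.0, 245.0        # principal point, not exactly the image centre

K = np.array([[fx, 0, cx],
              [0, fy, cy],
              [0,  0,  1]])

X = np.array([0.1, -0.05, 2.0])   # a 3D point in camera coordinates
u, v, w = K @ X
print(u / w, v / w)               # pixel coords: fx*X/Z + cx = 350, fy*Y/Z + cy = 225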

Resources