Need extrinsic parameter camera calibration tips - camera-calibration

I am new to camera calibration (extrinsic parameters). Please give me a clear idea of how to start computing the extrinsic parameters. I really need some answers; I am in a critical position.

Try the Camera Calibrator app in MATLAB.
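If you would rather stay in OpenCV, here is a minimal Python sketch of the core step: recovering extrinsics from known 3D-2D correspondences with cv2.solvePnP. The intrinsic matrix and point values below are made-up placeholders, not real calibration data.

```python
import numpy as np
import cv2

# Known 3D points in world coordinates (e.g. corners of a calibration
# target) and their detected 2D pixel locations. Values are illustrative.
object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]],
                         dtype=np.float64)
image_points = np.array([[320, 240], [420, 238], [424, 342], [318, 344]],
                        dtype=np.float64)

# Intrinsics from a prior calibration; fx, fy, cx, cy are assumptions here.
K = np.array([[800, 0, 320],
              [0, 800, 240],
              [0,   0,   1]], dtype=np.float64)
dist = np.zeros(5)  # assume no lens distortion for this sketch

# solvePnP returns the extrinsics: a rotation (as a Rodrigues vector) and a
# translation that map world coordinates into the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
print(R, tvec)
```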

Related

Matching 2D image pixels in corresponding 3D point cloud

I want to match pixels of calibrated 3D lidar and 2D camera data. I will use this to train a network. Can this matching be considered labeled data? If so, can anyone help me achieve this? Any suggestions will be appreciated.
On a high level, assuming you have some transformation (rotation/translation) between your camera and your lidar, and the calibration matrix of the camera, you have a 3D image and a 2D projection of it.
That is, if you project the 3D point cloud onto the image plane of the camera, you get an (x,y)_camera point (in the camera frame) for every (x,y,z)_world point, along with its color/depth value.
Whether this is helpful to train on depends on what you're trying to achieve. If you're trying to find where the camera is, or to calibrate it, given (RGB)D data and image(s), that could be done better with a Perspective-n-Point (PnP) algorithm (the lidar could perhaps make it easier if it built up a "real" view of the world to compare against). Whether it would be considered labeled data depends on how you are trying to label it.
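As a rough illustration of that projection step, here is a minimal NumPy sketch, assuming the lidar-to-camera rotation R, translation t, and camera matrix K are already known. All values below are placeholders.

```python
import numpy as np

# Assumed known: extrinsics mapping lidar points into the camera frame,
# and the camera intrinsic matrix K. All values here are placeholders.
R = np.eye(3)                    # lidar-to-camera rotation
t = np.array([0.0, 0.0, 0.2])    # lidar-to-camera translation (meters)
K = np.array([[700.0,   0.0, 640.0],
              [  0.0, 700.0, 360.0],
              [  0.0,   0.0,   1.0]])

points_lidar = np.random.rand(100, 3) * 10.0  # stand-in point cloud

# Transform into the camera frame and keep only points in front of it.
points_cam = points_lidar @ R.T + t
points_cam = points_cam[points_cam[:, 2] > 0]

# Perspective projection: pixel = K @ (X/Z, Y/Z, 1).
uv = (K @ (points_cam / points_cam[:, 2:3]).T).T[:, :2]
print(uv[:5])  # each surviving lidar point now has a (u, v) pixel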

OpenCV solvePnP return camera below floor

Recently I used OpenCV solvePnP to calibrate the camera. I have some 3D space points on the floor. After calibration, the camera ends up below the floor (z < 0). How can I flip the parameters?
I had set up the right-hand coordinate system incorrectly. The issue was solved by using the correct coordinate system.
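For anyone hitting the same symptom, a small sketch of the diagnostic, assuming rvec/tvec come from cv2.solvePnP (placeholder values here): the camera center in world coordinates is C = -Rᵀt, and its z component tells you whether the camera sits above or below the floor plane.

```python
import numpy as np
import cv2

# rvec, tvec as returned by cv2.solvePnP (placeholder values here).
rvec = np.array([0.1, -0.2, 0.05])
tvec = np.array([0.3, 0.1, 2.0])

R, _ = cv2.Rodrigues(rvec)

# solvePnP maps world -> camera: X_cam = R @ X_world + t, so the camera
# center expressed in world coordinates is C = -R.T @ t.
C = -R.T @ tvec
print("camera height above floor:", C[2])  # negative => below the floor
```

A negative height usually points to a handedness or point-ordering mistake in the world coordinates, as the asker found; fixing the coordinate system is the proper cure, rather than flipping signs by hand.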

Kinect v2 Color Camera Calibration Parameters

I am looking for camera calibration parameters of Kinect V2 color camera. Can someone provide these parameters or any reference?
I know that camera calibration parameters are lens-specific, but I am fine with the default parameters that the Kinect v2 uses.
As always, thank you very much.
Each Kinect's calibration values differ by small margins. If you need to do very precise calculations with them, you will need to calibrate your Kinect with a chessboard using OpenCV. Otherwise, you can use the following values, which I calibrated myself.
All the Kinect v2 calibration parameters
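For the precise route mentioned above, here is a minimal OpenCV chessboard calibration sketch in Python. The board size, square size, and file names are assumptions for illustration.

```python
import glob
import numpy as np
import cv2

# Chessboard with 9x6 inner corners; square size in meters. Adjust both
# to your own printed board.
pattern = (9, 6)
square = 0.025
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
for path in glob.glob("kinect_color_*.png"):  # hypothetical file names
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Returns the intrinsic matrix and distortion coefficients for this
# specific Kinect's color camera.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print(K, dist)
```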

Extrinsic parameters of camera - are they constant?

I am wondering how the extrinsic parameters of a camera can be constant.
I know that the rotation matrix aligns the world coordinate system axes to the camera coordinate system axes, and the translation vector aligns the origins on top of each other.
But how can the parameters be constant? Would it not be required that I know the orientation of the camera in world space, e.g. from an accelerometer or something?
I hope someone can help me wrap my head around this.
As you have correctly pointed out, camera extrinsics consist of a rotation and a translation of the camera's coordinate system relative to some world coordinate system. So, the extrinsics are constant only as long as the camera does not move relative to the world coordinates. As soon as your camera moves, its extrinsics change.
When you calibrate your camera, you typically use multiple images of a planar calibration pattern. During the calibration process the extrinsics of each location of the calibration pattern are computed. Once the camera is calibrated, you can compute the extrinsics by detecting some reference points with known world coordinates in the image. See this example in MATLAB.
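To make the point concrete that the extrinsics hold only while the camera stays put, here is a small NumPy sketch with illustrative values: physically moving the camera composes a new world-to-camera transform, so R and t change.

```python
import numpy as np

# Extrinsics at calibration time: X_cam = R @ X_world + t.
R = np.eye(3)
t = np.array([0.0, 0.0, 1.0])

world_point = np.array([0.5, 0.0, 0.0])
print(R @ world_point + t)  # where the point lands in the camera frame

# Now move the camera: rotate 10 degrees about its Y axis and step
# sideways. The motion, expressed in the camera frame, composes with
# the old extrinsics to give new ones.
a = np.deg2rad(10)
R_move = np.array([[ np.cos(a), 0, np.sin(a)],
                   [ 0,         1, 0        ],
                   [-np.sin(a), 0, np.cos(a)]])
R_new = R_move @ R
t_new = R_move @ t + np.array([0.1, 0.0, 0.0])
print(R_new @ world_point + t_new)  # different camera-frame coordinates
```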

Opengl-es camera can see an object?

I have a camera (pos, dir)
and
I have an object (x,y,z)
How can I detect whether the object is visible to my camera?
You don't have enough info. You need to know the camera frustum. Then you can calculate whether the object is inside the frustum. Learn more here:
http://www.lighthouse3d.com/tutorials/view-frustum-culling/
http://www.songho.ca/opengl/gl_transform.html
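As a sketch of the frustum test those tutorials describe: transform the object's position by the combined projection * view matrix and check the clip-space inequality -w <= x, y, z <= w. The matrices below are simplified placeholders for what your camera (pos, dir) and projection setup would produce.

```python
import numpy as np

def point_in_frustum(mvp, p):
    """Test a world-space point against the view frustum.

    mvp is the combined projection * view (* model) matrix, as in
    OpenGL ES. A point is inside the frustum iff its clip-space
    coordinates satisfy -w <= x, y, z <= w.
    """
    x, y, z, w = mvp @ np.array([p[0], p[1], p[2], 1.0])
    return all(-w <= c <= w for c in (x, y, z))

# Illustrative 90-degree perspective projection (near=1, far=100) and a
# camera at the origin looking down -Z.
proj = np.array([[1.0, 0.0,  0.0,   0.0],
                 [0.0, 1.0,  0.0,   0.0],
                 [0.0, 0.0, -1.02, -2.02],
                 [0.0, 0.0, -1.0,   0.0]])
view = np.eye(4)

print(point_in_frustum(proj @ view, (0.0, 0.0, -5.0)))  # True: in front
print(point_in_frustum(proj @ view, (0.0, 0.0,  5.0)))  # False: behind
```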
