I am working with fisheye stereo cameras. I can't rectify the cameras for stereo correspondence; the presence of vegetation and the large baseline between the cameras make the task particularly challenging. The FOV is 210 degrees and the baseline is 0.59 m. I have posted the left and right images of the same scene, captured with the cameras:
[Left Image] [Right Image]
I've tried the OpenCV fisheye model and the OpenCV omnidirectional model; neither of them worked, and the output images were not rectified.
Should the Omnidirectional model work?
Is it even possible to calibrate the cameras together?
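For reference, here is a minimal Python sketch of the omnidirectional (Mei) route, assuming opencv-contrib-python is installed and that calibration has already produced K, D and xi for each camera plus the relative pose R, T (the values below are placeholders, not real calibration results). With a 210-degree FOV, a perspective rectification cannot cover the whole field, so the longitude-latitude output mode of the omnidir module is usually the one to try:

```python
import cv2
import numpy as np

# Placeholder calibration results -- in practice these come from
# cv2.omnidir.calibrate / cv2.omnidir.stereoCalibrate (opencv-contrib).
K1 = np.array([[350.0, 0, 640.0], [0, 350.0, 480.0], [0, 0, 1.0]])
K2 = K1.copy()
D1 = np.zeros((1, 4)); D2 = np.zeros((1, 4))   # k1, k2, p1, p2
xi1 = np.array([[1.0]]); xi2 = np.array([[1.0]])
R = np.eye(3)                                  # rotation between the cameras
T = np.array([[0.59], [0.0], [0.0]])           # 0.59 m baseline

imgL = cv2.imread("left.png")                  # placeholder file names
imgR = cv2.imread("right.png")
h, w = imgL.shape[:2]

# Rotations that bring both cameras onto a common rectified orientation.
R1, R2 = cv2.omnidir.stereoRectify(R, T)

# Output "camera matrix" for the longitude-latitude (equirectangular) image.
Knew = np.array([[w / 3.1415, 0, 0],
                 [0, h / 3.1415, 0],
                 [0, 0, 1.0]])

flags = cv2.omnidir.RECTIFY_LONGLATI           # perspective mode cannot hold 210 deg
map1L, map2L = cv2.omnidir.initUndistortRectifyMap(
    K1, D1, xi1, R1, Knew, (w, h), cv2.CV_16SC2, flags)
map1R, map2R = cv2.omnidir.initUndistortRectifyMap(
    K2, D2, xi2, R2, Knew, (w, h), cv2.CV_16SC2, flags)

rectL = cv2.remap(imgL, map1L, map2L, cv2.INTER_LINEAR)
rectR = cv2.remap(imgR, map1R, map2R, cv2.INTER_LINEAR)
```

The sketch only covers the rectification step; whether the result is actually row-aligned still depends on getting a usable stereo calibration despite the vegetation and the wide baseline.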
I know how to calibrate a single camera and a stereo camera.
But I am not sure how to calibrate when the two cameras are facing each other, as in the figure below.
For my application, I am using two Intel RealSense cameras.
Thank you for your suggestions.
This is a follow-up question to:
Using FFMPEG: How to do a Scene Change Detection? with timecode?
Using FFmpeg to do scene change detection, the filter select='gt(scene,0.3)' can be used to select the frames whose scene-detection score is greater than 0.3.
Question: In FFmpeg, is it possible to extend the filter so that scene detection is applied only to a defined area of the video frames? In other words, how can one do scene change detection on only the right-hand side of the video frames, for example?
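One way to restrict the detection to a region is to crop the stream first and run select on the cropped frames: the scene score is then computed only from that area. Below is a minimal sketch driving ffmpeg from Python; the filtergraph is the interesting part, while input.mp4, the right-half crop and the 0.3 threshold are placeholder choices.

```python
import subprocess

# crop=iw/2:ih:iw/2:0 keeps only the right half of each frame, so the scene
# score computed by select afterwards is based on that region alone.
# showinfo prints the pts_time of every frame that passes the select filter.
filtergraph = "crop=iw/2:ih:iw/2:0,select='gt(scene,0.3)',showinfo"

subprocess.run(
    ["ffmpeg", "-i", "input.mp4",      # input.mp4 is a placeholder name
     "-vf", filtergraph,
     "-f", "null", "-"],               # discard frames; timestamps appear in the log
    check=True,
)
```

The timestamps printed by showinfo refer to the original timeline (crop does not change them), so they can be used afterwards to cut or tag the full-size video.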
Hi, I'm working on a VR project with three.js and an Oculus Rift.
I made a 24-meter-wide 360° stereoscopic video, but when I try to display some text in front of it, I get a strange, uncomfortable vision effect, some kind of eye-separation issue.
If anyone has an idea... thanks :/
The text must always be closer than anything it obstructs in the video for this to work. That is, if the 360 cameras took images with nothing closer than 1.5 meters, the text should be at around 1.2 meters.
Disabling the head-movement effect on the text will help too (keep the rotation, just disable the translation).
And 24 meters is a bit low; try a few hundred, maybe a kilometer. Remember that the video must look like it is at infinite range, and both head movement and IPD must be completely ignorable.
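To put rough numbers on that last point, here is a small sketch of the vergence angle (the angle between the two eyes' lines of sight to a point) for the text versus the video sphere, assuming an average IPD of about 0.064 m; the IPD value is an assumption and the distances are just the ones discussed above.

```python
import math

IPD = 0.064  # assumed average interpupillary distance, in meters

def vergence_deg(distance_m: float) -> float:
    """Angle between the two eyes' lines of sight to a point at this distance."""
    return math.degrees(2 * math.atan((IPD / 2) / distance_m))

for label, d in [("text at 1.2 m", 1.2),
                 ("video sphere at 24 m", 24.0),
                 ("video sphere at 300 m", 300.0)]:
    print(f"{label}: {vergence_deg(d):.3f} deg")

# text at 1.2 m:          ~3.054 deg
# video sphere at 24 m:   ~0.153 deg
# video sphere at 300 m:  ~0.012 deg
```

The farther the video sphere, the closer its vergence gets to zero, so the stereo cues baked into the video no longer fight the geometry of the sphere, and the text in front stays comfortably "nearer" than everything behind it.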
From what I have researched, it seems that it is not possible to use the Tango fisheye camera and the RGB sensor at the same time.
So, is it possible to take a color picture with the fisheye camera on Tango?
It is not possible to take a color picture with the fisheye, simply because the RGB camera and the fisheye camera are two different hardware devices, as you can see here.
I'm using AVMutableComposition to position and composite two different video tracks for playback and export. I can easily scale and position the video tracks using AVMutableVideoCompositionLayerInstruction. This is all in the sample code.
However, what I need to do is crop one of the video layers. Not "effectively crop" it, as is done in the sample code by having the video frame overlap the side of the composition, but actually crop one of the layers being composited, so that the shape of the composited video is different but the video is not distorted. In other words, I don't want to change the renderSize of the whole composition, just crop one of the composited layers.
Is this possible? Any ideas to make it happen? Thanks!
Have you tried the cropping settings of AVMutableVideoCompositionLayerInstruction, i.e. setCropRectangle(_:at:)?