While Reading Two Videos: poll error 1: No space left on device?

While reading from two cameras using the GStreamer pipeline below,
gst-launch-1.0 v4l2src device=/dev/video1 ! videoconvert ! xvimagesink
I am not able to read from both cameras at a time; I can read only one camera at a time. While reading from both cameras, the error below came up:
ERROR: from element /GstPipeline:pipeline0/GstV4l2Src:v4l2src0: Could not read from resource.
Additional debug info:
gstv4l2bufferpool.c(1023): gst_v4l2_buffer_pool_poll (): /GstPipeline:pipeline0/GstV4l2Src:v4l2src0:
poll error 1: No space left on device (28)
What is the problem here, and how can it be resolved? How will I read from both cameras at a time?

poll error 1: No space left on device (28)
This error points to a bandwidth limitation on the USB bus, especially with USB 2.0, where the maximum allowed is 480 Mb/s.
Solution 1: Try connecting the cameras to different USB host controllers, i.e. they should not share the same bus.
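To check the topology, lsusb -t prints the USB device tree per bus, so you can confirm whether both cameras hang off the same host controller:
lsusb -t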
Other solutions: switch to compressed formats or reduce the image resolution so that the bandwidth limit is not breached.
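For instance, a sketch of a pipeline that requests a compressed MJPG stream at a lower resolution (the device path and caps are illustrative and depend on what your camera actually offers):
gst-launch-1.0 v4l2src device=/dev/video1 ! image/jpeg,width=640,height=480 ! jpegdec ! videoconvert ! xvimagesink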
Reference: http://www.ideasonboard.org/uvc/faq/

Related

Slow frame rate for gstreamer stream

I am using the following command for a GStreamer pipeline for a video stream from a webcam:
gst-launch-1.0 v4l2src device=/dev/video0 ! videorate ! video/x-raw,format=I420,width=1920,height=1080,framerate=25/1 ! xvimagesink
Unfortunately, the displayed stream has a very low framerate; it feels like maybe 3 frames per second.
I don't really know what could be the problem here. How can I increase the performance for this video stream?
I already tried reducing the width and height values to lower the resolution, but this did not give me any noticeable improvement.
Might the format be slowing me down? Maybe it is helpful to know that I chose I420 because it was needed for a nodewebRTC implementation, where a function was seemingly only called with frames of this format.
Check your camera's capabilities first, e.g. with v4l2-ctl --list-formats-ext -d /dev/video0. It might be that I420 requires conversion. If your PC is not able to keep up with the conversion, you'll see a message like:
WARNING: from element /GstPipeline:pipeline0/GstXvImageSink:xvimagesink0: A lot of buffers are being dropped.
If so, consider using MJPG to stream at a higher framerate.
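For example, a sketch of an MJPG variant of your pipeline (assuming the camera lists an image/jpeg mode at this resolution and framerate in the v4l2-ctl output):
gst-launch-1.0 v4l2src device=/dev/video0 ! image/jpeg,width=1920,height=1080,framerate=25/1 ! jpegdec ! videoconvert ! xvimagesink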

Noisy disparity map in stereovision pipeline

I'm trying to implement the Xilinx xfOpenCV stereovision pipeline explained at the bottom of the page here in Vivado HLS as a standalone IP core (instead of using the accelerated flow). The stereo pipeline functions are based on, and very similar to, the OpenCV ones.
So first I collected a couple of images from an already calibrated stereo camera and simulated the xfOpenCV functions as a standalone HW IP to make sure I got the expected result. After simulation, the result is not perfect and has quite a lot of noise, for instance:
I went ahead and synthesized and implemented the IP in hardware (FPGA) to test it with a live stereo-camera stream. I have the calibration parameters, and they are being used to correct the live stream frames before the 'stereo' function. (The calibration parameters have also been tested previously in simulation.)
All works fine in terms of video flow, memory, etc., but what I get is quite a high level of noise (as expected from the simulation). This is a screenshot example of the live camera:
Any idea why this 'flickering' noise is generated? Is it caused by noise in the original images? What would be the best approach, or next steps, to get rid of it (or smooth it out)?
Thanks in advance.

Google Tango Lenovo Phab 2 Camera Intrinsics

I was trying to extract the camera intrinsics and distortion coefficients from my Lenovo Phab 2 via the documented:
ret = TangoService_getCameraIntrinsics(TANGO_CAMERA_COLOR, &ccIntrinsics);
Weirdly enough, the distortion coefficients come back as 0 for every entry. However, there is data for the intrinsics, though with what I think is very low precision.
I thought at first it could have been a casting error, but with the %f, %lf and %E format flags (via LOGE()), the values don't change.
I know that on the previous Google Tango Tablet dev kit, the calibration coefficients and distortion model were in a file called calibration.xml. Is this also true of the Lenovo Phab 2?
EDIT: After dumping the contents of the camera intrinsics struct to a file, there are definitely no distortion coefficients being returned for the device, i.e. all distortion entries are 0.0000.
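For reference, a minimal sketch of such a dump, assuming the standard TangoCameraIntrinsics fields (fx, fy, cx, cy and a distortion[5] array) and the LOGE logging macro from above:
TangoCameraIntrinsics ccIntrinsics;
TangoErrorType ret = TangoService_getCameraIntrinsics(TANGO_CAMERA_COLOR, &ccIntrinsics);
if (ret == TANGO_SUCCESS) {
  // Focal lengths and principal point come back populated (low precision or not).
  LOGE("fx=%f fy=%f cx=%f cy=%f", ccIntrinsics.fx, ccIntrinsics.fy, ccIntrinsics.cx, ccIntrinsics.cy);
  for (int i = 0; i < 5; ++i) {
    // On this device, every entry prints as 0.000000E+00.
    LOGE("distortion[%d]=%E", i, ccIntrinsics.distortion[i]);
  }
}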
This was an issue with my device! It was resolved by receiving a replacement device; somehow the calibration data was missing.
Make sure to check your device for the calibration.xml file. If this file is not in place, contact customer support!

Metal // Reading multisampled depth texture

For some post-rendering effects I need to read the depth texture. This works fine as long as multisampling is turned off, but with multisampling I have trouble reading the texture.
When trying to use the multisample texture in the shader through a depth2d_ms argument, the compiler fails at run time with the error message "Internal Compiler Error".
I found that with OpenGL you'd first blit the multisample depth buffer to a resolved depth buffer to read the sampled values, but with Metal I get an assertion error stating that the sample counts of blit textures need to match, so there is no chance of blitting 4 samples into 1.
So how would I read sampled or unsampled values from the depth buffer whilst using multisampling?
I don't know the answer for certain, but I suggest trying the solution below:
Set the storeAction of the MTLRenderPassDepthAttachmentDescriptor as follows:
depthAttachment.storeAction = MTLStoreActionMultisampleResolve;
and also set its resolveTexture to another (single-sample) texture:
depthAttachment.resolveTexture = targetResolvedDepthTexture;
Finally, read the contents of targetResolvedDepthTexture.
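Putting it together, a minimal sketch (the texture names are placeholders; msaaDepthTexture is the multisampled depth texture and targetResolvedDepthTexture is a single-sample texture of the same size, both created elsewhere):
MTLRenderPassDescriptor *passDesc = [MTLRenderPassDescriptor renderPassDescriptor];
passDesc.depthAttachment.texture = msaaDepthTexture;                   // sampleCount > 1
passDesc.depthAttachment.loadAction = MTLLoadActionClear;
passDesc.depthAttachment.clearDepth = 1.0;
passDesc.depthAttachment.storeAction = MTLStoreActionMultisampleResolve;
passDesc.depthAttachment.resolveTexture = targetResolvedDepthTexture;  // sampleCount == 1
// After this pass completes, sample targetResolvedDepthTexture in the shader as a regular depth2d.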
Note that MSAA depth resolve is only supported in iOS GPU Family 3 v1 (the A9 GPU on iOS 9).
Take a look at the Feature Availability document:
https://developer.apple.com/library/ios/documentation/Miscellaneous/Conceptual/MetalProgrammingGuide/MetalFeatureSetTables/MetalFeatureSetTables.html#//apple_ref/doc/uid/TP40014221-CH13-DontLinkElementID_8

low depth resolution on google-project-tango

I see that the resolution of the depth camera is 320*180; however, each depth capture frame produces only 10K to 15K points. Am I missing a setting?
I looked at the transformation matrices while keeping the device fixed, using an area_learn update method with no ADF loaded. I see non-zero offsets in the translation values; I expected 0 offsets.
Is there a published motion estimation performance document for Tango that specifies the latency and performance of the IMU + ADF? I am looking for detailed test information.
Thanks
You are right about the resolution of the depth camera, and your results align with mine. Depending on where the depth camera is pointing, I'll get between 5K and 12K points. Scanning the floor surface generates more points since it is flat and uniform.
I think that you are experiencing drift. This is expected when not using Area Learning (no ADF loaded). There is a known issue of drift occurring because of Android 4.4 (source: https://plus.google.com/+ChuckKnowledge/posts/dVk4ZgVikgT).
Loading an ADF should help with this, but I wouldn't expect it to be perfect.
I don't know about this. Sorry!
