Google Tango C API connected onFrameAvailable callback never called - google-project-tango

I'm trying to retrieve the TANGO_CAMERA_COLOR image pixel data by registering a callback function using
static void onFrameAvailable( void* context, const TangoCameraId id, const TangoImageBuffer* buffer )
{
    LOGVI( "\nGOOGLE TANGO FRAME AVAILABLE" );
}
...
if( TangoService_connectOnFrameAvailable( TANGO_CAMERA_COLOR, NULL, onFrameAvailable ) == TANGO_SUCCESS )
{
    LOGVI( "\nGOOGLE TANGO ONFRAMEAVAILABLE CONNECTED" );
}
I actually get my success log as output but the callback function is never called.
output:
01-13 12:04:02.655: I/tango_client_api(187): Tango Service: connect, internal status 0
01-13 12:04:02.655: I/VR(8529): GOOGLE TANGO CONNECTED
01-13 12:04:02.655: I/VR(8529): GOOGLE TANGO EVENT HANDLER CONNECTED
01-13 12:04:02.656: I/tango_client_api(187): Tango Service: getCameraIntrinsics, internal status 0
01-13 12:04:02.656: I/VR(8529): GOOGLE TANGO CAMERA IMAGE WIDTH 1280
01-13 12:04:02.656: I/VR(8529): GOOGLE TANGO CAMERA IMAGE HEIGHT 720
01-13 12:04:02.656: I/VR(8529): GOOGLE TANGO CAMERA IMAGE FOCAL X 1039.630000
01-13 12:04:02.656: I/VR(8529): GOOGLE TANGO CAMERA IMAGE FOCAL Y 1039.900000
01-13 12:04:02.656: I/VR(8529): GOOGLE TANGO CAMERA IMAGE PRINCIPAL X 634.922000
01-13 12:04:02.656: I/VR(8529): GOOGLE TANGO CAMERA IMAGE PRINCIPAL Y 362.981000
01-13 12:04:02.656: I/tango_client_api(187): Tango Service: connectSurface, internal status 0
01-13 12:04:02.657: I/VR(8529): GOOGLE TANGO ONFRAMEAVAILABLE CONNECTED
01-13 12:04:02.694: I/chromium(8529): [INFO:async_pixel_transfer_manager_android.cc(56)] Async pixel transfers not supported
01-13 12:04:02.823: I/Camera3-OutputStream(168): enqueue first Preview frame or first video frame
01-13 12:04:02.825: I/native-camera(187): Dropping VGA with too much latency 92240.015085 > 0.000000 + 2 * 0.033328
01-13 12:04:02.899: I/native-camera(187): VGA Stats: 3 frames, 0 locked_buffer_drops, 0 sensor_hub_drops, 1 ambiguous, 0 feature_tracking_drops
01-13 12:04:02.899: I/native-camera(187): COLOR Stats: 0 frames, 0 locked_buffer_drops, 0 sensor_hub_drops, 0 ambiguous
... what follows are only event notifications such as FisheyeUnderExposed and TooFewFeaturesTracked.
I'm using the Descartes release of the C API.
The permissions are defined in the manifest.
What is necessary for the callback to work (init, connecting, ...)?

One thing worth checking is the permission. To use the color camera, you need to add the camera permission to the manifest file, for example as shown below.
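A minimal sketch of what that entry might look like, assuming the standard Android camera permission is the missing one (your Tango release may require additional permissions):

<!-- AndroidManifest.xml: allow access to the color camera (assumed standard Android permission) -->
<uses-permission android:name="android.permission.CAMERA" />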

Try building the augmented reality C sample. I hacked it up (poorly) to do exactly this and it worked OK: AcquireTangoImage gets called (I chopped the guts out because the call happens and then I fail to update the texture :-)). ConnectTexture is part of the original sample; I just added the secondary connectOnFrameAvailable. Note that, at least in my experience, as soon as you start getting callbacks you stop getting texture updates :-(
GLuint cached_single_texture_id;

void AcquireTangoImage(void *context, TangoCameraId id, const TangoImageBuffer *buffer)
{
    // 'context' is whatever was passed to TangoService_connectOnFrameAvailable (here: the TangoData instance)
    TangoData* td = (TangoData*)context;
    // the frame's pixels are available in buffer->data while this callback runs
}
void TangoData::ConnectTexture(GLuint texture_id) {
    cached_single_texture_id = texture_id;
    if (TangoService_connectTextureId(TANGO_CAMERA_COLOR, texture_id, nullptr,
                                      nullptr) == TANGO_SUCCESS) {
        LOGI("TangoService_connectTextureId(): Success!");
    } else {
        LOGE("TangoService_connectTextureId(): Failed!");
    }
    if (TangoService_connectOnFrameAvailable(TANGO_CAMERA_COLOR, this, AcquireTangoImage) == TANGO_SUCCESS) {
        LOGI("TangoService_connectOnFrameAvailable(): Success!");
    } else {
        LOGE("TangoService_connectOnFrameAvailable(): Failed!");
    }
}

Related

How to mix / combine multiple WebRTC media streams (screen capture + webcam) into a single stream?

I have a live screen capture media stream returned from getDisplayMedia(),
and a live webcam media stream returned from getUserMedia().
I currently render the webcam video on top of the screen share video to create a picture-in-picture effect.
I want to mix / combine them into one video stream, in order to render it inside a single HTML video element.
I want to keep both streams active & live just as if they were two separate videos rendering two different media streams.
I also need to maintain the streams positions - keeping the webcam stream small and on top of the screen share stream. It's also really important for me to keep the original resolution and bitrate.
How can I do that?
You can draw both video streams on an HTMLCanvasElement, and then create a MediaStream from this HTMLCanvasElement calling its captureStream method.
To draw the two video streams, you'd have to read them into <video> elements and drawImage() these <video> elements onto the canvas in a timed loop (e.g. requestAnimationFrame can be used for this timed loop).
async function getOverlayedVideoStreams( stream1, stream2 ) {
  // prepare both players
  const vid1 = document.createElement("video");
  const vid2 = document.createElement("video");
  vid1.muted = vid2.muted = true;
  vid1.srcObject = stream1;
  vid2.srcObject = stream2;
  await Promise.all( [
    vid1.play(),
    vid2.play()
  ] );
  // create the renderer
  const canvas = document.createElement("canvas");
  let w = canvas.width = vid1.videoWidth;
  let h = canvas.height = vid1.videoHeight;
  const ctx = canvas.getContext("2d");
  // MediaStreams can change size while streaming, so we need to handle it
  vid1.onresize = (evt) => {
    w = canvas.width = vid1.videoWidth;
    h = canvas.height = vid1.videoHeight;
  };
  // start the animation loop
  anim();
  return canvas.captureStream();

  function anim() {
    // draw bg video
    ctx.drawImage( vid1, 0, 0 );
    // calculate size and position of the small corner-vid (you may change it as you like)
    const cam_w = vid2.videoWidth;
    const cam_h = vid2.videoHeight;
    const cam_ratio = cam_w / cam_h;
    const out_h = h / 3;
    const out_w = out_h * cam_ratio;
    ctx.drawImage( vid2, w - out_w, h - out_h, out_w, out_h );
    // do the same thing again at next screen paint
    requestAnimationFrame( anim );
  }
}
Live demo hosted on Glitch, since Stack Snippets won't allow the capture APIs.
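For reference, a minimal usage sketch; the target <video> selector is an assumption, and this has to run in an async context after the user grants both permissions:

async function start() {
  // capture the screen and the webcam (both prompts are shown to the user)
  const screenStream = await navigator.mediaDevices.getDisplayMedia({ video: true });
  const camStream = await navigator.mediaDevices.getUserMedia({ video: true });
  // mix them with the function above and show the result in a single <video> element
  const mixedStream = await getOverlayedVideoStreams(screenStream, camStream);
  document.querySelector("video").srcObject = mixedStream;
}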
I tried rendering on canvas, but the refresh rate drops whenever I change or minimize my web-app tab. I was able to fix that using audioTimerLoop.
But it still didn't work on Firefox.
As I was now limited to Chrome, I used the Picture-in-Picture API to display the userMedia as a PiP and then just recorded the screenMedia.
This lets the user adjust their cam video coordinates, which were fixed in the canvas method.
P.S. To get PiP mode, I displayed the userMedia on screen, set its opacity to 0%, and created a button to toggle it, roughly like the sketch below.
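A rough sketch of that toggle; the element IDs and the camStream variable are assumptions, and requestPictureInPicture() is Chromium-only at the time of writing:

// a hidden webcam <video> (e.g. styled with opacity: 0) and a toggle button are assumed to exist
const camVideo = document.getElementById("cam");
const pipButton = document.getElementById("togglePip");
camVideo.muted = true;
camVideo.srcObject = camStream;   // the getUserMedia() stream
camVideo.play();                  // PiP needs a video with loaded, playing media
pipButton.onclick = async () => {
  if (document.pictureInPictureElement) {
    await document.exitPictureInPicture();    // leave PiP if it is already active
  } else {
    await camVideo.requestPictureInPicture(); // pop the webcam out as a PiP window
  }
};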

Unity windowed mode size is different on different screen resolutions

I want the window in the build to be a certain size, and it works great when displayed on a 1920 x 1080 screen resolution; anything more or less than that and the window becomes too big or too small. Is there any way to keep the same window-to-screen size ratio on every resolution?
I have used the following settings (screenshot of my build settings):
AFAIK you can set the resolution depending on the display's screen size using Screen.currentResolution and Screen.SetResolution, somewhat like this:
public class ScreenSizeController : MonoBehaviour
{
    // how much space (percentage) of the screen your window should fill
    [Range(0f, 1f)]
    public float fillX;
    [Range(0f, 1f)]
    public float fillY;

    private void Awake()
    {
        // get the actual display resolution
        var res = Screen.currentResolution;

        // calculate the target resolution using the fill factors
        // (SetResolution expects ints, so round the result)
        var targetX = Mathf.RoundToInt(fillX * res.width);
        var targetY = Mathf.RoundToInt(fillY * res.height);

        // set the player window resolution (windowed, not fullscreen)
        Screen.SetResolution(targetX, targetY, false);
    }
}
Note: typed on a smartphone, but I hope the idea is clear.
Wouldn't changing the screen width/height in the Resolution and Presentation settings to 1920 x 1080 fix it?

Streaming UYVY video using SDL

I am writing a program to capture image data from a camera using v4l2. I have grabbed 4 frames and stored them in a buffer of contiguous memory. The image data in the buffer is in UYVY format. I need help with the steps to map or copy the image buffer to a texture so that I can stream the frames like a video.
I tried converting the UYVY frames into BMP format and streaming them using the SDL_LoadBMP() function, but the frame rate is very low.
EDIT: Here's the code for SDL:
buf is the image buffer, which contains BMP image data; I converted the UYVY frames to BMP and pass them here. I need help streaming the UYVY data directly.
void video_stream(unsigned char* buf){
    SDL_Surface *image;
    SDL_Renderer *renderer;
    SDL_Texture *texture;
    SDL_Window *window;
    SDL_RWops *rw;
    SDL_Rect recta;

    SDL_Init(SDL_INIT_EVERYTHING);
    recta.x = 0;
    recta.y = 0;
    recta.w = s_format.fmt.pix.width;
    recta.h = s_format.fmt.pix.height;
    window = SDL_CreateWindow("Streaming", 0, 0, s_format.fmt.pix.width, s_format.fmt.pix.height, SDL_WINDOW_RESIZABLE);
    rw = SDL_RWFromMem(buf, s_format.fmt.pix.width*s_format.fmt.pix.height*NUMBER_OF_BYTES_PER_PIXEL+OFFSET);
    renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);
    SDL_RenderSetLogicalSize(renderer, s_format.fmt.pix.width, s_format.fmt.pix.height);
    SDL_SetRenderDrawColor(renderer, 0, 0, 0, 50);
    SDL_RenderClear(renderer);
    image = SDL_LoadBMP_RW(rw, 1);
    texture = SDL_CreateTextureFromSurface(renderer, image);
    SDL_RenderCopy(renderer, texture, NULL, &recta);
    SDL_RenderPresent(renderer);
    SDL_FreeSurface(image);
    SDL_DestroyTexture(texture);
    SDL_DestroyRenderer(renderer); /* destroy the renderer only once, after the texture */
    SDL_DestroyWindow(window);
    SDL_Quit();
}
Create the texture with SDL_CreateTexture() and SDL_PIXELFORMAT_UYVY, upload new frames with SDL_UpdateTexture(), and render with SDL_RenderCopy().
You can use SDL_TEXTUREACCESS_STREAMING and SDL_LockTexture()/SDL_UnlockTexture() to speed up texture updates.
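A minimal sketch of that approach, assuming the s_format and buf from the question's code (for UYVY the pitch is width * 2 bytes):

/* create the streaming texture once, matching the camera's UYVY format */
SDL_Texture *uyvy_tex = SDL_CreateTexture(renderer,
                                          SDL_PIXELFORMAT_UYVY,
                                          SDL_TEXTUREACCESS_STREAMING,
                                          s_format.fmt.pix.width,
                                          s_format.fmt.pix.height);

/* for every captured frame: upload the raw UYVY bytes and present them */
SDL_UpdateTexture(uyvy_tex, NULL, buf, s_format.fmt.pix.width * 2);
SDL_RenderClear(renderer);
SDL_RenderCopy(renderer, uyvy_tex, NULL, NULL);
SDL_RenderPresent(renderer);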

GraphicsView fitInView() very pixelated result when downshrinking

I have searched everywhere and I cannot find any solution after two days of trying.
The problem:
I'm writing an image viewer with a "Fit Image to View" feature. I load a picture of, say, 3000+ pixels into my GraphicsView (which is a lot smaller, of course), and scrollbars appear; that's good. Then I click my btnFitView, which executes:
ui->graphicsView->fitInView(scene->sceneRect(), Qt::KeepAspectRatio);
This is downscaling, right? After fitInView() all lines are pixelated; it looks like a saw went over the lines in the image.
For example: an image of a car has jagged lines, and in an image of a textbook the letters end up in very bad quality.
My code sample:
// select file, load image in view
QString strFilePath = QFileDialog::getOpenFileName(
            this,
            tr("Open File"),
            "/home",
            tr("Images (*.png *.jpg)"));

imageObject = new QImage();
imageObject->load(strFilePath);
image = QPixmap::fromImage(*imageObject);
scene = new QGraphicsScene(this);
scene->addPixmap(image);
scene->setSceneRect(image.rect());
ui->graphicsView->setScene(scene);

// on_btnFitView_Clicked():
ui->graphicsView->fitInView(scene->sceneRect(), Qt::KeepAspectRatio);
Just before fitInView(), the sizes are:
qDebug()<<"sceneRect = "<< scene->sceneRect();
qDebug()<<"viewRect = " << ui->graphicsView->rect();
sceneRect = QRectF(0,0 1000x750)
viewRect = QRect(0,0 733x415)
If necessary I can upload screenshots of the original loaded image and the fitted-in-view result.
Am I doing this right? It seems all examples on the Web use fitInView for auto-fitting. Should I perhaps apply some other operation to the pixmap?
SOLUTION
// LOAD IMAGE
bool ImgViewer::loadImage(const QString &strImagePath)
{
    m_image = new QImage(strImagePath);
    if(m_image->isNull()){
        return false;
    }
    clearView();
    m_pixmap = QPixmap::fromImage(*m_image);
    m_pixmapItem = m_scene->addPixmap(m_pixmap);
    m_scene->setSceneRect(m_pixmap.rect());
    this->centerOn(m_pixmapItem);
    // preserve fitView if active
    if(m_IsFitInView)
        fitView();
    return true;
}
// TOGGLED FUNCTIONS
void ImgViewer::fitView()
{
    if(m_image->isNull())
        return;

    this->resetTransform();
    // use a local copy of the original pixmap, otherwise the image gets blurred
    // after scaling the same (already scaled) pixmap multiple times
    QPixmap px = m_pixmap;
    px = px.scaled(QSize(this->width(), this->height()), Qt::KeepAspectRatio, Qt::SmoothTransformation);
    m_pixmapItem->setPixmap(px);
    m_scene->setSceneRect(px.rect());
}

void ImgViewer::originalSize()
{
    if(m_image->isNull())
        return;

    this->resetTransform();
    m_pixmap = m_pixmap.scaled(QSize(m_image->width(), m_image->height()), Qt::KeepAspectRatio, Qt::SmoothTransformation);
    m_pixmapItem->setPixmap(m_pixmap);
    m_scene->setSceneRect(m_pixmap.rect());
    this->centerOn(m_pixmapItem); // ensure the item is centered in the view
}
On downscaling this produces good quality. Here are some stats after calling these two functions:
// "originalSize()" : IMAGE SIZE = (1152, 2048)
// "originalSize()" : PIXMAP SIZE = (1152, 2048)
// "originalSize()" : VIEW SIZE = (698, 499)
// "originalSize()" : SCENE SIZE = (1152, 2048)
// "fitView()" : IMAGE SIZE = (1152, 2048)
// "fitView()" : PIXMAP SIZE = (1152, 2048)
// "fitView()" : VIEW SIZE = (698, 499)
// "fitView()" : SCENE SIZE = (280, 499)
There is a problem now: after the call to fitView(), look at the scene size. It is much smaller.
And if fitView() is active and I then scale the image on wheelEvent (zoom in/out) with the view's scale function, scale(factor, factor), the result is terrible.
This doesn't happen with originalSize() where scene size is equal to image size.
Think of the view as a window into the scene.
Moving the view large amounts, either zooming in or out, will likely create images that don't look great. Rather than the image being scaled as you would expect, the view is just moving away from the scene and doing its best to render the image, but the image has not been scaled, just transformed in the scene.
Rather than using QGraphicsView::fitInView, keep the main image in memory and create a scaled version of it with QPixmap::scaled each time Fit In View is selected or the user zooms in or out. Then set this QPixmap on the QGraphicsPixmapItem with setPixmap.
You may also want to think about dropping the scroll bars and allowing the user to drag the image around the screen, which provides a better user interface in my opinion; though of course it depends on your requirements.

How do I draw video frames onto the screen permanently using XNA?

I have an app that plays back a video and draws the video onto the screen at a moving position. When I run the app, the video moves around the screen as it plays. Here is my Draw method...
protected override void Draw(GameTime gameTime)
{
    Texture2D videoTexture = null;

    if (player.State != MediaState.Stopped)
        videoTexture = player.GetTexture();

    if (videoTexture != null)
    {
        spriteBatch.Begin();
        spriteBatch.Draw(
            videoTexture,
            new Rectangle(x++, 0, 400, 300), /* where x is a class member */
            Color.White);
        spriteBatch.End();
    }

    base.Draw(gameTime);
}
The video moves horizontally across the screen. This is not exactly what I expected, since I have no lines of code that clear the screen. My question is: why does it not leave a trail behind?
Also, how would I make it leave a trail behind?
I think the video moves horizontally because in the new Rectangle(...) declaration you have "x++", which increments the rectangle's x position on every draw cycle. To leave a trail behind (i.e. to keep the previous draws instead of losing them), you need to use the back buffer with render targets: keep a copy of the previous frame and draw over it again.
That's how I would try it; see the sketch below.
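A rough sketch of that render-target idea for XNA 4.0 (the field names are assumptions; the key is RenderTargetUsage.PreserveContents so the target keeps what was drawn in earlier frames):

// one-time setup, e.g. in LoadContent()
RenderTarget2D trailTarget = new RenderTarget2D(
    GraphicsDevice,
    GraphicsDevice.PresentationParameters.BackBufferWidth,
    GraphicsDevice.PresentationParameters.BackBufferHeight,
    false, SurfaceFormat.Color, DepthFormat.None, 0,
    RenderTargetUsage.PreserveContents);   // do NOT discard previous contents

// in Draw(): accumulate the current video frame into the persistent target
GraphicsDevice.SetRenderTarget(trailTarget);
spriteBatch.Begin();
spriteBatch.Draw(videoTexture, new Rectangle(x++, 0, 400, 300), Color.White);
spriteBatch.End();

// then draw the accumulated target onto the back buffer
GraphicsDevice.SetRenderTarget(null);
spriteBatch.Begin();
spriteBatch.Draw(trailTarget, Vector2.Zero, Color.White);
spriteBatch.End();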
