TangoImageBuffer RAW/YUV failed to match frame

I use the OnFrameAvailable callback to sync RGB with depth data. If I just render the RGB point cloud, everything is fine.
But if I do some image processing, the camera logs the following errors:
E/camera-metadata: /home/ubuntu/jobs/redwood_internal/RedwoodInternal/Redwood/common/player-engine/src/camera-metadata.cc:56 RAW failed to match frame
or
E/camera-metadata: /home/ubuntu/jobs/redwood_internal/RedwoodInternal/Redwood/common/player-engine/src/camera-metadata.cc:56 YUV failed to match frame
And the TangoImageBuffer contains complete garbage: sometimes black pixels, sometimes a buffer that is half old and half new pixel data.
I tried to solve it with threading: every time I get a new point cloud, a separate image processing thread does the extra work, which needs about 1 second of CPU time. That helped a little, but after a few seconds the same behavior happened again.
The problem is that I can't debug the native code properly. The Android Studio monitor shows normal CPU and GPU usage.
I've seen that user guppy had this problem with the Leibniz Tango release, but no solution was posted. So I hope someone else has managed to solve this problem, or has any suggestions?
EDIT
The behavior disappeared after using the tango_support library to copy the XYZ and YUV buffers.

The "YUV failed to match frame" is most likely caused by the callback thread is taking too much time to execute. In short, you shouldn't do heavy processing in the OnFrameAvailable callback. This also applies to all other Tango callbacks, i.e pose or depth callback.
The solution to this would be copying out the byte buffer data and process it in another thread, potentially, the render thread. In the tango-example-c video-overlay-jni-example, the application does a memcpy to copy the data from callback thread to the render thread, so the processing of the data would not block the callbacks keep coming. See this line.
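For illustration, here is a minimal sketch of that pattern, assuming the Tango C client headers; the worker thread that consumes the copy is omitted, and the NV21 size formula is an assumption about the buffer format:

```cpp
#include <cstddef>
#include <cstdint>
#include <mutex>
#include <vector>

#include <tango_client_api.h>  // TangoCameraId, TangoImageBuffer

static std::mutex g_image_mutex;
static std::vector<uint8_t> g_image_copy;  // our own copy of the pixels
static TangoImageBuffer g_image_meta;      // width/height/stride/timestamp
static bool g_new_image = false;           // polled by the worker thread

// Registered via TangoService_connectOnFrameAvailable(). Copy and return;
// never do heavy processing on this thread.
void OnFrameAvailable(void* /*context*/, TangoCameraId /*id*/,
                      const TangoImageBuffer* buffer) {
  // NV21: full-resolution luma plane plus half-resolution interleaved chroma.
  const size_t size = buffer->stride * buffer->height * 3 / 2;
  std::lock_guard<std::mutex> lock(g_image_mutex);
  g_image_copy.assign(buffer->data, buffer->data + size);
  g_image_meta = *buffer;  // keep the metadata, not the original data pointer
  g_new_image = true;
}
```

The worker (or render) thread then locks the mutex, swaps the copy into its own buffer, clears the flag, and processes the frame at its leisure.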

Related

eglDestroyContext and unfinished drawcalls

Consider the following scenario:
I have an OpenGL ES app that renders frames via draw calls; the end of a frame is marked by eglSwapBuffers. Now imagine that after the last eglSwapBuffers I immediately call eglMakeCurrent to unbind the context from the surface, and then immediately call eglDestroyContext. Assuming the context was the last one holding references to any resources the draw calls use (shaders, buffers, textures, etc.), what happens to the draw calls that the GPU has not yet finished and that still use some of these resources?
Regards.
then immediately call eglDestroyContext().
All this really says is that the application is done with the context and promises not to use it again. The actual context includes a reference count held by the graphics driver, and that won't drop to zero until all pending rendering has actually completed.
TL;DR: despite what the APIs say, nothing actually happens "immediately" when you make an API call. It's an elaborate illusion that is mostly a complete lie.
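In code, the sequence from the question looks like this (a minimal sketch; the function name is illustrative):

```cpp
#include <EGL/egl.h>

void ShutDown(EGLDisplay display, EGLSurface surface, EGLContext context) {
  eglSwapBuffers(display, surface);  // marks the end of the last frame
  // Unbind the context from this thread and the surface.
  eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
  // Returns immediately, but only drops the application's reference; the
  // driver destroys the context once pending draw calls have completed.
  eglDestroyContext(display, context);
}
```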

Use Custom Image Recognition Collection locally

I have created a Custom Image Recognition collection on IBM Cloud and am using it in my Django website to do the processing. However, I noticed that the response time ranges from 6 to 14 seconds.
I want to reduce this turnaround time. I am already zipping the image file that I send. So when going through the API reference document here on IBM Cloud, I noticed that there is a method called "get_model_file" which downloads the collection file locally.
But there is no documentation on how this can be used. Has anyone successfully implemented this, or am I missing something here?
However, I noticed that the response time ranges from 6 to 14 seconds.
I want to reduce this turnaround time. I am already zipping the image file that I sent.
How many images at a time are you sending in the zip file to the /analyze endpoint? If you are just sending one image at a time, you should not bother zipping it. Also, if you can, you should parallelize your code so that you make one request per image, rather than sending, say, 6 images in a single zip file. This will reduce the latency.
By the way, when using the v4 API you should resize your images to no more than 300 pixels in either width or height. In fact, you can "squash" the aspect ratio to square and it will not affect the outcome. The service does this resizing internally anyhow, but if you do it on the client side, you save network transmission and decoding time.
With a single image at a time, if your resolution is under 300x300 pixels, you should see latency under 1.5 seconds on a typical call, including your network transmission time.
As the documentation states:
Currently, the model format is specific to Android apps.
So unless you are creating an Android app, this is not going to work for you.
You probably have two areas of latency. The first is from the browser to your Django app; the second is from your Django app to the Visual Recognition service. I am not sure where you have hosted the Django app, but if you locate it in the same region as the service (the same data centre would be even better), you might be able to reduce part of the latency.

PushSource filter pushing samples too fast for the renderer

I'm a bit new to DirectShow. I'm using the PushSource filter sample provided with DirectShow to push a sequence of bmp images into an avi file. But before that, I'm trying to see whether the filter is able to render the samples. The renderer displays just the first frame, though the filter is running properly and filling the buffers; I put printf at various stages to see the flow.
I feel that PushSource is running too fast and the renderer is getting hung.
Please provide some suggestions on how to synchronize the two.
Also let me know if I'm missing something.
You are likely missing time stamps: you are omitting them, or possibly leaving garbage where correct values are expected. You will want the pushing filter to stamp samples correctly, so that multiplexers and renderers have no doubts about sample presentation times.
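As a sketch of what that means in a CSourceStream-derived pin (DirectShow base classes; CPushPin, m_iFrameNumber, and m_rtFrameLength are illustrative names, similar to the state the PushSource sample keeps):

```cpp
#include <streams.h>  // DirectShow base classes

HRESULT CPushPin::FillBuffer(IMediaSample* pSample) {
  // ... copy the next bitmap into pSample's data buffer here ...

  // Stamp the sample so downstream filters know when to present it.
  // For 30 fps, m_rtFrameLength = UNITS / 30 (UNITS = 10,000,000 per second).
  REFERENCE_TIME rtStart = m_iFrameNumber * m_rtFrameLength;
  REFERENCE_TIME rtStop = rtStart + m_rtFrameLength;
  pSample->SetTime(&rtStart, &rtStop);
  pSample->SetSyncPoint(TRUE);  // every uncompressed frame is a key frame
  m_iFrameNumber++;
  return S_OK;
}
```

With valid time stamps the video renderer paces presentation itself, so the source "running too fast" stops being a problem.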

AudioQueueOutputCallback not called at first

My question may be similar to this: Why might my AudioQueueOutputCallback not be called?
It seems that person was able to fix it by running the audio code on the main thread. I cannot do that.
I enqueue buffers to prime the audio queue, then start the queue. Shouldn't those buffers complete immediately once I start the queue?
I am setting the data size correctly.
As a hack, I just re-use buffers without waiting for the callback to report them as done. If I do this, it runs like that for a couple of seconds, and then the buffer callback starts working from then on.
It's definitely not a good idea to hack your way around Core Audio. While it may be a quick fix, it will definitely hurt you in ambiguous ways in the long run.
Your problem isn't the same as the one in the link you posted: their problem was assigning the callback on the wrong thread. In your case, your callback is on the right thread; it's just that the audio buffers you are feeding it initially are either empty, too small, or contain data not fit for audio playback.
Keep in mind that the purpose of the callback is to fire after each audio buffer supplied to the audio queue has been played (i.e. consumed). The fact that the callback isn't being fired after you start the queue means that there is nothing in the audio buffers for it to consume, or too little meaningful information.
When you do it manually, you see a lag because the audio queue is first trying to process the empty/erroneous buffers you supplied. You then resupply the same buffers with valid data, which the queue eventually plays, and then it fires the callback.
Solution: compare the data you put in the buffers before starting the queue with the data you are supplying manually; I'm sure there is a difference. If that doesn't work, please show your code for further analysis.
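For comparison, this is roughly what correct priming looks like (a sketch; FillWithAudio is a hypothetical placeholder that writes real PCM and returns the byte count):

```cpp
#include <AudioToolbox/AudioToolbox.h>

// Hypothetical: fills dest with playable PCM, returns bytes written (> 0).
UInt32 FillWithAudio(void* dest, UInt32 maxBytes);

enum { kNumPrimeBuffers = 3 };

void PrimeAndStart(AudioQueueRef queue, UInt32 bufferByteSize) {
  for (int i = 0; i < kNumPrimeBuffers; ++i) {
    AudioQueueBufferRef buffer;
    AudioQueueAllocateBuffer(queue, bufferByteSize, &buffer);
    // The output callback only fires once a buffer holding real audio is
    // consumed, so every primed buffer must contain playable data.
    buffer->mAudioDataByteSize =
        FillWithAudio(buffer->mAudioData, bufferByteSize);
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);
  }
  // The primed buffers now drain in order, and each completion fires the
  // AudioQueueOutputCallback, which refills and re-enqueues the buffer.
  AudioQueueStart(queue, NULL);
}
```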

OpenGL ES 2.0 multithreading

I have been trying to use OpenGL ES 2.0 to make a photo viewing application. To optimize the code, I'm swapping the textures loaded with the objects as the user scrolls down. But loading an image into a texture takes some time, and thus the effect is not good. To solve this problem I tried multithreading in the following two ways:
Create a separate context for the new thread and then share the resources (texture object) with the other context
Use multiple threads and a single context. Make the context current while executing gl commands in the threads.
But I wasn't successful with either of them. So if anyone has tried similar things with OpenGL before, could you please tell me which of the above approaches would work, and what I need to pay attention to while doing it? Also, would FBOs and pbuffers be of any use in this case?
Thanks for any help.
Himanshu
I would suggest keeping the OpenGL stuff in your primary thread, and delegating just the data loading to the worker thread.
Have, say, a circular buffer of image data objects that your worker thread can keep itself busy filling, but let your primary thread generate the textures from the image data as it needs them.
I don't think approach 1 is valid - you're not supposed to share resources across contexts.
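A minimal sketch of that suggestion (single GL context; DecodeImage is a hypothetical placeholder for your image loader):

```cpp
#include <GLES2/gl2.h>

#include <mutex>
#include <queue>
#include <string>
#include <vector>

struct DecodedImage {
  int width = 0, height = 0;
  std::vector<unsigned char> rgba;  // tightly packed RGBA8888 pixels
};

// Hypothetical decoder: reads and decodes an image file (no GL calls).
DecodedImage DecodeImage(const std::string& path);

std::mutex g_mutex;
std::queue<DecodedImage> g_ready;

// Worker thread: decoding only; it never touches OpenGL.
void LoaderThread(const std::vector<std::string>& paths) {
  for (const auto& path : paths) {
    DecodedImage img = DecodeImage(path);
    std::lock_guard<std::mutex> lock(g_mutex);
    g_ready.push(std::move(img));
  }
}

// GL thread: call once per frame; uploads at most one pending image per
// call so texture creation cost is spread across frames.
GLuint UploadOnePendingTexture() {
  DecodedImage img;
  {
    std::lock_guard<std::mutex> lock(g_mutex);
    if (g_ready.empty()) return 0;
    img = std::move(g_ready.front());
    g_ready.pop();
  }
  GLuint tex = 0;
  glGenTextures(1, &tex);
  glBindTexture(GL_TEXTURE_2D, tex);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.width, img.height, 0,
               GL_RGBA, GL_UNSIGNED_BYTE, img.rgba.data());
  return tex;
}
```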
I was very successful with something like your approach 2. My app, iSynth, does a lot of texture creation on the fly at the same time the app is handling network requests and trying to render a decent number of frames per second. I use two threads for OpenGL and they share a context.
One of these is a dedicated render thread. At the high level, it's a while loop that repeatedly calls my rendering code using whatever textures are available at the time.
The other does nothing but create texture resources for the render thread to use. Images are continuously downloaded on the main thread and queued up - this thread's sole mission in life is to eat through that queue and output GL textures.
This approach works well because one thread is strictly a producer and the other a consumer. I could foresee lots of ugly problems if you start trying to create textures on the render thread, or modify GL state on the texture thread. If you're having problems, I'd make sure you're properly setting up the shared context and making that context current on each thread. Also, if a thread utilizing that context goes away, I've found that it's necessary to call setCurrentContext: nil from that thread first.
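That setup is iOS-specific (EAGLContext and sharegroups). As a rough sketch of the same idea in EGL terms, the loader thread gets its own context created with the render context as share_context:

```cpp
#include <EGL/egl.h>

// Illustrative names. Textures created in the returned context are visible
// to renderContext, so the loader thread can produce textures for the
// render thread to consume.
EGLContext CreateLoaderContext(EGLDisplay display, EGLConfig config,
                               EGLContext renderContext) {
  const EGLint attribs[] = {EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE};
  return eglCreateContext(display, config, renderContext, attribs);
}
```

Each thread then calls eglMakeCurrent with its own context (the loader thread typically with a small pbuffer surface, which also answers the question about pbuffers being useful here).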
