I tested Camera API v1 and then API v2, and the results are dramatically different: Camera API v1 gives me 12-15 fps (NV21), while v2 gives only 4-5 fps (YUV_420_888) at the same 640x480 resolution. I thought it would be faster... Why does this happen, and is it possible to fix?
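For reference, here is a minimal sketch of how 640x480 YUV_420_888 frames are typically received with Camera2 via an ImageReader; this is illustrative, not necessarily how the code behind the question is set up. One common cause of the frame rate collapsing to a few fps is holding acquired Image objects too long (or using too small a maxImages), which stalls the capture pipeline.

```java
import android.graphics.ImageFormat;
import android.media.Image;
import android.media.ImageReader;
import android.os.Handler;

class PreviewReaderFactory {
    // backgroundHandler: a Handler bound to a worker thread (assumed to exist elsewhere)
    static ImageReader create(Handler backgroundHandler) {
        ImageReader reader = ImageReader.newInstance(640, 480, ImageFormat.YUV_420_888, 4);
        reader.setOnImageAvailableListener(r -> {
            Image image = r.acquireLatestImage();
            if (image == null) return;
            try {
                // Read the Y/U/V planes here; keep this work as cheap as possible.
            } finally {
                image.close(); // release the buffer promptly, or the camera pipeline stalls
            }
        }, backgroundHandler);
        return reader;
    }
}
```

The reader's getSurface() then goes into the repeating preview request as a target.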
EDIT: This is not a code bottleneck. Profiling shows identical CPU usage regardless of the fluctuating GPU time. Also, I incorrectly talked about glBufferData when I first posted; it's glBufferSubData.
As part of an optimisation drive for the Android build of our app, I reorganised the render pipeline of our engine in a way that exposed a very significant performance bottleneck on certain iOS devices under iOS 11.
Our engine manages a mixed scene of 2D geometry (UI) and 3D static models. The 2D geometry is dynamic, and generated each frame.
Until recently, every time a 2D draw call had to be issued (change in texture, change in blend mode or shader, etc.), the sprite data generated so far would be uploaded via glBufferSubData and then rendered. This was fine on iOS, less so on Android.
To improve performance on Android I made the engine put all the 2D geometry into a single data buffer, upload it to a single VBO with one glBufferSubData call at the start of the next frame, and then execute draw calls using sub-sections of that buffer. This led to a significant boost on Android devices, and made no difference to some of our iOS test devices (e.g. iPod Touch 6 running iOS 10).
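To make that concrete, here is a minimal sketch of the batched path described above, written in Android Java (GLES20) purely for illustration; DrawRange and the per-range state handling are invented names, not the engine's actual API.

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;
import java.util.List;

class SpriteBatcher {
    // One sub-section of the shared VBO plus whatever texture/blend/shader state it needs.
    static class DrawRange { int firstVertex; int vertexCount; }

    // Called once per frame: upload everything generated last frame, then draw the ranges.
    void flush(int vbo, ByteBuffer vertexData, int bytesUsed, List<DrawRange> ranges) {
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo);
        vertexData.position(0);
        // Single upload for the whole frame's 2D geometry.
        GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, bytesUsed, vertexData);
        for (DrawRange r : ranges) {
            // Texture / blend mode / shader changes happen here, per range, as before.
            GLES20.glDrawArrays(GLES20.GL_TRIANGLES, r.firstVertex, r.vertexCount);
        }
    }
}
```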
However, on a 6S+ running iOS 11, the new pipeline increased the GPU time from a rock-stable 8 ms to a wildly fluctuating 10-14 ms, often pushing the game down to 30 fps instead of 60. This is not the first time something previously innocuous has become performance-killing with an iOS update.
I've now made the buffering-up a compile-time option and regained the lost performance on iOS. But if you are generating significant amounts of dynamic geometry and struggling with performance on iOS 11, you may see an improvement if you stop trying to batch glBufferSubData uploads and instead perform smaller ones interspersed with render calls.
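For contrast, the smaller interspersed uploads look roughly like this (same caveats as the sketch above: an illustrative GLES20 sketch, not the engine's actual code):

```java
import android.opengl.GLES20;
import java.nio.ByteBuffer;

class SpriteImmediate {
    // Called whenever a state change forces a draw: upload only this batch's vertices, then draw.
    void drawBatch(int vbo, ByteBuffer batchVertices, int bytesUsed, int vertexCount) {
        GLES20.glBindBuffer(GLES20.GL_ARRAY_BUFFER, vbo);
        batchVertices.position(0);
        GLES20.glBufferSubData(GLES20.GL_ARRAY_BUFFER, 0, bytesUsed, batchVertices);
        GLES20.glDrawArrays(GLES20.GL_TRIANGLES, 0, vertexCount);
    }
}
```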
I've been developing a virtual camera app for depth cameras and I'm extremely interested in the Tango project. I have several questions regarding the cameras on board. I can't seem to find these specs anywhere in the developer section or forums, so I understand completely if they can't be answered publicly. I thought I would ask regardless and see if the current device is suitable for my app.
Are the depth and color images from the RGB/IR camera captured simultaneously?
What frame rates is the RGB/IR camera capable of? E.g. 30, 25, 24? And at what resolutions?
Does the motion tracking camera run in sync with the RGB/IR camera? If not, what frame rate (or refresh rate) does the motion tracking camera run at? Also, if they do not run on the same clock, does the API expose a relative or an absolute timestamp for both cameras?
What manual controls (if any) are exposed for the color camera? Frame rate, gain, exposure time, white balance?
If the color camera is fully automatic, does it automatically drop its frame rate in low light situations?
Thank you so much for your time!
Edit: I'm specifically referring to the new tablet.
Some guessing
No, the actual image used to generate the point cloud is not the droid you want - I put up a picture on Google+ that shows what you get when you grab one of the images that has the IR pattern used to calculate depth (an aside - it looks suspiciously like a Sierpinski curve to me).
Image frame rate is considerably higher than point cloud frame rate, but seems variable - probably a function of the load that Tango imposes
Motion tracking, i.e. pose, is captured at a rate roughly 3x the point cloud rate.
Timestamps are done with a most fascinating double-precision number - in prior releases there were definitely artifacts/data in the LSBs of the double. I do a getPoseAtTime (callbacks are used for ADF localization) when I pick up a cloud, so supposedly I've got a pose aligned with the cloud (a rough sketch of that lookup is below). Images have very low timestamp correspondence with the pose and cloud data. It's very important to note that all three Tango streams (pose, image, cloud) return timestamps.
Don't know about camera controls yet - still wedging OpenCV into the cloud services :-) Low light will be interesting - anecdotal data indicates that Tango sees a wider visual spectrum than we do, which makes me wonder whether fiddling with the camera at the point of capture to change image quality, e.g. dropping the frame rate, might cause Tango problems.
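The pose/cloud alignment mentioned above, as a rough sketch against the Tango Java API from memory - treat the class and constant names as approximate:

```java
import com.google.atap.tangoservice.Tango;
import com.google.atap.tangoservice.TangoCoordinateFramePair;
import com.google.atap.tangoservice.TangoPoseData;
import com.google.atap.tangoservice.TangoXyzIjData;

class CloudPoseAligner {
    private final Tango tango;
    private final TangoCoordinateFramePair framePair = new TangoCoordinateFramePair(
            TangoPoseData.COORDINATE_FRAME_START_OF_SERVICE,
            TangoPoseData.COORDINATE_FRAME_DEVICE);

    CloudPoseAligner(Tango tango) { this.tango = tango; }

    // Called when a point cloud arrives (onXyzIjAvailable): look up the device pose at the
    // cloud's timestamp so the pose and the cloud refer to (roughly) the same instant.
    TangoPoseData poseForCloud(TangoXyzIjData cloud) {
        return tango.getPoseAtTime(cloud.timestamp, framePair);
    }
}
```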
I was looking for expressions for zoom in/out and pan.
Basically the use case is this: consider a rectangle of 1280x720 that I need to zoom in
to 640x480. The zoom time is configurable, say x seconds. The output of the expression should be all the intermediate rectangles (format = x, y, w, h) down to 640x480 at 30 fps, which means that if the zoom time is 5 seconds, I should get 150 output rectangles, well spaced and smooth (at 30 fps, total rectangles = 30 x 5).
After which, I'll crop them, rescale them all to a constant resolution, and finally feed them to the encoder.
The same requirement applies to zoom out and pan/scan.
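A sketch of one way to generate those rectangles: linear interpolation from the 1280x720 source to a centred 640x480 target over zoomTime * fps steps. The class and method names here are mine; swap the linear t for an easing curve if the motion needs to feel smoother, and pan is the same formula with only x/y changing.

```java
class ZoomPath {
    static class Rect {
        double x, y, w, h;
        Rect(double x, double y, double w, double h) { this.x = x; this.y = y; this.w = w; this.h = h; }
    }

    // Returns zoomTimeSeconds * fps intermediate rectangles, evenly spaced between from and to.
    static Rect[] rectangles(Rect from, Rect to, double zoomTimeSeconds, int fps) {
        int frames = (int) Math.round(zoomTimeSeconds * fps);   // e.g. 5 s * 30 fps = 150
        Rect[] out = new Rect[frames];
        for (int i = 1; i <= frames; i++) {
            double t = (double) i / frames;                      // 0..1, linear ramp
            out[i - 1] = new Rect(
                    from.x + (to.x - from.x) * t,
                    from.y + (to.y - from.y) * t,
                    from.w + (to.w - from.w) * t,
                    from.h + (to.h - from.h) * t);
        }
        return out;
    }

    public static void main(String[] args) {
        // Zoom from the full 1280x720 frame to a centred 640x480 window over 5 seconds at 30 fps.
        Rect[] path = rectangles(new Rect(0, 0, 1280, 720),
                                 new Rect(320, 120, 640, 480), 5.0, 30);
        System.out.println(path.length + " rectangles");         // prints "150 rectangles"
    }
}
```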
Thanks.
If you are using a mobile development platform (Xcode, the Android SDK), then gestures are built-in functions of the OS and are configurable through drag and drop.
If you're on a web development platform, I recommend jQuery plugins such as hammer.js or Touch Punch. You can find links to them on this question.
If you give more information about your platform, I'd be happy to give you more specific examples!
I'd like to track the position of the device with respect to an initial position with high accuracy (ideally) for motions at a small scale (say < 1 meter). The best bet seems to be using motionReading.SensorReading.DeviceAcceleration. I tried this, but ran into a few problems. Apart from the noisy readings (which I was expecting and can tolerate), I see some behaviors that are conceptually wrong. For example, if I start from rest, move the phone around and bring it back to rest - and in the process periodically update the velocity vector along all the dimensions - I would expect the magnitude of the velocity to be very small (ideally 0), but I don't see that. I have extensively reviewed the available help, including the official MSDN pages, but I don't see any examples where the position/velocity of the device is updated using the acceleration vector. Is the acceleration vector that the API returns (at least in theory) supposed to be the rate of change of velocity, or something else? (FYI - my device does not have a gyroscope, so the API is going to be the low-accuracy version.)
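To make that expectation concrete: if the reported DeviceAcceleration really were the rate of change of velocity, the naive integration below would return to roughly zero speed when the phone comes back to rest; in practice, sensor bias and noise accumulate and the estimate drifts. This is a plain sketch of that integration (the Windows Phone API itself is in C#; only the math is shown here).

```java
class VelocityIntegrator {
    double vx, vy, vz;                        // running velocity estimate (m/s)

    // ax/ay/az in m/s^2, dt = time since the previous reading in seconds
    void update(double ax, double ay, double az, double dt) {
        vx += ax * dt;                        // any constant bias in ax grows linearly in vx,
        vy += ay * dt;                        // and a linearly growing velocity error becomes a
        vz += az * dt;                        // quadratic error once integrated again to position
    }

    double speed() {
        return Math.sqrt(vx * vx + vy * vy + vz * vz);
    }
}
```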
I'm developing a card game in Android using SurfaceView and canvas to draw the UI.
I've tried to optimize everything as much as possible but I still have two questions:
During the game I'll need to draw 40 bitmaps (the 40 cards in the Italian deck). Is it better to create all the bitmaps in the onCreate method of my customized SurfaceView (storing them in an array), or to create them as needed (every time the user gets a new card, for example)?
I'm able to get over 90 fps on an old Samsung I5500 (528 MHz, with a QVGA screen), 60 fps on an Optimus Life (800 MHz and HVGA screen) and 60 fps with a Nexus One/Motorola Razr (1 GHz and dual-core 1 GHz with WVGA and qHD screens), but when I run the game on an Android tablet (Motorola Xoom, dual-core 1 GHz and 1 GB of RAM) I get only 30/40 fps... How is it possible that a 528 MHz CPU with 256 MB of RAM can handle 90+ fps and a dual-core processor can't handle 60 fps? I'm not seeing any GC calls at runtime....
EDIT: Just to clarify, I've tried both ARGB_8888 and RGB_565 without any change in performance...
Any suggestions?
Thanks
Some points for you to consider:
It is recommended not to create new objects while your game is running; otherwise, you may trigger unexpected garbage collections (there's a short sketch of this after these points).
Your FPS numbers don't sound right; you may have measurement errors. However, my guess is that you are resizing the images to fit the screen size, which affects the memory usage of your game and may cause slow rendering times on tablets.
You can use profiling tools to confirm: TraceView
OpenGL would be much faster
Last tip: don't draw overlapping cards if you can; draw only the visible ones.
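For the first point above (avoiding object creation at runtime), here is a sketch of how the 40 card bitmaps can be decoded once up front and only looked up during play; resource ids and class names are illustrative, not from the poster's project.

```java
import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

class CardAtlas {
    private final Bitmap[] cards = new Bitmap[40];

    // Build once, e.g. when the SurfaceView is created - never inside the draw loop.
    CardAtlas(Resources res, int[] cardResIds) {          // cardResIds: the 40 R.drawable ids
        for (int i = 0; i < cards.length; i++) {
            cards[i] = BitmapFactory.decodeResource(res, cardResIds[i]);
        }
    }

    Bitmap card(int index) {
        return cards[index];                               // no allocation or decoding per frame
    }
}
```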
Good Luck
OK, so it's better to create the bitmaps in the onCreate method; that is what I'm doing right now...
They are OK. I believe the 60 fps on some devices is just a restriction imposed by the manufacturers, since you won't find any advantage in getting more than 60 fps (I'm making this assumption because it doesn't change whether I render 1 card, 10 cards or no cards... the onDraw method is called 60 times per second, but if I add, for example, 50/100 cards it drops accordingly). I don't resize any card because I use the proper folder (mdpi, hdpi, etc.) for each device, so I get the exact size of the image without resizing it...
I've tried to look at it, but from what I understand all of the app's execution time is spent drawing the bitmaps, not resizing them or updating their positions. Here it is:
I know, but it would add complexity to the development, and I believe that using a canvas for 7 cards on the screen should be just fine...
I don't draw every card of the deck... I just swap bitmaps as needed :)
UPDATE: I've tried running the game on a Xoom 2, Galaxy Tab 7 Plus and Asus Transformer Prime and it runs just fine at 60 fps... Could it be just a problem with Tegra 2 devices?