How to capture images continuously and faster in J2ME

I am developing a J2ME application that has to continuously process images taken from the mobile camera. Its purpose is to detect the movement of the phone and move an object on the screen accordingly. I am using
videoControl.getSnapshot("encoding=png&width=100&height=100");
to take images, but the problem is that it takes about 4 seconds to capture each image, which is far too slow.
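For reference, my capture path looks roughly like the sketch below. It is simplified for illustration: the capture://video locator, the Canvas-based viewfinder and the timing log are just the standard MMAPI/JSR-135 route, and the exact details vary by handset.

    import java.io.IOException;

    import javax.microedition.lcdui.Canvas;
    import javax.microedition.lcdui.Graphics;
    import javax.microedition.lcdui.Image;
    import javax.microedition.media.Manager;
    import javax.microedition.media.MediaException;
    import javax.microedition.media.Player;
    import javax.microedition.media.control.VideoControl;

    // Canvas that shows the camera viewfinder and repeatedly calls getSnapshot()
    // on a worker thread, logging how long each capture takes.
    public class SnapshotCanvas extends Canvas implements Runnable {

        private Player player;
        private VideoControl videoControl;
        private volatile boolean running = true;
        private volatile Image lastFrame;

        public void startCapture() throws IOException, MediaException {
            player = Manager.createPlayer("capture://video");
            player.realize();

            videoControl = (VideoControl) player.getControl("VideoControl");
            videoControl.initDisplayMode(VideoControl.USE_DIRECT_VIDEO, this);
            videoControl.setVisible(true);

            player.start();
            new Thread(this).start();   // never call getSnapshot() on the UI thread
        }

        public void run() {
            while (running) {
                try {
                    long t0 = System.currentTimeMillis();
                    byte[] png = videoControl.getSnapshot(
                            "encoding=png&width=100&height=100");
                    long elapsed = System.currentTimeMillis() - t0;
                    System.out.println("snapshot took " + elapsed + " ms");

                    lastFrame = Image.createImage(png, 0, png.length);
                    repaint();   // movement detection on 'lastFrame' would go here
                } catch (MediaException e) {
                    // Some handsets refuse back-to-back snapshots; back off briefly.
                    try { Thread.sleep(250); } catch (InterruptedException ignored) {}
                }
            }
        }

        public void stopCapture() {
            running = false;
            if (player != null) {
                player.close();
            }
        }

        protected void paint(Graphics g) {
            g.setColor(0x000000);
            g.fillRect(0, 0, getWidth(), getHeight());
            if (lastFrame != null) {
                g.drawImage(lastFrame, 0, 0, Graphics.TOP | Graphics.LEFT);
            }
        }
    }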
Is there any way to capture images faster?
Since the mobile camera records video at 15 fps, can't we use that capability to capture images faster?
Currently I am targeting the Nokia E5 (the target can be changed if it is not supported).
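To clarify what I mean by the video-recording capability: in JSR-135 that path goes through RecordControl, roughly as sketched below. This is only an illustration (the in-memory buffer, the fixed recording duration and the capture locator are assumptions on my part), and it produces an encoded video stream rather than individual frames, so I would still need a way to decode frames out of it.

    import java.io.ByteArrayOutputStream;
    import java.io.IOException;

    import javax.microedition.media.Manager;
    import javax.microedition.media.MediaException;
    import javax.microedition.media.Player;
    import javax.microedition.media.control.RecordControl;

    // Records a short clip from the camera into memory. The result is an encoded
    // video stream (not individual frames), so frame extraction still needs a decoder.
    public class ClipRecorder {

        public byte[] recordClip(int millis) throws IOException, MediaException {
            Player player = Manager.createPlayer("capture://video");
            player.realize();

            RecordControl record = (RecordControl) player.getControl("RecordControl");
            if (record == null) {
                throw new MediaException("RecordControl not supported on this device");
            }

            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            record.setRecordStream(buffer);

            player.start();
            record.startRecord();
            try {
                Thread.sleep(millis);        // record for the requested duration
            } catch (InterruptedException ignored) {
            }
            record.stopRecord();
            record.commit();                 // flush the recorded data into the stream

            player.close();
            return buffer.toByteArray();
        }
    }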

Related

360 video output can't reach 60p

I'm trying to check 360 video performance with version 11.62465; the 360 video output can't reach 60p when we play an FHD 60p 360 video. The video decoder updates the video frame every 14 to 17 ms, but the application can't call SbPlayerGetCurrentFrame() every 16 ms, so frames are dropped. The drop-frame rate is nearly 20%. I tried using chrome://tracing/ to check the performance and found that the rasterizer sometimes takes 40 ms while the CPU duration is only 8 ms. Does this mean the GPU is not powerful enough? Will the Cobalt render thread be blocked by other modules?
If the CPU duration per frame is only 8 ms, then it does sound like the GPU is not powerful enough to process each video frame fast enough. The Cobalt render thread should never be blocked by other modules, though it may be worth double-checking that your implementation of SbPlayerGetCurrentFrame() is not itself taking a long time to return (perhaps it acquires a lock?).
You can use chrome://tracing/ to check the performance of the renderer while a non-360 FHD 60p video is playing, and compare that to its performance while a 360 video is playing. This will tell you whether the renderer's performance is affected by the decode-to-texture path or not.

What's the fastest way to access video pixels in AS3?

I would like to copy pixels from a 1080p video from one location to another efficiently/with as little CPU impact as possible.
So far my implementation is fairly simple:
using BitmapData's draw() method to grab the pixels from the video
using BitmapData's copyPixels() to shuffle pixels about
Ideally this would have as little CPU impact as possible, but I am running out of options and could really use some tips from experienced ActionScript 3 developers.
I've profiled my code with Scout and noticed the CPU usage is mostly around 70% but goes above 100% quite a bit. I've looked into StageVideo but one of the main limitations is this:
The video data cannot be copied into a BitmapData object (BitmapData.draw).
Is there a more direct way to access video pixels than rasterizing a DisplayObject?
Can I access each video frame as a ByteArray directly and plug it into a BitmapData object?
(I found appendBytes, but it seems to do the reverse of what I need in my setup.)
What is the most CPU-friendly way to manipulate pixels from an H.264 1080p video in ActionScript 3?
Also, is there a faster way to move pixels around than copyPixels() in Flash Player? I also see that Scout reports the video as not hardware accelerated (.rend.video.hwrender: false). Shouldn't H.264 video be hardware accelerated (even without StageVideo) according to this article, or does that apply to fullscreen mode only?
The latest AIR beta introduced video-as-texture support, which you could possibly use to manipulate the video on the GPU (and do that much faster than with BitmapData). Keep in mind, though, that it is currently available only for AIR on Windows and there are some other limitations.

Windows Phone | Frame Based Animations And Memory Footprints

I am developing a small game for Windows Phone which is based on Silverlight animation.
Some animations use the Silverlight animation framework (such as the Transforms API) and some are frame based. What I am doing is running a Storyboard with a very short duration and changing the image frame in its Completed event handler, so the image gets replaced every time the Completed event fires. But I think this is causing a memory leak in my game, and the memory footprint is increasing over time.
Is this the right way to do frame-based animations, or is there a better way to do this in Silverlight?
What can I do to reduce memory consumption so that it does not grow over time?
As a general rule, beware of animating anything that can't be GPU accelerated or bitmap cached. You haven't given enough information to tell whether this is your issue, but start by monitoring the frame rate counters, the redraw regions and the cache visualisation.
You can detect memory leaks with the built-in profiling tools:
see DEBUG > Start Windows Phone Application Analysis.

How can a graphics card be optimized for WPF 'Dirty Rectangle' update rate

I'm trying to write an application to display multiple video streams, all updating at 25 or 30 images per second. The images are being rendered into WPF controls using Direct3D and some interop, to avoid using a WinForms control. As more video streams are added, the frame rate of each control drops, yet the CPU on my machine only ever reaches about 50%.
Using the Perforator tool from the Microsoft WPF Performance Suite, it appears that when the frame rate of the video streams starts to drop, the 'Dirty Rect Addition Rate' levels out as if it has reached a maximum for the video card.
There is no software rendering activity in the application, so it would appear that overall performance is limited by the graphics card's ability to update the dirty rectangles.
Therefore, is there a feature or performance parameter that can be used to determine the best video card to buy in order to maximise performance for my application?
Failing that, is there a set of graphics card settings that will boost performance?
I am currently running an ATI FirePro V4800, which will happily run 16 streams of H.264 video at 4CIF resolution, but I am looking for the ability to run up to 32.

Fast screen capture and lost Vsync

I'd like to generate a movie in real time with a self-made application doing fast screen captures, with part of the screen occupied by a running 3D application.
I'm aware that several applications already exist for this (like FRAPS or Taksi), and that there are even dedicated DirectShow filters (like UScreenCapture), but I really need to do this from my own external application.
When correctly set up (UScreenCapture + ffdshow), capturing and compressing the full screen does not consume as much CPU as you would expect (about 15%) and does not impair the performance of the 3D app.
The problem with capturing from an external application is that the 3D application loses its Vsync, which makes it choppy and difficult to use (the 3D app occupies only a small part of the screen; the rest is GDI and DirectX).
FRAPS solves this problem by allowing you to capture only one application at a time (the one with focus). Depending on the technology used (OpenGL, DirectX, GDI), it hooks the Vsync and does its capture (with glReadPixels, etc.) without perturbing it.
This does not solve my problem, since I want the full composed screen image (the 3D app plus everything else) AND a smooth 3D app.
UScreenCapture seems to use a fast DirectX call to capture the whole screen, but the OpenGL 3D app still loses its sync.
Doing a BitBlt is too slow and too CPU-consuming for real-time 30 fps acquisition (at least under Windows XP; I'm not sure about 7).
My question is whether there is a way to achieve my goal with Windows 7 and its new DirectX compositing engine.
Windows 7 manages to show live, VSynced previews of every app (in the taskbar), so there must be a way to access the currently displayed screen buffer without perturbing the rendering of the 3D OpenGL app?
Any other suggestions or technologies?
Thank you.
I made a list of possibly useful links at
http://betterlogic.com/roger/?p=3037
Let me know if you have any success; eventually I would also be interested in a fast open-source screen capture for Windows.
Related: Fastest method of screen capturing
