Node-webkit: render GIF files and animation on the GPU

We have a project written on node-webkit that uses a lot of animated GIF files; when we run the app, CPU load rises to 100%.
Is there any way to hand that work (rendering the GIF files) to the GPU instead of the CPU? We have tried to enable the GPU as described here: http://peter.sh/experiments/chromium-command-line-switches/ , but with no result.
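For what it's worth, node-webkit passes Chromium switches through the chromium-args field of its package.json rather than on the command line, so flags from that list would be supplied roughly like this (a minimal config sketch; the name and main values are placeholders, whether a flag such as --ignore-gpu-blacklist helps depends on the GPU and driver, and as far as I know Chromium decodes GIF frames on the CPU regardless, so the flags can only move compositing to the GPU):

    {
      "name": "my-app",
      "main": "index.html",
      "chromium-args": "--ignore-gpu-blacklist"
    }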

Related

What's the fastest way to access video pixels in AS3?

I would like to copy pixels of a 1080p video from one location to another efficiently, with as little CPU impact as possible.
So far my implementation is fairly simple:
- using BitmapData's draw() method to grab the pixels from the video
- using BitmapData's copyPixels() to shuffle pixels about
Ideally this would have as little CPU impact as possible, but I am running out of options and could really use some tips from experienced ActionScript 3 developers.
I've profiled my code with Scout and noticed the CPU usage is mostly around 70% but goes above 100% quite a bit. I've looked into StageVideo, but one of its main limitations is this:
The video data cannot be copied into a BitmapData object (BitmapData.draw).
Is there a more direct way to access video pixels, rather than rasterizing a DisplayObject?
Can I access each video frame as a ByteArray directly and plug it into a BitmapData object?
(I found appendBytes, but it seems to do the reverse of what I need in my setup.)
What is the most CPU-friendly way to manipulate pixels from an H.264 1080p video in ActionScript 3?
Also, is there a faster way to move pixels around than copyPixels() in Flash Player? And I see Scout points out that the video is not hardware accelerated (.rend.video.hwrender: false). Shouldn't H.264 video be hardware accelerated (even without StageVideo) according to this article, or is that for fullscreen mode only?
The latest AIR beta introduced video-as-texture support, which you could possibly use to manipulate the video on the GPU (and do so much faster than with BitmapData). But keep in mind that it is currently available for AIR on Windows only, and there are some other limitations.

Qt Enterprise for IMX6 not using Hardware Acceleration?

We built an application that uses Qt WebEngine to test WebGL functionality. It worked, but CPU utilization was very high (>30%) just for rendering some sine waveforms. The root file system was provided by Qt Enterprise, as described here for the i.MX6:
http://doc.qt.digia.com/QtEnterpriseEmbedded/qtee-preparing-hardware-imx6sabresd.html
On inspecting the root file system we found that there were no GPU drivers (usually libVivante.so and libVivante.ko for the i.MX6), so it looks like all the GL rendering is being done by the CPU instead of the GPU, and that is the reason for the high CPU utilization. Does anybody know any other way to enable hardware acceleration for WebGL in Qt WebEngine?
Qt WebEngine requires hardware acceleration to composite the layers of the page and you would probably not be able to see anything on the screen without it.
Chromium, behind Qt WebEngine, is quite a beast and is more designed for perceived smoothness than to yield CPU cycles; it will use all the resources it can to achieve this.
Any JavaScript WebGL call will go from the main render thread, then to the GPU process main thread, to be decoded into GL calls to the driver. Each different WebGL canvas triggers a different FBO to be used and bound, requiring GL context switching, and as often as possible the latest state triggers the Chromium compositor to kick in, sending the whole delegated scene to the browser process, to finally end up in QtQuick's scene graph thread to be composited.
All this to say that a single JavaScript WebGL call triggers a much bigger machine than just telling OpenGL to draw those geometries. A CPU usage of 30% on this kind of device doesn't seem abnormal to me, though there might be ways to avoid bottlenecks.
The most efficient this could get is with a custom QtQuick scene graph geometry, as shown in this example: http://qt-project.org/doc/qt-5/qtquick-scenegraph-customgeometry-example.html. But even then I wouldn't expect CPU usage under 10% on that device.
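For illustration, here is a condensed sketch of what such a custom geometry item can look like: a hypothetical WaveItem that renders a sine wave as a line strip directly on the scene graph, skipping Chromium entirely (the class name, sample count, and wave parameters are made up for the example; QSGGeometry::DrawLineStrip assumes Qt 5.8+, older Qt uses GL_LINE_STRIP):

    // Hypothetical QQuickItem that renders a sine wave as scene graph geometry.
    #include <QQuickItem>
    #include <QSGGeometryNode>
    #include <QSGFlatColorMaterial>
    #include <cmath>

    class WaveItem : public QQuickItem {
        Q_OBJECT
    public:
        WaveItem() { setFlag(ItemHasContents, true); }

    protected:
        QSGNode *updatePaintNode(QSGNode *oldNode, UpdatePaintNodeData *) override {
            const int samples = 256;
            auto *node = static_cast<QSGGeometryNode *>(oldNode);
            if (!node) {
                node = new QSGGeometryNode;
                auto *geometry = new QSGGeometry(
                    QSGGeometry::defaultAttributes_Point2D(), samples);
                geometry->setDrawingMode(QSGGeometry::DrawLineStrip);
                node->setGeometry(geometry);
                node->setFlag(QSGNode::OwnsGeometry);
                auto *material = new QSGFlatColorMaterial;
                material->setColor(Qt::green);
                node->setMaterial(material);
                node->setFlag(QSGNode::OwnsMaterial);
            }
            // Recompute the vertices each frame; this runs on the scene graph
            // render thread with none of Chromium's process/compositor hops.
            QSGGeometry::Point2D *v = node->geometry()->vertexDataAsPoint2D();
            for (int i = 0; i < samples; ++i) {
                float x = float(width()) * i / (samples - 1);
                v[i].set(x, float(height()) * 0.5f * (1.0f + std::sin(x * 0.05f)));
            }
            node->markDirty(QSGNode::DirtyGeometry);
            return node;
        }
    };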

Preloading Images to prevent Repainting in Chrome Rendering on Windows PC

I have a website with 3 full-screen background images that uses a custom parallax scrolling script built on requestAnimationFrame and transform: translate3d() for its animation. I did this because, after a fair amount of research on visual performance, it was the best alternative to using canvas.
My problem is that my page runs very smoothly on Firefox 29.1 (because it is most definitely using the computer's GPU to render the page and composite layers), yet for some reason Chrome has some major bottlenecks.
I am getting tremendous frame-rate drops (well below 30 fps) when I scroll down my page... but an interesting thing I noticed was that it happens precisely as one of the background images animated by the script (set with background-size: cover) enters the viewport.
There is a repaint operation happening because the background image is being resized to fit the viewport width/height, and that is causing a tremendous performance hit. Considering that the GPU isn't working correctly for my Chrome, but also that I would like the page to scroll silky smooth even without hardware acceleration: is there a method of preloading images/frames and having them already resized before they are scrolled onto the screen? Something like a frame-buffering technique to ensure all the calculations and resizing are finished well before a user scrolls to that image?

XNA Texture loading speed (for extra large Texture sizes)

[Skip to the bottom for the question only]
While developing my XNA game I ran into another horrible XNA limitation: Texture2Ds (at least on my PC) can't have dimensions larger than 2048*2048. No problem; I quickly wrote my custom texture class, which uses a [System.Drawing.] Bitmap by default, splits the texture into smaller Texture2Ds, and displays them as appropriate.
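The splitting itself is just grid arithmetic; a hypothetical sketch of the idea (in C++ for brevity, the XNA/C# version is analogous; Tile and splitIntoTiles are made-up names):

    // Split a large image into sub-rectangles no bigger than maxSize x maxSize.
    #include <algorithm>
    #include <vector>

    struct Tile { int x, y, w, h; };   // where a sub-texture sits in the big image

    std::vector<Tile> splitIntoTiles(int width, int height, int maxSize = 2048) {
        std::vector<Tile> tiles;
        for (int y = 0; y < height; y += maxSize)
            for (int x = 0; x < width; x += maxSize)
                tiles.push_back({ x, y,
                                  std::min(maxSize, width - x),
                                  std::min(maxSize, height - y) });
        return tiles;
    }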
When I made this change I also had to update the method that loads the textures. In the old version I loaded the Texture2Ds with Texture2D.FromStream(), which worked pretty well, but XNA can't even seem to store/load textures above the limit, so if I tried to load a, say, 4092*2048 PNG file I ended up with a 2048*2048 Texture2D in my app. Therefore I switched to loading the images using [System.Drawing.] Image.FromFile and casting the result to a Bitmap, which doesn't seem to have any such limitation. (This Bitmap is later converted into a Texture2D list.)
The problem is that loading the textures this way is noticeably slower, because now even images that are under the 2048*2048 limit are loaded as a Bitmap and then converted to a Texture2D. So I am actually looking for a way to check an image file's dimensions (width and height) before loading it into my application: if it is under the texture limit, I can load it straight into a Texture2D without loading it into a Bitmap and converting it into a single-element Texture2D list.
Is there any (clean and preferably very quick) way to get the dimensions of an image file without loading the whole file into the application? And if there is, is it even worth using? I would guess that the slowest part here is opening/seeking the file (probably hardware-bound on HDDs), not streaming its contents into the application.
Do you need to support arbitrarily large textures? If not, switching to the HiDef profile will get you support for textures as large as 4096x4096.
If you do need to stick with your current technique, you might want to check out this answer regarding how to read image sizes without loading the entire file.
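For PNG files in particular, the dimensions sit at a fixed offset inside the IHDR chunk, which is always the first chunk after the 8-byte signature, so only the first 24 bytes of the file have to be read. A minimal sketch of the idea (shown in C++ as a hypothetical helper; the same fixed-offset trick ports to .NET's BinaryReader, and other formats such as JPEG need their own header parsing):

    // Reads just the first 24 bytes of a PNG to get its dimensions.
    #include <cstdint>
    #include <cstdio>

    bool pngDimensions(const char *path, uint32_t &width, uint32_t &height) {
        unsigned char buf[24];
        FILE *f = std::fopen(path, "rb");
        if (!f) return false;
        bool ok = std::fread(buf, 1, sizeof buf, f) == sizeof buf
               && buf[0] == 0x89 && buf[1] == 'P' && buf[2] == 'N' && buf[3] == 'G';
        std::fclose(f);
        if (!ok) return false;
        // The IHDR chunk is always first: width at byte 16, height at byte 20,
        // both stored big-endian.
        width  = (uint32_t(buf[16]) << 24) | (uint32_t(buf[17]) << 16)
               | (uint32_t(buf[18]) << 8)  |  uint32_t(buf[19]);
        height = (uint32_t(buf[20]) << 24) | (uint32_t(buf[21]) << 16)
               | (uint32_t(buf[22]) << 8)  |  uint32_t(buf[23]);
        return true;
    }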

OpenGL Win32 texture not shown in DrawToBitmap (DIB)

I have a real-time OpenGL application rendering some objects with textures. I have built a function to take an internal screenshot of the rendered scene by rendering it to a DIB via PFD_DRAW_TO_BITMAP and copying it to an image. It works quite well except for one kind of texture: JPGs with 24 bpp (so 8 bits for each of R, G, B). I can load them and they render correctly in real time, but not when rendered to the DIB. For other textures it works fine.
I see the same behaviour when testing my application in a virtual machine (Windows XP, no hardware acceleration!): there, these specific textures are not even shown in real-time rendering. Without hardware acceleration I guess WinXP uses its own software implementation of OpenGL and falls back to OpenGL 1.1.
So are there any kinds of textures that can't be drawn without 3D hardware acceleration? Or is there a common pitfall?
PFD_DRAW_TO_BITMAP will always drop you into the fallback OpenGL 1.1 software rasterizer, so you should not use it. Create an off-screen FBO, render into that, retrieve the pixel data with glReadPixels, and write it to a file using an image file I/O library.
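Roughly, that off-screen path looks like this (a sketch assuming an OpenGL 3.0+ context or the ARB_framebuffer_object extension; drawScene() stands in for your existing render code):

    // Sketch: render the scene into an off-screen FBO, then read the pixels back.
    #include <GL/glew.h>   // or any loader exposing the FBO entry points
    #include <vector>

    void drawScene();      // placeholder for the existing render code

    std::vector<unsigned char> renderToPixels(int width, int height) {
        GLuint colorTex = 0, fbo = 0;
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

        glGenFramebuffers(1, &fbo);
        glBindFramebuffer(GL_FRAMEBUFFER, fbo);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                               GL_TEXTURE_2D, colorTex, 0);
        // Attach a depth renderbuffer as well if the scene needs depth testing.

        glViewport(0, 0, width, height);
        drawScene();       // this stays on the hardware-accelerated path

        std::vector<unsigned char> pixels(size_t(width) * height * 4);
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glDeleteFramebuffers(1, &fbo);
        glDeleteTextures(1, &colorTex);
        return pixels;     // rows come back bottom-up, per OpenGL convention
    }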
