I've made a screensaver which displays tables of statistics across a number of "screens" which it fades between. I've used only CALayers and implicit animation, but even so the animation is jerky at best; rather than a smooth transition there are 3 "jumps" between screens, one at ~5%, one at ~30%, then 100%.
Running top in a terminal from another machine shows the screensaver hitting 100% CPU during transitions.
I'm running this on a Mac mini, PowerPC G4 (1.5) @ 1.33 GHz with 512 MB RAM, running Leopard. No other programs are active while it runs.
System Profiler states that Core Image is supported in software only, so I'm assuming the implicit animations are computed on the CPU rather than on the built-in Radeon card.
What would one need to do to move the animation to the GPU?
I believe OS X will automatically run the animation on the GPU on most graphics cards that support Pixel Shader 2.0.
The exact list of supported GPUs is pretty hard to find, since it hasn't really been talked about since 10.4 came out.
The minimum spec list is:
ATI Mobility Radeon 9700
ATI Radeon 9600 XT, 9800 XT, X800 XT
nVidia GeForce FX Go 5200
nVidia GeForce FX 5200 Ultra
nVidia GeForce 6800 Ultra DDL, 6800 GT DDL
So it seems the Radeon 9200 and 9500 in the Mac mini and iBook G4s are not properly supported.
Related
I'm working with a shader in MSL that uses sinpi() and cospi(). When I run the shader on an AMD Radeon R9 card, the results are as expected, but when I run the same shader on an Intel HD Graphics 615, the result of these two functions always seems to be 0.
When I replace them with sin() and cos() and do the multiplication with π myself, the results are correct again.
Could this be a bug in the Intel hardware?
We are attempting to display 40 windows, each containing almost the same UI but showing a different entity, with animated scene changes happening every few seconds in each window. All of our tests have ended with the windows being drawn at 3 to 5 frames per second once 10 to 20 windows are open, on a reasonably old and low-powered discrete nVidia GPU, running Windows.
Things we have tried:
Disabling animations - performance improves, but not nearly enough
CPU profiling - shows over 90% of the CPU time being spent in the system DLLs and the nVidia driver; on other machines, the CPU usage is not significant, but the frame rate is still low.
QML profiling - The Qt Creator profiler shows the render loop executing its steps at 60 fps, taking at most a couple of milliseconds per frame.
Images/textures are loaded once to the GPU and never reloaded
All the rendering backends - OpenGL, ANGLE and Qt Quick 2D Renderer - perform more or less the same
Is there something major we are missing?
Why would LWJGL be so much slower than, say, Unity's OpenGL renderer or even Ogre3D? I'll begin with some "benchmarks" (if you could even call them that) on what I've tested.
Hardware:
i5-3570K @ 4.3 GHz
GTX 780 @ 1150 MHz
First Test: Place 350,000 triangles on screen (modified Stanford Dragon)
Results:
GTX 780 Renders at 37 FPS (USING LWJGL)
GTX 780 Renders at ~300 FPS (USING UNITY3D)
GTX 780 Renders at ~280 FPS (USING OGRE3D)
Second Test: Render Crytek Sponza w/ Textures (I believe around 200,000 vertices?)
Results:
GTX 780 Renders at 2 FPS (USING LWJGL)
GTX 780 Renders at ~150 FPS (USING UNITY3D)
GTX 780 Renders at ~130 FPS (USING OGRE3D)
Normally I use either Ogre3D, Unity3D, or Panda3D to render my game projects, but the difference in frame rates is staggering. I know Unity has things like occlusion culling, so it's generally the quickest, but even when using similar calls with Ogre3D, I would expect results similar to LWJGL's... Ogre3D and LWJGL are both doing front-face-only culling, but LWJGL doesn't get any performance increase versus rendering everything. One last thing: LWJGL tends to exceed 2.5 GB of RAM usage while rendering Sponza, but that doesn't explain the other results.
If anyone is having the same issue: I've realized the issue is NOT Java. Recording immediate-mode draw calls into display lists is deprecated and yields poor performance. You MUST use VBOs, not display lists. You can expect performance to increase by up to 600x, as it did in the case of my laptop.
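As a rough illustration (not code from the original post), here is a minimal LWJGL-style sketch of the VBO approach, assuming a plain float array of packed XYZ positions and the same old fixed-function pipeline that display lists were used with; the names VboUpload, createVbo and drawVbo are made up for the example. The geometry is uploaded to the GPU once, and each frame only issues a draw call against that buffer:

import java.nio.FloatBuffer;

import org.lwjgl.BufferUtils;
import org.lwjgl.opengl.GL11;
import org.lwjgl.opengl.GL15;

public class VboUpload {

    // Upload the vertex data to a VBO once, at load time; returns the buffer handle.
    public static int createVbo(float[] vertices) {
        FloatBuffer data = BufferUtils.createFloatBuffer(vertices.length);
        data.put(vertices).flip();

        int vbo = GL15.glGenBuffers();
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
        GL15.glBufferData(GL15.GL_ARRAY_BUFFER, data, GL15.GL_STATIC_DRAW);
        return vbo;
    }

    // Draw from the VBO each frame; no per-frame geometry upload.
    public static void drawVbo(int vbo, int vertexCount) {
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vbo);
        GL11.glEnableClientState(GL11.GL_VERTEX_ARRAY);
        GL11.glVertexPointer(3, GL11.GL_FLOAT, 0, 0L); // 3 floats per vertex, tightly packed
        GL11.glDrawArrays(GL11.GL_TRIANGLES, 0, vertexCount);
        GL11.glDisableClientState(GL11.GL_VERTEX_ARRAY);
    }
}

The point is that the vertex data crosses the CPU-GPU boundary once at load time, whereas immediate-mode calls, even when baked into a display list, leave far more work for the driver on every frame.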
I'm developing a card game in Android using SurfaceView and canvas to draw the UI.
I've tried to optimize everything as much as possible but I still have two questions:
During the game I'll need to draw 40 bitmaps (the 40 cards of the Italian deck). Is it better to create all the bitmaps in the onCreate method of my customized SurfaceView (storing them in an array), or to create them as needed (every time the user gets a new card, for example)?
I'm able to get over 90 fps on an old Samsung I5500 (528 MHz, with a QVGA screen), 60 fps on an Optimus Life (800 MHz and HVGA screen) and 60 fps with a Nexus One/Motorola Razr (1 GHz and dual-core 1 GHz, with WVGA and qHD screens), but when I run the game on an Android tablet (Motorola Xoom, dual-core 1 GHz and 1 GB of RAM) I get only 30-40 fps... how is it possible that a 528 MHz CPU with 256 MB of RAM can handle 90+ fps while a dual-core processor can't handle 60 fps? I'm not seeing any GC calls at runtime....
EDIT: Just to clarify, I've tried both ARGB_8888 and RGB_565 without any change in performance...
Any suggestions?
Thanks
Some points for you to consider:
It is recommended not to create new objects while your game is running; otherwise you may trigger unexpected garbage collections (a small preloading sketch follows these points).
Your FPS numbers don't sound right; you may have measurement errors. However, my guess is that you are resizing the images to fit the screen size, which affects the memory usage of your game and may cause slow rendering times on tablets.
You can use profiling tools to confirm: TraceView
OpenGL would be much faster
Last tip: don't draw overlapping cards if you can avoid it; draw only the visible ones.
Good Luck
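A minimal sketch of the preload-once approach from point 1 (and from the question itself); the class name CardBitmapCache and the resIds parameter are illustrative, not from the original code. The bitmaps are decoded once, up front, so the draw loop only reuses existing objects and never triggers a decode or an allocation:

import android.content.res.Resources;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public class CardBitmapCache {

    private final Bitmap[] cards;

    // resIds: the drawable IDs of the 40 cards (hypothetical, e.g. R.drawable.card_0 ... card_39).
    // Decode every card once, e.g. when the SurfaceView is created,
    // so no Bitmap objects are allocated inside the game loop.
    public CardBitmapCache(Resources res, int[] resIds) {
        cards = new Bitmap[resIds.length];
        for (int i = 0; i < resIds.length; i++) {
            cards[i] = BitmapFactory.decodeResource(res, resIds[i]);
        }
    }

    // Called from onDraw: returns the already-decoded bitmap, no allocation.
    public Bitmap get(int cardIndex) {
        return cards[cardIndex];
    }
}

The bitmap returned by get(cardIndex) can then be passed straight to Canvas.drawBitmap inside onDraw.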
OK, so it's better to create the bitmaps in the onCreate method; that is what I'm doing right now...
They are OK; I believe the 60 fps cap on some devices is just a restriction imposed by the manufacturers, since there is no advantage in going above 60 fps (I'm making this assumption because the rate doesn't change whether I render 1 card, 10 cards or no cards... the onDraw method is called 60 times per second, but if I add, for example, 50-100 cards it drops accordingly). I don't resize any card because I use the proper folder (mdpi, hdpi, etc.) for each device, so I get the exact size of the image without resizing it...
I've tried to look at it, but from what I understand all the execution time is spent drawing the bitmaps, not resizing them or updating their positions; here it is:
I know, but it would add complexity to the development, and I believe that using a Canvas for 7 cards on the screen should be just fine...
I don't draw every card of the deck... I just swap bitmaps as needed :)
UPDATE: I've tried running the game on a Xoom 2, Galaxy Tab 7 Plus and Asus Transformer Prime, and it runs just fine at 60 fps... could it be a problem specific to Tegra 2 devices?
I'm trying to load a large dataset of a million points in 3D space in MATLAB, but whenever I try to plot it (scatter or plot3), it takes forever. This is on a laptop with an Intel Graphics Media Accelerator 950 and up to 224 MB of shared system memory. It also sometimes causes MATLAB 2008a to crash. Is there a way to make MATLAB use an Nvidia GPU for plotting this dataset? I have another laptop with an Nvidia Go 6150. I'm on Windows XP and Windows 7.
OpenGL
You can set the renderer used for figures in MATLAB.
http://www.mathworks.com/support/tech-notes/1200/1201.html
To take advantage of the GPU, you can set it to OpenGL:
set(0,'DefaultFigureRenderer','opengl')
which "enables MATLAB to access graphics hardware if it is available on your machine. It provides object transparency, lighting, and accelerated performance."
Other ways
Also, the following link shows some ideas about optimizing graphics performance:
http://www.mathworks.com/access/helpdesk/help/techdoc/creating_plots/f7-60415.html
However, these techniques apply to cases when you are creating many graphs of similar data, and they improve rendering speed by preventing MATLAB from performing unnecessary operations.
If you want to use CUDA, the minimum card required is a G80; your 6150 is sadly too old.
List of compatible cards.
There is Jacket, a commercial product that brings GPU power to MATLAB:
http://www.accelereyes.com/products/jacket
You can download the trial version (30 days, as I remember).