Non-visible NSOpenGLView slows down the whole system - performance

I'm making a Mac OS X (10.8.3) OpenGL application using an NSOpenGLView and a CVDisplayLink to drive the calls to the render method.
The application works fine, but when the window gets covered or is in another Space (basically whenever it is not visible for some reason), the whole system starts to slow down.
I tested and profiled it in several ways and this is what I found:
The CPU is OK; CPU consumption does not increase.
Memory is fine too; the amount of memory allocated doesn't change.
In the OpenGL Driver Monitor, the "CPU Wait for GPU" time increases.
So does the "CPU Wait for Free OpenGL Command Buffer" time (I think this is the problem).
If no OpenGL draw calls are issued, the computer runs fine.
I'm guessing that a non-visible NSOpenGLView changes the driver's behavior in some way and makes my application consume more GPU time.
Any idea of what could be going wrong?
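
For context, here is a minimal C++ sketch (CoreVideo's C API, usable from Objective-C++) of the kind of CVDisplayLink-driven render loop described above. The renderer type and function names are placeholders, not the actual project code; stopping the link while the view is hidden is one common way to stop queuing GPU work for a covered window.

#include <CoreVideo/CVDisplayLink.h>

struct MyRenderer {
    void RenderFrame();                      // issues the OpenGL draw calls
};

// CVDisplayLink invokes this on a high-priority background thread once per display refresh.
static CVReturn DisplayLinkCallback(CVDisplayLinkRef /*link*/,
                                    const CVTimeStamp* /*now*/,
                                    const CVTimeStamp* /*outputTime*/,
                                    CVOptionFlags /*flagsIn*/,
                                    CVOptionFlags* /*flagsOut*/,
                                    void* userInfo)
{
    static_cast<MyRenderer*>(userInfo)->RenderFrame();
    return kCVReturnSuccess;
}

void StartRenderLoop(MyRenderer* renderer, CVDisplayLinkRef* linkOut)
{
    CVDisplayLinkCreateWithActiveCGDisplays(linkOut);
    CVDisplayLinkSetOutputCallback(*linkOut, &DisplayLinkCallback, renderer);
    CVDisplayLinkStart(*linkOut);            // keeps firing even while the view is covered
}

void StopRenderLoop(CVDisplayLinkRef link)
{
    CVDisplayLinkStop(link);                 // stopping the link while hidden avoids queuing GPU work
}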

Related

windowed OpenGL first frame delay after idle

I have a windowed WinAPI/OpenGL app. The scene is drawn rarely (compared to games), in WM_PAINT, mostly triggered by user input (WM_MOUSEMOVE, clicks, etc.).
I noticed that when the scene has not been moved by the mouse for a while (the application is "idle") and then some mouse action starts, the first frame is drawn with an unpleasant delay of around 300 ms. The following frames are fast again.
I implemented a 100 ms timer which only does InvalidateRect, which is later followed by WM_PAINT and a scene redraw. This "fixed" the problem, but I don't like this solution.
I'd like to know why this is happening, and also some tips on how to tackle it.
Does the OpenGL render context release resources when not in use? Or could this be caused by some system behaviour, like processor underclocking or energy saving? (Although I noticed that the processor runs underclocked even when the app is under "load".)
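
For reference, a minimal Win32 sketch of the 100 ms timer workaround described above (the timer ID and function names are illustrative; the window procedure and HWND come from the existing application):

#include <windows.h>

static const UINT_PTR kIdleTimerId = 1;

void StartIdleTimer(HWND hwnd)
{
    SetTimer(hwnd, kIdleTimerId, 100, NULL);      // post WM_TIMER every 100 ms
}

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_TIMER:
        if (wParam == kIdleTimerId)
            InvalidateRect(hwnd, NULL, FALSE);    // triggers WM_PAINT and a scene redraw
        return 0;
    case WM_PAINT:
        // ... BeginPaint, draw the OpenGL scene, SwapBuffers, EndPaint ...
        break;
    }
    return DefWindowProc(hwnd, msg, wParam, lParam);
}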
This sounds like the Windows virtual memory system at work. The total memory use of all active programs is usually greater than the amount of physical memory installed in your system, so Windows swaps idle processes out to disk according to whatever rules it follows, such as the relative priority of each process and how long it has been idle.
You are preventing the swap-out (and the delay) by artificially making the program active every 100 ms.
When a swapped-out process is reactivated, it takes a little time to retrieve its memory contents from disk and resume the process.
It's unlikely that OpenGL itself is responsible for this delay.
You can improve the situation by starting your program with a higher priority.
https://superuser.com/questions/699651/start-process-in-high-priority
You can also use the VirtualLock function to prevent Windows from swapping out part of your memory, but this is not advisable unless you REALLY know what you are doing!
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366895(v=vs.85).aspx
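
A rough sketch of both suggestions, assuming a Win32 C++ program (function names are illustrative; raising the priority from inside the process via SetPriorityClass is an alternative to launching it at high priority as in the link above):

#include <windows.h>

void RaisePriority()
{
    // ABOVE_NORMAL is usually enough; HIGH_PRIORITY_CLASS can starve other applications.
    SetPriorityClass(GetCurrentProcess(), ABOVE_NORMAL_PRIORITY_CLASS);
}

bool PinBuffer(void* buffer, SIZE_T sizeInBytes)
{
    // Keeps this region resident in physical memory so it cannot be paged out.
    // Only do this for small, genuinely latency-critical buffers.
    return VirtualLock(buffer, sizeInBytes) != 0;
}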
EDIT: You can certainly improve things by adding more memory, and 4 GB does sound low for a modern PC, especially if you run Chrome with multiple tabs open.
If you want to be scientific before spending any hard-earned cash :-), open Performance Monitor and look at Cache Faults/sec. This shows the paging activity on your machine. (I have 16 GB in my PC, so this number is mostly very low.) To make sure you learn something, I would check Cache Faults/sec before and after the memory upgrade, so you can quantify the difference!
Finally, there is nothing wrong with the solution you already found: kick-starting the graphics app every 100 ms or so.
The problem turned out to be the NVIDIA driver's global 3D setting "Power management mode".
Options "Optimal Power" and "Adaptive" save power and cause the problem.
Only "Prefer Maximum Performance" does the right thing.

Profile Build vs Normal Build: CPU Usage?

Short Version:
Before the TL;DR section, my main question is this: what is different between building to profile with Instruments and a regular build that would reduce my app's CPU load by over 200%?
When building to run, it uses well over 200% CPU as reported by Activity Monitor, but with everything else the same, building for profiling with the Time Profiler reduces the CPU load to under 5%, which is a dramatic (orders-of-magnitude) difference.
TL;DR Version:
As an exercise to learn Cocoa, Swift and DSP (yes, all three at once), I am working on writing a simple radio-scanner OS X application using the cheap rtl-sdr dongles.
I have written a simple Swift wrapper around librtlsdr, a simple UI to set the frequency, and a couple of simple DSP routines. My wrapper around librtlsdr uses an NSOperationQueue, and my DSP routines use GCD queues, in order to move the I/O- and CPU-intensive routines off the main thread/queue.
Currently, everything is working to the extent that I can successfully demodulate an AM transmission.
I have implemented a simple low-pass FIR filter, and while working on the algorithm I was surprised to find that I couldn't use much more than about 30 coefficients before my filter routine started taking too long and the audio became choppy. In addition, Activity Monitor shows up to 300% CPU usage for my app, which seems crazy high considering my filter contains nothing but a nested loop doing multiply-and-accumulate operations. Anything higher than about 40 coefficients and the UI becomes unresponsive.
For the DSP-minded: it's a decimating filter where I use the entire sample set as input (960,000 sps) but only compute the output samples I need after the rate reduction (48,000 sps), with precomputed coefficients from a rectangular-windowed sinc function. Not the most efficient algorithm, but on my quad-core i7 MacBook Pro and iMac it should still scream.
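
As a rough illustration of the filter being described (a minimal C++ sketch; the actual code is Swift and not shown in the question, so all names here are made up):

#include <cstddef>
#include <vector>

// Decimating FIR filter: one output sample is produced per `decimation` input
// samples, so the inner multiply-accumulate loop runs at the output rate
// (48,000 sps) rather than the input rate (960,000 sps).
std::vector<float> decimatingFir(const std::vector<float>& input,
                                 const std::vector<float>& coeffs,   // precomputed windowed-sinc taps
                                 std::size_t decimation)             // 960000 / 48000 = 20
{
    std::vector<float> output;
    output.reserve(input.size() / decimation);

    for (std::size_t n = coeffs.size(); n < input.size(); n += decimation)
    {
        float acc = 0.0f;
        for (std::size_t k = 0; k < coeffs.size(); ++k)   // multiply-accumulate over the taps
            acc += coeffs[k] * input[n - k];
        output.push_back(acc);
    }
    return output;
}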
To get some insight into where my program was using up all the CPU cycles, I decided to give Instruments a go. Product->Profile, choosing the Time Profiler and running my app gave me some interesting information.
1) My filter routine was NOT using the most CPU cycles.
2) Activity Monitor showed that my app wasn't even at 5% CPU usage.
So I decided to find out how far I could push things before seeing any load on the CPU, and I got up to a 50,000-tap filter before the audio started to be noticeably choppy and CPU usage got close to 300%. So, to recap: with a normal build and run I max out at about 35-40 filter taps; with a profile build and run I max out at about 50,000 filter taps.
Also worth noting: while profiling with 50,000 filter taps, the UI still responds instantly and I can change frequency and start/stop the radio, even though the audio is choppy. During a normal run, the UI starts to freeze (with no audio) as soon as I start the radio, and that happens once I get to only about 50 taps.
Again, why the dramatic difference in CPU usage between running while profiling and running a standard build? What's different, aside from the elevated privileges Instruments has, and what do I need to do to make this the normal behavior for my app?
JE
This is all about build configurations. When you profile an app with Xcode, it gets built with optimizations, because Xcode uses the "Release" build configuration for profiling. As the name suggests, the "Release" configuration is also used for your final product, which is therefore always a build optimized for speed. The default "Debug" build configuration, which comes into play when you build and run your app in Xcode by pressing ⌘R, doesn't apply any compiler optimizations. This is why your app is slower when not being profiled.
You can learn more about build configurations here: https://developer.apple.com/library/mac/recipes/xcode_help-project_editor/Articles/BasingBuildConfigurationsonConfigurationFiles.html#//apple_ref/doc/uid/TP40010155-CH13-SW1
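
For illustration, the relevant optimization settings look roughly like this in .xcconfig form (values shown are the typical defaults; exact setting names can vary by Xcode version):

// Debug.xcconfig (illustrative): used when you build and run with ⌘R
SWIFT_OPTIMIZATION_LEVEL = -Onone
GCC_OPTIMIZATION_LEVEL = 0

// Release.xcconfig (illustrative): used for Profile and Archive builds
SWIFT_OPTIMIZATION_LEVEL = -O
GCC_OPTIMIZATION_LEVEL = s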

Possible to run OpenCL program at low priority (be "nice")?

I have an OpenCL Windows program that does heavy number crunching and happily consumes 100% of the GPU. I'd like to be able to run it in the background while using the computer normally, but right now it causes considerable desktop lag and makes any 3D application unusable.
Is there a way to set a priority in OpenCL so that it will yield GPU power to other processes and only use spare cycles?
Unfortunately, most GPUs do not support running several tasks at a time, so there is no way to assign a priority. This means that while your OpenCL kernel is running, it is the only task being executed by the GPU, and that remains the case until the kernel completes.
If you want the computer to be usable while the kernel is running (normal desktop activity, browsing, videos, games), each kernel invocation has to be very quick. So if you can reduce the time taken by each set of kernel launches (i.e. each job enqueued with clEnqueueNDRangeKernel), you might get what you're looking for. You could achieve this either by making the NDRange smaller, though it needs to be big enough to use the GPU efficiently (something like 5120 work-items is the minimum I've found on a Radeon HD 5870), or by reducing the amount of work done in each kernel.
If you can get the execution time of each enqueued job down to maybe 1/60 of a second, there's a good chance the computer will be usable. I've been able to run OpenCL programs where each enqueue takes about 1/120 of a second while gaming without noticing anything.
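
A hedged host-side sketch of that chunking approach (it assumes OpenCL 1.1+ for the global work offset; the names and the chunk size are illustrative and would need tuning per device):

#include <CL/cl.h>

void runInChunks(cl_command_queue queue, cl_kernel kernel,
                 size_t totalWorkItems, size_t chunkSize)
{
    for (size_t offset = 0; offset < totalWorkItems; offset += chunkSize)
    {
        size_t globalOffset = offset;
        size_t globalSize   = (totalWorkItems - offset < chunkSize)
                                  ? totalWorkItems - offset
                                  : chunkSize;

        clEnqueueNDRangeKernel(queue, kernel,
                               1,              // one-dimensional NDRange
                               &globalOffset,  // where this chunk starts
                               &globalSize,    // number of work-items in this chunk
                               NULL,           // let the runtime choose the work-group size
                               0, NULL, NULL);

        // Wait for the chunk to finish so only one short job occupies the GPU
        // at a time, leaving gaps for the compositor and other 3D applications.
        clFinish(queue);
    }
}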

OpenGL: texture deletion

I am working on an OpenGL renderer in Win32... I was wondering: when a texture is bound to an ID, is it automatically destroyed and wiped from video memory when the rendering context is destroyed, or does it count as a memory leak if textures are left bound when the process terminates suddenly?
Thanks
All OpenGL resources are per-process, so it is reasonable to assume they get cleaned up on termination. Otherwise, you'd get nasty system-wide memory leaks on crashing applications - something completely unacceptable on any half-decent OS.
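
For completeness, a minimal sketch of explicit cleanup; the driver reclaims everything at process exit anyway, but explicit deletion still matters for textures that are created and dropped while the program is running:

#include <windows.h>   // must come before GL/gl.h on Win32
#include <GL/gl.h>

void destroyTexture(GLuint* textureId)
{
    if (*textureId != 0)
    {
        glDeleteTextures(1, textureId);  // frees the GPU-side storage for this texture name
        *textureId = 0;                  // 0 is never a valid texture name
    }
}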

Qt 4.6.x under MacOS/X: widget update performance mystery

I'm working on a Qt-based MacOS/X audio metering application, which contains audio-metering widgets (potentially a lot of them), each of which is supposed to be updated every 50ms (i.e. at 20Hz).
The program works, but when lots of meters are being updated at once, it uses up lots of CPU time and can bog down (spinny-color-wheel, oh no!).
The strange thing is this: originally this app would just call update() on the meter widget whenever the meter value changed, and therefore the entire meter widget would be redrawn every 50 ms. Then I thought I'd be clever and compute just the area of the meter that actually needs to be redrawn, and only redraw that portion of the widget (e.g. update(x,y,w,h), where y and h are computed based on the old and new values of the meter). However, when I implemented that, it actually made CPU usage four times higher(!), even though the app was drawing 50% fewer pixels per second.
Can anyone explain why this optimization actually turns out to be a pessimization? I've posted a trivial example application that demonstrates the effect, here:
http://www.lcscanada.com/jaf/meter_test.zip
When I compile (qmake;make) the above app and run it like this:
$ ./meter.app/Contents/MacOS/meter 72
Meter: Using numMeters=72 (partial updates ENABLED)
... top shows the process using ~50% CPU.
When I disable the clever-partial-updates logic, by running it like this:
$ ./meter.app/Contents/MacOS/meter 72 disable_partial_updates
Meter: Using numMeters=72 (partial updates DISABLED)
... top shows the process using only ~12% CPU. Huh? Shouldn't this case take more CPU, not less?
I tried profiling the app using Shark, but the results didn't mean much to me. FWIW, I'm running Snow Leopard on an 8-core Xeon Mac Pro.
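
For reference, a minimal Qt sketch of the two update strategies being compared (illustrative only; this is not the actual code from meter_test.zip):

#include <QtGui/QWidget>

class MeterWidget : public QWidget
{
public:
    explicit MeterWidget(QWidget* parent = 0)
        : QWidget(parent), m_value(0), m_partialUpdates(true) {}

    void setValue(int newValue)
    {
        if (newValue == m_value)
            return;

        if (m_partialUpdates)
        {
            // Repaint only the horizontal band between the old and new meter levels.
            int top    = qMin(valueToY(m_value), valueToY(newValue));
            int bottom = qMax(valueToY(m_value), valueToY(newValue));
            m_value = newValue;
            update(0, top, width(), bottom - top + 1);
        }
        else
        {
            // Repaint the whole widget.
            m_value = newValue;
            update();
        }
    }

private:
    int valueToY(int value) const
    {
        // Map a 0..100 meter value onto the widget's height (purely illustrative).
        return height() - (value * height()) / 100;
    }

    int  m_value;
    bool m_partialUpdates;
    // paintEvent() is omitted here; it would fill the meter up to valueToY(m_value).
};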
GPU drawing is a lot faster than letting the CPU calculate which part to redraw (at least this applies to OpenGL; I have the OpenGL SuperBible, and it states that OpenGL is built to redraw everything rather than draw deltas, since computing deltas is potentially a lot more work). Even if you use software rendering, the libraries are highly optimized to do their job properly and fast. So simply redrawing everything is the state of the art.
FWIW, top on my Linux box shows ~10-11% CPU without partial updates and ~12% with partial updates. I had to request 400 meters to get that much CPU usage, though.
Perhaps it's just that the overhead of Qt setting up a paint region dwarfs your actual paint time? After all, your painting is really simple; it's just two rectangular fills.
