OpenGL: texture deletion - WinAPI

I am working on an OpenGL renderer in Win32. When a texture is bound to an ID, is it automatically destroyed and wiped from video memory when the rendering context is destroyed? Or does it count as a memory leak if textures are left bound when the process terminates suddenly?
Thanks

All OpenGL resources are per-process, so it is reasonable to assume they get cleaned up on termination. Otherwise, you'd get nasty system-wide memory leaks on crashing applications - something completely unacceptable on any half-decent OS.
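That said, if you want to tear things down explicitly, a minimal Win32/WGL shutdown sketch might look like the following (the window, device context, rendering context, and texture IDs are assumed to come from your own setup code):

    #include <windows.h>
    #include <GL/gl.h>

    // Sketch of an explicit shutdown path; hwnd, hdc, hglrc and the texture IDs
    // are placeholders for whatever your renderer created.
    void ShutdownRenderer(HWND hwnd, HDC hdc, HGLRC hglrc,
                          const GLuint *textures, GLsizei count)
    {
        // Delete textures while the context is still current.
        glDeleteTextures(count, textures);

        // Unbind the context, then destroy it and release the DC.
        wglMakeCurrent(NULL, NULL);
        wglDeleteContext(hglrc);
        ReleaseDC(hwnd, hdc);
    }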

Related

Bad performance of CGContextDrawImage due to it calling suspicious debug functions

A call to CGContextDrawImage turned out to be a bottleneck in my Mac OS X application, especially on Retina screens. I managed to mitigate it somewhat by avoiding colorspace transformations when blitting (see "Avoiding colorspace transformations when blitting, Mac OS X 10.11 SDK"), but it still seems slower than I would expect it to be.
When investigating the stack dump with Instruments, I noticed that a lot of time was spent in two functions with highly suspicious names: vImageDebug_CheckDestBuffer, which calls into _ERROR_Buffer_Write__Too_Small_For_Arguments_To_vImage__CheckBacktrace. See the full stack dump below.
This looks to me like some sort of debug assertion. Am I running a debug version of the vImage library without realising it? Is there something I can do to stop these functions from sucking up all my precious cycles?
The performance problem was solved by making sure the beginning of the pixel data in each scan line of the source bitmap is aligned to 16 bytes. Doing this makes the image drawing considerably faster. Presumably this happens by default when you allocate a new image, but we wrapped a CGImage around an existing pixel buffer that wasn't aligned.
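For reference, a minimal sketch of producing a 16-byte-aligned copy of an existing pixel buffer before wrapping it in a CGImage might look like this (the helper name and the 4-bytes-per-pixel assumption are illustrative, not from the original code):

    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>

    struct AlignedBitmap {
        uint8_t *pixels;
        size_t   bytesPerRow;
    };

    // Copies src into a new buffer whose scan lines each start on a 16-byte boundary.
    AlignedBitmap makeAlignedCopy(const uint8_t *src, size_t srcBytesPerRow,
                                  size_t width, size_t height)
    {
        const size_t bytesPerPixel = 4;                                   // assumes 32-bit BGRA/RGBA
        const size_t alignedBytesPerRow = (width * bytesPerPixel + 15) & ~(size_t)15;

        void *mem = NULL;
        if (posix_memalign(&mem, 16, alignedBytesPerRow * height) != 0)
            return AlignedBitmap{ NULL, 0 };

        uint8_t *dst = (uint8_t *)mem;
        for (size_t y = 0; y < height; ++y)
            memcpy(dst + y * alignedBytesPerRow, src + y * srcBytesPerRow,
                   width * bytesPerPixel);

        return AlignedBitmap{ dst, alignedBytesPerRow };                  // feed these to the CGImage wrapper
    }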
What about the graphics context (the first parameter)? Do you pass it in from another thread? What happens if you obtain the context on the main thread and then also draw the image on the main thread?

Windowed OpenGL first frame delay after idle

I have a windowed WinAPI/OpenGL app. The scene is drawn rarely (compared to games) in WM_PAINT, mostly triggered by user input - WM_MOUSEMOVE, clicks, etc.
I noticed that when the scene has not been moved by the mouse for a while (the application is "idle") and the user then starts some mouse action, the first frame is drawn with an unpleasant delay - around 300 ms. The following frames are fast again.
I implemented a 100 ms timer that only calls InvalidateRect, which is later followed by WM_PAINT and a scene redraw. This "fixed" the problem, but I don't like this solution.
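Roughly, that workaround looks like this (a simplified sketch; the timer ID and window-procedure skeleton are illustrative):

    #include <windows.h>

    static const UINT_PTR IDT_KEEPALIVE = 1;

    LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wParam, LPARAM lParam)
    {
        switch (msg) {
        case WM_CREATE:
            SetTimer(hwnd, IDT_KEEPALIVE, 100, NULL);      // fire every 100 ms
            return 0;
        case WM_TIMER:
            if (wParam == IDT_KEEPALIVE)
                InvalidateRect(hwnd, NULL, FALSE);         // queues WM_PAINT -> scene redraw
            return 0;
        case WM_DESTROY:
            KillTimer(hwnd, IDT_KEEPALIVE);
            PostQuitMessage(0);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }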
I'd like to know why this is happening, and also some tips on how to tackle it.
Does the OpenGL render context release resources when not in use? Or could this be caused by some system behaviour, like processor underclocking / energy saving, etc.? (Although I noticed that the processor runs underclocked even when the app is under "load".)
This sounds like the Windows virtual memory system at work. The sum of the memory used by all active programs is usually greater than the amount of physical memory installed in the system, so Windows swaps idle processes out to disk according to whatever rules it follows, such as the relative priority of each process and the amount of time it has been idle.
You are preventing the swap out (and delay) by artificially making the program active every 100ms.
If a swapped-out process is reactivated, it takes a little time to retrieve the memory contents from disk and resume the process.
It's unlikely that OpenGL is responsible for this delay.
You can improve the situation by starting your program with a higher priority.
https://superuser.com/questions/699651/start-process-in-high-priority
You can also use the VirtualLock function to prevent Windows from swapping out part of the memory (a minimal illustration follows after the link), but this is not advisable unless you REALLY know what you are doing!
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366895(v=vs.85).aspx
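For illustration only, the VirtualLock call mentioned above is as simple as this (a sketch; error handling and working-set tuning are omitted):

    #include <windows.h>

    // Pins the given pages in physical memory so the memory manager will not page them out.
    // For large regions the process working set may need to be enlarged first
    // (see SetProcessWorkingSetSize).
    BOOL PinBuffer(void *buffer, SIZE_T size)
    {
        return VirtualLock(buffer, size);
    }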
EDIT: You can certainly improve things by adding more memory; 4 GB sounds low for a modern PC, especially if you run Chrome with multiple tabs open.
If you want to be scientific before spending any hard-earned cash :-), open Performance Monitor and look at Cache Faults/sec. This shows the swap activity on your machine. (I have 16 GB on my PC, so this number is mostly very low.) To learn something from the exercise, check Cache Faults/sec before and after the memory upgrade, so you can quantify the difference!
Finally, there is nothing wrong with the solution you already found - kick-starting the graphics app every 100 ms or so.
The problem was in the NVIDIA driver's global 3D setting "Power management mode".
The options "Optimal Power" and "Adaptive" save power and cause the problem.
Only "Prefer Maximum Performance" does the right thing.

EGL/OpenGL ES/switching context is slow

I am developing an OpenGL ES 2.0 application (using angleproject on Windows for development) that is made up of multiple 'frames'.
Each frame is an isolated application that should not interfere with the surrounding frames. The frames are drawn using OpenGL ES 2.0, by the code running inside of that frame.
My first attempt was to assign a framebuffer to each frame. But there was a problem: OpenGL's internal state is changed while one frame is drawing, and if the next frame doesn't comprehensively reset every known piece of OpenGL state, there could be side effects. This defeats my requirement that the frames be isolated and not affect one another.
My next attempt was to use a context per frame: I created a unique context for each frame. I'm using shared resources, so that I can eglMakeCurrent to each frame's context, render each frame to its own framebuffer/texture, then eglMakeCurrent back to the global context to compose the textures onto the final screen.
This does a great job of isolating the instances; however, eglMakeCurrent is very slow. As few as 4 of them can make it take a second or more to render the screen.
What approach can I take? Is there a way I can either speed up context switching, or avoid context switching by somehow saving the OpenGL state per frame?
I have a suggestion that may eliminate the overhead of eglMakeCurrent while allowing you to use your current approach.
The concept of the current EGLContext is thread-local. I suggest creating all contexts on your process's master thread, then creating one thread per context and passing one context to each thread. During each thread's initialization, it calls eglMakeCurrent on the context it owns and never calls eglMakeCurrent again. Hopefully, in ANGLE's implementation, the thread-local storage for contexts is implemented efficiently and does not have unnecessary synchronization overhead.
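A rough sketch of that arrangement, assuming the display, surfaces, and contexts were already created on the master thread (all function and variable names here are illustrative):

    #include <EGL/egl.h>
    #include <thread>
    #include <vector>

    // Each frame gets its own thread; the thread binds its context exactly once.
    void frameThread(EGLDisplay dpy, EGLSurface surf, EGLContext ctx)
    {
        eglMakeCurrent(dpy, surf, surf, ctx);   // the only eglMakeCurrent this thread ever issues

        for (;;) {
            // Render this frame's content into its own framebuffer/texture here,
            // then signal the compositing thread (synchronization omitted in this sketch).
        }
    }

    void launchFrameThreads(EGLDisplay dpy,
                            const std::vector<EGLSurface> &surfaces,
                            const std::vector<EGLContext> &contexts)
    {
        for (size_t i = 0; i < contexts.size(); ++i)
            std::thread(frameThread, dpy, surfaces[i], contexts[i]).detach();
    }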
The problem here is trying to do this in a generic, platform- and OS-independent way. If you choose a specific platform, there are good solutions. On Windows, the wgl and glut libraries will give you multiple windows with completely independent OpenGL contexts running concurrently; they are called windows, not frames. You could also use DirectX instead of OpenGL (ANGLE uses DirectX). On Linux, the solution for OpenGL is X11. In either case, it's critical to have quality OpenGL drivers - no Intel Extreme chipset drivers. If you want to do this on Android or iOS, those require different solutions; there was a recent thread on the Khronos.org OpenGL ES forum about the Android case.

Non-visible NSOpenGLView slows down the whole system

I'm making a Mac OS X (10.8.3) OpenGL application using an NSOpenGLView and a CVDisplayLink to manage the calls to the render method.
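For reference, the display link setup is roughly the following (a sketch; the callback body and the user-info pointer are placeholders for my actual renderer):

    #include <CoreVideo/CoreVideo.h>

    // Called by CoreVideo on the display link thread for every vertical refresh.
    static CVReturn displayLinkCallback(CVDisplayLinkRef link,
                                        const CVTimeStamp *now,
                                        const CVTimeStamp *outputTime,
                                        CVOptionFlags flagsIn,
                                        CVOptionFlags *flagsOut,
                                        void *userInfo)
    {
        // Forward to the NSOpenGLView's render method (Objective-C bridging omitted).
        return kCVReturnSuccess;
    }

    void startDisplayLink(void *renderer, CVDisplayLinkRef *outLink)
    {
        CVDisplayLinkCreateWithActiveCGDisplays(outLink);
        CVDisplayLinkSetOutputCallback(*outLink, displayLinkCallback, renderer);
        CVDisplayLinkStart(*outLink);
    }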
The application works fine, but when the window gets covered or is in another Space (basically whenever it is not visible for some reason), the whole system starts to slow down.
I tested and profiled it in several ways and this is what I found:
The CPU is fine; CPU consumption does not increase.
Memory is fine too; the amount of allocated memory doesn't change.
In the OpenGL Driver Monitor, the "CPU Wait for GPU" time increases.
So does the "CPU Wait for Free OpenGL Command Buffer" time (I think this is the problem).
If no OpenGL draw calls are generated, the computer runs fine.
I'm guessing that a non-visible NSOpenGLView changes the behaviour in some way and makes my application consume more GPU time.
Any idea of what could be going wrong?

OpenGL "out of memory" on glReadPixels()

I am running into an "out of memory" error from OpenGL on glReadPixels() under low-memory conditions. I am writing a plug-in to a program that has a robust heap mechanism for such situations, but I have no idea whether or how OpenGL could be made to use it for application memory management. The notion that this is even possible came to my attention through this [albeit dated] thread on a similar issue under Mac OS [not X]: http://lists.apple.com/archives/Mac-opengl/2001/Sep/msg00042.html
I am using Windows XP, and have seen it on multiple NVidia cards. I am also interested in any work-arounds I might be able to relay to users (the thread mentions "increasing virtual memory").
Thanks, Sean
I'm quite sure that the out-of-memory error is not raised by glReadPixels (in fact, glReadPixels doesn't allocate memory itself).
The error is probably raised by other routines allocating buffer objects or textures. Once you detect the out-of-memory error, you should release all non-mandatory objects (textures, texture mipmaps, rarely used buffer objects) in order to make room for a new buffer object holding the data returned by glReadPixels.
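A rough sketch of that pattern might look like this (the cleanup helper is a placeholder for your own resource bookkeeping; the RGBA/byte format is an assumption):

    #include <windows.h>
    #include <GL/gl.h>

    // Placeholder: delete rarely used textures/buffer objects to free video memory.
    void releaseNonEssentialResources(void);

    // Reads back RGBA pixels and retries once after releasing resources if the
    // driver reports GL_OUT_OF_MEMORY. Assumes dst holds at least w * h * 4 bytes.
    bool readPixelsWithRetry(GLint x, GLint y, GLsizei w, GLsizei h, void *dst)
    {
        glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, dst);
        GLenum err = glGetError();
        if (err == GL_NO_ERROR)
            return true;
        if (err != GL_OUT_OF_MEMORY)
            return false;

        releaseNonEssentialResources();

        glReadPixels(x, y, w, h, GL_RGBA, GL_UNSIGNED_BYTE, dst);
        return glGetError() == GL_NO_ERROR;
    }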
Without more specifics, it's hard to say. Ultimately, OpenGL talks to the native OS when it needs to allocate memory, so if nothing else you can always replace (or hook) the default CRT/heap allocator for your process and have it fetch blocks from the "more robust" heap manager in the plug-in host.
