OpenGL ES 2.0 multithreading - opengl-es

I have been trying to use OpenGL ES 2.0 to build a photo-viewing application. To optimize the code, I change the textures bound to the objects as the user scrolls. But loading an image into a texture takes some time, so scrolling stutters. To solve this I tried multithreading in the following two ways:
1. Create a separate context for the new thread and share the resources (texture objects) with the other context.
2. Use multiple threads and a single context, making the context current whenever a thread executes GL commands.
But I wasn't successful with either. So if anyone has tried something similar with OpenGL before, could you please tell me which of the above would work and what I need to pay attention to while doing it? Also, would FBOs and pbuffers be of any use in this case?
Thanks for any help.
Himanshu

I would suggest keeping the OpenGL stuff in your primary thread, and delegating just the data loading to the worker thread.
Have, say, a circular buffer of image data objects that your worker thread can keep itself busy filling, but let your primary thread generate the textures from the image data as it needs them.

I don't think approach 1 is the easy route - sharing resources across contexts is possible (via a share group), but it's tricky to set up and synchronize correctly.
I was very successful with something like your approach 2. My app, iSynth, does a lot of texture creation on the fly at the same time the app is handling network requests and trying to render a decent number of frames per second. I use two threads for OpenGL and they share a context.
One of these is a dedicated render thread. At the high level, it's a while loop that repeatedly calls my rendering code using whatever textures are available at the time.
The other does nothing but create texture resources for the render thread to use. Images are continuously downloaded on the main thread and queued up - this thread's sole mission in life is to eat through that queue and output GL textures.
This approach works well because one thread is strictly a producer and the other a consumer. I could foresee lots of ugly problems if you start trying to create textures on the render thread, or modify GL state on the texture thread. If you're having problems, I'd make sure you're properly setting up the shared context and making that context current on each thread. Also, if a thread utilizing that context goes away, I've found that it's necessary to call [EAGLContext setCurrentContext:nil] from that thread first.

Related

eglDestroyContext and unfinished drawcalls

Following scenario:
I have an OpenGL ES app that renders frames via draw calls; the end of a frame is marked by eglSwapBuffers. Now imagine that after the last eglSwapBuffers I immediately call eglMakeCurrent to unbind the context from the surface, then immediately call eglDestroyContext. Assuming the context was the last one holding references to any resources the draw calls use (shaders, buffers, textures, etc.), what happens to the draw calls that the GPU has not yet finished and that use some of these resources?
Regards.
then immediately call eglDestroyContext().
All this really says is that the application is done with the context, and promises not to use it again. The actual context includes a reference count held by the graphics driver, and that won't drop to zero until all pending rendering has actually completed.
TL;DR - despite what the APIs say, nothing actually happens "immediately" when you make an API call. It's an elaborate illusion that is mostly a complete lie.

When should glDeleteBuffers, glDeleteShader, and glDeleteProgram be used in OpenGL ES 2?

While working with VBOs in OpenGL ES 2, I came across glDeleteBuffers, glDeleteShader, and glDeleteProgram. I looked around on the web but I couldn't find any good answers to when these methods are supposed to be called. Are these calls even necessary or does the computer automatically delete the objects on its own? Any answers are appreciated, thanks.
Every glGen* call should be paired with the appropriate glDelete* call which is called when you are finished with the resource.
The computer will not delete the objects on its own while your application is still running because it doesn't know whether you plan to re-use them later. If you are creating new objects throughout the life of your application and failing to delete old ones, then that's a resource leak which will eventually cause a shutdown of your application due to excessive memory usage.
The computer will delete objects for you when the application terminates, so there's no real benefit to deleting the objects that are permanently required throughout the lifetime of your application, but it is generally considered good practice to have a leak-free clean up.
You can call the glDelete* functions as soon as you are finished with the object (e.g. as soon as you've made your last draw call that uses it). You do not need to worry about whether the object might still be in the GPU's queues or pipelines, that is the OpenGL driver's problem.

SDL 2.0: Create window in main thread, but do all rendering in separate one

This is my current setup: I'm doing OpenGL rendering using SDL (currently on Linux). I initialize SDL (SDL_Init) and create the application window (SDL_CreateWindow) in the main thread and pass it to a second thread. This second thread creates an OpenGL context from it (SDL_GL_CreateContext) and starts a render loop, while the main thread listens for events. I think it's important to note that GL calls are completely confined to this second thread; actually most of my application logic happens there, the main thread is really only responsible for handling events that come in over SDL.
Originally I did this the other way around, but it turns out you can't process events in anything other than the main thread on OSX, and probably also on Windows, so I switched it around to be compatible with those two in the future.
Should I have any concerns that this will not work on OSX/Windows? On Linux, I haven't had any whatsoever. There's lots of information on the internet about context sharing and doing GL calls from multiple threads, but all I want to do is do OpenGL in one thread that is not the main one. I wouldn't like to continue coding my application only to later find out that it won't work anywhere else.
I have an app which runs on Mac/iOS/Windows that is structured this way (all GL in a rendering thread), but I don't use SDL.
I just took a look at SDL's Cocoa_GL_CreateContext (called by SDL_GL_CreateContext on OS X) and it makes calls that I make from my main thread to set up the context.
So, if you hit any problems, try creating the GL context in the main thread and then pass that off to the rendering thread (instead of creating the GL context in the rendering thread).
OpenGL and multithreading are basically enemies: only one thread can own the render context at any given moment. Yes, you can switch the GL render context whenever threads switch, but think of the cost, and also consider that, from one OEM driver to the next, it's not well supported - likely to work for some people and not others.
The only logical (and sane) alternative, is to keep all your OpenGL calls to one thread (note: there are exceptions, there are things that any thread can call in gl, relating to streaming data, without them needing to own render context).
Unfortunately, we can't simply pass the GL context between threads as suggested; we must call (w)glMakeCurrent, which tells GL "this calling thread now owns you" - but it fails to tell the other threads that...

Deleting WebGL contexts

I have a page that uses many WebGL contexts, one per canvas. Canvases can be reloaded, resized, etc., each time creating new contexts. It works for several reloads, but eventually, when I try to create a new context, it returns null. I assume I'm running out of memory.
I would like to be able to delete the contexts that I'm no longer using so I can recover the memory and use it for my new contexts. Is there any way to do this? Or is there a better way to handle many canvases?
Thanks.
This is a long-standing bug in Chrome and WebKit:
http://code.google.com/p/chromium/issues/detail?id=124238
There is no way to "delete" a context in WebGL. Contexts are deleted by garbage collection whenever the system gets around to it. All resources will be freed at that point but it's best if you delete your own resources rather than waiting for the browser to delete them.
As I said this is a bug. It will eventually be fixed but I have no ETA.
I would suggest not deleting canvases. Keep them around and re-use them.
Another suggestion would be tell us why you need 200 canvases. Maybe the problem you are trying to solve would be better solved a different way.
I would assume, that until you release all the resources attached to your context, something will still hold references to it, and therefore it will still exist.
A few things to try:
Here is some debug GL code. There is a function there to reset a context to its initial state. Try that before deleting the canvas it belongs to.
It's possible that some event system could hold reference to your contexts, keeping them in a zombie state.
Are you deleting your canvases from the DOM? I'm sure there is a limit on resources the page can maintain in one instance.
SO was complaining that our comment thread was getting a bit long. Try these things first and let me know if it helps.

Switch OpenGL contexts or switch context render target instead, what is preferable?

On MacOS X, you can render OpenGL to any NSView object of your choice, simply by creating an NSOpenGLContext and then calling -setView: on it. However, you can only associate one view with a single OpenGL context at any time. My question is, if I want to render OpenGL to two different views within a single window (or possibly within two different windows), I have two options:
Create one context and always change the view, by calling setView as appropriate each time I want to render to the other view. This will even work if the views are within different windows or on different screens.
Create two NSOpenGLContext objects and associate one view with either one. These two contexts could be shared, which means most resources (like textures, buffers, etc.) will be available in both views without wasting twice the memory. In that case, though, I have to keep switching the current context each time I want to render to the other view, by calling -makeCurrentContext on the right context before making any OpenGL calls.
I have in fact used both options in the past, and each worked well enough for my needs. Still, I asked myself which way is better in terms of performance, compatibility, and so on. I've read that context switching is horribly slow, or at least it used to be very slow; that may have changed since. It may also depend on how much data is associated with a context (e.g. resources), since switching the active context might cause data to be transferred between system memory and GPU memory.
On the other hand switching the view could be very slow as well, especially if this might cause the underlying renderer to change; e.g. if your two views are part of two different windows located on two different screens that are driven by two different graphic adapters. Even if the renderer does not change, I have no idea if the system performs a lot of expensive OpenGL setup/clean-up when switching a view, like creating/destroying render-/framebuffer objects for example.
I investigated context switching between 3 windows on Lion, where I tried to resolve some performance issues with a somewhat misused VTK library, which itself is terribly slow already.
Whether you switch render contexts or windows doesn't really matter, because there is always the overhead of making them current to the calling thread as a triple (context, drawable, thread). I measured roughly 50 ms per switch, with some OS/window-manager overhead factored in as well. This overhead also depends greatly on the arrangement of the other GL calls, because the driver may be forced to wait for commands to finish - something you can trigger manually with a blocking call to glFinish().
The most efficient setup I got working is similar to your 2nd, but has two dedicated render threads having their render context (shared) and window permanently bound. Aforesaid context switches/bindings are done just once on init.
The threads can be controlled with standard threading primitives such as a shared barrier, which lets both threads render single frames in sync (both stall at the barrier before they can be launched again). Data handling must also be interlocked, which can be done in one thread while the render threads are stalled.
