eglDestroyContext and unfinished drawcalls - opengl-es

Consider the following scenario:
I have an OpenGL ES app that renders frames via draw calls; the end of a frame is marked by eglSwapBuffers. Now imagine that after the last eglSwapBuffers I immediately call eglMakeCurrent to unbind the context from the surface, then immediately call eglDestroyContext. Assuming the context was the last one holding references to any resources the draw calls use (shaders, buffers, textures, etc.), what happens to the draw calls that the GPU has not yet finished and that use some of these resources?

"then immediately call eglDestroyContext()."
All this really says is that the application is done with the context, and promises not to use it again. The actual context includes a reference count held by the graphics driver, and that won't drop to zero until all pending rendering has actually completed.
TLDR - despite what the APIs say, nothing actually happens "immediately" when you make an API call - it's an elaborate illusion that is mostly a complete lie.
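For concreteness, here is a minimal sketch of the teardown sequence the question describes, in EGL/C terms. The helper name is hypothetical; dpy, surf, and ctx stand for the app's already-initialized display, window surface, and context.

#include <EGL/egl.h>

/* Hypothetical helper: ctx is assumed to be current on the calling thread. */
static void shutDownGL(EGLDisplay dpy, EGLSurface surf, EGLContext ctx)
{
    eglSwapBuffers(dpy, surf);                      /* queue the last frame */
    eglMakeCurrent(dpy, EGL_NO_SURFACE,
                   EGL_NO_SURFACE, EGL_NO_CONTEXT); /* unbind from the surface */
    eglDestroyContext(dpy, ctx);                    /* "destroy" only drops the
                                                       app's reference; the driver
                                                       keeps its own until pending
                                                       GPU work that uses the
                                                       context has finished */
}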

Related

When should glDeleteBuffers, glDeleteShader, and glDeleteProgram be used in OpenGL ES 2?

While working with VBOs in OpenGL ES 2, I came across glDeleteBuffers, glDeleteShader, and glDeleteProgram. I looked around on the web but I couldn't find any good answers to when these methods are supposed to be called. Are these calls even necessary or does the computer automatically delete the objects on its own? Any answers are appreciated, thanks.
Every glGen* call should be paired with the appropriate glDelete* call, made once you are finished with the resource.
The computer will not delete the objects on its own while your application is still running, because it doesn't know whether you plan to re-use them later. If you are creating new objects throughout the life of your application and failing to delete old ones, that's a resource leak which will eventually get your application shut down due to excessive memory usage.
The computer will delete objects for you when the application terminates, so there's no real benefit to deleting objects that are required for the entire lifetime of your application, but it is generally considered good practice to clean up without leaks.
You can call the glDelete* functions as soon as you are finished with the object (e.g. as soon as you've made your last draw call that uses it). You do not need to worry about whether the object might still be in the GPU's queues or pipelines, that is the OpenGL driver's problem.
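As a minimal sketch of that pairing in OpenGL ES 2.0 C code (vertices and vertexCount are placeholders for the app's own data, and 3 floats per vertex is just an assumption for the buffer size):

#include <GLES2/gl2.h>

/* Hypothetical app data. */
extern const GLfloat vertices[];
extern GLsizei vertexCount;

void drawOnce(void)
{
    GLuint vbo;
    glGenBuffers(1, &vbo);                         /* glGen* ...              */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, vertexCount * 3 * sizeof(GLfloat),
                 vertices, GL_STATIC_DRAW);
    /* ... set up vertex attributes, use the shader program, etc. ... */
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);    /* last draw call using the VBO */
    glDeleteBuffers(1, &vbo);                      /* ... paired with glDelete*;
                                                      safe even though the draw may
                                                      not have executed on the GPU yet */
}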

WebGL - When to call gl.flush()?

I just noticed today that this method, flush(), is available.
Not able to find detailed documentation on it.
What exactly does this do?
Is this required?
gl.flush in WebGL does have its uses, but they are driver- and browser-specific.
For example, because Chrome's GPU architecture is multi-process, you can do this:
var loadShader = function(gl, shaderSource, shaderType) {
  var shader = gl.createShader(shaderType);
  gl.shaderSource(shader, shaderSource);
  gl.compileShader(shader);
  return shader;
};
var vs = loadShader(gl, someVertexShaderSource, gl.VERTEX_SHADER);
var fs = loadShader(gl, someFragmentShaderSource, gl.FRAGMENT_SHADER);
var p = gl.createProgram();
gl.attachShader(p, vs);
gl.attachShader(p, fs);
gl.linkProgram(p);
At this point, all of the commands might be sitting in the command queue with nothing executing them yet. So, issue a flush:
gl.flush();
Now, because we know that compiling and linking programs can be slow (depending on how large and complex they are), we can wait a while before trying to use them and do other things in the meantime:
setTimeout(continueLater, 1000); // continue 1 second later
Now do other things, like setting up the page or UI.
One second later, continueLater will get called. By then it's likely our shaders have finished compiling and linking.
function continueLater() {
  // check results, get locations, etc.
  if (!gl.getShaderParameter(vs, gl.COMPILE_STATUS) ||
      !gl.getShaderParameter(fs, gl.COMPILE_STATUS) ||
      !gl.getProgramParameter(p, gl.LINK_STATUS)) {
    alert("shaders didn't compile or program didn't link");
    ...etc...
  }
  var someLoc = gl.getUniformLocation(p, "u_someUniform");
  ...etc...
}
I believe Google Maps uses this technique as they have to compile many very complex shaders and they'd like the page to stay responsive. If they called gl.compileShader or gl.linkProgram and immediately called one of the query functions like gl.getShaderParameter or gl.getProgramParameter or gl.getUniformLocation the program would freeze while the shader is first validated and then sent to the driver to be compiled. By not doing the querying immediately but waiting a moment they can avoid that pause in the UX.
Unfortunately this only works for Chrome AFAIK because other browsers are not multi-process and I believe all drivers compile/link synchronously.
There may be other reasons to call gl.flush, but again it's very driver/OS/browser specific. As an example, let's say you were going to draw 1000 objects and that doing so took 5000 WebGL calls. It would likely require more than that, but just to have a number let's pick 5000: 4 calls to gl.uniformXXX and 1 call to gl.drawXXX per object.
It's possible all 5000 of those calls fit in the browser's (Chrome's) or driver's command buffer. Without a flush they won't start executing until the browser issues a gl.flush for you (which it does so it can composite your results on the screen). That means the GPU might be sitting idle while you issue 1000, then 2000, then 3000, etc. commands, since they're just sitting in a buffer. gl.flush tells the system "Hey, those commands I added, please make sure to start executing them". So you might decide to call gl.flush after each 1000 commands.
The problem, though, is that gl.flush is not free; otherwise you'd call it after every command to make sure it executes as soon as possible. On top of that, each driver/browser works in different ways. On some drivers, calling gl.flush every few hundred or thousand WebGL calls might be a win. On others it might be a waste of time.
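To make that batching idea concrete, here is a rough sketch in native GL/GLES terms (the same pattern applies to WebGL's gl.flush). setUniformsForObject() and drawObject() are hypothetical stand-ins for the app's own per-object code, and the batch size of 250 is an arbitrary example.

#include <GLES2/gl2.h>

/* Hypothetical per-object helpers provided by the app. */
extern void setUniformsForObject(int i);   /* a few glUniform* calls */
extern void drawObject(int i);             /* one glDraw* call       */

void drawScene(int objectCount)
{
    for (int i = 0; i < objectCount; ++i) {
        setUniformsForObject(i);
        drawObject(i);
        if ((i + 1) % 250 == 0)
            glFlush();   /* nudge the commands queued so far toward the GPU
                            instead of letting them pile up in the buffer */
    }
}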
Sorry, that was probably too much info :p
Assuming it's semantically equivalent to the classic GL glFlush then no, it will almost never be required. OpenGL is an asynchronous API — you queue up work to be done and it is done when it can be. glFlush is still asynchronous but forces any accumulated buffers to be emptied as quickly as they can be, however long that may take; it basically says to the driver "if you were planning to hold anything back for any reason, please don't".
It's usually done only for a platform-specific reason related to the coupling of OpenGL and the other display mechanisms on that system. For example, one platform might need all GL work to be submitted, not left queued, before the container that it will be drawn into can be moved to the screen. Another might allow piping of resources from a background thread into the main OpenGL context but not guarantee that they're necessarily available for subsequent calls until you've flushed (e.g. if multithreading ends up creating two separate queues where there might otherwise be one, then a flush might insert a synchronisation barrier across both).
Any platform with a double buffer or with back buffering as per the WebGL model will automatically ensure that buffers proceed in a timely manner. Queueing is to aid performance but should have no negative observable consequences. So you don't have to do anything manually.
If you decline to flush even when semantically you perhaps should, but your graphics are driven by real-time display anyway, then the worst you're likely to suffer is a fraction of a second of extra latency.

GetMessage() while the thread is blocked in SwapBuffers()

Vsync blocks SwapBuffers(), which is what I want. My problem is that, since input messages go to the same thread that owns the window, any messages that come in while SwapBuffers() is blocked won't be processed immediately, but only after the vsync triggers the buffer swap and SwapBuffers() returns. So I have all my compute threads sitting idle instead of processing the scene for rendering in the next frame using the most recent input. I'm particularly concerned with having very low latency. I need some way to access all pending input messages to the window from other threads.
Windows API provides a way to wait for either Windows events or input messages using MsgWaitForMultipleObjects(), yet there's no similar way to wait for a buffer swap together with other things. That's very unfortunate.
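(For reference, the combined wait that does exist looks roughly like the sketch below; swapDone is a hypothetical event handle that would have to be signalled when the swap completes, and it is exactly that handle which WGL never exposes.)

/* requires <windows.h>; swapDone is a hypothetical HANDLE */
HANDLE handles[1] = { swapDone };
DWORD r = MsgWaitForMultipleObjects(1, handles, FALSE, INFINITE, QS_ALLINPUT);
if (r == WAIT_OBJECT_0) {
    /* the "swap finished" event fired: start the next frame's work */
} else if (r == WAIT_OBJECT_0 + 1) {
    MSG msg;                                   /* input arrived: drain the queue */
    while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}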
I considered calling SwapBuffers() in another thread, but that requires glFinish() to be called in the window's thread before signalling another thread to SwapBuffers(), and glFinish() is still a blocking call so it's not a good solution.
I considered hooking, but that also looks like a dead end. Hooking with WH_GETMESSAGE will have GetMsgProc() called not asynchronously, but when the window's thread calls GetMessage()/PeekMessage(), so it's no help. Installing a global hook doesn't help me either, due to the need to call RegisterTouchWindow() with a specific window handle to process WM_TOUCH -- and my input is touch. And while, for mouse and keyboard, you can install low-level hooks that capture messages as they're posted to the thread's queue rather than when the thread calls GetMessage()/PeekMessage(), there appears to be no similar option for touch.
I also looked at wglDelayBeforeSwapNV(), but I don't see what's preventing the OS from preempting the thread sometime after the call to that function but before SwapBuffers(), causing a miss of the next vsync signal.
So what's a good workaround? Can I make a second, invisible window, that will somehow be always the active one and so get all input messages, while the visible one is displaying the rendering? According to another discussion, message-only windows (CreateWindow with HWND_MESSAGE) are not compatible with WM_TOUCH. Is there perhaps some undocumented event that SwapBuffers() is internally waiting on that I could access and feed to MsgWaitForMultipleObjects()? My target is a fixed platform (Windows 8.1 64-bit) so I'm fine with using undocumented functionality, should it exist. I do want to avoid writing my own touchscreen driver, however.
Out of curiosity, why not implement your entire drawing logic in that other thread? It appears the problem you are running into is that the message pump is driven by the same thread that draws. Since Windows does not let you drive the message pump from a different thread than the one that created the window, the easiest solution would just be to push all the GL stuff into a different thread.
SwapBuffers (...) is also not necessarily going to block. As per the requirements of VSYNC, an implementation need only block the next command that would modify the backbuffer while all backbuffers are pending a swap. Triple buffering changes things up a little bit by introducing a second backbuffer.
One possible implementation of triple buffering will discard the oldest backbuffer when it comes time to swap, so SwapBuffers (...) would never cause blocking (this is effectively how modern versions of Windows work in windowed mode with the DWM enabled). Other implementations will eventually present both backbuffers; this reduces (but does not eliminate) blocking, but also results in the display of late frames.
Unfortunately WGL does not let you request the number of backbuffers in a swap-chain (beyond 0 single-buffered or 1 double-buffered); the only way to get triple buffering on Windows is using driver settings. Lowest message latency will come from driving GL in a different thread, but triple buffering can help a little bit while requiring no effort on your part.
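A minimal sketch of that "drive GL in a different thread" layout on Win32/WGL follows. It assumes the window already has a pixel format set on its DC, and renderFrame() is a hypothetical stand-in for the app's drawing code.

#include <windows.h>

extern void renderFrame(void);   /* hypothetical: the app's GL draw calls */

static DWORD WINAPI renderThread(LPVOID param)
{
    HWND hwnd = (HWND)param;
    HDC dc = GetDC(hwnd);
    HGLRC rc = wglCreateContext(dc);   /* context is owned by this thread */
    wglMakeCurrent(dc, rc);
    for (;;) {
        renderFrame();
        SwapBuffers(dc);               /* may block on vsync, but only blocks
                                          this thread, not the message pump */
    }
}

/* Main thread: create the window elsewhere, start the render thread, then just
   pump messages so input latency is unaffected by SwapBuffers. */
void runMessageLoop(HWND hwnd)
{
    CreateThread(NULL, 0, renderThread, hwnd, 0, NULL);
    MSG msg;
    while (GetMessage(&msg, NULL, 0, 0) > 0) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }
}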

SDL 2.0: Create window in main thread, but do all rendering in separate one

This is my current setup: I'm doing OpenGL rendering using SDL (currently on Linux). I initialize SDL (SDL_Init) and create the application window (SDL_CreateWindow) in the main thread and pass it to a second thread. This second thread creates an OpenGL context from it (SDL_GL_CreateContext) and starts a render loop, while the main thread listens for events. I think it's important to note that GL calls are completely confined to this second thread; actually most of my application logic happens there, the main thread is really only responsible for handling events that come in over SDL.
Originally I did this the other way around, but it turns out you can't process events in anything other than the main thread on OSX, and probably also Windows, so I switched it around to be compatible with those two in the future.
Should I have any concerns that this will not work on OSX/Windows? On Linux, I haven't had any whatsoever. There's lots of information on the internet about context sharing and doing GL calls from multiple threads, but all I want to do is do OpenGL in one thread that is not the main one. I wouldn't like to continue coding my application only to later find out that it won't work anywhere else.
I have an app which runs on Mac/iOS/Windows that is structured this way (all GL in a rendering thread), but I don't use SDL.
I just took a look at SDL's Cocoa_GL_CreateContext (called by SDL_GL_CreateContext on OS X) and it makes calls that I make from my main thread to set up the context.
So, if you hit any problems, try creating the GL context in the main thread and then pass that off to the rendering thread (instead of creating the GL context in the rendering thread).
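A rough sketch of that arrangement with SDL2 is below. The render loop body is elided, and whether you need the SDL_GL_MakeCurrent(win, NULL) release on the main thread before handing the context off is exactly the kind of platform detail worth testing.

#include <SDL.h>

static SDL_Window *win;
static SDL_GLContext ctx;

static int renderThread(void *unused)
{
    (void)unused;
    SDL_GL_MakeCurrent(win, ctx);     /* take ownership of the context here */
    for (;;) {
        /* ... GL calls ... */
        SDL_GL_SwapWindow(win);
    }
    return 0;
}

int main(int argc, char **argv)
{
    SDL_Init(SDL_INIT_VIDEO);
    win = SDL_CreateWindow("app", SDL_WINDOWPOS_CENTERED, SDL_WINDOWPOS_CENTERED,
                           800, 600, SDL_WINDOW_OPENGL);
    ctx = SDL_GL_CreateContext(win);  /* created on the main thread, as suggested */
    SDL_GL_MakeCurrent(win, NULL);    /* release it before the hand-off (may not
                                         be required on every platform)           */
    SDL_CreateThread(renderThread, "render", NULL);

    SDL_Event e;                      /* the main thread only handles events */
    while (SDL_WaitEvent(&e)) {
        if (e.type == SDL_QUIT) break;
    }
    return 0;
}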
OpenGL and multithreading are basically enemies: only one thread can 'own' the render context at any given moment. Yes, you can switch the GL render context whenever threads switch, but think of the cost, and also consider that, from one OEM driver to the next, it's not well supported and is likely to work for some people and not others.
The only logical (and sane) alternative is to keep all your OpenGL calls in one thread (note: there are exceptions, such as calls relating to streaming data that any thread can make without needing to own the render context).
Unfortunately, we can't simply pass the GL context around between threads as suggested; we must call (w)glMakeCurrent, which tells GL "this calling thread now owns you" but fails to tell the other threads that...
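In other words, handing a context between threads is an explicit release/acquire, roughly like this sketch (hdc, hglrc, and the contextReleased event are assumed to already exist):

/* Thread A, the current owner: */
wglMakeCurrent(NULL, NULL);                      /* release ownership first      */
SetEvent(contextReleased);                       /* tell thread B it may proceed */

/* Thread B: */
WaitForSingleObject(contextReleased, INFINITE);
wglMakeCurrent(hdc, hglrc);                      /* thread B now owns the context */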

OpenGL ES 2.0 multithreading

I have been trying to use OpenGL ES 2.0 to make a photo-viewing application. To optimize the code, I'm changing the textures loaded with the objects as the user scrolls down. But loading an image into a texture takes some time, and thus the effect is not good. To solve this problem I tried using multithreading in the following ways:
1. Create a separate context for the new thread and then share the resources (texture objects) with the other context.
2. Use multiple threads and a single context; make the context current while executing GL commands in the threads.
But I wasn't successful with either of them. So if anyone has tried similar things with OpenGL before, could you please tell me which of the above would work and what I need to pay attention to while doing it? Also, would FBOs and pbuffers be of any use in this case?
Thanks for any help.
I would suggest keeping the OpenGL stuff in your primary thread, and delegating just the data loading to the worker thread.
Have, say, a circular buffer of image data objects that your worker thread can keep itself busy filling, but let your primary thread generate the textures from the image data as it needs them.
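A minimal sketch of that split, using a mutex-protected ring of decoded images (decodeNextImage() and the slot layout are hypothetical): the worker thread never touches GL, and the GL thread turns ready slots into textures whenever it wants a new one. A real version would block on a condition variable instead of spinning, but the division of labour is the point here.

#include <pthread.h>
#include <GLES2/gl2.h>

#define SLOTS 4
typedef struct { unsigned char *pixels; int w, h, ready; } ImageSlot;

static ImageSlot ring[SLOTS];
static pthread_mutex_t ringLock = PTHREAD_MUTEX_INITIALIZER;

extern void decodeNextImage(ImageSlot *slot);  /* hypothetical: fills pixels/w/h */

/* Worker thread: keep empty slots filled with decoded image data (no GL here). */
static void *loaderThread(void *arg)
{
    (void)arg;
    for (;;) {
        for (int i = 0; i < SLOTS; ++i) {
            pthread_mutex_lock(&ringLock);
            int empty = !ring[i].ready;
            pthread_mutex_unlock(&ringLock);
            if (empty) {
                decodeNextImage(&ring[i]);
                pthread_mutex_lock(&ringLock);
                ring[i].ready = 1;
                pthread_mutex_unlock(&ringLock);
            }
        }
    }
    return NULL;
}

/* GL (primary) thread: call when a new texture is needed; returns 0 if none ready. */
GLuint uploadReadySlot(void)
{
    GLuint tex = 0;
    pthread_mutex_lock(&ringLock);
    for (int i = 0; i < SLOTS; ++i) {
        if (ring[i].ready) {
            glGenTextures(1, &tex);
            glBindTexture(GL_TEXTURE_2D, tex);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, ring[i].w, ring[i].h, 0,
                         GL_RGBA, GL_UNSIGNED_BYTE, ring[i].pixels);
            ring[i].ready = 0;
            break;
        }
    }
    pthread_mutex_unlock(&ringLock);
    return tex;
}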
I don't think approach 1 is valid - you're not supposed to share resources across contexts.
I was very successful with something like your approach 2. My app, iSynth, does a lot of texture creation on the fly at the same time the app is handling network requests and trying to render a decent number of frames per second. I use two threads for OpenGL and they share a context.
One of these is a dedicated render thread. At the high level, it's a while loop that repeatedly calls my rendering code using whatever textures are available at the time.
The other does nothing but create texture resources for the render thread to use. Images are continuously downloaded on the main thread and queued up - this thread's sole mission in life is to eat through that queue and output GL textures.
This approach works well because one thread is strictly a producer and the other a consumer. I could foresee lots of ugly problems if you start trying to create textures on the render thread, or modify GL state on the texture thread. If you're having problems, I'd make sure you're properly setting up the shared context and making that context current on each thread. Also, if a thread utilizing that context goes away, I've found that it's necessary to call setCurrentContext: nil from that thread first.
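For an EGL-based ES 2.0 setup, the shared-context arrangement described here looks roughly like the sketch below (dpy, config, and winSurf are assumed to exist already; the loader thread gets a tiny pbuffer surface since it never draws). The final eglMakeCurrent with EGL_NO_CONTEXT is the EGL analogue of the setCurrentContext: nil step mentioned above.

#include <EGL/egl.h>

EGLContext renderCtx, loaderCtx;

/* Main/render thread: create both contexts, sharing resources via the third
   argument to eglCreateContext. */
void setupContexts(EGLDisplay dpy, EGLConfig config, EGLSurface winSurf)
{
    static const EGLint ctxAttribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };
    renderCtx = eglCreateContext(dpy, config, EGL_NO_CONTEXT, ctxAttribs);
    loaderCtx = eglCreateContext(dpy, config, renderCtx, ctxAttribs);  /* shared */
    eglMakeCurrent(dpy, winSurf, winSurf, renderCtx);
}

/* Loader thread: bind the shared context to a 1x1 pbuffer, create textures,
   then release the context before the thread goes away. */
void loaderThreadMain(EGLDisplay dpy, EGLConfig config)
{
    static const EGLint pbufAttribs[] = { EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE };
    EGLSurface pbuf = eglCreatePbufferSurface(dpy, config, pbufAttribs);
    eglMakeCurrent(dpy, pbuf, pbuf, loaderCtx);
    /* ... glGenTextures / glTexImage2D for queued images ... */
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);
}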
