Meaningfulness of glInvalidateFramebuffer with shared attachments - opengl-es

I have a scenario where I switch between two FBOs. Both FBOs share the same depth/stencil attachment; only the color attachments are different. Now, if I call glInvalidateFramebuffer on the depth/stencil attachment before calling glBindFramebuffer, will this have any benefit at all, since the attachment is reused between the FBOs and the GPU should be able to recognize this and avoid unnecessary memory operations?
Thanks.

In principle, yes. Once you have marked an attachment as "invalid" then the contents are undefined until something writes new data into it.
In practice, I suspect you are at the mercy of how good drivers are at state tracking across FBOs if you want to get the full benefits.
I'd recommend issuing all necessary clears and invalidates for every FBO separately; they are not expensive operations.
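For reference, a minimal sketch (plain C, OpenGL ES 3.0) of the invalidate-then-switch pattern being discussed; the FBO handles and the exact attachment list are hypothetical:

    #include <GLES3/gl3.h>

    /* current_fbo and next_fbo are hypothetical FBO handles sharing one
       depth/stencil attachment. Assumes an OpenGL ES 3.0 context. */
    void switch_fbo(GLuint current_fbo, GLuint next_fbo)
    {
        static const GLenum discard[] = { GL_DEPTH_ATTACHMENT, GL_STENCIL_ATTACHMENT };

        /* Tell the driver the depth/stencil contents of the FBO we are leaving
           are no longer needed, so a tile-based GPU can skip writing them back
           to memory at the end of the render pass. */
        glBindFramebuffer(GL_FRAMEBUFFER, current_fbo);
        glInvalidateFramebuffer(GL_FRAMEBUFFER, 2, discard);

        /* Bind the next FBO; the shared depth/stencil contents are now
           undefined, so clear them before rendering. */
        glBindFramebuffer(GL_FRAMEBUFFER, next_fbo);
        glClear(GL_DEPTH_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
    }

The clear after binding the next FBO re-establishes defined contents, which is cheap on tile-based hardware and keeps each FBO's clears/invalidates self-contained, as recommended above.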

Related

eglDestroyContext and unfinished drawcalls

Following scenario:
I have an OpenGL ES app that renders frames via drawcalls; the end of a frame is marked by eglSwapBuffers. Now imagine that after the last eglSwapBuffers I immediately call eglMakeCurrent to unbind the context from the surface, then immediately call eglDestroyContext. Assuming the context was the last one to hold references to any resources the drawcalls use (shaders, buffers, textures, etc.), what happens to the drawcalls that the GPU has not yet finished and that use some of these resources?
Regards.
"then immediately call eglDestroyContext()."
All this really says is that the application is done with the context, and promises not to use it again. The actual context includes a reference count held by the graphics driver, and that won't drop to zero until all pending rendering has actually completed.
TLDR - despite what the APIs say, nothing actually happens "immediately" when you make an API call - it's an elaborate illusion that is mostly a complete lie.
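To make the ordering concrete, here is a minimal sketch of the teardown sequence described in the question; display, surface, and context are assumed to come from an already working EGL setup, and error checking is omitted:

    #include <EGL/egl.h>

    void teardown(EGLDisplay display, EGLSurface surface, EGLContext context)
    {
        eglSwapBuffers(display, surface);      /* end of the last frame */

        /* Unbind the context from this thread and surface... */
        eglMakeCurrent(display, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);

        /* ...and mark it for destruction. The driver keeps its own reference
           until all queued drawcalls have completed, so the resources the GPU
           is still using stay alive until then. */
        eglDestroyContext(display, context);
        eglDestroySurface(display, surface);
        eglTerminate(display);
    }

Per the answer above, eglDestroyContext only marks the context for deletion; nothing is torn down until the pending rendering has actually finished.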

Is it safe to read concurrently from a pointer?

I'm working on an image uploader and want to concurrently resize the image to different sizes. Once I've read the file as a []byte, I pass a reference to that buffer to my resize functions, which run concurrently.
Is this safe? I'm thinking that passing a reference to a large file to be read by the resize functions will save me memory, and the concurrency will save me time.
Thank you!
Read-only data is usually fine for concurrent access, but you have to be very careful when passing references (pointers, slices, maps and so on) around. Today maybe no one is modifying them while you're also reading, but tomorrow someone may be.
If this is a throwaway script, you'll be fine. But if it's part of a larger program, I'd recommend future-proofing your code by judiciously protecting concurrent access. In your case something like a reader-writer lock could be a good match - all the readers will be able to acquire the lock concurrently, so the performance impact is negligible. And then if you do decide in the future this data could be modified, you already have the proper groundwork laid down w.r.t. safety.
Don't forget to run your code with the race detector enabled.
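The reader-writer lock mentioned above looks roughly like the sketch below. It is written in C using POSIX pthread_rwlock_t to illustrate the pattern; in Go the direct analogue is sync.RWMutex. The buffer and function names are made up for illustration.

    #include <pthread.h>
    #include <stddef.h>

    /* One lock protecting the shared image buffer. */
    static pthread_rwlock_t buf_lock = PTHREAD_RWLOCK_INITIALIZER;

    /* Many resize workers can hold the read lock at the same time. */
    void resize_image(const unsigned char *buf, size_t len, int target_width)
    {
        pthread_rwlock_rdlock(&buf_lock);
        /* ... read-only access to buf ... */
        (void)buf; (void)len; (void)target_width;
        pthread_rwlock_unlock(&buf_lock);
    }

    /* A future writer takes the exclusive lock before modifying the buffer. */
    void replace_image(unsigned char *buf, size_t len)
    {
        pthread_rwlock_wrlock(&buf_lock);
        /* ... modify buf ... */
        (void)buf; (void)len;
        pthread_rwlock_unlock(&buf_lock);
    }

Since readers never block each other, adding the lock now costs essentially nothing but leaves the door open for writers later.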

Is there a way to determine what CGLFlushDrawable is doing to the back buffer?

According to Apple's documentation, CGLFlushDrawable or its Cocoa equivalent flushBuffer may behave in a couple of different ways. Normally, for a windowed application, the contents of the back buffer are copied to the visible buffer, as stated here:
CGLFlushDrawable
Copies the back buffer of a double-buffered context to the front buffer.
I assume the contents of the drawing buffer are left untouched (see question 1). Even if I'm wrong, this can be ensured by passing the kCGLPFABackingStore attribute to CGLChoosePixelFormat.
But further reading reveals that under some circumstances the buffers may be swapped rather than copied:
If the backing store attribute is set to false, the buffers can be exchanged rather than copied. This is often the case in full-screen mode.
And also this states
When there is no content above your full-screen window, Mac OS X automatically attempts to optimize this context’s performance. For example, when your application calls flushBuffer on the NSOpenGLContext object, the system may swap the buffers rather than copying the contents of the back buffer to the front buffer. (...) Because the system may choose to swap the buffers rather than copy them, your application must completely redraw the scene after every call to flushBuffer.
And here are my questions:
1. If the back buffer is copied, is it guaranteed that its contents are preserved even without the backing store attribute?
2. If the buffers are swapped, does the back buffer get the contents of the front buffer, or is it undefined, so it could just as well contain random stuff?
3. The system may choose to swap buffers, but is there any way to determine whether it actually did so?
4. In any of those cases, is there a way to determine whether the buffer was preserved, exchanged with the front buffer, or got messed up?
Also, any information on how this works in WGL, GLX, or EGL would be appreciated. I particularly need the answer to question 4.
1. No, it's not guaranteed.
2. It might be random.
3. No, I don't believe so.
4. No. If you don't specify kCGLPFABackingStore or NSOpenGLPFABackingStore, then you can't make any assumptions about the contents of the back buffer, which is why the docs say you must redraw from scratch for every frame.
I'm not sure what you're asking about WGL, GLX, and EGL.
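As the question itself notes, the only way to get a guarantee is to ask for it up front. A minimal sketch of requesting a preserved back buffer via kCGLPFABackingStore when creating a context (plain CGL, error checking omitted):

    #include <OpenGL/OpenGL.h>

    CGLContextObj create_preserving_context(void)
    {
        CGLPixelFormatAttribute attrs[] = {
            kCGLPFADoubleBuffer,
            kCGLPFABackingStore,   /* back buffer contents survive the flush */
            kCGLPFAColorSize, (CGLPixelFormatAttribute)24,
            kCGLPFADepthSize, (CGLPixelFormatAttribute)24,
            (CGLPixelFormatAttribute)0
        };

        CGLPixelFormatObj pix = NULL;
        GLint npix = 0;
        CGLChoosePixelFormat(attrs, &pix, &npix);

        CGLContextObj ctx = NULL;
        CGLCreateContext(pix, NULL, &ctx);
        CGLDestroyPixelFormat(pix);
        return ctx;
    }

Without that attribute, the back buffer's contents after a flush are simply undefined, so treat every frame as a full redraw.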

Switch OpenGL contexts or switch context render target instead, what is preferable?

On Mac OS X, you can render OpenGL to any NSView object of your choice, simply by creating an NSOpenGLContext and then calling -setView: on it. However, you can only associate one view with a single OpenGL context at any time. If I want to render OpenGL to two different views within a single window (or possibly within two different windows), I have two options:
1. Create one context and always change the view, by calling setView as appropriate each time I want to render to the other view. This will even work if the views are within different windows or on different screens.
2. Create two NSOpenGLContext objects and associate one view with each. These two contexts could be shared, which means most resources (like textures, buffers, etc.) will be available in both views without wasting twice the memory. In that case, though, I have to keep switching the current context each time I want to render to the other view, by calling -makeCurrentContext on the right context before making any OpenGL calls.
I have in fact used both options in the past, and each of them worked okay for my needs. However, I asked myself which way is better in terms of performance, compatibility, and so on. I read that context switching is horribly slow, or at least it used to be very slow in the past; that might have changed in the meantime. It may also depend on how much data is associated with a context (e.g. resources), since switching the active context might cause data to be transferred between system memory and GPU memory.
On the other hand, switching the view could be very slow as well, especially if it causes the underlying renderer to change, e.g. if your two views are part of two different windows located on two different screens that are driven by two different graphics adapters. Even if the renderer does not change, I have no idea whether the system performs a lot of expensive OpenGL setup/clean-up when switching a view, like creating/destroying render-/framebuffer objects, for example.
I investigated context switching between 3 windows on Lion, where I tried to resolve some performance issues with a somewhat misused VTK library, which itself is terribly slow already.
Whether you switch render contexts or windows doesn't really matter, because there is always the overhead of making both of them current to the calling thread as a triple. I measured roughly 50 ms per switch, which includes some OS/window-manager overhead as well. This overhead also depends greatly on the arrangement of the other GL calls, because the driver may be forced to wait for pending commands to finish, which you can force manually with a blocking call to glFinish().
The most efficient setup I got working is similar to your second option, but uses two dedicated render threads, each with its (shared) render context and window permanently bound. Those context switches/bindings are done just once at init.
The threads can be controlled using a threading primitive such as a common barrier, which lets both threads render single frames in sync (both get stalled at the barrier before they can be launched again). Data handling must also be interlocked, which can be done in one thread while the other render threads are stalled.
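A rough sketch of that two-thread layout, written against the C-level CGL API rather than NSOpenGLContext (the structure is the same). The barrier is only indicated by a comment, since macOS has no pthread_barrier_t and you would typically build one from a mutex/condition variable; the names are hypothetical.

    #include <OpenGL/OpenGL.h>
    #include <pthread.h>

    /* One render thread per view. Each thread binds its (shared) context once
       at init and then only renders; ctx is assumed to be already attached to
       its window's drawable. */
    typedef struct {
        CGLContextObj ctx;
    } RenderThreadArgs;

    static void *render_thread(void *p)
    {
        RenderThreadArgs *args = (RenderThreadArgs *)p;

        /* The one and only "context switch" this thread ever does. */
        CGLSetCurrentContext(args->ctx);

        for (;;) {
            /* wait_on_frame_barrier();  <- hypothetical shared barrier so both
               threads start a frame together */
            /* ... issue GL calls for this view ... */
            CGLFlushDrawable(args->ctx);
        }
        return NULL;
    }

Each thread would be started once with pthread_create, handing it a context created via CGLCreateContext with the other context as the share context.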

How to handle GDI resources

Does anyone know a good document/article about GDI resource handling?
I need to share some resources like icons and bitmaps among classes that can have different lifetimes, and I want to understand how I should approach this problem.
For mutexes and other kernel objects there is a DuplicateHandle function, but GDI is confusing me a little. Also, the way CBitmap returns an HBITMAP through its const operator HBITMAP, and the like, is a little bit scary.
I would like to avoid creating local bitmaps on every redraw, so some resource caching would be good, but I am also not sure I can start creating and loading C##### resources before the main message pump has started running.
It seems I'm using the wrong keywords, as I can't find any good but manageably short documentation.
There is no such documentation; it is all pretty straightforward. It is completely up to you to decide when to call DeleteObject(), and to decide how to balance your program's resource usage against dynamically creating and destroying objects when needed. Only largish bitmaps are really worth keeping around. Pens and brushes are very cheap; you create and destroy them on the fly. Fonts are a corner case, often cached simply for the life of the program since you need so few of them.
There are plenty of ways to manage caching; a shared_ptr<> in C++ provides the standard reference-counting pattern, for example. But it is very typical to just keep the reference as a member of your window wrapper class. It isn't very common for the same bitmap to be used in multiple windows. Ymmv.
Creating GDI objects doesn't require a message loop.
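To illustrate that split between cheap transient objects and a cached bitmap, here is a minimal Win32 C sketch; IDB_BACKGROUND and the handler names are hypothetical, and error checking is omitted:

    #include <windows.h>

    #define IDB_BACKGROUND 101   /* hypothetical resource ID */

    static HBITMAP g_background;  /* cached: created once, deleted once */

    void on_create(HINSTANCE inst)
    {
        g_background = LoadBitmap(inst, MAKEINTRESOURCE(IDB_BACKGROUND));
    }

    void on_paint(HDC hdc, const RECT *rc)
    {
        /* Cheap, transient objects: create, use, delete on the fly. */
        HBRUSH brush = CreateSolidBrush(RGB(200, 200, 200));
        FillRect(hdc, rc, brush);
        DeleteObject(brush);

        /* Draw the cached bitmap through a memory DC. */
        HDC mem = CreateCompatibleDC(hdc);
        HGDIOBJ old = SelectObject(mem, g_background);
        BitBlt(hdc, 0, 0, rc->right, rc->bottom, mem, 0, 0, SRCCOPY);
        SelectObject(mem, old);
        DeleteDC(mem);
    }

    void on_destroy(void)
    {
        DeleteObject(g_background);  /* the one place the cached bitmap is freed */
    }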
