egl: what is it that is shared when using a shared context - opengl-es

In EGL there is the notion of a shared context.
When a context (created by a first thread) is shared with a second thread, what resources are made available to the second thread?
Is it textures, buffers, framebuffers, renderbuffers, or other objects?
A second question:
Suppose the first thread creates a texture (with handle 2, say) and its context is shared with a second thread.
When I call glGenTextures in the second thread, will it give me a texture whose handle is also 2? (In that case it would conflict with the texture in the shared context.)

In general, bulk data resources are shared:
Textures
Buffers
Renderbuffers
Shaders / shader programs
Pure state resources are context local:
Context settings are context local
Vertex array objects (this one normally catches people by surprise)
Framebuffer objects (this one normally catches people by surprise; see the sketch after this list)
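A concrete consequence of that split, as a sketch: assuming two contexts in one share group, a texture created in either context can be rendered to from both, but each context has to generate its own framebuffer object to wrap it. The function and parameter names below are illustrative only.

#include <GLES2/gl2.h>

// Run this in *each* context of the share group that wants to render to the
// texture: the texture name is shared, the framebuffer object is not.
GLuint CreateFboForSharedTexture(GLuint sharedTex) {
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);                    // context-local object
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, sharedTex, 0);   // shared data resource
    return fbo;
}
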
Some points to note:
The application must manually ensure synchronization across the threads. If one thread is modifying some texture data and another thread uses it in a draw then you might get a racy read of the texture data while it is still being updated.
State associated with a data resource object (e.g. texture sampler state) is only weakly shared across contexts. State settings modified by Context B are only (re)fetched when the resource is bound in Context A; until that rebind happens, Context A continues to see the old values it fetched when the resource was last bound there. A minimal sketch of a safe cross-context texture update follows these notes.
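The sketch below illustrates both notes, under the assumption that the two threads already have contexts in one share group and that gTex was created by the producer. OpenGL ES 2.0 has no fence sync objects, so a conservative glFinish() plus an application-level signal is used here, and the consumer rebinds the texture so it also picks up any state edits. All of the names are made up for illustration.

#include <GLES2/gl2.h>
#include <condition_variable>
#include <mutex>

std::mutex              gMutex;
std::condition_variable gCond;
bool                    gTextureReady = false;
GLuint                  gTex = 0;   // valid in every context of the share group

// Producer thread: its context (same share group) is current here.
void UploadTexture(const void* pixels, int w, int h) {
    glBindTexture(GL_TEXTURE_2D, gTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glFinish();                                  // make sure the upload has completed
    {
        std::lock_guard<std::mutex> lock(gMutex);
        gTextureReady = true;                    // application-level synchronization
    }
    gCond.notify_one();
}

// Consumer (render) thread: the other context of the share group is current here.
void DrawWhenReady() {
    std::unique_lock<std::mutex> lock(gMutex);
    gCond.wait(lock, [] { return gTextureReady; });
    glBindTexture(GL_TEXTURE_2D, gTex);          // rebind to pick up new contents/state
    // ... issue draw calls that sample gTex ...
}
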
If a first thread created a texture (with handle 2) and its context is
shared with a second thread, when I call glGenTextures in the second
thread, will it give a texture whose handle is also 2? (In this case it
would conflict with the texture in the shared context.)
No, IDs must obviously be unique within a share group or it wouldn't work.
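For reference, this is roughly how a share group is set up with EGL; the display and config are assumed to have been chosen already, and the second eglCreateContext call passes the first context as its share_context argument. Because texture, buffer, renderbuffer and shader names all come from the share group's common namespace, glGenTextures in one context never returns a name that is already in use in the other.

#include <EGL/egl.h>

// dpy and cfg are assumed to be initialised elsewhere (illustration only).
void CreateShareGroup(EGLDisplay dpy, EGLConfig cfg,
                      EGLContext& firstCtx, EGLContext& secondCtx) {
    const EGLint attribs[] = { EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE };

    // Context created by the first thread.
    firstCtx = eglCreateContext(dpy, cfg, EGL_NO_CONTEXT, attribs);

    // Context for the second thread; passing firstCtx puts it in the same share group.
    secondCtx = eglCreateContext(dpy, cfg, firstCtx, attribs);
}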

Related

eglDestroyContext and unfinished drawcalls

Following scenario:
I have an OpenGL ES app that renders frames via draw calls; the end of a frame is marked by eglSwapBuffers. Now imagine that after the last eglSwapBuffers I immediately call eglMakeCurrent to unbind the context from the surface, then immediately call eglDestroyContext. Assuming the context was the last one to hold references to any resources the draw calls use (shaders, buffers, textures, etc.), what happens to the draw calls that the GPU has not yet finished and that use some of these resources?
Regards.
then immediately call eglDestroyContext().
All this really says is that the application is done with the context, and promises not to use it again. The actual context includes a reference count held by the graphics driver, and that won't drop to zero until all pending rendering has actually completed.
TLDR - despite what the APIs say, nothing actually happens "immediately" when you make an API call - it's an elaborate illusion that is mostly a complete lie.
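The sequence from the question looks roughly like the sketch below (the display, surface and context handles are assumed to exist). Even though eglDestroyContext() returns right away, the driver keeps its own reference to the context and its resources until the queued work for that last frame has finished.

#include <EGL/egl.h>

// Hypothetical handles, assumed to have been created during start-up.
extern EGLDisplay dpy;
extern EGLSurface surface;
extern EGLContext ctx;

void ShutdownRendering() {
    // ... last frame's draw calls ...
    eglSwapBuffers(dpy, surface);

    // Unbind the context from this thread and from the surface.
    eglMakeCurrent(dpy, EGL_NO_SURFACE, EGL_NO_SURFACE, EGL_NO_CONTEXT);

    // Flag the context for destruction. The call returns immediately, but the
    // in-flight draw calls still see valid shaders, buffers and textures; the
    // driver only releases them once that pending GPU work has completed.
    eglDestroyContext(dpy, ctx);
}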

OpenGL Driver Monitor says textures are rapidly increasing. How to find the leak?

When I run my app, OpenGL Driver Monitor says the Textures count is rapidly increasing — within 30 seconds the Textures count increases by about 45,000.
But I haven't been able to find the leak. I've instrumented every glGen*() call to print out every GL object name it returns — but they're all less than 50, so apparently GL objects created by glGen*() aren't being leaked.
It's a large, complex app that renders multiple shaders to multiple FBOs on shared contexts on separate threads, so reducing this to a simple test case isn't practical.
What else, besides glGen*() calls, should I check in order to identify what is leaking?
Funny thing, those glGen* (...) functions. All they do is return the first unused name for a particular type of object and reserve the name so that a subsequent call to glGen* (...) does not also give out the name.
Texture objects (and all objects, really) are actually created in OpenGL the first time you bind a name. That is to say, glBindTexture (GL_TEXTURE_2D, 1) is the actual function that creates a texture with the name 1. The interesting thing here is that in many implementations (OpenGL 2.1 or older) you are free to use any random number you want for the name even if it was not acquired through a call to glGenTextures (...) and glBindTexture (...) will still create a texture for that name (provided one does not already exist).
The bottom line is that glGenTextures (...) is not what creates a texture, it only gives you the first unused texture name it finds. I would focus on tracking down all calls to glBindTexture (...) instead, it is likely you are passing uninitialized data as the name.
UPDATE:
As datenwolf points out, if you are using a 3.2+ core context then this behavior does not apply (names must be generated with a matching glGen* (...) call starting with OpenGL 3.0). However, OS X gives you a 2.1 implementation by default.
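One way to act on that advice is to funnel texture binds through a small debug wrapper that flags any name that was never handed out by glGenTextures. The wrapper names below are made up, and you would have to call them (or hook the real entry points) consistently throughout the app; the headers match the OpenGL ES usage elsewhere in these questions, but the same idea applies to desktop GL.

#include <GLES2/gl2.h>
#include <cstdio>
#include <unordered_set>

// Names legitimately obtained from glGenTextures (not thread-safe; illustration only).
static std::unordered_set<GLuint> gKnownTextureNames;

void DebugGenTextures(GLsizei n, GLuint* names) {
    glGenTextures(n, names);
    for (GLsizei i = 0; i < n; ++i)
        gKnownTextureNames.insert(names[i]);
}

void DebugBindTexture(GLenum target, GLuint name) {
    // Binding an unknown, non-zero name is exactly the pattern that silently
    // creates a brand new texture object on pre-3.0-core implementations.
    if (name != 0 && gKnownTextureNames.count(name) == 0)
        std::fprintf(stderr, "glBindTexture called with un-generated name %u\n", name);
    glBindTexture(target, name);
}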

Understanding OpenGL state associations

I've been using OpenGL for quite a while now, but I always get confused by its state management system. In particular, the issue I struggle with is understanding exactly which object or target a particular piece of state is stored against.
Eg 1: assigning a texture parameter. Are those parameters stored with the texture itself, or with the texture unit? Will binding the texture on a different texture unit carry those parameter settings with it?
Eg 2: glVertexAttribPointer - what exactly is that associated with - is it the active shader program, the bound data buffer, or the ES context itself? If I bind a different vertex buffer object, do I need to call glVertexAttribPointer again?
So I'm not asking for answers to the above questions - I'm asking whether those answers are written down somewhere, so I don't need to do the whole trial-and-error thing every time I use something new.
Those answers are written in the OpenGL ES 2.0 specification (PDF link). Every function states what state it affects, and there's a big series of tables at the end that specify which state is part of which objects, or just part of the global context.
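Applied to the two examples above, the ES 2.0 rules come out as in the sketch below: texture parameters live in the texture object itself (so they follow the texture to whatever unit it is later bound on), while glVertexAttribPointer records the buffer bound to GL_ARRAY_BUFFER at call time into context (or, with the relevant extension, VAO) state rather than into the program or the buffer.

#include <GLES2/gl2.h>

void StateAssociationExamples(GLuint tex, GLuint vboA, GLuint vboB) {
    // Eg 1: sampler parameters are stored in the texture object.
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

    glActiveTexture(GL_TEXTURE3);
    glBindTexture(GL_TEXTURE_2D, tex);   // same object, so GL_LINEAR still applies here

    // Eg 2: glVertexAttribPointer snapshots whatever is bound to GL_ARRAY_BUFFER
    // at the moment of the call; the association is held in context/VAO state.
    glBindBuffer(GL_ARRAY_BUFFER, vboA);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr); // attrib 0 -> vboA

    glBindBuffer(GL_ARRAY_BUFFER, vboB); // attrib 0 still sources from vboA...
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, nullptr); // ...until re-specified
}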

OpenGL ES 2.0 multithreading

I have been trying to use OpenGL ES 2.0 to make a photo viewing application. To optimize the code I'm swapping the textures attached to the objects as the user scrolls down. But loading an image into a texture takes some time, so scrolling stutters. To solve this problem I tried multithreading in the following ways:
Create a separate context for the new thread and then share the resources (texture object) with the other context
Use multiple threads and a single context. Make the context current while executing gl commands in the threads.
But I wasn't successful with either of them. So if anyone has tried similar things with OpenGL before, could you please tell me which of the above would work and what I need to pay attention to while doing it? Also, would FBOs and pbuffers be of any use in this case?
Thanks for any help.
Himanshu
I would suggest keeping the OpenGL stuff in your primary thread, and delegating just the data loading to the worker thread.
Have, say, a circular buffer of image data objects that your worker thread can keep itself busy filling, but let your primary thread generate the textures from the image data as it needs them.
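A sketch of that suggestion, assuming the worker thread only decodes pixels and the primary (GL) thread drains the queue between frames and turns each entry into a texture. The struct, the function names and the RGBA format are illustrative; no GL calls ever happen off the GL thread here.

#include <GLES2/gl2.h>
#include <deque>
#include <mutex>
#include <vector>

// Decoded image produced by the worker thread (no GL involvement).
struct DecodedImage {
    int width = 0, height = 0;
    std::vector<unsigned char> rgba;   // width * height * 4 bytes
};

static std::mutex               gQueueMutex;
static std::deque<DecodedImage> gQueue;

// Worker thread: decode files or network data, then hand the pixels over.
void ProduceImage(DecodedImage img) {
    std::lock_guard<std::mutex> lock(gQueueMutex);
    gQueue.push_back(std::move(img));
}

// Primary (GL) thread: called once per frame to turn queued pixels into textures.
void ConsumePendingImages(std::vector<GLuint>& outTextures) {
    std::deque<DecodedImage> local;
    {
        std::lock_guard<std::mutex> lock(gQueueMutex);
        local.swap(gQueue);
    }
    for (const DecodedImage& img : local) {
        GLuint tex = 0;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_2D, tex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, img.width, img.height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, img.rgba.data());
        outTextures.push_back(tex);
    }
}
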
I don't think approach 1 is valid - you're not supposed to share resources across contexts.
I was very successful with something like your approach 2. My app, iSynth, does a lot of texture creation on the fly at the same time the app is handling network requests and trying to render a decent number of frames per second. I use two threads for OpenGL and they share a context.
One of these is a dedicated render thread. At the high level, it's a while loop that repeatedly calls my rendering code using whatever textures are available at the time.
The other does nothing but create texture resources for the render thread to use. Images are continuously downloaded on the main thread and queued up - this thread's sole mission in life is to eat through that queue and output GL textures.
This approach works well because one thread is strictly a producer and the other a consumer. I could foresee lots of ugly problems if you start trying to create textures on the render thread, or modify GL state on the texture thread. If you're having problems, I'd make sure you're properly setting up the shared context and making that context current on each thread. Also, if a thread utilizing that context goes away, I've found that it's necessary to call setCurrentContext: nil from that thread first.

Is it safe to manipulate objects that I created outside my thread if I don't explicitly access them on the thread which created them?

I am working on a cocoa software and in order to keep the GUI responsive during a massive data import (Core Data) I need to run the import outside the main thread.
Is it safe to access those objects from the import thread without using locks, even though I created them on the main thread, as long as I don't access them from the main thread while the import thread is running?
With Core Data, you should have a separate managed object context to use for your import thread, connected to the same coordinator and persistent store. You cannot simply throw objects created in a context used by the main thread into another thread and expect them to work. Furthermore, you cannot do your own locking for this; you must at minimum lock the managed object context the objects are in, as appropriate. But if those objects are bound to by your views and controls, there are no "hooks" to which you can attach that locking of the context.
There's no free lunch.
Ben Trumbull explains some of the reasons why you need to use a separate context, and why "just reading" isn't as simple or as safe as you might think, in this great post from late 2004 on the webobjects-dev list. (The whole thread is great.) He's discussing the Enterprise Objects Framework and WebObjects, but his advice is fully applicable to Core Data as well. Just replace "EC" with "NSManagedObjectContext" and "EOF" with "Core Data" in the meat of his message.
The solution to the problem of sharing data between threads in Core Data, like the Enterprise Objects Framework before it, is "don't." If you've thought about it further and you really, honestly do have to share data between threads, then the solution is to keep independent object graphs in thread-isolated contexts, and use the information in the save notification from one context to tell the other context what to re-fetch. -[NSManagedObjectContext refreshObject:mergeChanges:] is specifically designed to support this use.
I believe that this is not safe to do with NSManagedObjects (or subclasses) that are managed by a CoreData NSManagedObjectContext. In general, CoreData may do many tricky things with the state of managed objects, including firing faults related to those objects in separate threads. In particular, [NSManagedObject initWithEntity:insertIntoManagedObjectContext:] (the designated initializer for NSManagedObjects as of OS X 10.5) does not guarantee that the returned object is safe to pass to another thread.
Using CoreData with multiple threads is well documented on Apple's dev site.
The whole point of using locks is to ensure that two threads don't try to access the same resource. If you can guarantee that through some other mechanism, go for it.
Even if it's safe, it's not best practice to share data between threads without synchronizing access to it. It doesn't matter which thread created the object: if more than one line of execution (thread/process) accesses the object at the same time, it can lead to data inconsistency.
If you're absolutely sure that only one thread will ever access this object, then it'd be safe not to synchronize the access. Even then, I'd rather put synchronization in my code now than wait until later, when a change in the application puts a second thread on the same data without any concern for synchronizing access.
Yes, it's safe. A pretty common pattern is to create an object, then add it to a queue or some other collection. A second "consumer" thread takes items from the queue and does something with them. Here, you'd need to synchronize the queue but not the objects that are added to the queue.
It's NOT a good idea to just synchronize everything and hope for the best. You will need to think very carefully about your design and exactly which threads can act upon your objects.
Two things to consider are:
You must be able to guarantee that the object is fully created and initialised before it is made available to other threads.
There must be some mechanism by which the main (GUI) thread detects that the data has been loaded and all is well. To be thread safe, this will inevitably involve locking of some kind (see the sketch after this list).
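Stripped of anything Core Data specific, those two points boil down to a "build privately, publish under a lock" hand-off. A minimal C++ sketch of that idea follows; the type name and the polling helper are made up for illustration.

#include <memory>
#include <mutex>
#include <vector>

// Hypothetical record type standing in for the imported data.
struct ImportedData {
    std::vector<int> rows;
};

static std::mutex                    gResultMutex;
static std::shared_ptr<ImportedData> gResult;   // stays null until the import has finished

// Runs on the worker thread.
void ImportThread() {
    // 1. Build the object completely before any other thread can see it.
    auto data = std::make_shared<ImportedData>();
    data->rows = {1, 2, 3};   // stands in for the long-running load

    // 2. Publish it under a lock; this is the only shared state touched.
    std::lock_guard<std::mutex> lock(gResultMutex);
    gResult = std::move(data);
}

// Called from the GUI thread (e.g. from a timer or a "done" notification).
std::shared_ptr<ImportedData> TryGetImportedData() {
    std::lock_guard<std::mutex> lock(gResultMutex);
    return gResult;   // null means "not ready yet"
}
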
Yes, you can do it, and it will be safe
...
until the second programmer comes around and does not understand the same assumptions you have made. That second (or 3rd, 4th, 5th, ...) programmer is likely to start using the object in an unsafe way (in the creator thread). The problems caused could be very subtle and difficult to track down. For that reason alone, and because it's so tempting to use this object in multiple threads, I would make the object thread safe.
To clarify, (thanks to those who left comments):
By "thread safe" I mean programatically devising a scheme to avoid threading issues. I don't necessarily mean devise a locking scheme around your object. You could find a way in your language to make it illegal (or very hard) to use the object in the creator thread. For example, limiting the scope, in the creator thread, to the block of code that creates the object. Once created, pass the object over to the user thread, making sure that the creator thread no longer has a reference to it.
For example, in C++
void CreateObject()
{
    // The object is local to this block, so the creator thread keeps no
    // reference to it once it has been handed off.
    Object* sharedObj = new Object();
    PassObjectToUsingThread(sharedObj); // this function would be system dependent
}
Then in your creating thread you no longer have access to the object after its creation; responsibility has passed to the using thread.
