I'm working on an iOS app where I have a black-box rendering library that has its own EAGLContext and framebuffer and does its own rendering. I also need to do additional rendering outside the black-box library.
Up until now I've been doing that by carefully reading, setting and restoring all the pertinent states each frame. This works, but is fiddly and hard to maintain. Then it occurred to me, "Why not have a separate EAGLContext instead?"
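For context, the save/restore dance each frame looks roughly like this (a heavily trimmed sketch; blackBoxRender() stands in for the library call, and the real code tracks many more states):

GLint savedFBO = 0, savedProgram = 0;
GLint savedViewport[4];
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &savedFBO);
glGetIntegerv(GL_CURRENT_PROGRAM, &savedProgram);
glGetIntegerv(GL_VIEWPORT, savedViewport);

blackBoxRender(); // the library binds its own framebuffer, program, etc.

// Restore everything my own rendering depends on.
glBindFramebuffer(GL_FRAMEBUFFER, savedFBO);
glUseProgram(savedProgram);
glViewport(savedViewport[0], savedViewport[1], savedViewport[2], savedViewport[3]);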
I've implemented a second context, so now I'm switching contexts instead of setting/restoring all the state each frame. The only problem is that I'm getting lots of visual artifacts, and performance went from a rock-solid 30 FPS to about 5 FPS...
So apparently I'm not meant to render to the same framebuffer from several contexts. Can anyone confirm this?
What APIs do I need to use, and what precautions do I need to take, when writing to an IOSurface in an XPC process that is also being used as the backing store for an MTLTexture in the main application?
In my XPC service I have the following:
IOSurface *surface = ...;
CIRenderDestination *renderDestination = [... initWithIOSurface:surface];
// Send the IOSurface to the client using an NSXPCConnection.
// In the service, periodically write to the IOSurface.
In my application I have the following:
IOSurface *surface = // ... fetch IOSurface from NSXPCConnection.
id<MTLTexture> texture = [device newTextureWithDescriptor:... iosurface:surface plane:0];
// The texture is used in a fragment shader (Read-only)
I have an MTKView that is running its normal update loop. I want my XPC service to be able to periodically write to the IOSurface using Core Image and then have the new contents rendered by Metal on the app side.
What synchronization is needed to ensure this is done properly? A double- or triple-buffering strategy is one option, but that doesn't really work for me because I might not have enough memory to allocate 2x or 3x the number of surfaces. (The example above uses one surface for clarity, but in reality I might have dozens of surfaces I'm drawing to. Each surface represents a tile of an image, and an image can be as large as JPG/TIFF/etc. allows.)
WWDC 2010-442 talks about IOSurface and briefly mentions that it all "just works", but that's in the context of OpenGL and doesn't mention Core Image or Metal.
I originally assumed that Core Image and/or Metal would be calling IOSurfaceLock() and IOSurfaceUnlock() to protect read/write access, but that doesn't appear to be the case at all. (And the comments in the header file for IOSurfaceRef.h suggest that the locking is only for CPU access.)
Can I really just let Core Image's CIRenderDestination write at will to the IOSurface while I read from the corresponding MTLTexture in my application's update loop? If so, how is that possible if, as the WWDC video states, all textures bound to an IOSurface share the same video memory? Surely I'd get some tearing of the surface's contents if reading and writing occurred during the same pass.
The thing you need to do is ensure that the Core Image drawing has completed in the XPC service before the IOSurface is used to draw in the application. If you were using OpenGL or Metal on both sides, you would call glFlush() or [-MTLCommandBuffer waitUntilScheduled], respectively. I would assume that something in Core Image is making one of those calls.
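If you want to make that completion explicit rather than relying on CI to do it for you, the task-based CIContext API gives you something concrete to wait on. A rough sketch of the service side, assuming a CIContext called ciContext, a CIImage called image, and a hypothetical XPC callback surfaceDidUpdate, none of which are in your snippet:

NSError *error = nil;
CIRenderTask *task = [ciContext startTaskToRender:image
                                    toDestination:renderDestination
                                            error:&error];
// Block until the GPU work targeting the IOSurface has actually finished,
// then tell the app it can sample the texture backed by this surface.
[task waitUntilCompletedAndReturnError:&error];
[appProxy surfaceDidUpdate]; // hypothetical notification over the NSXPCConnection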
I can say that it will likely be obvious if that's not happening, because you will get tearing, or images that are half new rendering and half old rendering. I've seen that happen when using IOSurfaces across XPC boundaries.
One thing you can do is put symbolic breakpoints on -waitUntilScheduled and -waitUntilCompleted and see if CI is calling them in your XPC service (assuming the documentation doesn't explicitly tell you). There are other synchronization primitives in Metal that I'm less familiar with; they may be useful as well. (It's my understanding that CI is all Metal under the hood now.)
Also, the IOSurface object has -incrementUseCount and -decrementUseCount methods and a localUseCount property. It might be worth checking those to see if CI sets them appropriately. (See <IOSurface/IOSurfaceObjC.h> for details.)
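If it turns out CI doesn't manage the use count for you, here is a sketch of doing it by hand (the draw helper is made up, and whether the app-side check is even necessary depends on what CI/Metal already do under the hood):

// In the XPC service: hold a use count for the duration of the write.
[surface incrementUseCount];
// ... render into the surface with Core Image ...
[surface decrementUseCount];

// In the app, before encoding a draw that samples the texture:
// IOSurfaceIsInUse() reflects use counts held by any process.
if (!IOSurfaceIsInUse((__bridge IOSurfaceRef)surface)) {
    [self drawTileWithTexture:texture]; // hypothetical helper
}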
I'm trying to do pretty much what TangoARScreen does, but with multithreaded rendering enabled in Unity. I did some experiments and I'm stuck.
I tried several things, such as letting Tango render into the OES texture that would then be blitted into a regular Texture2D in Unity, but OpenGL keeps complaining about an invalid display when I try to use it. Perhaps OnTangoCameraTextureAvailable isn't even called on the correct GL context? Hard to say when you have no idea how Tango Core works internally.
Is registering a YUV texture via TangoService_Experimental_connectTextureIdUnity the way to go? I'd have to deal with YUV-to-RGB conversion, I assume. Or should I use OnTangoImageMultithreadedAvailable and deal with the buffer myself, rendering it with a custom shader, for instance? The documentation is pretty sparse in these areas, and every experiment costs several wasted days at least. Did anyone get this working? Could you point me in the right direction? All I need is the live camera image rendered into Unity's camera background.
From the April 2017 (Gankino) release notes: "The C API now supports getting the latest camera image's timestamp outside a GL thread .... Unity multithreaded rendering support will get added in a future release." So I guess we need to wait a little bit.
Multithreaded rendering can still be used in applications without a camera feed (motion tracking only) by choosing "YUV Texture and Raw Bytes" as the overlay method in the Tango Application script.
My game screen uses both Scene2d and normal libGDX sprites. I use Scene2d for the pause menus, which contain some tables and TextButtons. Everything is fine on the PC, and also on two mobile phones I'm testing the game on, but I have a problem on a third phone. It seems that after a restart or two of the game level, all the Scene2d elements that are supposed to appear on the screen have turned black. They are still responsive, meaning the buttons do what they are supposed to do, and they move, rotate, and execute properly, but they are all black. What could be the issue here? I don't have this problem on the PC or on the other phones.
What you describe is a symptom of using a texture across a reset of the OpenGL context. Your app holds handles to OpenGL state inside its libGDX Texture objects, and when the OpenGL device is given over to another app, those handles become stale.
libGDX generally does a good job of restoring state across simple resets, but there are several ways to cause problems. The most common is storing libGDX OpenGL state (e.g., a Texture) in a static property. The JVM will get reused across application instances, so libGDX cannot tell that this static object has become stale. See http://bitiotic.com/blog/2013/05/23/libgdx-and-android-application-lifecycle/ for details on how to trigger the different lifecycles.
See "In game Images disappear on Android device if i run from widget, but not when I install apk first time" and "Android static object lifecycle".
I know that there is already an accepted answer for this post, but maybe this will help you too:
"Texture is not displayed in the application"
The main idea is to dispose of your assets and load them again when the application becomes visible.
I am trying to create an application with an OpenGL view using MonoMac. Setting up an application and an NSOpenGLView was fairly simple...
...but for some reason I cannot get a consistent frame rate when rendering OpenGL. The issue I am having is that nine out of ten frames have perfect performance, and every tenth frame or so I get a massive frame-time spike (about 60-80 ms for a single frame). The time of the slow frame correlates with the size of the control (and even more so when using a Retina backing buffer).
I have been digging and have come up with nothing that works for my case.
I tried NSOpenGLView, both with a CVDisplayLink and with rendering on the main thread using timers and DrawRect.
I also tried MonoMacGameView, in both versions. MonoMacGameView actually has consistent performance, but it only draws when my window does not have a background color.
I reimplemented the run loop to do my own NextEvent polling, just to find out that that is not the issue...
So my current hunch is that it has something to do with layer-backed rendering in Cocoa views, but I really cannot figure out what is causing this.
Any hint as to what is causing this delay?
I found a solution which produces pretty good results:
Do not use NSOpenGLView or MonoMacGameView.
Use the approach described in the example on this page: http://cocoa-mono.org/archives/366/monomac-and-opengl/
To enable Retina support, export the ContentScale property on the deriving class and set it depending on whether you are running on a Retina screen or not.
In conclusion, using a Core Animation layer is the only viable solution.
Nearly every example I see of OpenGL ES involves updating every frame, even if the image itself is not changing in any way.
I did some tests, and it works fine to render (using glDrawArrays etc.) and then present the render buffer, doing both just once and then not doing either again until something changes onscreen.
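Roughly the pattern I tested (a sketch; the property names are my own, and the context/framebuffer setup happens elsewhere):

// Called only when something onscreen actually changes; there is no
// per-frame loop driving this.
- (void)renderOnce {
    [EAGLContext setCurrentContext:self.context];
    glBindFramebuffer(GL_FRAMEBUFFER, self.framebuffer);

    glClear(GL_COLOR_BUFFER_BIT);
    glDrawArrays(GL_TRIANGLES, 0, self.vertexCount);

    glBindRenderbuffer(GL_RENDERBUFFER, self.colorRenderbuffer);
    [self.context presentRenderbuffer:GL_RENDERBUFFER];
}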
Is this "normal" ? I just don't see this really done much. Once drawn, the graphics stay on the screen without additional constant rendering.
Is this acceptable?
Yes, it is acceptable and completely valid. You do need to take into account that you must render again when the context is lost. For example, Android's standard OpenGL helper classes have an option to draw only when needed rather than in a loop (RENDERMODE_WHEN_DIRTY).