I am trying to modify the Apple MultiGPUIOSurface sample (specifically the file http://developer.apple.com/library/mac/#samplecode/MultiGPUIOSurface/Listings/ServerOpenGLView_m.html) so that the server side renders to an IOSurface without the need for an NSOpenGLView.
My modified version of that is at: http://pastebin.com/z3r715jJ
The difference in my approach is that I render to the IOSurface from a timer rather than in drawRect:, and I don't use the NSOpenGLView's context.
The problem is that I see a corrupt view of the IOSurface in the client application. However, if I set the NSOpenGLView's context to the one I created, or use the context from the NSOpenGLView, it works. This leads me to think that the NSOpenGLView is doing something extra that I also need to do, but I'm not sure what.
Found a solution (though I don't understand why it works): create a pixel buffer.
I found some discussion about offscreen buffers and the need for a drawable (http://www.mentby.com/Group/mac-opengl/opengl-offscreen-rendering-without-a-window.html).
Anyway, my fix was adding these lines:
NSOpenGLPixelBuffer *pbuf = [[NSOpenGLPixelBuffer alloc] initWithTextureTarget:GL_TEXTURE_RECTANGLE_EXT
                                                          textureInternalFormat:GL_RGBA
                                                          textureMaxMipMapLevel:0
                                                                     pixelsWide:512
                                                                     pixelsHigh:512];
[_nsContext setPixelBuffer:pbuf cubeMapFace:0 mipMapLevel:0 currentVirtualScreen:[_nsContext currentVirtualScreen]];
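In case it helps anyone else, the surrounding setup looks roughly like this (just a sketch: the pixel format attributes and the _nsContext name are illustrative, and the IOSurface/FBO code is unchanged from the Apple sample):

NSOpenGLPixelFormatAttribute attrs[] = { NSOpenGLPFAAccelerated, 0 };
NSOpenGLPixelFormat *fmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
_nsContext = [[NSOpenGLContext alloc] initWithFormat:fmt shareContext:nil];

// ... create the NSOpenGLPixelBuffer and attach it with setPixelBuffer:... as shown above ...

[_nsContext makeCurrentContext];
// ... then create the IOSurface-backed texture/FBO and render from the timer as before ...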
What APIs do I need to use, and what precautions do I need to take, when writing to an IOSurface in an XPC process that is also being used as the backing store for an MTLTexture in the main application?
In my XPC service I have the following:
IOSurface *surface = ...;
CIRenderDestination *renderDestination = [[CIRenderDestination alloc] initWithIOSurface:surface];
// Send the IOSurface to the client using an NSXPCConnection.
// In the service, periodically write to the IOSurface.
In my application I have the following:
IOSurface *surface = // ... fetch the IOSurface from the NSXPCConnection.
id<MTLTexture> texture = [device newTextureWithDescriptor:... iosurface:surface plane:0];
// The texture is used in a fragment shader (read-only).
I have an MTKView that is running its normal update loop. I want my XPC service to periodically write to the IOSurface using Core Image and then have the new contents rendered by Metal on the app side.
What synchronization is needed to ensure this is done properly? Double or triple buffering is one strategy, but it doesn't really work for me because I might not have enough memory to allocate 2x or 3x the number of surfaces. (The example above uses one surface for clarity, but in reality I might have dozens of surfaces I'm drawing to. Each surface represents a tile of an image, and an image can be as large as JPG/TIFF/etc. allows.)
WWDC 2010-442 talks about IOSurface and briefly mentions that it all "just works", but that's in the context of OpenGL and doesn't mention Core Image or Metal.
I originally assumed that Core Image and/or Metal would be calling IOSurfaceLock() and IOSurfaceUnlock() to protect read/write access, but that doesn't appear to be the case at all. (And the comments in the header file for IOSurfaceRef.h suggest that the locking is only for CPU access.)
Can I really just let Core Image's CIRenderDestination write at-will to the IOSurface while I read from the corresponding MTLTexture in my application's update loop? If so, then how is that possible if, as the WWDC video states, all textures bound to an IOSurface share the same video memory? Surely I'd get some tearing of the surface's content if reading and writing occurred during the same pass.
The thing you need to do is ensure that the Core Image drawing has completed in the XPC service before the IOSurface is used to draw in the application. If you were using either OpenGL or Metal on both sides, you would call glFlush() or [-MTLCommandBuffer waitUntilScheduled], respectively. I would assume that something in Core Image is making one of those calls.
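If it turns out you need to enforce that yourself, Core Image's task API gives you an explicit wait. Something like this on the service side (a sketch; image, ciContext, clientProxy, and notifySurfaceReady are placeholder names, not part of your code):

NSError *error = nil;
CIRenderTask *task = [ciContext startTaskToRender:image
                                    toDestination:renderDestination
                                            error:&error];
// Block until the GPU work for this render has finished before the app is told
// it can sample the surface.
[task waitUntilCompletedAndReturnError:&error];
[clientProxy notifySurfaceReady];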
It will likely be obvious if that's not happening: you'll get tearing, or images that are half new rendering and half old rendering. I've seen that happen when using IOSurfaces across XPC services.
One thing you can do is put some symbolic breakpoints on -waitUntilScheduled and -waitUntilCompleted and see if CI is calling them in your XPC (assuming the documentation doesn't explicitly tell you). There are other synchronization primitives in Metal, but I'm not very familiar with them. They may be useful as well. (It's my understanding that CI is all Metal under the hood now.)
Also, the IOSurface object has -incrementUseCount and -decrementUseCount methods, plus a localUseCount property. It might be worth checking those to see if CI sets them appropriately. (See <IOSurface/IOSurfaceObjC.h> for details.)
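If you do end up managing that yourself, bracketing each write with the use count and checking it before reading might look roughly like this (a sketch; whether CI already does this for you is exactly what you'd be verifying):

// Service side, around each write to a surface:
[surface incrementUseCount];
// ... render into the surface ...
[surface decrementUseCount];

// App side, before encoding a draw that samples the texture backed by this surface:
if ([surface isInUse]) {
    // The service is still writing this tile; reuse last frame's contents instead.
}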
I've been trying to figure out the best way to show an existing CALayer in a secondary window to allow real-time full-screen output on a secondary monitor. Additionally I would like to have the ability to show real-time thumbnails in my application of the original CALayer, it seems like I should be able to find a setup that could fulfill both requirements.
So far my research resulted in the following options:
CALayer.render(in: CGContext): using the original layer and redrawing it into additional views this way, with a timer or a CVDisplayLink to redraw it every frame (see the sketch after this list).
Rendering the CALayer to an NSBitmapImageRep every frame and using that bitmap in NSImageViews across the application.
Using a CAMetalLayer and rendering its texture multiple times using an MTKView. I'm not really familiar with Metal; this seems like a fairly elegant solution, but I'm not sure whether I really need to go all the way down to Metal myself.
Using a CARemoteLayerServer with a CARemoteLayerClient.
This seems like an overly complicated setup for in-process sharing of a CALayer, and it feels like this approach is more suitable if I needed to share the layer across processes.
Using CAReplicatorLayer. Instead of using the replicator layer to create a grid of copies, I tried to use it to create just one copy, but it seems like you can't add a CALayer to multiple "parent layers".
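For reference, option 1 as I understand it would look roughly like this (a sketch only; LayerMirrorView and sourceLayer are names I made up):

@interface LayerMirrorView : NSView
@property (nonatomic, strong) CALayer *sourceLayer;
@end

@implementation LayerMirrorView
- (void)drawRect:(NSRect)dirtyRect
{
    CGContextRef ctx = [[NSGraphicsContext currentContext] CGContext];
    if (ctx == NULL || self.sourceLayer == nil) return;
    // Scale the source layer to fill this view (thumbnail or full-screen output).
    CGContextScaleCTM(ctx,
                      self.bounds.size.width  / self.sourceLayer.bounds.size.width,
                      self.bounds.size.height / self.sourceLayer.bounds.size.height);
    [self.sourceLayer renderInContext:ctx]; // CPU-side redraw of the layer tree
}
@end

// A timer or CVDisplayLink callback then just marks each mirror dirty every frame:
// [mirrorView setNeedsDisplay:YES];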
All in all I've found some workable solutions, but as I'm quite a novice working with Core Animation I'm not sure which direction is the least resource-heavy and I might still be missing an easier solution.
Has anyone tried something similar?
I have an existing component that draws Direct2D content to an ID2D1RenderTarget, and I would like to save that drawing to an image file. The questions here, here, and here, although they helped me, did not provide a clear answer as to how to do it.
My nullth idea was to try the official MSDN method. Unfortunately, it is not available in Win7.
My first idea was to modify the drawing routine to accept the render target as a parameter and use ID2D1Factory::CreateWicBitmapRenderTarget to draw directly into an IWICBitmap. That turns out to be quite difficult for me, because I would have to modify not only the drawing routine itself but also the drawing callbacks of all users of that component (the code, written in Delphi, uses Embarcadero's TDirect2DCanvas and therefore never had to manage Direct2D resources such as render targets or brushes).
My second idea was to create an ID2D1Bitmap, fill it with what is already drawn using ID2D1Bitmap::CopyFromRenderTarget, and then draw that ID2D1Bitmap to a WIC bitmap render target (this is about what was done here). I ran into the same kind of problems as those who asked the questions I link to: different resource affinities, as briefly explained by Kenny Kerr.
So is it possible under Win7 without having to implement my first idea, and how would you do it?
Direct2D 1.1 is supported on Windows 7 if you install the Platform Update. Unfortunately, that doesn't solve your problem without first creating two more of them: 1) it's still pre-release/beta, and 2) it adds another installation dependency for you to worry about.
I'm trying to insert an OpenGL ES 3D view into a Cocos2D app on the iPad. I'm relatively new to these frameworks, so I basically added these lines in my CCLayer:
CGRect rScreen;
// some code to define the bounds and origin of my frame
EAGL3DView *view3d = [[EAGL3DView alloc] initWithFrame:rScreen];
[[[CCDirector sharedDirector] openGLView] addSubview: view3d];
[view3d startAnimation];
The code I'm using for the 3D part is based on a sample code from Apple Developer : http://developer.apple.com/library/mac/#samplecode/GLEssentials/Introduction/Intro.html
The only changes I made were to create my view programmatically (no xib file, initWithCoder -> initWithFrame...), and I also renamed the EAGLView class & files to EAGL3DView so as not to interfere with the EAGLView that comes along with Cocos2D.
Now onto my problem: when I run this, I get an "OpenGL error 0x0502 in -[EAGLView swapBuffers]"; the 3D view is displayed properly, but the rest of the screen is completely pink.
I went into the swapBuffers method in Cocos2D's EAGLView, and it turns out the only block of code that matters is this one:
if(![context_ presentRenderbuffer:GL_RENDERBUFFER_OES])
    CCLOG(@"cocos2d: Failed to swap renderbuffer in %s\n", __FUNCTION__);
which, by the way, does not enter the if branch (presentRenderbuffer: does not return NO, but something is still wrong, since the CHECK_GL_ERROR() afterwards reports the 0x0502 error).
So my understanding is that my 3D view is somehow clobbering the OpenGL ES renderbuffer state (since Cocos2D also uses OpenGL ES), which keeps the Cocos2D view from working properly. That's as far as I've got, and I can't figure out precisely what needs to be done to fix it. So what do you think?
Hoping this is only a newbie problem…
Pixelvore
I think the correct approach for what you are trying to do is:
create your own custom CCSprite/CCNode class;
put all the GL code that you are using from the Apple sample into that class (i.e., override the draw or visit method of the class), as sketched below.
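A rough sketch of that custom-node idea (cocos2d 1.x style; My3DNode and drawScene3D are placeholder names for wherever the GLEssentials code ends up):

@interface My3DNode : CCNode
- (void)drawScene3D; // the rendering code ported from the Apple sample
@end

@implementation My3DNode
- (void)draw
{
    // cocos2d calls this once per frame with its own GL context already current,
    // so the sample's drawing runs inside cocos2d's render loop instead of in a
    // separate EAGLView.
    [self drawScene3D];
}

- (void)drawScene3D
{
    // ... GLEssentials drawing code goes here ...
}
@end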
If you want to try and make the two GL views work nicely together, you could try reading this post, which will explain how you associate different buffers to your views.
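If you keep the separate EAGL3DView instead, one common pattern (not necessarily what that post describes) is to save and restore the framebuffer/renderbuffer bindings around your own drawing, so that cocos2d's presentRenderbuffer: still targets its own color renderbuffer:

GLint oldFBO = 0, oldRBO = 0;
glGetIntegerv(GL_FRAMEBUFFER_BINDING_OES, &oldFBO);
glGetIntegerv(GL_RENDERBUFFER_BINDING_OES, &oldRBO);

// ... bind the 3D view's framebuffer, draw, and present its renderbuffer ...

glBindFramebufferOES(GL_FRAMEBUFFER_OES, oldFBO);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, oldRBO);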
As to the first approach, have a look at this post and this one.
To be fair, the first approach might be more complex (depending on how the Apple sample does its OpenGL), but it will use less memory and be more optimized than the second.
I've been having problems and, after spending a week trying out all kinds of solutions and tearing my hair out, I've come here to see whether anybody could help me.
I'm working on a 3D browser plugin for the Mac (I have one that works on Windows). The only fully hardware-accelerated way to do this is to use a CAOpenGLLayer (or something that inherits from it). If an NSWindow is created and you attach the layer to that window's NSView, then everything works correctly. But, for some reason, I can only get a specific number of frames (16) to render when passing the layer into the browser.
Cocoa calls my layer's drawInCGLContext for the first 16 frames. Then, for some unknown reason, it stops calling it. 16 seems like a very specific - and programmatic - number of frames and so I wondered whether anybody had any insight into why drawInCGLContext would not be called after 16 frames?
I'm reasonably sure it's not because I pass the layer into the browser - I've created a very minimal example plugin that renders a rotating quad using CAOpenGLLayer and that actually works. But the full plugin is a lot more complicated than that and I just don't know where to look anymore. I just don't know why drawInCGLContext stops being called. I've tried forcing it using CATransaction, it definitely gets sent the setNeedsDisplay message - but drawInCGLContext is never called. OpenGL doesn't report any errors either (I'm currently checking the results of all OpenGL calls). I'm confused! Help?
So, for anybody else who has this problem in the future: you're trying to draw using the OpenGL context outside of drawInCGLContext. There was a bug in my code where nearly all the drawing happened in the correct place (drawInCGLContext), but one code path led to rendering outside of it.
No errors are raised, nor does glGetError() report any problems; the layer just stops rendering. So if this happens to you, you're almost certainly making the same mistake I made!
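For reference, the pattern that works is to confine every GL call to the callback itself, roughly like this (a sketch; PluginGLLayer and the drawing calls are placeholders for the real plugin code):

@interface PluginGLLayer : CAOpenGLLayer
@end

@implementation PluginGLLayer
- (void)drawInCGLContext:(CGLContextObj)ctx
             pixelFormat:(CGLPixelFormatObj)pf
            forLayerTime:(CFTimeInterval)t
             displayTime:(const CVTimeStamp *)ts
{
    CGLSetCurrentContext(ctx);
    // All GL work happens here; no other code path in the plugin touches the context.
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // ... draw the scene ...

    // Let the superclass flush/present the drawing.
    [super drawInCGLContext:ctx pixelFormat:pf forLayerTime:t displayTime:ts];
}
@end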