Sharing an OpenGL framebuffer between processes in Mac OS X - macos

Is there any way in Mac OS X to share an OpenGL framebuffer between processes? That is, I want to render to an off-screen target in one process and display it in another.
You can do this with DirectX (actually DXGI) on Windows by creating a surface (the DXGI equivalent of an OpenGL framebuffer) in shared mode, obtaining an opaque handle for that surface, passing the handle to another process by whatever means you like, and then creating a surface in the other process from the existing handle. You then use the surface as a render target in one process and consume it as a texture in the other however you wish. In fact, the whole compositing window system works this way from Vista onwards.
If this isn't possible I can of course get the contents of the framebuffer into system memory and use cross-process shared memory to get it to the target process, then upload it again from there, but that would be unnecessarily slow.
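The slow fallback path described above can be modeled in a minimal, platform-neutral sketch. This is Python stdlib `multiprocessing.shared_memory` standing in for whatever cross-process shared memory mechanism you'd actually use; the GPU readback and re-upload (`glReadPixels` / `glTexSubImage2D`) are outside the sketch, and the buffer sizes are illustrative:

```python
from multiprocessing import shared_memory

WIDTH, HEIGHT, BPP = 640, 480, 4  # RGBA8 frame
size = WIDTH * HEIGHT * BPP

# --- producer side: pretend glReadPixels filled this buffer ---
shm = shared_memory.SharedMemory(create=True, size=size)
shm.buf[0:4] = bytes([255, 0, 0, 255])  # first pixel, opaque red

# --- consumer side: only the name crosses the process boundary ---
shm2 = shared_memory.SharedMemory(name=shm.name)
pixel = bytes(shm2.buf[0:4])  # now ready to re-upload to a texture

shm2.close()
shm.close()
shm.unlink()
```

The point of the sketch is the cost structure: every frame crosses GPU → system memory → GPU, which is exactly the round trip a shared surface handle avoids.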

Depending on what you're really trying to do this sample code project may be what you want:
MultiGPUIOSurface sample code

It really depends upon how you're using it.
Objects that may be shared between contexts include buffer objects,
program and shader objects, renderbuffer objects, sampler objects,
sync objects, and texture objects (except for the texture objects
named zero).
Some of these objects may contain views (alternate interpretations) of
part or all of the data store of another object. Examples are texture
buffer objects, which contain a view of a buffer object’s data store,
and texture views, which contain a view of another texture object’s
data store. Views act as references on the object whose data store is
viewed.
Objects which contain references to other objects include framebuffer,
program pipeline, query, transform feedback, and vertex array objects.
Such objects are called container objects and are not shared.
Chapter 5 / OpenGL-4.4 core specification
The reason you can do those things on Windows and not OS X is that Windows provides an API that allows DirectX surfaces to be shared between processes. If OS X doesn't expose that capability through its OpenGL API, you're going to have to come up with your own solution. Take a look at the OpenGL Programming Guide for Mac; there's a small section that describes using multiple OpenGL contexts.

Related

How do you synchronize an MTLTexture and IOSurface across processes?

What APIs do I need to use, and what precautions do I need to take, when writing to an IOSurface in an XPC process that is also being used as the backing store for an MTLTexture in the main application?
In my XPC service I have the following:
IOSurface *surface = ...;
CIRenderDestination *renderDestination = [... initWithIOSurface:surface];
// Send the IOSurface to the client using an NSXPCConnection.
// In the service, periodically write to the IOSurface.
In my application I have the following:
IOSurface *surface = // ... fetch IOSurface from NSXPCConnection.
id<MTLTexture> texture = [device newTextureWithDescriptor:... iosurface:surface];
// The texture is used in a fragment shader (Read-only)
I have an MTKView that is running its normal update loop. I want my XPC service to be able to periodically write to the IOSurface using Core Image and then have the new contents rendered by Metal on the app side.
What synchronization is needed to ensure this is done properly? A double or triple buffering strategy is one, but that doesn't really work for me because I might not have enough memory to allocate 2x or 3x the number of surfaces. (The example above uses one surface for clarity, but in reality I might have dozens of surfaces I'm drawing to. Each surface represents a tile of an image. An image can be as large as JPG/TIFF/etc allows.)
WWDC 2010-442 talks about IOSurface and briefly mentions that it all "just works", but that's in the context of OpenGL and doesn't mention Core Image or Metal.
I originally assumed that Core Image and/or Metal would be calling IOSurfaceLock() and IOSurfaceUnlock() to protect read/write access, but that doesn't appear to be the case at all. (And the comments in the header file for IOSurfaceRef.h suggest that the locking is only for CPU access.)
Can I really just let Core Image's CIRenderDestination write at-will to the IOSurface while I read from the corresponding MTLTexture in my application's update loop? If so, then how is that possible if, as the WWDC video states, all textures bound to an IOSurface share the same video memory? Surely I'd get some tearing of the surface's content if reading and writing occurred during the same pass.
The thing you need to do is ensure that the Core Image drawing has completed in the XPC service before the IOSurface is used to draw in the application. If you were using either OpenGL or Metal on both sides, you would call either glFlush() or -[MTLCommandBuffer waitUntilScheduled]. I would assume that something in Core Image is making one of those calls.
It will likely be obvious if that's not happening: you will get tearing, or images that are half new rendering and half old rendering. I've seen that happen when using IOSurfaces across XPC services.
One thing you can do is put some symbolic breakpoints on -waitUntilScheduled and -waitUntilCompleted and see if CI is calling them in your XPC (assuming the documentation doesn't explicitly tell you). There are other synchronization primitives in Metal, but I'm not very familiar with them. They may be useful as well. (It's my understanding that CI is all Metal under the hood now.)
Also, the IOSurface object has methods -incrementUseCount, -decrementUseCount, and -localUseCount. It might be worth checking those to see if CI sets them appropriately. (See <IOSurface/IOSurfaceObjC.h> for details.)
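The handoff the answer describes can be modeled generically: the writer must finish its frame before the reader samples the shared surface, otherwise the reader sees a half-written frame. A minimal sketch (Python threads standing in for the XPC service and the app, an `Event` standing in for the "command buffer completed" fence; all names here are illustrative):

```python
import threading

surface = bytearray(4)             # stand-in for the shared IOSurface
frame_done = threading.Event()     # stand-in for "rendering completed" fence

def xpc_service_draw(value: int) -> None:
    frame_done.clear()             # writing begins: surface is in flux
    surface[:] = value.to_bytes(4, "little")   # "Core Image rendering"
    frame_done.set()               # comparable to waitUntilCompleted returning

def app_read() -> int:
    frame_done.wait()              # don't sample mid-write, or you see tearing
    return int.from_bytes(surface, "little")

xpc_service_draw(7)
assert app_read() == 7
```

Without the wait before sampling, the reader can observe a mix of old and new bytes, which is exactly the tearing described above. A real implementation would carry the fence across the XPC boundary (e.g. a reply message sent after completion) rather than an in-process event.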

What is the difference between framebuffer and image in Vulkan?

I know that the framebuffer is the final destination of the rendering pipeline, and that a swapchain contains many images. So what is the relation between those two things? Which one is the actual render target? And does the framebuffer attach the final picture of the current frame to the image view? If so, how is it transferred?
A diagram or sketch would be appreciated.
VkFramebuffer + VkRenderPass defines the render target.
Render pass defines which attachments will be written with colors.
VkFramebuffer defines which VkImageView is used as which attachment.
VkImageView defines which part of VkImage to use.
VkImage defines which VkDeviceMemory is used and a format of the texel.
Or maybe in opposite sequence:
VkDeviceMemory is just a sequence of N bytes in memory.
VkImage object adds to it e.g. information about the format (so you can address by texels, not bytes).
VkImageView object helps select only part (an array layer or mip level) of the VkImage (much as a string view or array view does). It can also reinterpret the data through a compatible format (format casting).
VkFramebuffer binds a VkImageView with an attachment.
VkRenderPass defines which attachment will be drawn into.
So it's not like you do not use an image. You do, through the Vulkan Framebuffer.
Swapchain image is no different from any other image. Except that the driver is the owner of the image. You can't destroy it directly or allocate it yourself. You just borrow it from the driver for the duration between acquire and present operation.
There are (usually) several swapchain images, for the purposes of buffering and rendering ahead. AFAIK you need a separate VkFramebuffer for each image (which is annoying, but more in tune with what actually happens underneath).
Probably the best single sentence from the Vulkan spec that describes framebuffers is:
The specific image views that will be used for the attachments, and
their dimensions, are specified in VkFramebuffer objects.
Yes, you would need a VkFramebuffer object for each image in a swapchain, but you generally need to allocate only one VkDeviceMemory for a depth-buffer VkImage, and can then add the VkImageView for that single depth buffer to all of your framebuffers.

Copying pixel data directly from windows Device Context to an openGL rendering context

Is it possible to copy pixel data directly from a Windows device context into an OpenGL rendering context (an OpenGL texture, to be specific)? I know that I can copy the Windows device context's pixel data into system memory (taking it off the graphics card) and later upload it back into my OpenGL framebuffer, but what I'm looking for is a direct transfer where the data doesn't have to leave the GPU.
I am trying to write an application that is essentially a virtual magnifier. It consists of one window, which displays the contents of any other windows that are open underneath it. The target for my application is machines with Windows 8 and higher, and I am using the basic Win32 API. The reason I want to use OpenGL to display the contents of my window is that I wish to perform various spatial transformations (distortions) with the GPU, not just magnification. Using OpenGL, I believe I can perform these transformations very fast.
Previously, I thought that all I had to do was "move" my OpenGL rendering context onto each third-party window that I wanted to take pixel data from, and use glReadPixels() to copy that data into a PBO. I could then switch my rendering context back to my magnifier window and proceed with rendering. However, I understand that this isn't possible, because OpenGL doesn't have access to any pixel data that wasn't rendered by OpenGL itself.
Any help would be greatly appreciated.

Changing default OpenGL context profile version

To create an OpenGL context with profile version 3.2, I need to add the attributes to the pixel format when creating the context:
...,
NSOpenGLPFAOpenGLProfile, (NSOpenGLPixelFormatAttribute) NSOpenGLProfileVersion3_2Core,
...,
Is there a way (an environment variable, a global variable, a function to call before the NSOpenGLPixelFormat is created, ...) to alter the default OpenGL profile version for when no such attributes are specified? It defaults to an older version on OS X 10.10. I'm trying to integrate code that relies on newer OpenGL features with a framework (ROOT) that sets up the OpenGL context and gives no way to alter the parameters.
And is there a way to change the pixel format attributes of an OpenGL context after it has been set up?
Pixel formats associated with OpenGL contexts (the proper terminology here is "default framebuffer") are always immutable. OpenGL itself does not control or dictate this, it is actually the window system API (e.g. WGL, GLX, EGL, NSOpenGL) that impose this restriction.
The only way I see around this particular problem is if you create your own offscreen (core 3.2) context that shares resources with the legacy (2.1) context that ROOT created. If OS X actually allows you to do this context sharing (it is iffy, because the two contexts probably count as different renderers), you can draw into a renderbuffer object in your core context and then blit that renderbuffer into your legacy context using glBlitFramebuffer (...).
Note that Framebuffer Objects are not a context shareable resource. What you wind up sharing in this example is the actual image attachment (Renderbuffer or Texture), and that means you will have to maintain separate FBOs with the same attachments in both contexts.
glutInitContextVersion(3, 3);
sets the OpenGL context version to 3.3; you can change the version as you like. (Note that this applies only to contexts created through GLUT/freeglut.)

What does this bit of MSDN documentation mean?

The first parameter to the EnumFontFamiliesEx function, according to the MSDN documentation, is described as:
hdc [in]
A handle to the device context from which to enumerate the fonts.
What exactly does it mean?
What does device context mean?
Why should a device context be related to fonts?
Question (3) is a legitimately difficult thing to find an explanation for, but the reason is simple enough:
Some devices provide their own font support. For example, a PostScript printer will allow you to use PostScript fonts, but those same fonts won't be usable when rendering on-screen or to another printer without PostScript support. Another example: a plotter (which drives a motorized pen) requires vector fonts with a fixed stroke thickness, so raster fonts can't be used with such a device.
If you're interested in device-specific font support, you'll want to know about the GetDeviceCaps function.
The Windows API uses the concept of handles extensively. A handle is an integer value that you use as a token to access an API resource. You can think of it as a kind of "this" pointer, although it is definitely not a pointer.
A device context is an object within the Windows API that represents something you can draw on or display graphics on: a printer, a bitmap, a screen, or some other context in which creating graphics makes sense. In Windows, fonts must be selected into a device context before they can be used, and to find out which fonts are currently available in a given device context, you enumerate them. That's where EnumFontFamiliesEx comes in.
Microsoft has further documentation on device contexts:
https://learn.microsoft.com/en-us/windows/win32/gdi/about-device-contexts
An application must inform GDI to load a particular device driver and,
once the driver is loaded, to prepare the device for drawing
operations (such as selecting a line color and width, a brush pattern
and color, a font typeface, a clipping region, and so on). These tasks
are accomplished by creating and maintaining a device context (DC). A
DC is a structure that defines a set of graphic objects and their
associated attributes, and the graphic modes that affect output. The
graphic objects include a pen for line drawing, a brush for painting
and filling, a bitmap for copying or scrolling parts of the screen, a
palette for defining the set of available colors, a region for
clipping and other operations, and a path for painting and drawing
operations. Unlike most of the structures, an application never has
direct access to the DC; instead, it operates on the structure
indirectly by calling various functions.
Font rendering is, of course, a kind of drawing.
