To create an OpenGL context with profile version 3.2, I need to add the attributes to the pixel format when creating the context:
...,
NSOpenGLPFAOpenGLProfile, (NSOpenGLPixelFormatAttribute) NSOpenGLProfileVersion3_2Core,
...,
Is there a way (an environment variable, a global variable, a function to call before the NSOpenGLPixelFormat is created, ...) to alter the default OpenGL profile version for the case where no such attributes are specified? It defaults to an older version on OS X 10.10. I'm trying to integrate code that relies on newer OpenGL features with a framework (ROOT) that sets up the OpenGL context itself and gives no way to alter the parameters.
And is there a way to change the pixel format attributes of an OpenGL context after it has been set up?
Pixel formats associated with OpenGL contexts (the proper terminology here is "default framebuffer") are always immutable. OpenGL itself does not control or dictate this; it is actually the window system API (e.g. WGL, GLX, EGL, NSOpenGL) that imposes this restriction.
The only way I see around this particular problem is if you create your own offscreen (core 3.2) context that shares resources with the legacy (2.1) context that ROOT created. If OS X actually allows you to do this context sharing (it is iffy, because the two contexts probably count as different renderers), you can draw into a renderbuffer object in your core context and then blit that renderbuffer into your legacy context using glBlitFramebuffer (...).
Note that Framebuffer Objects are not a context shareable resource. What you wind up sharing in this example is the actual image attachment (Renderbuffer or Texture), and that means you will have to maintain separate FBOs with the same attachments in both contexts.
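If the sharing does work, the FBO/blit plumbing might look roughly like the sketch below (width, height and the object names are just placeholders, and it assumes the legacy 2.1 context exposes renderbuffers and glBlitFramebuffer through one of the FBO extensions):
const int width = 1280, height = 720;   // example size

// With the core 3.2 context current: render into a shared renderbuffer.
GLuint sharedRbo, coreFbo;
glGenRenderbuffers(1, &sharedRbo);
glBindRenderbuffer(GL_RENDERBUFFER, sharedRbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glGenFramebuffers(1, &coreFbo);
glBindFramebuffer(GL_FRAMEBUFFER, coreFbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, sharedRbo);
// ... draw with core-profile features ...

// With the legacy 2.1 context current: FBOs are not shared, so wrap the same
// renderbuffer in a second FBO and blit it to the window's default framebuffer.
GLuint legacyFbo;
glGenFramebuffers(1, &legacyFbo);
glBindFramebuffer(GL_FRAMEBUFFER, legacyFbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, sharedRbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER, legacyFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);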
glutInitContextVersion (3, 3);
can set the requested OpenGL version to 3.3 (note that this is a freeglut extension); you can change the version as you like.
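A minimal freeglut sketch of that (glutInitContextVersion and glutInitContextProfile are freeglut extensions, so this only helps if your application, not the framework, performs the GLUT initialization; the window title is arbitrary):
#include <GL/freeglut.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitContextVersion(3, 3);                 // request OpenGL 3.3
    glutInitContextProfile(GLUT_CORE_PROFILE);    // ask for a core profile
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutCreateWindow("core profile window");
    // register callbacks and call glutMainLoop() as usual
    return 0;
}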
During the development of my engine, I'm trying to implement a feature that enables hot-swapping between OpenGL and DirectX. Currently I'm testing on the Win32 platform, and I came across the following problem:
I implemented both renderers (OpenGL 3.0 and Direct3D 11); both work fine on their own. The swapping mechanism is the following:
Destroy the current rendering context and build up the new one. For example: release all DirectX objects, then create an OpenGL context via WGL. I'm trying to implement this using only one window (HWND).
Swapping from OpenGL 3.0 to DirectX 11 works (after destroying OpenGL, DirectX renders fine).
Destroying OpenGL and then recreating it works. The same goes for DirectX.
When I try to swap from DirectX to OpenGL, the window stops displaying the newly drawn content and only shows the last frame drawn by DirectX.
To construct the OpenGL context I'm using WGL. The class for the window was created with the CS_OWNDC style. I'm using SwapBuffers to flip the window buffers. Before setting up the context, I use SetPixelFormat with the previously returned value from ChoosePixelFormat. The created context is version 3.0, ensured via wglCreateContextAttribsARB.
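For reference, the setup described above looks roughly like the following sketch (error handling omitted; hwnd is the shared window, the WGL_CONTEXT_* tokens come from wglext.h, and wglCreateContextAttribsARB has to be loaded via wglGetProcAddress using a temporary legacy context first):
PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd) };
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cDepthBits = 24;

HDC hdc = GetDC(hwnd);                        // window class registered with CS_OWNDC
int format = ChoosePixelFormat(hdc, &pfd);
SetPixelFormat(hdc, format, &pfd);            // note: a window's pixel format can only be set once

const int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 0,
    0
};
HGLRC hglrc = wglCreateContextAttribsARB(hdc, NULL, attribs);
wglMakeCurrent(hdc, hglrc);
// ... render, then SwapBuffers(hdc) every frame ...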
Additional information:
All of the DirectX references are released; this was verified by calling ReportLiveDeviceObjects and by checking that ID3D11Device1::Release returns 0. ID3D11DeviceContext1::ClearState and Flush were called to ensure object destruction.
None of the OpenGL calls report an error via glGetError; this is checked after every call. The same goes for all OS and WGL calls.
The OpenGL rendering calls are executing as expected, for example:
OpenGL rendering with 150 fps
Swap to DirectX, render with 60 fps (VSYNC)
Swap back to OpenGL, rendering again with 150 fps (not more)
There are other scenarios where OpenGL renders with more than 150 fps, so the rendering calls are executing properly.
My guess is that the flipping of the buffers somehow stops working; however, SwapBuffers still returns TRUE.
I tried using SaveDC and RestoreDC before and after using DirectX, but this did not solve the problem.
Using wglSwapLayerBuffers instead of SwapBuffers makes no difference.
Can I somehow restore the HWND or HDC to its original state, or does anyone have an idea why this might happen?
I guess I posted my question too soon; anyway, this is how I solved it.
I dug around the DirectX documentation, and for the function CreateSwapChainForHwnd I found the following:
Because you can associate only one flip presentation model swap chain at a time with an HWND, the Microsoft Direct3D 11 policy of deferring the destruction of objects can cause problems if you attempt to destroy a flip presentation model swap chain and replace it with another swap chain.
I was using DXGI_SWAP_EFFECT_FLIP_SEQUENTIAL in my swap chain descriptor, which could mean that DirectX sets up a flip-model swap chain for the window, and when I then try to use the window with OpenGL, swapping the buffers somehow fails.
The solution is to not use a FLIP mode when creating the swap chain:
DXGI_SWAP_CHAIN_DESC1 scd = {};   // zero-initialize, then fill the usual width/height/format/buffer fields
scd.SwapEffect = DXGI_SWAP_EFFECT_DISCARD;           // do NOT use a FLIP_* swap effect
scd.Scaling = DXGI_SCALING_ASPECT_RATIO_STRETCH;
You have to set Scaling to something other than DXGI_SCALING_NONE, or the creation will fail.
The interesting part is that DirectX still does not properly remove the flip model from the window, although I did everything the documentation suggested (the ClearState and Flush calls).
See the Remarks section of the CreateSwapChainForHwnd documentation.
Edit: I came back to this question after some time. If anybody still has an idea how to revert the window to using GDI again instead of the DWM back buffer, it would be greatly appreciated.
Is there any way in Mac OS X to share an OpenGL framebuffer between processes? That is, I want to render to an off-screen target in one process and display it in another.
You can do this with DirectX (actually DXGI) on Windows by creating a surface (the DXGI equivalent of an OpenGL framebuffer) in shared mode, getting an opaque handle for that surface, passing that handle to another process via whatever means you like, and then creating a surface in that other process from the existing handle. You then use the surface as a render target in one process and consume it as a texture in the other, however you wish. In fact, the whole compositing window system works like this from Vista onwards.
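For reference, the D3D11 side of that mechanism looks roughly like this (a sketch with error handling omitted; device and otherDevice stand for the ID3D11Device created in each process):
// Process A: create a shareable texture and extract the shared handle.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width = 1024; desc.Height = 768;
desc.MipLevels = 1; desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_B8G8R8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage = D3D11_USAGE_DEFAULT;
desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;
desc.MiscFlags = D3D11_RESOURCE_MISC_SHARED;

ID3D11Texture2D *tex = NULL;
device->CreateTexture2D(&desc, NULL, &tex);

IDXGIResource *dxgiRes = NULL;
tex->QueryInterface(__uuidof(IDXGIResource), (void **)&dxgiRes);
HANDLE shared = NULL;
dxgiRes->GetSharedHandle(&shared);   // pass this handle to the other process

// Process B: open the same surface and use it as a texture.
ID3D11Texture2D *sharedTex = NULL;
otherDevice->OpenSharedResource(shared, __uuidof(ID3D11Texture2D), (void **)&sharedTex);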
If this isn't possible I can of course get the contents of the framebuffer into system memory and use cross-process shared memory to get it to the target process, then upload it again from there, but that would be unnecessarily slow.
Depending on what you're really trying to do, this sample code project may be what you want:
MultiGPUIOSurface sample code
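That sample is built around IOSurface, which is the closest macOS analogue to the shared DXGI surface described in the question. A rough, hedged sketch of the idea (error handling and CF releases omitted; the sample itself is the authoritative reference):
// Process A: create an IOSurface and wrap it in a mach port to hand to process B.
int w = 1024, h = 768, bpe = 4;   // example size, 4 bytes per pixel
CFMutableDictionaryRef props = CFDictionaryCreateMutable(NULL, 0,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(props, kIOSurfaceWidth,  CFNumberCreate(NULL, kCFNumberIntType, &w));
CFDictionarySetValue(props, kIOSurfaceHeight, CFNumberCreate(NULL, kCFNumberIntType, &h));
CFDictionarySetValue(props, kIOSurfaceBytesPerElement, CFNumberCreate(NULL, kCFNumberIntType, &bpe));
IOSurfaceRef surface = IOSurfaceCreate(props);
mach_port_t port = IOSurfaceCreateMachPort(surface);   // send this via mach IPC

// Process B: look the surface up and bind it to a texture in its own GL context.
IOSurfaceRef shared = IOSurfaceLookupFromMachPort(port);
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_RECTANGLE_ARB, tex);
CGLTexImageIOSurface2D(CGLGetCurrentContext(), GL_TEXTURE_RECTANGLE_ARB,
                       GL_RGBA, w, h, GL_BGRA, GL_UNSIGNED_INT_8_8_8_8_REV,
                       shared, 0);
// Process A renders into the surface (e.g. via an FBO); process B samples tex.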
It really depends upon the context of how you're using it.
Objects that may be shared between contexts include buffer objects,
program and shader objects, renderbuffer objects, sampler objects,
sync objects, and texture objects (except for the texture objects
named zero).
Some of these objects may contain views (alternate interpretations) of
part or all of the data store of another object. Examples are texture
buffer objects, which contain a view of a buffer object’s data store,
and texture views, which contain a view of another texture object’s
data store. Views act as references on the object whose data store is
viewed.
Objects which contain references to other objects include framebuffer,
program pipeline, query, transform feedback, and vertex array objects.
Such objects are called container objects and are not shared.
Chapter 5 / OpenGL-4.4 core specification
The reason you can do those things on Windows and not on OS X is that Windows exposes an API (DXGI) that allows those surfaces to be shared between processes. If OS X doesn't offer that capability within its OpenGL API, then you're going to have to come up with your own solution. Take a look at the OpenGL Programming Guide for Mac; there's a small section that describes using multiple OpenGL contexts.
The first parameter to the EnumFontFamiliesEx function, according to the MSDN documentation, is described as:
hdc [in]
A handle to the device context from which to enumerate the fonts.
(1) What exactly does it mean?
(2) What does "device context" mean?
(3) Why should a device context be related to fonts?
Question (3) is a legitimately difficult thing to find an explanation for, but the reason is simple enough:
Some devices provide their own font support. For example, a PostScript printer will allow you to use PostScript fonts. But those same fonts won't be usable when rendering on-screen, or to another printer without PostScript support. Another example would be that a plotter (which is a motorized pen) requires vector fonts with a fixed stroke thickness, so raster fonts can't be used with such a device.
If you're interested in device-specific font support, you'll want to know about the GetDeviceCaps function.
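For instance, a small sketch of querying a DC's text capabilities that way (hdc being any device context you obtained):
// Ask the device context which kinds of fonts the device itself can handle.
int caps = GetDeviceCaps(hdc, TEXTCAPS);
bool canUseRasterFonts = (caps & TC_RA_ABLE) != 0;
bool canUseVectorFonts = (caps & TC_VA_ABLE) != 0;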
The Windows API uses the concept of handles extensively. A handle is an integer value that you can use as a token to access an API resource. You can think of it as a kind of "this" pointer, although it is definitely not a pointer.
A device context is an object within the Windows API that represents something you can draw on or display graphics on. It might be a printer, a bitmap, a screen, or some other context in which creating graphics makes sense. In Windows, fonts must be selected into a device context before they can be used. To find out which fonts are currently available in a given device context, you can enumerate them; that's where EnumFontFamiliesEx comes in.
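A minimal sketch of such an enumeration (the names FontEnumProc and ListFonts are just placeholders):
#include <windows.h>

// Called once per font family available on the given device context.
int CALLBACK FontEnumProc(const LOGFONT *lf, const TEXTMETRIC *, DWORD fontType, LPARAM)
{
    // lf->lfFaceName holds the family name; fontType says raster/TrueType/etc.
    return 1;   // non-zero: keep enumerating
}

void ListFonts(HDC hdc)
{
    LOGFONT filter = {};
    filter.lfCharSet = DEFAULT_CHARSET;   // enumerate all character sets
    EnumFontFamiliesEx(hdc, &filter, FontEnumProc, 0, 0);
}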
Microsoft has further documentation on device contexts:
https://learn.microsoft.com/en-us/windows/win32/gdi/about-device-contexts
An application must inform GDI to load a particular device driver and,
once the driver is loaded, to prepare the device for drawing
operations (such as selecting a line color and width, a brush pattern
and color, a font typeface, a clipping region, and so on). These tasks
are accomplished by creating and maintaining a device context (DC). A
DC is a structure that defines a set of graphic objects and their
associated attributes, and the graphic modes that affect output. The
graphic objects include a pen for line drawing, a brush for painting
and filling, a bitmap for copying or scrolling parts of the screen, a
palette for defining the set of available colors, a region for
clipping and other operations, and a path for painting and drawing
operations. Unlike most of the structures, an application never has
direct access to the DC; instead, it operates on the structure
indirectly by calling various functions.
A font, of course, is one of the graphic objects a device context uses for drawing, which is why the two are related.
What are the differences between the OES/EXT/ARB_framebuffer_object extensions? Can all of these extensions be used with OpenGL ES 1.1 or OpenGL ES 2.0 applications? Or are there any guidelines regarding which extension should be used with which version of GLES x.x?
OK, after some googling I found the following information:
GLES FBOs:
1a. Under GLES 2, FBOs are core.
1b. Under GLES 1, they are exposed via the extension GL_OES_framebuffer_object, under which the API entry points are named glSomeFunctionOES.
2. OpenGL 1.x/2.x with GL_EXT_framebuffer_object: the API entry points are named glSomeFunctionEXT.
3. OpenGL 3.x FBOs / GL_ARB_framebuffer_object: under GL 3.x, FBOs are core and the API entry points are named glSomeFunction. There is also a "backport" extension for GL 2.x, GL_ARB_framebuffer_object, whose API entry points are likewise named glSomeFunction(). Note the lack of an EXT or ARB suffix.
Token naming:
1a. no suffix
1b. _OES
2. _EXT
3. no suffix
Fortunately, the token names map to the same values.
Additionally, their usage differs:
1a, 1b: Depth and stencil buffers are attached separately as renderbuffers; alternatively, attaching both as one buffer may be supported via the extension GL_OES_packed_depth_stencil. The depth buffer defaults to 16 bits!
2, 3: The spec allows attaching depth and stencil separately, but consumer-level desktop hardware generally does not support this; rather, attaching both a stencil and a depth buffer calls for a combined depth-stencil format.
2. extension GL_EXT_packed_depth_stencil, type GL_DEPTH24_STENCIL8_EXT
3. part of the FBO spec, type GL_DEPTH24_STENCIL8
Note: the values of the tokens GL_DEPTH24_STENCIL8 and GL_DEPTH24_STENCIL8_EXT are the same.
Issues with GL_EXT_framebuffer_object:
a) GL_EXT_framebuffer_object might not be listed in GL 3.x contexts because FBOs are core there.
b) On a GL 2.x context with newer hardware, it is possible that GL_EXT_framebuffer_object is not listed but GL_ARB_framebuffer_object is.
Differences in capabilities:
FBO support via 3.x / GL_ARB_framebuffer_object allows color buffer attachments to have different types and resolutions; additionally, MSAA and blit functionality are part of 3.x core and of GL_ARB_framebuffer_object.
With FBO support via GL_EXT_framebuffer_object, blit and MSAA support are exposed as separate extensions.
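To make the naming differences concrete, here is a small sketch using the core / ARB entry points and tokens (width and height are placeholders; with the EXT path you would use the ...EXT-suffixed names, GL_DEPTH24_STENCIL8_EXT, and attach the packed renderbuffer to the depth and stencil attachment points separately):
const int width = 1280, height = 720;   // example size
GLuint fbo, color, depthStencil;

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenRenderbuffers(1, &color);
glBindRenderbuffer(GL_RENDERBUFFER, color);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color);

glGenRenderbuffers(1, &depthStencil);
glBindRenderbuffer(GL_RENDERBUFFER, depthStencil);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthStencil);

if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE)
{
    // handle incomplete framebuffer
}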
I'm currently writing an OpenGL renderer and am part-way through writing some classes for enumerating display adaptors, devices and modes for use in drop-down lists.
I'm using EnumDisplayDevices to get the adaptors and then EnumDisplaySettings for each device, which gives me bpp, width, height and refresh rate. However, I'm not sure how to find out which modes are available full-screen (there doesn't appear to be a flag for it in the DEVMODE structure). Can I assume all listed modes can, in principle, be used full-screen?
As a follow up question, is this approach to device enumeration generally the best way of getting the required information on Windows?
OpenGL does not have this distinction between windowed and fullscreen mode. If you want an OpenGL program to be fullscreen, you just make the window top-level, borderless, undecorated, always on top, and sized to cover the whole screen.
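On Windows, that amounts to something like the following sketch (hwnd being the window you already created):
// Strip the decorations and stretch the window over the whole primary display.
SetWindowLongPtr(hwnd, GWL_STYLE, WS_POPUP | WS_VISIBLE);
SetWindowPos(hwnd, HWND_TOPMOST, 0, 0,
             GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN),
             SWP_FRAMECHANGED);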
The above is actually a dumb question. By definition, windowed mode must use the current display settings. All other modes must be available full-screen (provided the OS supports them; e.g. 640x480 is not advisable on Vista/7).
Hmmph, not correct at all, and with an attitude too. You have a variety of functions that can be used.
SetPixelFormat, ChoosePixelFormat, ChangeDisplaySettings.
The pixel format functions will let you enumerate the available pixel formats. ChangeDisplaySettings will allow you to set whatever screen mode (including bit depth) your app wants. Look them up on MSDN.
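A hedged sketch of the mode enumeration and switch (the 1920x1080x32 filter is just an example; CDS_FULLSCREEN marks the change as temporary for your application):
// Enumerate every mode the primary display driver reports.
DEVMODE dm = {};
dm.dmSize = sizeof(dm);
DEVMODE chosen = {};
for (DWORD i = 0; EnumDisplaySettings(NULL, i, &dm); ++i)
{
    // dm.dmPelsWidth x dm.dmPelsHeight, dm.dmBitsPerPel bpp, dm.dmDisplayFrequency Hz
    if (dm.dmPelsWidth == 1920 && dm.dmPelsHeight == 1080 && dm.dmBitsPerPel == 32)
        chosen = dm;   // remember the mode we want
}

// Switch to the chosen mode for this application (restored when it exits).
chosen.dmFields = DM_PELSWIDTH | DM_PELSHEIGHT | DM_BITSPERPEL | DM_DISPLAYFREQUENCY;
if (ChangeDisplaySettings(&chosen, CDS_FULLSCREEN) != DISP_CHANGE_SUCCESSFUL)
{
    // could not switch; stay in the current desktop mode
}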