OpenGL Context with Multiple Devices (Monitors) - Windows

In OpenGL I implicitly create a graphics context with something like GLUT when I create a window. Suppose I drag my window onto a monitor driven by a different video card (e.g. Intel embedded graphics on one and NVIDIA on the other). Who renders the window? I.e. which device runs the graphics pipeline in that case?
glGetString(GL_RENDERER) seems to always return the renderer of the primary display (where the GLUT window was created), even if I drag the window fully onto one monitor or the other. (I am guessing it all gets done by the primary...) Can someone help me understand this?
Note: I'm using Windows 10, GLUT, and OpenGL, but I ask the question in general terms if it matters.

OpenGL knows nothing about windows, only about contexts; it renders to the framebuffer of whichever context is current.
You can ask the OS which monitor the window is currently on, create two contexts, and make the appropriate one current depending on the answer.
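A minimal sketch of that idea on Windows with WGL, assuming two rendering contexts have already been created elsewhere (the variable names and the makeContextCurrentFor helper are illustrative, and this glosses over pixel-format compatibility between the DC and each context):

    #include <windows.h>
    #include <GL/gl.h>

    HGLRC ctxPrimary, ctxSecondary;   // one context per device (assumed created elsewhere)
    HMONITOR monitorSecondary;        // handle of the second display (assumed queried elsewhere)

    void makeContextCurrentFor(HWND hwnd, HDC hdc) {
        // Ask the OS which monitor currently contains (most of) the window.
        HMONITOR m = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
        wglMakeCurrent(hdc, (m == monitorSecondary) ? ctxSecondary : ctxPrimary);
        // GL_RENDERER now reports the device behind whichever context is current.
        const GLubyte* renderer = glGetString(GL_RENDERER);
        (void)renderer;
    }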

Related

Creating NSOpenGLView inside an NSDocument (OS X)

I am creating a music-education app that reads in musical scores - not audio files - and will need to present an animated graphical screen. I created a document-based app to make file access easy; it now reads and parses the files, and all the song data is stored in my Obj-C classes. I also have a text view in my xib that I can write song attributes and other text tidbits to. Now I want a second view for the music, which needs to be graphical and animatable. I am an Xcode novice but have some OpenGL experience. My setup is the latest OS and Xcode versions.
When I try to drag an OpenGL view into my window in IB, I get a weird error/warning that says "Unsupported Configuration - NSOpenGLView in One Shot memory enabled window" (so that is weird), and the OpenGL view does not appear when I run the app.
I can't find much reference to OpenGL views in NSDocuments on this site, or anywhere else, which makes me think I might be trying to do something that is not meant to be done. Does anyone have any advice for me? Should I not use a document-based app? Should I use something other than OpenGL? Or maybe I need to build the OpenGL view and view controller 100% programmatically in this case? Any advice or pointers to applicable samples/tutorials would be a huge help.
Try disabling the "One Shot" option in the window's Memory attributes in Interface Builder.
From the NSWindow documentation:
setOneShot: Sets whether the window device that the window manages should be freed when it’s removed from the screen list.
- (void)setOneShot:(BOOL)oneShot
Parameters
oneShot - YES to free the window’s window device when it’s removed from the screen list (hidden) and to create another one when it’s returned to the screen; NO to reuse the window device.
Discussion
Freeing the window device when it’s removed from the screen list can result in memory savings and performance improvement for NSWindow objects that don’t take long to display. It’s particularly appropriate for NSWindow objects the user might use once or twice but not display continually.

Qt keyboard events with DirectX fullscreen

I need to display a full screen DirectX window from a Qt app.
Although DirectX isn't directly supported by Qt anymore, this should be easy enough - just override QWidget, provide your own paintEvent(), and set the WA_PaintOnScreen attribute.
But when the app is full screen, DirectX grabs all the mouse and keyboard input, so the only way out of the app is Ctrl-Alt-Del.
P.S. Even if I wrote DirectX keyboard handlers, I would still have to find a way of creating the correct QKeyEvent to pass to Qt.
Has anyone done this? Or is there a simple way to tell DirectX not to grab the keyboard?
To my knowledge, Direct3D does not grab the keyboard. Your problem more likely arises from the fact that Direct3D in full-screen mode is quite a different beast. Things like GDI (which Qt may well use to do its rendering) do not work by default, and the runtime hooks lots of bits of information; that info then, presumably, never manages to get to Qt. The options you have are to reimplement Qt's rendering using Direct3D (the Lighthouse project?) or to use a pseudo full screen. The latter is usually done by creating a window whose client area is the same size as the screen and then positioning it correctly.
The latter would probably be the simplest solution ...
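A rough sketch of that pseudo-fullscreen idea with Qt 4-era APIs (the class name is illustrative, and the Direct3D device handling itself is omitted):

    #include <QApplication>
    #include <QDesktopWidget>
    #include <QWidget>

    class D3DWidget : public QWidget {
    public:
        D3DWidget() {
            setAttribute(Qt::WA_PaintOnScreen);       // we do all painting ourselves
            setAttribute(Qt::WA_NoSystemBackground);  // don't let Qt clear the window
            setWindowFlags(Qt::FramelessWindowHint);  // no border or titlebar
            // The client area covers the whole screen, but this is still a normal
            // window, so Qt keeps receiving keyboard and mouse events.
            setGeometry(QApplication::desktop()->screenGeometry(this));
        }
        QPaintEngine* paintEngine() const { return 0; }  // disable Qt's painter
    protected:
        void paintEvent(QPaintEvent*) { /* present the Direct3D frame here */ }
    };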
There was an attempt to get a D3DWidget kind of thing into Qt around 4.3-4.5, but it was never stabilized or approved, and it was later removed.
Perhaps Lighthouse is indeed an option (a medium-sized amount of work - basically linking OS/DX stuff to Qt stuff), or you can take a look at the old Direct3D code in older Qt branches. I never used it, and it probably isn't intended for use with recent versions of Qt, but it's better than nothing.

How does a Windows non-native user interface work?

Through experience I have found that native Windows forms/components don’t like to be changed. I know that with Delphi or Visual Studio you are given native Windows components to populate a form or window with, and you then attach code to the events these components raise (onClick, for example).
However, how do programs like Word or Google’s Chrome browser alter the standard Windows window? I thought it was somehow protected?
Chrome seems to have tabs actually on the window’s frame?
I know you can also get toolkits like Swing and Qt that have their own controls/components to populate a form. How do these work? (How does the operating system/computer know what a non-native button should act like? For example, Chrome's back and forward buttons - they're not native components?)
I can understand how an OpenGL/DirectX window would work, because you’re telling the computer exactly what to draw with polygons/quads.
I hope this question is clear!
Windows does not protect GUI elements. Windows and controls can be subclassed to handle various drawing operations in a custom way. For example, a window may override the handling of the WM_NCPAINT message to draw a custom titlebar and frame:
http://msdn.microsoft.com/en-us/library/dd145212(VS.85).aspx
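A bare-bones sketch of that subclassing approach using the documented Win32 calls (the actual frame drawing is left as a placeholder):

    #include <windows.h>

    static WNDPROC g_prevProc = 0;

    LRESULT CALLBACK CustomFrameProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp) {
        if (msg == WM_NCPAINT) {
            HDC hdc = GetWindowDC(hwnd);  // DC for the whole window, frame included
            // ... draw a custom titlebar/frame into hdc here ...
            ReleaseDC(hwnd, hdc);
            return 0;                     // tell Windows the frame has been painted
        }
        return CallWindowProc(g_prevProc, hwnd, msg, wp, lp);
    }

    void InstallCustomFrame(HWND hwnd) {
        g_prevProc = reinterpret_cast<WNDPROC>(
            SetWindowLongPtr(hwnd, GWLP_WNDPROC,
                             reinterpret_cast<LONG_PTR>(CustomFrameProc)));
    }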
Some Windows controls have an "owner-draw" mode. If you use this, you get to draw the control (or at least the vital parts of it), while Windows takes care of responding to user input in the standard way.
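For example, here is a sketch of owner-drawing a button: assuming the button was created with the BS_OWNERDRAW style, the parent window paints it in response to WM_DRAWITEM while Windows still handles hit-testing, focus, and click behaviour:

    #include <windows.h>

    // Called from the parent's window procedure when it receives WM_DRAWITEM.
    BOOL OnDrawItem(LPARAM lParam) {
        DRAWITEMSTRUCT* dis = reinterpret_cast<DRAWITEMSTRUCT*>(lParam);
        if (dis->CtlType != ODT_BUTTON)
            return FALSE;
        bool pressed = (dis->itemState & ODS_SELECTED) != 0;
        // The visuals are entirely ours; Windows only tells us the state.
        FillRect(dis->hDC, &dis->rcItem,
                 reinterpret_cast<HBRUSH>(
                     static_cast<INT_PTR>(pressed ? COLOR_HIGHLIGHT + 1
                                                  : COLOR_BTNFACE + 1)));
        DrawTextW(dis->hDC, L"Custom", -1, &dis->rcItem,
                  DT_CENTER | DT_VCENTER | DT_SINGLELINE);
        return TRUE;  // we drew the item
    }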
Swing and Qt draw their own widgets at a low level using basic primitives, but they also have theme engines that can mimic the native controls.
Qt moved to native controls a while back. As for how Swing does it: it gets a basic window from the OS, then, much like OpenGL/DirectX, it does all of the drawing within that window. As for where to position things, that is what the layout managers do. Each manager has a layout style (horizontal, vertical, grid), the components it has to draw, and a section of the window it is expected to fill. From there it does some fairly easy math to allocate its space to its controls.
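To make that "easy math" concrete, here is a toy version of what a horizontal layout manager computes (purely illustrative types, not any toolkit's actual API):

    #include <cstddef>
    #include <vector>

    struct Rect { int x, y, w, h; };

    // Split the available area into equal-width slots, one per child.
    void layoutHorizontally(const Rect& area, std::vector<Rect>& children) {
        if (children.empty()) return;
        int slot = area.w / static_cast<int>(children.size());
        for (std::size_t i = 0; i < children.size(); ++i)
            children[i] = Rect{ area.x + static_cast<int>(i) * slot,
                                area.y, slot, area.h };
    }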
There's no magic: non-native controls are simply drawn on a blank window. Or, instead of being drawn with primitives, they may be represented as one of several bitmaps chosen by state (i.e. a button may use one .png for the normal state, another .png for the pressed state, etc.).
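A tiny sketch of that bitmap-per-state idea (names are illustrative, image loading omitted):

    enum class ButtonState { Normal, Hovered, Pressed, Disabled };

    struct Image { /* decoded .png pixels, omitted */ };

    struct SkinnedButton {
        Image normal, hovered, pressed, disabled;
        ButtonState state = ButtonState::Normal;

        // Drawing reduces to blitting whichever image matches the current state.
        const Image& currentImage() const {
            switch (state) {
                case ButtonState::Hovered:  return hovered;
                case ButtonState::Pressed:  return pressed;
                case ButtonState::Disabled: return disabled;
                default:                    return normal;
            }
        }
    };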

Problem with fullscreen OpenGL on the Mac

I am porting some OpenGL tutorials from Win32/GLUT to Cocoa/Mac OS X. In windowed mode everything works, but when the context switches to fullscreen, the screen may be empty (only the clear colour)! For example, the cube appears the first, second, and third times, but not the fourth - even if the app launches in fullscreen without a shared context. I don't understand it.
Xcode 3.2.1, Mac OS X 10.6.2
source link
It looks like AFController's enterFullScreen method probably needs to set up the OpenGL context ([scene initGL]).
Also, awakeFromNib may be called before the application is ready to draw, so perhaps it's not the best place for [scene initGL]. I suggest implementing NSApplication's delegate method, applicationDidFinishLaunching:, and moving [scene initGL] there. Just to be safe, you might also try calling NSOpenGLContext's makeCurrentContext from there as well.

Does anyone know if there is a performance benefit to fullscreen OpenGL vs. windowed OpenGL on OS X?

The client for the MMO I work on uses two contexts, one for a window view and one for fullscreen. I'm wondering if I can instead use a single window sized to the display and simply resize it if the user wants a smaller window so they can access their desktop.
Is there a performance penalty for running OpenGL in a window vs. fullscreen, assuming the same dimensions, etc.?
The client shell is written in Cocoa; the game code itself is cross-platform.
We only support OS X 10.5 and 10.6 for the next release.
Before 10.6, if your context was not created with the fullscreen flag, there was a small performance difference. With 10.6, this has changed.
Have a look at:
http://lists.apple.com/archives/Cocoa-dev/2009/Sep/msg01054.html
If there is a cost associated with clipping each frame, then yes.
