I'm new to UI programming, and I'm trying to get started with OpenGL. When I run an example program that creates a new OpenGL window with GLUT, it works fine. However, in the context of another program, where I have to respond to draw events (on Windows) with a device context passed to me - and where I might not have GLUT available - my confusion is this:
When is a device context created and destroyed? Can I draw to any device context given to me, or only some of them (and how do I know)?
Do I have to create my own OpenGL context and use that to draw to, or can I use a "current" OpenGL context? Do I have to re-create the context every time a draw event is sent?
Basically my question is, given a situation where I am sent "Draw" events, how often do I attempt to create OpenGL contexts and how does this relate to the creation/destruction cycle of device contexts?
In general, it's usually safe to think of a single OpenGL context as tied to a single window, especially on Windows.
A device context will (typically) map to a window handle (HWND). It's actually a DC (an HDC is the handle), and you normally associate one HDC with a single HWND. In Windows, you'll create a device context based on the on-screen window where you want to render.
Typically, you'll reuse this device context for the entire runtime of your application. If you want to render into a different window, you'll need to generate a device context (HDC) for the new window handle. Also, offscreen rendering is a bit different, since you'd create a compatible device context for that, as well.
As for your questions:
1) When you create the window where you want to do the rendering, you'll grab a device context, and use it for the lifetime of that window.
2) You'll want to always use the device context created for the window where you are rendering.
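To make this concrete, here is a minimal sketch of the usual pattern (the function names and globals are mine, and error handling is omitted): one-time setup of the device context, pixel format, and GL context when the window is created, then plain rendering on every draw event, with no context re-creation.

    #include <windows.h>
    #include <GL/gl.h>

    HDC   g_hdc;   // device context, kept for the window's lifetime
    HGLRC g_hglrc; // OpenGL rendering context, also created only once

    void InitGL(HWND hwnd)
    {
        g_hdc = GetDC(hwnd);

        PIXELFORMATDESCRIPTOR pfd = {0};
        pfd.nSize      = sizeof(pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        pfd.cDepthBits = 24;

        int format = ChoosePixelFormat(g_hdc, &pfd);
        SetPixelFormat(g_hdc, format, &pfd);

        g_hglrc = wglCreateContext(g_hdc);
        wglMakeCurrent(g_hdc, g_hglrc); // once is enough for one window/thread
    }

    // Called on every draw event; nothing is created or destroyed here.
    void OnDraw()
    {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... issue GL draw calls ...
        SwapBuffers(g_hdc);
    }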
So I'm very new to win32ui, basically just starting. I was once using BitBlt with Python's win32api module, and as far as I remember, to draw on top of the display (over any applications, if they are open) I had to get a specific context handle. But my memory is hazy on whether it was simply NULL or some specific context. NULL doesn't seem to work, so I wonder how to obtain that general context? I really want to avoid creating a fully transparent non-blocking window.
The GetDC API allows you to get a device context for any given window. Alternatively,
If [hWnd] is NULL, GetDC retrieves the DC for the entire screen.
You can reliably use the device context for the entire screen to read from (with some restrictions). Rendering into the device context of a window you do not own won't be reliable, though. While it won't fail straight away, the window owner can overwrite your rendering at any point, and there's no way for you to even be notified when that happens.
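If reading from the screen is all you need, a sketch along these lines should work (the helper name is mine, error handling omitted):

    #include <windows.h>

    // Copy a region of the whole-screen DC into an off-screen bitmap.
    HBITMAP CaptureScreenRegion(int x, int y, int w, int h)
    {
        HDC screenDC = GetDC(NULL);                  // DC for the entire screen
        HDC memDC    = CreateCompatibleDC(screenDC); // off-screen target
        HBITMAP bmp  = CreateCompatibleBitmap(screenDC, w, h);
        HGDIOBJ old  = SelectObject(memDC, bmp);

        BitBlt(memDC, 0, 0, w, h, screenDC, x, y, SRCCOPY);

        SelectObject(memDC, old);
        DeleteDC(memDC);
        ReleaseDC(NULL, screenDC);
        return bmp; // caller owns the bitmap (DeleteObject when done)
    }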
If you need to render on top of the screen you will have to create a top-most (transparent) window yourself, and use its device context. Make sure you ask the question: What if two programs did this?
Is wglMakeCurrent supposed to be called only once or does it need to be called before every buffer swap?
Can the current OpenGL context be reset by some external thing other than setting it via wglMakeCurrent?
I am asking just to narrow down the possible problem. I can't post the relevant code here because I have no idea which part is relevant.
Currently I have a loop that does makeCurrent -> clear -> render, and it renders correctly. I tried making the context current once at initialization, without making it current on every draw, but that rendered an empty screen. Only when I exited the window did the correct render flicker for one frame. I figured out that something is wrong by using NVIDIA's graphics debugger: the debugger's overlay strangely flickered, and it doesn't do that with other applications.
Is wglMakeCurrent supposed to be called only once
Each GL context can be made current to at most one thread and drawable at every single point in time. If you only use one Window and one GL Context, it is enough to call wglMakeCurrent only once after you created the context and the window.
If you use one context for multiple windows, you have to re-bind it at least once per frame and window. Note that switching the current context or window traditionally implied flushing the GL pipeline, but this can nowadays be prevented via the KHR_context_flush_control extension, making such a scheme much more efficient.
If you use multiple threads but a single GL context, you must pass the context around from thread to thread by making it uncurrent in one thread, making it current again in the next, and so on. But that scheme should almost never be necessary. For multi-threaded GL, you should create multiple shared contexts, and then you usually need one wglMakeCurrent per thread.
or does it need to be called before every buffer swap?
Note that the SwapBuffers function is not a GL function (hence no gl prefix in the name), and therefore works independently of the currently active GL context - the function takes the HDC of the window in which you want the buffer swap to occur.
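A rough sketch of the one-context-many-windows case, assuming hdc1, hdc2, and ctx were created in the usual way:

    #include <windows.h>
    #include <GL/gl.h>

    // One GL context driving two windows: the context has to be re-bound
    // to each window's DC before drawing into it. SwapBuffers takes the
    // HDC, not the GL context.
    void RenderBothWindows(HDC hdc1, HDC hdc2, HGLRC ctx)
    {
        wglMakeCurrent(hdc1, ctx);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the scene for window 1 ...
        SwapBuffers(hdc1);

        wglMakeCurrent(hdc2, ctx);
        glClear(GL_COLOR_BUFFER_BIT);
        // ... draw the scene for window 2 ...
        SwapBuffers(hdc2);
    }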
Can the current OpenGL context be reset by some external thing other than setting it via wglMakeCurrent?
No, not really. There is the graphics reset situation which can be handled via ARB_robustness:
* Provide a mechanism for an OpenGL application to learn about graphics resets that affect the context. When a graphics reset occurs, the OpenGL context becomes unusable and the application must create a new context to continue operation. Detecting a graphics reset happens through an inexpensive query.
But such a graphics reset does not unbind the current GL context - the affected context is just not usable any more.
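For completeness, a hedged sketch of what that query looks like (assuming the extension is present; the entry point has to be loaded at runtime):

    #include <windows.h>
    #include <GL/gl.h>

    typedef GLenum (APIENTRY *PFNGLGETGRAPHICSRESETSTATUSARBPROC)(void);

    void CheckForGraphicsReset()
    {
        static PFNGLGETGRAPHICSRESETSTATUSARBPROC glGetGraphicsResetStatusARB =
            (PFNGLGETGRAPHICSRESETSTATUSARBPROC)
                wglGetProcAddress("glGetGraphicsResetStatusARB");
        if (!glGetGraphicsResetStatusARB)
            return; // extension not available

        // Anything other than GL_NO_ERROR means the context was lost and
        // must be destroyed and re-created to continue rendering.
        if (glGetGraphicsResetStatusARB() != GL_NO_ERROR) {
            // ... tear down the old context, create a fresh one ...
        }
    }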
I have a Windows 7 system, a regular monitor as the primary display (serving as a desktop, etc.), and an additional screen attached to the same graphics card.
I want to write a program that takes control of the secondary display and uses it for fullscreen OpenGL rendering. I tried to enumerate displays with EnumDisplaySettings, pick the secondary display, create a device context associated with the display, set the pixel format on the DC, and create a WGL context associated with it. I can get this far without errors, but then the call to wglMakeCurrent fails for no apparent reason (return value is 0, GetLastError() is 0, and OpenGL does not function.)
The only way I could get it to work is to extend the desktop onto the secondary display (manually, from Windows display settings), create a window and move it onto the secondary display. Which is tolerable but undesirable (I don't want the secondary display to interfere with the desktop. For example, in this setup, I can move the mouse cursor from the desktop into the secondary display.) Is there a way to avoid this?
More generally, in order to get OpenGL to work on a display, do I need (1) to have the display attached to the desktop (or "a" desktop?), and/or (2) to have a window of my own on that display?
P.S. It seems that I might be able to get this to work with a third-party library such as glfw3, but I don't want extra baggage (I don't need 90% of functionality of glfw3) and I'd prefer to get this done directly through native API calls if possible.
Unfortunately, the Windows graphics driver model does not allow displays to be used independently. You will have to extend the desktop to the second display and create a fullscreen window on it. When it comes to constraining the mouse, the usual way is to hook into the system mouse events and, whenever the mouse pointer moves onto the secondary screen, move it back to the primary screen.
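A rough sketch of that hook approach: a low-level mouse hook that pushes the cursor back onto the primary monitor. It assumes the primary monitor sits at (0,0) in virtual-screen coordinates, and omits error handling.

    #include <windows.h>

    static LRESULT CALLBACK MouseHookProc(int code, WPARAM wParam, LPARAM lParam)
    {
        if (code == HC_ACTION) {
            const MSLLHOOKSTRUCT* info = (const MSLLHOOKSTRUCT*)lParam;
            LONG cx = GetSystemMetrics(SM_CXSCREEN); // primary monitor size
            LONG cy = GetSystemMetrics(SM_CYSCREEN);
            if (info->pt.x < 0 || info->pt.x >= cx ||
                info->pt.y < 0 || info->pt.y >= cy) {
                LONG nx = info->pt.x, ny = info->pt.y;
                if (nx < 0) nx = 0; else if (nx >= cx) nx = cx - 1;
                if (ny < 0) ny = 0; else if (ny >= cy) ny = cy - 1;
                SetCursorPos(nx, ny); // clamp back onto the primary display
                return 1;             // swallow the offending movement
            }
        }
        return CallNextHookEx(NULL, code, wParam, lParam);
    }

    int main()
    {
        HHOOK hook = SetWindowsHookEx(WH_MOUSE_LL, MouseHookProc,
                                      GetModuleHandle(NULL), 0);
        MSG msg; // a low-level hook needs a message loop on its thread
        while (GetMessage(&msg, NULL, 0, 0) > 0) {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
        UnhookWindowsHookEx(hook);
        return 0;
    }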
I want to capture everything a window displays. On top of that, it would be very nice if that window didn't actually display anything on the screen. How? The process will call drawing functions; my function will hook them and draw somewhere else (for example, into a bitmap file), then return without actually drawing on screen.
What I know is that in the Windows NT architecture every thread has a system call table, and you can change the system call table of a single thread (or just set it at the beginning) to point to your own functions. By only changing the drawing API (GDI?) I feel that I can do it (I am not sure how I would survive if the application uses DirectX rendering, but maybe there is a way). Can I? What should I do if the application uses DirectX rendering?
Thanks in advance,
Ali Veli
I ended up hooking only the CreateDC-like functions, making them always create a memory device context and letting all the other functions draw on that memory DC.
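The redirection idea, in a minimal sketch (the hooking mechanism itself, e.g. IAT patching or Microsoft Detours, is omitted, and the helper name is mine):

    #include <windows.h>

    // Instead of a window DC, hand callers a memory DC backed by a bitmap,
    // so all subsequent GDI drawing lands in the bitmap, not on screen.
    HDC CreateRedirectedDC(int width, int height, HBITMAP* outBitmap)
    {
        HDC screen = GetDC(NULL);                // reference DC for compatibility
        HDC memDC  = CreateCompatibleDC(screen); // off-screen device context
        HBITMAP bmp = CreateCompatibleBitmap(screen, width, height);
        SelectObject(memDC, bmp);                // drawing now targets the bitmap
        ReleaseDC(NULL, screen);
        *outBitmap = bmp; // read or save this bitmap to capture the output
        return memDC;
    }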
I would like to create an OpenGL context with GLX inside a window. However, I do not want it to span over the whole window region. Instead, it should only cover a subregion.
For example, GLUT provides a function for this behaviour. Major toolkits like GTK+ or Qt also provide GL widgets that occupy only subregions of X windows. However, I need to work at a low level.
glXMakeCurrent() accepts a X Drawable identifier. Is it possible to define a Drawable as being a subregion of a window? Or are there other ways to bind the context to a window region?
GLX reference (Blue Book)
You can only glXMakeCurrent() an X Drawable, not a subsection of it. However, the solution is simple: stop thinking about an X window as if it were your application. Each X application is typically made up of tens or hundreds of X windows. Create a child window in the area you want and draw into it.
Alternatively, you could create a pixmap, render into it, and then copy it to an area of a window, but that would be slower.
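A minimal sketch of the child-window approach with raw Xlib/GLX, assuming dpy, parent, vi, and ctx came from the usual glXChooseVisual/glXCreateContext setup (error handling omitted):

    #include <X11/Xlib.h>
    #include <GL/glx.h>

    // Create a child window covering only a subregion of `parent` and use
    // it as the GLX drawable. A non-default visual needs its own colormap
    // and border pixel, or XCreateWindow fails with BadMatch.
    Window CreateGLSubwindow(Display* dpy, Window parent, XVisualInfo* vi,
                             int x, int y, unsigned w, unsigned h)
    {
        XSetWindowAttributes attr;
        attr.colormap     = XCreateColormap(dpy, parent, vi->visual, AllocNone);
        attr.border_pixel = 0;
        Window child = XCreateWindow(dpy, parent, x, y, w, h, 0,
                                     vi->depth, InputOutput, vi->visual,
                                     CWColormap | CWBorderPixel, &attr);
        XMapWindow(dpy, child);
        return child;
    }

    // Usage:
    //   Window sub = CreateGLSubwindow(dpy, parent, vi, 10, 10, 320, 240);
    //   glXMakeCurrent(dpy, sub, ctx); // GL renders only into the subregion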
I found this helpful piece of information in a BSD manpage:
In almost every regard that is important to you, a subwindow is like a top-level window. It has a window id; it has its own set of event callbacks; you can render to it; you are notified of its creation; ...
A subwindow lives inside of some other window (possibly a top-level window, possibly another subwindow). Because of this, it generally only interacts with other windows of your own creation, hence it is not subjected to a window manager. This is the primary source for its differences from a top-level window:
So I assume that GL widgets in popular toolkits are in fact also implemented as distinct (sub)windows. The interesting part is that this is transparent to the window manager, and therefore to the user.