Create GLX context in specific region of a window - x11

I would like to create an OpenGL context with GLX inside a window. However, I do not want it to span over the whole window region. Instead, it should only cover a subregion.
For example, GLUT provides a function for exactly this behaviour (glutCreateSubWindow). Major toolkits like GTK+ or Qt also provide GL widgets, which cover only subregions of X windows. However, I need to work at a low level.
glXMakeCurrent() accepts an X Drawable identifier. Is it possible to define a Drawable as a subregion of a window? Or are there other ways to bind the context to a window region?
GLX reference (Blue Book)

You can only glXMakeCurrent() an X Drawable, not a subsection of one. The solution, however, is simple: stop thinking of an X window as your whole application. An X application is typically made up of tens or hundreds of X windows. Create a child window covering the area you want and draw into it.
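A minimal sketch of that approach with Xlib and GLX might look like this (the function name is made up; error handling is omitted; dpy and parent are assumed to already exist):

    /* Sketch: create a child window inside `parent` and bind a GL context
       to it, so GL output covers only that subregion. */
    #include <X11/Xlib.h>
    #include <GL/glx.h>

    GLXContext create_gl_subregion(Display *dpy, Window parent,
                                   int x, int y, int w, int h) {
        int attribs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 16, None };
        XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attribs);

        XSetWindowAttributes swa;
        swa.colormap     = XCreateColormap(dpy, parent, vi->visual, AllocNone);
        swa.border_pixel = 0;   /* avoid BadMatch if the depth differs */

        /* The child window covers only the requested subregion. */
        Window child = XCreateWindow(dpy, parent, x, y, w, h, 0, vi->depth,
                                     InputOutput, vi->visual,
                                     CWColormap | CWBorderPixel, &swa);
        XMapWindow(dpy, child);

        GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);
        glXMakeCurrent(dpy, child, ctx);   /* render only into the child */
        return ctx;
    }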
Alternatively, you could create a pixmap, render into it, and then copy it to an area of the window, but that would be slower.

I found this helpful piece of information in a BSD manpage:
In almost every regard that is important to you, a subwindow is like a top-level window. It has a window id; it has its own set of event callbacks; you can render to it; you are notified of its creation; ...
A subwindow lives inside of some other window (possibly a top-level window, possibly another subwindow). Because of this, it generally only interacts with other windows of your own creation, hence it is not subjected to a window manager. This is the primary source for its differences from a top-level window:
So I assume that the GL widgets in popular toolkits in fact also act as distinct (sub)windows. The interesting part is that this is transparent to the window manager, and therefore to the user.

Related

What context should I refer to so I am able to BitBlt on top of any application or window? Is there a "general" context which refers to the display?

I'm very new to win32ui, basically just starting. I once used BitBlt with the Python win32api module, and as far as I remember, to draw on top of the display (over any application, if any are open) I had to get a specific context handle. But my memory is hazy on whether it was simply NULL or some specific context. NULL doesn't seem to work, so I wonder how to obtain that general context? I really want to avoid creating a fully transparent, non-blocking window.
The GetDC API allows you to get a device context for any given window. Alternatively,
If [hWnd] is NULL, GetDC retrieves the DC for the entire screen.
You can reliably use the device context for the entire screen to read from (with some restrictions). Rendering into a device context for a window you do not own won't be reliable, though. While it won't fail straight away, the window owner can overwrite your rendering at any point, and there's no way for you to even be notified about it.
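For the reading case, a minimal sketch might look like this (the function name is made up; error handling omitted):

    /* Sketch: copy a region of the screen into a memory DC for reading. */
    #include <windows.h>

    HBITMAP capture_screen_region(int x, int y, int w, int h) {
        HDC screen = GetDC(NULL);                   /* DC for the entire screen */
        HDC mem    = CreateCompatibleDC(screen);
        HBITMAP bmp = CreateCompatibleBitmap(screen, w, h);
        HBITMAP old = (HBITMAP)SelectObject(mem, bmp);

        BitBlt(mem, 0, 0, w, h, screen, x, y, SRCCOPY);  /* read, don't write */

        SelectObject(mem, old);
        DeleteDC(mem);
        ReleaseDC(NULL, screen);
        return bmp;                                 /* caller owns the bitmap */
    }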
If you need to render on top of the screen you will have to create a top-most (transparent) window yourself, and use its device context. Make sure you ask the question: What if two programs did this?

Fullscreen borderless window with 'owned' window on-top

I have two OpenGL windows: a main one and a smaller one that is set to be 'owned' by the main one (hWndParent is set in CreateWindowEx, but the WS_CHILD style is not set).
If I then convert my main window to be borderless and the same size as my desktop, it jumps in front of the smaller window, even though it's owned and that should not be possible (https://msdn.microsoft.com/en-us/library/windows/desktop/ms632599%28v=vs.85%29.aspx#owned_windows). This is true even if the smaller window is set to be always-on-top.
On its own this isn't terrible, but the core issue is that I can still click through my main window where the smaller window is, and the smaller window will pop in front. I can go back and forth between the two windows endlessly this way, by clicking on the main window and then clicking through it.
If I make the main window one pixel smaller than the full desktop size, none of these issues occur and the windows behave as expected.
I can't find any documentation that describes this behavior. Is it an undocumented feature to keep windows from going in front of full-screen content (such as video playback), or am I just missing something?
I'll mention I'm not using a layered or transparent window here, so I don't think click-through should even be possible.
What you are experiencing may very well be an OpenGL implementation bug, triggered by the heuristic by which the driver switches between "windowed" and "fullscreen" rendering. You see, for OpenGL there is no special "exclusive fullscreen mode" like Direct3D has. Instead, a borderless window that covers the whole screen and is not overlapped by foreign windows may trigger a "fullscreen" detection, which may make the OpenGL implementation in question switch to another code path (namely one where all pixel ownership tests are disabled and the framebuffer flips go directly to the display scanout, bypassing the windowing compositor).
What you are doing there is so uncommon that it may well have slipped through all conformance tests. Having child windows on an OpenGL window is uncommon in the first place, and having them float on top is even rarer.
If you've got a minimal example that showcases this, you should probably report it as a bug to the driver vendor. In the meantime I propose a workaround: make your OpenGL window a child window of your top-level window (this will of course require resizing it in the top-level window's WM_SIZE handler) and make your floating window another child of the top-level window; the z-order between children of a parent window is respected and kept. Being a child of a top-level window should inhibit most of these heuristics, since OpenGL drivers should not look at the border and size of an OpenGL window's parent.
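A rough sketch of that workaround (g_glWnd and g_floatWnd are illustrative names for the two children, both assumed to have been created with WS_CHILD under the top-level window):

    #include <windows.h>

    HWND g_glWnd, g_floatWnd;   /* both created with WS_CHILD */

    LRESULT CALLBACK TopLevelProc(HWND hwnd, UINT msg,
                                  WPARAM wParam, LPARAM lParam) {
        switch (msg) {
        case WM_SIZE:
            /* Keep the GL child covering the whole client area ... */
            MoveWindow(g_glWnd, 0, 0, LOWORD(lParam), HIWORD(lParam), TRUE);
            /* ... and keep the floating panel above it; the z-order
               between children of one parent is respected. */
            SetWindowPos(g_floatWnd, HWND_TOP, 10, 10, 200, 150, SWP_NOACTIVATE);
            return 0;
        }
        return DefWindowProc(hwnd, msg, wParam, lParam);
    }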

How to create an invisible X11 window for GPGPU?

Is it possible to create an invisible X window? To initialize an OpenGL ES 2.0 context, one has to create an X window manually, but I can't find a way to make it invisible. Since I'm only doing GPGPU, I don't need an output window; in fact, it is rather annoying in my case.
I'm aware of a solution from an earlier question, where it was suggested to use InputOnly in XCreateWindow(). This, however, leads to the X error GLXBadDrawable, probably because EGL requires the window to respond to graphics requests. Is there another way? Maybe create it minimized? I can't find anything on that either. Setting the window's size really small doesn't help either, since windows always occupy the whole screen on my device (a Nokia N9).
When you create an X window, it is created unmapped, so what about creating an InputOutput window and simply never mapping it? Another option, if the window must stay mapped, would be to move it off-screen.
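A minimal sketch of the unmapped-window idea (names are illustrative; whether eglCreateWindowSurface() accepts an unmapped window may depend on the EGL implementation, so treat this as something to try):

    /* Sketch: an InputOutput window that is never mapped. */
    #include <X11/Xlib.h>

    Window create_hidden_window(Display *dpy) {
        int scr = DefaultScreen(dpy);
        Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                         0, 0, 16, 16, 0,
                                         BlackPixel(dpy, scr),
                                         BlackPixel(dpy, scr));
        /* Deliberately no XMapWindow(dpy, win): the window stays
           invisible, but is still a valid native window handle to
           pass to eglCreateWindowSurface(). */
        return win;
    }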

How does a GUI Framework work?

I have been all over the web looking for an answer to this, and my question is this:
How does a GUI framework work? For instance, how does Qt work? Are there any books or websites on the topic of writing a GUI framework from scratch? And does such a framework have to call methods from the operating system's own GUI framework?
-- Thank you to anyone who takes the time to answer this question, and forgive me if I misspelled anything.
In the old days we did a lot of GUI programming from scratch. It is not as hard as it seems, but it requires a few weeks to come up with results.
First you need a good drawing library. The minimal functionality for this library is drawing clipped rectangles (using patterns), lines, bitmaps, and fonts. You can cheat by creating fonts as bitmaps, and a clipped rectangle is just a bunch of clipped horizontal lines.
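As a sketch, such a primitive could look like this, assuming a raw 32-bit framebuffer (all names are illustrative):

    /* Sketch: fill a rectangle clipped against a clip region and the
       framebuffer bounds. */
    typedef struct { int x, y, w, h; } rect_t;

    void fill_rect_clipped(unsigned *fb, int fb_w, int fb_h,
                           rect_t r, rect_t clip, unsigned color) {
        /* Intersect the rectangle with the clip region first. */
        int x0 = r.x > clip.x ? r.x : clip.x;
        int y0 = r.y > clip.y ? r.y : clip.y;
        int x1 = (r.x + r.w < clip.x + clip.w) ? r.x + r.w : clip.x + clip.w;
        int y1 = (r.y + r.h < clip.y + clip.h) ? r.y + r.h : clip.y + clip.h;
        if (x0 < 0) x0 = 0;
        if (y0 < 0) y0 = 0;
        if (x1 > fb_w) x1 = fb_w;
        if (y1 > fb_h) y1 = fb_h;

        /* A filled rectangle is just a bunch of clipped horizontal lines. */
        for (int y = y0; y < y1; y++)
            for (int x = x0; x < x1; x++)
                fb[y * fb_w + x] = color;
    }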
Now you need at least drivers for the mouse, the keyboard, and a timer (if not already provided by the operating system). In general, you will need to detect key presses, modifier keys (such as Shift), mouse moves, and mouse clicks. Basic timer functions will allow you to detect double clicks.
Then you need to create a window data structure. This data structure needs to have coordinates (i.e. a rectangle), a link to the parent window (if it is not the top window), and a window function, i.e. the function that will be called when this window should handle some event.
Once you can draw on screen, you need some rectangle algebra functions. You need at least a good function to calculate the intersection of two rectangles, and a quick resolution of relative to absolute coordinates. For example, if your child window has a parent, its x and y should recursively be added to the parent's x and y until you reach the top window.
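A sketch of such a window record and the coordinate resolution might look like this (rect_t is the rectangle type from the sketch above; the proc field is the window function):

    typedef struct window window_t;
    struct window {
        rect_t    rect;      /* position and size, relative to the parent */
        window_t *parent;    /* NULL for the top (desktop) window */
        window_t *children;  /* first child, if any */
        window_t *next;      /* next sibling */
        long    (*proc)(int id, window_t *self, long p1, long p2);
    };

    /* Resolve relative to absolute coordinates by walking up the parents. */
    void window_to_screen(const window_t *w, int *abs_x, int *abs_y) {
        *abs_x = 0;
        *abs_y = 0;
        for (; w != NULL; w = w->parent) {
            *abs_x += w->rect.x;
            *abs_y += w->rect.y;
        }
    }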
At this point you have your:
- primitive graphical functions,
- window structure,
- mouse driver, keyboard driver, and timer,
- rectangle arithmetic.
Now you can write your main event harvesting function. This function runs all the time. Its purpose is to detect events and send messages to the correct windows. What is an event? Well, when you start your program, store the mouse x and y coordinates. Then, in a loop, check whether they have changed. If they have changed, find the window at that position and send a WM_MOUSEMOVE event to it (see the sketch after this list). Your harvesting function should handle:
- mouse moves
- mouse clicks
- mouse double clicks (remember last click and position, measure time and decide if it is a double click or not)
- timer events
- keyboard buffer changes
...
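A sketch of that loop (mouse_x(), mouse_y() and the msg_* ids are illustrative; post_message is sketched in the next paragraph, find_window_at further below):

    void harvest_events(window_t *desktop) {
        int last_x = mouse_x(), last_y = mouse_y();
        for (;;) {
            int x = mouse_x(), y = mouse_y();
            if (x != last_x || y != last_y) {
                /* The mouse moved: notify the window under the cursor
                   (this is the WM_MOUSEMOVE from the text above). */
                window_t *target = find_window_at(desktop, x, y);
                if (target) post_message(msg_mousemove, target, x, y);
                last_x = x;
                last_y = y;
            }
            /* ... the same pattern detects clicks, double clicks (compare
               the time and position of the last click), key presses, and
               timer events. */
        }
    }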
Now you should be able to send events to windows, but you need a mechanism for it: a combination of a message queue and a window procedure. It usually works like this: each window has a window procedure, which commonly accepts four arguments: a message id (is it a mouse move, is it a paint message?), the window handle, parameter 1, and parameter 2. You can call this window procedure directly, using something like a send_message function, or you can send the window a message via a post_message function, which puts the message into the queue; the window processes messages one by one, eventually receiving that one. So why call some messages directly and put others into the queue? Because of priority. You see, a keyboard click can wait some time before being processed, but a window redraw must complete immediately to prevent flicker and stale data on screen.
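As a sketch, the two delivery paths could look like this (the queue helpers are illustrative; the argument order follows the text above, message id first):

    typedef struct { int id; window_t *hwnd; long p1, p2; } message_t;

    /* Synchronous: call the window procedure right away. Used where the
       work must complete immediately, e.g. a redraw. */
    long send_message(int id, window_t *hwnd, long p1, long p2) {
        return hwnd->proc(id, hwnd, p1, p2);
    }

    /* Asynchronous: append to the queue; the message pump delivers it
       later. Fine for input, which can tolerate a little latency. */
    void post_message(int id, window_t *hwnd, long p1, long p2) {
        queue_push(&message_queue, (message_t){ id, hwnd, p1, p2 });
    }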
So your harvest_events function sends messages to windows using post_message and send_message, and your window message pump picks them up with a typical loop like this:
    while ((pmsg = get_message()) != NULL) send_message(pmsg->id, pmsg->hwnd, pmsg->p1, pmsg->p2);
get_message simply takes a message from the queue, and the pump hands it to send_message. Simple, huh? Well, not quite. This way you would only deliver driver messages to windows, but you also need functions to redraw windows, move them, and so on. When you create move_window, resize_window, show_window, and hide_window functions, your window coordinates will change, and parts of other windows will be uncovered (if a top window is moved or closed). You need to calculate which windows are affected by the coordinate changes and send a paint message to those windows, to repaint only the parts that were uncovered (remember, you have clipped drawing functions, so this will work).
These functions introduce messages like msg_paint, msg_move, msg_resize, msg_hide, ...
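For example, move_window might be sketched like this (rect_union() and rects_intersect() stand in for the rectangle algebra from earlier):

    void move_window(window_t *w, int new_x, int new_y) {
        rect_t old = w->rect;
        w->rect.x = new_x;
        w->rect.y = new_y;

        /* Everything under the old rect or the new one needs repainting;
           the clipped drawing functions keep the cost down. */
        rect_t dirty = rect_union(old, w->rect);
        for (window_t *s = w->parent->children; s != NULL; s = s->next)
            if (rects_intersect(s->rect, dirty))
                post_message(msg_paint, s, 0, 0);
    }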
Last, you need to maintain the hierarchy of windows. Your top window should be the desktop. It has child windows (the applications' top windows), and these windows may have further child windows (buttons, edit boxes, etc.). The obvious structure for holding these is a window tree. When you detect a mouse click, you have to traverse the window tree, and do it in a smart way (finding out who has focus, who is modal, etc.), to send the message to the right window. And when you draw, you also must traverse all children to see which are uncovered and which are not. Last but not least, you need to treat the mouse cursor as a topmost window of its own, to keep it from flickering as windows are redrawn or (using timers and msg_paint events) animated.
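A sketch of that traversal for a mouse click, ignoring focus and modality:

    /* Depth-first hit test through the window tree. Coordinates are
       relative to the parent at each level. */
    window_t *find_window_at(window_t *w, int x, int y) {
        if (x < w->rect.x || y < w->rect.y ||
            x >= w->rect.x + w->rect.w || y >= w->rect.y + w->rect.h)
            return NULL;                 /* point is outside this window */

        /* Children are checked first; the parent is the fallback. */
        for (window_t *c = w->children; c != NULL; c = c->next) {
            window_t *hit = find_window_at(c, x - w->rect.x, y - w->rect.y);
            if (hit) return hit;
        }
        return w;
    }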
That's roughly it.
A GUI framework like Qt generally works by taking the existing OS's primitive objects (windows, fonts, bitmaps, etc), wrapping them in more platform-neutral and less clunky classes/structures/handles, and giving you the functionality you'll need to manipulate them. Yes, that almost always involves using the OS's own functions, but it doesn't HAVE to -- if you're designing an API to draw an OpenGL UI, for example, most of the underlying OS's GUI stuff won't even work, and you'll be doing just about everything on your own.
Either way, it's not for the faint of heart. If you have to ask how a GUI framework works, you're not even close to ready to design one. You're better off sticking with an existing framework and extending it to do the spiffy stuff it doesn't do already.

How do OpenGL contexts and device contexts work?

I'm new to UI programming, and I'm trying to get started with OpenGL. When I run an example program which creates a new OpenGL window with GLUT, it works fine. Good. However, in the context of another program, where I have to respond to Draw events (on Windows) with a device context passed to me, and where I might not have GLUT available, my confusion is this:
When is a device context created and destroyed? Can I draw to any device context given to me, or only some of them (and how do I know)?
Do I have to create my own OpenGL context and use that to draw to, or can I use a "current" OpenGL context? Do I have to re-create the context every time a draw event is sent?
Basically my question is, given a situation where I am sent "Draw" events, how often do I attempt to create OpenGL contexts and how does this relate to the creation/destruction cycle of device contexts?
In general, it's usually safe to think of a single OpenGL context as belonging to a single window, especially on Windows.
A device context typically maps to a window handle (HWND). It's actually a DC (an HDC is the handle to it), but normally you associate one HDC with a single HWND. On Windows, you'll create a device context based off the window on screen where you want to render.
Typically, you'll reuse this device context for the entire runtime of your application. If you want to render into a different window, you'll need to get a device context (HDC) for the new window handle. Offscreen rendering is a bit different, since you'd create a compatible device context for that as well.
As for your questions:
1) When you create the window where you want to do the rendering, you'll grab a device context, and use it for the lifetime of that window.
2) You'll want to always use the device context created for the window where you are rendering.
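For reference, a minimal sketch of the usual one-time WGL setup (error checking omitted; after this, a draw event typically needs only wglMakeCurrent if the context isn't already current, your GL calls, and SwapBuffers):

    #include <windows.h>
    #include <GL/gl.h>

    /* Create one GL context for `hwnd`, reused for every draw event. */
    HGLRC init_gl(HWND hwnd, HDC *out_dc) {
        HDC dc = GetDC(hwnd);            /* device context for this window */

        PIXELFORMATDESCRIPTOR pfd = {0};
        pfd.nSize      = sizeof(pfd);
        pfd.nVersion   = 1;
        pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
        pfd.iPixelType = PFD_TYPE_RGBA;
        pfd.cColorBits = 32;
        pfd.cDepthBits = 24;
        SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

        HGLRC rc = wglCreateContext(dc);
        wglMakeCurrent(dc, rc);
        *out_dc = dc;
        return rc;
    }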
