I'm creating an OpenGL application on Windows. I can't use GLUT because I would like to render to more than one window. I know how to do it using WGL, but it is very messy and I would like to know what is happening under the hood.
First I have to create a window with the desired pixel format. Then I connect this window to OpenGL and everything works. How does the driver know where to render? Where is the window data stored? I'm looking for some kind of explanation, but I can't find anything good.
I know how to do it using WGL, but it is very messy and I would like to know what is happening under the hood.
WGL is as "under the hood" as it gets. That is the interface for creating an OpenGL context from a HWND. You aren't allowed to get any more low-level.
How does the driver know where to render? Where is the window data stored?
The device context, the HDC, is how rendering gets done on a HWND. Note that wglMakeCurrent takes a HDC, which does not have to be the HDC the context was created from (it simply must use the same pixel format). So "where to render" comes from that function.
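Roughly, the messy WGL flow boils down to this minimal sketch (error handling omitted):

```cpp
#include <windows.h>

// Create an OpenGL context for a window: fix the window's pixel format
// through its HDC, then create and bind a context on that HDC.
HGLRC createContextFor(HWND hwnd)
{
    HDC hdc = GetDC(hwnd);

    PIXELFORMATDESCRIPTOR pfd = {};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    int format = ChoosePixelFormat(hdc, &pfd);
    SetPixelFormat(hdc, format, &pfd);   // fixes the window's pixel format

    HGLRC ctx = wglCreateContext(hdc);
    wglMakeCurrent(hdc, ctx);            // "where to render" is this HDC
    return ctx;
}
```

Because wglMakeCurrent takes the HDC separately, two windows that share a pixel format can reuse the one context: just call wglMakeCurrent with the second window's HDC before drawing to it.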
This stuff is all stored internally to Windows and the Windows Installable Client Driver model for OpenGL. You are not allowed to poke at it, modify it, or even look at it. You can simply use it.
I have a situation where I need to embed some 3rd party closed-source Unity applications into our own. I'm injecting a DLL which creates a DX11 shared texture from their swapchain. This part works and it's done.
Additionally, I want to hide the form wrapping the Unity app (luckily you can set its parent handle with a command-line argument) so I can have 100% control over what happens to its texture in our own app (and so it wouldn't interfere with the overall look of our own app). This also works fine; I get the texture without a problem even when the Unity form is completely off-screen.
Now my problem is that this Unity application needs to be used with multitouch, and after a fair amount of googling/Stack Overflow reading I concluded that there's no way (or I haven't found any way) to compose valid WM_POINTER* messages for just one window in Windows. (This is supported by the fact that you need to call a separate WinAPI function to get all the data of a pointer/touch based on the ID received in the lParam of a WM_POINTER* message.)
So I'm using the touch injection Windows API (InitializeTouchInjection and InjectTouchInput). Information about these APIs on the internet is misleading at best, but I've actually worked out all their quirks, and it works fine as long as the Unity form is visible on the screen, or in other words as long as the touch position is inside the screen boundaries.
And now, finally, the problem: when I specify an off-screen coordinate for the injected touches, I get an ERROR_INVALID_PARAMETER (87 / 0x57) system error. Otherwise it works. Is there a way to turn off this check in Windows? Or has anybody solved this problem some other way?
(Our app is not an end-user one, we have full control over the environment it runs inside, system-wide modifications are also OK.)
Thanks in advance!
You can't turn off this check: the error code is a return value produced inside the function, and it represents a failure of the call, after which the function returns without changing anything except the last-error code. If the check could be disabled, what would the status of the call be, success or failure?
You need to check the coordinates manually and decide what to do.
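For example, a minimal sketch of that workaround, clamping to the virtual screen before injecting (the helper name and the single-contact setup are just illustrative):

```cpp
#include <windows.h>

// Call once per process before injecting:
//   InitializeTouchInjection(1, TOUCH_FEEDBACK_DEFAULT);

// Injects one touch contact, clamping the coordinates to the virtual
// screen first, since InjectTouchInput rejects points outside it with
// ERROR_INVALID_PARAMETER.
bool InjectTouchAt(int x, int y, POINTER_FLAGS flags)
{
    int left   = GetSystemMetrics(SM_XVIRTUALSCREEN);
    int top    = GetSystemMetrics(SM_YVIRTUALSCREEN);
    int right  = left + GetSystemMetrics(SM_CXVIRTUALSCREEN) - 1;
    int bottom = top  + GetSystemMetrics(SM_CYVIRTUALSCREEN) - 1;
    if (x < left) x = left; else if (x > right)  x = right;
    if (y < top)  y = top;  else if (y > bottom) y = bottom;

    POINTER_TOUCH_INFO contact = {};
    contact.pointerInfo.pointerType       = PT_TOUCH;
    contact.pointerInfo.pointerId         = 0;
    contact.pointerInfo.ptPixelLocation.x = x;
    contact.pointerInfo.ptPixelLocation.y = y;
    // e.g. POINTER_FLAG_DOWN | POINTER_FLAG_INRANGE | POINTER_FLAG_INCONTACT
    contact.pointerInfo.pointerFlags      = flags;
    contact.touchFlags = TOUCH_FLAG_NONE;
    contact.touchMask  = TOUCH_MASK_NONE;

    return InjectTouchInput(1, &contact) != FALSE;
}
```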
In OpenGL I implicitly create a graphics context with something like GLUT when I create a window. Suppose I drag my window into a monitor driven by a different video card (e.g. Intel embedded graphics on one and NVidia on another). Who renders the window? I.e. which device runs the graphics pipeline for each of the cases below.
glGetString(GL_RENDERER) seems to always report the primary display (where the GLUT window was created), even if I drag the window fully onto one monitor or the other. (I am guessing it all gets done by the primary...) Can someone help me understand this?
Note, using Windows 10, GLUT, OpenGL, but I ask the questions in general if it matters.
GL knows nothing about windows, only about contexts. GL renders to the framebuffer in the current context.
You can code a way of asking the OS where the window is, create two contexts, and make the proper one current depending on the OS's answer.
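A sketch of that idea, assuming you have already created the two contexts and know the HMONITOR of the second display (all the names here are placeholders):

```cpp
#include <windows.h>

// Make the context matching the window's current monitor current.
// ctxPrimary/ctxSecondary must be compatible with the window's pixel format.
void makeContextCurrentForWindow(HWND hwnd, HDC hdc,
                                 HGLRC ctxPrimary, HGLRC ctxSecondary,
                                 HMONITOR monitorSecondary)
{
    // Ask the OS which monitor holds the largest part of the window.
    HMONITOR where = MonitorFromWindow(hwnd, MONITOR_DEFAULTTONEAREST);
    wglMakeCurrent(hdc, where == monitorSecondary ? ctxSecondary : ctxPrimary);
}
```

Call this whenever the window moves (e.g. on WM_MOVE), before rendering the next frame.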
What is the best way to access the rendering area of every single window in Microsoft Windows, so I can render them myself in 3D? I know the Desktop Window Manager (DWM) gives every window a render target and composites them all together on the screen.
There are tons of 3D window managers out there, but finding some code or other information is hard for me, so I'm asking you.
I would like to do it in OpenGL, but I can imagine it's only possible in DirectX; that's fine too.
Thanks in advance!
You have to use the operating system / graphics system specific API dedicated to that task. OpenGL is operating- and graphics-system agnostic and has no notion of "windows".
In Windows the API you must use is the DWM API. For Windows versions predating the DWM API (XP and earlier), some "dirty" hacks have to be employed to get the contents of the windows (essentially hooking the WM_PAINT and WM_ERASEBKGND messages, and the BeginPaint and EndPaint functions of GDI, to copy the windows' contents into a DIBSECTION that can be used for updating a texture).
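As a hedged sketch of the DWM route: the documented thumbnail API composites a live view of another window into yours, though it draws the pixels on screen rather than handing you a texture to sample, which is why full 3D window managers need deeper hooks:

```cpp
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

// Register a live DWM thumbnail of sourceWindow inside myWindow, drawn
// into the dest rectangle. Returns the thumbnail handle (release it with
// DwmUnregisterThumbnail when done).
HTHUMBNAIL showLiveThumbnail(HWND myWindow, HWND sourceWindow, RECT dest)
{
    HTHUMBNAIL thumb = nullptr;
    if (FAILED(DwmRegisterThumbnail(myWindow, sourceWindow, &thumb)))
        return nullptr;

    DWM_THUMBNAIL_PROPERTIES props = {};
    props.dwFlags       = DWM_TNP_RECTDESTINATION | DWM_TNP_VISIBLE;
    props.rcDestination = dest;
    props.fVisible      = TRUE;
    DwmUpdateThumbnailProperties(thumb, &props);
    return thumb;
}
```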
In X11 the API to use is XComposite plus the GLX_EXT_texture_from_pixmap extension.
I've heard about various methods of rendering to a window, but these all involve using something such as GDI+, DirectX, OpenGL, or some other library.
How do these libraries work, and how do they get pixels into a Window? Just out of curiosity, how hard is it to raw access a Window's image data?
Thanks.
That's a pretty broad question.
The various Windows subsystems that draw images interface with the video drivers, or use some combination of GDI+ and direct interfacing with the video drivers. How the drivers work is going to depend on the video card manufacturer.
I don't know what you mean by "raw access a Window's image data." You can capture a window's image into a bitmap, massage it, and write it back to the window's DC. But getting to the actual bits that Windows uses to render the bitmap would require digging into undocumented data structures. You'd have to know how to follow a window handle down to the low-level data structures that are maintained inside the GDI subsystem.
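For example, the documented route looks roughly like this (a sketch; PrintWindow asks the window to render into a DC you own):

```cpp
#include <windows.h>

// Copy a window's client area into a bitmap we own; the caller must
// free the returned bitmap with DeleteObject.
HBITMAP captureWindow(HWND hwnd)
{
    RECT rc;
    GetClientRect(hwnd, &rc);
    int w = rc.right - rc.left, h = rc.bottom - rc.top;

    HDC hdcWindow = GetDC(hwnd);
    HDC hdcMem    = CreateCompatibleDC(hdcWindow);
    HBITMAP bmp   = CreateCompatibleBitmap(hdcWindow, w, h);
    HGDIOBJ old   = SelectObject(hdcMem, bmp);

    // PW_CLIENTONLY skips the window frame; a BitBlt from hdcWindow is
    // the classic alternative for windows that won't render themselves.
    PrintWindow(hwnd, hdcMem, PW_CLIENTONLY);

    SelectObject(hdcMem, old);
    DeleteDC(hdcMem);
    ReleaseDC(hwnd, hdcWindow);
    return bmp;
}
```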
I need to display a full screen DirectX window from a Qt app.
Although DirectX isn't directly supported by Qt anymore, this should be easy enough: just subclass QWidget, provide your own paintEvent(), and set the WA_PaintOnScreen attribute, as in the sketch below.
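Something like this (renderD3DFrame is a hypothetical stand-in for your own Direct3D present call):

```cpp
#include <QWidget>
#include <windows.h>

// Hypothetical helper that presents a Direct3D frame onto the HWND.
void renderD3DFrame(HWND hwnd);

class DxWidget : public QWidget {
public:
    explicit DxWidget(QWidget* parent = nullptr) : QWidget(parent) {
        setAttribute(Qt::WA_PaintOnScreen); // we paint the surface ourselves
        setAttribute(Qt::WA_NativeWindow);  // force a real HWND for this widget
    }
    QPaintEngine* paintEngine() const override { return nullptr; } // no Qt painter
protected:
    void paintEvent(QPaintEvent*) override {
        renderD3DFrame(reinterpret_cast<HWND>(winId()));
    }
};
```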
But when the app is full screen, DirectX grabs all the mouse and keyboard input, so the only way out of the app is Ctrl-Alt-Del.
P.S. Even if I wrote DirectX keyboard handlers, I would still have to find a way of creating the correct QKeyEvent to pass to Qt.
Has anyone done this? Or is there a simple way to tell DirectX not to grab the keyboard?
To my knowledge Direct3D does not grab the keyboard. Your problem more likely arises from the fact that Direct3D in exclusive full-screen mode is quite a different beast. Things like GDI (which Qt may well use to do its rendering) do not work by default, and the runtime hooks lots of things, so that information presumably never manages to reach Qt. The options you have are to re-implement Qt to render using Direct3D (the Lighthouse project?) or to use a pseudo full screen. The latter is usually done by creating a window whose client area is the same size as the screen and then positioning it correctly.
The latter would probably be the simplest solution ...
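A sketch of that pseudo full screen in Qt (this uses the Qt 5 QScreen API; on Qt 4 you would query QDesktopWidget instead):

```cpp
#include <QWidget>
#include <QScreen>
#include <QGuiApplication>

// Frameless top-level window covering the primary screen. Direct3D stays
// in windowed mode, so it never takes exclusive control of input.
void makePseudoFullscreen(QWidget* w)
{
    w->setWindowFlags(Qt::Window | Qt::FramelessWindowHint);
    w->setGeometry(QGuiApplication::primaryScreen()->geometry());
    w->show();
}
```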
There was an attempt to get a D3DWidget kind of thing into Qt 4.3-4.5 or something like that, but it was never stabilized or approved, and it was later even removed.
Perhaps Lighthouse is indeed an option (a medium-sized amount of work; it basically links OS/DirectX plumbing to Qt), or you can take a look at the old Direct3D code in older Qt branches. I never used it, and it probably isn't meant to be used with recent versions of Qt, but it's better than nothing.