I've heard about various methods of rendering to a window, but they all involve using something such as GDI+, DirectX, OpenGL, or some other library.
How do these libraries work, and how do they get pixels into a window? Just out of curiosity, how hard is it to get raw access to a window's image data?
Thanks.
That's a pretty broad question.
The various Windows subsystems that draw images interface with the video drivers, or use some combination of GDI+ calls and direct interaction with the video drivers. How the drivers work depends on the video card manufacturer.
I don't know what you mean by "raw access to a window's image data." You can capture a window's image into a bitmap, massage it, and write it back to the window's DC. But getting to the actual bits that Windows uses to render the bitmap would require digging into undocumented data structures. You'd have to know how to follow a window handle down to the low-level data structures that are maintained inside the GDI subsystem.
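As a rough illustration of that capture-and-write-back idea, here is a minimal GDI sketch; CaptureAndWriteBack is a made-up helper name and error handling is omitted:

    // Minimal sketch: copy a window's client area into a memory bitmap,
    // (optionally) massage the pixels, then blit the result back to the
    // window's DC. 'hwnd' is assumed to be a valid window handle.
    #include <windows.h>

    void CaptureAndWriteBack(HWND hwnd)
    {
        RECT rc;
        GetClientRect(hwnd, &rc);
        int width  = rc.right - rc.left;
        int height = rc.bottom - rc.top;

        HDC hdcWindow = GetDC(hwnd);                    // the window's DC
        HDC hdcMem    = CreateCompatibleDC(hdcWindow);  // off-screen DC
        HBITMAP hbm   = CreateCompatibleBitmap(hdcWindow, width, height);
        HGDIOBJ old   = SelectObject(hdcMem, hbm);

        // Capture: window DC -> memory bitmap.
        BitBlt(hdcMem, 0, 0, width, height, hdcWindow, 0, 0, SRCCOPY);

        // ... massage the bitmap here, e.g. via GetDIBits/SetDIBits ...

        // Write back: memory bitmap -> window DC.
        BitBlt(hdcWindow, 0, 0, width, height, hdcMem, 0, 0, SRCCOPY);

        SelectObject(hdcMem, old);
        DeleteObject(hbm);
        DeleteDC(hdcMem);
        ReleaseDC(hwnd, hdcWindow);
    }

Anything below that level (how those bits reach the screen) is the driver's business.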
Related
What is the best way to access the rendering area of every single window in Microsoft Windows, so I can render them myself in 3D? I know the Desktop Window Manager (DWM) gives every window a render target and renders them all together on the screen.
There are tons of 3D window managers out there, but finding some code or other information is hard for me, so I'm asking you.
I would like to do it in OpenGL, but I can imagine it may only be possible in DirectX; that's fine too.
Thanks in advance!
You have to use the operating system / graphics system specific API dedicated to that task. OpenGL is operating system and graphics system agnostic and has no notion of "windows".
On Windows the API to use is the DWM API. For Windows versions predating the DWM API (XP and earlier) some "dirty" hacks have to be employed to get at the contents of the windows (essentially hooking the WM_PAINT and WM_ERASEBACKGROUND messages, and the BeginPaint and EndPaint functions, to copy the windows' contents into a DIBSECTION that can be used for updating a texture).
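To make the DIBSECTION route a bit more concrete, here is a rough sketch. It substitutes PrintWindow for the message/function hooking described above, just to keep it short; WindowToTexture is a made-up name and error handling is omitted:

    // Sketch: copy a window's contents into a 32-bit DIB section and upload
    // the pixels as an OpenGL texture. The WM_PAINT/BeginPaint/EndPaint
    // hooking described above is replaced by PrintWindow to keep this short.
    #include <windows.h>
    #include <GL/gl.h>

    #ifndef GL_BGRA_EXT
    #define GL_BGRA_EXT 0x80E1
    #endif

    void WindowToTexture(HWND hwnd, GLuint tex)   // made-up helper name
    {
        RECT rc;
        GetWindowRect(hwnd, &rc);
        int width  = rc.right - rc.left;
        int height = rc.bottom - rc.top;

        BITMAPINFO bmi = {};
        bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
        bmi.bmiHeader.biWidth       = width;
        bmi.bmiHeader.biHeight      = -height;    // negative = top-down rows
        bmi.bmiHeader.biPlanes      = 1;
        bmi.bmiHeader.biBitCount    = 32;
        bmi.bmiHeader.biCompression = BI_RGB;

        void*   bits   = nullptr;
        HDC     hdcMem = CreateCompatibleDC(nullptr);
        HBITMAP dib    = CreateDIBSection(hdcMem, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
        HGDIOBJ old    = SelectObject(hdcMem, dib);

        PrintWindow(hwnd, hdcMem, 0);   // ask the window to render into our DC
        GdiFlush();                     // ensure GDI is done writing 'bits'

        glBindTexture(GL_TEXTURE_2D, tex);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                     GL_BGRA_EXT, GL_UNSIGNED_BYTE, bits);   // rows are top-down

        SelectObject(hdcMem, old);
        DeleteObject(dib);
        DeleteDC(hdcMem);
    }

Note that this stages the pixels through system memory; it is not the zero-copy GPU path the DWM itself uses.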
In X11 the API to use is XComposite plus the GLX_EXT_texture_from_pixmap extension.
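A condensed sketch of that X11 path, assuming dpy, win, tex, and an fbconfig chosen with GLX_BIND_TO_TEXTURE_RGBA_EXT support are already in hand (error handling omitted):

    // Sketch: redirect a window off-screen with XComposite, wrap its backing
    // pixmap in a GLXPixmap, and bind it as the current GL_TEXTURE_2D via
    // GLX_EXT_texture_from_pixmap. Picking an fbconfig that advertises
    // GLX_BIND_TO_TEXTURE_RGBA_EXT (and all error handling) is omitted.
    #include <X11/extensions/Xcomposite.h>
    #include <GL/glx.h>

    GLXPixmap bind_window_as_texture(Display* dpy, Window win,
                                     GLXFBConfig fbconfig, GLuint tex)
    {
        XCompositeRedirectWindow(dpy, win, CompositeRedirectAutomatic);
        Pixmap pix = XCompositeNameWindowPixmap(dpy, win);

        const int attribs[] = {
            GLX_TEXTURE_TARGET_EXT, GLX_TEXTURE_2D_EXT,
            GLX_TEXTURE_FORMAT_EXT, GLX_TEXTURE_FORMAT_RGBA_EXT,
            None
        };
        GLXPixmap glxpix = glXCreatePixmap(dpy, fbconfig, pix, attribs);

        // Resolved once via glXGetProcAddress in real code.
        PFNGLXBINDTEXIMAGEEXTPROC glXBindTexImageEXT =
            (PFNGLXBINDTEXIMAGEEXTPROC)glXGetProcAddress(
                (const GLubyte*)"glXBindTexImageEXT");

        glBindTexture(GL_TEXTURE_2D, tex);
        glXBindTexImageEXT(dpy, glxpix, GLX_FRONT_LEFT_EXT, NULL);
        // 'tex' now samples the live contents of 'win'.
        return glxpix;
    }

Unlike the Windows DIB-section approach, the pixmap stays on the GPU, which is how X11 compositors keep this fast.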
I presume the DWM holds the bitmap data of each rendered window on the GPU. Can I access this data? I want to use it as a texture in D3D (or preferably OpenGL). Screenshotting each window to RAM and back to the GPU is too slow.
I've seen other posts like: obtaining full desktop screenshot from the GPU
so I'm doubtful, but maybe something has changed in the last 3 years.
Edit
So do all applications use Direct3D to draw all their components? Would, say, this Chrome browser's content, or File Explorer's, or anything else exist as an image on the graphics card, or are only borders and such rendered through Direct3D/Direct2D? I want to make sure before pursuing this. BTW: my idea is a desktop for the Rift without running an alternate shell.
I'm creating an OpenGL application on Windows. I can't use GLUT, because I would like to render to more than one window. I know how to do it by using wgl, but it is very messy and I would like to know what is happening under the hood.
First I have to create a window with the desired pixel format. Then I connect this window to OpenGL and everything works. How does the driver know where to render? Where is the window data stored? I'm looking for some kind of explanation, but I can't find anything good.
I know how to do it by using wgl, but it is very messy and I would like to know what is happening under the hood.
WGL is as "under the hood" as it gets. That is the interface for creating an OpenGL context from an HWND. You aren't allowed to get any more low-level.
How does the driver know where to render? Where is the window data stored?
The device context, the HDC, is how rendering gets done on an HWND. Note that wglMakeCurrent takes an HDC, which does not have to be the HDC the context was created from (it simply must use the same pixel format). So "where to render" comes from that function.
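A minimal sketch of that point, assuming both windows have already been given the same pixel format (SetupPixelFormat is a stand-in for the usual ChoosePixelFormat/SetPixelFormat steps):

    // Sketch: one context created from window A's DC rendering into both
    // window A and window B through wglMakeCurrent. SetupPixelFormat is a
    // stand-in for ChoosePixelFormat/SetPixelFormat and must pick the same
    // pixel format for both windows.
    #include <windows.h>
    #include <GL/gl.h>

    void RenderToTwoWindows(HWND hwndA, HWND hwndB)
    {
        HDC dcA = GetDC(hwndA);
        HDC dcB = GetDC(hwndB);
        // SetupPixelFormat(dcA); SetupPixelFormat(dcB);

        HGLRC ctx = wglCreateContext(dcA);   // created from window A's DC

        wglMakeCurrent(dcA, ctx);            // "where to render" = window A
        glClearColor(1.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        SwapBuffers(dcA);

        wglMakeCurrent(dcB, ctx);            // same context, now window B
        glClearColor(0.0f, 0.0f, 1.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        SwapBuffers(dcB);

        wglMakeCurrent(nullptr, nullptr);
        wglDeleteContext(ctx);
        ReleaseDC(hwndA, dcA);
        ReleaseDC(hwndB, dcB);
    }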
This stuff is all stored internally by Windows and the Windows Installable Client Driver (ICD) model for OpenGL. You are not allowed to poke at it, modify it, or even look at it. You can simply use it.
This is basically a scoping question: where would I look for the facility to render window contents to 3D surfaces and manipulate them? I mean, can I have a program, like a shell, that composites live windows in 3D (like the Vista DWM's 3D task switcher) and translates UI interactions back into 2D interactions for each window?
I've seen mention of DWM extensions here and there on the web, but I can't find any resources on how that would work. There are also guides to implementing DWM thumbnails by relating two windows' HWNDs, but that doesn't allow me to do arbitrary transforms on the window and is display only (you can't click on stuff in the thumbnail).
Any ideas?
There is no interface for manipulating the DWM's 3D visuals. They are internal to the DWM.
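For reference, the thumbnail relationship mentioned in the question is about the only documented hook into the DWM's composition, and it really is display only. A minimal sketch of registering one (ShowLiveThumbnail is a made-up helper; error handling omitted):

    // Sketch: relate a source window to a destination window through the DWM
    // thumbnail API. The DWM composites the live contents into 'rcDest' of the
    // destination window; no arbitrary 3D transforms, no input redirection.
    #include <windows.h>
    #include <dwmapi.h>
    #pragma comment(lib, "dwmapi.lib")

    HTHUMBNAIL ShowLiveThumbnail(HWND destination, HWND source, RECT rcDest)
    {
        HTHUMBNAIL thumb = nullptr;
        DwmRegisterThumbnail(destination, source, &thumb);

        DWM_THUMBNAIL_PROPERTIES props = {};
        props.dwFlags       = DWM_TNP_VISIBLE | DWM_TNP_RECTDESTINATION;
        props.fVisible      = TRUE;
        props.rcDestination = rcDest;
        DwmUpdateThumbnailProperties(thumb, &props);

        return thumb;   // call DwmUnregisterThumbnail(thumb) when done
    }

Anything beyond placing that rectangle (rotation, perspective, input forwarding) is outside what the public API exposes.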
Is there any way to get the window handle of the window which is currently playing video? This is the only information my program will have.
updated to include info incorrectly provided as an answer
I think I should explain what exactly I want to achieve here.
I actually want to share/stream my DVD data to a remote machine. Currently I am capturing the screen/video into still frames and sending them to the remote system, but I don't want to see the playing video on my host machine. I can think of a few possible solutions:
1.) Capture the data of a hidden/minimized window.
I did some investigation and it seems this is not possible. Please add your thoughts.
2.) Convert the DVD data into a format ffmpeg can handle and stream it.
I have no idea whether we will be allowed to convert the data format. If most DVD formats allow conversion then I can go for this option, but I'm not sure how complicated it could be.
3.) Create some virtual surface, play the DVD data to that surface, and capture the contents of that surface.
Again, I'm not sure whether the DVD will play on a virtual/fake surface created by a kernel-mode driver.
There are probably three main playback engines used on Windows: DirectShow (WMP, MPC), ffmpeg (VLC, MPlayer), and QuickTime.
If you look closer at DirectShow you will see that it supports hardware overlays, windowed and windowless rendering, and Direct3D surfaces.
Even if you focus on a single app you are going to have problems since you don't know what kind of renderer is in use. You might be able to find a child window that always has the same position and dimensions as the video, but then you are relying on things that could change between versions etc.
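If you do go down the child-window route, the usual heuristic is something like the following (FindVideoWindow and the size-matching rule are my own assumptions; as noted above, this is fragile across player versions):

    // Heuristic sketch: walk a player's child windows and pick the one whose
    // client size matches the expected video dimensions. This relies on
    // implementation details of the player and can break between versions.
    #include <windows.h>

    struct FindVideoChild {
        SIZE expected;   // expected video width/height
        HWND found;
    };

    static BOOL CALLBACK EnumChildProc(HWND child, LPARAM lparam)
    {
        FindVideoChild* ctx = reinterpret_cast<FindVideoChild*>(lparam);
        RECT rc;
        GetClientRect(child, &rc);
        if (rc.right - rc.left == ctx->expected.cx &&
            rc.bottom - rc.top == ctx->expected.cy)
        {
            ctx->found = child;
            return FALSE;   // stop enumerating
        }
        return TRUE;        // keep looking
    }

    HWND FindVideoWindow(HWND player, SIZE expectedVideoSize)
    {
        FindVideoChild ctx = { expectedVideoSize, nullptr };
        EnumChildWindows(player, EnumChildProc, reinterpret_cast<LPARAM>(&ctx));
        return ctx.found;
    }

Even when this finds the right HWND, a renderer using hardware overlays or Direct3D may hand you a blank or black surface when you try to capture it.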