Detecting hardware overlay - Windows

How can I detect whether a given window has a hardware overlay surface? (like MediaPlayer, WinDVD, VLC, ...)

Overlay surfaces are not related to any system window; in principle, they are surfaces to be "composed" with the video framebuffer (or the primary display surface). What you can detect (depending on your API) is whether the hardware supports overlays, how many overlay planes it has, which pixel formats are supported (YUV, etc.), and so on. This can be done from DirectX and OpenGL, for example.
Many tasks done in the past with overlays can now be done with the composition support of modern window managers, e.g. Compiz, DWM in Vista, or Quartz in OS X. In fact, I think programming with raw overlay surfaces is discouraged now that window managers provide such composition facilities to application developers.
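If you're on DXGI, a quick capability check is possible. Here is a minimal sketch (assuming DXGI 1.3 on Windows 8.1 or later, linking dxgi.lib) that asks each output whether it supports hardware overlay planes via IDXGIOutput2::SupportsOverlays; error handling is reduced to the essentials:

```cpp
// Enumerate adapters/outputs and query hardware overlay support (DXGI 1.3+).
#include <dxgi1_3.h>
#include <wrl/client.h>
#include <cstdio>

using Microsoft::WRL::ComPtr;

int main() {
    ComPtr<IDXGIFactory2> factory;
    if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&factory))))
        return 1;

    ComPtr<IDXGIAdapter1> adapter;
    for (UINT a = 0; factory->EnumAdapters1(a, &adapter) != DXGI_ERROR_NOT_FOUND; ++a) {
        ComPtr<IDXGIOutput> output;
        for (UINT o = 0; adapter->EnumOutputs(o, &output) != DXGI_ERROR_NOT_FOUND; ++o) {
            ComPtr<IDXGIOutput2> output2;
            if (SUCCEEDED(output.As(&output2))) {
                // TRUE means this output can compose overlay planes in hardware.
                std::printf("adapter %u, output %u: overlays %s\n", a, o,
                            output2->SupportsOverlays() ? "supported" : "not supported");
            }
        }
    }
    return 0;
}
```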

Related

Procedurally generated GUI

I've developed an interactive audio visualization engine. I need to make its GUI scalable to various screen sizes with various PPIs (this includes both very large screens and mobile devices). The designer simply sent me a PSD with graphical representations of the supported widgets. I'm exporting these to PNGs. The problem is that those bitmaps are, of course, not scalable and look ugly.
I've thought of several ways to achieve a resolution- and PPI-independent GUI:
Export PNGs in various sizes and select the appropriate set at runtime (a waste of space simply for storing bitmaps in various resolutions)
Use scale-9 (9-slice) images only (no fancy stuff)
Use SVG (not supported by rendering APIs; I could use something like NanoVG for OpenGL, but then what do I do with a raw framebuffer? There are also performance problems, and it's too much complexity for what I need)
My idea is to pregenerate the bitmaps at runtime, once per device, and use them afterwards. Are there any specific libraries for that, and perhaps ready-made themes I could employ for now? I imagine such a tool could work similarly to the cairo graphics library or the JavaScript canvas, reading a command list and outputting a bitmap. Any other ideas?
One possible solution is this:
CPlayer is a procedural graphics player with an IMGUI toolkit. It can
be used for anything from quick demos, prototyping graphics apps, to
full-fledged apps and games.
http://luapower.com/cplayer.html
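If you end up rolling your own pregeneration tool, here is a minimal sketch (not from CPlayer) using cairo's C API, compiled as C++; the 2x scale factor and the button design are hypothetical stand-ins for a real PPI query and a theme command list:

```cpp
// Pregenerate a PPI-scaled widget bitmap at runtime and cache it as a PNG.
#include <cairo/cairo.h>

int main() {
    const double ui_scale = 2.0;  // would be derived from the device PPI at startup
    const int w = int(100 * ui_scale), h = int(32 * ui_scale);

    cairo_surface_t* surface =
        cairo_image_surface_create(CAIRO_FORMAT_ARGB32, w, h);
    cairo_t* cr = cairo_create(surface);

    // Draw in logical units; cairo scales to device pixels.
    cairo_scale(cr, ui_scale, ui_scale);

    // "Command list" for a plain button background: fill plus a 1px border.
    cairo_rectangle(cr, 0.5, 0.5, 99.0, 31.0);
    cairo_set_source_rgb(cr, 0.25, 0.25, 0.30);
    cairo_fill_preserve(cr);
    cairo_set_source_rgb(cr, 0.9, 0.9, 0.9);
    cairo_set_line_width(cr, 1.0);
    cairo_stroke(cr);

    // Cache the result; reuse it for the lifetime of the app on this device.
    cairo_surface_write_to_png(surface, "button.png");

    cairo_destroy(cr);
    cairo_surface_destroy(surface);
    return 0;
}
```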

Store resource in Direct2D on GPU

Is there some way to store a "scene" in Direct2D on the GPU?
I'm looking for something like ID2D1Mesh (i.e. storing the resource in vector format, not as a bitmap) but where I can configure whether the mesh/scene/resource should be rendered with anti-aliasing or not.
Rick is correct in that you can apply antialiasing at two different levels: either through Direct2D or through Direct3D. You can do both, but that's pointless and would only waste resources and lead to poor results. Direct2D antialiasing is suitable if you want per-primitive, geometry-aware antialiasing. Direct3D antialiasing is useful if you want to sacrifice a bit of quality for better overall performance in some scenarios.
The Direct2D 1.1 command list literally stores/records a list of drawing commands that can be played back against different targets. This may be what you're after, as it's not rasterized. Conceptually, it's like storing a vector image in device memory. Command lists are somewhat limited in that you cannot modify a command list once it has been created, and the resources being drawn may not be changed either, but it's still quite handy nonetheless.
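Here is a minimal sketch of that record/playback flow (assuming an existing ID2D1DeviceContext and brush; device creation and error handling are omitted):

```cpp
// Record drawing into an ID2D1CommandList, then replay it against any target.
#include <d2d1_1.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID2D1CommandList> RecordScene(ID2D1DeviceContext* dc, ID2D1Brush* brush) {
    ComPtr<ID2D1CommandList> list;
    dc->CreateCommandList(&list);

    // Redirect drawing into the command list instead of the real target.
    ComPtr<ID2D1Image> oldTarget;
    dc->GetTarget(&oldTarget);
    dc->SetTarget(list.Get());

    dc->BeginDraw();
    dc->DrawRectangle(D2D1::RectF(10, 10, 200, 150), brush, 2.0f);
    dc->EndDraw();

    list->Close();                   // the list is immutable from here on
    dc->SetTarget(oldTarget.Get());  // restore the original target
    return list;
}

// Later, replay against any compatible target:
//   dc->BeginDraw();
//   dc->DrawImage(commandList.Get());
//   dc->EndDraw();
```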
There is a way to get antialiasing with ID2D1Mesh, but it's non-trivial. You have to create the Direct3D device yourself and then use ID2D1Factory::CreateDxgiSurfaceRenderTarget(). This lets you configure the multisampling/antialiasing settings of the D3D device directly, and then meshes play along just fine (in fact, I think you'd just always tell Direct2D to use aliased rendering). I haven't done this myself, but there is an MSDN sample that shows how to do it. It's not for the faint of heart, and in order to do software rendering you have to initialize a WARP device. It does work, however.
Also, in Direct2D 1.1 (Windows 8, or Windows 7 + Platform Update), you can use the ID2D1CommandList interface for record/playback. I'm not sure whether that's implemented as "compile to GPU" (à la meshes) or whether it simply records and plays back the commands.
In Windows 8.1, Direct2D introduced geometry realizations, which let you store a tessellated version of a geometry and later render it back with or without anti-aliasing, just as you asked. These are highly recommended over meshes. Command lists, while convenient, don't have the same caching benefits as creating and storing geometry realizations yourself.
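A minimal sketch of the geometry-realization approach (assuming an existing ID2D1DeviceContext1, geometry, and brush; error handling omitted):

```cpp
// Cache a tessellated geometry on the GPU as an ID2D1GeometryRealization
// (Direct2D 1.2, Windows 8.1+).
#include <d2d1_2.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

ComPtr<ID2D1GeometryRealization> MakeRealization(ID2D1DeviceContext1* dc,
                                                 ID2D1Geometry* geometry) {
    ComPtr<ID2D1GeometryRealization> realization;
    // The flattening tolerance controls how finely curves are tessellated.
    dc->CreateFilledGeometryRealization(geometry,
                                        D2D1_DEFAULT_FLATTENING_TOLERANCE,
                                        &realization);
    return realization;
}

// Per the answer above, the cached realization can later be rendered with or
// without antialiasing; switch the context's antialias mode before drawing:
//   dc->SetAntialiasMode(D2D1_ANTIALIAS_MODE_PER_PRIMITIVE); // or _ALIASED
//   dc->DrawGeometryRealization(realization.Get(), brush);
```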

What's the fastest way to draw to a HWND on modern Windows?

When I last did this, you would use DirectDraw to blit to a hardware surface, or even map it and draw into it directly.
What is the recommended method to do this today? Use Direct3D 10/11 and do the same?
Edit: To clarify my question, I want to do some software rasterization and therefore need a fast way to blit pixel data directly to the display.
I would suggest using Direct2D, which is meant for desktop applications these days. Quote:
Purpose
Direct2D is a hardware-accelerated, immediate-mode, 2-D graphics API that provides high performance and high-quality rendering for 2-D geometry, bitmaps, and text. The Direct2D API is designed to interoperate well with GDI, GDI+, and Direct3D.
Requirements: Windows Vista and higher, as well as the respective server versions (if that is needed).
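As a concrete starting point, here is a minimal sketch (assuming an existing ID2D1Factory and a valid HWND) that uploads a CPU-rasterized frame into an ID2D1Bitmap and draws it to the window. In real code you would create the render target and bitmap once and refresh the pixels with ID2D1Bitmap::CopyFromMemory each frame; error handling is omitted:

```cpp
// Push software-rendered pixels to a window via Direct2D.
#include <d2d1.h>
#include <wrl/client.h>
#include <cstdint>

using Microsoft::WRL::ComPtr;

void Present(ID2D1Factory* factory, HWND hwnd,
             const uint32_t* pixels, UINT width, UINT height) {
    ComPtr<ID2D1HwndRenderTarget> rt;
    factory->CreateHwndRenderTarget(
        D2D1::RenderTargetProperties(),
        D2D1::HwndRenderTargetProperties(hwnd, D2D1::SizeU(width, height)),
        &rt);

    // Upload the software-rendered frame (32-bit BGRA, premultiplied alpha).
    ComPtr<ID2D1Bitmap> bitmap;
    rt->CreateBitmap(
        D2D1::SizeU(width, height), pixels, width * 4,
        D2D1::BitmapProperties(D2D1::PixelFormat(
            DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_PREMULTIPLIED)),
        &bitmap);

    rt->BeginDraw();
    rt->DrawBitmap(bitmap.Get());
    rt->EndDraw();
}
```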

Pixel-level manipulation - Windows

I've been using SDL to render graphics in C. I know there are several options to create graphics at the pixel level on Windows, including SDL and OpenGL. But how do these programs do it? Fine, I can use SDL. But I'd like to know what SDL is doing so I don't feel like an ignorant fool. Am I the only one slightly frustrated by the opaque layer of frosting on modern computers?
A short explanation as to how this is done on other operating systems would also be interesting, but I am most concerned with Windows.
Edit: Since this question seems to be somewhat unclear, this is precisely what I want:
I would like to know how pixel-level graphics manipulation (drawing on the screen pixel by pixel) works on Windows. What do libraries like SDL do with the operating system to allow this to happen? I can manipulate the screen pixel by pixel using SDL, so what magic happens in SDL to let me do this?
Windows has many graphics APIs. Some are layers built on top of others (e.g., GDI+ on top of GDI), and others are completely independent stacks (like the Direct3D family).
In an API like GDI, there are functions like SetPixel which let you change the value of a single pixel on the screen (or within a region of the screen that you have access to). But using SetPixel to set lots of pixels is generally slow.
If you were to build a photorealistic renderer, like a ray tracer, then you'd probably build up a bitmap in memory (pixel by pixel), and use an API like BitBlt that sends the entire bitmap to the screen at once. This is much faster.
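As a rough illustration of the difference, here is a minimal GDI sketch (hypothetical gradient fill, error handling omitted) that builds the frame in memory and pushes it to the screen in one call, using StretchDIBits rather than BitBlt to avoid setting up a memory DC; the slow per-pixel variant is left in a comment:

```cpp
// Per-pixel SetPixel versus one-shot blitting of a memory bitmap.
#include <windows.h>
#include <vector>
#include <cstdint>

void DrawGradient(HDC hdc, int width, int height) {
    // Slow: one GDI round trip per pixel.
    // for (int y = 0; y < height; ++y)
    //     for (int x = 0; x < width; ++x)
    //         SetPixel(hdc, x, y, RGB(x % 256, y % 256, 0));

    // Fast: fill a 32-bit BGRX buffer, then push it to the screen at once.
    std::vector<uint32_t> pixels(size_t(width) * height);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            pixels[size_t(y) * width + x] = (uint32_t(x % 256) << 16)   // red
                                          | (uint32_t(y % 256) << 8);   // green

    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize        = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth       = width;
    bmi.bmiHeader.biHeight      = -height;   // negative = top-down row order
    bmi.bmiHeader.biPlanes      = 1;
    bmi.bmiHeader.biBitCount    = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    StretchDIBits(hdc, 0, 0, width, height,  // destination rectangle
                  0, 0, width, height,       // source rectangle
                  pixels.data(), &bmi, DIB_RGB_COLORS, SRCCOPY);
}
```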
But it still may not be fast enough for rendering something like video. Moving all that data from system memory to the video card memory takes time. For video, it's common to use a graphics stack that's closer to the low-level graphics drivers and hardware. If the graphics card can do the video decompression directly, then sending the compressed video stream to the card will be much more efficient than sending the decompressed data from system memory to the video card--and that's often the limiting factor.
But conceptually, it's the same thing: you're manipulating a bitmap (or texture or surface or raster or ...), but that bitmap lives in graphics memory, and you're issuing commands to the GPU to set the pixels the way you want, and then to display that bitmap at some portion of the screen (often with some sort of transformation).
Modern graphics processors actually run little programs--called shaders--that can (among other things) do calculations to determine the pixel values. The GPUs are optimized to do these types of calculations and can do many of them in parallel. But ultimately, it boils down to getting the pixel values into some sort of bitmap in video memory.

Equivalent of Windows GDI regions in Linux

I sometimes use Windows GDI regions for graphics drawing and invalidation/validation. For example, the program http://www.maxerist.net/main/soft-for-win/tubicus (OSS) was made using only regions (no bitmaps or off-screen buffers). The drawing was done with FillRgn and FrameRgn, invalidation and painting with InvalidateRgn and CombineRgn, and every cell (see screenshot) is a simple region created with CreateEllipticRgn, CreatePolygonRgn, and CombineRgn.
I have plans to port it to Linux. As I understand it, there are many graphics libraries on Linux. Are there any that support Windows-like regions?
Thanks
You want to use the X Window System, a.k.a. X11, as your graphics platform. Its client library is called Xlib. The platform supports polygonal regions.
There are many libraries written on top of Xlib (GTK, Qt, wxWidgets, and more), but you can always use the low-level Xlib API directly alongside any of them. Qt even supports elliptic regions; I don't know the details, but I guess they're implemented on top of X11 polygonal regions.
Qt has a lot of options for painting, and using QPainter with QPainterPath objects might suit you well (there are samples in the Qt distribution). You can combine (add/intersect/subtract) paths.
The QGraphicsView framework is a good alternative too.
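As a small illustration of the QPainterPath suggestion, here is a hedged sketch (not from the answers) that builds a cell shape by subtracting one path from another, roughly mirroring CreateEllipticRgn plus CombineRgn(RGN_DIFF):

```cpp
// Combine QPainterPath shapes the way CombineRgn combines GDI regions.
#include <QApplication>
#include <QPainter>
#include <QPainterPath>
#include <QPen>
#include <QWidget>

class CellWidget : public QWidget {
protected:
    void paintEvent(QPaintEvent*) override {
        // An elliptic "cell" with a rectangular notch subtracted.
        QPainterPath ellipse;
        ellipse.addEllipse(10, 10, 120, 80);
        QPainterPath notch;
        notch.addRect(60, 10, 70, 40);
        QPainterPath cell = ellipse.subtracted(notch);  // also: united(), intersected()

        QPainter p(this);
        p.setRenderHint(QPainter::Antialiasing);
        p.fillPath(cell, Qt::darkCyan);           // FillRgn analogue
        p.strokePath(cell, QPen(Qt::black, 2));   // FrameRgn analogue
    }
};

int main(int argc, char** argv) {
    QApplication app(argc, argv);
    CellWidget w;
    w.resize(160, 110);
    w.show();
    return app.exec();
}
```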