Store resource in Direct2D on GPU

Is there some way to store a "scene" in Direct2D on the GPU?
I'm looking for something like ID2D1Mesh (i.e. storing the resource in vector format, not as a bitmap), but where I can configure whether the mesh/scene/resource should be rendered with anti-aliasing or not.

Rick is correct that you can apply antialiasing at two different levels: either through Direct2D or through Direct3D. You can do both, but that's pointless; it would only waste resources and lead to poor results. Direct2D antialiasing is suitable if you want per-primitive, geometry-aware antialiasing. Direct3D antialiasing is useful if you want to sacrifice a bit of quality for better overall performance in some scenarios.
The Direct2D 1.1 command list literally records a list of drawing commands that can be played back against different targets. This may be what you're after, as it's not rasterized; conceptually it's like storing a vector image in device memory. Command lists are somewhat limited in that you cannot modify one once it has been closed, and the resources being drawn may not be changed either, but they're still quite handy nonetheless.

There is a way to get antialiasing with ID2D1Mesh, but it's non-trivial. You have to create the Direct3D device yourself and then use ID2D1Factory::CreateDxgiSurfaceRenderTarget(). This allows you to configure the multisampling/antialiasing settings of the D3D device directly, and then meshes play along just fine (in fact, I think you'd just always tell Direct2D to use aliased rendering). I haven't done this myself, but there is an MSDN sample that shows how to do this. It's not for the faint of heart ... and in order to do software rendering you have to initialize a WARP device. It does work, however.
Also, in Direct2D 1.1 (Windows 8, or Windows 7 + Platform Update), you can use the ID2D1CommandList interface for record/playback. I'm not sure whether that's implemented as "compile to GPU" (à la mesh) or just as straight record/playback of commands.

In Windows 8.1, Direct2D introduced geometry realizations, which let you store a tessellated version of a geometry and later render it back with or without anti-aliasing, just as you asked. These are highly recommended over meshes. Command lists, while convenient, don't give you the same caching ability as creating and storing the geometry realizations yourself.

Related

2D graphics acceleration in SDL2 for Windows game development

I am using Pascal and SDL 2.0.5 for 2D game development for Windows, and as development progresses it starts to show frame-rate drops, especially since I draw particles every frame. I don't use graphics acceleration; I'm just using the SDL2 API. I want to ask you:
For Windows, should I choose OpenGL or Direct3D?
Will I see a significant change in my game's performance?
I don't use graphics acceleration.
Are you sure? SDL2 uses hardware acceleration by default unless you create your renderer explicitly with SDL_CreateSoftwareRenderer, or if you don't use SDL_Renderer in the first place.
For Windows, should I choose OpenGL or Direct3D?
If you do use SDL_Renderer, just create it with SDL_CreateRenderer, and SDL will use what it thinks is best for the platform it's running on.
Otherwise, it doesn't really matter, especially for a 2D game. While on one system, one API might outperform the other, on another system the opposite might be true. When you look at the big picture, both average out to be roughly the same in terms of performance. Since you don't care about portability, pick the one that looks easiest to you.
Will I see a significant change in my game's performance?
Yes. Hardware acceleration is faster in almost every case. The only exception that comes to mind is when you need to render frames pixel by pixel.

Anti-aliasing techniques for OpenGL ES profiles

I've been investigating and trialling different AA techniques in OpenGL on both desktop and ES profiles (ES 2.0 and above) for rendering 2D content to offscreen FBOs.
So far I've tried FSAA, FXAA, MSAA, and using the GL_LINE_SMOOTH feature.
I've not been able to find an adequate AA solution for ES profiles, where I've been limited to FSAA and FXAA because of API limitations. For example, glTexImage2DMultisample (required for MSAA) and the GL_LINE_SMOOTH functionality are unavailable.
FXAA is clever, but it blurs text glyphs to the point that it's doing more harm than good; it's only really suited to 3D scenes.
FSAA gives really good results, particularly at 2x supersampling, but consumes too much extra video memory for me to use on most of our hardware.
I'm asking whether anyone else has had much luck with other techniques in similar circumstances, i.e. rendering anti-aliased 2D content to off-screen FBOs using an OpenGL ES profile.
Please note: I know I can ask for multisampling of the default framebuffer when setting up the window via GL_MULTISAMPLE on an ES profile, but this is no good to me when rendering into off-screen FBOs, where AA must be implemented yourself.
If any of my above statements are incorrect then please do jump in & put me straight, it may help!
Thank you.
For example glTexImage2DMultisample (required for MSAA)
Why is it required? To do this "within spec", render to a multisampled renderbuffer and then use glBlitFramebuffer to resolve it to a single-sampled surface.
If you don't mind extensions, then many vendors implement this extension, which behaves like an EGL window surface with an implicit resolve. That's more efficient than the above, as it avoids the round trip to memory on tile-based hardware architectures.
https://www.khronos.org/registry/gles/extensions/EXT/EXT_multisampled_render_to_texture.txt
and GL_LINE_SMOOTH functionality are unavailable.
Do you have a compelling use case for lines? If you really need them, then just triangulate them; but lines are really not commonly used in the 3D content where MSAA really helps.

Procedurally generated GUI

I've developed an interactive audio-visualization engine. I need to make its GUI scalable to various screen sizes with various PPIs (this includes both very large screens and mobile devices). The designer simply sent me a PSD with a graphical representation of the supported widgets, which I'm exporting into PNGs. The problem is that those bitmaps are of course not scalable and look ugly.
I've thought about several ways how to achieve resolution and PPI independent GUI:
Export PNGs at various sizes and select the current set at runtime (a waste of space simply for storing bitmaps in various resolutions)
Use scale 9 images only (no fancy stuff)
Use SVG (not supported by rendering APIs; I could use something like nanovg for OpenGL, but what to do with a raw framebuffer then? Also, performance problems and too much complexity for what I need)
My idea is to pregenerate the bitmaps at runtime once for the specific device and use them afterwards. Are there any specific libraries for that, and maybe already-available themes which I could employ for now? I imagine such a tool could work similarly to the cairo graphics library or the JavaScript canvas, reading a command list and outputting a bitmap. Any other ideas?
One possible solution is this:
CPlayer is a procedural graphics player with an IMGUI toolkit. It can be used for anything from quick demos and prototyping graphics apps to full-fledged apps and games.
http://luapower.com/cplayer.html

What's the fastest way to draw to a HWND on modern Windows?

When I last did this you would use DirectDraw to blit to a hardware surface, or even directly map it and draw directly.
What is the recommended method to do this today? Use Direct3D 10/11 and do the same?
Edit: To clarify my question, I want to do some software rasterization and therefore need a fast way to blit pixel data directly to the display.
I would suggest using Direct2D, which is meant for desktop applications these days. Quote:
Purpose
Direct2D is a hardware-accelerated, immediate-mode, 2-D graphics API that provides high performance and high-quality rendering for 2-D geometry, bitmaps, and text. The Direct2D API is designed to interoperate well with GDI, GDI+, and Direct3D.
Requirements: Vista and higher as well as the respective server versions (if that is needed).

How to work with pixels using Direct2D

Could somebody provide an example of an efficient way to work with pixels using Direct2D?
For example, how can I swap all green pixels (RGB = 0x00FF00) with red pixels (RGB = 0xFF0000) on a render target? What is the standard approach? Is it possible to use ID2D1HwndRenderTarget for that? Here I assume using some kind of hardware acceleration. Should I create a different object for direct pixels manipulations?
Using DirectDraw I would use BltFast method on the IDirectDrawSurface7 with logical operation. Is there something similar with Direct2D?
Another task is to generate complex images dynamically, where each point's location and color is the result of a mathematical function. For the sake of an example, let's simplify everything and draw Y = X ^ 2. How would I do that with Direct2D? Ultimately I'm going to need to draw complex functions, but a simple example for Y = X ^ 2 would get me started.
First, it helps to think of ID2D1Bitmap as a "device bitmap". It may or may not live in local, CPU-addressable memory, and it doesn't give you any convenient (or at least fast) way to read/write the pixels from the CPU side of the bus. So approaching from that angle is probably the wrong approach.
What I think you want is a regular WIC bitmap, IWICBitmap, which you can create with IWICImagingFactory::CreateBitmap(). From there you can call Lock() to get at the buffer, and then read/write using pointers and do whatever you want. Then, when you need to draw it on-screen with Direct2D, use ID2D1RenderTarget::CreateBitmap() to create a new device bitmap, or ID2D1Bitmap::CopyFromMemory() to update an existing device bitmap. You can also render into an IWICBitmap by making use of ID2D1Factory::CreateWicBitmapRenderTarget() (not hardware accelerated).
You will not get hardware acceleration for these types of operations. The updated Direct2D in Win8 (should also be available for Win7 eventually) has some spiffy stuff for this but it's rather complex looking.
Rick's answer talks about the methods you can use if you don't care about losing hardware acceleration. I'm focusing on how to accomplish this using a substantial amount of GPU acceleration.
In order to keep your rendering hardware-accelerated and to get the best performance, you are going to want to switch from ID2D1HwndRenderTarget to the newer ID2D1Device and ID2D1DeviceContext interfaces. It honestly doesn't add that much more logic to your code, and the performance benefits are substantial. It also works on Windows 7 with the Platform Update. To summarize the process:
Create a DXGI factory when you create your D2D factory.
Create a D3D11 device and a D2D device to match.
Create a swap chain using your DXGI factory and the D3D device.
Ask the swap chain for its back buffer and wrap it in a D2D bitmap.
Render like before, between calls to BeginDraw() and EndDraw(). Remember to unbind the back buffer and destroy the D2D bitmap wrapping it!
Call Present() on the swap chain to see the results.
Repeat from 4.
Once you've done that, you have unlocked a number of possible solutions. Probably the simplest and most performant way to solve your exact problem (swapping color channels) is to use the color matrix effect, as one of the other answers mentioned. It's important to recognize that you need the newer ID2D1DeviceContext interface rather than ID2D1HwndRenderTarget to get this, however. There are lots of other effects that can do more complicated operations if you so choose. Here are some of the most useful ones for simple pixel manipulation:
Color matrix effect
Arithmetic operation
Blend operation
For generally solving the problem of manipulating the pixels directly without dropping hardware acceleration or doing tons of copying, there are two options. The first is to write a pixel shader and wrap it in a completely custom D2D effect. It's more work than just getting the pixel buffer on the CPU and doing old-fashioned bit mashing, but doing it all on the GPU is substantially faster. The D2D effects framework also makes it super simple to reuse your effect for other purposes, combine it with other effects, etc.
For those times when you absolutely have to do CPU pixel manipulation but still want a substantial degree of acceleration, you can manage your own mappable D3D11 textures. For example, you can use staging textures if you want to asynchronously manipulate your texture resources from the CPU. There is another answer that goes into more detail. See ID3D11Texture2D for more information.
The specific issue of swapping all green pixels with red pixels can be addressed via ID2D1Effect as of Windows 8 and Platform Update for Windows 7.
More specifically, Color matrix effect.
