Recently I have been investigating using Direct2D (D2D) instead of GDI+ to draw cached bitmaps. I figured that since D2D is hardware accelerated I would get much lower drawing times (the API also seemed fairly friendly). My prototyping, however, indicates I should stick with GDI+ for now: it is actually faster for this particular scenario.
In my prototype I have an MFC app which draws about 1000 icons to a canvas in the OnPaint function. The icons are not all different; there are probably fewer than 20 distinct types. I initialise both GDI+ and D2D from the GDI drawing context. The first access to each bitmap comes at a cost; after that they are cached. I store them as maps of CachedBitmaps and ID2D1Bitmaps.
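As a rough sketch of the caching, this is the shape of it (IconId and LoadIconBitmap are illustrative placeholders, not real API; CComPtr is ATL's smart pointer):

    std::map<IconId, CComPtr<ID2D1Bitmap>> m_d2dCache;

    ID2D1Bitmap* GetCachedBitmap(IconId id, ID2D1RenderTarget* rt)
    {
        auto it = m_d2dCache.find(id);
        if (it != m_d2dCache.end())
            return it->second;  // cache hit: no creation cost

        // Cache miss: pay the one-time cost of creating the D2D bitmap
        // (e.g. via a WIC format converter and CreateBitmapFromWicBitmap).
        CComPtr<ID2D1Bitmap> bitmap = LoadIconBitmap(id, rt);
        m_d2dCache[id] = bitmap;
        return bitmap;
    }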
My feeling is that at this level (drawing bitmaps) the slowness is caused by interop between D2D and GDI (I believe this results in copying the results of the D2D rendering back to the GDI context). I don't know at what level GDI+ is hardware accelerated, but I presume BitBlt is...
Can anyone shed any more light on my findings?
I'm using Direct2D for some simple accelerated image compositing/manipulation, and now need to get the pixel data from the RenderTarget to pass it to an encoder.
So far, I've managed this by rendering to a BitmapRenderTarget, then finally drawing the bitmap from that onto a WicBitmapRenderTarget which allows me to Lock an area and get a pointer to the pixels.
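Roughly like this (a simplified sketch; m_wicBitmap stands for the IWICBitmap backing the WIC render target, and error handling is elided):

    WICRect rect = { 0, 0, width, height };
    CComPtr<IWICBitmapLock> lock;
    HRESULT hr = m_wicBitmap->Lock(&rect, WICBitmapLockRead, &lock);
    if (SUCCEEDED(hr))
    {
        UINT stride = 0, bufferSize = 0;
        BYTE* pixels = nullptr;
        lock->GetStride(&stride);
        lock->GetDataPointer(&bufferSize, &pixels);
        // pixels now points at bufferSize bytes of pixel data,
        // stride bytes per row -- this is what goes to the encoder.
    }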
However...
This only works if my initial RenderTarget uses D2D1_RENDER_TARGET_TYPE_SOFTWARE, because a hardware render target's bitmap can't be 'shared' with the WicBitmapRenderTarget, which only supports software rendering. The software rendering seems significantly slower than hardware.
Is there any way round this? Would I be better off using Direct3D instead?
Some background:
I have an existing OS X card game app that uses OpenGL.
The window is resizable, and a 4:3 aspect ratio is always maintained.
When the window is resized, the OpenGL view is resized accordingly, and all visual elements are scaled with it; i.e. the cards maintain their relative sizes and distances from each other.
I'm interested in moving the code to a system that either uses Sprite Kit, or one predominantly based on Core Animation layers. Sprite Kit is more attractive to me in terms of feature set for my needs, but...
... I am concerned about Sprite Kit performance (or rather, needless power drain, particularly on battery-powered Macs) for a game that essentially blasts the same textures to the screen at 60 fps, even when nothing much is happening. (Most of the time, the cards are static as the player ponders their next move.)
To reduce some of the (repetitive) drawing required, particularly at very large window sizes (e.g. fullscreen on a 30" monitor), I'm interested in using a "dirty rects/region" or "as-required" drawing system.
Question:
Does Sprite Kit provide some kind of dirty-rect drawing system, or the ability to implement such a drawing system? (Or, is it basically going to draw everything over and over at 60fps, regardless of the need to redraw?)
SK is an OpenGL renderer, so naturally it will redraw its contents every frame. That, however, doesn't make it slow. The dirty-rect drawing of UI frameworks is a way to improve performance and reduce power consumption, but they have to use this approach because rendering in UI frameworks is typically a lot slower (often not hardware accelerated) than in an OpenGL renderer.
On the other hand, SK can be slower frame over frame if the rendered scene's complexity is extreme. But that sounds highly unlikely for a card game.
Generally you shouldn't concern yourself with performance until you've written some code to test it with. Premature optimization and all that...
When I last did this, you would use DirectDraw to blit to a hardware surface, or even map it directly and draw into it.
What is the recommended method to do this today? Use Direct3D 10/11 and do the same?
Edit: To clarify my question, I want to do some software rasterization and therefore need a fast way to blit pixel data directly to the display.
I would suggest using Direct2D, which is meant for desktop applications these days. Quote:
Purpose
Direct2D is a hardware-accelerated, immediate-mode, 2-D graphics API that provides high performance and high-quality rendering for 2-D geometry, bitmaps, and text. The Direct2D API is designed to interoperate well with GDI, GDI+, and Direct3D.
Requirements: Vista and higher as well as the respective server versions (if that is needed).
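Since your stated goal is software rasterization, here is a minimal sketch of the blit path, assuming you already have an ID2D1HwndRenderTarget (m_renderTarget) and a BGRA buffer (pixels, with stride bytes per row) from your own rasterizer:

    // Upload the software-rendered buffer into a D2D bitmap...
    D2D1_BITMAP_PROPERTIES props = D2D1::BitmapProperties(
        D2D1::PixelFormat(DXGI_FORMAT_B8G8R8A8_UNORM, D2D1_ALPHA_MODE_IGNORE));
    CComPtr<ID2D1Bitmap> bitmap;
    m_renderTarget->CreateBitmap(
        D2D1::SizeU(width, height), pixels, stride, props, &bitmap);

    // ...and draw it; D2D handles the transfer to the display.
    m_renderTarget->BeginDraw();
    m_renderTarget->DrawBitmap(bitmap);
    HRESULT hr = m_renderTarget->EndDraw();

For per-frame updates you can keep the bitmap around and refresh it with ID2D1Bitmap::CopyFromMemory rather than recreating it every frame.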
I've been using SDL to render graphics in C. I know there are several options for creating graphics at the pixel level on Windows, including SDL and OpenGL. But how do these libraries do it? Fine, I can use SDL. But I'd like to know what SDL is doing so I don't feel like an ignorant fool. Am I the only one slightly frustrated by the opaque layer of frosting on modern computers?
A short explanation as to how this is done on other operating systems would also be interesting, but I am most concerned with Windows.
Edit: Since this question seems to be somewhat unclear, this is precisely what I want:
I would like to know how pixel-level graphics manipulation (drawing on the screen pixel by pixel) works on Windows. What do libraries like SDL do with the operating system to allow this to happen? I can manipulate the screen pixel by pixel using SDL, so what magic happens in SDL to let me do this?
Windows has many graphics APIs. Some are layers built on top of others (e.g., GDI+ on top of GDI), and others are completely independent stacks (like the Direct3D family).
In an API like GDI, there are functions like SetPixel which let you change the value of a single pixel on the screen (or within a region of the screen that you have access to). But using SetPixel to set lots of pixels is generally slow.
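For example (a toy sketch, assuming hwnd is a window you own):

    HDC dc = GetDC(hwnd);
    for (int x = 0; x < 100; ++x)
        SetPixel(dc, x, 50, RGB(255, 0, 0));  // one GDI round trip per pixel
    ReleaseDC(hwnd, dc);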
If you were to build a photorealistic renderer, like a ray tracer, then you'd probably build up a bitmap in memory (pixel by pixel), and use an API like BitBlt that sends the entire bitmap to the screen at once. This is much faster.
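A minimal sketch of that approach with plain Win32 (hwnd, width, and height are assumed; error handling omitted):

    // Build a 32-bpp DIB in memory, fill it pixel by pixel, then blit
    // the whole frame to the window in one call.
    BITMAPINFO bmi = {};
    bmi.bmiHeader.biSize = sizeof(bmi.bmiHeader);
    bmi.bmiHeader.biWidth = width;
    bmi.bmiHeader.biHeight = -height;   // negative = top-down row order
    bmi.bmiHeader.biPlanes = 1;
    bmi.bmiHeader.biBitCount = 32;
    bmi.bmiHeader.biCompression = BI_RGB;

    void* bits = nullptr;
    HDC windowDC = GetDC(hwnd);
    HDC memDC = CreateCompatibleDC(windowDC);
    HBITMAP dib = CreateDIBSection(memDC, &bmi, DIB_RGB_COLORS, &bits, nullptr, 0);
    HGDIOBJ old = SelectObject(memDC, dib);

    DWORD* pixels = static_cast<DWORD*>(bits);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            pixels[y * width + x] = 0x00FF0000;   // 0x00RRGGBB: solid red

    BitBlt(windowDC, 0, 0, width, height, memDC, 0, 0, SRCCOPY);

    SelectObject(memDC, old);
    DeleteObject(dib);
    DeleteDC(memDC);
    ReleaseDC(hwnd, windowDC);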
But it still may not be fast enough for rendering something like video. Moving all that data from system memory to the video card memory takes time. For video, it's common to use a graphics stack that's closer to the low-level graphics drivers and hardware. If the graphics card can do the video decompression directly, then sending the compressed video stream to the card will be much more efficient than sending the decompressed data from system memory to the video card--and that's often the limiting factor.
But conceptually, it's the same thing: you're manipulating a bitmap (or texture or surface or raster or ...), but that bitmap lives in graphics memory, and you're issuing commands to the GPU to set the pixels the way you want, and then to display that bitmap at some portion of the screen (often with some sort of transformation).
Modern graphics processors actually run little programs--called shaders--that can (among other things) do calculations to determine the pixel values. The GPUs are optimized to do these types of calculations and can do many of them in parallel. But ultimately, it boils down to getting the pixel values into some sort of bitmap in video memory.
I want to make a skinning engine capable of drawing custom-shaped windows with alpha blending; that is, it'll use layered windows (UpdateLayeredWindow). A typical window will contain, on top of its background, a couple dozen other bitmaps ranging from 10×10 to, say, 300×150 pixels. In the worst case most of these elements will have smooth animation at up to 30 fps. Everything will be alpha-blended, and I am going to use Direct2D for this (yes, I know older Windows versions don't support it). In general, Winamp's modern skin engine is the closest example.
Given all this, and taking into account modern PC performance, can I just redraw the whole window every single frame, or do I have to constrain drawing to some sort of clip rectangle?
D2D requires you to render in response to WM_PAINT messages.
Honestly, use the IAnimation interface and just let D2D and Windows worry about how often to redraw. Though I will let you know: Winamp is done with Adobe AIR, and layered windows with D2D cause issues. (I think you have to use a DXGI render target, but with the window being layered it needs a DC to be returned to an EndPaint call so it can update its alpha channel.)
I have some experience with this.
If you need to support Windows XP, using UpdateLayeredWindow is the only choice available for solving this problem. The documentation for this call says it copies the whole bitmap to the screen each time it is called, and this bottleneck showed up in my benchmarking as the real limiting factor. If your window is 300×300 you pay that price on every update, even if you are careful to modify only a couple of pixels. It would be very easy to over-optimize the rendering side for no real benefit, so implement something simple, measure, and then decide if you need to optimize.
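For context, the call looks roughly like this (a sketch; hwnd, memDC, windowWidth, and windowHeight are assumed); every call pushes the entire premultiplied-alpha bitmap to the screen no matter how little of it changed:

    POINT zero = { 0, 0 };
    SIZE size = { windowWidth, windowHeight };
    BLENDFUNCTION blend = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };
    // The full bitmap selected into memDC is copied on every call.
    UpdateLayeredWindow(hwnd, nullptr, nullptr, &size,
                        memDC, &zero, 0, &blend, ULW_ALPHA);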
If you can drop support for Windows XP then you can avoid UpdateLayeredWindow completely and use DwmExtendFrameIntoClientArea to create the same effect as a layered window. You'll write less code, avoid the UpdateLayeredWindow bottleneck, and D2D will be easier to work with.
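The DWM setup is only a few lines (a sketch, assuming a normal top-level window and linking against dwmapi.lib):

    #include <dwmapi.h>

    // A cxLeftWidth of -1 tells DWM to extend the frame over the entire
    // client area (the "sheet of glass" effect), giving you per-pixel
    // alpha without UpdateLayeredWindow.
    MARGINS margins = { -1 };
    DwmExtendFrameIntoClientArea(hwnd, &margins);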