How to implement an animation effect?

On the Windows platform, using the Windows SDK, how can I implement effects such as blur, expand/collapse, scrolling, and so on?
My main idea is to combine a timer with GDI painting.
But GDI painting has some problems: the animation is not very smooth compared with other effects I see on the same machine.
Can GDI do this or not? If so, how?
I've been interested in this kind of effect for a while, but I haven't figured out how it is implemented.
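For what it's worth, a timer plus GDI can look reasonably smooth if each frame is composed into an off-screen memory DC and copied to the window in a single BitBlt, with background erasing suppressed. The sketch below is a minimal illustration of that pattern, not a complete program; the window procedure, frame counter, and the sliding-rectangle drawing are placeholders for whatever you actually animate.

```cpp
#include <windows.h>

// Sketch of timer-driven, double-buffered GDI animation.
// Each frame is composed off-screen and copied to the window in one BitBlt,
// which avoids most of the flicker of drawing primitives directly on screen.

static int g_frame = 0;

LRESULT CALLBACK WndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg)
    {
    case WM_CREATE:
        SetTimer(hwnd, 1, 16, NULL);          // roughly 60 updates per second
        return 0;

    case WM_TIMER:
        ++g_frame;
        InvalidateRect(hwnd, NULL, FALSE);    // FALSE: don't erase the background
        return 0;

    case WM_ERASEBKGND:
        return 1;                             // we repaint everything ourselves

    case WM_PAINT:
    {
        PAINTSTRUCT ps;
        HDC dc = BeginPaint(hwnd, &ps);

        RECT rc;
        GetClientRect(hwnd, &rc);

        // Off-screen buffer for this frame.
        HDC memDC = CreateCompatibleDC(dc);
        HBITMAP bmp = CreateCompatibleBitmap(dc, rc.right, rc.bottom);
        HBITMAP old = (HBITMAP)SelectObject(memDC, bmp);

        // Placeholder drawing: a rectangle sliding to the right.
        FillRect(memDC, &rc, (HBRUSH)GetStockObject(WHITE_BRUSH));
        int x = (g_frame * 4) % (rc.right > 50 ? rc.right - 50 : 1);
        RECT box = { x, 40, x + 50, 90 };
        FillRect(memDC, &box, (HBRUSH)GetStockObject(BLACK_BRUSH));

        // One blit to the screen.
        BitBlt(dc, 0, 0, rc.right, rc.bottom, memDC, 0, 0, SRCCOPY);

        SelectObject(memDC, old);
        DeleteObject(bmp);
        DeleteDC(memDC);
        EndPaint(hwnd, &ps);
        return 0;
    }

    case WM_DESTROY:
        KillTimer(hwnd, 1);
        PostQuitMessage(0);
        return 0;
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}
```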

Related

OS X Sprite Kit - Dirty Rects/Regions

Some background:
I have an existing OS X card game app that uses OpenGL.
The window is resizable, and a 4:3 aspect ratio is always maintained.
When the window is resized, the OpenGL view is resized accordingly, and all visual elements are scaled accordingly; i.e. the cards maintain their relative sizes and distances from each other.
I'm interested in moving the code to a system that either uses Sprite Kit, or one predominantly based on Core Animation layers. Sprite Kit is more attractive to me in terms of feature set for my needs, but...
... I am concerned about Sprite Kit performance (or rather, needless work, particularly on battery-powered Macs) for a game that essentially blasts the same textures to the screen, 60fps, even when nothing much is happening. (Most of the time, the cards are static, as the player ponders their next move.)
To reduce some of the (repetitive) drawing required, particularly at very large window sizes (e.g. fullscreen on a 30" monitor), I'm interested in using a "dirty rects/region" or "as-required" drawing system.
Question:
Does Sprite Kit provide some kind of dirty-rect drawing system, or the ability to implement such a drawing system? (Or, is it basically going to draw everything over and over at 60fps, regardless of the need to redraw?)
SK is an OpenGL renderer, so naturally it will redraw its contents every frame. That, however, doesn't make it slow. The dirty-rect drawing of UI frameworks improves performance and reduces power consumption, but those frameworks have to use that approach because their rendering is typically much slower (often not hardware accelerated) than an OpenGL renderer's.
On the other hand, SK can be slower frame over frame if the rendered scene's complexity is extreme. But that sounds highly unlikely for a card game.
Generally you shouldn't concern yourself with performance until you've written some code to test it with. Premature optimization and all that...

A skinning engine in Windows: draw “dirty” regions only or the whole window at once?

I want to make a skinning engine capable of drawing custom-shaped windows with alpha blending. That is, it'll use layered windows (UpdateLayeredWindow). A typical window will contain, on top of its background, a couple dozen other bitmaps ranging from 10×10 to, say, 300×150 pixels. In the worst case most of these elements will have smooth animation up to 30 fps. Everything will be alpha-blended and I am going to use Direct2D for this (yes, I know older Windows versions don't support it). In general, Winamp's modern skin engine is the closest example.
Given all this, and taking into account modern PC performance, can I just redraw the whole window every single frame, or do I have to constrain drawing to some sort of clip rectangle?
D2D requires you to render in response to WM_PAINT messages.
Honestly, use the Windows Animation Manager (the IUIAnimation* interfaces) and just let D2D and Windows worry about how often to redraw. Though I will warn you: Winamp's modern skins are done with Adobe AIR, and layered windows with D2D cause issues. (I think you have to use a DXGI render target, but with the window being layered it needs a DC returned to an EndPaint call so it can update its alpha channel.)
I have some experience with this.
If you need to support Windows XP, using UpdateLayeredWindow is the only choice available for solving this problem. The documentation for this call says it copies the whole bitmap to the screen each time it is called and this bottleneck showed up in my benchmarking as the real limiting factor. If your window is 300x300 you pay that price on every update, even if you are careful to modify only a couple of pixels. It would be very easy to over-optimize the rendering side for no real benefit so implement something simple, measure, and then decide if you need to optimize.
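For reference, a per-frame update with UpdateLayeredWindow looks roughly like the sketch below (hwnd, memDC, and size are assumed to already exist: a WS_EX_LAYERED window and a memory DC with a 32-bit premultiplied-alpha DIB selected into it; error handling omitted). The whole bitmap goes to the window manager on every call, which is exactly the bottleneck described above.

```cpp
#include <windows.h>

// Sketch: push a premultiplied-alpha ARGB frame to a layered window.
// 'hwnd' is a WS_EX_LAYERED window, 'memDC' holds the 32-bit DIB you have
// already drawn this frame (illustrative parameters, not a fixed API).
void PresentLayeredFrame(HWND hwnd, HDC memDC, SIZE size)
{
    HDC screenDC = GetDC(NULL);

    POINT srcPos = { 0, 0 };                                  // origin in memDC
    BLENDFUNCTION blend = { AC_SRC_OVER, 0, 255, AC_SRC_ALPHA };

    // Copies the ENTIRE size.cx * size.cy bitmap on every call,
    // regardless of how little actually changed.
    UpdateLayeredWindow(hwnd, screenDC, NULL, &size,
                        memDC, &srcPos, 0, &blend, ULW_ALPHA);

    ReleaseDC(NULL, screenDC);
}
```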
If you can drop support for Windows XP then you can avoid UpdateLayeredWindow completely and use DwmExtendFrameIntoClientArea to create the same effect as a layered window. You'll write less code, avoid the UpdateLayeredWindow bottleneck, and D2D will be easier to work with.
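A rough sketch of that DWM route, under the assumption that dwmapi.lib is linked and desktop composition is enabled: extending the frame with negative margins turns the entire client area into a "sheet of glass" whose per-pixel alpha comes from whatever you render there (e.g. Direct2D on a DXGI surface). MakeClientAreaGlass is just an illustrative helper name.

```cpp
#include <windows.h>
#include <dwmapi.h>
#pragma comment(lib, "dwmapi.lib")

// Sketch: make the whole client area a DWM "glass" surface so that what you
// render there keeps its per-pixel alpha, without UpdateLayeredWindow.
HRESULT MakeClientAreaGlass(HWND hwnd)
{
    MARGINS margins = { -1, -1, -1, -1 };   // negative margins = sheet of glass
    return DwmExtendFrameIntoClientArea(hwnd, &margins);
}
```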

Why doesn't OS X have the same flickering problems that Windows does?

I was reading Larry Osterman's latest blog post about debugging a flickering problem in the Windows Vista/7 volume control, and I suddenly realized that I can't recall ever seeing an application flicker on my OS X laptop. Even applications that otherwise seem to be poorly written avoid the flicker problem in my experience. Without this turning into an Apple vs Windows debate (please), why do OS X applications not seem to have the same flickering problem?
I have trouble believing that Apple developers are simply amazing at programming flicker-free GUIs, while Windows programmers suck, so what's the reason? Does the OS X API require all GUIs to implement double-buffering? While some apps have the slightly sluggish double-buffered resize behavior, many don't, and they still avoid flickering. Is the OS X repaint flow somehow fundamentally different from Windows, avoiding the WM_ERASEBKGND problem entirely? Or is there some other possibility that I'm not seeing?
Update: Thank you for your answers. I wish I could select both ken and cb160's answers, because they are both helpful.
Mac OS X has double buffered windows.
You don't have to do anything to make it happen. It's behind the scenes.
You (almost always) don't explicitly draw to a window in Cocoa when something changes, you invalidate a region of the window. The framework will later descend the hierarchy of views and draw the dirty regions of the window into a secondary buffer. Then it swaps the buffers.
You can optionally make some promises that allow the framework to take shortcuts when redrawing, but they're all opt-in. Only savvy views are affected.
If your subclass of NSView implements the isOpaque method to return YES, then the framework will never clear anything behind your view or draw any of the views under it.
Implementing preservesContentDuringLiveResize to return YES gives you some extra responsibilities, but can improve performance during window resizing.
Mac OS X 10.6 added two more APIs of this sort: layerContentsRedrawPolicy and layerContentsPlacement.
Lastly, custom drawing is less common than on Windows. The majority of views you see are framework-supplied and not subclassed, and framework-supplied means optimized by Apple.
Both Windows Vista/7 and OS X use compositing engines to draw rasterised bitmaps on the screen. These compositing engines are responsible for processing output from all windows and drawing the final screen image. This compositing approach is how OS X is able to use the genie effect when minimizing to the dock and how Aero draws translucent borders. It also prevents flickering: if the bitmap to fill a particular area of the screen is not available, the compositor uses the image it already has rather than drawing a blank region.
OS X has had a compositing engine since it first shipped. At the time, lots of people thought this was a crazy approach, as all the video cards shipping then were optimized to draw bitmaps (i.e. window buttons and borders) and not composited images. In later versions of OS X, the compositing was pushed off to the GPU (in Quartz Extreme) and so took significant load off the CPU and made more effects possible.
Because the Windows compositor was only added in Windows Vista, and then only when a GPU was available and you had the right version of the OS, it is not as pervasive as the Quartz compositor in OS X. Because the compositor is not always used in Windows, flickering will occur when a region is blanked and the application responsible for drawing is not able to redraw the region quickly enough.
Yup, it's all double buffered automagically. Of course, if you are running legacy code from Mac OS 9, or code ported from Windows, that means you're probably triple buffering without knowing it. Hey, cycles are cheap!

Why Direct3D application performs better in full screen mode?

The performance of a Direct3D application seems to be significantly better in full screen mode compared to windowed mode. What are the technical reasons behind this?
I guess it has something to do with the fact that a full screen application can gain exclusive control of the display. But why can't an application gain exclusive control over part of the screen (i.e. a window) and get the same performance benefits?
Here are the cliff notes on how things work underneath.
The monitor always needs to be associated with a so-called primary surface to be able to display anything, i.e. the video card can only scan out of one surface in video memory.
When an application is fullscreen (and everything was set up correctly to enable flipping), the primary surface is just one of the application's back buffers, and it is flipped to another back buffer every frame. This is the most efficient way of presenting to the screen, but it requires the application to own the entire monitor area (i.e. the entire primary surface).
When there's no fullscreen application and DWM is off, the primary surface is owned by the OS, and every windowed application performs a blit from its back buffer to the primary surface. This blit takes some GPU time to complete (as do the blits from the other applications visible on the screen), so it's not as efficient as fullscreen presentation. XP worked that way.
When DWM is composing the screen, things get even more complicated.
Here, DWM owns the primary surface and needs to draw the application windows there. To make that possible, every window has an associated surface holding its contents, called the redirection surface (which is what allows DWM to do window ghosting, glass effects, and all that good stuff). Every time a D3D application presents a frame, it adds a blit to its redirection surface.
That way, several blits need to happen: a blit to the redirection surface by the app, then a blit from the redirection surface to the primary surface by DWM, which again is some overhead compared to fullscreen.
Note that all of this additional work happens on the GPU, so it doesn't affect CPU performance.
Stuff to read further:
http://blogs.msdn.com/greg_schechter/archive/2006/03/19/555087.aspx
http://blogs.msdn.com/greg_schechter/archive/2006/05/02/588934.aspx
http://blogs.msdn.com/greg_schechter/archive/2006/03/05/544314.aspx
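In Direct3D 9 terms, the fullscreen-flip versus windowed-blit distinction shows up directly in how D3DPRESENT_PARAMETERS is filled out before creating the device. The sketch below is only an illustration; the field values other than Windowed and the swap effect are assumed defaults (including the hard-coded display mode), not something the answer above prescribes.

```cpp
#include <windows.h>
#include <d3d9.h>

// Sketch: the presentation parameters are where "fullscreen flip" vs.
// "windowed blit" is decided in Direct3D 9. Only the fields relevant to the
// discussion matter here; the rest are plausible defaults for illustration.
D3DPRESENT_PARAMETERS MakePresentParams(HWND hwnd, bool fullscreen)
{
    D3DPRESENT_PARAMETERS pp = {};

    pp.hDeviceWindow        = hwnd;
    pp.Windowed             = fullscreen ? FALSE : TRUE;
    pp.BackBufferCount      = 1;
    pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;  // runtime can flip in exclusive mode
    pp.PresentationInterval = D3DPRESENT_INTERVAL_ONE;

    if (fullscreen)
    {
        // Exclusive mode: the back buffer effectively becomes the primary surface.
        pp.BackBufferWidth  = 1920;                    // assumed display mode
        pp.BackBufferHeight = 1080;
        pp.BackBufferFormat = D3DFMT_X8R8G8B8;
        pp.FullScreen_RefreshRateInHz = 60;
    }
    else
    {
        // Windowed: Present() ends up as a blit into the shared primary /
        // DWM redirection surface, which is the extra cost described above.
        pp.BackBufferFormat = D3DFMT_UNKNOWN;          // match the desktop format
    }
    return pp;
}
```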
There's a bit on MSDN that says full screen mode uses buffer flipping, if set up correctly, as opposed to blitting. It makes sense.
Of course you can (and in a way, do) give exclusive control for part of the screen to an application, but what happens to the rest of the screen? You still have to blit, do occlusion checking, etc. on the rest of the windows, and I think that's what causes the performance hit.
I'll add to #aib's answer that the rest of the screen is being managed by the OS. So, if anything else needs to be drawn/worked upon simultaneously, there has to be a performance hit.
For example, if you have a video playing in Windows Media Player in one window and then start Civilization in another, when Civ starts doing its fancy graphics it will need to share screen space with everything else (like the video).
Whereas if the DirectX app has the full screen, everything else might still be "updating" or "playing", but it isn't being drawn.
Basically, the video hardware is completely dedicated to the exclusive mode application.
There is no contention for video resources (pipeline, texture memory, etc...)
In particular, texture upload can be a big bottleneck. The less you have to do it (because you have it all), the better.

Is it possible to create full screen color overlay effects in Windows?

I remember my old Radeon graphics drivers, which had a number of overlay effects or color filters (whatever they are called) that would render the screen in, e.g., sepia tones or negative colors. My current NVIDIA card does not seem to have such a function, so I wondered if it is possible to make my own for Vista.
I don't know if there is some way to hook into Windows' rendering engine or, alternatively, into NVIDIA's drivers to achieve this effect. While it would be cool to just be able to modify the color, it would be even better to modify the color based on its screen coordinates or perform other more varied functions. An example would be colors that are more desaturated the farther they are from the center of the screen.
I don't have a specific use scenario so I cannot provide much more information. Basically, I'm just curious if there is anything to work with in this area.
You could have a full-screen layered window on top of everything that passes click events through. However, that's hacky and slow compared to what could be done by getting a hook into the DWM renderer's DirectX context. So far, though, that's not possible, as Microsoft does not provide any public interface into it.
The Flip 3D utility does this, though even there the functionality is not in the program itself: it's in the DWM DLL, called by ordinal (a hidden/undocumented function, obviously, since it doesn't serve any other purpose). So it's pretty much another dead end, and I haven't bothered to dig deeper.
On that front, the best we can do is wait for some kind of official API.
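For completeness, the "hacky and slow" layered-window approach mentioned above would look roughly like the sketch below, assuming a uniform screen tint is enough (per-pixel effects such as distance-based desaturation would instead need per-pixel alpha via UpdateLayeredWindow, or the compositor access that isn't publicly available). CreateScreenTint is just an illustrative helper name.

```cpp
#include <windows.h>

// Sketch: a topmost, click-through, semi-transparent window covering the
// whole primary screen, which tints everything underneath a uniform color.
HWND CreateScreenTint(HINSTANCE inst, COLORREF tint, BYTE alpha)
{
    WNDCLASS wc = {};
    wc.lpfnWndProc   = DefWindowProc;
    wc.hInstance     = inst;
    wc.lpszClassName = TEXT("ScreenTintOverlay");
    wc.hbrBackground = CreateSolidBrush(tint);   // window is filled with the tint
    RegisterClass(&wc);

    HWND hwnd = CreateWindowEx(
        WS_EX_LAYERED | WS_EX_TRANSPARENT | WS_EX_TOPMOST | WS_EX_TOOLWINDOW,
        wc.lpszClassName, TEXT(""), WS_POPUP,
        0, 0,
        GetSystemMetrics(SM_CXSCREEN), GetSystemMetrics(SM_CYSCREEN),
        NULL, NULL, inst, NULL);

    // Uniform alpha for the whole window; clicks fall through to what's below.
    SetLayeredWindowAttributes(hwnd, 0, alpha, LWA_ALPHA);
    ShowWindow(hwnd, SW_SHOWNOACTIVATE);
    return hwnd;
}

// Usage sketch: CreateScreenTint(hInstance, RGB(112, 66, 20), 64); // sepia-ish tint
```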
