Why not use GDI to repeatedly fill a window with RGB data from an array? - winapi

This is a follow-up to this question. I'm currently writing a simple game and am looking for the fastest way to (repeatedly) display an array of RGB data in a Win32 window, without flickering or other artifacts.
Several different approaches were recommended in the answers to the previous question, but there was no consensus on which would be the fastest. So, I threw together a test program. The code simply displays a framebuffer on the screen repeatedly, as fast as possible.
These are the results I obtained, for 32-bit data running in a 32-bit video mode - they may surprise some people:
- Direct3D (1): 500 fps
- Direct3D (2): 650 fps
- DirectDraw (3): 1100 fps
- DirectDraw (4): 800 fps
- GDI (SetDIBitsToDevice): 2000 fps
Given these figures:
Why are many people adamant that GDI is simply too slow for this operation?
Is there any reason to prefer DirectDraw or Direct3D over SetDIBitsToDevice?
Here is a brief summary of the calls made by each of the Direct* codepaths. If anyone knows a more efficient way to use DirectDraw/Direct3D, please comment.
1. CreateTexture(D3DUSAGE_DYNAMIC, D3DPOOL_DEFAULT);
LockRect(); memcpy(); UnlockRect(); DrawPrimitive()
2. CreateTexture(0, D3DPOOL_SYSTEMMEM); CreateTexture(0, D3DPOOL_DEFAULT);
LockRect(); memcpy(); UnlockRect(); UpdateTexture(); DrawPrimitive()
3. CreateSurface(); SetSurfaceDesc(lpSurface = &frameBuffer[0]);
memcpy(); primarySurface->Blt();
4. CreateSurface();
Lock(); memcpy(); Unlock(); primarySurface->Blt();
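For reference, the GDI codepath boils down to something like the following (a minimal sketch, assuming the test program's hwnd, WIDTH, HEIGHT and 32-bit frameBuffer; the negative biHeight requests a top-down DIB so row 0 is the top of the window):

BITMAPINFO bmi = {};
bmi.bmiHeader.biSize        = sizeof(BITMAPINFOHEADER);
bmi.bmiHeader.biWidth       = WIDTH;
bmi.bmiHeader.biHeight      = -(LONG)HEIGHT;   // negative = top-down rows
bmi.bmiHeader.biPlanes      = 1;
bmi.bmiHeader.biBitCount    = 32;
bmi.bmiHeader.biCompression = BI_RGB;

HDC hdc = GetDC(hwnd);
SetDIBitsToDevice(hdc, 0, 0, WIDTH, HEIGHT,    // destination rectangle
                  0, 0,                        // source origin
                  0, HEIGHT,                   // first scan line, number of lines
                  &frameBuffer[0], &bmi, DIB_RGB_COLORS);
ReleaseDC(hwnd, hdc);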

There are a couple of things to keep in mind here. First of all, a lot of "common knowledge" is based on some facts that no longer really apply.
In the days of AGP, when the CPU talked directly to the GPU, it always used the base PCI protocol, which happened at the "1x" rate (always and inevitably). AGP 2x/4x/8x only applied when the GPU was talking to the memory controller directly. In other words, depending on when you looked, it was up to 8 times as fast to have the GPU load a texture from memory as it was for the CPU to send the same data directly to the GPU. Of course, the CPU also had a great deal more bandwidth to memory than the PCI bus supported.
When things switched to PCI-E, however, that changed completely. While there can be differences in bandwidth depending on path, there's no general rule that memory->GPU will be faster than CPU->GPU. The one generalization that's (mostly) safe is that if you have a dedicated graphics card, then the GPU will almost always have more bandwidth to the memory on the graphics card than it does to main memory on the motherboard.
In your case, that doesn't matter much though -- you're talking about moving data from CPU space to GPU space regardless. The main speed difference with using DirectX (or OpenGL) happens when you keep all (or most) of the computation on the GPU, and avoid using the CPU (or main memory) at all. They don't (now that AGP is history) provide any substantial improvement in memory->display bandwidth.

Jerry Coffin makes some good points. The thing to bear in mind is what the DI stands for in SetDIBitsToDevice. It stands for Device Independent. Which means you were ALWAYS at the mercy of drivers. Some drivers used to be complete rubbish, and it affected performance massively. DirectDraw suffered from similar issues as well ... but you also had access to the hardware blitters, so it was generally more useful. IHVs also tended to put more time into writing proper drivers for DirectDraw because of its gaming association. Who wants to be at the bottom of the performance pile when the hardware is quite capable of doing better?
These days many graphics cards can accept the bit data directly, so no conversion happens. If it does need to be swizzled, this is also INCREDIBLY quick in this day and age.
The reason your Direct3D performance is so terrible, by comparison, is that Direct3D, by nature of the fact it is meant to be used totally internally to the GPU, uses odd and complex formats to improve cache performance and so forth.
Couple that with the fact that you aren't testing like for like (with DDraw and D3D): you create a texture/surface, lock it, copy, unlock and then draw over the back buffer (via various methods). To get the best performance you'd be better off locking the backbuffer directly using a DISCARD lock, then memcpy'ing straight into the returned buffer before unlocking. This will bring your performance much closer to SetDIBitsToDevice. I would still expect D3D to be slower than DDraw, however, for the reasons outlined above.
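A minimal sketch of that suggestion (assuming a D3D9 device whose back buffer was created lockable, e.g. with D3DPRESENTFLAG_LOCKABLE_BACKBUFFER, and the same WIDTH/HEIGHT/frameBuffer as the test program):

IDirect3DSurface9* back = NULL;
device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &back);

D3DLOCKED_RECT lr;
if (SUCCEEDED(back->LockRect(&lr, NULL, D3DLOCK_DISCARD)))   // the DISCARD lock suggested above
{
    const BYTE* src = (const BYTE*)&frameBuffer[0];
    BYTE* dst = (BYTE*)lr.pBits;
    for (UINT y = 0; y < HEIGHT; ++y)   // copy row by row: the surface pitch
        memcpy(dst + y * lr.Pitch, src + y * WIDTH * 4, WIDTH * 4);   // may differ from our stride
    back->UnlockRect();
}
back->Release();
device->Present(NULL, NULL, NULL, NULL);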

The reason you will hear people trounce on GDI is that it used to be just old Windows API calls. The newer versions of it (called GDI+ when I last looked at them) are actually an API placed on top of DirectX calls. So using GDI may seem fairly simple programming-wise at times, but adding a layer between things always slows things down. As mentioned in the response from Jerry Coffin, your examples are about moving the data, and that is the slow part. I am a bit surprised that DirectX is that much slower, though, but I can't be much more help without digging through the DirectX documentation (which has been pretty awesome for quite some time, really). You might want to check out www.codesampler.com; I have always found good starting places there, and, while I may be insane for saying this, I would swear the improvements to the DirectX SDK in docs and examples were based on this guy's work!
As for the DirectDraw vs Direct3D (and not the GDI calls) discussion: I would say go with Direct3D. I believe DirectDraw has been deprecated since 8.0 or so, and 9.0 has been around for quite a long while. At the end of the day, all of DirectX is 3D; it just varies in the levels of helpful 2D APIs that are around, and you may find you can do some very interesting things in a 2D environment when you are actually using 3D space. (I had a pretty neat randomly generated lightning weapon for a Space Invaders clone at one time :))
Anywho, hope this helped!
PS: It should be noted that DirectX is not always the fastest. For keyboard input (unless this has changed in 10 or 11) it has pretty much always been recommended to use the Windows events, as DirectInput was actually just a wrapper for that system! XInput, however, is -awesome-!!

Related

Game performance optimization interview

This question came up:
You're searching for bottlenecks in your game, but nothing you're changing is making the game any faster, be it anything in the GPU pipeline or the CPU. Nothing is spiking, and the slowness appears to be distributed across everywhere. What do you do next?
I was flummoxed. Is it a trick question? When fixing perf issues, I've always assumed that this is the point at which you need to scale everything back. I don't think it's memory allocation, as that shows up in CPU perf.
I would have asked for more information. "Slow" is a poor indicator of bad performance and is a classification of a symptom rather than a symptom itself. For example, you might describe "slow" as being:
Low frame rate
Poor responsiveness to input
High responsiveness and smooth framerate, but slow game mechanics (i.e.: the player and entities move smoothly but very slowly)
In the case of networked games, apparent network lag
All of these problems have different potential causes and solutions:
Low but consistent frame rate may be due to inefficiencies in your game loop. Simply running your favorite profiler may indicate that large amounts of time are spent in one particular piece of code. In a game I wrote, for instance, I discovered that low FPS was the result of a bad loop that calculated distances between entities multiple times without caching (see the sketch after this list). In another game, I discovered that the data structure I was using to perform lookups against the terrain was O(N) rather than O(1) (python stdlib...ick). You can't diagnose a problem you can't see, and profiling is the first line of defense.
Poor responsiveness may be due to a number of things. If the FPS is high but the controls are sluggish to respond, the API that you're using to access the controls may simply be bad. Some controllers may have crappy drivers that can kill responsiveness. It might even be your game loop: you might simply not be checking for input from the controller frequently enough (perhaps you're not checking on every tick). In one of the aforementioned games, I had an issue where certain actions had a delayed effect: you'd use an item and the game would respond a half second or so later. It turned out that the issue was caused by the client making a full round-trip to the server to perform the action, verify that it happened, and wait for the server to broadcast back that the item was used. Simply having the behavior take place instantaneously on the client remedied the issue.
Slow game mechanics might indicate that game constants simply aren't set high enough. If everything is smooth and beautiful but everything just moves very slowly, it's quite possible that default velocities or accelerations aren't turned up enough.
Network lag can be caused by any number of things: the router you're connected to might be failing, the VPS you're developing against might be on a host that's being DDoSed, you might be using a protocol that's overly (but uniformly) chatty, or you're simply sending too much data over the wire. In a piece of simulation software I wrote in college, the computations were performed on some beefy computers in a lab, while the visualizations were being run on my MBP in my dorm. It turned out that the sheer amount of data that I was sending from the lab computers to my dorm was enough to overload the cheap network switches in the building and drop packets, resulting in horrible lag but perfectly reasonable log output.
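To make the first case concrete, the distance-caching fix might look something like this (a hypothetical sketch; Entity and the per-frame table are made up for illustration):

#include <cmath>
#include <vector>

struct Entity { float x, y; };

// Naive: every system that needs a distance recomputes it.
float Distance(const Entity& a, const Entity& b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Cached: build the table once per frame, then all lookups are O(1).
std::vector<float> BuildDistanceTable(const std::vector<Entity>& es) {
    const size_t n = es.size();
    std::vector<float> d(n * n);
    for (size_t i = 0; i < n; ++i)
        for (size_t j = 0; j < n; ++j)
            d[i * n + j] = Distance(es[i], es[j]);
    return d;   // d[i * n + j] is the distance from entity i to entity j
}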
So I guess the answer here is to have the interviewer describe the symptoms more fully. @Ali's answer is great, but it could be that there's a more nuanced problem at hand that requires some coaxing to diagnose.
You're searching for bottlenecks in your game, but nothing you're changing is making the game any faster, be it anything in the GPU pipeline or the CPU. Nothing is spiking, and the slowness appears to be distributed across everywhere.
It pretty much sounds like the definition of Uniformly Slow Code. Let's assume it is really what is meant by this (and not some I/O bottleneck, or creation of unnecessary objects in a loop, or some poor choice of data structures or algorithms, etc.).
To make a uniformly slow code faster, you usually have to go against good practices, and that is why I usually stop optimizing my code when it is uniformly slow. (I suppose "stop optimizing" is not a good answer at an interview...)
One way to make things faster is to identify an appropriate sequence of small operations, collect them together in one place, and then manually improve things; a sort of "manual inlining" of these operations, followed by high-level simplifications on the code that emerges. It requires good intuition about where this might be worth doing and an excellent understanding of the involved code. This answer calls it bunching and horizontal optimization.
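A toy sketch of what that can look like in practice (hypothetical particle update; the point is the shape of the change, not the physics):

#include <vector>

struct Particle { float pos, vel, life; };

// Before: three tidy, single-purpose passes, each re-walking the array.
void StepSeparate(std::vector<Particle>& ps, float dt, float g) {
    for (auto& p : ps) p.vel  += g * dt;
    for (auto& p : ps) p.pos  += p.vel * dt;
    for (auto& p : ps) p.life -= dt;
}

// After "bunching": one fused pass touches each particle exactly once,
// trading modularity for fewer trips through memory.
void StepFused(std::vector<Particle>& ps, float dt, float g) {
    for (auto& p : ps) {
        p.vel  += g * dt;
        p.pos  += p.vel * dt;
        p.life -= dt;
    }
}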
Another thing that might be worth looking into, if you really have uniformly slow code, is Andrei Alexandrescu's optimization tips.
Maybe this is about thinking of more efficient algorithms. "Micro-optimization" has its limits; you can perfectly optimize a bubble sort, for example, but to get a really big speedup you'd switch to another sorting algorithm.
Also, in games you may introduce different kinds of adjustable quality/speed (or precision/speed) tradeoffs. Typically all games have some settings that change graphic detail level.
anecdotal:
i can tell you what the problem is without actually knowing the answer to the question ;p
sloppy directx calls. too many objects. especially bad on some old dx9 games, since dx9 needed to make a new draw call for every object. or something like that, the story goes. basically resulted in the cpu waiting idle for the gpu to process all the messages.
although def not the solution to every issue, i thought it was worth mentioning as an interesting piece of information ;p didn't see it in the other comments.
it's almost like having too many pixel shaders, except at least the gpu works at 100% with a mass of those :D good for frying omelettes. (also, using occlusion to save performance and then adding a mass of pixel shaders to that model is a BAD idea)
i hope you can see the humor in this ;p

Detect whether a Quartz Composition in a QCView will be rendered through software or hardware

I have a feeling there are combinations of Cocoa Quartz Compositions and GPUs which can't be handled by the GPU and which fall back on the software renderer, even if Core Image is "accelerated" normally. How would I detect such a situation?
Or more generally, how do I detect that a machine is too underpowered to handle a certain composition of a certain size, without actually playing the composition and measuring the FPS?
(Measuring the FPS through playing the composition in a hidden window is unlikely to work, since the QCView might detect that situation and optimise away the whole operation, or parts thereof. And even if it didn't do that today it might start doing that with the next update from Apple - it'd be an unreliable solution.)
Update: to be thorough I did write some code to test-render the composition at full resolution in an ordered-out but properly sized window, trying to force the render to happen with [self startRendering]; [self snapshotImage]; [self stopRendering];. This took an amount of time which looked reasonable at first, until it turned out the slow machine was faster at running this test than the fast one. ;) In reality the slow machine renders the composition at a measly 2.24 FPS vs 27 FPS on the fast machine.
I'm guessing you're asking so that you can make a simpler fallback animation for weaker systems?
One option may be to check the user's hardware string as is mentioned here:
GPU Chipset Detection.
glGetString can return GL_VENDOR, GL_RENDERER, GL_VERSION, or GL_EXTENSIONS. You could theoretically use GL_VENDOR to identify Intel GMAs as too slow, or compare GL_RENDERER to a list of known poor-performing GPUs. If you're writing code for 10.6+ only, you only have to compare against GPUs used in Intel Macs, so the list shouldn't be too long.
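A minimal sketch of that check (desktop OpenGL on the Mac, assuming a current GL context; treating "GMA" in the renderer string as slow is just an example heuristic):

#include <OpenGL/gl.h>
#include <string.h>

bool looksUnderpowered()
{
    const char* renderer = (const char*)glGetString(GL_RENDERER);
    if (!renderer) return true;                 // no context: assume the worst
    return strstr(renderer, "GMA") != NULL;     // e.g. Intel GMA 950 / X3100
}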
This might not be quite the elegant solution you're looking for, but it should do the trick. I would also provide the user with an override to choose the higher or lower quality graphics if they wish.

How are 3D games so efficient? [closed]

There is something I have never understood. How can a great big PC game like GTA IV use 50% of my CPU and run at 60fps, while a DX demo of a rotating teapot @ 60fps uses a whopping 30%?
Patience, technical skill and endurance.
The first point is that a DX demo is primarily a teaching aid, so it's done for clarity, not speed of execution.
It's a pretty big subject to condense but games development is primarily about understanding your data and your execution paths to an almost pathological degree.
Your code is designed around two things - your data and your target hardware.
The fastest code is the code that never gets executed - sort your data into batches and only do expensive operations on data you need to
How you store your data is key - aim for contiguous access; this allows you to batch-process at high speed.
Parallelise everything you possibly can
Modern CPUs are fast, modern RAM is very slow. Cache misses are deadly.
Push as much to the GPU as you can - it has fast local memory so can blaze through the data but you need to help it out by organising your data correctly.
Avoid doing lots of render-state switches (again, batch similar vertex data together) as this causes the GPU to stall
Swizzle your textures and ensure they are powers of two - this improves texture cache performance on the GPU.
Use levels of detail as much as you can - low/medium/high versions of 3D models, switching based on distance from the camera/player - no point rendering a high-res version if it's only 5 pixels on screen.
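To make the LOD point concrete, the selection often boils down to something like this (a sketch; the cutoff distances and enum are illustrative only):

enum Lod { LOD_HIGH, LOD_MEDIUM, LOD_LOW };

// Pick a mesh by squared distance to the camera, avoiding the sqrt.
Lod PickLod(float dx, float dy, float dz) {
    const float d2 = dx * dx + dy * dy + dz * dz;
    if (d2 < 50.0f * 50.0f)   return LOD_HIGH;    // close: full-detail mesh
    if (d2 < 200.0f * 200.0f) return LOD_MEDIUM;  // mid-range
    return LOD_LOW;                               // far away: a few polygons will do
}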
In general, it's because
The games are being optimal about what they need to render, and
They take special advantage of your hardware.
For instance, one easy optimization you can make involves not actually trying to draw things that can't be seen. Consider a complex scene like a cityscape from Grand Theft Auto IV. The renderer isn't actually rendering all of the buildings and structures. Instead, it's rendering only what the camera can see. If you could fly around to the back of those same buildings, facing the original camera, you would see a half-built hollowed-out shell structure. Every point that the camera cannot see is not rendered -- since you can't see it, there's no need to try to show it to you.
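A crude sketch of that idea (real engines test bounding volumes against all six frustum planes; this only rejects objects entirely behind the camera, and all types here are illustrative):

struct Vec3 { float x, y, z; };

float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// False when the bounding sphere lies entirely behind the camera plane,
// so the object can be skipped before it ever reaches the GPU.
bool MaybeVisible(Vec3 camPos, Vec3 camForward, Vec3 center, float radius) {
    Vec3 toObj = { center.x - camPos.x, center.y - camPos.y, center.z - camPos.z };
    return Dot(toObj, camForward) > -radius;
}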
Furthermore, optimized instructions and special techniques exist when you're developing against a particular set of hardware, to enable even better speedups.
The other part of your question is why a demo uses so much CPU:
... while a DX demo of a rotating Teapot @ 60fps uses a whopping 30%?
It's common for demos of graphics APIs (like dxdemo) to fall back to what's called a software renderer when your hardware doesn't support all of the features needed to show a pretty example. These features might include things like shadows, reflection, ray-tracing, physics, et cetera.
This mimics the function of a completely full-featured hardware device which is unlikely to exist, in order to show off all the features of the API. But since the hardware doesn't actually exist, it runs on your CPU instead. That's much more inefficient than delegating to a graphics card -- hence your high CPU usage.
3D games are great at tricking your eyes. For example, there is a technique called screen space ambient occlusion (SSAO) which will give a more realistic feel by shadowing those parts of a scene that are close to surface discontinuities. If you look at the corners of your wall, you will see they appear slightly darker than the centers in most cases.
The very same effect can be achieved using radiosity, which is based on rather accurate simulation. Radiosity will also take into account more effects of bouncing lights, etc. but it is computationally expensive - it's a ray tracing technique.
This is just one example. There are hundreds of algorithms for real-time computer graphics, and they are essentially based on good approximations and typically make a lot of assumptions. For example, spatial sorting must be chosen very carefully depending on the speed, the typical position of the camera, as well as the amount of changes to the scene geometry.
These 'optimizations' are huge - you can implement an algorithm efficiently and make it run 10 times faster, but choosing a smart algorithm that produces a similar result ("cheating") can make you go from O(N^4) to O(log(N)).
Optimizing the actual implementation is what makes games even more efficient, but that is only a linear optimization.
Eeeeek!
I know that this question is old, but it's exciting that no one has mentioned VSync!!!???
You compared the CPU usage of the game at 60fps to CPU usage of the teapot demo at 60fps.
Isn't it apparent that both run (more or less) at exactly 60fps? That leads to the answer...
Both apps run with vsync enabled! This means (dumbed-down) that the rendering framerate is locked to the "vertical blank interval" of your monitor. The graphics hardware (and/or driver) will only render at max 60fps. 60fps = 60Hz (Hz = per second) refresh rate. So you probably use a rather old, flickering CRT or a common LCD display. On a CRT running at 100Hz you would probably see framerates of up to 100fps. VSync applies in a similar way to LCD displays (they usually have a refresh rate of 60Hz).
So, the teapot demo may actually run much more efficiently! If it uses 30% of CPU time (compared to 50% CPU time for GTA IV), then it probably uses less CPU time each frame and just waits longer for the next vertical blank interval. To compare both apps, you should disable vsync and measure again (you will measure much higher fps for both apps).
Sometimes it's OK to disable vsync (most games have an option in their settings). Sometimes you will see "tearing artefacts" when vsync is disabled.
You can find details of it and why it is used on Wikipedia: http://en.wikipedia.org/wiki/Vsync
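For a Direct3D 9 app, for example, the vsync choice is a single field in the present parameters at device creation (a sketch; only the relevant fields are shown):

D3DPRESENT_PARAMETERS pp = {};
pp.Windowed   = TRUE;
pp.SwapEffect = D3DSWAPEFFECT_DISCARD;
// D3DPRESENT_INTERVAL_ONE locks presentation to the monitor refresh (vsync);
// D3DPRESENT_INTERVAL_IMMEDIATE presents as fast as possible, for benchmarking.
pp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;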
Whilst many answers here provide excellent indications of how, I will instead answer the simpler question of why.
GTA4 took $400 million in its first week.
Crytek wrote an extremely impressive graphics demo to allow nVidia to 'show off' at a trade show. The resulting impressions got them the leg up to create what would become Far Cry.
Valve's 2005 revenue and operating profit have been stated as 70 and 55 million USD respectively.
Perhaps the best example (certainly one of the best known) is id Software. They realised very early, in the days of Commander Keen (well before 3D), that coming up with a clever way to achieve something[1] that was graphically superior to the competition, even if it relied on modern hardware (in this case an EGA graphics card!), would make a game stand out. This was true, but they further realised that, rather than having to come up with new games and content themselves, they could license the technology, thus getting income from others whilst being able to develop the next generation of engine and leapfrog the competition again.
The abilities of these programmers (coupled with business savvy) are what made them rich.
That said, it is not necessarily money that motivates such people. It is likely just as much the desire to achieve, to accomplish. The money they earned in the early days simply means that they now have time to devote to what they enjoy. And whilst many have outside interests, almost all still program and try to work out ways to do better than the last iteration.
Put simply, the person who wrote the teapot demo likely had one or more of the following issues:
less time
less resources
less reward incentive
less internal and external competition
lesser goals
less talent
The last may sound harsh[2], but clearly there are some who are better than others; bell curves sometimes have extreme ends, and those people tend to be attracted to the corresponding extreme ends of what is done with that skill.
The lesser-goals one is actually likely to be the main reason. The target of the teapot demo was just that, a demo. But not a demo of the programmer's skill[3]. It would be a demo of one small facet of a (big) OS, in this case DX rendering.
To those viewing the demo it wouldn't matter if it used way more CPU than required, so long as it looked good enough. There would be no incentive to eliminate waste when there would be no beneficiary. In comparison, a game would love to have spare cycles for better AI, better sound, more polygons, more effects.
[1] in that case, smooth scrolling on PC hardware
[2] likely more than me, just so we're clear about that
[3] strictly speaking it would have been a demo to his/her manager too, but again the drive here would be time and/or visual quality
Because of a few reasons
3D game engines are highly optimized
most of the work is done by your graphics adapter
50%? Hm, let me guess: you have a dual core and only one core is used ;-)
EDIT: To give a few numbers
2.8 GHz Athlon 64 with an NV 6800 GPU. The results are:
CPU: 72.78 Mflops
GPU: 2440.32 Mflops
Sometimes a scene may have more going on than it appears. For example, a rotating teapot with thousands of vertices, environment mapping, bump mapping, and other complex pixel shaders all being rendered simultaneously amounts to a whole lot of processing. A lot of times these teapot demos are simply meant to show off some sort of special effect. They also may not always make the best use of the GPU when absolute performance isn't the goal.
In a game you may see similar effects but they're usually done in a compromised fashion in effort to maximize the frame rate. These optimizations extend to everything you see in the game. The issue becomes, "How can we create the most spectacular and realistic scene with the least amount of processing power?" It's what makes game programmers some of the best optimizers around.
Scene management. kd-trees, frustum culling, BSPs, hierarchical bounding boxes, potentially visible sets.
LOD. Switching out lower detail versions to substitute in for far away objects.
Impostors. Like LOD but not even an object, just a picture or 'billboard'.
SIMD.
Custom memory management. Aligned memory, less fragmentation.
Custom data structures (i.e. no STL, relatively minimal templating).
Assembly in places, mainly for SIMD.
For all the qualified and good answers given, the one that matters is still missing: the CPU utilization counter of Windows is not very reliable. I guess that this simple teapot demo just calls the rendering function in its idle loop, blocking at the buffer swap.
Now the Windows CPU utilization counter just looks at how much CPU time is spent within each process, but not how this CPU time is used. Try adding a
Sleep(0);
just after returning from the rendering function, and compare.
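A sketch of where that call would sit in a classic idle-loop renderer (RenderFrame is a stand-in for the demo's rendering function):

MSG msg = {};
while (msg.message != WM_QUIT) {
    if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
        TranslateMessage(&msg);
        DispatchMessage(&msg);
    } else {
        RenderFrame();   // blocks at the buffer swap, as described above
        Sleep(0);        // yield the rest of the timeslice, then compare the counter
    }
}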
In addition, there are many, many tricks from an artistic standpoint to save computational power. In many games, especially older ones, shadows are precalculated and "baked" right into the textures of the map. Many times, the artists tried to use planes (two triangles) to represent things like trees and special effects when they would look mostly the same. Fog in games is an easy way to avoid rendering far-off objects, and often games would have multiple resolutions of every object for far, mid, and near views.
The core of any answer should be this -- the transformations that 3D engines perform are mostly specified in additions and multiplications (linear algebra, no branches or jumps), and the work of drawing a single frame is often specified in a way that multiple such add-mul jobs can be done in parallel. GPU cores are very good at add-muls, and they have dozens or hundreds of such cores.
The CPU is left with doing simple stuff -- like AI and other game logic.
How can a great big PC game like GTA IV use 50% of my CPU and run at 60fps while a DX demo of a rotating Teapot @ 60fps uses a whopping 30%?
While GTA is quite likely to be more efficient than the DX demo, measuring CPU efficiency this way is essentially broken. Efficiency could be defined, e.g., by how much work you do per given time. A simple counterexample: spawn one thread per logical CPU and let a simple infinite loop run on it. You will get CPU usage of 100%, but it is not efficient, as no useful work is done.
This also leads to an answer: how can a game be efficient? When programming "great big games", a huge effort is dedicated to optimize the game in all aspects (which nowadays usually also includes multi-core optimizations). As for the DX demo, its point is not running fast, but rather demonstrating concepts.
I think you should take a look at GPU utilisation rather than CPU... I bet the graphics card is much busier in GTA IV than in the teapot sample (where it should be practically idle).
Maybe you could use something like this monitor to check that:
http://downloads.guru3d.com/Rivatuner-GPU-Monitor-Vista-Sidebar-Gadget-download-2185.html
Also, the framerate is something to consider: maybe the teapot sample is running at full speed (maybe 1000fps) while most games are limited to the refresh frequency of the monitor (about 60fps).
Look at the answer on vsync; that is why they are running at the same frame rate.
Secondly, CPU usage is misleading in a game. A simplified explanation is that the main game loop is just an infinite loop:
while (1) {
    update();
    render();
}
Even if your game (or in this case, the teapot) isn't doing much, you are still eating up CPU in your loop.
The 50% CPU in GTA is "more productive" than the 30% in the demo, since the demo more than likely isn't doing much at all, while GTA is updating tons of details. Even adding a Sleep(10) to the demo will probably drop its CPU by a ton.
Lastly, look at GPU usage. The demo is probably taking <1% on a modern video card, while GTA will probably be taking the majority during gameplay.
In short, your benchmarks and measurements aren't accurate.
The DX teapot demo is not using 30% of the CPU doing useful work. It's busy-waiting because it has nothing else to do.
From what I know of the Unreal series, some conventions are broken, like encapsulation. Code is compiled to bytecode or directly into machine code depending on the game. Also, objects are rendered and packaged in the form of meshes, and things such as textures, lighting and shadows are precalculated, whereas pure 3D animation requires this to be done in real time. When the game is actually running there are also some optimizations, such as rendering only the visible parts of an object and displaying texture detail only when close up. Finally, it's probable that video games are designed to get the best out of a platform at a given time (e.g. Intel x86 MMX/SSE, DirectX, ...).
I think there is an important part of the answer missing here. Most of the answers tell you to "Know your data". The fact is that you must, in the same way and with the same degree of importance, also know your:
CPU (clock and caches)
Memory (frequency and latency)
Hard drive (in terms of speed and seek times)
GPU (#cores, clock and its Memory/Caches)
Interfaces: SATA controllers, PCI revisions, etc.
BUT, on top of that, even with current modern computers, you would never be able to play a real 1080p video at >>30 fps (a single 1080p image at 64 bits per pixel would take around 15,000 KB/14.9 MB). The reason for that is the sampling/precision. A video game would never use double precision (64 bits) for pixels, images, data, etc., but rather a lower custom precision (~4-8 bits), and sometimes less precision rescaled with interpolation techniques, to allow reasonable computation time.
There are other techniques as well, such as clipping the data (both with the OpenGL standard and software implementations), data compression, etc. Keep in mind also that current GPUs can be >300 times faster than current CPUs in terms of raw hardware capability. However, a good programmer may only get a 10-20x factor out of that, unless the problem is fully optimized and completely parallelizable (particularly task-parallelizable).
From experience, I can tell you that optimization is like an exponential curve: to reach optimal performance, the time required may be enormous.
So, to get back to the teapot: you should look at how its geometry is represented and sampled, and with what precision, versus what you see in GTA 5 in terms of geometry/textures and, most importantly, the details (precision, sampling, etc.).

Why is GUI code so computationally expensive?

All you Stackoverflowers,
I was wondering why GUI code is responsible for sucking away many, many CPU cycles. In principle, the graphical rendering is far less complex than Doom (although most corporate GUIs will introduce lots of window dressing). The event handling layer also seems to be a heavy cost; however, it seems that a well-written implementation should switch between contexts efficiently on modern processors with a lot of memory/cache.
If anybody has run a profiler on their big GUI application, or a common API itself, I'm interested in where the bottlenecks lie.
Possible explanations (that I imagine) may be:
High levels of abstraction between hardware and application interface
Lots of levels of indirection to the correct code to execute
Low priority (compared to other processes)
Misbehaving applications flooding API with calls
Excessive object orientation?
Poor design choices throughout the API (not just individual issues, but the design philosophy)
Some GUI frameworks are much better than others, so I'd like to hear varied perspectives. For example, the Unix/X11 system is much different than Windows and even than WinForms.
Edit: Now a community wiki - go for it. I have one more thing to add -- I'm an algorithms guy in school and would be interested if there are inefficient algorithms in GUI code and which they are. Then again, it's probably just the implementation overhead.
I've no idea generally, but I'd like to add another item to your list - font rendering and calculations. Finding vector glyphs in a font and converting them to bitmap representations with anti-aliasing is no small task. And often it needs to be done twice - first to calculate the width/height of the text for positioning, and then to actually draw the text at the right coordinates.
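In Win32 GDI that measure-then-draw pattern looks roughly like this (a sketch; hdc and text are assumed, and DT_CALCRECT makes the first call compute the rectangle without drawing anything):

RECT rc = { 0, 0, 400, 0 };   // the width constrains word wrapping
DrawText(hdc, text, -1, &rc, DT_WORDBREAK | DT_CALCRECT);   // measure: fills in rc
// ... position rc using the computed width/height ...
DrawText(hdc, text, -1, &rc, DT_WORDBREAK);                 // actually rasterize the glyphs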
Also, most drawing code today relies on clipping mechanisms to update just a part of the GUI. So, if just one part needs to be redrawn, the code actually redraws the whole window behind the scenes, and then takes just the needed part to actually update.
Added:
In the comments I found this:
I'm also very interested in this. It can't be that the gui is rendered using only the cpu because if you don't have proper drivers for your gfx-card, desktop graphics render incredibly slow. If you have gfx-drivers however desktop-gfx go kinda fast but never as fast as a directx/opengl app.
Here's the deal as I understand it: every graphics card out there today supports a generic interface for drawing. I'm not sure if it's called "VESA", "SVGA", or if those are just old names from the past. Anyway, this interface involves doing everything through interrupts. For every pixel there is an interrupt call. Or something like that. The proper VGA driver, however, is able to take advantage of DMA and other enhancements that make the whole process WAY less CPU-intensive.
Added 2: Ah, and for OpenGL/DirectX - that's another feature of today's graphics cards. They are optimized for 3D operations in exclusive mode. That's where the speed comes from. The normal GUI just utilizes basic 2D drawing procedures, so it gets to send the contents of the whole screen every time it wants an update. 3D applications, however, send a bunch of textures and triangle definitions to the VRAM (video RAM) and then just reuse them for drawing. They just say something like "take triangle set #38 with texture set #25 and draw them". All these things are cached in the VRAM, so this is again way faster.
I'm not sure, but I would suspect that the modern 3D-accelerated GUIs (Vista Aero, compiz on Linux, etc.) also might take advantage of this. They could send common bitmaps to the VGA up front and then just reuse them directly from the VRAM. Any application-drawn surfaces however would still need to be sent directly every time for updates.
Added 3: More ideas. :) The modern GUIs for Windows, Linux, etc. are widget-oriented (that's control-oriented for Windows speakers). The problem with this is that each widget has its own drawing code and associated drawing surface (more or less). When the window needs to get redrawn, it calls the drawing code for all its child widgets, which in turn call the drawing code for their child widgets, etc. Every widget redraws its whole surface, even though some of it is obscured by other widgets. With the above-mentioned clipping techniques some of this drawn information is immediately discarded to reduce flickering and other artifacts. But still it's lots of manual drawing code that includes bitmap blitting, stretching, skewing, drawing lines, text, flood-filling, etc. And all this gets translated into a series of putpixel calls that get filtered through clipping filters/masks and other stuff. Ah, yes, and alpha blending has also become popular today for nice effects, which means even more work. So... yes, you could say this is because of lots of abstraction and indirection. But... could you really do it any better? I don't think so. Only 3D techniques might help, because they take advantage of the GPU for alpha calculations and clipping.
Let's begin by saying that writing libraries is much harder than writing stand-alone code. The requirement that your abstraction be reusable in as many contexts as possible, including contexts which you haven't thought of yet, makes the task challenging even for experienced programmers.
Amongst libraries, writing a GUI toolkit library is a famously difficult problem. This is because the programs which use GUI libraries range over a very wide variety of domains with very different needs. Mr Why and Martin DeMollo discussed the requirements placed on GUI libraries a little while ago.
Writing GUI widgets themselves is difficult because computer users are very sensitive to minute details of the behavior of the interface. Non-native widgets never feel right, do they? In order to get non-native widgets right -- in order to get any widget right, in fact -- you need to spend an inordinate amount of time tweaking the details of the behavior.
So, GUIs are slow because of the inefficiencies introduced by the abstraction mechanisms used to create highly reusable components, added to the shortness of time available to optimize the code once so much time has been spent just getting the behavior right.
Uhm, that's quite a lot.
The most simple, but probably obvious, answer is that the programmers behind these GUI apps are really bad programmers. You can go a long way in writing code which does the most bizarre things and it will be faster, but few people seem to care how to do this, or they deem it an expensive, non-profitable, time-wasting effort.
To set things straight: off-loading computations to the GPU won't necessarily fix any problems. The GPU is just like the CPU except it's less general-purpose and more of a data-parallel processor. It can do graphics computations exceptionally well. Whatever graphics API/OS and driver combination you have doesn't really matter that much... well, OK, with Vista as an example, they changed the desktop composition engine. This engine is far better at compositing only that which has changed, and since the number one bottleneck for GUI apps is redrawing, that is a neat optimization strategy. The idea is to virtualize your computational needs and only update the smallest change every time.
Win32 sends WM_PAINT messages to windows when they need to be redrawn; this can be a result of windows occluding each other. However, it's up to the window itself to figure out what actually changed. Often nothing changed, or the change was trivial enough that it could simply have been performed on top of whatever topmost surface you had.
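The mechanism, for reference, is the rcPaint member of PAINTSTRUCT: BeginPaint hands the window its invalid region, and a careful window procedure repaints only that rectangle (a minimal sketch from inside a window procedure):

case WM_PAINT:
{
    PAINTSTRUCT ps;
    HDC hdc = BeginPaint(hwnd, &ps);
    // ps.rcPaint is the dirty rectangle; redraw only it, not the whole client area
    FillRect(hdc, &ps.rcPaint, (HBRUSH)(COLOR_WINDOW + 1));
    EndPaint(hwnd, &ps);
    return 0;
}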
This kind of graphics handling doesn't necessarily exist today. I would say that people have refrained from writing really efficient and virtualizing rendering solutions because the benefit/cost ratio is rather poor.
Something Windows Presentation Foundation (WPF) does, which I think is far superior to most other GUI APIs, is that it splits layout updates and rendering updates into two separate passes. And while WPF is managed code, the rendering engine is not. What happens with rendering is that the managed WPF rendering engine builds a command queue (this is what DirectX and OpenGL do), which is then handed off to the native rendering engine. What's a bit more elegant here is that WPF will then try to retain any computation which didn't change the visual state. A trick, if you will, where you avoid costly rendering calls for things that don't have to be rendered (virtualizing).
In contrast to WM_PAINT, which tells a Win32 window to repaint itself, a WPF app would check what parts of that window require repainting and only repaint the smallest change.
Now WPF is not supreme, it's a solid effort from Microsoft but it's not the holy grail yet... the code which runs the pipeline could still be improved and the memory footprint of any managed app is still more than I would want. But I hope this is the kind of answer you are looking for.
WPF is able to do some things asynchronously rather decently, which is a huge deal if you want to make a really responsive, low-latency/low-CPU UI. Asynchronous operations are more than off-loading work onto a different thread.
To summarize: a slow and expensive GUI means too much repainting, and the kind of repainting which is very expensive, i.e. the entire surface area.
It does to some degree depend on the language. You might have noticed that Java and RealBasic applications are a fair bit slower than their C-based (C++, C#, Objective-C) counterparts.
However, GUI applications are much more complex than command-line apps. The terminal window needs only to draw a simple window that doesn't support buttons.
There are also multiple loops for extra inputs and features.
I think that you can find some interesting thoughts on this topic in "Window System Design: If I had it to do over again in 2002" by James Gosling (the Java guy, also known for his work on pre-X11 windowing systems). Available online here[pdf].
The article focuses on the positive side (how to make it fast), not on the negative side (what's making it slow), but it is still a good read on the topic.

GDI has been accelerated. Does anyone know when this happened?

To sketch the background of this question : at work we use Dell Precision workstations. My current one has got an NVidia Quadro FX1700.
My team is developing the graphics components for a real time data acquisition system.
So we are always looking out to see if the graphics operations don't use up too much CPU time. For quick checks, we have a couple of test programs that we run, which draw scenes at a specified rate (e.g. 10 fps), and we use plain old Task Manager to see where CPU usage is at.
One of these programs is heavy on GDI DrawRectangle calls (which are filled). This program always used to consume about 40% CPU user time, but since about a year or so ago (just guessing here) it only uses about 2-3% kernel time. So clearly some hardware acceleration is going on here. And indeed, if I turn HW acceleration off, we're back to the original 40% user time.
All of this is of course good news, because we were already thinking about going to OpenGL. Year after year GDI never got the benefit of hardware acceleration. Until some time ago that is.
Does anyone know anything more about this? Did Microsoft do this? Or is it gfx-card vendor specific?
Edit
Thnx for the answers already (Ferrucio, Torlack and Rob Walker), but my question has not been answered yet. We are talking about a filled rectangle here. Probably the most trivial function to optimize: just send a couple of coordinates to the GPU and let it rip. Yet it was always implemented on the CPU side.
So far the answers lead me to believe that NVidia finally saw the light (after more than 10 years) and accelerated GDI. And no announcement about this? There's no information to be found on this at all.
My internal customers ask me about the speedup of the graphics, and all I can say is "well, we got lucky".
Edit2
It does seem like it is driver related according to the different answers. So, then NVidia has made crappy GDI drivers for its workstation cards for years. It really was an accepted fact within this company that GDI was not accelerated and all the tests confirmed this.
GDI works by calling various functions in the graphics device driver. There are a core set of functions that every driver must implement. Other functions may be implemented by the driver. If they are not, GDI will perform those functions itself.
If a particular function is not implemented in hardware then there is no point in the driver doing a software implementation of that function since GDI can probably do a better job. GDI is extremely well optimized for performance.
As more functions are implemented in hardware, not only do those functions perform much better, but there is also less work for GDI to do, resulting in less CPU time spent on graphics.
It may also be the case that the graphics card vendor, in an effort to get a card out to market quickly, may not have implemented all possible hardware functions that the card could perform. Later versions of that driver may then implement that functionality, resulting in improved performance.
GDI has been accelerated for quite a while. As far as I recall, it did depend on your hardware and drivers to some extent. Why you've seen such a jump in performance only recently seems odd.
However, don't get too happy - GDI hardware acceleration is no longer supported in Vista. The new desktop composition engine doesn't support it. However, in Vista you do gain fast moving of windows since content doesn't always have to be re-drawn by the application (and no tearing I think?).
To some extent, GDI has always been accelerated. Even back in the old Win31 days I remember buying cards (Number9) whose main selling point was hardware acceleration of GDI.
Vista has a new display driver architecture which would provide an opportunity for a dramatic increase in performance. Are you comparing like hardware/OS combinations?
A lot of the 2D stuff has been accelerated for some time; each new major version of Windows has changed the way display drivers work. I believe it was with XP that Windows revamped its window manager layer. Hard to compare, really, since XP is more similar to Windows 2000/NT than any earlier versions.
Some more info on wikipedia, Development of Windows XP.
Windows 2000, certainly, was the first NT-kernel-based Windows to include DirectX, and had some graphical improvements as well. Windows 2000 (wikipedia)
I don't believe there have been major changes to the display driver model/2D subsystem between releases. So if you noticed a change like that, it's likely due to something nVidia did.
