Effect of Trig Functions on Frame Rate in Processing

I'm working on a ping-pong simulation program in which I have to calculate deflection angles based on the angle of the paddle and the incoming ball trajectory. I've developed a system of equations that calculates the deflection. Unfortunately, it involves a system of at least thirty trig functions that can't really be simplified and must be evaluated once during each call to draw().
My question is: How much will this slow my program?
Without the functions implemented, the program runs fine on my new iMac, but on any older computer it already lags a great deal as a result of OpenGL and probably some inefficiency on my part.
Will running that many trig functions substantially affect my frame rate? If so, would replacing the trig functions with Taylor polynomials and then simplifying be, in theory, any better?

After experimenting a little, I found that, as probably should have been obvious to me, running that many functions seriously affects frameRate. I tried simplifying with Taylor polynomials, but found that I couldn't really reduce my equations much without introducing a great deal of error. The solution I settled on was a three-dimensional array, built in setup(), that precalculates a range of values suitable for my purposes. Thanks to everyone who replied for the help; I hope I didn't waste too much of your time.
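For the curious, here is a rough sketch of the lookup-table idea. It's written in C++ rather than Processing, it's only a 1D table for sin(), and the step count is an arbitrary choice; my actual table was a 3D version of the same trick tailored to the paddle-angle and trajectory ranges I needed.

#include <cmath>
#include <vector>

// Hypothetical lookup table: precompute sin() once at a fixed angular
// resolution, then replace per-frame sin() calls with an array read.
struct SinTable {
    static const int STEPS = 3600;                 // 0.1-degree steps (assumed resolution)
    std::vector<float> values;
    SinTable() : values(STEPS) {
        const double TWO_PI = 6.283185307179586;
        for (int i = 0; i < STEPS; ++i)
            values[i] = (float)std::sin(i * TWO_PI / STEPS);
    }
    float sinLookup(double radians) const {
        const double TWO_PI = 6.283185307179586;
        int idx = (int)(radians / TWO_PI * STEPS) % STEPS;
        if (idx < 0) idx += STEPS;                 // wrap negative angles around
        return values[idx];
    }
};

The same construction extends to a multi-dimensional table indexed by whatever parameters the equations depend on, at the cost of memory and a small quantization error.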

Related

Game performance optimization interview

This question came up:
You're searching for bottlenecks in your game, but nothing you're changing is making the game any faster, be it anything in the GPU pipeline or the CPU. Nothing is spiking, and the slowness appears to be distributed everywhere. What do you do next?
I was flummoxed. Is it a trick question? When fixing perf issues, I've always assumed that this is the point at which you need to scale everything back. I don't think it's memory allocation, as that would show up in the CPU profile.
I would have asked for more information. "Slow" is a poor indicator of bad performance and is a classification of a symptom rather than a symptom itself. For example, you might describe "slow" as being:
Low frame rate
Poor responsiveness to input
High responsiveness and smooth framerate, but slow game mechanics (i.e.: the player and entities move smoothly but very slowly)
In the case of networked games, apparent network lag
All of these problems have different potential causes and solutions:
Low but consistent frame rate may be due to inefficiencies in your game loop. Simply running your favorite profiler may indicate that large amounts of time are spent in one particular piece of code. In a game I wrote, for instance, I discovered that low FPS was the result of a bad loop that calculated distances between entities multiple times without caching. In another game, I discovered that the data structure I was using to perform lookups against the terrain was O(N) rather than O(1) (python stdlib...ick). You can't diagnose a problem you can't see, and profiling is the first line of defense. (A sketch of that caching fix follows after this list.)
Poor responsiveness may be due to a number of things. If the FPS is high but the controls are sluggish to respond, the API that you're using to access the controls may simply be bad. Some controllers may have crappy drivers that can kill responsiveness. It might even be your game loop: you might simply not be checking for input from the controller frequently enough (perhaps you're not checking on every tick). In one of the aforementioned games, I had an issue where certain actions had a delayed effect: you'd use an item and the game would respond a half second or so later. It turned out that the issue was caused by the client making a full round-trip to the server to perform the action, verify that it happened, and wait for the server to broadcast back that the item was used. Simply having the behavior take place instantaneously on the client remedied the issue.
Slow game mechanics might indicate that game constants simply aren't set high enough. If everything is smooth and beautiful but everything just moves very slowly, it's quite possible that default velocities or accelerations aren't turned up enough.
Network lag can be caused by any number of things: the router you're connected to might be failing, the VPS you're developing against might be on a host that's being DDoSed, you might be using a protocol that's overly (but uniformly) chatty, or you're simply sending too much data over the wire. In a piece of simulation software I wrote in college, the computations were performed on some beefy computers in a lab, while the visualizations were being run on my MBP in my dorm. It turned out that the sheer amount of data that I was sending from the lab computers to my dorm was enough to overload the cheap network switches in the building and drop packets, resulting in horrible lag but perfectly reasonable log output.
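Going back to the first case above (low frame rate), a minimal sketch of the caching fix I described might look like this in C++; the entity structure and names are mine, not the actual game's code.

#include <cmath>
#include <vector>

struct Entity { float x, y; };

// Build an N x N distance table once per frame; every later query this
// frame becomes an array read instead of a fresh sqrt.
std::vector<std::vector<float>> cacheDistances(const std::vector<Entity>& es) {
    size_t n = es.size();
    std::vector<std::vector<float>> dist(n, std::vector<float>(n, 0.0f));
    for (size_t i = 0; i < n; ++i) {
        for (size_t j = i + 1; j < n; ++j) {
            float dx = es[i].x - es[j].x;
            float dy = es[i].y - es[j].y;
            dist[i][j] = dist[j][i] = std::sqrt(dx * dx + dy * dy);
        }
    }
    return dist;
}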
So I guess the answer here is to have the interviewer describe the symptoms more fully. Ali's answer is great, but it could be that there's a more nuanced problem at hand that requires some coaxing to diagnose.
You're searching for bottlenecks in your game, but nothing you're changing is making the game any faster, be it anything in the GPU pipeline or the CPU. Nothing is spiking, and the slowness appears to be distributed everywhere.
It pretty much sounds like the definition of Uniformly Slow Code. Let's assume that is really what is meant here (and not some I/O bottleneck, creation of unnecessary objects in a loop, a poor choice of data structures or algorithms, etc.).
To make a uniformly slow code faster, you usually have to go against good practices, and that is why I usually stop optimizing my code when it is uniformly slow. (I suppose "stop optimizing" is not a good answer at an interview...)
One way to make things faster is to identify an appropriate sequence of small operations, collect them together in one place, and then improve them by hand: a sort of "manual inlining" of these operations followed by high-level simplifications on the code that emerges. It requires good intuition about where this might be worth doing and an excellent understanding of the code involved. This answer calls it bunching and horizontal optimization.
Another thing that might be worth looking into, if you really have uniformly slow code, is Andrei Alexandrescu's optimization tips.
Maybe this is about finding more efficient algorithms. "Micro-optimization" has its limits; you can perfectly optimize a bubble sort, for example, but to get a really big speedup you'd switch to another sorting algorithm.
Also, in games you may introduce different kinds of adjustable quality/speed (or precision/speed) tradeoffs. Typically, all games have some settings that change the level of graphic detail.
anecdotal:
i can tell you what the problem is without actually knowing the answer to the question ;p
sloppy directx calls. too many objects. especially bad on some old dx9 games, since dx9 needed to make a new directdraw call for every object. or something like that, the story goes. basically resulted in the cpu waiting idle for the gpu to process all the messages.
although it's def not the solution to every issue, i thought it was worth mentioning as an interesting piece of information ;p didn't see it in the other comments.
it's almost like having too many pixel shaders, except at least the gpu works at 100% with a mass of those :D good for frying omelettes. (also, using occlusion to save performance and then adding a mass of pixel shaders to that model is a BAD idea)
i hope you can see the humor in this ;p

Performing many small matrix operations in parallel in OpenCL

I have a problem that requires me to do eigendecomposition and matrix multiplication of many (~4k) small (~3x3) square Hermitian matrices. In particular, I need each work item to perform eigendecomposition of one such matrix, and then perform two matrix multiplications. Thus, the work that each thread has to do is rather minimal, and the full job should be highly parallelizable.
Unfortunately, it seems all the available OpenCL LAPACKs are for delegating operations on large matrices to the GPU rather than for doing smaller linear algebra operations inside an OpenCL kernel. As I'd rather not implement matrix multiplication and eigendecomposition for arbitrarily sized matrices in OpenCL myself, I was hoping someone here might know of a suitable library for the job?
I'm aware that OpenCL might be getting built-in matrix operations at some point since the matrix type is reserved, but that is not really of much use right now. There is a similar question here from 2011, but it pretty much just says to roll your own, so I'm hoping the situation has improved since then.
In general, my experience with libraries like LAPACK, fftw, cuFFT, etc. is that when you want to do many really small problems like this, you are better off writing your own for performance. Those libraries are usually written for generality, so you can often beat their performance for specific small problems, especially if you can use unique properties of your particular problem.
I realize you don't want to hear "roll your own" but for this type of problem it is really the best thing to do IMO. You might find a library to do this, but considering the code that you really want (for performance) will not generalize, I doubt it exists. You'll be looking specifically for code to find the eigenvalues of 3x3 matrices. That's less of a library and more of a random code snippet with a suitable license that you can manipulate to take advantage of your specific problem.
In this specific case, you can find the eigenvalues of a 3x3 matrix with the textbook method using the characteristic polynomial. Remember that there is a relatively simple closed form solution for cubic equations: http://en.wikipedia.org/wiki/Cubic_function#General_formula_for_roots.
While I think it is very likely that this approach would be much faster than iterative methods, it would be wise to verify that if performance is an issue.
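To make that concrete, here is a sketch (my own, untested, and written for a real symmetric matrix rather than a complex Hermitian one, though the eigenvalues of both are real and the trigonometric solution of the characteristic cubic is the same idea):

#include <algorithm>
#include <cmath>

// Eigenvalues of a real symmetric 3x3 matrix a[3][3], written to eig[3],
// using the closed-form (trigonometric) solution of the characteristic cubic.
void eigenvalues3x3(const double a[3][3], double eig[3]) {
    const double PI = 3.141592653589793;
    double p1 = a[0][1]*a[0][1] + a[0][2]*a[0][2] + a[1][2]*a[1][2];
    if (p1 == 0.0) {                      // matrix is already diagonal
        eig[0] = a[0][0]; eig[1] = a[1][1]; eig[2] = a[2][2];
        return;
    }
    double q  = (a[0][0] + a[1][1] + a[2][2]) / 3.0;   // trace / 3
    double p2 = (a[0][0]-q)*(a[0][0]-q) + (a[1][1]-q)*(a[1][1]-q)
              + (a[2][2]-q)*(a[2][2]-q) + 2.0*p1;
    double p  = std::sqrt(p2 / 6.0);
    double b[3][3];                       // B = (A - q*I) / p
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            b[i][j] = (a[i][j] - (i == j ? q : 0.0)) / p;
    double detB = b[0][0]*(b[1][1]*b[2][2] - b[1][2]*b[2][1])
                - b[0][1]*(b[1][0]*b[2][2] - b[1][2]*b[2][0])
                + b[0][2]*(b[1][0]*b[2][1] - b[1][1]*b[2][0]);
    double r   = std::min(1.0, std::max(-1.0, detB / 2.0));  // clamp for acos
    double phi = std::acos(r) / 3.0;
    eig[0] = q + 2.0*p*std::cos(phi);                   // largest
    eig[2] = q + 2.0*p*std::cos(phi + 2.0*PI/3.0);      // smallest
    eig[1] = 3.0*q - eig[0] - eig[2];                   // trace is invariant
}

Each OpenCL work item could run something like this on its own matrix, which is exactly the "random code snippet you manipulate for your problem" situation described above.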

CUDA - Simple matrix addition/sum operation

This should be very simple but I could not find an exhaustive answer:
I need to perform A + B = C with matrices, where A and B are two matrices of unknown size (they could be anywhere from 2x2 up to 20,000x20,000).
Should I use cuBLAS with the Sgemm function to calculate this?
I need the maximum speed achievable, so I thought of the cuBLAS library, which should be well optimized.
For any sort of technical computing, you should always use optimized libraries when available. Existing libraries, used by hundreds of other people, are going to be better tested and better optimized than anything you do yourself, and the time you don't spend writing (and debugging, and optimizing) that function yourself can be better spent working on the actual high-level problem you want to solve instead of re-discovering things other people have already implemented. This is just basic specialization of labour stuff; focus on the compute problem you want to solve, and let people who spend their days professionally writing GPGPU matrix routines do that for you.
Only when you are sure that existing libraries don't do what you need -- maybe they solve too general a problem, or make certain assumptions that don't hold in your case -- should you roll your own.
I agree with the others that in this particular case, the operation is pretty straightforward and it's feasible to DIY; but if you're going to be doing anything else with those matrices once you're done adding them, you'd be best off using optimized BLAS routines for whatever platform you're on.
What you want to do would be trivial to implement in CUDA and will be bandwidth limited.
And since CUBLAS 5.0, cublas<t>geam (e.g. cublasSgeam for single precision) can be used for that. It computes the weighted sum of two optionally transposed matrices.
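A hedged sketch of what that call looks like for plain addition (C = 1*A + 1*B of column-major m x n matrices already resident on the device; allocation, error checking and handle creation are omitted, and the variable names are mine):

#include <cublas_v2.h>

// d_A, d_B, d_C: device pointers to m*n floats in column-major order,
// already allocated and filled; computes C = alpha*A + beta*B with
// alpha = beta = 1, i.e. plain element-wise addition.
void addMatrices(cublasHandle_t handle, const float* d_A, const float* d_B,
                 float* d_C, int m, int n) {
    const float alpha = 1.0f;
    const float beta  = 1.0f;
    cublasSgeam(handle,
                CUBLAS_OP_N, CUBLAS_OP_N,   // no transposition of A or B
                m, n,
                &alpha, d_A, m,             // lda = m (column-major, no padding)
                &beta,  d_B, m,             // ldb = m
                d_C, m);                    // ldc = m
}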

Most hazardous performance bottleneck misconceptions

The guys who wrote Bespin (cloud-based, canvas-based code editor [and more]) recently spoke about how they refactored and optimized a portion of the Bespin code because of a misconception that JavaScript was slow. It turned out that when all was said and done, their optimization produced no significant improvements.
I'm sure many of us go out of our way to write "optimized" code based on misconceptions similar to that of the Bespin team.
What are some common performance bottleneck misconceptions developers commonly subscribe to?
In no particular order:
"Ready, Fire, Aim" - thinking you know what needs to be optimized without proving it (i.e. guessing) and then acting on that, and since it doesn't help much, therefore assuming the code must have been optimal to begin with.
"Penny Wise, Pound Foolish" - thinking that optimization is all about compiler optimization, fussing about ++i vs. i++ while mountains of time are being spent needlessly in overblown designs, especially of data structures and databases.
"Swat Flies With a Bazooka" - being so enamored of the fanciest ideas heard in classrooms that they are just used for everything, regardless of scale.
"Fuzzy Thinking about Performance" - throwing around terms like "hotspot" and "bottleneck" and "profiler" and "measure" as if these things were well understood and / or relevant. (I bet I get clobbered for that!) OK, one at a time:
hotspot - What's the definition? I have one: it is a region of physical addresses where the PC register is found a significant fraction of the time. It is the kind of thing PC samplers are good at finding. Many performance problems exhibit hotspots, but only in the simplest programs is the problem in the same place as the hotspot.
bottleneck - A catch-all term used for performance problems, it implies a limited channel constraining how fast work can be accomplished. The unstated assumption is that the work is necessary. In my decades of performance tuning, I have in fact found a few problems like that - very few. Nearly all are of a very different nature. Rather than taking the shortest route from point A to B, little detours are taken, in the form of function calls which take little code, but not little time. Then those detours take further nested detours, sometimes 30 levels deep. The more the detours are nested, the more likely it is that some of them are less than necessary - wasteful, in fact - and it nearly always arises from galloping generality - unquestioning over-indulgence in "abstraction".
profiler - a universal good thing, right? All you gotta do is get a profiler and do some profiling, right? Ever think about how easy it is to fool a profiler into telling you a lot of nothing, when your goal is to find out what you need to fix to get better performance? Somewhere in your call tree, tuck a little file I/O, or a little call to some system routine, or have your evil twin do it without your knowledge. At some point, that will be your problem, and most profilers will miss it completely because the only inefficiency they contemplate is algorithmic inefficiency. Or, not all your routines will be little, and they may not call another routine in a small number of places, so your call graph says there's a link between the two routines, but which call? Or suppose you can figure out that some big percent of time is spent in a path where A calls B calls C. You can look at that and think there's not much you can do about it, when if you could also look at the data being passed in those calls, you could see if it's necessary. Here's a fun project - pick your favorite profiler, and then see how many ways you could fool it. That is, find ways to make the program take longer without the profiler being able to tell what you did, because if you can do it intentionally, you can also do it without intending to.
measure - (i.e. measure time) that's what profilers have done for decades, and they take pride in the accuracy and precision with which they measure. But measure what time? and why with accuracy? Remember, the goal is to precisely locate performance problems, such that you could fruitfully optimize them to gain a speedup. When you get that speedup, it is what it is, regardless of how precisely you estimated it beforehand. If that precision of measurement is bought at the expense of precision of location, then you've just bought apples when what you needed was oranges.
Here's a list of myths about performance.
And this is what happens when one optimizes without a valid profile in hand. All you are doing without a profile is guessing and probably wasting time and money. I could list a bunch of other misconceptions, but many come down to the fact that if the code in question isn't a top resource consumer, it is probably fine as is. Kinda like unrolling a for loop that is doing disk I/O...
If I convert the whole code base over to [Insert xxx latest technology here], it'll be much faster.
Relational databases are slow.
I'm smarter than the optimizer.
This should be optimized.
Java is slow
And, unrelated:
Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.
-jwz
Optimizing the WRONG part of the code (people, use your profiler!).
The optimizer in my compiler is smart, so I don't have to help it.
Java is fast (LOL)
Relational databases are fast (ROTFL LOL LMAO)
"This has to be as fast as possible."
If you don't have a performance problem, you don't need to worry about optimizing performance (beyond just paying attention to using good algorithms).
This misconception also manifests in attempts to optimize performance for every single aspect of your program. I see this most often with people trying to shave every last millisecond off of a low-volume web application's execution time while failing to take into account that network latency will take orders of magnitude longer than their code's execution time, making any reduction in execution time irrelevant anyhow.
My rules of optimization.
Don't optimize
Don't optimize now.
Profile to identify the problem.
Optimize the component that is taking at least 80% of the time.
Find an optimization that is 10 times faster.
My best optimization has been reducing a report from 3 days to 9 minutes; the code I actually optimized was sped up from three days to three minutes.
In my career I have met three people who had been tasked with producing a faster sort on VAX than the native sort. Invariably, they had only been able to produce sorts that took three times longer.
The rules are simple:
Try to use standard library functions first.
Try to use brute-force and ignorance second.
Prove you've got a problem before trying to do any optimization.

How are 3D games so efficient? [closed]

There is something I have never understood. How can a great big PC game like GTA IV use 50% of my CPU and run at 60fps, while a DX demo of a rotating teapot @ 60fps uses a whopping 30%?
Patience, technical skill and endurance.
The first point is that a DX demo is primarily a teaching aid, so it's done for clarity, not speed of execution.
It's a pretty big subject to condense, but games development is primarily about understanding your data and your execution paths to an almost pathological degree.
Your code is designed around two things - your data and your target hardware.
The fastest code is the code that never gets executed - sort your data into batches and only do expensive operations on the data that needs them
How you store your data is key - aim for contiguous access; this allows you to batch-process at high speed
Parallelise everything you possibly can
Modern CPUs are fast, modern RAM is very slow. Cache misses are deadly.
Push as much to the GPU as you can - it has fast local memory so can blaze through the data but you need to help it out by organising your data correctly.
Avoid doing lots of render-state switches (again, batch similar vertex data together) as this causes the GPU to stall
Swizzle your textures and ensure they are powers of two - this improves texture cache performance on the GPU
Use levels of detail as much as you can -- low/medium/high versions of 3D models, switched based on distance from the camera/player - there's no point rendering a high-res version if it's only 5 pixels on screen (see the sketch after this list)
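As a toy sketch of that last LOD point (the thresholds and names are invented, not taken from any particular engine):

enum Lod { LOD_HIGH, LOD_MEDIUM, LOD_LOW };

// Pick a detail level from the squared camera-to-object distance
// (no sqrt needed); the distance thresholds are arbitrary placeholders.
Lod pickLod(float dx, float dy, float dz) {
    float d2 = dx*dx + dy*dy + dz*dz;
    if (d2 < 25.0f * 25.0f)   return LOD_HIGH;
    if (d2 < 100.0f * 100.0f) return LOD_MEDIUM;
    return LOD_LOW;
}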
In general, it's because
The games are being optimal about what they need to render, and
They take special advantage of your hardware.
For instance, one easy optimization you can make involves not actually trying to draw things that can't be seen. Consider a complex scene like a cityscape from Grand Theft Auto IV. The renderer isn't actually rendering all of the buildings and structures. Instead, it's rendering only what the camera can see. If you could fly around to the back of those same buildings, facing the original camera, you would see a half-built hollowed-out shell structure. Every point that the camera cannot see is not rendered -- since you can't see it, there's no need to try to show it to you.
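One common way the "only render what the camera can see" idea is implemented (this is a generic sketch, not GTA's actual code) is to cull each object's bounding sphere against the six planes of the view frustum before it is ever submitted for drawing:

struct Plane  { float nx, ny, nz, d; };   // n·p + d = 0, normal pointing into the frustum
struct Sphere { float cx, cy, cz, r; };

// Returns false if the sphere lies entirely outside any one frustum plane.
bool maybeVisible(const Sphere& s, const Plane planes[6]) {
    for (int i = 0; i < 6; ++i) {
        float dist = planes[i].nx*s.cx + planes[i].ny*s.cy
                   + planes[i].nz*s.cz + planes[i].d;
        if (dist < -s.r) return false;    // fully behind this plane: cull it
    }
    return true;                          // potentially visible: submit for drawing
}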
Furthermore, optimized instructions and special techniques exist when you're developing against a particular set of hardware, to enable even better speedups.
The other part of your question is why a demo uses so much CPU:
... while a DX demo of a rotating Teapot @ 60fps uses a whopping 30%?
It's common for demos of graphics APIs (like dxdemo) to fall back to what's called a software renderer when your hardware doesn't support all of the features needed to show a pretty example. These features might include things like shadows, reflection, ray-tracing, physics, et cetera.
This mimics the function of a completely full-featured hardware device which is unlikely to exist, in order to show off all the features of the API. But since the hardware doesn't actually exist, it runs on your CPU instead. That's much more inefficient than delegating to a graphics card -- hence your high CPU usage.
3D games are great at tricking your eyes. For example, there is a technique called screen space ambient occlusion (SSAO) which will give a more realistic feel by shadowing those parts of a scene that are close to surface discontinuities. If you look at the corners of your wall, you will see they appear slightly darker than the centers in most cases.
The very same effect can be achieved using radiosity, which is based on rather accurate simulation. Radiosity will also take into account more effects of bouncing lights, etc. but it is computationally expensive - it's a ray tracing technique.
This is just one example. There are hundreds of algorithms for real-time computer graphics, and they are essentially based on good approximations and typically make a lot of assumptions. For example, spatial sorting must be chosen very carefully depending on the speed and typical position of the camera, as well as how much the scene geometry changes.
These 'optimizations' are huge - you can implement an algorithm efficiently and make it run 10 times faster, but choosing a smart algorithm that produces a similar result ("cheating") can make you go from O(N^4) to O(log(N)).
Optimizing the actual implementation is what makes games even more efficient, but that is only a linear optimization.
Eeeeek!
I know that this question is old, but it's exciting that no one has mentioned VSync!!!???
You compared the CPU usage of the game at 60fps to CPU usage of the teapot demo at 60fps.
Isn't it apparent that both run (more or less) at exactly 60fps? That leads to the answer...
Both apps run with vsync enabled! This means (dumbed down) that the rendering framerate is locked to the "vertical blank interval" of your monitor. The graphics hardware (and/or driver) will only render at a maximum of 60fps. 60fps = 60Hz (Hz = per second) refresh rate. So you probably use a rather old, flickering CRT or a common LCD display. On a CRT running at 100Hz you will probably see frame rates of up to 100fps. VSync also applies in a similar way to LCD displays (they usually have a refresh rate of 60Hz).
So the teapot demo may actually run much more efficiently! If it uses 30% of CPU time (compared to 50% CPU time for GTA IV), then it probably uses less CPU time each frame and just waits longer for the next vertical blank interval. To compare both apps, you should disable vsync and measure again (you will measure much higher fps for both apps).
Sometimes it's OK to disable vsync (most games have an option for it in their settings). Sometimes you will see "tearing artefacts" when vsync is disabled.
You can find details of it and why it is used at wikipedia: http://en.wikipedia.org/wiki/Vsync
Whilst many answers here provide excellent indications of how, I will instead answer the simpler question of why.
GTA4 took $400 million in its first week.
Crytek wrote an extremely impressive graphics demo to allow nVidia to 'show off' at a trade show. The resulting impressions got them the leg up to create what would become Far Cry.
Valve's 2005 revenue and operating profit have been stated as 70 and 55 million USD respectively.
Perhaps the best example (certainly one of the best known) is id Software. They realised very early, in the days of Commander Keen (well before 3D), that coming up with a clever way to achieve something [1] that was graphically superior to the competition, even if it relied on modern hardware (in this case an EGA graphics card!), would make your game stand out. This was true, but they further realised that, rather than having to come up with new games and content themselves, they could licence the technology, thus getting income from others whilst being able to develop the next generation of engine and leapfrog the competition again.
The abilities of these programmers (coupled with business savvy) are what made them rich.
That said, it is not necessarily money that motivates such people. It is likely just as much the desire to achieve, to accomplish. The money they earned in the early days simply means that they now have time to devote to what they enjoy. And whilst many have outside interests, almost all still program and try to work out ways to do better than the last iteration.
Put simply the person who wrote the teapot demo likely had one or more of the following issues:
less time
less resources
less reward incentive
less internal and external competition
lesser goals
less talent
The last may sound harsh [2], but clearly there are some who are better than others; bell curves sometimes have extreme ends, and those people tend to be attracted to the corresponding extreme ends of what is done with that skill.
The lesser goals one is actually likely to be the main reason. The target of the teapot demo was just that, a demo. But not a demo of the programmer's skill [3]. It would be a demo of one small facet of a (big) OS, in this case DX rendering.
To those viewing the demo, it wouldn't matter if it used way more CPU than required, so long as it looked good enough. There would be no incentive to eliminate waste when there would be no beneficiary. In comparison, a game would love to have spare cycles for better AI, better sound, more polygons, more effects.
[1] In that case, smooth scrolling on PC hardware.
[2] Likely more than me, so we're clear about that.
[3] Strictly speaking it would have been a demo to his/her manager too, but again the drive there would be time and/or visual quality.
Because of a few reasons
3D game engines are highly optimized
most of the work is done by your graphics adapter
50%? Hmm, let me guess: you have a dual core and only one core is used ;-)
EDIT: To give a few numbers
2.8 GHz Athlon-64 with NV-6800 GPU. The results are:
CPU: 72.78 Mflops
GPU: 2440.32 Mflops
Sometimes a scene may have more going on than it appears. For example, a rotating teapot with thousands of vertices, environment mapping, bump mapping, and other complex pixel shaders all being rendered simultaneously amounts to a whole lot of processing. A lot of times these teapot demos are simply meant to show off some sort of special effect. They also may not always make the best use of the GPU when absolute performance isn't the goal.
In a game you may see similar effects but they're usually done in a compromised fashion in effort to maximize the frame rate. These optimizations extend to everything you see in the game. The issue becomes, "How can we create the most spectacular and realistic scene with the least amount of processing power?" It's what makes game programmers some of the best optimizers around.
Scene management. kd-trees, frustum culling, BSPs, hierarchical bounding boxes, potentially visible sets.
LOD. Switching out lower-detail versions to substitute in for far-away objects.
Impostors. Like LOD, but not even an object, just a picture or 'billboard'.
SIMD.
Custom memory management. Aligned memory, less fragmentation.
Custom data structures (i.e. no STL, relatively minimal templating).
Assembly in places, mainly for SIMD.
Among all the qualified and good answers given, the one that matters most is still missing: the CPU utilization counter of Windows is not very reliable. I guess that this simple teapot demo just calls the rendering function in its idle loop, blocking at the buffer swap.
Now the Windows CPU utilization counter just looks at how much CPU time is spent within each process, but not how this CPU time is used. Try adding a
Sleep(0);
just after returning from the rendering function, and compare.
In addition, there are many many tricks from an artistic standpoint to save computational power. In many games, especially older ones, shadows are precalculated and "baked" right into the textures of the map. Many times, the artists tried to use planes (two triangles) to represent things like trees and special effects when it would look mostly the same. Fog in games is an easy way to avoid rendering far-off objects, and often, games would have multiple resolutions of every object for far, mid, and near views.
The core of any answer should be this -- the transformations that 3D engines perform are mostly specified in additions and multiplications (linear algebra) with no branches or jumps, and the operations for drawing a single frame are often specified in a way that many such add-mul jobs can be done in parallel. GPU cores are very good at add-muls, and they have dozens or hundreds of add-mul cores.
The CPU is left with doing simple stuff -- like AI and other game logic.
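To see why this maps so well onto GPU hardware, write out a single vertex transform: a 4x4 matrix times a 4-vector is just 16 multiplies and 12 adds with no branching, and millions of vertices can go through it independently (a plain C++ sketch, not any engine's code):

// Column-vector convention: out = M * v, with M stored row-major here.
void transform(const float M[4][4], const float v[4], float out[4]) {
    for (int row = 0; row < 4; ++row) {
        out[row] = M[row][0]*v[0] + M[row][1]*v[1]
                 + M[row][2]*v[2] + M[row][3]*v[3];   // pure add-mul, no branching
    }
}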
How can a great big PC game like GTA IV use 50% of my CPU and run at 60fps, while a DX demo of a rotating teapot @ 60fps uses a whopping 30%?
While GTA is quite likely to be more efficient than the DX demo, measuring CPU efficiency this way is essentially broken. Efficiency could be defined, for example, by how much work you do per given time. A simple counterexample: spawn one thread per logical CPU and let a simple infinite loop run on it. You will get CPU usage of 100%, but it is not efficient, as no useful work is done.
This also leads to an answer: how can a game be efficient? When programming "great big games", a huge effort is dedicated to optimize the game in all aspects (which nowadays usually also includes multi-core optimizations). As for the DX demo, its point is not running fast, but rather demonstrating concepts.
I think you should take a look at GPU utilisation rather than CPU... I bet the graphics card is much busier in GTA IV than in the teapot sample (where it should be practically idle).
Maybe you could use something like this monitor to check that:
http://downloads.guru3d.com/Rivatuner-GPU-Monitor-Vista-Sidebar-Gadget-download-2185.html
Also, the framerate is something to consider: maybe the teapot sample is running at full speed (maybe 1000fps) while most games are limited to the refresh frequency of the monitor (about 60fps).
Look at the answer on vsync; that is why they are running at the same frame rate.
Secondly, CPU usage is misleading in a game. A simplified explanation is that the main game loop is just an infinite loop:
while(1) {
update();
render();
}
Even if your game (or in this case, teapot) isn't doing much you are still eating up CPU in your loop.
The 50% CPU in GTA is "more productive" than the 30% in the demo, since the demo more than likely isn't doing much at all, while GTA is updating tons of details. Even adding a "Sleep(10)" to the demo will probably drop its CPU usage by a ton.
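For illustration only (update/render are placeholders, and 10 ms is the arbitrary figure from the comment above), the change being described is roughly:

#include <chrono>
#include <thread>

void update();   // game or demo logic, assumed to exist elsewhere
void render();   // drawing, assumed to exist elsewhere

void demoLoop() {
    while (true) {
        update();
        render();
        // Yield instead of spinning flat out; measured CPU usage drops sharply,
        // at the cost of capping the loop well below its maximum rate.
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}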
Lastly, look at GPU usage. The demo is probably taking <1% on a modern video card, while GTA will probably be taking the majority during gameplay.
In short, your benchmarks and measurements aren't accurate.
The DX teapot demo is not using 30% of the CPU doing useful work. It's busy-waiting because it has nothing else to do.
From what I know of the Unreal series, some conventions are broken, like encapsulation. Code is compiled to bytecode or directly into machine code depending on the game. Also, objects are rendered and packaged in the form of meshes, and things such as textures, lighting and shadows are precalculated, whereas pure 3D animation requires this to be done in real time. When the game is actually running there are also some optimizations, such as rendering only the visible parts of an object and displaying texture detail only when close up. Finally, it's probable that video games are designed to get the best out of a platform at a given time (e.g. Intel x86 MMX/SSE, DirectX, ...).
I think there is an important part of the answer missing here. Most of the answers tell you to "Know your data". The fact is that you must, in the same way and with the same degree of importance, also know your:
CPU (clock and caches)
Memory (frequency and latency)
Hard drive (in terms of speed and seek times)
GPU (#cores, clock and its Memory/Caches)
Interfaces: Sata controllers, PCI revisions, etc.
BUT, on top of that, even with current modern computers, you would never be able to play a real 1080p video at >>30fps at full numeric precision (a single 1080p image at 64 bits per pixel takes roughly 16 MB). The reason is sampling/precision. A video game would never use double precision (64 bits) for pixels, images, data, etc., but rather a lower, custom precision (~4-8 bits) and sometimes even less, rescaled with interpolation techniques to keep computation time reasonable.
There are other techniques as well, such as clipping the data (both with the OpenGL standard and in software implementations), data compression, etc. Keep also in mind that current GPUs can be >300 times faster than current CPUs in terms of raw hardware capability. However, a good programmer may only get a 10-20x factor, unless the problem is fully optimized and completely parallelizable (particularly task-parallelizable).
From experience, I can tell you that optimization is like an exponential curve: to reach optimal performance, the time required can grow enormously.
So, to get back to the teapot: you should look at how its geometry is represented and sampled, and at what precision, versus what you see in GTA 5, in terms of geometry/textures and, most importantly, the details (precision, sampling, etc.).

Resources