Game performance optimization interview - performance

This question came up:
You're searching for bottlenecks in your game, but nothing you're changing is making the game any faster, be it anything in the GPU pipeline or the CPU. Nothing is spiking, and the slowness appears to be distributed across everywhere. What do you do next?
I was flummoxed. Is it a trick question? When fixing perf issues, I've always assumed that this is the point at which you need to scale everything back. I don't think it's memory allocation, as that shows up in the CPU profile.

I would have asked for more information. "Slow" is a poor indicator of bad performance and is a classification of a symptom rather than a symptom itself. For example, you might describe "slow" as being:
Low frame rate
Poor responsiveness to input
High responsiveness and smooth framerate, but slow game mechanics (i.e.: the player and entities move smoothly but very slowly)
In the case of networked games, apparent network lag
All of these problems have different potential causes and solutions:
Low but consistent frame rate may be due to inefficiencies in your game loop. Simply running your favorite profiler may indicate that large amounts of time are spent in one particular piece of code. In a game I wrote, for instance, I discovered that low FPS was the result of a bad loop that calculated distances between entities multiple times without caching. In another game, I discovered that the data structure I was using to perform lookups against the terrain was O(N) rather than O(1) (python stdlib...ick). You can't diagnose a problem you can't see, and profiling is the first line of defense.
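As an illustration of that caching fix, here is a minimal C++ sketch; the Entity type and its fields are assumptions for the example, not the original game's code:

    #include <cmath>
    #include <vector>

    // Hypothetical entity with a 2D position; names are illustrative only.
    struct Entity {
        float x = 0.0f, y = 0.0f;
    };

    // Compute each pairwise distance once per frame and cache it, instead of
    // calling a distance function repeatedly from several gameplay systems.
    class DistanceCache {
    public:
        explicit DistanceCache(const std::vector<Entity>& entities)
            : count_(entities.size()), d_(count_ * count_, 0.0f) {
            for (size_t i = 0; i < count_; ++i) {
                for (size_t j = i + 1; j < count_; ++j) {
                    const float dx = entities[i].x - entities[j].x;
                    const float dy = entities[i].y - entities[j].y;
                    const float d = std::sqrt(dx * dx + dy * dy);
                    d_[i * count_ + j] = d;
                    d_[j * count_ + i] = d;
                }
            }
        }

        float distance(size_t i, size_t j) const { return d_[i * count_ + j]; }

    private:
        size_t count_;
        std::vector<float> d_;
    };

Rebuilding the cache once per tick lets AI, collision, and audio code read distance(i, j) instead of recomputing square roots all over the frame.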
Poor responsiveness may be due to a number of things. If the FPS is high but the controls are sluggish to respond, the API that you're using to access the controls may simply be bad. Some controllers may have crappy drivers that can kill responsiveness. It might even be your game loop: you might simply not be checking for input from the controller frequently enough (perhaps you're not checking on every tick). In one of the aforementioned games, I had an issue where certain actions had a delayed effect: you'd use an item and the game would respond a half second or so later. It turned out that the issue was caused by the client making a full round-trip to the server to perform the action, verify that it happened, and wait for the server to broadcast back that the item was used. Simply having the behavior take place instantaneously on the client remedied the issue.
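The client-side fix described above might look roughly like the following; ItemAction, applyLocally, and sendToServer are placeholder names, and a real game would still reconcile with the server's authoritative result afterwards:

    #include <iostream>

    // Hedged sketch of "respond locally, confirm with the server later".
    struct ItemAction {
        int itemId = 0;
    };

    void applyLocally(const ItemAction& a) {
        // Update local state / play the effect immediately.
        std::cout << "using item " << a.itemId << " now\n";
    }

    void sendToServer(const ItemAction& a) {
        // Fire-and-forget message; the authoritative server may later correct us.
        (void)a;
    }

    void onUseItem(const ItemAction& action) {
        applyLocally(action);   // instant feedback, no round-trip wait
        sendToServer(action);   // confirmation happens in the background
    }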
Slow game mechanics might indicate that game constants simply aren't set high enough. If everything is smooth and beautiful but everything just moves very slowly, it's quite possible that default velocities or accelerations aren't turned up enough.
Network lag can be caused by any number of things: the router you're connected to might be failing, the VPS you're developing against might be on a host that's being DDoSed, you might be using a protocol that's overly (but uniformly) chatty, or you're simply sending too much data over the wire. In a piece of simulation software I wrote in college, the computations were performed on some beefy computers in a lab, while the visualizations were being run on my MBP in my dorm. It turned out that the sheer amount of data that I was sending from the lab computers to my dorm was enough to overload the cheap network switches in the building and drop packets, resulting in horrible lag but perfectly reasonable log output.
So I guess the answer here is to have the interviewer describe the symptoms more fully. Ali's answer is great, but it could be that there's a more nuanced problem at hand that requires some coaxing to diagnose.

You're searching for bottlenecks in your game, but nothing you're changing is making the game any faster, be it anything in the GPU pipeline or the CPU. Nothing is spiking, and the slowness appears to be distributed across everywhere.
It pretty much sounds like the definition of Uniformly Slow Code. Let's assume it is really what is meant by this (and not some I/O bottleneck or creation of unnecessary objects in a loop or some poor choice for the datastructures or for the algorithms, etc).
To make a uniformly slow code faster, you usually have to go against good practices, and that is why I usually stop optimizing my code when it is uniformly slow. (I suppose "stop optimizing" is not a good answer at an interview...)
One way to make things faster is to identify an appropriate sequence of small operations, collect them together in one place, and then improve them by hand; sort of "manually inlining" these operations and then doing high-level simplifications on the code that emerges. It requires good intuition about where this might be worth doing and an excellent understanding of the involved code. This answer calls it bunching and horizontal optimization.
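A contrived C++ sketch of one way that "bunching" idea can play out, assuming a hypothetical Entity with position, velocity, and health fields: three tidy per-entity passes are collapsed into one, trading modularity for fewer traversals and better cache behaviour.

    #include <vector>

    // Illustrative entity; field names are assumptions for the sketch.
    struct Entity {
        float x = 0, y = 0, vx = 0, vy = 0, health = 100, regen = 0.1f;
    };

    // "Uniformly slow" version: three tidy, separate passes over all entities.
    void updateSeparate(std::vector<Entity>& es, float dt) {
        for (auto& e : es) { e.vx *= 0.99f; e.vy *= 0.99f; }          // drag
        for (auto& e : es) { e.x += e.vx * dt; e.y += e.vy * dt; }    // integrate
        for (auto& e : es) { e.health += e.regen * dt; }              // regen
    }

    // "Bunched" version: the same work collected into one pass, so each entity
    // is pulled into cache once and the loop overhead is paid once.
    void updateBunched(std::vector<Entity>& es, float dt) {
        for (auto& e : es) {
            e.vx *= 0.99f;
            e.vy *= 0.99f;
            e.x += e.vx * dt;
            e.y += e.vy * dt;
            e.health += e.regen * dt;
        }
    }

As the answer notes, this kind of merging goes against modularity and other good practices, which is exactly the trade-off being described.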
Another thing that might be worth looking into, if you really have uniformly slow code, is Andrei Alexandrescu's optimization tips.

Maybe this is about thinking about more efficient algorithms. "Micro-optimization" has its limits; you can perfectly optimize a bubble sort, for example, but to get a really big speedup you'd switch to a different sorting algorithm altogether.
Also, in games you can introduce different kinds of adjustable quality/speed (or precision/speed) tradeoffs. Typically all games have some settings that change the graphics detail level.
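A minimal sketch of such a quality knob; the specific numbers and names are purely illustrative, not taken from any particular engine:

    // A single detail setting that trades visual quality for speed.
    enum class Detail { Low, Medium, High };

    struct GraphicsSettings {
        int maxParticles;
        int shadowMapSize;
        float drawDistance;
    };

    GraphicsSettings settingsFor(Detail d) {
        switch (d) {
            case Detail::Low:    return {  500,  512,  300.0f };
            case Detail::Medium: return { 2000, 1024,  600.0f };
            case Detail::High:   return { 8000, 2048, 1200.0f };
        }
        return { 2000, 1024, 600.0f };  // unreachable fallback keeps compilers happy
    }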

anecdotal:
i can tell you what the problem is without actually knowing the answer to the question ;p
sloppy directx calls. too many objects. especially bad on some old dx9 games, since dx9 needed to make a new draw call for every object. or something like that, the story goes. basically resulted in the cpu waiting idle for the gpu to process all the messages.
although def not the solution to every issue, i thought it was worth mentioning as an interesting piece of information ;p didn't see it in the other comments.
it's almost like having too many pixel shaders, except at least the gpu works at 100% with a mass of those :D good for frying omelettes. (also, using occlusion to save performance and then adding a mass of pixel shaders to that model is a BAD idea)
i hope you can see the humor in this ;p

Related

When is performance gain significant enough to implement that optimization?

Following the textbook, I do measure performance whenever I try to optimize my code. Sometimes, however, the performance gain is rather small and I can't decide whether I should implement that optimization.
For example, when a fix shortens an average response time of 100ms to 90ms under some conditions, should I implement that fix? What if it shortens 200ms to 190ms? How many conditions should I try before I can conclude that it will be beneficial overall?
I guess it's not possible to give a straightforward answer to this, as it depends on too many things, but is there a good rule of thumb that I should follow? Are there any guidelines/best practices?
EDIT: Thanks for the great answers! I guess the moral of the story is that there is no easy way to tell whether you should, but there ARE guidelines that can aid that process: things you should consider, things you shouldn't do, etc. This particular time I ended up implementing the fix, even though it turned a few lines of code into 20-30 lines, because our app is very performance-critical and it was a consistent 10% gain in various realistic cases.
I think the rule of thumb (at least for me) is two-fold:
"It matters if it matters"--in the business world, this generally means that it matters if the clients care. That is, if the end users will "notice" the difference between 100ms and 90ms (I'm not being facetious here), then it matters.
If "it matters," then you will want to test your code thoroughly against a realistic variety of use cases that are likely to arise or at least may arise. If an optimization speeds up code in 50% of cases, but actually runs slower than what you previously had the other 50% of the time, obviously, it may not be worth implementing.
Regarding point 1 above: by suggesting an end user of your software might "notice" a 10ms difference, I don't mean to suggest that they will actually visibly see a difference. But if your app runs on a server with millions of connections and every little speed increase takes a substantial load off the server, that might matter to the client running the server. Or if your app performs extremely time-critical work, this is another case where the result of a 10ms speedup might be noticeable, even if the speedup itself isn't.
The only sensible approach to your question is something along the lines of "when the benefit is large enough to warrant the time you invest in exploring, implementing and testing the optimization."
The "benefit is large enough" is extremely subjective. Can you or your employer sell more units of software if you make this change? Will your user base notice? Will it give you personal gratification to have the fastest-possible code? Which of those or similar questions apply is something only you can know.
By and large, most of the software I have written (in a 20+ year career) has been "fast enough" out of the box, and the code I cared to optimize presented itself as an obvious bottleneck to the end users: Queries taking a long time, scrolling too slow, that sort of thing.
Donald Knuth made the following two statements on optimization:
"We should forget about small
efficiencies, say about 97% of the
time: premature optimization is the
root of all evil" [2]
and
"In established engineering
disciplines a 12 % improvement, easily
obtained, is never considered marginal
and I believe the same viewpoint
should prevail in software
engineering"[5]
src: http://en.wikipedia.org/wiki/Program_optimization
Is the optimization obfuscating your code too much?
Do you really need an optimization? If your app just runs fine, then readability of the code is probably more important
Did you work on the general design and algorithms of your application before trying small hacky optimizations?
You should focus optimisation efforts on the parts of code that account for the most runtime. If a particular piece of code takes up 80% of the total runtime, then optimising it to reduce the time it takes by 5% will have as much impact as reducing the time of the rest of the code by 20%.
In general, optimisations make code less readable (not always, but often). Therefore you should avoid optimising until you are sure that there is a problem.
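Spelling out the arithmetic behind that 80%/5% versus 20%/20% comparison (this is just standard Amdahl-style accounting, not anything specific to the answer above):

    % Time saved by speeding up a fraction p of total runtime by a relative amount r:
    \text{saved} = p \cdot r, \qquad 0.80 \times 0.05 = 0.04 = 0.20 \times 0.20 .

    % More generally (Amdahl's law), speeding up a fraction p by a factor s gives
    % an overall speedup of
    S = \frac{1}{(1 - p) + p/s} .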
If it speeds up your program at all, why not implement it? You have already done the work by creating the new implementation, so you are not doing extra work by applying the new implementation.
Unless the code is THAT much harder to understand.
Also, 100 ms to 90 ms is a 10% gain in performance. A 10% gain should not be taken lightly.
The real question is, if it only took 100 ms to run in the first place, what was the point in trying to optimize it?
As long as it's fast enough, then you don't need to optimise any more. But then, you wouldn't even bother profiling if that was the case...
If the performance gain is small, consider the other factors: maintainability, risk of making the change, understandability, etc. If it reduces the ability to maintain or understand the code, it probably isn't worth doing. If it improves those attributes, then it's more reason to implement the change.
In most cases, your time is more valuable than the computer's. If you think it'll take you half an hour longer to work out what the code is doing later (say if there's a bug in it), and it's only saved you a few seconds, ever, you're at a net loss.
It depends very much on the usage scenario. I'll assume here that the code in question has been profiled and thus it is known to be the bottleneck--i.e. not just "this could be faster", but "the program would give results/finish running faster if this were faster". In situations where this is not the case--e.g. if you spend 99% of your time waiting for more data to come over an ethernet connection--then you should care about correctness but not optimize for speed.
If you are writing a piece of user interface code, what you care about is perceived speed. Generally anything under ~100 ms is perceived as "instant"--no point speeding it up.
If you are writing a piece of code for a giant server farm, then if the cost of your salary to make the code fast is less than the cost of the extra electricity for the server farm, it's worthwhile. (But be sure to prioritize your time.)
If you are writing a piece of code that is used rarely or when unattended, as long as it completes in a semi-sane duration, don't worry about it. Install scripts tend to be of this sort (unless you start running into many minutes, at which point users might start abandoning the install because it's taking too long).
If you are writing code to automate a task for someone else, then if (your time spent coding + their time spent using the optimized code) is less than (their time spent using the slow code), it's worthwhile. If you're doing this in a commercial setting, weight this by your respective salaries.
If you are writing library code that will be used by many thousands of people, always make it faster if you have time to.
If you are under time pressure to simply have something working e.g. as a demo, don't optimize (except through sensible choice of algorithms from libraries) unless the result would be so slow that it isn't even "working".
One of the biggest annoyances for me personally is finding software which perhaps initially fell into one category and then later fell into another, but for which nobody went back to do needed optimizations. Until recently Javascript performance was a great example of this. Moral of the story is: don't just decide once; revisit the issue as the situation demands.

How are 3D games so efficient? [closed]

There is something I have never understood. How can a great big PC game like GTA IV use 50% of my CPU and run at 60fps while a DX demo of a rotating Teapot @ 60fps uses a whopping 30%?
Patience, technical skill and endurance.
First point is that a DX Demo is primarily a teaching aid so it's done for clarity not speed of execution.
It's a pretty big subject to condense but games development is primarily about understanding your data and your execution paths to an almost pathological degree.
Your code is designed around two things - your data and your target hardware.
The fastest code is the code that never gets executed - sort your data into batches and only do expensive operations on data you need to.
How you store your data is key - aim for contiguous access; this allows you to batch process at high speed.
Parallelise everything you possibly can.
Modern CPUs are fast, modern RAM is very slow. Cache misses are deadly.
Push as much to the GPU as you can - it has fast local memory and can blaze through the data, but you need to help it out by organising your data correctly.
Avoid doing lots of render-state switches (again, batch similar vertex data together) as this causes the GPU to stall.
Swizzle your textures and ensure they are powers of two - this improves texture cache performance on the GPU.
Use levels of detail as much as you can - low/medium/high versions of 3D models, and switch based on distance from the camera/player - there's no point rendering a high-res version if it's only 5 pixels on screen.
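A hedged C++ sketch of the batching and contiguous-access advice above; DrawItem, bindMaterial, and drawMesh are placeholder names, not a real renderer's API:

    #include <algorithm>
    #include <vector>

    // Illustrative types; a real renderer's interfaces will differ.
    struct DrawItem {
        int materialId;   // shader + textures + render state
        int meshId;
    };

    void bindMaterial(int) { /* placeholder for the expensive state switch */ }
    void drawMesh(int)     { /* placeholder for submitting the mesh */ }

    // Sort by material, then bind each material once and draw everything that
    // uses it, instead of switching state per object.
    void drawBatched(std::vector<DrawItem>& items) {
        std::sort(items.begin(), items.end(),
                  [](const DrawItem& a, const DrawItem& b) {
                      return a.materialId < b.materialId;
                  });
        int bound = -1;
        for (const auto& it : items) {
            if (it.materialId != bound) {
                bindMaterial(it.materialId);  // state change only at batch boundaries
                bound = it.materialId;
            }
            drawMesh(it.meshId);
        }
    }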
In general, it's because
The games are being optimal about what they need to render, and
They take special advantage of your hardware.
For instance, one easy optimization you can make involves not actually trying to draw things that can't be seen. Consider a complex scene like a cityscape from Grand Theft Auto IV. The renderer isn't actually rendering all of the buildings and structures. Instead, it's rendering only what the camera can see. If you could fly around to the back of those same buildings, facing the original camera, you would see a half-built hollowed-out shell structure. Every point that the camera cannot see is not rendered -- since you can't see it, there's no need to try to show it to you.
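A deliberately crude sketch of that idea: skip anything the camera cannot possibly see before it is ever submitted. Vec3, Camera, and Object are illustrative types, and real engines use full frustum planes, occlusion culling, and spatial structures rather than a single dot product.

    #include <vector>

    // Minimal 3D vector for the sketch.
    struct Vec3 { float x, y, z; };

    static Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    struct Camera {
        Vec3 position;
        Vec3 forward;   // unit vector the camera looks along
    };

    struct Object {
        Vec3 center;
        // ... mesh data would live here
    };

    // Crude "is it in front of the camera at all?" test: skip anything behind
    // the view plane. Real engines go much further than this.
    bool roughlyVisible(const Camera& cam, const Object& obj) {
        return dot(sub(obj.center, cam.position), cam.forward) > 0.0f;
    }

    void render(const Camera& cam, const std::vector<Object>& scene) {
        for (const auto& obj : scene) {
            if (!roughlyVisible(cam, obj)) continue;  // never rendered, never paid for
            // submit obj for drawing ...
        }
    }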
Furthermore, optimized instructions and special techniques exist when you're developing against a particular set of hardware, to enable even better speedups.
The other part of your question is why a demo uses so much CPU:
... while a DX demo of a rotating Teapot @ 60fps uses a whopping 30%?
It's common for demos of graphics APIs (like dxdemo) to fall back to what's called a software renderer when your hardware doesn't support all of the features needed to show a pretty example. These features might include things like shadows, reflection, ray-tracing, physics, et cetera.
This mimics the function of a completely full-featured hardware device which is unlikely to exist, in order to show off all the features of the API. But since the hardware doesn't actually exist, it runs on your CPU instead. That's much more inefficient than delegating to a graphics card -- hence your high CPU usage.
3D games are great at tricking your eyes. For example, there is a technique called screen space ambient occlusion (SSAO) which will give a more realistic feel by shadowing those parts of a scene that are close to surface discontinuities. If you look at the corners of your wall, you will see they appear slightly darker than the centers in most cases.
The very same effect can be achieved using radiosity, which is based on rather accurate simulation. Radiosity will also take into account more effects of bouncing light, etc., but it is computationally expensive - it's a full global-illumination technique rather than a cheap screen-space approximation.
This is just one example. There are hundreds of algorithms for real-time computer graphics, and they are essentially based on good approximations and typically make a lot of assumptions. For example, spatial sorting must be chosen very carefully depending on the speed, the typical position of the camera, as well as the amount of change to the scene geometry.
These 'optimizations' are huge - you can implement an algorithm efficiently and make it run 10 times faster, but choosing a smart algorithm that produces a similar result ("cheating") can make you go from O(N^4) to O(log(N)).
Optimizing the actual implementation is what makes games even more efficient, but that is only a linear optimization.
Eeeeek!
I know that this question is old, but it's exciting that no one has mentioned VSync!!!???
You compared the CPU usage of the game at 60fps to CPU usage of the teapot demo at 60fps.
Isn't it apparent, that both run (more or less) at exactly 60fps? That leads to the answer...
Both apps run with vsync enabled! This means (dumbed-down) that the rendering framerate is locked to the "vertical blank interval" of your monitor. The graphics hardware (and/or driver) will only render at max. 60fps. 60fps = 60Hz (Hz=per second) refresh rate. So you probably use a rather old, flickering CRT or a common LCD display. On a CRT running at 100Hz you will probably see framerates of up to 100Hz. VSync also applies in a similar way to LCD displays (they usually have a refresh rate of 60Hz).
So, the teapot demo may actually run much more efficiently! If it uses 30% of CPU time (compared to 50% CPU time for GTA IV), then it probably uses less CPU time each frame and just waits longer for the next vertical blank interval. To compare both apps, you should disable vsync and measure again (you will measure much higher fps for both apps).
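One hedged way to make that comparison without being fooled by vsync is to time only the work done each frame, before the (possibly blocking) buffer swap; updateAndRender and swapBuffers below are placeholders for whatever the real application does:

    #include <chrono>
    #include <cstdio>

    // Stand-ins for the real per-frame work and the buffer swap; with vsync on,
    // the swap is where the waiting happens, so we time only the work before it.
    void updateAndRender() { /* game or demo work goes here */ }
    void swapBuffers()     { /* present the frame; may block until the vertical blank */ }

    int main() {
        using clock = std::chrono::steady_clock;
        for (int frame = 0; frame < 600; ++frame) {
            const auto t0 = clock::now();
            updateAndRender();                 // the CPU cost we actually care about
            const auto t1 = clock::now();
            swapBuffers();

            const double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
            std::printf("frame %d: %.2f ms of CPU work\n", frame, ms);
        }
        return 0;
    }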
Sometimes it's OK to disable vsync (most games have an option in their settings). Sometimes you will see "tearing artefacts" when vsync is disabled.
You can find details of it and why it is used at wikipedia: http://en.wikipedia.org/wiki/Vsync
Whilst many answers here provide excellent indications of how, I will instead answer the simpler question of why.
GTA IV took $400 million in its first week.
Crytek wrote an extremely impressive graphics demo to allow nVidia to 'show off' at a trade show. The resulting impressions got them the leg up to create what would become Far Cry.
Valve's 2005 revenue and operating profit have been stated as 70 and 55 million USD respectively.
Perhaps the best example (certainly one of the best known) is Id Software. They realised very early, in the days of Commander Keen (well before 3D), that coming up with a clever way to achieve something¹ that was graphically superior to the competition, even if it relied on modern hardware (in this case an EGA graphics card!), would make your game stand out. This was true, but they further realised that, rather than having to come up with new games and content themselves, they could licence the technology, getting income from others whilst being able to develop the next generation of engine and thus leapfrog the competition again.
The abilities of these programmers (coupled with business savvy) are what made them rich.
That said it is not necessarily money that motivates such people. It is likely just as much the desire to achieve, to accomplish. The money they earned in the early days simply means that they now have time to devote to what they enjoy. And whilst many have outside interests almost all still program and try to work out ways to do better than the last iteration.
Put simply the person who wrote the teapot demo likely had one or more of the following issues:
less time
less resources
less reward incentive
less internal and external competition
lesser goals
less talent
The last may sound harsh², but clearly some people are better than others; bell curves have extreme ends, and those people tend to be attracted to the corresponding extremes of what is done with that skill.
The lesser goals one is actually likely to be the main reason. The target of the teapot demo was just that, a demo. But not a demo of the programmer's skill³. It would be a demo of one small facet of a (big) OS, in this case DX rendering.
To those viewing the demo it wouldn't matter if it used way more CPU than required, so long as it looked good enough. There would be no incentive to eliminate waste when there would be no beneficiary. In comparison, a game would love to have spare cycles for better AI, better sound, more polygons, more effects.
¹ in that case, smooth scrolling on PC hardware
² likely more than me, so we're clear about that
³ strictly speaking it would have been a demo to his/her manager too, but again the drive here would be time and/or visual quality
Because of a few reasons
3D game engines are highly optimized
most of the work is done by your graphics adapter
50%? Hm, let me guess: you have a dual core and only one core is used ;-)
EDIT: To give a few numbers
2.8 GHz Athlon 64 with an NV-6800 GPU. The results are:
CPU: 72.78 Mflops
GPU: 2440.32 Mflops
Sometimes a scene may have more going on than it appears. For example, a rotating teapot with thousands of vertices, environment mapping, bump mapping, and other complex pixel shaders all being rendered simultaneously amounts to a whole lot of processing. A lot of times these teapot demos are simply meant to show off some sort of special effect. They also may not always make the best use of the GPU when absolute performance isn't the goal.
In a game you may see similar effects but they're usually done in a compromised fashion in effort to maximize the frame rate. These optimizations extend to everything you see in the game. The issue becomes, "How can we create the most spectacular and realistic scene with the least amount of processing power?" It's what makes game programmers some of the best optimizers around.
Scene management. kd-trees, frustum culling, BSPs, hierarchical bounding boxes, partial visibility sets.
LOD. Swapping in lower-detail versions for far-away objects.
Impostors. Like LOD, but not even an object - just a picture or 'billboard'.
SIMD.
Custom memory management. Aligned memory, less fragmentation.
Custom data structures (i.e. no STL, relatively minimal templating).
Assembly in places, mainly for SIMD.
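As a sketch of the "custom memory management" point, here is a minimal per-frame linear (arena) allocator; it is an illustration of the idea, not any particular engine's allocator:

    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Per-frame linear allocator: allocations are pointer bumps, and everything
    // is freed at once by resetting the offset at the end of the frame.
    class FrameArena {
    public:
        explicit FrameArena(std::size_t bytes) : buffer_(bytes), offset_(0) {}

        // align must be a power of two no larger than alignof(std::max_align_t).
        void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
            const std::size_t p = (offset_ + align - 1) & ~(align - 1);  // align up
            if (p + size > buffer_.size()) return nullptr;               // arena exhausted
            offset_ = p + size;
            return buffer_.data() + p;
        }

        void reset() { offset_ = 0; }  // "free" everything allocated this frame

    private:
        std::vector<std::uint8_t> buffer_;
        std::size_t offset_;
    };

Construct it once, call reset() at the end of each frame, and there is no per-allocation bookkeeping and no fragmentation, which is exactly what the list above is hinting at.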
Among all the qualified and good answers given, the one that matters most is still missing: the CPU utilization counter of Windows is not very reliable. I guess that this simple teapot demo just calls the rendering function in its idle loop, blocking at the buffer swap.
Now the Windows CPU utilization counter just looks at how much CPU time is spent within each process, but not how this CPU time is used. Try adding a
Sleep(0);
just after returning from the rendering function, and compare.
In addition, there are many many tricks from an artistic standpoint to save computational power. In many games, especially older ones, shadows are precalculated and "baked" right into the textures of the map. Many times, the artists tried to use planes (two triangles) to represent things like trees and special effects when it would look mostly the same. Fog in games is an easy way to avoid rendering far-off objects, and often, games would have multiple resolutions of every object for far, mid, and near views.
The core of any answer should be this: the transformations that 3D engines perform are mostly specified as additions and multiplications (linear algebra, no branches or jumps), and the work of drawing a single frame is often specified in a way that lets many such add-mul jobs be done in parallel. GPU cores are very good at add-muls, and they have dozens or hundreds of add-mul cores.
The CPU is left with doing simple stuff -- like AI and other game logic.
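To make the "additions and multiplications, no branches" point concrete, here is a straight-line vertex-transform loop of the kind that maps naturally onto GPU or SIMD hardware; the row-major layout and names are illustrative:

    #include <cstddef>

    // 4x4 matrix times 4-component vertices: pure multiply-adds, no branches.
    void transformVertices(const float m[16], const float* in, float* out, std::size_t count) {
        for (std::size_t v = 0; v < count; ++v) {
            const float* p = &in[v * 4];
            float* q = &out[v * 4];
            for (int row = 0; row < 4; ++row) {
                q[row] = m[row * 4 + 0] * p[0] + m[row * 4 + 1] * p[1] +
                         m[row * 4 + 2] * p[2] + m[row * 4 + 3] * p[3];
            }
            // Every vertex is independent, so thousands of these can run in parallel.
        }
    }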
How can a great big PC game like GTA IV use 50% of my CPU and run at 60fps while a DX demo of a rotating Teapot @ 60fps uses a whopping 30%?
While GTA is quite likely to be more efficient than the DX demo, measuring CPU efficiency this way is essentially broken. Efficiency could be defined, e.g., by how much work you do per given time. A simple counterexample: spawn one thread per logical CPU and let a simple infinite loop run on it. You will get CPU usage of 100%, but it is not efficient, as no useful work is done.
This also leads to an answer: how can a game be efficient? When programming "great big games", a huge effort is dedicated to optimize the game in all aspects (which nowadays usually also includes multi-core optimizations). As for the DX demo, its point is not running fast, but rather demonstrating concepts.
I think you should take a look at GPU utilisation rather than CPU... I bet the graphics card is much busier in GTA IV than in the Teapot sample (it should be practically idle).
Maybe you could use something like this monitor to check that:
http://downloads.guru3d.com/Rivatuner-GPU-Monitor-Vista-Sidebar-Gadget-download-2185.html
Also the framerate is something to consider, maybe the teapot sample is running at full speed (maybe 1000fps) and most games are limited to the refresh frequency of the monitor (about 60fps).
Look at the answer on vsync; that is why they are running at same frame rate.
Secondly, CPU usage is misleading in a game. A simplified explanation is that the main game loop is just an infinite loop:
while (1) {
    update();   // advance the game state (AI, physics, input, ...)
    render();   // draw the frame
}
Even if your game (or in this case, teapot) isn't doing much, you are still eating up CPU in your loop.
The 50% CPU in GTA is "more productive" than the 30% in the demo, since the demo more than likely isn't doing much at all, while GTA is updating tons of detail. Even adding a Sleep(10) to the demo will probably drop its CPU usage by a ton.
Lastly, look at GPU usage. The demo is probably taking <1% on a modern video card, while GTA will probably be using the majority during gameplay.
In short, your benchmarks and measurements aren't accurate.
The DX teapot demo is not using 30% of the CPU doing useful work. It's busy-waiting because it has nothing else to do.
From what I know of the Unreal series, some conventions, like encapsulation, are broken. Code is compiled to bytecode or directly into machine code depending on the game. Also, objects are packaged in the form of meshes, and things such as textures, lighting, and shadows are precalculated, whereas pure 3D animation would require these to be computed in real time. When the game is actually running there are also optimizations such as rendering only the visible parts of an object and displaying texture detail only when close up. Finally, it's probable that video games are designed to get the best out of a platform at a given time (e.g. Intel x86 MMX/SSE, DirectX, ...).
I think there is an important part of the answer missing here. Most of the answers tell you to "Know your data". The fact is that you must, in the same way and with the same degree of importance, also know your:
CPU (clock and caches)
Memory (frequency and latency)
Hard drive (in terms of speed and seek times)
GPU (#cores, clock and its Memory/Caches)
Interfaces: SATA controllers, PCI revisions, etc.
BUT, on top of that, even with current modern computers you would never be able to play a real 1080p video at >30fps if every pixel were kept at full precision (a single 1080p image at 64 bits per pixel would take roughly 15,000 KB/14.9 MB). The reason for that is sampling/precision. A video game would never use double precision (64 bits) for pixels, images, data, etc., but rather a lower custom precision (~4-8 bits), and sometimes even less precision rescaled with interpolation techniques, to allow reasonable computation time.
There are other techniques as well, such as clipping the data (both in the OpenGL standard and in software implementations), data compression, etc. Also keep in mind that current GPUs can be >300 times faster than current CPUs in terms of raw hardware capability. However, a good programmer may only get a 10-20x factor, unless the problem is fully optimized and completely parallelizable (particularly task-parallelizable).
From experience, I can tell you that optimization follows something like an exponential curve: the closer you want to get to optimal performance, the more time it takes.
So, to get back to the teapot, you should look at how the geometry is represented and sampled, and with what precision, versus what you see in GTA 5 in terms of geometry/textures and, most importantly, the details (precision, sampling, etc.).

Does Wirth's law still hold true?

Adage made by Niklaus Wirth in 1995:
«Software is getting slower more rapidly than hardware becomes faster»
Do you think it's actually true?
How should you measure "speed" of software? By CPU cycles or rather by time you need to complete some task?
What about software that is actually getting faster and leaner (measured by CPU cycles and MB of RAM) and more responsive with new versions, like Firefox 3.0 compared with 2.0, Linux 2.6 compared with 2.4, Ruby 1.9 compared to 1.8? Or completely new software that is an order of magnitude faster than old stuff (like Google's V8 engine)? Doesn't that negate the law?
Yes I think it is true.
How do I measure the speed of software? Well, the time to solve tasks is a relevant indicator. As a user of software I do not care whether there are 2 or 16 cores in my machine. I want my OS to boot fast, my programs to start fast, and I absolutely do not want to wait for simple things like opening files. Software simply has to feel fast.
So... when booting Windows Vista, fast software is not what I am watching.
Software / Frameworks often improve their performance. That's great but these are mostly minor changes. The exception proves the rule :)
In my opinion it is all about feeling. And it feels like computers have been faster years ago. Of course I couldn't run the current games and software on those old machines. But they were just faster :)
It's not that software becomes slower, it's that its complexity increases.
We now build upon many levels of abstraction.
When was the last time people on SO coded in assembly language?
Most never have and never will.
It is wrong. The correct version is:
Software is getting slower at the same rate as hardware becomes faster.
The reason is that this is mostly determined by human patience, which stays the same.
It also neglects to mention that the software of today does more than the software of 30 years ago, even if we ignore eye candy.
In general, the law holds true. As you have stated, there are exceptions "that prove the rule". My brother recently installed Win3.1 on his 2GHz+ PC and it boots in a blink of an eye.
I guess there are many reasons why the law holds:
Many programmers entering the profession now have never had to consider systems with limited speed or resources, so they never really think about the performance of their code.
Greater importance is generally placed on getting the code written to meet deadlines, and performance tuning usually comes last, after bug fixing and new features.
I find FF's lack of immediate splash dialog annoying as it takes a while for the main window to appear after starting the application and I'm never sure if the click 'worked'. OO also suffers from this.
There are a few articles on the web about changing the perception of speed of software without changing the actual speed.
EDIT:
In addition to the above points, an example of the low importance given to efficiency is this site, or rather, most of the other Q&A sites. This site has always been developed to be fast and responsive and it shows. Compare this to the other sites out there - I've found phpBB based sites are flexible but slow. Google is another example of putting speed high up in importance (it even tells you how long the search took) - compare with other search engines that were around when google started (now, they're all fast thanks to google).
It takes a lot of effort, skill and experience to make fast code which is something I found many programmers lack.
From my own experience, I have to disagree with Wirth's law.
When I first approached a computer (in the '80s), the time for displaying a small still picture was perceptible. Today my computer can decode and display 1080p AVCHD movies in real time.
Another indicator is the frames per second of video games. Not long ago it used to be around 15 fps. Today 30 to 60 fps is not uncommon.
Quoting from a UX study:
The technological advancements of 21 years have placed modern PCs in a completely different league of varied capacities. But the “User Experience” has not changed much in two decades. Due to bloated code that has to incorporate hundreds of functions that average users don’t even know exist, let alone ever utilize, the software companies have weighed down our PCs to effectively neutralize their vast speed advantages.
Detailed comparison of UX on a vintage Mac and a modern Dual Core: http://hubpages.com/hub/_86_Mac_Plus_Vs_07_AMD_DualCore_You_Wont_Believe_Who_Wins
One cause of slow software is that most developers use very high-end machines with multi-core CPUs and loads of RAM as their primary workstations. As a result they don't notice performance issues easily.
Part of their daily activity should be running their code on the slower, more mainstream hardware that the expected clients will be using. This will show the real-world performance and allow them to focus on improving bottlenecks. Even running within a VM with limited resources can aid in this review.
Faster hardware shouldn't be an excuse for creating slow sloppy code, however it is.
My machine is getting slower and clunkier every day. I attribute most of the slowdown to running antivirus. When I want to speed up, I find that disabling the antivirus works wonders, although I am apprehensive like being in a seedy brothel.
I think that Wirth's law was largely caused by Moore's law - if your code ran slow, you'd just disregard it, since soon enough it would run fast enough anyway. Performance didn't matter.
Now that Moore's law has changed direction (more cores rather than faster CPUs), computers don't actually get much faster, so I'd expect performance to become a more important factor in software development (until a really good concurrent programming paradigm hits the mainstream, anyway). There's a limit to how slow software can be while still being useful, y'know.
Yes, software nowadays may be slower or faster, but you're not comparing like with like. The software now has so much more capability, and a lot more is expected of it.
Let's take PowerPoint as an example. If I created a slideshow with PowerPoint from the early nineties, I could have a slideshow with pretty colours fairly easily, nice text, etc. Now, it's a slideshow with moving graphics, fancy transitions, and nice images.
The point is, yes, software is slower, but it does more.
The same holds true of the people who use the software. Back in the '70s, to create a presentation you had to create your own transparencies, maybe even using a pen :-). Now, if you did the same thing, you'd be laughed out of the room. It takes the same time, but the quality is higher.
This (in my opinion) is why computers don't give you gains in productivity, because you spend the same amount of time doing 'the job'. But if you use todays software, your results looks more professional, you gain in quality of work.
Skizz and Dazmogan have it right.
On the one hand, when programmers try to make their software take as few cycles as possible, they succeed, and it is blindingly fast.
On the other hand, when they don't, which is most of the time, their interest in "Galloping Generality" uses up every available cycle and then some.
I do a lot of performance tuning. (My method of choice is random halting.) In nearly every case, the reason for the slowness is over-design of class and data structure.
Oddly enough, the reason usually given for excessive event-driven design and redundant data structures is "efficiency".
As Bompuis says, we build upon many layers of abstraction. That is exactly the problem.
Yes it holds true. You have given some prominent examples to counter the thesis, but bear in mind that these examples are developed by a big community of quite knowledgeable people, who are more or less aware of good practices in programming.
People working with the kernel are aware of different CPU's architectures, multicore issues, cache lines, etc. There is an interesting ongoing discussion about inclusion of hardware performance counters support in the mainline kernel. It is interesting from the 'political' point of view, as there is a conflict between the kernel people and people having much experience in performance monitoring.
People developing Firefox understand that the browser should be "lightweight" and fast in order to be popular. And to some extent they manage to do a good job.
New versions of software are expected to run on faster hardware in order to deliver the same user experience. But is the price justified? How can we assess whether the functionality was added in an efficient way?
But coming back to the main subject: many people, after finishing their studies, are not aware of the issues related to performance and concurrency (or, even worse, they do not care). For quite a long time Moore's law was providing a stable performance boost. Thus people wrote mediocre code and nobody even noticed that there was something wrong with inefficient algorithms, data structures, or more low-level things.
Then some limitations came into play (thermal efficiency for example) and it is no longer possible to get 'easy' speed for few bucks. People who just depend on hardware performance improvements might get a cold shower. On the other hand, people who have in-depth knowledge of algorithms, data structures, concurrency issues (quite difficult to recruit these...) will continue to write good applications and their value on the job market will increase.
Wirth's law should not only be interpreted literally; it is also about code bloat, violating the keep-it-simple-stupid rule, and people who waste the opportunity to use the "faster" hardware.
Also if you happen to work in the area of HPC then these issues become quite obvious.
In some cases it is not true: the frame rate of games and the display/playing of multimedia content is far superior today than it was even a few years ago.
In several aggravatingly common cases, the law holds very, very true. When opening the "My Computer" window in Vista to see your drives and devices takes 10-15 seconds, it feels like we are going backward. I really don't want to start any controversy here but it was that as well as the huge difference in time needed to open Photoshop that drove me off of the Windows platform and on to the Mac. The point is that this slowdown in common tasks is serious enough to make me jump way out of my former comfort zone to get away from it.
I can't see the sense in it. Why is this sentence a law?
You can never compare software and hardware; they are too different.
Hardware is physical material and software is written code.
The connection is only that software has to control the behaviour of the hardware. After an executed step in hardware, the software needs a completion signal so that the next software instruction can be carried out.
Why should I slow software down? We always try to make it faster!
Making hardware faster involves a lot of real, physical work (changing circuit modules or even physical parts of a computer).
It might make sense if Wirth means doing this within one computer (one combined software and hardware system).
To get higher speed out of hardware it is necessary to know how the hardware functions, the number of parallel inputs or outputs at any one moment, and the frequency of possible switches per second. Last but not least, it is important that different hardware boards run at the same frequency, or at frequencies related by an integer factor.
So perhaps the software may very easily slow down automatically if you change something in the hardware. Wirth was thinking much more about hardware; he is one of the great inventors of the computer era in the German-speaking world.
The other way round is not easy. You have to know the system software of a computer very precisely to make the hardware faster by changing the software (the system software, the machine programs) of a computer. And if you use more layers, you have almost no direct influence on the speed of the hardware.
Perhaps this may be the explanation of Wirth's law-thinking... I got it!

One could use a profiler, but why not just halt the program? [closed]

If something is making a single-thread program take, say, 10 times as long as it should, you could run a profiler on it. You could also just halt it with a "pause" button, and you'll see exactly what it's doing.
Even if it's only 10% slower than it should be, if you halt it more times, before long you'll see it repeatedly doing the unnecessary thing. Usually the problem is a function call somewhere in the middle of the stack that isn't really needed. This doesn't measure the problem, but it sure does find it.
Edit: The objections mostly assume that you only take 1 sample. If you're serious, take 10. Any line of code causing some percentage of wastage, like 40%, will appear on the stack on that fraction of samples, on average. Bottlenecks (in single-thread code) can't hide from it.
EDIT: To show what I mean, many objections are of the form "there aren't enough samples, so what you see could be entirely spurious" - vague ideas about chance. But if something of any recognizable description, not just being in a routine or the routine being active, is in effect for 30% of the time, then the probability of seeing it on any given sample is 30%.
Then suppose only 10 samples are taken. The number of times the problem will be seen in 10 samples follows a binomial distribution, and the probability of seeing it 0 times is .028. The probability of seeing it 1 time is .121. For 2 times, the probability is .233, and for 3 times it is .267, after which it falls off. Since the probability of seeing it fewer than two times is .028 + .121 = .149, the probability of seeing it two or more times is 1 - .149 = .851. The general rule is: if you see something you could fix on two or more samples, it is worth fixing.
In this case, the chance of seeing it in 10 samples is about 85%. If you're in the 15% who don't see it, just take more samples until you do. (If the number of samples is increased to 20, the chance of seeing it two or more times rises above 99%.) So it hasn't been precisely measured, but it has been precisely found, and it's important to understand that it could easily be something that a profiler could not actually find, such as something involving the state of the data, not the program counter.
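Spelled out, with a 30% hit probability and 10 samples, the binomial arithmetic above is:

    P(k) = \binom{10}{k} (0.3)^{k} (0.7)^{10-k}
    P(0) = 0.7^{10} \approx 0.028, \qquad P(1) = 10 \cdot 0.3 \cdot 0.7^{9} \approx 0.121
    P(k \ge 2) = 1 - 0.028 - 0.121 \approx 0.851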
On Java servers it's always been a neat trick to do 2-3 quick Ctrl-Breaks in a row and get 2-3 thread dumps of all running threads. Simply looking at where all the threads "are" can very quickly pinpoint where your performance problems are.
This technique can reveal more performance problems in 2 minutes than any other technique I know of.
Because sometimes it works, and sometimes it gives you completely wrong answers. A profiler has a far better record of finding the right answer, and it usually gets there faster.
Doing this manually can't really be called "quick" or "effective", but there are several profiling tools which do this automatically; also known as statistical profiling.
Callstack sampling is a very useful technique for profiling, especially when looking at a large, complicated codebase that could be spending its time in any number of places. It has the advantage of measuring the CPU's usage by wall-clock time, which is what matters for interactivity, and getting callstacks with each sample lets you see why a function is being called. I use it a lot, but I use automated tools for it, such as Luke Stackwalker and OProfile and various hardware-vendor-supplied things.
The reason I prefer automated tools over manual sampling for the work I do is statistical power. Grabbing ten samples by hand is fine when you've got one function taking up 40% of runtime, because on average you'll get four samples in it, and always at least one. But you need more samples when you have a flat profile, with hundreds of leaf functions, none taking more than 1.5% of the runtime.
Say you have a lake with many different kinds of fish. If 40% of the fish in the lake are salmon (and 60% "everything else"), then you only need to catch ten fish to know there's a lot of salmon in the lake. But if you have hundreds of different species of fish, and each species is individually no more than 1%, you'll need to catch a lot more than ten fish to be able to say "this lake is 0.8% salmon and 0.6% trout."
Similarly in the games I work on, there are several major systems each of which call dozens of functions in hundreds of different entities, and all of this happens 60 times a second. Some of those functions' time funnels into common operations (like malloc), but most of it doesn't, and in any case there's no single leaf that occupies more than 1000 μs per frame.
I can look at the trunk functions and see, "we're spending 10% of our time on collision", but that's not very helpful: I need to know exactly where in collision, so I know which functions to squeeze. Just "do less collision" only gets you so far, especially when it means throwing out features. I'd rather know "we're spending an average 600 μs/frame on cache misses in the narrow phase of the octree because the magic missile moves so fast and touches lots of cells," because then I can track down the exact fix: either a better tree, or slower missiles.
Manual sampling would be fine if there were a big 20% lump in, say, stricmp, but with our profiles that's not the case. Instead I have hundreds of functions that I need to get from, say, 0.6% of frame to 0.4% of frame. I need to shave 10 μs off every 50 μs function that is called 300 times per second. To get that kind of precision, I need more samples.
But at heart what Luke Stackwalker does is what you describe: every millisecond or so, it halts the program and records the callstack (including the precise instruction and line number of the IP). Some programs just need tens of thousands of samples to be usefully profiled.
(We've talked about this before, of course, but I figured this was a good place to summarize the debate.)
There's a difference between things that programmers actually do, and things that they recommend others do.
I know of lots of programmers (myself included) that actually use this method. It only really helps to find the most obvious of performance problems, but it's quick and dirty and it works.
But I wouldn't really tell other programmers to do it, because it would take me too long to explain all the caveats. It's far too easy to make an inaccurate conclusion based on this method, and there are many areas where it just doesn't work at all. (for example, that method doesn't reveal any code that is triggered by user input).
So just like using lie detectors in court, or the "goto" statement, we just don't recommend that you do it, even though they all have their uses.
I'm surprised by the religious tone on both sides.
Profiling is great, and is certainly more refined and precise when you can do it. Sometimes you can't, and it's nice to have a trusty back-up. The pause technique is like the manual screwdriver you use when your power tool is too far away or the batteries have run down.
Here is a short true story. An application (a kind of batch-processing task) had been running fine in production for six months when suddenly the operators called the developers because it was going "too slow". They weren't going to let us attach a sampling profiler in production! You have to work with the tools already installed. Without stopping the production process, just using Process Explorer (which the operators had already installed on the machine), we could see a snapshot of a thread's stack. You can glance at the top of the stack, dismiss it with the enter key, and get another snapshot with another mouse click. You can easily get a sample every second or so.
It doesn't take long to see whether the top of the stack is most often in the database client library DLL (waiting on the database), in another system DLL (waiting for a system operation), or actually in some method of the application itself. In this case, if I remember right, we quickly noticed that 8 times out of 10 the application was in a system DLL call reading or writing a network file. Sure enough, recent "upgrades" had changed the performance characteristics of a file share. Without a quick and dirty (and system-administrator-sanctioned) approach to seeing what the application was doing in production, we would have spent far more time trying to measure the issue than correcting it.
On the other hand, when performance requirements move beyond "good enough" to really pushing the envelope, a profiler becomes essential so that you can try to shave cycles from all of your closely-tied top ten or twenty hot spots. Even if you are just trying to hold to a moderate performance requirement during a project, when you can get the right tools lined up to help you measure and test, and even get them integrated into your automated test process, it can be fantastically helpful.
But when the power is out (so to speak) and the batteries are dead, it's nice to know how to use that manual screwdriver.
So the direct answer: Know what you can learn from halting the program, but don't be afraid of precision tools either. Most importantly know which jobs call for which tools.
Hitting the pause button during the execution of a program in "debug" mode might not provide the right data to perform any performance optimizations. To put it bluntly, it is a crude form of profiling.
If you must avoid using a profiler, a better bet is to use a logger, and then apply a slowdown factor to "guesstimate" where the real problem is. Profilers however, are better tools for guesstimating.
The reason why hitting the pause button in debug mode may not give a real picture of application behavior is that debuggers introduce additional executable code that can slow down certain parts of the application. One can refer to Mike Stall's blog post on possible reasons for application slowdown in a debugging environment. The post sheds light on reasons like too many breakpoints, creation of exception objects, unoptimized code, etc. The part about unoptimized code is important - "debug" mode results in a lot of optimizations (usually code inlining and re-ordering) being thrown out of the window, to enable the debug host (the process running your code) and the IDE to synchronize code execution. Therefore, hitting pause repeatedly in "debug" mode might be a bad idea.
If we take the question "Why isn't it better known?" then the answer is going to be subjective. Presumably the reason why it is not better known is because profiling provides a long term solution rather than a current problem solution. It isn't effective for multi-threaded applications and isn't effective for applications like games which spend a significant portion of its time rendering.
Furthermore, in single threaded applications if you have a method that you expect to consume the most run time, and you want to reduce the run-time of all other methods then it is going to be harder to determine which secondary methods to focus your efforts upon first.
Your process for profiling is an acceptable method that can and does work, but profiling provides you with more information and has the benefit of showing you more detailed performance improvements and regressions.
If you have well-instrumented code then you can examine more than just how long a particular method takes; you can see all the methods.
With profiling:
You can then rerun your scenario after each change to determine the degree of performance improvement/regression.
You can profile the code on different hardware configurations to determine if your production hardware is going to be sufficient.
You can profile the code under load and stress testing scenarios to determine how the volume of information impacts performance
You can make it easier for junior developers to visualise the impacts of their changes to your code because they can re-profile the code in six months time while you're off at the beach or the pub, or both. Beach-pub, ftw.
Profiling is given more weight because enterprise code should always have some degree of profiling, because of the benefits it gives to the organisation over an extended period of time. The more important the code, the more profiling and testing you do.
Your approach is valid and is another item in the developer's toolbox. It just gets outweighed by profiling.
Sampling profilers are only useful when
You are monitoring a runtime with a small number of threads. Preferably one.
The call stack depth of each thread is relatively small (to reduce the incredible overhead in collecting a sample).
You are only concerned about wall clock time and not other meters or resource bottlenecks.
You have not instrumented the code for management and monitoring purposes (hence the stack dump requests)
You mistakenly believe removing a stack frame is an effective performance improvement strategy whether the inherent costs (excluding callees) are practically zero or not
You can't be bothered to learn how to apply software performance engineering day-to-day in your job
....
Stack trace snapshots only allow you to see stroboscopic x-rays of your application. You may require more accumulated knowledge which a profiler may give you.
The trick is knowing your tools well and choose the best for the job at hand.
These must be some trivial examples that you are working with to get useful results with your method. I can't think of a project where profiling was useful (by whatever method) that would have gotten decent results with your "quick and effective" method. The time it takes to start and stop some applications already puts your assertion of "quick" in question.
Again, with non-trivial programs the method you advocate is useless.
EDIT:
Regarding "why isn't it better known"?
In my experience code reviews avoid poor quality code and algorithms, and profiling would find these as well. If you wish to continue with your method that is great - but I think for most of the professional community this is so far down on the list of things to try that it will never get positive reinforcement as a good use of time.
It appears to be quite inaccurate with small sample sets and to get large sample sets would take lots of time that would have been better spent with other useful activities.
What if the program is in production and being used at the same time by paying clients or colleagues? A profiler allows you to observe without interfering (as much, because of course it will have a small hit too, as per the Heisenberg principle).
Profiling can also give you much richer, more detailed and accurate reports. This will be quicker in the long run.
EDIT 2008/11/25: OK, Vineet's response has finally made me see what the issue is here. Better late than never.
Somehow the idea got loose in the land that performance problems are found by measuring performance. That is confusing means with ends. Somehow I avoided this by single-stepping entire programs long ago. I did not berate myself for slowing it down to human speed. I was trying to see if it was doing wrong or unnecessary things. That's how to make software fast - find and remove unnecessary operations.
Nobody has the patience for single-stepping these days, but the next best thing is to pick a number of cycles at random and ask what their reasons are. (That's what the call stack can often tell you.) If a good percentage of them don't have good reasons, you can do something about it.
It's harder these days, what with threading and asynchrony, but that's how I tune software - by finding unnecessary cycles. Not by seeing how fast it is - I do that at the end.
Here's why sampling the call stack cannot give a wrong answer, and why not many samples are needed.
During the interval of interest, when the program is taking more time than you would like, the call stack exists continuously, even when you're not sampling it.
If an instruction I is on the call stack for fraction P(I) of that time, removing it from the program, if you could, would save exactly that much. If this isn't obvious, give it a bit of thought.
If the instruction shows up on M = 2 or more samples, out of N, its P(I) is approximately M/N, and is definitely significant.
The only way you can fail to see the instruction is to magically time all your samples for when the instruction is not on the call stack. The simple fact that it is present for a fraction of the time is what exposes it to your probes.
So the process of performance tuning is a simple matter of picking off instructions (mostly function call instructions) that raise their heads by turning up on multiple samples of the call stack. Those are the tall trees in the forest.
Notice that we don't have to care about the call graph, or how long functions take, or how many times they are called, or recursion.
I'm against obfuscation, not against profilers. They give you lots of statistics, but most don't give P(I), and most users don't realize that that's what matters.
You can talk about forests and trees, but for any performance problem that you can fix by modifying code, you need to modify instructions, specifically instructions with high P(I). So you need to know where those are, preferably without playing Sherlock Holmes. Stack sampling tells you exactly where they are.
This technique is harder to employ in multi-threaded or event-driven programs, or in systems in production. That's where profilers, if they would report P(I), could really help.
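To put a number on the M/N point above: if a call site turns up on 3 of 10 random samples, its P(I) is roughly 30%, and removing that work would give a speedup of about 1/(1 - 0.3) ≈ 1.4x. Below is a rough, hedged sketch (names invented for the example) of approximating the random-pause idea inside a Java process: sample a worker thread's stack at random intervals and count which frames keep reappearing. It is only an approximation of pausing under a debugger, since it cannot show you data values the way a real pause can.

import java.util.HashMap;
import java.util.Map;
import java.util.Random;

// Rough sketch: sample a worker thread's stack a handful of times at
// random intervals and count how often each frame appears. Frames that
// show up on 2+ samples out of ~10 are the ones worth looking at.
public class RandomPauseSampler {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(RandomPauseSampler::busyWork, "worker");
        worker.setDaemon(true);
        worker.start();

        Map<String, Integer> frameCounts = new HashMap<>();
        Random rng = new Random();
        int samples = 10;

        for (int i = 0; i < samples; i++) {
            Thread.sleep(100 + rng.nextInt(400));        // random-ish pause times
            for (StackTraceElement frame : worker.getStackTrace()) {
                frameCounts.merge(frame.toString(), 1, Integer::sum);
            }
        }

        frameCounts.entrySet().stream()
            .filter(e -> e.getValue() >= 2)              // seen on multiple samples
            .sorted((a, b) -> b.getValue() - a.getValue())
            .forEach(e -> System.out.printf("%2d/%d  %s%n", e.getValue(), samples, e.getKey()));
    }

    // Stand-in for the program you are actually investigating.
    private static void busyWork() {
        while (true) {
            StringBuilder sb = new StringBuilder();
            for (int i = 0; i < 100_000; i++) sb.append(i);   // deliberately wasteful
        }
    }
}

For a process you can't modify, a few jstack runs (or kill -3 on Unix-like systems) against the live JVM give you the same kind of snapshots by hand.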
Stepping through code is great for seeing the nitty-gritty details and troubleshooting algorithms. It's like looking at a tree really up close and following each vein of bark and branch individually.
Profiling lets you see the big picture, and quickly identify trouble points -- like taking a step backwards and looking at the whole forest and noticing the tallest trees. By sorting your function calls by length of execution time, you can quickly identify the areas that are the trouble points.
I used this method for Commodore 64 BASIC many years ago. It is surprising how well it works.
I've typically used it on real-time programs that were overrunning their timeslice. You can't manually stop and restart code that has to run 60 times every second.
I've also used it to track down the bottleneck in a compiler I had written. You wouldn't want to try to break such a program manually, because you really have no way of knowing if you are breaking at the spot where the bottleneck is, or just at the spot after the bottleneck when the OS is allowed back in to stop it. Also, what if the major bottleneck is something you can't do anything about, but you'd like to get rid of all the other largeish bottlenecks in the system? How do you prioritize which bottlenecks to attack first, when you don't have good data on where they all are and what the relative impact of each is?
The larger your program gets, the more useful a profiler will be. If you need to optimize a program which contains thousands of conditional branches, a profiler can be indispensable. Feed in your largest sample of test data, and when it's done import the profiling data into Excel. Then you check your assumptions about likely hot spots against the actual data. There are always surprises.
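If the profiler you're using can't export its data directly, a hedged sketch of the hand-off step (assuming you've already collected per-method totals somehow) is simply to write them out as CSV, which Excel opens without fuss:

import java.io.IOException;
import java.io.PrintWriter;
import java.util.Map;

public class CsvExport {
    // totals: method name -> cumulative milliseconds, gathered elsewhere.
    static void writeCsv(Map<String, Long> totals, String path) throws IOException {
        try (PrintWriter out = new PrintWriter(path)) {
            out.println("method,total_ms");
            totals.forEach((method, ms) -> out.println(method + "," + ms));
        }
    }
}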

For your complicated algorithms, how do you measure their performance?

Let's just assume for now that you have narrowed down where the typical bottlenecks in your app are. For all you know, it might be the batch process you run to reindex your tables; it could be the SQL queries that run over your effective-dated trees; it could be the XML marshalling of a few hundred composite objects. In other words, you might have something like this:
public Result takeAnAnnoyingLongTime(Input in) {
    // impl of above
}
Unfortunately, even after you've identified your bottleneck, all you can do is chip away at it. No simple solution is available.
How do you measure the performance of your bottleneck so that you know your fixes are headed in the right direction?
Two points:
Beware of the infamous "optimizing the idle loop" problem. (E.g. see the optimization story under the heading "Porsche-in-the-parking-lot".) That is, just because a routine is taking a significant amount of time (as shown by your profiling), don't assume that it's responsible for slow performance as perceived by the user.
The biggest performance gains often come not from a clever tweak or optimization to the implementation of the algorithm, but from realising that there's a better algorithm altogether. Some improvements are relatively obvious, while others require more detailed analysis of the algorithms, and possibly a major change to the data structures involved. This may include trading off processor time for I/O time, in which case you need to make sure that you're not optimizing only one of those measures.
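As a hedged, self-contained illustration of that second point (not taken from any of the answers here): swapping a quadratic duplicate check for a hash-based one dwarfs anything you could achieve by micro-tuning the inner loop.

import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DuplicateCheck {
    // O(n^2): compares every pair; micro-optimizing this loop barely helps.
    static boolean hasDuplicateSlow(List<String> items) {
        for (int i = 0; i < items.size(); i++)
            for (int j = i + 1; j < items.size(); j++)
                if (items.get(i).equals(items.get(j))) return true;
        return false;
    }

    // O(n): the better algorithm altogether.
    static boolean hasDuplicateFast(List<String> items) {
        Set<String> seen = new HashSet<>();
        for (String item : items)
            if (!seen.add(item)) return true;     // add() returns false on a repeat
        return false;
    }
}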
Bringing it back to the question asked, make sure that whatever you're measuring represents what the user actually experiences, otherwise your efforts could be a complete waste of time.
1. Profile it.
2. Find the top line in the profiler, attempt to make it faster.
3. Profile it.
4. If it worked, go to 1. If it didn't work, go to 2.
I'd measure them using the same tools / methods that allowed me to find them in the first place.
Namely, sticking timing and logging calls all over the place. If the numbers start going down, then you just might be doing the right thing.
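A hedged sketch of what that looks like, reusing the hypothetical takeAnAnnoyingLongTime stub from the question above and nothing but System.nanoTime() and stderr:

// Sketch only: wrap the suspected bottleneck in a wall-clock measurement
// and log the result, so repeated runs show whether a change helped.
public Result timedTakeAnAnnoyingLongTime(Input in) {
    long start = System.nanoTime();
    try {
        return takeAnAnnoyingLongTime(in);
    } finally {
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.err.println("takeAnAnnoyingLongTime took " + elapsedMs + " ms");
    }
}

Crude, but if the logged number keeps dropping across runs after each change, you're probably headed in the right direction.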
As mentioned in this msdn column, performance tuning is compared to the job of painting the Golden Gate Bridge: once you finish painting the entire thing, it's time to go back to the beginning and start again.
This is not a hard problem. The first thing you need to understand is that measuring performance is not how you find performance problems. Knowing how slow something is doesn't help you find out why. You need a diagnostic tool, and a good one. I've had a lot of experience doing this, and this is the best method. It is not automatic, but it runs rings around most profilers.
It's an interesting question. I don't think anyone knows the answer. I believe that a significant part of the problem is that for more complicated programs, no one can predict their complexity. Therefore, even if you have profiler results, it's very complicated to interpret them in terms of changes that should be made to the program, because you have no theoretical basis for what the optimal solution is.
I think this is a reason why we have such bloated software. We optimize only enough that quite simple cases work well on our fast machines. But once you put such pieces together into a large system, or you use input an order of magnitude larger, the wrong algorithms (which were until then invisible both theoretically and practically) start showing their true complexity.
Example: You create a string class which handles Unicode. You use it somewhere like computer-generated XML processing, where it really doesn't matter. But Unicode processing is in there, taking part of the resources. By itself, the string class can be very fast, but call it a million times, and the program will be slow.
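A hedged sketch of that kind of hidden per-call cost, with java.text.Normalizer standing in for "Unicode processing you don't actually need here" (class and method names are invented for the example):

import java.text.Normalizer;

public class HiddenUnicodeCost {
    // General-purpose path: normalizes every string, even pure ASCII input.
    static String tagGeneral(String name) {
        return "<item>" + Normalizer.normalize(name, Normalizer.Form.NFC) + "</item>";
    }

    // Path that fits this caller: the XML is machine-generated ASCII,
    // so the normalization above is pure overhead.
    static String tagAsciiOnly(String name) {
        return "<item>" + name + "</item>";
    }

    public static void main(String[] args) {
        // One call is cheap either way; a million calls is where the
        // hidden cost of the general path starts to show.
        for (int i = 0; i < 1_000_000; i++) {
            tagGeneral("element" + i);
        }
    }
}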
I believe that most of the current software bloat is of this nature. There is a way to reduce it, but it contradicts OOP. There is an interesting book about various techniques; it's memory-oriented, but most of them could be turned around to gain speed instead.
I'd identify two things:
1) What complexity is it? The easiest way is to graph time taken versus size of input (see the sketch after this list).
2) how is it bound? Is it memory, or disk, or IPC with other processes or machines, or..
Now point (2) is the easier to tackle and explain: Lots of things go faster if you have more RAM or a faster machine or faster disks or move over to gig ethernet or such. If you identify your pain, you can put some money into hardware to make it tolerable.
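For point (1), a hedged sketch of the simplest way to get that graph: time the operation at doubling input sizes and eyeball how the numbers grow (roughly 2x per doubling suggests linear or n log n, roughly 4x suggests quadratic). The sort call here is only a stand-in for whatever algorithm you're actually measuring.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class GrowthCheck {
    public static void main(String[] args) {
        // Double the input size each round and print size vs. time;
        // paste the output into a spreadsheet to graph it.
        for (int n = 1_000; n <= 1_024_000; n *= 2) {
            List<Integer> data = new ArrayList<>();
            for (int i = 0; i < n; i++) data.add(i);
            Collections.shuffle(data);

            long start = System.nanoTime();
            Collections.sort(data);               // stand-in for the algorithm under test
            long ms = (System.nanoTime() - start) / 1_000_000;

            System.out.printf("n = %8d  ->  %4d ms%n", n, ms);
        }
    }
}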

Resources