Can you estimate an application's performance before testing?

It's a tricky question I was asked the other day... We're working on a pretty complex telephony (SIP) application with mixed C++ and PHP code, MySQL databases, and several open source components.
A telecom engineer asked us to estimate the performance of the application (which is not ready yet). He put it roughly like this: 'Well, you know how many packets can pass through the Linux kernel per second, and you should know how quick your app is, so tell me how many calls will pass through your stuff per second.'
That seems like nonsense to me, as there are a million scenarios that might happen (well, literally...)
However... is there a way to estimate application performance (knowing the hardware it will run on, being able to run standard benchmarks on it, etc) before actual testing?

You certainly can bound the problem with upper (max throughput) limits. There is nothing nonsense about that. In fact, not knowing that stuff indicates a pretty haphazard approach to a problem - especially in the telephony world.
You can work through the problem yourself - what is the minimum "work" you have to accomplish for a transaction or whatever unit of task you have in your app?
Some messages in and out, some processing, and a database hit, for example? Getting information on the individual pieces will give you an idea of the fastest possible throughput. If you load up the system and see significantly lower performance, then you can take time to figure out where you are losing throughput: inefficient algorithms, etc.
EDIT
To do this exercise you need to know all the steps your app does for each use case. Then you can identify the max throughput for each use case. You should definitely know this stuff prior to release and going live.
I'm ignoring the worst case analysis as that - as you point out - is quite a bit harder.
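To make the exercise concrete, here is a minimal back-of-the-envelope sketch in Python; every per-step cost below is a hypothetical placeholder, to be replaced with numbers benchmarked on your own hardware:
# Hypothetical serial cost per call; measure these on your real stack.
STEP_COSTS_MS = {
    "sip_parsing": 0.2,      # parse the SIP message
    "business_logic": 1.5,   # routing/billing code (C++/PHP)
    "mysql_lookup": 2.0,     # one indexed query
    "network_io": 0.1,       # packets in and out of the kernel
}

def max_calls_per_second(step_costs_ms):
    """Fastest possible serial throughput on one core: if every call must
    pay these costs back to back, you cannot do better than this."""
    return 1000.0 / sum(step_costs_ms.values())

print(f"Upper bound: ~{max_calls_per_second(STEP_COSTS_MS):.0f} calls/s/core")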

See Capacity Planning for Web Performance: Metrics, Models, and Methods. There are also some tools that can do this sort of discrete event simulation:
Hyperformix
SimPy
WikiPedia list of simulation tools
This stuff ain't easy, and the commercial tools will cost ya. The Capacity Planning book comes with a CD with lots of Excel workbook templates and examples of models that can jump start you.
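As a taste of the discrete event simulation approach, here is a minimal SimPy sketch of one server handling Poisson call arrivals; the arrival rate and service time are made-up numbers:
import random
import simpy  # pip install simpy

ARRIVAL_RATE = 50.0   # offered calls/second (hypothetical)
SERVICE_TIME = 0.015  # seconds of server work per call (hypothetical)

def call(env, server, latencies):
    arrived = env.now
    with server.request() as req:
        yield req                       # queue for the server
        yield env.timeout(SERVICE_TIME)
    latencies.append(env.now - arrived)

def arrivals(env, server, latencies):
    while True:
        yield env.timeout(random.expovariate(ARRIVAL_RATE))
        env.process(call(env, server, latencies))

random.seed(42)
env = simpy.Environment()
server = simpy.Resource(env, capacity=1)
latencies = []
env.process(arrivals(env, server, latencies))
env.run(until=60)  # simulate one minute
print(f"{len(latencies)} calls, mean latency "
      f"{1000 * sum(latencies) / len(latencies):.1f} ms")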
Good luck :)

If you really have to answer this you could say something like this:
"I don't know off the top of my head. I am will to estimate this for you but it will take time. Obviously the accuracy of my answer depends upon how much effort (I.E. time) I put into calculating my estimate. How much time should I put into calculating my estimate?"
Put the burden back on them. If they really want an accurate answer, they're going to have to let you build at least some test applications that can simulate the actual environment.

You can spike to measure performance. Your whole system may not be working yet, but you know how the parts are intended to fit together. You can whip something up in a few hours that does the same kind of work as the final app will, across all the layers, and use it to measure performance of your design.
Remember: prototypes are broad, spikes are deep.
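The measurement harness for such a spike can be trivial; a hypothetical Python sketch, where the stand-in transaction would really exercise every layer of your design:
import time

def spike_transaction():
    # Stand-in for one end-to-end unit of work. In a real spike this would
    # parse a request, run real business logic, and hit the actual database.
    time.sleep(0.002)  # pretend all the layers cost ~2 ms together

N = 500
start = time.perf_counter()
for _ in range(N):
    spike_transaction()
elapsed = time.perf_counter() - start
print(f"~{N / elapsed:.0f} transactions/s through the spike")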

You should do the estimate. An estimate won't give you the right answer, but it will make you think about the problem. Right now it sounds like you're coding and hoping that everything will be OK. Or you are in panic mode and feel you don't have time for estimates.
Spend some time thinking about it. Analyse the important use cases. Think about the memory you may need; think about database access; think about network access (local and remote). These will affect the performance of your system. Get the whole team together to do this.
Regularly measure your system's performance during development for these important use cases. Mock up components/other systems if you have to. Analyse the results. How do these compare to your estimate? Maybe components are memory/database/network bound. Maybe you need more memory, less database access, simpler queries, caching. You don't have to make these changes straight away, but you do know how your system operates and what you need to do.
Result: Fewer nasty surprises at system test. Less panic as the release date looms.

You can definitely do capacity planning in advance, but the quality of the estimate will depend on the quality of the data available.
The best estimate is to build the system in test, run simulated workloads, then predict capacity as a function of performance requirements and workload. These 3 form a prediction space - given 2 of the 3, you can predict the third:
Given performance requirements and capacity (i.e. hardware) you can calculate the workload you can handle.
Given performance requirements and workload, you can calculate the capacity (i.e. hardware) that you need.
Given workload and capacity, you can predict your expected performance.
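As a toy illustration of that prediction space, here is a sketch using the simplest open queueing model (M/M/1); real capacity models are richer, and nothing here is prescribed by the answer above:
def predicted_response_time(arrival_rate, service_rate):
    """Workload + capacity -> expected performance (M/M/1 mean response)."""
    assert arrival_rate < service_rate, "system would be saturated"
    return 1.0 / (service_rate - arrival_rate)

def required_capacity(arrival_rate, max_response_time):
    """Workload + performance requirement -> capacity needed."""
    return arrival_rate + 1.0 / max_response_time

def supportable_workload(service_rate, max_response_time):
    """Capacity + performance requirement -> workload you can handle."""
    return service_rate - 1.0 / max_response_time

# Hypothetical numbers: a server completing 120 req/s, offered 100 req/s.
print(f"{predicted_response_time(100, 120) * 1000:.0f} ms expected response")
print(f"{required_capacity(100, 0.025):.0f} req/s needed for a 25 ms target")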

This is true in some domains, but unless you are an expert in that domain you don't have any idea. For example, I write code to control industrial robots. The speed is limited by the robot's motion, not by the execution speed of the code. Knowing how fast the robot is and how far it has to go, we can make fairly good estimates of "speed". I'd have no idea how to estimate time for your application.

Are there any resources for language independent performance tips?

I work with many people who program video games for a living. I have quite a bit of knowledge in C++ and know a number of general performance strategies to apply in day-to-day programming, like using prefix ++/-- over postfix.
My problem is that people often come to me for tips on general optimizations they can make on a regular basis, but these people program in all sorts of languages. Some use C++, C#, Java, ActionScript, etc.
I am wondering if there are any general performance tips that can be applied on a day-by-day programming basis. For example, I would suggest prefix ++/-- over postfix to people programming in another language, but I am just not sure whether that holds there.
My guess is that it is language specific and the best approach to general optimization is to make sure you are not using badly bloated algorithms, but maybe someone has some advice.
Without going into language specifics, or even knowing whether this is embedded, web, CAD, game, or iPhone programming, there isn't much that can be said. All we know is that there are multiple languages involved, and for some unknown reason performance is always slower than desirable.
First, check your algorithms. A slow algorithm can cause horrible performance. Read up on algorithms and their complexity.
Second, note if there are any really slow operations, such as hitting a database or transmitting information or moving a robot arm. See if the program is doing more of those than it should.
Third, profile. If there's a section of code that's taking 5% of the time, no optimization will make your program more than 5% faster. If a section of code is taking a lot of the time, it's worth looking at.
Fourth, get somebody who knows what they're doing to make any specific optimizations. Test them when they're done to make sure they actually speed up performance. When performance was an issue, I've improved it with some counterintuitive measures, like rolling up loops.
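The profiling step is the part that carries over to almost any language, even though the tooling changes. As one concrete instance, a minimal sketch with Python's built-in profiler (the function is a made-up suspect):
import cProfile
import pstats

def hot_loop(n):  # hypothetical suspect function
    return sum(i * i for i in range(n))

profiler = cProfile.Profile()
profiler.enable()
hot_loop(1_000_000)
profiler.disable()

# Show the five entries with the largest cumulative time.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)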
I don't think you can generalize optimization as such. To optimize execution time, you need to dig deep into the language and understand how things work in detail. Just guessing or making assumptions on experiences with other languages won't work! For example, writing x = x << 1 instead of x = x*2 might be a big benefit in C++. In JavaScript it will slow you down.
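One way to avoid guessing is to measure the candidate change in the language you actually ship. A hypothetical micro-benchmark of exactly that shift-versus-multiply question, in Python:
import timeit

# Results differ per language, runtime, and version; the point is to measure.
shift = timeit.timeit("x << 1", setup="x = 12345", number=5_000_000)
mult = timeit.timeit("x * 2", setup="x = 12345", number=5_000_000)
print(f"x << 1: {shift:.3f}s   x * 2: {mult:.3f}s")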
With all the differences between all the languages it's hard to find generic optimization tips. Maybe for some languages which are similar (f.ex. C# and Java). But if you add both JavaScript and Python to that list I'm pretty sure not many common optimization techniques will be left over.
Also keep in mind that premature optimization is often considered bad practice. Developer-hours are much more expensive than buying additional hardware.
However, there is one thing that comes to mind. Over the past decade or so, Object Relational Mappers have become quite popular, and hence they have emerged in pretty much all popular languages. But you have to be careful with those: if not properly configured, it's easy to load tons of data into memory that you will never use in your code. Keep that in mind. Lazy loading might be of some help here, but your mileage will vary.
Optimization depends on so many things that answering such a generic question would make this post explode into a full-fledged paper. In my opinion, optimization should be regarded on a project-by-project basis. Not only Language-by-Language basis.
I think you need to split this into two separate questions:
1) Are there language-agnostic ways to find performance problems? YES. Profile, but avoid the myths around that subject.
2) Are there language-agnostic ways to fix performance problems? IT DEPENDS.
A general language-agnostic principle is: do (1) before you do (2).
In other words, Ready-Aim-Fire, not Ready-Fire-Aim.
Here's an example of performance tuning, in C, but it could be any language.
A few things I have learned since asking this:
I/O operations are usually the most expensive for performance. This holds especially true for disk or network I/O (network is usually the worst, because waiting for a response from the other host means waiting on all of the processing and I/O that the remote host does). Only do these operations when absolutely necessary, and consider using a cache when possible.
Database operations can be very expensive because of network/disk I/O and the translation time to and from SQL. Using an in-memory DB or cache can help reduce I/O issues, and some (not all) NoSQL databases can reduce SQL translation time.
Only log important information. Logging libraries like log4j help here because you can put logging to your heart's desire in your application while assigning each message a log level; whichever level you set the application to, it will only log messages at that level or higher. If you need to troubleshoot functionality, you only have to change a quick config and restart your application to get additional messages; when you are done, turn the application back to the default level so you do not log too much. (A minimal sketch of this appears after this list.)
Only include functionality that is needed. Additional functionality may be nice to have but can increase processing time, provide additional locations for the application to fail, and costs your team development time that could be spent on more important tasks.
Use and configure your memory manager correctly. Garbage collection routines can kill performance if they are not configured correctly. If your application freezes for a second or two every minute for garbage collection, your customers probably will not be happy.
Profile only after you have discovered a performance issue. Profilers make the application's performance look worse than it is, because your application and the profiler run on the same host and consume the same hardware resources.
Do not do performance tuning prematurely. There are general practices you can follow that should be better for performance in each language, but starting performance tuning in the middle of application development can cost you a lot, because there is still functionality to be added.
This will not necessarily help performance, but keep class dependencies to a minimum. When you get into performance tuning there is a good chance you will have to rewrite whole portions of code, and the more dependencies there are on the section you are tuning, the greater the chance you will break something. It can often be a domino effect: after fixing the performance issue you have to fix all the dependencies, and possibly the dependencies of those dependencies. A performance tuning exercise estimated at a few hours can quickly turn into months in an application with a lot of dependencies.
If performance is a concern, do not use interpreted (scripting) languages.
Only use the hardware you need. Having a system with a 64-core processor may seem cool, but if your application only runs two or three threads, you get little benefit from all those cores. In fact, in rare instances overly excessive hardware can hurt performance, because the application spends more time switching between cores or processors than actually processing.
Make any timing metrics you report as granular as possible. Today you may only care about the number of milliseconds a process takes, but as you make your application faster and faster you may need finer-grained timings. If version A reports milliseconds and version B reports microseconds, how can you compare performance when version B takes about the same number of milliseconds? Version B may be better, but you can't tell, because version A's metrics were not granular enough.
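As promised in the logging tip above, a minimal sketch of log levels using Python's standard logging module, which works like log4j's levels (the logger name is hypothetical):
import logging

logging.basicConfig(level=logging.WARNING)  # quiet production default
log = logging.getLogger("myapp")            # hypothetical app logger

log.debug("per-message detail")   # suppressed: below WARNING
log.info("call setup complete")   # suppressed
log.warning("retrying DB query")  # emitted

# Troubleshooting: flip one setting (in real life, a config file) instead
# of touching code.
log.setLevel(logging.DEBUG)
log.debug("now visible")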

When is performance gain significant enough to implement that optimization?

Following the textbook, I measure performance whenever I try optimizing my code. Sometimes, however, the performance gain is rather small and I can't decide whether I should implement the optimization.
For example, if a fix shortens an average response time of 100ms to 90ms under some conditions, should I implement it? What if it shortens 200ms to 190ms? How many conditions should I try before I can conclude that it will be beneficial overall?
I guess it's not possible to give a straightforward answer to this, as it depends on too many things, but is there a good rule of thumb I should follow? Are there any guidelines or best practices?
EDIT: Thanks for the great answers! I guess the moral of the story is that there is no easy way to tell whether you should, but there ARE guidelines that can aid the process: things you should consider, things you shouldn't do, etc. This particular time I ended up implementing the fix, even though it turned a few lines of code into 20-30 lines, because our app is very performance critical and it was a consistent 10% gain in various realistic cases.
I think the rule of thumb (at least for me) is two-fold:
"It matters if it matters"--in the business world, this generally means that it matters if the clients care. That is, if the end users will "notice" the difference between 100ms and 90ms (I'm not being facetious here), then it matters.
If "it matters," then you will want to test your code thoroughly against a realistic variety of use cases that are likely to arise or at least may arise. If an optimization speeds up code in 50% of cases, but actually runs slower than what you previously had the other 50% of the time, obviously, it may not be worth implementing.
Regarding point 1 above: by suggesting an end user of your software might "notice" a 10ms difference, I don't mean to suggest that they will actually visibly see a difference. But if your app runs on a server with millions of connections and every little speed increase takes a substantial load off the server, that might matter to the client running the server. Or if your app performs extremely time-critical work, this is another case where the result of a 10ms speedup might be noticeable, even if the speedup itself isn't.
The only sensible approach to your question is something along the lines of "when the benefit is large enough to warrant the time you invest in exploring, implementing and testing the optimization."
The "benefit is large enough" is extremely subjective. Can you or your employer sell more units of software if you make this change? Will your user base notice? Will it give you personal gratification to have the fastest-possible code? Which of those or similar questions apply is something only you can know.
By and large, most of the software I have written (in a 20+ year career) has been "fast enough" out of the box, and the code I cared to optimize presented itself as an obvious bottleneck to the end users: Queries taking a long time, scrolling too slow, that sort of thing.
Donald Knuth made the following two statements on optimization:
"We should forget about small
efficiencies, say about 97% of the
time: premature optimization is the
root of all evil" [2]
and
"In established engineering
disciplines a 12 % improvement, easily
obtained, is never considered marginal
and I believe the same viewpoint
should prevail in software
engineering"[5]
src: http://en.wikipedia.org/wiki/Program_optimization
Is the optimization obfuscating your code too much?
Do you really need the optimization? If your app runs just fine, then readability of the code is probably more important.
Did you work on the general design and algorithms of your application before trying small hacky optimizations?
You should focus optimisation efforts on the parts of code that account for the most runtime. If a particular piece of code takes up 80% of the total runtime, then optimising it to reduce the time it takes by 5% will have as much impact as reducing the time of the rest of the code by 20%.
In general, optimisations make code less readable (not always, but often). Therefore you should avoid optimising until you are sure that there is a problem.
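A quick sanity check of that 80%/5% versus 20%/20% arithmetic:
# Total runtime normalized to 1.0; the hot spot is 80% of it.
hot, cold = 0.80, 0.20
after_hot_fix = cold + hot * (1 - 0.05)   # shave 5% off the hot 80%
after_cold_fix = hot + cold * (1 - 0.20)  # shave 20% off the cold 20%
print(after_hot_fix, after_cold_fix)      # both 0.96: the same 4% win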
If it speeds up your program at all, why not implement it? You have already done the work by creating the new implementation, so you are not doing extra work by applying the new implementation.
Unless the code is THAT much harder to understand.
Also, 100 ms to 90 ms is a 10% gain in performance. A 10% gain should not be taken lightly.
The real question is, if it only took 100 ms to run in the first place, what was the point in trying to optimize it?
As long as it's fast enough, then you don't need to optimise any more. But then, you wouldn't even bother profiling if that was the case...
If the performance gain is small, consider the other factors: maintainability, risk of making the change, understandability, etc. If it reduces the ability to maintain or understand the code, it probably isn't worth doing. If it improves those attributes, then it's more reason to implement the change.
In most cases, your time is more valuable than the computer's. If you think it'll take you half an hour longer to work out what the code is doing later (say if there's a bug in it), and it's only saved you a few seconds, ever, you're at a net loss.
It depends very much on the usage scenario. I'll assume here that the code in question has been profiled and thus it is known to be the bottleneck--i.e. not just "this could be faster", but "the program would give results/finish running faster if this were faster". In situations where this is not the case--e.g. if you spend 99% of your time waiting for more data to come over an ethernet connection--then you should care about correctness but not optimize for speed.
If you are writing a piece of user interface code, what you care about is perceived speed. Generally anything under ~100 ms is perceived as "instant"--no point speeding it up.
If you are writing a piece of code for a giant server farm, then if the cost of your salary to make the code fast is less than the cost of the extra electricity for the server farm, it's worthwhile. (But be sure to prioritize your time.)
If you are writing a piece of code that is used rarely or when unattended, as long as it completes in a semi-sane duration, don't worry about it. Install scripts tend to be of this sort (unless you start running into many minutes, at which point users might start abandoning the install because it's taking too long).
If you are writing code to automate a task for someone else, then if (your time spent coding + their time spent using the optimized code) is less than (their time spent using the slow code), it's worthwhile. If you're doing this in a commercial setting, weight this by your respective salaries.
If you are writing library code that will be used by many thousands of people, always make it faster if you have time to.
If you are under time pressure to simply have something working e.g. as a demo, don't optimize (except through sensible choice of algorithms from libraries) unless the result would be so slow that it isn't even "working".
One of the biggest annoyances for me personally is finding software which perhaps initially fell into one category and then later fell into another, but for which nobody went back to do needed optimizations. Until recently Javascript performance was a great example of this. Moral of the story is: don't just decide once; revisit the issue as the situation demands.

How do you explain to your boss that code/resource optimization is important?

Ahh, it's so frustrating every time...
We have a dedicated server at our hosting company, and every time I write a new app (or an addition to a pre-existing app), I tend to 'lose' some time optimizing the code in various ways (reducing the DB queries, optimizing the DB structure, reducing bandwidth, etc.), depending on what the app is supposed to do.
The point is not that I write bad code and then rebuild it; it's just that after a project is complete, I always find things that could be done better.
And every time my boss catches me doing this, he says: 'You're wasting your time! If the application needs more resources, we'll buy more RAM, more CPU, or more bandwidth!'
What is the best (and simplest) way to explain to him that optimization is still important, and that upgrading the hardware of a (production!) server is not so easy or automagical?
EDIT: I'm not talking just about database optimization, but about every aspect of an application.
Chances are you are really wasting time. Most of the optimizations that you learn in college or university are sort of irrelevant in the business world. I recommend concentrating on those optimizations where you can quantify WHY your optimization is worth your company's money (your time spent is their money spent).
When trying to optimise code you need to follow the optimisation cycle. There are no short-cuts.
1. Define specific, detailed, measurable performance requirements.
2. Measure your application's performance.
3. If it meets the requirements, stop, as continuing is a waste of time/money.
4. Determine where the bottleneck is (which machine? CPU, memory, disk, etc.?).
5. Fix the bottleneck (either a hardware or a software fix, whichever is faster/cheaper).
6. Go to 2.
From your post it doesn't sound like you have done 1 or 2, so you've skipped over 3 and 4 because you don't know enough to do them, and are now stumbling blindly around in 5 attempting to optimise whatever you think needs optimising.
So from what you've said I'd have to agree with your boss. You are wasting your time. Or at least you haven't followed enough process to prove otherwise, which is essentially the same thing.
Keep detailed notes and time sheets the next time you do a server move/upgrade, and record every minute of work that arises from it (including things in the weeks after). Then you'll have hard evidence that upgrading is expensive, and that optimizations help delay the investment.
In the current situation, you might be able to do something with benchmarks and statistics along the line of "without my optimization, the server will be at 90% capacity with X number of users. With my optimization, we can cater for Y users more, before we have to get a new machine."
On the other hand, the line between optimization and over-optimization is thin. While writing code that doesn't waste resources is good craftsmanship and should be a human right, RAM and disk space are awfully cheap these days. Optimizations also tend to make code more complicated, and harder to maintain for others. When you control your own hardware, code optimization may not always be the top goal.
Do you have performance objectives for the application(s)? Free capacity, response time, bandwidth usage etc. Without them you will find it difficult to justify why an improvement needs to be made. You need to be able to quantify the problem before people will spend money on it.
If the application is already meeting the requirements then your boss is probably right.
To ensure reserve capacity for future customers, you (and your boss) could set a threshold such that once performance comes within a percentage of the requirements, you investigate. That is the time to request/push/negotiate for optimisations, as he will be aware of the problem and the risk of doing nothing. If you can recommend changes that will cost less than the hardware, he should hopefully agree to let you make them. Of course the reverse still applies: if the total cost of the hardware upgrade is cheaper, then that is probably the better option.
Wait until buying RAM and CPU becomes more expensive than paying you to optimize the code. Then you have a case.
There's a break even point between improving software vs. improving hardware. For desktop systems, a good threshold is by purchase time:
If that machine fails, can you go to the next mall and buy a replacement within an hour?
As long as you can answer yes, optimization is futile - an investment into the future maybe, but is that really your decision alone?
A similar rule for web sites might be "how long would it take to get a new hosting contract on that?"
This isn't perfect, it covers just a single aspect of scalability. I like it because it's an easy to understand aspect outside of technical concerns.
Of course, you should have more material in case that aspect gets brushed aside. It helps to simply ask what the company's/boss's plans are for a few key scenarios, e.g. contingency plans for DDoS attacks, for "rush events", etc. Having some data to back that up, and some key stories ("company foo was mentioned on a popular night show, the web server broke down under the requests, business was lost, potential customers were angered"), helps a lot.
The key thing to realize is that (judging from your description alone) it is your responsibility to foresee such events and provide the technical solutions, but it is not your responsibility to make these decisions.
Also, getting a written statement from your boss that "five hours of downtime isn't a problem, get back to work on the important things!" is good CYA.
...about the same way one would explain to a developer that code/resource optimization is unimportant...
I guess this problem arises because your way of measuring importance is different from his.
His job is to care about money and make sure he gets the most out of it... and frankly, servers are cheaper than developers, and managed servers are cheaper than admins. The ugly truth is that many people think that way. Personally, I think PHP isn't really the right choice for performance-critical parts of an app, and neither are RDBMSs, but nearly every managed server runs PHP and MySQL, and business clients prefer an app they can run everywhere, based on widely known technology, which turns out to be PHP and MySQL on the server side. The world is a cruel place and the truth is ugly... now when it comes to you, you have only 2 choices:
live with it
work with people who are more like you, and who share the same ambitions
Your boss does his job right, from his perspective. What you do is a waste of company money, because the company's goal is not to write beautiful and highly optimized code; whatever your company does, in the end you only provide a tool, and the value of a tool is measured by the use it provides in relation to the money it costs.
If your optimizations were economically reasonable, then the easiest path would be to stop optimizing and wait for the moment your boss comes to you asking for a speedup...
I suggest you either look for a new job, or focus on other important things:
good design and use of best practices such as KISS, DRY, SOLID and GRASP
scalable architecture (doesn't mean you have to actually implement it, just have an idea how the whole thing could scale well)
maintainability, including readability and good documentation
flexibility & extensibility
robustness & correctness
do not reinvent the wheel and use the right tools for a job - use adequate technologies, libraries and frameworks
fast development
These goals are important for the present and the future of a software project, also when it comes to money, and they preserve the possibility of optimizing when it is actually needed...
this probably isn't the answer you wanted, but I hope it helps ...
Optimizing is often really just a waste of time; what is more important is optimizing for maintainability:
- keep the code simple (see: KISS)
- move duplicate code into functions (see: DRY)
- optimize comments (as few comments as possible, as many as necessary)
- remove unused code
Those changes do not necessarily affect the speed of the program but they reduce the time needed to implement changes and fix bugs so they have real business value as they save expensive developer time.
If you're having to explain why there is a need for optimisation, then you're already on a low-win-rate path. Unfortunately, the best argument arises when performance becomes a noticeable problem (e.g. customers complaining); that makes people sit up and listen if they're getting constant ear-ache about it. Often, though, throwing hardware at the problem really is the best/cheapest solution.
For me, however, performance is important: when working on highly scalable systems, you need to spend time on optimisations. It should at the very least be on your mind while you're developing, and a lot of it can be absorbed into the dev process (as opposed to finishing first and then going back to optimise).
e.g.
- What happens if my database has 100,000,000 records instead of 100,000? (you can easily fool SQL Server into thinking there are more records in a table than there actually are to see what the effect on the execution plan would be).
If you're being proactive about optimisation (which I personally think is good), then you need to come up with a real-world meaning for your boss, e.g. "if we optimise our site by cutting down the page size, we could save x amount of bandwidth a month, saving us xx much money".
You do have to be careful about micro-optimisations though as you can micro-optimise until the cows come home, for much lower benefit.
You could just wait until your boss says "Why is it taking so long?" Then you could say "Hey, boss, we could buy 10 times as much hardware, or I could spend a couple days and make it 10 times faster". Here's an example.
Added: If your boss says "Well, why did you make it slow in the first place?" I would have no trouble explaining that he could hire God's Right Hand Programmer, and the code as first written would still have room for optimizing.
If you are on a relational database (and you're not working with trivial applications), you cannot afford to write bad database queries. Since you are using a red-gate tag, I assume you are on SQL Server. If you design a bad schema (including bad queries) on SQL Server, then once you start getting moderate traffic you will be forced to scale in horrendous ways. Once you get a high level of traffic you will be screwed, and will need to hire a very expensive consultant/DBA to undo the mess. Use this argument with your boss :D.
What I am trying to get at is even though SQL is declarative and it doesn't appear that you have to pay attention to performance -- it doesn't really work that way. You have to design your data model by keeping in mind the way it is going to be used.
If you are writing business logic in application code, that matters less; you can fix it later with less trouble if performance becomes an issue. Most business logic these days is stateless (especially if it's web-based or web-service based), meaning you can just throw more machines at the problem; if it's not stateless, it's just a bit harder.
Every time something takes longer than it would have if the refactoring had been done ahead of time, make sure your boss hears about it.
IMO, It's less a case of convincing that optimizations need to be done (and based on your Boss's responses, you're not going to win this fight), and more a case of always building in refactoring time into your estimates.
Do a cost analysis...
This will take me X hours and will mean every project needs 50% less data storage. Based on us doing 2 projects a year using on average 100Gb space, that will save $Y after a year, $Z after 3 years.
Crude example but it suffices.
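To flesh out that crude example, a hypothetical calculation; every figure below is made up, so plug in your own rates:
hours_of_work = 16           # the "X hours" of optimization effort
hourly_rate = 60.0           # $/developer-hour
gb_saved_per_project = 50    # 50% of the average 100 GB
projects_per_year = 2
storage_cost_per_gb_year = 5.0

invested = hours_of_work * hourly_rate
yearly_saving = (gb_saved_per_project * projects_per_year
                 * storage_cost_per_gb_year)
# Crude: treats savings as linear, though storage really accumulates.
for years in (1, 3):
    print(f"after {years} year(s): ${yearly_saving * years:.0f} saved "
          f"vs ${invested:.0f} invested")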
Ultimately you and your boss are both wrong: you think code has to be fast (primarily) to satisfy your idealism; he doesn't care at all, as he just wants to deliver products.

Is Performance Always Important?

Since I am a Lone Developer, I have to think about every aspect of the systems I am working on. Lately I've been thinking about performance of my two websites, and ways to improve it. Sites like StackOverflow proclaim, "performance is a feature." However, "premature optimization is the root of all evil," and none of my customers have complained yet about the sites' performance.
My question is, is performance always important? Should performance always be a feature?
Note: I don't think this question is the same as this one, as that poster is asking when to consider performance and I am asking if the answer to that question is always, and if so, why. I also don't think this question should be CW, as I believe there is an answer and reasoning for that answer.
Adequate performance is always important.
Absolute fastest possible performance is almost never important.
It's always worth keeping an eye on performance and being aware of anything outrageously non-optimal that you're doing (particularly at a design/architecture level) but that's not the same as micro-optimising every line of code.
Performance != Optimization.
Performance is a feature indeed, but premature optimization will cost you time and will not yield the same result as when you optimize the parts that need optimization. And you can't really know which parts need optimization until you can actually profile something.
Performance is the feature your clients will not tell you about if it's missing, unless it's really painfully slow and they're forced to use your product. Existing customers may report it in the end, but new customers will simply not bother if the performance isn't there.
You need to know what performance you need, and formulate it as a requirement. Then, you have to meet your own requirement.
That 'root of all evil' quote is almost always misused and misunderstood.
Designing your application to perform well can be mostly be done with just good design. Good design != premature optimization, and it's utterly ridiculous to go off writing crap code and blowing off doing a better job on the design as an 'evil' waste. Now, I'm not specifically talking about you here... but I see people do this a lot.
It usually saves you time to do a good job on the design. If you emphasize that, you'll get better at it... and get faster and faster at writing systems that perform well from the start.
Understanding what kinds of structures and access methods work best in certain situations is key here.
Sure, if your app becomes truly massive or has insane speed requirements, you may find yourself doing tricked-out optimizations that make your code uglier or harder to maintain... and it would be wrong to do those things before you need to.
But that is absolutely NOT the same thing as making an effort to understand and use the right algorithms or data patterns or whatever in the first place.
Your users are probably not going to complain about bad performance if it's bearable. They possibly wouldn't even know it could be faster. Reacting to complaints as a primary driver is a bad way to operate. Sure, you need to address complaints you receive... but a lack of them does not mean there isn't a problem. The fact that you are considering improving performance is a bit of an indicator right there. Was it just a whim, or is some part of you telling you it should be better? Why did you consider improving it?
Just don't go crazy doing unnecessary stuff.
Keep performance in mind but given your situation it would be unwise to spend too much time up front on it.
Performance is important but it's often hard to know where your bottleneck will be. Therefore I'd suggest planning to dedicate some time to this feature once you've got something to work with.
Thus you need to set up metrics that are important to your clients and to you. Keep and analyse these measurements. Then estimate how long and how much each improvement would take to implement. Now you can aim at getting as much bang for your buck/time as possible.
If it's a web site, it would be wise to track your page size and performance using Firebug + YSlow and/or Google Page Speed. Again, know what applies to a small site like yours versus things that only apply to Yahoo and Google.
Jackson’s Rules of Optimization:
Rule 1. Don’t do it.
Rule 2 (for experts only). Don’t do it yet - that is, not until you have a perfectly clear and unoptimized solution.
- M. A. Jackson
Extracted from Code Complete 2nd edition.
To give a generalized answer to a general question:
First make it work, then make it right, then make it fast.
http://c2.com/cgi/wiki?MakeItWorkMakeItRightMakeItFast
This puts a more constructive perspective on "premature optimization is the root of all evil".
So to parallel Jon Skeet's answer, adequate performance (as part of making something work, and making it right) is always important. Even then it can often be addressed after other functionality.
Jon Skeet's 'adequate' nails it, with the additional provision that for a library you don't yet know what's adequate, so it's better to err on the safe side.
It is one of the many stakes you must not get wrong, but the quality of your app is largely determined by the weakest link.
Performance is definitely always important in a certain sense - maybe not the one you mean: namely in all phases of development.
In Big O notation, what's inside the parentheses is largely decided by design, both component isolation and data storage. The choice of algorithm will usually only affect best/worst case behavior (unless you start with decidedly substandard algorithms). Code optimizations will mostly affect the constant factor, which shouldn't be neglected either.
But that's true for all aspects of code: in any stage, you have a good chance to fail any aspect - stability, maintainability, compatibility etc. Performance needs to be balanced, so that no aspect is left behind.
In most applications, 90% or more of execution time is spent in 10% or less of the code. There is usually little use in optimizing code outside those 10%.
Performance is only important to the extent that developing the performance improvement takes less time than the total amount of time it will save for the user(s).
The result is that if you're developing something for millions, then yes, it's important to save them time. If you're coding up a tool for your own use, it might be more trouble than it's worth to save a minute or even an hour or more.
(this is clearly not a rule set in stone... there are times when performance is truly critical no matter how much development time it takes)
There should be a balance to everything. Cost (or time to develop) vs Performance for instance. More performance = more cost. If a requirement of the system being built is high performance then the cost should not matter, but if cost is a factor then you optimize within reason. After a while, your return on investment suffers in that more performance does not bring in more returns.
The importance of performance is IMHO highly correlated with your problem set. If you are creating a site with an expectation of heavy load and a lot of server-side processing, then you might want to put some more time into performance (otherwise your site might end up being unusable). However, for most applications the time put into optimizing the performance of a website is not going to pay off: users won't notice the difference.
So I guess it breaks down to this:
Will users notice the improvements?
How does this improvement compare to competing sites?
If users will notice AND the improvement would be enough to differentiate you from the competition - performance is an important feature - otherwise not so much. (To a point - I don't recommend ignoring it entirely - you don't want your site to turtle along after all).
No. Fast enough is generally good enough.
It's not necessarily true, however, that your client's ideas about "fast enough" should trump your own. If you think it's fast enough and your client doesn't, then yes, you need to accommodate your ideas to theirs. But if your client thinks it's fast enough and you don't, you should seriously consider going with your opinion, not theirs (since you may be more knowledgeable about performance standards in the wider world).
How important performance is depends largely and foremost on what you do.
For example, if you write a library that can be used in any environment, this can hardly ever have too much performance. In some environments, a 10% performance advantage can be a major feature for a library.
If you, OTOH, write an application, there's always a point where it is fast enough. Users will neither notice nor care whether a button press reacts within 0.05 or 0.2 seconds, even though that's a factor of 4.
However, it is always easier to get working code faster, than it is to get fast code working.
No. Performance is not important.
Lack of performance is important.
Performance is something to be designed in from the outset, not tacked on at the end. For the past 15 years I have been working in the performance engineering space, and the cause of most of the project failures I work on is a lack of requirements on performance. A couple of posts have noted "fast enough" as an observation, and whether your expectation matches that of your clients, but what about when your client, your architectural team, your platform engineering team, your functional test team, your performance test team and your operations team all have different expectations on performance, none of which have been committed to stone and measured against? Bad magic, to be certain.
Capture those expectations on the part of your clients. Commit them to a specific, objective, measurable requirement that you can evaluate at each stage of production of your software. Expectations may not be uniform, with one section of your app/code needing to be faster than others, nor will each customer have the same expectations on what is considered acceptable. Having this information will force you to confront decisions in the design and implementation that you may have overlooked in the past and it will result in a product which is a better match to your clients expectations.

For your complicated algorithms, how do you measure their performance?

Let's just assume for now that you have narrowed down where the typical bottlenecks in your app are. For all you know, it might be the batch process you run to reindex your tables; it could be the SQL queries that runs over your effective-dated trees; it could be the XML marshalling of a few hundred composite objects. In other words, you might have something like this:
public Result takeAnAnnoyingLongTime(Input in) {
// impl of above
}
Unfortunately, even after you've identified your bottleneck, all you can do is chip away at it. No simple solution is available.
How do you measure the performance of your bottleneck so that you know your fixes are headed in the right direction?
Two points:
Beware of the infamous "optimizing the idle loop" problem. (E.g. see the optimization story under the heading "Porsche-in-the-parking-lot".) That is, just because a routine is taking a significant amount of time (as shown by your profiling), don't assume that it's responsible for slow performance as perceived by the user.
The biggest performance gains often come not from that clever tweak or optimization to the implementation of the algorithm, but from realising that there's a better algorithm altogether. Some improvements are relatively obvious, while others require more detailed analysis of the algorithms, and possibly a major change to the data structures involved. This may include trading off processor time for I/O time, in which case you need to make sure that you're not optimizing only one of those measures.
Bringing it back to the question asked, make sure that whatever you're measuring represents what the user actually experiences, otherwise your efforts could be a complete waste of time.
1. Profile it.
2. Find the top line in the profiler; attempt to make it faster.
3. Profile it.
4. If it worked, go to 1. If it didn't work, go to 2.
I'd measure them using the same tools / methods that allowed me to find them in the first place.
Namely, sticking timing and logging calls all over the place. If the numbers start going down, then you just might be doing the right thing.
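In that spirit, a hypothetical sketch of what "timing calls all over the place" can look like in Python (the decorator and the bottleneck function are made up):
import functools
import logging
import time

log = logging.getLogger("perf")

def timed(fn):
    """Log how long each call takes; sprinkle on suspected bottlenecks."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            log.info("%s took %.2f ms", fn.__name__, elapsed_ms)
    return wrapper

@timed
def take_an_annoying_long_time(data):  # hypothetical bottleneck
    time.sleep(0.05)

logging.basicConfig(level=logging.INFO)
take_an_annoying_long_time(None)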
As mentioned in this msdn column, performance tuning is like the job of painting the Golden Gate Bridge: once you finish painting the entire thing, it's time to go back to the beginning and start again.
This is not a hard problem. The first thing you need to understand is that measuring performance is not how you find performance problems. Knowing how slow something is doesn't help you find out why. You need a diagnostic tool, and a good one. I've had a lot of experience doing this, and this is the best method. It is not automatic, but it runs rings around most profilers.
It's an interesting question. I don't think anyone knows the answer. I believe that a significant part of the problem is that for more complicated programs, no one can predict their complexity. Therefore, even if you have profiler results, it's very hard to interpret them in terms of changes that should be made to the program, because you have no theoretical basis for what the optimal solution is.
I think this is a reason why we have so much bloated software. We optimize only so that quite simple cases work on our fast machines. But once you put such pieces together into a large system, or you use input an order of magnitude larger, the wrong algorithms (which until then were invisible both theoretically and practically) start showing their true complexity.
Example: You create a string class, which handles Unicode. You use it somewhere like computer-generated XML processing where it really doesn't matter. But Unicode processing is in there, taking part of the resources. By itself, the string class can be very fast, but call it million times, and the program will be slow.
I believe that most of the current software bloat is of this nature. There is a way to reduce it, but it contradicts OOP. There is an interesting book about various techniques; it's memory oriented, but most of them could be turned around to gain speed.
I'd identify two things:
1) What complexity is it? The easiest way is to graph time taken versus size of input.
2) How is it bound? Is it memory, or disk, or IPC with other processes or machines, or...?
Now point (2) is the easier to tackle and explain: lots of things go faster if you have more RAM, a faster machine, faster disks, or a move to gigabit ethernet. If you identify your pain, you can put some money into hardware to make it tolerable.
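For point (1), a minimal sketch of the graphing idea: time the routine at several input sizes and watch how the cost grows (sorted here is just a stand-in for your algorithm):
import random
import time

def measure_growth(fn, sizes):
    """Time fn at several input sizes. The ratio between successive timings
    hints at the complexity: ~2x per doubling suggests O(n),
    ~4x suggests O(n^2), and so on."""
    prev = None
    for n in sizes:
        data = random.sample(range(10 * n), n)
        start = time.perf_counter()
        fn(data)
        elapsed = time.perf_counter() - start
        note = f"  {elapsed / prev:.1f}x previous" if prev else ""
        print(f"n={n:>8}: {elapsed * 1000:8.2f} ms{note}")
        prev = elapsed

measure_growth(sorted, [50_000, 100_000, 200_000, 400_000])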
