With so many system resources available, how sure are you that your code is tuned? - performance

With CPUs getting ever faster, hard disks spinning and bits flying around so quickly, and network speeds increasing as well, it's not as simple to tell bad code from good code as it used to be.
I remember a time when you could optimize a piece of code and undeniably perceive an improvement in performance. Those days are almost over. Instead, I guess we now have a set of rules that we follow like "Don't declare variables inside loops" etc. It's great to adhere to these so that you write good code by default. But how do you know it can't be improved even further without some tool?
Some may argue that a couple of nanoseconds won't really make that big a difference these days. The truth is, our code sits on so many layers that the small costs stack up into a staggering effect.
I'm not saying we should optimize every little millisecond out of our code, as that would be expensive and infeasible. I believe we have to do our best, given our time constraints, to write efficient code as well.
I'm just interested to know what tools you use to profile and measure performance of code, if at all.

I think that optimization should be thought of not as looking at each line of code, but rather at the asymptotic complexity of your algorithm. For example, bubble sort is probably one of the worst sorting algorithms you could use: it takes the longest. Quicksort and mergesort are faster, and should almost always be used instead of a bubble sort.
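To make that concrete, here is a minimal C++ sketch (function names are illustrative): the hand-rolled O(n^2) bubble sort versus the standard library's O(n log n) sort.

#include <algorithm>
#include <cstddef>
#include <vector>

// O(n^2): compares every adjacent pair on every pass.
void bubbleSort(std::vector<int>& v) {
    for (std::size_t i = 0; i + 1 < v.size(); ++i)
        for (std::size_t j = 0; j + 1 < v.size() - i; ++j)
            if (v[j] > v[j + 1]) std::swap(v[j], v[j + 1]);
}

// O(n log n): the standard library's sort; prefer this by default.
void fastSort(std::vector<int>& v) {
    std::sort(v.begin(), v.end());
}

On a million elements the difference is not a constant factor but roughly n / log n, i.e. tens of thousands of times.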
If you keep optimization always in your mind when designing a solution to a problem, then you should be able to write readable code, which other developers will approve of. Also, if you are programming in a higher level language that will be compiled before it is run, remember that compilers make some awesome optimizations nowadays that you or I may not think of, and also (more importantly) do not have to worry about.
Stick with a good, low big-O, and your code should be optimized pretty well. If you are working with datasets in the millions or larger, look for an O(log n) algorithm. They work great for large tasks and keep your code optimized.
Let the compiler work on the line-by-line code optimizations so you can focus on the solution.
There are times that do warrant line-by-line optimization, and if you really need that much speed, you might want to look into assembly so that you can control every instruction that is written.

There's a big difference between "good" code and "fast" code. They aren't exactly separate from each other either, but "fast" code doesn't mean "good". Oftentimes, "fast" actually means bad code, because readability compromises must be made to make it fast.
The way I look at it, hardware is cheap, programmers are expensive. Unless there is a serious performance problem with some piece of code, you should never have to worry about speed. If there are performance problems, you'll notice them. Only when you notice the performance problem on good hardware should you have to worry about optimization (in my opinion).
If you reach the point where your code is slow, but you can't figure out why, I'd use a profiler like ANTS, or dotTrace if you're in the .NET world (I'm sure there are others out there for other platforms and languages). They're pretty useful, but I've only ever had one situation where I needed a profiler to identify the problem. Now that I know the issue, I won't need a profiler to tell me it's a problem again, because I'll never forget the amount of time I spent trying to optimize it.

This is absolutely a valid concern, but not for most developers. Most developers are concerned with delivering a working product to their employer. Optimized code is seldom a requirement.
The best way to make sure your code is fast is to benchmark or profile it. A lot of compiler optimizations create non-intuitive oddities in the performance of a programmer's code, so in the end measurement becomes essential.
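As a minimal sketch of what "measure" means in practice (C++; the workload function is a made-up placeholder):

#include <chrono>
#include <iostream>

volatile long sink;                           // defeats dead-code elimination

void workUnderTest() {                        // placeholder for the real code
    long s = 0;
    for (int i = 0; i < 100000; ++i) s += i;
    sink = s;
}

int main() {
    using clock = std::chrono::steady_clock;
    const int iterations = 1000;              // repeat to average out noise
    auto start = clock::now();
    for (int i = 0; i < iterations; ++i)
        workUnderTest();
    auto elapsed = clock::now() - start;
    std::cout << std::chrono::duration<double, std::milli>(elapsed).count()
                     / iterations
              << " ms per iteration\n";
}

Crude, but even a harness like this beats intuition; a real profiler is the next step up.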

In my experience, Rational Quantify has given me the best results in terms of code tuning. It is not free, but it is very fully featured and has consistently produced the most useful output for me.
In terms of free tools, check out gprof or oprofile if you are in a Unix environment. They are not as good as some of the commercial tools, but can often point you in the right direction.
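For reference, the basic gprof workflow looks roughly like this (assuming the GNU toolchain; file names are illustrative):

g++ -pg -O2 myprog.cpp -o myprog       # build with profiling hooks
./myprog                               # run normally; writes gmon.out on exit
gprof ./myprog gmon.out > report.txt   # flat profile plus call graph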
On a side note, I am almost always surprised at what profilers turn up the first time I use them. You can have intuition as to where code may be bottlenecking, and it can often be completely wrong.

Almost all code I write is plenty fast enough. On the rare occasions when it isn't, for C, C++, and Objective Caml I use the venerable gprof and the excellent valgrind with its superb visualizer kcachegrind (part of the KDE SDK; don't be fooled by the out-of-date code on sourceforge).
The MLton Standard ML compiler and the Glasgow Haskell Compiler both ship with excellent profilers.
I wish there were a better profiler for Lua.

Uh, a profiler maybe? There are ones available for almost all platforms and languages.


Most hazardous performance bottleneck misconceptions

The guys who wrote Bespin (a cloud-based, canvas-based code editor [and more]) recently spoke about how they refactored and optimized a portion of the Bespin code because of a misconception that JavaScript was slow. It turned out that when all was said and done, their optimization produced no significant improvements.
I'm sure many of us go out of our way to write "optimized" code based on misconceptions similar to that of the Bespin team.
What are some common performance bottleneck misconceptions developers commonly subscribe to?
In no particular order:
"Ready, Fire, Aim" - thinking you know what needs to be optimized without proving it (i.e. guessing) and then acting on that, and since it doesn't help much, therefore assuming the code must have been optimal to begin with.
"Penny Wise, Pound Foolish" - thinking that optimization is all about compiler optimization, fussing about ++i vs. i++ while mountains of time are being spent needlessly in overblown designs, especially of data structures and databases.
"Swat Flies With a Bazooka" - being so enamored of the fanciest ideas heard in classrooms that they are just used for everything, regardless of scale.
"Fuzzy Thinking about Performance" - throwing around terms like "hotspot" and "bottleneck" and "profiler" and "measure" as if these things were well understood and / or relevant. (I bet I get clobbered for that!) OK, one at a time:
hotspot - What's the definition? I have one: it is a region of physical addresses where the PC register is found a significant fraction of time. It is the kind of thing PC samplers are good at finding. Many performance problems exhibit hotspots, but only in the simplest programs is the problem in the same place as the hotspot is.
bottleneck - A catch-all term used for performance problems, it implies a limited channel constraining how fast work can be accomplished. The unstated assumption is that the work is necessary. In my decades of performance tuning, I have in fact found a few problems like that - very few. Nearly all are of a very different nature. Rather than taking the shortest route from point A to B, little detours are taken, in the form of function calls which take little code, but not little time. Then those detours take further nested detours, sometimes 30 levels deep. The more the detours are nested, the more likely it is that some of them are less than necessary - wasteful, in fact - and it nearly always arises from galloping generality - unquestioning over-indulgence in "abstraction".
profiler - a universal good thing, right? All you gotta do is get a profiler and do some profiling, right? Ever think about how easy it is to fool a profiler into telling you a lot of nothing, when your goal is to find out what you need to fix to get better performance? Somewhere in your call tree, tuck a little file I/O, or a little call to some system routine, or have your evil twin do it without your knowledge. At some point, that will be your problem, and most profilers will miss it completely because the only inefficiency they contemplate is algorithmic inefficiency. Or, not all your routines will be little, and they may not call another routine in a small number of places, so your call graph says there's a link between the two routines, but which call? Or suppose you can figure out that some big percent of time is spent in a path A calls B calls C. You can look at that and think there's not much you can do about it, when if you could also look at the data being passed in those calls, you could see if it's necessary. Here's a fun project - pick your favorite profiler, and then see how many ways you could fool it. That is, find ways to make the program take longer without the profiler being able to tell what you did, because if you can do it intentionally, you can also do it without intending to. (A concrete sketch of this follows after this list.)
measure - (i.e. measure time) that's what profilers have done for decades, and they take pride in the accuracy and precision with which they measure. But measure what time? and why with accuracy? Remember, the goal is to precisely locate performance problems, such that you could fruitfully optimize them to gain a speedup. When you get that speedup, it is what it is, regardless of how precisely you estimated it beforehand. If that precision of measurement is bought at the expense of precision of location, then you've just bought apples when what you needed was oranges.
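Here's a contrived C++ sketch of the "tuck a little file I/O in the call tree" trick mentioned above. A CPU-time sampler (gprof, for instance) attributes almost nothing to this function, because the process spends its time blocked in the kernel rather than burning cycles. The file name and format are made up:

#include <fstream>
#include <string>

// Looks innocent in a call graph, but every call re-opens and re-reads a
// file. Wall-clock time vanishes here while sampled CPU time stays tiny.
std::string loadSetting(const std::string& key) {
    std::ifstream cfg("settings.conf");        // hidden I/O on every call
    std::string line, value;
    while (std::getline(cfg, line))            // assumes "key=value" lines
        if (line.compare(0, key.size(), key) == 0)
            value = line.substr(key.size() + 1);
    return value;
}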
Here's a list of myths about performance.
And this is what happens when one optimizes without a valid profile in hand. All you are doing without a profile is guessing and probably wasting time and money. I could list a bunch of other misconceptions, but many come down to the fact that if the code in question isn't a top resource consumer, it is probably fine as is. Kinda like unrolling a for loop that is doing disk I/O...
If I convert the whole code base over to [Insert xxx latest technology here], it'll be much faster.
Relational databases are slow.
I'm smarter than the optimizer.
This should be optimized.
Java is slow
And, unrelated:
Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems.
-jwz
Optimizing the WRONG part of the code (people, use your profiler!).
The optimizer in my compiler is smart, so I don't have to help it.
Java is fast (LOL)
Relational databases are fast (ROTFL LOL LMAO)
"This has to be as fast as possible."
If you don't have a performance problem, you don't need to worry about optimizing performance (beyond just paying attention to using good algorithms).
This misconception also manifests in attempts to optimize performance for every single aspect of your program. I see this most often with people trying to shave every last millisecond off of a low-volume web application's execution time while failing to take into account that network latency will take orders of magnitude longer than their code's execution time, making any reduction in execution time irrelevant anyhow.
My rules of optimization.
Don't optimize
Don't optimize now.
Profile to identify the problem.
Optimize the component that is taking at least 80% of the time.
Find an optimization that is 10 times faster.
My best optimization has been reducing a report from three days to nine minutes.
In my career I have met three people who had been tasked with producing a faster sort on VAX than the native sort. They invariably had been able to produce sorts that took only three times longer.
The rules are simple:
Try to use standard library functions first.
Try to use brute-force and ignorance second.
Prove you've got a problem before trying to do any optimization.

Performance anti-patterns

I am currently working for a client who are petrified of changing lousy un-testable and un-maintainable code because of "performance reasons". It is clear that there are many misconceptions running rife and reasons are not understood, but merely followed with blind faith.
One such anti-pattern I have come across is the need to mark as many classes as possible as sealed internal...
*RE-Edit: I see marking everything as sealed internal (in C#) as a premature optimisation.*
I am wondering what are some of the other performance anti-patterns people may be aware of or come across?
The biggest performance anti-pattern I have come across is:
Not measuring performance before and after the changes.
Collecting performance data will show if a certain technique was successful or not. Not doing so will result in pretty useless activities, because someone has the "feeling" of increased performance when nothing at all has changed.
The elephant in the room: Focusing on implementation-level micro-optimization instead of on better algorithms.
Variable re-use.
I used to do this all the time figuring I was saving a few cycles on the declaration and lowering memory footprint. These savings were of minuscule value compared with how unruly it made the code to debug, especially if I ended up moving a code block around and the assumptions about starting values changed.
Premature performance optimization comes to mind. I tend to avoid performance optimizations at all costs, and when I decide I do need them, I pass the issue around to my colleagues for several rounds, trying to make sure we put the obfu... eh, optimization in the right place.
One that I've run into was throwing hardware at seriously broken code, in an attempt to make it fast enough, sort of the converse of Jeff Atwood's article mentioned in Rulas' comment. I'm not talking about the difference between speeding up a sort that uses a basic, correct algorithm by running it on faster hardware vs. using an optimized algorithm. I'm talking about using a not-obviously-correct, home-brewed O(n^3) algorithm when an O(n log n) algorithm is in the standard library. There are also things like hand-coding routines because the programmer doesn't know what's in the standard library. That one's very frustrating.
Using design patterns just to have them used.
Using #defines instead of functions to avoid the penalty of a function call.
I've seen code where the expansions of defines turned out to generate huge and really slow code. Of course, it was impossible to debug as well. Inline functions are the way to do this, but they should be used with care as well.
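The classic failure mode, as a small C++ sketch (names made up):

#include <iostream>

#define SQUARE_MACRO(x) ((x) * (x))           // the #define "optimization"

inline int squareFn(int x) { return x * x; }  // the safer alternative

int main() {
    int i = 3;
    std::cout << SQUARE_MACRO(++i) << '\n';   // ++i expands twice: undefined behavior
    int j = 3;
    std::cout << squareFn(++j) << '\n';       // prints 16, as expected
}

And since the macro's body is pasted at every use site, a large "function" written this way bloats the generated code everywhere it appears.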
I've seen code where independent tests have been converted into bits in a word that can be used in a switch statement. Switch can be really fast, but when people turn a series of independent tests into a bitmask and start writing some 256 optimized special cases, they'd better have a very good benchmark proving that this gives a performance gain. It's really a pain from a maintenance point of view, and treating the different tests independently makes the code much smaller, which is also important for performance.
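In sketch form (hypothetical flags, C++):

// Anti-pattern: independent tests packed into a bitmask so that every
// combination gets its own hand-written "optimized" case.
void handlePacked(bool urgent, bool large, bool remote) {
    unsigned mask = (urgent ? 1u : 0u) | (large ? 2u : 0u) | (remote ? 4u : 0u);
    switch (mask) {
        case 0: /* ... */ break;
        case 1: /* ... */ break;
        // ... six more cases to write, test, and maintain ...
        default: break;
    }
}

// Usually smaller, clearer, and no slower: keep the tests independent.
void handlePlain(bool urgent, bool large, bool remote) {
    if (urgent) { /* ... */ }
    if (large)  { /* ... */ }
    if (remote) { /* ... */ }
}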
Lack of clear program structure is the biggest code-sin of them all. Convoluted logic that is believed to be fast almost never is.
Do not refactor or optimize while writing your code. It is extremely important not to try to optimize your code before you finish it.
Julian Birch once told me:
"Yes but how many years of running the application does it actually take to make up for the time spent by developers doing it?"
He was referring to the cumulative amount of time saved during each transaction by an optimisation that would take a given amount of time to implement.
Wise words from the old sage... I often think of this advice when considering doing a funky optimisation. You can extend the same notion a little further by considering how much developer time is being spent dealing with the code in its present state versus how much time is saved by the users. You could even weight the time by hourly rate of the developer versus the user if you wanted.
Of course, sometimes it's impossible to measure. For example, if an e-commerce application takes 1 second longer to respond, you will lose some small percentage of revenue from users getting bored during that 1 second. To win back that one second, you need to implement and maintain optimised code. The optimisation impacts gross profit positively and net profit negatively, so it's much harder to balance. You could try - with good stats.
Exploiting your programming language. Things like using exception handling instead of if/else just because in PLSnakish 1.4 it's faster. Guess what? Chances are it's not faster at all and that two years from now someone maintaining your code will get really angry with you because you obfuscated the code and made it run much slower, because in PLSnakish 1.8 the language maintainers fixed the problem and now if/else is 10 times faster than using exception handling tricks. Work with your programming language and framework!
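A hypothetical C++ rendering of that idea - leaning on the exception machinery as a "fast path" versus just working with the language:

#include <charconv>
#include <stdexcept>
#include <string>
#include <system_error>

// The "clever" version: relies on throwing for the failure case because it
// benchmarked well once, somewhere. Throwing is typically far more expensive
// than a plain check, and the intent is hidden.
int parseTricky(const std::string& s) {
    try { return std::stoi(s); }
    catch (const std::exception&) { return 0; }
}

// Working with the language: an explicit check, no exceptions involved.
int parsePlain(const std::string& s) {
    int value = 0;
    auto res = std::from_chars(s.data(), s.data() + s.size(), value);
    return res.ec == std::errc() ? value : 0;
}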
Changing more than one variable at a time. This drives me absolutely bonkers! How can you determine the impact of a change on a system when more than one thing's been changed?
Related to this, making changes that are not warranted by observations. Why add faster/more CPUs if the process isn't CPU bound?
General solutions.
Just because a given pattern/technology performs better in one circumstance does not mean it does in another.
StringBuilder overuse in .Net is a frequent example of this one.
Once I had a former client call me asking for any advice I had on speeding up their apps.
He seemed to expect me to say things like "check X, then check Y, then check Z", in other words, to provide expert guesses.
I replied that you have to diagnose the problem. My guesses might be wrong less often than someone else's, but they would still be wrong, and therefore disappointing.
I don't think he understood.
Some developers believe a fast-but-incorrect solution is sometimes preferable to a slow-but-correct one. So they will ignore various boundary conditions or situations that "will never happen" or "won't matter" in production.
This is never a good idea. Solutions always need to be "correct".
You may need to adjust your definition of "correct" depending upon the situation. What is important is that you know/define exactly what you want the result to be for any condition, and that the code gives those results.
Michael A Jackson gives two rules for optimizing performance:
Don't do it.
(experts only) Don't do it yet.
If people are worried about performance, tell 'em to make it real - what is good performance and how do you test for it? Then if your code doesn't perform up to their standards, at least it's something the code writer and the application user agree on.
If people are worried about non-performance costs of rewriting ossified code (for example, the time sink) then present your estimates and demonstrate that it can be done in the schedule. Assuming it can.
I believe it is a common myth that super lean code "close to the metal" is more performant than an elegant domain model.
This was apparently debunked by the creator/lead developer of DirectX, who rewrote the C++ version in C# with massive improvements. [source required]
Appending to an array using (for example) push_back() in C++ STL, ~= in D, etc. when you know how big the array is supposed to be ahead of time and can pre-allocate it.
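A C++ sketch of the fix - one reserve() call up front instead of repeated grow-and-copy cycles:

#include <cstddef>
#include <vector>

std::vector<int> firstSquares(std::size_t n) {
    std::vector<int> v;
    v.reserve(n);                        // one allocation up front, instead of
                                         // O(log n) reallocations plus copies
    for (std::size_t i = 0; i < n; ++i)
        v.push_back(static_cast<int>(i * i));
    return v;
}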

When/how to optimize generic code?

When writing application code, it's generally accepted that premature micro-optimization is evil, and that profiling first is essential, and there is some debate about how much, if any, higher level optimization to do up front. However, I haven't seen any guidelines for when/how to optimize generic code that will be part of a library or framework, where you never know exactly how your code will be used in the future. What are some guidelines for this? Is premature micro-optimization still evil? How should performance be balanced with other design goals such as ease of use, ease of demonstrating correctness, ease of implementation, and flexibility?
"How should performance be balanced with other design goals...?"
Get it to work.
Optimize it until it cannot be optimized further.
Note the order. Avoid premature optimization means optimize it after it works.
Optimization is still very, very important. Premature optimization does not mean NO optimization. It means optimize after it works.
I would say that optimization must take a back seat to other design goals such as ease of use, ease of demonstrating correctness, ease of implementation, and flexibility.
Try to write your code intelligently using good practices and avoiding the obvious pitfalls. Still, don't optimize until you can do it with a profiler and real use cases.
You will still encounter some use cases you never thought of but you can't optimize for them if you never thought of them.
A well designed framework will usually be a reasonably performing one too.
I heard an interesting and very enlightening discussion about the famous Knuth quote on a podcast recently (I think it was Deep Fried Bytes), which I'll try to summarize:
Everyone knows the famous quote: Premature optimization is the root of all evil.
However, that's only half of it. The full quote is:
We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.
Look at this carefully - say about 97% of the time.
The other side of that statement is about 3% of the time, "small" efficiencies are critical.
My monitor displays about 50 lines of code, and 3% of 50 is 1.5, so statistically at least 1-2 lines of code on every screen will contain something performance sensitive! Following the common wisdom of 'do it now, optimize it later' doesn't seem like such a cunning plan when you think that on every screen you have a possible performance issue.
IMHO you should always be thinking about performance. You shouldn't expend a great deal of effort or sacrifice maintainability for it until proven by profiling/testing, but you should definitely have it in the back of your mind.
I'd personally apply this to generic code like this:
You are bound to have some code somewhere, which when you wrote it you thought "this will be slow", or "this is a dumb algorithm, but it's not important right now, so I'll fix it later." As you're in a shared library and you can't assert that method A will only ever get called with 5 items, you should go in and clean all this stuff up.
Once you've sorted those things out, I wouldn't bother going much further. Maybe run the profiler over your unit tests to make sure nothing dumb has snuck through, but otherwise wait for feedback from the consumers of your library.
My rule of thumb is:
don't optimize
The full rule is actually:
if you don't have a metric, don't optimize
This means that if you haven't measured the performance and generated a concrete metric, you shouldn't be doing anything to make the code perform better.
After all: without a metric, how do you know what to optimize?
Once you have done some profiling, you may actually be surprised by where the performance bottlenecks of your system are ... in my experience it is often the case that relatively minor changes can have a drastic impact.
You're right that it's not always clear where the best bang for your buck is for your time. Your best bet is to be a user of your framework as well as its designer.
Employ your own framework in a non-trivial application and try to exercise the whole range of functionality. The more you use it, the clearer it will become which things most need to be optimal.
Also, get feedback and suggestions from other users as frequently as possible. You will inevitably find that other people want to do things with your framework that you would never think of.
I think the best approach is to have a really good set of use cases for how your framework will be exercised. Only then will you have any good idea of whether the performance is adequate for its intended use.
Sure, you're never going to know how somebody is going to use your framework in the future (in the early years of my career, it never failed to amaze me the creative ways that users put my software to use - ways I'd never envisaged!) but having thought about how you think it will be used should get you most of the way there.

Should a developer aim for readability or performance first? [closed]

Oftentimes a developer will be faced with a choice between two possible ways to solve a problem -- one that is idiomatic and readable, and another that is less intuitive, but may perform better. For example, in C-based languages, there are two ways to multiply a number by 2:
int SimpleMultiplyBy2(int x)
{
    return x * 2;
}
and
int FastMultiplyBy2(int x)
{
    return x << 1;
}
The first version is simpler to pick up for both technical and non-technical readers, but the second one may perform better, since bit shifting is a simpler operation than multiplication. (For now, let's assume that the compiler's optimizer would not detect this and optimize it, though that is also a consideration).
As a developer, which would be better as an initial attempt?
You missed one.
First code for correctness, then for clarity (the two are often connected, of course!). Finally, and only if you have real empirical evidence that you actually need to, you can look at optimizing. Premature optimization really is evil. Optimization almost always costs you time, clarity, maintainability. You'd better be sure you're buying something worthwhile with that.
Note that good algorithms almost always beat localized tuning. There is no reason you can't have code that is correct, clear, and fast. You'll be unreasonably lucky to get there starting off focusing on 'fast', though.
IMO the obvious readable version first, until performance is measured and a faster version is required.
Take it from Don Knuth
Premature optimization is the root of all evil (or at least most of it) in programming.
Readability 100%
If your compiler can't do the "x * 2" => "x << 1" optimization for you -- get a new compiler!
Also remember that 99.9% of your program's time is spent waiting for user input, waiting for database queries and waiting for network responses. Unless you are doing the multiply 20 bajillion times, it's not going to be noticeable.
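This is easy to verify. A sketch (assuming g++ or clang++ on x86-64; exact output varies by target):

int SimpleMultiplyBy2(int x) { return x * 2; }
int FastMultiplyBy2(int x)   { return x << 1; }

// Compile with: g++ -O2 -S multiply.cpp
// Both functions emit the same single instruction, e.g.:
//     lea eax, [rdi+rdi]
// No imul, no hand-written shift needed.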
Readability for sure. Don't worry about the speed unless someone complains
In your given example, 99.9999% of the compilers out there will generate the same code for both cases. Which illustrates my general rule - write for readability and maintainability first, and optimize only when you need to.
Readability.
Coding for performance has its own set of challenges. Joseph M. Newcomer said it well:
Optimization matters only when it matters. When it matters, it matters a lot, but until you know that it matters, don't waste a lot of time doing it. Even if you know it matters, you need to know where it matters. Without performance data, you won't know what to optimize, and you'll probably optimize the wrong thing. The result will be obscure, hard to write, hard to debug, and hard to maintain code that doesn't solve your problem. Thus it has the dual disadvantage of (a) increasing software development and software maintenance costs, and (b) having no performance effect at all.
I would go for readability first. Considering the kind of optimized languages and hugely powerful machines we have these days, most of the code we write in a readable way will perform decently.
In some very rare scenarios, where you are pretty sure you are going to have a performance bottleneck (maybe from some past bad experiences), and you manage to find some weird trick which can give you a huge performance advantage, you can go for that. But you should comment that code snippet very well, which will help to make it more readable.
Readability. The time to optimize is when you get to beta testing. Otherwise you never really know what you need to spend the time on.
An often-overlooked factor in this debate is the extra time it takes for a programmer to navigate, understand and modify less readable code. Considering that a programmer's time goes for a hundred dollars an hour or more, this is a very real cost.
Any performance gain is countered by this direct extra cost in development.
Putting a comment there with an explanation would make it readable and fast.
It really depends on the type of project, and how important performance is. If you're building a 3D game, then there are usually a lot of common optimizations that you'll want to throw in there along the way, and there's no reason not to (just don't get too carried away early). But if you're doing something tricky, comment it so anybody looking at it will know how and why you're being tricky.
The answer depends on the context. In device driver programming or game development for example, the second form is an acceptable idiom. In business applications, not so much.
Your best bet is to look around the code (or in similar successful applications) to check how other developers do it.
If you're worried about readability of your code, don't hesitate to add a comment to remind yourself what and why you're doing this.
Using << would be a micro-optimization.
So Hoare's (not Knuth's) rule:
Premature optimization is the root of all evil.
applies and you should just use the more readable version in the first place.
This rule is IMHO often misused as an excuse to design software that can never scale or perform well.
Both. Your code should balance the two: readability and performance. Ignoring either one will hurt the ROI of the project, which at the end of the day is all that matters to your boss.
Bad readability results in decreased maintainability, which results in more resources spent on maintenance, which results in a lower ROI.
Bad performance results in decreased investment and client base, which results in a lower ROI.
Readability is the FIRST target.
In the 1970s the Army tested some of the then-"new" techniques of software development (top-down design, structured programming, chief programmer teams, to name a few) to determine which of these made a statistically significant difference.
The ONLY technique that made a statistically significant difference in development was...
ADDING BLANK LINES to program code.
Improving the readability of that pre-structured, pre-object-oriented code was the only technique in these studies that improved productivity.
==============
Optimization should only be addressed when the entire project is unit tested and ready for instrumentation. You never know WHERE you need to optimize the code.
In their landmark books SOFTWARE TOOLS (1976) and SOFTWARE TOOLS IN PASCAL (1981), Kernighan and Plauger showed ways to create structured programs using top-down design. They created text-processing programs: editors, search tools, code pre-processors.
When the completed text-formatting program was INSTRUMENTED, they discovered that most of the processing time was spent in three routines that performed text input and output. (In the original book, the I/O functions took 89% of the time; in the Pascal book, these functions consumed 55%!)
They were able to optimize those THREE routines and delivered increased performance with reasonable, manageable development time and cost.
The larger the codebase, the more crucial readability is. Trying to understand some tiny function isn't so bad (especially since the method name in the example gives you a clue). It's another matter with some epic piece of uber-code written by the lone genius who just quit coding because he finally saw the peak of his ability's complexity - and it's what he just wrote for you, and you'll never, ever understand it.
As almost everyone said in their answers, I favor readability. 99 out of 100 projects I run have no hard response time requirements, so it's an easy choice.
Before you even start coding, you should already know the answer. Some projects have specific performance requirements, like 'need to be able to run task X in Y (milli)seconds'. If that's the case, you have a goal to work towards and you know when you have to optimize or not. (Hopefully this is determined at the requirements stage of your project, not when writing the code.)
Good readability and the ability to optimize later on are a result of proper software design. If your software is of sound design, you should be able to isolate parts of your software and rewrite them if needed without breaking other parts of the system. Besides, most of the true optimization cases I've encountered (ignoring some real low-level tricks; those are incidental) have been changing from one algorithm to another, or caching data in memory instead of on disk or the network.
If there is no readability, it will be very hard to get performance improvements when you really need them.
Performance should only be improved when it is a problem in your program; there are many other places that could be a bottleneck rather than this syntax. Say you squeeze a 1 ns improvement out of a << but ignore the 10 minutes of IO time.
Also, regarding readability, a professional programmer should be able to read/understand computer science terms. For example, we can name a method enqueue rather than having to say putThisJobInWorkQueue.
The bitshift versus the multiplication is a trivial optimization that gains next to nothing. And, as has been pointed out, your compiler should do that for you. Other than that, the gain is negligible anyhow, whatever CPU the instruction runs on.
On the other hand, if you need to perform serious computation, you will require the right data structures. But if your problem is complex, finding out about that is part of the solution. As an illustration, consider searching for an ID number in an array of 1000000 unsorted objects. Then reconsider using a binary tree or a hash map.
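For illustration (C++; the record type and field names are made up) - a linear scan versus building a hash index once and querying it many times:

#include <string>
#include <unordered_map>
#include <vector>

struct Record { long id; std::string name; };

// O(n) per lookup: fine for small n, painful for a million records.
const Record* findLinear(const std::vector<Record>& v, long id) {
    for (const auto& r : v)
        if (r.id == id) return &r;
    return nullptr;
}

// O(1) average per lookup after an O(n) build. Note: the stored pointers
// are invalidated if the vector reallocates, so build after loading.
std::unordered_map<long, const Record*>
buildIndex(const std::vector<Record>& v) {
    std::unordered_map<long, const Record*> idx;
    idx.reserve(v.size());
    for (const auto& r : v) idx.emplace(r.id, &r);
    return idx;
}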
But optimizations like n << C are usually negligible and trivial to switch to at any point. Making code readable is not.
It depends on the task that needs to be solved. Usually readability is more important, but there are still some tasks where you should think of performance first. And you can't always just spend a day or two on profiling and optimization after everything works perfectly, because the optimization itself may require rewriting a significant part of the code from scratch. But that is not common nowadays.
I'd say go for readability.
But in the given example, I think that the second version is already readable enough, since the name of the function states exactly what is going on in it.
If only we always had functions that told us what they do ...
You should always maximally optimize; performance always counts. The reason we have bloatware today is that most programmers don't want to do the work of optimization.
Having said that, you can always put comments in where slick coding needs clarification.
There is no point in optimizing if you don't know your bottlenecks. You may have made a function incredibly efficient (usually at the expense of readability to some degree) only to find that the portion of code hardly ever runs, or that it's spending more time hitting the disk or database than you'll ever save by twiddling bits.
So you can't micro-optimize until you have something to measure, and then you might as well start off for readability.
However, you should be mindful of both speed and understandability when designing the overall architecture, as both can have a massive impact and be difficult to change (depending on coding style and methodologies).
It is estimated that about 70% of the cost of software is in maintenance. Readability makes a system easier to maintain and therefore brings down cost of the software over its life.
There are cases where performance is more important than readability; that said, they are few and far between.
Before sacrificing readability, think: "Am I (or is my company) prepared to deal with the extra cost I am adding to the system by doing this?"
I don't work at google so I'd go for the evil option. (optimization)
In Chapter 6 of Jon Bentley's "Programming Pearls", he describes how one system achieved a 400-fold speedup by optimizing at six different design levels. I believe that by not caring about performance at these six design levels, modern implementors can easily achieve two to three orders of magnitude of slowdown in their programs.
Readability first. But even more than readability is simplicity, especially in terms of data structure.
I'm reminded of a student doing a vision analysis program, who couldn't understand why it was so slow. He merely followed good programming practice - each pixel was an object, and it worked by sending messages to its neighbors...
Write for readability first, but expect the readers to be programmers. Any programmer worth his or her salt should know the difference between a multiply and a bitshift, be able to read the ternary operator where it is used appropriately, be able to look up and understand a complex algorithm (you are commenting your code, right?), etc.
Early over-optimization is, of course, quite bad at getting you into trouble later on when you need to refactor, but that doesn't really apply to the optimization of individual methods, code blocks, or statements.
How much does an hour of processor time cost?
How much does an hour of programmer time cost?
IMHO the two things have nothing to do with each other. You should first go for code that works, as this is more important than performance or how well it reads. Regarding readability: your code should always be readable in any case.
However I fail to see why code can't be readable and offer good performance at the same time. In your example, the second version is as readable as the first one to me. What is less readable about it? If a programmer doesn't know that shifting left is the same as multiplying by a power of two and shifting right is the same as dividing by a power of two... well, then you have much more basic problems than general readability.

Performance vs Readability

Reading this question, I found this (note the quotation marks) "code" to solve the problem (that's Perl, by the way).
100,{)..3%!'Fizz'*\5%!'Buzz'*+\or}%n*
Obviously this is an intellectual example without real implications (I hope to never see that in real code in my life). But, when you have to make the choice, when do you sacrifice code readability for performance? Do you just apply common sense? Do you only do it as a last resort? What are your strategies?
Edit: I'm sorry; seeing the answers, I might have expressed the question badly (English is not my native language). I don't mean performance vs. readability only after you've written the code; I'm asking about before you write it as well. Sometimes you can foresee a future performance improvement by choosing a more obscure design, or by adding properties that will make your class harder to follow. You may decide to use multiple threads, or just a single one, because you expect the scalability that threads may give you, even when that will make the code much more difficult to understand.
My process for situations where I think performance may be an issue:
Make it work.
Make it clear.
Test the performance.
If there are meaningful performance issues: refactor for speed.
Note that this does not apply to higher-level design decisions that are more difficult to change at a later stage.
I always start with the most readable version I can think of. If performance is a problem, I refactor. If the readable version makes it hard to generalize, I refactor.
The key is to have good tests so that refactoring is easy.
I view readability as the #1 most important issue in code, though working correctly is a close second.
Readability is most important. With modern computers, only the most intensive routines of the most demanding applications need to worry too much about performance.
My favorite answer to this question is:
Make it work
Make it right
Make it fast
In the scope of things no one gives a crap about readability except the next unlucky fool that has to take care of your code. However, that being said... if you're serious about your art, and this is an art form, you will always strive to make your code the most performant it can be while still being readable by others. My friend and mentor (who is a BADASS in every way) once graciously told me on a code review that "the fool writes code only they can understand, the genius writes code that anyone can understand." I'm not sure where he got that from, but it has stuck with me.
Reference
Programs must be written for people to read, and only incidentally for machines to execute. — Abelson & Sussman, SICP
Well written programs are probably easier to profile and hence improve performance.
You should always go for readability first. The shape of a system will typically evolve as you develop it, and the real performance bottlenecks will be unexpected. Only when you have the system running and can see real evidence - as provided by a profiler or other such tool - will the best way to optimise be revealed.
"If you're in a hurry, take the long way round."
Agree with all the above, but also:
when you decide that you want to optimize:
Fix algorithmic aspects before syntax (for example don't do lookups in large arrays)
Make sure that you prove that your change really did improve things, measure everything
Comment your optimization so the next guy seeing that function doesn't simplify it back to where you started from
Can you precompute results or move the computation to where it can be done more effectively (like a db)
In effect, keep readability as long as you can - finding an obscure bug in optimized code is much harder and more annoying than in simple, obvious code.
I apply common sense - this sort of thing is just one of the zillion trade-offs that engineering entails, and has few special characteristics that I can see.
But to be more specific, the overwhelming majority of people doing weird unreadable things in the name of performance are doing them prematurely and without measurement.
Choose readability over performance unless you can prove that you need the performance.
I would say that you should only sacrifice readability for performance if there's a proven performance problem that's significant. Of course "significant" is the catch there, and what's significant and what isn't should be specific to the code you're working on.
"Premature optimization is the root of all evil." - Donald Knuth
Readability always wins. Always. Except when it doesn't. And that should be very rarely.
At times when optimization is necessary, I'd rather sacrifice compactness and keep the performance enhancement. Perl obviously has some deep waters to plumb in search of the conciseness/performance ratio, but as cute as it is to write one-liners, the person who comes along to maintain your code (who, in my experience, is usually myself six months later) might prefer something more in the expanded style, as documented here:
http://www.perl.com/pub/a/2004/01/16/regexps.html
There are exceptions to the premature optimization rule. For example, when accessing an image in memory, reading a pixel should not be an out-of-line function. And when providing for custom operations on the image, never do it like this:
typedef Pixel PixelModifierFunction(Pixel);  /* a callback invoked once per pixel */
void ModifyAllPixels(PixelModifierFunction); /* forces an indirect call for every single pixel */
Instead, let external functions access the pixels in memory, though it's uglier. Otherwise, you are sure to write slow code that you'll have to refactor later anyway, so you're doing extra work.
At least, that's true if you know you're going to deal with large images.
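A C++ sketch of the "uglier but fast" direct-access alternative (the one-byte Pixel and the struct layout are illustrative):

#include <cstdint>
#include <vector>

// Expose the raw buffer so callers loop over pixels directly;
// the per-pixel indirect call disappears entirely.
struct Image {
    int width = 0, height = 0;
    std::vector<std::uint8_t> pixels;    // row-major, one byte per pixel

    std::uint8_t*       data()       { return pixels.data(); }
    const std::uint8_t* data() const { return pixels.data(); }
};

// An external operation: one tight loop, no function call per pixel.
void invert(Image& img) {
    std::uint8_t* p = img.data();
    for (int i = 0, n = img.width * img.height; i < n; ++i)
        p[i] = static_cast<std::uint8_t>(255 - p[i]);
}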
