In a recent conversation with a fellow programmer, I asserted that "if you're writing the same code more than once, it's probably a good idea to refactor that functionality such that it can be called once from each of those places."
My fellow programmer buddy instead insisted that the performance impact of making these function calls was not acceptable.
Now, I'm not looking for validation of who was right. I'm simply curious to know if there are situations or patterns where I should consider the performance impact of a function call before refactoring.
"My fellow programmer buddy instead insisted that the performance impact of making these function calls was not acceptable."
...to which the proper answer is "Prove it."
The old saw about premature optimization applies here. Anyone who isn't familiar with it needs to be educated before they do any more harm.
IMHO, if you don't have the attitude that you'd rather spend a couple of hours writing a routine that can be called from both places than 10 seconds cutting and pasting code, you don't deserve to call yourself a coder.
Don't even consider the effect of calling overhead if the code isn't in a loop that's being called millions of times, in an area where the user is likely to notice the difference. Once you've met those conditions, go ahead and profile to see if your worries are justified.
Modern compilers of languages such as Java will inline certain function calls anyway. My opinion is that the design matters far more than the few instructions spent on a function call. The only situation I can think of would be writing some really finely tuned code in assembler.
You need to ask yourself several questions:
Cost of time spent on optimizing code vs cost of throwing more hardware at it.
How does this impact maintainability?
How does going in either direction impact your deadline?
Does this really call for optimization when many modern compilers will do it for you anyway? Do not try to outsmart the compiler.
And of course, which will help you sleep better at night? :)
My bet is that there was a time in which the performance cost of a call to an external method or function WAS something to be concerned with, in the same way that the lengths of variable names and such all needed to be evaluated with respect to performance implications.
With the monumental increases in processor speed and memory resources in the last two decades, I propose that these concerns are no longer as pertinent as they once were.
We have been able to use long variable names without concern for some time, and the cost of a call to external code is probably negligible in most cases.
There might be exceptions. If you place a function call within a large loop, you may see some impact, depending upon the number of iterations.
I propose that in most cases you will find that refactoring code into discrete function calls will have a negligible impact. There might be occasions in which there IS an impact. However, proper TESTING of the refactoring will reveal this. In that minority of cases, your friend might be correct. For most of the rest of the time, I propose that your friend is clinging a little too closely to practices which pre-date most modern processors and storage media.
You care about function call overhead the same time you care about any other overhead: when your performance profiling tool indicates that it's a problem.
for the c/c++ family:
the 'cost' of the call is not important. if it needs to be fast, you just have to make sure the compiler is able to inline it. that means that:
the body must be visible to the compiler
the body is indeed small enough to be considered an inline candidate.
the method does not require dynamic dispatch
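a minimal sketch of those three conditions (the type and function names here are mine, purely for illustration):

struct Shape {
    virtual double area() const = 0;   // dynamic dispatch: normally cannot be inlined
    virtual ~Shape() = default;
};

// body visible to the caller and tiny: a good inline candidate
inline int clamp01(int x) {
    return x < 0 ? 0 : (x > 1 ? 1 : x);
}

double scaledArea(int flag, const Shape& s) {
    return clamp01(flag) * s.area();   // clamp01 likely inlined; area() stays a real call
}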
there are a few ways to break this default ability. for example:
huge instruction count already at the call site. even with early inlining, the compiler may pop a trivial function out of line (even though that can mean more instructions/slower execution). early inlining is the compiler's ability to inline a function early on, when it sees that the call costs more than inlining the body.
recursion
the inline keyword is more or less useless in this era, regarding its original intent. however, many compilers offer a means to restore the meaning, with a compiler specific directive. using this directive (correctly) helps considerably. learning how to use it correctly takes time. if in doubt, omit the directive and leave it up to the compiler.
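for example, gcc and clang spell that directive as an attribute and msvc as a keyword (shown only as an illustration; check your own compiler's documentation):

// gcc/clang: insist on inlining even when the heuristics would decline
__attribute__((always_inline)) inline int add(int a, int b) { return a + b; }

// msvc spelling of the same idea:
// __forceinline int add(int a, int b) { return a + b; }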
assuming you are using a modern compiler, there is no excuse to avoid the function, unless you're also willing to go down to assembly for this particular program.
as it stands, and if performance is crucial, you really have two choices:
1) learn to write well organized programs for speed. downside: longer compile times
2) maintain a poorly written program
i prefer 1. any day.
(yes, i have spent a lot of time writing performance critical programs)
How small should I be making functions? For example, if I have a cake baking program.
void bakeCake() {
    if (cakeType == "chocolate")
        fetchIngredients("chocolate");
    else if (cakeType == "plain")
        fetchIngredients("plain");
    else if (cakeType == "Red Velvet")
        fetchIngredients("Red Velvet");

    // Rest of program
}
My question is, while this stuff is simple enough on its own, when I add much more stuff to the bakeCake function it becomes cluttered. But let's say that this program has to bake thousands of cakes per second. From what I've heard, it takes significantly longer (relative to computer time) to call another function than to just do the statements in the current function. So something simple like this should be very easy to read, and if efficiency is important, wouldn't I want to keep it in there?
Basically, at what point do I sacrifice readability for efficiency? And a quick bonus question: at what point does having too many functions decrease readability? Here's an example from Apple's Swift tutorial.
func isCandyAmountAcceptable(bandMemberCount: Int, candyCount: Int) -> Bool {
    return candyCount % bandMemberCount == 0
}
They said that because the function name isCandyAmountAcceptable was easier to read than candyCount % bandMemberCount == 0, it'd be good to make a function for that. But from my perspective, while it may take a few seconds to figure out what the second option is saying, it's also more readable when it comes to knowing how it actually works.
Sorry about being all over the place and kinda asking 2 questions in one. Just to summarize my questions:
Does using functions extraneously make efficiency (speed) suffer? If it does how can I figure out what the cutoff between readability and efficiency is?
How small and simple should I make functions? Obviously I'd make them if I ever have to repeat the functionality, but what about one-time-use functions?
Thanks guys, sorry if these questions are ignorant or anything but I'd really appreciate an answer.
Does using functions extraneously make efficiency (speed) suffer? If it does how can I figure out what the cutoff between readability and efficiency is?
For performance I would generally not factor in any overhead of direct function calls against any decent optimizer, since it can even make those come free of charge. When it doesn't, it's still a negligible overhead in, say, 99.9% of scenarios. That applies even for performance-critical fields. I work in areas like raytracing, mesh processing, and image processing and still the cost of a function call is typically on the bottom of the priority list as opposed to, say, locality of reference, efficient data structures, parallelization, and vectorization. Even when you're micro-optimizing, there are much bigger priorities than the cost of a direct function call, and even when you're micro-optimizing, you often want to leave a lot of the optimization for your optimizer to perform instead of trying to fight against it and do it all by hand (unless you're actually writing assembly code).
Of course with some compilers you might deal with ones that never inline function calls and have a bit of an overhead to every function call. But in that case I'd still say it's relatively negligible since you probably shouldn't be worrying about such micro-level optimizations when using those languages and interpreters/compilers. Even then it will probably often be bottom on the priority list, relatively speaking, as opposed to more impactful things like improving locality of reference and thread efficiency.
It's like if you're using a compiler with very simplistic register allocation that has a stack spill for every single variable you use, that doesn't mean you should be trying to use and reuse as few variables as possible to work around its tendencies. It means reach for a new compiler in those cases where that's a non-negligible overhead (ex: write some C code into a dylib and use that for the most performance-critical parts), or focus on higher-level optimizations like making everything run in parallel.
How small and simple should I make functions? Obviously I'd make them if I ever have to repeat the functionality, but what about one-time-use functions?
This is where I'm going to go slightly off-kilter and actually suggest you consider avoiding the teeniest of functions for maintainability reasons. This is admittedly a controversial opinion although at least John Carmack seems to somewhat agree (specifically in respect to inlining code and avoiding excess function calls for cases where side effects occur to make the side effects easier to comprehend).
However, if you are going to make a lot of state changes, having them all happen inline does have advantages; you should be made constantly aware of the full horror of what you are doing.
The reason I believe it can sometimes be good to err on the side of meatier functions is that understanding all the information necessary to make a change or fix a problem often requires comprehending more than one simple little function.
Which is simpler to comprehend, a function whose logic consists of 80 lines of inlined code, or one distributed across a couple dozen functions and possibly ones that lead to disparate places throughout the codebase?
The answer is not so clear cut. Naturally if the teeny functions are used widely, like say sqrt or abs, then the reader can simply skim over the function call, knowing full well what it does like the back of his hand. But if there are a lot of teeny exotic functions that are only used one time, then the ability to comprehend the operation as a whole requires looking them up and understanding what they all individually do before you can get a proper comprehension of what's going on in terms of the big picture.
I actually disagree somewhat with that Apple Swift tutorial's one-liner function, because while it is easier to understand than figuring out what the arithmetic and comparison are supposed to do, in exchange it might require looking the function up in scenarios where you can't just say "isCandyAmountAcceptable is enough information for me" and have to figure out exactly what makes an amount acceptable. Instead I would actually prefer a simple comment:
// Determine if candy amount is acceptable.
if (candyCount % bandMemberCount == 0)
...
... because then you don't have to jump to disparate places in code (the analogy of a book referring its reader to other pages in the book, causing the readers to constantly have to flip back and forth between pages) to figure that out. Of course the idea behind this isCandyAmountAcceptable kind of function is that you shouldn't have to be concerned with such details about what makes a candy amount acceptable, but too often in practice, we do end up having to understand the details more often than we optimally should to debug the code or make changes to it. If the code never needs to be debugged or changed, then it doesn't really matter how it's written. It could even be written in binary code for all we care. But if it's written to be maintained, as in debugged and changed in the future, then sometimes it is helpful to avoid making the readers have to jump through lots of hoops. The details do often matter in those scenarios.
So sometimes it doesn't help to understand the big picture by fragmenting it into the teeniest of puzzle pieces. It's a balancing act, but certain types of developers can err on the side of overly dicing up their systems into the most granular bits and pieces and finding maintenance problems that way. Those types are still often promising engineers -- they just have to find their balance. The other extreme is the one that writes 500-line functions and doesn't even consider refactoring -- those are kinda hopeless. But I think you fit in the former category, and for you, I'd actually suggest erring on the side of meatier functions ever-so-slightly just to keep the puzzle pieces a healthy size (not too small, not too big).
There's even a balancing act I see between code duplication and minimizing dependencies. An image library doesn't necessarily become easier to comprehend by shaving off a few dozen lines of duplicated math code if the exchange is a dependency to a complex math library with 800,000 lines of code and an epic manual on how to use it. In such cases, the image library might very well be easier to comprehend as well as use and deploy in new projects if it chooses instead to duplicate a few math functions here and there to avoid external dependencies, isolating its complexity instead of distributing it elsewhere.
Basically, at what point do I sacrifice readability for efficiency?
As stated above, I don't think readability of the small picture and comprehensibility of the big picture are synonymous. It can be really easy to read a two-line function and know what it does and still be miles away from understanding what you need to understand to make the necessary changes. Having many of those teeny one-shot two-liners can even delay the ability to comprehend the big picture.
But if I use "comprehensibility vs. efficiency" instead, I'd say upfront at the design-level for cases where you anticipate processing huge inputs. As an example, a video processing application with custom filters knows it's going to be looping over millions of pixels many times per frame. That knowledge should be utilized to come up with an efficient design for looping over millions of pixels repeatedly. But that's with respect to design -- towards the central aspects of the system that many other places will depend upon because big central design changes are too costly to apply late in hindsight.
That doesn't mean it has to start applying hard-to-understand SIMD code right off the bat. That's an implementation detail provided the design leaves enough breathing room to explore such an optimization in hindsight. Such a design would imply abstracting at the Image level, at the level of a million+ pixels, not at the level of a single IPixel. That's the worthy thing to take into consideration upfront.
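As a rough sketch of the difference (the type names here are invented for illustration), abstracting at the image level means the hot loop lives inside one bulk operation instead of behind a per-pixel call:

#include <cstdint>
#include <vector>

struct Image {
    int width = 0, height = 0;
    std::vector<uint8_t> pixels;   // contiguous storage, friendly to the cache

    // One call per frame; the loop over millions of pixels stays inside,
    // where it can later be parallelized or vectorized without changing callers.
    void brighten(uint8_t amount) {
        for (uint8_t& p : pixels) {
            int v = p + amount;
            p = static_cast<uint8_t>(v > 255 ? 255 : v);
        }
    }
};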
Then later on, you can optimize hotspots and potentially use some difficult-to-understand algorithms and micro-optimizations here and there for those truly critical cases where there's a strong perceived business need for the operation to go faster, and hopefully with good tools (profilers, i.e.) in hand. The user cases guide you about what operations to optimize based on what the users do most often and find a strong desire to spend less time waiting. The profiler guides you about precisely what parts of the code involved in that operation need to be optimized.
Readability, performance and maintainability are three different things. Readability will make your code look simple and understandable, but on its own it is not necessarily the best way to go. Performance is always going to be important, unless you are running the code in a non-production environment where the end result is more important than how it was achieved. Enter the world of enterprise applications, and maintainability suddenly gains a lot more importance. What you work on today will be handed over to somebody else after 6 months and they will be fixing/changing your code. This is why standard design patterns suddenly become so important. In a way, readability is part of maintainability on a larger scale. If the cake baking program above is something more complex than it looks, the first thing that stands out as a code smell is the existence of the if-else chain. It's got to be replaced with polymorphism. The same goes for switch-case kinds of constructs.
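For illustration only (the class names below are invented, not from the original question), the cake example's if-else chain could be pushed behind polymorphism roughly like this:

#include <memory>
#include <string>

struct Cake {
    virtual ~Cake() = default;
    virtual void fetchIngredients() = 0;   // each cake type knows its own ingredients
};

struct ChocolateCake : Cake { void fetchIngredients() override { /* ... */ } };
struct PlainCake     : Cake { void fetchIngredients() override { /* ... */ } };
struct RedVelvetCake : Cake { void fetchIngredients() override { /* ... */ } };

// The factory is the single place that still knows the concrete types;
// bakeCake() itself no longer changes when a new cake type is added.
std::unique_ptr<Cake> makeCake(const std::string& type) {
    if (type == "chocolate")  return std::make_unique<ChocolateCake>();
    if (type == "plain")      return std::make_unique<PlainCake>();
    if (type == "Red Velvet") return std::make_unique<RedVelvetCake>();
    return nullptr;
}

void bakeCake(const std::string& type) {
    if (auto cake = makeCake(type))
        cake->fetchIngredients();
    // Rest of program
}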
At what point do you decide to sacrifice one for the other? That purely depends upon what business purpose your code serves. Is it academic? Then it's got to be the perfect solution, even if it means 90% of devs struggle to figure out at first glance what the hell is happening. Is it a website belonging to a retail store, maintained by a distributed team of 50 devs working from two or more geographic locations? Follow the conventional design patterns.
A rule of thumb I have always seen followed in almost all situations is that if a function is growing beyond half the screen, it's a candidate for refactoring. Do you have functions so long that your editor grows long scroll bars? Refactor!!!
I'm working on a large performance-critical project that is very branch heavy. In the process of designing algorithms for this product, my employer often reminds me to write code that is more "human logical", or written in a manner that more closely aligns with the way we logically think.
While this makes sense to me from a few different perspectives (e.g. ease of understanding/remembering, code maintenance, etc.), I'm also wondering whether this approach could ever be expected to lead to more optimized compiled output.
Could this be the case due to the fact that compilers are written by humans, and optimizers are often designed to recognize familiar code blocks?
I would love to hear some thoughts on why this could or could not be the case.
Consider two different kinds of code, library code and application code.
Library code (like a string class library) is likely to own the program counter a lot of the time, like this:
while (some test) {
    massage some data, while seldom calling sub-functions
}
That kind of code will benefit from compiler optimization.
(So to answer your question, people write benchmark functions like this, and the compiler-writers use those as test cases.)
On the other hand, application code tends to look like this:
if (some test) {
    do a bunch of things, including many function calls
} else if (some other test) {
    do a bunch of things, including many function calls
} else {
    do a bunch of things, including many function calls
}
In this case, the time you save by branch prediction or cycle-shaving might be 1 time unit, say, while the do a bunch of things... might spend from 10^2 to 10^8 time units, with or without I/O.
So the benefit of compiler optimization of this code tends to be completely lost in the noise.
That's not to say it can't be optimized.
It's just that the compiler can't do it - it's your job.
If you want to make the latter kind of code run fast, the best way is to find out which lines of code are on the call stack a high percent of time, and if possible, finding a way to avoid doing them.
(Here's an example of a 43x speedup.)
What is "human logical" probably varies from human to human.
For instance, if I am a newbie performing tasks according to written instructions I will (usually), over time, learn some tasks by heart, whereas for others I will return to the instructions simply because the tasks are not performed often enough, are too boring, or both. Others in the same situation may or may not function similarly, and it is not certain that the tasks they'll learn will be the ones I learn.
For programming it works similarly. Some may construct a loop in one manner and perform a test inside it for the sake of readability while I might do the test outside for performance reasons. What is more wrong and what is more right?
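A trivial sketch of that trade-off (the functions below are made up to keep the example self-contained):

#include <iostream>
#include <vector>

// Readable version: the invariant test sits inside the loop and is
// re-evaluated on every iteration.
int sumReadable(const std::vector<int>& items, bool verbose) {
    int total = 0;
    for (int x : items) {
        if (verbose)
            std::cout << x << '\n';
        total += x;
    }
    return total;
}

// Hand-hoisted version: the test happens once, at the cost of duplicating the loop.
int sumHoisted(const std::vector<int>& items, bool verbose) {
    int total = 0;
    if (verbose)
        for (int x : items) { std::cout << x << '\n'; total += x; }
    else
        for (int x : items) total += x;
    return total;
}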
There is a widespread belief that compilers will optimize anything. This is true, but as I've written (drastically) in another post, GIGO (Garbage In = Garbage Out) applies. Compilers don't operate in a vacuum: given a set of rules they'll perform safe optimizations on source code to the extent of their authors' imagination and competence in code optimization. Bloated source code will become optimized bloated machine code. In the same manner, lean and mean source code will become optimized lean and mean machine code. In critical places it is possible to feed the compiler source code that it "feels" (YES! they do have personalities) absolutely comfortable optimizing, and the resulting machine code will fly.
We've all experienced poorly performing software. If we're lucky we've experienced software that performs incredibly well. One developer can learn to write a piece of code that performs well in the same amount of time that another writes code that performs poorly.
I am currently working for a client who are petrified of changing lousy un-testable and un-maintainable code because of "performance reasons". It is clear that there are many misconceptions running rife and reasons are not understood, but merely followed with blind faith.
One such anti-pattern I have come across is the need to mark as many classes as possible as sealed internal...
*RE-Edit: I see marking everything as sealed internal (in C#) as a premature optimisation.*
I am wondering what are some of the other performance anti-patterns people may be aware of or come across?
The biggest performance anti-pattern I have come across is:
Not measuring performance before and after the changes.
Collecting performance data will show if a certain technique was successful or not. Not doing so will result in pretty useless activities, because someone has the "feeling" of increased performance when nothing at all has changed.
The elephant in the room: Focusing on implementation-level micro-optimization instead of on better algorithms.
Variable re-use.
I used to do this all the time figuring I was saving a few cycles on the declaration and lowering memory footprint. These savings were of minuscule value compared with how unruly it made the code to debug, especially if I ended up moving a code block around and the assumptions about starting values changed.
Premature performance optimization comes to mind. I tend to avoid performance optimizations at all costs, and when I decide I do need them I pass the issue around to my colleagues for several rounds, trying to make sure we put the obfu... eh, optimization in the right place.
One that I've run into was throwing hardware at seriously broken code, in an attempt to make it fast enough, sort of the converse of Jeff Atwood's article mentioned in Rulas' comment. I'm not talking about the difference between speeding up a sort that uses a basic, correct algorithm by running it on faster hardware vs. using an optimized algorithm. I'm talking about using a not-obviously-correct home-brewed O(n^3) algorithm when an O(n log n) algorithm is in the standard library. There are also things like hand-coding routines because the programmer doesn't know what's in the standard library. That one's very frustrating.
Using design patterns just to have them used.
Using #defines instead of functions to avoid the penalty of a function call.
I've seen code where expansions of defines turned out to generate huge and really slow code. Of course it was impossible to debug as well. Inline functions are the way to do this, but they should be used with care as well.
I've seen code where independent tests have been converted into bits in a word that can be used in a switch statement. Switch can be really fast, but when people turn a series of independent tests into a bitmask and start writing some 256 optimized special cases, they'd better have a very good benchmark proving that this gives a performance gain. It's really a pain from a maintenance point of view, and treating the different tests independently makes the code much smaller, which is also important for performance.
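A classic illustration of why such defines bite (hypothetical names, just to show the double-evaluation trap):

#include <iostream>

#define SQUARE(x) ((x) * (x))                 // macro: pure textual expansion

inline int square(int x) { return x * x; }    // inline function: normal call semantics

int main() {
    int i = 3;
    // Expands to ((i++) * (i++)): i is modified twice without sequencing,
    // which is undefined behavior, and it certainly isn't "3 squared".
    std::cout << SQUARE(i++) << '\n';

    int j = 3;
    std::cout << square(j++) << '\n';          // prints 9; j is incremented exactly once
    return 0;
}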
Lack of clear program structure is the biggest code-sin of them all. Convoluted logic that is believed to be fast almost never is.
Do not refactor or optimize while writing your code. It is extremely important not to try to optimize your code before you finish it.
Julian Birch once told me:
"Yes but how many years of running the application does it actually take to make up for the time spent by developers doing it?"
He was referring to the cumulative amount of time saved during each transaction by an optimisation that would take a given amount of time to implement.
Wise words from the old sage... I often think of this advice when considering doing a funky optimisation. You can extend the same notion a little further by considering how much developer time is being spent dealing with the code in its present state versus how much time is saved by the users. You could even weight the time by hourly rate of the developer versus the user if you wanted.
Of course, sometimes it's impossible to measure. For example, if an e-commerce application takes 1 second longer to respond, you will lose some small percentage of revenue from users getting bored during that 1 second. To make up that one second you need to implement and maintain optimised code. The optimisation impacts gross profit positively and net profit negatively, so it's much harder to balance. You could try - with good stats.
Exploiting your programming language. Things like using exception handling instead of if/else just because in PLSnakish 1.4 it's faster. Guess what? Chances are it's not faster at all and that two years from now someone maintaining your code will get really angry with you because you obfuscated the code and made it run much slower, because in PLSnakish 1.8 the language maintainers fixed the problem and now if/else is 10 times faster than using exception handling tricks. Work with your programming language and framework!
Changing more than one variable at a time. This drives me absolutely bonkers! How can you determine the impact of a change on a system when more than one thing's been changed?
Related to this, making changes that are not warranted by observations. Why add faster/more CPUs if the process isn't CPU bound?
General solutions.
Just because a given pattern/technology performs better in one circumstance does not mean it does in another.
StringBuilder overuse in .Net is a frequent example of this one.
Once I had a former client call me asking for any advice I had on speeding up their apps.
He seemed to expect me to say things like "check X, then check Y, then check Z", in other words, to provide expert guesses.
I replied that you have to diagnose the problem. My guesses might be wrong less often than someone else's, but they would still be wrong, and therefore disappointing.
I don't think he understood.
Some developers believe a fast-but-incorrect solution is sometimes preferable to a slow-but-correct one. So they will ignore various boundary conditions or situations that "will never happen" or "won't matter" in production.
This is never a good idea. Solutions always need to be "correct".
You may need to adjust your definition of "correct" depending upon the situation. What is important is that you know/define exactly what you want the result to be for any condition, and that the code gives those results.
Michael A Jackson gives two rules for optimizing performance:
Don't do it.
(experts only) Don't do it yet.
If people are worried about performance, tell 'em to make it real - what is good performance and how do you test for it? Then if your code doesn't perform up to their standards, at least it's something the code writer and the application user agree on.
If people are worried about non-performance costs of rewriting ossified code (for example, the time sink) then present your estimates and demonstrate that it can be done in the schedule. Assuming it can.
I believe it is a common myth that super lean code "close to the metal" is more performant than an elegant domain model.
This was apparently debunked by the creator/lead developer of DirectX, who rewrote the C++ version in C# with massive improvements. [source required]
Appending to an array using (for example) push_back() in C++ STL, ~= in D, etc. when you know how big the array is supposed to be ahead of time and can pre-allocate it.
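For instance, a minimal sketch of the preferred form when the size is known up front:

#include <vector>

std::vector<int> squaresUpTo(int n) {
    std::vector<int> result;
    result.reserve(n);               // one allocation instead of repeated growth
    for (int i = 0; i < n; ++i)
        result.push_back(i * i);     // no reallocation or copying along the way
    return result;
}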
With CPUs getting ever faster, hard disks spinning faster, bits flying around more quickly, and network speeds increasing as well, it's not as simple to tell bad code from good code as it used to be.
I remember a time when you could optimize a piece of code and undeniably perceive an improvement in performance. Those days are almost over. Instead, I guess we now have a set of rules that we follow like "Don't declare variables inside loops" etc. It's great to adhere to these so that you write good code by default. But how do you know it can't be improved even further without some tool?
Some may argue that a couple of nanoseconds won't really make that big a difference these days. The truth is, we are stuck with so many layers that you get a staggering effect.
I'm not saying we should optimize every little millisecond out of our code as that will be expensive and unfeasible. I believe we have to do our best, given our time constraints, to write efficient code as well.
I'm just interested to know what tools you use to profile and measure performance of code, if at all.
I think that optimization should be thought of not as looking at each line of code, but rather as asking what the asymptotic complexity of your algorithm is. For example, a bubble sort is probably one of the worst sorting algorithms you could use in terms of performance; it takes the longest. Quicksort and mergesort are faster at sorting and should always be used in preference to a bubble sort.
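For example, rather than hand-rolling an O(n^2) bubble sort, reach for the library's O(n log n) sort (a small sketch):

#include <algorithm>
#include <vector>

void sortScores(std::vector<int>& scores) {
    // std::sort is O(n log n) on average and heavily tuned;
    // a hand-written bubble sort would be O(n^2).
    std::sort(scores.begin(), scores.end());
}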
If you keep optimization always in your mind when designing a solution to a problem, then you should be able to write readable code, which other developers will approve of. Also, if you are programming in a higher level language that will be compiled before it is run, remember that compilers make some awesome optimizations nowadays that you or I may not think of, and also (more importantly) do not have to worry about.
Stick with a good, low big-O and it should be optimized pretty well. If you are working with millions of items or more in some type of dataset, then look for an O(log n) algorithm. They work great for large tasks and keep your code optimized.
Let the compilers work on the line by line code optimizations so you can focus on the solutions.
There are times that do warrant line-by-line optimizations, and if it is the case that you need that much speed, you might want to look into assembly so that you can control every line that is written.
There's a big difference between "good" code and "fast" code. They aren't exactly separate from each other either, but "fast" code doesn't mean "good". Often times, "fast" actually means bad code because readability compromises must be made to make it fast.
The way I look at it, hardware is cheap, programmers are expensive. Unless there is a serious performance problem with some piece of code, you should never have to worry about speed. If there are performance problems, you'll notice them. Only when you notice the performance problem on good hardware should you have to worry about optimization (in my opinion)
If you reach the point where your code is slow, but you can't figure out why, I'd use a profiler like ANT, or dotTrace if you're in the .NET world (I'm sure there are others out there for other platforms & languages). They're pretty useful, but I've only ever had one situation where I needed a profiler to identify the problem. It was something that now that I know the issue, I won't need a profiler again to tell me it's a problem because I'll never forget the amount of time I spent trying to optimize it.
This is absolutely a valid concern, but not for most developers. Most developers are concerned with getting a product that works to their employer. Optimized code is seldom a requirement.
The best way to make sure your code is fast is to benchmark or profile it. A lot of compiler optimizations create non-intuitive oddities in the performance of a programmer's code, so in the end measurement becomes essential.
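A minimal timing harness of the kind I mean, written here in C++ purely for illustration (a real profiler will of course give far more detail than a single stopwatch):

#include <chrono>
#include <iostream>

int main() {
    using clock = std::chrono::steady_clock;

    auto start = clock::now();
    long long total = 0;
    for (int i = 0; i < 10000000; ++i)   // stand-in for the code under test
        total += i % 7;
    auto stop = clock::now();

    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(stop - start);
    std::cout << "result " << total << " took " << ms.count() << " ms\n";
    return 0;
}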
In my experience, Rational Quantify has given me the best results in terms of code tuning. It is not free, but it is very fully featured and seems to have given me the most useful results.
In terms of free tools, check out gprof or oprofile, if you are on a Unix environment. They are not as good as some of the commercial tools, but can often point you in the right direction.
On a side note, I am almost always surprised at what profilers turn up the first time I use them. You can have intuition as to where code may be bottlenecking, and it can often be completely wrong.
Almost all code I write is plenty fast enough. On the rare occasions when it isn't, for C, C++, and Objective Caml I use the venerable gprof and the excellent valgrind with its superb visualizer kcachegrind (part of the KDE SDK; don't be fooled by the out-of-date code on sourceforge).
The MLton Standard ML compiler and the Glasgow Haskell Compiler both ship with excellent profilers.
I wish there were a better profiler for Lua.
Uh, a profiler maybe? There are ones available for almost all platforms and languages.
Reading this question I found this as (note the quotation marks) "code" to solve the problem (that's perl by the way).
100,{)..3%!'Fizz'*\5%!'Buzz'*+\or}%n*
Obviously this is an intellectual example without real (I hope to never see that in real code in my life) implications but, when you have to make the choice, when do you sacrifice code readability for performance? Do you apply just common sense, do you do it always as a last resort? What are your strategies?
Edit: I'm sorry, seeing the answers I might have expressed the question badly (English is not my native language). I don't mean performance vs readability only after you've written the code; I'm asking about before you write it as well. Sometimes you can foresee a future performance improvement by choosing a more obscure design or adding properties that will make your class harder to understand. You may decide you will use multiple threads or just a single one because you expect the scalability that such threads may give you, even when that will make the code much more difficult to understand.
My process for situations where I think performance may be an issue:
Make it work.
Make it clear.
Test the performance.
If there are meaningful performance issues: refactor for speed.
Note that this does not apply to higher-level design decisions that are more difficult to change at a later stage.
I always start with the most readable version I can think of. If performance is a problem, I refactor. If the readable version makes it hard to generalize, I refactor.
The key is to have good tests so that refactoring is easy.
I view readability as the #1 most important issue in code, though working correctly is a close second.
Readability is most important. With modern computers, only the most intensive routines of the most demanding applications need to worry too much about performance.
My favorite answer to this question is:
Make it work
Make it right
Make it fast
In the scope of things no one gives a crap about readability except the next unlucky fool that has to take care of your code. However, that being said... if you're serious about your art, and this is an art form, you will always strive to make your code the most performant it can be while still being readable by others. My friend and mentor (who is a BADASS in every way) once graciously told me in a code review that "the fool writes code only they can understand; the genius writes code that anyone can understand." I'm not sure where he got that from, but it has stuck with me.
Reference
Programs must be written for people to read, and only incidentally for machines to execute. — Abelson & Sussman, SICP
Well written programs are probably easier to profile and hence improve performance.
You should always go for readability first. The shape of a system will typically evolve as you develop it, and the real performance bottlenecks will be unexpected. Only when you have the system running and can see real evidence - as provided by a profiler or other such tool - will the best way to optimise be revealed.
"If you're in a hurry, take the long way round."
agree with all the above, but also:
when you decide that you want to optimize:
Fix algorithmic aspects before syntax (for example don't do lookups in large arrays)
Make sure that you prove that your change really did improve things, measure everything
Comment your optimization so the next guy seeing that function doesn't simplify it back to where you started from
Can you precompute results or move the computation to where it can be done more effectively (like a db)
in effect, keep readability as long as you can - finding an obscure bug in optimized code is much harder and more annoying than in simple, obvious code
I apply common sense - this sort of thing is just one of the zillion trade-offs that engineering entails, and has few special characteristics that I can see.
But to be more specific, the overwhelming majority of people doing weird unreadable things in the name of performance are doing them prematurely and without measurement.
Choose readability over performance unless you can prove that you need the performance.
I would say that you should only sacrifice readability for performance if there's a proven performance problem that's significant. Of course "significant" is the catch there, and what's significant and what isn't should be specific to the code you're working on.
"Premature optimization is the root of all evil." - Donald Knuth
Readability always wins. Always. Except when it doesn't. And that should be very rarely.
at times when optimization is necessary, i'd rather sacrifice compactness and keep the performance enhancement. perl obviously has some deep waters to plumb in search of the conciseness/performance ratio, but as cute as it is to write one-liners, the person who comes along to maintain your code (who in my experience, is usually myself 6 months later) might prefer something more in the expanded style, as documented here:
http://www.perl.com/pub/a/2004/01/16/regexps.html
There are exceptions to the premature optimization rule. For example, when accessing an image in memory, reading a pixel should not be an out-of-line function. And when providing for custom operations on the image, never do it like this:
typedef Pixel PixelModifierFunction(Pixel);
void ModifyAllPixels(PixelModifierFunction);
Instead, let external functions access the pixels in memory, though it's uglier. Otherwise, you are sure to write slow code that you'll have to refactor later anyway, so you're doing extra work.
At least, that's true if you know you're going to deal with large images.
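A rough sketch of the alternative being described, with types invented here for illustration: the library exposes the pixel buffer, and each custom operation owns its own loop instead of being called back once per pixel.

#include <cstddef>
#include <cstdint>

struct ImageView {
    uint8_t*    data;                    // raw pixel buffer
    std::size_t width, height, stride;   // stride = bytes between the start of rows
};

// The custom operation touches memory directly and owns the hot loop,
// rather than the library invoking a modifier function per pixel.
void invert(ImageView img) {
    for (std::size_t y = 0; y < img.height; ++y) {
        uint8_t* row = img.data + y * img.stride;
        for (std::size_t x = 0; x < img.width; ++x)
            row[x] = static_cast<uint8_t>(255 - row[x]);
    }
}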