I've read somewhere in the comments on one of the questions here on Stack Overflow that:
Always start coding unoptimized.
If it meets the requirements then it's good,
else code an optimized version.
Check whether the optimized version meets the requirements. If it does, keep it, but also keep the unoptimized version (or paste it in as a comment).
If the optimized version doesn't meet the requirements, delete it and stick with the unoptimized one.
Is there a term for this kind of programming? Is it a good or bad programming practice?
Is optimization dangerous? The only reason I can think of is that it can create unnecessary complexity which can lead to errors. Is there anything else?
Is there a general rule to be followed about when one should optimize or not?
Optimising code takes time that developers could instead use to add new features or polish their product. Since the end goal of development is not the code but the product built with it, time spent on optimisation should be balanced against the other uses that time could be put to.
It's a waste when the effort is spent on code that does not end up in the product due to a change in the requirements. If optimisation is performed from the beginning, you may also spend lots of time optimising a part of the code that only marginally contributes to the overall running time of the application.
Instead, you should probably wait until you have a clear vision of what the application is and where the bottlenecks are before spending too much effort on optimisation. By then you'll have a large suite of unit tests and use cases that let you optimise with confidence that you won't break the application, and profiling will let you spend your effort only on the parts that are really worth optimising.
As always in engineering, optimisation is a tradeoff. If you mind your resources (time, money, ...), you should be sure it is going to pay off before doing it.
In general, optimized code is more complex and more difficult to get correct. It's also often counterproductive to optimize code early (simply because you may be spending time optimizing something that doesn't provide any real improvement in overall performance).
So the guidance you're asking about really boils down to:
write code that is easy to write and to verify as correct
optimize that code when it makes sense to expend the effort
No matter how fast it runs, incorrect code is not optimized code.
Always profile before optimizing. If a small amount of code takes up a majority of the execution time and you can prove this from your profiling results, consider the programming effort to write, test, reprofile, maintain, and have someone else inherit this added complexity. Once you've done this, revert your code to how it was before you optimized it for runtime and de-optimized it for readability. Just don't do it. Seriously, unless over 90% of your execution time is spent in one function, it's not worth the effort.
Keep in mind that a speedup of 10x on code that consumes 90% of your runtime will decrease your total runtime by a factor of only ~5. A speedup of infinity on that slow function still only speeds up your entire program by a factor of 10. If you're hoping for more than an order of magnitude of speed improvement (which is my threshold for whether I may start to think about optimizing), you will need to change how you approach the problem, and that kind of change means rethinking the architecture of the program. If you're lucky, it may be as simple as replacing your queue with a priority queue. Most likely you won't be lucky. Sorry, the answer is bleak.
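For reference, that arithmetic is just Amdahl's law. Here is a minimal sketch of the calculation in C (my own illustration, not part of the original answer; the function name amdahl is made up):

#include <stdio.h>

/* Overall speedup per Amdahl's law: p is the fraction of runtime affected,
   s is the speedup applied to that fraction. */
static double amdahl(double p, double s) {
    return 1.0 / ((1.0 - p) + p / s);
}

int main(void) {
    printf("10x on 90%% of runtime: %.2fx overall\n", amdahl(0.9, 10.0));        /* ~5.26 */
    printf("huge speedup on that 90%%: %.2fx overall\n", amdahl(0.9, 1e12));     /* ~10 */
    return 0;
}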
If optimization (by your compiler) is breaking your code when you believe it should not, then either
your code is not following the language standard, or
your compiler is broken, and you should upgrade it.
Language standards are quite complex to understand (in particular because not everything is specified, and some things are explicitly left unspecified or implementation-specific). Read about undefined behavior.
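As a hedged illustration of why this matters (my own toy snippet, not from the answer): signed integer overflow is undefined behaviour in C, so an optimizing compiler is allowed to assume it never happens, and code that relies on wrap-around can behave differently once optimizations are enabled.

#include <stdio.h>

int main(void) {
    /* Undefined behaviour: i += i eventually overflows a signed int.
       At -O0 this loop typically stops when i wraps to a negative value;
       at -O2 the compiler may assume signed overflow cannot happen,
       conclude i > 0 is always true, and produce an infinite loop.
       Neither compiler is wrong -- the program is. */
    int steps = 0;
    for (int i = 1; i > 0; i += i)
        steps++;
    printf("%d\n", steps);
    return 0;
}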
Compilers are, in practice, tested a great deal, so you should first suspect your own code, and only suspect the compiler once you are sure your code is right (and fully standard-conforming). In other words, compiler optimization bugs, where the generated code is wrong, are quite rare in practice.
Be sure to upgrade your compiler to a recent version. For GCC it is today (December 2013) 4.8.2; don't blame GCC if you are using a 4.4 or 3.6 GCC compiler, these ancient versions are not maintained anymore!
In practice, enable all warnings and debugging info when developing your code (e.g. compile with gcc -Wall -g at least, perhaps even with -Wextra). When you are sure of the quality of your code, compile it with optimizations and warnings (e.g. gcc -Wall -g -O2) and test it a lot.
In practice, profile the execution of your tests and (when possible) focus your efforts on the hot code (the one taking most of the CPU time).
Premature optimization is the root of all evil... but sometimes you really don't have a choice. Take audio codec implementation on ARM devices: there you need to take advantage of the ARM DSP assembly extensions (like QADD, QSUB, QDADD, and QDSUB), which can only be expressed in C as multi-instruction sequences (highly inefficient). Compilers cannot do a good job there, so you will need to optimize the code by inlining assembly.
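To make that concrete, here is a rough sketch (my own, not from the answer) of what a single QADD instruction does, written in portable C: a saturating 32-bit add needs a wider intermediate and clamping branches in C, which is exactly the kind of multi-instruction sequence being described.

#include <stdint.h>

/* Saturating signed 32-bit add: roughly what ARM's QADD does in one
   instruction. In portable C it takes a 64-bit intermediate plus two
   branches, so codec inner loops often drop to inline assembly or
   compiler intrinsics instead. */
static int32_t sat_add32(int32_t a, int32_t b) {
    int64_t r = (int64_t)a + (int64_t)b;
    if (r > INT32_MAX) return INT32_MAX;
    if (r < INT32_MIN) return INT32_MIN;
    return (int32_t)r;
}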
You will probably write non-optimized code first in that case, but with the optimization in mind, so that when you add the optimization you won't need to change your code too much.
Another case in which you know you will need to optimize your code is when you write signal processing functions (correlation, convolution, FFT) for embedded devices. In that case you will have to do algorithmic optimization (choose the best method to approach the problem, choose the right approximation) and code optimization (to use the pipeline properly, for example), and it is good to know that you are going to optimize the code before you start writing it (especially the algorithmic part, which can be worked out on paper even before coding and tested separately).
In this test, we can see that the performance of Go is sometimes much slower than Scala's. In my opinion, since Go code is compiled directly to native machine code (as C/C++ is), while Scala code is compiled to JVM bytecode, Go should have much better performance, especially on the computation-intensive algorithms in this benchmark. Is my understanding incorrect?
http://benchmarksgame.alioth.debian.org/u64/chartvs.php?r=eNoljskRAEEIAlPCA48ozD%2Bb1dkX1UIhzELXeGcih5BqXeksDvbs8Vgi9HFr23iGiD82SgxJqRWkKNctgkMVUfwlHXnZWDkut%2BMK1nGawoYeDLlYQ8eLG1tvF91Dd8NVGm4sBfGaYo0Pok0rWQ%3D%3D&m=eNozMFFwSU1WMDIwNFYoNTNRyAMAIvoEBA%3D%3D&w=eNpLz%2FcvTk7MSQQADkoDKg%3D%3D
Here's what I think is going on in the four benchmarks where the Go solutions are the slowest compared to the Scala solutions.
mandelbrot: the Scala implementation has its inner loop unrolled once. It may also be that the JVM can vectorise a calculation written this way, which I think the Go compiler doesn't yet do. This is good manual optimisation plus better JVM support for speeding up arithmetic.
regex-dna: the Scala implementation isn't doing what the benchmark requires: it's asked to """(one pattern at a time) match-replace the pattern in the redirect file, and record the sequence length""" but it's just calculating the length and printing that. The Go version does the match-replace, so it is slower.
k-nucleotide: the Scala implementation has been optimised by using bit-twiddling to pack nucleotides into a long rather than using chars (a rough sketch of the idea follows this list). It's a good optimisation that could also be applied to the Go code.
binary-trees: this tests GC performance by filling RAM. It's true that the Java GC is much faster than the Go GC, but the argument for this not being the top priority for Go is that usually one can avoid GC in real programs by not producing garbage in the first place.
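For illustration only, here is a rough C sketch of the bit-twiddling idea mentioned for k-nucleotide (my own code, not the actual Scala or Go benchmark source): each nucleotide fits in 2 bits, so a k-mer of up to 32 bases packs into a single 64-bit integer that can be hashed and compared cheaply.

#include <stdint.h>

/* Map A/C/G/T to 2 bits each; assumes the input contains only those
   characters (real code would validate or use a lookup table). */
static uint64_t encode_base(char c) {
    switch (c) {
        case 'A': return 0;
        case 'C': return 1;
        case 'G': return 2;
        default:  return 3; /* 'T' */
    }
}

/* Pack a k-mer (k <= 32) into one 64-bit key instead of keeping k chars. */
static uint64_t pack_kmer(const char *seq, int k) {
    uint64_t key = 0;
    for (int i = 0; i < k; i++)
        key = (key << 2) | encode_base(seq[i]);
    return key;
}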
This chart is from the Programming Shootout. You should read the disclaimers on the Shootout page before taking the benchmarks as gospel. At best these benchmarks are only useful for indicating broad expectations of performance.
That said, the JVM has a decade of well-funded optimization and apart from startup time, provides excellent performance for running code. Go is still a young language. The fact that Go comes within spitting distance of a JVM language is impressive. If you enjoy programming in Go, you should not reject it over one benchmark.
This is discussed in the Go FAQ:
One of Go's design goals is to approach the performance of C for comparable programs, yet on some benchmarks it does quite poorly, including several in test/bench/shootout. The slowest depend on libraries for which versions of comparable performance are not available in Go. For instance, pidigits.go depends on a multi-precision math package, and the C versions, unlike Go's, use GMP (which is written in optimized assembler). Benchmarks that depend on regular expressions (regex-dna.go, for instance) are essentially comparing Go's native regexp package to mature, highly optimized regular expression libraries like PCRE.
Benchmark games are won by extensive tuning and the Go versions of most of the benchmarks need attention. If you measure comparable C and Go programs (reverse-complement.go is one example), you'll see the two languages are much closer in raw performance than this suite would indicate.
Still, there is room for improvement. The compilers are good but could be better, many libraries need major performance work, and the garbage collector isn't fast enough yet. (Even if it were, taking care not to generate unnecessary garbage can have a huge effect.)
As an aside, consider the 10x (!) speed difference between different versions of a benchmark for a given programming language. C gcc #7 is 8.3 times slower than C gcc #5, and Ada #3 is almost 10 times slower than Ada #5. These benchmarks provide a rough idea of how languages compare, but the difference between Go and Scala is within one order of magnitude, which means any 'intrinsic' variation between the runtimes is likely to be dwarfed by differences in the implementation: this post describes how they sped up a program 11x by performing smarter memory allocation. Maybe the compiler/runtime should be handling this kind of optimisation automatically (as the JVM does, to a certain level), but I am not sure you can really draw the conclusion that 'Go is slower (resp. faster) than Scala' in the general case from these figures. Just my opinion though :)
Since you seem keen on looking at these biased benchmarks, let's take a real example from a real scenario, not some Fibonacci implementations.
Take a look at these rankings of web framework benchmarks. The testing was done using the native client where available and sometimes using OSS web frameworks, and they also test many packages for the same language. The tests range from serving raw strings to using an ORM to query a database.
It is clear that Scala's performance is nowhere close to Go's; in all of the tests Scala was below Go. Having said this, benchmarks are nothing close to reality, and I suggest you look at a language from a tools/features perspective, or simply at what would be best to solve your problem.
As Brad pointed out, these results are from one particular benchmark suite. This provides some information, but don't assume it's the whole picture. It would be helpful to know whether the source code is well enough written in each case to give the fastest speed, the least memory use, or some other target goal.
Perhaps we might compare with another website that ranks languages. Take a look at http://www.techempower.com/benchmarks/ in which web service code is compared. In spite of being a young language, Go is one of the best in some of these benchmarks.
As in all benchmarks, it always depends what you strive for and how you measure it.
I'm aware this will vary between systems and implementations, but I have absolutely no idea whether OpenMP is feasible for use where you have easily parallelizable code that doesn't take a long time to run. For instance, if I have a loop which would be ideal for OpenMP, but which only takes a couple of ms to run on a single thread, will the overhead kill any gains?
My usage is in real-time rendering, where we have perhaps 20 ms to generate each frame. So typically we don't have any single block of code taking more than 5-10 ms... is OpenMP targeted at this kind of time-scale, or only at operations taking several seconds to run?
Any anecdotal/empirical data is welcome, as well as more 'official' sources. I suppose I'm primarily interested in MSVC++ and GCC OpenMP implementations.
I found this interesting article: http://software.intel.com/en-us/blogs/2010/06/02/using-openmp-to-parallelize-a-game/
The answer, to all empirical questions when data is not available, is 42 ...
But seriously, you're going to have to derive your own answer to this question for your own configuration of hardware, software and problem.
Yes, perhaps when OpenMP was first published in 1997, the typical use case was relatively heavyweight threads of computation taking seconds each. Since then, processors have got faster and implementations have got better. I don't think you'll get a good answer to your question without benchmarking, and you're going to have to benchmark anyway, whatever you learn from SO, so stop wasting time and get coding.
You might be persuaded that OpenMP is not useful: but you won't quite trust the answer wrt your own requirements.
You might be persuaded that OpenMP is useful: but you won't quite trust the answer wrt your own requirements.
You might get a lot of conflicting anecdotal evidence: but you won't quite trust the answer wrt your own requirements.
Get benchmarking. And let us know how you get on, SO is far too long on anecdotes (such as the trash I'm writing), far too short on hard data.
PS Don't lose your benchmark codes. Revise them and rerun them as your experience develops and as your hardware, systems software and applications software change. Of one thing I am convinced without data: the answer to your question is a moving target.
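In that spirit, here is a minimal benchmarking sketch (my own, with an arbitrary workload and sizes) that times the same loop serially and with an OpenMP parallel for; only the numbers it prints on your hardware can answer the question.

#include <math.h>
#include <omp.h>
#include <stdio.h>

#define N (1 << 20)   /* arbitrary problem size; substitute your real per-frame loop */

static double a[N];

int main(void) {
    double t0 = omp_get_wtime();
    for (int i = 0; i < N; i++)           /* serial baseline */
        a[i] = sin((double)i);
    double t1 = omp_get_wtime();

    #pragma omp parallel for              /* same shape of work, parallelized */
    for (int i = 0; i < N; i++)
        a[i] += cos((double)i);
    double t2 = omp_get_wtime();

    printf("serial:   %.3f ms\n", (t1 - t0) * 1e3);
    printf("parallel: %.3f ms\n", (t2 - t1) * 1e3);
    printf("check:    %f\n", a[N / 2]);   /* keep the work from being optimized away */
    /* Build with e.g. gcc -O2 -fopenmp bench.c -lm and run it several times;
       the first parallel region also pays the one-off thread-creation cost. */
    return 0;
}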
It seems that I often hear people criticize certain programming languages because they "have poor performance", or because some other language is "faster" in general (not necessarily for a specific application). However, my experience and education have taught me that anytime you have a performance problem, at least one of the following is probably happening:
The bottleneck isn't in the CPU, it's in some other device, such as the network or the hard drive.
The poor performance is caused by your algorithms, not by the language you're using.
My general impression is that the speed of a programming language itself is all but irrelevant in the vast majority of cases, with exceptions for serious data processing problems. Even in those cases, I believe you could use a hybrid approach and use a lower-level language only for the CPU-intensive pieces so that you wouldn't lose the benefits of the more abstract language altogether.
Do you agree? Is programming language speed insignificant most of the time, or do the critics have a right to point out language performance issues?
I hope this question isn't too subjective, but it seems to me that there should be a relatively objective answer to this.
Performance can be a serious concern in libraries, operating systems, and the like. However, I believe that upwards of 90% of the time raw performance is irrelevant.
What is more important in many cases is TIMING. Any garbage collected language is going to have some unpredictability in this regard, which makes them unsuited to embedded and realtime design spaces.
The overlap of GC'd and "slow" languages is considerable, and so you may see a language discounted for speed reasons when the real problem is inconsistent timing.
There are some allocation/threading/etc. schemes that allow for garbage collection while also guaranteeing the runtime of parts of the system, such as Realtime Java, though I haven't personally seen it in use anywhere.
Short answer: most of the time the speed of the language is irrelevant (within reason), language choices are made based on familiarity and available libraries.
Amazingly, the performance of a system is a combination of the programming language, the system it's executing on, the operations that system is performing and the external resources (network, disk, slow line printers, etc.) that it relies upon.
If your system is slow, rather than guessing, test it.
If there is any "Rule" in computing, it's "Test your assumptions". Everything else is gross guideline.
This is impossible to answer so broadly. It's like asking if big engines are a waste in cars. Well, for some people, yes. For others, not at all. And all sorts in between.
There are a myriad of factors that come in to play. What is your target environment? End-user deployment or servers? Let's suppose we're talking about web development and coding for a server. RoR is well-known to be (relatively) slow. .NET is pretty fast by comparison. But RoR also has RAD qualities that .NET can't compete with.
Is getting your app up-and-running yesterday more of a priority than scalability?
Does your business model live or die on the milliseconds you serve a page, or the time you went to market?
Does your TCO and application architecture support scaling out or scaling up? Do you even expect to need to scale up?
Those are just a tiny handful of the questions an architect has to answer when making platform/language decisions. Does speed matter? Sometimes. If I am planning to write a LoB service that will eventually need to scale to thousands of transactions per second, and it will be deployed in an enterprise environment, I will probably go with .NET. If I have an idea for a Web 2.0 business like selling Twitter T-shirts, I need to capitalize on that idea yesterday, and I know I probably won't get slammed with enough business to bring the site down before I've prepared for it.
This is honestly over-simplifying a very complex issue, but hopefully it illustrates the point that it's impossible to simply "say" whether it matters or not.
I think it's a good question. To answer it requires having a general framework for thinking about performance, so let me try to provide one. (Some of this is going to sound really obvious, but bear with me.)
To keep things simple,
let's just consider the simple case of applications that have a specific job to do, and that start, and then finish, and what you care about is wall-clock time. Let's assume a standard CPU cycle rate, and a mono-processor.
The time duration consists of a stream of time-slices (nanoseconds, say). To do that job, there is a minimum amount of time required, and it is usually greater than zero. There is no maximum amount of time required. If a program spends longer than the minimum number of nanoseconds, then some of those nanoseconds are being spent, strictly speaking, unnecessarily (i.e. for poor reasons).
So, to optimize a program's execution time, it is necessary to find the nanoseconds it is spending that do not have to be spent (i.e. that do not have good reasons) and remove them.
One way to do this is to, if possible, step through the program and keep track at each step of why it is doing that step. If the reason is not good, there is an opportunity for removing steps.
Another way to do this is to select nanoseconds at random from the program's execution, and inquire their reasons. For example, the program counter can tell you what the program is doing, but the call stack can tell you why. In order for the nanosecond to be spent for a good reason, every call instruction on the call stack has to have a good reason. If any instruction on the call stack does not have a good reason, then there is an opportunity to optimize. In fact, the amount of time that instruction is on the call stack is the amount of time that would be saved by its removal.
In some kinds of software that are highly asynchronous, message-driven, or interpreted, the call stack may not provide enough information. In that case, to answer why a given nanosecond is being spent may be more difficult. It may require examining more state information than just the call stack. For example, in an interpreter, the stack of the program being interpreted may also need to be examined. However, often the hardware call stack does provide sufficient information, so it is a useful thing to examine.
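A hedged, glibc-specific toy version of that sampling idea (my own sketch; backtrace() is not formally async-signal-safe, so a real profiler does this far more carefully):

#include <execinfo.h>
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>
#include <unistd.h>

/* On every profiling tick, dump the current call stack: each sample is an
   answer to "why is this slice of time being spent?" */
static void on_sigprof(int sig) {
    (void)sig;
    void *frames[64];
    int n = backtrace(frames, 64);
    backtrace_symbols_fd(frames, n, STDERR_FILENO); /* link with -rdynamic for names */
    write(STDERR_FILENO, "----\n", 5);
}

static double busy_work(void) {
    double s = 0.0;
    for (long i = 1; i < 100000000L; i++)   /* stand-in for the real program */
        s += 1.0 / (double)i;
    return s;
}

int main(void) {
    signal(SIGPROF, on_sigprof);
    struct itimerval it = { {0, 10000}, {0, 10000} }; /* ~100 samples per CPU-second */
    setitimer(ITIMER_PROF, &it, NULL);
    printf("%f\n", busy_work());
    return 0;
}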
Now, to try to answer your question.
There is such a thing as a "hot spot". This is a small set of addresses that are often at the bottom of the call stack. Nanoseconds spent in that code may or may not have good reasons.
There is such a thing as a "performance problem". This is an instruction that often accounts for why nanoseconds are being spent, but that does not have a good reason. Such an instruction may be in a hot spot. It may also be a subroutine call instruction. (It cannot be both.) It may be an instruction to send a message to be processed later, that does not have a good reason for being spent. To optimize software, such instructions (not functions) are what are being looked for.
Languages, loosely speaking, are either compiled into machine language or interpreted. Interpreted languages are usually 1 or 2 orders of magnitude slower than compiled, because they are constantly re-determining what they need to do. However, roughly speaking, this is only a performance problem if it occurs in a hot spot. If a program spends all its time calling compiled library functions, or waiting for I/O completions, then its speed of execution probably doesn't matter, because most of the nanoseconds are being spent for other reasons.
Now, certainly, any language or program can in principle be highly non-optimal, but in terms of compilers, for hotspot code, they are mostly pretty good, give or take maybe 30%. If there is a background process involved, like garbage collection, that adds an overhead, but it depends on the rate at which the program generates garbage.
So to sum up, the speed of a language matters in hotspot code, but not much elsewhere. When a program has been optimized by removal of all other performance problems, and if the hotspot code is actually seen by the compiler/interpreter, then speed of language matters.
Your question is framed very broadly, so I'll try to give a somewhat narrower answer:
Unless there is some good reason not to do so, the language for a project should always be chosen from among those languages that will help the project team be productive and produce reliable software that can easily be adapted for future needs. The tradeoffs generally favor high-level languages with automatic memory management.
N.B. There are plenty of good reasons to make other choices, such as compatibility with current products and libraries.
It sometimes happens that when a program is too slow, the quickest and easiest way to speed it up is to rewrite the program (or a critical part) in a new language. This happens most often when the implementation language is interpreted and the new language is compiled.
Example: I got about a 4x speedup out of the OSBF-Lua spam filter by rewriting the lexical analysis of the mail headers. By rewriting from Lua to C I not only went from interpreted to compiled but was able to eliminate an array-bounds check for every input character.
To answer your question as stated, it is not very often that language performance per se is an issue.
With CPUs getting ever faster, hard disks spinning, bits flying around so quickly, and network speeds increasing as well, it's not as simple to tell bad code from good code as it used to be.
I remember a time when you could optimize a piece of code and undeniably perceive an improvement in performance. Those days are almost over. Instead, I guess we now have a set of rules that we follow like "Don't declare variables inside loops" etc. It's great to adhere to these so that you write good code by default. But how do you know it can't be improved even further without some tool?
Some may argue that a couple of nanoseconds won't really make that big a difference these days. The truth is, we are stuck with so many layers that you get a staggering effect.
I'm not saying we should optimize every little millisecond out of our code as that will be expensive and unfeasible. I believe we have to do our best, given our time constraints, to write efficient code as well.
I'm just interested to know what tools you use to profile and measure performance of code, if at all.
I think that optimization should be thought of not as looking at each line of code, but rather as asking what the asymptotic complexity of your algorithm is. For example, a bubble sort is probably one of the worst sorting algorithms you could use in terms of optimization: it takes the longest. Quicksort and mergesort are faster at sorting and should always be used before a bubble sort.
If you keep optimization always in your mind when designing a solution to a problem, then you should be able to write readable code, which other developers will approve of. Also, if you are programming in a higher level language that will be compiled before it is run, remember that compilers make some awesome optimizations nowadays that you or I may not think of, and also (more importantly) do not have to worry about.
Stick with a good, low big-O, and your code should be optimized pretty well. If you are working with millions of items or more in some dataset, then look for an O(log n) algorithm. They work great for large tasks and keep your code optimized.
Let the compilers work on the line by line code optimizations so you can focus on the solutions.
There are times that do warrant line-by-line optimization, and if you really need that much speed, you might want to look into assembly so that you can control every instruction that is written.
There's a big difference between "good" code and "fast" code. They aren't exactly separate from each other either, but "fast" code doesn't mean "good". Often times, "fast" actually means bad code because readability compromises must be made to make it fast.
The way I look at it, hardware is cheap, programmers are expensive. Unless there is a serious performance problem with some piece of code, you should never have to worry about speed. If there are performance problems, you'll notice them. Only when you notice the performance problem on good hardware should you have to worry about optimization (in my opinion)
If you reach the point where your code is slow, but you can't figure out why, I'd use a profiler like ANTS, or dotTrace if you're in the .NET world (I'm sure there are others out there for other platforms and languages). They're pretty useful, but I've only ever had one situation where I needed a profiler to identify the problem. And now that I know the issue, I won't need a profiler again to tell me it's a problem, because I'll never forget the amount of time I spent trying to optimize it.
This is absolutely a valid concern, but not for most developers. Most developers are concerned with delivering a product that works to their employer. Optimized code is seldom a requirement.
The best way to make sure your code is fast is to benchmark or profile it. A lot of compiler optimizations create non-intuitive oddities in the performance of a programmer's code, so in the end measurement becomes essential.
In my experience, Rational Quantify has given me the best results in terms of code tuning. It is not free, but it is very fully featured and seems to have given me the most useful results.
In terms of free tools, check out gprof or oprofile, if you are on a Unix environment. They are not as good as some of the commercial tools, but can often point you in the right direction.
On a side note, I am almost always surprised at what profilers turn up the first time I use them. You can have intuition as to where code may be bottlenecking, and it can often be completely wrong.
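As a concrete, hedged starting point (my own sketch; suspect_function is a placeholder): before reaching for a full profiler, a coarse wall-clock timer around the code you suspect will at least tell you whether your intuition is in the right ballpark.

#include <stdio.h>
#include <time.h>

/* Placeholder for whatever you suspect is slow; replace with the real call. */
static void suspect_function(void) {
    volatile double s = 0.0;
    for (int i = 0; i < 10000000; i++)
        s += i * 0.5;
}

int main(void) {
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    suspect_function();
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ms = (t1.tv_sec - t0.tv_sec) * 1e3
              + (t1.tv_nsec - t0.tv_nsec) / 1e6;
    printf("suspect_function: %.3f ms\n", ms);
    /* For a whole-program view, gprof (compile with -pg, run the program,
       then "gprof ./a.out gmon.out") or oprofile attributes time to every
       function instead of one hand-placed timer. */
    return 0;
}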
Almost all code I write is plenty fast enough. On the rare occasions when it isn't, for C, C++, and Objective Caml I use the venerable gprof and the excellent valgrind with its superb visualizer kcachegrind (part of the KDE SDK; don't be fooled by the out-of-date code on sourceforge).
The MLton Standard ML compiler and the Glasgow Haskell Compiler both ship with excellent profilers.
I wish there were a better profiler for Lua.
Uh, a profiler maybe? There are ones available for almost all platforms and languages.
Oftentimes a developer will be faced with a choice between two possible ways to solve a problem -- one that is idiomatic and readable, and another that is less intuitive, but may perform better. For example, in C-based languages, there are two ways to multiply a number by 2:
int SimpleMultiplyBy2(int x)
{
    return x * 2;
}
and
int FastMultiplyBy2(int x)
{
    return x << 1;
}
The first version is simpler to pick up for both technical and non-technical readers, but the second one may perform better, since bit shifting is a simpler operation than multiplication. (For now, let's assume that the compiler's optimizer would not detect this and optimize it, though that is also a consideration).
As a developer, which would be better as an initial attempt?
You missed one.
First code for correctness, then for clarity (the two are often connected, of course!). Finally, and only if you have real empirical evidence that you actually need to, you can look at optimizing. Premature optimization really is evil. Optimization almost always costs you time, clarity, maintainability. You'd better be sure you're buying something worthwhile with that.
Note that good algorithms almost always beat localized tuning. There is no reason you can't have code that is correct, clear, and fast. You'll be unreasonably lucky to get there by starting off focusing on 'fast', though.
IMO the obvious readable version first, until performance is measured and a faster version is required.
Take it from Don Knuth
Premature optimization is the root of all evil (or at least most of it) in programming.
Readability 100%
If your compiler can't do the "x*2" => "x <<1" optimization for you -- get a new compiler!
Also remember that 99.9% of your program's time is spent waiting for user input, waiting for database queries and waiting for network responses. Unless you are doing the multiply 20 bajillion times, it's not going to be noticeable.
Readability for sure. Don't worry about the speed unless someone complains
In your given example, 99.9999% of the compilers out there will generate the same code for both cases. Which illustrates my general rule - write for readability and maintainability first, and optimize only when you need to.
Readability.
Coding for performance has its own set of challenges. Joseph M. Newcomer said it well:
Optimization matters only when it matters. When it matters, it matters a lot, but until you know that it matters, don't waste a lot of time doing it. Even if you know it matters, you need to know where it matters. Without performance data, you won't know what to optimize, and you'll probably optimize the wrong thing. The result will be obscure, hard to write, hard to debug, and hard to maintain code that doesn't solve your problem. Thus it has the dual disadvantage of (a) increasing software development and software maintenance costs, and (b) having no performance effect at all.
I would go for readability first. Given the kind of optimizing compilers and powerful machines we have these days, most of the code we write in a readable way will perform decently.
In some very rare scenarios, where you are pretty sure you are going to hit a performance bottleneck (perhaps from some past bad experiences), and you've found some weird trick which gives you a huge performance advantage, you can go for it. But you should comment that code snippet very well, which will help make it more readable.
Readability. The time to optimize is when you get to beta testing. Otherwise you never really know what you need to spend the time on.
An often-overlooked factor in this debate is the extra time it takes a programmer to navigate, understand and modify less readable code. Considering that a programmer's time goes for a hundred dollars an hour or more, this is a very real cost.
Any performance gain is countered by this direct extra cost in development.
Putting a comment there with an explanation would make it readable and fast.
It really depends on the type of project, and how important performance is. If you're building a 3D game, then there are usually a lot of common optimizations that you'll want to throw in there along the way, and there's no reason not to (just don't get too carried away early). But if you're doing something tricky, comment it so anybody looking at it will know how and why you're being tricky.
The answer depends on the context. In device driver programming or game development for example, the second form is an acceptable idiom. In business applications, not so much.
Your best bet is to look around the code (or in similar successful applications) to check how other developers do it.
If you're worried about readability of your code, don't hesitate to add a comment to remind yourself what and why you're doing this.
Using << would be a micro-optimization.
So Hoare's (not Knuth's) rule:
Premature optimization is the root of all evil.
applies and you should just use the more readable version in the first place.
This rule is, IMHO, often misused as an excuse to design software that can never scale or perform well.
Both. Your code should balance readability and performance, because ignoring either one will hurt the ROI of the project, which at the end of the day is all that matters to your boss.
Bad readability results in decreased maintainability, which results in more resources spent on maintenance, which results in a lower ROI.
Bad performance results in decreased investment and client base, which results in a lower ROI.
Readability is the FIRST target.
In the 1970's the army tested some of the then "new" techniques of software development (top down design, structured programming, chief programmer teams, to name a few) to determine which of these made a statistically significant difference.
The ONLY technique that made a statistically significant difference in development was...
ADDING BLANK LINES to program code.
The improvement in readability of that pre-structured, pre-object-oriented code was the only technique in these studies that improved productivity.
==============
Optimization should only be addressed when the entire project is unit tested and ready for instrumentation. You never know WHERE you need to optimize the code.
In their landmark books Software Tools (1976) and Software Tools in Pascal (1981), Kernighan and Plauger showed ways to create structured programs using top-down design. They created text-processing programs: editors, search tools, code pre-processors.
When the completed text-formatting function was INSTRUMENTED, they discovered that most of the processing time was spent in three routines that performed text input and output (in the original book, the I/O functions took 89% of the time; in the Pascal book, these functions consumed 55%!).
They were able to optimize those THREE routines and achieved increased performance with reasonable, manageable development time and cost.
The larger the codebase, the more readability is crucial. Trying to understand some tiny function isn't so bad. (Especially since the Method Name in the example gives you a clue.) Not so great for some epic piece of uber code written by the loner genius who just quit coding because he has finally seen the top of his ability's complexity and it's what he just wrote for you and you'll never ever understand it.
As almost everyone said in their answers, I favor readability. 99 out of 100 projects I run have no hard response time requirements, so it's an easy choice.
Before you even start coding you should already know the answer. Some projects have certain performance requirements, like 'need to be able to run task X in Y (milli)seconds'. If that's the case, you have a goal to work towards and you know when you have to optimize or not. (hopefully) this is determined at the requirements stage of your project, not when writing the code.
Good readability and the ability to optimize later on are a result of proper software design. If your software is of sound design, you should be able to isolate parts of your software and rewrite them if needed, without breaking other parts of the system. Besides, most true optimization cases I've encountered (ignoring some real low level tricks, those are incidental) have been in changing from one algorithm to another, or caching data to memory instead of disk/network.
If there is no readability, it will be very hard to get performance improvements when you really need them.
Performance should only be improved when it is a problem in your program; there are many places more likely to be the bottleneck than this bit of syntax. Say you are squeezing a 1 ns improvement out of a << while ignoring 10 minutes of I/O time.
Also, regarding readability, a professional programmer should be able to read and understand computer-science terms. For example, we can name a method enqueue rather than having to call it putThisJobInWorkQueue.
The bitshift versus the multiplication is a trivial optimization that gains next to nothing. And, as has been pointed out, your compiler should do that for you. Other than that, the gain is negligible anyhow, whatever CPU the instruction runs on.
On the other hand, if you need to perform serious computation, you will require the right data structures. But if your problem is complex, finding that out is part of the solution. As an illustration, consider searching for an ID number in an array of 1,000,000 unsorted objects. Then reconsider, using a binary tree or a hash map.
But optimizations like n << C are usually negligible and trivial to change to at any point. Making code readable is not.
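To make the data-structure point concrete, here is a small hedged sketch (my own, with made-up sizes and a made-up Record type): the same ID lookup is O(n) over an unsorted array but O(log n) once the array is sorted and searched with bsearch, and on a million records that difference dwarfs anything a << ever buys you.

#include <stdio.h>
#include <stdlib.h>

typedef struct { int id; /* ... other fields ... */ } Record;

static int cmp_id(const void *a, const void *b) {
    const Record *ra = a, *rb = b;
    return (ra->id > rb->id) - (ra->id < rb->id);
}

/* O(n): scan every record until the id matches. */
static Record *find_linear(Record *recs, size_t n, int id) {
    for (size_t i = 0; i < n; i++)
        if (recs[i].id == id)
            return &recs[i];
    return NULL;
}

/* O(log n): binary search, assuming recs was sorted by id beforehand. */
static Record *find_sorted(Record *recs, size_t n, int id) {
    Record key = { .id = id };
    return bsearch(&key, recs, n, sizeof *recs, cmp_id);
}

int main(void) {
    enum { COUNT = 1000000 };
    Record *recs = malloc(COUNT * sizeof *recs);
    for (int i = 0; i < COUNT; i++)
        recs[i].id = i * 2;

    qsort(recs, COUNT, sizeof *recs, cmp_id);   /* one-time cost */
    printf("linear:  %p\n", (void *)find_linear(recs, COUNT, 999998));
    printf("bsearch: %p\n", (void *)find_sorted(recs, COUNT, 999998));
    free(recs);
    return 0;
}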
It depends on the task that needs to be solved. Usually readability is more important, but there are still some tasks where you should think of performance in the first place. And you can't just spend a day or two on profiling and optimization after everything works perfectly, because the optimization itself may require rewriting a significant part of the code from scratch. But that is not common nowadays.
I'd say go for readability.
But in the given example, I think that the second version is already readable enough, since the name of the function states exactly what is going on inside it.
If we just always had functions that told us what they do ...
You should always maximally optimize, performance always counts. The reason we have bloatware today, is that most programmers don't want to do the work of optimization.
Having said that, you can always put comments in where slick coding needs clarification.
There is no point in optimizing if you don't know your bottlenecks. You may have made a function incredibly efficient (usually at the expense of readability to some degree) only to find that that portion of code hardly ever runs, or that it's spending more time hitting the disk or database than you'll ever save by twiddling bits.
So you can't micro-optimize until you have something to measure, and then you might as well start off for readability.
However, you should be mindful of both speed and understandability when designing the overall architecture, as both can have a massive impact and be difficult to change later (depending on coding style and methodologies).
It is estimated that about 70% of the cost of software is in maintenance. Readability makes a system easier to maintain and therefore brings down cost of the software over its life.
There are cases where performance is more important than readability; that said, they are few and far between.
Before sacrificing readability, think: "Am I (or is my company) prepared to deal with the extra cost I am adding to the system by doing this?"
I don't work at google so I'd go for the evil option. (optimization)
In Chapter 6 of Jon Bentley's "Programming Pearls", he describes how one system got a 400-fold speedup by optimizing at 6 different design levels. I believe that by not caring about performance at these 6 design levels, modern implementors can easily achieve a 2-3 order of magnitude slowdown in their programs.
Readability first. But even more than readability is simplicity, especially in terms of data structure.
I'm reminded of a student doing a vision analysis program, who couldn't understand why it was so slow. He merely followed good programming practice - each pixel was an object, and it worked by sending messages to its neighbors...
Write for readability first, but expect the readers to be programmers. Any programmer worth his or her salt should know the difference between a multiply and a bitshift, or be able to read the ternary operator where it is used appropriately, be able to look up and understand a complex algorithm (you are commenting your code right?), etc.
Early over-optimization is, of course, quite bad at getting you into trouble later on when you need to refactor, but that doesn't really apply to the optimization of individual methods, code blocks, or statements.
How much does an hour of processor time cost?
How much does an hour of programmer time cost?
IMHO the two things have nothing to do with each other. You should first go for code that works, as this is more important than performance or how well it reads. Regarding readability: your code should always be readable in any case.
However I fail to see why code can't be readable and offer good performance at the same time. In your example, the second version is as readable as the first one to me. What is less readable about it? If a programmer doesn't know that shifting left is the same as multiplying by a power of two and shifting right is the same as dividing by a power of two... well, then you have much more basic problems than general readability.