What is the difference between the Benchmark and Workload concepts? - performance

I am confused between the meaning of benchmark and workload in computer science context.
My understanding for both is the following:
Workload == the data or logs generated to simulate user behavior on a specific system.
Benchmark == the way of testing a piece of software to see how well the system is doing in terms of performance.
Again, I might be totally wrong, and I hope somebody can clear up my understanding of these concepts.
Sincerely

The meanings of these terms depend on the context.
The term benchmark as a noun could mean any of the following:
A computer program or a collection of programs.
A quantifiable metric or a collection of quantifiable metrics.
An evaluation or measurement of a metric by running a program.
A collection of workloads (programs)
The term benchmark as a verb means to measure a metric or a collection of metrics by running a program or a collection of programs, possibly repeatedly.
The terms benchmarking and benchmark testing are synonyms. In the term benchmark testing, the term benchmark refers to a metric, not a program. When the purpose of benchmarking is to evaluate performance (execution time or throughput), the term performance testing means the same thing.
The term workload could mean any of the following:
A single benchmark (a computer program or application)
A collection of benchmarks (a collection of computer programs).
A benchmark and the input provided to it.
As you might imagine, the two terms can be used together. A workload benchmark can mean either an evaluation of a collection of programs or the collection of programs itself. A benchmark workload refers to a collection of programs used for a particular evaluation. The term workload benchmark testing refers to evaluating a collection of programs.
The goal of benchmarking is to compare systems or programs against each other.


How do you reason about fluctuations in benchmarking data?

Suppose you're trying to optimize a function and using some benchmarking framework (like Google Benchmark) for measurement. You run the benchmarks on the original function 3 times and see average wall clock time/CPU times of 100 ms, 110 ms, 90 ms. Then you run the benchmarks on the "optimized" function 3 times and see 80 ms, 95 ms, 105 ms. (I made these numbers up). Do you conclude that your optimizations were successful?
Another problem I often run into is that I'll go do something else and run the benchmarks later in the day and get numbers that are further away than the delta between the original and optimized earlier in the day (say, 80 ms, 85 ms, 75 ms for the original function).
I know there are statistical methods to determine whether the improvement is "significant". Do software engineers actually use these formal calculations in practice?
I'm looking for some kind of process to follow when optimizing code.
Rule of Thumb
Minimum(!) of each series => 90ms vs 80ms
Estimate noise => ~ 10ms
Pessimism => It probably didn't get any slower.
Not happy yet?
Take more measurements. (~13 runs each)
Interleave the runs. (Don't measure 13x A followed by 13x B.)
Ideally you always randomize whether you run A or B next (scientific: randomized trial), but it's probably overkill. Any source of error should affect each variant with the same probability. (Like the CPU building up heat over time, or a background task starting after run 11.)
Go back to step 1.
Still not happy? Time to admit that you've been nerd-sniped. The difference, if it exists, is so small that you can't even measure it. Pick the more readable variant and move on. (Or alternatively, lock your CPU frequency, isolate a core just for the test, quiet down your system...)
Explanation
Minimum: Many people (and tools, even) take the average, but the minimum is statistically more stable. There is a lower limit on how fast your benchmark can run on given hardware, but no upper limit on how much it can be slowed down by other programs. Also, taking the minimum will automatically drop the initial "warm-up" run.
Noise: Apply common sense; just glance over the numbers. If you look at the standard deviation, be very skeptical of it! A single outlier will influence it so much that it becomes nearly useless. (It's usually not a normal distribution.)
Pessimism: You were really clever to find this optimization, you really want the optimized version to be faster! If it looks better just by chance, you will believe it. (You knew it!) So if you care about being correct, you must counter this tendency.
Disclaimer
Those are just basic guidelines. Worst-case latency is relevant in some applications (smooth animations or motor control), but it will be harder to measure. It's easy (and fun!) to optimize something that doesn't matter in practice. Instead of wondering if your 1% gain is statistically significant, try something else. Measure the full program including OS overhead. Comment out code, or run work twice, only to check if optimizing it might be worth it.
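For concreteness, here is a minimal sketch of the rule of thumb above in Python; variant_a and variant_b are placeholders for your original and "optimized" code, and the run counts are illustrative, not prescriptive:
```python
import random
import time

def variant_a():   # placeholder for the original function
    sum(range(100_000))

def variant_b():   # placeholder for the "optimized" function
    sum(range(100_000))

def time_once(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

# Interleave the runs in random order so that slow drift (heat, background
# tasks) affects both variants with the same probability.
runs = [("A", variant_a), ("B", variant_b)] * 13
random.shuffle(runs)

samples = {"A": [], "B": []}
for name, fn in runs:
    samples[name].append(time_once(fn))

# Compare the minimum of each series: it is more stable than the mean and
# automatically drops warm-up outliers.
print(f"A min: {min(samples['A']):.4f} s")
print(f"B min: {min(samples['B']):.4f} s")
```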
Do you conclude that your optimizations were successful?
No. Three runs are not enough, especially given the huge variation and the fact that the timings of the two groups overlap once merged and sorted.
For small timings like these, the first run should be removed and at least dozens of runs should be performed. I would personally use at least hundreds of runs.
Do software engineers actually use these formal calculations in practice?
Only very few developers do advanced statistical analysis. It is often not necessary to do something very formal when the gap between before and after the target optimization is huge and the variation within the groups is small.
For example, if your program is twice as fast as before with a min-max variation of <5%, then you can quite safely say that the optimization is successful. That being said, this sometimes fails to hold due to unexpected external factors (though it is very rare when the gap is so big).
If the result is not obvious, then you need some basic statistics: compute the standard deviation, the mean and the median time, remove the first run, interleave runs, and use many runs (at least dozens). The distribution of the timings almost always follows a normal distribution due to the central limit theorem. It is sometimes a mixture distribution due to threshold effects (e.g. caching). If you see some outliers in the timings, plotting the values is an easy way to check this.
If there are threshold effects, then you need to apply a more advanced statistical analysis, but this is complex to do and it is generally not expected behaviour. It is usually a sign that the benchmark is biased, that there is a bug, or that there is a complex effect you have to consider during the analysis of the results anyway. Thus, in that case I strongly advise you to fix/mitigate the problem before analysing the results.
Assuming the timings follow a normal distribution, you can just check whether the median is close to the mean and whether the standard deviation is small compared to the gap between the means of the two groups.
A more formal way to do that is to compute a Student's t-test and its associated p-value and check whether the p-value is significant (e.g. <5%). If there are more than two groups, an ANOVA can be used. If you are unsure about the distribution, you can apply non-parametric statistical tests like the Wilcoxon and Kruskal-Wallis tests (note that the statistical power of these tests is not the same). In practice, doing such a formal analysis is time-consuming and generally not much more useful than a naive basic check (using the mean and standard deviation), unless your modification impacts a lot of users or you plan to write research papers.
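If you do want the formal check, a minimal sketch with SciPy looks like this; the timing numbers are invented, and the rank-sum test is the two-independent-sample form of the Wilcoxon test mentioned above:
```python
from scipy import stats

# Two groups of timings in milliseconds (invented numbers).
before = [100, 110, 90, 105, 95, 102, 98, 101, 99, 103, 97, 104]
after  = [ 80,  95, 105, 85, 90,  88, 92,  87, 91,  89, 86,  93]

# Welch's t-test: parametric, assumes roughly normal distributions.
t_stat, p_value = stats.ttest_ind(before, after, equal_var=False)
print("t-test p-value:", p_value)

# Wilcoxon rank-sum test: non-parametric alternative for two independent groups.
w_stat, p_value_np = stats.ranksums(before, after)
print("rank-sum p-value:", p_value_np)
```
A p-value below your chosen threshold (e.g. 5%) suggests the difference is unlikely to be due to chance alone, but it says nothing about whether the benchmark itself was biased.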
Keep in mind that using a good statistical analysis does not prevent biased benchmarks. You need to minimize the external factors that can cause biased results. One frequent bias is frequency scaling: the first benchmark can be faster than the second because of turbo boost, or it can be slower because the processor takes some time to reach a high frequency. Caches also play a huge role in benchmark biases. There are many other factors that can cause biases in practice, like the compiler/runtime versions, environment variables, configuration files, OS/driver updates, memory alignment, OS paging (especially on NUMA systems), the hardware (e.g. thermal throttling), software bugs (it is not rare to find bugs by analysing strange performance behaviours), etc.
As a result, it is critical to make benchmarks as reproducible as possible: fix versions, report the environment parameters, and possibly run the benchmarks in a sandbox if you are paranoid and it does not affect the timings too much. Software like Nix/Spack helps with packaging, and containers like LXD or Docker can help provide a more reproducible environment.
Many big software teams use automated benchmarking to check for performance regressions. Such tools can run the benchmarks properly and do the statistical analysis for you on a regular basis. A good example is the NumPy team, which uses a package called Airspeed Velocity (see the results). The PyPy team also designed their own benchmarking tool. The Linux kernel also has benchmarking suites to check for regressions (e.g. PTS), and many companies focusing on performance have such automated benchmarking tools (often home-made). There are many existing tools for that.
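To give an idea of what such a tool expects from you, an Airspeed Velocity benchmark is roughly just a Python class or function placed in the project's benchmarks/ directory; methods whose names start with time_ are timed automatically. The suite below is a made-up illustration, not taken from the NumPy repository:
```python
import numpy as np

class SortSuite:
    # asv runs setup() before timing; methods whose names start with time_
    # are discovered and timed as benchmarks.
    def setup(self):
        rng = np.random.default_rng(seed=0)
        self.data = rng.random(100_000)

    def time_numpy_sort(self):
        np.sort(self.data)
```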
For more information about this topic, please take a look at the great Performance Matters presentation by Emery Berger.

How to accurately measure performance of sorting algorithms

I have a bunch of sorting algorithms in C I wish to benchmark. I am concerned regarding good methodology for doing so. Things that could affect benchmark performance include (but are not limited to): specific coding of the implementation, programming language, compiler (and compiler options), benchmarking machine and critically the input data and time measuring method. How do I minimize the effect of said variables on the benchmark's results?
To give you a few examples, I've considered multiple implementations in two different languages to adjust for the first two variables. Moreover, I could compile the code with different compilers using fairly mundane (and specified) arguments. Now I'm going to be running the test on my machine, which features turbo boost and whatnot and often boosts a core running stuff to the moon. Of course I will be disabling that and doing multiple runs, likely taking their mean completion time, to adjust for that as well. Regarding the input data, I will be taking different array sizes, from very small to relatively large. I do not know what the increments should ideally be, nor what the range of the elements should be. Also, I presume duplicate elements should be allowed.
I know that theoretical analysis of algorithms accounts for all of these methods, but it is crucial that I complement my study with actual benchmarks. How would you go about resolving the mentioned issues, and adjust for these variables once the data is collected? I'm comfortable with the technologies I'm working with, less so with strict methodology for studying a topic. Thank you.
You can't benchmark abstract algorithms, only specific implementations of them, compiled with specific compilers running on specific machines.
Choose a couple different relevant compilers and machines (e.g. a Haswell, Ice Lake, and/or Zen2, and an Apple M1 if you can get your hands on one, and/or an AArch64 cloud server) and measure your real implementations. If you care about in-order CPUs like ARM Cortex-A53, measure on one of those, too. (Simulation with GEM5 or similar performance simulators might be worth trying. Also maybe relevant are low-power implementations like Intel Silvermont whose out-of-order window is much smaller, but also have a shorter pipeline so smaller branch mispredict penalty.)
If some algorithm allows a useful micro-optimization in the source, or that a compiler finds, that's a real advantage of that algorithm.
Compile with options you'd use in practice for the use-cases you care about, like clang -O3 -march=native, or just -O2.
Benchmarking on cloud servers makes it hard / impossible to get an idle system, unless you pay a lot for a huge instance, but modern AArch64 servers are relevant and may have different ratios of memory bandwidth vs. branch mispredict costs vs. cache sizes and bandwidths.
(You might well find that the same code is the fastest sorting implementation on all or most of the systems you test on.)
Re: sizes: yes, a variety of sizes would be good.
You'll normally want to test with random data, perhaps always generated from the same PRNG seed so you're sorting the same data every time.
You may also want to test some unusual cases like already-sorted or almost-sorted, because algorithms that are extra fast for those cases are useful.
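Whatever language the sort implementations are in, it helps to generate the inputs once, reproducibly, and feed exactly the same data to every implementation. A sketch of such a generator follows; it is written in Python for brevity (your harness could equally generate the data in C), and the file names, sizes, and the ~1% perturbation for "almost sorted" are arbitrary choices:
```python
import random

def make_input(n, kind, seed=12345):
    """Reproducible test arrays: the same seed yields the same data every run."""
    rng = random.Random(seed)
    data = [rng.randint(0, 10 * n) for _ in range(n)]   # duplicates allowed
    if kind == "sorted":
        data.sort()
    elif kind == "almost_sorted":
        data.sort()
        for _ in range(max(1, n // 100)):               # perturb ~1% of elements
            i, j = rng.randrange(n), rng.randrange(n)
            data[i], data[j] = data[j], data[i]
    return data

# Write one input file per (size, kind) so every implementation sorts
# exactly the same data.
for n in (1_000, 100_000, 1_000_000):
    for kind in ("random", "sorted", "almost_sorted"):
        with open(f"input_{kind}_{n}.txt", "w") as f:
            f.write("\n".join(map(str, make_input(n, kind))))
```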
If you care about sorting things other than integers, you might want to test with structs of different sizes, with an int key as a member. Or a comparison function that does some amount of work, if you want to explore how sorts do with a compare function that isn't as simple as just one compare machine instruction.
As always with microbenchmarking, there are many pitfalls around warm-up of arrays (page faults), CPU frequency, and more; see Idiomatic way of performance evaluation? for details.
taking their mean completion time
You might want to discard high outliers, or take the median which will have that effect for you. Usually that means "something happened" during that run to disturb it. If you're running the same code on the same data, often you can expect the same performance. (Randomization of code / stack addresses with page granularity usually doesn't affect branches aliasing each other in predictors or not, or data-cache conflict misses, but tiny changes in one part of the code can change performance of other code via effects like that if you're re-compiling.)
If you're trying to see how it would run when it has the machine to itself, you don't want to consider runs where something else interfered. If you're trying to benchmark under "real world" cloud server conditions, or with other threads doing other work in a real program, that's different and you'd need to come up with realistic other loads that use some amount of shared resources like L3 footprint and memory bandwidth.
Things that could affect benchmark performance include (but are not limited to): specific coding of the implementation, programming language, compiler (and compiler options), benchmarking machine and critically the input data and time measuring method.
Let's look at this from a very different perspective - how to present information to humans.
With 2 variables you get a nice 2-dimensional grid of results, maybe like this:
        A = 1       A = 2
B = 1   4 seconds   2 seconds
B = 2   6 seconds   3 seconds
This is easy to display and easy for humans to understand and draw conclusions from (e.g. from my silly example table it's trivial to make 2 very different observations - "A=1 is twice as fast as A=2 (regardless of B)" and "B=1 is faster than B=2 (regardless of A)").
With 3 variables you get a 3-dimensional grid of results, and with N variables you get an N-dimensional grid of results. Humans struggle with "3-dimensional data on 2-dimensional screen" and more dimensions becomes a disaster. You can mitigate this a little by "peeling off" a dimension (e.g. instead of trying to present a 3D grid of results you could show multiple 2D grids); but that doesn't help humans much.
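One way to make that "peeling off" mechanical is to record every measurement as a flat record and pivot afterwards. A small sketch with pandas; the algorithms, compilers, and numbers are invented:
```python
import pandas as pd

# One flat record per measurement (invented numbers).
results = pd.DataFrame([
    {"algorithm": "quicksort", "compiler": "gcc -O3",   "size": 1_000,   "seconds": 0.0004},
    {"algorithm": "quicksort", "compiler": "clang -O3", "size": 1_000,   "seconds": 0.0005},
    {"algorithm": "mergesort", "compiler": "gcc -O3",   "size": 1_000,   "seconds": 0.0006},
    {"algorithm": "mergesort", "compiler": "clang -O3", "size": 1_000,   "seconds": 0.0007},
    {"algorithm": "quicksort", "compiler": "gcc -O3",   "size": 100_000, "seconds": 0.06},
    {"algorithm": "quicksort", "compiler": "clang -O3", "size": 100_000, "seconds": 0.07},
    {"algorithm": "mergesort", "compiler": "gcc -O3",   "size": 100_000, "seconds": 0.09},
    {"algorithm": "mergesort", "compiler": "clang -O3", "size": 100_000, "seconds": 0.10},
])

# "Peel off" the size dimension: print one 2-D grid per size value.
for size, group in results.groupby("size"):
    print(f"--- size = {size} ---")
    print(group.pivot_table(values="seconds", index="algorithm", columns="compiler"))
```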
Your primary goal is to reduce the number of variables.
To reduce the number of variables:
a) Determine how important each variable is for what you intend to observe (e.g. "which algorithm" will be extremely important and "which language" will be less important).
b) Merge variables based on importance and "logical grouping". For example, you might get three "lower importance" variables (language, compiler, compiler options) and merge them into a single "language+compiler+options" variable.
Note that it's very easy to overlook a variable. For example, you might benchmark "algorithm 1" on one computer and benchmark "algorithm 2" on an almost identical computer, but overlook the fact that (even though both benchmarks used identical languages, compilers, compiler options and CPUs) one computer has faster RAM chips, and overlook "RAM speed" as a possible variable.
Your secondary goal is to reduce number of values each variable can have.
You don't want massive table/s with 12345678 million rows; and you don't want to spend the rest of your life benchmarking to generate such a large table.
To reduce the number of values each variable can have:
a) Figure out which values matter most
b) Select the right number of values in order of importance (and ignore/skip all other values)
For example, if you merged three "lower importance" variables (language, compiler, compiler options) into a single variable; then you might decide that 2 possibilities ("C compiled by GCC with -O3" and "C++ compiled by MSVC with -Ox") are important enough to worry about (for what you're intending to observe) and all of the other possibilities get ignored.
How do I minimize the effect of said variables on the benchmark's results?
How would you go about resolving the mentioned issues, and adjust for these variables once the data is collected?
By identifying the variables (as part of the primary goal) and explicitly deciding which values the variables may have (as part of the secondary goal).
You've already been doing this. What I've described is a formal method of doing what people would unconsciously/instinctively do anyway. For one example, you have identified that "turbo boost" is a variable, and you've decided that "turbo boost disabled" is the only value for that variable you care about (but do note that this may have consequences - e.g. consider "single-threaded merge sort without the turbo boost it'd likely get in practice" vs. "parallel merge sort that isn't as influenced by turning turbo boost off").
My hope is that by describing the formal method you gain confidence in the unconscious/instinctive decisions you're already making, and realize that you were very much on the right path before you asked the question.

does i/o operations within loop decrease performance?

I am confused about how to take inputs from a file. On many hacking-challenge sites, the input will be in the format
-no of test cases
case1
case2
..so on
My question is: what is the best way to read the inputs? Should I read all the cases in one loop, store them in an array, and then perform operations on the inputs in a separate loop, OR just use one loop for both, i.e. to read and perform operations on the input? Is there any performance difference between these two methods?
I'd say not significantly. Either way, the same number of operations will be performed; the question is about clumping together "the same kind" of operations. Like AndreyT said, organizing it into stages that act on the same general area of memory might increase performance. It really depends on the kind of input and output you're doing, as well as some operating-system- and programming-language-specific variables. The question basically comes down to whether "input output input output" is slower than "input input output output", and I think that depends highly on the programming language and the data structures you're using. Look up how to set a timer or stopwatch on some code, and you can test it out for yourself. My hunch is that it will make very little to no difference, but you could be using different data structures than I have in mind, so you'll just have to test it out.
So it could help, but in my experience, unless you're doing some serious number crunching or need highly optimized code, there's a certain point where gaining a fraction of a second isn't necessary. Modern computers run so blindingly fast that you usually have a lot of computational power to spare. Of course, in a case where you're doing something really computationally intensive, every little bit helps. It's all about the trade-off between programming time and running time.
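If you want to test it yourself, a minimal stopwatch comparison of the two loop structures might look like this; input.txt is a placeholder for your test-case file, and len() stands in for whatever work you actually do per case:
```python
import time

def timed(label, fn, *args):
    start = time.perf_counter()
    result = fn(*args)
    print(f"{label}: {time.perf_counter() - start:.4f} s")
    return result

def read_then_process(path):
    # Two stages: read every case first, then do the work in a second loop.
    with open(path) as f:
        cases = [line.strip() for line in f]
    return [len(c) for c in cases]            # len() stands in for the real work

def read_and_process(path):
    # One loop: read a case and do the work on it immediately.
    results = []
    with open(path) as f:
        for line in f:
            results.append(len(line.strip()))
    return results

timed("two loops", read_then_process, "input.txt")
timed("one loop ", read_and_process, "input.txt")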
There's no definite answer to your question. Basically, "it depends".
A general performance-affecting principle that should be observed in many cases on modern hardware platforms is as follows: when you attempt to do many unrelated (or loosely related) things at once, involving access to several unrelated regions of memory, the memory locality of your code worsens and the cache behavior of your program worsens as well. Bad memory locality and the consequent poor cache behavior can have a notable negative impact on the performance of your code. For this reason, it is usually a good idea to organize your code in consecutive stages, each stage working within a more-or-less well-defined localized memory region.
This is a very general principle that is not directly related to input/output. It's just that input/output might prove to be one of those things that make your code do "too many things at once". Whether it will have an impact on the performance of your code really depends on the specifics of your code.
If your code is dominated by slow input/output operations, then its cache performance will not be a significant factor at all. If, on the other hand, your code spends most of its time doing memory-intensive computations, then it might be a good idea to experiment with such things as elimination of I/O operations from the main computation cycles.
It depends on several variables:
Type of operation: input or output?
Parallelism: synchronous or asynchronous calls?
Degree of parallelism: are there dependencies between iterations?

Preventing performance regressions in R

What is a good workflow for detecting performance regressions in R packages? Ideally, I'm looking for something that integrates with R CMD check that alerts me when I have introduced a significant performance regression in my code.
What is a good workflow in general? What other languages provide good tools? Is it something that can be built on top of unit testing, or is it usually done separately?
This is a very challenging question, and one that I'm frequently dealing with, as I swap out different code in a package to speed things up. Sometimes a performance regression comes along with a change in algorithms or implementation, but it may also arise due to changes in the data structures used.
What is a good workflow for detecting performance regressions in R packages?
In my case, I tend to have very specific use cases that I'm trying to speed up, with different fixed data sets. As Spacedman wrote, it's important to have a fixed computing system, but that's almost infeasible: sometimes a shared computer may have other processes that slow things down 10-20%, even when it looks quite idle.
My steps:
Standardize the platform (e.g. one or a few machines, a particular virtual machine, or a virtual machine + specific infrastructure, a la Amazon's EC2 instance types).
Standardize the data set that will be used for speed testing.
Create scripts and fixed intermediate data output (i.e. saved to .rdat files) that involve very minimal data transformations. My focus is on some kind of modeling, rather than data manipulation or transformation. This means that I want to give exactly the same block of data to the modeling functions. If, however, data transformation is the goal, then be sure that the pre-transformed/manipulated data is as close as possible to standard across tests of different versions of the package. (See this question for examples of memoization, cacheing, etc., that can be used to standardize or speed up non-focal computations. It references several packages by the OP.)
Repeat tests multiple times.
Scale the results relative to fixed benchmarks, e.g. the time to perform a linear regression, to sort a matrix, etc. This can allow for "local" or transient variations in infrastructure, such as may be due to I/O, the memory system, dependent packages, etc. (A sketch of this normalization idea follows the list.)
Examine the profiling output as vigorously as possible (see this question for some insights, also referencing tools from the OP).
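The normalization in the scaling step is language-agnostic; here is a minimal sketch of the idea, written in Python only to stay consistent with the other examples on this page, with placeholder workloads standing in for the real calibration benchmark and model fit:
```python
import time

def elapsed(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def calibration():
    # Fixed reference workload, analogous to "time to sort a matrix".
    sorted(range(1_000_000, 0, -1))

def workload_under_test():
    # Placeholder for the real model-fitting call.
    sum(i * i for i in range(2_000_000))

reference = min(elapsed(calibration) for _ in range(5))
measured = min(elapsed(workload_under_test) for _ in range(5))

# Report a dimensionless ratio instead of raw seconds, so results from
# different machines or days are roughly comparable.
print("normalized time:", measured / reference)
```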
Ideally, I'm looking for something that integrates with R CMD check that alerts me when I have introduced a significant performance regression in my code.
Unfortunately, I don't have an answer for this.
What is a good workflow in general?
For me, it's quite similar to general dynamic code testing: is the output (execution time in this case) reproducible, optimal, and transparent? Transparency comes from understanding what affects the overall time. This is where Mike Dunlavey's suggestions are important, but I prefer to go further, with a line profiler.
Regarding a line profiler, see my previous question, which refers to options in Python and Matlab for other examples. It's most important to examine clock time, but also very important to track memory allocation, number of times the line is executed, and call stack depth.
What other languages provide good tools?
Almost all other languages have better tools. :) Interpreted languages like Python and Matlab have the good & possibly familiar examples of tools that can be adapted for this purpose. Although dynamic analysis is very important, static analysis can help identify where there may be some serious problems. Matlab has a great static analyzer that can report when objects (e.g. vectors, matrices) are growing inside of loops, for instance. It is terrible to find this only via dynamic analysis - you've already wasted execution time to discover something like this, and it's not always discernible if your execution context is pretty simple (e.g. just a few iterations, or small objects).
As far as language-agnostic methods, you can look at:
Valgrind & cachegrind
Monitoring of disk I/O, dirty buffers, etc.
Monitoring of RAM (Cachegrind is helpful, but you could just monitor RAM allocation, and lots of details about RAM usage)
Usage of multiple cores
Is it something that can be built on top of unit testing, or is it usually done separately?
This is hard to answer. For static analysis, it can occur before unit testing. For dynamic analysis, one may want to add more tests. Think of it as sequential design (i.e. from an experimental design framework): if the execution costs appear to be, within some statistical allowances for variation, the same, then no further tests are needed. If, however, method B seems to have an average execution cost greater than method A, then one should perform more intensive tests.
Update 1: If I may be so bold, there's another question that I'd recommend including, which is: "What are some gotchas in comparing the execution time of two versions of a package?" This is analogous to assuming that two programs that implement the same algorithm should have the same intermediate objects. That's not exactly true (see this question - not that I'm promoting my own questions, here - it's just hard work to make things better and faster...leading to multiple SO questions on this topic :)). In a similar way, two executions of the same code can differ in time consumed due to factors other than the implementation.
So, some gotchas that can occur, either within the same language or across languages, within the same execution instance or across "identical" instances, which can affect runtime:
Garbage collection - different implementations or languages can hit garbage collection under different circumstances. This can make two executions appear different, though it can be very dependent on context, parameters, data sets, etc. The GC-obsessive execution will look slower.
Cacheing at the level of the disk, motherboard (e.g. L1, L2, L3 caches), or other levels (e.g. memoization). Often, the first execution will pay a penalty.
Dynamic voltage scaling - This one sucks. When there is a problem, this may be one of the hardest beasties to find, since it can go away quickly. It looks like cacheing, but it isn't.
Any job priority manager that you don't know about.
One method uses multiple cores or does some clever stuff about how work is parceled among cores or CPUs. For instance, getting a process locked to a core can be useful in some scenarios. One execution of an R package may be luckier in this regard, another package may be very clever...
Unused variables, excessive data transfer, dirty caches, unflushed buffers, ... the list goes on.
The key result is: Ideally, how should we test for differences in expected values, subject to the randomness created due to order effects? Well, pretty simple: go back to experimental design. :)
When the empirical differences in execution times are different from the "expected" differences, it's great to have enabled additional system and execution monitoring so that we don't have to re-run the experiments until we're blue in the face.
The only way to do anything here is to make some assumptions. So let us assume an unchanged machine, or else require a 'recalibration'.
Then use a unit-test-like framework, and treat 'has to be done in X units of time' as just yet another testing criterion to be fulfilled. In other words, do something like
stopifnot(system.time(someExpression)[["elapsed"]] < savedValue + fudge)  # savedValue: prior timing, fudge: tolerance
so we would have to associate prior timings with given expressions. Equality-testing comparisons from any one of the three existing unit testing packages could be used as well.
Nothing that Hadley couldn't handle, so I think we can almost expect a new package timr after the next long academic break :). Of course, this has to either be optional, because on an "unknown" machine (think: CRAN testing the package) we have no reference point, or else the fudge factor has to "go to 11" to automatically accept on a new machine.
A recent change announced on the R-devel feed could give a crude measure for this.
CHANGES IN R-devel UTILITIES
‘R CMD check’ can optionally report timings on various parts of the check: this is controlled by environment variables documented in ‘Writing R Extensions’.
See http://developer.r-project.org/blosxom.cgi/R-devel/2011/12/13#n2011-12-13
The overall time spent running the tests could be checked and compared to previous values. Of course, adding new tests will increase the time, but dramatic performance regressions could still be seen, albeit manually.
This is not as fine grained as timing support within individual test suites, but it also does not depend on any one specific test suite.

Are dynamic languages slower than static languages? [closed]

Are dynamic languages slower than static languages because, for example, the run-time has to check the type consistently?
No.
Dynamic languages are not slower than static languages. In fact, it is impossible for any language, dynamic or not, to be slower than another language (or faster, for that matter), simply because a language is just a bunch of abstract mathematical rules. You cannot execute a bunch of abstract mathematical rules, therefore they cannot ever be slow(er) or fast(er).
The statement that "dynamic languages are slower than static languages" is not only wrong, it doesn't even make sense. If English were a typed language, that statement wouldn't even typecheck.
In order for a language to even be able to run, it has to be implemented first. Now you can measure performance, but you aren't measuring the performance of the language, you are measuring the performance of the execution engine. Most languages have many different execution engines, with very different performance characteristics. For C, for example, the difference between the fastest and slowest implementations is a factor of 100000 or so!
Also, you cannot really measure the performance of an execution engine, either: you have to write some code to run on that execution engine first. But now you aren't measuring the performance of the execution engine, you are measuring the performance of the benchmark code. Which has very little to do with the performance of the execution engine and certainly nothing to do with the performance of the language.
In general, running well-designed code on well-designed high-performance execution engines will yield about the same performance, independent of whether the language is static or dynamic, procedural, object-oriented or functional, imperative or declarative, lazy or strict, pure or impure.
In fact, I would propose that the performance of a system is solely dependent on the amount of money that was spent making it fast, and completely independent of any particular typing discipline, programming paradigm or language.
Take for example Smalltalk, Lisp, Java and C++. All of them are, or have at one point been, the language of choice for high-performance code. All of them have huge amounts of engineering and research man-centuries expended on them to make them fast. All of them have highly-tuned proprietary commercial high-performance execution engines available. Given roughly the same problem, implemented by roughly comparable developers, they all perform roughly the same.
Two of those languages are dynamic, two are static. Java is interesting, because although it is a static language, most modern high-performance implementations are actually dynamic implementations. (In fact, several modern high-performance JVMs are actually either Smalltalk VMs in disguise, derived from Smalltalk VMs or written by Smalltalk VM companies.) Lisp is also interesting, because although it is a dynamic language, there are some (although not many) static high-performance implementations.
And we haven't even begun talking about the rest of the execution environment: modern mainstream operating systems, mainstream CPUs and mainstream hardware architectures are heavily biased towards static languages, to the point of being actively hostile for dynamic languages. Given that modern mainstream execution environments are pretty much of a worst-case scenario for dynamic languages, it is quite astonishing how well they actually perform and one can only imagine what the performance in a less hostile environment would look like.
All other things being equal, usually, yes.
First you must clarify whether you consider
dynamic typing vs. static typing or
statically compiled languages vs. interpreted languages vs. bytecode JIT.
Usually we mean
dynamic language = dynamic typing + interpreted at run time, and
static language = static typing + statically compiled,
but that's not necessarily the case.
Type information can help the VM dispatch a message faster than without type information, but the difference tends to disappear with optimizations in the VM that detect monomorphic call sites. See the paragraph "performance consideration" in this post about dynamic invocation.
The debate between compiled vs. interpreted vs. bytecode JIT is still open. Some argue that bytecode JIT results in faster execution than regular compilation because the compilation is more accurate due to the extra information collected at run time. Read the Wikipedia entry about JIT for more insight. Interpreted languages are indeed slower than either of the two forms of compilation.
I will not argue further and start a heated discussion; I just wanted to point out that the gap between the two tends to get smaller and smaller. Chances are that any performance problem you face will not be related to the language and VM but to your design.
EDIT
If you want numbers, I suggest you look at The Computer Language Benchmarks Game. I found it insightful.
At the instruction level current implementations of dynamically typed languages are typically slower than current implementations of statically typed languages.
However, that does not necessarily mean that a given program will be slower in dynamic languages - there are lots of documented cases of the same program being implemented in both a static and a dynamic language where the dynamic implementation turned out to be faster. For example, this study (PDF) gave the same problem to programmers in a variety of languages and compared the results. The mean runtime of the Python and Perl implementations was lower than the mean runtime of the C++ and Java implementations.
There are several reasons for this:
1) the code can be implemented more quickly in a dynamic language, leaving more time for optimisation.
2) high level data structures (maps, sets etc) are a core part of most dynamic languages and so are more likely to be used. Since they are core to the language they tend to be highly optimised.
3) programmer skill is more important than language speed - an inexperienced programmer can write slow code in any language. In the study mentioned above there were several orders of magnitude difference between the fastest and slowest implementation in each of the languages.
4) in many problem domains, execution speed is dominated by I/O or some other factor external to the language.
5) Algorithm choice can dwarf language choice. In the book "More Programming Pearls", Jon Bentley implemented two algorithms for a problem: one was O(N^3) and implemented in optimised Fortran on a Cray-1; the other was O(N) and implemented in BASIC on a TRS-80 home micro (this was in the 1980s). The TRS-80 outperformed the Cray-1 for N > 5000.
Dynamic language run-times only need to check the type occasionally.
But it is still, typically, slower.
There are people making good claims that such performance gaps are attackable, however; e.g. http://steve-yegge.blogspot.com/2008/05/dynamic-languages-strike-back.html
The most important factor to consider is the method dispatch algorithm. With static languages, each method is typically allocated an index. The names we see in source are not actually used at runtime; they are in the source for readability purposes. Naturally, languages like Java keep them and make them available through reflection, but in terms of how a method is invoked they are not used. (I will leave reflection and binding out of this discussion.) This means that when a method is invoked, the runtime simply uses the offset to index into a table and call. A dynamic language, on the other hand, uses the name of the function to look it up in a map and then calls said function. A hashmap lookup is always going to be slower than an index lookup into an array.
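A toy model of the two dispatch strategies, just to show the shape of the lookups; this is not how any particular VM is implemented, and both variants here run on a dynamic runtime anyway, so the measured numbers only illustrate the idea:
```python
import timeit

class Point:
    def __init__(self, x):
        self.x = x
    def get_x(self):
        return self.x

p = Point(3)

# "Static" flavour: the method was resolved to slot 0 ahead of time,
# so a call is an array index followed by the call itself.
vtable = [Point.get_x]
def static_style_call(obj):
    return vtable[0](obj)

# "Dynamic" flavour: the method is found by looking its name up in a
# hash map (here, the attribute dictionary) on every call.
def dynamic_style_call(obj):
    return getattr(obj, "get_x")()

print("index lookup:", timeit.timeit(lambda: static_style_call(p), number=1_000_000))
print("name lookup :", timeit.timeit(lambda: dynamic_style_call(p), number=1_000_000))
```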
No, dynamic languages are not necessarily slower than static languages.
The pypy and psyco projects have been making a lot of progress on building JIT compilers for python that have data-driven compilation; in other words, they will automatically compile versions of frequently called functions specialised for particular common values of arguments. Not just by type, like a C++ template, but actual argument values; say an argument is usually zero, or None, then there will be a specifically compiled version of the function for that value.
This can lead to compiled code that is faster than you'd get out of a C++ compiler, and since it is doing this at runtime, it can discover optimisations specifically for the actual input data for this particular instance of the program.
It is reasonable to assume so, as more things need to be computed at runtime.
Actually, it's difficult to say, because many of the benchmarks used are not that representative. And with more sophisticated execution environments, like the HotSpot JVM, the differences are becoming less and less relevant. Take a look at the following article:
Java theory and practice: Dynamic compilation and performance measurement
