For instance, algorithms such as LZ77 might require previous results in order to proceed, but it is still possible to execute them in parallel, at least to some extent (e.g. http://www.cs.cmu.edu/~jshun/dcc2013-final.pdf).
Are there any specific, real-world algorithms which have to be executed only sequentially?
Wikipedia:
Some problems have no parallel algorithms, and are called inherently serial problems.
Examples:
http://en.wikipedia.org/wiki/Three-body_problem
http://en.wikipedia.org/wiki/Newton%27s_method
You can also have a look at this thread algorithms that cannot be sped up by parallelisation
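To see why something like Newton's method is considered inherently serial, here is a minimal sketch of my own (solving f(x) = x^2 - 2): each new iterate is computed from the previous one, so the iterations themselves cannot be distributed across cores.

```scala
// Newton's method for f(x) = x^2 - 2 (root is sqrt(2)).
// Each iterate depends on the previous one, so the loop is inherently sequential.
object NewtonDemo extends App {
  def f(x: Double): Double  = x * x - 2.0
  def df(x: Double): Double = 2.0 * x

  var x = 1.0                        // initial guess
  for (_ <- 1 to 20)                 // x_{n+1} = x_n - f(x_n) / f'(x_n)
    x = x - f(x) / df(x)

  println(s"approx sqrt(2) = $x")
}
```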
Related
I am studying for a Parallel Computing exam, and I read that the hotspot problem can occur both in parallel and in sequential execution, but that in the former it is more dangerous.
Now I would like to understand, through simple examples, a case of a sequential hotspot problem and a parallel one, and if possible the difference between them.
I have written a new algorithm for something. Now I need to compare it with existing methods, some of which are about 10 years old.
The idea I had is to look at benchmarks of different processors over the years in order to establish how much faster my processor (i7-920) is than an average processor from 2003. Then I would simply divide the old methods' execution times by the speedup factor and use those numbers to compare with my own algorithm.
Has something like this been done? So I don't redo the existing work.
Can such a comparison be done some other way?
Are there some scientific papers written about such comparisons which I can reference?
I don't know which of these are possible for you, but here's a list of options I can think of:

1. Run their implementation side-by-side on your machine against yours. This is the best option; a minimal timing sketch follows this list.
2. Rewrite their implementations and do (1). You preferably need to compare against their tests to ensure you get vaguely similar results.
3. Find a library that implements their algorithm (or multiple libraries) and do (1). I suggest multiple libraries, if possible, since a single one may not have implemented the algorithm efficiently. You may also want to compare these against their tests.
4. Compare the algorithms mathematically. This may be difficult, but it's not impossible.
5. Do what you presented.
(a) I would not recommend this, as there are determining factors in your computer other than processor speed that affect the speed of an algorithm, and getting an equation that perfectly balances these will likely be very difficult.
(b) There is a massive difference between top- and bottom-of-the-line computers, so using the average is not a particularly good idea. If the authors didn't provide details about their hardware, I'm afraid your benchmark is not likely to be too accurate.
6. Go out and buy a machine of similar specs to the one used in the original tests to benchmark on. A 10-year-old machine should be pretty cheap, if you can find one. Also, see (5.b).
7. Contact the authors to enable any of the other options. Papers often provide contact details of the authors, or you should be able to find them elsewhere if they have any sort of online presence and you're half-decent at using Google.
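As a rough illustration of option 1, here is a minimal timing harness; newAlgorithm and oldAlgorithm are hypothetical placeholders for your implementation and the authors', and for publishable numbers you would want a proper benchmarking harness (e.g. JMH on the JVM) to handle warm-up and JIT effects.

```scala
// Rough side-by-side timing sketch; newAlgorithm/oldAlgorithm are stand-ins only.
object CompareOnSameMachine extends App {
  def newAlgorithm(input: Array[Int]): Long = input.map(_.toLong).sum   // stand-in
  def oldAlgorithm(input: Array[Int]): Long = {                         // stand-in
    var s = 0L; var i = 0
    while (i < input.length) { s += input(i); i += 1 }
    s
  }

  def time[A](label: String)(body: => A): A = {
    val start  = System.nanoTime()
    val result = body
    println(f"$label: ${(System.nanoTime() - start) / 1e6}%.2f ms")
    result
  }

  val data = Array.tabulate(1 << 20)(i => i)
  // Warm up both implementations before measuring, then time them on identical input.
  (1 to 5).foreach { _ => newAlgorithm(data); oldAlgorithm(data) }
  time("new")(newAlgorithm(data))
  time("old")(oldAlgorithm(data))
}
```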
If I were reviewing your results, I would be annoyed if you attempted to demonstrate less than an order of magnitude speedup this way. There are a lot of variables determining algorithm performance, and I would be skeptical that a generic benchmark could capture the right ones. My gold standard is old and new algorithms implemented by the same programmer, with similar effort made to optimize, running on the same hardware. Using the previous authors' implementation instead of making a new one is commonplace in the experimental algorithms literature, but using different hardware isn't.
Algorithmic performance is usually measured in big-O terms, for which it is better to count basic operations, like comparisons, and do it for a range of input sizes.
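For example, you can instrument the algorithm with a counter and record the number of basic operations for a range of input sizes; the sketch below does this for an insertion sort, which is just a stand-in for whatever algorithm you are actually measuring.

```scala
// Count comparisons instead of wall-clock time; insertion sort is only an example.
object CountOperations extends App {
  def insertionSortComparisons(a: Array[Int]): Long = {
    val arr = a.clone()
    var comparisons = 0L
    for (i <- 1 until arr.length) {
      val key = arr(i)
      var j = i - 1
      var continue = true
      while (j >= 0 && continue) {
        comparisons += 1                       // one key comparison
        if (arr(j) > key) { arr(j + 1) = arr(j); j -= 1 }
        else continue = false
      }
      arr(j + 1) = key
    }
    comparisons
  }

  val rng = new scala.util.Random(42)
  for (n <- Seq(1000, 2000, 4000, 8000)) {     // a range of input sizes
    val input = Array.fill(n)(rng.nextInt())
    println(s"n = $n, comparisons = ${insertionSortComparisons(input)}")
  }
}
```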
If you must measure overall time, at least eliminate other sources of difference.
As @larsmans said, do it on the same processor.
Also, if there is existing work, there's no harm in repeating it.
Generally, in science, that's a good thing.
You should attempt to reduce the number of differing factors between the two runs. I think timing the two algorithms side by side on the same machine and/or comparing their big-O complexities are both equally valid and important. You should also attempt to use up-to-date libraries and other external functions; using outdated ones may also skew the timing results.
In keeping with my interests in algorithms (see here), I would like to know if there are (contrary to my previous question) algorithms and data structures that are mainstream in parallel programming. It is probably early to ask about mainstream parallel algos and ds, but some of the gurus here may have had good or bad experiences with some of them.
EDIT: I am more interested in successful practical applications of algos and ds than in academic papers.
Thanks
Many of Google's whitepapers, especially but not exclusively the ones linked from this page, describe successful practical applications of parallel distributed computing and/or their DS and algorithmic underpinnings. For example, this paper deals with modifying a DBMS's data structures to extract intra-transaction parallelism; this one (and some others) introduces the popular MapReduce architecture, since implemented in, e.g., Hadoop; this one is about highly parallelizable approximate matrix factoring suitable for use in "kernel methods" in machine learning; and so on.
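To give a feel for the map/reduce shape those papers describe, here is a toy single-machine word count; it shows only the structure of the computation, not Google's or Hadoop's implementation.

```scala
// Toy word count in the map/reduce shape: map each record to (key, value) pairs,
// group by key, then reduce each group. Single-machine sketch only.
object WordCountSketch extends App {
  val documents = Seq("the quick brown fox", "the lazy dog", "the quick dog")

  val mapped: Seq[(String, Int)] =
    documents.flatMap(_.split("\\s+")).map(word => (word, 1))      // map phase

  val reduced: Map[String, Int] =
    mapped.groupBy(_._1).map { case (word, pairs) =>               // shuffle + reduce phase
      (word, pairs.map(_._2).sum)
    }

  reduced.toSeq.sortBy(-_._2).foreach { case (w, c) => println(s"$w: $c") }
}
```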
Maybe I totally miss the point, but there are a ton of mainstream parallel algos and data structures, e.g. matrix multiplication, FFT, PDE and linear-equation solvers, integration and simulation (Monte Carlo / random numbers), searching and sorting, and so on. Take a look at Designing and Building Parallel Programs or Patterns for Parallel Programming. And then there is CUDA and the like. What are you after?
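As one small example from the simulation category, here is a sketch of a Monte Carlo estimate of pi split into independent chunks with scala.concurrent.Future; the samples are independent, so the chunks parallelize trivially (the chunk count and sample size here are arbitrary choices).

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Monte Carlo estimate of pi: samples are independent, so chunks can run in parallel.
object MonteCarloPi extends App {
  def countHits(samples: Int, seed: Long): Long = {
    val rng = new scala.util.Random(seed)
    var hits = 0L
    for (_ <- 0 until samples) {
      val x = rng.nextDouble(); val y = rng.nextDouble()
      if (x * x + y * y <= 1.0) hits += 1
    }
    hits
  }

  val chunks = 8
  val samplesPerChunk = 1000000
  val futures = (0 until chunks).map(i => Future(countHits(samplesPerChunk, seed = i)))
  val totalHits = Await.result(Future.sequence(futures), 1.minute).sum
  println(s"pi ~= ${4.0 * totalHits / (chunks.toLong * samplesPerChunk)}")
}
```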
Sorting:
Standard Template Library for Extra Large Data Sets
Sort Benchmark
How to implement efficient sorting algorithms for multiple processors in Scala? Here's the link for radix algorithm in GPU:
radix algorithm in GPU
You could use scala.actors.Futures, but it isn't a good solution, because you are talking about parallel computation, not concurrent computation, and Futures is aimed at the latter, not the former.
Things like parallel arrays that are coming with Java 7 and a later (not 2.8) version of Scala are more appropriate for parallel algorithms.
Just to explain: a parallel algorithm is one that performs the same computation on multiple processing units, and each of those units runs the same code. A concurrent computation is one in which each processing unit may be running different code.
Also related: in parallel algorithms the code being run doesn't change, only the data; in concurrent computation the code itself changes constantly.
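A rough way to see the distinction in code (my own sketch, not tied to any particular library's terminology): in the parallel case the same function runs over different slices of the data, while in the concurrent case unrelated tasks simply run at the same time.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object ParallelVsConcurrent extends App {
  val data = (1 to 1000000).toVector

  // Parallel: the SAME code (a sum) runs on different slices of the same data set.
  val slices      = data.grouped(250000).toVector
  val partialSums = slices.map(slice => Future(slice.map(_.toLong).sum))
  val parallelSum = Await.result(Future.sequence(partialSums), 1.minute).sum

  // Concurrent: DIFFERENT pieces of code merely happen to run at the same time.
  val logTask  = Future { println("writing a log entry") }
  val mathTask = Future { math.sqrt(2.0) }
  Await.result(logTask, 1.minute)
  val root = Await.result(mathTask, 1.minute)

  println(s"parallel sum = $parallelSum, sqrt(2) = $root")
}
```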
By the way, though that is not what you are asking, let me state that there's a library for Scala to run OpenCL code (ie, run computation on GPU). It's called ScalaCL.
My experience thus far has shown me that, even with multi-core processors, parallelizing an algorithm won't always speed it up noticeably. In fact, sometimes it can slow things down. What are some good hints that an algorithm can be sped up significantly by being parallelized?
(Of course, given the caveats about premature optimization and its correlation with evil.)
To gain the most benefit from parallelisation, a task should be able to be broken into similar-sized, coarse-grained chunks that are independent (or mostly so) and require little communication of data or synchronisation between the chunks.
Fine-grained parallelisation almost always suffers from increased overheads, and will have a finite speed-up regardless of the number of physical cores available.
[The caveat to this is those architectures that have a very large number of 'cores' (such as the Connection Machine's 64,000 cores). These are well suited to calculations that can be broken into relatively simple actions assigned to a particular topology (like a rectangular mesh).]
If you can divide the work into independent parts then it may be parallelized well.
Remember also Amdahl's Law, which is a sobering reminder of how little we can expect in terms of performance gains from adding more cores to most programs.
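To get a feel for what Amdahl's Law implies, the snippet below evaluates speedup(n) = 1 / ((1 - p) + p / n), where p is the parallel fraction of the program; the 90% figure is assumed purely for illustration.

```scala
// Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n), p = parallelizable fraction.
object AmdahlDemo extends App {
  def speedup(p: Double, n: Int): Double = 1.0 / ((1.0 - p) + p / n)

  val p = 0.9                                   // assume 90% of the program parallelizes
  for (n <- Seq(2, 4, 8, 16, 64, 1024))
    println(f"cores = $n%4d  speedup = ${speedup(p, n)}%.2f")
  // Even with unlimited cores the speedup is capped at 1 / (1 - p) = 10x here.
}
```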
First, check out this paper by the late Jim Gray:
Distributed Computing Economics
Actually, this will clear up some misunderstandings based on what you wrote in the question. Obviously, the less amenable your problem set is to being discretized, the more difficult it will be.
Any time you have computations that depend on previous computations, you don't have a parallel problem. Things like linear image processing, brute-force methods, and genetic algorithms are all easily parallelized.
A good analogy is: what could you work on where you could get a bunch of friends to do different parts at once? For example, putting IKEA furniture together might parallelize well if different people can work on different sections, but rolling wallpaper might not, because you need to do the walls in sequence.
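To make that concrete in code, here is a small sketch of my own: the per-element transform has no dependencies between iterations and could be split across cores, while the running-balance loop needs the previous result at every step.

```scala
object DependentVsIndependent extends App {
  val pixels = Array.fill(1000000)(0.5)

  // Independent: each output element depends only on its own input element,
  // so the work can be split across cores (shown sequentially here for clarity).
  val brightened = pixels.map(p => math.min(1.0, p * 1.2))

  // Dependent: each step needs the previous result, so it must run in order.
  var balance = 100.0
  val rates   = Array.fill(12)(1.01)
  for (r <- rates) balance *= r      // month n needs the balance after month n-1

  println(s"first pixel = ${brightened(0)}, final balance = $balance")
}
```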
If you're doing large matrix computations, like simulations involving finite-element models, these can often be broken down into smaller pieces in straightforward ways. Matrix-vector multiplies can benefit well from parallelization, assuming you are dealing with very large matrices. Unless there is a real performance bottleneck that is causing code to run slowly, it's probably not necessary to bother with parallel processing.
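For instance, each row of a matrix-vector product y = A·x depends only on that row of A and the vector x, so rows can be computed independently; the sketch below splits the rows across scala.concurrent Futures, assuming the matrix is large enough for the overhead to pay off.

```scala
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

// Each output row of y = A * x depends only on one row of A, so rows can be
// computed in independent parallel chunks.
object ParallelMatVec extends App {
  val n = 2000
  val a = Array.tabulate(n, n)((i, j) => (i + j) % 7 + 1.0)   // dense test matrix
  val x = Array.fill(n)(1.0)

  def rowDot(i: Int): Double = {
    var s = 0.0; var j = 0
    while (j < n) { s += a(i)(j) * x(j); j += 1 }
    s
  }

  val chunkFutures = (0 until n).grouped(n / 4).toVector.map { rows =>
    Future(rows.map(i => (i, rowDot(i))))
  }
  val y = new Array[Double](n)
  Await.result(Future.sequence(chunkFutures), 1.minute)
    .foreach(_.foreach { case (i, v) => y(i) = v })

  println(s"y(0) = ${y(0)}, y(${n - 1}) = ${y(n - 1)}")
}
```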
Well, if you need lots of locks for it to work, then it's probably one of those difficult algorithms that doesn't parallelise well. Is there any part of the algorithm that can be broken up into separate parts that don't need to touch each other?