Concerning NVIDIA GPUs, the authors of the paper "High Performance and Scalable GPU Graph Traversal" say:
1. A sequence of kernel invocations is bulk-synchronous: each kernel is initially presented with a consistent view of the results from the previous.
2. Prefix-sum is a bulk-synchronous algorithmic primitive.
I am unable to understand these two points (I know GPU-based prefix sum, though). Can someone help me with this concept?
1. A sequence of kernel invocations is bulk-synchronous: each kernel is initially presented with a consistent view of the results from the previous.
It's about the parallel computation model (bulk-synchronous parallel, BSP): each processor has its own fast memory (like a CPU cache) and performs its computation on values stored there without any synchronization. Then a non-blocking communication step takes place: each processor sends the data it has computed so far and receives data from its neighbours. Then there's another synchronization step, a barrier, which makes all of them wait for each other before the next superstep.
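A toy sketch of that superstep structure, written here in Java with a CyclicBarrier standing in for the BSP barrier (this is only an illustration of the model, not GPU code):
import java.util.concurrent.CyclicBarrier;

public class BspSketch {
    public static void main(String[] args) {
        final int processors = 4;
        final int supersteps = 3;
        final int[] shared = new int[processors];               // stands in for the communication step
        final CyclicBarrier barrier = new CyclicBarrier(processors);

        for (int p = 0; p < processors; p++) {
            final int id = p;
            new Thread(() -> {
                int local = id;                                  // private, fast memory
                try {
                    for (int s = 0; s < supersteps; s++) {
                        local = local * 2 + 1;                   // 1) local computation, no synchronization
                        shared[id] = local;                      // 2) publish the result for the neighbours
                        barrier.await();                         // 3) barrier: everyone waits for everyone
                        local += shared[(id + 1) % processors];  // the next superstep sees a consistent view
                        barrier.await();                         // don't overwrite shared[] before all have read it
                    }
                } catch (Exception e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}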
2. Prefix-sum is a bulk-synchronous algorithmic primitive
I believe that's about the second step of the BSP model: synchronization. That's how processors store and fetch data for the next step.
The name of the model implies that it is highly concurrent (many, many processes working synchronously relative to each other). And this is how we get to the second point.
Since we want to live up to the name (be highly concurrent), we want to get rid of sequential parts wherever possible. We can achieve that with prefix-sum.
Consider prefix-sum with the associative operator +. A scan of the sequence [5 2 0 3 1] then returns [0 5 7 7 10] (the exclusive scan; the inclusive form is [5 7 7 10 11]). So now we can replace this sequential pseudocode:
foreach i = 1...n
    foo[i] = foo[i-1] + bar(i);
with this pseudocode, which can now be parallel(!):
foreach(i)
    baz[i] = bar(i);
scan(foo, baz);
That is a very naive version, but it's okay for explanation.
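Here is a minimal runnable sketch of the same idea in Java rather than on a GPU (bar is just a stand-in function; Arrays.parallelSetAll fills the array in parallel and Arrays.parallelPrefix performs a parallel inclusive scan):
import java.util.Arrays;

public class ScanDemo {
    // stand-in for the bar(i) from the pseudocode above
    static int bar(int i) {
        return i * i;
    }

    public static void main(String[] args) {
        int n = 10;
        int[] foo = new int[n];
        // phase 1: every element is independent, so it can be computed in parallel
        Arrays.parallelSetAll(foo, i -> bar(i));
        // phase 2: the sequential dependency foo[i] = foo[i-1] + bar(i)
        // becomes a parallel (inclusive) prefix sum over the same array
        Arrays.parallelPrefix(foo, Integer::sum);
        System.out.println(Arrays.toString(foo));
    }
}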
I am working with parallel stream processing and have found that if I use a plain array stream, it gets processed very fast. But if I use an ArrayList, the processing gets a bit slower, and if I use a LinkedList or some binary tree, the processing gets slower still.
All that suggests that the more splittable the stream is, the faster the processing will be. That would mean arrays and ArrayList are the most efficient for parallel streams. Is that true? If so, should we always use an ArrayList or an array if we want to process a stream in parallel? And how should LinkedList and BlockingQueue be used with parallel streams?
Another factor is the statefulness of the intermediate operations chosen. If I perform stateless operations like filter() and map(), the performance is high, but if I perform stateful operations like distinct(), sorted(), limit() or skip(), it takes a lot of time, so again the parallel stream gets slower. Does that mean we should not use stateful intermediate operations in parallel streams? If so, what is the workaround?
Well, as discussed in this question, there is hardly any reason to use LinkedList at all. The higher iteration costs apply to all operations, not just parallel streams.
Generally, the splitting support has indeed a big impact on the parallel performance: first, whether a collection has genuine, hopefully cheap, splitting support rather than inheriting the buffering default behavior of AbstractSpliterator; second, how balanced the splits are.
In this regard, there is no reason why a binary tree should perform badly. A tree can be split into sub-trees easily and if the tree is balanced at the beginning, the splits will be balanced too. Of course, this requires that the actual Collection implementation implements the spliterator() method returning a suitable Spliterator implementation rather than inheriting the default method. E.g. TreeSet has a dedicated spliterator. Still, iterating the sub-trees might be more expensive than iterating an array, but that’s not a property of the parallel processing, as that would apply to sequential processing as well or any kind of iteration over the elements in general.
The question, how to use LinkedList and BlockingQueue in case of parallel streams, is moot. You choose the collection type depending on the application’s needs and if you really need one of these (in case of LinkedList hard to imagine), then you use it and live with the fact that its parallel stream performance would be less than that of ArrayList, which apparently didn’t fit your other needs. There is no general trick to make the parallel stream performance of badly splittable collections better. If there was, it would be part of the library.
There are some corner cases where the JRE doesn’t provide the maximum performance, which will be addressed in Java 9, like String.chars(), Files.lines() or the default spliterator for 3rd-party RandomAccess Lists, but none of these apply to LinkedList, BlockingQueue or custom binary tree implementations.
In other words, if you have a particular use case with a particular collection, there might be something to improve, but there is no trick that could improve the parallel performance of all tasks with all collections.
It is correct that stateful intermediate operations like distinct(), sorted(), limit() and skip() have higher costs for parallel streams, and their documentation even says so. So we could give the general advice to avoid them, especially for parallel streams, but that would be kind of pointless, as you wouldn’t use them if you didn’t need them. And again, there is no general work-around, as there wouldn’t be much sense in offering these operations if there were a generally better alternative.
Not a bad question IMO.
Of course an array and an ArrayList are going to split much better than a LinkedList or some kind of tree. You can look at how their Spliterators are implemented to convince yourself. The poorly splittable ones usually start with some batch size (1024 elements) and increase from there; LinkedList does that, and Files.lines did too, if I remember correctly. So yes, using an array or an ArrayList will parallelize very well.
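A quick way to see the difference is to call trySplit() yourself and compare how the work gets divided (a small sketch; the exact sizes depend on the JDK's Spliterator implementations):
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.Spliterator;

public class SplitDemo {
    public static void main(String[] args) {
        List<Integer> arrayList = new ArrayList<>();
        List<Integer> linkedList = new LinkedList<>();
        for (int i = 0; i < 1_000_000; i++) {
            arrayList.add(i);
            linkedList.add(i);
        }

        // ArrayList splits roughly in half by simple index arithmetic - cheap and balanced
        Spliterator<Integer> a = arrayList.spliterator();
        Spliterator<Integer> aPrefix = a.trySplit();
        System.out.println(aPrefix.estimateSize() + " / " + a.estimateSize());

        // LinkedList has to walk the list and copy a batch of elements into an array
        // to hand out a split, which is part of what makes its parallel streams slower
        Spliterator<Integer> l = linkedList.spliterator();
        Spliterator<Integer> lPrefix = l.trySplit();
        System.out.println(lPrefix.estimateSize() + " / " + l.estimateSize());
    }
}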
If you want better parallel support for structures like LinkedList, you could write your own Spliterator - I think StreamEx did that for Files.lines, starting with a smaller batch size... And this is a related question, btw.
The other thing is that when you use a stateful intermediate operation, you effectively make the intermediate operations above it behave in a stateful way too... Let me provide an example:
IntStream.of(1, 3, 5, 2, 6)
        .filter(x -> {
            System.out.println("Filtering : " + x);
            return x > 2;
        })
        .sorted()
        .peek(x -> System.out.println("Peek : " + x))
        .boxed()
        .collect(Collectors.toList());
This will print:
Filtering : 1
Filtering : 3
Filtering : 5
Filtering : 2
Filtering : 6
Peek : 3
Peek : 5
Peek : 6
Because you have used sorted() and filter() sits above it, filter() has to take all elements and process them first, so that sorted() is applied to the correct ones.
On the other hand, if you drop sorted():
IntStream.of(1, 3, 5, 2, 6)
        .filter(x -> {
            System.out.println("Filtering : " + x);
            return x > 2;
        })
        // .sorted()
        .peek(x -> System.out.println("Peek : " + x))
        .boxed()
        .collect(Collectors.toList());
The output is going to be:
Filtering : 1
Filtering : 3
Peek : 3
Filtering : 5
Peek : 5
Filtering : 2
Filtering : 6
Peek : 6
Generally I do agree - I try to avoid stateful intermediate operations if I can. Maybe you don't actually need sorted(), say; maybe you can collect into a TreeSet instead... etc. But I don't overthink it: if I need to use one, I just do, and maybe measure to see if it's really a bottleneck.
Unless you are really hitting performance problems around this, I would not take it that much into account, especially since you need lots of elements to actually get any speed benefit from going parallel.
Here is a related question that shows that you really really need lots of elements to see a performance gain.
Lazy evaluation is said to be a way of delaying a computation until the first time it is needed. This tends to avoid repeated evaluations, and that's why I would imagine it performs a lot faster.
Functional languages like Haskell (and JavaScript..?) have this functionality built in.
However, I don't understand how and why other 'normal' approaches (that is, the same functionality but without lazy evaluation) are slower... how and why do these other approaches end up doing repeated evaluations? Can someone elaborate by giving simple examples and explaining the mechanics of each approach?
Also, according to Wikipedia page about lazy evaluation these are said to be the advantages of this approach:
1. Performance increases by avoiding needless calculations, and error conditions in evaluating compound expressions
2. The ability to construct potentially infinite data structures
3. The ability to define control flow (structures) as abstractions instead of primitives
However, can't we just control which calculations are needed and avoid repeating the same ones? (1)
We can use e.g. a linked list to create an infinite data structure (2)
Can't we do (3) already? We can define classes/templates/objects and use those instead of primitives (e.g. in JavaScript).
Additionally, it seems to me that (at least from the cases I have seen) lazy evaluation goes hand in hand with recursion and with the 'head' and 'tail' notions (along with others). Surely there are cases where recursion is useful, but is lazy evaluation something more than that...? more than a recursive approach to solving a problem..? Streamjs is a JavaScript library that uses recursion along with some other simple operations (head, tail, etc.) to perform lazy evaluation.
It seems I can't get my head around it...
Thanks in advance for any contribution.
I'll show examples in both Python 2.7 and Haskell.
Say, for example, you wanted to do a really inefficient sum of all the numbers from 0 to 10,000,000. You could do this with a for loop in Python as
total = 0
for i in range(10000000):
    total += i
print total
On my computer, this takes about 1.3s to execute. If instead I change range to xrange (the generator form of range, which lazily produces a sequence of numbers), it takes 1.2s, only slightly faster. However, if I check the memory used (using the memory_profiler package), the version with range uses about 155MB of RAM, while the xrange version uses only 1MB of RAM (both numbers not including the ~11MB Python itself uses). This is an incredibly dramatic difference, and we can see where it comes from with this tool as well:
Mem usage Increment Line Contents
===========================================
10.875 MiB 0.004 MiB total = 0
165.926 MiB 155.051 MiB for i in range(10000000):
165.926 MiB 0.000 MiB total += i
return total
This says that before we started we were using 10.875MB, total = 0 added 0.004MB, and then for i in range(10000000): added 155.051MB when it generated the entire list of numbers [0..9999999]. If we compare to the xrange version:
Mem usage Increment Line Contents
===========================================
11.000 MiB 0.004 MiB total = 0
11.109 MiB 0.109 MiB for i in xrange(10000000):
11.109 MiB 0.000 MiB total += i
return total
So we started with 11MB and for i in xrange(10000000): added only 0.109MB. This is a huge memory savings by only adding a single letter to the code. While this example is fairly contrived, it shows how not computing a whole list until the element is needed can make things a lot more memory efficient.
Python has iterators and generators which act as a sort of "lazy" programming for when you need to yield sequences of data (although there's nothing stopping you from using them for single values), but Haskell has laziness built into every value in the language, even user-defined ones. This lets you take advantage of things like data structures that won't fit in memory without having to program complicated ways around that fact. The canonical example would be the fibonacci sequence:
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
which very elegantly expresses this famous sequence to define a recursive infinite list generating all fibonacci numbers. It's CPU efficient because all values are cached, so each element only has to be computed once (compared to a naive recursive implementation)1, but if you calculate too many elements your computer will eventually run out of RAM because you're now storing this huge list of numbers. This is an example where lazy programming lets you have CPU efficiency, but not RAM efficiency. There is a way around this, though. If you were to write
fib :: Int -> Integer
fib n = let fibs = 1 : 1 : zipWith (+) fibs (tail fibs) in fibs !! n
then this runs in near-constant memory, and does so very quickly, but memoization is lost as subsequent calls to fib have to recompute fibs.
A more complex example can be found here, where the author shows how to use lazy programming and recursion in Haskell to perform dynamic programming with arrays, a feat that most initially think is very difficult and requires mutation, but Haskell manages to do very easily with "tying the knot" style recursion. It results in both CPU and RAM efficiency, and does so in fewer lines than I'd expect in C/C++.
All this being said, there are plenty of cases where lazy programming is annoying. Often you can build up huge numbers of thunks instead of computing things as you go (I'm looking at you, foldl), and some strictness has to be introduced to attain efficiency. It also bites a lot of people with IO, when you read a file to a string as a thunk, close the file, and then try to operate on that string. It's only after the file is closed that the thunk gets evaluated, causing an IO error to occur and crashing your program. As with anything, lazy programming is not without its flaws, gotchas, and pitfalls. It takes time to learn how to work with it well, and to know what its limitations are.
1) By "naive recursive implementation", I mean implementing the fibonacci sequence as
fib :: Integer -> Integer
fib 0 = 1
fib 1 = 1
fib n = fib (n-1) + fib (n-2)
With this implementation you can see the mathematical definition very clearly; it's very much in the style of inductive proofs: you show your base cases and then the general case. However, if I call fib 5, this will "expand" into something like
fib 5 = fib 4 + fib 3
= fib 3 + fib 2 + fib 2 + fib 1
= fib 2 + fib 1 + fib 1 + fib 0 + fib 1 + fib 0 + fib 1
= fib 1 + fib 0 + fib 1 + fib 1 + fib 0 + fib 1 + fib 0 + fib 1
= 1 + 1 + 1 + 1 + 1 + 1 + 1 + 1
= 8
When instead we'd like to share some of those computations, that way fib 3 only gets computed once, fib 2 only gets computed once, etc.
By using a recursively defined list in Haskell, we can avoid this. Internally, this list is represented something like this:
fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
= 1 : 1 : zipWith (+) (f1:f2:fs) (f2:fs)
^--------------------^ ^ ^
^-------------------|-------|
= 1 : 1 : 2 : zipWith (+) (f2:f3:fs) (f3:fs)
^--------------------^ ^ ^
^-------------------|-------|
= 1 : 1 : 2 : 3 : zipWith (+) (f3:f4:fs) (f4:fs)
^--------------------^ ^ ^
^-------------------|-------|
So hopefully you can see the pattern forming here: as the list is built, it keeps pointers back to the last two elements generated in order to compute the next element. This means that for the nth element computed, there are n-2 additions performed. Even for the naive fib 5, you can see that more additions than that are performed, and the number of additions continues to grow exponentially. This definition is made possible through laziness and recursion, letting us turn an O(2^n) algorithm into an O(n) algorithm, but we have to give up RAM to do so. If this is defined at the top level, then values are cached for the lifetime of the program. It does mean that if you need to refer to the 1000th element repeatedly, you don't have to recompute it, just index it.
On the other hand, the definition
fib :: Int -> Integer
fib n =
let fibs = 1 : 1 : zipWith (+) fibs (tail fibs)
in fibs !! n
uses a local copy of fibs every time fib is called. We don't get caching between calls to fib, but we do get local caching, leaving our complexity O(n). Additionally, GHC is smart enough to know that we don't have to keep the beginning of the list around after we've used it to calculate the next element, so as we traverse fibs looking for the nth element, it only needs to hold on to 2-3 elements and a thunk pointing at the next element. This saves us RAM while computing it, and since it isn't defined at a global level it doesn't eat up RAM over the lifetime of the program. It's a tradeoff between when we want to spend RAM and CPU cycles, and different approaches are better for different situations. These techniques are applicable to much of Haskell programming in general, not just for this sequence!
Lazy evaluation is not, in general, faster. When it's said that lazy evaluation is more efficient, it is because when you consider Lambda Calculus (which is essentially what your Haskell programs are once the compiler finishes de-sugaring them) as a system of terms and reduction rules, then applying those rules in the order specified by the rules of a call-by-name with sharing evaluation policy always applies the same or fewer reduction rules than when you follow the rules in the order specified by call-by-value evaluation.
The reason that this theoretical result does not make lazy evaluation faster in general is that the translation to a linear sequential machine model with a memory access bottleneck tends to make all the reductions performed much more expensive! Initial attempts at implementing this model on computers led to programs that executed orders of magnitude more slowly than typical eagerly-evaluating language implementations. It has taken a lot of research and engineering into techniques for implementing lazy evaluation efficiently to get Haskell performance to where it is today. And the fastest Haskell programs take advantage of a form of static analysis called "strictness analysis" which attempts to determine at compile time which expressions will always be needed so that they can be evaluated eagerly rather than lazily.
There are still some cases where straightforward implementations of algorithms will execute faster in Haskell due to only evaluating terms that are needed for the result, but even eager languages always have some facility for evaluating some expressions by need. Conditionals and short-circuiting boolean expressions are ubiquitous examples, and in many eager languages, one can also delay evaluation by wrapping an expression in an anonymous function or some other sort of delaying form. So you can typically use these mechanisms (or even more awkward rewrites) to avoid evaluating expensive things that won't be necessary in an eager language.
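As a sketch of that "wrap it in an anonymous function" trick in an eager language (Java here, purely for illustration):
import java.util.function.Supplier;

public class DelayDemo {
    static int expensive() {
        System.out.println("computing...");
        return 42;
    }

    public static void main(String[] args) {
        // wrapping the expression in a lambda delays its evaluation until get() is called
        Supplier<Integer> thunk = () -> expensive();

        boolean needed = false;   // pretend some condition decides whether the value is used
        if (needed) {
            System.out.println(thunk.get());   // only here would expensive() actually run
        } else {
            System.out.println("expensive() was never evaluated");
        }
    }
}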
The real advantage of Haskell's lazy evaluation is not a performance-related one. Haskell makes it easier to pull expressions apart, re-combine them in different ways, and generally reason about code as if it were a system of mathematical equations instead of being a sequentially-evaluated set of machine instructions. By not specifying any evaluation order, it forced the developers of the language to avoid side-effects that rely on a simple evaluation ordering, such as mutation or IO. This in turn led to a host of elegant abstractions that are generally useful and might not have been developed into usability otherwise.
The state of Haskell is now such that you can write high-level, elegant algorithms that make better re-use of existing higher-order functions and data structures than in nearly any other high-level typed language. And once you become familiar with the costs and benefits of lazy evaluation and how to control when it occurs, you can ensure that the elegant code also performs very well. But getting the elegant code to a state of high performance is not necessarily automatic and may require a bit more thought than in a similar but eagerly-evaluated language.
The concept of "lazy evaluation" is only about 1 thing, and only about that 1 thing:
The ability to postpone evaluation of something until needed
That's it.
Everything else in that wikipedia article follows from it.
Infinite data structures? Not a problem. We'll just make sure we don't actually figure out what the next element is until you actually ask for it. For instance, take some code that can tell you what the next value after X is, where the operation is simply to increase X by 1: that sequence is infinite. If you create a list containing all those values, it's going to fill the available memory in your computer. If you only figure out what the next value is when asked, not so much.
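Sketched with Java streams, just as one way of expressing it (the point itself is language-agnostic):
import java.util.stream.IntStream;

public class InfiniteDemo {
    public static void main(String[] args) {
        // a conceptually infinite sequence: the next value is only computed when something asks for it
        IntStream counting = IntStream.iterate(0, x -> x + 1);

        // only the first five elements are ever produced; nothing tries to materialize the rest
        counting.limit(5).forEach(System.out::println);   // prints 0 1 2 3 4
    }
}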
Needless calculations? Sure. You can return an object containing a lot of properties that when asked will provide you with some value. If you don't ask (ie. never inspect the value of a given property), the calculation necessary to figure out the value of that property will never be done.
Control flow ... ? Not at all sure what that is about.
The purpose of lazy evaluation of something is exactly as I stated to begin with, to avoid evaluating something until you actually need it. Be it the next value of something, the value of a property, whatever, adding support for lazy evaluation might conserve CPU cycles.
What would the alternative be?
I want to return an object to the calling code, containing any number of properties, some of which might be expensive to calculate. Without lazy evaluation, I would have to calculate the values of all those properties either:
Before constructing the object
After constructing the object, on the first time you inspected a property
After constructing the object, every time you inspected that property
With lazy evaluation you usually end up with number 2. You postpone evaluating the value of that property until some code inspects it. Note that you might cache the value once evaluated, which would save CPU cycles when inspecting the same property more than once, but that is caching, not quite the same, but in the same line of work: optimizations.
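A sketch of what option 2 plus caching might look like, using a hypothetical LazyProperty wrapper in Java:
import java.util.function.Supplier;

public class LazyProperty<T> {
    private Supplier<T> supplier;
    private T value;
    private boolean computed;

    public LazyProperty(Supplier<T> supplier) {
        this.supplier = supplier;
    }

    // the expensive calculation runs the first time the property is inspected,
    // and the cached result is returned on every later read
    public synchronized T get() {
        if (!computed) {
            value = supplier.get();
            computed = true;
            supplier = null;   // let whatever the lambda captured be garbage collected
        }
        return value;
    }

    public static void main(String[] args) {
        LazyProperty<Double> expensive = new LazyProperty<>(() -> {
            System.out.println("calculating...");
            return Math.PI * 2;
        });
        // nothing has been calculated yet; the first get() triggers it, the second reuses the cache
        System.out.println(expensive.get());
        System.out.println(expensive.get());
    }
}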
Let's say that I have a variable x.
x = 0
I then spawn some number of threads, and each of them may or may not run the following expression WITHOUT the use of atomics.
x |= 1
After all threads have joined with my main thread, the main thread branches on the value.
if(x) { ... } else { ... }
Is it possible for there to be a race condition in this situation? My thoughts say no, because it doesn't seem to matter whether or not a thread is interrupted by another thread between reading and writing 'x' (in both cases, either 'x == 1', or 'x == 1'). That said, I want to make sure I'm not missing something stupid obvious or ridiculously subtle.
Also, if you happen to provide an answer to the contrary, please provide an instruction-by-instruction example!
Context:
I'm trying to, in OpenCL, have my threads indicate the presence or absence of a feature among any of their work-items. If any of the threads indicate the presence of the feature, my host ought to be able to branch on the result. I'm thinking of using the above method. If you guys have a better suggestion, that works too!
Detail:
I'm trying to add early-exit to my OpenCL radix-sort implementation, to skip radix passes if the data is banded (i.e. 'x' above would be x[RADIX] and I'd have all work groups, right after partial reduction of the data, indicate presence or absence of elements in the RADIX bins via 'x').
It may work within a work-group. You will need to insert a barrier before testing x. I'm not sure it will be faster than using atomic increments.
It will not work across several work-groups. Imagine you have 1000 work-groups to run on 20 cores. Typically, only a small number of work-groups can be resident on a single core, for example 4, meaning only 80 work-groups can be in flight inside the GPU at a given time. Once a work-group is done executing, it is retired, and another one is started. Halting a kernel in the middle of execution to wait for all 1000 work-groups to reach the same point is impossible.
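For what it's worth, the premise of the question - that the interleaving doesn't matter because every writer stores the same value - can be sketched on the CPU side in Java (an illustration only; it says nothing about the cross-work-group problem above, and the join() calls are what give the main thread its consistent final view):
public class AnyFlagDemo {
    // shared flag, analogous to the x in the question; every writer stores the same value
    static int x = 0;

    public static void main(String[] args) throws InterruptedException {
        Thread[] threads = new Thread[8];
        for (int t = 0; t < threads.length; t++) {
            final int id = t;
            threads[t] = new Thread(() -> {
                if (id % 2 == 0) {   // only some threads "find the feature"
                    x |= 1;          // non-atomic read-modify-write, but every write stores 1
                }
            });
            threads[t].start();
        }
        for (Thread t : threads) {
            t.join();                // after joining, the main thread sees the writes
        }
        System.out.println(x != 0 ? "feature present" : "feature absent");
    }
}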
I have to solve a huge linear system for multiple right-hand sides (let's say 20 to 200). The matrix is stored in a sparse format and distributed over multiple MPI nodes (let's say 16 to 64). I run a CG solver on the rank-0 node. It's not possible to solve the system directly, because the system matrix would be dense (Sys = A^T * S * A).
The basic Matrix-Vector multiplication is implemented as:
broadcast x
y = A_part * x
reduce y
While the collective operations are reasonably fast (Open MPI seems to use a binary-tree-like communication pattern + InfiniBand), they still account for quite a large part of the runtime. For performance reasons we already calculate 8 right-hand sides per iteration (basically SpM * DenseMatrix, just to be complete).
I'm trying to come up with a good scheme to hide the communication latency, but I haven't had a good idea yet. I am also trying to refrain from doing 1:n communication, although I have not yet measured whether scaling would be a problem.
Any suggestions are welcome!
If your matrix is already distributed, would it be possible to use a distributed sparse linear solver instead of running it only on rank 0 and then broadcasting the result (if I'm reading your description correctly)? There are plenty of libraries for that, e.g. SuperLU_DIST, MUMPS, PARDISO, Aztec(OO), etc.
The "multiple rhs" optimization is supported by at least SuperLU and MUMPS (haven't checked the others, but I'd be VERY surprised if they didn't support it!), since they solve AX=B where X and B are matrices with potentially > 1 column. That is, each "rhs" is stored as a column vector in B.
If you don't need to have the results of an old right-hand side before starting the next run, you could try to use non-blocking communication (MPI_Isend, MPI_Irecv) and communicate the result while already calculating the next right-hand side.
But make sure you call MPI_Wait before reading the contents of the communicated array, in order to be sure you're not reading "old" data.
If the matrices are big enough (i.e. it takes long enough to calculate the matrix product), you don't have any communication delay at all with this approach.
I currently have an application which can contain hundreds of user-defined formulae. Currently, I use reverse Polish notation to perform the calculations (pushing values and variables onto a stack, then popping them off the stack and evaluating). What would be the best way to start parallelizing this process? Should I be looking at a functional language?
The calculations are performed on arrays of numbers, so, for example, a simple A+B could actually mean hundreds of additions. I'm currently using Delphi, but this is not a requirement going forward; I'll use the tool most suited to the job. Formulae may also depend on each other, so we may have one formula C=A+B and a second one D=C+A, for example.
Let's assume your formulae (equations) are not cyclic, as otherwise you cannot "just" evaluate them. If you have vectorized equations like A = B + C where A, B and C are arrays, let's conceptually split them into equations on the components, so that if the array size is 5, this equation is split into
a1 = b1 + c1
a2 = b2 + c2
...
a5 = b5 + c5
Now assuming this, you have a large set of equations on simple quantities (whether integer, rational or something else).
If you have two equations E and F, let's say that F depends_on E if the right-hand side of F mentions the left-hand side of E, for example
E: a = b + c
F: q = 2*a + y
Now, to get toward how to calculate this, you could always use randomized iteration (this is just an intermediate step in the explanation), following this algorithm:
1 while (there is at least one equation which has not been computed yet)
2     select one such pending equation E so that:
3         for every equation D such that E depends_on D:
4             D has been already computed
5     calculate the left-hand side of E
This process terminates with the correct answer regardless on how you make your selections on line // 2. Now the cool thing is that it also parallelizes easily. You can run it in an arbitrary number of threads! What you need is a concurrency-safe queue which holds those equations whose prerequisites (those the equations depend on) have been computed but which have not been computed themselves yet. Every thread pops out (thread-safely) one equation from this queue at a time, calculates the answer, and then checks if there are now new equations so that all their prerequisites have been computed, and then adds those equations (thread-safely) to the work queue. Done.
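A rough Java sketch of that worker/queue scheme (the Equation interface and its methods are made-up names for illustration; a real implementation would also hold the actual values and the parsed right-hand sides):
import java.util.*;
import java.util.concurrent.*;

public class ParallelEquations {

    // hypothetical equation: knows its prerequisites and how to compute its left-hand side
    interface Equation {
        List<Equation> dependsOn();
        void evaluate();
    }

    static void evaluateAll(Collection<Equation> equations, int threads) throws InterruptedException {
        // remaining prerequisites per equation, plus the reverse dependency edges
        ConcurrentMap<Equation, Integer> remaining = new ConcurrentHashMap<>();
        Map<Equation, List<Equation>> dependents = new HashMap<>();
        BlockingQueue<Equation> ready = new LinkedBlockingQueue<>();

        for (Equation e : equations) {
            remaining.put(e, e.dependsOn().size());
            for (Equation d : e.dependsOn()) {
                dependents.computeIfAbsent(d, k -> new ArrayList<>()).add(e);
            }
            if (e.dependsOn().isEmpty()) {
                ready.add(e);                      // no prerequisites: computable right away
            }
        }

        CountDownLatch done = new CountDownLatch(equations.size());
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        Runnable worker = () -> {
            try {
                while (done.getCount() > 0) {
                    Equation e = ready.poll(10, TimeUnit.MILLISECONDS);
                    if (e == null) continue;       // nothing ready yet, re-check the exit condition
                    e.evaluate();                  // line 5 of the algorithm above
                    done.countDown();
                    // an equation becomes ready once its last prerequisite has been computed
                    for (Equation f : dependents.getOrDefault(e, Collections.emptyList())) {
                        if (remaining.merge(f, -1, Integer::sum) == 0) {
                            ready.add(f);
                        }
                    }
                }
            } catch (InterruptedException ex) {
                Thread.currentThread().interrupt();
            }
        };
        for (int i = 0; i < threads; i++) {
            pool.execute(worker);
        }
        done.await();
        pool.shutdown();
    }
}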
Without knowing more, I would suggest taking a SIMD-style approach if possible. That is, have each thread compute all the formulae for a single data set. Trying to divide the computation of the formulae themselves in order to parallelise them wouldn't yield much speed improvement: the logic required to split the computations into discrete units suitable for threading would be hard to write and harder to get right, and the overhead would cancel out any speed gains. It would also quickly suffer from diminishing returns.
Now, if you've got a set of formulae that are applied to many sets of data, then the parallelisation becomes easier and would scale better. Each thread does all computations for one set of data. Create one thread per CPU core and set its affinity to each core. Each thread instantiates one instance of the formula evaluation code. Create a supervisor which loads a single data set and passes it to an idle thread. If no threads are idle, wait for the first thread to finish processing its data. When all data sets are processed and all threads have finished, exit. Using this method, there's no advantage to having more threads than there are cores on the CPU, as thread switching is slow and will have a negative effect on overall speed.
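A compact sketch of that supervisor pattern with a fixed pool of one worker per core (loadDataSets and evaluateAllFormulae are hypothetical stand-ins for your own loading and RPN-evaluation code):
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.*;

public class PerDataSetEvaluation {

    static List<double[]> loadDataSets() {                  // hypothetical loader
        List<double[]> sets = new ArrayList<>();
        for (int i = 0; i < 100; i++) {
            sets.add(new double[] { i, i + 1.0, i + 2.0 });
        }
        return sets;
    }

    static double[] evaluateAllFormulae(double[] data) {    // hypothetical: run every formula on one data set
        double sum = 0;
        for (double d : data) {
            sum += d;
        }
        return new double[] { sum };
    }

    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(cores);   // one worker per core

        List<Future<double[]>> results = new ArrayList<>();
        for (double[] data : loadDataSets()) {
            // each task evaluates all formulae for one complete data set
            results.add(pool.submit(() -> evaluateAllFormulae(data)));
        }
        for (Future<double[]> r : results) {
            r.get();                                         // wait for every data set to finish
        }
        pool.shutdown();
    }
}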
If you've only got one data set, then it is not a trivial task. It would require parsing the evaluation tree for branches without dependencies on other branches, farming those branches out to separate threads running on each core, and waiting for the results. You then get problems synchronizing the data and ensuring data coherency.