Perl fast matrix multiply - performance

I have implemented the following statistical computation (the Fisher information matrix, http://en.wikipedia.org/wiki/Fisher_information) in Perl.
The results are correct. I know this because I have hundreds of test cases that match input and output. The problem is that I need to compute this many times every single time I run the script. The average number of calls to this function is around 530. I used Devel::NYTProf to find this out, as well as where the slow parts are. I have optimized the algorithm to only traverse the top half of the matrix and reflect it onto the bottom half, as they are the same. I'm not a Perl expert, but I need to know if there is anything I can try to speed up the Perl. This script is distributed to clients, so compiling a C file is not an option. Is there another Perl library I can try? This needs to be sub-second in speed if possible.
Some more information: $MatrixRef is a matrix of floating-point numbers that is $rows by $variables. Here is the NYTProf dump for the function.
#-----------------------------------------------
#
#-----------------------------------------------
sub ComputeXpX
# spent 4.27s within ComputeXpX which was called 526 times, avg 8.13ms/call:
# 526 times (4.27s+0s) by ComputeEfficiency at line 7121, avg 8.13ms/call
{
526 0s my ($MatrixRef, $rows, $variables) = @_;
526 0s my $r = 0;
526 0s my $c = 0;
526 0s my $k = 0;
526 0s my $sum = 0;
526 0s my @xpx = ();
526 11.0ms for ($r = 0; $r < $variables; $r++)
{
14202 19.0ms my @temp = (0) x $variables;
14202 6.01ms push(@xpx, \@temp);
526 0s }
526 7.01ms for ($r = 0; $r < $variables; $r++)
{
14202 144ms for ($c = $r; $c < $variables; $c++)
{
198828 43.0ms $sum = 0;
#for ($k = 0; $k < $rows; $k++)
198828 101ms foreach my $RowRef (@{$MatrixRef})
{
#$sum += $MatrixRef->[$k]->[$r]*$MatrixRef->[$k]->[$c];
6362496 3.77s $sum += $RowRef->[$r]*$RowRef->[$c];
}
198828 80.1ms $xpx[$r]->[$c] = $sum;
#reflect on other side of matrix
198828 82.1ms $xpx[$c]->[$r] = $sum if ($r != $c);
14202 1.00ms }
526 2.00ms }
526 2.00ms return \@xpx;
}

Since each element of the result matrix can be calculated independently, it should be possible to calculate some/all of them in parallel. In other words, none of the instances of the innermost loop depend on the results of any other, so they could run simultaneously on their own threads.

There really isn't much you can do here without rewriting parts in C, or moving to a better framework for mathematical operations than bare-bones Perl (→ PDL!).
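For illustration, a minimal (untested) PDL sketch of the same X'X computation might look like this, assuming $MatrixRef is the rows-by-variables array of arrayrefs from the question:
use PDL;
sub ComputeXpX_pdl {
    my ($MatrixRef) = @_;
    my $X   = pdl($MatrixRef);      # rows x variables piddle
    my $xpx = $X->transpose x $X;   # X'X via PDL's matrix-multiply operator
    return $xpx->unpdl;             # back to a Perl array of arrayrefs
}
The inner loops then run in compiled code rather than in Perl opcodes.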
Some minor optimization ideas:
You initialize @xpx with arrayrefs containing zeros. This is unnecessary, as you assign a value to every position either way. If you want to pre-allocate array space, assign to the $#array value:
my @array;
$#array = 100; # preallocate space for 101 scalars
This isn't generally useful, but you can benchmark with and without.
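A quick, hypothetical cmpthese sketch for that comparison (the size here is only a stand-in for your $variables):
use Benchmark qw(cmpthese);
my $n = 27;   # stand-in for $variables
cmpthese(-2, {
    no_prealloc => sub { my @row; $row[$_] = 0 for 0 .. $n - 1; \@row },
    prealloc    => sub { my @row; $#row = $n - 1; $row[$_] = 0 for 0 .. $n - 1; \@row },
});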
Iterate over ranges; don't use C-style for loops:
for my $c ($r .. $variables - 1) { ... }
Perl scalars aren't very fast for math operations, so offloading the range iteration to lower levels will gain a speedup.
Experiment with changing the order of the loops, and toy around with caching a level of array accesses. Keeping my $xpx_r = $xpx[$r] around in a scalar will reduce the number of array accesses (see the sketch below). If your input is large enough, this translates into a speed gain. Note that this only works when the cached value is a reference.
Remember that perl does very few “big” optimizations, and that the opcode tree produced by compilation closely resembles your source code.
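Putting the range-iteration and caching ideas together, the hot loops could be rewritten roughly like this (a sketch only, not benchmarked):
sub ComputeXpX {
    my ($MatrixRef, $rows, $variables) = @_;
    my @xpx;
    for my $r (0 .. $variables - 1) {
        my $xpx_r = $xpx[$r] ||= [];              # cache one level of array lookups
        for my $c ($r .. $variables - 1) {
            my $sum = 0;
            $sum += $_->[$r] * $_->[$c] for @{$MatrixRef};
            $xpx_r->[$c] = $sum;
            $xpx[$c][$r] = $sum if $r != $c;      # reflect onto the lower half
        }
    }
    return \@xpx;
}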
Edit: On threading
Perl threads are heavyweight beasts that literally clone the current interpreter. It is very much like forking.
Sharing data structures across thread boundaries is possible (use threads::shared; my $variable :shared = "foo") but there are various pitfalls. It is cleaner to pass data around in a Thread::Queue.
Splitting the calculation of one product over multiple threads could end up with your threads doing more communication than calculation. You could benchmark a solution that divides responsibility for certain rows between the threads. But I think recombining the solutions efficiently would be difficult here.
More likely to be useful is to have a bunch of worker threads running from the beginning. All threads listen to a queue which contains a pair of a matrix and a return queue. The worker would then dequeue a problem, and send back the solution. Multiple calculations could be run in parallel, but a single matrix multiplication will be slower. Your other code would have to be refactored significantly to take advantage of the parallelism.
Untested code:
use strict; use warnings; use threads; use Thread::Queue;
# spawn worker threads:
my $problem_queue = Thread::Queue->new;
my @threads = map threads->new(\&worker, $problem_queue), 1..3; # make 3 workers
# automatically close threads when program exits
END {
    $problem_queue->enqueue((undef) x @threads);
    $_->join for @threads;
}
# This is the wrapper around the threading,
# and can be called exactly as ComputeXpX
sub async_XpX {
    my $return_queue = Thread::Queue->new();
    $problem_queue->enqueue([$return_queue, @_]);
    return sub { $return_queue->dequeue };
}
# The main loop of worker threads
sub worker {
    my ($queue) = @_;
    while (defined(my $problem = $queue->dequeue)) {
        my ($return, @args) = @$problem;
        $return->enqueue(ComputeXpX(@args));
    }
}
sub ComputeXpX { ... } # as before
The async_XpX returns a coderef that will eventually collect the result of the computation. This allows us to carry on with other stuff until we need the result.
# start two calculations
my $future1 = async_XpX(...);
my $future2 = async_XpX(...);
...; # do something else
# collect the results
my $result1 = $future1->();
my $result2 = $future2->();
I benchmarked the bare-bones threading code without doing actual calculations, and the communication is about as expensive as the calculations. I.e. with a bit of luck, you may start to get a benefit on a machine with at least four processors/kernel threads.
A note on profiling threaded code: I know of no way to do that elegantly. Benchmarking the threaded code, but profiling with single-threaded test cases, may be preferable.

Related

Performance of local variable vs. array access

I was doing some benchmarking of Perl performance, and ran into a case that I thought was somewhat odd. Suppose you have a function which uses a value from an array multiple times. In this case, you often see code like this:
sub foo {
    my $value = $array[17];
    do_something_with($value);
    do_something_else_with($value);
}
The alternative is not to create a local variable at all:
sub foo {
    do_something_with($array[17]);
    do_something_else_with($array[17]);
}
For readability, the first is clearer. I assumed that performance would be at least equal (or better) for the first case too - array lookup requires a multiply-and-add, after all.
Imagine my surprise when this test program showed the opposite. On my machine, re-doing the array lookup is actually faster than storing the result, until I increase ITERATIONS to 7; in other words, for me, creating a local variable is only worthwhile if it's used at least 7 times!
use Benchmark qw(:all);
use constant { ITERATIONS => 4, TIME => -5 };
# sample array
my @array = (1 .. 100);
cmpthese(TIME, {
    # local variable version
    'local_variable' => sub {
        my $index = int(rand(scalar @array));
        my $val = $array[$index];
        my $ret = '';
        for (my $i = 0; $i < ITERATIONS; $i++) {
            $ret .= $val;
        }
        return $ret;
    },
    # multiple array access version
    'multi_access' => sub {
        my $index = int(rand(scalar @array));
        my $ret = '';
        for (my $i = 0; $i < ITERATIONS; $i++) {
            $ret .= $array[$index];
        }
        return $ret;
    }
});
Result:
                    Rate local_variable multi_access
local_variable  245647/s             --          -5%
multi_access    257907/s             5%           --
It's not a HUGE difference, but it brings up my question: why is it slower to create a local variable and cache the array lookup, than to do the lookup again? Reading other S.O. posts, I've seen that other languages / compilers do have the expected outcome, and sometimes even transform these into the same code. What is Perl doing?
I've done more poking around at this today, and what I've determined is that scalar assignment of any sort is an expensive operation, relative to the overhead of one-deep array lookup.
This seems like it's just restating the initial question, but I feel I have found more clarity. If, for example, I modify my local_variable subroutine to do another assignment like so:
my $index = int(rand(scalar #array));
my $val = 0; # <- this is new
$val = $array[$index];
my $ret = '';
...the code suffers an additional 5% speed penalty beyond the single-assignment version - even though it does nothing but a dummy assignment to the variable.
I also tested to see whether scope caused setup/teardown of $var to impede performance, by switching it to a global instead of a locally scoped one. The difference is negligible (see comments to @zdim above), pointing away from construct/destruct as the performance bottleneck.
In the end, my confusion was based on faulty assumptions that scalar assignment should be fast. I am used to working in C, where copying a value to a local variable is an extremely quick operation (1-2 asm instructions).
As it turns out, this is not the case in Perl (though I don't know exactly why, it's ok). Scalar assignment is a relatively "slow" operation... Whatever Perl internals are doing to get at the nth element of an Array object is actually quite fast by comparison. The "multiply and add" I mentioned in the initial post is still far less work than the code for scalar assignment.
That is why it takes so many lookups to match the performance of caching the result: simply assigning to the "cache" variable is ~7 times slower (for my setup).
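A small cmpthese sketch that isolates the two operations (hypothetical; absolute numbers will differ by machine and Perl version):
use Benchmark qw(cmpthese);
my @array = (1 .. 100);
my $index = 17;
cmpthese(-3, {
    # copy the element into a lexical, then use the copy once
    assign_then_use => sub { my $val = $array[$index]; my $ret = ''; $ret .= $val; return $ret },
    # skip the copy and look the element up at the point of use
    direct_lookup   => sub { my $ret = ''; $ret .= $array[$index]; return $ret },
});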
Let's first turn the statement around: caching the lookup is expected to be faster because it avoids repeated lookups, even though the caching itself has a cost, and it starts being faster once more than 7 lookups are done. Now that's not so shocking, I think.
As to why it's slower for fewer than seven iterations ... I'll guess that the cost of the scalar creation is still greater than those few lookups. It is surely greater than one lookup, yes? How about two, then? I'd say that "a few" may well be a good measure.

Julia: Parallel for loop over partitions iterator

So I'm trying to iterate over the list of partitions of something, say 1:n for some n between 13 and 21. The code that I ideally want to run looks something like this:
valid_num = @parallel (+) for p in partitions(1:n)
    int(is_valid(p))
end
println(valid_num)
This would use the @parallel for to map-reduce my problem. For example, compare this to the example in the Julia documentation:
nheads = @parallel (+) for i=1:200000000
    Int(rand(Bool))
end
However, if I try my adaptation of the loop, I get the following error:
ERROR: `getindex` has no method matching getindex(::SetPartitions{UnitRange{Int64}}, ::Int64)
in anonymous at no file:1433
in anonymous at multi.jl:1279
in run_work_thunk at multi.jl:621
in run_work_thunk at multi.jl:630
in anonymous at task.jl:6
which I think is because I am trying to iterate over something that is not of the form 1:n (EDIT: I think it's because you cannot call p[3] if p=partitions(1:n)).
I've tried using pmap to solve this, but because the number of partitions can get really big, really quickly (there are more than 2.5 million partitions of 1:13, and when I get to 1:21 things will be huge), constructing such a large array becomes an issue. I left it running overnight and it still didn't finish.
Does anyone have any advice for how I can efficiently do this in Julia? I have access to a ~30 core computer and my task seems easily parallelizable, so I would be really grateful if anyone knows a good way to do this in Julia.
Thank you so much!
The below code gives 511, the number of partitions of size 2 of a set of 10.
using Iterators
s = [1,2,3,4,5,6,7,8,9,10]
is_valid(p) = length(p)==2
valid_num = @parallel (+) for i = 1:30
    sum(map(is_valid, takenth(chain(1:29, drop(partitions(s), i-1)), 30)))
end
This solution combines the takenth, drop, and chain iterators to get the same effect as the take_every iterator below under PREVIOUS ANSWER. Note that in this solution, every process must compute every partition. However, because each process uses a different argument to drop, no two processes will ever call is_valid on the same partition.
Unless you want to do a lot of math to figure out how to actually skip partitions, there is no way to avoid computing partitions sequentially on at least one process. I think Simon's answer does this on one process and distributes the partitions. Mine asks each worker process to compute the partitions itself, which means the computation is being duplicated. However, it is being duplicated in parallel, which (if you actually have 30 processors) will not cost you time.
Here is a resource on how iterators over partitions are actually computed: http://www.informatik.uni-ulm.de/ni/Lehre/WS03/DMM/Software/partitions.pdf.
PREVIOUS ANSWER (More complicated than necessary)
I noticed Simon's answer while writing mine. Our solutions seem similar to me, except mine uses iterators to avoid storing partitions in memory. I'm not sure which would actually be faster for what size sets, but I figure it's good to have both options. Assuming it takes you significantly longer to compute is_valid than to compute the partitions themselves, you can do something like this:
s = [1,2,3,4]
is_valid(p) = length(p)==2
valid_num = @parallel (+) for i = 1:30
    foldl((x,y)->(x + int(is_valid(y))), 0, take_every(partitions(s), i-1, 30))
end
which gives me 7, the number of partitions of size 2 for a set of 4. The take_every function returns an iterator that returns every 30th partition starting with the ith. Here is the code for that:
import Base: start, done, next
immutable TakeEvery{Itr}
    itr::Itr
    start::Any
    value::Any
    flag::Bool
    skip::Int64
end
function take_every(itr, offset, skip)
    value, state = Nothing, start(itr)
    for i = 1:(offset+1)
        if done(itr, state)
            return TakeEvery(itr, state, value, false, skip)
        end
        value, state = next(itr, state)
    end
    if done(itr, state)
        TakeEvery(itr, state, value, true, skip)
    else
        TakeEvery(itr, state, value, false, skip)
    end
end
function start{Itr}(itr::TakeEvery{Itr})
    itr.value, itr.start, itr.flag
end
function next{Itr}(itr::TakeEvery{Itr}, state)
    value, state_, flag = state
    for i=1:itr.skip
        if done(itr.itr, state_)
            return state[1], (value, state_, false)
        end
        value, state_ = next(itr.itr, state_)
    end
    if done(itr.itr, state_)
        state[1], (value, state_, !flag)
    else
        state[1], (value, state_, false)
    end
end
function done{Itr}(itr::TakeEvery{Itr}, state)
    done(itr.itr, state[2]) && !state[3]
end
One approach would be to divide the problem up into pieces that are not too big to realize and then process the items within each piece in parallel, e.g. as follows:
function my_take(iter,state,n)
    i = n
    arr = Array[]
    while !done(iter,state) && (i>0)
        a,state = next(iter,state)
        push!(arr,a)
        i = i-1
    end
    return arr, state
end
function get_part(npart,npar)
    valid_num = 0
    p = partitions(1:npart)
    s = start(p)
    while !done(p,s)
        arr,s = my_take(p,s,npar)
        valid_num += @parallel (+) for a in arr
            length(a)
        end
    end
    return valid_num
end
valid_num = @time get_part(10,30)
I was going to use the take() method to realize up to npar items from the iterator, but take() appears to be deprecated, so I've included my own implementation, which I've called my_take(). The get_part() function therefore uses my_take() to obtain up to npar partitions at a time and carry out a calculation on them. In this case, the calculation just adds up their lengths, because I don't have the code for the OP's is_valid() function. get_part() then returns the result.
Because the length() calculation isn't very time-consuming, this code is actually slower when run on parallel processors than it is on a single processor:
$ julia -p 1 parpart.jl
elapsed time: 10.708567515 seconds (373025568 bytes allocated, 6.79% gc time)
$ julia -p 2 parpart.jl
elapsed time: 15.70633439 seconds (548394872 bytes allocated, 9.14% gc time)
Alternatively, pmap() could be used on each piece of the problem instead of the parallel for loop.
With respect to the memory issue, realizing 30 items from partitions(1:10) took nearly 1 gigabyte of memory on my PC when I ran Julia with 4 worker processes so I expect realizing even a small subset of partitions(1:21) will require a great deal of memory. It may be desirable to estimate how much memory would be needed to see if it would be at all possible before trying such a computation.
With respect to the computation time, note that:
julia> length(partitions(1:10))
115975
julia> length(partitions(1:21))
474869816156751
... so even efficient parallel processing on 30 cores might not be enough to make the larger problem solvable in a reasonable time.

Generating a race condition with MRI

I was wondering whether it's easy to produce a race condition using MRI Ruby (2.0.0) and some global variables, but as it turns out it's not that easy. It looks like it should fail at some point, but it doesn't, and I've been running it for 10 minutes. This is the code I've been using to try to achieve it:
def inc(*)
  a = $x
  a += 1
  a *= 3000
  a /= 3000
  $x = a
end
THREADS = 10
COUNT = 5000
loop do
  $x = 1
  THREADS.times.map do Thread.new { COUNT.times(&method(:inc)) } end.each(&:join)
  break puts "woo hoo!" if $x != THREADS * COUNT + 1
end
puts $x
Why am I not able to generate (or detect) the expected race condition, and get the output woo hoo! in Ruby MRI 2.0.0?
Your example does (almost instantly) work in 1.8.7.
The following variation does the trick for 1.9.3+:
def inc
  a = $x + 1
  # Just one microsecond
  sleep 0.000001
  $x = a
end
THREADS = 10
COUNT = 50
loop do
  $x = 1
  THREADS.times.map { Thread.new { COUNT.times { inc } } }.each(&:join)
  break puts "woo hoo!" if $x != THREADS * COUNT + 1
  puts "No problem this time."
end
puts $x
The sleep command is a strong hint to the interpreter that it can schedule another thread, so this is not a huge surprise.
Note if you replace the sleep with something that takes just as long or longer, e.g. b = a; 500.times { b *= 100 }, then there is no race condition detected in the above code. But take it further with b = a; 2500.times { b *= 100 }, or increase COUNT from 50 to 500, and the race condition is more reliably triggered.
The thread scheduling in Ruby 1.9.3 onwards (of course including 2.0.0) appears to be assigning CPU time in larger chunks than in 1.8.7. Opportunities to switch threads can be low in simple code, unless some kind of I/O waiting is involved.
It is even possible that the threads in the OP, each of which is performing just a few thousand calculations, are in essence occurring in series - although increasing the COUNT global to avoid this still does not trigger additional race conditions.
Generally, MRI Ruby does not switch context between threads during atomic operations (e.g. during a Fixnum multiply or division) that occur within its C implementation. This means that when all methods are calls to Ruby internals without I/O waiting, the only opportunities for a thread context switch are "in between" each line of code. In the original example, there are only 4 such fleeting opportunities, and in the scheme of things that is not very much at all for MRI 1.9.3+ (in fact, see the update below: these opportunities have probably been removed by Ruby).
When I/O waits or sleep are involved, it actually gets more complex, as Ruby MRI (1.9+) will allow a little bit of true parallel processing on multi-core CPUs. Although this is not the direct cause of race conditions with threads, it is more likely to result in them, as Ruby will usually make a thread context switch at the same time to take advantage of the parallelism.
Whilst I was researching this rough answer, I found an interesting link: Nobody understands the GIL (part 2 linked, as more relevant to this question)
Update: I suspect that the interpreter is optimising away some potential thread-switching points in the Ruby source.
Starting with my sleep version of the code, and setting:
COUNT = 500000
the following variation of inc does not seem to have a race condition affecting $x:
def inc
  a = $x + 1
  b = 0
  b += 1
  $x = a
end
However, these minor changes both trigger a race condition:
def inc
  a = $x + 1
  b = 0
  b = b.send( :+, 1 )
  $x = a
end
def inc
  a = $x + 1
  b = 0
  b += '1'.to_i
  $x = a
end
My interpretation is that the Ruby parser has optimised b += 1 to remove some of the overhead of method despatch. One of the optimised-away steps is likely to include the check for a possible switch to a waiting thread.
If that is the case, then the code in the question may never have the opportunity to switch threads within the inc method, because all the operations inside it can be optimised in the same way.

What is a thread-safe random number generator for perl?

The core Perl function rand() is not thread-safe, and I need random numbers in a threaded Monte Carlo simulation.
I'm having trouble finding any notes in CPAN on the various random-number generators there as to which (if any) are thread-safe, and every Google search I do keeps getting cluttered with C/C++/Python/anything but Perl. Any suggestions?
Do not use built-in rand for Monte Carlo on Windows. At least, try:
my %r = map { rand() => undef } 1 .. 1_000_000;
print scalar keys %r, "\n";
If nothing has changed, it should print 32768 which is utterly unsuitable for any kind of serious work. And, even if it does print a larger number, you're better off sticking with a PRNG with known good qualities for simulation.
You can use Math::Random::MT.
You can instantiate a new Math::Random::MT object in each thread with its own array of seeds. Mersenne Twister has good properties for simulation.
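A sketch of that approach (untested; seed each generator however you prefer):
use strict;
use warnings;
use threads;
use Math::Random::MT;
my @workers = map {
    my $seed = $_ * 7919 + time();                # any independent per-thread seed
    threads->create(sub {
        my $gen = Math::Random::MT->new($seed);   # one generator per thread
        my $sum = 0;
        $sum += $gen->rand() for 1 .. 1000;       # $gen->rand() is a drop-in for rand()
        return $sum;
    });
} 1 .. 4;
print $_->join(), "\n" for @workers;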
Do you have /dev/urandom on your system?
BEGIN {
    open URANDOM, '<', '/dev/urandom' or die "Cannot open /dev/urandom: $!";
}
sub urand { # drop-in replacement for rand
    my $expr = shift || 1;
    my $x;
    read URANDOM, $x, 4;
    return $expr * unpack("I", $x) / (2**32);
}
rand is thread-safe, and I think you have the wrong definition of what "thread safe" means. If something is not "thread safe", it means the program/function is modifying a "shared" data structure in a way that makes its execution in threaded mode unsafe.
Check the rand function documentation; notice that it takes EXPR as an argument, and in every thread you can provide a different EXPR.
http://perldoc.perl.org/functions/rand.html

Use of pthread increases execution time, suggestions for improvements

I had a piece of code which looked like this:
for(i=0;i<NumberOfSteps;i++)
{
    for(k=0;k<NumOfNodes;k++)
    {
        mark[crawler[k]]++;
        r = rand() % node_info[crawler[k]].num_of_nodes;
        crawler[k] = (int)DataBlock[node_info[crawler[k]].index+r][0];
    }
}
I changed it so that the load can be split among multiple threads. Now it looks like this:
for(i=0;i<NumberOfSteps;i++)
{
    for(k=0;k<NumOfNodes;k++)
    {
        pthread_mutex_lock( &mutex1 );
        mark[crawler[k]]++;
        pthread_mutex_unlock( &mutex1 );
        pthread_mutex_lock( &mutex1 );
        r = rand() % node_info[crawler[k]].num_of_nodes;
        pthread_mutex_unlock( &mutex1 );
        pthread_mutex_lock( &mutex1 );
        crawler[k] = (int)DataBlock[node_info[crawler[k]].index+r][0];
        pthread_mutex_unlock( &mutex1 );
    }
}
I need the mutexes to protect shared variables. It turns out that my parallel code is slower. But why? Is it because of the mutexes?
Could this possibly have something to do with the cache line size?
You are not parallelizing anything but the loop heads. Everything between lock and unlock is forced to be executed sequentially. And since lock/unlock are (potentially) expensive operations, the code is getting slower.
To fix this, you should at least separate expensive computations (without mutex protection) from access to shared data areas (with mutexes). Then try to move the mutexes out of the inner loop.
You could use atomic increment instructions (depends on platform) instead of plain '++', which is generally cheaper than mutexes. But beware of doing this often on data of a single cache line from different threads in parallel (see 'false sharing').
AFAICS, you could rewrite the algorithm as indicated below without needing mutexes or atomic increments at all. getFirstK() is NumOfNodes/NumOfThreads*t if NumOfNodes is an integral multiple of NumOfThreads.
for(t=0;t<NumberOfThreads;t++)
{
    kbegin = getFirstK(NumOfNodes, NumOfThreads, t);
    kend = getFirstK(NumOfNodes, NumOfThreads, t+1);
    // start the following in a separate thread with kbegin and kend
    // copied to thread local vars kbegin_ and kend_
    int k, i, r;
    unsigned state = kend_; // really bad seed
    for(k=kbegin_;k<kend_;k++)
    {
        for(i=0;i<NumberOfSteps;i++)
        {
            mark[crawler[k]]++;
            r = rand_r(&state) % node_info[crawler[k]].num_of_nodes;
            crawler[k] = (int)DataBlock[node_info[crawler[k]].index+r][0];
        }
    }
}
// wait for threads/jobs to complete
This way to generate random numbers may lead to bad random distributions, see this question for details.
