I am trying to wrap my head around how to use GCD to parallelize and speed up Monte Carlo simulations. Most or all of the simple examples are presented in Objective-C, and I really need a simple example in Swift, since Swift is my first “real” programming language.
A minimal working version of a Monte Carlo simulation in Swift would be something like this:
import Foundation
import Cocoa

var winner = 0
var j = 0
var i = 0
var chance = 0
var points = 0

for j=1;j<1000001;++j{
    var ability = 500
    var player1points = 0
    for i=1;i<1000;++i{
        chance = Int(arc4random_uniform(1001))
        if chance<(ability-points) {++points}
        else{points = points - 1}
    }
    if points > 0{++winner}
}
println(winner)
The code works when pasted directly into a command-line program project in Xcode 6.1.
The innermost loop cannot be parallelized because the new value of the variable “points” is used in the next iteration. But the outermost loop just runs the inner simulation 1,000,000 times and tallies up the results, so it should be an ideal candidate for parallelization.
So my question is how to use GCD to parallelize the outermost for loop?
A "multi-threaded iteration" can be done with dispatch_apply():
let outerCount = 100    // # of concurrent block iterations
let innerCount = 10000  // # of iterations within each block

let the_queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

dispatch_apply(UInt(outerCount), the_queue) { outerIdx -> Void in
    for innerIdx in 1 ... innerCount {
        // ...
    }
}
(You have to figure out the best relation between outer and inner counts.)
There are two things to notice:
arc4random() uses an internal mutex, which makes it extremely slow when called from several threads in parallel; see Performance of concurrent code using dispatch_group_async is MUCH slower than single-threaded version. From the answers given there, rand_r() (with a separate seed for each thread) seems to be a faster alternative.
The result variable winner must not be modified from multiple threads simultaneously.
You can use an array instead where each thread updates its own element, and the results
are added afterwards. A thread-safe method has been described in https://stackoverflow.com/a/26790019/1187415.
Then it would roughly look like this:
let outerCount = 100    // # of concurrent block iterations
let innerCount = 10000  // # of iterations within each block

let the_queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);

var winners = [Int](count: outerCount, repeatedValue: 0)
winners.withUnsafeMutableBufferPointer { winnersPtr -> Void in

    dispatch_apply(UInt(outerCount), the_queue) { outerIdx -> Void in
        var seed = arc4random() // seed for rand_r() in this "thread"

        for innerIdx in 1 ... innerCount {
            var points = 0
            var ability = 500

            for i in 1 ... 1000 {
                let chance = Int(rand_r(&seed) % 1001)
                if chance < (ability - points) { ++points }
                else { points = points - 1 }
            }
            if points > 0 {
                winnersPtr[Int(outerIdx)] += 1
            }
        }
    }
}

// Add results:
let winner = reduce(winners, 0, +)
println(winner)
Just to update this for contemporary syntax, we now use concurrentPerform (which replaces dispatch_apply).
So we can replace
for j in 0 ..< 1_000_000 {
    for i in 0 ..< 1000 {
        ...
    }
}
With
DispatchQueue.concurrentPerform(iterations: 1_000_000) { j in
    for i in 0 ..< 1000 {
        ...
    }
}
Note, parallelizing introduces a little overhead, both in the basic GCD dispatch mechanism and in the synchronization of the results. If you had 32 iterations in your parallel loop this would be inconsequential, but with a million iterations it will start to add up.
We generally solve this by “striding”: Rather than parallelizing 1 million iterations, you might only do 100 parallel iterations, doing 10,000 iterations each. E.g. something like:
let totalIterations = 1_000_000
let stride = 10_000
let (quotient, remainder) = totalIterations.quotientAndRemainder(dividingBy: stride)
let iterations = quotient + (remainder == 0 ? 0 : 1)

DispatchQueue.concurrentPerform(iterations: iterations) { iteration in
    for j in iteration * stride ..< min(totalIterations, (iteration + 1) * stride) {
        for i in 0 ..< 1000 {
            ...
        }
    }
}
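To make that concrete, here is a minimal, self-contained sketch of the striding pattern combined with one way to synchronize the tally. This is my own illustrative version, not code from the original answers: it swaps the C random functions for Int.random(in:) (whose system generator is thread-safe) and protects the shared winner count with an NSLock, merging one per-chunk partial sum at a time.

import Foundation

let totalIterations = 1_000_000
let stride = 10_000
let chunks = (totalIterations + stride - 1) / stride   // ceiling division

let lock = NSLock()
var winner = 0

DispatchQueue.concurrentPerform(iterations: chunks) { chunk in
    // Each chunk keeps a private tally, so the shared counter is touched
    // only once per chunk, under the lock.
    var localWinners = 0
    for _ in chunk * stride ..< min(totalIterations, (chunk + 1) * stride) {
        var points = 0
        let ability = 500
        for _ in 0 ..< 1000 {
            let chance = Int.random(in: 0...1000)
            if chance < (ability - points) { points += 1 } else { points -= 1 }
        }
        if points > 0 { localWinners += 1 }
    }
    lock.lock()
    winner += localWinners
    lock.unlock()
}

print(winner)

Because each chunk acquires the lock exactly once, the synchronization cost stays negligible next to the 10,000 simulations done inside the chunk.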
My question doesn't depend expressly on one snippet of code, but is more conceptual.
Unlike some programming languages, MATLAB doesn't require variables to be explicitly initialized before they're used. For example, it's perfectly valid to define 'myVector' halfway through a script file like this:
myVector = vectorA .* vectorB
My question is: Is it faster to initialize variables (such as 'myVector' above) to zero and then assign values to them, or to keep initializing things throughout the program?
Here's a direct comparison of what I'm talking about:
Initializing throughout:
varA = 8;
varB = 2;
varC = varA - varB;
varD = varC * varB;
Initializing at start:
varA = 8;
varB = 2;
varC = 0;
varD = 0;
varC = varA - varB;
varD = varC * varB;
On one hand, it seems a bit of a waste to have these extra lines of code for no reason. On the other hand, though, it makes a little bit of sense that it would be faster to allocate all the memory for a program at once instead of spreading it out over the runtime.
Does anyone have a little insight?
Copy and paste your "Initializing at start" code into the MATLAB Editor and it flags the zero assignments with a warning that the assigned values might be unused.
If you open the warning's Details, you would read this -
Explanation
The code does not appear to use the assignment to the indicated variable. This situation occurs when any of the following are true:
Another assignment overwrites the value of the variable before an operation uses it.
The specified argument value contains a typographical error, causing it to appear unused.
The code does not use all values returned by a function call...
In our case, the warning fires because another assignment overwrites the value of the variable before an operation uses it. So this clarifies that initialization/pre-allocation won't help for that case.
When should we pre-allocate?
In my experience, pre-allocation helps when you later need to index into part of an array to store results.
Thus, if you need to index into a portion of varC to store the results, pre-allocation would help. Hence, this would make more sense -
varC = zeros(...)
varD = zeros(...)
varC(k,:) = varA - varB;
varD(k,:) = varC * varB;
Again, if your indexing goes beyond the current size of varC, MATLAB has to spend time allocating more memory for it, which slows things down a bit. So pre-allocate output variables to the maximum size you think will be needed for storing the results. If you don't know the size of the results in advance, you are in a bind: you have to append results to the output variable(s), and that will slow things down for sure.
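To make the growing-versus-pre-allocating point concrete, here is a small illustrative timing sketch; the size and the k^2 computation are arbitrary placeholders, not taken from the question.

n = 1e5;

% growing inside the loop: MATLAB must repeatedly reallocate and copy
tic;
a = [];
for k = 1:n
    a(k) = k^2; %#ok<SAGROW>
end
tGrow = toc;

% pre-allocated to its final size up front
tic;
b = zeros(n,1);
for k = 1:n
    b(k) = k^2;
end
tPre = toc;

fprintf('grown: %.4f s, pre-allocated: %.4f s\n', tGrow, tPre);

Typically the pre-allocated loop comes out faster, because MATLAB never has to reallocate and copy the array as it grows.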
Alright! I've done some tests, and here are the results.
This is the code I used for the "throughout" variable assignments:
tic;
a = 1;
b = 2;
c = 3;
d = 4;
e = a - b;
f = e + c;
g = f - a;
h = g * c;
i = h - g;
j = 9 * i;
k = [j i h];
l = any(k);
b2(numel(b2) + 1) = toc
Here's the code for the "At Start" variable assignments:
tic;
a = 1;
b = 2;
c = 3;
d = 4;
e = 0;
f = 0;
g = 0;
h = 0;
i = 0;
j = 0;
k = 0;
l = 0;
e = a - b;
f = e + c;
g = f - a;
h = g * c;
i = h - g;
j = 9 * i;
k = [j i h];
l = any(k);
b1(numel(b1) + 1) = toc
I saved the time in the vectors 'b1' and 'b2'. Each was run with only MATLAB and Chrome open, and was the only script file open inside MATLAB. Each was run 201 times. Because the first time a program is run it compiles, I disregarded the first time value for both (I'm not interested in compile time).
To find the average, I used
mean(b1(2:201))
and
mean(b2(2:201))
The results:
"Throughout": 1.634311562062418e-05 seconds (0.000016343)
"At Start": 2.832598989758290e-05 seconds (0.000028326)
Interestingly (or perhaps not, who knows), defining variables only when needed, spread throughout the program, was almost twice as fast.
I don't know whether this is because of the way MATLAB allocates memory (maybe it just grabs a huge chunk and doesn't need to keep allocating more every time you define a variable?) or if the allocation speed is just so fast that it's eclipsed by the extra lines of code.
NOTE: As Divakar points out, mileage may vary when using arrays. My testing should hold true for when the size of variables doesn't change, however.
tl;dr Setting variables to zero only to change them later is slow
I am trying to create random lines and select some of them, which are really rare. My code is rather simple, but to get something that I can use I need to create very large vectors (i.e. <100000000 x 1; the tracks variable in my code). Is there any way to create larger vectors and to reduce the time needed for all those calculations?
My code is
%Initial line values
tracks=input('Give me the number of muon tracks: ');
width=1e-4;
height=2e-4;
Ystart=15.*ones(tracks,1);
Xstart=-40+80.*rand(tracks,1);
%Xend=-40+80.*rand(tracks,1);
Xend=laprnd(tracks,1,Xstart,15);
X=[Xstart';Xend'];
Y=[Ystart';zeros(1,tracks)];
b=(Ystart.*Xend)./(Xend-Xstart);
hot=0;
cold=0;
for i=1:tracks
    if ((Xend(i,1)<width/2 && Xend(i,1)>-width/2)||(b(i,1)<height && b(i,1)>0))
        plot(X(:, i),Y(:, i),'r');%the chosen ones!
        hold all
        hot=hot+1;
    else
        %plot(X(:, i),Y(:, i),'b');%the rest of them
        %hold all
        cold=cold+1;
    end
end
I am also using and calling a Laplace distribution generator made by Elvis Chen, which can be found here
function y = laprnd(m, n, mu, sigma)
%LAPRND generate i.i.d. laplacian random number drawn from laplacian distribution
% with mean mu and standard deviation sigma.
% mu : mean
% sigma : standard deviation
% [m, n] : the dimension of y.
% Default mu = 0, sigma = 1.
% For more information, refer to
% http://en.wikipedia.org./wiki/Laplace_distribution
% Author : Elvis Chen (bee33#sjtu.edu.cn)
% Date : 01/19/07
%Check inputs
if nargin < 2
    error('At least two inputs are required');
end
if nargin == 2
    mu = 0; sigma = 1;
end
if nargin == 3
    sigma = 1;
end
% Generate Laplacian noise
u = rand(m, n)-0.5;
b = sigma / sqrt(2);
y = mu - b * sign(u).* log(1- 2* abs(u));
As you indicate, your problem is two-fold. On the one hand, you have memory issues because you need to do so many trials. On the other hand, you have performance issues, because you have to process all those trials.
Solutions to each issue often have a negative impact on the other issue. IMHO, the best approach would be to find a compromise.
More trials are only possible if you get rid of those gargantuan arrays that are required for vectorization, and use a different strategy to do the loop. I will give priority to the possibility of using more trials, possibly at the cost of optimal performance.
When I execute your code as-is in the Matlab profiler, it immediately shows that the initial memory allocation for all your variables takes a lot of time. It also shows that the plot and hold all commands are the most time-consuming lines of them all. Some more trial-and-error shows that there is a disappointingly low maximum value for the trials you can do before OUT OF MEMORY errors start appearing.
The loop can be accelerated tremendously if you know a few things about its limitations in Matlab. In older versions of Matlab, it used to be true that loops should be avoided completely in favor of 'vectorized' code. In recent versions (I believe R2008a and up), the Mathworks introduced a piece of technology called the JIT accelerator (Just-in-Time compiler) which translates M-code into machine language on the fly during execution. Simply put, the JIT accelerator allows your code to bypass Matlab's interpreter and talk much more directly with the underlying hardware, which can save a lot of time.
The advice you'll hear a lot, that loops should be avoided in Matlab, is no longer generally true. While vectorization still has its value, any procedure of sizable complexity that is implemented using only vectorized code is often illegible, hard to understand, hard to change and hard to maintain. An implementation of the same procedure that uses loops often has none of these drawbacks and, moreover, will quite often be faster and require less memory.
Unfortunately, the JIT accelerator has a few nasty (and IMHO, unnecessary) limitations that you'll have to learn about.
One such thing is plot; it's generally a better idea to let a loop do nothing other than collect and manipulate data, and delay any plotting commands etc. until after the loop.
Another such thing is hold; the hold function is not a Matlab built-in function, meaning it is implemented in M-code. Matlab's JIT accelerator cannot accelerate non-built-in functions used inside a loop, so your entire loop will run at Matlab's interpretation speed rather than at machine-language speed! Therefore, also delay this command until after the loop :)
Now, in case you're wondering, this last step can make a HUGE difference: I know of one case where copy-pasting a function body into the upper-level loop caused a 1200x performance improvement; days of execution time were reduced to minutes!
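In isolation, the collect-first, plot-afterwards pattern looks something like the sketch below. It is illustrative only; the acceptance test is a made-up placeholder, not the muon condition.

% Illustrative only: collect data inside the loop, plot once afterwards.
nTrials = 1e6;
keptX = [];                   % grows only for the rare accepted cases
for k = 1:nTrials
    x = randn;                % some per-trial result
    if abs(x) > 4             % rare acceptance condition (placeholder)
        keptX(end+1) = x;     %#ok<AGROW>
    end
end
% All plotting happens once, outside the loop:
figure; hold all
plot(keptX, 'r.');

The loop body now contains only built-in operations, and the figure is touched exactly once.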
There is actually another minor issue in your loop (which is really small, and rather inconvenient, I will immediately agree): the name of the loop variable should not be i. The name i is the name of the imaginary unit in Matlab, and resolving it unnecessarily consumes time on each iteration. It's small, but non-negligible.
Now, considering all this, I've come to the following implementation:
function [hot, cold, h] = MuonTracks(tracks)

    % NOTE: no variables larger than 1x1 are initialized

    width  = 1e-4;
    height = 2e-4;

    % constant used for Laplacian noise distribution
    bL = 15 / sqrt(2);

    % Loop through all tracks
    X   = [];
    hot = 0;
    ii  = 0;
    while ii < tracks
        ii = ii + 1;

        % Note that I've inlined (== copy-pasted) the original laprnd()
        % function call. This was necessary to work around limitations
        % in loops in Matlab, and prevent the necessity of those HUGE
        % variables.
        %
        % Of course, you can still easily generalize all of this:

        % the new data
        u      = rand-0.5;
        Ystart = 15;
        Xstart = 800*rand-400;
        Xend   = Xstart - bL*sign(u)*log(1-2*abs(u));
        b      = (Ystart*Xend)/(Xend-Xstart);

        % the test
        if ((b < height && b > 0)) ||...
           (Xend < width/2 && Xend > -width/2)
            hot = hot+1;
            % growing an array is perfectly fine when the chances of it
            % happening are so slim
            X = [X [Xstart; Xend]]; %#ok
        end
    end

    % This is trivial to do here, and prevents an 'else' in the loop
    cold = tracks - hot;

    % Now plot the chosen ones
    h = figure;
    hold all
    Y = repmat([15;0], 1, size(X,2));
    plot(X, Y, 'r');

end
With this implementation, I can do this:
>> tic, MuonTracks(1e8); toc
Elapsed time is 24.738725 seconds.
with a completely negligible memory footprint.
The profiler now also shows a nice and even distribution of effort along the code; no lines that really stand out because of their memory use or performance.
It's possibly not the fastest possible implementation (if anyone sees obvious improvements, please, feel free to edit them in). But, if you're willing to wait, you'll be able to do MuonTracks(1e23) (or higher :)
I've also done an implementation in C, which can be compiled into a Matlab MEX file:
/* DoMuonCounting.c */
#include <math.h>
#include <matrix.h>
#include <mex.h>
#include <time.h>
#include <stdlib.h>

void CountMuons(
    unsigned long long tracks,
    unsigned long long *hot, unsigned long long *cold, double *Xout);

/* simple little helper functions */
double sign(double x)  { return (x>0)-(x<0); }
double rand_double()   { return (double)rand()/(double)RAND_MAX; }

/* the gateway function */
void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    int
        dims[] = {1,1};

    const mxArray
        /* Output arguments */
        *hot_out  = plhs[0] = mxCreateNumericArray(2,dims, mxUINT64_CLASS,0),
        *cold_out = plhs[1] = mxCreateNumericArray(2,dims, mxUINT64_CLASS,0),
        *X_out    = plhs[2] = mxCreateDoubleMatrix(2,10000, mxREAL);

    const unsigned long long
        tracks = (const unsigned long long)mxGetPr(prhs[0])[0];

    unsigned long long
        *hot  = (unsigned long long*)mxGetPr(hot_out),
        *cold = (unsigned long long*)mxGetPr(cold_out);

    double
        *Xout = mxGetPr(X_out);

    /* call the actual function, and return */
    CountMuons(tracks, hot,cold, Xout);
}

// The actual muon counting
void CountMuons(
    unsigned long long tracks,
    unsigned long long *hot, unsigned long long *cold, double *Xout)
{
    const double
        width  = 1.0e-4,
        height = 2.0e-4,
        bL     = 15.0/sqrt(2.0),
        Ystart = 15.0;

    double
        Xstart,
        Xend,
        u,
        b;

    unsigned long long
        i = 0ul;

    *hot  = 0ul;
    *cold = tracks;

    /* seed the RNG */
    srand((unsigned)time(NULL));

    /* aaaand start! */
    while (i++ < tracks)
    {
        u      = rand_double() - 0.5;
        Xstart = 800.0*rand_double() - 400.0;
        Xend   = Xstart - bL*sign(u)*log(1.0-2.0*fabs(u));
        b      = (Ystart*Xend)/(Xend-Xstart);

        if ((b < height && b > 0.0) || (Xend < width/2.0 && Xend > -width/2.0))
        {
            Xout[0 + *hot*2] = Xstart;
            Xout[1 + *hot*2] = Xend;

            ++(*hot);
            --(*cold);
        }
    }
}
compile in Matlab with
mex DoMuonCounting.c
(after having run mex -setup :) and then use it in conjunction with a small M-wrapper like this:
function [hot,cold, h] = MuonTrack2(tracks)

    % call the MEX function
    [hot,cold, Xtmp] = DoMuonCounting(tracks);

    % process outputs, and generate plots
    hot = uint32(hot); % circumvents limitations in 32-bit matlab
    X = Xtmp(:,1:hot);
    clear Xtmp

    h = NaN;
    if ~isempty(X)
        h = figure;
        hold all
        Y = repmat([15;0], 1, hot);
        plot(X, Y, 'r');
    end

end
which allows me to do
>> tic, MuonTrack2(1e8); toc
Elapsed time is 14.496355 seconds.
Note that the memory footprint of the MEX version is slightly larger, but I think that's nothing to worry about.
The only flaw I see is the fixed maximum number of muon counts (hard-coded as 10000, the initial array size of Xout; needed because standard C has no dynamically growing arrays). If you're worried this limit could be exceeded, simply increase it, make it equal to a fraction of tracks, or do some smarter (but more painful) dynamic array-growing tricks.
In Matlab, it is sometimes faster to vectorize rather than use a for loop. For example, this expression:
(Xend(i,1) < width/2 && Xend(i,1) > -width/2) || (b(i,1) < height && b(i,1) > 0)
which is defined for each value of i, can be rewritten in a vectorised manner like this:
isChosen = (Xend(:,1) < width/2 & Xend(:,1) > -width/2) | (b(:,1) < height & b(:,1)>0)
Expressions like Xend(:,1) will give you a column vector, so Xend(:,1) < width/2 will give you a column vector of boolean values. Note then that I have used & rather than && - this is because & performs an element-wise logical AND, unlike && which only works on scalar values. In this way you can build the entire expression, such that the variable isChosen holds a column vector of boolean values, one for each row of your Xend/b vectors.
Getting counts is now as simple as this:
hot = sum(isChosen);
since true is represented by 1. And:
cold = sum(~isChosen);
Finally, you can get the data points by using the boolean vector to select rows:
plot(X(:, isChosen),Y(:, isChosen),'r'); % Plot chosen values
hold all;
plot(X(:, ~isChosen),Y(:, ~isChosen),'b'); % Plot unchosen values
EDIT: The code should look like this:
isChosen = (Xend(:,1) < width/2 & Xend(:,1) > -width/2) | (b(:,1) < height & b(:,1)>0);
hot = sum(isChosen);
cold = sum(~isChosen);
plot(X(:, isChosen),Y(:, isChosen),'r'); % Plot chosen values
I've got a detector that spits out two numbers at a high rate when I turn it on. I've been capturing my data using HyperTerminal, which seems to be able to keep up with the device.
I wanted to automate the process and control the device entirely through Matlab, but discovered that less than half of the data gets through to Matlab. Are there any known issues with Matlab's speed in this area?
Here's what I'm using to read in data:
s = serial('COM1', 'BaudRate', 115200, 'DataBits', 8, 'Terminator','CR/LF', 'InputBufferSize', 1024);
fopen(s);               % open the serial port before reading/writing

T1 = 1;                 % Initial T1, T2 values
T2 = 10000;
timer = 300;

% Inputs to serial device: T1, T2, runtime (seconds)
fprintf(s, sprintf('%d %d %d\r', T1, T2, timer));

tdata = zeros(1e5,2,'uint16');
data = fopen(sprintf('%s.txt',date_and_trial),'w');
i = 1;                  % row index into tdata

tic;
while toc <= timer
    % Read data into an array, and write to file.
    if s.BytesAvailable >= 13
        line = fgets(s);
        if length(line) == 13
            a = sscanf(line, '%u %u');
            if length(a) == 2
                tdata(i,1) = a(1);
                tdata(i,2) = a(2);
                fprintf(data, sprintf('%d %d\r', tdata(i,1), tdata(i,2)));
                i = i + 1;
            end
        end
    else
        pause(0.01);
    end
end
disp(toc);

fclose(s);
fclose(data);
fprintf('Finished!\r');
I've been thinking that the conditionals might be slowing it down, but they also seem to be necessary to keep things in the '%i %i\n' format that I need. Maybe there's some way to read in all the data and process it after completion?
Since you pass 'Terminator', 'CR/LF' in the serial port open call, you can replace
if s.BytesAvailable >= 13
line = fgets(s);
if length(line) == 13
with line = fscanf(s);. fscanf will wait for the terminator, and return a whole line (assuming there aren't errors on the wire). You can also remove the else pause part. These changes should make the loop run fast enough to keep up with the serial data. I would guess that s.BytesAvailable and pause actually take much more time than you'd expect - the former because it calls out to the OS and the latter because the pause could go much longer than you specify depending on timeslicing.
Making this change introduces a new problem though: fscanf will block waiting for the terminator, meaning if the device stops sending before your toc >= timer condition becomes true, the program will hang. So you should make sure to set a sensible timeout in the serial call.
You can make two minor speedups in the body of the loop: tdata(i,:) = a'; fills the row in one shot, and fprintf(data, '%s\r', line); skips the extra sprintf call. So, putting it all together:
s = serial('COM1', 'BaudRate', 115200, 'DataBits', 8, 'Terminator','CR/LF', 'InputBufferSize', 1024, 'Timeout', 3);
fopen(s);               % open the serial port before reading/writing

T1 = 1;                 % Initial T1, T2 values
T2 = 10000;
timer = 300;

% Inputs to serial device: T1, T2, runtime (seconds)
fprintf(s, sprintf('%d %d %d\r', T1, T2, timer));

tdata = zeros(1e5,2,'uint16');
data = fopen(sprintf('%s.txt',date_and_trial),'wt');
i = 1;                  % row index into tdata

tic;
while toc <= timer
    % Read data into an array, and write to file.
    line = fscanf(s);           %# waits for the CR/LF terminator
    a = sscanf(line, '%u %u');
    if length(a) == 2
        tdata(i,:) = a';        %# sscanf returns a column vector
        fprintf(data, '%s\r', line);
        i = i + 1;
    end
end
disp(toc);

fclose(data);
fclose(s);
fprintf('Finished!\r');