I have two long (1,000,000+ entries) vectors I and J like so
N = 1000000;
M = 1000;
I = ceil(M*rand(1,N));
J = ceil(M*rand(1,N));
and they contain a bunch of indices which I use to select elements from a small vector
R = randn(1,M);
like so
R1 = R(I);
R2 = R(J);
The resulting vectors R1 and R2 have length N of course. Now I want to do some heavy-duty processing on these vectors, for instance compute
Q = exp(-(R1-R2).^2);
How could I parallelize this code for efficient multicore processing? I suppose that R1 and R2 should be cast in distributed (or rather codistributed) form?
Thanks!
If you have an NVIDIA GPU compatible with CUDA and the Parallel Computing Toolbox installed, you can try this:
N = 1000000;
M = 1000;
I = ceil(M*gpuArray.rand(1,N));
J = ceil(M*gpuArray.rand(1,N));
R = gpuArray.randn(1,M);
R1 = R(I);
R2 = R(J);
Q = exp(-(R1-R2).^2);
The code will be parallelized under the hood and executed on the GPU across thousands of threads. On my machine (Xeon E5-2665 CPU, Tesla K20 GPU) I see more than a 10x speedup of the GPU version over the serial CPU version.
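If you want to quantify the gain on your own machine, here is a minimal timing sketch (assuming the gpuArray variables I, J, R from the snippet above; the *_cpu copies are just host-side equivalents for the serial baseline):
% hypothetical timing comparison; timeit/gputimeit handle warm-up and averaging
I_cpu = gather(I); J_cpu = gather(J); R_cpu = gather(R);
tCPU = timeit(@() exp(-(R_cpu(I_cpu) - R_cpu(J_cpu)).^2));
tGPU = gputimeit(@() exp(-(R(I) - R(J)).^2));
fprintf('CPU: %.4f s, GPU: %.4f s, speedup: %.1fx\n', tCPU, tGPU, tCPU/tGPU);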
I have 5000 3D points in a matrix A and another 5000 3D points in a matrix B.
For each point in A, I want to find the smallest distance to a point in B. These distances should be stored in an array with 5000 entries.
So far I have this solution, running in about 0.145342 seconds (23 allocations: 191.079 MiB). How can I improve this further?
using Distances
A = rand(5000, 3)
B = rand(5000, 3)
mis = @time minimum(Distances.pairwise(SqEuclidean(), A, B, dims=1), dims=2)
This is a standard way to do it as it will have a better time complexity (especially for larger data):
using NearestNeighbors
nn(KDTree(B'; leafsize = 10), A')[2] .^ 2
Two comments:
by default the Euclidean distance is computed (so I square the result)
by default NearestNeighbors.jl assumes observations are stored in columns, so I need B' and A' in the solution; if your original data were transposed it would not be needed. The reason it is designed this way is that Julia uses column-major matrix storage.
Generating a big distance matrix with Distances.pairwise(SqEuclidean(), A, B, dims=1) is not efficient: main memory is slow nowadays compared to CPU caches and the computing power of modern CPUs, and this is not going to get better any time soon (see the "memory wall"). It is faster to compute the minimum on the fly using two basic nested for loops. Additionally, one can use multiple cores to compute this faster using multiple threads.
function computeMinDist(A, B)
    n, m = size(A, 1), size(B, 1)
    result = zeros(n)
    Threads.@threads for i = 1:n
        minSqDist = Inf
        @inbounds for j = 1:m
            dx = A[i,1] - B[j,1]
            dy = A[i,2] - B[j,2]
            dz = A[i,3] - B[j,3]
            sqDist = dx*dx + dy*dy + dz*dz
            if sqDist < minSqDist
                minSqDist = sqDist
            end
        end
        result[i] = minSqDist
    end
    return result
end
mis = @time computeMinDist(A, B)
Note that Julia uses 1 thread by default, but this can be tuned with the environment variable JULIA_NUM_THREADS=auto or by starting Julia with the flag --threads=auto. See the multi-threading documentation for more information.
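You can verify the active thread count from within Julia before benchmarking:
Threads.nthreads()  # returns 1 unless JULIA_NUM_THREADS or --threads was set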
Performance results
Here are performance results on my i5-9600KF machine with 6 cores (with two 5000x3 matrices):
Initial implementation: 93.4 ms
This implementation: 4.4 ms
This implementation is thus 21 times faster.
Results are identical to within a few ULPs.
Note the code can certainly be optimized further using loop tiling, and possibly by transposing A and B so the JIT can generate a more efficient implementation using SIMD instructions.
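For instance, here is an untested sketch of the transposed variant, where each point becomes a contiguous column (At = A', Bt = B'); whether it actually helps depends on the machine and should be benchmarked:
function computeMinDistT(At, Bt)  # At is 3xN, Bt is 3xM
    n, m = size(At, 2), size(Bt, 2)
    result = zeros(n)
    Threads.@threads for i = 1:n
        best = Inf
        @inbounds for j = 1:m
            dx = At[1,i] - Bt[1,j]
            dy = At[2,i] - Bt[2,j]
            dz = At[3,i] - Bt[3,j]
            d = dx*dx + dy*dy + dz*dz
            best = ifelse(d < best, d, best)  # branchless min can help vectorization
        end
        result[i] = best
    end
    return result
end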
There are many PyOpenCL examples of doing arithmetic operations on vectors of size 4. If I have to multiply 100 integers by another 100 integers all at once using an AMD GPU on a Mac through PyOpenCL, can someone provide and explain the code, please? Since the maximum vector size is 16, I would like to know how I can ask the GPU to do this operation, which needs to process more than 16 integers in parallel.
I have an AMD FirePro D500 GPU.
Does every work item (thread) perform a task independently? If so: there are 24 compute units, and each compute unit has 255 work items in a single dimension ([255,255,255] in three dimensions). Does that mean my GPU has 6120 independent work items?
I made a short example of an entry-wise multiplication of two one-dimensional integer arrays. Note that if you plan to multiply only 100 values, it will not be faster than doing this on the CPU, since there is a lot of overhead for copying the data and so on.
import pyopencl as cl
import numpy as np

# this is compiled by the GPU driver and will be executed on the GPU
kernelsource = """
__kernel void multInt( __global int* res,
                       __global int* a,
                       __global int* b){
    int i = get_global_id(0);
    int N = get_global_size(0); // this is the dimension given as second argument in the kernel execution
    res[i] = a[i] * b[i];
}
"""

device = cl.get_platforms()[0].get_devices()[0]
context = cl.Context([device])
program = cl.Program(context, kernelsource).build()
queue = cl.CommandQueue(context)

# prepare input data in numpy arrays in host memory (i.e. accessible by the CPU)
N = 100
a_local = np.array(range(N)).astype(np.int32)
b_local = (np.ones(N)*10).astype(np.int32)

# prepare the result buffer in host memory
res_local = np.zeros(N).astype(np.int32)

# copy input data to GPU memory
a_buf = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=a_local)
b_buf = cl.Buffer(context, cl.mem_flags.READ_ONLY | cl.mem_flags.COPY_HOST_PTR, hostbuf=b_local)

# prepare the result buffer in GPU memory
res_buf = cl.Buffer(context, cl.mem_flags.WRITE_ONLY, res_local.nbytes)

# execute the previously compiled kernel on the GPU
program.multInt(queue, (N,), None, res_buf, a_buf, b_buf)

# copy the result from GPU memory to CPU memory
cl.enqueue_copy(queue, res_local, res_buf)
print("result: {}".format(res_local))
Regarding the PyOpenCL documentation: once you have understood the working principle of GPGPU programming and the programming concepts of OpenCL, PyOpenCL is pretty straightforward.
I have the following problem: given a 3D irregular geometry A with (i,j,k)-coordinates, which are the centroids of connected voxels, create a table with the (i_out,j_out,k_out)-coordinates of the cells that represent the complementary set B of A within its bounding box, which we may call C. That is to say, I need the voxel coordinates of the set B = C - A.
To get this done, I am using the Matlab code below, but it is taking too much time to complete when C is fairly large, so I would like to speed it up. To make it clear: cvc is the matrix of voxel coordinates of A; allcvc should produce C, and B results from outcvc after the setdiff.
Does anyone have a clue about the code's performance, or even a way to improve my strategy?
Problem: the for-loop seems to be the villain.
My attempts: I have tried to follow some hints from Yair Altman's book by doing some tic,toc analyses, using pre-allocation, and using int8 since I do not need double values. deal gave me a slight improvement with min,max. I have also checked this discussion here, but parallelism, for instance, is a limitation I have for now.
% A bounding box limits
m = min(cvc,[],1);
M = max(cvc,[],1);
[im,jm,km,iM,jM,kM] = deal(m(1),m(2),m(3),M(1),M(2),M(3));
% (i,j,k) indices of regular grid
I = im:iM;
J = jm:jM;
K = km:kM;
% (i,j,k) table
m = length(I);
n = length(J);
p = length(K);
num = m*n*p;
allcvc = zeros(num,3,'int8');
N = 1;
for i = 1:m
    for j = 1:n
        for k = 1:p
            allcvc(N,:) = [I(i),J(j),K(k)];
            N = N + 1;
        end
    end
end
% operation of exclusion: out = all - in
[outcvc,~] = setdiff(allcvc,cvc,'rows');
To avoid all for-loops in the present code, you can use the ndgrid or meshgrid functions. For example
[I,J,K] = ndgrid(im:iM, jm:jM, km:kM);
allcvc = [I(:),J(:),K(:)];
instead of your code between % (i,j,k) indices of regular grid and % operation of exclusion: out =.
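Putting it together, everything from the bounding-box limits down to the exclusion collapses to a few vectorized lines (a sketch assuming cvc as defined in the question):
% vectorized replacement for the nested loops
m = min(cvc,[],1);
M = max(cvc,[],1);
[I,J,K] = ndgrid(m(1):M(1), m(2):M(2), m(3):M(3));
allcvc = [I(:),J(:),K(:)];
outcvc = setdiff(allcvc, cvc, 'rows');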
Background
I have a large set of vectors (orientation data in an axis-angle representation... the axis is the vector) that I want to apply a clustering algorithm to. I tried kmeans, but the computational time was too long (it never finished). So instead I am trying to implement the KFCG algorithm, which is faster (Kirke 2010):
Initially we have one cluster containing the entire set of training vectors, with codevector C1 as its centroid. In the first iteration of the algorithm, the clusters are formed by comparing the first element of each training vector Xi with the first element of code vector C1. The vector Xi is grouped into cluster 1 if xi1 < c11; otherwise Xi is grouped into cluster 2, as shown in Figure 2(a), where the codevector dimension space is 2. In the second iteration, cluster 1 is split into two by comparing the second element xi2 of each vector Xi belonging to cluster 1 with the second element of that cluster's codevector; cluster 2 is split into two in the same way, as shown in Figure 2(b). This procedure is repeated until the codebook reaches the size specified by the user.
I'm unsure what ratio is appropriate for the codebook, but it shouldn't matter for the code optimization. Also note that mine is 3-D, so the same process is done for the third dimension.
My code attempts
I've tried implementing the above algorithm in Matlab 2013 (Student Version). Here are some different structures I've tried, but they take way too long (I have never seen them complete):
% training vectors:
% Atgood = Nx4 matrix (see the test data below if you want to test)
vecA = Atgood(:,1:3);
roA = size(vecA,1);
% codebook size, Nsel, is a ratio of the data
remainFrac2 = 0.5;
Nseltemp = remainFrac2*roA; % codebook size
% ensure the selected size, after rounding to the nearest power of 2, is NOT greater than roA
if 2^round(log2(Nseltemp)) < roA
    NselIter = round(log2(Nseltemp));
else
    NselIter = ceil(log2(Nseltemp)-1);
end
Nsel = 2^NselIter; % power of 2 - for LGB and other algorithms
MAIN BLOCK TO OPTIMIZE:
% KFCG:
%%cluster = cell(1,Nsel); % unsure #rows - don't know how to initialize if need mean...
codevec(1,1:3) = mean(vecA,1);
count1 = 1;
count2 = 1;
ind = 1;
for kk = 1:NselIter
    hh2 = 1:2:size(codevec,1)*2;
    for hh1 = 1:length(hh2)
        hh = hh2(hh1);
%         for ii = 1:roA
%             if vecA(ii,ind) < codevec(hh1,ind)
%                 cluster{1,hh}(count1,1:4) = Atgood(ii,:); % want all 4 elements
%                 count1 = count1+1;
%             else
%                 cluster{1,hh+1}(count2,1:4) = Atgood(ii,:); % want all 4
%                 count2 = count2+1;
%             end
%         end
        % EDIT: my attempt at optimizing the above for loop:
        repcv = repmat(codevec(hh1,ind),[size(vecA,1),1]);
        splitind = vecA(:,ind) >= repcv;
        splitind2 = vecA(:,ind) < repcv;
        cluster{1,hh} = vecA(splitind,:);
        cluster{1,hh+1} = vecA(splitind2,:);
    end
    clear codevec
    % only mean the 1x3 vector portion of the cluster - for the centroid
    codevec = cell2mat((cellfun(@(x) mean(x(:,1:3),1),cluster,'UniformOutput',false))');
    if ind < 3
        ind = ind+1;
    else
        ind = 1;
    end
end
if length(codevec) ~= Nsel
    warning('codevec ~= Nsel');
end
Alternatively, instead of cells, I thought 3-D matrices would be faster? I tried, but it was slower using my method of appending the next row each iteration (temp=[]; for ... temp=[temp;new];).
Also, I wasn't sure what was best to loop with, for or while:
%If initialize cell to full length
while length(find(~cellfun('isempty',cluster))) < Nsel
Well, anyway, the first method was fastest for me.
Questions
Is the logic standard? Not in the sense that it matches the algorithm described, but from a coding perspective: are there any weird methods I employed (especially with those multiple inner loops) that slow it down? Where can I speed it up (you can just point me to resources or previous questions)?
My array size, Atgood, is 1,000,000x4, making NselIter = 19 - do I just need to find a way to decrease this size, or can the code be optimized?
Should this be asked on CodeReview? If so, I'll move it.
Testing Data
Here's some random vectors you can use to test:
for ii = 1:1000 % my size is ~1,000,000
    omega = 2*rand(3,1)-1;
    omega = (omega/norm(omega))';
    Atgood(ii,1:4) = [omega,57];
end
Your biggest issue is re-iterating through all of vecA FOR EACH CODEVECTOR, rather than just the samples that are part of the corresponding cluster. You're supposed to split each cluster on its own codevector. As it is, your cluster structure grows and grows, and each iteration processes more and more samples.
Your second issue is the loop around the comparisons, and the appending of samples to build up the clusters. Both of those can be solved by vectorizing the comparison operation. Oh, I just saw your edit, where this was optimized. Much better. But codevec(hh1,ind) is just a scalar, so you don't even need the repmat.
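For reference, the comparison from your edit can be written directly against the scalar:
% no repmat needed: comparing a vector with a scalar broadcasts in MATLAB
splitind = vecA(:,ind) >= codevec(hh1,ind);
splitind2 = ~splitind;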
Try this version:
% (preallocations added in edit)
cluster = cell(1,Nsel);
codevec = zeros(Nsel, 3);
codevec(1,:) = mean(Atgood(:,1:3),1);
cluster{1} = Atgood;
nClusters = 1;
ind = 1;
while nClusters < Nsel
    for c = 1:nClusters
        lower_cluster_logical = cluster{c}(:,ind) < codevec(c,ind);
        cluster{nClusters+c} = cluster{c}(~lower_cluster_logical,:);
        cluster{c} = cluster{c}(lower_cluster_logical,:);
        codevec(c,:) = mean(cluster{c}(:,1:3), 1);
        codevec(nClusters+c,:) = mean(cluster{nClusters+c}(:,1:3), 1);
    end
    ind = rem(ind,3) + 1;
    nClusters = nClusters*2;
end
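As a quick sanity check after the loop (e.g. with the test data from the question), every sample should end up in exactly one cluster:
% total cluster membership should equal the number of input samples
assert(sum(cellfun(@(c) size(c,1), cluster)) == size(Atgood,1));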
I am writing a program which requires multiplying hundreds of matrices in parallel using CUDA. Can somebody explain how to perform this operation?
I have seen that the Kepler architecture is capable of dynamic parallelism. Has somebody used this architecture, and if so, which Nvidia graphics card?
The easiest way to get fast parallel matrix multiplication with CUDA is through the ArrayFire CUDA library using the GFOR loop. Here's some code that does what you want:
int n = 8, m = 8;       // dimensions
int t = 10;             // number of different matrices
array A = randu(m,n,t); // many matrices
array B = randu(m,n);   // one matrix
array C = zeros(m,n,t); // destination
// multiply C=A*B for all A, at the same time
gfor (array i, A.dims(2)) {
    C(span,span,i) = matmul(A(span,span,i), B);
}
print( A );
print( B );
print( C );
ArrayFire automatically tiles out the computation efficiently for execution on the GPU. All that is optimized behind the scenes for you. I find it to be faster than trying to write it by hand myself.
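If you would rather stay with the raw CUDA libraries, cuBLAS provides batched GEMM for exactly this many-small-matrices case. Below is an untested sketch (error checking and data initialization omitted; all variable names are illustrative), where each C_i = A_i * B is computed in a single batched call:
// sketch: batching many small GEMMs with cublasSgemmBatched
#include <cublas_v2.h>
#include <cuda_runtime.h>
#include <vector>

int main() {
    const int n = 8;   // square matrix dimension
    const int t = 10;  // number of matrices in the batch

    // one contiguous device block per operand; each A_i gets its own slice,
    // while a single B is shared by all batch elements
    float *dA, *dB, *dC;
    cudaMalloc((void**)&dA, sizeof(float) * n * n * t);
    cudaMalloc((void**)&dB, sizeof(float) * n * n);
    cudaMalloc((void**)&dC, sizeof(float) * n * n * t);
    // ... fill dA and dB here (omitted) ...

    // cublasSgemmBatched expects device arrays of per-matrix pointers
    std::vector<const float*> hA(t), hB(t);
    std::vector<float*> hC(t);
    for (int i = 0; i < t; ++i) {
        hA[i] = dA + i * n * n;
        hB[i] = dB;  // same B for every batch element
        hC[i] = dC + i * n * n;
    }
    const float **dAp; const float **dBp; float **dCp;
    cudaMalloc((void**)&dAp, sizeof(float*) * t);
    cudaMalloc((void**)&dBp, sizeof(float*) * t);
    cudaMalloc((void**)&dCp, sizeof(float*) * t);
    cudaMemcpy(dAp, hA.data(), sizeof(float*) * t, cudaMemcpyHostToDevice);
    cudaMemcpy(dBp, hB.data(), sizeof(float*) * t, cudaMemcpyHostToDevice);
    cudaMemcpy(dCp, hC.data(), sizeof(float*) * t, cudaMemcpyHostToDevice);

    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // C_i = A_i * B for all i in one call (square n-by-n matrices)
    cublasSgemmBatched(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                       n, n, n, &alpha, dAp, n, dBp, n, &beta, dCp, n, t);

    cublasDestroy(handle);
    cudaFree(dA); cudaFree(dB); cudaFree(dC);
    cudaFree(dAp); cudaFree(dBp); cudaFree(dCp);
    return 0;
}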