How to perform sum calculations without using atomics in CUDA - performance

In the code below, how can I calculate the sum_array values without using atomicAdd?
Kernel method
__global__ void calculate_sum( int width,
                               int height,
                               int *pntrs,
                               int2 *sum_array )
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if ( row >= height || col >= width ) return;
    int idx = pntrs[ row * width + col ];
    //atomicAdd( &sum_array[ idx ].x, col );
    //atomicAdd( &sum_array[ idx ].y, row );
    sum_array[ idx ].x += col;
    sum_array[ idx ].y += row;
}
Launch Kernel
dim3 dimBlock( 16, 16 );
dim3 dimGrid( ( width + ( dimBlock.x - 1 ) ) / dimBlock.x,
( height + ( dimBlock.y - 1 ) ) / dimBlock.y );

Reduction is the general name for this kind of problem. Look at a presentation on parallel reduction for further explanation, or search the web for other examples.
The general way to solve this is to compute parallel partial sums over segments of global memory within each thread block and store those partial results in global memory. Afterwards, copy the partial results to CPU memory, sum them on the CPU, and copy the final result back to GPU memory. You can avoid the extra copies by running another parallel sum over the partial results on the GPU.
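As a minimal sketch of that two-kernel idea (simplified to a plain 1D sum over an int array rather than your int2 accumulation; the kernel and buffer names are illustrative):
__global__ void partial_sums( const int *in, int *block_sums, int n )
{
    __shared__ int s[ 256 ];                  // one slot per thread; launch with 256 threads per block
    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x + tid;
    s[ tid ] = ( i < n ) ? in[ i ] : 0;
    __syncthreads();
    // tree reduction in shared memory (assumes blockDim.x is a power of two)
    for ( int stride = blockDim.x / 2; stride > 0; stride >>= 1 )
    {
        if ( tid < stride )
            s[ tid ] += s[ tid + stride ];
        __syncthreads();
    }
    if ( tid == 0 )
        block_sums[ blockIdx.x ] = s[ 0 ];    // one partial result per block
}
Run partial_sums once over the input, then either copy block_sums to the host and add them up on the CPU, or launch partial_sums again over block_sums until a single value remains.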
Another approach is to use highly optimized CUDA libraries such as Thrust or CUDPP, which already contain functions that do this.
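For example, with Thrust the whole sum becomes a one-liner (a sketch; it assumes the data already lives in a thrust::device_vector<int> named d_data):
#include <thrust/device_vector.h>
#include <thrust/reduce.h>

// d_data is a thrust::device_vector<int> holding the values to sum
int total = thrust::reduce( d_data.begin(), d_data.end(), 0, thrust::plus<int>() );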

My CUDA is very rusty, but this is roughly how you do it (courtesy of "CUDA by Example", which I strongly suggest you read):
https://developer.nvidia.com/content/cuda-example-introduction-general-purpose-gpu-programming-0
Do a better partitioning of the array you need to sum: threads in CUDA are lightweight, but not so lightweight that you can spawn one for just two additions and expect any performance benefit in return.
At this point each thread will be tasked with summing over a slice of your data: create a shared int array as big as the number of your threads, where each thread will save the partial sum it computed.
Synchronize the threads and reduce the shared memory array:
(please, take it as pseudocode)
// Code to sum over a slice, essentially a loop over each thread's subset
// of the data, accumulating into "localsum" (a local variable)
...
// Save the result in shared memory
// (partial is a __shared__ int array with one slot per thread in the block)
partial[threadIdx.x] = localsum;
// Synchronize the threads:
__syncthreads();
// From now on partial is filled with the result of every thread: you can reduce partial.
// We'll do it the naive way, using a single thread (it can easily be parallelized):
if (threadIdx.x == 0) {
    for (int i = 1; i < blockDim.x; ++i) {
        partial[0] += partial[i];
    }
}
and off you go: partial[0] will hold your sum (or computation).
See the dot product example in "CUDA by Example" for a more rigorous discussion of the topic and a reduction algorithm that runs in O(log n) steps.
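For reference, here is a minimal sketch of that tree reduction over the shared array from the pseudocode above (it assumes blockDim.x is a power of two):
for ( int stride = blockDim.x / 2; stride > 0; stride >>= 1 )
{
    if ( threadIdx.x < stride )
        partial[ threadIdx.x ] += partial[ threadIdx.x + stride ];
    __syncthreads();
}
// partial[0] now holds the block's sum after log2(blockDim.x) iterations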
Hope this helps

Related

How do I allocate memory and copy 2D arrays between CPU / GPU in CUDA without flattening them?

So I want to allocate 2D arrays and also copy them between the CPU and GPU in CUDA, but I am a total beginner and other online materials are very difficult for me to understand or are incomplete. It is important that I am able to access them as a 2D array in the kernel code as shown below.
Note that height != width for the arrays; that further confuses me, as I always struggle with choosing grid sizes.
I've considered flattening them, but I really want to get it working this way.
This is how far I've got by my own research.
__global__ void myKernel(int *firstArray, int *secondArray, int rows, int columns) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    int column = blockIdx.y * blockDim.y + threadIdx.y;
    if (row >= rows || column >= columns)
        return;
    // Do something with the arrays like you would on a CPU, like:
    firstArray[row][column] = row * 2;
    secondArray[row][column] = row * 3;
}
int main() {
    int rows = 300, columns = 200;
    int h_firstArray[rows][columns], h_secondArray[rows][columns];
    int *d_firstArray[rows][columns], *d_secondArray[rows][columns];
    // populate h_ arrays (Can do this bit myself)
    // Allocate memory on device, no idea how to do for 2D arrays.
    // Do memcopies to GPU, no idea how to do for 2D arrays.
    dim3 block(rows,columns);
    dim3 grid (1,1);
    myKernel<<<grid,block>>>(d_firstArray, d_secondArray, rows, columns);
    // Do memcopies back to host, no idea how to do for 2D arrays.
    cudaFree(d_firstArray);
    cudaFree(d_secondArray);
    return 0;
}
EDIT: I was asked if the array width will be known at compile time in the problems I would try to solve. You can assume it is as I'm interested primarily in this particular situation as of now.
In the general case (array dimensions not known until runtime), handling doubly-subscripted access in CUDA device code requires an array of pointers, just as it does in host code. C and C++ handle each subscript as a pointer dereference, in order to reach the final location in the "2D array".
Double-pointer/doubly-subscripted access in device code in the general case is already covered in the canonical answer linked from the cuda tag info page. There are several drawbacks to this, which are covered in that answer so I won't repeat them here.
However, if the array width is known at compile time (array height can be dynamic - i.e. determined at runtime), then we can leverage the compiler and the language typing mechanisms to allow us to circumvent most of the drawbacks. Your code demonstrates several other incorrect patterns for CUDA and/or C/C++ usage:
Passing an item for doubly-subscripted access to a C or C++ function cannot be done with a simple single pointer type like int *firstArray.
Allocating large host arrays via stack-based mechanisms:
int h_firstArray[rows][columns], h_secondArray[rows][columns];
is often problematic in C and C++. These are stack based variables and will often run into stack limits if large enough.
CUDA threadblocks are limited to 1024 threads total. Therefore such a threadblock dimension:
dim3 block(rows,columns);
will not work except for very small sizes of rows and columns (the product must be less than or equal to 1024).
When declaring pointer variables for a device array in CUDA, it is almost never correct to create arrays of pointers:
int *d_firstArray[rows][columns], *d_secondArray[rows][columns];
nor do we allocate space on the host, then "reallocate" those pointers for device usage.
What follows is a worked example with the above items addressed, demonstrating the aforementioned method where the array width is known at compile time:
$ cat t50.cu
#include <stdio.h>
const int array_width = 200;
typedef int my_arr[array_width];
__global__ void myKernel(my_arr *firstArray, my_arr *secondArray, int rows, int columns) {
    int column = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (row >= rows || column >= columns)
        return;
    // Do something with the arrays like you would on a CPU, like:
    firstArray[row][column] = row * 2;
    secondArray[row][column] = row * 3;
}

int main() {
    int rows = 300, columns = array_width;
    my_arr *h_firstArray, *h_secondArray;
    my_arr *d_firstArray, *d_secondArray;
    size_t dsize = rows*columns*sizeof(int);
    h_firstArray = (my_arr *)malloc(dsize);
    h_secondArray = (my_arr *)malloc(dsize);
    // populate h_ arrays
    memset(h_firstArray, 0, dsize);
    memset(h_secondArray, 0, dsize);
    // Allocate memory on device
    cudaMalloc(&d_firstArray, dsize);
    cudaMalloc(&d_secondArray, dsize);
    // Do memcopies to GPU
    cudaMemcpy(d_firstArray, h_firstArray, dsize, cudaMemcpyHostToDevice);
    cudaMemcpy(d_secondArray, h_secondArray, dsize, cudaMemcpyHostToDevice);
    dim3 block(32,32);
    dim3 grid ((columns+block.x-1)/block.x,(rows+block.y-1)/block.y);
    myKernel<<<grid,block>>>(d_firstArray, d_secondArray, rows, columns);
    // Do memcopies back to host
    cudaMemcpy(h_firstArray, d_firstArray, dsize, cudaMemcpyDeviceToHost);
    cudaMemcpy(h_secondArray, d_secondArray, dsize, cudaMemcpyDeviceToHost);
    // validate
    if (cudaGetLastError() != cudaSuccess) {printf("cuda error\n"); return -1;}
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < columns; j++){
            if (h_firstArray[i][j] != i*2) {printf("first mismatch at %d,%d, was: %d, should be: %d\n", i,j,h_firstArray[i][j], i*2); return -1;}
            if (h_secondArray[i][j] != i*3) {printf("second mismatch at %d,%d, was: %d, should be: %d\n", i,j,h_secondArray[i][j], i*3); return -1;}}
    printf("success!\n");
    cudaFree(d_firstArray);
    cudaFree(d_secondArray);
    return 0;
}
$ nvcc -arch=sm_61 -o t50 t50.cu
$ cuda-memcheck ./t50
========= CUDA-MEMCHECK
success!
========= ERROR SUMMARY: 0 errors
$
I've reversed the sense of your kernel indexing (x,y) to help with coalesced global memory access. We see that with this kind of type creation, we can leverage the compiler and the language features to end up with code that allows doubly-subscripted access in both host and device code, while otherwise allowing CUDA operations (e.g. cudaMemcpy) as if we were dealing with single-pointer (e.g. "flattened") arrays.

Warp shuffling for reduction of arrays of any length

I am working on a CUDA kernel which performs a vector dot product (A x B). I assumed that the length of each vector is a multiple of 32 (32, 64, ...) and defined the block size to be equal to the length of the array. Each thread in the block multiplies one element of A by the corresponding element of B (thread i ==> psum = A[i] x B[i]). After the multiplication, I used the following functions, which use the warp shuffling technique to perform the reduction and calculate the sum of all the multiplications.
__inline__ __device__
float warpReduceSum(float val) {
    int warpSize = 32;
    for (int offset = warpSize/2; offset > 0; offset /= 2)
        val += __shfl_down(val, offset);
    return val;
}

__inline__ __device__
float blockReduceSum(float val) {
    static __shared__ float shared[32]; // Shared mem for 32 partial sums (float, to match val)
    int lane = threadIdx.x % warpSize;
    int wid = threadIdx.x / warpSize;
    val = warpReduceSum(val);      // Each warp performs partial reduction
    if (lane==0)
        shared[wid]=val;           // Write reduced value to shared memory
    __syncthreads();               // Wait for all partial reductions
    // read from shared memory only if that warp existed
    val = (threadIdx.x < blockDim.x / warpSize) ? shared[lane] : 0;
    if (wid==0)
        val = warpReduceSum(val);  // Final reduce within first warp
    return val;
}
I simply call blockReduceSum(psum), where psum is the product of two elements computed by one thread.
This approach doesn't work when the length of the array is not a multiple of 32, so my question is: can we change this code so that it also works for any length? Or is it impossible because, if the length of the array is not a multiple of 32, some warps would have elements belonging to more than one array?
First of all, depending on the GPU you are using, performing the dot product with just 1 block will probably not be very efficient (unless you are batching several dot products in one kernel, each done by a single block).
To answer your question: you can reuse the code you have written by calling your kernel with the number of threads being the closest multiple of 32 greater than or equal to N (the length of the array), and by introducing an if statement before the call to blockReduceSum, like this:
__global__ void kernel(float * A, float * B, int N) {
    float psum = 0;
    if (threadIdx.x < N) // threadIdx.x works because you are using a single block; you will need
                         // to change it to a more general id once you move to multiple blocks
        psum = A[threadIdx.x] * B[threadIdx.x];
    blockReduceSum(psum);
    // The rest of the computation
}
That way, threads that do not have array element associated with them, but that need to be there due to use of __shfl, will contribute 0 to the sum.
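For example, the launch could round N up to the next multiple of 32 (a sketch, keeping the single-block design of the question):
int threads = ( ( N + 31 ) / 32 ) * 32;   // closest multiple of 32 that is >= N
kernel<<< 1, threads >>>( d_A, d_B, N );  // single block, as in the original design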

Which way to order a shared 2D/3D array for parallel reduction over 1 dimension in CUDA/OpenCL?

Overall goal
I have several reductions to make on a bipartite graph, represented by two dense arrays for the vertices and a dense array specifying whether an edge is present between the two. Say the two arrays are a0[] and a1[], and all edges go like e[i0][i1] (that is, from elements in a0 to elements in a1).
There are ~100+100 vertices, and ~100*100 edges, so each thread is responsible for one edge.
Task 1 : max reduction
For each vertex in a0 I want to find the maximum of all vertices (in a1) connected to it, and then the same in reverse: having assigned the result to an array b0, for each vertex in a1, I want to find the maximum b0[i0] of the connected vertices.
To do this, I:
1) load into shared memory
#define DC_NUM_FROM_SHARED 16
#define DC_NUM_TO_SHARED 16
__global__ void max_reduce_down(
    Value* value1
    , Value* max_value_in_connected
    , int r0_size, int r1_size
    , bool** connected
    )
{
    int id_from, id_to;
    id_from = blockIdx.x * blockDim.x + threadIdx.x;
    id_to   = blockIdx.y * blockDim.y + threadIdx.y;
    bool within_bounds = (id_from < r0_size) && (id_to < r1_size);
    // load into shared memory
    __shared__ Value value[DC_NUM_TO_SHARED][DC_NUM_FROM_SHARED]; // FROM is the inner (consecutive) dimension
    if (within_bounds)
        value[threadIdx.y][threadIdx.x] = connected[id_to][id_from] ? value1[id_to] : 0;
    else
        value[threadIdx.y][threadIdx.x] = 0;
    __syncthreads();
    if (!within_bounds)
        return;
2) reduce
for (int stride = DC_NUM_TO_SHARED/2; threadIdx.y < stride; stride >>= 1)
{
    value[threadIdx.y][threadIdx.x] = max(value[threadIdx.y][threadIdx.x], value[threadIdx.y + stride][threadIdx.x]);
    __syncthreads();
}
3) write back
max_value_in_connected[id_from] = value[0][threadIdx.x];
Task 2 : best k
Similar problem, but the reduction is only over the vertices in a0: I need to find the k best candidates chosen from the connected vertices in a1 (k is ~5).
1) I initialize the shared array with zero elements except for the 1st place
int id_from, id_to;
id_from = blockIdx.x * blockDim.x + threadIdx.x;
id_to = blockIdx.y * blockDim.y + threadIdx.y;
__shared__ Value values[MAX_CHAMPS * CHAMPS_NUM_FROM_SHARED * CHAMPS_NUM_TO_SHARED]; // champion overlaps
__shared__ int champs[MAX_CHAMPS * CHAMPS_NUM_FROM_SHARED * CHAMPS_NUM_TO_SHARED];   // overlap champions
bool within_bounds = (id_from < r0_size) && (id_to < r1_size);
int i = threadIdx.y * CHAMPS_NUM_FROM_SHARED + threadIdx.x;
if (within_bounds)
{
    values[i] = connected[id_to][id_from] * values1[id_to];
    champs[i] = connected[id_to][id_from] ? id_to : -1;
}
else
{
    values[i] = 0;
    champs[i] = -1;
}
for (int place = 1; place < CHAMP_COUNT; place++)
{
    i = (place * CHAMPS_NUM_TO_SHARED + threadIdx.y) * CHAMPS_NUM_FROM_SHARED + threadIdx.x;
    values[i] = 0;
    champs[i] = -1;
}
if (!within_bounds)
    return;
__syncthreads();
2) reduce it
for (int stride = CHAMPS_NUM_TO_SHARED/2; threadIdx.y < stride; stride >>= 1)
{
    merge_2_champs(values, champs, CHAMP_COUNT, id_from, id_to, id_to + stride);
    __syncthreads();
}
3) write the results back
for (int place = 0; place < LOCAL_DESIRED_ACTIVITY; place++)
    champs0[place][id_from] = champs[place * CHAMPS_NUM_TO_SHARED * CHAMPS_NUM_FROM_SHARED + threadIdx.x];
Issue
How do I order (transpose) the elements in the shared array, so that memory access uses the cache better?
Does it matter at this point, or is there much more I can gain from other optimizations?
Would it be better to transpose the edge matrix if I needed to optimize for Task 2? (as far as I understood, there is a symmetry in Task 1, so it doesn't matter).
P.S.
I have put off unrolling loops and doing the first reduction iteration while loading, since I thought it would be too complicated to do before I have explored simpler ways.
For Task 2, it would be nice to not load zero elements, since the array would never need to grow, and only start shrinking once log k steps have been made. This would make it k times more compact in shared memory! But I dread the resulting index math.
Syntax and Correctness
The unusual types are just typedef'ed ints/chars/etc - AFAIK, on GPUs it makes sense to keep those as compact as possible. I have not run the code yet, so there is no need to check it for indexing errors.
Also, I am using CUDA, but I am interested in an OpenCL perspective as well, since I think the best solution should be the same, and I will be using OpenCL in the future anyway.
OK, I think I figured this out.
The two alternatives I am considering are to have the reductions work over the y dimension and be independent along the x dimension, or vice versa (the x dimension being the contiguous one). In either case, the scheduler is able to assemble threads into warps along the x dimension, so some coherence is guaranteed. However, having coherence extend beyond a warp would be great. Also, due to the 2D/3D nature of the shared arrays, one would have to limit the dimensions to 16 or even 8.
To ensure coalescence within a warp, the scheduler has to assemble warps along the x dimension.
If reducing over x dimension, after each iteration, the number of active threads in a warp will halve. However, if reducing over y dimension, it is the number of active warps that will halve.
So, I need to reduce over y.
Unless the transpose (load) step is the slowest part, which would be an unusual case.
Coalesced buffer reads really matter; kernels can be 32x slower if you don't do them. It can be worth doing a re-arrangement pass if it means being able to do them (of course, the re-arrangement pass needs to be coalesced as well, but you can often leverage shared local memory to do this).
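If you do decide on a re-arrangement pass, a shared-memory tiled transpose is the usual way to keep both the read and the write coalesced; here is a minimal CUDA sketch (it assumes 32x32 thread blocks and row-major float data; in OpenCL the same structure works with local memory and barriers):
__global__ void transpose_tiled( const float *in, float *out, int width, int height )
{
    __shared__ float tile[ 32 ][ 33 ];            // 33 columns to avoid shared-memory bank conflicts
    int x = blockIdx.x * 32 + threadIdx.x;
    int y = blockIdx.y * 32 + threadIdx.y;
    if ( x < width && y < height )
        tile[ threadIdx.y ][ threadIdx.x ] = in[ y * width + x ];    // coalesced read
    __syncthreads();
    x = blockIdx.y * 32 + threadIdx.x;            // swap block offsets for the write
    y = blockIdx.x * 32 + threadIdx.y;
    if ( x < height && y < width )
        out[ y * height + x ] = tile[ threadIdx.x ][ threadIdx.y ];  // coalesced write
}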

OpenCL slow -- not sure why

I'm teaching myself OpenCL by trying to optimize the mpeg4dst reference audio encoder. I achieved a 3x speedup by using vector instructions on CPU but I figured the GPU could probably do better.
I'm focusing on computing auto-correlation vectors in OpenCL as my first area of improvement. The CPU code is:
for (int i = 0; i < NrOfChannels; i++) {
    for (int shift = 0; shift <= PredOrder[ChannelFilter[i]]; shift++)
        vDSP_dotpr(Signal[i] + shift, 1, Signal[i], 1, &out, NrOfChannelBits - shift);
}
NrOfChannels = 6
PredOrder = 129
NrOfChannelBits = 150528.
On my test file, this function takes approximately 188 ms to complete.
Here's my OpenCL method:
kernel void calculateAutocorrelation(size_t offset,
                                     global const float *input,
                                     global float *output,
                                     size_t size) {
    size_t index = get_global_id(0);
    size_t end = size - index;
    float sum = 0.0;
    for (size_t i = 0; i < end; i++)
        sum += input[i + offset] * input[i + offset + index];
    output[index] = sum;
}
This is how it is called:
gcl_memcpy(gpu_signal_in, Signal, sizeof(float) * NrOfChannels * MAXCHBITS);
for (int i = 0; i < NrOfChannels; i++) {
    size_t sz = PredOrder[ChannelFilter[i]] + 1;
    cl_ndrange range = { 1, { 0, 0, 0 }, { sz, 0, 0 }, { 0, 0, 0 } };
    calculateAutocorrelation_kernel(&range, i * MAXCHBITS, (cl_float *)gpu_signal_in, (cl_float *)gpu_out, NrOfChannelBits);
    gcl_memcpy(out, gpu_out, sizeof(float) * sz);
}
According to Instruments, my OpenCL implementation seems to take about 13ms, with about 54ms of memory copy overhead (gcl_memcpy).
When I use a much larger test file, 1 minute of 2-channel music vs. 1 second of 6-channel, the measured performance of the OpenCL code seems to be the same, but the CPU usage falls to about 50% and the whole program takes about 2x longer to run.
I can't find a cause for this in Instruments and I haven't read anything yet that suggests that I should expect very heavy overhead switching in and out of OpenCL.
If I'm reading your kernel code correctly, each work item is iterating over all of the data from its location to the end. This isn't going to be efficient. For one (and the primary performance concern), the memory accesses won't be coalesced and so won't achieve full memory bandwidth. Secondly, because each work item has a different amount of work, there will be branch divergence within a work group, which will leave some threads idle waiting for others.
This seems like it has a lot in common with a reduction problem and I'd suggest reading up on "parallel reduction" to get some hints about doing an operation like this in parallel.
To see how memory is being read, work out how 16 work items (say, global_id 0 to 15) will be reading data for each step.
Note that if every work item in a work group accesses the same memory, there is a "broadcast" optimization the hardware can make. So just reversing the order of your loop could improve things.

CUDA: different results from CPU

Some questions about CUDA.
1) I noticed that, in every sample code, operations which are not parallel (e.g., the computation of a scalar), performed in global functions, are always done by specifying a certain thread. For example, in this simple code for a dot product, thread 0 performs the summation:
__global__ void dot( int *a, int *b, int *c )
{
    // Shared memory for results of multiplication
    __shared__ int temp[N];
    temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];
    // Thread 0 sums the pairwise products
    if( 0 == threadIdx.x )
    {
        int sum = 0;
        for( int i = 0; i < N; i++ )
            sum += temp[i];
        *c = sum;
    }
}
This is fine for me; however, in code which I wrote I did not specify the thread for the non-parallel operation, and it still works. Hence, is it compulsory to specify the thread? In particular, the non-parallel operation which I want to perform is the following:
if (epsilon == 1)
{
    V[0] = B*(Exp - 1 - b);
}
else
{
    V[0] = B*(Exp - 1 + a);
}
The various variables were passed as arguments of the global function. And here comes my second question.
2) I computed the value of V[0] with a program in CUDA and another, serial one on the CPU, obtaining different results. Obviously I thought that the problem in CUDA could be that I did not specify the thread, but, even with this, the result does not change, and it is still (much) greater than the serial one: 6.71201e+22 vs -2908.05. Where could the problem be? The other calculations performed in the global function are the following:
int tid = threadIdx.x;
if ( tid != 0 && tid < N )
{
    {Various stuff which does not involve V or the variables used to compute V[0]}
    V[tid] = B*(1/(1+alpha[tid]*alpha[tid])*(One_G[tid]*Exp - Cos - alpha[tid]*Sin) + kappa[tid]*Sin);
}
As you can see, in my condition I avoid considering the case tid == 0.
3) Finally, a last question: usually in the sample codes I noticed that, if you want to use on the CPU values allocated and computed in GPU memory, you should copy those values to the CPU (e.g., with the command cudaMemcpy, specifying cudaMemcpyDeviceToHost). But I manage to use those values directly in the main code (CPU) without any problem. Could this be a clue that there is something wrong with my GPU (or my installation of CUDA), which also causes the previous odd things?
Thank you for your help.
== Added on the 5th January ==
Sorry for my late reply. Before invoking the kernel, there are all the memory allocations of the arrays to compute (which are quite a lot). In particular, the code for the array involved in my question is:
float * V;
cudaMalloc( (void**)&V, N * sizeof(float) );
At the end of the code I wrote:
float V_ [N];
cudaMemcpy( &V_, V, N * sizeof(float), cudaMemcpyDeviceToHost );
cudaFree(V);
cout << V_[0] << endl;
Thank you again for your attention.
if you don't have any cudaMemcpy in your code, that's exactly the problem. ;-)
The GPU is accessing its own memory (the RAM on your graphics card), while the CPU is accessing the RAM on your mainboard.
You need to allocate and copy alpha, kappa, One_G and all other arrays to your GPU first, using cudaMemcpy, then run your kernel, and after that copy your results back to the CPU.
Also, don't forget to allocate the memory on BOTH sides.
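A minimal sketch of that round trip (the names alpha and V come from your snippets; the other host pointers, sizes and kernel name are illustrative):
float *d_alpha, *d_V;
cudaMalloc( (void**)&d_alpha, N * sizeof(float) );
cudaMalloc( (void**)&d_V,     N * sizeof(float) );
cudaMemcpy( d_alpha, h_alpha, N * sizeof(float), cudaMemcpyHostToDevice );
// ... same cudaMalloc/cudaMemcpy for kappa, One_G and the other input arrays ...
myKernel<<< blocks, threads >>>( d_V, d_alpha /*, ... */ );
cudaMemcpy( h_V, d_V, N * sizeof(float), cudaMemcpyDeviceToHost );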
As for the non-parallel stuff: If the result is always the same, all threads will write the same thing, so the result is exactly the same, just quite a bit more inefficient, since all of them try to access the same resources.
Is that the exact code you're using?
In regards to question 1, you should have a __syncthreads() after the assignment to your shared memory, temp.
Otherwise you'll get a race condition where thread 0 can start the summation prior to temp being fully populated.
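In the dot-product kernel above, that means adding the barrier right after the store to temp (a minimal sketch):
temp[threadIdx.x] = a[threadIdx.x] * b[threadIdx.x];
__syncthreads();   // all products must be in shared memory before thread 0 starts summing
if( 0 == threadIdx.x )
{
    // ... summation as before ...
}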
As for your other question about specifying the thread, if you have
if (epsilon == 1)
{
    V[0] = B*(Exp - 1 - b);
}
else
{
    V[0] = B*(Exp - 1 + a);
}
Then every thread will execute that code; for example, if you have X number of threads executing, and epsilon is 1 for all of them, then all X threads will evaluate the same line:
V[0] = B*(Exp - 1 - b);
and hence you'll have another race condition, as you'll have all X threads writing to V[0]. If all the threads have the same value for B*(Exp - 1 - b), then you might not notice a difference, while if they have different values then you're liable to get different results each time, depending on the order in which the threads arrive.
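One way to avoid that race (a sketch based on your snippet) is to let a single thread write the scalar result:
if( threadIdx.x == 0 && blockIdx.x == 0 )
{
    if( epsilon == 1 )
        V[0] = B*(Exp - 1 - b);
    else
        V[0] = B*(Exp - 1 + a);
}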
