Concurrently initializing many arrays with random numbers using cuRAND and a CUDA kernel

I am trying to initialize 100 elements of each of these parallel arrays with randomly generated numbers, concurrently on the GPU. However, my routine is not producing a variety of random numbers. When I debug the code in Visual Studio I see one number for every element in the array. The objective of this code is to optimize the CImg FilledTriangles routine to use the GPU where it can.
What am I doing wrong and how can I fix it? Here is my code:
__global__ void initCurand(curandState* state, unsigned long seed){
int idx = threadIdx.x + blockIdx.x * blockDim.x;
curand_init(seed, idx, 0, &state[idx]);
__syncthreads();
}
/*
* CUDA kernel that will execute 100 threads in parallel
*/
__global__ void initializeArrays(float* posx, float* posy,float* rayon, float* veloc, float* opacity
,float * angle, unsigned char** color, int height, int width, curandState* state){
int idx = threadIdx.x + blockIdx.x * blockDim.x;
curandState localState = state[idx];
__syncthreads();
posx[idx] = (float)(curand_uniform(&localState)*width);
posy[idx] = (float)(curand_uniform(&localState)*height);
rayon[idx] = (float)(10 + curand_uniform(&localState)*50);
angle[idx] = (float)(curand_uniform(&localState)*360);
veloc[idx] = (float)(curand_uniform(&localState)*20 - 10);
color[idx][0] = (unsigned char)(curand_uniform(&localState)*255);
color[idx][1] = (unsigned char)(curand_uniform(&localState)*255);
color[idx][2] = (unsigned char)(curand_uniform(&localState)*255);
opacity[idx] = (float)(0.3 + 1.5*curand_uniform(&localState));
}
Here is the host code that prepares and calls these kernels. I am trying to create 100 threads (one for each element) in one block on the grid:
// launch grid of threads
dim3 dimBlock(100);
dim3 dimGrid(1);
initCurand<<<dimBlock,dimGrid>>>(devState, unsigned(time(nullptr)));
// synchronize the device and the host
cudaDeviceSynchronize();
initializeArrays<<<dimBlock, dimGrid>>>(d_posx, d_posy, d_rayon, d_veloc, d_opacity, d_angle,d_color, img0.height(), img0.width(), devState);
Preliminaries:
// Define random properties (pos, size, colors, ..) for all triangles that will be displayed.
float posx[100], posy[100], rayon[100], angle[100], veloc[100], opacity[100];
// Define the same properties but for the device
float* d_posx;
float* d_posy;
float* d_rayon;
float* d_angle;
float* d_veloc;
float* d_opacity;
//unsigned char d_color[100][3];
unsigned char** d_color;
curandState* devState;
cudaError_t err;
// allocate memory on the device for the device arrays
err = cudaMalloc((void**)&d_posx, 100 * sizeof(float));
err = cudaMalloc((void**)&d_posy, 100 * sizeof(float));
err = cudaMalloc((void**)&d_rayon, 100 * sizeof(float));
err = cudaMalloc((void**)&d_angle, 100 * sizeof(float));
err = cudaMalloc((void**)&d_veloc, 100 * sizeof(float));
err = cudaMalloc((void**)&d_opacity, 100 * sizeof(float));
err = cudaMalloc((void**)&devState, 100*sizeof(curandState));
errCheck(err);
size_t pitch;
// allocate the device memory for the source array
err = cudaMallocPitch(&d_color, &pitch, 3 * sizeof(unsigned char),100);
Getting the results:
// get the populated arrays back to the host for use
err = cudaMemcpy(posx,d_posx, 100 * sizeof(float), cudaMemcpyDeviceToHost);
err = cudaMemcpy(posy,d_posy, 100 * sizeof(float), cudaMemcpyDeviceToHost);
err = cudaMemcpy(rayon,d_rayon, 100 * sizeof(float), cudaMemcpyDeviceToHost);
err = cudaMemcpy(veloc,d_veloc, 100 * sizeof(float), cudaMemcpyDeviceToHost);
err = cudaMemcpy(opacity,d_opacity, 100 * sizeof(float), cudaMemcpyDeviceToHost);
err = cudaMemcpy(angle,d_angle, 100 * sizeof(float), cudaMemcpyDeviceToHost);
err = cudaMemcpy2D(color,pitch,d_color,100, 100 *sizeof(unsigned char),3, cudaMemcpyDeviceToHost);

Definitely you will need to make a change from this:
err = cudaMalloc((void**)&devState, 100*sizeof(float));
to this:
err = cudaMalloc((void**)&devState, 100*sizeof(curandState));
If you ran your code through cuda-memcheck, you would have discovered this. Your initCurand kernel had plenty of out-of-bounds accesses due to this.
You should also be doing error checking on all cuda calls and all kernel launches. I believe your second kernel call is failing due to a messed up operation on your color[][] array.
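As an aside, the errCheck() used in the host code above isn't shown; a minimal sketch of such a helper (my assumption, one common pattern, requiring <cstdio> and <cstdlib>) together with launch checking might look like:

#define errCheck(err) do { \
    cudaError_t e = (err); \
    if (e != cudaSuccess) { \
        fprintf(stderr, "CUDA error: %s at %s:%d\n", cudaGetErrorString(e), __FILE__, __LINE__); \
        exit(EXIT_FAILURE); \
    } \
} while (0)

// after each kernel launch, check both the launch itself and its execution:
initCurand<<<dimBlock, dimGrid>>>(devState, unsigned(time(nullptr)));
errCheck(cudaGetLastError());      // catches launch-configuration errors
errCheck(cudaDeviceSynchronize()); // catches errors during kernel execution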
Normally when we create an array with cudaMallocPitch, we need to access it using the pitch parameter. C doubly-subscripted arrays by themselves won't work, because C has no inherent knowledge of the actual array width.
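To illustrate the usual pattern (my own sketch, not part of the original answer): a row of a pitched allocation is found by offsetting the base pointer by row * pitch bytes. In the fix below, color[idx*pitch] works out to the same address only because the elements are single bytes, so byte offsets and element offsets coincide.

// generic pitched-row access; pitch is in bytes, hence the char* cast
__device__ unsigned char* rowPtr(unsigned char* base, size_t pitch, int row) {
    return (unsigned char*)((char*)base + (size_t)row * pitch);
}
// usage inside a kernel:
// unsigned char* myRow = rowPtr(color, pitch, idx);
// myRow[0] = ...; myRow[1] = ...; myRow[2] = ...;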
I was able to fix it by making the following changes:
__global__ void initializeArrays(float* posx, float* posy,float* rayon, float* veloc, float* opacity,float * angle, unsigned char* color, int height, int width, curandState* state, size_t pitch){
int idx = threadIdx.x + blockIdx.x * blockDim.x;
curandState localState = state[idx];
__syncthreads();
posx[idx] = (float)(curand_uniform(&localState)*width);
posy[idx] = (float)(curand_uniform(&localState)*height);
rayon[idx] = (float)(10 + curand_uniform(&localState)*50);
angle[idx] = (float)(curand_uniform(&localState)*360);
veloc[idx] = (float)(curand_uniform(&localState)*20 - 10);
color[idx*pitch] = (unsigned char)(curand_uniform(&localState)*255);
color[(idx*pitch)+1] = (unsigned char)(curand_uniform(&localState)*255);
color[(idx*pitch)+2] = (unsigned char)(curand_uniform(&localState)*255);
opacity[idx] = (float)(0.3 + 1.5*curand_uniform(&localState));
}
and
initializeArrays<<<dimBlock, dimGrid>>>(d_posx, d_posy, d_rayon, d_veloc, d_opacity, d_angle,d_color, img0.height(), img0.width(), devState, pitch);
and
unsigned char* d_color;
With those changes, I was able to eliminate the errors I found and the code spat out various random values. I haven't inspected all the values, but that should get you started.
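For completeness (my addition, not from the original answer): with d_color now a flat pitched allocation, the copy back to a host array unsigned char color[100][3] would look something like:

// host rows are tightly packed (3 bytes); device rows are pitch bytes apart
err = cudaMemcpy2D(color, 3 * sizeof(unsigned char),  // dst, dst pitch
                   d_color, pitch,                    // src, src pitch
                   3 * sizeof(unsigned char), 100,    // width in bytes, height in rows
                   cudaMemcpyDeviceToHost);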

Related

curand_uniform not deterministic?

I want to generate pseudo-random numbers on a CUDA device in a deterministic way: if I run the program twice, I expect exactly the same results, given that the program uses a hardcoded seed. Following the examples provided by NVIDIA (https://docs.nvidia.com/cuda/curand/device-api-overview.html#device-api-example), I would expect exactly the described behavior.
But I do get different results when running the exact same code multiple times. Is there a way to get pseudo-random numbers in a deterministic way, as I described?
The following example code shows my problem:
#include <iostream>
#include <cuda.h>
#include <curand_kernel.h>
__global__ void setup_kernel(curandState *state)
{
auto id = threadIdx.x + blockIdx.x * blockDim.x;
curand_init(123456, id, 0, &state[id]);
}
__global__ void draw_numbers(curandState *state, float* results)
{
auto id = threadIdx.x + blockIdx.x * blockDim.x;
// Copy state
curandState localState = state[id % 1024];
// Generate random number
results[id] = curand_uniform(&localState);
// Copy back state
state[id % 1024] = localState;
}
int main(int argc, char* argv[])
{
// Setup
curandState* dStates;
cudaMalloc((void **) &dStates, sizeof(curandState) * 1024);
setup_kernel<<<1024, 1>>>(dStates);
// Random numbers
float* devResults;
cudaMalloc((void **) &devResults, sizeof(float) * 16 * 1024);
float *hostResults = (float*) calloc(16 * 1024, sizeof(float));
// Call draw random numbers
draw_numbers<<<1024, 16>>>(dStates, devResults);
// Copy results
cudaMemcpy(hostResults, devResults, 16 * 1024 * sizeof(float), cudaMemcpyDeviceToHost);
// Output number 12345
::std::cout << "12345 is: " << hostResults[12345] << ::std::endl;
return 0;
}
Compiling and running the code produces different output on my machine:
$ nvcc -std=c++11 curand.cu && ./a.out && ./a.out && ./a.out
12345 is: 0.8059
12345 is: 0.53454
12345 is: 0.382981
As I said, I would expect three times the same output in this example.
curand_uniform does deterministically depend on the state it is provided.
Thanks to the comments by Robert Crovella I now see that the error was in relying on the thread execution order: with 16 threads sharing each state via id % 1024, whether a given thread reads state[id % 1024] before or after another thread writes its updated copy back depends on scheduling. Just not reusing the state would instead produce the same "random" numbers every time the draw_numbers kernel is called, which is not an option for me either.
My guess is that the best solution in my case is to launch only 1024 threads (as many as there are curandStates set up) and generate multiple random numbers in each thread (in my example, 16 per thread). This way I get different random numbers across multiple calls within the program, but the same numbers on every program launch.
Updated code:
#include <iostream>
#include <cuda.h>
#include <curand_kernel.h>
__global__ void setup_kernel(curandState *state)
{
auto id = threadIdx.x + blockIdx.x * blockDim.x;
curand_init(123456, id, 0, &state[id]);
}
__global__ void draw_numbers(curandState *state, float* results, int runs)
{
auto id = threadIdx.x + blockIdx.x * blockDim.x;
// Copy state
curandState localState = state[id];
// Generate random numbers
for (int i = 0; i < runs; ++i)
{
results[id + i * 1024] = curand_uniform(&localState);
}
// Copy back state
state[id] = localState;
}
int main(int argc, char* argv[])
{
// Setup
curandState* dStates;
cudaMalloc((void **) &dStates, sizeof(curandState) * 1024);
setup_kernel<<<1024, 1>>>(dStates);
// Random numbers
float* devResults;
cudaMalloc((void **) &devResults, sizeof(float) * 16 * 1024);
float *hostResults = (float*) calloc(16 * 1024, sizeof(float));
// Call draw random numbers
draw_numbers<<<16, 64>>>(dStates, devResults, 16);
// Copy results
cudaMemcpy(hostResults, devResults, 16 * 1024 * sizeof(float), cudaMemcpyDeviceToHost);
// Output number 12345
::std::cout << "12345 is " << hostResults[12345];
// Call draw random numbers (again)
draw_numbers<<<16, 64>>>(dStates, devResults, 16);
// Copy results
cudaMemcpy(hostResults, devResults, 16 * 1024 * sizeof(float), cudaMemcpyDeviceToHost);
// Output number 12345 again
::std::cout << " and " << hostResults[12345] << ::std::endl;
return 0;
}
Producing the following output:
$ nvcc -std=c++11 curand.cu && ./a.out && ./a.out && ./a.out
12345 is 0.164181 and 0.295907
12345 is 0.164181 and 0.295907
12345 is 0.164181 and 0.295907
which exactly serves my use case.
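An alternative sketch (my own, under the same hardcoded seed): give every thread its own state, 16 * 1024 states in total, so no state is ever shared and the original one-number-per-thread launch shape is deterministic as well. This costs 16x the state memory and setup time:

// one curandState per generated value: no shared state, no ordering dependence
__global__ void setup_kernel_16k(curandState *state)
{
    auto id = threadIdx.x + blockIdx.x * blockDim.x;
    curand_init(123456, id, 0, &state[id]); // unique subsequence per thread
}

__global__ void draw_numbers_16k(curandState *state, float *results)
{
    auto id = threadIdx.x + blockIdx.x * blockDim.x;
    curandState localState = state[id]; // each thread owns state[id]
    results[id] = curand_uniform(&localState);
    state[id] = localState; // no other thread ever touches this slot
}

// hypothetical usage:
// cudaMalloc((void **) &dStates, sizeof(curandState) * 16 * 1024);
// setup_kernel_16k<<<16, 1024>>>(dStates);
// draw_numbers_16k<<<16, 1024>>>(dStates, devResults);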

Understanding CUDA indexing

I inherited some CUDA code that I need to work on but some of the indexing done in it is confusing me.
A simple example would be the normalisation of data. Say we have a shared array A[2*N], which is a matrix of shape 2xN that has been unrolled into an array. Then we have the normalisation means and standard deviations: norm_means[2] and norm_stds[2]. The goal is to normalise the data in A in parallel. A minimal example would be:
__global__ void normalise(float *data, float *norm, float *std) {
int tdy = threadIdx.y;
for (int i=tdy; i<D; i+=blockDim.y)
data[i] = data[i] * norm[i] + std[i];
}
int main(int argc, char **argv) {
// generate data
int N=100;
int D=2;
MatrixXd A = MatrixXd::Random(N*D,1);
MatrixXd norm_means = MatrixXd::Random(D,1);
MatrixXd norm_stds = MatrixXd::Random(D,1);
// transfer data to device
float* A_d;
float* norm_means_d;
float* norm_stds_d;
cudaMalloc((void **)&A_d, N * D * sizeof(float));
cudaMalloc((void **)&norm_means_d, D * sizeof(float));
cudaMalloc((void **)&norm_stds_d, D * sizeof(float));
cudaMemcpy(A_d, A.data(), D * N * sizeof(float), cudaMemcpyHostToDevice);
cudaMemcpy(norm_means_d, norm_means.data(), D * sizeof(float), cudaMemcpyHostToDevice);
cudaMemcpy(norm_stds_d, norm_stds.data(), D * sizeof(float), cudaMemcpyHostToDevice);
// Setup execution
const int BLOCKSIZE_X = 8;
const int BLOCKSIZE_Y = 16;
const int GRIDSIZE_X = (N-1)/BLOCKSIZE_X + 1;
dim3 dimBlock(BLOCKSIZE_X, BLOCKSIZE_Y, 1);
dim3 dimGrid(GRIDSIZE_X, 1, 1);
normalise<<<dimGrid, dimBlock, 0>>>(A_d, norm_means_d, norm_stds_d);
}
Note that I am using Eigen for the matrix generation. I have omitted the includes for brevity.
The code above, through some magic, works and achieves the desired results. However, the CUDA kernel function does not make any sense to me, because the for loop should stop after one execution as i > D after the first iteration... but it doesn't?
If I change the kernel to something that makes more sense to me, e.g.
__global__ void normalise(float *data, float *norm, float *std) {
int tdy = threadIdx.y;
for (int i=0; i<D; i++)
data[tdy + i*blockDim.y] = data[tdy + i*blockDim.y] * norm[i] + std[i];
}
the program stops working and just outputs gibberish data.
Can somebody explain why I get this behaviour?
PS. I am very new to CUDA
It is indeed senseless to have a 2-dimensional kernel perform an elementwise operation on an array, and there is also no reason to work in blocks of size 8x16. As for why the original loop "works": with D=2 and blockDim.y=16, only the threads with tdy equal to 0 or 1 ever enter the loop, and each runs exactly one iteration; the other threads do nothing. Your modified kernel uses the second dimension (y) only; that's probably why it doesn't work. You probably needed to use the first dimension (x) only.
However - it would be reasonable to use the Y dimension for the actual second dimension, e.g. something like this:
__global__ void normalize(
float* __restrict__ data,
const float* __restrict__ norm,
const float* __restrict__ std)
{
auto pos = threadIdx.x + blockDim.x * blockIdx.x;
auto d = threadIdx.y + blockDim.y * blockIdx.y; // or maybe just threadIdx.y;
data[pos + d * N] = data[pos + d * N] * norm[d] + std[d];
}
Other points to consider:
I added __restrict__ to your pointers. Always do that when relevant; here's why.
It's a good idea to have a single thread work on more than one element of data - but you should make that happen along the longer dimension, where the thread can reuse its norm and std values rather than read them from memory every time; see the sketch below.
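A minimal sketch of that point (my own illustration, assuming the 2xN row-major layout from the question): each thread fixes its row d once, loads norm[d] and std[d] into registers, and strides along the long N dimension:

// one block row per data row; each thread strides along the long dimension,
// reusing nrm and sd from registers instead of re-reading global memory
__global__ void normalize_strided(float* __restrict__ data,
                                  const float* __restrict__ norm,
                                  const float* __restrict__ stddev,
                                  int N, int D)
{
    int d = blockIdx.y;        // 0 .. D-1
    float nrm = norm[d];       // read once per thread
    float sd  = stddev[d];
    for (int pos = threadIdx.x + blockDim.x * blockIdx.x;
         pos < N;
         pos += blockDim.x * gridDim.x)
        data[pos + d * N] = data[pos + d * N] * nrm + sd;
}

// hypothetical launch: dim3 grid((N + 255) / 256, D);
// normalize_strided<<<grid, 256>>>(A_d, norm_means_d, norm_stds_d, N, D);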

Binary Matrix Reduction in CUDA

I have to traverse all cells of an imaginary m x n matrix and add +1 for all cells that meet a certain condition.
My naive solution was as follows:
#include <stdio.h>
__global__ void calculate_pi(int center, int *count) {
int x = threadIdx.x;
int y = blockIdx.x;
if (x*x + y*y <= center*center) {
*count++;
}
}
int main() {
int interactions;
printf("Enter the number of interactions: ");
scanf("%d", &interactions);
int l = sqrt(interactions);
int h_count = 0;
int *d_count;
cudaMalloc(&d_count, sizeof(int));
cudaMemcpy(&d_count, &h_count, sizeof(int), cudaMemcpyHostToDevice);
calculate_pi<<<l,l>>>(l/2, d_count);
cudaMemcpy(&h_count, d_count, sizeof(int), cudaMemcpyDeviceToHost);
cudaFree(d_count);
printf("Sum: %d\n", h_count);
return 0;
}
In my use case, the value of interactions can be very large, making it impossible to allocate l * l of space.
Can someone help me? Any suggestions are welcome.
There are at least 3 problems with your code:
1. Your kernel code will not work correctly with an ordinary add here:
*count++;
This is because multiple threads are trying to do this at the same time, and CUDA does not automatically sort that out for you. For the purpose of this explanation, we will fix this with an atomicAdd(), although other methods are possible.
2. The ampersand doesn't belong here:
cudaMemcpy(&d_count, &h_count, sizeof(int), cudaMemcpyHostToDevice);
^
I assume that is just a typo, since you did it correctly on the subsequent cudaMemcpy operation:
cudaMemcpy(&h_count, d_count, sizeof(int), cudaMemcpyDeviceToHost);
3. This methodology (effectively creating a square array of threads using threadIdx.x for one dimension and blockIdx.x for the other) will only work up to an interactions value that leads to an l value of 1024 or less, because CUDA threadblocks are limited to 1024 threads, and you are using l as the size of the threadblock in your kernel launch. To fix this you would want to learn how to create a CUDA 2D grid of arbitrary dimensions, and adjust your kernel launch and in-kernel indexing calculations appropriately. For now we will just make sure that the calculated l value is in range for your code design.
Here's an example addressing the above issues:
$ cat t1590.cu
#include <stdio.h>
__global__ void calculate_pi(int center, int *count) {
int x = threadIdx.x;
int y = blockIdx.x;
if (x*x + y*y <= center*center) {
atomicAdd(count, 1);
}
}
int main() {
int interactions;
printf("Enter the number of interactions: ");
scanf("%d", &interactions);
int l = sqrt(interactions);
if ((l > 1024) || (l < 1)) {printf("Error: interactions out of range\n"); return 0;}
int h_count = 0;
int *d_count;
cudaMalloc(&d_count, sizeof(int));
cudaMemcpy(d_count, &h_count, sizeof(int), cudaMemcpyHostToDevice);
calculate_pi<<<l,l>>>(l/2, d_count);
cudaMemcpy(&h_count, d_count, sizeof(int), cudaMemcpyDeviceToHost);
cudaFree(d_count);
cudaError_t err = cudaGetLastError();
if (err == cudaSuccess){
printf("Sum: %d\n", h_count);
printf("fraction satisfying test: %f\n", h_count/(float)interactions);
}
else
printf("CUDA error: %s\n", cudaGetErrorString(err));
return 0;
}
$ nvcc -o t1590 t1590.cu
$ ./t1590
Enter the number of interactions: 1048576
Sum: 206381
fraction satisfying test: 0.196820
$
We see that the code indicates a calculated fraction of about 0.2. Does this appear to be correct? I claim that it does, based on your test. You are effectively creating a grid that represents dimensions of l x l. Your test asks, effectively, "which points in that grid are within a circle with its center at the origin (corner) of the grid and radius l/2?"
Pictorially, that looks something like this: [figure omitted: an l x l grid with the quarter circle of radius l/2 at the origin corner shaded red]
and it is reasonable to assume the red shaded area is somewhat less than 0.25 of the total area, so 0.2 is a reasonable estimate of that area.
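In fact we can work out the expected value (my addition): the shaded region is a quarter disc of radius l/2, whose area is (1/4) * pi * (l/2)^2 = pi * l^2 / 16, so the expected fraction is pi/16 ≈ 0.19635, in close agreement with the measured 0.196820. (Fittingly for a kernel named calculate_pi, 16 * 0.196820 ≈ 3.149.)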
As a bonus, here is a version of the code that reduces the restriction listed in item 3 above:
#include <stdio.h>
__global__ void calculate_pi(int center, int *count) {
int x = threadIdx.x+blockDim.x*blockIdx.x;
int y = threadIdx.y+blockDim.y*blockIdx.y;
if (x*x + y*y <= center*center) {
atomicAdd(count, 1);
}
}
int main() {
int interactions;
printf("Enter the number of interactions: ");
scanf("%d", &interactions);
int l = sqrt(interactions);
int h_count = 0;
int *d_count;
const int bs = 32;
dim3 threads(bs, bs);
dim3 blocks((l+threads.x-1)/threads.x, (l+threads.y-1)/threads.y);
cudaMalloc(&d_count, sizeof(int));
cudaMemcpy(d_count, &h_count, sizeof(int), cudaMemcpyHostToDevice);
calculate_pi<<<blocks,threads>>>(l/2, d_count);
cudaMemcpy(&h_count, d_count, sizeof(int), cudaMemcpyDeviceToHost);
cudaFree(d_count);
cudaError_t err = cudaGetLastError();
if (err == cudaSuccess){
printf("Sum: %d\n", h_count);
printf("fraction satisfying test: %f\n", h_count/(float)interactions);
}
else
printf("CUDA error: %s\n", cudaGetErrorString(err));
return 0;
}
This is launching a 2D grid based on l, and should work up to at least 1 billion interactions.
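As a further sketch, one of the "other methods" alluded to in item 1 above (my own illustration, not from the original answer): let each block accumulate its count in shared memory first, so only one global atomicAdd is issued per block instead of one per thread:

// per-block counting: shared-memory atomics are much cheaper than global ones,
// and only one global atomicAdd is performed per block
__global__ void calculate_pi_blockreduce(int center, int *count) {
    __shared__ int block_count;
    if (threadIdx.x == 0 && threadIdx.y == 0) block_count = 0;
    __syncthreads();
    int x = threadIdx.x + blockDim.x * blockIdx.x;
    int y = threadIdx.y + blockDim.y * blockIdx.y;
    if (x*x + y*y <= center*center)
        atomicAdd(&block_count, 1);    // shared-memory atomic
    __syncthreads();
    if (threadIdx.x == 0 && threadIdx.y == 0)
        atomicAdd(count, block_count); // single global atomic per block
}

It launches exactly like the 2D version above and produces the same sum.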

cuda big-matrix and blocks/threads

I'm having some issues regarding how to handle big matrices. As explained in this other question, I have a program which works on big square matrices (like 5k-10k). The computational part is correct (still not 100% optimized) and I've tested it with smaller square matrices (like 256-512). Here is my code:
#define N 10000
#define RADIUS 100
#define SQRADIUS RADIUS*RADIUS
#define THREADS 512
//many of these device functions are declared
__device__ unsigned char avg(const unsigned char *src, const unsigned int row, const unsigned int col) {
unsigned int sum = 0, c = 0;
//some work with radius and stuff
return sum;
}
__global__ void applyAvg(const unsigned char *src, unsigned char *dest) {
unsigned int tid = blockDim.x * blockIdx.x + threadIdx.x, tmp = 0;
unsigned int stride = blockDim.x * gridDim.x;
int col = tid%N, row = (int)tid/N;
while(tid < N*N) {
if(row * col < N * N) {
//choose which of the __device__ functions needs to be launched
}
tid += stride;
col = tid%N, row = (int)tid/N;
}
__syncthreads();
}
int main( void ) {
cudaError_t err;
unsigned char *base, *thresh, *d_base, *d_thresh, *avg, *d_avg;
int i, j;
base = (unsigned char*)malloc((N * N) * sizeof(unsigned char));
thresh = (unsigned char*)malloc((N * N) * sizeof(unsigned char));
avg = (unsigned char*)malloc((N * N) * sizeof(unsigned char));
err = cudaMalloc((void**)&d_base, (N * N) * sizeof(unsigned char));
if(err != cudaSuccess) {printf("ERROR 1"); exit(-1);}
err = cudaMalloc((void**)&d_thresh, (N * N) * sizeof(unsigned char));
if(err != cudaSuccess) {printf("ERROR 2"); exit(-1);}
err = cudaMalloc((void**)&d_avg, (N * N) * sizeof(unsigned char));
if(err != cudaSuccess) {printf("ERROR 3"); exit(-1);}
for(i = 0; i < N * N; i++) {
base[i] = (unsigned char)(rand() % 256);
}
err = cudaMemcpy(d_base, base, (N * N) * sizeof(unsigned char), cudaMemcpyHostToDevice);
if(err != cudaSuccess){printf("ERROR 4"); exit(-1);}
//more 'light' stuff to do before the 'heavy computation'
applyAvg<<<(N + THREADS - 1) / THREADS, THREADS>>>(d_thresh, d_avg);
err = cudaMemcpy(thresh, d_thresh, (N * N) * sizeof(unsigned char), cudaMemcpyDeviceToHost);
if(err != cudaSuccess) {printf("ERROR 5"); exit(-1);}
err = cudaMemcpy(avg, d_avg, (N * N) * sizeof(unsigned char), cudaMemcpyDeviceToHost);
if(err != cudaSuccess) {printf("ERROR 6"); exit(-1);}
getchar();
return 0;
}
When launching the program with a big matrix (like 10000 x 10000) and a radius of 100 (which is how "far" from every point in the matrix I look ahead), it takes a very long time.
I believe that the problem resides both in applyAvg<<<(N + THREADS - 1) / THREADS, THREADS>>> (how many blocks and threads I decide to run) and in the applyAvg(...) method (the stride and the tid).
Can someone clarify for me the best way to decide how many blocks/threads to launch, given that the matrix can vary from 5k to 10k in size?
I suppose what you want to do is image filtering/convolution. Based on your current CUDA kernel, there are two things you could do to improve the performance:
Use 2-D threads/blocks to avoid the / and % operators; they are very slow (see the sketch below).
Use shared memory to reduce global memory bandwidth.
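For the first point, a minimal sketch (my own, built on the names from your code): a 2-D grid gives row and col directly from the built-in indices, with no division or modulo:

// 2-D mapping over the N x N matrix: row/col fall straight out of the indices
__global__ void applyAvg2D(const unsigned char *src, unsigned char *dest) {
    unsigned int col = blockDim.x * blockIdx.x + threadIdx.x;
    unsigned int row = blockDim.y * blockIdx.y + threadIdx.y;
    if (row < N && col < N)
        dest[row * N + col] = avg(src, row, col); // your __device__ function
}

// hypothetical launch covering the whole matrix:
// dim3 threads(16, 16);
// dim3 blocks((N + threads.x - 1) / threads.x, (N + threads.y - 1) / threads.y);
// applyAvg2D<<<blocks, threads>>>(d_thresh, d_avg);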
Here's a white paper about image convolution. It shows how to implement a high performance box filer with CUDA.
http://docs.nvidia.com/cuda/samples/3_Imaging/convolutionSeparable/doc/convolutionSeparable.pdf
The NVIDIA NPP library also provides the box-filter function nppiFilterBox(), so you don't need to write your own kernel. Here are the documentation and an example:
http://docs.nvidia.com/cuda/cuda-samples/index.html#box-filter-with-npp
NPP doc pp.1009 http://docs.nvidia.com/cuda/pdf/NPP_Library.pdf

Cuda thrust global memory writing very slow

I am currently writing code that calculates an integral histogram on the GPU using the NVIDIA Thrust library.
Therefore I allocate a contiguous block of device memory, which I continually update with a custom functor.
The problem is that writes to device memory are very slow, but the reads are actually fine.
The basic setup is the following:
struct HistogramCreation
{
HistogramCreation(
...
// pointer to memory
...
){}
/// The actual summation operator
__device__ void operator()(int index){
.. do the calculations ..
for(int j=0;j<30;j++){
(1) *_memoryPointer = values (also using reads to such locations) ;
}
}
};
void foo(){
cudaMalloc(_pointer,size);
HistogramCreation initialCreation( ... _pointer ...);
thrust::for_each(
thrust::make_counting_iterator(0),
thrust::make_counting_iterator(_imageSize),
initialCreation);
}
If I change the write in (1) to the following:
unsigned int val = values;
the performance is much better. This is the only global memory write I have.
Using the memory write I get about 2 s for HD footage. Using the local variable it takes about 50 ms, so about a factor of 40 less.
Why is this so slow? How could I improve it?
Just as @OlegTitov said, frequent loads/stores to global memory should be avoided as much as possible. When that is inevitable, coalesced memory access can keep the execution from getting too slow; however, in most cases histogram calculation is pretty tough for realizing coalesced access.
While most of the above is basically just restating @OlegTitov's answer, I'd just like to share an investigation I did about finding a summation with NVIDIA CUDA. Actually, the result is pretty interesting and I hope it'll be helpful information for other CUDA developers.
The experiment was basically to run a speed test of finding a summation with various memory access patterns: using global memory (1 thread), L2 cache (atomic ops - 128 threads), and L1 cache (shared mem - 128 threads).
This experiment used a Kepler GTX 680: 1536 cores @ 1.06 GHz and GDDR5 256-bit @ 3 GHz.
Here are the kernels:
__global__
void glob(float *h) {
float* hist = h;
uint sd = SEEDRND;
uint random;
for (int i = 0; i < NUMLOOP; i++) {
if (i%NTHREADS==0) random = rnd(sd);
int rind = random % NBIN;
float randval = (float)(random % 10)*1.0f ;
hist[rind] += randval;
}
}
__global__
void atom(float *h) {
float* hist = h;
uint sd = SEEDRND;
for (int i = threadIdx.x; i < NUMLOOP; i+=NTHREADS) {
uint random = rnd(sd);
int rind = random % NBIN;
float randval = (float)(random % 10)*1.0f ;
atomicAdd(&hist[rind], randval);
}
}
__global__
void shm(float *h) {
int lid = threadIdx.x;
uint sd = SEEDRND;
__shared__ float shm[NTHREADS][NBIN];
for (int i = 0; i < NBIN; i++) shm[lid][i] = h[i];
for (int i = lid; i < NUMLOOP; i+=NTHREADS) {
uint random = rnd(sd);
int rind = random % NBIN;
float randval = (float)(random % 10)*1.0f ;
shm[lid][rind] += randval;
}
/* reduction here */
for (int i = 0; i < NBIN; i++) {
__syncthreads();
if (threadIdx.x < 64) {
shm[threadIdx.x][i] += shm[threadIdx.x+64][i];
}
__syncthreads();
if (threadIdx.x < 32) {
shm[threadIdx.x][i] += shm[threadIdx.x+32][i];
}
__syncthreads();
if (threadIdx.x < 16) {
shm[threadIdx.x][i] += shm[threadIdx.x+16][i];
}
__syncthreads();
if (threadIdx.x < 8) {
shm[threadIdx.x][i] += shm[threadIdx.x+8][i];
}
__syncthreads();
if (threadIdx.x < 4) {
shm[threadIdx.x][i] += shm[threadIdx.x+4][i];
}
__syncthreads();
if (threadIdx.x < 2) {
shm[threadIdx.x][i] += shm[threadIdx.x+2][i];
}
__syncthreads();
if (threadIdx.x == 0) {
shm[0][i] += shm[1][i];
}
}
for (int i = 0; i < NBIN; i++) h[i] = shm[0][i];
}
OUTPUT
atom: 102656.00 shm: 102656.00 glob: 102656.00
atom: 122240.00 shm: 122240.00 glob: 122240.00
... blah blah blah ...
One Thread: 126.3919 msec
Atomic: 7.5459 msec
Sh_mem: 2.2207 msec
The ratio between these kernels (one thread : atomic : shared mem) works out to about 57 : 3.4 : 1. Many things can be analyzed here, and it truly does not mean that using the L1 or L2 memory spaces will always give you more than a 10-times speedup of the whole program.
And here are the main and other functions:
#include <iostream>
#include <cstdlib>
#include <cstdio>
#include <climits> // for LONG_MAX, used in rnd() below
using namespace std;
#define NUMLOOP 1000000
#define NBIN 36
#define SEEDRND 1
#define NTHREADS 128
#define NBLOCKS 1
__device__ uint rnd(uint & seed) {
#if LONG_MAX > (16807*2147483647)
int const a = 16807;
int const m = 2147483647;
seed = (long(seed * a))%m;
return seed;
#else
double const a = 16807;
double const m = 2147483647;
double temp = seed * a;
seed = (int) (temp - m * floor(temp/m));
return seed;
#endif
}
... the above kernels ...
int main()
{
float *h_hist, *h_hist2, *h_hist3, *d_hist, *d_hist2,
*d_hist3;
h_hist = (float*)malloc(NBIN * sizeof(float));
h_hist2 = (float*)malloc(NBIN * sizeof(float));
h_hist3 = (float*)malloc(NBIN * sizeof(float));
cudaMalloc((void**)&d_hist, NBIN * sizeof(float));
cudaMalloc((void**)&d_hist2, NBIN * sizeof(float));
cudaMalloc((void**)&d_hist3, NBIN * sizeof(float));
for (int i = 0; i < NBIN; i++) h_hist[i] = 0.0f;
cudaMemcpy(d_hist, h_hist, NBIN * sizeof(float),
cudaMemcpyHostToDevice);
cudaMemcpy(d_hist2, h_hist, NBIN * sizeof(float),
cudaMemcpyHostToDevice);
cudaMemcpy(d_hist3, h_hist, NBIN * sizeof(float),
cudaMemcpyHostToDevice);
cudaEvent_t start, end;
float elapsed = 0, elapsed2 = 0, elapsed3;
cudaEventCreate(&start);
cudaEventCreate(&end);
cudaEventRecord(start, 0);
atom<<<NBLOCKS, NTHREADS>>>(d_hist);
cudaThreadSynchronize();
cudaEventRecord(end, 0);
cudaEventSynchronize(start);
cudaEventSynchronize(end);
cudaEventElapsedTime(&elapsed, start, end);
cudaEventRecord(start, 0);
shm<<<NBLOCKS, NTHREADS>>>(d_hist2);
cudaThreadSynchronize();
cudaEventRecord(end, 0);
cudaEventSynchronize(start);
cudaEventSynchronize(end);
cudaEventElapsedTime(&elapsed2, start, end);
cudaEventRecord(start, 0);
glob<<<1, 1>>>(d_hist3);
cudaThreadSynchronize();
cudaEventRecord(end, 0);
cudaEventSynchronize(start);
cudaEventSynchronize(end);
cudaEventElapsedTime(&elapsed3, start, end);
cudaMemcpy(h_hist, d_hist, NBIN * sizeof(float),
cudaMemcpyDeviceToHost);
cudaMemcpy(h_hist2, d_hist2, NBIN * sizeof(float),
cudaMemcpyDeviceToHost);
cudaMemcpy(h_hist3, d_hist3, NBIN * sizeof(float),
cudaMemcpyDeviceToHost);
/* print output */
for (int i = 0; i < NBIN; i++) {
printf("atom: %10.2f shm: %10.2f glob:
%10.2f¥n",h_hist[i],h_hist2[i],h_hist3[i]);
}
printf("%12s: %8.4f msec¥n", "One Thread", elapsed3);
printf("%12s: %8.4f msec¥n", "Atomic", elapsed);
printf("%12s: %8.4f msec¥n", "Sh_mem", elapsed2);
return 0;
}
When writing GPU code you should avoid reading and writing to/from global memory. Global memory is very slow on the GPU; that's a hardware feature. The only thing you can do is to make neighboring threads read/write neighboring addresses in global memory. This will cause coalescing and speed up the process. But in general: read your data once, process it, and write it out once.
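To illustrate coalescing (a generic sketch of my own, not the poster's functor): when consecutive threads touch consecutive addresses, a warp's 32 accesses collapse into very few memory transactions; strided access scatters them:

// coalesced: thread i touches element i; a warp covers one contiguous chunk
__global__ void scale_coalesced(float *data, float k, int n) {
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    if (i < n) data[i] *= k;
}

// strided (uncoalesced): neighboring threads are 'stride' elements apart,
// so the same warp now spans many memory segments and runs far slower
__global__ void scale_strided(float *data, float k, int n, int stride) {
    int i = (blockDim.x * blockIdx.x + threadIdx.x) * stride;
    if (i < n) data[i] *= k;
}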
Note that NVCC might optimize out a lot of your code after you make the modification: it detects that no write to global memory is made and just removes the "unneeded" code. So this speedup may not be coming from the global write per se.
I would recommend using a profiler on your actual code (the one with the global write) to see if there's anything like unaligned access or other performance problems.
