How to divide n numbers among N processors using OpenMP - sorting

I was given the task of performing a CREW sort in parallel programming. As the first step, I have an array of size n and N processors; I need to divide the elements among the N processors, sort each part sequentially, and merge the parts back together. How can I do this in OpenMP? I am new to OpenMP, so any resources on this problem would also be helpful.

This is what I wrote off the top of my head. It might not be optimal and is not tested, but it should give you a direction for handling such a problem.
#include <stddef.h>
#include <stdlib.h>
#include <omp.h>

ptrdiff_t min(ptrdiff_t a, ptrdiff_t b) {
    return ((a > b) ? b : a);
}

void inplace_sequential_sort(double *data, ptrdiff_t n) { /* ... */ }
void inplace_merge(double *data, ptrdiff_t n1, ptrdiff_t n2) { /* ... */ }

void inplace_parallel_sort(double *data, ptrdiff_t n) {
    if (n < 2)
        return;

    /* allocate memory for helper array */
    int const max_threads = omp_get_max_threads();
    ptrdiff_t *my_n = calloc(max_threads, sizeof(*my_n));
    if (!my_n) { /* ... */ }

    #pragma omp parallel default(none) \
        shared(n, data, my_n)
    {
        /* get thread ID and actual number of threads */
        int const tid = omp_get_thread_num();
        int const N = omp_get_num_threads();

        /* distribute data among threads */
        ptrdiff_t const max_elem_per_thread = ((n + N - 1) / N);
        ptrdiff_t const my_begin = min(tid * max_elem_per_thread, n);
        my_n[tid] = min(n - my_begin, max_elem_per_thread);

        if (my_n[tid] > 1)
            inplace_sequential_sort(data + my_begin, my_n[tid]);

        /* merge sorted data sections (parallel reduction algorithm) */
        for (ptrdiff_t stride = 1; stride < N; stride *= 2) {
            #pragma omp barrier
            if (tid % (2 * stride) == 0 && my_begin + my_n[tid] != n) {
                inplace_merge(data + my_begin, my_n[tid], my_n[tid + stride]);
                my_n[tid] += my_n[tid + stride];
            }
        }
    } /* end of parallel region */

    free(my_n);
}
I assumed that you want a C solution (not C++ or Fortran) and that the data should be sorted in place. This is a very basic solution; OpenMP can do much more (e.g. tasking). The functions inplace_sequential_sort() and inplace_merge() have to be provided.
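In case it helps, here is a minimal, untested sketch of what those two helpers could look like (my own placeholder implementations, not part of the answer above): an insertion sort for the per-thread chunks, which in practice you would likely replace with qsort(), and a merge that uses a temporary buffer, so it is "in place" only from the caller's point of view.
#include <stddef.h>
#include <stdlib.h>
#include <string.h>

/* sort data[0..n-1] in ascending order; insertion sort is just a placeholder */
void inplace_sequential_sort(double *data, ptrdiff_t n) {
    for (ptrdiff_t i = 1; i < n; ++i) {
        double key = data[i];
        ptrdiff_t j = i - 1;
        while (j >= 0 && data[j] > key) {
            data[j + 1] = data[j];
            --j;
        }
        data[j + 1] = key;
    }
}

/* merge the sorted runs data[0..n1-1] and data[n1..n1+n2-1] via a scratch buffer */
void inplace_merge(double *data, ptrdiff_t n1, ptrdiff_t n2) {
    double *tmp = malloc((size_t)(n1 + n2) * sizeof(*tmp));
    if (!tmp) return; /* real code should report the allocation failure */
    ptrdiff_t i = 0, j = n1, k = 0;
    while (i < n1 && j < n1 + n2)
        tmp[k++] = (data[i] <= data[j]) ? data[i++] : data[j++];
    while (i < n1)
        tmp[k++] = data[i++];
    while (j < n1 + n2)
        tmp[k++] = data[j++];
    memcpy(data, tmp, (size_t)(n1 + n2) * sizeof(*tmp));
    free(tmp);
}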

Related

OpenMP and (Rcpp)Eigen

I am wondering how to write code that at times makes use of the OpenMP parallelization built into the Eigen library, while at other times uses parallelization that I specify. Hopefully, the code snippet below provides some background on my problem.
I am asking this question at the design stage of my library (sorry I don't have a working / broken code example).
#ifdef _OPENMP
#include <omp.h>
#endif
#include <RcppEigen.h>
void fxn(..., int ncores=-1){
    if (ncores > 0) omp_set_num_threads(ncores);
    /*
     * Code with matrix products
     * where I would like to use Eigen's
     * OpenMP parallelization
     */
    #pragma omp parallel for
    for (int i=0; i < iter; i++){
        /*
         * Code I would like to parallelize "myself"
         * even though it involves matrix products
         */
    }
}
What is best practice for controlling the balance between Eigen's own OpenMP parallelization and my own?
UPDATE:
I wrote a simple example and tested ggael's suggestion. In short, I am skeptical that it solves the problem I was posing (or I am doing something else wrong - apologies if it's the latter). Notice that with explicit parallelization of the for loop there is no change in run-time (not even a slowdown).
#ifdef _OPENMP
#include <omp.h>
#endif
#include <RcppEigen.h>
using namespace Rcpp;
// [[Rcpp::plugins(openmp)]]
// [[Rcpp::export]]
Eigen::MatrixXd testing(Eigen::MatrixXd A, Eigen::MatrixXd B, int n_threads=1){
    Eigen::setNbThreads(n_threads);
    Eigen::MatrixXd C = A*B;
    Eigen::setNbThreads(1);
    for (int i=0; i < A.cols(); i++){
        A.col(i).array() = A.col(i).array()*B.col(i).array();
    }
    return A;
}

// [[Rcpp::export]]
Eigen::MatrixXd testing_omp(Eigen::MatrixXd A, Eigen::MatrixXd B, int n_threads=1){
    Eigen::setNbThreads(n_threads);
    Eigen::MatrixXd C = A*B;
    Eigen::setNbThreads(1);
    #pragma omp parallel for num_threads(n_threads)
    for (int i=0; i < A.cols(); i++){
        A.col(i).array() = A.col(i).array()*B.col(i).array();
    }
    return A;
}
/*** R
A <- matrix(rnorm(1000*1000), 1000, 1000)
B <- matrix(rnorm(1000*1000), 1000, 1000)
microbenchmark::microbenchmark(testing(A,B, n_threads=1),
                               testing_omp(A,B, n_threads=1),
                               testing(A,B, n_threads=8),
                               testing_omp(A,B, n_threads=8),
                               times=10)
*/
Unit: milliseconds
                             expr       min        lq      mean    median        uq       max neval cld
     testing(A, B, n_threads = 1) 169.74272 183.94500 212.83868 218.15756 236.97049 264.52183    10   b
 testing_omp(A, B, n_threads = 1) 166.53132 178.48162 210.54195 227.65258 234.16727 238.03961    10   b
     testing(A, B, n_threads = 8)  56.03258  61.16001  65.15763  62.67563  67.37089  83.43565    10  a
 testing_omp(A, B, n_threads = 8)  54.18672  57.78558  73.70466  65.36586  67.24229 167.90310    10  a
The easiest is probably to disable/enable Eigen's multi-threading at runtime:
Eigen::setNbThreads(1); // single thread mode
#pragma omp parallel for
for (int i=0; i < iter; i++){
    // Code I would like to parallelize "myself"
    // even though it involves matrix products
}
Eigen::setNbThreads(0); // restore default

Bi-variate polynomial (2-D matrix) multiplication: not much gain in performance when MEXing with C++ compared to MATLAB

I looked around the forum but couldn't find an answer to my specific problem.
Objective
To compute A(x,y)^N where N is large, say 500. I store A(x,y) in MATLAB as a matrix, where A(x,y) = \sum A(i+1,j+1) * x^i * y^j. The critical point is that I can't work with the real values of the coefficients because they are huge, so at each step I store only the logs of the coefficients, i.e. the exponents rather than the actual values, which is important to me. Without this complication, things would have been much easier, I guess.
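As a quick sketch of the technique itself (matching the comment in the inner loop of log_MatMat below), adding two values that are stored as logs relies on the usual log-sum-exp identity:

log(e^x + e^y) = max(x, y) + log(1 + e^(-|x - y|))

so only the difference of the two exponents matters. When that difference exceeds about 20, the correction term log(1 + e^(-20)) ≈ 2e-9 is negligible, which is why both the MATLAB and the MEX versions below simply keep the larger exponent in that case.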
I am attaching the code here. For small powers, i.e. n = 50 or 100 instead of 500, I am seeing about 1.5x faster execution with the C MEX code. But for values like n = 500, the MEX function is at best as fast as MATLAB, if not slower. Is that the best I can do with a C++ MEX file? I am not very experienced in C++, so I am hoping some experienced users can point out inefficiencies in my C++ code.
MATLAB code:
C=log(A);
P1=log(1);
n=500;
for i=1:n
    P1=log_MatMat(P1,C,zeroApprox);
    %P1=reshape(log_MatMat_CPP(P1,C,zeroApprox),size(P1,1)+size(C,1)-1,[]); %Using the cpp mex file.
end
MATLAB function: log_MatMat
function OP=log_MatMat(A,B,zeroApprox)
xDeg1=size(A,1)-1;
yDeg1=size(A,2)-1;
xDeg2=size(B,1)-1;
yDeg2=size(B,2)-1;
OP=-Inf*ones(xDeg1+xDeg2+1,yDeg1+yDeg2+1);
[xAInd,yAInd]=find(A~=-Inf);
linAInd=sub2ind(size(A),xAInd,yAInd)';
[xBInd,yBInd]=find(B~=-Inf);
for i=0:numel(xBInd)-1
    xdegOffset=xBInd(i+1)-1;
    ydegOffset=yBInd(i+1)-1;
    tmp=A(linAInd)+B(xBInd(i+1),yBInd(i+1));
    opIndx=xAInd+xdegOffset;
    opIndy=yAInd+ydegOffset;
    opIndLin=sub2ind(size(OP),opIndx,opIndy)';
    maxExp=max([OP(opIndLin);tmp]);
    diffExp=abs(OP(opIndLin)-tmp);
    for j=1:numel(tmp)
        % Does the addition in log-domain without converting the values into the real domain: e^x+e^y=e^y*(1+e^(x-y));
        if maxExp(j)<zeroApprox
            OP(opIndx(j),opIndy(j))=-Inf;
        else
            if OP(opIndx(j),opIndy(j))==-Inf
                OP(opIndx(j),opIndy(j))=tmp(j);
            elseif diffExp(j)>20
                OP(opIndx(j),opIndy(j))=maxExp(j);
            else
                OP(opIndx(j),opIndy(j))=maxExp(j)+log(1+exp(-1*diffExp(j)));
            end
        end
    end
    clear tmp;
end
MEX code:
#include "mex.h"
#include "math.h"
#include <iostream>
#include <limits>
#define MIN(a, b) ((a < b) ? a : b)
#define MAX(a, b) ((a > b) ? a : b)
#define A(i,j) a[i+j*Xa]
#define B(i,j) b[i+j*Xb]
#define C(i,j) c[i+j*Xc]
const double Inf = std::numeric_limits <double> ::max();
int zeroApprox;
/* Addition in log-domain */
double logAdd(double x, double y)
{
double z;
if(x>y) {
(x-y>20)?(z=x):(z=x+log(1+exp(y-x)));
}
else if (y>=x){
(y-x>20)?(z=y):(z=y+log(1+exp(x-y)));
}
if (z<zeroApprox)
z=-Inf;
return z;
}
/* The bivariate polynomial product routine: c(x,y)=a(x,y)*b(x,y) */
void matProduct(double *a,double *b,double *c,int Xa,int Ya,int Xb, int Yb)
{
int opX,opY;
int Xc, Yc;
unsigned int nonInfA[Xa*Ya][3];
bool cInf_Flag[Xa+Xb-1][Ya+Yb-1];
Xc=Xa+Xb-1;
Yc=Ya+Yb-1;
/* Initializing C with -Inf*/
for (int i=0; i< Xc;i++){
for (int j=0; j< Yc;j++) {
C(i,j)=-Inf;
}
}
/* Matrix Mult in Log- domain */
for (int jX=0; jX<Xb; jX++) {
for(int jY=0; jY<Yb; jY++) {
if (!isinf(B(jX,jY))) {
for (int iX=0; iX<Xa; iX++) {
for (int iY=0; iY<Ya; iY++) {
if (!isinf(A(iX,iY))) {
opX=iX+jX; opY=iY+jY;
C(opX,opY)=logAdd(C(opX,opY),A(iX,iY)+B(jX,jY));
}
}
}
}
}
}
}
/* The gateway function */
void mexFunction( int nlhs, mxArray *plhs[],
int nrhs, const mxArray *prhs[])
{
int nrowsA,nrowsB,ncolsA, ncolsB; /* size of Matrices */
int nrowsC,ncolsC; /* size of output Matrix */
double *outMatrix; /* output matrix */
double *A, *B;
/* create a pointer to the real data in the input matrix */
A = mxGetPr(prhs[0]);
B = mxGetPr(prhs[1]);
zeroApprox = mxGetScalar(prhs[2]);
/* get dimensions of the input matrix */
nrowsA = mxGetM(prhs[0]);
ncolsA = mxGetN(prhs[0]);
nrowsB = mxGetM(prhs[1]);
ncolsB = mxGetN(prhs[1]);
/* Compute output dimensions*/
nrowsC=nrowsA+nrowsB-1;
ncolsC=ncolsA+ncolsB-1;
/* create the output matrix */
plhs[0] = mxCreateDoubleMatrix(1,nrowsC*ncolsC,mxREAL);
/* get a pointer to the real data in the output matrix */
outMatrix = mxGetPr(plhs[0]);
/* call the computational routine */
if (ncolsA<=ncolsB){
matProduct(B,A,outMatrix,nrowsB,ncolsB,nrowsA,ncolsA);
}
else {
matProduct(A,B,outMatrix,nrowsA,ncolsA,nrowsB,ncolsB);
}
}

Conditional reduction in CUDA

I need to sum about 100000 values stored in an array, but with conditions.
Is there a way to do that in CUDA to produce fast results?
Can anyone post a small code example that does that?
I think that, to perform a conditional reduction, you can directly introduce the condition as a multiplication of the addends by 0 (false) or 1 (true). In other words, suppose the condition you would like to meet is that the addends be smaller than 10.f. In this case, borrowing the first code from Optimizing Parallel Reduction in CUDA by M. Harris, the above would mean
__global__ void reduce0(int *g_idata, int *g_odata) {
    extern __shared__ int sdata[];

    // each thread loads one element from global to shared mem
    unsigned int tid = threadIdx.x;
    unsigned int i = blockIdx.x*blockDim.x + threadIdx.x;
    sdata[tid] = g_idata[i]*(g_idata[i]<10.f);
    __syncthreads();

    // do reduction in shared mem
    for(unsigned int s=1; s < blockDim.x; s *= 2) {
        if (tid % (2*s) == 0) {
            sdata[tid] += sdata[tid + s];
        }
        __syncthreads();
    }

    // write result for this block to global mem
    if (tid == 0) g_odata[blockIdx.x] = sdata[0];
}
If you wish to use CUDA Thrust to perform a conditional reduction, you can do the same by using thrust::transform_reduce. Alternatively, you can create a new vector d_b, copying into it all the elements of d_a satisfying the predicate with thrust::copy_if, and then apply thrust::reduce to d_b. I haven't checked which solution performs best. Perhaps the second solution will perform better on sparse arrays. Below is an example with an implementation of both approaches.
#include <cstdio>

#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/transform_reduce.h>
#include <thrust/functional.h>
#include <thrust/reduce.h>
#include <thrust/count.h>
#include <thrust/copy.h>

// --- Operator for the first approach
struct conditional_operator {
    __host__ __device__ float operator()(const float a) const {
        return a*(a<10.f);
    }
};

// --- Operator for the second approach
struct is_smaller_than_10 {
    __host__ __device__ bool operator()(const float a) const {
        return (a<10.f);
    }
};

int main(void)
{
    int N = 20;

    // --- Host side allocation and vector initialization
    thrust::host_vector<float> h_a(N,1.f);
    h_a[0] = 20.f;
    h_a[1] = 20.f;

    // --- Device side allocation and vector initialization
    thrust::device_vector<float> d_a(h_a);

    // --- First approach
    float sum = thrust::transform_reduce(d_a.begin(), d_a.end(), conditional_operator(), 0.f, thrust::plus<float>());
    printf("Result = %f\n",sum);

    // --- Second approach
    int N_prime = thrust::count_if(d_a.begin(), d_a.end(), is_smaller_than_10());
    thrust::device_vector<float> d_b(N_prime);
    thrust::copy_if(d_a.begin(), d_a.begin() + N, d_b.begin(), is_smaller_than_10());
    sum = thrust::reduce(d_b.begin(), d_b.begin() + N_prime, 0.f);
    printf("Result = %f\n",sum);

    getchar();

    return 0;
}

Sorting many small arrays in CUDA

I am implementing a median filter in CUDA. For a particular pixel, I extract its neighbors corresponding to a window around the pixel, say an N x N (3 x 3) window, and now have an array of N x N elements. I do not envision using a window of more than 10 x 10 elements for my application.
This array is now locally present in the kernel and already loaded into device memory. From previous SO posts that I have read, the most common sorting algorithms are implemented with Thrust, but Thrust can only be called from the host (see the thread Thrust inside user written kernels).
Is there a quick and efficient way to sort a small array of N x N elements inside the kernel?
If the number of elements is fixed and small, you can use sorting networks (http://pages.ripco.net/~jgamble/nw.html). A sorting network performs a fixed sequence of compare/swap operations for a given number of elements (e.g. 19 compare/swap operations for 8 elements).
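For illustration, here is a small, untested sketch of such a network for 4 elements (5 compare/swap operations); the function names and the float element type are placeholders, and inside a CUDA kernel both functions would simply be declared __device__:
/* one comparator of the network: order a[i] and a[j] */
static void compare_swap(float *a, int i, int j) {
    if (a[i] > a[j]) {
        float t = a[i];
        a[i] = a[j];
        a[j] = t;
    }
}

/* sorting network for exactly 4 elements (5 fixed compare/swap operations) */
static void sort4_network(float a[4]) {
    compare_swap(a, 0, 1);  /* sort the first pair            */
    compare_swap(a, 2, 3);  /* sort the second pair           */
    compare_swap(a, 0, 2);  /* minimum ends up in a[0]        */
    compare_swap(a, 1, 3);  /* maximum ends up in a[3]        */
    compare_swap(a, 1, 2);  /* order the two middle elements  */
}
Because the sequence of comparisons is fixed and does not depend on the data, sorting networks map well to GPU code where every thread sorts its own small array.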
Your problem is sorting many small arrays in CUDA.
Following Robert's suggestion in his comment, CUB offers a possible solution to face this problem. Below I report an example that was constructed around Robert's code at cub BlockRadixSort: how to deal with large tile size or sort multiple tiles?.
The idea is to assign the small arrays to be sorted to different thread blocks and then use cub::BlockRadixSort to sort each array. Two versions are provided, one loading the data directly and one first staging the small arrays in shared memory.
Let me finally note that your statement that CUDA Thrust is not callable from within kernels is no longer true. The post Thrust inside user written kernels you linked to has been updated with other answers.
#include <cub/cub.cuh>
#include <stdio.h>
#include <stdlib.h>

#include "Utilities.cuh"

using namespace cub;

/**********************************/
/* CUB BLOCKSORT KERNEL NO SHARED */
/**********************************/
template <int BLOCK_THREADS, int ITEMS_PER_THREAD>
__global__ void BlockSortKernel(int *d_in, int *d_out)
{
    // --- Specialize BlockLoad, BlockStore, and BlockRadixSort collective types
    typedef cub::BlockLoad      <int*, BLOCK_THREADS, ITEMS_PER_THREAD, BLOCK_LOAD_TRANSPOSE>  BlockLoadT;
    typedef cub::BlockStore     <int*, BLOCK_THREADS, ITEMS_PER_THREAD, BLOCK_STORE_TRANSPOSE> BlockStoreT;
    typedef cub::BlockRadixSort <int , BLOCK_THREADS, ITEMS_PER_THREAD>                        BlockRadixSortT;

    // --- Allocate type-safe, repurposable shared memory for collectives
    __shared__ union {
        typename BlockLoadT     ::TempStorage load;
        typename BlockStoreT    ::TempStorage store;
        typename BlockRadixSortT::TempStorage sort;
    } temp_storage;

    // --- Obtain this block's segment of consecutive keys (blocked across threads)
    int thread_keys[ITEMS_PER_THREAD];
    int block_offset = blockIdx.x * (BLOCK_THREADS * ITEMS_PER_THREAD);

    BlockLoadT(temp_storage.load).Load(d_in + block_offset, thread_keys);
    __syncthreads();

    // --- Collectively sort the keys
    BlockRadixSortT(temp_storage.sort).Sort(thread_keys);
    __syncthreads();

    // --- Store the sorted segment
    BlockStoreT(temp_storage.store).Store(d_out + block_offset, thread_keys);
}

/*******************************/
/* CUB BLOCKSORT KERNEL SHARED */
/*******************************/
template <int BLOCK_THREADS, int ITEMS_PER_THREAD>
__global__ void shared_BlockSortKernel(int *d_in, int *d_out)
{
    // --- Shared memory allocation
    __shared__ int sharedMemoryArray[BLOCK_THREADS * ITEMS_PER_THREAD];

    // --- Specialize the BlockRadixSort collective type
    typedef cub::BlockRadixSort <int , BLOCK_THREADS, ITEMS_PER_THREAD> BlockRadixSortT;

    // --- Allocate type-safe, repurposable shared memory for collectives
    __shared__ typename BlockRadixSortT::TempStorage temp_storage;

    int block_offset = blockIdx.x * (BLOCK_THREADS * ITEMS_PER_THREAD);

    // --- Load data into shared memory
    for (int k = 0; k < ITEMS_PER_THREAD; k++)
        sharedMemoryArray[threadIdx.x * ITEMS_PER_THREAD + k] = d_in[block_offset + threadIdx.x * ITEMS_PER_THREAD + k];
    __syncthreads();

    // --- Collectively sort the keys
    BlockRadixSortT(temp_storage).Sort(*static_cast<int(*)[ITEMS_PER_THREAD]>(static_cast<void*>(sharedMemoryArray + (threadIdx.x * ITEMS_PER_THREAD))));
    __syncthreads();

    // --- Write data back from shared memory to global memory
    for (int k = 0; k < ITEMS_PER_THREAD; k++)
        d_out[block_offset + threadIdx.x * ITEMS_PER_THREAD + k] = sharedMemoryArray[threadIdx.x * ITEMS_PER_THREAD + k];
}

/********/
/* MAIN */
/********/
int main() {

    const int numElemsPerArray  = 8;
    const int numArrays         = 4;
    const int N                 = numArrays * numElemsPerArray;
    const int numElemsPerThread = 4;
    const int RANGE             = N * numElemsPerThread;

    // --- Allocating and initializing the data on the host
    int *h_data = (int *)malloc(N * sizeof(int));
    for (int i = 0; i < N; i++) h_data[i] = rand() % RANGE;

    // --- Allocating the results on the host
    int *h_result1 = (int *)malloc(N * sizeof(int));
    int *h_result2 = (int *)malloc(N * sizeof(int));

    // --- Allocating space for data and results on device
    int *d_in;   gpuErrchk(cudaMalloc((void **)&d_in,   N * sizeof(int)));
    int *d_out1; gpuErrchk(cudaMalloc((void **)&d_out1, N * sizeof(int)));
    int *d_out2; gpuErrchk(cudaMalloc((void **)&d_out2, N * sizeof(int)));

    // --- BlockSortKernel no shared
    gpuErrchk(cudaMemcpy(d_in, h_data, N*sizeof(int), cudaMemcpyHostToDevice));
    BlockSortKernel<N / numArrays / numElemsPerThread, numElemsPerThread><<<numArrays, numElemsPerArray / numElemsPerThread>>>(d_in, d_out1);
    gpuErrchk(cudaMemcpy(h_result1, d_out1, N*sizeof(int), cudaMemcpyDeviceToHost));

    printf("BlockSortKernel no shared\n\n");
    for (int k = 0; k < numArrays; k++)
        for (int i = 0; i < numElemsPerArray; i++)
            printf("Array nr. %i; Element nr. %i; Value %i\n", k, i, h_result1[k * numElemsPerArray + i]);

    // --- BlockSortKernel with shared
    gpuErrchk(cudaMemcpy(d_in, h_data, N*sizeof(int), cudaMemcpyHostToDevice));
    shared_BlockSortKernel<N / numArrays / numElemsPerThread, numElemsPerThread><<<numArrays, numElemsPerArray / numElemsPerThread>>>(d_in, d_out2);
    gpuErrchk(cudaMemcpy(h_result2, d_out2, N*sizeof(int), cudaMemcpyDeviceToHost));

    printf("\n\nBlockSortKernel with shared\n\n");
    for (int k = 0; k < numArrays; k++)
        for (int i = 0; i < numElemsPerArray; i++)
            printf("Array nr. %i; Element nr. %i; Value %i\n", k, i, h_result2[k * numElemsPerArray + i]);

    return 0;
}
If you are using CUDA 5.x, you can use dynamic parallelism: launch a child kernel from within your filter kernel to do the sorting. As for how to do the sorting in CUDA itself, you can use recursive ("induction"-style) techniques.

CUDA's Mersenne Twister for an arbitrary number of threads

CUDA's implementation of the Mersenne Twister (MT) random number generator is limited to a maximum of 256 threads/block and 200 blocks/grid, i.e. the maximal number of threads is 51200.
Therefore, it is not possible to launch the kernel that uses the MT with
kernel<<<blocksPerGrid, threadsPerBlock>>>(devMTGPStates, ...)
where
int blocksPerGrid = (n+threadsPerBlock-1)/threadsPerBlock;
and n is the total number of threads.
What is the best way to use the MT for threads > 51200?
My approach is to use constant values for blocksPerGrid and threadsPerBlock, e.g. <<<128, 128>>>, and to use the following in the kernel code:
__global__ void kernel(curandStateMtgp32 *state, int n, ...) {
    int id = threadIdx.x+blockIdx.x*blockDim.x;
    while (id < n) {
        float x = curand_normal(&state[blockIdx.x]);
        /* some more calls to curand_normal() followed
           by the algorithm that works with the data */
        id += blockDim.x*gridDim.x;
    }
}
I am not sure if this is the correct way to do it, or whether it could influence the MT state in an undesired way.
Thank you.
I suggest you read the CURAND documentation carefully and thoroughly.
The MT API will be most efficient when using 256 threads per block with up to 64 blocks to generate numbers.
If you need more than that, you have a variety of options:
1. Simply generate more numbers from the existing state-set (i.e. 64 blocks, 256 threads), and distribute these numbers amongst the threads that need them.
2. Use more than a single state per block (but this does not allow you to exceed the overall limit within a state-set; it just addresses the need for a single block).
3. Create multiple MT generators with independent seeds (and therefore independent state-sets).
Generally, I don't see a problem with the kernel that you've outlined, and it's roughly in line with choice 1 above. However, it does not allow you to exceed 51200 threads (your example has <<<128, 128>>>, so 16384 threads).
Following Robert's answer, below I'm providing a fully worked example of using cuRAND's Mersenne Twister for an arbitrary number of threads. I'm using Robert's first option: generating more numbers from the existing state-set and distributing these numbers amongst the threads that need them.
// --- Generate random numbers with cuRAND's Mersenne Twister
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#include <cuda.h>
#include <curand_kernel.h>
/* include MTGP host helper functions */
#include <curand_mtgp32_host.h>

#define BLOCKSIZE 256
#define GRIDSIZE  64

/*******************/
/* GPU ERROR CHECK */
/*******************/
#define gpuErrchk(x) do { if((x) != cudaSuccess) { \
    printf("Error at %s:%d\n",__FILE__,__LINE__); \
    return EXIT_FAILURE;}} while(0)

#define CURAND_CALL(x) do { if((x) != CURAND_STATUS_SUCCESS) { \
    printf("Error at %s:%d\n",__FILE__,__LINE__); \
    return EXIT_FAILURE;}} while(0)

/*******************/
/* iDivUp FUNCTION */
/*******************/
__host__ __device__ int iDivUp(int a, int b) { return ((a % b) != 0) ? (a / b + 1) : (a / b); }

/*********************/
/* GENERATION KERNEL */
/*********************/
__global__ void generate_kernel(curandStateMtgp32 * __restrict__ state, float * __restrict__ result, const int N)
{
    int tid = threadIdx.x + blockIdx.x * blockDim.x;

    for (int k = tid; k < N; k += blockDim.x * gridDim.x)
        result[k] = curand_uniform(&state[blockIdx.x]);
}

/********/
/* MAIN */
/********/
int main()
{
    const int N = 217 * 123;

    // --- Allocate space for results on host
    float *hostResults = (float *)malloc(N * sizeof(float));

    // --- Allocate and initialize space for results on device
    float *devResults; gpuErrchk(cudaMalloc(&devResults, N * sizeof(float)));
    gpuErrchk(cudaMemset(devResults, 0, N * sizeof(float)));

    // --- Setup the pseudorandom number generator
    curandStateMtgp32 *devMTGPStates; gpuErrchk(cudaMalloc(&devMTGPStates, GRIDSIZE * sizeof(curandStateMtgp32)));
    mtgp32_kernel_params *devKernelParams; gpuErrchk(cudaMalloc(&devKernelParams, sizeof(mtgp32_kernel_params)));
    CURAND_CALL(curandMakeMTGP32Constants(mtgp32dc_params_fast_11213, devKernelParams));
    //CURAND_CALL(curandMakeMTGP32KernelState(devMTGPStates, mtgp32dc_params_fast_11213, devKernelParams, GRIDSIZE, 1234));
    CURAND_CALL(curandMakeMTGP32KernelState(devMTGPStates, mtgp32dc_params_fast_11213, devKernelParams, GRIDSIZE, time(NULL)));

    // --- Generate pseudo-random sequence and copy it to the host
    generate_kernel<<<GRIDSIZE, BLOCKSIZE>>>(devMTGPStates, devResults, N);
    gpuErrchk(cudaPeekAtLastError());
    gpuErrchk(cudaDeviceSynchronize());
    gpuErrchk(cudaMemcpy(hostResults, devResults, N * sizeof(float), cudaMemcpyDeviceToHost));

    // --- Print results
    //for (int i = 0; i < N; i++) {
    for (int i = 0; i < 10; i++) {
        printf("%f\n", hostResults[i]);
    }

    // --- Cleanup
    gpuErrchk(cudaFree(devMTGPStates));
    gpuErrchk(cudaFree(devResults));
    free(hostResults);

    return 0;
}
