Does anyone have an implementation of drand48() or an equivalent that can work in an OpenCL kernel?
I have been sending random numbers generated on the host to the device through a buffer, but I need random numbers generated on the device itself, if there is any way to do this.
Here's an OpenCL device function which you can call from an OpenCL kernel:
uint rng_next(__global ulong *states, uint index) {
    /* Assume 32 bits */
    uint bits = 32;
    /* Get current state */
    ulong state = states[index];
    /* Update state */
    state = (state * 0x5DEECE66DL + 0xBL) & ((1L << 48) - 1);
    /* Keep new state */
    states[index] = state;
    /* Return value */
    return (uint) (state >> (48 - bits));
}
The states array contains the state of the PRNG for each work-item, and the index is typically (but not necessarily) the work-item ID, which you can get with get_global_id().
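For instance, a minimal kernel that consumes it might look like this (my sketch, not part of the original code):

__kernel void fill_random(__global ulong *states, __global uint *out) {
    uint gid = get_global_id(0);
    /* Each work-item advances its own state and writes one value. */
    out[gid] = rng_next(states, gid);
}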
The states array can be generated on the host (using another PRNG) and copied to the device, or it can be initialized on the device using some kind of hash function applied to the work-item global IDs. If you use the global IDs directly as initial seeds, the random streams of the work-items will be of very low quality (due to high correlation between them). Here's a kernel that applies a hash function to decorrelate the initial seeds (note that you need a main initial seed, passed in by the host):
__kernel void rng_init(
        const ulong main_seed,
        __global ulong *seeds) {

    /* Get initial seed for this work-item. */
    ulong seed = get_global_id(0) + main_seed;

    /* Apply a basic xor-shift hash; better ones probably exist. */
    seed = ((seed >> 16) ^ seed) * 0x45d9f3b;
    seed = ((seed >> 16) ^ seed) * 0x45d9f3b;
    seed = ((seed >> 16) ^ seed);

    /* Update the seeds array. */
    seeds[get_global_id(0)] = seed;
}
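On the host side, wiring this up might look like the following (a sketch using the C API; error checking is omitted and ctx, queue, prog and n_items are assumed to already exist):

cl_mem states = clCreateBuffer(ctx, CL_MEM_READ_WRITE,
                               n_items * sizeof(cl_ulong), NULL, NULL);
cl_kernel init = clCreateKernel(prog, "rng_init", NULL);
cl_ulong main_seed = 0x1234ABCDULL;  /* any host-chosen seed */
clSetKernelArg(init, 0, sizeof(cl_ulong), &main_seed);
clSetKernelArg(init, 1, sizeof(cl_mem), &states);
size_t gws = n_items;
clEnqueueNDRangeKernel(queue, init, 1, NULL, &gws, NULL, 0, NULL, NULL);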
Note that, as pointed out in the comments, drand48 is of very low quality, and if you use a lot of work-items you will see artifacts in your rendering. This post explains the problem in more detail.
This code is taken from the cl_ops library, which I'm the author of.
If I want to calculate the CRC32 value for a large number of consecutive zero bytes, is there a constant time formula I can use given the length of the run of zeros? For example, if I know I have 1000 bytes all filled with zeros, is there a way to avoid a loop with 1000 iterations (just an example, actual number of zeros is unbounded for the sake of this question)?
You can compute the result of applying n zeros not in O(1) time, but in O(log n) time. This is done in zlib's crc32_combine(). A binary matrix is constructed that represents the operation of applying a single zero bit to the CRC. The 32x32 matrix multiplies the 32-bit CRC over GF(2), where addition is replaced by exclusive-or (^) and multiplication is replaced by and (&), bit by bit.
Then that matrix can be squared to get the operator for two zeros. That result is squared to get the operator for four zeros, squared again for eight zeros, and so on as needed.
Now that set of operators can be applied to the CRC based on the one bits in the number n of zero bits that you want to compute the CRC of.
You can precompute the resulting matrix operator for any number of zero bits, if you happen to know you will be frequently applying exactly that many zeros. Then it is just one matrix multiplication by a vector, which is in fact O(1).
You do not need to use the pclmulqdq instruction suggested in another answer here, but that would be a little faster if you have it. It would not change the O() of the operation.
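To make the matrix method concrete, here is a minimal C sketch (my code, not zlib's; it assumes the reflected CRC-32 polynomial 0xEDB88320 and a raw CRC register with no pre/post XOR conditioning, which zlib's crc32_combine() additionally takes care of):

#include <stdint.h>
#include <string.h>

/* Multiply the 32x32 GF(2) matrix mat by the 32-bit vector vec. */
static uint32_t gf2_matrix_times(const uint32_t *mat, uint32_t vec)
{
    uint32_t sum = 0;
    while (vec) {
        if (vec & 1)
            sum ^= *mat;
        vec >>= 1;
        mat++;
    }
    return sum;
}

/* square = mat * mat over GF(2). */
static void gf2_matrix_square(uint32_t *square, const uint32_t *mat)
{
    for (int n = 0; n < 32; n++)
        square[n] = gf2_matrix_times(mat, mat[n]);
}

/* Advance crc over nbits zero bits in O(log nbits) squarings.
   For n zero BYTES, call with nbits = 8*n. */
uint32_t crc32_apply_zero_bits(uint32_t crc, uint64_t nbits)
{
    uint32_t op[32], sq[32];
    /* op = operator for one zero bit: bit 0 maps to the polynomial,
       bit k maps to bit k-1 (a right shift in the reflected form). */
    op[0] = 0xEDB88320u;
    for (int k = 1; k < 32; k++)
        op[k] = 1u << (k - 1);
    while (nbits) {
        if (nbits & 1)
            crc = gf2_matrix_times(op, crc); /* apply 2^step zero bits */
        gf2_matrix_square(sq, op);           /* op now covers twice as many */
        memcpy(op, sq, sizeof(op));
        nbits >>= 1;
    }
    return crc;
}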
Time complexity can be reduced to O(1) using a table lookup followed by a multiply. The explanation and example code are shown in the third section of this answer.
If the 1000 is a constant, a precomputed table of 32 values could be used, one per bit of the CRC, each holding that bit multiplied by x^8000 mod the poly. Alternatively, a set of matrices (one per byte of the CRC) could be used to work a byte at a time. Both methods are constant time (a fixed number of loops), O(1).
As commented above, if the 1000 is not a constant, then exponentiation by squaring could be used, which is O(log2(n)) time complexity, or a combination: precomputed tables for some constant number of zero bits, such as 256, followed by exponentiation by squaring for the remainder, so that the final step is O(log2(n%256)).
Optimization in general: for normal data with zero and non-zero elements, a fast CRC32 (or CRC16) can be implemented on a modern x86 with pclmulqdq (which uses xmm registers), although it's close to 500 lines of assembly code. Intel document: crc using pclmulqdq. Example source code: github fast crc16. For a 32-bit CRC, a different set of constants is needed. If interested, I converted the source code to work with Visual Studio ML64.EXE (64-bit MASM), and created examples for both left-shift and right-shift 32-bit CRCs, each with two sets of constants for the two most popular 32-bit CRC polynomials (left-shift polys: crc32 0x104C11DB7 and crc32c 0x11EDC6F41; the right-shift polys are bit-reversed).
Example code below for fast adjustment of a CRC using a software carryless multiply modulo the CRC polynomial. This will be much faster than a 32x32 matrix multiply. A CRC is calculated for the non-zero data: crf = GenCrc(msg, ...). An adjustment constant is calculated for n zero bytes: pmc = pow(2, 8*n) % poly (using exponentiation by repeated squaring). Then the CRC is adjusted for the zero bytes: crf = (crf*pmc)%poly.
Note that time complexity can be reduced to O(1) by generating a table of pow(2, 8*i) % poly constants for i = 1 to n. Then the calculation is a table lookup plus a fixed-iteration (32-loop) multiply % poly.
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>

static uint32_t crctbl[256];

void GenTbl(void)                       /* generate crc table */
{
    uint32_t crc;
    uint32_t c;
    uint32_t i;
    for(c = 0; c < 0x100; c++){
        crc = c<<24;
        for(i = 0; i < 8; i++)
            crc = (crc<<1)^((0-(crc>>31))&0x04c11db7);
        crctbl[c] = crc;
    }
}

uint32_t GenCrc(uint8_t * bfr, size_t size)   /* generate crc */
{
    uint32_t crc = 0u;
    while(size--)
        crc = (crc<<8)^crctbl[(crc>>24)^*bfr++];
    return(crc);
}

/* carryless multiply modulo crc polynomial */
uint32_t MpyModCrc(uint32_t a, uint32_t b)    /* (a*b)%crc */
{
    uint32_t pd = 0;
    uint32_t i;
    for(i = 0; i < 32; i++){
        pd = (pd<<1)^((0-(pd>>31))&0x04c11db7u);
        pd ^= (0-(b>>31))&a;
        b <<= 1;
    }
    return pd;
}

/* exponentiate by repeated squaring modulo crc polynomial */
uint32_t PowModCrc(uint32_t p)                /* pow(2,p)%crc */
{
    uint32_t prd = 0x1u;                      /* current product */
    uint32_t sqr = 0x2u;                      /* current square */
    while(p){
        if(p&1)
            prd = MpyModCrc(prd, sqr);
        sqr = MpyModCrc(sqr, sqr);
        p >>= 1;
    }
    return prd;
}

#define DAT (  32)      /* # data bytes */
#define PAD ( 992)      /* # zero bytes */
#define CNT (1024)      /* DAT+PAD */

int main()
{
    uint32_t pmc;
    uint32_t crc;
    uint32_t crf;
    uint32_t i;
    uint8_t *msg = malloc(CNT);
    for(i = 0; i < DAT; i++)            /* generate msg */
        msg[i] = (uint8_t)rand();
    for( ; i < CNT; i++)
        msg[i] = 0;
    GenTbl();                           /* generate crc table */
    crc = GenCrc(msg, CNT);             /* generate crc normally */
    crf = GenCrc(msg, DAT);             /* generate crc for data */
    pmc = PowModCrc(PAD*8);             /* pmc = pow(2,PAD*8)%poly */
    crf = MpyModCrc(crf, pmc);          /* crf = (crf*pmc)%poly */
    printf("%08x %08x\n", crc, crf);    /* crf == crc */
    free(msg);
    return 0;
}
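For the O(1) table variant mentioned above, a sketch along these lines could be added to the program (GenZeroAdjustTbl, CrcAppendZeros and MAX_ZEROS are illustrative names of mine, not from the original code):

/* Precompute pow(2,8*i)%poly for every zero-run length of interest;
   adjusting a CRC for i zero bytes is then one table lookup plus one
   32-iteration MpyModCrc call, i.e. O(1). */
#define MAX_ZEROS 4096                  /* assumed upper bound on runs */
static uint32_t zero_adjust[MAX_ZEROS + 1];

void GenZeroAdjustTbl(void)             /* call once at startup */
{
    uint32_t i;
    for (i = 0; i <= MAX_ZEROS; i++)
        zero_adjust[i] = PowModCrc(8 * i);
}

uint32_t CrcAppendZeros(uint32_t crc, uint32_t n)   /* n <= MAX_ZEROS */
{
    return MpyModCrc(crc, zero_adjust[n]);
}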
CRC32 is based on multiplication in GF(2)[X] modulo some polynomial, and that core operation is linear (multiplicative). The tricky part is separating the non-multiplicative parts of the standard CRC32 (the bit reversals and the initial/final XORs) from the multiplicative core.
First define a sparse file with the following structure (in Go):
type SparseFile struct {
    FileBytes []SparseByte
    Size      uint64
}

type SparseByte struct {
    Position uint64
    Value    byte
}
In your case it would be SparseFile{[]SparseByte{}, 1000}
Then, the function would be:
func IEEESparse(file SparseFile) uint32 {
    position2Index := map[uint64]int{}
    for i, v := range file.FileBytes {
        file.FileBytes[i].Value = bits.Reverse8(v.Value)
        position2Index[v.Position] = i
    }
    for i := 0; i < 4; i++ {
        index, ok := position2Index[uint64(i)]
        if !ok {
            file.FileBytes = append(file.FileBytes, SparseByte{Position: uint64(i), Value: 0xFF})
        } else {
            file.FileBytes[index].Value ^= 0xFF
        }
    }
    // Add padding
    file.Size += 4
    newReminder := bits.Reverse32(reminderIEEESparse(file))
    return newReminder ^ 0xFFFFFFFF
}
So note that:
Division is performed on the bits in the opposite order (per byte).
The first four bytes are XORed with 0xFF.
The file is padded with 4 bytes.
The remainder is reversed again.
The remainder is XORed again.
The inner function reminderIEEESparse computes the true remainder, and it is easy to implement in O(log n), where n is the size of the file.
You can find a full implementation here.
I noticed that on Windows every time I issue an unbuffered fread() request with an odd length, it's split into 2 requests (as observed through procmon):
a) fread for my requested length-1
b) 2-byte fread for the last byte
This has obvious performance overhead: two kernel requests instead of one, and so on.
Sample code ran on Windows 10:
#include <iostream>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char* argv[]) {
    FILE* pFile;
    char* buffer;
    pFile = fopen(argv[0], "rb");
    setbuf(pFile, nullptr);
    size_t len = 3;
    buffer = (char*)malloc(sizeof(char)*len);
    if (len != fread(buffer, 1, len, pFile)) {
        fputs("Reading error", stderr);
        exit(3);
    }
    free(buffer);
    fclose(pFile);
    return 0;
}
This results in the following procmon reported calls:
ReadFile c:\work\cpptry\Debug\cpptry.exe SUCCESS Offset: 0, Length: 2, Priority: Normal
ReadFile c:\work\cpptry\Debug\cpptry.exe SUCCESS Offset: 2, Length: 2
It seems as if Windows is incapable of issuing odd-sized requests to the file system.
What's up with that?
This is an implementation artifact.
The MS CRT keeps every FILE buffered even if you tell it not to. Instead, the file buffer is pointed at an internal buffer with space for two bytes. This allows the CRT to keep one code path instead of two and simplifies the implementation of the fast path in fgetc and fputc:
#define fgetc(_stream) (--(_stream)->_cnt >= 0 ? 0xff & *(_stream)->_ptr++ : _filbuf(_stream))
Some of you are probably bothered by the size of the buffer (2 bytes when quasi-unbuffered), but the _fread_nolock_s function contains an optimization which tries to read multiples of the buffer size directly into the destination, bypassing the file buffer.
See fread.c in CRT sources:
/* calc chars to read -- (count/streambufsize) * streambufsize */
nbytes = (unsigned)(count - count % streambufsize);
...
nread = _read_nolock(_fileno(stream), data, nbytes);
Because the file buffer's size is 2, an even number of bytes is read directly into the destination, and the eventual odd byte goes through the file buffer. Sometimes there may already be some bytes in the buffer that need to be transferred to the destination before the optimized read can take place.
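Plugging the question's numbers into that snippet (a worked example, using the names from the CRT source above):

/* 3-byte fread with the 2-byte "unbuffered" file buffer: */
unsigned count = 3, streambufsize = 2;
unsigned nbytes = count - count % streambufsize;   /* 3 - 1 = 2 */
/* -> one direct 2-byte ReadFile into the caller's buffer; the last
   byte then goes through the 2-byte file buffer, which issues the
   second 2-byte ReadFile seen in procmon. */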
Bonus: the buffer size is always forced to a multiple of 2.
See setvbuf.c:
/*
 * force size to be even by masking down to the nearest multiple
 * of 2
 */
size &= (size_t)~1;
...
/*
 * CASE 1: No Buffering.
 */
if (type & _IONBF) {
    stream->_flag |= _IONBF;
    buffer = (char *)&(stream->_charbuf);
    size = 2;
}
Code snippets above are from VC 2013 CRT.
For comparison snippets from Universal CRT 10.0.17134
read.cpp
unsigned const bytes_to_read = stream_buffer_size != 0
    ? static_cast<unsigned>(maximum_bytes_to_read - maximum_bytes_to_read % stream_buffer_size)
    : maximum_bytes_to_read;
...
int const bytes_read = _read_nolock(_fileno(stream.public_stream()), data, bytes_to_read);
setvbuf.cpp
// Force the buffer size to be even by masking the low order bit:
size_t const usable_buffer_size = buffer_size_in_bytes & ~static_cast<size_t>(1);
...
// Case 1: No buffering:
if (type & _IONBF)
{
    return set_buffer(stream, reinterpret_cast<char*>(&stream->_charbuf), 2, _IOBUFFER_NONE);
}
And snippets from VC 6.0 (1998)
read.c
/* calc chars to read -- (count/bufsize) * bufsize */
nbytes = ( bufsize ? (count - count % bufsize) : count );
nread = _read(_fileno(stream), data, nbytes);
setvbuf.c
/*
 * force size to be even by masking down to the nearest multiple
 * of 2
 */
size &= (size_t)~1;
...
/*
 * CASE 1: No Buffering.
 */
if (type & _IONBF) {
    stream->_flag |= _IONBF;
    buffer = (char *)&(stream->_charbuf);
    size = 2;
}
Using a C# script in the Unity3D game engine to control an HLSL compute shader, I'm trying to generate pseudo-random numbers on the GPU and store them in a Texture2D. Following along with
GPU Gems 3 Hybrid Tausworthe method
and another thread Pseudo Random Number Generation on the GPU, I've come across an issue.
The problem:
The resulting texture appears to be one solid color. If I run the shader multiple times, I get a different solid-color texture every time, but the entire texture is always a single color.
Compute shader code
#pragma kernel CSMain

RWTexture2D<float4> result; // 256x256 texture to write to
uint4 seed; // four uniform random numbers generated on the CPU in a C# script

struct RandomResult
{
    uint4 state;
    float value;
};

uint TausStep(uint z, int S1, int S2, int S3, uint M)
{
    uint b = (((z << S1) ^ z) >> S2);
    return ((z & M) << S3) ^ b;
}

uint LCGStep(uint z, uint A, uint C)
{
    return A * z + C;
}

RandomResult HybridTaus(uint4 state)
{
    state.x = TausStep(state.x, 13, 19, 12, 4294967294);
    state.y = TausStep(state.y, 2, 25, 4, 4294967288);
    state.z = TausStep(state.z, 3, 11, 17, 4294967280);
    state.w = LCGStep(state.w, 1664525, 1013904223);

    RandomResult rand;
    rand.state = state;
    rand.value = 2.3283064365387e-10 * (state.x ^ state.y ^ state.z ^ state.w);
    return rand;
}

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    result[id.xy] = HybridTaus(seed).value;
}
Do I need to save the state on the gpu? If so, how would I do that? Do I need to deallocate the memory afterwards?
I tried assigning the result of the HybridTaus() function back to seed, in the hope that the following HybridTaus(seed) call would use the new state, to see if that would make a difference. I also tried adding unique arbitrary numbers based on the thread ID (the id parameter). This gave somewhat improved results, but I suspect the randomness is then only as good as the maths performed on the thread IDs, and not effectively coming from the random number generator.
[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // first thing I tried
    //RandomResult rand = HybridTaus(seed);
    //seed = rand.state; // re-assign seed with the new state
    //result[id.xy] = rand.value;

    // second thing I tried
    RandomResult rand = HybridTaus(seed * uint4(id.x*id.y*id.x*id.y,
                                                id.x*id.y/id.x*id.y,
                                                id.x*id.y+id.x*id.y,
                                                id.x*id.y-id.x*id.y));
    result[id.xy] = rand.value;
}
First of all, I don't know about the algorithm you posted, but I found this simple algorithm online for generating random numbers on the GPU. Here seed is a 32-bit uint.
uint wang_hash(uint seed)
{
    seed = (seed ^ 61) ^ (seed >> 16);
    seed *= 9;
    seed = seed ^ (seed >> 4);
    seed *= 0x27d4eb2d;
    seed = seed ^ (seed >> 15);
    return seed;
}
In most cases this is sufficient: you can pass your compute shader's local invocation ID, as that is unique, and get a random number per thread or per invocation. However, if you need multiple random numbers per invocation (for example inside a loop or a nested loop) this wasn't working, since the seed stays the same. So I messed with the function a little bit and came up with this:
uint wang_hash(uint seed, float x, float y)
{
    // x and y are the nested loop variables (passed in as parameters
    // here so the snippet is self-contained); they perturb the seed
    // so each call in the loop hashes a different input.
    seed += (uint)(76.897898 * 48.789789 * cos(x) * sin(y) * 20.79797);
    seed = (seed ^ 61) ^ (seed >> 16);
    seed *= 9;
    seed = seed ^ (seed >> 4);
    seed *= 0x27d4eb2d;
    seed = seed ^ (seed >> 15);
    return seed;
}
Here x and y are my nested for-loop variables. And this works for me. Now you can get multiple random numbers per invocation.
In your case, however, I don't think you need the latter one. If I understood correctly, you just need to store a random number for every texel, so you can use the first version and seed it with the unique invocation ID to get a random value for every texel.
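For example, a minimal sketch of that per-texel seeding (it assumes the question's 256-wide RWTexture2D<float4> result, and adds the SV_DispatchThreadID semantic that the id parameter needs):

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // Build one unique 32-bit seed per texel from the thread ID.
    uint seed = wang_hash(id.y * 256 + id.x);
    // Map the hash to [0, 1) by multiplying with 1/2^32.
    float r = seed * 2.3283064365387e-10;
    result[id.xy] = float4(r, r, r, 1.0);
}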
I've done a lot of OpenGL and shader work before, and now I've decided to give OpenCL a try. I watched some online tutorials and started reading books on the subject. In order to understand it better, and because I believe the best way to learn is by intelligently trying things and learning from the issues that arise while doing so, I decided to start by implementing a kernel for a fully-connected perceptron.
For those who don't know what that is, I'll explain the basic idea. It is a neural network in which each neuron of a layer is connected to every neuron of the next layer. Each neuron has but one action to perform: computing the sum of all the neurons from the previous layer, each weighted by a different value.
This seemed simple enough to implement, and after reading the paper "Parallel Neural Network Training with OpenCL" I implemented it in the following way:
Since each layer depends on the previous one, the layers are run sequentially by the host
For computing a layer, I run my kernel with a global work size equal to the number of neurons within the layer (which can be quite huge, tens of thousands for instance). That makes all the neurons perform their sums independently of one another.
Each neuron (identified by its global_work_id) performs the weighted sum with all the neurons from the previous layer.
Here is my fully functional OpenCL kernel:
/**
 * @brief Computes one layer of the perceptron given the previous one and the
 * weights
 * The kernel is run once for each layer.
 * The work items are each tasked with computing the output of a single neuron
 * of the out layer.
 *
 * @param out_layer_size
 * Size of the output layer (number of elements in the output array that will
 * contain the result for each neuron).
 * @param in_layer_size
 * Number of elements of the input layer
 * @param in_value
 * Values of the neurons in the previous layer
 * @param in_weights
 * Array containing the weights for each input neuron. It is organised as a
 * two dimensional matrix, written by concatenating each line in the array
 * [ w11, w12, w13, ...
 *   w21, w22, w23, ...
 *   ..., ..., ..., ... ]
 * Where wij is the weight linking the neuron i of the input layer to the
 * neuron j of the output layer
 * @param out_values
 * Computed values for the current layer
 */
void kernel perceptron(global const int* in_layer_size, global const int* out_layer_size, global const float *in_value, global const float* in_weights, global float* out_values)
{
    private const int global_id = get_global_id(0);
    private const int out_layer_s = *out_layer_size;
    private const int in_layer_s = *in_layer_size;
    private const int offset = out_layer_s * global_id;

    private float sum = 0.;
    for(int i = 0; i < in_layer_s; i++) {
        sum += in_weights[i*out_layer_s + global_id] * in_value[i];
    }
    //out_values[global_id] = sigma(sum);
    out_values[global_id] = sum;
}
And here is how I invoke it:
queue.enqueueNDRangeKernel(kernel, cl::NullRange,cl::NDRange(number of neurons within layer),cl::NullRange);
I realize that the bottleneck of this kernel is the implementation of the weighted sum. It would be really helpful if someone could explain how I could improve upon this to make it faster.
I probably don't make proper use of the different memory regions, I'm thinking essentially of the local memory that I don't even use.
Just to give you an idea of performance (that is on an Nvidia GTX 660M), I'll show you some of the times I achieved. Each value is the number of neurons per layer:
2500, 10 000, 2500 : 0.018s ~ 60FPS. It's about 4 to 5 times faster than on my processor (Intel Core i7 running at 2.40GHz)
100 000, 100 000, 500: 140s -> which I guess isn't surprising, since each neuron in the second layer has to perform the weighted sum of 100 000 elements. Running this on my processor yields about the same results.
As you said, the bottleneck is the weighted sum. That's not surprising, as at each layer every WI (work item) performs a lot of memory operations in comparison to the number of arithmetic operations. I have no experience with neural networks, but to me this looks like a poor memory access pattern on the GPU.
Potentially, that can be solved by organizing your WIs into local WGs (work groups). As every WI needs to process all data from the previous layer, all the WIs in a WG can cooperatively load some amount of data into local memory, process it, and then move on to the next chunk of data. This will make your algorithm much more cache friendly. Pseudo-code for the kernel looks like:
void kernel Kernel(
        const int in_layer_size,
        const int out_layer_size,
        __global const float *in_value,
        __global const float *in_weights,
        __global float *out_values){

    __local float buffer[SOME_SIZE];
    __global const float *p_in = in_value;

    const int
        global_id   = get_global_id(0),
        local_id    = get_local_id(0),
        num_buffers = in_layer_size / SOME_SIZE,
        offset      = out_layer_size * global_id;

    float sum = 0.0f;
    for(int i = 0; i < num_buffers; i++){
        buffer[local_id] = p_in[local_id];
        barrier(CLK_LOCAL_MEM_FENCE);

        //Process all data inside buffer by every WI in WG
        //...

        p_in += SOME_SIZE;
    }
    //...
    return;
}
So, you slide a window of fixed size over the input, calculate on the data within it, and then move to the next window. All data operations are done independently; work items only use the same data at the same time. The optimal local group size is device- and kernel-dependent.
You can do it in many ways.
But the most generic way, without changing how your kernel behaves, is to reuse your workgroup size (whatever you selected, or the default) and share the memory accesses within the group.
I would suggest something like this:
NOTE: I removed those ugly pointers for single values. OpenCL supports passing scalars directly, and it is much easier. There is no need to create a memory zone; just do clSetKernelArg(kernel, arg_index, sizeof(cl_int), &size); where cl_int size = the_size;.
#define IN_LOCAL_SIZE 4096 //Because 16KB/4B (for each float)

void kernel perceptron(global const int in_layer_size, global const int out_layer_size, global const float *in_value, global const float* in_weights, global float* out_values)
{
    const int global_id = get_global_id(0);
    __local float in_buffer[IN_LOCAL_SIZE];
    float sum = 0.0f;
    event_t ev;
    int j;

    //For each full buffer
    for(j = 0; j < in_layer_size/IN_LOCAL_SIZE; j++) {
        ev = async_work_group_copy(in_buffer, in_value+j*IN_LOCAL_SIZE, IN_LOCAL_SIZE, 0);
        wait_group_events(1, &ev);
        barrier(CLK_LOCAL_MEM_FENCE);
        for(int i = 0; i < IN_LOCAL_SIZE; i++) {
            sum += in_weights[(i+j*IN_LOCAL_SIZE)*out_layer_size+global_id] * in_buffer[i];
        }
    }
    //Last (partial) buffer
    ev = async_work_group_copy(in_buffer, in_value+j*IN_LOCAL_SIZE, in_layer_size%IN_LOCAL_SIZE, 0);
    wait_group_events(1, &ev);
    barrier(CLK_LOCAL_MEM_FENCE);
    for(int i = 0; i < in_layer_size%IN_LOCAL_SIZE; i++) {
        sum += in_weights[(i+j*IN_LOCAL_SIZE)*out_layer_size+global_id] * in_buffer[i];
    }
    out_values[global_id] = sum;
}
However, if the output layer is small (as in your 100 000, 100 000, 500 case), you will have just 500 work items, which is not optimal. In that case you should reshape the algorithm.
One possible way to do it is that each workitem works in the inner layer, performing sums, and the whole work group creates one output out of all the work items. That would be easy, since you can control the sums inside the workgroup easily.
But maybe other approaches fit better your problem.
You can make large improvements by caching in_values in local memory. The fewer times you have to read each element of in_values from global memory, the better.
I have come up with a solution that caches the maximum number of input values, and reads each element from global memory only once per work group. This is done by copying a block of in_values at a time, processing it against all out_values, and moving on to the next block. There is also a local array of floats used to reduce the work items' sums of each block.
pseudocode:
output elements assumed to be set to 0 already
for each block of input values:
    cache the input block
    for each target output value:
        reset local sum to 0
        for each element this work item is responsible for:
            read the weight, multiply, and add to sum
        reduce sums to a single value, ADD value to output element
I haven't had a chance to run this through a profiler or debugger yet, but I will give it a try when I am back at my home PC (no OpenCL tools at my office workstation). Make sure to enqueue the kernel with a local work size equal to the GROUP_SIZE constant. Also, create only a single group per compute unit on your device.
real code:
//experiment with GROUP_SIZE to discover the optimal value for your device
//this needs to be equal to local_work_size passed into clEnqueueNDRangeKernel
//use a multiple of CL_KERNEL_PREFERRED_WORK_GROUP_SIZE_MULTIPLE
//max. for most devices is 256
#define GROUP_SIZE = 64;
// IN_VALUE_CACHE_SIZE is the number of floats from in_value to copy to local memory at a time
//assuming GROUP_SIZE can be up to 256, sizeof(float)=4, and local memory size is 32kb, full saturation can be achieved with the following:
//(32768 - (256 * 4)) /4 = 7936
//try another multiple of 1024 (6144, 4096... )if there is trouble with this value
#define IN_VALUE_CACHE_SIZE = 7936;
void kernel perceptron(global const int* in_layer_size, global const int* out_layer_size, global const float *in_value, global const float* in_weights, global float* out_values)
{
private const int global_id = get_global_id(0);
private const int out_layer_s = *out_layer_size;
private const int in_layer_s = *in_layer_size;
private const int offset = out_layer_s * global_id;
private const int item_id = get_local_id(0);
private const int group_id = get_group_id(0);
private const int group_count = get_num_groups(0);
local float result_buffer[GROUP_SIZE];
local float in_value_cache[IN_VALUE_CACHE_SIZE];
int i,j,k;
//init the block to 0, in case there are fewer than IN_VALUE_CACHE_SIZE values in total
for(i=item_id; i<IN_VALUE_CACHE_SIZE; i+= GROUP_SIZE){
in_value_cache[i] = 0.0;
}
barrier(CL_LOCAL_MEM_FENCE);
private float sum = 0.0;
event_t e;
int copy_total = 0;
int copy_offset;
for(i=0; i<in_layer_s; i+=IN_VALUE_CACHE_SIZE){
//cap the number of values to copy to local memory if loop is near the end of the input data
copy_total = IN_VALUE_CACHE_SIZE;
if((copy_total + i*IN_VALUE_CACHE_SIZE) > in_layer_s){
copy_total = in_layer_s - i*IN_VALUE_CACHE_SIZE;
}
//copy the next block of values
e = async_work_group_copy(in_value_cache, in_value + i * 4, copy_total, 0);
wait_group_events(1, &e);
for(j=group_id; j<out_layer_s; j+=group_count){
sum = 0.0;
//need to reset result_buffer[item_id] as well
//this is in case there are fewer than GROUP_SIZE input values remaining ie copy_total < GROUP_SIZE
result_buffer[item_id] = 0.0;
for(k=item_id; k<copy_total; k+=GROUP_SIZE){
sum += in_value_cache[k] * in_weights[(k+i) + j * out_layer_s];
}
result_buffer[item_id] = sum;
//simple O(n) reduction can be optimized further
if(item_id == 0){
for(k=1;k<GROUP_SIZE;k++){
sum += result_buffer[k];
}
out_values[j] += sum;
}
barrier(CL_LOCAL_MEM_FENCE);
}
}
}
This will handle input of any size, so you can try it with as many elements as you have global memory for.
Is the following code correct for passing the random generator state (CUDA Toolkit 3.2, curand.lib) by reference in the functions CalculateValue(curandState *localState) and GetExponential(curandState *localState)?
Thanks
__device__ double GetExponential(curandState *localState)
{
    double u1 = curand_uniform_double(localState);
    /* The original snippet omitted the return; an inverse-transform
       exponential sample is assumed here. */
    return -log(u1);
}

__device__ double CalculateValue(curandState *localState)
{
    double x = GetExponential(localState);
    return x;
}

__global__ void RunMonteCarloKernel(curandState *state, double *results)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    /* Copy state to local memory for efficiency */
    curandState localState = state[i];
    results[i] = CalculateValue(&localState);
    /* Copy state back to global memory */
    state[i] = localState;
}

__global__ void setup_kernel(curandState *state)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    /* Each thread gets a different seed, a different sequence number, no offset */
    curand_init(i, i, 0, &state[i]);
}

int main(void)
{
    double *devResults;
    curandState *devStates;

    /* Allocate space for prng states on device */
    CUDA_CALL(cudaMalloc((void **)&devStates, totalThreads * sizeof(curandState)));
    /* Allocate space for results on device (missing in the original snippet) */
    CUDA_CALL(cudaMalloc((void **)&devResults, totalThreads * sizeof(double)));

    /* Setup prng states */
    setup_kernel<<<totalBlocks, threadsPerBlock>>>(devStates);

    for(int i = 0; i < 1000; i++)
    {
        RunMonteCarloKernel<<<totalBlocks, threadsPerBlock>>>(devStates, devResults);
    }
}
Is there a problem? It looks ok.
You may want to check out the EstimatePiInlineP sample which is in the MonteCarloCURAND directory of the 3.2 SDK. It uses C++ style pass by reference to avoid taking the address of a local variable. You would need to store the state back to memory at the end of the kernel (as you do in your code).
Passing by C++ reference can assist the compiler by clearly showing that the function can operate on the data directly in the original registers. Taking the address of a local variable in GPU code can be detrimental to performance if the compiler cannot be certain that all threads handle the pointer identically (i.e., perform identical operations on it); in that case it will spill the variable to local memory. It'll still work, but it may be slower.
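As a rough sketch of that pass-by-reference style (my code, not the SDK sample; the function names mirror the question's, and the inverse-transform exponential is an assumption):

__device__ double GetExponential(curandState &state)
{
    /* curand_uniform_double returns a value in (0, 1], so the log is finite. */
    return -log(curand_uniform_double(&state));
}

__global__ void RunMonteCarloKernel(curandState *state, double *results)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    curandState localState = state[i];   /* local copy the compiler can keep in registers */
    results[i] = GetExponential(localState);
    state[i] = localState;               /* store the advanced state back */
}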