I'm playing around with OpenCL, a GeForce GTX 550 and driver version 331.38 from Ubuntu 14.04. What stumps me is the speed of copying from global to local memory. As far as I know, the following code should do coalesced access to global memory:
void toLocal(__local float* target, const __global float* source, int count) {
    const int iterations = (count + get_local_size(0) - 1) / get_local_size(0);
    for (int i = 0; i < iterations; i++) {
        int idx = i * get_local_size(0) + get_local_id(0);
        if (idx < count)
            target[idx] = source[idx];
    }
}
In practice, the following code (in which every work item copies the entire buffer, so each float is copied over and over again) is measurably faster:
void toLocal(__local float* target, const __global float* source, int count) {
    for (int i = 0; i < count; i++)
        target[i] = source[i];
}
Both source and target point directly at the beginning of a buffer, so I would guess they are correctly aligned. The work-group size is 16 by 16; trying to use all threads makes the code more complex but doesn't affect speed. The optimal coalescing width would be 128 bytes, i.e. 32 floats, but as far as I know, on compute capability 2.x cards (which the GTX 550 is) the penalty for using only part of a transaction, or even permuting the elements, shouldn't be that bad. Adding a local memory fence to the first version only makes it slower. Is there anything else I missed?
EDIT: Changing the group size to 32 by 32 made the parallel version roughly as fast as the sequential version at 16 by 16, and made the sequential version slightly slower. Still not the speed improvement I was expecting.
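For comparison (not part of the original post), OpenCL also provides a built-in group copy that lets the runtime pick the access pattern; a minimal sketch of the same helper using it:

void toLocalBuiltin(__local float* target, const __global float* source, int count) {
    // Every work item in the group must reach this call with identical arguments;
    // the runtime performs the global -> local copy and returns an event.
    event_t e = async_work_group_copy(target, source, (size_t)count, 0);
    // Block until the copy has finished before anyone reads from target.
    wait_group_events(1, &e);
}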
I am writing some OpenCL code. My kernel should create a special "accumulator" output based on an input image. I have tried two concepts and both are equally slow, although the second one uses local memory. Could you please help me identify why the local memory version is so slow? The target GPU for the kernels is an AMD Radeon Pro 450.
// version one
__kernel void find_points(__global const unsigned char* input, __global unsigned int* output) {
    const unsigned int x = get_global_id(0);
    const unsigned int y = get_global_id(1);
    int ind;
    int k;
    for (k = SOME_BEGINNING; k <= SOME_END; k++) {
        // some pretty wild calculation
        // ind is not linear and accesses different areas of the output
        ind = ...
        if (input[y * WIDTH + x] == 255) {
            atomic_inc(&output[ind]);
        }
    }
}
// variant two
__kernel void find_points(__global const unsigned char* input, __global unsigned int* output) {
    const unsigned int x = get_global_id(0);
    const unsigned int y = get_global_id(1);
    __local int buf[7072];
    if (y < 221 && x < 32) {
        buf[y * 32 + x] = 0;
    }
    barrier(CLK_LOCAL_MEM_FENCE);
    int ind;
    int k;
    for (k = SOME_BEGINNING; k <= SOME_END; k++) {
        // some pretty wild calculation
        // ind is not linear and accesses different areas of the output
        ind = ...
        if (input[y * WIDTH + x] == 255) {
            atomic_inc(&buf[ind]);
        }
    }
    barrier(CLK_LOCAL_MEM_FENCE);
    if (get_local_id(0) == get_local_size(0) - 1)
        for (k = 0; k < 7072; k++)
            output[k] = buf[k];
}
I would expect that the second variant is faster than the first one, but it isn't. Sometimes it is even slower.
The local buffer __local int buf[7072] (28288 bytes) is too big. I don't know exactly how much local memory the AMD Radeon Pro 450 has, but it is likely 32 kB or 64 kB per compute unit.
32768/28288 = 1 and 65536/28288 = 2, which means only 1 or at most 2 work groups can be resident on a compute unit at a time. The occupancy of the compute unit is therefore very low, hence the poor performance.
Your aim should be to reduce the local buffer as much as possible so that more wavefronts (64 work items each) can be processed simultaneously.
Use CodeXL to profile your kernel - it has tools that show you all of this.
Alternatively, if you don't want to run the profiler, you can have a look at the CUDA occupancy calculator spreadsheet to get a better idea of what occupancy is about.
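As a quick check (not from the original answer), the local memory limit can also be queried at runtime; a minimal host-side sketch, assuming device is a valid cl_device_id:

#include <stdio.h>
#include <CL/cl.h>

/* Prints the local memory available per compute unit and how many
   copies of the 28288-byte buffer from the question would fit in it. */
void print_local_mem_budget(cl_device_id device) {
    cl_ulong local_mem = 0;
    clGetDeviceInfo(device, CL_DEVICE_LOCAL_MEM_SIZE,
                    sizeof(local_mem), &local_mem, NULL);
    printf("local memory per compute unit: %llu bytes\n",
           (unsigned long long)local_mem);
    printf("28288-byte buffers that fit: %llu\n",
           (unsigned long long)(local_mem / 28288));
}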
I have the following matrix multiplication code, implemented using CUDA 3.2 and VS 2008. I am running on Windows Server 2008 R2 Enterprise with an NVIDIA GTX 480. The following code works fine with values of "Width" (the matrix width) up to about 2500 or so.
int size = Width*Width*sizeof(float);
float* Md, *Nd, *Pd;
cudaError_t err = cudaSuccess;
//Allocate Device Memory for M, N and P
err = cudaMalloc((void**)&Md, size);
err = cudaMalloc((void**)&Nd, size);
err = cudaMalloc((void**)&Pd, size);
//Copy Matrix from Host Memory to Device Memory
err = cudaMemcpy(Md, M, size, cudaMemcpyHostToDevice);
err = cudaMemcpy(Nd, N, size, cudaMemcpyHostToDevice);
//Setup the execution configuration
dim3 dimBlock(TileWidth, TileWidth, 1);
dim3 dimGrid(ceil((float)(Width)/TileWidth), ceil((float)(Width)/TileWidth), 1);
MatrixMultiplicationMultiBlock_Kernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);
err = cudaMemcpy(P, Pd, size, cudaMemcpyDeviceToHost);
//Free Device Memory
cudaFree(Md);
cudaFree(Nd);
cudaFree(Pd);
When I set "Width" to 3000 or greater, the screen goes black and I then get an error:
I looked online and saw that some people had this issue because the watchdog kills the kernel after it hangs for more than 5 seconds. I tried editing "TdrDelay" in the registry; this delayed the time before the black screen, but the same error appeared, so I concluded this was not my issue.
I debugged into my code and found this line to be the culprit:
err = cudaMemcpy(P, Pd, size, cudaMemcpyDeviceToHost);
This is what I use to return my result set from the device after my matrix multiplication kernel is called. Everything up until this point seems to run fine. I believe I am allocating memory correctly and cannot figure out why this is happening. I thought maybe I didn't have enough memory on my card, but then shouldn't cudaMalloc have returned an error? (I confirmed it didn't while debugging.)
Any ideas/assistance would be greatly appreciated!... Thanks a lot guys!!
Kernel code:
//Matrix Multiplication Kernel - Multi-Block Implementation
__global__ void MatrixMultiplicationMultiBlock_Kernel(float* Md, float* Nd, float* Pd, int Width)
{
    int TileWidth = blockDim.x;

    // Get row and column from block and thread ids
    int Row = (TileWidth * blockIdx.y) + threadIdx.y;
    int Column = (TileWidth * blockIdx.x) + threadIdx.x;

    // Pvalue stores the Pd element that is computed by the thread
    float Pvalue = 0;
    for (int i = 0; i < Width; ++i)
    {
        float Mdelement = Md[Row * Width + i];
        float Ndelement = Nd[i * Width + Column];
        Pvalue += Mdelement * Ndelement;
    }

    // Write the matrix to device memory; each thread writes one element
    Pd[Row * Width + Column] = Pvalue;
}
I also have this other function that uses shared memory, and it also gives the same error:
Call:
MatrixMultiplicationSharedMemory_Kernel<<<dimGrid, dimBlock, sizeof(float)*TileWidth*TileWidth*2>>>(Md, Nd, Pd, Width);
Kernel code:
//Matrix Multiplication Kernel - Shared Memory Implementation
__global__ void MatrixMultiplicationSharedMemory_Kernel(float* Md, float* Nd, float* Pd, int Width)
{
    int TileWidth = blockDim.x;

    // Initialize shared memory
    extern __shared__ float sharedArrays[];
    float* Mds = (float*) &sharedArrays;
    float* Nds = (float*) &Mds[TileWidth * TileWidth];

    int tx = threadIdx.x;
    int ty = threadIdx.y;

    // Get row and column from block and thread ids
    int Row = (TileWidth * blockIdx.y) + ty;
    int Column = (TileWidth * blockIdx.x) + tx;
    float Pvalue = 0;

    // For each tile, load the elements into shared memory
    for (int i = 0; i < ceil((float)Width / TileWidth); ++i)
    {
        Mds[ty * TileWidth + tx] = Md[Row * Width + (i * TileWidth + tx)];
        Nds[ty * TileWidth + tx] = Nd[(ty + (i * TileWidth)) * Width + Column];
        __syncthreads();
        for (int j = 0; j < TileWidth; ++j)
        {
            Pvalue += Mds[ty * TileWidth + j] * Nds[j * TileWidth + tx];
        }
        __syncthreads();
    }

    // Write the matrix to device memory; each thread writes one element
    Pd[Row * Width + Column] = Pvalue;
}
Controlling the WDDM Timeout
The problem is actually the kernel not the cudaMemcpy(). When you launch the kernel the GPU goes off and does the work asynchronously with the CPU, so it's only when you synchronize with the GPU that you have to wait for the work to finish. cudaMemcpy() involves an implicit synchronization, hence that is where you see the problem.
You could double-check this by calling cudaThreadSynchronize() after the kernel and the problem will appear to be on the cudaThreadSynchronize() instead of the cudaMemcpy().
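Not part of the original answer, but a minimal sketch of how one might separate launch errors from execution errors (the kernel and variable names follow the question's code; cudaDeviceSynchronize is the current name for cudaThreadSynchronize):

#include <cstdio>

// Launch, then synchronize so any execution error (timeout, out-of-bounds
// access, ...) is reported here rather than at the later cudaMemcpy.
MatrixMultiplicationMultiBlock_Kernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);
cudaError_t launchErr = cudaGetLastError();      // errors in the launch configuration
cudaError_t execErr   = cudaDeviceSynchronize(); // errors during kernel execution
if (launchErr != cudaSuccess || execErr != cudaSuccess) {
    printf("launch: %s, execution: %s\n",
           cudaGetErrorString(launchErr), cudaGetErrorString(execErr));
}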
After changing the TDR timeout, did you restart your machine? Unfortunately, Windows needs to be restarted for changes to the TDR settings to take effect. This Microsoft document has a fairly good description of the available settings.
Kernel problems
In this case the problem is not actually the WDDM timeout. There are errors in the kernel which you would need to resolve (for example, you should be able to increment i by more than one per iteration), and checking out the matrixMul sample in the SDK may be useful. Incidentally, I hope this is a learning exercise, since in reality you would be better off (for performance) using CUBLAS to perform matrix multiplication.
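For reference (not part of the original answer), a minimal sketch of the CUBLAS route, using the v2 API available in current CUDA releases and assuming the device pointers Md, Nd, Pd and row-major square matrices as in the question; the operands are passed in swapped order so that the column-major routine yields a row-major result:

#include <cublas_v2.h>

// Computes P = M * N for Width x Width row-major matrices already on the device.
// cuBLAS is column-major, so we request Nd * Md, which is (M * N) transposed in
// column-major terms, i.e. exactly M * N when Pd is read back as row-major.
void matmulWithCublas(const float* Md, const float* Nd, float* Pd, int Width)
{
    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f;
    const float beta  = 0.0f;
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                Width, Width, Width,
                &alpha,
                Nd, Width,   // left operand in column-major terms
                Md, Width,   // right operand in column-major terms
                &beta,
                Pd, Width);

    cublasDestroy(handle);
}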
The most critical problem in the code is that you are using shared memory without actually allocating any. In your kernel you have:
//Initialize shared memory
extern __shared__ float sharedArrays[];
But when you launch the kernel you do not specify how much shared memory to allocate for each block:
MatrixMultiplicationMultiBlock_Kernel<<<dimGrid, dimBlock>>>(Md, Nd, Pd, Width);
The <<<>>> syntax actually takes four arguments, where the third and fourth are optional. The fourth is the stream index, which is used to get overlap between compute and data transfer (and for concurrent kernel execution), but the third argument specifies the amount of shared memory per block. In this case the kernel stores two tiles of TileWidth * TileWidth floats (Mds and Nds) in shared memory, so you would use:
MatrixMultiplicationSharedMemory_Kernel<<<dimGrid, dimBlock, 2 * dimBlock.x * dimBlock.x * sizeof(float)>>>(Md, Nd, Pd, Width);
The main problem
As you mention in your comment, the actual problem was that your matrix width was not a multiple of the block width (and height, since it is square), meaning the threads beyond the edge of the matrix would access memory beyond the end of the array. The code should either handle the non-multiple case or ensure that the width is a multiple of the block size.
I should have suggested this earlier, but it is often useful to run cuda-memcheck to check for memory access violations like this.
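A minimal sketch of that guard (not from the original answer), applied to the simple kernel from the question:

__global__ void MatrixMultiplicationMultiBlock_Kernel(float* Md, float* Nd, float* Pd, int Width)
{
    int Row    = blockDim.y * blockIdx.y + threadIdx.y;
    int Column = blockDim.x * blockIdx.x + threadIdx.x;

    // Threads in the partially filled edge blocks fall outside the matrix
    // when Width is not a multiple of the block size; skip them.
    if (Row >= Width || Column >= Width)
        return;

    float Pvalue = 0;
    for (int i = 0; i < Width; ++i)
        Pvalue += Md[Row * Width + i] * Nd[i * Width + Column];

    Pd[Row * Width + Column] = Pvalue;
}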
You have to change the driver timeout settings; it is a Windows feature that prevents faulty drivers from making the system unresponsive.
Check the Microsoft page describing how to do that.
You should also check the "timeout" flag setting on your GPU Device. If you have the CUDA SDK installed, I believe the "deviceQuery" app will report this property.
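The same flag can also be read programmatically (not from the original answer); a short sketch:

#include <cstdio>
#include <cuda_runtime.h>

// Reports whether the watchdog (run-time limit on kernels) is active
// for device 0, just like deviceQuery does.
int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);
    printf("%s: kernel execution timeout %s\n",
           prop.name, prop.kernelExecTimeoutEnabled ? "enabled" : "disabled");
    return 0;
}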
I am trying to optimize an algorithm I am running on my GPU (AMD HD6850). I counted the number of floating-point operations inside my kernel and measured its execution time. I found it to achieve ~20 SP GFLOPS, but according to the GPU's specs I should achieve ~1500 GFLOPS.
To find the bottleneck I created a very simple kernel:
kernel void test_gflops(const float d, global float* result)
{
    int gid = get_global_id(0);
    float cd;
    for (int i = 0; i < 100000; i++)
    {
        cd = d * i;
    }
    if (cd == -1.0f)
    {
        result[gid] = cd;
    }
}
Running this kernel I get ~5*10^5 work items/sec. I count one floating-point operation per loop iteration (not sure if that's right; what about incrementing i and comparing it to 100000?).
==> 5*10^5 work items/sec * 10^5 FLOP per work item = 50 GFLOPS.
Even if there are 3 or 4 operations going on in the loop, it's much slower than what the card should be able to do. What am I doing wrong?
The global work size is big enough (no speed change for 10k vs 100k work items).
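Not part of the original post, but for measurements like this it can help to time the kernel on the device itself via event profiling; a minimal host-side sketch, assuming the command queue was created with CL_QUEUE_PROFILING_ENABLE:

#include <CL/cl.h>

/* Runs `kernel` over `global_size` work items and returns the device-side
   execution time in seconds, measured with OpenCL event profiling. */
double time_kernel(cl_command_queue queue, cl_kernel kernel, size_t global_size)
{
    cl_event evt;
    cl_ulong start = 0, end = 0;

    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global_size, NULL, 0, NULL, &evt);
    clWaitForEvents(1, &evt);

    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_START, sizeof(start), &start, NULL);
    clGetEventProfilingInfo(evt, CL_PROFILING_COMMAND_END,   sizeof(end),   &end,   NULL);
    clReleaseEvent(evt);

    return (end - start) * 1e-9;  /* profiling timestamps are in nanoseconds */
}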
Here are a couple of tricks:
GPUs don't like loops at all. Use #pragma unroll to unwind them.
Your GPU is good at vector operations. Stick to those; they let you process multiple operands at once.
Use vector loads/stores wherever possible.
Measure the memory bandwidth - I'm almost sure you are bandwidth-limited because of a poor access pattern.
In my opinion, the kernel should look something like this:
typedef union floats {
    float16 vector;
    float array[16];
} floats;

kernel void test_gflops(const float d, global float* result)
{
    int gid = get_global_id(0);
    floats cd;

    cd.vector = vload16(gid, result);
    cd.vector *= d;

    #pragma unroll
    for (int i = 0; i < 16; i++)
    {
        if (cd.array[i] == -1.0f) {
            result[gid] = cd.array[i];
        }
    }
}
Make your NDRange bigger to compensate for the difference between the 16 elements processed here and the 100000 iterations of the original loop.
I'm teaching myself OpenCL by trying to optimize the mpeg4dst reference audio encoder. I achieved a 3x speedup by using vector instructions on CPU but I figured the GPU could probably do better.
I'm focusing on computing auto-correlation vectors in OpenCL as my first area of improvement. The CPU code is:
for (int i = 0; i < NrOfChannels; i++) {
    for (int shift = 0; shift <= PredOrder[ChannelFilter[i]]; shift++)
        vDSP_dotpr(Signal[i] + shift, 1, Signal[i], 1, &out, NrOfChannelBits - shift);
}
NrOfChannels = 6
PredOrder = 129
NrOfChannelBits = 150528.
On my test file, this function takes approximately 188 ms to complete.
Here's my OpenCL method:
kernel void calculateAutocorrelation(size_t offset,
                                     global const float *input,
                                     global float *output,
                                     size_t size) {
    size_t index = get_global_id(0);
    size_t end = size - index;
    float sum = 0.0;

    for (size_t i = 0; i < end; i++)
        sum += input[i + offset] * input[i + offset + index];

    output[index] = sum;
}
This is how it is called:
gcl_memcpy(gpu_signal_in, Signal, sizeof(float) * NrOfChannels * MAXCHBITS);
for (int i = 0; i < NrOfChannels; i++) {
size_t sz = PredOrder[ChannelFilter[i]] + 1;
cl_ndrange range = { 1, { 0, 0, 0 }, { sz, 0, 0}, { 0, 0, 0 } };
calculateAutocorrelation_kernel(&range, i * MAXCHBITS, (cl_float *)gpu_signal_in, (cl_float *)gpu_out, NrOfChannelBits);
gcl_memcpy(out, gpu_out, sizeof(float) * sz);
}
According to Instruments, my OpenCL implementation seems to take about 13ms, with about 54ms of memory copy overhead (gcl_memcpy).
When I use a much larger test file (1 minute of 2-channel music vs. 1 second of 6-channel audio), the measured performance of the OpenCL code stays about the same, but CPU usage falls to about 50% and the whole program takes about 2x longer to run.
I can't find a cause for this in Instruments and I haven't read anything yet that suggests that I should expect very heavy overhead switching in and out of OpenCL.
If I'm reading your kernel code correctly, each work item iterates over all of the data from its location to the end. This isn't going to be efficient. For one (and this is the primary performance concern), the memory accesses won't be coalesced and so won't run at full memory bandwidth. Secondly, because each work item has a different amount of work to do, there will be branch divergence within a work group, which leaves some threads idle waiting for others.
This seems like it has a lot in common with a reduction problem and I'd suggest reading up on "parallel reduction" to get some hints about doing an operation like this in parallel.
To see how memory is being read, work out how 16 work items (say, global_id 0 to 15) will be reading data for each step.
Note that if every work item in a work group accesses the same memory address, there is a "broadcast" optimization the hardware can make. So just reversing the order of your loop could improve things.
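To illustrate the parallel-reduction idea (a sketch only, not the original poster's code, and it omits the per-channel offset handling): one work group per lag can accumulate strided partial sums, which keeps the global reads coalesced, and then reduce them in local memory:

// One work group per lag ("index" in the original kernel). Each work item
// accumulates a strided partial sum, then the group reduces in local memory.
kernel void autocorrelation_reduce(global const float *input,
                                   global float *output,
                                   local float *scratch,
                                   int size)
{
    int lag = get_group_id(0);
    int lid = get_local_id(0);
    int lsz = get_local_size(0);

    float sum = 0.0f;
    for (int i = lid; i < size - lag; i += lsz)   // consecutive work items read consecutive elements
        sum += input[i] * input[i + lag];

    scratch[lid] = sum;
    barrier(CLK_LOCAL_MEM_FENCE);

    // Tree reduction within the work group (assumes the work-group size is a power of two).
    for (int stride = lsz / 2; stride > 0; stride /= 2) {
        if (lid < stride)
            scratch[lid] += scratch[lid + stride];
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    if (lid == 0)
        output[lag] = scratch[0];
}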
I am trying to simulate a recurrent equation of the type
s_i(t+1) = f( Σ_j W_ij * s_j(t) + v_i * input(t) )
in OpenCL, where f(.) is some non-linear function (in the code below it is just a step function with threshold th) and input(t) is some external input. Naturally, I implemented one worker for every s_i. In every time step, every worker calculates the result of the equation above, and this result is then shared with all other workers. Therefore, all workers have to be in the same work group.
My current OpenCL kernel looks like this
__kernel void part1(__global int* s, __global float* W, __global float* Th, __global float* V, __global float* input, int N, int T)
{
    unsigned int i = get_global_id(0);
    float value = 0;
    float v = V[i];
    float th = Th[i];

    for (int t = 0; t < T; t++) {
        value = v * input[t];
        for (int j = 0; j < N; j++) {
            value = value + W[i*N + j] * s[j];
        }
        barrier(CLK_GLOBAL_MEM_FENCE);
        if (value >= th) {
            s[i] = 1;
        } else {
            s[i] = 0;
        }
        barrier(CLK_GLOBAL_MEM_FENCE);
    }
}
Unfortunately, this code is actually three times slower than an equivalent C-implementation. Also, I expected that a change in the number of workers should not make a huge difference (because new workers are sitting on new threads that run in parallel to the others), but actually the processing time increases linearly with the number of workers. The bottleneck seems to be the writing operation after the first barrier. Eliminating this operation (but leaving the barrier in place) cuts down the processing time by a factor of 25 and eliminates the linear dependence.
I am pretty new to OpenCL and I would appreciate any help to speed this code up!
Thanks a lot in advance!
Blue2script
As I already stated in my comment, accessing global memory is slow. Usually the hardware hides the latency by keeping several subgroups of threads resident on the same compute unit. The subgroups I'm referring to are called warps in NVIDIA lingo and wavefronts in AMD's. Usually a work group is composed of several subgroups.
So while one subgroup waits to receive data from global memory, another one that already has all the necessary resources can run. When the running one stalls because it needs to read/write data from/to global memory, another one can start running, and so on.
However, in your case, because of the barriers, all the workers in all the subgroups have to wait until the others have finished their writes to memory before they can continue the computation (the barrier is at the work-group level). Hence the latency hits you right in the face :).
Now, a way to improve your implementation would be to use local memory and this time a barrier at the local memory level (with the flag CLK_LOCAL_MEM_FENCE). The same principle I've just explained still applies but accessing local memory is much faster.
As far as I understand your code (quite likely I didn't get all the subtleties), your s array has N elements, and I guess that you also have N workers. So you create a local array of N elements and you do something like that:
kernel void part1(global int* s, global float* W, global float* Th, global float* V, global float* input, local int* local_s, int N, int T)
{
    unsigned int i = get_global_id(0);
    unsigned int local_i = get_local_id(0);
    float value = 0;
    float v = V[i];
    float th = Th[i];

    // fetch from global to local and sync before computing
    local_s[local_i] = s[i];
    barrier(CLK_LOCAL_MEM_FENCE);

    for (int t = 0; t < T; t++) {
        value = v * input[t];
        for (int j = 0; j < N; j++) {
            value = value + W[i*N + j] * local_s[j];
        }
        barrier(CLK_LOCAL_MEM_FENCE);
        if (value >= th) {
            local_s[local_i] = 1;
        } else {
            local_s[local_i] = 0;
        }
        barrier(CLK_LOCAL_MEM_FENCE);
    }

    // If necessary, write some results back to global memory (maybe the last s computed?)
}
Now I have to warn you:
I might have completely misunderstood your needs :)
I've just edited your code while typing this answer, so there are most probably typos and such.
Even using local memory, having so many barriers might still make the OpenCL version slower than the C one.
Note that I removed the leading __ since they are not necessary and, in my opinion, the code is easier to read without them.
EDIT: Regarding your comment about CLK_LOCAL_MEM_FENCE vs. CLK_GLOBAL_MEM_FENCE: a barrier is always applied at the work-group level, so all workers within a work group have to hit that barrier. The flag given as a parameter refers to memory accesses. With CLK_GLOBAL_MEM_FENCE, every read/write to global memory has to be completed by every worker before any worker can continue with the next statements. CLK_LOCAL_MEM_FENCE is exactly the same, but for local memory.