OpenCL in Xcode/OSX - Can't assign zero in kernel loop

I'm developing an accelerated component in OpenCL, using Xcode 4.5.1 and Grand Central Dispatch, guided by this tutorial.
The full kernel kept failing on the GPU, giving signal SIGABRT. I couldn't make much progress interpreting the error beyond that.
But I broke out aspects of the kernel to test, and I found something very peculiar involving assigning certain values to positions in an array within a loop.
Test scenario: give each thread a fixed range of array indices to initialize.
kernel void zero(size_t num_buckets, size_t positions_per_bucket, global int* array) {
    size_t bucket_index = get_global_id(0);
    if (bucket_index >= num_buckets) return;
    for (size_t i = 0; i < positions_per_bucket; i++)
        array[bucket_index * positions_per_bucket + i] = 0;
}
The above kernel fails. However, when I assign 1 instead of 0, the kernel succeeds (and my host code prints out the array of 1's). Based on a handful of tests on various integer values, I've only had problems with 0 and -1.
I've tried to outsmart the compiler with 1-1, (int) 0, etc, with no success. Passing zero in as a kernel argument worked though.
The assignment to zero does work outside of the context of a for loop:
array[bucket_index * positions_per_bucket] = 0;
The findings above were confirmed on two machines with different configurations. (OSX 10.7 + GeForce, OSX 10.8 + Radeon.) Furthermore, the kernel had no trouble when running on CL_DEVICE_TYPE_CPU -- it's just on the GPU.
Clearly, something ridiculous is happening, and it must be on my end, because "zero" can't be broken. Hopefully it's something simple. Thank you for your help.
Host code:
#include <stdio.h>
#include <OpenCL/OpenCL.h>
#include "zero.cl.h"

int main(int argc, const char* argv[]) {
    dispatch_queue_t queue = gcl_create_dispatch_queue(CL_DEVICE_TYPE_GPU, NULL);

    size_t num_buckets = 64;
    size_t positions_per_bucket = 4;
    cl_int* h_array = malloc(sizeof(cl_int) * num_buckets * positions_per_bucket);
    cl_int* d_array = gcl_malloc(sizeof(cl_int) * num_buckets * positions_per_bucket, NULL, CL_MEM_WRITE_ONLY);

    dispatch_sync(queue, ^{
        cl_ndrange range = { 1, { 0 }, { num_buckets }, { 0 } };
        zero_kernel(&range, num_buckets, positions_per_bucket, d_array);
        gcl_memcpy(h_array, d_array, sizeof(cl_int) * num_buckets * positions_per_bucket);
    });

    for (size_t i = 0; i < num_buckets * positions_per_bucket; i++)
        printf("%d ", h_array[i]);
    printf("\n");
}

Refer to the OpenCL standard, section 6, paragraph 8 "Restrictions", bullet point k (emphasis mine):
6.8 k. Arguments to kernel functions in a program cannot be declared with the built-in scalar types bool, half, size_t, ptrdiff_t, intptr_t, and uintptr_t. [...]
The fact that your compiler even let you build the kernel at all indicates it is somewhat broken.
So you might want to fix that... but if that doesn't fix it, then it looks like a compiler bug, plain and simple (in CLC, that is, the OpenCL C compiler, not your host code). There is no reason this kernel should work with every constant except 0 and -1. Did you try updating your OpenCL driver? What about trying a different operating system (though I suppose this code is OS X only)?
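For illustration, a minimal sketch of the kernel with legal parameter types (uint on the device side; the corresponding host arguments would then be passed as cl_uint rather than size_t). This is only a sketch of the restriction above, not necessarily the complete fix:

kernel void zero(uint num_buckets, uint positions_per_bucket, global int* array) {
    uint bucket_index = get_global_id(0);   // get_global_id returns size_t; narrowing is fine at these sizes
    if (bucket_index >= num_buckets) return;
    for (uint i = 0; i < positions_per_bucket; i++)
        array[bucket_index * positions_per_bucket + i] = 0;
}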

Related

How do I allocate memory and copy 2D arrays between CPU / GPU in CUDA without flattening them?

So I want to allocate 2D arrays and also copy them between the CPU and GPU in CUDA, but I am a total beginner and other online materials are very difficult for me to understand or are incomplete. It is important that I am able to access them as a 2D array in the kernel code as shown below.
Note that height != width for the arrays; that further confuses me (if it's even possible), as I always struggle choosing grid sizes.
I've considered flattening them, but I really want to get it working this way.
This is how far I've got by my own research.
__global__ void myKernel(int *firstArray, int *secondArray, int rows, int columns) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;
    int column = blockIdx.y * blockDim.y + threadIdx.y;
    if (row >= rows || column >= columns)
        return;

    // Do something with the arrays like you would on a CPU, like:
    firstArray[row][column] = row * 2;
    secondArray[row][column] = row * 3;
}
int main() {
    int rows = 300, columns = 200;
    int h_firstArray[rows][columns], h_secondArray[rows][columns];
    int *d_firstArray[rows][columns], *d_secondArray[rows][columns];

    // populate h_ arrays (Can do this bit myself)

    // Allocate memory on device, no idea how to do for 2D arrays.
    // Do memcopies to GPU, no idea how to do for 2D arrays.

    dim3 block(rows,columns);
    dim3 grid (1,1);
    myKernel<<<grid,block>>>(d_firstArray, d_secondArray, rows, columns);

    // Do memcopies back to host, no idea how to do for 2D arrays.

    cudaFree(d_firstArray);
    cudaFree(d_secondArray);
    return 0;
}
EDIT: I was asked if the array width will be known at compile time in the problems I would try to solve. You can assume it is as I'm interested primarily in this particular situation as of now.
In the general case (array dimensions not known until runtime), handling doubly-subscripted access in CUDA device code requires an array of pointers, just as it does in host code. C and C++ handle each subscript as a pointer dereference, in order to reach the final location in the "2D array".
Double-pointer/doubly-subscripted access in device code in the general case is already covered in the canonical answer linked from the cuda tag info page. There are several drawbacks to this, which are covered in that answer so I won't repeat them here.
However, if the array width is known at compile time (array height can be dynamic - i.e. determined at runtime), then we can leverage the compiler and the language typing mechanisms to allow us to circumvent most of the drawbacks. Your code demonstrates several other incorrect patterns for CUDA and/or C/C++ usage:
Passing an item for doubly-subscripted access to a C or C++ function cannot be done with a simple single pointer type like int *firstarray
Allocating large host arrays via stack-based mechanisms:
int h_firstArray[rows][columns], h_secondArray[rows][columns];
is often problematic in C and C++. These are stack based variables and will often run into stack limits if large enough.
CUDA threadblocks are limited to 1024 threads total. Therefore such a threadblock dimension:
dim3 block(rows,columns);
will not work except for very small sizes of rows and columns (the product must be less than or equal to 1024).
When declaring pointer variables for a device array in CUDA, it is almost never correct to create arrays of pointers:
int *d_firstArray[rows][columns], *d_secondArray[rows][columns];
nor do we allocate space on the host, then "reallocate" those pointers for device usage.
What follows is a worked example with the above items addressed, demonstrating the aforementioned method where the array width is known at compile time:
$ cat t50.cu
#include <stdio.h>

const int array_width = 200;
typedef int my_arr[array_width];

__global__ void myKernel(my_arr *firstArray, my_arr *secondArray, int rows, int columns) {
    int column = blockIdx.x * blockDim.x + threadIdx.x;
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (row >= rows || column >= columns)
        return;

    // Do something with the arrays like you would on a CPU, like:
    firstArray[row][column] = row * 2;
    secondArray[row][column] = row * 3;
}

int main() {
    int rows = 300, columns = array_width;
    my_arr *h_firstArray, *h_secondArray;
    my_arr *d_firstArray, *d_secondArray;
    size_t dsize = rows*columns*sizeof(int);
    h_firstArray = (my_arr *)malloc(dsize);
    h_secondArray = (my_arr *)malloc(dsize);

    // populate h_ arrays
    memset(h_firstArray, 0, dsize);
    memset(h_secondArray, 0, dsize);

    // Allocate memory on device
    cudaMalloc(&d_firstArray, dsize);
    cudaMalloc(&d_secondArray, dsize);

    // Do memcopies to GPU
    cudaMemcpy(d_firstArray, h_firstArray, dsize, cudaMemcpyHostToDevice);
    cudaMemcpy(d_secondArray, h_secondArray, dsize, cudaMemcpyHostToDevice);

    dim3 block(32,32);
    dim3 grid((columns+block.x-1)/block.x,(rows+block.y-1)/block.y);
    myKernel<<<grid,block>>>(d_firstArray, d_secondArray, rows, columns);

    // Do memcopies back to host
    cudaMemcpy(h_firstArray, d_firstArray, dsize, cudaMemcpyDeviceToHost);
    cudaMemcpy(h_secondArray, d_secondArray, dsize, cudaMemcpyDeviceToHost);

    // validate
    if (cudaGetLastError() != cudaSuccess) {printf("cuda error\n"); return -1;}
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < columns; j++){
            if (h_firstArray[i][j] != i*2) {printf("first mismatch at %d,%d, was: %d, should be: %d\n", i,j,h_firstArray[i][j], i*2); return -1;}
            if (h_secondArray[i][j] != i*3) {printf("second mismatch at %d,%d, was: %d, should be: %d\n", i,j,h_secondArray[i][j], i*3); return -1;}}
    printf("success!\n");
    cudaFree(d_firstArray);
    cudaFree(d_secondArray);
    return 0;
}
$ nvcc -arch=sm_61 -o t50 t50.cu
$ cuda-memcheck ./t50
========= CUDA-MEMCHECK
success!
========= ERROR SUMMARY: 0 errors
$
I've reversed the sense of your kernel indexing (x,y) to help with coalesced global memory access. We see that with this kind of type creation, we can leverage the compiler and the language features to end up with code that allows doubly-subscripted access in both host and device code, while otherwise allowing CUDA operations (e.g. cudaMemcpy) as if we were dealing with single-pointer (e.g. "flattened") arrays.

Optimize CUDA kernel execution time

I'm a student learning CUDA, and I would like to optimize the execution time of my kernel function. So I wrote a short program that computes the difference between two images, and I compared the execution time of a classic CPU implementation in C against a GPU implementation in CUDA C.
Here you can find the code I'm talking about:
int *imgresult_data = (int *) malloc(width*height*sizeof(int));
int size = width*height;

switch(computing_type)
{
    case GPU:
        HANDLE_ERROR(cudaMalloc((void**)&dev_data1, size*sizeof(unsigned char)));
        HANDLE_ERROR(cudaMalloc((void**)&dev_data2, size*sizeof(unsigned char)));
        HANDLE_ERROR(cudaMalloc((void**)&dev_data_res, size*sizeof(int)));
        HANDLE_ERROR(cudaMemcpy(dev_data1, img1_data, size*sizeof(unsigned char), cudaMemcpyHostToDevice));
        HANDLE_ERROR(cudaMemcpy(dev_data2, img2_data, size*sizeof(unsigned char), cudaMemcpyHostToDevice));
        HANDLE_ERROR(cudaMemcpy(dev_data_res, imgresult_data, size*sizeof(int), cudaMemcpyHostToDevice));

        float time;
        cudaEvent_t start, stop;
        HANDLE_ERROR( cudaEventCreate(&start) );
        HANDLE_ERROR( cudaEventCreate(&stop) );
        HANDLE_ERROR( cudaEventRecord(start, 0) );

        for(int m = 0; m < nb_loops ; m++)
        {
            diff<<<height, width>>>(dev_data1, dev_data2, dev_data_res);
        }

        HANDLE_ERROR( cudaEventRecord(stop, 0) );
        HANDLE_ERROR( cudaEventSynchronize(stop) );
        HANDLE_ERROR( cudaEventElapsedTime(&time, start, stop) );
        HANDLE_ERROR(cudaMemcpy(imgresult_data, dev_data_res, size*sizeof(int), cudaMemcpyDeviceToHost));

        printf("Time to generate: %4.4f ms \n", time/nb_loops);
        break;

    case CPU:
        clock_t begin = clock(), diff;
        for (int z=0; z<nb_loops; z++)
        {
            // Apply the difference between 2 images
            for (int i = 0; i < height; i++)
            {
                tmp = i*imgresult_pitch;
                for (int j = 0; j < width; j++)
                {
                    imgresult_data[j + tmp] = (int) img2_data[j + tmp] - (int) img1_data[j + tmp];
                }
            }
        }
        diff = clock() - begin;
        float msec = diff*1000/CLOCKS_PER_SEC;
        msec = msec/nb_loops;
        printf("Time taken %4.4f milliseconds", msec);
        break;
}
And here is my kernel function:
__global__ void diff(unsigned char *data1, unsigned char *data2, int *data_res)
{
    int row = blockIdx.x;
    int col = threadIdx.x;
    int v = col + row*blockDim.x;

    if (row < MAX_H && col < MAX_W)
    {
        data_res[v] = (int) data2[v] - (int) data1[v];
    }
}
I obtained these execution times for each one:
CPU: 1.3210 ms
GPU: 0.3229 ms
I wonder why the GPU result is not as low as it should be. I am a beginner in CUDA, so please be understanding if I've made some classic mistakes.
EDIT1:
Thank you for your feedback. I tried removing the 'if' condition from the kernel, but it didn't change my program's execution time much.
However, after installing the CUDA profiler, it told me that my threads weren't running concurrently. I don't understand why I get this kind of message, but it seems plausible, because my application is only 5 or 6 times faster on the GPU than on the CPU. This ratio should be greater, because each thread is supposed to process one pixel concurrently with all the others. If you have an idea of what I am doing wrong, it would be helpful...
Flow.
Here are a few things you could do which may improve the performance of your diff kernel:
1. Let each thread do more work
In your kernel, each thread handles just a single element; but having a thread do anything at all already carries a bunch of overhead, at the block and the thread level, including obtaining the parameters, checking the condition and doing address arithmetic. Now, you could say "Oh, but the reads and writes take much more time than that; this overhead is negligible" - but you would be ignoring the fact that the latency of these reads and writes is hidden by the presence of many other warps which may be scheduled to do their work.
So, let each thread process more than a single element. Say, 4, as each thread can easily read 4 bytes at once into a register. Or even 8 or 16; experiment with it. Of course you'll need to adjust your grid and block parameters accordingly.
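As a rough sketch of that idea (the kernel name, the ELEMS constant and the extra size parameter are illustrative additions; it assumes the total pixel count is a multiple of ELEMS, otherwise the tail needs separate handling, and the launch configuration is shrunk by a factor of ELEMS):

#define ELEMS 4   // number of pixels handled by each thread

__global__ void diff_multi(const unsigned char *data1, const unsigned char *data2,
                           int *data_res, int size)
{
    int base = (blockIdx.x * blockDim.x + threadIdx.x) * ELEMS;
    if (base + ELEMS <= size)   // the whole group of ELEMS pixels is in range
    {
        for (int k = 0; k < ELEMS; k++)
            data_res[base + k] = (int) data2[base + k] - (int) data1[base + k];
    }
}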
2. "Restrict" your pointers
__restrict__ is not part of standard C++, but it is supported in CUDA. It tells the compiler that accesses through different pointers passed to the function never overlap. See:
What does the restrict keyword mean in C++?
Realistic usage of the C99 'restrict' keyword?
Using it allows the CUDA compiler to apply additional optimizations, e.g. loading or storing data via the non-coherent (read-only) cache. This applies to your kernel, although I haven't measured the effect.
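Applied to the question's kernel, the signature might look as follows (a sketch only; the flattened index and total-size bound replace the original row/column test, and const together with __restrict__ lets the compiler route the input loads through the read-only cache):

__global__ void diff(const unsigned char * __restrict__ data1,
                     const unsigned char * __restrict__ data2,
                     int * __restrict__ data_res)
{
    int v = threadIdx.x + blockIdx.x * blockDim.x;
    if (v < MAX_H * MAX_W)   // the __restrict__ qualifiers promise the buffers never alias
        data_res[v] = (int) data2[v] - (int) data1[v];
}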
3. Consider using a "SIMD" instruction
CUDA offers this intrinsic:
__device__ unsigned int __vsubss4 ( unsigned int a, unsigned int b )
which computes, byte by byte, the difference a - b with signed saturation. If you can "live" with that result, rather than expecting a larger int variable, it could save you some work - and it goes very well with increasing the number of elements per thread. In fact, it might let you increase that number even further to get to the optimum.
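A sketch of how that might look (assumptions: the buffers are 4-byte aligned, the pixel count is a multiple of 4, and the result is now stored as packed saturated signed bytes rather than one int per pixel, so the output buffer and its host-side interpretation change accordingly):

__global__ void diff_simd(const unsigned int * __restrict__ data1,   // 4 packed u8 pixels per word
                          const unsigned int * __restrict__ data2,
                          unsigned int * __restrict__ data_res,      // 4 packed saturated s8 results
                          int num_words)                             // pixel count / 4
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < num_words)
        data_res[i] = __vsubss4(data2[i], data1[i]);   // per-byte data2 - data1, clamped to [-128, 127]
}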
I don't think you are measuring times correctly; memory copying is a time-consuming step on the GPU that you should take into account when measuring your time.
I see some details that you can test:
I suppose you are using MAX_H and MAX_W as constants; you may consider putting them in device constant memory with cudaMemcpyToSymbol() (see the sketch after this list).
Remember to sync your threads using __syncthreads(), so you don't get issues between each loop iteration.
CUDA works with warps of 32 threads, so block sizes work better as multiples of 32, and no larger than 512 threads per block unless your hardware supports more. Here is an example using 128 threads per block: <<<(cols*rows+127)/128,128>>>.
Remember as well to free the memory you allocated on the GPU and to destroy the timing events you created.
In your kernel function you can have a single variable int v = threadIdx.x + blockIdx.x * blockDim.x .
Have you checked, besides the execution time, that your result is correct? I think you should use cudaMallocPitch() and cudaMemcpy2D() when working with 2D arrays, because of padding.
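A sketch of the constant-memory suggestion mentioned above (the symbol and function names are made up for illustration):

__constant__ int c_max_h;   // image height in device constant memory
__constant__ int c_max_w;   // image width

void upload_dims(int max_h, int max_w)
{
    // Copy the host values into the __constant__ symbols once, before kernel launches;
    // the kernel then tests against c_max_h / c_max_w instead of the macros.
    cudaMemcpyToSymbol(c_max_h, &max_h, sizeof(int));
    cudaMemcpyToSymbol(c_max_w, &max_w, sizeof(int));
}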
Probably there are other issues with the code, but here's what I see. The following lines in __global__ void diff are considered not optimal:
if (row < MAX_H && col < MAX_W)
{
    data_res[v] = (int) data2[v] - (int) data1[v];
}
Conditionals inside a kernel result in warp divergence. It means that the if and else parts inside a warp are executed in sequence, not in parallel. Also, as you might have realized, the if evaluates to false only at the borders. To avoid the divergence and the needless computation, split your image in two parts:
Central part where row < MAX_H && col < MAX_W is always true. Create an additional kernel for this area. if is unnecessary here.
Border areas that will use your diff kernel.
Obviously you'll have to modify the code that calls the kernels (a sketch of a central-region kernel follows below).
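As a sketch (illustration only; the grid for this kernel is chosen so that it covers only the interior region, and the borders are still handled by the original diff kernel):

// Central-part kernel: the launch grid guarantees row < MAX_H and col < MAX_W,
// so no bounds check (and therefore no divergence) is needed.
__global__ void diff_center(const unsigned char *data1, const unsigned char *data2,
                            int *data_res)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    int v = col + row * MAX_W;
    data_res[v] = (int) data2[v] - (int) data1[v];
}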
And on a separate note:
The GPU has a throughput-oriented architecture, not a latency-oriented one like the CPU. That means the CPU may be faster than CUDA when it comes to processing small amounts of data. Have you tried using larger data sets?
The CUDA profiler is a very handy tool that will tell you where your code is not optimal.

Generate random numbers within a function with cuRAND without preallocation

Is it possible to generate random numbers within a device function without preallocating all the states? I would like to generate and use them in "real time". I need them for Monte Carlo simulations: which generators are the most suitable for this purpose? The numbers generated below are single precision; is it possible to have them in double precision?
#include <iostream>
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <curand_kernel.h>

__global__ void cudaRand(float *d_out, unsigned long seed)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    curandState state;
    curand_init(seed, i, 0, &state);
    d_out[i] = curand_uniform(&state);
}

int main(int argc, char** argv)
{
    size_t N = 1 << 4;
    float *v = new float[N];
    float *d_out;
    cudaMalloc((void**)&d_out, N * sizeof(float));

    // generate random numbers
    cudaRand<<<1, N>>>(d_out, time(NULL));

    cudaMemcpy(v, d_out, N * sizeof(float), cudaMemcpyDeviceToHost);
    for (size_t i = 0; i < N; i++)
    {
        printf("out: %f\n", v[i]);
    }

    cudaFree(d_out);
    delete[] v;
    return 0;
}
UPDATE
#include <iostream>
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include <curand_kernel.h>
#include <ctime>

__global__ void cudaRand(double *d_out)
{
    int i = blockDim.x * blockIdx.x + threadIdx.x;
    curandState state;
    curand_init((unsigned long long)clock() + i, 0, 0, &state);
    d_out[i] = curand_uniform_double(&state);
}

int main(int argc, char** argv)
{
    size_t N = 1 << 4;
    double *h_v = new double[N];
    double *d_out;
    cudaMalloc((void**)&d_out, N * sizeof(double));

    // generate random numbers
    cudaRand<<<1, N>>>(d_out);

    cudaMemcpy(h_v, d_out, N * sizeof(double), cudaMemcpyDeviceToHost);
    for (size_t i = 0; i < N; i++)
        printf("out: %f\n", h_v[i]);

    cudaFree(d_out);
    delete[] h_v;
    return 0;
}
This is how I dealt with a similar situation in the past, within a __device__/__global__ function:
int tId = threadIdx.x + (blockIdx.x * blockDim.x);
curandState state;
curand_init((unsigned long long)clock() + tId, 0, 0, &state);
double rand1 = curand_uniform_double(&state);
double rand2 = curand_uniform_double(&state);
So just use curand_uniform_double for generating random doubles. Also, I believe you don't want the same seed for all of the threads; that's what I am trying to achieve by using clock() + tId as the seed. This way the odds of getting the same rand1/rand2 in any two threads are close to nil.
EDIT:
However, based on the comments below, the proposed approach may lead to biased results:
JackOLantern pointed me to this part of curand documentation:
Sequences generated with different seeds usually do not have statistically correlated values, but some choices of seeds may give statistically correlated sequences.
Also, there is a devtalk thread devoted to how to improve the performance of curand_init, in which the proposed way to speed up curand initialization is:
One thing you can do is use different seeds for each thread and a fixed subsequence of 0 and offset of 0.
But the same poster is later stating:
The downside is that you lose some of the nice mathematical properties between threads. It is possible that there is a bad interaction between the hash function that initializes the generator state from the seed and the periodicity of the generators. If that happens, you might get two threads with highly correlated outputs for some seeds. I don't know of any problems like this, and even if they do exist they will most likely be rare.
So it is basically up to you whether you want better performance (as I did) or 100% unbiased results. If the latter is what you desire, then the solution proposed by JackOLantern is the correct one, i.e. initialize curand as:
curand_init((unsigned long long)clock(), tId, 0, &state)
Using non-zero values for the offset and subsequence parameters, however, decreases performance. For more info on these parameters you may review this SO thread and also the curand documentation.
I see that JackOLantern stated in a comment that:
I would say it is not recommandable to call curand_init and curand_uniform_double from within the same kernel from two reasons ........ Second, curand_init initializes the pseudorandom number generator and sets all of its parameters, so I'm afraid your approach will be somewhat slow.
I dealt with this in my thesis over several pages; I tried various approaches to get different random numbers in each thread, and creating a curandState in each thread turned out to be the most viable solution for me. I needed to generate ~10 random numbers in each thread, and among others I tried:
developing my own simple random number generator (a Linear Congruential Generator) whose initialization was basically free; however, the performance suffered greatly when generating numbers, so in the end having a curandState in each thread turned out to be superior,
pre-allocating curandStates and reusing them - this was memory heavy, and when I decreased the number of preallocated states I had to use non-zero values for the offset/subsequence parameters of curand_init in order to get rid of bias, which decreased performance when generating numbers.
So after making a thorough analysis I decided to indeed call curand_init and curand_uniform_double in each thread. The only problem was the number of registers these states occupy, so I had to be careful with block sizes not to exceed the maximum number of registers available per block.
That's what I have to say about the provided solution, which I was finally able to test, and it is working just fine on my machine/GPU. I ran the code from the UPDATE section of the question above and 16 different random numbers were displayed in the console correctly. Therefore I advise you to properly perform error checking after executing the kernel to see what went wrong inside. This topic is very well covered in this SO thread.
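For reference, a minimal error-checking sketch after a kernel launch (standard CUDA runtime calls, shown here only to illustrate the advice above):

cudaRand<<<1, N>>>(d_out);
cudaError_t launch_err = cudaGetLastError();        // errors from the launch configuration itself
cudaError_t sync_err   = cudaDeviceSynchronize();   // errors raised while the kernel executed
if (launch_err != cudaSuccess || sync_err != cudaSuccess)
    printf("CUDA error: %s\n",
           cudaGetErrorString(launch_err != cudaSuccess ? launch_err : sync_err));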

Compiling SSE intrinsics in GCC gives an error

My SSE code works completely fine on Windows, but when I build it on Linux I am facing many issues. One of them is this:
It's just a sample illustration of my code:
int main(int ref, int ref_two)
{
    __m128i a, b;
    a.m128i_u8[0] = ref;
    b.m128i_u8[0] = ref_two;
    .
    .
    .
    .....
}
Error 1:
error : request for member 'm128i_u8' in something not a structure or union
This thread suggests using the appropriate _mm_set_XXX intrinsics instead of the above method, since direct element access only works with the Microsoft compiler:
SSE intrinsics compiling MSDN code with GCC error?
I tried the method mentioned in that thread and replaced the set instruction in my program, but it seriously affects the performance of my application.
My code is massive and would need to be changed in 2000 places, so I am looking for a better alternative that doesn't affect the performance of my app.
Recently I got this link from Intel, which says to use the -fms-dialect option to port code from Windows to Linux.
http://software.intel.com/sites/products/documentation/doclib/iss/2013/compiler/cpp-lin/GUID-7A69898B-BDBB-4AA9-9820-E4A590945903.htm
Has anybody tried the above method? Has anybody found the solution to porting large code to Linux?
@Paul, here is my code. I placed a timer to measure the time taken by both methods, and the results were shocking.
Code 1: 115 ms (Microsoft method, accessing elements directly)
Code 2: 151 ms (using the set instruction)
It cost me 36 ms when I used set in my code.
NOTE: That is the cost of replacing a single instruction of mine; imagine the performance degradation I am going to get if I replace it in 2000 places in my program.
That's the reason I am looking for a better alternative to the set instruction.
Code 1:
__m128i array;
unsigned char* temp_src;
unsigned char* temp_dst;

for (i=0; i< 20; i++)
{
    for (j=0; j< 1600; j+= 16)
    {
        Timerstart(&x);
        array = _mm_loadu_si128 ((__m128i *)(src));
        array.m128i_u8[0] = 36;
        y += Timerstop(x);
        _mm_store_si128( (__m128i *)temp_dst, array);
    }
}
Code 2:
__m128i array;
unsigned char* temp_src;
unsigned char* temp_dst;

for (i=0; i< 20; i++)
{
    for (j=0; j< 1600; j+= 16)
    {
        Timerstart(&x);
        array = _mm_set_epi8(*(src+15),*(src+14),*(src+13),*(src+12),
                             *(src+11),*(src+10),*(src+9), *(src+8),
                             *(src+7), *(src+6), *(src+5), *(src+4),
                             *(src+3), *(src+2), *(src+1), 36 );
        y += Timerstop(x);
        _mm_store_si128( (__m128i *)temp_dst, array);
    }
}
You're trying to use a non-portable Microsoft-ism. Just stick to the more portable intrinsics, e.g. _mm_set_epi8:
__m128i a = _mm_set_epi8(0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ref);
This will work on all platforms and compilers.
If you're seeing performance issues then it's probably because you're doing something inefficient inside a loop - without seeing the actual code though it's not possible to make any specific suggestions on making the code more efficient.
EDIT
Often there are much more efficient ways of loading a vector with a combination of values such as in your example, e.g.:
#include "smmintrin.h" // SSE4.1
for (...)
{
for (...)
{
__m128i v = _mm_loadu_si128(0, (__m128i *)src); // load vector from src..src+15
v = _mm_insert_epi8(v, 0, 36); // replace element 0 with constant `36`
_mm_storeu_si128((__m128i *)dst, v); // store vector at dst..dst+15
}
}
This translates to just 3 instructions. (Note: if you can't assume SSE4.1 minimum then the _mm_insert_epi8 can be replaced with two bitwise intrinsics - this will still be much more efficient than using _mm_set_epi8).
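A possible sketch of that SSE2-only fallback (illustration only: it clears byte 0 with a mask and then ORs in the constant; _mm_cvtsi32_si128 places the mask and the value in the low lane):

#include "emmintrin.h" // SSE2

__m128i v    = _mm_loadu_si128((__m128i *)src);
__m128i mask = _mm_cvtsi32_si128(0xFF);         // 0x...00FF: selects byte 0 only
v = _mm_andnot_si128(mask, v);                  // zero out byte 0 of v
v = _mm_or_si128(v, _mm_cvtsi32_si128(36));     // insert the constant 36 into byte 0
_mm_storeu_si128((__m128i *)dst, v);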

glext visual studio cuda

I am currently in a parallel computing class using a book called CUDA by Example. In Chapter 4 of this book I am using some .h files that contain includes for "GL/glut.h" and "GL/glext.h". I found steps for installing GLUT online and followed those; I think that worked, but I am not sure. I then tried to find directions for glext, but I cannot seem to find as much on this. I did find one .h file and tried to use it by placing it in the GL folder as well. This does not seem to work, because when compiling I received errors similar to this:
Error 1 error : calling a host function("cuComplex::cuComplex") from a __device__/__global__ function("julia") is not allowed C:\Users\Laptop\Documents\Visual Studio 2010\Projects\Lab1\Lab1\lab1.cu 29 1 Lab1
I think this is because I need more for glext.h, such as .dll files and things similar to the GLUT setup, but I am not sure. Any help with this would be appreciated. Thank you.
EDIT: this is the code that I am using, and I have not changed it from what I see in the book, except for the top two include statements; the .h files are from Google Code. Thank you for any help.
#include "cuda_runtime.h"
#include "device_launch_parameters.h"
#include "book.h"
#include "cpu_bitmap.h"
#define DIM 1000
struct cuComplex {
float r;
float i;
cuComplex( float a, float b) : r(a), i(b) {}
__device__ float magnitude2(void) {
return r*r + i*i;
}
__device__ cuComplex operator* (const cuComplex& a) {
return cuComplex(r*a.r - i*a.i, i*a.r + r*a.i);
}
__device__ cuComplex operator+ (const cuComplex& a) {
return cuComplex(r+a.r, i+a.i);
}
};
__device__ int julia( int x, int y) {
const float scale = 1.5;
float jx = scale * (float)(DIM/2 -x)/(DIM/2);
float jy = scale * (float)(DIM/2 - y)/(DIM/2);
cuComplex c(-0.8, .156);
cuComplex a(jx, jy);
int i = 0;
for(i=0;i<200;i++) {
a = a * a + c;
if(a.magnitude2() > 1000)
return 0;
}
return 1;
}
__global__ void kernel(unsigned char *ptr ) {
//map from threadIdx/BlockIdx to pixel position
int x = blockIdx.x;
int y = blockIdx.y;
int offset = x + y * gridDim.x;
//now claculate the value at that position
int juliaValue = julia(x,y);
ptr[offset*4 + 0] = 255 * juliaValue;
ptr[offset*4 + 1] = 0;
ptr[offset*4 + 2] = 0;
ptr[offset*4 + 3] = 255;
}
int main( void ) {
CPUBitmap bitmap(DIM, DIM);
unsigned char *dev_bitmap;
HANDLE_ERROR(cudaMalloc((void**)&dev_bitmap, bitmap.image_size()));
dim3 grid(DIM,DIM);
kernel<<<grid,1>>>( dev_bitmap );
HANDLE_ERROR( cudaMemcpy( bitmap.get_ptr(), dev_bitmap, bitmap.image_size(), cudaMemcpyDeviceToHost));
bitmap.display_and_exit();
HANDLE_ERROR( cudaFree( dev_bitmap ));
}
try adding the following.
Original code:
cuComplex( float a, float b) : r(a), i(b) {}
Modified:
__host__ __device__ cuComplex( float a, float b ) : r(a), i(b) {}
It fixed the issue for me. I also didn't need the two include files you added, but you may depending on your build process.
A CUDA program consists of two types of code: host code and device code. Host code runs on the host CPU and cannot run on the GPU, and device code runs on the GPU and cannot run on the CPU. If you don't decorate your program in any way, then it will be all host code. But once you start adding CUDA sections delineated by keywords like __global__ or __device__, then your program will contain some device code.
The compiler error you received indicated that a function running on the device was attempting to use code compiled for the CPU. This is a no-no and the compiler will not allow it. This example is unusual since at some point in time (when the book was written) it presumably did not generate this error, and furthermore the code in the cuComplex struct appears to be decorated with the __device__ keyword. However, at the outermost level of the struct, at the line of code I modified, there is no __device__ keyword. When I add the __host__ __device__ keywords, this tells the compiler "for this logical section, create both a device-compiled version and a host-compiled version of the code". This explicitly tells the compiler you want to be able to use this section of code on the device. With that addition, we have steered the compiler correctly and it no longer gives the complaint.
Apparently something has changed about the level of decoration that the compiler needs to generate device code in this case. Presumably, with older compilers, the __device__ keywords inside the struct were enough to let the compiler know that it had to generate device versions of the operators callable by the cuComplex type.
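For completeness, a sketch of the struct with the constructor decorated as described (only the constructor line differs from the book's code):

struct cuComplex {
    float r;
    float i;
    // Decorated so both host and device code can construct a cuComplex:
    __host__ __device__ cuComplex( float a, float b ) : r(a), i(b) {}
    __device__ float magnitude2(void) { return r*r + i*i; }
    __device__ cuComplex operator*(const cuComplex& a) {
        return cuComplex(r*a.r - i*a.i, i*a.r + r*a.i);
    }
    __device__ cuComplex operator+(const cuComplex& a) {
        return cuComplex(r+a.r, i+a.i);
    }
};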
