Intel gather instruction - intrinsics

I am a little confused about how the Intel gather intrinsics work.
I have the following simple code. One version is meant to set y[j] = y[j+1] = x[j] for each even j (e.g. y[0] = y[1] = x[0]); the other is meant to set y[j] = x[j] and y[j+1] = x[j+2].
I just print out some values at random to check correctness. I find that both y[10] and y[11] equal 2.46 when "zeros" is used. However, when I use "stride", I get a garbage number for y[11], while y[10] is still 2.46. Any idea what's wrong?
#include <stdio.h>
#include <xmmintrin.h>
#include <immintrin.h>
void dummy(double *x, double *y) {
printf("%lf, %lf\n", y[10], y[11]);
return;
}
int main() {
double x[20004];
double y[20004];
__m128i zeros = _mm_set_epi64x(0, 0);
__m128i stride = _mm_set_epi64x(2, 0);
for (int i = 0; i < 20004; ++i) {
x[i] = i * 0.246;
}
for (int j = 0; j <= 10000; j+=2) {
#ifdef ZERO
__m128d gather = _mm_i64gather_pd(&x[j], zeros, 1);
#else
__m128d gather = _mm_i64gather_pd(&x[j], stride, 1);
#endif
_mm_store_pd(&y[j], gather);
}
dummy(x, y);
}
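Note that the last argument of _mm_i64gather_pd is a scale in bytes: each lane is loaded from base_addr + vindex[k]*scale. With a scale of 1, the index 2 therefore points 2 bytes past &x[j], not 2 doubles, which is why y[11] comes out as garbage while the all-zero index vector still reads x[j] correctly. A minimal sketch of the stride variant with the element size as the scale:
__m128i stride = _mm_set_epi64x(2, 0);                 // indices {0, 2}, in elements
__m128d gather = _mm_i64gather_pd(&x[j], stride, 8);   // scale = sizeof(double): loads x[j] and x[j+2]
_mm_storeu_pd(&y[j], gather);                          // unaligned store, in case y is only 8-byte aligned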

Related

Why is my code printing symbols instead of letters?

I am supposed to write a program with three files (mysource.c, myMain.c, and mysource.h) that creates a randomly generated string of characters. The length of the string is decided by the user. After the string is generated, the program bumps every letter in the string to the next letter of the alphabet to create a new, offset string. I have most of the code sorted out, but my output is printing "╠╠╠╠". It prints the correct number of characters, but only those symbols. What do I need to do so that actual letters are printed rather than these symbols?
Here is my header file:
void generateChars(char *myarr, int len);
void offsetChars(char *myarr, int len);
void printChars(char *myarr, int len);
Here is my source code:
#include <stdio.h>
#include <stdlib.h>
#include "mysource.h"
void generateChars(char* myarr, int len)
{
int i = 0;
char letters[26] ={'a','b','c','d','e','f','g','h','i','j','k','l','m','n','o'
,'p','q','r','s','t','u','v','w','x','y','z' };
for (i = 0; i < len; i++);
{
myarr[i] = letters[rand() % 26];
}
}
//end generate function
void offsetChars(char *myarr, int len)
{
char i;
int j;
for (j = 0; j < len; j++)
{
for (i = 'a'; i <= 'z'; i++)
{
if (myarr[j] == i)
{
myarr[j] = i + 1;
break;
}
if (myarr[j] == 'z')
{
myarr[j] = 'a';
break;
}
}
}
}
//end offset function
void printChars(char *myarr, int len)
{
int i;
for (i = 0; i < len; i++)
{
printf("%c",myarr[i]);
}
}//end of print function
Here is my main code:
#include <stdio.h>
#include <stdlib.h>
#include "mysource.h"
int main()
{
int n;
printf("How many random characters do you want to
generate?: ");
scanf_s("%i", &n);
char myarr[1024];
printf("\nOriginal Combo:\n");
generateChars(&myarr, n);
printChars(&myarr, n);
printf("\nOffset Combo:\n");
offsetChars(&myarr, n);
printChars(&myarr, n);
return 0;
}
Here is the output I get (it was posted as a picture): a row of "╠" characters.
Yes, there are two source files; the objective is to make this assignment work with both of them. Any help is appreciated!
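The most likely culprit is the stray semicolon after the for statement in generateChars: for (i = 0; i < len; i++); runs an empty loop, and the block below it then executes exactly once with i == len, so myarr[0..len-1] is never filled. In an MSVC debug build the uninitialized stack bytes are 0xCC, which the console renders as "╠". A minimal sketch of the corrected loop (everything else unchanged):
void generateChars(char *myarr, int len)
{
    char letters[26] = { 'a','b','c','d','e','f','g','h','i','j','k','l','m',
                         'n','o','p','q','r','s','t','u','v','w','x','y','z' };
    for (int i = 0; i < len; i++)   /* no ';' after the loop header */
    {
        myarr[i] = letters[rand() % 26];
    }
}
Passing myarr rather than &myarr in main also matches the char* parameters, although that mismatch is not what produces the symbols.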

Eigen JacobiSVD cuda compile error

I've got an error regarding calling JacobiSVD in my CUDA function.
This is the part of the code that is causing the error.
Eigen::JacobiSVD<Eigen::Matrix3d> svd( cov_e, Eigen::ComputeThinU | Eigen::ComputeThinV);
And this is the error message.
CUDA_voxel_building.cu(43): error: calling a __host__
function("Eigen::JacobiSVD , (int)2> ::JacobiSVD") from a __global__
function("kernel") is not allowed
I've used the following command to compile it.
nvcc -std=c++11 -D_MWAITXINTRIN_H_INCLUDED -D__STRICT_ANSI__ -ptx CUDA_voxel_building.cu
I'm using CUDA 8.0 with Eigen 3 on Ubuntu 16.04.
It seems like other functions, such as eigenvalue decomposition, also give the same error.
Anyone knows a solution? I'm enclosing my code below.
//nvcc -ptx CUDA_voxel_building.cu
#include </usr/include/eigen3/Eigen/Core>
#include </usr/include/eigen3/Eigen/SVD>
/*
#include </usr/include/eigen3/Eigen/Sparse>
#include </usr/include/eigen3/Eigen/Dense>
#include </usr/include/eigen3/Eigen/Eigenvalues>
*/
__global__ void kernel(double *p, double *breaks,double *ind, double *mu, double *cov, double *e,double *v, int *n, char *isgood, int minpts, int maxgpu){
bool debuginfo = false;
int idx = threadIdx.x + blockIdx.x * blockDim.x;
if(debuginfo)printf("Thread %d got pointer\n",idx);
if( idx < maxgpu){
int s_ind = breaks[idx];
int e_ind = breaks[idx+1];
int diff = e_ind-s_ind;
if(diff >minpts){
int cnt = 0;
Eigen::MatrixXd local_p(3,diff) ;
for(int k = s_ind;k<e_ind;k++){
int temp_ind=ind[k];
//Eigen::Matrix<double, 3, diff> local_p;
local_p(1,cnt) = p[temp_ind*3];
local_p(2,cnt) = p[temp_ind*3+1];
local_p(3,cnt) = p[temp_ind*3+2];
cnt++;
}
Eigen::Matrix3d centered = local_p.rowwise() - local_p.colwise().mean();
Eigen::Matrix3d cov_e = (centered.adjoint() * centered) / double(local_p.rows() - 1);
Eigen::JacobiSVD<Eigen::Matrix3d> svd( cov_e, Eigen::ComputeThinU | Eigen::ComputeThinV);
/* Eigen::Matrix3d Cp = svd.matrixU() * svd.singularValues().asDiagonal() * svd.matrixV().transpose();
mu[idx]=p[ind[s_ind]*3];
mu[idx+1]=p[ind[s_ind+1]*3];
mu[idx+2]=p[ind[s_ind+2]*3];
e[idx]=svd.singularValues()(0);
e[idx+1]=svd.singularValues()(1);
e[idx+2]=svd.singularValues()(2);
n[idx] = diff;
isgood[idx] = 1;
for(int x = 0; x < 3; x++)
{
for(int y = 0; y < 3; y++)
{
v[x+ 3*y +idx*9]=svd.matrixV()(x, y);
cov[x+ 3*y +idx*9]=cov_e(x, y);
//if(debuginfo)printf("%f ",R[x+ 3*y +i*9]);
if(debuginfo)printf("%f ",Rm(x, y));
}
}
*/
} else {
mu[idx]=0;
mu[idx+1]=0;
mu[idx+2]=0;
e[idx]=0;
e[idx+1]=0;
e[idx+2]=0;
n[idx] = 0;
isgood[idx] = 0;
for(int x = 0; x < 3; x++)
{
for(int y = 0; y < 3; y++)
{
v[x+ 3*y +idx*9]=0;
cov[x+ 3*y +idx*9]=0;
}
}
}
}
}
First of all, Ubuntu 16.04 provides Eigen 3.3-beta1, which is not really recommended to be used. I would suggest upgrading to a more recent version. Furthermore, to include Eigen, write (e.g.):
#include <Eigen/Eigenvalues>
and compile with -I /usr/include/eigen3 (if you use the version provided by the OS), or better -I /path/to/local/eigen-version.
Then, as talonmies noted, you can't call host functions from kernels (I'm not sure at the moment why JacobiSVD is not marked as a device function), but in your case it would make much more sense to use Eigen::SelfAdjointEigenSolver anyway. Since the matrix you are decomposing is a fixed-size 3x3, you should actually use the optimized computeDirect method:
Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> eig; // default constructor
eig.computeDirect(cov_e); // works for 2x2 and 3x3 matrices, does not require loops
It seems computeDirect even works with the beta version provided by Ubuntu (I'd still recommend updating).
Some unrelated notes:
The following is wrong, since you should start with index 0:
local_p(1,cnt) = p[temp_ind*3];
local_p(2,cnt) = p[temp_ind*3+1];
local_p(3,cnt) = p[temp_ind*3+2];
Also, you can write this in one line:
local_p.col(cnt) = Eigen::Vector3d::Map(p+temp_ind*3);
The sizes in this line do not match (unless diff==3):
Eigen::Matrix3d centered = local_p.rowwise() - local_p.colwise().mean();
What you probably mean is the following (local_p is actually 3xn, not nx3):
Eigen::Matrix<double, 3, Eigen::Dynamic> centered = local_p.colwise() - local_p.rowwise().mean();
And when computing cov_e you need to .adjoint() the second factor, not the first.
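In code that is roughly (with diff samples, matching the formula just below):
Eigen::Matrix3d cov_e = (centered * centered.adjoint()) / double(diff - 1);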
You can avoid both 'big' matrices local_p and centered by directly accumulating an Eigen::Matrix3d sum2 and an Eigen::Vector3d sum, with sum2 += v*v.adjoint() and sum += v, and then computing:
Eigen::Vector3d mu = sum / diff;
Eigen::Matrix3d cov_e = (sum2 - mu*mu.adjoint()*diff)/(diff-1);
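Put together, the accumulation variant of the inner loop could look roughly like this (a sketch following the suggestions above; the variable names are those of the original kernel):
Eigen::Vector3d sum  = Eigen::Vector3d::Zero();
Eigen::Matrix3d sum2 = Eigen::Matrix3d::Zero();
for (int k = s_ind; k < e_ind; k++) {
    int temp_ind = ind[k];
    Eigen::Vector3d v = Eigen::Vector3d::Map(p + temp_ind * 3);   // one 3-D point
    sum  += v;
    sum2 += v * v.adjoint();
}
Eigen::Vector3d mu    = sum / diff;
Eigen::Matrix3d cov_e = (sum2 - mu * mu.adjoint() * diff) / (diff - 1);
Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> eig;
eig.computeDirect(cov_e);   // closed-form 3x3 eigendecomposition, as suggested above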

Can I speed up type conversion using intrinsics?

I am working on an application which needs to convert data to float.
The data are unsigned char or unsigned short.
I am using both AVX2 and other SIMD intrinsics in this code.
I wrote the conversion like this:
unsigned char -> float :
#ifdef __AVX2__
__m256i tmp_v =_mm256_lddqu_si256(reinterpret_cast<const __m256i*>(src+j));
v16_avx[0] = _mm256_cvtepu8_epi16(_mm256_extractf128_si256(tmp_v,0x0));
v16_avx[1] = _mm256_cvtepu8_epi16(_mm256_extractf128_si256(tmp_v,0x1));
v32_avx[0] = _mm256_cvtepi16_epi32(_mm256_extractf128_si256(v16_avx[0],0x0));
v32_avx[1] = _mm256_cvtepi16_epi32(_mm256_extractf128_si256(v16_avx[0],0x1));
v32_avx[2] = _mm256_cvtepi16_epi32(_mm256_extractf128_si256(v16_avx[1],0x0));
v32_avx[3] = _mm256_cvtepi16_epi32(_mm256_extractf128_si256(v16_avx[1],0x1));
for (int l=0; l<4; l++) {
__m256 vc1_ps = _mm256_cvtepi32_ps(_mm256_and_si256(v32_avx[l],m_lt_avx[l]));
__m256 vc2_ps = _mm256_cvtepi32_ps(_mm256_and_si256(v32_avx[l],m_ge_avx[l]));
/*
....
some processing there.
*/
}
#endif
#ifdef __SSE2__
#ifdef __SSE3__
__m128i tmp_v = _mm_lddqu_si128(reinterpret_cast<const __m128i*>(src+j));
#else
__m128i tmp_v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src+j));
#endif
#ifdef __SSE4_1__
v16[0] = _mm_cvtepu8_epi16(tmp_v);
tmp_v = _mm_shuffle_epi8(tmp_v,mask8);
v16[1] = _mm_cvtepu8_epi16(tmp_v);
v32[0] = _mm_cvtepi16_epi32(v16[0]);
v16[0] = _mm_shuffle_epi32(v16[0],0x4E);
v32[1] = _mm_cvtepi16_epi32(v16[0]);
v32[2] = _mm_cvtepi16_epi32(v16[1]);
v16[1] = _mm_shuffle_epi32(v16[1],0x4E);
v32[3] = _mm_cvtepi16_epi32(v16[1]);
#else
__m128i tmp_v_l = _mm_slli_si128(tmp_v,8);
__m128i tmp_v_r = _mm_srli_si128(tmp_v,8);
v16[0] = _mm_unpacklo_epi8(tmp_v,tmp_v_l);
v16[1] = _mm_unpackhi_epi8(tmp_v,tmp_v_r);
tmp_v_l = _mm_srli_epi16(v16[0],8);
tmp_v_r = _mm_srai_epi16(v16[0],8);
v32[0] = _mm_unpacklo_epi16(v16[0],tmp_v_l);
v32[1] = _mm_unpackhi_epi16(v16[0],tmp_v_r);
v16[0] = _mm_unpacklo_epi8(tmp_v,tmp_v_l);
v16[1] = _mm_unpackhi_epi8(tmp_v,tmp_v_r);
tmp_v_l = _mm_srli_epi16(v16[1],8);
tmp_v_r = _mm_srai_epi16(v16[1],8);
v32[2] = _mm_unpacklo_epi16(v16[1],tmp_v_l);
v32[3] = _mm_unpackhi_epi16(v16[1],tmp_v_r);
#endif
for (int l=0; l<4; l++) {
__m128 vc1_ps = _mm_cvtepi32_ps(_mm_and_si128(v32[l],m_lt[l]));
__m128 vc2_ps = _mm_cvtepi32_ps(_mm_and_si128(v32[l],m_ge[l]));
/*
...
some processing there.
*/
}
#endif
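As an aside, on AVX2 the two widening steps (u8 -> i16 -> i32) can be collapsed into a single vpmovzxbd per 8 bytes; I have not measured whether that is actually faster here, but a sketch would be:
__m128i bytes = _mm_loadl_epi64(reinterpret_cast<const __m128i*>(src + j));   // 8 x unsigned char
__m256i v32   = _mm256_cvtepu8_epi32(bytes);                                  // zero-extend to 8 x int32
__m256  vf    = _mm256_cvtepi32_ps(v32);                                      // 8 floats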
unsigned short -> float
#ifdef __AVX2__
__m256i tmp_v =_mm256_lddqu_si256(reinterpret_cast<const __m256i*>(src+j));
v32_avx[0] = _mm256_cvtepu16_epi32(_mm256_extractf128_si256(tmp_v,0x0));
v32_avx[1] = _mm256_cvtepu16_epi32(_mm256_extractf128_si256(tmp_v,0x1));
for(int l=0;l<2;l++) {
__m256 vc1_ps = _mm256_cvtepi32_ps(_mm256_and_si256(v32_avx[l],m_lt_avx[l]));
__m256 vc2_ps = _mm256_cvtepi32_ps(_mm256_and_si256(v32_avx[l],m_ge_avx[l]));
/*
...
some processing there.
*/
}
#endif
#ifdef __SSE2__
#ifdef __SSE3__
__m128i tmp_v = _mm_lddqu_si128(reinterpret_cast<const __m128i*>(src+j));
#else
__m128i tmp_v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src+j));
#endif
#ifdef __SSE4_1__
v32[0] = _mm_cvtepu16_epi32(tmp_v);
tmp_v = _mm_shuffle_epi32(tmp_v,0x4E);
v32[1] = _mm_cvtepu16_epi32(tmp_v);
#else
__m128i tmp_v_l = _mm_slli_si128(tmp_v,8);
__m128i tmp_v_r = _mm_srli_si128(tmp_v,8);
v32[0] = _mm_unpacklo_epi16(tmp_v,tmp_v_l);
v32[1] = _mm_unpackhi_epi16(tmp_v,tmp_v_r);
#endif
for(int l=0;l<2;l++) {
__m128 vc1_ps = _mm_cvtepi32_ps(_mm_and_si128(v32[l],m_lt[l]));
__m128 vc2_ps = _mm_cvtepi32_ps(_mm_and_si128(v32[l],m_ge[l]));
/*
...
some processing there.
*/
}
#endif
The processing in the comments has nothing to do with the conversion step.
I would like to speed up those conversions.
I read in SSE: convert short integer to float and in Converting Int to Float/Float to Int using Bitwise that it's possible to do this using bitwise operations.
Are those approaches really any faster?
I experimented with the implementation from the first link; there was almost no change in processing time. It worked fine for signed short, and also for unsigned short as long as the value is between 0 and SHRT_MAX (32767 on my system):
#include <immintrin.h>
#include <iterator>
#include <iostream>
#include <chrono>
void convert_sse_intrinsic(const ushort *source,const int len, int *destination)
{
__m128i zero2 = _mm_setzero_si128();
for (int i = 0; i < len; i+=4)
{
__m128i value = _mm_unpacklo_epi16(_mm_set_epi64x(0,*((long long*)(source+i)) /**ps*/), zero2);
value = _mm_srai_epi32(_mm_slli_epi32(value, 16), 16);
_mm_storeu_si128(reinterpret_cast<__m128i*>(destination+i),value);
}
}
void convert_sse_intrinsic2(const ushort *source,const int len, int *destination)
{
for (int i = 0; i < len; i+=8)
{
__m128i value = _mm_loadu_si128(reinterpret_cast<const __m128i*>(source+i));
_mm_storeu_si128(reinterpret_cast<__m128i*>(destination+i),_mm_cvtepu16_epi32(value));
value = _mm_shuffle_epi32(value,0x4E);
_mm_storeu_si128(reinterpret_cast<__m128i*>(destination+i+4),_mm_cvtepu16_epi32(value));
}
}
int main(int argc, char *argv[])
{
ushort CV_DECL_ALIGNED(32) toto[16] =
{0,500,1000,5000,
10000,15000,20000,25000,
30000,35000,40000,45000,
50000,55000,60000,65000};
int CV_DECL_ALIGNED(32) tutu[16] = {0};
std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
convert_sse_intrinsic(toto,16,tutu);
std::chrono::steady_clock::time_point stop = std::chrono::steady_clock::now();
std::cout<<"processing time 1st method : "<<std::chrono::duration_cast<std::chrono::nanoseconds>(stop-start).count()<<" : ns"<<std::endl;
std::copy(tutu,tutu+16,std::ostream_iterator<int>(std::cout," "));
std::cout<<std::endl;
start = std::chrono::steady_clock::now();
convert_sse_intrinsic2(toto,16,tutu);
stop = std::chrono::steady_clock::now();
std::cout<<"processing time 2nd method : "<<std::chrono::duration_cast<std::chrono::nanoseconds>(stop-start).count()<<" : ns"<<std::endl;
std::copy(tutu,tutu+16,std::ostream_iterator<int>(std::cout," "));
std::cout<<std::endl;
return 0;
}
Thanks in advance for any help.
Well, I think there is not really any faster way to convert an unsigned char or an unsigned short to float than with the intrinsics already used here.
I tried several other ways using bitwise operators, but none was significantly faster.
So I don't think it's worth letting this topic linger any longer.
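For reference, one common form of the bitwise trick for unsigned short looks like this (a sketch; it only needs SSE2, and it is exact because any 16-bit value fits in a float's 24-bit mantissa):
// Place each u16 in the mantissa of 2^23, reinterpret as float, then subtract 2^23.
const __m128i magic_hi = _mm_set1_epi16(0x4B00);     // high 16 bits of the bit pattern of 8388608.0f
const __m128  magic_f  = _mm_set1_ps(8388608.0f);    // 2^23
__m128i v    = _mm_loadu_si128(reinterpret_cast<const __m128i*>(source + i));           // 8 x u16
__m128  f_lo = _mm_sub_ps(_mm_castsi128_ps(_mm_unpacklo_epi16(v, magic_hi)), magic_f);  // elements 0..3
__m128  f_hi = _mm_sub_ps(_mm_castsi128_ps(_mm_unpackhi_epi16(v, magic_hi)), magic_f);  // elements 4..7
Whether this beats the cvtepi32_ps-based path depends on the surrounding code and the target CPU.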

"Warning : Non-POD class type passed through ellipsis" for simple thrust program

In spite of reading many answers to this kind of question on SO, I am not able to figure out the solution in my case. I have written the following code to implement a simple Thrust program. The program performs a simple copy and display operation.
#include <stdio.h>
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
int main(void)
{
// H has storage for 4 integers
thrust::host_vector<int> H(4);
H[0] = 14;
H[1] = 20;
H[2] = 38;
H[3] = 46;
// H.size() returns the size of vector H
printf("\nSize of vector : %d",H.size());
printf("\nVector Contents : ");
for (int i = 0; i < H.size(); ++i) {
printf("\t%d",H[i]);
}
thrust::device_vector<int> D = H;
printf("\nDevice Vector Contents : ");
for (int i = 0; i < D.size(); i++) {
printf("%d",D[i]); //This is where I get the warning.
}
return 0;
}
Thrust implements certain operations to facilitate using elements of a device_vector in host code, but this apparently isn't one of them.
There are many approaches to addressing this issue. The following code demonstrates 3 possible approaches:
1. explicitly copy D[i] to a host variable; thrust has an appropriate method defined for that.
2. copy the thrust device_vector back to a host_vector before printing it out.
3. use thrust::copy to copy the elements of the device_vector directly to a stream.
Code:
#include <stdio.h>
#include <iostream>
#include <thrust/host_vector.h>
#include <thrust/device_vector.h>
#include <thrust/copy.h>
int main(void)
{
// H has storage for 4 integers
thrust::host_vector<int> H(4);
H[0] = 14;
H[1] = 20;
H[2] = 38;
H[3] = 46;
// H.size() returns the size of vector H
printf("\nSize of vector : %d",H.size());
printf("\nVector Contents : ");
for (int i = 0; i < H.size(); ++i) {
printf("\t%d",H[i]);
}
thrust::device_vector<int> D = H;
printf("\nDevice Vector Contents : ");
//method 1
for (int i = 0; i < D.size(); i++) {
int q = D[i];
printf("\t%d",q);
}
printf("\n");
//method 2
thrust::host_vector<int> Hnew = D;
for (int i = 0; i < Hnew.size(); i++) {
printf("\t%d",Hnew[i]);
}
printf("\n");
//method 3
thrust::copy(D.begin(), D.end(), std::ostream_iterator<int>(std::cout, ","));
std::cout << std::endl;
return 0;
}
Note that for methods like these, thrust is generating various kinds of device->host copy operations to facilitate the use of device_vector in host code. This has performance implications, so for large vectors you might want to prefer the bulk copy operations (methods 2 and 3).

Cuda thrust global memory writing very slow

I am currently writing code that calculates an integral histogram on the GPU, using the NVIDIA Thrust library.
Therefore I allocate a contiguous block of device memory which I keep updating with a custom functor.
The problem is that the writes to device memory are very slow, but the reads are actually fine.
The basic setup is the following:
struct HistogramCreation
{
HistogramCreation(
...
// pointer to memory
...
){}
/// The actual summation operator
__device__ void operator()(int index){
.. do the calculations ..
for(int j=0;j<30;j++){
(1) *_memoryPointer = values (also using reads to such locations) ;
}
}
}
void foo(){
cudaMalloc(_pointer,size);
HistogramCreation initialCreation( ... _pointer ...);
thrust::for_each(
thrust::make_counting_iterator(0),
thrust::make_counting_iterator(_imageSize),
initialCreation);
}
If I change the write in (1) to the following:
unsigned int val = values;
the performance is much better. This is the only global memory write I have.
Using the memory write I get about 2 s for HD footage.
Using the local variable it takes about 50 ms, so about a factor of 40 less.
Why is this so slow? How could I improve it?
Just as @OlegTitov said, frequent loads from and stores to global memory should be avoided as much as possible. When that is unavoidable, coalesced memory access can help keep execution from getting too slow; however, in most cases histogram calculation is pretty tough to realize with coalesced access.
While most of the above is basically just restating @OlegTitov's answer, I'd just like to share an investigation I did about summation with NVIDIA CUDA. Actually the result is pretty interesting and I hope it will be helpful information for other CUDA developers.
The experiment was basically to run a speed test of summation with various memory access patterns: using global memory (1 thread), the L2 cache (atomic ops, 128 threads), and the L1 cache (shared memory, 128 threads).
This experiment used:
Kepler GTX 680,
1536 cores @ 1.06 GHz,
GDDR5, 256-bit @ 3 GHz
Here are the kernels:
__global__
void glob(float *h) {
float* hist = h;
uint sd = SEEDRND;
uint random;
for (int i = 0; i < NUMLOOP; i++) {
if (i%NTHREADS==0) random = rnd(sd);
int rind = random % NBIN;
float randval = (float)(random % 10)*1.0f ;
hist[rind] += randval;
}
}
__global__
void atom(float *h) {
float* hist = h;
uint sd = SEEDRND;
for (int i = threadIdx.x; i < NUMLOOP; i+=NTHREADS) {
uint random = rnd(sd);
int rind = random % NBIN;
float randval = (float)(random % 10)*1.0f ;
atomicAdd(&hist[rind], randval);
}
}
__global__
void shm(float *h) {
int lid = threadIdx.x;
uint sd = SEEDRND;
__shared__ float shm[NTHREADS][NBIN];
for (int i = 0; i < NBIN; i++) shm[lid][i] = h[i];
for (int i = lid; i < NUMLOOP; i+=NTHREADS) {
uint random = rnd(sd);
int rind = random % NBIN;
float randval = (float)(random % 10)*1.0f ;
shm[lid][rind] += randval;
}
/* reduction here */
for (int i = 0; i < NBIN; i++) {
__syncthreads();
if (threadIdx.x < 64) {
shm[threadIdx.x][i] += shm[threadIdx.x+64][i];
}
__syncthreads();
if (threadIdx.x < 32) {
shm[threadIdx.x][i] += shm[threadIdx.x+32][i];
}
__syncthreads();
if (threadIdx.x < 16) {
shm[threadIdx.x][i] += shm[threadIdx.x+16][i];
}
__syncthreads();
if (threadIdx.x < 8) {
shm[threadIdx.x][i] += shm[threadIdx.x+8][i];
}
__syncthreads();
if (threadIdx.x < 4) {
shm[threadIdx.x][i] += shm[threadIdx.x+4][i];
}
__syncthreads();
if (threadIdx.x < 2) {
shm[threadIdx.x][i] += shm[threadIdx.x+2][i];
}
__syncthreads();
if (threadIdx.x == 0) {
shm[0][i] += shm[1][i];
}
}
for (int i = 0; i < NBIN; i++) h[i] = shm[0][i];
}
OUTPUT
atom: 102656.00 shm: 102656.00 glob: 102656.00
atom: 122240.00 shm: 122240.00 glob: 122240.00
... blah blah blah ...
One Thread: 126.3919 msec
Atomic: 7.5459 msec
Sh_mem: 2.2207 msec
The single-thread global-memory kernel is about 17x slower than the atomic kernel and about 57x slower than the shared-memory kernel. Many things can be analyzed here, and it truly does not mean that using the L1 or L2 memory spaces will always give you more than a 10x speedup of the whole program.
And here are the main and the other functions:
#include <iostream>
#include <cstdlib>
#include <cstdio>
using namespace std;
#define NUMLOOP 1000000
#define NBIN 36
#define SEEDRND 1
#define NTHREADS 128
#define NBLOCKS 1
__device__ uint rnd(uint & seed) {
#if LONG_MAX > (16807*2147483647)
int const a = 16807;
int const m = 2147483647;
seed = (long(seed * a))%m;
return seed;
#else
double const a = 16807;
double const m = 2147483647;
double temp = seed * a;
seed = (int) (temp - m * floor(temp/m));
return seed;
#endif
}
... the above kernels ...
int main()
{
float *h_hist, *h_hist2, *h_hist3, *d_hist, *d_hist2,
*d_hist3;
h_hist = (float*)malloc(NBIN * sizeof(float));
h_hist2 = (float*)malloc(NBIN * sizeof(float));
h_hist3 = (float*)malloc(NBIN * sizeof(float));
cudaMalloc((void**)&d_hist, NBIN * sizeof(float));
cudaMalloc((void**)&d_hist2, NBIN * sizeof(float));
cudaMalloc((void**)&d_hist3, NBIN * sizeof(float));
for (int i = 0; i < NBIN; i++) h_hist[i] = 0.0f;
cudaMemcpy(d_hist, h_hist, NBIN * sizeof(float),
cudaMemcpyHostToDevice);
cudaMemcpy(d_hist2, h_hist, NBIN * sizeof(float),
cudaMemcpyHostToDevice);
cudaMemcpy(d_hist3, h_hist, NBIN * sizeof(float),
cudaMemcpyHostToDevice);
cudaEvent_t start, end;
float elapsed = 0, elapsed2 = 0, elapsed3;
cudaEventCreate(&start);
cudaEventCreate(&end);
cudaEventRecord(start, 0);
atom<<<NBLOCKS, NTHREADS>>>(d_hist);
cudaThreadSynchronize();
cudaEventRecord(end, 0);
cudaEventSynchronize(start);
cudaEventSynchronize(end);
cudaEventElapsedTime(&elapsed, start, end);
cudaEventRecord(start, 0);
shm<<<NBLOCKS, NTHREADS>>>(d_hist2);
cudaThreadSynchronize();
cudaEventRecord(end, 0);
cudaEventSynchronize(start);
cudaEventSynchronize(end);
cudaEventElapsedTime(&elapsed2, start, end);
cudaEventRecord(start, 0);
glob<<<1, 1>>>(d_hist3);
cudaThreadSynchronize();
cudaEventRecord(end, 0);
cudaEventSynchronize(start);
cudaEventSynchronize(end);
cudaEventElapsedTime(&elapsed3, start, end);
cudaMemcpy(h_hist, d_hist, NBIN * sizeof(float),
cudaMemcpyDeviceToHost);
cudaMemcpy(h_hist2, d_hist2, NBIN * sizeof(float),
cudaMemcpyDeviceToHost);
cudaMemcpy(h_hist3, d_hist3, NBIN * sizeof(float),
cudaMemcpyDeviceToHost);
/* print output */
for (int i = 0; i < NBIN; i++) {
printf("atom: %10.2f shm: %10.2f glob:
%10.2f¥n",h_hist[i],h_hist2[i],h_hist3[i]);
}
printf("%12s: %8.4f msec¥n", "One Thread", elapsed3);
printf("%12s: %8.4f msec¥n", "Atomic", elapsed);
printf("%12s: %8.4f msec¥n", "Sh_mem", elapsed2);
return 0;
}
When writing GPU code you should avoid unnecessary reads from and writes to global memory. Global memory is very slow on the GPU; that's a hardware characteristic. The best you can do is make neighboring threads read/write neighboring addresses in global memory. This causes coalescing and speeds up the process. But in general: read your data once, process it, and write it out once.
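For illustration, a generic sketch (not the asker's kernel): when consecutive threads touch consecutive addresses, the hardware can coalesce the accesses into a few wide transactions, while a per-thread stride defeats that.
__global__ void copy_coalesced(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;              // neighboring threads -> neighboring addresses
    if (i < n) out[i] = in[i];                                  // coalesced load and store
}
__global__ void copy_strided(const float *in, float *out, int n, int stride) {
    int i = (blockIdx.x * blockDim.x + threadIdx.x) * stride;   // neighbors are 'stride' elements apart
    if (i < n) out[i] = in[i];                                  // scattered accesses, many transactions
}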
Note that NVCC might optimize out a lot of your code after you make that modification: it detects that no write to global memory is made and simply removes the "unneeded" code. So the speedup may not really be coming from the global write per se.
I would recommend running the profiler on your actual code (the one with the global write) to see whether there is anything like unaligned access or another performance problem.
