Performance issue with matrix-matrix multiplication involving MaxRowsAtCompileTime / MaxColsAtCompileTime - gcc

In the quest for optimal matrix-matrix multiplication using Eigen 3 (and hopefully profiting from its SIMD support) I wrote the following test:
#include <iostream>
#include <Eigen/Dense>
#include <ctime>
using namespace Eigen;
using namespace std;
const int test_size= 13;
const int test_size_16b= test_size+1;
typedef Matrix<double, Dynamic, Dynamic, ColMajor, test_size_16b, test_size_16b> TestMatrix_dyn16b_t;
typedef Matrix<double, Dynamic, Dynamic> TestMatrix_dynalloc_t;
typedef Matrix<double, test_size, test_size> TestMatrix_t;
typedef Matrix<double, test_size_16b, test_size_16b> TestMatrix_fix16b_t;
template<typename TestMatrix_t> EIGEN_DONT_INLINE void test(const char * msg, int m_size= test_size, int n= 10000) {
    double s= 0.0;
    clock_t elapsed= 0;
    TestMatrix_t m3;
    for(int i= 0; i<n; i++) {
        TestMatrix_t m1 = TestMatrix_t::Random(m_size, m_size);
        TestMatrix_t m2= TestMatrix_t::Random(m_size, m_size);
        clock_t begin = clock();
        m3.noalias()= m1*m2;
        clock_t end = clock();
        elapsed+= end - begin;
        // make sure m3 is not optimized away
        s+= m3(1, 1);
    }
    double elapsed_secs = double(elapsed) / CLOCKS_PER_SEC;
    cout << "Elapsed time " << msg << ": " << elapsed_secs << " size " << m3.cols() << ", " << m3.rows() << endl;
}
int main() {
#ifdef EIGEN_VECTORIZE
    cout << "EIGEN_VECTORIZE on " << endl;
#endif
    test<TestMatrix_t> ("normal ");
    test<TestMatrix_dyn16b_t> ("dyn 16b ");
    test<TestMatrix_dynalloc_t>("dyn alloc");
    test<TestMatrix_fix16b_t> ("fix 16b ", test_size_16b);
}
I compiled this with g++ -msse3 -O2 -DEIGEN_DONT_PARALLELIZE test.cpp and ran it on an Athlon II X2 255. The result rather surprised me:
EIGEN_VECTORIZE on
Elapsed time normal : 0.019193 size 13, 13
Elapsed time dyn 16b : 0.025226 size 13, 13
Elapsed time dyn alloc: 0.018648 size 13, 13
Elapsed time fix 16b : 0.018221 size 14, 14
Similar results are attained with other odd numbers for test_size. What confuses me is this:
From reading the Eigen Vectorization FAQ I would have thought that a 13x13 matrix, whose size is not a multiple of 16 bytes, would not profit from SIMD optimization. I expected the computation time to be much worse, but it isn't.
From reading about Optional template parameters I would have thought that dynamic matrices with a fixed upper bound known at compile time would behave much like dynamically allocated matrices and thus have similar computation speed. But they don't. That is actually what surprises me the most and what triggered my initial quest: I wanted to know whether it is better to use a dynamic matrix with a fixed upper bound that is a multiple of 16 bytes than a fixed-size matrix whose size is not a multiple of 16 bytes.
Finally, interesting but not so surprising: a matrix whose fixed size is a multiple of 16 is no slower than one whose column and row length is one less. SIMD just does the extra column and row for free.
Not my original question, but also interesting: when the test is compiled without SSE2 support, and thus without vectorization, the relative computation times are roughly proportional. The dynamically sized fixed-memory matrix is again the slowest.
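(One way to get such a non-vectorized build, assuming the EIGEN_DONT_VECTORIZE macro, which disables Eigen's explicit SIMD code paths, is for example:)
g++ -O2 -DEIGEN_DONT_VECTORIZE -DEIGEN_DONT_PARALLELIZE test.cpp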
To put my question briefly: why is Matrix<double, Dynamic, Dynamic, ColMajor, test_size_16b, test_size_16b> so much slower? Can you confirm my observations and maybe even explain them?

The FAQ was obsolete. Since Eigen version 3.3, unaligned vectors and matrices are vectorized.
As for why Matrix<double, Dynamic, Dynamic, ColMajor, test_size_16b, test_size_16b> was slower: that was simply an issue in the compile-time selection of the preferred matrix product implementation. The fix will be part of Eigen 3.3.1.
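If in doubt which Eigen version a given build is actually picking up, the version macros from Eigen/Core can be printed; a minimal check along these lines:
#include <Eigen/Core>
#include <iostream>
int main() {
    // Unaligned vectors/matrices are vectorized from 3.3 on
    std::cout << "Eigen " << EIGEN_WORLD_VERSION << "."
              << EIGEN_MAJOR_VERSION << "." << EIGEN_MINOR_VERSION << std::endl;
}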

Related

Why doesn't move-assigning a std::vector seem to have any performance benefit over copying in this code?

Since move-assigning a std::vector is an O(1) operation and copying one std::vector to another is O(N) (where N is the sum of the sizes of the two vectors), I expected move-assignment to have a significant performance advantage over copying. To test this, I wrote the following code, which move-assigns/copies a std::vector nums2 of size 1000 to nums 100,000 times.
#include <iostream>
#include <vector>
#include <chrono>
using namespace std;
int main()
{
    auto start = clock();
    vector <int> nums;
    for(int i = 0; i < 100000; ++i) {
        vector <int> nums2(1000);
        for(int i = 0; i < 1000; ++i) {
            nums2[i] = rand();
        }
        nums = nums2; // or nums = move(nums2);
        cout << (nums[0] ? 1:0) << "\b \b"; // prevent compiler from optimizing out nums (I think)
    }
    cout << "Time: " << (clock() - start) / (CLOCKS_PER_SEC / 1000) << '\n';
    return 0;
}
The compiler I am using is g++ 7.5.0. When running with g++ -std=c++1z -O3, both the move-assign and copy versions take around 1600 ms, which does not match the hypothesis that move-assignment has any significant performance benefit. I then tested using std::swap(nums, nums2) (as an alternative to move-assignment), but that also took around the same time.
So, my question is, why doesn't move-assigning a std::vector to another seem to have a performance advantage over copy-assignment? Do I have a fundamental mistake in my understanding of C++ move-assignment?
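As a side note (not part of the original benchmark), one way to separate the cost of the assignment itself from the cost of filling nums2 is to time only the assignment statement, roughly along these lines:
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <utility>
#include <vector>
using namespace std;
int main() {
    long long copy_ns = 0, move_ns = 0;
    for (int i = 0; i < 100000; ++i) {
        vector<int> src(1000);
        for (int j = 0; j < 1000; ++j) src[j] = rand();
        vector<int> dst, dst2;
        auto t0 = chrono::steady_clock::now();
        dst = src;        // copy-assignment: allocates and copies 1000 ints
        auto t1 = chrono::steady_clock::now();
        dst2 = move(src); // move-assignment: just steals src's buffer
        auto t2 = chrono::steady_clock::now();
        copy_ns += chrono::duration_cast<chrono::nanoseconds>(t1 - t0).count();
        move_ns += chrono::duration_cast<chrono::nanoseconds>(t2 - t1).count();
        if (dst[0] == -1 || dst2[0] == -1) cout << ""; // keep results observable
    }
    cout << "copy ns: " << copy_ns << ", move ns: " << move_ns << '\n';
}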

Recommended way to cast a boost cpp_int to a double?

I have some code where I avoid some costly divisions by converting a boost integer to a double. For the real code I will build a floating-point type that's big enough to hold the maximal value (exponent). To test I am using a double. So I do this:
#define NTYPE_BITS 512
typedef number<cpp_int_backend<NTYPE_BITS, NTYPE_BITS, unsigned_magnitude, unchecked, void> > NTYPE;
NTYPE a1 = BIG_VALUE;
double a1f = (double)a1;
The code generated for that cast is quite complicated. I see it basically loops over all the limbs of a1 (least significant first), scaling them by powers of two.
Now in this case I guess at most the last two elements could affect the result (64 bits per element, and the most significant element might have fewer than 64 bits in use).
Is there a better way to do this?
First off, NEVER use C-Style casts. (Why use static_cast<int>(x) instead of (int)x?).
Second, avoid using namespace.
(Third, reserve all-caps names for macros).
That said:
double a1f = a1.convert_to<double>();
Is your ticket.
Live On Coliru
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>
namespace bmp = boost::multiprecision;
//0xDEADBEEFE1E104B1D00008BADF00D000ABADBABE000D15EA5E
#define BIG_VALUE "0xDEADBEEFE1E104B1D00008BADF00D000ABADBABE000D15EA5E"
#define NTYPE_BITS 512
int main() {
    using NTYPE = bmp::number<
        bmp::cpp_int_backend<
            NTYPE_BITS, NTYPE_BITS,
            bmp::unsigned_magnitude, bmp::unchecked, void>>;
    NTYPE a1(BIG_VALUE);
    std::cout << a1 << "\n";
    std::cout << std::hex << a1 << "\n";
    std::cout << a1.convert_to<double>() << "\n";
}
Prints
1397776821048146366831161011449418369017198837637750820563550
deadbeefe1e104b1d00008badf00d000abadbabe000d15ea5e
1.39778e+60

Is there any way to reduce sum 100M float elements of an array in CUDA?

I'm new to CUDA, so please bear with me if this question has a trivial solution.
I am trying to find the sum of 100M float elements of an array. From the following code you can see that I've used a reduction kernel and thrust. I suppose the kernel stores the sum in g_odata[0]. As all the elements in g_idata are the same, the result should be n*g_idata[1]. But you can clearly see the results are incorrect for both of them.
What am I getting wrong? How could I achieve my target?
Every reduction kernel I found is for the integer datatype, e.g. the highly recommended Optimizing Parallel Reduction in CUDA. Is there any specific reason for that?
Here is my code:
#include <iostream>
#include <math.h>
#include <stdlib.h>
#include <iomanip>
#include <thrust/reduce.h>
#include <thrust/execution_policy.h>
using namespace std;
__global__ void reduce(float *g_idata, float *g_odata) {
    __shared__ float sdata[256];
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    sdata[threadIdx.x] = g_idata[i];
    __syncthreads();
    for (int s=1; s < blockDim.x; s *=2)
    {
        int index = 2 * s * threadIdx.x;
        if (index < blockDim.x)
        {
            sdata[index] += sdata[index + s];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0)
        atomicAdd(g_odata,sdata[0]);
}
int main(void){
    unsigned int n=pow(10,8);
    float *g_idata, *g_odata;
    cudaMallocManaged(&g_idata, n*sizeof(float));
    cudaMallocManaged(&g_odata, n*sizeof(float));
    int blockSize = 32;
    int numBlocks = (n + blockSize - 1) / blockSize;
    for(int i=0;i<n;i++){g_idata[i]=6.1;g_odata[i]=0;}
    reduce<<<numBlocks, blockSize>>>(g_idata, g_odata);
    cudaDeviceSynchronize();
    cout << g_odata[0] << "\t" << (float)n*g_idata[1] << "\t"<< (float)n*g_idata[1]-g_odata[0]<<endl;
    g_odata[0]=thrust::reduce(thrust::device, g_idata, g_idata+n);
    cout << g_odata[0] << "\t" << (float)n*g_idata[1] << "\t"<< (float)n*g_idata[1]-g_odata[0]<<endl;
    cudaFree(g_idata);
    cudaFree(g_odata);
}
Result:
6.0129e+08 6.1e+08 8.7097e+06
6.09986e+08 6.1e+08 13824
I am using CUDA 10. nvcc --version :
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2018 NVIDIA Corporation
Built on Sat_Aug_25_21:08:01_CDT_2018
Cuda compilation tools, release 10.0, V10.0.130
Details of my GPU DeviceQuery:
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 750"
CUDA Driver Version / Runtime Version 10.0 / 10.0
CUDA Capability Major/Minor version number: 5.0
Total amount of global memory: 1999 MBytes (2096168960 bytes)
( 4) Multiprocessors, (128) CUDA Cores/MP: 512 CUDA Cores
GPU Max Clock rate: 1110 MHz (1.11 GHz)
Memory Clock rate: 2505 Mhz
Memory Bus Width: 128-bit
L2 Cache Size: 2097152 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65536), 3D=(4096, 4096, 4096)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 65536
Warp size: 32
Maximum number of threads per multiprocessor: 2048
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: Yes
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device supports Compute Preemption: No
Supports Cooperative Kernel Launch: No
Supports MultiDevice Co-op Kernel Launch: No
Device PCI Domain ID / Bus ID / location ID: 0 / 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 10.0, CUDA Runtime Version = 10.0, NumDevs = 1
Result = PASS
Thanks in advance.
I think the reason you are confused about the results here is a lack of understanding of floating point arithmetic. This whitepaper covers the topic pretty well. As a simple concept to grasp, if I have numbers represented as float quantities, and I attempt to do this:
100000000 + 1
the result will be: 100000000 (write some code and try it yourself)
This isn't unique to GPUs, CPU code will behave the same way (try it).
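A minimal host-side illustration of that claim (nothing CUDA-specific, just standard C++):
#include <iostream>
int main() {
    float f = 100000000.0f;                    // 1e8 is exactly representable in float
    std::cout << (f + 1.0f == f) << std::endl; // prints 1: float spacing at 1e8 is 8, so the +1 is lost
    double d = 100000000.0;
    std::cout << (d + 1.0 == d) << std::endl;  // prints 0: double still resolves the +1
}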
So for very large reductions, we get to the point (often) where we are adding very large numbers to much much smaller numbers, and the results aren't accurate from a "pure math" point of view.
That is fundamentally the problem here. In your CPU code, when you decide that the correct result should be 6.1*n, that kind of multiplication problem is not subject to the limits of adding large numbers to small ones that I just described, so you get an "accurate" result from that.
One of the ways to prove this or work around it, is to use double representation instead of float. This doesn't really completely eliminate the problem, but it pushes the resolution to the point where it can do a much better job of representing the range of numbers here.
The following code primarily has that change. You can change the typedef to compare the behavior between float and double.
There are a few other changes in the code. None of them are the cause of the discrepancy you witnessed.
$ cat t18.cu
#include <iostream>
#include <math.h>
#include <stdlib.h>
#include <iomanip>
#include <thrust/reduce.h>
#include <thrust/execution_policy.h>
#define BLOCK_SIZE 32
typedef double ft;
using namespace std;
__device__ double my_atomicAdd(double* address, double val)
{
    unsigned long long int* address_as_ull =
        (unsigned long long int*)address;
    unsigned long long int old = *address_as_ull, assumed;
    do {
        assumed = old;
        old = atomicCAS(address_as_ull, assumed,
                        __double_as_longlong(val +
                            __longlong_as_double(assumed)));
        // Note: uses integer comparison to avoid hang in case of NaN (since NaN != NaN)
    } while (assumed != old);
    return __longlong_as_double(old);
}
__device__ float my_atomicAdd(float* addr, float val){
    return atomicAdd(addr, val);
}
__global__ void reduce(ft *g_idata, ft *g_odata, int n) {
    __shared__ ft sdata[BLOCK_SIZE];
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    sdata[threadIdx.x] = (i < n)?g_idata[i]:0;
    __syncthreads();
    for (int s=1; s < blockDim.x; s *=2)
    {
        int index = 2 * s * threadIdx.x;
        if ((index +s) < blockDim.x)
        {
            sdata[index] += sdata[index + s];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0)
        my_atomicAdd(g_odata,sdata[0]);
}
int main(void){
    unsigned int n=pow(10,8);
    ft *g_idata, *g_odata;
    cudaMallocManaged(&g_idata, n*sizeof(ft));
    cudaMallocManaged(&g_odata, sizeof(ft));
    cout << "n = " << n << endl;
    int blockSize = BLOCK_SIZE;
    int numBlocks = (n + blockSize - 1) / blockSize;
    g_odata[0] = 0;
    for(int i=0;i<n;i++){g_idata[i]=6.1;}
    reduce<<<numBlocks, blockSize>>>(g_idata, g_odata, n);
    cudaDeviceSynchronize();
    cout << g_odata[0] << "\t" << (float)n*g_idata[1] << "\t"<< (float)n*g_idata[1]-g_odata[0]<<endl;
    g_odata[0]=thrust::reduce(thrust::device, g_idata, g_idata+n);
    cout << g_odata[0] << "\t" << (float)n*g_idata[1] << "\t"<< (float)n*g_idata[1]-g_odata[0]<<endl;
    cudaFree(g_idata);
    cudaFree(g_odata);
}
$ nvcc -o t18 t18.cu
$ cuda-memcheck ./t18
========= CUDA-MEMCHECK
n = 100000000
6.1e+08 6.1e+08 0.00527966
6.1e+08 6.1e+08 5.13792e-05
========= ERROR SUMMARY: 0 errors
$

Eigen 3.3 Conjugate Gradient is slower when multi-threaded with GCC compiler optimization

I've been using the ConjugateGradient solver in Eigen 3.2 and decided to try upgrading to Eigen 3.3.3 with the hope of benefiting from the new multi-threading features.
Sadly, the solver seems slower (~10%) when I enable -fopenmp with GCC 4.8.4. Looking at xosview, I see that all 8 CPUs are being used, yet performance is slower...
After some testing, I discovered that if I disable compiler optimization (use -O0 instead of -O3), then -fopenmp does speed up the solver by ~50%.
Of course, it's not really worth disabling optimization just to benefit from multi-threading, since that would be even slower overall.
Following advice from https://stackoverflow.com/a/42135567/7974125, I am storing the full sparse matrix and passing Lower|Upper as the UpLo parameter.
I've also tried each of the 3 preconditioners and also tried using RowMajor matrices, to no avail.
Is there anything else to try to get the full benefits of both multi-threading and compiler optimization?
I cannot post my actual code, but this is a quick test using the Laplacian example from Eigen's documentation, except for some changes to use ConjugateGradient instead of SimplicialCholesky. (Both of these solvers work with SPD matrices.)
#include <Eigen/Sparse>
#include <bench/BenchTimer.h>
#include <iostream>
#include <vector>
using namespace Eigen;
using namespace std;
// Use RowMajor to make use of multi-threading
typedef SparseMatrix<double, RowMajor> SpMat;
typedef Triplet<double> T;
// Assemble sparse matrix from
// https://eigen.tuxfamily.org/dox/TutorialSparse_example_details.html
void insertCoefficient(int id, int i, int j, double w, vector<T>& coeffs,
                       VectorXd& b, const VectorXd& boundary)
{
    int n = int(boundary.size());
    int id1 = i+j*n;
    if(i==-1 || i==n) b(id) -= w * boundary(j); // constrained coefficient
    else if(j==-1 || j==n) b(id) -= w * boundary(i); // constrained coefficient
    else coeffs.push_back(T(id,id1,w)); // unknown coefficient
}
void buildProblem(vector<T>& coefficients, VectorXd& b, int n)
{
    b.setZero();
    ArrayXd boundary = ArrayXd::LinSpaced(n, 0,M_PI).sin().pow(2);
    for(int j=0; j<n; ++j)
    {
        for(int i=0; i<n; ++i)
        {
            int id = i+j*n;
            insertCoefficient(id, i-1,j, -1, coefficients, b, boundary);
            insertCoefficient(id, i+1,j, -1, coefficients, b, boundary);
            insertCoefficient(id, i,j-1, -1, coefficients, b, boundary);
            insertCoefficient(id, i,j+1, -1, coefficients, b, boundary);
            insertCoefficient(id, i,j, 4, coefficients, b, boundary);
        }
    }
}
int main()
{
    int n = 300;  // size of the image
    int m = n*n;  // number of unknowns (=number of pixels)
    // Assembly:
    vector<T> coefficients; // list of non-zeros coefficients
    VectorXd b(m); // the right hand side-vector resulting from the constraints
    buildProblem(coefficients, b, n);
    SpMat A(m,m);
    A.setFromTriplets(coefficients.begin(), coefficients.end());
    // Solving:
    // Use ConjugateGradient with Lower|Upper as the UpLo template parameter to make use of multi-threading
    BenchTimer t;
    t.reset(); t.start();
    ConjugateGradient<SpMat, Lower|Upper> solver(A);
    VectorXd x = solver.solve(b); // use the factorization to solve for the given right hand side
    t.stop();
    cout << "Real time: " << t.value(1) << endl; // 0=CPU_TIMER, 1=REAL_TIMER
    return 0;
}
Resulting output:
// No optimization, without OpenMP
g++ cg.cpp -O0 -I./eigen -o cg
./cg
Real time: 23.9473
// No optimization, with OpenMP
g++ cg.cpp -O0 -I./eigen -fopenmp -o cg
./cg
Real time: 17.6621
// -O3 optimization, without OpenMP
g++ cg.cpp -O3 -I./eigen -o cg
./cg
Real time: 0.924272
// -O3 optimization, with OpenMP
g++ cg.cpp -O3 -I./eigen -fopenmp -o cg
./cg
Real time: 1.04809
Your problem is too small to expect any benefit from multi-threading. Sparse matrices are expected to be at least one order of magnitude larger. Eigen's code should be adjusted to reduce the number of threads in this case.
Moreover, I guess that you only have 4 physical cores, so running with OMP_NUM_THREADS=4 ./cg might help.
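For completeness, the thread count can also be capped from inside the program through Eigen's API (Eigen::setNbThreads / Eigen::nbThreads); a minimal sketch:
#include <Eigen/Core>
#include <iostream>
int main() {
    Eigen::setNbThreads(4);                       // limit Eigen's OpenMP parallelism to 4 threads
    std::cout << Eigen::nbThreads() << std::endl; // the thread count Eigen will actually use
}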

Large for loop in Cuda kernel doesn't work for large arrays [closed]

I've implemented various algorithms using Cuda, such as matrix multiplication, Cholesky decomposition and inversion (by forward substitution) of a lower triangular matrix.
For some of these algorithms I have a for loop in the kernel that repeats part of the kernel code many times. It all works well for (flattened, i.e. represented by 1D arrays) matrices of floats up to about 200x200, with the for loop calling part of the kernel code 200 times. Increasing the matrix size to, say, 1000x1000 (with the for loop calling part of the kernel code 1000 times) makes the GPU take about as much computing time as would be expected from the trials with smaller matrix sizes, but no kernel code (including the parts outside the for loop) seems to have been run: the output matrix has none of its elements changed since initialization. If I increase the matrix size to around 500 I'm sometimes able to get the kernel to run if I set the limiter in the for loop to some low value (such as 3).
Have I hit some hardware limit here or is there a trick I can use to make these for loops work for large matrices?
This is an example of complete code that you can copy into a .cu file. The kernel attempts to copy the contents of matrix A (W*H) to matrix B (W*H). The output shows the first element of both matrices; for W*H < 200x200 this works just fine, but for W*H = 1000x1000 no copying seems to occur because the elements of B remain zero, as if nothing has happened since initialization. I'm compiling and running this code on a Linux-based server. For large matrices, error checking gives me "GPUassert: unspecified launch failure" at line 67, which is the cudaMemcpy line that copies matrix B from device to host.
#include <cuda.h>
#include <cuda_runtime.h>
#include <cuda_runtime_api.h>
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <iostream>
#include <time.h>
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, char *file, int line, bool abort=true)
{
if (code != cudaSuccess)
{
fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
if (abort) exit(code);
}
}
__global__ void MatrixCopy(float *A, float *B, int W)
{
int i = blockIdx.x*blockDim.x + threadIdx.x;
int j = blockIdx.y*blockDim.y + threadIdx.y;
B[j*W + i]=A[j*W + i];
}
int main(void)
{
clock_t start1=clock();
int W=1000;
int H=1000;
float *A, *B;
float *devA, *devB;
A=(float*)malloc(W*H*sizeof(float));
B=(float*)malloc(W*H*sizeof(float));
for(int i=0; i<=W*H; i++)
{
A[i]=rand() % 3;
A[i]=A[i]+1;
B[i]=0;
}
gpuErrchk( cudaMalloc( (void**)&devA, W*H*sizeof(float) ) );
gpuErrchk( cudaMalloc( (void**)&devB, W*H*sizeof(float) ) );
gpuErrchk( cudaMemcpy( devA, A, W*H*sizeof(float), cudaMemcpyHostToDevice ) );
gpuErrchk( cudaMemcpy( devB, B, W*H*sizeof(float), cudaMemcpyHostToDevice ) );
dim3 threads(32,32);
int bloW=(int)ceil((double)W/32);
int bloH=(int)ceil((double)H/32);
dim3 blocks(bloW, bloH);
clock_t finish1=clock();
clock_t start2=clock();
MatrixCopy<<<blocks,threads>>>(devA, devB, W);
gpuErrchk( cudaPeekAtLastError() );
gpuErrchk( cudaMemcpy( B, devB, W*H*sizeof(float), cudaMemcpyDeviceToHost ) );
clock_t finish2=clock();
printf("\nGPU calculation time (ms): %d\nInitialization time (ms): %d\n\n", (int)ceil(double(((finish2-start2)*1000/(CLOCKS_PER_SEC)))), (int)ceil(double(((finish1-start1)*1000/(CLOCKS_PER_SEC)))));
printf("\n%f\n", A[0]);
printf("\n%f\n\n", B[0]);
gpuErrchk( cudaFree(devA) );
gpuErrchk( cudaFree(devB) );
free(A);
free(B);
#ifdef _WIN32
system ("PAUSE");
#endif
return 0;
}
Your kernel has no thread checking.
You are deciding the grid size (in blocks) like this:
int bloW=(int)ceil((double)W/32);
int bloH=(int)ceil((double)H/32);
For values of H and W that are not even multiples of the threads per block sizes (32) this creates extra threads and blocks, outside of the actual matrix you care about (1000x1000). There's nothing wrong with this; this is common practice.
However, we must make sure those extra threads don't actually do anything (i.e. don't generate invalid accesses to memory). Your kernel does not provide this checking.
If you modify your kernel to be something like this:
__global__ void MatrixCopy(float *A, float *B, int W, int H)
{
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    int j = blockIdx.y*blockDim.y + threadIdx.y;
    if ((i < W) && (j < H))
        B[j*W + i]=A[j*W + i];
}
I think you'll have better results. Without this, some of your A and B references in the kernel generate out-of-bounds accesses, which you can see if you run your code with cuda-memcheck. You'll also have to modify the kernel invocation line to add the H parameter. I haven't really sorted out whether your i variable corresponds to H or W; I assume you can do that and make the change if needed. In this case, since the matrix is square, it doesn't really matter.
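The launch line would then pass both dimensions, along the lines of:
MatrixCopy<<<blocks,threads>>>(devA, devB, W, H);
gpuErrchk( cudaPeekAtLastError() );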
And you should do proper cuda error checking any time you are having trouble with CUDA code. I would suggest doing this before you post here asking for help.
