OpenCL performance on Lion

I'm probably doing something wrong, but it seems I can't get good overall performance out of my OpenCL kernel, even though it runs significantly faster on the GPU than on the CPU.
Let me explain.
CPU kernel running time is ~100ms.
GPU kernel running time is ~8ms.
The above was measured with a command queue created by clCreateCommandQueue with the CL_QUEUE_PROFILING_ENABLE flag.
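For reference, with profiling enabled the per-launch device time can be read back from the kernel's event once it has completed; a minimal sketch, assuming the same cl_event event that is passed to clEnqueueNDRangeKernel in the loop below:

cl_ulong t_start = 0, t_end = 0;
clWaitForEvents(1, &event);   // make sure the enqueued kernel has finished
clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_START, sizeof(t_start), &t_start, NULL);
clGetEventProfilingInfo(event, CL_PROFILING_COMMAND_END, sizeof(t_end), &t_end, NULL);
// Timestamps are in nanoseconds, so this prints the device-side execution time in ms.
printf("kernel time: %0.3f ms\n", (t_end - t_start) * 1e-6);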
The problem, however, is the time needed to enqueue the kernel repeatedly:
200 kernel calls on CPU: ~19s
200 kernel calls on GPU: ~18s
The above was measured with calls to gettimeofday before and after the loop of 200 enqueues, with a call to clFinish right after the loop to wait until all 200 enqueued kernels had finished.
Moreover, only enqueueing and executing the kernel was timed; no data transfer to or from the device was involved.
Here is the loop:
size_t global_item_size = LIST_SIZE;
Start_Clock(&startTime);
for (int k = 0; k < 200; k++)
{
    // Execute the OpenCL kernel on the list
    ret = clEnqueueNDRangeKernel(command_queue, kernel, 1, NULL, &global_item_size, NULL, 0, NULL, &event);
}
clFinish(command_queue);
printf("] (in %0.4fs)\n", Stop_Clock(&startTime));
If 200 calls to the kernel take ~18 s, then it is completely irrelevant that the kernel itself is several times faster on the GPU than on the CPU...
What am I doing wrong?
EDIT
I made some additional tests, and it seems that it is actually the assignment of the computed result to the output buffer that produces the overhead.
This kernel
__kernel void test_kernel(__global const float *A, __global const float *B, __global float *C)
{
    // Get the index of the current element to be processed
    int i = get_global_id(0);

    // Do the work
    C[i] = sqrt(sin(A[i]) + tan(B[i]));
}
executed 200 times gives the timings above. But if I change the C[i] line to
float z = sqrt(sin(A[i]) + tan(B[i]));
then the same 200 calls take 0.3 s on the CPU and 2.6 s on the GPU.
Interesting.
I wonder if it would be possible to speed up the execution by collecting the results in a __local array and assigning them to the output buffer C only in the last kernel invocation (the work-item with the last global id, not the 200th enqueued kernel)?

Related

On Skylake (SKL) why are there L2 writebacks in a read-only workload that exceeds the L3 size?

Consider the following simple code:
#include <stdlib.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <err.h>

int cpu_ms() {
    return (int)(clock() * 1000 / CLOCKS_PER_SEC);
}

int main(int argc, char** argv) {
    if (argc < 2) errx(EXIT_FAILURE, "provide the array size in KB on the command line");

    size_t size = atol(argv[1]) * 1024;
    unsigned char *p = malloc(size);
    if (!p) errx(EXIT_FAILURE, "malloc of %zu bytes failed", size);
    int fill = argv[2] ? argv[2][0] : 'x';
    memset(p, fill, size);

    int startms = cpu_ms();
    printf("allocated %zu bytes at %p and set it to %d in %d ms\n", size, p, fill, startms);

    // wait until 500ms has elapsed from start, so that perf gets the read phase
    while (cpu_ms() - startms < 500) {}
    startms = cpu_ms();

    // we start measuring with perf here
    unsigned char sum = 0;
    for (size_t off = 0; off < 64; off++) {
        for (size_t i = 0; i < size; i += 64) {
            sum += p[i + off];
        }
    }

    int delta = cpu_ms() - startms;
    printf("sum was %u in %d ms \n", sum, delta);

    return EXIT_SUCCESS;
}
This allocates an array of size bytes (which is passed in on the command line, in KiB), sets all bytes to the same value (the memset call), and finally loops over the array in a read-only manner, striding by one cache line (64 bytes), and repeats this 64 times so that each byte is accessed once.
If we turn prefetching off¹, we expect this to hit 100% in a given level of cache if size fits in the cache, and to mostly miss at that level otherwise.
I'm interested in two events, l2_lines_out.silent and l2_lines_out.non_silent (and also l2_trans.l2_wb - but the values end up identical to non_silent), which count lines that are silently dropped from L2 and those that are not.
If we run this from 16 KiB up through 1 GiB, and measure these two events (plus l2_lines_in.all) for the final loop only, we get:
The y-axis here is the number of events, normalized to the number of accesses in the loop. For example, the 16 KiB test allocates a 16 KiB region, and makes 16,384 accesses to that region, and so a value of 0.5 means that on average 0.5 counts of the given event occurred per access.
The l2_lines_in.all behaves almost as we'd expect. It starts off around zero and when the size exceeds the L2 size it goes up to 1.0 and stays there: every access brings in a line.
The other two lines behave weirdly. In the region where the test fits in the L3 (but not in the L2), the evictions are nearly all silent. However, as soon as the region moves into main memory, the evictions are all non-silent.
What explains this behavior? It's hard to understand why the evictions from L2 would depend on whether the underlying region fits in the L3 or has to come from main memory.
If you do stores instead of loads, almost everything is a non-silent writeback, as expected, since the updated values have to be propagated to the outer caches:
We can also take a look at what level of the cache the accesses are hitting in, using the mem_inst_retired.l1_hit and related events:
If you ignore the L1 hit counters, which seem impossibly high at a couple of points (more than one L1 hit per access?), the results look more or less as expected: mostly L2 hits when the region fits cleanly in L2, mostly L3 hits for the L3 region (up to 6 MiB on my CPU), and then misses to DRAM thereafter.
You can find the code on GitHub. The details on building and running can be found in the README file.
I observed this behavior on my Skylake client i7-6700HQ CPU. The same effect seems not to exist on Haswell². On Skylake-X the behavior is totally different, which is expected, since the L3 cache design has changed to be something like a victim cache for the L2.
¹ You can do it on recent Intel with wrmsr -a 0x1a4 "$((2#1111))". In fact, the graph is almost exactly the same with prefetch on, so turning it off is mostly just to eliminate a confounding factor.
² See the comments for more details, but briefly: l2_lines_out.(non_)silent doesn't exist there, but l2_lines_out.demand_(clean|dirty) does, which seems to have a similar definition. More importantly, l2_trans.l2_wb, which mostly mirrors non_silent on Skylake, also exists on Haswell, appears to mirror demand_dirty, and likewise does not exhibit the effect on Haswell.

Optimize CUDA kernel execution time

I'm a student learning CUDA, and I would like to optimize the execution time of my kernel function. To that end, I wrote a short program computing the difference between two pictures, and compared the execution time between a classic CPU implementation in C and a GPU implementation in CUDA C.
Here you can find the code I'm talking about:
int *imgresult_data = (int *) malloc(width*height*sizeof(int));
int size = width*height;

switch (computing_type)
{
case GPU:
    HANDLE_ERROR(cudaMalloc((void**)&dev_data1, size*sizeof(unsigned char)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_data2, size*sizeof(unsigned char)));
    HANDLE_ERROR(cudaMalloc((void**)&dev_data_res, size*sizeof(int)));

    HANDLE_ERROR(cudaMemcpy(dev_data1, img1_data, size*sizeof(unsigned char), cudaMemcpyHostToDevice));
    HANDLE_ERROR(cudaMemcpy(dev_data2, img2_data, size*sizeof(unsigned char), cudaMemcpyHostToDevice));
    HANDLE_ERROR(cudaMemcpy(dev_data_res, imgresult_data, size*sizeof(int), cudaMemcpyHostToDevice));

    float time;
    cudaEvent_t start, stop;
    HANDLE_ERROR( cudaEventCreate(&start) );
    HANDLE_ERROR( cudaEventCreate(&stop) );
    HANDLE_ERROR( cudaEventRecord(start, 0) );

    for (int m = 0; m < nb_loops; m++)
    {
        diff<<<height, width>>>(dev_data1, dev_data2, dev_data_res);
    }

    HANDLE_ERROR( cudaEventRecord(stop, 0) );
    HANDLE_ERROR( cudaEventSynchronize(stop) );
    HANDLE_ERROR( cudaEventElapsedTime(&time, start, stop) );

    HANDLE_ERROR(cudaMemcpy(imgresult_data, dev_data_res, size*sizeof(int), cudaMemcpyDeviceToHost));

    printf("Time to generate: %4.4f ms \n", time/nb_loops);
    break;

case CPU:
    clock_t begin = clock(), diff;
    for (int z = 0; z < nb_loops; z++)
    {
        // Apply the difference between 2 images
        for (int i = 0; i < height; i++)
        {
            tmp = i*imgresult_pitch;
            for (int j = 0; j < width; j++)
            {
                imgresult_data[j + tmp] = (int) img2_data[j + tmp] - (int) img1_data[j + tmp];
            }
        }
    }
    diff = clock() - begin;
    float msec = diff*1000/CLOCKS_PER_SEC;
    msec = msec/nb_loops;
    printf("Time taken %4.4f milliseconds", msec);
    break;
}
And here is my kernel function:
__global__ void diff(unsigned char *data1, unsigned char *data2, int *data_res)
{
    int row = blockIdx.x;
    int col = threadIdx.x;
    int v = col + row*blockDim.x;

    if (row < MAX_H && col < MAX_W)
    {
        data_res[v] = (int) data2[v] - (int) data1[v];
    }
}
I obtained these execution times for each one:
CPU: 1.3210 ms
GPU: 0.3229 ms
I wonder why the GPU result is not as low as it should be. I am a beginner in CUDA, so please bear with me if I've made some classic errors.
EDIT1:
Thank you for your feedback. I tried to delete the 'if' condition from the kernel, but it didn't change my program's execution time much.
However, after installing the CUDA profiler, it told me that my threads weren't running concurrently. I don't understand why I get this kind of message, but it seems true, because the GPU version is only 5 or 6 times faster than the CPU one. This ratio should be greater, because each thread is supposed to process one pixel concurrently with all the others. If you have an idea of what I am doing wrong, it would be helpful...
Flow.
Here are a few things you could do which may improve the performance of your diff kernel:
1. Let each thread do more work
In your kernel, each thread handles just a single element; but having a thread do anything at all already incurs a bunch of overhead, at the block and at the thread level, including obtaining the parameters, checking the condition and doing address arithmetic. Now, you could say "Oh, but the reads and writes take much more time than that; this overhead is negligible" - but you would be ignoring the fact that the latency of these reads and writes is hidden by the presence of many other warps which may be scheduled to do their work.
So, let each thread process more than a single element. Say, 4, as each thread can easily read 4 bytes at once into a register. Or even 8 or 16; experiment with it. Of course you'll need to adjust your grid and block parameters accordingly.
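A minimal sketch of what this could look like with 4 pixels per thread, reading the inputs as packed uchar4 and writing int4; the launch configuration, the n4 parameter and the assumption that size is a multiple of 4 are mine, not from the question:

// Hypothetical variant: each thread handles 4 consecutive pixels.
__global__ void diff4(const uchar4 *data1, const uchar4 *data2, int4 *data_res, int n4)
{
    int v = threadIdx.x + blockIdx.x * blockDim.x;   // index of a group of 4 pixels
    if (v < n4)
    {
        uchar4 a = data1[v];
        uchar4 b = data2[v];
        data_res[v] = make_int4(b.x - a.x, b.y - a.y, b.z - a.z, b.w - a.w);
    }
}

// Host side, assuming size (= width*height) is a multiple of 4:
// int n4 = size / 4;
// diff4<<<(n4 + 127) / 128, 128>>>((const uchar4*)dev_data1, (const uchar4*)dev_data2, (int4*)dev_data_res, n4);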
2. "Restrict" your pointers
__restrict is not part of C++, but it is supported in CUDA. It tells the compiler that accesses through different pointers passed to the function never overlap. See:
What does the restrict keyword mean in C++?
Realistic usage of the C99 'restrict' keyword?
Using it allows the CUDA compiler to apply additional optimizations, e.g. loading or storing data via the non-coherent cache. Indeed, this can happen with your kernel, although I haven't measured the effect.
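Applied to the kernel from the question, only the signature needs to change; a sketch using the __restrict__ spelling that nvcc accepts (the body is unchanged):

__global__ void diff(const unsigned char * __restrict__ data1,
                     const unsigned char * __restrict__ data2,
                     int * __restrict__ data_res)
{
    // Same body as before; the qualifiers only promise the compiler that
    // data1, data2 and data_res never alias each other.
    int row = blockIdx.x;
    int col = threadIdx.x;
    int v = col + row * blockDim.x;
    if (row < MAX_H && col < MAX_W)
    {
        data_res[v] = (int) data2[v] - (int) data1[v];
    }
}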
3. Consider using a "SIMD" instruction
CUDA offers this intrinsic:
__device__ unsigned int __vsubss4(unsigned int a, unsigned int b)
which performs a per-byte subtraction of the corresponding signed byte values, computing a - b for each byte with saturation. If you can "live" with a packed-byte result, rather than expecting a larger int per element, that could save you some work - and it goes very well with increasing the number of elements per thread. In fact, it might let you increase it even further to get to the optimum.
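A rough sketch of how this could be combined with processing 4 pixels per thread; note that the result is then one saturated signed byte per pixel packed into an unsigned int, not an int per pixel, so the output buffer type changes (this packing, the n4 parameter and the bounds check are my assumptions, not part of the question):

// Hypothetical: process 4 pixels per thread with the byte-wise SIMD intrinsic.
// Output is one signed byte per pixel (saturated), packed into an unsigned int.
__global__ void diff_simd(const unsigned int * __restrict__ data1,
                          const unsigned int * __restrict__ data2,
                          unsigned int * __restrict__ data_res, int n4)
{
    int v = threadIdx.x + blockIdx.x * blockDim.x;
    if (v < n4)
    {
        // Per-byte (data2 - data1) with signed saturation, 4 pixels at once.
        data_res[v] = __vsubss4(data2[v], data1[v]);
    }
}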
I don't think you are measuring times correctly; memory copies are a time-consuming step on the GPU that you should take into account when measuring your time.
I see some details that you can test:
I suppose you are using MAX_H and MAX_W as constants; you may consider setting them with cudaMemcpyToSymbol().
Remember to sync your threads using __syncthreads(), so you don't get issues between each loop iteration.
CUDA works with warps, so the number of threads per block works best as a multiple of the warp size (32), but not larger than 512 threads per block unless your hardware supports more. Here is an example using 128 threads per block: <<<(cols*rows+127)/128, 128>>> (see the combined sketch after this list).
Remember as well to free the memory you allocated on the GPU (cudaFree) and to destroy the timing events you created (cudaEventDestroy).
In your kernel function you can have a single variable int v = threadIdx.x + blockIdx.x * blockDim.x .
Have you checked, besides the execution time, that your result is correct? I think you should use cudaMallocPitch() and cudaMemcpy2D() when working with 2D arrays, because of padding.
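A minimal sketch combining the flattened index and the 128-thread-per-block launch from the list above; size (= width*height) is taken from the question's host code, while passing it in and checking against it is my assumption:

__global__ void diff(const unsigned char *data1, const unsigned char *data2,
                     int *data_res, int size)
{
    int v = threadIdx.x + blockIdx.x * blockDim.x;   // one flat index per pixel
    if (v < size)                                    // guard the last, partially filled block
    {
        data_res[v] = (int) data2[v] - (int) data1[v];
    }
}

// Host side: one thread per pixel, 128 threads per block.
// diff<<<(size + 127) / 128, 128>>>(dev_data1, dev_data2, dev_data_res, size);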
Probably there are other issues with the code, but here's what I see. The following lines in __global__ void diff are not optimal:
if (row < MAX_H && col < MAX_W)
{
    data_res[v] = (int) data2[v] - (int) data1[v];
}
Conditional operators inside a kernel result in warp divergence. It means that the if and else parts inside a warp are executed in sequence, not in parallel. Also, as you might have realized, the if evaluates to false only at the borders. To avoid the divergence and the needless computation, split your image in two parts:
Central part where row < MAX_H && col < MAX_W is always true. Create an additional kernel for this area. if is unnecessary here.
Border areas that will use your diff kernel.
Obviously you'll have to modify the code that calls the kernels; a rough sketch of the central-part kernel is shown below.
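A rough sketch of what the central-part kernel could look like, assuming it is launched over an exact-fit grid (e.g. <<<MAX_H, MAX_W>>>) so that row < MAX_H and col < MAX_W always hold:

// Hypothetical "central part" kernel: the launch covers exactly the region
// where the condition is always true, so the bounds check can be dropped.
__global__ void diff_center(const unsigned char *data1, const unsigned char *data2, int *data_res)
{
    int row = blockIdx.x;
    int col = threadIdx.x;
    int v = col + row * blockDim.x;
    data_res[v] = (int) data2[v] - (int) data1[v];
}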
And on a separate note:
The GPU has a throughput-oriented architecture, not a latency-oriented one like the CPU. That means the CPU may be faster than CUDA when it comes to processing small amounts of data. Have you tried using large data sets?
The CUDA profiler is a very handy tool that will tell you where your code is not optimal.

OpenCL very low GFLOPS, no data transfer bottleneck

I am trying to optimize an algorithm I am running on my GPU (AMD HD6850). I counted the number of floating point operations inside my kernel and measured its execution time. I found it to achieve ~20 SP GFLOPS; however, according to the GPU's specs I should achieve ~1500 GFLOPS.
To find the bottleneck I created a very simple kernel:
kernel void test_gflops(const float d, global float* result)
{
    int gid = get_global_id(0);
    float cd;

    for (int i = 0; i < 100000; i++)
    {
        cd = d*i;
    }

    if (cd == -1.0f)
    {
        result[gid] = cd;
    }
}
Running this kernel I get ~5*10^5 work_items/sec. I count one floating point operation (not sure if that's right, what about incrementing i and comparing it to 100000?) per iteration of the loop.
==> 5*10^5 work_items/sec * 10^5 FLOP/work_item = 50 GFLOPS.
Even if there are 3 or 4 operations going on in the loop, it's much slower than what the card should be able to do. What am I doing wrong?
The global work size is big enough (no speed change for 10k vs 100k work items).
Here are a couple of tricks:
The GPU doesn't like loops at all. Use #pragma unroll to unroll them.
Your GPU is good at vector operations. Stick to them; that will allow you to process multiple operands at once.
Use vector loads/stores whenever possible.
Measure the memory bandwidth - I'm almost sure that you are bandwidth-limited because of a poor access pattern.
In my opinion, the kernel should look like this:
typedef union floats {
    float16 vector;
    float array[16];
} floats;

kernel void test_gflops(const float d, global float* result)
{
    int gid = get_global_id(0);
    floats cd;
    cd.vector = vload16(gid, result);
    cd.vector *= d;

#pragma unroll
    for (int i = 0; i < 16; i++)
    {
        if (cd.array[i] == -1.0f)
        {
            result[gid] = cd.array[i];   // store the matching lane
        }
    }
}
Make your NDRange bigger to compensate for the difference between 16 and 100,000 in the loop bound.

What caused my elapsed time to be much longer than my user time?

I am benchmarking some R statements (see details here) and found that my elapsed time is way longer than my user time.
   user  system elapsed
  7.910   7.750  53.916
Could someone help me understand what factors (R or hardware) determine the difference between user time and elapsed time, and how I can improve it? In case it helps: I am running data.table data manipulation on a MacBook Air with a 1.7 GHz i5 and 4 GB RAM.
Update: My crude understanding is that user time is the time it takes my CPU to process my job, while elapsed time is the time from when I submit the job until I get the data back. What else did my computer need to do after processing for 8 seconds?
Update: as suggested in the comments, I ran the procedure a couple of times on two data.tables: Y, with 104 columns (sorry, I add more columns as time goes by), and X, a subset of Y with only the 3 key columns. Below are the updates. Please note that I ran these two procedures consecutively, so the memory state should be similar.
X <- Y[, list(Year, MemberID, Month)]

system.time({
    X[, Month := -Month]
    setkey(X, Year, MemberID, Month)
    X[, Month := -Month]
})
   user  system elapsed
  3.490   0.031   3.519

system.time({
    Y[, Month := -Month]
    setkey(Y, Year, MemberID, Month)
    Y[, Month := -Month]
})
   user  system elapsed
  8.444   5.564  36.284
Here are the sizes of the only two objects in my workspace (commas added):
object.size(X)
83,237,624 bytes
object.size(Y)
2,449,521,080 bytes
Thank you
User time is how many seconds the computer spent doing your calculations. System time is how much time the operating system spent responding to your program's requests. Elapsed time is the sum of those two, plus whatever "waiting around" your program and/or the OS had to do. It's important to note that these numbers are the aggregate of time spent. Your program might compute for 1 second, then wait on the OS for one second, then wait on disk for 3 seconds and repeat this cycle many times while it's running.
Based on the fact that your program took as much system time as user time, it was doing something very IO-intensive: reading from disk a lot or writing to disk a lot. RAM is pretty fast, a few hundred nanoseconds usually. So if everything fits in RAM, elapsed time is usually just a little bit longer than user time. But a disk might take a few milliseconds to seek and even longer to reply with the data. That's slower by a factor of a million.
We've determined that your processor was "doing stuff" for ~8 + ~8 = ~ 16 seconds. What was it doing for the other ~54 - ~16 = ~38 seconds? Waiting for the hard drive to send it the data it asked for.
UPDATE1:
Matthew has made some excellent points: I'm making assumptions that I probably shouldn't be making. Adam, if you'd care to publish a list of all the columns in your table (the datatypes are all we need), we can get a better idea of what's going on.
I just cooked up a little do-nothing program to validate my assumption that time not spent in userspace and kernel space is likely spent waiting for IO.
#include <stdio.h>

int main()
{
    int i;
    for (i = 0; i < 1000000000; i++)
    {
        int j, k, l, m;
        j = 10;
        k = i;
        l = j + k;
        m = j + k - i + l;
    }
    return 0;
}
When I run the resulting program and time it I see something like this:
mike@computer:~$ time ./waste_user
real 0m4.670s
user 0m4.660s
sys 0m0.000s
mike@computer:~$
As you can see by inspection, the program does no real work, and as such it doesn't ask the kernel to do anything beyond loading it into RAM and starting it running. So nearly ALL the "real" time is spent as "user" time.
Now a kernel-heavy do-nothing program (with fewer iterations to keep the time reasonable):
#include <stdio.h>

int main()
{
    FILE * random;
    random = fopen("/dev/urandom", "r");
    int i;
    for (i = 0; i < 10000000; i++)
    {
        fgetc(random);
    }
    return 0;
}
When I run that one, I see something more like this:
mike@computer:~$ time ./waste_sys
real 0m1.138s
user 0m0.090s
sys 0m1.040s
mike@computer:~$
Again it's easy to see by inspection that the program does little more than ask the kernel to give it random bytes. /dev/urandom is a non-blocking source of entropy. What does that mean? The kernel uses a pseudo-random number generator to quickly generate "random" values for our little test program. That means the kernel has to do some computation but it can return very quickly. So this program mostly waits for the kernel to compute for it, and we can see that reflected in the fact that almost all the time is spent on sys.
Now we're going to make one little change. Instead of reading from /dev/urandom which is non-blocking we'll read from /dev/random which is blocking. What does that mean? It doesn't do much computing but rather it waits around for stuff to happen on your computer that the kernel developers have empirically determined is random. (We'll also do far fewer iterations since this stuff takes much longer)
#include <stdio.h>

int main()
{
    FILE * random;
    random = fopen("/dev/random", "r");
    int i;
    for (i = 0; i < 100; i++)
    {
        fgetc(random);
    }
    return 0;
}
And when I run and time this version of the program, here's what I see:
mike@computer:~$ time ./waste_io
real 0m41.451s
user 0m0.000s
sys 0m0.000s
mike@computer:~$
It took 41 seconds to run, but immeasurably small amounts of time on user and sys. Why is that? All the time was spent in the kernel, but not doing active computation. The kernel was just waiting for stuff to happen. Once enough entropy was collected, the kernel would wake back up and send the data back to the program. (Note it might take much less or much more time to run on your computer depending on what all is going on.) I argue that the difference in time between user+sys and real is IO.
So what does all this mean? It doesn't prove that my answer is right because there could be other explanations for why you're seeing the behavior that you are. But it does demonstrate the differences between user compute time, kernel compute time and what I'm claiming is time spent doing IO.
Here's my source for the difference between /dev/urandom and /dev/random:
http://en.wikipedia.org/wiki//dev/random
UPDATE2:
I thought I would try and address Matthew's suggestion that perhaps L2 cache misses are at the root of the problem. The Core i7 has a 64 byte cache line. I don't know how much you know about caches, so I'll provide some details. When you ask for a value from memory the CPU doesn't get just that one value, it gets all 64 bytes around it. That means if you're accessing memory in a very predictable pattern -- like say array[0], array[1], array[2], etc -- it takes a while to get value 0, but then 1, 2, 3, 4... are much faster. Until you get to the next cache line, that is. If this were an array of ints, 0 would be slow, 1..15 would be fast, 16 would be slow, 17..31 would be fast, etc.
http://software.intel.com/en-us/forums/topic/296674
In order to test this out I've made two programs. They both have an array of structs in them with 1024*1024 elements. In one case the struct has a single double in it, in the other it's got 8 doubles in it. A double is 8 bytes long so in the second program we're accessing memory in the worst possible fashion for a cache. The first will get to use the cache nicely.
#include <stdio.h>
#include <stdlib.h>

#define MANY_MEGS 1048576

typedef struct {
    double a;
} PartialLine;

int main()
{
    int i, j;
    PartialLine* many_lines;
    int total_bytes = MANY_MEGS * sizeof(PartialLine);
    printf("Striding through %d total bytes, %zu bytes at a time\n", total_bytes, sizeof(PartialLine));

    many_lines = (PartialLine*) malloc(total_bytes);

    PartialLine line;
    double x;
    for (i = 0; i < 300; i++)
    {
        for (j = 0; j < MANY_MEGS; j++)
        {
            line = many_lines[j];
            x = line.a;
        }
    }
    return 0;
}
When I run this program I see this output:
mike@computer:~$ time ./cache_hits
Striding through 8388608 total bytes, 8 bytes at a time
real 0m3.194s
user 0m3.140s
sys 0m0.016s
mike@computer:~$
Here's the program with the big structs, they each take up 64 bytes of memory, not 8.
#include <stdio.h>
#include <stdlib.h>

#define MANY_MEGS 1048576

typedef struct {
    double a, b, c, d, e, f, g, h;
} WholeLine;

int main()
{
    int i, j;
    WholeLine* many_lines;
    int total_bytes = MANY_MEGS * sizeof(WholeLine);
    printf("Striding through %d total bytes, %zu bytes at a time\n", total_bytes, sizeof(WholeLine));

    many_lines = (WholeLine*) malloc(total_bytes);

    WholeLine line;
    double x;
    for (i = 0; i < 300; i++)
    {
        for (j = 0; j < MANY_MEGS; j++)
        {
            line = many_lines[j];
            x = line.a;
        }
    }
    return 0;
}
And when I run it, I see this:
mike@computer:~$ time ./cache_misses
Striding through 67108864 total bytes, 64 bytes at a time
real 0m14.367s
user 0m14.245s
sys 0m0.088s
mike@computer:~$
The second program -- the one designed to have cache misses -- took nearly five times as long to run for the exact same number of memory accesses.
Also worth noting is that in both cases, all the time spent was spent in user, not sys. That means that the OS is counting the time your program has to wait for data against your program, not against the operating system. Given these two examples I think it's unlikely that cache misses are causing your elapsed time to be substantially longer than your user time.
UPDATE3:
I just saw your update that the really slimmed down table ran about 10x faster than the regular-sized one. That too would indicate to me that (as another Matthew also said) you're running out of RAM.
Once your program tries to use more memory than your computer actually has installed, it starts swapping to disk. This is better than your program crashing, but it's much slower than RAM and can cause substantial slowdowns.
I'll try and put together an example that shows swap problems tomorrow.
UPDATE4:
Okay, here's an example program which is very similar to the previous one. But now the struct is 4096 bytes, not 8 bytes. In total this program will use 2 GB of memory rather than 64 MB. I also change things up a bit and make sure that I access things randomly instead of element-by-element, so that the kernel can't get smart and start anticipating my program's needs. The caches are driven by hardware (solely by simple heuristics), but it's entirely possible that kswapd (the kernel swap daemon) could be substantially smarter than the cache.
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    double numbers[512];
} WholePage;

int main()
{
    int memory_ops = 1024*1024;
    int total_memory = memory_ops / 2;
    int num_chunks = 8;
    int chunk_bytes = total_memory / num_chunks * sizeof(WholePage);

    int i, j, k, l;
    printf("Bouncing through %u MB, %zu bytes at a time\n", chunk_bytes/1024*num_chunks/1024, sizeof(WholePage));

    WholePage* many_pages[num_chunks];
    for (i = 0; i < num_chunks; i++)
    {
        many_pages[i] = (WholePage*) malloc(chunk_bytes);
        if (many_pages[i] == 0) { exit(1); }
    }

    WholePage* page_list;
    WholePage* page;
    double x;
    for (i = 0; i < 300*memory_ops; i++)
    {
        j = rand() % num_chunks;
        k = rand() % (total_memory / num_chunks);
        l = rand() % 512;

        page_list = many_pages[j];
        page = page_list + k;
        x = page->numbers[l];
    }
    return 0;
}
Going from the program I called cache_hits to cache_misses, we saw the memory size increase 8x and the execution time increase about 5x. What do you expect to see when we run this program? It uses 32x as much memory as cache_misses but has the same number of memory accesses.
mike@computer:~$ time ./page_misses
Bouncing through 2048 MB, 4096 bytes at a time
real 2m1.327s
user 1m56.483s
sys 0m0.588s
mike@computer:~$
It took 8x as long as cache_misses and 40x as long as cache_hits. And this is on a computer with 4GB of RAM. I used 50% of my RAM in this program versus 1.5% for cache_misses and 0.2% for cache_hits. It got substantially slower even though it wasn't using up ALL the RAM my computer has. It was enough to be significant.
I hope this is a decent primer on how to diagnose problems with programs running slow.

Using both GPU devices with CUDA and zero-copy pinned memory

I am using the CUSP library for sparse matrix multiplication on a CUDA machine. My current code is:
#include <cusp/coo_matrix.h>
#include <cusp/multiply.h>
#include <cusp/print.h>
#include <cusp/transpose.h>
#include <stdio.h>

#define CATAGORY_PER_SCAN 1000
#define TOTAL_CATAGORY 100000
#define MAX_SIZE 1000000
#define ELEMENTS_PER_CATAGORY 10000
#define ELEMENTS_PER_TEST_CATAGORY 1000
#define INPUT_VECTOR 1000
#define TOTAL_ELEMENTS ELEMENTS_PER_CATAGORY * CATAGORY_PER_SCAN
#define TOTAL_TEST_ELEMENTS ELEMENTS_PER_TEST_CATAGORY * INPUT_VECTOR

int main(void)
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start, 0);

    cusp::coo_matrix<long long int, double, cusp::host_memory> A(CATAGORY_PER_SCAN, MAX_SIZE, TOTAL_ELEMENTS);
    cusp::coo_matrix<long long int, double, cusp::host_memory> B(MAX_SIZE, INPUT_VECTOR, TOTAL_TEST_ELEMENTS);

    for (int i = 0; i < ELEMENTS_PER_TEST_CATAGORY; i++) {
        for (int j = 0; j < INPUT_VECTOR; j++) {
            int index = i * INPUT_VECTOR + j;
            B.row_indices[index] = i; B.column_indices[index] = j; B.values[index] = i;
        }
    }
    for (int i = 0; i < CATAGORY_PER_SCAN; i++) {
        for (int j = 0; j < ELEMENTS_PER_CATAGORY; j++) {
            int index = i * ELEMENTS_PER_CATAGORY + j;
            A.row_indices[index] = i; A.column_indices[index] = j; A.values[index] = i;
        }
    }
    /* cusp::print(A);
       cusp::print(B); */

    // test vector
    cusp::coo_matrix<long int, double, cusp::device_memory> A_d = A;
    cusp::coo_matrix<long int, double, cusp::device_memory> B_d = B;

    // allocate output vector
    cusp::coo_matrix<int, double, cusp::device_memory> y_d(CATAGORY_PER_SCAN, INPUT_VECTOR, CATAGORY_PER_SCAN * INPUT_VECTOR);

    cusp::multiply(A_d, B_d, y_d);
    cusp::coo_matrix<int, double, cusp::host_memory> y = y_d;

    cudaEventRecord(stop, 0);
    cudaEventSynchronize(stop);
    float elapsedTime;
    cudaEventElapsedTime(&elapsedTime, start, stop); // that's our time!
    printf("time elapsed %f ms\n", elapsedTime);
    return 0;
}
The cusp::multiply function uses only one GPU (as far as I understand). My questions:
How can I use setDevice() to run the same program on both GPUs (one cusp::multiply per GPU)?
How can I measure the total time accurately?
How can I use zero-copy pinned memory with this library, given that I can allocate the memory myself?
1. How can I use setDevice() to run the same program on both GPUs?
If you mean "How can I perform a single cusp::multiply operation using two GPUs", the answer is you can't.
EDIT:
For the case where you want to run two separate CUSP sparse matrix-matrix products on different GPUs, it is possible to simply wrap the operations in a loop and call cudaSetDevice before the transfers and the cusp::multiply call (a minimal sketch follows below). You will probably not, however, get any speed-up by doing so. I think I am correct in saying that both the memory transfers and the cusp::multiply operations are blocking calls, so the host CPU will stall until they are finished. Because of this, the calls for different GPUs cannot overlap, and there will be no speed-up over performing the same operation on a single GPU twice. If you were willing to use a multithreaded application and have a host CPU with multiple cores, you could probably still run them in parallel, but it won't be as straightforward host code as it seems you are hoping for.
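A minimal sketch of that loop, assuming two GPUs and reusing the host matrices A and B from the question (the device IDs and what you do with each result are my assumptions):

// One independent CUSP product per GPU, issued sequentially from the host.
for (int dev = 0; dev < 2; dev++)
{
    cudaSetDevice(dev);   // route subsequent allocations, copies and kernels to this GPU

    // host -> device transfers happen on the selected device
    cusp::coo_matrix<long int, double, cusp::device_memory> A_d = A;
    cusp::coo_matrix<long int, double, cusp::device_memory> B_d = B;
    cusp::coo_matrix<int, double, cusp::device_memory> y_d(CATAGORY_PER_SCAN, INPUT_VECTOR,
                                                           CATAGORY_PER_SCAN * INPUT_VECTOR);

    cusp::multiply(A_d, B_d, y_d);                             // blocking call
    cusp::coo_matrix<int, double, cusp::host_memory> y = y_d;  // device -> host copy of the result
    // ... keep or process y for this GPU here ...
}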
2. How can I measure the total time accurately?
The cudaEvent approach you have now is the most accurate way of measuring the execution time of a single kernel. If you had a hypothetical multi-GPU scheme, then the sum of the events from each GPU context would be the total execution time of the kernels. If, by total time, you mean the "wallclock" time to complete the operation, then you would need to use a host timer around the whole multi-GPU segment of your code (a sketch is shown below). I vaguely recall that it might be possible in the latest versions of CUDA to synchronize between events in streams from different contexts in some circumstances, so a CUDA event based timer might still be usable in such a scenario.
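A sketch of such a host wall-clock timer wrapped around the multi-GPU segment, assuming gettimeofday is available (any host timer would do):

#include <sys/time.h>

struct timeval t0, t1;
gettimeofday(&t0, NULL);

// ... the whole multi-GPU loop from above goes here ...

// make sure every GPU has finished before stopping the clock
for (int dev = 0; dev < 2; dev++) { cudaSetDevice(dev); cudaDeviceSynchronize(); }

gettimeofday(&t1, NULL);
double wallclock_ms = (t1.tv_sec - t0.tv_sec) * 1000.0 + (t1.tv_usec - t0.tv_usec) / 1000.0;
printf("wallclock time %f ms\n", wallclock_ms);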
3. How can I use zero-copy pinned memory with this library, given that I can allocate the memory myself?
To the best of my knowledge, that isn't possible. The underlying Thrust library which CUSP uses can support containers backed by zero-copy memory, but CUSP doesn't expose the necessary mechanisms in its standard matrix constructors to allocate a CUSP sparse matrix in zero-copy memory.
