Largest allocatable array in VC++ 2010 [duplicate] - visual-studio-2010

Is there a max length for an array in C++?
Is it a C++ limit or does it depend on my machine? Is it tweakable? Does it depend on the type the array is made of?
Can I break that limit somehow or do I have to search for a better way of storing information? And what should be the simplest way?
What I have to do is store long long ints in an array; I'm working in a Linux environment. My question is: what do I have to do if I need to store an array of N long long integers, with N more than 10 digits long?
I need this because I'm writing a cryptographic algorithm (for example the p-Pollard) for school, and I've hit this wall of integer size and array-length limits.

Nobody mentioned the limit on the size of the stack frame.
There are two places memory can be allocated:
On the heap (dynamically allocated memory).
The size limit here is a combination of available hardware and the OS's ability to simulate space by using other devices to temporarily store unused data (i.e. move pages to hard disk).
On the stack (Locally declared variables).
The size limit here is compiler defined (with possible hardware limits). If you read the compiler documentation you can often tweak this size.
Thus, if you allocate the array dynamically, the limit is large (and described in detail by other answers):
int* a1 = new int[SIZE]; // SIZE limited only by OS/Hardware
Alternatively if the array is allocated on the stack then you are limited by the size of the stack frame. N.B. vectors and other containers have a small presence in the stack but usually the bulk of the data will be on the heap.
int a2[SIZE]; // SIZE limited by COMPILER to the size of the stack frame
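If the array would not fit in a stack frame, the usual workaround is std::vector, which keeps only a small handle on the stack and places the elements on the heap; a minimal sketch (SIZE chosen arbitrarily):
#include <cstddef>
#include <vector>

int main()
{
    const std::size_t SIZE = 100000000;   // ~400 MB of ints: far too big for the stack
    std::vector<int> a3(SIZE);            // elements live on the heap, zero-initialized
    a3[SIZE - 1] = 42;                    // use it like an array
    return 0;                             // memory is released automatically
}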

There are two limits, neither enforced by C++ but rather by the hardware.
The first limit (which should never be reached) is set by the restrictions of the size type used to describe an index into the array (and its size). It is given by the maximum value the system's std::size_t can take; this data type is large enough to contain the size in bytes of any object.
The other limit is a physical memory limit. The larger the objects in the array are, the sooner this limit is reached, because memory fills up. For example, a vector<int> of a given size n typically takes several times as much memory as a vector<char> of the same size (minus a small constant), since int is usually bigger than char. Therefore, a vector<char> may contain more items than a vector<int> before memory is full. The same goes for raw C-style arrays like int[] and char[].
Additionally, this upper limit may be influenced by the type of allocator used to construct the vector, because an allocator is free to manage memory any way it wants. A very odd but nonetheless conceivable allocator could pool memory in such a way that identical instances of an object share resources. This way, you could insert a lot of identical objects into a container that would otherwise use up all the available memory.
Apart from that, C++ doesn't enforce any limits.
Apart from that, C++ doesn't enforce any limits.

Looking at it from a practical rather than theoretical standpoint, on a 32 bit Windows system, the maximum total amount of memory available for a single process is 2 GB. You can break the limit by going to a 64 bit operating system with much more physical memory, but whether to do this or look for alternatives depends very much on your intended users and their budgets. You can also extend it somewhat using PAE.
The type of the array is very important, as default structure alignment on many compilers is 8 bytes, which is very wasteful if memory usage is an issue. If you are using Visual C++ to target Windows, check out the #pragma pack directive as a way of overcoming this.
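For instance, a small sketch of the #pragma pack idea (the struct and numbers are illustrative, not from the answer; MSVC and GCC/Clang both accept this pragma):
#include <cstdio>

#pragma pack(push, 1)   // pack members with 1-byte alignment: no padding
struct Record {         // a made-up example struct
    long long key;      // 8 bytes
    char flag;          // 1 byte
};
#pragma pack(pop)       // restore the default packing for later declarations

int main()
{
    // 9 bytes when packed; typically 16 with default 8-byte alignment
    std::printf("sizeof(Record) = %zu bytes\n", sizeof(Record));
    return 0;
}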
Another thing to do is look at what in-memory compression techniques might help you, such as sparse matrices, on-the-fly compression, etc. Again, this is highly application dependent. If you edit your post to give some more information as to what is actually in your arrays, you might get more useful answers.
Edit: Given a bit more information on your exact requirements, your storage needs appear to be between 7.6 GB and 76 GB uncompressed, which would require a rather expensive 64 bit box to store as an array in memory in C++. It raises the question of why you want to store the data in memory, where one presumes it is for speed of access and to allow random access. The best way to store this data outside of an array depends largely on how you want to access it. If you need to access array members randomly, for most applications there tend to be ways of grouping clumps of data that get accessed at the same time. For example, in large GIS and spatial databases, data often gets tiled by geographic area. In C++ programming terms, you can override the [] array operator to fetch portions of your data from external storage as required.
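As a hedged illustration of that last idea (my own sketch, not the answer's code): a class whose operator[] loads one tile of long long values from a backing file on demand. The tile size, file name and error handling are all placeholder choices.
#include <cstddef>
#include <cstdio>
#include <vector>

class DiskArray {
    std::FILE *f_;
    static const std::size_t TILE = 1 << 20;   // elements cached per tile
    std::vector<long long> tile_;
    std::size_t tileIndex_;
public:
    explicit DiskArray(const char *path)
        : f_(std::fopen(path, "rb")), tile_(TILE), tileIndex_(std::size_t(-1)) {}
    ~DiskArray() { if (f_) std::fclose(f_); }

    long long operator[](std::size_t i) {
        std::size_t t = i / TILE;
        if (t != tileIndex_) {                 // element i is not in the cached tile
            std::fseek(f_, long(t * TILE * sizeof(long long)), SEEK_SET);
            std::fread(&tile_[0], sizeof(long long), TILE, f_);
            tileIndex_ = t;
        }
        return tile_[i % TILE];
    }
};
A real implementation would use 64-bit file offsets (e.g. fseeko) and handle the final, partial tile.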

As annoyingly non-specific as all the current answers are, they're mostly right, but with many caveats that are not always mentioned. The gist is that you have two upper limits, and only one of them is actually defined, so YMMV:
1. Compile-time limits
Basically, what your compiler will allow. For Visual C++ 2017 on an x64 Windows 10 box, this is my max limit at compile time before hitting the 2GB limit:
unsigned __int64 max_ints[255999996]{0};
If I did this instead,
unsigned __int64 max_ints[255999997]{0};
I'd get:
Error C1126 automatic allocation exceeds 2G
I'm not sure how 2G correlates to 255999996/7. I googled both numbers, and the only thing I could find that was possibly related was this *nix Q&A about a precision issue with dc. Either way, it doesn't appear to matter which type of int array you're trying to fill, just how many elements can be allocated.
2. Run-time limits
Your stack and heap have their own limitations. These limits are both values that change based on available system resources, as well as how "heavy" your app itself is. For example, with my current system resources, I can get this to run:
int main()
{
int max_ints[257400]{ 0 };
return 0;
}
But if I tweak it just a little bit...
int main()
{
int max_ints[257500]{ 0 };
return 0;
}
Bam! Stack overflow!
Exception thrown at 0x00007FF7DC6B1B38 in memchk.exe: 0xC00000FD:
Stack overflow (parameters: 0x0000000000000001, 0x000000AA8DE03000).
Unhandled exception at 0x00007FF7DC6B1B38 in memchk.exe: 0xC00000FD:
Stack overflow (parameters: 0x0000000000000001, 0x000000AA8DE03000).
And just to detail the whole heaviness of your app point, this was good to go:
int main()
{
int maxish_ints[257000]{ 0 };
int more_ints[400]{ 0 };
return 0;
}
But this caused a stack overflow:
int main()
{
int maxish_ints[257000]{ 0 };
int more_ints[500]{ 0 };
return 0;
}

I would agree with the above: if you're initializing your array with
int myArray[SIZE]
then SIZE is limited by the size of an integer. But you can always malloc a chunk of memory and keep a pointer to it, as big as you want, so long as malloc doesn't return NULL.
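For example (a sketch; the size here is arbitrary and the cast is only needed because this is C++):
#include <cstdio>
#include <cstdlib>

int main()
{
    std::size_t n = 500000000;     // 5*10^8 long longs, roughly 4 GB
    long long *p = static_cast<long long *>(std::malloc(n * sizeof(long long)));
    if (p == NULL) {               // allocation failed: too big for this machine
        std::puts("malloc returned NULL");
        return 1;
    }
    p[n - 1] = 123;                // use it like an ordinary array
    std::free(p);
    return 0;
}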

To summarize the responses, extend them, and to answer your question directly:
No, C++ does not impose any limits for the dimensions of an array.
But as the array has to be stored somewhere in memory, memory-related limits imposed by other parts of the computer system apply. Note that these limits do not directly relate to the dimensions (= number of elements) of the array, but rather to its size (= amount of memory taken). The dimensions (D) and the in-memory size (S) of an array are not the same; they are related by the memory taken by a single element (E): S = D * E.
Now E depends on:
the type of the array elements (elements can be smaller or bigger)
memory alignment (to increase performance, elements are placed at addresses which are multiples of some value, which introduces 'wasted space' (padding) between elements)
size of static parts of objects (in object-oriented programming, static components of objects of the same type are stored only once, independent of the number of such same-type objects)
Also note that you generally get different memory-related limitations by allocating the array data on the stack (as an automatic variable: int t[N]), on the heap (dynamic allocation with malloc()/new or using STL mechanisms), or in the static part of process memory (as a static variable: static int t[N]). Even when allocating on the heap, you still need a tiny amount of memory on the stack to store references to the heap-allocated blocks of memory (but this is usually negligible).
The size of the size_t type has no practical influence on the programmer (I assume the programmer uses size_t for indexing, as it is designed for that), because the compiler provider has to typedef it to an integer type big enough to address the maximum amount of memory possible for the given platform architecture.
The memory-size limitations stem from
the amount of memory available to the process (which is limited to 2^32 bytes for 32-bit applications, even on 64-bit OS kernels),
the division of process memory (e.g. the amount of process memory assigned to stack or heap),
the fragmentation of physical memory (many scattered small free memory fragments cannot hold one monolithic structure),
the amount of physical memory,
and the amount of virtual memory.
They cannot be 'tweaked' at the application level, but you are free to use a different compiler (to change stack size limits), or port your application to 64 bits, or port it to another OS, or change the physical/virtual memory configuration of the (virtual? physical?) machine.
It is not uncommon (and even advisable) to treat all the above factors as external disturbances and thus as possible sources of runtime errors, and to carefully check and react to memory-allocation related errors in your program code.
So finally: while C++ does not impose any limits, you still have to check for adverse memory-related conditions when running your code... :-)
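For example, a minimal sketch of such checks, using both the throwing and the non-throwing allocation forms (the sizes are deliberately absurd):
#include <iostream>
#include <new>
#include <vector>

int main()
{
    try {
        std::vector<long long> big(1ULL << 40);    // ~8 TiB: should fail on most machines
    } catch (const std::bad_alloc &e) {
        std::cerr << "vector allocation failed: " << e.what() << '\n';
    }

    long long *p = new (std::nothrow) long long[1ULL << 40];
    if (!p)
        std::cerr << "new[] returned a null pointer\n";
    delete[] p;                                    // safe even if p is null
    return 0;
}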

As many excellent answers have noted, there are a lot of limits that depend on your C++ compiler version, operating system and computer characteristics. However, I suggest the following Python script, which checks the limit on your machine.
It uses binary search, and on each iteration checks whether the middle size is possible by generating a source file that attempts to create an array of that size. The script tries to compile it (sorry, this part works only on Linux) and adjusts the binary search depending on the success. Check it out:
import os
cpp_source = 'int a[{}]; int main() {{ return 0; }}'
def check_if_array_size_compiles(size):
    # Write the source to 1.cpp
    f = open('1.cpp', 'w')
    f.write(cpp_source.format(size))
    f.close()
    # Attempt to compile it
    os.system('g++ 1.cpp 2> errors')
    # Read the errors file
    errors = open('errors', 'r').read()
    # Return whether there were no errors
    return len(errors) == 0
# Do a binary search. Try to create an array of size m and
# adjust the l and r borders depending on whether we succeeded
# or not
l = 0
r = 10 ** 50
while r - l > 1:
    m = (r + l) // 2
    if check_if_array_size_compiles(m):
        l = m
    else:
        r = m
answer = l + check_if_array_size_compiles(r)
print('{} is the maximum available length'.format(answer))
You can save it to your machine and launch it, and it will print the maximum size you can create. For my machine it is 2305843009213693951.

I'm surprised the max_size() member function of std::vector has not been mentioned here.
"Returns the maximum number of elements the container is able to hold due to system or library implementation limitations, i.e. std::distance(begin(), end()) for the largest container."
We know that std::vector is implemented as a dynamic array underneath the hood, so max_size() should give a very close approximation of the maximum length of a dynamic array on your machine.
The following program builds a table of approximate maximum array length for various data types.
#include <iostream>
#include <vector>
#include <string>
#include <limits>
template <typename T>
std::string mx(T e) {
std::vector<T> v;
return std::to_string(v.max_size());
}
std::size_t maxColWidth(std::vector<std::string> v) {
std::size_t maxWidth = 0;
for (const auto &s: v)
if (s.length() > maxWidth)
maxWidth = s.length();
// Add 2 for space on each side
return maxWidth + 2;
}
constexpr long double maxStdSize_t = std::numeric_limits<std::size_t>::max();
// cs stands for compared to std::size_t
template <typename T>
std::string cs(T e) {
std::vector<T> v;
long double maxSize = v.max_size();
long double quotient = maxStdSize_t / maxSize;
return std::to_string(quotient);
}
int main() {
bool v0 = 0;
char v1 = 0;
int8_t v2 = 0;
int16_t v3 = 0;
int32_t v4 = 0;
int64_t v5 = 0;
uint8_t v6 = 0;
uint16_t v7 = 0;
uint32_t v8 = 0;
uint64_t v9 = 0;
std::size_t v10 = 0;
double v11 = 0;
long double v12 = 0;
std::vector<std::string> types = {"data types", "bool", "char", "int8_t", "int16_t",
"int32_t", "int64_t", "uint8_t", "uint16_t",
"uint32_t", "uint64_t", "size_t", "double",
"long double"};
std::vector<std::string> sizes = {"approx max array length", mx(v0), mx(v1), mx(v2),
mx(v3), mx(v4), mx(v5), mx(v6), mx(v7), mx(v8),
mx(v9), mx(v10), mx(v11), mx(v12)};
std::vector<std::string> quotients = {"max std::size_t / max array size", cs(v0),
cs(v1), cs(v2), cs(v3), cs(v4), cs(v5), cs(v6),
cs(v7), cs(v8), cs(v9), cs(v10), cs(v11), cs(v12)};
std::size_t max1 = maxColWidth(types);
std::size_t max2 = maxColWidth(sizes);
std::size_t max3 = maxColWidth(quotients);
for (std::size_t i = 0; i < types.size(); ++i) {
while (types[i].length() < (max1 - 1)) {
types[i] = " " + types[i];
}
types[i] += " ";
for (int j = 0; sizes[i].length() < max2; ++j)
sizes[i] = (j % 2 == 0) ? " " + sizes[i] : sizes[i] + " ";
for (int j = 0; quotients[i].length() < max3; ++j)
quotients[i] = (j % 2 == 0) ? " " + quotients[i] : quotients[i] + " ";
std::cout << "|" << types[i] << "|" << sizes[i] << "|" << quotients[i] << "|\n";
}
std::cout << std::endl;
std::cout << "N.B. max std::size_t is: " <<
std::numeric_limits<std::size_t>::max() << std::endl;
return 0;
}
On my macOS (clang version 5.0.1), I get the following:
| data types | approx max array length | max std::size_t / max array size |
| bool | 9223372036854775807 | 2.000000 |
| char | 9223372036854775807 | 2.000000 |
| int8_t | 9223372036854775807 | 2.000000 |
| int16_t | 9223372036854775807 | 2.000000 |
| int32_t | 4611686018427387903 | 4.000000 |
| int64_t | 2305843009213693951 | 8.000000 |
| uint8_t | 9223372036854775807 | 2.000000 |
| uint16_t | 9223372036854775807 | 2.000000 |
| uint32_t | 4611686018427387903 | 4.000000 |
| uint64_t | 2305843009213693951 | 8.000000 |
| size_t | 2305843009213693951 | 8.000000 |
| double | 2305843009213693951 | 8.000000 |
| long double | 1152921504606846975 | 16.000000 |
N.B. max std::size_t is: 18446744073709551615
On ideone gcc 8.3 I get:
| data types | approx max array length | max std::size_t / max array size |
| bool | 9223372036854775744 | 2.000000 |
| char | 18446744073709551615 | 1.000000 |
| int8_t | 18446744073709551615 | 1.000000 |
| int16_t | 9223372036854775807 | 2.000000 |
| int32_t | 4611686018427387903 | 4.000000 |
| int64_t | 2305843009213693951 | 8.000000 |
| uint8_t | 18446744073709551615 | 1.000000 |
| uint16_t | 9223372036854775807 | 2.000000 |
| uint32_t | 4611686018427387903 | 4.000000 |
| uint64_t | 2305843009213693951 | 8.000000 |
| size_t | 2305843009213693951 | 8.000000 |
| double | 2305843009213693951 | 8.000000 |
| long double | 1152921504606846975 | 16.000000 |
N.B. max std::size_t is: 18446744073709551615
It should be noted that this is a theoretical limit and that on most computers, you will run out of memory long before you reach it. For example, we see that for type char on gcc, the maximum number of elements is equal to the maximum of std::size_t. Trying this, we get the error:
prog.cpp: In function ‘int main()’:
prog.cpp:5:61: error: size of array is too large
char* a1 = new char[std::numeric_limits<std::size_t>::max()];
Lastly, as @MartinYork points out, for static arrays the maximum size is limited by the size of your stack.

One thing I don't think has been mentioned in the previous answers.
I always sense a "bad smell" (in the refactoring sense) when people use such things in their design.
That's a huge array, and possibly not the best way to represent your data, from both an efficiency and a performance point of view.
cheers,
Rob

If you have to deal with data that large, you'll need to split it up into manageable chunks. It won't all fit into memory on any small computer. You can probably load a portion of the data from disk (whatever reasonably fits), perform your calculations and changes on it, store it back to disk, then repeat until complete.

As has already been pointed out, array size is limited by your hardware and your OS (man ulimit). Your software, though, may be limited only by your creativity. For example, can you store your "array" on disk? Do you really need long long ints? Do you really need a dense array? Do you even need an array at all?
One simple solution would be to use 64-bit Linux. Even if you do not physically have enough RAM for your array, the OS will allow you to allocate memory as if you did, since the virtual memory available to your process is likely much larger than the physical memory. If you really need to access everything in the array, this amounts to storing it on disk. Depending on your access patterns, there may be more efficient ways of doing this (e.g. using mmap(), or simply storing the data sequentially in a file, in which case 32-bit Linux would suffice).
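A hedged sketch of the mmap() route on 64-bit Linux (file name and size are placeholders, error handling is minimal): the kernel pages the data in and out for you, so the "array" can be far larger than RAM.
#include <fcntl.h>
#include <sys/mman.h>
#include <unistd.h>

int main()
{
    const size_t N = 1000000000;                    // 10^9 long longs, about 8 GB
    int fd = open("bigarray.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0 || ftruncate(fd, N * sizeof(long long)) != 0)
        return 1;

    long long *a = static_cast<long long *>(
        mmap(NULL, N * sizeof(long long), PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));
    if (a == MAP_FAILED)
        return 1;

    a[N - 1] = 42;                                  // use it like a normal array
    munmap(a, N * sizeof(long long));
    close(fd);
    return 0;
}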

I would work around this by making a 2-D dynamic array:
long long** a = new long long*[x];
for (unsigned i = 0; i < x; i++) a[i] = new long long[y];
More on this here: https://stackoverflow.com/a/936702/3517001
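An alternative sketch (mine, not from the linked answer): allocate one contiguous block and index it manually, which avoids x separate allocations and keeps the data cache-friendly:
#include <cstddef>

int main()
{
    std::size_t x = 1000, y = 1000;         // placeholder dimensions
    long long *a = new long long[x * y];    // one contiguous block
    a[5 * y + 7] = 42;                      // element (5, 7), i.e. a[i * y + j]
    delete[] a;                             // a single delete[] cleans it all up
    return 0;
}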

Related

Optimize Cuda Kernel time execution

I'm a student learning CUDA, and I would like to optimize the execution time of my kernel function. To that end, I wrote a short program computing the difference between two pictures, and compared the execution time of a classic CPU implementation in C against a GPU implementation in CUDA C.
Here you can find the code I'm talking about:
int *imgresult_data = (int *) malloc(width*height*sizeof(int));
int size = width*height;
switch(computing_type)
{
case GPU:
HANDLE_ERROR(cudaMalloc((void**)&dev_data1, size*sizeof(unsigned char)));
HANDLE_ERROR(cudaMalloc((void**)&dev_data2, size*sizeof(unsigned char)));
HANDLE_ERROR(cudaMalloc((void**)&dev_data_res, size*sizeof(int)));
HANDLE_ERROR(cudaMemcpy(dev_data1, img1_data, size*sizeof(unsigned char), cudaMemcpyHostToDevice));
HANDLE_ERROR(cudaMemcpy(dev_data2, img2_data, size*sizeof(unsigned char), cudaMemcpyHostToDevice));
HANDLE_ERROR(cudaMemcpy(dev_data_res, imgresult_data, size*sizeof(int), cudaMemcpyHostToDevice));
float time;
cudaEvent_t start, stop;
HANDLE_ERROR( cudaEventCreate(&start) );
HANDLE_ERROR( cudaEventCreate(&stop) );
HANDLE_ERROR( cudaEventRecord(start, 0) );
for(int m = 0; m < nb_loops ; m++)
{
diff<<<height, width>>>(dev_data1, dev_data2, dev_data_res);
}
HANDLE_ERROR( cudaEventRecord(stop, 0) );
HANDLE_ERROR( cudaEventSynchronize(stop) );
HANDLE_ERROR( cudaEventElapsedTime(&time, start, stop) );
HANDLE_ERROR(cudaMemcpy(imgresult_data, dev_data_res, size*sizeof(int), cudaMemcpyDeviceToHost));
printf("Time to generate: %4.4f ms \n", time/nb_loops);
break;
case CPU:
clock_t begin = clock(), diff;
for (int z=0; z<nb_loops; z++)
{
// Apply the difference between 2 images
for (int i = 0; i < height; i++)
{
tmp = i*imgresult_pitch;
for (int j = 0; j < width; j++)
{
imgresult_data[j + tmp] = (int) img2_data[j + tmp] - (int) img1_data[j + tmp];
}
}
}
diff = clock() - begin;
float msec = diff*1000/CLOCKS_PER_SEC;
msec = msec/nb_loops;
printf("Time taken %4.4f milliseconds", msec);
break;
}
And here is my kernel function:
__global__ void diff(unsigned char *data1 ,unsigned char *data2, int *data_res)
{
int row = blockIdx.x;
int col = threadIdx.x;
int v = col + row*blockDim.x;
if (row < MAX_H && col < MAX_W)
{
data_res[v] = (int) data2[v] - (int) data1[v];
}
}
I obtained these execution times for each one:
CPU: 1.3210 ms
GPU: 0.3229 ms
I wonder why the GPU result is not as low as it should be. I am a beginner in CUDA, so please be understanding if there are some classic errors.
EDIT1:
Thank you for your feedback. I tried removing the 'if' condition from the kernel, but it didn't significantly change my program's execution time.
However, after installing the CUDA profiler, it told me that my threads weren't running concurrently. I don't understand why I get this kind of message, but it seems plausible, because my application is only 5 or 6 times faster with the GPU than with the CPU. This ratio should be greater, because each thread is supposed to process one pixel concurrently with all the others. If you have an idea of what I am doing wrong, it would be helpful...
Flow.
Here are a few things you could do which may improve the performance of your diff kernel:
1. Let each thread do more work
In your kernel, each thread handles just a single element; but having a thread do anything at all already carries a bunch of overhead, at the block and thread level, including obtaining the parameters, checking the condition and doing address arithmetic. Now, you could say "Oh, but the reads and writes take much more time than that; this overhead is negligible" - but you would be ignoring the fact that the latency of these reads and writes is hidden by the presence of many other warps which may be scheduled to do their work.
So, let each thread process more than a single element. Say, 4, as each thread can easily read 4 bytes at once into a register. Or even 8 or 16; experiment with it. Of course you'll need to adjust your grid and block parameters accordingly.
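A rough sketch of what that might look like for the diff kernel, with 4 elements per thread (the factor of 4 and the launch hint are my assumptions, not measured values):
// Each thread processes 4 consecutive pixels; n is the total number of pixels.
// Launch with roughly (n / 4 + blockDim - 1) / blockDim blocks instead of one thread per pixel.
__global__ void diff4(const unsigned char *data1, const unsigned char *data2,
                      int *data_res, int n)
{
    int base = (blockIdx.x * blockDim.x + threadIdx.x) * 4;
    #pragma unroll
    for (int k = 0; k < 4; ++k) {
        int v = base + k;
        if (v < n)
            data_res[v] = (int) data2[v] - (int) data1[v];
    }
}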
2. "Restrict" your pointers
__restrict is not part of C++, but it is supported in CUDA. It tells the compiler that accesses through different pointers passed to the function never overlap. See:
What does the restrict keyword mean in C++?
Realistic usage of the C99 'restrict' keyword?
Using it allows the CUDA compiler to apply additional optimizations, e.g. loading or storing data via non-coherent cache. Indeed, this happens with your kernel although I haven't measured the effects.
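For example, the poster's kernel with restrict-qualified (and const-qualified) pointers might look like this; note that CUDA spells the qualifier __restrict__:
__global__ void diff(const unsigned char * __restrict__ data1,
                     const unsigned char * __restrict__ data2,
                     int * __restrict__ data_res)
{
    int row = blockIdx.x;
    int col = threadIdx.x;
    int v = col + row * blockDim.x;
    if (row < MAX_H && col < MAX_W)
    {
        data_res[v] = (int) data2[v] - (int) data1[v];
    }
}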
3. Consider using a "SIMD" instruction
CUDA offers this intrinsic:
__device__ unsigned int __vsubss4 ( unsigned int a, unsigned int b )
Which subtracts each signed byte value in a from its corresponding one in b. If you can "live" with the result, rather than expecting a larger int variable, that could save you some work - and goes very well with increasing the number of elements per thread. In fact, it might let you increase it even further to get to the optimum.
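A sketch of how that could look here, under my assumptions that the pixel count is a multiple of 4, the buffers are 4-byte aligned, and a saturated signed-char difference is acceptable instead of an int result:
// Each thread handles 4 pixels packed into one unsigned int; n4 = pixel count / 4.
// The caller reinterprets the unsigned char device buffers as unsigned int pointers.
__global__ void diff_simd(const unsigned int *data1, const unsigned int *data2,
                          unsigned int *data_res, int n4)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n4)
        data_res[i] = __vsubss4(data2[i], data1[i]);   // 4 saturated byte differences at once
}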
I don't think you are measuring times correctly; memory copy is a time-consuming step on the GPU that you should take into account when measuring your time.
I see some details that you can test:
I suppose you are using MAX_H and MAX_W as constants; you may consider doing so using cudaMemcpyToSymbol().
Remember to sync your threads using __syncthreads(), so you don't get issues between each loop iteration.
CUDA works with warps, so block and number of threads per block work better as multiples of 8, but not larger than 512 threads per block unless your hardware supports it. Here is an example using 128 threads per block: <<<(cols*rows+127)/128,128>>>.
Remember as well to free your allocated GPU memory and to destroy the timing events you created.
In your kernel function you can have a single variable int v = threadIdx.x + blockIdx.x * blockDim.x.
Have you tested, besides the execution time, that your result is correct? I think you should use cudaMallocPitch() and cudaMemcpy2D() while working with arrays, due to padding.
Probably there are other issues with the code, but here's what I see. The following lines in __global__ void diff are not optimal:
if (row < MAX_H && col < MAX_W)
{
data_res[v] = (int) data2[v] - (int) data1[v];
}
Conditional operators inside a kernel result in warp divergence. It means that the if and else parts inside a warp are executed in sequence, not in parallel. Also, as you might have realized, the if evaluates to false only at the borders. To avoid the divergence and needless computation, split your image into two parts:
Central part where row < MAX_H && col < MAX_W is always true. Create an additional kernel for this area. if is unnecessary here.
Border areas that will use your diff kernel.
Obviously you'll have to modify the code that calls the kernels.
And on a separate note:
The GPU has a throughput-oriented architecture, not a latency-oriented one like the CPU. That means the CPU may be faster than CUDA when it comes to processing small amounts of data. Have you tried using larger data sets?
The CUDA Profiler is a very handy tool that will tell you where your code is not optimal.

Understanding Cache memory

I am trying to understand how cache memory reads and writes, and to determine the hit and miss rate. I have tried reading and re-reading the textbook "Computer Systems: A Programmer's Perspective" over and over and can't seem to grasp this idea. Maybe someone can help me understand this:
I am working with a two-dimensional array which has 480 rows and 640 columns. The cache is direct-mapped and 64 KB with 4 byte lines. Below is the C-code:
struct pixel {
char r;
char g;
char b;
char a;
};
struct pixel buffer[480][640];
register int i, j;
register char *cptr;
register int *iptr;
sizeof(char) == 1, meaning each element of the array consists of 4 bytes (if I am understanding that correctly). The buffer begins at memory address 0 and the cache is initially empty (cold cache). The only memory accesses are to the entries of the array. All other variables are stored in registers.
for (j=0; j < 640; j++) {
for (i=0; i < 480; i++){
buffer[i][j].r = 0;
buffer[i][j].g = 0;
buffer[i][j].b = 0;
buffer[i][j].a = 0;
}
}
The code above initializes all the elements in the array to 0, so it must be writing. I can see that this has bad locality, because the array is being written column by column instead of row by row. Doesn't that affect the miss rate? I am trying to determine the miss rate for this code based on the cache size. I think the miss rate is 100%, and if the locality were row by row then it would be 25%. But I am not totally understanding how cache memory works, so... can anyone tell me something that could help me understand this better?
I would recommend you watch the whole tutorial if you are a beginner.
But for your question, lectures 27 to 31 explain everything.
https://www.youtube.com/watch?v=tGarzP488Wc&index=29&list=PL2F82ECDF8BB71B0C
IISc Bangalore.
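Beyond the lectures, one way to check the numbers for this exact exercise is to simulate the cache directly. The sketch below (my own, assuming a write-allocate policy) models a 64 KB direct-mapped cache with 4-byte lines and replays the column-major writes; because each 4-byte pixel fills exactly one line, it reports a 25% miss rate, independent of the loop order.
#include <cstdio>
#include <vector>

int main()
{
    const long LINE = 4, LINES = 64 * 1024 / LINE;     // 16384 lines of 4 bytes
    std::vector<long> tag(LINES, -1);                  // -1 marks an empty line (cold cache)
    long hits = 0, misses = 0;

    for (long j = 0; j < 640; ++j)                     // column-major, as in the question
        for (long i = 0; i < 480; ++i)
            for (long b = 0; b < 4; ++b) {             // writes to r, g, b, a
                long addr = (i * 640 + j) * 4 + b;     // buffer starts at address 0
                long line = (addr / LINE) % LINES;
                long t    = addr / LINE / LINES;
                if (tag[line] == t) ++hits;
                else { ++misses; tag[line] = t; }      // write miss, line allocated
            }

    std::printf("miss rate = %.1f%%\n", 100.0 * misses / (hits + misses));
    return 0;
}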

Need help explaining some CUDA performance results

We have been experimenting with different histogramming algorithms on a CUDA GPU. Most of the results I can explain, but we noticed some really weird features, and I have no clue what is causing them.
Kernels
The weird stuff happens in a data-parallel implementation. This means that the data is distributed over the threads. Each thread looks at a subset (ideally just 1) of the data, and adds its contribution to a histogram in global memory, which requires atomic operations.
__global__ void histogram1(float *data, uint *hist, uint n, float xMin, float binWidth, uint nBins)
{
uint const nThreads = blockDim.x * gridDim.x;
uint const tid = threadIdx.x + blockIdx.x * blockDim.x;
uint idx = tid;
while (idx < n)
{
float x = data[idx];
uint bin = (x - xMin) / binWidth;
atomicAdd(hist + bin, 1);
idx += nThreads;
}
}
As a first optimization, each block first constructs a partial histogram in shared memory before doing a reduction of partial histograms to obtain the final result in global memory. The code is pretty straightforward, and I believe that it's very similar to that used in Cuda By Example.
__global__ void histogram2(float *data, uint *hist, uint n,
float xMin, float binWidth, uint nBins)
{
extern __shared__ uint partialHist[]; // size = nBins * sizeof(uint)
uint const nThreads = blockDim.x * gridDim.x;
uint const tid = threadIdx.x + blockIdx.x * blockDim.x;
// initialize shared memory to 0
uint idx = threadIdx.x;
while (idx < nBins)
{
partialHist[idx] = 0;
idx += blockDim.x;
}
__syncthreads();
// Calculate partial histogram (in shared mem)
idx = tid;
while (idx < n)
{
float x = data[idx];
uint bin = (x - xMin) / binWidth;
atomicAdd(partialHist + bin, 1);
idx += nThreads;
}
__syncthreads();
// Compute resulting total (global) histogram
idx = threadIdx.x;
while (idx < nBins)
{
atomicAdd(hist + idx, partialHist[idx]);
idx += blockDim.x;
}
}
Results
Speedup vs n
We benchmarked these two kernels to see how they behave as a function of n, which is the number of datapoints. The data was uniform randomly distributed. In the figure below, HIST_DP_1 is the unoptimized trivial version, whereas HIST_DP_2 is the one using shared memory to speed things up:
The timings have been taken relative to the CPU performance, and the weird stuff happens for very large datasets. The optimized version, instead of flattening out like the unoptimized one, starts to improve again (relatively). We'd expect that for large datasets the occupancy of our card will be near 100%, which would mean that from that point on the performance would scale linearly, like the CPU (and indeed the unoptimized blue curve).
The behavior could be due to the fact that the chance of two threads performing an atomic operation on the same bin in shared/global memory goes to zero for large datasets, but in that case we would expect the drop to be in different places for different nBins. This is not what we observe: the drop is in all three panels at around 10^7 bins. What is happening here? Some complicated caching effect? Or is it something obvious that we missed?
Speedup vs nBins
To have a closer look at the behavior as a function of the number of bins, we fixed our dataset at 10^4 (10^5 in one case), and ran the algorithms for many different bin-numbers.
As a reference we also generated some non-random data. The red graph shows the results for perfectly sorted data, whereas the light-blue line corresponds to a dataset in which every value was identical (maximal congestion in the atomic operations). The question is obvious: what is the discontinuity doing there?
System Setup
NVidia Tesla M2075, driver 319.37
Cuda 5.5
Intel(R) Xeon(R) CPU E5-2603 0 @ 1.80GHz
Thanks for your help!
EDIT: Reproduction Case
As requested: a compiling, runnable reproduction case. The code is quite long, which is why I didn't include it in the first place. The snippet is available on snipplr. To make your life even easier, I'll include a little shell script to run it with the same settings I used, and an Octave script to produce the plots.
Shell script
#!/bin/bash
runs=100
# format: [n] [nBins] [t_cpu] [t_gpu1] [t_gpu2]
for nBins in 100 1000 10000
do
for n in 10 50 100 200 500 1000 2000 5000 10000 50000 100000 500000 1000000 10000000 100000000
do
echo -n "$n $nBins "
./repro $n $nBins $runs
done
done
Octave script
T = load('repro.txt');
bins = unique(T(:,2));
t = cell(1, numel(bins));
for i = 1:numel(bins)
t{i} = T(T(:,2) == bins(i), :);
subplot(2, numel(bins), i);
loglog(t{i}(:,1), t{i}(:,3:5))
title(sprintf("nBins = %d", bins(i)));
legend("cpu", "gpu1", "gpu2");
subplot(2, numel(bins), i + numel(bins));
loglog(t{i}(:,1), t{i}(:,4)./t{i}(:,3), ...
t{i}(:,1), t{i}(:,5)./t{i}(:,3));
title("relative");
legend("gpu1/cpu", "gpu2/cpu");
end
Absolute Timings
Absolute timings show that it's not the CPU slowing down. Instead, the GPU is speeding up relatively:
Regarding question 1:
This is not what we observe, the drop is in all three panels at around 10^7 bins. What is happening here? Some complicated caching effect? Or is it something obvious that we missed?
This drop is due to the limit you've set on the maximum number of blocks (1<<14 == 16384). At n=10^7 in gpuBench2 the limit has kicked in, and each thread starts processing multiple elements. At n=10^8 each thread works on 12 (sometimes 11) elements. If you remove this cap you can see that your performance continues to flatline.
Why is this faster? Multiple elements per thread allows for latency of the load from data to be hidden much better, especially in the case with 10000 bins where you are only able to fit one block on to each SM due to the high shared memory usage. In this case, every element in the block will reach the global load at around the same time, and none will be able to continue until it has completed its load. By having multiple elements we can pipeline these loads, getting many elements per thread for the latency of one.
(You don't see this in gpuBench1 as it is not latency bound, but bandwidth bound to L2. You can see this very quickly if you import the output of nvprof into the visual profiler.)
Regarding question 2:
The question is obvious: what is the discontinuity doing there?
I don't have a Fermi to hand, and I can't reproduce this on my Kepler, so I'd assume it's something that is Fermi specific. That's the danger of answering questions with two parts, I suppose!

OpenCL very low GFLOPS, no data transfer bottleneck

I am trying to optimize an algorithm I am running on my GPU (AMD HD6850). I counted the number of floating point operations inside my kernel and measured its execution time. I found it achieves ~20 SP GFLOPS; however, according to the GPU's specs, I should achieve ~1500 GFLOPS.
To find the bottleneck I created a very simple kernel:
kernel void test_gflops(const float d, global float* result)
{
int gid = get_global_id(0);
float cd;
for (int i=0; i<100000; i++)
{
cd = d*i;
}
if (cd == -1.0f)
{
result[gid] = cd;
}
}
Running this kernel I get ~5*10^5 work_items/sec. I count one floating point operation (not sure if that's right, what about incrementing i and comparing it to 100000?) per iteration of the loop.
==> 5*10^5 work_items/sec * 10^5 FLOPS = 50 GFLOPS.
Even if there are 3 or 4 operations going on in the loop, it's much slower than what the card should be able to do. What am I doing wrong?
The global work size is big enough (no speed change for 10k vs 100k work items).
Here are a couple of tricks:
The GPU doesn't like loops at all. Use #pragma unroll to unroll them.
Your GPU is good at vector operations. Stick to them; that will allow you to process multiple operands at once.
Use vector load/store wherever possible.
Measure the memory bandwidth - I'm almost sure that you are bandwidth-limited because of a poor access pattern.
In my opinion, the kernel should look like this:
typedef union floats{
    float16 vector;
    float array[16];
} floats;

kernel void test_gflops(const float d, global float* result)
{
    int gid = get_global_id(0);
    floats cd;
    cd.vector = vload16(gid, result);
    cd.vector *= d;

    #pragma unroll
    for (int i = 0; i < 16; i++)
    {
        if (cd.array[i] == -1.0f) {
            result[gid] = cd.array[i];
        }
    }
}
Make your NDRange bigger to compensate for the difference between 16 and 100000 in the loop condition.

Using both GPU device of CUDA and zero copy pinned memory

I am using the CUSP library for sparse matrix multiplication on a CUDA machine. My current code is:
#include <cusp/coo_matrix.h>
#include <cusp/multiply.h>
#include <cusp/print.h>
#include <cusp/transpose.h>
#include<stdio.h>
#define CATAGORY_PER_SCAN 1000
#define TOTAL_CATAGORY 100000
#define MAX_SIZE 1000000
#define ELEMENTS_PER_CATAGORY 10000
#define ELEMENTS_PER_TEST_CATAGORY 1000
#define INPUT_VECTOR 1000
#define TOTAL_ELEMENTS ELEMENTS_PER_CATAGORY * CATAGORY_PER_SCAN
#define TOTAL_TEST_ELEMENTS ELEMENTS_PER_TEST_CATAGORY * INPUT_VECTOR
int main(void)
{
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
cudaEventRecord(start, 0);
cusp::coo_matrix<long long int, double, cusp::host_memory> A(CATAGORY_PER_SCAN,MAX_SIZE,TOTAL_ELEMENTS);
cusp::coo_matrix<long long int, double, cusp::host_memory> B(MAX_SIZE,INPUT_VECTOR,TOTAL_TEST_ELEMENTS);
for(int i=0; i< ELEMENTS_PER_TEST_CATAGORY;i++){
for(int j = 0;j< INPUT_VECTOR ; j++){
int index = i * INPUT_VECTOR + j ;
B.row_indices[index] = i; B.column_indices[ index ] = j; B.values[index ] = i;
}
}
for(int i = 0;i < CATAGORY_PER_SCAN; i++){
for(int j=0; j< ELEMENTS_PER_CATAGORY;j++){
int index = i * ELEMENTS_PER_CATAGORY + j ;
A.row_indices[index] = i; A.column_indices[ index ] = j; A.values[index ] = i;
}
}
/*cusp::print(A);
cusp::print(B); */
//test vector
cusp::coo_matrix<long int, double, cusp::device_memory> A_d = A;
cusp::coo_matrix<long int, double, cusp::device_memory> B_d = B;
// allocate output vector
cusp::coo_matrix<int, double, cusp::device_memory> y_d(CATAGORY_PER_SCAN, INPUT_VECTOR ,CATAGORY_PER_SCAN * INPUT_VECTOR);
cusp::multiply(A_d, B_d, y_d);
cusp::coo_matrix<int, double, cusp::host_memory> y=y_d;
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
float elapsedTime;
cudaEventElapsedTime(&elapsedTime, start, stop); // that's our time!
printf("time elaplsed %f ms\n",elapsedTime);
return 0;
}
The cusp::multiply function uses only one GPU (as far as I understand).
How can I use setDevice() to run the same program on both GPUs (one cusp::multiply per GPU)?
How can I measure the total time accurately?
How can I use zero-copy pinned memory with this library, as I could if I used malloc myself?
1. How can I use setDevice() to run the same program on both GPUs?
If you mean "How can I perform a single cusp::multiply operation using two GPUs", the answer is you can't.
EDIT:
For the case where you want to run two separate CUSP sparse matrix-matrix products on different GPUs, it is possible to simply wrap the operation in a loop and call cudaSetDevice before the transfers and the cusp::multiply call. You will probably not, however, get any speedup by doing so. I think I am correct in saying that both the memory transfers and the cusp::multiply operations are blocking calls, so the host CPU will stall until they are finished. Because of this, the calls for different GPUs cannot overlap, and there will be no speedup over performing the same operation on a single GPU twice. If you were willing to use a multithreaded application and have a host CPU with multiple cores, you could probably still run them in parallel, but the host code won't be as straightforward as it seems you are hoping for.
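A rough, untested fragment of what that loop could look like, reusing the A and B matrices and the macros from the question (this is my interpretation and would replace the device-side part of main(), not code from CUSP's documentation):
int nDevices = 0;
cudaGetDeviceCount(&nDevices);
for (int dev = 0; dev < nDevices; ++dev)
{
    cudaSetDevice(dev);                                // select the GPU for what follows
    // host -> device transfers happen in these copy constructions
    cusp::coo_matrix<long long int, double, cusp::device_memory> A_d = A;
    cusp::coo_matrix<long long int, double, cusp::device_memory> B_d = B;
    cusp::coo_matrix<long long int, double, cusp::device_memory> y_d(
        CATAGORY_PER_SCAN, INPUT_VECTOR, CATAGORY_PER_SCAN * INPUT_VECTOR);
    cusp::multiply(A_d, B_d, y_d);                     // runs on device `dev`
    cusp::coo_matrix<long long int, double, cusp::host_memory> y = y_d;
    // ... keep or merge `y` per device as needed ...
}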
2. Measure the total time accurately
The cudaEvent approach you have now is the most accurate way of measuring the execution time of a single kernel. If you had a hypothetical multi-GPU scheme, then the sum of the event timings from each GPU context would be the total execution time of the kernels. If, by total time, you mean the "wallclock" time to complete the operation, then you would need to use a host timer around the whole multi-GPU segment of your code. I vaguely recall that it might be possible in the latest versions of CUDA to synchronize between events in streams from different contexts in some circumstances, so a CUDA event based timer might still be usable in such a scenario.
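A minimal sketch of such a host-side wallclock timer, using std::chrono (nothing CUDA-specific; the multi-GPU work goes where the placeholder comment is):
#include <chrono>
#include <cstdio>

int main()
{
    std::chrono::steady_clock::time_point t0 = std::chrono::steady_clock::now();

    // ... transfers and cusp::multiply calls for all devices go here ...

    std::chrono::steady_clock::time_point t1 = std::chrono::steady_clock::now();
    double ms = std::chrono::duration<double, std::milli>(t1 - t0).count();
    std::printf("total wallclock time: %.3f ms\n", ms);
    return 0;
}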
3. How can I use zero-copy pinned memory with this library, as I can with malloc myself?
To the best of my knowledge that isn't possible. The underlying Thrust library that CUSP uses can support containers using zero-copy memory, but CUSP doesn't expose the necessary mechanisms in the standard matrix constructors to allocate a CUSP sparse matrix in zero-copy memory.
