Access violation error while memset - c++11

typedef struct
{
long nIndex; // object index
TCHAR path[3 * MAX_TEXT_FIELD_SIZE];
}structItems;
void method1(LPCTSTR pInput, LPTSTR pOutput, size_t iSizeOfOutput)
{
size_t iLength = 0;
iLength = _tcslen(pInput);
if (iLength > iSizeOfOutput + sizeof(TCHAR))
iLength = iSizeOfOutput - sizeof(TCHAR);
memset(pOutput, 0, iSizeOfOutput); // Access violation error
}
void main()
{
CString csSysPath = _T("fghjjjjjjjjjjjjjjjj");
structItems *pIndexSyspath = nullptr;
pIndexSyspath = (structItems *)calloc(1, sizeof(structItems) * 15555555); //If i put size as 1555555 then it works well
method1(csSysPath, pIndexSyspath[0].path, (sizeof(TCHAR) * (3 * MAX_TEXT_FIELD_SIZE)));
}
This is a sample program that reproduces the crash.
In the above code, if I use 1555555 as the size instead, it works well (I simply dropped one digit).
This is a 32-bit application running on a 64-bit Windows OS with 16 GB RAM.
Could someone help me understand the reason for the failure and the relation between calloc, the size, and memset?

typedef struct
{
long nIndex; // 4 bytes on Windows
TCHAR path[3 * MAX_TEXT_FIELD_SIZE]; // 1 * 3 * 255 bytes for non-unicode
} structItems;
Assuming a non-Unicode build, TCHAR is 1 byte and MAX_TEXT_FIELD_SIZE is 255, so sizeof(structItems) is 255*3 + 4 = 769 bytes per struct. You are asking for sizeof(structItems) * 15555555, which is more than 11 GiB. That cannot fit into the 2 GiB of address space available to a 32-bit process, so calloc fails and returns NULL. The code never checks the return value, so pIndexSyspath[0].path is an invalid pointer and the memset inside method1 writes to unmapped memory, causing the access violation.
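To avoid the crash, the return value of calloc must be checked before any element is touched. A minimal sketch (assuming the same structItems definition as above; MAX_TEXT_FIELD_SIZE is taken as 255 here for illustration):
#include <cstdio>
#include <cstdlib>
#include <tchar.h>
#define MAX_TEXT_FIELD_SIZE 255
typedef struct
{
    long nIndex;
    TCHAR path[3 * MAX_TEXT_FIELD_SIZE];
} structItems;
int main()
{
    const size_t count = 15555555;
    // Passing the element count to calloc (instead of multiplying by hand)
    // also lets calloc detect arithmetic overflow of the total size.
    structItems *pIndexSyspath = (structItems *)calloc(count, sizeof(structItems));
    if (pIndexSyspath == nullptr)
    {
        // ~11 GiB cannot be satisfied in a 32-bit process, so we end up here.
        printf("calloc failed\n");
        return 1;
    }
    // Safe to use pIndexSyspath[0].path from this point on.
    free(pIndexSyspath);
    return 0;
}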


kernel change MMAP memory but user space not change

Goal: I want to transfer data from a kernel driver to a user-space app in real time.
Method: I use mmap to connect the kernel buffer buffer_K and the user buffer buffer_U. When I write data to buffer_K, buffer_U should change as well.
Problem: When I change buffer_K with memcpy(buffer_K, buffer_another, length), buffer_K changes, but buffer_U does not. My update frequency is 4 ms.
This is my code
In kernel space, when the kernel's work is done, it triggers a signal to notify user space.
static uint8_t *mmap_buffer;
mmap_buffer = (uint8_t *)kmalloc(ads1299.samp_size * ads1299.buff_size, GFP_KERNEL);
int event()
{
memcpy(mmap_buffer, ads1299.buff_a, ads1299.buff_size * ads1299.samp_size);
SEND_SIGNAL_TO_APP;
}
int ads1299_mmap(struct file *flip, struct vm_area_struct *vma)
{
unsigned long page;
unsigned long start = (unsigned long)vma->vm_start;
unsigned long size = (unsigned long)(vma->vm_end - vma->vm_start);
vma->vm_flags |= VM_IO;
vma->vm_flags |= VM_SHARED;
page = virt_to_phys(mmap_buffer);
if(remap_pfn_range(vma,start,page>>PAGE_SHIFT, size, vma->vm_page_prot))
{
return -1;
}
return 0;
}
This is my user code
unsigned char *buffer= NULL;
buffer = (unsigned char*)malloc(charDataLen*sizeof(unsigned char));
buffer = (unsigned char *)mmap(NULL, getpagesize(), PROT_READ, MAP_SHARED, fd, 0);
if(buffer == MAP_FAILED)
{
printf("mmap error\r\n");
return -1;
}
int signal_handle()
{
usebufer(buffer)
}
Could you give me some suggestions about my problem? Thanks.

Why are odd-sized fread requests split in two?

I noticed that on Windows, every time I issue an unbuffered fread() request with an odd length, it is split into two requests (as observed through procmon):
a) an fread for my requested length minus 1
b) a 2-byte fread for the last byte
This has an obvious performance overhead: two kernel requests instead of one, and so on.
Sample code run on Windows 10:
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char* argv[]) {
FILE* pFile;
char* buffer;
pFile = fopen(argv[0], "rb");
setbuf(pFile, nullptr);
size_t len = 3;
buffer = (char*)malloc(sizeof(char)*len);
if (len != fread(buffer, 1, len, pFile)) { fputs("Reading error", stderr); exit(3); }
free(buffer);
fclose(pFile);
return 0;
}
This results in the following procmon reported calls:
ReadFile c:\work\cpptry\Debug\cpptry.exe SUCCESS Offset: 0, Length: 2, Priority: Normal
ReadFile c:\work\cpptry\Debug\cpptry.exe SUCCESS Offset: 2, Length: 2
It seems as if Windows is incapable of issuing odd-sized requests to the file system.
What's up with that?
This is an implementation artifact.
The MS CRT keeps every FILE buffered even if you tell it not to. Instead, the file buffer is set to an internal buffer with space for two bytes. This keeps one code path instead of two and simplifies the implementation of the fast path in fgetc and fputc.
#define fgetc(_stream) (--(_stream)->_cnt >= 0 ? 0xff & *(_stream)->_ptr++ : _filbuf(_stream))
Some of you are probably bothered by the size of the buffer (2 bytes when quasi-unbuffered), but the _fread_nolock_s function contains an optimization which tries to read multiples of the buffer size directly into the destination, bypassing the file buffer.
See fread.c in CRT sources:
/* calc chars to read -- (count/streambufsize) * streambufsize */
nbytes = (unsigned)(count - count % streambufsize);
...
nread = _read_nolock(_fileno(stream), data, nbytes);
Because the file buffer's size is 2, an even number of bytes is read directly into the destination and the remaining single byte, if any, goes through the file buffer. Sometimes there may already be bytes in the buffer that need to be transferred to the destination before the optimized read can take place.
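As a concrete illustration of that arithmetic (not CRT code, just the same calculation applied to the 3-byte request from the question):
#include <stdio.h>
int main(void)
{
    unsigned count = 3;         /* bytes requested by fread */
    unsigned streambufsize = 2; /* the CRT's 2-byte "unbuffered" buffer */
    /* direct read, bypassing the file buffer: 3 - 3 % 2 = 2 bytes */
    unsigned nbytes = count - count % streambufsize;
    /* the remaining byte is fetched through the 2-byte file buffer,
       which is why procmon shows a second 2-byte ReadFile */
    unsigned remainder = count - nbytes;
    printf("direct read: %u bytes, via buffer: %u byte(s)\n", nbytes, remainder);
    return 0;
}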
Bonus: the buffer size is always forced to a multiple of 2.
See setvbuf.c:
/*
* force size to be even by masking down to the nearest multiple
* of 2
*/
size &= (size_t)~1;
...
/*
* CASE 1: No Buffering.
*/
if (type & _IONBF) {
stream->_flag |= _IONBF;
buffer = (char *)&(stream->_charbuf);
size = 2;
}
Code snippets above are from VC 2013 CRT.
For comparison snippets from Universal CRT 10.0.17134
read.cpp
unsigned const bytes_to_read = stream_buffer_size != 0
? static_cast<unsigned>(maximum_bytes_to_read - maximum_bytes_to_read % stream_buffer_size)
: maximum_bytes_to_read;
...
int const bytes_read = _read_nolock(_fileno(stream.public_stream()), data, bytes_to_read);
setvbuf.cpp
// Force the buffer size to be even by masking the low order bit:
size_t const usable_buffer_size = buffer_size_in_bytes & ~static_cast<size_t>(1);
...
// Case 1: No buffering:
if (type & _IONBF)
{
return set_buffer(stream, reinterpret_cast<char*>(&stream->_charbuf), 2, _IOBUFFER_NONE);
}
And snippets from VC 6.0 (1998)
read.c
/* calc chars to read -- (count/bufsize) * bufsize */
nbytes = ( bufsize ? (count - count % bufsize) : count );
nread = _read(_fileno(stream), data, nbytes);
setvbuf.c
/*
* force size to be even by masking down to the nearest multiple
* of 2
*/
size &= (size_t)~1;
...
/*
* CASE 1: No Buffering.
*/
if (type & _IONBF) {
stream->_flag |= _IONBF;
buffer = (char *)&(stream->_charbuf);
size = 2;
}

Windows 10 poor performance compared to Windows 7 (page fault handling is not scalable, severe lock contention when no of threads > 16)

We set up two identical HP Z840 Workstations with the following specs
2 x Xeon E5-2690 v4 @ 2.60GHz (Turbo Boost ON, HT OFF, total 28 logical CPUs)
32GB DDR4 2400 Memory, Quad-channel
and installed Windows 7 SP1 (x64) and Windows 10 Creators Update (x64) on each.
Then we ran a small memory benchmark (code below, built with VS2015 Update 3, 64-bit architecture) which performs memory allocation-fill-free simultaneously from multiple threads.
#include <Windows.h>
#include <vector>
#include <ppl.h>
unsigned __int64 ZQueryPerformanceCounter()
{
unsigned __int64 c;
::QueryPerformanceCounter((LARGE_INTEGER *)&c);
return c;
}
unsigned __int64 ZQueryPerformanceFrequency()
{
unsigned __int64 c;
::QueryPerformanceFrequency((LARGE_INTEGER *)&c);
return c;
}
class CZPerfCounter {
public:
CZPerfCounter() : m_st(ZQueryPerformanceCounter()) {};
void reset() { m_st = ZQueryPerformanceCounter(); };
unsigned __int64 elapsedCount() { return ZQueryPerformanceCounter() - m_st; };
unsigned long elapsedMS() { return (unsigned long)(elapsedCount() * 1000 / m_freq); };
unsigned long elapsedMicroSec() { return (unsigned long)(elapsedCount() * 1000 * 1000 / m_freq); };
static unsigned __int64 frequency() { return m_freq; };
private:
unsigned __int64 m_st;
static unsigned __int64 m_freq;
};
unsigned __int64 CZPerfCounter::m_freq = ZQueryPerformanceFrequency();
int main(int argc, char ** argv)
{
SYSTEM_INFO sysinfo;
GetSystemInfo(&sysinfo);
int ncpu = sysinfo.dwNumberOfProcessors;
if (argc == 2) {
ncpu = atoi(argv[1]);
}
{
printf("No of threads %d\n", ncpu);
try {
concurrency::Scheduler::ResetDefaultSchedulerPolicy();
int min_threads = 1;
int max_threads = ncpu;
concurrency::SchedulerPolicy policy
(2 // two entries of policy settings
, concurrency::MinConcurrency, min_threads
, concurrency::MaxConcurrency, max_threads
);
concurrency::Scheduler::SetDefaultSchedulerPolicy(policy);
}
catch (concurrency::default_scheduler_exists &) {
printf("Cannot set concurrency runtime scheduler policy (Default scheduler already exists).\n");
}
static int cnt = 100;
static int num_fills = 1;
CZPerfCounter pcTotal;
// malloc/free
printf("malloc/free\n");
{
CZPerfCounter pc;
for (int i = 1 * 1024 * 1024; i <= 8 * 1024 * 1024; i *= 2) {
concurrency::parallel_for(0, 50, [i](size_t x) {
std::vector<void *> ptrs;
ptrs.reserve(cnt);
for (int n = 0; n < cnt; n++) {
auto p = malloc(i);
ptrs.emplace_back(p);
}
for (int x = 0; x < num_fills; x++) {
for (auto p : ptrs) {
memset(p, num_fills, i);
}
}
for (auto p : ptrs) {
free(p);
}
});
printf("size %4d MB, elapsed %8.2f s, \n", i / (1024 * 1024), pc.elapsedMS() / 1000.0);
pc.reset();
}
}
printf("\n");
printf("Total %6.2f s\n", pcTotal.elapsedMS() / 1000.0);
}
return 0;
}
Surprisingly, the result is very bad on Windows 10 CU compared to Windows 7. I plotted the results below for 1 MB and 8 MB chunk sizes, varying the number of threads from 2, 4, ..., up to 28. While Windows 7 performed slightly worse as we increased the number of threads, Windows 10 showed much worse scalability.
We have tried to make sure all Windows updates are applied, updated drivers, and tweaked BIOS settings, without success. We also ran the same benchmark on several other hardware platforms, and all gave a similar curve for Windows 10. So it seems to be a Windows 10 problem.
Does anyone have similar experience, or know-how about this (maybe we missed something)? This behavior has caused a significant performance hit for our multithreaded application.
*** EDITED
Using https://github.com/google/UIforETW (thanks to Bruce Dawson) to analyze the benchmark, we found that most of the time is spent inside the kernel's KiPageFault. Digging further down the call tree, everything leads to ExpWaitForSpinLockExclusiveAndAcquire. It seems that this lock contention is causing the issue.
*** EDITED
Collected Server 2012 R2 data on the same hardware. Server 2012 R2 is also worse than Win7, but still a lot better than Win10 CU.
*** EDITED
It happens in Server 2016 as well. I added the tag windows-server-2016.
*** EDITED
Using info from @Ext3h, I modified the benchmark to use VirtualAlloc and VirtualLock. I can confirm a significant improvement compared to when VirtualLock is not used. Overall, Win10 is still 30% to 40% slower than Win7 when both use VirtualAlloc and VirtualLock.
Microsoft seems to have fixed this issue with Windows 10 Fall Creators Update and Windows 10 Pro for Workstation.
Here is the updated graph.
Win 10 FCU and WKS have lower overhead than Win 7. In exchange, VirtualLock seems to have higher overhead.
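For reference, here is a minimal sketch of the kind of change described in the edit above (my own illustration, not the exact benchmark modification): each buffer is committed with VirtualAlloc and pre-faulted with VirtualLock/VirtualUnlock, so the subsequent memset does not take soft page faults. Note that VirtualLock can fail if the requested size exceeds the process working set limit (see SetProcessWorkingSetSize in the answer below).
#include <Windows.h>
// Allocate and pre-fault a buffer: VirtualLock forces all pages to become
// resident up front, and VirtualUnlock releases the lock while keeping the
// pages committed, so later writes avoid per-page soft faults.
void* AllocPrefaulted(size_t size)
{
    void* p = VirtualAlloc(nullptr, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
    if (p != nullptr)
    {
        VirtualLock(p, size);
        VirtualUnlock(p, size);
    }
    return p;
}
void FreePrefaulted(void* p)
{
    VirtualFree(p, 0, MEM_RELEASE);
}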
Unfortunately not an answer, just some additional insight.
Little experiment with a different allocation strategy:
#include <Windows.h>
#include <thread>
#include <condition_variable>
#include <mutex>
#include <queue>
#include <atomic>
#include <iostream>
#include <chrono>
class AllocTest
{
public:
virtual void* Alloc(size_t size) = 0;
virtual void Free(void* allocation) = 0;
};
class BasicAlloc : public AllocTest
{
public:
void* Alloc(size_t size) override {
return VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
}
void Free(void* allocation) override {
VirtualFree(allocation, NULL, MEM_RELEASE);
}
};
class ThreadAlloc : public AllocTest
{
public:
ThreadAlloc() {
t = std::thread([this]() {
std::unique_lock<std::mutex> qlock(this->qm);
do {
this->qcv.wait(qlock, [this]() {
return shutdown || !q.empty();
});
{
std::unique_lock<std::mutex> rlock(this->rm);
while (!q.empty())
{
q.front()();
q.pop();
}
}
rcv.notify_all();
} while (!shutdown);
});
}
~ThreadAlloc() {
{
std::unique_lock<std::mutex> lock1(this->rm);
std::unique_lock<std::mutex> lock2(this->qm);
shutdown = true;
}
qcv.notify_all();
rcv.notify_all();
t.join();
}
void* Alloc(size_t size) override {
void* target = nullptr;
{
std::unique_lock<std::mutex> lock(this->qm);
q.emplace([this, &target, size]() {
target = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT, PAGE_READWRITE);
VirtualLock(target, size);
VirtualUnlock(target, size);
});
}
qcv.notify_one();
{
std::unique_lock<std::mutex> lock(this->rm);
rcv.wait(lock, [&target]() {
return target != nullptr;
});
}
return target;
}
void Free(void* allocation) override {
{
std::unique_lock<std::mutex> lock(this->qm);
q.emplace([allocation]() {
VirtualFree(allocation, NULL, MEM_RELEASE);
});
}
qcv.notify_one();
}
private:
std::queue<std::function<void()>> q;
std::condition_variable qcv;
std::condition_variable rcv;
std::mutex qm;
std::mutex rm;
std::thread t;
std::atomic_bool shutdown = false;
};
int main()
{
SetProcessWorkingSetSize(GetCurrentProcess(), size_t(4) * 1024 * 1024 * 1024, size_t(16) * 1024 * 1024 * 1024);
BasicAlloc alloc1;
ThreadAlloc alloc2;
AllocTest *allocator = &alloc2;
const size_t buffer_size =1*1024*1024;
const size_t buffer_count = 10*1024;
const unsigned int thread_count = 32;
std::vector<void*> buffers;
buffers.resize(buffer_count);
std::vector<std::thread> threads;
threads.resize(thread_count);
void* reference = allocator->Alloc(buffer_size);
std::memset(reference, 0xaa, buffer_size);
auto func = [&buffers, allocator, buffer_size, buffer_count, reference, thread_count](int thread_id) {
for (int i = thread_id; i < buffer_count; i+= thread_count) {
buffers[i] = allocator->Alloc(buffer_size);
std::memcpy(buffers[i], reference, buffer_size);
allocator->Free(buffers[i]);
}
};
for (int i = 0; i < 10; i++)
{
std::chrono::high_resolution_clock::time_point t1 = std::chrono::high_resolution_clock::now();
for (int t = 0; t < thread_count; t++) {
threads[t] = std::thread(func, t);
}
for (int t = 0; t < thread_count; t++) {
threads[t].join();
}
std::chrono::high_resolution_clock::time_point t2 = std::chrono::high_resolution_clock::now();
auto duration = std::chrono::duration_cast<std::chrono::microseconds>(t2 - t1).count();
std::cout << duration << std::endl;
}
DebugBreak();
return 0;
}
Under all sane conditions, BasicAlloc is faster, just as it should be. In fact, on a quad-core CPU (no HT), there is no configuration in which ThreadAlloc could outperform it. ThreadAlloc is consistently around 30% slower. (Which is actually surprisingly little, and it stays true even for tiny 1 kB allocations!)
However, if the CPU has around 8-12 virtual cores, it eventually reaches the point where BasicAlloc actually scales negatively, while ThreadAlloc just "stalls" at the baseline overhead of soft faults.
If you profile the two different allocation strategies, you can see that for a low thread count, KiPageFault shifts from memcpy on BasicAlloc to VirtualLock on ThreadAlloc.
For higher thread and core counts, eventually ExpWaitForSpinLockExclusiveAndAcquire starts emerging from virtually zero load to up to 50% with BasicAlloc, while ThreadAlloc only maintains the constant overhead from KiPageFault itself.
Well, the stall with ThreadAlloc is also pretty bad. No matter how many cores or NUMA nodes you have, you are currently hard-capped at around 5-8 GB/s in new allocations, across all processes in the system, limited solely by single-thread performance. All the dedicated memory-management thread achieves is not wasting CPU cycles on a contended critical section.
You would have expected Microsoft to have a lock-free strategy for assigning pages on different cores, but apparently that's not even remotely the case.
The spin-lock was also already present in the Windows 7 and earlier implementations of KiPageFault. So what did change?
Simple answer: KiPageFault itself became much slower. No clue what exactly caused the slowdown, but the spin-lock simply never became an obvious limit before, because 100% contention was never possible.
If someone wishes to disassemble KiPageFault to find the most expensive part, be my guest.

How can I increase the limit of generated prime numbers with Sieve of Eratosthenes?

What do I need to change in my program to be able to compute a higher limit of prime numbers?
Currently my algorithm works only with numbers up to 85 million. In my opinion, it should work with numbers up to 3 billion.
I'm writing my own implementation of the Sieve of Eratosthenes in CUDA and I've hit a wall.
So far the algorithm seems to work fine for small numbers (below 85 million).
However, when I try to compute prime numbers up to 100 million, 2 billion, or 3 billion, the system freezes (while it's computing on the CUDA device); then, after a few seconds, my Linux machine goes back to normal (unfrozen), but the CUDA program crashes with the following error message:
CUDA error at prime.cu:129 code=6(cudaErrorLaunchTimeout) "cudaDeviceSynchronize()"
I have a GTX 780 (3 GB) and I'm allocating the sieve in a char array, so if I were to compute prime numbers up to 100,000, it would allocate 100,000 bytes on the device.
I assumed that the GPU would allow up to 3 billion numbers since it has 3 GB of memory; however, it only lets me do 85 million tops (85 million bytes = 0.08 GB).
this is my prime.cu code:
#include <stdio.h>
#include <helper_cuda.h> // checkCudaErrors() - NVIDIA_CUDA-6.0_Samples/common/inc
// #include <cuda.h>
// #include <cuda_runtime_api.h>
// #include <cuda_runtime.h>
typedef unsigned long long int uint64_t;
/******************************************************************************
* kernel that initializes the 1st couple of values in the primes array.
******************************************************************************/
__global__ static void sieveInitCUDA(char* primes)
{
primes[0] = 1; // value of 1 means the number is NOT prime
primes[1] = 1; // numbers "0" and "1" are not prime numbers
}
/******************************************************************************
* kernel for sieving the even numbers starting at 4.
******************************************************************************/
__global__ static void sieveEvenNumbersCUDA(char* primes, uint64_t max)
{
uint64_t index = blockIdx.x * blockDim.x + threadIdx.x + threadIdx.x + 4;
if (index < max)
primes[index] = 1;
}
/******************************************************************************
* kernel for finding prime numbers using the sieve of eratosthenes
* - primes: an array of bools. initially all numbers are set to "0".
* A "0" value means that the number at that index is prime.
* - max: the max size of the primes array
* - maxRoot: the sqrt of max (the other input). we don't wanna make all threads
* compute this over and over again, so it's being passed in
******************************************************************************/
__global__ static void sieveOfEratosthenesCUDA(char *primes, uint64_t max,
const uint64_t maxRoot)
{
// get the starting index, sieve only odds starting at 3
// 3,5,7,9,11,13...
/* int index = blockIdx.x * blockDim.x + threadIdx.x + threadIdx.x + 3; */
// apparently the following indexing usage is faster than the one above. Hmm
int index = blockIdx.x * blockDim.x + threadIdx.x + 3;
// make sure index won't go out of bounds, also don't start the execution
// on numbers that are already composite
if (index < maxRoot && primes[index] == 0)
{
// mark off the composite numbers
for (int j = index * index; j < max; j += index)
{
primes[j] = 1;
}
}
}
/******************************************************************************
* checkDevice()
******************************************************************************/
__host__ int checkDevice()
{
// query the Device and decide on the block size
int devID = 0; // the default device ID
cudaError_t error;
cudaDeviceProp deviceProp;
error = cudaGetDevice(&devID);
if (error != cudaSuccess)
{
printf("CUDA Device not ready or not supported\n");
printf("%s: cudaGetDevice returned error code %d, line(%d)\n", __FILE__, error, __LINE__);
exit(EXIT_FAILURE);
}
error = cudaGetDeviceProperties(&deviceProp, devID);
if (deviceProp.computeMode == cudaComputeModeProhibited || error != cudaSuccess)
{
printf("CUDA device ComputeMode is prohibited or failed to getDeviceProperties\n");
return EXIT_FAILURE;
}
// Use a larger block size for Fermi and above (see compute capability)
return (deviceProp.major < 2) ? 16 : 32;
}
/******************************************************************************
* genPrimesOnDevice
* - inputs: limit - the largest prime that should be computed
* primes - an array of size [limit], initialized to 0
******************************************************************************/
__host__ void genPrimesOnDevice(char* primes, uint64_t max)
{
int blockSize = checkDevice();
if (blockSize == EXIT_FAILURE)
return;
char* d_Primes = NULL;
int sizePrimes = sizeof(char) * max;
uint64_t maxRoot = sqrt(max);
// allocate the primes on the device and set them to 0
checkCudaErrors(cudaMalloc(&d_Primes, sizePrimes));
checkCudaErrors(cudaMemset(d_Primes, 0, sizePrimes));
// make sure that there are no errors...
checkCudaErrors(cudaPeekAtLastError());
// setup the execution configuration
dim3 dimBlock(blockSize);
dim3 dimGrid((maxRoot + dimBlock.x) / dimBlock.x);
dim3 dimGridEvens(((max + dimBlock.x) / dimBlock.x) / 2);
//////// debug
#ifdef DEBUG
printf("dimBlock(%d, %d, %d)\n", dimBlock.x, dimBlock.y, dimBlock.z);
printf("dimGrid(%d, %d, %d)\n", dimGrid.x, dimGrid.y, dimGrid.z);
printf("dimGridEvens(%d, %d, %d)\n", dimGridEvens.x, dimGridEvens.y, dimGridEvens.z);
#endif
// call the kernel
// NOTE: no need to synchronize after each kernel
// http://stackoverflow.com/a/11889641/2261947
sieveInitCUDA<<<1, 1>>>(d_Primes); // launch a single thread to initialize
sieveEvenNumbersCUDA<<<dimGridEvens, dimBlock>>>(d_Primes, max);
sieveOfEratosthenesCUDA<<<dimGrid, dimBlock>>>(d_Primes, max, maxRoot);
// check for kernel errors
checkCudaErrors(cudaPeekAtLastError());
checkCudaErrors(cudaDeviceSynchronize());
// copy the results back
checkCudaErrors(cudaMemcpy(primes, d_Primes, sizePrimes, cudaMemcpyDeviceToHost));
// no memory leaks
checkCudaErrors(cudaFree(d_Primes));
}
to test this code:
int main()
{
int max = 85000000; // 85 million
char* primes = (char*)malloc(max);
// check that it allocated correctly...
memset(primes, 0, max);
genPrimesOnDevice(primes, max);
// if you wish to display results:
for (uint64_t i = 0; i < (uint64_t)max; i++)
{
if (primes[i] == 0) // if the value is '0', then the number is prime
{
std::cout << i; // use printf if you are using c
if ((i + 1) != (uint64_t)max)
std::cout << ", ";
}
}
free(primes);
}
This error:
CUDA error at prime.cu:129 code=6(cudaErrorLaunchTimeout) "cudaDeviceSynchronize()"
doesn't necessarily mean anything other than that your kernel is taking too long. It's not necessarily a numerical limit or a computational error, but a system-imposed limit on the amount of time your kernel is allowed to run. Both Linux and Windows can have such watchdog timers.
If you want to work around it in the Linux case, review this document.
You don't mention it, but I assume your GTX780 is also hosting a (the) display. In that case, there is a time limit on kernels by default. If you can use another device as the display, then reconfigure your machine to have X not use the GTX780, as described in the link. If you do not have another GPU to use for the display, then the only option is to modify the interactivity setting indicated in the linked document, if you want to run long-running kernels. And in this situation, the keyboard/mouse/display will become non-responsive while the kernel is running. If your kernel should happen to run too long, it can be difficult to recover the machine, and may require a hard reboot. (You could also SSH into the machine, and kill the process that is using the GPU for CUDA.)
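As a quick check for whether the watchdog applies to your device, you can query the CUDA runtime: the kernelExecTimeoutEnabled field of cudaDeviceProp is non-zero when the OS enforces a run-time limit on kernels. A small sketch:
#include <cstdio>
#include <cuda_runtime.h>
int main()
{
    cudaDeviceProp prop;
    if (cudaGetDeviceProperties(&prop, 0) != cudaSuccess)
    {
        printf("cudaGetDeviceProperties failed\n");
        return 1;
    }
    // Non-zero means a single kernel launch is limited by the display watchdog.
    printf("%s: kernel execution timeout %s\n", prop.name,
           prop.kernelExecTimeoutEnabled ? "ENABLED" : "disabled");
    return 0;
}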

CUDA allocating array of arrays

I have some trouble allocating an array of arrays in CUDA.
void ** data;
cudaMalloc(&data, sizeof(void**)*N); // allocates without problems
for(int i = 0; i < N; i++) {
cudaMalloc(data + i, getSize(i) * sizeof(void*)); // seg fault is thrown
}
What did I do wrong?
You have to allocate the array of pointers in host memory, then allocate device memory for each array and store its pointer in that host array. Then allocate device memory for storing the pointers themselves, and finally copy the host array of pointers to device memory.
One example is worth 1000 words:
__global__ void multi_array_kernel( int N, void** arrays ){
// stuff
}
int main(){
const int N_ARRAYS = 20;
void *h_array = malloc(sizeof(void*) * N_ARRAYS);
for(int i = 0; i < N_ARRAYS; i++){
cudaMalloc(&h_array[i], i * sizeof(void*));
//TODO: check error
}
void *d_array = cudaMalloc(sizeof(void*) * N_ARRAYS);
// Copy to device Memory
cudaMemcpy(d_array, h_array, sizeof(void*) * N_ARRAYS, cudaMemcpyHostToDevice);
multi_array_kernel<<<1,1>>>(N_ARRAYS, d_array);
cudaThreadSynchronize();
for(int i = 0; i < N_ARRAYS; i++){
cudaFree(h_array[i]); //host not device memory
//TODO: check error
}
cudaFree(d_array);
free(h_array);
}
I don't believe this is supported. cudaMalloc() allocates device memory, but stores the address in a variable on the host. In your for-loop, you are passing it addresses that live in device memory.
Depending on what you're trying to accomplish, you may want to allocate data with a normal host malloc() before the for-loop you currently have. Or allocate a single big block of device memory and compute offsets into it manually.
Look at Sections 2.4, 3.2.1 and B.2.5 (bottom) of the CUDA Programming Guide for more discussion of this. Specifically, on the bottom of page 108:
The address obtained by taking the address of a __device__, __shared__ or
__constant__ variable can only be used in device code.
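As an illustration of the second suggestion (a single big device block with manually computed offsets), here is a minimal sketch; getSize() stands in for the asker's per-array size function and float is an arbitrary element type:
#include <cuda_runtime.h>
#include <vector>
static size_t getSize(int i) { return 100 + i; } // hypothetical per-array size
int main()
{
    const int N = 20;
    // Compute each sub-array's starting offset (in elements) on the host.
    std::vector<size_t> offsets(N + 1, 0);
    for (int i = 0; i < N; i++)
        offsets[i + 1] = offsets[i] + getSize(i);
    // One allocation holds all sub-arrays back to back.
    float* d_block = nullptr;
    cudaMalloc((void**)&d_block, offsets[N] * sizeof(float));
    // Sub-array i starts at d_block + offsets[i]; pass d_block plus the
    // offsets (or just the offset a kernel needs) to device code.
    cudaFree(d_block);
    return 0;
}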
I think in the first loop it should be &h_array[i] not &d_array[i].
You cannot use
cudaMalloc(&h_array[i], i * sizeof(void*));
for an array declared as void *.
Use a defined data type:
CUdeviceptr *h_array = malloc(sizeof(CUdeviceptr *) * N);
or
int *h_array = malloc(sizeof(int *) * N);
and cast it to void ** in the call:
cudaMalloc((void **)&h_array[i], i * sizeof(void*));
I had the same problem and managed to solve it.
FabrizioM's answer was a good starting point for me and helped me a lot. Nevertheless, I encountered some problems when I tried to transfer the code to my project. Using the additional comments and posts, I was able to write a working example (VS2012, CUDA 7.5). So I will post my code as an additional answer and as a starting point for others.
To understand the naming: I'm using a vector of OpenCV cv::Mat as input, captured from multiple cameras, and I am processing these images in the kernel.
void TransferCameraImageToCuda(const std::vector<cv::Mat*>* Images)
{
int NumberCams = Images->size();
int imageSize = Images->at(0)->cols*Images->at(0)->rows;
CUdeviceptr* CamArraysAdressOnDevice_H;
CUdeviceptr* CamArraysAdressOnDevice_D;
//allocate memory on host to store the device-address of each array
CamArraysAdressOnDevice_H = new CUdeviceptr[NumberCams];
// allocate memory on the device and store the arrays on the device
for (int i = 0; i < NumberCams; i++){
cudaMalloc((void**)&(CamArraysAdressOnDevice_H[i]), imageSize * sizeof(unsigned short));
cudaMemcpy((void*)CamArraysAdressOnDevice_H[i], Images->at(i)->data, imageSize * sizeof(unsigned short), cudaMemcpyHostToDevice);
}
// allocate memory on the device to store the device-adresses of the arrays
cudaMalloc((void**)&CamArraysAdressOnDevice_D, sizeof(CUdeviceptr*)* NumberCams);
// Copy the adress of each device array to the device
cudaMemcpy(CamArraysAdressOnDevice_D, CamArraysAdressOnDevice_H, sizeof(CUdeviceptr*)* NumberCams, cudaMemcpyHostToDevice);
}
In the kernel launch I cast the device pointer to the data-type pointer (unsigned short**):
DummyKernel<<<gridDim,blockDim>>>(NumberCams, (unsigned short**) CamArraysAdressOnDevice_D)
and the kernel definition is for example:
__global__ void DummyKernel(int NumberImages, unsigned short** CamImages)
{
int someIndex = 3458;
printf("Value Image 0 : %d \n", CamImages[0][someIndex]);
printf("Value Image 1 : %d \n", CamImages[1][someIndex]);
printf("Value Image 2 : %d \n", CamImages[2][someIndex]);
}
