I am trying to take a vector of strings containing arguments that I want to run with the execve call. I am also copying the environment, because the application I am writing needs a copy of the incoming environment from the process. The application is written in C++, and I am getting a "Bad Address" error from the execve call. Here is the current code that I have:
#include <iostream>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>
#include <vector>
using namespace std;
int main (int argc, char * argv[], char * envp[]) {
int total = 0;
int a = 0;
int b = 0;
char **my_array;
char **my_envp;
// Setup copy the environment.
while (envp[a] != NULL) {
total++;
a++;
}
my_envp = new char*[total+1];
for (a = 0; a < total; a++) {
my_envp[a] = new char[strlen(envp[a])+1];
strcpy(my_envp[a], envp[a]);
}
a++;
my_envp[a] = NULL;
// Get my path and arguments.
vector<string> random = { "/bin/echo", "Grace ", "Will ", "Dan ", "Scott ", "Kevin ", "Amanda " };
my_array = new char*[random.size()+1];
for (b = 0; b < random.size(); b++) {
my_array[b] = new char[strlen(random[b].c_str())+1];
strcpy(my_array[b], random[b].c_str());
}
b++;
my_array[b] = NULL;
// Run my arguments.
pid_t pid;
pid = fork();
if (pid == 0) {
if (execve(my_array[0], my_array, my_envp) == -1)
perror("");
exit(1);
} else {
waitpid(pid, 0, WUNTRACED);
}
// Clean up time.
for (b = 0; b < random.size(); b++)
delete [] my_array[b];
delete [] my_array;
for (a = 0; a < total; a++)
delete [] my_envp[a];
delete [] my_envp;
return 0;
}
Here is my Valgrind output:
{"
==27594== Memcheck, a memory error detector
==27594== Copyright (C) 2002-2015, and GNU GPL'd, by Julian Seward et al.
==27594== Using Valgrind-3.11.0 and LibVEX; rerun with -h for copyright info
==27594== Command: ./a.out
==27594==
==27594== Invalid write of size 8
==27594== at 0x40115A: main (in /home/examples/a.out)
==27594== Address 0x5ab6eb0 is 0 bytes after a block of size 560 alloc'd
==27594== at 0x4C2E80F: operator new[](unsigned long) (o)
==27594== by 0x40106E: main (in /home/examples/a.out)
==27594==
==27594== Invalid write of size 8
==27594== at 0x4014E0: main (in /home/examples/a.out)
==27594== Address 0x5ab9220 is 0 bytes after a block of size 64 alloc'd
==27594== at 0x4C2E80F: operator new[](unsigned long) ()
==27594== by 0x4013D1: main (in /home/examples/a.out)
==27594==
==27595== Syscall param execve(argv) points to uninitialised byte(s)
==27595== at 0x549E777: execve (syscall-template.S:84)
==27595== by 0x40151D: main (in /home/examples/a.out)
==27595== Address 0x5ab9218 is 56 bytes inside a block of size 64 alloc'd
==27595== at 0x4C2E80F: operator new[](unsigned long) ()
==27595== by 0x4013D1: main (in /home/examples/a.out)
==27595==
==27595== Syscall param execve(envp) points to uninitialised byte(s)
==27595== at 0x549E777: execve (syscall-template.S:84)
==27595== by 0x40151D: main (in /home/examples/a.out)
==27595== Address 0x5ab6ea8 is 552 bytes inside a block of size 560 alloc'd
==27595== at 0x4C2E80F: operator new[](unsigned long) ()
==27595== by 0x40106E: main (in /home/examples/a.out)
==27595==
Grace Will Dan Scott Kevin Amanda
==27594==
==27594== HEAP SUMMARY:
==27594== in use at exit: 72,704 bytes in 1 blocks
==27594== total heap usage: 80 allocs, 79 frees, 77,249 bytes allocated
==27594==
==27594== LEAK SUMMARY:
==27594== definitely lost: 0 bytes in 0 blocks
==27594== indirectly lost: 0 bytes in 0 blocks
==27594== possibly lost: 0 bytes in 0 blocks
==27594== still reachable: 72,704 bytes in 1 blocks
==27594== suppressed: 0 bytes in 0 blocks
==27594== Reachable blocks (those to which a pointer was found) are not shown.
==27594== To see them, rerun with: --leak-check=full --show-leak-kinds=all
==27594==
==27594== For counts of detected and suppressed errors, rerun with: -v
==27594== ERROR SUMMARY: 2 errors from 2 contexts (suppressed: 0 from 0)
"}
I have a feeling that the way I am building the array of C-string pointers is not correct, or I am missing something very obvious. Thanks.
The problem with your code was that neither your environment list nor your argument list was properly NULL-terminated. You then updated your code based on my comment:
What does valgrind have to do with that? You do my_array = new char*[random.size() + 1]; and, after the loop making the copies, you do my_array[b] = NULL
but you did it incorrectly:
for (a = 0; a < total; a++) {
my_envp[a] = new char[strlen(envp[a])+1];
strcpy(my_envp[a], envp[a]);
}
a++; // <-- does not belong here
my_envp[a] = NULL;
for (b = 0; b < random.size(); b++) {
my_array[b] = new char[strlen(random[b].c_str())+1];
strcpy(my_array[b], random[b].c_str());
}
b++; // <-- does not belong here
my_array[b] = NULL;
and valgrind is complaining about exactly that:
==27594== Invalid write of size 8
==27594== at 0x40115A: main (in /home/examples/a.out)
==27594== Address 0x5ab6eb0 is 0 bytes after a block of size 560 alloc'd
==27594== at 0x4C2E80F: operator new[](unsigned long) (o)
==27594== by 0x40106E: main (in /home/examples/a.out)
==27594==
==27594== Invalid write of size 8
==27594== at 0x4014E0: main (in /home/examples/a.out)
==27594== Address 0x5ab9220 is 0 bytes after a block of size 64 alloc'd
==27594== at 0x4C2E80F: operator new[](unsigned long) ()
==27594== by 0x4013D1: main (in /home/examples/a.out)
The correct version should be (as I wrote in the comments)
for (b = 0; b < random.size(); b++) {
my_array[b] = new char[strlen(random[b].c_str())+1];
strcpy(my_array[b], random[b].c_str());
}
my_array[b] = NULL;
The reason you don't need the extra b++ is that the loop already does it.
For a loop
int i;
for(i = 0; i < 5; ++i)
{
printf("i in loop: %d\n", i);
}
printf("i out of loop: %d\n", i);
you will get
i in loop: 0
i in loop: 1
i in loop: 2
i in loop: 3
i in loop: 4
i out of loop: 5
because the loop ends when the condition evaluates to false, and this happens
when i == 5. The same applies to the for loop above: if you increment b
once more after the loop ends, you increment it one time too many.
So let's say random.size() is 5 (like in my loop example) and you've allocated
space for random.size() + 1 == 6 elements, so you can only index from 0 to 5.
At the end of the loop b is 5; if you do an extra b++, then b is 6, and index 6
is beyond the bounds of my_array.
To prove that, this is the code I compiled
#include <iostream>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>
#include <vector>
using namespace std;
int main (int argc, char * argv[], char * envp[]) {
// these variables must be unsigned, vector.size()
// returns an unsigned value
unsigned int total = 0;
unsigned int a = 0;
unsigned int b = 0;
char **my_array;
char **my_envp;
// Setup copy the environment.
while (envp[a] != NULL) {
total++;
a++;
}
my_envp = new char*[total+1];
for (a = 0; a < total; a++) {
my_envp[a] = new char[strlen(envp[a])+1];
strcpy(my_envp[a], envp[a]);
}
my_envp[a] = NULL;
// Get my path and arguments.
vector<string> random = { "/bin/echo", "Grace ", "Will ", "Dan ", "Scott ", "Kevin ", "Amanda " };
my_array = new char*[random.size()+1];
for (b = 0; b < random.size(); b++) {
my_array[b] = new char[strlen(random[b].c_str())+1];
strcpy(my_array[b], random[b].c_str());
}
my_array[b] = NULL;
// Run my arguments.
pid_t pid;
pid = fork();
if (pid == 0) {
if (execve(my_array[0], my_array, my_envp) == -1)
perror("");
exit(1);
} else {
waitpid(pid, 0, WUNTRACED);
}
// Clean up time.
for (b = 0; b < random.size(); b++)
delete [] my_array[b];
delete [] my_array;
for (a = 0; a < total; a++)
delete [] my_envp[a];
delete [] my_envp;
return 0;
}
and the output
$ g++ a.cpp -oa -g -Wall
$ valgrind ./a
==15833== Memcheck, a memory error detector
==15833== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al.
==15833== Using Valgrind-3.13.0 and LibVEX; rerun with -h for copyright info
==15833== Command: ./a
==15833==
Grace Will Dan Scott Kevin Amanda
==15833==
==15833== HEAP SUMMARY:
==15833== in use at exit: 0 bytes in 0 blocks
==15833== total heap usage: 79 allocs, 79 frees, 78,730 bytes allocated
==15833==
==15833== All heap blocks were freed -- no leaks are possible
==15833==
==15833== For counts of detected and suppressed errors, rerun with: -v
==15833== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
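For reference, here is a minimal sketch (not part of the code above) of how the same NULL-terminated argv can be built without any manual new[]/delete[], by collecting pointers to the strings' own buffers. It assumes the std::string objects outlive the exec attempt, and it reuses the envp passed to main, which is already NULL-terminated:
#include <cstdio>
#include <string>
#include <sys/wait.h>
#include <unistd.h>
#include <vector>
int main (int, char **, char * envp[]) {
    std::vector<std::string> args = { "/bin/echo", "Grace", "Will", "Dan" };
    // One pointer per string, plus the required NULL terminator.
    std::vector<char*> my_argv;
    for (auto &s : args)
        my_argv.push_back(const_cast<char*>(s.c_str()));
    my_argv.push_back(nullptr);
    pid_t pid = fork();
    if (pid == 0) {
        execve(my_argv[0], my_argv.data(), envp); // envp from main is already NULL-terminated
        perror("execve");                         // only reached if execve failed
        _exit(1);
    }
    waitpid(pid, nullptr, 0);
    return 0;
}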
Related
I found an issue with large page-locked memory allocations in CUDA. Here are the source code and makefile. The code allocates 10 GB of page-locked memory and copies some data from device memory to this page-locked memory; the data in device memory are set to 1.0 before the copy.
#include <cuda.h>
#include <assert.h>
#include <cuda_runtime.h>
#include "helper_cuda.h"
__global__
void test_k(double* x, size_t n)
{
int gid = blockIdx.x*blockDim.x + threadIdx.x;
if(gid<n) x[gid] = 1.0 ;
}
int main(int argc, char* argv[])
{
size_t n = size_t(10)*1024*1024*1024/sizeof(double);
printf("\n n: %zu, page-locked memory size: %zu MB\n", n, n*sizeof(double)/1024/1024);
double* x_h = NULL, *x_d = NULL;
int gpuid = 0;
if(argc>1 ) gpuid = atoi(argv[1]);
printf("select gpu %d\n", gpuid);
checkCudaErrors(cudaSetDevice(gpuid));
checkCudaErrors(cudaMallocHost(&x_h, sizeof(double)*n));
checkCudaErrors(cudaMalloc(&x_d, sizeof(double)*n));
for(int i = 0; i < n; ++i) x_h[i]=0.0;
int nthd = 256;
int nblk = (n+nthd-1) / nthd;
test_k<<<nblk, nthd, 0, 0>>>(x_d, n);
checkCudaErrors(cudaMemcpy(x_h, x_d, sizeof(double)*n, cudaMemcpyDeviceToHost));
int errCount = 0;
for(size_t i = 0; i < n; ++i){
if(x_h[i] == 0.0) errCount++;
}
printf("%s errCount: %d, which should be 0\n", errCount?"Error:":"Correct", errCount);
checkCudaErrors(cudaFree(x_d));
checkCudaErrors(cudaFreeHost(x_h));
return 0;
}
CUDA_PATH = /depot/cuda/cuda-11.2/
CUDA_INC = -I$(CUDA_PATH)/include -I$(CUDA_PATH)/samples/common/inc
NVCC = $(CUDA_PATH)/bin/nvcc
NVCCXXFLAGS = -std=c++11 -O3 -w -m64 -Xptxas -dlcm=cg -gencode=arch=compute_70,code=sm_70 -gencode=arch=compute_80,code=sm_80 $(CUDA_INC)
all: testLargePin
testLargePin: testLargePin.cu
$(NVCC) $^ $(NVCCXXFLAGS) -o $@
clean:
rm testLargePin -f
I ran the binary on three different GPU servers (all with A100-SXM4-40GB). On machine 1, the result is correct. On machine 2, it reports:
CUDA error at testLargePin.cu:31 code=719(cudaErrorLaunchFailure) "cudaMemcpy(x_h, x_d, sizeof(double)*n, cudaMemcpyDeviceToHost)"
On machine 3, the copy is wrong; there are lots of zeros in the page-locked array.
n: 1342177280, page-locked memory size: 10240 MB
select gpu 0
Error: errCount: 1024, which should be 0
Does anyone know the reason and how to fix the issue? Is there, for example, an API to check the maximum page-locked memory size on a specific machine? Thanks in advance.
According to NVIDIA's documentation (https://docs.nvidia.com/cuda/cuda-driver-api/group__CUDA__TYPES.html#group__CUDA__TYPES_1gc6c391505e117393cc2558fff6bfc2e9), error 719 (cudaErrorLaunchFailure) indicates dereferencing an invalid device pointer, accessing out-of-bounds shared memory, or a system-specific problem.
In my experience, synchronization has helped with memory errors and inconsistent results. Did you try adding cudaDeviceSynchronize(); after checkCudaErrors(cudaMemcpy(x_h, x_d, sizeof(double)*n, cudaMemcpyDeviceToHost));?
As for page-locked memory, there is no limit imposed by CUDA itself; I think you have to check that on the host side.
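A minimal sketch of that suggestion (a trivial kernel and a deliberately small pinned buffer, not the 10 GB case from the question; CHECK is a hypothetical error-checking macro standing in for checkCudaErrors), showing where the extra synchronization and error checks would go:
#include <cstdio>
#include <cuda_runtime.h>
#define CHECK(call) do { \
    cudaError_t e = (call); \
    if (e != cudaSuccess) { \
        fprintf(stderr, "%s:%d %s\n", __FILE__, __LINE__, cudaGetErrorString(e)); \
        return 1; \
    } } while (0)
__global__ void fill_k(double* x, size_t n)
{
    size_t gid = blockIdx.x * blockDim.x + threadIdx.x;
    if (gid < n) x[gid] = 1.0;
}
int main()
{
    const size_t n = 1 << 20;   // small buffer, for illustration only
    double *x_h = NULL, *x_d = NULL;
    CHECK(cudaMallocHost(&x_h, n * sizeof(double)));   // page-locked host memory
    CHECK(cudaMalloc(&x_d, n * sizeof(double)));
    fill_k<<<(n + 255) / 256, 256>>>(x_d, n);
    CHECK(cudaGetLastError());          // catches launch-configuration errors
    CHECK(cudaDeviceSynchronize());     // surfaces kernel execution errors here, instead of
                                        // as cudaErrorLaunchFailure on the following memcpy
    CHECK(cudaMemcpy(x_h, x_d, n * sizeof(double), cudaMemcpyDeviceToHost));
    CHECK(cudaFree(x_d));
    CHECK(cudaFreeHost(x_h));
    return 0;
}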
I found the 'vectorized/batch sort' and 'nested sort' methods at the link below: How to use Thrust to sort the rows of a matrix?
When I tried these methods with 500 rows of 1000 elements each, the results were:
vectorized/batch sort : 66ms
nested sort : 3290ms
I am using a 1080 Ti HOF to do this operation, but it takes too long compared to your case.
But at the link below, it takes less than 10 ms, and even close to 100 microseconds.
(How to find median value in 2d array for each column with CUDA?)
Could you recommend how to optimize this method to reduce operation time?
#include <thrust/device_vector.h>
#include <thrust/device_ptr.h>
#include <thrust/host_vector.h>
#include <thrust/sort.h>
#include <thrust/execution_policy.h>
#include <thrust/generate.h>
#include <thrust/equal.h>
#include <thrust/sequence.h>
#include <thrust/for_each.h>
#include <iostream>
#include <stdlib.h>
#define NSORTS 500
#define DSIZE 1000
int my_mod_start = 0;
int my_mod() {
return (my_mod_start++) / DSIZE;
}
bool validate(thrust::device_vector<int> &d1, thrust::device_vector<int> &d2) {
return thrust::equal(d1.begin(), d1.end(), d2.begin());
}
struct sort_functor
{
thrust::device_ptr<int> data;
int dsize;
__host__ __device__
void operator()(int start_idx)
{
thrust::sort(thrust::device, data + (dsize*start_idx), data + (dsize*(start_idx + 1)));
}
};
#include <time.h>
#include <windows.h>
unsigned long long dtime_usec(LONG start) {
SYSTEMTIME timer2;
GetSystemTime(&timer2);
LONG end = (timer2.wSecond * 1000) + timer2.wMilliseconds;
return (end-start);
}
int main() {
for (int i = 0; i < 3; i++) {
SYSTEMTIME timer1;
cudaDeviceSetLimit(cudaLimitMallocHeapSize, (16 * DSIZE*NSORTS));
thrust::host_vector<int> h_data(DSIZE*NSORTS);
thrust::generate(h_data.begin(), h_data.end(), rand);
thrust::device_vector<int> d_data = h_data;
// first time a loop
thrust::device_vector<int> d_result1 = d_data;
thrust::device_ptr<int> r1ptr = thrust::device_pointer_cast<int>(d_result1.data());
GetSystemTime(&timer1);
LONG time_ms1 = (timer1.wSecond * 1000) + timer1.wMilliseconds;
for (int i = 0; i < NSORTS; i++)
thrust::sort(r1ptr + (i*DSIZE), r1ptr + ((i + 1)*DSIZE));
cudaDeviceSynchronize();
time_ms1 = dtime_usec(time_ms1);
std::cout << "loop time: " << time_ms1 << "ms" << std::endl;
//vectorized sort
thrust::device_vector<int> d_result2 = d_data;
thrust::host_vector<int> h_segments(DSIZE*NSORTS);
thrust::generate(h_segments.begin(), h_segments.end(), my_mod);
thrust::device_vector<int> d_segments = h_segments;
GetSystemTime(&timer1);
time_ms1 = (timer1.wSecond * 1000) + timer1.wMilliseconds;
thrust::stable_sort_by_key(d_result2.begin(), d_result2.end(), d_segments.begin());
thrust::stable_sort_by_key(d_segments.begin(), d_segments.end(), d_result2.begin());
cudaDeviceSynchronize();
time_ms1 = dtime_usec(time_ms1);
std::cout << "loop time: " << time_ms1 << "ms" << std::endl;
if (!validate(d_result1, d_result2)) std::cout << "mismatch 1!" << std::endl;
//nested sort
thrust::device_vector<int> d_result3 = d_data;
sort_functor f = { d_result3.data(), DSIZE };
thrust::device_vector<int> idxs(NSORTS);
thrust::sequence(idxs.begin(), idxs.end());
GetSystemTime(&timer1);
time_ms1 = (timer1.wSecond * 1000) + timer1.wMilliseconds;
thrust::for_each(idxs.begin(), idxs.end(), f);
cudaDeviceSynchronize();
time_ms1 = dtime_usec(time_ms1);
std::cout << "loop time: " << time_ms1 << "ms" << std::endl;
if (!validate(d_result1, d_result3)) std::cout << "mismatch 2!" << std::endl;
}
return 0;
}
The main takeaway from your thrust experience is that you should never compile a debug project or use the device debug switch (-G) when you are interested in performance. Compiling device debug code causes the compiler to omit many performance optimizations. The difference in your case was quite dramatic, about a 30x improvement going from debug to release code.
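For example, the only difference between the two builds is whether the device debug switch is passed (the file name here is just a placeholder):
$ nvcc -G -o t_debug t.cu        # device debug build: many optimizations disabled
$ nvcc -O3 -o t_release t.cu     # release build: use this when timing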
Here is a segmented cub sort, where we are launching 500 blocks and each block is handling a separate 1024 element array. The CUB code is lifted from here.
$ cat t1761.cu
#include <cub/cub.cuh> // or equivalently <cub/block/block_radix_sort.cuh>
#include <iostream>
const int ipt=8;
const int tpb=128;
__global__ void ExampleKernel(int *data)
{
// Specialize BlockRadixSort for a 1D block of 128 threads owning 8 integer items each
typedef cub::BlockRadixSort<int, tpb, ipt> BlockRadixSort;
// Allocate shared memory for BlockRadixSort
__shared__ typename BlockRadixSort::TempStorage temp_storage;
// Obtain a segment of consecutive items that are blocked across threads
int thread_keys[ipt];
// just create some synthetic data in descending order 1023 1022 1021 1020 ...
for (int i = 0; i < ipt; i++) thread_keys[i] = (tpb-1-threadIdx.x)*ipt+i;
// Collectively sort the keys
BlockRadixSort(temp_storage).Sort(thread_keys);
__syncthreads();
// write results to output array
for (int i = 0; i < ipt; i++) data[blockIdx.x*ipt*tpb + threadIdx.x*ipt+i] = thread_keys[i];
}
int main(){
const int blks = 500;
int *data;
cudaMalloc(&data, blks*ipt*tpb*sizeof(int));
ExampleKernel<<<blks,tpb>>>(data);
int *h_data = new int[blks*ipt*tpb];
cudaMemcpy(h_data, data, blks*ipt*tpb*sizeof(int), cudaMemcpyDeviceToHost);
for (int i = 0; i < 10; i++) std::cout << h_data[i] << " ";
std::cout << std::endl;
}
$ nvcc -o t1761 t1761.cu -I/path/to/cub/cub-1.8.0
$ CUDA_VISIBLE_DEVICES="2" nvprof ./t1761
==13713== NVPROF is profiling process 13713, command: ./t1761
==13713== Warning: Profiling results might be incorrect with current version of nvcc compiler used to compile cuda app. Compile with nvcc compiler 9.0 or later version to get correct profiling results. Ignore this warning if code is already compiled with the recommended nvcc version
0 1 2 3 4 5 6 7 8 9
==13713== Profiling application: ./t1761
==13713== Profiling result:
Type Time(%) Time Calls Avg Min Max Name
GPU activities: 60.35% 308.66us 1 308.66us 308.66us 308.66us [CUDA memcpy DtoH]
39.65% 202.79us 1 202.79us 202.79us 202.79us ExampleKernel(int*)
API calls: 98.39% 210.79ms 1 210.79ms 210.79ms 210.79ms cudaMalloc
0.72% 1.5364ms 1 1.5364ms 1.5364ms 1.5364ms cudaMemcpy
0.32% 691.15us 1 691.15us 691.15us 691.15us cudaLaunchKernel
0.28% 603.26us 97 6.2190us 400ns 212.71us cuDeviceGetAttribute
0.24% 516.56us 1 516.56us 516.56us 516.56us cuDeviceTotalMem
0.04% 79.374us 1 79.374us 79.374us 79.374us cuDeviceGetName
0.01% 13.373us 1 13.373us 13.373us 13.373us cuDeviceGetPCIBusId
0.00% 5.0810us 3 1.6930us 729ns 2.9600us cuDeviceGetCount
0.00% 2.3120us 2 1.1560us 609ns 1.7030us cuDeviceGet
0.00% 748ns 1 748ns 748ns 748ns cuDeviceGetUuid
$
(CUDA 10.2.89, RHEL 7)
Above I am running on a Tesla K20x, which has performance that is "closer" to your 1080ti than a Tesla V100. We see that the kernel execution time is ~200us. If I run the exact same code on a Tesla V100, the kernel execution time drops to ~35us:
$ CUDA_VISIBLE_DEVICES="0" nvprof ./t1761
==13814== NVPROF is profiling process 13814, command: ./t1761
0 1 2 3 4 5 6 7 8 9
==13814== Profiling application: ./t1761
==13814== Profiling result:
Type Time(%) Time Calls Avg Min Max Name
GPU activities: 82.33% 163.43us 1 163.43us 163.43us 163.43us [CUDA memcpy DtoH]
17.67% 35.073us 1 35.073us 35.073us 35.073us ExampleKernel(int*)
API calls: 98.70% 316.92ms 1 316.92ms 316.92ms 316.92ms cudaMalloc
0.87% 2.7879ms 1 2.7879ms 2.7879ms 2.7879ms cuDeviceTotalMem
0.19% 613.75us 97 6.3270us 389ns 205.37us cuDeviceGetAttribute
0.19% 601.61us 1 601.61us 601.61us 601.61us cudaMemcpy
0.02% 72.718us 1 72.718us 72.718us 72.718us cudaLaunchKernel
0.02% 59.905us 1 59.905us 59.905us 59.905us cuDeviceGetName
0.01% 37.886us 1 37.886us 37.886us 37.886us cuDeviceGetPCIBusId
0.00% 4.6830us 3 1.5610us 546ns 2.7850us cuDeviceGetCount
0.00% 1.9900us 2 995ns 587ns 1.4030us cuDeviceGet
0.00% 677ns 1 677ns 677ns 677ns cuDeviceGetUuid
$
You'll note there is no "input" array, I'm just synthesizing data in the kernel, since we are interested in performance, primarily. If you need to handle an array size like 1000, you should probably just pad each array to 1024 (e.g. pad with a very large number, then ignore the last numbers in the sorted result.)
This code is largely lifted from external documentation. It is offered for instructional purposes. I'm not suggesting it is defect-free or suitable for any particular purpose. Use it at your own risk.
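As a rough host-side sketch of that padding idea (not part of the code above; the function name and signature are hypothetical), each 1000-element row is copied into a 1024-element segment and the tail is filled with INT_MAX so the padding sorts to the end and can be ignored:
#include <climits>
#include <vector>
std::vector<int> pad_rows(const std::vector<int>& rows, int n, int row_len, int seg_len)
{
    // rows holds n rows of row_len values back to back; the result holds n
    // segments of seg_len values each, padded at the end with INT_MAX.
    std::vector<int> padded(size_t(n) * seg_len, INT_MAX);
    for (int r = 0; r < n; ++r)
        for (int c = 0; c < row_len; ++c)
            padded[size_t(r) * seg_len + c] = rows[size_t(r) * row_len + c];
    return padded;
}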
While using cudaMallocManaged() to allocate an array of structs with arrays inside, I'm getting an "out of memory" error even though I have enough free memory. Here's some code that replicates my problem:
#include <iostream>
#include <cuda.h>
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
if (code != cudaSuccess)
{
fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
if (abort) exit(code);
}
}
#define N 100000
#define ARR_SZ 100
struct Struct
{
float* arr;
};
int main()
{
Struct* struct_arr;
gpuErrchk( cudaMallocManaged((void**)&struct_arr, sizeof(Struct)*N) );
for(int i = 0; i < N; ++i)
gpuErrchk( cudaMallocManaged((void**)&(struct_arr[i].arr), sizeof(float)*ARR_SZ) ); //out of memory...
for(int i = 0; i < N; ++i)
cudaFree(struct_arr[i].arr);
cudaFree(struct_arr);
/*float* f;
gpuErrchk( cudaMallocManaged((void**)&f, sizeof(float)*N*ARR_SZ) ); //this works ok
cudaFree(f);*/
return 0;
}
There doesn't seem to be a problem when I call cudaMallocManaged() once to allocate a single chunk of memory, as I'm showing in the last piece of commented code.
I have a GeForce GTX 1070 Ti, and I'm using Windows 10. A friend compiled the same code on a Linux PC and it worked correctly, while it showed the same issue on another Windows 10 PC. WDDM TDR is deactivated.
Any help would be appreciated. Thanks.
There is an allocation granularity.
This means that if you ask for 1 byte, or 400 bytes, what is actually used up is something like 65536 bytes (my original estimate of 4096 turned out to be low; see the measurement below). So a bunch of very small allocations will actually use up memory at a much faster rate than what you would predict based on the requested allocation size. The solution is not to make very small allocations, but instead to allocate in larger chunks.
An alternative strategy here would also be to flatten your allocation, and carve out pieces from it for each of your arrays:
#include <iostream>
#include <cstdio>
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char *file, int line, bool abort=true)
{
if (code != cudaSuccess)
{
fprintf(stderr,"GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
if (abort) exit(code);
}
}
#define N 100000
#define ARR_SZ 100
struct Struct
{
float* arr;
};
int main()
{
Struct* struct_arr;
float* f;
gpuErrchk( cudaMallocManaged((void**)&struct_arr, sizeof(Struct)*N) );
gpuErrchk( cudaMallocManaged((void**)&f, sizeof(float)*N*ARR_SZ) );
for(int i = 0; i < N; ++i)
struct_arr[i].arr = f+i*ARR_SZ;
cudaFree(struct_arr);
cudaFree(f);
return 0;
}
Making ARR_SZ divisible by 4 means the various created pointers can also be up-cast to larger vector types, e.g. float2 or float4, if your code had any intention of doing that; a sketch follows below.
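For illustration, a hypothetical kernel fragment (not part of the code above) that reads one flattened row as float4. It relies on the layout above, on ARR_SZ being divisible by 4 (so ARR_SZ*sizeof(float) is a multiple of 16 and each arr pointer stays float4-aligned), and on a launch such as consume<<<N, ARR_SZ/4>>>(struct_arr, out):
__global__ void consume(const Struct* struct_arr, float* out)
{
    // Each block handles one struct; each thread does one 16-byte vectorized load.
    const float4* row = reinterpret_cast<const float4*>(struct_arr[blockIdx.x].arr);
    float4 v = row[threadIdx.x];
    out[blockIdx.x * blockDim.x + threadIdx.x] = v.x + v.y + v.z + v.w;
}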
A possible reason the original code works on Linux is that managed memory on Linux, in a proper setup, can oversubscribe the GPU physical memory. As a result, the actual allocation limit is much higher than what the GPU on-board memory would suggest. It might also be that the Linux case has a bit more free memory, or perhaps the allocation granularity on Linux is different (smaller).
Based on a question in the comments, I decided to estimate the allocation granularity, using this code:
#include <iostream>
#include <cstdio>
#define gpuErrchk(ans) { gpuAssert((ans), __FILE__, __LINE__); }
inline void gpuAssert(cudaError_t code, const char* file, int line, bool abort = true)
{
if (code != cudaSuccess)
{
fprintf(stderr, "GPUassert: %s %s %d\n", cudaGetErrorString(code), file, line);
if (abort) exit(code);
}
}
#define N 100000
#define ARR_SZ 100
struct Struct
{
float* arr;
};
int main()
{
Struct* struct_arr;
//float* f;
gpuErrchk(cudaMallocManaged((void**)& struct_arr, sizeof(Struct) * N));
#if 0
gpuErrchk(cudaMallocManaged((void**)& f, sizeof(float) * N * ARR_SZ));
for (int i = 0; i < N; ++i)
struct_arr[i].arr = f + i * ARR_SZ;
#else
size_t fre, tot;
gpuErrchk(cudaMemGetInfo(&fre, &tot));
std::cout << "Free: " << fre << " total: " << tot << std::endl;
for (int i = 0; i < N; ++i)
gpuErrchk(cudaMallocManaged((void**) & (struct_arr[i].arr), sizeof(float) * ARR_SZ));
gpuErrchk(cudaMemGetInfo(&fre, &tot));
std::cout << "Free: " << fre << " total: " << tot << std::endl;
for (int i = 0; i < N; ++i)
cudaFree(struct_arr[i].arr);
#endif
cudaFree(struct_arr);
//cudaFree(f);
return 0;
}
When I compile a debug project with that code and run it on a Windows 10 desktop with an RTX 2070 GPU (8 GB memory, same as a GTX 1070 Ti), I get the following output:
Microsoft Windows [Version 10.0.17763.973]
(c) 2018 Microsoft Corporation. All rights reserved.
C:\Users\Robert Crovella>cd C:\Users\Robert Crovella\source\repos\test12\x64\Debug
C:\Users\Robert Crovella\source\repos\test12\x64\Debug>test12
Free: 7069866393 total: 8589934592
Free: 516266393 total: 8589934592
C:\Users\Robert Crovella\source\repos\test12\x64\Debug>test12
Free: 7069866393 total: 8589934592
Free: 516266393 total: 8589934592
C:\Users\Robert Crovella\source\repos\test12\x64\Debug>
Note that on my machine there is only 0.5GB of reported free memory left after the 100,000 allocations. So if for any reason your 8GB GPU starts out with less free memory (entirely possible) you may run into an out-of-memory error, even though I did not.
The calculation of the allocation granularity is as follows:
(7069866393 - 516266393) / 100000 = 65536 bytes per allocation(!)
So my previous estimate of 4096 bytes per allocation was way off, by at least 1 order of magnitude, on my machine/test setup.
The allocation granularity may vary based on:
windows or linux
WDDM or TCC
x86 or Power9
managed vs ordinary cudaMalloc
possibly other factors (e.g. CUDA version)
so my advice to future readers is not to assume that it is always 65536 bytes per allocation, minimum.
I noticed that on Windows every time I issue an unbuffered fread() request with an odd length, it's split into 2 requests (as observed through procmon):
a) fread for my requested length-1
b) 2-byte fread for the last byte
This has an obvious performance overhead: two kernel requests instead of one, and so on.
Sample code ran on Windows 10:
#include <iostream>
#include <stdio.h>
#include <stdlib.h>
int main(int argc, char* argv[]) {
FILE* pFile;
char* buffer;
pFile = fopen(argv[0], "rb");
setbuf(pFile, nullptr);
size_t len = 3;
buffer = (char*)malloc(sizeof(char)*len);
if (len != fread(buffer, 1, len, pFile)) { fputs("Reading error", stderr); exit(3); }
free(buffer);
fclose(pFile);
return 0;
}
This results in the following procmon reported calls:
ReadFile c:\work\cpptry\Debug\cpptry.exe SUCCESS Offset: 0, Length: 2, Priority: Normal
ReadFile c:\work\cpptry\Debug\cpptry.exe SUCCESS Offset: 2, Length: 2
It seems as if Windows is incapable of issuing odd-sized requests to the file system.
What's up with that?
This is an implementation artifact.
The MS CRT keeps all FILEs buffered even if you tell it not to. Instead, the file buffer is set to an internal buffer with space for two bytes. This keeps one code path instead of two and simplifies the implementation of the fast path in fgetc and fputc.
#define fgetc(_stream) (--(_stream)->_cnt >= 0 ? 0xff & *(_stream)->_ptr++ : _filbuf(_stream))
Some of you are probably bothered by the size of the buffer (2 bytes when quasi-unbuffered), but in the _fread_nolock_s function we can find an optimization
which tries to read multiples of the buffer size directly into the destination, bypassing the file buffer.
See fread.c in CRT sources:
/* calc chars to read -- (count/streambufsize) * streambufsize */
nbytes = (unsigned)(count - count % streambufsize);
...
nread = _read_nolock(_fileno(stream), data, nbytes);
Because the file buffer's size is 2, an even number of bytes is read directly into the destination and the remaining single byte, if any, goes through the file buffer. In the sample above (count == 3), nbytes = 3 - 3 % 2 = 2, so two bytes are read directly and the final byte goes through the 2-byte buffer, which is exactly the pair of ReadFile calls procmon shows. Sometimes there may also be bytes already sitting in the buffer that need to be transferred to the destination before the optimized read can take place.
Bonus: the buffer size is always forced to a multiple of 2.
See setvbuf.c:
/*
* force size to be even by masking down to the nearest multiple
* of 2
*/
size &= (size_t)~1;
...
/*
* CASE 1: No Buffering.
*/
if (type & _IONBF) {
stream->_flag |= _IONBF;
buffer = (char *)&(stream->_charbuf);
size = 2;
}
Code snippets above are from VC 2013 CRT.
For comparison snippets from Universal CRT 10.0.17134
read.cpp
unsigned const bytes_to_read = stream_buffer_size != 0
? static_cast<unsigned>(maximum_bytes_to_read - maximum_bytes_to_read % stream_buffer_size)
: maximum_bytes_to_read;
...
int const bytes_read = _read_nolock(_fileno(stream.public_stream()), data, bytes_to_read);
setvbuf.cpp
// Force the buffer size to be even by masking the low order bit:
size_t const usable_buffer_size = buffer_size_in_bytes & ~static_cast<size_t>(1);
...
// Case 1: No buffering:
if (type & _IONBF)
{
return set_buffer(stream, reinterpret_cast<char*>(&stream->_charbuf), 2, _IOBUFFER_NONE);
}
And snippets from VC 6.0 (1998)
read.c
/* calc chars to read -- (count/bufsize) * bufsize */
nbytes = ( bufsize ? (count - count % bufsize) : count );
nread = _read(_fileno(stream), data, nbytes);
setvbuf.c
/*
* force size to be even by masking down to the nearest multiple
* of 2
*/
size &= (size_t)~1;
...
/*
* CASE 1: No Buffering.
*/
if (type & _IONBF) {
stream->_flag |= _IONBF;
buffer = (char *)&(stream->_charbuf);
size = 2;
}
Suppose we have:
struct collapsed {
char **seq;
int num;
};
...
__device__ collapsed *xdev;
...
collapsed *x_dev;
cudaGetSymbolAddress((void **)&x_dev, xdev);
cudaMemcpyToSymbol(x_dev, x, sizeof(collapsed)*size); //x already defined collapsed * , this line gives ERROR
Why do you think I am getting an "invalid device symbol" error at the last line?
The first problem here is that x_dev isn't a device symbol. It might contain an address in device memory, but that address cannot be passed to cudaMemcpyToSymbol. The call should just be:
cudaMemcpyToSymbol(xdev, ......);
Which brings up the second problem. Doing this:
cudaMemcpyToSymbol(xdev, x, sizeof(collapsed)*size);
would be illegal. xdev is a pointer, so the only valid value you can copy to xdev is a device address. If x is the address of a struct collapsed in device memory, then the only valid version of this memory transfer operation is
cudaMemcpyToSymbol(xdev, &x, sizeof(collapsed *));
i.e. x must previously have been set to the address of memory allocated on the device, something like:
collapsed *x;
cudaMalloc((void **)&x, sizeof(collapsed)*size);
cudaMemcpy(x, host_src, sizeof(collapsed)*size, cudaMemcpyHostToDevice);
As promised, here is a complete working example. First the code:
#include <cstdlib>
#include <iostream>
#include <cuda_runtime.h>
struct collapsed {
char **seq;
int num;
};
__device__ collapsed xdev;
__global__
void kernel(const size_t item_sz)
{
if (threadIdx.x < xdev.num) {
char *p = xdev.seq[threadIdx.x];
char val = 0x30 + threadIdx.x;
for(size_t i=0; i<item_sz; i++) {
p[i] = val;
}
}
}
#define gpuQ(ans) { gpu_assert((ans), __FILE__, __LINE__); }
void gpu_assert(cudaError_t code, const char *file, const int line)
{
if (code != cudaSuccess)
{
std::cerr << "gpu_assert: " << cudaGetErrorString(code) << " "
<< file << " " << line << std::endl;
exit(code);
}
}
int main(void)
{
const int nitems = 32;
const size_t item_sz = 16;
const size_t buf_sz = size_t(nitems) * item_sz;
// Gpu memory for sequences
char *_buf;
gpuQ( cudaMalloc((void **)&_buf, buf_sz) );
gpuQ( cudaMemset(_buf, 0x7a, buf_sz) );
// Host array for holding sequence device pointers
char **seq = new char*[nitems];
size_t offset = 0;
for(int i=0; i<nitems; i++, offset += item_sz) {
seq[i] = _buf + offset;
}
// Device array holding sequence pointers
char **_seq;
size_t seq_sz = sizeof(char*) * size_t(nitems);
gpuQ( cudaMalloc((void **)&_seq, seq_sz) );
gpuQ( cudaMemcpy(_seq, seq, seq_sz, cudaMemcpyHostToDevice) );
// Host copy of the xdev structure to copy to the device
collapsed xdev_host;
xdev_host.num = nitems;
xdev_host.seq = _seq;
// Copy to device symbol
gpuQ( cudaMemcpyToSymbol(xdev, &xdev_host, sizeof(collapsed)) );
// Run Kernel
kernel<<<1,nitems>>>(item_sz);
// Copy back buffer
char *buf = new char[buf_sz];
gpuQ( cudaMemcpy(buf, _buf, buf_sz, cudaMemcpyDeviceToHost) );
// Print out seq values
// Each string should be ASCII starting from '0' (0x30)
char *seq_vals = buf;
for(int i=0; i<nitems; i++, seq_vals += item_sz) {
std::string s;
s.append(seq_vals, item_sz);
std::cout << s << std::endl;
}
return 0;
}
and here it is compiled and run:
$ /usr/local/cuda/bin/nvcc -arch=sm_12 -Xptxas=-v -g -G -o erogol erogol.cu
./erogol.cu(19): Warning: Cannot tell what pointer points to, assuming global memory space
ptxas info : 8 bytes gmem, 4 bytes cmem[14]
ptxas info : Compiling entry function '_Z6kernelm' for 'sm_12'
ptxas info : Used 5 registers, 20 bytes smem, 4 bytes cmem[1]
$ /usr/local/cuda/bin/cuda-memcheck ./erogol
========= CUDA-MEMCHECK
0000000000000000
1111111111111111
2222222222222222
3333333333333333
4444444444444444
5555555555555555
6666666666666666
7777777777777777
8888888888888888
9999999999999999
::::::::::::::::
;;;;;;;;;;;;;;;;
<<<<<<<<<<<<<<<<
================
>>>>>>>>>>>>>>>>
????????????????
@@@@@@@@@@@@@@@@
AAAAAAAAAAAAAAAA
BBBBBBBBBBBBBBBB
CCCCCCCCCCCCCCCC
DDDDDDDDDDDDDDDD
EEEEEEEEEEEEEEEE
FFFFFFFFFFFFFFFF
GGGGGGGGGGGGGGGG
HHHHHHHHHHHHHHHH
IIIIIIIIIIIIIIII
JJJJJJJJJJJJJJJJ
KKKKKKKKKKKKKKKK
LLLLLLLLLLLLLLLL
MMMMMMMMMMMMMMMM
NNNNNNNNNNNNNNNN
OOOOOOOOOOOOOOOO
========= ERROR SUMMARY: 0 errors
Some notes:
To simplify things a bit, I have only used a single memory allocation _buf to hold all of the string data. Each value of seq is set to a different address within _buf. This is functionally equivalent to running a separate cudaMalloc call for each pointer, but much faster.
The key concept is to assemble a copy of the structure you wish to access on the device in host memory, then copy that to the device. All of the pointers in my xdev_host are device pointers. The CUDA API doesn't have any sort of deep copy or automatic pointer translation facility, so it is the programmer's responsibility to make sure this is correct.
Each thread in the kernel just fills its sequence with a different ASCII character. Note that I have declared my xdev as a structure, rather than a pointer to a structure, and copy values rather than a reference to the __device__ symbol (again to simplify things slightly). But otherwise the sequence of operations is what you would need to make your design pattern work.
Because I only have access to a compute 1.x device, the compiler issues a warning. On compute 2.x and 3.x devices this won't happen, because of the improved memory model in those devices. The warning is normal and can be safely ignored.
Because each sequence is just written into a different part of _buf, I can transfer all the sequences back to the host with a single cudaMemcpy call.