PAPI Counter Issues

I wrote the following code to get L3 cache miss information.
#include <stdlib.h>
#include <stdio.h>
#include <math.h>
#include <papi.h>

int main( int argc, char *argv[] ) {
    int i;
    long long counters[3];
    counters[0] = counters[1] = counters[2] = 0;
    int PAPI_events[] = {
        PAPI_TOT_CYC,
        PAPI_L3_TCM,
        PAPI_L3_DCA };
    PAPI_library_init(PAPI_VER_CURRENT);
    i = PAPI_start_counters(PAPI_events, 3);
    printf("Measuring instruction count for this printf\n");
    PAPI_read_counters(counters, 3);
    printf("%lld L3 cache misses %lld L3 cache accesses in %lld cycles\n",
           counters[1], counters[2], counters[0] );
    return 0;
}
But I get the error and zero counter values shown below. What could be wrong?
PAPI Error: pfm_find_full_event(RETIRED_MISPREDICTED_BRANCH_INSTRUCTIONS,0x7fff22fe65a0): event not found.
PAPI Error: 1 of 4 events in papi_events.csv were not valid.
Measuring instruction count for this printf
0 L3 cache misses 0 L3 cache accesses in 0 cycles
I checked available counters with papi_avail -a and the counters seem to be supported. CPU information given below.
Available events and hardware information.
--------------------------------------------------------------------------------
PAPI Version : 5.1.1.0
Vendor string and code : GenuineIntel (1)
Model string and code : Intel(R) Xeon(R) CPU E7- 4830 @ 2.13GHz (47)
CPU Revision : 2.000000
CPUID Info : Family: 6 Model: 47 Stepping: 2
CPU Max Megahertz : 2128
CPU Min Megahertz : 2128
Hdw Threads per core : 1
Cores per Socket : 8
NUMA Nodes : 4
CPUs per Node : 8
Total CPUs : 32
Running in a VM : no
Number Hardware Counters : 7
Max Multiplex Counters : 64
uname output
2.6.32-431.17.1.el6.x86_64 #1 SMP Fri Apr 11 17:27:00 EDT 2014 x86_64 x86_64 x86_64 GNU/Linux
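A diagnostic sketch (not a fix for the papi_events.csv error above, just a way to make silent failures visible) that checks every return code of the same high-level calls; if PAPI_start_counters does not return PAPI_OK, zero counter values are exactly what you would expect:
#include <stdio.h>
#include <stdlib.h>
#include <papi.h>

int main(void) {
    int events[] = { PAPI_TOT_CYC, PAPI_L3_TCM, PAPI_L3_DCA };
    long long counters[3] = { 0, 0, 0 };
    int ret;

    // PAPI_library_init returns the current version number on success.
    ret = PAPI_library_init(PAPI_VER_CURRENT);
    if (ret != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI_library_init failed: %s\n", PAPI_strerror(ret));
        return EXIT_FAILURE;
    }
    // If any of the three events cannot be programmed, this call fails.
    ret = PAPI_start_counters(events, 3);
    if (ret != PAPI_OK) {
        fprintf(stderr, "PAPI_start_counters failed: %s\n", PAPI_strerror(ret));
        return EXIT_FAILURE;
    }
    printf("Measuring instruction count for this printf\n");
    ret = PAPI_read_counters(counters, 3);
    if (ret != PAPI_OK) {
        fprintf(stderr, "PAPI_read_counters failed: %s\n", PAPI_strerror(ret));
        return EXIT_FAILURE;
    }
    printf("%lld L3 cache misses, %lld L3 cache accesses in %lld cycles\n",
           counters[1], counters[2], counters[0]);
    return 0;
}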

Related

Why a 'for loop' inside a 'parallel for loop' takes longer than the same 'for loop' in a serial region?

I am testing the performance of a cluster using 64 threads. I have written this simple piece of code:
#include <iostream>
#include <vector>
#include <omp.h>

using namespace std;

int main() {
    unsigned int m(67000);
    double start_time_i(0.), end_time_i(0.), start_time_init(0.), end_time_init(0.),
           diff_time_i(0.), start_time_j(0.), end_time_j(0.), diff_time_j(0.), total_time(0.);

    cout << "omp_get_max_threads : " << omp_get_max_threads() << endl;
    cout << "omp_get_num_procs : " << omp_get_num_procs() << endl;
    omp_set_num_threads(omp_get_max_threads());

    unsigned int dim_i = omp_get_max_threads();
    unsigned int dim_j = dim_i * m;

    std::vector<std::vector<unsigned int>> vector;
    vector.resize(dim_i, std::vector<unsigned int>(dim_j, 0));

    start_time_init = omp_get_wtime();
    for (unsigned int j = 0; j < dim_j; j++) {
        vector[0][j] = j;
    }
    end_time_init = omp_get_wtime();

    start_time_i = omp_get_wtime();
    #pragma omp parallel for
    for (unsigned int i = 0; i < dim_i; i++) {
        start_time_j = omp_get_wtime();
        for (unsigned int j = 0; j < dim_j; j++) vector[i][j] = i + j;
        end_time_j = omp_get_wtime();
        cout << "i " << i << " thread " << omp_get_thread_num()
             << " int_time = " << (end_time_j - start_time_j) * 1000 << endl;
    }
    end_time_i = omp_get_wtime();

    cout << "time_final = " << (end_time_i - start_time_i) * 1000 << endl;
    cout << "initial non parallel region time = " << (end_time_init - start_time_init) * 1000 << endl;
    return 0;
}
I do not understand why (end_time_j-start_time_j)*1000 is much larger (around 50) than the time needed to run the same loop over j outside the parallel region, i.e. end_time_init-start_time_init (around 1).
omp_get_max_threads() and omp_get_num_procs() are both equal to 64.
In your loop you just fill a memory region with a lot of values. This task is not computationally expensive; it is limited by the speed of memory writes. One thread can write at a certain rate, but when you use N threads simultaneously the total memory bandwidth stays the same on shared-memory multicore systems (i.e. most PCs and laptops), while it increases on distributed-memory multicore systems (high-end servers). For more details please read this.
So, depending on the system, the per-thread write speed either stays roughly the same or drops when several such loops run concurrently. To me a 50x difference seems a bit large. I got the following results on Compiler Explorer (which suggests it behaves like a distributed-memory multicore system):
omp_get_max_threads : 4
omp_get_num_procs : 2
i 2 thread 2 int_time = 0.095537
i 0 thread 0 int_time = 0.084061
i 1 thread 1 int_time = 0.099578
i 3 thread 3 int_time = 0.10519
time_final = 0.868523
initial non parallel region time = 0.090862
On my laptop I got the following (so it is a shared-memory multicore system):
omp_get_max_threads : 8
omp_get_num_procs : 8
i 7 thread 7 int_time = 0.7518
i 5 thread 5 int_time = 1.0555
i 1 thread 1 int_time = 1.2755
i 6 thread 6 int_time = 1.3093
i 2 thread 2 int_time = 1.3093
i 3 thread 3 int_time = 1.3093
i 4 thread 4 int_time = 1.3093
i 0 thread 0 int_time = 1.3093
time_final = 1.915
initial non parallel region time = 0.1578
In conclusion, it does depend on the system you are using; the sketch below is a minimal way to run the same write-bandwidth test on your own machine.
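For reference, a minimal sketch of such a test, independent of the question's code (buffer size and thread handling are arbitrary assumptions): one thread fills a buffer, then all threads fill their own buffers at the same time.
// Compile with: g++ -O2 -fopenmp bandwidth_sketch.cpp
#include <omp.h>
#include <cstdio>
#include <vector>

int main() {
    const std::size_t n = std::size_t(1) << 22;      // ~4M unsigned ints (16 MiB) per buffer
    const int nthreads = omp_get_max_threads();
    std::vector<std::vector<unsigned int>> buf(
        nthreads, std::vector<unsigned int>(n, 0u));

    double t0 = omp_get_wtime();
    for (std::size_t j = 0; j < n; ++j) buf[0][j] = (unsigned int)j;   // one thread writing
    double t1 = omp_get_wtime();

    double t2 = omp_get_wtime();
    #pragma omp parallel for
    for (int i = 0; i < nthreads; ++i)
        for (std::size_t j = 0; j < n; ++j)
            buf[i][j] = (unsigned int)(i + j);        // all threads writing at once
    double t3 = omp_get_wtime();

    std::printf("1 thread : %.3f ms\n", (t1 - t0) * 1e3);
    std::printf("%d threads: %.3f ms for the same per-thread work\n",
                nthreads, (t3 - t2) * 1e3);
    return 0;
}
If the time in the parallel case grows with the number of threads, the loop is limited by memory bandwidth rather than by computation.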

Why does the same OpenCL code have different outputs from Intel Xeon CPU and NVIDIA GTX 1080 Ti GPU?

I am trying to parallelize a Monte Carlo simulation using OpenCL. I use MWC64X as the uniform random number generator. The code runs well on different Intel CPUs, in that the output of the parallel computation is very close to that of the sequential one.
Using OpenCL device: Intel(R) Xeon(R) CPU E5-2630L v3 @ 1.80GHz
Literal influence running time: 0.029048 seconds r1 seqInfl= 0.4771
Literal influence running time: 0.029762 seconds r2 seqInfl= 0.4771
Literal influence running time: 0.029742 seconds r3 seqInfl= 0.4771
Literal influence running time: 0.02971 seconds ra seqInfl= 0.4771
Literal influence running time: 0.029225 seconds trust1-57 seqInfl= 0.6001
Literal influence running time: 0.04992 seconds trust110-1 seqInfl= 0
Literal influence running time: 0.034636 seconds trust4-57 seqInfl= 0
Literal influence running time: 0.049079 seconds trust57-110 seqInfl= 0
Literal influence running time: 0.024442 seconds trust57-4 seqInfl= 0.8026
Literal influence running time: 0.04946 seconds trust33-1 seqInfl= 0
Literal influence running time: 0.049071 seconds trust57-33 seqInfl= 0
Literal influence running time: 0.053117 seconds trust4-1 seqInfl= 0.1208
Literal influence running time: 0.051642 seconds trust57-1 seqInfl= 0
Literal influence running time: 0.052052 seconds trust57-64 seqInfl= 0
Literal influence running time: 0.052118 seconds trust64-1 seqInfl= 0
Literal influence running time: 0.051998 seconds trust57-7 seqInfl= 0
Literal influence running time: 0.052069 seconds trust7-1 seqInfl= 0
Total number of literals: 17
Sequential influence running time: 0.71728 seconds
Sequential maxInfluence Literal: trust57-4 0.8026
index1= 17 size= 51 dim1_size= 6
sum0:4781 influence0:0.478100 sum2:4781 influence2:0.478100 sum6:0 influence6:0.000000 sum10:0 sum12:0 influence12:0.000000 sum7:0 influence7:0.000000 influence10:0.000000 sum4:5962 influence4:0.596200 sum8:7971 influence8:0.797100 sum1:4781 influence1:0.478100 sum3:4781 influence3:0.478100 sum13:0 influence13:0.000000 sum11:1261 influence11:0.126100 sum9:0 influence9:0.000000 sum14:0 influence14:0.000000 sum5:0 influence5:0.000000 sum15:0 influence15:0.000000 sum16:0 influence16:0.000000
Parallel influence running time: 0.054391 seconds
Parallel maxInfluence Literal: trust57-4 Infl=0.7971
However, when I run the code on a GeForce GTX 1080 Ti, with NVIDIA-SMI 430.40, CUDA 10.1 and OpenCL 1.2 CUDA installed, the output is as below:
Using OpenCL device: GeForce GTX 1080 Ti
Influence:
Literal influence running time: 0.011119 seconds r1 seqInfl= 0.4771
Literal influence running time: 0.011238 seconds r2 seqInfl= 0.4771
Literal influence running time: 0.011408 seconds r3 seqInfl= 0.4771
Literal influence running time: 0.01109 seconds ra seqInfl= 0.4771
Literal influence running time: 0.011132 seconds trust1-57 seqInfl= 0.6001
Literal influence running time: 0.018978 seconds trust110-1 seqInfl= 0
Literal influence running time: 0.013093 seconds trust4-57 seqInfl= 0
Literal influence running time: 0.018968 seconds trust57-110 seqInfl= 0
Literal influence running time: 0.009105 seconds trust57-4 seqInfl= 0.8026
Literal influence running time: 0.018753 seconds trust33-1 seqInfl= 0
Literal influence running time: 0.018583 seconds trust57-33 seqInfl= 0
Literal influence running time: 0.02005 seconds trust4-1 seqInfl= 0.1208
Literal influence running time: 0.01957 seconds trust57-1 seqInfl= 0
Literal influence running time: 0.019686 seconds trust57-64 seqInfl= 0
Literal influence running time: 0.019632 seconds trust64-1 seqInfl= 0
Literal influence running time: 0.019687 seconds trust57-7 seqInfl= 0
Literal influence running time: 0.019859 seconds trust7-1 seqInfl= 0
Total number of literals: 17
Sequential influence running time: 0.272032 seconds
Sequential maxInfluence Literal: trust57-4 0.8026
index1= 17 size= 51 dim1_size= 6
sum0:10000 sum1:10000 sum2:10000 sum3:10000 sum4:10000 sum5:0 sum6:0 sum7:0 sum8:10000 sum9:0 sum10:0 sum11:0 sum12:0 sum13:0 sum14:0 sum15:0 sum16:0
Parallel influence running time: 0.193581 seconds
The "Influence" value equals sum*1.0/10000, thus the parallel influence only composes of 1 and 0, which is incorrect (in GPU runs) and doesn't happen when parallelizing on a Intel CPU.
When I check the output of the random number generator if(flag==0) printf("randint=%u",randint);, it seems the outputs are all zero on GPU. Below is the clinfo and the .cl code:
Device Name GeForce GTX 1080 Ti
Device Vendor NVIDIA Corporation
Device Vendor ID 0x10de
Device Version OpenCL 1.2 CUDA
Driver Version 430.40
Device OpenCL C Version OpenCL C 1.2
Device Type GPU
Device Topology (NV) PCI-E, 68:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 28
Max clock frequency 1721MHz
Compute Capability (NV) 6.1
Device Partition (core)
Max number of sub-devices 1
Supported partition types None
Max work item dimensions 3
Max work item sizes 1024x1024x64
Max work group size 1024
Preferred work group size multiple 32
Warp size (NV) 32
Preferred / native vector sizes
char 1 / 1
short 1 / 1
int 1 / 1
long 1 / 1
half 0 / 0 (n/a)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 11720130560 (10.92GiB)
Error Correction support No
Max memory allocation 2930032640 (2.729GiB)
Unified memory for Host and Device No
Integrated memory (NV) No
Minimum alignment for any data type 128 bytes
Alignment of base address 4096 bits (512 bytes)
Global Memory cache type Read/Write
Global Memory cache size 458752 (448KiB)
Global Memory cache line size 128 bytes
Image support Yes
Max number of samplers per kernel 32
Max size for 1D images from buffer 134217728 pixels
Max 1D or 2D image array size 2048 images
Max 2D image size 16384x32768 pixels
Max 3D image size 16384x16384x16384 pixels
Max number of read image args 256
Max number of write image args 16
Local memory type Local
Local memory size 49152 (48KiB)
Registers per block (NV) 65536
Max number of constant args 9
Max constant buffer size 65536 (64KiB)
Max size of kernel argument 4352 (4.25KiB)
Queue properties
Out-of-order execution Yes
Profiling Yes
Prefer user sync for interop No
Profiling timer resolution 1000ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Kernel execution timeout (NV) Yes
Concurrent copy and kernel execution (NV) Yes
Number of async copy engines 2
printf() buffer size 1048576 (1024KiB)
#define N 70            // N > index, which is the total number of literals
#define BASE 4294967296UL

//! Represents the state of a particular generator
typedef struct{ uint x; uint c; } mwc64x_state_t;

enum{ MWC64X_A = 4294883355U };
enum{ MWC64X_M = 18446383549859758079UL };

void MWC64X_Step(mwc64x_state_t *s)
{
    uint X=s->x, C=s->c;
    uint Xn=MWC64X_A*X+C;
    uint carry=(uint)(Xn<C);              // The (Xn<C) will be zero or one for scalar
    uint Cn=mad_hi(MWC64X_A,X,carry);
    s->x=Xn;
    s->c=Cn;
}

//! Return a 32-bit integer in the range [0..2^32)
uint MWC64X_NextUint(mwc64x_state_t *s)
{
    uint res=s->x ^ s->c;
    MWC64X_Step(s);
    return res;
}

__kernel void setInfluence(const int literals, const int size, const int dim1_size,
                           __global int* lambdas, __global float* lambdap,
                           __global int* dim2_size, __global float* influence){
    int flag=get_global_id(0);
    int sum=0;
    int count=10000;
    int assignment[N];
    //or try to get newlambda like original version does
    if(flag < literals){
        mwc64x_state_t rng;
        for(int i=0; i<count; i++){
            for(int j=0; j<size; j++){
                uint randint=MWC64X_NextUint(&rng);
                float rand=randint*1.0/BASE;
                //if(flag==0)
                //    printf("randint=%u",randint);
                if(lambdap[j]<rand)
                    assignment[lambdas[j]]=0;
                else
                    assignment[lambdas[j]]=1;
            }
            //the true case
            assignment[flag]=1;
            int valuet=0;
            int index=0;
            for(int m=0; m<dim1_size; m++){
                int valueMono=1;
                for(int n=0; n<dim2_size[m]; n++){
                    if(assignment[lambdas[index+n]]==0){
                        valueMono=0;
                        index+=dim2_size[m];
                        break;
                    }
                }
                if(valueMono==1){
                    valuet=1;
                    break;
                }
            }
            //the false case
            assignment[flag]=0;
            int valuef=0;
            index=0;
            for(int m=0; m<dim1_size; m++){
                int valueMono=1;
                for(int n=0; n<dim2_size[m]; n++){
                    if(assignment[lambdas[index+n]]==0){
                        valueMono=0;
                        index+=dim2_size[m];
                        break;
                    }
                }
                if(valueMono==1){
                    valuef=1;
                    break;
                }
            }
            sum += valuet-valuef;
        }
        influence[flag] = 1.0*sum/count;
        printf("sum%d:%d\t", flag, sum);
    }
}
What might be the problem when running the code on GPU? Is it MWC64X? According to its author, it can perform well on NVIDIA GPUs. If so, how can I fix it; if not, what might be the problem?
(This started out as a comment; it turned out to be the source of the problem, so I'm turning it into an answer.)
You're not initialising your mwc64x_state_t rng; variable before reading from it, so any results will be undefined:
mwc64x_state_t rng;
for(int i=0; i<count; i++){
    for(int j=0; j<size; j++){
        uint randint=MWC64X_NextUint(&rng);
Where MWC64X_NextUint() immediately reads from the rng state before updating it:
uint MWC64X_NextUint(mwc64x_state_t *s)
{
uint res=s->x ^ s->c;
Note that you will probably want to seed your RNG differently for each work-item, otherwise you will get nasty correlation artifacts in your results.
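For illustration, a sketch of one way to do that seeding at the top of setInfluence(), using the skip-ahead helper shipped with mwc64x (check the exact signature in your copy of mwc64x_rng.cl); baseOffset is a hypothetical extra kernel argument chosen on the host so that runs stay reproducible:
// Sketch only: seed each work-item before the Monte Carlo loops.
mwc64x_state_t rng;
ulong samplesPerWorkItem = (ulong)count * (ulong)size;      // draws this work-item will consume
MWC64X_SeedStreams(&rng, baseOffset, samplesPerWorkItem);   // non-overlapping sub-stream per work-item
Re-running with the same baseOffset then reproduces the same streams, while different work-items never share one.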
Every use-case of pseudo-random numbers is a next-level challenge on truly [PARALLEL] computing platforms (platforms, not languages).
Either there is some hardware source of randomness, which becomes a problem once massively parallel requests have to be served fairly in a truly [PARALLEL] fashion (hardware can help here, but at the cost of not being able to reproduce the same behaviour outside of that platform and moment in time, unless the source is software-operated with some seed-injection feature that configures an otherwise purely [SERIAL] sequence of "just"-pseudo-random numbers).
Or there is some "shared" generator of pseudo-random numbers, which enjoys a higher system-wide level of entropy (good for the resulting "quality" of the pseudo-randomness) but at the cost of a purely serial dependence (no parallel execution possible; the sequence is served one request after another) and with close to zero chance of repeatable runs producing the same sequences, which is a must for reproducible science, testing and method validation.
SUMMARY:
The code may employ work-item-"private" pseudo-random generator functions (privacy is a must both for parallel code execution and for the mutual independence of the generated streams), yet each instance must be a) independently initialised, to provide the expected level of randomness in parallelised code-runs, and b) initialised in a repeatably reproducible manner, so that tests can be re-run at different times and on different OpenCL target platforms.
For __kernel-s that do not rely on hardware-specific sources of randomness, meeting conditions a) and b) is enough to obtain repeatably reproducible (identical) results for testing "in vitro", while still providing a reasonably random method for production-level code-runs "in vivo".
The comparison of net run-times (benchmarked above) seems to show that Amdahl's-law add-on overhead costs, plus a tail-end effect of the atomicity of work, made the net run-time about 3.6x faster on the XEON than on the GPU:
index1 = 17
size = 51
dim1_size = 6
sum0: 4781 influence0: 0.478100
sum2: 4781 influence2: 0.478100
sum6: 0 influence6: 0.000000
sum10: 0 influence10: 0.000000
sum12: 0 influence12: 0.000000
sum7: 0 influence7: 0.000000
sum4: 5962 influence4: 0.596200
sum8: 7971 influence8: 0.797100
sum1: 4781 influence1: 0.478100
sum3: 4781 influence3: 0.478100
sum13: 0 influence13: 0.000000
sum11: 1261 influence11: 0.126100
sum9: 0 influence9: 0.000000
sum14: 0 influence14: 0.000000
sum5: 0 influence5: 0.000000
sum15: 0 influence15: 0.000000
sum16: 0 influence16: 0.000000
Parallel influence running time: 0.054391 seconds on XEON E5-2630L v3 @ 1.80GHz using OpenCL

index1 = 17
size = 51
dim1_size = 6
sum0: 10000
sum1: 10000
sum2: 10000
sum3: 10000
sum4: 10000
sum5: 0
sum6: 0
sum7: 0
sum8: 10000
sum9: 0
sum10: 0
sum11: 0
sum12: 0
sum13: 0
sum14: 0
sum15: 0
sum16: 0
Parallel influence running time: 0.193581 seconds on GeForce GTX 1080 Ti using OpenCL

SIGXCPU raised by setrlimit RLIMIT_CPU later than expected in a virtual machine

[EDIT: added MCVE in the text, clarifications]
I have the following program that sets RLIMIT_CPU to 2 seconds using setrlimit() and catches the signal. RLIMIT_CPU limits CPU time. «When the process reaches the soft limit, it is sent a SIGXCPU signal. The default action for this signal is to terminate the process. However, the signal can be caught, and the handler can return control to the main program.» (man)
The following program sets RLIMIT_CPU and a signal handler for SIGXCPU, then it generates random numbers until SIGXCPU gets raised, the signal handler simply exits the program.
test_signal.cpp
/*
* Test program for signal handling on CMS.
*
* Compile with:
* /usr/bin/g++ [-DDEBUG] -Wall -std=c++11 -O2 -pipe -static -s \
* -o test_signal test_signal.cpp
*
* The option -DDEBUG activates some debug logging in the helpers library.
*/
#include <iostream>
#include <fstream>
#include <random>
#include <chrono>
#include <iostream>
#include <unistd.h>
#include <csignal>
#include <sys/time.h>
#include <sys/resource.h>
using namespace std;
namespace helpers {
long long start_time = -1;
volatile sig_atomic_t timeout_flag = false;
unsigned const timelimit = 2; // soft limit on CPU time (in seconds)
void setup_signal(void);
void setup_time_limit(void);
static void signal_handler(int signum);
long long get_elapsed_time(void);
bool has_reached_timeout(void);
void setup(void);
}
namespace {
unsigned const minrand = 5;
unsigned const maxrand = 20;
int const numcycles = 5000000;
};
/*
* Very simple debugger, enabled at compile time with -DDEBUG.
* If enabled, it prints on stderr, otherwise it does nothing (it does not
* even evaluate the expression on its right-hand side).
*
* Main ideas taken from:
* - C++ enable/disable debug messages of std::couts on the fly
* (https://stackoverflow.com/q/3371540/2377454)
* - Standard no-op output stream
* (https://stackoverflow.com/a/11826787/2377454)
*/
#ifdef DEBUG
#define debug true
#else
#define debug false
#endif
#define debug_logger if (!debug) \
{} \
else \
cerr << "[DEBUG] helpers::"
// conversion factor between seconds and nanoseconds
#define NANOS 1000000000
// signal to handle
#define SIGNAL SIGXCPU
#define TIMELIMIT RLIMIT_CPU
/*
* This could be a function factory returning a closure of the signal-handling
* function, so that we could explicitly pass the output ofstream and close it.
* Alas, C++ supports closures only through lambdas, and the signal-handling
* function must be a plain function pointer; a capturing lambda is a different
* kind of object that cannot be converted to one. See:
* - Passing lambda as function pointer
* (https://stackoverflow.com/a/28746827/2377454)
*/
void helpers::signal_handler(int signum) {
helpers::timeout_flag = true;
debug_logger << "signal_handler:\t" << "signal " << signum \
<< " received" << endl;
debug_logger << "signal_handler:\t" << "exiting after " \
<< helpers::get_elapsed_time() << " microseconds" << endl;
exit(0);
}
/*
* Set function signal_handler() as handler for SIGXCPU using sigaction. See
* - https://stackoverflow.com/q/4863420/2377454
* - https://stackoverflow.com/a/17572787/2377454
*/
void helpers::setup_signal() {
debug_logger << "set_signal:\t" << "set_signal() called" << endl;
struct sigaction new_action;
//Set the handler in the new_action struct
new_action.sa_handler = signal_handler;
// Set sa_mask to empty: no additional signal is blocked while the handler runs.
sigemptyset(&new_action.sa_mask);
// Block SIGXCPU while the handler itself runs (its delivery is deferred, not ignored).
sigaddset(&new_action.sa_mask, SIGNAL);
// Remove any flag from sa_flag
new_action.sa_flags = 0;
// Set new action
sigaction(SIGNAL,&new_action,NULL);
if(debug) {
struct sigaction tmp;
// read the old signal associated to SIGXCPU
sigaction(SIGNAL, NULL, &tmp);
debug_logger << "set_signal:\t" << "action.sa_handler: " \
<< tmp.sa_handler << endl;
}
return;
}
/*
* Set soft CPU time limit.
* RLIMIT_CPU sets the CPU time limit in seconds.
* See:
* - https://www.go4expert.com/articles/
* getrlimit-setrlimit-control-resources-t27477/
* - https://gist.github.com/Leporacanthicus/11086960
*/
void helpers::setup_time_limit(void) {
debug_logger << "set_limit:\t\t" << "set_limit() called" << endl;
struct rlimit limit;
if(getrlimit(TIMELIMIT, &limit) != 0) {
perror("error calling getrlimit()");
exit(EXIT_FAILURE);
}
limit.rlim_cur = helpers::timelimit;
if(setrlimit(TIMELIMIT, &limit) != 0) {
perror("error calling setrlimit()");
exit(EXIT_FAILURE);
}
if (debug) {
struct rlimit tmp;
getrlimit(TIMELIMIT, &tmp);
debug_logger << "set_limit:\t\t" << "current limit: " << tmp.rlim_cur \
<< " seconds" << endl;
}
return;
}
void helpers::setup(void) {
struct timespec start;
if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &start)) {
exit(EXIT_FAILURE);
}
start_time = start.tv_sec*NANOS + start.tv_nsec;
setup_signal();
setup_time_limit();
return;
}
long long helpers::get_elapsed_time(void) {
struct timespec current;
if (clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &current)) {
exit(EXIT_FAILURE);
}
long long current_time = current.tv_sec*NANOS + current.tv_nsec;
long long elapsed_micro = (current_time - start_time)/1000 + \
((current_time - start_time) % 1000 >= 500);
return elapsed_micro;
}
bool helpers::has_reached_timeout(void) {
return helpers::timeout_flag;
}
int main() {
helpers::setup();
ifstream in("input.txt");
in.close();
ofstream out("output.txt");
random_device rd;
mt19937 eng(rd());
uniform_int_distribution<> distr(minrand, maxrand);
int i = 0;
while(!helpers::has_reached_timeout()) {
int nmsec;
for(int n=0; n<numcycles; n++) {
nmsec = distr(eng);
}
cout << "i: " << i << "\t- nmsec: " << nmsec << "\t- ";
out << "i: " << i << "\t- nmsec: " << nmsec << "\t- ";
cout << "program has been running for " << \
helpers::get_elapsed_time() << " microseconds" << endl;
out << "program has been running for " << \
helpers::get_elapsed_time() << " microseconds" << endl;
i++;
}
return 0;
}
I compile it as follows:
/usr/bin/g++ -DDEBUG -Wall -std=c++11 -O2 -pipe -static -s -o test_signal test_signal.cpp
On my laptop it correctly gets a SIGXCPU after 2 seconds, see the output:
$ /usr/bin/time -v ./test_signal
[DEBUG] helpers::set_signal: set_signal() called
[DEBUG] helpers::set_signal: action.sa_handler: 1
[DEBUG] helpers::set_limit: set_limit() called
[DEBUG] helpers::set_limit: current limit: 2 seconds
i: 0 - nmsec: 11 - program has been running for 150184 microseconds
i: 1 - nmsec: 18 - program has been running for 294497 microseconds
i: 2 - nmsec: 9 - program has been running for 422220 microseconds
i: 3 - nmsec: 5 - program has been running for 551882 microseconds
i: 4 - nmsec: 20 - program has been running for 685373 microseconds
i: 5 - nmsec: 16 - program has been running for 816642 microseconds
i: 6 - nmsec: 9 - program has been running for 951208 microseconds
i: 7 - nmsec: 20 - program has been running for 1085614 microseconds
i: 8 - nmsec: 20 - program has been running for 1217199 microseconds
i: 9 - nmsec: 12 - program has been running for 1350183 microseconds
i: 10 - nmsec: 17 - program has been running for 1486431 microseconds
i: 11 - nmsec: 13 - program has been running for 1619845 microseconds
i: 12 - nmsec: 20 - program has been running for 1758074 microseconds
i: 13 - nmsec: 11 - program has been running for 1895408 microseconds
[DEBUG] helpers::signal_handler: signal 24 received
[DEBUG] helpers::signal_handler: exiting after 2003326 microseconds
Command being timed: "./test_signal"
User time (seconds): 1.99
System time (seconds): 0.00
Percent of CPU this job got: 99%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:02.01
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 1644
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 59
Voluntary context switches: 1
Involuntary context switches: 109
Swaps: 0
File system inputs: 0
File system outputs: 16
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
If I compile and run in a virtual machine (VirtualBox, running Ubuntu), I get this:
$ /usr/bin/time -v ./test_signal
[DEBUG] helpers::set_signal: set_signal() called
[DEBUG] helpers::set_signal: action.sa_handler: 1
[DEBUG] helpers::set_limit: set_limit() called
[DEBUG] helpers::set_limit: current limit: 2 seconds
i: 0 - nmsec: 12 - program has been running for 148651 microseconds
i: 1 - nmsec: 13 - program has been running for 280494 microseconds
i: 2 - nmsec: 7 - program has been running for 428390 microseconds
i: 3 - nmsec: 5 - program has been running for 580805 microseconds
i: 4 - nmsec: 10 - program has been running for 714362 microseconds
i: 5 - nmsec: 19 - program has been running for 846853 microseconds
i: 6 - nmsec: 20 - program has been running for 981253 microseconds
i: 7 - nmsec: 7 - program has been running for 1114686 microseconds
i: 8 - nmsec: 7 - program has been running for 1249530 microseconds
i: 9 - nmsec: 12 - program has been running for 1392096 microseconds
i: 10 - nmsec: 20 - program has been running for 1531859 microseconds
i: 11 - nmsec: 19 - program has been running for 1667021 microseconds
i: 12 - nmsec: 13 - program has been running for 1818431 microseconds
i: 13 - nmsec: 17 - program has been running for 1973182 microseconds
i: 14 - nmsec: 7 - program has been running for 2115423 microseconds
i: 15 - nmsec: 20 - program has been running for 2255140 microseconds
i: 16 - nmsec: 13 - program has been running for 2394162 microseconds
i: 17 - nmsec: 10 - program has been running for 2528274 microseconds
i: 18 - nmsec: 15 - program has been running for 2667978 microseconds
i: 19 - nmsec: 8 - program has been running for 2803725 microseconds
i: 20 - nmsec: 9 - program has been running for 2940610 microseconds
i: 21 - nmsec: 19 - program has been running for 3075349 microseconds
i: 22 - nmsec: 14 - program has been running for 3215255 microseconds
i: 23 - nmsec: 5 - program has been running for 3356515 microseconds
i: 24 - nmsec: 5 - program has been running for 3497369 microseconds
[DEBUG] helpers::signal_handler: signal 24 received
[DEBUG] helpers::signal_handler: exiting after 3503271 microseconds
Command being timed: "./test_signal"
User time (seconds): 3.50
System time (seconds): 0.00
Percent of CPU this job got: 99%
Elapsed (wall clock) time (h:mm:ss or m:ss): 0:03.52
Average shared text size (kbytes): 0
Average unshared data size (kbytes): 0
Average stack size (kbytes): 0
Average total size (kbytes): 0
Maximum resident set size (kbytes): 1636
Average resident set size (kbytes): 0
Major (requiring I/O) page faults: 0
Minor (reclaiming a frame) page faults: 59
Voluntary context switches: 0
Involuntary context switches: 106
Swaps: 0
File system inputs: 0
File system outputs: 16
Socket messages sent: 0
Socket messages received: 0
Signals delivered: 0
Page size (bytes): 4096
Exit status: 0
Even running the binary compiled on my laptop, the process gets killed after around 3 seconds of elapsed user time.
Any idea of what could be causing this? For broader context, see this thread: https://github.com/cms-dev/cms/issues/851
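For what it's worth, a small diagnostic sketch (not part of the original post) that prints CLOCK_PROCESS_CPUTIME_ID and getrusage() CPU time side by side; RLIMIT_CPU is enforced against the kernel's own CPU-time accounting, so seeing the two sources diverge inside the VM would point at the accounting rather than at the signal handling:
#include <stdio.h>
#include <time.h>
#include <sys/resource.h>
#include <sys/time.h>

// Burn CPU and periodically report both process-CPU-time sources.
int main(void) {
    volatile unsigned long long x = 0;
    for (int iter = 0; iter < 60; ++iter) {
        for (int n = 0; n < 50000000; ++n) x += n;   // busy work

        struct timespec ts;
        clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts);

        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        double rusage_sec = ru.ru_utime.tv_sec + ru.ru_stime.tv_sec
                          + (ru.ru_utime.tv_usec + ru.ru_stime.tv_usec) / 1e6;

        printf("iter %d: CLOCK_PROCESS_CPUTIME_ID = %.3f s, getrusage = %.3f s\n",
               iter, ts.tv_sec + ts.tv_nsec / 1e9, rusage_sec);
    }
    return 0;
}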

Minimize cudaDeviceSynchronize launch overhead

I'm currently doing a project with CUDA where a pipeline is refreshed with 200-10000 new events every 1 ms. Each time, I want to call one (or two) kernels which compute a small list of outputs, then feed those outputs to the next element of the pipeline.
The theoretical flow is:
receive data in an std::vector
cudaMemcpy the vector to GPU
processing
generate small list of outputs
cudaMemcpy to the output std::vector
But when I call cudaDeviceSynchronize on a 1-block/1-thread empty kernel with no processing, it already takes on average 0.7 to 1.4 ms, which is higher than my 1 ms timeframe.
I could eventually change the timeframe of the pipeline to receive events every 5 ms, but with 5x more of them each time. It wouldn't be ideal, though.
What would be the best way to minimize the overhead of cudaDeviceSynchronize? Could streams be helpful in this situation? Or is there another solution to run the pipeline efficiently?
(Jetson TK1, compute capabilities 3.2)
Here's an nvprof log of the application:
==8285== NVPROF is profiling process 8285, command: python player.py test.rec
==8285== Profiling application: python player.py test.rec
==8285== Profiling result:
Time(%) Time Calls Avg Min Max Name
94.92% 47.697ms 5005 9.5290us 1.7500us 13.083us reset_timesurface(__int64, __int64*, __int64*, __int64*, __int64*, float*, float*, bool*, bool*, Event*)
5.08% 2.5538ms 8 319.23us 99.750us 413.42us [CUDA memset]
==8285== API calls:
Time(%) Time Calls Avg Min Max Name
75.00% 5.03966s 5005 1.0069ms 25.083us 11.143ms cudaDeviceSynchronize
17.44% 1.17181s 5005 234.13us 83.750us 3.1391ms cudaLaunch
4.71% 316.62ms 9 35.180ms 23.083us 314.99ms cudaMalloc
2.30% 154.31ms 50050 3.0830us 1.0000us 2.6866ms cudaSetupArgument
0.52% 34.857ms 5005 6.9640us 2.5000us 464.67us cudaConfigureCall
0.02% 1.2048ms 8 150.60us 71.917us 183.33us cudaMemset
0.01% 643.25us 83 7.7490us 1.3330us 287.42us cuDeviceGetAttribute
0.00% 12.916us 2 6.4580us 2.0000us 10.916us cuDeviceGetCount
0.00% 5.3330us 1 5.3330us 5.3330us 5.3330us cuDeviceTotalMem
0.00% 4.0830us 1 4.0830us 4.0830us 4.0830us cuDeviceGetName
0.00% 3.4160us 2 1.7080us 1.5830us 1.8330us cuDeviceGet
A small reproduction of the program follows (nvprof log at the end). For some reason the average cudaDeviceSynchronize time is 4 times lower here, but it is still really high for an empty 1-thread kernel:
/* Compile with `nvcc test.cu -I.`
* with -I pointing to "helper_cuda.h" and "helper_string.h" from CUDA samples
**/
#include <iostream>
#include <cuda.h>
#include <helper_cuda.h>
#define MAX_INPUT_BUFFER_SIZE 131072
typedef struct {
unsigned short x;
unsigned short y;
short a;
long long b;
} Event;
long long *d_a_[2], *d_b_[2];
float *d_as_, *d_bs_;
bool *d_some_bool_[2];
Event *d_data_;
int width_ = 320;
int height_ = 240;
__global__ void reset_timesurface(long long ts,
long long *d_a_0, long long *d_a_1,
long long *d_b_0, long long *d_b_1,
float *d_as, float *d_bs,
bool *d_some_bool_0, bool *d_some_bool_1, Event *d_data) {
// nothing here
}
void reset_errors(long long ts) {
static const int n = 1024;
static const dim3 grid_size(width_ * height_ / n
+ (width_ * height_ % n != 0), 1, 1);
static const dim3 block_dim(n, 1, 1);
reset_timesurface<<<1, 1>>>(ts, d_a_[0], d_a_[1],
d_b_[0], d_b_[1],
d_as_, d_bs_,
d_some_bool_[0], d_some_bool_[1], d_data_);
cudaDeviceSynchronize();
// static long long *h_holder = (long long*)malloc(sizeof(long long) * 2000);
// cudaMemcpy(h_holder, d_a_[0], 0, cudaMemcpyDeviceToHost);
}
int main(void) {
checkCudaErrors(cudaMalloc(&(d_a_[0]), sizeof(long long)*width_*height_*2));
checkCudaErrors(cudaMemset(d_a_[0], 0, sizeof(long long)*width_*height_*2));
checkCudaErrors(cudaMalloc(&(d_a_[1]), sizeof(long long)*width_*height_*2));
checkCudaErrors(cudaMemset(d_a_[1], 0, sizeof(long long)*width_*height_*2));
checkCudaErrors(cudaMalloc(&(d_b_[0]), sizeof(long long)*width_*height_*2));
checkCudaErrors(cudaMemset(d_b_[0], 0, sizeof(long long)*width_*height_*2));
checkCudaErrors(cudaMalloc(&(d_b_[1]), sizeof(long long)*width_*height_*2));
checkCudaErrors(cudaMemset(d_b_[1], 0, sizeof(long long)*width_*height_*2));
checkCudaErrors(cudaMalloc(&d_as_, sizeof(float)*width_*height_*2));
checkCudaErrors(cudaMemset(d_as_, 0, sizeof(float)*width_*height_*2));
checkCudaErrors(cudaMalloc(&d_bs_, sizeof(float)*width_*height_*2));
checkCudaErrors(cudaMemset(d_bs_, 0, sizeof(float)*width_*height_*2));
checkCudaErrors(cudaMalloc(&(d_some_bool_[0]), sizeof(bool)*width_*height_*2));
checkCudaErrors(cudaMemset(d_some_bool_[0], 0, sizeof(bool)*width_*height_*2));
checkCudaErrors(cudaMalloc(&(d_some_bool_[1]), sizeof(bool)*width_*height_*2));
checkCudaErrors(cudaMemset(d_some_bool_[1], 0, sizeof(bool)*width_*height_*2));
checkCudaErrors(cudaMalloc(&d_data_, sizeof(Event)*MAX_INPUT_BUFFER_SIZE));
for (int i = 0; i < 5005; ++i)
reset_errors(16487L);
cudaFree(d_a_[0]);
cudaFree(d_a_[1]);
cudaFree(d_b_[0]);
cudaFree(d_b_[1]);
cudaFree(d_as_);
cudaFree(d_bs_);
cudaFree(d_some_bool_[0]);
cudaFree(d_some_bool_[1]);
cudaFree(d_data_);
cudaDeviceReset();
}
/* nvprof ./a.out
==9258== NVPROF is profiling process 9258, command: ./a.out
==9258== Profiling application: ./a.out
==9258== Profiling result:
Time(%) Time Calls Avg Min Max Name
92.64% 48.161ms 5005 9.6220us 6.4160us 13.250us reset_timesurface(__int64, __int64*, __int64*, __int64*, __int64*, float*, float*, bool*, bool*, Event*)
7.36% 3.8239ms 8 477.99us 148.92us 620.17us [CUDA memset]
==9258== API calls:
Time(%) Time Calls Avg Min Max Name
53.12% 1.22036s 5005 243.83us 9.6670us 8.5762ms cudaDeviceSynchronize
25.10% 576.78ms 5005 115.24us 44.250us 11.888ms cudaLaunch
9.13% 209.77ms 9 23.308ms 16.667us 208.54ms cudaMalloc
6.56% 150.65ms 1 150.65ms 150.65ms 150.65ms cudaDeviceReset
5.33% 122.39ms 50050 2.4450us 833ns 6.1167ms cudaSetupArgument
0.60% 13.808ms 5005 2.7580us 1.0830us 104.25us cudaConfigureCall
0.10% 2.3845ms 9 264.94us 22.333us 537.75us cudaFree
0.04% 938.75us 8 117.34us 58.917us 169.08us cudaMemset
0.02% 461.33us 83 5.5580us 1.4160us 197.58us cuDeviceGetAttribute
0.00% 15.500us 2 7.7500us 3.6670us 11.833us cuDeviceGetCount
0.00% 7.6670us 1 7.6670us 7.6670us 7.6670us cuDeviceTotalMem
0.00% 4.8340us 1 4.8340us 4.8340us 4.8340us cuDeviceGetName
0.00% 3.6670us 2 1.8330us 1.6670us 2.0000us cuDeviceGet
*/
As detailed in the comments on the original message, my problem was entirely related to the GPU I'm using (Tegra K1). Here's an answer I found for this particular problem; it might be useful for other GPUs as well. The average cudaDeviceSynchronize time on my Jetson TK1 went from 250 us to 10 us.
The GPU clock (gbus) of the Tegra was 72000 kHz by default; we have to set it to 852000 kHz using these commands:
$ echo 852000000 > /sys/kernel/debug/clock/override.gbus/rate
$ echo 1 > /sys/kernel/debug/clock/override.gbus/state
We can find the list of available frequencies using this command:
$ cat /sys/kernel/debug/clock/gbus/possible_rates
72000 108000 180000 252000 324000 396000 468000 540000 612000 648000 684000 708000 756000 804000 852000 (kHz)
More performance can be obtained (again, in exchange for a higher power draw) from both the CPU and the GPU; check this link for more information.
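On the streams question raised above: a sketch (hypothetical names, assuming pinned host buffers, e.g. from cudaHostAlloc) of issuing the copy/kernel/copy sequence on a dedicated stream and synchronizing only that stream instead of the whole device:
#include <cuda_runtime.h>

struct Event { unsigned short x, y; short a; long long b; };   // as in the reproduction above

__global__ void process_events(const Event *ev, int n, float *out) {
    // hypothetical stand-in for the real processing kernel
}

cudaStream_t pipeline_stream;   // created once at start-up with cudaStreamCreate(&pipeline_stream)

void process_batch(const Event *h_events, int n_events,   // h_events: pinned host memory
                   Event *d_events, float *d_out,
                   float *h_out, int n_out) {              // h_out: pinned host memory
    cudaMemcpyAsync(d_events, h_events, n_events * sizeof(Event),
                    cudaMemcpyHostToDevice, pipeline_stream);
    process_events<<<(n_events + 255) / 256, 256, 0, pipeline_stream>>>(d_events, n_events, d_out);
    cudaMemcpyAsync(h_out, d_out, n_out * sizeof(float),
                    cudaMemcpyDeviceToHost, pipeline_stream);
    cudaStreamSynchronize(pipeline_stream);   // waits only for this stream's work
}
This does not remove the cost of the synchronization call itself, which on the TK1 was dominated by the low default GPU clock as shown above, but it keeps the rest of the device free and lets consecutive batches overlap copies and compute.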

Units of QueryPerformanceFrequency

A simple question:
What is the unit of QueryPerformanceFrequency?
Hz (ticks per second)?
Thank you very much,
Bruno
Q: Units of QueryPerformanceFrequency?
A: Hz, i.e. performance-counter ticks per second (but a tick is NOT a CPU clock cycle)
=========== DETAILS ==============================================
My measurements show that QueryPerformanceFrequency returns the number of performance-counter ticks per second, and QueryPerformanceCounter returns a running count of those same ticks. The counter is not the CPU clock: on my machine it ticks roughly 1000 times slower than the CPU, which makes the frequency easy to misread as being "in kHz". When you divide the elapsed ticks by the frequency, ticks/(ticks*sec^-1), everything cancels except seconds.
Here is an example C program stripped to just the essentials:
#include "stdio.h"
#include <windows.h> // Needed for LARGE_INTEGER
// gcc cpu.freq.test.c -o cft.exe
// cft.exe -> Sleep d_KLICKS=3417790, d_time=0.999182880 sec, CPU_Freq=3420585 KILO-Hz
void main(int argc, char *argv[]) {
// Clock KILO-ticks start, end, CPU_Freq in kHz. KILOs cancel
LARGE_INTEGER sklick, eklick, cpu_khz;
double delta_time; // Expected time in SECONDS. All units above are k.
QueryPerformanceFrequency(&cpu_khz); // Gets clock KILO-tics, Klicks/sec
QueryPerformanceCounter(&sklick); // Capture cpu Start Klicks
Sleep(1000); // Sleep 1000 MILLI-seconds
QueryPerformanceCounter(&eklick); // Capture cpu End Klicks
delta_time = (eklick.QuadPart-sklick.QuadPart) / (double)cpu_khz.QuadPart;
printf("Sleep d_KLICKS=%lld, d_time=%4.9lf sec, CPU_Freq=%lld KILO-Hz\n",
eklick.QuadPart-sklick.QuadPart, delta_time, cpu_khz.QuadPart);
}
It actually compiles! Running...
Sleep d_KLICKS=3418803, d_time=0.999479036 sec, CPU_Freq=3420585 KILO-Hz
The reported frequency is 3420585, i.e. about 3.42 MHz, while the CPU itself runs at about 3.4 GHz. Despite the KILO labels in the example above, that value is already in plain Hz: the performance counter is simply not the CPU clock, and on this machine one counter tick corresponds to roughly 1000 CPU cycles.
This is consistent with Microsoft's documentation:
https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408%28v=vs.85%29.aspx
QueryPerformanceFrequency(&Frequency);
QueryPerformanceCounter(&StartingTime);
// Activity to be timed
QueryPerformanceCounter(&EndingTime);
ElapsedMicroseconds.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
// We now have the elapsed number of ticks, along with the
// number of ticks-per-second.
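For completeness, a minimal sketch of the usual way to finish that conversion (scale to microseconds before dividing by the frequency, so the arithmetic stays in integers):
#include <windows.h>
#include <stdio.h>

int main(void) {
    LARGE_INTEGER Frequency, StartingTime, EndingTime, ElapsedMicroseconds;

    QueryPerformanceFrequency(&Frequency);      // performance-counter ticks per second
    QueryPerformanceCounter(&StartingTime);
    Sleep(250);                                 // activity to be timed
    QueryPerformanceCounter(&EndingTime);

    ElapsedMicroseconds.QuadPart = EndingTime.QuadPart - StartingTime.QuadPart;
    ElapsedMicroseconds.QuadPart *= 1000000;               // scale to microseconds first...
    ElapsedMicroseconds.QuadPart /= Frequency.QuadPart;    // ...then divide ticks by ticks/second

    printf("elapsed: %lld microseconds\n", ElapsedMicroseconds.QuadPart);
    return 0;
}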
The number of "elapsed ticks" in 1 second is in the MILLIONS, NOT BILLIONS so they are NOT UNIT-CPU-CLOCK-TICKS but KILO-CPU-CLOCK-TICKS
Same off-by-3-orders-of-magnitude error for FREQ: 3.4 MILLION is not "ticks-per-second" but THOUSAND-ticks-per-second.
As long as you divide one by the other, the ?clicks cancel with a result in seconds. If one were so fatuous as to take ms at their document and try to use their "ticks-per-second" in some other calculation, you would wind up off by a factor of 1000 or ~1 standard_ms_error!
Perhaps we should call Heinrich in to check HIS units? Oops! 153 years too late. :(

Resources