How to properly implement waiting for async computations? - performance
I have run into a little trouble and am asking for a hint. I am on the Windows platform, doing calculations in the following manner:
int input = 0;
int output; // junk bytes here until the download completes
while (true) {
    async_enqueue_upload(input);    // completes instantly, but the transfer will take ~10 us
    async_enqueue_calculate();      // completes instantly, but the computation will take ~80 us
    async_enqueue_download(output); // completes instantly, but the transfer will take ~10 us
    sync_wait_finish();             // must wait until output is fully computed and holds no junk
    input = process(output);        // I cannot launch the next step without doing this on the host
}
My question is about the sync_wait_finish() part. I must wait for all devices to finish so I can combine the results, process the data on the host, and upload a new portion that is based on the previous computation step. I need to synchronize the data between steps, so I cannot overlap them. I know this is not the most performance-friendly setup, so let's proceed to the question.
I have two ways of checking for completion inside sync_wait_finish(). The first is to put the thread to sleep in short intervals until completion is detected:
while( !is_completed() )
Sleep(1);
This gives very low performance: the actual calculation takes about 100 us, but the minimal Windows scheduler timestep is 1 ms, so it costs an unacceptable ~10x slowdown.
The second way is to check for completion in an empty infinite loop:
while( !is_completed() )
{} // do_nothing();
This gives ~10x better computation performance, but it is also an unsuitable solution, because it keeps a CPU core at full utilization doing absolutely useless work. How can I make the CPU "sleep" for exactly the time I need? (Each step involves the same amount of work.)
How is this case usually solved, when the calculation time is too long for an active spin-wait but too short compared to the scheduler timestep? A related sub-question: how would I do this on Linux?
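To make it concrete, the kind of middle ground I have in mind looks roughly like the sketch below (spin with a bounded budget, then yield the rest of the time slice). This is only an illustration of the idea, not something I have validated; is_completed() is the same polling primitive as in the snippets above.

#include <windows.h>

bool is_completed(); // the same polling primitive as above, declared elsewhere

// Hypothetical hybrid wait: spin briefly (covers the sub-scheduler-tick case),
// then fall back to yielding so a stalled device does not burn a whole core.
void hybrid_wait_finish()
{
    // Spin budget: the iteration count is a placeholder, to be tuned to roughly
    // the expected completion time (~100 us per step here).
    for (int i = 0; i < 100000 && !is_completed(); ++i)
        YieldProcessor();            // pause hint, keeps the spinning core polite

    // If it is still not done, stop burning the core and let other threads run.
    while (!is_completed())
        SwitchToThread();            // give up the remainder of the time slice
}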
Fortunately, I have succeeded in finding an answer on my own. In short: I should use Linux for this.
My investigation showed the following. On Windows there is a hidden function in ntdll, NtDelayExecution(). It is not exposed through the SDK, but it can be loaded in the following manner:
static int (__stdcall *NtDelayExecution)(BOOL Alertable, PLARGE_INTEGER DelayInterval) =
    (int (__stdcall *)(BOOL, PLARGE_INTEGER))
    GetProcAddress(GetModuleHandleW(L"ntdll.dll"), "NtDelayExecution");
It allows sleep intervals to be specified in 100 ns units. However, even that did not work well, as shown in the following benchmark:
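The helpers used in the benchmark below are thin wrappers over QueryPerformanceCounter/Frequency and NtDelayExecution; reconstructed from the comments in the benchmark, they look roughly like this (a sketch, not the exact original code):

#include <windows.h>
#include <cstdint>

// NtDelayExecution is the function pointer loaded above from ntdll.

static uint64_t qpf() {                      // timer ticks per second
    LARGE_INTEGER f; QueryPerformanceFrequency(&f); return (uint64_t)f.QuadPart;
}
static uint64_t qpc() {                      // current tick count
    LARGE_INTEGER c; QueryPerformanceCounter(&c); return (uint64_t)c.QuadPart;
}
static void sleep_precise(int64_t hundred_ns) {
    // A negative DelayInterval means a relative delay, in 100 ns units.
    LARGE_INTEGER t; t.QuadPart = -hundred_ns;
    NtDelayExecution(FALSE, &t);
}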
SetPriorityClass(GetCurrentProcess(), REALTIME_PRIORITY_CLASS); // requires Admin privileges
SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);
uint64_t hpf = qpf(); // QueryPerformanceFrequency()
uint64_t s0 = qpc();  // QueryPerformanceCounter()
uint64_t n = 0;
while (1) {
    sleep_precise(1); // NtDelayExecution with a relative interval of -1, i.e. one 100-nanosecond period
    auto s1 = qpc();
    n++;
    auto passed = s1 - s0;
    if (passed >= hpf) { // once per second, print how many sleep iterations fit into it
        std::cout << "freq=" << (n * hpf / passed) << " hz\n";
        s0 = s1;
        n = 0;
    }
}
That yields a loop rate of somewhat less than 2000 Hz, and the printed result varies from line to line. That led me to the Windows thread scheduler, which is simply not suited for real-time tasks, and to its minimum switching interval of 0.5 ms (plus overhead). By the way, does anyone know how to tune that value?
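For what it's worth, the only documented knob I have come across is the multimedia timer resolution, and I have not verified whether it actually helps the benchmark above:

#include <windows.h>
#pragma comment(lib, "winmm.lib") // timeBeginPeriod / timeEndPeriod live in winmm

int main() {
    // Hypothetical tuning attempt: request the finest documented timer resolution
    // (1 ms) while the benchmark runs. The undocumented NtSetTimerResolution in
    // ntdll reportedly goes lower, but I have not verified either one here.
    timeBeginPeriod(1);
    // ... run the benchmark above ...
    timeEndPeriod(1);
    return 0;
}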
Next came the Linux question: what can it do? I built a custom tiny 4.14 kernel with Buildroot and ran the same benchmark code there. I replaced qpc() with clock_gettime() using the CLOCK_MONOTONIC clock, made qpf() return the number of nanoseconds in a second, and had sleep_precise() simply call clock_nanosleep(). I failed to find out what the difference between CLOCK_MONOTONIC and CLOCK_REALTIME is.
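In code, the Linux replacements described above look roughly like this (a reconstruction of what I described, not the exact source):

#include <stdint.h>
#include <time.h>

// Linux replacements for the benchmark helpers, as described above.
static uint64_t qpf(void) { return 1000000000ull; }   // "ticks" are nanoseconds

static uint64_t qpc(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + ts.tv_nsec;
}

static void sleep_precise(int64_t hundred_ns) {       // keep the 100 ns units
    struct timespec ts;
    ts.tv_sec  = (hundred_ns * 100) / 1000000000ll;
    ts.tv_nsec = (hundred_ns * 100) % 1000000000ll;
    clock_nanosleep(CLOCK_MONOTONIC, 0, &ts, NULL);    // relative sleep
}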
I was quite surprised to get a whopping 18.4 kHz frequency just out of the box, and it was quite stable. While testing several intervals, I found that I can set the loop to almost any frequency up to 18.4 kHz, but also that the actual measured wait time differs from what I asked for by a factor of about 1.6. For example, if I ask to sleep 100 us it actually sleeps for ~160 us, giving ~6.25 kHz. Nothing else is running on the system, just the kernel, BusyBox and this test. I am not an experienced Linux user, and I am still wondering how I can tune this to be more real-time and deterministic. Can I push that maximum frequency even higher?
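One thing I have not tried is the Linux counterpart of the SetPriorityClass/SetThreadPriority boost used in the Windows benchmark; as a sketch (an assumption on my part, not benchmarked on the Buildroot setup above), it would look roughly like this:

#include <sched.h>
#include <sys/mman.h>

// Rough Linux counterpart of REALTIME_PRIORITY_CLASS / THREAD_PRIORITY_TIME_CRITICAL:
// move the process to the SCHED_FIFO real-time class and lock its memory to avoid
// page-fault latency. Requires root or the appropriate capability/rlimit.
static int go_realtime(void)
{
    struct sched_param sp;
    sp.sched_priority = 80;                       // SCHED_FIFO priorities are 1..99
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        return -1;
    return mlockall(MCL_CURRENT | MCL_FUTURE);    // keep all pages resident
}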
Related
Generate few us short delay in GNU Linux
I am trying to generate a short delay between two calls writing HW-based registers in GNU C on ARM (Linux). It looks like the system latency is too high when I use the usleep() or nanosleep() functions. The following code fragment

struct timespec ts;
ts.tv_sec = 0;
ts.tv_nsec = 1; // 1 nanosecond
// ...
do { } while (nanosleep(&ts, &ts));

results in over 100 us of delay (comparing runs with it present versus commented out). What is the way around this? Since my desired delay is approximately 2 us, I can possibly live even with a blocking function. As @Lubo hinted, I cannot rely on a delay generated purely within my code, since it may be interrupted. The HW register I am writing needs ~1 us between two consecutive writes. If I can generate a shortest-possible delay of at least 2 us, and don't mind getting a longer delay in the cases where I get interrupted, I may still be fine. In total I would still accumulate less delay than in the current state, where I get 100 us more than intended every time.
Measuring accurate GPU computation time
I'm working on code in which I have to perform a vector-matrix multiplication on a chunk of data, copy the results back to the CPU and then start multiplying another chunk. I perform the vector-matrix multiplication using the cuBLAS library (following code).

clock_t a, b;
a = clock();
for (int i = 0; i < n; i++) {
    cublasSgemv(handle, CUBLAS_OP_T, m, k, &alpha, dev_b1 + ((i + 1) * m), m, dev_b1 + (i * m), 1, &beta, out, 1);
    out += (n - (i + 1));
    cudaMemcpy(b3, dev_b3, sizeof(float) * (cor_size), cudaMemcpyDeviceToHost);
}
b = clock();
cout << "Running time is: " << (double)(b - a) / CLOCKS_PER_SEC;

I have to measure the running time of this for loop. I read something about CudaEvent, but in my case I want to measure the time of the total loop, not a single kernel, so I used the clock function. I am wondering whether this is a correct way to measure the time for this chunk of code, or whether there are more accurate ways to do it. I know that for measuring elapsed time we have to repeat running the code multiple times and take the average of the elapsed times of all runs, so another question is whether there is any trade-off in the number of times the code should be repeated. Thanks
cudaMemcpy synchronizes host and device, so a CPU timer such as clock_t should give results that are identical to those produced by a CUDA timer, making the necessary allowances for the granularity/resolution of clock_t. As far as the accuracy of the measurements is concerned, from what I have seen, the first iteration's timings can be disregarded in the calculations. Subsequent timing measurements should yield numbers that depend on factors such as load imbalance in the algorithm being run, which decides whether we get the same numbers at every iteration. I would reckon that would not be an issue here, with Sgemv.
You can still use CUDA events to measure the entire loop runtime, by recording two events (one before starting the loop, one after the end, i.e. in the positions where you are currently using clock()), synchronizing on the second event and then getting the elapsed time using cudaEventElapsedTime(). This should have the advantage of being more accurate than clock().
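For illustration, a minimal sketch of that arrangement (timeLoopMs() and runLoop() are placeholder names; the loop body stands in for the question's cuBLAS/cudaMemcpy code):

#include <cuda_runtime.h>

// Event-based timing of an entire loop, as described above. runLoop() stands in
// for the questioner's cuBLAS/cudaMemcpy loop; only the timing scaffolding is shown.
float timeLoopMs(void (*runLoop)())
{
    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);

    cudaEventRecord(start, 0);        // where clock() was called before the loop
    runLoop();                        // issue the whole loop's GPU work
    cudaEventRecord(stop, 0);         // where clock() was called after the loop
    cudaEventSynchronize(stop);       // block until everything before 'stop' has finished

    float ms = 0.0f;
    cudaEventElapsedTime(&ms, start, stop);   // elapsed time in milliseconds

    cudaEventDestroy(start);
    cudaEventDestroy(stop);
    return ms;
}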
Testing Erlang function performance with timer
I'm testing the performance of a function in a tight loop (say 5000 iterations) using timer:tc/3:

{Duration_us, _Result} = timer:tc(M, F, [A])

This returns both the duration (in microseconds) and the result of the function. For argument's sake the duration is N microseconds. I then perform a simple average calculation over the results of the iterations. If I place a timer:sleep(1) call before the timer:tc/3 call, the average duration over all iterations is always greater than the average without the sleep:

timer:sleep(1),
timer:tc(M, F, [A]).

This doesn't make much sense to me, as the timer:tc/3 function should be atomic and not care about anything that happened before it. Can anyone explain this strange behaviour? Is it somehow related to scheduling and reductions?
Do you mean like this:

4> foo:foo(10000).

Where:

-module(foo).
-export([foo/1, baz/1]).

foo(N) ->
    TL = bar(N),
    {TL, sum(TL)/N}.

bar(0) -> [];
bar(N) ->
    timer:sleep(1),
    {D,_} = timer:tc(?MODULE, baz, [1000]),
    [D|bar(N-1)].

baz(0) -> ok;
baz(N) -> baz(N-1).

sum([]) -> 0;
sum([H|T]) -> H + sum(T).

I tried this, and it's interesting. With the sleep statement the mean time returned by timer:tc/3 is 19 to 22 microseconds, and with the sleep commented out, the average drops to 4 to 6 microseconds. Quite dramatic! I notice there are artefacts in the timings, so events like this (these numbers being the individual microsecond timings returned by timer:tc/3) are not uncommon:

---- snip ----
5,5,5,6,5,5,5,6,5,5,5,6,5,5,5,5,4,5,5,5,5,5,4,5,5,5,5,6,5,5,
5,6,5,5,5,5,5,6,5,5,5,5,5,6,5,5,5,6,5,5,5,5,5,5,5,5,5,5,4,5,
5,5,5,6,5,5,5,6,5,5,7,8,7,8,5,6,5,5,5,6,5,5,5,5,4,5,5,5,5,
14,4,5,5,4,5,5,4,5,4,5,5,5,4,5,5,4,5,5,4,5,4,5,5,5,4,5,5,4,
5,5,4,5,4,5,5,4,4,5,5,4,5,5,4,4,4,4,4,5,4,5,5,4,5,5,5,4,5,5,
4,5,5,4,5,4,5,5,5,4,5,5,4,5,5,4,5,4,5,4,5,4,5,5,4,4,4,4,5,4,
5,5,54,22,26,21,22,22,24,24,32,31,36,31,33,27,25,21,22,21,
24,21,22,22,24,21,22,21,24,21,22,22,24,21,22,21,24,21,22,21,
23,27,22,21,24,21,22,21,24,22,22,21,23,22,22,21,24,22,22,21,
24,21,22,22,24,22,22,21,24,22,22,22,24,22,22,22,24,22,22,22,
24,22,22,22,24,22,22,21,24,22,22,21,24,21,22,22,24,22,22,21,
24,21,23,21,24,22,23,21,24,21,22,22,24,21,22,22,24,21,22,22,
24,22,23,21,24,21,23,21,23,21,21,21,23,21,25,22,24,21,22,21,
24,21,22,21,24,22,21,24,22,22,21,24,22,23,21,23,21,22,21,23,
21,22,21,23,21,23,21,24,22,22,22,24,22,22,41,36,30,33,30,35,
21,23,21,25,21,23,21,24,22,22,21,23,21,22,21,24,22,22,22,24,
22,22,21,24,22,22,22,24,22,22,21,24,22,22,21,24,22,22,21,24,
22,22,21,24,21,22,22,27,22,23,21,23,21,21,21,23,21,21,21,24,
21,22,21,24,21,22,22,24,22,22,22,24,21,22,22,24,21,22,21,24,
21,23,21,23,21,22,21,23,21,23,22,24,22,22,21,24,21,22,22,24,
21,23,21,24,21,22,22,24,21,22,22,24,21,22,21,24,21,22,22,24,
22,22,22,24,22,22,21,24,22,21,21,24,21,22,22,24,21,22,22,24,
24,23,21,24,21,22,24,21,22,21,23,21,22,21,24,21,22,21,32,31,
32,21,25,21,22,22,24,46,5,5,5,5,5,4,5,5,5,5,6,5,5,5,5,5,5,4,
6,5,5,5,6,5,5,5,5,5,5,5,6,5,5,5,5,4,5,4,5,5,5,5,6,5,5,5,5,5,
5,5,6,5,5,5,5,5,5,5,6,5,5,5,5,4,6,4,6,5,5,5,5,5,5,4,6,5,5,5,
5,4,5,5,5,5,5,5,6,5,5,5,5,4,5,5,5,5,5,5,6,5,5,5,5,5,5,5,6,5,
5,5,5,4,5,5,6,5,5,5,6,5,5,5,5,5,5,5,6,5,5,5,6,5,5,5,5,5,5,5,
6,5,5,5,5,4,5,4,5,5,5,5,6,5,5,5,5,5,5,4,5,4,5,5,5,5,5,6,5,5,
5,5,4,5,4,5,5,5,5,6,5,5,5,5,5,5,5,6,5,5,5,5,5,5,5,6,5,5,5,5,
---- snip ----

I assume this is the effect you are referring to, though when you say always > N, is it always, or just mostly? Not always for me, anyway. The results extract above was taken without the sleep. Typically, without the sleep, timer:tc/3 returns low times like 4 or 5 most of the time, but sometimes big times like 22; with the sleep in place it's usually big times like 22, with occasional batches of low times.

It's certainly not obvious why this should happen, since sleep really just means yield. I wonder if it all comes down to the CPU cache. After all, especially on a machine that's not busy, one might expect the case without the sleep to execute most of the code in one go without it getting moved to another core, and without doing much else with that core, thus making the most of the caches... but when you sleep, and thus yield, and come back later, the chances of cache hits might be considerably lower.
Measuring performance is a complex task, especially on new hardware and in modern OSes. There are many things that can interfere with your results. First, you are not alone: when you measure on your desktop or notebook, other processes, including system ones, can interfere with your measurement. Second, there is the hardware itself. Modern CPUs have many features that control performance and power consumption. They can boost performance for a short time before overheating, or boost when there is no work on the other CPUs of the same chip or on the other hyper-thread of the same core. On the other hand, they can enter a power-saving mode when there is not enough work, and fail to react quickly enough to a sudden change. It is hard to tell whether that is your case, but it is naive to think that previous work, or the lack of it, cannot affect your measurement. You should always take care to measure in a steady state for a long enough time (seconds at least) and remove as many other influences as possible. (And do not forget GC in Erlang as well.)
PID Control Loops with large and unpredictable anomalies
Short Question
Is there a common way to handle very large anomalies (an order of magnitude) within an otherwise uniform control region?

Background
I am working on a control algorithm that drives a motor across a generally uniform control region. With no or minimal loading the PID control works great (fast response, little to no overshoot). The issue I'm running into is that there will usually be at least one high-load location. The position is determined by the user during installation, so there is no reasonable way for me to know when or where to expect it.
When I tune the PID to handle the high-load location, it causes large overshoots in the non-loaded areas (which I fully expected). While it is OK to overshoot mid-travel, there are no mechanical hard stops on the enclosure. The lack of hard stops means that any significant overshoot can and does cause the control arm to be disconnected from the motor (yielding a dead unit).

Things I'm Prototyping
- Nested PIDs (very aggressive when far away from the target, conservative when close by)
- Fixed gain when far away, PID when close
- Conservative PID (works with no load) plus an external control that looks for the PID to stall and applies additional energy until either the target is achieved or a rapid rate of change is detected (i.e. leaving the high-load area)

Hardware Limitations
- Full travel defined
- Hard stops cannot be added (at this point in time)

Update
My answer does not indicate that this is the best solution. It's just my current solution that I thought I would share.
Initial Solution

stalled_pwm_output = PWM / |ΔE|
    PWM = max PWM value
    ΔE  = last_error - new_error

The initial relationship successfully ramps up the PWM output based on the lack of change in the motor. See the graph below for sample output. This approach makes sense for the situation where the non-aggressive PID has stalled. However, it has the unfortunate (and obvious) issue that when the non-aggressive PID is capable of achieving the setpoint and attempts to slow down, the stalled_pwm_output ramps up. This ramp-up causes a large overshoot when traveling to a non-loaded position.

Current Solution

Theory

stalled_pwm_output = (kE * PID_PWM) / |ΔE|
    kE      = scaling constant
    PID_PWM = current PWM request from the non-aggressive PID
    ΔE      = last_error - new_error

My current relationship still uses the 1/ΔE concept, but uses the non-aggressive PID's PWM output to determine the stalled_pwm_output. This allows the PID to throttle back the stalled_pwm_output when it starts getting close to the target setpoint, yet allows 100% PWM output when stalled. The scaling constant kE is needed to ensure the PWM gets into the saturation region (above 10,000 in the graphs below).

Pseudo Code

Note that the result from calc_stall_pwm() is added to the PID PWM output in my current control logic.

int calc_stall_pwm(int pid_pwm, int new_error)
{
    int ret = 0;
    int dE = 0;
    static int last_error = 0;
    const int kE = 1;

    // Allow the stall control until the setpoint is achieved
    if (FALSE == motor_has_reached_target()) {
        // Determine the error delta
        dE = abs(last_error - new_error);
        last_error = new_error;

        // Protect from divide by zero
        dE = (dE == 0) ? 1 : dE;

        // Determine the stall PWM output
        ret = (kE * pid_pwm) / dE;
    }

    return ret;
}

Output Data

[Graph: Stalled PWM Output] Note that in the stalled PWM output graph, the sudden PWM drop at ~3400 is a built-in safety feature that activated because the motor was unable to reach position within a given time.

[Graph: Non-Loaded PWM Output]
Measuring execution time of selected loops
I want to measure the running times of selected loops in a C program, so as to see what percentage of the total time for executing the program (on Linux) is spent in these loops. I should be able to specify the loops for which the performance should be measured. I have tried several tools (VTune, HPCToolkit, OProfile) in the last few days and none of them seem to do this. They all find the performance bottlenecks and just show the time for those. That's because these tools only record times above a threshold (~1 ms), so if a loop takes less time than that, its execution time won't be reported.
The basic block counting feature of gprof depends on a feature in older compilers that is no longer supported.
I could manually write a simple timer using gettimeofday or something like that, but in some cases it won't give accurate results. For example:

for (i = 0; i < 1000; ++i) {
    for (j = 0; j < N; ++j) {
        // do some work here
    }
}

Here I want to measure the total time spent in the inner loop, so I would have to put a call to gettimeofday inside the first loop. gettimeofday itself would then get called 1000 times, which introduces its own overhead, and the result would be inaccurate.
Unless you have an in-circuit emulator or break-out box around your CPU, there's no such thing as timing a single loop or a single instruction. You need to bulk up your test runs to something that takes at least several seconds each in order to reduce error due to other things going on in the CPU, OS, etc.

If you want to find out exactly how much time a particular loop takes to execute, and it takes less than, say, 1 second to execute, you're going to need to artificially increase the number of iterations in order to get a number that is above the "noise floor". You can then take that number and divide it by the number of artificially inflated iterations to get a figure that represents how long one pass through your target loop will take.

If you want to compare the performance of different loop styles or techniques, the same thing holds: you're going to need to increase the number of iterations or passes through your test code in order to get a measurement in which what you're interested in dominates the time slice you're measuring. This is true whether you're measuring performance using sub-millisecond high-performance counters provided by the CPU, the system date-time clock, or a wall clock to measure the elapsed time of your test. Otherwise, you're just measuring white noise.
Typically, if you want to measure the time spent in the inner loop, you put the timing routines outside of the outer loop and then divide by the (outer) loop count, assuming you expect the time of the inner loop to be relatively constant for any j. Any profiling instructions incur their own overhead, but presumably that overhead is the same regardless of where they're inserted, so "it all comes out in the wash." Presumably you're looking for spots where there are considerable differences between the runtimes of two compared processes, where a pair of function calls like this won't be an issue (since you need one at the "end" too, to get the time delta), since one routine will be 2x or more costly than the other.

Most platforms offer some sort of higher-resolution timer, too, although the one we use here is hidden behind an API so that the "client" code is cross-platform. I'm sure with a little looking you can turn one up. Although even here, there's little likelihood that you'll get better than 1 ms accuracy, so it's preferable to run the code several times in a row and time the whole run (then divide by the loop count, natch).
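For illustration, a minimal sketch of that arrangement using clock_gettime (the choice of timer is an assumption; the answer above does not prescribe one), applied to the loop from the question:

#include <stdio.h>
#include <time.h>

// Placeholder for the question's "do some work here" inner-loop body.
static void do_some_work(int i, int j) { (void)i; (void)j; }

// Time the whole loop nest once, then divide by the outer iteration count.
void time_inner_loop(int N)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);        // one call before the outer loop

    for (int i = 0; i < 1000; ++i)
        for (int j = 0; j < N; ++j)
            do_some_work(i, j);

    clock_gettime(CLOCK_MONOTONIC, &t1);        // one call after the outer loop

    double total_s = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) * 1e-9;
    printf("average time per inner-loop pass: %.3f us\n", total_s / 1000 * 1e6);
}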
I'm glad you're looking for percentage, because that's easy to get. Just get it running. If it runs quickly, put an outer loop around it so it takes a good long time. That won't affect the percentages. While it's running, get stackshots. You can do this with Ctrl-Break in gdb, or you can use pstack or lsstack. Just look to see what percentage of stackshots display the code you care about. Suppose the loops take some fraction of time, like 0.2 (20%) and you take N=20 samples. Then the number of samples that should show them will average 20 * 0.2 = 4, and the standard deviation of the number of samples will be sqrt(20 * 0.2 * 0.8) = sqrt(3.2) = 1.8, so if you want more precision, take more samples. (I personally think precision is overrated.)