Resolution of a nanoTime() Timer

I have searched the postings for an answer related to the time which the System.nanoTime( ) method call takes to process.
Consider the following code:
long lastTime = System.nanoTime();
long currentTime = System.nanoTime();
long deltaTime = currentTime - lastTime;
If you run this, currentTime - lastTime will evaluate to 0. The only way for this to happen is if the computer processed that second method call below the resolution of a nanosecond (i.e. the call took less than a nanosecond). Logically this makes sense, because a computer can, on average, execute multiple instructions in a single nanosecond.
Is this correct? If not, where is my logic wrong?

Well, theoretically it will evaluate to 0. In some cases it may show a slight variation if you have a lot of other tasks executing.

This is theoretically correct, assuming that there are absolutely no other processes running and that the calls to the Java methods have no delay. On a real system this is impossible to achieve, as the calls to the Java methods will introduce measurable delays. As I stated in my comment, I get a difference of around 0.015 milliseconds on a dual-core Android 4.0.4 phone.
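For what it's worth, here is a minimal, hedged C sketch of how one might measure this kind of thing at the OS level; clock_gettime(CLOCK_MONOTONIC, ...) is used as a rough stand-in for System.nanoTime(), and the loop count and helper name are illustrative rather than taken from the question. Reading the clock in a tight loop gives an estimate of both the average per-call cost and the smallest step the clock actually reports, which on typical hardware is well above one nanosecond:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

/* Read a monotonic clock in nanoseconds (POSIX). */
static uint64_t now_ns(void) {
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ULL + (uint64_t)ts.tv_nsec;
}

int main(void) {
    const int n = 1000000;
    uint64_t start = now_ns();
    uint64_t prev = start;
    uint64_t min_step = UINT64_MAX;   /* smallest nonzero delta observed */

    for (int i = 0; i < n; ++i) {
        uint64_t t = now_ns();
        uint64_t d = t - prev;
        if (d > 0 && d < min_step)
            min_step = d;             /* approximates the clock's granularity */
        prev = t;
    }

    printf("average cost per call: %.1f ns\n", (double)(prev - start) / n);
    if (min_step != UINT64_MAX)
        printf("smallest nonzero step: %llu ns\n", (unsigned long long)min_step);
    return 0;
}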

Related

Modelica total time calculation of simulation and equation initialization

I would like to measure the total simulation and initialization time of a system of DAEs. I am interested in the wall-clock time (like the one given in Matlab by the tic/toc functions).
I noticed that in Modelica there are different flags for the simulation time, but the time I get is very small compared to the time that elapses from when I press the simulation button to the end of the simulation (approximately measured with my phone's clock).
I guess this short time is just the time required for the simulation and that it does not include the initialization of the system of equations.
Is there a way to calculate this total time?
Thank you so much in advance,
Gabriele
Dear Marco,
Thank you so much for your extremely detailed and useful reply!
I am actually using OpenModelica rather than Dymola, so unfortunately I have to build the function myself, and I am very new to the OpenModelica language.
So far, I have a model that simulates the physical behavior based on a system of DAEs. Now, I am trying to build what you suggest here:
"With getTime() you can build a function that: reads the system time as t_start, translates the model and simulates for 0 seconds, reads the system time again as t_stop, and computes the difference between t_start and t_stop."
Could you please give me more details? Which command can I use to read the system time as t_start and to simulate for 0 seconds? To do this for both t_start and t_stop, do I need two different functions?
Once I have done this, do I have to call the function (or functions) inside the OpenModelica model whose time I want to measure?
Thank you so much again for your precious help!
Very best regards, Gabriele
Depending on the tool you have, this could mean a lot of work.
The first problem is that the MSL allows you to retrieve the system time, but there is nothing included to easily compute time deltas. Therefore the Testing library in Dymola features the operator records DateTime and Duration. Note that it is planned to integrate them into future MSL versions, but at the moment this is only available via the Testing library for Dymola users.
The second problem is that there is no standardized way to translate and simulate models. Every tool has its own way of doing that from scripts. So without knowing which tool you are using, it's not possible to give an exact answer.
What Modelica offers in the MSL
In the current Modelica Standard Library version 3.2.3 you can read the system time via Modelica.Utilities.System.getTime().
This small example shows how to use it:
function printSystemTime
protected
Integer ms, s, min, h, d, mon, a;
algorithm
(ms, s, min, h, d, mon, a) := Modelica.Utilities.System.getTime();
Modelica.Utilities.Streams.print("Current time is: "+String(h)+":"+String(min)+":"+String(s));
end printSystemTime;
As you can see, it returns the current system date and time via 7 return values. These values are not very convenient to deal with if you want to compute a time delta, as you will end up with 14 variables, each with its own value range.
How to measure translation and simulation time in general
With getTime() you can build a function that:
reads the system time as t_start
translates the model and simulates for 0 seconds
reads the system time again as t_stop
computes the difference between t_start and t_stop.
Step 2 depends on the tool. In Dymola you would call
DymolaCommands.SimulatorAPI.simulateModel("path-to-model", 0, 0);
which translates your model and simulates it for 0 seconds, so it only runs the initialization section.
For Dymola users
The Testing library contains the function Testing.Utilities.Simulation.timing, which does almost exactly what you want.
To translate and simulate your model call it as follows:
Testing.Utilities.Simulation.timing(
"Modelica.Blocks.Examples.PID_Controller",
task=Testing.Utilities.Simulation.timing.Task.fullTranslate_simulate,
loops=3);
This will translate your model, simulate it for 1 second three times, and compute the average time.
To simulate for 0s, duplicate the function and change this
if simulate then
_ :=simulateModel(c);
end if;
to
if simulate then
_ :=simulateModel(c, 0, 0);
end if;

C comparing times that can overflow

I need to detect the overflowing of an unsigned long.
This variable holds the number of milliseconds since the device started running (it's an Arduino). Using sizeof(unsigned long), I have confirmed it is indeed a 32-bit number. Since it increments every millisecond, the device can run for about 49 days before this value overflows.
Since it's for a home system, having it stop working after 49 days isn't really acceptable. What I'm using the number for is checking whether the current time is larger than the previous time plus a debouncing interval:
if(timeChanged + amountOfMs < currentTime){ ... }
Needless to say, once the overflow occurs this isn't going to work any more. What's an efficient way to solve this? I've thought about having a seconds timer as well to check whether the milliseconds one has overflowed, but in the end I'll have the same problem.
This rollover issue seems to cause quite a bit of confusion...
The right answer is that you need not worry about the millis() rollover, as long as you do your calculation properly.
This is bad:
if (timeChanged + amountOfMs < currentTime) { ... }
This is good (rollover-safe):
if (currentTime - timeChanged > amountOfMs) { ... }
The reason it works is that arithmetic with unsigned integers (unsigned long in your case) reliably works modulo max+1 (ULONG_MAX+1 is 2^32). Thus, currentTime, timeChanged and their difference always have the correct value, modulo 2^32. As long as you test your button more often than once every 49 days (which is likely), the difference will be in the range of an unsigned long and your test will be correct.
To put it another way: if millis() rolls over between timeChanged and currentTime, then the difference currentTime - timeChanged would be negative. But since the difference is actually computed with unsigned numbers, it underflows and rolls over to the correct result. I do not like this explanation though, as it sounds like one error compensating for another. The truth is: if you think of unsigned numbers in terms of modular arithmetic, there is no error anywhere.
This is such a common mistake (and one that I've made myself) that the Arduino Playground has a nice, thorough, and correct answer. See https://playground.arduino.cc/Code/TimingRollover
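To make the modular-arithmetic argument concrete, here is a small self-contained C sketch (the constants and the helper name interval_elapsed are made up for illustration; uint32_t stands in for the Arduino's 32-bit unsigned long) showing that the subtraction-based test stays correct across a wrap while the addition-based test does not:

#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

/* Rollover-safe check: unsigned subtraction is performed modulo 2^32,
 * so the delta is correct even if `now` has wrapped past zero. */
static bool interval_elapsed(uint32_t now, uint32_t then, uint32_t interval_ms) {
    return (uint32_t)(now - then) >= interval_ms;
}

int main(void) {
    uint32_t then = 0xFFFFF000u;  /* about 4 s before the counter wraps      */
    uint32_t now  = 0x00001000u;  /* about 4 s after the wrap: 8192 ms later */

    printf("elapsed = %u ms\n", (unsigned)(now - then));              /* 8192 */
    printf("rollover-safe test: %s\n",
           interval_elapsed(now, then, 100) ? "elapsed" : "not yet"); /* elapsed */
    printf("naive test (then + 100 < now): %s\n",
           then + 100u < now ? "elapsed" : "not yet");  /* wrongly "not yet" */
    return 0;
}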
You can add an if statement checking the condition: if (currentTime == 0xFFFFFFFF)
If this condition is true, the next millisecond will overflow your variable. So at this point you can manually reset it to zero and go back to the loop, starting from zero again.
This might or might not help your situation. I can't say for sure because you haven't shared any further details about your code.
Define two variables, I'm going to call them 'now' and 'lastNow'.
unsigned long now;
unsigned long lastNow = 0;
In your loop you can now do this:
now = millis();
if (now < lastNow) {
// rollover!
}
lastNow = now;
Nice and reliable regardless of how frequently (or infrequently) you loop.

How to benchmark Matlab processes?

Searching for an idea of how to avoid using a loop in my Matlab code, I found the following comments under a question on SE:
The statement "for loops are slow in Matlab" is no longer generally true since Matlab...euhm, R2008a?
and
Have you tried to benchmark a for loop vs what you already have? sometimes it is faster than vectorized code...
So I would like to ask: is there a commonly used way to test the speed of a process in Matlab? Can the user see somewhere how much time a process takes, or is the only way to extend the processes to several minutes in order to compare the times with each other?
The best tool for testing the performance of MATLAB code is Steve Eddins' timeit function, available here from the MATLAB Central File Exchange.
It handles many subtle issues related to benchmarking MATLAB code for you, such as:
ensuring that JIT compilation is used by wrapping the benchmarked code in a function
warming up the code
running the code several times and averaging
Update: As of release R2013b, timeit is part of core MATLAB.
Update: As of release R2016a, MATLAB also includes a performance testing framework that handles the above issues for you in a similar way to timeit.
You can use the profiler to assess how much time your functions, and the blocks of code within them, are taking.
>> profile on; % Starts the profiler
>> myfunctiontorun( ); % This can be a function, script or block of code
>> profile viewer; % Opens the viewer showing you how much time everything took
The viewer also clears the current profile data, ready for next time.
Bear in mind that profile does tend to slow execution down a bit, but I believe it does so in a uniform way across everything.
Obviously, if your function is very quick, you might find you don't get reliable results, so if you can run it many times or extend the computation, that will improve matters.
If it's really simple stuff you're testing, you can also just time it using tic and toc:
>> tic; % Start the timer
>> myfunctionname( );
>> toc; % End the timer and display elapsed time
Also if you want multiple timers, you can assign them to variables:
>> mytimer = tic;
>> myfunctionname( );
>> toc(mytimer);
Finally, if you want to store the elapsed time instead of display it:
>> myresult = toc;
I think I am right to state that many of us time Matlab by wrapping the block of code we're interested in between tic and toc. Furthermore, we take care to ensure that the total time is of the order of tens of seconds (rather than single seconds or hundreds of seconds), repeat the measurement 3-5 times, take some measure of central tendency (such as the mean), and draw our conclusions from that.
If the piece of code takes less than, say, 10 s, repeat it as many times as necessary to bring it into that range, being careful to avoid any impact of one iteration on the next. And if the code naturally takes hundreds of seconds or longer, either spend longer on the testing or try it with artificially small input data so it runs more quickly.
In my experience it's not necessary to run programs for minutes to get data on average run time with acceptably low variance. If I run a program 5 times and one (or two) of the results is wildly different from the mean I'll re-run it.
Of course, if the code has any features which make its run time non-deterministic then it's a different matter.

TCL - how to know how much time a function has worked?

Say I have a proc, and the proc consists of several statements and function calls. How can I know how much time the function has taken so far?
a very crude example would be something like:
set TIME_start [clock clicks -milliseconds]
...do something...
set TIME_taken [expr {[clock clicks -milliseconds] - $TIME_start}]
Using the time proc, you can do the following:
% set tt [time {set x [expr 23 * 34]}]
38 microseconds per iteration
To measure the time some code has taken, you either use time or clock.
The time command will run its script argument and return a description of how long the script took, in microseconds (plus some descriptive text, which is trivial to chop off with lindex). If you're really doing performance analysis work, you can supply an optional count argument that makes the script run repeatedly, but for just general monitoring you can ignore that.
The clock command lets you get various sorts of timestamps (as well as doing formatting, parsing and arithmetic with times). The coarsest is obtained with clock seconds, which returns the amount of time since the beginning of the Unix epoch (in seconds computed with civil time; that's what you want unless you're doing something specialized). If you need more detail, you should use clock milliseconds or clock microseconds. There's also clock clicks, but it's not typically defined what unit that's counting in (unless you pass the -milliseconds or -microseconds option). It's up to you to turn the timestamps into something useful to you.
If you're timing things on Tcl 8.4 (or before!) then you're constrained to using time, clock seconds or clock clicks (and even the -microseconds option is absent; there's no microsecond-resolution timer exposed in 8.4). In that case, you should consider upgrading to 8.5, as it's generally faster. Faster is Good! (If you're using pre-8.4, definitely upgrade as you're enormously behind on the support front.)
To tell how long a function has taken, you can either use the time command (wrapped around the function call) or use clock clicks to get the current time before and then during the function. The time command is simple but can only time a whole function (and will only give you a time when the function returns). Using clock clicks can be done several times, but you will need to subtract the starting time from the current time yourself.
In case you're really looking for some kind of profiler, have a look at the profiler package in Tcllib:
http://tcllib.sourceforge.net/doc/profiler.html

does gFortran's cpu_time() return user time, system time, or the sum of both?

I need to do some timing to compare the performance of some Fortran Vs C code.
In C I can get both user time and system time independently.
When using gFortran's cpu_time(), what does it represent?
With IBM's Fortran compiler one can choose what to output by setting an environment variable (see CPU_TIME()).
I found no reference to something similar in gFortran's documentation.
So, does anybody know if gFortran's cpu_time() returns user time, system time, or the sum of both?
Gfortran CPU_TIME returns the sum of the user and system time.
On MinGW it uses GetProcessTimes(); on other platforms it uses getrusage(), or times() if getrusage() is not available.
See
http://gcc.gnu.org/git/?p=gcc.git;a=blob;f=libgfortran/intrinsics/cpu_time.c;h=619f8d25246409e0f32c96299db724213aa62b45;hb=refs/heads/master
and
http://gcc.gnu.org/git/?p=gcc.git;a=blob;f=libgfortran/intrinsics/time_1.h;h=12d79ebc12fecf52baa0895c7ab8accc41dab500;hb=refs/heads/master
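For comparison with the C side of the benchmark, here is a minimal POSIX C sketch using the same getrusage() call that gfortran builds on; it reports user and system time separately, and their sum corresponds to what CPU_TIME gives you (the busy loop is only there to accumulate some measurable CPU time):

#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    /* Burn a little user CPU time so the numbers are nonzero. */
    volatile double x = 0.0;
    for (long i = 0; i < 50000000L; ++i)
        x += (double)i * 1e-9;

    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) != 0) {
        perror("getrusage");
        return 1;
    }

    double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
    double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;
    printf("user = %.3f s, system = %.3f s, sum = %.3f s\n",
           user, sys, user + sys);
    return 0;
}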
FWIW, if you wish to measure the wallclock time rather than CPU time, please use the SYSTEM_CLOCK intrinsic instead of CPU_TIME.
My guess: the total of user and system time, otherwise it would be mentioned? It probably depends on the OS anyway; maybe not all of them make the distinction. As far as I know, CPU time is the time which the OS assigns to your process, whether executed in user mode or in kernel mode on behalf of the process.
Is it important for you to have that distinction?
For performance comparison, I would probably go for wall-time anyway, and use CPU time to guess how much I/O it is doing by subtracting it from the wall-time.
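A rough C sketch of that idea, assuming clock() for CPU time and clock_gettime(CLOCK_MONOTONIC) for wall-clock time (the sleep and the busy loop merely simulate a program that both waits and computes); the gap between the two is an estimate of time spent blocked, e.g. on I/O:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void do_work(void) {
    sleep(1);                      /* blocked time: counts toward wall clock only */
    volatile double x = 0.0;       /* CPU-bound time: counts toward both          */
    for (long i = 0; i < 100000000L; ++i)
        x += 1.0;
}

int main(void) {
    struct timespec w0, w1;
    clock_gettime(CLOCK_MONOTONIC, &w0);
    clock_t c0 = clock();

    do_work();

    clock_t c1 = clock();
    clock_gettime(CLOCK_MONOTONIC, &w1);

    double wall = (w1.tv_sec - w0.tv_sec) + (w1.tv_nsec - w0.tv_nsec) / 1e9;
    double cpu  = (double)(c1 - c0) / CLOCKS_PER_SEC;
    printf("wall = %.3f s, cpu = %.3f s, blocked (I/O, waits) ~ %.3f s\n",
           wall, cpu, wall - cpu);
    return 0;
}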
If you need wallclock time, you may use date_and_time, http://gcc.gnu.org/onlinedocs/gcc-4.0.2/gfortran/DATE_005fAND_005fTIME.html
I'm not sure how standard it is, but in my experience it works on at least four different platforms, including exotic Cray designs.
One gotcha is that you have to take care of midnight, like this:
character*8 :: date
character*10 :: time
character*5 :: zone
integer :: tvalues(8)
real*8 :: time_prev, time_curr, time_elapsed, time_limit, dt
integer :: hr_curr, hr_prev
! set the clock
call date_and_time(date, time, zone, tvalues)
time_curr = tvalues(5)*3600.d0 + tvalues(6)*60.d0 + tvalues(7) ! seconds
hr_curr = tvalues(5)
time_prev=0.d0; time_elapsed = 0.d0; hr_prev = 0
!... do something...
time_prev = time_curr; hr_prev = hr_curr
call date_and_time(date, time, zone, tvalues)
time_curr = tvalues(5)*3600.d0 + tvalues(6)*60.d0 + tvalues(7) ! seconds
hr_curr = tvalues(5)
dt = time_curr - time_prev
if( hr_curr < hr_prev )dt = dt + 24*3600.d0 ! across the midnight
time_elapsed = time_elapsed + dt
@Emanual Ey - in continuation of your comment on @steabert's post (what follows goes for Intel's compiler; I don't know whether other compilers differ): user CPU time + system CPU time should equal CPU time. Elapsed, real, or "wall clock" time should be greater than the total charged CPU time. To measure wall-clock time, it is best to put the time command before and after the tricky part. Ugh, I'm going to make this more complicated than it should be: have a look at the "Timing your application" section in Intel's manual (you'll have to find it in the index). It should clear up a few things.
As I said before, that goes for Intel's compiler. I don't have access to gfortran.
