Modelica: total time calculation of simulation and equation initialization

I would like to measure the total simulation and initialization time of a system of DAEs. I am interested in the wall-clock time (like the one given in Matlab by tic/toc).
I noticed that Modelica tools offer different flags for reporting the simulation time, but the time I get is very small compared to the time that elapses from when I press the simulation button to the end of the simulation (approximately measured with the clock of my phone).
I guess this short time is just the time required for the simulation itself and does not include the initialization of the system of equations.
Is there a way to calculate this total time?
Thank you so much in advance,
Gabriele
Dear Marco,
Thank you so much for your extremely detailed and useful reply!
I am actually using OpenModelica and not Dymola, so unfortunately I have to build the function myself, and I am very new to the OpenModelica language.
So far, I have a model that simulates the physical behavior based on a system of DAEs. Now, I am trying to build what you suggest here:
With getTime() you can build a function that: reads the system time as t_start, translates the model and simulates it for 0 seconds, reads the system time again as t_stop, and computes the difference between t_start and t_stop.
Could you please give me more details: which command can I use to read the system time as t_start and to simulate the model for 0 seconds? To do this for both t_start and t_stop, do I need two different functions?
Once I have done this, do I have to call the function (or functions) inside the OpenModelica model whose time I want to measure?
Thank you so much again for your precious help!
Very best regards, Gabriele

Depending on the tool you have, this could mean a lot of work.
The first problem is that the MSL allows you to retrieve the system time, but there is nothing included to easily compute time deltas. Therefore the Testing library in Dymola features the operator records DateTime and Duration. Note that it is planned to integrate them into future MSL versions, but at the moment this is only available via the Testing library for Dymola users.
The second problem is that there is no standardized way to translate and simulate models. Every tool has its own way to do that from scripts. So without knowing which tool you are using, it's not possible to give an exact answer.
What Modelica offers in the MSL
In the current Modelica Standard Library version 3.2.3 you can read the current system time via Modelica.Utilities.System.getTime().
This small example shows how to use it:
function printSystemTime
protected
  Integer ms, s, min, h, d, mon, a;
algorithm
  (ms, s, min, h, d, mon, a) := Modelica.Utilities.System.getTime();
  Modelica.Utilities.Streams.print("Current time is: " + String(h) + ":" + String(min) + ":" + String(s));
end printSystemTime;
As you can see, it returns the current system date and time via 7 return values. These variables are not very nice to deal with if you want to compute a time delta, as you end up with 14 variables, each with its own value range.
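For illustration, a minimal sketch (the function name is made up) that folds those outputs into seconds since midnight, which is enough to compute deltas that do not cross a date boundary:
function getTimeInSeconds "Seconds since midnight from getTime(); ignores day/month/year"
  output Real t "Seconds since midnight";
protected
  Integer ms, s, min, h, d, mon, a;
algorithm
  (ms, s, min, h, d, mon, a) := Modelica.Utilities.System.getTime();
  t := h*3600 + min*60 + s + ms/1000.0;
end getTimeInSeconds;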
How to measure translation and simulation time in general
With getTime() you can build a function that:
reads the system time as t_start
translates the model and simulates it for 0 seconds
reads the system time again as t_stop
computes the difference between t_start and t_stop.
Step 2 depends on the tool. In Dymola you would call
DymolaCommands.SimulatorAPI.simulateModel("path-to-model", 0, 0);
which translates your model and simulates it for 0 seconds, so it only runs the initialization section.
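Since the original question mentions OpenModelica: the rough equivalent there is a .mos script around the scripting simulate() command. This is only a sketch; the model file name is a placeholder, and the timing fields of the returned record should be checked against your OMC version.
// sketch.mos -- assumes the scripting API of a recent OMC; verify field names in your version
loadModel(Modelica); getErrorString();
loadFile("MyModel.mo"); getErrorString();                 // placeholder model file
res := simulate(MyModel, startTime=0.0, stopTime=0.0);    // translate + initialize only
print("total time [s]:      " + String(res.timeTotal) + "\n");
print("compile time [s]:    " + String(res.timeCompile) + "\n");
print("simulation time [s]: " + String(res.timeSimulation) + "\n");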
For Dymola users
The Testing library contains the function Testing.Utilities.Simulation.timing, which does almost exactly what you want.
To translate and simulate your model call it as follows:
Testing.Utilities.Simulation.timing(
"Modelica.Blocks.Examples.PID_Controller",
task=Testing.Utilities.Simulation.timing.Task.fullTranslate_simulate,
loops=3);
This will translate your model, simulate it for 1 second three times, and compute the average.
To simulate for 0s, duplicate the function and change this
if simulate then
_ :=simulateModel(c);
end if;
to
if simulate then
_ :=simulateModel(c, 0, 0);
end if;

Related

time series simulation and logical checking with Matlab or with other tools

1) I have time series data and signals (indicators) whose values change over time.
My question:
2) I need to do logical checking all the time, e.g. if signals 1 and 2 happened around the same time (were equal to a certain value, e.g. 1), then I need to know the exact time in order to check what happened next.
3) To complicate things: if signal 3 happened within some time range after signals 1 and 2 were equal to 1, I would like to check other things.
4) The time series is very long and I need to deal with it segment by segment.
Please advise how to write this without reinventing the wheel.
Is it recommended to write it in Matlab, using a state machine, in C++, or using threads?
5) Does Matlab have a simulator ready for this kind of thing?
How do I define the logical conditions in an efficient way?
6) Can I use data mining tools for this?
I saw this list of tools:
Data Mining open source tools
not sure where to start.
Thanks
The second and third questions could be handled like this in Matlab:
T = -range; % Assuming that t starts at 0 and 'range' is the allowed delay.
for n = 1 : length(t)
    if signal1(n) == 1 && signal2(n) == 1
        T = t(n);
    end
    if t(n) - T < range && signal3(n) == 1
        % Further conditions you want checked could go here (or be folded
        % into the if statement above).
        % Things you want to be executed if these conditions are met.
    end
end
Using a lower-level programming language like C++ would improve the speed at which this is done. If the data is very long, it could also reduce memory use by loading one element of each array at a time.
Matlab has a simulator, called Simulink, but that is meant for solving more complicated things; since you only want to do something conditionally, it is not needed here.

How to benchmark Matlab processes?

Searching for an idea of how to avoid using a loop in my Matlab code, I found the following comments under a question on SE:
The statement "for loops are slow in Matlab" is no longer generally true since Matlab...euhm, R2008a?
and
Have you tried to benchmark a for loop vs what you already have? sometimes it is faster than vectorized code...
So I would like to ask: is there a commonly used way to test the speed of a process in Matlab? Can the user see somewhere how much time a process takes, or is the only way to stretch the process over several minutes in order to compare the times against each other?
The best tool for testing the performance of MATLAB code is Steve Eddins' timeit function, available here from the MATLAB Central File Exchange.
It handles many subtle issues related to benchmarking MATLAB code for you, such as:
ensuring that JIT compilation is used by wrapping the benchmarked code in a function
warming up the code
running the code several times and averaging
Update: As of release R2013b, timeit is part of core MATLAB.
Update: As of release R2016a, MATLAB also includes a performance testing framework that handles the above issues for you in a similar way to timeit.
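For example, wrapping the code to benchmark in a function handle (the workload here is just illustrative):
f = @() sort(rand(1e6, 1));   % the code you want to time, wrapped in a zero-argument handle
t = timeit(f);                % robust estimate of the execution time, in seconds
fprintf('sorting 1e6 doubles took %.4f s\n', t);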
You can use the profiler to assess how much time your functions, and the blocks of code within them, are taking.
>> profile on; % Starts the profiler
>> myfunctiontorun( ); % This can be a function, script or block of code
>> profile viewer; % Opens the viewer showing you how much time everything took
The viewer also clears the current profile data for next time.
Bear in mind, profile does tend to slow execution a bit, but I believe it does so in a uniform way across everything.
Obviously, if your function is very quick, you might find you don't get reliable results; running it many times or extending the computation would improve matters.
If it's really simple stuff you're testing, you can also just time it using tic and toc:
>> tic; % Start the timer
>> myfunctionname( );
>> toc; % End the timer and display elapsed time
Also if you want multiple timers, you can assign them to variables:
>> mytimer = tic;
>> myfunctionname( );
>> toc(mytimer);
Finally, if you want to store the elapsed time instead of display it:
>> myresult = toc;
I think that I am right to state that many of us time Matlab by wrapping the block of code we're interested in between tic and toc. Furthermore, we take care to ensure that the total time is of the order of 10s of seconds (rather than 1s of seconds or 100s of seconds) and repeat it 3 - 5 times and take some measure of central tendency (such as the mean) and draw our conclusions from that.
If the piece of code takes less than, say 10s, then repeat it as many times as necessary to bring it into the range, being careful to avoid any impact of one iteration on the next. And if the code naturally takes 100s of seconds or longer, either spend longer on the testing or try it with artificially small input data to run more quickly.
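A minimal sketch of that workflow (the function under test is a placeholder):
nReps = 5;
t = zeros(1, nReps);
for k = 1:nReps
    tic;
    myFunctionUnderTest();    % placeholder for the code being benchmarked
    t(k) = toc;
end
fprintf('mean %.3f s, min %.3f s, max %.3f s\n', mean(t), min(t), max(t));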
In my experience it's not necessary to run programs for minutes to get data on average run time with acceptably low variance. If I run a program 5 times and one (or two) of the results is wildly different from the mean I'll re-run it.
Of course, if the code has any features which make its run time non-deterministic then it's a different matter.

TCL - how to know how much time a function has worked?

Say I have a proc, and the proc consists of several statements and function calls. How can I know how much time the function has taken so far?
a very crude example would be something like:
set TIME_start [clock clicks -milliseconds]
...do something...
set TIME_taken [expr [clock clicks -milliseconds] - $TIME_start]
Using the time proc, you can do the following:
% set tt [time {set x [expr 23 * 34]}]
38 microseconds per iteration
To measure the time some code has taken, you either use time or clock.
The time command will run its script argument and return a description of how long the script took, in microseconds (plus some descriptive text, which is trivial to chop off with lindex). If you're really doing performance-analysis work, you can supply an optional count argument that makes the script be run repeatedly, but for general monitoring you can ignore that.
The clock command lets you get various sorts of timestamps (as well as doing formatting, parsing and arithmetic with times). The coarsest is got with clock seconds, which returns the amount of time since the beginning of the Unix epoch (in seconds computed with civil time; that's what you want unless you're doing something specialized). If you need more detail, you should use clock milliseconds or clock microseconds. There's also clock clicks, but it's not typically defined what unit that's counting in (unless you pass the -milliseconds or -microseconds option). It's up to you to turn the timestamps into something useful to you.
If you're timing things on Tcl 8.4 (or before!) then you're constrained to using time, clock seconds or clock clicks (and even the -microseconds option is absent; there's no microsecond-resolution timer exposed in 8.4). In that case, you should consider upgrading to 8.5, as it's generally faster. Faster is Good! (If you're using pre-8.4, definitely upgrade as you're enormously behind on the support front.)
To tell how long a function has taken, you can either use the time command (wrapped around the function call) or use clock clicks to get the current time before and then during the function. The time option is simple but can only time a whole function (and will only give you a time when the function returns). Using clock clicks can be done several times, but you will need to subtract the current time from the starting time yourself.
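For instance, a small helper along those lines (Tcl 8.5+; the proc names are made up):
proc elapsedMs {script} {
    set start [clock milliseconds]
    uplevel 1 $script
    return [expr {[clock milliseconds] - $start}]
}

# usage:
puts "someLongRunningProc took [elapsedMs { someLongRunningProc }] ms"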
In case you're really looking for some kind of profiler, have a look at the profiler package in Tcllib:
http://tcllib.sourceforge.net/doc/profiler.html

does gFortran's cpu_time() return user time, system time, or the sum of both?

I need to do some timing to compare the performance of some Fortran vs. C code.
In C I can get both user time and system time independently.
When using gFortran's cpu_time() what does it represent?
Within IBM's Fortran compiler one can choose what is reported by setting an environment variable (see CPU_TIME()).
I found no reference to something similar in gFortran's documentation.
So, does anybody know if gFortran's cpu_time() returns user time, system time, or the sum of both?
Gfortran CPU_TIME returns the sum of the user and system time.
On MinGW it uses GetProcessTimes(); on other platforms it uses getrusage(), or, if getrusage() is not available, times().
See
http://gcc.gnu.org/git/?p=gcc.git;a=blob;f=libgfortran/intrinsics/cpu_time.c;h=619f8d25246409e0f32c96299db724213aa62b45;hb=refs/heads/master
and
http://gcc.gnu.org/git/?p=gcc.git;a=blob;f=libgfortran/intrinsics/time_1.h;h=12d79ebc12fecf52baa0895c7ab8accc41dab500;hb=refs/heads/master
FWIW, if you wish to measure the wallclock time rather than CPU time, please use the SYSTEM_CLOCK intrinsic instead of CPU_TIME.
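A minimal sketch of that approach (standard Fortran; the work being timed is a placeholder):
integer :: c_start, c_end, c_rate
real    :: wall_seconds

call system_clock(count_rate=c_rate)   ! ticks per second
call system_clock(c_start)
! ... code to be timed ...
call system_clock(c_end)
wall_seconds = real(c_end - c_start) / real(c_rate)
print *, 'wall-clock time [s]: ', wall_seconds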
My guess: the total of user and system time, otherwise it would be mentioned? It probably depends on the OS anyway; maybe not all of them make the distinction. As far as I know, CPU time is the time which the OS assigns to your process, be it in user mode or in kernel mode executed on behalf of the process.
Is it important for you to have that distinction?
For performance comparison, I would probably go for wall-time anyway, and use CPU time to guess how much I/O it is doing by subtracting it from the wall-time.
If you need wallclock time, you may use date_and_time, http://gcc.gnu.org/onlinedocs/gcc-4.0.2/gfortran/DATE_005fAND_005fTIME.html
I'm not sure how standard it is, but in my experience it works on at least four different platforms, including exotic Cray designs.
One gotcha here is to take care of runs that cross midnight, like this:
character*8 :: date
character*10 :: time
character*5 :: zone
integer :: tvalues(8)
real*8 :: time_prev, time_curr, time_elapsed, time_limit, dt
integer :: hr_curr, hr_prev
! set the clock
call date_and_time(date, time, zone, tvalues)
time_curr = tvalues(5)*3600.d0 + tvalues(6)*60.d0 + tvalues(7) ! seconds
hr_curr = tvalues(5)
time_prev=0.d0; time_elapsed = 0.d0; hr_prev = 0
!... do something...
time_prev = time_curr; hr_prev = hr_curr
call date_and_time(date, time, zone, tvalues)
time_curr = tvalues(5)*3600.d0 + tvalues(6)*60.d0 + tvalues(7) ! seconds
hr_curr = tvalues(5)
dt = time_curr - time_prev
if( hr_curr < hr_prev )dt = dt + 24*3600.d0 ! across the midnight
time_elapsed = time_elapsed + dt
@Emanual Ey - in continuation of your comment on @steabert's post (what follows applies to Intel's compiler; I don't know whether anything differs for other compilers): user CPU time + system CPU time should equal CPU time. Elapsed, real, or "wall clock" time should be greater than the total charged CPU time. To measure wall-clock time, it is best to put the time command before and after the tricky part. Ugh, I'm going to make this more complicated than it should be. Could you read the "Timing your application" section of Intel's manual page (you'll have to find "Timing your application" in the index)? It should clear up a few things.
As I said before, that goes for Intel's compiler. I don't have access to the gfortran compiler.

Determining Millisecond Time Intervals In Cocoa

Just as background, I'm building an application in Cocoa. This application existed originally in C++ in another environment. I'd like to do as much as possible in Objective-C.
My questions are:
1)
How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as now?
2)
When used in an Objective-C program that includes time.h, what are the units of clock()?
Thank you for your help.
You can use CFAbsoluteTimeGetCurrent(), but bear in mind that the system clock can change between two calls and throw your measurement off. If you want to protect against that, you should use CACurrentMediaTime().
The return types of these are CFAbsoluteTime and CFTimeInterval respectively, which are both typedefs for double. So they return the number of seconds with double precision. If you really want an integer you can use mach_absolute_time(), found in #include <mach/mach_time.h>, which returns a 64-bit integer. This needs a bit of unit conversion, so check out this link for example code. This is what CACurrentMediaTime() uses internally, so it's probably best to stick with that.
Computing the difference between two calls is obviously just a subtraction, use a variable to remember the last value.
For the clock function see the documentation here: clock(). Basically you need to divide the return value by CLOCKS_PER_SEC to get the actual time.
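Putting the two together, a short sketch (assumes QuartzCore is available for CACurrentMediaTime(); the work being measured is a placeholder):
#import <QuartzCore/QuartzCore.h>   // CACurrentMediaTime()
#include <time.h>                   // clock(), CLOCKS_PER_SEC

CFTimeInterval wallStart = CACurrentMediaTime();
clock_t cpuStart = clock();
// ... work to be measured ...
NSInteger elapsedMs = (NSInteger)((CACurrentMediaTime() - wallStart) * 1000.0);
double cpuSeconds = (double)(clock() - cpuStart) / CLOCKS_PER_SEC;
NSLog(@"wall: %ld ms, CPU: %.3f s", (long)elapsedMs, cpuSeconds);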
How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as now?
Is there any reason you need it as an integral number of milliseconds? Asking NSDate for the time interval since another date will give you a floating-point number of seconds. If you really do need milliseconds, you can simply multiply that by 1000 to get a floating-point number of milliseconds. If you really do need an integer, you can round or truncate the floating-point value.
If you'd like to do it with integers from start to finish, use either UpTime or mach_absolute_time to get the current time in absolute units, then use AbsoluteToNanoseconds to convert that to a real-world unit. Obviously, you'll have to divide that by 1,000,000 to get milliseconds.
QA1398 suggests mach_absolute_time, but UpTime is easier, since it returns the same type AbsoluteToNanoseconds uses (no “pointer fun” as shown in the technote).
AbsoluteToNanoseconds returns an UnsignedWide, which is a structure. (This stuff dates back to before Mac machines could handle scalar 64-bit values.) Use the UnsignedWideToUInt64 function to convert it to a scalar. That just leaves the subtraction, which you'll do the normal way.
