time.Sub() returns 1 second despite the difference exceeding a couple of years - go

I am trying to write a piece of code that will react to a system time change due to synchronisation. Here's a rather simple piece of code running inside a goroutine:
var start, end time.Time
var start_ts, end_ts int64
var diff_ts time.Duration
var diff time.Duration
for {
    start = time.Now()
    start_ts = start.Unix()
    fmt.Printf("Now: => %v (%d);\n", start, start_ts)
    time.Sleep(1 * time.Second)
    end = time.Now()
    end_ts = end.Unix()
    fmt.Printf("New Now: %v (%d);\n", end, end_ts)
    diff = end.Sub(start)
    diff_ts = time.Duration(end_ts-start_ts) * time.Second
    fmt.Printf("Measured time duration: %v (%v) %f (%f)\n", diff, diff_ts, diff.Seconds(), diff_ts.Seconds())
}
My problem is that when I change the system time in another console, the new time is read correctly; however, the "original" time difference is incorrect, and I have to resort to constructing the time difference manually. Here's an excerpt from the logs:
Now: => 2020-02-26 12:29:42.778827718 +0000 UTC m=+21.776791756 (1582720182);
New Now: 2017-01-01 01:02:03.391215325 +0000 UTC m=+22.777003266 (1483232523);
Measured time duration: 1.00021151s (-27635h27m39s) 1.000212 (-99487659.000000)
How come the diff value is 1 second even though the difference is clearly greater than that?

Go's time package uses both a "wall clock" (what you are trying to change) and a "monotonic clock". From the docs:
Operating systems provide both a “wall clock,” which is subject to
changes for clock synchronization, and a “monotonic clock,” which is
not. The general rule is that the wall clock is for telling time and
the monotonic clock is for measuring time. Rather than split the API,
in this package the Time returned by time.Now contains both a wall
clock reading and a monotonic clock reading; later time-telling
operations use the wall clock reading, but later time-measuring
operations, specifically comparisons and subtractions, use the
monotonic clock reading.
[...]
If Times t and u both contain monotonic clock readings, the operations t.After(u), t.Before(u), t.Equal(u), and t.Sub(u) are carried out using the monotonic clock readings alone, ignoring the wall clock readings.
This is specifically designed to prevent deviant app behavior when a clock sync (NTP etc.) occurs and pushes the clock back. Go's time package ensures the monotonic clock reading always moves forward, so comparison and subtraction operations are unaffected by wall-clock adjustments.
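If the goal is to react to a system-time change, one approach (a sketch, not part of the original code) is to compute both the monotonic difference and the wall-clock difference and compare them; Round(0) strips the monotonic reading, so the second subtraction falls back to the wall clock. The one-second threshold below is an arbitrary choice for illustration:

package main

import (
    "fmt"
    "time"
)

func main() {
    for {
        start := time.Now()
        time.Sleep(1 * time.Second)
        end := time.Now()

        // end.Sub(start) uses the monotonic clock readings, so it is immune
        // to wall-clock adjustments and reports roughly one second.
        monotonic := end.Sub(start)

        // Round(0) strips the monotonic readings, so this subtraction falls
        // back to the wall clocks and reflects any system time change.
        wall := end.Round(0).Sub(start.Round(0))

        // If the two disagree by more than an (arbitrary) threshold,
        // the system clock was most likely adjusted during the sleep.
        if d := wall - monotonic; d > time.Second || d < -time.Second {
            fmt.Printf("wall clock jumped by roughly %v\n", d)
        }
    }
}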

Related

Modelica total time calculation of simulation and equation initialization

I would like to measure the total simulation and initialization time of a system of DAEs. I am interested in the wall-clock time (like the one given in Matlab by the tic/toc functions).
I noticed that in Modelica there are different flags for the simulation time, but the time I get is actually very small compared to the time that elapses from when I press the simulation button to the end of the simulation (approximately measured with the clock of my phone).
I guess this short time is just the time required for the simulation and does not include the initialization of the system of equations.
Is there a way to calculate this total time?
Thank you so much in advance,
Gabriele
Dear Marco,
Thank you so much for your extremely detailed and useful reply!
I am actually using OpenModelica and not Dymola, so unfortunately I have to build the function that does it for me, and I am very new to the OpenModelica language.
So far, I have a model that simulates the physical behavior based on a system of DAEs. Now, I am trying to build what you suggest here:
With getTime() you can build a function that: reads the system time as t_start, translates the model and simulates for 0 seconds, reads the system time again as t_stop, and computes the difference between t_start and t_stop.
Could you please give me more details? Which command can I use to read the system time as t_start and to simulate for 0 seconds? To do this for both t_start and t_stop, do I need two different functions?
Once I have done this, do I have to call the function (or functions) inside the OpenModelica model whose time I want to know?
Thank you so much again for your precious help!
Very best regards, Gabriele
Depending on the tool you have, this could mean a lot of work.
The first problem is that the MSL allows you to retrieve the system time, but there is nothing included to easily compute time deltas. Therefore the Testing library in Dymola features the operator records DateTime and Duration. Note that it is planned to integrate them in future MSL versions, but at the moment this is only available via the Testing library for Dymola users.
The second problem is that there is no standardized way to translate and simulate models. Every tool has its own way to do that from scripts. So without knowing what tool you are using, it's not possible to give an exact answer.
What Modelica offers in the MSL
In the current Modelica Standard Library version 3.2.3 you can read the actual system time via Modelica.Utilities.System.getTime().
This small example shows how to use it:
function printSystemTime
protected
  Integer ms, s, min, h, d, mon, a;
algorithm
  (ms, s, min, h, d, mon, a) := Modelica.Utilities.System.getTime();
  Modelica.Utilities.Streams.print("Current time is: " + String(h) + ":" + String(min) + ":" + String(s));
end printSystemTime;
You see it gives the current system date and time via 7 return values. These variables are not very nice to deal with if you want to compute a time delta, as you will end up with 14 variables, each with its own value range.
How to measure translation and simulation time in general
With getTime() you can build a function that:
reads the system time as t_start
translates the model and simulate for 0 seconds
reads the system time again as t_stop
computes the difference between t_start and t_stop.
Step 2 depends on the tool. In Dymola you would call
DymolaCommands.SimulatorAPI.simulateModel("path-to-model", 0, 0);
which translates your model and simulates it for 0 seconds, so it only runs the initialization section.
For Dymola users
The Testing library contains the function Testing.Utilities.Simulation.timing, which does almost exactly what you want.
To translate and simulate your model call it as follows:
Testing.Utilities.Simulation.timing(
  "Modelica.Blocks.Examples.PID_Controller",
  task=Testing.Utilities.Simulation.timing.Task.fullTranslate_simulate,
  loops=3);
This will translate your model and simulate for 1 second three times and compute the average.
To simulate for 0s, duplicate the function and change this
if simulate then
  _ := simulateModel(c);
end if;
to
if simulate then
  _ := simulateModel(c, 0, 0);
end if;

How many bytes are in a golang time object

I have to store a time object in an array of bytes in a Go project I am working on, and I have to declare the size of the array up front. I cannot find the length in bytes referenced anywhere. At this point, I am planning on using time.MarshalBinary() from the time library to convert it to bytes and figuring the size out manually. But I wanted to know if anyone has a reference for the number of bytes this is, and whether time.MarshalBinary() is the best method to use for converting to bytes.
The answer to this question is not as straight forward as it might seem. It depends a lot on how much detail you need to preserve in your marshaling.
As pointed out in another answer, you can simply use unsafe.Sizeof() to determine the in-memory size of a time object, but this has little resemblance to the actual marshaled size, for the simple reason that it contains a pointer. If we look at the definition of time.Time we see:
type Time struct {
    // wall and ext encode the wall time seconds, wall time nanoseconds,
    // and optional monotonic clock reading in nanoseconds.
    //
    // From high to low bit position, wall encodes a 1-bit flag (hasMonotonic),
    // a 33-bit seconds field, and a 30-bit wall time nanoseconds field.
    // The nanoseconds field is in the range [0, 999999999].
    // If the hasMonotonic bit is 0, then the 33-bit field must be zero
    // and the full signed 64-bit wall seconds since Jan 1 year 1 is stored in ext.
    // If the hasMonotonic bit is 1, then the 33-bit field holds a 33-bit
    // unsigned wall seconds since Jan 1 year 1885, and ext holds a
    // signed 64-bit monotonic clock reading, nanoseconds since process start.
    wall uint64
    ext  int64

    // loc specifies the Location that should be used to
    // determine the minute, hour, month, day, and year
    // that correspond to this Time.
    // The nil location means UTC.
    // All UTC times are represented with loc==nil, never loc==&utcLoc.
    loc *Location
}
Whether you care about the timezone info stored in loc is application dependent. If you always store UTC times (usually the best approach), then you can ignore this field entirely, which means you can get by storing just the two 64-bit integer fields.
But even these two fields depend on whether or not you're using a monotonic clock. When marshaling data, you almost certainly do not care about the monotonic clock, whether or not it's encoded in those bits.
What this means is that, in most cases, you should be able to store a full time object in 64 bits (8 bytes), plus a timezone indicator, if necessary.
Further, depending on the precision you need, you may be able to store only the seconds field (discarding sub-second precision), which needs only 33 bits. If you only care about minutes or days, you could use even less space.
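As a concrete illustration of the fixed-size approach described above, here is a minimal sketch that packs a UTC instant into 12 bytes (8 bytes of Unix seconds plus 4 bytes of nanoseconds). The layout is an assumption of this example, not something defined by the time package:

package main

import (
    "encoding/binary"
    "fmt"
    "time"
)

// encodeTime packs a time as 8 bytes of Unix seconds followed by
// 4 bytes of nanoseconds, both big-endian. The timezone is discarded,
// so the value is implicitly UTC.
func encodeTime(t time.Time) [12]byte {
    var b [12]byte
    binary.BigEndian.PutUint64(b[:8], uint64(t.Unix()))
    binary.BigEndian.PutUint32(b[8:], uint32(t.Nanosecond()))
    return b
}

// decodeTime reverses encodeTime, returning a UTC time with no
// monotonic clock reading.
func decodeTime(b [12]byte) time.Time {
    sec := int64(binary.BigEndian.Uint64(b[:8]))
    nsec := int64(binary.BigEndian.Uint32(b[8:]))
    return time.Unix(sec, nsec).UTC()
}

func main() {
    now := time.Now()
    b := encodeTime(now)
    fmt.Println(now.UTC())
    fmt.Println(decodeTime(b)) // same instant, sub-second precision preserved
}

If sub-second precision is not needed, the nanosecond field can be dropped and the value fits in 8 bytes.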
You can use unsafe.Sizeof to get the in-memory size in bytes of a variable. I did this:
package main

import (
    "fmt"
    "time"
    "unsafe"
)

func main() {
    t := time.Now()
    fmt.Printf("a: %T, %d\n", t, unsafe.Sizeof(t))
}
Looks like it's 24 bytes! :)
Also, MarshalBinary looks like it works, although it depends on where you are sending it and how you want to unmarshal it. It may be easier to simply convert it to a string and then use that if you are consuming it in JavaScript or something.
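If you do go the MarshalBinary route, the simplest way to see how many bytes you actually get on your Go version is to check the length of the result yourself; a small sketch (the exact length is an implementation detail of the package):

package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.Now()

    b, err := t.MarshalBinary()
    if err != nil {
        panic(err)
    }
    // The exact length is an implementation detail of the time package;
    // printing it is the safest way to size a buffer for your Go version.
    fmt.Printf("MarshalBinary produced %d bytes\n", len(b))

    // Round-trip to confirm the encoding preserves the instant
    // (the monotonic clock reading is not part of the encoding).
    var u time.Time
    if err := u.UnmarshalBinary(b); err != nil {
        panic(err)
    }
    fmt.Println(t.Equal(u)) // true
}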

What's the difference between time.Now() and time.Now().Local()?

I'm trying to understand what the difference is between time.Now() and time.Now().Local(). I started by printing them out on my laptop (running Ubuntu 18.04):
fmt.Println(time.Now())
fmt.Println(time.Now().Local())
which gives me
2018-12-23 19:57:08.606595466 +0100 CET m=+0.000583834
2018-12-23 19:57:08.606667843 +0100 CET
I'm not sure what the m=+0.000583834 is. Maybe the difference between my machine and the NTP servers?
I then checked out the docs on .Now() and .Local(), which read:
Now returns the current local time.
and
Local returns t with the location set to local time.
Both of them return local time, so I'm still unsure what the difference is. I tried searching around, but I couldn't really find definite answers.
Could anyone shed some light on this?
time.Now().Local() sets the time's Location to local time. time.Now() is already set to local time, so there's no net effect except for that m bit.
The m portion is the Monotonic Clock.
Operating systems provide both a “wall clock,” which is subject to changes for clock synchronization, and a “monotonic clock,” which is not. The general rule is that the wall clock is for telling time and the monotonic clock is for measuring time.
A monotonic clock is basically a simple count since the program started. m=+0.000583834 says that time is 0.000583834 seconds after the program started.
time.Now().Local() explicitly strips the monotonic clock reading...
Because t.In, t.Local, and t.UTC are used for their effect on the interpretation of the wall time, they also strip any monotonic clock reading from their results. The canonical way to strip a monotonic clock reading is to use t = t.Round(0).
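A quick way to see this in practice is to print the same Time value with and without the monotonic reading; a minimal sketch:

package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.Now()

    fmt.Println(t)          // includes "m=+..." (monotonic reading present)
    fmt.Println(t.Local())  // same wall time, monotonic reading stripped
    fmt.Println(t.Round(0)) // the canonical way to strip it
}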

Unexpected output from time.Time

I just started to learn Go by following a tutorial video on Udemy, and I tried to print the current time as below
package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.Now()
    fmt.Println(t)
}
And I get a very long text as output, as below:
2018-07-04 12:03:07.2911671 +0800 +08 m=+0.002000201
I was expecting to get only the +0800 followed by a time zone, and that should be the end of it. The expected output, as shown in the tutorial video, is below. But for me, the result is in a much longer form.
2018-07-04 12:03:07.2911671 +0530 IST
The question is, why does the same call time.Now() return different formats between the instructor's program and mine? Since no specific format is being set, shouldn't a standardized/base format be returned instead?
The question is, why does the same call time.Now() return different formats between the instructor's program and mine?
Because the tutorial was created before the release of Go 1.9. As of Go 1.9, monotonic clock support was added to the time.Time struct, which added those extra fields.
For normal usage, you should always output time using the Format function, rather than outputting the raw data. This will produce more useful output, and be protected against any future additions to the underlying type.
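For example, formatting with an explicit layout keeps the output stable regardless of which internal fields Println would otherwise show; the RFC 3339 layout here is just one common choice:

package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.Now()

    // Format produces only what the layout asks for, so the monotonic
    // reading and other internal details never appear in the output.
    fmt.Println(t.Format(time.RFC3339))              // e.g. 2018-07-04T12:03:07+08:00
    fmt.Println(t.Format("2006-01-02 15:04:05 MST")) // reference-time layout
}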
Your Udemy tutorial video is out-of-date. Go is continually updated. For example, the addition of transparent monotonic time support:
Go 1.9 Release Notes (August 2017)
Transparent Monotonic Time support
The time package now transparently tracks monotonic time in each Time
value, making computing durations between two Time values a safe
operation in the presence of wall clock adjustments. See the package
docs and design document for details.
As always, there are various minor changes and updates to the library,
made with the Go 1 promise of compatibility in mind.
time
If a Time value has a monotonic clock reading, its string
representation (as returned by String) now includes a final field
"m=±value", where value is the monotonic clock reading formatted as a
decimal number of seconds.
Package time
import "time"
The Time returned by time.Now contains a monotonic clock reading. If
Time t has a monotonic clock reading, t.Add adds the same duration to
both the wall clock and monotonic clock readings to compute the
result. Because t.AddDate(y, m, d), t.Round(d), and t.Truncate(d) are
wall time computations, they always strip any monotonic clock reading
from their results. Because t.In, t.Local, and t.UTC are used for
their effect on the interpretation of the wall time, they also strip
any monotonic clock reading from their results. The canonical way to
strip a monotonic clock reading is to use t = t.Round(0).
fmt.Println(t) uses a debugging format so it prints all the underlying time.Time fields.
The canonical way to strip a monotonic clock reading is to use t =
t.Round(0).
For example,
package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.Now()
    fmt.Println(t)
    fmt.Println(t.Round(0))

    t2 := time.Now().Round(0)
    fmt.Println(t2)
}
Playground: https://play.golang.org/p/p_pjRWRB8_y
Output:
2009-11-10 23:00:00 +0000 UTC m=+0.000000001
2009-11-10 23:00:00 +0000 UTC
2009-11-10 23:00:00 +0000 UTC
The +08 is the string returned by t.Location().String(). Locations are given a name on creation which is used to identify them. It could be IST, it could be "+08", or it could be any other string you can think of.
The m=+0.002000201 is the monotonic clock. It is used for more accurate durations. For more information on Go's monotonic clock implementations, see https://golang.org/pkg/time/#hdr-Monotonic_Clocks.
As for the reason the monotonic clock shows up in t.String():
For debugging, the result of t.String does include the monotonic clock reading if present. If t != u because of different monotonic clock readings, that difference will be visible when printing t.String() and u.String().

TCL - how to know how much time a function has worked?

Say I have a proc, and the proc consists of several statements and function calls. How can I know how much time the proc has taken so far?
a very crude example would be something like:
set TIME_start [clock clicks -milliseconds]
...do something...
set TIME_taken [expr [clock clicks -milliseconds] - $TIME_start]
Using the time proc, you can do the following:
% set tt [time {set x [expr 23 * 34]}]
38 microseconds per iteration
To measure the time some code has taken, you either use time or clock.
The time command will run its script argument and return a description of how long the script took, in milliseconds (plus some descriptive text, which is trivial to chop off with lindex). If you're really doing performance analysis work, you can supply an optional count argument that makes the script be run repeatedly, but for just general monitoring you can ignore that.
The clock command lets you get various sorts of timestamps (as well as doing formatting, parsing and arithmetic with times). The coarsest is got with clock seconds, which returns the amount of time since the beginning of the Unix epoch (in seconds computed with civil time; that's what you want unless you're doing something specialized). If you need more detail, you should use clock milliseconds or clock microseconds. There's also clock clicks, but it's not typically defined what unit that's counting in (unless you pass the -milliseconds or -microseconds option). It's up to you to turn the timestamps into something useful to you.
If you're timing things on Tcl 8.4 (or before!) then you're constrained to using time, clock seconds or clock clicks (and even the -microseconds option is absent; there's no microsecond-resolution timer exposed in 8.4). In that case, you should consider upgrading to 8.5, as it's generally faster. Faster is Good! (If you're using pre-8.4, definitely upgrade as you're enormously behind on the support front.)
To tell how long a function has taken, you can either use the time command (wrapped around the function call) or use clock clicks to get the current time before and then during the function. The time option is simple but can only time a whole function (and will only give you a time when the function returns). Using clock clicks can be done several times, but you will need to subtract the current time from the starting time yourself.
In case you're really looking for some kind of profiler, have a look at the profiler package in Tcllib:
http://tcllib.sourceforge.net/doc/profiler.html
