What's the difference between time.Now() and time.Now().Local()? - go

I'm trying to understand what the difference is between time.Now() and time.Now().Local(). I started by printing them out on my laptop (running Ubuntu 18.04):
fmt.Println(time.Now())
fmt.Println(time.Now().Local())
which gives me
2018-12-23 19:57:08.606595466 +0100 CET m=+0.000583834
2018-12-23 19:57:08.606667843 +0100 CET
I'm not sure what the m=+0.000583834 is. Maybe the difference between my machine and the NTP servers?
I then checked out the docs on .Now() and .Local(), which read:
Now returns the current local time.
and
Local returns t with the location set to local time.
Both of them return local time, so I'm still unsure what the difference is. I tried searching around, but I couldn't really find definite answers.
Could anyone shed some light on this?

time.Now().Local() sets the time's Location to local time. time.Now() already returns local time, so setting the location again has no net effect; the only visible difference is the m=... suffix.
The m portion is the Monotonic Clock.
Operating systems provide both a “wall clock,” which is subject to changes for clock synchronization, and a “monotonic clock,” which is not. The general rule is that the wall clock is for telling time and the monotonic clock is for measuring time.
A monotonic clock is basically a simple count since the program started. m=+0.000583834 says that time is 0.000583834 seconds after the program started.
time.Now().Local() explicitly strips the monotonic clock...
Because t.In, t.Local, and t.UTC are used for their effect on the interpretation of the wall time, they also strip any monotonic clock reading from their results. The canonical way to strip a monotonic clock reading is to use t = t.Round(0).
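To see this in action, here is a minimal sketch (the exact timestamps will differ on your machine): the value returned by time.Now() prints with the m=... monotonic reading, while the same value passed through Local() or Round(0) prints without it.
package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.Now()

    fmt.Println(t)          // includes the monotonic reading, e.g. "... m=+0.000012345"
    fmt.Println(t.Local())  // same wall clock time, monotonic reading stripped
    fmt.Println(t.Round(0)) // the canonical way to strip the monotonic reading
}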

Related

Modelica total time calculation of simulation and equation initialization

I would like to measure the total simulation and initialization time of a system of DAEs. I am interested in the wall-clock time (like the one given in Matlab by the function tic-toc).
I noticed that in Modelica there are different flags for the simulation time, but the time I get is very small compared to the time that elapses from when I press the simulation button until the end of the simulation (approximately measured with the clock of my phone).
I guess this short time is just the time required for the simulation itself and does not include the initialization of the system of equations.
Is there a way to calculate this total time?
Thank you so much in advance,
Gabriele
Dear Marco,
Thank you so much for your extremely detailed and useful reply!
I am actually using OpenModelica and not Dymola, so unfortunately I have to build the function myself, and I am very new to the OpenModelica language.
So far, I have a model that simulates the physical behavior based on a system of DAEs. Now, I am trying to build what you suggest here:
With getTime() you can build a function that: reads the system time as t_start, translates the model and simulates it for 0 seconds, reads the system time again as t_stop, and computes the difference between t_start and t_stop.
Could you please give me more details: which command can I use to read the system time at t_start and to simulate for 0 seconds? To do this for both t_start and t_stop, do I need two different functions?
Once I have done this, do I have to call the function (or functions) inside the OpenModelica model whose time I want to measure?
Thank you so much again for your precious help!
Very best regards, Gabriele
Depending on the tool you have, this could mean a lot of work.
The first problem is that the MSL allows you to retrieve the system time, but there is nothing included to easily compute time deltas. Therefore the Testing library in Dymola features the operator records DateTime and Duration. Note that it is planned to integrate them in future MSL versions, but at the moment they are only available via the Testing library for Dymola users.
The second problem is that there is no standardized way to translate and simulate models. Every tool has its own way to do that from scripts. So without knowing which tool you are using, it's not possible to give an exact answer.
What Modelica offers in the MSL
In the current Modelica Standard Library version 3.2.3 you can read the actual system time via Modelica.Utilities.System.getTime().
This small example shows how to use it:
function printSystemTime
protected
  Integer ms, s, min, h, d, mon, a;
algorithm
  (ms, s, min, h, d, mon, a) := Modelica.Utilities.System.getTime();
  Modelica.Utilities.Streams.print("Current time is: " + String(h) + ":" + String(min) + ":" + String(s));
end printSystemTime;
As you can see, it returns the current system date and time via 7 return values. These variables are not very convenient if you want to compute a time delta, as you end up with 14 variables, each with its own value range.
How to measure translation and simulation time in general
With getTime() you can build a function that:
reads the system time as t_start
translates the model and simulate for 0 seconds
reads the system time again and as t_stop
computes the difference between t_start and t_stop.
Step 2 depends on the tool. In Dymola you would call
DymolaCommands.SimulatorAPI.simulateModel("path-to-model", 0, 0);
which translates your model and simulates it for 0 seconds, so it only runs the initialization section.
For Dymola users
The Testing library contains the function Testing.Utilities.Simulation.timing, which does almost exactly what you want.
To translate and simulate your model call it as follows:
Testing.Utilities.Simulation.timing(
  "Modelica.Blocks.Examples.PID_Controller",
  task=Testing.Utilities.Simulation.timing.Task.fullTranslate_simulate,
  loops=3);
This will translate your model and simulate for 1 second three times and compute the average.
To simulate for 0s, duplicate the function and change this
if simulate then
  _ := simulateModel(c);
end if;
to
if simulate then
  _ := simulateModel(c, 0, 0);
end if;

time.Sub() returns 1 second despite the difference exceeding couple of years

I am trying to write a piece of code that will react to a system time change due to synchronisation. Here's a rather simple piece of code that runs inside a goroutine:
var start, end time.Time
var start_ts, end_ts int64
var diff_ts time.Duration
var diff time.Duration
for {
    start = time.Now()
    start_ts = start.Unix()
    fmt.Printf("Now: => %v (%d);\n", start, start_ts)
    time.Sleep(1 * time.Second)
    end = time.Now()
    end_ts = end.Unix()
    fmt.Printf("New Now: %v (%d);\n", end, end_ts)
    diff = end.Sub(start)
    diff_ts = time.Duration(end_ts-start_ts) * time.Second
    fmt.Printf("Measured time duration: %v (%v) %f (%f)\n", diff, diff_ts, diff.Seconds(), diff_ts.Seconds())
}
My problem is that when I change the system time in another console, the new time is read correctly; however, the "original" time difference is incorrect, and I have to resort to constructing the difference manually from the Unix timestamps. Here's an excerpt from the logs:
Now: => 2020-02-26 12:29:42.778827718 +0000 UTC m=+21.776791756 (1582720182);
New Now: 2017-01-01 01:02:03.391215325 +0000 UTC m=+22.777003266 (1483232523);
Measured time duration: 1.00021151s (-27635h27m39s) 1.000212 (-99487659.000000)
How come diff comes out as 1 second even though the difference is clearly greater than that?
Go's time package uses both a "wall clock" (what you are trying to change) and a "monotonic clock". From the docs:
Operating systems provide both a “wall clock,” which is subject to
changes for clock synchronization, and a “monotonic clock,” which is
not. The general rule is that the wall clock is for telling time and
the monotonic clock is for measuring time. Rather than split the API,
in this package the Time returned by time.Now contains both a wall
clock reading and a monotonic clock reading; later time-telling
operations use the wall clock reading, but later time-measuring
operations, specifically comparisons and subtractions, use the
monotonic clock reading.
[...]
If Times t and u both contain monotonic clock readings, the operations t.After(u), t.Before(u), t.Equal(u), and t.Sub(u) are carried out using the monotonic clock readings alone, ignoring the wall clock readings.
This is specifically designed to prevent erratic application behavior when a clock sync (NTP, etc.) occurs and pushes the clock back. Go's time package ensures the monotonic clock reading always moves forward, so comparison and subtraction operations are unaffected by wall clock adjustments.
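If you actually want the difference between the two wall clock readings (including any jump introduced by synchronisation), one approach is to strip the monotonic readings before subtracting, for example with Round(0). A minimal sketch (the sleep and the printed values are only illustrative):
package main

import (
    "fmt"
    "time"
)

func main() {
    start := time.Now()
    time.Sleep(1 * time.Second)
    end := time.Now()

    // Uses the monotonic readings: roughly 1s, even if the wall clock jumped.
    fmt.Println(end.Sub(start))

    // Round(0) strips the monotonic readings, so this subtraction uses the
    // wall clock readings and reflects any clock adjustment in between.
    fmt.Println(end.Round(0).Sub(start.Round(0)))
}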

Unexpected output from time.Time

I just started to learn Go by following a tutorial video on Udemy, and I tried to print the current time as below
package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.Now()
    fmt.Println(t)
}
And I get a very long text as the output as below
2018-07-04 12:03:07.2911671 +0800 +08 m=+0.002000201
I was expecting to get only the +0800 followed by a time zone abbreviation, and that should be the end of it. The expected output, as shown in the tutorial video, is below. But for me, the result is in a much longer form.
2018-07-04 12:03:07.2911671 +0530 IST
The question is, why does the same command time.Now() return different formats between the instructor's program and mine? Why is there no specific format being set; shouldn't a standardized/base format be returned instead?
The question is, why does the same command time.Now() return a different format between the instructor's program and mine?
Because the tutorial was created before the release of Go 1.9. As of Go 1.9, monotonic clock support was added to the time.Time struct, which added those extra fields.
For normal usage, you should always output time using the Format function, rather than outputting the raw data. This will produce more useful output, and be protected against any future additions to the underlying type.
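For instance, a minimal sketch using one of the predefined layout strings (any layout would do):
package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.Now()

    // Format renders only the wall clock reading in the given layout;
    // the monotonic reading never appears in formatted output.
    fmt.Println(t.Format(time.RFC3339))
    fmt.Println(t.Format("2006-01-02 15:04:05 -0700 MST"))
}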
Your Udemy tutorial video is out of date. Go is continually updated. For example, transparent monotonic time support was added in Go 1.9:
Go 1.9 Release Notes (August 2017)
Transparent Monotonic Time support
The time package now transparently tracks monotonic time in each Time
value, making computing durations between two Time values a safe
operation in the presence of wall clock adjustments. See the package
docs and design document for details.
As always, there are various minor changes and updates to the library,
made with the Go 1 promise of compatibility in mind.
time
If a Time value has a monotonic clock reading, its string
representation (as returned by String) now includes a final field
"m=±value", where value is the monotonic clock reading formatted as a
decimal number of seconds.
Package time
import "time"
The Time returned by time.Now contains a monotonic clock reading. If
Time t has a monotonic clock reading, t.Add adds the same duration to
both the wall clock and monotonic clock readings to compute the
result. Because t.AddDate(y, m, d), t.Round(d), and t.Truncate(d) are
wall time computations, they always strip any monotonic clock reading
from their results. Because t.In, t.Local, and t.UTC are used for
their effect on the interpretation of the wall time, they also strip
any monotonic clock reading from their results. The canonical way to
strip a monotonic clock reading is to use t = t.Round(0).
fmt.Println(t) uses a debugging format so it prints all the underlying time.Time fields.
The canonical way to strip a monotonic clock reading is to use t =
t.Round(0).
For example,
package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.Now()
    fmt.Println(t)
    fmt.Println(t.Round(0))
    t2 := time.Now().Round(0)
    fmt.Println(t2)
}
Playground: https://play.golang.org/p/p_pjRWRB8_y
Output:
2009-11-10 23:00:00 +0000 UTC m=+0.000000001
2009-11-10 23:00:00 +0000 UTC
2009-11-10 23:00:00 +0000 UTC
The +08 is the string returned by t.Location().String(). A Location is given a name when it is created, and that name is used to identify it. It could be IST, +08, or any other string you can think of.
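As a small illustrative sketch (the offsets and names here are arbitrary), time.FixedZone creates a Location with whatever name you give it, and that name is what shows up when the time is printed:
package main

import (
    "fmt"
    "time"
)

func main() {
    t := time.Date(2018, 7, 4, 4, 3, 7, 0, time.UTC)

    // A Location is identified by the name it was created with.
    fmt.Println(t.In(time.FixedZone("+08", 8*60*60)))     // 2018-07-04 12:03:07 +0800 +08
    fmt.Println(t.In(time.FixedZone("IST", 5*3600+1800))) // 2018-07-04 09:33:07 +0530 IST
}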
The m=+0.002000201 is the monotonic clock. It is used for more accurate durations. For more information on Go's monotonic clock implementations, see https://golang.org/pkg/time/#hdr-Monotonic_Clocks.
As for the reason the monotonic clock shows up in t.String():
For debugging, the result of t.String does include the monotonic clock reading if present. If t != u because of different monotonic clock readings, that difference will be visible when printing t.String() and u.String().

Length of time representation in Go

Under Unix, I'm working on a program that needs to behave differently depending on whether time is 32-bit (will wrap in 2038) or 64-bit.
I presume Go time is not magic and will wrap in 2038 on a platform with a 32-bit time_t. If this is false and it is somehow always 64-bit, clue me in because that will prevent much grief.
What's the simplest way in Go to write a test for the platform's time_t size? Is there any way simpler than the obvious hack with cgo?
If you really want to find the size of time_t, you can use cgo to link to time.h. Then the sizeof time_t will be available as C.sizeof_time_t. It doesn't get much simpler.
package main

// #include <time.h>
import "C"

import (
    "fmt"
)

func main() {
    fmt.Println(C.sizeof_time_t)
}
Other than trying to set the system time to increasingly distant dates, which is not very polite to anything else running on that system, I don't know of any way to directly query the limits of the hardware clock in a portable fashion in any programming language. C simply hard codes the size of time_t in a file provided by the operating system (on OS X it's /usr/include/i386/_types.h), so you're probably best off taking advantage of that information by querying the size of time_t via cgo.
But there are very few reasons to do this. Go does not use time_t and does not appear to suffer from 2038 issues unless you actually plan to have code running on a 32-bit machine in 2038. If that's your plan, I'd suggest finding a better plan.
I presume Go time is not magic and will wrap in 2038 on a platform with a 32-bit time_t. If this is false and it is somehow always 64-bit, clue me in because that will prevent much grief.
Most of the Year 2038 Problem is programs assuming that the time since 1970 will fit in a 32-bit signed integer. This affects time and date functions, as well as network and data formats that choose to represent time as a 32-bit signed count of seconds since 1970. This is not a hardware limit (except if it's actually 2038, see below), but rather a design limitation of older programming languages and protocols. There's nothing stopping you from using 64-bit integers to represent time, or choosing a different epoch. And that's exactly what newer programming languages do, no magic required.
Go was first released in 2009, long after issues such as Unicode, concurrency, and 32-bit time (i.e. the Year 2038 Problem) were acknowledged as problems any programming language would have to tackle. Given how many issues there are with C's time library, I highly doubt that Go uses it at all. A quick skim of the source code confirms this.
While I can't find any explicit mention in the Go documentation of the limits of its Time representation, it appears to be completely disconnected from C's time.h structures such as time_t. Since Time uses 64 bit integers, it seems to be clear of 2038 problems unless you're asking for actual clock time.
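As a quick illustrative check (the timestamp is simply the 32-bit signed rollover point), a time.Time one second past the classic 2038 limit behaves like any other value:
package main

import (
    "fmt"
    "time"
)

func main() {
    // 2^31 seconds after the Unix epoch: one second past the 32-bit time_t limit.
    t := time.Unix(1<<31, 0).UTC()
    fmt.Println(t) // 2038-01-19 03:14:08 +0000 UTC

    // Arithmetic well past 2038 works the same way.
    fmt.Println(t.AddDate(100, 0, 0)) // 2138-01-19 03:14:08 +0000 UTC
}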
Digging into the Go docs for Time, we find that its zero value is well outside the range of a 32-bit time_t, which runs from 1901 to 2038.
The zero value of type Time is January 1, year 1, 00:00:00.000000000 UTC
time.Unix takes seconds and nanoseconds as 64-bit integers, leaving no doubt that it is divorced from the size of time_t.
time.Parse will parse a year "in the range 0000..9999", again well outside the range of a 32-bit time_t.
And so on. The only limitation I could find is that a Duration is limited to about 290 years, because it has nanosecond accuracy and 290 years is about 63 bits' worth of nanoseconds.
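A small sketch that makes this limit visible (Duration is an int64 count of nanoseconds, so its maximum value prints as a bit over 290 years' worth of hours):
package main

import (
    "fmt"
    "math"
    "time"
)

func main() {
    // The largest representable Duration: math.MaxInt64 nanoseconds.
    max := time.Duration(math.MaxInt64)
    fmt.Println(max) // 2562047h47m16.854775807s
}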
Of course, you should test your code on a machine with a 32-bit time_t.
One side issue of the 2038 Problem is time zones. Computers calculate time zone information from a time zone database, usually the IANA time zone database. This allows one to get the time offset for a certain location at a certain time.
Computers have their own copy of the time zone database installed. Unfortunately it's difficult to know where it is located or when it was last updated. To avoid this issue, most programming languages supply their own copy of the time zone database. Go does as well.
The only real limitation on a machine with 32-bit time is the limit of its hardware clock. This tells the software what time it is right now. A 32-bit clock only becomes an issue if your program is still running on a 32-bit machine in 2038. There isn't much point in mitigating that, because everything on that machine will have the same problem and it's unlikely they took it into account. You're better off decommissioning that hardware before 2038.
Ordinarily, time.Time uses 63 bits to represent wall clock seconds elapsed since January 1, year 1 00:00:00 UTC, up through 219250468-12-04 15:30:09.147483647 +0000 UTC. For example,
package main

import (
    "fmt"
    "time"
)

func main() {
    var t time.Time
    fmt.Println(t)
    t = time.Unix(1<<63-1, 1<<31-1)
    fmt.Println(t)
}
Playground: https://play.golang.org/p/QPs1m6eMPH
Output:
0001-01-01 00:00:00 +0000 UTC
219250468-12-04 15:30:09.147483647 +0000 UTC
If a time.Time contains a monotonic clock reading (as values returned by time.Now() do), it uses 33 bits to represent wall clock seconds, covering the years 1885 through 2157.
References:
Package time
Proposal: Monotonic Elapsed Time Measurements in Go

TCL - how to know how much time a function has worked?

Say I have a proc that consists of several statements and function calls. How can I know how much time the proc has taken so far?
a very crude example would be something like:
set TIME_start [clock clicks -milliseconds]
...do something...
set TIME_taken [expr [clock clicks -milliseconds] - $TIME_start]
Using the time proc, you can do the following:
% set tt [time {set x [expr 23 * 34]}]
38 microseconds per iteration
To measure the time some code has taken, you either use time or clock.
The time command will run its script argument and return a description of how long the script took, in microseconds (plus some descriptive text, which is trivial to chop off with lindex). If you're really doing performance analysis work, you can supply an optional count argument that makes the script be run repeatedly, but for general monitoring you can ignore that.
The clock command lets you get various sorts of timestamps (as well as doing formatting, parsing and arithmetic with times). The coarsest is obtained with clock seconds, which returns the amount of time since the beginning of the Unix epoch (in seconds computed with civil time; that's what you want unless you're doing something specialized). If you need more detail, you should use clock milliseconds or clock microseconds. There's also clock clicks, but it's not typically defined what unit it counts in (unless you pass the -milliseconds or -microseconds option). It's up to you to turn the timestamps into something useful to you.
If you're timing things on Tcl 8.4 (or before!) then you're constrained to using time, clock seconds or clock clicks (and even the -microseconds option is absent; there's no microsecond-resolution timer exposed in 8.4). In that case, you should consider upgrading to 8.5, as it's generally faster. Faster is Good! (If you're using pre-8.4, definitely upgrade as you're enormously behind on the support front.)
To tell how long a function has taken, you can either use the time command (wrapped around the function call) or use clock clicks to get the current time before and then during the function. The time approach is simple but can only time a whole function (and only gives you a time when the function returns). Reading clock clicks can be done several times, but you will need to subtract the starting time from the current time yourself.
In case you're really looking for some kind of profiler, have a look at the profiler package in Tcllib:
http://tcllib.sourceforge.net/doc/profiler.html
