Can the value of std::chrono::system_clock::time_point change based on the timezone? - c++11

I have been writing unit tests for a class in our codebase that converts date/time values from std::string to std::chrono::time_point and vice versa, for different kinds of timestamps (yyyy-mm-dd, hh:mm:ss.ms, etc.).
One way I tried to test whether a std::chrono::system_clock::time_point returned by a function in our codebase was the same as one created in the unit tests was to do something like this:
std::chrono::system_clock::time_point m_TimePoint{}; // == Clock's epoch
auto foo = convertToString(m_TimePoint); //foo is std::string
auto bar = getTimePoint(foo);
ASSERT_EQ(m_TimePoint, bar);
This was on Ubuntu. The default constructor should return a time point at the clock's epoch, UTC Jan 1 00:00:00 1970. Yet when I used ctime() to see the textual representation of that epoch, it returned Dec 31 19:00:00 1969. I was puzzled until I checked that EST (my system timezone) is UTC-5.
Once I created the object as -
std::chrono::system_clock::duration d{0};
d += std::chrono::hours(5);
std::chrono::system_clock::time_point m_TimePoint{d}; //== clock epoch + 5 hours
All worked fine.
My question is: is it possible for the system clock's epoch to be adjusted based on the system timezone?

There are two answers to this question, and I'll try to hit both of them.
system_clock was introduced in C++11, and its epoch was left unspecified. So the technical answer to your question is: yes, it is possible for the system_clock epoch to be adjusted based on the system timezone.
But that's not the end of the story.
There are only a few implementations of system_clock, and all of them model Unix Time. Unix Time is a measure of time duration since 1970-01-01 00:00:00 UTC, excluding leap seconds. It is not dependent on the system timezone.
The C++20 spec standardizes this existing practice.
So from a practical standpoint, the answer to your question is: no, it is not possible for the system_clock epoch to be adjusted based on the system timezone.
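You can see both halves of that answer in a few lines: on these implementations a default-constructed time_point is 1970-01-01 00:00:00 UTC, and the Dec 31 19:00:00 1969 you saw comes from ctime(), which formats in local time rather than UTC. A minimal sketch:
#include <chrono>
#include <ctime>
#include <iostream>

int main() {
    // Default construction gives the clock's epoch.
    std::chrono::system_clock::time_point tp{};
    std::time_t t = std::chrono::system_clock::to_time_t(tp);

    // gmtime() formats in UTC: Thu Jan  1 00:00:00 1970, regardless of timezone.
    std::cout << "UTC:   " << std::asctime(std::gmtime(&t));

    // ctime() formats in *local* time, which is why an EST machine prints
    // Wed Dec 31 19:00:00 1969; the epoch itself has not moved.
    std::cout << "local: " << std::ctime(&t);
}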
One thing that could be tripping you up is that system_clock typically counts time in units finer than milliseconds. The unit varies from platform to platform, and you can inspect it with system_clock::duration::period::num and system_clock::duration::period::den. These are two compile-time integral constants that form a fraction, num/den, describing the length in seconds of one tick of system_clock. For Ubuntu my guess would be that this forms the fraction 1/1'000'000'000, i.e. nanoseconds.
You can get milliseconds (or whatever unit you want) out of system_clock::time_point with:
auto tp = time_point_cast<milliseconds>(system_clock::now());
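Putting the two together, here is a small self-contained sketch (plain standard library, nothing specific to your codebase) that prints the platform's tick period and the current time truncated to milliseconds:
#include <chrono>
#include <iostream>

int main() {
    using namespace std::chrono;

    // Tick period of system_clock as the compile-time fraction num/den seconds.
    std::cout << system_clock::duration::period::num << "/"
              << system_clock::duration::period::den
              << " seconds per tick\n";

    // Truncate the current time to millisecond precision.
    auto tp = time_point_cast<milliseconds>(system_clock::now());
    std::cout << tp.time_since_epoch().count() << " ms since the epoch\n";
}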

Related

time.Sub() returns 1 second despite the difference exceeding a couple of years

I am trying to write a piece of code that will react to a system time change due to synchronisation. Here's a rather simple piece of code running inside a goroutine:
var start, end time.Time
var start_ts, end_ts int64
var diff_ts time.Duration
var diff time.Duration
for {
    start = time.Now()
    start_ts = start.Unix()
    fmt.Printf("Now: => %v (%d);\n", start, start_ts)
    time.Sleep(1 * time.Second)
    end = time.Now()
    end_ts = end.Unix()
    fmt.Printf("New Now: %v (%d);\n", end, end_ts)
    diff = end.Sub(start)
    diff_ts = time.Duration(end_ts-start_ts) * time.Second
    fmt.Printf("Measured time duration: %v (%v) %f (%f)\n", diff, diff_ts, diff.Seconds(), diff_ts.Seconds())
}
My problem is that when I change the system time in another console, the new time is read correctly; however, the "original" time difference is incorrect and I have to resort to constructing the time difference manually. Here's an excerpt from the logs:
Now: => 2020-02-26 12:29:42.778827718 +0000 UTC m=+21.776791756 (1582720182);
New Now: 2017-01-01 01:02:03.391215325 +0000 UTC m=+22.777003266 (1483232523);
Measured time duration: 1.00021151s (-27635h27m39s) 1.000212 (-99487659.000000)
How come the diff object returns 1 second even though the difference is clearly greater than that?
Go's time package uses both a "wall clock" (what you are trying to change) and a monotonic clock. From the docs:
Operating systems provide both a “wall clock,” which is subject to changes for clock synchronization, and a “monotonic clock,” which is not. The general rule is that the wall clock is for telling time and the monotonic clock is for measuring time. Rather than split the API, in this package the Time returned by time.Now contains both a wall clock reading and a monotonic clock reading; later time-telling operations use the wall clock reading, but later time-measuring operations, specifically comparisons and subtractions, use the monotonic clock reading.
[...]
If Times t and u both contain monotonic clock readings, the operations t.After(u), t.Before(u), t.Equal(u), and t.Sub(u) are carried out using the monotonic clock readings alone, ignoring the wall clock readings.
This is specifically designed to prevent deviant app behavior when a clock sync (NTP etc.) occurs and pushes the clock back. Go's time package ensures that the monotonic clock reading always moves forward (for comparison and subtraction operations).
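(An aside for readers coming from the C++ question at the top: the same wall/monotonic split exists there. system_clock is the wall clock and steady_clock is the monotonic one, so elapsed time is normally measured with the latter; a minimal sketch of the equivalent pattern:)
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    // steady_clock is monotonic: it is unaffected by NTP adjustments or the
    // user changing the system time, much like Go's monotonic reading.
    auto start = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::seconds(1));
    auto end = std::chrono::steady_clock::now();

    auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(end - start);
    std::cout << "elapsed: " << elapsed.count() << " ms\n";
}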

unix timestamp with millisecond precision in VB6?

How can I get a Unix timestamp with millisecond (or better) precision in VB6?
(Getting a seconds-precision stamp and multiplying it by 1000 is not an acceptable solution, because I need sub-second precision.)
Edit: No, How to get time elapsed in milliseconds is not a duplicate of this question. I need a Unix timestamp counted since 1970-01-01 00:00:00, while that question only needs to know how much time it took to execute operations, and its answers use GetTickCount(), which counts milliseconds since system reboot, not since 1970. (And even if I were theoretically running on a computer that was last rebooted in 1970, which I'm not, GetTickCount() has a maximum of 49.7 days, and I don't think such an old system would be running Windows NT anyway, meaning it wouldn't have access to WinAPI's GetTickCount().)
Edit 2: I don't have a solution yet, but it's probably possible to use GetSystemTimeAsFileTime() to get the raw numbers; I just don't know how to treat two 32-bit integers as a 64-bit integer in VB6 to derive the timestamp.
Also, here is code to call GetSystemTime() in VB6 (thanks to danaseaman):
Private Type SYSTEMTIME
    wYear As Integer
    wMonth As Integer
    wDayOfWeek As Integer
    wDay As Integer
    wHour As Integer
    wMinute As Integer
    wSecond As Integer
    wMilliseconds As Integer
End Type

Private Declare Sub GetSystemTime Lib "KERNEL32.dll" (lpSystemTime As SYSTEMTIME)
Based upon the many comments, here is a simple solution for future readers of this question:
Private Sub Form_Load()
    'https://www.freeformatter.com/epoch-timestamp-to-date-converter.html
    Dim epoch As Currency
    epoch = (DateDiff("s", "1/1/1970", Date) + Timer) * 1000
    Debug.Print Round(epoch)
End Sub
The idea is to calculate the number of seconds from the Unix epoch to midnight today (DateDiff), add the fractional seconds elapsed since midnight (Timer), and multiply by 1000 to get milliseconds. The result appears to be accurate according to a number of websites, including the one referenced in the code.
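For a future reader who does want the GetSystemTimeAsFileTime() route from Edit 2, the 64-bit math is the only tricky part: FILETIME counts 100-nanosecond ticks since 1601-01-01 UTC, and the offset to the Unix epoch is 116444736000000000 ticks. Here is a sketch of that arithmetic in C++ (used only to show the math; a VB6 port would do the same combining and scaling with the 64-bit Currency type):
#include <windows.h>
#include <cstdint>
#include <cstdio>

int main() {
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);  // 100 ns ticks since 1601-01-01 UTC

    // Combine the two 32-bit halves into one 64-bit tick count.
    uint64_t ticks = (static_cast<uint64_t>(ft.dwHighDateTime) << 32) | ft.dwLowDateTime;

    // 116444736000000000 ticks lie between 1601-01-01 and 1970-01-01.
    const uint64_t kUnixEpochOffset = 116444736000000000ULL;

    // 10,000 ticks of 100 ns each make one millisecond.
    uint64_t unix_ms = (ticks - kUnixEpochOffset) / 10000;

    printf("%llu\n", static_cast<unsigned long long>(unix_ms));
}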

Length of time representation in Go

Under Unix, I'm working on a program that needs to behave differently depending on whether time is 32-bit (will wrap in 2038) or 64-bit.
I presume Go time is not magic and will wrap in 2038 on a platform with a 32-bit time_t. If this is false and it is somehow always 64-bit, clue me in because that will prevent much grief.
What's the simplest way in Go to write a test for the platform's time_t size? Is there any way simpler than the obvious hack with cgo?
If you really want to find the size of time_t, you can use cgo to link to time.h. Then the sizeof time_t will be available as C.sizeof_time_t. It doesn't get much simpler.
package main

// #include <time.h>
import "C"

import (
    "fmt"
)

func main() {
    fmt.Println(C.sizeof_time_t)
}
Other than trying to set the system time to increasingly distant dates, which is not very polite to anything else running on that system, I don't know of any way to directly query the limits of the hardware clock in a portable fashion in any programming language. C simply hard codes the size of time_t in a file provided by the operating system (on OS X it's /usr/include/i386/_types.h), so you're probably best off taking advantage of that information by querying the size of time_t via cgo.
But there are very few reasons to do this. Go does not use time_t and does not appear to suffer from 2038 issues unless you actually plan to have code running on a 32-bit machine in 2038. If that's your plan, I'd suggest finding a better plan.
I presume Go time is not magic and will wrap in 2038 on a platform with a 32-bit time_t. If this is false and it is somehow always 64-bit, clue me in because that will prevent much grief.
Most of the Year 2038 Problem is programs assuming that the time since 1970 will fit in a 32-bit signed integer (2^31 - 1 seconds after 1970-01-01 00:00:00 UTC lands at 2038-01-19 03:14:07 UTC, hence the date). This affects time and date functions, as well as network and data formats which choose to represent time as a 32-bit signed integer since 1970. This is not some hardware limit (except if it's actually 2038, see below), but rather a design limitation of older programming languages and protocols. There's nothing stopping you from using 64-bit integers to represent time, or choosing a different epoch. And that's exactly what newer programming languages do, no magic required.
Go was first released in 2009, long after issues such as Unicode, concurrency, and 32-bit time (i.e. the Year 2038 Problem) were acknowledged as issues any programming language would have to tackle. Given how many issues there are with C's time library, I highly doubt that Go is using it at all. A quick skim of the source code confirms this.
While I can't find any explicit mention in the Go documentation of the limits of its Time representation, it appears to be completely disconnected from C's time.h structures such as time_t. Since Time uses 64 bit integers, it seems to be clear of 2038 problems unless you're asking for actual clock time.
Digging into the Go docs for Time, we find that its zero value is well outside the range of a 32-bit time_t, which runs from 1901 to 2038.
The zero value of type Time is January 1, year 1, 00:00:00.000000000 UTC
time.Unix takes seconds and nanoseconds as 64 bit integers leaving no doubt that it is divorced from the size of time_t.
time.Parse will parse a year "in the range 0000..9999", again well outside the range of a 32-bit time_t.
And so on. The only limitation I could find is that a Duration is limited to 290 years because it has a nanosecond accuracy and 290 years is about 63 bits worth of nanoseconds.
Of course, you should test your code on a machine with a 32-bit time_t.
One side issue of the 2038 Problem is time zones. Computers calculate time zone information from a time zone database, usually the IANA time zone database. This allows one to get the time offset for a certain location at a certain time.
Computers have their own copy of the time zone database installed. Unfortunately, it's difficult to know where it is located or when it was last updated. To avoid this issue, most programming languages supply their own copy of the time zone database, and Go does as well.
The only real limitation on a machine with 32-bit time is the limit of its hardware clock. This tells the software what time it is right now. A 32-bit clock only becomes an issue if your program is still running on a 32-bit machine in 2038. There isn't much point in mitigating that, because everything on that machine will have the same problem and it's unlikely they took it into account. You're better off decommissioning that hardware before 2038.
Ordinarily, time.Time uses 63 bits to represent wall clock seconds elapsed since January 1, year 1 00:00:00 UTC, up through 219250468-12-04 15:30:09.147483647 +0000 UTC. For example,
package main

import (
    "fmt"
    "time"
)

func main() {
    var t time.Time
    fmt.Println(t)
    t = time.Unix(1<<63-1, 1<<31-1)
    fmt.Println(t)
}
Playground: https://play.golang.org/p/QPs1m6eMPH
Output:
0001-01-01 00:00:00 +0000 UTC
219250468-12-04 15:30:09.147483647 +0000 UTC
If a time.Time contains a monotonic clock reading (i.e. it came from time.Now()), it uses 33 bits to represent wall clock seconds, covering the years 1885 through 2157.
References:
Package time
Proposal: Monotonic Elapsed Time Measurements in Go

MPI Fortran WTIME not working well

I am coding using Fortran MPI and I need to get the run time of the program. Therefore I tried to use the WTIME() function but I am getting some strange results.
Part of the code is like this:
program heat_transfer_1D_parallel
    implicit none
    include 'mpif.h'
    integer myid,np,rc,ierror,status(MPI_STATUS_SIZE)
    integer :: N,N_loc,i,k,e !e = number extra points (filled with 0s)
    real :: time,tmax,start,finish,dt,dx,xmax,xmin,T_in1,T_in2,T_out1,T_out2,calc_T,t1,t2
    real,allocatable,dimension(:) :: T,T_prev,T_loc,T_loc_prev

    call MPI_INIT(ierror)
    call MPI_COMM_SIZE(MPI_COMM_WORLD,np,ierror)
    call MPI_COMM_RANK(MPI_COMM_WORLD,myid,ierror)
    ...
    t1 = MPI_WTIME()
    time = 0.
    do while (time.le.tmax)
        ...
    end do
    ...
    call MPI_BARRIER(MPI_COMM_WORLD,ierror)
    t2 = MPI_WTIME()
    call MPI_FINALIZE(ierror)
    if(myid.eq.0) then
        write(*,"(8E15.7)") T(1:N-e)
        write(*,*)t2
        write(*,*)t1
    end if
And the output values for t1 and t2 are the same, and very big: 1.4240656E+09
Any ideas why? Thank you so much.
From the documentation: "Return value: Time in seconds since an arbitrary time in the past." They didn't specify how far back ;-) Only t2 - t1 is meaningful here...
Also, the return value of MPI_Wtime() is double precision! t1 and t2 are declared as single precision floats.
Building on the answer from Alexander Vogt, I'd like to add that many Unix implementations of MPI_WTIME use gettimeofday(2) (or similar) to retrieve the system time and then convert the returned struct timeval into a floating-point value. Timekeeping in Unix is done by tracking the number of seconds elapsed since the Epoch (00:00 UTC on 01.01.1970). While writing this, the value is 1424165330.897136 seconds and counting.
With many Fortran compilers, REAL defaults to single-precision floating point representation that can only hold 7.22 decimal digits, while you need at least 9 (more if subsecond precision is needed). The high-precision time value above becomes 1.42416538E9 when stored in a REAL variable. The next nearest value that can be represented by the type is 1.4241655E9. Therefore you cannot measure time periods shorter than (1.4241655 - 1.42416538) × 10⁹, or 120 seconds.
Use double precision.
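To see how coarse single precision is at this magnitude, here is a small demonstration of the same rounding (written in C++ only because a 32-bit float there behaves just like a default REAL; it is an illustration, not part of the MPI code):
#include <cmath>
#include <cstdio>

int main() {
    double precise = 1424165330.897136;            // a Unix time with sub-second detail
    float  rounded = static_cast<float>(precise);  // what a single-precision REAL keeps

    printf("double: %.6f\n", precise);                          // 1424165330.897136
    printf("float : %.6f\n", static_cast<double>(rounded));     // 1424165376.000000

    // Adjacent representable floats near 1.4e9 are 128 seconds apart, so
    // sub-second (or even sub-minute) timing is impossible at this scale.
    printf("spacing: %.0f seconds\n",
           static_cast<double>(std::nextafterf(rounded, 2e9f) - rounded));
}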

Determining Millisecond Time Intervals In Cocoa

Just as background, I'm building an application in Cocoa. This application existed originally in C++ in another environment. I'd like to do as much as possible in Objective-C.
My questions are:
1) How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as now?
2) When used in an Objective-C program, including time.h, what are the units of clock()?
Thank you for your help.
You can use CFAbsoluteTimeGetCurrent() but bear in mind the clock can change between two calls and can screw you over. If you want to protect against that you should use CACurrentMediaTime().
The return type of these is CFAbsoluteTime and CFTimeInterval respectively, which are both double by default. So they return the number of seconds with double precision. If you really want an integer you can use mach_absolute_time() found in #include <mach/mach_time.h> which returns a 64 bit integer. This needs a bit of unit conversion, so check out this link for example code. This is what CACurrentMediaTime() uses internally so it's probably best to stick with that.
Computing the difference between two calls is obviously just a subtraction, use a variable to remember the last value.
For the clock() function, see its documentation: clock(). Basically, you need to divide the return value by CLOCKS_PER_SEC to get the elapsed time in seconds.
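For reference, the mach_absolute_time() path mentioned above looks roughly like this; a sketch assuming <mach/mach_time.h> on macOS, with the timebase conversion that QA1398 walks through:
#include <mach/mach_time.h>
#include <cstdint>
#include <cstdio>

int main() {
    // mach_absolute_time() counts in CPU-dependent units; the timebase gives
    // the numer/denom fraction that converts those units to nanoseconds.
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);

    uint64_t start = mach_absolute_time();
    // ... the work you want to time ...
    uint64_t end = mach_absolute_time();

    uint64_t elapsed_ns = (end - start) * timebase.numer / timebase.denom;
    uint64_t elapsed_ms = elapsed_ns / 1000000;  // integer milliseconds, as asked

    printf("%llu ms\n", static_cast<unsigned long long>(elapsed_ms));
}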
How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as now?
Is there any reason you need it as an integral number of milliseconds? Asking NSDate for the time interval since another date will give you a floating-point number of seconds. If you really do need milliseconds, you can simply multiply that by 1000 to get a floating-point number of milliseconds. If you really do need an integer, you can round or truncate the floating-point value.
If you'd like to do it with integers from start to finish, use either UpTime or mach_absolute_time to get the current time in absolute units, then use AbsoluteToNanoseconds to convert that to a real-world unit. Obviously, you'll have to divide that by 1,000,000 to get milliseconds.
QA1398 suggests mach_absolute_time, but UpTime is easier, since it returns the same type AbsoluteToNanoseconds uses (no “pointer fun” as shown in the technote).
AbsoluteToNanoseconds returns an UnsignedWide, which is a structure. (This stuff dates back to before Mac machines could handle scalar 64-bit values.) Use the UnsignedWideToUInt64 function to convert it to a scalar. That just leaves the subtraction, which you'll do the normal way.
