Are the decimal components of Unix timestamps and UTC time synced? - time

Conventional time is meant to stay in sync with the rotation of the Earth, and so is adjusted with leap years and leap seconds, while Unix time is meant to measure the number of seconds since midnight on Jan 1, 1970. As such, the two drift apart over time.
But what about the decimals? It seems to me that if you took just the decimal portion of UTC, Unix time, and frankly any other time zone, they should line up except during the exact time a leap second or leap smear is taking place.
Are the decimal components of Unix timestamps and UTC time synced (except during such events)?

The reason leap seconds are issued is that we have two different ways of defining a second:
As 1⁄86400 of one rotation of the Earth (a mean solar day)
A more stable definition from the SI standard: https://en.wikipedia.org/wiki/Second
These two seconds are not of equal length. In science and computing we prefer something very exact (the SI second), while for everyday clocks we prefer the second to be 1⁄86400 of a day.
To make clocks on computers match up with our expectation of the rotation-based clock, we add or remove seconds in the form of leap seconds.
What's really going on is that the 'length' of these two second definitions is different and keeps changing (relative to each other). Once that difference has caused the clocks to drift far enough apart, we just add a second to our computers to match the other definition.
But this drift is not instant. It happens gradually, which means the two clocks slowly drift apart.
The suggestion that the 'decimals' are the same doesn't really make that much sense, then. The difference between these decimals grows and grows until we have to add or remove a second to bring them closer together again. The Earth's rotation isn't suddenly an extra second faster one day.
So when you ask whether they are synced, you're really asking whether the rotation of the Earth is synced. We don't yet have the power to make the Earth spin slower or faster ;)
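A minimal sketch of the idea (the drift rate below is a made-up, illustrative value; in reality the Earth's rotation rate varies unpredictably, and the IERS schedules a leap second whenever |UT1 - UTC| approaches 0.9 s):

# Illustrative only: assume a hypothetical, constant drift of 1.5 ms/day
# between rotation-based time (UT1) and atomic-based time (UTC).
DRIFT_PER_DAY = 0.0015      # seconds/day (assumed, not a measured value)
THRESHOLD = 0.9             # IERS keeps |UT1 - UTC| below about 0.9 s

offset = 0.0
days = 0
while abs(offset) < THRESHOLD:
    offset += DRIFT_PER_DAY
    days += 1

print(f"After ~{days} days the drift reaches {offset:.2f} s,")
print("at which point a leap second would be scheduled to pull the clocks back together.")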

Related

Google Sheets Countdown Timer for Aircraft Flight Plan "Time Remaining"

I'm struggling with a formula in Google Sheets that will display a countdown for how much time remains until an aircraft is overdue, based on a filed flight plan.
What I have: 2 key pieces of information about a flight plan:
how much estimated time it will take to make the flight (ETE:
Estimated Time Enroute)
what time the aircraft departed (ATD: Actual Time of Departure)
Constraints: (mandated by company policy)
The ETE must be entered in decimal format, in numbers of hours. A 1hr 30min flight must have an ETE of 1.5, or a 20 minute flight must have an ETE of .3 (rounded to the nearest 10th).
The ATD must be entered in 4-digit 24hr time, but without the colon. 1:30pm must be entered as "1330".
The countdown timer must be displayed in minutes, rounded to the nearest whole number. 1hr 28min must be listed as "88".
The countdown should be "live" (this is solved by the spreadsheet recalculation setting "on change and every minute").
The countdown should easily indicate aircraft that have become "overdue" (this will be solved with conditional formatting to highlight negative numbers)
My pseudo formula is essentially just: Now() - (ETE+ATD), but I'm stuck on how to get around the constraints, specifically the three different time formats (decimal-hours ETE, 4-digit 24hr ATD, and remaining time in minutes).
I've set up a dummy sheet here:
https://docs.google.com/spreadsheets/d/165mXKRquI4aBEEap8PIHVrFpAraaapykGqjkDg22qeU/edit?usp=sharing
*I've looked through this Q&A, but it's a GAS solution. I'd much prefer to just have a formula. Preferably an array formula, so that it copies down to however many rows there might end up being.
**Possibly a secondary concern down the road: at the moment, we do not conduct overnight flights, but it's possible in the future. Starting a 3hr flight at 10pm will result in the arrival time being the next day. Hopefully, there is a solution for this.
I suggest some testing before use, but this should be worth trying:
=if(now()>today()+1*(left(A2,2)&":"&right(A2,2)),round(24*60*(today()+1*(left(A2,2)&":"&right(A2,2))-now())+B2*60,0),"")
where the ATD value is in A2 and the ETE in B2.
It could be simplified, but the longer form might be easier to adapt for overnight flights, if required.
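For reference, here is the same arithmetic sketched outside of Sheets (a rough Python illustration, not a drop-in replacement for the formula; the overnight handling shown is only an assumption about how departures that wrap past midnight might be treated):

from datetime import datetime, timedelta

def minutes_remaining(atd_hhmm: str, ete_hours: float, now: datetime) -> int:
    """Minutes until the aircraft is overdue (negative means overdue)."""
    # Parse the 4-digit 24hr departure time, e.g. "1330" -> today at 13:30
    departure = now.replace(hour=int(atd_hhmm[:2]), minute=int(atd_hhmm[2:]),
                            second=0, microsecond=0)
    # Assumed overnight handling: if the departure appears to be in the
    # future, treat it as having happened yesterday (the flight crossed midnight).
    if departure > now:
        departure -= timedelta(days=1)
    due = departure + timedelta(hours=ete_hours)   # ETE given in decimal hours
    return round((due - now).total_seconds() / 60)

print(minutes_remaining("1330", 1.5, datetime(2023, 6, 1, 14, 2)))  # 58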

Daylight Savings Time Gap/Overlap definitions? When to "correct" for them?

What is the definition of Daylight Savings Time 'Overlap' & 'Gap'?
I have a hazy understanding of them, so I'd like to confirm... What does it mean to be "within" either of them?
What does it mean to "correct" for DST Gap or DST Overlap? When does a time need correcting, and when does it not need correcting?
The above questions are language-agnostic, but an example of their application I have is:
When to call org.joda.time.LocalDateTime#correctDstTransition?
Correct date in case of DST overlap. The Date object created has
exactly the same fields as this date-time, except when the time would
be invalid due to a daylight savings gap. In that case, the time will
be set to the earliest valid time after the gap. In the case of a
daylight savings overlap, the earlier instant is selected.
Much of this is already explained in the DST tag wiki, but I will answer your specific questions.
What is the definition of Daylight Savings Time 'Overlap' & 'Gap'?
...
What does it mean to be "within" either of them?
When daylight saving time begins, the local time is advanced - usually by one hour. This creates a "gap" in the values of local time in that time zone.
For example, when DST starts in the United States, the clocks tick from 1:59 AM to 3:00 AM. Any local time value from 2:00 AM through 2:59 AM would be considered to be "within the gap".
Note that values in the gap are non-existent. They do not occur in the real world, unless a clock was not correctly advanced. In practice, one typically gets to a value within the gap by adding or subtracting an elapsed time value from another local time.
When daylight saving time ends, the local time is retracted by the same amount that was added when it began (again, usually 1 hour). This creates an "overlap" in the local time values of that time zone.
For example, when DST ends in the United States, the clocks tick from 1:59 AM back to 1:00 AM. Any local time value from 1:00 AM through 1:59 AM is ambiguous if there is no additional qualifying information.
To be "within the overlap" means that you have a value that is potentially ambiguous because it falls into this range.
Such values may belong to the daylight time occurrence (which comes first sequentially), or may belong to the standard time occurrence (which comes second sequentially).
What does it mean to "correct" for DST Gap or DST Overlap?
Correcting for the gap means ensuring that the local time value is valid, possibly by moving it to a different value. There are various schemes in use for doing so, but the most common and most sensible is to advance the local time value by the amount of the gap.
For example, if you have a local time of 2:30 AM, and you determine it to occur on the day of the spring-forward transition in the United States, then it falls into the gap. Advance it to 3:30 AM.
This approach tends to work well because it simulates the act of a human manually advancing an analog clock - or rather, correcting it on the assumption that it had not been properly advanced.
Correcting for the overlap means to ensure that all local times are well qualified. Usually this is accomplished by assigning a time zone offset to all values.
In the case of a value that is not ambiguous, the offset is deterministic.
In the case of a value that falls within the overlap on the day of a fall-back transition, it often makes sense to choose the first of the two possible values (which will have the daylight time offset). This is because time moves in a forward direction. However, there are sometimes cases where it makes sense to use a different rule, so YMMV.
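To make this concrete, here is a small Python sketch (using the standard zoneinfo module rather than Joda-Time, purely for illustration) showing both corrections for the US Eastern transitions in 2023:

from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+

NY = ZoneInfo("America/New_York")

# Gap: 2:30 AM does not exist on 2023-03-12 (spring forward in the US).
gap = datetime(2023, 3, 12, 2, 30, tzinfo=NY)
# Round-tripping through UTC "corrects" the value past the gap,
# much like a human advancing a wall clock that was never moved forward.
print(gap.astimezone(timezone.utc).astimezone(NY))  # 2023-03-12 03:30:00-04:00

# Overlap: 1:30 AM occurs twice on 2023-11-05 (fall back in the US).
first = datetime(2023, 11, 5, 1, 30, tzinfo=NY)            # fold=0: earlier occurrence
second = datetime(2023, 11, 5, 1, 30, fold=1, tzinfo=NY)   # fold=1: later occurrence
print(first.tzname(), second.tzname())  # EDT EST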
When does a time need correcting, and when does it not need correcting?
If you are attempting to work with time as an instantaneous value, such as determining the elapsed duration between two values, adding an elapsed time to a specific value, or when converting to UTC, then you need to correct for gaps and overlaps as they occur.
If you are only working with user input and output, always displaying the exact value a user gave you (and never using it for math or time zone conversions) then you do not need to correct for gaps and overlaps.
Also, if you are working with date-only values, or time-only values, then you should not be applying time zone information at all, and thus do not need to correct for gaps and overlaps.
Lastly, if you are working strictly with Coordinated Universal Time (UTC), which has no daylight saving time, then you do not need to correct for gaps and overlaps.
When to call org.joda.time.LocalDateTime#correctDstTransition?
You don't. That method is private, and is called by other Joda-time functions as needed.

Unix time and leap seconds

Regarding Unix (POSIX) time, Wikipedia says:
Due to its handling of leap seconds, it is neither a linear representation of time nor a true representation of UTC.
But the Unix date command does not actually seem to be aware of them:
$ date -d '@867715199' --utc
Mon Jun 30 23:59:59 UTC 1997
$ date -d '@867715200' --utc
Tue Jul 1 00:00:00 UTC 1997
Whereas there should be a leap second there, at Mon Jun 30 23:59:60 UTC 1997.
Does this mean that only the date command ignores leap seconds, while the concept of Unix time doesn't?
The number of seconds per day is fixed with Unix timestamps.
The Unix time number is zero at the Unix epoch, and increases by
exactly 86400 per day since the epoch.
So it cannot represent leap seconds. The OS will slow down (or step) the clock to accommodate them. Leap seconds simply do not exist as far as Unix timestamps are concerned.
Unix time is easy to work with, but some timestamps are not real times, and some timestamps are not unique times.
That is, there are some duplicate timestamps representing two different seconds in time, because in Unix time the sixtieth second might have to repeat itself (as there can't be a sixty-first second). Theoretically, there could also be gaps in the future, because the sixtieth second doesn't have to exist, although no negative (skipped) leap seconds have been issued so far.
Rationale for Unix time: it's defined so that it's easy to work with. Adding support for leap seconds to the standard libraries is very tricky. For example, say you want to represent 1 Jan 2050 in a database. No one on Earth knows how many seconds away that date is in UTC! The date can't be stored as a UTC timestamp, because the IERS doesn't know how many leap seconds we'll have to add in the coming decades (they're as good as random). So how can a programmer do date arithmetic when the length of time that will elapse between any two future dates isn't known until a year or two beforehand? Unix time is simple: we know the timestamp of 1 Jan 2050 already (namely, 80 years times the number of seconds in a year). UTC is extremely hard to work with all year round, whereas Unix time is only hard to work with in the instant a leap second occurs.
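A quick sketch of that claim, using Python's calendar.timegm, which performs the plain POSIX arithmetic with no leap-second table:

import calendar
from datetime import datetime, timezone

# POSIX arithmetic: every day is exactly 86400 seconds, so the timestamp
# of any future UTC date is already known today.
print(calendar.timegm((2050, 1, 1, 0, 0, 0, 0, 0, 0)))              # 2524608000
print(int(datetime(2050, 1, 1, tzinfo=timezone.utc).timestamp()))   # 2524608000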
For what it's worth, I've never met a programmer who agrees with leap seconds. They should clearly be abolished.
There is a lot of discussion here and elsewhere about leap seconds, but it isn't a complicated issue, because it doesn't have anything to do with UTC, or GMT, or UT1, or TAI, or any other time standard. POSIX (Unix) time is, by definition, that which is specified by the IEEE Std 1003.1 "POSIX" standard, available here.
The standard is unambiguous: POSIX time does not include leap seconds.
Coordinated Universal Time (UTC) includes leap seconds. However, in POSIX time (seconds since the Epoch), leap seconds are ignored (not applied) to provide an easy and compatible method of computing time differences. Broken-down POSIX time is therefore not necessarily UTC, despite its appearance.
The standard goes into significant detail unambiguously stating that POSIX time does not include leap seconds, in particular:
It is a practical impossibility to mandate that a conforming implementation must have a fixed relationship to any particular official clock (consider isolated systems, or systems performing "reruns" by setting the clock to some arbitrary time).
Since leap seconds are decided by committee, it is not just a "bad idea" to include leap seconds in POSIX time, it is impossible given that the standard allows for conforming implementations which do not have network access.
Elsewhere in this question @Pacerier has said that POSIX time does include leap seconds, and that each POSIX time may correspond to more than one UTC time. While this is certainly one possible interpretation of a POSIX timestamp, this is by no means specified by the standard. His arguments largely amount to weasel words that do not apply to the standard, which defines POSIX time.
Now, things get complicated. As specified by the standard, POSIX time may not be equivalent to UTC time:
Broken-down POSIX time is therefore not necessarily UTC, despite its appearance.
However, in practice, it is. In order to understand the issue, you have to understand time standards. GMT and UT1 are based on the astronomical position of the Earth in the universe. TAI is based on the actual amount of time that passes in the universe as measured by physical (atomic) reactions. In TAI, each second is an "SI second," which are all exactly the same length. In UTC, each second is an SI second, but leap seconds are added as necessary to readjust the clock back to within .9 seconds of GMT/UT1. The GMT and UT1 time standards are defined by empirical measurements of the Earth's position and movement in the universe, and these empirical measurements cannot through any means (neither scientific theory nor approximation) be predicted. As such, leap seconds are also unpredictable.
Now, the POSIX standard also specifies that the intention is for all POSIX timestamps to be interoperable (mean the same thing) in different implementations. One solution is for everyone to agree that each POSIX second is one SI second, in which case POSIX time is equivalent to TAI (with the specified epoch), and nobody need contact anyone except for their atomic clock. We didn't do that, however, probably because we wanted POSIX timestamps to be UTC timestamps.
Using an apparent loophole in the POSIX standard, implementations intentionally slow down or speed up seconds -- so that POSIX time no longer uses SI seconds -- in order to remain in sync with UTC time. Reading the standard it is clear this was not what was intended, because this cannot be done with isolated systems, which therefore cannot interoperate with other machines (their timestamps, without leap seconds, mean something different for other machines, with leap seconds). Read:
[...] it is important that the interpretation of time names and seconds since the Epoch values be consistent across conforming systems; that is, it is important that all conforming systems interpret "536457599 seconds since the Epoch" as 59 seconds, 59 minutes, 23 hours 31 December 1986, regardless of the accuracy of the system's idea of the current time. The expression is given to ensure a consistent interpretation, not to attempt to specify the calendar. [...] This unspecified second is nominally equal to an International System (SI) second in duration.
The "loophole" allowing this behavior:
Note that as a practical consequence of this, the length of a second as measured by some external standard is not specified.
So, implementations abuse this freedom by intentionally changing it to something which cannot, by definition, be interoperable among isolated or nonparticipating systems. Alternatively, the implementation may simply repeat POSIX times as if no time had passed. See this Unix StackExchange answer for details on all modern implementations.
Phew, that was confusing alright... A real brain teaser!
Since both of the other answers contain lots of misleading information, I'll throw this in.
Thomas is right that the number of Unix epoch timestamp seconds per day is fixed. What this means is that on days with a leap second, the second right before midnight (the 61st second of the UTC minute before midnight) is given the same timestamp as the previous second.
That timestamp is "replayed", if you will. So the same Unix timestamp will be used for two real-world seconds. This also means that if you're working with fractional Unix timestamps, the whole second will repeat:
X86399.0, X86399.5, X86400.0, X86400.5, X86400.0, X86400.5, then X86401.0.
So unix time can't unambiguously represent leap seconds - the leap second timestamp is also the timestamp for the previous real-world second.
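You can see the collapse directly with Python's leap-second-unaware calendar arithmetic (a small illustration using the 1997 leap second from the question above):

import calendar

# 1997-06-30 23:59:60 UTC was a real leap second, but POSIX-style
# arithmetic assigns it the same timestamp as the following second.
leap = calendar.timegm((1997, 6, 30, 23, 59, 60, 0, 0, 0))
next_sec = calendar.timegm((1997, 7, 1, 0, 0, 0, 0, 0, 0))
print(leap, next_sec)  # 867715200 867715200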

Meaning of axis of figures of simulation or performance modeling papers

I am reading some papers on simulation and performance modeling. The Y axis in some figures is labeled "Seconds per Simulation Day". I am not sure what it actually means. It spans from 0 to 120, with ticks at 0, 20, 40, and so on.
Another label is "Simulation years per day". I guess it means the guest OS inside the simulation environment thinks several years have passed while only a day has passed in the real world? But since simulation should slow execution down, I would expect the opposite: only a few hours pass inside the simulation environment while a whole day passes in the real world.
Thanks.
Without seeing the paper, I assume they are trying to compare the CPU time it takes to get to some physical time in a simulation.
So "Seconds per Simulation Day" is likely the walltime it took to get 24 hours in the simulation.
Likewise, "Simulation Years per Day" is physical time of simulation/real life day.
Of course, without seeing the paper it's impossible to know for sure. It's also possible they are looking at CPU-seconds or CPU-days, which would be nCPUs*walltime.
Simulations typically run in discrete time units, called time steps. If you'd like to simulate a certain process that spans certain time in the simulation, you would have to perform certain number of time steps. If the length of a time step is fixed, the number of steps is then just the simulated time divided by the length of the time step. Calculations in each time step take certain amount of time and the total run time for the simulation would equal the number of time steps times the time it takes to perform one time step:
(1) total_time = (simulation_time / timestep_length) * run_time_per_timestep
Now several benchmark parameters can be obtained by placing different parameters on the left hand side. E.g. if you fix simulation_time = 1 day then total_time would give you the total simulation run time, i.e.
(2) seconds_per_sim_day = (1 day / timestep_length) * run_time_per_timestep
Large values of seconds_per_sim_day could mean:
it takes too much time to compute a single time step, i.e. run_time_per_timestep is too high -> the computation algorithm should be optimised for speed;
the time step is too short -> search for better algorithms that can accept larger time steps and still produce (almost) the same result.
On the other hand, if you solve (1) for simulation_time and fix total_time = 1 day, you get the number of time steps that can be performed per day times the length of the time step, or the total simulation time that can be achieved per day of computation:
(3) simulation_time_per_day = (1 day / run_time_per_timestep) * timestep_length
Now one can observe that:
larger time steps lead to larger values of simulation_time_per_day, i.e. longer simulations can be computed;
if it takes too much time to compute a time step, the value of simulation_time_per_day would go down.
Usually those figures are used when making decisions about buying CPU time at some computing centre. For example, if you would like to simulate 100 years, just divide that by the number of simulation years per day and you get how many compute days you would have to pay (or wait) for. Larger values of simulation_time_per_day definitely benefit you in this case. If, on the other hand, you only have 10 compute days at your disposal, you can compute how long a simulation could be and make some decisions, e.g. more short simulations with many different parameters vs. fewer but longer simulations with the parameters you have predicted to be optimal.
In real life, things are much more complicated. Computing each time step usually takes a different amount of time (although there are cases where every time step takes exactly the same time), and it strongly depends on the simulation size, configuration, etc. That's why standardised tests exist and usually some averaged value is reported.
Just to summarise: given that all test parameters are kept equal,
faster computers would give less "seconds per simulation day" and more "simulation years per day"
slower computers would give more "seconds per simulation day" and less "simulation years per day"
By the way, both quantities are reciprocal and related by this simple equation:
simulation_years_per_day = 236.55 / seconds_per_simulation_day
(that is "simulation years per day" equals 86400 divided by "seconds per simulation day" /which gives you the simulation days per day/ and then dividied by 365.25 to convert the result into years)
So it doesn't really matter whether "simulation years per day" or "seconds per simulation day" is presented. One just has to choose the representation which most clearly shows how much better the newer system is than the previous/older/existing one :)
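A small sketch of these relations in Python (the benchmark numbers below are made up purely for illustration):

# Hypothetical benchmark inputs (illustrative values only)
timestep_length = 30.0 * 60            # simulated seconds per time step (30 min)
run_time_per_timestep = 2.0            # wall-clock seconds to compute one step

DAY = 86400.0
YEAR = 365.25 * DAY

# (2) wall-clock seconds needed to simulate one day
seconds_per_sim_day = (DAY / timestep_length) * run_time_per_timestep

# (3) simulated time advanced per wall-clock day, expressed in years
simulation_years_per_day = (DAY / run_time_per_timestep) * timestep_length / YEAR

print(seconds_per_sim_day)                      # 96.0
print(round(simulation_years_per_day, 2))       # 2.46
print(round(236.55 / seconds_per_sim_day, 2))   # 2.46  -- the reciprocal relation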

Does the windows FILETIME structure include leap seconds?

The FILETIME structure counts from January 1 1601 (presumably the start of that day) according to the Microsoft documentation, but does this include leap seconds?
The question shouldn't be if FILETIME includes leap seconds.
It should be:
Do the people, functions, and libraries that interpret a FILETIME (e.g. FileTimeToSystemTime) include leap seconds when counting the duration?
The simple answer is "no". FileTimeToSystemTime returns seconds as 0..59.
The simpler answer is: "of course not, how could it?".
My Windows 2000 machine doesn't know that there were 2 leap seconds added in the decade since it was released. Any interpretation it makes of a FILETIME is wrong.
Finally, rather than relying on logic, we can determine, by direct experimental observation, the answer to the poster's question:
var
  systemTime: TSystemTime;
  fileTime: TFileTime;
begin
  //Construct a system-time for 12/31/2008 11:59:59 pm
  ZeroMemory(@systemTime, SizeOf(systemTime));
  systemTime.wYear := 2008;
  systemTime.wMonth := 12;
  systemTime.wDay := 31;
  systemTime.wHour := 23;
  systemTime.wMinute := 59;
  systemTime.wSecond := 59;

  //Convert it to a file time
  SystemTimeToFileTime(systemTime, {var}fileTime);

  //There was a leap second 12/31/2008 11:59:60 pm
  //Add one second to our filetime to reach the leap second
  fileTime.dwLowDateTime := fileTime.dwLowDateTime + 10000000; //10,000,000 * 100ns = 1s

  //Convert the filetime, now sitting on a leap second, back to a displayable system time
  FileTimeToSystemTime(fileTime, {var}systemTime);

  //And now print the system time
  ShowMessage(DateTimeToStr(SystemTimeToDateTime(systemTime)));
end;
Adding one second to
12/31/2008 11:59:59pm
gives
1/1/2009 12:00:00am
rather than
12/31/2008 11:59:60pm
Q.E.D.
Original poster might not like it, but god intentionally rigged it so that a year is not evenly divisible by a day. He did it just to screw up programmers.
There can be no single answer to this question without first deciding: What is the Windows FILETIME actually counting? The Microsoft docs say it counts 100 nanosecond intervals since 1601 UTC, but this is problematic.
No form of internationally coordinated time existed prior to the year 1960. The name UTC itself does not occur in any literature prior to 1964. The name UTC as an official designation did not exist until 1970. But it gets worse. The Royal Greenwich Observatory was not established until 1676, so even trying to interpret the FILETIME as GMT has no clear meaning, and it was only around then that pendulum clocks with accurate escapements began to give accuracies of 1 second.
If FILETIME is interpreted as mean solar seconds then the number of leap seconds since 1601 is zero, for UT has no leap seconds. If FILETIME is interpreted as if there had been atomic chronometers then the number of leap seconds since 1601 is about -60 (that's negative 60 leap seconds).
That is ancient history; what about the era since atomic chronometers? It is no better, because national governments have not made the distinction between mean solar seconds and SI seconds. For a decade the ITU-R has been discussing abandoning leap seconds, but they have not achieved international consensus. Part of the reason for that can be seen in the javascript on this page (also see the delta-T link on that page for plots of the ancient history). Because national governments have not made a clear distinction, any attempt to define the count of seconds since 1972 runs the risk of being invalid according to the laws of some jurisdiction. The delegates to the ITU-R are aware of this complexity, as are the folks on the POSIX committee. Until the diplomatic issues are worked out, until national governments and international standards make a clear distinction and choice between mean solar and SI seconds, there is little hope that the computer standards can follow suit.
Here's some more info about why that particular date was chosen.
The FILETIME structure records time in the form of 100-nanosecond intervals since January 1, 1601. Why was that date chosen?
The Gregorian calendar operates on a 400-year cycle, and 1601 is the first year of the cycle that was active at the time Windows NT was being designed. In other words, it was chosen to make the math come out nicely.
I actually have the email from Dave Cutler confirming this.
The answer to this question used to be no, but has changed to: YES, sort of, sometimes...
Per the Windows Networking team blog article:
Starting in Server 2019 and the Windows 10 October [2018] update time APIs will now take into account all leap seconds the Operating System is aware of when it translates FILETIME to SystemTime.
As no leap seconds have been issued since this feature was added, the operating system is still unaware of any. However, when the next official leap second makes its way into the world, Windows computers with this feature enabled will keep track of it, and FILETIME values will thus be offset by the number of leap seconds known to the computer at the time they are interpreted.
The blog post goes on to describe:
No change is made to FILETIME. It still represents the number of 100 ns intervals since the start of the epoch. What has changed is the interpretation of that number when it is converted to SYSTEMTIME and back. Here is a list of affected APIs:
GetSystemTime
GetLocalTime
FileTimeToSystemTime
FileTimeToLocalTime
SystemTimeToFileTime
SetSystemTime
SetLocalTime
Previous to this release, SYSTEMTIME had valid values for wSecond between 0 and 59. SYSTEMTIME has now been updated to allow a value of 60, provided the year, month, and day represents day in which a leap second is valid.
...
In order to receive the 60 second in the SYSTEMTIME structure a process must explicitly opt-in.
Note that the opt-in applies to how the listed functions map a FILETIME to a SYSTEMTIME. Regardless of whether you opt in, the operating system will still offset FILETIME values according to the leap seconds it is aware of.
With regard to compatibility, the article states:
Applications that rely on 3rd party frameworks should ensure their framework’s implementation on Windows is also calling into the correct APIs to calculate the correct time, or else the application will have the wrong time reported.
It also provides a link to an earlier post which describes how to disable the entire feature, as follows:
... you can revert to the prior operating system behavior and disable leap seconds across the board by adding the following registry key:
HKLM:\SYSTEM\CurrentControlSet\Control\LeapSecondInformation
Type: "REG_DWORD"
Name: Enabled
Value: 0 Disables the system-wide setting
Value: 1 Enables the system-wide setting
Next, restart your system.
Leap seconds are added unpredictably by the IERS. 23 seconds have been added since 1972, when UTC and leap seconds were defined. Wikipedia says "because the Earth's rotation rate is unpredictable in the long term, it is not possible to predict the need for them more than six months in advance."
Since you'd have to keep a history of when leap seconds were inserted, and keep updating the OS with that table, and since the difference is so small, it's fair not to expect a general-purpose OS to compensate for leap seconds.
In addition, the regular clock drift of the simple electronic clock in your PC, compared to UTC, is much larger than the compensation required for leap seconds. If you need the kind of precision that would compensate for leap seconds, you shouldn't be using the highly inaccurate PC clock.
According to this comment, Windows is totally unaware of leap seconds: if you add 24 * 60 * 60 seconds to a FILETIME that represents 1:39:45 today, you get a FILETIME that represents 1:39:45 tomorrow, no matter what.
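As a rough illustration of that arithmetic, here is a leap-second-unaware conversion between FILETIME ticks and Unix time, sketched in Python (the tick values are made up; the 11,644,473,600-second offset between the 1601 and 1970 epochs is the standard figure):

# FILETIME counts 100-ns intervals since 1601-01-01 UTC (leap seconds ignored).
HUNDRED_NS_PER_SEC = 10_000_000
EPOCH_DIFF_SECONDS = 11_644_473_600   # 1601-01-01 -> 1970-01-01

def filetime_to_unix(ticks: int) -> float:
    return ticks / HUNDRED_NS_PER_SEC - EPOCH_DIFF_SECONDS

def unix_to_filetime(unix_seconds: float) -> int:
    return int((unix_seconds + EPOCH_DIFF_SECONDS) * HUNDRED_NS_PER_SEC)

# Adding exactly 86400 seconds' worth of ticks always lands on the same
# wall-clock time tomorrow, because neither scale counts leap seconds.
t = unix_to_filetime(0)                                    # 1970-01-01 00:00:00
print(filetime_to_unix(t + 86_400 * HUNDRED_NS_PER_SEC))   # 86400.0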
A very crude summary:
UTC = (Atomic Time) - (Leap Seconds) ~~ (Mean Solar Time)
The MS documentation says, specifically, "UTC", and so should include the leap seconds. As always with MS, your mileage may vary.
