GPS time, time conversion

I have time in UTC seconds format. Could anyone assist on how to convert such numbers to GPS
time as a normal timestamp (dd-mm-yyyy hh:mm:ss)? I need C code or, perhaps, an algorithm.

Update (June 2017): currently 18 leap seconds.
GPS time is simply UTC time, but without leap seconds. As of this writing, there have been 15 leap seconds since the GPS epoch (January 6, 1980 @ 00:00:00), so if it's 2012/02/13 @ 12:00:00 (UTC), then it's 2012/02/13 @ 12:00:15 in GPS time. If you want to do correct conversions for other times, you'll have to take into account when each leap second went into effect.
Here's how you can compute the current offset, from a couple different "authoritative" sources:
http://www.ietf.org/timezones/data/leap-seconds.list -- Count the number of lines starting from the "2571782400 20 # 1 Jul 1981" line. Or just subtract 19 from the last number in the list (e.g., 37 - 19 = 18 as of May 2017).
https://www.nist.gov/pml/time-and-frequency-division/atomic-standards/leap-second-and-ut1-utc-information -- Count the number of leap seconds inserted (from the "Leap Seconds Inserted into the UTC Time Scale" section), starting with (and including) the 1981-06-30 entry.
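Since the question asks for C, here is a minimal sketch of that conversion (UTC seconds in, GPS time formatted as dd-mm-yyyy hh:mm:ss out). The hard-coded table only covers insertions through January 2017, and utc_to_gps is just an illustrative helper name; like any such table, it must be extended whenever a new leap second is announced.

#include <stdio.h>
#include <time.h>

/* Unix timestamps (UTC) at which each leap second after the GPS
   epoch (1980-01-06) took effect. Extend this table whenever a new
   leap second is announced. */
static const time_t leap_utc[] = {
    362793600,  /* 1981-07-01 */
    394329600,  /* 1982-07-01 */
    425865600,  /* 1983-07-01 */
    489024000,  /* 1985-07-01 */
    567993600,  /* 1988-01-01 */
    631152000,  /* 1990-01-01 */
    662688000,  /* 1991-01-01 */
    709948800,  /* 1992-07-01 */
    741484800,  /* 1993-07-01 */
    773020800,  /* 1994-07-01 */
    820454400,  /* 1996-01-01 */
    867715200,  /* 1997-07-01 */
    915148800,  /* 1999-01-01 */
    1136073600, /* 2006-01-01 */
    1230768000, /* 2009-01-01 */
    1341100800, /* 2012-07-01 */
    1435708800, /* 2015-07-01 */
    1483228800, /* 2017-01-01 */
};

/* GPS time = UTC + number of leap seconds inserted so far. */
static time_t utc_to_gps(time_t utc)
{
    int leaps = 0;
    for (size_t i = 0; i < sizeof leap_utc / sizeof leap_utc[0]; i++)
        if (utc >= leap_utc[i])
            leaps++;
    return utc + leaps;
}

int main(void)
{
    time_t utc = 1329134400;  /* 2012-02-13 12:00:00 UTC, as in the example */
    time_t gps = utc_to_gps(utc);
    char buf[32];
    strftime(buf, sizeof buf, "%d-%m-%Y %H:%M:%S", gmtime(&gps));
    printf("%s\n", buf);      /* 13-02-2012 12:00:15 */
    return 0;
}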

There is a JavaScript library that can convert to and from Unix time. The library is available at
http://www.lsc-group.phys.uwm.edu/~kline/gpstime/
Whatever algorithm you use, you must update it when new leap seconds are announced.

For an algorithm, check this site's source code: UTC Converter.
For built-in functions in C++, check here, especially ctime().

What is the timestamp unit of an exported TwinCAT Scope measurement?

I took some measurements with TwinCAT Scope and exported the results as a CSV. The measurement series started on 17 August 2022 at 10:32:25.290, which has timestamp 133051987452906875. This is not UNIX time, because that time would correspond to 1660725145; even adding milliseconds would only append a few digits.
So what is the unit of the TwinCAT timestamp?
The same time format is also used in other places, for example in ADS. From the C++ ADS library I found that the unit is
the number of 100-nanosecond intervals since January 1, 1601 (UTC)
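A quick sanity check of that in C, assuming only the well-known 11644473600-second gap between the 1601 epoch and the Unix epoch (the same constant appears in the Ruby answer further down):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* TwinCAT/ADS (and Windows FILETIME) value from the question:
       100-nanosecond ticks since 1601-01-01 00:00:00 UTC. */
    uint64_t ticks = 133051987452906875ULL;
    const uint64_t EPOCH_DIFF = 11644473600ULL;  /* seconds, 1601 -> 1970 */

    uint64_t unix_seconds = ticks / 10000000ULL - EPOCH_DIFF;
    uint64_t frac_100ns   = ticks % 10000000ULL;

    /* Prints 1660725145.2906875 -- the Unix time the question expected,
       with the fractional part matching the .290 milliseconds. */
    printf("%llu.%07llu\n",
           (unsigned long long)unix_seconds,
           (unsigned long long)frac_100ns);
    return 0;
}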

Is there a one-to-one and onto relation between ISO 8601 UTC and Unix timestamps?

Time Formats
A point in time is often represented as Unix time, or as a human-readable ISO 8601 date string in UTC.
For example:
Unix Time
Seconds since Epoch, or Unix timestamp, in seconds or milliseconds:
1529325705
1529325705000
ISO 8601 Date
2018-06-18T15:41:45+00:00
My question
Is there a one-to-one and onto relationship between the two? In other words, is there a point in time with a single representation in one format, and more than one, or zero, representations in the other?
Yes, it is possible to find such a date. From the wiki article on Unix time:
Every day is treated as if it contains exactly 86400 seconds,[2] so leap seconds are not applied to seconds since the Epoch.
That means that the leap seconds themselves cannot be represented in Unix time.
For example, the latest leap second occurred at the end of 2016, so 2016-12-31T23:59:60+00:00 is a valid ISO 8601 time stamp. However, the Unix time stamp for the second before, at 23:59:59, is represented as 1483228799 and the second after, 00:00:00 (on January 1 2017) is 1483228800, so there is no Unix timestamp that represents the leap second.
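You can watch that collapse happen in C. This sketch assumes timegm(), a common GNU/BSD extension that, like mktime(), normalizes out-of-range fields, folding the leap second onto the second after it:

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* 2016-12-31T23:59:60+00:00 -- the leap second itself */
    struct tm leap = {0};
    leap.tm_year = 2016 - 1900;
    leap.tm_mon  = 11;  /* December (months are 0-based) */
    leap.tm_mday = 31;
    leap.tm_hour = 23;
    leap.tm_min  = 59;
    leap.tm_sec  = 60;  /* normalized away: becomes 00:00:00 the next day */

    /* Prints 1483228800, the same timestamp as 2017-01-01T00:00:00. */
    printf("%lld\n", (long long)timegm(&leap));
    return 0;
}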
In practice, this is probably not a problem for you; there have only been 27 leap seconds since they were introduced in 1972.
It might be worthwhile to mention that most software implementations of ISO 8601 do not take leap seconds into account either, but will do something else if asked to parse "2016-12-31T23:59:60+00:00". The System.DateTime class in .NET throws an exception, while it's also conceivable that a library would return 2017-01-01 00:00:00.
No. There is a nice correspondence between the two, but the relationship is one-to-many, and strictly speaking there may not even exist a precise Unix millisecond for a given ISO date-time string. Some issues are:
There are some freedoms in the ISO 8601 format, so the same Unix millisecond may be written in several ways even when we require that the time be in UTC (the offset is zero).
Seconds and fractions of seconds are optional, and there may be a varying number of decimals on the seconds. So a millisecond value of 1 529 381 160 000 could, for example, be written as
2018-06-19T04:06:00.000000000Z
2018-06-19T04:06:00.00Z
2018-06-19T04:06:00Z
2018-06-19T04:06Z
The offset of 0 would normally be written as Z, but may also be written as you do in the question, +00:00. I think the forms +00 and +0000 are OK too (forms with a minus are not).
Since there may be more than three decimals on the seconds in ISO 8601, no exact Unix millisecond may match. So you will have to accept truncation (or rounding) to convert to Unix time. Of course the error will be even greater if you convert to Unix seconds rather than milliseconds.
As Thomas Lycken noted, leap seconds can be represented in ISO 8601, but not in Unix time.
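To make the many-to-one direction concrete, here is a hedged C sketch; iso_utc_to_unix_ms is a made-up helper, timegm() is again assumed, and only the Z-suffixed UTC forms listed above are handled, with anything beyond three decimals truncated:

#include <stdio.h>
#include <string.h>
#include <time.h>

/* Parse YYYY-MM-DDTHH:MM[:SS[.fraction]]Z (UTC only) to Unix
   milliseconds, truncating digits beyond the third decimal. */
static long long iso_utc_to_unix_ms(const char *s)
{
    int y, mo, d, h, mi, sec = 0;
    if (sscanf(s, "%d-%d-%dT%d:%d", &y, &mo, &d, &h, &mi) != 5)
        return -1;

    const char *p = strchr(strchr(s, ':') + 1, ':');  /* optional :SS */
    if (p)
        sscanf(p + 1, "%d", &sec);

    long ms = 0;
    const char *dot = strchr(s, '.');                 /* optional fraction */
    if (dot) {
        long scale = 100;
        for (const char *q = dot + 1; *q >= '0' && *q <= '9' && scale; q++) {
            ms += (*q - '0') * scale;
            scale /= 10;                              /* stop after 3 digits */
        }
    }

    struct tm tm = {0};
    tm.tm_year = y - 1900; tm.tm_mon = mo - 1; tm.tm_mday = d;
    tm.tm_hour = h; tm.tm_min = mi; tm.tm_sec = sec;
    return (long long)timegm(&tm) * 1000 + ms;
}

int main(void)
{
    /* All four forms from the list above print 1529381160000. */
    printf("%lld\n", iso_utc_to_unix_ms("2018-06-19T04:06:00.000000000Z"));
    printf("%lld\n", iso_utc_to_unix_ms("2018-06-19T04:06:00.00Z"));
    printf("%lld\n", iso_utc_to_unix_ms("2018-06-19T04:06:00Z"));
    printf("%lld\n", iso_utc_to_unix_ms("2018-06-19T04:06Z"));
    return 0;
}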
In other words, is there a point in time with a single representation in one format, and more than one, or zero, representations in the other?
No. The UTC time depends on your geographic location, i.e., your latitude and longitude. However, the UNIX timestamp is a way to track time as a running total of seconds. This count starts at the Unix Epoch on January 1st, 1970, at 00:00:00 UTC.
From Unix TimeStamp:
It should also be pointed out that this point in time technically does not change no matter where you are located on the globe.

Timezone confusion with Ruby's Time class

It's worth noting I am using Ruby 2.1.2, and so the Time class uses a signed 63 bit integer. The integer is used when it can represent a number of nanoseconds since the Epoch; otherwise, bignum or rational is used, according to the documentation.
When I use ::new without arguments, it gives me the current time using my local time zone (not UTC):
> Time.new
=> 2015-06-30 18:29:08 -0400
It's correct. It's 6:29pm on the East coast of the US. Now I want to check the local time in two timezones to the east of EDT:
> t = Time.new(2015,6,30,18,29,8,"+02:00")
=> 2015-06-30 18:29:08 +0200
This is where my confusion comes in. When I specify two timezones to the east of me, I expect there to be two additional hours, because each timezone spans 15 degrees of longitude and represents one hour.
Why did it give me the same time as my local time, instead of two hours later?
What you think is happening isn't what's happening. What you've done is give that time an offset of GMT+2, as opposed to a time two hours out from your current timezone.
If you want to see the time at the offset of two hours ahead of where you are, then you want to create the instance of Time, get your local time, and shift that by the GMT offset.
Time.now.getlocal("-02:00")
If you want to compute this, you have to look at your local time's utc_offset first, then either add or subtract the product of 3600 and however many timezones you want to move. Note that this only moves timezones in whole-hour increments and will break for zones that need finer precision (e.g., Newfoundland, which is offset by half an hour).
t = Time.now
# utc_offset is in seconds, so shift two zones east by adding 2 * 3600
t.getlocal(t.utc_offset + (3600 * 2))

Unix time and leap seconds

Regarding Unix (POSIX) time, Wikipedia says:
Due to its handling of leap seconds, it is neither a linear representation of time nor a true representation of UTC.
But the Unix date command does not actually seem to be aware of them:
$ date -d '@867715199' --utc
Mon Jun 30 23:59:59 UTC 1997
$ date -d '@867715200' --utc
Tue Jul 1 00:00:00 UTC 1997
While there should be a leap second there at Mon Jun 30 23:59:60 UTC 1997.
Does this mean that only the date command ignores leap seconds, while the concept of Unix time doesn't?
The number of seconds per day is fixed with Unix timestamps:
The Unix time number is zero at the Unix epoch, and increases by
exactly 86400 per day since the epoch.
So it cannot represent leap seconds. The OS will slow down the clock to accommodate this. Leap seconds simply do not exist as far as Unix timestamps are concerned.
Unix time is easy to work with, but some timestamps are not real times, and some timestamps are not unique times.
That is, there are some duplicate timestamps representing two different seconds in time, because in Unix time the sixtieth second might have to repeat itself (as there can't be a sixty-first second). Theoretically, there could also be gaps in the future, because the sixtieth second doesn't have to exist, although no leap second has been skipped so far.
Rationale for Unix time: it's defined so that it's easy to work with. Adding support for leap seconds to the standard libraries is very tricky. For example, suppose you want to represent 1 Jan 2050 in a database. No one on earth knows how many seconds away that date is in UTC! The date can't be stored as a UTC timestamp, because the IERS doesn't know how many leap seconds we'll have to add in the next decades (they're as good as random). So how can a programmer do date arithmetic when the length of time that will elapse between any two dates in the future isn't known until a year or two beforehand? Unix time is simple: we know the timestamp of 1 Jan 2050 already (namely, 80 years' worth of fixed-length days). UTC is extremely hard to work with all year round, whereas Unix time is only hard to work with at the instant a leap second occurs.
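A short C illustration of how cheap that makes future dates (timegm(), where available, agrees without consulting any leap-second table):

#include <stdio.h>
#include <time.h>

int main(void)
{
    /* Every POSIX day is exactly 86400 seconds, so the timestamp of
       2050-01-01 is pure calendar arithmetic: 80 years from 1970,
       containing 20 leap days. */
    long long days = 80LL * 365 + 20;
    printf("%lld\n", days * 86400LL);          /* 2524608000 */

    /* Cross-check with timegm() (a common extension): */
    struct tm tm = {0};
    tm.tm_year = 2050 - 1900;  /* years since 1900 */
    tm.tm_mon  = 0;            /* January */
    tm.tm_mday = 1;
    printf("%lld\n", (long long)timegm(&tm));  /* 2524608000 */
    return 0;
}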
For what it's worth, I've never met a programmer who agrees with leap seconds. They should clearly be abolished.
There is a lot of discussion here and elsewhere about leap seconds, but it isn't a complicated issue, because it doesn't have anything to do with UTC, or GMT, or UT1, or TAI, or any other time standard. POSIX (Unix) time is, by definition, that which is specified by the IEEE Std 1003.1 "POSIX" standard, available here.
The standard is unambiguous: POSIX time does not include leap seconds.
Coordinated Universal Time (UTC) includes leap seconds. However, in POSIX time (seconds since the Epoch), leap seconds are ignored (not applied) to provide an easy and compatible method of computing time differences. Broken-down POSIX time is therefore not necessarily UTC, despite its appearance.
The standard goes into significant detail unambiguously stating that POSIX time does not include leap seconds, in particular:
It is a practical impossibility to mandate that a conforming implementation must have a fixed relationship to any particular official clock (consider isolated systems, or systems performing "reruns" by setting the clock to some arbitrary time).
Since leap seconds are decided by committee, it is not just a "bad idea" to include leap seconds in POSIX time, it is impossible given that the standard allows for conforming implementations which do not have network access.
Elsewhere in this question @Pacerier has said that POSIX time does include leap seconds, and that each POSIX time may correspond to more than one UTC time. While this is certainly one possible interpretation of a POSIX timestamp, this is by no means specified by the standard. His arguments largely amount to weasel words that do not apply to the standard, which defines POSIX time.
Now, things get complicated. As specified by the standard, POSIX time may not be equivalent to UTC time:
Broken-down POSIX time is therefore not necessarily UTC, despite its appearance.
However, in practice, it is. In order to understand the issue, you have to understand time standards. GMT and UT1 are based on the astronomical position of the Earth in the universe. TAI is based on the actual amount of time that passes in the universe as measured by physical (atomic) reactions. In TAI, each second is an "SI second," all of exactly the same length. In UTC, each second is an SI second, but leap seconds are added as necessary to readjust the clock back to within 0.9 seconds of GMT/UT1. The GMT and UT1 time standards are defined by empirical measurements of the Earth's position and movement in the universe, and these empirical measurements cannot be predicted by any means (neither scientific theory nor approximation). As such, leap seconds are also unpredictable.
Now, the POSIX standard also specifies that the intention is for all POSIX timestamps to be interoperable (mean the same thing) in different implementations. One solution is for everyone to agree that each POSIX second is one SI second, in which case POSIX time is equivalent to TAI (with the specified epoch), and nobody need contact anyone except for their atomic clock. We didn't do that, however, probably because we wanted POSIX timestamps to be UTC timestamps.
Using an apparent loophole in the POSIX standard, implementations intentionally slow down or speed up seconds -- so that POSIX time no longer uses SI seconds -- in order to remain in sync with UTC time. Reading the standard, it is clear this was not what was intended, because this cannot be done with isolated systems, which therefore cannot interoperate with other machines (their timestamps, without leap seconds, mean something different to other machines with leap seconds). Read:
[...] it is important that the interpretation of time names and seconds since the Epoch values be consistent across conforming systems; that is, it is important that all conforming systems interpret "536457599 seconds since the Epoch" as 59 seconds, 59 minutes, 23 hours 31 December 1986, regardless of the accuracy of the system's idea of the current time. The expression is given to ensure a consistent interpretation, not to attempt to specify the calendar. [...] This unspecified second is nominally equal to an International System (SI) second in duration.
The "loophole" allowing this behavior:
Note that as a practical consequence of this, the length of a second as measured by some external standard is not specified.
So, implementations abuse this freedom by intentionally changing it to something which cannot, by definition, be interoperable among isolated or nonparticipating systems. Alternatively, the implementation may simply repeat POSIX times as if no time had passed. See this Unix StackExchange answer for details on all modern implementations.
Phew, that was confusing alright... A real brain teaser!
Since both of the other answers contain lots of misleading information, I'll throw this in.
Thomas is right that the number of Unix Epoch timestamp seconds per day is fixed. What this means is that on days where there is a leap second, the second right before midnight (the 61st second of the UTC minute before midnight) is given the same timestamp as the previous second.
That timestamp is "replayed", if you will. So the same Unix timestamp will be used for two real-world seconds. This also means that if you're working with fractional Unix timestamps, a whole second will repeat.
X86399.0, X86399.5, X86399.0, X86399.5, then X86400.0 (X standing for the leading digits of the timestamp).
So Unix time can't unambiguously represent leap seconds - the leap second's timestamp is also the timestamp of the previous real-world second.

Ruby- get date from microseconds

Alright, I was looking at this in Python, but I just like Ruby better. What I'm trying to do is get a date and time from this number - 12988822998637849 - which is the number of microseconds since January 1, 1601 UTC. This is how Chrome stores its timestamps, and I've seen a number of methods to do this in Python, but I am just more comfortable with Ruby and I have no idea how to even start going about doing this. (My Google-Fu didn't help me this time.)
Note this example number is from a few days ago. I'll take any help I can get. Thank you!
Look at Time.at.
A Windows file time is "a 64-bit value that represents the number of 100-nanosecond intervals that have elapsed since 12:00 midnight, January 1, 1601 A.D. (C.E.) Coordinated Universal Time (UTC)." Ref.
In contrast, Ruby stores times like Unix: "Time is stored internally as the number of seconds and microseconds since the epoch, January 1, 1970 00:00 UTC" Ref.
# This returns a Time
Time.at(12988822998637849/1000000-11644473600) # Epoch Diff is 11644473600
# => 2012-08-07 11:23:18 -0300
# This returns a String
Time.at(12988822998637849/1000000-11644473600).strftime("%Y-%m-%d %H:%M.%S")
# => "2012-08-07 11:23.18"
All you need to do is to create a Ruby date using the Chrome time origin, then increment by the requisite number of microseconds:
Time.gm(1601,1,1) + 12988822998637849 / 1000000
# => 2012-08-07 14:23:18 UTC
