I know that the primary timestamp on Apple systems is the CF Absolute Time value (also called Mac Absolute Time), which is a 32-bit integer counting the number of seconds since 01/01/2001 00:00:00 UTC. For example, 219216022 decodes to Thu, 13 December 2007 05:20:22 UTC.
Are there any other timestamps used on Mac/Unix systems (other than the default Unix timestamp, which is a 32-bit integer counting the number of seconds since 01/01/1970 00:00:00 UTC)?
The NSDate object has timeIntervalSince1970, but that's based on Mach time. This means that [[NSDate date] timeIntervalSince1970] - NSTimeIntervalSince1970 is equal to [NSDate timeIntervalSinceReferenceDate].
Yes. As pointed out by @MarcusJ, the AIFF Specification has a 32-bit binary timestamp that on the Amiga is the number of seconds since January 1, 1978, and on the Mac is the number of seconds since January 1, 1904.
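Since all of these epochs sit at fixed offsets from the Unix epoch, converting between them is plain arithmetic. Here is a small C sketch using the example value from the question; the two constants are the well-known 1970-to-2001 and 1904-to-1970 gaps in seconds:

    #include <stdio.h>
    #include <time.h>

    /* Well-known fixed offsets between epochs, in seconds:              */
    /*   1970-01-01 -> 2001-01-01 (Unix epoch to CF/NSDate reference)    */
    #define CF_EPOCH_OFFSET   978307200LL
    /*   1904-01-01 -> 1970-01-01 (classic Mac/HFS epoch to Unix epoch)  */
    #define HFS_EPOCH_OFFSET 2082844800LL

    int main(void) {
        long long cf_seconds = 219216022;   /* example value from the question */

        /* CF Absolute Time -> Unix time: add the 1970->2001 gap. */
        time_t unix_seconds = (time_t)(cf_seconds + CF_EPOCH_OFFSET);

        /* Unix time -> classic Mac (seconds since 1904): add the 1904->1970 gap. */
        long long mac_1904_seconds = (long long)unix_seconds + HFS_EPOCH_OFFSET;

        char buf[64];
        strftime(buf, sizeof buf, "%a, %d %B %Y %H:%M:%S UTC", gmtime(&unix_seconds));
        printf("CF %lld -> Unix %lld -> %s\n", cf_seconds, (long long)unix_seconds, buf);
        /* Prints: CF 219216022 -> Unix 1197523222 -> Thu, 13 December 2007 05:20:22 UTC */
        printf("Same instant as seconds since 1904: %lld\n", mac_1904_seconds);
        return 0;
    }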
Time Formats
A point in time is often represented as Unix time, or as a human-readable ISO 8601 date string in UTC.
For example:
Unix Time
Time since the epoch (a Unix timestamp), in seconds or milliseconds:
1529325705
1529325705000
ISO 8601 Date
2018-06-18T12:41:45+00:00
My question
Is there a one-to-one and onto relationship between the two? In other words, is there a point in time with a single representation in one format, and more than one, or zero, representations in the other?
Yes, it is possible to find such a date. From the Wikipedia article on Unix time:
Every day is treated as if it contains exactly 86400 seconds,[2] so leap seconds are not applied to seconds since the Epoch.
That means that the leap seconds themselves cannot be represented in Unix time.
For example, the most recent leap second occurred at the end of 2016, so 2016-12-31T23:59:60+00:00 is a valid ISO 8601 timestamp. However, the Unix timestamp for the second before, at 23:59:59, is 1483228799, and the second after, 00:00:00 on January 1, 2017, is 1483228800, so there is no Unix timestamp that represents the leap second.
In practice, this is probably not a problem for you; there have only been 27 leap seconds since they were introduced in 1972.
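To see the gap from the Unix side, here is a tiny C sketch (assuming a platform whose gmtime follows the usual 86400-seconds-per-day rule). It walks over the two consecutive timestamps around the 2016 leap second; there is simply no value left for 23:59:60:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* The two Unix timestamps straddling the 2016-12-31 leap second. */
        time_t before = 1483228799;  /* 2016-12-31T23:59:59Z */
        time_t after  = 1483228800;  /* 2017-01-01T00:00:00Z */

        char buf[32];
        for (time_t t = before; t <= after; t++) {
            strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", gmtime(&t));
            printf("%ld -> %s\n", (long)t, buf);
        }
        /* Prints two consecutive timestamps with no slot for the leap second:
         *   1483228799 -> 2016-12-31T23:59:59Z
         *   1483228800 -> 2017-01-01T00:00:00Z
         */
        return 0;
    }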
It might be worthwhile to mention that most software implementations of ISO 8601 do not take leap seconds into account either, but will do something else if asked to parse "2016-12-31T23:59:60+00:00". The System.DateTime class in .NET throws an exception, while it's also conceivable that a library would return 2017-01-01 00:00:00.
No. There is a close correspondence between the two, but the relationship is one-to-many, and strictly speaking there may not even exist a precise Unix millisecond for a given ISO date-time string. Some issues are:
There are some freedoms in the ISO 8601 format, so the same Unix millisecond may be written in several ways even when we require that the time be in UTC (the offset is zero).
Seconds and fractions of seconds are optional, and there may be a varying number of decimals on the seconds. So a millisecond value of 1 529 381 160 000 could, for example, be written as any of the following (a conversion sketch follows this list):
2018-06-19T04:06:00.000000000Z
2018-06-19T04:06:00.00Z
2018-06-19T04:06:00Z
2018-06-19T04:06Z
The offset of 0 would normally be written as Z, but may also be written as you do in the question, +00:00. I think the forms +00 and +0000 are OK too (forms with a minus are not).
Since there may be more than three decimals on the seconds in ISO 8601, no exact Unix millisecond may match. So you will have to accept truncation (or rounding) to convert to Unix time. Of course the error will be still greater if you convert to Unix seconds rather than milliseconds.
As Thomas Lycken noted, leap seconds can be represented in ISO 8601, but not in Unix time.
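To illustrate the many-to-one direction, here is a rough C sketch that maps the four spellings listed above to the same Unix millisecond. It is only a sketch: it assumes a Unix-like platform (strptime is POSIX, timegm is a common BSD/glibc extension), it handles only UTC strings of those particular shapes rather than general ISO 8601, and it truncates extra decimals as discussed:

    #define _GNU_SOURCE   /* for strptime and timegm on glibc */
    #include <stdio.h>
    #include <time.h>

    /* Parse a UTC string of the form YYYY-MM-DDTHH:MM[:SS[.fff...]]Z and
     * return milliseconds since the Unix epoch, truncating the fraction. */
    static long long iso_to_unix_ms(const char *iso) {
        struct tm tm = {0};
        const char *rest = strptime(iso, "%Y-%m-%dT%H:%M", &tm);
        long ms = 0;
        if (rest && *rest == ':') {                  /* optional seconds */
            rest = strptime(rest + 1, "%S", &tm);
            if (rest && *rest == '.') {              /* optional fraction */
                ++rest;
                for (int i = 0; i < 3; i++)          /* keep at most 3 digits */
                    ms = ms * 10 + (*rest >= '0' && *rest <= '9' ? *rest++ - '0' : 0);
            }
        }
        return (long long)timegm(&tm) * 1000 + ms;
    }

    int main(void) {
        const char *forms[] = {
            "2018-06-19T04:06:00.000000000Z",
            "2018-06-19T04:06:00.00Z",
            "2018-06-19T04:06:00Z",
            "2018-06-19T04:06Z",
        };
        for (int i = 0; i < 4; i++)
            printf("%-32s -> %lld\n", forms[i], iso_to_unix_ms(forms[i]));
        /* All four lines print 1529381160000. */
        return 0;
    }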
In other words, is there a point in time with a single representation in one format, and more than one, or zero, representations in the other?
No. A human-readable date and time depends on your geographic location, i.e. your time zone. However, the Unix timestamp is a way to track time as a running total of seconds, and that count does not depend on where you are. It starts at the Unix epoch, January 1st, 1970, at 00:00:00 UTC.
From Unix Timestamp:
It should also be pointed out that this point in time technically does not change no matter where you are located on the globe.
It occurred to me that I'm not aware of a mechanism to store dates before January 1, 1970 as Unix timestamps. Since that date is the Unix "epoch", this isn't much of a surprise.
But, even though it's not designed for that, I still wish to store dates in the far past in Unix format. I need this for reasons.
So my question is: how would one go about making Unix timestamps contain "invalid" but still working dates? Would storing a negative number of seconds work? Can we even store a negative number of seconds in a Unix timestamp? I mean, isn't it unsigned?
Also, if I'm correct, I could only store dates as far back as December 13, 1901, 20:45:52. Could this be extended any further back in history by any means?
Unix Time is usually a 32-bit count of whole seconds from the first moment of 1970 in UTC, the epoch being 1 January 1970 00:00:00 UTC. That gives a range of about 136 years, with about half on either side of the epoch. Negative numbers are earlier, zero is the epoch, and positive numbers are later. For a signed 32-bit integer, the values range from 1901-12-13 20:45:52 UTC to 2038-01-19 03:14:07 UTC.
This is not written in stone. Well, it is written, but in a bunch of different stones. Older ones say 32-bit, newer ones 64-bit. Some specifications say the meaning is "implementation-defined". Some Unix systems use an unsigned integer to extend only into the future past the epoch, but usual practice has been a signed number. Some use a floating-point number rather than an integer. For details, see the Wikipedia article on Unix Time, and this Question.
So, basically, your Question makes no sense on its own. You have to know the context: your programming language (standard C, other C, Java, etc.), environment (POSIX-compliant or not), particular software library, database store, or application.
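That said, for the common case of a signed 32-bit count of seconds you can see both ends of the range directly. A minimal C sketch, assuming a Unix-like host whose own time_t is wider than 32 bits so the negative extreme is still representable:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    int main(void) {
        /* The extreme values of a signed 32-bit seconds count. */
        time_t lo = (time_t)INT32_MIN;   /* -2147483648 */
        time_t hi = (time_t)INT32_MAX;   /*  2147483647 */

        char buf[32];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&lo));
        printf("INT32_MIN -> %s\n", buf);   /* 1901-12-13 20:45:52 UTC */
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&hi));
        printf("INT32_MAX -> %s\n", buf);   /* 2038-01-19 03:14:07 UTC */
        return 0;
    }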
Avoid Count-From-Epoch
Add to this lack of specificity the fact that a couple dozen other epochs have been used by various software systems, some extremely popular and common. Examples include January 1, 1601 for NTFS file system & COBOL, January 1, 1980 for various FAT file systems, January 1, 2001 for Apple Cocoa, and January 0, 1900 for Excel & Lotus 1-2-3 spreadsheets.
Further add the fact that different granularities of count have been used. Besides whole seconds, some systems use milliseconds, microseconds, or nanoseconds.
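As a concrete illustration of how both the epoch and the granularity can differ, here is a hedged C sketch converting a Windows/NTFS FILETIME (100-nanosecond ticks since 1601-01-01 UTC) to Unix time. The constant is the well-known 1601-to-1970 gap in seconds, and the sample value is simply the Unix epoch expressed as a FILETIME:

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    /* Seconds between 1601-01-01 UTC and 1970-01-01 UTC. */
    #define SECONDS_1601_TO_1970 11644473600LL

    int main(void) {
        uint64_t filetime = 116444736000000000ULL;   /* the Unix epoch as a FILETIME */

        /* Drop from 100-ns ticks to whole seconds, then shift the epoch. */
        time_t unix_seconds = (time_t)(filetime / 10000000ULL - SECONDS_1601_TO_1970);

        char buf[32];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&unix_seconds));
        printf("FILETIME %llu -> Unix %lld -> %s\n",
               (unsigned long long)filetime, (long long)unix_seconds, buf);
        /* Prints: FILETIME 116444736000000000 -> Unix 0 -> 1970-01-01 00:00:00 UTC */
        return 0;
    }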
I recommend against tracking date-time as a count-from-epoch. Instead use specific data types where available in your programming language or database.
ISO 8601
When data types are not available, or when exchanging data, follow the ISO 8601 standard which defines sensible string formats for various kinds of date-time values.
Date
2015-07-29
A date-time with an offset from UTC (Z is zero/Zulu, meaning UTC); note the padding zero on the offset
2015-07-29T14:59:08Z
2001-02-13T12:34:56.123+05:30
Week (with or without day of week)
2015-W31
2015-W31-3
Ordinal date (day-of-year)
2015-210
Interval
"2007-03-01T13:00:00Z/2008-05-11T15:30:00Z"
Duration (format of PnYnMnDTnHnMnS)
P3Y6M4DT12H30M5S = "period of three years, six months, four days, twelve hours, thirty minutes, and five seconds"
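If you are stuck producing some of these strings by hand in C, strftime already covers several of the forms above. A small sketch; the %G, %V, and %u conversions used for the week date require a C99-conforming library:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);
        struct tm utc = *gmtime(&now);
        char buf[64];

        /* Calendar date-time in UTC, e.g. 2015-07-29T14:59:08Z */
        strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", &utc);
        printf("%s\n", buf);

        /* Ordinal date (day-of-year), e.g. 2015-210 */
        strftime(buf, sizeof buf, "%Y-%j", &utc);
        printf("%s\n", buf);

        /* ISO week date, e.g. 2015-W31-3 */
        strftime(buf, sizeof buf, "%G-W%V-%u", &utc);
        printf("%s\n", buf);
        return 0;
    }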
Search StackOverflow.com for many more Questions and Answers on these topics.
Could someone explain to me what time format this is?
1381039499
I do not know how to convert it to a normal time format.
My guess is that it's a Unix timestamp, defined as the number of seconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970.
According to this site, that is the Unix timestamp for:
Sun, 06 Oct 2013 06:04:59 GMT.
Most programming languages have a function to convert to/from Unix timestamps, but the correct way to do this will obviously vary between them.
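In C, for instance, the decoding is just gmtime plus strftime. A minimal sketch that interprets the value as seconds since the epoch in UTC:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t ts = 1381039499;          /* the value from the question */
        char buf[64];

        strftime(buf, sizeof buf, "%a, %d %b %Y %H:%M:%S GMT", gmtime(&ts));
        printf("%s\n", buf);             /* Sun, 06 Oct 2013 06:04:59 GMT */
        return 0;
    }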
I have time in UTC seconds format. Could anyone assist on how to convert such numbers to GPS time in a normal timestamp (dd-mm-yyyy hh:mm:ss)? I need C code or, perhaps, an algorithm.
Update (June 2017): Currently 18 leap seconds.
GPS time is simply UTC time, but without leap seconds. As of this writing, there have been 15 leap seconds since the GPS epoch (January 6, 1980 at 00:00:00), so if it's 2012/02/13 at 12:00:00 (UTC), then it's 2012/02/13 at 12:00:15 in GPS time. If you want to do correct conversions for other times, you'll have to take into account when each leap second went into effect.
Here's how you can compute the current offset, from a couple different "authoritative" sources:
http://www.ietf.org/timezones/data/leap-seconds.list -- Count the number of lines starting from the 2571782400 20 # 1 Jul 1981 line. Or just subtract 19 from the last number in the list (e.g., 37-19 = 18) as of May 2017.
https://www.nist.gov/pml/time-and-frequency-division/atomic-standards/leap-second-and-ut1-utc-information -- Count the number of leap seconds inserted (from the Leap Seconds Inserted into the UTC Time Scale section), starting with (and including) the 1981-06-30 entry.
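Since the question asks for C, here is a minimal sketch of the conversion. It assumes the current UTC-to-GPS offset of 18 seconds, which is valid as of 2017 and only for recent instants; for older timestamps, use the offset that was in effect at that time:

    #include <stdio.h>
    #include <time.h>

    /* GPS time runs ahead of UTC by the leap seconds inserted since the GPS
     * epoch (1980-01-06). Update this whenever a new leap second is announced
     * (see the sources listed above). */
    #define GPS_UTC_OFFSET 18

    int main(void) {
        time_t utc_seconds = time(NULL);   /* replace with your UTC seconds value */

        /* Shift the instant forward by the accumulated leap seconds ... */
        time_t gps_seconds = utc_seconds + GPS_UTC_OFFSET;

        /* ... and format it as dd-mm-yyyy hh:mm:ss. */
        char buf[32];
        strftime(buf, sizeof buf, "%d-%m-%Y %H:%M:%S", gmtime(&gps_seconds));
        printf("GPS time: %s\n", buf);
        return 0;
    }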
There is a Javascript library that can convert to and from unixtime. The library is available at
http://www.lsc-group.phys.uwm.edu/~kline/gpstime/
Whatever algorithm you use, you must update it when new leap seconds are announced.
For an algorithm, check this site's source code: UTC Converter
For built-in functions in C++, check here, especially ctime().
The Date constructor in JavaScript/ECMAScript/JScript allows passing the number of milliseconds since midnight, 1/1/1970. Nowhere can I find documentation on whether this is midnight in the client machine's time zone, or midnight GMT. Which is it? Can it be relied on between different browsers and versions? Is this officially documented anywhere?
From the ECMAScript specification:
Time is measured in ECMAScript in milliseconds since 01 January, 1970 UTC. In time values leap seconds are ignored. It is assumed that there are exactly 86,400,000 milliseconds per day. ECMAScript Number values can represent all integers from –9,007,199,254,740,991 to 9,007,199,254,740,991; this range suffices to measure times to millisecond precision for any instant that is within approximately 285,616 years, either forward or backward, from 01 January, 1970 UTC.

The actual range of times supported by ECMAScript Date objects is slightly smaller: exactly –100,000,000 days to 100,000,000 days measured relative to midnight at the beginning of 01 January, 1970 UTC. This gives a range of 8,640,000,000,000,000 milliseconds to either side of 01 January, 1970 UTC.

The exact moment of midnight at the beginning of 01 January, 1970 UTC is represented by the value +0.
So to answer your question, it's Coordinated Universal Time.
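The same convention is easy to check outside the browser. As a small illustration in C (not part of any ECMAScript machinery): the time value 0 corresponds to Unix time 0, which always renders as midnight, 01 January 1970 in UTC, while only the local rendering shifts with the machine's time zone:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t epoch = 0;   /* same instant as the ECMAScript time value +0 */
        char buf[64];

        /* Rendered in UTC it is always midnight, 01 January 1970 ... */
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", gmtime(&epoch));
        printf("gmtime(0):    %s\n", buf);

        /* ... while the local rendering depends on the machine's time zone,
         * which is exactly why the count itself is defined against UTC. */
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S %Z", localtime(&epoch));
        printf("localtime(0): %s\n", buf);
        return 0;
    }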