The Date constructor in JavaScript/ECMAScript/JScript allows passing the number of milliseconds since midnight, 1/1/1970. Nowhere can I find documentation on whether this is midnight in the client machine's time zone or midnight GMT. Which is it? Can it be relied on across different browsers and versions? Is this officially documented anywhere?
From the ECMAScript specification:
Time is measured in ECMAScript in milliseconds since 01 January, 1970 UTC. In time values leap seconds are ignored. It is assumed that there are exactly 86,400,000 milliseconds per day. ECMAScript Number values can represent all integers from –9,007,199,254,740,991 to 9,007,199,254,740,991; this range suffices to measure times to millisecond precision for any instant that is within approximately 285,616 years, either forward or backward, from 01 January, 1970 UTC.
The actual range of times supported by ECMAScript Date objects is slightly smaller: exactly –100,000,000 days to 100,000,000 days measured relative to midnight at the beginning of 01 January, 1970 UTC. This gives a range of 8,640,000,000,000,000 milliseconds to either side of 01 January, 1970 UTC.
The exact moment of midnight at the beginning of 01 January, 1970 UTC is represented by the value +0.
So to answer your question, it's Coordinated Universal Time.
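As a quick sanity check, the time value 0 denotes the same instant on every machine, regardless of the local time zone; a minimal JavaScript sketch (the range check mirrors the limits quoted from the specification above):
// The time value 0 is midnight at the beginning of 01 January, 1970, UTC,
// no matter which time zone the script runs in.
const epoch = new Date(0);
console.log(epoch.toISOString()); // "1970-01-01T00:00:00.000Z"
// The spec's range limit of 8,640,000,000,000,000 ms on either side of the epoch:
console.log(new Date(8640000000000000).toISOString()); // "+275760-09-13T00:00:00.000Z"
console.log(new Date(8640000000000001).getTime());     // NaN (out of range, i.e. an Invalid Date)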
Related
Time Formats
A point in time is often represented either as Unix time or as a human-readable ISO 8601 date/time string in UTC.
For example:
Unix Time
Seconds since the epoch, or Unix timestamp, in seconds or milliseconds:
1529325705
1529325705000
ISO 8601 Date
2018-06-18T15:41:45+00:00
My question
Is the relationship between the two one-to-one and onto? In other words, is there a point in time that has a single representation in one format but more than one, or zero, representations in the other?
Yes, it is possible to find such a date. From the wiki article on Unix time:
Every day is treated as if it contains exactly 86400 seconds,[2] so leap seconds are not applied to seconds since the Epoch.
That means that the leap seconds themselves cannot be represented in Unix time.
For example, the latest leap second occurred at the end of 2016, so 2016-12-31T23:59:60+00:00 is a valid ISO 8601 time stamp. However, the Unix time stamp for the second before, at 23:59:59, is represented as 1483228799 and the second after, 00:00:00 (on January 1 2017) is 1483228800, so there is no Unix timestamp that represents the leap second.
In practice, this is probably not a problem for you; there have only been 27 leap seconds since they were introduced in 1972.
It might be worthwhile to mention that most software implementations of ISO 8601 do not take leap seconds into account either, and will do something else if asked to parse "2016-12-31T23:59:60+00:00". The System.DateTime class in .NET throws an exception, while it is also conceivable that a library would return 2017-01-01 00:00:00.
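You can see the gap with a short JavaScript check (Date.UTC works in milliseconds, so divide by 1000 to get Unix seconds):
// The seconds immediately before and after the 2016 leap second map to adjacent
// Unix timestamps, so 23:59:60 itself has no Unix timestamp of its own.
console.log(Date.UTC(2016, 11, 31, 23, 59, 59) / 1000); // 1483228799
console.log(Date.UTC(2017, 0, 1, 0, 0, 0) / 1000);      // 1483228800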
No. There is a nice correspondence between the two, but the relationship is one-to-many, and strictly speaking there may not even exist a precise Unix millisecond for a given ISO date-time string. Some issues are:
There are some freedoms in the ISO 8601 format, so the same Unix millisecond may be written in several ways even when we require that the time be in UTC (the offset is zero).
Seconds and fractions of seconds are optional, and there may be a varying number of decimals on the seconds. So a millisecond value of 1 529 381 160 000 could, for example, be written as (see the sketch after this list):
2018-06-19T04:06:00.000000000Z
2018-06-19T04:06:00.00Z
2018-06-19T04:06:00Z
2018-06-19T04:06Z
The offset of 0 would normally be written as Z, but may also be written as you do in the question, +00:00. I think the forms +00 and +0000 are OK too (forms with a minus are not).
Since there may be more than three decimals on the seconds in ISO 8601, no exact Unix millisecond may match. So you will have to accept truncation (or rounding) to convert to Unix time. Of course the error will be still greater if you convert to Unix seconds rather than milliseconds.
As Thomas Lycken noted, leap seconds can be represented in ISO 8601, but not in Unix time.
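A hedged JavaScript sketch of that one-to-many mapping; only variants with at most three fractional digits are used here, since engines differ in how they treat longer fractions:
// Several ISO 8601 spellings of the same instant all parse to one Unix millisecond value.
const forms = [
  "2018-06-19T04:06:00.000Z",
  "2018-06-19T04:06:00Z",
  "2018-06-19T04:06Z",
  "2018-06-19T04:06:00+00:00",
];
console.log(forms.map((s) => Date.parse(s)));
// [1529381160000, 1529381160000, 1529381160000, 1529381160000]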
In other words, is there a point in time with a single representation in one format, and more than one, or zero, representations in the other?
No. Local time depends on your geographic location, i.e. your time zone, but UTC and the Unix timestamp do not. The Unix timestamp is a way to track time as a running total of seconds. This count starts at the Unix epoch, midnight on January 1st, 1970, UTC.
From Unix TimeStamp:
It should also be pointed out that this point in time technically does not change no matter where you are located on the globe.
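A small illustration, assuming an engine with Intl (ECMA-402) support: the same time value renders as different wall-clock times in different zones, but the underlying millisecond count is identical everywhere.
// One instant, one time value...
const t = new Date();
console.log(t.getTime()); // the same number on every machine, in every time zone
// ...but different local renderings depending on the chosen time zone.
console.log(t.toLocaleString("en-GB", { timeZone: "Europe/Amsterdam" }));
console.log(t.toLocaleString("en-GB", { timeZone: "America/New_York" }));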
Could someone explain to me what this time format is?
1381039499
I do not know how to convert it to a normal time format.
My guess is that it's Unix time, defined as the number of seconds that have elapsed since 00:00:00 Coordinated Universal Time (UTC), Thursday, 1 January 1970 (the Unix epoch).
According to this site, that is the Unix timestamp for:
Sun, 06 Oct 2013 06:04:59 GMT.
Most programming languages have a function to convert to/from Unix timestamps, but the correct way to do this will obviously vary between them.
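In JavaScript, for instance, the conversion is just a factor of 1000, since Date works in milliseconds rather than seconds:
// Unix timestamps count seconds; JavaScript Date counts milliseconds.
const unixSeconds = 1381039499;
const d = new Date(unixSeconds * 1000);
console.log(d.toUTCString()); // "Sun, 06 Oct 2013 06:04:59 GMT"
// Going the other way: milliseconds back to whole Unix seconds.
console.log(Math.floor(d.getTime() / 1000)); // 1381039499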
Can anybody explain to me what it means when I have the following time:
2012-12-28T18:12:33+01:00
I'm new to the whole datetime stuff and I can't find a good explanation on the web.
Currently I'm in Holland. So does it mean:
2012-12-28T18:12:33+01:00 = 2012-12-28 19:12:33
or
2012-12-28T18:12:33+01:00 = 2012-12-28 17:12:33
or
2012-12-28T18:12:33+01:00 = 2012-12-28 18:12:33
The 2012-12-28T18:12:33+01:00 date string indicates that it is 2012-12-28 at 18:12:33 at an offset of +1 hour from UTC, which corresponds to the CET time zone.
This appears to be the ISO 8601 format. The T indicates start of the time element.
Times are expressed in local time, together with a time zone offset in hours and minutes. A time zone offset of "+hh:mm" indicates that the date/time uses a local time zone which is "hh" hours and "mm" minutes ahead of UTC. A time zone offset of "-hh:mm" indicates that the date/time uses a local time zone which is "hh" hours and "mm" minutes behind UTC.
The value you provided, 2012-12-28T18:12:33+01:00, is a date-time-plus-offset value in ISO 8601 format, meaning "December 28th, 2012 at 18:12:33, one hour ahead of UTC".
The +01:00 portion represents an offset, not a time zone. See TimeZone != Offset.
The time zone for Holland is either Europe/Amsterdam in the IANA/Olson database, or the entry in the Windows database that has the Id of W. Europe Standard Time and the English display name of "(UTC+01:00) Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna".
This zone is in the +01:00 offset during part of the year, and uses the +02:00 offset during European Summer Time.
Just because you have +01:00 in December, does not mean that is the correct offset to use year-round. It also does not tell you that the timestamp is in Holland. There are several other time zones that use the same offset, and not always at the same times of year.
To convert from one time zone to another, you need to first apply the offset you have. Use the inverse of the sign you have. Since you have +01:00, you would subtract an hour to get the UTC time of 17:12:33. Then you need to know what the correct offset is for the target time zone at that time of year. For that, you need a time zone database.
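A minimal JavaScript sketch of that first step, applying the offset you already have to obtain the UTC instant; rendering the result in some target time zone still requires a time zone database, which in browsers and Node is exposed through Intl:
// The built-in parser applies the +01:00 offset, yielding the UTC instant.
const d = new Date("2012-12-28T18:12:33+01:00");
console.log(d.toISOString()); // "2012-12-28T17:12:33.000Z" (18:12:33 minus one hour)
// Rendering that instant for a specific IANA zone uses the Intl time zone database.
console.log(d.toLocaleString("en-GB", { timeZone: "Europe/Amsterdam" }));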
International Standard ISO 8601 specifies numeric representations of date and time.
YYYY-MM-DDThh:mm:ss.sTZD (eg 1997-07-16T19:20:30.45+01:00)
where:
YYYY = four-digit year
MM = two-digit month (01=January, etc.)
DD = two-digit day of month (01 through 31)
hh = two digits of hour (00 through 23) (am/pm NOT allowed)
mm = two digits of minute (00 through 59)
ss = two digits of second (00 through 59)
s = one or more digits representing a decimal fraction of a second
TZD = time zone designator (Z or +hh:mm or -hh:mm)
Times are expressed in UTC (Coordinated Universal Time), with a special UTC designator ("Z").
Times are expressed in local time, together with a time zone offset in hours and minutes. A time zone offset of "+hh:mm" indicates that the date/time uses a local time zone which is "hh" hours and "mm" minutes ahead of UTC. A time zone offset of "-hh:mm" indicates that the date/time uses a local time zone which is "hh" hours and "mm" minutes behind UTC.
In your case, 2012-12-28T18:12:33+01:00 = 2012-12-28 18:12:33 is true, meaning the local time in Holland is 18:12:33 and Holland is 1 hour ahead of UTC.
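As an illustration, here is a rough JavaScript regular expression for the single YYYY-MM-DDThh:mm:ss.sTZD profile listed above (a sketch, not a complete ISO 8601 validator; the fractional seconds are optional here):
// Capture groups: year, month, day, hour, minute, second, optional fraction, zone designator.
const iso = /^(\d{4})-(\d{2})-(\d{2})T(\d{2}):(\d{2}):(\d{2})(\.\d+)?(Z|[+-]\d{2}:\d{2})$/;
console.log("1997-07-16T19:20:30.45+01:00".match(iso));
// ["1997-07-16T19:20:30.45+01:00", "1997", "07", "16", "19", "20", "30", ".45", "+01:00"]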
I have time in UTC seconds format. Could anyone assist with how to convert such numbers to GPS time as a normal timestamp (dd-mm-yyyy hh:mm:ss)? I need C code or, perhaps, an algorithm.
Update (June 2017): Currently 18 leap seconds.
GPS time is simply UTC time, but without leap seconds. As of this writing, there have been 15 leap seconds since the GPS epoch (January 6, 1980 at 00:00:00), so if it's 2012/02/13 at 12:00:00 (UTC), then it's 2012/02/13 at 12:00:15 in GPS time. If you want to do correct conversions for other times, you'll have to take into account when each leap second went into effect.
Here's how you can compute the current offset, from a couple different "authoritative" sources:
http://www.ietf.org/timezones/data/leap-seconds.list -- Count the number of lines starting from the 2571782400 20 # 1 Jul 1981 line. Or just subtract 19 from the last number in the list (e.g., 37-19 = 18) as of May 2017.
https://www.nist.gov/pml/time-and-frequency-division/atomic-standards/leap-second-and-ut1-utc-information -- Count the number of leap seconds inserted (from the Leap Seconds Inserted into the UTC Time Scale section), starting with (and including) the 1981-06-30 entry.
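Not C, but here is a short JavaScript sketch of the algorithm, under the assumption of a hand-maintained leap-second count taken from the lists above (the value 18 reflects the June 2017 update; for historical timestamps you would look up the count that was in effect at that time):
// UTC -> GPS: shift by the leap seconds accumulated since the GPS epoch (6 Jan 1980),
// then format as dd-mm-yyyy hh:mm:ss. LEAP_SECONDS must be updated by hand whenever
// a new leap second is announced.
const LEAP_SECONDS = 18; // as of the June 2017 update; only valid for recent dates

function utcSecondsToGpsTimestamp(utcSeconds) {
  const d = new Date((utcSeconds + LEAP_SECONDS) * 1000);
  const pad = (n) => String(n).padStart(2, "0");
  return `${pad(d.getUTCDate())}-${pad(d.getUTCMonth() + 1)}-${d.getUTCFullYear()} ` +
         `${pad(d.getUTCHours())}:${pad(d.getUTCMinutes())}:${pad(d.getUTCSeconds())}`;
}

console.log(utcSecondsToGpsTimestamp(1483228800)); // "01-01-2017 00:00:18"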
There is a JavaScript library that can convert to and from Unix time. The library is available at
http://www.lsc-group.phys.uwm.edu/~kline/gpstime/
Whatever algorithm you use, you must update it when new leap seconds are announced.
For an algorithm, check this site's source code: UTC Converter.
For built-in functions in C++, check here, especially ctime().
The UK is currently 1 hour ahead of UTC due to Daylight Saving Time. When I check the DaylightBias value from GetTimeZoneInformation it is currently -60. Does that mean that translating UTC to DST means DST = UTC + -1 * DaylightBias, i.e. negate and add?
I thought that in this case, for instance, adding DaylightBias to UTC would be the correct operation, which would require DaylightBias to be 60 rather than -60.
It's
UTC = DST + DaylightBias [for your specific timezone]
so yes, you would subtract the Bias from UTC to get local time.
Here's a quote from the MS glossary:
time zone bias: The positive, zero, or negative offset in minutes from Coordinated Universal Time (UTC). For example, Middle European Time (MET, GMT+01:00) has a time zone bias of "-60" because it is one hour ahead of UTC. Pacific Standard Time (PST, GMT-08:00) has a time zone bias of "+480" because it is eight hours behind UTC.
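A tiny JavaScript sketch of that sign convention, using the bias values quoted above (the Win32 TIME_ZONE_INFORMATION structure itself is only reachable from native code, so the numbers here are simply plugged in by hand):
// Windows convention: UTC = local time + bias (in minutes), so local time = UTC - bias.
// For the UK during summer time the total bias is Bias (0) + DaylightBias (-60) = -60.
function utcToLocal(utcMillis, totalBiasMinutes) {
  return new Date(utcMillis - totalBiasMinutes * 60 * 1000);
}

const noonUtc = Date.UTC(2012, 6, 1, 12, 0, 0); // 1 July 2012, 12:00 UTC
// Printing via toISOString just to show the shifted wall-clock value:
console.log(utcToLocal(noonUtc, -60).toISOString()); // "2012-07-01T13:00:00.000Z" -> 13:00 local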