What is the timestamp unit of an exported TwinCAT Scope measurement?

I took some measurements with TwinCAT Scope and exported the results as a CSV. The measurement series started on 17 August 2022 at 10:32:25.290, which has timestamp 133051987452906875. This is not Unix time: that moment corresponds to Unix timestamp 1660725145, and expressing it in milliseconds would merely append three digits (1660725145290).
So what is the unit of the TwinCAT timestamp?

The same time format is also used in other places, for example in ADS. From the C++ ADS library I found that the unit is:
the number of 100-nanosecond intervals since January 1, 1601 (UTC)
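That is the same convention as the Windows FILETIME structure. A minimal C sketch of the conversion to Unix time, using the fixed offset of 11644473600 seconds between the 1601 and 1970 epochs (the sample value is the one from the question):

    #include <stdio.h>
    #include <stdint.h>
    #include <time.h>

    /* 100-ns ticks between 1601-01-01 and 1970-01-01 (both UTC):
       11644473600 s * 10^7 ticks/s */
    #define EPOCH_DIFF_TICKS 116444736000000000LL
    #define TICKS_PER_SECOND 10000000LL

    int main(void) {
        int64_t twincat_ts = 133051987452906875LL;   /* value from the question */
        int64_t unix_ticks = twincat_ts - EPOCH_DIFF_TICKS;
        time_t  unix_sec   = (time_t)(unix_ticks / TICKS_PER_SECOND);
        int64_t frac_ticks = unix_ticks % TICKS_PER_SECOND;

        char buf[32];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&unix_sec));
        printf("%s.%07lld UTC (Unix %lld)\n", buf,
               (long long)frac_ticks, (long long)unix_sec);
        /* Prints: 2022-08-17 08:32:25.2906875 UTC (Unix 1660725145),
           i.e. 10:32:25.290 in UTC+2, matching the question. */
        return 0;
    }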

Related

Forecast period starting earlier than defined

My dataset has multiple variables and I am using TSModel for forecasting. I have data through December 2017, but many of the values are either 0 or missing. The forecast starts from July 2015 onwards, whereas it should actually start from January 2018.
Can someone help with what might have gone wrong?
If for a particular series all values after a given date are missing, then the estimation period for that series ends at the date of the last observed value, and the forecast period begins at the next date.

Is there a one-to-one and onto relation between ISO 8601 UTC and Unix timestamps?

Time Formats
A point in time is often represented as Unix time, or as a human-readable ISO 8601 date-time string in UTC.
For example:
Unix Time
Seconds since the epoch (the Unix timestamp), in seconds or milliseconds:
1529325705
1529325705000
ISO 8601 Date
2018-06-18T15:41:45+00:00
My question
Is there a one-to-one and onto relationship between the two? In other words, is there a point in time with a single representation in one format, and more than one, or zero, representations in the other?
Yes, it is possible to find such a date. From the Wikipedia article on Unix time:
Every day is treated as if it contains exactly 86400 seconds, so leap seconds are not applied to seconds since the Epoch.
That means that the leap seconds themselves cannot be represented in Unix time.
For example, the latest leap second occurred at the end of 2016, so 2016-12-31T23:59:60+00:00 is a valid ISO 8601 time stamp. However, the Unix time stamp for the second before, at 23:59:59, is represented as 1483228799 and the second after, 00:00:00 (on January 1 2017) is 1483228800, so there is no Unix timestamp that represents the leap second.
In practice, this is probably not a problem for you; there have only been 27 leap seconds since they were introduced in 1972.
It might be worthwhile to mention that most software implementations of ISO 8601 do not take leap seconds into account either, but will do something else if asked to parse "2016-12-31T23:59:60+00:00". The System.DateTime class in .NET throws an exception, while it is also conceivable that a library would return 2017-01-01 00:00:00.
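To see the gap concretely, here is a small C sketch; note that timegm() is non-standard, but available on glibc and the BSDs:

    #define _GNU_SOURCE            /* for timegm() on glibc */
    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* 2016-12-31T23:59:59Z, the second before the leap second */
        struct tm before = { .tm_year = 116, .tm_mon = 11, .tm_mday = 31,
                             .tm_hour = 23, .tm_min = 59, .tm_sec = 59 };
        /* 2017-01-01T00:00:00Z, the second after it */
        struct tm after  = { .tm_year = 117, .tm_mon = 0, .tm_mday = 1 };

        printf("before: %lld\n", (long long)timegm(&before));  /* 1483228799 */
        printf("after:  %lld\n", (long long)timegm(&after));   /* 1483228800 */
        /* The two results are consecutive integers, so the leap second
           2016-12-31T23:59:60Z has no Unix timestamp of its own. */
        return 0;
    }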
No. There is a close correspondence between the two, but the relationship is one-to-many, and strictly speaking there may not even exist an exact Unix millisecond for a given ISO date-time string. Some issues are:
There are some freedoms in the ISO 8601 format, so the same Unix millisecond may be written in several ways even when we require that the time be in UTC (the offset is zero).
Seconds and fractions of seconds are optional, and there may be a varying number of decimals on the seconds. So a millisecond value of 1 529 381 160 000 could be written, for example, as
2018-06-19T04:06:00.000000000Z
2018-06-19T04:06:00.00Z
2018-06-19T04:06:00Z
2018-06-19T04:06Z
The offset of 0 would normally be written as Z, but may also be written as you do in the question, +00:00. I think the forms +00 and +0000 are OK too (forms with a minus are not).
Since there may be more than three decimals on the seconds in ISO 8601, no exact Unix millisecond may match. So you will have to accept truncation (or rounding) to convert to Unix time. Of course the error will be even greater if you convert to Unix seconds rather than milliseconds.
As Thomas Lycken noted, leap seconds can be represented in ISO 8601, but not in Unix time.
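To illustrate the truncation point, here is a hedged C sketch of just the fraction handling; parsing of the date part is omitted, and the helper name is made up for this example:

    #include <stdio.h>

    /* Truncate an ISO 8601 fractional-seconds field (the digits after the
       dot, however many there are) to whole milliseconds. */
    static long fraction_to_millis(const char *frac) {
        long ms = 0;
        for (int i = 0; i < 3; i++) {
            ms *= 10;
            if (*frac >= '0' && *frac <= '9')
                ms += *frac++ - '0';   /* consume a digit; pad with 0 if short */
        }
        return ms;
    }

    int main(void) {
        printf("%ld\n", fraction_to_millis("123456789"));  /* 123: rest is lost */
        printf("%ld\n", fraction_to_millis("000999999"));  /* 0: almost 1 ms lost */
        printf("%ld\n", fraction_to_millis("5"));          /* 500: .5 s = 500 ms */
        return 0;
    }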
In other words, is there a point in time with a single representation in one format, and more than one, or zero, representations in the other?
No. The local wall-clock time depends on your geographic location (your time zone), but UTC and the Unix timestamp do not. The Unix timestamp is a way to track time as a running total of seconds; this count starts at the Unix epoch, 1 January 1970 00:00:00 UTC.
From Unix Timestamp:
It should also be pointed out that this point in time technically does not change no matter where you are located on the globe.

How to represent dates before epoch as a UNIX timestamp

It occurred to me that I'm not aware of a mechanism to store dates before 1 January 1970 as Unix timestamps. Since that date is the Unix "epoch", this isn't much of a surprise.
But, even though it's not designed for that, I still wish to store dates in the far past in Unix format. I need this for reasons.
So my question is: how would one go about making Unix timestamps contain "invalid" but still working dates? Would storing a negative amount of seconds work? Can we even store a negative amount of seconds in a Unix timestamp? I mean, isn't it unsigned?
Also, if I'm correct, I could only store dates as far back as 13 December 1901, 20:45:52. Could this be extended any further back in history by any means?
Unix Time is usually a 32-bit number of whole seconds from the first moment of 1970 in UTC, the epoch being 1 January 1970 00:00:00 UTC. That means a range of about 136 years, with about half on either side of the epoch. Negative numbers are earlier, zero is the epoch, and positive numbers are later. For a signed 32-bit integer, the values range from 1901-12-13 20:45:52 to 2038-01-19 03:14:07 UTC.
This is not written in stone. Well, it is written, but in a bunch of different stones: older ones say 32-bit, newer ones 64-bit. Some specifications say that the meaning is "implementation-defined". Some Unix systems use an unsigned int to extend only into the future past the epoch, but the usual practice has been a signed number. Some use a float rather than an integer. For details, see the Wikipedia article on Unix time, and this Question.
So, basically, your question makes no sense until you know the context: your programming language (standard C, other C, Java, etc.), environment (POSIX-compliant or not), particular software library, database store, or application.
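That said, on a POSIX system with a signed 64-bit time_t, negative values simply work; here is a minimal sketch (some C libraries, e.g. older Windows CRTs, instead return NULL from gmtime() for pre-epoch values):

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* One day before the epoch: a negative second count. */
        time_t t = -86400;
        char buf[32];
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&t));
        printf("%s UTC\n", buf);           /* 1969-12-31 00:00:00 UTC */

        /* The classic signed 32-bit lower bound. */
        t = (time_t)-2147483648LL;
        strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", gmtime(&t));
        printf("%s UTC\n", buf);           /* 1901-12-13 20:45:52 UTC */
        return 0;
    }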
Avoid Count-From-Epoch
Add to this lack of specificity the fact that a couple dozen other epochs have been used by various software systems, some extremely popular and common. Examples include January 1, 1601 for NTFS file system & COBOL, January 1, 1980 for various FAT file systems, January 1, 2001 for Apple Cocoa, and January 0, 1900 for Excel & Lotus 1-2-3 spreadsheets.
Further add the fact that different granularities of count have been used. Besides whole seconds, some systems use milliseconds, microseconds, or nanoseconds.
I recommend against tracking date-time as a count-from-epoch. Instead use specific data types where available in your programming language or database.
ISO 8601
When data types are not available, or when exchanging data, follow the ISO 8601 standard which defines sensible string formats for various kinds of date-time values.
Date
2015-07-29
A date-time with an offset from UTC (Z is zero/Zulu time, meaning UTC; note the padding zero on the offset)
2015-07-29T14:59:08Z
2001-02-13T12:34:56.123+05:30
Week (with or without day of week)
2015-W31
2015-W31-3
Ordinal date (day-of-year)
2015-210
Interval
"2007-03-01T13:00:00Z/2008-05-11T15:30:00Z"
Duration (format of PnYnMnDTnHnMnS)
P3Y6M4DT12H30M5S = "period of three years, six months, four days, twelve hours, thirty minutes, and five seconds"
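If you have to produce these strings in C, strftime() covers the date, week, and ordinal forms shown above; the week-based specifiers %G, %V, and %u are C99/POSIX:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        time_t now = time(NULL);
        struct tm *utc = gmtime(&now);
        char buf[64];

        strftime(buf, sizeof buf, "%Y-%m-%dT%H:%M:%SZ", utc);
        printf("date-time: %s\n", buf);    /* e.g. 2015-07-29T14:59:08Z */

        strftime(buf, sizeof buf, "%G-W%V-%u", utc);
        printf("week:      %s\n", buf);    /* e.g. 2015-W31-3 */

        strftime(buf, sizeof buf, "%Y-%j", utc);
        printf("ordinal:   %s\n", buf);    /* e.g. 2015-210 */
        return 0;
    }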
Search StackOverflow.com for many more Questions and Answers on these topics.

gps time, time conversion

I have time in UTC seconds format. Could anyone assist on how to convert such numbers to GPS time as a normal timestamp (dd-mm-yyyy hh:mm:ss)? I need C code or, perhaps, an algorithm.
Update (June 2017): Currently 18 leap seconds.
GPS time is simply UTC time, but without leap seconds. As of this writing, there have been 15 leap seconds since the GPS epoch (January 6, 1980 at 00:00:00), so if it's 2012/02/13 12:00:00 (UTC), then it's 2012/02/13 12:00:15 in GPS time. If you want to do correct conversions for other times, you'll have to take into account when each leap second went into effect.
Here's how you can compute the current offset from a couple of different "authoritative" sources (a C sketch of the conversion follows the list):
http://www.ietf.org/timezones/data/leap-seconds.list -- Count the number of lines starting from the "2571782400 20 # 1 Jul 1981" line, or just subtract 19 from the last number in the list (e.g., 37 - 19 = 18 as of May 2017).
https://www.nist.gov/pml/time-and-frequency-division/atomic-standards/leap-second-and-ut1-utc-information -- Count the number of leap seconds inserted (from the Leap Seconds Inserted into the UTC Time Scale section), starting with (and including) the 1981-06-30 entry.
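Putting that together, here is a minimal C sketch of the conversion described above. The offset is hard-coded to the current 18 seconds; as noted, a fully correct converter would look up the offset in effect at the given date and be updated whenever a new leap second is announced:

    #include <stdio.h>
    #include <time.h>

    #define LEAP_SECONDS 18   /* UTC-to-GPS offset; update on new announcements */

    /* Format a UTC Unix timestamp as GPS time, dd-mm-yyyy hh:mm:ss. */
    static void utc_to_gps(time_t utc_seconds, char *out, size_t outlen) {
        time_t gps_seconds = utc_seconds + LEAP_SECONDS; /* GPS runs ahead of UTC */
        strftime(out, outlen, "%d-%m-%Y %H:%M:%S", gmtime(&gps_seconds));
    }

    int main(void) {
        char buf[32];
        utc_to_gps(1329134400, buf, sizeof buf);  /* 2012-02-13 12:00:00 UTC */
        printf("%s\n", buf);                      /* 13-02-2012 12:00:18 */
        return 0;
    }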
There is a JavaScript library that can convert to and from Unix time. The library is available at
http://www.lsc-group.phys.uwm.edu/~kline/gpstime/
Whatever algorithm you use, you must update it when new leap seconds are announced.
For an algorithm, check this site's source code: UTC Converter
For built-in functions in C++, check here, especially ctime()

What is the range of times that Ruby's Time class can represent?

What are the times farthest in the past and farthest in the future that can be represented?
Is the limit on absolute moments in time, or on distance in time from the present moment?
I couldn't find it in the docs for the Time class.
Does it depend on the system? If so, how can I access it in my code?
UPDATE
After some experimentation, I found that it's from about 108 years in the past to about 29 years in the future. Still wondering if it's system dependent.
DateTime (in the Date library, included with Ruby) goes back to 1 January 4713 BCE, and farther into the future than you are likely to need.
"Time is stored internally as the number of seconds and microseconds since the epoch, January 1, 1970 00:00 UTC. On some operating systems, this offset is allowed to be negative."
So clearly it's an absolute time, not relative to now.
It sounds like there is a C time implementation under the covers (integers can be signed or unsigned depending on OS/processor/compiler): that means the bounds are system-dependent.
But if you need to handle dates that far in the past or future, I guess you won't really need the "time of day" part and can use a Date instead.
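Where Ruby's Time wraps the platform's time_t (as it did in older Ruby versions), a quick C probe of that underlying type shows what the bounds would be on a given system:

    #include <stdio.h>
    #include <time.h>

    int main(void) {
        /* (time_t)-1 is below zero only if time_t is a signed type. */
        int is_signed = (time_t)-1 < (time_t)0;
        printf("time_t: %zu bytes, %s\n",
               sizeof(time_t), is_signed ? "signed" : "unsigned");
        /* 4 bytes signed: roughly 1901-12-13 .. 2038-01-19 UTC.
           8 bytes signed: approximately +/- 292 billion years. */
        return 0;
    }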
