Why was UTC initially defined as being 10 seconds offset from TAI?

From http://derickrethans.nl/leap-seconds-and-what-to-do-with-them.html
UTC was defined (in the latest adjustment of its definition) as being 10 seconds different from TAI making 1972-01-01T00:00:00 UTC and 1972-01-01T00:00:10 TAI the same instant.
From https://en.wikipedia.org/wiki/International_Atomic_Time
TAI is exactly 36 seconds ahead of UTC. The 36 seconds results from the initial difference of 10 seconds at the start of 1972, plus 26 leap seconds in UTC since 1972.
Why the initial difference of 10 seconds?

Presumably because that was the difference that had accumulated between leap-second-free TAI (which originates in the 1950s) and the leap-second-adjusted UTC that was introduced in 1972.

Related

How to calculate phone call length in 6 second increments

We log phone calls to our SQL Server, and for billing we need to round each call up to 6-second increments: the customer pays for every full or partial 6-second increment of the call.
We have the call length as a number of seconds, and we can do the calculation with a CASE statement, but I am looking for a more time / clock-cycle efficient way to do it.
Has anyone else already done this and has a query they would be willing to share?
Examples:
Call is 30 seconds in length: since 30 is divisible by 6 (no remainder), we bill for 30 seconds.
Call is 0 seconds in length: no bill.
Call is 32 seconds in length: 32 is not divisible by 6, so we bill for 36 seconds.
You should bill for 6*ceil(time/6.0) seconds.
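A minimal sketch of that rounding in C, using integer arithmetic only; the same expression, ((seconds + 5) / 6) * 6, should translate directly to T-SQL, since integer division truncates there as well.

    #include <stdio.h>

    /* Round a call length in whole seconds up to the next 6-second increment.
       Integer-only equivalent of 6*ceil(seconds/6.0). */
    static long billed_seconds(long seconds)
    {
        if (seconds <= 0)
            return 0;                      /* a 0-second call is not billed */
        return ((seconds + 5) / 6) * 6;
    }

    int main(void)
    {
        long samples[] = { 30, 0, 32 };    /* the examples from the question */
        for (int i = 0; i < 3; i++)
            printf("%ld -> %ld\n", samples[i], billed_seconds(samples[i]));
        return 0;                          /* prints 30->30, 0->0, 32->36 */
    }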

Is QueryPerformanceCounter counter process specific?

https://msdn.microsoft.com/en-us/library/windows/desktop/dn553408(v=vs.85).aspx
https://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx
Imagine that I measure some part of code that takes 20 ms.
A context switch happens, and my thread is displaced by another thread that runs for 20 ms.
Then I get a quantum of time back from the scheduler and perform some calculations for 1 ms.
If I calculate the elapsed time, what will I get: 41 ms or 21 ms?
QueryPerformanceCounter reports wall clock time. So the answer will be 41ms.
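For reference, a minimal sketch of the usual measurement pattern in C; Sleep here is just a stand-in for the measured work plus the time spent preempted, and the counter keeps running through both:

    #include <windows.h>
    #include <stdio.h>

    int main(void)
    {
        LARGE_INTEGER freq, start, end;
        QueryPerformanceFrequency(&freq);   /* counter ticks per second */
        QueryPerformanceCounter(&start);

        Sleep(41);                          /* stand-in for the 20 ms of work,
                                               20 ms preempted, and 1 ms of work */

        QueryPerformanceCounter(&end);
        double ms = (double)(end.QuadPart - start.QuadPart) * 1000.0
                    / (double)freq.QuadPart;
        printf("elapsed: %.3f ms\n", ms);   /* wall-clock time, preemption included */
        return 0;
    }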

from 16bit unsigned value to minutes and seconds

I'm writing some PIC assembly code to compute the remaining time of a CD track from the elapsed minutes and seconds and the total track length (a 16-bit unsigned value, in seconds).
The elapsed minutes and seconds are two 8-bit unsigned values (two GPR registers); the total track length is a two-byte value (hi-byte and lo-byte).
I need to compute the remaining time, expressed in minutes and seconds.
I tried computing the total elapsed seconds (elapsed_minutes * 60 + elapsed_seconds) and subtracting it from the total track length. Now I face the problem of converting that result back into MM:SS format. Do I divide by 60 and take the quotient (minutes) and the remainder (seconds)?
Yes, you divide by 60 to get minutes and the remainder is seconds. It's just algebra, not magic!
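For illustration, the same arithmetic in C with made-up example values; on a PIC without a hardware divide instruction, the divide-by-60 would typically be a small division routine or repeated subtraction.

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical example values: 2:41 elapsed of a 225-second track. */
        unsigned elapsed_min = 2, elapsed_sec = 41;
        unsigned total_sec   = 225;

        unsigned elapsed   = elapsed_min * 60u + elapsed_sec;
        unsigned remaining = total_sec - elapsed;     /* assumes elapsed <= total */

        unsigned mm = remaining / 60u;                /* quotient  -> minutes */
        unsigned ss = remaining % 60u;                /* remainder -> seconds */
        printf("remaining %02u:%02u\n", mm, ss);      /* prints "remaining 01:04" */
        return 0;
    }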

Interval With Microseconds (Or Faster)

I'd like to program my own clock based on an octal numeral system. From what I've gathered, JavaScript is browser-friendly but inaccurate at time intervals. What's a good program to code something like this? To give an example for this specific time system, there would be 64 hours in a day, 64 minutes in an hour, and 64 seconds in a minute. This would make 1 octal second equivalent to 0.32958984375 SI seconds.
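Whatever language you pick, one way to sidestep interval inaccuracy is to recompute the octal time from the system clock on every tick rather than accumulating timer callbacks. A minimal C sketch of that conversion, assuming octal time is simply the fraction of the local day re-expressed in base-64 units:

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        /* Seconds elapsed since local midnight on the ordinary clock. */
        time_t now = time(NULL);
        struct tm lt = *localtime(&now);
        long si_seconds = lt.tm_hour * 3600L + lt.tm_min * 60L + lt.tm_sec;

        /* A day is 64*64*64 = 262144 octal seconds, so one octal second is
           86400 / 262144 = 0.32958984375 SI seconds.  Same conversion in
           integer arithmetic: */
        long octal_total = (long)((long long)si_seconds * 262144 / 86400);

        int oh = (int)(octal_total / (64 * 64));   /* octal hours   (0..63) */
        int om = (int)((octal_total / 64) % 64);   /* octal minutes (0..63) */
        int os = (int)(octal_total % 64);          /* octal seconds (0..63) */
        printf("%02o:%02o:%02o\n", oh, om, os);    /* shown as base-8 digits */
        return 0;
    }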

Why is z/OS USS "date" command output different from TSO TIME?

A "date" command on USS says:
Wed Jan 22 17:51:30 EST 2014
A couple of seconds later, a TSO TIME command says:
IKJ56650I TIME-04:51:58 PM. CPU-00:00:02 SERVICE-196896 SESSION-07:08:30 JANUARY 22,2014
(There's a one-hour time zone difference.) TSO TIME tracks, via eyeball, very closely to the time in system log entries. Any idea why the "date" command might be 28 seconds off?
Thanks.
The difference is due to the handling of leap-seconds. Applications that merely access the hardware clock directly (STCK/STCKE instructions) often forget about leap-seconds, and so they will be off by about 30 seconds. Smarter apps use system time conversion routines that factor in leap-seconds automatically. Here's an example of how this happens: http://www-01.ibm.com/support/docview.wss?uid=isg1OA41950
Having said that, POSIX or the Single Unix Specification (which z/OS UNIX Services adheres to) may in fact specify the behavior of the "date" command. Here's what SUS says under "Seconds Since the Epoch":
A value that approximates the number of seconds that have elapsed since the Epoch... As represented in seconds since the Epoch, each and every day shall be accounted for by exactly 86400 seconds.
By my reading, the comment about every day having exactly 86400 seconds suggests that the UNIX specification intentionally does not want leap seconds counted. If this is the case, then IBM is merely following the letter of the law with respect to how the time is displayed.
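The "exactly 86400 seconds" wording is not incidental: the specification defines seconds since the Epoch as plain calendar arithmetic with 86400 seconds per day, so leap seconds never enter into it. Here is a C sketch of that arithmetic (the coefficients follow the POSIX "Seconds Since the Epoch" formula), evaluated for the question's timestamp, 17:51:30 EST = 22:51:30 UTC on 2014-01-22:

    #include <stdio.h>

    /* Seconds since the Epoch computed the way POSIX/SUS describes it:
       every day contributes exactly 86400 seconds, so leap seconds are
       never counted.  tm_year is years since 1900, tm_yday is 0-based. */
    static long long sus_epoch_seconds(int tm_sec, int tm_min, int tm_hour,
                                       int tm_yday, int tm_year)
    {
        return tm_sec
             + tm_min  * 60LL
             + tm_hour * 3600LL
             + tm_yday * 86400LL
             + (tm_year - 70) * 31536000LL
             + ((tm_year - 69) / 4) * 86400LL
             - ((tm_year - 1) / 100) * 86400LL
             + ((tm_year + 299) / 400) * 86400LL;
    }

    int main(void)
    {
        /* 2014-01-22 22:51:30 UTC: 22nd of January -> tm_yday = 21, tm_year = 114 */
        printf("%lld\n", sus_epoch_seconds(30, 51, 22, 21, 114));  /* 1390431090 */
        return 0;
    }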

Resources