Converting epoch time to date format

I'm saving seconds since the epoch in an unsigned long long. How can I know the latest date I can represent? And is there another type (besides unsigned long long) that allows me to express an even later date?

unsigned long long is only guaranteed to be at least 64 bits, so your maximum end date is at least 2^64 − 1 seconds after the start of your epoch (approximately 2.135E14 days).

I will assume you mean the C++ language.
An unsigned long long will be at least 8 bytes (= 64 bits) large, so its range is at least 0 to 2^64 − 1. You simply need to determine the date that falls 2^64 − 1 seconds after your epoch; the underlying operating system (or a date library) should provide you with enough to do that.
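As a rough illustration (a sketch added here, not part of the original answer), converting 2^64 seconds into days and years in C++:
#include <cmath>
#include <cstdio>

int main() {
    // 2^64 seconds expressed as days and years (all values are approximate).
    const double max_seconds = std::pow(2.0, 64);   // about 1.84e19 seconds
    const double days  = max_seconds / 86400.0;     // about 2.135e14 days
    const double years = days / 365.2425;           // about 5.85e11 years
    std::printf("%.6e days, %.6e years after the epoch\n", days, years);
    return 0;
}
That works out to roughly 585 billion years, so in practice the limiting factor is the date library you convert with, not the integer type itself.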
For your second question, let me answer it like this: you can get a type that stores an 'infinite' number of bytes, where 'infinite' means as much memory as is available to you. Such a type would be like the BigInteger type from Java. Here is a question that addresses exactly that issue:
Big numbers library in c++
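As a hedged sketch of that second point, assuming Boost.Multiprecision is available, its cpp_int type behaves like Java's BigInteger and is limited only by memory:
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main() {
    // cpp_int grows as needed, limited only by available memory.
    boost::multiprecision::cpp_int seconds = 1;
    seconds <<= 64;                 // one past the unsigned long long range
    seconds += 123;                 // still exact, no overflow
    std::cout << seconds << '\n';   // prints 18446744073709551739
    return 0;
}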

Related

unix timestamp with millisecond precision in VB6?

How can I get a unix timestamp with millisecond (or more?) precision in VB6?
(Getting a seconds-precision stamp and multiplying it by 1000 is not an acceptable solution, because I need sub-second precision.)
Edit: No, How to get time elapsed in milliseconds is not a duplicate of this question. I need a unix timestamp since 1970-01-01 00:00:00, while that question only needs to know how much time it took to execute some operations, and its answers use GetTickCount(), which counts milliseconds since system reboot, not since 1970. (And even if I were theoretically running on a computer last rebooted in 1970, which I'm not, GetTickCount() has a maximum of 49.7 days, and I don't think such an old system would be running Windows NT anyway, meaning it wouldn't have access to WinAPI's GetTickCount().)
Edit2: I don't have a solution yet, but it is probably possible to use GetSystemTimeAsFileTime() to get the numbers; I just don't know how to combine its two 32-bit integers into one 64-bit integer in VB6 to derive the number.
Also, here is code to call GetSystemTime() in VB6 (thanks to danaseaman):
Private Type SYSTEMTIME
    wYear As Integer
    wMonth As Integer
    wDayOfWeek As Integer
    wDay As Integer
    wHour As Integer
    wMinute As Integer
    wSecond As Integer
    wMilliseconds As Integer
End Type
Private Declare Sub GetSystemTime Lib "KERNEL32.dll" (lpSystemTime As SYSTEMTIME)
Based upon the many comments, here is a simple solution for future readers of this question:
Private Sub Form_Load()
    'https://www.freeformatter.com/epoch-timestamp-to-date-converter.html
    Dim epoch As Currency
    epoch = (DateDiff("s", "1/1/1970", Date) + Timer) * 1000
    Debug.Print Round(epoch)
End Sub
The idea is to calculate the number of whole seconds from the epoch to midnight today (DateDiff), add the fractional seconds elapsed since midnight (Timer), and multiply by 1000 to get milliseconds. The result appears to be accurate according to a number of websites, including the one referenced in the code.
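For readers who want the GetSystemTimeAsFileTime() route from Edit2, the 64-bit arithmetic looks roughly like this; the sketch is C++ rather than VB6, purely to show the conversion (FILETIME counts 100-nanosecond ticks since 1601-01-01 UTC):
#include <windows.h>
#include <cstdint>
#include <cstdio>

int main() {
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);   // UTC, 100-ns ticks since 1601-01-01

    // Combine the two 32-bit halves into one 64-bit value.
    std::uint64_t ticks = (static_cast<std::uint64_t>(ft.dwHighDateTime) << 32)
                        | ft.dwLowDateTime;

    // 11,644,473,600 seconds between 1601-01-01 and 1970-01-01,
    // expressed in 100-ns ticks; then 10,000 ticks per millisecond.
    const std::uint64_t UNIX_EPOCH_OFFSET = 116444736000000000ULL;
    std::uint64_t unix_ms = (ticks - UNIX_EPOCH_OFFSET) / 10000;

    std::printf("%llu\n", static_cast<unsigned long long>(unix_ms));
    return 0;
}
In VB6 itself the two halves are usually combined by copying the FILETIME into a Currency variable (a scaled 64-bit integer) with CopyMemory, since VB6 has no native unsigned 64-bit type.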

Data type of Time_Span and Time in Ada

What are the data types of Time and Time_Span in the Ada language? And how are variables of these types stored in memory?
Just looking at Ada.Real_Time.Time_Span, ARM D.8(5) says type Time_Span is private; and then (17) says the full declaration isn't defined by the language.
Further, (20) says
Values of the type Time_Span represent length of real time duration.
(31) says
Time_Span_First shall be no greater than –3600 seconds, and Time_Span_Last shall be no less than 3600 seconds.
and those statements are all that a portable Ada program can rely on.
Now, specifically for GNAT, we can look at the actual implementation on Github, where in the private part we find
-- Time and Time_Span are represented in 64-bit Duration value in
-- nanoseconds. For example, 1 second and 1 nanosecond is represented
-- as the stored integer 1_000_000_001. This is for the 64-bit Duration
-- case, not clear if this also is used for 32-bit Duration values.
type Time is new Duration;
Time_First : constant Time := Time'First;
Time_Last : constant Time := Time'Last;
type Time_Span is new Duration;
(note, this is for the desktop system: for embedded systems the types and values may be different).
So the answer is that, on desktop GNAT, Time and Time_Span are both stored as 64-bit values in which the least significant bit represents 1 nanosecond.
There is a type named Time defined in the Language Reference Manual. The memory layout of this type is implementation defined.
From the Ada Reference Manual:
A value of the type Time in package Calendar, or of some other time type, represents a time as reported by a corresponding clock.
Values of the type Time_Span represent length of real time duration.
In short, the Time type represents a timestamp, and the Time_Span type represents the duration of a time period.

What are reasons to use a sized or unsigned integer type in go?

In the Go tour, the chapter "Basics/Basic types" says:
When you need an integer value you should use int unless you have a specific reason to use a sized or unsigned integer type.
What are those specific reasons? Can we name them all?
Other available resources only talk about 32- and 64-bit signed and unsigned types. But why would someone use integer types smaller than 32 bits?
If you cannot think of a reason not to use a standard int, you should use a standard int. In most cases, saving memory isn't worth the extra effort, and you are probably not going to need to store such large values anyway.
If you are storing a very large number of small values, you might be able to save a lot of memory by changing the datatype to a smaller one, such as byte. Storing 8-bit values in an int means storing 24 bits of zeroes for every 8 bits of data, and thus wasting a lot of space. Of course, you could pack 4 (or maybe 8) bytes inside an int with some bit-shift magic, but why do the hard work when you can let the compiler and the CPU do it for you?
If you are trying to do computations that might not fit inside a 32-bit integer, you might want an int64 instead, or even a big.Int from math/big.
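The memory argument is not Go-specific. As a rough illustration (sketched in C++ rather than Go), a million 8-bit values take a quarter or an eighth of the space of a million default-width ints:
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    // One million small values stored in a narrow type vs. a default int.
    std::vector<std::int8_t> narrow(1000000);
    std::vector<int>         wide(1000000);

    std::printf("int8_t array: %zu bytes\n", narrow.size() * sizeof(std::int8_t)); // 1,000,000
    std::printf("int array:    %zu bytes\n", wide.size() * sizeof(int));           // typically 4,000,000
    return 0;
}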

Time as a Signed Integer

I've been reading up on the Y2038 problem and I understand that time_t will eventually revert to the lowest representable negative number because it'll try to "increment" the sign bit.
According to that Wikipedia page, changing time_t to an unsigned integer cannot be done because it would break programs that handle early dates. (Which makes sense.)
However, I don't understand why it wasn't made an unsigned integer in the first place. Why not just store January 1, 1970 as zero rather than some ridiculous negative number?
Because letting it start at signed −2,147,483,648 is equivalent to letting it start at unsigned 0. It doesn't change the number of values a 32-bit integer can hold; a 32-bit integer can represent 4,294,967,296 different states. The problem isn't the starting point, the problem is the maximum value the integer can hold. The only way to mitigate the problem is to upgrade to 64-bit integers.
Also (as I just realized): 1970 was set as 0 so that we could reach back in time as well (reaching back to 1901 seemed to be sufficient at the time). If they had gone unsigned, the epoch would have had to begin at 1901 to still reach back from 1970, and we would have had the same problem again.
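To put concrete dates on those bounds, here is a small C++ sketch (it assumes a platform with a 64-bit time_t) that prints where 32-bit signed and unsigned counters run out:
#include <cstdint>
#include <cstdio>
#include <ctime>

// Format a seconds-since-1970 value as a UTC date string.
static void show(const char *label, long long secs) {
    std::time_t t = static_cast<std::time_t>(secs);   // needs 64-bit time_t
    char buf[32];
    std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S", std::gmtime(&t));
    std::printf("%-12s %s UTC\n", label, buf);
}

int main() {
    show("INT32_MIN", INT32_MIN);     // 1901-12-13, earliest signed 32-bit date
    show("INT32_MAX", INT32_MAX);     // 2038-01-19, the Y2038 rollover
    show("UINT32_MAX", UINT32_MAX);   // 2106-02-07, where unsigned 32-bit wraps
    return 0;
}
With a signed 32-bit time_t the usable window is roughly 1901 to 2038; going unsigned would have traded the pre-1970 range for a window ending in 2106.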
There's a more fundamental problem here than using unsigned values. If we used unsigned values, we'd get only one more bit of timekeeping. That would have a definite positive impact, since it would double the amount of time we could keep, but we'd still have a problem much later on in the future. More generally, for any fixed-precision integer value, we'd have a problem along these lines.
When UNIX was being developed in the 1970s, having a roughly 68-year clock sounded fine, though clearly a roughly 136-year clock would have been better. If they had used more bits, then we'd have a much longer clock, say 1000 years, but after that much time elapsed we'd be right back in the same bind and would probably think back and say "why didn't they use more bits?"
Because not all systems have to deal purely with "past" and "future" values. Even in the 70s, when Unix was created and the time system defined, they had to deal with dates back in the 60s or earlier. So, a signed integer made sense.
Once everyone switches to a 64-bit time_t, we won't have to worry about a Y2038-type problem for another 2 billion or so 136-year periods.

Determining Millisecond Time Intervals In Cocoa

Just as background, I'm building an application in Cocoa. This application existed originally in C++ in another environment. I'd like to do as much as possible in Objective-C.
My questions are:
1)
How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as now?
2)
When used in an Objective-C program that includes time.h, what are the units of
clock()
Thank you for your help.
You can use CFAbsoluteTimeGetCurrent(), but bear in mind that the wall clock can change between two calls and skew your measurement. If you want to protect against that, you should use CACurrentMediaTime().
The return types of these are CFAbsoluteTime and CFTimeInterval respectively, both of which are double, so they return the number of seconds with double precision. If you really want an integer, you can use mach_absolute_time(), found in #include <mach/mach_time.h>, which returns a 64-bit integer. This needs a bit of unit conversion, so check out this link for example code. This is what CACurrentMediaTime() uses internally, so it's probably best to stick with that.
Computing the difference between two calls is obviously just a subtraction, use a variable to remember the last value.
For the clock function, see the documentation here: clock(). Basically, you need to divide the return value by CLOCKS_PER_SEC to get the elapsed processor time in seconds.
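Here is a sketch of that approach using the C-level APIs mentioned above (mach_absolute_time and clock), with mach_timebase_info supplying the tick-to-nanosecond conversion; it compiles as C++ or Objective-C++:
#include <mach/mach_time.h>
#include <cstdint>
#include <cstdio>
#include <ctime>

// Convert the difference between two mach_absolute_time() readings
// into whole milliseconds, using the timebase (ticks -> ns ratio).
static std::uint64_t elapsed_ms(std::uint64_t start, std::uint64_t end) {
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);
    std::uint64_t ns = (end - start) * tb.numer / tb.denom;
    return ns / 1000000;
}

int main() {
    std::uint64_t start = mach_absolute_time();
    // ... the work you want to time goes here ...
    std::uint64_t end = mach_absolute_time();
    std::printf("%llu ms elapsed\n",
                static_cast<unsigned long long>(elapsed_ms(start, end)));

    // clock() counts processor time; CLOCKS_PER_SEC ticks make one second.
    std::printf("%.3f s of CPU time used\n",
                static_cast<double>(std::clock()) / CLOCKS_PER_SEC);
    return 0;
}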
How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as now?
Is there any reason you need it as an integral number of milliseconds? Asking NSDate for the time interval since another date will give you a floating-point number of seconds. If you really do need milliseconds, you can simply multiply that by 1000 to get a floating-point number of milliseconds. If you really do need an integer, you can round or truncate the floating-point value.
If you'd like to do it with integers from start to finish, use either UpTime or mach_absolute_time to get the current time in absolute units, then use AbsoluteToNanoseconds to convert that to a real-world unit. Obviously, you'll have to divide that by 1,000,000 to get milliseconds.
QA1398 suggests mach_absolute_time, but UpTime is easier, since it returns the same type AbsoluteToNanoseconds uses (no “pointer fun” as shown in the technote).
AbsoluteToNanoseconds returns an UnsignedWide, which is a structure. (This stuff dates back to before Mac machines could handle scalar 64-bit values.) Use the UnsignedWideToUInt64 function to convert it to a scalar. That just leaves the subtraction, which you'll do the normal way.

Resources