How can I get a Unix timestamp with millisecond (or finer) precision in VB6?
(And getting a seconds-precision stamp and multiplying it by 1000 is not an acceptable solution, because I need real sub-second precision.)
Edit: No, How to get time elapsed in milliseconds is not a duplicate of this question. I need a Unix timestamp counted from 1970-01-01 00:00:00, while that question only needs to know how much time some operations took to execute, and its answers use GetTickCount(), which counts milliseconds since system reboot, not since 1970. (Even if I were somehow running a computer that was last rebooted in 1970, which I'm not, GetTickCount() wraps around after 49.7 days, and such an old system wouldn't be running Windows NT anyway, so it wouldn't have access to the WinAPI's GetTickCount().)
Edit 2: I don't have a solution yet, but it's probably possible to use GetSystemTimeAsFileTime() to get the raw numbers; I just don't know how to treat its two 32-bit integers as one 64-bit integer in VB6 to derive the timestamp.
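For reference, here is a minimal C-style sketch (not VB6) of the arithmetic involved, based on the documented Win32 behaviour: GetSystemTimeAsFileTime() returns 100-nanosecond ticks since 1601-01-01 UTC, so the two 32-bit halves are combined into a 64-bit value, the 1601-to-1970 offset is subtracted, and the result is divided by 10,000 to get Unix milliseconds. How to express the 64-bit handling in VB6 remains the open question above.
#include <windows.h>
#include <stdio.h>

int main(void)
{
    FILETIME ft;
    GetSystemTimeAsFileTime(&ft);                   // 100-ns ticks since 1601-01-01 UTC

    ULARGE_INTEGER t;                               // combine the two 32-bit halves
    t.LowPart = ft.dwLowDateTime;
    t.HighPart = ft.dwHighDateTime;

    const unsigned long long EPOCH_DIFF = 116444736000000000ULL;  // ticks between 1601 and 1970
    unsigned long long unixMillis = (t.QuadPart - EPOCH_DIFF) / 10000ULL;

    printf("%llu\n", unixMillis);                   // Unix timestamp in milliseconds
    return 0;
}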
Also, here is the declaration needed to call GetSystemTime() in VB6 (thanks to danaseaman):
Private Type SYSTEMTIME
    wYear As Integer
    wMonth As Integer
    wDayOfWeek As Integer
    wDay As Integer
    wHour As Integer
    wMinute As Integer
    wSecond As Integer
    wMilliseconds As Integer
End Type

Private Declare Sub GetSystemTime Lib "KERNEL32.dll" (lpSystemTime As SYSTEMTIME)
Based upon the many comments, here is a simple solution for future readers of this question:
Private Sub Form_Load()
    'https://www.freeformatter.com/epoch-timestamp-to-date-converter.html
    Dim epoch As Currency
    epoch = (DateDiff("s", "1/1/1970", Date) + Timer) * 1000
    Debug.Print Round(epoch)
End Sub
The idea is to calculate the number of whole seconds from the epoch to today's midnight (DateDiff), add the fractional seconds elapsed since midnight (Timer), and multiply by 1000 to get milliseconds. The result appears to be accurate according to a number of websites, including the one referenced in the code.
I have been writing unit tests for a class in our codebase that converts date/time values from std::string to std::chrono::time_point and vice versa, for different kinds of timestamps (yyyy-mm-dd, hh:mm:ss.ms, etc.).
One way I tried to test whether a std::chrono::system_clock::time_point returned by a function in our codebase was the same as one created in the unit tests was to do something like this:
std::chrono::system_clock::time_point m_TimePoint{}; // == clock's epoch
auto foo = convertToString(m_TimePoint); // foo is a std::string
auto bar = getTimePoint(foo);
ASSERT_EQ(m_TimePoint, bar);
This was on Ubuntu. The default-constructed time point should represent UTC Jan 1 00:00:00 1970, yet when I used ctime() to see the textual representation of the epoch, it printed Dec 31 19:00:00 1969. I was puzzled until I checked that EST (my system timezone) is UTC-5.
Once I created the object as -
std::chrono::duration d{0};
d += std::chrono::hours(5);
std::chrono::system_clock::time_point m_TimePoint{d}; //== clock epoch + 5 hours
All worked fine.
My question is: is it possible for the system clock epoch to be adjusted based on the system timezone?
There are two answers to this question, and I'll try to hit both of them.
system_clock was introduced in C++11, and its epoch was left unspecified. So the technical answer to your question is: yes, it is possible for the system_clock epoch to be adjusted based on the system timezone.
But that's not the end of the story.
There are only a few implementations of system_clock, and all of them model Unix Time. Unix Time is a measure of time duration since 1970-01-01 00:00:00 UTC, excluding leap seconds. It is not dependent on the system timezone.
The C++20 spec standardizes this existing practice.
So from a practical standpoint, the answer to your question is: no, it is not possible for the system_clock epoch to be adjusted based on the system timezone.
One thing that could be tripping you up is that system_clock typically counts time in units finer than milliseconds. The tick period varies from platform to platform, and you can inspect it with system_clock::duration::period::num and system_clock::duration::period::den. These are two compile-time integral constants that form a fraction, n/d, describing the length of one tick of system_clock in seconds. For Ubuntu my guess would be that this forms the fraction 1/1'000'000'000, i.e. nanoseconds.
You can get milliseconds (or whatever unit you want) out of system_clock::time_point with:
auto tp = time_point_cast<milliseconds>(system_clock::now());
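For example, a small self-contained sketch (plain C++11, nothing project-specific assumed) that prints the clock's tick period and the current time as milliseconds since the epoch:
#include <chrono>
#include <iostream>

int main()
{
    using namespace std::chrono;

    // Tick period of system_clock as the compile-time fraction num/den (in seconds)
    std::cout << system_clock::period::num << "/"
              << system_clock::period::den << " seconds per tick\n";

    // Current time truncated to milliseconds since the clock's (Unix) epoch
    auto ms = time_point_cast<milliseconds>(system_clock::now());
    std::cout << ms.time_since_epoch().count() << " ms since epoch\n";
}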
I am writing Fortran code that uses MPI, and I need to measure the run time of the program. I tried to use the MPI_WTIME() function, but I am getting some strange results.
Part of the code is like this:
program heat_transfer_1D_parallel
  implicit none
  include 'mpif.h'
  integer myid, np, rc, ierror, status(MPI_STATUS_SIZE)
  integer :: N, N_loc, i, k, e !e = number extra points (filled with 0s)
  real :: time, tmax, start, finish, dt, dx, xmax, xmin, T_in1, T_in2, T_out1, T_out2, calc_T, t1, t2
  real, allocatable, dimension(:) :: T, T_prev, T_loc, T_loc_prev

  call MPI_INIT(ierror)
  call MPI_COMM_SIZE(MPI_COMM_WORLD, np, ierror)
  call MPI_COMM_RANK(MPI_COMM_WORLD, myid, ierror)
  ...
  t1 = MPI_WTIME()
  time = 0.
  do while (time.le.tmax)
    ...
  end do
  ...
  call MPI_BARRIER(MPI_COMM_WORLD, ierror)
  t2 = MPI_WTIME()
  call MPI_FINALIZE(ierror)
  if (myid.eq.0) then
    write(*,"(8E15.7)") T(1:N-e)
    write(*,*) t2
    write(*,*) t1
  end if
And the output values for t1 and t2 are the same, and very big: 1.4240656E+09.
Any ideas why? Thank you so much.
From the documentation: "Return value: Time in seconds since an arbitrary time in the past." They didn't specify how far back ;-) Only t2 - t1 is meaningful here...
Also, the return value of MPI_Wtime() is double precision, but t1 and t2 are declared as single-precision reals.
Building on the answer from Alexander Vogt, I'd like to add that many Unix implementations of MPI_WTIME use gettimeofday(2) (or similar) to retrieve the system time and then convert the returned struct timeval into a floating-point value. Timekeeping in Unix is done by tracking the number of seconds elapsed since the Epoch (00:00 UTC on 01.01.1970). While writing this, the value is 1424165330.897136 seconds and counting.
With many Fortran compilers, REAL defaults to a single-precision floating-point representation that can only hold about 7.22 decimal digits, while you need at least 9 (more if sub-second precision is needed). The high-precision time value above becomes 1.42416538E9 when stored in a REAL variable. The next nearest value that can be represented by the type is 1.4241655E9. Therefore you cannot measure time periods shorter than (1.4241655 − 1.42416538)×10⁹, or 120 seconds.
Use double precision.
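As a minimal sketch of the same pattern through the C bindings (the Fortran fix itself is simply to declare t1 and t2 as DOUBLE PRECISION):
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);

    double t1 = MPI_Wtime();       // seconds since some arbitrary time in the past
    // ... work to be timed ...
    MPI_Barrier(MPI_COMM_WORLD);   // let every rank finish before stopping the clock
    double t2 = MPI_Wtime();

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
        std::printf("elapsed: %f s\n", t2 - t1);   // only the difference is meaningful

    MPI_Finalize();
    return 0;
}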
I'm saving seconds since the epoch in an unsigned long long. How can I know the latest date I can represent? And is there another type (besides unsigned long long) that allows me to express an even later date?
unsigned long long is only guaranteed to be at least 64 bits, so your maximum end date is at least 2^64 − 1 seconds after the start of your epoch (approximately 2.13503982E14 days).
I will assume you mean the C++ language.
An unsigned long long will be at least 8 bytes (= 64 bits) in size, so the range is at least 0 to 2^64 − 1. You simply need to determine the date that many seconds after your epoch; the underlying operating system should provide you with enough to do that.
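As a rough back-of-the-envelope sketch of that calculation (assuming a 64-bit unsigned long long; the exact calendar date depends on which epoch you count from):
#include <climits>
#include <cstdio>

int main()
{
    // Largest value an unsigned long long can hold on this platform
    unsigned long long max_secs = ULLONG_MAX;     // 2^64 - 1 on typical systems

    double days  = max_secs / 86400.0;            // roughly 2.1E14 days
    double years = days / 365.2425;               // roughly 5.8E11 years after the epoch

    std::printf("%llu s = %.3e days = %.3e years\n", max_secs, days, years);
}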
For your second question, let me answer it like this: you can get a type that stores an 'infinite' number of bytes, where 'infinite' means as much memory as is available to you. Such a type would be like the BigInteger type from Java. Here is a question that addresses exactly that issue:
Big numbers library in c++
I've been reading up on the Y2038 problem and I understand that time_t will eventually wrap around to the lowest representable negative number, because incrementing it will carry into the sign bit.
According to that Wikipedia page, changing time_t to an unsigned integer cannot be done because it would break programs that handle early dates. (Which makes sense.)
However, I don't understand why it wasn't made an unsigned integer in the first place. Why not just store January 1, 1970 as zero rather than some ridiculous negative number?
Because letting it start at signed −2,147,483,648 is equivalent to letting it start at unsigned 0. It doesn't change the range of values a 32-bit integer can hold: a 32-bit integer has 4,294,967,296 different states. The problem isn't the starting point, the problem is the maximum value the integer can hold. The only way to mitigate the problem is to upgrade to 64-bit integers.
Also (as I just realized): 1970 was set as 0 so that we could reach back in time as well (reaching back to 1901 seemed sufficient at the time). If they had gone unsigned and still wanted to reach back to 1901, the epoch would have had to begin at 1901, and we would face the same overflow problem, just shifted.
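For illustration, the exact rollover moment of a signed 32-bit time_t can be printed with a few lines of standard C++ (assuming the platform's time_t is wide enough to hold the value, as any 64-bit time_t is):
#include <cstdint>
#include <cstdio>
#include <ctime>

int main()
{
    std::time_t last = INT32_MAX;        // 2,147,483,647 seconds after 1970-01-01 00:00:00 UTC
    std::tm* utc = std::gmtime(&last);   // interpret as UTC, not local time
    char buf[64];
    std::strftime(buf, sizeof buf, "%Y-%m-%d %H:%M:%S UTC", utc);
    std::printf("32-bit time_t overflows after %s\n", buf);   // 2038-01-19 03:14:07 UTC
}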
There's a more fundamental problem here than using unsigned values. If we used unsigned values, then we'd get only one more bit of timekeeping. This would have a definitely positive impact - it would double the amount of time we could keep - but then we'd have a problem much later on in the future. More generally, for any fixed-precision integer value, we'd have a problem along these lines.
When UNIX was being developed in the 1970s, having a 60 year clock sounded fine, though clearly a 120-year clock would have been better. If they had used more bits, then we'd have a much longer clock - say 1000 years - but after that much time elapsed we'd be right back in the same bind and would probably think back and say "why didn't they use more bits?"
Because not all systems have to deal purely with "past" and "future" values. Even in the 70s, when Unix was created and the time system defined, they had to deal with dates back in the 60s or earlier. So, a signed integer made sense.
Once everyone switches to a 64-bit time_t, we won't have to worry about a Y2038-type problem for another 2 billion or so 136-year periods.
Just as background, I'm building an application in Cocoa. This application existed originally in C++ in another environment. I'd like to do as much as possible in Objective-C.
My questions are:
1) How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as "now"?
2) When used in an Objective-C program that includes time.h, what are the units of clock()?
Thank you for your help.
You can use CFAbsoluteTimeGetCurrent(), but bear in mind that the wall clock can change between two calls and throw your measurement off. If you want to protect against that, you should use CACurrentMediaTime().
The return types of these are CFAbsoluteTime and CFTimeInterval respectively, which are both typedefs for double, so they return the number of seconds with double precision. If you really want an integer, you can use mach_absolute_time(), found in #include <mach/mach_time.h>, which returns a 64-bit integer. This needs a bit of unit conversion, so check out this link for example code. It is what CACurrentMediaTime() uses internally, so it's probably best to stick with that.
Computing the difference between two calls is obviously just a subtraction; use a variable to remember the last value.
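If you do go the mach_absolute_time() route, the unit conversion looks roughly like this (a sketch using the documented mach_timebase_info() call):
#include <mach/mach_time.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    mach_timebase_info_data_t tb;
    mach_timebase_info(&tb);                  // ratio (numer/denom) converting ticks to nanoseconds

    uint64_t start = mach_absolute_time();
    // ... work to be timed ...
    uint64_t end = mach_absolute_time();

    uint64_t elapsedNs = (end - start) * tb.numer / tb.denom;
    printf("%llu ms\n", (unsigned long long)(elapsedNs / 1000000));
    return 0;
}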
For the clock function, see the documentation here: clock(). Basically you need to divide the return value by CLOCKS_PER_SEC to get seconds of CPU time; clock() measures processor time consumed by your process, not wall-clock time.
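For example (a minimal sketch of the division; the note above about CPU time versus wall-clock time applies):
#include <stdio.h>
#include <time.h>

int main(void)
{
    clock_t start = clock();
    // ... work to be timed ...
    clock_t end = clock();

    double seconds = (double)(end - start) / CLOCKS_PER_SEC;
    printf("%f s of CPU time\n", seconds);
    return 0;
}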
How do I compute, as an integer, the number of milliseconds between now and the previous time I remembered as now?
Is there any reason you need it as an integral number of milliseconds? Asking NSDate for the time interval since another date will give you a floating-point number of seconds. If you really do need milliseconds, you can simply multiply that by 1000 to get a floating-point number of milliseconds. If you really do need an integer, you can round or truncate the floating-point value.
If you'd like to do it with integers from start to finish, use either UpTime or mach_absolute_time to get the current time in absolute units, then use AbsoluteToNanoseconds to convert that to a real-world unit. Obviously, you'll have to divide that by 1,000,000 to get milliseconds.
QA1398 suggests mach_absolute_time, but UpTime is easier, since it returns the same type AbsoluteToNanoseconds uses (no “pointer fun” as shown in the technote).
AbsoluteToNanoseconds returns an UnsignedWide, which is a structure. (This stuff dates back to before Mac machines could handle scalar 64-bit values.) Use the UnsignedWideToUInt64 function to convert it to a scalar. That just leaves the subtraction, which you'll do the normal way.