Does anyone know an equivalent of the gettimeofday() function for the Windows environment? I am comparing code execution time on Linux vs. Windows. I am using MS Visual Studio 2010 and it keeps saying: identifier "gettimeofday" is undefined.
Here is a free implementation:
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#include <stdint.h> // portable: uint64_t MSVC: __int64
// MSVC defines this in winsock2.h!?
typedef struct timeval {
    long tv_sec;
    long tv_usec;
} timeval;

int gettimeofday(struct timeval * tp, struct timezone * tzp)
{
    // Note: some broken versions only have 8 trailing zeros, the correct epoch has 9 trailing zeros
    // This magic number is the number of 100 nanosecond intervals since January 1, 1601 (UTC)
    // until 00:00:00 January 1, 1970
    static const uint64_t EPOCH = ((uint64_t) 116444736000000000ULL);

    SYSTEMTIME  system_time;
    FILETIME    file_time;
    uint64_t    time;

    GetSystemTime( &system_time );
    SystemTimeToFileTime( &system_time, &file_time );
    time =  ((uint64_t)file_time.dwLowDateTime );
    time += ((uint64_t)file_time.dwHighDateTime) << 32;

    tp->tv_sec  = (long) ((time - EPOCH) / 10000000L);
    tp->tv_usec = (long) (system_time.wMilliseconds * 1000);
    return 0;
}
GetLocalTime() for the time in the system timezone, GetSystemTime() for UTC. Those return the date/time in a SYSTEMTIME structure, where it's parsed into year, month, etc. If you want a seconds-since-epoch time, use SystemTimeToFileTime() or GetSystemTimeAsFileTime(). The FILETIME is a 64-bit value with the number of 100ns intervals since Jan 1, 1601 UTC.
For measuring intervals, use GetTickCount(); it returns milliseconds since system startup.
For measuring intervals with the best possible resolution (limited only by the hardware), use QueryPerformanceCounter().
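For example, here is a minimal interval-timing sketch using QueryPerformanceCounter (the Sleep call simply stands in for the code being measured):
#include <Windows.h>
#include <stdio.h>

int main(void)
{
    LARGE_INTEGER freq, start, stop;
    double elapsed_us;

    QueryPerformanceFrequency(&freq);   /* counter ticks per second */
    QueryPerformanceCounter(&start);

    Sleep(100);                         /* the code you want to time goes here */

    QueryPerformanceCounter(&stop);
    elapsed_us = (double)(stop.QuadPart - start.QuadPart) * 1e6 / (double)freq.QuadPart;
    printf("elapsed: %.3f microseconds\n", elapsed_us);
    return 0;
}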
Here is a C++11 version that uses <chrono>.
Thanks to Howard Hinnant for the advice.
#if defined(_WIN32)
#include <chrono>
int gettimeofday(struct timeval* tp, struct timezone* tzp) {
    namespace sc = std::chrono;
    sc::system_clock::duration d = sc::system_clock::now().time_since_epoch();
    sc::seconds s = sc::duration_cast<sc::seconds>(d);
    tp->tv_sec = s.count();
    tp->tv_usec = sc::duration_cast<sc::microseconds>(d - s).count();
    return 0;
}
#endif // _WIN32
If you really want a Windows gettimeofday() implementation, here is one from PostgreSQL that uses Windows APIs and the proper conversions.
However, if you want to time code, I suggest you look into QueryPerformanceCounter(), or read the TSC directly if you are only going to run on x86, for example.
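If you do go the TSC route on x86/x64 with MSVC, the __rdtsc intrinsic reads the counter directly. A rough sketch (this counts cycles, not wall-clock time, and assumes an invariant TSC):
#include <intrin.h>
#include <stdio.h>

int main(void)
{
    unsigned __int64 start, stop;

    start = __rdtsc();
    /* the code you want to time goes here */
    stop = __rdtsc();

    printf("elapsed cycles: %llu\n", stop - start);
    return 0;
}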
Nowadays I would use the following for gettimeofday() on Windows, which uses GetSystemTimePreciseAsFileTime() if compiled for Windows 8 or higher and GetSystemTimeAsFileTime() otherwise:
#include <Windows.h>
struct timezone {
    int tz_minuteswest;
    int tz_dsttime;
};

int gettimeofday(struct timeval *tv, struct timezone *tz)
{
    if (tv) {
        FILETIME filetime; /* 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 00:00 UTC */
        ULARGE_INTEGER x;
        ULONGLONG usec;
        static const ULONGLONG epoch_offset_us = 11644473600000000ULL; /* microseconds between Jan 1, 1601 and Jan 1, 1970 */

#if _WIN32_WINNT >= _WIN32_WINNT_WIN8
        GetSystemTimePreciseAsFileTime(&filetime);
#else
        GetSystemTimeAsFileTime(&filetime);
#endif
        x.LowPart = filetime.dwLowDateTime;
        x.HighPart = filetime.dwHighDateTime;
        usec = x.QuadPart / 10 - epoch_offset_us;
        tv->tv_sec = (time_t)(usec / 1000000ULL);
        tv->tv_usec = (long)(usec % 1000000ULL);
    }
    if (tz) {
        TIME_ZONE_INFORMATION timezone;
        GetTimeZoneInformation(&timezone);
        tz->tz_minuteswest = timezone.Bias;
        tz->tz_dsttime = 0;
    }
    return 0;
}
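A minimal usage sketch, assuming struct timeval is already visible (e.g. via winsock2.h) and <stdio.h> and <time.h> are included:
int main(void)
{
    struct timeval tv;
    struct timezone tz;

    gettimeofday(&tv, &tz);
    printf("%lld.%06ld seconds since the Unix epoch, UTC offset %d minutes\n",
           (long long)tv.tv_sec, tv.tv_usec, tz.tz_minuteswest);
    return 0;
}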
Since Visual Studio 2015, timespec_get is available:
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>
static uint64_t
time_ns(void)
{
    struct timespec ts;

    if (timespec_get(&ts, TIME_UTC) != TIME_UTC)
    {
        fputs("timespec_get failed!", stderr);
        return 0;
    }
    return 1000000000 * ts.tv_sec + ts.tv_nsec;
}

int main(void)
{
    printf("%" PRIu64 "\n", time_ns());
    return EXIT_SUCCESS;
}
Compile using cl t.c and run:
C:\> perl -E "system 't' for 1 .. 10"
1626610781959535900
1626610781973206600
1626610781986049300
1626610781999977000
1626610782014814800
1626610782028317500
1626610782040880700
1626610782054217800
1626610782068346700
1626610782081375500
Related
I have created std::chrono::milliseconds ms and std::chrono::nanoseconds ns from std::chrono::system_clock::now().time_since_epoch(). From those durations I created time_points, converted them to time_t using system_clock::to_time_t, and printed them using the ctime function. But the times printed are not the same. As I understand it, a time_point has a duration, and a duration has a rep and a period (ratio), so both time_points should have the same value up to millisecond precision. Why is the output different?
Here is my code
#include <ctime>
#include <ratio>
#include <chrono>
#include <iostream>
using namespace std::chrono;

int main ()
{
    std::chrono::milliseconds ms = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now().time_since_epoch());
    std::chrono::nanoseconds ns = std::chrono::duration_cast<std::chrono::nanoseconds>(std::chrono::system_clock::now().time_since_epoch());

    std::chrono::duration<unsigned int, std::ratio<1,1000>> today_day (ms.count());
    std::chrono::duration<system_clock::duration::rep, system_clock::duration::period> same_day(ns.count());

    system_clock::time_point abc(today_day);
    system_clock::time_point abc1(same_day);

    std::time_t tt;
    tt = system_clock::to_time_t ( abc );
    std::cout << "today is: " << ctime(&tt);
    tt = system_clock::to_time_t ( abc1 );
    std::cout << "today is: " << ctime(&tt);
    return 0;
}
This line:
std::chrono::duration<unsigned int,std::ratio<1,1000>> today_day (ms.count());
is overflowing. The number of milliseconds since 1970 is on the order of 1.5 trillion. But unsigned int (on your platform) overflows at about 4 billion.
Also, depending on your platform, this line:
std::chrono::duration<system_clock::duration::rep,system_clock::duration::period> same_day(ns.count());
may introduce a conversion error. If you are using gcc, system_clock::duration is nanoseconds, and there will be no error.
However, if you're using llvm's libc++, system_clock::duration is microseconds and you will be silently multiplying your duration by 1000.
And if you are using Visual Studio, system_clock::duration is 100 nanoseconds and you will be silently multiplying your duration by 100.
Here is a video tutorial for <chrono> which may help, and contains warnings about the use of .count() and .time_since_epoch().
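If you want to see which tick period your own implementation uses, you can print system_clock::period directly (a quick check, not part of the question's code):
#include <chrono>
#include <iostream>

int main()
{
    using period = std::chrono::system_clock::period;
    // libstdc++: 1/1000000000, libc++: 1/1000000, Visual Studio: 1/10000000
    std::cout << period::num << "/" << period::den << " seconds per tick\n";
}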
The conversions you do manually do not look right.
You should use duration_cast for conversions because it is type-safe:
auto today_day = duration_cast<duration<unsigned, std::ratio<86400>>>(ms);
auto same_day = duration_cast<system_clock::duration>(ns);
Outputs:
today is: Thu Jul 26 01:00:00 2018
today is: Thu Jul 26 13:01:08 2018
Because you throw away the duration information and then reinterpret the raw integer value as a different duration type:
std::chrono::duration<unsigned int,std::ratio<1,1000>> today_day (ms.count());
milliseconds -> dimensionless -> 1 / 1000 seconds (i.e. milliseconds)
std::chrono::duration<system_clock::duration::rep,system_clock::duration::period> same_day(ns.count());
nanoseconds -> dimensionless -> system_clock ticks
You should instead just duration_cast again:
#include <ctime>
#include <ratio>
#include <chrono>
#include <iostream>
using namespace std::chrono;

int main ()
{
    milliseconds ms = duration_cast<milliseconds>(system_clock::now().time_since_epoch());
    nanoseconds ns = duration_cast<nanoseconds>(system_clock::now().time_since_epoch());

    system_clock::time_point abc(duration_cast<system_clock::duration>(ms));
    system_clock::time_point abc1(duration_cast<system_clock::duration>(ns));

    std::time_t tt;
    tt = system_clock::to_time_t ( abc );
    std::cout << "today is: " << ctime(&tt);
    tt = system_clock::to_time_t ( abc1 );
    std::cout << "today is: " << ctime(&tt);
    return 0;
}
Building with VS2013, specifying time_point::max() to a condition variable's wait_until results in an immediate timeout.
This seems unintuitive - I would naively expect time_point::max() to wait indefinitely (or at least a very long time). Can anyone confirm if this is documented, expected behaviour or something specific to MSVC?
Sample program below; note that replacing time_point::max() with now + std::chrono::hours(1) gives the expected behaviour (wait_until returns once the cv is notified, with no timeout):
#include <condition_variable>
#include <mutex>
#include <chrono>
#include <future>
#include <functional>
#include <cstdio>

void fire_cv( std::mutex *mx, std::condition_variable *cv )
{
    std::unique_lock<std::mutex> lock(*mx);
    printf("firing cv\n");
    cv->notify_one();
}

int main(int argc, char *argv[])
{
    std::chrono::steady_clock::time_point now = std::chrono::steady_clock::now();
    std::condition_variable test_cv;
    std::mutex test_mutex;
    std::future<void> s;

    {
        std::unique_lock<std::mutex> lock(test_mutex);
        s = std::async(std::launch::async, std::bind(fire_cv, &test_mutex, &test_cv));
        printf("blocking on cv\n");
        std::cv_status result = test_cv.wait_until( lock, std::chrono::steady_clock::time_point::max() );
        //std::cv_status result = test_cv.wait_until( lock, now + std::chrono::hours(1) ); // <--- this works as expected!
        printf("%s\n", (result==std::cv_status::timeout) ? "timeout" : "no timeout");
    }

    s.wait();
    return 0;
}
I debugged MSVC 2015's implementation, and wait_until calls wait_for internally, which is implemented like this:
template<class _Rep,
    class _Period>
    _Cv_status wait_for(
        unique_lock<mutex>& _Lck,
        const chrono::duration<_Rep, _Period>& _Rel_time)
    {   // wait for duration
        _STDEXT threads::xtime _Tgt = _To_xtime(_Rel_time);    // Bug!
        return (wait_until(_Lck, &_Tgt));
    }
The bug here is that _To_xtime overflows, which results in undefined behavior, and the result is a negative time_point:
template<class _Rep,
    class _Period> inline
    xtime _To_xtime(const chrono::duration<_Rep, _Period>& _Rel_time)
    {   // convert duration to xtime
    xtime _Xt;
    if (_Rel_time <= chrono::duration<_Rep, _Period>::zero())
        {   // negative or zero relative time, return zero
        _Xt.sec = 0;
        _Xt.nsec = 0;
        }
    else
        {   // positive relative time, convert
        chrono::nanoseconds _T0 =
            chrono::system_clock::now().time_since_epoch();
        _T0 += chrono::duration_cast<chrono::nanoseconds>(_Rel_time);  // Overflow!
        _Xt.sec = chrono::duration_cast<chrono::seconds>(_T0).count();
        _T0 -= chrono::seconds(_Xt.sec);
        _Xt.nsec = (long)_T0.count();
        }
    return (_Xt);
    }
std::chrono::nanoseconds stores its value in a long long, so after its definition _T0 holds a value like 1'471'618'263'082'939'000 (this obviously changes from run to run). Adding _Rel_time (9'223'244'955'544'505'510) definitely results in signed overflow.
The resulting deadline is a time point far in the past, so the wait times out immediately.
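Until the library is fixed, one workaround is to never hand the implementation a deadline that converts into a near-infinite relative duration. A sketch (the one-hour cap is arbitrary; this is not how the standard library itself handles it):
#include <chrono>
#include <condition_variable>
#include <mutex>

template <class Clock, class Duration>
std::cv_status wait_until_clamped(std::condition_variable& cv,
                                  std::unique_lock<std::mutex>& lock,
                                  const std::chrono::time_point<Clock, Duration>& deadline)
{
    for (;;) {
        auto now = Clock::now();
        if (now >= deadline)
            return std::cv_status::timeout;
        if (deadline - now <= std::chrono::hours(1))
            return cv.wait_until(lock, deadline);   // small enough to convert safely
        // Wait against a capped deadline so the internal conversion never overflows;
        // if that wait times out, loop and re-check the real deadline.
        if (cv.wait_until(lock, now + std::chrono::hours(1)) == std::cv_status::no_timeout)
            return std::cv_status::no_timeout;
    }
}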
I'm having some issues with HDF5 on Mac OS X (10.7). After some testing, I've confirmed that POSIX write seems to have issues with buffer sizes exceeding 2 GB. I've written a test program to demonstrate the issue:
#define _FILE_OFFSET_BITS 64
#include <iostream>
#include <cstdio>
#include <cstdint>
#include <unistd.h>
#include <fcntl.h>

void writePosix(const int64_t arraySize, const char* name) {
    int fd = open(name, O_WRONLY | O_CREAT, 0644);
    if (fd != -1) {
        double *array = new double[arraySize];
        double start = 0.0;
        for (int64_t i = 0; i < arraySize; ++i) {
            array[i] = start;
            start += 0.001;
        }
        ssize_t result = write(fd, array, (int64_t)(sizeof(double)) * arraySize);
        printf("results for array size %lld = %ld\n", arraySize, result);
        close(fd);
        delete [] array;
    } else {
        printf("file error");
    }
}

int main(int argc, char *argv[]) {
    writePosix(268435455, "/Users/tpav/testfolder/lessthan2gb");
    writePosix(268435456, "/Users/tpav/testfolder/equal2gb");
}
Output:
results for array size 268435455 = 2147483640
results for array size 268435456 = -1
As you can see, I've even tried defining the file offsets. Is there anything I can do about this, or should I start looking for a workaround for the way I write 2 GB+ chunks?
In the HDF5 virtual file drivers, we break I/O operations that are too large for the call into multiple smaller I/O calls. The Mac implementation of POSIX I/O takes a size_t argument so our code assumed that the max I/O size would be the max value that can fit in a variable of type ssize_t (the return type of read/write). Sadly, this is not the case.
Note that this only applies to single I/O operations. You can create files that go above the 2GB/4GB barrier, you just can't write >2GB in a single call.
This should be fixed in HDF5 1.8.10 patch 1, due out in late January 2013.
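If you need a workaround in your own code in the meantime, the same idea applies: never issue a single write() larger than the limit. A sketch (the 1 GB chunk size is an arbitrary safe choice):
#include <unistd.h>
#include <sys/types.h>
#include <cstddef>

// Write `count` bytes from `buf`, splitting the request into chunks small
// enough that no single write() call runs into the 2 GB limit.
ssize_t write_in_chunks(int fd, const void *buf, size_t count) {
    const size_t max_chunk = static_cast<size_t>(1) << 30;   // 1 GB per call
    const char *p = static_cast<const char *>(buf);
    size_t remaining = count;
    while (remaining > 0) {
        size_t chunk = remaining < max_chunk ? remaining : max_chunk;
        ssize_t written = write(fd, p, chunk);
        if (written < 0)
            return -1;                                        // propagate the error
        p += written;
        remaining -= static_cast<size_t>(written);
    }
    return static_cast<ssize_t>(count);
}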
I'm trying to extract the boot time by getting the current time in a SYSTEMTIME structure, converting it to a FILETIME, which I then convert to a ULARGE_INTEGER, from which I subtract GetTickCount64(), and then converting everything back to a SYSTEMTIME.
I'm comparing this function to 'NET STATISTICS WORKSTATION' and for some reason my output is off by several hours, which doesn't seem to match any timezone difference.
Here's Visual Studio example code:
#include "stdafx.h"
#include <windows.h>
#include <tchar.h>
#include <strsafe.h>
#define KILOBYTE 1024
#define BUFF KILOBYTE
int _tmain(int argc, _TCHAR* argv[])
{
ULARGE_INTEGER ticks, ftime;
SYSTEMTIME current, final;
FILETIME ft, fout;
OSVERSIONINFOEX osvi;
char output[BUFF];
int retval=0;
ZeroMemory(&osvi, sizeof(OSVERSIONINFOEX));
ZeroMemory(&final, sizeof(SYSTEMTIME));
GetVersionEx((OSVERSIONINFO *) &osvi);
if (osvi.dwBuildNumber >= 6000) ticks.QuadPart = GetTickCount64();
else ticks.QuadPart = GetTickCount();
//Convert miliseconds to 100-nanosecond time intervals
ticks.QuadPart = ticks.QuadPart * 10000;
//GetLocalTime(¤t); -- //doesn't really fix the problem
GetSystemTime(¤t);
SystemTimeToFileTime(¤t, &ft);
printf("INITIAL: Filetime lowdatetime %u, highdatetime %u\r\n", ft.dwLowDateTime, ft.dwHighDateTime);
ftime.LowPart=ft.dwLowDateTime;
ftime.HighPart=ft.dwHighDateTime;
//subtract boot time interval from current time
ftime.QuadPart = ftime.QuadPart - ticks.QuadPart;
//Convert ULARGE_INT back to FILETIME
fout.dwLowDateTime = ftime.LowPart;
fout.dwHighDateTime = ftime.HighPart;
printf("FINAL: Filetime lowdatetime %u, highdatetime %u\r\n", fout.dwLowDateTime, fout.dwHighDateTime);
//Convert FILETIME back to system time
retval = FileTimeToSystemTime(&fout, &final);
printf("Return value is %d\r\n", retval);
printf("Current time %d-%.2d-%.2d %.2d:%.2d:%.2d\r\n", current.wYear, current.wMonth, current.wDay, current.wHour, current.wMinute, current.wSecond);
printf("Return time %d-%.2d-%.2d %.2d:%.2d:%.2d\r\n", final.wYear, final.wMonth, final.wDay, final.wHour, final.wMinute, final.wSecond);
return 0;
}
I ran it and found that it works correctly when using GetLocalTime as opposed to GetSystemTime, which is expressed in UTC. So it would make sense that GetSystemTime would not necessarily match the "clock" on the PC.
Other than that, though, the issue could possibly be the call to GetVersionEx. As written, I think it will always return zeros for all values. You need this line prior to calling it:
osvi.dwOSVersionInfoSize = sizeof( osvi );
Otherwise that dwBuildNumber will be zero and it will call GetTickCount, which is only good for 49 days or so. On the other hand, if that were the case, I think you would get a result with a much larger difference.
I'm not completely sure that (as written) the check is even necessary to choose which tick-count function to call. If GetTickCount64 doesn't exist, the app would fail to load because of the missing entry point (unless perhaps delay loading was used; I'm not sure about that case). I believe you would need LoadLibrary and GetProcAddress to make the decision dynamically between those two functions and have it work on an older platform.
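A sketch of that dynamic lookup (GetTickCount64 is a real kernel32 export on Vista and later; error handling trimmed):
#include <windows.h>

ULONGLONG GetTickCountCompat(void)
{
    typedef ULONGLONG (WINAPI *GetTickCount64Fn)(void);
    static GetTickCount64Fn pGetTickCount64 =
        (GetTickCount64Fn)GetProcAddress(GetModuleHandleW(L"kernel32.dll"),
                                         "GetTickCount64");
    if (pGetTickCount64)
        return pGetTickCount64();       /* Vista and later: 64-bit, does not wrap in practice */
    return GetTickCount();              /* older systems: wraps after ~49.7 days */
}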
I am trying to get the system date in a C program with the MSVC++ 6.0 compiler. I am using a system call:
system("date /T") (the output is e.g. 13-Oct-08, which is the date on my system in the format I have set)
but this prints the date to the I/O console.
How do I take the date returned by the above system call and store it as a string value in my code?
Or
Is there any other API I can use to get the date in the above-mentioned format (13-Oct-08, or 13-10-08)?
-AD
#include <windows.h>
#include <iostream>

int main() {
    SYSTEMTIME systmDateTime = {};
    ::GetLocalTime(&systmDateTime);

    wchar_t wszDate[64] = {};
    int const result = ::GetDateFormatW(
        LOCALE_USER_DEFAULT, DATE_SHORTDATE,
        &systmDateTime, 0, wszDate, _countof(wszDate));
    if (result) {
        std::wcout << wszDate;
    }
}
There are a couple of ways to do this using API functions; two that jump to mind are strftime and GetDateFormat.
I'd like to provide examples but I'm afraid I don't have a Win32 compiler handy at the moment. Hopefully the examples in the above documentation are sufficient.
Have a read of Win32 Time functions; GetLocalTime may be your friend. There are also the standard C time functions, time and strftime.
For future reference, in a C program it is almost always the wrong answer to invoke an external utility and capture its stdout.
Thanks for the pointers.
I used this and it served my purpose:
#include <time.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/timeb.h>
#include <string.h>

int main()
{
    char tmpbuf[128];
    time_t ltime;
    struct tm *today;

    _strdate( tmpbuf );
    printf("\n before formatting date is %s", tmpbuf);

    time( &ltime );
    today = localtime( &ltime );
    strftime(tmpbuf, 128, "%d-%m-%y", today);
    printf( "\nafter formatting date is %s\n", tmpbuf );
    return 0;
}
-AD