Retrieving boot time using GetTickCount64 - winapi

I'm trying to extract the boot time by getting the current time as a SYSTEMTIME structure, converting it to a FILETIME, converting that to a ULARGE_INTEGER, subtracting GetTickCount64(), and then converting everything back to a SYSTEMTIME.
I'm comparing the result to 'NET STATISTICS WORKSTATION', and for some reason my output is off by several hours in a way that doesn't seem to match any timezone difference.
Here's the Visual Studio example code:
#include "stdafx.h"
#include <windows.h>
#include <tchar.h>
#include <strsafe.h>
#define KILOBYTE 1024
#define BUFF KILOBYTE
int _tmain(int argc, _TCHAR* argv[])
{
    ULARGE_INTEGER ticks, ftime;
    SYSTEMTIME current, final;
    FILETIME ft, fout;
    OSVERSIONINFOEX osvi;
    char output[BUFF];
    int retval = 0;

    ZeroMemory(&osvi, sizeof(OSVERSIONINFOEX));
    ZeroMemory(&final, sizeof(SYSTEMTIME));
    GetVersionEx((OSVERSIONINFO *) &osvi);
    if (osvi.dwBuildNumber >= 6000) ticks.QuadPart = GetTickCount64();
    else ticks.QuadPart = GetTickCount();

    //Convert milliseconds to 100-nanosecond time intervals
    ticks.QuadPart = ticks.QuadPart * 10000;

    //GetLocalTime(&current); -- //doesn't really fix the problem
    GetSystemTime(&current);
    SystemTimeToFileTime(&current, &ft);
    printf("INITIAL: Filetime lowdatetime %u, highdatetime %u\r\n", ft.dwLowDateTime, ft.dwHighDateTime);

    ftime.LowPart = ft.dwLowDateTime;
    ftime.HighPart = ft.dwHighDateTime;

    //subtract boot time interval from current time
    ftime.QuadPart = ftime.QuadPart - ticks.QuadPart;

    //Convert ULARGE_INT back to FILETIME
    fout.dwLowDateTime = ftime.LowPart;
    fout.dwHighDateTime = ftime.HighPart;
    printf("FINAL: Filetime lowdatetime %u, highdatetime %u\r\n", fout.dwLowDateTime, fout.dwHighDateTime);

    //Convert FILETIME back to system time
    retval = FileTimeToSystemTime(&fout, &final);
    printf("Return value is %d\r\n", retval);
    printf("Current time %d-%.2d-%.2d %.2d:%.2d:%.2d\r\n", current.wYear, current.wMonth, current.wDay, current.wHour, current.wMinute, current.wSecond);
    printf("Return time %d-%.2d-%.2d %.2d:%.2d:%.2d\r\n", final.wYear, final.wMonth, final.wDay, final.wHour, final.wMinute, final.wSecond);
    return 0;
}

I ran it and found that it works correctly when using GetLocalTime rather than GetSystemTime, which is expressed in UTC. NET STATISTICS WORKSTATION reports local time, so a result computed from GetSystemTime would not necessarily match the "clock" on the PC.
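One way to keep the arithmetic in UTC and only convert for display is FileTimeToLocalFileTime; a minimal sketch of that idea (not part of the original answer, and it assumes GetTickCount64 is available, i.e. Vista or later):

#include <windows.h>
#include <stdio.h>

int main()
{
    FILETIME nowUtc, bootUtc, bootLocal;
    SYSTEMTIME st;
    ULARGE_INTEGER t;

    GetSystemTimeAsFileTime(&nowUtc);              // current time, UTC
    t.LowPart  = nowUtc.dwLowDateTime;
    t.HighPart = nowUtc.dwHighDateTime;
    t.QuadPart -= GetTickCount64() * 10000ULL;     // uptime: ms -> 100-ns intervals

    bootUtc.dwLowDateTime  = t.LowPart;
    bootUtc.dwHighDateTime = t.HighPart;
    FileTimeToLocalFileTime(&bootUtc, &bootLocal); // convert only for display
    FileTimeToSystemTime(&bootLocal, &st);

    printf("Boot time (local): %d-%.2d-%.2d %.2d:%.2d:%.2d\n",
           st.wYear, st.wMonth, st.wDay, st.wHour, st.wMinute, st.wSecond);
    return 0;
}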
Other than that, though, the issue could possibly be the call to GetVersionEx. As written, I think it will always return zeros for all values. You need this line prior to calling it:
osvi.dwOSVersionInfoSize = sizeof( osvi );
Otherwise that dwBuildNumber will be zero and it will call GetTickCount, which is only good for 49 days or so. On the other hand, if that were the case, I think you would get a result with a much larger difference.
I'm also not sure the version check, as written, can actually choose between the two tick-count functions. If GetTickCount64 doesn't exist on the system, the app won't load at all because of the missing entry point (unless delay loading is used ... I'm not sure about that case). To decide between the two functions at run time and still work on an older platform, you would need LoadLibrary (or GetModuleHandle) and GetProcAddress, as sketched below.
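A minimal sketch of that dynamic lookup (the helper name GetUptimeMilliseconds is made up for illustration):

// Resolve GetTickCount64 at run time so the binary still loads on
// older Windows versions that lack the export.
#include <windows.h>

typedef ULONGLONG (WINAPI *GetTickCount64_t)(void);

ULONGLONG GetUptimeMilliseconds(void)
{
    // kernel32 is always loaded, so GetModuleHandle is enough here
    // (LoadLibrary would also work).
    HMODULE kernel32 = GetModuleHandleW(L"kernel32.dll");
    GetTickCount64_t pGetTickCount64 =
        (GetTickCount64_t)GetProcAddress(kernel32, "GetTickCount64");

    if (pGetTickCount64 != NULL)
        return pGetTickCount64();   // Vista and later
    return GetTickCount();          // wraps after ~49.7 days
}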

Related

Storing a time_point outside of the application

I am using an std::chrono::system_clock::time_point in my program.
When the application stops I want to save to the time_point to a file and load it again when the application starts.
If it were a UNIX timestamp, I could simply store the value as an integer. Is there a way to store a time_point similarly?
Yes. Choose the precision you desire the timestamp in (seconds, milliseconds, ... nanoseconds). Then cast the system_clock::time_point to that precision, extract its numeric value, and print it:
cout << time_point_cast<seconds>(system_clock::now()).time_since_epoch().count();
Though not specified by the standard, the above line (de facto) portably outputs the number of non-leap seconds since 1970-01-01 00:00:00 UTC. That is, it is a UNIX timestamp.
I am attempting to get the above code blessed by the standard to do what it in fact does by all implementations today. And I have the unofficial assurance of the std::chrono implementors, that they will not change their system_clock epochs in the meantime.
Here's a complete roundtrip example:
#include <chrono>
#include <iostream>
#include <sstream>

int main()
{
    using namespace std;
    using namespace std::chrono;

    stringstream io;
    io << time_point_cast<seconds>(system_clock::now()).time_since_epoch().count();

    int64_t i;
    system_clock::time_point tp;
    io >> i;
    if (!io.fail())
        tp = system_clock::time_point{seconds{i}};
}
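If the goal is to persist across application runs, the same count can go through a file instead of a stringstream; a sketch under the assumption that a plain text file is acceptable (the file name is arbitrary):

#include <chrono>
#include <cstdint>
#include <fstream>

int main()
{
    using namespace std::chrono;

    // Save: write the seconds-since-epoch count to a file.
    {
        std::ofstream out("timestamp.txt");
        out << time_point_cast<seconds>(system_clock::now())
                   .time_since_epoch().count();
    }

    // Load: read the count back and rebuild the time_point.
    std::int64_t i = 0;
    std::ifstream in("timestamp.txt");
    if (in >> i)
    {
        auto tp = system_clock::time_point{seconds{i}};
        (void)tp; // use tp as needed
    }
}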

When (if ever) does QueryPerformanceCounter() get reset to zero?

I can't find any clear indication of if or when the 64-bit value returned by QueryPerformanceCounter() gets reset, or overflows and wraps back to zero. Hopefully it never overflows, because 64 bits gives space for decades' worth of counting at gigahertz rates. However... is there anything other than a computer restart that will reset it?
Empirically, QPC is reset at system startup.
Note that you should not depend on this behavior, since Microsoft does not explicitly state what the "zero point" is for QPC, merely that it is a monotonically increasing value (mod 2^64) that can be used for high-precision timing.
Hence they are quite within their rights to modify its behavior at any time. They could, for example, make it return values that match FILETIME values as would be produced by a call to GetSystemTimeAsFileTime(), with the same resolution (100 ns tick rate). Under those circumstances it would never reset, at least not in your or my lifetime.
That said, the following program when run on Windows 10 [Version 6.3.16299] produces pairs of identical values that are the system uptime in seconds.
#include <windows.h>
#include <iostream>

#pragma comment(lib, "winmm.lib") // timeGetTime() lives in winmm

int main()
{
    LARGE_INTEGER performanceCount;
    LARGE_INTEGER performanceFrequency;
    QueryPerformanceFrequency(&performanceFrequency);

    for (;;)
    {
        QueryPerformanceCounter(&performanceCount);
        DWORD const systemTicks = timeGetTime();
        DWORD const systemSeconds = systemTicks / 1000;
        __int64 const performanceSeconds = performanceCount.QuadPart / performanceFrequency.QuadPart;
        std::cout << systemSeconds << " " << performanceSeconds << std::endl;
        Sleep(1000);
    }
    return 0;
}
Standard disclaimers apply, your actual mileage may vary, etc. etc. etc.
It seems that some Windows running inside VirtualBox may reset QueryPerformanceCounter every 20 minutes or so: see here.
QPC has become more reliable over time, but if portability matters more than precision, a lower-resolution timer such as GetTickCount64 should be used instead.
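For reference, a minimal uptime readout with GetTickCount64 (Vista and later) looks like this:

#include <windows.h>
#include <stdio.h>

int main()
{
    // GetTickCount64 reports milliseconds since boot and, being 64-bit,
    // does not wrap for any practical length of time.
    ULONGLONG uptimeMs = GetTickCount64();
    printf("Uptime: %llu seconds\n", uptimeMs / 1000);
    return 0;
}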

Problems With 64bit Posix Write In Mac OS X? (2gb+ Dataset in HDF5)

I'm having some issues with HDF5 on Mac OS X (10.7). After some testing, I've confirmed that POSIX write seems to have issues with buffer sizes exceeding 2 GB. I've written a test program to demonstrate the issue:
#define _FILE_OFFSET_BITS 64
#include <cstdio>
#include <cstdint>
#include <unistd.h>
#include <fcntl.h>

void writePosix(const int64_t arraySize, const char* name) {
    int fd = open(name, O_WRONLY | O_CREAT, 0644);
    if (fd != -1) {
        double *array = new double[arraySize];
        double start = 0.0;
        for (int64_t i = 0; i < arraySize; ++i) {
            array[i] = start;
            start += 0.001;
        }
        ssize_t result = write(fd, array, (int64_t)(sizeof(double)) * arraySize);
        printf("results for array size %lld = %ld\n", arraySize, result);
        close(fd);
        delete[] array;
    } else {
        printf("file error");
    }
}

int main(int argc, char *argv[]) {
    writePosix(268435455, "/Users/tpav/testfolder/lessthan2gb");
    writePosix(268435456, "/Users/tpav/testfolder/equal2gb");
}
Output:
results for array size 268435455 = 2147483640
results for array size 268435456 = -1
As you can see, I've even tried defining the file offsets. Is there anything I can do about this or should I start looking for a workaround in the way I write 2gb+ chunks?
In the HDF5 virtual file drivers, we break I/O operations that are too large for the call into multiple smaller I/O calls. The Mac implementation of POSIX I/O takes a size_t argument so our code assumed that the max I/O size would be the max value that can fit in a variable of type ssize_t (the return type of read/write). Sadly, this is not the case.
Note that this only applies to single I/O operations. You can create files that go above the 2GB/4GB barrier, you just can't write >2GB in a single call.
This should be fixed in HDF5 1.8.10 patch 1, due out in late January 2013.
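Until the fixed release, one application-level workaround is to split large writes into sub-2 GB pieces yourself; a sketch of that idea (the helper name and the 1 GB chunk size are illustrative, not HDF5's actual code):

#include <unistd.h>
#include <cstdint>
#include <cstddef>

// Issue write() in chunks so no single call exceeds 2 GB.
// Returns the total number of bytes written, or -1 on error.
int64_t writeInChunks(int fd, const void *buffer, int64_t totalBytes)
{
    const int64_t maxChunk = 1LL << 30;   // 1 GB per call, arbitrary but < 2 GB
    const char *p = static_cast<const char *>(buffer);
    int64_t remaining = totalBytes;

    while (remaining > 0) {
        size_t chunk = static_cast<size_t>(remaining < maxChunk ? remaining : maxChunk);
        ssize_t written = write(fd, p, chunk);
        if (written < 0)
            return -1;                    // propagate the error
        p += written;                     // also handles short writes
        remaining -= written;
    }
    return totalBytes;
}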

Is there a Windows equivalent of nanosleep?

Unix has a variety of sleep APIs (sleep, usleep, nanosleep). The only Win32 function I know of for sleeping is Sleep(), which works in units of milliseconds.
I've seen that most sleeps, even on Unix, get rounded up significantly (i.e., typically to about 10 ms). I've also seen that on Solaris, if you run as root, you can get sub-10 ms sleeps, and I know this is possible on HP-UX as well, provided the fine-grained timers kernel parameter is enabled. Are finer-granularity timers available on Windows, and if so, what are the APIs?
The sad truth is that there is no good answer to this. Multimedia timers are probably the closest you can get -- they only let you set periods down to 1 ms, but (thanks to timeBeginPeriod) they do actually provide precision around 1 ms, where most of the others do only about 10-15 ms as a rule.
There are a lot of other candidates. At first glance, CreateWaitableTimer and SetWaitableTimer probably seem like the closest equivalent, since they're set in 100 ns intervals. Unfortunately, you can't really depend on resolution anywhere close to that, at least in my testing. In the long term they probably offer the best possibility, since they at least let you specify a time of less than 1 ms, even though you can't currently depend on the implementation to provide (anywhere close to) that resolution.
NtDelayExecution seems to be roughly the same as SetWaitableTimer, except that it's undocumented. Unless you're set on using/testing undocumented functions, CreateWaitableTimer/SetWaitableTimer seems like the better choice simply because it's documented.
If you're using thread pools, you could try using CreateThreadPoolTimer and SetThreadPoolTimer instead. I haven't tested them enough to have any certainty about the resolution they really provide, but I'm not particularly optimistic.
Timer queues (CreateTimerQueue, CreateTimerQueueTimer, etc.) are what MS recommends as the replacement for multimedia timers, but (at least in my testing) they don't really provide much better resolution than Sleep.
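For what it's worth, a sketch of the waitable-timer route combined with timeBeginPeriod (the 500 microsecond request is illustrative; expect roughly 1 ms of real resolution at best):

#include <windows.h>
#pragma comment(lib, "winmm.lib")   // timeBeginPeriod / timeEndPeriod

int main()
{
    timeBeginPeriod(1);                                // request 1 ms scheduler granularity

    HANDLE timer = CreateWaitableTimer(NULL, TRUE, NULL);
    LARGE_INTEGER due;
    due.QuadPart = -5000;                              // relative time, 100-ns units: 500 microseconds
    SetWaitableTimer(timer, &due, 0, NULL, NULL, FALSE);
    WaitForSingleObject(timer, INFINITE);              // actual wait is usually ~1 ms or more

    CloseHandle(timer);
    timeEndPeriod(1);
    return 0;
}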
If you merely want resolution in the nanoseconds range, there's NtDelayExecution in ntdll.dll:
NTSYSAPI NTSTATUS NTAPI NtDelayExecution(BOOLEAN Alertable, PLARGE_INTEGER DelayInterval);
It measures time in 100-nanosecond intervals.
HOWEVER, this probably isn't what you want:
It can delay for much longer than that—as long as a thread time slice (0.5 - 15ms) or two.
Here's code you can use to observe this:
#ifdef __cplusplus
extern "C" {
#endif
#ifdef _M_X64
typedef long long intptr_t;
#else
typedef int intptr_t;
#endif
int __cdecl printf(char const *, ...);
int __cdecl _unloaddll(intptr_t);
intptr_t __cdecl _loaddll(char *);
int (__cdecl * __cdecl _getdllprocaddr(intptr_t, char *, intptr_t))(void);
typedef union _LARGE_INTEGER *PLARGE_INTEGER;
typedef long NTSTATUS;
typedef NTSTATUS __stdcall NtDelayExecution_t(unsigned char Alertable, PLARGE_INTEGER Interval); NtDelayExecution_t *NtDelayExecution = 0;
typedef NTSTATUS __stdcall NtQueryPerformanceCounter_t(PLARGE_INTEGER PerformanceCounter, PLARGE_INTEGER PerformanceFrequency); NtQueryPerformanceCounter_t *NtQueryPerformanceCounter = 0;
#ifdef __cplusplus
}
#endif
int main(int argc, char *argv[]) {
    long long delay = 1 * -(1000 / 100) /* relative 100-ns intervals */, counts_per_sec = 0;
    long long counters[2];
    intptr_t ntdll = _loaddll("ntdll.dll");
    NtDelayExecution = (NtDelayExecution_t *)_getdllprocaddr(ntdll, "NtDelayExecution", -1);
    NtQueryPerformanceCounter = (NtQueryPerformanceCounter_t *)_getdllprocaddr(ntdll, "NtQueryPerformanceCounter", -1);
    for (int i = 0; i < 10; i++) {
        NtQueryPerformanceCounter((PLARGE_INTEGER)&counters[0], (PLARGE_INTEGER)&counts_per_sec);
        NtDelayExecution(0, (PLARGE_INTEGER)&delay);
        NtQueryPerformanceCounter((PLARGE_INTEGER)&counters[1], (PLARGE_INTEGER)&counts_per_sec);
        printf("Slept for %lld microseconds\n", (counters[1] - counters[0]) * 1000000 / counts_per_sec);
    }
    return 0;
}
My output:
Slept for 9455 microseconds
Slept for 15538 microseconds
Slept for 15401 microseconds
Slept for 15708 microseconds
Slept for 15510 microseconds
Slept for 15520 microseconds
Slept for 1248 microseconds
Slept for 996 microseconds
Slept for 984 microseconds
Slept for 1010 microseconds
The MinGW answer in long form:
MinGW and Cygwin provide a nanosleep() implementation under <pthread.h>. Source code:
In Cygwin and MSYS2: signal.cc and cygwait.cc (LGPLv3+; with linking exception)
This is based on NtCreateTimer and WaitForMultipleObjects.
In MinGW-W64: nanosleep.c and thread.c (Zope Public License)
This is based on WaitForSingleObject and Sleep.
In addition, gnulib (GPLv3+) has a higher-precision implementation in nanosleep.c. This performs a busy-loop over QueryPerformanceCounter for short (<1s) intervals and Sleep for longer intervals.
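A rough sketch of that hybrid idea (not gnulib's actual code): Sleep away most of the interval, then spin on QueryPerformanceCounter for the remainder.

#include <windows.h>

// Hybrid delay: coarse Sleep() for the bulk, busy-wait on QPC for the tail.
void nanosleep_approx(long long nanoseconds)
{
    LARGE_INTEGER freq, start, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&start);

    long long targetTicks = nanoseconds * freq.QuadPart / 1000000000LL;

    // Sleep for everything except the last ~2 ms, which we spin away.
    long long ms = nanoseconds / 1000000LL;
    if (ms > 2)
        Sleep((DWORD)(ms - 2));

    do {
        QueryPerformanceCounter(&now);
    } while (now.QuadPart - start.QuadPart < targetTicks);
}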
You can use the usual timeBeginPeriod trick that ethanpil linked to with all of the underlying NT timers.
Windows provides Multimedia timers that are higher resolution than Sleep(). The actual resolution supported by the OS can be obtained at runtime.
You may want to look into
timeBeginPeriod / timeEndPeriod
and/or
QueryPerformanceCounter
See here for more information: http://www.geisswerks.com/ryan/FAQS/timing.html
particularly the section towards the bottom: High-precision 'Sleeps'
Yes, there is: the MinGW compiler provides nanosleep() under <pthread.h>.

Stack around the variable 'xyz' was corrupted

I'm trying to get a simple piece of code I found on a website to work in VC++ 2010 on Windows Vista 64-bit:
#include "stdafx.h"
#include <windows.h>
int _tmain(int argc, _TCHAR* argv[])
{
    DWORD dResult;
    BOOL result;
    char oldWallPaper[MAX_PATH];

    result = SystemParametersInfo(SPI_GETDESKWALLPAPER, sizeof(oldWallPaper)-1, oldWallPaper, 0);
    fprintf(stderr, "Current desktop background is %s\n", oldWallPaper);
    return 0;
}
it does compile, but when I run it, I always get this error:
Run-Time Check Failure #2 - Stack around the variable 'oldWallPaper' was corrupted.
I'm not sure what is going wrong, but I noticed that the value of oldWallPaper looks something like "C\0:\0\0U\0s\0e\0r\0s[...]" -- I'm wondering where all the \0s come from.
A friend of mine compiled it on Windows XP 32-bit (also VC++ 2010) and is able to run it without problems.
Any clues/hints/opinions?
Thanks
The doc isn't very clear. The returned string is made of WCHARs, two bytes per character rather than one, so your char buffer is overrun. Use a wide-character buffer and pass its size in characters:
BOOL result;
WCHAR oldWallPaper[MAX_PATH + 1];
result = SystemParametersInfo(SPI_GETDESKWALLPAPER,
    MAX_PATH, oldWallPaper, 0);
See also:
http://msdn.microsoft.com/en-us/library/ms724947(VS.85).aspx
http://msdn.microsoft.com/en-us/library/ms235631(VS.80).aspx (string conversion)
Nearly every Windows API function that takes a string has two versions:
SystemParametersInfoA() // ANSI
SystemParametersInfoW() // Unicode
The version ending in W is the wide-character (i.e. Unicode) version of the function. All the \0's you are seeing are because every character you're getting back is UTF-16: two bytes per character, and the second byte happens to be 0. So you need to store the result in a wchar_t array and use wprintf instead of printf:
wchar_t oldWallPaper[MAX_PATH];
result = SystemParametersInfo(SPI_GETDESKWALLPAPER, MAX_PATH-1, oldWallPaper, 0);
wprintf( L"Current desktop background is %s\n", oldWallPaper );
So you can use the A version SystemParametersInfoA() if you are hell-bent on not using Unicode. For the record you should always try to use Unicode, however.
Usually SystemParametersInfo() is a macro that evaluates to the W version, if UNICODE is defined on your system.
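If you do take the ANSI route mentioned above, a minimal sketch that calls SystemParametersInfoA explicitly with a char buffer:

#include <windows.h>
#include <stdio.h>

int main()
{
    char oldWallPaper[MAX_PATH];

    // Explicit ANSI call: the buffer really is char, so no overrun.
    if (SystemParametersInfoA(SPI_GETDESKWALLPAPER, MAX_PATH, oldWallPaper, 0))
        printf("Current desktop background is %s\n", oldWallPaper);
    return 0;
}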
