I'm trying to decode a date encoded as a REG_BINARY in the Windows registry. Specifically this date:
SignaturesLastUpdated REG_BINARY 720CB9EBE8CBCE01
which should be a date in 2013.
Any idea how to manually decode it? (i.e. without using any built-in C# or C++ library)
It's a FILETIME structure which is defined as:
Contains a 64-bit value representing the number of 100-nanosecond
intervals since January 1, 1601 (UTC).
The struct looks like this:
typedef struct _FILETIME {
  DWORD dwLowDateTime;
  DWORD dwHighDateTime;
} FILETIME, *PFILETIME, *LPFILETIME;
Members
dwLowDateTime
The low-order part of the file time.
dwHighDateTime
The high-order part of the file time.
And here is how it can be converted to Unix time (in Go):
type Filetime struct {
    LowDateTime  uint32
    HighDateTime uint32
}

// Nanoseconds returns Filetime ft in nanoseconds
// since Epoch (00:00:00 UTC, January 1, 1970).
func (ft *Filetime) Nanoseconds() int64 {
    // 100-nanosecond intervals since January 1, 1601
    nsec := int64(ft.HighDateTime)<<32 + int64(ft.LowDateTime)
    // change starting time to the Epoch (00:00:00 UTC, January 1, 1970)
    nsec -= 116444736000000000
    // convert into nanoseconds
    nsec *= 100
    return nsec
}
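To decode the value from the question by hand: REG_BINARY dumps are little-endian, so the bytes 72 0C B9 EB E8 CB CE 01 assemble (low byte first) into the 64-bit value 0x01CECBE8EBB90C72. A minimal C++ sketch of the same arithmetic, using no date libraries:

#include <cstdint>
#include <cstdio>

int main() {
    // REG_BINARY bytes as dumped by the registry (little-endian)
    const uint8_t raw[8] = { 0x72, 0x0C, 0xB9, 0xEB, 0xE8, 0xCB, 0xCE, 0x01 };

    // Assemble the 64-bit FILETIME tick count, least significant byte first
    uint64_t ticks = 0;
    for (int i = 7; i >= 0; --i) {
        ticks = (ticks << 8) | raw[i];
    }
    // ticks == 0x01CECBE8EBB90C72: 100ns intervals since 1601-01-01 UTC

    // Rebase to the Unix epoch; 11644473600 seconds separate 1601 and 1970
    int64_t unix_seconds = (int64_t)(ticks / 10000000) - 11644473600LL;
    printf("%lld\n", (long long)unix_seconds);
}

This prints 1382090448, i.e. 2013-10-18 10:00:48 UTC, consistent with the expected 2013 date.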
So, basically a person has a shift, with a start time and a duration, and I need to know how many minutes are left to go.
I retrieve from the database (MySQL) a standard datetime string for the start of the shift (e.g. 2022-01-01 09:00:00)
I also retrieve the number of hours of the shift (e.g. 8)
I can then determine the current time using Go's time.Time, but this format is different.
Please see code below, which explains the problem better, and gives the pieces missing.
Thanks!
var shiftStartDB string  // For example 2022-01-01 09:00:00
var shiftStartUnix int   // For example, start time converted to minutes since epoch
var offTimer int         // Number of hours for the shift
var shiftEndDB string    // For example 2022-01-01 17:00:00
var shiftEndUnix int     // For example, end time converted to minutes since epoch
var nowTime time.Time    // Go's version of time for now
var nowUnix int          // Go now converted to Unix time
var templateID string
var minsToGo int

_ = db.QueryRow("SELECT LastSignedOn, OffTimer, TemplateID FROM assets WHERE ID = ?", assetid).Scan(&shiftStartDB, &offTimer, &templateID)
shiftStartUnix = <convert database time into unix time>
if offTimer == 0 {
    _ = db.QueryRow("SELECT OffTimer FROM templates WHERE ID = ?", templateID).Scan(&offTimer)
}
shiftEndUnix = shiftStartUnix + (offTimer * 60)
nowTime = time.Now()
nowUnix = <convert golang time into unix time>
minsToGo = shiftEndUnix - nowUnix
You can use nowUnix = int(nowTime.Unix() / 60), as Time.Unix returns the number of seconds elapsed since the epoch. That said, this might be easier to deal with if you parsed the times and used the Go time package functions directly:
shiftStartTime, err := time.Parse("2006-01-02 15:04:05", shiftStartDB)
if err != nil {
    // handle the parse error
}
shiftEnd := shiftStartTime.Add(time.Duration(offTimer) * time.Hour)
minsToGo := int(shiftEnd.Sub(time.Now()).Minutes())
How do I convert a millisecond timestamp (uint64) into time format RFC3339 with milliseconds (string) in Go?
For example:
var milliSecond int64
milliSecond = 1645286399999 // My local time: Sat Feb 19 2022 23:59:59
var loc = time.FixedZone("UTC-4", -4*3600)
string1 := time.UnixMilli(milliSecond).In(loc).Format(time.RFC3339)
Actual Result: 2022-02-19T11:59:59-04:00
Expected Result (should be): 2022-02-19T11:59:59.999-04:00
You are asking for an RFC3339-formatted string, with seconds reported to the nearest millisecond. There's no format string in the time package for this (only whole-second and nanosecond variants), but you can make your own.
Here's the string for seconds to the nearest nanosecond, copied from the standard library:
RFC3339Nano = "2006-01-02T15:04:05.999999999Z07:00"
You can make a millisecond version of this easily enough by changing the .999999999 (report time to the nearest nanosecond, removing trailing zeros) to .000 (report time to the nearest millisecond, keeping trailing zeros). This format is documented under time.Layout in the package docs: https://pkg.go.dev/time#pkg-constants
RFC3339Milli = "2006-01-02T15:04:05.000Z07:00"
Code:
package main

import (
    "fmt"
    "time"
)

const RFC3339Milli = "2006-01-02T15:04:05.000Z07:00"

func main() {
    ms := int64(1645286399999) // My local time: Sat Feb 19 2022 23:59:59
    var loc = time.FixedZone("UTC-4", -4*3600)
    fmt.Println(time.UnixMilli(ms).In(loc).Format(RFC3339Milli))
}
Output:
2022-02-19T11:59:59.999-04:00
I'm using the chrono crate; after some digging I discovered the DateTime type has a function timestamp() which could generate epoch time of type i64. However, I couldn't find out how to convert it back to DateTime.
extern crate chrono;
use chrono::*;
fn main() {
    let date = chrono::Utc.ymd(2020, 1, 1).and_hms(0, 0, 0);
    println!("{}", date.timestamp());
    // ...how to convert it back?
}
You first need to create a NaiveDateTime and then use it to create a DateTime again:
extern crate chrono;
use chrono::prelude::*;
fn main() {
    let datetime = Utc.ymd(2020, 1, 1).and_hms(0, 0, 0);
    let timestamp = datetime.timestamp();
    let naive_datetime = NaiveDateTime::from_timestamp(timestamp, 0);
    let datetime_again: DateTime<Utc> = DateTime::from_utc(naive_datetime, Utc);
    println!("{}", datetime_again);
}
Not sure if I'm missing something, or chrono expanded its feature set in the meantime, but it's 2021 and at least since chrono 0.4.0 there appears to be a cleaner way to do it:
https://docs.rs/chrono/0.4.19/chrono/#conversion-from-and-to-epoch-timestamps
use chrono::{DateTime, TimeZone, Utc};
// Construct a datetime from epoch:
let dt = Utc.timestamp(1_500_000_000, 0);
assert_eq!(dt.to_rfc2822(), "Fri, 14 Jul 2017 02:40:00 +0000");
// Get epoch value from a datetime:
let dt = DateTime::parse_from_rfc2822("Fri, 14 Jul 2017 02:40:00 +0000").unwrap();
assert_eq!(dt.timestamp(), 1_500_000_000);
So your full conversion should look like this:
extern crate chrono;
use chrono::*;

fn main() {
    let start_date = chrono::Utc.ymd(2020, 1, 1).and_hms(0, 0, 0);
    let ts = start_date.timestamp();
    println!("{}", &ts);
    let end_date = Utc.timestamp(ts, 0);
    assert_eq!(end_date, start_date);
}
You can use the parse_duration crate: https://docs.rs/parse_duration/2.1.0/parse_duration/
extern crate parse_duration;

use parse_duration::parse;
use std::time::Duration;

fn main() {
    // 1587971749 seconds since UNIX_EPOCH
    assert_eq!(parse("1587971749"), Ok(Duration::new(1587971749, 0)));

    // One hour less than a day
    assert_eq!(parse("1 day -1 hour"), Ok(Duration::new(82_800, 0)));

    // Using exponents
    assert_eq!(
        parse("1.26e-1 days"),
        Ok(Duration::new(10_886, 400_000_000))
    );

    // Extra things will be ignored
    assert_eq!(
        parse("Duration: 1 hour, 15 minutes and 29 seconds"),
        Ok(Duration::new(4529, 0))
    );
}
The goal of the following code is to call the Win32 function FileTimeToSystemTime:
pub fn convert_times(s: SystemTime) -> Option<SYSTEMTIME> {
    let mut st = SYSTEMTIME::default();
    let x: u64 = unsafe { transmute(s) };
    let low = (x & 0x00000000FFFFFFFF) as u32;
    let high = ((x & 0xFFFFFFFF00000000) >> 32) as u32;
    let fs = FILETIME {
        dwLowDateTime: low,
        dwHighDateTime: high,
    };
    if unsafe { FileTimeToSystemTime(transmute(&fs), transmute(&mut st)) } > 0 {
        Some(st)
    } else {
        None
    }
}
When I take a known file time, 131147233180069965, which was generated at 2016-08-03T14:41 US-EST according to my computer's clock, the structure I get back decodes to 2016-08-03T18:41:58.006.
That is +4 hours, while US-EST is five hours behind GMT?
Is it because Daylight Saving Time is -1 hour?
FileTimeToSystemTime() returns the time in UTC. In the United States, locations normally on EST (UTC-5) switch to EDT (UTC-4) during Daylight Saving Time, which accounts for the four-hour difference.
To get the time in the local timezone and take DST into account, one would instead need to call SystemTimeToTzSpecificLocalTime().
Generally this is inadvisable, as working in UTC is preferable for computers: two intercommunicating computers are not necessarily in the same timezone.
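That said, if you do need the local wall-clock time, here is a minimal sketch of that call; passing NULL as the time zone makes Windows use the caller's current time zone, DST rules included:

SYSTEMTIME stUtc, stLocal;
GetSystemTime(&stUtc); // any UTC SYSTEMTIME works, e.g. one from FileTimeToSystemTime
SystemTimeToTzSpecificLocalTime(NULL, &stUtc, &stLocal);
// stLocal now holds the wall-clock time for the machine's configured zone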
How can I get the Windows system time with millisecond resolution?
If the above is not possible, then how can I get the operating system start time? I would like to use this value together with timeGetTime() in order to compute a system time with millisecond resolution.
Try this article from MSDN Magazine. It's actually quite complicated.
Implement a Continuously Updating, High-Resolution Time Provider for Windows
(archive link)
This is an elaboration of the above comments to explain some of the whys.
First, the GetSystemTime* calls are the only Win32 APIs providing the system's time. This time has a fairly coarse granularity, as most applications do not need the overhead required to maintain a higher resolution. Time is (likely) stored internally as a 64-bit count of milliseconds. Calling timeGetTime gets the low-order 32 bits. Calling GetSystemTime, etc. requests Windows to return this millisecond time converted into days, hours, and so on, with the system start time included.
There are two time sources in a machine: the CPU's clock and an on-board clock (e.g., real-time clock (RTC), Programmable Interval Timers (PIT), and High Precision Event Timer (HPET)). The first has a resolution of around ~0.5ns (2GHz) and the second is generally programmable down to a period of 1ms (though newer chips (HPET) have higher resolution). Windows uses these periodic ticks to perform certain operations, including updating the system time.
Applications can change this period via timeBeginPeriod; however, this affects the entire system. The OS will check / update regular events at the requested frequency. Under low CPU loads / frequencies, there are idle periods for power savings. At high frequencies, there isn't time to put the processor into low power states. See Timer Resolution for further details. Finally, each tick has some overhead and increasing the frequency consumes more CPU cycles.
As for higher-resolution time: the system time is not maintained to that accuracy, any more than Big Ben has a second hand. Using QueryPerformanceCounter (QPC) or the CPU's tick counter (rdtsc) can provide the resolution between the system time ticks; such an approach was used in the MSDN Magazine article Kevin cited. These approaches may drift, though (e.g., due to frequency scaling), and therefore need to be synced to the system time.
In Windows, the base of all time is a function called GetSystemTimeAsFileTime.
It returns a structure that is capable of holding a time with 100ns resolution.
It is kept in UTC.
The FILETIME structure records the number of 100ns intervals since January 1, 1601, meaning its resolution is limited to 100ns.
This forms our first function:
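A minimal sketch of that call:

FILETIME ftNow;
GetSystemTimeAsFileTime(&ftNow); // UTC, 100ns units since 1601-01-01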
A 64-bit number of 100ns ticks since January 1, 1601 is somewhat unwieldy. Windows provides a handy helper function, FileTimeToSystemTime, that can decode this 64-bit integer into useful parts:
typedef struct _SYSTEMTIME {
  WORD wYear;
  WORD wMonth;
  WORD wDayOfWeek;
  WORD wDay;
  WORD wHour;
  WORD wMinute;
  WORD wSecond;
  WORD wMilliseconds;
} SYSTEMTIME;
Notice that SYSTEMTIME has a built-in resolution limitation of 1ms.
Now we have a way to go from FILETIME to SYSTEMTIME: we could write a function that gets the current system time as a SYSTEMTIME structure:
SYSTEMTIME GetSystemTime()
{
    // Get the current system time (UTC) in its native 100ns FILETIME format
    FILETIME ftNow;
    GetSystemTimeAsFileTime(&ftNow);

    // Decode the 100ns intervals into a 1ms-resolution SYSTEMTIME
    SYSTEMTIME stNow;
    FileTimeToSystemTime(&ftNow, &stNow);

    return stNow;
}
Except Windows already wrote such a function for you: GetSystemTime
Local, rather than UTC
Now what if you don't want the current time in UTC, but in your local time? Windows provides a function to convert a FILETIME that is in UTC into your local time: FileTimeToLocalFileTime.
You could write a function that returns a FILETIME already in local time:
FILETIME GetLocalTimeAsFileTime()
{
    FILETIME ftNow;
    GetSystemTimeAsFileTime(&ftNow);

    // Convert from UTC to local time
    FILETIME ftNowLocal;
    FileTimeToLocalFileTime(&ftNow, &ftNowLocal);

    return ftNowLocal;
}
And let's say you want to decode the local FILETIME into a SYSTEMTIME. That's no problem; you can use FileTimeToSystemTime again:
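Reusing the GetLocalTimeAsFileTime helper from above, a sketch (the wrapper name here is just illustrative):

SYSTEMTIME GetLocalTimeAsSystemTime()
{
    // FileTimeToSystemTime only decodes the 64-bit value;
    // it neither knows nor cares which timezone it represents
    FILETIME ftNowLocal = GetLocalTimeAsFileTime();

    SYSTEMTIME stNowLocal;
    FileTimeToSystemTime(&ftNowLocal, &stNowLocal);
    return stNowLocal;
}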
Fortunately, Windows already provides a function that returns you this value directly: GetLocalTime.
Precise
There is another consideration. Before Windows 8, the clock had a resolution of around 15ms. In Windows 8 they improved the clock to 100ns (matching the resolution of FILETIME).
GetSystemTimeAsFileTime (legacy, 15ms resolution)
GetSystemTimePreciseAsFileTime (Windows 8, 100ns resolution)
This means we should always prefer the new value:
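A sketch, with an illustrative wrapper name:

FILETIME GetSystemTimePrecise()
{
    FILETIME ftNow;
    // Windows 8+; on older systems fall back to GetSystemTimeAsFileTime
    GetSystemTimePreciseAsFileTime(&ftNow);
    return ftNow;
}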
You asked for the time
You asked for the time; but you have some choices.
The timezone:
UTC (system native)
Local timezone
The format:
FILETIME (system native, 100ns resolution)
SYSTEMTIME (decoded, 1ms resolution)
Summary
100ns resolution: FILETIME
UTC: GetSystemTimePreciseAsFileTime (or GetSystemTimeAsFileTime)
Local: (roll your own)
1ms resolution: SYSTEMTIME
UTC: GetSystemTime
Local: GetLocalTime
GetTickCount will not get it done for you.
Look into QueryPerformanceFrequency / QueryPerformanceCounter. The only gotcha here is CPU scaling though, so do your research.
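A minimal elapsed-time sketch with that pair; the counter is monotonic, and dividing by the frequency converts ticks to seconds:

LARGE_INTEGER freq, t0, t1;
QueryPerformanceFrequency(&freq); // ticks per second, fixed at boot
QueryPerformanceCounter(&t0);
// ... code under test ...
QueryPerformanceCounter(&t1);
double elapsed_ms = (double)(t1.QuadPart - t0.QuadPart) * 1000.0 / (double)freq.QuadPart;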
Starting with Windows 8, Microsoft introduced the new API GetSystemTimePreciseAsFileTime.
Unfortunately you can't use it if you create software which must also run on older operating systems.
My current solution is as follows, but be aware: the determined time is not exact, it is only near to the real time. The result should always be smaller than or equal to the real time, but with a fixed error (unless the computer went into standby). The result has a millisecond resolution. For my purpose it is exact enough.
void GetHighResolutionSystemTime(SYSTEMTIME* pst)
{
    static LARGE_INTEGER uFrequency = { 0 };
    static LARGE_INTEGER uInitialCount;
    static LARGE_INTEGER uInitialTime;
    static bool bNoHighResolution = false;

    if(!bNoHighResolution && uFrequency.QuadPart == 0)
    {
        // Initialize performance counter to system time mapping
        bNoHighResolution = !QueryPerformanceFrequency(&uFrequency);
        if(!bNoHighResolution)
        {
            FILETIME ftOld, ftInitial;
            GetSystemTimeAsFileTime(&ftOld);
            do
            {
                // Spin until the system time ticks over, so the counter is
                // sampled as close as possible to a tick boundary
                GetSystemTimeAsFileTime(&ftInitial);
                QueryPerformanceCounter(&uInitialCount);
            } while(ftOld.dwHighDateTime == ftInitial.dwHighDateTime && ftOld.dwLowDateTime == ftInitial.dwLowDateTime);
            uInitialTime.LowPart  = ftInitial.dwLowDateTime;
            uInitialTime.HighPart = ftInitial.dwHighDateTime;
        }
    }

    if(bNoHighResolution)
    {
        GetSystemTime(pst);
    }
    else
    {
        LARGE_INTEGER uNow, uSystemTime;
        {
            FILETIME ftTemp;
            GetSystemTimeAsFileTime(&ftTemp);
            uSystemTime.LowPart  = ftTemp.dwLowDateTime;
            uSystemTime.HighPart = ftTemp.dwHighDateTime;
        }
        QueryPerformanceCounter(&uNow);

        // Extrapolate: initial system time plus elapsed counter ticks, scaled to 100ns units
        LARGE_INTEGER uCurrentTime;
        uCurrentTime.QuadPart = uInitialTime.QuadPart + (uNow.QuadPart - uInitialCount.QuadPart) * 10000000 / uFrequency.QuadPart;

        // llabs, not abs: the 64-bit difference does not fit in an int
        if(uCurrentTime.QuadPart < uSystemTime.QuadPart || llabs(uSystemTime.QuadPart - uCurrentTime.QuadPart) > 1000000)
        {
            // The performance counter has been frozen (e.g. after standby on laptops)
            // -> Use current system time and determine the high performance time the next time we need it
            uFrequency.QuadPart = 0;
            uCurrentTime = uSystemTime;
        }

        FILETIME ftCurrent;
        ftCurrent.dwLowDateTime  = uCurrentTime.LowPart;
        ftCurrent.dwHighDateTime = uCurrentTime.HighPart;
        FileTimeToSystemTime(&ftCurrent, pst);
    }
}
GetSystemTimeAsFileTime gives the best precision of any Win32 function for absolute time. QPF/QPC as Joel Clark suggested will give better relative time.
Since we all come here for quick snippets instead of boring explanations, I'll write one:
FILETIME t;
GetSystemTimeAsFileTime(&t); // unusable as is

ULARGE_INTEGER i;
i.LowPart  = t.dwLowDateTime;
i.HighPart = t.dwHighDateTime;
int64_t ticks_since_1601 = i.QuadPart; // now usable

// Use integer division here: the tick count exceeds 2^53, so routing it
// through a double (multiplying by 1e-1, 1e-4, ...) silently loses precision
int64_t us_since_1601  = i.QuadPart / 10;
int64_t ms_since_1601  = i.QuadPart / 10000;
int64_t sec_since_1601 = i.QuadPart / 10000000;

// unix epoch
int64_t unix_us  = i.QuadPart / 10 - 11644473600LL * 1000000;
int64_t unix_ms  = i.QuadPart / 10000 - 11644473600LL * 1000;
double  unix_sec = (i.QuadPart / 10000000.0) - 11644473600LL;

// i.QuadPart is # of 100ns ticks since 1601-01-01T00:00:00Z
// difference to Unix Epoch is 11644473600 seconds (attention to units!)
No idea how the drifting performance-counter-based answers got upvoted; don't write clock-slippage bugs, guys.
QueryPerformanceCounter() is built for fine-grained timer resolution.
It is the highest resolution timer that the system has to offer that you can use in your application code to identify performance bottlenecks
Here is a simple implementation for C# devs:
[DllImport("kernel32.dll")]
extern static short QueryPerformanceCounter(ref long x);
[DllImport("kernel32.dll")]
extern static short QueryPerformanceFrequency(ref long x);
private long m_endTime;
private long m_startTime;
private long m_frequency;
public Form1()
{
InitializeComponent();
}
public void Begin()
{
QueryPerformanceCounter(ref m_startTime);
}
public void End()
{
QueryPerformanceCounter(ref m_endTime);
}
private void button1_Click(object sender, EventArgs e)
{
QueryPerformanceFrequency(ref m_frequency);
Begin();
for (long i = 0; i < 1000; i++) ;
End();
MessageBox.Show((m_endTime - m_startTime).ToString());
}
If you are a C/C++ dev, then take a look here: How to use the QueryPerformanceCounter function to time code in Visual C++
Well, this one is very old, yet there is another useful function in the Windows C runtime, _ftime, which returns a structure with the time as a time_t, milliseconds, the timezone offset, and a daylight saving time flag.
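A quick sketch using the 64-bit secure variant _ftime64_s from <sys/timeb.h> (MSVC-specific; the plain _ftime is deprecated in favor of the secure variants):

#include <stdio.h>
#include <sys/timeb.h>

int main(void)
{
    struct __timeb64 tb;
    _ftime64_s(&tb);
    // tb.time     - seconds since the Unix epoch
    // tb.millitm  - milliseconds
    // tb.timezone - minutes west of UTC
    // tb.dstflag  - nonzero while DST is in effect
    printf("%lld.%03hu\n", tb.time, tb.millitm);
}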
In C11 and above (or C++17 and above) you can use timespec_get() to get the time with higher precision, portably:
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);
    char buff[100];
    strftime(buff, sizeof buff, "%D %T", gmtime(&ts.tv_sec));
    printf("Current time: %s.%09ld UTC\n", buff, ts.tv_nsec);
}
If you're using C++, then since C++11 you can use std::chrono::high_resolution_clock, std::chrono::system_clock (wall clock), or std::chrono::steady_clock (monotonic clock) from the <chrono> header. There is no need to use Windows-specific APIs anymore:
#include <chrono>
#include <iostream>

auto start1 = std::chrono::high_resolution_clock::now();
auto start2 = std::chrono::system_clock::now();
auto start3 = std::chrono::steady_clock::now();

// do some work

auto end1 = std::chrono::high_resolution_clock::now();
auto end2 = std::chrono::system_clock::now();
auto end3 = std::chrono::steady_clock::now();

// use a floating-point representation so no duration_cast is needed
std::chrono::duration<double, std::milli> diff1 = end1 - start1;
std::chrono::duration<double, std::milli> diff2 = end2 - start2;
auto diff3 = std::chrono::duration_cast<std::chrono::milliseconds>(end3 - start3);
std::cout << diff1.count() << ' ' << diff2.count() << ' ' << diff3.count() << '\n';