Performing arithmetic on SYSTEMTIME - Windows

I have a time value represented as a SYSTEMTIME. I want to add/subtract 1 hour from it and get the newly obtained SYSTEMTIME. The conversion should take care of any date change caused by the addition/subtraction, as well as month or year changes.
Can someone help me with this? Is there a Windows API that does arithmetic on SYSTEMTIME?

If you're using C# (or VB.NET, or ASP.NET) you can use:
DateTime dt = DateTime.Now.AddHours(1);
You can use negative numbers to subtract:
DateTime dt = DateTime.Now.AddHours(-1);
EDITED:
I extracted an answer from this post.
They suggest converting SYSTEMTIME to FILETIME, which is a number of
ticks since an epoch. You can then add the required number of 'ticks'
(i.e. 100ns intervals) to indicate your time, and convert back to
SYSTEMTIME.
The ULARGE_INTEGER struct is a union with a QuadPart member, which is a 64-bit number that can be added to directly (on recent hardware).
SYSTEMTIME add( SYSTEMTIME s, double seconds )
{
    // Convert to FILETIME (100ns ticks since January 1, 1601)
    FILETIME f;
    SystemTimeToFileTime( &s, &f );

    // Reinterpret the FILETIME as a 64-bit integer
    ULARGE_INTEGER u;
    memcpy( &u, &f, sizeof( u ) );

    // Add the requested number of seconds, expressed as 100ns intervals
    const double c_dSecondsPer100nsInterval = 100. * 1.E-9;
    u.QuadPart += seconds / c_dSecondsPer100nsInterval;

    // Convert back to SYSTEMTIME
    memcpy( &f, &u, sizeof( f ) );
    FileTimeToSystemTime( &f, &s );
    return s;
}
If you want to add an hour, use SYSTEMTIME s2 = add(s1, 60*60).
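A quick, hedged usage sketch (variable names are illustrative; note that the double arithmetic above can round the result by a few microseconds for present-day dates):
// Illustrative usage of the add() helper above.
SYSTEMTIME now;
GetLocalTime(&now);

SYSTEMTIME oneHourLater   = add(now,  60.0 * 60.0);    // +1 hour
SYSTEMTIME oneHourEarlier = add(now, -60.0 * 60.0);    // -1 hour; negative seconds work too

// Day/month/year rollover is handled by the FILETIME round trip.
printf("%02d/%02d/%04d %02d:%02d\n",
       oneHourLater.wDay, oneHourLater.wMonth, oneHourLater.wYear,
       oneHourLater.wHour, oneHourLater.wMinute);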

To add signed seconds (forward or backward in time) in C++:
const double clfSecondsPer100ns = 100. * 1.E-9;

void iAddSecondsToSystemTime(SYSTEMTIME* timeIn, SYSTEMTIME* timeOut, double tfSeconds)
{
    union
    {
        ULARGE_INTEGER li;
        FILETIME ft;
    };

    // Convert timeIn to FILETIME
    SystemTimeToFileTime(timeIn, &ft);

    // Add in the seconds (expressed as 100ns intervals)
    li.QuadPart += tfSeconds / clfSecondsPer100ns;

    // Convert back to SYSTEMTIME
    FileTimeToSystemTime(&ft, timeOut);
}
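A hedged usage sketch (illustrative only), stepping one hour back from the current local time:
// Illustrative call: subtract one hour from the current local time.
SYSTEMTIME localNow, anHourAgo;
GetLocalTime(&localNow);
iAddSecondsToSystemTime(&localNow, &anHourAgo, -60.0 * 60.0);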

#include <stdio.h>
#include <windows.h>

#define NSEC (60*60)

int main(void)
{
    SYSTEMTIME st;
    FILETIME ft;

    // Get local time from system
    GetLocalTime(&st);
    printf("%02d/%02d/%04d %02d:%02d:%02d\n",
           st.wDay, st.wMonth, st.wYear, st.wHour, st.wMinute, st.wSecond);

    // Convert to FILETIME
    SystemTimeToFileTime(&st, &ft);

    // Add NSEC seconds (FILETIME counts 100ns ticks, so 10,000,000 per second)
    ((ULARGE_INTEGER *)&ft)->QuadPart += (NSEC * 10000000LLU);

    // Convert back to SYSTEMTIME
    FileTimeToSystemTime(&ft, &st);
    printf("%02d/%02d/%04d %02d:%02d:%02d\n",
           st.wDay, st.wMonth, st.wYear, st.wHour, st.wMinute, st.wSecond);

    return 0;
}

Related

How do I port code calling QueryPerformanceFrequency to Rust?

I need to port this C code into Rust:
QueryPerformanceFrequency((unsigned long long int *) &frequency);
I didn't find a function that does that.
The Linux variant looks like:
struct timespec now;
if (clock_gettime(CLOCK_MONOTONIC, &now) == 0)
    frequency = 1000000000;
Should I call std::time::Instant::now() and set the frequency to 1000000000?
This is the complete function:
// Initializes hi-resolution MONOTONIC timer
static void InitTimer(void)
{
    srand(time(NULL));      // Initialize random seed

#if defined(_WIN32)
    QueryPerformanceFrequency((unsigned long long int *) &frequency);
#endif

#if defined(__linux__)
    struct timespec now;
    if (clock_gettime(CLOCK_MONOTONIC, &now) == 0)
        frequency = 1000000000;
#endif

#if defined(__APPLE__)
    mach_timebase_info_data_t timebase;
    mach_timebase_info(&timebase);
    frequency = (timebase.denom*1e9)/timebase.numer;
#endif

    baseTime = GetTimeCount();      // Get MONOTONIC clock time offset
    startTime = GetCurrentTime();   // Get current time
}
The direct solution to accessing Windows APIs is to use the winapi crate. In this case, call QueryPerformanceFrequency:
use std::mem;
use winapi::um::profileapi::QueryPerformanceFrequency;

fn freq() -> u64 {
    unsafe {
        let mut freq = mem::zeroed();
        QueryPerformanceFrequency(&mut freq);
        *freq.QuadPart() as u64
    }
}

fn main() {
    println!("Hello, world!");
}
And in Cargo.toml:
[dependencies]
winapi = { version = "0.3.8", features = ["profileapi"] }
Regarding the "hi-resolution MONOTONIC timer" comment: I would use Instant as a monotonic timer and assume it's high-enough precision until proven otherwise.

Building a Days Counter with Arduino Leonardo and an RTC 3231

I've been trying to create a days counter that starts counting from a Unix timestamp. I'm using an Arduino Leonardo, an RTC DS3231 and a 7-segment serial display (by Microbot). Here's the display link: Serial Display Link
But I cannot get the display to print any output. I think I might have messed up the connections.
I have connected the VCC and GND of the RTC to the 3.3 V and GND of the Arduino, and its SDA and SCL to the SDA and SCL of the Arduino.
For the display I connected VCC to 5 V, GND to GND and RX to digital output 5 (is this correct? I know I have messed something up here).
Here is the code:
#include <Time.h>
#include <TimeLib.h>
#include <SPI.h>
#include <Wire.h>
#include "Adafruit_LEDBackpack.h"
#include "Adafruit_GFX.h"
#include "RTClib.h"

RTC_DS1307 rtc = RTC_DS1307();
Adafruit_7segment clockDisplay = Adafruit_7segment();

int hours = 0;
int minutes = 0;
int seconds = 0;

unsigned long previousMillis = 0;               // will store last millis event time
unsigned long sensorpreviousMillis = 0;         // will store last millis event time
unsigned long fiveMinuteInterval = 300000;      // interval at which to use event time (milliseconds)
unsigned long postDaysInterval = 7200000;       // seconds in a day 86400000

#define DISPLAY_ADDRESS 0x70

unsigned long theDateWeGotTogether = 1441843200;    // in unixtime
unsigned long days;
int weeks;

void setup() {
    Serial.begin(115200);
    clockDisplay.begin(DISPLAY_ADDRESS);
    rtc.begin();

    bool setClockTime = !rtc.isrunning();
    if (setClockTime) {
        rtc.adjust(DateTime(F(__DATE__), F(__TIME__)));
    }
}

void loop() {
    // DateTime now = rtc.now();
    days = ((now() - theDateWeGotTogether) / 86400);    // 86400 is the number of seconds in a day
    unsigned long currentMillis = millis();
    ShowDaysReading();
    time_t t = now();

    if (currentMillis - sensorpreviousMillis > fiveMinuteInterval)
    {
        // save the last time you performed event
        sensorpreviousMillis = currentMillis;
        DateTime Zeit = rtc.now();
    }
}

void ShowDaysReading()
{
    days = ((now() - theDateWeGotTogether) / 86400);           // 86400 number of seconds in a day
    weeks = ((now() - theDateWeGotTogether) / (86400 * 7));    // 86400 number of seconds in a day
    clockDisplay.print(days);
}
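For what it's worth, the display in the link is a serial (UART) device, while the sketch above drives an Adafruit I2C backpack, so the library and the wiring don't match. A minimal, hedged sketch of how a UART 7-segment display wired to digital pin 5 is often driven (the pin numbers, baud rate and the assumption that the display accepts plain ASCII digits are guesses; check the display's documentation):
#include <SoftwareSerial.h>

// Assumption: the display's RX is wired to Arduino digital pin 5 (used as TX here).
// Pin 8 is only a placeholder RX pin (a valid SoftwareSerial RX pin on the Leonardo);
// nothing is ever read back from the display.
SoftwareSerial displaySerial(8, 5);

void setup() {
    displaySerial.begin(9600);   // assumed baud rate; check the datasheet
}

void loop() {
    unsigned long daysSince = 123;     // placeholder value, for illustration only
    displaySerial.print(daysSince);    // send the digits to the display over serial
    delay(1000);
}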

How to convert std::chrono::high_resolution_clock::now() to Windows File Time (and reverse conversion)

Is there a way to easily convert std::chrono::high_resolution_clock::now() time to Windows File Time (https://msdn.microsoft.com/en-us/library/windows/desktop/ms724284(v=vs.85).aspx) (and back)? I have no idea how to deal with this...
Thanks a lot!
If you have (or may have) a time_point with 100-nanosecond or better precision, something like this works (wrapped here in an illustrative helper; it assumes the time_point counts from the Unix epoch, as system_clock does):
template <typename TimePoint>
FILETIME ToFileTime(const TimePoint& tp)
{
    FILETIME fileTime = {0};
    // FILETIME has a resolution of 100 nanoseconds
    typedef std::chrono::duration<int64_t, std::ratio_multiply<std::hecto, std::nano>> hundrednanoseconds;
    // 100ns intervals since the Unix epoch, plus the 1601-to-1970 epoch offset of FILETIME
    long long timePointTmp =
        std::chrono::duration_cast<hundrednanoseconds>(tp.time_since_epoch()).count()
        + 116444736000000000;
    fileTime.dwLowDateTime = (unsigned long)timePointTmp;
    fileTime.dwHighDateTime = timePointTmp >> 32;
    return fileTime;
}
Alternatively, you can convert the time_point to a time_t, then use the following function to convert that to a FILETIME:
std::time_t t = std::chrono::high_resolution_clock::to_time_t(std::chrono::high_resolution_clock::now());
Then convert to FILETIME (https://msdn.microsoft.com/en-us/library/windows/desktop/ms724228%28v=vs.85%29.aspx):
#include <windows.h>
#include <time.h>

void TimetToFileTime( time_t t, LPFILETIME pft )
{
    // Seconds -> 100ns ticks, shifted by the 1601-to-1970 epoch difference
    LONGLONG ll = Int32x32To64(t, 10000000) + 116444736000000000;
    pft->dwLowDateTime = (DWORD) ll;
    pft->dwHighDateTime = ll >> 32;
}
Also, remember that on Windows (at least in older MSVC versions) high_resolution_clock is implemented as the not-so-high-res system_clock; the time_t approach above relies on that.
This seems to be the right solution:
FILETIME fileTime = { 0 };
long long timePointTmp =
    std::chrono::duration_cast<std::chrono::microseconds>(
        std::chrono::high_resolution_clock::now().time_since_epoch()).count() * 10
    + 116444736000000000;
fileTime.dwLowDateTime = (unsigned long)timePointTmp;
fileTime.dwHighDateTime = timePointTmp >> 32;
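For the reverse direction ("and back"), a hedged sketch along the same lines (assumes the target clock counts from the Unix epoch, as system_clock does; 116444736000000000 is the 1601-to-1970 offset in 100ns ticks):
#include <windows.h>
#include <chrono>

// Illustrative helper: FILETIME -> std::chrono::system_clock::time_point
std::chrono::system_clock::time_point FileTimeToTimePoint(const FILETIME& ft)
{
    // Reassemble the 64-bit count of 100ns ticks since 1601-01-01
    ULARGE_INTEGER ticks;
    ticks.LowPart  = ft.dwLowDateTime;
    ticks.HighPart = ft.dwHighDateTime;

    // Shift to the Unix epoch and express the result in 100ns units
    long long sinceUnixEpoch = (long long)ticks.QuadPart - 116444736000000000LL;
    std::chrono::duration<long long, std::ratio<1, 10000000>> d(sinceUnixEpoch);

    return std::chrono::system_clock::time_point(
        std::chrono::duration_cast<std::chrono::system_clock::duration>(d));
}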

Porting a C++ project from VS 6.0 to VS 2010 results in slower code

I ported a project from Visual C++ 6.0 to VS 2010 and found that a critical part of the code (a scripting engine) now runs about three times slower than it did before.
After some research I managed to extract a code fragment which seems to cause the slowdown. I minimized it as much as possible, so it will be easier to reproduce the problem.
The problem is reproduced when assigning a complex class (Variant), which contains another class (String) and a union of several other fields of simple types.
Playing with the example I discovered more "magic":
1. If I comment out one of the unused (!) class members, the speed increases, and the code ends up running faster than the one compiled with VS 6.2.
2. The same is true if I remove the "union" wrapper.
3. The same is true even if I change the value of the field from 1 to 0.
I have no idea what the hell is going on.
I have checked all code generation and optimization switches, but without any success.
The code sample is below.
On my Intel 2.53 GHz CPU this test, compiled under VS 6.2, runs in 1.0 second.
Compiled under VS 2010: 40 seconds.
Compiled under VS 2010 with the "magic" lines commented out: 0.3 seconds.
The problem reproduces with any optimization switch, but "Whole program optimization" (/GL) should be disabled; otherwise this too-smart optimizer will figure out that our test actually does nothing, and the test will run in 0 seconds.
#include <windows.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

class String
{
public:
    char *ptr;
    int size;
    String() : ptr(NULL), size(0) {};
    ~String() { if (ptr != NULL) free(ptr); };
    String& operator=(const String& str2);
};

String& String::operator=(const String& string2)
{
    if (string2.ptr != NULL)
    {
        // This part is never called in our test:
        ptr = (char *)realloc(ptr, string2.size + 1);
        size = string2.size;
        memcpy(ptr, string2.ptr, size + 1);
    }
    else if (ptr != NULL)
    {
        // This part is never called in our test:
        free(ptr);
        ptr = NULL;
        size = 0;
    }
    return *this;
}

struct Date
{
    unsigned short year;
    unsigned char month;
    unsigned char day;
    unsigned char hour;
    unsigned char minute;
    unsigned char second;
    unsigned char dayOfWeek;
};

class Variant
{
public:
    int dataType;
    String valStr;   // If we comment this member out, the speed is OK!

    // If we drop the 'union' wrapper, the speed is OK!
    union
    {
        __int64 valInteger;
        // If we comment out any of these fields, unused in our test, the speed is OK!
        double valReal;
        bool valBool;
        Date valDate;
        void *valObject;
    };

    Variant() : dataType(0) {};
};

void TestSpeed()
{
    __int64 index;
    Variant tempVal, tempVal2;

    tempVal.dataType = 3;
    tempVal.valInteger = 1;   // If we comment this line out, the speed is OK!

    for (index = 0; index < 200000000; index++)
    {
        tempVal2 = tempVal;
    }
}

int main(int argc, char* argv[])
{
    int ticks;
    char str[64];

    ticks = GetTickCount();
    TestSpeed();
    sprintf(str, "%.*f", 1, (double)(GetTickCount() - ticks) / 1000);
    MessageBox(NULL, str, "", 0);
    return 0;
}
This was rather interesting. At first I was unable to reproduce the slowdown in a release build, only in a debug build. Then I turned off SSE2 optimizations and got the same ~40 s run time.
The problem seems to be in the compiler-generated copy assignment for Variant. Without SSE2 it actually does a floating-point copy with fld/fstp instructions, because the union contains a double, and with certain bit patterns this is apparently a really expensive operation. The 64-bit integer value 1 maps to 4.940656458412e-324#DEN, which is a denormalized number, and I believe this is what causes the problem. When you leave tempVal.valInteger uninitialized it may contain a value that works faster.
I did a small test to confirm this:
union {
    uint64_t i;
    volatile double d1;
};

i = 0xcccccccccccccccc;   // with this value the test takes 0.07 seconds
//i = 1;                  // change to 1 and now the test takes 36 seconds

volatile double d2;
for (int n = 0; n < 200000000; ++n)
    d2 = d1;
So what you could do is define your own copy assignment for Variant that just does a simple memcpy of the union.
Variant& operator=(const Variant& rhs)
{
    dataType = rhs.dataType;

    // Local mirror of the anonymous union, used only to get its size for memcpy
    union UnionType
    {
        __int64 valInteger;
        double valReal;
        bool valBool;
        Date valDate;
        void *valObject;
    };

    memcpy(&valInteger, &rhs.valInteger, sizeof(UnionType));
    valStr = rhs.valStr;
    return *this;
}
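If copy construction is also on a hot path, the same trick can be applied there. A hedged sketch, not part of the original answer:
// Hypothetical companion to the assignment operator above: a copy constructor
// that also copies the union with memcpy instead of a member-wise double copy.
Variant(const Variant& rhs) : dataType(rhs.dataType)
{
    union UnionType   // same members as the anonymous union in Variant
    {
        __int64 valInteger;
        double valReal;
        bool valBool;
        Date valDate;
        void *valObject;
    };
    memcpy(&valInteger, &rhs.valInteger, sizeof(UnionType));
    valStr = rhs.valStr;   // String's operator= does the deep copy
}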

How to transfer UINT64 to 2 DWORDS?

Is there an efficient way to do this?
That's something you could use a union for:
union {
    UINT64 ui64;
    struct {
        DWORD d0;
        DWORD d1;
    } y;
} un;

un.ui64 = 27;
// Use un.y.d0 and un.y.d1
An example (under Linux, so using different types):
#include <stdio.h>

union {
    long ui64;
    struct {
        int d0;
        int d1;
    } y;
} un;

int main (void) {
    un.ui64 = 27;
    printf ("%d %d\n", un.y.d0, un.y.d1);
    return 0;
}
This produces:
27 0
Thought I would provide an example using LARGE_INTEGER for the Windows platform.
If I have a 64-bit variable called "value", then I do:
LARGE_INTEGER li;
li.QuadPart = value;
DWORD low = li.LowPart;
DWORD high = li.HighPart;
Yes, this copies it, but I like the readability of it.
Keep in mind that 64-bit integers have alignment restrictions at least as strict as those of 32-bit integers on all platforms, so it is safe in practice to cast a pointer to a 64-bit integer to a pointer to 32-bit integers:
ULONGLONG largeInt = 27;
printf( "%lu %lu\n", ((DWORD *)&largeInt)[ 0 ], ((DWORD *)&largeInt)[ 1 ] );
Obviously, Pax's solution is a lot cleaner, but this is technically more efficient since it doesn't require any data copying.
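For reference, the same split can also be written with plain shifts and masks, which avoids unions and pointer casts entirely (a minimal sketch, not from the answers above):
#include <windows.h>
#include <stdio.h>

int main(void)
{
    UINT64 value = 0x1122334455667788ULL;

    // Low 32 bits via a mask, high 32 bits via a shift
    DWORD low  = (DWORD)(value & 0xFFFFFFFFULL);
    DWORD high = (DWORD)(value >> 32);

    printf("%08lX %08lX\n", high, low);
    return 0;
}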
