I have code that gets the system time as a FILETIME using
uint64 currentUtcFileTime;
GetSystemTimeAsFileTime((FILETIME*)&currentUtcFileTime);
Now I want to convert this uint64 back to a FILETIME.
I searched for a long time and finally found this solution:
FILETIME ftime;
__int64 i64;
*(__int64 *)&ftime = i64;
It worked well for me :)
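For what it's worth, the cast works in practice, but the FILETIME documentation warns against treating a FILETIME as a 64-bit integer (its alignment isn't guaranteed on 64-bit Windows) and suggests copying through a ULARGE_INTEGER instead. A minimal sketch of both directions; the helper names are mine:
#include <windows.h>

// uint64 -> FILETIME without type-punning
FILETIME ToFileTime(unsigned __int64 value)
{
    ULARGE_INTEGER uli;
    uli.QuadPart = value;              // the 64-bit value
    FILETIME ft;
    ft.dwLowDateTime  = uli.LowPart;   // low 32 bits
    ft.dwHighDateTime = uli.HighPart;  // high 32 bits
    return ft;
}

// FILETIME -> uint64, same idea in reverse
unsigned __int64 ToUInt64(FILETIME ft)
{
    ULARGE_INTEGER uli;
    uli.LowPart  = ft.dwLowDateTime;
    uli.HighPart = ft.dwHighDateTime;
    return uli.QuadPart;
}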
I've searched a lot and found nothing useful regarding this thing.
If anyone knows please share the knowledge.
Problem: I have a DWORD "-476053499" and I want to convert it to hex (e.g. XX XX XX XX).
DWORD is just a typedef for a 32-bit unsigned integer (e.g. unsigned int or uint32_t). Notice that it's "unsigned", so I'm not sure how that plays into your value of -476053499.
You didn't specify a language, but converting a DWORD to a hexadecimal string is easily accomplished in C like this:
DWORD dwValue = (DWORD)(-476053499);
char szHexString[20];
sprintf(szHexString, "%X", dwValue); // use %x if you want lowercase hex
Equivalent in C++:
DWORD dwValue = (DWORD)(-476053499);
std::ostringstream ss; // requires <sstream>
ss << std::hex << dwValue;
std::string s = ss.str();
If your original value is actually a string, and not a DWORD, then converting it to a number is as simple as:
const char* s = "-476053499";
DWORD value = (DWORD)atoi(s);
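One caveat (my addition, not part of the original answer): atoi goes through a signed int, so a string spelling the same bit pattern as an unsigned value ("3818913797") would overflow it. strtoul accepts an optional leading '-' and wraps in unsigned arithmetic, so it handles both spellings:
#include <stdio.h>
#include <stdlib.h>
#include <windows.h>

int main(void)
{
    /* strtoul negates negative input in unsigned arithmetic,
       so both spellings produce the same DWORD */
    DWORD a = (DWORD)strtoul("-476053499", NULL, 10);
    DWORD b = (DWORD)strtoul("3818913797", NULL, 10);
    printf("%X %X\n", a, b);   /* prints E3A00005 E3A00005 */
    return 0;
}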
I have a strange error in Botan 1.10.9.
When I try to store the private key's byte vector and the public key's byte vector, I get a std::bad_alloc error. Could it be that it's not possible to initialize a std::vector from Botan's SecureVector?
Botan::LibraryInitializer init;
Botan::AutoSeeded_RNG rng;
rng.reseed(10096);
Botan::RSA_PrivateKey rsaPrivate(rng, 1024);
std::vector<unsigned char> privateArray(rsaPrivate.pkcs8_private_key().begin(), rsaPrivate.pkcs8_private_key().end());
std::vector<unsigned char> publicArray(rsaPrivate.x509_subject_public_key().begin(), rsaPrivate.x509_subject_public_key().end());
If I encode the keys first, the operation works fine:
Botan::SecureVector<Botan::byte> publicBytes = std::move(Botan::X509::BER_encode(rsaPrivate));
Botan::SecureVector<Botan::byte> privateBytes = std::move(Botan::PKCS8::BER_encode(rsaPrivate, rng, info.passphrase()));
std::vector<unsigned char> publicArray(publicBytes.begin(), publicBytes.end());
std::vector<unsigned char> privateArray(privateBytes.begin(), privateBytes.end());
Any ideas why this could be happening? The weird thing is that if I remove one of the vector initializations, it sometimes works, but most of the time I get the crash.
A little late as a reply, but in Botan the unlock function turns a Botan::secure_vector<T> into a std::vector<T>.
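(The bad_alloc itself most likely comes from calling pkcs8_private_key() twice on the same line: each call returns a fresh temporary, so begin() and end() point into two different buffers and the vector constructor computes a garbage length.) A minimal sketch of the unlock conversion, assuming the Botan 2.x API where unlock is declared in botan/secmem.h:
#include <botan/secmem.h>   // Botan::secure_vector, Botan::unlock
#include <cstdint>
#include <vector>

// Copies the bytes out of Botan's locking/zeroing allocator into a
// plain std::vector (which will NOT be wiped on destruction).
std::vector<uint8_t> to_plain(const Botan::secure_vector<uint8_t>& sv)
{
    return Botan::unlock(sv);
}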
I'm working on a port from some old Delphi code to VC++ 2013, and I'm encountering an error that I feel should be an easy fix but cannot for the life of me figure out...
The problem is this: I have a number of common utility functions in a local file Utils.h that I am deploying as part of a windows form. Most (90%) of the functions in this header work as normal. GetMsg(...), however, throws a C3861 Identifier not found error...
Utils.h (snippet): GetMsg declared at bottom
#pragma once
/*------------------------------------------------------------------------*
Includes:
*------------------------------------------------------------------------*/
using namespace std;
/*------------------------------------------------------------------------*
Constants:
*------------------------------------------------------------------------*/
#define GET_MSG_TIMEOUT 2
/*------------------------------------------------------------------------*
Typedefs, Structs, Enums:
*------------------------------------------------------------------------*/
typedef union
{
    unsigned long ui32;
    unsigned char ui8[4];
} UI32_UI8;

typedef union
{
    unsigned short ui16;
    unsigned char ui8[2];
} UI16_UI8;

typedef union
{
    float f;
    unsigned char ui8[4];
} F_UI8;

typedef struct
{
    string sName;
    string sVersion;
    string sCompany;
    string sCopyright;
} PRODUCT_INFORMATION;
/*------------------------------------------------------------------------*
Prototypes:
*------------------------------------------------------------------------*/
unsigned short SwapShort(unsigned short aShort);
float SwapFloat(float aFloat);
unsigned long SwapLong(unsigned long aLong);
unsigned int ReadLine(unsigned char *msgBuf, SerialPort^ Hdl, bool ReturnLF);
void __stdcall FillTheBuffer(char *buf, String sss, int length);
string __stdcall FillTheString(string sss, int length);
unsigned int __stdcall GetMsg(SerialPort^ Hdl, unsigned char *msgBuf);
GetMsg Definition in Utils.cpp:
//---------------------------------------------------------
unsigned int __stdcall GetMsg(SerialPort^ Hdl, unsigned char *msgBuf)
{
...
}
And, finally, GetMsg usage in form file:
#include "Utils.h"
...
void MainForm::UploadButton_Click(System::Object^ object, System::EventArgs^ e)
{
    ...
    SwapShort(1);         // Works fine, also declared in Utils.h
    GetMsg(spCom, inBuf); // C3861 ERROR
    ...
}
Where spCom is a (SerialPort^) contained, configured, and opened within the Windows form, and inBuf is a simple character buffer (char*) for the input. I've tried renaming the function, thinking there may have been an unintentional conflict/overload in other files, to no avail.
Any advice? Thanks in advance.
Solved the problem -- as it turns out, I needed to be more explicit in my function declarations. Changing the declaration to read
GetMsg(System::IO::Ports::SerialPort^ Hdl, unsigned char *msgBuf)
eliminated the C3861 error. It would seem that the lack of a specific namespace on the declaration passed Intellisense but confused the compiler, rendering it unable to determine which prototype to use with the function call.
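For completeness, the working prototype in Utils.h looks like this (fully qualified, so it no longer depends on a using directive being visible in every including file):
unsigned int __stdcall GetMsg(System::IO::Ports::SerialPort^ Hdl,
                              unsigned char *msgBuf);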
typedef struct _FILE_BOTH_DIR_INFORMATION {
    ULONG NextEntryOffset;
    ULONG FileIndex;
    LARGE_INTEGER CreationTime;
    LARGE_INTEGER LastAccessTime;
    LARGE_INTEGER LastWriteTime;
    LARGE_INTEGER ChangeTime;
    LARGE_INTEGER EndOfFile;
    LARGE_INTEGER AllocationSize;
    ULONG FileAttributes;
    ULONG FileNameLength;
    ULONG EaSize;
    CCHAR ShortNameLength;
    WCHAR ShortName[12];
    WCHAR FileName[1];
} FILE_BOTH_DIR_INFORMATION, *PFILE_BOTH_DIR_INFORMATION;
FileNameLength is declared as ULONG. I guessed this is a byte count, because all (or most) string lengths in the kernel are byte counts.
Yesterday I wrote wrong code because, seeing CCHAR ShortNameLength, I misunderstood it as a count of characters. Now I know ShortNameLength is a byte count.
So what does the C in CCHAR mean?
C means count in Hungarian Notation. A variable named cch would be a "count of chars" and you would expect it to contain a string length. So CCHAR is the type that can contain a count of chars.
It's a horrible abuse of Hungarian notation typical of the Windows team.
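(In winnt.h, CCHAR is simply typedef char CCHAR;.) Since ShortNameLength and FileNameLength are byte counts while ShortName and FileName are WCHAR arrays, any character count has to be derived by dividing by sizeof(WCHAR). A sketch, assuming the struct declared above has been filled in by the kernel:
#include <stdio.h>
#include <windows.h>

/* The length fields are in bytes; the buffers hold WCHARs. */
void PrintNames(const FILE_BOTH_DIR_INFORMATION *info)
{
    int cchFile  = (int)(info->FileNameLength / sizeof(WCHAR));
    int cchShort = info->ShortNameLength / (int)sizeof(WCHAR);
    wprintf(L"%.*s (8.3 name: %.*s)\n",
            cchFile, info->FileName,
            cchShort, info->ShortName);
}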
I'm trying to extract the boot time by getting the current time as a SYSTEMTIME structure, converting it to a FILETIME, converting that to a ULARGE_INTEGER from which I subtract GetTickCount64(), and then converting everything back to a SYSTEMTIME.
I'm comparing this function to 'NET STATISTICS WORKSTATION', and for some reason my output is off by several hours, which doesn't seem to match any time-zone difference.
Here's a Visual Studio example:
#include "stdafx.h"
#include <windows.h>
#include <tchar.h>
#include <strsafe.h>
#define KILOBYTE 1024
#define BUFF KILOBYTE
int _tmain(int argc, _TCHAR* argv[])
{
    ULARGE_INTEGER ticks, ftime;
    SYSTEMTIME current, final;
    FILETIME ft, fout;
    OSVERSIONINFOEX osvi;
    char output[BUFF];
    int retval = 0;
    ZeroMemory(&osvi, sizeof(OSVERSIONINFOEX));
    ZeroMemory(&final, sizeof(SYSTEMTIME));
    GetVersionEx((OSVERSIONINFO *) &osvi);
    if (osvi.dwBuildNumber >= 6000) ticks.QuadPart = GetTickCount64();
    else ticks.QuadPart = GetTickCount();
    //Convert milliseconds to 100-nanosecond time intervals
    ticks.QuadPart = ticks.QuadPart * 10000;
    //GetLocalTime(&current); -- //doesn't really fix the problem
    GetSystemTime(&current);
    SystemTimeToFileTime(&current, &ft);
    printf("INITIAL: Filetime lowdatetime %u, highdatetime %u\r\n", ft.dwLowDateTime, ft.dwHighDateTime);
    ftime.LowPart = ft.dwLowDateTime;
    ftime.HighPart = ft.dwHighDateTime;
    //subtract boot time interval from current time
    ftime.QuadPart = ftime.QuadPart - ticks.QuadPart;
    //Convert ULARGE_INT back to FILETIME
    fout.dwLowDateTime = ftime.LowPart;
    fout.dwHighDateTime = ftime.HighPart;
    printf("FINAL: Filetime lowdatetime %u, highdatetime %u\r\n", fout.dwLowDateTime, fout.dwHighDateTime);
    //Convert FILETIME back to system time
    retval = FileTimeToSystemTime(&fout, &final);
    printf("Return value is %d\r\n", retval);
    printf("Current time %d-%.2d-%.2d %.2d:%.2d:%.2d\r\n", current.wYear, current.wMonth, current.wDay, current.wHour, current.wMinute, current.wSecond);
    printf("Return time %d-%.2d-%.2d %.2d:%.2d:%.2d\r\n", final.wYear, final.wMonth, final.wDay, final.wHour, final.wMinute, final.wSecond);
    return 0;
}
I ran it and found that it works correctly when using GetLocalTime as opposed to GetSystemTime, which is expressed in UTC. So it would make sense that GetSystemTime would not necessarily match the "clock" on the PC.
Other than that, though, the issue could possibly be the call to GetVersionEx. As written, I think it will always return zeros for all values. You need this line prior to calling it:
osvi.dwOSVersionInfoSize = sizeof( osvi );
Otherwise that dwBuildNumber will be zero and it will call GetTickCount, which is only good for 49 days or so. On the other hand, if that were the case, I think you would get a result with a much larger difference.
I'm not completely sure that (as written) the check is necessary to choose which tick count function to call. If GetTickCount64 doesn't exist, the app would not load due to the missing entrypoint (unless perhaps delay loading was used ... I'm not sure in that case). I believe that it would be necessary to use LoadLibrary and GetProcAddress to make the decision dynamically between those two functions and have it work on an older platform.
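For reference, a sketch of that dynamic lookup (my illustration, not from the original answer): resolve GetTickCount64 at run time and fall back to GetTickCount when the export is missing, so the same binary loads on pre-Vista systems.
#include <windows.h>

typedef ULONGLONG (WINAPI *GetTickCount64Fn)(void);

// Milliseconds since boot: use GetTickCount64 when kernel32.dll
// exports it (Vista and later), else the 32-bit GetTickCount.
ULONGLONG TicksSinceBoot(void)
{
    GetTickCount64Fn pGetTickCount64 = (GetTickCount64Fn)
        GetProcAddress(GetModuleHandleW(L"kernel32.dll"), "GetTickCount64");

    if (pGetTickCount64 != NULL)
        return pGetTickCount64();
    return GetTickCount();   // wraps after ~49.7 days
}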