Trying to make sense of Windows timestamps.... - windows

I'm in a situation that involves the manual reconstruction of raw data, including MFT records and other Windows artifacts. I understand that timestamps in MFT records are 64-bit integers, stored little-endian, representing the number of 100-nanosecond intervals since 01/01/1601 00:00:00 UTC. I am also familiar with Windows FILETIME timestamps (which I have seen in email headers), which consist of two 32-bit values that combine into a single 64-bit value, again counting 100-nanosecond intervals since 01/01/1601 00:00:00 UTC.
But there are other Windows timestamps with different epochs, such as SQL Server timestamps, which I believe use a base date in the 1800s. I cannot find much documentation on all of this. What timestamps are used in Windows other than the two listed above, and how do you break them down?
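To make the "two 32-bit values" layout concrete, here is a minimal sketch (the function names are just for illustration) that combines the low and high halves into one 64-bit tick count and converts it to Unix seconds; 11644473600 is the number of seconds between the 1601 and 1970 epochs:

#include <stdint.h>

// Combine the two 32-bit halves (low word first) into a 64-bit tick count.
// One tick is 100 nanoseconds since 1601-01-01 00:00:00 UTC.
uint64_t combineFiletime(uint32_t lowDateTime, uint32_t highDateTime) {
    return ((uint64_t)highDateTime << 32) | lowDateTime;
}

// Convert the tick count to Unix seconds (seconds since 1970-01-01 UTC).
int64_t filetimeToUnixSeconds(uint64_t ticks) {
    const uint64_t TICKS_PER_SECOND = 10000000ULL;      // 100 ns ticks per second
    const uint64_t EPOCH_DIFF_SECONDS = 11644473600ULL; // 1601-01-01 to 1970-01-01
    return (int64_t)(ticks / TICKS_PER_SECOND) - (int64_t)EPOCH_DIFF_SECONDS;
}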

Here is some code for decoding Windows FILETIME timestamps; the constants are defined here so the snippet is self-contained:

#include <stdint.h>
#include <time.h>
#include <string>

using std::string;

// Number of 100-nanosecond ticks per second.
static const uint64_t ONE_HUNDRED_NANO_SEC_TO_SECONDS = 10000000ULL;

// Seconds between the Win32 epoch (1601-01-01) and the Unix epoch (1970-01-01).
static const uint64_t SECONDS_BETWEEN_WIN32_EPOCH_AND_UNIX_EPOCH = 11644473600ULL;

/**
 * Convert a Windows FILETIME tick count (100 ns intervals since
 * 1601-01-01 00:00:00 UTC) to an ISO 8601 string, via Unix time_t.
 */
static string microsoftDateToISODate(const uint64_t &time) {
    time_t tmp = (time / ONE_HUNDRED_NANO_SEC_TO_SECONDS)
                 - SECONDS_BETWEEN_WIN32_EPOCH_AND_UNIX_EPOCH;
    struct tm time_tm;
    gmtime_r(&tmp, &time_tm);
    char buf[256];
    strftime(buf, sizeof(buf), "%Y-%m-%dT%H:%M:%S", &time_tm);
    return string(buf);
}
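As a quick sanity check of the conversion, 116444736000000000 ticks is exactly 11644473600 seconds after 1601-01-01, i.e. the Unix epoch, so calling the function with that value (in the same file, since it is declared static) should print 1970-01-01T00:00:00:

#include <stdio.h>

int main() {
    // 11644473600 s * 10,000,000 ticks/s = 116444736000000000 ticks
    printf("%s\n", microsoftDateToISODate(116444736000000000ULL).c_str());
    return 0;
}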
For Mac timestamps, a useful reference is:
Apple Mac and Unix timestamps
http://developer.apple.com/library/mac/#qa/qa1398/_index.html

Related

arm-gcc mktime binary size

I need to perform simple arithmetic on struct tm from time.h: add or subtract seconds or minutes, then normalize the structure. Normally, I'd use mktime(3), which performs this normalization as a side effect:
struct tm t = {.tm_hour=0, .tm_min=59, .tm_sec=40};
t.tm_sec += 30;
mktime(&t);
// t.tm_hour is now 1
// t.tm_min is now 0
// t.tm_sec is now 10
I'm doing this on an STM32 with 32 kB of flash, and the binary gets very big: mktime(3) and the other code it pulls in take up 16 kB of flash, half the available space.
Is there a function in newlib that is specifically responsible for struct tm normalization? I realize that linking to a private function like that would make the code less portable.
There is a validate_structure() function in newlib/libc/time/mktime.c which does part of the job: it normalizes month, day-of-month, hour, minute and second, but leaves day-of-week and day-of-year alone.
It's declared static, so you can't simply call it, but you can copy the function from the sources (there might be licensing issues, though), or you can just reimplement it; it's quite straightforward, as sketched below.
tm_wday and tm_yday are calculated later in mktime(), so you'd need the whole mess, including the timezone handling, to get those two normalized as well.
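For illustration, a minimal sketch of such a reimplementation (the helper name normalize_hms is made up here); it carries overflow from seconds up through minutes and hours into the day of month, but deliberately leaves month lengths, tm_wday and tm_yday alone:

#include <time.h>

// Carry overflow (or borrow underflow) from tm_sec up into tm_min, tm_hour
// and tm_mday. Month lengths, tm_wday and tm_yday are not touched.
static void normalize_hms(struct tm *t) {
    // Seconds -> minutes
    t->tm_min += t->tm_sec / 60;
    t->tm_sec %= 60;
    if (t->tm_sec < 0) { t->tm_sec += 60; t->tm_min -= 1; }

    // Minutes -> hours
    t->tm_hour += t->tm_min / 60;
    t->tm_min %= 60;
    if (t->tm_min < 0) { t->tm_min += 60; t->tm_hour -= 1; }

    // Hours -> days (carried into tm_mday only; month rollover would
    // need the month-length table that mktime brings in)
    t->tm_mday += t->tm_hour / 24;
    t->tm_hour %= 24;
    if (t->tm_hour < 0) { t->tm_hour += 24; t->tm_mday -= 1; }
}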
The bulk of that 16 kB of code is related to a call to siscanf(), a variant of sscanf() without floating point support, which is (I believe) used to parse timezone and DST information from environment variables.
You can cut a lot of unnecessary code by passing --specs=nano.specs when linking, which switches to the simplified printf/scanf implementation and saves about 10 kB of code in your case.

scapy: Timestamps lacking float precision on sniffed packets

While capturing packets on Windows 7 x64, timestamps don't seem to have float precision. The code is given below:
>>> from scapy.all import sniff
>>> pkts = sniff(count=10)
>>> pkts[0].time
1506009934
>>> # for higher-precision output
>>> print('%.6f' % pkts[0].time)
1506009934.000000
Any ideas how to get precise values for the timestamps?
This has been fixed in Scapy's development version. Get it from the official repository and your code should give the results you expect.

How should I find packet size in OMNeT++?

I need to find the packet size sent by each node in OMNeT++. Do I need to set it myself, or is there a way of finding the packet size when it changes dynamically?
Kindly tell me the procedure for finding the packet size.
I think what you're asking is where you can find the "inherent" size of a packet, for example one that has been defined in a .msg file, based on "what's in it".
If I'm right: You can't. And shouldn't really want to. Since everything inside an OMNeT++ simulation is... simulation, no matter what the actual contents of a cPacket are, the bitLength property can be set to any value, with no regard to the amount of information stored in your custom messages.
So the only size any packet will have is the size set either by you manually, or by the model library you are using, with the setBitLength() method.
This is useful in scenarios where a protocol header has fields of odd lengths, like 3 bits, then 9 bits, a 1-bit flag, and so on. It is best to represent these fields as separate members in the message class, and since C++ doesn't have* such flexibly sized data types, the representation in the simulation and the header it represents will have different sizes.
It also lets you cheat and transmit extra information with a packet that wouldn't really be part of it, in the actual bit sequence, on a real network.
So you should just set the appropriate length with setBitLength, and don't care about what is actually stored. Usually. Until your computer runs out of memory.
I might be completely wrong about what you're trying to get to.
*Yes, there are bit fields, but ... it's easier not having to deal with them.
If you are talking about cPackets in OMNeT++, then simply use the corresponding getter methods for the length of a packet. That covers the cases where the packets have a real size, set either by you or somewhere in your code.
From cpacket.h in the OMNeT++ 5.1 release:
/**
 * Returns the packet length (in bits).
 */
virtual int64_t getBitLength() const {return bitLength;}

/**
 * Returns the packet length in bytes, that is, bitlength/8. If bitlength
 * is not a multiple of 8, the result is rounded up.
 */
int64_t getByteLength() const {return (getBitLength()+7)>>3;}
So simply read the value, maybe store it in a temporary variable, and use it for whatever you need.
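For illustration, here is a minimal sketch of a hypothetical simple module (not taken from either answer) that sets a packet's length with setBitLength() and reads it back with the getters quoted above; the 16-bit header size is made up for the example:

#include <omnetpp.h>

using namespace omnetpp;

// Hypothetical module that stamps a fixed header length on each packet
// it handles and logs the resulting size.
class SizeLogger : public cSimpleModule
{
  protected:
    virtual void handleMessage(cMessage *msg) override
    {
        cPacket *pkt = check_and_cast<cPacket *>(msg);

        // Suppose the modeled header has a 3-bit, a 9-bit and a 1-bit field,
        // padded to 16 bits on the wire; the stored members don't matter here.
        pkt->setBitLength(16);

        // Read the length back; getByteLength() rounds up to whole bytes.
        EV << pkt->getName() << " is " << pkt->getBitLength()
           << " bits (" << pkt->getByteLength() << " bytes)\n";

        delete pkt;
    }
};

Define_Module(SizeLogger);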

How to get volume data from an input device in Core-Audio?

I am trying to get the volume of the audio heard by an input device using Core-Audio.
So far I have used AudioDeviceAddIOProc and AudioDeviceStart with my AudioDeviceIOProc function to get the input data in the form of an AudioBufferList containing AudioBuffers.
How do I get the volume of the data from the AudioBuffer? Or am I going about this completely the wrong way?
The APIs you mentioned are deprecated as of Mac OS X 10.5; you should read Tech Note TN2223.
Anyway, assuming that you are getting buffers of linear PCM sample data in 32-bit float format, you just need to write a loop that takes fabs() of each sample and keeps the maximum of those values. Then you can save that peak value or convert it to decibels.
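A minimal sketch of that loop, assuming the IOProc hands you an AudioBufferList of 32-bit float linear PCM; the helper name and the -120 dB floor for silence are arbitrary choices for the example:

#include <CoreAudio/CoreAudio.h>
#include <cmath>

// Returns the peak level of the buffers in decibels (0 dB = full scale).
// Assumes linear PCM, 32-bit float samples.
static Float32 peakLevelInDecibels(const AudioBufferList *bufferList)
{
    Float32 peak = 0.0f;
    for (UInt32 i = 0; i < bufferList->mNumberBuffers; ++i) {
        const AudioBuffer &buf = bufferList->mBuffers[i];
        const Float32 *samples = static_cast<const Float32 *>(buf.mData);
        UInt32 count = buf.mDataByteSize / sizeof(Float32);
        for (UInt32 j = 0; j < count; ++j) {
            Float32 v = fabsf(samples[j]);
            if (v > peak)
                peak = v;
        }
    }
    // Convert to decibels; clamp silence to an arbitrary floor of -120 dB.
    return (peak > 0.0f) ? 20.0f * log10f(peak) : -120.0f;
}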

MSG::time is later than timeGetTime

After noticing some timing discrepancies with events in my code, I boiled the problem all the way down to my Windows message loop.
Basically, unless I'm doing something strange, I'm experiencing this behaviour:
MSG message;
while (PeekMessage(&message, _applicationWindow.Handle, 0, 0, PM_REMOVE))
{
    int timestamp = timeGetTime();
    bool strange = message.time > timestamp; // strange == true!!!
    TranslateMessage(&message);
    DispatchMessage(&message);
}
The only rational conclusion I can draw is that MSG::time uses a different timing mechanism than timeGetTime() and is therefore free to produce differing results.
Is this the case, or am I missing something fundamental?
Could this be a signed/unsigned issue? You are comparing a signed int (timestamp) to an unsigned DWORD (msg.time).
Also, the tick count wraps roughly every 49 days; when that happens, strange could well be true.
As an aside, if you don't have a great reason to use timeGetTime, you can use GetTickCount here; it saves you bringing in winmm.
The code below shows how you should go about using these times: never compare the timestamps directly, because clock wrapping messes that up. Instead, always subtract the start time from the current time and look at the interval.
// This is roughly equivalent code, however strange should never be true
// here, because msg.time and GetTickCount() come from the same clock;
// casting the difference to a signed value keeps the comparison
// meaningful across a wrap.
DWORD timestamp = GetTickCount();
bool strange = ((int)(timestamp - msg.time) < 0);
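As a small illustration of the interval approach (not from the original answer), a hypothetical timeout loop that stays correct across the roughly 49-day wrap, because the unsigned subtraction wraps along with the tick count:

#include <windows.h>
#include <stdio.h>

int main()
{
    const DWORD TIMEOUT_MS = 500;   // arbitrary timeout for the example
    DWORD start = GetTickCount();

    for (;;) {
        // Unsigned subtraction gives the elapsed interval even if the
        // tick count wrapped between 'start' and now.
        DWORD elapsed = GetTickCount() - start;
        if (elapsed >= TIMEOUT_MS)
            break;
        Sleep(10);
    }

    printf("timed out after ~%lu ms\n", (unsigned long)(GetTickCount() - start));
    return 0;
}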
I don't think it's advisable to expect or rely on any particular relationship between the absolute values of timestamps returned from different sources. For one thing, the multimedia timer may have a different resolution from the system timer. For another, the multimedia timer runs in a separate thread, so you may encounter synchronisation issues. (I don't know if each CPU maintains its own independent tick count.) Furthermore, if you are running any sort of time synchronisation service, it may be making its own adjustments to your local clock and affecting the timestamps you are seeing.
Are you by any chance running an AMD dual core? There is a known issue where, because each core has a separate timer and the cores can run at different speeds, the timers can diverge from each other. This can manifest itself as negative ping times, for example.
I had similar issues when measuring timeouts in different threads using GetTickCount().
Install this driver (IIRC) to resolve the issue.
MSG.time is based on GetTickCount(), and timeGetTime() uses the multimedia timer, which is completely independent of GetTickCount(). I would not be surprised to see that one timer has 'ticked' before the other.
