All I want to do is grab system clock values so that I can measure the time between them in order to compare the speeds of alternate codings using Free Pascal on Mac with OSX.
Free Pascal's documentation is more about dates and coarse time stamps than about system clocks, as far as I can gather from what is online. System clock values would be far more precise.
My research here on Stack Overflow finds nothing specific to my situation.
I have been able to do this with Xcode in a native OSX application, but I'd like to use Free Pascal for this app, due to its cross platform portability.
Has anyone found how to do what I need? Thank you.
The function for this is GetTickCount64, but AFAIK that hasn't switched to clock_gettime(CLOCK_MONOTONIC) yet on OS X and still uses fpgettimeofday.
It seems that mach_absolute_time is not exposed anywhere, but you may be able to declare the import yourself. You might also want to check the univint headers for Carbon system functions; maybe one of the MIDI or (Core)Audio headers there has something for precise timekeeping.
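As a rough illustration of declaring the import yourself, something along these lines should work in FPC on OS X (a minimal, untested sketch; it assumes mach_absolute_time is reachable through the C library, which on OS X is libSystem):

{$linklib c}   { on OS X this pulls in libSystem, which exports mach_absolute_time }

function mach_absolute_time: QWord; cdecl; external name 'mach_absolute_time';

var
  t0, t1: QWord;
begin
  t0 := mach_absolute_time;
  { ...code under test... }
  t1 := mach_absolute_time;
  WriteLn('elapsed (raw Mach ticks): ', t1 - t0);
end.

Converting the raw ticks into wall-clock units needs mach_timebase_info, which can be imported the same way.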
So I pretty much solved my problem by writing a function which uses inline assembler code. The low-order longint portion of the counter is sufficient for my purpose. If the counter happened to carry into the high-order 32 bits during a test run, my subtraction of the start value from the end value would deliver a negative answer, so I'd just run the test again. Unfortunately, the clock tick difference includes the ticks spent in background processes, so I need to minimize active processes during testing.
function HdweTick: longint; assembler;
asm
  RDTSC
  movl %eax,__result
end;
If you really need 64 bits of clock tick count, you could do this:
function HdweTick: int64;
type
  truc = record
    case ovlay: integer of
      0: (eax, edx : longint);
      1: (val64 : int64);
  end;
var
  trec : truc;
begin
  asm
    RDTSC
    movl %eax,trec.eax
    movl %edx,trec.edx
  end;
  HdweTick := trec.val64;
end;
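For what it's worth, a typical measurement with either version then boils down to two reads and a subtraction (sketch only; CountedLoop is a hypothetical routine standing in for the code being timed):

var
  startTicks, endTicks : int64;
begin
  startTicks := HdweTick;
  CountedLoop;                      { hypothetical: the code under test }
  endTicks := HdweTick;
  WriteLn('elapsed TSC ticks: ', endTicks - startTicks);
end.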
Is there any way to get the system time in VxWorks besides tickGet() and tickAnnounce? I want to measure the time between the task switches of a specified task, but I think the precision of tickGet() is not good enough, because the two tickGet() values at the beginning and the end of the taskSwitchHookAdd function are always the same!
If you are looking to try and time task switches, I would assume you need a timer at least at the microsecond (us) level.
Usually, timers/clocks this fine-grained are only provided by the platform you are running on. If you are working on an embedded system, you can try reading through the manuals for your board support package (if there is one) to see if there are any functions provided to access the various timers on the board.
A more low level solution would be to figure out the processor that is running on your system and then write some simple assembly code to poll the processor's internal timebase register (TBR). This might require a bit of research on the processor you are running on, but could be easily done.
If you are running on a PPC based processor, you can use the code below to read the TBR:
loop:   mftbu rx        # load most significant half from TBU
        mftbl ry        # load least significant half from TBL
        mftbu rz        # load from TBU again
        cmpw  rz,rx     # see if 'old' = 'new'
        bne   loop      # repeat if two values read from TBU are unequal
On an x86 based processor, you might consider using the RDTSC assembly instruction to read the Time Stamp Counter (TSC). On vxWorks, pentiumALib has some library functions (pentiumTscGet64() and pentiumTscGet32()) that will make reading the TSC easier using C.
source: http://www-inteng.fnal.gov/Integrated_Eng/GoodwinDocs/pdf/Sys%20docs/PowerPC/PowerPC%20Elapsed%20Time.pdf
Good luck!
It depends on what platform you are on, but if it is x86 then you can use:
pentiumTscGet64();
I am converting a threaded timer pool unit for cross platform use.
The current unit uses timeGetTime to ensure high accuracy and to report the actual elapsed interval when the timer event is called.
I have used gettimeofday in OSX before to get a high resolution timer but cannot find any reference to it for use in Delphi XE3.
Looking for help on how I can call this in Delphi, or an alternative cross-platform way to get a high-resolution timer. I want ms accuracy (I know it's OS dependent) for this.
Thanks in advance, Martin
A better option, multi-platform ready, may be to use the TStopWatch record from the System.Diagnostics unit.
TStopWatch is a true high-resolution timer where the hardware supports one, and in that case has close to nanosecond precision (depending on the OS and hardware); if a high-resolution timer is not available, it falls back (on Windows) to the standard timer functions, giving millisecond precision.
If you want only millisecond precision, use the ElapsedMilliseconds property, like this:
var
  sw : TStopWatch;
  ElapsedMilliseconds : Int64;
begin
  sw := TStopWatch.Create;   { TStopWatch is a record, so there is nothing to Free }
  sw.Start;
  Whatever();
  sw.Stop;
  ElapsedMilliseconds := sw.ElapsedMilliseconds;
end;
TStopWatch relies on the QueryPerformanceFrequency/QueryPerformanceCounter functions on Windows and on mach_absolute_time on OS X.
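If you later want finer than millisecond granularity, the raw counter is exposed too. A sketch of the idea (assuming the XE3 System.Diagnostics unit, with Whatever standing in for the code being timed, as in the example above):

uses
  System.Diagnostics;

var
  sw : TStopwatch;
  Microseconds : Int64;
begin
  sw := TStopwatch.StartNew;
  Whatever();
  sw.Stop;
  { ElapsedTicks and Frequency use the same units, so this scales the delta to microseconds }
  Microseconds := (sw.ElapsedTicks * 1000000) div TStopwatch.Frequency;
  WriteLn('High-resolution counter available: ', TStopwatch.IsHighResolution);
  WriteLn('Elapsed: ', Microseconds, ' us');
end;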
I'm planning on making a clock. An actual clock, not something for Windows. However, I would like to be able to write most of the code now. I'll be using a PIC16F628A to drive the clock, and it has a timer I can access (actually, it has 3, in addition to its built-in clock). Windows, however, does not appear to have this function, which makes making a clock a bit hard, since I need to know how long it's been so I can update the current time. So I need to know how I can get a pulse (1 Hz, 1 kHz, doesn't really matter as long as I know how fast it is) in Windows.
There are many timer objects available in Windows. Probably the easiest to use for your purposes would be the Multimedia Timer, but that's been deprecated. It would still work, but Microsoft recommends using one of the new timer types.
I'd recommend using a threadpool timer if you know your application will be running under Windows Vista, Server 2008, or later. If you have to support Windows XP, use a Timer Queue timer.
There's a lot to those APIs, but general use is pretty simple. I showed how to use them (in C#) in my article Using the Windows Timer Queue API. The code is mostly API calls, so I figure you won't have trouble understanding and converting it.
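Since most of this page is Pascal, here is a rough Delphi-flavoured sketch of the Timer Queue approach (the kernel32 imports are declared by hand in case your Windows unit does not already expose them, and TickCallback is just an illustrative handler):

uses
  Windows;

procedure TickCallback(Param: Pointer; TimerOrWaitFired: Boolean); stdcall;
begin
  { called on a worker thread roughly once per second; update your clock here }
end;

function CreateTimerQueueTimer(out phNewTimer: THandle; TimerQueue: THandle;
  Callback: Pointer; Parameter: Pointer;
  DueTime, Period: DWORD; Flags: ULONG): BOOL; stdcall; external 'kernel32.dll';

function DeleteTimerQueueTimer(TimerQueue, Timer: THandle;
  CompletionEvent: THandle): BOOL; stdcall; external 'kernel32.dll';

var
  hTimer: THandle;
begin
  { fire immediately, then every 1000 ms, on the default timer queue }
  if CreateTimerQueueTimer(hTimer, 0, @TickCallback, nil, 0, 1000, 0) then
  begin
    Sleep(10000);   { let it tick for ten seconds }
    { INVALID_HANDLE_VALUE = wait for any in-flight callback to finish }
    DeleteTimerQueueTimer(0, hTimer, INVALID_HANDLE_VALUE);
  end;
end.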
The LARGE_INTEGER is just an 8-byte block of memory that's split into a high part and a low part. In assembly, you can define it as:
MyLargeInt equ $
MyLargeIntLow dd 0
MyLargeIntHigh dd 0
If you're looking to learn ASM, just do a Google search for [x86 assembly language tutorial]. That'll get you a whole lot of good information.
You could use a waitable timer object. Since Windows is not a real-time OS, you'll need to make sure you set the period long enough that you won't miss pulses. A tenth of a second should be safe most of the time.
Additional:
The const LARGE_INTEGER you need to pass to SetWaitableTimer is easy to implement in NASM; it's just an eight-byte value. Note that the due time is given in 100-nanosecond units, and a negative value means "relative to now":
period: dq -1000000 ; first fire after 100 ms = ten times a second
Pass the address of period as the second argument to SetWaitableTimer, and use the lPeriod argument (in milliseconds) if you want the timer to keep firing periodically.
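For comparison, here is roughly the same thing from Delphi (a sketch only; it uses Delphi's Windows unit declarations, where TLargeInteger is an Int64, and sets up an auto-reset timer firing every 100 ms):

uses
  Windows;

var
  hTimer : THandle;
  DueTime : TLargeInteger;
begin
  hTimer := CreateWaitableTimer(nil, False, nil);   { auto-reset, unnamed }
  try
    DueTime := -1000000;   { first fire after 100 ms (100-ns units, negative = relative) }
    if SetWaitableTimer(hTimer, DueTime, 100, nil, nil, False) then
      while WaitForSingleObject(hTimer, INFINITE) = WAIT_OBJECT_0 do
      begin
        { one pulse every 100 ms: update the clock here }
      end;
  finally
    CloseHandle(hTimer);
  end;
end.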
I've developed a Windows service which tracks business events. It uses the Windows clock to timestamp events. However, the underlying clock can drift quite dramatically (e.g. losing a few seconds per minute), particularly when the CPUs are working hard. Our servers use the Windows Time Service to stay in sync with domain controllers, which uses NTP under the hood, but the sync frequency is controlled by domain policy, and in any case even syncing every minute would still allow significant drift. Are there any techniques we can use to keep the clock more stable, other than using hardware clocks?
Clock ticks should be predictable, but on most PC hardware - because it's not designed for real-time systems - other I/O device interrupts have priority over the clock tick interrupt, and some drivers do extensive processing in the interrupt service routine rather than deferring it to a deferred procedure call (DPC). This means the system may not be able to service the clock tick interrupt until (sometimes) long after it was signalled.
Other factors include bus-mastering I/O controllers which steal many memory bus cycles from the CPU, causing it to be starved of memory bus bandwidth for significant periods.
As others have said, the clock-generation hardware may also vary its frequency as component values change with temperature.
Windows does allow the amount of ticks added to the real-time clock on every interrupt to be adjusted: see SetSystemTimeAdjustment. This would only work if you had a predictable clock skew, however. If the clock is only slightly off, the SNTP client ("Windows Time" service) will adjust this skew to make the clock tick slightly faster or slower to trend towards the correct time.
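As a starting point for experimenting with that, the current adjustment can be read without any special privilege (sketch; the parameter declarations follow Delphi's Windows unit, and SetSystemTimeAdjustment itself additionally requires the SE_SYSTEMTIME_NAME privilege):

uses
  Windows;

var
  Adjustment, ClockInterval : DWORD;
  AdjustmentDisabled : BOOL;
begin
  if GetSystemTimeAdjustment(Adjustment, ClockInterval, AdjustmentDisabled) then
  begin
    if AdjustmentDisabled then
      WriteLn('Periodic adjustment is currently disabled')
    else
      WriteLn('100-ns units added per clock interrupt: ', Adjustment);
    WriteLn('Clock interrupt period (100-ns units): ', ClockInterval);
  end;
end.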
I don't know if this applies, but ...
There's an issue with Windows that if you change the timer resolution with timeBeginPeriod() a lot, the clock will drift.
Actually, there is a bug in the Windows implementation of Java's Thread.wait() (and of os::sleep()) that causes this behaviour. It always sets the timer resolution to 1 ms before waiting in order to be accurate (regardless of sleep length), and restores it immediately upon completion, unless any other threads are still sleeping. This set/reset then confuses the Windows clock, which expects the Windows timer period to be fairly constant.
Sun has actually known about this since 2006, and hasn't fixed it, AFAICT!
We actually had the clock going twice as fast because of this! A simple Java program that sleeps 1 millisec in a loop shows this behaviour.
The solution is to set the time resolution yourself, to something low, and keep it there as long as possible. Use timeBeginPeriod() to control that. (We set it to 1 ms without any adverse effects.)
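In Delphi/Free Pascal terms that is one pair of calls bracketing the lifetime of the application (sketch; timeBeginPeriod/timeEndPeriod live in the MMSystem unit, and RunApplication is a hypothetical stand-in for whatever your program normally does):

uses
  MMSystem;

begin
  { request a steady 1 ms timer resolution for as long as the program runs }
  if timeBeginPeriod(1) = TIMERR_NOERROR then
  try
    RunApplication;     { hypothetical: the application's main work }
  finally
    timeEndPeriod(1);   { always match the Begin/End pair }
  end;
end.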
For those coding in Java, the easier way to fix this is by creating a thread that sleeps as long as the app lives.
Note that this will fix this issue on the machine globally, regardless of which application is the actual culprit.
You could run "w32tm /resync" in a scheduled task .bat file. This works on Windows Server 2003.
Other than resyncing the clock more frequently, I don't think there is much you can do, short of getting a new motherboard, as your clock signal doesn't seem to be at the right frequency.
http://www.codinghorror.com/blog/2007/01/keeping-time-on-the-pc.html
PC clocks should typically be accurate to within a few seconds per day. If you're experiencing massive clock drift-- on the order of minutes per day-- the first thing to check is your source of AC power. I've personally observed systems with a UPS plugged into another UPS (this is a no-no, by the way) that gained minutes per day. Removing the unnecessary UPS from the chain fixed the time problem. I am no hardware engineer, but I'm guessing that some timing signal in the power is used by the real-time clock chip on the motherboard.
As already mentioned, Java programs can cause this issue.
Another solution that does not require code modification is adding the VM argument -XX:+ForceTimeHighResolution (found on the NTP support page).
9.2.3. Windows and Sun's Java Virtual Machine
Sun's Java Virtual Machine needs to be started with the -XX:+ForceTimeHighResolution parameter to avoid losing interrupts.
See http://www.macromedia.com/support/coldfusion/ts/documents/createuuid_clock_speed.htm for more information.
From the referenced link (via the Wayback machine - original link is gone):
ColdFusion MX: CreateUUID Increases the Windows System Clock Speed
Calling the createUUID function multiple times under load in Macromedia ColdFusion MX and higher can cause the Windows system clock to accelerate. This is an issue with the Java Virtual Machine (JVM) in which Thread.sleep calls less than 10 milliseconds (ms) causes the Windows system clock to run faster. This behavior was originally filed as Sun Java Bug 4500388 (developer.java.sun.com/developer/bugParade/bugs/4500388.html) and has been confirmed for the 1.3.x and 1.4.x JVMs.
In ColdFusion MX, the createUUID function has an internal Thread.sleep call of 1 millisecond. When createUUID is heavily utilized, the Windows system clock will gain several seconds per minute. The rate of acceleration is proportional to the number of createUUID calls and the load on the ColdFusion MX server. Macromedia has observed this behavior in ColdFusion MX and higher on Windows XP, 2000, and 2003 systems.
Increase the frequency of the re-sync.
If the syncs are with your own main server on your own network there's no reason not to sync every minute.
Sync more often. Look at the Registry entries for the W32Time service, especially "Period". "SpecialSkew" sounds like it would help you.
Clock drift may be a consequence of temperature; maybe you could try to keep the temperature more constant - better cooling, perhaps? You're never going to eliminate drift totally, though.
Using an external clock (a GPS receiver, etc.) and a statistical method to relate CPU time to absolute time is what we use here to sync events in distributed systems.
Since it sounds like you have a big business:
Take an old laptop or something which isn't good for much, but seems to have a more or less reliable clock, and call it the Timekeeper. The Timekeeper's only job is to, once every (say) 2 minutes, send a message to the servers telling the time. Instead of using the Windows clock for their timestamps, the servers will put down the time from the Timekeeper's last signal, plus the elapsed time since the signal. Check the Timekeeper's clock by your wristwatch once or twice a week. This should suffice.
What servers are you running? On desktops, the times I've come across this were with Spread Spectrum FSB enabled, which causes some issues with the interrupt timing that drives that clock tick. You may want to see if this is an option in the BIOS on one of those servers and turn it off if enabled.
Another option you have is to edit the time polling interval and make it much shorter using the following registry key; most likely you'll have to add it (note this is a DWORD value, and the value is in seconds, e.g. 600 for 10 minutes):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\SpecialPollInterval
Here's a full workup on it: KB816042
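If you would rather set that from the Delphi service itself instead of regedit, a sketch along these lines would do it (it needs to run elevated, and on 64-bit Windows you may also need the KEY_WOW64_64KEY access flag):

uses
  Windows, Registry;

var
  Reg : TRegistry;
begin
  Reg := TRegistry.Create(KEY_SET_VALUE);
  try
    Reg.RootKey := HKEY_LOCAL_MACHINE;
    if Reg.OpenKey('SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient', False) then
      Reg.WriteInteger('SpecialPollInterval', 600);   { seconds: poll every 10 minutes }
  finally
    Reg.Free;
  end;
end.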
I once wrote a Delphi class to handle time resynchs. It is pasted below. Now that I see the "w32tm" command mentioned by Larry Silverman, I suspect I wasted my time.
unit TimeHandler;

interface

type
  TTimeHandler = class
  private
    FServerName : widestring;
  public
    constructor Create(servername : widestring);
    function RemoteSystemTime : TDateTime;
    procedure SetLocalSystemTime(settotime : TDateTime);
  end;

implementation

uses
  Windows, SysUtils, Messages;

function NetRemoteTOD(ServerName : PWideChar; var buffer : pointer) : integer; stdcall; external 'netapi32.dll';
function NetApiBufferFree(buffer : Pointer) : integer; stdcall; external 'netapi32.dll';

type
  //See MSDN documentation on the TIME_OF_DAY_INFO structure.
  PTime_Of_Day_Info = ^TTime_Of_Day_Info;
  TTime_Of_Day_Info = record
    ElapsedDate : integer;
    Milliseconds : integer;
    Hours : integer;
    Minutes : integer;
    Seconds : integer;
    HundredthsOfSeconds : integer;
    TimeZone : LongInt;
    TimeInterval : integer;
    Day : integer;
    Month : integer;
    Year : integer;
    DayOfWeek : integer;
  end;

constructor TTimeHandler.Create(servername: widestring);
begin
  inherited Create;
  FServerName := servername;
end;

function TTimeHandler.RemoteSystemTime: TDateTime;
var
  Buffer : pointer;
  Rek : PTime_Of_Day_Info;
  DateOnly, TimeOnly : TDateTime;
  timezone : integer;
begin
  //if the call is successful...
  if 0 = NetRemoteTOD(PWideChar(FServerName), Buffer) then begin
    //store the time of day info in our special buffer structure
    Rek := PTime_Of_Day_Info(Buffer);

    //windows time is in GMT, so we adjust for our current time zone
    if Rek.TimeZone <> -1 then
      timezone := Rek.TimeZone div 60
    else
      timezone := 0;

    //decode the date from integers into TDateTimes
    //assume zero milliseconds
    try
      DateOnly := EncodeDate(Rek.Year, Rek.Month, Rek.Day);
      TimeOnly := EncodeTime(Rek.Hours, Rek.Minutes, Rek.Seconds, 0);
    except on e : exception do
      raise Exception.Create(
        'Date retrieved from server, but it was invalid!' +
        #13#10 +
        e.Message
      );
    end;

    //translate the time into a TDateTime
    //apply any time zone adjustment and return the result
    Result := DateOnly + TimeOnly - (timezone / 24);
  end //if call was successful
  else begin
    raise Exception.Create('Time retrieval failed from "' + FServerName + '"');
  end;

  //free the data structure we created
  NetApiBufferFree(Buffer);
end;

procedure TTimeHandler.SetLocalSystemTime(settotime: TDateTime);
var
  SystemTime : TSystemTime;
begin
  DateTimeToSystemTime(settotime, SystemTime);
  SetLocalTime(SystemTime);
  //tell windows that the time changed
  PostMessage(HWND_BROADCAST, WM_TIMECHANGE, 0, 0);
end;

end.
I believe Windows Time Service only implements SNTP, which is a simplified version of NTP. A full NTP implementation takes into account the stability of your clock in deciding how often to sync.
You can get the full NTP server for Windows here.
I have a procedure with a lot of
i := i +1;
in it and I think
inc(i);
looks a lot better. Is there a performance difference or does the function call just get inlined by the compiler? I know this probably doesn't matter at all to my app, I'm just curious.
EDIT: I did some gauging of the performance and found the difference to be very small, in fact as small as 5.1222741794670901427682121946224e-8! So it really doesn't matter. And optimization options really didn't change the outcome much. Thanks for all tips and suggestions!
There is a huge difference if Overflow Checking is turned on. Basically Inc does not do overflow checking. Do as was suggested and use the disassembly window to see the difference when you have those compiler options turned on (it is different for each).
If those options are turned off, then there is no difference. Rule of thumb, use Inc when you don't care about a range checking failure (since you won't get an exception!).
Modern compilers optimize the code.
inc(i) and i:= i+1; are pretty much the same.
Use whichever you prefer.
Edit: As Jim McKeeth corrected: with Overflow Checking there is a difference; Inc does not do overflow checking.
It all depends on the type of "i". In Delphi, one normally declares loop-variables as "i: Integer", but it could as well be "i: PChar" which resolves to PAnsiChar on everything below Delphi 2009 and FPC (I'm guessing here), and to PWideChar on Delphi 2009 and Delphi.NET (also guessing).
Since Delphi 2009 can do pointer-math, Inc(i) can also be done on typed-pointers (if they are defined with POINTER_MATH turned on).
For example:
type
  PSomeRecord = ^RSomeRecord;
  RSomeRecord = record
    Value1: Integer;
    Value2: Double;
  end;

var
  i: PSomeRecord;

procedure Test;
begin
  Inc(i); // This line increases i with SizeOf(RSomeRecord) bytes, thanks to POINTER_MATH !
end;
As the other answers already said: it's relatively easy to see what the compiler made of your code by opening up:
Views > Debug Windows > CPU Windows > Disassembly
Note, that compiler options like OPTIMIZATION, OVERFLOW_CHECKS and RANGE_CHECKS might influence the final result, so you should take care to have the settings according to your preference.
A tip on this: in every unit, $INCLUDE a file that steers the compiler options; that way, you won't lose settings when your .bdsproj or .dproj is somehow damaged. (Look at the source code of the JCL for a good example of this.)
You can verify it in the CPU window while debugging. The generated CPU instructions are the same for both cases.
I agree Inc(I); looks better although this may be subjective.
Correction: I just found this in the documentation for Inc:
"On some platforms, Inc may generate
optimized code, especially useful in
tight loops."
So it's probably advisable to stick to Inc.
You could always write both pieces of code (in separate procedures), put a breakpoint in the code and compare the assembler in the CPU window.
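For instance, a minimal pair like this (set a breakpoint on each assignment and compare the disassembly under your current OPTIMIZATION/OVERFLOWCHECKS/RANGECHECKS settings):

var
  Counter: Integer = 0;       { unit/module-level so the assignments are not optimized away }

procedure UsingPlus;
begin
  Counter := Counter + 1;     { compare the instructions generated here... }
end;

procedure UsingInc;
begin
  Inc(Counter);               { ...with the instructions generated here }
end;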
In general, I'd use inc(i) wherever it's obviously being used only as a loop/index of some sort, and + 1 wherever the 1 would make the code easier to maintain (ie, it might conceivable change to another integer in the future) or just more readable from an algorithm/spec point of view.
"On some platforms, Inc may generate optimized code, especially useful in tight loops."
For an optimizing compiler such as Delphi it doesn't matter. That note applies to old compilers (e.g. Turbo Pascal).