I am converting a threaded timer pool unit for cross platform use.
The current unit uses timeGetTime to ensure high accuracy and to report the actual elapsed interval when the timer event is called.
I have used gettimeofday on OS X before to get a high-resolution timer, but I cannot find any reference to it for use in Delphi XE3.
I'm looking for help on how I can call this in Delphi, or an alternative cross-platform way to get a high-res timer. I want ms accuracy (I know it's OS-dependent) for this.
Thanks in advance, Martin
A better option, multi-platform ready, may be to use the TStopWatch record from the System.Diagnostics unit.
TStopwatch uses a true high-resolution timer if one is available, in which case it has close to nanosecond precision (depending on the OS and hardware); if a high-resolution timer is not available, it falls back (on Windows) to standard timer functions that provide millisecond precision.
If you want only millisecond precision, use the ElapsedMilliseconds property, like this:
var
  sw: TStopwatch;
  ElapsedMilliseconds: Int64;
begin
  sw := TStopwatch.Create;   // TStopwatch is a record, so no try/finally or Free is needed
  sw.Start;
  Whatever();
  sw.Stop;
  ElapsedMilliseconds := sw.ElapsedMilliseconds;
end;
TStopwatch relies on the QueryPerformanceFrequency/QueryPerformanceCounter functions on Windows and on mach_absolute_time on OS X.
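If you need raw timestamps rather than a start/stop stopwatch (for example, to report the actual elapsed interval inside a timer callback), a minimal sketch using TStopwatch's class members might look like this; it assumes the System.Diagnostics unit of Delphi XE2+ exposes GetTimeStamp and Frequency as shown:

uses
  System.Diagnostics;

var
  StartTicks, EndTicks: Int64;
  ElapsedMs: Double;
begin
  // Take a raw high-resolution timestamp (QPC ticks on Windows,
  // mach_absolute_time-derived ticks on OS X).
  StartTicks := TStopwatch.GetTimeStamp;

  // ... timer event fires, work happens ...

  EndTicks := TStopwatch.GetTimeStamp;
  // Convert the tick count to milliseconds using the reported tick frequency.
  ElapsedMs := (EndTicks - StartTicks) * 1000.0 / TStopwatch.Frequency;
end;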
The declaration for timeSetEvent contains a uResolution parameter:
MMRESULT timeSetEvent(
UINT uDelay,
UINT uResolution,
LPTIMECALLBACK lpTimeProc,
DWORD_PTR dwUser,
UINT fuEvent
);
But my understanding is that, to set the timer resolution, timeBeginPeriod must be called first:
The timeBeginPeriod function requests a minimum resolution for periodic timers... call this function immediately before using timer services...
Do I have to call timeBeginPeriod before timeSetEvent?
If yes, then what is uResolution for?
Do I have to call timeBeginPeriod before timeSetEvent?
No, you do not have to call timeBeginPeriod in this case.
Microsoft's documentation is misleading. Indeed, the documentation for timeBeginPeriod states:
Call this function immediately before using timer services... call the timeEndPeriod function immediately after you are finished using timer services...
Which has led to the ubiquitous coding pattern:
call timeBeginPeriod
call timeSetEvent
do stuff...
call timeKillEvent
call timeEndPeriod
In this pattern, the calls to timeBeginPeriod and timeEndPeriod are unnecessary. Mike Wasson, who wrote SDK documentation for Windows multimedia APIs, clarifies this in a forum post:
Internally, timeSetEvent calls timeBeginPeriod, so the uResolution parameter in timeSetEvent acts just like timeBeginPeriod. When the timer is cancelled (either by calling timeKillEvent for a periodic timer, or after a one-shot timer expires), internally it calls timeEndPeriod.
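For illustration, a minimal Delphi sketch of that simpler pattern (timeSetEvent with its uResolution parameter and no explicit timeBeginPeriod call) might look like the following; the callback body and the 10 ms period are placeholders, and older Delphi versions declare the callback parameters as DWORD rather than DWORD_PTR:

uses
  Windows, MMSystem, SysUtils;

procedure TimerProc(uTimerID, uMsg: UINT; dwUser, dw1, dw2: DWORD_PTR); stdcall;
begin
  // Runs on a multimedia timer thread roughly every 10 ms.
end;

var
  TimerId: MMRESULT;
begin
  // uDelay = 10 ms period, uResolution = 1 ms; timeSetEvent requests the
  // resolution internally, so no separate timeBeginPeriod call is made here.
  TimerId := timeSetEvent(10, 1, @TimerProc, 0, TIME_PERIODIC or TIME_CALLBACK_FUNCTION);
  if TimerId = 0 then
    raise Exception.Create('timeSetEvent failed');

  // ... do stuff ...

  // Cancelling the timer also releases the requested resolution internally.
  timeKillEvent(TimerId);
end;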
My evidence-based recommendation
Unless there is good reason, timeBeginPeriod/timeEndPeriod should not be called when using timeSetEvent/timeKillEvent. Doing so only increases the risk of timeBeginPeriod being called earlier than necessary, or timeEndPeriod being called later than necessary, or worse still timeEndPeriod not being called at all (a risk that absolutely exists in complex code using multiple timers). In all these cases system performance and power usage can be negatively impacted if high resolution timing has been requested:
Setting a higher resolution can improve the accuracy of time-out intervals in wait functions. However, it can also reduce overall system performance, because the thread scheduler switches tasks more often. High resolutions can also prevent the CPU power management system from entering power-saving modes.
Notes
Microsoft warns that timeSetEvent is obsolete and recommends using CreateTimerQueueTimer. However, this alternative timer might not be suitable for precision timing:
Callback functions are queued to the thread pool. These threads are subject to scheduling delays, so the timing can vary depending on what else is happening in the application or the system.
So it seems that, for now, the "obsolete" timeSetEvent function is still the only practical option for precision event/callback-based timing.
Windows 10, version 2004 (April 2020) attempts to ameliorate some of the system-wide impacts of applications that use multimedia timers at high resolutions. The linked article contains important information for folks who use timeBeginPeriod for its system-wide "side-effect" benefit, e.g. higher resolution for Wait and Sleep functions.
All I want to do is grab system clock values so that I can measure the time between them in order to compare the speeds of alternate codings using Free Pascal on Mac with OSX.
Free Pascal's documentation is more about dates and coarse time stamps than about system clocks, as far as I can gather from the online docs. System clock values would be far more precise.
My research here in stackoverflow finds nothing specific to my situation.
I have been able to do this with Xcode in a native OSX application, but I'd like to use Free Pascal for this app, due to its cross platform portability.
Has anyone found how to do what I need? Thank you.
The function for this is GetTickCount64, but AFAIK that hasn't switched to clock_gettime(CLOCK_MONOTONIC) yet on OS X and still uses fpgettimeofday.
It seems that mach_absolute_time is not exposed anywhere, but you may be able to declare the import yourself. You also might want to check the univint headers for Carbon system functions; maybe one of the MIDI or (Core)Audio headers there has something for precise timekeeping.
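As a rough sketch of declaring that import yourself in Free Pascal, something like the following could work; the external names and the timebase conversion are assumptions based on Apple's C headers, so treat this as unverified:

{$IFDEF DARWIN}
type
  TMachTimebaseInfo = record
    numer: Cardinal;
    denom: Cardinal;
  end;

// Import the Mach timing routines from the system library.
function mach_absolute_time: QWord; cdecl; external name 'mach_absolute_time';
function mach_timebase_info(var info: TMachTimebaseInfo): LongInt; cdecl; external name 'mach_timebase_info';

// Current monotonic time in nanoseconds (raw ticks scaled by the timebase).
function NowNanoseconds: QWord;
var
  info: TMachTimebaseInfo;
begin
  mach_timebase_info(info);
  Result := (mach_absolute_time * info.numer) div info.denom;
end;
{$ENDIF}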
So I pretty much solved my problem by writing a function which uses inline assembler code. The low-order longint portion of the counter is sufficient for my purpose. If the counter should happen to accumulate into the high-order 32 bits during a test run, my subtraction of start from end values would deliver a negative answer, so I'd run the test again. Unfortunately, the clock-tick difference includes the ticks spent in background processes, so I need to minimize active processes during testing.
function HdweTick: longint; assembler;
asm
  RDTSC
  movl %eax,__result
end;
If you really need 64 bits of clock tick count, you could do this:
function HdweTick: int64;
type
  truc = record
    case ovlay: integer of
      0: (eax, edx: longint);
      1: (val64: int64);
  end;
var
  trec: truc;
begin
  asm
    RDTSC
    movl %eax,trec.eax
    movl %edx,trec.edx
  end;
  HdweTick := trec.val64;
end;
The following code is to fade my application on close.
procedure TfrmMain.btnClose1Click(Sender: TObject);
var
  i: Integer;
begin
  for i := 255 downto 0 do begin
    frmMain.AlphaBlendValue := i;
    Application.ProcessMessages;
  end;
  Close;
end;
With Windows performance set to “Let Windows choose…”, closing my Delphi app with the above code gives an almost instantaneous fade (maybe ¼ second at most; if I blink I miss the transition).
If I set the performance option to “Adjust for best performance”, exiting the same app takes over 12 seconds to fade.
Using the same code but commenting out the AlphaBlendValue change removes the delay.
I tested this out on both Delphi 2010 and DelphiXE2 and the results are the same.
This was tested on Windows 7 Ultimate 64bit if that makes any difference.
To say the least this behavior puzzles me.
I thought that the form's Alpha property was handled by the GPU and would therefore not be affected by Windows performance settings that would be targeted at maximizing CPU performance.
So as far as this is concerned I'm not sure if this is a Windows 7 bug, a Delphi bug or just my lack of knowledge.
As far as a fix...
Is there a way to tell if Windows is running in crap graphics/max performance mode so that I can disable Alpha fade effects in my apps?
Edit for clarity:
While I would like to fix the fade what I am really looking for is a way to determine what the Windows performance setting is.
I am looking for how to determine a specific Windows setting - when you go into Windows Performance Options there are 3 tabs. On the first tab "Visual Effects" there are 3 canned options and a 4th option for 'Custom'. Minimally I am trying to determine if the option chosen is 'Adjust for best performance', if I could determine what the settings are on this tab even better.
Appreciate any help.
The fundamental problem with your code is that you are forcing 256 distinct updates irrespective of the performance characteristics of the machine. You don't have to use every single alpha blend value between 255 and 0. You can skip some values and still have a smooth fade.
You need to account for the actual graphics performance of the machine. Since you cannot predict that, you should account for real time in your fade code. Doing so will give you a consistent rate of fade irrespective of the performance characteristics of your machine.
So, here's a simple example to demonstrate tying the fade rate to real time:
procedure TfrmMain.btnClose1Click(Sender: TObject);
var
  Stopwatch: TStopwatch;
  NewAlphaBlendValue: Integer;
begin
  Stopwatch := TStopwatch.StartNew;
  while True do
  begin
    NewAlphaBlendValue := 255 - (Stopwatch.ElapsedMilliseconds div 4);
    if NewAlphaBlendValue > 0 then
      AlphaBlendValue := NewAlphaBlendValue
    else
      break;
  end;
  Close;
end;
The fade has a 1 second duration. You can readily adjust the mathematics to modify the duration to your requirements. This code will produce a smooth fade even on your low performing machine.
I would also comment that you should not use the global variable frmMain in a TfrmMain method. The TfrmMain method already has access to the instance: it is Self, and of course you can omit the Self. What's more, the call to ProcessMessages is bad: it allows re-entrant handling of queued input messages, which you don't want to happen. So remove the call to ProcessMessages.
You actually ask about detecting the Adjust for best performance setting. But I think that's the wrong question. For a start you should fix your fade code so that the fade duration is independent of graphics performance.
Having done that you may still wish to disable the fade if the user has asked for lower quality appearance settings. I don't think you should look for one of the 3 canned options that you mention. They are quite possibly Windows version specific. Personally I would base the behaviour on the Animate windows when minimizing and maximizing setting. My rationale is that if the user does not want minimize and maximize to be animated, then presumably they don't want window close to be faded.
Here's how to read that setting:
function GetWindowAnimation: Boolean;
var
  AnimationInfo: TAnimationInfo;
begin
  AnimationInfo.cbSize := SizeOf(AnimationInfo);
  if not SystemParametersInfo(SPI_GETANIMATION, AnimationInfo.cbSize,
    @AnimationInfo, 0) then
    RaiseLastOSError;
  Result := AnimationInfo.iMinAnimate <> 0;
end;
I think that most of the other settings that you may be concerned with can also be read using SystemParametersInfo. You should be able to work out how to do so by following the documentation.
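For example, the same SystemParametersInfo approach reads other Boolean visual-effects settings; the sketch below uses SPI_GETMENUFADE purely as an illustration (if your Delphi version doesn't declare the constant, it is documented as $1012):

function GetMenuFadeEnabled: Boolean;
var
  Enabled: BOOL;
begin
  // For this particular SPI_GET* query, pvParam receives a BOOL.
  if not SystemParametersInfo(SPI_GETMENUFADE, 0, @Enabled, 0) then
    RaiseLastOSError;
  Result := Enabled;
end;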
Sorry for the tardy followup but it took me a while to figure out a working answer to my question and some of the issues behind it.
First, a thank-you to David Heffernan for insight on a better way to handle the fade loop and information on the TStopwatch record from Delphi's Diagnostics unit, much appreciated.
In regards to being able to determine the Windows' Performance settings...
When using the following un-optimized fade loop
procedure TfrmMain.btnFadeNCloseClick(Sender: TObject);
var
  i: Integer;
begin
  for i := 255 downto 0 do
    frmMain.AlphaBlendValue := i;
  Close;
end;
the actual Windows Performance Option settings causing the performance issue are "Enable desktop composition" and "Use visual styles on windows and buttons". If both options are enabled there is no issue; if either setting is not enabled, the loop crawls** (about 12 seconds on my system if the form is maximized).
Turns out that turning Aero Glass on or off affects these same 2 settings. So being able to detect if Aero Glass is on or not enables me to determine whether to not to enable the form effects, such as transition fades and other eye candy, in my apps. Plus now I can also capture that information in my bug reports.
**Note: this appears to be an NVidia issue/bug, or at least an issue that is much more severe on systems with NVidia graphics cards. On 2 different NVidia systems (with recent, if not latest, drivers) I got similar results for a maximized form fade: less than .001 seconds if Aero Glass is on, around 12 seconds if Aero Glass is off. On a system with an Intel graphics card: less than .001 seconds if Aero Glass is on, about 3.7 seconds if Aero Glass is off. Granted, my test sampling is small, 3 NVidia systems (counting my customer who initially reported the issue) and one non-NVidia system, but if I were using a decent NVidia graphics card I would not bother turning Aero Glass off.
Below is the working code to detect if Aero Glass is enabled via Delphi:
This function has been tested on a Windows 7 64-bit system and works with Delphi 2007, 2010 and XE2 (32- & 64-bit).
All of the various versions of the Delphi function below that I found on the net were broken, along with comments from people complaining about getting Access Violation errors.
What finally shed the light on fixing the bad code was Gerry Coll's response to: AccessViolationException in Delphi - impossible (check it, unbelievable...) which was about trying to fix AV errors in a function of the same type.
function ISAeroEnabled: Boolean;
type
  _DwmIsCompositionEnabledFunc = function(var IsEnabled: BOOL): HRESULT; stdcall;
var
  Flag: BOOL;
  DllHandle: THandle;
  OsVersion: TOSVersionInfo;
  DwmIsCompositionEnabledFunc: _DwmIsCompositionEnabledFunc;
begin
  Result := False;
  ZeroMemory(@OsVersion, SizeOf(OsVersion));
  OsVersion.dwOSVersionInfoSize := SizeOf(TOSVersionInfo);
  if ((GetVersionEx(OsVersion)) and (OsVersion.dwPlatformId = VER_PLATFORM_WIN32_NT) and
    (OsVersion.dwMajorVersion = 6) and (OsVersion.dwMinorVersion < 2)) then //Vista & Win7 only (no Win8)
  begin
    DllHandle := LoadLibrary('dwmapi.dll');
    try
      if DllHandle <> 0 then
      begin
        @DwmIsCompositionEnabledFunc := GetProcAddress(DllHandle, 'DwmIsCompositionEnabled');
        if @DwmIsCompositionEnabledFunc <> nil then
        begin
          if DwmIsCompositionEnabledFunc(Flag) = S_OK then
            Result := Flag;
        end;
      end;
    finally
      if DllHandle <> 0 then
        FreeLibrary(DllHandle);
    end;
  end;
end;
I'm planning on making a clock. An actual clock, not something for Windows. However, I would like to be able to write most of the code now. I'll be using a PIC16F628A to drive the clock, and it has a timer I can access (actually, it has 3, in addition to the clock it has built in). Windows, however, does not appear to have this function. Which makes making a clock a bit hard, since I need to know how long it's been so I can update the current time. So I need to know how I can get a pulse (1Hz, 1KHz, doesn't really matter as long as I know how fast it is) in Windows.
There are many timer objects available in Windows. Probably the easiest to use for your purposes would be the Multimedia Timer, but that's been deprecated. It would still work, but Microsoft recommends using one of the new timer types.
I'd recommend using a threadpool timer if you know your application will be running under Windows Vista, Server 2008, or later. If you have to support Windows XP, use a Timer Queue timer.
There's a lot to those APIs, but general use is pretty simple. I showed how to use them (in C#) in my article Using the Windows Timer Queue API. The code is mostly API calls, so I figure you won't have trouble understanding and converting it.
The LARGE_INTEGER is just an 8-byte block of memory that's split into a high part and a low part. In assembly, you can define it as:
MyLargeInt equ $
MyLargeIntLow dd 0
MyLargeIntHigh dd 0
If you're looking to learn ASM, just do a Google search for [x86 assembly language tutorial]. That'll get you a whole lot of good information.
You could use a waitable timer object. Since Windows is not a real-time OS, you'll need to make sure you set the period long enough that you won't miss pulses. A tenth of a second should be safe most of the time.
Additional:
The const LARGE_INTEGER you need to pass to SetWaitableTimer is easy to implement in NASM, it's just an eight byte constant:
period: dq 100 ; 100ms = ten times a second
Pass the address of period as the second argument to SetWaitableTimer.
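If you end up doing this from Delphi/Pascal rather than assembly, a minimal sketch of the same waitable-timer approach might look like this (error handling kept to a minimum; the 100 ms period matches the suggestion above):

uses
  Windows, SysUtils;

procedure RunPulseLoop;
var
  Timer: THandle;
  DueTime: Int64;
begin
  Timer := CreateWaitableTimer(nil, False, nil);
  if Timer = 0 then
    RaiseLastOSError;
  try
    // Relative due time in 100-ns units (negative = relative): fire after 100 ms,
    // then every 100 ms (the lPeriod argument), i.e. ten pulses per second.
    DueTime := -1000000;
    if not SetWaitableTimer(Timer, DueTime, 100, nil, nil, False) then
      RaiseLastOSError;
    while WaitForSingleObject(Timer, INFINITE) = WAIT_OBJECT_0 do
    begin
      // One "pulse" every 100 ms: advance and redraw the clock here.
    end;
  finally
    CloseHandle(Timer);
  end;
end;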
I've developed a Windows service which tracks business events. It uses the Windows clock to timestamp events. However, the underlying clock can drift quite dramatically (e.g. losing a few seconds per minute), particularly when the CPUs are working hard. Our servers use the Windows Time Service to stay in sync with domain controllers, which uses NTP under the hood, but the sync frequency is controlled by domain policy, and in any case even syncing every minute would still allow significant drift. Are there any techniques we can use to keep the clock more stable, other than using hardware clocks?
Clock ticks should be predictable, but on most PC hardware - because it isn't designed for real-time systems - other I/O device interrupts have priority over the clock tick interrupt, and some drivers do extensive processing in the interrupt service routine rather than deferring it to a deferred procedure call (DPC). This means the system may not be able to service the clock tick interrupt until (sometimes) long after it was signalled.
Other factors include bus-mastering I/O controllers which steal many memory bus cycles from the CPU, causing it to be starved of memory bus bandwidth for significant periods.
As others have said, the clock-generation hardware may also vary its frequency as component values change with temperature.
Windows does allow the amount of ticks added to the real-time clock on every interrupt to be adjusted: see SetSystemTimeAdjustment. This would only work if you had a predictable clock skew, however. If the clock is only slightly off, the SNTP client ("Windows Time" service) will adjust this skew to make the clock tick slightly faster or slower to trend towards the correct time.
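As a rough Delphi sketch of inspecting those adjustment values (a read-only starting point; actually calling SetSystemTimeAdjustment requires the SE_SYSTEMTIME_NAME privilege and a measured, predictable skew):

uses
  Windows, SysUtils;

procedure ShowTimeAdjustment;
var
  Adjustment, Increment: DWORD;
  Disabled: BOOL;
begin
  // Adjustment = 100-ns units added to the clock per interrupt,
  // Increment  = 100-ns units between clock interrupts,
  // Disabled   = True when no custom adjustment is in effect.
  if not GetSystemTimeAdjustment(Adjustment, Increment, Disabled) then
    RaiseLastOSError;
  Writeln('Adjustment per tick (100-ns units): ', Adjustment);
  Writeln('Clock interrupt period (100-ns units): ', Increment);
  if Disabled then
    Writeln('Time adjustment is currently disabled (system default behaviour)');
end;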
I don't know if this applies, but ...
There's an issue with Windows that if you change the timer resolution with timeBeginPeriod() a lot, the clock will drift.
Actually, there is a bug in the Windows implementation of Java's Thread wait() (and of os::sleep()) that causes this behaviour. It always sets the timer resolution to 1 ms before the wait, in order to be accurate (regardless of the sleep length), and restores it immediately upon completion, unless any other threads are still sleeping. This set/reset then confuses the Windows clock, which expects the Windows time quantum to be fairly constant.
Sun has actually known about this since 2006, and hasn't fixed it, AFAICT!
We actually had the clock going twice as fast because of this! A simple Java program that sleeps 1 millisec in a loop shows this behaviour.
The solution is to set the time resolution yourself, to something low, and keep it there as long as possible. Use timeBeginPeriod() to control that. (We set it to 1 ms without any adverse effects.)
For those coding in Java, the easier way to fix this is by creating a thread that sleeps as long as the app lives.
Note that this will fix this issue on the machine globally, regardless of which application is the actual culprit.
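In Delphi, the "set it yourself and keep it there" approach described above could be as small as a unit you add to the project; this is a sketch that assumes 1 ms is acceptable on your systems:

unit PinTimerResolution;

interface

implementation

uses
  MMSystem;

const
  TargetResolutionMs = 1;

initialization
  // Pin the multimedia timer resolution for the life of the process so that
  // other code briefly toggling it has less effect on the system clock.
  timeBeginPeriod(TargetResolutionMs);

finalization
  timeEndPeriod(TargetResolutionMs);

end.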
You could run "w32tm /resync" in a scheduled task .bat file. This works on Windows Server 2003.
Other than resyncing the clock more frequently, I don't think there is much you can do, short of getting a new motherboard, as your clock signal doesn't seem to be at the right frequency.
http://www.codinghorror.com/blog/2007/01/keeping-time-on-the-pc.html
PC clocks should typically be accurate to within a few seconds per day. If you're experiencing massive clock drift-- on the order of minutes per day-- the first thing to check is your source of AC power. I've personally observed systems with a UPS plugged into another UPS (this is a no-no, by the way) that gained minutes per day. Removing the unnecessary UPS from the chain fixed the time problem. I am no hardware engineer, but I'm guessing that some timing signal in the power is used by the real-time clock chip on the motherboard.
As already mentioned, Java programs can cause this issue.
Another solution that does not require code modification is adding the VM argument -XX:+ForceTimeHighResolution (found on the NTP support page).
9.2.3. Windows and Sun's Java Virtual Machine
Sun's Java Virtual Machine needs to be started with the -XX:+ForceTimeHighResolution parameter to avoid losing interrupts.
See http://www.macromedia.com/support/coldfusion/ts/documents/createuuid_clock_speed.htm for more information.
From the referenced link (via the Wayback machine - original link is gone):
ColdFusion MX: CreateUUID Increases the Windows System Clock Speed

Calling the createUUID function multiple times under load in Macromedia ColdFusion MX and higher can cause the Windows system clock to accelerate. This is an issue with the Java Virtual Machine (JVM) in which Thread.sleep calls less than 10 milliseconds (ms) cause the Windows system clock to run faster. This behavior was originally filed as Sun Java Bug 4500388 (developer.java.sun.com/developer/bugParade/bugs/4500388.html) and has been confirmed for the 1.3.x and 1.4.x JVMs.

In ColdFusion MX, the createUUID function has an internal Thread.sleep call of 1 millisecond. When createUUID is heavily utilized, the Windows system clock will gain several seconds per minute. The rate of acceleration is proportional to the number of createUUID calls and the load on the ColdFusion MX server. Macromedia has observed this behavior in ColdFusion MX and higher on Windows XP, 2000, and 2003 systems.
Increase the frequency of the re-sync.
If the syncs are with your own main server on your own network there's no reason not to sync every minute.
Sync more often. Look at the Registry entries for the W32Time service, especially "Period". "SpecialSkew" sounds like it would help you.
Clock drift may be a consequence of temperature; maybe you could try to keep the temperature more constant, using better cooling perhaps? You're never going to lose the drift totally, though.
Using an external clock (a GPS receiver, etc.) and a statistical method to relate CPU time to absolute time is what we use here to sync events in distributed systems.
Since it sounds like you have a big business:
Take an old laptop or something which isn't good for much, but seems to have a more or less reliable clock, and call it the Timekeeper. The Timekeeper's only job is to, once every (say) 2 minutes, send a message to the servers telling the time. Instead of using the Windows clock for their timestamps, the servers will put down the time from the Timekeeper's last signal, plus the elapsed time since the signal. Check the Timekeeper's clock by your wristwatch once or twice a week. This should suffice.
What servers are you running? On desktops, the times I've come across this were with Spread Spectrum FSB enabled, which causes some issues with the interrupt timing that drives the clock tick. You may want to see if this is an option in the BIOS on one of those servers and turn it off if it's enabled.
Another option you have is to edit the time polling interval and make it much shorter using the following registry key, most likely you'll have to add it (note this is a DWORD value and the value is in seconds, e.g. 600 for 10min):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\SpecialPollInterval
Here's a full workup on it: KB816042
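If you prefer to set that value programmatically (for example, as part of a Delphi-based deployment tool), a sketch using TRegistry might look like this; it needs administrative rights, and the key path is the one from the KB article above:

uses
  Windows, SysUtils, Registry;

procedure SetNtpPollInterval(Seconds: Integer);
var
  Reg: TRegistry;
begin
  Reg := TRegistry.Create(KEY_SET_VALUE);
  try
    Reg.RootKey := HKEY_LOCAL_MACHINE;
    if not Reg.OpenKey('SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient', False) then
      raise Exception.Create('Could not open the NtpClient key');
    // SpecialPollInterval is a DWORD holding the poll interval in seconds.
    Reg.WriteInteger('SpecialPollInterval', Seconds);
  finally
    Reg.Free;
  end;
end;

// Usage: SetNtpPollInterval(600);  // re-sync every 10 minutes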
I once wrote a Delphi class to handle time resynchs. It is pasted below. Now that I see the "w32tm" command mentioned by Larry Silverman, I suspect I wasted my time.
unit TimeHandler;

interface

type
  TTimeHandler = class
  private
    FServerName: widestring;
  public
    constructor Create(servername: widestring);
    function RemoteSystemTime: TDateTime;
    procedure SetLocalSystemTime(settotime: TDateTime);
  end;

implementation

uses
  Windows, SysUtils, Messages;

function NetRemoteTOD(ServerName: PWideChar; var buffer: pointer): integer; stdcall; external 'netapi32.dll';
function NetApiBufferFree(buffer: Pointer): integer; stdcall; external 'netapi32.dll';

type
  //See MSDN documentation on the TIME_OF_DAY_INFO structure.
  PTime_Of_Day_Info = ^TTime_Of_Day_Info;
  TTime_Of_Day_Info = record
    ElapsedDate: integer;
    Milliseconds: integer;
    Hours: integer;
    Minutes: integer;
    Seconds: integer;
    HundredthsOfSeconds: integer;
    TimeZone: LongInt;
    TimeInterval: integer;
    Day: integer;
    Month: integer;
    Year: integer;
    DayOfWeek: integer;
  end;

constructor TTimeHandler.Create(servername: widestring);
begin
  inherited Create;
  FServerName := servername;
end;

function TTimeHandler.RemoteSystemTime: TDateTime;
var
  Buffer: pointer;
  Rek: PTime_Of_Day_Info;
  DateOnly, TimeOnly: TDateTime;
  timezone: integer;
begin
  //if the call is successful...
  if 0 = NetRemoteTOD(PWideChar(FServerName), Buffer) then begin
    //store the time of day info in our special buffer structure
    Rek := PTime_Of_Day_Info(Buffer);

    //windows time is in GMT, so we adjust for our current time zone
    if Rek.TimeZone <> -1 then
      timezone := Rek.TimeZone div 60
    else
      timezone := 0;

    //decode the date from integers into TDateTimes
    //assume zero milliseconds
    try
      DateOnly := EncodeDate(Rek.Year, Rek.Month, Rek.Day);
      TimeOnly := EncodeTime(Rek.Hours, Rek.Minutes, Rek.Seconds, 0);
    except on e: exception do
      raise Exception.Create(
        'Date retrieved from server, but it was invalid!' +
        #13#10 +
        e.Message
      );
    end;

    //translate the time into a TDateTime
    //apply any time zone adjustment and return the result
    Result := DateOnly + TimeOnly - (timezone / 24);
  end //if call was successful
  else begin
    raise Exception.Create('Time retrieval failed from "' + FServerName + '"');
  end;

  //free the data structure we created
  NetApiBufferFree(Buffer);
end;

procedure TTimeHandler.SetLocalSystemTime(settotime: TDateTime);
var
  SystemTime: TSystemTime;
begin
  DateTimeToSystemTime(settotime, SystemTime);
  SetLocalTime(SystemTime);
  //tell windows that the time changed
  PostMessage(HWND_BROADCAST, WM_TIMECHANGE, 0, 0);
end;

end.
I believe Windows Time Service only implements SNTP, which is a simplified version of NTP. A full NTP implementation takes into account the stability of your clock in deciding how often to sync.
You can get the full NTP server for Windows here.