How to programmatically prevent Windows from spinning down the hard disk drive? - windows

My program performs a task on the hard disk's free space.
The task is quite long; it takes 1-2 hours.
The problem is that on a laptop the hard disk may be powered off after a few minutes of user inactivity.
How do I programmatically prevent Windows from spinning down (powering off) the hard disk?

To prevent the system from entering idle mode you may try the SetThreadExecutionState function. It informs the system that the application is in use and lets you specify the thread's execution requirements. Usage can look like the following, though I'm not sure whether it also affects the disk power-down timer:
type
  EXECUTION_STATE = DWORD;

const
  ES_SYSTEM_REQUIRED   = $00000001;
  ES_DISPLAY_REQUIRED  = $00000002;
  ES_USER_PRESENT      = $00000004;
  ES_AWAYMODE_REQUIRED = $00000040;
  ES_CONTINUOUS        = $80000000;

function SetThreadExecutionState(esFlags: EXECUTION_STATE): EXECUTION_STATE;
  stdcall; external 'kernel32.dll' name 'SetThreadExecutionState';

procedure TForm1.Button1Click(Sender: TObject);
begin
  if SetThreadExecutionState(ES_CONTINUOUS or ES_SYSTEM_REQUIRED or
    ES_AWAYMODE_REQUIRED) <> 0 then
  try
    // execute your long running task here
  finally
    SetThreadExecutionState(ES_CONTINUOUS);
  end;
end;
Alternatively there is the newer set of functions PowerCreateRequest, PowerSetRequest and PowerClearRequest, designed for Windows 7, but the documentation is confusing and I haven't found any example of their usage at this time.
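Going only by the MSDN signatures, the calls might fit together like the following untested sketch (the REASON_CONTEXT translation and the constants are my own reading of the headers, so verify them before relying on this):

type
  POWER_REQUEST_TYPE = (PowerRequestDisplayRequired, PowerRequestSystemRequired,
    PowerRequestAwayModeRequired);
  REASON_CONTEXT = record
    Version: ULONG;
    Flags: DWORD;
    SimpleReasonString: PWideChar; // simplified: only the simple-string arm of the union
  end;

const
  POWER_REQUEST_CONTEXT_VERSION = 0;
  POWER_REQUEST_CONTEXT_SIMPLE_STRING = $00000001;

function PowerCreateRequest(var Context: REASON_CONTEXT): THandle; stdcall;
  external 'kernel32.dll' name 'PowerCreateRequest';
function PowerSetRequest(PowerRequest: THandle; RequestType: POWER_REQUEST_TYPE): BOOL;
  stdcall; external 'kernel32.dll' name 'PowerSetRequest';
function PowerClearRequest(PowerRequest: THandle; RequestType: POWER_REQUEST_TYPE): BOOL;
  stdcall; external 'kernel32.dll' name 'PowerClearRequest';

procedure RunLongTaskWithPowerRequest;
var
  Context: REASON_CONTEXT;
  Request: THandle;
begin
  Context.Version := POWER_REQUEST_CONTEXT_VERSION;
  Context.Flags := POWER_REQUEST_CONTEXT_SIMPLE_STRING;
  Context.SimpleReasonString := 'Working on disk free space'; // shown by "powercfg /requests"
  Request := PowerCreateRequest(Context);
  if Request <> INVALID_HANDLE_VALUE then
  try
    PowerSetRequest(Request, PowerRequestSystemRequired);
    try
      // execute your long running task here
    finally
      PowerClearRequest(Request, PowerRequestSystemRequired);
    end;
  finally
    CloseHandle(Request);
  end;
end;

One nice property of this API, if it works as documented, is that powercfg /requests lists the reason string, which makes diagnosing "why won't my machine sleep" much easier.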
Or you can modify the power settings with the PowerWriteACValueIndex or PowerWriteDCValueIndex functions, using the GUID_DISK_SUBGROUP subgroup of power settings.
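An untested sketch of that approach follows; the two GUID values are GUID_DISK_SUBGROUP and GUID_DISK_POWERDOWN_TIMEOUT as I read them from the SDK headers, so please verify them, and note that this edits the user's active power scheme, so you should save and restore the original value:

const
  GUID_DISK_SUBGROUP: TGUID = '{0012EE47-9041-4B5D-9B77-535FBA8B1442}';
  GUID_DISK_POWERDOWN_TIMEOUT: TGUID = '{6738E2C4-E8A5-4A42-B16A-E040E769756E}';

function PowerGetActiveScheme(UserRootPowerKey: HKEY; var ActivePolicyGuid: PGUID): DWORD;
  stdcall; external 'powrprof.dll';
function PowerSetActiveScheme(UserRootPowerKey: HKEY; SchemeGuid: PGUID): DWORD;
  stdcall; external 'powrprof.dll';
function PowerWriteDCValueIndex(RootPowerKey: HKEY; SchemeGuid, SubGroupOfPowerSettingsGuid,
  PowerSettingGuid: PGUID; DcValueIndex: DWORD): DWORD; stdcall; external 'powrprof.dll';

procedure DisableDiskTimeoutOnBattery;
var
  Scheme: PGUID;
begin
  if PowerGetActiveScheme(0, Scheme) = ERROR_SUCCESS then
  begin
    // 0 seconds = never spin the disk down while on battery (DC) power
    PowerWriteDCValueIndex(0, Scheme, @GUID_DISK_SUBGROUP, @GUID_DISK_POWERDOWN_TIMEOUT, 0);
    PowerSetActiveScheme(0, Scheme); // re-apply the scheme so the change takes effect
    LocalFree(HLOCAL(Scheme));       // the API allocates the buffer for us
  end;
end;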

Windows does not allow applications to block power state changes outright, because buggy applications were causing batteries to be drained. See http://blogs.msdn.com/oldnewthing/archive/2007/04/16/2148139.aspx
You can, however, get notified when the system power status is about to change. See the WM_POWERBROADCAST messages.
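For example, a minimal sketch of such a handler in a VCL form (untested; WM_POWERBROADCAST is declared in the Messages unit and the PBT_ constants in Windows.pas):

type
  TForm1 = class(TForm)
  private
    procedure WMPowerBroadcast(var Msg: TMessage); message WM_POWERBROADCAST;
  end;

procedure TForm1.WMPowerBroadcast(var Msg: TMessage);
begin
  case Msg.WParam of
    PBT_APMSUSPEND: ;         // system is about to suspend - flush your state quickly
    PBT_APMRESUMEAUTOMATIC: ; // system woke up - resume or restart the task
  end;
  Msg.Result := 1; // TRUE - accept the event
end;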

Related

How to stop a machine from sleeping/hibernating for an execution period

I have an app written (partially) in Go; as part of its operation it spawns an external process (written in C) and begins monitoring it. This external process can take many hours to complete, so I am looking for a way to prevent the machine from sleeping or hibernating whilst it is processing.
I would like to be able to relinquish this lock afterwards, so that the machine is allowed to sleep/hibernate once the process is finished.
I am initially targeting Windows, but a cross-platform solution would be ideal (does *nix even hibernate?).
Thanks to Anders for pointing me in the right direction - I put together a minimal example in Go (see below).
Note: polling to reset the timer seems to be the only reliable method. I found that when trying to combine it with the continuous flag, it would only take effect for approx. 30 seconds (no idea why). That said, the polling in this example is excessive; the interval could probably be increased to 10 minutes (since the minimum hibernation timeout is 15 minutes).
Also, FYI, this is a Windows-specific example:
package main

import (
	"log"
	"syscall"
	"time"
)

// Execution states (values from winbase.h)
const (
	EsSystemRequired = 0x00000001
	EsContinuous     = 0x80000000
)

var pulseTime = 10 * time.Second

func main() {
	kernel32 := syscall.NewLazyDLL("kernel32.dll")
	setThreadExecStateProc := kernel32.NewProc("SetThreadExecutionState")

	pulse := time.NewTicker(pulseTime)
	log.Println("Starting keep-alive poll... (silence)")
	for {
		select {
		case <-pulse.C:
			// reset the system idle timer (deliberately without EsContinuous)
			setThreadExecStateProc.Call(uintptr(EsSystemRequired))
		}
	}
}
The above is tested on Windows 7 and 10 (not tested on Windows 8 yet, but presumed to work there too).
Any user request to sleep will override this method; this includes actions such as shutting the lid on a laptop (unless the power-management settings are altered from their defaults).
That was sensible behavior for my application.
On Windows, your first step is to try SetThreadExecutionState:
Enables an application to inform the system that it is in use, thereby preventing the system from entering sleep or turning off the display while the application is running.
This is not a perfect solution, but I assume that is not an issue for you:
The SetThreadExecutionState function cannot be used to prevent the user from putting the computer to sleep. Applications should respect that the user expects a certain behavior when they close the lid on their laptop or press the power button.
The Windows 8 connected standby feature is also something you might need to consider. Looking at the power-related APIs, we find this description of PowerRequestSystemRequired:
The system continues to run instead of entering sleep after a period of user inactivity.
This request type is not honored on systems capable of connected standby. Applications should use PowerRequestExecutionRequired requests instead.
If you are dealing with tablets and other small devices then you can try to call PowerSetRequest with PowerRequestExecutionRequired to prevent this, although its description is also not ideal:
The calling process continues to run instead of being suspended or terminated by process lifetime management mechanisms. When and how long the process is allowed to run depends on the operating system and power policy settings.
You might also want to use ShutdownBlockReasonCreate but I'm not sure if it blocks sleep/hibernate.

How to determine programmatically the Windows' Performance settings with Delphi 2010

The following code is to fade my application on close.
procedure TfrmMain.btnClose1Click(Sender: TObject);
var
  i: Integer;
begin
  for i := 255 downto 0 do begin
    frmMain.AlphaBlendValue := i;
    Application.ProcessMessages;
  end;
  Close;
end;
With Windows performance set to "Let Windows choose…", closing my Delphi app with the above code gives an almost instantaneous fade (maybe ¼ second at most; if I blink I miss the transition).
If I set the Performance Option to "Adjust for best performance", exiting the same app takes over 12 seconds to fade.
Using the same code but commenting out the AlphaBlendValue change removes the delay.
I tested this out on both Delphi 2010 and Delphi XE2 and the results are the same.
This was tested on Windows 7 Ultimate 64bit if that makes any difference.
To say the least this behavior puzzles me.
I thought that the form's Alpha property was handled by the GPU and would therefore not be affected by Windows performance settings, which I assumed were targeted at maximizing CPU performance.
So as far as this is concerned I'm not sure if this is a Windows 7 bug, a Delphi bug or just my lack of knowledge.
As far as a fix...
Is there a way to tell if Windows is running in crap graphics/max performance mode so that I can disable Alpha fade effects in my apps?
Edit for clarity:
While I would like to fix the fade what I am really looking for is a way to determine what the Windows performance setting is.
I am looking for how to determine a specific Windows setting: when you go into Windows Performance Options there are 3 tabs. On the first tab, "Visual Effects", there are 3 canned options and a 4th option, "Custom". Minimally I am trying to determine if the option chosen is "Adjust for best performance"; being able to determine the individual settings on this tab would be even better.
Appreciate any help.
The fundamental problem with your code is that you are forcing 256 distinct updates irrespective of the performance characteristics of the machine. You don't have to use every single alpha blend value between 255 and 0. You can skip some values and still have a smooth fade.
You need to account for the actual graphics performance of the machine. Since you cannot predict that, you should account for real time in your fade code. Doing so will give you a consistent rate of fade irrespective of the performance characteristics of your machine.
So, here's a simple example to demonstrate tying the fade rate to real time:
procedure TfrmMain.btnClose1Click(Sender: TObject);
var
  Stopwatch: TStopwatch; // TStopwatch lives in the Diagnostics unit
  NewAlphaBlendValue: Integer;
begin
  Stopwatch := TStopwatch.StartNew;
  while True do
  begin
    NewAlphaBlendValue := 255 - (Stopwatch.ElapsedMilliseconds div 4);
    if NewAlphaBlendValue > 0 then
      AlphaBlendValue := NewAlphaBlendValue
    else
      break;
  end;
  Close;
end;
The fade has a 1 second duration (255 alpha levels at 4 ms each, i.e. just over 1000 ms). You can readily adjust the arithmetic to modify the duration to your requirements. This code will produce a smooth fade even on your low-performing machine.
I would also comment that you should not use the global variable frmMain in a TfrmMain method. The TfrmMain method already has access to the instance: it is Self, and of course you can omit Self. What's more, the call to ProcessMessages is bad: it allows re-entrant handling of queued input messages, which you don't want to happen here. So remove the call to ProcessMessages.
You actually ask about detecting the Adjust for best performance setting. But I think that's the wrong question. For a start you should fix your fade code so that the fade duration is independent of graphics performance.
Having done that you may still wish to disable the fade if the user has asked for lower-quality appearance settings. I don't think you should look for one of the 3 canned options that you mention; they are quite possibly Windows-version specific. Personally I would base the behaviour on the "Animate windows when minimizing and maximizing" setting. My rationale is that if the user does not want minimize and maximize to be animated, then presumably they don't want window close to be faded.
Here's how to read that setting:
function GetWindowAnimation: Boolean;
var
  AnimationInfo: TAnimationInfo;
begin
  AnimationInfo.cbSize := SizeOf(AnimationInfo);
  if not SystemParametersInfo(SPI_GETANIMATION, AnimationInfo.cbSize,
    @AnimationInfo, 0) then
    RaiseLastOSError;
  Result := AnimationInfo.iMinAnimate <> 0;
end;
I think that most of the other settings that you may be concerned with can also be read using SystemParametersInfo. You should be able to work out how to do so by following the documentation.
Sorry for the tardy follow-up, but it took me a while to figure out a working answer to my question and some of the issues behind it.
First, a thank-you to David Heffernan for the insight on a better way to handle the fade loop and for the pointer to TStopwatch in Delphi's Diagnostics unit; much appreciated.
In regard to being able to determine the Windows performance settings...
When using the following un-optimized fade loop
procedure TfrmMain.btnFadeNCloseClick(Sender: TObject);
var
  i: Integer;
begin
  for i := 255 downto 0 do
    frmMain.AlphaBlendValue := i;
  Close;
end;
the actual Windows Performance Options settings causing the performance issue are "Enable desktop composition" and "Use visual styles on windows and buttons". If both options are enabled there is no issue; if either setting is not enabled, the loop crawls** (about 12 seconds on my system if the form is maximized).
It turns out that turning Aero Glass on or off toggles these same two settings. So being able to detect whether Aero Glass is on lets me decide whether or not to enable form effects, such as transition fades and other eye candy, in my apps. Plus now I can also capture that information in my bug reports.
**Note: this appears to be an NVidia issue/bug, or at least an issue that is much more severe on systems with NVidia graphics cards. On 2 different NVidia systems (with recent, if not the latest, drivers) I got similar results for a maximized form fade: less than .001 seconds if Aero Glass is on, around 12 seconds if Aero Glass is off. On a system with an Intel graphics card: less than .001 seconds if Aero Glass is on, about 3.7 seconds if Aero Glass is off. Now granted, my test sampling is small, 3 NVidia systems (counting my customer who initially reported the issue) and one non-NVidia system, but if I were using a decent NVidia graphics card I would not bother turning Aero Glass off.
Below is the working code to detect if Aero Glass is enabled via Delphi:
This function has been tested on a Windows 7 64-bit system and works with Delphi 2007, 2010 and XE2 (32- and 64-bit).
All of the various versions of the Delphi function below that I found on the net were broken, accompanied by comments from people complaining about getting Access Violation errors.
What finally shed light on fixing the bad code was Gerry Coll's response to AccessViolationException in Delphi - impossible (check it, unbelievable...), which was about trying to fix AV errors in a function of the same type.
function IsAeroEnabled: Boolean;
type
  _DwmIsCompositionEnabledFunc = function(var IsEnabled: BOOL): HRESULT; stdcall;
var
  Flag: BOOL;
  DllHandle: THandle;
  OsVersion: TOSVersionInfo;
  DwmIsCompositionEnabledFunc: _DwmIsCompositionEnabledFunc;
begin
  Result := False;
  ZeroMemory(@OsVersion, SizeOf(OsVersion));
  OsVersion.dwOSVersionInfoSize := SizeOf(TOSVersionInfo);
  if (GetVersionEx(OsVersion)) and (OsVersion.dwPlatformId = VER_PLATFORM_WIN32_NT) and
    (OsVersion.dwMajorVersion = 6) and (OsVersion.dwMinorVersion < 2) then // Vista & Win7 only (no Win8)
  begin
    DllHandle := LoadLibrary('dwmapi.dll');
    try
      if DllHandle <> 0 then
      begin
        @DwmIsCompositionEnabledFunc := GetProcAddress(DllHandle, 'DwmIsCompositionEnabled');
        if Assigned(DwmIsCompositionEnabledFunc) then
        begin
          if DwmIsCompositionEnabledFunc(Flag) = S_OK then
            Result := Flag;
        end;
      end;
    finally
      FreeLibrary(DllHandle);
    end;
  end;
end;

How do you use SetThreadAffinityMask with QueryPerformanceFrequency?

I have a long-standing program with the FAA that was running great until the FAA started deploying Dell GX-760 desktops. The program is a graphical replay of air traffic. I use the QueryPerformanceFrequency function to get the processor counter frequency. With the GX 760 it appears to now be processor dependent. I found this http://msdn.microsoft.com/en-us/library/ms644904(VS.85).aspx which describes what I am seeing.
On a multiprocessor computer, it should not matter which processor is called. However, you can get different results on different processors due to bugs in the basic input/output system (BIOS) or the hardware abstraction layer (HAL). To specify processor affinity for a thread, use the SetThreadAffinityMask function.
I'm not familiar with SetThreadAffinityMask; how does it work and how should I implement it? Here is my code that gets the count.
Thanks,
Dave
'Declarations
Private Declare Function QueryPerformanceCounter Lib "kernel32" (lpPerformanceCount As Currency) As Long
Private Declare Function QueryPerformanceFrequency Lib "kernel32" (lpFrequency As Currency) As Long

'I set the Frequency on Startup
cTime.SetFrequency

Public Sub SetFrequency()
    'Get the processor frequency. This is locked at Windows startup and does not change.
    Dim f As Currency
    QueryPerformanceFrequency f
    cTime.Frequency = f
End Sub
When the program needs the time it calls
Public Function CurrentCount() As Currency
    'What is the current processor count?
    QueryPerformanceCounter CurrentCount 'get current count number
End Function
It isn't exactly clear what kind of problem you are having. It is very unlikely that the quoted MSDN article is relevant; a Dell Optiplex 760 doesn't have multiple processors, just one with multiple cores, and it is not subject to this kind of bug. You can easily test this by running your program with start.exe, which allows setting the processor affinity:
start /affinity 1 yourapp.exe
Perhaps more relevant is that newer machines take shortcuts on the frequency source, using whatever source happens to be available in the chipset. They typically have a much higher return value for QueryPerformanceFrequency; two billion isn't unusual, and maybe that screws up your math. Working with Currency instead of a true 64-bit integer is rather toe-curling.
Also check the BIOS revision for your machine, they had rather a large number of them, all the way up to A08.

SCardEstablishContext hangs as a service

Why might SCardEstablishContext hang, never to return, when called from a service?
I have code that works fine on lots of Windows installations. It accesses a Cherry keyboard's Smart Card reader (6x44) to read data on a smart card. It works fine on most PCs it has been tried on. However, on some PCs, running in Spain with Spanish Windows, the SCardEstablishContext function never returns. I cannot work out why this might be. I have logging either side of it, but the log entry after it does not appear. I cannot then shut it down (the worker thread is getting stuck), and have to kill it.
Exactly the same thread code works fine if run from an application rather than a service. Giving the service the login settings of a user instead of SYSTEM makes no difference.
I've installed Spanish XP on a machine here, but it works just fine. The far end has the same Winscard.dll version as I have here (both at XP SP3 status). No errors are shown in the event log.
How might I work out what is going wrong, and what might fix it? (Delphi code below)
// based on code by Norbert Huettisch
function TPCSCConnector.Init: boolean;
var
  RetVar: LongInt;
  ReaderList: string;
  ReaderListSize: integer;
  v: array[0..MAXIMUM_SMARTCARD_READERS] of string;
  i: integer;
begin
  Result := false;
  FNumReaders := 0;
{$IFDEF MJ_ONLY}
  LogReport(leInformation, 'About to call SCardEstablishContext');
{$ENDIF}
  RetVar := SCardEstablishContext(SCARD_SCOPE_USER, nil, nil, @FContext);
{$IFDEF MJ_ONLY}
  // never gets to report this (and logging known good etc)
  LogReport(leInformation, 'SCardEstablishContext result = ' + IntToStr(RetVar));
{$ENDIF}
  if RetVar = SCARD_S_SUCCESS then
  begin
There may be different reasons why the API function appears to hang, such as a deadlock or an invisible message box or dialog waiting for user input. You should try to get a stack trace using WinDbg.
You should also make sure that you are trying to reproduce the bug in the same environment. Important points might be whether Fast User Switching is active and whether other users are logged on, and also whether the same device drivers and services are running.
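For example, assuming the service's image name is MyService.exe (a placeholder), you can attach WinDbg non-invasively with "windbg -pn MyService.exe" and then dump a stack backtrace for every thread with "~* kb" in the command window. The thread stuck inside SCardEstablishContext should show what it is waiting on; in my experience that call talks to the Smart Card resource manager service (SCardSvr), so it is also worth checking that that service is running and responsive on the affected machines.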

Clock drift on Windows

I've developed a Windows service which tracks business events. It uses the Windows clock to timestamp events. However, the underlying clock can drift quite dramatically (e.g. losing a few seconds per minute), particularly when the CPUs are working hard. Our servers use the Windows Time Service to stay in sync with domain controllers, which uses NTP under the hood, but the sync frequency is controlled by domain policy, and in any case even syncing every minute would still allow significant drift. Are there any techniques we can use to keep the clock more stable, other than using hardware clocks?
Clock ticks should be predictable, but on most PC hardware - because they're not designed for real-time systems - other I/O device interrupts have priority over the clock tick interrupt, and some drivers do extensive processing in the interrupt service routine rather than defer it to a deferred procedure call (DPC), which means the system may not be able to serve the clock tick interrupt until (sometimes) long after it was signalled.
Other factors include bus-mastering I/O controllers which steal many memory bus cycles from the CPU, causing it to be starved of memory bus bandwidth for significant periods.
As others have said, the clock-generation hardware may also vary its frequency as component values change with temperature.
Windows does allow the number of ticks added to the real-time clock on every interrupt to be adjusted: see SetSystemTimeAdjustment. This will only work if you have a predictable clock skew, however. If the clock is only slightly off, the SNTP client (the "Windows Time" service) will adjust this skew to make the clock tick slightly faster or slower, trending towards the correct time.
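To illustrate, here is a hedged Delphi sketch (untested; both functions are declared in the Windows unit, and the caller needs the SE_SYSTEMTIME_NAME privilege), assuming you have measured that the clock loses about one second per minute:

procedure SpeedUpClockByOneSixtieth;
var
  Adjustment, Increment: DWORD;
  Disabled: BOOL;
begin
  if not GetSystemTimeAdjustment(Adjustment, Increment, Disabled) then
    RaiseLastOSError;
  // Adjustment is the number of 100 ns units added to the clock per interrupt.
  // Adding Increment/60 per tick makes the clock run about 1/60 faster,
  // which compensates for losing one second every minute.
  if not SetSystemTimeAdjustment(Adjustment + Increment div 60, False) then
    RaiseLastOSError;
end;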
I don't know if this applies, but ...
There's an issue with Windows where, if you change the timer resolution with timeBeginPeriod() a lot, the clock will drift.
Actually, there is a bug in the Windows implementation of Java's Thread.wait() (and of os::sleep()) that causes this behaviour. It always sets the timer resolution to 1 ms before waiting, in order to be accurate (regardless of the sleep length), and restores it immediately upon completion, unless other threads are still sleeping. This set/reset then confuses the Windows clock, which expects the Windows time quantum to be fairly constant.
Sun has actually known about this since 2006, and hasn't fixed it, AFAICT!
We actually had the clock going twice as fast because of this! A simple Java program that sleeps for 1 millisecond in a loop shows this behaviour.
The solution is to set the time resolution yourself, to something low, and keep it there as long as possible. Use timeBeginPeriod() to control that. (We set it to 1 ms without any adverse effects.)
For those coding in Java, the easier way to fix this is by creating a thread that sleeps as long as the app lives.
Note that this will fix this issue on the machine globally, regardless of which application is the actual culprit.
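For native (non-Java) code the same pinning trick is a one-liner pair; here is a sketch as a tiny Delphi unit (timeBeginPeriod/timeEndPeriod live in winmm.dll, wrapped by the MMSystem unit):

unit TimerResolutionPin;

interface

implementation

uses MMSystem;

initialization
  timeBeginPeriod(1); // hold the timer resolution at 1 ms for the process lifetime
finalization
  timeEndPeriod(1);   // must be paired with the same value passed to timeBeginPeriod
end.

Simply adding such a unit to the project pins the resolution for the life of the process, which is exactly the "set it and keep it there" advice above: the set/reset churn stops because the resolution never changes back.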
You could run "w32tm /resync" in a scheduled task .bat file. This works on Windows Server 2003.
Other than resyncing the clock more frequently, I don't think there is much you can do, short of getting a new motherboard, as your clock signal doesn't seem to be at the right frequency.
http://www.codinghorror.com/blog/2007/01/keeping-time-on-the-pc.html
PC clocks should typically be accurate to within a few seconds per day. If you're experiencing massive clock drift-- on the order of minutes per day-- the first thing to check is your source of AC power. I've personally observed systems with a UPS plugged into another UPS (this is a no-no, by the way) that gained minutes per day. Removing the unnecessary UPS from the chain fixed the time problem. I am no hardware engineer, but I'm guessing that some timing signal in the power is used by the real-time clock chip on the motherboard.
As already mentioned, Java programs can cause this issue.
Another solution that does not require code modification is adding the VM argument -XX:+ForceTimeHighResolution (found on the NTP support page).
9.2.3. Windows and Sun's Java Virtual Machine
Sun's Java Virtual Machine needs to be started with the -XX:+ForceTimeHighResolution parameter to avoid losing interrupts.
See http://www.macromedia.com/support/coldfusion/ts/documents/createuuid_clock_speed.htm for more information.
From the referenced link (via the Wayback machine - original link is gone):
ColdFusion MX: CreateUUID Increases the Windows System Clock Speed
Calling the createUUID function multiple times under load in Macromedia ColdFusion MX and higher can cause the Windows system clock to accelerate. This is an issue with the Java Virtual Machine (JVM) in which Thread.sleep calls of less than 10 milliseconds (ms) cause the Windows system clock to run faster. This behavior was originally filed as Sun Java Bug 4500388 (developer.java.sun.com/developer/bugParade/bugs/4500388.html) and has been confirmed for the 1.3.x and 1.4.x JVMs.
In ColdFusion MX, the createUUID function has an internal Thread.sleep call of 1 millisecond. When createUUID is heavily utilized, the Windows system clock will gain several seconds per minute. The rate of acceleration is proportional to the number of createUUID calls and the load on the ColdFusion MX server. Macromedia has observed this behavior in ColdFusion MX and higher on Windows XP, 2000, and 2003 systems.
Increase the frequency of the re-sync.
If the syncs are with your own main server on your own network there's no reason not to sync every minute.
Sync more often. Look at the Registry entries for the W32Time service, especially "Period". "SpecialSkew" sounds like it would help you.
Clock drift may be a consequence of temperature; maybe you could try to keep the temperature more constant, perhaps with better cooling? You're never going to lose the drift totally, though.
Using an external clock (a GPS receiver, etc.) and a statistical method to relate CPU time to absolute time is what we use here to sync events in distributed systems.
Since it sounds like you have a big business:
Take an old laptop or something which isn't good for much, but seems to have a more or less reliable clock, and call it the Timekeeper. The Timekeeper's only job is to, once every (say) 2 minutes, send a message to the servers telling the time. Instead of using the Windows clock for their timestamps, the servers will put down the time from the Timekeeper's last signal, plus the elapsed time since the signal. Check the Timekeeper's clock by your wristwatch once or twice a week. This should suffice.
What servers are you running? On desktops, the times I've come across this were with Spread Spectrum FSB enabled, which causes some issues with the interrupt timing that drives the clock tick. You may want to see if this is an option in the BIOS on one of those servers and turn it off if it is enabled.
Another option is to edit the time-polling interval and make it much shorter using the following registry key; most likely you'll have to add it (note this is a DWORD value, in seconds, e.g. 600 for 10 minutes):
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient\SpecialPollInterval
Here's a full workup on it: KB816042
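If you would rather set it from code than hand-edit the registry, here is an untested Delphi sketch (TRegistry is in the Registry unit; this needs to run elevated, and the value is in seconds):

uses Registry, Windows;

procedure SetNtpPollInterval(Seconds: Integer);
var
  Reg: TRegistry;
begin
  Reg := TRegistry.Create(KEY_SET_VALUE);
  try
    Reg.RootKey := HKEY_LOCAL_MACHINE;
    if Reg.OpenKey('SYSTEM\CurrentControlSet\Services\W32Time\TimeProviders\NtpClient', False) then
      Reg.WriteInteger('SpecialPollInterval', Seconds); // e.g. pass 600 for a 10-minute interval
  finally
    Reg.Free;
  end;
end;

Afterwards restart the Windows Time service (or run w32tm /config /update) so the new interval is picked up.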
I once wrote a Delphi class to handle time resyncs. It is pasted below. Now that I see the "w32tm" command mentioned by Larry Silverman, I suspect I wasted my time.
unit TimeHandler;

interface

type
  TTimeHandler = class
  private
    FServerName: WideString;
  public
    constructor Create(servername: WideString);
    function RemoteSystemTime: TDateTime;
    procedure SetLocalSystemTime(settotime: TDateTime);
  end;

implementation

uses
  Windows, SysUtils, Messages;

function NetRemoteTOD(ServerName: PWideChar; var buffer: pointer): integer;
  stdcall; external 'netapi32.dll';
function NetApiBufferFree(buffer: Pointer): integer;
  stdcall; external 'netapi32.dll';

type
  //See MSDN documentation on the TIME_OF_DAY_INFO structure.
  PTime_Of_Day_Info = ^TTime_Of_Day_Info;
  TTime_Of_Day_Info = record
    ElapsedDate: integer;
    Milliseconds: integer;
    Hours: integer;
    Minutes: integer;
    Seconds: integer;
    HundredthsOfSeconds: integer;
    TimeZone: LongInt;
    TimeInterval: integer;
    Day: integer;
    Month: integer;
    Year: integer;
    DayOfWeek: integer;
  end;

constructor TTimeHandler.Create(servername: WideString);
begin
  inherited Create;
  FServerName := servername;
end;

function TTimeHandler.RemoteSystemTime: TDateTime;
var
  Buffer: pointer;
  Rek: PTime_Of_Day_Info;
  DateOnly, TimeOnly: TDateTime;
  timezone: integer;
begin
  //if the call is successful...
  if 0 = NetRemoteTOD(PWideChar(FServerName), Buffer) then begin
    //store the time of day info in our special buffer structure
    Rek := PTime_Of_Day_Info(Buffer);
    //windows time is in GMT, so we adjust for our current time zone
    if Rek.TimeZone <> -1 then
      timezone := Rek.TimeZone div 60
    else
      timezone := 0;
    //decode the date from integers into TDateTimes
    //assume zero milliseconds
    try
      DateOnly := EncodeDate(Rek.Year, Rek.Month, Rek.Day);
      TimeOnly := EncodeTime(Rek.Hours, Rek.Minutes, Rek.Seconds, 0);
    except on e: Exception do
      raise Exception.Create(
        'Date retrieved from server, but it was invalid!' +
        #13#10 +
        e.Message
      );
    end;
    //translate the time into a TDateTime
    //apply any time zone adjustment and return the result
    Result := DateOnly + TimeOnly - (timezone / 24);
  end //if call was successful
  else begin
    raise Exception.Create('Time retrieval failed from "' + FServerName + '"');
  end;
  //free the data structure we created
  NetApiBufferFree(Buffer);
end;

procedure TTimeHandler.SetLocalSystemTime(settotime: TDateTime);
var
  SystemTime: TSystemTime;
begin
  DateTimeToSystemTime(settotime, SystemTime);
  SetLocalTime(SystemTime);
  //tell windows that the time changed
  PostMessage(HWND_BROADCAST, WM_TIMECHANGE, 0, 0);
end;

end.
I believe the Windows Time Service only implements SNTP, which is a simplified version of NTP. A full NTP implementation takes the stability of your clock into account when deciding how often to sync.
You can get a full NTP server for Windows here.
