I have some Delphi 2007 code which runs in two different applications; one is a GUI application and the other is a Windows service. The weird part is that while the GUI application technically seems to have more "to do" (drawing the GUI, calculating some stats, and so on), the Windows service consistently uses more of the CPU when it runs. Where the GUI application uses around 3-4% CPU power, the service uses in the region of 6-8%.
When running them together, the CPU loads of both applications approximately double.
The basic code is the same in both applications, except for the addition of the GUI code in the Windows Forms application.
Is there any reason for this behavior? Do Windows service applications have some kind of inherent overhead or do I need to look through the code to find the source of this, in my book, unexpected behavior?
EDIT:
Having had time to look more closely at the code, I think the suggestion below (that the GUI application spends some time waiting for repaints, causing the CPU load to drop) is likely incorrect. The applications are both threaded, meaning GUI repaints should not influence the CPU load.
Just to be sure, I first tried removing all GUI components from the application, leaving only a blank form. That did not increase the CPU load of the program. I then went through and stripped out all calls to Synchronize in the worker threads, which were used to update the UI. This had the same result: the CPU load did not change.
The code in the service looks like this:
procedure TLsOpcServer.ServiceExecute(Sender: TService);
begin
  // Initialize OPC server as NT Service
  dmEngine.AddToLog( sevInfo, 'Service', 'Name', Sender.Name );
  AddLocalServiceKeysToRegistry( Sender.Name );
  dmEngine.AddToLog( sevInfo, 'Service', 'Execute', 'Started' );
  dmEngine.Start( True );
  //
  while not Terminated do
  begin
    ServiceThread.ProcessRequests( True );
  end;
  dmEngine.Stop;
  dmEngine.AddToLog( sevInfo, 'Service', 'Execute', 'Stopped' );
end;
dmEngine.Start will start and register the OPC server and initialize a socket. It then starts a thread which does... something to incoming OPC signals. The exact same call is made in FormCreate on the main form of the GUI application.
I'm going to look into how the GUI application starts next. I didn't write this code, so trying to puzzle out how it works is a bit of an adventure :)
EDIT 2:
This is a little bit interesting. I ran both applications for exactly 1 minute each, running AQTime to benchmark them. This is the most interesting part of the results:
In the service:
  Procedure name: TSignalList::HandleChild
  Execution time: 20.105963821084
  Hit count: 5961231
In the GUI application:
  Procedure name: TSignalList::HandleChild
  Execution time: 7.62424101324976
  Hit count: 6383010
EDIT 3:
I'm finally back in a position where I can keep looking at this problem. I have found two procedures which both have about the same hit count during a five-minute run, yet in the service the execution time is much higher. For HandleValue, the hit count in the service is 4,300,258 with an execution time of 21.77 s, while in the GUI application the hit count is 4,254,018 with an execution time of 9.75 s.
The code looks like this:
function TSignalList.HandleValue(const Signal: string; var Tag: TTag; const CreateIfNotExist: Boolean): HandleStatus;
var
  Index: integer;
begin
  result := statusNoSignal;
  Tag := nil;
  if not Assigned( Values ) then
  begin
    Values := TValueStrings.Create;
    Values.CaseSensitive := defDefaultCase;
    Values.Sorted := True;
    Values.Duplicates := dupIgnore;
    Index := -1; // Guaranteed no items in list
  end else
  begin
    Index := Values.IndexOf( Signal );
  end;
  if Index = -1 then
  begin
    if CreateIfNotExist then
    begin
      // Value signal does not exist, create it
      Tag := TTag.Create;
      if Values.AddObject( Signal, Tag ) > -1 then
      begin
        result := statusAdded;
      end;
    end;
  end else
  begin
    Tag := TTag( Values.Objects[ Index ] );
    result := statusExist;
  end;
end;
Both applications enter the "CreateIfNotExist" case exactly the same number of times. TValueStrings is a direct descendant of TStringList without any overloads.
Have you timed the execution of core functionality? If so, did you measure a difference? I think, if you do, you won't find much difference between them, unless you add other functionality, like updating the GUI, to the code of that core functionality.
Consuming less CPU doesn't mean it's running slower. The GUI app could be waiting more often on repaints, which depend on the GPU as well (and maybe other parts of the system). Therefore, the GUI app may consume less CPU power, because the CPU is waiting for other parts of your system before it can continue with the next instruction.
I have written some PL/SQL that connects to an on-prem service and gets a very small amount of string data back. The routine works, but it is incredibly slow, taking roughly 9 seconds to return the data. I have re-created the process in C# and it gets the results back in under a second, so I assume it is something I am doing wrong in my PL/SQL. I need to resolve the PL/SQL speed issue, as I have to make the call from a very old Oracle Forms application. Here is the PL/SQL:
declare
  c       utl_tcp.connection;
  ret_val varchar2(100);
  reading varchar2(100);
  cmd     varchar2(100) := 'COMMAND(STUFF,SERVICE,EXPECTS)';
  cmd2    varchar2(100);
begin
  c := utl_tcp.open_connection(remote_host => 'SERVICE.I.P.ADDRESS'
                              ,remote_port => 9995
                              ,charset     => 'US7ASCII'
                              ,tx_timeout  => 4
                              ); -- Open connection
  -- This is a two-step process. First, issue this command, which brings back a sequence number
  ret_val := utl_tcp.write_line(c, cmd); -- Send command to service
  ret_val := utl_tcp.write_line(c); -- Don't know why this is necessary, it was in the example I followed
  dbms_output.put_line(utl_tcp.get_text(c, 100)); -- Read the response from the server
  sys.dbms_session.sleep(1); -- This is important as sometimes it doesn't work if it's not slowed down!
  -- This is the second step, which issues another command using the sequence number retrieved above
  cmd2 := 'POLL(' || ret_val || ')';
  reading := utl_tcp.write_line(c, cmd2); -- Send command to service
  reading := utl_tcp.write_line(c); -- Don't know why this is necessary, it was in the example I followed
  dbms_output.put_line(utl_tcp.get_text(c, 100)); -- Read the response from the server
  utl_tcp.close_connection(c); -- Close the connection
end;
I appreciate performance problems are hard to track down when you don't have access to the systems, but any guidance would be greatly appreciated.
My guess is that it's this line:
dbms_output.put_line(utl_tcp.get_text(c, 100));
Are you actually reading 100 characters in your response? If not, it will read the available buffer, wait for the rest of the 100 characters to arrive (but they won't), then time out.
You've set tx_timeout to 4 s. The fact that you have 2 calls to get_text, a 1 s sleep, and your procedure is taking 9 s suggests to me that's what's going on.
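If that's right, the stall is easy to reproduce in any socket API: a fixed-length read blocks until the requested byte count arrives or the timeout fires, while a plain read returns whatever is already buffered. A minimal Go sketch of the difference (hypothetical host and port; this illustrates generic socket semantics, not Oracle's utl_tcp itself):

package main

import (
    "fmt"
    "log"
    "net"
    "time"
)

func main() {
    // Hypothetical endpoint standing in for the on-prem service.
    conn, err := net.Dial("tcp", "service.example.com:9995")
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    fmt.Fprint(conn, "COMMAND(STUFF,SERVICE,EXPECTS)\r\n")
    conn.SetReadDeadline(time.Now().Add(4 * time.Second))

    // io.ReadFull(conn, make([]byte, 100)) would behave like the
    // get_text(c, 100) call described above: it blocks until all 100
    // bytes arrive, so a shorter reply stalls until the deadline fires.
    // A single Read returns as soon as any data is available:
    buf := make([]byte, 100)
    n, err := conn.Read(buf)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("got %d bytes: %q\n", n, buf[:n])
}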
I'm currently creating a game that uses several different timers to increase several numbers, displayed as 3D text, on the screen simultaneously. When only one number is present on the screen it works perfectly and the number counts up pretty seamlessly. However, when more than one number is present, and consequently more than one timer is running, the numbers are really glitchy and they all count up showing the same numbers, even though they have different values.
I start the timer using this:
p1Price = Timer.scheduledTimer(timeInterval: 0.2, target: self, selector: #selector(p1PriceCalculator), userInfo: nil, repeats: true)
And use this to change the text:
@objc func p1PriceCalculator() {
    var smallHouse1Incremental: Int = Int(arc4random_uniform(UInt32(100)))
    smallHousePrice1 = smallHousePrice1 + smallHouse1Incremental
    smallHouse1Incremental += Int(arc4random_uniform(UInt32(100)))
    if let smallHouse1TextGeometry = smallHouseText1.geometry as? SCNText {
        smallHouse1TextGeometry.string = String(smallHousePrice1)
    }
}
There are several instances of this same setup throughout the code, the only change being the names of the nodes.
Does anyone have knowledge as to why this is happening?
Thanks!
Firing at 0.2 s is pretty frequent, but it's doable - I've had 10-12 timers running at the same time with some decent work in the loops. I don't see a tolerance set to give the system any flexibility - from the docs: "Setting a tolerance for a timer allows it to fire later than the scheduled fire date. Allowing the system flexibility in when a timer fires increases the ability of the system to optimize for increased power savings and responsiveness." Also make sure your timers are running on the main thread - I don't remember if they just won't work otherwise or if they work sporadically, but there are definite issues if you don't.
Past that, you may have to look into your calls and see if you're doing too much somewhere else. When it glitches up, did you run Instruments to see if you're leaking memory or maxing out the CPU?
I have a Windows service written in C++, and on startup, if the SERVICE_STATUS stays in SERVICE_START_PENDING too long, I end up with this error:
Error 1053: The service did not respond to the start or control request in a timely fashion
This happens when the progress bar dialog is kept open. It does not affect the service startup itself; the service will continue in SERVICE_START_PENDING until the work is completed and I set SERVICE_RUNNING.
The Windows documentation on dwWaitHint (https://msdn.microsoft.com/en-us/library/windows/desktop/ms685996(v=vs.85).aspx) states that the service must call SetServiceStatus with an incremented dwCheckPoint before the dwWaitHint time elapses.
So, for example, I set dwWaitHint to 5 minutes and call SetServiceStatus every 10 seconds with an incremented dwCheckPoint, but I still get the 1053 error after 5 minutes. In other words, the SetServiceStatus calls don't seem to do anything (and these calls are NOT failing, I checked).
By doing the above, shouldn't the service startup be allowed to take longer than dwWaitHint?
UPDATE: I can reproduce this with Microsoft's service sample code. Here's a snippet:
{
    gSvcStatus.dwServiceType = SERVICE_WIN32_OWN_PROCESS;
    gSvcStatus.dwServiceSpecificExitCode = 0;

    // Report initial status to the SCM
    ReportSvcStatus( SERVICE_START_PENDING, NO_ERROR, 300000 );

    int limit = 6; // 6 minutes total
    while (limit--)
    {
        Sleep(60000); // sleep 1 min
        ReportSvcStatus( SERVICE_START_PENDING, NO_ERROR, 300000 ); // 5 minute dwWaitHint
    }

    // We've completed startup, report RUNNING to the SCM
    ReportSvcStatus( SERVICE_RUNNING, NO_ERROR, 0 );
}
VOID ReportSvcStatus( DWORD dwCurrentState, DWORD dwWin32ExitCode, DWORD dwWaitHint )
{
    static DWORD dwCheckPoint = 1;

    // Fill in the SERVICE_STATUS structure.
    gSvcStatus.dwCurrentState = dwCurrentState;
    gSvcStatus.dwWin32ExitCode = dwWin32ExitCode;
    gSvcStatus.dwWaitHint = dwWaitHint;

    if (dwCurrentState == SERVICE_START_PENDING)
        gSvcStatus.dwControlsAccepted = 0;
    else
        gSvcStatus.dwControlsAccepted = SERVICE_ACCEPT_STOP;

    if ( (dwCurrentState == SERVICE_RUNNING) ||
         (dwCurrentState == SERVICE_STOPPED) )
        gSvcStatus.dwCheckPoint = 0;
    else
        gSvcStatus.dwCheckPoint = dwCheckPoint++;

    // Report the status of the service to the SCM.
    SetServiceStatus( gSvcStatusHandle, &gSvcStatus );
}
Are you sure you are treating dwWaitHint as milliseconds and not seconds? (i.e., is your dwWaitHint 300000?)
My experience is that the docs are right on this point: the wait hint only applies to the next SetServiceStatus call.
Although I would also say a 5-minute service start time is excessive, even if it actually takes that long to load or check data. Mostly I say that because the service control interface is stuck that entire time. SQL Server, for example, does a fairly quick service start even after a system crash that requires hours of validation.
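For reference, the same checkpoint pattern can be sketched in Go with the golang.org/x/sys/windows/svc package; this simply mirrors the C++ sample above, and the 6 x 10-second startup loop and the "MySvc" service name are made up for illustration:

package main

import (
    "log"
    "time"

    "golang.org/x/sys/windows/svc"
)

type myService struct{}

func (m *myService) Execute(args []string, r <-chan svc.ChangeRequest, status chan<- svc.Status) (bool, uint32) {
    // While starting, report START_PENDING with an incremented
    // checkpoint; WaitHint is in milliseconds, as in the C++ sample.
    checkpoint := uint32(1)
    for i := 0; i < 6; i++ {
        status <- svc.Status{State: svc.StartPending, CheckPoint: checkpoint, WaitHint: 300000}
        checkpoint++
        time.Sleep(10 * time.Second) // simulated startup work
    }
    status <- svc.Status{State: svc.Running, Accepts: svc.AcceptStop}
    for c := range r {
        if c.Cmd == svc.Stop || c.Cmd == svc.Shutdown {
            break
        }
    }
    status <- svc.Status{State: svc.StopPending}
    return false, 0
}

func main() {
    if err := svc.Run("MySvc", &myService{}); err != nil {
        log.Fatal(err)
    }
}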
Well, there are definitely limitations to the Microsoft Management Console (MMC) Services snap-in, specifically the "Service Control" progress dialog it displays.
See this article from MS: https://support.microsoft.com/en-ca/help/307806/the-services-snap-in-times-out-with-error-1053
When any control operation is initiated, the Services snap-in displays a progress dialog box with the title "Service Control". If a service requires a significant amount of time to process an operation, the progress bar will slowly increment as the Services snap-in waits for the operation to finish. After 125 seconds, the progress bar will be full and the Services snap-in will display the error 1053 (ERROR_SERVICE_REQUEST_TIMEOUT) message. The service process itself will continue its operation as usual even after the error message has appeared.
But the somewhat good news is that I've proven this 125-second claim to be false, at least on Windows 10 (I haven't tried other Windows versions). As stated in my question, when setting SERVICE_START_PENDING you can set dwWaitHint to something higher, and the progress bar will respect that. But you only get one chance at this: if you then update SERVICE_START_PENDING by calling SetServiceStatus with a higher dwWaitHint, it will not affect the progress bar dialog.
The only downside to setting dwWaitHint really high is that the progress bar will move more slowly, and when you set the SERVICE_RUNNING status the progress bar might only be halfway. But that's not a big deal, just aesthetics.
I'm writing a website with Go and SQLite3, and I expect around 1000 concurrent writes per second for a few minutes each day, so I did the following test (error checking omitted for brevity):
t1 := time.Now()
tx, _ := db.Begin()
stmt, _ := tx.Prepare("insert into foo(stuff) values(?)")
defer stmt.Close()
for i := 0; i < 1000; i++ {
    _, _ = stmt.Exec(strconv.Itoa(i) + " - ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789,./;'[]-=<>?:()*&^%$##!~`")
}
tx.Commit()
t2 := time.Now()
log.Println("Writing time: ", t2.Sub(t1))
And the writing time is about 0.1 second. Then I modified the loop to:
for i := 0; i < 1000; i++ {
    go func(stmt *sql.Stmt, i int) {
        _, err := stmt.Exec(strconv.Itoa(i) + " - ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789,./;'[]-=<>?:()*&^%$##!~`")
        if err != nil {
            log.Fatal(err)
        }
    }(stmt, i)
}
This gives me a whopping 46.2 seconds! I ran it many times and every run took more than 40 seconds, sometimes even over a minute! Since Go handles each user concurrently, does this mean I have to switch databases to make the webpage work? Thanks!
I recently evaluated SQLite3 performance in Go myself for a network application and learned that it needs a bit of setup before it's even remotely usable.
Turn on Write-Ahead Logging
You need to use WAL: PRAGMA journal_mode=WAL. That's mainly why you get such bad performance. With WAL I can do 10,000 concurrent writes without transactions in a matter of seconds. Within a transaction it will be lightning fast.
Disable the connection pool
I use mattn/go-sqlite3, and it opens the database with the SQLITE_OPEN_FULLMUTEX flag. That means every SQLite call is guarded with a lock: everything is serialized, and that's actually what you want with SQLite. The problem with Go in this situation is that you will get random errors telling you the database is locked. The reason is the way database/sql works internally: it manages a pool of connections for you, so it will open multiple SQLite connections, and you don't want that. To solve this I had to, basically, disable the pool: call db.SetMaxOpenConns(1) and it will work. Even under very high load, with tens of thousands of concurrent reads and writes, it works without a problem.
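Putting both settings together, here's a minimal setup sketch with mattn/go-sqlite3 (assumptions: the driver's _journal_mode DSN parameter, which issues the pragma on each new connection, plus a placeholder test.db file and foo table):

package main

import (
    "database/sql"
    "log"

    _ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func main() {
    // Ask the driver to run PRAGMA journal_mode=WAL on every connection.
    db, err := sql.Open("sqlite3", "file:test.db?_journal_mode=WAL")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Single connection: database/sql now serializes every call through
    // it, which avoids the random "database is locked" errors.
    db.SetMaxOpenConns(1)

    if _, err := db.Exec("create table if not exists foo(stuff text)"); err != nil {
        log.Fatal(err)
    }
    if _, err := db.Exec("insert into foo(stuff) values(?)", "hello"); err != nil {
        log.Fatal(err)
    }
}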
Another solution might be to use SQLITE_OPEN_NOMUTEX to run SQLite in multi-threaded mode and let it manage that for you. But SQLite doesn't really work in multi-threaded apps: reads can happen in parallel, but only one write at a time. You will get occasional busy errors, which are completely normal for SQLite but require you to do something with them - you probably don't want to abandon a write operation completely when that happens. That's why most of the time people work with SQLite either synchronously or by sending calls to a separate thread dedicated to SQLite.
I tested the write performance on go1.18 to see if parallelism works
Out of the Box
I used 3 Go threads incrementing different integer columns of the same record.
Parallelism conclusions:
  Read code 5 percentage: 2.5%
  Write code 5 percentage: 518% (waiting 5x in between attempts)
  Write throughput: 2,514 writes per second
"code 5" is "database is locked (5) (SQLITE_BUSY)"
A few years ago on Node.js, the driver crashed with mere concurrency (not even parallelism) unless I serialized the writes, i.e. write concurrency = 1.
Serialized Writes
With Go I used github.com/haraldrudell/parl.NewModerator(1, context.Background()), i.e. serialized writes:
Serialized results:
  read code 5: 0.005%
  write code 5: 0.02%
  3,032 writes per second (+20%)
Reads are not serialized, but they are held up by writes in the same thread. Writes seem to be 208x more expensive than reads.
Serializing writes in Go increases write performance by 20%.
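A library isn't strictly needed for this; a one-slot channel makes a serviceable write moderator. A sketch (the serializedExec helper is hypothetical, not the parl API):

// writeGate admits one writer at a time; reads can bypass it.
var writeGate = make(chan struct{}, 1)

// serializedExec funnels all writes through the single slot.
func serializedExec(db *sql.DB, query string, args ...any) (sql.Result, error) {
    writeGate <- struct{}{}        // acquire the write slot
    defer func() { <-writeGate }() // release it when done
    return db.Exec(query, args...)
}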
PRAGMA journal_mode
Enabling sqlDB.Exec("PRAGMA journal_mode = WAL") (from the default journal_mode: delete) increases write performance to 18,329 writes per second, i.e. another 6x, and code 5 errors drop to zero.
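If you set the pragma at run time like this, it's worth reading back the result, since the statement returns the journal mode now in effect as a row (a small sketch against the sqlDB handle above):

// PRAGMA journal_mode=WAL reports the resulting mode; expect "wal".
var mode string
if err := sqlDB.QueryRow("PRAGMA journal_mode = WAL").Scan(&mode); err != nil {
    log.Fatal(err)
}
log.Println("journal_mode:", mode)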
Multiple Processes
Using 3 processes x 3 threads with writes serialized per process lowers write throughput by about 5% and raises code 5 errors to 200%. The good news is that file locking works without errors on macOS 12.3.1 APFS.
On MIT App Inventor (similar to, but not the same as, Scratch), I need to create a timer that can be reset when an action happens, in order to complete an app. But I have been unable to find a way to make a resettable timer. Is there a way using this piece of software? This is a link to the App Inventor.
The first 4 blocks are the code for when the player interacts with/clicks one of the 4 colored boxes.
The last block is the code outside of the 4 .Click blocks.
Btw, there is a lot of redundancy in your blocks; see Enis' tips here on how to simplify this...
If you want to reset the clock, just set Clock.TimerEnabled = false and then set Clock.TimerEnabled = true again, and the clock will restart.
See also the following example blocks (let's assume you have a Clock component and the timer interval is 10 seconds).
In the example I reset the clock after 5 seconds and, as you can see, the clock starts from the beginning...
You can download the test project from here