I have written some PL/SQL that connects to an on-prem service and gets a very small amount of string data back. The routine works, but it is incredibly slow, taking roughly 9 seconds to return the data. I have re-created the process in C# and it gets the results back in under a second, so I assume it is something I am doing wrong in my PL/SQL. I need to resolve the PL/SQL speed issue as I have to make the call from a very old Oracle Forms application. Here is the PL/SQL:
declare
  c       utl_tcp.connection;
  ret_val varchar2(100);
  reading varchar2(100);
  cmd     varchar2(100) := 'COMMAND(STUFF,SERVICE,EXPECTS)';
  cmd2    varchar2(100);
begin
  c := utl_tcp.open_connection(remote_host => 'SERVICE.I.P.ADDRESS',
                               remote_port => 9995,
                               charset     => 'US7ASCII',
                               tx_timeout  => 4); -- Open connection
  -- This is a two-step process. First, issue this command, which brings back a sequence number
  ret_val := utl_tcp.write_line(c, cmd); -- Send command to service
  ret_val := utl_tcp.write_line(c);      -- Don't know why this is necessary, it was in the example I followed
  dbms_output.put_line(utl_tcp.get_text(c, 100)); -- Read the response from the server
  sys.dbms_session.sleep(1); -- This is important as sometimes it doesn't work if it's not slowed down!
  -- This is the second step, which issues another command using the sequence number retrieved above
  cmd2 := 'POLL(' || ret_val || ')';
  reading := utl_tcp.write_line(c, cmd2); -- Send command to service
  reading := utl_tcp.write_line(c);       -- Don't know why this is necessary, it was in the example I followed
  dbms_output.put_line(utl_tcp.get_text(c, 100)); -- Read the response from the server
  utl_tcp.close_connection(c); -- Close the connection
end;
I appreciate that performance problems are hard to track down when you don't have access to the systems involved, but any guidance would be very welcome.
My guess is that it's this line:
dbms_output.put_line(utl_tcp.get_text(c, 100));
Are you actually reading 100 characters in your response? If not, it will read the available buffer, wait for the rest of the 100 characters to arrive (but they won't), and then time out.
You've set tx_timeout to 4 s. The fact that you have two calls to get_text and a 1 s sleep, and that your procedure takes about 9 s (4 + 4 + 1), suggests to me that's what's going on.
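If the replies are newline-terminated, reading a line (for example with utl_tcp.get_line) instead of a fixed 100 characters avoids the wait entirely. For comparison, a minimal sketch of the exchange in Go, which returns as soon as the server finishes its line; the address is the question's placeholder:
package main

import (
	"bufio"
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Placeholder address from the question.
	conn, err := net.DialTimeout("tcp", "SERVICE.I.P.ADDRESS:9995", 4*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	conn.SetDeadline(time.Now().Add(4 * time.Second))

	// The analogue of utl_tcp.get_text(c, 100) would be
	// io.ReadFull(conn, make([]byte, 100)): it blocks until all 100 bytes
	// arrive or the deadline expires. Reading up to the newline instead
	// returns as soon as the server finishes its reply:
	reply, err := bufio.NewReader(conn).ReadString('\n')
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(reply)
}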
For Oracle we can use the following syntax for sleep:
DBMS_LOCK.SLEEP(sleepTime);
For MySQL we can use the following syntax for sleep:
DO SLEEP(sleepTime);
For DB2, how could I achieve this?
The following is part of my script.
REPEAT
  IF rowCount > 0 THEN
    DO SLEEP(sleepTime);
  END IF;
  DELETE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE IN ('EXPIRED','INACTIVE','REVOKED') OR (TOKEN_STATE='ACTIVE');
  GET DIAGNOSTICS rowCount = ROW_COUNT;
UNTIL rowCount = 0 END REPEAT;
How can we do sleep with DB2? Any help on this would be appreciated.
At the present time IBM does not supply a DBMS_LOCK module for Db2 for LUW, although that may change in the future, or you can implement your own if you have the skills.
But if you are using recent Db2 versions for Linux/Unix/Windows, you can abuse the DBMS_ALERT.WAITONE procedure. It's not an exact match, but it may be good enough. The idea is to wait a specified time for an alert (signal) that is never going to be triggered (i.e. you have to ensure the code does not signal the specified alert unless you want to interrupt the wait).
For example, the block below will wait for 5 minutes:
--#SET TERMINATOR #
BEGIN
  DECLARE v_outmessage VARCHAR(32672);
  DECLARE v_outstatus  INTEGER DEFAULT 0;
  DECLARE v_seconds    INTEGER DEFAULT 300;
  CALL dbms_alert.waitone('whatever', v_outmessage, v_outstatus, v_seconds);
END#
There is also the option to implement a sleep function (as an external UDF or an external stored procedure), and that is described here (requires a C compiler, etc.).
Try the undocumented call DBMS_ALERT.SLEEP(60).
I'm writing a website with Go and SQLite3, and I expect around 1000 concurrent writes per second for a few minutes each day, so I did the following test (error checking omitted for clarity):
t1 := time.Now()
tx, _ := db.Begin()
stmt, _ := tx.Prepare("insert into foo(stuff) values(?)")
defer stmt.Close()
for i := 0; i < 1000; i++ {
    _, _ = stmt.Exec(strconv.Itoa(i) + " - ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789,./;'[]-=<>?:()*&^%$##!~`")
}
tx.Commit()
t2 := time.Now()
log.Println("Writing time: ", t2.Sub(t1))
And the writing time is about 0.1 second. Then I modified the loop to:
var wg sync.WaitGroup
for i := 0; i < 1000; i++ {
    wg.Add(1)
    go func(stmt *sql.Stmt, i int) {
        defer wg.Done()
        _, err := stmt.Exec(strconv.Itoa(i) + " - ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789,./;'[]-=<>?:()*&^%$##!~`")
        if err != nil {
            log.Fatal(err)
        }
    }(stmt, i)
}
wg.Wait()
This gives me a whopping 46.2 seconds! I ran it many times, and every run took more than 40 seconds, sometimes even over a minute! Since Go handles each user concurrently, does that mean I have to switch databases to make the webpage work? Thanks!
I recently evaluated SQLite3 performance in Go myself for a network application and learned that it needs a bit of setup before it is even remotely usable.
Turn on Write-Ahead Logging
You need to use WAL: PRAGMA journal_mode=WAL. That's mainly why you get such bad performance. With WAL I can do 10,000 concurrent writes without transactions in a matter of seconds. Within a transaction it will be lightning fast.
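A minimal sketch of enabling WAL from Go, assuming the mattn/go-sqlite3 driver and a placeholder database file:
package main

import (
	"database/sql"
	"log"

	_ "github.com/mattn/go-sqlite3" // registers the "sqlite3" driver
)

func main() {
	db, err := sql.Open("sqlite3", "./foo.db")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// Switch the database to Write-Ahead Logging. The setting is stored
	// in the database file, so it persists across connections.
	if _, err := db.Exec("PRAGMA journal_mode=WAL"); err != nil {
		log.Fatal(err)
	}
}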
Disable the connection pool
I use mattn/go-sqlite3, and it opens the database with the SQLITE_OPEN_FULLMUTEX flag. This means every SQLite call is guarded with a lock, so everything is serialized, and that's actually what you want with SQLite. The problem with Go in this situation is that you will get random errors telling you that the database is locked. The reason is the way database/sql works internally: it manages a pool of connections for you, so it will open multiple SQLite connections, and you don't want that. To solve this I had to, basically, disable the pool: call db.SetMaxOpenConns(1) and it will work. Even under very high load, with tens of thousands of concurrent reads and writes, it works without a problem.
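Continuing the sketch above, disabling the pool is a single call right after opening the database:
// Force database/sql to keep exactly one underlying SQLite connection,
// so every query is serialized through it and "database is locked"
// errors from competing connections disappear.
db.SetMaxOpenConns(1)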
Another option might be to use SQLITE_OPEN_NOMUTEX to run SQLite in multi-threaded mode and let it manage concurrency for you. But SQLite doesn't really work well in multi-threaded apps: reads can happen in parallel, but only one write at a time. You will get occasional busy errors, which are completely normal for SQLite but require you to do something about them; you probably don't want to abandon a write operation completely when that happens. That's why most of the time people work with SQLite either synchronously or by sending calls to a separate thread dedicated to SQLite.
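If you do keep multiple connections, the busy errors need handling. A sketch of a simple backoff-and-retry loop (matching on the error text is a simplification; the mattn driver also exposes typed sqlite3.Error codes):
import (
	"database/sql"
	"strings"
	"time"
)

// execWithRetry retries a write with a short backoff while SQLite reports
// that the database is locked.
func execWithRetry(db *sql.DB, query string, args ...interface{}) error {
	var err error
	for attempt := 0; attempt < 10; attempt++ {
		if _, err = db.Exec(query, args...); err == nil {
			return nil
		}
		if !strings.Contains(err.Error(), "database is locked") {
			return err // a real failure, not lock contention
		}
		time.Sleep(10 * time.Millisecond) // back off, then try again
	}
	return err
}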
I tested write performance on go1.18 to see if parallelism works.
Out of the box
I used 3 goroutines incrementing different integer columns of the same record.
Parallelism conclusions:
Read code 5 rate: 2.5%
Write code 5 rate: 518% (waiting 5x in between attempts)
Write throughput: 2,514 writes per second
Code 5 is "database is locked (5) (SQLITE_BUSY)".
A few years ago on Node.js, the driver would crash under mere concurrency (not even parallelism) unless I serialized the writes, i.e. write concurrency = 1.
Serialized Writes
With Go I used github.com/haraldrudell/parl.NewModerator(1, context.Background()), i.e. serialized writes:
Serialized results:
Read code 5 rate: 0.005%
Write code 5 rate: 0.02%
3,032 writes per second (+20%)
Reads are not serialized, but they are held up by writes in the same thread. Writes seem to be 208x more expensive than reads.
Serializing writes in Go increases write performance by 20%.
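parl.NewModerator is this answer's own library; a plain sync.Mutex around each write gives the same serialization. A sketch, with invented table and column names:
import (
	"database/sql"
	"sync"
)

var writeMu sync.Mutex // admits one writer at a time, like NewModerator(1)

// incrementColumn performs one serialized write; concurrent goroutines
// queue on the mutex instead of racing into SQLITE_BUSY. The column name
// comes from trusted code, never from user input.
func incrementColumn(db *sql.DB, column string, id int) error {
	writeMu.Lock()
	defer writeMu.Unlock()
	_, err := db.Exec("UPDATE counters SET "+column+" = "+column+" + 1 WHERE id = ?", id)
	return err
}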
PRAGMA journal_mode
Enabling sqlDB.Exec("PRAGMA journal_mode = WAL")
(from the default journal_mode: delete)
increases write performance to 18,329/s, i.e. another 6x.
Code 5 errors go to 0.
Multiple Processes
Using 3 processes x 3 threads with writes serialized per process lowers write throughput by about 5% and raises the code 5 rate to 200%. The good news is that file locking works without errors on macOS 12.3.1 with APFS.
I wanted to show the remaining time during an installation, like in the following question, and used the code from there, posted by TLama: How to show percent done, elapsed time and estimated time progress?
The code is working for me, so thanks for this.
But when installing bigger files, the "remaining time" label updates too frequently.
So I wanted to ask: how can I change the update period of the label so that it updates only every second or every half second?
Thanks in advance.
Use GetTickCount to remember the time of the last update. On subsequent calls to CurInstallProgressChanged, calculate the difference from the current tick count and update the labels only if the difference is large enough (1000 ms = 1 second):
var
  LastUpdate: DWORD;

procedure CurInstallProgressChanged(CurProgress, MaxProgress: Integer);
var
  CurTick: DWORD;
begin
  CurTick := GetTickCount;
  if (CurTick - LastUpdate) >= 1000 then
  begin
    LastUpdate := CurTick;
    // Update labels
  end;
end;
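The same pattern sketched in Go, in case a second rendering helps (updateLabels is a hypothetical label-refresh helper):
import (
	"fmt"
	"time"
)

var lastUpdate time.Time

// updateLabels is a stand-in for whatever refreshes the UI.
func updateLabels(cur, max int) { fmt.Printf("%d/%d\n", cur, max) }

// onProgress fires on every progress tick but refreshes the labels at
// most once per second, mirroring the GetTickCount check above.
func onProgress(cur, max int) {
	if time.Since(lastUpdate) >= time.Second {
		lastUpdate = time.Now()
		updateLabels(cur, max)
	}
}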
The following code returns a unique 3-character code by repeatedly checking whether the generated code already exists in the db. Once it finds one that does not exist, the loop exits.
How can I protect against race conditions which could lead to non-unique codes being returned?
pubcode = Pubcode.find_by_pub_id(current_pub.id)
new_id = nil
begin
  new_id = SecureRandom.hex(2)[0..2].to_s
  old_id = Pubcode.find_by_guid(new_id)
  if old_id.nil?
    pubcode.guid = new_id
    pubcode.save
  end
end while (old_id)
How can I protect against race conditions which could lead to non-unique codes being returned?
Don't use the database as a synchronization point. Apart from the synchronization issues, your code is susceptible to slowdown as the number of available codes shrinks, and there is no guarantee your loop will terminate.
A far better approach to this would be to have a service which pre-generates a batch of unique identifiers and hands these out on a first-come, first-served basis.
Given that you are only using 3 characters for this code, you can only store roughly 17,000 records; you could generate the entire list of permutations of three-character codes up front and remove entries from this list as you allocate them.
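A sketch of that idea in Go, using the question's 3-hex-character alphabet (4096 codes; a letters-only alphabet gives the ~17,000 figure):
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

// codePool pre-generates every 3-character code and hands them out
// first-come, first-served.
type codePool struct {
	mu    sync.Mutex
	codes []string
}

func newCodePool(alphabet string) *codePool {
	p := &codePool{}
	for _, a := range alphabet {
		for _, b := range alphabet {
			for _, c := range alphabet {
				p.codes = append(p.codes, string(a)+string(b)+string(c))
			}
		}
	}
	rand.Shuffle(len(p.codes), func(i, j int) {
		p.codes[i], p.codes[j] = p.codes[j], p.codes[i]
	})
	return p
}

// allocate pops one unused code; ok is false when the pool is exhausted.
func (p *codePool) allocate() (code string, ok bool) {
	p.mu.Lock()
	defer p.mu.Unlock()
	if len(p.codes) == 0 {
		return "", false
	}
	code = p.codes[len(p.codes)-1]
	p.codes = p.codes[:len(p.codes)-1]
	return code, true
}

func main() {
	pool := newCodePool("0123456789abcdef") // hex, as in the question
	code, ok := pool.allocate()
	fmt.Println(code, ok)
}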
You can add a unique index on the database column and then just try to update a Pubcode with a random code. If that fails because of the unique index, just try another code:
pubcode = Pubcode.find_by_pub_id!(current_pub.id)
begin
  pubcode.update!(guid: SecureRandom.hex(2)[0..2])
rescue ActiveRecord::StatementInvalid => e
  retry
end
Perhaps you want to count the number of retries and re-raise the exception if no code was found within a certain number of tries (because there are only 4096 possible ids).
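The same bounded-retry idea sketched in Go against a UNIQUE column (table and column names are invented for illustration):
import (
	"database/sql"
	"fmt"
	"math/rand"
)

// assignCode retries random 3-hex-character codes until the UNIQUE index
// accepts one, giving up after maxTries since only 4096 codes exist.
func assignCode(db *sql.DB, pubID int) (string, error) {
	const maxTries = 50
	for i := 0; i < maxTries; i++ {
		code := fmt.Sprintf("%03x", rand.Intn(4096))
		if _, err := db.Exec("UPDATE pubcodes SET guid = ? WHERE pub_id = ?", code, pubID); err == nil {
			return code, nil
		}
		// Assume the failure was the unique-index violation; try another code.
	}
	return "", fmt.Errorf("no free code found in %d tries", maxTries)
}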
Preventing a race is done by putting the process in a mutex:
@mutex = Mutex.new
Then, within the method that calls your code:
@mutex.synchronize do
  # whatever process you want to protect from races
end
But a problem with your approach is that your loop may never end since you are using randomness.
I have some Delphi 2007 code which runs in two different applications; one is a GUI application and the other is a Windows service. The weird part is that while the GUI application technically seems to have more to do (drawing the GUI, calculating some stats, and so on), the Windows service consistently uses more of the CPU when it runs. Where the GUI application uses around 3-4% CPU, the service uses in the region of 6-8%.
When running them together, the CPU loads of both applications approximately double.
The basic code is the same in both applications, except for the addition of the GUI code in the GUI version.
Is there any reason for this behavior? Do Windows service applications have some kind of inherent overhead, or do I need to look through the code to find the source of this, in my book unexpected, behavior?
EDIT:
Having had time to look more closely at the code, I think the suggestion below, that the GUI application spends some time waiting for repaints causing the CPU load to drop, is likely incorrect. The applications are both threaded, meaning the GUI repaints should not influence the CPU load.
Just to be sure, I first tried removing all GUI components from the application, leaving only a blank form. That did not increase the CPU load of the program. I then went through and stripped out all calls to Synchronize in the worker threads which were used to update the UI. This had the same result: the CPU load did not change.
The code in the service looks like this:
procedure TLsOpcServer.ServiceExecute(Sender: TService);
begin
  // Initialize OPC server as NT Service
  dmEngine.AddToLog( sevInfo, 'Service', 'Name', Sender.Name );
  AddLocalServiceKeysToRegistry( Sender.Name );
  dmEngine.AddToLog( sevInfo, 'Service', 'Execute', 'Started' );
  dmEngine.Start( True );
  //
  while not Terminated do
  begin
    ServiceThread.ProcessRequests( True );
  end;
  dmEngine.Stop;
  dmEngine.AddToLog( sevInfo, 'Service', 'Execute', 'Stopped' );
end;
dmEngine.Start will start and register the OPC server and initialize a socket. It then starts a thread which does... something to incoming OPC signals. The exact same call is made in FormCreate on the main form of the GUI application.
I'm going to look into how the GUI application starts next; I didn't write this code, so trying to puzzle out how it works is a bit of an adventure :)
EDIT 2:
This is a little bit interesting. I ran both applications for exactly 1 minute each, using AQTime to benchmark them. This is the most interesting part of the results:
In the service:
Procedure name: TSignalList::HandleChild
Execution time: 20.105963821084 s
Hit count: 5,961,231
In the GUI application:
Procedure name: TSignalList::HandleChild
Execution time: 7.62424101324976 s
Hit count: 6,383,010
EDIT 3:
I'm finally back in a position where I can keep looking at this problem. I have found two procedures which both have about the same hit count during a five-minute run, yet in the service the execution time is much higher. For HandleValue the hit count is 4,300,258 and the execution time is 21.77 s in the service; in the GUI application the hit count is 4,254,018 with an execution time of 9.75 s.
The code looks like this:
function TSignalList.HandleValue(const Signal: string; var Tag: TTag; const CreateIfNotExist: Boolean): HandleStatus;
var
  Index: integer;
begin
  result := statusNoSignal;
  Tag := nil;
  if not Assigned( Values ) then
  begin
    Values := TValueStrings.Create;
    Values.CaseSensitive := defDefaultCase;
    Values.Sorted := True;
    Values.Duplicates := dupIgnore;
    Index := -1; // Guaranteed no items in list
  end else
  begin
    Index := Values.IndexOf( Signal );
  end;
  if Index = -1 then
  begin
    if CreateIfNotExist then
    begin
      // Value signal does not exist, create it
      Tag := TTag.Create;
      if Values.AddObject( Signal, Tag ) > -1 then
      begin
        result := statusAdded;
      end;
    end;
  end else
  begin
    Tag := TTag( Values.Objects[ Index ] );
    result := statusExist;
  end;
end;
Both applications enter the "CreateIfNotExist" case exactly the same number of times. TValueStrings is a direct descendant of TStringList without any overloads.
Have you timed the execution of the core functionality? If so, did you measure a difference? I suspect that if you do, you won't find much difference between them, unless you add other functionality, like updating the GUI, to that core code.
Consuming less CPU doesn't mean it's running slower. The GUI app could be waiting more often on repaints, which depend on the GPU as well (and maybe on other parts of the system). The GUI app may therefore consume less CPU because the CPU is waiting for other parts of the system before it can continue with the next instruction.