Sleep time syntax for a DB2 stored procedure

For Oracle we can use the following syntax to sleep:
DBMS_LOCK.SLEEP(sleepTime);
For MySQL we can use the following syntax to sleep:
DO SLEEP(sleepTime);
For DB2, how could I achieve this? The following is part of my script:
REPEAT
  IF rowCount > 0 THEN
    DO SLEEP(sleepTime);
  END IF;
  DELETE FROM IDN_OAUTH2_ACCESS_TOKEN WHERE TOKEN_STATE IN ('EXPIRED','INACTIVE','REVOKED') OR (TOKEN_STATE='ACTIVE');
  GET DIAGNOSTICS rowCount = ROW_COUNT;
UNTIL rowCount = 0 END REPEAT;
How can we sleep in DB2? Any help on this would be appreciated.

At present, IBM does not supply a DBMS_LOCK module for Db2 for LUW, although that may change in the future, or you can implement your own if you have the skills.
But if you are using a recent Db2 version for Linux/Unix/Windows, you can abuse the DBMS_ALERT.WAITONE procedure. It's not an exact match, but it may be good enough. The idea is to wait a specified time for an alert (signal) that is never going to be triggered (i.e. you have to ensure the code does not signal the specified alert unless you want to interrupt the wait).
For example, the block below will wait for 5 minutes:
--#SET TERMINATOR#
BEGIN
  DECLARE v_outmessage VARCHAR(32672);
  DECLARE v_outstatus INTEGER DEFAULT 0;
  DECLARE v_seconds INTEGER DEFAULT 300;
  CALL DBMS_ALERT.WAITONE('whatever', v_outmessage, v_outstatus, v_seconds);
END#
There is also the option to implement a sleep function (as an external UDF or an external stored procedure); that approach is described here (it requires a C compiler, etc.).

Try the undocumented call DBMS_ALERT.SLEEP(60).

Related

Performance issue with PL/SQL when using utl_tcp

I have written some PL/SQL that connects to an on-prem service and gets a very small amount of string data back. The routine works, but it is incredibly slow, taking roughly 9 seconds to return the data. I have re-created the process in C# and it gets the results back in under a second, so I assume it is something I am doing wrong in my PL/SQL. I need to resolve the PL/SQL speed issue, as I have to make the call from a very old Oracle Forms application. Here is the PL/SQL:
declare
  c        utl_tcp.connection;
  ret_val  varchar2(100);
  reading  varchar2(100);
  cmd      varchar2(100) := 'COMMAND(STUFF,SERVICE,EXPECTS)';
  cmd2     varchar2(100);
begin
  c := utl_tcp.open_connection(remote_host => 'SERVICE.I.P.ADDRESS'
                              ,remote_port => 9995
                              ,charset     => 'US7ASCII'
                              ,tx_timeout  => 4
                              );  -- Open connection
  -- This is a two-step process. First, issue this command, which brings back a sequence number.
  ret_val := utl_tcp.write_line(c, cmd);  -- Send command to service
  ret_val := utl_tcp.write_line(c);  -- Don't know why this is necessary; it was in the example I followed
  dbms_output.put_line(utl_tcp.get_text(c, 100));  -- Read the response from the server
  sys.dbms_session.sleep(1);  -- This is important, as sometimes it doesn't work if it's not slowed down!
  -- This is the second step, which issues another command using the sequence number retrieved above.
  cmd2 := 'POLL(' || ret_val || ')';
  reading := utl_tcp.write_line(c, cmd2);  -- Send command to service
  reading := utl_tcp.write_line(c);  -- Don't know why this is necessary; it was in the example I followed
  dbms_output.put_line(utl_tcp.get_text(c, 100));  -- Read the response from the server
  utl_tcp.close_connection(c);  -- Close the connection
end;
I appreciate that performance problems are hard to track down when you don't have access to the systems involved, but any guidance would be greatly appreciated.
My guess is that it's this line:
dbms_output.put_line(utl_tcp.get_text(c, 100));
Are you actually reading 100 characters in your response? If not, it will read the available buffer, wait for the rest of the 100 characters to arrive (they won't), and then time out.
You've set tx_timeout to 4 seconds. The fact that you have two calls to get_text, a 1-second sleep, and a procedure that takes about 9 seconds suggests to me that's what's going on.

Aborting queries on neo4jrb

I am running something along the lines of the following:
results = queries.map do |query|
  begin
    Neo4j::Session.query(query)
  rescue Faraday::TimeoutError
    nil
  end
end
After a few iterations I get an unrescued Faraday::TimeoutError ("too many connection resets (due to Net::ReadTimeout - Net::ReadTimeout)"), and Neo4j needs switching off and on again.
I believe this is because the queries themselves aren't aborted - i.e. the connection times out but Neo4j carries on trying to run my query. I actually want to time them out, so simply increasing the timeout window won't help me.
I've had a scout around and it looks like I can find my queries and abort them via the Neo4j API, which will be my next move.
Am I right in my diagnosis? If so, is there a recommended way of managing queries (and aborting them) from neo4jrb?
Rebecca is right about managing queries manually. However, if you want Neo4j to automatically stop queries after a certain time period, you can set this in your neo4j.conf:
dbms.transaction.timeout=60s
You can find more info in the docs for that setting.
The Ruby gem is using Faraday to connect to Neo4j via HTTP and Faraday has a built-in timeout which is separate from the one in Neo4j. I would suggest setting the Neo4j timeout as a bit longer (5-10 seconds perhaps) than the one in Ruby (here are the docs for configuring the Faraday timeout). If they both have the same timeout, Neo4j might raise a timeout before Ruby, making for a less clear error.
Query management can be done through Cypher. You must be an admin user.
To list all queries, you can use CALL dbms.listQueries;.
To kill a query, you can use CALL dbms.killQuery('ID-OF-QUERY-TO-KILL');, where the ID is obtained from the list of queries.
These statements must be executed as raw queries; it does not matter whether you are using an OGM, as long as you can input queries manually. If there is no way to input queries manually and no way of doing this in your framework, then you will have to access the database by some other method in order to execute them.
So thanks to Brian and Rebecca for useful tips about query management within Neo4j. Both of these point the way to viable solutions to my problem, and Brian's explicitly lays out steps for achieving one via Neo4jrb so I've marked it correct.
As both answers assume, the diagnosis I made IS correct - i.e. if you run a query from Neo4jrb and the HTTP connection times out, Neo4j will carry on executing the query and Neo4jrb will not issue any instruction for it to stop.
Neo4jrb does not provide a wrapper for any query-management functionality, so simply setting a transaction timeout seems most sensible and is probably what I'll adopt. Actually intercepting and killing queries is also possible, but this means running your query on one thread so that you can look up its queryId in another. This is the somewhat hacky solution I'm working with at the moment:
class QueryRunner
  DEFAULT_TIMEOUT = 70

  def self.query(query, timeout_limit = DEFAULT_TIMEOUT)
    new(query, timeout_limit).run
  end

  def initialize(query, timeout_limit)
    @query = query
    @timeout_limit = timeout_limit
  end

  def run
    start_time = Time.now.to_i
    # Run the query on its own thread so this thread can watch the clock.
    Thread.new { @result = Neo4j::Session.query(@query) }
    sleep 0.5
    return @result if @result
    # Look up the server-side queryId so the query can be killed if it overruns.
    id = if query_ref = Neo4j::Session.query("CALL dbms.listQueries;").to_a.find { |x| x.query == @query }
      query_ref.queryId
    end
    while @result.nil?
      if (Time.now.to_i - start_time) > @timeout_limit
        puts "killing query #{id} due to timeout"
        Neo4j::Session.query("CALL dbms.killQuery('#{id}');")
        @result = []
      else
        sleep 1
      end
    end
    @result
  end
end

While recording in Squish using Python, how can I set the application to sleep for some time between two consecutive activities?

I am working on an application where there are read-only screens.
To test whether the data is being fetched on screen load, I want to set some wait time until the screen is ready.
I am using Python to record the actions. Is there a way to check the static text on the screen and set the time?
You can simply use
snooze(time in s).
Example:
snooze(5)
If you want to wait for a certain object, use
waitForObject(":symbolic_name")
Example:
type(waitForObject(":Welcome.Button"), )
The problem is more complicated if your objects are created dynamically, as mine are. In this case, you should create a function with a while loop that waits until the object exists. Maybe this code helps you:
def whileObjectIsFalse(objectID):
    # objectID is the symbolic name of your object.
    counter = 300
    objectState = object.exists(objectID)
    while objectState == False:
        objectState = object.exists(objectID)
        snooze(0.1)
        counter -= 1
        if counter == 0:
            return False
    snooze(0.2)
In my case, even snooze() doesn't always work, because sometimes I need to wait 5 seconds, other times 8, or just 2. So the function presumes that your object has not been created yet and tests for it for about 30 seconds.
If your object has not been created by then, the code exits with False, and you can test for that to stop script execution.
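A hedged usage sketch of the function above (the symbolic name and failure message are placeholders, not from the original post):
# Abort the test if the screen's ready label never appears within ~30 seconds.
if whileObjectIsFalse(":MainWindow_ReadyLabel") == False:
    test.fail("Screen did not finish loading in time")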
If you're using Python, you can use time.sleep() as well.
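For example, a minimal sketch of a fixed five-second pause between two recorded actions (the duration is arbitrary):
import time

# Block the test script for five seconds before the next interaction.
time.sleep(5)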

Python slow to check if MongoDB record found

I have a Python (3.2) query that goes to MongoDB, and the query itself runs fast enough. When I then perform an if-statement check to see if any records were found, it takes 50 times as long:
Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
    58     27623      6475988    234.4      1.7  itemInDB = db.mainData.find({"x":item[x]}).limit(1)
    59
    60                                           # existing item in db
    61     27623    293419802  10622.3     77.6  if itemInDB.count():
What on earth is the cause of that if statement taking so long?! I presume there must be a better way to check whether a record was found, but Google has come up empty.
Thanks for the help.
Perhaps a Better Way
If you're only interested in returning one value, you might want to use find_one instead of find. It will stop looking for values after one has been found, as opposed to find, which has to run through the collection:
itemInDB = db.mainData.find_one({"x":item[x]})
if itemInDB:
    print("Item found")
else:
    print("Item not found")
For Your Example
According to the PyMongo docs, when querying the count of a cursor, you can pass in a parameter (True or False) to take into account any skip or limit calls previously made to the cursor. The default for that parameter is False (namely, not taking those calls into account). That may be affecting the performance of your count query.
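For illustration, here is a sketch of that parameter applied to the question's own query; note that cursor.count() belongs to the older PyMongo 2.x/3.x API (it was removed in PyMongo 4), so treat this as matching the driver in use at the time:
itemInDB = db.mainData.find({"x":item[x]}).limit(1)
# with_limit_and_skip=True makes count() honour the limit(1),
# so at most one matching document is counted.
if itemInDB.count(with_limit_and_skip=True):
    print("Item found")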
Gauging Query Performance
If you want to see how your query will be carried out by MongoDB, you can call explain on your cursor:
db.coll.find({"x":4}).explain()
The explain function is also implemented in PyMongo.
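A minimal PyMongo sketch of the same check (the connection string and database name here are placeholders, not from the question):
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")  # hypothetical connection string
db = client["mydb"]                                 # hypothetical database name
# Cursor.explain() returns the query plan as a dictionary; it shows whether
# the query used an index or had to scan the whole collection.
print(db.mainData.find({"x": 4}).explain())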
It turns out it was due to the find() call and not the if statement. I created an index on "x" (as I should have anyway), changed the find to find_one, and removed the .count() from the if statement. Overall, it is 75% faster.
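For reference, a sketch of creating that index from PyMongo (collection and field names as in the question):
# A single-field ascending index on "x" lets the equality lookup
# avoid a full collection scan.
db.mainData.create_index("x")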

Same code runs slower as a Windows service than a GUI application

I have some Delphi 2007 code which runs in two different applications; one is a GUI application and the other is a Windows service. The weird part is that while the GUI application technically seems to have more "to do" (drawing the GUI, calculating some stats, and so on), the Windows service consistently uses more of the CPU when it runs. Where the GUI application uses around 3-4% CPU power, the service uses in the region of 6-8%.
When running them together, the CPU loads of both applications approximately double.
The basic code is the same in both applications, except for the addition of the GUI code in the Windows Forms application.
Is there any reason for this behavior? Do Windows service applications have some kind of inherent overhead or do I need to look through the code to find the source of this, in my book, unexpected behavior?
EDIT:
Having had time to look more closely at the code, I think the suggestion below, that the GUI application spends some time waiting for repaints which causes the CPU load to drop, is likely incorrect. The applications are both threaded, meaning the GUI repaints should not influence the CPU load.
Just to be sure I first tried to remove all GUI components from the application, leaving only a blank form. That did not increase the CPU load of the program. I then went through and stripped out all calls to Synchronize in the working threads which were used to update the UI. This had the same result: The CPU load did not change.
The code in the service looks like this:
procedure TLsOpcServer.ServiceExecute(Sender: TService);
begin
  // Initialize OPC server as NT Service
  dmEngine.AddToLog( sevInfo, 'Service', 'Name', Sender.Name );
  AddLocalServiceKeysToRegistry( Sender.Name );
  dmEngine.AddToLog( sevInfo, 'Service', 'Execute', 'Started' );
  dmEngine.Start( True );
  //
  while not Terminated do
  begin
    ServiceThread.ProcessRequests( True );
  end;
  dmEngine.Stop;
  dmEngine.AddToLog( sevInfo, 'Service', 'Execute', 'Stopped' );
end;
dmEngine.Start will start and register the OPC server and initialize a socket. It then starts a thread which does... something to incoming OPC signals. The exact same call is made in FormCreate on the main form of the GUI application.
I'm going to look into how the GUI application starts next; I didn't write this code, so trying to puzzle out how it works is a bit of an adventure :)
EDIT 2:
This is a little bit interesting. I ran both applications for exactly 1 minute each, running AQTime to benchmark them. This is the most interesting part of the results:
In the service:
Procedure name: TSignalList::HandleChild
Execution time: 20.105963821084
Hit count: 5961231
In the GUI application:
Procedure name: TSignalList::HandleChild
Execution time: 7.62424101324976
Hit count: 6383010
EDIT 3:
I'm finally back in a position where I can keep looking at this problem. I have found two procedures which both have about the same hit count during a five-minute run, yet in the service the execution time is much higher. For HandleValue, the hit count is 4 300 258 with an execution time of 21.77 s in the service, while in the GUI application the hit count is 4 254 018 with an execution time of 9.75 s.
The code looks like this:
function TSignalList.HandleValue(const Signal: string; var Tag: TTag; const CreateIfNotExist: Boolean): HandleStatus;
var
  Index: integer;
begin
  result := statusNoSignal;
  Tag := nil;
  if not Assigned( Values ) then
  begin
    Values := TValueStrings.Create;
    Values.CaseSensitive := defDefaultCase;
    Values.Sorted := True;
    Values.Duplicates := dupIgnore;
    Index := -1; // Guaranteed no items in list
  end else
  begin
    Index := Values.IndexOf( Signal );
  end;
  if Index = -1 then
  begin
    if CreateIfNotExist then
    begin
      // Value signal does not exist; create it
      Tag := TTag.Create;
      if Values.AddObject( Signal, Tag ) > -1 then
      begin
        result := statusAdded;
      end;
    end;
  end else
  begin
    Tag := TTag( Values.Objects[ Index ] );
    result := statusExist;
  end;
end;
Both applications enter the "CreateIfNotExist" case exactly the same number of times. TValueStrings is a direct descendant of TStringList without any overloads.
Have you timed the execution of core functionality? If so, did you measure a difference? I think, if you do, you won't find much difference between them, unless you add other functionality, like updating the GUI, to the code of that core functionality.
Consuming less CPU doesn't mean it's running slower. The GUI app could be waiting more often on repaints, which depend on the GPU as well (and maybe other parts of the system). Therefore, the GUI app may consume less CPU power, because the CPU is waiting for other parts of your system before it can continue with the next instruction.
