I'm using Oracle Database version 12.1.0.2.0.
I have an Oracle function that I need to ensure is executed serially. I want to restrict concurrent execution of this function by different sessions.
A couple of ways this can be achieved are:
Update a "static" row in a parameter table at the beginning of the function and commit before the function ends. Since no other session will be able to update the same row, this ensures concurrent access is restricted.
Implement the logic using user locks.
Is there any other way this control can be achieved? I've read about latches, but I believe they are an internal mechanism to control access to Oracle data structures (mainly resources in the SGA).
Is there a way to implement a latch (or something similar) to fulfill my requirement?
I understand a latch is lightweight, whereas locks are heavier in comparison.
Thanks in advance.
Oracle implements DBMS_LOCK.ALLOCATE_UNIQUE for this purpose.
At the beginning of the procedure, allocate a unique lockhandle for a given lockname.
Then REQUEST the lock:
/* lock parallel executions */
DBMS_LOCK.ALLOCATE_UNIQUE( v_lockname, v_lockhandle);
v_res := DBMS_LOCK.REQUEST( lockhandle=>v_lockhandle, release_on_commit => TRUE);
Perform your serial stuff and at the end of the function RELEASE the lock.
v_res := DBMS_LOCK.RELEASE (v_lockhandle);
Do not forget to release the lock in the EXCEPTION section as well, so that other sessions are not left blocked after an abnormal termination.
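A minimal sketch of the whole pattern, assuming an explicit release (release_on_commit => FALSE), a 60-second timeout, and a placeholder lock name; adapt the timeout and the serialized work to your function:

DECLARE
  v_lockname   VARCHAR2(128) := 'MY_SERIAL_FUNCTION';  -- hypothetical lock name
  v_lockhandle VARCHAR2(128);
  v_res        INTEGER;
BEGIN
  -- map the lock name to a handle (ALLOCATE_UNIQUE commits internally)
  DBMS_LOCK.ALLOCATE_UNIQUE(v_lockname, v_lockhandle);
  v_res := DBMS_LOCK.REQUEST(lockhandle        => v_lockhandle,
                             lockmode          => DBMS_LOCK.X_MODE,
                             timeout           => 60,    -- give up after 60s instead of MAXWAIT
                             release_on_commit => FALSE);
  IF v_res NOT IN (0, 4) THEN  -- 0 = success, 4 = already owned by this session
    RAISE_APPLICATION_ERROR(-20001, 'Could not acquire lock, status: ' || v_res);
  END IF;

  -- ... the code that must run serially goes here ...

  v_res := DBMS_LOCK.RELEASE(v_lockhandle);
EXCEPTION
  WHEN OTHERS THEN
    v_res := DBMS_LOCK.RELEASE(v_lockhandle);  -- do not leave other sessions blocked
    RAISE;
END;
/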
We are using Oracle 19c and have the code below written in PL/SQL. The application is written in Java 8.
HANDLE := randomNum;
DBMS_LOCK.ALLOCATE_UNIQUE('LOCK5', HANDLE);
STATUS := DBMS_LOCK.REQUEST(LOCKHANDLE => HANDLE, LOCKMODE => DBMS_LOCK.X_MODE);
SELECT TR INTO P_NUMBER FROM T_MONITORING WHERE PROCESS_S = 0 AND ROWNUM = 1;
UPDATE T_MONITORING td SET td.PROCESS_S = 1 WHERE td.TR = P_NUMBER AND td.PROCESS_S = 0;
NEW_TELEPHONE_NUMBER := PHONE_NUMBER;
STATUS := DBMS_LOCK.RELEASE(LOCKHANDLE => HANDLE);
Table T_MONITORING contains 10 million records, and multiple threads are calling this stored procedure. The objective is to process every record in this table. The procedure returns P_NUMBER, and the threads do some business logic and update this table (not shown here, as it is a different stored procedure).
I would like to understand the locking mechanism written in the PL/SQL. I googled it but couldn't clearly understand it.
Does this code make sure that multiple threads calling this stored procedure won't get the same P_NUMBER?
Please help me understand it.
If you do not provide a timeout to wait for when calling DBMS_LOCK.REQUEST, it will use the default value MAXWAIT. This value is described on the same page and has the following semantics:
maxwait constant integer := 32767;
The constant maxwait waits forever.
So, as long as the lock is requested in X mode (exclusive), no other session will execute any of the code below it (including the SELECT and UPDATE) until the lock is released. They will not get any value of P_NUMBER in the output.
When the lock is released in the correct way (not by timeout), processed rows will have PROCESS_S = 1 and will no longer satisfy the WHERE predicate of the SELECT statement: where PROCESS_S=0 and ROWNUM=1.
But if (for some reason) the lock expires between the SELECT and the UPDATE, the procedure may hand the same P_NUMBER to different sessions of the application. This can be handled with SELECT ... FOR UPDATE SKIP LOCKED, which does not expire as long as the session is active.
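A sketch of that alternative against the question's table (the cursor and variable names are mine):

DECLARE
  CURSOR c_next IS
    SELECT tr
    FROM   t_monitoring
    WHERE  process_s = 0
    FOR UPDATE SKIP LOCKED;      -- each session skips rows locked by the others
  p_number t_monitoring.tr%TYPE;
BEGIN
  OPEN c_next;
  FETCH c_next INTO p_number;    -- locks the fetched row for this session only
  IF c_next%FOUND THEN
    UPDATE t_monitoring
    SET    process_s = 1
    WHERE  CURRENT OF c_next;
  END IF;
  CLOSE c_next;
  COMMIT;                        -- persists the claim and releases the row lock
END;
/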
I would like to know if there is a connect parameter that I can use in a JDBC Thin Oracle connection URL to tell the Oracle DB that I want to use parallelism in processing the queries.
The application that should use this parameter generates statements at runtime and fires them against the database, so I can't update or optimize them. Nearly every query runs into timeouts, and the user on the other side gets an error message.
If I take the generated statements and send them with a /*+ parallel */ hint from SQL Developer, I get much better performance.
Maybe someone has a hint for how I can achieve better performance.
You could use a logon trigger to force parallel execution of all query statements in the session for which parallelization is possible. This would override any default parallelism property on individual objects.
CREATE OR REPLACE TRIGGER USER1.LOGON_TRG
AFTER LOGON ON SCHEMA
BEGIN
EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL QUERY 4';
END;
/
https://docs.oracle.com/en/database/oracle/oracle-database/19/vldbg/parameters-parallel-exec.html#GUID-FEDED00B-57AF-4BB0-ACDB-73F43B71754A
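If you go this route, one quick sanity check (assuming you can query v$session) is that the session's parallel query status reads FORCED after logon:

SELECT pq_status   -- should show FORCED once the logon trigger has fired
FROM   v$session
WHERE  sid = SYS_CONTEXT('USERENV', 'SID');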
Is there a way to retrieve output from PL/SQL continuously, rather than waiting until the stored procedure completes its execution? By continuously I mean as soon as it executes each EXECUTE IMMEDIATE.
Is there any other mechanism to retrieve PL/SQL output?
As per the Oracle docs:
Output that you create using PUT or PUT_LINE is buffered in the SGA. The output cannot be retrieved until the PL/SQL program unit from which it was buffered returns to its caller. So, for example, Enterprise Manager or SQL*Plus do not display DBMS_OUTPUT messages until the PL/SQL program completes.
As far as I know, there is a way, but not with DBMS_OUTPUT.PUT_LINE. The technique I use is:
create a log table which will accept the values you'd normally display using DBMS_OUTPUT.PUT_LINE. The columns I use are
ID (a sequence, to be able to sort data)
Date (to know what happened when; it might not be enough for sorting purposes, because operations that take a very short time to finish might have the same timestamp)
Message (a VARCHAR2 column, large enough to accept the whole information)
create a logging procedure which will insert values into that table. It should be an autonomous transaction so that you can COMMIT within it (and access the data from other sessions) without affecting the main transaction; see the sketch after the steps below
Doing so, you'd
start your PL/SQL procedure
call the logging procedure whenever appropriate (basically, where you'd put the DBMS_OUTPUT.PUT_LINE call)
in another session, periodically query the log table as select * from log_table order by ID desc
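A minimal sketch of the log table and the autonomous logging procedure described above (all names are mine):

CREATE SEQUENCE log_seq;

CREATE TABLE log_table (
  id      NUMBER PRIMARY KEY,
  datum   TIMESTAMP DEFAULT SYSTIMESTAMP,
  message VARCHAR2(4000)
);

CREATE OR REPLACE PROCEDURE p_log (p_message IN VARCHAR2) IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- commits here do not affect the main transaction
BEGIN
  INSERT INTO log_table (id, message)
  VALUES (log_seq.NEXTVAL, p_message);
  COMMIT;                         -- makes the row visible to other sessions immediately
END;
/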
Additionally, you could write a simple Apex application with one report page which selects from the logging table and refreshes periodically (for example, every 10 seconds or so), letting you watch the main PL/SQL procedure's execution.
The approach that Littlefoot has provided is what I normally use as well.
However, there is another approach that you can try for a specific use case. Let's say you have a long-running batch job (like a payroll process for example). You do not wish to be tied down in front of the screen monitoring the progress. But you want to know as soon as the processing of any of the rows of data hits an error so that you can take action or inform a relevant team. In this case, you could add code to send out emails with all the information from the database as soon as the processing of a row hits an error (or meets any condition you specify).
You can do this using the functions and procedures provided in the 'UTL_MAIL' package. UTL_MAIL Documentation from Oracle
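A hedged sketch of such an alert from inside the per-row exception handler (the addresses are placeholders, and SMTP_OUT_SERVER must already be configured for UTL_MAIL to work):

BEGIN
  NULL;  -- hypothetical processing of one row goes here
EXCEPTION
  WHEN OTHERS THEN
    -- alert immediately instead of waiting for the whole batch to finish
    UTL_MAIL.SEND(
      sender     => 'batch-job@example.com',
      recipients => 'oncall-team@example.com',
      subject    => 'Batch row failed',
      message    => 'A row failed with: ' || SQLERRM);
END;
/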
For monitoring progress without the overhead of logging to tables and autonomous transactions, I use:
DBMS_APPLICATION_INFO.SET_CLIENT_INFO( TO_CHAR(SYSDATE, 'HH24:MI:SS') || ' On step A' );
and then monitor v$session.client_info for your session. It's all in memory and won't persist, of course, but it is a quick, easy, near-zero-cost way of posting progress.
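For example, from a second session (a sketch; filter on your SID as needed):

SELECT sid, client_info
FROM   v$session
WHERE  client_info IS NOT NULL;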
Another option (Linux/UNIX) that I like for centralised logging that is persistent, and that again avoids logging in the database, is interfacing to syslog and having Splunk or similar pick the messages up. If you have Splunk or similar, this makes the monitoring viewable without having to connect to the database and query it directly. See this post for how to do it:
https://community.oracle.com/thread/2343125
I've been testing Oracle AQ for the first time. I have managed to create 2000 rows of test inserts into the queue I created.
Now I'd like to clear those out. As I was teaching myself, I set the expiry time to a month. I can't wait that long, and I don't think I should just delete them from the queue table.
What's the best way to do this?
You can use the DBMS_AQADM.PURGE_QUEUE_TABLE procedure.
SOLUTION
The SQL looks something like this:
-- purge queue (a NULL purge condition purges every message)
DECLARE
  po_t dbms_aqadm.aq$_purge_options_t;
BEGIN
  dbms_aqadm.purge_queue_table('MY_QUEUE_TABLE', NULL, po_t);
END;
/
Just do a delete on the queue table.
Never mind, just did a check and that's not right:
Oracle Streams AQ does not support data manipulation language (DML) operations on queue tables or associated index-organized tables (IOTs), if any. The only supported means of modifying queue tables is through the supplied APIs. Queue tables and IOTs can become inconsistent and therefore effectively ruined, if DML operations are performed on them.
So, you'll have to create a little PL/SQL routine to pull the items off.
Use the dbms_aq package. Check the example from the documentation: Dequeuing Messages.
Scroll down a little bit and there's a complete example.
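For reference, a hedged sketch of draining a queue that way (the queue name and payload type are hypothetical):

DECLARE
  deq_opts  DBMS_AQ.DEQUEUE_OPTIONS_T;
  msg_props DBMS_AQ.MESSAGE_PROPERTIES_T;
  payload   my_payload_type;   -- whatever object type the queue was created with
  msgid     RAW(16);
BEGIN
  deq_opts.wait       := DBMS_AQ.NO_WAIT;        -- do not block once the queue is empty
  deq_opts.navigation := DBMS_AQ.FIRST_MESSAGE;
  LOOP
    DBMS_AQ.DEQUEUE(queue_name         => 'MY_QUEUE',
                    dequeue_options    => deq_opts,
                    message_properties => msg_props,
                    payload            => payload,
                    msgid              => msgid);
    COMMIT;
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    IF SQLCODE = -25228 THEN  -- ORA-25228: nothing left to dequeue
      NULL;                   -- queue drained
    ELSE
      RAISE;
    END IF;
END;
/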
The trigger below is delaying my insert response. How can I prevent this?
create or replace
TRIGGER GETHTTPONINSERT
BEFORE INSERT ON TABLENAME
FOR EACH ROW
Declare
--
BEGIN
-- The inserted data is transferred via HTTP to a remote location
END;
EDIT: People are telling me to do batch jobs, but I would rather have the data earlier than have 100% consistency. The advantage of the trigger is that it happens as soon as the data arrives, but I can't afford the insert response delay.
One approach is to have the trigger create a dbms_job that runs once per insert to perform the HTTP transfer. The dbms_job creation is relatively quick, and you can think of this as effectively spawning a new thread in parallel.
See http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:7267435205059 for further info - his example deals with sending email, but the idea is the same.
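A hedged sketch of that approach (the table, the id column, and the transfer procedure my_http_transfer are all hypothetical):

CREATE OR REPLACE TRIGGER gethttponinsert
BEFORE INSERT ON tablename
FOR EACH ROW
DECLARE
  v_job BINARY_INTEGER;
BEGIN
  -- queue a one-off job; the HTTP call happens in the background
  DBMS_JOB.SUBMIT(
    job  => v_job,
    what => 'my_http_transfer(' || :NEW.id || ');');
  -- job submission is transactional: the job only runs if the insert commits
END;
/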
There is a perfect solution for this exact situation called Database Change Notification.
You can think of it almost exactly like an async trigger.
You use the DBMS_CHANGE_NOTIFICATION package to tell Oracle which tables to watch and what to do when a change occurs. You can monitor for DML and DDL, and you can have Oracle batch the changes (i.e., wait for 10 changes to occur before firing). It will call a stored procedure with an object containing all the rowids of the changed rows... you can decide how to handle them, including making an HTTP call. The callback does not have to finish for the insert to commit.
Documentation for 10gR2
Maybe you could create a local table that stores the info you have to transfer, and create a job that executes every X minutes. The job reads from the table, transfers all the data, and deletes the transferred data from the table.
Isn't it possible to use the Oracle replication options? You send your inserted data via HTTP to a remote location in a before- or after-statement trigger. What will happen when there is a rollback? Your HTTP send message will not be rolled back, so you have inconsistent data.
Well, obviously, you could prevent the delay by removing the trigger....
Otherwise, the trigger will ALWAYS be executed before your insert; that's what a BEFORE INSERT trigger is made for.
Or maybe you could give us more details on what you need exactly?
If you are getting to this question after 2020, look at DBMS_CQ_NOTIFICATION:
https://docs.oracle.com/en/database/oracle/oracle-database/19/arpls/DBMS_CQ_NOTIFICATION.html
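For completeness, a hedged sketch of registering a callback with it (the callback procedure my_callback and the table are placeholders; the registering user needs the CHANGE NOTIFICATION privilege):

DECLARE
  reginfo CQ_NOTIFICATION$_REG_INFO;
  regid   NUMBER;
BEGIN
  reginfo := CQ_NOTIFICATION$_REG_INFO(
               'my_callback',                    -- PL/SQL procedure Oracle will invoke
               DBMS_CQ_NOTIFICATION.QOS_ROWIDS,  -- deliver the rowids of changed rows
               0, 0, 0);                         -- no timeout, no operations filter, no lag
  regid := DBMS_CQ_NOTIFICATION.NEW_REG_START(reginfo);
  -- any query run between NEW_REG_START and REG_END registers its objects
  FOR r IN (SELECT id FROM tablename) LOOP NULL; END LOOP;
  DBMS_CQ_NOTIFICATION.REG_END;
END;
/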