I am trying to resolve an issue where the application user is shown ORA-00054 in Oracle Agile PLM. From the SQL logs I found that the following command causes the issue:
SELECT 1 FROM CLUSTER_THREAD_LOCKS WHERE LOCK_TYPE = 3 FOR UPDATE OF LOCK_COUNT, LAST_LOCK_TIME NOWAIT
Status:FAILURE
Reason - ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired
I cannot set autocommit on, as it is a session-based setting, and the same goes for ddl_lock_timeout; I wanted a global solution. Instead, I am trying to write a procedure, scheduled via DBMS_JOB, that finds the blocking session(s) on CLUSTER_THREAD_LOCKS and commits their transactions. The problem I am facing is how to issue a commit in another session, given its SID and SERIAL#, from a stored procedure. This is the procedure:
CREATE OR REPLACE PROCEDURE SP_COMMIT_SESSION
AS
  v_session_no int;
  v_sid        int;
BEGIN
  for c in
  (
    select obj.object_id, ses.sid, ses.serial#, user_objects.object_name,
           to_char(ses.logon_time, 'MM-DD-YYYY HH24:MI:SS') lock_date
    from v$session ses, v$locked_object obj, user_objects
    where ses.sid = obj.session_id
      and user_objects.object_id = obj.object_id
      and user_objects.object_name = 'CLUSTER_THREAD_LOCKS'
  )
  loop
    v_session_no := c.serial#;
    v_sid        := c.sid;
    /*
    Issue commit here
    */
  end loop;
end;
I can neither kill a session, as I do not know its impact, nor change how the SQL command is issued from the application. Could anyone point me in the right direction?
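As a first diagnostic step, the session holding the row lock can usually be identified from `v$session`. This is a minimal sketch, assuming access to the `v$` views; the `BLOCKING_SESSION` column is populated in 10g and later:

```sql
-- Blocked sessions together with the session blocking them.
SELECT blocked.sid              AS blocked_sid,
       blocked.blocking_session AS blocker_sid,
       blocker.serial#          AS blocker_serial,
       blocker.module,
       blocker.logon_time
FROM   v$session blocked
JOIN   v$session blocker
       ON blocker.sid = blocked.blocking_session
WHERE  blocked.blocking_session IS NOT NULL;
```

Running this while the application reports ORA-00054 should show which session holds the lock on CLUSTER_THREAD_LOCKS.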
I am working on a project that will replace an existing, older one. On one side I am taking data from external views, and on the other the existing production data. The data between the two systems was supposed to be in sync, but it wasn't at any meaningful level. For the most part this has not been a huge issue, as I have largely merged them where possible. In some cases rows were matched on employee IDs, in others on surname and birthdate alone.
The external view data always has two pieces of distinct information: an employee ID and another identifier. The production data will always have one distinct ID, file_number. Sometimes production will have the external view keys, but this is not the norm. These keys are the primary keys on either side, not surname and birthdate.
It's this last comparison that has caused the issue, as we have several requirements where we are only allowed to join on this criterion, such as when an employee existing on the production side gets an entry in the external views. As surname and birthdate are not particularly distinct, I created an exclusion table for records that would otherwise cause issues but are valid (twins, for example). The exclusion table takes all of the offending records from both sides.
As I was unable to figure out a way to get some kind of constraint where these duplicate records would automatically get entered into the exclusion table (I have poor control over data entry), I turned to triggers.
Error
Error report -
SQL Error: ORA-04021: timeout occurred while waiting to lock object
ORA-06512: at "USER.EXCLUSION_TRG", line 4
ORA-04088: error during execution of trigger 'USER.EXCLUSION_TRG'
ORA-06512: at "USER.VIEW_DUPLICATE_TRG", line 4
ORA-04088: error during execution of trigger 'USER.VIEW_DUPLICATE_TRG'
04021. 00000 - "timeout occurred while waiting to lock object %s%s%s%s%s"
*Cause: While waiting to lock a library object, a timeout occurred.
*Action: Retry the operation later.
First trigger for VIEW Table
CREATE OR REPLACE TRIGGER VIEW_DUPLICATE_TRG
AFTER INSERT OR UPDATE ON VIEW_PERSON
BEGIN
INSERT INTO VIEW_EXCLUSION_PERSON (EMPLID, PRI, COMMENTS)
select emplid, PRI, 'VIEW CREATED '||SYSDATE from (
select upper(CONVERT(last_name, 'US7ASCII')) LAST_NAME, birthdate,first_name,emplid, pri, count(*) over (partition by upper(CONVERT(last_name, 'US7ASCII')), birthdate) duplicate_count from VIEW_PERSON
) K where duplicate_count > 1
and NOT EXISTS (select emplid from exclusion_person Z WHERE K.EMPLID=Z.EMPLID);
END;
Second trigger for Prod table
CREATE OR REPLACE TRIGGER PROD_DUPLICATE_TRG
AFTER INSERT OR UPDATE ON BACKGROUND_INFO
BEGIN
INSERT INTO EXCLUSION_PERSON (FILE_NUMBER, COMMENTS)
SELECT FILE_NUMBER, 'PROD CREATED '||SYSDATE
FROM BACKGROUND_INFO
WHERE (SURNAME, BIRTHDATE) IN
(SELECT SURNAME, BIRTHDATE
FROM BACKGROUND_INFO
GROUP BY SURNAME, BIRTHDATE
HAVING COUNT(*) > 1
)
AND FILE_NUMBER NOT IN (SELECT FILE_NUMBER FROM exclusion_person WHERE FILE_NUMBER IS NOT NULL);
END;
Third trigger for Exclusion table
CREATE OR REPLACE TRIGGER EXCLUSION_TRG
AFTER INSERT ON EXCLUSION_PERSON
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
EXECUTE IMMEDIATE 'ALTER TRIGGER EXCLUSION_TRG DISABLE';
merge into EXCLUSION_PERSON E
using (select file_number, DECODE(TRIM(PRI), '99999999', NULL, PRI) PRI from administrative_info) A
on (E.PRI=A.PRI)
when matched then update set E.file_number = A.file_number, E.COMMENTS = E.COMMENTS||', MATCHED ON PRI ON '||SYSDATE
WHERE E.FILE_NUMBER IS NULL AND E.PRI IS NOT NULL AND A.PRI IS NOT NULL;
MERGE INTO VIEW_EXCLUSION_PERSON E
USING (SELECT FILE_NUMBER, EMPLID FROM VIEW_PERSON) P
ON (P.EMPLID = E.EMPLID)
WHEN MATCHED THEN UPDATE SET
E.FILE_NUMBER = P.FILE_NUMBER,
E.COMMENTS = E.COMMENTS||' MATCHED FROM PERSON '||SYSDATE
WHERE E.FILE_NUMBER IS NULL AND E.EMPLID IS NOT NULL AND P.FILE_NUMBER IS NOT NULL;
EXECUTE IMMEDIATE 'ALTER TRIGGER VIEW_DUPLICATE_TRG DISABLE';
MERGE INTO VIEW_PERSON P
USING (SELECT FILE_NUMBER, EMPLID FROM EXCLUSION_PERSON ) E
ON (P.EMPLID = E.EMPLID)
WHEN MATCHED THEN UPDATE SET P.FILE_NUMBER = E.FILE_NUMBER
WHERE P.FILE_NUMBER IS NULL AND E.EMPLID IS NOT NULL AND E.FILE_NUMBER IS NOT NULL;
EXECUTE IMMEDIATE 'ALTER TRIGGER VIEW_DUPLICATE_TRG ENABLE';
EXECUTE IMMEDIATE 'ALTER TRIGGER EXCLUSION_TRG ENABLE';
END;
So the issue seems to be that the first trigger, VIEW_DUPLICATE_TRG, is locking something and not releasing that lock when EXCLUSION_TRG attempts to run. When I go looking for this %s%s%s%s%s object, I just can't find it, and none of my code is calling this object.
The major issue is here:
EXECUTE IMMEDIATE 'ALTER TRIGGER EXCLUSION_TRG DISABLE';
This tries to disable the trigger while the trigger itself is executing!
This will never work: you can't disable a trigger while it's executing, so the attempt to change its state from within its own execution is blocked.
You can see this with the following:
create table t (
c1 int
);
create or replace trigger trg
after insert on t
declare
pragma autonomous_transaction;
begin
execute immediate 'alter trigger trg disable';
end;
/
insert into t values ( 1 );
The insert will be stuck, waiting to disable the trigger. But the trigger is executing, so it can't be disabled. AAAAARGGGH!
This whole process needs redesigning. Ideally without any triggers!
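One trigger-free direction is to move the duplicate detection into a procedure and run it on a schedule, so no trigger ever has to disable itself. This is a sketch only, reusing the question's table and column names as assumptions and showing just the production-side step; the job name and interval are placeholders:

```sql
-- Sketch: populate the exclusion table on a schedule instead of in triggers.
CREATE OR REPLACE PROCEDURE refresh_exclusions AS
BEGIN
  INSERT INTO exclusion_person (file_number, comments)
  SELECT file_number, 'PROD CREATED ' || SYSDATE
  FROM   background_info
  WHERE  (surname, birthdate) IN (
           SELECT surname, birthdate
           FROM   background_info
           GROUP  BY surname, birthdate
           HAVING COUNT(*) > 1)
  AND    file_number NOT IN (SELECT file_number
                             FROM   exclusion_person
                             WHERE  file_number IS NOT NULL);
  COMMIT;
END;
/

BEGIN
  DBMS_SCHEDULER.create_job(
    job_name        => 'REFRESH_EXCLUSIONS_JOB',  -- placeholder name
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'REFRESH_EXCLUSIONS',
    repeat_interval => 'FREQ=MINUTELY;INTERVAL=15',
    enabled         => TRUE);
END;
/
```

The PRI/EMPLID merges from EXCLUSION_TRG could be folded into the same procedure, which then runs in a normal transaction with no autonomous-transaction or DDL tricks.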
I need to do this task and update a table that has a lot of rows.
The table has two columns, FOO and BAR.
FOO is the primary key and I know its values; both columns are numbers, but I have no values in BAR yet.
I can run every query manually without any problems, but I wrote this PL/SQL so it runs automatically, since I need to find each BAR value with another query.
create or replace procedure FxB_pro
IS
tmpFIELD NUMBER := 0;
i NUMBER := 0;
cursor c1 is
SELECT * FROM FooXBar WHERE BAR IS NULL;
BEGIN
FOR CTUpdate IN c1
LOOP
BEGIN
SELECT t5.bar INTO tmpFIELD FROM table_1 t1, table_2 t2, table_3 t3, table_4 t4, table_5 t5, table_6 t6
where t1.fielda_id = t2.fielda_id
and t2.fielda_id = t3.fielda_id
and t3.fieldb_id = t4.fieldb_id
and t3.fieldb_id = t6.fieldb_id
and t4.fieldd_id = t5.fieldc_id
and t1.fieldc = CTUpdate.FOO
and rownum = 1;
EXCEPTION
WHEN NO_DATA_FOUND THEN
tmpFIELD :=null;
END;
UPDATE FooXBar set BAR = tmpFIELD where FOO=CTUpdate.FOO;
i := i+1;
IF mod(i, 1000) = 0 THEN -- Commit every 1000 records
COMMIT;
END IF;
END LOOP;
COMMIT;
END;
I've tested this in my test environment: the PL/SQL compiles and runs. But when I run it in production, I get this error on the SELECT that populates tmpFIELD:
Error(12,11): PL/SQL: SQL Statement ignored
Error(12,143): PL/SQL: ORA-01031: insufficient privileges
I can't figure out why this is happening. Can someone please help me?
Your privileges are assigned via a ROLE.
This is fine with direct SQL, but it does not work inside (definer's-rights) PL/SQL, where roles are disabled.
You need the privileges granted directly to your user.
While testing queries intended for PL/SQL, set in advance:
set role none;
This deactivates the privileges acquired via roles and exposes problems that would otherwise only appear in PL/SQL.
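A minimal sketch of the fix, with the schema, table, and user names as placeholders: grant the object privilege directly to the executing user, then recompile the procedure:

```sql
-- As the owner of the tables the procedure reads (names are placeholders):
GRANT SELECT ON owner_schema.table_5 TO app_user;

-- Then, connected as app_user:
ALTER PROCEDURE FxB_pro COMPILE;
```

Each table referenced inside the procedure needs its own direct grant; a grant through a role is not enough.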
I have written this function for an Oracle db for a class project:
create or replace Function loginUser ( name_in IN varchar2, pass_in IN varchar2 )
Return Number
IS
cursor c1 is
SELECT u_id
FROM userinfo
WHERE username = name_in AND pass = pass_in AND lockedout='N';
user_id_return number;
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
open c1;
fetch c1 into user_id_return;
if c1%notfound then
user_id_return := 0;
INSERT INTO LoginAttempt(username, whenattempted, attempt_status) VALUES (name_in, SYSDATE, 'N');
commit;
ELSE
INSERT INTO LoginAttempt(username, whenattempted, attempt_status) VALUES (name_in, SYSDATE, 'Y');
commit;
END IF;
close c1;
Return user_id_return;
EXCEPTION
WHEN OTHERS THEN
raise_application_error(-20001,'An error was encountered - '||SQLCODE||' -ERROR- '||SQLERRM);
END;
It works great; I get an insert into a table called LoginAttempt when you call
SELECT loginUser('name','pass') FROM DUAL;
The issue, however, is that new records and updates to userinfo are not reflected in the SELECT statement at the top of the function.
I have to recompile the function each time I update the userinfo table.
Why is this? Is this how functions with SELECT statements work? Is the table that is being SELECTED from compiled when the function is compiled?
Is this related to the PRAGMA AUTONOMOUS_TRANSACTION bit?
The schema for the table can be found on github (https://github.com/tmsimont/cs3810schema/blob/master/export.sql)
Because your function executes in an autonomous transaction (i.e. a transaction separate from the calling one), it cannot see data that has not yet been committed by the caller. Until your code commits, the changes it makes are not visible to any other transaction, including the autonomous one your function runs in.
This has nothing to do with needing to recompile the function. The reason compiling made it suddenly see the data is that DDL always issues an implicit commit, so recompiling had the side effect of committing the data you had inserted.
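A minimal demonstration of the effect (the table and function names are invented for the example):

```sql
CREATE TABLE demo_t ( n NUMBER );

CREATE OR REPLACE FUNCTION count_demo RETURN NUMBER IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_count NUMBER;
BEGIN
  SELECT COUNT(*) INTO l_count FROM demo_t;
  COMMIT;  -- end the autonomous transaction cleanly
  RETURN l_count;
END;
/

INSERT INTO demo_t VALUES ( 1 );
-- The autonomous transaction cannot see the uncommitted row, so this returns 0:
SELECT count_demo FROM dual;
COMMIT;
-- Once the caller commits, the row is visible, so this returns 1:
SELECT count_demo FROM dual;
```

Dropping the PRAGMA from the function (and the DML that made it necessary) would make it see the caller's uncommitted changes again.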
I have created a table which contains the host names of all the trusted sources. I have written an Oracle logoff trigger to fetch details of all the SQL executed by a session if the connection's host is not in the snif_session table. I am writing the output to a UTL_FILE file which contains the SID, host name, and connection time.
SQL> select * from snif_Session;
ALLOWED_HOST
--------------------------------------------------
RND1
WORKGROUP\RND1
Where I am getting stuck is which query to use to get all the SQL executed by that particular session (I can get the SID from v$mystat).
Which of these works best:
select a.sql_id
,b.sql_text
from dba_hist_active_sess_history a
,dba_hist_sqltext b
where a.sql_id=b.sql_id
or
select s.sid
, s.serial#
, a.sql_text
from v$session s
join v$sqlarea a
on a.hash_value = s.sql_hash_value ;
This is the block I have written, which I will be placing inside a trigger.
declare
machine_id varchar2(50);
val int;
auth_terminal varchar2(50);
check_machine varchar2(1000);
mydate char(50);
osuser_1 varchar2(50);
sid_1 int;
sql_query_1 varchar2(5000);
machine_1 varchar2(50);
trace_info UTL_FILE.FILE_TYPE;
begin
select machine into check_machine
from v$session
where sid in (select distinct(sid) from v$mystat) ;
select count(*) into val
from snif_session
where allowed_host=check_machine;
if ( 1=val) then
dbms_output.put_line(check_machine|| ' dont check host' );
else
dbms_output.put_line(check_machine || ' check host' );
end if;
select osuser,sid,machine
into osuser_1,sid_1,machine_1
from v$session
where sid in (select distinct(sid) from v$mystat);
SELECT TO_char(systimestamp,'mm/dd/yyyy HH24:MI:SS') into mydate
FROM DUAL;
dbms_output.put_line(mydate || sid_1 || ' ' || osuser_1 || ' '|| machine_1);
trace_info := UTL_FILE.FOPEN('UTL_DIR', 'trace_info_file.txt', 'W');
UTL_FILE.PUTF(trace_info,mydate||' '||sid_1||' '||osuser_1||' '|| machine_1);
UTL_FILE.FCLOSE(trace_info);
EXCEPTION
WHEN utl_file.invalid_path THEN
raise_application_error(-20000, 'ERROR: Invalid PATH FOR file.');
end;
I need to include the SQL queries executed by the session in the UTL_FILE output as well.
"I need to include the 'sql queries' also executed by session"
Neither of your suggested queries will give you all the SQL executed by a session.
V$SESSION is a dynamic view, so it just shows what is happening in a session right now.
DBA_HIST_ACTIVE_SESS_HISTORY is a series of snapshots of running SQL. It is intended for performance profiling, and as such it is basically a random sub-set of active statements. Also, it is part of the Diagnostics and Tuning Pack: you will be in breach of your license if you use it without paying the additional charge.
It appears that what you really need is an audit trail. Instead of rolling your own, why not investigate the functionality Oracle already has? There's AUDIT to track DDL activity, and Fine-Grained Auditing to monitor lower-level DML.
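As a sketch of the built-in route (the schema, table, and policy names below are assumptions), Fine-Grained Auditing records every statement touching a table via `DBMS_FGA.ADD_POLICY`:

```sql
BEGIN
  DBMS_FGA.add_policy(
    object_schema   => 'APP_OWNER',        -- assumption: your schema
    object_name     => 'BACKGROUND_INFO',  -- assumption: table to watch
    policy_name     => 'AUDIT_ALL_DML',
    statement_types => 'SELECT,INSERT,UPDATE,DELETE');
END;
/

-- Captured statements, including the SQL text, then appear in:
SELECT db_user, sql_text, timestamp
FROM   dba_fga_audit_trail
WHERE  object_name = 'BACKGROUND_INFO';
```

This captures the actual SQL text per statement, which neither V$SESSION nor the ASH views guarantee.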
Let's say I have a procedure called myproc. This is a complex process, and I cannot allow two instances of the proc to execute at the same time.
Currently I do this using dbms_application_info.set_module:
procedure start_process is
begin
dbms_application_info.set_module('myproc', 'running');
end;
and verify it before running the process:
select 'S'
from v$session v
where v.module = 'myproc'
and v.action = 'running';
At the database level, is there a better way to check this?
Use dbms_lock.allocate_unique along with dbms_lock.request. The usage notes say:
The first session to call ALLOCATE_UNIQUE with a new lock name causes
a unique lock ID to be generated and stored in the dbms_lock_allocated table.
Subsequent calls (usually by other sessions) return the lock ID previously generated.
I think this could be what you're after.
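A minimal sketch of that approach; the lock name is arbitrary, and the modes and return codes are from the DBMS_LOCK package:

```sql
DECLARE
  l_lockhandle VARCHAR2(128);
  l_result     INTEGER;
BEGIN
  -- Map the name 'MYPROC_LOCK' to a unique lock id (commits internally).
  DBMS_LOCK.allocate_unique('MYPROC_LOCK', l_lockhandle);

  -- Try to take it exclusively; give up immediately if someone holds it.
  l_result := DBMS_LOCK.request(lockhandle        => l_lockhandle,
                                lockmode          => DBMS_LOCK.x_mode,
                                timeout           => 0,
                                release_on_commit => FALSE);
  IF l_result = 0 THEN
    -- We hold the lock: safe to run myproc, then release it.
    -- myproc;
    l_result := DBMS_LOCK.release(l_lockhandle);
  ELSE
    -- 1 = timeout: another session is already running myproc.
    NULL;
  END IF;
END;
/
```

Unlike the set_module check, the lock is released automatically if the holding session dies, so a crashed run cannot leave the process permanently "running".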
You can create a processes table. Ensure that each process has some sort of unique identifier, for instance a hash of the owner and object_name from dba_objects, so you can generate it dynamically in your package.
You then create a function to lock each process's row individually as the process runs.
As #Sergio pointed out in the comments, this would not work if for some reason you needed to commit in the middle of the process, unless, of course, you re-locked the row after each commit.
function locking ( Pid number ) return number is
   l_locked number := 0;
begin
   select 1
     into l_locked
     from processes
    where id = Pid
      -- fail immediately if the proc is running
      for update nowait
        ;
   return l_locked;
exception when others then
   -- ORA-00054: another session holds the row lock
   return 0;
end;
This has the benefit of locking that row in processes for you until the session that's currently running your procedure has finished.
You then wrap this in your procedure:
-- locking returns 0 if the row could not be locked,
-- i.e. another session is already running this process.
if locking( 123 ) = 0 then
   return;
end if;
As long as each procedure has a unique id (the important bit), your procedure will exit cleanly.
It might not apply in your situation but, my normal way of doing this is to use mod. Though it doesn't stop two of the same process running, it does ensure that when you have more than one they only run on different data. Something like the following:
procedure my_procedure ( PNumerator number, PDenominator number ) is
cursor c_my_cursor ( CNumerator number, CDenominator number ) is
select columns
from my_table
where mod( ascii(substr(id, -1)), CDenominator ) = CNumerator
;
begin
open c_my_cursor( PNumerator, PDenominator );
...
end;
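Two concurrent instances would then be started with complementary arguments; a sketch, where 2 is the number of parallel runs:

```sql
-- Session 1 processes rows where mod(ascii(substr(id, -1)), 2) = 0
BEGIN my_procedure(0, 2); END;
/
-- Session 2 processes rows where mod(ascii(substr(id, -1)), 2) = 1
BEGIN my_procedure(1, 2); END;
/
```

Each instance sees a disjoint slice of my_table, so concurrent runs never touch the same rows.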