I am trying to set up a scenario of record locks caused by pending distributed transactions.
The goal is to get some rows into DBA_2PC_PENDING for training purposes: forcing commits, forcing rollbacks, purging entries, and so on.
According to the Database Administrator's Guide (12c), section 35.9 "Simulating Distributed Transaction Failure"
https://docs.oracle.com/database/121/ADMIN/ds_txnman.htm#ADMIN12285
the following code should accomplish what I need:
DECLARE
  l_tran_id VARCHAR2(200);
BEGIN
  l_tran_id := dbms_transaction.local_transaction_id(true);
  DBMS_OUTPUT.PUT_LINE('l_tran_id: ' || l_tran_id);
  abc.package1.proc1@DB2 ( parm1 => 'XXX' );  -- remote call over a database link (@, not #)
  COMMIT COMMENT 'ORA-2PC-CRASH-TEST-5';
END;
But when I run this, I get ORA-01031: insufficient privileges, with the error pointing to the line holding the COMMIT.
If I change it to COMMIT COMMENT 'hooray'; I get no error, but of course I also don't get my pending transaction.
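For reference, this is the follow-up workflow I want to practice once rows appear in DBA_2PC_PENDING (a sketch; the transaction ID shown is made up):

SELECT local_tran_id, state FROM dba_2pc_pending;

COMMIT FORCE '1.23.456';        -- or: ROLLBACK FORCE '1.23.456'

EXEC DBMS_TRANSACTION.PURGE_LOST_DB_ENTRY('1.23.456');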
What privileges is it complaining about?
We are initiating the rebuilding of many materialized views by using dbms_job.submit to execute a stored procedure that performs the rebuilding. However, I am having trouble figuring out how to determine whether a submitted job failed: the job is failing, but I cannot identify what the issue is. So I am starting with a simple example, which is probably failing on a permission issue, but I don't know where to look for the error message.
I have the following test procedure that I want to initiate using dbms_job.submit.
CREATE OR REPLACE PROCEDURE MYLANID.JUNKPROC
AS
  lv_msg varchar2(3000);
BEGIN
  INSERT INTO MYLANID.junk_log ( msg ) VALUES ('Hello World');
  commit;
EXCEPTION
  WHEN OTHERS THEN
    lv_msg := SUBSTR(sqlerrm, 1, 3000);
    INSERT INTO MYLANID.junk_log ( msg ) VALUES (lv_msg);
END;
/
Note that the procedure uses this table:
CREATE TABLE MYLANID.JUNK_LOG (
  EVENT_TIME TIMESTAMP(6) DEFAULT systimestamp,
  MSG        VARCHAR2(3000 BYTE))
To submit the above procedure as a job, I execute the following anonymous block.
declare
  l_jobid binary_integer;
BEGIN
  dbms_job.submit(job => l_jobid, what => 'BEGIN MYLANID.JUNKPROC; END;');
  DBMS_OUTPUT.PUT_LINE('l_jobid:' || l_jobid);
  commit;
END;
I then execute the following SQL...
select * from all_jobs;
...to see one record that represents my submitted job. When I re-query the all_jobs view, that record disappears within a few seconds, presumably when the job completes. All is happy so far. I would like to use the presence of a record in the all_jobs view to determine whether a submitted job is running or has failed; I expect to be able to tell that it failed by the ALL_JOBS.FAILURES column holding a non-null value greater than 0.
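In other words, I expect a check along these lines to work (a sketch):

SELECT job, what, failures, broken
FROM   all_jobs
WHERE  job = :l_jobid;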
The problem, probably a permission issue, begins when I switch to another schema: I take all the SQL above and replace "MYLANID" with "ANOTHERSCHEMA", a schema I also have access to. For example, I create the following:
Table: ANOTHERSCHEMA.JUNK_LOG
Procedure: ANOTHERSCHEMA.JUNKPROC
I am even able to execute the stored procedure successfully in a query window while logged in as MYLANID:
EXEC ANOTHERSCHEMA.JUNKPROC
However, if I execute the following code to submit a job that runs the same ANOTHERSCHEMA procedure...
declare
  l_jobid binary_integer;
BEGIN
  dbms_job.submit(job => l_jobid, what => 'BEGIN ANOTHERSCHEMA.JUNKPROC; END;');
  DBMS_OUTPUT.PUT_LINE('l_jobid:' || l_jobid);
  commit;
END;
...then, when I query the ALL_JOBS view...
select * from all_jobs;
...I see that the job has a positive value in the FAILURES column, and I have no record of what the error was. The failure count gradually increments over time as Oracle retries the job, up to 16 times, until it is marked BROKEN in the ALL_JOBS view.
But this is just a simple example, and I don't know where to look for the error message that would tell me why the job using the ANOTHERSCHEMA references failed.
Where do I look for the error log of failed jobs? I'm wondering if this will be somewhere only the DBA can see...
Update:
The above is just a simple test example. In my actual real-world situation, my log shows that the job was submitted, but I never see anything in USER_JOBS or even DBA_JOBS, which should show everything. I don't understand why dbms_job.submit would return the job number of the submitted job, indicating that it was submitted, yet no job is visible in the DBA_JOBS view! The job I submitted should have taken a long time to run, so I don't expect that it completed faster than I could notice.
First off, you probably shouldn't be using dbms_job. That package has been superseded for some time by the dbms_scheduler package which is significantly more powerful and flexible. If you are using Oracle 19c or later, Oracle automatically migrates dbms_job jobs to dbms_scheduler.
If you are using an Oracle version prior to 19c and a dbms_job job fails, the error information is written to the database alert log. That tends to be a bit of a pain to query from SQL particularly if you're not a DBA. You can define an external table that reads the alert log to make it queryable. Assuming you're on 11g, there is a view, x$dbgalertext, that presents the alert log information in a way that you can query it but DBAs generally aren't going to rush to give users permission on x$ tables.
If you use dbms_scheduler instead (or if you are on 19c or later and your dbms_job jobs get converted to dbms_scheduler jobs), errors are written to dba_scheduler_job_run_details. dbms_scheduler in general gives you a lot more logging information than dbms_job does so you can see things like the history of successful runs without needing to add a bunch of instrumentation code to your procedures.
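For example, the dbms_scheduler equivalent of the dbms_job submission above, and the query for its run history, would look something like this (a sketch; the job name is made up):

BEGIN
  dbms_scheduler.create_job(
    job_name   => 'JUNK_JOB',
    job_type   => 'PLSQL_BLOCK',
    job_action => 'BEGIN ANOTHERSCHEMA.JUNKPROC; END;',
    enabled    => TRUE);
END;
/

SELECT log_date, status, error#, additional_info
FROM   dba_scheduler_job_run_details
WHERE  job_name = 'JUNK_JOB'
ORDER  BY log_date DESC;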
I have the procedure below, which has a pragma autonomous_transaction clause. It is called from Java code after validating some business logic; after the procedure executes, the Java code continues with other work...
create or replace procedure UPDATE_INSTRUMENT (
  -- parameter list inferred from the references in the body; names come from
  -- the original code, the types are assumed
  p_AUTHSTATUS   in varchar2,
  p_STATUS       in varchar2,
  p_USERID       in varchar2,
  p_LASTUPDATED  in varchar2,
  p_USERDATETIME in varchar2,
  p_TRANSACNO    in varchar2
)
is
  pragma autonomous_transaction;
begin
  begin
    update abc
       set AUTHSTATUS   = p_AUTHSTATUS,
           STATUS       = p_STATUS,
           USERID       = p_USERID,
           LASTUPDATED  = TO_DATE(p_LASTUPDATED, 'DD/MM/YYYY'),
           USERDATETIME = TO_DATE(p_USERDATETIME, 'DD/MM/YYYY')
     where TRANSACNO = p_TRANSACNO;
    commit;
  end;
  begin
    update xyz
       set AUTHSTATUS  = p_AUTHSTATUS,
           USERID      = p_USERID,
           AUTHDATE    = TO_DATE(SYSDATE, 'DD/MM/YYYY'),
           LASTUPDATED = TO_DATE(SYSDATE, 'DD/MM/YYYY')  -- stray trailing comma removed
     where TRANSACNO = p_TRANSACNO;
    commit;
  end;
end UPDATE_INSTRUMENT;
Table 'xyz' has three triggers: one fires on insert and two fire before update.
Note: table 'xyz' is not updated or locked anywhere before this procedure is called.
I am getting the below errors.
ORA-00060: deadlock detected while waiting for resource
ORA-06512: at "ADTTRG_xyz", line 277
ORA-04088: error during execution of trigger 'ADTTRG_xyz'
The table abc is getting updated properly, but it is failing to update table xyz.
Please explain why this deadlock is occurring.
"How deadlock is occurring."
Deadlocks occur when two sessions simultaneously attempt to change a common resource, such as a table or unique index, in such a way that neither session can commit without the other committing first. This is always an application design flaw: deadlocks are the result of complicated flow and poorly implemented logic strategies.
There are a few clues that this is the case here.
ORA-06512: at "ADTTRG_xyz", line 277
For a start, a trigger with several hundred lines of code is a code smell; that's a lot of code to have in a trigger. It seems like there's an opportunity for contention there. Especially as...
Table 'xyz' has three triggers and ... 2 are on Before update event.
You have two BEFORE UPDATE triggers on the 'xyz' table and the event which generates the deadlock is an update of 'xyz'. This may not be a coincidence.
You must investigate these two triggers and establish which tables and indexes they need, so that you can spot whether they are in contention.
pragma autonomous_transaction;
The PL/SQL documentation says "If an autonomous transaction tries to access a resource held by the main transaction, a deadlock can occur." Autonomous transactions are another code smell. There are very few valid use cases for autonomous transactions; more often they are used to wrangle a bad data model into submission.
So you have a lot of things to investigate. Oracle offers diagnostics to help with this.
When deadlocks happen, Oracle produces a tracefile, which is written to an OS directory; if you don't know where that is, you can query the V$DIAG_INFO view. The tracefile will tell you the two rowids which generated the deadlock; you can find the object id using dbms_rowid.rowid_object() and plug that into select object_name from all_objects where object_id = :oid. Depending on how your organisation arranges things, you may not have access to the tracefile directory, in which case you will need to ask a DBA for help.
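For instance, once you have a rowid out of the tracefile, the lookup goes like this (a sketch; the rowid value is made up):

SELECT dbms_rowid.rowid_object('AAAUvPAAEAAAACrAAA') AS object_id
FROM   dual;

SELECT object_name, object_type
FROM   all_objects
WHERE  object_id = :oid;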
Once you know which table is in deadlock, you know what you must change in your application logic. Potentially that's quite a big change, as your code has a number of red flags (long trigger bodies, two triggers on the same event, an autonomous transaction). Good luck!
I need to collect all the SQL queries (SELECT, UPDATE, DELETE, INSERT) that the application uses when an order is processed through the application.
If I can get all the SQL for at least 50 orders processed through the application, I can check which SELECT, UPDATE, and DELETE statements are frequently used and which tables the application touches most often.
With that information I can conclude which tables are candidates for partitioning, and since the captured SQL includes the WHERE clauses, I can also work out which type of partitioning will suit each particular table.
It seems a hectic exercise, as the application may use a great many SQL statements, but it will help me understand the application, and afterwards I will have a scrutiny report of the application's behaviour against the database that later employees can use.
So far I have used the DBMS_ADVISOR package, which suggested some tables of my database to partition; when I checked the EXPLAIN PLAN of the SQL that I fed to DBMS_ADVISOR, it occurred to me that the tables DBMS_ADVISOR told me to partition were the ones being full-table scanned.
I cannot partition the tables based on this information alone, as it is application-level partitioning, and my manager will not be convinced by this little information, so I have come up with the above plan. :(
I need to do this to find out the tables where I can perform table partitioning and other performance-tuning work, such as creating indexes; since I can get the WHERE clauses with the filters, it is effectively database tuning, and I want to do it because it will help me grow my career in database development.
Please help me out with this scenario.
Will this query give me the required information?
select st.sql_text  -- V$SQLTEXT_WITH_NEWLINES has no COMMAND column; SQL_TEXT holds the statement pieces
from   V$SQLTEXT_WITH_NEWLINES st, SYS.V_$SQL s
where  st.hash_value = s.hash_value
and    s.parsing_schema_name = 'NETSERVICOS2CM'
and    s.module = 'JDBC THIN CLIENT'
order  by st.hash_value, st.piece;
Tracing for non-DBA users ----
GRANT SELECT ON SYS.V_$SESSION TO USER;
GRANT SELECT ON SYS.V_$MYSTAT TO USER;
To get the SID and SERIAL#:
SELECT sid, serial# FROM SYS.V_$SESSION
WHERE SID = (SELECT DISTINCT SID FROM SYS.V_$MYSTAT);
Then, as a DBA user, execute this --
EXEC DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION (sid=>3002, serial#=>31833,sql_trace=> true);
OR
as the non-DBA user itself, I am using --
ALTER SESSION SET SQL_TRACE = TRUE;
OR
EXEC DBMS_SESSION.set_sql_trace(sql_trace => TRUE);
Trigger to trace a session for a particular user ----
CREATE OR REPLACE TRIGGER ON_MY_SCHEMA_LOGIN
AFTER LOGON ON DATABASE
WHEN ( USER = 'NETSERVICOS1CM' )
BEGIN
  EXECUTE IMMEDIATE 'ALTER SESSION SET TRACEFILE_IDENTIFIER = "net1cm"';
  EXECUTE IMMEDIATE 'ALTER SESSION SET statistics_level = ALL';
  EXECUTE IMMEDIATE 'ALTER SESSION SET EVENTS ''10046 trace name context forever, level 12''';
EXCEPTION
  WHEN OTHERS THEN
    NULL;  -- swallow any error so that a tracing problem never blocks logins
END;
After that, to stop the trace I am using
ALTER SESSION SET EVENTS '10046 trace name context off';
ALTER SYSTEM SET EVENTS '10046 trace name context off';
As suggested by Derek.
After this you may have multiple trace files; to consolidate them into a single trace file we can use the TRCSESS utility --
trcsess output=net1cm_trcsess.trc module="JDBC Thin Client" *net1cm.trc
It will create a single trace file, net1cm_trcsess.trc, from all the trace files generated in my case (those with trace file identifier net1cm).
Now we can use the TKPROF utility to generate a human-readable report, with a command like this, for example ---
tkprof net1cm_trcsess.trc OUTPUT=net1cm_trcsess.txt EXPLAIN=netservicos1cm/netservicos1 SYS=NO
Thanks
So here is my advice.
You can use several different traces for application-context actions, such as INSERT, DELETE, UPDATE, SELECT, or even all actions.
Say you have a PL/SQL program run by an application, or an OCI call to the database. You would put this Oracle code at the module/stored-procedure level:
dbms_application_info.set_module(<module_name>,'execute');
before you execute the entire code. (After the BEGIN in the code).
or
dbms_application_info.set_module(<module_name>,'UPDATE');
before you do an update SQL statement.
To turn off application context, you would use (before the END;):
dbms_application_info.set_module(NULL,NULL);
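Put together, a minimal sketch (the module, action, table, and column names here are hypothetical):

create or replace procedure process_order
is
begin
  dbms_application_info.set_module('ORDER_MODULE', 'execute');
  dbms_application_info.set_action('UPDATE');
  update orders set status = 'PROCESSED' where order_id = 42;
  -- clear the context before leaving the procedure
  dbms_application_info.set_module(NULL, NULL);
end;
/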
Then, when you want to trace the module or the update statement inside it, you would enable tracing before the module runs (and disable it afterwards):
execute DBMS_MONITOR.SERV_MOD_ACT_TRACE_ENABLE( -
service_name => '<service_name>', -
module_name => '<module_name>', -
action_name => DBMS_MONITOR.ALL_ACTIONS, -
waits => TRUE, -
binds => TRUE);
All actions would be traced and you would know exactly where the statement ran and what action was executed.
To turn it off:
execute DBMS_MONITOR.SERV_MOD_ACT_TRACE_DISABLE( -
service_name => '<service_name>', -
module_name => '<module_name>', -
action_name => DBMS_MONITOR.ALL_ACTIONS);
To do this at the session level, you would do the following, where, for example, 9 is the SID and 190 is the serial number (check the syntax):
execute DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(9,190,TRUE);
To turn it off:
execute DBMS_SYSTEM.SET_SQL_TRACE_IN_SESSION(9,190,FALSE);
At the database level you have to be very careful with this, because it will generate a trace for the entire database and can fill up the diagnostic directory on your Oracle database. Disclaimer: USE WITH CAUTION.
execute DBMS_MONITOR.DATABASE_TRACE_ENABLE(waits=>TRUE, binds=>TRUE, instance_name=>'<Instance_name>');
execute DBMS_MONITOR.DATABASE_TRACE_DISABLE(instance_name=>'<instance_name>');
You can leverage v$sqltext_with_newlines, V$SESSION, and v$session_longops. You can look these views up and see whether they are useful for your requirements.
I am trying to schedule a job using DBMS_JOB (I can't use DBMS_SCHEDULER for security reasons) that runs a DDL statement.
DECLARE
  job_num NUMBER;
BEGIN
  DBMS_JOB.SUBMIT(job => job_num,
                  what => 'BEGIN EXECUTE IMMEDIATE ''CREATE TABLE temp1 (ID NUMBER)''; END;'
                 );
  DBMS_OUTPUT.PUT_LINE('JobID' || job_num);
  DBMS_JOB.RUN(job_num);
END;
/
It fails to execute, giving me this error message:
ORA-12011: execution of 1 jobs failed
ORA-06512: at "SYS.DBMS_IJOB", line 548
ORA-06512: at "SYS.DBMS_JOB", line 278
ORA-06512: at line 8
On removing the DBMS_JOB.RUN() call from the anonymous block, I am able to at least create (and save) the job. When I check the job, it has saved this as the code to execute:
BEGIN EXECUTE IMMEDIATE 'CREATE TABLE temp1 (id NUMBER) '; END;
If I execute that block standalone, it obviously executes fine. The only time it fails is when I try to run the whole thing through the call to DBMS_JOB.RUN().
Is there a restriction on using DDL statements as a parameter to DBMS_JOB? I can't find any pointer in the documentation about this.
While echoing the sentiments of the other commenters (creating tables on the fly is a red flag that often indicates you really ought to be using global temporary tables), a couple of questions:
1. Is there a reason that you need the DBMS_JOB.RUN call? Your call to DBMS_JOB.SUBMIT tells Oracle to run the job asynchronously as soon as the parent transaction commits. So, normally, you'd call DBMS_JOB.SUBMIT and then just COMMIT.
2. Does the user submitting the job have the CREATE TABLE privilege granted directly? My guess is that the user has CREATE TABLE only via a role. That would allow you to run the anonymous PL/SQL block interactively but not in a job. If so, you'll need the DBA to grant you CREATE TABLE directly, not via a role (see the query sketch after this list).
3. When a job fails, an entry is written to the alert log with the error message. Can you (or, more likely, the DBA) get the error message and the error stack from the alert log and post it here (assuming it is something other than the privileges issue from #2)?
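For point 2, a quick check run as the submitting user (a sketch):

-- privileges granted directly to the user
SELECT privilege FROM user_sys_privs WHERE privilege = 'CREATE TABLE';

-- privileges that arrive only via roles (per point 2, these won't help in the job)
SELECT role, privilege FROM role_sys_privs WHERE privilege = 'CREATE TABLE';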
I have an application which uses ADOdb to insert data into an Oracle table (the customer's database).
Data is successfully inserted if there are no errors.
If there is any error, like an invalid datatype etc., the error is raised, captured by my application, and dumped in a log file.
My customer has written their own triggers on this particular table; when a record is inserted, a few other checks are done before the data insertion.
All was fine until now.
But recently we found that data is often not inserted into the Oracle table.
When I checked the log file, no error was found.
Then I logged the query that was executed.
When I copied the query to an Oracle SQL prompt and executed it, it gave a trigger error.
My issue is:
1. The customer is not ready to share the details of the trigger.
2. No error is raised while inserting into the Oracle table, so we are not able to log it or take any action.
3. The same query, when executed directly in Oracle, shows the trigger errors.
Help needed for:
1. Why is the error not raised in ADOdb?
2. Do I have to ask the customer to implement any error raising?
3. Anything else you can suggest for resolving the issue.
I have 0% to 10% knowledge of Oracle.
"Copied the query to oracle Sql prompt and executed it gave error of trigger." Since the ADO session doesn't report an error, it may be that the error from the trigger is misleading. It may simply be a check on the lines of "Hey, you are not allowed to insert into this table except though the application".
"Error is not raised while inserting to oracle table so we are not able to log it or take any action."
If the error isn't raised at the time of insert, it MAY be raised at the time of committing. Deferred constraints and materialized views could give this.
Hypothetically, I could reproduce your experience as follows:
1. Create a table tab_a with a deferrable, initially deferred constraint (eg val_a > 10).
2. The ADO session inserts a row violating the constraint, but it doesn't error because the constraint check is deferred.
3. The commit happens, the constraint-violation exception fires, and the transaction is rolled back instead of committed.
So check whether you are catering for the possibility of an error at commit time.
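A sketch of that hypothetical reproduction, using the names from the steps above:

CREATE TABLE tab_a (
  val_a NUMBER,
  CONSTRAINT chk_val_a CHECK (val_a > 10) DEFERRABLE INITIALLY DEFERRED
);

INSERT INTO tab_a (val_a) VALUES (5);  -- succeeds: the check is deferred
COMMIT;  -- fails here with ORA-02091 (transaction rolled back) and ORA-02290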
It may also be something else later in the transaction that results in a rollback of the whole transaction (eg a deadlock). Session tracing would be good. Failing that, look into a SERVERERROR trigger on the user to log the error (eg to a file, so that it won't be rolled back); a sketch follows the link below.
http://download-west.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_7004.htm#i2153530
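A minimal sketch of such a SERVERERROR trigger, assuming a directory object exists for the log file (the directory and file names are hypothetical):

CREATE OR REPLACE TRIGGER trg_log_server_errors
AFTER SERVERERROR ON SCHEMA
DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  -- file output is not transactional, so the log line survives the rollback
  f := UTL_FILE.FOPEN('LOG_DIR', 'server_errors.log', 'a');
  UTL_FILE.PUT_LINE(f, TO_CHAR(SYSTIMESTAMP) || ' ' || DBMS_UTILITY.FORMAT_ERROR_STACK);
  UTL_FILE.FCLOSE(f);
END;
/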
You can log your business logic in a log table, but you have to use a stored procedure to log the message.
That stored procedure should be declared with PRAGMA AUTONOMOUS_TRANSACTION, so that your log data is saved in the log table even when the surrounding transaction rolls back.
Your trigger should have error handling, and in the error handling you call the logging stored procedure (the one with the autonomous-transaction pragma).
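A minimal sketch of such a logging procedure (the log table name is hypothetical):

CREATE OR REPLACE PROCEDURE log_msg (p_msg IN VARCHAR2)
IS
  PRAGMA AUTONOMOUS_TRANSACTION;  -- commits only the log row, independently of the caller
BEGIN
  INSERT INTO app_log (logged_at, msg) VALUES (SYSTIMESTAMP, p_msg);
  COMMIT;
END;
/

The customer's trigger would then call log_msg(SQLERRM) in its exception handler before re-raising.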
I've never used ADOdb (and I assume that is what you are using, not ADO.NET?). But a quick look at its references leads to this question: are you actually checking the return state of your query?
$ok = $DB->Execute("update atable set aval = 0");
if (!$ok) mylogerr($DB->ErrorMsg());