First of all, sorry about my English; I'm Spanish and not very good at it.
I've been having some trouble exporting and importing a few schemas with Data Pump between two cloned databases (to do a one-off data refresh).
First, I tried making an expdp with this parfile:
[oracle#ES-NAW-ORACLEVM-PRO backup]$ cat /u01/app/oracle/EXPORTS/FEDBPRE/EXP_FEDBPRE_para_CLON.par
directory=EXPORT_TEMP
dumpfile=EXP_FEDBPRE_%U.dmp
logfile=EXP_FEDBPRE.log
schemas=AQADM,ASPNETOP,ASSISTANT,AUTOPUB,AUTOPUBOP,AVANTTIC,AVAN_SPA,DBAWKE,JAUSER,JURIMETRIA,JURIMETRIA_OLD,JURI_OPW,MONDB,NAGIOS,NASPOP,NTTAM,PREOP,PREOP_TEST,PRESENTATION,PRESENTATION_TEMP,PRESENT_ACT,PUB,PUBOP,SCOTT,TRACE,TRACEOP,WKE
FILESIZE=10g
parallel=4
And then:
expdp \'/ as sysdba\' PARFILE=/u01/app/oracle/EXPORTS/FEDBPRE/EXP_FEDBPRE_para_CLON.par
It took like 15 mins to export all schemas.
I moved the DMP files to the cloned server, dropped the users with the CASCADE option on the target database, and let the import run all night with this parfile:
[oracle#ES-NAW-ORACLEVM-PRO FEDBPRE_bkp]$ cat /backup/FEDBPRE_bkp/IMP_FEDBPRE_para_CLON.par
directory=EXPORT_TEMP
dumpfile=EXP_FEDBPRE_%U.dmp
logfile=IMP_FEDBPRE.log
ignore=yes
PARALLEL=8
impdp \'/ as sysdba\' PARFILE=/backup/FEDBPRE_bkp/IMP_FEDBPRE_para_CLON.par
The next day I checked and it had taken about 4h30min to finish the import. That seemed like too much time given that the export took 15 minutes, so I re-ran the import to watch in real time what was happening.
While it was running, I monitored the Data Pump sessions on the database with this query:
select s.sid, s.module, s.state, substr(s.event, 1, 21) as event,
s.seconds_in_wait as secs, substr(sql.sql_text, 1, 30) as sql_text
from v$session s
join v$sql sql on sql.sql_id = s.sql_id
where s.module like 'Data Pump%'
order by s.module, s.sid;
At the beginning, everything looked like it was working well:
Import: Release 12.1.0.2.0 - Production on Mon Jan 16 13:44:55 2023
Copyright (c) 1982, 2014, Oracle and/or its affiliates. All rights reserved.
Connected to: Oracle Database 12c Enterprise Edition Release 12.1.0.2.0 - 64bit Production
With the Partitioning, OLAP, Advanced Analytics and Real Application Testing options
Legacy Mode Active due to the following parameters:
Legacy Mode Parameter: "ignore=TRUE" Location: Parameter File, Replaced with: "table_exists_action=append"
Master table "SYS"."SYS_IMPORT_FULL_02" successfully loaded/unloaded
Starting "SYS"."SYS_IMPORT_FULL_02": SYS/******** PARFILE=/backup/FEDBPRE_bkp/IMP_FEDBPRE_para_CLON.par
Processing object type SCHEMA_EXPORT/USER
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/SYNONYM/SYNONYM
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/SEQUENCE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type SCHEMA_EXPORT/TABLE/TABLE
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
. . imported "PUB"."PUBLICATIONS" 1.582 GB 23242881 rows
. . imported "ASSISTANT"."ASSIST_NODES_RESOURCES" 1.319 GB 74670288 rows
And the query showed everything as normal:
SID MODULE STATE EVENT SECS SQL_TEXT
----- ----------------- ------------------- --------------------- ---------- ------------------------------
312 Data Pump Master WAITING wait for unread messa 1 BEGIN :1 := sys.kupc$que_int.r
65 Data Pump Worker WAITING log file switch (chec 46 BEGIN SYS.KUPW$WORKER.MAIN
75 Data Pump Worker WAITING log file switch (chec 39 BEGIN SYS.KUPW$WORKER.MAIN
127 Data Pump Worker WAITING log file switch (chec 55 BEGIN SYS.KUPW$WORKER.MAIN
187 Data Pump Worker WAITING wait for unread messa 4 BEGIN :1 := sys.kupc$que_int.t
187 Data Pump Worker WAITING wait for unread messa 4 BEGIN :1 := sys.kupc$que_int.t
194 Data Pump Worker WAITING wait for unread messa 4 BEGIN :1 := sys.kupc$que_int.t
194 Data Pump Worker WAITING wait for unread messa 4 BEGIN :1 := sys.kupc$que_int.t
247 Data Pump Worker WAITING wait for unread messa 3 BEGIN :1 := sys.kupc$que_int.t
247 Data Pump Worker WAITING wait for unread messa 3 BEGIN :1 := sys.kupc$que_int.t
249 Data Pump Worker WAITING direct path sync 1 INSERT /*+ APPEND PARALLEL("TR
301 Data Pump Worker WAITING log file switch (chec 55 INSERT /*+ APPEND PARALLEL("TR
361 Data Pump Worker WAITING log file switch (chec 55 INSERT /*+ APPEND PARALLEL("AS
371 Data Pump Worker WAITING direct path sync 2 INSERT /*+ APPEND PARALLEL("TR
418 Data Pump Worker WAITING direct path sync 2 INSERT /*+ APPEND PARALLEL("TR
428 Data Pump Worker WAITING PX Deq: Execute Reply 1 INSERT /*+ APPEND PARALLEL("TR
But suddenly impdp seemed frozen after table ASSISTANT.ASSIST_NODES, and I wanted to know what was going on:
[...]
. . imported "ASSISTANT"."ASSIST_NODES_DA" 307.6 MB 4322248 rows
. . imported "ASSISTANT"."ASSIST_TYPES_CHANGED" 21.15 MB 1249254 rows
. . imported "ASSISTANT"."STR_RESOURCES" 845.4 MB 10994245 rows
. . imported "ASSISTANT"."ASSIST_NODES" 6.526 GB 74638678 rows
SID MODULE STATE EVENT SECS SQL_TEXT
----- ----------------- ------------------- --------------------- ---------- ------------------------------
312 Data Pump Master WAITING wait for unread messa 1 BEGIN :1 := sys.kupc$que_int.r
65 Data Pump Worker WAITING wait for unread messa 2 BEGIN :1 := sys.kupc$que_int.t
65 Data Pump Worker WAITING wait for unread messa 2 BEGIN :1 := sys.kupc$que_int.t
75 Data Pump Worker WAITING wait for unread messa 4 BEGIN :1 := sys.kupc$que_int.t
75 Data Pump Worker WAITING wait for unread messa 4 BEGIN :1 := sys.kupc$que_int.t
127 Data Pump Worker WAITING wait for unread messa 2 BEGIN :1 := sys.kupc$que_int.t
127 Data Pump Worker WAITING wait for unread messa 2 BEGIN :1 := sys.kupc$que_int.t
187 Data Pump Worker WAITING wait for unread messa 3 BEGIN :1 := sys.kupc$que_int.t
187 Data Pump Worker WAITING wait for unread messa 3 BEGIN :1 := sys.kupc$que_int.t
194 Data Pump Worker WAITING wait for unread messa 4 BEGIN :1 := sys.kupc$que_int.t
194 Data Pump Worker WAITING wait for unread messa 4 BEGIN :1 := sys.kupc$que_int.t
247 Data Pump Worker WAITING wait for unread messa 2 BEGIN :1 := sys.kupc$que_int.t
247 Data Pump Worker WAITING wait for unread messa 2 BEGIN :1 := sys.kupc$que_int.t
361 Data Pump Worker WAITED KNOWN TIME direct path sync 0 INSERT /*+ APPEND PARALLEL("AS
428 Data Pump Worker WAITING wait for unread messa 2 BEGIN :1 := sys.kupc$que_int.t
428 Data Pump Worker WAITING wait for unread messa 2 BEGIN :1 := sys.kupc$que_int.t
I looked up the session with SID=361; it was executing SQL_ID=bh6qct41h9bth, whose text was:
INSERT /*+ APPEND PARALLEL("ASSIST_NODES_METADATA",1)+*/
INTO RELATIONAL("ASSISTANT"."ASSIST_NODES_METADATA" NOT XMLTYPE)
("NODE_ID", "AST_NODES_MT_TYPE", "XML_DATA") SELECT "NODE_ID",
"AST_NODES_MT_TYPE", SYS.XMLTYPE.CREATEXML("XML_DATA") FROM
"SYS"."ET$0169B1810001" KU$
Apparently the rows for this table were being inserted with no parallelism at all (note the PARALLEL("...",1) hint), even though I set PARALLEL=8 in the parfile.
I don't know for sure, but the XML_DATA column of this table is probably what causes it.
Searching for this slowness, I found this Oracle documentation:
Doc ID 2014960.1
There I can see that Oracle Database Enterprise Edition versions 11.2.0.3 to 12.1.0.2 can be affected by Bug 19520061.
So... they propose 3 solutions:
1. Upgrade the database to 12.2, when available, where issue is fixed.
- OR -
2. For earlier database releases please check Patch 19520061, if available
for your platform and RDBMS version.
- OR -
3. Run the DataPump import job with an user other than SYS.
Confirming that this table is what makes the impdp take so long: I ran another import excluding it and it took about 20 minutes.
I tried the 3rd option with a user granted the DBA role and nothing changed, so solution number 3 is dismissed.
I've also seen some articles about increasing the table's DEGREE of parallelism, but that didn't work either.
I was wondering if there is a way to "force" Oracle to insert the rows with a specific degree of parallelism without setting it in the parfile, so that it generates the insert with that degree (8) after the table name, like this:
INSERT /*+ APPEND PARALLEL("ASSIST_NODES_METADATA",8)+*/ INTO
RELATIONAL("ASSISTANT"."ASSIST_NODES_METADATA" NOT XMLTYPE)...
Any solution to reduce this impdp time besides to apply patch or upgrade?
If you want to focus on optimizing the slow import of the XML column of table ASSISTANT.ASSIST_NODES_METADATA, I believe you will get most of the benefit if the column is stored as SECUREFILE (if it isn't already; given the performance issues you describe, I expect it is currently BASICFILE).
There are mainly two ways: either you convert the XML column XML_DATA to SECUREFILE on the source DB (before doing the export), or on the destination DB. Doing it on the source DB is preferable, because you do it once and then no additional steps are needed each time you perform the export/import of your schemas. But whether you are able/allowed to make such a change depends on your application and other conditions.
If the conversion to SECUREFILE has to be done on the destination DB, consider the following:
At some point the impdp utility gained a new transform parameter, TRANSFORM=LOB_STORAGE:SECUREFILE - check whether your 12.1.0.2 version has it. Later a similar option, LOB_STORAGE=SECUREFILE, was added. Both let you automatically convert a LOB column (your XML_DATA column) to SECUREFILE storage during the import, at the moment the table segment is created; see the sketch below.
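As a minimal sketch (assuming your 12.1.0.2 impdp accepts this transform; test it on a small table first), the import parameter file would just gain one extra line:
directory=EXPORT_TEMP
dumpfile=EXP_FEDBPRE_%U.dmp
logfile=IMP_FEDBPRE.log
parallel=8
transform=LOB_STORAGE:SECUREFILE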
Another way is to import all the schemas and their data separately from the problematic table ASSISTANT.ASSIST_NODES_METADATA (as you already did). For that table you first import only its definition (using METADATA_ONLY mode, or just manually copying the DDL from the source DB), but modify the STORE AS part of the CREATE TABLE statement to SECUREFILE.
Then you import only that table's data (DATA_ONLY mode). A sketch of this two-step approach follows.
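A rough sketch of that two-step approach (the SQLFILE name is made up; SQLFILE just writes the DDL to a script instead of executing it, so you can edit the LOB clause to STORE AS SECUREFILE before running it):
impdp \'/ as sysdba\' directory=EXPORT_TEMP dumpfile=EXP_FEDBPRE_%U.dmp tables=ASSISTANT.ASSIST_NODES_METADATA sqlfile=assist_nodes_metadata_ddl.sql
(edit the CREATE TABLE in that script to STORE AS SECUREFILE and run it, then load only the data:)
impdp \'/ as sysdba\' directory=EXPORT_TEMP dumpfile=EXP_FEDBPRE_%U.dmp tables=ASSISTANT.ASSIST_NODES_METADATA content=DATA_ONLY table_exists_action=APPEND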
You can also try to simulate a parallel import using the QUERY parameter in your parameter file and starting 8 (or any other preferred degree of parallelism) separate impdp processes, each with an appropriate filter in its QUERY parameter. For the QUERY value you should filter on the NODE_ID or AST_NODES_MT_TYPE column - analyze the contents of these columns to see whether the table can be split into roughly equal chunks; see the example below.
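For illustration, assuming NODE_ID is numeric and reasonably evenly spread (the MOD() split is just one way to chunk it), each of the 8 impdp sessions would use its own parfile differing only in the QUERY and logfile lines, e.g. for slice 0:
directory=EXPORT_TEMP
dumpfile=EXP_FEDBPRE_%U.dmp
logfile=IMP_ASSIST_NODES_METADATA_0.log
tables=ASSISTANT.ASSIST_NODES_METADATA
content=DATA_ONLY
table_exists_action=APPEND
query=ASSISTANT.ASSIST_NODES_METADATA:"WHERE MOD(NODE_ID, 8) = 0"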
And a few comments about the import process itself:
Set METRICS=YES and LOGTIME=ALL in your parameter file - then there is no need to sit and watch how the import is running: you will have a timestamp and duration for each step, i.e. for each processed table, and how long it took to import. You will also see which path Data Pump chose for each table - CONVENTIONAL or DIRECT_PATH insert.
It is also worth excluding optimizer statistics for both tables and indexes, via EXCLUDE=TABLE_STATISTICS,INDEX_STATISTICS (this applies to both the export and the import phase). The rationale: it is good practice anyway to re-gather optimizer statistics for all objects after importing them into the new DB, so importing statistics is useless if they will be overwritten by a new gathering run, and excluding them saves a measurable amount of time during the import.
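In the parameter file those two suggestions would look something like this (the rest of the parfile stays as you already have it):
metrics=YES
logtime=ALL
exclude=TABLE_STATISTICS
exclude=INDEX_STATISTICS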
In one of the outputs you posted there were long waits on the event "log file switch (checkpoint incomplete)" - it seems your redo log configuration could not keep up with the load during the import. It is worth reviewing the redo log configuration and perhaps adding more redo log groups and/or using larger redo log members. At the same time there is no need to chase the (sometimes) recommended switch frequency of about "every 20 min / 3 times per hour", since your import is a rare event; some balance is needed.
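For example (group numbers, sizes and file paths below are purely illustrative; check v$log and v$logfile for your current layout first):
SELECT group#, bytes/1024/1024 AS size_mb, status FROM v$log;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/FEDBPRE/redo05.log') SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 6 ('/u01/oradata/FEDBPRE/redo06.log') SIZE 2G;
-- drop the small groups once they become INACTIVE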
As a trick that is sometimes used: think about switching the DB to NOARCHIVELOG mode for the import phase, and after the import completes, switch back to ARCHIVELOG mode and take an immediate DB backup.
Alternatively, if you don't want to switch between NOARCHIVELOG and ARCHIVELOG modes (which requires two DB restarts), you can achieve almost the same effect by putting the tablespace(s) where the imported segments will be stored into NOLOGGING mode, performing the import, switching the tablespaces back to LOGGING, and then taking an immediate DB backup; for example:
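Something along these lines, assuming the imported segments land in ASSISTANT_DAT (repeat for every affected tablespace; note NOLOGGING only helps direct-path inserts, and a backup is mandatory right after switching back):
ALTER TABLESPACE ASSISTANT_DAT NOLOGGING;
-- run the impdp job
ALTER TABLESPACE ASSISTANT_DAT LOGGING;
-- take a backup immediately afterwards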
A few URLs to read and learn what smart guys recommend about Data Pump import optimization:
https://connor-mcdonald.com/2020/06/24/datapump-migration-to-securefile/
https://www.oracle.com/technetwork/database/manageability/motorola-datapump-128085.pdf
https://www.oracle.com/a/ocom/docs/oracle-data-pump-best-practices.pdf
P.S. I assume you want to optimize the import phase because you plan to execute it periodically (for example, to refresh the destination DB with recent data from the source DB) - that is, the export/import exercise is planned as a periodic task, not a one-time task?
I appreciate your answer; it's all very clear and full of information.
I've tried changing the XML_DATA column to SECUREFILE on the source DB (it was BASICFILE):
SYS#FEDBPRE> alter table ASSISTANT.ASSIST_NODES_METADATA move lob(SYS_NC00004$) store as securefile(tablespace ASSISTANT_DAT) parallel 10;
Table altered.
Elapsed: 00:04:52.35
It took about 5 minutes to run the ALTER TABLE, but it worked.
The original statement I found was this one:
alter table ASSISTANT.ASSIST_NODES_METADATA move lob(SYS_NC00004$) store as securefile( tablespace ASSISTANT_DAT compress high deduplicate ) parallel 10;
But I was afraid to use the "compress high" and "deduplicate" options, because I read in an Oracle note that I would need a license called "Oracle Advanced Compression" for them:
https://docs.oracle.com/database/121/ADLOB/adlob_smart.htm#ADLOB45944
Anyway, I ran the ALTER TABLE without those options, keeping parallel 10 in case it could speed up the INSERTs.
I exported only that table from the source DB and imported it into the destination DB with parallel 6, and FINALLY I could see 7 slaves working on the INSERTs (I suppose because I have 7 DMP files and the table's parallel degree was set to 10):
SID MODULE STATE EVENT SECS SQL_TEXT
---- ------------------ ------------------- --------------------- ---------- ------------------------------
187 Data Pump Master WAITING wait for unread messa 1 BEGIN :1 := sys.kupc$que_int.r
75 Data Pump Worker WAITED SHORT TIME PX Deq Credit: send b 0 INSERT /*+ APPEND PARALLEL("AS
191 Data Pump Worker WAITING PX Deq Credit: send b 0 INSERT /*+ APPEND PARALLEL("AS
247 Data Pump Worker WAITING PX Deq Credit: send b 0 INSERT /*+ APPEND PARALLEL("AS
314 Data Pump Worker WAITING PX Deq Credit: send b 0 INSERT /*+ APPEND PARALLEL("AS
361 Data Pump Worker WAITING PX Deq Credit: send b 0 INSERT /*+ APPEND PARALLEL("AS
423 Data Pump Worker WAITED SHORT TIME PX Deq: Execute Reply 0 INSERT /*+ APPEND PARALLEL("AS
428 Data Pump Worker WAITED KNOWN TIME PX Deq Credit: send b 0 INSERT /*+ APPEND PARALLEL("AS
And looking up the SQL_ID being executed, it is now using parallel 6:
INSERT /*+ APPEND PARALLEL("ASSIST_NODES_METADATA",6)+*/ INTO RELATIONAL("ASSISTANT"."ASSIST_NODES_METADATA" NOT XMLTYPE)
("NODE_ID", "AST_NODES_MT_TYPE", "XML_DATA") SELECT "NODE_ID", "AST_NODES_MT_TYPE", SYS.XMLTYPE.CREATEXML("XML_DATA")
FROM "AVANTTIC"."ET$01A739BA0001" KU$
It finally finished in 1h39min, 3 hours less than the previous import.
I'll make another try, exporting with filesize=4g (instead of 10g) to generate more DMP files and importing with parallel=16, to see how it goes.
Thank you very much Shane, your help has been really useful; thanks for taking the time to provide it :D
Related
I am using Oracle SQL Developer as a client for an Oracle 11g DB. It's a simple issue: I am fetching data from a table and writing it into a text file. This piece of code is scheduled as a monthly job and the output text file is placed in a DB directory path.
The number of records differs each month. Until last month's job, the text output file had the correct number of rows, matching the table. This month, data inconsistency was observed in the text file: say the number of rows to be exported is 1,000; the output file has only about 950 rows, and the data does not match. This issue was not occurring until last month.
On testing further, I observed that the file was not closed after writing with UTL_FILE.FCLOSE(M_OUT_SYS). The issue is resolved after closing the file; the data matches now.
But why didn't the issue surface until last month, when the program ran without closing the file, and why did it suddenly surface this month?
declare
M_OUT_SYS UTL_FILE.FILE_TYPE;
M_DATA VARCHAR2(2000);
M_DIRECTORY_NAME ALL_DIRECTORIES.DIRECTORY_NAME%TYPE;
M_DELIMITER_FILE_NAME VARCHAR2(250);
cursor c1 is
select * from example_table;
begin
M_DIRECTORY_NAME := 'OracleDB_dir_name';
M_DELIMITER_FILE_NAME := 'OutputTextFile.txt';
M_OUT_SYS := UTL_FILE.FOPEN(M_DIRECTORY_NAME,
M_DELIMITER_FILE_NAME,
'W', 8192);
UTL_FILE.PUT_LINE(M_OUT_SYS,'column1|column2|column3');
for i in c1 loop
M_DATA := I.column1 || '|' || I.column2 || '|' || I.column3;
UTL_FILE.PUT_LINE(M_OUT_SYS, M_DATA);
end loop;
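-- note: UTL_FILE.FCLOSE(M_OUT_SYS) is missing here; adding it is what resolved the issue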
end;
See the utl_file docs for 11.2 https://docs.oracle.com/cd/E11882_01/appdev.112/e40758/u_file.htm :
UTL_FILE.PUT_LINE does not (by default) flush to the file after each call; it just writes to a buffer.
Flushing happens when any of the following occurs:
The instance decides to flush because the buffer has reached a certain size (around 10KB)
The data is manually flushed with UTL_FILE.FFLUSH
The file handle is closed
The session disconnects (which is effectively the same as closing the handle)
My money is on your previous jobs having ended their sessions by the time you came to pick up the file, whereas this time you noticed the difference because the session was still open and the last auto-flush had happened around the 950th row.
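A minimal sketch of the corrected block (same directory, file and table names as in the question; the exception handler just makes sure the handle is closed even if the loop fails):
declare
  M_OUT_SYS UTL_FILE.FILE_TYPE;
begin
  M_OUT_SYS := UTL_FILE.FOPEN('OracleDB_dir_name', 'OutputTextFile.txt', 'W', 8192);
  UTL_FILE.PUT_LINE(M_OUT_SYS, 'column1|column2|column3');
  for i in (select * from example_table) loop
    UTL_FILE.PUT_LINE(M_OUT_SYS, i.column1 || '|' || i.column2 || '|' || i.column3);
  end loop;
  UTL_FILE.FCLOSE(M_OUT_SYS);  -- flushes whatever is still buffered and closes the file
exception
  when others then
    if UTL_FILE.IS_OPEN(M_OUT_SYS) then
      UTL_FILE.FCLOSE(M_OUT_SYS);
    end if;
    raise;
end;
/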
I have 5 rows in a table, and some of the rows are locked by other sessions.
I don't want to generate any error; I just want to wait until a row becomes free for further processing.
I tried with NOWAIT and SKIP LOCKED:
NOWAIT: there is a problem with NOWAIT. The query is written in a cursor, and when I use NOWAIT inside the cursor the query returns nothing and control exits with a "resource busy" error.
SKIP LOCKED with FOR UPDATE: if the table contains 5 rows and all 5 rows are locked, it also gives an error.
CURSOR cur_name_test IS
SELECT def.id , def.name
FROM def_map def
WHERE def.id = In_id
FOR UPDATE skip locked;
Why not use SELECT ... FOR UPDATE only? The below was tested locally in PL/SQL Developer.
In the first session I run:
SELECT id , name
FROM ex_employee
FOR UPDATE;
In the second session I run the following; however, it hangs:
SET serveroutput ON SIZE 2000
DECLARE
  CURSOR cur_name_test IS
    SELECT id, name
      FROM ex_employee
     WHERE id = 1
       FOR UPDATE;
BEGIN
  FOR i IN cur_name_test LOOP
    dbms_output.put_line('inside cursor');
  END LOOP;
END;
/
COMMIT
/
When I commit in the first session, the lock is released and the second session does its work. I guess what you want is an infinite wait.
However, such a locking mechanism (pessimistic locking) can lead to deadlocks if it is not managed correctly and carefully (first session waiting on the second session, and second session waiting on the first).
As for NOWAIT, it is normal to get the "resource busy" error, because you are telling the query not to wait if there is a lock. You can instead use WAIT 30, which will wait 30 seconds and then raise an error, but that's not what you want (I guess).
As for SKIP LOCKED, the select skips the locked rows: for example, if you have 5 rows and one of them is locked, the select will not read that row. That's why, when all the rows are locked, it throws an error, because there is nothing left to fetch. I guess that's also not what you want in your scenario. The different lock-wait options are summarized below.
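For reference, the lock-wait variants side by side (table and column names taken from the question; NOWAIT raises ORA-00054 immediately, WAIT 30 raises ORA-30006 when the timeout expires):
-- wait indefinitely until the locking transaction commits or rolls back (default)
SELECT id, name FROM def_map WHERE id = :in_id FOR UPDATE;
-- fail immediately if any selected row is locked
SELECT id, name FROM def_map WHERE id = :in_id FOR UPDATE NOWAIT;
-- wait up to 30 seconds, then fail
SELECT id, name FROM def_map WHERE id = :in_id FOR UPDATE WAIT 30;
-- return only the rows that are not currently locked
SELECT id, name FROM def_map WHERE id = :in_id FOR UPDATE SKIP LOCKED;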
This sounds like you need to think about transaction control.
If you are doing work in a transaction then the implication is that that unit of work needs to complete in order to be valid.
What you are saying is that some of the work in your update transaction doesn't need to complete in order for the transaction to be committed.
Not only that, but you have two transactions running at the same time performing operations against the same object. In itself that may be valid, but
if it is, then you really need to go back to the first sentence, think hard about transaction control and process flow, and see if there's a way you can have the second transaction only attempt to update rows that aren't being updated in the first transaction.
I have a job running in an Oracle 10g production DB which syncs two DB tables, A and B. The job fetches data from table A and inserts it into table B. The job runs daily, and for the past few months it has started failing in production with the error "Error in retrieving data -54". On checking the stored procedure, I could see that the job fails due to a record-locking issue: other jobs lock records in table A and our job cannot process them. So I started looking at the possible solutions below.
Change the running time of the job so that it can process the records. But this is not going to help, since table A is very critical and always used by production jobs; it also receives real-time updates from users.
Instead of NOWAIT, use SKIP LOCKED so the job skips the locked records and runs fine. The problem here is that if locked records (always negligible compared to the huge production data) are skipped, there will be a mismatch between the data in tables A and B for that day. The next day's run will clear this up, since the job also picks up unpicked records from previous days, but the slight mismatch on the days the job failed may cause small problems.
Let the job wait until all the records are unlocked and processed. But this again causes a problem, since we cannot predict how long the job will stay waiting (long-running state).
As of now, one possible solution for me is to go with option 2 and ignore the slight deviation between table A's and B's data. Is there any other way in Oracle 10g to run the job without failing or running long, and still process all records? I would appreciate some technical guidance on this.
Thanks
PB
I'd handle the exception (note: you'll have to either initialise your own EXCEPTION or handle OTHERS and inspect the SQLCODE) and track the IDs of the rows that were skipped. That way you can retry them once all the available records have been processed.
Something like this:
DECLARE
row_is_locked EXCEPTION;
PRAGMA EXCEPTION_INIT(row_is_locked, -54);
TYPE t_id_type IS TABLE OF INTEGER;  -- nested table so it can hold any number of skipped ids
l_locked_ids t_id_type := t_id_type();
l_row test_table_a%ROWTYPE;
BEGIN
FOR i IN (
SELECT a.id
FROM test_table_a a
)
LOOP
BEGIN
-- Simulating your processing that requires locks
SELECT *
INTO l_row
FROM test_table_a a
WHERE a.id = i.id
FOR UPDATE NOWAIT;
INSERT INTO test_table_b
VALUES l_row;
-- This is on the basis that you're commiting
-- to release the lock on each row after you've
-- processed it; may not be necessary in your case
COMMIT;
EXCEPTION
WHEN row_is_locked THEN
l_locked_ids.EXTEND();
l_locked_ids(l_locked_ids.LAST) := i.id;
END;
END LOOP;
IF l_locked_ids.COUNT > 0 THEN
FOR i IN l_locked_ids.FIRST .. l_locked_ids.LAST LOOP
-- Reconcile the remaining ids here
NULL;
END LOOP;
END IF;
END;
I'm confused about the time Oracle 10g XE takes to perform inserts. I implemented a bulk insert from an XML file into several tables, with programmatic transaction management. Why does one insert complete in a moment while another takes more than 10 minutes? I couldn't wait any longer and stopped it. I think there is something more complex going on that I have not paid attention to yet.
Update:
I found a lock using Monitor.
Waits
Event enq: TX - row lock contention
name|mode 1415053316
usn<<16 | slot 327711
sequence 162
SQL
INSERT INTO ESKD$SERVICESET (ID, TOUR_ID, CURRENCY_ID) VALUES (9, 9, 1)
What does it mean and how should I resolve it?
TX enqueues are well known and a quick Google search will give you a clear answer.
From that article:
1) Waits for TX in mode 6 occurs when a session is waiting for a row level lock that is already held by another session. This occurs when one user is updating or deleting a row, which another session wishes to update or delete. This type of TX enqueue wait corresponds to the wait event enq: TX - row lock contention.
If you have lots of simultaneous inserts and updates to a table, you want each transaction to be as short as possible. Get in, get out... the longer things sit in between, the longer the delays for OTHER transactions.
PURE GUESS:
I have a feeling that your mention of "programmatic transaction management" means you're trying to use a table like a QUEUE: inserting a start record, updating it frequently to change its status, and then deleting the 'finished' ones. That is always trouble.
This question will be really hard to answer with so little specific information. All I can tell you is what this could be.
If you are doing an INSERT ... SELECT ... bulk insert, then perhaps your SELECT query is performing poorly. There may be a large number of table joins, inefficient use of inline views, or other factors negatively impacting the performance of your INSERT.
Try running your SELECT query through EXPLAIN PLAN to see how the optimizer derives the plan and to evaluate the cost of the query.
The other thing you mentioned was a possible lock. This could be the case, but you will need to analyze it with the OEM tool to tell for sure.
Another thing to consider is that you may not have indexes on your tables, or the statistics on these tables may be out of date. Out-of-date statistics can GREATLY impact the performance of queries on large tables.
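A minimal sketch, using the SELECT part of your INSERT ... SELECT (the query below is only a placeholder):
EXPLAIN PLAN FOR
  SELECT id, tour_id, currency_id FROM staging_serviceset;  -- replace with your actual SELECT
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);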
see sites.google.com/site/embtdbo/wait-event-documentation/oracle-enqueues
The locking wait indicates a conflict that could easily be the cause of your performance issues. On the surface it looks likely that the problem is inserting a duplicate key value while the first insert of that key value has not yet committed. The lock you see, "enq: TX - row lock contention", happens because one session is trying to modify uncommitted data from another session. There are 4 common reasons for this particular lock wait event:
update/delete of the same row
inserting the same uniq key
modifying the same bitmap index chunk
deleting/updating a parent value to a foreign key
We can eliminate the first and last cases, as you are doing an insert.
You should be able to identify the 2nd case if you have no bitmap indexes involved. If you have bitmap indexes and unique keys involved, you could investigate easily with Active Session History (ASH) data, but unfortunately Oracle XE doesn't have it. On the other hand, you can collect it yourself with S-ASH, see: http://ashmasters.com/ash-simulation/ . With ASH or S-ASH you can run a query like:
col event for a22
col block_type for a18
col objn for a18
col otype for a10
col fn for 99
col sid for 9999
col bsid for 9999
col lm for 99
col p3 for 99999
col blockn for 99999
select
to_char(sample_time,'HH:MI') st,
substr(event,0,20) event,
ash.session_id sid,
mod(ash.p1,16) lm,
ash.p2,
ash.p3,
nvl(o.object_name,ash.current_obj#) objn,
substr(o.object_type,0,10) otype,
CURRENT_FILE# fn,
CURRENT_BLOCK# blockn,
ash.SQL_ID,
BLOCKING_SESSION bsid
--,ash.xid
from v$active_session_history ash,
all_objects o
where event like 'enq: TX %'
and o.object_id (+)= ash.CURRENT_OBJ#
Order by sample_time
/
Which would output something like:
ST EVENT SID LM P2 P3 OBJ OTYPE FN BLOCKN SQL_ID BSID
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
10:41 enq: TX - row lock c 143 4 966081 4598 I1 INDEX 0 0 azav296xxqcjx 144
showing the object name "OBJ" and the object type "OTYPE" involved in the contention; here the type is an INDEX. From there you could look up the type of index to verify that it is a bitmap.
If the problem is a bitmap index, then you should probably re-evaluate using bitmap indexes, or revisit the way the data is loaded and/or modify it to reduce conflicts.
If the problem isn't bitmap indexes, then you are trying to insert a duplicate key: some other process inserted the same key value and has not yet committed, so your process has to wait for the first session to commit or roll back.
For more information see this link: lock waits
It means your sequence cache is too small. Increase it; for example:
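For example, assuming the IDs in that INSERT come from a sequence (the sequence name below is made up):
ALTER SEQUENCE eskd$serviceset_seq CACHE 1000;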
I usually generate explain plans using the following in sqlplus:
SET AUTOTRACE ON
SET TIMING ON
SET TRIMSPOOL ON
SET LINES 200
SPOOL filename.txt
SET AUTOTRACE TRACEONLY;
{query goes here}
SPOOL OFF
SET AUTOTRACE OFF
But what If I want to generate explain plan for a stored procedure?
Is there a way to generate explain plan for the entire stored procedure? The SP has no input/output parameters.
What you are generating is correctly called an "execution plan". "Explain plan" is a command used to generate and view an execution plan, much as AUTOTRACE TRACEONLY does in your example.
By definition, an execution plan is for a single SQL statement. A PL/SQL block does not have an execution plan. If it contains one or more SQL statements, then each of those will have an execution plan.
One option is to manually extract the SQL statements from the PL/SQL code and use the process you've already shown.
Another option is to activate SQL tracing and then run the procedure. This will produce a trace file on the server that contains the execution plans for all statements executed in the session. The trace is in a fairly raw form, so it is generally easiest to format it using Oracle's TKPROF tool; various third-party tools also process these trace files. A sketch follows.
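A minimal sketch of that approach from SQL*Plus (the procedure name is a placeholder; the trace file is written to the server's trace directory):
ALTER SESSION SET tracefile_identifier = 'my_proc_trace';
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);
EXEC my_stored_procedure;
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE;
-- then on the server: tkprof <trace_file>.trc my_proc_report.txt sys=no sort=exeela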
Hi, I have done it like below for the stored procedure:
SET AUTOTRACE ON
SET TIMING ON
SET TRIMSPOOL ON
SET LINES 200
SPOOL filename.txt
SET AUTOTRACE TRACEONLY;
#your stored procedure path
SPOOL OFF
SET AUTOTRACE OFF
And got the below statistics:
Statistics
-----------------------------------------------------------
6 CPU used by this session
8 CPU used when call started
53 DB time
6 Requests to/from client
188416 cell physical IO interconnect bytes
237 consistent gets
112 consistent gets - examination
237 consistent gets from cache
110 consistent gets from cache (fastpath)
2043 db block gets
1 db block gets direct
2042 db block gets from cache
567 db block gets from cache (fastpath)
27 enqueue releases
27 enqueue requests
4 messages sent
31 non-idle wait count
19 non-idle wait time
44 opened cursors cumulative
2 opened cursors current
22 physical read total IO requests
180224 physical read total bytes
1 physical write total IO requests
8192 physical write total bytes
1 pinned cursors current
461 recursive calls
4 recursive cpu usage
2280 session logical reads
1572864 session pga memory
19 user I/O wait time
9 user calls
1 user commits
No Errors.
Autotrace Disabled