I've got an Oracle export that's been running for almost 2 days now.
The export file shows a modification time of 5 am this morning and does not appear to be growing.
The log file does not show any errors.
I think it's stuck and I need to restart it, but I'd hate to cancel it only to find out it was still going and I simply didn't wait long enough.
Is there any way to know definitively whether the export is still running?
select * from v$session where status = 'ACTIVE';
Couple of things:
1) The FEEDBACK parameter for export can be helpful to detect activity. I usually set this to 1000 or 10000 records, depending on the size of the database I am exporting (see the example command after the query below).
2) You may also be able to take advantage of v$session_longops to see progress.
The export program doesn't update longops directly, but if there are large tables, the table scans that the export performs as a by-product will show up. You should at least see PercentDone counting up while large tables are being exported:
SELECT ROUND(sofar/totalwork*100,2) as PercentDone,
v$session_longops.*
FROM v$session_longops
WHERE sofar <> totalwork
ORDER BY target, sid;
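For item 1, an illustrative command (connection string and file names are hypothetical); exp prints a progress dot for each batch of FEEDBACK rows exported:
exp system/password FULL=Y FILE=full.dmp LOG=full.log FEEDBACK=10000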
-Dave
Method 1:
ps -ef|grep expdp
Method 2:
Step 1: Find the export job using the query below.
select job_name,owner_name,state from dba_datapump_jobs;
JOB_NAME OWNER_NAME STATE
------------------------------ -------------------- --------------------
SYS_EXPORT_FULL_04 SYSTEM NOT RUNNING
EXP_TEST_FULL SYSTEM NOT RUNNING
Step 2: Attach to the job using the command below.
expdp system/password attach=EXP_TEST_FULL
Step 3: Kill the job from the interactive prompt.
Export> KILL_JOB
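Before killing it, it may be worth checking whether the job is actually doing work: at the same interactive prompt, STATUS reports the state of each worker, and STOP_JOB (unlike KILL_JOB) leaves the job restartable. A sketch, reusing the job name from above:
expdp system/password attach=EXP_TEST_FULL
Export> STATUS
Export> STOP_JOB=IMMEDIATE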
My Oracle 11.2.0.3 full database Data Pump export is very slow. When I query V$SESSION_LONGOPS:
SELECT USERNAME, OPNAME, TARGET_DESC, SOFAR, TOTALWORK, MESSAGE, SYSDATE,
       ROUND(100*SOFAR/TOTALWORK, 2) || '%' COMPLETED
FROM V$SESSION_LONGOPS
WHERE SOFAR <> TOTALWORK
it shows me two records. In OPNAME, one contains SYS_EXPORT_FULL_XX and the other "Rowid Range Scan"; the message for the latter is:
Rowid Range Scan: MY_SCHEMA.BIG_TABLE: 28118329 out of 30250532 Blocks done
and it takes hours and hours.
I.e., MY_SCHEMA.BIG_TABLE is a 220 GB table with 2 CLOB columns.
If you have CLOBs in the table it will take a long time to export, because they won't parallelize. Exactly what phase are you stuck in? Could you paste the last lines from the log file, or get a status from Data Pump?
There are some best practices that you could try out:
SecureFile LOBs can be faster than BasicFile LOBs. That is yet another reason for going to SecureFile LOBs.
You could try to increase the STREAMS_POOL_SIZE to 256 MB (at least) although I think that is not the reason.
Use the PARALLEL option and set it to 2x the number of CPU cores. Never export statistics - it is better to either transfer them using DBMS_STATS or regather them at the target database (a combined sketch follows below).
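A hedged sketch of those suggestions combined (directory, dump-file names, and the parallel degree are assumptions, here for a 4-core machine):
expdp system/password FULL=Y DIRECTORY=dump_dir DUMPFILE=full_%U.dmp LOGFILE=full.log PARALLEL=8 EXCLUDE=STATISTICS
Moving a LOB to SecureFile is a one-time operation along the lines of ALTER TABLE my_schema.big_table MOVE LOB (clob_col) STORE AS SECUREFILE;.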
Regards,
Daniel
Well, for 11g and 12cR1 the Streams AQ enqueue is a common culprit for this as well (typically seen as the "Streams AQ: Enqueue Blocked on Low Memory" wait event). If that is your issue, the following can help:
ALTER SYSTEM SET EVENTS 'IMMEDIATE TRACE NAME MMAN_CREATE_DEF_REQUEST LEVEL 6';
Question
How can I set a timeout value for nonblocking DDL (ALTER TABLE ... ADD column) in Oracle, so that if DML locks the table for a long time (several hours), my DDL fails fast instead of waiting for hours? (We expect Oracle to raise an error like "ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired" to interrupt our DDL.)
P.S.: DDL_LOCK_TIMEOUT does not work here (see "What I tried" below).
Background
I'm working on a big Oracle database (Oracle Database 19c). A legacy application runs an aggregation job every hour to calculate the data from the past hour, like the AVG and SUM of the counters. Production has 40 CPUs and 200 GB+ of memory, and the aggregation job normally runs for around 30 minutes. In some cases, though, such as when the jobs are delayed by a maintenance break, more data needs to be handled in the next aggregation job, causing it to run for a few hours.
Those legacy applications are out of my control. It's not possible to change the aggregation job.
Edition-Based Redefinition is not used.
My work is to update database tables (because new counters were added). We use ALTER TABLE to add new columns to the existing tables. But in some cases the aggregation job locks the table for hours, making my script hang for hours as well. That makes the customer unhappy, so I want to make my script fail fast.
What I tried
After a long time googling, DDL_LOCK_TIMEOUT seemed to be the simplest solution.
However, based on our tests, we noticed that DDL_LOCK_TIMEOUT does not work in our case. After googling for a long time again, we found that the Oracle documentation clearly mentions:
The DDL_LOCK_TIMEOUT parameter affects blocking DDL statements (but not nonblocking DDL statements)
ALTER TABLE ... ADD column is exactly a nonblocking DDL, as listed in the List of Nonblocking DDLs.
Expectation
When DML locks the table for an hour, like SELECT * FROM MY_TABLE FOR UPDATE with a commit an hour later, I want my DDL, like ALTER TABLE MY_TABLE ADD (COL_A NUMBER), to time out after 10 minutes instead of waiting for the whole hour.
Other Solutions
1
One solution I have in mind is to first issue LOCK TABLE MY_TABLE IN EXCLUSIVE MODE WAIT 600 to acquire the lock ourselves (see the sketch after this list). But before going with this solution, I want to ask whether there is any simpler option, like DDL_LOCK_TIMEOUT, where only one parameter needs to be set.
2
According to the Oracle docs, enabling supplemental logging downgrades nonblocking DDL to the blocking kind. But supplemental logging is a database-level configuration, and I do not have permission to make such a change.
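For solution 1, a minimal sketch (table and column names taken from the question; the WAIT value is in seconds):
-- raises ORA-30006 if the lock cannot be acquired within 600 seconds
LOCK TABLE my_table IN EXCLUSIVE MODE WAIT 600;
-- We hold the lock, so the ALTER should not block. Caveat: the DDL's implicit
-- commit releases the lock an instant before the ALTER re-acquires it, so a
-- tiny race window remains.
ALTER TABLE my_table ADD (col_a NUMBER);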
Today, someone in my system ran an unexpected update statement, which made my system behave incorrectly.
Now I would like to see a log of who (or which session) did it. Can I find it in an AWR report? And if so, where exactly?
Thanks so much!
Evidence of the change could be in several sources, depending on how it was made. Only the last option, Log Miner, will give you exactly everything you want, but it also requires the most effort. Some sources won't tell you the session, but maybe just seeing the relevant SQL will be enough to figure out who did it.
V$SQL - All SQL statements go in there, but they age out of the shared pool so you need to search quickly. If they used a unique query you may be able to find it with something like select * from v$sql where lower(sql_text) like '%table_name%';.
AWR - You may be able to find the SQL in select * from dba_hist_sqltext where lower(sql_text) like '%table_name%';, and then if you're lucky you can find out some session information from select * from dba_hist_active_sess_history where sql_id = '<sql id>';. Active Session History only samples activity; if the query ran very quickly, there's a good chance it won't be in there.
Flashback query - If you're lucky the UNDO is still around and you can see exactly how the data changed with a flashback query, which may give you the exact time and what changed:
select VERSIONS_STARTSCN, VERSIONS_STARTTIME, VERSIONS_ENDSCN, VERSIONS_ENDTIME,
       VERSIONS_XID, VERSIONS_OPERATION, your_table.*
from your_table versions between scn minvalue and maxvalue;
Log Miner - I haven't used this, but supposedly it's the perfect tool for this job. Read more about it in the documentation.
I have a weird problem right now: if a ref cursor returned from a stored procedure has only 1 record in it, the fetch operation hangs and freezes. The stored procedure execution is really fast; just the fetching process hangs. If the ref cursor has more than 1 record, then everything is fine. Has anyone had similar issues before?
The Oracle server is 11g running on Linux. The client is Windows Server 2003. I'm testing this using the generic Oracle sqlplus tool on the Windows Server.
Any help and comments would be greatly appreciated. thanks.
When you say it hangs, what do you mean?
If the session is still active in the database (STATUS in V$SESSION), then it is probably waiting on some event (e.g. "SQL*Net message from client" means it is waiting for the client to do something).
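For instance, a quick check along these lines (the username is a placeholder):
SELECT sid, serial#, status, event, seconds_in_wait
FROM v$session
WHERE username = 'YOUR_USER';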
It may be that the query is taking a long time to find that there aren't any more rows. Consider a table of 10,000,000 rows with no indexes. The query may full scan the table and find the first row matches the criteria. It still has to scan the next 9,999,999 rows to find that they don't. That can take a while.
Since you are saying that the process hangs, is there a chance that your cursor does a SELECT ... FOR UPDATE instead of a plain SELECT? Given that fetching multiple records does not cause this error, that might not be the case.
Can you show us the code (or a small reproducible test/sample) for your select and the fetch?
Also, you can check v$locked_objects using the following query, filling in your table name(s), to see if the object in question is being locked. Again, unless your current query has FOR UPDATE, this fetch should not hang.
select do.*
from v$locked_objects vo,
     dba_objects do
where vo.object_id = do.object_id
-- object_name lives in dba_objects, not v$locked_objects
and do.object_name = '<your_table_name>'
Can I find out when the last INSERT, UPDATE or DELETE statement was performed on a table in an Oracle database and if so, how?
A little background: The Oracle version is 10g. I have a batch application that runs regularly, reads data from a single Oracle table and writes it into a file. I would like to skip this if the data hasn't changed since the last time the job ran.
The application is written in C++ and communicates with Oracle via OCI. It logs into Oracle with a "normal" user, so I can't use any special admin stuff.
Edit: Okay, "Special Admin Stuff" wasn't exactly a good description. What I mean is: I can't do anything besides SELECTing from tables and calling stored procedures. Changing anything about the database itself (like adding triggers) is sadly not an option if I want to get it done before 2010.
I'm really late to this party but here's how I did it:
SELECT SCN_TO_TIMESTAMP(MAX(ora_rowscn)) from myTable;
It's close enough for my purposes.
Since you are on 10g, you could potentially use the ORA_ROWSCN pseudocolumn. That gives you an upper bound of the last SCN (system change number) that caused a change in the row. Since this is an increasing sequence, you could store off the maximum ORA_ROWSCN that you've seen and then look only for data with an SCN greater than that.
By default, ORA_ROWSCN is actually maintained at the block level, so a change to any row in a block will change the ORA_ROWSCN for all rows in the block. This is probably quite sufficient if the intention is to minimize the number of rows you process multiple times with no changes if we're talking about "normal" data access patterns. You can rebuild the table with ROWDEPENDENCIES which will cause the ORA_ROWSCN to be tracked at the row level, which gives you more granular information but requires a one-time effort to rebuild the table.
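A minimal sketch of that pattern (the table name is hypothetical; the rebuild is optional and one-time):
-- optional one-time rebuild so ORA_ROWSCN is tracked per row instead of per block
CREATE TABLE my_table_new ROWDEPENDENCIES AS SELECT * FROM my_table;

-- on each batch run: remember the high-water SCN ...
SELECT MAX(ora_rowscn) FROM my_table;
-- ... and next time fetch only rows changed since then
SELECT * FROM my_table WHERE ora_rowscn > :last_seen_scn;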
Another option would be to configure something like Change Data Capture (CDC) and to make your OCI application a subscriber to changes to the table, but that also requires a one-time effort to configure CDC.
Ask your DBA about auditing. They can start an audit with a simple command like:
AUDIT INSERT ON user.table
Then you can query the view USER_AUDIT_OBJECT to determine whether there has been an insert on your table since the last export.
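For instance, something like this (the object name is a placeholder; names are stored in uppercase in the dictionary):
SELECT username, timestamp, action_name
FROM user_audit_object
WHERE obj_name = 'YOUR_TABLE'
ORDER BY timestamp DESC;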
Google "Oracle auditing" for more info.
SELECT * FROM all_tab_modifications;
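Note that *_TAB_MODIFICATIONS is only refreshed periodically, so you may want to flush the monitoring info first to make it current (this assumes table monitoring is enabled, which it is by default from 10g on):
EXEC DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;

SELECT table_owner, table_name, inserts, updates, deletes, timestamp
FROM all_tab_modifications
WHERE table_name = 'YOUR_TABLE';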
Could you run a checksum of some sort on the result and store that locally? Then when your application queries the database, you can compare its checksum and determine if you should import it?
It looks like you may be able to use the ORA_HASH function to accomplish this.
Update: Another good resource: 10g’s ORA_HASH function to determine if two Oracle tables’ data are equal
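A hedged sketch of the checksum idea (the column list and delimiter are assumptions; NULLs and column order need care):
-- hash each row's concatenated columns, then fold into one number for the table
SELECT SUM(ORA_HASH(col1 || '|' || col2 || '|' || col3)) AS table_checksum
FROM my_table;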
Oracle can watch tables for changes and, when a change occurs, execute a callback function in PL/SQL or OCI. The callback gets an object containing a collection of the tables that changed; each of those has a collection of the rowids that changed and the type of action (insert, update, delete).
So you don't even go to the table; you sit and wait to be called. You'll only go if there are changes to write.
It's called Database Change Notification. It's much simpler than the CDC Justin mentioned, but both require some fancy admin stuff. The good part is that neither of these requires changes to the APPLICATION.
The caveat is that CDC is fine for high-volume tables; DCN is not.
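A hedged registration sketch using the 10g package DBMS_CHANGE_NOTIFICATION (renamed DBMS_CQ_NOTIFICATION in 11g). The callback name and table are hypothetical, and the callback procedure must already exist with the signature (ntfnds IN SYS.CHNF$_DESC):
DECLARE
  regds    SYS.CHNF$_REG_INFO;
  regid    NUMBER;
  dummy    NUMBER;
  qosflags NUMBER;
BEGIN
  -- ask for reliable delivery with rowids included
  qosflags := DBMS_CHANGE_NOTIFICATION.QOS_RELIABLE
            + DBMS_CHANGE_NOTIFICATION.QOS_ROWIDS;
  regds := SYS.CHNF$_REG_INFO('my_callback', qosflags, 0, 0, 0);
  regid := DBMS_CHANGE_NOTIFICATION.NEW_REG_START(regds);
  -- any query run between REG_START and REG_END registers its tables
  SELECT COUNT(*) INTO dummy FROM my_table;
  DBMS_CHANGE_NOTIFICATION.REG_END;
END;
/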
If table monitoring is enabled on the server (it is by default since 10g), simply use
SELECT *
FROM ALL_TAB_MODIFICATIONS
WHERE TABLE_NAME IN ('YOUR_TABLE_NAME')
You would need to add a trigger on insert, update, and delete that sets a value in another table to SYSDATE.
When you run the application, it would read that value and save it somewhere so that the next time it runs it has a reference to compare against.
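A minimal sketch of that approach (all names hypothetical); a statement-level trigger keeps the overhead down to one extra UPDATE per DML statement:
CREATE TABLE table_change_log (
  table_name  VARCHAR2(30) PRIMARY KEY,
  last_change DATE
);

INSERT INTO table_change_log VALUES ('MY_TABLE', SYSDATE);

CREATE OR REPLACE TRIGGER my_table_change_trg
AFTER INSERT OR UPDATE OR DELETE ON my_table
BEGIN
  UPDATE table_change_log
     SET last_change = SYSDATE
   WHERE table_name = 'MY_TABLE';
END;
/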
Would you consider that "Special Admin Stuff"?
It would be better to describe what you're actually doing so you get clearer answers.
How long does the batch process take to write the file? It may be easiest to let it go ahead and then compare the file against a copy of the file from the previous run to see if they are identical.
If anyone is still looking for an answer, they can use the Oracle Database Change Notification feature that came with Oracle 10g. It requires the CHANGE NOTIFICATION system privilege. You can register listeners that trigger a notification back to the application when a change occurs.
You can use the statement below (note that the LAST_DDL_TIME column it returns reflects DDL rather than DML):
select * from all_objects ao where ao.OBJECT_TYPE = 'TABLE' and ao.OWNER = 'YOUR_SCHEMA_NAME'