EXPDP Running too slow - oracle

We have a database of about 16 GB. We run a daily backup using EXPDP, but for the past few days this EXPDP has been taking too long to complete (more than 6 hours).
My questions are:
1) Do table locks affect the performance of EXPDP? I have checked for table locking and found that a number of tables were locked (we update tables using procedures that are scheduled to run several times a day).
2) Could a hard-disk-related issue slow down EXPDP performance?
As suggested, I have included the query I ran while the expdp was running:
select elapsed_time/1000000 seconds, sql_text, SHARABLE_MEM, PERSISTENT_MEM, RUNTIME_MEM,
       USERS_EXECUTING, DISK_READS, BUFFER_GETS, USER_IO_WAIT_TIME
from   gv$sql
where  users_executing > 0
order  by elapsed_time desc;
This query returns more than 20 records; I will share some of them.
(screenshot of the query output not reproduced here)
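If it helps, I can also run a query like the one below while the export is active to see what the Data Pump workers themselves are waiting on (a rough sketch using the standard DBA_DATAPUMP_SESSIONS and V$SESSION dictionary views; it assumes DBA privileges):

-- show each Data Pump worker session, its current wait event and any blocker
select s.sid, s.serial#, s.event, s.seconds_in_wait, s.blocking_session, dp.job_name
from   v$session s
       join dba_datapump_sessions dp on dp.saddr = s.saddr;

A non-null BLOCKING_SESSION would point towards a locking problem, while waits such as "db file scattered read" or "direct path read" would point towards disk I/O.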

Related

Read huge Oracle data to SAS

I need to read in a very large Oracle table (half a billion rows) and save it as a SAS dataset. Eventually I just pulled 1/6 of the Oracle table each time and extracted the data in 6 passes. A simplified PROC SQL pass-through query is provided below, but it still takes a long time. Any suggestions to further optimize the process so it will be faster/more efficient?
proc sql noprint;
   connect to oracle (user='oooo' password='xxxx' path="ssss"
                      readbuff=10000 preserve_comments);

   /* slice &i. of the table, split on memberID; slice 6 has no upper bound */
   create table work.sastbl&i. as
   select * from connection to oracle
   ( select column1,
            column2,
            column3,
            .................
     from oraSchema.oraTbl
     %if &i. eq 6 %then %do;
        where &&strt&i. <= memberID
     %end;
     %else %do;
        where &&strt&i. <= memberID
          and memberID < &&end&i.
     %end;
   );

   %PUT &SQLXMSG;   /* surface any message returned by the Oracle pass-through */
   disconnect from oracle;
quit;
For the most part, the main things you can do are to talk to your Oracle DBA and see if you can tune the settings - like READBUFF - to get a better pipe, or else consider an option other than reading directly into SAS (can you schedule an export from Oracle?).
You might also compare the time the download takes with the theoretical time - meaning, what is the size of the network pipe from Oracle to SAS, and how long does the query take when run directly on Oracle. If you run the code directly in Oracle (SQL Developer, Toad, etc.) and it takes two hours, and SAS also takes two hours, then you need to talk to the DBA about what can be done on the Oracle side; if it runs in five minutes in Oracle but SAS takes two hours, then you have things on the SAS side to figure out.
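If an export on the Oracle side is an option, one rough sketch (assuming SQL*Plus 12.2 or later and a writable path; the file name and column list are placeholders) is to spool the slice to CSV and read that file into SAS instead of pulling it over the pass-through connection:

-- dump a slice of the table as CSV from SQL*Plus
set markup csv on
spool /tmp/oratbl_slice.csv
select column1, column2, column3, memberID
from   oraSchema.oraTbl;
spool off

Whether that beats the pass-through read still depends on the same network pipe, so the timing comparison above remains the first thing to check.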

Set timeout value for ALTER TABLE ADD COLUMN in Oracle (DDL_LOCK_TIMEOUT does not work)

Question
How can I set a timeout value for nonblocking DDL (ALTER TABLE ... ADD COLUMN) in Oracle, so that if any DML locks the table for a long time (several hours), my DDL fails fast instead of waiting for hours? (We expect Oracle to raise an error like "ORA-00054: resource busy and acquire with NOWAIT specified or timeout expired" to interrupt our DDL.)
P.S.: DDL_LOCK_TIMEOUT is not working (see 'What I tried' below).
Background
I'm working on a big Oracle database (Oracle Database 19c). A legacy application runs an aggregation job every hour to calculate the data for the past hour, such as AVG and SUM of the counters. The production system has 40 CPUs and 200 GB+ of memory, and the aggregation job normally runs for around 30 minutes. In some cases, for example when the jobs are delayed due to a maintenance break, more data needs to be handled in the next aggregation job, which causes it to run for a few hours.
Those legacy applications are out of my control. It's not possible to change the aggregation job.
Edition-Based Redefinition is not used.
My job is to update the database tables (because new counters are added). We use ALTER TABLE to add new columns to the existing tables. But in some cases the aggregation job locks the table for hours, which makes my script hang for hours. That makes the customer unhappy, so I want my script to fail fast.
What I tried
After a lot of googling, DDL_LOCK_TIMEOUT seemed to be the simplest solution.
However, based on our tests, we noticed that DDL_LOCK_TIMEOUT does not work in our case. After more googling, we found that the Oracle documentation clearly states:
The DDL_LOCK_TIMEOUT parameter affects blocking DDL statements (but not nonblocking DDL statements)
ALTER TABLE ... ADD COLUMN is exactly a 'nonblocking DDL', as listed in the List of Nonblocking DDLs.
Expectation
When a DML statement locks the table for 1 hour, for example SELECT * FROM MY_TABLE FOR UPDATE with a commit after 1 hour, I want my DDL, such as ALTER TABLE MY_TABLE ADD (COL_A NUMBER), to time out after 10 minutes instead of waiting for 1 hour.
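For reference, the situation can be reproduced with two sessions (a sketch reusing the table and column names above):

-- Session 1: take and hold row locks in an open transaction
SELECT * FROM MY_TABLE FOR UPDATE;
-- ... no commit for an hour ...

-- Session 2: the nonblocking DDL waits for session 1's transaction to finish,
-- and DDL_LOCK_TIMEOUT does not limit that wait
ALTER TABLE MY_TABLE ADD (COL_A NUMBER);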
Other Solutions
1
One solution I have in mind is to first issue LOCK TABLE MY_TABLE IN EXCLUSIVE MODE WAIT 600 to acquire the lock first (sketched after solution 2 below). But before we go with this approach, I want to know whether there is any simpler solution, like DDL_LOCK_TIMEOUT, where we only need to set a single parameter.
2
Based on the Oracle documentation, enabling Supplemental Logging downgrades nonblocking DDL to blocking DDL. But Supplemental Logging is a database-level configuration, and I do not have permission to make such a change.
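To make solution 1 concrete, a minimal sketch (using the names from the question; WAIT 600 gives up after 600 seconds with a "resource busy" style error instead of waiting indefinitely) would be:

LOCK TABLE MY_TABLE IN EXCLUSIVE MODE WAIT 600;
ALTER TABLE MY_TABLE ADD (COL_A NUMBER);

One caveat: the ALTER TABLE performs an implicit commit, which releases the lock just acquired, so there is still a small window in which another transaction could take the table lock before the DDL runs.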

Spoon run slow from Postgres to Oracle

I have a Spoon ETL that reads a table from Postgres and writes it into Oracle.
No transformation, no sort. SELECT col1, col2, ... col33 from table.
350 000 rows in input. The performance is 40-50 rec/sec.
If I read/write the same table from Postgres to Postgres with ALL columns (col1...col100), I get 4-5,000 rec/sec.
The same if I read/write from Oracle to Oracle: 4-5,000 rec/sec.
So, for me, it is not a network problem.
If I try with another Postgres table that has only 7 columns, the performance is good.
Thanks for the help.
The same happened in my case: while loading data from Oracle and running the transformation on my local (Windows) machine, the processing rate was 40 r/s, while it was 3,000 r/s for a Vertica database.
I couldn't figure out the exact problem, but I found a way to increase the throughput. It worked for me, and you can do the same.
Right-click on the Table Input step and you will see "Change Number Of Copies to Start".
Please include the condition below in the WHERE clause to avoid duplicates. When you choose "Change Number Of Copies to Start", the query is triggered N times and would return duplicate rows, but keeping the code below in the WHERE clause makes each copy fetch only its own distinct set of records:
where ora_hash(v_account_number,10)=${internal.step.copynr}
v_account_number is the primary key in my case.
The 10 comes from the number of copies: for example, if you have chosen 11 copies to start, then 11 - 1 = 10, so set it according to the number of copies you choose.
Please note this works, but I suggest using it on a local machine for testing purposes; on the server you will most likely not face this issue, so comment the line out when deploying to servers.
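Putting this together, the query in the Table Input step would look roughly like the sketch below (the schema, table and key column are placeholders from my setup, and 11 copies are assumed, so the hash buckets run from 0 to 10):

SELECT *
FROM   my_schema.accounts
WHERE  ora_hash(v_account_number, 10) = ${internal.step.copynr}

Each copy then reads only the rows whose hash bucket matches its own copy number, so the copies together cover the table exactly once.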

Why does Vertica query_requests table report that a query took a few milliseconds, while it actually took 10 seconds?

I'm running queries against a Vertica table with close to 500 columns and only 100 000 rows.
A simple query (like select avg(col1) from mytable) takes 10 seconds, as reported by the Vertica vsql client with the \timing command.
But when checking column query_requests.request_duration_ms for this query, there's no mention of the 10 seconds, it reports less than 100 milliseconds.
The query_requests.start_timestamp column indicates that the beginning of the processing started 10 seconds after I actually executed the command.
The resource_acquisitions table shows no delay in resource acquisition, but its queue_entry_timestamp column also shows the queue entry occurred 10 seconds after I actually executed the command.
The same query run on the same data but on a table with only one column returns immediately. And since I'm running the queries directly on a Vertica node, I'm excluding any network latency issue.
It feels like Vertica is doing something before executing the query. This is taking most of the time, and is related to the number of columns of the table. Any idea what it could be, and what I could try to fix it?
I'm using Vertica 8, in a test environment with no load.
I was running Vertica 8.1.0-1; it seems the issue was caused by a Vertica bug in the query planning phase that degraded performance. It was solved in versions >= 8.1.1:
https://my.vertica.com/docs/ReleaseNotes/8.1./Vertica_8.1.x_Release_Notes.htm
VER-53602 - Optimizer - This fix improves complex query performance during the query planning phase.
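If you want to confirm which release a node is actually running before and after patching, Vertica reports it via the built-in version() function:

SELECT version();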

Selecting all User Created Tables in ORACLE

I am using Oracle Express Edition. I want to know how to select only user-created tables in an Oracle DB.
I am using this query:
select * from user_tables;
But it shows 24 rows, while I have only created 6 tables. I don't know why and from where the other tables (like APEX$_WS_FILES, DEPT, DEMO_USERS, APEX$_ACL, APEX$_WS_HISTORY, etc.) are showing up.
How can I filter out those unwanted tables?
These tables were presumably created during an Oracle APEX-related installation. You can use the steps below to exclude them.
SELECT * FROM ALL_OBJECTS WHERE OBJECT_TYPE = 'TABLE' AND OWNER = 'your_user' ORDER BY created;
As these tables were installed by an application, they were most probably created within a small, coherent time window - say, a time frame of 30 minutes or an hour. So if you order them by creation time, they will all appear on consecutive rows in the output of the above query. Identify the time frame in which the installation of those tables started and finished, then run the above query again and filter that time frame out (see the sketch below). You should then be left with only your own tables.
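For example, once the installation window is identified, a filter along these lines should leave only your own tables (a sketch; the two dates are placeholders for the window you found):

SELECT object_name
FROM   all_objects
WHERE  object_type = 'TABLE'
AND    owner = 'your_user'
AND    created NOT BETWEEN TO_DATE('2016-01-10 09:00', 'YYYY-MM-DD HH24:MI')
                       AND TO_DATE('2016-01-10 10:00', 'YYYY-MM-DD HH24:MI')
ORDER  BY created;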
