Is it possible to do direct-load INSERTs in Oracle through JDBC?
I currently use batched prepared statements (through Spring JDBC), is there any way to make these bypass the redo logs on a NOLOGGING table?
This is with Oracle 11g.
Direct-path inserts are only possible in an INSERT INTO x SELECT * FROM y scenario. This can be done using JDBC, no problem, but it cannot be done with INSERT ... VALUES. It also cannot be done when the database is in force logging mode. Most of the time, when a standby database is connected, the primary database will be in force logging mode.
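As a minimal sketch (dest_tab and src_tab are made-up names), the statement below can be sent through a plain JDBC Statement; the APPEND hint requests a direct-path load, and Oracle silently falls back to a conventional insert if the prerequisites are not met:

INSERT /*+ APPEND */ INTO dest_tab
SELECT * FROM src_tab;
-- the loaded rows are not readable by this session until the
-- transaction ends, so commit straight away
COMMIT;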
As Gary Myers mentioned, since 11gR2 there is the APPEND_VALUES hint. As with the 'old' append hint, it should only be used for bulk inserts.
I hope this helps,
Ronald.
There is an APPEND_VALUES hint introduced in 11gR2 for direct path inserts with INSERT...VALUES.
I don't have an 11gR2 instance available to test whether it works with JDBC batch inserts, but it is worth a try.
I was able to use the APPEND_VALUES hint with Oracle 12c with JDBC batching. I verified that a direct-path insert happened via Oracle Enterprise Manager, where the explain plan showed LOAD AS SELECT.
Edit: I am not on the project anymore, but I'll try to provide more details:
The code was something like:
prepareTableForLargeInsert("TABLE_X");
preparedStatement = conn.prepareStatement(
        "INSERT /*+ APPEND_VALUES */ INTO TABLE_X(A, B) VALUES(?,?)");
while (thereIsStuffToInsert()) {
    for (ThingToWrite entity : getBatch()) {
        int i = 1;
        preparedStatement.setLong(i++, entity.getA());
        preparedStatement.setString(i++, entity.getB());
        // ... more setters
        preparedStatement.addBatch(); // this call was missing: queue the row into the batch
    }
    preparedStatement.executeBatch();
    preparedStatement.clearParameters();
}
repairTableAfterLargeInsert("TABLE_X");
One needs to verify whether direct path was really used (the table is locked, a conventional insert in the same transaction fails, and the actual execution plan shows LOAD AS SELECT).
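A quick way to check from SQL, as a sketch: after a direct-path insert the same transaction cannot read the table again, so a follow-up query should fail with ORA-12838:

INSERT /*+ APPEND_VALUES */ INTO table_x (a, b) VALUES (1, 'x');
-- in the same transaction this raises
-- ORA-12838: cannot read/modify an object after modifying it in parallel
SELECT COUNT(*) FROM table_x;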
The methods prepareTableForLargeInsert and repairTableAfterLargeInsert called stored procedures. They might be helpful:
PROCEDURE sp_before_large_insert (in_table_name IN VARCHAR2) AS
BEGIN
-- force parallel processing
EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DML';
EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL QUERY';
EXECUTE IMMEDIATE 'ALTER SESSION FORCE PARALLEL DDL';
-- set table to NOLOGGING
EXECUTE IMMEDIATE 'ALTER TABLE ' || in_table_name || ' NOLOGGING';
-- disable all FK constraints referencing the table. all but those used for Partition by reference
FOR cur IN (SELECT a.owner, a.constraint_name, a.table_name
            FROM all_cons_columns a
            JOIN all_constraints c ON a.owner = c.owner
            AND a.constraint_name = c.constraint_name
            JOIN all_constraints c_pk ON c.r_owner = c_pk.owner
            AND c.r_constraint_name = c_pk.constraint_name
            LEFT JOIN user_part_tables pt ON pt.ref_ptn_constraint_name = c.constraint_name
            WHERE c.constraint_type = 'R'
            AND pt.ref_ptn_constraint_name IS NULL
            AND a.owner LIKE '%_OWNER'
            AND c_pk.table_name = in_table_name)
LOOP
execute immediate 'ALTER TABLE "'||cur.owner||'"."'||cur.table_name||'" MODIFY CONSTRAINT "'||cur.constraint_name||'" DISABLE';
END LOOP;
-- disable FKs (but one used for Partition by reference), PK (unless referenced by enabled FK for partition reference) and UCs on table
FOR c IN (select distinct rc.CONSTRAINT_NAME FROM user_constraints rc
LEFT JOIN user_part_tables pt on pt.ref_ptn_constraint_name = rc.constraint_name
LEFT JOIN user_constraints c_fk ON c_fk.R_CONSTRAINT_NAME = rc.CONSTRAINT_NAME AND c_fk.status = 'ENABLED'
WHERE rc.owner like '%OWNER'
AND pt.ref_ptn_constraint_name IS NULL
AND c_fk.R_CONSTRAINT_NAME IS NULL
AND rc.CONSTRAINT_TYPE IN ('R', 'U', 'P')
AND rc.TABLE_NAME = in_table_name)
LOOP
EXECUTE IMMEDIATE 'ALTER TABLE ' || in_table_name || ' DISABLE CONSTRAINT ' || c.CONSTRAINT_NAME;
END LOOP;
-- set unusable non-local non-unique indexes on table
FOR c IN (select INDEX_NAME from all_indexes
where table_owner LIKE '%_OWNER'
and PARTITIONED = 'NO'
and UNIQUENESS = 'NONUNIQUE'
and STATUS = 'VALID'
and TABLE_NAME = in_table_name)
LOOP
EXECUTE IMMEDIATE 'ALTER INDEX ' || c.index_name || ' UNUSABLE';
END LOOP;
END sp_before_large_insert;
PROCEDURE sp_after_large_insert (in_table_name IN VARCHAR2) AS
BEGIN
-- rebuild disabled indexes on table
FOR c IN (select INDEX_NAME from all_indexes
where table_owner LIKE '%_OWNER'
and STATUS = 'UNUSABLE'
and TABLE_NAME = in_table_name)
LOOP
EXECUTE IMMEDIATE 'ALTER INDEX ' || c.index_name || ' REBUILD PARALLEL NOLOGGING';
END LOOP;
-- enable FKs, PK and UCs on table
FOR c IN (select CONSTRAINT_NAME, CONSTRAINT_TYPE
FROM user_constraints
WHERE owner like '%OWNER'
AND CONSTRAINT_TYPE IN ('R', 'U', 'P')
AND TABLE_NAME = in_table_name)
LOOP
IF c.CONSTRAINT_TYPE = 'P' THEN
EXECUTE IMMEDIATE 'ALTER TABLE ' || in_table_name || ' ENABLE CONSTRAINT ' || c.CONSTRAINT_NAME || ' USING INDEX REVERSE';
ELSE
EXECUTE IMMEDIATE 'ALTER TABLE ' || in_table_name || ' ENABLE CONSTRAINT ' || c.CONSTRAINT_NAME;
END IF;
END LOOP;
-- enable FKs constraints on related tables
FOR cur IN (select fk.owner, fk.constraint_name , fk.table_name
from all_constraints fk, all_constraints pk
where fk.CONSTRAINT_TYPE = 'R' and
pk.owner LIKE '%_OWNER' and
fk.r_owner = pk.owner and
fk.R_CONSTRAINT_NAME = pk.CONSTRAINT_NAME and
pk.TABLE_NAME = in_table_name)
LOOP
execute immediate 'ALTER TABLE "'||cur.owner||'"."'||cur.table_name||'" MODIFY CONSTRAINT "'||cur.constraint_name||'" ENABLE';
END LOOP;
-- set table to LOGGING
EXECUTE IMMEDIATE 'ALTER TABLE ' || in_table_name || ' LOGGING';
-- disable parallel processing
EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DML';
EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL QUERY';
EXECUTE IMMEDIATE 'ALTER SESSION DISABLE PARALLEL DDL';
-- clean up indexes i.e. set logging and noparallel again
FOR c IN (SELECT INDEX_NAME FROM ALL_INDEXES
WHERE (TRIM(DEGREE) > TO_CHAR(1) OR LOGGING = 'NO')
AND OWNER LIKE '%_OWNER'
AND TABLE_NAME = in_table_name)
LOOP
EXECUTE IMMEDIATE 'ALTER INDEX ' || c.index_name || ' NOPARALLEL LOGGING';
END LOOP;
END sp_after_large_insert;
I recall there were issues with recreating the tailored indexes for a disabled unique constraint, e.g. because the information about how they were partitioned (global hash partitioned) was lost (making those indexes just UNUSABLE doesn't work).
Notes:
- When parallelizing across threads one doesn't gain much, as each of these threads is eventually serialized on the table lock.
- Our table was partitioned. An insert slowdown was observed when a batch wrote to different partitions; it is best to write to as few partitions as possible per batch.
- A major speed-up could possibly be achieved if each thread wrote to its own plain (temporary?) table and at the end these tables were coalesced into the main table, but this was never tried out (see the sketch below).
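Purely as a hypothetical sketch of that untried idea (the staging table names are made up), the final coalesce step could itself use a direct-path load:

-- each thread loads its own staging table; one session then merges them
INSERT /*+ APPEND */ INTO main_table
SELECT * FROM staging_t1
UNION ALL
SELECT * FROM staging_t2;
COMMIT;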
Does
insert /*+ append */ into desttab select * from srctab
not work in JDBC?
Use:
INSERT /*+ APPEND_VALUES */ INTO table_name (column1, column2) values (?,?);
Related
I want to create a trigger that applies a sequence I created to all primary key columns on my tables:
CREATE SEQUENCE HR.PRIMARY_KEY
START WITH 300
INCREMENT BY 10
MAXVALUE 99990
MINVALUE 1
NOCYCLE
NOCACHE
NOORDER;
create or replace trigger increment_pk_trigger
before insert
ON schema
--FOR EACH ROW
DECLARE
CURSOR get_pk_CURSOR IS
select a.constraint_name, b.column_name, a.table_name
from user_constraints a, user_cons_columns b
where a.constraint_type = 'P' and a.constraint_name = b.constraint_name;
BEGIN
FOR V_RECORD IN get_pk_CURSOR LOOP
EXECUTE IMMEDIATE 'insert into '||V_RECORD.TABLE_NAME||' :new.V_RECORD.column_name := primary_key.nextva ';
END LOOP;
END;
My problem is that I cannot work out what table_name will be. I tried to create the trigger on database|schema, but it doesn't work.
This is not a requirement that makes sense, so realistically you can't.
It would be very weird to have a single sequence as the source for the primary key on every table-- there would end up being quite a bit of contention on that single sequence. It would be much more normal to create one sequence per table or (assuming a recent version of Oracle) to simply declare the primary key as an identity column.
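For reference, a minimal sketch of the identity-column approach (table and column names are made up; requires Oracle 12c or later):

CREATE TABLE my_table (
  id   NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  name VARCHAR2(100)
);
-- the primary key is populated automatically; no trigger or explicit sequence is needed
INSERT INTO my_table (name) VALUES ('example');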
If you really wanted to, you could write some code that dynamically generates triggers for every table in the schema. Something like this (assuming that every table has a single-column primary key and that you really want to use the same sequence to generate the primary key on every table despite the performance impact):
begin
for pk in (select a.constraint_name, b.column_name, a.table_name
from user_constraints a, user_cons_columns b
where a.constraint_type = 'P'
and a.constraint_name = b.constraint_name)
loop
execute immediate 'create or replace trigger trg_pk_' || pk.table_name ||
' before insert on ' || pk.table_name ||
' for each row ' ||
'begin ' ||
' :new.' || pk.column_name || ' := primary_key.nextval; ' ||
'end; ';
end loop;
end;
One of my stored procedures, which usually takes about 3 hours to complete, recently took around 6 hours.
On checking, I found that the cursor is taking the time to execute.
Both the tables are present in my local DB instance.
I need to know what could be the possible reason for this and how the procedure can be fine-tuned.
My stored procedure:
create or replace PROCEDURE VMS_DETAILS_D_1 IS
LOG_D1 VARCHAR2(20);
BEGIN
/* IDENTIFY PARTITION */
SELECT partition_name into LOG_D1 FROM all_tab_partitions a WHERE table_name = 'LOG' AND TABLE_OWNER='OWNER1' and partition_position IN
(SELECT MAX (partition_position-1) FROM all_tab_partitions b WHERE table_name = a.table_name AND a.table_owner = b.table_owner);
execute immediate 'DROP TABLE TAB1 PURGE';
COMMIT;
EXECUTE IMMEDIATE 'create table TAB1 Nologging as
select /*+ Parallel(20) */ TRANSACTIONID,TIME_STAMP from OWNER1.log partition('||LOG_D1||')
where ( MESSAGE = ''WalletUpdate| Request for Estel Update is Processed'' or MESSAGE = ''Voucher Core request processed'')';
EXECUTE IMMEDIATE 'CREATE INDEX IDX_TAB1 on TAB1(TRANSACTIONID)';
DBMS_STATS.GATHER_TABLE_STATS (ownname => 'OWNER2' , tabname => 'TAB1',cascade => true, estimate_percent => 10,method_opt=>'for all indexed columns size 1', granularity => 'ALL', degree => 1);
DECLARE
CURSOR resp_cur
IS
select TRANSACTIONID,to_char(max(TIME_STAMP),'DD-MM-YYYY HH24:MI:SS') TIME_STAMP from TAB1
where TRANSACTIONID in (select ORDERREFNUM from TAB2
where ORDERREFNUM like 'BV%') group by TRANSACTIONID;
BEGIN
FOR l IN resp_cur
LOOP
update TAB2
set TCTIME=l.TIME_STAMP
where ORDERREFNUM=l.TRANSACTIONID;
COMMIT;
END LOOP;
END;
end;
First off, DDL has an implicit commit, so you don't need a commit after your drop table.
Secondly, why are you dropping the table and recreating it instead of just truncating the table and inserting into it?
Thirdly, why loop around a cursor to do an update, when you can do it in a single update statement?
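For illustration, the cursor loop in the question could collapse into a single statement along these lines (a sketch against the poster's TAB1/TAB2):

UPDATE tab2 t
   SET t.tctime = (SELECT TO_CHAR(MAX(t1.time_stamp), 'DD-MM-YYYY HH24:MI:SS')
                     FROM tab1 t1
                    WHERE t1.transactionid = t.orderrefnum)
 WHERE t.orderrefnum LIKE 'BV%'
   AND EXISTS (SELECT 1 FROM tab1 t1 WHERE t1.transactionid = t.orderrefnum);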
If you absolutely must store the data in a separate table, I would rewrite your procedure like so:
CREATE OR REPLACE PROCEDURE vms_details_d_1 IS
log_d1 VARCHAR2(20);
BEGIN
/* IDENTIFY PARTITION */
SELECT partition_name
INTO log_d1
FROM all_tab_partitions a
WHERE table_name = 'LOG'
AND table_owner = 'OWNER1'
AND partition_position IN (SELECT MAX(partition_position - 1)
FROM all_tab_partitions b
WHERE table_name = a.table_name
AND a.table_owner = b.table_owner);
EXECUTE IMMEDIATE 'TRUNCATE TABLE TAB1 reuse storage';
EXECUTE IMMEDIATE 'insert into TAB1 (transactionid, time_stamp)'||CHR(10)||
'select /*+ Parallel(20) */ TRANSACTIONID,TIME_STAMP from OWNER1.log partition(' || log_d1 || ')'||CHR(10)||
'where MESSAGE in (''WalletUpdate| Request for Estel Update is Processed'', ''Voucher Core request processed'')';
EXECUTE IMMEDIATE 'CREATE INDEX IDX_TAB1 on TAB1(TRANSACTIONID)';
dbms_stats.gather_table_stats(ownname => 'OWNER2',
tabname => 'TAB1',
cascade => TRUE,
estimate_percent => 10,
method_opt => 'for all indexed columns size 1',
granularity => 'ALL',
degree => 1);
MERGE INTO tab2 tgt
USING (SELECT transactionid,
max(time_stamp) ts
FROM tab1
GROUP BY transactionid) src
ON (tgt.transactionid = src.transactionid)
WHEN MATCHED THEN
UPDATE SET tgt.tctime = to_char(src.ts, 'dd-mm-yyyy hh24:mi:ss'); -- is tab2.tctime really a string? If it's a date, remove the to_char
COMMIT;
END vms_details_d_1;
/
If you're only copying the data to make it easier to do the update, you don't need to - instead, you can do it all in a single DML statement, like so:
CREATE OR REPLACE PROCEDURE vms_details_d_1 IS
log_d1 VARCHAR2(20);
BEGIN
/* IDENTIFY PARTITION */
SELECT partition_name
INTO log_d1
FROM all_tab_partitions a
WHERE table_name = 'LOG'
AND table_owner = 'OWNER1'
AND partition_position IN (SELECT MAX(partition_position - 1)
FROM all_tab_partitions b
WHERE table_name = a.table_name
AND a.table_owner = b.table_owner);
EXECUTE IMMEDIATE 'MERGE INTO tab2 tgt'||CHR(10)||
' USING (SELECT transactionid,'||CHR(10)||
' MAX(time_stamp) ts'||CHR(10)||
' FROM owner1.log partition(' || log_d1 || ')'||CHR(10)||
' GROUP BY transactionid) src'||CHR(10)||
' ON (tgt.transactionid = src.transactionid)'||CHR(10)||
'WHEN MATCHED THEN'||CHR(10)||
' UPDATE SET tgt.tctime = to_char(src.ts, ''dd-mm-yyyy hh24:mi:ss'')'; -- is tab2.tctime really a string? If it's a date, remove the to_char
COMMIT;
END vms_details_d_1;
/
If you know the predicate(s) which define the partition you're after, you can use those in your query, thus removing the need to find the partition name and therefore needing dynamic SQL.
OK, your procedure needs lots of enhancements:
In the query below you can use user_tab_partitions instead of all_tab_partitions.
SELECT partition_name
into LOG_D1
FROM all_tab_partitions a
WHERE table_name = 'LOG'
AND TABLE_OWNER = 'OWNER1'
and partition_position IN
(SELECT MAX(partition_position - 1)
FROM all_tab_partitions b
WHERE table_name = a.table_name
AND a.table_owner = b.table_owner);
You have to include a check on table TAB1 in case it doesn't exist, and there is no need to COMMIT here; DROP TABLE is DDL, not a DML statement.
execute immediate 'DROP TABLE TAB1 PURGE';
COMMIT;
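A common pattern for that existence check, as a sketch (it ignores only ORA-00942 when the table is not there):

BEGIN
  EXECUTE IMMEDIATE 'DROP TABLE TAB1 PURGE';
EXCEPTION
  WHEN OTHERS THEN
    -- ORA-00942: table or view does not exist
    IF SQLCODE != -942 THEN
      RAISE;
    END IF;
END;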
There is no need to update the statistics in the procedure, especially as it's a newly created table and the index has already been created, and it's only one index.
The above might slightly improve the performance, but you have to check that there is an index on table LOG for column MESSAGE (though, as I said, it's the wrong modelling); also check the query plan on TAB2 to see whether it needs an index.
This is the wrong approach: what you're doing is updating TAB2 once for every record in cursor resp_cur. I would switch to a MERGE.
A table TMP has 5 partitions, namely P_1, P_2,....P_5.
I need to drop some partitions of TMP; the partitions to drop are derived from another query.
Ex:
ALTER TABLE TMP DROP PARTITIONS (SELECT ... From ... //expression to get partition names )
Let's say the SELECT statement returns P_1 & P_5. The subquery part of the ALTER statement above doesn't work. Is there any way to drop partitions with input from a SELECT statement?
You can use dynamic SQL in an anonymous PL/SQL block:
Begin
for i in (select part_name from ... //expression to get partition names) loop
execute immediate 'ALTER TABLE TMP DROP PARTITION ' || i.part_name;
end loop;
end;
For dropping multiple partitions in one go (note that DROP PARTITION with a comma-separated list requires Oracle 12c or later):
declare
v_part varchar2(1000);
Begin
select LISTAGG(partition_name, ', ') WITHIN GROUP (ORDER BY partition_name DESC)
into v_part
from ... //expression to get partition names;
execute immediate 'ALTER TABLE TMP DROP PARTITION ' || v_part;
end;
You may use the following SQL to generate the DDL for dropping multiple table partitions.
select 'ALTER TABLE ' || TABLE_OWNER || '.' || TABLE_NAME || ' DROP PARTITION ' || '"' || PARTITION_NAME || '";' from DBA_TAB_PARTITIONS
where TABLE_NAME LIKE '%YOUR_PATTERN%' order by PARTITION_NAME;
You'll need to use dynamic SQL. Something like this:
begin
for prec in (SELECT ... From ... //expression to get partition names )
loop
execute immediate 'ALTER TABLE TMP DROP PARTITION '
|| prec.partition_name;
end loop;
end;
/
Clearly you need to have complete faith that your query will return only the partitions you want to drop. Or equivalent faith in your Backup & Recovery plans :)
Alternatively you can use a similar approach to generate a drop script which you can review before you run it.
You have to use a PL/SQL block to drop partitions from a table based on a SELECT query. Use LISTAGG to make a comma-separated list.
DECLARE
var1 varchar2(50);
BEGIN
SELECT listagg(partition_name, ',') WITHIN GROUP (ORDER BY partition_name) into var1 from table_name//expression to get partition names;
execute immediate 'alter table tmp drop partition ' || var1;
END;
Example of LISTAGG:
select LISTAGG(partition_name,',') within group(order by table_name) as comma_list
from ALL_TAB_PARTITIONS where TABLE_owner='OWNER' AND TABLE_NAME='TABLE_NAME'
Maybe this could help somebody.
This script drops all partitions for all partitioned tables in a specific schema. I use it on a cleared DB containing only metadata, to change the starting (reference) partition.
ALTER TABLE SCHEMA_1.TABLE_1
SET INTERVAL ();
ALTER TABLE SCHEMA_1.TABLE_2
SET INTERVAL ();
ALTER TABLE SCHEMA_1.TABLE_3
SET INTERVAL ();
set lines 100
set heading off
spool runme.sql
select 'ALTER TABLE ' || TABLE_OWNER || '.' || TABLE_NAME || ' DROP PARTITION ' || '"' || PARTITION_NAME || '";' from DBA_TAB_PARTITIONS
where
TABLE_OWNER='SCHEMA_1'
-- and TABLE_NAME='TABLE_%'
and PARTITION_NAME LIKE 'SYS_P%'
;
spool off
@runme.sql
ALTER TABLE SCHEMA_1.TABLE_1
SET INTERVAL (NUMTOYMINTERVAL(1,'MONTH'));
ALTER TABLE SCHEMA_1.TABLE_2
SET INTERVAL (NUMTOYMINTERVAL(1,'MONTH'));
ALTER TABLE SCHEMA_1.TABLE_3
SET INTERVAL (NUMTOYMINTERVAL(1,'MONTH'));
Yes, the script is semi-manual, but I think it's safer that way.
The ALTER TABLE ... SET INTERVAL steps are needed to be able to drop the last partition.
The interval must be set back to the same value it had before.
I'm trying to drop a schema in oracle 11g on our dev environment and I get back SQL Error: No more data to read from socket. There is no load on the schema as it's just a dev db. It's a small db without anything crazy going on. I see this error all the time. Restarting the instance sometimes resolves the problem. I can't seem to find any information that would point to a solution. Thanks!
I understand that this message often arises due to a bug. Also, when it appears, an entry in your alert log and/or a trace file will contain more detail on what the error might actually be. To find the trace file for your session, run:
select U_DUMP.value
|| '/'
|| DB_NAME.value
|| '_ora_'
|| V$PROCESS.SPID
|| nvl2(V$PROCESS.TRACEID, '_' || V$PROCESS.TRACEID, null)
|| '.trc'
"Trace File"
from V$PARAMETER U_DUMP
cross join V$PARAMETER DB_NAME
cross join V$PROCESS
join V$SESSION
on V$PROCESS.ADDR = V$SESSION.PADDR
where U_DUMP.NAME = 'user_dump_dest'
and DB_NAME.NAME = 'db_name'
and v$session.audsid=sys_context('userenv','sessionid');
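On 11g and later, v$diag_info offers a shortcut to the same information:

select value from v$diag_info where name = 'Default Trace File';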
A DBA at my company gave me this one:
CREATE OR REPLACE PROCEDURE "SYS"."DROP_SCHEMA_FAST" (pSchema IN
VARCHAR2)
IS
cnt NUMBER(5) := 0;
sql1 varchar2(4000);
x PLS_INTEGER;
--disable constraints:
cursor cur1 is select 'alter table ' || OWNER ||'.'||table_name||' disable constraint '||constraint_name sql2
from all_constraints where owner=pSchema and status='ENABLED'
and table_name not like 'BIN$%' and constraint_name not like 'SYS_%' and constraint_name not like '%PK%';
cursor cur2 is select 'alter table ' || OWNER ||'.'||table_name||' disable constraint '||constraint_name sql2
from all_constraints where owner=pSchema and status='ENABLED'
and table_name not like 'BIN$%' and constraint_name not like 'SYS_%';
--truncate all tables:
cursor cur3 is select 'truncate table ' || OWNER ||'.'||table_name sql2 from all_tables where owner=pSchema
and table_name not like 'BIN$%';
BEGIN
SELECT COUNT(*) INTO cnt FROM dba_users WHERE UPPER(username) = UPPER(pSchema);
IF (cnt <= 0) THEN
RETURN;
END IF;
sql1 := 'ALTER USER ' || UPPER(pSchema) || ' ACCOUNT LOCK';
EXECUTE IMMEDIATE sql1;
--disable constraints:
FOR ao_rec IN cur1 LOOP
EXECUTE IMMEDIATE ao_rec.sql2;
END LOOP;
FOR ao_rec IN cur2 LOOP
EXECUTE IMMEDIATE ao_rec.sql2;
END LOOP;
--truncate all tables:
FOR ao_rec IN cur3 LOOP
EXECUTE IMMEDIATE ao_rec.sql2;
END LOOP;
--drop schema:
sql1 := 'DROP USER ' || UPPER(pSchema) || ' CASCADE';
EXECUTE IMMEDIATE sql1;
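-- note: the handler below swallows every error; consider logging or re-raising unexpected ones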
exception when others then null;
END;
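Hypothetical usage from SQL*Plus (SCOTT is just an example schema); the procedure locks the account, disables constraints, truncates all tables, and then drops the user:

EXEC SYS.DROP_SCHEMA_FAST('SCOTT');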
I also had this problem; it got fixed by setting "PLScope identifiers:" to "None" in Tools -> Preferences -> Database -> PL/SQL Compiler.
Does anybody know how to run all the lines generated from the following query as scripts in their own right?
select 'DROP TABLE '||table_name||' CASCADE CONSTRAINTS;' from user_tables;
What I'm basically trying to do is delete all the user tables and constraints on my DB (this is Oracle). The output I get is correct, but I want to know how I would run all the lines without copy/pasting.
Also, is there a more efficient way to drop all tables (including constraints)?
begin
for i in (select table_name from user_tables)
loop
execute immediate ('drop table ' || i.table_name || ' cascade constraints');
end loop;
end;
/
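One side note: with the recycle bin enabled (the default since 10g), dropped tables linger as BIN$ objects; to reclaim the space afterwards you can purge it:

PURGE RECYCLEBIN;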
Justin Cave brought up an excellent point - the following will drop tables within the user's schema starting at the outermost branches of the hierarchy of dependencies, assuming all foreign keys reference the primary key, not a unique constraint. Tables without primary keys would be dropped last.
begin
for i in (select parent_table, max(tree_depth) as tree_depth
from (select parent.table_name as parent_table,
child.constraint_name as foreign_key,
child.table_name as child_table,
LEVEL AS TREE_DEPTH
from (select table_name, constraint_name
from USER_constraints
where constraint_type = 'P'
) parent
LEFT JOIN
(SELECT TABLE_NAME, CONSTRAINT_NAME,
r_constraint_name
FROM USER_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'R') child
on parent.constraint_name =
child.r_constraint_name
CONNECT BY NOCYCLE
(PRIOR CHILD.TABLE_NAME = PARENT.TABLE_NAME)
UNION
select DT.table_name as parent_table,
NULL AS FOREIGN_KEY, NULL AS CHILD_TABLE,
0 AS TREE_DEPTH
FROM USER_TABLES DT
WHERE TABLE_NAME NOT IN
(SELECT TABLE_NAME
FROM USER_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'P')
)
group by parent_table
order by 2 desc
)
loop
execute immediate ('drop table ' || i.parent_table ||
' cascade constraints');
end loop;
end;
/
The quick and dirty solution would be to do something like
FOR x IN (SELECT * FROM user_tables)
LOOP
BEGIN
EXECUTE IMMEDIATE 'DROP TABLE ' || x.table_name ||
' CASCADE CONSTRAINTS';
EXCEPTION
WHEN others THEN
dbms_output.put_line( 'Failed to drop ' || x.table_name );
END;
END LOOP;
and run that a number of times until all the tables had been dropped. This will take multiple passes because you can't drop a parent table while there are still child tables with foreign keys that reference the parent.
The cleaner option would be to write a hierarchical query against the data dictionary to get the child tables, the parents of those children, the grandparents, etc. and to walk the tree to drop the appropriate objects. That should avoid errors but it would require a bit more work to code.
Use EXECUTE IMMEDIATE and pass in the generated string.
If the goal is just to remove all the data rather than the tables themselves, the TRUNCATE statement is generally more efficient than deleting the rows.
You can execute dynamic scripts using the execute immediate command