Ways to avoid global temp tables in Oracle

We just converted our SQL Server stored procedures to Oracle procedures. The SQL Server SPs were highly dependent on session tables (INSERT INTO #table1 ...); these tables were converted to global temporary tables in Oracle. We ended up with around 500 GTTs for our 400 SPs.
Now we are finding out that working with GTTs in Oracle is considered a last resort because of performance and other issues.
What other alternatives are there? Collections? Cursors?
Our typical use of GTTs is like so:
Insert into the GTT:
INSERT INTO some_gtt_1
  (column_a,
   column_b,
   column_c)
  (SELECT someA,
          someB,
          someC
     FROM TABLE_A
    WHERE condition_1 = 'YN756'
      AND type_cd = 'P'
      AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
      AND (lname LIKE (v_LnameUpper || '%') OR
           lname LIKE (v_searchLnameLower || '%'))
      AND (e_flag = 'Y' OR
           it_flag = 'Y' OR
           fit_flag = 'Y'));
Update the GTT:
UPDATE some_gtt_1 a
   SET column_a = (SELECT b.data_a
                     FROM some_table_b b
                    WHERE a.column_b = b.data_b
                      AND a.column_c = 'C')
 WHERE column_a IS NULL OR column_a = ' ';
and later on get the data out of the GTT. These are just sample queries; in actuality the queries are really complex, with lots of joins and subqueries.
I have a three-part question:
1. Can someone show how to transform the above sample queries to collections and/or cursors?
2. Since with GTTs you can work natively with SQL... why go away from the GTTs? Are they really that bad?
3. What should be the guidelines on when to use and when to avoid GTTs?

Let's answer the second question first:
"why go away from the GTTs? are they
really that bad."
A couple of days ago I was knocking up a proof of concept which loaded a largish XML file (~18MB) into an XMLType. Because I didn't want to store the XMLType permanently I tried loading it into a PL/SQL variable (session memory) and a temporary table. Loading it into a temporary table took five times as long as loading it into an XMLType variable (5 seconds compared to 1 second). The difference is because temporary tables are not memory structures: they are written to disk (specifically your nominated temporary tablespace).
If you want to cache a lot of data then storing it in memory will stress the PGA, which is not good if you have lots of sessions. So it's a trade-off between RAM and time.
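For reference, a GTT is defined once with permanent DDL; only its rows are private to each session, and they live in the temporary tablespace rather than in PGA memory. A minimal sketch, reusing the table and column names from the question (the column datatypes are assumptions):
-- Definition is created once, not per session.
CREATE GLOBAL TEMPORARY TABLE some_gtt_1
( column_a NUMBER          -- assumed type
, column_b VARCHAR2(30)    -- assumed type
, column_c VARCHAR2(30)    -- assumed type
) ON COMMIT PRESERVE ROWS; -- rows survive until the session ends; DELETE ROWS empties the table at each commit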
To the first question:
"Can someone show how to transform the
above sample queries to collections
and/or cursors?"
The queries you post can be merged into a single statement:
SELECT case when a.column_a IS NULL OR a.column_a = ' '
            then b.data_a
            else a.column_a
       end AS someA,
       a.someB,
       a.someC
  FROM TABLE_A a
       left outer join TABLE_B b
       on ( a.column_b = b.data_b AND a.column_c = 'C' )
 WHERE condition_1 = 'YN756'
   AND type_cd = 'P'
   AND TO_NUMBER(TO_CHAR(m_date, 'MM')) = '12'
   AND (lname LIKE (v_LnameUpper || '%') OR
        lname LIKE (v_searchLnameLower || '%'))
   AND (e_flag = 'Y' OR
        it_flag = 'Y' OR
        fit_flag = 'Y');
(I have simply transposed your logic but that case() statement could be replaced with a neater nvl2(trim(a.column_a), a.column_a, b.data_a) ).
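For example, the select list of the merged query could then start like this (just a sketch of the NVL2 variant; everything else stays the same):
SELECT nvl2(trim(a.column_a), a.column_a, b.data_a) AS someA,
       a.someB,
       a.someC
  FROM TABLE_A a
       left outer join TABLE_B b
       on ( a.column_b = b.data_b AND a.column_c = 'C' )
 WHERE ...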
I know you say your queries are more complicated but your first port of call should be to consider rewriting them. I know how seductive it is to break a gnarly query into lots of baby SQLs stitched together with PL/SQL but pure SQL is way more efficient.
To use a collection it is best to define the types in SQL, because it gives us the flexibility to use them in SQL statements as well as PL/SQL.
create or replace type tab_a_row as object
(col_a number
, col_b varchar2(23)
, col_c date);
/
create or replace type tab_a_nt as table of tab_a_row;
/
Here's a sample function, which returns a result set:
create or replace function get_table_a
    (p_arg in number)
    return sys_refcursor
is
    tab_a_recs tab_a_nt;
    rv sys_refcursor;
begin
    select tab_a_row(col_a, col_b, col_c)
    bulk collect into tab_a_recs
    from table_a
    where col_a = p_arg;

    for i in tab_a_recs.first()..tab_a_recs.last()
    loop
        if tab_a_recs(i).col_b is null
        then
            tab_a_recs(i).col_b := 'something';
        end if;
    end loop;

    open rv for select * from table(tab_a_recs);
    return rv;
end;
/
And here it is in action:
SQL> select * from table_a
  2  /

     COL_A COL_B                   COL_C
---------- ----------------------- ---------
         1 whatever                13-JUN-10
         1                         12-JUN-10

SQL> var rc refcursor
SQL> exec :rc := get_table_a(1)

PL/SQL procedure successfully completed.

SQL> print rc

     COL_A COL_B                   COL_C
---------- ----------------------- ---------
         1 whatever                13-JUN-10
         1 something               12-JUN-10

SQL>
In the function it is necessary to instantiate the type with the columns, in order to avoid the ORA-00947 exception. This is not necessary when populating a PL/SQL table type:
SQL> create or replace procedure pop_table_a
2 (p_arg in number)
3 is
4 type table_a_nt is table of table_a%rowtype;
5 tab_a_recs table_a_nt;
6 begin
7 select *
8 bulk collect into tab_a_recs
9 from table_a
10 where col_a = p_arg;
11 end;
12 /
Procedure created.
SQL>
Finally, guidelines
"What should be the guidelines on When
to use and When to avoid GTT's"
Global temp tables are very good when we need to share cached data between different program units in the same session. For instance, if we have a generic report structure generated by a single function feeding off a GTT which is populated by one of several procedures. (Although even that could also be implemented with dynamic ref cursors ...)
Global temporary tables are also good if we have a lot of intermediate processing which is just too complicated to be solved with a single SQL query. Especially if that processing must be applied to subsets of the retrieved rows.
But in general the presumption should be that we don't need to use a temporary table. So
Do it in SQL unless it is too hard, in which case ...
... do it in PL/SQL variables (usually collections) unless it takes too much memory, in which case ...
... do it with a Global Temporary Table.

Generally I'd use a PL/SQL collection for storing small volumes of data (maybe a thousand rows). If the data volumes were much larger, I'd use a GTT so that they don't overload the process memory.
So I might select a few hundred rows from the database into a PL/SQL collection, then loop through them to do some calculation/delete a few or whatever, then insert that collection into another table.
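A minimal sketch of that pattern using BULK COLLECT and FORALL (source_table, target_table, some_flag and amount are hypothetical names; it assumes target_table has the same column layout as source_table):
declare
    type src_tab_t is table of source_table%rowtype;
    l_rows src_tab_t;
begin
    -- one round trip to fetch the working set into memory
    select *
    bulk collect into l_rows
    from source_table
    where some_flag = 'Y';

    -- adjust the rows in memory
    for i in 1 .. l_rows.count loop
        l_rows(i).amount := l_rows(i).amount * 1.1;
    end loop;

    -- one batched round trip to write the whole collection back
    forall i in 1 .. l_rows.count
        insert into target_table values l_rows(i);
end;
/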
If I was dealing with hundreds of thousands of rows, I would try to push as much of the 'heavy lifting' processing into large SQL statements. That may or may not require a GTT.
You can use SQL-level collection objects as something that translates quite easily between SQL and PL/SQL:
create type typ_car is object (make varchar2(10), model varchar2(20), year number(4));
/
create type typ_coll_car is table of typ_car;
/
select * from table (typ_coll_car(typ_car('a','b',1999), typ_car('A','Z',2000)));
MAKE       MODEL                           YEAR
---------- -------------------- ---------------
a          b                           1,999.00
A          Z                           2,000.00
declare
    v_car1 typ_car := typ_car('a','b',1999);
    v_car2 typ_car := typ_car('A','Z',2000);
    t_car  typ_coll_car := typ_coll_car();
begin
    t_car := typ_coll_car(v_car1, v_car2);
    FOR i in (SELECT * from table(t_car)) LOOP
        dbms_output.put_line(i.year);
    END LOOP;
end;
/

Related

Oracle PL/SQL Update statement looping forever - 504 Gateway Time-out

I'm trying to update a table based on another one's information:
Source_Table (Table 1) columns:
TABLE_ROW_ID (populated by a trigger/sequence on insert)
REP_ID
SOFT_ASSIGNMENT
Description_Table (Table 2) columns:
REP_ID
NEW_SOFT_ASSIGNMENT
This is my loop statement:
SELECT count(table_row_id) INTO V_ROWS_APPROVED FROM Source_Table;

FOR i IN 1..V_ROWS_APPROVED LOOP
    SELECT REQUESTED_SOFT_MAPPING INTO V_SOFT FROM Source_Table WHERE ROW_ID = i;
    SELECT REP_ID INTO V_REP_ID FROM Source_Table WHERE ROW_ID = i;

    UPDATE Description_Table D
    SET D.NEW_SOFT_ASSIGNMENT = V_SOFT
    WHERE D.REP_ID = V_REP_ID;
END LOOP;
END;
The ending result of this loop is a beautiful '504 Gateway Time-out'.
I know the issue is in the UPDATE query, but there's no other way (I can think of) of doing it.
Can someone give me a hand please?
Thanks
Unless your row_id values are contiguous - i.e. count(row_id) == max(row_id) - then this will get a no-data-found. Sequences aren't gapless, so this seems fairly likely. We have no way of telling if that is happening and somehow that is leaving your connection hanging until it times out, or if it's just taking a long time because you're doing a lot of individual queries and updates over a large data set. (And you may be squashing any errors that do occur, though you haven't shown that.)
You don't need to query and update in a loop though, or even use PL/SQL; you can apply all the values in the source table to the description table with a single update or merge:
merge into description_table d
using source_table s
on (s.rep_id = d.rep_id)
when matched then
update set d.new_soft_assignment = s.requested_soft_mapping;
db<>fiddle with some dummy data, including a non-contiguous row_id to show that erroring.
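If you prefer a plain UPDATE over MERGE, the correlated equivalent would look something like this (a sketch; it assumes rep_id is unique in source_table, and it only touches rep_ids that actually have a source row):
update description_table d
set d.new_soft_assignment = (select s.requested_soft_mapping
                             from source_table s
                             where s.rep_id = d.rep_id)
where exists (select 1
              from source_table s
              where s.rep_id = d.rep_id);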

Is there a hint to generate execution plan ignoring the existing one from shared pool?

Is there a hint to generate execution plan ignoring the existing one from the shared pool?
There is not a hint to create an execution plan that ignores plans in the shared pool. A more common way of phrasing this question is: how do I get Oracle to always perform a hard parse?
There are a few weird situations where this behavior is required. It would be helpful to fully explain your reason for needing this, as the solution varies depending on why you need it.
Strange performance problem. Oracle performs some dynamic re-optimization of SQL statements after the first run, like adaptive cursor sharing and cardinality feedback. In the rare case when those features backfire you might want to disable them.
Dynamic query. You have a dynamic query that uses Oracle data cartridge to fetch data in the parse step, but Oracle won't execute the parse step because the query looks static to Oracle.
Misunderstanding. Something has gone wrong and this is an XY problem.
Solutions
The simplest way to solve this problem is to use Thorsten Kettner's solution of changing the query each time.
If that's not an option, the second simplest solution is to flush the query from the shared pool, like this:
--This only works one node at a time.
begin
    for statements in
    (
        select distinct address, hash_value
        from gv$sql
        where sql_id = '33t9pk44udr4x'
        order by 1, 2
    ) loop
        sys.dbms_shared_pool.purge(statements.address||','||statements.hash_value, 'C');
    end loop;
end;
/
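If you don't already know the sql_id, you can usually look it up from the shared pool first; a quick sketch (substitute a distinctive fragment of your own statement text):
select sql_id, child_number, sql_text
from gv$sql
where sql_text like '%some distinctive fragment%'
  and sql_text not like '%gv$sql%';  -- exclude this lookup query itself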
If you have no control over the SQL and need to fix the problem using a side-effect style solution, Jonathan Lewis and Randolf Geist have a solution using Virtual Private Database that adds a unique predicate to each SQL statement on a specific table. You asked for something weird; here's a weird solution. Buckle up.
-- Create a random predicate for each query on a specific table.
create table hard_parse_test_rand as
select * from all_objects
where rownum <= 1000;

begin
    dbms_stats.gather_table_stats(null, 'hard_parse_test_rand');
end;
/

create or replace package pkg_rls_force_hard_parse_rand is
    function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2;
end pkg_rls_force_hard_parse_rand;
/

create or replace package body pkg_rls_force_hard_parse_rand is
    function force_hard_parse (in_schema varchar2, in_object varchar2) return varchar2
    is
        s_predicate varchar2(100);
        n_random    pls_integer;
    begin
        n_random := round(dbms_random.value(1, 1000000));
        -- s_predicate := '1 = 1';
        s_predicate := to_char(n_random, 'TM') || ' = ' || to_char(n_random, 'TM');
        -- s_predicate := 'object_type = ''TABLE''';
        return s_predicate;
    end force_hard_parse;
end pkg_rls_force_hard_parse_rand;
/

begin
    DBMS_RLS.ADD_POLICY (USER, 'hard_parse_test_rand', 'hard_parse_policy', USER, 'pkg_rls_force_hard_parse_rand.force_hard_parse', 'select');
end;
/

alter system flush shared_pool;
You can see the hard-parsing in action by running the same query multiple times:
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
select * from hard_parse_test_rand;
Now there are three entries in GV$SQL for each execution. There's some odd behavior in Virtual Private Database that parses the query multiple times, even though the final text looks the same.
select *
from gv$sql
where sql_text like '%hard_parse_test_rand%'
and sql_text not like '%quine%'
order by 1;
I think there is no hint indicating that Oracle should find a new execution plan every time it runs the query.
This is something we'd want for select * from mytable where is_active = :active, with is_active being 1 for very few rows and 0 for maybe billions of other rows. We'd want an index access for :active = 1 and a full table scan for :active = 0 then. Two different plans.
As far as I know, Oracle uses bind variable peeking in later versions, so with a look at the statistics it really comes up with different execution plans for different bind variable content. But in older versions it did not, and thus we'd want some hint saying "make a new plan" there.
Oracle only re-used an execution plan for exactly the same query text. It sufficed to add a mere blank to get a new plan. Hence a solution might be to generate the query every time you want to run it, with a random number included in a comment:
select /* 1234567 */ * from mytable where is_active = :active;
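A minimal PL/SQL sketch of that idea, generating a fresh comment for every run (mytable and is_active are just the example names from above):
declare
    l_sql varchar2(200);
    l_rc  sys_refcursor;
begin
    -- a different comment per run means a different SQL text, hence a hard parse
    l_sql := 'select /* ' || to_char(round(dbms_random.value(1, 1000000)))
          || ' */ * from mytable where is_active = :active';
    open l_rc for l_sql using 1;
    -- fetch from l_rc as usual ...
    close l_rc;
end;
/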
Or just don't use bind variables, if this is the problem you want to address:
select * from mytable where is_active = 0;
select * from mytable where is_active = 1;

Parameter for IN query oracle [duplicate]

SELECT * FROM EMPLOYEE
WHERE EMP_NAME IN (:EMP_NAME);
This is my query, and I would like to send the EMP_NAME parameter as a list of strings.
When I run this query in SQL Developer it prompts for EMP_NAME as a parameter. Now I want to send 'Kiran','Joshi' (basically, I want to fetch the details of employees named either Kiran or Joshi). How should I pass the value during the execution of the query?
It works when I use the value Kiran alone, but when I concatenate it with any other string it won't work. Any pointers on this?
I tried the value below:
'Kiran','Joshi'
That doesn't work; understandably, it is treated as a single parameter, so it looks for an employee literally named 'Kiran','Joshi', which returns nothing. In order to achieve this, how can I go ahead?
Any help would be really appreciated.
Thanks to the people who helped me in solving this problem.
I could get the solution using the way proposed; below is the approach:
SELECT * FROM EMPLOYEE WHERE EMP_NAME IN (&EMP_NAME)
I have tried it this way, and the following are the scenarios I have tested; they are working fine.
Scenario 1:
To fetch details of only "Kiran", the value of EMP_NAME given at the SQL Developer prompt is Kiran. It worked.
Scenario 2:
To fetch details of either "Kiran" or "Joshi", the value of EMP_NAME is sent as
Kiran','Joshi
It worked in this case also.
Thanks Kedarnath for helping me in achieving the solution :)
An IN clause is implicitly converted into multiple OR conditions, and the limit is 1000 items. Also, a query with bind variables means the execution plan will be reused. Supporting a single bind variable for a whole IN list would conflict with that basic usage, so Oracle limits it at the syntax level itself.
The only way is something like name in (:1, :2) and binding the individual values.
For this, you might use dynamic SQL, constructing the IN-clause bind placeholders in a loop.
The other way is calling a procedure or function (PL/SQL):
DECLARE
    v_mystring      varchar2(500);
    v_my_ref_cursor sys_refcursor;
    in_string       varchar2(100) := '''Kiran'',''Joshi''';
    id2             varchar2(10)  := '123';  -- if you have some other value to compare
    myrecord        tablename%rowtype;
BEGIN
    v_mystring := 'SELECT a.* from tablename a where name = :id2 and id in (' || in_string || ')';

    OPEN v_my_ref_cursor FOR v_mystring USING id2;
    LOOP
        FETCH v_my_ref_cursor INTO myrecord;
        EXIT WHEN v_my_ref_cursor%NOTFOUND;
        -- your processing
    END LOOP;
    CLOSE v_my_ref_cursor;
END;
An IN clause supports a maximum of 1000 items. You can always use a table to join against instead. That table might be a Global Temporary Table (GTT) whose data is visible only to that particular session.
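A sketch of that GTT approach for this particular question (emp_name_filter is an illustrative name; EMPLOYEE and EMP_NAME come from the question):
-- one-time DDL: a session-private list of names
CREATE GLOBAL TEMPORARY TABLE emp_name_filter
( emp_name VARCHAR2(100)
) ON COMMIT PRESERVE ROWS;

-- per run: load the names, then filter against the GTT
INSERT INTO emp_name_filter VALUES ('Kiran');
INSERT INTO emp_name_filter VALUES ('Joshi');

SELECT e.*
FROM employee e
WHERE e.emp_name IN (SELECT f.emp_name FROM emp_name_filter f);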
You can also use a nested table for it (like a PL/SQL table). TABLE() will present a PL/SQL collection to SQL as a queryable table (it is actually an object type).
A simple example of it below.
CREATE TYPE pr AS OBJECT
(pr NUMBER);
/
CREATE TYPE prList AS TABLE OF pr;
/
declare
    myPrList prList := prList();
    cursor lc is
        select *
        from (select a.*
              from yourtable a,
                   TABLE(CAST(myPrList as prList)) my_list
              where a.pr = my_list.pr
              order by a.pr desc);
    rec lc%ROWTYPE;
BEGIN
    /* Populate the nested table with whatever collection you have */
    myPrList := prList( pr(91),
                        pr(80) );
    /*
    Sample code for populating from your TABLE OF NUMBER type:
    FOR I IN 1..your_input_array.COUNT
    LOOP
        myPrList.EXTEND;
        myPrList(I) := pr(your_input_array(I));
    END LOOP;
    */
    open lc;
    loop
        FETCH lc into rec;
        exit when lc%NOTFOUND; -- the EXIT WHEN check should come right after the FETCH itself
        dbms_output.put_line(rec.pr);
    end loop;
    close lc;
END;
/
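As an aside, Oracle also ships ready-made collection types, so for simple string lists you can avoid creating your own type; a sketch using the built-in SYS.ODCIVARCHAR2LIST:
SELECT e.*
FROM employee e
WHERE e.emp_name IN (SELECT column_value
                     FROM TABLE(sys.odcivarchar2list('Kiran', 'Joshi')));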

ORACLE PL SQL : Select all and process every record

I would like to have your advice on how to implement this in PL/SQL. Below is what I want to do:
select * from table A
loop - get each record from step #1 and execute the stored procedure: processMe(a.field1, a.field2, a.field3 || 'test', a.field4);
I don't have any idea how to implement something like this. Below are the sample parameters for processMe:
procedure processMe(
    number_name IN VARCHAR2,
    location    IN VARCHAR2,
    name_test   IN VARCHAR2,
    gender      IN VARCHAR2 )
is
    obj_Id tableUser.objId%TYPE;
    loc_Id tableLoc.locId%TYPE;
begin
    select objId into obj_Id from tableUser where name = number_name;
    select locId into loc_Id from tableLoc where loc = location;

    insert into tableOther(obj_id, loc_id, name_test, gender)
    values (obj_Id, loc_Id, name_test, gender);
end;
FOR rec IN (SELECT *
            FROM table a)
LOOP
    processMe( rec.field1,
               rec.field2,
               rec.field3 || 'test',
               rec.field4 );
END LOOP;
does what you ask. You probably want to explicitly list the columns you actually want in the SELECT list rather than doing a SELECT * (particularly if there is an index on the four columns you actually want that could be used rather than doing a table scan or if there are columns you don't need that contain a large amount of data). Depending on the data volume, it would probably be more efficient if a version of processMe were defined that could accept collections rather than processing data on a row-by-row bases as well.
I just added some processing, but this is just a sample. By the way, why do you say that using a loop is not a good idea? I'm interested to know.
Performance-wise, if you can avoid looping through a result set and executing other DML inside the loop, do it.
There is a PL/SQL engine and there is a SQL engine. Every time the PL/SQL engine stumbles upon a SQL statement, whether it's a select, insert, or any other DML statement, it has to send it to the SQL engine for execution. This is called context switching. Placing a DML statement inside a loop will cause the switch (for each DML statement, if there is more than one of them) as many times as the body of the loop has to be executed. It can be the cause of serious performance degradation. If you have to loop, say, through a collection, use a FORALL statement; it minimizes context switching by executing DML statements in batches.
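A minimal sketch of that batched pattern (tableUser, tableOther and their columns are from the question; the value 'U' is made up purely for illustration):
declare
    type id_tab_t is table of tableOther.obj_id%type;
    l_ids id_tab_t;
begin
    -- one context switch to fetch all the ids
    select objId bulk collect into l_ids
    from tableUser;

    -- one batched context switch for all the updates, instead of one per row
    forall i in 1 .. l_ids.count
        update tableOther
        set gender = 'U'          -- illustrative value only
        where obj_id = l_ids(i);
end;
/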
Luckily, your code can be rewritten as a single SQL statement, avoiding the for loop entirely:
insert into tableOther(obj_id, loc_id, name_test, gender)
select tu.objId
     , tl.locid
     , concat(a.field3, 'test')
     , a.field4
  from table a
  join tableUser tu
    on (a.field1 = tu.name)
  join tableLoc tl
    on (a.field2 = tl.loc);
You can put that insert statement into a procedure, if you want. PL/SQL will have to send this SQL statement to the SQL engine anyway, but it will only be one call.
You can use a variable declared using a cursor rowtype. Something like this:
declare
    cursor my_cursor is
        select * from table;
    reg my_cursor%rowtype;
begin
    for reg in my_cursor loop
        --
        processMe(reg.field1, reg.field2, reg.field3 || 'test', reg.field4);
        --
    end loop;
end;

optimizing a dup delete statement Oracle

I have 2 delete statements that are taking a long time to complete. There are several indexes on the columns in where clause.
What is a duplicate?
If 2 or more records have same values in columns id,cid,type,trefid,ordrefid,amount and paydt then there are duplicates.
The DELETEs delete about 1 million records.
Can they be rewritten in any way to make them quicker?
DELETE FROM TABLE1 A WHERE loaddt < (
    SELECT max(loaddt) FROM TABLE1 B
    WHERE a.id = b.id and
          a.cid = b.cid and
          NVL(a.type,'-99999') = NVL(b.type,'-99999') and
          NVL(a.trefid,'-99999') = NVL(b.trefid,'-99999') and
          NVL(a.ordrefid,'-99999') = NVL(b.ordrefid,'-99999') and
          NVL(a.amount,'-99999') = NVL(b.amount,'-99999') and
          NVL(a.paydt,TO_DATE('9999-12-31','YYYY-MM-DD')) = NVL(b.paydt,TO_DATE('9999-12-31','YYYY-MM-DD'))
);
COMMIT;

DELETE FROM TABLE1 a where rowid > (
    Select min(rowid) from TABLE1 b
    WHERE a.id = b.id and
          a.cid = b.cid and
          NVL(a.type,'-99999') = NVL(b.type,'-99999') and
          NVL(a.trefid,'-99999') = NVL(b.trefid,'-99999') and
          NVL(a.ordrefid,'-99999') = NVL(b.ordrefid,'-99999') and
          NVL(a.amount,'-99999') = NVL(b.amount,'-99999') and
          NVL(a.paydt,TO_DATE('9999-12-31','YYYY-MM-DD')) = NVL(b.paydt,TO_DATE('9999-12-31','YYYY-MM-DD'))
);
commit;
Explain Plan:
DELETE TABLE1
  HASH JOIN                      1296491
    Access Predicates
      AND
        A.ID=ITEM_1
        A.CID=ITEM_2
        ITEM_3=NVL(TYPE,'-99999')
        ITEM_4=NVL(TREFID,'-99999')
        ITEM_5=NVL(ORDREFID,'-99999')
        ITEM_6=NVL(AMOUNT,(-99999))
        ITEM_7=NVL(PAYDT,TO_DATE(' 9999-12-31 00:00:00', 'syyyy-mm-dd hh24:mi:ss'))
    Filter Predicates
      LOADDT<MAX(LOADDT)
    TABLE ACCESS TABLE1 FULL     267904
    VIEW VW_SQ_1                 690385
      SORT GROUP BY              690385
        TABLE ACCESS TABLE1 FULL 267904
How large is the table? If the count of deleted rows is up to about 12%, then you may think about an index.
Could you somehow partition your table - like week by week - and then scan only the current week?
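For instance, interval partitioning (11g or later) on the load date gives you one partition per week automatically; a sketch, assuming loaddt is a DATE and that you can recreate the table (the boundary date is arbitrary):
CREATE TABLE table1_part
PARTITION BY RANGE (loaddt)
INTERVAL (NUMTODSINTERVAL(7, 'DAY'))
( PARTITION p_initial VALUES LESS THAN (DATE '2013-01-01') )
AS SELECT * FROM table1;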
Maybe this could be more efficient. When you're using an aggregate function, Oracle must walk through all relevant rows (in your case a full scan), but when you use EXISTS it stops as soon as the first occurrence is found. (And of course the query would be much faster if there were a single function-based index - because of the NVLs - on all the columns in the where clause; see the index sketch after the query below.)
DELETE FROM TABLE1 A
WHERE exists (
    SELECT 1
    FROM TABLE1 B
    WHERE a.loaddt < b.loaddt and   -- a newer copy exists, so this row is the duplicate to drop
          a.id = b.id and
          a.cid = b.cid and
          NVL(a.type,'-99999') = NVL(b.type,'-99999') and
          NVL(a.trefid,'-99999') = NVL(b.trefid,'-99999') and
          NVL(a.ordrefid,'-99999') = NVL(b.ordrefid,'-99999') and
          NVL(a.amount,'-99999') = NVL(b.amount,'-99999') and
          NVL(a.paydt,TO_DATE('9999-12-31','YYYY-MM-DD')) = NVL(b.paydt,TO_DATE('9999-12-31','YYYY-MM-DD'))
);
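On that last point: to be usable for these predicates, a function-based index would have to match the NVL expressions exactly. A sketch (the NVL defaults for amount and paydt are adjusted to sensible types here, so treat this as illustrative rather than tested):
CREATE INDEX table1_dedup_fbi ON table1
( id
, cid
, NVL(type, '-99999')
, NVL(trefid, '-99999')
, NVL(ordrefid, '-99999')
, NVL(amount, -99999)
, NVL(paydt, DATE '9999-12-31')
);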
Although some may disagree, I am a proponent of running large, long-running deletes procedurally. In my view it is much easier to control and track progress (and your DBA will like you better ;-). Also, I'm not sure why you need to join table1 to itself to identify duplicates (and I'd be curious whether you ever run into snapshot-too-old issues with your current approach). You also shouldn't need multiple delete statements; all duplicates should be handled in one process. Finally, you should check WHY you're constantly re-introducing duplicates each week, and perhaps change the load process (maybe doing a merge/upsert rather than all inserts).
That said, you might try something like:
-- first create mat view to find all duplicates
create materialized view my_dups_mv
tablespace my_tablespace
build immediate
refresh complete on demand
as
select id, cid, type, trefid, ordrefid, amount, paydt, count(1) as cnt
from table1
group by id, cid, type, trefid, ordrefid, amount, paydt
having count(1) > 1;

-- dedup data (or put into procedure and schedule along with mat view refresh above)
declare
    -- make sure my_dups_mv is refreshed first
    cursor dup_cur is
        select * from my_dups_mv;

    type duprec_t is record (row_id rowid);
    duprec duprec_t;
    type duptab_t is table of duprec_t index by pls_integer;
    duptab duptab_t;

    l_ctr    pls_integer := 0;
    l_dupcnt pls_integer := 0;
begin
    for rec in dup_cur
    loop
        l_ctr := l_ctr + 1;

        -- assuming needed indexes exist
        select rowid
        bulk collect into duptab
        from table1
        where id       = rec.id
          and cid      = rec.cid
          and type     = rec.type
          and trefid   = rec.trefid
          and ordrefid = rec.ordrefid
          and amount   = rec.amount
          and paydt    = rec.paydt
        -- order by whatever makes sense to make the "keeper" float to top
        order by loaddt desc;

        for i in 2 .. duptab.count
        loop
            l_dupcnt := l_dupcnt + 1;
            delete from table1 where rowid = duptab(i).row_id;
        end loop;

        if (mod(l_ctr, 10000) = 0) then
            -- log to log table here (calling autonomous procedure you'll need to implement)
            insert_logtable('Table1 deletes', 'Commit reached, deleted ' || l_dupcnt || ' rows');
            commit;
        end if;
    end loop;
    commit;
end;
Check your log table for progress status.
1. Parallel
alter session enable parallel dml;
DELETE /*+ PARALLEL */ FROM TABLE1 A WHERE loaddt < (
...
Assuming you have Enterprise Edition, a sane server configuration, and you are on 11g. If you're not on 11g, the parallel syntax is slightly different.
2. Reduce memory requirements
The plan shows a hash join, which is probably a good thing. But without any useful filters, Oracle has to hash the entire table. (Tbone's query, which only uses a GROUP BY, looks nicer and may run faster. But it will also probably run into the same problem trying to sort or hash the entire table.)
If the hash can't fit in memory it must be written to disk, which can be very slow. Since you run this query every week, only one of the tables needs to look at all the rows. Depending on exactly when it runs, you can add something like this to the end of the query: ) where b.loaddt >= sysdate - 14. This may significantly reduce the amount of writing to temporary tablespace. And it may also reduce read IO if you use some partitioning strategy like jakub.petr suggested.
3. Active Report
If you want to know exactly what your query is doing, run the Active Report:
select dbms_sqltune.report_sql_monitor(sql_id => 'YOUR_SQL_ID_HERE', type => 'active')
from dual;
(Save the output to an .html file and open it with a browser.)
