I was asked in an interview, "How would you delete the first ten rows using a FOR loop?"
I answered that it does not sound logical, as a FOR loop is not necessary to delete the first ten rows. Was I right?
And if I had to give the query, what would be the answer?
for i in 1..10
loop
delete from table where rownum=1;
end loop
The code is wrong: as written it is not even a valid PL/SQL block (there is no BEGIN/END, the final statement is unterminated, and TABLE is a reserved word).
Yes, it is not logical to delete rows in a FOR loop, but the question is still confusing.
If you mean that you want to delete the first ten rows fetched by a query, no matter in what order, then you can use something like this:
DELETE FROM mytable
WHERE key IN (SELECT key from mytable FETCH FIRST 10 ROWS ONLY);
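Note that FETCH FIRST requires Oracle 12c or later; on an older release (an assumption about your environment), the same idea can be written with ROWNUM:
DELETE FROM mytable
WHERE key IN (SELECT key FROM mytable WHERE ROWNUM <= 10);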
Using a FOR loop does not make sense, but you could still do something like this:
DECLARE
    key dbms_sql.number_table;
BEGIN
    -- load every empno into the collection; this assumes emp has at least
    -- 10 rows, otherwise key(i) raises NO_DATA_FOUND below
    SELECT empno BULK COLLECT INTO key FROM emp;
    FOR i IN 1..10
    LOOP
        DELETE FROM emp WHERE empno = key(i);
    END LOOP;
    COMMIT;
END;
You can just use SQL and the ROWID pseudo-column to correlate the SELECT statement, where the ORDERing occurs, directly with the DELETE statement via a pointer to the row (rather than having to correlate on a primary key in the table):
DELETE FROM table_name
WHERE ROWID IN (
    SELECT ROWID
    FROM table_name
    ORDER BY value
    FETCH FIRST 10 ROWS ONLY
);
If you want to use a FOR loop then you can do the same thing in PL/SQL by using BULK COLLECT to store the ROWID values:
DECLARE
    TYPE rowid_t IS TABLE OF ROWID;
    rowids rowid_t;
BEGIN
    SELECT ROWID
    BULK COLLECT INTO rowids
    FROM table_name
    ORDER BY value
    FETCH FIRST 10 ROWS ONLY;

    FOR i IN 1 .. rowids.COUNT LOOP
        DELETE FROM table_name
        WHERE ROWID = rowids(i);
    END LOOP;
END;
/
(Note: You want to loop to the size of the collection rather than naively looping to the maximum value of 10 as there may not be 10 rows in the table to delete.)
Using a FOR loop statement:
create table test_tbl
as select level num
from dual
connect by level<=100
/
begin
for rc in (select rowid rid
from test_tbl
order by num fetch first 10 rows only)
loop
delete from test_tbl where rowid=rc.rid;
end loop;
commit;
end;
/
Hello, I have this situation:
I have a table with 400 million records and I want to migrate some data to another table to get better performance.
I know the process could be slow and heavy, so I want to execute something like this:
INSERT INTO TABLE_2 SELECT NAME, TAX_ID, PROD_CODE FROM TABLE_1;
But I want this in a procedure that iterates over the table in batches of 50,000 records.
I saw this, but it DELETEs the rows, and the target is SQL Server, not Oracle:
WHILE 1 = 1
BEGIN
DELETE TOP (50000) FROM DBO.VENTAS
OUTPUT
Deleted.AccountNumber,
Deleted.BillToAddressID,
Deleted.CustomerID,
Deleted.OrderDate,
Deleted.SalesOrderNumber,
Deleted.TotalDue
INTO DATOS_HISTORY.dbo.VENTAS
WHERE OrderDate >= '20130101'
and OrderDate < '20140101'
IF @@ROWCOUNT = 0 BREAK;
END;
If it's just a one-time migration, don't paginate/iterate at all. Just use a parallel DML (PDML) direct-path insert-select:
BEGIN
    EXECUTE IMMEDIATE 'alter session enable parallel dml';
    -- APPEND + PARALLEL gives a direct-path, parallel insert
    -- (NOLOGGING is a table attribute, not a valid hint, so it is omitted here)
    INSERT /*+ APPEND PARALLEL(8) */ INTO table_2
    SELECT name, tax_id, prod_code FROM table_1;
    COMMIT;
    EXECUTE IMMEDIATE 'alter session disable parallel dml';
END;
Or if the table doesn't yet exist, CTAS it to be even simpler:
CREATE TABLE table_2 PARALLEL (DEGREE 8) NOLOGGING AS
SELECT name, tax_id, prod_code FROM table_1;
If you absolutely must iterate due to other business needs you didn't mention, you can use PL/SQL for that as well:
DECLARE
    CURSOR cur_test IS
        SELECT name, tax_id, prod_code
        FROM table_1;
    TYPE test_tabtype IS TABLE OF cur_test%ROWTYPE;
    tab_test test_tabtype;
    var_limit integer := 50000;
BEGIN
    OPEN cur_test;
    LOOP
        FETCH cur_test BULK COLLECT INTO tab_test LIMIT var_limit;
        EXIT WHEN tab_test.COUNT = 0;  -- stop on an empty fetch; also guards the FORALL
        -- do whatever else you need to do
        FORALL i IN 1 .. tab_test.COUNT
            INSERT INTO table_2
                (name, tax_id, prod_code)
            VALUES (tab_test(i).name, tab_test(i).tax_id, tab_test(i).prod_code);
        COMMIT;
    END LOOP;
    CLOSE cur_test;
END;
That won't be nearly as fast, though. If you need it even faster, you can create a SQL object type and nested table type instead of the PL/SQL types you see here, and then do a normal INSERT /*+ APPEND */ ... SELECT statement that will use direct path, as sketched below. But honestly, I doubt you really need iteration at all.
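A minimal sketch of that variant, assuming simple column types; the type names t_migrate_row and t_migrate_tab are made up for illustration:
CREATE TYPE t_migrate_row AS OBJECT (
    name      VARCHAR2(100),
    tax_id    NUMBER,
    prod_code VARCHAR2(20)
);
/
CREATE TYPE t_migrate_tab AS TABLE OF t_migrate_row;
/

DECLARE
    l_rows t_migrate_tab;
BEGIN
    -- build a batch as a SQL-level collection
    -- (in practice you would fetch in LIMITed batches, as above)
    SELECT t_migrate_row(name, tax_id, prod_code)
    BULK COLLECT INTO l_rows
    FROM table_1;

    -- because the type is a SQL type, plain SQL can read it via TABLE(),
    -- so the insert can use direct path
    INSERT /*+ APPEND */ INTO table_2 (name, tax_id, prod_code)
    SELECT name, tax_id, prod_code FROM TABLE(l_rows);

    COMMIT;
END;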
I usually get the primary key of a newly inserted record as follows, while using a trigger:
insert into table1 (pk1, notes) values (null, 'Tester') returning pk1
into v_item;
I am trying to use the same concept but with an insert using a select statement. So for example:
insert into table1 (pk1, notes) select null, description from table2 where pk2 = 2 returning pk1
into v_item;
Note:
1. There is a trigger on table1 which automatically creates a pk1 on insert.
2. I need to use a select insert because of the size of the table that is being inserted into.
3. The insert is basically a copy of the record, so there is only 1 record being inserted at a time.
Let me know if I can provide more information.
I don't believe you can do this with insert/select directly. However, you can do it with PL/SQL and FORALL. Given the constraint about the table size, you'll have to balance memory usage with performance using l_limit. Here's an example...
Given this table with 100 rows:
create table t (
c number generated by default as identity,
c2 number
);
insert into t (c2)
select rownum
from dual
connect by rownum <= 100;
You can do this:
declare
cursor t_cur
is
select c2
from t;
type t_ntt is table of number;
l_c2_vals_in t_ntt;
l_c_vals_out t_ntt;
l_limit number := 10;
begin
open t_cur;
loop
fetch t_cur bulk collect into l_c2_vals_in limit l_limit;
forall i in indices of l_c2_vals_in
insert into t (c2) values (l_c2_vals_in(i))
returning c bulk collect into l_c_vals_out;
-- You have access to the new ids here
dbms_output.put_line(l_c_vals_out.count);
exit when l_c2_vals_in.count < l_limit;
end loop;
close t_cur;
end;
You can't use that mechanism; as shown in the documentation's railroad diagram, the RETURNING clause is only allowed with the VALUES version of INSERT, not with the subquery version.
I'm interpreting your second restriction (about 'table size') as being about the number of columns you would have to handle, possibly as individual variables, rather than about the number of rows; I don't see how row count would be relevant here. There are ways to avoid having lots of per-column local variables, though; you could select into a row-type variable first:
declare
v_item number;
v_row table1%rowtype;
begin
...
select null, description
into v_row
from table2 where pk2 = 2;
insert into table1 values v_row returning pk1 into v_item;
dbms_output.put_line(v_item);
...
or with a loop, which might make things look more complicated than necessary if you really only ever have a single row:
declare
v_item number;
begin
...
for r in (
select description
from table2 where pk2 = 2
)
loop
insert into table1 (notes) values (r.description) returning pk1 into v_item;
dbms_output.put_line(v_item);
...
end loop;
...
or with a collection... as @Dan posted while I was answering this, so I won't repeat it! Though again, that might be overkill or overly complicated for a single row.
I have around 5 million records which need to be copied from a table in one schema to a table in another schema (in the same database). I have prepared a script, but it gives me the below error.
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
Following is my script
DECLARE
TYPE tA IS TABLE OF varchar2(10) INDEX BY PLS_INTEGER;
TYPE tB IS TABLE OF SchemaA.TableA.band%TYPE INDEX BY PLS_INTEGER;
TYPE tD IS TABLE OF SchemaA.TableA.start_date%TYPE INDEX BY PLS_INTEGER;
TYPE tE IS TABLE OF SchemaA.TableA.end_date%TYPE INDEX BY PLS_INTEGER;
rA tA;
rB tB;
rD tD;
rE tE;
f number :=0;
BEGIN
SELECT col1||col2||col3 as main_col, band, effective_start_date as start_date, effective_end_date as end_date
BULK COLLECT INTO rA, rB, rD, rE
FROM schemab.tableb;
FORALL i IN rA.FIRST..rE.LAST
insert into SchemaA.TableA(main_col, BAND, user_type, START_DATE, END_DATE, roll_no)
values(rA(i), rB(i), 'C', rD(i), rE(i), 71);
f:=f+1;
if (f=10000) then
commit;
end if;
end;
Could you please help me in finding where the error lies?
Why not a simple
insert into SchemaA.TableA (main_col, BAND, user_type, START_DATE, END_DATE, roll_no)
SELECT col1||col2||col3 as main_col, band, 'C', effective_start_date, effective_end_date, 71
FROM schemab.tableb;
This
f:=f+1;
if (f=10000) then
commit;
end if;
does not make any sense. f becomes 1 and stays there: it is incremented just once, after the single FORALL statement completes. f=10000 will never be true, so you never COMMIT. If you did want an incremental commit, the counter would have to accumulate the rows processed, as sketched below.
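A sketch of such a counter (SQL%ROWCOUNT reports the rows affected by the last FORALL):
f := f + SQL%ROWCOUNT;  -- rows affected by the last FORALL statement
IF f >= 10000 THEN
    COMMIT;
    f := 0;  -- reset for the next batch
END IF;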
The following script worked for me; I was able to load around 5 million rows within 15 minutes.
ALTER SESSION ENABLE PARALLEL DML
/
DECLARE
cursor c_p1 is
SELECT col1||col2||col3 as main_col, band, effective_start_date as start_date, effective_end_date as end_date
FROM schemab.tableb;
TYPE TY_P1_FULL is table of c_p1%rowtype
index by pls_integer;
v_P1_FULL TY_P1_FULL;
v_seq_num number;
BEGIN
open c_p1;
loop
fetch c_p1 BULK COLLECT INTO v_P1_FULL LIMIT 10000;
exit when v_P1_FULL.count = 0;
FOR i IN 1..v_P1_FULL.COUNT loop
INSERT /*+ APPEND */ INTO schemaA.tableA VALUES (v_P1_FULL(i));
end loop;
commit;
end loop;
close c_P1;
dbms_output.put_line('Load completed');
end;
-- Disable parallel mode for this session
ALTER SESSION DISABLE PARALLEL DML
/
ORA-06502: PL/SQL: numeric or value error: Bulk bind: Error in define
You get that error because you have a literal in the VALUES clause of the INSERT. The FORALL expects everything to be bound to an array.
Your program has a massive problem, literally. You have no LIMIT on the BULK COLLECT clause, so it will try to load all five million records from TableB into your collections. That will blow your session's memory limit.
The point of using BULK COLLECT and FORALL is to bite off chunks of a bigger data set and process it in batches. For that you need a loop. The loop has no FOR condition: instead, test whether the fetch returned anything and exit when the array has zero records.
DECLARE
    TYPE recA IS RECORD (
        main_col   SchemaA.TableA.main_col%TYPE,
        band       SchemaA.TableA.band%TYPE,
        user_type  SchemaA.TableA.user_type%TYPE,
        start_date date,
        end_date   date,
        roll_no    number);
    TYPE recsA IS TABLE OF recA;
    nt_a recsA;
    f number := 0;
    CURSOR cur_b IS
        SELECT col1||col2||col3 AS main_col,
               band,
               'C' AS user_type,
               effective_start_date AS start_date,
               effective_end_date AS end_date,
               71 AS roll_no
        FROM schemab.tableb;
BEGIN
    OPEN cur_b;
    LOOP
        FETCH cur_b BULK COLLECT INTO nt_a LIMIT 1000;
        EXIT WHEN nt_a.COUNT() = 0;
        FORALL i IN 1 .. nt_a.COUNT()
            INSERT INTO SchemaA.TableA
                (main_col, band, user_type, start_date, end_date, roll_no)
            VALUES nt_a(i);  -- whole-record insert
        f := f + sql%rowcount;
        IF f >= 10000 THEN
            COMMIT;
            f := 0;
        END IF;
    END LOOP;
    COMMIT;
    CLOSE cur_b;
END;
Please note that issuing commits inside a loop is contraindicated. You lay yourself open to runtime errors such as ORA-01002 and ORA-01555. If your program crashes half-way through, you will have great difficulty resuming it without problems. By all means persist if you have problems with UNDO tablespace, but the correct answer is to get the DBA to enlarge the UNDO tablespace, not to weaken your code.
"i am using bulk insert because it gives better performance"
It is true that BULK COLLECT and FORALL ... INSERT is more performative than a CURSOR FOR loop with row-by-row single inserts. It is not more efficient than a pure SQL INSERT INTO ... SELECT. The value of the construct is that it allows us to manipulate the contents of the array before inserting it. This is handle if we have complex business rules which can only be applied programmatically.
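For instance, in the block above you could transform each fetched batch between the FETCH and the FORALL; the UPPER rule here is just a hypothetical placeholder:
-- between the FETCH and the FORALL in the block above:
FOR i IN 1 .. nt_a.COUNT LOOP
    nt_a(i).main_col := UPPER(nt_a(i).main_col);  -- a rule only expressible in code
END LOOP;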
Please try after changing the first two lines of your code as below:
DECLARE
TYPE tA IS TABLE OF SchemaA.TableA.main_col%TYPE INDEX BY PLS_INTEGER;
...
...
This may be because of a data type/length mismatch: in the declaration section you missed declaring one type so that it inherits its type from the table column.
Also, as mentioned, the f logic for the commit will not do the magic for you. Better to use LIMIT with BULK COLLECT.
I have to move the data from table A to table B (they have almost the same fields).
What I have now is a cursor that iterates over the records that have to be moved, inserts one record into the destination table, and updates the is_processed field in the source table.
Something like:
BEGIN
FOR i IN (SELECT *
FROM A
WHERE A.IS_PROCESSED = 'N')
LOOP
INSERT INTO B(...) VALUES(i....);
UPDATE A SET IS_PROCESSED = 'Y' WHERE A.ID = i.ID;
COMMIT;
END LOOP;
END;
The question is: how do I do the same using INSERT ... SELECT (without the loop) and then update IS_PROCESSED on all the moved rows?
There is no BULK COLLECT INTO for INSERT .. SELECT
Maybe you can try this. I don't think it's better than your LOOP, though.
DECLARE
TYPE l_src_tp IS TABLE OF t_source%ROWTYPE;
l_src_rows l_src_tp;
BEGIN
SELECT *
BULK COLLECT INTO l_src_rows
FROM t_source;
FORALL c IN l_src_rows.first .. l_src_rows.last
INSERT INTO t_dest (td_id, td_value)
VALUES (l_src_rows(c).ts_id, l_src_rows(c).ts_value);
FORALL c IN l_src_rows.first .. l_src_rows.last
UPDATE t_source
SET ts_is_proccesed = 'Y'
WHERE ts_id = l_src_rows(c).ts_id;
END;
If you reverse the order, doing the update first and then the insert, you can use:
DECLARE
ids sys.odcinumberlist;
BEGIN
UPDATE a SET is_processed = 'Y' WHERE is_processed = 'N' RETURNING id BULK COLLECT INTO ids;
INSERT INTO b (id) SELECT column_value id FROM TABLE(ids);
COMMIT;
END;
In the SELECT you can join the ids table and get other data from other tables you want to insert into b.
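For example, a hypothetical sketch, inside the same block, assuming a also has a name column that b needs:
INSERT INTO b (id, name)
SELECT t.column_value, a.name
FROM TABLE(ids) t
JOIN a ON a.id = t.column_value;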
Hello, I would prefer pure SQL rather than PL/SQL. I don't know why another update statement is required for this simple task. Let me know if this helps.
INSERT INTO <new table>
SELECT col1,col2,
.....,'Y'
FROM <old_table>
WHERE processed_in = 'N';
I have to generate a table (containing two columns) of random data from a database table through an Oracle procedure. The user can indicate the number of rows required, and we have to use the table data with ID values from 1001 to 1060. I am trying to use a cursor loop and am not sure which dbms_random method I should use.
I am using the following code to create the procedure:
create or replace procedure a05_random_plant(p_count in number)
as
v_count number := p_count;
cursor c is
select plant_id, common_name
from ppl_plants
where rownum = v_count
order by dbms_random.value;
begin
delete from a05_random_plants_table;
for c_table in c
loop
insert into a05_random_plants_table(plant_id, plant_name)
values (c_table.plant_id, c_table.common_name);
end loop;
end;
/
It compiled successfully. Then I executed it with the following code:
set serveroutput on
exec a05_random_plant(5);
It shows "anonymous block completed",
but when I run the following code, I do not get any records:
select * from a05_random_plants_table;
ROWNUM = value can never be true for a value greater than 1, because ROWNUM is assigned as rows pass the filter, so no row ever gets past it to become row 2. Hence try the below; note also that the ORDER BY must happen in a subquery before ROWNUM limits the rows, otherwise you would merely shuffle an arbitrary first batch of rows rather than take a random sample:
create or replace procedure a05_random_plant(p_count in number)
as
    v_count number := p_count;
    cursor c is
        select plant_id, common_name
        from (select plant_id, common_name
              from ppl_plants
              order by dbms_random.value)   -- shuffle first...
        where rownum <= v_count;            -- ...then take the first v_count rows
begin
    delete from a05_random_plants_table;
    for c_table in c
    loop
        insert into a05_random_plants_table(plant_id, plant_name)
        values (c_table.plant_id, c_table.common_name);
    end loop;
end;
/
A query by Tom Kyte that will generate almost 75K rows:
select trunc(sysdate,'year')+mod(rownum,365) TRANS_DATE,
mod(rownum,100) CUST_ID,
abs(dbms_random.random)/100 SALES_AMOUNT
from all_objects
/
You can use this example to write your query and add a where clause to it (where id between 1001 and 1060, for example).
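Applied to this question, a sketch (assuming the ID range lives in ppl_plants.plant_id):
select plant_id, common_name
from (select plant_id, common_name
      from ppl_plants
      where plant_id between 1001 and 1060
      order by dbms_random.value)
where rownum <= 5;  -- 5 = the number of rows the user asked for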
I don't think you should use a cursor (which is naturally slow); do a direct insert from a select instead:
insert into table (col1, col2)
select colx, coly from other_table...
And isn't a COMMIT missing at the end of your procedure?
So, all the code in your procedure would be a DELETE, an INSERT with that SELECT, and then a COMMIT.
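For example, a sketch reusing the names from the question:
create or replace procedure a05_random_plant(p_count in number)
as
begin
    delete from a05_random_plants_table;

    insert into a05_random_plants_table (plant_id, plant_name)
    select plant_id, common_name
    from (select plant_id, common_name
          from ppl_plants
          order by dbms_random.value)
    where rownum <= p_count;

    commit;
end;
/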