I'm trying to figure out why the following query executes with a full table scan, which takes ages because the table has ~31M rows:
PROCEDURE d1(k_uni_in IN data_par.k_uni%TYPE) AS
CURSOR d1_cur IS
SELECT d.*
FROM data_par d
WHERE d.k_uni = k_uni_in
ORDER BY d.k_date;
BEGIN
FOR i IN d1_cur LOOP
...
END LOOP;
END;
However, a seemingly similar query uses an index range scan and is pretty much instant:
PROCEDURE d1(k_uni_in IN data_par.k_uni%TYPE) AS
CURSOR d1_cur(k_cv IN data_par.k_uni%TYPE) IS
SELECT d.*
FROM data_par d
WHERE d.k_uni = k_cv
ORDER BY d.k_date;
BEGIN
FOR i IN d1_cur(k_uni_in) LOOP
...
END LOOP;
END;
Why does that happen? Should I always use cursor parameters instead of subprogram parameters in cursors?
If your table DATA_PAR has a column named K_UNI_IN, then Oracle is interpreting this line:
WHERE d.k_uni = k_uni_in
As meaning
WHERE d.k_uni = d.k_uni_in
And, since that's obviously not a condition that an index can help with, you're getting a full table scan.
See also: http://docs.oracle.com/cd/E11882_01/appdev.112/e25519/nameresolution.htm#LNPLS2038
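If that is what is happening, one fix that keeps the subprogram parameter is to qualify it with the procedure name, so it can only resolve to the parameter and never to a column (a sketch of your first version):
PROCEDURE d1(k_uni_in IN data_par.k_uni%TYPE) AS
CURSOR d1_cur IS
SELECT d.*
FROM data_par d
WHERE d.k_uni = d1.k_uni_in -- qualified with the procedure name, so it cannot be captured by a column name
ORDER BY d.k_date;
BEGIN
FOR i IN d1_cur LOOP
...
END LOOP;
END;
Renaming the parameter so it no longer clashes with a column name would work just as well.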
CREATE OR REPLACE PROCEDURE RDBSTAGE.ATCHMNT_ERR_FILEID AUTHID CURRENT_USER
IS
CURSOR cv_atchtab IS
SELECT * FROM ATTACHMENT_ERROR;
I_ATCHMNT_ERR cv_atchtab%ROWTYPE;
V_FILE_ID VARCHAR2(40);
BEGIN
OPEN cv_atchtab;
LOOP
FETCH cv_atchtab BULK COLLECT INTO I_ATCHMNT_ERR;
EXIT WHEN cv_atchtab%NOTFOUND;
FOR i IN 1..I_ATCHMNT_ERR.COUNT
LOOP
SELECT FILE_ID BULK COLLECT
INTO V_FILE_ID
FROM ATTACHMENT_CLAIM t1
WHERE t1.CLAIM_TCN_ID=I_ATCHMNT_ERR(i).CLAIM_TCN_ID;
UPDATE ATTACHMENT_ERROR
SET FILE_ID = V_FILE_ID
WHERE t1.CLAIM_TCN_ID=I_ATCHMNT_ERR.CLAIM_TCN_ID;
END LOOP;
END LOOP;
CLOSE cv_atchtab;
END;
END ATCHMNT_ERR_FILEID;
/
SHOW ERRORS
Procedure ATCHMNT_ERR_FILEID compiled
Errors: check compiler log
Errors for PROCEDURE RDBSTAGE.ATCHMNT_ERR_FILEID:
LINE/COL ERROR
11/40 PLS-00497: cannot mix between single row and multi-row (BULK) in INTO list
14/5 PL/SQL: Statement ignored
14/31 PLS-00302: component 'COUNT' must be declared
PLS-00497: cannot mix between single row and multi-row (BULK) in INTO list
BULK COLLECT INTO is the syntax for populating a PL/SQL collection from a query, but your code is fetching into a single-row record variable.
14/31 PLS-00302: component 'COUNT' must be declared
I_ATCHMNT_ERR.COUNT is invalid because COUNT only applies to collections, and I_ATCHMNT_ERR is a single record, not a collection.
To fix this you need to define and use collection types. Something like this:
CREATE OR REPLACE PROCEDURE ATCHMNT_ERR_FILEID
IS
CURSOR cv_atchtab IS
SELECT * FROM ATTACHMENT_ERROR;
type ATCHMNT_ERR_nt is table of cv_atchtab%ROWTYPE;
I_ATCHMNT_ERR ATCHMNT_ERR_nt;
V_FILE_ID VARCHAR2(40);
BEGIN
OPEN cv_atchtab;
LOOP
FETCH cv_atchtab BULK COLLECT INTO I_ATCHMNT_ERR; -- collection type
EXIT WHEN I_ATCHMNT_ERR.COUNT = 0; -- changed this
FOR i IN 1..I_ATCHMNT_ERR.COUNT
LOOP
SELECT FILE_ID
INTO V_FILE_ID -- scalar type
FROM ATTACHMENT_CLAIM t1
WHERE t1.CLAIM_TCN_ID = I_ATCHMNT_ERR(i).CLAIM_TCN_ID;
UPDATE ATTACHMENT_ERROR t2 -- changed this
SET FILE_ID = V_FILE_ID
WHERE t2.CLAIM_TCN_ID = I_ATCHMNT_ERR(i).CLAIM_TCN_ID; -- changed this
END LOOP;
END LOOP;
CLOSE cv_atchtab;
END ATCHMNT_ERR_FILEID;
Here is a db<>fiddle demo of the above working against my guess of the data model.
The Oracle documentation is comprehensive, online and free. The PL/SQL Guide has a whole chapter on Collections and Records which I suggest you read. Find it here.
As an aside, nested loops with single-row statements like this are usually a red flag in PL/SQL. They are pretty inefficient and slow. SQL is a set-based language, and we should always try to solve problems in SQL whenever possible, ideally in one set-based statement. If this code is intended for production (rather than being a homework assignment) you should definitely consider re-writing it in a more performant fashion.
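For example, the whole loop could probably be replaced with a single set-based statement along these lines (just a sketch against my guess of the data model, assuming each CLAIM_TCN_ID maps to exactly one FILE_ID):
UPDATE ATTACHMENT_ERROR e
SET e.FILE_ID = (SELECT c.FILE_ID
                 FROM ATTACHMENT_CLAIM c
                 WHERE c.CLAIM_TCN_ID = e.CLAIM_TCN_ID)
WHERE EXISTS (SELECT NULL
              FROM ATTACHMENT_CLAIM c
              WHERE c.CLAIM_TCN_ID = e.CLAIM_TCN_ID);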
I have a main cursor that is working fine.
declare
v_firm_id number;
amount number;
v_total_sum TABLE_TEMP.TOTAL_SUM%TYPE;
CURSOR MT_CURSOR IS
SELECT firm_id FROM t_firm;
BEGIN
OPEN MT_CURSOR;
LOOP
FETCH MT_CURSOR INTO v_firm_id;
EXIT WHEN MT_CURSOR%NOTFOUND;
DBMS_OUTPUT.PUT_LINE(to_char(sysdate, 'mi:ss') ||'--- '|| v_firm_id);
INSERT INTO TABLE_TEMP(TOTAL_SUM) VALUES(v_firm_id);
COMMIT;
END LOOP;
DBMS_LOCK.SLEEP(20);
BEGIN
FOR loop_emp IN
(SELECT TOTAL_SUM INTO v_total_sum FROM TABLE_TEMP)
LOOP
dbms_output.put_line(to_char(sysdate, 'mi:ss') ||'--- '|| v_total_sum || '-TEST--');
END LOOP loop_emp;
END;
end;
Everything works fine except dbms_output.put_line(v_total_sum || '---');
I do not get any data there, although I do get the correct number of rows, which were inserted.
The problem is that the cursor FOR loop has a redundant INTO clause, which the compiler appears to silently ignore, so v_total_sum is never populated.
Try this:
begin
for r in (
select firm_id from t_firm
)
loop
insert into table_temp (total_sum) values (r.firm_id);
end loop;
dbms_lock.sleep(20);
for r in (
select total_sum from table_temp
)
loop
dbms_output.put_line(r.total_sum || '---');
end loop;
commit;
end;
If this had been a stored procedure rather than an anonymous block and you had PL/SQL compiler warnings enabled with alter session set plsql_warnings = 'ENABLE:ALL'; (or the equivalent preference setting in your IDE) then you would have seen:
PLW-05016: INTO clause should not be specified here
I also moved the commit to the end so you only commit once.
To summarise the comments below, the Cursor FOR loop construction declares, opens, fetches and closes the cursor for you, and is potentially faster because it fetches in batches of 100 (or similar - I haven't tested in recent versions). Simpler code has less chance of bugs and is easier to maintain in the future, for example if you need to add a column to the cursor.
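Spelled out by hand, what the cursor FOR loop is doing for you is roughly this (just a sketch using your t_firm/table_temp names):
declare
  cursor c is select firm_id from t_firm;
  type firm_tab is table of c%rowtype;
  l_firms firm_tab;
begin
  open c;
  loop
    fetch c bulk collect into l_firms limit 100; -- array fetch, similar to the FOR loop's batching
    exit when l_firms.count = 0;
    for i in 1 .. l_firms.count loop
      insert into table_temp (total_sum) values (l_firms(i).firm_id);
    end loop;
  end loop;
  close c;
end;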
Note the original version had:
for loop_emp in (...)
loop
...
end loop loop_emp;
This is misleading because loop_emp is the name of the record, not the cursor or the loop. The compiler is ignoring the text after end loop although really it should at least warn you. If you wanted to name the loop, you would use a label like <<LOOP_EMP>> above it. (I always name my loop records r, similar to the i you often see used in numeric loops.)
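For the record, a properly labelled loop would look like this (just a sketch):
<<loop_emp>>
for r in (
  select total_sum from table_temp
)
loop
  dbms_output.put_line(r.total_sum || '---');
end loop loop_emp; -- the name after end loop must match the label above the loop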
I have this simple Oracle PL/SQL block:
declare
cursor A is
select column_A
from A_TAB; -- no order by
begin
for rec_ in A loop
procedure_A(rec_.column_A);
end loop;
end;
And it has now been running for ages.
When I look at the value_string column of sys.v_$sql_bind_capture, I can see the current value of the bound column_A and, thankfully, that value keeps changing every few minutes.
As the cursor is not sorted by anything, is there a way to see how many more records are left to go (until this is finished)?
In other words, I would need to see the values currently being fetched from that cursor's query. Where should I look for them?
This is Oracle 12 database.
You can use dbms_application_info.set_session_longops to do this. The results are visible in V$SESSION_LONGOPS.
In your example, that could do something like:
DECLARE
rindex BINARY_INTEGER;
slno BINARY_INTEGER;
totalwork number;
sofar number;
obj BINARY_INTEGER;
cursor A is
select column_A,
COUNT(*) OVER () cnt
from A_TAB; -- no order by
begin
rindex := dbms_application_info.set_session_longops_nohint;
sofar := 0;
for rec_ in A loop
totalwork := rec_.cnt;
sofar := sofar + 1;
dbms_application_info.set_session_longops(rindex,
slno,
'Process a_tab',
'A_TAB',
0,
sofar,
totalwork,
'table',
'rows');
procedure_A(rec_.column_A);
end loop;
end;
Note that in order to get the totalwork value, I've used the analytic COUNT() function to get the total number of rows within the resultset. You could run a separate query to get the count before looping through your original cursor, if that is faster. You'd have to test both methods to work out which would be fastest for your data etc.
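If you went with the separate query instead, it would just be a count into totalwork before opening the loop (a sketch):
SELECT COUNT(*) INTO totalwork FROM a_tab; -- populate totalwork once, instead of COUNT(*) OVER () in the cursor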
Of course, depending on what procedure_a does, you might be able to avoid the need to monitor the progress if you can refactor things so that all the work is being done in a single SQL statement. My answer above assumes that it's not possible to do that. If it is, I highly recommend you refactor your code instead!
I have a procedure in which 100 tables have to be updated one by one. All tables have the same column to be updated. To improve performance I am trying to use EXECUTE IMMEDIATE with FORALL, but I am getting a lot of compilation errors.
Is it syntactically possible to update 100 different tables inside a FORALL statement using EXECUTE IMMEDIATE?
My code looks something like this.
Declare
TYPE u IS TABLE OF VARCHAR2(240) INDEX BY BINARY_INTEGER;
Table_List u;
FOR somecursor IN (SELECT variable1, variable2 FROM SomeTable)
LOOP
BEGIN
Table_List(1) := 'table1';
Table_List(2) := 'table2';
......
......
table_list(100):= 'table100';
FORALL i IN Table_List.FIRST .. Table_List.LAST
EXECUTE IMMEDIATE 'UPDATE :1 SET column = :3 WHERE column = :2'
USING Table_List(i), somecursor.variable1, somecursor.variable2 ;
end loop;
I hope people can understand what I am trying to do through this code. If something is badly wrong, please suggest what the correct syntax is and whether it can be done in some other, more efficient way.
Thanks a lot for all the help which comes my way.
(1) No, you can't use a bind variable for the table name.
(2) When you're using EXECUTE IMMEDIATE, this implies dynamic SQL - but FORALL requires that only one statement be executed. As soon as you specify a different table, you're talking about a different statement (regardless of whether the tables' structures happen to be equivalent or not).
You're going to have to do this in an ordinary FOR loop.
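Something like this (a sketch; some_column stands in for whatever your real column is called, and note that the table name has to be concatenated into the statement text while the values can still be bound):
FOR somecursor IN (SELECT variable1, variable2 FROM SomeTable)
LOOP
  FOR i IN Table_List.FIRST .. Table_List.LAST
  LOOP
    -- the table name must be part of the statement text; only the values are bound
    EXECUTE IMMEDIATE 'UPDATE ' || Table_List(i) ||
                      ' SET some_column = :new_val WHERE some_column = :old_val'
      USING somecursor.variable1, somecursor.variable2;
  END LOOP;
END LOOP;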
Just a guess, but I don't think you can use a bind variable as a table name. Have you tried:
EXECUTE IMMEDIATE 'UPDATE ' || Table_List(i) || ' SET column = :2 WHERE column = :3' ...
I wrote an Oracle function (for 8i) to fetch rows affected by a DML statement, emulating the behavior of RETURNING * from PostgreSQL. A typical function call looks like:
SELECT tablename_dml('UPDATE tablename SET foo = ''bar''') FROM dual;
The function is created automatically for each table and uses Dynamic SQL to execute a query passed as an argument. Moreover, a statement that executes the query dynamically is also wrapped in a BEGIN .. END block:
EXECUTE IMMEDIATE 'BEGIN '||query||' RETURNING col1, col2 BULK COLLECT INTO :1, :2; END;' USING OUT col1_t, col2_t;
The reason behind this peculiar construction is that it seems to be the only way to get values back from a DML statement that affects multiple rows. Both col1_t and col2_t are declared as collections of the types corresponding to the table columns.
Finally, to the problem. When the query passed in contains a subselect, execution of the function produces a syntax error. Below is a simple example to illustrate this:
CREATE TABLE xy(id number, name varchar2(80));
CREATE OR REPLACE FUNCTION xy_fn(query VARCHAR2) RETURN NUMBER IS
PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
EXECUTE IMMEDIATE 'BEGIN '||query||'; END;';
ROLLBACK;
RETURN 5;
END;
/
SELECT xy_fn('update xy set id = id + (SELECT min(id) FROM xy)') FROM DUAL;
The last statement produces the following error (the SELECT mentioned there is the SELECT min(id)):
ORA-06550: line 1, column 32: PLS-00103: Encountered the symbol
"SELECT" when expecting one of the following: ( - + mod not null
others avg count current exists max min prior sql stddev sum
variance execute forall time timestamp interval date
This problem occurs on 8i (8.1.6), but not 10g.
If BEGIN .. END blocks are removed - the problem disappears.
If a subselect in a query is replaced with something else, i.e. a constant, the problem disappears.
Unfortunately, I'm stuck with 8i and removing BEGIN .. END is not an option (see the explanation above).
Is there a specific Oracle 8i limitation in play here? Is it possible to overcome it with dynamic SQL?
Not sure why you need to do all this work. Oracle 8i supported RETURNING INTO with bulk collection. Find out more
So you should just be able to execute this statement in non-dynamic SQL. Something like this:
UPDATE tablename
SET foo = 'bar'
returning col1, col2 bulk collect into col1_t, col2_t;
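Wrapped in a block with the necessary collection declarations, that might look like this (a sketch using the tablename/col1/col2 names from your example):
DECLARE
  TYPE col1_tab IS TABLE OF tablename.col1%TYPE; -- collection types for the RETURNING targets
  TYPE col2_tab IS TABLE OF tablename.col2%TYPE;
  col1_t col1_tab;
  col2_t col2_tab;
BEGIN
  UPDATE tablename
  SET foo = 'bar'
  RETURNING col1, col2 BULK COLLECT INTO col1_t, col2_t;
END;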
Stripped of all the irrelevancies, I think your question is simple.
This update statement runs in SQL:
update xy set id = id + (SELECT min(id) FROM xy);
And this anonymous block also runs:
begin
update xy set id = id + 100;
end;
But combining the two doesn't work:
begin
update xy set id = id + (SELECT min(id) FROM xy);
end;
You have probably run into a limitation of older Oracle versions. Prior to 9i, the SQL engine and the PL/SQL SQL engine were frequently out of sync, so the latest features supported in SQL often weren't supported in PL/SQL. It looks like you have hit one of those cases.
Since 9i Oracle have striven to keep the two engines in sync, so it is much rarer to find things which work in SQL but not in PL/SQL.
Given the nature of your task, upgrading your version of Oracle is out. So all I can suggest is that you have a second function which handles the subquery case by taking the subquery as a separate argument, avoiding the need to embed it in the dynamic PL/SQL block. Something like this:
CREATE OR REPLACE FUNCTION xy_sqfn
(main_query VARCHAR2
, sub_query VARCHAR2 )
RETURN NUMBER
IS
n pls_integer;
BEGIN
execute immediate sub_query into n;
EXECUTE IMMEDIATE 'BEGIN '||main_query||'; END;'
using n;
RETURN 5;
END;
Call it like this:
result := xy_sqfn ('update xy set id = id + :1'
, 'SELECT min(id) FROM xy');
Now this approach won't work for correlated subqueries, so if you have any of them you'll need to do something different again.
Incidentally, using the AUTONOMOUS TRANSACTION pragma to fudge executing DML in a SELECT statement is quite horrible. Why not just run the functions in PL/SQL? Or use procedures? I suppose you'll say it doesn't matter because you're just writing some shonky code to support a data migration. Which is fair enough, but for the benefit of future seekers: don't do this! It's very bad practice!