How to write the following PL/SQL block without using a cursor? (Oracle)

I wrote a cursor in a PL/SQL block. The block takes a lot of time when there are many records.
How can I write this without a cursor, or is there another way that will reduce the time?
Is there any way to insert into one table and delete from another table using a single query?
DECLARE
    MDLCursor SYS_REFCURSOR;
    -- Variable declarations assumed from the query's context.
    v_CallType_ID      sysmdl_calltypes.call_type_id%TYPE;
    v_mdldest_id       DialCodes.dest_id%TYPE;
    v_mdldigits        DialCodes.digits%TYPE;
    v_mdlEffectiveDate DialCodes.Effectivedate%TYPE;
    v_mdlExpDate       DialCodes.expirydate%TYPE;
BEGIN
    OPEN MDLCursor FOR
        SELECT dc.dest_id, dc.digits, dc.Effectivedate, dc.expirydate
          FROM DialCodes dc
         INNER JOIN MDL d
            ON dc.Dest_ID = d.Dest_ID
           AND d.PriceEntity = 1
          JOIN sysmdl_calltypes s
            ON s.call_type_id = v_CallType_ID
           AND s.dest_id = dc.Dest_ID
           AND s.call_type_id NOT IN
               (SELECT calltype_id FROM ignore_calltype_for_routing)
         ORDER BY LENGTH(dc.digits) DESC, dc.digits DESC;
    LOOP
        FETCH MDLCursor
         INTO v_mdldest_id, v_mdldigits, v_mdlEffectiveDate, v_mdlExpDate;
        -- Exit as soon as the fetch returns no row, so the last row
        -- is not processed twice.
        EXIT WHEN MDLCursor%NOTFOUND;

        INSERT INTO tt_pendingcost_temp
            (Dest_ID, Digits, CCASDigits, Destination, tariff_id,
             NewCost, Effectivedate, ExpiryDate, previous, Currency)
        SELECT v_mdldest_id, Digits, v_mdldigits, Destination, tariff_id,
               NewCost, Effectivedate, ExpiryDate, previous, Currency
          FROM tt_PendingCost
         WHERE SUBSTR(Digits, 1, 2) = SUBSTR(v_MDLDigits, 1, 2)
           AND INSTR(Digits, v_MDLDigits) = 1
           AND v_mdlEffectiveDate <= effectivedate
           AND (v_mdlExpDate > effectivedate OR v_mdlExpDate IS NULL);

        IF SQL%ROWCOUNT > 0 THEN
            DELETE FROM tt_PendingCost
             WHERE SUBSTR(Digits, 1, 2) = SUBSTR(v_MDLDigits, 1, 2)
               AND INSTR(Digits, v_MDLDigits) = 1
               AND v_mdlEffectiveDate <= effectivedate
               AND (v_mdlExpDate > effectivedate OR v_mdlExpDate IS NULL);
        END IF;
    END LOOP;
    CLOSE MDLCursor;
END;

I don't have your tables and your data so I can only guess at a couple of things that would be slowing you down.
Firstly, the query used in your cursor has an ORDER BY clause in it. If this query returns a lot of rows, Oracle has to fetch them all and sort them all before it can return the first row. If this query typically returns a lot of results, and you don't particularly need it to return sorted results, you may find your PL/SQL block speeds up a bit if you drop the ORDER BY. That way, you can start getting results out of the cursor without needing to fetch all the results, store them somewhere and sort them first.
Secondly, the following is the WHERE clause used in your INSERT INTO ... SELECT ... and DELETE FROM ... statements:
where substr(Digits, 1, 2) = substr(v_MDLDigits, 1, 2)
and instr(Digits, v_MDLDigits) = 1
and v_mdlEffectiveDate <= effectivedate
and (v_mdlExpDate > effectivedate or v_mdlExpDate is null);
I don't see how Oracle can make effective use of indexes with any of these conditions. It would therefore have to do a full table scan each time.
The last two conditions seem reasonable and there doesn't seem a lot that can be done with them. I'd like to focus on the first two conditions as I think there's more scope for improvement with them.
The second of the four conditions is
instr(Digits, v_MDLDigits) = 1
This condition holds if and only if Digits starts with the contents of v_MDLDigits. A better way of writing this would be
Digits LIKE v_MDLDigits || '%'
The advantage of using LIKE in this situation instead of INSTR is that Oracle can make use of indexes when using LIKE. If you have an index on the Digits column, Oracle will be able to use it with this query. Oracle would then be able to focus in on those rows that start with the digits in v_MDLDigits instead of doing a full table scan.
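For example, a plain index on that column (the index name here is invented for illustration) enables an index range scan on the rewritten predicate:
CREATE INDEX tt_pendingcost_digits_ix ON tt_PendingCost (Digits);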
The first of the four conditions is:
substr(Digits, 1, 2) = substr(v_MDLDigits, 1, 2)
If v_MDLDigits has length at least 2, and all entries in the Digits columns also have length at least 2, then this condition is redundant since it is implied by the previous one we looked at.
I'm not sure why you would have a condition like this. The only reason I can think why you might have this condition is if you have a functional index on substr(Digits, 1, 2). If not, I would be tempted to remove this substr condition altogether.
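For reference, the functional index that would justify keeping the condition would look like this (hypothetical name; the indexed expression must match the query text exactly for Oracle to use it):
CREATE INDEX tt_pendingcost_digits2_ix ON tt_PendingCost (SUBSTR(Digits, 1, 2));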
I don't think the cursor is what is making this procedure run slowly, and there's no single statement I know of that can insert into one table and delete from another. To make this procedure speed up I think you just need to tune the queries a bit.

Related

SQL%FOUND where SELECT query returns no rows

I have the following function, which returns the next available client ID from the Client table:
CREATE OR REPLACE FUNCTION getNextClientID RETURN INT AS
    ctr INT;
BEGIN
    SELECT MAX(NUM) INTO ctr FROM Client;
    IF SQL%NOTFOUND THEN
        RETURN 1;
    ELSIF SQL%FOUND THEN
        -- RETURN SQL%ROWCOUNT;
        RAISE_APPLICATION_ERROR(-20010, 'ROWS FOUND!');
        -- RETURN ctr + 1;
    END IF;
END;
But when calling this function,
BEGIN
DBMS_OUTPUT.PUT_LINE(getNextClientID());
END;
I get the ORA-20010: ROWS FOUND! error, which I found a bit odd, since the Client table contains no data.
Also, if I comment out RAISE_APPLICATION_ERROR(-20010, 'ROWS FOUND!'); and log the value of SQL%ROWCOUNT to the console, I get 1 as a result.
On the other hand, when changing
SELECT MAX(NUM) INTO ctr FROM Client;
to
SELECT NUM INTO ctr FROM Client;
The execution went as expected. What is the reason behind this behavior?
Aggregate functions will always return a result:
All aggregate functions except COUNT(*), GROUPING, and GROUPING_ID ignore nulls. You can use the NVL function in the argument to an aggregate function to substitute a value for a null. COUNT and REGR_COUNT never return null, but return either a number or zero. For all the remaining aggregate functions, if the data set contains no rows, or contains only rows with nulls as arguments to the aggregate function, then the function returns null.
You can change your query to:
SELECT COALESCE(MAX(num), 0) + 1 INTO ctr FROM Client;
and remove the conditionals altogether. Be careful about concurrency issues though if you do not use SELECT FOR UPDATE.
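The whole function then collapses to something like this sketch:
CREATE OR REPLACE FUNCTION getNextClientID RETURN INT AS
    ctr INT;
BEGIN
    -- MAX() always returns exactly one row; COALESCE maps the NULL
    -- produced by an empty table to 0, so an empty table yields 1.
    SELECT COALESCE(MAX(num), 0) + 1 INTO ctr FROM Client;
    RETURN ctr;
END;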
A query with an aggregate function and no GROUP BY clause always returns exactly one row. If you want a no_data_found exception on an empty table, add a GROUP BY clause or remove the max:
SQL> create table t (id number, client_id number);
Table created.
SQL> select nvl(max(id), 0) from t;
NVL(MAX(ID),0)
--------------
0
SQL> select nvl(max(id), 0) from t group by client_id;
no rows selected
Usually queries like yours (with max and without group by) are used to avoid no_data_found.
Aggregate functions like MAX will always return a row. It will return one row with a null value if no row is found.
By the way, SELECT NUM INTO ctr FROM Client; will raise a TOO_MANY_ROWS exception when there's more than one row in the table.
You should instead check whether or not ctr is null.
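Something along these lines (a sketch of that check, based on the question's function):
SELECT MAX(num) INTO ctr FROM Client;
IF ctr IS NULL THEN
    RETURN 1;        -- the table is empty
ELSE
    RETURN ctr + 1;  -- next ID after the current maximum
END IF;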
Others have already explained the reason why your code isn't "working", so I'm not going to be doing that.
You seem to be implementing an identity column of some description yourself, probably in order to support a surrogate key. Doing this yourself is dangerous and could cause large issues in your application.
You don't need to implement identity columns yourself. From Oracle 12c onwards Oracle has native support for identity columns; these are implemented using sequences, which are also available in 12c and earlier versions.
A sequence is a database object that is guaranteed to provide a new, unique, number when called, no matter the number of concurrent sessions requesting values. Your current approach is extremely vulnerable to collision when used by multiple sessions. Imagine 2 sessions simultaneously finding the largest value in the table; they then both add one to this value and try to write this new value back. Only one can be correct.
See How to create id with AUTO_INCREMENT on Oracle?
Basically, if you use a sequence then you don't need any of this code.
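For illustration, the two approaches look something like this (table, column, and sequence names are assumed):
-- Option 1, Oracle 12c and later: a native identity column.
CREATE TABLE client (
    num  NUMBER GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    name VARCHAR2(100)
);

-- Option 2, any version with sequences: a plain NUMBER column
-- populated from a sequence at insert time.
CREATE SEQUENCE client_seq;
INSERT INTO client (num, name) VALUES (client_seq.NEXTVAL, 'some client');
With either option, concurrent sessions can never be handed the same value.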
As a secondary note your statement at the top is incorrect:
I have the following function, which returns the next available client ID from the Client table
Your function returns the maximum ID + 1. If there's a gap in the IDs, i.e. 1, 2, 3, 5 then the "missing" number (4 in this case) will not be returned. A gap can occur for any number of reasons (deletion of a row for example) and does not have a negative impact on your database in any way at all - don't worry about them.

PL/SQL query taking too long to execute

The Person table has 2 million rows in the Finance schema and 3 million rows in the HR schema. I am updating the person status on the Finance schema with the person_status from the HR schema. The query is running on an M3000 server with 32 GB RAM, Solaris 10 and Oracle 11.2.2. It has been running for 117 hours and is still going. How can I optimize this query? I have created an index on person_no.
DECLARE
    CURSOR c IS
        SELECT p1.person_no     AS "PersonNo1",
               p2.person_no     AS "PersonNo2",
               p2.person_status AS "P2_PERSON_STATUS"
          FROM person p1, hr.person p2
         WHERE (lower(p1.person_no) = lower(p2.person_no)
                OR substr(p1.person_no, instr(p1.person_no, '-') + 1, length(p1.person_no))
                 = substr(p2.person_no, instr(p2.person_no, '-') + 1, length(p2.person_no)));
BEGIN
    FOR i IN c LOOP
        UPDATE person
           SET person_status_id = decode(lower(i.P2_PERSON_STATUS), 'y', 1, 'n', 2)
         WHERE (lower(person_no) = lower(i.PersonNo2)
                OR substr(person_no, instr(person_no, '-') + 1, length(person_no))
                 = substr(i.PersonNo2, instr(i.PersonNo2, '-') + 1, length(i.PersonNo2)));
        COMMIT;
    END LOOP;
END;
/
There are several things you can do to optimize this. It would be helpful to see a query plan.
You can use a MERGE statement. Then you have only one statement, and you can concentrate on optimizing it. Something like this:
merge into person
using hr.person person_hr
on (person.person_no=person_hr.person_no)
when matched then
update set person_status_id=decode(lower(person_status),'y',1,'n',2);
You will have to adjust the ON clause to match your WHERE clause. That clause is then the main target for optimization. You may need to create function-based indexes for the lower and also the substr expressions.
Something like
CREATE INDEX person_idx
ON person (lower(person_no))
and likewise for the substr expressions. Hope this helps.
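A sketch of the corresponding substr index; note the indexed expression must match the query's expression exactly for Oracle to use it:
CREATE INDEX person_substr_idx
    ON person (substr(person_no, instr(person_no, '-') + 1, length(person_no)));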
In the code provided in the question I can see two aspects:
1) query optimization
2) row-by-row updates, which are never recommended for such a huge record count.
Approaches to optimization:
1) Use a MERGE statement, as it bundles the block up into pure SQL and will enhance your query.
2) Use function-based indexes, since functions like LOWER appear in the query's WHERE conditions.
NOTE: over-indexing may also cause deterioration in performance.
3) Last but not least, if you have to stay with an anonymous block, avoid row-by-row updates. Try the bulk collect option illustrated below.
Since I do not have a workstation handy, please bear with any syntax errors.
Let me know if this helps.
DECLARE
    -- Collection types; substitute the real table and column names
    -- for the placeholders.
    TYPE p1 IS TABLE OF <table_name>.<COLUMN_NAME>%TYPE;
    TYPE p2 IS TABLE OF <table_name>.<COLUMN_NAME>%TYPE;
    TYPE status IS TABLE OF <table_name>.<COLUMN_NAME>%TYPE;
    PersonNo1        p1;
    PersonNo2        p2;
    P2_PERSON_STATUS status;
BEGIN
    -- NOTE: an unbounded BULK COLLECT loads the whole result set into
    -- memory at once; with millions of rows, consider an explicit
    -- cursor with a LIMIT clause instead.
    SELECT p1.person_no     AS "PersonNo1",
           p2.person_no     AS "PersonNo2",
           p2.person_status AS "P2_PERSON_STATUS"
    BULK COLLECT INTO PersonNo1, PersonNo2, P2_PERSON_STATUS
      FROM person p1, hr.person p2
     WHERE (lower(p1.person_no) = lower(p2.person_no)
            OR SUBSTR(p1.person_no, instr(p1.person_no, '-') + 1, LENGTH(p1.person_no))
             = SUBSTR(p2.person_no, instr(p2.person_no, '-') + 1, LENGTH(p2.person_no)));

    -- FORALL sends all the updates to the SQL engine in one round trip.
    FORALL i IN PersonNo1.FIRST .. PersonNo1.LAST
        UPDATE person
           SET person_status_id = DECODE(lower(P2_PERSON_STATUS(i)), 'y', 1, 'n', 2)
         WHERE (lower(person_no) = lower(PersonNo2(i))
                OR SUBSTR(person_no, instr(person_no, '-') + 1, LENGTH(person_no))
                 = SUBSTR(PersonNo2(i), instr(PersonNo2(i), '-') + 1, LENGTH(PersonNo2(i))));
    COMMIT;
END;
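With row counts in the millions, an unbounded BULK COLLECT can exhaust session memory. A bounded variant with an explicit cursor and a LIMIT clause is safer; here is a sketch reusing the question's tables (only the lower() match is shown, and the 10,000-row batch size is an arbitrary choice):
DECLARE
    CURSOR c IS
        SELECT p2.person_no, p2.person_status
          FROM person p1, hr.person p2
         WHERE lower(p1.person_no) = lower(p2.person_no);
    TYPE t_no IS TABLE OF hr.person.person_no%TYPE;
    TYPE t_status IS TABLE OF hr.person.person_status%TYPE;
    v_no     t_no;
    v_status t_status;
BEGIN
    OPEN c;
    LOOP
        -- Fetch bounded batches instead of the whole result set at once.
        FETCH c BULK COLLECT INTO v_no, v_status LIMIT 10000;
        EXIT WHEN v_no.COUNT = 0;
        FORALL i IN 1 .. v_no.COUNT
            UPDATE person
               SET person_status_id = decode(lower(v_status(i)), 'y', 1, 'n', 2)
             WHERE lower(person_no) = lower(v_no(i));
        COMMIT;  -- one commit per batch keeps undo usage bounded
    END LOOP;
    CLOSE c;
END;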

Get count of ref cursor in Oracle

I have a procedure which returns a ref cursor as an output parameter. I need a way to get the number of records in the cursor. Currently I fetch the count by repeating the same select query, which is hurting performance.
For example:
create or replace package temp as
    TYPE metacur IS REF CURSOR;
    PROCEDURE prcSumm (
        pStartDate IN DATE,
        pEndDate   IN DATE,
        pKey       IN NUMBER,
        pCursor    OUT metacur
    );
end temp;
/

create or replace package body temp is
    procedure prcSumm (
        pStartDate IN DATE,
        pEndDate   IN DATE,
        pKey       IN NUMBER,
        pCursor    OUT metacur
    )
    IS
        vCount NUMBER;
    BEGIN
        vCount := 0;
        select count(*)
          into vCount
          from customer c, program p, custprog cp
         where c.custno = cp.custno
           and cp.programid = p.programid
           and p.programid = pKey
           and c.lastupdate >= pStartDate
           and c.lastupdate < pEndDate;

        OPEN pCursor for
            SELECT c.custno, p.programid, c.fname, c.lname,
                   c.address1, c.address2, cp.plan
              from customer c, program p, custprog cp
             where c.custno = cp.custno
               and cp.programid = p.programid
               and p.programid = pKey
               and c.lastupdate >= pStartDate
               and c.lastupdate < pEndDate;
    end prcSumm;
end temp;
/
Is there a way to get the number of rows in the OUT cursor into vCount?
Thanks!
Oracle does not, in general, know how many rows will be fetched from a cursor until the last fetch finds no more rows to return. Since Oracle doesn't know how many rows will be returned, you can't either without fetching all the rows (as you're doing here when you re-run the query).
Unless you are using a single-user system or you are using a non-default transaction isolation level (which would introduce additional complications), there is no guarantee that the number of rows that your cursor will return and the count(*) the second query returns would match. It is entirely possible that another session committed a change between the time that you opened the cursor and the time that you ran the count(*).
If you are really determined to produce an accurate count, you could add a cnt column defined as count(*) over () to the query you're using to open the cursor. Every row in the cursor would then have a column cnt which would tell you the total number of rows that will be returned. Oracle has to do more work to generate the cnt but it's less work than running the same query twice.
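Applied to the question's procedure, that is a one-line addition to the cursor's query (a sketch):
OPEN pCursor for
    SELECT c.custno, p.programid, c.fname, c.lname,
           c.address1, c.address2, cp.plan,
           count(*) over () as cnt  -- same value on every row: the total row count
      from customer c, program p, custprog cp
     where c.custno = cp.custno
       and cp.programid = p.programid
       and p.programid = pKey
       and c.lastupdate >= pStartDate
       and c.lastupdate < pEndDate;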
Architecturally, though, it doesn't make sense to return a result and a count from the same piece of code. Determining the count is something that the caller should be responsible for since the caller has to be able to iterate through the results. Every caller should be able to handle the obvious boundary cases (i.e. the query returns 0 rows) without needing a separate count. And every caller should be able to iterate through the results without needing to know how many results there will be. Every single time I've seen someone try to follow the pattern of returning a cursor and a count, the correct answer has been to redesign the procedure and fix whatever error on the caller prompted the design.

UPDATE on INSERT duplicate primary key in Oracle?

I have a simple INSERT query where I need to use UPDATE instead when the primary key is a duplicate. In MySQL this seems easier; in Oracle it seems I need to use MERGE.
All the examples of MERGE I could find had some sort of "source" and "target" tables; in my case, the source and the target are the same table. I was not able to make sense of the examples to create my own query.
Is MERGE the only way, or is there a better solution?
INSERT INTO movie_ratings
VALUES (1, 3, 5)
It's basically this and the primary key is the first 2 values, so an update would be like this:
UPDATE movie_ratings
SET rating = 8
WHERE mid = 1 AND aid = 3
I thought of using a trigger that would automatically execute the UPDATE statement when the INSERT was called, but only if the primary key is a duplicate. Is there any problem doing it this way? I need some help with triggers, though, as I'm having some difficulty understanding them and writing my own.
MERGE is the 'do INSERT or UPDATE as appropriate' statement in Standard SQL, and probably therefore in Oracle SQL too.
Yes, you need a 'table' to merge from, but you can almost certainly create that table on the fly:
MERGE INTO Movie_Ratings M
USING (SELECT 1 AS mid, 3 AS aid, 8 AS rating FROM dual) N
ON (M.mid = N.mid AND M.aid = N.aid)
WHEN MATCHED THEN UPDATE SET M.rating = N.rating
WHEN NOT MATCHED THEN INSERT( mid, aid, rating)
VALUES(N.mid, N.aid, N.rating);
(Syntax not verified.)
A typical way of doing this is one of the following:
1) perform the INSERT and catch a DUP_VAL_ON_INDEX exception, then perform an UPDATE instead, or
2) perform the UPDATE first and, if SQL%ROWCOUNT = 0, perform an INSERT.
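A sketch of the first variant, using the question's table and values:
BEGIN
    INSERT INTO movie_ratings (mid, aid, rating) VALUES (1, 3, 5);
EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
        -- The (mid, aid) primary key already exists, so update instead.
        UPDATE movie_ratings SET rating = 5 WHERE mid = 1 AND aid = 3;
END;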
You can't write a trigger on a table that performs another operation on the same table; that raises an Oracle error (ORA-04091, mutating table).
I'm a T-SQL guy, but a trigger in this case is not a good solution. Most triggers are not good solutions. In T-SQL I would simply perform an IF EXISTS (SELECT * FROM dbo.Table WHERE ...), but in Oracle you have to select the count:
DECLARE
    cnt NUMBER;
BEGIN
    SELECT COUNT(*)
      INTO cnt
      FROM mytable
     WHERE id = 12345;
    IF (cnt = 0) THEN
        ...
    ELSE
        ...
    END IF;
END;
It would appear that MERGE is what you need in this case:
MERGE INTO movie_ratings mr
USING (SELECT 1 AS mid, 3 AS aid, 8 AS rating FROM dual) mri
ON (mr.mid = mri.mid AND mr.aid = mri.aid)
WHEN MATCHED THEN
    UPDATE SET mr.rating = mri.rating
WHEN NOT MATCHED THEN
    INSERT (mr.mid, mr.aid, mr.rating)
    VALUES (mri.mid, mri.aid, mri.rating);
Like I said, I'm a T-SQL guy, but the basic idea here is to join the incoming row against the movie_ratings table. If there's no performance hit from using the "if exists" example, I'd use it for readability.

How to: correct SQL syntax for finding the next available identifier

I think I could use some help here from more experienced users...
I have an integer field in a table, let's call it SO_ID in a table SO, and for each new row I need to calculate a new SO_ID based on the following rules:
1) SO_ID consists of 6 digits where the first 3 are an area code and the last three are the sequence number within this area.
309001
309002
309003
2) so the next new row will have a SO_ID of value
309004
3) if someone deletes the row with SO_ID value = 309002, then the next new row must recycle this value, so the next new row has got to have the SO_ID of value
309002
Can anyone please provide me with either a SQL function or a PL/SQL function (perhaps a trigger straight away?) that would return the next available SO_ID I need to use?
I reckon I could make use of the keyword rownum in my SQL, but the following just doesn't work properly:
select max(so_id),max(rownum) from(
select (so_id),rownum,cast(substr(cast(so_id as varchar(6)),4,3) as int) from SO
where length(so_id)=6
and substr(cast(so_id as varchar(6)),1,3)='309'
and cast(substr(cast(so_id as varchar(6)),4,3) as int)=rownum
order by so_id
);
thank you for all your help!
This kind of logic is fraught with peril. What if two sessions calculate the same "next" value, or both try to reuse the same "deleted" value? Since your column is an integer, you'd probably be better off querying "between 309001 and 309999", but that begs the question of what happens when you hit the thousandth item in area 309?
Is it possible to make SO_ID a foreign key to another table as well as a unique key? You could pre-populate the parent table with all valid IDs (or use a function to generate them as needed), and then it would be a simple matter to select the lowest one where a child record doesn't exist.
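A sketch of that lookup, with valid_so_ids as a hypothetical pre-populated parent table:
SELECT MIN(v.so_id)
  FROM valid_so_ids v
 WHERE v.so_id BETWEEN 309001 AND 309999
   AND NOT EXISTS (SELECT 1 FROM so WHERE so.so_id = v.so_id);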
Well, we came up with this... it sort of works; concurrency is 'solved' via a unique constraint:
select min(lastnumber)
from (
    select so_id,
           so_id - LAG(so_id, 1, so_id) OVER (ORDER BY so_id) AS diff,
           LAG(so_id, 1, so_id) OVER (ORDER BY so_id) AS lastnumber
    from so_miso
    where substr(cast(so_id as varchar(6)), 1, 3) = '309'
      and length(so_id) = 6
    order by so_id
) a
where diff > 1;
Do you really need to compute & store this value at the time a row is inserted? You would normally be better off storing the area code and a date in a table and computing the SO_ID in a view, i.e.
SELECT area_code ||
LPAD( DENSE_RANK() OVER( PARTITION BY area_code
ORDER BY date_column ),
3,
'0' ) AS so_id,
<<other columns>>
FROM your_table
or having a process that runs periodically (nightly, for example) to assign the SO_ID using similar logic.
If your application is not pure SQL, you could do this in application code (i.e. Java code). That would be more straightforward.
If you are recycling numbers when rows are deleted, your base table must be consulted when generating the next number. "Legacy" pre-relational schemes that attempt to encode information in numbers are a pain to make airtight when numbers must be recycled after deletes, as you say yours must.
If you want to avoid having to scan your table looking for gaps, an after-delete routine must write the deleted number to a separate table in a "ReuseMe" column. The insert routine then does the following (a sketch follows the list):
1) begin a transaction
2) select from the next-number table FOR UPDATE
3) use a ReuseMe number if available, else use the next number
4) clear the ReuseMe number if applicable, or increment the next number in the next-number table
5) commit the transaction
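A sketch of that routine; the so_next_number and so_reuse tables are invented for illustration:
DECLARE
    v_reuse NUMBER;
    v_next  NUMBER;
BEGIN
    -- Steps 1 and 2: lock the area's next-number row, which
    -- serializes concurrent inserts for this area code.
    SELECT next_id INTO v_next
      FROM so_next_number
     WHERE area_code = '309'
       FOR UPDATE;

    -- Step 3: look for a recycled number (MIN returns NULL if none exist).
    SELECT MIN(reuse_id) INTO v_reuse
      FROM so_reuse
     WHERE area_code = '309';

    IF v_reuse IS NOT NULL THEN
        -- Step 4: clear the recycled number and use it.
        DELETE FROM so_reuse WHERE reuse_id = v_reuse;
        INSERT INTO so (so_id) VALUES (v_reuse);
    ELSE
        -- Step 4: consume the next number and increment the counter.
        UPDATE so_next_number
           SET next_id = next_id + 1
         WHERE area_code = '309';
        INSERT INTO so (so_id) VALUES (v_next);
    END IF;

    -- Step 5: committing releases the lock.
    COMMIT;
END;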
Ignoring the issues about concurrency, the following should give a decent start.
If 'traffic' on the table is low enough, go with locking the table in exclusive mode for the duration of the transaction.
create table blah (soc_id number(6));
insert into blah select 309000 + rownum from user_tables;
delete from blah where soc_id = 309003;
commit;

create or replace function get_next (i_soc in number) return number is
    v_min number := i_soc * 1000;
    v_max number := v_min + 999;
    v_gap number;
begin
    -- Serialize callers until the calling transaction ends.
    lock table blah in exclusive mode;
    -- Generate candidate sequence numbers 1..999 and subtract the ones
    -- already in use; the smallest survivor is the first free slot.
    select min(rn)
      into v_gap
      from (select level rn from dual connect by level <= 999
            minus
            select to_number(substr(soc_id, 4))
              from blah
             where soc_id between v_min and v_max);
    return v_min + v_gap;
end;
