Syntax Error In Oracle Function - oracle

I'm trying to make a function that does a simple insert into a table called poli. The purpose of this function:
it returns 1 when it inserts the values into the table;
in any other case it returns 0.
This is the code in Oracle that I wrote:
CREATE OR REPLACE FUNCTION ADDPOLI
( ID IN NUMBER, NAME IN VARCHAR2 , LON IN FLOAT , LAT IN FLOAT , STATUS OUT NUMBER )
return status
IS cursor poli_count is select count(id) from poli;
BEGIN
declare number_of_cities int;
fetch poli_c into number_of_cities;
if number_of_cities<= 15 and number_of_cities>=0 then
insert into poli values(id,name,lat,lon);
return 1;
else
return 0;
end if;
END ADDPOLI;
I have a syntax error here: fetch poli_c into number_of_cities;
How can I fix it?

Why are you using a cursor to achieve this? (The immediate problems in your version: the cursor is declared as poli_count but fetched as poli_c, the variable declaration belongs in the declaration section before BEGIN, the cursor is never opened, and the RETURN clause must name a datatype such as NUMBER rather than a parameter.) A plain SELECT ... INTO is all you need - try the below:
CREATE OR REPLACE FUNCTION ADDPOLI
( ID IN NUMBER, NAME IN VARCHAR2, LON IN FLOAT, LAT IN FLOAT )
RETURN NUMBER
IS
  number_of_cities INT;
BEGIN
  SELECT COUNT(id) INTO number_of_cities FROM poli;
  IF number_of_cities BETWEEN 0 AND 15 THEN
    INSERT INTO poli VALUES (ID, NAME, LAT, LON);
    RETURN 1;
  ELSE
    RETURN 0;
  END IF;
END;
/
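To give it a quick try (the sample values below are made up; DBMS_OUTPUT just shows the return code):
SET SERVEROUTPUT ON
DECLARE
  v_result NUMBER;
BEGIN
  -- hypothetical sample row; adjust the values to match your poli columns
  v_result := ADDPOLI(1, 'Athens', 23.72, 37.98);
  DBMS_OUTPUT.PUT_LINE('ADDPOLI returned ' || v_result);
END;
/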

There is something of more fundamental concern here: what happens when you deploy this function in a multi-user environment (which most databases typically run in)?
The logic of:
"Do I have less than 15 cities?"
"Yes, insert another row"
is more complex than it first appears, because if I have 10 sessions all currently running this function, you can end up with the following scenario:
I start with say 13 rows. Then this happens:
Session 1: Is there less than 15? Yes, do the insert.
Session 2: Is there less than 15? Yes, do the insert.
Session 3: Is there less than 15? Yes, do the insert.
Session 4: Is there less than 15? Yes, do the insert.
Session 5: Is there less than 15? Yes, do the insert.
...
and now Session 1 commits, and so forth for Session 2, 3, ....
And hence voila! You now have 18 rows in your table and everyone is befuddled as to how this happened.
Ultimately, what you are after is a means of enforcing a rule about the data ("max of 15 rows in table X"). There is a lengthy discussion about the complexities of doing that over at AskTOM
https://asktom.oracle.com/pls/asktom/asktom.search?tag=declarative-integrity
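For completeness, one blunt way to make the check safe is to serialize callers before counting, for example by locking the table. This is only a sketch (it trades away concurrency, and the lock is held until the caller commits or rolls back):
CREATE OR REPLACE FUNCTION ADDPOLI
( ID IN NUMBER, NAME IN VARCHAR2, LON IN FLOAT, LAT IN FLOAT )
RETURN NUMBER
IS
  number_of_cities INT;
BEGIN
  -- serialize concurrent callers so the count cannot change underneath us
  LOCK TABLE poli IN EXCLUSIVE MODE;
  SELECT COUNT(*) INTO number_of_cities FROM poli;
  IF number_of_cities < 15 THEN  -- strictly less than 15 so the table tops out at 15 rows
    INSERT INTO poli VALUES (ID, NAME, LAT, LON);
    RETURN 1;
  END IF;
  RETURN 0;
END;
/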

Related

Oracle equivalent query for this Postgres query - CONFLICT [duplicate]

The UPSERT operation either updates or inserts a row in a table, depending on whether the table already has a row that matches the data:
if table t has a row exists that has key X:
update t set mystuff... where mykey=X
else
insert into t mystuff...
Since Oracle doesn't have a specific UPSERT statement, what's the best way to do this?
The MERGE statement merges data between two tables. Using DUAL as the source lets us apply this command to a single row of supplied values. Note that this is not protected against concurrent access.
create or replace
procedure ups(xa number)
as
begin
merge into mergetest m using dual on (a = xa)
when not matched then insert (a,b) values (xa,1)
when matched then update set b = b+1;
end ups;
/
drop table mergetest;
create table mergetest(a number, b number);
call ups(10);
call ups(10);
call ups(20);
select * from mergetest;
A B
---------------------- ----------------------
10 2
20 1
The dual example above, which is in PL/SQL, was great because I wanted to do something similar, but I wanted it client side... so here is the SQL I used to send a similar statement directly from some C#:
MERGE INTO Employee USING dual ON ( "id"=2097153 )
WHEN MATCHED THEN UPDATE SET "last"='smith' , "name"='john'
WHEN NOT MATCHED THEN INSERT ("id","last","name")
VALUES ( 2097153, 'smith', 'john' )
However, from a C# perspective this proved to be slower than doing the update, checking whether the rows affected was 0, and doing the insert if it was.
An alternative to MERGE (the "old fashioned way"):
begin
insert into t (mykey, mystuff)
values ('X', 123);
exception
when dup_val_on_index then
update t
set mystuff = 123
where mykey = 'X';
end;
Another alternative without the exception check:
UPDATE tablename
SET val1 = in_val1,
val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%rowcount = 0 )
THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
Insert if not exists, then update:
INSERT INTO mytable (id1, t1)
SELECT 11, 'x1' FROM DUAL
WHERE NOT EXISTS (SELECT id1 FROM mytable WHERE id1 = 11);
UPDATE mytable SET t1 = 'x1' WHERE id1 = 11;
None of the answers given so far is safe in the face of concurrent accesses, as pointed out in Tim Sylvester's comment, and will raise exceptions in case of races. To fix that, the insert/update combo must be wrapped in some kind of loop statement, so that in case of an exception the whole thing is retried.
As an example, here's how Grommit's code can be wrapped in a loop to make it safe when run concurrently:
PROCEDURE MyProc (
...
) IS
BEGIN
LOOP
BEGIN
MERGE INTO Employee USING dual ON ( "id"=2097153 )
WHEN MATCHED THEN UPDATE SET "last"='smith' , "name"='john'
WHEN NOT MATCHED THEN INSERT ("id","last","name")
VALUES ( 2097153, 'smith', 'john' );
EXIT; -- success? -> exit loop
EXCEPTION
WHEN NO_DATA_FOUND THEN -- the entry was concurrently deleted
NULL; -- exception? -> no op, i.e. continue looping
WHEN DUP_VAL_ON_INDEX THEN -- an entry was concurrently inserted
NULL; -- exception? -> no op, i.e. continue looping
END;
END LOOP;
END;
N.B. In transaction mode SERIALIZABLE, which I don't recommend btw, you might run into
ORA-08177: can't serialize access for this transaction exceptions instead.
I like Grommit's answer, except it requires duplicate values. I found a solution where each may appear once: http://forums.devshed.com/showpost.php?p=1182653&postcount=2
MERGE INTO KBS.NUFUS_MUHTARLIK B
USING (
SELECT '028-01' CILT, '25' SAYFA, '6' KUTUK, '46603404838' MERNIS_NO
FROM DUAL
) E
ON (B.MERNIS_NO = E.MERNIS_NO)
WHEN MATCHED THEN
UPDATE SET B.CILT = E.CILT, B.SAYFA = E.SAYFA, B.KUTUK = E.KUTUK
WHEN NOT MATCHED THEN
INSERT ( CILT, SAYFA, KUTUK, MERNIS_NO)
VALUES (E.CILT, E.SAYFA, E.KUTUK, E.MERNIS_NO);
I've been using the first code sample for years. Notice notfound rather than count.
UPDATE tablename SET val1 = in_val1, val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%notfound ) THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
The code below is possibly the new and improved version:
MERGE INTO tablename USING dual ON ( val3 = in_val3 )
WHEN MATCHED THEN UPDATE SET val1 = in_val1, val2 = in_val2
WHEN NOT MATCHED THEN INSERT
VALUES (in_val1, in_val2, in_val3)
In the first example the update does an index lookup. It has to, in order to update the right row. Oracle opens an implicit cursor, and we use it to wrap a corresponding insert so we know that the insert will only happen when the key does not exist. But the insert is an independent command and it has to do a second lookup. I don't know the inner workings of the merge command but since the command is a single unit, Oracle could execute the correct insert or update with a single index lookup.
I think merge is better when you do have some processing to be done, meaning taking data from some tables and updating another table, possibly inserting or deleting rows. But for the single-row case, you may consider the first approach, since its syntax is more common.
A note regarding the two solutions that suggest:
1) Insert, if exception then update,
or
2) Update, if sql%rowcount = 0 then insert
The question of whether to insert or update first is also application dependent. Are you expecting more inserts or more updates? The one that is most likely to succeed should go first.
If you pick the wrong one you will get a bunch of unnecessary index reads. Not a huge deal but still something to consider.
Try this,
insert into b_building_property (
select
'AREA_IN_COMMON_USE_DOUBLE','Area in Common Use','DOUBLE', null, 9000, 9
from dual
)
minus
(
select * from b_building_property where id = 9
)
;
From http://www.praetoriate.com/oracle_tips_upserts.htm:
"In Oracle9i, an UPSERT can accomplish this task in a single statement:"
INSERT FIRST
  WHEN credit_limit >= 100000 THEN
    INTO rich_customers VALUES (cust_id, cust_credit_limit)
    INTO customers
  ELSE
    INTO customers
SELECT * FROM new_customers;

Converting function from Oracle PL/SQL to MS SQL Server 2008

I have several Oracle functions that are similar to the one below. I don't know much about Oracle, and although I have made inroads on a major query rewrite, I'd like to ask for some help on how to convert this function to SQL Server 2008.
I have tried using the online conversion tool at www.sqlines.com and benefited from many pages there... but was not successful in converting this function.
Thanks in advance, John
Oracle source:
function OfficeIDMainPhoneID(p_ID t_OfficeID)
return t_OfficePhoneID
is
wPhID t_OfficePhoneID;
wPhID1 t_OfficePhoneID;
cursor cr_phone
is
select Office_PHONE_ID,IS_PHONE_PRIMARY
from Office_PHONE
where Office_ID = p_ID
order by SEQ_NUMBER;
begin
wPhID :=NULL;
wPhID1:=NULL;
for wp in cr_phone
loop
if wPhID is NULL
then wPhID1:=wp.Office_PHONE_ID;
end if;
if wp.IS_PHONE_PRIMARY = 'Y'
then
wPhID:=wp.Office_PHONE_ID;
Exit;
end if;
end loop;
if wPhID is NULL
then wPhID:=wPhID1;
end if;
return(wPhID);
end OfficeIDMainPhoneID;
SQL Server attempt:
create function OfficeIDMainPhoneID(@p_ID t_OfficeID)
returns t_OfficePhoneID
as
begin
declare @wPhID t_OfficePhoneID;
declare @wPhID1 t_OfficePhoneID;
declare cr_phone cursor local
for
select Office_PHONE_ID,IS_PHONE_PRIMARY
from Office_PHONE
where Office_ID = @p_ID
order by SEQ_NUMBER;
set @wPhID =NULL;
set @wPhID1=NULL;
declare wp cursor for cr_phone
open wp;
fetch wp into;
while @@fetch_status=0
begin
if @wPhID is NULL
begin set @wPhID1=wp.Office_PHONE_ID;
end
if wp.IS_PHONE_PRIMARY = 'Y'
begin
set @wPhID=wp.Office_PHONE_ID;
Exit;
end
fetch wp into;
end;
close wp;
deallocate wp;
if @wPhID is NULL
begin set @wPhID=@wPhID1;
end
return(@wPhID);
end ;
To answer the question about the function as written:
If you just want to fix the cursor so it works, one problem is the two "fetch wp into;" statements. You are saying "fetch the data and put it into" and then not giving it anything to put it into. Declare a couple of variables, put the data into them, and then later use the variables rather than the wp.column references. You need one variable per item returned in your cursor definition, so one each for Office_PHONE_ID and IS_PHONE_PRIMARY.
Also, you are trying to declare variables (and the function) as t_OfficePhoneID; I suspect that should be something like INT or BIGINT instead (whatever the table definition for the column is).
Declare @OP_ID INT, @ISPRIMARY CHAR(1) -- or whatever the column types are
Later (in two locations):
fetch wp into @OP_ID, @ISPRIMARY;
then use @OP_ID instead of wp.Office_PHONE_ID, and so on.
HOWEVER, I would throw away all the code in the function after declaring @wPhID, and do something else. Cursors suck if you can get what you want with a simple set-based request. If you work your way through the Oracle code, it is doing the following:
Get the id of the first phone number marked primary (in sequence order). If it didn't find one of those, just get the id of the first non-primary phone number in sequence order. You can do that with the following:
set @wPhID = (select TOP 1 Office_PHONE_ID
from Office_PHONE
where Office_ID = @p_ID
order by CASE WHEN IS_PHONE_PRIMARY = 'Y' THEN 0 ELSE 1 END, SEQ_NUMBER);
Return @wPhID and you're done.
I used "CASE WHEN IS_PHONE_PRIMARY = 'Y' THEN 0 ELSE 1 END" in the order by because I don't know what other values are possible, so this will always work. If you know the only possible values are 'Y' and 'N', you could use something like the following instead
order by IS_PHONE_PRIMARY DESC, SEQ_NUMBER;
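Putting that together, the whole function collapses to something like the following sketch (assuming Office_PHONE_ID and Office_ID are INTs; swap in the real column types):
CREATE FUNCTION OfficeIDMainPhoneID (@p_ID INT)
RETURNS INT
AS
BEGIN
    DECLARE @wPhID INT;

    -- primary phones sort first; SEQ_NUMBER breaks ties
    SELECT TOP 1 @wPhID = Office_PHONE_ID
    FROM Office_PHONE
    WHERE Office_ID = @p_ID
    ORDER BY CASE WHEN IS_PHONE_PRIMARY = 'Y' THEN 0 ELSE 1 END, SEQ_NUMBER;

    RETURN @wPhID;
END;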

SQL%FOUND where SELECT query returns no rows

I have the following function, which returns the next available client ID from the Client table:
CREATE OR REPLACE FUNCTION getNextClientID RETURN INT AS
ctr INT;
BEGIN
SELECT MAX(NUM) INTO ctr FROM Client;
IF SQL%NOTFOUND THEN
RETURN 1;
ELSIF SQL%FOUND THEN
-- RETURN SQL%ROWCOUNT;
RAISE_APPLICATION_ERROR(-20010, 'ROWS FOUND!');
-- RETURN ctr + 1;
END IF;
END;
But when calling this function,
BEGIN
DBMS_OUTPUT.PUT_LINE(getNextClientID());
END;
I get the ORA-20010 'ROWS FOUND!' error as a result,
which I found a bit odd, since the Client table contains no data.
Also, if I comment out RAISE_APPLICATION_ERROR(-20010, 'ROWS FOUND!'); and log the value of SQL%ROWCOUNT to the console, I get 1 as a result.
On the other hand, when changing
SELECT MAX(NUM) INTO ctr FROM Client;
to
SELECT NUM INTO ctr FROM Client;
The execution went as expected. What is the reason behind this behavior?
Aggregate functions will always return a result:
All aggregate functions except COUNT(*), GROUPING, and GROUPING_ID
ignore nulls. You can use the NVL function in the argument to an
aggregate function to substitute a value for a null. COUNT and
REGR_COUNT never return null, but return either a number or zero. For
all the remaining aggregate functions, if the data set contains no
rows, or contains only rows with nulls as arguments to the aggregate
function, then the function returns null.
You can change your query to:
SELECT COALESCE(MAX(num), 1) INTO ctr FROM Client;
and remove the conditionals altogether. Be careful about concurrency issues though if you do not use SELECT FOR UPDATE.
A query with an aggregate function and without a GROUP BY clause always returns one row. If you want a no_data_found exception on an empty table, add a GROUP BY clause or remove the MAX:
SQL> create table t (id number, client_id number);
Table created.
SQL> select nvl(max(id), 0) from t;
NVL(MAX(ID),0)
--------------
0
SQL> select nvl(max(id), 0) from t group by client_id;
no rows selected
Usually queries like yours (with max and without group by) are used to avoid no_data_found.
Aggregate functions like MAX will always return a row. It will return one row with a null value if no row is found.
By the way, SELECT NUM INTO ctr FROM Client; will raise an exception when there's more than one row in the table.
You should instead check whether or not ctr is null.
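In other words, something along these lines (a sketch of the null check, keeping the rest of the original function as it is):
SELECT MAX(NUM) INTO ctr FROM Client;
IF ctr IS NULL THEN
  RETURN 1;        -- empty table: start numbering at 1
ELSE
  RETURN ctr + 1;  -- otherwise hand back the next value
END IF;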
Others have already explained the reason why your code isn't "working", so I'm not going to be doing that.
You seem to be instituting an identity column of some description yourself, probably in order to support a surrogate key. Doing this yourself is dangerous and could cause large issues in your application.
You don't need to implement identity columns yourself. From Oracle 12c onwards, Oracle has native support for identity columns; these are implemented using sequences, which are available in 12c and previous versions.
A sequence is a database object that is guaranteed to provide a new, unique, number when called, no matter the number of concurrent sessions requesting values. Your current approach is extremely vulnerable to collision when used by multiple sessions. Imagine 2 sessions simultaneously finding the largest value in the table; they then both add one to this value and try to write this new value back. Only one can be correct.
See How to create id with AUTO_INCREMENT on Oracle?
Basically, if you use a sequence then you don't need any of this code.
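For illustration, a minimal sketch (the sequence name and the name column are made up for the example):
CREATE SEQUENCE client_seq START WITH 1 INCREMENT BY 1;

-- every session gets its own unique value; no MAX(NUM) + 1 race
INSERT INTO Client (num, name)
VALUES (client_seq.NEXTVAL, 'some client');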
As a secondary note your statement at the top is incorrect:
I have the following function, which returns the next available client ID from the Client table
Your function returns the maximum ID + 1. If there's a gap in the IDs, i.e. 1, 2, 3, 5 then the "missing" number (4 in this case) will not be returned. A gap can occur for any number of reasons (deletion of a row for example) and does not have a negative impact on your database in any way at all - don't worry about them.

Get count of ref cursor in Oracle

I have a procedure which returns a ref cursor as an output parameter. I need to find a way to get the count of the number of records in the cursor. Currently I fetch the count by repeating the same select query, which is hindering performance.
ex:
create or replace package temp as
TYPE metacur IS REF CURSOR;
PROCEDURE prcSumm (
pStartDate IN DATE,
pEndDate IN DATE,
pKey IN NUMBER,
pCursor OUT metacur
) ;
end temp;
/
create or replace package body temp is
procedure prcSumm(
pStartDate IN DATE,
pEndDate IN DATE,
pKey IN NUMBER,
pCursor OUT metacur
)
IS
vCount NUMBER;
BEGIN
vCount := 0;
select count(*) into vCount
from customer c, program p, custprog cp
where c.custno = cp.custno
and cp.programid = p.programid
and p.programid = pKey
and c.lastupdate >= pStartDate
and c.lastupdate < pEndDate;
OPEN pCursor for SELECT
c.custno, p.programid, c.fname, c.lname, c.address1, c.address2, cp.plan
from customer c, program p, custprog cp
where c.custno = cp.custno
and cp.programid = p.programid
and p.programid = pKey
and c.lastupdate >= pStartDate
and c.lastupdate < pEndDate;
end prcSumm;
end temp;
/
Is there a way to get the number of rows in the out cursor into vCount?
Thanks!
Oracle does not, in general, know how many rows will be fetched from a cursor until the last fetch finds no more rows to return. Since Oracle doesn't know how many rows will be returned, you can't either without fetching all the rows (as you're doing here when you re-run the query).
Unless you are using a single-user system or you are using a non-default transaction isolation level (which would introduce additional complications), there is no guarantee that the number of rows that your cursor will return and the count(*) the second query returns would match. It is entirely possible that another session committed a change between the time that you opened the cursor and the time that you ran the count(*).
If you are really determined to produce an accurate count, you could add a cnt column defined as count(*) over () to the query you're using to open the cursor. Every row in the cursor would then have a column cnt which would tell you the total number of rows that will be returned. Oracle has to do more work to generate the cnt but it's less work than running the same query twice.
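Applied to the cursor in the question, that looks something like the following (cnt is just an arbitrary alias):
OPEN pCursor for SELECT
c.custno, p.programid, c.fname, c.lname, c.address1, c.address2, cp.plan,
count(*) over () as cnt -- total number of rows in the result set, repeated on every row
from customer c, program p, custprog cp
where c.custno = cp.custno
and cp.programid = p.programid
and p.programid = pKey
and c.lastupdate >= pStartDate
and c.lastupdate < pEndDate;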
Architecturally, though, it doesn't make sense to return a result and a count from the same piece of code. Determining the count is something that the caller should be responsible for since the caller has to be able to iterate through the results. Every caller should be able to handle the obvious boundary cases (i.e. the query returns 0 rows) without needing a separate count. And every caller should be able to iterate through the results without needing to know how many results there will be. Every single time I've seen someone try to follow the pattern of returning a cursor and a count, the correct answer has been to redesign the procedure and fix whatever error on the caller prompted the design.

Oracle indexes "breaking"

I am working on a data warehousing project and have therefore been implementing some ETL functions in packages. I first encountered the problem on my development laptop and thought it had something to do with my Oracle installation, but now it has "spread" to the production servers.
Two functions "sometimes" become incredibly slow. We have implemented a logging system, giving us output to a logging table every x rows. Where the function usually needs about 10 seconds per chunk, "sometimes" it needs up to 3 minutes. After rebuilding some indexes and restarting the function, it is as quick again as it used to be.
Unfortunately, I can't tell which index it is exactly, since restarting the function and building up the cursor it uses for its work takes some time and we do not have the time to check each index on its own, so I just rebuild all indexes that are potentially used by the function and restart it.
The functions that have the problem use a cursor to select data from a table with about 50 million to 200 million entries, joined to a small table with about 50-500 entries. The join condition is a string comparison. We then use the primary key from the small table, obtained via the join, to update a foreign key on the main table. The update is done by a FORALL loop, which has proven to save loads of time.
Here is a simplified version of the table structure of both tables:
CREATE TABLE "maintable"
( "pkmid" NUMBER(11,0) NOT NULL ENABLE,
"fkid" NUMBER(11,0),
"fkstring" NVARCHAR2(4) NOT NULL ENABLE,
CONSTRAINT "PK_MAINTABLE" PRIMARY KEY ("pkmid");
CREATE TABLE "smalltable"
( "pksid" NUMBER(11,0) NOT NULL ENABLE,
"pkstring" NVARCHAR2(4) NOT NULL ENABLE,
CONSTRAINT "PK_SMALLTABLE" PRIMARY KEY ("pksid");
Both tables have indexes on their string columns. Adding the primary keys, I therefore rebuild 4 indexes each time the problem happens.
We get our data in such a way that only the fkstring in maintable is available and fkid is set to NULL. In a first step, we populate the small table. This only takes minutes and is done the following way:
INSERT INTO smalltable (pksid, pkstring)
SELECT SEQ_SMALLTABLE.NEXTVAL, fkstring
FROM
(
SELECT DISTINCT mt.fkstring
FROM maintable mt
MINUS
SELECT st.pkstring
FROM smalltable st
);
commit;
This function never causes any trouble.
The following function does (it is a simplified version of the function - I have removed logging and exception handling and renamed some variables):
function f_set_fkid return varchar2 is
cursor lCursor_MAINTABLE is
SELECT MT.PKmID, st.pksid
FROM maintable mt
JOIN smalltable st ON (mt.fkstring = st.pkstring)
WHERE mt.fkid IS NULL;
lIndex number := 0;
lExitLoop boolean := false;
type lCursorType is table of lCursor_MAINTABLE%rowtype index by pls_integer;
lCurrentRow lCursor_MAINTABLE%rowtype;
lTempDataArray lCursorType;
lCommitEvery constant number := 1000;
begin
open lCursor_MAINTABLE;
loop
-- get next row, set exit condition
fetch lCursor_MAINTABLE into lCurrentRow;
if (lCursor_MAINTABLE%notfound) then
lExitLoop := true;
end if;
-- in case of cache being full, flush cache
if ((lTempDataArray.count > 0) AND (lIndex >= lCommitEvery OR lExitLoop)) then
forall lIndex2 in lTempDataArray.FIRST..lTempDataArray.LAST
UPDATE maintable mt
set fkid = lTempDataArray(lIndex2).pksid
WHERE mt.pkmid = lTempDataArray(lIndex2).pkmid;
commit;
lTempDataArray.delete;
lIndex := 0;
end if;
-- data handling, fill cache
if (lExitLoop = false) then
lIndex := lIndex + 1;
lTempDataArray(lIndex) := lCurrentRow;
end if;
exit when lExitLoop;
end loop;
close lCursor_MAINTABLE;
return null;
end;
I would be very thankful for any help.
P.S. I do know that BULK COLLECT INTO would speed up the function and probably simplify the code a bit too, but at the moment we are content with the speed the function usually has. Changing the function to use BULK COLLECT is on our plan for next year, but at the moment it is not an option (and I doubt it would solve this index problem).
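For reference, the BULK COLLECT variant mentioned above would look roughly like this (a sketch only, reusing the cursor, collection type and constant from the function above):
open lCursor_MAINTABLE;
loop
fetch lCursor_MAINTABLE bulk collect into lTempDataArray limit lCommitEvery;
exit when lTempDataArray.count = 0;
forall lIndex2 in 1..lTempDataArray.count
UPDATE maintable mt
set fkid = lTempDataArray(lIndex2).pksid
WHERE mt.pkmid = lTempDataArray(lIndex2).pkmid;
commit;
end loop;
close lCursor_MAINTABLE;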
If you have a table where the number of rows fluctuates wildly (as it does during ETL loads), I would use the statistics of the fully loaded table throughout the load process.
So, generate statistics when your table is fully loaded and then use those statistics for subsequent loads.
If you use statistics from when the table is half-loaded the optimizer may be tricked into not using indexes or not using the fastest index. This is especially true if data is loaded in order so that low value, high value and density are skewed.
In your case, the statistics for columns fkstring and fkid are extra important since those two columns are heavily involved in the procedure that has performance issues.
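One way to do that (a sketch; the lowercase table name matches the quoted identifiers in the question's DDL, and cascade => TRUE gathers the index statistics as well):
BEGIN
  -- gather statistics once, with the table fully loaded ...
  DBMS_STATS.GATHER_TABLE_STATS(ownname => USER, tabname => 'maintable', cascade => TRUE);
  -- ... then lock them so later loads keep using these statistics
  DBMS_STATS.LOCK_TABLE_STATS(ownname => USER, tabname => 'maintable');
END;
/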
function f_set_fkid return varchar2 is
cursor lCursor_MAINTABLE is
SELECT MT.PKmID, st.pksid
FROM maintable mt
JOIN smalltable st ON (mt.fkstring = st.pkstring)
WHERE mt.fkid IS NULL;
commit_every INTEGER := 1000000;
commit_counter INTEGER :=0;
begin
for c in lCursor_MAINTABLE
loop
UPDATE maintable mt
set fkid = c.pksid
WHERE mt.pkmid = c.pkmid;
commit_counter := commit_counter+1;
if mod(commit_counter, commit_every) = 0
then
commit;
commit_counter := 0;
end if;
end loop;
return null;
end;
