Reordering some items and changing the primary key - Oracle

I've a table MENU_ITEMS with a column SEQ to define the position at which each item should be displayed in a menu. Simplified for the sake of this question, something like...
MENU_ITEMS:

ITEM_CODE   PARENT_CODE   SEQ* (*unique to parent)
---------------------------------------------------
BANANAS     FRUIT         1
APPLES      FRUIT         2
CHERRIES    FRUIT         4
DURIANS     FRUIT         3
POTATOES    VEG           1
I've a procedure to re-order the items. One of its parameters is a string array containing the ITEM_CODEs in the new order. As it stands, something like this...
procedure arrange_items(
  ITEM_CODES in CORE.T_STRINGARRAY,
  PARENT     in VARCHAR2
)
as
  V_SEQS T_NUMBERARRAY := T_NUMBERARRAY();
begin
  for i in 1..ITEM_CODES.count loop
    V_SEQS.extend();
    V_SEQS(i) := i;
  end loop;

  forall i in 1..ITEM_CODES.count
    update MENU_ITEMS
    set SEQ = V_SEQS(i)
    where ITEM_CODE = ITEM_CODES(i) and PARENT_CODE = PARENT;
end arrange_items;
PROBLEM
When calling the procedure with ITEM_CODES = ("APPLES", "BANANAS", "CHERRIES"), I get a 'unique constraint violated' exception (...presumably because the first item's SEQ can't be set to 1 while there exists another row whose SEQ is also 1).
I don't know how to handle items that weren't specified in the ITEM_CODES parameter, whose sequence is irrelevant (they are disabled menu items and never displayed). NB: Only the items to be re-arranged can be included in the ITEM_CODES array parameter; disabled items not requiring re-arranging can't be passed in.
Reading the Oracle docs seems to suggest a BULK COLLECT/UPDATE is needed to change all the records in one hit without violating the unique constraint (I had thought FORALL instead of FOR LOOP might do this), but I have extremely limited SQL knowledge and can't get it right. Nor can I see a way of taking into account the unspecified ITEM_CODEs whose sequence might need changing just to avoid conflicting with the codes that were specified.
Is anyone able to suggest at least the correct approach I should investigate, if not a working solution?

You can't do it without changing your procedure unless you disable the constraints.
There seems to be nothing wrong with your code, although you could have used simple update statements rather than FORALL. The problem is that when you are updating the column SEQ, a unique entry already exists for the same PARENT_CODE and SEQ. You are only reordering rows while running your update statement, so the constraint will be violated no matter how you change the order of updates.
So, the solution I recommend is to disable the constraint. If you don't know the constraint name, look it up in the USER_CONS_COLUMNS or ALL_CONS_COLUMNS data dictionary view, filtering on the appropriate column names. Refer to this link if you need it:
Constraint name
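For example, a lookup like the following (assuming the unique constraint covers PARENT_CODE and SEQ) would show its name:
select constraint_name, column_name
from user_cons_columns
where table_name = 'MENU_ITEMS'
and column_name in ('SEQ', 'PARENT_CODE');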
ALTER TABLE MENU_ITEMS DISABLE CONSTRAINT name_of_unique_constraint;
Then execute your procedure and make sure you don't have any duplicate (SEQ, PARENT_CODE) combination after the update.
Then enable constraints.
ALTER TABLE MENU_ITEMS ENABLE CONSTRAINT name_of_unique_constraint;
and make sure it does not fail.
EDIT: It seems the word "can't" I used at the top was taken as a challenge by others, and alternate solutions were provided, which is much appreciated. I have slightly changed my first sentence to make it clearer.

You can't do this directly with FORALL. Agreed with Kaushik: disable the constraint.
Solution without disabling the constraint: add MAX(SEQ) + 1 to every row, then do your thing:
create or replace procedure arrange_items(
  ITEM_CODES in T_STRINGARRAY,
  PARENT     in VARCHAR2
) as
  V_SEQS T_NUMBERARRAY := T_NUMBERARRAY();
  mx     number;
begin
  -- calculate the current maximum SEQ
  select max(seq) into mx from MENU_ITEMS;

  -- shift every row of this parent above that maximum,
  -- so the renumbering below cannot collide with existing values
  update MENU_ITEMS
  set SEQ = SEQ + mx + 1
  where PARENT_CODE = PARENT;

  -- then your original logic works
  for i in 1..ITEM_CODES.count loop
    V_SEQS.extend();
    V_SEQS(i) := i;
  end loop;

  forall i in 1..ITEM_CODES.count
    update MENU_ITEMS
    set SEQ = V_SEQS(i)
    where ITEM_CODE = ITEM_CODES(i) and PARENT_CODE = PARENT;
end arrange_items;
/
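A call matching the question's example might then look like this (assuming T_STRINGARRAY is a SQL collection type with a constructor):
begin
  arrange_items(T_STRINGARRAY('APPLES', 'BANANAS', 'CHERRIES'), 'FRUIT');
end;
/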

Related

Oracle equivalent query for this Postgres query - CONFLICT [duplicate]

The UPSERT operation either updates or inserts a row in a table, depending on whether the table already has a row that matches the data:
if a row exists in table t with key X:
    update t set mystuff... where mykey=X
else:
    insert into t mystuff...
Since Oracle doesn't have a specific UPSERT statement, what's the best way to do this?
The MERGE statement merges data between two tables. Using DUAL as the source allows us to use this command with a single row of values. Note that this is not protected against concurrent access.
create or replace procedure ups(xa number)
as
begin
  merge into mergetest m using dual on (a = xa)
    when not matched then insert (a, b) values (xa, 1)
    when matched then update set b = b + 1;
end ups;
/
drop table mergetest;
create table mergetest(a number, b number);
call ups(10);
call ups(10);
call ups(20);
select * from mergetest;
         A          B
---------- ----------
        10          2
        20          1
The DUAL example above, which is in PL/SQL, was great because I wanted to do something similar, but I wanted it client side... so here is the SQL I used to send a similar statement directly from some C#:
MERGE INTO Employee USING dual ON ( "id" = 2097153 )
WHEN MATCHED THEN UPDATE SET "last" = 'smith', "name" = 'john'
WHEN NOT MATCHED THEN INSERT ("id", "last", "name")
VALUES ( 2097153, 'smith', 'john' )
However, from a C# perspective this proved to be slower than doing the update, checking whether the number of rows affected was 0, and doing the insert if it was.
An alternative to MERGE (the "old fashioned way"):
begin
insert into t (mykey, mystuff)
values ('X', 123);
exception
when dup_val_on_index then
update t
set mystuff = 123
where mykey = 'X';
end;
Another alternative without the exception check:
UPDATE tablename
SET val1 = in_val1,
val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%rowcount = 0 )
THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
Insert if not exists, then update:
INSERT INTO mytable (id1, t1)
SELECT 11, 'x1' FROM DUAL
WHERE NOT EXISTS (SELECT id1 FROM mytable WHERE id1 = 11);
UPDATE mytable SET t1 = 'x1' WHERE id1 = 11;
None of the answers given so far is safe in the face of concurrent access, as pointed out in Tim Sylvester's comment; they will all raise exceptions in case of races. To fix that, the insert/update combo must be wrapped in some kind of loop statement, so that in case of an exception the whole thing is retried.
As an example, here's how Grommit's code can be wrapped in a loop to make it safe when run concurrently:
PROCEDURE MyProc (
  ...
) IS
BEGIN
  LOOP
    BEGIN
      MERGE INTO Employee USING dual ON ( "id" = 2097153 )
      WHEN MATCHED THEN UPDATE SET "last" = 'smith', "name" = 'john'
      WHEN NOT MATCHED THEN INSERT ("id", "last", "name")
      VALUES ( 2097153, 'smith', 'john' );
      EXIT; -- success? -> exit loop
    EXCEPTION
      WHEN NO_DATA_FOUND THEN -- the entry was concurrently deleted
        NULL; -- exception? -> no op, i.e. continue looping
      WHEN DUP_VAL_ON_INDEX THEN -- an entry was concurrently inserted
        NULL; -- exception? -> no op, i.e. continue looping
    END;
  END LOOP;
END;
N.B. In SERIALIZABLE transaction isolation mode, which I don't recommend btw, you might run into "ORA-08177: can't serialize access for this transaction" exceptions instead.
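If you do run under SERIALIZABLE, the same retry loop can also catch that error by mapping ORA-08177 to a named exception with PRAGMA EXCEPTION_INIT; a sketch reusing the MERGE above (the exception name is arbitrary):
DECLARE
  cannot_serialize EXCEPTION;
  PRAGMA EXCEPTION_INIT(cannot_serialize, -8177);  -- ORA-08177
BEGIN
  LOOP
    BEGIN
      MERGE INTO Employee USING dual ON ( "id" = 2097153 )
      WHEN MATCHED THEN UPDATE SET "last" = 'smith', "name" = 'john'
      WHEN NOT MATCHED THEN INSERT ("id", "last", "name")
      VALUES ( 2097153, 'smith', 'john' );
      EXIT;  -- success? -> exit loop
    EXCEPTION
      WHEN cannot_serialize THEN
        ROLLBACK;  -- abandon the failed serializable attempt, then retry
      WHEN DUP_VAL_ON_INDEX THEN
        NULL;      -- concurrent insert -> retry
      WHEN NO_DATA_FOUND THEN
        NULL;      -- concurrent delete -> retry
    END;
  END LOOP;
END;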
I like Grommit's answer, except it requires duplicated values. I found a solution where each value may appear once: http://forums.devshed.com/showpost.php?p=1182653&postcount=2
MERGE INTO KBS.NUFUS_MUHTARLIK B
USING (
SELECT '028-01' CILT, '25' SAYFA, '6' KUTUK, '46603404838' MERNIS_NO
FROM DUAL
) E
ON (B.MERNIS_NO = E.MERNIS_NO)
WHEN MATCHED THEN
UPDATE SET B.CILT = E.CILT, B.SAYFA = E.SAYFA, B.KUTUK = E.KUTUK
WHEN NOT MATCHED THEN
INSERT ( CILT, SAYFA, KUTUK, MERNIS_NO)
VALUES (E.CILT, E.SAYFA, E.KUTUK, E.MERNIS_NO);
I've been using the first code sample for years. Notice sql%notfound rather than sql%rowcount.
UPDATE tablename SET val1 = in_val1, val2 = in_val2
WHERE val3 = in_val3;
IF ( sql%notfound ) THEN
INSERT INTO tablename
VALUES (in_val1, in_val2, in_val3);
END IF;
The code below is possibly the new and improved version:
MERGE INTO tablename USING dual ON ( val3 = in_val3 )
WHEN MATCHED THEN UPDATE SET val1 = in_val1, val2 = in_val2
WHEN NOT MATCHED THEN INSERT
VALUES (in_val1, in_val2, in_val3)
In the first example the update does an index lookup. It has to, in order to update the right row. Oracle opens an implicit cursor, and we use it to wrap a corresponding insert so we know that the insert will only happen when the key does not exist. But the insert is an independent command and it has to do a second lookup. I don't know the inner workings of the merge command but since the command is a single unit, Oracle could execute the correct insert or update with a single index lookup.
I think MERGE is better when you have some processing to do that means taking data from some tables and updating a table, possibly inserting or deleting rows. But for the single-row case, you may consider the first approach, since its syntax is more common.
A note regarding the two solutions that suggest:
1) Insert, if exception then update,
or
2) Update, if sql%rowcount = 0 then insert
The question of whether to insert or update first is also application dependent. Are you expecting more inserts or more updates? The one that is most likely to succeed should go first.
If you pick the wrong one you will get a bunch of unnecessary index reads. Not a huge deal but still something to consider.
Try this,
insert into b_building_property (
select
'AREA_IN_COMMON_USE_DOUBLE','Area in Common Use','DOUBLE', null, 9000, 9
from dual
)
minus
(
select * from b_building_property where id = 9
)
;
From http://www.praetoriate.com/oracle_tips_upserts.htm:
"In Oracle9i, an UPSERT can accomplish this task in a single statement:"
INSERT FIRST
  WHEN credit_limit >= 100000 THEN
    INTO rich_customers VALUES (cust_id, cust_credit_limit)
    INTO customers
  ELSE
    INTO customers
SELECT * FROM new_customers;

Oracle PL/SQL Update statement looping forever - 504 Gateway Time-out

I'm trying to update a table based on another one's information:
Source_Table (Table 1) columns:
TABLE_ROW_ID (populated by a trigger/sequence on insert)
REP_ID
SOFT_ASSIGNMENT
Description (Table 2) columns:
REP_ID
NEW_SOFT_ASSIGNMENT
This is my loop statement:
SELECT count(table_row_id) INTO V_ROWS_APPROVED FROM Source_Table;

FOR i IN 1..V_ROWS_APPROVED LOOP
  SELECT REQUESTED_SOFT_MAPPING INTO V_SOFT FROM Source_Table WHERE ROW_ID = i;
  SELECT REP_ID INTO V_REP_ID FROM Source_Table WHERE ROW_ID = i;

  UPDATE Description_Table D
  SET D.NEW_SOFT_ASSIGNMENT = V_SOFT
  WHERE D.REP_ID = V_REP_ID;
END LOOP;
END;
The end result of this loop is a beautiful '504 Gateway Time-out'.
I know the issue is in the UPDATE query, but there's no other way (that I can think of) of doing it.
Can someone give me a hand please?
Thanks
Unless your row_id values are contiguous - i.e. count(row_id) == max(row_id) - then this will get a no-data-found. Sequences aren't gapless, so this seems fairly likely. We have no way of telling if that is happening and somehow that is leaving your connection hanging until it times out, or if it's just taking a long time because you're doing a lot of individual queries and updates over a large data set. (And you may be squashing any errors that do occur, though you haven't shown that.)
You don't need to query and update in a loop though, or even use PL/SQL; you can apply all the values in the source table to the description table with a single update or merge:
merge into description_table d
using source_table s
on (s.rep_id = d.rep_id)
when matched then
update set d.new_soft_assignment = s.requested_soft_mapping;
db<>fiddle with some dummy data, including a non-contiguous row_id to show that erroring.

SQL%FOUND where SELECT query returns no rows

I have the following function, which returns the next available client ID from the Client table:
CREATE OR REPLACE FUNCTION getNextClientID RETURN INT AS
ctr INT;
BEGIN
SELECT MAX(NUM) INTO ctr FROM Client;
IF SQL%NOTFOUND THEN
RETURN 1;
ELSIF SQL%FOUND THEN
-- RETURN SQL%ROWCOUNT;
RAISE_APPLICATION_ERROR(-20010, 'ROWS FOUND!');
-- RETURN ctr + 1;
END IF;
END;
But when calling this function,
BEGIN
DBMS_OUTPUT.PUT_LINE(getNextClientID());
END;
I get the 'ORA-20010: ROWS FOUND!' error, which I found a bit odd, since the Client table contains no data.
Also, if I comment out RAISE_APPLICATION_ERROR(-20010, 'ROWS FOUND!'); & log the value of SQL%ROWCOUNT to the console, I get 1 as a result.
On the other hand, when changing
SELECT MAX(NUM) INTO ctr FROM Client;
to
SELECT NUM INTO ctr FROM Client;
The execution went as expected. What is the reason behind this behavior?
Aggregate functions will always return a result:
All aggregate functions except COUNT(*), GROUPING, and GROUPING_ID ignore nulls. You can use the NVL function in the argument to an aggregate function to substitute a value for a null. COUNT and REGR_COUNT never return null, but return either a number or zero. For all the remaining aggregate functions, if the data set contains no rows, or contains only rows with nulls as arguments to the aggregate function, then the function returns null.
You can change your query to:
SELECT COALESCE(MAX(num), 1) INTO ctr FROM Client;
and remove the conditionals altogether. Be careful about concurrency issues though if you do not use SELECT FOR UPDATE.
A query with an aggregate function and no GROUP BY clause always returns exactly one row. If you want a NO_DATA_FOUND exception on an empty table, add a GROUP BY clause or remove the MAX:
SQL> create table t (id number, client_id number);
Table created.
SQL> select nvl(max(id), 0) from t;
NVL(MAX(ID),0)
--------------
0
SQL> select nvl(max(id), 0) from t group by client_id;
no rows selected
Usually queries like yours (with MAX and without GROUP BY) are used precisely to avoid NO_DATA_FOUND.
Aggregate functions like MAX will always return a row; it will be one row with a null value if no rows are found.
By the way, SELECT NUM INTO ctr FROM Client; will raise an exception when there's more than one row in the table.
You should instead check whether or not ctr is null.
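For example, the original function could be rewritten along these lines (a sketch based on the question's code):
CREATE OR REPLACE FUNCTION getNextClientID RETURN INT AS
  ctr INT;
BEGIN
  SELECT MAX(NUM) INTO ctr FROM Client;
  IF ctr IS NULL THEN
    RETURN 1;        -- empty table
  ELSE
    RETURN ctr + 1;  -- next available ID
  END IF;
END;
/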
Others have already explained the reason why your code isn't "working", so I'm not going to be doing that.
You seem to be instituting an identity column of some description yourself, probably in order to support a surrogate key. Doing this yourself is dangerous and could cause large issues in your application.
You don't need to implement identity columns yourself. From Oracle 12c onwards, Oracle has native support for identity columns; these are implemented using sequences, which are available in 12c and earlier versions.
A sequence is a database object that is guaranteed to provide a new, unique, number when called, no matter the number of concurrent sessions requesting values. Your current approach is extremely vulnerable to collision when used by multiple sessions. Imagine 2 sessions simultaneously finding the largest value in the table; they then both add one to this value and try to write this new value back. Only one can be correct.
See How to create id with AUTO_INCREMENT on Oracle?
Basically, if you use a sequence then you don't need any of this code.
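A minimal sketch of both options, using the question's CLIENT(NUM) table and an illustrative NAME column:
-- 12c and later: native identity column
create table client (
  num  number generated by default as identity primary key,
  name varchar2(100)
);
insert into client (name) values ('Acme');  -- NUM is assigned automatically

-- Pre-12c: a sequence supplies the value explicitly
create sequence client_seq;
insert into client (num, name) values (client_seq.nextval, 'Acme');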
As a secondary note your statement at the top is incorrect:
I have the following function, which returns the next available client ID from the Client table
Your function returns the maximum ID + 1. If there's a gap in the IDs, e.g. 1, 2, 3, 5, then the "missing" number (4 in this case) will not be returned. A gap can occur for any number of reasons (deletion of a row, for example) and does not have a negative impact on your database in any way at all - don't worry about them.

UPDATE on INSERT duplicate primary key in Oracle?

I have a simple INSERT query where I need to use UPDATE instead when the primary key is a duplicate. In MySQL this seems easier, in Oracle it seems I need to use MERGE.
All examples I could find of MERGE had some sort of "source" and "target" tables, in my case, the source and target is the same table. I was not able to make sense of the examples to create my own query.
Is MERGE the only way or maybe there's a better solution?
INSERT INTO movie_ratings
VALUES (1, 3, 5)
It's basically this and the primary key is the first 2 values, so an update would be like this:
UPDATE movie_ratings
SET rating = 8
WHERE mid = 1 AND aid = 3
I thought of using a trigger that would automatically execute the UPDATE statement when the INSERT was called, but only if the primary key is a duplicate. Is there any problem doing it this way? I need some help with triggers though, as I'm having some difficulty trying to understand them and write my own.
MERGE is the 'do INSERT or UPDATE as appropriate' statement in Standard SQL, and probably therefore in Oracle SQL too.
Yes, you need a 'table' to merge from, but you can almost certainly create that table on the fly:
MERGE INTO Movie_Ratings M
USING (SELECT 1 AS mid, 3 AS aid, 8 AS rating FROM dual) N
ON (M.mid = N.mid AND M.aid = N.aid)
WHEN MATCHED THEN UPDATE SET M.rating = N.rating
WHEN NOT MATCHED THEN INSERT( mid, aid, rating)
VALUES(N.mid, N.aid, N.rating);
(Syntax not verified.)
A typical way of doing this is either:
1) perform the INSERT and catch a DUP_VAL_ON_INDEX exception, then perform an UPDATE instead (see the sketch below), or
2) perform the UPDATE first and, if SQL%ROWCOUNT = 0, perform an INSERT.
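A sketch of the first pattern using the question's movie_ratings table (values taken from the question; the column order mid, aid, rating is assumed):
begin
  insert into movie_ratings (mid, aid, rating) values (1, 3, 5);
exception
  when dup_val_on_index then
    update movie_ratings
    set rating = 8
    where mid = 1 and aid = 3;
end;
/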
You can't write a trigger on a table that performs another operation on the same table; that causes an Oracle error (mutating table).
I'm a T-SQL guy but a trigger in this case is not a good solution. Most triggers are not good solutions. In T-SQL, I would simply perform an IF EXISTS (SELECT * FROM dbo.Table WHERE ...) but in Oracle, you have to select the count...
DECLARE
cnt NUMBER;
BEGIN
SELECT COUNT(*)
INTO cnt
FROM mytable
WHERE id = 12345;
IF( cnt = 0 )
THEN
...
ELSE
...
END IF;
END;
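Applied to the question's movie_ratings table, that check might look like this (a sketch; it is still not safe against concurrent sessions):
DECLARE
  cnt NUMBER;
BEGIN
  SELECT COUNT(*)
    INTO cnt
    FROM movie_ratings
   WHERE mid = 1 AND aid = 3;

  IF cnt = 0 THEN
    INSERT INTO movie_ratings VALUES (1, 3, 5);
  ELSE
    UPDATE movie_ratings SET rating = 8 WHERE mid = 1 AND aid = 3;
  END IF;
END;
/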
It would appear that MERGE is what you need in this case:
MERGE INTO movie_ratings mr
USING (
  SELECT rating, mid, aid
  FROM movie_ratings
  WHERE mid = 1 AND aid = 3) mri
ON (mr.mid = mri.mid AND mr.aid = mri.aid)
WHEN MATCHED THEN
  UPDATE SET mr.rating = 8
WHEN NOT MATCHED THEN
  INSERT (mr.rating, mr.mid, mr.aid)
  VALUES (8, 1, 3)
Like I said, I'm a T-SQL guy but the basic idea here is to "join" the movie_rating table against itself. If there's no performance hit on using the "if exists" example, I'd use it for readability.

How to? Correct SQL syntax for finding the next available identifier

I think I could use some help here from more experienced users...
I have an integer field in a table; let's call it SO_ID, in a table SO. For each new row I need to calculate a new SO_ID based on the following rules:
1) SO_ID consists of 6 digits, where the first 3 are an area code and the last three are the sequence number within this area, e.g.
309001
309002
309003
2) so the next new row will have a SO_ID of value
309004
3) if someone deletes the row with SO_ID value = 309002, then the next new row must recycle this value, so the next new row has got to have the SO_ID of value
309002
Can anyone please provide me with either a SQL or PL/SQL function (perhaps a trigger straight away?) that would return the next available SO_ID I need to use?
I reckon I could make use of the keyword ROWNUM in my SQL, but the following just doesn't work properly:
select max(so_id),max(rownum) from(
select (so_id),rownum,cast(substr(cast(so_id as varchar(6)),4,3) as int) from SO
where length(so_id)=6
and substr(cast(so_id as varchar(6)),1,3)='309'
and cast(substr(cast(so_id as varchar(6)),4,3) as int)=rownum
order by so_id
);
thank you for all your help!
This kind of logic is fraught with peril. What if two sessions calculate the same "next" value, or both try to reuse the same "deleted" value? Since your column is an integer, you'd probably be better off querying "between 309001 and 309999", but that begs the question of what happens when you hit the thousandth item in area 309?
Is it possible to make SO_ID a foreign key to another table as well as a unique key? You could pre-populate the parent table with all valid IDs (or use a function to generate them as needed), and then it would be a simple matter to select the lowest one where a child record doesn't exist.
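A sketch of that idea, using a hypothetical pre-populated VALID_SO_IDS parent table (the name is illustrative, not part of the original schema):
-- Hypothetical parent table holding every permissible ID for area 309
create table valid_so_ids (so_id number(6) primary key);

insert into valid_so_ids
  select 309000 + level from dual connect by level <= 999;

-- The lowest valid ID not yet used by a row in SO
select min(v.so_id)
  from valid_so_ids v
 where not exists (select 1 from so s where s.so_id = v.so_id);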
Well, we came up with this... it sort of works; concurrency is 'solved' via the unique constraint:
select min(lastnumber)
from
(
select so_id,so_id-LAG(so_id, 1, so_id) OVER (ORDER BY so_id) AS diff,LAG(so_id, 1, so_id) OVER (ORDER BY so_id)as lastnumber
from so_miso
where substr(cast(so_id as varchar(6)),1,3)='309'
and length(so_id)=6
order by so_id
)a
where diff>1;
Do you really need to compute & store this value at the time a row is inserted? You would normally be better off storing the area code and a date in a table and computing the SO_ID in a view, i.e.
SELECT area_code ||
LPAD( DENSE_RANK() OVER( PARTITION BY area_code
ORDER BY date_column ),
3,
'0' ) AS so_id,
<<other columns>>
FROM your_table
or having a process that runs periodically (nightly, for example) to assign the SO_ID using similar logic.
If your application is not pure sql, you could do this in application code (ie: Java code). This would be more straightforward.
If you are recycling numbers when rows are deleted, your base table must be consulted when generating the next number. "Legacy" pre-relational schemes that attempt to encode information in numbers are a pain to make airtight when numbers must be recycled after deletes, as you say yours must.
If you want to avoid having to scan your table looking for gaps, an after-delete routine must write the deleted number to a separate table in a "ReuseMe" column. The insert routine then does this (see the sketch below):
1) begin a transaction
2) select the next-number table FOR UPDATE
3) use a ReuseMe number if available, else use the next number
4) clear the ReuseMe number if applicable, or increment the next-number in the next-number table
5) commit the transaction
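A minimal sketch of that routine, assuming hypothetical NEXT_NUMBERS(AREA_CODE, NEXT_NUM) and REUSE_ME(AREA_CODE, SO_ID) helper tables that are not part of the original schema:
create or replace function claim_so_id(p_area in varchar2) return number is
  v_next   number;
  v_reused number;
begin
  -- lock the per-area counter row to serialize concurrent callers
  select next_num into v_next
    from next_numbers
   where area_code = p_area
     for update;

  -- prefer a recycled number if one is available
  select min(so_id) into v_reused
    from reuse_me
   where area_code = p_area;

  if v_reused is not null then
    delete from reuse_me
     where area_code = p_area
       and so_id = v_reused;
    return v_reused;
  end if;

  update next_numbers
     set next_num = next_num + 1
   where area_code = p_area;
  return v_next;
  -- the caller performs its insert with the returned value and then commits
end claim_so_id;
/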
Ignoring the issues about concurrency, the following should give a decent start.
If 'traffic' on the table is low enough, go with locking the table in exclusive mode for the duration of the transaction.
create table blah (soc_id number(6));
insert into blah select 309000 + rownum from user_tables;
delete from blah where soc_id = 309003;
commit;

create or replace function get_next (i_soc in number) return number is
  v_min number := i_soc * 1000;
  v_max number := v_min + 999;
begin
  lock table blah in exclusive mode;

  -- lowest sequence number (1-999) not currently used within the area
  select min(rn) into v_min
  from
    (select rownum rn from dual connect by level <= 999
     minus
     select to_number(substr(soc_id, 4))
     from blah
     where soc_id between v_min and v_max);

  -- return the full six-digit identifier, e.g. 309003
  return i_soc * 1000 + v_min;
end;
/
