oracle - sequences without a sequence

I want to populate a column with a sequential number, but a single sequence is not sufficient. This column will behave somewhat like a 'sub id', if you will: an incrementing id for groups of records in the table.
The plan is to get the 'next number in the sequence' when inserting using a trigger, much like a normal sequence may be used. However, rather than just the 'next number', it needs to be the 'next number' in a given result set.
Consider the following example data, where the display_id column is the sequence I need help managing; it depends on the record's value for group_name:
id | group_name | display_id
------------------------------
5 | GroupA | 3
4 | GroupA | 2
3 | GroupA | 1
2 | GroupB | 2
1 | GroupB | 1
I'm thinking of a query like this to get the 'next number' for GroupA:
select max(display_id) + 1
from my_records
where group_name = 'GroupA'
For GroupA it returns 4, but for GroupB it returns 3.
We could, of course, use the above query but would lose the atomic benefits of a sequence. Is there any way to manage such a sequence confidently?
We are not concerned about potentially skipping numbers (as sequences may).
Edit:
We are comfortable with a number or two being missed due to rollbacks and the like (as with sequences). However, our requirement is still that the display_id column maintain multiple sequences.

Although I would strongly recommend against it (preferring to use a single sequence and just accept that there will be larger than expected gaps), you can build your own pseudo-sequence table
CREATE TABLE my_sequences (
sequence_name VARCHAR2(30) PRIMARY KEY,
sequence_val NUMBER
);
insert a couple of rows
INSERT INTO my_sequences( sequence_name, sequence_val )
VALUES( 'GroupA', 1 );
INSERT INTO my_sequences( sequence_name, sequence_val )
VALUES( 'GroupB', 1 );
and then write a function to get the next sequence value
CREATE FUNCTION get_nextval( p_sequence_name IN VARCHAR2 )
  RETURN NUMBER
IS
  l_val NUMBER;
BEGIN
  -- lock this sequence's row until the calling transaction commits or rolls back
  SELECT sequence_val
    INTO l_val
    FROM my_sequences
   WHERE sequence_name = p_sequence_name
     FOR UPDATE;
  UPDATE my_sequences
     SET sequence_val = sequence_val + 1
   WHERE sequence_name = p_sequence_name;
  RETURN l_val;
END;
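To wire this into the insert the way the question describes, a before-insert trigger along these lines might work (a sketch; the trigger name is made up, the my_records columns come from the question, and a my_sequences row must already exist for the group, per the inserts above):
CREATE OR REPLACE TRIGGER my_records_display_id_trg
  BEFORE INSERT ON my_records
  FOR EACH ROW
BEGIN
  -- only assign a value when the insert didn't supply one
  IF :new.display_id IS NULL THEN
    :new.display_id := get_nextval( :new.group_name );
  END IF;
END;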
The SELECT ... FOR UPDATE in get_nextval will lock the row in the table for the particular sequence until the transaction that retrieved the next value either commits or rolls back. This will radically decrease the scalability of your application compared to using Oracle sequences, by ensuring that only one session can be inserting a row for a particular group_name at a time; the others will block waiting for the sequence. If you have a system with a relatively small number of concurrent users (or a relatively large number of group_name values), that may be acceptable to you. But in general it is a poor practice. Depending on the Oracle version, you may be able to use autonomous transactions to increase concurrency, but that just adds one more bit of complexity to the solution. At the point that you're really worried about scalability, you'd really want to push back on the whole design and just use an Oracle sequence.
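For what it's worth, the autonomous-transaction variant mentioned above might look like the sketch below (the function name is made up). It restores some concurrency by committing inside the function, which releases the lock immediately, but it guarantees gaps whenever the calling transaction rolls back:
CREATE OR REPLACE FUNCTION get_nextval_autonomous( p_sequence_name IN VARCHAR2 )
  RETURN NUMBER
IS
  PRAGMA AUTONOMOUS_TRANSACTION;
  l_val NUMBER;
BEGIN
  SELECT sequence_val
    INTO l_val
    FROM my_sequences
   WHERE sequence_name = p_sequence_name
     FOR UPDATE;
  UPDATE my_sequences
     SET sequence_val = sequence_val + 1
   WHERE sequence_name = p_sequence_name;
  COMMIT; -- releases the row lock right away; the increment survives a caller rollback
  RETURN l_val;
END;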

Create a unique composite index on the group_name + display_id columns.
Then use your code and, in case an exception is thrown, re-run the next-value generation.
PS: personally I don't like it, but it's likely that in this case there is no good alternative.
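A sketch of that combination, assuming the my_records table from the question and a hypothetical my_seq sequence for the id column:
CREATE UNIQUE INDEX my_records_grp_disp_ux
  ON my_records (group_name, display_id);

DECLARE
  l_next my_records.display_id%TYPE;
BEGIN
  LOOP
    SELECT NVL(MAX(display_id), 0) + 1
      INTO l_next
      FROM my_records
     WHERE group_name = 'GroupA';
    BEGIN
      INSERT INTO my_records (id, group_name, display_id)
      VALUES (my_seq.NEXTVAL, 'GroupA', l_next);
      EXIT; -- success
    EXCEPTION
      WHEN DUP_VAL_ON_INDEX THEN
        NULL; -- another session grabbed this number; recompute and retry
    END;
  END LOOP;
END;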

Related

Inserting Row Number based on existing value in the table

I have a requirement that I need to insert row number in a table based on value already present in the table. For example, the max row_nbr record in the current table is something like this:
+----------+----------+------------+---------+
| FST_NAME | LST_NAME | STATE_CODE | ROW_NBR |
+----------+----------+------------+---------+
| John | Doe | 13 | 123 |
+----------+----------+------------+---------+
Now, I need to insert more records, with given FST_NAME and LST_NAME values. ROW_NBR needs to be generated while inserting the data into table with values auto-incrementing from 123.
I can't use a sequence, as my loading process is not the only process that inserts data into this table. And I can't use a cursor either because, due to the high volume of data, the TEMP space gets filled up quickly. I'm inserting data as given below:
insert into final_table
( fst_name,lst_name,state_code)
(select * from staging_table
where state_code=13);
Any ideas how to implement this?
It sounds like other processes are finding the current maximum row_nbr value and incrementing it as they do single-row inserts in a cursor loop.
You could do something functionally similar, either finding the maximum in advance and incrementing it (if you're already running this in a PL/SQL block):
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, variable_holding_maximum + rownum
from staging_table st
where st.state_code=13;
or by querying the table as part of the query, which doesn't need PL/SQL:
insert into final_table (fst_name, lst_name, state_code, row_nbr)
select st.*, (select max(row_nbr) from final_table) + rownum
from staging_table st
where st.state_code=13;
db<>fiddle
But this isn't a good solution because it doesn't prevent clashes when different processes and sessions try to insert at the same time; then again, neither would the cursor-loop approach, unless it catches unique constraint errors and re-attempts with a new value.
It would be better to use a sequence, or an auto-increment (identity) column, but you said you can't change the table structure; and you need to let the other processes continue to work without modification. You can still do that with a sequence-and-trigger approach, having the trigger always set the row_nbr value from the sequence, regardless of whether the insert statement supplied a value.
If you create a sequence that starts from the current maximum, with something like:
create sequence final_seq start with <current max + 1>
or without manually finding it:
declare
  start_with pls_integer;
begin
  select nvl(max(row_nbr), 0) + 1 into start_with from final_table;
  execute immediate 'create sequence final_seq start with ' || start_with;
end;
/
then your trigger could just be:
create trigger final_trig
before insert on final_table
for each row
begin
  :new.row_nbr := final_seq.nextval;
end;
/
Then your insert ... select statement doesn't need to supply or even think about the row_nbr value, so you can leave it as you have it now (except I'd avoid select * even in that construct, and list the staging table columns explicitly); and any existing inserts that do supply the row_nbr don't need to be modified and the value they supply will just be overwritten from the sequence.
db<>fiddle showing inserts with and without row_nbr specified.
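Following the aside above about avoiding select *, the explicit-column version of the original insert might look like this (assuming the staging table has exactly the three columns implied by the question):
insert into final_table (fst_name, lst_name, state_code)
select st.fst_name, st.lst_name, st.state_code
from staging_table st
where st.state_code = 13;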

Writing a Version Number Function in PL/SQL

I want to write a function that will give me the next version number for a table. The table stores the existing version on each record. For example,
I have the cat table
cats
seqid 1
name Mr Smith
version number 1.2b.3.4
How can I write a program that will be able to increment these values based on various conditions?
This is my first attempt
if v_username is not null
then v_new_nbr := substr(v_cur_nbr, 1,7)||to_number(substr(v_cur_nbr, 8,1))+1
should be 1.2b.3.5
substr(v_cur_nbr, 1,7)||to_number(substr(v_cur_nbr, 8,1))+1
This hurls ORA-01722: invalid number. The reason is a subtle one. It seems Oracle applies the concatenation operator before the additions, so effectively you're adding one to the string '1.2b.3.4'.
One solution is using a TO_CHAR function to bracket the addition with the second substring before concatenating the result with the first substring:
substr(v_cur_nbr, 1,7) || to_char(to_number(substr(v_cur_nbr, 8,1))+1)
Working demo on db<>fiddle.
Incidentally, a key like this is a bad piece of data modelling. Smart keys are dumb. They always lead to horrible SQL (as you're finding) and risk data corruption. A proper model would have separate columns for each element of the version number. We can use a virtual column to concatenate the version number for display purposes.
create table cats(
  seqid number
  ,name varchar2(32)
  ,major_ver_no1 number
  ,major_ver_no2 number
  ,variant varchar2(1)
  ,minor_ver_no1 number
  ,minor_ver_no2 number
  ,v_cur_nbr varchar2(16) generated always as (to_char(major_ver_no1,'FM9') ||'.'||
                                               to_char(major_ver_no2,'FM9') ||
                                               variant ||'.'||
                                               to_char(minor_ver_no1,'FM9') ||'.'||
                                               to_char(minor_ver_no2,'FM9') ) );
So the set-up is a bit of a hassle, but incrementing the version numbers is a piece of cake.
update cats
set major_ver_no1 = major_ver_no1 +1
, major_ver_no2 = 0
, variant = 'a';
There's a db<>fiddle for that too.
Try a format mask with TO_NUMBER to be able to get the decimal number; this small example might help:
CREATE TABLE tmp_table (version varchar2(100));
INSERT INTO tmp_table(version) VALUES ('1.2b.3.4');
DECLARE
  mainVersion NUMBER;
  subVersion NUMBER;
  currentVersion VARCHAR2(100);
BEGIN
  SELECT version INTO currentVersion FROM tmp_table;
  -- note: the SUBSTR positions assume the version string has exactly this shape
  mainVersion := TO_NUMBER(SUBSTR(currentVersion,1,3),'9.9') + 0.1; -- '1.2' -> 1.3
  subVersion  := TO_NUMBER(SUBSTR(currentVersion,6,3),'9.9') + 1.1; -- '3.4' -> 4.5
  UPDATE tmp_table SET version = (mainVersion||'b.'||subVersion);
END;
/

Trigger to find next available inventory location

I am trying to implement inventory tracking and am running into problems. As this is my first foray into database triggers (& PL/SQL in general) I think I need an adjustment to my thinking/understanding of how to solve this issue.
My situation is as follows: Each time a new item is added to my inventory, I need to auto-assign it the first available physical storage location. When items are consumed, they are removed from the inventory thus freeing up a physical location (i.e. we are recycling these physical locations). I have two tables: one inventory table and one table containing all legal location names/Ids.
Table: ALL_LOCATIONS
Location_ID
SP.1.1.1.a
SP.1.1.1.b
SP.1.1.1.c
SP.1.1.2.a
SP.1.1.2.b
SP.1.1.2.c
SP.1.1.3.a
SP.1.1.3.b
SP.1.1.3.c
...
SP.25.5.6.c
Table: ITEM_INVENTORY
Item_ID | Location_ID
1 SP.1.1.1.a
2 SP.1.1.1.b
4 SP.1.1.2.a
5 SP.1.1.2.b
6 SP.1.1.2.c
21 SP.1.1.4.a
… …
Note: First available location_ID should be SP.1.1.1.c
I need to create a trigger that will assign the next available Location_ID to the inserted row(s). Searching this site I see several similar questions along these lines; however, they are geared towards the logic of determining the next available location. In my case, I think I have that down, but I don't know how to implement it as a trigger. Let's just focus on the insert trigger. The "MINUS" strategy (shown below) works well in picking the next available location, but Oracle doesn't like this inside a trigger since I am reading from the same table that I am editing (it throws a mutating table error).
I've done some reading on mutating table errors and some workarounds are suggested (autonomous transactions etc.) however, the key message from my reading is, "you're going about it the wrong way." So my question is, "what's another way of approaching this problem so that I can implement a clean & simple solution without having to hack my way around mutating tables?"
Note: I am certain you can find all manner of things not-quite-right with my trigger code and I will certainly learn something if you point them out -- however my goal here is to learn new ways to approach/think about the fundamental problem with my design.
create or replace TRIGGER Assign_Plate_Location
BEFORE INSERT ON ITEM_INVENTORY
FOR EACH ROW
DECLARE
  loc VARCHAR2(100) := NULL;
BEGIN
  IF (:new.LOCATION_ID IS NULL) THEN
    BEGIN
      SELECT LOCATION_ID INTO loc FROM
        (SELECT DISTINCT LOCATION_ID FROM ALL_LOCATIONS
         MINUS
         SELECT DISTINCT LOCATION_ID FROM ITEM_INVENTORY)
      WHERE ROWNUM = 1;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        loc := NULL;
    END;
    IF (loc IS NOT NULL) THEN
      :new.LOCATION_ID := loc;
    END IF;
  END IF;
END;
There are several ways to do it. You could add a column AVAILABLE or OCCUPIED to the first table and select data only from this table with where available = 'Y'. In this case you also need triggers for delete and for update of location_id on the second table.
Second option - when inserting data, use merge or some procedure that retrieves data from all_locations when item_inventory.location_id is null; a sketch follows.
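For example, the insert itself can pick the first free location, so nothing has to read the table from inside a row trigger (a sketch using the table names from the question; :item_id stands in for the value being inserted, and note that the lexical order of these strings may not match the physical numbering):
INSERT INTO item_inventory (item_id, location_id)
SELECT :item_id, location_id
  FROM (SELECT location_id FROM all_locations
        MINUS
        SELECT location_id FROM item_inventory
        ORDER BY location_id)
 WHERE ROWNUM = 1;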
Third option - Oracle 11g introduced compound triggers, which allow better handling of mutating tables. In this case the trigger would look something like this:
create or replace trigger assign_plate_location
for insert on item_inventory compound trigger

  type t_locs is table of item_inventory.location_id%type;
  v_locs t_locs;
  i number := 1;

  before statement is
  begin
    -- collect every free location once, before any rows are processed
    select location_id
      bulk collect into v_locs
      from all_locations al
     where not exists (
             select location_id from item_inventory ii
              where ii.location_id = al.location_id );
  end before statement;

  before each row is
  begin
    if :new.location_id is null then
      if i <= v_locs.count() then
        :new.location_id := v_locs(i);
        i := i + 1;
      end if;
    end if;
  end before each row;

end assign_plate_location;
I tested it on data from your example and the inserts (including insert with select) looked OK. You can give it a try and check if it's efficient; maybe this will suit you.
And last notes - in your select you do not need distinct; MINUS makes the values distinct anyway.
Also think about ordering the data - right now your select (and mine) may take a random row from ALL_LOCATIONS.
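For instance, the before-statement query above could add an ORDER BY so locations are handed out deterministically (bearing in mind that lexical order of these strings may not match the physical numbering, e.g. SP.1.1.10 would sort before SP.1.1.2):
select location_id
  bulk collect into v_locs
  from all_locations al
 where not exists (
         select location_id from item_inventory ii
          where ii.location_id = al.location_id )
 order by location_id;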

Writing Procedure to enforce constraints + Testing

I need to set a constraint so that the user is unable to enter any records after he/she has entered 5 records in a single month. Would it be advisable to write a trigger or a procedure for that? Or is there any other way I can set up the constraint?
Instead of writing a trigger I have opted to write a procedure for the constraint, but how do I check whether the procedure is working?
Below is the procedure:
CREATE OR REPLACE PROCEDURE InsertReadingCheck
(
  newReadNo    In Int,
  newReadValue In Int,
  newReaderID  In Int,
  newMeterID   In Int
)
AS
  varRowCount Int;
BEGIN
  -- count this reader's readings for the current month
  Select Count(*) INTO varRowCount
    From Reading
   WHERE ReaderID = newReaderID
     AND Trunc(ReadDate,'mm') = Trunc(Sysdate,'mm');
  IF (varRowCount >= 5) THEN
    DBMS_OUTPUT.PUT_LINE('*************************************************');
    DBMS_OUTPUT.PUT_LINE('');
    DBMS_OUTPUT.PUT_LINE(' You are attempting to enter more than 5 records ');
    DBMS_OUTPUT.PUT_LINE('');
    DBMS_OUTPUT.PUT_LINE('*************************************************');
    ROLLBACK;
    RETURN;
  ELSIF (varRowCount < 5) THEN
    INSERT INTO Reading
    VALUES(seqReadNo.NextVal, sysdate, newReadValue,
           newReaderID, newMeterID);
    COMMIT;
  END IF;
END;
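For what it's worth, one quick way to exercise a procedure like this from SQL*Plus is a small anonymous block (the argument values below are made up):
SET SERVEROUTPUT ON
BEGIN
  -- call it six times with the same reader in one month to see the warning fire
  InsertReadingCheck( newReadNo => 1, newReadValue => 42,
                      newReaderID => 100, newMeterID => 7 );
END;
/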
Can anyone help me look through this?
This is the sort of thing that you should avoid putting in a trigger, especially the ROLLBACK and the COMMIT. That seems extremely dangerous (and I'm not even sure whether it's possible): you might roll back other work that you wished to commit, or vice versa.
Also, by putting this in a trigger you are going to get the following error:
ORA-04091: table XXXX is mutating, trigger/function may not see it
There are ways round this but they're excessive and involve doing something funky in order to get round Oracle's insistence that you do the correct thing.
This is the perfect opportunity to use a stored procedure to insert data into your table. You can check the number of current records prior to doing the insert meaning that there is no need to do a ROLLBACK.
It depends upon your application: if inserts into this table already happen in many places in your application, then a trigger is the better option.
This is a behavior constraint. It's a matter of opinion, but I would err on the side of keeping this kind of business logic OUT of your database. I would instead keep track of who added which records to the records table, and on what day/times. You can have an SP to get this information, but then your code-behind should handle whether or not the user can see certain links (or functions) based on the data that's returned. Whether that means keeping the user from accessing the page(s) where they insert records, or giving them read-only views, is up to you.
One declarative way you could solve this problem that would obey all concurrency rules is to use a separate table to keep track of number of inserts per user per month:
create table inserts_check (
ReaderID integer not null,
month date not null,
number_of_inserts integer constraint max_number_of_inserts check (number_of_inserts <= 5),
primary key (ReaderID, month)
);
Then create a trigger on the table (or all tables) for which inserts should be capped at 5:
create trigger inserts_check_trg
after insert on <table>
for each row
begin
  MERGE INTO inserts_check t
  USING (select :new.ReaderID as ReaderID, trunc(sysdate, 'MM') as month,
                1 as number_of_inserts from dual) s
  ON (t.ReaderID = s.ReaderID and t.month = s.month)
  WHEN MATCHED THEN UPDATE SET t.number_of_inserts = t.number_of_inserts + 1
  WHEN NOT MATCHED THEN INSERT (ReaderID, month, number_of_inserts)
       VALUES (s.ReaderID, s.month, s.number_of_inserts);
end;
Once the user has made 5 inserts in a given month, the next insert will push number_of_inserts to 6 and the max_number_of_inserts check constraint will fail.
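To see the cap kick in without touching the real table, you can drive the same MERGE in a loop (a sketch; reader 100 is made up):
BEGIN
  FOR i IN 1 .. 6 LOOP
    MERGE INTO inserts_check t
    USING (select 100 as ReaderID, trunc(sysdate, 'MM') as month,
                  1 as number_of_inserts from dual) s
    ON (t.ReaderID = s.ReaderID and t.month = s.month)
    WHEN MATCHED THEN UPDATE SET t.number_of_inserts = t.number_of_inserts + 1
    WHEN NOT MATCHED THEN INSERT (ReaderID, month, number_of_inserts)
         VALUES (s.ReaderID, s.month, s.number_of_inserts);
  END LOOP; -- the sixth iteration raises ORA-02290 (check constraint violated)
END;
/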

Pattern to substitute for MERGE INTO Oracle syntax when not allowed

I have an application that uses the Oracle MERGE INTO... DML statement to update table A to correspond with some of the changes in another table B (table A is a summary of selected parts of table B, along with some other info). In a typical merge operation, 5-6 rows (out of tens of thousands) might be inserted in table B and 2-3 rows updated.
It turns out that the application is to be deployed in an environment that has a security policy on the target tables. The MERGE INTO... statement can't be used with these tables (ORA-28132: Merge into syntax does not support security policies)
So we have to change the MERGE INTO... logic to use regular inserts and updates instead. Is this a problem anyone else has run into? Is there a best-practice pattern for converting the WHEN MATCHED/WHEN NOT MATCHED logic in the merge statement into INSERT and UPDATE statements? The merge is within a stored procedure, so it's fine for the solution to use PL/SQL in addition to the DML if that is required.
Another way to do this (other than MERGE) would be to use two SQL statements: one for the insert and one for the update. The "WHEN MATCHED" and "WHEN NOT MATCHED" logic can be handled using joins or an "in" clause.
If you decide to take the approach below, it is better to run the update first (since it only runs for the matching records) and then insert the non-matching records; done in that order, the update touches fewer rows, but the resulting data set is the same either way.
Also, like the MERGE, this update statement sets the Name column even if the names in source and target already match. If you don't want that, add that condition to the WHERE clause as well.
create table src_table(
id number primary key,
name varchar2(20) not null
);
create table tgt_table(
id number primary key,
name varchar2(20) not null
);
insert into src_table values (1, 'abc');
insert into src_table values (2, 'def');
insert into src_table values (3, 'ghi');
insert into tgt_table values (1, 'abc');
insert into tgt_table values (2,'xyz');
SQL> select * from Src_Table;
ID NAME
---------- --------------------
1 abc
2 def
3 ghi
SQL> select * from Tgt_Table;
ID NAME
---------- --------------------
2 xyz
1 abc
Update tgt_Table tgt
set Tgt.Name =
      (select Src.Name
         from Src_Table Src
        where Src.id = Tgt.id
      )
where exists
      (select 1
         from Src_Table Src
        where Src.id = Tgt.id
      ); -- guards rows with no source match, which would otherwise be set to NULL
2 rows updated. --Notice that ID 1 is updated even though value did not change
select * from Tgt_Table;
ID NAME
----- --------------------
2 def
1 abc
insert into tgt_Table
select src.*
from Src_Table src,
tgt_Table tgt
where src.id = tgt.id(+)
and tgt.id is null;
1 row created.
SQL> select * from tgt_Table;
ID NAME
---------- --------------------
2 def
1 abc
3 ghi
commit;
There could be better ways to do this, but this seems simple and SQL-oriented. If the data set is large, then a PL/SQL solution won't be as performant.
There are at least two options I can think of aside from digging into the security policy, which I don't know much about.
Process the records to merge row by row. Attempt the update; if it updates nothing, do the insert (or vice versa, depending on whether you expect most records to need updating or inserting, i.e. optimize for the most common case to reduce the number of SQL statements fired), e.g.:
begin
  for r in (select id, name from source_table) loop -- hypothetical id/name columns
    update table_to_be_merged t
       set t.name = r.name
     where t.id = r.id;
    if sql%rowcount = 0 then -- no row matched, so need to insert
      insert into table_to_be_merged (id, name)
      values (r.id, r.name);
    end if;
  end loop;
end;
Another option may be to bulk collect the records you want to merge into an array, and then attempted to bulk insert them, catching all the primary key exceptions (I cannot recall the syntax for this right now, but you can get a bulk insert to place all the rows that fail to insert into another array and then process them).
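The syntax alluded to is FORALL ... SAVE EXCEPTIONS. A sketch, reusing the src_table/tgt_table names from the earlier answer (illustrative only, not tested against your schema):
DECLARE
  TYPE t_rows IS TABLE OF tgt_table%ROWTYPE;
  l_rows t_rows;
  bulk_errors EXCEPTION;
  PRAGMA EXCEPTION_INIT(bulk_errors, -24381);
BEGIN
  SELECT * BULK COLLECT INTO l_rows FROM src_table;
  BEGIN
    FORALL i IN 1 .. l_rows.COUNT SAVE EXCEPTIONS
      INSERT INTO tgt_table VALUES l_rows(i);
  EXCEPTION
    WHEN bulk_errors THEN
      -- each failed insert (e.g. duplicate primary key) becomes an update
      FOR j IN 1 .. SQL%BULK_EXCEPTIONS.COUNT LOOP
        UPDATE tgt_table
           SET name = l_rows(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).name
         WHERE id = l_rows(SQL%BULK_EXCEPTIONS(j).ERROR_INDEX).id;
      END LOOP;
  END;
END;
/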
Logically a merge statement has to check for the presence of each record behind the scenes anyway, and I think it is processed quite similarly to the code I posted above. However, merge will always be more efficient than coding it in PL/SQL, as it is only one SQL call instead of many.
