Get Oracle sequence next value without changing it - oracle

I have a big problem. I need to know the next value of an Oracle sequence without changing it.
If I use sequence.NEXTVAL, I get what I need. The problem is that the value has changed, so the next record will get this value + 1, which is no good for me.
I know I can use sequence.CURRVAL, which is great because it does not change the value, but if no records have been created yet it does not work, and in that case the sequence value can be any number (not only 1, because the sequence value still exists).
Please help me find a solution.
Thank you very much!

You should definitely not rely on never missing a value in a sequence, as sequences are optimised for concurrency rather than for gap-free numbering. There are quite a few situations in which a number can be "lost".
Furthermore, the value visible in DBA_SEQUENCES may not be the actual next value, as the numbers are assigned from an in-memory cache. The underlying sequence metadata table has no data on the usage of that cache. You should also bear in mind that in a RAC system each instance has its own cache of sequence numbers.
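To see the cached metadata this refers to, you can query the data dictionary. A minimal sketch, assuming a sequence named MY_SEQ:
-- LAST_NUMBER reflects the end of the range already written to the
-- dictionary, not necessarily the next value NEXTVAL will hand out.
SELECT sequence_name, cache_size, last_number
FROM user_sequences
WHERE sequence_name = 'MY_SEQ';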
You might describe the problem you are trying to solve, as it could be that sequences are not an appropriate mechanism for you.

In a multi-user environment there is no way. The value is either known (when you call .NEXTVAL) or unknown. Sequences cannot be locked.

Well, the only suggestion I can give you is to get the nextval and then revert back. But CAUTION: perform these operations only while your sequence is not being used by any other session. Otherwise, if the sequence is fetched from another session during this operation, you may never be able to revert back to the original value. So be cautious before checking.
In my database, the current value of seq_test is 6:
--Get the next value
SQL> SELECT seq_test.NEXTVAL FROM dual;
NEXTVAL
----------
7
-- Ok, so this is the next value, now Restore to Original
SQL> ALTER SEQUENCE seq_test INCREMENT BY -1;
Sequence altered.
SQL> SELECT seq_test.NEXTVAL FROM dual;
NEXTVAL
----------
6
-- Restore original increment by clause
SQL> ALTER SEQUENCE seq_test INCREMENT BY 1;
Sequence altered.
--Check
SQL> SELECT seq_test.CURRVAL FROM dual;
CURRVAL
----------
6
I again warn you that this could be fatal ;-)

I can't see any reason why you would want to know the next value of a sequence. The value only matters once you have used it for a new record. So you either create a new record and return the ID value immediately:
declare
  l_id number;
begin
  insert into table (id)
  values (sequence.nextval)
  returning id into l_id;
  -- and then do something more with the ID value
end;
or first get a new ID and use it afterwards:
declare
  l_id number;
begin
  select sequence.nextval
    into l_id
    from dual;
  -- then do something with the ID value
end;

Look at the LAST_NUMBER column from
select * from USER_SEQUENCES
Edit:
This works only if the sequence is created with the NOCACHE option.
So if the sequence does have a cache setting, drop it and recreate it with NOCACHE, and then you can use user_sequences.last_number + 1.
Note that if the sequence is heavily used then this will impact performance. If, on the other hand, the sequence is not used much, it will be fine.
Short of changing the application (which is the best solution), this is the only solution.
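A sketch of that, assuming a sequence named MY_SEQ that you are free to drop and recreate:
-- CAUTION: dropping the sequence loses its current position, so restart it
-- explicitly at the correct value.
DROP SEQUENCE my_seq;
CREATE SEQUENCE my_seq START WITH 1000 NOCACHE;
-- With NOCACHE the dictionary is updated on every NEXTVAL call:
SELECT last_number FROM user_sequences WHERE sequence_name = 'MY_SEQ';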

Related

Assign a unique value to a user each time

How can I generate and assign a unique value (a sequence number) to a user every time the user logs in? If two users log in simultaneously, I should be able to avoid assigning the same value to both.
And it is not a specific requirement: it can be any operation for which a sequence value needs to be generated and assigned. I am trying to understand how the situation can be handled if the operations happen at the same time. I need a solution for Oracle 11g specifically.
You named it - use a sequence.
create sequence myseq;
Fetch value from it using
select myseq.nextval from dual;
Assign it to that user. Though I'm not sure I understand what that actually means; what would you assign it to, really? I hope you know.
Without knowing exactly what you want to accomplish with the unique value, as you have not explained it, you might do something like this:
Create a Sequence for creating an incremental sequential number
SQL> create sequence my_seq start with xxx increment by yyy cache zzz;
Where
xxx is the number where the sequence should start
yyy is the increment value of the sequence ( normally 1 )
As you plan to use it each time the user logs in, it might be a good idea to cache some values for faster access. zzz represents the number of values of the sequence you want to cache.
Assign the values to each user which logs in
A way to assign this value to a user each time they log in is through a logon trigger, relating it to the properties of the session using the built-in SYS_CONTEXT function. Of course, for that you need a DBA user to install the trigger in an admin schema (such as SYS or SYSTEM).
An example to store those values in a custom audit table:
SQL> CREATE OR REPLACE TRIGGER AFT_LOG_DBT
AFTER LOGON ON DATABASE
DECLARE
  -- declare variables
  v_session_id varchar2(100);
  v_os_user    varchar2(100);
  v_seq        number;
BEGIN
  SELECT sys_context('userenv','session_id'), -- the session id
         sys_context('userenv','osuser'),     -- the os user
         my_seq.nextval                       -- the sequence next val
    INTO v_session_id,
         v_os_user,
         v_seq
    FROM DUAL;
  -- insert those values into an audit table
  insert into your_schema.log_audit ( logon_time, session_id, os_user, seq_val )
  values ( sysdate, v_session_id, v_os_user, v_seq );
  commit;
END;
/
As you were not explaining what you wanted to do with the unique sequence value, I tried to provide you with an example of a use case.
I have never gotten duplicate sequence numbers from a sequence generator unless it has wrapped the max value or is set to cycle.
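For illustration, a hypothetical sequence that is allowed to cycle:
-- This sequence hands out 1, 2, 3, 1, 2, 3, ... by design.
CREATE SEQUENCE demo_seq MINVALUE 1 MAXVALUE 3 CYCLE NOCACHE;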

PL/SQL Procedure - Ticket system

I'm trying to make a Procedure in PL/SQL for a ticket system for my website.
I want a simple procedure that saves some data into the database.
It will have the name of the user (a random VARCHAR2), a number, and today's date as input.
When this procedure is called, it should automatically increment a number in the database (maybe the primary key) and add the input to the database. But I have no idea how to make something like this that can handle more people accessing it at once.
Can anyone help please?
For the people who want to know what I already have:
CREATE OR REPLACE PROCEDURE VoegBoeteToe
(v_speler IN number, v_date IN date, v_boete IN number)
IS VoegBoeteToe
DECLARE
v_nextNum NUMBER;
BEGIN
SELECT BETALINGSNR
INTO v_nextNum
FROM BOETES
ORDER BY BETALINGSNR DESC;
v_nextNum := v_nextNum + 1;
INSERT INTO BOETES VALUES(v_nextNum,v_speler,v_date,v_boete);
end;
/
The line IS VoegBoeteToe in your procedure should be just IS. In addition, remove the DECLARE line, which is not needed in a procedure. And use a sequence to derive the sequential number safely.
So, when all's said and done, your procedure should look something like:
CREATE SEQUENCE BOETES_SEQ;

CREATE OR REPLACE PROCEDURE VoegBoeteToe
  (v_speler IN number, v_date IN date, v_boete IN number)
IS
BEGIN
  INSERT INTO BOETES VALUES (BOETES_SEQ.NEXTVAL, v_speler, v_date, v_boete);
END;
It would also be a very good idea to include the list of field names in the BOETES table into which you're inserting values. This will make life better for anyone who has to maintain your code, and may save you from making a few errors.
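For example, with hypothetical column names (adjust them to the real definition of BOETES):
-- Naming the columns protects the insert against column reordering:
INSERT INTO BOETES (betalingsnr, speler, datum, boete)
VALUES (BOETES_SEQ.NEXTVAL, v_speler, v_date, v_boete);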
Share and enjoy.
You need to create a SEQUENCE:
CREATE SEQUENCE DUMMY_SEQ
/
Then in your code you can use something like this:
select DUMMY_SEQ.NEXTVAL into myVar from dual;
First, create a sequence. A sequence generates an ever-incrementing number whenever it is invoked. See this for some nice examples; the documentation could be useful too.
CREATE SEQUENCE user_sequence
START WITH 1000 -- your sequence will start with 1000
INCREMENT BY 1; -- each subsequent number will increment by 1
Then,
SELECT user_sequence.NEXTVAL INTO v_nextNum FROM DUAL;

Incrementing Oracle Sequence by certain amount

I am programming a Windows Application (in Qt 4.6) which - at some point - inserts any number of datasets between 1 and around 76000 into some oracle (10.2) table. The application has to retrieve the primary keys, or at least the primary key range, from a sequence. It will then store the IDs in a list which is used for Batch Execution of a prepared query.
(Note: Triggers shall not be used, and the sequence is used by other tasks as well)
In order to avoid calling the sequence X times, I would like to increment the sequence by X instead.
What I have found out so far, is that the following code would be possible in a procedure:
ALTER SEQUENCE my_sequence INCREMENT BY X;
SELECT my_sequence.CURRVAL + 1, my_sequence.NEXTVAL
  INTO v_first_number, v_last_number
  FROM dual;
ALTER SEQUENCE my_sequence INCREMENT BY 1;
I have two major concerns though:
I have read that ALTER SEQUENCE produces an implicit commit. Does this mean the transaction started by the Windows application will be committed? If so, can you somehow avoid that?
Is this concept multi-user proof? Or could the following thing happen:
Sequence is at 10,000
Session A sets increment to 2,000
Session A selects 10,001 as first and 12,000 as last
Session B sets increment to 5,000
Session A sets increment to 1
Session B selects 12,001 as first and 12,001 as last
Session B sets increment to 1
Even if the procedure were quick, it is not that unlikely in my application that two different users would cause the procedure to be called almost simultaneously.
1) ALTER SEQUENCE is DDL, so it implicitly commits before and after the statement. The database transaction started by the Windows application will be committed. If you are using a distributed transaction coordinator other than the Oracle database, hopefully the coordinator will commit the entire distributed transaction, but transaction coordinators sometimes have problems with commits issued that they are not aware of.
There is nothing that you can do to prevent DDL from committing.
2) The scenario you outline with multiple users is quite possible. So it doesn't sound like this approach would behave correctly in your environment.
You could potentially use the DBMS_LOCK package to ensure that only one session is calling your procedure at any point in time and then call the sequence N times from a single SQL statement. But if other processes are also using the sequence, there is no guarantee that you'll get a contiguous set of values.
CREATE PROCEDURE some_proc( p_num_rows  IN  NUMBER,
                            p_first_val OUT NUMBER,
                            p_last_val  OUT NUMBER )
AS
  l_lockhandle       VARCHAR2(128);
  l_lock_return_code INTEGER;
BEGIN
  dbms_lock.allocate_unique( 'SOME_PROC_LOCK',
                             l_lockhandle );
  l_lock_return_code := dbms_lock.request( lockhandle        => l_lockhandle,
                                           lockmode          => dbms_lock.x_mode,
                                           release_on_commit => true );
  if( l_lock_return_code IN (0, 4) ) -- Success or already owned
  then
    <<do something>>
  end if;
  -- DBMS_LOCK.RELEASE is a function, so its return code must be captured
  l_lock_return_code := dbms_lock.release( l_lockhandle );
END;
Altering the sequence in this scenario is a really bad idea, particularly in a multi-user environment. You'll get your transaction committed and probably several "race condition" data bugs or integrity errors.
It would be appropriate if you had legacy data already imported and wanted to insert new data with IDs from the sequence. Then you might alter the sequence to move currval past the maximum existing ...
It seems to me that what you want here is to generate IDs from the sequence. That need not be done by
select seq.nextval into l_variable from dual;
insert into table (id, ...) values (l_variable, ....);
You can use the sequence directly in the insert:
insert into table (id, ...) values (seq.nextval, ....);
and optionally get the assigned value back with
insert into table (id, ...) values (seq.nextval, ....)
returning id into l_variable;
It is certainly possible even for bulk operations with execBatch, either just creating the IDs or even returning them. I am not sure about the right syntax in Java, but it will be something along the lines of
insert into table (id, ...) values (seq.nextval, ....)
returning id bulk collect into l_cursor;
and you'll be given a ResultSet to browse the assigned numbers.
You can't prevent the implicit commit.
Your solution is not multi-user proof. It is perfectly possible that another session will have 'restored' the increment to 1, just as you described.
I would suggest you keep fetching values one by one from the sequence, store these IDs one by one on your list and have the batch execution operate on that list.
What is the reason that you want to fetch a contiguous block of values from the sequence? I would not be too worried about performance, but maybe there are other requirements that I don't know of.
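A minimal sketch of that one-by-one approach (names are illustrative; SELECT INTO is used because assigning NEXTVAL directly in a PL/SQL expression only became legal in 11g):
DECLARE
  TYPE t_ids IS TABLE OF NUMBER;
  l_ids t_ids := t_ids();
BEGIN
  FOR i IN 1 .. 100 LOOP  -- 100 stands in for your batch size
    l_ids.EXTEND;
    SELECT my_sequence.NEXTVAL INTO l_ids(l_ids.COUNT) FROM dual;
  END LOOP;
  -- hand l_ids over to the batch execution
END;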
In Oracle, you can use the following query to get the next N values from a sequence that increments by one:
select level, PDQ_ACT_COMB_SEQ.nextval as seq from dual connect by level <= 5;

select only new rows in oracle

I have a table with a VARCHAR2 primary key.
It has about 1,000,000 transactions per day.
My app wakes up every 5 minutes to generate a text file by querying only the new records.
It remembers the last point and processes only the new records.
Do you have any idea how to query this with good performance?
I am able to add a new column if necessary.
What do you think this process should be done with?
PL/SQL?
Java?
Everyone here is really, really close. However:
Scott Bailey's wrong about using a bitmap index if the table's under any sort of continuous DML load. That's exactly the wrong time to use a bitmap index.
Everyone else's answer about the PROCESSED CHAR(1) CHECK IN ('Y','N') column is right, but missing how to index it; you should use a function-based index like this:
CREATE INDEX MY_UNPROCESSED_ROWS_IDX ON MY_TABLE
(CASE WHEN PROCESSED_FLAG = 'N' THEN 'N' ELSE NULL END);
You'd then query it using the same expression:
SELECT * FROM MY_TABLE
WHERE (CASE WHEN PROCESSED_FLAG = 'N' THEN 'N' ELSE NULL END) = 'N';
The reason to use the function-based index is that Oracle doesn't write index entries for entirely NULL values being indexed, so the function-based index above will only contain the rows with PROCESSED_FLAG = 'N'. As you update your rows to PROCESSED_FLAG = 'Y', they'll "fall out" of the index.
Well, if you can add a new column, you could create a PROCESSED column to indicate processed records, and create an index on this column for performance.
Then the query should select only those rows that have been newly added and not processed.
This should be easily done using SQL queries.
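A minimal sketch of that idea (table and column names are illustrative):
ALTER TABLE my_table ADD (processed CHAR(1) DEFAULT 'N');
CREATE INDEX my_table_processed_idx ON my_table (processed);
-- The 5-minute job then picks up only unprocessed rows:
SELECT * FROM my_table WHERE processed = 'N';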
Ah, I really hate to add another answer when the others have come so close to nailing it. But:
As Ponies points out, Oracle does have a pseudocolumn (ORA_ROWSCN, based on the System Change Number) that can pinpoint when each row was modified. Unfortunately, by default it gets the information from the block instead of storing it with each row, and changing that behavior requires you to rebuild a really large table. So while this answer is good for quieting the SQL Server fella, I'd not recommend it.
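For reference, the row-level behaviour has to be chosen when the table is created, which is why an existing large table would need rebuilding. A hypothetical example:
-- ROWDEPENDENCIES stores an SCN per row instead of per block:
CREATE TABLE my_table (id NUMBER) ROWDEPENDENCIES;
SELECT id, ORA_ROWSCN FROM my_table;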
Astander is right, but needs a few caveats: add a new column needs_processed CHAR(1) DEFAULT 'Y' and add a bitmap index. For low-cardinality columns ('Y'/'N') the bitmap index will be faster. Once you have that, the rest is pretty easy. But you've got to be careful not to select the new rows, process them, and then mark them as processed in a separate step; otherwise, rows inserted while you were processing would get marked processed even though they have not been.
The easiest way would be to use PL/SQL to open a cursor that selects unprocessed rows, processes them and then updates each row as processed, as sketched below. If you have an aversion to walking cursors, you could collect the PKs or rowids into a nested table, process them, and then update using the nested table.
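A sketch of that cursor walk (illustrative names, using the needs_processed flag from above):
BEGIN
  FOR r IN (SELECT rowid AS rid FROM my_table WHERE needs_processed = 'Y') LOOP
    -- process the row here, e.g. write it to the text file
    UPDATE my_table SET needs_processed = 'N' WHERE rowid = r.rid;
  END LOOP;
  COMMIT;
END;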
In MS SQL Server world where I work, we have a 'version' column of type 'timestamp' on our tables.
So, to answer #1, I would add a new column.
To answer #2, I would do it in plsql for performance.
Mark
"astander" pretty much did the work for you. You need to ALTER your table to add one more column (lets say PROCESSED)..
You can also consider creating an INDEX on the PROCESSED ( a bitmap index may be of some advantage, as the possible value can be only 'y' and 'n', but test it out ) so that when you query it will use INDEX.
Also if sure, you query only for every 5 mins, check whether you can add another column with TIMESTAMP type and partition the table with it. ( not sure, check out again ).
I would also think about writing job or some thing and write using UTL_FILE and show it front end if it can be.
If performance is really a problem and you want to create your file asynchronously, you might want to use Oracle Streams, which will actually get modification data from your redo log without affecting performance of the main database. You may not even need a separate job, as you can configure Oracle Streams to do asynchronous replication of the changes, through which you can trigger the file creation.
Why not create an extra table that holds two columns: the ID column and a processed-flag column. Have an insert trigger on the original table place its ID in this new table. Your logging process can then select records from this new table and mark them as processed. Finally, delete the processed records from this table.
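Something along these lines, with illustrative names (the question says the primary key is a VARCHAR2):
CREATE TABLE new_rows_log (id VARCHAR2(50), processed CHAR(1) DEFAULT 'N');

CREATE OR REPLACE TRIGGER trg_log_new_rows
AFTER INSERT ON original_table
FOR EACH ROW
BEGIN
  INSERT INTO new_rows_log (id) VALUES (:new.id);
END;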
I'm pretty much in agreement with Adam's answer. But I'd want to do some serious testing compared to an alternative.
The issue I see is that you need to not only select the rows, but also do an update of those rows. While that should be pretty fast, I'd like to avoid the update. And avoid having any large transactions hanging around (see below).
The alternative would be to add CREATE_DATE date default sysdate. Index that. And then select records where create_date >= (start date/time of your previous select).
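A sketch of that variant (illustrative names; :last_run_time is whatever you stored from the previous run):
ALTER TABLE my_table ADD (create_date DATE DEFAULT SYSDATE);
CREATE INDEX my_table_create_date_idx ON my_table (create_date);

SELECT * FROM my_table WHERE create_date >= :last_run_time;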
But I don't have enough data on the relative costs of setting a SYSDATE default vs. setting a value of 'Y', of maintaining the function-based index vs. the date index, or of a range select on the date vs. an equality select on the single value 'Y'. You'll probably want to preserve stats or hint the query to use the index on the Y/N column, and definitely want to use a hint on a date column: the stats on the date column will almost certainly be stale.
If data are also being added to the table continuously, including during the period when your query is running, you need to watch out for transaction control. After all, you don't want to read 100,000 records that have the flag = 'Y', then do your update on 120,000, including the 20,000 that arrived while your query was running.
In the flag case, there are two easy ways: SET TRANSACTION before your select and commit after your update, or start by updating from 'Y' to 'Q', then select those that are 'Q', and then update them to 'N'. Oracle's read consistency is wonderful but needs to be handled with care.
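The Y/Q/N variant might look like this (illustrative names):
-- Claim the current batch:
UPDATE my_table SET needs_processed = 'Q' WHERE needs_processed = 'Y';
COMMIT;
-- ... process everything marked 'Q' ...
UPDATE my_table SET needs_processed = 'N' WHERE needs_processed = 'Q';
COMMIT;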
For the date column version, if you don't mind a risk of processing a few rows more than once, just update the table that stores your last processed date/time immediately before you do your select.
If there's not much information in the table, consider making it Index Organized.
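For example (an index-organized table must be built around its primary key):
CREATE TABLE my_iot_table (
  id   VARCHAR2(50) PRIMARY KEY,
  data VARCHAR2(1000)
) ORGANIZATION INDEX;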
What about using Materialized view logs? You have a lot of options to play with:
SQL> create table test (id_test number primary key, dummy varchar2(1000));
Table created
SQL> create materialized view log on test;
Materialized view log created
SQL> insert into test values (1, 'hello');
1 row inserted
SQL> insert into test values (2, 'bye');
1 row inserted
SQL> select * from mlog$_test;
ID_TEST    SNAPTIME$$  DMLTYPE$$ OLD_NEW$$ CHANGE_VECTOR$$
---------- ----------- --------- --------- ---------------
         1 01/01/4000  I         N         FE
         2 01/01/4000  I         N         FE
SQL> delete from mlog$_test where id_test in (1,2);
2 rows deleted
SQL> insert into test values (3, 'hello');
1 row inserted
SQL> insert into test values (4, 'bye');
1 row inserted
SQL> select * from mlog$_test;
ID_TEST    SNAPTIME$$  DMLTYPE$$ OLD_NEW$$ CHANGE_VECTOR$$
---------- ----------- --------- --------- ---------------
         3 01/01/4000  I         N         FE
         4 01/01/4000  I         N         FE
I think this solution should work.
You need to do the following steps.
For the first run, you will have to copy all records, and then store the high-water mark by executing the following query:
insert into new_table(max_rowid) select max(rowid) from yourtable;
The next time you want to get only the newly inserted rows, you can do it by executing the following command:
select * from yourtable where rowid > (select max_rowid from new_table);
Once you are done processing the above query, simply truncate new_table and insert max(rowid) from yourtable again.
I think this should work and would be the fastest solution.

ORA-04091: table [blah] is mutating, trigger/function may not see it

I recently started working on a large complex application, and I've just been assigned a bug due to this error:
ORA-04091: table SCMA.TBL1 is mutating, trigger/function may not see it
ORA-06512: at "SCMA.TRG_T1_TBL1_COL1", line 4
ORA-04088: error during execution of trigger 'SCMA.TRG_T1_TBL1_COL1'
The trigger in question looks like
create or replace TRIGGER TRG_T1_TBL1_COL1
BEFORE INSERT OR UPDATE OF t1_appnt_evnt_id ON TBL1
FOR EACH ROW
WHEN (NEW.t1_prnt_t1_pk is not null)
DECLARE
v_reassign_count number(20);
BEGIN
select count(t1_pk) INTO v_reassign_count from TBL1
where t1_appnt_evnt_id=:new.t1_appnt_evnt_id and t1_prnt_t1_pk is not null;
IF (v_reassign_count > 0) THEN
RAISE_APPLICATION_ERROR(-20013, 'Multiple reassignments not allowed');
END IF;
END;
The table has a primary key "t1_pk", an "appointment event id"
t1_appnt_evnt_id and another column "t1_prnt_t1_pk" which may or may
not contain another row's t1_pk.
It appears the trigger is trying to make sure that no other row with the
same t1_appnt_evnt_id is also referring to another row, if this row is referring to another row.
The comment on the bug report from the DBA says "remove the trigger, and perform the check in the code", but unfortunately they have a proprietary code generation framework layered on top of Hibernate, so I can't even figure out where it actually gets written out, so I'm hoping that there is a way to make this trigger work. Is there?
I think I disagree with your description of what the trigger is trying to
do. It looks to me like it is meant to enforce this business rule: For a
given value of t1_appnt_event, only one row can have a non-NULL value of
t1_prnt_t1_pk at a time. (It doesn't matter if they have the same value in the second column or not.)
Interestingly, it is defined for UPDATE OF t1_appnt_event but not for the other column, so I think someone could break the rule by updating the second column, unless there is a separate trigger for that column.
There might be a way you could create a function-based index that enforces this rule so you can get rid of the trigger entirely. I came up with one way but it requires some assumptions:
The table has a numeric primary key
The primary key and the t1_prnt_t1_pk are both always positive numbers
If these assumptions are true, you could create a function like this:
dev> create or replace function f( a number, b number ) return number deterministic as
2 begin
3 if a is null then return 0-b; else return a; end if;
4 end;
and an index like this:
CREATE UNIQUE INDEX my_index ON my_table
( t1_appnt_event, f( t1_prnt_t1_pk, primary_key_column) );
So rows where the t1_prnt_t1_pk column is NULL would appear in the index with the negated primary key as the second value, so they would never conflict with each other. Rows where it is not NULL would use the actual (positive) value of the column. The only way you could get a constraint violation would be if two rows had the same non-NULL values in both columns.
This is perhaps overly "clever", but it might help you get around your problem.
Update from Paul Tomblin: I went with the update to the original idea that igor put in the comments:
CREATE UNIQUE INDEX cappec_ccip_uniq_idx
ON tbl1 (t1_appnt_event,
CASE WHEN t1_prnt_t1_pk IS NOT NULL THEN 1 ELSE t1_pk END);
I agree with Dave that the desired result probably can and should be achieved using built-in constraints such as unique indexes (or unique constraints).
If you really need to get around the mutating table error, the usual way to do it is to create a package containing a package-scoped variable: a table of something that can be used to identify the changed rows (I think ROWID is possible; otherwise you have to use the PK. I don't use Oracle currently, so I can't test it). The FOR EACH ROW trigger then fills this variable with all rows modified by the statement, and an AFTER statement trigger reads the rows and validates them.
Something like this (the syntax is probably not exact; I haven't worked with Oracle for a few years):
CREATE OR REPLACE PACKAGE trigger_pkg AS
  PROCEDURE before_stmt_trigger;
  PROCEDURE for_each_row_trigger(p_rowid IN ROWID);
  PROCEDURE after_stmt_trigger;
END trigger_pkg;

CREATE OR REPLACE PACKAGE BODY trigger_pkg AS
  TYPE rowid_tbl IS TABLE OF ROWID;
  modified_rows rowid_tbl;

  PROCEDURE before_stmt_trigger IS
  BEGIN
    modified_rows := rowid_tbl();
  END before_stmt_trigger;

  PROCEDURE for_each_row_trigger(p_rowid IN ROWID) IS
  BEGIN
    modified_rows.EXTEND;
    modified_rows(modified_rows.COUNT) := p_rowid;
  END for_each_row_trigger;

  PROCEDURE after_stmt_trigger IS
  BEGIN
    FOR i IN 1 .. modified_rows.COUNT LOOP
      SELECT ... INTO ... FROM the_table WHERE rowid = modified_rows(i);
      -- do whatever you want to
    END LOOP;
  END after_stmt_trigger;
END trigger_pkg;

CREATE OR REPLACE TRIGGER before_stmt_trigger BEFORE INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.before_stmt_trigger;
END;

CREATE OR REPLACE TRIGGER after_stmt_trigger AFTER INSERT OR UPDATE ON mytable
BEGIN
  trigger_pkg.after_stmt_trigger;
END;

CREATE OR REPLACE TRIGGER for_each_row_trigger
AFTER INSERT OR UPDATE ON mytable
FOR EACH ROW
WHEN (new.mycolumn IS NOT NULL)
BEGIN
  trigger_pkg.for_each_row_trigger(:new.rowid);
END;
With any trigger-based (or application code-based) solution you need to
put in locking to prevent data corruption in a multi-user environment.
Even if your trigger worked, or was re-written to avoid the mutating table
issue, it would not prevent two users from simultaneously updating
t1_appnt_evnt_id to the same value on rows where t1_prnt_t1_pk is not
null. Assume there are currently no rows where t1_appnt_evnt_id=123 and
t1_prnt_t1_pk is not null:
Session 1> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =456;
/* OK, trigger sees count of 0 */
Session 2> update tbl1
set t1_appnt_evnt_id=123
where t1_prnt_t1_pk =789;
/* OK, trigger sees count of 0 because
session 1 hasn't committed yet */
Session 1> commit;
Session 2> commit;
You now have a corrupted database!
The way to avoid this (in trigger or application code) would be to lock
the parent row in the table referenced by t1_appnt_evnt_id=123 before performing the check:
select appe_id
into v_app_id
from parent_table
where appe_id = :new.t1_appnt_evnt_id
for update;
Now session 2's trigger must wait for session 1 to commit or rollback before it performs the check.
It would be much simpler and safer to implement Dave Costa's index!
Finally, I'm glad no one has suggested adding PRAGMA AUTONOMOUS_TRANSACTION to your trigger: this is often suggested on forums and "works" insofar as the mutating table issue goes away, but it makes the data integrity problem even worse! So just don't...
I had a similar error with Hibernate. Flushing the session by using
getHibernateTemplate().saveOrUpdate(o);
getHibernateTemplate().flush();
solved this problem for me. (I'm not posting my code block, as I was sure everything was written properly and should work - but it did not until I added the flush() statement.) Maybe this can help someone.
