Sequence in cycle mode in Oracle

I have tried to find some real use cases for sequences in CYCLE mode.
We can use a sequence to generate unique IDs for primary keys, but that isn't strictly needed because we can use an IDENTITY column as well.
Can you give me some good practical scenarios for using a CYCLE sequence incrementing by 1, and then by a higher number like 10 or 100?
Thanks

As you said, we usually use sequences for generating unique values - a good choice for primary keys. As primary keys can't be duplicated, there's not much sense in cycling the sequence.
(As for identity columns: yes, that's an option in recent Oracle database versions, but they didn't exist until 12c, so we used - and still use - sequences in older versions.)
Personally, I've never used MAXVALUE; most of my sequences are simple, using default options, such as
create sequence seq;
However, if you do set MAXVALUE and don't pay attention to the number of values you use from it, once you reach its maximum you'll get
ORA-08004: sequence SEQ.NEXTVAL exceeds MAXVALUE and cannot be instantiated
One solution to that problem is to remove MAXVALUE (or set it to a higher value); another is to use CYCLE so that - once you reach the maximum - the sequence keeps working. Note, though, that you have to take the CACHE parameter into account: its value (20 by default) must be less than one cycle:
SQL> create sequence seq maxvalue 3 cycle;
create sequence seq maxvalue 3 cycle
*
ERROR at line 1:
ORA-04013: number to CACHE must be less than one cycle
One cycle here holds only 3 values, so you can't have a CACHE equal to or higher than that (and the default is 20). This works:
SQL> create sequence seq maxvalue 3 cycle cache 2;
Sequence created.
When to use it?
In cases where its value is part of a composite primary key, and the other column(s) make sure that the cycled sequence values won't violate the primary key.
Another option is a staging table. For example, suppose you get up to 1 million rows per day that represent payments. If the primary key column is declared as NUMBER(6), you can't leave the sequence unconstrained (without a MAXVALUE), because you won't be able to insert a value higher than 999,999 into that column. But if you use the CYCLE parameter, everything will work OK: there won't be duplicates (as long as the staging table is emptied before the sequence wraps around) and the values will fit the column.
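A minimal sketch of that staging-table scenario (table, column and sequence names are made up here), assuming the table is emptied before the sequence wraps around:
-- Staging table whose key column can only hold six digits.
CREATE TABLE payments_stage (
    pay_id NUMBER(6)     NOT NULL,
    amount NUMBER(12, 2),
    CONSTRAINT payments_stage_pk PRIMARY KEY (pay_id)
);

-- Cycling sequence that never produces a value NUMBER(6) can't store.
-- CACHE stays well below one cycle (999,999 values).
CREATE SEQUENCE payments_stage_seq
    MAXVALUE 999999
    CYCLE
    CACHE 20;

INSERT INTO payments_stage (pay_id, amount)
VALUES (payments_stage_seq.NEXTVAL, 100.00);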

Related

Random number generation guaranteeing uniqueness over time

I created a counter that goes up from 0 to 9999 until it resets again. I use the output of this counter as a value to make unique entries. However, the application needs to find its last created number each time it is restarted. Therefore, I am looking for a method which avoids any sort of object storage and relies solely on random number generation.
Something like:
int randomTimeBasedGenerator() {
    Random r = new Random(System.currentTimeMillis());
    int num = r.nextInt() % 9999;
    return num;
}
But what guarantee do I have that this method generates unique numbers? And, if not, how long would it remain unique? Are there any study papers I can look into for this sort of scenario?
Random number generation would be an elegant solution for my situation, if I can at least guarantee it won't repeat within a couple of weeks or months. But random number generation would be useless in my case if no such guarantee exists.
You have no guarantee that the return value of a random number generator remains unique. Random number generators generate unique sequences of numbers, not unique numbers. Random numbers will always repeat themselves, sooner or later.
As suggested by @Thilo, UUIDs are unique numbers. But an even better approach in your case might be to set up a lightweight database (SQLite will do) and add a record to a table with incremental IDs. It is not possible to keep track of a process without storing values somewhere.
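If you go the lightweight-database route, a rough sketch with SQLite might look like this (the table name is made up); unlike a random number, the autoincrement key survives application restarts:
-- One row per generated number; the id never repeats within the table.
CREATE TABLE IF NOT EXISTS counter (
    id         INTEGER PRIMARY KEY AUTOINCREMENT,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
);

-- Claim the next unique number.
INSERT INTO counter DEFAULT VALUES;
SELECT last_insert_rowid();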

Problems with a primary key sequence

When adding new data through a form, my primary key sequence increases by 1.
However, if I delete a row and replace it with new data, the sequence just carries on.
So, for example, my primary keys go 1, 2, 3, 4, 5, 6, 10 because of previously deleted rows.
I hope that makes sense.
SEQUENCE values in Oracle are guaranteed to be unique, but you cannot expect the values to form a contiguous sequence without any gaps.
Even if you would never delete any rows from the table, you're likely to see gaps at some point, because sequence values are cached (pre-reserved) between different transactions.
It is a SEQUENCE of numbers; it doesn't care whether you have used the "current value" or not.
As opposed to MySQL, in Oracle the sequence is not tied to a column; it is a separate object that you can ask for a value (through your_sequence.nextval). To guarantee uniqueness, it doesn't take back values and offer them again.
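A tiny illustration (names are made up): the sequence exists on its own, and you explicitly ask it for values wherever you need them:
CREATE SEQUENCE your_sequence;

INSERT INTO your_table (id, payload)
VALUES (your_sequence.NEXTVAL, 'some data');

-- Or simply pull the next value directly:
SELECT your_sequence.NEXTVAL FROM dual;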
If you always want a dense sequence of IDs even after deletions, you would have to either:
rearrange the IDs (read: change the IDs of the rows newer than the deleted one), or
(without knowing your exact task) use the DENSE_RANK analytic function when querying your dataset, separating the real (in-table) IDs from the displayed ranking of the rows - see the sketch below.
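For instance, a query along these lines (table and column names are assumed) presents a dense, gap-free numbering without ever changing the stored IDs:
SELECT id,
       DENSE_RANK() OVER (ORDER BY id) AS display_position
FROM   your_table
ORDER  BY id;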

Sequence starts with 1 after reaching MAXVALUE even though START WITH 100 is specified

I've come across a very weird Oracle sequence behavior. I have the following Sequence:
CREATE SEQUENCE SEQ1 INCREMENT BY 10 START WITH 100 MAXVALUE 200 CYCLE NOCACHE;
Here's an excerpt from "OCA/OCP Oracle Database 11g All-in-One Exam Guide":
CYCLE Controls the behavior on reaching MAXVALUE or MINVALUE. The
default behavior is to give an error, but if CYCLE is specified the
sequence will return to its starting point and repeat.
From this I infer that after reaching the MAXVALUE of 200, I'll get 100, as the starting point is 100. But surprisingly I get one. Why is that?
Let's look at the following excerpt from the documentation:
Specify CYCLE to indicate that the sequence continues to generate
values after reaching either its maximum or minimum value. After an
ascending sequence reaches its maximum value, it generates its minimum
value. After a descending sequence reaches its minimum, it generates
its maximum value.
It means that the START WITH value is not enough in your case; both MINVALUE and MAXVALUE should be set explicitly. Without a MINVALUE, the cycle will start from 1 (the default minimum value of an ascending sequence).
When your sequence cycles, it starts again at the MINVALUE of the sequence. That defaults to 1 if you don't specify a value. If you wanted the sequence to start again at 100, you'd need to specify a MINVALUE of 100.
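For example, redefining the sequence from the question with an explicit MINVALUE (a sketch; NOCACHE is kept because the cache would otherwise have to be smaller than one cycle):
CREATE SEQUENCE SEQ1
    INCREMENT BY 10
    START WITH 100
    MINVALUE 100
    MAXVALUE 200
    CYCLE
    NOCACHE;
-- NEXTVAL now yields 100, 110, ..., 200 and then wraps to 100 instead of 1.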
I was reading the Oracle documentation on this, and it says the sequence "will start with" the START WITH value - in their example, 1000:
Creating a Sequence: Example
The following statement creates the sequence customers_seq in the sample schema oe. This sequence could be used to provide customer ID numbers when rows are added to the customers table.
CREATE SEQUENCE customers_seq
START WITH 1000
INCREMENT BY 1
NOCACHE
NOCYCLE;
The first reference to customers_seq.nextval returns 1000. The second returns 1001. Each subsequent reference will return a value 1 greater than the previous reference.

postgres 9.0.4 sequence skipping numbers

We are using Hibernate, JPA and Spring, and our DB is Postgres 9. We are using a sequence to auto-generate the primary key. But what we have noticed is that it skips 20 numbers when new records are inserted into those tables, even though our sequence has INCREMENT BY 1 - so why is Postgres incrementing the next value by 20? We do use a cache value of 20.
That's normal. You can tell Hibernate not to cache sequence values - at a performance cost to inserts - but this still doesn't mean you won't have sequence gaps.
I wrote more about this in an older answer - here.
Sequences have gaps. That's their nature. If they couldn't have gaps, you could only have one transaction inserting at a time.
See:
CREATE SEQUENCE
Sequence manipulation functions
for details.
If you expect gapless sequences, you need to understand that you'll have to do all your inserts serially, with only one transaction able to do work at a time. To learn more, search for "postgresql gapless sequence". Relying on gapless sequences in the DB is usually a bad idea; instead, have your application construct the user-visible values when it fetches rows, using the row_number() window function or similar.
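For example, in PostgreSQL the gap-free, user-visible numbering can be computed at query time (table and column names assumed) rather than stored:
SELECT id,
       row_number() OVER (ORDER BY id) AS visible_number
FROM   payments
ORDER  BY id;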
Related:
Re-using deleted IDs

Once a HiLo is in use, what happens if you change the capacity (maximum Lo)?

If I start using a HiLo generator to assign ID's for a table, and then decide to increase or decrease the capacity (i.e. the maximum 'lo' value), will this cause collisions with the already-assigned ID's?
I'm just wondering if I need to put a big red flag around the number saying 'Don't ever change this!'
Note - not NHibernate specific, I'm just curious about the HiLo algorithm in general.
HiLo algorithms in general basically map two integers to one integer ID, and they guarantee that the pair of numbers will be unique per database. Typically, the next step is to guarantee that a unique pair of numbers maps to a unique integer ID.
A nice explanation of how HiLo conceptually works is given in this previous SO answer
Changing the max_lo will preserve the property that your pair of numbers will be unique. However, will it make sure that the mapped ID is unique and collision-free?
Let's look at Hibernate's implementation of HiLo. The algorithm it appears to use (from what I've gathered - I might be off on a technicality) is:
h = high sequence (starting at 0)
l_size = size of low block
l = low sequence (starting at 1)
ID = h*l_size + l
So, if your low block is, say, 100, your reserved ID blocks would go 1-100, 101-200, 201-300, 301-400...
Your high sequence is now 3. Now what would happen if you suddenly changed your l_size to 10? For your next block, your high is incremented, and you'd get 4*10 + 1 = 41.
Oops. This new value definitely falls within the "reserved block" of 1-100. Someone with a high sequence of 0 would think, "Well, I have the range 1-100 reserved just for me, so I'll just put down one at 41, because I know it's safe."
There is definitely a very, very high chance of collision when lowering your l_max.
What about the opposite case, raising it?
Back to our example, let's raise our l_size to 500, turning the next key into 4*500 + 1 = 2001 and reserving the range 2001-2500.
It looks like collision will be avoided, in this particular implementation of HiLo, when raising your l_max.
Of course, you should do some own tests on your own to make sure that this is the actual implementation, or close to it. One way would be to set l_max to 100 and find the first few keys, then set it to 500 and find the next. If there is a huge jump like mentioned here, you might be safe.
However, I am not by any means suggesting that it is best practice to raise your l_max on an existing database.
Use your own discretion; the HiLo algorithm isn't exactly one made with a varying l_max in mind, and your results may in the end be unpredictable depending on your exact implementation. Maybe someone who has had experience with raising their l_max and running into trouble can prove this account correct.
So in conclusion, even though, in theory, Hibernate's HiLo implementation will most likely avoid collisions when l_max is raised, it probably still isn't good practice. You should code as if l_max were not going to change over time.
But if you're feeling lucky...
See the Linear Chunk table allocator -- this is logically a simpler & more correct approach to the same problem.
What's the Hi/Lo algorithm?
By allocating ranges from the number space & representing the NEXT directly, rather than complicating the logic with high words or multiplied numbers, you can directly see what keys are going to be generated.
Essentially, "Linear Chunk allocator" uses addition rather than multiplication. If the NEXT is 1000 & we've configured range-size of 20, NEXT will advance to 1020 and we'll hold keys 1000-1019 for allocation.
The range size can be tuned or reconfigured at any time, without loss of integrity. There is a direct relationship between the NEXT field of the allocator, the generated keys & MAX(ID) existing in the table.
(By comparison, "Hi-Lo" uses multiplication. If the next is 50 & the multiplier is 20, then you're allocating keys around 1000-1019. There is no direct correlation between NEXT, the generated keys & MAX(ID) in the table; it is difficult to adjust NEXT safely, and the multiplier can't be changed without disturbing the current allocation point.)
With "Linear Chunk", you can configure how large each range/chunk is -- a size of 1 is equivalent to the traditional table-based "single allocator" & hits the database to generate each key, a size of 10 is 10x faster as it allocates a range of 10 at once, and a size of 50 or 100 is faster still.
A size of 65536 generates ugly-looking keys, wastes vast numbers of keys on server restart, and is equivalent to Scott Ambler's original HI-LO algorithm.
In short, Hi-Lo is an erroneously complex & flawed approach to what should have been conceptually trivially simple -- allocating ranges along a number line.
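To make the comparison concrete, here is a rough sketch of such a linear chunk allocator as a plain table plus one short transaction (PostgreSQL-flavoured SQL; all names are invented, not taken from the linked article):
-- One row per key space; next_value is the first unallocated key.
CREATE TABLE key_allocator (
    entity_name VARCHAR(64) PRIMARY KEY,
    next_value  BIGINT NOT NULL
);

-- Reserve a chunk of 20 keys in one short transaction.
BEGIN;
SELECT next_value
FROM   key_allocator
WHERE  entity_name = 'PAYMENT'
FOR UPDATE;                          -- say this returns 1000

UPDATE key_allocator
SET    next_value = next_value + 20  -- the chunk / range size
WHERE  entity_name = 'PAYMENT';
COMMIT;
-- The application may now hand out keys 1000..1019 from memory,
-- and MAX(id) in the target table never overtakes next_value.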
I tried to unearth the behaviour of the HiLo algorithm through a simple hello-world-ish Hibernate application.
I tried a hibernate example with
<generator class="hilo">
<param name="table">HILO_TABLE</param>
<param name="column">TEST_HILO</param>
<param name="max_lo">40</param>
</generator>
A table named "HILO_TABLE" was created with a single column "TEST_HILO".
Initially I set the value of the TEST_HILO column to 8:
update HILO_TABLE set TEST_HILO=8;
I observed that the pattern used to create IDs is
hivalue * lowvalue + hivalue
hivalue is column value in DB (i.e. select TEST_HILO from HILO_TABLE )
lowvalue is from config xml (40 )
So in this case IDs started from 8*40 + 8 = 328.
In my Hibernate example I added 200 rows in one session, so rows were created with IDs 328 to 527.
And in the DB the hivalue was incremented to 13.
The increment logic seems to be:
new hivalue in DB = initial value in DB + rows_inserted/lowvalue
= 8 + 200/40 = 8 + 5 = 13
Now if I run the same Hibernate program to insert rows, the IDs should start from
13*40 + 13 = 533
When I ran the program, this was confirmed.
Just from experience I'd say: yes, decreasing will cause collisions. When you have a lower max_lo, you get lower numbers, independent of the high value in the database (which is handled the same way, e.g. incremented with each session factory instance in the case of NHibernate).
There is a chance that increasing will not cause collisions, but you either need to try it or ask someone who knows better than I do to be sure.
Old question, I know, but worth answering with a 'yes, you can'.
You can increase or decrease your next_hi at any point, as long as you recompute your hibernate_unique_key table based on the current Id values in your tables.
In our case, we have an Id-per-entity hibernate_unique_key table with two columns:
next_hi
EntityName.
The next_hi for any given entity is calculated as
(SELECT MAX(Id) FROM TableName) / (@max_lo + 1) + 1
The script below runs through every table with an Id column and fills in our next_hi values:
DECLARE @scripts TABLE(Script VARCHAR(MAX))
DECLARE @max_lo VARCHAR(MAX) = '100';

-- Build one INSERT statement per user table that has an Id column
-- and no entry in hibernate_unique_key yet.
INSERT INTO @scripts
SELECT '
INSERT INTO hibernate_unique_key (next_hi, EntityName)
SELECT
(SELECT ISNULL(Max(Id), 0) FROM ' + name + ')/(' + @max_lo + ' + 1) + 1, ''' + name + '''
'
FROM sys.tables WHERE type_desc = 'USER_TABLE'
AND COL_LENGTH(name, 'Id') IS NOT NULL
AND NOT EXISTS (SELECT next_hi FROM hibernate_unique_key k WHERE name = k.EntityName)

-- Execute each generated statement.
DECLARE curs CURSOR FOR SELECT * FROM @scripts
DECLARE @script VARCHAR(MAX)
OPEN curs
FETCH NEXT FROM curs INTO @script
WHILE @@FETCH_STATUS = 0
BEGIN
    --PRINT @script
    EXEC(@script)
    FETCH NEXT FROM curs INTO @script
END
CLOSE curs
DEALLOCATE curs
