Delphi performance: reading all values of a field in a dataset

We're trying to find some performance fixes for reading from a TADOQuery. Currently we loop through the records with while not Q.Eof do begin ... Q.Next, and for each record we read the ID and Value fields and add them to a combobox list.
Is there a way to convert all values of a specified field into a list in one shot, rather than looping through the dataset? It would be really handy if I could do something like...
TStrings(MyList).Assign(Q.ValuesOfField['Val']);
I know that's not a real command, but that's the concept I'm looking for. Looking for a fast solution - this is to fix a really urgent performance issue.

Looking at your comment, here are a few suggestions:
There are a few things that are likely to be a bottleneck in this situation. The first is looking up the fields repeatedly. If you're calling FieldByName or FindField inside your loop, you're wasting CPU time recomputing a value that's not going to change. Call FieldByName once for each field you're reading from and assign the results to local variables instead.
When retrieving values from the fields, call AsString or AsInteger, or other methods that return the data type you're looking for. If you're reading from the TField.Value property, you're wasting time on variant conversions.
If you're adding a bunch of items to a Delphi combo box, you're probably dealing with a string list in the form of the Items property. Set the list's Capacity property and make sure to call BeginUpdate before you start updating, and call EndUpdate at the end. That can enable some internal optimizations that make loading large amounts of data faster.
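Putting those suggestions together, a minimal sketch might look like this (Q and the field name 'Val' come from the question; the Capacity estimate is just a placeholder):
var
  ValField: TField;
begin
  ValField := Q.FieldByName('Val');     // look the field up once, outside the loop
  ComboBox1.Items.BeginUpdate;          // suppress per-item updates while adding
  try
    ComboBox1.Items.Capacity := 25000;  // rough estimate of the row count
    while not Q.Eof do
    begin
      ComboBox1.Items.Add(ValField.AsString);  // typed accessor, no Variant conversion
      Q.Next;
    end;
  finally
    ComboBox1.Items.EndUpdate;
  end;
end;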
Depending on the combo box you're using, it could have some trouble dealing with large numbers of items in its internal list. See if it has a "virtual" mode, where instead of you loading everything up-front, you simply tell it how many items it needs, and when it gets dropped down, it calls an event handler for each item that's supposed to be shown on screen, and you give it the right text to display. This can really speed up certain UI controls.
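The stock VCL TComboBox has no virtual mode, but TListBox does (Style lbVirtual plus the OnData event), and it illustrates the pattern described above; TotalRowCount and GetValueForRow are placeholders:
procedure TMyForm.FormCreate(Sender: TObject);
begin
  ListBox1.Style := lbVirtual;      // the control stores no strings of its own
  ListBox1.Count := TotalRowCount;  // just tell it how many items exist
end;

procedure TMyForm.ListBox1Data(Control: TWinControl; Index: Integer; var Data: string);
begin
  Data := GetValueForRow(Index);    // fetch or compute the text on demand
end;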
Also, you should make sure your database query itself is fast, of course, but SQL optimization is beyond the scope of this question.
And finally, Mikael Eriksson's comment is definitely worthy of attention!

You can use GetRows. You specify the column(s) you are interested in and it returns an array of values. In my tests, the time it takes to add 22,000 rows to a combo box goes from 7 seconds with the while not ADOQuery1.Eof ... loop to 1.3 seconds with GetRows.
Sample code:
var
  V: Variant;
  I: Integer;
begin
  // adGetRowsRest is declared in the ADOInt unit
  V := ADOQuery1.Recordset.GetRows(adGetRowsRest, EmptyParam, 'ColumnName');
  for I := VarArrayLowBound(V, 2) to VarArrayHighBound(V, 2) do
    ComboBox1.Items.Add(V[0, I]);
end;
If you want more than one column in the array, pass a variant array as the third parameter:
V := ADOQuery1.Recordset.GetRows(adGetRowsRest, EmptyParam,
  VarArrayOf(['ColumnName1', 'ColumnName2']));

There are some great performance suggestions from other folks that you should implement on the Delphi side. I will focus on ADO.
You haven't specified what the back end database server is, so I can't be too specific, but there are some things that you should know about ADO.
ADO RecordSet
In ADO, there is a RecordSet object. That RecordSet object is basically your ResultSet in this case. The interesting thing about iterating through the RecordSet is that it's still coupled with the provider.
Cursor Type
If your cursor type is Dynamic or Delphi's default Keyset, then each time the RecordSet requests a new row from the provider, the provider will check to see if there were any changes before it returns the record.
So, for the TADOQuery where all you're doing is reading the result set to populate the combobox, and it's not likely to have changed, you should use the Static cursor type to avoid checking for updated records.
In case you don't know what a cursor is, when you call a function like Next, you are moving the cursor, which represents the current record.
Not every provider supports all of the cursor types.
CacheSize
Delphi's and ADO's default cache size for a RecordSet is 1. That's 1 record. This works in combination with the cursor type. The cachesize tells the RecordSet how many records to fetch and store at a time.
When you issue a command like Next (really MoveNext in ADO) with a cache size of 1, the RecordSet only has the current record in memory, so when it fetches that next record, it must request it from the provider again. If the cursor is not Static, that gives the provider the opportunity to get the latest data before returning the next record. So, a size of 1 makes sense for Keyset or Dynamic, because you want the provider to be able to get you the updated data.
Obviously, with a value of 1, there's communication between the provider and the RecordSet each time you move the cursor. That's overhead we don't want when the cursor type is Static, so increasing your cache size will reduce the number of round trips between the RecordSet and the provider. It also increases your memory requirements, but it should be faster.
Also note that with a cache size greater than 1 for Keyset cursors, if the record that you want is in the cache, it won't request it from the provider again, which means that you won't see the updates.
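As a rough sketch, both settings are exposed directly on TADOQuery (the CursorType and CacheSize properties from the ADODB unit) and must be set before the query is opened; the value 500 is only an illustrative figure:
ADOQuery1.Close;
ADOQuery1.CursorType := ctStatic;  // read-only snapshot: no per-row check for updates
ADOQuery1.CacheSize := 500;        // fetch 500 records per round trip instead of 1
ADOQuery1.Open;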

You can't avoid looping. "Very long time" is relative, but if retrieving 20,000 records takes too long, there's something wrong.
Check your query; perhaps the SQL can be improved (missing index?)
Show the code of your loop where you add items to the combobox. Maybe it can be optimized. (calling FieldByName repeatedly in a loop? using variants to retrieve field values?)
Make sure to call ComboBox.Items.BeginUpdate; before the loop and ComboBox.Items.EndUpdate after.
Use a profiler to find the bottleneck.

You could try pushing all data into a ClientDataSet and iterating this, but I think copying the data to the CDS does exactly what you are currently doing - looping and assigning.
What I did once was concatenating values on the server, transmitting it in one bulk to the client and splitting it again. This actually made the system faster because it reduced the communication necessary between client and server.
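As an illustration of the splitting step, assuming the server returns the values in one delimited string (the delimiter and the BulkValueString variable are placeholders):
var
  List: TStringList;
begin
  List := TStringList.Create;
  try
    List.StrictDelimiter := True;           // don't also split on spaces inside values
    List.Delimiter := '|';                  // whatever delimiter the server uses
    List.DelimitedText := BulkValueString;  // the concatenated values sent by the server
    ComboBox1.Items.Assign(List);
  finally
    List.Free;
  end;
end;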
You have to look carefully at where the performance bottleneck is. It could just as well be the combobox, if you don't block GUI updates while adding values (especially when we are talking about 20K values - that's a lot to scroll).
Edit: If you cannot change the communication, you could make it asynchronous. Request the data in a thread, keep the GUI responsive, and fill the combobox when the data arrives. The user sees an empty combobox for 5 seconds, but at least he can do something else in the meantime. It doesn't change the amount of time needed, though.

Is your query also tied to some data-aware controls or a TDataSource? If so, do your looping inside a DisableControls/EnableControls block so your visual controls don't update each time you move to a new record.
Is the list of items fairly static? If so, consider creating a non-visual instance of a combobox at application startup, maybe inside a separate thread, and then assign your non-visual combobox to the visual combobox when your form is created.
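A minimal sketch of that idea, using a plain TStringList filled at startup instead of a non-visual combo box (PreloadedItems and the control names are placeholders; if the list is filled by an ADO query inside a worker thread, that thread needs its own CoInitialize call):
var
  PreloadedItems: TStringList;  // filled once at application startup, possibly in a thread

procedure TMyForm.FormCreate(Sender: TObject);
begin
  ComboBox1.Items.BeginUpdate;
  try
    ComboBox1.Items.Assign(PreloadedItems);  // one bulk copy instead of per-record adds
  finally
    ComboBox1.Items.EndUpdate;
  end;
end;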

Try using DisableControls and EnableControls to speed up linear processing of a dataset:
var
  SL: TStringList;
  Fld: TField;
begin
  SL := TStringList.Create;
  AdoQuery1.DisableControls;
  Fld := AdoQuery1.FieldByName('ListFieldName');
  try
    SL.Sorted := False;    // Sort in the query itself first
    SL.Capacity := 25000;  // Some amount estimate + fudge factor
    SL.BeginUpdate;
    try
      while not AdoQuery1.Eof do
      begin
        SL.Append(Fld.AsString);
        AdoQuery1.Next;
      end;
    finally
      SL.EndUpdate;
    end;
    YourComboBox.Items.AddStrings(SL);
  finally
    SL.Free;
    AdoQuery1.EnableControls;
  end;
end;

Not sure if this will help, but my suggestion would be not to add directly to the ComboBox. Load to a local TStringList instead, make that as fast as possible, and then use TComboBox.Items.AddStrings to add them all at once:
var
  SL: TStringList;
  Fld: TField;
begin
  SL := TStringList.Create;
  Fld := AdoQuery1.FieldByName('ListFieldName');
  try
    SL.Sorted := False;    // Sort in the query itself first
    SL.Capacity := 25000;  // Some amount estimate + fudge factor
    SL.BeginUpdate;
    try
      while not AdoQuery1.Eof do
      begin
        SL.Append(Fld.AsString);
        AdoQuery1.Next;
      end;
    finally
      SL.EndUpdate;
    end;
    YourComboBox.Items.BeginUpdate;
    try
      YourComboBox.Items.AddStrings(SL);
    finally
      YourComboBox.Items.EndUpdate;
    end;
  finally
    SL.Free;
  end;
end;

Related

How to take data in portions from Oracle using Mybatis?

In my application I am making a query to Oracle and getting the data this way:
<select id="getAll" resultType="com.mappers.MyOracleMapper">
SELECT * FROM "OracleTable"
</select>
I get all the data, but the problem is that there is a lot of it, and processing everything at once takes too long: the response from the database arrives in 3-4 minutes, which is not convenient.
How can I receive rows in portions without using an id field (it does not exist, I do not know why)? That is, take the first portion of rows, for example the first 50, process them, and then take the next portion. It would also be desirable to put a variable in the properties file that controls the number of rows per portion.
I can't figure out how to do this in MyBatis. This is new to me. Thanks in advance.
Update: there is such a field and it is unique. However,
OFFSET 10 ROWS FETCH NEXT 10 ROWS ONLY
doesn't work, because the Oracle version is earlier than 12c.
If you want to read millions of rows that's going to take time. It's normal to expect a few minutes to read and receive all the data over the wire.
Now, you have two options:
Use a Cursor
In MyBatis you can read the result of the query using the buffering a cursor gives you. The cursor reads a few hundred rows at a time and your app reads them one by one. Your app doesn't notice that behind the scenes there is buffering. Pretty good. For example, you can do:
Cursor<Client> clients = this.sqlSession.selectCursor("getAll");
for (Client c : clients) {
    // process one client
}
Consider that cursors remain open until the end of the transaction. If you close the transaction (or exit the method marked as @Transactional) the cursor won't be usable anymore.
Use Manual Pagination
This solution can work well for the first pages of the result set, but it becomes increasingly inefficient and slow the further you advance into the result set. Use it only as a last resort.
The only case where this strategy can be efficient is when you have the chance of implementing "key set pagination". I assume it's not the case here.
You can modify your query to perform explicit pagination. For example, you can do:
<select id="getPage" resultType="com.mappers.MyOracleMapper">
select * from (
SELECT rownum rnum, x.*
FROM OracleTable
WHERE rownum <= #{endingRow}
ORDER BY id
) x
where rnum >= #{startingRow}
</select>
You'll need to provide the extra parameters startingRow and endingRow.
NOTE: It's imperative that you include an ORDER BY clause; otherwise the pagination logic is meaningless. Choose any ordering you want, preferably something that is backed by an existing index.

Smart pagination algorithm that works with local data cache

This is a problem I have been thinking about for a long time but I haven't written any code yet because I first want to solve some general problems I am struggling with. This is the main one.
Background
A single page web application makes requests for data to some remote API (which is under our control). It then stores this data in a local cache and serves pages from there. Ideally, the app remains fully functional when offline, including the ability to create new objects.
Constraints
Assume a server-side database of products containing about 50,000 products (50 MB)
Assume no db type, we interact with it via REST/GraphQL interface
Assume a single product record is < 1kB
Assume a max payload for a resultset of 256kB
Assume max 5MB storage on the client
Assume search result sets ranging between 0 ... 5000 items per search
Challenge
The challenge is to define a stateless but network-efficient way to fetch pages from a result set, so that it is deterministic which results we will get.
Example
In traditional paging, when getting the next 100 results for some query using this url:
https://example.com/products?category=shoes&firstResult=100&pageSize=100
the search result may look like this:
{
  "totalResults": 2458,
  "firstResult": 100,
  "pageSize": 100,
  "results": [
    {"some": "item"},
    {"some": "other item"},
    // 98 more ...
  ]
}
The problem with this is that there is no way, based on this information, to get exactly the objects that are on a certain page. Because by the time we request the next page, the result set may have changed (due to changes in the DB), influencing which items are part of the result set. Even a small change can have a big impact: one item removed from the DB, that happened to be on page 0 of the result set, will change what results we will get when requesting all subsequent pages.
Goal
I am looking for a mechanism to make the definition of the result set independent of future database changes, so if someone was looking for shoes and got a result set of 2458 items, he could actually fetch all pages of that result set reliably even if it got influenced by later changes in the DB (I plan to not really delete items, but set a removed flag on them, for this purpose)
Ideas so far
I have seen a solution where the result set included a "pages" property, which was an array with the first and last id of the items in that page. Assuming your IDs keep going up in number and you don't really delete items from the DB ever, the number of items between two IDs is constant. Meaning the app could get all items between those two IDs and always get the exact same items back. The problem with this solution is that it only works if the list is sorted in ID order... I need custom sorting options.
The only way I have come up with for now is to just send a list of all IDs in the result set... That way pages can be fetched by doing a SELECT * FROM products WHERE id IN (3,4,6,9,...)... but this feels rather inelegant...
Anyway, I hope this is not too broad or theoretical. I have a web-based DB, just no good idea of how to do paging with it. I am looking for answers that point me in a direction to learn, not full solutions.
Versioning the DB is the answer for result-set consistency.
Each record has a primary id, a modification counter (version number) and a timestamp of modification/creation. Instead of modifying record r, you add a new record with the same id, version number + 1 and sysdate as the modification time.
In the fetch response you add the DB request_time (do not use a client timestamp, due to the possible time difference between client and server). The first page is served normally, but you return sysdate as request_time. Subsequent pages are served differently: you add a condition like modification_time <= request_time for each versioned table.
You can cache the result set of IDs on the server side when a query arrives for the first time and return a unique ID to the frontend. This unique ID corresponds to the result set for that query. The frontend can then request something like next_page with the unique ID it got the first time it made the query. You should still go ahead with your approach of changing the DELETE operation to a removed flag, because it makes sure that none of the entries in the result set gets deleted. You can discard the cached result set when the frontend reaches the end of it, or you can set a time limit on the lifetime of the cache entry.

Looking up a "key" in an 8GB+ text file

I have some 'small' text files that contain about 500,000 entries/rows. Each row also has a 'key' column. I need to find these keys in a big file (8 GB, at least 219 million entries). When found, I need to append the 'Value' from the big file to the small file, at the end of the row, as a new column.
The big file that is like this:
KEY VALUE
"WP_000000298.1" "abc"
"WP_000000304.1" "xyz"
"WP_000000307.1" "random"
"WP_000000307.1" "text"
"WP_000000308.1" "stuff"
"WP_000000400.1" "stuffy"
Simply put, I need to look up 'key' in the big file.
Obviously I need to load the whole table into RAM (but this is not a problem, I have 32 GB available). The big file seems to be already sorted. I have to check this.
The problem is that I cannot do a fast lookup using something like TDictionary because as you can see, the key is not unique.
Note: This is probably a one-time computation. I will use the program once, then throw it away. So it doesn't have to be the BEST algorithm (difficult to implement). It just needs to finish in decent time (like 1-2 days). PS: I prefer doing this without a DB.
I was thinking of this possible solution: TList.BinarySearch. But it seems that TList is limited to only 134,217,727 (MaxInt div 16) items, so TList won't work.
Conclusion:
I choose Arnaud Bouchez solution. His TDynArray is impressive! I totally recommend it if you need to process large files.
AlekseyKharlanov provided another nice solution, but TDynArray is already implemented.
Instead of re-inventing the wheel of binary search or B-Tree, try with an existing implementation.
Feed the content into a SQLite3 in-memory DB (with the proper index, and with a transaction every 10,000 INSERTs) and you are done. Ensure you target Win64 to have enough space in RAM. You may even use file-based storage: a bit slower to create, but with indexes, queries by Key will be instant. If you do not have SQLite3 support in your edition of Delphi (via the latest FireDAC), you may use our open-source unit and its associated documentation.
Using SQLite3 will be definitely faster, and use fewer resources, than a regular client-server SQL database - BTW the "free" edition of MS SQL is not able to handle as much data as you need, AFAIR.
Update: I've written some sample code to illustrate how to use SQLite3, with our ORM layer, for your problem - see this source code file in github.
Here is some benchmark info:
with index defined before insertion:
INSERT 1000000 rows in 6.71s
SELECT 1000000 rows per Key index in 1.15s
with index created after insertion:
INSERT 1000000 rows in 2.91s
CREATE INDEX 1000000 in 1.28s
SELECT 1000000 rows per Key index in 1.15s
without the index:
INSERT 1000000 rows in 2.94s
SELECT 1000000 rows per Key index in 129.27s
So for a huge data set, an index is worth it, and creating the index after the data insertion reduces the resources used! Even if the insertion is slower, the gain of an index is huge when selecting per key. You may try to do the same with MS SQL, or using another ORM, and I guess you will cry. ;)
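For orientation only, here is a minimal sketch of the same idea using FireDAC's SQLite driver rather than the mORMot code linked above; the kv table, the KeyList/ValueList string lists holding the parsed big file, and FDConnection1 (a TFDConnection on a data module) are placeholders:
// requires FireDAC.Comp.Client and FireDAC.Phys.SQLite in the uses clause
var
  Qry: TFDQuery;
  I: Integer;
begin
  FDConnection1.DriverName := 'SQLite';
  FDConnection1.Params.Database := ':memory:';  // in-memory database
  FDConnection1.Open;
  FDConnection1.ExecSQL('CREATE TABLE kv (k TEXT, v TEXT)');
  Qry := TFDQuery.Create(nil);
  try
    Qry.Connection := FDConnection1;
    Qry.SQL.Text := 'INSERT INTO kv (k, v) VALUES (:k, :v)';
    FDConnection1.StartTransaction;
    for I := 0 to KeyList.Count - 1 do
    begin
      Qry.ParamByName('k').AsString := KeyList[I];
      Qry.ParamByName('v').AsString := ValueList[I];
      Qry.ExecSQL;
      if (I + 1) mod 10000 = 0 then  // a transaction every 10,000 INSERTs, as suggested
      begin
        FDConnection1.Commit;
        FDConnection1.StartTransaction;
      end;
    end;
    FDConnection1.Commit;
    // create the index after the bulk insert, as the benchmarks above recommend
    FDConnection1.ExecSQL('CREATE INDEX idx_kv_k ON kv (k)');
  finally
    Qry.Free;
  end;
end;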
Another answer, since it is a different solution.
Instead of using a SQLite3 database, I used our TDynArray wrapper, and its sorting and binary search methods.
uses
  SynCommons; // mORMot: RawUTF8, FormatUTF8, TDynArray

type
  TEntry = record
    Key: RawUTF8;
    Value: RawUTF8;
  end;
  TEntryDynArray = array of TEntry;

const
  // used to create some fake data, with some multiple occurrences of Key
  COUNT = 1000000;   // million rows insertion!
  UNIQUE_KEY = 1024; // should be a power of two

procedure Process;
var
  entry: TEntryDynArray;
  entrycount: integer;
  entries: TDynArray;

  procedure DoInsert;
  var
    i: integer;
    rec: TEntry;
  begin
    for i := 0 to COUNT-1 do begin
      // here we fill with some data
      rec.Key := FormatUTF8('KEY%', [i and pred(UNIQUE_KEY)]);
      rec.Value := FormatUTF8('VALUE%', [i]);
      entries.Add(rec);
    end;
  end;

  procedure DoSelect;
  var
    i, j, first, last, total: integer;
    key: RawUTF8;
  begin
    total := 0;
    for i := 0 to pred(UNIQUE_KEY) do begin
      key := FormatUTF8('KEY%', [i]);
      assert(entries.FindAllSorted(key, first, last));
      for j := first to last do
        assert(entry[j].Key = key);
      inc(total, last-first+1);
    end;
    assert(total = COUNT);
  end;

begin
  // main block (not in the original snippet): wrap the dynamic array so it can be
  // sorted and searched on its Key: RawUTF8 field
  entries.InitSpecific(TypeInfo(TEntryDynArray), entry, djRawUTF8, @entrycount);
  DoInsert;
  entries.Sort; // FindAllSorted requires a sorted array
  DoSelect;
end;
Here are the timing results:
one million rows benchmark:
INSERT 1000000 rows in 215.49ms
SORT ARRAY 1000000 in 192.64ms
SELECT 1000000 rows per Key index in 26.15ms
ten million rows benchmark:
INSERT 10000000 rows in 2.10s
SORT ARRAY 10000000 in 3.06s
SELECT 10000000 rows per Key index in 357.72ms
It is more than 10 times faster than the SQLite3 in-memory solution. The 10 million rows stay in the memory of the Win32 process with no problem.
It is also a good sample of how the TDynArray wrapper works in practice, and how its SSE4.2-optimized string comparison functions give good results.
Full source code is available in our github repository.
Edit: with 100,000,000 rows (100 million rows), under Win64, using more than 10 GB of RAM during the process:
INSERT 100000000 rows in 27.36s
SORT ARRAY 100000000 in 43.14s
SELECT 100000000 rows per Key index in 4.14s
Since this is a one-time task, the fastest way is to load the whole file into memory, scan the memory line by line, parse the key, compare it with the search key(s) and print (save) the found positions.
UPD: If the source file is sorted, and assuming you have about 411,000 keys to look up, you can use this trick: sort your search keys in the same order as the source file. Read the first key from both lists and compare them. If they differ, read the next key from the source until they are equal. Save the position; if the next key in the source is also equal, save it too, etc. If the next key differs, read the next key from the search-keys list. Continue until EOF.
Use memory-mapped files. Just think your file is already read into the memory in its entirety and do that very binary search in memory that you wanted. Let Windows care about reading the portions of file when you do your in-memory search.
https://en.wikipedia.org/wiki/Memory-mapped_file
https://msdn.microsoft.com/en-us/library/ms810613.aspx
https://stackoverflow.com/a/9609448/976391
https://stackoverflow.com/a/726527/976391
http://msdn.microsoft.com/en-us/library/aa366761%28VS.85.aspx
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366537.aspx
You may take any of those sources as a start, just do not forget to update them for Win64:
http://torry.net/quicksearchd.php?String=memory+mapped+files&Title=No
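A minimal Win64 sketch of the mapping step (plain Windows API calls from the Winapi.Windows unit; the file path is a placeholder, error handling is trimmed, and the actual search over the mapped buffer is left out):
var
  FileHandle, MapHandle: THandle;
  Size: Int64;
  Data: PAnsiChar;
begin
  FileHandle := CreateFile('C:\data\bigfile.txt', GENERIC_READ, FILE_SHARE_READ,
    nil, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
  GetFileSizeEx(FileHandle, Size);
  MapHandle := CreateFileMapping(FileHandle, nil, PAGE_READONLY, 0, 0, nil);
  Data := MapViewOfFile(MapHandle, FILE_MAP_READ, 0, 0, 0); // map the whole file
  try
    // Data now points at the file contents; Windows pages them in on demand.
    // Scan or binary search Data[0 .. Size - 1] here.
  finally
    UnmapViewOfFile(Data);
    CloseHandle(MapHandle);
    CloseHandle(FileHandle);
  end;
end;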
A method that needs the file to be sorted, but avoids data structures entirely:
You only care about one line, so why even read the bulk of the file?
Open the file and move the "get pointer" (apologies for talking C) halfway through the file. You'll have to figure out if you're at a number or a word, but a number should be close by. Once you know the closest number, you know if it's higher or lower than what you want, and continue with the binary search.
Idea based on Aleksey Kharlanov's answer. I accepted his answer.
I only copy his idea here because he did not elaborate it (no pseudo-code or deeper analysis of the algorithm). I want to confirm it works before implementing it.
We sort both files (once).
We load the Big file into memory (once).
We read the Small file line by line from disk (once).
Code:
In the code below, sKey is the current key in Small file. bKey is the current key in the Big file:
LastPos := 0
for sKey in SmallFile do
  for CurPos := LastPos to BigFile.Count do
    if sKey = bKey then
    begin
      SearchNext   // search (down) the next entries for possible duplicate keys
      LastPos := CurPos
    end
    else
      if sKey < bKey then
        break
It works because I know the last position (in Big file) of the last key. The next key can only be somewhere UNDER the last position; ON AVERAGE it should be in the next 440 entries. However, I don't even have to always read 440 entries below LastPos because if my sKey does not exist in the big file, it will be smaller than the bKey so I quickly break the inner loop and move on.
Thoughts?
If I were doing this as a one-time thing, I'd create a set with all the keys I need to look up. Then read the file line-by line, check to see if the key exists in the set, and output the value if so.
Briefly, the algorithm is:
mySet = dictionary of keys to look up
for each line in the file
    key = parse key from line
    if key in mySet
        output key and value
end for
Since Delphi doesn't have a generic set, I'd use TDictionary and ignore the value.
The dictionary lookup is O(1), so should be very fast. Your limiting factor will be file I/O time.
I figure that'd take about 10 minutes to code up, and less than 10 minutes to run.
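A rough Delphi sketch of that algorithm, using only the RTL; the file names, the assumption that the key is the first whitespace-separated token (as in the sample data above), and the quoting of the keys are all things to adjust to the real files:
uses
  System.Classes, System.SysUtils, System.Generics.Collections;

procedure LookupKeys(const SmallKeys: TStrings; const BigFileName, OutFileName: string);
var
  Keys: TDictionary<string, Boolean>;  // used as a set: the value is ignored
  Reader: TStreamReader;
  Writer: TStreamWriter;
  Line, Key: string;
begin
  Keys := TDictionary<string, Boolean>.Create;
  Reader := TStreamReader.Create(BigFileName, TEncoding.UTF8);
  Writer := TStreamWriter.Create(OutFileName, False, TEncoding.UTF8);
  try
    for Key in SmallKeys do
      Keys.AddOrSetValue(Key, True);
    while not Reader.EndOfStream do
    begin
      Line := Reader.ReadLine;
      Key := Copy(Line, 1, Pos(' ', Line) - 1);  // first token = the key
      if Keys.ContainsKey(Key) then
        Writer.WriteLine(Line);                  // output key and value
    end;
  finally
    Writer.Free;
    Reader.Free;
    Keys.Free;
  end;
end;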

Performance of accessing dataSet fields using Field-names instead of indexes

Is the performance negligible?
For example,
myQuery.FieldByName('MyField').AsString;
myQuery.Fields[0].AsString;
Cases:
Table with a decent number of fields, say > 50 fields
Accessing large resultsets, say > 100,000 rows
Is the readability benefit of field names worth the performance decrease?
Here is an interesting post by François Gaillard about FieldByName performance issues.
The performance may not be negligible, depending on how often you access the field by name. If you use it for every field and every row you may notice a performance decrease (see for example http://www.delphifeeds.com/go/s/74559). To maintain readability yet improve performance you could:
Use the ['FieldName'] or FieldByName() syntax only once, and store a reference to the field in a variable.
Use "static" persistent field declaration, right-clicking the dataset, select Field Editor and adding needed fields. It will declare the proper TField descendant, and let you assign a name.
Also the AsXXXXX calls may be slower than using a TField descendant native Value property.
I have found FieldByName to be noticeably slower.
I normally access the database through an intermediate layer that accesses entire records from the same table a lot of times. On creation of that layer I assign the index of each field to a variable. I then use the variables for later access, so the code is still readable.
ADODataSet.CommandText := 'select * from [TABLE] where 1 = 0'; //table layout
ADODataSet.Open;
ADODataSet.GetFieldNames(List);
varMyField := List.IndexOf('MyField');
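With the index cached, later reads might then look like this (a sketch; varMyField comes from the snippet above and DoSomethingWith is a placeholder):
while not ADODataSet.Eof do
begin
  DoSomethingWith(ADODataSet.Fields[varMyField].AsString);  // index lookup, no FieldByName per row
  ADODataSet.Next;
end;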

Oracle (PL/SQL): Is UPDATE RETURNING concurrent?

I'm using a table with a counter to ensure unique ids on a child element.
I know it is usually better to use a sequence, but I can't use one because I have a lot of counters (a customer can create a couple of buckets, and each of them needs its own counter; they have to start at 1 - it's a requirement, my customer needs "human readable" keys).
I'm creating records (let's call them items) that have a primary key (bucket_id, num = counter).
I need to guarantee that the bucket_id / num combination is unique (so using a sequence as the primary key won't fix my problem).
The creation of rows doesn't happen in PL/SQL, so I need to claim the number (btw: it's not against the requirements to have gaps).
My solution was:
UPDATE bucket
SET counter = counter + 1
WHERE id = param_id
RETURNING counter INTO num_forprikey;
PL/SQL returns num_forprikey so the item record can be created.
Question:
Will I always get unique num_forprikey even if the user concurrently asks for new items in a bucket?
Will I always get unique num_forprikey even if the user concurrently asks for new items in a bucket?
Yes, at least up to a point. The first user to issue that update gets a lock on the row. So no other user can successfully issue that same statement until user numero uno commits (or rolls back). So uniqueness is guaranteed.
Obviously, the cavil is regarding concurrency. Your access to the row is serialized, so there is no way for two users to get a new PRIKEY simultaneously. This is not necessarily a problem. It depends on how many users you have creating new Items, and how often they do it. One user peeling off numbers in the same session won't notice a thing.
I seem to recall this problem from many years back, working on, of all things, an INGRES database. There were no sequences in those days, so a lot of effort was put into finding the best-scaling solution for this problem by the top INGRES minds of the day. I was fortunate enough to be working alongside them, so even though my mind is pitifully smaller than any of theirs, proximity = residual effect and I retained something. This was one of the things. Let me see if I can remember.
1) for each counter you need a row in a work table.
2) each time you need a number
a) lock the row
b) update it
c) get its new value (you use returning for this which I avoid like the plague)
d) commit the update to release your lock on the row
The reason for the commit is for trying to get some kind of scalability. There will always be a limit but you do not serialize on getting a number for any period of time.
In the Oracle world we would improve the situation by using a function defined as an AUTONOMOUS_TRANSACTION in order to acquire the next number. If you think about it, this solution requires that gaps be allowed, which you said is OK. By committing the number update independently of the main transaction, you gain scalability but you introduce gapping.
You will have to accept the fact that your scalability will drop dramatically in this scenario. This is due to at least two reasons:
1) the update/select/commit sequence does its best to reduce the time during which the KEY row is locked, but it is still not zero. Under heavy load, you will serialize and eventually be limited.
2) you are committing on every key get. A commit is an expensive operation, requiring many memory and file management actions on the part of the database. This will limit you also.
In the end you are likely looking at three or more orders of magnitude drop in concurrent transaction load because you are not using sequences. I base this on my experience of the past.
But if your customer requires it, what can you do, right?
Good luck. I have not tested the code for syntax errors, I leave that to you.
create or replace function get_next_key (key_name_p in varchar2) return number is
  pragma autonomous_transaction;
  key_v number;
begin
  update key_table set key = key + 1 where key_name = key_name_p;
  select key into key_v from key_table where key_name = key_name_p;
  commit;
  return (key_v);
end;
/
show errors
You can still use sequences, just use the row_number() analytic function to please your users. I described it here in more detail: http://rwijk.blogspot.com/2008/01/sequence-within-parent.html
I'd figure out how to make sequences work. They're the only guarantee, though an exception clause could be coded: http://www.orafaq.com/forum/t/83382/0/
The benefit of sequences (and they can be created dynamically) is that you can specify NOCACHE and guarantee order.
