For some reason, the Oracle JDBC driver may move the ResultSet cursor somewhere else (not to the same row, and not to the next row) when updateRow is called (I am also inserting and deleting rows). How can I avoid this problem?
Note: The results are ordered by the primary key of the table (I've specified this in the SQL). But I'm increasingly suspecting that the "order by" clause is not working properly.
Nowhere in the documentation does it say that the cursor is moved to the next row after an updateRow operation...
Sources:
http://docs.oracle.com/cd/B28359_01/java.111/b31224/resltset.htm
http://docs.oracle.com/javase/tutorial/jdbc/basics/retrieving.html#rs_update
http://docs.oracle.com/cd/A97335_02/apps.102/a83724/resltse4.htm
http://docs.oracle.com/javase/7/docs/api/java/sql/ResultSet.html#updateRow%28%29
Forward only updatable result sets maintain a cursor which can only move in one direction (forward), and can also update rows. They have to be created with concurrency mode ResultSet.CONCUR_UPDATABLE and type ResultSet.TYPE_FORWARD_ONLY.
Note: The default type is ResultSet.TYPE_FORWARD_ONLY.
Visibility of changes
After an update or delete is made on a forward only result set, the result set's cursor is no longer on the row just updated or deleted, but immediately before the next row in the result set (it is necessary to move to the next row before any further row operations are allowed). This means that changes made by ResultSet.updateRow() and ResultSet.deleteRow() are never visible.
If a row has been inserted, i.e. using ResultSet.insertRow(), it may be visible in a forward only result set.
Conflicting operations
The current row of the result set cannot be changed by other transactions, since it will be locked with an update lock. Result sets held open after a commit have to move to the next row before allowing any operations on it.
Some conflicts may prevent the result set from doing updates/deletes:
If the current row is deleted by a statement in the same transaction, calls to ResultSet.updateRow() will cause an exception, since the cursor is no longer positioned on a valid row.
This problem was due to user error. I had another copy of the application running on a different machine, which I had forgotten about, and that was changing the database at the same time.
You are an awesome community. It is the first time I couldn't find an answer to my question, and I've had millions.
Go easy on me. I'm a super noob and I already feel bad for making you waste your time on my dumb question.
So... I'm trying to make an app with Oracle APEX. I have a form with an interactive report for table1. On the form page I have 3 processes, in this order:
Automatic Row Processing (DML) that APEX automatically made for me,
a PL/SQL process I made, and
the reset page process APEX made.
The ARP updates, creates, and deletes, and is triggered by any of the buttons (SAVE, CREATE, DELETE).
My process deletes a row in another table (table2) and runs when DELETE is clicked and ITEM1 is not null (because ITEM1 stores the PK of the row in the second table).
The last process is the usual reset page that should clear all item values when DELETE is pressed.
Firing point is by default "Processing" for all 3.
Sometimes my process fails (and returns the error I set) because of an FK constraint.
Now here is the thing: if my process fails, the other 2 seem not to be executed. Is that possible? If I set the condition (to be executed) of my process to Never, the other 2 work. What am I missing?
You aren't missing anything.
When you push a button that fires those processes, they all run in a single transaction. If any of them raises an error, all of them (executed so far) are rolled back.
If you want to continue processing regardless what your own procedure (2nd one) does (I mean: whether it succeeds or not), then handle it, somehow.
A trivial (and not the best) option is to ignore possible errors, e.g.
begin
   delete from child_table where id = :P1_ITEM1;
exception
   when others then null; --> ignore any errors
end;
A smarter way would be to intercept the errors you expect. If you know (and yes, you do) that there's a possibility the foreign key constraint will be violated, check whether any rows still reference the one you want to delete; if none do, delete it.
declare
   -- REFERENCING_TABLE and its CHILD_TABLE_ID column are placeholders for
   -- whatever table's foreign key points at CHILD_TABLE.
   l_id referencing_table.child_table_id%type;
begin
   -- If referencing row(s) with such an ID exist, L_ID will be set to that value.
   -- In that case, don't do anything.
   select r.child_table_id
     into l_id
     from referencing_table r
    where r.child_table_id = :P1_ITEM1
      and rownum = 1;
   -- The above query returned something; don't do anything
   null;
exception
   when no_data_found then
      -- The above query didn't return anything, so - delete the row
      delete from child_table where id = :P1_ITEM1;
end;
Now, that can/could/should be modified, depending on what you really have; it is just an idea of what to look at.
Yet another option is to set the foreign key constraint to ON DELETE CASCADE, which means that deleting the master record automatically deletes its detail records. That way you wouldn't have to care about such problems, and your 2nd process would be as simple as
delete from child_table where id = :P1_ITEM1;
(unless you hit another kind of error, of course).
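For illustration, recreating the constraint might look roughly like this (the constraint, table and column names are placeholders; check your actual schema first):
-- Drop the existing FK and recreate it with ON DELETE CASCADE.
alter table referencing_table drop constraint referencing_table_fk;
alter table referencing_table
  add constraint referencing_table_fk
  foreign key (child_table_id)
  references child_table (id)
  on delete cascade;
-- After that, "delete from child_table where id = :P1_ITEM1" also removes the referencing rows.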
If you want to let users decide whether they want to delete rows or not, change the button's action to "Redirect to URL" (currently it is "Submit", I presume). The target URL will be something like this (suppose the button's name is P1_START_PROCESSES):
javascript:if(confirm('Are you sure you want to delete all rows related to this document?')){doSubmit('P1_START_PROCESSES');}
I have a use case where I need to do the following things in one transaction:
start the transaction
INSERT an item into a table
SELECT all the items in the table
dump the selected items into a file (this file is versioned and another program always uses the latest version)
If all the above things succeed, commit the transaction, if not, rollback.
If two transactions begin almost simultaneously, it is possible that before the first transaction A commits what it has inserted into the table (step 4), the second transaction B has already performed the SELECT operation (step 2), whose result does not yet contain the item inserted by the first transaction (as it is not yet committed by A, so not visible to B). In this case, when A finishes, it will have correctly dumped a file File1 containing its inserted item. Later, when B finishes, it will have dumped another file File2 containing only its own inserted item but not the one inserted by A. Since File2 is more recent, we will use File2. The problem is that File2 doesn't contain the item inserted by A even though this item is indeed in the DB.
I would like to know if it is feasible to solve this problem by locking the reads (SELECT) of the table whenever a transaction inserts something into it, until that transaction commits or rolls back, and if so, how this locking can be implemented in Spring with Oracle as the DB.
You need some sort of synchronization between the transactions:
start the transaction
Obtain a lock to prevent the transaction in another session from proceeding, or wait until the transaction in the other session finishes
INSERT an item into a table
SELECT ......
......
Commit and release the lock
The easiest way is to use the LOCK TABLE command, at least in SHARE mode (SHARE ROW EXCLUSIVE or EXCLUSIVE modes can also be used, but they are too restrictive for this case).
The advantage of this approach is that the lock is automatically released at commit or rollback.
The disadvantage is that this lock can interfere with other transactions in the system that update this table at the same time, and could reduce overall performance.
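A minimal sketch of that approach, with ITEMS and ITEMS_SEQ as placeholder names for the table and its sequence:
-- Serialize the whole insert + select + dump sequence on the table.
lock table items in share row exclusive mode;
insert into items (id, payload) values (items_seq.nextval, 'new item');
select id, payload from items;   -- includes everything committed so far, plus our own row
-- dump the selected rows to the file in the application code, then:
commit;                          -- the table lock is released automatically here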
Another approach is to use the DBMS_LOCK package. This lock doesn't affect other transactions that don't explicitly use that lock. The drawback is that this package is difficult to use, and the lock is not released on commit or rollback; you must explicitly release it at the end of the transaction, so all exceptions must be carefully handled, otherwise a deadlock could easily occur.
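A rough sketch of the DBMS_LOCK variant (the lock name is arbitrary, EXECUTE on DBMS_LOCK usually has to be granted explicitly, and note the release in the exception handler):
declare
   l_handle varchar2(128);
   l_status integer;
begin
   -- Map an arbitrary lock name to a lock handle.
   dbms_lock.allocate_unique(lockname => 'ITEMS_DUMP_LOCK', lockhandle => l_handle);
   -- Wait here until the other session releases the lock (exclusive mode).
   l_status := dbms_lock.request(lockhandle => l_handle, lockmode => dbms_lock.x_mode);
   if l_status not in (0, 4) then   -- 0 = success, 4 = we already own the lock
      raise_application_error(-20001, 'could not obtain lock, status ' || l_status);
   end if;
   -- ... INSERT, SELECT and dump the file here ...
   commit;
   l_status := dbms_lock.release(l_handle);
exception
   when others then
      l_status := dbms_lock.release(l_handle);   -- always release, or other sessions may hang
      raise;
end;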
One more solution is to create a "dummy" table with a single row in it, for example:
CREATE TABLE my_special_lock_table(
  x int
);
INSERT INTO my_special_lock_table VALUES(1);
COMMIT;
and then use SELECT x FROM my_special_lock_table FOR UPDATE
or - even easier - a simple UPDATE my_special_lock_table SET x = x in your transaction.
This will place an exclusive lock on a row in this table and synchronize only this one transaction.
A drawback is that another "dummy" table must be created.
But this solution doesn't affect the other transactions in the system, the lock is automatically released upon commit or rollback, and it is portable - it should work in all other databases, not only in Oracle.
Use Spring's REPEATABLE_READ or SERIALIZABLE isolation levels:
REPEATABLE_READ A constant indicating that dirty reads and non-repeatable reads are prevented; phantom reads can occur. This level prohibits a transaction from reading a row with uncommitted changes in it, and it also prohibits the situation where one transaction reads a row, a second transaction alters the row, and the first transaction rereads the row, getting different values the second time (a "non-repeatable read").
SERIALIZABLE A constant indicating that dirty reads, non-repeatable reads and phantom reads are prevented. This level includes the prohibitions in ISOLATION_REPEATABLE_READ and further prohibits the situation where one transaction reads all rows that satisfy a WHERE condition, a second transaction inserts a row that satisfies that WHERE condition, and the first transaction rereads for the same condition, retrieving the additional "phantom" row in the second read.
With serializable or repeatable read, the group of reads will be protected from non-repeatable reads:
connection 1:                           connection 2:

set transaction isolation level
    repeatable read
begin transaction
select name from users where id = 1
                                        update users set name = 'Bill' where id = 1
select name from users where id = 1          |
commit transaction                           |
                                             |--> executed here
In this scenario, the update will block until the first transaction is complete.
Higher isolation levels are rarely used because they lower the number of people that can work in the database at the same time. At the highest level, serializable, a reporting query halts any update activity.
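At the database level, asking Oracle for a serializable transaction looks like this (a sketch using the same placeholder ITEMS table and ITEMS_SEQ sequence as above; in Spring you would normally set this through the transaction definition's isolation attribute rather than issuing the statement yourself):
set transaction isolation level serializable;   -- must be the first statement of the transaction
insert into items (id, payload) values (items_seq.nextval, 'new item');
select id, payload from items;                  -- sees a snapshot as of the start of the transaction
commit;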
I think you need to serialize the whole transaction. While a SELECT ... FOR UPDATE could work, it does not really buy you anything, since you would be selecting all rows. You may as well just take and release a lock, using DBMS_LOCK.
I am using Oracle
What is the difference between creating an ID using max(id)+1 and using sequence.nextval? Where should each be used, and when?
Like:
insert into student (id, name) values ((select max(id)+1 from student), 'abc');
and
insert into student (id,name) values (SQ_STUDENT.nextval, 'abc');
SQ_STUDENT.nextval sometimes gives an error that the record is a duplicate...
Please help me with this doubt.
With the select max(id) + 1 approach, two sessions inserting simultaneously will see the same current max ID from the table, and both insert the same new ID value. The only way to use this safely is to lock the table before starting the transaction, which is painful and serialises the transactions. (And as Stijn points out, values can be reused if the highest record is deleted). Basically, never use this approach. (There may very occasionally be a compelling reason to do so, but I'm not sure I've ever seen one).
The sequence guarantees that the two sessions will get different values, and no serialisation is needed. It will perform better and be safer, easier to code and easier to maintain.
The only way you can get duplicate errors using the sequence is if records already exist in the table with IDs above the sequence value, or if something is still inserting records without using the sequence. So if you had an existing table with manually entered IDs, say 1 to 10, and you created a sequence with a default start-with value of 1, the first insert using the sequence would try to insert an ID of 1 - which already exists. After trying that 10 times the sequence would give you 11, which would work. If you then used the max-ID approach to do the next insert that would use 12, but the sequence would still be on 11 and would also give you 12 next time you called nextval.
The sequence and table are not related. The sequence is not automatically updated if a manually-generated ID value is inserted into the table, so the two approaches don't mix. (Among other things, the same sequence can be used to generate IDs for multiple tables, as mentioned in the docs).
If you're changing from a manual approach to a sequence approach, you need to make sure the sequence is created with a start-with value that is higher than all existing IDs in the table, and that everything that does an insert uses the sequence only in the future.
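As a sketch of that migration, using the names from the question (the START WITH value is just whatever the current max(id) + 1 happens to be):
-- Suppose SELECT MAX(id) FROM student currently returns 10.
create sequence SQ_STUDENT start with 11;
-- From now on, every insert must take its ID from the sequence:
insert into student (id, name) values (SQ_STUDENT.nextval, 'abc');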
Using a sequence works if you intend to have multiple users. Using a max does not.
If you do a max(id) + 1 and you allow multiple users, then multiple sessions that are both operating at the same time will regularly see the same max and, thus, will generate the same new key. Assuming you've configured your constraints correctly, that will generate an error that you'll have to handle. You'll handle it by retrying the INSERT which may fail again and again if other sessions block you before your session retries but that's a lot of extra code for every INSERT operation.
It will also serialize your code. If I insert a new row in my session and go off to lunch before I remember to commit (or my client application crashes before I can commit), every other user will be prevented from inserting a new row until I get back and commit or until the DBA kills my session.
To add to the other answers, a couple of issues.
Your max(id)+1 syntax will also fail if there are no rows in the table already, so use:
Coalesce(Max(id),0) + 1
There's nothing wrong with this technique if you only have a single process that inserts into the table, as might be the case with a data warehouse load, and if max(id) is fast (which it probably is).
It also avoids the need for code to synchronise values between tables and sequences if you are moving or restoring data to a test system, for example.
You can extend this method to multirow insert by using:
Coalesce(max(id),0) + rownum
I expect that might serialise a parallel insert, though.
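A sketch of that multirow form, reusing the student table from the question and a hypothetical staging table as the source:
-- Assign consecutive IDs above the current maximum to a whole batch of rows.
insert into student (id, name)
select (select coalesce(max(id), 0) from student) + rownum,
       s.name
from   staging_student s;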
Some techniques don't work well with these methods. They rely of course on being able to issue the select statement, so SQL*Loader might be ruled out. However SQL*Loader has support for this technique in general through the SEQUENCE parameter of the column specification: http://docs.oracle.com/cd/E11882_01/server.112/e22490/ldr_field_list.htm#i1008234
Assuming MAX(ID) is actually fast enough, wouldn't it be possible to:
First get MAX(ID)+1
Then get NEXTVAL
Compare those two and increase the sequence in case NEXTVAL is smaller than MAX(ID)+1
Use NEXTVAL in INSERT statement
In that case I would have a fully stable procedure, and manual inserts would also be allowed without worrying about updating the sequence (see the sketch below).
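A sketch of what those steps could look like in PL/SQL, using the names from the question (note that this still isn't safe if two sessions run it at the same time as a manual insert, which is why the other answers recommend using the sequence alone):
declare
   l_max  number;
   l_next number;
begin
   select coalesce(max(id), 0) + 1 into l_max from student;
   l_next := SQ_STUDENT.nextval;
   -- Burn sequence values until the sequence has caught up with manual inserts.
   while l_next < l_max loop
      l_next := SQ_STUDENT.nextval;
   end loop;
   insert into student (id, name) values (l_next, 'abc');
end;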
Any idea about
ORA-1555: snapshot too old: rollback segment number
I am getting this error and nothing seems to be wrong. Please state under what conditions it may occur and how it can be avoided.
Frequent commits can be the cause of ORA-1555.
It's all about read consistency. At the time you start a query, Oracle records a before image, so the result of your query is not altered by DML that takes place in the meantime (your big transaction). The before image uses the rollback segments to get the values of data that changed after the before image was taken.
By committing in your big transaction, you tell Oracle that the rollback data of that transaction can be overwritten.
If your query needs data from the rollback segments that has been overwritten, you get this error. The less you commit, the less chance you have that the rollback data you need has been overwritten.
One common cause of ORA-1555 is a procedure that does all of this by itself: it opens a cursor on a table, loops through the records, and updates/deletes the same table, committing every x records.
As guigui said: let the rollback segments grow to contain your whole transaction.
I suggest you read Tom's answer:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:1441804355350
"The ORA-1555 happens when people try to save space typically. They'll have small
rollback segments that could grow if they needed (and will shrink using OPTIMAL). So,
they'll start with say 10 or so 1meg rollback segments. These rollback segments COULD
grow to 100meg each if we let them (in this example) however, they will NEVER grow unless
you get a big transaction.
"
Typically this occurs when code commits inside a cursor.
e.g.
for x in (select ... from ...)
loop
   -- do something
   commit;
end loop;
See the AskTom link from guigui42, though, for other examples.
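A sketch of the usual fix, with placeholder table and column names: keep the work inside the loop, but commit once, after the cursor loop has finished (and size the undo/rollback segments for the whole transaction, as guigui suggested).
begin
   for r in (select id from big_table where needs_cleanup = 'Y') loop
      delete from big_table where id = r.id;
   end loop;
   commit;   -- a single commit after the loop; no fetching across commits
end;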
I tried to put it in a sentence but it is better to give an example:
SELECT * FROM someTable WHERE id = someID;
returns no rows
...
some time passes (no inserts are done to the table and no ID updates)
...
SELECT * FROM someTable WHERE id = someID;
returns one row!
Is it possible that some DB mechanism prevents the first SELECT from returning the row?
The Oracle log has no errors.
No transactions are rolled back between the two SELECTs.
You can't see uncommitted data in another session. When did the commit happen?
EDIT1: Are you the only one using this database? Or did/do you have multiple sessions?
I think that in another session you or someone else inserted this row; you do your select and don't see it. After that, a commit happens in the other session (maybe an implicit one because the session is closed), and then you see the row when you select again.
I can think of other explanations, but I first want to know whether you are the only one using this database.
With read consistency as provided by Oracle, you should not see a row appear like that. If you are running in some mode with automatic commits, so that each statement is a self-contained transaction, then read consistency is not being violated. Which program are you using to access the database? I agree with the other observations; the row should not appear if your session is not inserting it and no other session is active at the same time. I don't know of a DBMS that indulges in spontaneous data generation.
Don't you have scheduled jobs in that Oracle?