SELECT FOR UPDATE not working with JDBC and Oracle

I've written a simple Java program, which opens a transaction, selects some records, does some logic and then updates them. I want the records to be locked so I used SELECT...FOR UPDATE.
The program works perfectly fine with MS SQL Server 2005, but in Oracle 10g the records are not locked!
Any idea why?
I create the connection as follows:
Connection connection = DriverManager.getConnection(URL, User, Password);
connection.setAutoCommit(false);
If I execute the SELECT ... FOR UPDATE from the Oracle SQL Developer client, I can see that the records are locked, so I'm thinking it might be an issue with the JDBC driver rather than a database problem, but I couldn't find anything useful online.
These are the details of the JDBC driver I'm using:
Manifest-Version: 1.0
Implementation-Vendor: Oracle Corporation
Implementation-Title: ojdbc14.jar
Implementation-Version: Oracle JDBC Driver version - "10.2.0.2.0"
Implementation-Time: Tue Jan 24 08:55:21 2006
Specification-Vendor: Oracle Corporation
Sealed: true
Created-By: 1.4.2_08 (Sun Microsystems Inc.)
Specification-Title: Oracle JDBC driver classes for use with JDK14
Specification-Version: Oracle JDBC Driver version - "10.2.0.2.0"

Sorry, I cannot reproduce this behaviour. Exactly how are you running your SELECT ... FOR UPDATE queries in JDBC?
I have a table, locktest with the following data in it:
SQL> select * from locktest;

         A          B
---------- ----------
         1          0
         2          0
         3          0
         4          0
         5          0
I also have this Java class:
import java.sql.*;
import oracle.jdbc.OracleDriver;

public class LockTest {
    public static void main(String[] args) throws Exception {
        DriverManager.registerDriver(new OracleDriver());
        Connection c = DriverManager.getConnection(
                "jdbc:oracle:thin:@localhost:1521:XE", "user", "password");
        c.setAutoCommit(false);
        Statement stmt = c.createStatement(
                ResultSet.TYPE_SCROLL_SENSITIVE, ResultSet.CONCUR_UPDATABLE);
        ResultSet rSet = stmt.executeQuery(
                "SELECT a, b FROM locktest FOR UPDATE");
        while (rSet.next()) {
            if (rSet.getInt(1) <= 3) {
                rSet.updateInt(2, 1);
                rSet.updateRow(); // push the change to the database
            }
        }
        System.out.println("Sleeping...");
        Thread.sleep(Long.MAX_VALUE);
    }
}
When I run this Java class, it makes some updates to the table and then starts sleeping. It sleeps so that it keeps the transaction open and hence retains the locks.
C:\Users\Luke\stuff>java LockTest
Sleeping...
While this is sleeping, I try to concurrently update the table in SQL*Plus:
SQL> update locktest set b = 1 where a <= 3;
At this point, SQL*Plus hangs until I kill the Java program.
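Incidentally, if you want a second session to fail fast rather than hang like this, Oracle's FOR UPDATE supports a NOWAIT clause (standard Oracle syntax; nothing in the code above uses it):

```sql
-- In the second session: fail immediately instead of blocking
-- if another transaction already holds the row locks.
SELECT a, b FROM locktest FOR UPDATE NOWAIT;
-- ORA-00054: resource busy and acquire with NOWAIT specified

-- Or give up after a bounded wait:
SELECT a, b FROM locktest FOR UPDATE WAIT 5;
```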

Related

DB2 iSeries doesn't lock on select for update

I'm migrating a legacy application using DB2 iSeries on AS400, and I have to reproduce a specific behaviour of it using .NET and the DB2.Data.DB2.iSeries client for .NET.
What I'm describing works for me with non-AS400 DB2. On AS400 DB2 it works for the legacy application I'm replacing, but not with my application.
The behavior in the original application:
1. Begin transaction.
2. ExecuteReader() => SELECT col1 FROM table1 WHERE col1 = 1 FOR UPDATE.
3. The row is now locked; anyone else who tries to run SELECT FOR UPDATE should fail.
4. Close the reader opened in step 2.
5. The row is now unlocked; anyone else who tries to run SELECT FOR UPDATE should succeed.
6. Close the transaction and live happily ever after.
In my .NET code I have two problems:
Step 2 only checks whether the row is already locked, but doesn't actually lock it, so another user can (and does) run SELECT FOR UPDATE - wrong behaviour.
Once that works, I need the lock to be released when the reader is closed (step 4).
Here's my code:
var cb = new IBM.Data.DB2.iSeries.iDB2ConnectionStringBuilder();
cb.DataSource = "10.0.0.1";
cb.UserID = "User";
cb.Password = "Password";
using (var con = new IBM.Data.DB2.iSeries.iDB2Connection(cb.ToString()))
{
    con.Open();
    var t = con.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted);
    using (var c = con.CreateCommand())
    {
        c.Transaction = t;
        c.CommandText = "select col1 from table1 where col1=1 FOR UPDATE";
        using (var r = c.ExecuteReader())
        {
            while (r.Read())
            {
                MessageBox.Show(con.JobName + "The Row Should Be Locked");
            }
        }
        MessageBox.Show(con.JobName + "The Row Should Be unlocked");
    }
}
When you run this code twice, you'll see both processes reach "The Row Should Be Locked", which is the problem I'm describing.
The desired result would be for the first process to reach "The Row Should Be Locked" and for the second process to fail with a resource busy error.
Then, when the first process reaches the second message box ("The Row Should Be unlocked"), the second process (after running again) should reach the "The Row Should Be Locked" message.
Any help would be greatly appreciated
The documentation says:
When the UPDATE clause is used, FETCH operations referencing the cursor acquire an exclusive row lock.
This implies a cursor is being used, and the lock occurs when the fetch statement is executed. I don't see a cursor, or a fetch in your code.
Now, whether .NET handles this as a cursor, I don't know, but the DB2 UDB documentation does not have this notation.
Also, your chosen isolation level allows this behaviour: reading rows that are locked. From the documentation on ReadUncommitted:
A dirty read is possible, meaning that no shared locks are issued and no exclusive locks are honored.
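Beyond raising the transaction's isolation level (e.g. ReadCommitted), DB2 also lets a statement request its lock behaviour explicitly via an isolation clause on the SELECT itself. A sketch of that approach (clause support and exact semantics depend on your DB2 for i release, so check the SQL reference for your version):

```sql
-- Request the exclusive lock explicitly at statement level,
-- instead of relying on the transaction's isolation level.
SELECT col1 FROM table1 WHERE col1 = 1
  FOR UPDATE
  WITH RS USE AND KEEP EXCLUSIVE LOCKS;
```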
After much investigation we created a workaround in the form of a stored procedure that performs the lock for us.
The stored procedure looks like this:
CREATE PROCEDURE lib.Select_For_Update (IN SQL CHARACTER (5000))
  MODIFIES SQL DATA
  CONCURRENT ACCESS RESOLUTION WAIT FOR OUTCOME
  DYNAMIC RESULT SETS 1
  OLD SAVEPOINT LEVEL
  COMMIT ON RETURN NO
  DISALLOW DEBUG MODE
  SET OPTION COMMIT = *CHG
BEGIN
  DECLARE X CURSOR WITH RETURN TO CLIENT FOR SS;
  PREPARE SS FROM SQL;
  OPEN X;
END
Then we call it using:
var cb = new IBM.Data.DB2.iSeries.iDB2ConnectionStringBuilder();
cb.DataSource = "10.0.0.1";
cb.UserID = "User";
cb.Password = "Password";
using (var con = new IBM.Data.DB2.iSeries.iDB2Connection(cb.ToString()))
{
    con.Open();
    var t = con.BeginTransaction(System.Data.IsolationLevel.ReadUncommitted);
    using (var c = con.CreateCommand())
    {
        c.Transaction = t;
        c.CommandType = CommandType.StoredProcedure;
        c.CommandText = "lib.Select_For_Update";
        c.AddParameter("sql", "select col1 from table1 where col1=1 FOR UPDATE");
        using (var r = c.ExecuteReader())
        {
            while (r.Read())
            {
                MessageBox.Show(con.JobName + "The Row Should Be Locked");
            }
        }
        MessageBox.Show(con.JobName + "The Row Should Be unlocked");
    }
}
We don't like it - but it works.

java.sql.SQLException: Missing defines

We have a very simple piece of Java code which executes an Oracle stored procedure using the CallableStatement API. The Oracle stored procedure doesn't have an OUT parameter defined, but in the Java code we try to register an OUT parameter and retrieve it:
cs = cnDBCon.prepareCall("{call " + strStoredProcedureName + "}");
cs.registerOutParameter(1, Types.NUMERIC);
cs.execute();
int isOk = cs.getInt(1);
According to the Java API (https://docs.oracle.com/javase/8/docs/api/java/sql/CallableStatement.html#getInt-int-), when the value is SQL NULL, the result of getInt is 0.
This worked perfectly fine when we were running on Java 7/WebLogic 12.1.3. Since we switched to Java 8/WebLogic 12.2.1 we have started to get this error:
java.sql.SQLException: Missing defines
at oracle.jdbc.driver.Accessor.isNull(Accessor.java:744)
at oracle.jdbc.driver.NumberCommonAccessor.getInt(NumberCommonAccessor.java:73)
at oracle.jdbc.driver.OracleCallableStatement.getInt(OracleCallableStatement.java:1815)
at oracle.jdbc.driver.OracleCallableStatementWrapper.getInt(OracleCallableStatementWrapper.java:780)
at weblogic.jdbc.wrapper.CallableStatement_oracle_jdbc_driver_OracleCallableStatementWrapper.getInt(Unknown Source)
The obvious fix is to update the stored procedure or the Java code, but I am wondering why the behaviour changed. Is the newer JDBC driver stricter?
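For reference, registerOutParameter(1, ...) is only consistent with a procedure that actually declares an OUT parameter and a call string that carries a placeholder for it. A hypothetical sketch of such a procedure (name and body invented for illustration):

```sql
-- Hypothetical procedure with an OUT parameter; the matching
-- JDBC call string would then be "{call my_proc(?)}".
CREATE OR REPLACE PROCEDURE my_proc (p_result OUT NUMBER) AS
BEGIN
  p_result := 1;  -- whatever status the procedure reports
END;
```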

code running in oracle 11g

I have three tables: case_details, codehead and case_status. I am trying to fetch values from the database. The query below runs fine in Oracle 10g XE, but in Oracle 11g (32-bit) it returns 0 rows. The code is:
select a.case_id
,a.case_description
,a.case_contract_value
,b.sub_codehead
,c.case_status_name
from case_details a
,codehead b
,case_status c
where a.case_codehead = b.code_id
and a.case_status = c.case_status_id
and joint_directorate = 85
and case_status < 2
order by case_id;
I am unable to understand why this previously working query returns nothing in Oracle 11g. Please help me out.
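Zero rows (rather than an error) usually means the predicates simply match no data in that database, but two columns in the query (joint_directorate and case_status) are unqualified, which leaves the intent ambiguous to a reader. A qualified version of the same query, assuming both columns belong to case_details:

```sql
-- Same query with every column qualified; assumes joint_directorate
-- and the filtered case_status are columns of case_details.
select a.case_id
      ,a.case_description
      ,a.case_contract_value
      ,b.sub_codehead
      ,c.case_status_name
  from case_details a
      ,codehead b
      ,case_status c
 where a.case_codehead = b.code_id
   and a.case_status = c.case_status_id
   and a.joint_directorate = 85
   and a.case_status < 2
 order by a.case_id;
```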

ArrayIndexOutOfBoundsException in Oracle JDBC

I get a strange exception from our Oracle JDBC driver. How can a native method throw an ArrayIndexOutOfBoundsException? I think it has something to do with the BLOB in one of the columns.
Our server runs Solaris x86, Sun Java 6 and WebSphere. The database is Oracle 11 with ojdbc5.
With my local configuration on Windows XP everything works fine.
Any ideas?
java.lang.ArrayIndexOutOfBoundsException
at oracle.jdbc.driver.T2CStatement.t2cDefineFetch(Native Method)
at oracle.jdbc.driver.T2CPreparedStatement.doDefineFetch(T2CPreparedStatement.java:1020)
at oracle.jdbc.driver.T2CPreparedStatement.fetch(T2CPreparedStatement.java:1252)
at oracle.jdbc.driver.OracleResultSetImpl.close_or_fetch_from_next(OracleResultSetImpl.java:373)
at oracle.jdbc.driver.OracleResultSetImpl.next(OracleResultSetImpl.java:277)
In the code I do nothing special. Create a prepared statement, fill in the parameters and execute the query.
preparedStatement = connection.prepareStatement(queryAsString);
preparedStatement.setObject( [some parameters...] );
preparedStatement.setFetchSize(100); // set before executing so it applies to the fetch
ResultSet result = preparedStatement.executeQuery();
while (result.next()) {
    int count = Fields.getFieldsCount();
    Object[] objects = new Object[count];
    for (int j = 0; j < count; ++j) {
        objects[j] = result.getObject(j + 1);
    }
    list.add(objects);
}
result.close();
But I don't think the problem is in the Java code, because the same code worked on Java 5 with ojdbc14 and SPARC Solaris. I guess it's more of a configuration thing.

Is there some trick to update an Oracle CLOB with Mybatis 3?

I am updating a CLOB column in my Oracle database. The parameterized SQL looks like it executes without error, but when I run a select to see the change, it has not been updated. Note: MyBatis 3 is built on JDBC parameterized queries, so those rules also apply.
MyBatis Mapping:
<update id="updateRSA103RequestData" parameterType="com.company.domain.RSA103XMLData" flushCache="true">
  update RSA_SUBMIT_DATA
  set TXLIFE_REQUEST = #{request}
  where RSA_SUBMIT_QUEUE_ID = #{id}
</update>
Runtime Logs:
2012-07-13 12:35:26,728 DEBUG Connection:Thread main: - ooo Connection Opened
2012-07-13 12:35:26,837 DEBUG PreparedStatement:Thread main: - ==> Executing: update RSA_SUBMIT_DATA set TXLIFE_REQUEST = ? where RSA_SUBMIT_QUEUE_ID = ?
2012-07-13 12:35:26,837 DEBUG PreparedStatement:Thread main: - ==> Parameters: testasdfasdf(String), 51(Integer)
2012-07-13 12:35:27,024 DEBUG Connection:Thread main: - xxx Connection Closed
Select query after change:
select *
from RSA_SUBMIT_DATA
where RSA_SUBMIT_QUEUE_ID = 51
RSA_SUBMIT_QUEUE_ID | TXLIFE_REQUEST | TXLIFE_RESPONSE
--------------------+----------------+----------------
                 51 | originalString | resultString
Mapper invocation:
SqlSession sqlSession = sqlSessionFactory.openSession();
try {
    log.debug("autoCommit: " + sqlSessionFactory.getConfiguration()
            .getEnvironment().getDataSource().getConnection().getAutoCommit());
    PolicyTransactionMapper policyTransactionDAO = sqlSession
            .getMapper(PolicyTransactionMapper.class);
    RSA103XMLData xmlData = new RSA103XMLData();
    xmlData.setId(rsaSubmitQueueID);
    xmlData.setRequest(request);
    policyTransactionDAO.updateRSA103RequestData(xmlData);
Any help is appreciated.
I don't think your SqlSession is opened with auto-commit.
Per the MyBatis User Guide, to use auto-commit, try:
SqlSession sqlSession = sqlSessionFactory.openSession(true);
Also, your log statement actually opens a new connection (see DataSourceUtils.getConnection vs DataSource.getConnection), so it probably reports on a different connection than the one your mapper is using anyway.
