I have the following code block. When the ALTER SESSION statement is executed, it either hangs or throws an ORA-01013, depending on whether we're connecting to Oracle 12cR2 or 19.3 and which version of the OJDBC8 driver is being used:
try (Connection connection = jdbcConnection.connect(false)) {
    // We now have a java.sql.Connection open to the database at this point
    try (PreparedStatement ps = connection.prepareStatement(someQuery)) {
        // We now have a prepared statement based on the query; the query itself is irrelevant
        try (Statement s = connection.createStatement()) {
            // The following statement fails with the ORA-01013 error
            s.execute("ALTER SESSION SET CONTAINER=" + pdbName);
        }
    }
}
If I rework this code block to the following, the problem disappears.
try (Connection connection = jdbcConnection.connect(false)) {
    // We now have a java.sql.Connection open to the database at this point
    try (Statement s = connection.createStatement()) {
        s.execute("ALTER SESSION SET CONTAINER=" + pdbName);
    }
    try (PreparedStatement ps = connection.prepareStatement(someQuery)) {
        // We now have a prepared statement based on the query; the query itself is irrelevant
    }
    // or put the alter session here
}
From what I can determine, with Oracle OJDBC8 12.2.0.1 neither the hang nor the ORA-01013 exception occurs; however, when I migrate to 19.x.0.0, that is where I'm seeing this problem.
Is this a bug in the JDBC driver or is there actually a problem with how the code is written that the 12.2.0.1 driver is more lenient with than the later versions?
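For what it's worth, the rework above boils down to one rule: make the container switch the first thing that runs on the session. A minimal sketch of that rule as a helper (the helper name is ours, and the premise that the 19.x drivers reject ALTER SESSION SET CONTAINER while another statement handle is open is our reading of the observed behavior, not documented driver semantics):

// Sketch: isolate the container switch so it runs before any other
// statement is prepared on the connection. Helper name is ours; the
// "no open statements during the switch" rule is an assumption based
// on the observed 19.x behavior.
static void switchContainer(Connection connection, String pdbName) throws SQLException {
    try (Statement s = connection.createStatement()) {
        s.execute("ALTER SESSION SET CONTAINER=" + pdbName);
    }
}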
I don't understand the behavior of distributed locks obtained from a JdbcLockRegistry.
@Bean
public LockRepository lockRepository(DataSource datasource) {
    return new DefaultLockRepository(datasource);
}

@Bean
public LockRegistry lockRegistry(LockRepository repository) {
    return new JdbcLockRegistry(repository);
}
My project runs on PostgreSQL and the Spring Boot version is 2.2.2.
This is the demonstration use case:
@GetMapping("/isolate")
public String isolate() throws InterruptedException {
    Lock lock = registry.obtain("the-lock");
    if (lock.tryLock(10, TimeUnit.SECONDS)) { // close
        try {
            Thread.sleep(30 * 1000L);
        } finally {
            lock.unlock(); // open
        }
    } else {
        return "rejected";
    }
    return "acquired";
}
NB: that use case works when playing with Hazelcast distributed locks.
The observed behavior is that a first lock is duly registered in the database through a call to the API on a first instance.
Then, within the 30 seconds, a second one is requested on a different instance (other port), and it updates the existing int_lock table row (the client_id changes) instead of failing. So the first endpoint returns after 30 seconds (with no unlock failure), and the second endpoint returns after its own 30-second period. There is no mutual exclusion.
These are the logs for a single acquisition:
Trying to acquire lock...
Executing prepared SQL update
Executing prepared SQL statement [DELETE FROM INT_LOCK WHERE REGION=? AND LOCK_KEY=? AND CREATED_DATE<?]
Executing prepared SQL update
Executing prepared SQL statement [UPDATE INT_LOCK SET CREATED_DATE=? WHERE REGION=? AND LOCK_KEY=? AND CLIENT_ID=?]
Executing prepared SQL update
Executing prepared SQL statement [INSERT INTO INT_LOCK (REGION, LOCK_KEY, CLIENT_ID, CREATED_DATE) VALUES (?, ?, ?, ?)]
Processing...
Executing prepared SQL update
Executing prepared SQL statement [DELETE FROM INT_LOCK WHERE REGION=? AND LOCK_KEY=? AND CLIENT_ID=?]
It seems strange that the acquisition process begins with a DELETE, though...
I've tried setting a constant client id on the DefaultLockRepository, without improvement.
Does anyone have a clue how to fix this? Thanks for any help.
All right. It turns out that the repository's TTL is 10 s by default, exactly the same as my timeout in that specific use case. So the lock obviously dies (hence the DELETE) before the timeout period is over.
Here is a fix then:
@Bean
public LockRepository lockRepository(DataSource datasource) {
    DefaultLockRepository repository = new DefaultLockRepository(datasource);
    repository.setTimeToLive(60 * 1000);
    return repository;
}
Try renewing the lock to extend the lock period; lock.lock() doesn't update the lock until it expires.
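For illustration, a sketch of what explicit renewal could look like. This assumes a Spring Integration version in which JdbcLockRegistry implements RenewableLockRegistry and exposes renewLock (that API arrived after the 5.2.x line used here, so treat it as an upgrade path rather than a drop-in fix), and that the registry field is typed as JdbcLockRegistry:

@GetMapping("/isolate")
public String isolate() throws InterruptedException {
    Lock lock = registry.obtain("the-lock");
    if (!lock.tryLock(10, TimeUnit.SECONDS)) {
        return "rejected";
    }
    try {
        for (int i = 0; i < 6; i++) {
            Thread.sleep(5 * 1000L);        // one slice of the long task
            registry.renewLock("the-lock"); // pushes CREATED_DATE forward, keeping the lease alive
        }
    } finally {
        lock.unlock();
    }
    return "acquired";
}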
Trying to keep the lock alive, I attempted to take advantage of DefaultLockRepository#acquire, called by Lock#lock, which attempts an UPDATE before inserting a new lock (and after cleaning up expired locks, as noted before):
@GetMapping("/isolate")
public String isolate() throws InterruptedException {
    Lock lock = registry.obtain("the-lock");
    log.warn("Trying to acquire lock...");
    if (lock.tryLock(10, TimeUnit.SECONDS)) { // close lock
        try {
            for (int i = 0; i < 6; i++) { // very...
                log.warn("Processing...");
                Thread.sleep(5 * 1000L); // ... long task
                lock.lock(); // DEBUG holding (lock update)
            }
        } finally {
            if (!repository.isAcquired("the-lock")) {
                throw new IllegalStateException("lock lost");
            } else {
                lock.unlock(); // open lock
            }
        }
    } else {
        return "rejected";
    }
    return "acquired";
}
But this didn't work as expected (NB: the TTL is at its default of 10 s in this test).
I always get a "lock lost" IllegalStateException in the end, despite the fact that I can see the lock date changing in PostgreSQL's console.
It seems NOWAIT is not supported by HSQLDB in Oracle syntax mode.
HSQLDB version: 2.3.3
with
SET DATABASE SQL SYNTAX ORA TRUE;
The exception is produced on this SQL:
select a, b, c from sometable where id=1 for update NOWAIT
The exception:
Caused by: org.hsqldb.HsqlException: unexpected token: NOWAIT
at org.hsqldb.error.Error.parseError(Unknown Source)
at org.hsqldb.ParserBase.unexpectedToken(Unknown Source)
at org.hsqldb.ParserCommand.compileStatement(Unknown Source)
at org.hsqldb.Session.compileStatement(Unknown Source)
at org.hsqldb.StatementManager.compile(Unknown Source)
at org.hsqldb.Session.execute(Unknown Source)
Does anyone know whether HSQLDB supports this?
Any ideas how to avoid this exception without modifying the original SQL? I can ignore the NOWAIT functionality in my unit tests, but I just can't modify the SQL. Additional info: we use spring-jdbc and JdbcTemplate, and I'm thinking about intercepting and rewriting the SQLs containing NOWAIT as a hack in the JUnit test setup.
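One way to do that interception without touching the production SQL or patching HSQLDB: wrap the test DataSource so the SQL is rewritten before HSQLDB parses it. A sketch under those assumptions (the class name and regex are ours; DelegatingDataSource is Spring's, and only the prepareStatement/prepareCall paths are covered here):

import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.springframework.jdbc.datasource.DelegatingDataSource;

// Test-only wrapper: strips NOWAIT from SQL before it reaches HSQLDB.
public class NoWaitStrippingDataSource extends DelegatingDataSource {

    public NoWaitStrippingDataSource(DataSource target) {
        super(target);
    }

    @Override
    public Connection getConnection() throws SQLException {
        return wrap(super.getConnection());
    }

    @Override
    public Connection getConnection(String username, String password) throws SQLException {
        return wrap(super.getConnection(username, password));
    }

    private Connection wrap(Connection target) {
        return (Connection) Proxy.newProxyInstance(
                getClass().getClassLoader(),
                new Class<?>[] { Connection.class },
                (proxy, method, args) -> {
                    // Rewrite the SQL argument of prepareStatement/prepareCall
                    if (args != null && args.length > 0 && args[0] instanceof String
                            && method.getName().startsWith("prepare")) {
                        args[0] = ((String) args[0]).replaceAll("(?i)\\s+NOWAIT\\b", "");
                    }
                    return method.invoke(target, args);
                });
    }
}

Register this wrapper around the HSQLDB DataSource only in the JUnit context; the production configuration and SQL stay untouched.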
I finally found the answer to my own question after digging through the HSQLDB source code on SourceForge.
Version 2.3.3 of HSQLDB does NOT support NOWAIT.
I have asked about this in their discussion forum and raised the issue; however, it's not like GitHub where you can create an issue, so no formal issue/request was opened.
For now I am getting by with a bad hack: modifying HSQLDB's org.hsqldb.ParserDQL class myself to simply ignore the NOWAIT in the select-for-update SQL.
If anyone has a better answer, I will accept it.
UPDATE (Aug-24-2015):
Received confirmation from the HSQLDB forum that NOWAIT will be ignored. Meanwhile, I am posting the code snippet to ignore NOWAIT that I received from the HSQLDB SourceForge forum. You may want to wait for the next version of HSQLDB rather than adding this to your code base (as a hack).
if (Tokens.T_NOWAIT.equals(token.tokenString)) {
    read();
}
UPDATED to show the full context of where to add the above snippet in ParserDQL.java:
/**
 * Retrieves a SELECT or other query expression Statement from this parse context.
 */
StatementQuery compileCursorSpecification(RangeGroup[] rangeGroups,
                                          int props, boolean isRoutine) {

    OrderedHashSet colNames = null;
    QueryExpression queryExpression = XreadQueryExpression();

    if (token.tokenType == Tokens.FOR) {
        read();

        if (token.tokenType == Tokens.READ
                || token.tokenType == Tokens.FETCH) {
            read();
            readThis(Tokens.ONLY);
            props = ResultProperties.addUpdatable(props, false);
        } else {
            readThis(Tokens.UPDATE);
            props = ResultProperties.addUpdatable(props, true);

            if (token.tokenType == Tokens.OF) {
                readThis(Tokens.OF);
                colNames = new OrderedHashSet();
                readColumnNameList(colNames, null, false);
            }

            if (Tokens.T_NOWAIT.equalsIgnoreCase(token.tokenString)) {
                readIfThis(Tokens.X_IDENTIFIER);
            }
        }
    }
I have created a Spring Batch service with an item reader, item processor, and item writer. I have extended AbstractPagingItemReader and created my own implementation, named JpaPagingItemReader. When I run the batch service, the reader reads a fixed set of records from the DB (default page size: 10), processes them, and writes them. However, on the second read it throws the exception below:
2015-06-25 16:33:00,712 ERROR [jobLauncherTaskExecutor-6][saeedh:120659] org.hibernate.util.JDBCExceptionReporter : DB2 SQL Error: SQLCODE=-270, SQLSTATE=42997, SQLERRMC=63, DRIVER=3.61.65
2015-06-25 16:33:00,712 ERROR [jobLauncherTaskExecutor-6][saeedh:120659] org.hibernate.util.JDBCExceptionReporter : DB2 SQL Error: SQLCODE=-727, SQLSTATE=56098, SQLERRMC=2;-270;42997;63, DRIVER=3.61.65
2015-06-25 16:33:00,712 ERROR [jobLauncherTaskExecutor-6][saeedh:120659] org.hibernate.util.JDBCExceptionReporter : DB2 SQL Error: SQLCODE=-727, SQLSTATE=56098, SQLERRMC=2;-270;42997;63, DRIVER=3.61.65
javax.persistence.PersistenceException: org.hibernate.exception.SQLGrammarException: could not execute query
com.ibm.db2.jcc.am.SqlSyntaxErrorException: DB2 SQL Error: SQLCODE=-270, SQLSTATE=42997, SQLERRMC=63, DRIVER=3.61.65
at com.ibm.db2.jcc.am.ed.a(ed.java:676)
at com.ibm.db2.jcc.am.ed.a(ed.java:60)
at com.ibm.db2.jcc.am.ed.a(ed.java:127)
at com.ibm.db2.jcc.am.gn.c(gn.java:2554)
at com.ibm.db2.jcc.am.gn.d(gn.java:2542)
at com.ibm.db2.jcc.am.gn.a(gn.java:2034)
at com.ibm.db2.jcc.am.hn.a(hn.java:6500)
at com.ibm.db2.jcc.t4.cb.g(cb.java:140)
at com.ibm.db2.jcc.t4.cb.a(cb.java:40)
at com.ibm.db2.jcc.t4.q.a(q.java:32)
at com.ibm.db2.jcc.t4.rb.i(rb.java:135)
I get that this error is probably because there is a CLOB column in the table from which I am reading records, but the weird thing is that it reads the first batch of 10 records fine, processes them, and writes them; it is on the second read that it throws the above exception. Any suggestions? Below is a snippet from the JpaPagingItemReader that I wrote: the overridden doReadPage method from AbstractPagingItemReader.
protected void doReadPage() {
    setPageSize(10);

    // Flush what we already have in the entity manager
    getEntityManager().flush();
    // Clear the entity manager: to read and detach
    getEntityManager().clear();

    Query query = createQuery().setFirstResult(getPage() * getPageSize()).setMaxResults(getPageSize());

    if (parameterValues != null) {
        for (Map.Entry<String, Object> me : parameterValues.entrySet()) {
            query.setParameter(me.getKey(), me.getValue());
        }
    }

    if (results == null) {
        results = new CopyOnWriteArrayList<T>();
    } else {
        results.clear();
    }

    results.addAll(query.getResultList());

    // Detach all objects that became part of the persistence context
    getEntityManager().clear();
}
Any help is highly appreciated, as I am already behind deadline due to this issue. If you think anything is missing, please let me know and I will update the question. Thanks.
I figured it out. The issue was indeed that we are not allowed to have CLOB data as a projection in a scrollable JPA cursor. The first 10 records are read just fine, but as soon as it starts to read the second batch, it has to move the cursor from 0 to the 11th record; that is when I was getting the SQL exception.
The simple fix was to remove the CLOB column from the select statement and fetch it in a separate query where its value is required. That fixed my problem. Thanks.
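In sketch form, the split looks roughly like this (the Record entity and its fields, including the CLOB field payload and the someId variable, are invented for illustration):

// Page query selects only non-CLOB columns, so the paged cursor never carries CLOB data.
Query pageQuery = getEntityManager()
        .createQuery("select r.id, r.name, r.status from Record r order by r.id")
        .setFirstResult(getPage() * getPageSize())
        .setMaxResults(getPageSize());
List<Object[]> rows = pageQuery.getResultList();

// The CLOB is fetched one row at a time, only where its value is actually needed.
String payload = getEntityManager()
        .createQuery("select r.payload from Record r where r.id = :id", String.class)
        .setParameter("id", someId)
        .getSingleResult();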
In my Java-Spring based web app I'm connecting to Cassandra DB using Hector and Spring.
The connection works just fine but I would like to be able to test the connection.
So if I intentionally provide a wrong host to CassandraHostConfigurator I get an error:
ERROR connection.HConnectionManager: Could not start connection pool for host <myhost:myport>
Which is ok of course. But how can I test this connection?
If I define the connection programmatically (and not via the Spring context) it is clear, but via the Spring context it is not really clear how to test it.
Can you think of an idea?
Since I could neither come up with nor find a satisfying answer, I decided to define my connection programmatically and use a simple query:
private ColumnFamilyResult<String, String> readFromDb(Keyspace keyspace) {
    ColumnFamilyTemplate<String, String> template = new ThriftColumnFamilyTemplate<String, String>(
            keyspace, tableName, StringSerializer.get(), StringSerializer.get());
    // It doesn't matter whether the column actually exists, since we only check the
    // connection. In case of connection failure an exception is thrown;
    // otherwise something comes back.
    return template.queryColumns("some_column");
}
And my test checks that the returned object is not null.
Another way that works fine:
public boolean isConnected() {
    List<KeyspaceDefinition> keyspaces = null;
    try {
        keyspaces = cluster.describeKeyspaces();
    } catch (HectorException e) {
        return false;
    }
    return !CollectionUtils.isEmpty(keyspaces);
}
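To run either check against the connection that Spring wires up (rather than one defined programmatically), a test can pull the Hector Cluster bean out of the context. A sketch, where cassandra-context.xml stands in for the real context file:

import static org.junit.Assert.assertFalse;

import me.prettyprint.hector.api.Cluster;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;

// Fails if the context's Cluster cannot reach Cassandra:
// describeKeyspaces() throws a HectorException on a bad host.
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("classpath:cassandra-context.xml")
public class CassandraConnectionTest {

    @Autowired
    private Cluster cluster;

    @Test
    public void clusterIsReachable() {
        assertFalse(cluster.describeKeyspaces().isEmpty());
    }
}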
We are running a WebSphere Commerce site with an Oracle DB and facing an issue where we are running out of DB connections.
We are using a JDBCHelper singleton for getting the prepared statements and closing the connections.
public static JDBCHelper getJDBCHelper() {
    if (theObject == null) {
        theObject = new JDBCHelper();
    }
    return theObject;
}

public void closeResources(Connection con, PreparedStatement pstmt, ResultSet rs) {
    try {
        if (rs != null) { rs.close(); }
    } catch (SQLException e) {
        logger.info("Exception closing the resultset");
    }
    try {
        if (pstmt != null) { pstmt.close(); }
    } catch (SQLException e) {
        logger.info("Exception closing the preparedstatement");
    }
    try {
        if (con != null) { con.close(); }
    } catch (SQLException e) {
        logger.info("Exception closing the connection");
    }
}
However, when we try getting the connection via prepStmt.getConnection() in order to pass it to closeResources after execution, it throws an SQLException. Any idea why? Does the connection get closed immediately after execution? And is there something wrong in our use of the singleton JDBCHelper?
EDIT
Part of the code that creates the prepared statement, executes it, and closes the connection:
PreparedStatement pstmt = jdbcHelper.getPreparedStatement(query);
try {
    // rest of the code
    int brs = pstmt.executeUpdate();
} finally {
    try {
        jdbcHelper.closeResources(pstmt.getConnection(), pstmt);
    } catch (SQLException e1) {
        logger.logp(Level.SEVERE, CLASS_NAME, methodName, "In the finally block - Could not close connection", e1);
    }
}
Your connection will most likely come from a pool, and closing it actually returns the connection to the pool (under the covers). I think posting the code which gets the connection, uses it, and closes it via JDBCHelper would be of more use.
Re. your singleton, I'm not sure why you're using it, since it doesn't appear to have anything to warrant being a singleton. Check out Apache Commons DbUtils, which does this sort of stuff and more besides.
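For example, DbUtils collapses the three-way close above into a single call (closeQuietly null-checks each argument and swallows the SQLExceptions):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import org.apache.commons.dbutils.DbUtils;

// Equivalent of the closeResources method above, using Commons DbUtils.
public void closeResources(Connection con, PreparedStatement pstmt, ResultSet rs) {
    DbUtils.closeQuietly(con, pstmt, rs);
}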
This code appears to be written for single-threaded operation only, as it lacks any synchronisation: the getJDBCHelper() method, for instance, is likely to create two JDBCHelpers. If I'm not mistaken, there's not even a guarantee that a second thread will see theObject long after the primary thread has created it, although it usually will, by virtue of the architecture the JVM runs on.
If you're running this inside a web server, you're likely to run into race issues where two threads modify your connection at the same time, unless you rolled your own connection pool or something.
Brian is right, use one of the freely available libraries that solve this (hard) problem for you.
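If the helper really must stay a singleton, here is a sketch of a race-free getJDBCHelper() using the initialization-on-demand holder idiom (the JVM guarantees the holder class is initialized exactly once, with no explicit locking):

public class JDBCHelper {

    private JDBCHelper() { }

    // Loaded, and therefore initialized, only on the first call to getJDBCHelper()
    private static class Holder {
        static final JDBCHelper INSTANCE = new JDBCHelper();
    }

    public static JDBCHelper getJDBCHelper() {
        return Holder.INSTANCE;
    }
}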