JDBC connection default autoCommit behavior - Oracle

I'm working with JDBC to connect to Oracle. I tested connection.setAutoCommit(false) vs connection.setAutoCommit(true) and the results were as expected.
By default a connection is supposed to work as if autoCommit(true) [correct me if I'm wrong], but none of the records are inserted until connection.commit() is called. Any advice regarding the default behaviour?
String insert = "INSERT INTO MONITOR (number, name, value) VALUES (?,?,?)";
conn = connection; // connection creation details omitted
preparedStmtInsert = conn.prepareStatement(insert);
// parameter binding (setXxx calls) omitted in this excerpt
preparedStmtInsert.execute();
conn.commit();

From Oracle JDBC documentation:
When a connection is created, it is in auto-commit mode. This means
that each individual SQL statement is treated as a transaction and is
automatically committed right after it is executed. (To be more
precise, the default is for a SQL statement to be committed when it is
completed, not when it is executed. A statement is completed when all
of its result sets and update counts have been retrieved. In almost
all cases, however, a statement is completed, and therefore committed,
right after it is executed.)
The other thing is - you omitted the connection creation details, so I'm just guessing - if you are using some framework, or acquiring the connection from a DataSource or connection pool, autocommit may be turned off by that framework/pool/DataSource. The solution is to never trust default settings ;-)
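As a minimal sketch of that advice (the URL and credentials are placeholders, not your actual configuration): log the effective autocommit state right after acquiring the connection, and set it explicitly instead of relying on the default.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class AutoCommitCheck {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL and credentials - substitute your own
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/service", "user", "password")) {
            // See what the driver (or pool/framework) actually handed you
            System.out.println("autoCommit = " + conn.getAutoCommit());
            // Make the desired mode explicit so defaults cannot surprise you
            conn.setAutoCommit(true);
        }
    }
}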

Related

JDBC Source Connector Error: Transaction was aborted. It was wounded by a higher priority transaction due to conflict on keys in range

I am using JDBC source connector with the JDBC Driver for collecting data from Google Cloud Spanner to Kafka.
I am using "timestamp+incrementing" mode on a table. The primary key of the table includes 2 columns (order_item_id and order_id).
I used order_item_id for the incrementing column, and a column named "updated_time" for timestamp column.
When I started the connector, I sometimes got the following errors, but I was still able to get the data in the end.
ERROR Failed to run query for table TimestampIncrementingTableQuerier{table="order_item", query='null',
topicPrefix='test_', incrementingColumn='order_item_id', timestampColumns=[updated_time]}: {}
(io.confluent.connect.jdbc.source.JdbcSourceTask:404)
com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory$JdbcAbortedDueToConcurrentModificationException:
The transaction was aborted and could not be retried due to a concurrent modification
...
Caused by: com.google.cloud.spanner.AbortedDueToConcurrentModificationException:
The transaction was aborted and could not be retried due to a concurrent modification
...
Suppressed: com.google.cloud.spanner.connection.AbstractBaseUnitOfWork$SpannerAsyncExecutionException:
Execution failed for statement:
SELECT * FROM `order_item` WHERE `order_item`.`updated_time` < #p1 AND ((`order_item`.`updated_time` = #p2 AND `order_item`.`order_item_id` > #p3) OR `order_item`.`updated_time` > #p4) ORDER BY `order_item`.`updated_time`,`order_item`.`order_item_id` ASC
...
Caused by: com.google.cloud.spanner.AbortedException: ABORTED: io.grpc.StatusRuntimeException:
ABORTED: Transaction was aborted. It was wounded by a higher priority transaction due to conflict on keys in range [[5587892845991837697,5587892845991837702], [5587892845991837697,5587892845991837702]), column adjust in table order_item.
retry_delay {
nanos: 12974238
}
- Statement: 'SELECT * FROM `order_item` WHERE `order_item`.`updated_time` < #p1 AND ((`order_item`.`updated_time` = #p2 AND `order_item`.`order_item_id` > #p3) OR `order_item`.`updated_time` > #p4) ORDER BY `order_item`.`updated_time`,`order_item`.`order_item_id` ASC'
...
I am wondering how this error happens in my case. By the way, even with the error, the connector can still collect the data in the end. Can anyone help with it? Thank you so much!
I'm not sure exactly how your entire pipeline is set up, but the error indicates that you are executing the query in a read/write transaction. Any read/write transaction on Cloud Spanner can be aborted by Cloud Spanner, and may result in the error that you are seeing.
If your pipeline is only reading from Cloud Spanner, the best thing to do is to set your JDBC connection in read-only and autocommit mode. You can do this directly in your JDBC connection URL by adding the readonly=true and autocommit=true properties to the URL.
Example:
jdbc:cloudspanner:/projects/my-project/instances/my-instance/databases/my-database;readonly=true;autocommit=true
It could also be that the framework(s) you are using change the JDBC connection after it has been opened. In that case, you should check whether you can change that behaviour in the framework(s). But changing the JDBC URL as in the example above may very well be enough in this case.
Background information:
If the JDBC connection is opened with autocommit turned off and the connection is in read/write mode, then a read/write transaction will be started automatically when a query is executed. All subsequent queries will also use the same read/write transaction, until commit() is called on the connection. This is the least efficient way to read large amounts of data on Cloud Spanner and should therefore be avoided whenever possible. It will also cause aborted transactions, as the read operations take locks on the data they are reading.
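As a sketch (the project/instance/database names are placeholders), the same read-only/autocommit effect can also be requested through the standard JDBC API after the connection is opened, which helps when the URL itself is managed by a framework:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class ReadOnlySpannerRead {
    public static void main(String[] args) throws SQLException {
        String url = "jdbc:cloudspanner:/projects/my-project/instances/my-instance"
                + "/databases/my-database;readonly=true;autocommit=true";
        try (Connection conn = DriverManager.getConnection(url)) {
            // Standard JDBC equivalents of the URL properties above
            conn.setReadOnly(true);
            conn.setAutoCommit(true);
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT * FROM order_item")) {
                while (rs.next()) {
                    System.out.println(rs.getLong("order_item_id"));
                }
            }
        }
    }
}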

Redshift VACUUM cannot run inside a transaction block on SQLWorkbenchJ

I have got a:
VACUUM cannot run inside a transaction block
error on SQLWorkbenchJ in Redshift, but I have already committed all transactions before this.
You don't need to change the connection profile; you can change the autocommit property inside your SQL script "on-the-fly" with set autocommit:
set autocommit on;
vacuum;
set autocommit off;
You can also toggle the current autocommit state through the menu "SQL -> Autocommit"
For me, the following worked:
END TRANSACTION;
VACUUM <TABLENAME>;
Turning autocommit on and off seems like a hacky solution, particularly if you have a long script punctuated with commits and vacuums (i.e. lots of very large temp tables), and many report that Redshift does not like that syntax. Instead, try (on one line):
COMMIT;VACUUM;COMMIT;
The problem is that VACUUM not only wants to be the first command in a transaction block, it also wants the block to be explicitly committed afterwards.

Oracle two different sessions

In our work we created two .NET listeners.
The first one:
calls an Oracle stored procedure that inserts a bulk of data into a table (table1) using INSERT INTO ... SELECT syntax:
insert into table1 select c1,c2... from tbl2 inner join tbl3....
then we use an explicit commit;
The second listener:
calls an Oracle procedure that reads the data inserted into table1 via listener 1.
But we noticed that even after the records were inserted into table1, listener 2 couldn't see those records at the same time, even though commit was used.
My question is: how does commit work when we use INSERT ... SELECT?
Is this issue related to the session? Can listener 2 only read the data once listener 1's session ends?
Please help, thanks in advance.
You're using the wrong terms...
A listener is a server application that listens to incoming client requests and hands them to the DB engine. A listener is not used on the client end.
A session is not related to the data you can see; a transaction is the object that controls that.
Oracle works in a very clear way - after a transaction has committed, all new transactions can see its changes, and already existing transactions can see the new content depending on their transaction configuration.
I recommend reading about isolation levels in that context: http://msdn.microsoft.com/en-us/library/system.transactions.isolationlevel(v=vs.110).aspx
By default - the moment (which in the DB is defined by the SCN) a transaction has been committed - the data is visible to the client.
Bottom line - your issue is related either to the transaction isolation levels (in case the reading transaction started before the commit), or to the writer, which does not commit the data when you think it does (a transaction issue).
After the call to transaction.Commit() in .NET returns, the data is already visible, and other transactions can see it.
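For illustration, here is a minimal JDBC sketch (connection details are placeholders; the .NET IsolationLevel enum linked above expresses the same concept) of how the reading side's isolation level determines what it sees:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class ReaderIsolation {
    public static void main(String[] args) throws SQLException {
        // Placeholder connection details
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/service", "user", "password")) {
            // READ COMMITTED (Oracle's default): every query sees all data
            // committed before that query began - including the writer's commit.
            conn.setTransactionIsolation(Connection.TRANSACTION_READ_COMMITTED);
            // SERIALIZABLE: the transaction sees only data committed before the
            // transaction itself began - a later commit stays invisible to it.
            // conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        }
    }
}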
Your second question was how commit works.
This is a very complicated process in Oracle, so I'll give a really short description:
1. When you commit, Oracle first runs some verifications before the commit itself (for example, it runs the deferred constraints).
2. Once Oracle knows it can safely commit the changes, it gets the system time (SCN), writes the commit record to the redo log, and flushes the redo to disk (for consistency).
3. It sends an ACK to the user that the data is now visible to the world.
4. It marks the buffers that were used as free.
Something I want to add, just to make sure (I'm writing this half asleep - so excuse me if it does not compile...):
In your .NET code, your code should be logically equivalent to this:
OracleConnection con = new OracleConnection(connStr);
con.Open();
OracleTransaction trans = con.BeginTransaction();
OracleCommand cmd = con.CreateCommand();
cmd.Connection = con;    // fixed: the original had "cmd.Connection = cmd;"
cmd.Transaction = trans; // enlist the command in the explicit transaction
cmd.CommandText = "insert into ...";
cmd.ExecuteNonQuery();
cmd.Dispose();
trans.Commit();
trans.Dispose();
con.Close();
con.Dispose();
and if you're using LINQ - make sure you create the transaction scope in the right place.

Hibernate 4.3.0.Final causes an ORA-01000: maximum open cursors exceeded [duplicate]

I am getting an ORA-01000 SQL exception. So I have some queries related to it.
Are maximum open cursors exactly related to the number of JDBC connections, or are they also related to the Statement and ResultSet objects we have created for a single connection? (We are using a pool of connections)
Is there a way to configure the number of statement/resultset objects in the database (like connections) ?
Is it advisable to use instance variable statement/resultset object instead of method local statement/resultset object in a single threaded environment ?
Does executing a prepared statement in a loop cause this issue ? (Of course, I could have used sqlBatch) Note: pStmt is closed once loop is over.
{ //method try starts
String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
pStmt = obj.getConnection().prepareStatement(sql);
pStmt.setLong(1, subscriberID);
for (String language : additionalLangs) {
pStmt.setInt(2, Integer.parseInt(language));
pStmt.execute();
}
} //method/try ends
{ //finally starts
pStmt.close()
} //finally ends
What will happen if conn.createStatement() and conn.prepareStatement(sql) are called multiple times on single connection object ?
Edit1:
6. Will the use of Weak/Soft reference statement object help in preventing the leakage ?
Edit2:
1. Is there any way I can find all the missing statement.close() calls in my project? I understand it is not a memory leak, but I need to find the statement references (where close() was not performed) that are eligible for garbage collection. Is there any tool available, or do I have to analyze it manually?
Please help me understand it.
Solution
To find the opened cursors in the Oracle DB for username VELU:
Go to the ORACLE machine and start sqlplus as sysdba:
[oracle@db01 ~]$ sqlplus / as sysdba
Then run
SELECT A.VALUE,
S.USERNAME,
S.SID,
S.SERIAL#
FROM V$SESSTAT A,
V$STATNAME B,
V$SESSION S
WHERE A.STATISTIC# = B.STATISTIC#
AND S.SID = A.SID
AND B.NAME = 'opened cursors current'
AND USERNAME = 'VELU';
If possible, please read my answer for a better understanding of my solution.
ORA-01000, the maximum-open-cursors error, is an extremely common error in Oracle database development. In the context of Java, it happens when the application attempts to open more ResultSets than there are configured cursors on a database instance.
Common causes are:
Configuration mistake
You have more threads in your application querying the database than cursors on the DB. One case is where you have a connection and thread pool larger than the number of cursors on the database.
You have many developers or applications connected to the same DB instance (which will probably include many schemas) and together you are using too many connections.
Solution:
Increasing the number of cursors on the database (if resources allow) or
Decreasing the number of threads in the application.
Cursor leak
The application is not closing ResultSets (in JDBC) or cursors (in stored procedures on the database)
Solution: Cursor leaks are bugs; increasing the number of cursors on the DB simply delays the inevitable failure. Leaks can be found using static code analysis, JDBC or application-level logging, and database monitoring.
Background
This section describes some of the theory behind cursors and how JDBC should be used. If you don't need to know the background, you can skip this and go straight to 'Eliminating Leaks'.
What is a cursor?
A cursor is a resource on the database that holds the state of a query, specifically the position where a reader is in a ResultSet. Each SELECT statement has a cursor, and PL/SQL stored procedures can open and use as many cursors as they require. You can find out more about cursors on Orafaq.
A database instance typically serves several different schemas and many different users, each with multiple sessions. To do this, it has a fixed number of cursors available across all schemas, users and sessions. When all cursors are open (in use) and a request comes in that requires a new cursor, the request fails with an ORA-01000 error.
Finding and setting the number of cursors
The number is normally configured by the DBA on installation. The number of cursors currently in use, the maximum number and the configuration can be accessed in the Administrator functions in Oracle SQL Developer. From SQL it can be set with:
ALTER SYSTEM SET OPEN_CURSORS=1337 SID='*' SCOPE=BOTH;
Relating JDBC in the JVM to cursors on the DB
The JDBC objects below are tightly coupled to the following database concepts:
JDBC Connection is the client representation of a database session and provides database transactions. A connection can have only a single transaction open at any one time (but transactions can be nested)
A JDBC ResultSet is supported by a single cursor on the database. When close() is called on the ResultSet, the cursor is released.
A JDBC CallableStatement invokes a stored procedure on the database, often written in PL/SQL. The stored procedure can create zero or more cursors, and can return a cursor as a JDBC ResultSet.
JDBC is thread safe: It is quite OK to pass the various JDBC objects between threads.
For example, you can create the connection in one thread; another thread can use this connection to create a PreparedStatement and a third thread can process the result set. The single major restriction is that you cannot have more than one ResultSet open on a single PreparedStatement at any time. See Does Oracle DB support multiple (parallel) operations per connection?
Note that a database commit occurs on a Connection, and so all DML (INSERT, UPDATE and DELETE's) on that connection will commit together. Therefore, if you want to support multiple transactions at the same time, you must have at least one Connection for each concurrent Transaction.
Closing JDBC objects
A typical example of executing a ResultSet is:
Statement stmt = conn.createStatement();
try {
ResultSet rs = stmt.executeQuery( "SELECT FULL_NAME FROM EMP" );
try {
while ( rs.next() ) {
System.out.println( "Name: " + rs.getString("FULL_NAME") );
}
} finally {
try { rs.close(); } catch (Exception ignore) { }
}
} finally {
try { stmt.close(); } catch (Exception ignore) { }
}
Note how the finally clause ignores any exception raised by the close():
If you simply close the ResultSet without the try {} catch {}, it might fail and prevent the Statement being closed
We want to allow any exception raised in the body of the try to propagate to the caller.
If you have a loop over, for example, creating and executing Statements, remember to close each Statement within the loop.
In Java 7, Oracle has introduced the AutoCloseable interface which replaces most of the Java 6 boilerplate with some nice syntactic sugar.
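For example, a sketch of the same query rewritten with try-with-resources (conn as in the example above); both resources are closed automatically, in reverse order, even when an exception is thrown:
try (Statement stmt = conn.createStatement();
     ResultSet rs = stmt.executeQuery("SELECT FULL_NAME FROM EMP")) {
    while (rs.next()) {
        System.out.println("Name: " + rs.getString("FULL_NAME"));
    }
}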
Holding JDBC objects
JDBC objects can be safely held in local variables, object instance and class members. It is generally better practice to:
Use object instance or class members to hold JDBC objects that are reused multiple times over a longer period, such as Connections and PreparedStatements
Use local variables for ResultSets since these are obtained, looped over and then closed typically within the scope of a single function.
There is, however, one exception: If you are using EJBs, or a Servlet/JSP container, you have to follow a strict threading model:
Only the Application Server creates threads (with which it handles incoming requests)
Only the Application Server creates connections (which you obtain from the connection pool)
When saving values (state) between calls, you have to be very careful. Never store values in your own caches or static members - this is not safe across clusters and other weird conditions, and the Application Server may do terrible things to your data. Instead use stateful beans or a database.
In particular, never hold JDBC objects (Connections, ResultSets, PreparedStatements, etc) over different remote invocations - let the Application Server manage this. The Application Server not only provides a connection pool, it also caches your PreparedStatements.
Eliminating leaks
There are a number of processes and tools available to help detect and eliminate JDBC leaks:
During development - catching bugs early is by far the best approach:
Development practices: Good development practices should reduce the number of bugs in your software before it leaves the developer's desk. Specific practices include:
Pair programming, to educate those without sufficient experience
Code reviews because many eyes are better than one
Unit testing which means you can exercise any and all of your code base from a test tool which makes reproducing leaks trivial
Use existing libraries for connection pooling rather than building your own
Static Code Analysis: Use a tool like the excellent Findbugs to perform a static code analysis. This picks up many places where the close() has not been correctly handled. Findbugs has a plugin for Eclipse, but it also runs standalone for one-offs, has integrations into Jenkins CI and other build tools
At runtime:
Holdability and commit
If the ResultSet holdability is ResultSet.CLOSE_CURSORS_AT_COMMIT, then the ResultSet is closed when the Connection.commit() method is called. This can be set using Connection.setHoldability() or by using the overloaded Connection.createStatement() method.
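A minimal sketch (assuming your driver actually supports this holdability; conn is an existing Connection):
// Connection-wide: close open cursors whenever the transaction commits
conn.setHoldability(ResultSet.CLOSE_CURSORS_AT_COMMIT);

// Or per statement, via the overloaded createStatement
Statement stmt = conn.createStatement(
        ResultSet.TYPE_FORWARD_ONLY,
        ResultSet.CONCUR_READ_ONLY,
        ResultSet.CLOSE_CURSORS_AT_COMMIT);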
Logging at runtime.
Put good log statements in your code. These should be clear and understandable so the customer, support staff and teammates can understand without training. They should be terse and include printing the state/internal values of key variables and attributes so that you can trace processing logic. Good logging is fundamental to debugging applications, especially those that have been deployed.
You can add a debugging JDBC driver to your project (for debugging - don't actually deploy it). One example (I have not used it) is log4jdbc. You then need to do some simple analysis on this file to see which executes don't have a corresponding close. Counting the open and closes should highlight if there is a potential problem
Monitoring the database. Monitor your running application using the tools such as the SQL Developer 'Monitor SQL' function or Quest's TOAD. Monitoring is described in this article. During monitoring, you query the open cursors (eg from table v$sesstat) and review their SQL. If the number of cursors is increasing, and (most importantly) becoming dominated by one identical SQL statement, you know you have a leak with that SQL. Search your code and review.
Other thoughts
Can you use WeakReferences to handle closing connections?
Weak and soft references are ways of allowing you to reference an object in a way that allows the JVM to garbage collect the referent at any time it deems fit (assuming there are no strong reference chains to that object).
If you pass a ReferenceQueue into the constructor of the soft or weak Reference, the object is placed in the ReferenceQueue when the object is GC'ed (if that ever occurs). With this approach, you can hook into the object's finalization and close or finalize the object at that moment.
Phantom references are a bit weirder; their purpose is only to control finalization, but you can never get a reference to the original object, so it's going to be hard to call the close() method on it.
However, it is rarely a good idea to attempt to control when the GC is run (Weak, Soft and PhantomReferences let you know after the fact that the object is enqueued for GC). In fact, if the amount of memory in the JVM is large (eg -Xmx2000m) you might never GC the object, and you will still experience the ORA-01000. If the JVM memory is small relative to your program's requirements, you may find that the ResultSet and PreparedStatement objects are GCed immediately after creation (before you can read from them), which will likely fail your program.
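To make the mechanism concrete, here is a sketch (for illustration only, per the TL;DR below; the class and method names are invented) of tracking statements with a WeakReference and a ReferenceQueue:
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;
import java.lang.ref.WeakReference;
import java.sql.Statement;

public class StatementTracker {
    private final ReferenceQueue<Statement> queue = new ReferenceQueue<>();

    public WeakReference<Statement> track(Statement stmt) {
        // The referent may be collected whenever the GC deems fit
        return new WeakReference<>(stmt, queue);
    }

    public void reportCollected() {
        Reference<? extends Statement> ref;
        while ((ref = queue.poll()) != null) {
            // By the time a reference is enqueued the Statement is already gone,
            // so close() can no longer be called - we can only observe that a
            // statement was dropped without being closed explicitly.
            System.out.println("A tracked Statement was garbage collected unclosed");
        }
    }
}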
TL;DR: The weak reference mechanism is not a good way to manage and close Statement and ResultSet objects.
I am adding a few more points of understanding.
A cursor is only about a Statement object; it is neither the ResultSet nor the Connection object.
We still have to close ResultSets to free some Oracle memory, but an unclosed ResultSet is not counted against the cursor limit.
Closing the Statement object will automatically close its ResultSet object too.
A cursor is created for every SELECT/INSERT/UPDATE/DELETE statement.
Each ORACLE DB instance can be identified by an Oracle SID; similarly, ORACLE identifies each connection by a session SID. The two SIDs are different things.
So an ORACLE session is nothing but a JDBC (TCP) connection, which is one SID.
If we set maximum cursors to 500, that limit applies per JDBC session/connection/SID.
So we can have many JDBC connections, each with its own number of cursors (statements).
Once the JVM is terminated, all connections/cursors are closed; likewise, when a JDBCConnection is closed, the cursors belonging to that connection are closed.
Log in as sysdba.
In PuTTY (Oracle login):
[oracle@db01 ~]$ sqlplus / as sysdba
In SQL*Plus:
UserName: sys as sysdba
Set the session_cached_cursors value to 0 so that closed cursors are not cached:
alter session set session_cached_cursors=0;
select * from V$PARAMETER where name='session_cached_cursors';
Select the existing OPEN_CURSORS value set per connection in the DB:
SELECT MAX(a.value) AS highest_open_cur,
       p.value AS max_open_cur
FROM v$sesstat a, v$statname b, v$parameter p
WHERE a.statistic# = b.statistic#
AND b.name = 'opened cursors current'
AND p.name = 'open_cursors'
GROUP BY p.value;
Below is the query to find the SID/connections list with open cursor values.
SELECT a.value, s.username, s.sid, s.serial#
FROM v$sesstat a, v$statname b, v$session s
WHERE a.statistic# = b.statistic# AND s.sid=a.sid
AND b.name = 'opened cursors current' AND username = 'SCHEMA_NAME_IN_CAPS'
Use the query below to identify the SQL in the open cursors:
SELECT oc.sql_text, s.sid
FROM v$open_cursor oc, v$session s
WHERE OC.sid = S.sid
AND s.sid=1604
AND OC.USER_NAME ='SCHEMA_NAME_IN_CAPS'
Now debug the Code and Enjoy!!! :)
Correct your code like this:
try { //method try starts
    String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
    pStmt = obj.getConnection().prepareStatement(sql);
    pStmt.setLong(1, subscriberID);
    for (String language : additionalLangs) {
        pStmt.setInt(2, Integer.parseInt(language));
        pStmt.execute();
    }
} //method/try ends
finally { //finally starts
    if (pStmt != null) {
        pStmt.close();
    }
} //finally ends
Are you sure that you're really closing your statements, connections and result sets?
To analyze open objects you can implement a delegator pattern, which wraps code around your statement, connection and result objects. So you'll see if an object is successfully closed.
An example for: pStmt = obj.getConnection().prepareStatement(sql);
class obj {
    public Connection getConnection() {
        // ...here create your real connection object and wrap it in the delegator...
        return new ConnectionDelegator(...);
    }
}

class ConnectionDelegator implements Connection {
    Connection delegates;

    public ConnectionDelegator(Connection con) {
        this.delegates = con;
    }

    public PreparedStatement prepareStatement(String sql) throws SQLException {
        return delegates.prepareStatement(sql);
    }

    public void close() throws SQLException {
        try {
            delegates.close();
        } finally {
            log.debug(delegates.toString() + " was closed");
        }
    }

    // ...the remaining Connection methods delegate to "delegates" in the same way
}
If your application is a Java EE application running on Oracle WebLogic as the application server, a possible cause for this issue is the Statement Cache Size setting in WebLogic.
If the Statement Cache Size setting for a particular data source is about equal to, or greater than, the Oracle database maximum open cursor count setting, then all of the open cursors can be consumed by cached SQL statements that are held open by WebLogic, resulting in the ORA-01000 error.
To address this, reduce the Statement Cache Size setting for each WebLogic datasource that points to the Oracle database to be significantly less than the maximum cursor count setting on the database.
In the WebLogic 10 Admin Console, the Statement Cache Size setting for each data source can be found at Services (left nav) > Data Sources > (individual data source) > Connection Pool tab.
I too faced this issue. The exception below used to occur:
java.sql.SQLException: - ORA-01000: maximum open cursors exceeded
I was using the Spring Framework with Spring JDBC for the DAO layer.
My application leaked cursors somehow, and after a few minutes or so it used to give me this exception.
After a lot of thorough debugging and analysis, I found that the problem was with the indexing, primary key and unique constraints in one of the tables used in the query I was executing.
My application was trying to update columns which were mistakenly indexed.
So, whenever my application hit the update query on the indexed columns, the database tried to do reindexing based on the updated values. It was leaking the cursors.
I was able to solve the problem by doing proper indexing on the columns used for searching in the query and applying appropriate constraints wherever required.
I faced the same problem (ORA-01000) today. I had a for loop in the try{} to execute a SELECT statement in an Oracle DB many times (each time changing a parameter), and in the finally{} I had my code to close the ResultSet, PreparedStatement and Connection as usual. But as soon as I reached a specific number of loops (1000) I got the Oracle error about too many open cursors.
Based on the post by Andrew Alcock above, I made changes so that inside the loop, I closed each ResultSet and each Statement after getting the data and before looping again, and that solved the problem.
Additionally, the exact same problem occurred in another loop of INSERT statements, in another Oracle DB (ORA-01000), this time after 300 statements. Again it was solved in the same way, so either the PreparedStatement or the ResultSet or both count as open cursors until they are closed.
Did you set autocommit=true? If not, try this:
try { //method try starts
    String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
    Connection conn = obj.getConnection();
    pStmt = conn.prepareStatement(sql);
    for (String language : additionalLangs) {
        pStmt.setLong(1, subscriberID);
        pStmt.setInt(2, Integer.parseInt(language));
        pStmt.execute();
        conn.commit();
    }
} //method/try ends
finally { //finally starts
    pStmt.close();
} //finally ends
Query to find the SQL statements whose cursors were left open:
SELECT s.machine, oc.user_name, oc.sql_text, count(1)
FROM v$open_cursor oc, v$session s
WHERE oc.sid = s.sid
and S.USERNAME='XXXX'
GROUP BY user_name, sql_text, machine
HAVING COUNT(1) > 2
ORDER BY count(1) DESC
This problem mainly happens when you are using connection pooling, because when you close a connection, that connection goes back to the connection pool, and the cursors associated with it are never closed, as the connection to the database is still open.
So one alternative is to decrease the idle connection time of connections in the pool, so that whenever a connection sits idle in the pool for, say, 10 seconds, the connection to the database is closed and a new connection is created to put in the pool; a configuration sketch follows below.
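As an illustration only (the original answer does not name a pool; HikariCP is used here purely as an example, and the 10-second figure mirrors the one above):
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolIdleConfig {
    public static HikariDataSource createPool() {
        HikariConfig config = new HikariConfig();
        // Placeholder URL and credentials
        config.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/service");
        config.setUsername("user");
        config.setPassword("password");
        config.setMinimumIdle(0);       // allow idle connections to be retired
        config.setIdleTimeout(10_000L); // close connections idle for ~10 seconds
        return new HikariDataSource(config);
    }
}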
Using batch processing will result in less overhead. See the following link for examples:
http://www.tutorialspoint.com/jdbc/jdbc-batch-processing.htm
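As a sketch of the idea (reusing the hypothetical table, pStmt and loop variables from the answers above):
String sql = "INSERT into TblName (col1, col2) VALUES(?, ?)";
conn.setAutoCommit(false); // group the batch into one explicit transaction
try (PreparedStatement pStmt = conn.prepareStatement(sql)) {
    for (String language : additionalLangs) {
        pStmt.setLong(1, subscriberID);
        pStmt.setInt(2, Integer.parseInt(language));
        pStmt.addBatch(); // queue the row instead of executing it immediately
    }
    pStmt.executeBatch(); // one round trip for all queued rows
    conn.commit();
}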
In our case, we were using Hibernate and we had many variables referencing the same Hibernate mapped entity. We were creating and saving these references in a loop. Each reference opened a cursor and kept it open.
We discovered this by using a query to check the number of open cursors while running our code, stepping through with a debugger and selectively commenting things out.
As to why each new reference opened another cursor - the entity in question had collections of other entities mapped to it and I think this had something to do with it (perhaps not just this alone but in combination with how we had configured the fetch mode and cache settings). Hibernate itself has had bugs around failing to close open cursors, though it looks like these have been fixed in later versions.
Since we didn't really need to have so many duplicate references to the same entity anyway, the solution was to stop creating and holding onto all those redundant references. Once we did that, the problem went away.
I had this problem with my datasource in WildFly and Tomcat, connecting to an Oracle 10g.
I found that under certain conditions the statement wasn't closed even when statement.close() was invoked.
The problem was with the Oracle driver we were using: ojdbc7.jar. This driver is intended for Oracle 12c and 11g, and it seems to have some issues when used with Oracle 10g, so I downgraded to ojdbc5.jar and now everything is running fine.
I faced the same issue because I was querying the DB for more than 1000 iterations.
I had used try and finally in my code, but was still getting the error.
To solve it I just logged into the Oracle DB and ran the query below:
ALTER SYSTEM SET open_cursors = 8000 SCOPE=BOTH;
And this solved my problem immediately.
I ran into this issue after setting the prepared statement cache size to a large value. Apparently, when prepared statements are kept in cache, the cursor stays open.

JDBC call to Oracle 11.1.0.7.0 DB blocked

The JDBC call blocks and does not return; below is the stack trace.
Oracle server = 11.1.0.7
Oracle thin driver used at the client
Would appreciate your help....
at java.net.SocketInputStream.socketRead0(Native Method)
at java.net.SocketInputStream.read(SocketInputStream.java:140)
at oracle.net.ns.Packet.receive(Packet.java:240)
at oracle.net.ns.DataPacket.receive(DataPacket.java:92)
at oracle.net.ns.NetInputStream.getNextPacket(NetInputStream.java:172)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:117)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:92)
at oracle.net.ns.NetInputStream.read(NetInputStream.java:77)
at oracle.jdbc.driver.T4CMAREngine.unmarshalUB1(T4CMAREngine.java:1034)
at oracle.jdbc.driver.T4CMAREngine.unmarshalSB1(T4CMAREngine.java:1010)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:588)
at oracle.jdbc.driver.T4CStatement.doOall8(T4CStatement.java:183)
at oracle.jdbc.driver.T4CStatement.executeForDescribe(T4CStatement.java:780)
at oracle.jdbc.driver.T4CStatement.executeMaybeDescribe(T4CStatement.java:855)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1377)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1186)
at oracle.jdbc.driver.OracleStatement.executeQuery(OracleStatement.java:1377)
at oracle.jdbc.driver.OracleStatementWrapper.executeQuery(OracleStatementWrapper.java:387)
at oracle.jdbc.driver.OracleDatabaseMetaData.getTypeInfo(OracleDatabaseMetaData.java:670)
There might be a few reasons:
This thread is locked in the DB and is waiting for another thread to commit (or roll back).
This can be a firewall issue; the firewall may handle stale connections inappropriately.
You can find more information here: http://forums.oracle.com/forums/thread.jspa?messageID=4354229
If you are running a stored procedure, it is apparently a JDBC Thin driver bug.
Update your JDBC driver to 11.2.0.2; it's supposed to fix the hang-on-getNextPacket issue:
Oracle JDBC Drivers release 11.2.0.2 Readme.txt
===============================================
Note: this readme is specific to fixes in 11.2.0.2 patch-set; see
the master readme of the 11.2 release at http://download.oracle.com/otn/utilities_drivers/jdbc/112/Readme.txt
Bug# Description
======= ===========================================================================================
2935043 SQLException Invalid conversion error when binding short JDBC type to Oracle number column
5071698 PropertyCheckInterval of zero causes high CPU
6748410 registerOutParameter does not perform well in JDBC Thin
7208615 NUMBER column shows as precision 22 in JDBC
7281435 ORA-30006 during xa_commit cause XAER_RMERR in JDBC
8588311 PreparedStatement.setBinaryStream() does not read all the data from the stream
8592559 JDBC Thin cannot fetch PlsqlIndexByTable with more than 32k items
8617180 ORA-1458 error with large batch sizes using JDBC OCI driver
8832811 Non ASCII characters inserted into US7ASCII DB using JDBC Thin
8834344 Binding date of the Julian calendar throws IllegalArgumentException
8873676 JDBC Thin throws SqlException while reading invalid characters
8874882 ORA-22922 reusing large string bind for a LOB in JDBC
8889839 XA_RMERR being thrown on the recover(TMNOFLAGS) call from JDBC
8891187 JDBC does not close the connection after a fatal error
8980899 JDBC Thin new property enableDataInLocator for LOB data
8980918 JDBC Thin should use "data in locator" feature to save round-trips for small Lobs
8982104 Add JDBC support for SQLXML
9045206 11.2 JDBC driver returns zero rows with REF CURSOR OUT parameter
9099863 ps.setbytes on BLOB columns in batch does not inherit value to following lines
9105438 ORA-22275 during ps.executeBatch with LOBs
9121586 ORA-22925 getting large LOB via JDBC Thin 11.2
9139227 Wrong error code on JDBC connection failure
9147506 Named parameter in callable statement not working from JDBC
9180882 JDBC Statement.Execute does not handle comments as first elements for INSERT
9197956 JDBC Data Change Notification fails with IllegalArgumentException
9240210 Silent truncation reading 4gb BLOB with JDBC Thin 11.2
9259830 DatabaseChangeNotification fails to clean up
9260568 isValidObjectName() rejects valid object names
9341542 getmetadata().getindexinfo fails with quoted table names (ORA-947)
9341742 setBinaryStream causes dump/ORA-24804 if an unread stream is bound to a DML
9374132 Territory is allowed to be NULL resulting in ORA-12705
9394224 Poor performance for batch PreparedStatement execute with XMLType or objects.
9445675 "No more data" / ORA-3137 using end to end metrics with JDBC Thin
9468517 JDBC OCI and OCI do not behave the same on TAF failover
9491385 Memory not released using registerIndexTableOutParameter in JDBC Thin
9491954 RuntimeException "Assertion botch: negative time" from Timestamp bind
9660015 JDBC Thin hangs by waiting getnextpacket when calling stored procedure
9767715 TIMESTAMPTZ stringvalue truncates leading zeros in decimal part
9786503 Cannot use OS authentication with OracleXADataSource
Check out bug #9660015.
Hope it helps.
It could be a firewall issue. Did you try checking whether the connections were active in the DB, or whether the DB sent the 10-byte ping data to these specific connections and it was OK?
If JDBC is able to create the PreparedStatement from the connections, it means the connections were OK from the client's perspective. However, what is there between the DB and the client? A firewall? A router? Check their settings.
