C3P0 connection pool reports "ORA-01000: maximum open cursors exceeded" (Oracle)

I am working with Hibernate 3.6 and c3p0-0.9.2-pre1. I am getting the exception "ORA-01000: maximum open cursors exceeded" while using the c3p0 configuration in hibernate.cfg.xml.
I have an Oracle DatabaseSchema.conf file containing around 200 CREATE TABLE statements. While validating that database schema, I ran into a strange scenario:
If I do not use connection pooling, there is no problem: once the Connection is closed, the database releases the opened cursors, and the open cursor count does not increase significantly; it stays under the default open_cursors limit of 300.
But if connection pooling is used, the open cursor count increases significantly, crosses the default open_cursors limit of 300, and the exception "ORA-01000: maximum open cursors exceeded" is reported.
I have already tried the following approaches:
Set the following properties to '0' in hibernate.cfg.xml, in order to disable c3p0's statement cache:
property name="hibernate.c3p0.max_statements"
property name="hibernate.c3p0.maxStatementsPerConnection"
After closing the Session, Connection, and ResultSet, assign them null (see the consolidated sketch after this question):
SESSION:
if (session != null && session.isOpen()) {
    session.close();
    session = null; // assign null
}
RESULTSET:
if (rs != null) {
    rs.close();
    rs = null; // assign null
}
CONNECTION:
if (conn != null) {
    conn.close();
    conn = null; // assign null
}
Tried various permutations and combinations of the C3P0 configuration defined in hibernate.cfg.xml.
For example, I incrementally increased numHelperThreads and observed the OPEN_CURSOR count, and changed the pool's min and max size; I varied the other settings in the same way.
Used the connection provider class in the C3P0 configuration and imported the hibernate-c3p0 jar compatible with Hibernate 3.6.
Upgraded the Hibernate version from 3.6 to 5.2.
Used the DBCP connection pooling mechanism instead of C3P0.
Every approach except the last one (switching to DBCP) still resulted in high OPEN_CURSOR counts; only DBCP gave the desired outcome.
What else should I try in order to make this work with C3P0?
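For reference, a minimal sketch of the consolidated closing pattern referred to above: try-with-resources guarantees that the PreparedStatement (each open statement holds an Oracle cursor until it is closed) and its ResultSet are closed even when an exception is thrown, and the V$SESSTAT/V$STATNAME views can be used to watch the session's 'opened cursors current' value. The class name and SQL are illustrative only and assume the application account is allowed to read the V$ views:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class CursorCheck {

    // Runs a query with try-with-resources so the statement and result set
    // are closed deterministically, then reports this session's cursor count.
    static void runAndReport(Connection conn, String sql) throws SQLException {
        try (PreparedStatement ps = conn.prepareStatement(sql);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                // process the row ...
            }
        } // ps and rs are closed here, releasing their cursor

        // Ask Oracle how many cursors this session currently holds open
        String cursorSql =
            "SELECT s.value FROM v$sesstat s " +
            "JOIN v$statname n ON n.statistic# = s.statistic# " +
            "WHERE n.name = 'opened cursors current' " +
            "AND s.sid = SYS_CONTEXT('USERENV', 'SID')";
        try (PreparedStatement ps = conn.prepareStatement(cursorSql);
             ResultSet rs = ps.executeQuery()) {
            if (rs.next()) {
                System.out.println("opened cursors current = " + rs.getLong(1));
            }
        }
    }
}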

Related

io.vertx.oracleclient.OracleException: Error : 1000, Position : 0, Sql = SET TRANSACTION ISOLATION LEVEL READ COMMITTED

I have a question regarding Oracle cursors. I upgraded my project to the latest Vert.x version and now I have started to see some errors. I have one SQL verticle which is actively used, and it throws an exception after a while. You can see it below:
"io.vertx.oracleclient.OracleException: Error : 1000, Position : 0, Sql = SET TRANSACTION ISOLATION LEVEL READ COMMITTED, OriginalSql = SET TRANSACTION ISOLATION LEVEL READ COMMITTED, Error Msg = ORA-01000: maximum open cursors exceeded"
I am using the withTransaction method from vertx-sql-client:4.2.7 and I am not sure whether I am supposed to manage the cursors myself or Vert.x manages them. I expected the cursors to already be managed by Vert.x. Has anybody seen such an error, or does anyone have a comment? Thanks in advance.
Also, any idea why Vert.x opens a cursor for the SQL "SET TRANSACTION ISOLATION LEVEL READ COMMITTED" and does not close it?
Example code :
override suspend fun processItem(callback: Callback) {
    pool.withTransaction { conn ->
        conn.query("select * from TABLE where ROWNUM='1'").execute()
    }
}
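For reference, the same call written against the Java API of vertx-sql-client (pool is assumed to be an io.vertx.sqlclient.Pool): withTransaction acquires a connection, begins the transaction, runs the supplied function, then commits on success or rolls back on failure and releases the connection, so the only caller-side handle is the returned Future. This is just a sketch of the documented usage, not a fix for the cursor growth:

import io.vertx.core.Future;
import io.vertx.sqlclient.Pool;
import io.vertx.sqlclient.Row;
import io.vertx.sqlclient.RowSet;

public class ProcessItem {

    // Java form of the Kotlin snippet above.
    static Future<RowSet<Row>> processItem(Pool pool) {
        return pool
            .withTransaction(conn ->
                conn.query("select * from TABLE where ROWNUM='1'").execute())
            .onFailure(err -> System.err.println("query failed: " + err.getMessage()));
    }
}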

What is the meaning of "Fb::Error: A transaction has already been started"

I have a Ruby application that crashes sometimes with this error message:
Fb::Error: A transaction has already been started)
I'm now wondering what this message means. I searched a little and read that Firebird does not support nested transactions. Could the message be hinting at this? If not, what else could it mean?
This is not a Firebird error message. It is an error message in the driver you're using. Specifically here:
static void fb_connection_transaction_start(struct FbConnection *fb_connection, VALUE opt)
{
    char *tpb = 0;
    long tpb_len;
    if (fb_connection->transact) {
        rb_raise(rb_eFbError, "A transaction has been already started");
    }
    if (!NIL_P(opt)) {
        tpb = trans_parseopts(opt, &tpb_len);
    } else {
        tpb_len = 0;
        tpb = NULL;
    }
    isc_start_transaction(fb_connection->isc_status, &fb_connection->transact, 1, &fb_connection->db, tpb_len, tpb);
    xfree(tpb);
    fb_error_check(fb_connection->isc_status);
}
Without in-depth familiarity with this driver, I'm guessing the problem is that you're trying to start a transaction on a connection that already has an active transaction.
Firebird itself supports multiple parallel transactions on a single connection, and it supports nested transactions in the form of SQL standard savepoints, but it looks like the driver you're using doesn't support this.
The solution (or workaround) would seem to be either not to start a transaction when you already have an active one, or to first commit or roll back the existing transaction before starting a new one.

java.sql.SQLException: Could not commit with auto-commit set on

I have a few insert and update operations in my application. Everything runs fine on the Tomcat server, but while deploying on the Oracle WebLogic server I get the exception below:
java.sql.SQLException: Could not commit with auto-commit set on
In my executeUpdate method, I set the DatabaseConnection's auto-commit to false at the beginning:
dbConnection.setAutoCommit(false, id);
After the PreparedStatement's executeUpdate, if the returned integer is > 0 I set auto-commit back to true, something like below:
dbConnection.setIsolationLevel(2, id);
count = preparedStatement.executeUpdate();
if (count > 0)
    dbConnection.setAutoCommit(true, id);
After all the operations, in the finally block we check whether the DatabaseConnection is null and then close it, something like below:
if (dbConnection != null)
{
    dbConnection.close(tranid);
    dbConnection = null;
}
The close method is implemented something like below (shown simplified), inside a try-catch, and the message in that catch block is getting printed:
if (connection != null)
{
    connection.commit();
    connection.close();
    connection = null;
}
Can someone please help me out with this? A proper commit should occur in the live environment. I tried setting auto-commit to false in the part below and it worked without the SQLException:
if (count > 0)
    dbConnection.setAutoCommit(false, id);
My worry is that this is not the solution I'm looking for, as it causes problems in the live environment.
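For reference, the JDBC rule behind this exception is that Connection.commit() may only be called while auto-commit is disabled; WebLogic appears to enforce this strictly, whereas the Tomcat setup evidently lets it pass. A minimal sketch of a close helper that guards the commit, assuming direct access to the underlying java.sql.Connection rather than the custom DatabaseConnection wrapper (class and method names are illustrative):

import java.sql.Connection;
import java.sql.SQLException;

public final class ConnectionCloser {

    // Commits only when auto-commit is off, then always closes the connection.
    // This avoids "Could not commit with auto-commit set on".
    public static void commitAndClose(Connection connection) throws SQLException {
        if (connection == null) {
            return;
        }
        try {
            if (!connection.getAutoCommit()) {
                connection.commit(); // manual transaction: commit explicitly
            }
            // if auto-commit is on, each statement was already committed
        } finally {
            connection.close();
        }
    }
}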

Timeout and out of memory errors reading large table using jdbc drivers

I am attempting to read a large table from an Oracle database into a Spark DataFrame using Spark's native read.jdbc in Scala. I have tested this with small and medium-sized tables (up to 11M rows) and it works just fine. However, when attempting to bring in a larger table (~70M rows) I keep getting errors.
Sample code to show how I am reading this in:
val df = sparkSession.read.jdbc(
  url = jdbcUrl,
  table = "(SELECT * FROM keyspace.table WHERE EXTRACT(year FROM date_column) BETWEEN 2012 AND 2016)",
  columnName = "id_column", // numeric column, 40% NULL
  lowerBound = 1L,
  upperBound = 100000L,
  numPartitions = 60, // same as number of cores
  connectionProperties = connectionProperties) // this contains login & password
I am attempting to parallelise the operation, as I am using a cluster with 60 cores and 6 x 32GB RAM dedicated to this app. However, I still keep getting errors relating to timeouts and out of memory issues, such as:
17/08/16 14:01:18 WARN Executor: Issue communicating with driver in heartbeater
org.apache.spark.rpc.RpcTimeoutException: Futures timed out after [10 seconds]. This timeout is controlled by spark.executor.heartbeatInterval
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:47)
....
Caused by: java.util.concurrent.TimeoutException: Futures timed out after [10 seconds}
...
17/08/16 14:17:14 ERROR RetryingBlockFetcher: Failed to fetch block rdd_2_89, and will not retry (0 retries)
org.apache.spark.network.client.ChunkFetchFailureException: Failure while fetching StreamChunkId{streamId=398908024000, chunkIndex=0}: java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:869)
at org.apache.spark.storage.DiskStore$$anonfun$getBytes$4.apply(DiskStore.scala:125)
...
17/08/16 14:17:14 WARN BlockManager: Failed to fetch block after 1 fetch failures. Most recent failure cause:
org.apache.spark.SparkException: Exception thrown in awaitResult:
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:205)
There should be more than enough RAM across the cluster for a table of this size (I've read in local tables 10x bigger), so I have a feeling that for some reason the data read may not be happening in parallel. Looking at the timeline in the Spark UI, I can see that one executor hangs and is 'computing' for very long periods of time. Now, the partitioning column has a lot of NULL values in it (about 40%), but it is the only numeric column (the others are dates and strings). Could this make a difference? Is there another way to parallelise a JDBC read?
the partitioning column has a lot of NULL values in it (about 40%), but it is the only numeric column (the others are dates and strings) - could this make a difference?
It makes a huge difference. All rows with a NULL partition-column value will go to the last partition:
val whereClause =
  if (uBound == null) {
    lBound
  } else if (lBound == null) {
    s"$uBound or $column is null"
  } else {
    s"$lBound AND $uBound"
  }
Is there another way to parallelise a jdbc read?
You can use predicates built on columns other than numeric ones. You could, for example, use the ROWID pseudocolumn and build a series of predicates based on its prefix; a sketch using the predicates-based overload follows.
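For illustration, a minimal sketch of the predicate-based jdbc overload, written against the Java API. Instead of ROWID prefixes it partitions on the date column that already appears in the query, one predicate per year; the URL, table, and column names are the ones assumed from the question:

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

public class JdbcPredicateRead {

    public static Dataset<Row> read(SparkSession spark, String jdbcUrl, Properties connectionProperties) {
        // One predicate per partition. Every row should match exactly one
        // predicate, otherwise rows are silently dropped or duplicated.
        List<String> predicates = new ArrayList<>();
        for (int year = 2012; year <= 2016; year++) {
            predicates.add("EXTRACT(year FROM date_column) = " + year);
        }

        return spark.read().jdbc(
            jdbcUrl,
            "keyspace.table",
            predicates.toArray(new String[0]),
            connectionProperties);
    }
}

Five year-based predicates give only five partitions; a finer split (for example, by year and month) would be needed to keep 60 cores busy, but the mechanism is the same.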

Why does h2 ignore slf4j messages on the first connection when LOG is set?

See the sample code & output below (with slf4j/logback on stdout). I can't find any bug reports on this. I'm using H2 version 1.3.176 (last stable), in-memory mode. It doesn't seem to matter what value is set for LOG (0, 1 or 2); it just has to be set.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class H2TraceTest {
    public static void main(String[] args) throws SQLException {
        System.out.println("Query connection 1");
        Connection myConn = DriverManager.getConnection("jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4;LOG=2");
        myConn.createStatement().execute("SELECT 1");
        System.out.println("Query connection 2");
        DriverManager.getConnection("jdbc:h2:mem:tracetest").createStatement().execute("SELECT 1");
        System.out.println("Query connection 1 again");
        myConn.createStatement().execute("SELECT 1");
        System.out.println("End");
    }
}
Output:
Query connection 1
Query connection 2
16:17:02.955 INFO h2database - jdbc[3]
/**/Connection conn2 = DriverManager.getConnection("jdbc:h2:mem:tracetest", "", "");
16:17:02.958 DEBUG h2database - jdbc[3]
/**/Statement stat2 = conn2.createStatement();
16:17:02.959 DEBUG h2database - jdbc[3]
/**/stat2.execute("SELECT 1");
16:17:02.959 INFO h2database - jdbc[3]
/*SQL #:1*/SELECT 1;
Query connection 1 again
End
I know that the H2 documentation says TRACE_LEVEL_FILE affects all connections. But that's not (fully) correct:
Every connection keeps a lazy reference to the logging system, and if you change that with the special marker TRACE_LEVEL_FILE=4, the reference isn't updated for all existing connections, only for those that do their first logging after the change.
So if you use the connection string "jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4", everything works as expected, because your session writes no logging message before the logging system is switched. Unfortunately, the LOG=2 in jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4;LOG=2 is evaluated first, because both parameters are written into and read from an unordered Map. And because LOG=2 generates a log statement, the reference to the log adapter (=4) is never applied to the current session, only to the next one.
What can you do:
Use only "jdbc:h2:mem:tracetest;TRACE_LEVEL_FILE=4" - LOG=2 is the default anyway. If you need any other log mode you can use connection.createStatement().executeUpdate("SET LOG 1")
Add some default parameters to the connection string until the TRACE_LEVEL_FILE parameter is the first parameter in the map (not really reliable, as the order may depend on the VM)
Discard the first connection at once
File a bug report and wait for the fix (or fix it yourself), as I think this is arguably a bug
I know this is an old question, but here is a reliable way to do it (i.e. you can ensure that TRACE_LEVEL_FILE is set to 4 first):
String url = "jdbc:h2:mem:tracetest;INIT=SET TRACE_LEVEL_FILE=4\\;SET DB_CLOSE_DELAY=-1/* for example, i.e. do other stuff */";
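For reference, a minimal sketch of how that URL could be dropped into the test class from the question; the INIT commands run before anything else the session does, so the slf4j adapter should already be active for the first connection (the class name is illustrative):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class H2TraceInitTest {
    public static void main(String[] args) throws SQLException {
        // TRACE_LEVEL_FILE=4 (slf4j) is applied via INIT, i.e. before the
        // session issues any statement that would touch the logger.
        String url = "jdbc:h2:mem:tracetest;INIT=SET TRACE_LEVEL_FILE=4\\;SET DB_CLOSE_DELAY=-1";
        Connection myConn = DriverManager.getConnection(url);
        myConn.createStatement().execute("SELECT 1"); // should now show up in the slf4j output
    }
}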
