I'm using AS400JDBCConnectionPoolDataSource and AS400JDBCConnectionPool to create a connection pool inside my project.
This is my code for creating it:
AS400JDBCConnectionPoolDataSource dataSource = new AS400JDBCConnectionPoolDataSource();
dataSource.setServerName(DEVELOP);
dataSource.setUser(USER);
dataSource.setPassword(PASSWORD);
dataSource.setDriver(DRIVER);
dataSource.setLibraries("*LIBL");
dataSource.setNaming(NAME);
AS400JDBCConnectionPool systemi_jdbc_pool = new AS400JDBCConnectionPool(dataSource);
systemi_jdbc_pool.setMaxLifetime(-1); // -1 = no limit on connection lifetime
systemi_jdbc_pool.setMaxConnections(4);
systemi_jdbc_pool.fill(2); // pre-create two connections
My problem is that the connection is closed every 2-2.5 hours, and I can't understand why; the max lifetime is set to -1, which means that there is no limit on the time.
What could be the problem? How can I make the connection pool not disconnect itself?
Thanks in advance.
There is an IBM i-specific community at midrange.com. You can try asking your question there.
If you get an answer there, consider posting it (or at least a link to it) here so others can find it as well.
Found a solution:
Database Host Server Connections Drop after a Period of Inactivity
You must call systemi_jdbc_pool.setCleanupInterval(milliseconds) to configure the pool's maintenance thread.
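For example, a minimal sketch (the 5-minute interval is an assumption; pick a value shorter than the host server's inactivity timeout):

// Run the pool's maintenance thread every 5 minutes, so that dead idle
// connections are cleaned out of the pool instead of being handed to
// callers after the host server has already dropped them.
systemi_jdbc_pool.setCleanupInterval(5 * 60 * 1000);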
During migration from OAS10 to WebLogic 12.1.2, a call to a stored procedure is producing an ORA-03111 around 4 minutes after it is invoked:
java.sql.SQLTimeoutException: ORA-03111: break received on communication channel
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:462)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:931)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:481)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:205)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:548)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:213)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:1111)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1488)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3770)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3955)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:9353)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1539)
at weblogic.jdbc.wrapper.PreparedStatement.execute(PreparedStatement.java:101)
at mycode.app.impliq.dao.connection.oracle.OracleProcesoDao.callSP(OracleProcesoDao.java:811)
This code does NOT set a statement timeout, and no timeout is configured at the data source level either.
Any pointers would be appreciated.
Two ways from here:
1. Check the connection pool settings for your data source, in both WebLogic and the database.
2. Check the logs for long-running SQL.
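One quick way to rule a JDBC-level timeout in or out is to log the statement's effective timeout just before executing it (a minimal sketch; conn and the procedure name are placeholders):

// getQueryTimeout() returns 0 when no timeout is set; a non-zero value
// means the pool, a driver wrapper, or some other layer has injected one.
CallableStatement cs = conn.prepareCall("{call MY_PROC(?)}"); // placeholder procedure
System.out.println("effective query timeout = " + cs.getQueryTimeout() + " s");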
I am encountering this error again and again in my TIBCO code. Can somebody please tell me how to solve it?
I am using TIBCO 5.7.3.
JDBC error reported: (SQLState = HY000) - java.sql.SQLException: [tibcosoftwareinc][SQLServer JDBC Driver]Object has been closed."
When a JDBC Query activity is configured to query in subset mode, the resultSet object is kept in the engine for subsequent iterations. Typically the resultSet object will only be closed and cleared from the engine if there is no more data left. However, keep in mind that the default connection idleTimeout is set to 5 minutes. This means that after 5 minutes of no activity the connection will get released. So if you wait longer than the idleTimeout value to retrieve subsequent subsets you will incur this exception since the connection has been closed and hence the resultset is no longer available.
Resolution:
Set Engine.DBConnection.idleTimeout to a higher value in the BusinessWorks engine TRA file, say 20 minutes, so the connection can remain idle without being released between iterations. For example: Engine.DBConnection.idleTimeout=20. For more details on this setting, see the list of Available Custom Engine Properties.
I used Elasticsearch-1.1.0 to index tweets.
The indexing process is okay.
Then I upgraded the version. Now I use Elasticsearch-1.3.2, and I get this message randomly:
Exception happened: Error raised when there was an exception while talking to ES.
ConnectionError(HTTPConnectionPool(host='127.0.0.1', port=8001): Read timed out. (read timeout=10)) caused by: ReadTimeoutError(HTTPConnectionPool(host='127.0.0.1', port=8001): Read timed out. (read timeout=10)).
Snapshot of the randomness:
Happened --33s-- Happened --27s-- Happened --22s-- Happened --10s-- Happened --39s-- Happened --25s-- Happened --36s-- Happened --38s-- Happened --19s-- Happened --09s-- Happened --33s-- Happened --16s-- Happened
--XXs-- = after XX seconds
Can someone point out how to fix the Read timed out problem?
Thank you very much.
It's hard to give a direct answer since the error you're seeing might be associated with the client you are using. However, a solution might be one of the following:
1. Increase the default timeout globally when you create the ES client by passing the timeout parameter. Example in Python:
es = Elasticsearch(timeout=30)
2. Set the timeout per request made by the client. Taken from the Elasticsearch Python docs:
# only wait for 1 second, regardless of the client's default
es.cluster.health(wait_for_status='yellow', request_timeout=1)
Setting a per-request timeout lets you give calls you know are slow their own deadline, independent of the client's default.
Try this:
es = Elasticsearch(timeout=30, max_retries=10, retry_on_timeout=True)
It may not fully avoid ReadTimeoutError, but it minimizes them.
Read timeouts can also happen when query size is large. For example, in my case of a pretty large ES index size (> 3M documents), doing a search for a query with 30 words took around 2 seconds, while doing a search for a query with 400 words took over 18 seconds. So for a sufficiently large query even timeout=30 won't save you. An easy solution is to crop the query to the size that can be answered below the timeout.
For what it's worth, I found that this seems to be related to a broken index state.
It's very difficult to reliably recreate this issue, but I've seen it several times; operations run as normal except certain ones which periodically seem to hang ES (specifically refreshing an index it seems).
Deleting an index (curl -XDELETE http://localhost:9200/foo) and reindexing from scratch fixed this for me.
I recommend periodically clearing and reindexing if you see this behaviour.
Increasing various timeout options may immediately resolve issues, but does not address the root cause.
Provided the Elasticsearch service is available and the indexes are healthy, try increasing the Java minimum and maximum heap sizes: see https://www.elastic.co/guide/en/elasticsearch/reference/current/jvm-options.html .
TL;DR: edit /etc/elasticsearch/jvm.options and set -Xms1g and -Xmx1g.
You should also check that Elasticsearch itself is fine. Some shards can be unavailable; here is a good doc about possible reasons for unassigned shards: https://www.datadoghq.com/blog/elasticsearch-unassigned-shards/
I am trying to use an OracleDataSource connection more than once. In the class, I have set the cacheProperty to 10:
cacheProps.setProperty("MaxLimit", "10");
The class that calls the connection waits for a return value, so the two calls are never made at the same time. The class that uses the connection gets a NullPointerException on the connection variable at random places in the class. It always happens on the 5th request. Is there some property that I'm unaware of that implies you can only use a connection pool 4 times?
This is the code snippet where the null pointer occurs:
int threadNo = 2;
Connection conn = OraConnODS.getConnection("env " + threadNo);
conn.setAutoCommit(false);
Statement stm = conn.createStatement();
The NullPointerException usually occurs on the second line, and sometimes on the third.
Try this query to check how many connections are actually open in Oracle:
SELECT
'Currently, '
|| (SELECT COUNT(*) FROM V$SESSION)
|| ' out of '
|| VP.VALUE
|| ' connections are used.' AS USAGE_MESSAGE
FROM
V$PARAMETER VP
WHERE VP.NAME = 'sessions'
and see if the count is > 10. If it is, you will need to post your full code/XML so we can have a look.
First of all, you should try using a DB connection pool like C3P0 rather than using the data source directly. That way you will have more control over it.
Anyway, your problem seems to be related to connection leaks. There might be some stale connections lying around which prevent you from creating more. So the next step is to check how many connections are open [see satya's answer for that]. The next issue will then be removing the stale connections, which is tough, so again I would suggest a DB connection pool.
EDIT: OK, I can see that you are using a connection pool, so try printing the total number of active connections and you should be able to reach the root issue. I faced the same issue due to a connection leak, and a crude fix is to clean up connections with brute force [connection pools generally provide such an API, at least C3P0 does]. Get the connection pool log and you will know the root cause; see the sketch below.
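As an illustration, here is a minimal sketch of a C3P0 pool configured to flag leaked connections (the driver class, URL, credentials, and the 60-second reclaim window are all placeholders/assumptions):

import java.beans.PropertyVetoException;
import com.mchange.v2.c3p0.ComboPooledDataSource;

static ComboPooledDataSource buildPool() throws PropertyVetoException {
    ComboPooledDataSource pool = new ComboPooledDataSource();
    pool.setDriverClass("oracle.jdbc.OracleDriver");
    pool.setJdbcUrl("jdbc:oracle:thin:@//dbhost:1521/ORCL"); // placeholder URL
    pool.setUser("user");       // placeholder credentials
    pool.setPassword("secret");
    pool.setMaxPoolSize(10);
    // If a connection stays checked out for more than 60 seconds, C3P0
    // reclaims it and logs the stack trace of the code that checked it
    // out, which points straight at the leak.
    pool.setUnreturnedConnectionTimeout(60);
    pool.setDebugUnreturnedConnectionStackTraces(true);
    return pool;
}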
I get the following error under a certain scenario:
While a different thread was populating a lot of users via the bulk upload operation, I was trying to view the list of all users on a different web page. The list query throws the following timeout error. Is there a way to set this timeout so that I can avoid it?
Env: h2 (latest), Hibernate 3.3.x
Caused by: org.h2.jdbc.JdbcSQLException: Timeout trying to lock table "USER"; SQL statement:
[50200-144]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:327)
at org.h2.message.DbException.get(DbException.java:167)
at org.h2.message.DbException.get(DbException.java:144)
at org.h2.table.RegularTable.doLock(RegularTable.java:482)
at org.h2.table.RegularTable.lock(RegularTable.java:416)
at org.h2.table.TableFilter.lock(TableFilter.java:139)
at org.h2.command.dml.Select.queryWithoutCache(Select.java:571)
at org.h2.command.dml.Query.query(Query.java:257)
at org.h2.command.dml.Query.query(Query.java:227)
at org.h2.command.CommandContainer.query(CommandContainer.java:78)
at org.h2.command.Command.executeQuery(Command.java:132)
at org.h2.server.TcpServerThread.process(TcpServerThread.java:278)
at org.h2.server.TcpServerThread.run(TcpServerThread.java:137)
at java.lang.Thread.run(Thread.java:619)
at org.h2.engine.SessionRemote.done(SessionRemote.java:543)
at org.h2.command.CommandRemote.executeQuery(CommandRemote.java:152)
at org.h2.jdbc.JdbcPreparedStatement.executeQuery(JdbcPreparedStatement.java:96)
at org.jboss.resource.adapter.jdbc.WrappedPreparedStatement.executeQuery(WrappedPreparedStatement.java:342)
at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1808)
at org.hibernate.loader.Loader.doQuery(Loader.java:697)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:259)
at org.hibernate.loader.Loader.doList(Loader.java:2228)
... 125 more
Yes, you can change the lock timeout. The default is relatively low: 1 second (1000 ms).
In many cases the problem is that another connection has locked the table, and using multi-version concurrency also solves the problem (append ;MVCC=true to the database URL).
EDIT: The MVCC=true parameter is no longer supported, because since H2 1.4.200 it is always true for the MVStore engine, which is the default engine.
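For older H2 versions, appending the parameter looked like this (a sketch; the database path and credentials are placeholders):

// H2 < 1.4.200 only; later versions use the MVStore engine, where MVCC is always on
Connection conn = DriverManager.getConnection("jdbc:h2:./test;MVCC=true", "sa", "");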
I faced much the same problem, and using the parameter MVCC=true solved it. You can find more explanation of this parameter in the H2 documentation here: http://www.h2database.com/html/advanced.html#mvcc
I'd like to suggest that if you are getting this error, then perhaps you should not be using a transaction on your bulk database operation. Consider instead doing a transaction on each individual update: does it make sense to think of an entire bulk import as one transaction? Probably not. If it does, then yes, MVCC=true or a bigger lock timeout is a reasonable solution.
However, I think in most cases you are seeing this error because you are unknowingly performing a very long transaction. This was certainly the case for me: I simply took more care in how I was writing records (using either no transactions or smaller transactions, as sketched below) and the lock timeout issue was resolved.
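For instance, here is a minimal sketch of committing a bulk insert in chunks rather than as one giant transaction (the table and column names, the conn and names variables, and the 500-row chunk size are all assumptions):

conn.setAutoCommit(false);
try (PreparedStatement ps = conn.prepareStatement("INSERT INTO \"USER\"(NAME) VALUES (?)")) {
    int count = 0;
    for (String name : names) {
        ps.setString(1, name);
        ps.addBatch();
        if (++count % 500 == 0) { // flush and commit every 500 rows
            ps.executeBatch();
            conn.commit();        // releases the table lock periodically
        }
    }
    ps.executeBatch(); // flush the final partial chunk
    conn.commit();
}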
For those having this issue with integration tests (i.e. the server accesses the H2 DB, and an integration test accesses the DB before calling the server in order to prepare the test): adding a 'commit' to the script executed before the test makes sure that the data is in the database before the server is called (without MVCC=true, which I find a bit 'weird' if it is not enabled by default).
I had MVCC=true in my connection string but was still getting the error above. Adding ;DEFAULT_LOCK_TIMEOUT=10000;LOCK_MODE=0 solved the problem.
I got this issue with the PlayFramework
JPAQueryException occured : Error while executing query from
models.Page where name = ?: Timeout trying to lock table "PAGE"
It ended up being an infinite loop of sorts, because I had a
@Before
without an unless, which caused the function to call itself repeatedly. The fix:
@Before(unless="getUser")
Working with DBUnit, H2 and Hibernate - same error, MVCC=true helped, but I would still get the error for any tests following deletion of data. What fixed these cases was wrapping the actual deletion code inside a transaction:
Transaction tx = session.beginTransaction();
// ... delete stuff
tx.commit();
From a 2020 user, see the H2 SET LOCK_TIMEOUT reference.
Basically, the reference says:
Sets the lock timeout (in milliseconds) for the current session. The default value for this setting is 1000 (one second).
This command does not commit a transaction, and rollback does not affect it. This setting can be appended to the database URL: jdbc:h2:./test;LOCK_TIMEOUT=10000
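A minimal sketch of raising it for the current session instead (conn is assumed to be an open H2 connection):

try (Statement st = conn.createStatement()) {
    st.execute("SET LOCK_TIMEOUT 10000"); // 10 seconds, this session only
}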