(Weird) - ORA-12516 - TNS:listener could not find available handler with matching protocol stack [with only one active connection]

I am trying to run a Spring loader. The loader takes data from a CSV file and inserts it into an Oracle database. It starts well, but after processing some records I get the error below.
'ORA-12516 - TNS:listener could not find available handler with matching protocol stack'
Note: no other jobs were running at that time; only this job was running.
processes -- CURRENT_UTILIZATION 45 -- MAX_UTILIZATION 51
sessions -- CURRENT_UTILIZATION 53 -- MAX_UTILIZATION 61
show parameter processes --> processes (integer) 300
show parameter sessions --> sessions (integer) 480
The thing is, the same batch program runs fine on another server, which has the same configuration as above.
Since it's a new server, is there anything I am missing on the Oracle side? Can someone guide me?
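One thing worth ruling out before blaming the new server: a loader that opens a fresh connection for every record will exhaust PROCESSES on its own, even when it is the only job running. Below is a minimal sketch of the pattern that avoids this, assuming plain JDBC underneath the Spring loader; the table and column names are hypothetical.

import java.io.BufferedReader;
import java.io.FileReader;
import java.sql.Connection;
import java.sql.PreparedStatement;
import javax.sql.DataSource;

public class CsvLoader {
    // Reuse one pooled connection for the whole run and batch the inserts,
    // instead of opening a connection (= one Oracle process) per record.
    public static void load(DataSource ds, String csvPath) throws Exception {
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement(
                     "INSERT INTO STAGING_TABLE (COL1, COL2) VALUES (?, ?)"); // hypothetical table
             BufferedReader in = new BufferedReader(new FileReader(csvPath))) {
            con.setAutoCommit(false);
            String line;
            int pending = 0;
            while ((line = in.readLine()) != null) {
                String[] f = line.split(",", -1); // naive CSV split, for brevity
                ps.setString(1, f[0]);
                ps.setString(2, f[1]);
                ps.addBatch();
                if (++pending % 500 == 0) {
                    ps.executeBatch(); // flush every 500 rows
                    con.commit();
                }
            }
            ps.executeBatch(); // flush the remainder
            con.commit();
        }
    }
}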

Related

Jco Adapter pooling performance deadlock?

We're running an enterprise-scale SAP application with front-end Spring Boot clients connecting via the JCo adapter 3.0 on Oracle VM, using the connection pool (size 100). We're experiencing unsystematic long-running requests (> 10 s) that are not visible in the SAP application server log, i.e. the bottleneck does not appear to be on the SAP side.
Looking at the trace files (level 4) for an example request, we can see that the time seems to be lost when the adapter thread tries to get the client from the pool (other threads continue executing; irrelevant threads removed for clarity):
[20:05:50:259]: [JCoAPI] JCoContext.isStateful(P-foo-CPIC0) in session ID Client-53-1 returns false
[20:05:50:259]: [JCoAPI] JCoContext.begin(P-foo-CPIC0) in session ID Client-53-1
[20:05:50:259]: [JCoAPI] Started context for session Client-53-1
[20:05:50:259]: [JCoAPI] JCoContext.begin() for destination PFOO_200 (P-foo-CPIC0) on context with id Client-53-1; current state counter is 1
[20:05:50:259]: [JCoAPI] destination PFOO_200 destinationID=P-foo-CPIC0 executes Z_foo sessionID=Client-53-1, threadID=0x35
[20:05:50:259]: [JCoAPI] Context.getConnection on destination PFOO_200 (state: destination = STATEFUL, default = STATELESS)
[20:05:50:259]: [JCoAPI] PoolingFactory.getClient() on pool P-foo-CPIC0
--> time lost here
[20:06:20:840]: [JCoAPI] PoolingFactory.getClient() returns handle [3/84977415]
[20:06:20:840]: [JCoAPI] Context.getConnection on destination PFOO_200 nothing found in the context - got client from ConnectionManager [3/84977415]
[20:06:20:840]: [JCoAPI] JCoClient before execute(Z_foo) on handle [3/84977415]
[20:06:20:840]: [JCoRFC] Executing function Z_foo on handle [3/84977415]
[20:06:20:866]: [JCoAPI] JCoClient after execute(Z_foo) on handle [3/84977415] returns after 26 ms
[20:06:20:866]: [JCoAPI] Context.releaseConnection on destination PFOO_200 [3/84977415]
[20:06:20:867]: [JCoAPI] JCoContext.end(P-foo-CPIC0) in session ID Client-53-1
[20:06:20:867]: [JCoAPI] PoolingFactory.releaseClient() handle [3/84977415] into pool P-foo-CPIC0 [pool size: 3, peak limit: 100, waiting threads: 0, currently used: 1]
[20:06:20:879]: [JCoAPI] Finished context for session Client-53-1
[20:06:20:879]: [JCoAPI] JCoContext.end() for destination PFOO_200 (P-foo-CPIC0) on context with id Client-53-1; current state counter is 0
For a typical request the step is handled in milliseconds.
Are there any known limitations or configurations regarding pool handling for the JCo adapter, either on the adapter or on the SAP side?
Update: we're on JCo adapter 3.0.16 and will double-check 3.0.17 now. DNS seems unlikely since we're monitoring dig/nslookup and they run without delays.
Which JCo patch level do you use?
Did you try to update to the latest JCo patch level 3.0.17 first?
In your time gap, the RFC connection will be opened and the RFC logon performed, if the pool is empty at that time. Did you have a closer look with a higher trace level, or a look into the RFC trace?
This can be anything from not having a free dialog work process on the ABAP side, to SAP system database issues (the database is required for the RFC logon authentication checks), slow response times from the SAP message server (if using load-balanced logons), SNC handshake issues (if using SNC), or general network issues with DNS (try using the IP address instead of a hostname).
Another point worth checking: you say your connection pool has size 100. Is it possible that your program has more than 100 threads? Then it may happen from time to time that all connections are busy in other threads, and the current thread has to wait until a function call in another thread completes and a connection is returned to the pool.
(How long a thread waits on an empty pool can be customized via the "pool wait time" parameter; see the sketch below.)
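For reference, this is roughly where those knobs live, assuming JCo 3.0's file-based destination configuration (a registered DestinationDataProvider takes the same property names). The names below come from the com.sap.conn.jco.ext.DestinationDataProvider constants; double-check them against your patch level, and note that the connection properties (ashost, client, user, ...) are omitted here.

import java.io.FileOutputStream;
import java.util.Properties;

public class JcoPoolConfig {
    public static void main(String[] args) throws Exception {
        // Sketch only: JCo reads <destination>.jcoDestination from the working
        // directory unless a custom DestinationDataProvider is registered.
        Properties p = new Properties();
        p.setProperty("jco.destination.pool_capacity", "20");  // idle connections kept open
        p.setProperty("jco.destination.peak_limit", "100");    // hard cap on concurrent connections
        // "pool wait time": how long getClient() blocks on an exhausted pool (ms)
        p.setProperty("jco.destination.max_get_client_time", "10000");
        try (FileOutputStream out = new FileOutputStream("PFOO_200.jcoDestination")) {
            p.store(out, "pool tuning sketch - connection properties omitted");
        }
    }
}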

Can MAX_UTILIZATION for PROCESSES reached cause "Unable to get managed connection" Exception?

A JBoss 5.2 application server log was filled with thousands of the following exception:
Caused by: javax.resource.ResourceException: Unable to get managed connection for jdbc_TestDB
at org.jboss.resource.connectionmanager.BaseConnectionManager2.getManagedConnection(BaseConnectionManager2.java:441)
at org.jboss.resource.connectionmanager.TxConnectionManager.getManagedConnection(TxConnectionManager.java:424)
at org.jboss.resource.connectionmanager.BaseConnectionManager2.allocateConnection(BaseConnectionManager2.java:496)
at org.jboss.resource.connectionmanager.BaseConnectionManager2$ConnectionManagerProxy.allocateConnection(BaseConnectionManager2.java:941)
at org.jboss.resource.adapter.jdbc.WrapperDataSource.getConnection(WrapperDataSource.java:96)
... 9 more
Caused by: javax.resource.ResourceException: No ManagedConnections available within configured blocking timeout ( 30000 [ms] )
at org.jboss.resource.connectionmanager.InternalManagedConnectionPool.getConnection(InternalManagedConnectionPool.java:311)
at org.jboss.resource.connectionmanager.JBossManagedConnectionPool$BasePool.getConnection(JBossManagedConnectionPool.java:689)
at org.jboss.resource.connectionmanager.BaseConnectionManager2.getManagedConnection(BaseConnectionManager2.java:404)
... 13 more
I've stripped off the first part of the exception, which is basically our internal JDBC wrapper code that tries to get a DB connection from the pool.
Looking at the Oracle DB side I ran the query:
select resource_name, current_utilization, max_utilization, limit_value
from v$resource_limit
where resource_name in ('sessions', 'processes');
This produced the output:
RESOURCE_NAME  CURRENT_UTILIZATION  MAX_UTILIZATION  LIMIT_VALUE
processes      1387                 1500             1500
sessions       1434                 1586             2272
Given that the PROCESSES limit of 1500 was reached, would this cause the JBoss exceptions we experienced? I've also been investigating the possibility of connection leaks, but haven't found any evidence of that so far.
What is the recommended course of action here? Is simply increasing the limit a valid solution?
Usually when MAX_UTILIZATION reaches the PROCESSES limit, the listener will refuse new connections to the database. You can see the related errors in the alert log. To solve this on the database side, you should increase the PROCESSES parameter.
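If raising the limit is the route taken, the change looks roughly like this, sketched via JDBC to stay in the application's language, though the same statement works from SQL*Plus. The credentials and the new value are illustrative, and PROCESSES is a static parameter, so the instance must be restarted for it to take effect.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class RaiseProcesses {
    public static void main(String[] args) throws Exception {
        // Needs a DBA account with the ALTER SYSTEM privilege; URL and
        // credentials are placeholders. SESSIONS is derived from PROCESSES
        // by default, so it normally scales along with it.
        try (Connection con = DriverManager.getConnection(
                     "jdbc:oracle:thin:@dbhost:1521/ORCL", "dba_user", "secret");
             Statement st = con.createStatement()) {
            // static parameter: written to the spfile, applied at next restart
            st.execute("ALTER SYSTEM SET processes = 2000 SCOPE = SPFILE");
        }
    }
}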
Hmm, strange. Is it possible that exception wrapping in JBoss hides the original error? You should get some SQLException whose text starts with ORA-. Maybe your JDBC wrapper does not handle errors properly.
The recommended actions are to:
check the configured size of the connection pool against the PROCESSES and SESSIONS Oracle startup parameters.
check Oracle's view v$session, especially the columns STATUS, LAST_CALL_ET, SQL_ID, PREV_SQL_ID.
translate SQL_ID (PREV_SQL_ID) into the SQL text via v$sql.
if your application has a connection leak, SQL_ID and PREV_SQL_ID might point you to the place in your source code where a connection was last used, i.e. where it was leaked (see the sketch below).
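A sketch of the v$session/v$sql steps in JDBC form (the embedded query runs unchanged in SQL*Plus). Querying the v$ views requires SELECT privileges on them, e.g. SELECT_CATALOG_ROLE, and the 600-second idle threshold is an arbitrary starting point.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class LeakHunter {
    // For INACTIVE sessions SQL_ID is usually null, so fall back to
    // PREV_SQL_ID: that is the statement the session ran last, i.e. the
    // likely spot in the source code where the connection was leaked.
    public static void dumpSuspects(DataSource ds) throws Exception {
        String q =
            "SELECT s.sid, s.status, s.last_call_et, s.program, q.sql_text " +
            "FROM v$session s " +
            "LEFT JOIN v$sql q ON q.sql_id = NVL(s.sql_id, s.prev_sql_id) " +
            "                 AND q.child_number = 0 " +
            "WHERE s.type = 'USER' AND s.status = 'INACTIVE' " +
            "  AND s.last_call_et > 600 " +   // idle for more than 10 minutes
            "ORDER BY s.last_call_et DESC";
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(q)) {
            while (rs.next()) {
                System.out.printf("sid=%d status=%s idle=%ds program=%s%n  last sql: %s%n",
                        rs.getLong(1), rs.getString(2), rs.getLong(3),
                        rs.getString(4), rs.getString(5));
            }
        }
    }
}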

ORA-03111 on WebLogic 12.1.2

During migration from OAS10 to WebLogic 12.1.2, a call to a stored procedure is producing an ORA-03111 around 4 minutes after it is invoked:
java.sql.SQLTimeoutException: ORA-03111: break received on communication channel
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:462)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:405)
at oracle.jdbc.driver.T4C8Oall.processError(T4C8Oall.java:931)
at oracle.jdbc.driver.T4CTTIfun.receive(T4CTTIfun.java:481)
at oracle.jdbc.driver.T4CTTIfun.doRPC(T4CTTIfun.java:205)
at oracle.jdbc.driver.T4C8Oall.doOALL(T4C8Oall.java:548)
at oracle.jdbc.driver.T4CCallableStatement.doOall8(T4CCallableStatement.java:213)
at oracle.jdbc.driver.T4CCallableStatement.executeForRows(T4CCallableStatement.java:1111)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1488)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3770)
at oracle.jdbc.driver.OraclePreparedStatement.execute(OraclePreparedStatement.java:3955)
at oracle.jdbc.driver.OracleCallableStatement.execute(OracleCallableStatement.java:9353)
at oracle.jdbc.driver.OraclePreparedStatementWrapper.execute(OraclePreparedStatementWrapper.java:1539)
at weblogic.jdbc.wrapper.PreparedStatement.execute(PreparedStatement.java:101)
at mycode.app.impliq.dao.connection.oracle.OracleProcesoDao.callSP(OracleProcesoDao.java:811)
This code is NOT setting a statement timeout, and none is configured at the data source level either.
Any pointer will be appreciated.
Two things to check from here (and see the probe sketch below):
1. Check the connection pool settings for your data source, both in WebLogic and on the database side.
2. Check for long-running SQL.
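One concrete probe: the driver raised java.sql.SQLTimeoutException, which suggests some layer is applying a statement timeout even though the application never sets one; the Oracle thin driver enforces such timeouts by sending a break over the wire, which the server reports as ORA-03111. A sketch, with a hypothetical procedure name, that logs the effective timeout right before the call and would confirm whether the WebLogic wrapper or data source is injecting one:

import java.sql.CallableStatement;
import java.sql.Connection;
import javax.sql.DataSource;

public class TimeoutProbe {
    public static void callSp(DataSource ds) throws Exception {
        try (Connection con = ds.getConnection();
             CallableStatement cs = con.prepareCall("{call MY_PROC}")) { // hypothetical procedure
            // 0 means "no timeout". A non-zero value (~240 here, matching the
            // 4-minute failure) means some layer configured one after all.
            System.out.println("effective query timeout = " + cs.getQueryTimeout() + " s");
            cs.execute();
        }
    }
}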

Hibernate - Getting exception : maximum number of processes (550) exceeded

I am using Hibernate 3 along with Spring. My Hibernate configuration is as follows:
hibernate.dialect=org.hibernate.dialect.Oracle8iDialect
hibernate.connection.release_mode=on_close
But after starting the application, even if only one user accesses it, I still get this exception:
ORA-00020: maximum number of processes (550) exceeded
This is the stack trace:
Caused by: java.sql.SQLException: ORA-00020: maximum number of processes (550) exceeded
at oracle.jdbc.driver.DatabaseError.throwSqlException(DatabaseError.java:112)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:331)
at oracle.jdbc.driver.T4CTTIoer.processError(T4CTTIoer.java:288)
at oracle.jdbc.driver.T4C8Oall.receive(T4C8Oall.java:743)
at oracle.jdbc.driver.T4CPreparedStatement.doOall8(T4CPreparedStatement.java:216)
at oracle.jdbc.driver.T4CPreparedStatement.executeForDescribe(T4CPreparedStatement.java:799)
at oracle.jdbc.driver.OracleStatement.executeMaybeDescribe(OracleStatement.java:1038)
at oracle.jdbc.driver.T4CPreparedStatement.executeMaybeDescribe(T4CPreparedStatement.java:839)
at oracle.jdbc.driver.OracleStatement.doExecuteWithTimeout(OracleStatement.java:1133)
at oracle.jdbc.driver.OraclePreparedStatement.executeInternal(OraclePreparedStatement.java:3285)
at oracle.jdbc.driver.OraclePreparedStatement.executeQuery(OraclePreparedStatement.java:3329)
at com.mchange.v2.c3p0.impl.NewProxyPreparedStatement.executeQuery(NewProxyPreparedStatement.java:76)
at org.hibernate.jdbc.AbstractBatcher.getResultSet(AbstractBatcher.java:208)
at org.hibernate.loader.Loader.getResultSet(Loader.java:1953)
at org.hibernate.loader.Loader.doQuery(Loader.java:802)
at org.hibernate.loader.Loader.doQueryAndInitializeNonLazyCollections(Loader.java:274)
at org.hibernate.loader.Loader.loadEntity(Loader.java:2037)
I have kept the connection pool timeout at 5000. I also tried to find the cause and learned that the release mode may affect how DB resources are closed, but I couldn't find an exact solution for that.
Please help..
Thanks in advance..
This is a database error, not an application error, so you need to go to the database to solve it. 550 processes is a lot more than it sounds, so either someone has gone insane or you have a lot of inactive processes running.
The best way to find out is to query the v$session view, or gv$session if you're using a RAC, and look at the STATUS column.
Take careful note of where all these sessions are coming from; the OSUSER, TERMINAL and PROGRAM columns will probably be the most useful. It might almost be worth copying this information into a temporary table, as proof and a record for afterwards. Then, after checking that you're not going to break anything, and with your DBAs if you have any, kill all the inactive sessions, simultaneously or one at a time.
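For reference, a sketch of the kill step, again via JDBC; the SID and SERIAL# pairs come from the v$session query described above, and everything here is a placeholder to be checked before running.

import java.sql.Connection;
import java.sql.Statement;
import javax.sql.DataSource;

public class SessionKiller {
    // Kill one session identified by its SID and SERIAL# from v$session.
    // Requires the ALTER SYSTEM privilege; double-check each pair first,
    // because a killed session loses any uncommitted work.
    public static void kill(DataSource ds, int sid, int serial) throws Exception {
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement()) {
            st.execute("ALTER SYSTEM KILL SESSION '" + sid + "," + serial + "' IMMEDIATE");
        }
    }
}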
That'll remove the error, but if it's occurred once it can occur again, so you need to solve it. Either:
You've got a lot of people using the database.
There is an application / program somewhere that is not closing its sessions after it's finished.
Someone is connecting in the middle of a loop.
Whichever reason it is, you need to track it down and correct it. I'd start with the program or terminal from v$session that has the largest number of sessions.
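Coming back to the configuration in the question: with hibernate.connection.release_mode=on_close, each Hibernate Session pins its JDBC connection until Session.close() runs, so any session that is never closed leaks one connection, and the Oracle process count climbs exactly as described. A minimal sketch of the safer pattern, releasing the connection per transaction and always closing the Session; the property name is standard Hibernate 3, and the usual mapping/connection settings are assumed to come from hibernate.cfg.xml or Spring:

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;
import org.hibernate.cfg.Configuration;

public class ReleaseModeFix {
    public static void main(String[] args) {
        Configuration cfg = new Configuration()
                // return the connection to the pool at the end of each
                // transaction instead of holding it until Session.close()
                .setProperty("hibernate.connection.release_mode", "after_transaction");
        SessionFactory sf = cfg.buildSessionFactory();
        Session session = sf.openSession();
        Transaction tx = session.beginTransaction();
        try {
            // ... do work ...
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
            throw e;
        } finally {
            session.close(); // without this, on_close leaks one connection per Session
        }
    }
}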

WebSphere FFDC Count, what does it mean?

I am trying to understand what the "Count" column means in a WebSphere FFDC exception log. IBM told us we have received this error 6835 times. I have not found a good guide explaining what this count shows, but from what I have seen it seems to be the number of times this exception has happened since the last JVM restart. The problem is that this does not match up with our logs, as the error only appears to be thrown once per day with our daily restarts, which I can see in SystemOut.log. Also, the count doesn't seem to change over a week's time in the exception logs. Can anyone help?
Index  Count  Time of last Occurrence    Exception                   SourceId                                               ProbeId
-----  -----  -------------------------  --------------------------  -----------------------------------------------------  -------
21     6835   11/19/11 7:00:17:631 UTC   java.util.zip.ZipException  com.ibm.ws.classloader.ClassLoaderUtils.addDependents  238
This website may be helpful: http://wasdynacache.blogspot.com/2011_07_01_archive.html (Section entitled "First Failure Data Capture for your enterprise application with WebSphere Application Server")
FFDC is capturing the state of the application server and/or the application when an unexpected error or exception occurs the first time. All subsequent iterations of the same error/exception are ignored.
I tried to see if I could find information for you specifically about Count, but FFDC logs can be formatted differently depending on which application calls them and which formatter the call uses. Good luck.
