Connection acquired from ComboPooledDataSource occasionally already checked-out - jdbc

I'm finding that when multiple threads each request a connection from a single, shared instance of a ComboPooledDataSource, it occasionally returns a connection from the pool that's already in use. Is there a configuration setting or other means to make sure that connections currently checked out aren't checked out again?
package stress;
import java.beans.PropertyVetoException;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.HashSet;
import java.util.Set;
import com.mchange.v2.c3p0.ComboPooledDataSource;
import com.mchange.v2.c3p0.DataSources;
public class StressTestDriver
{
private static final String _host = "";
private static final String _port = "3306";
private static final String _database = "";
private static final String _user = "";
private static final String _pass = "";
public static void main(String[] args)
{
new StressTestDriver();
}
StressTestDriver()
{
ComboPooledDataSource cpds = new ComboPooledDataSource();
try
{
cpds.setDriverClass( "com.mysql.jdbc.Driver" );
String connectionString = "jdbc:mysql://" + _host + ":" + _port + "/"
+ _database;
cpds.setJdbcUrl( connectionString );
cpds.setMaxPoolSize( 15 );
cpds.setMaxIdleTime( 100 );
cpds.setAcquireRetryAttempts( 1 );
cpds.setNumHelperThreads( 3 );
cpds.setUser( _user );
cpds.setPassword( _pass );
}
catch( PropertyVetoException e )
{
e.printStackTrace();
return;
}
write("BEGIN");
try
{
for(int i=0; i<100000; ++i)
doConnection( cpds );
}
catch( Exception ex )
{
ex.printStackTrace();
}
finally
{
try
{
DataSources.destroy( cpds );
}
catch( SQLException e )
{
e.printStackTrace();
}
}
write("END");
}
void doConnection( final ComboPooledDataSource cpds )
{
Thread[] threads = new Thread[ 10 ];
final Set<String> set = new HashSet<String>(threads.length);
Runnable runnable = new Runnable()
{
public void run()
{
Connection conn = null;
try
{
conn = cpds.getConnection();
synchronized(set)
{
String toString = conn.toString();
if( set.contains( toString ) )
write("In-use connection: " + toString);
else
set.add( toString );
}
conn.close();
}
catch( Exception e )
{
e.printStackTrace();
return;
}
}
};
for(int i=0; i<threads.length; ++i)
{
threads[i] = new Thread( runnable );
threads[i].start();
}
for(int i=0; i<threads.length; ++i)
{
try
{
threads[i].join();
}
catch( InterruptedException e )
{
e.printStackTrace();
}
}
}
private static void write(String msg)
{
String threadName = Thread.currentThread().getName();
System.err.println(threadName + ": " + msg);
}
}

I've run your stress test in my environment. It terminated without output.
That said, it strikes me as a bit unreliable to use toString() output as a token for identity. The hashcodes encoded in Object.toString() are not guaranteed unique, and you're generating lots of them. You might just be seeing collisions.
In particular, the scenario you are concerned about would represent a surprising and perplexing sort of bug. You should be seeing instances of com.mchange.v2.c3p0.impl.NewProxyConnection. Those instances are not checked in and out of the pool — they are single-use, disposable proxies that wrap the underlying database Connection. They are constructed with new when you call getConnection(), remain associated with a PooledConnection object while you work, and are dereferenced by the c3p0 library when you call close(), to be garbage collected once client code dereferences them too. If somehow getConnection() were being called twice on the same underlying PooledConnection, that would be a c3p0 bug, and c3p0 would emit a warning to that effect. You are not seeing anything like...
c3p0 -- Uh oh... getConnection() was called on a PooledConnection when
it had already provided a client with a Connection that has not yet
been closed. This probably indicates a bug in the connection pool!!!
are you?
I'd verify that you are not just seeing collisions of NewProxyConnection.toString(). Use a map rather than a set, hold a reference to close()d Connection instances as values, and when you see a collision of String keys, check with == whether the new Connection is the same instance as the value in your map. I'd be astonished if you find that they are the same instance. (I wish I could reproduce the problem to verify this on my own.)
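A minimal sketch of that check (the helper class and names are mine, not part of the original test): keep each close()d Connection keyed by its toString() and, on a key collision, compare instances with ==.
package stress;
import java.sql.Connection;
import java.util.HashMap;
import java.util.Map;
import com.mchange.v2.c3p0.ComboPooledDataSource;
// Hypothetical helper: a toString() collision with a different instance is just a
// hashcode collision; the same instance handed out twice would indicate a real pool bug.
public class ConnectionIdentityCheck
{
    private final Map<String, Connection> seen = new HashMap<String, Connection>();
    void checkOneConnection( final ComboPooledDataSource cpds ) throws Exception
    {
        Connection conn = cpds.getConnection();
        try
        {
            synchronized( seen )
            {
                Connection previous = seen.put( conn.toString(), conn );
                if( previous == conn )
                    System.err.println( "Same proxy instance handed out twice: " + conn );
                else if( previous != null )
                    System.err.println( "toString() collision, different instances: " + conn );
            }
        }
        finally
        {
            conn.close();
        }
    }
}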

I added code to get the connection ID/@@SPID for Oracle and MS SQL Server, and whenever the toString value was the same, the connection ID always differed, so c3p0 is not the culprit:
94843 - pool-1-thread-12 - : onCheckOut called with: com.microsoft.sqlserver.jdbc.SQLServerConnection@5e65d9 - H: 6186457 - P: : 57
94843 - pool-1-thread-13 - : onCheckOut called with: com.microsoft.sqlserver.jdbc.SQLServerConnection@170a25e - H: 24158814 - P: : 55
94843 - pool-1-thread-1 - : onCheckOut called with: com.microsoft.sqlserver.jdbc.SQLServerConnection@5e65d9 - H: 6186457 - P: : 54
19172 - pool-1-thread-11 - : onCheckOut called with: oracle.jdbc.driver.T4CConnection@2475ca - H: 2389450 - P: : 6564
19172 - pool-1-thread-31 - : onCheckOut called with: oracle.jdbc.driver.T4CConnection@2475ca - H: 2389450 - P: : 6448
I'll have to look into revising the existing code. Thanks for your help Steve! You've pointed me in the right direction.

getting cause:null in property dlqDeliveryFailureCause

I am trying to set up Dead Letter Queue monitoring for a system. So far, I can get the message thrown into the DLQ without problems when message consumption fails on the consumer. Now I'm having trouble getting the reason why it failed;
currently I get the following
java.lang.Throwable: Delivery[2] exceeds redelivery policy limit:RedeliveryPolicy
{destination = queue://*,
collisionAvoidanceFactor = 0.15,
maximumRedeliveries = 1,
maximumRedeliveryDelay = -1,
initialRedeliveryDelay = 10000,
useCollisionAvoidance = false,
useExponentialBackOff = true,
backOffMultiplier = 5.0,
redeliveryDelay = 10000,
preDispatchCheck = true},
cause:null
I do not know why cause is coming back as null. I'm using Spring with ActiveMQ, with the DefaultJmsListenerContainerFactory, which creates a DefaultMessageListenerContainer. I would like cause to be filled with the exception that happened on my consumer, but I can't get it to work. Apparently there's something in Spring that's not bubbling up the exception correctly, but I'm not sure what it is. I'm using spring-jms:4.3.10. I would really appreciate the help.
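For reference, the consumer looks roughly like this (a simplified, hypothetical sketch rather than my actual code; the destination name is made up):
package com.example;
import org.springframework.jms.annotation.JmsListener;
import org.springframework.stereotype.Component;
// Hypothetical consumer: the exception thrown here causes rollback and redelivery, and
// after the redelivery limit the message lands in the DLQ, but the exception itself
// never shows up as the dlqDeliveryFailureCause.
@Component
public class OrderConsumer {
    @JmsListener(destination = "orders.queue")
    public void onMessage(String payload) {
        throw new IllegalStateException("Cannot process payload: " + payload);
    }
}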
I am using spring-boot-starter-activemq:2.2.2.RELEASE (spring-jms:5.2.2, activemq-client-5.15.11) and I have the same behavior.
(links point to the versions I use)
The rollback cause is added here for the POSION_ACK_TYPE (sic!).
Its assignment to the MessageDispatch happens in only one place: when dealing with a RuntimeException, in the case where a javax.jms.MessageListener is registered.
Unfortunately (for this particular case), Spring doesn't register one, because it prefers to deal with its own hierarchy. So, long story short, there is no chance to make it happen with Spring out-of-the-box.
However, I managed to write a hack-ish way of getting access to the MessageDispatch instance being handled, injecting the exception as the rollback cause, and it works!
package com.example;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import javax.jms.*;
public class MyJmsMessageListenerContainer extends DefaultMessageListenerContainer {
private final MessageDeliveryFailureCauseEnricher messageDeliveryFailureCauseEnricher = new MessageDeliveryFailureCauseEnricher();
private MessageConsumer messageConsumer; // Keep for later introspection
@Override
protected MessageConsumer createConsumer(Session session, Destination destination) throws JMSException {
this.messageConsumer = super.createConsumer(session, destination);
return this.messageConsumer;
}
@Override
protected void invokeListener(Session session, Message message) throws JMSException {
try {
super.invokeListener(session, message);
} catch (Throwable throwable) {
messageDeliveryFailureCauseEnricher.enrich(throwable, this.messageConsumer);
throw throwable;
}
}
}
Note: don't deal with the Throwable by overriding the protected void handleListenerException(Throwable ex) method, because by that moment some cleanup has already happened in the ActiveMQMessageConsumer instance.
package com.example;
import org.apache.activemq.ActiveMQMessageConsumer;
import org.apache.activemq.command.MessageDispatch;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.util.ReflectionUtils;
import javax.jms.MessageConsumer;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.Map;
class MessageDeliveryFailureCauseEnricher {
private static final Logger logger = LoggerFactory.getLogger(MessageDeliveryFailureCauseEnricher.class);
private final Map<Class<?>, Field> accessorFields = new HashMap<>();
private final Field targetField;
public MessageDeliveryFailureCauseEnricher() {
this.targetField = register(ActiveMQMessageConsumer.class, "deliveredMessages");
// Your mileage may vary; here is mine:
register("brave.jms.TracingMessageConsumer", "delegate");
register("org.springframework.jms.connection.CachedMessageConsumer", "target");
}
private Field register(String className, String fieldName) {
Field result = null;
if (className == null) {
logger.warn("Can't register a field from a missing class name");
} else {
try {
Class<?> clazz = Class.forName(className);
result = register(clazz, fieldName);
} catch (ClassNotFoundException e) {
logger.warn("Class not found on classpath: {}", className);
}
}
return result;
}
private Field register(Class<?> clazz, String fieldName) {
Field result = null;
if (fieldName == null) {
logger.warn("Can't register a missing class field name");
} else {
Field field = ReflectionUtils.findField(clazz, fieldName);
if (field != null) {
ReflectionUtils.makeAccessible(field);
accessorFields.put(clazz, field);
}
result = field;
}
return result;
}
void enrich(Throwable throwable, MessageConsumer messageConsumer) {
if (throwable != null) {
if (messageConsumer == null) {
logger.error("Can't enrich the MessageDispatch with rollback cause '{}' if no MessageConsumer is provided", throwable.getMessage());
} else {
LinkedList<MessageDispatch> deliveredMessages = lookupFrom(messageConsumer);
if (deliveredMessages != null && !deliveredMessages.isEmpty()) {
deliveredMessages.getLast().setRollbackCause(throwable); // Might cause problems if we prefetch more than 1 message
}
}
}
}
private LinkedList<MessageDispatch> lookupFrom(Object object) {
LinkedList<MessageDispatch> result = null;
if (object != null) {
Field field = accessorFields.get(object.getClass());
if (field != null) {
Object fieldValue = ReflectionUtils.getField(field, object);
if (fieldValue != null) {
if (targetField == field) {
result = (LinkedList<MessageDispatch>) fieldValue;
} else {
result = lookupFrom(fieldValue);
}
}
}
}
return result;
}
}
The magic happens in the second class:
At construction time we make some private fields accessible.
When a Throwable is caught, we traverse these fields to end up with the appropriate MessageDispatch instance (beware if you prefetch more than 1 message), and inject into it the throwable we want to become part of the dlqDeliveryFailureCause JMS property.
I crafted this solution this afternoon, after hours of debugging (thanks OSS!) and many trials and errors. It works, but I have the feeling it's more of a hack than a real, solid solution.
With that in mind, I made my best to avoid side effects, so the worst that can happen is no trace of the original Throwable in the message ending in the Dead Letter Queue.
If I missed the point somewhere, I'd be glad to learn more about this.
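For completeness, one possible way to get Spring to hand out the custom container (only a sketch; the configuration class and bean are assumptions, not part of my actual setup) is to override DefaultJmsListenerContainerFactory.createContainerInstance():
package com.example;
import javax.jms.ConnectionFactory;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jms.config.DefaultJmsListenerContainerFactory;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
// Hypothetical wiring: make the factory create MyJmsMessageListenerContainer
// instead of the stock DefaultMessageListenerContainer.
@Configuration
public class JmsConfig {
    @Bean
    public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
        DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory() {
            @Override
            protected DefaultMessageListenerContainer createContainerInstance() {
                return new MyJmsMessageListenerContainer();
            }
        };
        factory.setConnectionFactory(connectionFactory);
        factory.setSessionTransacted(true); // rollback on a listener exception drives redelivery and, eventually, the DLQ
        return factory;
    }
}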

WMQ connection socket constantly closed between v9_client and v6_server

We have a standalone Java application using a third-party tool to manage connection pooling, which worked for us in a v6_client + v6_server setup for a long time.
Now we are trying to migrate from v6 to v9 (yes, we are pretty late to the party.....), and found that the v9_client connection to the v6_server is constantly interrupted, meaning:
Socket created by XAQueueConnectionFactory#createXAConnection() is always closed immediately, and the created XAConnection seems to be unaware of it.
Due to the socket close mentioned above, an XASession created from XAConnection.createXASession() always creates a new socket and closes that socket after XASession.close().
We went through the complete list of tunables for v9_client (the XAQCF column in https://www.ibm.com/support/knowledgecenter/SSFKSJ_9.0.0/com.ibm.mq.ref.dev.doc/q111800_.html) and only spotted two potential new configs we hadn't used in v6_client, SHARECONVALLOWED and PROVIDERVERSION. Unfortunately neither helps us out..... Specifically:
We tried setShareConvAllowed(WMQConstants.WMQ_SHARE_CONV_ALLOWED_[YES/NO]). Considering there's no SHARECNV property on the v6_server side, this was not a surprise.
We tried "Migration/Restricted/Normal Mode" via setProviderVersion("[6/7/8]") ([7/8] throw exceptions, as expected....).
Just wondering if anybody else has had a similar experience and could share some insights. We tried v9_server+v9_client and haven't seen any similar problem, so that could also be our eventual solution.....
Btw, the WMQ is hosted on Linux (RedHat), and we only use MQXAQueueConnectionFactory on the client side (IBM MQ classes for JMS).
Thanks.
Additional details/updates.
[update-1]
--[playground setup]
v9_client jars:
javax.jms-api-2.0.jar
com.ibm.mq.allclient(-9.0.0.[1/5]).jar
v6_client jars:
in addition to the v9_client jars, the following jars were introduced into the Eclipse classpath
com.ibm.dhbcore-1.0.jar
com.ibm.mq.pcf-6.0.3.jar
com.ibm.mqjms-6.0.2.2.jar
com.ibm.mq-6.0.2.2.jar
com.ibm.mqetclient-6.0.2.2.jar
connector.jar
jta-1.1.jar
Testing code - single thread:
import javax.jms.*;
import com.ibm.mq.jms.*;
import com.ibm.msg.client.wmq.WMQConstants;
public class MQSeries_simpleAuditQ {
private static String queueManager = "QM.RCTQ.ALL.01";
private static String host = "localhost";
private static int port = 26005;
public static void main(String[] args) throws JMSException {
MQXAQueueConnectionFactory queueFactory= new MQXAQueueConnectionFactory();
System.out.println("\n\n\n*******************\nqueueFactory implementation version: " +
queueFactory.getClass().getPackage().getImplementationVersion() + "*****************\n\n\n");
queueFactory.setHostName(host);
queueFactory.setPort(port);
queueFactory.setQueueManager(queueManager);
queueFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
if (queueFactory.getClass().getPackage().getImplementationVersion().split("\\.")[0].equals("9")) {
queueFactory.setProviderVersion("6");
//queueFactory.setShareConvAllowed(WMQConstants.WMQ_SHARE_CONV_ALLOWED_YES);
}
XASession xaSession;
javax.jms.QueueConnection xaQueueConnection;
try {
// Obtain a QueueConnection
System.out.println("Creating Connection...");
xaQueueConnection = (QueueConnection)queueFactory.createXAConnection(" ", "");
xaQueueConnection.start();
for (int counter=0; counter<2; counter++) {
try {
xaSession = ((XAConnection)xaQueueConnection).createXASession();
xaSession.close();
} catch (Exception ex) {
System.out.println(ex);
}
}
System.out.println("Closing connection.... ");
xaQueueConnection.close();
} catch (Exception e) {
System.out.println("Error processing " + e.getMessage());
}
}
}
--[observations]
v6_client only created and closed a single socket, while v9_client (both 9.0.0.[1/5]):
created and closed a socket right after xaQueueConnection = (QueueConnection)queueFactory.createXAConnection(" ", "");
in the inner for loop, created a socket right after xaSession = ((XAConnection)xaQueueConnection).createXASession(); and closed it after xaSession.close();
Naively I was expecting the socket to remain open until xaQueueConnection.close().
[update-2]
Using queueFactory.setProviderVersion("9"); and queueFactory.setShareConvAllowed(WMQConstants.WMQ_SHARE_CONV_ALLOWED_YES); for v9_server+v9_client, we don't see the constant socket close issue we see with v6_server+v9_client, which is good news.
[update-3] MCAUSER attribute for all SVRCONN channels on v6_server. Same on v9_server (which doesn't have the same socket close problem when connected with the same v9_client).
display channel (SYSTEM.ADMIN.SVRCONN)
MCAUSER(mqm)
display channel (SYSTEM.AUTO.SVRCONN)
MCAUSER( )
display channel (SYSTEM.DEF.SVRCONN)
MCAUSER( )
[update-4]
I tried setting MCAUSER() to mqm, then using both (blank) and mqm from the client side; both can create connections, but I'm still seeing the same unexpected socket close using v9_client+v6_server. After updating MCAUSER(), I always ran REFRESH SECURITY and restarted the qmgr.
I also tried setting the user.name system property to blank in Eclipse before creating the connection with a blank user; that didn't help either.
[update-5]
Limiting our discussion to v9_client+v9_server: the async testing code below generates a ton of XASession create/close requests using a limited number of existing connections. With SHARECNV(1) we would also end up with an unaffordably high TIME_WAIT count, but a SHARECNV larger than 1 (e.g. 10) might introduce an extra performance penalty......
Async testing code
import java.util.ArrayList;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.FutureTask;
import javax.jms.*;
import com.ibm.mq.jms.*;
import com.ibm.msg.client.wmq.WMQConstants;
public class MQSeries_simpleAuditQ_Async_v9 {
private static String queueManager = "QM.ALPQ.ALL.01";
private static int port = 26010;
private static String host = "localhost";
private static int connCount = 20;
private static int amp = 100;
private static ExecutorService amplifier = Executors.newFixedThreadPool(amp);
public static void main(String[] args) throws JMSException {
MQXAQueueConnectionFactory queueFactory= new MQXAQueueConnectionFactory();
System.out.println("\n\n\n*******************\nqueueFactory implementation version: " +
queueFactory.getClass().getPackage().getImplementationVersion() + "*****************\n\n\n");
queueFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
if (queueFactory.getClass().getPackage().getImplementationVersion().split("\\.")[0].equals("9")) {
queueFactory.setProviderVersion("9");
queueFactory.setShareConvAllowed(WMQConstants.WMQ_SHARE_CONV_ALLOWED_YES);
}
queueFactory.setHostName(host);
queueFactory.setPort(port);
queueFactory.setQueueManager(queueManager);
//queueFactory.setChannel("");
ArrayList<QueueConnection> xaQueueConnections = new ArrayList<QueueConnection>();
try {
// Obtain a QueueConnection
System.out.println("Creating Connection...");
//System.setProperty("user.name", "mqm");
//System.out.println("system username: " + System.getProperty("user.name"));
for (int ct=0; ct<connCount; ct++) {
// xaQueueConnection instance of MQXAQueueConnection
QueueConnection xaQueueConnection = (QueueConnection)queueFactory.createXAConnection(" ", "");
xaQueueConnection.start();
xaQueueConnections.add(xaQueueConnection);
}
ArrayList<Double> totalElapsedTimeRecord = new ArrayList<Double>();
ArrayList<FutureTask<Double>> taskBuffer = new ArrayList<FutureTask<Double>>();
for (int loop=0; loop <= 10; loop++) {
try {
for (int i=0; i<amp; i++) {
int idx = (int)(Math.random()*((connCount)));
System.out.println("Using connection: " + idx);
FutureTask<Double> xaSessionPoker = new FutureTask<Double>(new XASessionPoker(xaQueueConnections.get(idx)));
amplifier.submit(xaSessionPoker);
taskBuffer.add(xaSessionPoker);
}
System.out.println("loop " + loop + " completed");
} catch (Exception ex) {
System.out.println(ex);
}
}
for (FutureTask<Double> xaSessionPoker : taskBuffer) {
totalElapsedTimeRecord.add(xaSessionPoker.get());
}
System.out.println("Average xaSession poking time: " + calcAverage(totalElapsedTimeRecord));
System.out.println("Closing connections.... ");
for (QueueConnection xaQueueConnection : xaQueueConnections) {
xaQueueConnection.close();
}
} catch (Exception e) {
System.out.println("Error processing " + e.getMessage());
}
amplifier.shutdown();
}
private static double calcAverage(ArrayList<Double> myArr) {
double sum = 0;
for (Double val : myArr) {
sum += val;
}
return sum/myArr.size();
}
// create and close session through QueueConnection object passed in.
private static class XASessionPoker implements Callable<Double> {
// conn instance of MQXAQueueConnection. ref. QueueProviderService
private QueueConnection conn;
XASessionPoker(QueueConnection conn) {
this.conn = conn;
}
@Override
public Double call() throws Exception {
XASession xaSession;
double elapsed = 0;
try {
final long start = System.currentTimeMillis();
// ref. DualSessionWrapper
// xaSession instance of MQXAQueueSession
xaSession = ((XAConnection) conn).createXASession();
xaSession.close();
final long end = System.currentTimeMillis();
elapsed = (end - start) / 1000.0;
} catch (Exception e) {
// TODO Auto-generated catch block
System.out.println(e);
}
return elapsed;
}
}
}
We found the root cause is a combination of there no longer being session pooling, plus the fact that the Bitronix TM doesn't offer session pooling across transactions. Specifically (in our case), Bitronix manages JmsPooledConnection pooling, but every time an (XA)Session is used (under a JmsPooledConnection), a new socket is created (createXASession()) and closed (xaSession.close()).
One solution is to wrap the JMS connection with an (XA)Session pool, similar to what's been done in https://github.com/messaginghub/pooled-jms/tree/master/pooled-jms/src/main/java/org/messaginghub/pooled/jms
http://bjansen.github.io/java/2018/03/04/high-performance-mq-jms.html also suggests Spring's CachingConnectionFactory works well, which sounds like a special case of the first solution.
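As a rough illustration of the CachingConnectionFactory route (a sketch only: host, port and queue manager are copied from the async test above, and note that CachingConnectionFactory caches plain Sessions rather than XASessions, so check that it fits your transaction setup):
import javax.jms.ConnectionFactory;
import org.springframework.jms.connection.CachingConnectionFactory;
import com.ibm.mq.jms.MQQueueConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;
// Sketch: wrap the MQ factory so sessions (and their sockets) are cached and reused
// instead of being created and torn down on every createSession()/close() pair.
public class CachedMqConnectionFactory {
    public static ConnectionFactory create() throws Exception {
        MQQueueConnectionFactory mqFactory = new MQQueueConnectionFactory();
        mqFactory.setHostName("localhost");
        mqFactory.setPort(26010);
        mqFactory.setQueueManager("QM.ALPQ.ALL.01");
        mqFactory.setTransportType(WMQConstants.WMQ_CM_CLIENT);
        CachingConnectionFactory caching = new CachingConnectionFactory(mqFactory);
        caching.setSessionCacheSize(10); // number of sessions cached per connection
        return caching;
    }
}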

Oracle DB - Application Continuity - How to test it?

How do I test/PoC the Application Continuity concept? I tried making the code hold connections so that the application is not able to get connections and has to wait on the pool. I am sure this is not the right way of testing Application Continuity, but I don't know how to create a proper scenario.
Is the only way to test it to bring down the DB while a transaction is in progress?
I used the code below to simulate AC, where a procedure is called 20 times in 20 threads with a new connection every time. I am creating a scenario where one thread goes and waits for a connection, times out because it doesn't get a connection in time, and then retries to get a connection. Is this a proper AC test? (The test class is below.)
package com.ac;
import org.springframework.context.ApplicationContext;
import org.springframework.context.support.ClassPathXmlApplicationContext;
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import oracle.ucp.jdbc.PoolDataSourceFactory;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.ValidConnection;
import java.sql.ResultSet;
public class App {
public static void main(String[] args) {
try {
new App().acSimulation();
} catch (Exception e) {
e.printStackTrace();
}
}
public void acSimulation() throws Exception{
final PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
pds.setConnectionFactoryClassName("oracle.jdbc.replay.OracleDataSourceImpl");
System.out.println("connection factory set");
String URL = "jdbc:oracle:thin:@(DESCRIPTION = (TRANSPORT_CONNECT_TIMEOUT=3) (RETRY_COUNT=20)(FAILOVER=ON) " +
" (ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521)) (CONNECT_DATA = " +
" (SERVER = DEDICATED) (SERVICE_NAME=orcl)))";
System.out.println("Using URL\n" + URL);
pds.setURL(URL);
pds.setUser("system");
pds.setPassword("oracle");
pds.setInitialPoolSize(1);
pds.setMinPoolSize(1);
pds.setMaxPoolSize(3);
pds.setConnectionWaitTimeout(10);
// RAC Features
pds.setConnectionPoolName("Application Continuity Pool");
pds.setFastConnectionFailoverEnabled(true);
// use srvctl config nodeapps to get the ONS ports on the cluster
// pds.setONSConfiguration("nodes=192.168.100.30:6200,192.168.100.32:6200");
System.out.println("pool configured, trying to get a connection");
// final Connection conn = null;
final Connection conn = pds.getConnection();
if (conn == null || !((ValidConnection) conn).isValid()) {
System.out.println("connection is not valid");
throw new Exception ("invalid connection obtained from the pool");
}
if ( conn instanceof oracle.jdbc.replay.ReplayableConnection ) {
System.out.println("got a replay data source");
} else {
System.out.println("this is not a replay data source. Why not?");
}
System.out.println("got a connection! Getting some stats if possible");
oracle.ucp.jdbc.JDBCConnectionPoolStatistics stats = pds.getStatistics();
System.out.println("\tgetAvailableConnectionsCount() " + stats.getAvailableConnectionsCount());
System.out.println("\tgetBorrowedConnectionsCount() " + stats.getBorrowedConnectionsCount() );
System.out.println("\tgetRemainingPoolCapacityCount() " + stats.getRemainingPoolCapacityCount());
System.out.println("\tgetTotalConnectionsCount() " + stats.getTotalConnectionsCount());
System.out.println(((oracle.ucp.jdbc.oracle.OracleJDBCConnectionPoolStatistics)pds.getStatistics()).getFCFProcessingInfo());
System.out.println("Now working");
int i=0;
while(i < 20){
new Runnable() {
public void run() {
try {
seperateInstance(conn, pds);
} catch (SQLException e) {
e.printStackTrace();
}
}
}.run();
i++;
}
}
private void seperateInstance(Connection conn, PoolDataSource pds) throws SQLException{
java.sql.CallableStatement cstmt = null;
oracle.ucp.jdbc.JDBCConnectionPoolStatistics stats = pds.getStatistics();
conn = pds.getConnection();
cstmt = conn.prepareCall("{call P_M_emp(?,?)}");
cstmt.setLong(1, 1);
cstmt.setString(2, "TEST");
cstmt.execute();
System.out.println("Statement executed. Now closing down");
System.out.println("Almost done! Getting some more stats if possible");
stats = pds.getStatistics();
System.out.println("\tgetAvailableConnectionsCount() " + stats.getAvailableConnectionsCount());
System.out.println("\tgetBorrowedConnectionsCount() " + stats.getBorrowedConnectionsCount() );
System.out.println("\tgetRemainingPoolCapacityCount() " + stats.getRemainingPoolCapacityCount());
System.out.println("\tgetTotalConnectionsCount() " + stats.getTotalConnectionsCount());
System.out.println(((oracle.ucp.jdbc.oracle.OracleJDBCConnectionPoolStatistics)pds.getStatistics()).getFCFProcessingInfo());
try {
Thread.currentThread().sleep(1000);
} catch (InterruptedException e) {
e.printStackTrace();
}
System.out.println("Closing connection "+ conn);
cstmt.close();
conn.close();
conn = null;
}
}
I got the code from the site below and modified it a bit to add threads and simulate AC. But I am unable to get a replayable data source.
https://martincarstenbach.wordpress.com/2013/12/13/playing-with-application-continuity-in-rac-12c/
To test Application Continuity you can kill the sessions on the server with ALTER SYSTEM KILL SESSION 'sid,serial#', which will simulate a failure.
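For example, a rough sketch of driving that from a second (admin) JDBC connection while the test above is running; the username filter, credentials and connect string are assumptions based on the test code:
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
// Find the test user's active sessions (excluding this admin session itself) and kill
// them mid-transaction, so the replay driver has a failure to recover from.
public class KillTestSessions {
    public static void main(String[] args) throws Exception {
        try (Connection admin = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/orcl", "system", "oracle")) {
            List<String> targets = new ArrayList<>();
            try (Statement query = admin.createStatement();
                 ResultSet rs = query.executeQuery(
                         "SELECT sid, serial# FROM v$session "
                       + "WHERE username = 'SYSTEM' AND status = 'ACTIVE' "
                       + "AND sid != SYS_CONTEXT('USERENV', 'SID')")) {
                while (rs.next()) {
                    targets.add(rs.getInt(1) + "," + rs.getInt(2));
                }
            }
            try (Statement kill = admin.createStatement()) {
                for (String sidSerial : targets) {
                    kill.execute("ALTER SYSTEM KILL SESSION '" + sidSerial + "'");
                    System.out.println("Killed session " + sidSerial);
                }
            }
        }
    }
}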

CloseableHttpClient.execute freezes once every few weeks despite timeouts

We have a Groovy singleton that uses PoolingHttpClientConnectionManager (httpclient:4.3.6) with a pool size of 200 to handle very high concurrent connections to a search service and processes the XML response.
Despite having specified timeouts, it freezes about once a month but runs perfectly fine the rest of the time.
The Groovy singleton is below. The method retrieveInputFromURL seems to block on client.execute(get);
@Singleton(strict=false)
class StreamManagerUtil {
// Instantiate once and cache for the lifetime of the Singleton class
private static PoolingHttpClientConnectionManager connManager = new PoolingHttpClientConnectionManager();
private static CloseableHttpClient client;
private static final IdleConnectionMonitorThread staleMonitor = new IdleConnectionMonitorThread(connManager);
private int warningLimit;
private int readTimeout;
private int connectionTimeout;
private int connectionFetchTimeout;
private int poolSize;
private int routeSize;
PropertyManager propertyManager = PropertyManagerFactory.getInstance().getPropertyManager("sebe.properties")
StreamManagerUtil() {
// Initialize all instance variables in singleton from properties file
readTimeout = 6
connectionTimeout = 6
connectionFetchTimeout =6
// Pooling
poolSize = 200
routeSize = 50
// Connection pool size and number of routes to cache
connManager.setMaxTotal(poolSize);
connManager.setDefaultMaxPerRoute(routeSize);
// ConnectTimeout : time to establish connection with GSA
// ConnectionRequestTimeout : time to get connection from pool
// SocketTimeout : waiting for packets form GSA
RequestConfig config = RequestConfig.custom()
.setConnectTimeout(connectionTimeout * 1000)
.setConnectionRequestTimeout(connectionFetchTimeout * 1000)
.setSocketTimeout(readTimeout * 1000).build();
// Keep alive for 5 seconds if server does not have keep alive header
ConnectionKeepAliveStrategy myStrategy = new ConnectionKeepAliveStrategy() {
@Override
public long getKeepAliveDuration(HttpResponse response, HttpContext context) {
HeaderElementIterator it = new BasicHeaderElementIterator
(response.headerIterator(HTTP.CONN_KEEP_ALIVE));
while (it.hasNext()) {
HeaderElement he = it.nextElement();
String param = he.getName();
String value = he.getValue();
if (value != null && param.equalsIgnoreCase
("timeout")) {
return Long.parseLong(value) * 1000;
}
}
return 5 * 1000;
}
};
// Close all connection older than 5 seconds. Run as separate thread.
staleMonitor.start();
staleMonitor.join(1000);
client = HttpClients.custom().setDefaultRequestConfig(config).setKeepAliveStrategy(myStrategy).setConnectionManager(connManager).build();
}
private retrieveInputFromURL (String categoryUrl, String xForwFor, boolean isXml) throws Exception {
URL url = new URL( categoryUrl );
GPathResult searchResponse = null
InputStream inputStream = null
HttpResponse response;
HttpGet get;
try {
long startTime = System.nanoTime();
get = new HttpGet(categoryUrl);
response = client.execute(get);
int resCode = response.getStatusLine().getStatusCode();
if (xForwFor != null) {
get.setHeader("X-Forwarded-For", xForwFor)
}
if (resCode == HttpStatus.SC_OK) {
if (isXml) {
extractXmlString(response)
} else {
StringBuffer buffer = buildStringFromResponse(response)
return buffer.toString();
}
}
}
catch (Exception e)
{
throw e;
}
finally {
// Release connection back to pool
if (response != null) {
EntityUtils.consume(response.getEntity());
}
}
}
private extractXmlString(HttpResponse response) {
InputStream inputStream = response.getEntity().getContent()
XmlSlurper slurper = new XmlSlurper()
slurper.setFeature("http://xml.org/sax/features/validation", false)
slurper.setFeature("http://apache.org/xml/features/disallow-doctype-decl", false)
slurper.setFeature("http://apache.org/xml/features/nonvalidating/load-dtd-grammar", false)
slurper.setFeature("http://apache.org/xml/features/nonvalidating/load-external-dtd", false)
return slurper.parse(inputStream)
}
private StringBuffer buildStringFromResponse(HttpResponse response) {
StringBuffer buffer= new StringBuffer();
BufferedReader rd = new BufferedReader(new InputStreamReader(response.getEntity().getContent()));
String line = "";
while ((line = rd.readLine()) != null) {
buffer.append(line);
System.out.println(line);
}
return buffer
}
public class IdleConnectionMonitorThread extends Thread {
private final HttpClientConnectionManager connMgr;
private volatile boolean shutdown;
public IdleConnectionMonitorThread
(PoolingHttpClientConnectionManager connMgr) {
super();
this.connMgr = connMgr;
}
@Override
public void run() {
try {
while (!shutdown) {
synchronized (this) {
wait(5000);
connMgr.closeExpiredConnections();
connMgr.closeIdleConnections(10, TimeUnit.SECONDS);
}
}
} catch (InterruptedException ex) {
// Ignore
}
}
public void shutdown() {
shutdown = true;
synchronized (this) {
notifyAll();
}
}
}
I also found this in the log, leading me to believe it happened while waiting for response data:
java.net.SocketTimeoutException: Read timed out
    at java.net.SocketInputStream.socketRead0(Native Method)
    at java.net.SocketInputStream.read(SocketInputStream.java:150)
    at java.net.SocketInputStream.read(SocketInputStream.java:121)
    at sun.security.ssl.InputRecord.readFully(InputRecord.java:465)
Findings thus far:
We are using Java 1.8u25. There is an open issue on a similar scenario: https://bugs.openjdk.java.net/browse/JDK-8075484
HttpClient had a similar report (https://issues.apache.org/jira/browse/HTTPCLIENT-1589), but that was fixed in the 4.3.6 version we are using.
Questions
Can this be a synchronisation issue? From my understanding, even though the singleton is accessed by multiple threads, the only shared data is the cached CloseableHttpClient.
Is there anything else fundamentally wrong with this code or approach that may be causing this behaviour?
I do not see anything obviously wrong with your code. I would strongly recommend setting the SO_TIMEOUT parameter on the connection manager, though, to make sure it applies to all new sockets at creation time, not at the time of request execution.
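For example, something along these lines with the 4.3.x API (a sketch; the 6-second value simply mirrors the readTimeout above):
import org.apache.http.config.SocketConfig;
import org.apache.http.impl.conn.PoolingHttpClientConnectionManager;
// Configure SO_TIMEOUT at the connection-manager level so it is applied to sockets
// when they are created, not only when a request is executed.
class PoolSocketTimeoutConfig {
    static PoolingHttpClientConnectionManager build() {
        PoolingHttpClientConnectionManager connManager = new PoolingHttpClientConnectionManager();
        connManager.setDefaultSocketConfig(
                SocketConfig.custom().setSoTimeout(6 * 1000).build());
        return connManager;
    }
}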
It would also help to know what exactly 'freezing' means. Are worker threads getting blocked waiting to acquire connections from the pool, or waiting for response data?
Please also note that worker threads can appear 'frozen' if the server keeps sending bits of chunk-coded data. As usual, a wire / context log of the client session would help a lot:
http://hc.apache.org/httpcomponents-client-4.3.x/logging.html

How to purge/delete message from weblogic JMS queue

Is there any way (apart from consuming the messages) I can purge/delete messages programmatically from a JMS queue? Even if it is only possible via the WLST command-line tool, it will be of much help.
Here is an example in WLST for a Managed Server running on port 7005:
connect('weblogic', 'weblogic', 't3://localhost:7005')
serverRuntime()
cd('/JMSRuntime/ManagedSrv1.jms/JMSServers/MyAppJMSServer/Destinations/MyAppJMSModule!QueueNameToClear')
cmo.deleteMessages('')
The last command should return the number of messages it deleted.
You can use JMX to purge the queue, either from Java or from WLST (Python). You can find the MBean definitions for WLS 10.0 on http://download.oracle.com/docs/cd/E11035_01/wls100/wlsmbeanref/core/index.html.
Here is a basic Java example (don't forget to put weblogic.jar in the CLASSPATH):
import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.management.ObjectName;
import javax.naming.Context;
import weblogic.management.mbeanservers.runtime.RuntimeServiceMBean;
public class PurgeWLSQueue {
private static final String WLS_USERNAME = "weblogic";
private static final String WLS_PASSWORD = "weblogic";
private static final String WLS_HOST = "localhost";
private static final int WLS_PORT = 7001;
private static final String JMS_SERVER = "wlsbJMSServer";
private static final String JMS_DESTINATION = "test.q";
private static JMXConnector getMBeanServerConnector(String jndiName) throws Exception {
Hashtable<String,String> h = new Hashtable<String,String>();
JMXServiceURL serviceURL = new JMXServiceURL("t3", WLS_HOST, WLS_PORT, jndiName);
h.put(Context.SECURITY_PRINCIPAL, WLS_USERNAME);
h.put(Context.SECURITY_CREDENTIALS, WLS_PASSWORD);
h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);
return connector;
}
public static void main(String[] args) {
try {
JMXConnector connector =
getMBeanServerConnector("/jndi/"+RuntimeServiceMBean.MBEANSERVER_JNDI_NAME);
MBeanServerConnection mbeanServerConnection =
connector.getMBeanServerConnection();
ObjectName service = new ObjectName("com.bea:Name=RuntimeService,Type=weblogic.management.mbeanservers.runtime.RuntimeServiceMBean");
ObjectName serverRuntime = (ObjectName) mbeanServerConnection.getAttribute(service, "ServerRuntime");
ObjectName jmsRuntime = (ObjectName) mbeanServerConnection.getAttribute(serverRuntime, "JMSRuntime");
ObjectName[] jmsServers = (ObjectName[]) mbeanServerConnection.getAttribute(jmsRuntime, "JMSServers");
for (ObjectName jmsServer: jmsServers) {
if (JMS_SERVER.equals(jmsServer.getKeyProperty("Name"))) {
ObjectName[] destinations = (ObjectName[]) mbeanServerConnection.getAttribute(jmsServer, "Destinations");
for (ObjectName destination: destinations) {
if (destination.getKeyProperty("Name").endsWith("!"+JMS_DESTINATION)) {
Object o = mbeanServerConnection.invoke(
destination,
"deleteMessages",
new Object[] {""}, // selector expression
new String[] {"java.lang.String"});
System.out.println("Result: "+o);
break;
}
}
break;
}
}
connector.close();
} catch (Exception e) {
e.printStackTrace();
}
}
}
Works great in a single-node environment, but what happens if you are in a clustered environment with ONE migratable JMSServer (currently on node #1) and this code is executing on node #2? Then there will be no JMSServer available and no message will be deleted.
This is the problem I'm facing right now...
Is there a way to connect to the JMS queue without having the JMSServer available?
[edit]
Found a solution: Use the domain runtime service instead:
ObjectName service = new ObjectName("com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");
and be sure to access the admin port on the WLS cluster.
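In code, the same purge example can be pointed at the Domain Runtime MBean server. A rough sketch (host, port and credentials are placeholders, and I'm assuming the standard DomainRuntimeServiceMBean JNDI constant):
import java.util.Hashtable;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;
import javax.naming.Context;
import weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean;
public class PurgeWLSQueueViaDomainRuntime {
    public static void main(String[] args) {
        try {
            Hashtable<String,String> h = new Hashtable<String,String>();
            h.put(Context.SECURITY_PRINCIPAL, "weblogic");
            h.put(Context.SECURITY_CREDENTIALS, "weblogic");
            h.put(JMXConnectorFactory.PROTOCOL_PROVIDER_PACKAGES, "weblogic.management.remote");
            // Connect to the ADMIN server's host/port, not to a managed server
            JMXServiceURL serviceURL = new JMXServiceURL("t3", "adminhost", 7001,
                    "/jndi/" + DomainRuntimeServiceMBean.MBEANSERVER_JNDI_NAME);
            JMXConnector connector = JMXConnectorFactory.connect(serviceURL, h);
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName service = new ObjectName("com.bea:Name=DomainRuntimeService,Type=weblogic.management.mbeanservers.domainruntime.DomainRuntimeServiceMBean");
            // ServerRuntimes covers every running server in the domain, so the migratable
            // JMSServer is reachable no matter which node currently hosts it.
            ObjectName[] serverRuntimes = (ObjectName[]) mbsc.getAttribute(service, "ServerRuntimes");
            for (ObjectName serverRuntime : serverRuntimes) {
                // From here, walk JMSRuntime -> JMSServers -> Destinations and invoke
                // deleteMessages("") exactly as in the full example above.
                System.out.println("Server runtime: " + serverRuntime);
            }
            connector.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}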
If this is a one-time task, the easiest would be to do it through the console...
The program in the link below helps you clear only pending messages from the queue, based on the redelivered message parameter:
http://techytalks.blogspot.in/2016/02/deletepurge-pending-messages-from-jms.html
