C3p0 trying to create a new connection pool and failing with ClassNotFoundException - spring

I'm seeing a very strange behavior in my application.
My application setup: Spring + Hibernate + C3p0
The application keeps running fine, when all of a sudden I start seeing these errors in the logs and the system totally stops processing any database-specific requests.
WARN c3p0.C3P0Registry - Could not create for find ConnectionCustomizer with class name ''.
java.lang.ClassNotFoundException:
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:169)
at com.mchange.v2.c3p0.C3P0Registry.getConnectionCustomizer(C3P0Registry.java:181)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPoolManager.getConnectionCustomizer(C3P0PooledConnectionPoolManager.java:636)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPoolManager.createPooledConnectionPool(C3P0PooledConnectionPoolManager.java:738)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPoolManager.getPool(C3P0PooledConnectionPoolManager.java:257)
at com.mchange.v2.c3p0.impl.C3P0PooledConnectionPoolManager.getPool(C3P0PooledConnectionPoolManager.java:271)
at com.mchange.v2.c3p0.impl.AbstractPoolBackedDataSource.getConnection(AbstractPoolBackedDataSource.java:128)
at org.springframework.orm.hibernate3.LocalDataSourceConnectionProvider.getConnection(LocalDataSourceConnectionProvider.java:80)
at org.hibernate.jdbc.ConnectionManager.openConnection(ConnectionManager.java:423)
at org.hibernate.jdbc.ConnectionManager.getConnection(ConnectionManager.java:144)
at org.hibernate.jdbc.AbstractBatcher.prepareSelectStatement(AbstractBatcher.java:123)
at org.hibernate.id.SequenceGenerator.generate(SequenceGenerator.java:73)
at org.hibernate.event.def.AbstractSaveEventListener.saveWithGeneratedId(AbstractSaveEventListener.java:99)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.saveWithGeneratedOrRequestedId(DefaultSaveOrUpdateEventListener.java:187)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.entityIsTransient(DefaultSaveOrUpdateEventListener.java:172)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.performSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:94)
at org.hibernate.event.def.DefaultSaveOrUpdateEventListener.onSaveOrUpdate(DefaultSaveOrUpdateEventListener.java:70)
at org.hibernate.impl.SessionImpl.fireSaveOrUpdate(SessionImpl.java:507)
at org.hibernate.impl.SessionImpl.saveOrUpdate(SessionImpl.java:499)
at org.hibernate.impl.SessionImpl.saveOrUpdate(SessionImpl.java:495)
at org.springframework.orm.hibernate3.HibernateTemplate$18.doInHibernate(HibernateTemplate.java:690)
at org.springframework.orm.hibernate3.HibernateTemplate.execute(HibernateTemplate.java:365)
at org.springframework.orm.hibernate3.HibernateTemplate.saveOrUpdate(HibernateTemplate.java:687)
Why would c3p0 need to create a new connection pool at this particular time? Before these exceptions the application works 100% fine and responds perfectly.
Also, I've not provided any connectionCustomizerClassName property in my c3p0 configuration, so why would it try to load one? In this stack trace I see it's a non-null empty string ''.
Any clues?
==============================================================================
I see the following Hibernate jars in the application's classpath:
hibernate-3.2.6.ga.jar
spring-hibernate-1.2.6.jar
I see the following c3p0 jars in the application's classpath:
c3p0-0.9.1.jar
c3p0-0.9.2-pre5.jar
c3p0-oracle-thin-extras-0.9.2-pre5.jar
Code that manually reads these properties and sets them on the datasource (I do not read/set any connectionCustomizerClassName property here at all; a condensed sketch of the full wiring follows the property list below):
ComboPooledDataSource dataSource = new ComboPooledDataSource();
dataSource.setMinPoolSize(Integer.parseInt(props.getProperty("jdbc.hibernate.c3p0.minPoolSize")));
.....
Here are the c3p0 properties being used:
jdbc.hibernate.c3p0.minPoolSize=100
jdbc.hibernate.c3p0.initialPoolSize=100
jdbc.hibernate.c3p0.maxPoolSize=1000
jdbc.hibernate.c3p0.maxIdleTime=21600
jdbc.hibernate.c3p0.maxStatementsPerConnection=0
jdbc.hibernate.c3p0.maxStatements=0
jdbc.hibernate.c3p0.numHelperThreads=30
jdbc.hibernate.c3p0.checkoutTimeout=30000
jdbc.hibernate.c3p0.idleConnectionTestPeriod=900
jdbc.hibernate.c3p0.preferredTestQuery=SELECT 1 FROM dual
jdbc.hibernate.c3p0.maxConnectionAge=0
jdbc.hibernate.c3p0.maxIdleTimeExcessConnections=3600
jdbc.hibernate.c3p0.acquireIncrement=10
jdbc.hibernate.c3p0.acquireRetryDelay=5000
jdbc.hibernate.c3p0.acquireRetryAttempts=6
jdbc.hibernate.c3p0.propertyCycle=180
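For context, here is a condensed sketch of that wiring (my own illustration, not the poster's full code; the elided setters presumably follow the same Integer.parseInt pattern as setMinPoolSize, and `props` is the question's Properties object):
import java.util.Properties;
import com.mchange.v2.c3p0.ComboPooledDataSource;

// Sketch: apply the jdbc.hibernate.c3p0.* properties to the pool.
// Note that connectionCustomizerClassName is never set anywhere.
ComboPooledDataSource dataSource = new ComboPooledDataSource();
dataSource.setMinPoolSize(Integer.parseInt(props.getProperty("jdbc.hibernate.c3p0.minPoolSize")));
dataSource.setInitialPoolSize(Integer.parseInt(props.getProperty("jdbc.hibernate.c3p0.initialPoolSize")));
dataSource.setMaxPoolSize(Integer.parseInt(props.getProperty("jdbc.hibernate.c3p0.maxPoolSize")));
dataSource.setMaxIdleTime(Integer.parseInt(props.getProperty("jdbc.hibernate.c3p0.maxIdleTime")));
dataSource.setCheckoutTimeout(Integer.parseInt(props.getProperty("jdbc.hibernate.c3p0.checkoutTimeout")));
dataSource.setPreferredTestQuery(props.getProperty("jdbc.hibernate.c3p0.preferredTestQuery"));
// ...and so on for the remaining int-valued properties listed above.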

Following up a conversation in the comments on the posted question, it looks like the issue here is that VisualVM updates the null-valued property connectionCustomizerClassName to an empty String value, which c3p0 currently treats as non-null and interprets as a class name.
Going forward (c3p0-0.9.5-pre7 and above), c3p0 will guard against this and interpret an all-whitespace connectionCustomizerClassName as equivalent to null. But in the meantime, or on older versions, take care.
One easy workaround would be to define a NullConnectionCustomizer:
package mypkg;

import com.mchange.v2.c3p0.*;

// A no-op customizer: it inherits AbstractConnectionCustomizer's empty hooks,
// so it changes no pool behavior.
public class NullConnectionCustomizer extends AbstractConnectionCustomizer
{}
And then use mypkg.NullConnectionCustomizer for connectionCustomizerClassName, so that the corresponding field in VisualVM is not empty and ambiguously interpretable as either empty String or null.
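For example, if the pool is configured programmatically as in the question (a one-line sketch; the setter is c3p0's standard API):
// Point the pool at the no-op customizer explicitly, so the JMX/VisualVM
// field shows a real class name instead of an ambiguous empty value.
dataSource.setConnectionCustomizerClassName("mypkg.NullConnectionCustomizer");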

Related

org.apache.hive.jdbc.HiveDriver: HiveBaseResultSet has not implemented absolute()?

I just started using the driver org.apache.hive.jdbc.HiveDriver (version 1.2.1 for spark2) with a Spark Thrift Server (STS) (reference here).
java.sql.ResultSet defines the method absolute() (JavaDoc here),
but HiveBaseResultSet seems to have chosen not to implement the method (source code here).
So when my application (built on top of SmartGWT) was doing a simple operation, I got the following error message:
=== 2017-05-13 18:06:16,980 [3-47] WARN RequestContext - dsRequest.execute() failed:
java.sql.SQLException: Method not supported
at org.apache.hive.jdbc.HiveBaseResultSet.absolute(HiveBaseResultSet.java:70)
at org.apache.commons.dbcp.DelegatingResultSet.absolute(DelegatingResultSet.java:373)
at com.isomorphic.sql.SQLDataSource.executeWindowedSelect(SQLDataSource.java:2970)
at com.isomorphic.sql.SQLDataSource.SQLExecute(SQLDataSource.java:2024)
What is the reason the driver chose not to implement absolute()?
Is there any workaround for this limitation?
Thanks to Mark Rotteveel for the hint. Now I understand better, so let me post an answer to my own question.
Implementation of absolute() is optional
As specified by the interface documentation for ResultSet#absolute() (link), the implementation of absolute() is optional -- especially when the result set type is TYPE_FORWARD_ONLY.
Workaround
In my case, the result set comes from a Spark Thrift Server (STS), so I guess it is indeed forward-only. So the question became how to instruct my application NOT to make a call to absolute(), which is basically for cursor movement.
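As an aside, here is a generic JDBC sketch (not SmartGWT code; the host, query, and page start are made up) of what "not calling absolute()" means for a forward-only result set -- skip rows with next() instead of jumping:
import java.sql.*;

// Hypothetical: emulate rs.absolute(startRow) on a TYPE_FORWARD_ONLY result
// set by consuming rows with next() until the page start is reached.
public class ForwardOnlyPaging {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection("jdbc:hive2://host:port", "someuser", "");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT some_col FROM some_table")) {
            int startRow = 100; // first row of the requested page
            for (int i = 1; i < startRow && rs.next(); i++) {
                // skip rows 1..99 instead of calling rs.absolute(100)
            }
            while (rs.next()) {
                System.out.println(rs.getString(1)); // rows 100 onward
            }
        }
    }
}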
SmartGWT-specific answer
For SmartGWT, this is controlled by a property called sqlPaging, which we can specify for an OperationBinding. The right value to use seems to be dropAtServer (more reference here). So I set my SmartGWT DataSource XML file to something like this:
<operationBindings>
  <operationBinding operationType="fetch" progressiveLoading="false"
                    sqlPaging="dropAtServer">
  </operationBinding>
</operationBindings>
After that I saw another error, which is now related to HiveConnection#commit():
java.sql.SQLException: Method not supported
at org.apache.hive.jdbc.HiveConnection.commit(HiveConnection.java:742)
at org.apache.commons.dbcp.DelegatingConnection.commit(DelegatingConnection.java:334)
at com.isomorphic.sql.SQLTransaction.commitTransaction(SQLTransaction.java:307)
at com.isomorphic.sql.SQLDataSource.commit(SQLDataSource.java:4673)
After more digging, I realized that the right property for SmartGWT to control the commit behavior is autoJoinTransactions, and I should set it to false (more reference here). After these two changes, I could get my application to talk to STS via jdbc.HiveDriver.
For anyone out there who is also trying this, here are my full settings for the driver in SmartGWT's server.properties (more reference here):
sql.defaultDatabase: perf2 # this name is picked by me, but it can be anyname
sql.perf2.driver.networkProtocol: tcp
sql.perf2.driver: org.apache.hive.jdbc.HiveDriver # important
sql.perf2.database.type: generic # important
sql.perf2.autoJoinTransactions: false # important
sql.perf2.interface.type: driverManager # important
sql.perf2.driver.url: jdbc:hive2://host:port # important -- pick your host:port
sql.perf2.driver.user: someuser # important -- pick your username
sql.perf2.interface.credentialsInURL: true
sql.perf2.driver.databaseName: someDb
sql.perf2.driver.context:

Camel RabbitMQ endpoint cannot be created when a dead letter exchange is declared

I'm having an issue creating a RabbitMQ endpoint with Camel. The issue only occurs when I declare a dead letter exchange option based on the Camel documentation. This is my URI for creating the endpoint:
rabbitmq://localhost/com.mydomain.inbound.exhange?deadLetterExchange=dead.msgs
All is fine when I omit the deadLetterExchange option but as soon as I include it I get the following (not very helpful) exception:
Caused by: java.lang.NullPointerException
at com.rabbitmq.client.impl.ChannelN.validateQueueNameLength(ChannelN.java:1244) ~[amqp-client-3.6.1.jar:?]
at com.rabbitmq.client.impl.ChannelN.queueDeclare(ChannelN.java:843) ~[amqp-client-3.6.1.jar:?]
at com.rabbitmq.client.impl.ChannelN.queueDeclare(ChannelN.java:61) ~[amqp-client-3.6.1.jar:?]
at org.apache.camel.component.rabbitmq.RabbitMQDeclareSupport.declareAndBindQueue(RabbitMQDeclareSupport.java:96) ~[camel-rabbitmq-2.17.0.jar:2.17.0]
at org.apache.camel.component.rabbitmq.RabbitMQDeclareSupport.declareAndBindDeadLetterExchangeWithQueue(RabbitMQDeclareSupport.java:43) ~[camel-rabbitmq-2.17.0.jar:2.17.0]
at org.apache.camel.component.rabbitmq.RabbitMQDeclareSupport.declareAndBindExchangesAndQueuesUsing(RabbitMQDeclareSupport.java:35) ~[camel-rabbitmq-2.17.0.jar:2.17.0]
at org.apache.camel.component.rabbitmq.RabbitMQEndpoint.declareExchangeAndQueue(RabbitMQEndpoint.java:222) ~[camel-rabbitmq-2.17.0.jar:2.17.0]
at org.apache.camel.component.rabbitmq.RabbitConsumer.openChannel(RabbitConsumer.java:288) ~[camel-rabbitmq-2.17.0.jar:2.17.0]
at org.apache.camel.component.rabbitmq.RabbitConsumer.(RabbitConsumer.java:57) ~[camel-rabbitmq-2.17.0.jar:2.17.0]
at org.apache.camel.component.rabbitmq.RabbitMQConsumer.createConsumer(RabbitMQConsumer.java:108) ~[camel-rabbitmq-2.17.0.jar:2.17.0]
at org.apache.camel.component.rabbitmq.RabbitMQConsumer.startConsumers(RabbitMQConsumer.java:90) ~[camel-rabbitmq-2.17.0.jar:2.17.0]
at org.apache.camel.component.rabbitmq.RabbitMQConsumer.doStart(RabbitMQConsumer.java:160) ~[camel-rabbitmq-2.17.0.jar:2.17.0]
at org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61) ~[camel-core-2.17.0.jar:2.17.0]
at org.apache.camel.impl.DefaultCamelContext.startService(DefaultCamelContext.java:3269) ~[camel-core-2.17.0.jar:2.17.0]
at org.apache.camel.impl.DefaultCamelContext.doStartOrResumeRouteConsumers(DefaultCamelContext.java:3563) ~[camel-core-2.17.0.jar:2.17.0]
at org.apache.camel.impl.DefaultCamelContext.doStartRouteConsumers
....
Just to note that I've also tried creating the exchange and queue manually in the hope that this might work, but no luck.
Additional Info:
camel-spring-boot-starter (2.17.0)
camel-rabbitmq (2.17.0)
Try adding a deadLetterQueue option -
rabbitmq://localhost/com.mydomain.inbound.exhange?deadLetterExchange=dead.msgs&deadLetterQueue=my.dead.letter.queue
I also had to add further options to the uri to get it to work. I added
deadLetterExchangeType
queueArgsConfigurer
The queueArgsConfigurer is an implementation of org.apache.camel.component.rabbitmq.ArgsConfigurer:
class MyQueueArgs implements ArgsConfigurer {
    @Override
    public void configurArgs(Map<String, Object> args) { // "configurArgs" is misspelled in the Camel interface itself!
        args.put("x-dead-letter-exchange", "my.dead.letter");
        args.put("x-dead-letter-routing-key", "my.dead.letter.key");
    }
}
Mine is a Spring app, so myArgs (referenced in the uri below) is created in the bean factory.
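For example, a registration along these lines (a minimal sketch; Spring Java config is an assumption on my part, an XML <bean> definition works equally well):
import org.apache.camel.component.rabbitmq.ArgsConfigurer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Hypothetical Spring configuration exposing the ArgsConfigurer under the
// bean name "myArgs", which the endpoint URI references as #myArgs.
@Configuration
public class RabbitArgsConfig {
    @Bean
    public ArgsConfigurer myArgs() {
        return new MyQueueArgs();
    }
}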
So, the full uri is like this -
rabbitmq://hostname/exchangeName?routingKey=$routingKey&vhost=virtualHostname&exchangeType=exType&autoDelete=false&queue=my.queue&deadLetterExchange=my.dead.letter&deadLetterExchangeType=dlExType&deadLetterQueue=my.dead.letter.queue&queueArgsConfigurer=#myArgs
I probably don't need to specify the dead letter exchange in both the uri and the ArgsConfigurer implementation.
For more on ArgsConfigurer this Camel issue might help - #8457
I had to look at the source code to figure a lot of this out. What is missing from the docs is a definition of the dependencies between options. There are some options, particularly around dead letter exchanges, which become mandatory once another is specified. That's why you are getting your errors. Have a look at populateQueueArgumentsFromDeadLetterExchange in RabbitMQDeclareSupport.
EDIT
A simplification to my answer: I dropped the ArgsConfigurer implementation in the end. I went with this -
rabbitmq://myHostname/myExchangeName?
username=myUserName&
password=myPassword&
queue=myQueueName&
routingKey=myRoutingKey&
vhost=myVirtualHostname&
exchangeType=topic&
autoDelete=false&
deadLetterExchange=myDeadLetter&
deadLetterExchangeType=topic&
deadLetterQueue=myDeadLetterQueue&
deadLetterRoutingKey=myDeadLetterRoutingKey&
autoAck=false

Mocking cassandra session object

I am trying to mock the Session object of Cassandra, which is obtained in the actual code in the following way:
session = cluster.connect(keyspace);
What I am looking for is "To execute the statement and return the mock session object"
I have tried the following options:
MemberModifier.stub(MemberMatcher.method(Cluster.class, "connect", String.class)).toReturn(session);
PowerMockito.when(cluster.connect(keyspace)).thenReturn(session);
PowerMockito.when(cluster.connect(keyspace)).thenAnswer(new Answer() { public Object answer(InvocationOnMock invocation) { return session; } });
PowerMockito.when(cluster.connect(keyspace)).thenReturn(session);
Session testSession = cassandraService.getCassandraDBConnection();
None of these, individually or in combination, seems to work.
When the relevant JUnit test is executed, the error that I get is the stack trace below:
All host(s) tried for query failed (tried: /<<ip address>>:<<port no>> (com.datastax.driver.core.exceptions.TransportException: [/ip address] Cannot connect))
at com.datastax.driver.core.ControlConnection.reconnectInternal(ControlConnection.java:231)
at com.datastax.driver.core.ControlConnection.connect(ControlConnection.java:77)
at com.datastax.driver.core.Cluster$Manager.init(Cluster.java:1414)
at com.datastax.driver.core.Cluster.init(Cluster.java:162)
at com.datastax.driver.core.Cluster.connectAsync(Cluster.java:333)
at com.datastax.driver.core.Cluster.connect(Cluster.java:283)
at com.capitalone.payments.customerprofile.service.CassandraInteractionService.getCassandraDBConnection(CassandraInteractionService.java:202)
Could somebody guide me here please?
(I have masked ip address and port number in stack trace)
Thanks!
-Sriram
I guess that you want to mock the Java driver Session object for testing, right?
In this case, I would recommend:
Use an embedded Cassandra server for unit tests; see Achilles Embedded Cassandra or Cassandra Unit (a minimal sketch using the latter follows below)
Use Stubbed Cassandra, which simulates CQL requests and responses. This is probably the closest to what you want, short of mocking
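For instance, a minimal sketch with Cassandra Unit (EmbeddedCassandraServerHelper and the default port 9142 come from cassandra-unit; the keyspace name is made up):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import org.cassandraunit.utils.EmbeddedCassandraServerHelper;

// Hypothetical: start an in-process Cassandra and hand the code under test a
// real Session, so nothing on the Cluster.connect() path needs to be mocked.
public class EmbeddedCassandraExample {
    public static void main(String[] args) throws Exception {
        EmbeddedCassandraServerHelper.startEmbeddedCassandra();
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")
                .withPort(9142) // cassandra-unit's default native transport port
                .build();
        Session session = cluster.connect();
        session.execute("CREATE KEYSPACE IF NOT EXISTS test_ks WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        // ... exercise the code under test with this session ...
        cluster.close();
    }
}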

Bad injection of maven properties in spring application context

I am trying to set up an ActiveMQ broker and apply the following policyEntry to it:
<policyEntry
queue="${broker.destination.queue.prefix}>"
gcInactiveDestinations="${broker.destination.purge.inactives}"
inactiveTimoutBeforeGC="${broker.destination.inactive.max.time}">
</policyEntry>
The variables point to a jms.properties file with the following entries:
broker.destination.purge.inactives = true
broker.destination.inactive.max.time = ${maven.jms.broker.destination.inactive.max.time}
Because I have different profiles, the last property points to the following property in the POM file:
<maven.jms.broker.destination.inactive.max.time>30000</maven.jms.broker.destination.inactive.max.time>
With this context, I am having a problem with the policy entry because:
inactiveTimoutBeforeGC: the broker expects a long value but it is being interpreted as an Integer (I have tried 30000L and 30000l, and neither works).
gcInactiveDestinations: must be interpreted as a boolean but it has been interpreted as a String.
How can I manage this situation?
Thanks!

Use MRUnit and AVRO together

I have created a Mapper & Reducer which use Avro for input, map output, and reduce output. When running an MRUnit test I get the following stack trace:
java.lang.NullPointerException
at org.apache.hadoop.io.serializer.SerializationFactory.getSerializer(SerializationFactory.java:73)
at org.apache.hadoop.mrunit.mock.MockOutputCollector.deepCopy(MockOutputCollector.java:74)
at org.apache.hadoop.mrunit.mock.MockOutputCollector.collect(MockOutputCollector.java:110)
at org.apache.hadoop.mrunit.mapreduce.mock.MockMapContextWrapper$MockMapContext.write(MockMapContextWrapper.java:119)
at org.apache.avro.mapreduce.AvroMapper.writePair(AvroMapper.java:22)
at com.bol.searchrank.phase.day.DayMapper.doMap(DayMapper.java:29)
at com.bol.searchrank.phase.day.DayMapper.doMap(DayMapper.java:1)
at org.apache.avro.mapreduce.AvroMapper.map(AvroMapper.java:16)
at org.apache.avro.mapreduce.AvroMapper.map(AvroMapper.java:1)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:144)
at org.apache.hadoop.mrunit.mapreduce.MapDriver.run(MapDriver.java:200)
at org.apache.hadoop.mrunit.mapreduce.MapReduceDriver.run(MapReduceDriver.java:207)
at com.bol.searchrank.phase.day.DayMapReduceTest.shouldProduceAndCountTerms(DayMapReduceTest.java:39)
The driver is initialized as follows (I have created an Avro MapReduce API implementation):
driver = new MapReduceDriver<AvroWrapper<Pair<Utf8, LiveTrackingLine>>, NullWritable, AvroKey<Utf8>, AvroValue<Product>, AvroWrapper<Pair<Utf8, Product>>, NullWritable>().withMapper(new DayMapper()).withReducer(new DayReducer());
Adding a configuration object with io.serializations doesn't help:
Configuration configuration = new Configuration();
configuration.setStrings("io.serializations", new String[] {
AvroSerialization.class.getName()
});
driver = new MapReduceDriver<AvroWrapper<Pair<Utf8, LiveTrackingLine>>, NullWritable, AvroKey<Utf8>, AvroValue<Product>, AvroWrapper<Pair<Utf8, Product>>, NullWritable>().withMapper(new DayMapper()).withReducer(new DayReducer()).withConfiguration(configuration);
I use Hadoop & MRUnit 0.20.2-cdh3u2 from Cloudera and Avro MapRed 1.6.3.
You are getting an NPE because the SerializationFactory is not finding an acceptable class implementing Serialization in io.serializations.
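One thing worth checking (a hedged sketch, not a confirmed fix for these versions): NullWritable is handled by WritableSerialization, and overwriting io.serializations with only the Avro class removes the serializer the NullWritable values need. Registering both may get further:
import org.apache.avro.mapred.AvroSerialization;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.io.serializer.WritableSerialization;

// Register both serializations: Writable for the NullWritable values and
// Avro for the AvroWrapper keys/values.
Configuration configuration = new Configuration();
configuration.setStrings("io.serializations",
        WritableSerialization.class.getName(),
        AvroSerialization.class.getName());
driver.withConfiguration(configuration);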
MRUnit had several bugs related to serializations other than Writable, including MRUNIT-45, MRUNIT-70, MRUNIT-77, and MRUNIT-86 at https://issues.apache.org/jira/browse/MRUNIT. These bugs involved the conf not getting passed to the SerializationFactory constructor correctly, or code that required a default constructor on the key or value class, which all Writables have. All of these fixes appear in Apache MRUnit 0.9.0-incubating, which will be released sometime this week.
Cloudera's 0.20.2-cdh3u2 MRUnit is close to Apache MRUnit 0.5.0-incubating. I think your code may still be a problem even in 0.9.0-incubating; please email your full code example to mrunit-user@incubator.apache.org and the Apache MRUnit project will be happy to take a look at it.
This will compile now that MRUNIT-99 relaxes the restriction on the K2 type parameter so that it does not have to be Comparable.
