JMeter custom sampler - share the same connections across multiple runTest calls

I have successfully created a custom sampler by extending AbstractJavaSamplerClient. In the sampler implementation I make a JMS connection and send a message to a queue. This works great: if I configure my Thread Group to 100 threads with a ramp-up of 1, my sampler pushes 100 messages.
Now what I want to do is make the connection only once, at JMeter startup, and then reuse the same connection to send messages on each run.
Can anyone explain how to create a connection at JMeter startup and then share that connection with the sampler?
Note: I can't use the existing JMS publisher because I want to calculate my response time based on a different application event, not just the time taken to publish the message to JMS.
Thanks in advance.

You can use the testStarted method to initialize connections for all threads. Note that this method runs once, before threads are cloned, so in testStarted you need a connection pool from which threads can then take connections. For example, a very primitive connection pool would be a map with a sequential ID as the key and the connection object as the value. Each thread would take one connection from that pool, based on its thread number.
Such a simple pool could be initialized as:
@Override
public void testStarted()
{
    int maxConnections = getThreadContext().getThreadGroup().getNumThreads();
    ConcurrentMap<Integer, Object> connections = new ConcurrentHashMap<Integer, Object>();
    for (int i = 0; i < maxConnections; i++)
    {
        Object connection = null; // ... whatever you need to do to connect
        connections.put(Integer.valueOf(i), connection);
    }
    // Put the pool in the context of the thread group
    JMeterContextService.getContext().getVariables().putObject("MyConnections", connections);
}
(The connection object could be a more specific type, such as javax.jms.Connection, based on your needs.)
Later you can use the pool in the sample method:
// Get the connection pool from the context
ConcurrentMap<Integer, Object> connections = (ConcurrentMap<Integer, Object>)
        JMeterContextService.getContext().getVariables().getObject("MyConnections");
// Look the connection up by thread number, so each thread uses a different connection
Object connection = connections.get(getThreadContext().getThreadNum());
Here I naively assume a perfect mapping between the thread number returned at run time and the sequential integer I used as the key during connection initialization. That is not the best assumption and could be improved, but it's a valid starting point.
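One possible improvement, sketched below under the same assumptions as the code above (the createConnection() helper is a placeholder for your JMS setup code), is to replace the map with a BlockingQueue, so threads simply take and return connections without relying on thread numbers:
// In testStarted(): fill a queue instead of a map
BlockingQueue<Object> pool = new LinkedBlockingQueue<Object>();
for (int i = 0; i < maxConnections; i++)
{
    pool.offer(createConnection()); // placeholder for your JMS connection setup
}
JMeterContextService.getContext().getVariables().putObject("MyConnections", pool);

// In runTest(): borrow a connection for the duration of the sample
BlockingQueue<Object> connections = (BlockingQueue<Object>)
        JMeterContextService.getContext().getVariables().getObject("MyConnections");
Object connection;
try
{
    connection = connections.take(); // blocks if all connections are in use
}
catch (InterruptedException e)
{
    Thread.currentThread().interrupt();
    return null; // or fail the sample result
}
try
{
    // ... send the JMS message and measure what you need ...
}
finally
{
    connections.offer(connection); // return the connection for the next iteration
}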
You can then close and remove the connections in the testEnded method. This method also runs once, so we close all connections there:
@Override
public void testEnded()
{
    // Fetch the pool from the context, as in the sample method
    ConcurrentMap<Integer, Object> connections = (ConcurrentMap<Integer, Object>)
            JMeterContextService.getContext().getVariables().getObject("MyConnections");
    for (Entry<Integer, Object> entry : connections.entrySet())
    {
        Object connection = entry.getValue();
        // close 'connection' here: cast it to your connection type and call
        // close(), or do whatever you need to close it
        connections.remove(entry.getKey());
    }
}
Or you could just call connections.clear() when all connections are closed.
Disclosure: I did not test code in this answer directly, but used similar code fragments in the past, and reused them to answer this question. If you find any problems, feel free to update this answer.

Related

DefaultMessageListenerContainer stops processing messages

I'm hoping this is a simple configuration issue but I can't seem to figure out what it might be.
Set-up
- Spring-Boot 2.2.2.RELEASE
- cloud-starter
- cloud-starter-aws
- spring-jms
- spring-cloud-dependencies Hoxton.SR1
- amazon-sqs-java-messaging-lib 1.0.8
Problem
My application starts up fine and begins to process messages from Amazon SQS. After some amount of time I see the following warning
2020-02-01 04:16:21.482 LogLevel=WARN 1 --- [ecutor-thread14] o.s.j.l.DefaultMessageListenerContainer : Number of scheduled consumers has dropped below concurrentConsumers limit, probably due to tasks having been rejected. Check your thread pool configuration! Automatic recovery to be triggered by remaining consumers.
The above warning gets printed multiple times and eventually I see the following two INFO messages
2020-02-01 04:17:51.552 LogLevel=INFO 1 --- [ecutor-thread40] c.a.s.javamessaging.SQSMessageConsumer : Shutting down ConsumerPrefetch executor
2020-02-01 04:18:06.640 LogLevel=INFO 1 --- [ecutor-thread40] com.amazon.sqs.javamessaging.SQSSession : Shutting down SessionCallBackScheduler executor
The above two messages display several times, and at some point no more messages are consumed from SQS. I don't see any other messages in my log indicating an issue, but I get no indication from my handlers that they are processing messages (I have 2*), and I can see the AWS SQS queue growing in the number of messages and their age.
*: This exact code was working fine when I had a single handler; the problem started when I added the second one.
Configuration/Code
The first "WARNing" I realize is caused by the currency of the ThreadPoolTaskExecutor, but I can not get a configuration which works properly. Here is my current configuration for the JMS stuff, I have tried various levels of max pool size with no real affect other than the warings start sooner or later based on the pool size
public ThreadPoolTaskExecutor asyncAppConsumerTaskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setThreadGroupName("asyncConsumerTaskExecutor");
    taskExecutor.setThreadNamePrefix("asyncConsumerTaskExecutor-thread");
    taskExecutor.setCorePoolSize(10);
    // Allow the thread pool to grow up to 4 times the core size, evidently not
    // having the pool be larger than the max concurrency causes the JMS queue
    // to barf on itself with messages like
    // "Number of scheduled consumers has dropped below concurrentConsumers limit, probably due to tasks having been rejected. Check your thread pool configuration! Automatic recovery to be triggered by remaining consumers"
    taskExecutor.setMaxPoolSize(10 * 4);
    taskExecutor.setQueueCapacity(0); // do not queue up messages
    taskExecutor.setWaitForTasksToCompleteOnShutdown(true);
    taskExecutor.setAwaitTerminationSeconds(60);
    return taskExecutor;
}
Here is the JMS Container Factory we create
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(SQSConnectionFactory sqsConnectionFactory, ThreadPoolTaskExecutor asyncConsumerTaskExecutor) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(sqsConnectionFactory);
    factory.setDestinationResolver(new DynamicDestinationResolver());
    // The JMS processor will start 'concurrency' number of tasks
    // and supposedly will increase this to the max of '10 * 3'
    factory.setConcurrency(10 + "-" + (10 * 3));
    factory.setTaskExecutor(asyncConsumerTaskExecutor);
    // Let the task process 100 messages, default appears to be 10
    factory.setMaxMessagesPerTask(100);
    // Wait up to 5 seconds for a timeout, this keeps the task around a bit longer
    factory.setReceiveTimeout(5000L);
    factory.setSessionAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);
    return factory;
}
I added the setMaxMessagesPerTask and setReceiveTimeout calls based on suggestions found online; the problem persists without these and at various other settings (50, 2500L, 25, 1000L, etc.).
We create a default SQS connection factory
public SQSConnectionFactory sqsConnectionFactory(AmazonSQS amazonSQS) {
    return new SQSConnectionFactory(new ProviderConfiguration(), amazonSQS);
}
Finally, the handlers look like this:
@JmsListener(destination = "consumer-event-queue")
public void receiveEvents(String message) throws IOException {
    MyEventDTO myEventDTO = jsonObj.readValue(message, MyEventDTO.class);
    //messageTask.process(myEventDTO);
}
@JmsListener(destination = "myalert-sqs")
public void receiveAlerts(String message) throws IOException, InterruptedException {
    final MyAlertDTO myAlert = jsonObj.readValue(message, MyAlertDTO.class);
    myProcessor.addAlertToQueue(myAlert);
}
You can see that in the first function (receiveEvents) we just take the message from the queue and exit; we have not implemented the processing code for it yet.
The second function (receiveAlerts) gets the message, and the myProcessor.addAlertToQueue function creates a runnable object and submits it to a thread pool to be processed at some point in the future, roughly along the lines of the sketch below.
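(A hypothetical sketch of that pattern; the pool size and names are made up, not our actual code:)
private final ExecutorService alertPool = Executors.newFixedThreadPool(4); // size is made up

public void addAlertToQueue(final MyAlertDTO myAlert) {
    alertPool.submit(new Runnable() {
        @Override
        public void run() {
            // ... process the alert at some point in the future ...
        }
    });
}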
The problem (the warnings, the INFO messages, and the failure to consume messages) only started when we added the receiveAlerts function; previously receiveEvents was the only handler present and we did not see this behavior.
More
This is part of a larger project and I am working on breaking this code out into a smaller test case to see if I can duplicate this issue. I will post a follow-up with the results.
In the Meantime
I'm hoping this is just a config issue and someone more familiar with this can tell me what I'm doing wrong, or that someone can provide some thoughts and comments on how to correct this to work properly.
Thank you!
After fighting this one for a bit I think I finally resolved it.
The issue appears to be due to the DefaultJmsListenerContainerFactory: this factory creates a new DefaultJmsListenerContainer for EACH method with a @JmsListener annotation. The person who originally wrote the code thought it was only called once for the application, and that the created container would be re-used. So the issue was two-fold:
1. The ThreadPoolTaskExecutor attached to the factory had 40 threads. When the application had one @JmsListener method this worked fine, but when we added a second method, each method got 10 threads (20 total) for listening. That alone is fine; however, since we stated that each listener could grow up to 30 consumers, we quickly ran out of threads in the pool. This caused the "Number of scheduled consumers has dropped below concurrentConsumers limit" error.
2. This is probably obvious given the above, but I wanted to call it out explicitly: in the listener factory we set the concurrency to "10-30"; however, all of the listeners have to share that pool. The max concurrency has to be set so that each listener's max value is small enough that, even if every listener reaches its maximum, the total does not exceed the number of threads in the pool (e.g. if we have 2 @JmsListener annotated methods and a pool with 40 threads, the max value can be no more than 20).
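For illustration, a configuration consistent with that rule (a sketch based on the beans above, not tested): with two listeners sharing one executor, cap each listener's concurrency so that 2 x 20 = 40 matches the pool's max size.
public ThreadPoolTaskExecutor asyncAppConsumerTaskExecutor() {
    ThreadPoolTaskExecutor taskExecutor = new ThreadPoolTaskExecutor();
    taskExecutor.setCorePoolSize(20); // 2 listeners x 10 initial consumers each
    taskExecutor.setMaxPoolSize(40);  // 2 listeners x 20 max consumers each
    taskExecutor.setQueueCapacity(0);
    return taskExecutor;
}

public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(SQSConnectionFactory sqsConnectionFactory, ThreadPoolTaskExecutor asyncConsumerTaskExecutor) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(sqsConnectionFactory);
    // per-listener max (20) x number of listeners (2) == pool max (40)
    factory.setConcurrency("10-20");
    factory.setTaskExecutor(asyncConsumerTaskExecutor);
    return factory;
}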
Hopefully this might help someone else with a similar issue in the future....

What is the difference between a managed and an unmanaged HConnection in HBase?

When I tried to create an HTable instance this way:
Configuration conf = HBaseConfiguration.create();
HConnection conn = HConnectionManager.getConnection(conf);
conn.getTable("TABLE_NAME");
Then I got an exception, thrown from this method in the HBase client:
@Override
public HTableInterface getTable(TableName tableName, ExecutorService pool) throws IOException {
    if (managed) {
        throw new IOException("The connection has to be unmanaged.");
    }
    return new HTable(tableName, this, pool);
}
So I want to know: what, concretely, do managed and 'unmanaged' mean for an HConnection?
Before calling HConnectionManager.getConnection you have to create a connection using HConnectionManager.createConnection, passing it the HBaseConfiguration instance you created earlier. HConnectionManager.getConnection returns a connection that already exists. A bit of the HConnectionManager javadoc about how it handles the connection pool:
This class has a static Map of HConnection instances keyed by Configuration; all invocations of getConnection(Configuration) that pass the same Configuration instance will be returned the same HConnection instance
In your case, you can simply create a connection using HConnectionManager.createConnection and use the returned connection to open an HTable, for example:
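(A minimal sketch using the pre-1.0 API from the question; error handling omitted:)
Configuration conf = HBaseConfiguration.create();
HConnection conn = HConnectionManager.createConnection(conf);
HTableInterface table = conn.getTable("TABLE_NAME");
try {
    // ... use the table ...
} finally {
    table.close();
    conn.close(); // the caller owns the (unmanaged) connection's lifecycle
}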
Edit:
@ifiddddddbest, I found the javadocs for HConnectionImplementation, which describe the managed flag (maybe it will help you understand):
@param managed If true, does not do full shutdown on close; i.e.
cleanup of connection to zk and shutdown of all services; we just
close down the resources this connection was responsible for and
decrement usage counters. It is up to the caller to do the full
cleanup. It is set when we want have connection sharing going on --
reuse of zk connection, and cached region locations, established
regionserver connections, etc. When connections are shared, we have
reference counting going on and will only do full cleanup when no more
users of an HConnectionImplementation instance.
In newer versions of HBase (>1.0), the managed flag has disappeared and all connection management is now on the client side; the client is responsible for closing the connection, and when it does so, it closes all internal connections to ZK, the HBase master, etc., not just decrementing a reference counter. For example:
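(A minimal sketch of the HBase 1.0+ style, where the client explicitly owns the Connection:)
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Table;

Configuration conf = HBaseConfiguration.create();
Connection connection = ConnectionFactory.createConnection(conf);
Table table = connection.getTable(TableName.valueOf("TABLE_NAME"));
try {
    // ... use the table ...
} finally {
    table.close();
    connection.close(); // full cleanup: ZK, master and regionserver resources
}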

How can I implement a fixed socket count for proxy->server connections?

I read the netty proxy example (https://github.com/netty/netty/tree/master/example/src/main/java/io/netty/example/proxy) and I have two requirements:
1. I want to use a fixed count of connections for proxy->server. In the proxy example, the proxy->server connection count equals the client->proxy connection count, which may be too many.
2. When a client->proxy connection ends, the proxy->server connection has to be kept alive, and when a new client->proxy connection is established, the existing proxy->server connections are reused.
How can this be implemented?
The first requirement can be realized rather easily by using a DefaultChannelGroup to store your channels. Assuming that the ChannelHandler which accepts incoming connections is a singleton, you can use the following code:
// initialize the channel group in your singleton handler
ChannelGroup ALL_CONNECTIONS = new DefaultChannelGroup(GlobalEventExecutor.INSTANCE);
...

@Override
public synchronized void channelActive(ChannelHandlerContext ctx) throws Exception
{
    if (ALL_CONNECTIONS.size() > 100) {
        ctx.channel().close(); // don't accept further connections
    } else {
        ALL_CONNECTIONS.add(ctx.channel());
        // do whatever logic.
    }
}
For the second requirement, I think you are describing "connection pooling". If so, I don't think it's a great idea: when a new client "connects" to your server, it is always a new connection, since it is coming from outside of your network. However, I am not sure of this, and someone with more knowledge may be able to answer.
What you need for both, I think, is a client with a connection pool.
Both HttpComponents and AsyncHttpClient support pooling; you could have a look at the code in AsyncHttpClient, which also has a netty-based implementation.
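Netty itself also ships a channel-pool abstraction (io.netty.channel.pool) in newer 4.0.x releases that covers both requirements: FixedChannelPool caps the number of backend connections, and released channels stay open for reuse. A rough sketch, assuming serverBootstrap is a Bootstrap already configured with the backend address, and BackendHandler is a placeholder for your own proxy->server handler:
final FixedChannelPool pool = new FixedChannelPool(serverBootstrap,
        new AbstractChannelPoolHandler() {
            @Override
            public void channelCreated(Channel ch) {
                // set up the proxy->server pipeline once per pooled channel
                ch.pipeline().addLast(new BackendHandler());
            }
        }, 20); // never more than 20 proxy->server connections

// When a client->proxy connection needs a backend channel:
pool.acquire().addListener(new FutureListener<Channel>() {
    @Override
    public void operationComplete(Future<Channel> future) {
        if (future.isSuccess()) {
            Channel backend = future.getNow();
            // ... forward the client's data over 'backend' ...
            // When the client connection closes, return the channel
            // to the pool instead of closing it:
            pool.release(backend);
        }
    }
});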

Behaviour of Callable and Prepared Statements in an app server

CallableStatement and PreparedStatement objects are precompiled. Is that done with respect to a connection? I mean, let's assume there are some 100 connection objects residing in the connection pool of an app server, and a class that uses Callable and Prepared Statements. Let's say the method used for that is:
public void invokePreparedAndCallableStatements() {
    // Fetches connection from pool
    Connection con = getConnectionFromPool();
    CallableStatement cs = con.prepareCall(.....);
    cs.register...(...);
    cs.execute();
    ...
    ...
    PreparedStatement st = con.prepareStatement(...);
    st.setXXX(..);
    st.executeUpdate();
    ...
}
Now when the method is called for the first time, a connection is fetched from the pool, the request is processed, and the Callable and Prepared Statements are compiled. When the method is called another 99 times, and each time a different connection is fetched from the pool, will the statements be compiled again for each connection?
What is the most efficient way to use statements in this context? I can't make them (con.prepareCall() or con.prepareStatement()) static, because the connection isn't static.
The code is actually compiled and stored in the shared pool of the database. Any number of connections using that same code will benefit from the cache. The compiled code is kept as long as the memory limits allow.
The statements will be precompiled. Pooling will be based on your specified parameters.
Note: If you are using JDBC 3.0, you can also pool your PreparedStatements. Reference: What's new in JDBC 3.0
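As a rule of thumb, then (a hedged sketch; the dataSource field and SQL string are hypothetical): don't try to hold statements statically. Prepare them per invocation with a constant SQL string, so the driver/pool statement cache and the database's shared pool can both key on the same text:
private static final String UPDATE_SQL = "UPDATE accounts SET balance = ? WHERE id = ?";

public void updateBalance(double balance, long id) throws SQLException {
    // 'dataSource' is a hypothetical pooled javax.sql.DataSource
    try (Connection con = dataSource.getConnection();
         PreparedStatement st = con.prepareStatement(UPDATE_SQL)) {
        st.setDouble(1, balance);
        st.setLong(2, id);
        st.executeUpdate();
    } // close() returns the statement/connection to their caches rather than discarding them
}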

Is there any way to have the JBoss connection pool reconnect to Oracle when connections go bad?

We have our JBoss and Oracle on separate servers. The connections seem to be getting dropped, which is causing issues with JBoss. How can I have JBoss reconnect to Oracle when a connection goes bad, while we figure out why the connections are being dropped in the first place?
Whilst you can use the old "select 1 from dual" trick, the downside with this is that it issues an extra query each and every time you borrow a connection from the pool. For high volumes, this is wasteful.
JBoss provides a special connection validator which should be used for Oracle:
<valid-connection-checker-class-name>
org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker
</valid-connection-checker-class-name>
This makes use of the proprietary ping() method on the Oracle JDBC Connection class, and uses the driver's underlying networking code to determine if the connection is still alive.
However, it's still wasteful to run this each and every time a connection is borrowed, so you may want to use the facility where a background thread checks the connections in the pool, and silently discards the dead ones. This is much more efficient, but means that if the connections do go dead, any attempt to use them before the background thread runs its check will fail.
See the wiki docs for how to configure the background checking (look for background-validation-millis).
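For example, a datasource fragment along these lines (a sketch; the element name varies between JBoss versions, e.g. background-validation-millis vs background-validation-minutes):
<datasources>
  <local-tx-datasource>
    <jndi-name>OracleDS</jndi-name>
    <!-- ... connection-url, driver-class, credentials ... -->
    <valid-connection-checker-class-name>
      org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker
    </valid-connection-checker-class-name>
    <background-validation>true</background-validation>
    <background-validation-millis>60000</background-validation-millis>
  </local-tx-datasource>
</datasources>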
There is usually a configuration option on the pool to enable a validation query to be executed on borrow. If the validation query executes successfully, the pool will return that connection. If the query does not execute successfully, the pool will create a new connection.
The JBoss Wiki documents the various attributes of the pool.
<check-valid-connection-sql>select 1 from dual</check-valid-connection-sql>
Seems like it should do the trick.
Not enough rep for a comment, so it's in the form of an answer. The 'select 1 from dual' and skaffman's org.jboss.resource.adapter.jdbc.vendor.OracleValidConnectionChecker approaches are equivalent, although the connection checker does provide a level of abstraction. We had to decompile the Oracle JDBC drivers for a troubleshooting exercise, and Oracle's internal implementation of the ping is to perform a 'select 'x' from dual'. Natch.
JBoss provides two ways to validate a connection:
- ping based, and
- query based
You can use either, as per your requirements. The check is scheduled on a separate thread, per the duration defined in the datasource configuration file:
<background-validation>true</background-validation>
<background-validation-minutes>1</background-validation-minutes>
Sometimes, if you don't have the right Oracle driver in JBoss, you may get a ClassCastException or a related error, and connections may start dropping out of the connection pool. You can try creating your own connection validator class by implementing the org.jboss.resource.adapter.jdbc.ValidConnectionChecker interface. This interface provides a single method, isValidConnection(), which is expected to return null for a valid connection.
Ex:
import java.io.Serializable;
import java.lang.reflect.Method;
import java.sql.Connection;
import java.sql.SQLException;
import org.jboss.logging.Logger;
import org.jboss.resource.adapter.jdbc.ValidConnectionChecker;

public class OracleValidConnectionChecker implements ValidConnectionChecker, Serializable {
    private static final Logger log = Logger.getLogger(OracleValidConnectionChecker.class);
    // pingDatabase(int) is resolved reflectively, so we don't compile against the Oracle driver
    private final Method ping;
    // The timeout (apparently the timeout is ignored?)
    private static final Object[] params = new Object[] { Integer.valueOf(5000) };

    public OracleValidConnectionChecker() {
        try {
            Class<?> oracleConnection = Class.forName("oracle.jdbc.driver.OracleConnection");
            ping = oracleConnection.getMethod("pingDatabase", new Class[] { Integer.TYPE });
        } catch (Exception e) {
            throw new RuntimeException("Unable to resolve pingDatabase method", e);
        }
    }

    public SQLException isValidConnection(Connection c) {
        try {
            Integer status = (Integer) ping.invoke(c, params);
            if (status.intValue() < 0) {
                return new SQLException("pingDatabase failed status=" + status);
            }
        } catch (Exception e) {
            log.warn("Unexpected error in pingDatabase", e);
        }
        // returning null signals a valid connection
        return null;
    }
}
A little update to @skaffman's answer: in JBoss 7 you have to use the "class-name" attribute when setting the valid connection checker, and the package is also different:
<valid-connection-checker class-name="org.jboss.jca.adapters.jdbc.extensions.oracle.OracleValidConnectionChecker" />
We've recently had some floating request-handling failures caused by orphaned Oracle DBMS_LOCK session locks that were retained indefinitely in the client-side connection pool.
So here is a solution that forces session expiry after 30 minutes but doesn't affect the application's operation:
<check-valid-connection-sql>select case when 30/60/24 > sysdate-LOGON_TIME then 1 else 1/0 end
from V$SESSION where AUDSID = userenv('SESSIONID')</check-valid-connection-sql>
This may involve some slowdown in the process of obtaining connections from the pool. Make sure to test it under load.
