I have multiple thread groups pushing messages to ActiveMQ using the JMeter JMS Publisher sampler. The sampler is configured with an ActiveMQ failover URL. When I start JMeter, it pushes messages to both ActiveMQ brokers irrespective of the failover configuration.
The sampler uses the ActiveMQ JNDI initial context factory (org.apache.activemq.jndi.ActiveMQInitialContextFactory).
The Provider URL: failover:(tcp://host1:61616,tcp://host2:61616)
The connection factory is simply the default one provided by ActiveMQ: ConnectionFactory.
The destination is the name of the JMS queue where we want to produce the message, prefixed with dynamicQueues: dynamicQueues/MyQueue.
As per Failover Transport documentation:
Using Randomize
The Failover transport chooses a URI at random by default. This effectively load-balances clients over multiple brokers. However, to have a client connect to a primary first and only connect to a secondary backup broker when the primary is unavailable, set randomize=false.
So my expectation is that when you run your test with multiple threads (virtual users), each thread sends messages to a randomly chosen broker.
If you want to target the first broker and use the second one only when the first one fails, consider appending the ?randomize=false parameter to your failover URL like:
failover:(tcp://host1:61616,tcp://host2:61616)?randomize=false
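Put together, the JMS Publisher configuration would look something like the following, expressed as the equivalent jndi.properties (host names and queue name are the placeholders from above):

```properties
# Equivalent JNDI settings for the JMeter JMS Publisher sampler (a sketch)
java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url = failover:(tcp://host1:61616,tcp://host2:61616)?randomize=false
connectionFactoryNames = ConnectionFactory
# dynamicQueues/MyQueue can be used directly as the Destination; no extra mapping is needed
```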
More information just in case: Building a JMS Testing Plan - Apache JMeter
Related
We have been doing performance testing of an application that uses IBM MQ. Through JMeter we are injecting the payload via a JMS Publisher. However, when running the test we can observe that the connections from the JMeter threads are not being released. This affects the ability to reach the target throughput and causes test failures due to the accumulation of connections. Is there a better alternative to the JMS Publisher? Or is there a setting that needs to be enabled in order to release the connection once the request has been sent?
https://www.blazemeter.com/blog/ibm-mq-tutorial - Is this the best practice to implement testing IBM MQ?
What are you trying to achieve? JMeter implements the Object Pool pattern: each JMeter thread (virtual user) creates its own JMS connection, and on subsequent iterations of the JMS Publisher sampler the Publisher object is returned from the pool rather than created from scratch.
If this is not something you want (or not how your JMS application acts) and you would like to close the connection after posting a message to queue/topic you can achieve it quite easily using JSR223 PostProcessor and the following Groovy code:
def publisher = sampler.publisher // the Publisher instance the sampler has just used
org.apache.jmeter.protocol.jms.client.ClientPool.removeClient(publisher) // remove it from JMeter's object pool
org.apache.commons.io.IOUtils.closeQuietly(publisher) // close the underlying JMS connection, swallowing any IOException
sampler.publisher = null // force the sampler to create a fresh Publisher on the next iteration
I'm looking for the easiest way to build a Wildfly cluster with JMS load balancing for a development platform. Messages will be produced by the Wildfly servers themselves.
I wonder how the ActiveMQ Artemis JMS server embedded in WildFly works in a cluster deployment. I see on this site that a WildFly node can declare its JMS server as master or slave.
I also read here that a MDB can use an "in-vm-connector" connector.
I'm not sure that I understand how a JMS cluster works with a master and a slave JMS server with "in-vm-connector". Will the MDB instances in the Wildfly node with the slave JMS server receive messages? Will the JMS cluster provide load balancing or will there be only one active JMS server at the same time?
In ActiveMQ Artemis (i.e. the JMS broker embedded into WildFly) clustering (which provides things like message load balancing) and high-availability (which provides redundancy for the integrity of the message data) are separate concepts. The master/slave configuration you mentioned is for high-availability. This configuration doesn't provide message load balancing since only one of the brokers is alive at any given point in time.
If you want to configure a master/slave pair, it's recommended that you separate those servers from the servers that actually process the messages: it doesn't make sense to have MDBs running on a server which doesn't have a live broker (i.e. a slave), because they won't receive any messages.
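For reference, in the WildFly messaging-activemq subsystem the master/slave role is declared through the server's ha-policy. A minimal shared-store sketch, assuming the default server name (check the exact element names against your WildFly version's schema):

```xml
<!-- standalone-full-ha.xml, messaging-activemq subsystem (a sketch) -->
<server name="default">
    <!-- on the master node -->
    <shared-store-master failover-on-server-shutdown="true"/>
    <!-- on the slave node use <shared-store-slave/> instead -->
</server>
```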
Our JMS Listener application connects to an ActiveMQ network of brokers through a load balancer, which we are told distributes connections amongst brokers in a round-robin fashion. Our spring boot application is creating a connection via the load balancer, which in turn feeds the connection to one of the brokers amongst the network of brokers. If a message is published to the brokers then it would be a lot quicker if the message was on the broker that the JMS listener connection lived on. However, the likelihood of that occurring is slim unless we can distribute the connections across the brokers.
I've tried increasing the concurrency in the DefaultJmsListenerContainerFactory, but that didn't do the trick. I was thinking about somehow extending AbstractJmsListenerContainerFactory to create a Map of DefaultMessageListenerContainer instances, but it looks like createListenerContainer will only return an instance of whatever the AbstractJmsListenerContainerFactory is parameterized with, and it cannot be parameterized with a Map.
We are using Spring Boot 1.5.14.RELEASE.
== UPDATE ==
I've been playing around with the classes above, and it seems to be inherent in Spring JMS that a JMS listener is associated with a single message listener container, which in turn is associated with a single (potentially shared) connection.
For any folks that have JMS Application Listeners that are connecting to a load balanced network of brokers, are you creating a single connection that is connecting to a single broker, and if so, do you experience significant performance degradation as a result of the network of brokers having to move any inbound messages to a broker with consumers?
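One workaround to sketch here (untested against Spring Boot 1.5.14, bean and queue names hypothetical) is to bypass the listener factory and declare several DefaultMessageListenerContainer beans, each with its own connection factory, so that each container opens its own physical connection through the load balancer and the balancer can spread them across brokers:

```java
// Hypothetical sketch; imports from spring-jms and activemq-client omitted
@Configuration
public class MultiConnectionJmsConfig {

    @Bean
    public DefaultMessageListenerContainer containerOne(MessageListener myListener) {
        return buildContainer(myListener);
    }

    @Bean
    public DefaultMessageListenerContainer containerTwo(MessageListener myListener) {
        return buildContainer(myListener);
    }

    private DefaultMessageListenerContainer buildContainer(MessageListener listener) {
        // A dedicated connection factory per container means a separate physical
        // connection, which the load balancer can route to a different broker
        ActiveMQConnectionFactory target =
                new ActiveMQConnectionFactory("tcp://balancer-host:61616"); // placeholder
        CachingConnectionFactory cf = new CachingConnectionFactory(target);

        DefaultMessageListenerContainer container = new DefaultMessageListenerContainer();
        container.setConnectionFactory(cf);
        container.setDestinationName("my.queue"); // hypothetical queue name
        container.setMessageListener(listener);
        return container;
    }
}
```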
We have an application running in a GlassFish 3.1.2.2 cluster (two instances) that writes its results to "the_output_queue".
GlassFish sets up Message Queue as an embedded broker cluster, which in turn has also two message broker instances corresponding directly to the two GlassFish instances.
Now I would like to consume results from the_output_queue with an external JMS client (think Android app).
I assumed that a broker cluster can somehow be accessed transparently by a JMS client, but I cannot get this to work. I only succeed in connecting a JMS client to one individual broker.
If I have one JMS client running, connected to one broker I get only half of the messages. The physical queue (the_output_queue) defined in the GlassFish Administration Console exists in both brokers and the messages get evenly distributed thanks to load balancing.
This text from the Oracle manuals sounds to me like every message should be available in all broker instances of the cluster, i.e. if only a single JMS consumer is running it should receive all messages irrespective of the broker instance it is connected to.
"The home broker is responsible for routing and delivering the messages to all consumers of the destination, whether these consumers are local (connected to the home broker) or remote (connected to other brokers in the cluster)."
Have I misunderstood this completely?
Can a JMS client access an Oracle Message Queue broker cluster transparently?
How would the connection string look?
Is there some "global cluster target" (instead of an individual broker) to which the JMS client can connect? Where could I find the connection details for the cluster?
Is there something special in the GlassFish setup I have to verify? The settings currently are (default setup created by jelastic.com, looks good to me):
JMS Availability:
JMS Service Type: Embedded
JMS Cluster Type: Conventional
JMS Configuration Store Type: Master Broker
JMS Message Store Type: File
GMS is enabled
Answer to the main question: Yes, a JMS client can connect to any instance of a cluster and GlassFish will replicate the messages. I have tested it on my PC.
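As for the connection details: the Open Message Queue client lets you list all brokers of the cluster on the connection factory, so the client connects to (and reconnects across) any of them. A sketch of the relevant connection factory properties, with placeholder host names (the default broker port is 7676; check your instances' actual ports):

```properties
# OpenMQ connection factory properties (a sketch)
imqAddressList = mq://glassfish1:7676,mq://glassfish2:7676
imqAddressListBehavior = RANDOM
imqReconnectEnabled = true
```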
The problem in Jelastic is discussed in this posting.
I'm using JMeter with the JMS Point-to-Point sampler, and I have an MQ queue with listener port 2222 (instead of the standard 1414). JMeter retrieves the queue connection details from IBM WebSphere Application Server via the Provider URL and the connection factory class com.ibm.websphere.naming.WsnInitialContextFactory.
My question is: how do I define that port number in the JMeter plugin? In the JMeter logs I see messages about a connection being attempted to 1414.
Try going to:
Add the JMS Point-to-Point sampler (Add --> Sampler --> JMS Point-to-Point), then select the JMS Point-to-Point sampler element in the tree,
and then set the property:
Provider URL: tcp://$HOSTNAME:$PORT
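Filled in for the setup described above, the sampler fields might look like this (the JNDI names are hypothetical; 2222 is the non-standard listener port from the question):

```properties
# JMS Point-to-Point sampler fields (a sketch)
Initial Context Factory:  com.ibm.websphere.naming.WsnInitialContextFactory
Provider URL:             tcp://mq-host:2222
QueueConnection Factory:  jms/MyConnectionFactory
JNDI name Request queue:  jms/MyRequestQueue
```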