ActiveMQ JMS queues get very full during long performance runs

I'm running ActiveMQ 5.16.3 and testing a Java-based order management application that runs across several JVMs. Each JVM does work to push orders through a status pipeline: for example, one JVM creates orders, one schedules orders, and so forth. JDBC is used to store data, and JMS is used to pass work between the JVMs. One JVM will read work from the database and put up to 5,000 messages into a JMS queue for another JVM to pick up and do its own work. I am running just one ActiveMQ server with many JMS queues. I have not changed how ActiveMQ stores messages, so the default (which should be KahaDB) is in use. JDBC is used only by the Java application.
We support several JMS vendors other than ActiveMQ (such as WebLogic JMS and IBM MQ). However, only with ActiveMQ am I finding that, for longer-running or high-volume tests, the JMS queues start to back up and sometimes hold hundreds of thousands of messages. There is nowhere near that much work in the system, so something else is going on. Via JMX I've confirmed that the ActiveMQ console is showing the numbers correctly. This behavior seems random in that it's not all JVMs doing this (though all JVMs conceptually do the same thing), and if I stop the work generators (so that the JVMs only process what is already in the queue), the queues usually empty out quickly.
For example, if I want to make 10k orders, 10k messages would go into the first JMS queue for the create-order JVM to process. This JVM would then update the JDBC database to create the orders and insert records that the next JVM (in this case, schedule order) would pick up. The schedule JVM reads from the database and, based on what work it sees, puts messages into the next JMS queue for it to process (only up to 5k, then it waits for the queue to empty, then fetches 5k more). What I am seeing is that the schedule-order JMS queue fills up far past 10k messages.
My studies have led me to the possibility of uncommitted reads and concurrency issues, but I've hit a dead end. Does anyone have any thoughts?

Related

ActiveMQ non-persistent delivery mode limitations?

I am using ActiveMQ and have the following requirements:
Very fast consumers, as my producers are already very fast
Processing of at least 2,000 messages per second
No need to process/consume messages again after a server crash or other failure; I can trigger the whole process again
Needs to run on a modestly configured server (4 GiB RAM)
I have configured ActiveMQ as given below
Using non-persistent delivery mode (vm://localhost)(http://activemq.apache.org/what-is-the-difference-between-persistent-and-non-persistent-delivery.html)
Using spring integration for put/fetch messages in/from queue/channel.
Using max-concurrent-consumers with 10 threads
Assume all other configs are the defaults for ActiveMQ and Spring Integration.
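For reference, the non-persistent setup described above might look roughly like this in Spring XML. This is a sketch only: the bean ids and broker URL are assumptions, while `explicitQosEnabled` and `deliveryPersistent` are the standard `JmsTemplate` properties for switching to NON_PERSISTENT delivery.

```xml
<!-- Sketch: embedded broker with persistence disabled; ids are illustrative -->
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="vm://localhost?broker.persistent=false"/>
</bean>

<bean id="jmsTemplate" class="org.springframework.jms.core.JmsTemplate">
    <property name="connectionFactory" ref="connectionFactory"/>
    <!-- explicitQosEnabled must be true for deliveryPersistent to take effect -->
    <property name="explicitQosEnabled" value="true"/>
    <property name="deliveryPersistent" value="false"/>
</bean>
```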
Problems/Questions
I am not sure how ActiveMQ stores messages in non-persistent delivery mode. Is it possible that my process will fail with out-of-memory errors once my queue size exceeds some limit? I am asking because it's very difficult for me to test the whole process, so I need to be aware of the limitations before I trigger it.
If non-persistent delivery mode is not sufficient for my requirements above, are there any performance-tuning tips with which I can achieve them with persistent delivery mode (tcp://)? I have already tested this mode, but the consumers seem very slow there. I have also tried DUPS_OK_ACKNOWLEDGE to speed up my consumers with persistent delivery mode, but no luck.
NOTE: I am using the latest ActiveMQ version, 5.14.
I am not sure how ActiveMQ stores messages in case of non-persistent delivery mode
ActiveMQ stores messages in memory first, and it will also swap them to disk (there is a tmp_storage folder in ActiveMQ's data path).
is it possible that my process will fail with out of memory errors once my queue size exceed some limit
I have never hit an out-of-memory error in ActiveMQ, even with about one million messages.
You can also protect yourself with producer flow control (http://activemq.apache.org/producer-flow-control.html), which makes the producer block when there are too many unconsumed messages.
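Producer flow control is configured per destination in activemq.xml; a minimal sketch (the memory limit value is just an example):

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
  <destinationPolicy>
    <policyMap>
      <policyEntries>
        <!-- ">" matches all queues; producers block once the limit is reached -->
        <policyEntry queue=">" producerFlowControl="true" memoryLimit="64mb"/>
      </policyEntries>
    </policyMap>
  </destinationPolicy>
</broker>
```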
As for the performance of persistent delivery, I don't have any good methods either.

JMS messages being consumed from single server

I have a clustered WebLogic environment with 2 servers.
The source drops JMS messages in the queues of both the servers.
My service, however, is designed to consume these messages only at a particular time of day, when it is activated by a "trigger.txt" file. The file is picked up by a file adapter, which then activates the BPEL process to start consuming JMS messages.
However, the problem is that if the server 1 adapter picks up the trigger.txt file, then JMS messages from only the server 1 queue are consumed; messages on the other server are left untouched, and vice versa.
I want the messages to be consumed by both the servers.
Is there any solution to this?
This isn't a WLS JMS issue.
So the solution will lie within your BPEL implementation and your solution of leaving the trigger.txt file behind.
I am assuming you are removing trigger.txt once it's picked up by a BPEL instance.
You will have to change this logic to, say, include a timestamp in the file name (something like trigger-<timestamp>.txt), so that each BPEL instance marks internally that it has picked up this particular file and does not process it again.
Or create 2 files, one for each server, but this will be messy if you add, say, an extra server later on.
The other option is for WLS to redirect the JMS messages to the server that has an active consumer; this would, however, affect your ability to process the JMS messages on both servers in parallel.
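The timestamp approach can be sketched in plain Java: instead of deleting trigger.txt, name each trigger with a timestamp and have every server remember which trigger files it has already handled. The class and file names here are hypothetical illustrations, not BPEL or file-adapter APIs.

```java
import java.nio.file.Path;
import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch: each server instance keeps its own record of processed
// trigger files, so both servers can react to the same trigger without one
// deleting it out from under the other.
class TriggerTracker {
    private final Set<String> processed = new HashSet<>();

    // Returns true the first time this instance sees the given trigger file,
    // e.g. trigger-20240101T0600.txt. A later trigger with a new timestamp is
    // picked up again, while a re-scan of the same file is ignored.
    boolean shouldProcess(Path triggerFile) {
        return processed.add(triggerFile.getFileName().toString());
    }
}
```

In production you would persist the processed set (or prune old entries), but the idea is the same: dedupe per server instance rather than delete the shared trigger.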

Is ATG DMS a true JMS implementation?

It looks like ATG's implementation of JMS is achieved through a scheduler, whereby it polls the database at a specific interval through the SqlJmsProvider component. I do agree that ATG's DMS provides all the JMS features like queues, topics, durable subscribers, retry, etc., but isn't using a scheduler to poll the DB to send an ATG order to fulfillment overkill? (Too many queries fired.)
From the ATG Documentation it explains that there are two JMS providers available within ATG:
SQL JMS
Local JMS
The difference between the two are:
Local JMS is synchronous and extremely fast. It runs within a single transaction and is bound to a single process, so when sending a message it blocks while waiting for an acknowledgement. SQL JMS, on the other hand, is asynchronous and can be used across processes (thus an order submission on Commerce can be processed on Fulfillment). SQL JMS is non-blocking, so once the message is put on the queue, the requesting process can continue. This also means that Commerce can keep running even if Fulfillment is down. Messages are also persisted in SQL JMS, whereas Local JMS messages are stateless and lost during a restart.
Using the scheduler to poll the queue is an acceptable solution, and most of the older asynchronous message queues implemented it this way. In IBM MQ version 7, performance was improved by reducing the amount of polling, while in IBM WAS version 6, for example, the solution is also based on regular polling of the queue.
So no, polling the database on a scheduled interval is not overkill.
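The pattern the scheduler implements can be sketched in a few lines of plain Java. This illustrates scheduled polling in general, not ATG's actual SqlJmsProvider code; all names and the in-memory stand-ins for the database query are hypothetical.

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;
import java.util.function.Supplier;

// Illustrative sketch of a scheduled database poller: every interval, fetch
// pending rows (a stand-in for "SELECT ... WHERE state = 'PENDING'") and hand
// each one to a dispatcher. Not ATG code.
class SqlQueuePoller {
    private final Supplier<List<String>> fetchPending; // stand-in for the DB query
    private final Consumer<String> dispatch;           // stand-in for message delivery
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    SqlQueuePoller(Supplier<List<String>> fetchPending, Consumer<String> dispatch) {
        this.fetchPending = fetchPending;
        this.dispatch = dispatch;
    }

    // One polling pass; returns how many messages were dispatched.
    int pollOnce() {
        List<String> pending = fetchPending.get();
        pending.forEach(dispatch);
        return pending.size();
    }

    void start(long intervalMillis) {
        scheduler.scheduleWithFixedDelay(this::pollOnce, 0, intervalMillis, TimeUnit.MILLISECONDS);
    }

    void stop() {
        scheduler.shutdown();
    }
}
```

The cost of the approach is exactly what the question notes: one query per interval per queue, traded off against delivery latency via the interval length.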

ActiveMQ with slow consumer skips 200 messages

I'm using ActiveMQ along with Mule (a kind of ESB based on Spring).
We have a fast producer and a slow consumer.
It's a synchronous configuration with only one consumer.
Here is the consumer's configuration in Spring style: http://pastebin.com/vweVd1pi
The biggest requirement is to keep the order of the messages.
However, after hours of running this code, ActiveMQ suddenly skips 200 messages and sends the next ones. The 200 messages are still there in ActiveMQ; they are not lost.
But our client (Mule) has some custom code that checks the order of the messages using a unique identifier.
I had this issue a few months ago. We changed the consumer to use the parameter "jms.prefetchPolicy.queuePrefetch=1". It seemed to work well and to be the fix we needed, until now, when the issue reappeared on another consumer.
Is it a bug, or a configuration issue?
I can't talk about the requirement from a Mule perspective, but there are a couple of broker features that you should take a look at. There are two ways to guarantee message ordering in ActiveMQ:
Message groups are a way of ensuring that a set of related messages will be consumed by the same consumer in the order that they are placed on a queue. To use it you need to specify a JMSXGroupID header on related messages, and assign them an incrementing JMSXGroupSeq number. If a consumer dies, remaining messages from that group will be sent to another single consumer, while still preserving order.
Total message ordering applies to all messages on a topic. It is configured on the broker on a per-destination basis and requires no particular changes to client code. It comes with a synchronisation overhead.
Both features allow you to scale out to more than one consumer.
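Total ordering, for example, is enabled per destination in activemq.xml with the strict-order dispatch policy. A sketch (the topic pattern ">" matching all topics is an assumption; adjust to your destinations):

```xml
<destinationPolicy>
  <policyMap>
    <policyEntries>
      <policyEntry topic=">">
        <dispatchPolicy>
          <!-- all subscribers see messages in the same order -->
          <strictOrderDispatchPolicy/>
        </dispatchPolicy>
      </policyEntry>
    </policyEntries>
  </policyMap>
</destinationPolicy>
```

Message groups, by contrast, need no broker configuration: the producer just sets the `JMSXGroupID` string property (and optionally `JMSXGroupSeq`) on each related message.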

Spring Integration JMS Outbound adapter transaction control

In order to achieve high-performance production of messages with JMS with transactions enabled, one needs to control the number of messages sent in each transaction; the larger the number, the higher the performance.
Is it possible to control transactions this way using Spring Integration?
One might suggest using an aggregator, but that defeats the purpose: I don't want one message containing X smaller messages on the queue, but actually X messages on my queue.
Thanks!
I'm not aware of your setup, but I'd bump up the concurrent consumers on the source rather than try to tweak the outbound adapter. What kind of data source is pumping in this volume of data? From my experience, the producer usually lags behind the consumer, unless both are JMS/messaging resources, as in the case of a bridge. In that case you will mostly see a significant improvement by bumping up the concurrent consumers, because you are dedicating n threads to receive messages and process them in parallel, and each thread runs in its own transaction environment.
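Bumping up concurrent consumers with transacted sessions can be sketched like this in Spring XML (a sketch only; the bean ids, queue name, and listener reference are placeholders):

```xml
<bean id="listenerContainer"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory"/>
    <property name="destinationName" value="inboundQueue"/>
    <!-- n threads, each receiving and processing in its own local transaction -->
    <property name="concurrentConsumers" value="10"/>
    <property name="sessionTransacted" value="true"/>
    <property name="messageListener" ref="messageListener"/>
</bean>
```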
It's also worthwhile to note that JMS does not specify a transport mechanism; it's up to the broker to choose the transport. If you are using ActiveMQ, you can try experimenting with OpenWire vs. AMQP and see if you get the desired throughput.
