I've been working on this problem for a long time now and can't seem to find a final solution.
I have a message producer that should work as a broadcaster, posting messages on two different topics. The publisher's posting process follows this flow:
Creating connection to the factory and starting it.
Creating session
Creating Message Producer by using the session and a given topic Name
Sending n messages
Waiting n seconds
Closing producer, session, and connection
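Expressed as a Camel route like the consumers below, the broadcasting part could be sketched roughly as follows (route id, the direct: endpoint, and topic names are placeholders, not my actual code):

```xml
<!-- Hypothetical sketch: fan one incoming message out to both topics -->
<route id="broadcastProducerRoute">
    <from uri="direct:broadcast" />
    <multicast>
        <to uri="activemq:topic:topicOne" />
        <to uri="activemq:topic:topicTwo" />
    </multicast>
</route>
```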
Then I have 3 consumers subscribed to those topics using the following configuration (each consumer has its own clientId and durableSubscriptionName):
<route id="consumerOneRoute">
<from uri="activemq:topic:topicName?clientId=consumerOneId&amp;durableSubscriptionName=ConsumerOneName" />
<bean ref="consumerBean" method="processMessage" />
</route>
The problem is that my consumers don't always receive the messages, or at least not all of them. Sometimes two of the consumers get all the messages and the third gets none; other times a random consumer receives a random number of messages, and so on...
One more thing I noticed: if I stop the broker and start it again, the consumers receive the missing messages. I really can't understand why that doesn't happen during the broker's first lifetime.
Would anyone be so kind as to try to help me?
Thanks,
George.
P.S.: I was thinking about using virtual topics. However, since my main purpose is to have a broadcasting producer that allows other consumers to attach in the future, I don't want to have to modify the producer every time by adding another virtual branch to the main topic.
I had a similar issue: one producer sends messages via a topic to many consumers, and not all of them receive the messages. The problem was the consumer's timeout: I had manually created a timeout, and it was shorter than the time ActiveMQ needed to deliver the last messages. Extending the timeout helped.
Take a look at the Prefetch Policy. If you set it to 1, it may fix this for you:
...&consumer.prefetchSize=1
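Applied to the consumer endpoint from the question, that would look something like this (assuming the Camel ActiveMQ component, where destination options are passed with the destination. prefix; the other option values are taken from the question):

```xml
<from uri="activemq:topic:topicName?clientId=consumerOneId&amp;durableSubscriptionName=ConsumerOneName&amp;destination.consumer.prefetchSize=1" />
```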
The only solution that worked was switching from Topics with durable consumers to Virtual Topics.
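For anyone weighing that option: the routing for Virtual Topics is configured once on the broker side, and new consumers attach simply by consuming from a queue that follows the naming convention, so the producer never needs to change. A sketch of the interceptor in activemq.xml (these name/prefix values mirror the ActiveMQ defaults):

```xml
<broker xmlns="http://activemq.apache.org/schema/core">
    <destinationInterceptors>
        <virtualDestinationInterceptor>
            <virtualDestinations>
                <!-- Fan out anything sent to VirtualTopic.* into per-consumer queues -->
                <virtualTopic name="VirtualTopic.>" prefix="Consumer.*." />
            </virtualDestinations>
        </virtualDestinationInterceptor>
    </destinationInterceptors>
</broker>
```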
Your consumers must be connected to the broker when the producer sends the message, unless they use durable subscriptions.
Related
We're currently having problems with our ActiveMQ 5.16.1 which suddenly starts piling up messages without any apparent reason. The following image shows the ActiveMQ QueueSize:
ActiveMQ is used as a JMS message broker without any other components for e.g. high availability or load balancing. Several producers (in total, and in the worst case, around 20) produce small/simple JSON messages which are sent to the broker and consumed by a Java-based microservice. The microservice processes each message and saves the data to an Oracle database. Average processing time for one request is about 30 ms. Of those 20 producers only some are active at the same time, which might vary between 2 and 10 producers. Each producer sends a message every 3 seconds, resulting in 20 messages/min per producer. E.g., with 10 producers the broker will get 200 messages/min, or roughly 3.3 messages/sec. Preserving the order is crucial, so I'm working with JMSXGroupIds, which has worked well so far. Messages are sent via MQTT and routed (via Camel) to a JMS queue:
<route id="handleData">
<from uri="activemq://topic:some.topic.here?clientId=uniqueClientId" />
<setHeader headerName="tName">
<constant>ABC123</constant>
</setHeader>
<setHeader headerName="JMSXGroupId">
<jsonpath>$.producerId</jsonpath>
</setHeader>
<to uri="activemq://queue:myQueue" />
</route>
But for some reason the messages get stuck after a while, and I can't find any significant hint as to why that happens. There is nothing in the log files or the OS event log. I have to restart the ActiveMQ service in order to "reanimate" it. Afterwards all stuck messages are processed and everything works fine until the next "accident". This time it took about 10 days before the messages got stuck.
I already checked whether there might be a network- or database-related issue. I even moved ActiveMQ to a fresh new server in order to ensure that nothing else was influencing the ActiveMQ processes, but I couldn't find any hints either. I watched the JVM, heap space growth, memory usage, etc. — everything unremarkable.
Does anybody have an idea what I could check additionally to find out what the problem is?
Add the advisoryForSlowConsumers destinationPolicy setting for those topics and watch the topic://ActiveMQ.Advisory.. destinations for any enqueue counts that would indicate a slow consumer occurred.
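A sketch of that broker-side setting in activemq.xml (the topic wildcard here is an example; scope it to the affected destinations):

```xml
<destinationPolicy>
    <policyMap>
        <policyEntries>
            <!-- Emit advisory messages when a consumer is flagged as slow -->
            <policyEntry topic=">" advisoryForSlowConsumers="true" />
        </policyEntries>
    </policyMap>
</destinationPolicy>
```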
Your Camel route is almost identical to what a Virtual Topic does. You'll see better consistency with server-side routing in these types of scenarios, since there is no remote process (i.e. the Camel route) to manage connections, sessions, etc.
Bonus: the MQTT transport supports using Virtual Topics to back the subscriptions, so any MQTT topic consumers would automatically pull from the queue.
ref: https://activemq.apache.org/virtual-destinations
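Sketched against the route from the question, a Virtual Topic consumer would read from its own per-subscriber queue instead (the queue and topic names here are illustrative, and the header-setting steps are omitted):

```xml
<route id="handleData">
    <!-- This queue is fed automatically by messages sent to VirtualTopic.some.topic.here -->
    <from uri="activemq://queue:Consumer.handleData.VirtualTopic.some.topic.here" />
    <to uri="activemq://queue:myQueue" />
</route>
```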
I am using ActiveMQ v5.10.0 and having an issue almost every weekend where my ActiveMQ instance stops delivering persistent messages sent on queues to the consumers. I have not been able to figure out what could be causing this.
While the issue was happening I tried following things:
I added a new consumer on the affected queue, but it didn't receive any messages.
I restarted the original consumer, but it didn't receive any messages after the restart.
I purged the messages that were held on the queue, but then messages started accumulating again and the broker didn't deliver any of the new ones. When I purged, the expiry count didn't increase, and neither did the dequeue and dispatch counters.
I sent 100 non-persistent messages on the affected queue; surprisingly, the consumer received those messages.
I tried sending 100 persistent messages on that queue; it didn't deliver any of them, and all the messages were held by the broker.
I created a fresh new queue and sent 100 persistent messages; none of them was delivered to the consumer, whereas all the non-persistent messages were delivered.
The same things happen if I send persistent or non-persistent messages from STOMP producers. Surprisingly, all this happened only for queues; topic consumers were able to receive persistent as well as non-persistent messages.
I have already posted this on ActiveMQ user forum: http://activemq.2283324.n4.nabble.com/Broker-not-delivering-persistent-messages-to-consumer-on-queue-td4691245.html but no one from ActiveMQ has suggested anything.
The jstack output also isn't very helpful.
More details:
1. I am not using any selectors, message groups feature
2. I have disabled producer flow control in my setup
I would like some suggestions as to what configuration values might cause this issue: memory limits, message TTL, etc.
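One configuration area worth checking for exactly this symptom (persistent messages blocked while non-persistent ones flow) is the broker's store limit, since persistent sends stall when storeUsage is exhausted. A hedged sketch of the relevant activemq.xml section; the limit values are placeholders:

```xml
<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage limit="512 mb" />
        </memoryUsage>
        <storeUsage>
            <!-- Persistent messages block when this fills up -->
            <storeUsage limit="10 gb" />
        </storeUsage>
        <tempUsage>
            <!-- Non-persistent overflow spills to temp storage -->
            <tempUsage limit="5 gb" />
        </tempUsage>
    </systemUsage>
</systemUsage>
```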
I am still learning about ActiveMQ and JMS.
I have already tried some examples, and now I can produce and consume messages from a queue/topic.
Now I have a problem: when my client/consumer loses its connection, the broker still sends out the message, so the message becomes lost and is not kept in the queue/topic. So my question is: how can I keep that failed message, and how can I make the broker resend it?
thanks
You are mixing up terminology a bit.
Queues will hold messages until consumed or the broker is restarted, unless the message has been marked as persistent, in which case they will stick around even after a broker restart.
Topics only deliver the current message to any current subscribers. However, there are several methods you can use to persist messages published to a topic:
Durable subscribers.
Virtual Destinations.
Virtual Topics tend to be popular for many reasons over durable subscribers, but it really depends on the use-case.
How you create a durable subscriber depends on what you are using to create it (Spring, a plain POJO, some other API?). All methods will at some point call the Session.createDurableSubscriber method, but I suggest reading up on how durable subscribers behave before choosing them over Virtual Topics or Composite Queues.
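For example, with Spring a durable subscriber could be sketched like this (bean names, broker URL, and destination are placeholders; Session.createDurableSubscriber is called under the hood by the listener container):

```xml
<bean id="connectionFactory" class="org.apache.activemq.ActiveMQConnectionFactory">
    <property name="brokerURL" value="tcp://localhost:61616" />
    <!-- A durable subscription requires a stable client ID -->
    <property name="clientID" value="myClientId" />
</bean>

<bean id="listenerContainer" class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="connectionFactory" />
    <property name="destinationName" value="someTopic" />
    <property name="pubSubDomain" value="true" />
    <property name="subscriptionDurable" value="true" />
    <property name="durableSubscriptionName" value="mySubscription" />
    <property name="messageListener" ref="myListener" />
</bean>
```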
What you are looking for might be a durable subscription.
You can find documentation on it at http://activemq.apache.org/how-do-durable-queues-and-topics-work.html
I'm using ActiveMQ along with Mule (a kind of ESB based on Spring).
We got a fast producer and a slow consumer.
It's synchronous configuration with only one consumer.
Here the configuration of the consumer in spring style: http://pastebin.com/vweVd1pi
The biggest requirement is to keep the order of the messages.
However, after hours of running this code, ActiveMQ suddenly skips 200 messages and sends the next ones. The 200 messages are still there in ActiveMQ; they are not lost.
But our client (Mule) does have some custom code to check the order of the messages, using a unique identifier.
I had this issue already a few months ago. We changed the consumer to use the parameter "jms.prefetchPolicy.queuePrefetch=1". It seemed to have worked well and to be the fix we needed, until now, when the issue reappeared on another consumer.
Is it a bug, or a configuration issue?
I can't talk about the requirement from a Mule perspective, but there are a couple of broker features that you should take a look at. There are two ways to guarantee message ordering in ActiveMQ:
Message groups are a way of ensuring that a set of related messages will be consumed by the same consumer in the order that they are placed on a queue. To use it you need to specify a JMSXGroupID header on related messages, and assign them an incrementing JMSXGroupSeq number. If a consumer dies, remaining messages from that group will be sent to another single consumer, while still preserving order.
Total message ordering applies to all messages on a topic. It is configured on the broker on a per-destination basis and requires no particular changes to client code. It comes with a synchronisation overhead.
Both features allow you to scale out to more than one consumer.
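Total ordering, for instance, is enabled per destination in activemq.xml, roughly like this (the topic name is a placeholder):

```xml
<destinationPolicy>
    <policyMap>
        <policyEntries>
            <policyEntry topic="my.ordered.topic">
                <dispatchPolicy>
                    <!-- All subscribers see messages in the same total order -->
                    <strictOrderDispatchPolicy />
                </dispatchPolicy>
            </policyEntry>
        </policyEntries>
    </policyMap>
</destinationPolicy>
```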
I am relatively new to JMS and have encountered a weird problem implementing my first real application. I'm desperate for any help or advice.
Background: I use ActiveMQ (Java) as the message broker with non-transacted, non-persistent queues.
The Design: I have a straightforward producer/consumer system based around a single queue. A number of nodes (currently 2) place messages onto / consume from the queue. Selectors are used to filter which messages a node receives.
The Problem: The producer successfully places its items onto the queue (I have verified they are there using the web interface); however, the consumers remain blocked and do not read them. Only when I close the JMS connection in the producer do the consumers jump into life and consume the messages as expected.
This behavior seems very weird to me; surely you shouldn't have to completely hang up the producer connection for the consumers to be able to read from the queue. I must have made a mistake somewhere (possibly with sessions), but at the moment the number of things that could be wrong is too large, and I have no idea what would cause this behavior.
Any hints as to a solution, the cause of the problem or just how to continue debugging would be greatly appreciated.
Thanks for your time,
P.S. If you require any additional information, I am happy to provide it.
Hard to say without seeing the code, but it sounds like the producer is transacted. You should not have to close the producer for the consumers to receive a message, but a transacted producer won't send its messages until you call commit(). Another thing to check is that the connection has been started. Also, if you have many consumers, look at the prefetch setting to ensure that one consumer doesn't hog all the messages; setting the prefetch to 1 might be needed, but it's hard to say without further insight into your use case.