Apache ActiveMQ Artemis uses JMSXGroupId to implement 'sticky' consumer sessions: messages enqueued with the same JMSXGroupId are delivered to the same consumer, in FIFO order, on a single thread. This still allows multiple threads to process distinct JMSXGroupId groups concurrently, which is exactly what I want - see below:
16:46:42.451 [Thread-4] INFO Log - This is Message 30 In JMSXGroup: Group C | To Thread Thread-4
16:46:42.451 [Thread-3] INFO Log - This is Message 283 In JMSXGroup: Group B | To Thread Thread-3
16:46:42.451 [Thread-3] INFO Log - This is Message 284 In JMSXGroup: Group B | To Thread Thread-3
16:46:42.451 [Thread-4] INFO Log - This is Message 31 In JMSXGroup: Group C | To Thread Thread-4
16:46:42.452 [Thread-4] INFO Log - This is Message 32 In JMSXGroup: Group C | To Thread Thread-4
16:46:42.452 [Thread-3] INFO Log - This is Message 285 In JMSXGroup: Group B | To Thread Thread-3
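For illustration, here is a minimal, stdlib-only sketch of one way a broker could implement this pinning: hash the group id once and always dispatch to the same consumer index. This is purely illustrative and not how Artemis is implemented internally.

```java
import java.util.List;

// Illustrative sketch: pin each JMSXGroupId to one consumer by hashing
// the group id to a stable consumer index. Not Artemis internals.
class GroupDispatchSketch {

    // Stable, non-negative mapping from group id to consumer index.
    static int consumerFor(String jmsxGroupId, int consumerCount) {
        return Math.floorMod(jmsxGroupId.hashCode(), consumerCount);
    }

    public static void main(String[] args) {
        for (String group : List.of("Group B", "Group C", "Group B")) {
            // The same group always maps to the same consumer.
            System.out.println(group + " -> consumer " + consumerFor(group, 2));
        }
    }
}
```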
Oracle AQ and Amazon SQS do not exhibit the same 'sticky' consumer behaviour. I cannot find anything specific in the JMS specification other than that JMSXGroupId is used to group related messages together.
My expectation was that all JMS consumers would exhibit this 'sticky' behaviour when JMSXGroupId was set, but this does not appear to be the case.
Has anyone managed to achieve this behaviour with Oracle AQ / SQS solely by setting JMSXGroupId? Or is the intention of JMSXGroupId simply to let the consumer use a selector when dequeuing? That does not seem as though it would scale, since the group names would have to be discovered at runtime, whereas the ActiveMQ implementation clearly does scale.
The JMS specification only states that messages in the same group should be consumed in order. It doesn't say how this functionality should be implemented.
ActiveMQ Artemis implements message grouping by dispatching all the messages in the same group to a single consumer (i.e. what you call "sticky" consumers). However, other JMS providers are free to implement this functionality in other ways.
As noted previously, the only requirement is that messages in the same group are consumed in order. If you've tested this functionality on Oracle AQ and Amazon SQS and have evidence that the messages in the same group are not consumed in order then you should contact those providers for support. It is not valid to simply say that their implementations are different from ActiveMQ Artemis if the ultimate result is the same.
Related
I'm currently faced with a use case where I need to process multiple messages in parallel, but related messages should only be processed by one JMS consumer at a time.
As an example, consider the messages ABCDEF and DEFGHI, followed by a sequence number:
ABCDEF-1
ABCDEF-2
ABCDEF-3
DEFGHI-1
DEFGHI-2
DEFGHI-3
If possible, I'd like to have JMS consumers process ABCDEF and DEFGHI messages in parallel, but never two ABCDEF or two DEFGHI messages at the same time across two or more consumers. In my use case, the ordering of messages is irrelevant. I cannot use JMS selectors because I would not know the group name ahead of time, and having a static list of group names is not feasible. Messages are sent by a system which is not under my control, and the group name always consists of 6 letters.
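Broker aside, the constraint itself (parallel across groups, never parallel within a group) can be sketched in plain Java with one lock per group, created on demand because group names are only discovered at runtime. A hypothetical consumer-side sketch, not an IBM MQ API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

// Hypothetical sketch: serialize processing per group name while letting
// different groups run in parallel. Locks are created lazily because the
// 6-letter group names are only known at runtime.
class PerGroupSerializer {
    private final Map<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    void process(String groupName, Runnable work) {
        ReentrantLock lock = locks.computeIfAbsent(groupName, k -> new ReentrantLock());
        lock.lock();
        try {
            work.run(); // at most one thread per group runs here at a time
        } finally {
            lock.unlock();
        }
    }
}
```

Note the obvious caveat: this only serializes within a single JVM. With several consumer processes on different machines you would still need broker support (message groups) or a shared lock, which is exactly what the question is after.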
ActiveMQ seems to have implemented this via their message groups feature, but I can't find equivalent functionality in IBM MQ. My understanding is that this behaviour is driven by JMSXGroupId and JMSXGroupSeq headers, which are only defined in an optional part of the JMS specification.
As a workaround, I could always have a staging ground (a database perhaps), where all messages are placed and then have a fast poll on this database, but adding an extra piece of infrastructure seems overkill here. Moreover, it would also not allow me to use JMS transactions for reprocessing in case of failures.
To me this seems like a common use case in messaging architecture, but I can't find a simple yes/no answer anywhere online, and the IBM MQ documentation isn't very clear about whether this functionality is supported or not.
Thanks in advance
IBM MQ also has the concept of message groups.
The IBM MQ message header, called the Message Descriptor (MQMD), has a couple of fields that the JMS group fields map to:
JMSXGroupID -> MQMD GroupID field
JMSXGroupSeq -> MQMD MsgSeqNumber field
Read more here
IBM MQ docs: Mapping JMS property fields
IBM MQ docs: Message groups
I am trying to consume messages from three IBM MQ queues using JMS.
So, there are three @JmsListener methods in my Spring Boot application.
My doubt is about how they will behave: can all the consumers consume from their respective queues at the same time?
Will there be any concurrency?
If not, what is the best way to consume from the queues concurrently? I can't afford serial execution of the application.
Thanks in advance :)
On the @JmsListener annotation you can set the concurrency behaviour:
concurrency
public abstract String concurrency
The concurrency limits for the listener, if any. Overrides the value
defined by the container factory used to create the listener
container. The concurrency limits can be a "lower-upper" String — for
example, "5-10" — or a simple upper limit String — for example, "10",
in which case the lower limit will be 1.
Note that the underlying container may or may not support all
features. For instance, it may not be able to scale, in which case
only the upper limit is used.
Source: https://docs.spring.io/spring/docs/current/javadoc-api/org/springframework/jms/annotation/JmsListener.html#concurrency--
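As a sketch of how that "lower-upper" String is interpreted (matching the javadoc above: "5-10" gives bounds 5 and 10, a bare "10" gives 1 and 10):

```java
// Sketch of the concurrency-string format described in the javadoc above:
// "lower-upper" such as "5-10", or a bare upper limit such as "10",
// in which case the lower limit defaults to 1.
class ConcurrencySketch {

    // Returns {lowerLimit, upperLimit}.
    static int[] parse(String concurrency) {
        int dash = concurrency.indexOf('-');
        if (dash >= 0) {
            return new int[] {
                Integer.parseInt(concurrency.substring(0, dash)),
                Integer.parseInt(concurrency.substring(dash + 1))
            };
        }
        return new int[] { 1, Integer.parseInt(concurrency) };
    }

    public static void main(String[] args) {
        System.out.println(java.util.Arrays.toString(parse("5-10"))); // [5, 10]
        System.out.println(java.util.Arrays.toString(parse("10")));   // [1, 10]
    }
}
```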
Each listener runs in its own thread.
You can easily test this by logging the received message; the log line shows the thread it runs on. For example:
2020-06-06 11:26:54.339 INFO 23404 --- [enerContainer-1] c.e.d.ShippingService : Hello World!
The thread name is [enerContainer-1] (the default log pattern truncates it; the full name is something like DefaultMessageListenerContainer-1).
Please read more in the documentation about Spring and JMS https://docs.spring.io/spring/docs/current/spring-framework-reference/integration.html#jms-receiving
When I turn on exactly-once processing I get the following error. NOTE: our applications are very secure and we only give Kafka users and consumers access to resources that they explicitly need.
2019-04-22 15:28:09 INFO (kafka.authorizer.logger)233 - Principal = User:xxx is Denied Operation = Describe from host xxx.xxx.xxx.xxx on resource = TransactionalId:application_consumer-0_16
With exactly once processing does kafka streams use a consumer group per stream task instead of a consumer group across all stream tasks?
With exactly-once enabled, there is still only one consumer group that is the same as the application.id. However, instead of using one Producer per thread, one producer per task is used.
What you need is permission for transactions. The TransactionalId in the error is from the producer of task 0_16. Each producer uses its own transactional ID, constructed as <application.id>-<taskId>.
For details, see the docs: https://docs.confluent.io/current/kafka/authorization.html#using-acls
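Assuming the <application.id>-<taskId> scheme above (a task id being <subtopology>_<partition>), the TransactionalId from the error can be reconstructed like this:

```java
// Sketch: reconstruct the TransactionalId the error message reports,
// following the <application.id>-<taskId> naming scheme described above.
class TxnIdSketch {

    static String transactionalId(String applicationId, int subtopology, int partition) {
        return applicationId + "-" + subtopology + "_" + partition;
    }

    public static void main(String[] args) {
        // Matches the denied resource in the log: TransactionalId:application_consumer-0_16
        System.out.println(transactionalId("application_consumer", 0, 16));
    }
}
```

Granting the principal Describe/Write access on TransactionalIds with the application.id prefix covers every task's producer.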
I'm using Weblogic JMS. What I'd like to do is:
a) Producer A produces JMS messages and puts them on the queue (groupA).
b) While processing each message from groupA, I want to generate other messages (groupB).
I've got 16 workers to process these messages.
Now, how can I ensure that all messages from groupA will be processed before any message from groupB?
A bit late here, but hopefully this will help anyone with the same issue.
GroupA is effectively an intermediate destination. That JMS queue should keep the default value of Pass-Through for its Unit-of-Work (UOW) Message Handling Policy setting. As your MDB processes these messages, it needs to read the Unit-of-Work JMS properties and set them on the new message being sent for GroupB.
The GroupB JMS queue should have its Unit-of-Work (UOW) Message Handling Policy set to Single Message Delivery. Messages received on this queue will not be delivered until all messages for a unit-of-work group are present, i.e. all sequence numbers 1, 2, 3, etc. plus an end-of-unit identifier. Once they are all present, the MDB consumes them as one ObjectMessage whose payload is a List containing the individual messages; it is then up to your code to iterate the list and process each entry as needed.
Weblogic Docs here
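To make the "copy the UOW properties" step above concrete, here is a sketch simulated with plain Maps instead of JMS messages. The property names shown (JMS_BEA_UnitOfWork and friends) are the WebLogic UOW headers as I recall them from the WebLogic docs; verify them against your release before relying on this.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of carrying the Unit-of-Work identity from a consumed groupA
// message onto the new groupB message, simulated with plain Maps.
// Property names are assumptions to verify against the WebLogic docs.
class UowCopySketch {
    static final String[] UOW_PROPS = {
        "JMS_BEA_UnitOfWork",
        "JMS_BEA_UnitOfWorkSequenceNumber",
        "JMS_BEA_IsUnitOfWorkEnd"
    };

    static Map<String, Object> copyUowProps(Map<String, Object> groupAMsg) {
        Map<String, Object> groupBMsg = new HashMap<>();
        for (String p : UOW_PROPS) {
            if (groupAMsg.containsKey(p)) {
                groupBMsg.put(p, groupAMsg.get(p)); // carry the UOW identity forward
            }
        }
        return groupBMsg;
    }
}
```

With real JMS messages the same loop would use getObjectProperty/setObjectProperty on the incoming and outgoing messages.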
I have configured Spring DefaultMessageListenerContainer as ActiveMQ consumer consuming messages from a queue. Let's call it "Test.Queue"
I have this code deployed in 4 different machines and all the machines are configured to the same ActiveMQ instance to process the messages from the same "Test.Queue" queue.
I set the max consumer size to 20. As soon as all 4 machines are up and running, I see the consumer count against the queue as 80 (4 * max consumer size = 80).
Everything is fine as the number of messages produced and sent to the queue grows.
But when there are thousands of messages and one of the 80 consumers gets stuck, ActiveMQ freezes and stops sending messages to the other consumers.
All messages are then stuck in ActiveMQ forever.
As I have 4 machines with up to 80 consumers, I have no way to see which consumer failed to acknowledge.
I stop and restart all 4 machines, and when I stop the machine that has the stuck consumer, messages start flowing again.
I don't know how to configure DefaultMessageListenerContainer to abandon the bad consumer and immediately signal ActiveMQ to start sending messages again.
I was able to reproduce the scenario even without Spring, as follows:
I produced up to 5000 messages and sent them to the "Test.Queue" queue
I created 2 consumers (A and B). In consumer B's onMessage() method, I put the thread to sleep for a long time (Thread.sleep(Long.MAX_VALUE)) whenever the current time % 13 is 0.
Ran these 2 consumers.
Went to Active MQ and found that the queue has 2 consumers.
Both A and B are processing messages
At some point consumer B's onMessage() gets called and it puts the thread to sleep when the current time % 13 == 0 condition is satisfied.
The consumer B is stuck and it can't acknowledge to the broker
I went back to Active MQ web console, still see the consumers as 2, but no messages are dequeued.
Now I created another consumer C and ran it to consume.
Only the consumer count in ActiveMQ went up to 3 from 2.
But consumer C does not consume anything, because the broker holds all the messages while it waits for consumer B to acknowledge.
I also noticed consumer A is not consuming anything.
When I kill consumer B, all messages are drained.
Let's say A, B and C are managed by Spring's DefaultMessageListenerContainer. How do I tweak DefaultMessageListenerContainer to take the bad consumer (in my case consumer B) out of the pool after it fails to acknowledge for X seconds, and notify the broker immediately so that it does not hold onto messages forever?
Thanks for your time.
Appreciate if I get a solution to this problem.
Here are a few options to try:
Set the queue prefetch to 0 to promote better distribution across consumers and reduce 'stuck' messages on specific consumers; see http://activemq.apache.org/what-is-the-prefetch-limit-for.html
Set "?useKeepAlive=false&wireFormat.maxInactivityDuration=20000" on the connection URL to time out the slow consumer after the specified inactivity period.
Set the destination policy slowConsumerStrategy -> abortSlowConsumerStrategy, again to time out a slow consumer:
<policyEntry ...>
    ...
    <slowConsumerStrategy>
        <abortSlowConsumerStrategy/>
    </slowConsumerStrategy>
    ...
</policyEntry>
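As a sketch of what the first two options combined could look like on the client's broker URL (host and port are placeholders; verify the option spellings against your ActiveMQ client version):

```
tcp://broker-host:61616?jms.prefetchPolicy.queuePrefetch=0&useKeepAlive=false&wireFormat.maxInactivityDuration=20000
```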