Routing in ActiveMQ - jms

I have the following scenario:
A producer P posts all of its messages to ActiveMQ broker 'A' (which is local to P).
Is it possible for me to route the messages from ActiveMQ A to remote ActiveMQ brokers B or C?
Basically I am looking for filters on the 'A' side configuration to route these messages.
Thanks in Advance,
Madhav
Gaurav,
What I mean is: I have 3 instances of ActiveMQ at locations A, B and C respectively. I have a producer bean 'P' at location A which places messages locally on AMQ instance A - locally, because that lessens the headache of connection maintenance and the probability of message loss in 'P', compared to connecting to AMQ instances B or C remotely.

If you want to interconnect the 3 brokers, just create a network of brokers out of them. Producers and consumers can then connect to any broker, and messages flow to the appropriate broker based on demand, etc.
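For the network-of-brokers option, a minimal sketch of broker A's activemq.xml is below; the broker names, hosts and ports are illustrative, not taken from the question:

```xml
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="brokerA">
  <!-- forward messages on demand to brokers B and C (hypothetical hosts) -->
  <networkConnectors>
    <networkConnector name="a-to-b" uri="static:(tcp://hostB:61616)"/>
    <networkConnector name="a-to-c" uri="static:(tcp://hostC:61616)"/>
  </networkConnectors>
  <transportConnectors>
    <transportConnector name="openwire" uri="tcp://0.0.0.0:61616"/>
  </transportConnectors>
</broker>
```

With this in place, a consumer connecting to B or C creates demand, and messages produced to A are forwarded automatically.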
Otherwise, if you want more explicit control, you can use Camel to perform basic (or complex) routing from broker A queues to broker B/C queues using separate connection factories, etc.
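A hedged sketch of the Camel option in Spring XML, assuming a hypothetical 'orders' queue and a 'region' header used for routing (the broker URLs, queue name, and header name are all illustrative):

```xml
<!-- one JMS component per broker, each with its own connection factory -->
<bean id="amqA" class="org.apache.activemq.camel.component.ActiveMQComponent">
  <property name="brokerURL" value="tcp://localhost:61616"/>
</bean>
<bean id="amqB" class="org.apache.activemq.camel.component.ActiveMQComponent">
  <property name="brokerURL" value="tcp://hostB:61616"/>
</bean>
<bean id="amqC" class="org.apache.activemq.camel.component.ActiveMQComponent">
  <property name="brokerURL" value="tcp://hostC:61616"/>
</bean>

<camelContext xmlns="http://camel.apache.org/schema/spring">
  <route>
    <from uri="amqA:queue:orders"/>
    <!-- content-based routing on a message header (header name is hypothetical) -->
    <choice>
      <when>
        <simple>${header.region} == 'B'</simple>
        <to uri="amqB:queue:orders"/>
      </when>
      <otherwise>
        <to uri="amqC:queue:orders"/>
      </otherwise>
    </choice>
  </route>
</camelContext>
```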


IBM MQ message groups

I'm currently faced with a use case where I need to process multiple messages in parallel, but related messages should only be processed by one JMS consumer at a time.
As an example, consider the messages ABCDEF and DEFGHI, followed by a sequence number:
ABCDEF-1
ABCDEF-2
ABCDEF-3
DEFGHI-1
DEFGHI-2
DEFGHI-3
If possible, I'd like to have JMS consumers process ABCDEF and DEFGHI messages in parallel, but never two ABCDEF or two DEFGHI messages at the same time across two or more consumers. In my use case, the ordering of messages is irrelevant. I cannot use JMS message selectors because I would not know the group name ahead of time, and having a static list of group names is not feasible. Messages are sent via a system which is not under my control, and the group name always consists of 6 letters.
ActiveMQ seems to have implemented this via their message groups feature, but I can't find equivalent functionality in IBM MQ. My understanding is that this behaviour is driven by JMSXGroupId and JMSXGroupSeq headers, which are only defined in an optional part of the JMS specification.
As a workaround, I could always have a staging ground (a database perhaps), where all messages are placed and then have a fast poll on this database, but adding an extra piece of infrastructure seems overkill here. Moreover, it would also not allow me to use JMS transactions for reprocessing in case of failures.
To me this seems like a common use case in messaging architecture, but I can't find a simple yes/no answer anywhere online, and the IBM MQ documentation isn't very clear about whether this functionality is supported or not.
Thanks in advance
IBM MQ also has the concept of message groups.
The IBM MQ message header, called the Message Descriptor (MQMD), has a couple of fields in it that the JMS group fields map to:
JMSXGroupID -> MQMD GroupID field
JMSXGroupSeq -> MQMD MsgSeqNumber field
Read more here
IBM MQ docs: Mapping JMS property fields
IBM MQ docs: Message groups
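Since the sender is not under your control but the group name is embedded in the message name (e.g. ABCDEF-3), one option is a small bridge that republishes each message with the group properties set itself. A sketch of just the parsing step is below; the class name and regex are illustrative, not an IBM MQ API:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class GroupKeyParser {
    // "ABCDEF-3" -> group "ABCDEF", sequence 3 (format taken from the question)
    private static final Pattern KEY = Pattern.compile("([A-Z]{6})-(\\d+)");

    public static String groupId(String name) {
        Matcher m = KEY.matcher(name);
        if (!m.matches()) throw new IllegalArgumentException("bad key: " + name);
        return m.group(1); // would be set as the JMSXGroupID property
    }

    public static int groupSeq(String name) {
        Matcher m = KEY.matcher(name);
        if (!m.matches()) throw new IllegalArgumentException("bad key: " + name);
        return Integer.parseInt(m.group(2)); // would be set as JMSXGroupSeq
    }

    public static void main(String[] args) {
        System.out.println(groupId("ABCDEF-3"));  // ABCDEF
        System.out.println(groupSeq("ABCDEF-3")); // 3
    }
}
```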

MQ channel F5 reroute

I would like to know if it is possible to point an MQ Sender channel at an F5 load balancer VIP address, rather than at a concrete MQ server IP address, and have the message delivered to one of the two MQ servers in the F5 cluster resources group. There are two MQ servers in the F5 cluster.
What I'm trying to determine is whether I could use this method in lieu of creating an MQ Cluster Gateway queue manager on yet more hardware, and instead use the F5 load-balancing feature to deliver the message to a cluster queue.
If I could capitalize on the F5 load balancing, I'm thinking it would simulate an MQ Cluster Gateway queue manager.
Would it work? Pitfalls?
You are looking at a couple of issues with this configuration:
With persistent messages, the Sender and the corresponding Receiver channel each increment a sequence number with every persistent message sent across the channel. If the sequence numbers do not match, the channel will not start unless it is reset to match on one end or the other (usually the Sender). This means that if the sender channel connects to QMGR1 behind the F5, the Receiver on QMGR1 increments its sequence number; if the next time the sender channel connects it is routed to QMGR2, the Receiver's sequence number will be lower than the sender's, and the channel will not start.
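When the sequence numbers do get out of step, the usual fix is an MQSC RESET CHANNEL on one end (typically the Sender); for example, for a channel named SQMGR.CQMGR1:

```
RESET CHANNEL(SQMGR.CQMGR1) SEQNUM(1)
```

The SEQNUM value would need to match whatever the other end expects, so with an F5 routing the connection to a different queue manager each time, this reset could become a recurring manual chore.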
Even if you were only sending non-persistent messages, which do not increment the sequence number, you would still not achieve the same results as with a cluster gateway in front of the two queue managers. With a typical cluster configuration you would get a round robin of the messages between the two clustered queue managers. A sender channel, by contrast, is normally configured to start when a message is put to the associated transmission queue (TRIGGER) and to keep running until no messages have been sent for the length of time specified by the disconnect interval (DISCINT). Because of this you would not see workload balancing of messages between the two queue managers behind the F5. Depending on how the F5 is configured and how long the disconnect interval is set to, you would see a group of messages go to one queue manager and then a group of messages go to the other. The number of messages in each group would depend on the traffic pattern relative to the sender channel settings.
Note that even if the Sender channel is configured to connect only to one of the two clustered queue managers, if you set the cluster workload use queue (CLWLUSEQ) to the value of ANY for the clustered queue, you can have messages still round robin between the two instances of the queue. This would require that you have the remote queue (QREMOTE) on the Sender channel queue manager specify a Remote Queue Manager Alias (RQMA) as the remote queue manager name (RQMNAME) value. The RQMA would then allow the messages to resolve to any instance of the clustered queue including the local instance. Examples of objects are below for the Sender queue manager (SQMGR) and Receiver (first clustered) queue manager (CQMGR1) and the second clustered queue manager (CQMGR2):
SQMGR:
DEFINE QREMOTE(TO.CQLOCAL) RNAME(CQLOCAL) RQMNAME(CLUSTER.RQMA) XMITQ(CQMGR1)
DEFINE QLOCAL(CQMGR1) USAGE(XMITQ) INITQ(SYSTEM.CHANNEL.INITQ) TRIGDATA(SQMGR.CQMGR1) TRIGGER .....
DEFINE CHL(SQMGR.CQMGR1) CHLTYPE(SDR) XMITQ(CQMGR1) CONNAME(10.20.30.40) .....
CQMGR1:
DEFINE CHL(SQMGR.CQMGR1) CHLTYPE(RCVR) MCAUSER('notmqm') .....
DEFINE QREMOTE(CLUSTER.RQMA) RNAME('') RQMNAME('') XMITQ('')
DEFINE QLOCAL(CQLOCAL) CLUSTER('YOURCLUSTER') CLWLUSEQ(ANY)
CQMGR2:
DEFINE QLOCAL(CQLOCAL) CLUSTER('YOURCLUSTER') CLWLUSEQ(ANY)

MQGet and MQInput from the same queue

I've come across a curious detail in a legacy integration solution based on WebSphere MQ 7.0.1.3 and WebSphere Message Broker 7.0.0.7. There are 2 message flows:
The 1st flow is a case of the MQ request-reply pattern. After an MQPut it has an MQGet node that gets the message by correlation ID from the queue "MQ_BIS_IN".
The 2nd flow is a kind of one-way router that starts with an MQInput node (without any filters) listening on the queue "MQ_GW_IN".
Interestingly, "MQ_BIS_IN" is an alias for the "MQ_GW_IN" queue. My first thought was that the 2 flows would interfere in a bad way - basically, the "omnivorous" MQInput would ruin the request-reply exchange. But they seem to somehow get along.
I am going to reproduce this configuration in a test environment to determine whether the behaviour is stable under load. Nevertheless, does anybody know if there are rules of precedence between concurrent read operations on the same queue? Does it matter that there's an alias to the queue?
Both the MQInput and the MQGet node can be configured to look only for particular message IDs or correlation IDs, to pick up the items on the queue in a defined order, or to pick up only complete groups of messages - so there doesn't need to be a conflict here.

How does a RabbitMQ topic subscriber in a clustered application server receive messages?

I have a RabbitMQ topic with multiple (say 2) subscribers, running in a load-balanced application server cluster (say 3 servers).
So will the message get delivered to all (2 x 3) subscribers across all listeners in the clustered environment, or only to 2 listeners?
There's no such thing as a JMS-style "topic" in RabbitMQ (AMQP).
The closest thing to a JMS topic for your scenario is a fanout exchange with 2 queues bound to it. Each queue gets a copy of every message sent to the exchange, so both consumers (one per queue) get a copy of the message.
If you have multiple consumers (e.g. 3) on each queue, the messages in that queue are distributed round-robin to those consumers. Only one consumer per queue gets any given message.
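That topology can be sketched with a broker-free toy simulation (all names are illustrative; this is not the RabbitMQ client API): a fanout copies every message to each bound queue, and each queue then distributes its copy round-robin among its own consumers.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class FanoutSim {
    // fanout: every bound queue receives its own copy of every published message
    static List<List<String>> fanout(List<String> msgs, int queues) {
        List<List<String>> copies = new ArrayList<>();
        for (int q = 0; q < queues; q++) copies.add(new ArrayList<>(msgs));
        return copies;
    }

    // within one queue, messages go round-robin to that queue's consumers
    static Map<Integer, List<String>> roundRobin(List<String> queue, int consumers) {
        Map<Integer, List<String>> byConsumer = new HashMap<>();
        for (int i = 0; i < queue.size(); i++)
            byConsumer.computeIfAbsent(i % consumers, k -> new ArrayList<>())
                      .add(queue.get(i));
        return byConsumer;
    }

    public static void main(String[] args) {
        List<String> msgs = List.of("m1", "m2", "m3", "m4", "m5", "m6");
        // 2 subscriber queues bound to the fanout exchange
        List<List<String>> queues = fanout(msgs, 2);
        // each of the 3 app-server instances runs one consumer per queue
        for (List<String> q : queues) {
            Map<Integer, List<String>> dist = roundRobin(q, 3);
            System.out.println(dist); // each message consumed exactly once per queue
        }
    }
}
```

So each message is processed twice in total (once per queue), never 2 x 3 = 6 times.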

RabbitMQ: Move messages to another queue on acknowledgement received

I have a setup with two queues (no exchanges), let's say queue A and queue B.
One parser puts messages on queue A, that are consumed by ElasticSearch RabbitMQ river.
What I want now is to move messages from queue A to queue B when the ES river sends an ack to queue A, so that I can do further processing on the ack'd messages, being sure that ES has already processed them.
Is there any way in RabbitMQ to do this? If not, is there any other setup that can guarantee me that a message is only in queue B after being processed by ES?
Thanks in advance
I don't think this is supported by either AMQP or the RabbitMQ extensions.
You could drop the river and have your consumer publish to Elasticsearch itself.
Since the normal behaviour is that the queues are empty, you can simply retry reading the entries from Elasticsearch (with exponential backoff): even if Elasticsearch loses the initial race, the consumer backs off a bit and can then perform the task. This might require tuning the prefetch_size/count in your clients.
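The retry-with-backoff idea can be sketched like this; the lookup interface and delay values are illustrative, not an Elasticsearch client API:

```java
import java.util.Optional;
import java.util.function.Supplier;

public class BackoffRetry {
    // Retries the lookup, doubling the delay after each miss.
    // Returns empty if every attempt fails.
    public static <T> Optional<T> retry(Supplier<Optional<T>> lookup,
                                        int maxAttempts, long baseDelayMs)
            throws InterruptedException {
        long delay = baseDelayMs;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            Optional<T> result = lookup.get();
            if (result.isPresent()) return result;
            Thread.sleep(delay); // wait before checking Elasticsearch again
            delay *= 2;          // exponential backoff
        }
        return Optional.empty();
    }

    public static void main(String[] args) throws InterruptedException {
        // simulated lookup that only succeeds on the third attempt
        int[] calls = {0};
        Optional<String> doc = retry(
            () -> ++calls[0] >= 3 ? Optional.of("hit") : Optional.<String>empty(),
            5, 1);
        System.out.println(doc.orElse("miss")); // hit
    }
}
```

The consumer on queue B would call something like this before processing, treating an empty result after all retries as a reprocessing candidate, which keeps the work inside the JMS/AMQP transaction.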
