RabbitMQ: routing messages to threads - ruby

I have an application written in Ruby that has multiple threads that each send requests to remote AMQP endpoints. These threads are spawned from time to time when new tasks have to be run.
If I use temporary, exclusive queues per thread for sending responses to their requests, then it becomes easy to write the code to handle incoming messages in this Ruby service. The queues are deleted as soon as the associated channel is closed so they don't stick around after their purpose is over.
The alternatives I can think of all require a listener thread listening on one or more queues that receive all incoming messages/responses into the Ruby service, and then routing those messages to the waiting threads using some message identifier. This seems more complicated, and it means I cannot rely on RabbitMQ itself for all of the required routing.
Is the first model a viable model for AMQP communication? Is there a better pattern for handling this case?

The answer largely depends on your use case.
If you don't care about losing messages when a given queue is deleted, then the first option is fine.
If you need messages to stick around in a queue until something comes along to process them, then you need a durable queue where the messages can sit.
There is no requirement for a queue per thread with RabbitMQ.
However, you should be using a channel per thread.
Given that, you can have multiple channels consuming from the same (or different) queues without issue.
As long as you keep each channel limited to a single thread, you can do whatever you need with regard to the queues you are consuming from.
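
To make the first option concrete, here is a minimal sketch of the per-thread exclusive reply-queue pattern. It uses the RabbitMQ Java client purely for illustration (the Ruby Bunny API has the same shape); the broker address, the task_requests routing key, and the polling loop are assumptions for the example, not anything from your code.

```java
import com.rabbitmq.client.AMQP;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.GetResponse;

public class PerThreadReplyQueue {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker address

        try (Connection conn = factory.newConnection()) {
            // One channel per thread; channels must never be shared across threads.
            Channel channel = conn.createChannel();

            // Server-named, exclusive, auto-delete queue:
            // it disappears as soon as this channel/connection is closed.
            String replyQueue = channel.queueDeclare().getQueue();

            // Tell the remote endpoint where to send its response.
            AMQP.BasicProperties props = new AMQP.BasicProperties.Builder()
                    .replyTo(replyQueue)
                    .build();
            channel.basicPublish("", "task_requests", props, "do-work".getBytes()); // "task_requests" is illustrative

            // Poll the private reply queue until the response arrives.
            GetResponse resp = null;
            while (resp == null) {
                resp = channel.basicGet(replyQueue, true);
                if (resp == null) Thread.sleep(50);
            }
            System.out.println("Response: " + new String(resp.getBody()));

            channel.close(); // the exclusive queue is deleted here
        }
    }
}
```

Because the queue is server-named, exclusive, and auto-delete, nothing has to clean it up explicitly: closing the channel (or the connection) removes it.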

Related

Send message to consumer when connected to ActiveMQ

I have multiple instances of a worker connected to a queue, and requests are distributed to the worker instances in a load-balanced way. When a new worker instance connects to the queue, I need to push a small data dump from the main app to that new instance (a one-time job).
Currently I'm doing this with a REST endpoint on the main app at application start-up, but can we leverage the message queue for this instead? Once a new worker instance connects to the queue, it would request the initial data dump from the main app through the queue, and the app would reply with the initial data.
Is this possible using a messaging queue/topic? Kindly share your views/suggestions on achieving this with ActiveMQ.
If you're using ActiveMQ Artemis, this kind of requirement is typically fulfilled with a queue that supports both non-destructive and last-value semantics. The last-value semantics allows the queue to stay up to date with the latest messages, and the non-destructive semantics means that even when consumers acknowledge the messages they remain on the queue for the next client that connects. With this combination, clients can first consume all the messages from this special "initialization" queue and then continue on with whatever other messaging work they need to do.
Unfortunately ActiveMQ "Classic" doesn't support either of these semantics, and there is no straightforward way to get equivalent behavior.
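
To illustrate the client side only, here is a rough sketch assuming JMS 2.0 and an Artemis queue named INIT.DATA that has already been configured on the broker with last-value and non-destructive semantics (see the Artemis docs for the queue settings); the queue name and broker URL are made up for the example.

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;

import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class WorkerStartup {
    public static void main(String[] args) throws JMSException {
        // Assumed broker URL; INIT.DATA is an illustrative queue configured on the
        // broker side with last-value and non-destructive semantics.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");

        try (JMSContext ctx = cf.createContext()) {
            Queue initQueue = ctx.createQueue("INIT.DATA");

            // Drain the initialization queue. Because it is non-destructive, acknowledging
            // these messages does not remove them for the next worker that connects;
            // last-value semantics keeps only the latest message per key.
            try (JMSConsumer consumer = ctx.createConsumer(initQueue)) {
                Message m;
                while ((m = consumer.receive(1000)) != null) { // stop after 1s of silence
                    System.out.println("initial state: " + m.getBody(String.class));
                }
            }

            // ...then carry on with the normal work queue...
        }
    }
}
```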

MassTransit Multiple Consumers

I have an environment with only one app server. Some of my messages take a while to service (around 10 seconds or so), and I'd like to increase throughput by running multiple instances of my consumer application to process these messages. I've read about the "competing consumer" pattern and gathered that it should be avoided when using MassTransit. According to the MassTransit docs here, each receive endpoint should have a unique queue name. I'm struggling to understand how to map this recommendation to my environment. Is it possible to have N instances of consumers running where each receives the same message, but only one of the instances actually acts on it? In other words, can we implement the "competing consumer" pattern across multiple queues instead of one?
Or am I looking at this wrong? Do I really need to look at the "Send" method as opposed to "Publish"? The downside with "Send" is that it requires the sender to have direct knowledge of the endpoint's existence, and I want to be dynamic about the number of consumers/endpoints I have. Is there anything built into MassTransit that could help with keeping track of how many consumer instances/queues/endpoints can service a particular message type?
Thanks,
Andy
so the "avoid competing consumers" guidance was from when MSMQ was the primary transport. MSMQ would fall over if multiple threads where reading from the queue.
If you are using RabbitMQ, then competing consumers work brilliantly. Competing consumers is the right answer. Each competing consume will use the same receive from endpoint.
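
MassTransit configuration itself is C#, but the competing-consumer behaviour it relies on can be seen with a bare RabbitMQ client. The sketch below (Java client, order-events is an illustrative queue name) starts two consumers on the same queue; the broker round-robins deliveries between them, so each message is handled by only one of the instances.

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class CompetingConsumers {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker address

        Connection conn = factory.newConnection();

        // Two consumers on the *same* queue: the broker load-balances between them.
        for (int i = 1; i <= 2; i++) {
            final int instance = i;
            Channel channel = conn.createChannel();
            channel.queueDeclare("order-events", true, false, false, null); // illustrative queue name
            channel.basicQos(1); // one unacknowledged message at a time per consumer

            DeliverCallback onMessage = (consumerTag, delivery) -> {
                System.out.println("instance " + instance + " handled: " + new String(delivery.getBody()));
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("order-events", false, onMessage, consumerTag -> { });
        }

        Thread.currentThread().join(); // keep the process alive so the consumers keep running
    }
}
```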

JMS consumer inside a Netty handler?

I'm designing a quite complicated system and was wondering what the best way is to put a JMS consumer (ActiveMQ, VM protocol, non-persistent) inside a Netty handler.
Let me explain: I have several clients connecting to my Netty server using WebSockets. For every client connection I create a JMS consumer that listens for interesting messages on one or more topics. If an interesting message arrives I need to do an extra step (additional filtering) before sending the message to the client over the WebSocket.
Is the following a good way to do this:
inside a SimpleChannelInboundHandler I declare a private non-static consumer
the consumer is initialized in channelActive
the consumer is destroyed in channelInactive
when a message is received by the consumer I do the extra filtering and send it using ctx.channel().write()
In this setup I'm a bit worried that the consumer might turn into a slow consumer and slow everything down, because the WebSocket goes over the internet.
I came up with a more complex alternative to decouple the "receiving of a message by the consumer" from the "sending of the message through the WebSocket":
inside a SimpleChannelInboundHandler I declare a private non-static consumer
the consumer is initialized in channelActive
the consumer is destroyed in channelInactive
when a message is received by the consumer I put it in a blocking queue
every minute I let a thread (created for every client) look in the queue and send the found messages to the client using ctx.channel().write()
At this point I'm a bit worried about the extra thread per client.
Or is there maybe a better way to accomplish this task?
This is a classic slow-consumer problem and the first step to resolving it is to determine what the appropriate action is when a slow consumer is detected. If it is acceptable that the slow consumer misses messages, then the solution is some variation on dropping messages or unsubscribing them from the feed. For example, if it's acceptable that the client misses messages then, when one is received from JMS, check if the channel is writable. If it isn't, drop the message. If you want to give yourself a bit more of a buffer (although OS buffers are quite large) you can track the number of write completion futures that haven't completed (i.e. the messages haven't been written to the OS send buffer) and drop messages if there are too many outstanding write requests.
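
As a rough sketch of the drop-when-not-writable idea, assuming the JMS MessageListener is handed a reference to the client's Netty Channel when it is created in channelActive (the class name, payload handling, and topic wiring are made up for the example):

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

import io.netty.channel.Channel;
import io.netty.handler.codec.http.websocketx.TextWebSocketFrame;

// Relays JMS messages to one WebSocket client, dropping them when the client is slow.
public class DroppingRelayListener implements MessageListener {

    private final Channel clientChannel; // the Netty channel for this WebSocket client

    public DroppingRelayListener(Channel clientChannel) {
        this.clientChannel = clientChannel;
    }

    @Override
    public void onMessage(Message message) {
        try {
            if (!(message instanceof TextMessage)) {
                return; // only text payloads are relayed in this sketch
            }
            String payload = ((TextMessage) message).getText();

            // ...the "extra filtering" step from the question would go here...

            // If the outbound buffer is full the client is not keeping up: drop the message.
            if (clientChannel.isWritable()) {
                clientChannel.writeAndFlush(new TextWebSocketFrame(payload));
            }
        } catch (JMSException e) {
            // In a real handler you would log this; swallowed here to keep the sketch short.
        }
    }
}
```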
If the client may not miss messages, and is consistently slow, then the problem is more difficult. One option might be to divert messages to a JMS queue with a specific header value, then open a new consumer that reads messages from that queue using a JMS selector. This will put more load on the JMS server but might be appropriate for temporary slowness, and hopefully it won't interfere with your main topic feeds. Alternatively you might want to stash the messages in a different store, such as a database, so you can poll for messages when they can be sent. If you do this right, a single polling thread can cope with many clients (query for the clients which have outstanding messages, then, for each client, load a batch of messages). However, this isn't as convenient as using JMS.
I wouldn't go with option 2 because the blocking queue is only going to solve the problem temporarily, and you can achieve the same thing by tracking how many write operations are waiting to complete.

JMS Producer-Consumer-Observer (PCO)

In JMS there are queues and topics. As I understand it so far, queues are best used for producer/consumer scenarios, whereas topics can be used for publish/subscribe. However, in my scenario I need a way to combine both approaches and create a producer-consumer-observer architecture.
In particular, I have producers which write to some queues, and workers which read from these queues, process the messages in them, and then write the result to a different queue (or topic). Whenever a worker has done a job, my GUI should be notified and update its representation of the current system state. Since workers and GUI are different processes I cannot apply a simple observer pattern or notify the GUI directly.
What is the best way to realize this using a combination of queues and/or topics? The GUI should always be notified, but it must never consume anything from a queue.
I would like to solve this with JMS directly and not use any additional technology such as RMI to implement the observer part.
To give a more concrete example:
I have a queue with packages (PACKAGEQUEUE), produced by a machine (PackageProducer)
I have a worker which takes a package from the PACKAGEQUEUE, adds an address, and then writes it to a MAILQUEUE (AddressWorker)
Another worker processes the MAILQUEUE and sends the packages out by mail (MailWorker).
After step 2, when a message is written to the MAILQUEUE, I want to notify the GUI and update the status of the package. Of course the GUI must not consume the messages in the MAILQUEUE; only the MailWorker may consume them.
You can use a combination of a queue and a topic for your solution.
Your GUI application can subscribe to a topic, say MAILQUEUE_NOTIFICATION. Every time (i.e. at step 2) the AddressWorker writes a message to MAILQUEUE, a copy of that message should be published to the MAILQUEUE_NOTIFICATION topic. Since the GUI application has subscribed to the topic, it will receive that publication containing information on the status of the package, and the GUI can be updated with its contents.
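
A minimal sketch of what the AddressWorker could do at step 2, assuming JMS 2.0 and a ConnectionFactory obtained elsewhere; the destination names are the ones used in the question and the answer above.

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSContext;
import javax.jms.JMSProducer;
import javax.jms.Queue;
import javax.jms.Topic;

// Sketch of the AddressWorker's "write to MAILQUEUE and notify the GUI" step.
public class AddressWorkerNotify {

    private final ConnectionFactory connectionFactory;

    public AddressWorkerNotify(ConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    public void forwardWithNotification(String packageWithAddress) {
        try (JMSContext ctx = connectionFactory.createContext()) {
            Queue mailQueue = ctx.createQueue("MAILQUEUE");
            Topic notifications = ctx.createTopic("MAILQUEUE_NOTIFICATION");

            JMSProducer producer = ctx.createProducer();
            producer.send(mailQueue, packageWithAddress);      // consumed (once) by the MailWorker
            producer.send(notifications, packageWithAddress);  // broadcast copy; every subscribed GUI gets it
        }
    }
}
```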
HTH

About JMS system structure

I'm writing a server/client game, and a typical scenario looks like this: one client (clientA) sends a message to the server, where a MessageDrivenBean handles such messages. After the MDB finishes its job, it sends the result message back to another client (clientB).
In my opinion I only need two queues for such communication, one for input and the other for output. Creating a new queue for each connection is not a good idea, right?
The input queue is relatively clear: if several clients send messages at the same time, the messages just wait in the queue, and since there are multiple MDB instances in the server, that should not be a big performance issue.
But I am not quite clear about the output queue. Should I use a topic instead of a queue? With a queue, every client listens on the output queue, one of them gets the new message and checks a property to determine whether the message is for it; if not, it rolls back the transaction and the message goes back to the queue, ready for another client... It should work, but it must be very slow. If I use a topic instead, every client gets a copy of the message and simply ignores the ones that are not for it. That should be better, right?
I'm new to messaging systems. Is there any suggestion about my implementation? Thanks!
To begin with, choosing JMS as a gaming platform is, well, unusual; businesses use JMS brokers for delivery reliability and transaction support. Do you really need this heavy lifting in a game? Shouldn't you resort to your own HTTP-based protocol, for example?
That said, two queues are a standard pattern for point-to-point communication. Creating a queue for each new connection is definitely not OK: message-driven beans are attached to queues at deployment time, so you won't be able to respond to queue-creation events. Besides, queues are not meant to be created and destroyed in short cycles; they are designed to be long-lived entities. If you need to deliver a message to one precise client, have the client listen on the server response queue with a message selector set to filter only the messages intended for this client (see the javax.jms.Message API).
With topics it's exactly as you noted — each connected client will get a copy of the message — so again, it's not a good pattern to send to n clients a message that has to be discarded by n-1 clients.
MaDa;
You could stick to one output queue (or topic) and simply tag each message with a header that identifies the intended client. Clients then listen on the queue/topic using a selector. Hopefully your JMS implementation has efficient server-side selector evaluation.
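
To illustrate the selector approach both answers mention, here is a rough sketch assuming JMS 2.0; the GAME.RESPONSES queue name and the recipient property are made up for the example.

```java
import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.JMSProducer;
import javax.jms.Queue;

public class SelectorExample {

    // Server side: tag each response with the intended recipient.
    static void sendResponse(ConnectionFactory cf, String recipient, String body) {
        try (JMSContext ctx = cf.createContext()) {
            Queue responses = ctx.createQueue("GAME.RESPONSES"); // illustrative queue name
            JMSProducer producer = ctx.createProducer()
                                      .setProperty("recipient", recipient); // custom header used by the selector
            producer.send(responses, body);
        }
    }

    // Client side: only receive messages tagged for this client.
    static void listen(ConnectionFactory cf, String me) {
        JMSContext ctx = cf.createContext(); // kept open so the listener stays active
        Queue responses = ctx.createQueue("GAME.RESPONSES");
        JMSConsumer consumer = ctx.createConsumer(responses, "recipient = '" + me + "'");
        consumer.setMessageListener(msg -> System.out.println("message for " + me));
    }
}
```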
