In JMS there are Queues and Topics. As I understand it so far, queues are best suited for producer/consumer scenarios, whereas topics are used for publish/subscribe. However, in my scenario I need to combine both approaches into a producer-consumer-observer architecture.
Specifically, I have producers that write to some queues, and workers that read from these queues, process the messages, and then write the results to a different queue (or topic). Whenever a worker has finished a job, my GUI should be notified and update its representation of the current system state. Since the workers and the GUI are separate processes, I cannot apply a simple observer pattern or notify the GUI directly.
What is the best way to realize this with a combination of queues and/or topics, given that the GUI should always be notified but must never consume anything from a queue?
I would like to solve this with JMS directly and not use any additional technology such as RMI to implement the observer part.
To give a more concrete example:
1. I have a queue of packages (PACKAGEQUEUE), filled by a machine (PackageProducer).
2. A worker (AddressWorker) takes a package from PACKAGEQUEUE, adds an address, and then writes it to MAILQUEUE.
3. Another worker (MailWorker) processes MAILQUEUE and sends the packages out by mail.
After step 2, when a message is written to MAILQUEUE, I want to notify the GUI so it can update the status of the package. Of course the GUI must not consume the messages in MAILQUEUE; only the MailWorker may consume them.
You can use a combination of a queue and a topic for your solution.
Your GUI application can subscribe to a topic, say MAILQUEUE_NOTIFICATION. Every time (i.e. at step 2) the AddressWorker writes a message to MAILQUEUE, a copy of that message should also be published to the MAILQUEUE_NOTIFICATION topic. Since the GUI application has subscribed to the topic, it will receive that publication containing the status of the package and can update itself from its contents.
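To make that concrete, here is a minimal plain-JMS sketch of the worker's output side (a sketch only: the broker URL, class name, and payloads are assumptions; just MAILQUEUE and MAILQUEUE_NOTIFICATION come from the question):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class AddressWorkerOutput {
    public static void main(String[] args) throws JMSException {
        // Assumed broker URL; any JMS provider would work the same way.
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // Only the MailWorker consumes from this queue.
        MessageProducer mailQueue =
            session.createProducer(session.createQueue("MAILQUEUE"));
        // Every GUI instance subscribes to this topic and only observes.
        MessageProducer notifications =
            session.createProducer(session.createTopic("MAILQUEUE_NOTIFICATION"));

        // Step 2: the addressed package goes onto MAILQUEUE...
        TextMessage addressed = session.createTextMessage("package 42, addressed"); // assumed payload
        mailQueue.send(addressed);

        // ...and a copy / status message is published for the observers, so the
        // GUI is notified without ever touching MAILQUEUE itself.
        TextMessage status = session.createTextMessage("package 42: queued for mail");
        notifications.send(status);

        connection.close();
    }
}

The GUI process would then simply create a consumer on the MAILQUEUE_NOTIFICATION topic and attach a listener; it never opens a consumer on MAILQUEUE.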
HTH
Related
I have multiple instances of a worker connected to a queue, and all requests are distributed to the worker instances in a load-balanced way. When a new worker instance connects to the queue, I need to send a small data dump from the mainstream app to this new instance (a one-time job).
Currently I'm using a REST endpoint on the mainstream app to do this at application start-up, but can we leverage the message queue for this instead? Once a new worker instance connects to the queue, it would ask the mainstream app for the initial data dump through the queue, and the app would reply with the initial data.
Is this possible using a messaging queue/topic? Kindly share your views/suggestions on how to achieve this using ActiveMQ.
If you're using ActiveMQ Artemis, this kind of requirement is typically fulfilled with a queue that supports both non-destructive and last-value semantics. The last-value semantics keep the queue up to date with only the latest messages, and the non-destructive semantics mean that even after consumers acknowledge the messages they remain on the queue for the next client that connects. With this combination, clients can first consume all the messages from this special "initialization" queue and then continue with whatever other messaging work they need to do.
Unfortunately ActiveMQ "Classic" supports neither of these semantics, and there is no straightforward way to get equivalent behavior.
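For illustration, the client-side "drain the initialization queue first, then do normal work" step could look roughly like this (a sketch only: the broker URL and the INIT.DATA / WORK.REQUESTS queue names are assumptions, and the initialization queue is assumed to have been configured broker-side with Artemis' last-value and non-destructive options):

import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class NewWorkerStartup {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed URL
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // 1. Drain the special initialization queue first. Because the broker-side
        //    queue is non-destructive, acknowledging here does not remove the
        //    messages for the next worker; last-value semantics keep only the
        //    latest message per key.
        MessageConsumer init = session.createConsumer(session.createQueue("INIT.DATA"));
        Message m;
        while ((m = init.receive(1000)) != null) {
            // apply the initial data dump to local state
        }
        init.close();

        // 2. Then join the normal load-balanced work queue alongside the other workers.
        MessageConsumer work = session.createConsumer(session.createQueue("WORK.REQUESTS"));
        work.setMessageListener(msg -> { /* handle request */ });
    }
}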
I have an environment with only one app server. Some of my messages take a while to service (10 seconds or so), and I'd like to increase throughput by running multiple instances of my consumer application to process them. I've read about the "competing consumer" pattern and gathered that it should be avoided when using MassTransit. According to the MassTransit docs here, each receive endpoint should have a unique queue name. I'm struggling to understand how to map this recommendation to my environment. Is it possible to have N consumer instances running where each receives the same message but only one of the instances actually acts on it? In other words, can we implement the "competing consumer" pattern across multiple queues instead of one?
Or am I looking at this wrong? Do I really need to look at the "Send" method as opposed to "Publish"? The downside of "Send" is that it requires the sender to have direct knowledge of the endpoint's existence, and I want to be dynamic about the number of consumers/endpoints I have. Is there anything built into MassTransit that could help keep track of how many consumer instances/queues/endpoints there are that can service a particular message type?
Thanks,
Andy
so the "avoid competing consumers" guidance was from when MSMQ was the primary transport. MSMQ would fall over if multiple threads where reading from the queue.
If you are using RabbitMQ, then competing consumers work brilliantly. Competing consumers is the right answer. Each competing consume will use the same receive from endpoint.
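MassTransit itself is .NET, so purely as a language-neutral illustration of what the broker does with competing consumers, here is a rough sketch using the plain RabbitMQ Java client (the queue name, localhost broker, and prefetch value are assumptions, not MassTransit code): every instance consumes from the same queue, and each message is delivered to exactly one instance.

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class CompetingWorker {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory(); // assumed: broker on localhost
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // Every instance declares and consumes from the SAME queue name; the broker
        // then delivers each message to exactly one of the instances.
        channel.queueDeclare("work-queue", true, false, false, null);

        // Prefetch 1 so a slow (~10 s) job on one instance does not hoard messages.
        channel.basicQos(1);

        DeliverCallback onMessage = (consumerTag, delivery) -> {
            String body = new String(delivery.getBody(), "UTF-8");
            System.out.println("processing " + body);
            // ... the ~10 second unit of work ...
            channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
        };
        channel.basicConsume("work-queue", false, onMessage, consumerTag -> { });
    }
}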
I have an application written in Ruby that has multiple threads that each send requests to remote AMQP endpoints. These threads are spawned from time to time when new tasks have to be run.
If I use temporary, exclusive queues per thread for sending responses to their requests, then it becomes easy to write the code to handle incoming messages in this Ruby service. The queues are deleted as soon as the associated channel is closed so they don't stick around after their purpose is over.
The alternatives I can think of all require a listener thread listening on one or more queues that receive all incoming messages / responses into the Ruby service, and then routing these messages to waiting threads using some message identifiers. This seems more complicated, and I am unable to use RabbitMQ for all the required semantic routing.
Is the first model a viable model for AMQP communication? Is there a better pattern for handling this case?
The answer largely depends on your use case.
If you don't care about losing messages when a given queue is deleted, then the first option is fine.
If you need messages to stick around in a queue until something comes along to process them, then you need a durable queue for the messages to sit in.
There is no requirement for a queue per thread with RabbitMQ.
However, you should use a channel per thread.
Given that, you can have a channel per thread and have multiple channels consuming from the same (or different) queues without issue.
As long as you keep each channel limited to a single thread, you can do whatever you need with regard to the queues you are consuming from.
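Your code is Ruby, but the threading rule is client-agnostic; here is a rough sketch of "one connection, one channel per thread, all consuming the same queue" using the RabbitMQ Java client just to illustrate the idea (queue name and localhost broker are assumptions):

import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ChannelPerThread {
    public static void main(String[] args) throws Exception {
        // One connection for the whole process (assumed: broker on localhost).
        Connection connection = new ConnectionFactory().newConnection();

        for (int i = 0; i < 2; i++) {
            final int worker = i;
            new Thread(() -> {
                try {
                    // Each thread opens its OWN channel; channels must not be shared
                    // across threads, but many channels may consume the same queue.
                    Channel channel = connection.createChannel();
                    channel.queueDeclare("replies", true, false, false, null);
                    channel.basicConsume("replies", true,
                        (tag, delivery) -> System.out.println(
                            "worker " + worker + " got " + new String(delivery.getBody(), "UTF-8")),
                        tag -> { });
                } catch (Exception e) {
                    e.printStackTrace();
                }
            }).start();
        }
    }
}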
I am creating a hosted system where multiple customers can send messages. I receive those messages on JMS queues.
Now, all processing is done in a similar way, and I want one process to poll all the incoming queues for messages and handle them. Is there a way in WSO2 ESB to subscribe to multiple queues?
If that is not possible, the workaround would be to create a separate listener process for each queue and have it post the messages to a central processing queue. But that seems like a less clean solution (and I think it will scale worse than listening to multiple queues directly).
Any ideas on this?
If changes to the ActiveMQ server are possible, i.e. if the OP is able to influence the server configuration, something like ActiveMQ diverts could do the trick:
<divert name="prices-divert">
<address>jms.queue.ABC</address>
<forwarding-address>jms.queue.theone</forwarding-address>
<exclusive>true</exclusive>
</divert>
<divert name="prices-divert">
<address>jms.queue.xyz</address>
<forwarding-address>jms.queue.theone</forwarding-address>
<exclusive>true</exclusive>
</divert>
Basically, you define multiple diverts that converge the messages from the individual queues onto a single queue. This has an advantage over the listen-and-repost workaround mentioned by the OP, and in my view it should scale well since it is a built-in broker feature.
You can define a sequence with all the required logic in it and then call it from multiple proxy services (each listening to a specific queue). Otherwise you can try something similar to this sample.
Here is my use case: I am developing a trading application, and I want to send incremental stock updates (bidQty etc.) to active consumers instead of the whole quote, plus a snapshot update to a new consumer to start it off.
Now, is it possible to override any of ActiveMQ's classes (implementors of Topic) to achieve this behavior? Any clues would be helpful.
If the same is possible in any other open-source provider, please let me know.
This is NOT a case where you can simply change the implementation of Topic. You should actually avoid changing the implementation of core ActiveMQ features to solve specific business requirements; fixing bugs and adding core messaging features is another thing.
There are multiple ways to solve your use case with regular ActiveMQ features.
Separate Sync and Update channel
I would probably divide the "sync/snapshot" channel from the "incremental update" channel.
One way is to implement the "snapshot-sync" as JMS request/reply where the consumer asks the provider for a sync, then continues to rely on incremental updates pushed via the topic.
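A minimal sketch of that request/reply handshake in plain JMS (the broker URL, the SNAPSHOT.REQUESTS queue name, and the payloads are assumptions):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class SnapshotSync {
    public static void main(String[] args) throws JMSException {
        Connection connection =
            new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection(); // assumed URL
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // The new consumer asks the feed provider for a snapshot on a request queue
        // and waits for the reply on its own temporary queue.
        Destination replyTo = session.createTemporaryQueue();
        MessageProducer requests =
            session.createProducer(session.createQueue("SNAPSHOT.REQUESTS"));
        TextMessage ask = session.createTextMessage("snapshot please");
        ask.setJMSReplyTo(replyTo);
        requests.send(ask);

        Message snapshot = session.createConsumer(replyTo).receive(5000);
        // ...load the snapshot into the local book, then subscribe to the topic for
        // incremental updates. The provider side simply replies to ask.getJMSReplyTo().
    }
}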
Advisory messages and Selectors
You can also implement it all on a single topic using a mix of AdvisoryMessages and JMS selectors.
An idea (you can do this in many ways):
Introduce two message properties: MsgType and Receiver
Mark each incremental update with MsgType=inc
Mark each snapshot with the client id of the intended consumer, Receiver=<client id>.
Have the producer listen to advisory messages from ActiveMQ and fire a snapshot/sync message marked with Receiver=<client id> and MsgType=snapshot whenever a new client subscribes to the stock topic.
The client subscribes with a selector of something like
MsgType='inc' OR (MsgType='snapshot' AND Receiver='<me>')
This way you can trigger snapshot syncs with specific clients as well as incremental updates for all clients.
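A rough sketch of how those pieces could fit together with plain JMS plus ActiveMQ's advisory support (the STOCK.QUOTES topic name, the hard-coded client id, the payloads, and the single shared session are all assumptions kept for brevity; a real implementation would read the new subscriber's identity from the advisory message's ConsumerInfo and use separate sessions per thread and per process):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;
import org.apache.activemq.advisory.AdvisorySupport;
import org.apache.activemq.command.ActiveMQTopic;

public class StockFeedSketch {
    public static void main(String[] args) throws JMSException {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed URL
        Connection connection = cf.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        ActiveMQTopic stockTopic = new ActiveMQTopic("STOCK.QUOTES"); // assumed topic name

        // Consumer side: one selector receives all increments plus snapshots aimed at me.
        String me = "client-42"; // assumed client id
        MessageConsumer quotes = session.createConsumer(stockTopic,
            "MsgType='inc' OR (MsgType='snapshot' AND Receiver='" + me + "')");
        quotes.setMessageListener(msg -> { /* update the local book */ });

        // Producer side: incremental updates are published for everyone...
        MessageProducer producer = session.createProducer(stockTopic);
        TextMessage inc = session.createTextMessage("bidQty=100");
        inc.setStringProperty("MsgType", "inc");
        producer.send(inc);

        // ...and a targeted snapshot is sent whenever the consumer advisory topic
        // reports a new subscriber on the stock topic. (The id is hard-coded here;
        // real code would take it from the advisory's ConsumerInfo data structure.)
        MessageConsumer advisories =
            session.createConsumer(AdvisorySupport.getConsumerAdvisoryTopic(stockTopic));
        advisories.setMessageListener(advisory -> {
            try {
                TextMessage snapshot = session.createTextMessage("full quote ...");
                snapshot.setStringProperty("MsgType", "snapshot");
                snapshot.setStringProperty("Receiver", me);
                producer.send(snapshot);
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
    }
}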
If you start thinking about the dynamics you already have, you can probably come up with another ten or so solutions.
Retroactive Consumers
You might have some use for a Retroactive Consumer - the example actually shows a scenario similar to yours.
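For reference, a retroactive consumer is requested per subscriber via a destination option; a minimal sketch (topic name and broker URL are assumptions):

import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

public class RetroactiveSubscriber {
    public static void main(String[] args) throws JMSException {
        Connection connection =
            new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection(); // assumed URL
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // The consumer.retroactive destination option asks the broker to replay the
        // messages it has cached for this topic to the new subscriber.
        Topic topic = session.createTopic("STOCK.QUOTES?consumer.retroactive=true");
        session.createConsumer(topic).setMessageListener(
            msg -> System.out.println("got " + msg));
    }
}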