I have a requirement to publish messages to different queues based on a header, where the destination queues can live on a different cluster or vhost.
My preference is to have a direct exchange bound to a queue with a routing key, and from there have multiple shovel configurations pointing to multiple queues/exchanges in different vhosts or clusters. But the problem here is how to route the shovel based on the header, since the source is always one queue.
As a second option I could have a headers exchange bound to multiple queues based on headers, with each queue having a corresponding shovel and destination. But in this approach I need to create multiple source queues.
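For what it's worth, the second option might be set up roughly like this with the RabbitMQ Java client (a minimal sketch; the exchange, queue, and header names are hypothetical):

import com.rabbitmq.client.*;
import java.util.Map;

public class HeaderExchangeSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker host
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {
            // A headers exchange routes on message headers instead of a routing key.
            ch.exchangeDeclare("routing-headers", BuiltinExchangeType.HEADERS, true);
            // One source queue per destination; each would get its own shovel
            // to the remote vhost/cluster.
            ch.queueDeclare("shovel-source-a", true, false, false, null);
            ch.queueDeclare("shovel-source-b", true, false, false, null);
            // x-match=all means every listed header must match for the binding to apply.
            ch.queueBind("shovel-source-a", "routing-headers", "",
                    Map.<String, Object>of("x-match", "all", "destination", "cluster-a"));
            ch.queueBind("shovel-source-b", "routing-headers", "",
                    Map.<String, Object>of("x-match", "all", "destination", "cluster-b"));
        }
    }
}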
What would be the ideal, future proof and maintainable solution?
I'm new to RabbitMQ, but I know that my use case fits well in this kind of architecture. What I want to achieve is the following.
Using an Android application, the user will push the "start trip" button. This will call an API, which will create the trip. Then the Android application will periodically send data (GPS coordinates) to the API (which will carry out some task). When the user finishes the trip, another call to the API will be made.
Until now, the API was a simple RESTful service written using Spring Boot. Now I want to make changes to the architecture and add RabbitMQ.
I've thought that whenever a trip is started, the API will create a queue (queue_trip_XXX, where XXX is the trip identifier) bound to an exchange (trips_exchange) with a routing key (trip_XXX). Then GPS coordinates will be sent dynamically to the exchange and routed to the corresponding queue. When the user ends the trip, the queue will be removed.
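In code, the idea would be roughly this; a hypothetical sketch using the RabbitMQ Java client (broker host assumed):

import com.rabbitmq.client.*;

public class TripLifecycle {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker host
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            String tripId = "XXX"; // trip identifier assigned by the API
            channel.exchangeDeclare("trips_exchange", BuiltinExchangeType.DIRECT, true);
            // "start trip": create and bind the per-trip queue.
            channel.queueDeclare("queue_trip_" + tripId, true, false, false, null);
            channel.queueBind("queue_trip_" + tripId, "trips_exchange", "trip_" + tripId);
            // Periodic GPS updates are published with the trip's routing key.
            String gpsJson = "{\"lat\": 40.4168, \"lon\": -3.7038}";
            channel.basicPublish("trips_exchange", "trip_" + tripId, null,
                    gpsJson.getBytes("UTF-8"));
            // "end trip": remove the queue.
            channel.queueDelete("queue_trip_" + tripId);
        }
    }
}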
So, there will be one queue for each trip and a unique exchange. Is this appropriate? Do you have any other solution that would better fit this use case?
Another question is how can I create a consumer which listens to messages sent to a queue?
Thanks!
So, there will be one queue for each trip and a unique exchange. Is this appropriate?
As I've mentioned in the comment, I don't think it's a good idea, because every queue in RabbitMQ is a separate Erlang process.
Is there any reason why you would like to process messages from one trip separately from the others? Maybe one queue will be enough for a start?
Another question is how can I create a consumer which listens to messages sent to a queue?
I assume you already have two nodes (one for the API and one for the RabbitMQ broker). You should just create a third one, which will be responsible for processing the data.
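For example, a minimal standalone consumer using the RabbitMQ Java client could look like this (the queue name and host are assumptions):

import com.rabbitmq.client.*;

public class TripDataConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost"); // assumed broker host
        try (Connection conn = factory.newConnection();
             Channel channel = conn.createChannel()) {
            channel.queueDeclare("trip-data", true, false, false, null); // assumed queue name
            DeliverCallback onMessage = (consumerTag, delivery) -> {
                String body = new String(delivery.getBody(), "UTF-8");
                System.out.println("Received: " + body); // process the GPS data here
                channel.basicAck(delivery.getEnvelope().getDeliveryTag(), false);
            };
            channel.basicConsume("trip-data", false, onMessage, consumerTag -> {});
            Thread.currentThread().join(); // keep the process alive to consume
        }
    }
}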
We are introducing SNS + SQS to handle event production and propagation in our microservices architecture, which has so far relied on HTTPS calls for inter-service communication. We are considering connecting multiple SQS queues to one SNS topic. The events in the queues will then be consumed by a Lambda or a service running on EC2.
My question is, how generic should the topics be? When should we create new topics?
Say we have a user domain which needs to publish two events, created and deleted. The two options we are considering are:
OPTION A: Have two topics, "user-created" and "user-deleted". Each topic guarantees a single event type.
- Pro: the consumers would not have to worry about discarding events they are not interested in, as they already know the messages coming from a "user-created" topic relate only to user creations.
- Con: multiple different parts of the code would publish to the same topic.
OPTION B: Have one topic, "users", that accepts multiple event types.
- Con: the consumers would have the additional responsibility of filtering the events or taking different actions depending on the type of the event (they can also configure their queue subscriptions to filter certain event types, as sketched below).
- Pro: we can ensure a single publisher for each topic.
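For reference, a hedged sketch of Option B using the AWS SDK for Java v2; the ARNs and the event_type attribute name are assumptions:

import software.amazon.awssdk.services.sns.SnsClient;
import software.amazon.awssdk.services.sns.model.MessageAttributeValue;
import java.util.Map;

public class UsersTopicExample {
    public static void main(String[] args) {
        SnsClient sns = SnsClient.create();
        // Hypothetical ARNs for illustration only.
        String topicArn = "arn:aws:sns:eu-west-1:123456789012:users";
        String subscriptionArn =
                "arn:aws:sns:eu-west-1:123456789012:users:6b0e71bd-7e97-4d97-80ce-4a0994e55286";

        // Publish a "user-created" event with an attribute consumers can filter on.
        sns.publish(b -> b.topicArn(topicArn)
                .message("{\"userId\": \"42\"}")
                .messageAttributes(Map.of("event_type", MessageAttributeValue.builder()
                        .dataType("String").stringValue("user-created").build())));

        // Attach a filter policy so this subscription only receives creation events.
        sns.setSubscriptionAttributes(b -> b.subscriptionArn(subscriptionArn)
                .attributeName("FilterPolicy")
                .attributeValue("{\"event_type\": [\"user-created\"]}"));
    }
}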
Does anyone have a strong preference for either of the options and why would that be?
On a related note, where would you include the cloud configuration for each of the resources? (should the queue resource creation be deployed together with the consumers, or should they live independently from any of the publishers/consumers?)
I think you should go with Option B and keep all events concerning a given "domain" (e.g. "user") in a single topic:
- it keeps your infrastructure simple
- you might introduce services interested in multiple event types (e.g. "create" and "delete"). It's kind of tricky to get the ordering right when consuming these from two topics; imagine a "user-deleted" event arriving before the "user-created" event
- throughput might be an issue, but this really depends on your domain (creating and deleting users doesn't sound like a high-volume case)
- think about changes to the data structures in your topics; introducing changes in two or more topics simultaneously can get complicated pretty fast
Concerning your other question: keep your topic/infrastructure configuration separate from your services. It's an individual piece of infrastructure (like a database) and should be kept separate, especially if you introduce more consumers and producers to your system.
EDIT: This might be an example "setup":
- Repository user-service contains the service/lambda code plus the cloudformation/terraform templates for the service and its topic subscriptions
- Repository sns contains all cloudformation/terraform templates concerning SNS topics
- Repository sqs contains all cloudformation/terraform templates concerning SQS queues
You can think about keeping the SNS & SQS infra code in a single repository (the last two), but I would strongly recommend keeping everything specific to a certain service/lambda in its own repository.
Generally it helps to think about your topics as a "database"; this line of thinking should point you in the right direction for all your questions.
I have to run two instances of the same application that read messages from 'queue-1' and write them back to another queue 'queue-2'.
I need the messages inside the two queues to be ordered by a specific property (a sequence number), which is initially added to every message by the producer. As per the documentation, inside queue-1 the order of messages will be preserved, as messages are sent by a single producer. But because multiple consumers read, process, and send the processed messages to queue-2, the order of messages inside queue-2 might be lost.
So my task is to make sure that messages are delivered to queue-2 in the same order as they were read from queue-1. I have implemented the re-sequencer pattern from Apache Camel to re-order messages inside queue-2. The re-sequencer works fine but results in data-transfer overhead, as the Camel routes run locally.
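A stream-resequencer route of the kind described above might look roughly like this in the Camel Java DSL (the queue names and the seqNum header are assumptions):

import org.apache.camel.builder.RouteBuilder;

// Hypothetical sketch: consume from queue-1, restore ordering by the
// producer-assigned sequence number, then send to queue-2.
public class ResequenceRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("jms:queue:queue-1")
            .resequence(header("seqNum")) // assumed header carrying the sequence number
            .stream().capacity(5000).timeout(2000)
            .to("jms:queue:queue-2");
    }
}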
Thinking about doing this in a better way, I have three questions:
1. Does Artemis inherently support re-ordering of messages inside a queue using a property such as a sequence number?
2. Is it possible to run the routes inside the server? If yes, can you give an example or a link to the documentation?
3. Some Artemis features, such as divert (split), require modifying the broker configuration (the broker.xml file). Is there a way to do this programmatically and dynamically, so that I can decide when to start diverting messages? I know this can be accomplished by using Camel, but I want everything to run in the server.
Does Artemis inherently support re-ordering of messages inside a queue using a property such as a sequence number?
No. Camel is really the best solution here in my opinion.
Is it possible to run the routes inside the server? If yes, can you give an example or a link to the documentation?
You should be able to do the same kind of thing in Artemis as in ActiveMQ 5.x: run a web application with a Camel context inside the broker. The approach is described in the ActiveMQ 5.x documentation.
Some Artemis features such as divert (split) require modifying the broker configuration (the broker.xml file); is there a way to do this programmatically and dynamically so I can decide when to start diverting messages?
You can use the Artemis management methods to create, modify, and delete diverts programmatically (or administratively) at runtime. However, these modifications will be volatile (i.e. they won't survive a broker restart).
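For example, a minimal sketch using the core client's management helper (the broker URL, credentials, and divert parameters are assumptions, and the exact management calls may vary by Artemis version):

import org.apache.activemq.artemis.api.core.client.*;
import org.apache.activemq.artemis.api.core.management.ManagementHelper;
import org.apache.activemq.artemis.api.core.management.ResourceNames;

public class CreateDivertAtRuntime {
    public static void main(String[] args) throws Exception {
        ServerLocator locator = ActiveMQClient.createServerLocator("tcp://localhost:61616");
        try (ClientSessionFactory factory = locator.createSessionFactory();
             ClientSession session = factory.createSession("admin", "admin", // assumed credentials
                     false, true, true, false, 1)) {
            session.start();
            // Send the management operation to the broker's management address.
            ClientRequestor requestor = new ClientRequestor(session, "activemq.management");
            ClientMessage request = session.createMessage(false);
            // Parameters: name, routingName, address, forwardingAddress,
            // exclusive, filterString, transformerClassName
            ManagementHelper.putOperationInvocation(request, ResourceNames.BROKER,
                    "createDivert", "order-divert", "order-divert",
                    "queue-1", "queue-2", true, null, null);
            ClientMessage reply = requestor.request(request);
            System.out.println("Divert created: " + ManagementHelper.hasOperationSucceeded(reply));
        }
    }
}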
I am creating a hosted system where multiple customers can send messages. I am receiving those messages on a JMS queue.
Now, all processing is done in a similar way, and I want my process to poll all incoming queues for messages and handle them. Is there a way in WSO2 ESB to subscribe to multiple queues?
If that is not possible, the workaround would be to create a separate listener process for each queue and have it post the messages to a central processing queue. But that seems a less clean solution (and I think it will scale worse than listening to multiple queues).
Any ideas on this?
If changes to the ActiveMQ server are possible, i.e. if the OP is able to influence the server configuration, something like ActiveMQ diverts could do the trick.
<divert name="abc-divert">
   <address>jms.queue.ABC</address>
   <forwarding-address>jms.queue.theone</forwarding-address>
   <exclusive>true</exclusive>
</divert>
<divert name="xyz-divert">
   <address>jms.queue.xyz</address>
   <forwarding-address>jms.queue.theone</forwarding-address>
   <exclusive>true</exclusive>
</divert>
Basically, multiple diverts converge the messages from multiple queues into a single queue (note that each divert needs its own unique name). This approach has an advantage over reading from each queue and writing to a single queue, as mentioned by the OP, and in my view it would scale well since diverts are a built-in feature.
You can define a sequence with all the required logic in it and then call it from multiple proxy services (each listening to a specific queue). Otherwise you can try something similar to this sample.
I'm working on updating an existing Mule configuration, and the task is to enhance it to route messages to different endpoints depending on some properties of the messages. It would therefore be nice to have some pros and cons on the two options I have at hand:
1. Add properties to the message using the "message-properties-transformer" transformer, which is later used by a "filtering-router" to single out the message and put it on the correct endpoint. This option allows me to use a single queue for all destinations.
2. Create one queue for each destination; instead of adding a property for later routing, I just put the message on the right queue at once. I.e. this option would mean one queue per destination.
Any feedback would be welcome. Is there any "best practices" with regards to this?
I've had a great deal of success using your first approach with a filtering-router. It reduces coupling between your message producers and consumers and forms a valuable abstraction, so any service can blindly drop messages into the generic "outbox".
We've come to depend on Mule for filtering and routing messages so much that we have a dedicated cluster of hardware doing only this. Using Mule I was able to get far greater performance without having to maintain connections to all queues.
The downside will be having to very carefully maintain your message object versions globally, and having to keep a set of transformers on hand to accept and convert between different versions if you plan to upgrade only a portion of your infrastructure.
thanks, matt