I have three different queues in RabbitMQ from which I have to read messages and send them to Elasticsearch, all into the same index. I am confused: is it possible to read from multiple queues in a single config file? I am already reading one queue at a time, but I am getting real-time messages from the different queues and need to process all three of them at the same time.
You can; you need one input section for each queue.
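A minimal Logstash pipeline sketch of that layout; the broker host, queue names, Elasticsearch endpoint, and index name are all placeholders for your own values:

    input {
      rabbitmq {
        host    => "rabbitmq.example.com"   # placeholder broker host
        queue   => "queue_one"
        durable => true
      }
      rabbitmq {
        host    => "rabbitmq.example.com"
        queue   => "queue_two"
        durable => true
      }
      rabbitmq {
        host    => "rabbitmq.example.com"
        queue   => "queue_three"
        durable => true
      }
    }
    output {
      elasticsearch {
        hosts => ["http://localhost:9200"]  # placeholder ES endpoint
        index => "my_shared_index"          # all three queues land in the same index
      }
    }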
We have one topic with one partition due to message-ordering requirements. We have two consumers running on different servers with the same set of configurations, i.e. the same groupId, consumerId, and consumerGroup. I.e.
1 Topic -> 1 Partition -> 2 Consumers
When we deploy the consumers, the same code is deployed on both servers. We noticed that when a message comes in, both consumers consume it rather than only one processing it. The reason for having consumers on two separate servers is that if one server crashes, the other can at least continue processing messages. But it looks like when both are up, both consume messages. The Kafka docs say that if we have more consumers than partitions then some stay idle, but we don't see that happening. Is there anything we are missing on the configuration side apart from consumerId and groupId? Thanks
As Gary Russell said, as long as the two consumer instances have their own consumer groups, they will each consume every event that is written to the topic. Just put them into the same consumer group. You can provide the consumer group id via the group.id property in consumer.properties.
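A minimal sketch with the plain Kafka Java client; the broker address, group id, and topic name are placeholders. Two instances started with the same group.id split the topic's partitions between them, so with a single partition only one instance receives messages while the other stands by:

    import java.time.Duration;
    import java.util.List;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;

    public class SingleGroupConsumer {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092"); // placeholder
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "order-processors");      // same value on BOTH servers
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                      "org.apache.kafka.common.serialization.StringDeserializer");

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                consumer.subscribe(List.of("orders")); // placeholder topic
                while (true) {
                    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                    for (ConsumerRecord<String, String> record : records) {
                        System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                    }
                }
            }
        }
    }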
My application is built with Spring Boot and runs in a cluster (two different instances on OpenShift).
Each instance has one consumer that reads messages from a topic with a replication factor.
I would like to find a mechanism that blocks a message in the replicated topic from being read again if it has already been read by one of the two consumers.
Example:
CONSUMER CLIENT A -- READ MSG_1 --> BROKER_1
- Offset increase
- Commit OK
CONSUMER CLIENT B --> NOT READ MSG_1 --> BROKER_1
-- Correct, because it was already committed
Now BROKER_1 goes down and the new leader is BROKER_2.
How can I block the already-read message on BROKER_2?
Thanks all!
Giuseppe.
Replication factor doesn't control if or how consumers read messages; the partition count does. If the topic has only one partition, then only one consumer instance in the group is able to read messages, and all other instances are "blocked" (idle). And if the message has already been read and committed, then it doesn't matter which broker is the leader, because committed offsets are maintained per partition of the topic, not per replica.
If you have more than one partition and you still want to block consumers from being able to read data, then you'll need to implement some external, coordinated lock, via ZooKeeper for example.
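One way to implement such an external lock is Apache Curator's InterProcessMutex recipe over ZooKeeper; a minimal sketch, where the connect string, lock path, and the consume loop are placeholders (this is one possible coordination approach, not the only one):

    import org.apache.curator.framework.CuratorFramework;
    import org.apache.curator.framework.CuratorFrameworkFactory;
    import org.apache.curator.framework.recipes.locks.InterProcessMutex;
    import org.apache.curator.retry.ExponentialBackoffRetry;

    public class TopicLockExample {
        public static void main(String[] args) throws Exception {
            CuratorFramework client = CuratorFrameworkFactory.newClient(
                    "zk1:2181",                             // placeholder ZooKeeper ensemble
                    new ExponentialBackoffRetry(1000, 3));  // retry policy
            client.start();

            // One lock node per topic; only the instance holding it may consume.
            InterProcessMutex lock = new InterProcessMutex(client, "/locks/my-topic");
            lock.acquire();
            try {
                // poll and process messages here while holding the lock
            } finally {
                lock.release();
                client.close();
            }
        }
    }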
We have an IBM MQ JMS queue and want to distribute the data across multiple consumers for load balancing. So if we write two JMS clients to consume from the same JMS queue, what will happen? Will messages be equally distributed across both consumers, since one consumer will delete a message after it is read? Is there a possibility of data duplication, e.g. the same message being read by both consumers in a race condition?
My comments below are based on a destructive get, not a browse get.
So if we write two JMS clients to consume from the same JMS queue, what will happen?
They will both consume messages.
Will messages be equally distributed across both consumers, since one consumer will delete a message after it is read?
No. The "hot" consumer will be feed the next available message, assuming it is "getting" a message again before the next message arrives.
Is there a possibility of data duplication, e.g. the same message being read by both consumers in a race condition?
Not if you are performing a destructive get (the default).
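A generic javax.jms sketch of the competing-consumer pattern described above; the connection-factory lookup and queue name are placeholders (with IBM MQ you would typically obtain the factory from JNDI or the com.ibm.mq.jms classes). Each receive() is a destructive get: the message is removed from the queue, so the two client instances see disjoint subsets of the messages:

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class CompetingConsumer {
        public static void main(String[] args) throws Exception {
            ConnectionFactory factory = lookupConnectionFactory(); // placeholder, e.g. JNDI lookup
            Connection connection = factory.createConnection();
            try {
                connection.start();
                Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
                Queue queue = session.createQueue("DEV.QUEUE.1"); // placeholder queue name

                MessageConsumer consumer = session.createConsumer(queue);
                while (true) {
                    // Destructive get: the message is removed from the queue,
                    // so the other client instance will never see it.
                    TextMessage msg = (TextMessage) consumer.receive(5000);
                    if (msg == null) break; // no message within the timeout
                    System.out.println("Processed: " + msg.getText());
                }
            } finally {
                connection.close();
            }
        }

        private static ConnectionFactory lookupConnectionFactory() {
            throw new UnsupportedOperationException("environment-specific"); // placeholder
        }
    }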
According to the documentation here: https://github.com/OpenHFT/Chronicle-Engine, one is able to do pub/sub using maps. This allows one to create a construct similar to the topics available in middleware such as Tibco, 29West, or Kafka, and use it as a way of sending events across processes. Is this a recommended usage of Chronicle Map? What kind of latency can I expect if both publisher and subscriber are on the same machine?
My second question is, how can this be extended to send messages across machines? How does this work with enterprise TCP replication?
My requirement is to create thousands of topics and use them to communicate across processes running on different machines (in a LAN). Each of these topics would be written by a single source and read by multiple readers running on the same or different machines. If the source of a particular topic dies, that source's replica would start writing to the topic and listeners would continue to receive messages. These messages need not be stored for replay.
Is this a recommended usage of Chronicle Map?
Yes, you can use Engine to support event notification across machines. However, if you want the lowest latencies, you might need to send a notification via a Queue and keep the latest value in a Map.
What kind of latency can I expect if both publisher and subscriber are on the same machine?
It depends on your use case, especially the size of the data (in the Map case, the number of entries as well). The latency for a Map in Engine is around 30 - 100 µs; the latency for a Queue is around 2 - 5 µs.
My second question is, how can this be extended to send messages across machines?
For this you need our licensed product, but the code is the same.
Each of these topics would be written by a single source and read by multiple readers running on the same or different machines. If the source of a particular topic dies, that source's replica would start writing to the topic and listeners would continue to receive messages.
Most likely, the simplest solution is to have a Map where each topic is a different key. This will send the latest value for that topic to the consumers.
If you need to record every event, a Queue is likely to be a better choice. If you don't need to retain the data for long, you can use a very short file rotation.
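A minimal sketch of the two building blocks mentioned above, using the open-source Chronicle Map and Chronicle Queue APIs directly rather than Engine's pub/sub layer; the sizes, paths, and topic key are placeholder assumptions:

    import java.nio.file.Files;
    import net.openhft.chronicle.map.ChronicleMap;
    import net.openhft.chronicle.queue.ChronicleQueue;
    import net.openhft.chronicle.queue.ExcerptAppender;
    import net.openhft.chronicle.queue.ExcerptTailer;
    import net.openhft.chronicle.queue.impl.single.SingleChronicleQueueBuilder;

    public class LatestValuePlusEvents {
        public static void main(String[] args) throws Exception {
            // Map: holds only the latest value per topic key.
            try (ChronicleMap<String, String> latest = ChronicleMap
                    .of(String.class, String.class)
                    .name("topics")
                    .entries(10_000)          // placeholder: expected number of topics
                    .averageKeySize(32)
                    .averageValueSize(256)
                    .create()) {
                latest.put("topic-1", "value-42");         // publisher side
                System.out.println(latest.get("topic-1")); // subscriber side
            }

            // Queue: records every event, for consumers that need all of them.
            String path = Files.createTempDirectory("events").toString();
            try (ChronicleQueue queue = SingleChronicleQueueBuilder.binary(path).build()) {
                ExcerptAppender appender = queue.acquireAppender();
                appender.writeText("topic-1|value-42");

                ExcerptTailer tailer = queue.createTailer();
                System.out.println(tailer.readText());
            }
        }
    }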
I am trying to implement a Java application which will subscribe to different message types (XMLs) from several other applications via TIBCO EMS. Each of these message types has a specific purpose. I am of the opinion that my application should have multiple queues with multiple subscribers; however, the TIBCO guy is adamant that there should be only one queue to which all of these messages are published, and that I should have a single subscriber, which then contains logic to perform different tasks based on the XML received.
Which approach is better: the one with multiple queues and subscribers, or the one with one queue and one subscriber? Please let me know the reasons for your choice.
Thanks!
-Naveen
In general, if the same application is reading all the messages, it is much cleaner for that application to have a single input queue instead of multiple input queues. With multiple queues, the application needs logic to decide in which order to process the queues, and so on. With one input queue, the messaging system can deal with the order of the messages, whether FIFO or by priority etc., and the application can just read the next message and process it.
Use a unique message header for each type of XML when sending the message, and use message selectors/filters when receiving, so that each message can be routed/delegated to the respective handler based on the header value. This way you will be able to handle the different types of XML messages with a single queue as well.
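A minimal javax.jms sketch of that header-plus-selector pattern; the property name MsgType and the type value 'ORDER' are placeholder conventions, and connection setup is omitted for brevity:

    import javax.jms.MessageConsumer;
    import javax.jms.MessageProducer;
    import javax.jms.Queue;
    import javax.jms.Session;
    import javax.jms.TextMessage;

    public class SelectorRouting {
        // Producer side: stamp each XML with its type in a custom header.
        static void send(Session session, Queue queue, String xml, String type) throws Exception {
            MessageProducer producer = session.createProducer(queue);
            TextMessage msg = session.createTextMessage(xml);
            msg.setStringProperty("MsgType", type); // placeholder header name
            producer.send(msg);
        }

        // Consumer side: one consumer per type, all on the same queue.
        static MessageConsumer orderConsumer(Session session, Queue queue) throws Exception {
            // Only messages whose MsgType header is 'ORDER' are delivered here.
            return session.createConsumer(queue, "MsgType = 'ORDER'");
        }
    }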