How to implement keep-alive in a Hazelcast cluster of verticles - cluster-computing

I have some verticles that belong to the same cluster, and they send a hello message over the event bus (the message is basically the sending verticle's name) to a receiver verticle, which stores the message in a map.
I want to implement a keep-alive mechanism so that if I kill one of the senders, the message it sent to the receiver is deleted from the receiver's map.
I looked at the Hazelcast website but didn't find an easy way to implement that feature.
The expected result is to have the sender's record deleted from the receiver's map when the sender verticle is stopped/killed.

You can register a MembershipListener and listen to the membership changes. This way you can remove the previously added messages of a member when it leaves the cluster.
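A minimal sketch of that approach, assuming Hazelcast 3.x (where Member.getUuid() returns a String) and assuming the receiver keys its map entries by the sending member's UUID; the class and map names are made up for illustration:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.MemberAttributeEvent;
import com.hazelcast.core.MembershipEvent;
import com.hazelcast.core.MembershipListener;

public class ReceiverCleanup {

    // hypothetical receiver state: member UUID -> last hello message from that member
    private final Map<String, String> helloMessages = new ConcurrentHashMap<>();

    // hz could, for example, be obtained from Vert.x's HazelcastClusterManager in a clustered deployment
    public void registerCleanup(HazelcastInstance hz) {
        hz.getCluster().addMembershipListener(new MembershipListener() {
            @Override
            public void memberAdded(MembershipEvent event) {
                // nothing to clean up when a member joins
            }

            @Override
            public void memberRemoved(MembershipEvent event) {
                // drop the record that the departed member had registered
                helloMessages.remove(event.getMember().getUuid());
            }

            @Override
            public void memberAttributeChanged(MemberAttributeEvent event) {
                // not needed for this use case
            }
        });
    }
}
```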

There can be multiple ways to achieve this; using Hazelcast ILocks would be an easier approach. Each member can take a lock (maybe keyed by the verticle name), and when a member leaves the cluster, all the locks acquired by that dead member are automatically released, so your application can check at any time whether a given lock is still held.
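A sketch of the lock-based variant, assuming Hazelcast 3.x, where HazelcastInstance.getLock(name) returns a distributed ILock (in Hazelcast 4.x and later this moved to the CP subsystem's FencedLock); the lock-name prefix is an assumption:

```java
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.ILock;

public class VerticleLiveness {

    // Called by each sender verticle on startup; the lock is held for the member's lifetime.
    public static void markAlive(HazelcastInstance hz, String verticleName) {
        ILock lock = hz.getLock("alive:" + verticleName);
        lock.lock(); // released automatically if the owning member dies
    }

    // Called by the receiver to decide whether a sender's map entry should be kept.
    public static boolean isAlive(HazelcastInstance hz, String verticleName) {
        return hz.getLock("alive:" + verticleName).isLocked();
    }
}
```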

Related

Autoscaling Backend and RabbitMQ Queues

I have an IoT system with around 100k devices publishing their state every second to a backend written in Java/Spring Boot. Until now I was using gRPC, but I see excessive CPU usage, so I was planning to let the devices publish to RabbitMQ and let the backend workers process the messages.
Processing: updating the DB table.
Since data from the same device must be processed sequentially, I was planning to use RabbitMQ's consistent hashing exchange and bind n queues for n workers (sketched below), but I'm not sure how that would work with autoscaling.
I thought of creating auto-delete queues for each backend instance and binding them to the exchange, but I couldn't figure out:
How to rebalance messages already sitting in the queue?
If a connectivity issue occurs, the queue might get deleted, so I need to re-forward those messages to the existing queues.
Are there any algorithms for handling the autoscaling of workers? For instance, if messages pile up, I need to spawn new workers even though CPU/memory usage is low.
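For reference, the consistent-hash routing described above could be set up roughly like this with the Java amqp-client, assuming the rabbitmq_consistent_hash_exchange plugin is enabled; the exchange, queue, and device-id names are made up:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class ConsistentHashSetup {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection connection = factory.newConnection()) {
            Channel channel = connection.createChannel();

            // exchange type provided by the consistent-hash plugin
            channel.exchangeDeclare("device-state", "x-consistent-hash", true);

            // one queue per worker; the binding key is the queue's weight on the hash ring
            for (int i = 1; i <= 3; i++) {
                String queue = "worker-" + i;
                channel.queueDeclare(queue, true, false, false, null);
                channel.queueBind(queue, "device-state", "10");
            }

            // publishers use the device id as the routing key, so updates from the
            // same device always hash to the same queue (and hence the same worker)
            channel.basicPublish("device-state", "device-42", null, "state".getBytes("UTF-8"));
        }
    }
}
```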
I think I'll go with MQTT's shared subscriptions for this case.
https://emqx.medium.com/introduction-to-mqtt-5-0-protocol-shared-subscription-4c23e7e0e3c1
Sharing strategy
Although shared subscriptions allow subscribers to consume messages in a load-balanced manner, the MQTT protocol does not specify what load-balancing strategy the server should use. For reference, EMQ X provides four strategies for users to choose from: random, round_robin, sticky, and hash.
random: randomly select one of all shared subscription sessions to publish messages
round_robin: select in turn according to subscription order
sticky: randomly select a subscription session and continue to use it until the subscription is cancelled or disconnected, then repeat the process
hash: hash the ClientID of the sender and select a subscription session based on the hash result
Hash seems like what I'm looking for.
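A small sketch of a shared subscription with the Eclipse Paho Java client, assuming an EMQ X broker at tcp://localhost:1883; the group name "workers" and the topic filter are made up. Every consumer that subscribes with the same $share/<group>/ prefix joins the same balancing group:

```java
import org.eclipse.paho.client.mqttv3.IMqttDeliveryToken;
import org.eclipse.paho.client.mqttv3.MqttCallback;
import org.eclipse.paho.client.mqttv3.MqttClient;
import org.eclipse.paho.client.mqttv3.MqttException;
import org.eclipse.paho.client.mqttv3.MqttMessage;

public class SharedSubscriber {
    public static void main(String[] args) throws MqttException {
        MqttClient client = new MqttClient("tcp://localhost:1883", "worker-" + System.nanoTime());
        client.setCallback(new MqttCallback() {
            @Override
            public void connectionLost(Throwable cause) {
                // reconnect handling would go here
            }

            @Override
            public void messageArrived(String topic, MqttMessage message) {
                // only one member of the "workers" group receives each message
                System.out.println(topic + " -> " + new String(message.getPayload()));
            }

            @Override
            public void deliveryComplete(IMqttDeliveryToken token) {
            }
        });
        client.connect();
        // shared subscription: group "workers", topic filter "devices/+/state"
        client.subscribe("$share/workers/devices/+/state", 1);
    }
}
```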

Emulate radio communication using a channel or mutex

I need to emulate a radio communication network composed of N nodes, with these properties:
nodes either send then receive data, or receive then send data, but not at the same time.
data sent over-the-air are received by all nodes which are in receive mode at that time.
if two or more nodes send data simultaneously, the data are lost.
there is no time synchronization among nodes.
In Go, if I use a channel to emulate the transmission medium, the data are serialized and only one receiver gets the data, not all of them.
Also, I cannot think of a way to "ruin" the data if two senders try to send at the same time. Whether I use a mutex or not, one of the senders will successfully get its message sent.
Why don't you create a publisher and subscriber module using Golang channels?
Create a centralized module where all your sender and receiver nodes register themselves. When any node sends data, the data goes to that module, which then goes through its list of registered receivers, picks each receiver's channel, and writes the data to it.
You have to create one channel per node and register it with the central pub/sub module. This will solve your problem.

How to use the same RabbitMQ queue in different Java microservices [duplicate]

I have implemented the example from the RabbitMQ website:
RabbitMQ Example
I have expanded it to have an application with a button to send a message.
Now I started two consumers on two different computers.
When I send messages, the first message is sent to computer1, then the second message is sent to computer2, the third to computer1, and so on.
Why is this, and how can I change the behavior to send each message to each consumer?
Why is this
As noted by Yazan, messages are consumed from a single queue in a round-robin manner. The behavior you are seeing is by design, making it easy to scale up the number of consumers for a given queue.
how can I change the behavior to send each message to each consumer?
To have each consumer receive the same message, you need to create a queue for each consumer and deliver the same message to each queue.
The easiest way to do this is to use a fanout exchange. This will send every message to every queue that is bound to the exchange, completely ignoring the routing key.
If you need more control over the routing, you can use a topic or direct exchange and manage the routing keys.
Whatever type of exchange you choose, though, you will need to have a queue per consumer and have each message routed to each queue.
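A rough sketch of the fanout approach with the Java amqp-client, where each consumer declares its own server-named, exclusive, auto-delete queue and binds it to the shared exchange; the exchange name and host are assumptions:

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class FanoutConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        Connection connection = factory.newConnection();
        Channel channel = connection.createChannel();

        // every consumer binds its own queue to the same fanout exchange
        channel.exchangeDeclare("messages", "fanout");

        // server-named, exclusive, auto-delete queue: one per consumer process
        String queueName = channel.queueDeclare().getQueue();
        channel.queueBind(queueName, "messages", ""); // routing key is ignored by fanout

        DeliverCallback deliverCallback = (consumerTag, delivery) ->
                System.out.println("Received: " + new String(delivery.getBody(), "UTF-8"));
        channel.basicConsume(queueName, true, deliverCallback, consumerTag -> { });
    }
}
```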
You can't; it's controlled by the server. Check the Round-robin dispatching section.
The server decides whose turn it is. I'm not sure if there is a set of algorithms you can pick from, but in the end the server controls this (I think the round-robin algorithm is the default),
unless you want to use routing keys and exchanges.
I would see this more as a design question. Ideally, producers should create the exchanges and the consumers create the queues and each consumer can create its own queue and hook it up to an exchange. This makes sure every consumer gets its message with its private queue.
What you're doing is essentially the 'worker queues' model, which is used to distribute tasks among worker nodes. Since each task needs to be performed only once, the message is sent to only one node. If you want to send a message to all the nodes, you need a different model called 'pub-sub', where each message is broadcast to all the subscribers. The following link shows a simple pub-sub tutorial:
https://www.rabbitmq.com/tutorials/tutorial-three-python.html

ActiveMQ converting existing Queue to CompositeQueue

I'll try to explain this the best I can.
As I store the data I receive from my ActiveMQ queue in several distinct locations, I have decided to build a composite queue so I can process the data for each location individually.
The issue I am running into is that I currently have the queue in a production environment. It seems that changing a queue named A into a composite queue also called A, with virtual destinations named B and C, causes me to lose all the data on the existing queue: on start-up it does not forward the previous messages. Currently, I am creating a new composite queue with a different name, say D, which forwards data to B and C. Then I have some clunky code that prevents all connections until I have both a) updated all the producers to send to D and b) pulled the data from A with a consumer and sent it to D with a producer.
It feels rather messy. Is there any way around this? Ideally I would be able to keep the same queue name, have all its current data sent to the composite sub-queues, and have the queue only forward from then on.
From the description given, the desired behavior is not possible, as message routing on the composite queue only applies to messages that are in flight, not to messages that were already stored on the queue before the broker configuration was changed. You need to consume the past messages from the initial queue (A, I guess it is) and send them on to the desired destinations.
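A sketch of draining the old queue with plain JMS and the ActiveMQ client, reusing the queue names A and D from the question; the broker URL and the 2-second idle timeout are assumptions:

```java
import javax.jms.Connection;
import javax.jms.Destination;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class QueueDrainer {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);

        MessageConsumer consumer = session.createConsumer(session.createQueue("A"));
        Destination target = session.createQueue("D"); // the composite queue that fans out to B and C
        MessageProducer producer = session.createProducer(target);

        Message message;
        // move every stored message from A to D; stop when A has been empty for 2 seconds
        while ((message = consumer.receive(2000)) != null) {
            producer.send(message);
            session.commit(); // consume and forward atomically
        }
        connection.close();
    }
}
```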

Why can a QueueSession create only one receiver in JMS?

I can make some senders send messages, but when I create two receivers in one session, the first one works and the second one is blocked. In debug, I see that the queue list size the second receiver received is zero. I found that a session is made for one thread; I don't know whether this problem is related to thread safety.
I use ActiveMQ implementation.
A JMS Session is absolutely single threaded. As such, it can only have one active receiver. You have 2 options:
Use one connection with multiple sessions, each session having a receiver. Connections are thread safe and you can create many sessions from that single connection.
ActiveMQ gives you a number of options regarding multiplexing multiple destinations, so rather than having multiple receivers, you might want to focus on one, but use ActiveMQ's facilities to create virtual destinations that will funnel all the messages you want through the one receiver.
See this question.
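A sketch of the first option with the ActiveMQ JMS client: one shared connection, and a separate session per receiver so that each receiver gets its own delivery thread. The broker URL and queue names are made up (they could just as well be the same queue):

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.Session;

import org.apache.activemq.ActiveMQConnectionFactory;

public class TwoReceivers {
    public static void main(String[] args) throws Exception {
        Connection connection = new ActiveMQConnectionFactory("tcp://localhost:61616").createConnection();
        connection.start();

        // one session (and therefore one delivery thread) per receiver
        Session sessionA = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer receiverA = sessionA.createConsumer(sessionA.createQueue("queue.A"));
        receiverA.setMessageListener(message -> System.out.println("receiver A got: " + message));

        Session sessionB = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer receiverB = sessionB.createConsumer(sessionB.createQueue("queue.B"));
        receiverB.setMessageListener(message -> System.out.println("receiver B got: " + message));

        Thread.sleep(Long.MAX_VALUE); // keep the JVM alive while the listeners run
    }
}
```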
