Buffering messages for dead subscriber with zeromq

I'm using a pub-sub pattern over TCP. When one of my subscribers dies (kill -9, for example) and is restarted with the same IDENTITY, it does not get the previous messages.
What are my options so that when it restarts it gets the messages that were sent while it was down? (I understand 0MQ does not handle that itself.)
run publisher
run sub0 (subscribe to socket)
run sub1 (subscribe to socket)
pkill -9 sub0 (simulate daemon dying)
publisher sends a message
run sub0 again (same ZMQ_IDENTITY)
sub0 does not receive the lost message.

This is entirely the responsibility of your application. Take a look at The Guide... particularly Chapter 5 on advanced pub/sub patterns, and even more specifically Getting an out of band snapshot.
The upshot is that your publishing server actually has two sockets: one for publishing, and one for other system-level communication. Any time it publishes a new message, it also adds that message to a local cache... it never forgets the messages it sends. Any time your subscribing client reconnects to the server, its second socket sends a request to the server for all the messages it missed (or, as in the linked example, the entire current state of the data), which are sent back over that second socket pair. In this way the subscriber is up to date with all messages by the time it starts getting new ones over the normal subscriber channel.
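As a rough illustration (not the Guide's exact code), a two-socket publisher and a catching-up subscriber might look like this in Python with pyzmq; the port numbers, the "last sequence seen" request format, and the END marker are assumptions made for the sketch:

```python
import time
import zmq

ctx = zmq.Context.instance()

def publisher():
    pub = ctx.socket(zmq.PUB)            # normal fan-out channel
    pub.bind("tcp://*:5556")
    catchup = ctx.socket(zmq.ROUTER)     # second, system-level socket
    catchup.bind("tcp://*:5557")

    cache = []                           # never forget what was sent
    poller = zmq.Poller()
    poller.register(catchup, zmq.POLLIN)

    seq = 0
    while True:
        seq += 1
        msg = [b"events", b"message %d" % seq]
        pub.send_multipart(msg)
        cache.append(msg)

        # Answer any "I have seen up to N, send me the rest" request.
        while poller.poll(0):
            ident, last_seen = catchup.recv_multipart()
            for old in cache[int(last_seen):]:
                catchup.send_multipart([ident] + old)
            catchup.send_multipart([ident, b"events", b"END"])
        time.sleep(1)

def subscriber(last_seen=0):
    # Ask the side channel for everything missed since last_seen...
    req = ctx.socket(zmq.DEALER)
    req.connect("tcp://localhost:5557")
    req.send(str(last_seen).encode())
    while True:
        topic, body = req.recv_multipart()
        if body == b"END":
            break
        print("replayed:", body)

    # ...then follow the live feed as usual.
    sub = ctx.socket(zmq.SUB)
    sub.connect("tcp://localhost:5556")
    sub.setsockopt(zmq.SUBSCRIBE, b"events")
    while True:
        topic, body = sub.recv_multipart()
        print("live:", body)
```

A real implementation also has to cover the window between the end of the replay and the SUB connection becoming live; the Guide's clone pattern handles that by subscribing first and de-duplicating by sequence number.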

Related

Websockets: One handler to rule them all? Best case w/ backups?

I'm working on an iOS app that does a few things, some of which would benefit from real-time data streams (like chat).
Right now I have a few handlers on my server: one of them gets all the threads a user has access to, another can get messages (offset, all, time-ranged, etc.) for a thread. When a user sends a message to a thread, I get all the listeners for the thread and send them a push notification. This works, but I was reading through the APNS docs and it says "don't do more than 3/hr", and I'm definitely doing more than 3/hr.
So I'm thinking I should move to websockets. I know how to synchronize pub/subs across machines via Redis, so I'm not worried about that; I'm more stuck on the following:
If I start to bring websockets into the project, should I just pump all the information App <-> Server through the websocket? Create a thread -> Don't POST, just send a message along the socket. Get a message -> Don't poll or send notification, just send a message along the socket. Literally anything -> Don't make a request, just send a message along the socket.
Right now I'm leaning towards loading initial state and bulk data via normal HTTP URLs (e.g. create a thread, load the last 20 messages for thread XYZ), but for data that needs to be pushed and received in real time (e.g. chat message send/recv) doing that via a websocket.
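For what it's worth, the hybrid approach being described could be sketched like this (shown in Python rather than on iOS just to keep it short; the URLs, JSON shapes, and endpoint names are invented for illustration):

```python
import asyncio
import json

import requests
import websockets

API = "https://example.com/api"        # hypothetical REST base URL
WS = "wss://example.com/ws/chat"       # hypothetical websocket endpoint

async def chat_client(thread_id: str):
    # Bulk/initial data over plain HTTP: the last 20 messages of the thread.
    history = requests.get(f"{API}/threads/{thread_id}/messages",
                           params={"limit": 20}).json()
    for msg in history:
        print("history:", msg)

    # Real-time traffic goes over the websocket only.
    async with websockets.connect(WS) as ws:
        await ws.send(json.dumps({"op": "subscribe", "thread": thread_id}))
        async for raw in ws:
            print("live:", json.loads(raw))

asyncio.run(chat_client("XYZ"))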

How to only send last message to server using NetMQ/ZeroMQ?

I want to send data to a server from a client. Only the last message is important to the server. If the server comes back up after a failure, I only want it to get the last message from the client.
While the server is down I want the client to keep processing and sending messages, or at least put them in a queue (with a length of one message).
I'm trying to use NetMQ/ZeroMQ for this. How can it be done?
Thanks!
First, use pub/sub where the client is the publisher. With pub/sub you only get messages while you are online; if the subscriber (the server in your case) was down, it misses all of those messages (like a radio).
ZeroMQ also has a feature called conflate (NetMQ doesn't have it yet; you might want to port it). Take a look at the following question:
ZeroMQ: I want Publish–Subscribe to drop older messages in favor of newer ones
Here is the description of conflate from the ZeroMQ documentation:
ZMQ_CONFLATE: Keep only last message. If set, a socket shall keep only one message in its inbound/outbound queue, this message being the last message received/the last message to be sent. Ignores 'ZMQ_RCVHWM' and 'ZMQ_SNDHWM' options. Does not support multi-part messages, in particular, only one part of it is kept in the socket internal queue.
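To show the behaviour the option gives you, here is a quick sketch with pyzmq (the original question is about NetMQ/C#, which lacked the option at the time; the address and the single-frame message format are assumptions):

```python
import time
import zmq

ctx = zmq.Context.instance()

# Client: publishes its latest state repeatedly.
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5558")

# Server: a subscriber that only ever keeps the most recent message.
sub = ctx.socket(zmq.SUB)
sub.setsockopt(zmq.CONFLATE, 1)        # set before connecting
sub.setsockopt(zmq.SUBSCRIBE, b"")     # conflate does not support multipart,
sub.connect("tcp://localhost:5558")    # so publish single-frame messages only

for i in range(10):
    pub.send(b"state %d" % i)
    time.sleep(0.1)

time.sleep(0.5)
print(sub.recv())                      # only the most recent state is left
```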

RabbitMQ Consumer Disconnect Event

Is there any way we can know when a consumer disconnects from a queue or when a queue is deleted?
The requirement is as follows:
I'm building a system in which multiple clients can subscribe to certain events from the system. Each client creates its own queue and registers itself with the system using some sort of authentication. The system, as events are generated, filters them and forwards them to the clients that are eligible for them.
I have implemented a POC for most of it and it works well. An issue that I'm not able to fix is that if a client just disconnects from the queue (due to program termination or the like), the registration still exists and the system keeps trying to push messages to that client.
So we would like to be notified when a client disconnects or a queue gets deleted, so that we can remove that client's registration data and no longer push messages to it.
Let your publisher use Confirms (aka Publisher Acknowledgements) and make each client queue exclusive and transient, so that only one client at a time consumes from a given queue and the queue is deleted when that client disconnects.
If you publish a message that is routed to only one queue and that queue is gone (assuming you use publisher confirms and publish the message with the mandatory flag set), the publisher will be notified that the message cannot be routed and the message will be returned to it, so you can stop publishing messages for that client.
For details, see the "How Confirms Work" section of the RabbitMQ blog post "Introducing Publisher Confirms" and the official Confirms (aka Publisher Acknowledgements) docs.
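A minimal pika (Python) sketch of that suggestion; the exchange, queue, and routing key names are made up, and in a real system the client and the publisher would of course be separate connections:

```python
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
ch = conn.channel()
ch.exchange_declare(exchange="events", exchange_type="direct")

# Client side: an exclusive queue is tied to this connection and is deleted
# automatically as soon as the client disconnects.
ch.queue_declare(queue="client-42", exclusive=True)
ch.queue_bind(queue="client-42", exchange="events", routing_key="client-42")

# Publisher side: confirms plus mandatory publishing. If the client's queue is
# gone, the broker returns the message and pika raises UnroutableError, which
# is the cue to drop that client's registration.
ch.confirm_delivery()
try:
    ch.basic_publish(exchange="events",
                     routing_key="client-42",
                     body=b"event payload",
                     mandatory=True)
except pika.exceptions.UnroutableError:
    print("client-42 is gone; removing its registration")
```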

Correct socket types for a message catchup mechanism?

I have a single publisher application (PUB) which has N subscribers (SUB).
These subscribers need to be able to catch up if they are restarted, or if they go down and miss messages.
We have implemented a simple event store that the publisher writes to.
We have implemented a CatchupService which can query the event store and send missed messages to the subscriber.
We have implemented in the subscriber a PUSH socket which sends a request for missed messages.
The subscriber also has a PULL socket which listens for missed messages on a separate port.
The subscriber will:
Detect a gap
Send a request for missed messages to our CatchupService; the request also contains the address to send the results to.
The catchup service has a PULL socket on which it listens for requests
When the CatchupService receives a request it starts a worker thread which:
Gets the missed messages
Opens a PUSH socket connecting to the subscriber's PULL socket
Sends the missed messages to the subscriber.
This seems to work quite well; however, we are unsure whether we are using the right socket types for this sort of application. Are these correct, or should we be using a different pattern?
Sounds okay. Otherwise, 0MQ is able to recover from message loss when peers go offline for a short time. Take a look at the socket options, specifically ZMQ_SNDHWM.
I don't know just how guaranteed the 0MQ recovery mechanisms are, so maybe you're best to stay with what you've got, but it is something to be aware of.
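For reference, the request/replay flow described in the question could be sketched like this in pyzmq (ports, message framing, and the in-memory event store stand-in are assumptions, not the poster's actual code):

```python
import threading
import zmq

ctx = zmq.Context.instance()

EVENT_STORE = [b"msg-%d" % i for i in range(100)]   # stand-in for the real store

def catchup_service():
    requests = ctx.socket(zmq.PULL)                 # listens for catch-up requests
    requests.bind("tcp://*:6000")
    while True:
        last_seq, reply_addr = requests.recv_multipart()
        threading.Thread(target=replay_worker,
                         args=(int(last_seq), reply_addr.decode())).start()

def replay_worker(last_seq, reply_addr):
    out = ctx.socket(zmq.PUSH)                      # connects back to the subscriber
    out.connect(reply_addr)
    for msg in EVENT_STORE[last_seq + 1:]:          # the messages it missed
        out.send(msg)
    out.send(b"CAUGHT-UP")
    out.close()

def subscriber(last_seq_seen):
    # PULL socket on which missed messages arrive, on its own port.
    missed = ctx.socket(zmq.PULL)
    port = missed.bind_to_random_port("tcp://*")

    # PUSH socket used to ask the CatchupService for a replay.
    ask = ctx.socket(zmq.PUSH)
    ask.connect("tcp://localhost:6000")
    ask.send_multipart([str(last_seq_seen).encode(),
                        b"tcp://localhost:%d" % port])

    while True:
        msg = missed.recv()
        if msg == b"CAUGHT-UP":
            break
        print("replayed:", msg)
```

The gap detection itself (comparing sequence numbers on the live SUB feed) is left out; it is what decides when subscriber() asks for a replay.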

Using polling with pub/sub + req/rep in zeromq

I have been working with different patterns in zeromq in my project, and right now I am using req/rep (later I will shift to dealer/router) and pub/sub. The client sends messages to the server, and the server publishes this information to other clients who have subscribed.
To use multiple sockets I followed the suggestions in this thread:
Combining pub/sub with req/rep in zeromq, and used zmq_poll. My server polls on the req socket and the pub socket.
While writing the code and reading the above post, I guessed that my pub socket would never get polled in, and that's what I am observing now when I run the program. Only my request socket is polled in, and publishing is not happening at all.
If I don't use polling it works fine, i.e. as soon as the server gets the message I publish it.
So I am unclear on how polling is useful in this pattern and how I can use it.
You probably don't need to poll the pub socket. You certainly don't need to poll it for input (POLLIN), because that can never be triggered (PUB sockets are send-only).
The polling pattern might be useful in the case where you want to poll for "ready to send" on the req and the pub socket, allowing you to multiplex those channels. This will be particularly useful if/when you move to using a dealer/router.
The reason is that replacing the REQ with a DEALER, for example, allows you to send multiple messages before receiving the responses. Polling for both inbound and outbound messages will allow you to take maximum advantage of that.
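As a concrete illustration of the answer (not the asker's actual code), a server loop along these lines only polls the reply socket for input and simply sends on the PUB socket when there is something to publish; the port numbers are arbitrary:

```python
import zmq

ctx = zmq.Context.instance()

rep = ctx.socket(zmq.REP)      # clients send requests here
rep.bind("tcp://*:5560")

pub = ctx.socket(zmq.PUB)      # fan-out of the received information
pub.bind("tcp://*:5561")

poller = zmq.Poller()
poller.register(rep, zmq.POLLIN)          # input can only arrive on the reply socket
# poller.register(pub, zmq.POLLOUT)       # only worth doing once you need to
                                          # multiplex sends, e.g. with DEALER/ROUTER

while True:
    events = dict(poller.poll())
    if rep in events and events[rep] & zmq.POLLIN:
        msg = rep.recv()
        rep.send(b"ack")                  # complete the req/rep exchange
        pub.send(b"update " + msg)        # then publish to the subscribers
```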
