I have a regular cloud server set up, I have a mobile app talking to the server via HTTP requests. I also have a Wifi device that I need to send messages and I want to do that over MQTT. When some change happens on the mobile app, I want the cloud server to publish a topic via MQTT so that the wifi device can receive the message. Can a broker also be a client? Am I understanding it wrong?
I'm going to attempt an answer based on my understanding; sorry if I misunderstood your question.
The way I understand it, you will have three/four pieces of software:
HTTP Server / MQTT Broker (these two services could run in the same application or in separate ones)
Mobile application (communicates over HTTP)
Wifi Device (communicates using MQTT protocol)
Scenario:
The Wifi device will open a connection to the MQTT broker and subscribe to a well-defined topic. You can use a subscription with a QoS of 1 if you cannot afford to lose messages. Any messages published before the subscription is added will not be received by your client. It might also be useful to open the MQTT connection with a non-clean session if your Wifi connection is unstable (again, if you don't want to lose any messages).
After a specific event, the mobile application will send information to the HTTP server over HTTP.
Upon receiving that information, the HTTP server will publish an MQTT message to the MQTT broker on the predefined topic (a topic that matches the Wifi device's subscription).
The MQTT broker will relay the message from the HTTP Server to the Wifi Device (and any other MQTT clients with a matching subscription).
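To make the flow concrete, here is a minimal sketch using the Eclipse Paho Java client. The broker address, client IDs and topic name are placeholders I made up; adapt them to your setup.

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttConnectOptions;
    import org.eclipse.paho.client.mqttv3.MqttException;
    import org.eclipse.paho.client.mqttv3.MqttMessage;

    public class MqttFlowSketch {

        // Wifi device side: non-clean (durable) session plus a QoS 1 subscription,
        // so messages published while the device is offline are queued by the broker.
        static void wifiDevice() throws MqttException {
            MqttClient device = new MqttClient("tcp://broker.example.com:1883", "wifi-device-1");
            MqttConnectOptions opts = new MqttConnectOptions();
            opts.setCleanSession(false);
            device.connect(opts);
            device.subscribe("devices/wifi-device-1/commands", 1,
                    (topic, msg) -> System.out.println("received: " + new String(msg.getPayload())));
        }

        // HTTP server side: after handling the mobile app's HTTP request,
        // publish the change to the predefined topic with QoS 1.
        static void httpServer() throws MqttException {
            MqttClient publisher = new MqttClient("tcp://broker.example.com:1883", "http-server");
            publisher.connect();
            MqttMessage message = new MqttMessage("light=on".getBytes());
            message.setQos(1);
            publisher.publish("devices/wifi-device-1/commands", message);
        }
    }

Note that the HTTP server is just another MQTT client of the broker here, which also ties into your last question.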
I hope this clarifies things; let me know if anything is unclear.
"Can a broker also be a client?" Not really. Although I'm certain some brokers will publish messages to special topics based on internal events (for example, $SYS status topics), a broker only acts as a broker: it receives messages from publishers and forwards them to every client that has shown interest in them through a subscription (a message may simply be dropped by the broker if no subscriber is interested in it).
Related
The corporate environment I am working in accepts the use of HTTP(S)-based request/response patterns, which is fine for GraphQL Query and Mutation, but it has issues with the WebSockets needed for GraphQL Subscription and would prefer that subscriptions be routed via IBM MQ.
Does anyone have any experience with this? I am thinking of using Apollo Server to serve up the GraphQL interface. Perhaps there is a front-end subscription solution that can be plugged in using IBM MQ? The back end data sources are Oracle databases.
Message queues are usually used to communicate between services, while WebSockets are how browsers can communicate with a server over a persistent connection. This allows the server to push data to the client when a new event for a subscription arrives (classically, browsers only supported "pull" and could only receive data when they asked for it). Browsers don't implement the MQ protocols you would need to subscribe to the MQ directly. I am not an expert on MQs, but what is usually done is to run a subscription server that connects to the client via WebSocket. The subscription service then subscribes to the message queue itself and notifies the relevant clients about their subscribed events. You can easily scale the subscription servers horizontally when you need additional resources.
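As a rough, non-Apollo-specific illustration of that architecture, the sketch below shows a subscription service in Java that listens on a JMS topic (IBM MQ exposes a JMS API on the Java side) and pushes every event to the currently connected WebSocket clients. The connection factory, topic name and endpoint path are placeholders, and a real implementation would map MQ messages to individual GraphQL subscriptions instead of broadcasting.

    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    import javax.jms.Connection;
    import javax.jms.ConnectionFactory;
    import javax.jms.MessageConsumer;
    import javax.jms.Session;
    import javax.jms.TextMessage;
    import javax.jms.Topic;
    import javax.websocket.OnClose;
    import javax.websocket.OnOpen;
    import javax.websocket.server.ServerEndpoint;

    @ServerEndpoint("/graphql-subscriptions")   // hypothetical endpoint path
    public class SubscriptionBridge {

        private static final Set<javax.websocket.Session> CLIENTS = ConcurrentHashMap.newKeySet();

        @OnOpen
        public void onOpen(javax.websocket.Session session) { CLIENTS.add(session); }

        @OnClose
        public void onClose(javax.websocket.Session session) { CLIENTS.remove(session); }

        // Call once at startup; 'factory' would be an IBM MQ JMS connection factory.
        public static void startListening(ConnectionFactory factory) throws Exception {
            Connection connection = factory.createConnection();
            Session jmsSession = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Topic topic = jmsSession.createTopic("EVENTS.TOPIC");   // placeholder topic name
            MessageConsumer consumer = jmsSession.createConsumer(topic);
            consumer.setMessageListener(message -> {
                try {
                    String payload = ((TextMessage) message).getText();
                    // Fan the MQ event out to every connected browser.
                    for (javax.websocket.Session client : CLIENTS) {
                        client.getAsyncRemote().sendText(payload);
                    }
                } catch (Exception e) {
                    e.printStackTrace();
                }
            });
            connection.start();
        }
    }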
I'm working on a project to develop a real-time mobile messaging application that needs advanced message filtering based on message content and the user's balance, meaning that if the user has run out of balance or is sending content that violates the policy, the messages have to be blocked.
For this reason I need to implement some load-balancing solution that scans published messages and can also determine whether a message should be blocked based on the rules above; hence I can't implement a basic proxy, as I need special rules applied to each message.
The difficult part:
The mobile app needs to receive subscription messages (and the connection acknowledgement) preferably without passing through the load balancer (see my next point).
The problem is that the only way I could forward subscription messages to the mobile app would be by handling connections and subscriptions in the load balancer, which is disastrous. I need the connections to be transparent and the load balancer stateless.
How can I accomplish this? (if it's of any help my current design involves Java component with spring boot for load balancing and VerneMQ as the message broker)
Try:
Mobile App --> MQTT Broker --> Message Scan / Block Algorithm --> MQTT Broker --> Subscriber
Your mobile app should also have the intelligence to stop sending messages once the user runs out of balance after the first message.
So the MQTT broker should not send messages to its subscribers directly at its own layer; it should only send out messages after they have been processed.
I'm not sure anyone has a ready-made solution for this flow, but it is doable.
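If you go that way, the scan/block component is itself just another MQTT client sitting between an "inbound" and an "outbound" topic tree. Below is a minimal sketch with the Paho Java client; the broker address, topic layout and the hasBalance/violatesPolicy helpers are assumptions standing in for your real rules.

    import org.eclipse.paho.client.mqttv3.MqttClient;
    import org.eclipse.paho.client.mqttv3.MqttException;

    public class MessageFilterSketch {

        public static void main(String[] args) throws MqttException {
            MqttClient filter = new MqttClient("tcp://vernemq.example.com:1883", "message-filter");
            filter.connect();

            // Mobile apps publish to inbound/<userId>/...; subscribers listen on outbound/<userId>/...
            filter.subscribe("inbound/#", 1, (topic, msg) -> {
                String userId = topic.split("/")[1];
                String body = new String(msg.getPayload());

                if (hasBalance(userId) && !violatesPolicy(body)) {
                    String outTopic = topic.replaceFirst("^inbound/", "outbound/");
                    filter.publish(outTopic, msg.getPayload(), 1, false);
                }
                // Otherwise the message is dropped (or publish a rejection notice instead).
            });
        }

        static boolean hasBalance(String userId) { return true; }      // placeholder for balance check
        static boolean violatesPolicy(String body) { return false; }   // placeholder for content policy
    }

The mobile apps keep talking to VerneMQ directly for their subscriptions; only the publish path goes through the extra hop, so no connection state has to live in a load balancer.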
I've got the following Camel route, which listens for messages on an ActiveMQ topic and immediately sends them to all connected WebSocket clients. This is working fine, but the connection to the topic is made as soon as the route builder is initialised.
from("activemq:topic:mytopic").routeId("routeid").to("websocket://test?sendToAll=true");
What I need is to only connect to the topic when one or more clients are connected to the web socket. Once there are no more connections I want to stop listening on the topic. Is this possible?
As far as I know, there is no built-in way to do this. The only way it can be achieved is to override the Jetty WebSocket code; once you do that, you get the flexibility to write your own custom code in the WebSocket open and close handlers.
Maintain a list of all WebSocket clients, adding a client on open and removing it on close, so you always know how many are connected. Alternatively, just keep a counter that is incremented on open and decremented on close.
Once all WebSocket clients have disconnected, suspend the route so that your messages stay in the topic or queue.
When a client connects to the WebSocket again, resume the route so that the messages reach the connected client.
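A rough sketch of the counter idea, assuming you can call into it from wherever the WebSocket open and close events end up being exposed. The route id matches the one in your route; suspendRoute/resumeRoute are the Camel 2.x CamelContext methods (in Camel 3 they moved to getRouteController()).

    import java.util.concurrent.atomic.AtomicInteger;

    import org.apache.camel.CamelContext;

    public class TopicConnectionGate {

        private final CamelContext camelContext;
        private final AtomicInteger connectedClients = new AtomicInteger(0);

        public TopicConnectionGate(CamelContext camelContext) {
            this.camelContext = camelContext;
        }

        // Call this from your overridden WebSocket open handler.
        public void onWebSocketOpen() throws Exception {
            if (connectedClients.incrementAndGet() == 1) {
                camelContext.resumeRoute("routeid");   // first client: start consuming the topic
            }
        }

        // Call this from your overridden WebSocket close handler.
        public void onWebSocketClose() throws Exception {
            if (connectedClients.decrementAndGet() == 0) {
                camelContext.suspendRoute("routeid");  // last client gone: stop consuming
            }
        }
    }

If you also configure the route not to auto-start, the very first connection would need startRoute("routeid") instead of resumeRoute.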
I'm new to ZeroMQ and trying to figure out a design issue. My scenario is that I have one or more clients sending requests to a single server. The server will process the requests, do some stuff, and send a reply to the client. There are two conditions:
The replies must go to the clients that sent the request.
If the client disconnects, the server should queue messages for a period of time so that if the client reconnects, it can receive the messages it missed.
I am having a difficult time figuring out the simplest way to implement this.
Things I've tried:
PUB/SUB - I could tag replies with topics to ensure only the subscribers that sent their request (with their topic as their identifier) would receive the correct reply. This takes care of the routing issue, but since the publisher is unaware of the subscribers, it knows nothing about clients that disconnect.
PUSH/PULL - Seems to be able to handle the message queuing issue, but looks like it won't support my plan of having messages sent to specific clients (based on their ID, for example).
ROUTER/DEALER - Design seemed like the solution to both, but all of the examples seem pretty complex.
My thinking right now is to continue with PUB/SUB, implement some sort of heartbeat on the client end (allowing the server to detect the client's presence), and have the server stop sending messages tagged with a client's topic once that client no longer sends a heartbeat. But that seems sub-optimal and would also involve another socket.
Are there any ideas or suggestions on any other ways I might go about implementing this? Any info would be greatly appreciated. I'm working in Python but any language is fine.
To propose the best solution, more data about your application's requirements would be needed. I have done a little research on your conditions and combined it with my experience with ZMQ; here are two possibilities:
1) PUSH/PULL pattern in both directions: a bigger impact on scalability, but messages from the server will be cached.
The server has one PULL socket to register each client and receive all messages from clients. Each message should carry a client ID so the server knows where to send the response.
For each client, the server creates a PUSH socket to send responses; the socket configuration is sent in the registration message. You can also use a REQ/REP pair to register clients (and assign the socket/port numbers).
Each client has its own PULL socket, whose configuration was sent to the server in the registration message.
This means that a server with three clients requires (example port numbers in brackets):
server: 1 x PULL[5555] socket, 3 x PUSH[5560,5561,5562] sockets (+ optionally 1 x REP[5556] socket for registrations, but I think it depends on how you prepare the client identity)
client: 1 x PUSH[5555] socket, 1 x PULL[5560|5561|5562] socket (one port per client) (+ optionally 1 x REQ[5556])
You have to connect the server to multiple client sockets to send responses, but if a client disconnects its messages will not be lost: the client will get its own messages when it reconnects to its PULL socket. The disadvantage is the need to create several PUSH sockets on the server side (one per client).
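A bare-bones server-side sketch of option 1, using the JeroMQ binding (same example ports as above). The single hard-coded PUSH socket and the "clientId|payload" message convention are assumptions; a real server would create one PUSH socket per registered client and keep them in a map.

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class PushPullServerSketch {

        public static void main(String[] args) {
            try (ZContext ctx = new ZContext()) {
                // One PULL socket for registrations and requests from all clients.
                ZMQ.Socket requests = ctx.createSocket(SocketType.PULL);
                requests.bind("tcp://*:5555");

                // One PUSH socket per registered client (only one shown here).
                ZMQ.Socket toClient1 = ctx.createSocket(SocketType.PUSH);
                toClient1.bind("tcp://*:5560");

                while (!Thread.currentThread().isInterrupted()) {
                    String[] parts = requests.recvStr(0).split("\\|", 2);   // "clientId|payload"
                    String clientId = parts[0];
                    String payload = parts[1];

                    // Look up the PUSH socket for clientId; responses queue here
                    // until that client reconnects its PULL socket.
                    toClient1.send("reply to " + clientId + ": " + payload, 0);
                }
            }
        }
    }

The client side mirrors this: one PUSH connected to port 5555 for requests and one PULL connected to its assigned port (5560 here) for responses.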
2) PUB/SUB + PUSH/PULL or REQ/REP: a static socket configuration on the server side (only two sockets), but the server has to provide some mechanism to retransmit or cache messages.
The server creates a PUB socket and a PULL or REP socket. A client registers its identity via the PULL or REP socket, and the server then publishes all messages for that client using its identity as the filter. The server uses the monitor() function on the PUB socket to count connected and disconnected clients (events: 'accept' and 'disconnect'). After a 'disconnect' event, the server publishes a message telling all clients to register again; for clients that do not re-register, the server stops publishing messages.
The client creates a SUB socket plus a PUSH or REQ socket to register and send requests.
This solution probably requires some cache on the server side. The client could confirm each message after receiving it from the SUB socket; how elaborate this gets depends on your requirements. If you just want to know that a client lost messages, the client could send the timestamp of the last message it received from the server during registration. If you need a guarantee that clients receive all messages, you need some cache implementation, perhaps another process that subscribes to all messages and deletes each one once a client confirms it.
In this solution, a server with three clients requires (example port numbers in brackets):
server: 1 x PUB[5555] socket, 1 x REP or PULL[5560] socket, plus monitoring of the PUB socket
client: 1 x SUB[5555] socket with its own identity as the filter, 1 x REQ or PUSH[5560] socket
You can read about monitoring here: https://github.com/JustinTulloss/zeromq.node#monitoring (a Node.js implementation, but Python will be similar).
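The identity-as-filter part of option 2 looks roughly like this in JeroMQ; registration handling and the monitor()-based bookkeeping are left out, and the client ID and ports are just the example values from above.

    import org.zeromq.SocketType;
    import org.zeromq.ZContext;
    import org.zeromq.ZMQ;

    public class PubSubSketch {

        // Server side: one PUB socket for everyone, one PULL socket for registrations.
        static void server() {
            try (ZContext ctx = new ZContext()) {
                ZMQ.Socket pub = ctx.createSocket(SocketType.PUB);
                pub.bind("tcp://*:5555");
                ZMQ.Socket registrations = ctx.createSocket(SocketType.PULL);
                registrations.bind("tcp://*:5560");

                String clientId = registrations.recvStr(0);       // e.g. "client-42"
                // The client identity is used as the topic prefix, i.e. the SUB filter.
                pub.send(clientId + " your-response-payload", 0);
            }
        }

        // Client side: SUB filtered on its own identity, PUSH for registration and requests.
        static void client() {
            try (ZContext ctx = new ZContext()) {
                ZMQ.Socket sub = ctx.createSocket(SocketType.SUB);
                sub.connect("tcp://localhost:5555");
                sub.subscribe("client-42".getBytes());            // only messages addressed to me

                ZMQ.Socket push = ctx.createSocket(SocketType.PUSH);
                push.connect("tcp://localhost:5560");
                push.send("client-42", 0);                        // register

                System.out.println(sub.recvStr(0));               // "client-42 your-response-payload"
            }
        }
    }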
I thought about other patterns, but I am not sure that ROUTER/DEALER or REQ/REP would cover your requirements. You should read more about the patterns, because each one is better suited to certain solutions. Look here:
official ZMQ guide (a lot of examples and pictures)
easy ROUTER/DEALER example: http://blog.scottlogic.com/2015/03/20/ZeroMQ-Quick-Intro.html
I have a system that constantly sends messages from a Java back end to a web front end. I use Openfire with the XMPP protocol as the transport. But XMPP is just a transport protocol; it does not guarantee delivery when the Internet connection is down. So first I decided to switch to a lighter transport protocol, WebSocket. But again, WebSocket is just a transport protocol. Is there any production-ready, free message-delivery framework based on WebSockets that supports guaranteed message delivery? You would just send a message with a clientId and the framework would do the rest, even if the user is offline or the Internet connection is down; I mean the framework would take care of delivery.
I would shift the responsibility of assuring message delivery away from your transport choice and onto a message queueing system like RabbitMQ or similar. Here is a blurb from their feature set:
"Queues can be mirrored across several machines in a cluster, ensuring that even in the event of hardware failure your messages are safe"