With GraphQL, is it possible to replace the websocket used for subscription with a message-based approach (e.g. MQ)?

The corporate environment I am working in accepts HTTP(S)-based request/response patterns, which is fine for GraphQL Query and Mutation, but it has issues with the WebSockets needed for GraphQL Subscription and would prefer that subscriptions be routed via IBM MQ.
Does anyone have any experience with this? I am thinking of using Apollo Server to serve up the GraphQL interface. Perhaps there is a front-end subscription solution that can be plugged in using IBM MQ? The back end data sources are Oracle databases.

Message queues are usually used to communicate between services, while WebSockets are how browsers communicate with a server over a persistent connection. This lets the server push data to the client when a new event for a subscription arrives (classically, browsers only supported "pull" and could only receive data when they asked for it). Browsers don't implement the MQ protocols you would need to subscribe to the queue directly. I am not an MQ expert, but the usual pattern is a subscription server that connects to the client via WebSocket; that server itself subscribes to the message queue and notifies the relevant clients about their subscribed events. You can easily scale the subscription servers horizontally when you need additional resources.
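A minimal sketch of that pattern, assuming Apollo Server with the graphql-subscriptions package: browsers still subscribe over the WebSocket transport, while a queue consumer republishes each message into the in-process PubSub. The MQ client shape (`onMessage`, the queue name) is purely illustrative, not IBM MQ's actual API.

```typescript
// Hedged sketch: bridge messages arriving from a queue into GraphQL
// subscriptions that are still served to browsers over WebSocket.
// The MQ client shape below is an assumption, not IBM MQ's API.
import { PubSub } from "graphql-subscriptions";

const pubsub = new PubSub();

// Resolvers: browsers subscribe over the normal WebSocket transport.
export const resolvers = {
  Subscription: {
    orderUpdated: {
      subscribe: () => pubsub.asyncIterator(["ORDER_UPDATED"]),
    },
  },
};

// The subscription server itself consumes from the queue and republishes each
// message to the in-process PubSub, which fans it out to subscribed clients.
export function bridgeQueueToSubscriptions(mq: {
  onMessage: (queue: string, handler: (body: string) => void) => void;
}) {
  mq.onMessage("ORDERS.UPDATES", (body) => {
    pubsub.publish("ORDER_UPDATED", { orderUpdated: JSON.parse(body) });
  });
}
```

Scaling the subscription servers horizontally would mean swapping the in-memory PubSub for a shared implementation, but the bridge shape stays the same.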

Related

How to use Pusher API for bi-directional communication?

Looking at the Pusher server and its client/server APIs, I am having trouble figuring out how Pusher will help me enable bi-directional communication between devices/apps.
I have multiple small devices/apps in the field that should report their status to a server or to another client that acts as a dashboard to browse all those devices, monitor their status, etc.
In my understanding this can be done using traditional WebSockets and a cloud server in between which manages all connections between those clients - something I thought Pusher would be.
But after reading through the docs I can't really see a concept of bi-directional data communication. Here's why:
To push data to the clients I have to use one of Pusher's server libraries
To receive that data I have to use one of Pusher's client libraries
This concept however does not fit into what I need. I want to:
Broadcast to Clients.
Clients can send Data directly to Clients (Server acting as Gateway / Routing).
Clients can send Data to Server.
Server can send / response to unique Client.
Pusher advertises "Bi-Directional Communication", which I currently cannot see. So how do I implement that advertised bi-directional communication?
Pusher does PubSub only. Using this, you can simulate bi-directional communication: each side of the conversation needs a channel dedicated to that conversation, and the other side publishes to it.
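A rough sketch of that simulation, assuming the official pusher (server) and pusher-js (client) libraries; channel and event names are made up, and the private-channel authentication a real deployment would need is omitted.

```typescript
// Hedged sketch: simulating request/response on top of Pusher's pub/sub.
import Pusher from "pusher";            // server library
import PusherClient from "pusher-js";   // client library

// Server side: publish commands on the channel a given device listens to.
const pusher = new Pusher({ appId: "APP_ID", key: "KEY", secret: "SECRET", cluster: "eu" });

function sendCommandToDevice(deviceId: string, action: string) {
  // Each device has its own channel, so this reaches exactly one device.
  pusher.trigger(`device-${deviceId}`, "command", { action });
}

// Device side: subscribe to its own channel, and "reply" by calling the
// server over plain HTTPS, which can then publish on the dashboard's channel.
const client = new PusherClient("KEY", { cluster: "eu" });
client.subscribe("device-42").bind("command", (cmd: { action: string }) => {
  fetch("https://example.test/api/device-42/reply", {
    method: "POST",
    body: JSON.stringify({ ok: true, cmd }),
  });
});
```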
This is not ideal. For something probably closer to what you want, take a look at WAMP (Web Application Messaging Protocol), which offers more than just PubSub. There is a list of implementations at http://wamp-proto.org/implementations. For a router I would recommend Crossbar.io (http://crossbar.io), which has the most documentation to help you get started. Full disclosure: I am involved with both WAMP and Crossbar.io - but it's all open source and may be just what you need.

Moving from socket.io to raw websockets?

Right now I'm using socket.io with mandatory websockets as the transport. I'm thinking about moving to raw websockets but I'm not clear on what functionality I will lose moving off of socket.io. Thanks for any guidance.
The socket.io library adds the following features beyond standard webSockets:
Automatic selection of long polling vs. webSocket if the browser does not support webSockets or if the network path has a proxy/firewall that blocks webSockets.
Automatic client reconnection if the connection goes down (even if the server restarts).
Automatic detection of a dead connection (by using regular pings to detect a non-functioning connection)
Message passing scheme with automatic conversion to/from JSON.
The server-side concept of rooms where it's easy to communicate with a group of connected users.
The notion of connecting to a namespace on the server rather than just connecting to the server. This can be used for a variety of different capabilities, but I use it to tell the server what types of information I want to subscribe to. It's like connecting to a particular channel.
Server-side data structures that automatically keep track of all connected clients so you can enumerate them at any time.
Middleware architecture built-in to the socket.io library that can be used to implement things like authentication with access to cookies from the original connection.
Automatic storage of the cookies and other headers present on the connection when it was first connected (very useful for identifying what user is connected).
Server-side broadcast capabilities to send a common message either to all connected clients, all clients in a room, or all clients in a namespace.
Tagging of every message with a message name and routing of message names into an eventEmitter so you listen for incoming messages by listening on an eventEmitter for the desired message name.
The ability for either client or server to send a message and then wait for a response to that specific message (a reply feature or request/response model).
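As a small illustration of the last two points (named-event routing plus the reply feature), here is a hedged sketch using the standard socket.io server and client packages; the event name and payload are invented.

```typescript
// Hedged sketch of two features from the list above: named-event routing and
// the acknowledgement (reply) model. Event name and payload are invented.
import { Server } from "socket.io";
import { io as connect } from "socket.io-client";

const io = new Server(3000);

io.on("connection", (socket) => {
  // The last argument socket.io passes is the ack callback for this message.
  socket.on("getUser", (id: number, ack: (user: { id: number; name: string }) => void) => {
    ack({ id, name: "example" }); // reply tied to this specific request
  });
});

const client = connect("http://localhost:3000");
client.emit("getUser", 7, (user: { id: number; name: string }) => {
  console.log(user.name); // response received for the emit above
});
```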

How do RethinkDB, Laravel, and Ratchet work together?

Situation
I am trying to build a real-time chat toy app using the following technology stack
RethinkDB
Laravel 5
Ratchet
What I perceive to be the conceptual situation
The green arrows represent the real-time exchange of data.
The black arrows represent other non real-time requests and exchange of data.
My question
I was wondering whether my understanding of how to implement chat with this technology stack, as shown in the diagram, is correct.
If there are inaccuracies, what would they be?
Your interpretation seems correct, although I would not use the WebSocket to send data to the server, only to distribute live data to all subscribers of a channel.
To do this, preferably set up an API to receive new posts/chats/users.
Then use a push server to forward the received data to the socket.
A push server is just an intermediary between the app and the WebSocket server that lets PHP (Laravel) push to the socket easily.
Edit: to elaborate
To restate the flow:
All clients listen to the WebSocket server. This connection is passive; clients only receive messages from the socket according to the topics/subscriptions they have.
When someone wants to send a message (in the case of a chat application), they send it to an API, which checks that the right user sent it, perhaps using API keys or other security measures.
Once the message is received, the API wants to distribute it to all clients listening on that chat room/topic/subscription.
So the message is forwarded to the push server, which sits between the back end (API, controllers) and the WebSocket server (subscriptions, topics).
The push server then forwards the message to the WebSocket server, which distributes it to the correct listeners.
Advantages of using an API:
Security
Scalability
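To make the flow concrete, here is a language-neutral sketch of the push-server idea (written in TypeScript with the ws package rather than PHP/Ratchet, so treat it as an illustration of the pattern, not of the stack in the question): the API validates and stores a message, then hands it to a process that owns the WebSocket server and fans it out per topic.

```typescript
// Hedged sketch of the "push server" pattern: the WebSocket process only
// distributes, and the API layer calls pushToTopic() after validating a message.
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });
const topics = new Map<string, Set<WebSocket>>();   // topic -> subscribers

wss.on("connection", (socket) => {
  socket.on("message", (raw) => {
    // Clients only send one thing over the socket: which room to listen to.
    const { subscribe } = JSON.parse(raw.toString());
    if (!topics.has(subscribe)) topics.set(subscribe, new Set());
    topics.get(subscribe)!.add(socket);
  });
});

// Called by the API layer (e.g. over a local HTTP or queue hop) after it has
// authenticated the sender and stored the message.
export function pushToTopic(topic: string, message: unknown) {
  for (const socket of topics.get(topic) ?? []) {
    socket.send(JSON.stringify(message));
  }
}
```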

Realtime connection (SockJS/Socket.io) and Microservice application

Currently I'm building an application in a microservice architecture.
The first application is an API that does the user authentication and receives requests to initiate/keep a realtime connection with the user (via Socket.io or SockJS); the system stores the socket id on the User object.
The second application is a WORKER that does some work and sometimes has to send realtime data to the user.
The question is: How should the second application (the WORKER) send realtime data to the user?
Should the WORKER send a message to the API then the API forward this message to the user?
Or the WORKER can directly send the message to the user?
Thank you
In an ideal setup, the service responsible for publishing realtime push notifications should be separate from the other services. A microservice is a set of narrowly related methods, and there is no relation between the authentication ("user") service and the realtime push-notification service. Breaking it down further, authentication is really a separate service too; this is just FYI, and there may be a reason you built it this way.
How would the services communicate? There are many ways to implement the internal communication between services; an MQ solution could add more technology to your stack, such as RabbitMQ, Beanstalkd, Gearman, etc.
You can also do the communication on top of the HTTP protocol, but keep in mind that each HTTP call adds overhead.
Ideally, each service would expose two interfaces through which it can be invoked: an HTTP interface and an MQ (console/worker) interface.
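A hedged sketch of the queue-based option (WORKER publishes an event, the connection-owning service consumes it and forwards it to the user's socket): the queue interface below is a stand-in for whatever broker you choose, and only the socket.io calls are concrete.

```typescript
// Hedged sketch of the worker -> gateway -> user flow. "Queue" is an assumed
// abstraction over RabbitMQ/Beanstalkd/etc., not a real library API.
import { Server } from "socket.io";

interface Queue {
  publish(topic: string, msg: object): void;
  consume(topic: string, handler: (msg: { userId: string; payload: object }) => void): void;
}

// WORKER side: never talks to the browser, only drops an event on the queue.
export function notifyUser(queue: Queue, userId: string, payload: object) {
  queue.publish("user.notifications", { userId, payload });
}

// API / gateway side: owns the socket connections, maps userId -> socket id.
const io = new Server(3001);
const socketsByUser = new Map<string, string>();
io.on("connection", (socket) => {
  const userId = socket.handshake.auth.userId as string; // set during auth
  socketsByUser.set(userId, socket.id);
});

export function startConsumer(queue: Queue) {
  queue.consume("user.notifications", ({ userId, payload }) => {
    const socketId = socketsByUser.get(userId);
    if (socketId) io.to(socketId).emit("notification", payload);
  });
}
```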

How would I create an asynchronous notification system using RESTful web services?

I have a Java application which I make available via RESTful web services. I want to create a mechanism so clients can register for notifications of events. The rub is that there is no guarantee that the client programs will be Java programs and hence I won't be able to use JMS for this (i.e. if every client was a Java app then we could allow the clients to subscribe to a JMS topic and listen there for notification messages).
The use case is roughly as follows:
A client registers itself with my server application, via a RESTful web service call, indicating that it is interested in getting a notification message anytime a specific object is updated.
When the object of interest is updated then my server application needs to put out a notification to all clients who are interested in being notified of this event.
As I mentioned above I know how I would do this if all clients were Java apps -- set up a topic that clients can listen to for notification messages. However I can't use that approach since it's likely that many clients will not be able to listen to a JMS topic for notification messages.
Can anyone here enlighten me as to how this problem is typically solved? What mechanism can I provide using a RESTful API?
I can think of four approaches:
A Twitter approach: You register the Client and then it calls back periodically with a GET to retrieve any notifications.
The Client describes how it wants to receive the notification when it makes the registration request. That way you could allow JMS for those that can handle it and fall back to email or similar for those that can't.
Take a URL during the registration request and POST back to each Client individually when you have a notification. Hardly Pub/Sub but the effect would be similar. Of course you'd be assuming that the Client was listening for these notifications and had implemented their Client according to your specs.
Buy IBM WebSphere MQ (MQSeries). Best IBM product ever. Not REST but it's great at multi-platform integration like this.
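The third approach (register a callback URL, then POST to it on each event) might look roughly like this with Express; the paths and payload shape are assumptions, not part of the question.

```typescript
// Hedged sketch of webhook-style notifications over REST.
import express from "express";

const app = express();
app.use(express.json());

// objectId -> set of callback URLs registered by interested clients
const subscribers = new Map<string, Set<string>>();

app.post("/subscriptions", (req, res) => {
  const { objectId, callbackUrl } = req.body as { objectId: string; callbackUrl: string };
  if (!subscribers.has(objectId)) subscribers.set(objectId, new Set());
  subscribers.get(objectId)!.add(callbackUrl);
  res.status(201).end();
});

// Call this from wherever the object is actually updated.
export async function notifySubscribers(objectId: string, change: object) {
  for (const url of subscribers.get(objectId) ?? []) {
    await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ objectId, change }),
    });
  }
}

app.listen(8080);
```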
We have this problem and need low-latency asynchronous updates to relatively few listeners. Our two alternative solutions have been:
Polling: Hammer the list of resources you need with GET requests
Streaming event updates: Provide a monitor resource. The server keeps the connection open. As events occur, the server transmits a stream of event descriptions using multipart content-type or chunked transfer-encoding.
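A minimal sketch of such a monitor resource using Express: the response is never ended, so Node keeps the connection open and streams each event description as a chunk; the endpoint name and event source are assumptions.

```typescript
// Hedged sketch of a streaming "monitor" resource. Node uses chunked
// transfer-encoding automatically because no Content-Length is set.
import express from "express";
import { EventEmitter } from "node:events";

const events = new EventEmitter();   // fired elsewhere when objects change
const app = express();

app.get("/monitor/:objectId", (req, res) => {
  res.setHeader("Content-Type", "application/x-ndjson");

  const onChange = (change: object) => res.write(JSON.stringify(change) + "\n");
  events.on(`change:${req.params.objectId}`, onChange);

  // Stop streaming when the client disconnects.
  req.on("close", () => events.off(`change:${req.params.objectId}`, onChange));
});

app.listen(8080);
```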
In the response to the RESTful request, you could supply an individualized RESTful URL that the client can monitor for updates.
That is, you have one URL (/Signup.htm, say), that accepts the client's information (id if appropriate, id of object to monitor) and returns a customized url (/Monitor/XYZPDQ), where XYZPDQ is a UUID created for that particular client. The client can poll that customized URL at some interval, and it will receive a notification if the update occurs.
If you don't care about who the client is (and don't want to create so many UUIDs) you could just have separate RESTful URLs for each object that might want to be monitored, and the "signup" URL would just return the correct one.
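A client for that scheme could be as simple as the following sketch (URLs and response shape are assumed): sign up once, then poll the returned monitor URL on an interval.

```typescript
// Hedged sketch of the per-client monitor URL idea from the answer above.
async function watchObject(objectId: string, onUpdate: (update: object) => void) {
  const signup = await fetch("/Signup.htm", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ objectId }),
  });
  const { monitorUrl } = (await signup.json()) as { monitorUrl: string }; // e.g. /Monitor/XYZPDQ

  setInterval(async () => {
    const res = await fetch(monitorUrl);
    if (res.status === 200) onUpdate(await res.json()); // e.g. 204 could mean "no news"
  }, 5000);
}
```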
As John Saunders says, you can't really do a more straightforward publish/subscribe via HTTP.
If polling is not acceptable I would consider using web-sockets (e.g. see here). Though to be honest I like the idea suggested by user189423 of multipart content-type or chunked transfer-encoding as well.
