Send data from gRPC server to a queue (NATS.io) - Go

I have a situation where I need to publish data from a gRPC server (streaming data) to a NATS.io publisher; many clients would then subscribe via NATS:
gRPC Server -----> NATS.io --> Clients
How can I achieve that? I was able to send data from the gRPC server to a gRPC client in Go and then publish it to the NATS server,
but I don't want to involve the gRPC client in between. Any idea how to achieve it?
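Since the gRPC server and a NATS publisher are just Go code running in the same process, one option is to open a nats.Conn next to the gRPC server and publish from wherever the streaming handler currently writes to the gRPC client. A minimal sketch, assuming the nats.go client library; the publisher type, the publish helper, and the "updates" subject are made-up names for illustration:

package main

import (
    "log"

    "github.com/nats-io/nats.go"
)

// publisher wraps the NATS connection that lives inside the gRPC server process.
type publisher struct {
    nc *nats.Conn
}

// publish forwards one message from the gRPC streaming code straight to NATS,
// so every client subscribed to the "updates" subject receives it.
func (p *publisher) publish(data []byte) error {
    return p.nc.Publish("updates", data)
}

func main() {
    // Connect to NATS once at startup and share the connection with the gRPC handlers.
    nc, err := nats.Connect(nats.DefaultURL)
    if err != nil {
        log.Fatal(err)
    }
    defer nc.Close()

    p := &publisher{nc: nc}

    // Wherever the streaming handler used to call stream.Send(msg) to push data
    // to the gRPC client, call p.publish(...) instead (or in addition).
    if err := p.publish([]byte("hello from the gRPC server")); err != nil {
        log.Println("publish failed:", err)
    }
}

The NATS clients then just call nc.Subscribe("updates", func(m *nats.Msg) { ... }) and never need to speak gRPC at all.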

Related

With GraphQL is it possible to replace the websocket used for subscription with a message-based approach (e.g. MQ)

The corporate environment I am working in accepts the use of HTTP(S)-based request/response patterns, which is fine for GraphQL Query and Mutation, but it has issues with the websockets needed for GraphQL Subscription and would prefer that subscriptions be routed via IBM MQ.
Does anyone have any experience with this? I am thinking of using Apollo Server to serve the GraphQL interface. Perhaps there is a front-end subscription solution that can be plugged in using IBM MQ? The back-end data sources are Oracle databases.
Message queues are usually used to communicate between services, while websockets are how browsers can communicate with a server over a persistent socket. This lets the server push data to the client when a new event for a subscription arrives (classically, browsers only supported "pull" and could only receive data when they asked for it). Browsers don't implement the MQ protocols you would need to subscribe to the MQ directly.

I am not an expert on MQs, but what is usually done is to run a subscription server that connects to the client via websocket. The subscription server itself subscribes to the message queue and notifies the relevant clients about their subscribed events. You can easily scale the subscription servers horizontally when you need additional resources.
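Purely to illustrate that bridge pattern (not IBM MQ or Apollo specifics), here is a sketch in Go where a plain channel stands in for the message-queue subscription and the gorilla/websocket package pushes events to connected clients; a real deployment would replace the channel with an IBM MQ consumer and, for GraphQL, wrap the payloads in whatever subscription protocol the client library expects:

package main

import (
    "log"
    "net/http"
    "sync"

    "github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
    // Allow any origin for this sketch; tighten this in real code.
    CheckOrigin: func(r *http.Request) bool { return true },
}

// bridge fans events from a message-queue subscription out to websocket clients.
type bridge struct {
    mu      sync.Mutex
    clients map[*websocket.Conn]bool
    events  chan []byte // stand-in for the MQ subscription
}

// handleWS upgrades the HTTP request to a websocket and registers the client.
func (b *bridge) handleWS(w http.ResponseWriter, r *http.Request) {
    conn, err := upgrader.Upgrade(w, r, nil)
    if err != nil {
        return
    }
    b.mu.Lock()
    b.clients[conn] = true
    b.mu.Unlock()
}

// run delivers every queued event to every connected websocket client.
func (b *bridge) run() {
    for msg := range b.events {
        b.mu.Lock()
        for conn := range b.clients {
            if err := conn.WriteMessage(websocket.TextMessage, msg); err != nil {
                conn.Close()
                delete(b.clients, conn)
            }
        }
        b.mu.Unlock()
    }
}

func main() {
    b := &bridge{clients: map[*websocket.Conn]bool{}, events: make(chan []byte)}
    go b.run()
    // In a real system, a goroutine consuming the MQ would feed b.events here.
    http.HandleFunc("/subscriptions", b.handleWS)
    log.Fatal(http.ListenAndServe(":8080", nil))
}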

Akka gRPC - Detect stream client crash at server

I am connecting to an Akka gRPC stream server to get a stream of messages. When the client dies/crashes for some reason, I am not finding a way to detect this at the server.
Is there any error handler/callback to register so the server knows the client crashed?

MQTT over websocket in C

I have implemented MQTT using a TCP socket connection on my machine with the Mosquitto broker. I have fully understood the MQTT protocol and its frame format. Now I want to publish my data to a web server that supports MQTT over websocket.
How can I start with this?
I am not clear on the websocket concept:
Can I implement a websocket over TCP, or is there another method?
Do I have to use HTTP to implement MQTT over websocket in order to send data to the web server, given that HTTP and MQTT use different methods to send and receive data?
I don't want to use ready-made libraries such as Paho.
I am totally new to socket programming. Any help or guidelines will be greatly appreciated!
Websockets are an extension to the HTTP protocol: you need a correctly formatted HTTP request to set up a new websocket connection.
Once the connection is set up, it can be used to send exactly the same binary MQTT packets that you would send over a plain TCP connection.
I suggest you look at using an existing library like libwebsockets to handle the websocket connection setup; then you should be able to adapt your existing code to use the websocket handle instead of the socket handle.
If you REALLY don't want to use a library, you will need to start by reading the Websocket RFC: https://www.rfc-editor.org/rfc/rfc6455
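For reference, the websocket setup described above is just an HTTP upgrade exchange. The key and accept values below are the sample ones from RFC 6455, and "mqtt" is the websocket subprotocol name MQTT brokers expect; after the 101 response, the same TCP connection carries websocket frames, and the binary MQTT packets you already build go inside binary frames:

GET /mqtt HTTP/1.1
Host: broker.example.com
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Key: dGhlIHNhbXBsZSBub25jZQ==
Sec-WebSocket-Version: 13
Sec-WebSocket-Protocol: mqtt

HTTP/1.1 101 Switching Protocols
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
Sec-WebSocket-Protocol: mqtt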

Bidirectional gRPC stream implementation in Go

I'm looking at a proto file which has a bidirectional stream between the client and the server. Does this mean that the client and server can send and receive messages arbitrarily? I'm more confused about the server side. How can the server send data over this bidirectional stream arbitrarily? What would be the trigger?
Thanks!
From the docs:
In a bidirectional streaming RPC, again the call is initiated by the client calling the method and the server receiving the client metadata, method name, and deadline. Again the server can choose to send back its initial metadata or wait for the client to start sending requests.

What happens next depends on the application, as the client and server can read and write in any order - the streams operate completely independently. [...]
This means: the client establishes the connection to the server, and you then have a connection on which both parties can read and write.
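To make that concrete in Go: the server handler receives a stream value on which Recv and Send work independently, so the "trigger" for a server-side send is whatever the application decides (another message, a timer, an external event). A minimal sketch, assuming a hypothetical Chat service with rpc Chat(stream Msg) returns (stream Msg) generated into a pb package:

package main

import (
    "io"
    "time"

    pb "example.com/chat/pb" // hypothetical generated package for the Chat service
)

// chatServer implements the hypothetical bidirectional Chat service.
type chatServer struct {
    pb.UnimplementedChatServer
}

// Chat reads and writes on the same stream independently. Only this goroutine
// calls Send (gRPC-Go allows one concurrent sender and one concurrent receiver
// per stream), while a helper goroutine owns Recv.
func (s *chatServer) Chat(stream pb.Chat_ChatServer) error {
    incoming := make(chan *pb.Msg)
    errc := make(chan error, 1)

    // One goroutine owns Recv and forwards client messages to the handler.
    go func() {
        for {
            in, err := stream.Recv()
            if err != nil {
                errc <- err // io.EOF here means the client finished sending
                return
            }
            incoming <- in
        }
    }()

    ticker := time.NewTicker(5 * time.Second)
    defer ticker.Stop()

    for {
        select {
        case in := <-incoming:
            // Triggered by a client message: echo it back.
            if err := stream.Send(&pb.Msg{Text: "echo: " + in.GetText()}); err != nil {
                return err
            }
        case <-ticker.C:
            // Triggered purely by the server: no client request involved.
            if err := stream.Send(&pb.Msg{Text: "server ping"}); err != nil {
                return err
            }
        case err := <-errc:
            if err == io.EOF {
                return nil
            }
            return err
        }
    }
}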

When 2 servers are connected to the same socket.io redis adapter, do both of them get messages from the client at the same time?

I have two servers, server-a and server-b.
To use socket.io across both, the two servers use the redis adapter, so the client can connect to either server-a or server-b.
Now the question: if the client is connected to server-a and emits a message, does server-b have a way to get that message?
The client code:
io.emit('sendMessage', myMessage)
The server-a code:
io.on('sendMessage', function () {
  console.log('Server A got the message')
})
The server-b code:
io.on('sendMessage', function () {
  console.log('Server B got the message')
})
The client is connected only to server-a; server-a and server-b use the same redis adapter.
The question is: when the client emits a message, will server-b get it? (server-b is only connected to the same redis.)
What I want to do: I have several servers that should perform an action based on a client request. When the client requests something, all the servers need to start working. I thought of doing this with socket.io and keeping a single connection between the client and one of the servers.
All the servers would then use socket.io to get the same message from the client.
If you are using the redis adapter properly with all your servers, then when you do something like:
io.emit('sendMessage',myMessage)
from any one of your servers, that message will end up being sent to all the clients connected to all of your servers. What happens internally is that the message is sent to a redis channel which all the servers are listening to. When each server gets the message, it then broadcasts it to all of its connected clients, but these last steps are handled transparently for you by the redis adapter and redis store.
So, io.emit() is used to send to all connected clients (which uses all the servers in order to carry out the broadcast). It is not used to broadcast the same message directly to all your servers so that they can each manually process that message.
To send to each of your servers, you could probably use your own custom redis publish/subscribe channel messages since each server is already connected to redis and this is something that redis is good at.
Or, you could designate one master socket.io server and have all the other servers connect to it with socket.io. Then any server could ask the central server to broadcast a message to all the other servers.
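To illustrate the custom redis publish/subscribe suggestion two paragraphs up: each server subscribes to an application-defined channel, and whichever server received the client's socket.io request publishes the "start work" message on it. The sketch below uses Go with the go-redis client purely for illustration (a Node deployment would do the same with its redis client), and the "start-work" channel name is made up:

package main

import (
    "context"
    "log"

    "github.com/redis/go-redis/v9"
)

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // Every server subscribes to the same application-defined channel.
    sub := rdb.Subscribe(ctx, "start-work")
    defer sub.Close()

    go func() {
        for msg := range sub.Channel() {
            // Each subscribed server gets its own copy and starts its work.
            log.Println("got job:", msg.Payload)
        }
    }()

    // The server that received the client's request publishes once;
    // redis delivers the message to all subscribed servers.
    if err := rdb.Publish(ctx, "start-work", "job-42").Err(); err != nil {
        log.Fatal(err)
    }

    select {} // keep the process alive for the sketch
}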
