We want our SignalR Hub (Microsoft.AspNetCore.SignalR.Hub) to be a middle-man of sorts; in addition to pushing and receiving messages to/from clients, it should consume messages from a separate web socket server using ClientWebSocket. It processes those messages, then sends derived messages to clients via SignalR.
One problem is that the ClientWebSocket is injected and then configured in the constructor, but the constructor is called several times, so we get an error when we try to configure a ClientWebSocket that is already open.
Another problem is that when ClientWebSocket.ConnectAsync is called, SignalR no longer receives messages.
Is there a way to get these to work in the Hub? Or do we have to design it differently to not use SignalR and ClientWebSocket in the same class?
SignalR is a protocol that runs over any transport. That means you can connect with ClientWebSocket, but you will need to write and parse the protocol messages yourself. The protocol is documented at https://github.com/dotnet/aspnetcore/blob/main/src/SignalR/docs/specs/HubProtocol.md
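To show the shape of the protocol, here is a minimal sketch of the JSON hub protocol over a raw WebSocket (written in Java with java.net.http for brevity, but the frames look the same from ClientWebSocket). The hub URL and the hub method name "Send" are assumptions for illustration, not part of your app:

```java
// Minimal sketch, not production code: speaking the SignalR JSON hub protocol
// over a raw WebSocket. The hub URL and method name below are assumptions.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.WebSocket;
import java.util.concurrent.CompletionStage;

public class RawSignalRClient {
    // Every JSON hub protocol frame is terminated by the ASCII record separator.
    private static final String RS = "\u001e";

    public static void main(String[] args) throws Exception {
        WebSocket ws = HttpClient.newHttpClient()
                .newWebSocketBuilder()
                // Assumed hub endpoint; depending on server configuration you may
                // need to call the /negotiate endpoint first.
                .buildAsync(URI.create("ws://localhost:5000/chathub"), new HubListener())
                .join();

        // Handshake: declare the protocol before any hub messages are exchanged.
        ws.sendText("{\"protocol\":\"json\",\"version\":1}" + RS, true).join();

        // Invocation message (type 1) calling a hypothetical hub method "Send".
        ws.sendText("{\"type\":1,\"target\":\"Send\",\"arguments\":[\"hello\"]}" + RS, true).join();

        Thread.sleep(10_000); // keep the demo alive long enough to see replies
    }

    static class HubListener implements WebSocket.Listener {
        private final StringBuilder buffer = new StringBuilder();

        @Override
        public CompletionStage<?> onText(WebSocket ws, CharSequence data, boolean last) {
            buffer.append(data);
            if (last) {
                // One WebSocket frame may carry several hub messages; split on RS.
                for (String frame : buffer.toString().split(RS)) {
                    if (!frame.isEmpty()) {
                        System.out.println("hub message: " + frame);
                    }
                }
                buffer.setLength(0);
            }
            ws.request(1); // ask for the next frame
            return null;
        }
    }
}
```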
Related
The corporate environment I am working in accepts HTTP(S)-based request/response patterns, which is fine for GraphQL Query and Mutation, but they have issues with the WebSockets needed for GraphQL Subscription and would prefer that subscriptions be routed via IBM MQ.
Does anyone have any experience with this? I am thinking of using Apollo Server to serve up the GraphQL interface. Perhaps there is a front-end subscription solution that can be plugged in using IBM MQ? The back end data sources are Oracle databases.
Message queues are usually used to communicate between services, while WebSockets are how browsers can communicate with the server over a persistent connection. This allows the server to send data to the client when a new event for a subscription arrives (classically, browsers only supported "pull" and could only receive data when they asked for it). Browsers don't implement the MQ protocols you would need to subscribe to the MQ directly.

I am not an expert on MQs, but what is usually done is to have a subscription server that connects to the client via WebSocket. The subscription service then itself subscribes to the message queue and notifies the relevant clients about their subscribed events. You can easily scale the subscription servers horizontally when you need additional resources.
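To make that pattern concrete, here is a rough sketch of such a bridge in Java, since IBM MQ can be consumed through its standard JMS interface. The endpoint path and queue name are placeholders, and this is not Apollo- or IBM-specific code; a real subscription server would also filter events per client subscription instead of broadcasting to everyone:

```java
// Rough sketch of the subscription/bridge service described above, in Java.
// IBM MQ can be consumed through its standard JMS interface; the endpoint path
// and queue name here are placeholders.
import java.util.Set;
import java.util.concurrent.CopyOnWriteArraySet;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.JMSException;
import javax.jms.MessageConsumer;
import javax.jms.TextMessage;
import javax.websocket.OnClose;
import javax.websocket.OnOpen;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/subscriptions")
public class SubscriptionBridge {
    // All browsers currently connected over WebSocket.
    private static final Set<Session> CLIENTS = new CopyOnWriteArraySet<>();

    @OnOpen
    public void onOpen(Session session) { CLIENTS.add(session); }

    @OnClose
    public void onClose(Session session) { CLIENTS.remove(session); }

    // Called once at startup; the ConnectionFactory would come from JNDI or the
    // IBM MQ client library.
    public static void startMqListener(ConnectionFactory factory) throws JMSException {
        Connection connection = factory.createConnection();
        javax.jms.Session jmsSession =
                connection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
                jmsSession.createConsumer(jmsSession.createQueue("EVENTS.QUEUE"));
        consumer.setMessageListener(message -> {
            try {
                String payload = ((TextMessage) message).getText();
                // Fan the event out to every connected browser.
                for (Session client : CLIENTS) {
                    client.getAsyncRemote().sendText(payload);
                }
            } catch (JMSException e) {
                e.printStackTrace();
            }
        });
        connection.start();
    }
}
```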
I'm extremely new to all of this, but from my understanding, WebSockets allow for a bidirectional transfer of information between the browser and the server. Vert.x is a toolkit for asynchronous I/O. And SockJS is a JavaScript library that uses WebSockets for communication when it can, and provides fallback options otherwise.
But if I'm writing something in Java using vert.x, I don't quite understand how the pieces fit together. Does vert.x actually support websockets? Or do I need a combination of vert.x and sockJS to make that happen?
HTTP(S) is a stateless, request/response protocol: once a request has been answered, the connection sits idle until the client sends the next request.

So let's take the example of a chat application, and assume A is chatting with B over HTTP. B has sent a message, which is now on the server, but until A refreshes the browser, B's message will not appear. That's the pull-only, stateless behavior.

Sockets, on the other hand, are not stateless. They use the ws protocol and stay connected to the server. Taking the same example, if B now sends a message, the server pushes it down A's socket and it is displayed in the browser without the need to refresh. That's how sockets work.
To serve a web page you need an HTTP server. Similarly, to use sockets you need a socket server, and that is what Vert.x provides. Vert.x starts the socket server, and your browser connects to it using the client-side SockJS script.
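For example, a bare-bones Vert.x WebSocket server looks roughly like this (a sketch assuming Vert.x 4; the port and echo behavior are just for illustration). SockJS, and Vert.x's SockJSHandler, only come in when you also want the browser-side fallbacks:

```java
// Minimal sketch (assuming Vert.x 4): Vert.x serving plain WebSockets itself.
import io.vertx.core.Vertx;

public class EchoWebSocketServer {
    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.createHttpServer()
                .webSocketHandler(ws -> {
                    System.out.println("client connected on " + ws.path());
                    // Echo every text frame straight back to the browser.
                    ws.textMessageHandler(msg -> ws.writeTextMessage("echo: " + msg));
                })
                .listen(8080);
    }
}
```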
I'm new to ServiceStack and want some validation on a pattern we're thinking about using.
We want to use ServiceStack with Xamarin and message queues. While I understand how REST works under the covers, I'm not sure how the message queues in ServiceStack work and whether they're appropriate for mobile devices.

Specifically, we know that all mobile devices are essentially behind a NAT firewall set up by the telco. That means clients can talk to servers, but servers can't talk directly to clients without the client talking first.

While the concept of a ServiceBus is designed specifically to handle this case, I'm not sure whether it's "mobile network friendly".

I would assume that the client-side implementation would need to work in one of two ways: polling or a blocking get.

Polling would have the client frequently run an HTTP GET to ask the server if anything is available on a queue. A blocking get would also perform an HTTP GET, but have the server return nothing until data is ready. Or is there another technique I'm missing?
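To clarify what I mean by the two options, here is a rough, generic sketch (deliberately not ServiceStack or Xamarin code; the URL and the wait parameter are made up):

```java
// Generic sketch of the two client-side options above (illustrative only, not
// ServiceStack or Xamarin code; the URL and the "wait" parameter are made up).
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class QueuePollingSketch {
    static final HttpClient CLIENT = HttpClient.newHttpClient();

    // Option 1 (plain polling): ask every N seconds, usually get nothing back.
    static void poll() throws Exception {
        while (true) {
            HttpResponse<String> res = CLIENT.send(
                    HttpRequest.newBuilder(URI.create("https://example.com/queue/next")).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            if (!res.body().isEmpty()) handle(res.body());
            Thread.sleep(30_000);
        }
    }

    // Option 2 (blocking get / long poll): the server holds the request open
    // until a message arrives or a timeout elapses, then we immediately re-ask.
    static void longPoll() throws Exception {
        while (true) {
            HttpResponse<String> res = CLIENT.send(
                    HttpRequest.newBuilder(URI.create("https://example.com/queue/next?wait=60"))
                            .timeout(Duration.ofSeconds(90)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            if (!res.body().isEmpty()) handle(res.body());
        }
    }

    static void handle(String message) { System.out.println("got: " + message); }
}
```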
If it is a poll, is there any way to control the poll frequency in ServiceStack? If it's a blocking get, how is this configured?

What happens when the app goes to the background? Do we need to cancel the connections manually, etc.?
We took an old version of the ServiceStack client library and ported it to Xamarin. We now see that the latest ServiceStack client-side library is Xamarin compatible.

So, basically, my question is: has anyone used message queues from a Xamarin mobile app to ServiceStack with RedisMQ or another server-side message queue?
Right now I'm using socket.io with mandatory websockets as the transport. I'm thinking about moving to raw websockets but I'm not clear on what functionality I will lose moving off of socket.io. Thanks for any guidance.
The socket.io library adds the following features beyond standard webSockets:
Automatic selection of long polling vs. webSocket if the browser does not support webSockets or if the network path has a proxy/firewall that blocks webSockets.
Automatic client reconnection if the connection goes down (even if the server restarts).
Automatic detection of a dead connection (by using regular pings to detect a non-functioning connection)
Message passing scheme with automatic conversion to/from JSON.
The server-side concept of rooms where it's easy to communicate with a group of connected users.
The notion of connecting to a namespace on the server rather than just connecting to the server. This can be used for a variety of different capabilities, but I use it to tell the server what types of information I want to subscribe to. It's like connecting to a particular channel.
Server-side data structures that automatically keep track of all connected clients so you can enumerate them at any time.
Middleware architecture built-in to the socket.io library that can be used to implement things like authentication with access to cookies from the original connection.
Automatic storage of the cookies and other headers present on the connection when it was first connected (very useful for identifying what user is connected).
Server-side broadcast capabilities to send a common message either to all connected clients, all clients in a room, or all clients in a namespace.
Tagging of every message with a message name and routing of message names into an eventEmitter so you listen for incoming messages by listening on an eventEmitter for the desired message name.
The ability for either client or server to send a message and then wait for a response to that specific message (a reply feature or request/response model).
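To make the last two items concrete, here is a small sketch of named-event routing and the ack/reply feature as seen from the Java socket.io client (io.socket:socket.io-client); the URL and event names are made up. With a raw webSocket you would have to define your own message envelope and correlation scheme to get the same behavior:

```java
// Small sketch of named-event routing and the ack/reply feature, using the Java
// socket.io client (io.socket:socket.io-client). The URL and event names are
// made up for illustration.
import io.socket.client.Ack;
import io.socket.client.IO;
import io.socket.client.Socket;

public class SocketIoFeaturesSketch {
    public static void main(String[] args) throws Exception {
        Socket socket = IO.socket("http://localhost:3000");

        // Incoming messages are routed by name, like an event emitter.
        socket.on("price-update", data -> System.out.println("price: " + data[0]));

        socket.on(Socket.EVENT_CONNECT, ignored ->
                // Request/response: emit with an Ack so the server can reply to
                // this specific message.
                socket.emit("subscribe", "AAPL", (Ack) reply ->
                        System.out.println("server ack: " + reply[0])));

        socket.connect();
    }
}
```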
Java EE 7 allows you to create new endpoints very easily through annotations. However, I was wondering: is having multiple endpoints, one to handle each message type, a good idea, or should I have just one endpoint facade for everything?
I am leaning towards a single endpoint facade, based on the theory that each endpoint creates a new socket connection to the client. However, that theory could be incorrect, and WebSocket may be implemented so that it uses just one TCP/IP socket connection regardless of how many WebSocket endpoints are connected, so long as they connect to the same host:port.
I am asking specifically for Java EE 7, as there may be other web socket server implementations that may do things differently.
Just noticed an ambiguity in my question regarding message types. When I say message types, I mean different kinds of application messages, not native message types such as "binary" or "text". As such, I marked @PavelBucek's answer as the accepted one.
However, I did try an experiment with GlassFish and two endpoints. My suspicion was correct: a TCP connection is established per connected endpoint. This would cause more load on the server side if more than one WebSocket endpoint is used on a single page.
As such I concluded that there should be only one endpoint to handle the application messages provided that everything is a single native type.
This would mean that the application needs to do the dispatching rather than relying on some higher level API to do it for us.
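A rough sketch of what I mean by doing the dispatching in the application: a single endpoint, a single text handler, and a "type" field in the JSON payload deciding what happens (the field values and replies below are made up):

```java
// Rough sketch of application-level dispatching in a single endpoint.
import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonObject;
import javax.websocket.OnMessage;
import javax.websocket.Session;
import javax.websocket.server.ServerEndpoint;

@ServerEndpoint("/app")
public class SingleEndpoint {

    @OnMessage
    public void onMessage(String payload, Session session) {
        JsonObject message = Json.createReader(new StringReader(payload)).readObject();
        // Dispatch on the application-level message type, not the native
        // text/binary/pong type.
        switch (message.getString("type")) {
            case "chat":
                session.getAsyncRemote().sendText("chat received");
                break;
            case "presence":
                session.getAsyncRemote().sendText("presence updated");
                break;
            default:
                session.getAsyncRemote().sendText("unknown message type");
        }
    }
}
```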
The only valid answer here is to have multiple endpoints.
See WebSocket spec chapter 2.1.3:
The API limits the registration of MessageHandlers per Session to be one MessageHandler per native websocket message type. [WSC 2.1.3-1] In other words, the developer can only register at most one MessageHandler for incoming text messages, one MessageHandler for incoming binary messages, and one MessageHandler for incoming pong messages. The websocket implementation must generate an error if this restriction is violated [WSC 2.1.3-2].
As for using or not using multiple TCP connections: AFAIK there will currently be a new connection for every client, and there is no easy way to force anything else. WebSocket multiplexing should solve this, but I don't think any WebSocket API implementation supports it yet (I might be wrong).