So, I have built a WebSocket server application based on Netty 3.6.2. This application will have many, many users.
The idea is that clients register to listen for information on a topic, and when information flows through the server, the server sends it to those clients. Sounds straightforward so far, right?
I implemented this by building a giant in-memory map from topics to client Channels. When the server wants to send a message about a topic to all interested clients, it loops over all channels mapped to that topic. Seems straightforward, right?
However, in some preliminary multi-user testing, I am realizing there is not a one-to-one mapping between channel and client. How do I target a message at a particular client, if not through its channel? I am at a loss.
There should be a one-to-one ratio of clients to open channels. The fact that there is not points to some issue that is not related to Netty.
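For reference, here is a minimal sketch of the topic registry described in the question, using Netty 3's ChannelGroup (class and field names are illustrative, not taken from the original code). A ChannelGroup removes channels automatically when they close, so each connected client should show up at most once per topic it has subscribed to:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

import org.jboss.netty.channel.Channel;
import org.jboss.netty.channel.group.ChannelGroup;
import org.jboss.netty.channel.group.DefaultChannelGroup;
import org.jboss.netty.handler.codec.http.websocketx.TextWebSocketFrame;

public class TopicRegistry {
    // topic name -> group of channels of clients listening on that topic
    private final ConcurrentMap<String, ChannelGroup> topics =
            new ConcurrentHashMap<String, ChannelGroup>();

    /** Register a client's channel as a listener on a topic. */
    public void subscribe(String topic, Channel channel) {
        ChannelGroup group = topics.get(topic);
        if (group == null) {
            ChannelGroup created = new DefaultChannelGroup(topic);
            ChannelGroup existing = topics.putIfAbsent(topic, created);
            group = (existing != null) ? existing : created;
        }
        group.add(channel); // the group drops the channel again when it closes
    }

    /** Broadcast a message to every channel currently subscribed to the topic. */
    public void publish(String topic, String message) {
        ChannelGroup group = topics.get(topic);
        if (group != null) {
            group.write(new TextWebSocketFrame(message));
        }
    }
}
```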
Thank you for your help.
I am a backend developer and I would like to know what the common technologies for building real-time servers are. I know I could use a service like Firebase, but I really want to build it myself. I have some experience using WebSockets in Java, but I would like to know other ways to build a real-time server. When I say real-time, I mean something like Facebook. I would also like to know how to scale real-time servers.
Thank you all!
I've asked the same question in multiple forums. The common answer to this question is, strangely enough, still:
WebSocket
Socket.io
Server-Sent Events (SSE)
But those are mainly ways of transporting or streaming events to the clients. Something needs to be built on top of them. And there are multiple other things to consider, such as:
Considerations for real-time APIs
What events to send to the client
How to send each client only the events they need
How to handle authorization for events
Where to keep state on the event subscriptions (for stateless services)
How to recover from missed events due to lost connections and service crashes
Producing events for search or pagination queries
How to scale
Publish/Subscribe solutions
There are multiple pub/sub solutions out there, such as:
Pusher
PubNub
SocketCluster
etc.
But because of the limitations of a topic-based pub/sub architecture, some of the questions above are still left unanswered and have to be dealt with yourself. Lost connections are one example: Pusher has no fallback, neither does SocketCluster, and PubNub has only a limited queue.
Resgate - Realtime API Gateway
An alternative to the traditional topic based pub/sub pattern is using a resource-aware realtime API Gateway, such as Resgate.
Instead of the client subscribing to topics, the gateway keeps track of which resources (objects or arrays) the client has fetched, keeping the client's data up to date until it unsubscribes.
As a developer of Resgate, I can really recommend checking it out, as it addresses all the questions above, is language agnostic, simple and lightweight, and blazingly fast.
Read more at the NATS blog.
Scaling
Let's say you want to scale both the number of concurrent clients and the number of events being produced. You will eventually need to ensure each client only gets the data it is interested in, either through traditional topic-based publish/subscribe or through resource subscriptions. All of the solutions above handle that.
I also assume all of the above-mentioned solutions scale to more concurrent clients by letting you add more nodes/servers that handle the persistent WebSocket connections.
With Resgate, the first level of scaling is done by simply running multiple instances (it is a simple executable) and adding a load balancer that distributes the connections evenly between them.
Handling 100M concurrent clients
Let's say a single Resgate instance handles 10,000 persistent WebSocket connections, and you can add 10,000 Resgates (distributed across multiple data centers) to a single NATS Server. This would allow a total of 100M connections. Of course, depending on your data, you might have other scaling issues as well, such as network traffic ;).
A second layer of scaling (and adding redundancy) would be to replicate the whole setup to different data centers, and have the services synchronize their data between the data centers using other tools like Kafka, CockroachDB, etc.
Scaling data retrieval
With the traditional publish/subscribe solution that only deals with events, you will also have to handle scaling for the HTTP (REST) requests.
With Resgate, this is not required, as resource data is also fetched over the WebSocket connection. This allows Resgate not only to ensure that resource data and events are synchronized (another issue with separate pub/sub solutions), but also to cache the data. If multiple clients request the same data, Resgate only needs to fetch it from the service once, effectively improving scalability.
Butterfly Server .NET is a real-time server written in C# allowing you to create real-time apps. You can see the source at https://github.com/firesharkstudios/butterfly-server-dotnet.
We have a few microservices created in .NET Core 1.0, we follow the CQRS pattern, and we also use Swagger, which lists all APIs. We have a requirement to implement a message bus (not decided yet, might be AWS), which will orchestrate UI operations that span multiple backend services. I don't know how to start because I'm new to this; I need to understand message buses, queues, and publishing events to a message bus. Could you please help me understand?
Also, any pointers to tutorial videos and similar explanations would be helpful.
You should read this topic from Maira Wenzel at Microsoft. It helped me a lot in my early days.
An event bus allows publish/subscribe-style communication between microservices without requiring the components to explicitly be aware of each other
In a microservice architecture, services should not communicate with each other synchronously; instead, they should all communicate via a message broker. There are lots of message brokers you can use, like RabbitMQ, Kafka, and so on.
Using a message broker directly means managing all of the details of the broker's behavior, which takes a lot of code. This is where the service bus comes into the story.
The service bus is an intermediary between each service and the message broker that encapsulates all of the broker's details and configuration. That way, microservice blocks (each bounded context in DDD) can communicate with each other simply and at a high level.
The point is that with a service bus you get rid of the message broker's complexities, so you don't need to worry about low-level concepts like exchange types, queues, and so on.
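As a rough sketch of what such a bus abstraction can look like, here is a minimal publish/subscribe wrapper over the RabbitMQ Java client (the class and method names are made up, and Java is used only for illustration; in .NET, libraries such as MassTransit or NServiceBus play this role on top of the broker):

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.BuiltinExchangeType;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import com.rabbitmq.client.DeliverCallback;

public class SimpleEventBus implements AutoCloseable {
    private final Connection connection;
    private final Channel channel;
    private final String exchange;

    public SimpleEventBus(String host, String exchange) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost(host);
        this.connection = factory.newConnection();
        this.channel = connection.createChannel();
        this.exchange = exchange;
        // One durable topic exchange; services never touch queues or bindings directly.
        channel.exchangeDeclare(exchange, BuiltinExchangeType.TOPIC, true);
    }

    /** Publish an integration event, e.g. publish("order.created", jsonPayload). */
    public void publish(String eventName, String payload) throws Exception {
        channel.basicPublish(exchange, eventName, null,
                payload.getBytes(StandardCharsets.UTF_8));
    }

    /** Subscribe a named service queue to one event type. */
    public void subscribe(String serviceQueue, String eventName,
                          DeliverCallback handler) throws Exception {
        channel.queueDeclare(serviceQueue, true, false, false, null);
        channel.queueBind(serviceQueue, exchange, eventName);
        channel.basicConsume(serviceQueue, true, handler, consumerTag -> { });
    }

    @Override
    public void close() throws Exception {
        channel.close();
        connection.close();
    }
}
```

One service would then call something like bus.publish("order.created", json) while another calls bus.subscribe("billing", "order.created", handler), without either of them knowing about exchanges or bindings.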
I have just tried to give you high-level information, but in fact a message bus can do lots of other things.
I have an architectural problem.
I know what a (fanout, direct, topic, headers) exchange, bindings, and queues are, and almost everything about messaging architectures. I have the following problem and I need some advice.
I would like to add notification logic to my application, where each user receives, in real time, only the notifications intended for them. (Actually, I don't want to mention my UI and BE languages/frameworks, to keep an additional level of abstraction.) The UI will connect to RabbitMQ with WebSocket, SockJS, and STOMP. The UI will be only a consumer; the BE is the writer, the one which will add messages to RabbitMQ.
It's perfectly clear to me how this works if I have a direct exchange with a routing key that uniquely identifies the specific user (for example: my-routing-to-empoyee-with-id-1) and one queue per user. This feels too heavy to me (I actually don't know whether it is a normal situation to have that many queues).
Is there any solution where I can use only one queue and have each message delivered only to the user for whom it is intended?
I know of a solution where I can have a topic exchange with one writer and many subscribers, but that way I can filter messages only at the client level, which is not so secure. :(
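For concreteness, this is roughly what the per-user direct-exchange setup described above looks like with the RabbitMQ Java client (the exchange, queue, and routing-key names are made up):

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.BuiltinExchangeType;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class PerUserNotifications {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {

            ch.exchangeDeclare("notifications", BuiltinExchangeType.DIRECT, true);

            // One queue per user, bound with a routing key that identifies the user.
            String userKey = "employee-1";
            String userQueue = "notifications." + userKey;
            ch.queueDeclare(userQueue, true, false, false, null);
            ch.queueBind(userQueue, "notifications", userKey);

            // The backend targets a single user simply by publishing with that key.
            ch.basicPublish("notifications", userKey, null,
                    "Your report is ready".getBytes(StandardCharsets.UTF_8));

            // The browser would consume over STOMP/WebSocket instead, for example by
            // subscribing to /exchange/notifications/employee-1 through RabbitMQ's
            // Web-STOMP plugin, so only that user's messages reach that session.
        }
    }
}
```

This is the one-queue-per-user layout the question describes; the self-answer below explains why avoiding it with a single shared queue runs into the selective-consumer anti-pattern.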
Actually, I found a very interesting article which describes the problem I have pretty well.
What I want is called Selective Consumers, and according to Enterprise Integration Patterns it is an anti-pattern that we should not use.
For more details, all who want can read this article: https://derickbailey.com/2015/07/22/airport-baggage-claims-selective-consumers-and-rabbitmq-anti-patterns/
I'm having a play around with websockets and I'm having a bit of trouble wrapping my head around some stuff. Specifically, being able to send a whole bunch of subscribers different data without using a stupid amount of resources.
For example, if you had some sort of Twitter-like service, how would you send all of a person's followers a tweet that they have just posted (and do the same for the hundreds of other people doing the same)? It just seems that handling that many separate people is a bit absurd.
Can someone talk me through how you would go about treating each client individually? Please tell me if I have the whole idea of websockets wrong.
Thanks in advance!
P.S. for reference, I'm probably going to play around using either node or clojure (with aleph)
Use an established messaging protocol and broker on top of WebSockets.
It seems you are looking at WebSockets at the application layer when it is really more of a network protocol. A variety of messaging APIs exist (such as JMS), along with open-source message brokers that are designed to do the complex and scalable message routing.
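As a hedged sketch of that suggestion, here is what topic-based fan-out looks like with plain JMS and ActiveMQ (the broker URL and topic name are made up). Each tweet is published once to the author's topic and the broker delivers it to every subscribed follower, instead of the application looping over sockets itself:

```java
import javax.jms.Connection;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.Session;
import javax.jms.Topic;

import org.apache.activemq.ActiveMQConnectionFactory;

public class TweetFanout {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        Connection connection = factory.createConnection();
        connection.start();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);

        // One topic per author; followers subscribe to it, the author publishes once.
        Topic aliceTweets = session.createTopic("tweets.alice");

        // A follower's subscription (in practice this would live in another process,
        // e.g. behind that follower's WebSocket connection).
        MessageConsumer follower = session.createConsumer(aliceTweets);
        follower.setMessageListener(message ->
                System.out.println("follower received a tweet"));

        // The author posts one message; the broker fans it out to all subscribers.
        MessageProducer producer = session.createProducer(aliceTweets);
        producer.send(session.createTextMessage("hello, followers"));

        Thread.sleep(500); // give the asynchronous listener a moment before closing
        connection.close();
    }
}
```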
I feel a little bit confused: for about 24 hours I have been thinking about which group broadcasting technology to use in my project.
Basically, what I need is:
create groups (by some backend process)
broadcast messages by any client (1:N, N:N)
(potentially) direct messages (1:1)
(important) authenticate/authorize clients with my own backend (say, through some kind of HTTP API)
to be able to kick specific clients by backend process (or server plugin)
Here is what I will have:
Backend-related process(es) in either Ruby or Haxe
Frontend in JS+Haxe (Flash9) in the browser, so ideally communicating through ports 80/443, but not necessarily.
So, this technology will have to be easily accessible in Haxe for Flash and preferably Ruby.
I've been thinking about: RabbitMQ (or OpenAMQ), RabbitMQ+STOMP, ejabberd, ejabberd+BOSH, juggernaut (with a need to write a Haxe lib for it).
Any ideas/suggestions?
Yurii,
RabbitMQ, Haxe and as3: http://geekrelief.wordpress.com/2008/12/15/hxamqp-amqp-with-haxe/
RabbitMQ, Ruby and ACLs: http://pastie.org/pastes/368315
You might also want to look at using Nanite with RabbitMQ to manage backend groups: http://brainspl.at/articles/2008/10/11/merbcamp-keynote-and-introducing-nanite
You say you need:
* broadcast messages by any client (1:N, N:N)
* (potentially) direct messages (1:1)
You can easily do both using RabbitMQ. RabbitMQ supports both cases, 1:N pubsub and 1:1 messaging, with 'direct' exchanges.
The direct exchange pattern is as follows:
Any publisher (group member) sends a message to the broker with a 'routing key' such as "yurii". RabbitMQ matches this key with subscription bindings in the routing table (aka "exchange") for you. Each binding represents a subscription by a queue, expressing interest in messages with a given routing key. When the routing and binding keys match, the message is then routed to queues for subsequent consumption by clients (group members). This works for 1:N and 1:1 cases; with N:N building on 1:N.
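As a rough illustration of that routing model with the RabbitMQ Java client (exchange and queue names are made up), two group members bind their queues with the same key, so a single publish reaches both of them; that is the 1:N case, and a key used by only one queue gives you 1:1:

```java
import java.nio.charset.StandardCharsets;

import com.rabbitmq.client.BuiltinExchangeType;
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;

public class DirectExchangeGroups {
    public static void main(String[] args) throws Exception {
        ConnectionFactory factory = new ConnectionFactory();
        factory.setHost("localhost");
        try (Connection conn = factory.newConnection();
             Channel ch = conn.createChannel()) {

            ch.exchangeDeclare("groups", BuiltinExchangeType.DIRECT, false);

            // Two group members, each with their own queue, both bound to "yurii".
            for (String member : new String[] { "member-a", "member-b" }) {
                ch.queueDeclare(member, false, false, true, null);
                ch.queueBind(member, "groups", "yurii");
                ch.basicConsume(member, true,
                        (tag, delivery) -> System.out.println(member + " got: "
                                + new String(delivery.getBody(), StandardCharsets.UTF_8)),
                        tag -> { });
            }

            // One publish with routing key "yurii" is delivered to both queues.
            ch.basicPublish("groups", "yurii", null,
                    "hello group".getBytes(StandardCharsets.UTF_8));

            Thread.sleep(500); // let the asynchronous consumers print before closing
        }
    }
}
```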
Introduction to the routing model: http://blogs.digitar.com/jjww/2009/01/rabbits-and-warrens/
General intro: http://google-ukdev.blogspot.com/2008/09/rabbitmq-tech-talk-at-google-london.html
You also require:
* (important) authenticate/authorize clients with my own backend (say, through some kind of HTTP API)
Please see the ACLs code for this (link above). There is also an HTTP interface to RabbitMQ, but we have not yet combined the HTTP front end with the ACL code. That shouldn't hold you back though. Please come to the rabbitmq-discuss list, where this topic has been talked about recently.
You also require:
* create groups (by some backend process)
* to be able to kick specific clients by backend process (or server plugin)
I suggest looking at how tools like Nanite and Workling do this. Group creation is not usually part of a messaging system; instead, in RabbitMQ, you create routing patterns using subscriptions. You can kick specific clients by sending them messages using whichever key they used to bind their consuming queue to the exchange.
Hope this helps!
alexis
If you are going to be doing Flash dev, have you looked at SmartFoxServer? It has everything you want and has native Flash client libraries. I used it on a project to manage tens of thousands of connected users.
http://www.smartfoxserver.com/
Well group communication is a slightly different beast than simple messaging / queuing.
Most group communication systems are commercial but there are two (that I know of) open-source / free you can take a look at:
Spread Toolkit
OpenAIS
It might be tough to find Ruby bindings for either of these, though. Spread, and probably OpenAIS, view clients as trusted, so a browser-based client doesn't make sense. You'd need to have your browser front-ends talk to one or more group clients on the back-end.
We've been using ActiveMQ. Our vendor who supplies our HR system is using Ruby/ActiveMQ to broadcast and receive updates.
http://activemq.apache.org/cross-language-clients.html
Other open source message brokers which support the Stomp protocol are OpenMQ, which is included in GlassFish V3 and GlassFish 2.1.1 but also works standalone, and soon the JBoss message broker, HornetQ V2.1.
OpenMQ supports temporary queues, which are useful for RPC-style communication, but ActiveMQ offers some interesting features in its Stomp adapter too.
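To make the temporary-queue idea concrete, here is a hedged sketch of RPC-style request/reply with plain JMS (the queue name and payload are made up); the requester creates a throwaway reply queue and advertises it via JMSReplyTo, so every caller gets its own private reply channel:

```java
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TemporaryQueue;
import javax.jms.TextMessage;

public class TempQueueRpc {
    /** Send one request to a service queue and wait briefly for its reply. */
    public static void sendRequest(Session session, Queue serviceQueue)
            throws JMSException {
        // The temporary queue lives only as long as the connection; ideal for replies.
        TemporaryQueue replyQueue = session.createTemporaryQueue();

        TextMessage request = session.createTextMessage("price for SKU-42?");
        request.setJMSReplyTo(replyQueue);
        session.createProducer(serviceQueue).send(request);

        // Block (with a timeout) for the single reply addressed to our private queue.
        Message reply = session.createConsumer(replyQueue).receive(5000);
        if (reply instanceof TextMessage) {
            System.out.println("reply: " + ((TextMessage) reply).getText());
        }
    }
}
```

The responding service simply reads the JMSReplyTo header of each request and sends its answer there.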