Spring SseEmitter: can it be persisted to an external data store?

I was able to successfully implement Server-Sent Events with the Spring framework. I have a use case in which my application [SSE-Publisher] gets notifications from a third-party application over a REST API.
On receipt of these notifications, it broadcasts events to the connected clients.
This works for a single instance of SSE-Publisher without a load balancer.
If I have a load balancer between SSE-Publisher and the external application, clients can subscribe to SSE events on any instance of SSE-Publisher.
Now, when a notification is received from the external application and routed to an arbitrary SSE-Publisher instance, that instance doesn't know how to publish the events to clients connected to other instances.
So, in this case, can we store the SseEmitter object in some external store like MongoDB / Redis?
Later on, any SSE-Publisher instance could then pull it from the store and publish the event.
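For what it's worth, an SseEmitter wraps a live HTTP response, so it cannot meaningfully be serialized into MongoDB or Redis; the usual pattern is to keep emitters local to each instance and fan notifications out between instances over something like Redis pub/sub. A minimal sketch, assuming Spring Web and Spring Data Redis — the channel name and endpoints are illustrative, and the RedisMessageListenerContainer bean that routes channel messages to onRedisMessage is omitted:

    import java.io.IOException;
    import java.util.List;
    import java.util.concurrent.CopyOnWriteArrayList;
    import org.springframework.data.redis.core.StringRedisTemplate;
    import org.springframework.web.bind.annotation.*;
    import org.springframework.web.servlet.mvc.method.annotation.SseEmitter;

    @RestController
    public class SsePublisherController {

        // Live emitters exist only in this instance's memory.
        private final List<SseEmitter> localEmitters = new CopyOnWriteArrayList<>();
        private final StringRedisTemplate redis;

        public SsePublisherController(StringRedisTemplate redis) {
            this.redis = redis;
        }

        // Clients subscribe to SSE events here, on whichever instance they hit.
        @GetMapping("/events")
        public SseEmitter subscribe() {
            SseEmitter emitter = new SseEmitter(0L); // no timeout
            localEmitters.add(emitter);
            emitter.onCompletion(() -> localEmitters.remove(emitter));
            emitter.onTimeout(() -> localEmitters.remove(emitter));
            return emitter;
        }

        // The third-party notification can land on any instance; publish it to
        // Redis so every instance (including this one) fans it out to its clients.
        @PostMapping("/notify")
        public void onExternalNotification(@RequestBody String payload) {
            redis.convertAndSend("sse-events", payload);
        }

        // Invoked by a RedisMessageListenerContainer subscribed to "sse-events".
        public void onRedisMessage(String payload) {
            for (SseEmitter emitter : localEmitters) {
                try {
                    emitter.send(SseEmitter.event().data(payload));
                } catch (IOException ex) {
                    localEmitters.remove(emitter); // client went away
                }
            }
        }
    }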

Related

How to subscribe a specific instance within an elastic beanstalk application to an SNS topic?

Ok, so I have an Elastic Beanstalk application with a scalable web tier, served behind an ELB. Long story short, I need to be able to subscribe a specific instance within my web tier to an SNS topic. Is it safe for me to use a standard method to get the instance IP address (as detailed in Python here: How can I get the IP address of eth0 in Python?) and then simply subscribe to an SNS topic using that IP as an HTTP subscriber?
Why? Good question...
My data model is made up of lots of objects, many of which can have an attached set of users who may want to observe those objects. The web tier in my application is responsible for handling the socket interface (using socket.io) for client applications.
When a user is created in the system, so too is an SNS topic for that user, allowing notifications to be pushed to the user when an object they are interested in changes. The way I am planning to set this up, a client application will connect to EB via socket.io, at which point the server instance it connected to will subscribe to that user's SNS topic. Then, when an interesting object changes, notifications will be posted to the associated user's topic, thus notifying the server instance that the client application has an open connection to, which can then send a message down the socket.
I believe it is important that the specific instance is subscribed rather than the web tier's external CNAME or IP, as the client application is connected to a specific instance and so only that instance can send messages over its socket. Subscribing the load balancer would be no good, as the notification may be delivered to an instance that the user is not connected to.
I believe the question at the top is all I need, but I'm open to creative solutions if my reasoning seems flawed?
Just in case anyone gets stuck down this same rabbit hole: the solution was to use Redis pub/sub rather than SNS and SQS.
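A minimal sketch of what that Redis pub/sub replacement can look like, assuming the Jedis client; the per-user channel name is an illustrative convention, and the println stands in for the real socket.io push:

    import redis.clients.jedis.Jedis;
    import redis.clients.jedis.JedisPubSub;

    public class UserNotificationRelay {

        // On the instance holding the user's socket: subscribe to that user's
        // channel when the client connects (subscribe blocks, hence the thread).
        public static void listenForUser(String userId) {
            new Thread(() -> {
                try (Jedis jedis = new Jedis("localhost", 6379)) {
                    jedis.subscribe(new JedisPubSub() {
                        @Override
                        public void onMessage(String channel, String message) {
                            // Forward over the user's open socket.io connection here.
                            System.out.println("push to " + userId + ": " + message);
                        }
                    }, "user:" + userId);
                }
            }).start();
        }

        // Wherever an interesting object changes: publish to the user's channel;
        // only the instance subscribed for that user will receive it.
        public static void notifyUser(String userId, String payload) {
            try (Jedis jedis = new Jedis("localhost", 6379)) {
                jedis.publish("user:" + userId, payload);
            }
        }
    }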

Using SignalR to push to clients from a long running process

Firstly, here is state of my application:
I have a request coming in from a client (AngularJS app) into my API (Web API 2). This request is processed and a record is stored in a database. A response is then sent back to the client.
Currently, I have a Windows service polling for and processing these records.
Processing this record can be long running. As a side effect to processing this record, there might be notifications generated to be sent back to one or more clients.
My question is how do I architect this, such that I can utilise SignalR to be able to push the notifications back to the client.
My stumbling block:
I can register and store (in-memory, backed by a db) the client's SignalR connection id along with the application's own user identifier. This way I can match a generated notification with a SignalR client.
At the moment, I'm hosting the SignalR hubs within the IIS process. So how do I get back from the Windows service to IIS to notify the client when a notification is generated?
Furthermore, I should say I am already using SignalR elsewhere in the application and am using a SQL Server backplane.
The issues with the current architecture:
Any processing is done in the same web request, and notifications are sent out via SignalR before a response to the client is returned. Luckily, the processing is minimal and very quick.
I think this is not very good in terms of performance or maintenance in the long run.
Potential solutions:
Remove the SignalR hubs from IIS and host them somewhere else - a Windows service?
Expose an endpoint on the API for the Windows service to call to push a notification once it is generated? (See the sketch below.)
Finally, to add more ingredients to the mix: use a service bus to remove the polling component of the Windows service and move to a pub/sub architecture. Although this is more work than I want to bite off right now.
Any ideas/recommendations/constructive criticisms are welcome.
Thanks.
Take a look at this sample for starters.
Another, more advanced solution is to use a backplane to manage the communication between the front end and the back end...
HTH
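SignalR itself is .NET, but the shape of the second option above (expose an endpoint on the API for the Windows service to call) is language-agnostic. A sketch of the worker side in Java, where the endpoint URL and JSON payload are assumptions; the API would look up the stored connection id for the user and push the message over the open hub connection:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class WorkerNotifier {

        private static final HttpClient client = HttpClient.newHttpClient();

        // Called by the long-running process whenever it generates a notification.
        public static void sendNotification(String userId, String message) throws Exception {
            String json = "{\"userId\":\"" + userId + "\",\"message\":\"" + message + "\"}";
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("https://api.example.com/internal/notifications"))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(json))
                    .build();
            // The API receiving this POST matches userId to a SignalR
            // connection id and sends the message down the socket.
            client.send(request, HttpResponse.BodyHandlers.ofString());
        }
    }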

Realtime connection (SockJS/Socket.io) and Microservice application

Currently I'm building an application with a microservice architecture.
The first application is an API that does the user authentication and receives requests to initiate/keep a realtime connection with the user (via Socket.io or SockJS); the system stores the socket id in the User object.
The second application is a WORKER doing some stuff, and sometimes it has to send realtime data to the user.
The question is: How should the second application (the WORKER) send realtime data to the user?
Should the WORKER send a message to the API, which then forwards it to the user?
Or can the WORKER send the message directly to the user?
Thank you
In a perfect world, the service responsible for publishing realtime push notifications should be separate from the other services, since a microservice is a set of narrowly related methods and there is no relation between the authentication ("user") service and the realtime push notification service. Broken down further, authentication is actually a separate service too; this is only FYI, and there might be a reason you did it this way.
How should the services communicate? There are actually many ways to implement the internal communication between services. An MQ solution would add more technology to your stack, like RabbitMQ, Beanstalk, Gearman, etc.
You can also do the communication on top of the HTTP protocol, but you need to consider that HTTP calls add extra cost.
The ideal solution is for each service to have two interfaces to execute on its behalf: an HTTP interface and an MQ interface (console).
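To make the MQ option concrete, a minimal sketch using the RabbitMQ Java client; the queue name and payload are illustrative. The WORKER publishes instead of pushing directly, and the API instance that already holds the user's socket consumes and forwards:

    import java.nio.charset.StandardCharsets;
    import com.rabbitmq.client.Channel;
    import com.rabbitmq.client.Connection;
    import com.rabbitmq.client.ConnectionFactory;
    import com.rabbitmq.client.DeliverCallback;

    public class RealtimeBridge {

        // WORKER side: publish the realtime payload instead of pushing directly.
        public static void publish(String payload) throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            try (Connection conn = factory.newConnection();
                 Channel channel = conn.createChannel()) {
                channel.queueDeclare("realtime-out", true, false, false, null);
                channel.basicPublish("", "realtime-out", null,
                        payload.getBytes(StandardCharsets.UTF_8));
            }
        }

        // API side: consume and forward over the socket it already holds.
        public static void consume() throws Exception {
            ConnectionFactory factory = new ConnectionFactory();
            factory.setHost("localhost");
            Connection conn = factory.newConnection(); // kept open for the consumer
            Channel channel = conn.createChannel();
            channel.queueDeclare("realtime-out", true, false, false, null);
            DeliverCallback onMessage = (tag, delivery) -> {
                String payload = new String(delivery.getBody(), StandardCharsets.UTF_8);
                // Look up the stored socket id and emit via Socket.io/SockJS here.
                System.out.println("forward to user: " + payload);
            };
            channel.basicConsume("realtime-out", true, onMessage, tag -> {});
        }
    }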

Web Chat application - how to persist data properly?

We are currently implementing a simple chat app that allows users to create conversations and exchange messages.
Our basic setup involves AngularJS on the front-end and SignalR hub on the back end. It works like this:
Client app opens a Websockets connection to our real-time service (based on SignalR) and subscribes to chat updates
User starts sending messages. For each new message, client app calls HTTP API to send it
The API stores the message in the database and notifies our real-time service that there is a new message
Real-time service pushes the message via Websockets to subscribed Clients
However, we noticed that opening a new HTTP connection for every message may not be a good idea, so we were wondering whether Websockets should be used both to send and to receive messages.
The new setup would look like this:
Client app opens a Websockets connection with real-time service
User starts sending messages. Client app pushes the messages to real-time service using Websockets
Real-time service picks up the message, notifies our persistence service it needs to be stored, then delivers the message to other subscribed Clients
Persistence service stores the message
Which of these options is more typical when setting up an efficient and performant chat system? Thanks!
You don't need a separate HTTP or Web API to persist messages. Persist each message in the hub method that broadcasts it. You can use async methods in the hub and create async tasks to save the message.
Using a separate persistence API and then calling SignalR to broadcast isn't efficient, and why duplicate all the effort?
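The hub is C# territory, but the advice translates directly. A sketch of the same shape in Java, where MessageStore and Broadcaster are hypothetical stand-ins for the database writer and the realtime push:

    import java.util.concurrent.CompletableFuture;

    public class ChatHub {

        private final MessageStore store;       // hypothetical: writes to the database
        private final Broadcaster broadcaster;  // hypothetical: pushes to subscribed clients

        public ChatHub(MessageStore store, Broadcaster broadcaster) {
            this.store = store;
            this.broadcaster = broadcaster;
        }

        // Invoked when a client sends a message over the realtime connection:
        // the same handler persists and broadcasts, so no second HTTP call is made.
        public void onMessage(String conversationId, String message) {
            // Persist asynchronously so the broadcast path is not blocked.
            CompletableFuture.runAsync(() -> store.save(conversationId, message));
            broadcaster.sendToConversation(conversationId, message);
        }

        interface MessageStore { void save(String conversationId, String message); }
        interface Broadcaster { void sendToConversation(String conversationId, String message); }
    }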

How would I create an asynchronous notification system using RESTful web services?

I have a Java application which I make available via RESTful web services. I want to create a mechanism so clients can register for notifications of events. The rub is that there is no guarantee that the client programs will be Java programs, and hence I won't be able to use JMS for this (i.e. if every client were a Java app, we could allow the clients to subscribe to a JMS topic and listen there for notification messages).
The use case is roughly as follows:
A client registers itself with my server application, via a RESTful web service call, indicating that it is interested in getting a notification message anytime a specific object is updated.
When the object of interest is updated then my server application needs to put out a notification to all clients who are interested in being notified of this event.
As I mentioned above I know how I would do this if all clients were Java apps -- set up a topic that clients can listen to for notification messages. However I can't use that approach since it's likely that many clients will not be able to listen to a JMS topic for notification messages.
Can anyone here enlighten me as to how this problem is typically solved? What mechanism can I provide using a RESTful API?
I can think of four approaches:
A Twitter approach: You register the Client and then it calls back periodically with a GET to retrieve any notifications.
The Client describes how it wants to receive the notification when it makes the registration request. That way you could allow JMS for those that can handle it and fall back to email or similar for those that can't.
Take a URL during the registration request and POST back to each Client individually when you have a notification (see the sketch after this list). Hardly Pub/Sub, but the effect would be similar. Of course you'd be assuming that the Client was listening for these notifications and had implemented their Client according to your specs.
Buy IBM WebSphere MQ (MQSeries). Best IBM product ever. Not REST but it's great at multi-platform integration like this.
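A minimal sketch of the third approach (POST back to registered callback URLs), using Java's built-in HttpClient; the registry and payload shape are illustrative:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;

    public class WebhookNotifier {

        private final HttpClient http = HttpClient.newHttpClient();
        // objectId -> callback URLs supplied at registration time
        private final Map<String, Set<String>> subscribers = new ConcurrentHashMap<>();

        // Called from the RESTful registration endpoint.
        public void register(String objectId, String callbackUrl) {
            subscribers.computeIfAbsent(objectId, k -> ConcurrentHashMap.newKeySet())
                       .add(callbackUrl);
        }

        // Called by the server application whenever the object is updated.
        public void notifyUpdate(String objectId, String eventJson) {
            for (String url : subscribers.getOrDefault(objectId, Set.of())) {
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create(url))
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(eventJson))
                        .build();
                // Fire and forget; a real system would retry or drop dead URLs.
                http.sendAsync(request, HttpResponse.BodyHandlers.discarding());
            }
        }
    }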
We have this problem and need low-latency asynchronous updates to relatively few listeners. Our two alternative solutions have been:
Polling: Hammer the list of resources you need with GET requests
Streaming event updates: Provide a monitor resource. The server keeps the connection open. As events occur, the server transmits a stream of event descriptions using multipart content-type or chunked transfer-encoding.
In the response to the RESTful request, you could supply an individualized RESTful URL that the client can monitor for updates.
That is, you have one URL (/Signup.htm, say) that accepts the client's information (id if appropriate, id of the object to monitor) and returns a customized URL (/Monitor/XYZPDQ), where XYZPDQ is a UUID created for that particular client. The client can poll that customized URL at some interval, and it will receive a notification if the update occurs.
If you don't care about who the client is (and don't want to create so many UUIDs) you could just have separate RESTful URLs for each object that might want to be monitored, and the "signup" URL would just return the correct one.
As John Saunders says, you can't really do a more straightforward publish/subscribe via HTTP.
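A sketch of that signup/monitor flow, keeping the /Signup.htm and /Monitor/XYZPDQ names from above; the queue-per-token registry is an illustrative assumption, and wiring these methods to actual HTTP routes is left out:

    import java.util.Map;
    import java.util.Queue;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;

    public class MonitorRegistry {

        private final Map<String, Queue<String>> monitors = new ConcurrentHashMap<>();

        // POST /Signup.htm -> mint a per-client monitor URL.
        public String signup(String objectId) {
            String token = UUID.randomUUID().toString();
            monitors.put(token, new ConcurrentLinkedQueue<>());
            // A real system would also map objectId -> token so updates land here.
            return "/Monitor/" + token;
        }

        // Server side: enqueue a notification for a monitor watching the object.
        public void publish(String token, String event) {
            Queue<String> q = monitors.get(token);
            if (q != null) q.add(event);
        }

        // GET /Monitor/{token} -> drain and return pending notifications.
        public String poll(String token) {
            Queue<String> q = monitors.get(token);
            if (q == null) return ""; // unknown token
            StringBuilder sb = new StringBuilder();
            for (String e; (e = q.poll()) != null; ) sb.append(e).append('\n');
            return sb.toString();
        }
    }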
If polling is not acceptable, I would consider using WebSockets (e.g. see here). Though to be honest, I like the idea suggested by user189423 of multipart content-type or chunked transfer-encoding as well.
