How would I create an asynchronous notification system using RESTful web services?

I have a Java application which I make available via RESTful web services. I want to create a mechanism so clients can register for notifications of events. The rub is that there is no guarantee that the client programs will be Java programs and hence I won't be able to use JMS for this (i.e. if every client was a Java app then we could allow the clients to subscribe to a JMS topic and listen there for notification messages).
The use case is roughly as follows:
A client registers itself with my server application, via a RESTful web service call, indicating that it is interested in getting a notification message anytime a specific object is updated.
When the object of interest is updated then my server application needs to put out a notification to all clients who are interested in being notified of this event.
As I mentioned above I know how I would do this if all clients were Java apps -- set up a topic that clients can listen to for notification messages. However I can't use that approach since it's likely that many clients will not be able to listen to a JMS topic for notification messages.
Can anyone here enlighten me as to how this problem is typically solved? What mechanism can I provide using a RESTful API?

I can think of four approaches:
A Twitter approach: You register the Client and then it calls back periodically with a GET to retrieve any notifications.
The Client describes how it wants to receive the notification when it makes the registration request. That way you could allow JMS for those that can handle it and fall back to email or similar for those that can't.
Take a URL during the registration request and POST back to each Client individually when you have a notification (see the sketch after this list). Hardly Pub/Sub but the effect would be similar. Of course you'd be assuming that the Client was listening for these notifications and had implemented their Client according to your specs.
Buy IBM WebSphere MQ (MQSeries). Best IBM product ever. Not REST but it's great at multi-platform integration like this.
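For the callback-URL option, here is a minimal Java sketch (JDK 11+ HttpClient) of what the server side could look like; the class name and the in-memory registry are illustrative assumptions, not an existing API:
    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArraySet;

    // Hypothetical registry: object id -> callback URLs registered by interested clients.
    public class CallbackNotifier {
        private final Map<String, Set<URI>> callbacks = new ConcurrentHashMap<>();
        private final HttpClient http = HttpClient.newHttpClient();

        // Called by the RESTful registration resource when a client signs up.
        public void register(String objectId, URI callbackUrl) {
            callbacks.computeIfAbsent(objectId, k -> new CopyOnWriteArraySet<>()).add(callbackUrl);
        }

        // Called whenever the object of interest is updated.
        public void notifyUpdate(String objectId, String eventJson) {
            for (URI url : callbacks.getOrDefault(objectId, Set.of())) {
                HttpRequest request = HttpRequest.newBuilder(url)
                        .header("Content-Type", "application/json")
                        .POST(HttpRequest.BodyPublishers.ofString(eventJson))
                        .build();
                // Fire and forget; a real system would retry and drop dead endpoints.
                http.sendAsync(request, HttpResponse.BodyHandlers.discarding());
            }
        }
    }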

We have this problem and need low-latency asynchronous updates to relatively few listeners. Our two alternative solutions have been:
Polling: Hammer the list of resources you need with GET requests
Streaming event updates: Provide a monitor resource. The server keeps the connection open. As events occur, the server transmits a stream of event descriptions using multipart content-type or chunked transfer-encoding.
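For the streaming option, a rough JAX-RS sketch that keeps the response open and flushes one event description per chunk; the shared BlockingQueue is an assumed stand-in for whatever actually produces the events, and as a simplification each event here would reach only a single connected client:
    import java.io.IOException;
    import java.io.OutputStream;
    import java.nio.charset.StandardCharsets;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;
    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.StreamingOutput;

    @Path("/monitor")
    public class MonitorResource {

        // Hypothetical event source; code that updates the watched objects would offer() into it.
        private static final BlockingQueue<String> events = new LinkedBlockingQueue<>();

        @GET
        @Produces("text/plain")
        public StreamingOutput stream() {
            return (OutputStream out) -> {
                try {
                    while (true) {
                        String event = events.take();            // block until an event occurs
                        out.write((event + "\n").getBytes(StandardCharsets.UTF_8));
                        out.flush();                             // push this chunk to the client now
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();          // shutting down; end the stream
                }
            };
        }
    }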

In the response to the RESTful request, you could supply an individualized RESTful URL that the client can monitor for updates.
That is, you have one URL (/Signup.htm, say) that accepts the client's information (id if appropriate, id of object to monitor) and returns a customized URL (/Monitor/XYZPDQ), where XYZPDQ is a UUID created for that particular client. The client can poll that customized URL at some interval, and it will receive a notification if the update occurs.
If you don't care about who the client is (and don't want to create so many UUIDs) you could just have separate RESTful URLs for each object that might want to be monitored, and the "signup" URL would just return the correct one.
As John Saunders says, you can't really do a more straightforward publish/subscribe via HTTP.
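A minimal sketch of that signup/monitor pair with JAX-RS, assuming pending notifications are simply queued in memory per monitor id; every name here is illustrative:
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    import java.util.Queue;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.ConcurrentLinkedQueue;
    import javax.ws.rs.GET;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.QueryParam;
    import javax.ws.rs.core.Response;

    @Path("/")
    public class MonitorRegistry {

        // monitor id -> notifications waiting to be picked up by that client
        private static final Map<String, Queue<String>> pending = new ConcurrentHashMap<>();

        @POST
        @Path("signup")
        public Response signup(@QueryParam("objectId") String objectId) {
            String monitorId = UUID.randomUUID().toString();     // the "XYZPDQ" part
            pending.put(monitorId, new ConcurrentLinkedQueue<>());
            // A real implementation would also remember objectId -> monitorId
            // so that updates to that object land in the right queue.
            return Response.ok("/monitor/" + monitorId).build();
        }

        @GET
        @Path("monitor/{id}")
        public Response poll(@PathParam("id") String monitorId) {
            Queue<String> queue = pending.get(monitorId);
            if (queue == null) {
                return Response.status(Response.Status.NOT_FOUND).build();
            }
            List<String> batch = new ArrayList<>();
            String n;
            while ((n = queue.poll()) != null) {
                batch.add(n);                                    // drain whatever has accumulated
            }
            return batch.isEmpty()
                    ? Response.noContent().build()               // 204: nothing new yet
                    : Response.ok(String.join("\n", batch)).build();
        }
    }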

If polling is not acceptable, I would consider using web-sockets (e.g. see here). Though to be honest I like the idea suggested by user189423 of multipart content-type or chunked transfer-encoding as well.

Related

Backend to Frontend Websocket

I need to add support for instant messages or reminders to my web application. I was reading that this could be accomplished with websockets.
The idea is that while the web app is being used, it could receive messages originating from the server (not as a response to a request). For example, the server application might want to remind the user about an unpaid service.
As I understand it, when the web app starts it connects to the websocket server through a standard HTTP request to announce itself as a client. My question is:
"If I have hundreds of clients connected at the same time, how do I call one in particular?"
Do I need to store every websocket object in an array or something so I can use it to send a message when it is required?
What would be the right approach?
Thanks.
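In case the server side were Java, here is a minimal sketch of the usual answer with the Java WebSocket API (JSR 356): keep a map from your application's own user identifier to the open Session, and look the session up when the server wants to push to one particular client. The endpoint path and the userId path parameter are assumptions for the example:
    import java.io.IOException;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.websocket.OnClose;
    import javax.websocket.OnOpen;
    import javax.websocket.Session;
    import javax.websocket.server.PathParam;
    import javax.websocket.server.ServerEndpoint;

    @ServerEndpoint("/reminders/{userId}")
    public class ReminderEndpoint {

        // user id -> that user's currently open websocket session
        private static final Map<String, Session> sessions = new ConcurrentHashMap<>();

        @OnOpen
        public void onOpen(Session session, @PathParam("userId") String userId) {
            sessions.put(userId, session);
        }

        @OnClose
        public void onClose(Session session, @PathParam("userId") String userId) {
            sessions.remove(userId, session);
        }

        // Called by whatever server-side code decides a reminder is due for this user.
        public static void sendTo(String userId, String message) throws IOException {
            Session session = sessions.get(userId);
            if (session != null && session.isOpen()) {
                session.getBasicRemote().sendText(message);
            }
        }
    }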

Async response from API Gateway in microservices

In a microservice architecture, it is suggested that:
Client app to API gateway communication should be synchronous (like REST over HTTP).
API gateway to microservice communication should also be synchronous.
But service to service communication should be asynchronous.
Another rule you should try to follow, as much as possible, is to use only asynchronous messaging between the internal services, and to use synchronous communication (such as HTTP) only from the client apps to the front-end services (API Gateways plus the first level of microservices).
https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/architect-microservice-container-applications/asynchronous-message-based-communication
Now, if I understood it right, when the user sends a request to the API gateway, and it in turn calls the first service, it will return an acknowledgement (with some GUID) which will be passed to the client application. But the services will keep on executing the request.
Now the question pops up: how will they notify the client application when the request has been processed completely? One way is that the client can check the status using the GUID passed to it.
But can it be done with some push notification? How can we integrate server-to-server push notifications?
I have a slightly different understanding of this: it says communication between services should be asynchronous, while communication from the client app to the API gateway and from the API gateway to a service should be REST API calls.
So we don't need to do anything special, as these are simple API calls and the pipeline will handle request/response tracking, while asynchronous calls between services will increase the throughput of the service.
Now, if I understood it right, when the user sends a request to the API gateway, and it in turn calls the first service, it will return an acknowledgement (with some GUID) which will be passed to the client application. But the services will keep on executing the request.
No, the microservices should not continue to execute the request; it is already finished. They will, when required, update their internal cache (their local representation, to be more precise) of the needed remote data (data from the microservice that executed the request) when that remote data has changed. The best way to do that update is using integration events (i.e. when a microservice executes a request that mutates the data, it publishes an event to the subscribed microservices).
The microservices should not communicate, not even asynchronously, in order to fulfill a request from the gateway or clients. They should use background tasks to prepare the data ahead of time for when a request comes in.
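For illustration, a minimal sketch of the publishing side of such an integration event, here using JMS 2.0; the broker, topic naming and payload format are all assumptions:
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.Topic;

    public class OrderEventsPublisher {

        private final ConnectionFactory connectionFactory;

        public OrderEventsPublisher(ConnectionFactory connectionFactory) {
            this.connectionFactory = connectionFactory;
        }

        // Called right after the owning microservice has committed a change to the order.
        public void publishOrderChanged(String orderId, String changeJson) {
            try (JMSContext context = connectionFactory.createContext()) {
                Topic topic = context.createTopic("orders." + orderId + ".changed");
                context.createProducer().send(topic, changeJson);   // subscribed services update their caches
            }
        }
    }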
You're depicting a scenario where the whole interaction between the system and external actors (to put it bluntly, the users) follows an asynchronous model. This is perfectly reasonable, but only if you really need it. As a matter of fact, if you are choosing to let 'the outside' interact with your system through REST APIs, maybe you don't need it at all.
If the system receives requests through a synchronous application endpoint, such as a REST endpoint, it has to complete the request before sending a response, otherwise the response would be meaningless. Consider an API like
POST users/:username/notifications
A notification is asynchronous by its nature, but the request just states that 'a new notification should be appended to the user's notifications collection'. The API responds 201, which means 'OK, the notification is now associated with the user; it will be pushed on some channel, eventually'. This is a 'transactional' way to describe an asynchronous interaction.
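A small JAX-RS sketch of that shape, just to make the "respond 201 now, push later" idea concrete; the in-memory store and delivery queue are stand-ins for a real database and broker:
    import java.net.URI;
    import java.util.List;
    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArrayList;
    import java.util.concurrent.LinkedBlockingQueue;
    import javax.ws.rs.POST;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.core.Response;

    @Path("/users/{username}/notifications")
    public class NotificationsResource {

        // Hypothetical in-memory store and delivery queue.
        private static final Map<String, List<String>> store = new ConcurrentHashMap<>();
        private static final BlockingQueue<String> toDeliver = new LinkedBlockingQueue<>();

        @POST
        public Response create(@PathParam("username") String username, String body) {
            // 1. Append the notification synchronously: the "transactional" part.
            String id = UUID.randomUUID().toString();
            store.computeIfAbsent(username, k -> new CopyOnWriteArrayList<>()).add(id + ":" + body);
            // 2. Hand it off for asynchronous delivery on whatever channel applies.
            toDeliver.offer(username + "/" + id);
            // 3. Answer immediately: the notification exists now, the push happens eventually.
            return Response.created(URI.create("/users/" + username + "/notifications/" + id)).build();
        }
    }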
Another scenario comes when the user wants to subscribe to the notification channel. I expect that this would be implemented with a bi-directional, asynchronous, pub/sub communication protocol, such as websockets.
In both cases, however, it doesn't matter how the microservices communicate with each other: if the request is synchronous, the first service of 'the chain' should wait until it is ready to respond. This is the reason why the API gateway forwards the request over HTTP.
On the other hand, asynchronous communication can be used to enforce consistency between services, rather than to carry the actual communication. Let's say that the Orders service sends data to a broker: each time some attribute of orders[orderId] changes, it publishes the change on the /orders/:orderId topic. At the same time, it exposes an internal HTTP endpoint. Each service caches data from the services it depends on. The User service makes a GET /orders/:orderId and, while it sends a response to the requester, puts the data in a local cache and subscribes to the orders/:orderId topic. Each time a 'mutation' is sent on this topic, the User service catches it and applies the mutation to the corresponding cached object. The communication is synchronous, stays synchronous, and is relatively simple to manage; at the same time your system can hold replicated data and still be [eventually] consistent.
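A rough sketch of the consuming/caching side of that Orders/User example, again assuming a JMS broker, a topic per order and a plain in-memory cache; class and topic names are illustrative only:
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.JMSException;
    import javax.jms.TextMessage;
    import javax.jms.Topic;

    public class OrderCache {

        // orderId -> last known representation of the order (e.g. JSON)
        private final Map<String, String> cache = new ConcurrentHashMap<>();
        private final ConnectionFactory connectionFactory;

        public OrderCache(ConnectionFactory connectionFactory) {
            this.connectionFactory = connectionFactory;
        }

        // Called after the User service has fetched GET /orders/:orderId over HTTP.
        public void put(String orderId, String orderJson) {
            cache.put(orderId, orderJson);
            subscribeToMutations(orderId);
        }

        private void subscribeToMutations(String orderId) {
            JMSContext context = connectionFactory.createContext();   // kept open for the subscription
            Topic topic = context.createTopic("orders." + orderId + ".changed");
            context.createConsumer(topic).setMessageListener(message -> {
                try {
                    // Simplification: the event carries the full new representation of the order.
                    cache.put(orderId, ((TextMessage) message).getText());
                } catch (JMSException e) {
                    throw new RuntimeException(e);
                }
            });
        }

        public String get(String orderId) {
            return cache.get(orderId);
        }
    }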

Using SignalR to push to clients from a long running process

Firstly, here is state of my application:
I have a request coming in from a client (angularjs app) into my API (web api 2). This request is processed and a record is stored in a database. A response is then sent back to the client.
Currently, I have a Windows service polling for and processing these records.
Processing this record can be long running. As a side effect to processing this record, there might be notifications generated to be sent back to one or more clients.
My question is how do I architect this, such that I can utilise SignalR to be able to push the notifications back to the client.
My stumbling block:
I can register and store (in-memory backed by a db) the client's SignalR connectionid along with the application's own user identifier. This way I can match a generated notification with a signalr client.
At the moment, I'm hosting the SignalR hubs within the IIS process. So how do I get back from the Windows Service to IIS to notify the client when a notification is generated?
Furthermore, I should say I am already using SignalR elsewhere in the application and am using a SQL Server backplane.
The issue's with the current architecture:
Any processing is done in the same web request, and notifications are sent out via SignalR before a response to the client is returned. Luckily, the processing is minimal and very quick.
I think this is not very good in terms of performance or maintenance in the long run.
Potential solutions:
Remove SignalR hubs from IIS and host them somewhere else - windows service?
Expose an endpoint on the API for the Windows service to call to push the notification once a notification is generated?
Finally, to add more ingredients to the mix: use a service bus to remove the polling component of the Windows service, and move to a pub/sub architecture. Although this is more work than I want to bite off right now.
Any ideas/recommendations/constructive criticisms are welcome.
Thanks.
Take a look at this sample for starters
Another, more advanced, solution could be to use a backplane to manage the communication between the front end and the backend...
HTH

The theory of websockets with API

I have an API running on a server, which handles user connections and a messaging system.
Besides that, I launched a websocket server on that same machine, waiting for connections and so on.
And let's say we can get access to this by an Android app.
I'm having trouble figuring out what I should do now; here are my thoughts:
1 - When a user connects to the app, the API connects to the websocket. We allow the Android app only to listen on this socket to get new messages. When the user wants to answer, the Android app sends a message to the API. The API itself writes the received message to the socket, where it will be read by the Android app used by another user.
This way, the API can store the message in the database before writing it to the socket.
2 - The API does not connect to the websocket in any way. The Android app listens and writes to the websocket when needed, and should, when writing to the websocket, also send a request to the API so it can store the message in the DB.
Maybe none of the above is correct; please let me know.
EDIT
I already understood why I should use a websocket; it seems like the best way to have this "real time" system (when getting a new message, for example) instead of forcing the client to make an HTTP request every x seconds to check if there are new messages.
What I still don't understand is how it is supposed to communicate with my database. Sorry if my example is not clear, but I'll try to keep going with it:
My messaging system needs to store all messages in my API database, to have some kind of history of the conversation.
But it seems like a websocket server must be running separately from the API; I mean, it's another program, right? Because it's not for HTTP requests.
So should the API also listen to this websocket to catch new messages and store them?
You really have not described what the requirements are for your application so it's hard for us to directly advise what your app should do. You really shouldn't start out your analysis by saying that you have a webSocket and you're trying to figure out what to do with it. Instead, lay out the requirements of your app and figure out what technology will best meet those requirements.
Since your requirements are not clear, I'll talk about what a webSocket is best used for and what more traditional http requests are best used for.
Here are some characteristics of a webSocket:
It's designed to be continuously connected over some longer duration of time (much longer than the duration of one exchange between client and server).
The connection is typically made from a client to a server.
Once the connection is established, then data can be sent in either direction from client to server or from server to client at any time. This is a huge difference from a typical http request where data can only be requested by the client - with an http request the server can not initiate the sending of data to the client.
A webSocket is not a request/response architecture by default. In fact to make it work like request/response requires building a layer on top of the webSocket protocol so you can tell which response goes with which request. http is natively request/response.
Because a webSocket is designed to be continuously connected (or at least connected for some duration of time), it works very well (and with lower overhead) for situations where there is frequent communication between the two endpoints. The connection is already established and data can just be sent without any connection establishment overhead. In addition, the overhead per message is typically smaller with a webSocket than with http.
So, here are a couple typical reasons why you might choose one over the other.
If you need to be able to send data from server to client without having the client regularly poll for new data, then a webSocket is very well designed for that and http cannot do that.
If you are frequently sending lots of small bits of data (for example, a temperature probe sending the current temperature every 10 seconds), then a webSocket will incur less network and server overhead than initiating a new http request for every new piece of data.
If you don't have either of the above situations, then you may not have any real need for a webSocket and an http request/response model may just be simpler.
If you really need request/response where a specific response is tied to a specific request, then that is built into http and is not a built-in feature of webSockets.
You may also find these other posts useful:
What are the pitfalls of using Websockets in place of RESTful HTTP?
What's the difference between WebSocket and plain socket communication?
Push notification | is websocket mandatory?
How does WebSockets server architecture work?
Response to Your Edit
But it seems like a websocket server must be running separately from the API; I mean, it's another program, right? Because it's not for HTTP requests.
The same process that supports your API can also be serving the webSocket connections. Thus, when you get incoming data on the webSocket, you can just write it directly to the database the same way the API would access the database. So, NO the webSocket server does not have to be a separate program or process.
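As a concrete (if language-specific) illustration of "same process, same database access": a sketch with the Java WebSocket API in which the @OnMessage handler persists through the same data-access code the REST API would use; MessageDao here is a hypothetical stand-in for that code:
    import javax.websocket.OnMessage;
    import javax.websocket.Session;
    import javax.websocket.server.ServerEndpoint;

    @ServerEndpoint("/chat")
    public class ChatEndpoint {

        @OnMessage
        public void onMessage(String message, Session session) throws Exception {
            // Same process, same data layer as the REST API: store the message first...
            MessageDao.save(message);
            // ...then relay it to the other connected participants.
            for (Session peer : session.getOpenSessions()) {
                if (peer.isOpen() && !peer.getId().equals(session.getId())) {
                    peer.getBasicRemote().sendText(message);
                }
            }
        }

        // Hypothetical stand-in for whatever persistence code the API already has.
        static class MessageDao {
            static void save(String message) {
                System.out.println("persisting: " + message);   // e.g. a JDBC insert in real life
            }
        }
    }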
So should the API also listen to this websocket to catch new messages and store them?
No, I don't think so. Only one process can be listening to a set of incoming webSocket connections.

JMS / MQ confidentiality between clients

I'm designing a system where one server must send messages to lots of independent clients. The clients don't know about each other and should not be able to consume, peek or in any other way acquire knowledge about each other's messages.
I therefore wonder if JMS / ActiveMQ has the ability to control which clients get which messages?
I want all the clients to connect to the same JMS provider (the 'destination') and consume only messages meant for them. This would be a simple setup from the server's point of view.
An alternative would be to acquire webservice endpoints from all the clients and perform ws-calls every time the server has a message for a client. I think this alternative sounds 'wrong', as I think ws calls are bloated. There is a great overhead for each ws call, and this server would have to make 1000's of calls each day. In my opinion this would be suboptimal for the server...
Short answer: Use Message selector.
Detailed answer:
The question doesn't mention how the conversation is initiated, so here are my answers for both scenarios.
a) If the client initiates the conversation (i.e. the client sends a message to the server and waits for a reply).
This is a request/reply scenario. Messaging/JMS is a decoupled communication system, but request/reply is a common pattern in JMS. It can be implemented using the correlation pattern.
A unique identifier (correlation id) is sent as part of the request message.
The server receives the message and sets the correlation id on the reply message.
The client uses a message selector to receive the reply with the correct correlation id.
b) If the server initiates the conversation (i.e. the server sends messages to the clients without a client request).
In this case, a similar approach can be used.
A fixed client id is assigned to each client.
The server maintains all client ids and sets the client id of the recipient as the correlation id of the message.
The client uses a message selector to receive only the messages whose correlation id equals its client id.
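A compact JMS 2.0 sketch of scenario (b); the queue name and client ids are assumptions, and the same selector mechanics cover the correlation id used in scenario (a):
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    public class SelectorExample {

        // Server side: address a message to one particular client.
        public static void send(ConnectionFactory cf, String clientId, String text) {
            try (JMSContext context = cf.createContext()) {
                Queue queue = context.createQueue("notifications");
                context.createProducer()
                       .setJMSCorrelationID(clientId)            // "this one is for you"
                       .send(queue, text);
            }
        }

        // Client side: the selector means this consumer only ever sees its own messages.
        public static String receive(ConnectionFactory cf, String clientId) {
            try (JMSContext context = cf.createContext()) {
                Queue queue = context.createQueue("notifications");
                String selector = "JMSCorrelationID = '" + clientId + "'";
                return context.createConsumer(queue, selector)
                              .receiveBody(String.class, 5000);  // wait up to 5 seconds, else null
            }
        }
    }
Note that the selector only filters delivery; as the update below explains, it does not by itself stop a purposely modified client from consuming everything.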
Update about confidentiality.
The following info, extracted from this link, is useful for understanding JMS security.
JMS does not specify a security contract or an API for controlling message confidentiality and integrity. Security is considered to be a JMS-provider-specific feature. It is controlled by a System Administrator rather than implemented programmatically or by the J2EE server runtime.
Two major features of JMS security are Authentication and Authorization. To my knowledge, JMS security for client access focuses on protecting the JMS destinations (not the individual messages). As long as a client has access to a destination, the security role assigned to the client is applicable for all the messages belonging to that destination.
Based on this,
Solution 1: If the client code is controlled by a trusted party.
Follow my solutions in my original answer.
This will make sure the message is delivered to the right recipient, but it will not protect anything if the client code is purposely modified to receive all messages.
Solution 2: Assign a private destination and user account to each client and configure security such that a client's user account can access only its own destination.
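A sketch of the destination-per-client half of Solution 2 in JMS 2.0; the queue naming convention and credentials are assumptions, and the broker-side authorization rules that actually enforce the isolation are provider-specific configuration not shown here:
    import javax.jms.ConnectionFactory;
    import javax.jms.JMSContext;
    import javax.jms.Queue;

    public class PrivateQueues {

        // Server side: each client gets its own queue, e.g. "client.acme.notifications".
        public static void sendTo(ConnectionFactory cf, String clientId, String text) {
            try (JMSContext context = cf.createContext("serverUser", "serverPassword")) {
                Queue queue = context.createQueue("client." + clientId + ".notifications");
                context.createProducer().send(queue, text);
            }
        }

        // Client side: this client's credentials should only be authorized for its own queue.
        public static String receive(ConnectionFactory cf, String clientId,
                                     String user, String password) {
            try (JMSContext context = cf.createContext(user, password)) {
                Queue queue = context.createQueue("client." + clientId + ".notifications");
                return context.createConsumer(queue).receiveBody(String.class, 5000);
            }
        }
    }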
Note: Found a link about "Restrictions for message selectors to provide message level authorization". But I think it is a vendor specific custom feature.
Hope this will be helpful.
