How to implement a queue for HTTP requests? - spring

I need to send HTTP (POST) requests to an external system, but that system has a limit on the number of requests per second.
Are there any approaches to implementing such a queue? Frameworks? Spring tools?
How do I find the required capacity for the queue?
And one more question:
A client device sending an HTTP request has two options:
1. The client's request is kept in the queue until the system is able to handle it.
2. The client's device retries the request until the system is able to handle it.
Which of these options is preferable?

You can use Resilience4j's rate limiter. It integrates well with Spring.
Github
Doc
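
A minimal sketch of the plain Resilience4j API, assuming a limit of 10 requests per second and a RestTemplate for the outbound POST (the limiter name, the limit, and the URL are placeholders; with Spring Boot you would more likely pull in the resilience4j-spring-boot starter and configure the same thing in properties):

```java
import io.github.resilience4j.ratelimiter.RateLimiter;
import io.github.resilience4j.ratelimiter.RateLimiterConfig;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

import java.time.Duration;
import java.util.function.Supplier;

public class RateLimitedClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // Allow at most 10 calls per one-second refresh period; callers wait up to
    // 5 seconds for a free permit, which effectively queues them in front of
    // the external system.
    private final RateLimiter rateLimiter = RateLimiter.of("externalSystem",
            RateLimiterConfig.custom()
                    .limitRefreshPeriod(Duration.ofSeconds(1))
                    .limitForPeriod(10)
                    .timeoutDuration(Duration.ofSeconds(5))
                    .build());

    public ResponseEntity<String> post(String url, Object body) {
        Supplier<ResponseEntity<String>> call =
                RateLimiter.decorateSupplier(rateLimiter,
                        () -> restTemplate.postForEntity(url, body, String.class));
        return call.get();
    }
}
```

Callers that cannot obtain a permit within timeoutDuration fail with an exception, so that timeout is effectively the depth of your queue.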

Related

Spring HTTP client timeout - webservice call - misresponse

I have an unknown app consuming my Spring webservices.
The app sets a timeout on every webservice call.
The server keeps processing regardless of the app's timeout.
Is there a risk that some other webservice call receives the wrong response (i.e. the response to the timed-out call)? How does Spring manage this? Doesn't the HTTP protocol take care of it, given that each connection is open for one particular webservice call, and if the connection is broken it shouldn't be possible to retrieve the response?
As a developer, you should try to make all HTTP requests to your web server idempotent. That means the client side must be able to retry a failed request without risking new errors, since it cannot know the outcome of the previous (timed-out) request.
The client side should handle HTTP client timeouts itself and (by default) treat a timeout as a failure. The client may repeat the request later, and the server side should be able to handle the same request again.
The solution varies with the complexity of the task (from making a database INSERT safe to repeat, to scheduling a new cron job without creating duplicates).
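
As an illustration of that idea, here is a rough sketch of an idempotent Spring endpoint; the Idempotency-Key header, the /orders path, and the in-memory map are assumptions, not something Spring provides out of the box:

```java
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestHeader;
import org.springframework.web.bind.annotation.RestController;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

@RestController
public class OrderController {

    // Responses of already-processed requests, keyed by a client-chosen idempotency key.
    // A real system would keep this in the database, not in memory.
    private final Map<String, ResponseEntity<String>> processed = new ConcurrentHashMap<>();

    @PostMapping("/orders")
    public ResponseEntity<String> createOrder(@RequestHeader("Idempotency-Key") String key,
                                              @RequestBody String order) {
        // A retry after a client-side timeout reuses the same key, so it gets the
        // stored response back instead of being processed a second time.
        // Real processing would happen inside the mapping function.
        return processed.computeIfAbsent(key, k -> ResponseEntity.ok("created: " + order));
    }
}
```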

ZeroMQ Request/Response pattern with Node.js

I'm implementing a distributed system for a project and am a bit confused as to how I should properly implement the Req/Res pattern. Basically I have a few endpoints that will send requests out for processing and receive the responses.
So basically:
Incoming request is received
The endpoint opens a req and res socket type with the broker
Broker receives the request, proxies it to an available worker
The worker responds, and the endpoint receives the processed value and reports it back to the original caller.
I've found a decent load balance broker script here: http://zguide.zeromq.org/js:lbbroker. There's also an async client/server pattern I'm interested in implementing: http://zguide.zeromq.org/js:asyncsrv which I might adapt into a load balanced implementation.
My question is perhaps a bit simplistic, but: would each endpoint open a new socket on EVERY request, or maintain one open socket and reuse it? The former would mean n connections, one for every request made to the endpoint.
You'd keep the sockets open; there's no need to close them after each request. And there'd be a single socket on each endpoint (client and server). At the server end you read a request from the socket and write your response back to the socket; zmq takes care of ensuring that the response goes back to the right client.
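
The question is about Node.js, but the socket lifecycle is the same in every binding. As an illustration only, here is a sketch of a long-lived REQ socket in Java with JeroMQ (the broker address and the message contents are made up):

```java
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ;

public class EndpointClient {
    public static void main(String[] args) {
        try (ZContext ctx = new ZContext()) {
            // One REQ socket per endpoint, opened once and reused for every request.
            ZMQ.Socket socket = ctx.createSocket(SocketType.REQ);
            socket.connect("tcp://localhost:5555"); // hypothetical broker frontend

            for (int i = 0; i < 10; i++) {
                socket.send("task " + i);         // send a request
                String reply = socket.recvStr();  // block until the matching reply arrives
                System.out.println("got: " + reply);
            }
        } // closing the context closes the socket
    }
}
```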

How to issue http request with golang context capability but not by golang http client?

I found that Go's context is useful for canceling server-side processing within a client-server request scope.
I can use the http.Request.WithContext method to issue an HTTP request with a context, but if the client side is NOT using Go, is it possible to achieve the same thing?
Thanks
I'm not 100% sure what you are asking, but using a context for something like a timeout is possible both when handling incoming requests and when making outbound requests.
For incoming requests you can use the context and send back a timeout HTTP status code indicating that the server wasn't able to process the request. It doesn't matter what the client sends you; you decide the timeout on your own in the server.
For outgoing requests you don't need the server to even know you have a timeout. You simply set a timeout and have your request cancel if it doesn't get a response back within that time. This means you likely won't get any response from the server, because your code cancels the outgoing request.
Now, are you asking for an example of how to code one of these? Or just whether both are possible?
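
For the outgoing case, a non-Go client simply enforces its own deadline; here is a minimal sketch with Java's built-in HttpClient (the URL and the 2-second value are placeholders):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.net.http.HttpTimeoutException;
import java.time.Duration;

public class TimeoutClient {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/slow"))
                .timeout(Duration.ofSeconds(2)) // give up after 2 seconds
                .GET()
                .build();
        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
        } catch (HttpTimeoutException e) {
            // The call was abandoned client-side; the Go server may still finish processing it.
            System.out.println("timed out");
        }
    }
}
```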

The theory of websockets with API

I have an API running on a server, which handles user connections and a messaging system.
Beside that, I launched a websocket on that same server, waiting for connections and stuff.
And let's say we can get access to this by an Android app.
I'm having troubles to figure out what I should do now, here are my thoughts:
1 - When a user connects to the app, the API connects to the websocket. We allow the Android app only to listen on this socket to get new messages. When the user wants to answer, the Android app sends a message to the API. The API itself writes the received message to the socket, which will be read back by the Android app used by another user.
This way, the API can store the message in database before writing it in the socket.
2 - The API does not connect to the websocket in any way. The Android app listens and writes to the websocket as needed and should, when writing to the websocket, also send a request to the API so it can store the message in the DB.
Maybe none of the above is correct; please let me know.
EDIT
I already understood why I should use a websocket; it seems like the best way to get this "real time" behaviour (when receiving a new message, for example) instead of forcing the client to make an HTTP request every x seconds to check whether there are new messages.
What I still don't understand is how it is supposed to communicate with my database. Sorry if my example is not clear, but I'll try to keep going with it:
My messaging system need to store all messages in my API database, to have some kind of historic of the conversation.
But it seems like a websocket must run separately from the API. I mean, it's another program, right? Because it's not for HTTP requests.
So should the API also listen to this websocket to catch new messages and store them?
You really have not described what the requirements are for your application so it's hard for us to directly advise what your app should do. You really shouldn't start out your analysis by saying that you have a webSocket and you're trying to figure out what to do with it. Instead, lay out the requirements of your app and figure out what technology will best meet those requirements.
Since your requirements are not clear, I'll talk about what a webSocket is best used for and what more traditional http requests are best used for.
Here are some characteristics of a webSocket:
It's designed to be continuously connected over some longer duration of time (much longer than the duration of one exchange between client and server).
The connection is typically made from a client to a server.
Once the connection is established, then data can be sent in either direction from client to server or from server to client at any time. This is a huge difference from a typical http request where data can only be requested by the client - with an http request the server can not initiate the sending of data to the client.
A webSocket is not a request/response architecture by default. In fact to make it work like request/response requires building a layer on top of the webSocket protocol so you can tell which response goes with which request. http is natively request/response.
Because a webSocket is designed to be continuously connected (or at least connected for some duration of time), it works very well (and with lower overhead) for situations where there is frequent communication between the two endpoints. The connection is already established and data can just be sent without any connection establishment overhead. In addition, the overhead per message is typically smaller with a webSocket than with http.
So, here are a couple typical reasons why you might choose one over the other.
If you need to be able to send data from server to client without having the client regularly poll for new data, then a webSocket is very well designed for that and http cannot do that.
If you are frequently sending lots of small bits of data (for example, a temperature probe sending the current temperature every 10 seconds), then a webSocket will incur less network and server overhead than initiating a new http request for every new piece of data.
If you don't have either of the above situations, then you may not have any real need for a webSocket and an http request/response model may just be simpler.
If you really need request/response where a specific response is tied to a specific request, then that is built into http and is not a built-in feature of webSockets.
You may also find these other posts useful:
What are the pitfalls of using Websockets in place of RESTful HTTP?
What's the difference between WebSocket and plain socket communication?
Push notification | is websocket mandatory?
How does WebSockets server architecture work?
Response to Your Edit
But it seems like a websocket must be running separately from the API,
I mean it's another program right? Because it's not for HTTP requests
The same process that supports your API can also be serving the webSocket connections. Thus, when you get incoming data on the webSocket, you can just write it directly to the database the same way the API would access the database. So, NO the webSocket server does not have to be a separate program or process.
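
For example, if the API happened to be a Spring application, a handler in that same process could persist incoming webSocket messages through the same data-access code the REST endpoints use (MessageRepository and ChatMessage below are hypothetical stand-ins for your own persistence classes):

```java
import org.springframework.web.socket.TextMessage;
import org.springframework.web.socket.WebSocketSession;
import org.springframework.web.socket.handler.TextWebSocketHandler;

public class ChatSocketHandler extends TextWebSocketHandler {

    // Hypothetical repository, shared with the REST controllers running in the same process.
    private final MessageRepository messages;

    public ChatSocketHandler(MessageRepository messages) {
        this.messages = messages;
    }

    @Override
    protected void handleTextMessage(WebSocketSession session, TextMessage message) throws Exception {
        // Store the message exactly as the HTTP API would, then acknowledge it over the socket.
        messages.save(new ChatMessage(session.getId(), message.getPayload()));
        session.sendMessage(new TextMessage("stored"));
    }
}
```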
So should the API also listen to this websocket to catch new messages
and store them?
No, I don't think so. Only one process can be listening to a set of incoming webSocket connections.

How would I create an asynchronous notification system using RESTful web services?

I have a Java application which I make available via RESTful web services. I want to create a mechanism so clients can register for notifications of events. The rub is that there is no guarantee that the client programs will be Java programs and hence I won't be able to use JMS for this (i.e. if every client was a Java app then we could allow the clients to subscribe to a JMS topic and listen there for notification messages).
The use case is roughly as follows:
A client registers itself with my server application, via a RESTful web service call, indicating that it is interested in getting a notification message anytime a specific object is updated.
When the object of interest is updated then my server application needs to put out a notification to all clients who are interested in being notified of this event.
As I mentioned above I know how I would do this if all clients were Java apps -- set up a topic that clients can listen to for notification messages. However I can't use that approach since it's likely that many clients will not be able to listen to a JMS topic for notification messages.
Can anyone here enlighten me as to how this problem is typically solved? What mechanism can I provide using a RESTful API?
I can think of four approaches:
A Twitter approach: You register the Client and then it calls back periodically with a GET to retrieve any notifications.
The Client describes how it wants to receive the notification when it makes the registration request. That way you could allow JMS for those that can handle it and fall back to email or similar for those that can't.
Take a URL during the registration request and POST back to each Client individually when you have a notification (see the sketch after this list). Hardly Pub/Sub, but the effect would be similar. Of course you'd be assuming that the Client was listening for these notifications and had implemented its side according to your specs.
Buy IBM WebSphere MQ (MQSeries). Best IBM product ever. Not REST but it's great at multi-platform integration like this.
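
A minimal sketch of the third option in Java, POSTing back to whatever callback URL each client supplied at registration (the in-memory subscriber list and the JSON payload are assumptions):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

public class WebhookNotifier {

    // Callback URLs collected by the registration web service (kept in memory here for brevity).
    private final List<URI> subscribers = new CopyOnWriteArrayList<>();
    private final HttpClient http = HttpClient.newHttpClient();

    public void register(URI callbackUrl) {
        subscribers.add(callbackUrl);
    }

    // Called whenever the object of interest is updated: POST the event to every subscriber.
    public void notifySubscribers(String eventJson) {
        for (URI url : subscribers) {
            HttpRequest request = HttpRequest.newBuilder(url)
                    .timeout(Duration.ofSeconds(5))
                    .header("Content-Type", "application/json")
                    .POST(HttpRequest.BodyPublishers.ofString(eventJson))
                    .build();
            http.sendAsync(request, HttpResponse.BodyHandlers.discarding());
        }
    }
}
```

The trade-off is the one noted above: every client has to expose an HTTP endpoint of its own and follow your payload spec.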
We have this problem and need low-latency asynchronous updates to relatively few listeners. Our two alternative solutions have been:
Polling: Hammer the list of resources you need with GET requests
Streaming event updates: Provide a monitor resource. The server keeps the connection open. As events occur, the server transmits a stream of event descriptions using multipart content-type or chunked transfer-encoding.
In the response to the RESTful request, you could supply an individualized RESTful URL that the client can monitor for updates.
That is, you have one URL (/Signup.htm, say), that accepts the client's information (id if appropriate, id of object to monitor) and returns a customized url (/Monitor/XYZPDQ), where XYZPDQ is a UUID created for that particular client. The client can poll that customized URL at some interval, and it will receive a notification if the update occurs.
If you don't care about who the client is (and don't want to create so many UUIDs) you could just have separate RESTful URLs for each object that might want to be monitored, and the "signup" URL would just return the correct one.
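
A rough sketch of that signup/monitor pair as a Spring controller; the paths, the in-memory queues, and returning the URL as a plain string are all assumptions (a real version would persist registrations and expire stale tokens):

```java
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Queue;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

@RestController
public class MonitorController {

    // Pending notifications per client token (in memory for brevity).
    private final Map<String, Queue<String>> pending = new ConcurrentHashMap<>();

    // Registration: the client says which object it cares about and gets back its own URL.
    // Wiring objectId to actual update events is omitted here.
    @PostMapping("/signup")
    public String signup(@RequestParam String objectId) {
        String token = UUID.randomUUID().toString();
        pending.put(token, new ConcurrentLinkedQueue<>());
        return "/monitor/" + token;
    }

    // The client polls its customized URL; queued notifications are drained and returned.
    @GetMapping("/monitor/{token}")
    public List<String> poll(@PathVariable String token) {
        Queue<String> queue = pending.getOrDefault(token, new ConcurrentLinkedQueue<>());
        List<String> events = new ArrayList<>();
        for (String event; (event = queue.poll()) != null; ) {
            events.add(event);
        }
        return events;
    }
}
```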
As John Saunders says, you can't really do a more straightforward publish/subscribe via HTTP.
If polling is not acceptable I would consider using web-sockets (e.g. see here). Though to be honest I like the idea suggested by user189423 of multipart content-type or chunked transfer-encoding as well.
