Laravel API: check if the client HTTP connection is still alive

In Laravel we have an API endpoint that may take a few minutes. It processes an input in batches and returns a response once all batches are processed. Pseudo-code below.
Sometimes it takes too long for the user, so the user navigates away and the connection is killed client-side. However, the backend keeps processing until it tries to return the response, which fails with a broken pipe error.
To save resources, we're looking for a way to check after each batch whether the client is still connected, with a function like check_if_client_is_still_connected() below. If it isn't, an error is raised and processing stops. Is there a way to achieve this?
function myAPIEndpoint($all_batches) {
    $result = [];
    foreach ($all_batches as $batch) {
        $batch_result = do_something_long($batch);
        $result = array_merge($result, $batch_result);
        check_if_client_is_still_connected();
    }
    return $result;
}
PS: I know async tasks or WebSockets could be more appropriate for long requests, but we have good reasons to use a standard HTTP endpoint for this.
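Laravel itself does not expose such a check, but plain PHP does: connection_aborted() reports a dropped client once the script has tried to write something to it. Below is a minimal sketch of check_if_client_is_still_connected() built on that, under two assumptions: you call ignore_user_abort(true) at the top of the endpoint (otherwise PHP may simply terminate the script at the flush instead of letting you throw), and the client tolerates a little leading whitespace before the JSON body (standard parsers ignore it). Note that FastCGI or reverse-proxy buffering (e.g. nginx) can delay detection.

function check_if_client_is_still_connected(): void
{
    // PHP only notices a dropped connection when it tries to send output,
    // so push a harmless padding byte to the client and flush it out.
    echo ' ';
    if (ob_get_level() > 0) {
        ob_flush(); // flush PHP's own output buffer, if one is active
    }
    flush();        // flush the SAPI / web-server buffer

    if (connection_aborted()) {
        throw new \RuntimeException('Client disconnected, stopping batch processing.');
    }
}

With ignore_user_abort(true) set once before the loop, this helper can be called after each batch exactly as in the pseudo-code above, and the thrown exception stops the remaining batches.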

Related

MassTransit RequestClient timeout handling

I have tried several scenarios to handle timeouts in requests, but they don't seem to work.
I have passed the timeout TimeSpan both when creating the request client and when creating the request. The request does not receive a response within the configured time span, but the task keeps executing, seems to hang, and no RequestTimeoutException is thrown.
What is the correct way to handle client timeouts?
EDIT
The use case leading to the timeout is when the whole consumer service is down, so the initial request is not consumed at all. The code example of the request follows. As mentioned, I also tried passing the RequestTimeout when creating the client. Other than this, it works perfectly when all parts are running.
var client = _busControl.CreateRequestClient<CheckRequest>(new Uri($"{rabbitHostUri}/CheckQueue"));
var response = await client.GetResponse<CheckResponse>(checkRequest, timeout: RequestTimeout.After(s: 60)).ConfigureAwait(false);
var checkResponse = response.Message;

Sending multiple responses to the client

I have a web client and a Go server. The client sends some JSON data, which is processed, and the server then returns a JSON response.
But what can I do when I want to inform the client about the results of a very slow process, and even allow the client to stop the process?
I've been thinking maybe I could keep sending new requests every 5-10 seconds for updates, but that doesn't seem very efficient, and it wouldn't allow me to stop a process I started with go mySlowFunc().
You may create some “guards” for slow functions. They limit execution time: if the function succeeds within that time, the guard returns its result; if not, a default value is returned and the function is cancelled.
Example of code:
select {
case result := <-successChan:
    // the slow function finished before the deadline
    return result, nil
case <-timeoutChan:
    // deadline hit: give up and return a default value
    return "", nil
}
Example of usage: https://github.com/lisitsky/go-site-search-string

ExpressJS backend hanging with too many requests

I have an express app running with Sequelize.js as an ORM. My express app receives requests from my main Rails app, and because of the cross-domain policy, these requests are performed with getJSON.
On the client, the request is fired when the user hits a key.
Everything goes fine and Express logs the queries being performed (and the JSON being served) each time the user hits the key. Even hitting it quickly, it performs fine. But whenever I leave the key pressed (or several clients hit the key very quickly), it starts firing lots of requests and at some point the server just hangs: all requests from then on are left pending (I can see that in the Network tab of Chrome Dev Tools) and slowly start to time out. I have to reboot the server to make it respond again.
The server code for my request is:
models.Comment.findAllPublic(req.params.pId, req.params.sId, function(comments){
  var json = comments.map(function(comment){
    var com = {};
    ['user_id', 'user_avatar', 'user_slug', 'user_name', 'created_at', 'text', 'private', 'is_speaker_note'].forEach(function(key){
      com[key] = comment[key];
    });
    return com;
  });
  res.json({comments: json});
});
And the findAllPublic method from the Comment model (this is a Sequelize model) is:
findAllPublicAndMyNotes: function(current_user, presentationId, slideId, cb){
  db.query("SELECT * FROM `comments` WHERE commentable_type='Slide' AND commentable_id=(SELECT id from `slides` where `order_in_presentation`=" + slideId + " AND `presentation_id`=" + presentationId + ") AND (`private` IS FALSE OR (`private` IS TRUE AND `user_id`=" + current_user + " AND `is_speaker_note` IS FALSE))", self.Comment)
    .on('success', cb)
    .on('failure', function(err){ console.log(err); });
}
How to avoid the server from getting stuck? Am I leaving some blocking code in the request that may slowly hang the server as new requests are made?
At first I thought it could be a problem with the "forEach" when composing the JSON object from the Sequelize model, but I also tried leaving the callback for the MySQL query empty and just responding with empty JSON, and it also froze.
Maybe it is a problem with the MySQL connector? When the server gets stuck I can still open the MySQL console and run queries against the database, and it responds, so I don't know if that's the problem.
I know I could just control the key event to prevent it from firing too many requests when the key gets pressed for a long time, but the problem seems to appear also when several clients hit the key repeatedly and concurrently.
Any thoughts? Thanks in advance for the help :D
Two things:
It seems like you have some path where res.render is not being called. It could be that the database you're connecting to is dropping the connection to your Express server after the absurd number of requests, so the callback is never fired (and there's no database.on('close', function() { /* handle disconnect from DB, perhaps auto-restarting */ }) code to catch it).
Your client-side code should detect when an AJAX request fired on keypress is still pending while a new one is being started, and cancel the old one. I'm guessing getJSON is a jQuery method? Assuming it's jQuery's, you need something like the following:
var currKeyRequest = null;

function callOnKeyUp() {
  var searchText = $('#myInputBox').val();
  if (currKeyRequest) {
    // a previous request is still in flight: cancel it
    currKeyRequest.abort();
    currKeyRequest = null;
  }
  currKeyRequest = $.getJSON('path/to/server', function(json) {
    currKeyRequest = null;
    // Use JSON code
  });
}
This way, you reduce the load on the client, the latency of the autocomplete functionality (but why not use the jQuery UI autocomplete if that's what you're after?), and you can save the server from some of the load as well if the keypresses are faster than handshaking with the server (possible with a good touch-typist a few hours flight away).

Building a high-performance node.js application with cluster and node-webworker

I'm not a node.js master, so I'd like to have more points of view about this.
I'm creating an HTTP node.js web server that must handle not only lots of concurrent connections but also long running jobs. By default node.js runs on one process, and if there's a piece of code that takes a long time to execute any subsequent connection must wait until the code ends what it's doing on the previous connection.
For example:
var http = require('http');

http.createServer(function (req, res) {
  doSomething(); // This takes a long time to execute
  // Return a response
}).listen(1337, "127.0.0.1");
So I was thinking to run all the long running jobs in separate threads using the node-webworker library:
var http = require('http');
var sys = require('sys');
var Worker = require('webworker');

http.createServer(function (req, res) {
  var w = new Worker('doSomething.js'); // This takes a long time to execute
  // Return a response
}).listen(1337, "127.0.0.1");
And to make the whole thing more performant, I thought to also use cluster to create a new node process for each CPU core.
In this way I expect to balance the client connections through different processes with cluster (let's say 4 node processes if I run it on a quad-core), and then execute the long running job on separate threads with node-webworker.
Is there something wrong with this configuration?
I see that this post is a few months old, but I wanted to provide a comment to this in the event that someone comes along.
"By default node.js runs on one process, and if there's a piece of code that takes a long time to execute any subsequent connection must wait until the code ends what it's doing on the previous connection."
^-- This is not entirely true. If doSomething(); is required to complete before you send back the response, then yes, but if it isn't, you can make use of the Asynchronous functionality available to you in the core of Node.js, and return immediately, while this item processes in the background.
A quick example of what I'm explaining can be seen by adding the following code in your server:
setTimeout(function(){
  console.log("Done with 5 second item");
}, 5000);
If you hit the server a few times, you will get an immediate response on the client side, and eventually see the console fill with the messages seconds after the response was sent.
Why don't you just copy and paste your code into a file and run it over JXcore, like
$ jx mt-keep:4 mysourcefile.js
and see how it performs? If you need real multithreading without leaving the safety of single threading, try JX. It's 100% node.js 0.12+ compatible. You can spawn threads and run a whole node.js app inside each of them separately.
You might want to check out Q-Oper8 instead as it should provide a more flexible architecture for this kind of thing. Full info at:
https://github.com/robtweed/Q-Oper8

Long Running Wicket Ajax Request

I occasionally have some long running AJAX requests in my Wicket application. When this occurs the application is largely unusable as subsequent AJAX requests are queued up to process synchronously after the current request. I would like the request to terminate after a period of time regardless of whether or not a response has been returned (I have a user requirement that if this occurs we should present the user an error message and continue). This presents two questions:
Is there any way to specify a timeout that applies to a specific AJAX request, or to all AJAX requests?
If not, is there any way to kill the current request?
I've looked through the wicket-ajax.js file and I don't see any mention of a request timeout whatsoever.
I've even gone so far as to try re-loading the page after some timeout on the client side, but unfortunately the server is still busy processing the original AJAX request and does not return until the AJAX request has finished processing.
Thanks!
I don't think it will help you to let the client 'cancel' the request (although that could be made to work).
The point is that the server is busy processing a request that is no longer required. If you want to time out such operations, you have to implement the timeout on the server side. If the operation takes too long, the server aborts it and returns some error value as the result of the Ajax request.
Regarding your queuing problem: you may consider using asynchronous requests instead of synchronous ones. The client first sends a request that starts the long-running process; this request returns immediately. Then the client periodically polls the server and asks whether the process has finished. Those poll requests also return immediately, saying either that the process is still running or that it has finished with a certain result.
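The question is about Wicket, but the start-then-poll pattern itself is framework-agnostic. Purely as an illustration (and tying back to the Laravel question at the top of this page), a rough sketch of it in plain Laravel routes could look like the following; the route paths, the ProcessBatches job class, and the cache-based status store are hypothetical, not part of any of the answers above.

use Illuminate\Support\Facades\Cache;
use Illuminate\Support\Facades\Route;
use Illuminate\Support\Str;

// 1. Start the long-running work and return a job id immediately.
Route::post('/process', function () {
    $jobId = (string) Str::uuid();
    Cache::put("job:$jobId", ['status' => 'running'], now()->addHour());

    // Kick off the actual work here, e.g. a queued job that writes
    // ['status' => 'done', 'result' => ...] to the same cache key
    // when it finishes. ProcessBatches is a hypothetical job class.
    ProcessBatches::dispatch($jobId);

    return response()->json(['job_id' => $jobId]);
});

// 2. The client polls this endpoint every few seconds.
Route::get('/process/{jobId}', function (string $jobId) {
    return response()->json(Cache::get("job:$jobId", ['status' => 'unknown']));
});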
Failed solution: After a given setTimeout I kill the active transports and restart the channel, which handles everything on the client side. I avoided request conflicts by tying each to an ID and checking that against a global reference that increments each time a request is made and each time a request completes.
function longRunningCallCheck(refId) {
  // make sure the reference id matches the global id.
  // this indicates that we are still processing the
  // long running ajax call.
  if (refId == id) {
    // perform client processing here

    // kill all active transport layers
    var t = Wicket.Ajax.transports;
    for (var i = 0; i < t.length; ++i) {
      if (t[i].readyState != 0) {
        t[i].onreadystatechange = Wicket.emptyFunction;
        t[i].abort();
      }
    }

    // process the default channel
    Wicket.channelManager.done('0|s');
  }
}
Unfortunately, this still left the PageMap blocked and any subsequent calls wait for the request to complete on the server side.
My solution at this point is to instead provide the user an option to logout using a BookmarkablePageLink (which instantiates a new page, thus not having contention on the PageMap). Definitely not optimal.
Any better solutions are more than welcome, but this is the best one I could come up with.
