ExpressJS backend hanging with too many requests - ajax

I have an express app running with Sequelize.js as an ORM. My express app receives requests from my main Rails app, and because of the cross-domain policy, these requests are performed with getJSON.
On the client, the request is fired when the user hits a key.
Everything goes fine and express logs the queries being performed (and the JSON being served) each time the user hits the key. Even hitting the key quickly it performs fine. But whenever I leave the key pressed (or several clients hit the key very quickly), it starts firing lots of requests and at some point the server just hangs: all requests from then on are left pending (I can see that in the Network tab of Chrome Dev Tools) and they slowly start to time out. I have to reboot the server to make it respond again.
The server code for my request is:
models.Comment.findAllPublic(req.params.pId, req.params.sId, function(comments){
  var json = comments.map(function(comment){
    var com = {};
    ['user_id','user_avatar', 'user_slug', 'user_name', 'created_at', 'text', 'private', 'is_speaker_note'].forEach(function(key){
      com[key] = comment[key];
    });
    return com;
  });
  res.json({comments: json});
});
And the findAllPublic method from the Comment model (this is a Sequelize model) is:
findAllPublicAndMyNotes: function(current_user, presentationId, slideId, cb){
  db.query(
    "SELECT * FROM `comments` WHERE commentable_type='Slide' " +
    "AND commentable_id=(SELECT id from `slides` where `order_in_presentation`=" + slideId +
    " AND `presentation_id`=" + presentationId + ") " +
    "AND (`private` IS FALSE OR (`private` IS TRUE AND `user_id`=" + current_user +
    " AND `is_speaker_note` IS FALSE))",
    self.Comment
  ).on('success', cb).on('failure', function(err){ console.log(err); });
}
How can I keep the server from getting stuck? Am I leaving some blocking code in the request that slowly hangs the server as new requests come in?
At first I thought it could be a problem with the forEach when composing the JSON object from the Sequelize model, but I also tried leaving the callback for the MySQL query empty, just responding with empty JSON, and it still froze.
Maybe it is a problem with the MySQL connector? When the server gets stuck I can still open the MySQL console and run queries against the database, and it responds fine, so I don't know if that's the problem.
I know I could throttle the key event to prevent it from firing too many requests when the key is held down, but the problem also appears when several clients hit the key repeatedly and concurrently.
Any thoughts? Thanks in advance for the help :D

Two things:
It seems like you have some code path where the response is never sent. It could be that the database you're connecting to is dropping its connection to your Express server after the absurd number of requests, so the query callback never fires, and there's no database.on('close', function() { /* handle disconnect from DB, perhaps auto-reconnecting */ }) handler to catch it (a minimal reconnect sketch follows the code below).
Your client-side code should detect when an AJAX request fired on keypress is still pending while a new one is being started, and cancel the old one. I'm guessing getJSON is a jQuery method? Assuming it is, you need something like the following:
var currKeyRequest = null;
function callOnKeyUp() {
  var searchText = $('#myInputBox').val();
  if (currKeyRequest) {
    // abort the still-pending request before firing a new one
    currKeyRequest.abort();
    currKeyRequest = null;
  }
  currKeyRequest = $.getJSON('path/to/server', function(json) {
    currKeyRequest = null;
    // Use JSON code
  });
}
This way, you reduce the load on the client, the latency of the autocomplete functionality (but why not use the jQuery UI autocomplete if that's what you're after?), and you can save the server from some of the load as well if the keypresses are faster than handshaking with the server (possible with a good touch-typist a few hours flight away).
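On the first point, the exact hook depends on the database driver that Sequelize sits on top of, so treat the following as an illustration only, not the asker's setup: with the plain mysql module, the usual pattern for catching a dropped connection and recovering looks roughly like this.
var mysql = require('mysql');
var connection;

function handleDisconnect() {
  connection = mysql.createConnection({ host: 'localhost', user: 'app', database: 'app' });

  connection.connect(function (err) {
    if (err) {
      // server unreachable: retry after a short delay instead of hammering it
      setTimeout(handleDisconnect, 2000);
    }
  });

  connection.on('error', function (err) {
    if (err.code === 'PROTOCOL_CONNECTION_LOST') {
      // the server closed the connection; open a fresh one
      handleDisconnect();
    } else {
      throw err;
    }
  });
}

handleDisconnect();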

Related

Is there a way to delay cache revalidation in service worker?

I am currently working on performance improvements for a React-based SPA. Most of the more basic stuff is already done so I started looking into more advanced stuff such as service workers.
The app makes quite a lot of requests on each page (most of the calls are not to REST endpoints but to an endpoint that basically makes different SQL queries to the database, hence the amount of calls). The data in the DB is not updated too often so we have a local cache for the responses, but it's obviously getting lost when a user refreshes a page. This is where I wanted to use the service worker - to keep the responses either in cache store or in IndexedDB (I went with the second option). And, of course, the cache-first approach does not fit here too well as there is still a chance that the data may become stale. So I tried to implement the stale-while-revalidate strategy: fetch the data once, then if the response for a given request is already in cache, return it, but make a real request and update the cache just in case.
I tried the approach from Jake Archibald's offline cookbook but it seems like the app is still waiting for the real requests to resolve even when there is a cache entry to return (I can see those responses in the Network tab).
Basically the sequence seems to be the following: request > cache entry found! > need to update the cache > only then show the data. Doing the update immediately is unnecessary in my case so I was wondering if there is any way to delay that? Or, alternatively, not to wait for the "real" response to be resolved?
Here's the code that I currently have (serializeRequest, cachePut and cacheMatch are helper functions that I have to communicate with IndexedDB):
self.addEventListener('fetch', (event) => {
  // some checks to get out of the event handler if certain conditions don't match...
  event.respondWith(
    serializeRequest(event.request).then((serializedRequest) => {
      return cacheMatch(serializedRequest, db.post_cache).then((response) => {
        const fetchPromise = fetch(event.request).then((networkResponse) => {
          // cache the fresh network response, not the (possibly undefined) cached one
          cachePut(serializedRequest, networkResponse.clone(), db.post_cache);
          return networkResponse;
        });
        return response || fetchPromise;
      });
    })
  );
});
Thanks in advance!
EDIT: Can this be due to the fact that I put stuff into IndexedDB instead of the cache? I am sort of forced to use IndexedDB because those "magic endpoints" are POST instead of GET (they require a request body) and POST responses cannot be inserted into the Cache Storage...
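For reference, the usual way to avoid waiting on the network is to respond with the cached copy right away and let the refresh finish in the background via event.waitUntil. A minimal sketch using the standard Cache Storage API with GET requests (so not the IndexedDB/POST setup above, where the same idea would apply to the helper functions instead):
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.open('api-cache').then((cache) =>
      cache.match(event.request).then((cached) => {
        // always kick off a network request to refresh the cache
        const refresh = fetch(event.request).then((networkResponse) => {
          cache.put(event.request, networkResponse.clone());
          return networkResponse;
        });
        // keep the service worker alive until the refresh completes,
        // but don't make the response wait for it
        event.waitUntil(refresh.catch(() => {}));
        // serve the cached copy immediately if there is one
        return cached || refresh;
      })
    )
  );
});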

Laravel API check if client http connection is still alive

In Laravel we have an api endpoint that may take a few minutes. It's processing an input in batches and giving a response when all batches are processed. Pseudo-code below.
Sometimes it takes too long for the user, so the user navigates away and the connection is killed client-side. However, the backend processing still continues until the backend tries to return the response and fails with a broken pipe error.
To save resources, we're looking for a way to check after each batch whether the client is still connected, with a function like check_if_client_is_still_connected() below. If not, an error is raised and processing stops. Is there a way to achieve this?
function myAPIEndpoint($all_batches) {
    $result = [];
    foreach ($all_batches as $batch) {
        $batch_result = do_something_long($batch);
        $result = array_merge($result, $batch_result);
        check_if_client_is_still_connected();
    }
    return $result;
}
PS: I know async tasks or web sockets could be more appropriate for long requests, but we have good reasons to use a standard http endpoint for this.

Nodejs - How to kill a running SQLite query

Possible similar questions:
SQL: Interrupting a query
Is there a way to abort an SQLite call?
Hi everyone,
I am currently using the socket.io and sqlite3 modules to perform SELECT queries on a SQLite database. When a user clicks on an OpenLayers map, it sends a signal to the server (through socket.io) to gather information by performing a spatial request (like intersection or union, using the Spatialite extension) and then finally sends the data back to the client (these are long-running queries, depending on the number of geometries) to show a popup on the map where the user clicked.
The problem is: if a user clicks many times on the map, sending many requests to the server, only the last one matters. Imagine that a query takes 5 seconds to execute and a user clicks 3 times in one second on the map (he just wants the last location where he clicked to be used); the server will then run 3 queries and send back 3 signals through socket.io (opening 3 popups, when we only need the last one)! Is there any solution to kill/abort a running SQLite query with Node.js?
Example code :
socket.on('askForInfo', function (data) {
sendInfo(socket, data.latitude, data.longitude);
});
sendInfo definition :
function sendInfo(socket, lat, lng) {
  // Database connection
  var db = new sqlite.Database('some file.sqlite', sqlite.OPEN_READONLY);
  // Load Spatialite extension
  db.loadExtension('mod_spatialite', function(err) {
    // Query doing spatial request
    db.get("VERY LONG SQL QUERY", function(err, row) {
      // Send the data gathered from database
      socket.emit('sendData', row['some sql column']);
    });
  });
}
I want to do something like :
if ("sendInfo didn't finished to emit any signal through socket"
AND "user did another resquest")
then
"kill all running sendInfo function execution and sql query"
I know that if there are many users connected this won't work like that (I may need to use sessions to know which user the function is gathering data for). But I can't find any solution even when there is only one user.
I tried using AJAX (jQuery) instead of socket.io. I can abort the XHR request, but even though the request is aborted the SQL query keeps running on the server until it finishes (using a lot of resources uselessly).
Thanks in advance for your answer.
Thanks for your answer Qualcuno.
I found a solution, using child_process to run the query in another process and killing that process if the user makes another request.
var cp = require('child_process');
var child = null;
// more stuff ...
socket.on('askForInfo', function (data) {
  // if a previous child process is still connected, kill it (its query isn't finished)
  if (child && child.connected) {
    child.kill();
  }
  // create a new process executing 'query_database.js'
  child = cp.fork('./query_database.js', [
    data.latitude, data.longitude
  ]);
  // when the child process calls process.send(some_data_here),
  // this event fires and relays the data to the user
  child.on('message', function(d) {
    socket.emit('sendData', d);
  });
});
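For completeness, the forked query_database.js worker is not shown in the thread; a hypothetical sketch of what it might look like (the SQL and column name are placeholders taken from the question):
// query_database.js -- hypothetical child worker, not the original file
var sqlite = require('sqlite3');

var lat = process.argv[2];
var lng = process.argv[3];

var db = new sqlite.Database('some file.sqlite', sqlite.OPEN_READONLY);
db.loadExtension('mod_spatialite', function (err) {
  if (err) { process.exit(1); }
  db.get("VERY LONG SQL QUERY", [lat, lng], function (err, row) {
    if (err) { process.exit(1); }
    // hand the result back to the parent, which relays it over socket.io
    process.send(row['some sql column']);
    db.close();
  });
});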
Node.js's sqlite3 module has an interrupt method, albeit currently undocumented: https://github.com/mapbox/node-sqlite3/issues/1205#issuecomment-559983976
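Assuming that undocumented interrupt() behaves as the linked issue describes, a single-process sketch (the bookkeeping variable is invented here) could look like this:
var sqlite = require('sqlite3');
var db = new sqlite.Database('some file.sqlite', sqlite.OPEN_READONLY);
var queryRunning = false;

// inside io.on('connection', function (socket) { ... })
socket.on('askForInfo', function (data) {
  if (queryRunning) {
    // abort whatever statement the previous click started
    db.interrupt();
  }
  queryRunning = true;
  db.get("VERY LONG SQL QUERY", [data.latitude, data.longitude], function (err, row) {
    queryRunning = false;
    if (err) return; // an interrupted query comes back as an error
    socket.emit('sendData', row['some sql column']);
  });
});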

How are notifications such as "xxxx commented on your post" pushed to the front end on Facebook? [duplicate]

I have read some posts about this topic and the answers are comet, reverse ajax, http streaming, server push, etc.
How does incoming mail notification on Gmail works?
How is GMail Chat able to make AJAX requests without client interaction?
I would like to know if there are any code references that I can follow to write a very simple example. Many posts or websites just talk about the technology. It is hard to find complete sample code. Also, it seems many methods can be used to implement comet, e.g. a hidden IFrame or XMLHttpRequest. In my opinion, using XMLHttpRequest is a better choice. What do you think of the pros and cons of different methods? Which one does Gmail use?
I know it needs to do it both in server side and client side.
Is there any PHP and Javascript sample code?
The way Facebook does this is pretty interesting.
A common method of doing such notifications is to poll a script on the server (using AJAX) on a given interval (perhaps every few seconds), to check if something has happened. However, this can be pretty network intensive, and you often make pointless requests, because nothing has happened.
The way Facebook does it is the comet approach: rather than polling on an interval, as soon as one poll completes it issues another one. However, each request to the script on the server has an extremely long timeout, and the server only responds to the request once something has happened. You can see this happening if you bring up Firebug's Console tab while on Facebook, with requests to a script possibly taking minutes. It is quite ingenious really, since this method cuts down immediately on both the number of requests and how often you have to send them. You effectively now have an event framework that allows the server to 'fire' events.
Behind this, in terms of the actual content returned from those polls, it's a JSON response, with what appears to be a list of events, and info about them. It's minified though, so is a bit hard to read.
In terms of the actual technology, AJAX is the way to go here, because you can control request timeouts, and many other things. I'd recommend (Stack Overflow cliché here) using jQuery to do the AJAX; it'll take a lot of the cross-compatibility problems away. In terms of PHP, you could simply poll an event log database table in your PHP script, and only return to the client when something happens. There are, I expect, many ways of implementing this.
Implementing:
Server Side:
There appear to be a few implementations of comet libraries in PHP, but to be honest, it really is very simple, something perhaps like the following pseudocode:
while (!has_event_happened()) {
    sleep(5);
}
echo json_encode(get_events());
The has_event_happened function would just check if anything had happened in an events table or something, and then the get_events function would return a list of the new rows in the table? Depends on the context of the problem really.
Don't forget to change your PHP max execution time, otherwise it will timeout early!
Client Side:
Take a look at the jQuery plugin for doing Comet interaction:
Project homepage: http://plugins.jquery.com/project/Comet
Google Code: https://code.google.com/archive/p/jquerycomet/ - Appears to have some sort of example usage in the subversion repository.
That said, the plugin seems to add a fair bit of complexity, it really is very simple on the client, perhaps (with jQuery) something like:
function doPoll() {
  $.get("events.php", {}, function(result) {
    $.each(result.events, function(i, event) { // iterate over the events
      // do something with your event
    });
    // this effectively causes the poll to run again as
    // soon as the response comes back
    doPoll();
  }, 'json');
}

$(document).ready(function() {
  $.ajaxSetup({
    timeout: 1000 * 60 // set a global AJAX timeout of a minute
  });
  doPoll(); // do the first poll
});
The whole thing depends a lot on how your existing architecture is put together.
Update
As I continue to receive upvotes on this, I think it is reasonable to remember that this answer is 4 years old. The web has grown at a really fast pace, so please be mindful of this answer's age.
I had the same issue recently and researched about the subject.
The solution given is called long polling, and to use it correctly you must be sure that your AJAX request has a "large" timeout and that you always make a new request after the current one ends (by timeout, error or success).
Long Polling - Client
Here, to keep code short, I will use jQuery:
function pollTask() {
  $.ajax({
    url: '/api/Polling',
    async: true,       // by default, it's async, but...
    dataType: 'json',  // or the dataType you are working with
    timeout: 10000,    // IMPORTANT! this is a 10 seconds timeout
    cache: false
  }).done(function (eventList) {
    // Handle your data here
    var data;
    for (var eventName in eventList) {
      data = eventList[eventName];
      dispatcher.handle(eventName, data); // handle the `eventName` with `data`
    }
  }).always(pollTask);
}
It is important to remember that (from jQuery docs):
In jQuery 1.4.x and below, the XMLHttpRequest object will be in an invalid state if the request times out; accessing any object members may throw an exception. In Firefox 3.0+ only, script and JSONP requests cannot be cancelled by a timeout; the script will run even if it arrives after the timeout period.
Long Polling - Server
It is not in any specific language, but it would be something like this:
function handleRequest() {
  while (!anythingHappened() && !hasTimedOut()) {
    sleep(2);
  }
  return events();
}
Here, hasTimedOut will make sure your code does not wait forever, and anythingHappened will check if any event happened. The sleep is for releasing your thread to do other stuff while nothing happens. The events function will return a dictionary of events (or any other data structure you may prefer) in JSON format (or any other format you prefer).
It surely solves the problem, but if you are concerned about scalability and performance, as I was when researching, you might consider another solution I found.
Solution
Use sockets!
On the client side, to avoid any compatibility issues, use socket.io. It tries to use WebSockets directly and falls back to other transports when sockets are not available.
On the server side, create a server using NodeJS (example here). The client subscribes to a channel (observer pattern) created on the server. Whenever a notification has to be sent, it is published to this channel and the subscriber (the client) gets notified, as in the sketch below.
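A rough sketch of that publish/subscribe flow with socket.io (the file, event and room names here are invented for illustration):
// server.js
var http = require('http').createServer();
var io = require('socket.io')(http);

io.on('connection', function (socket) {
  // the client tells us which user's notifications it wants
  socket.on('subscribe', function (userId) {
    socket.join('notifications:' + userId);
  });
});

// elsewhere in the app, when something happens:
function notify(userId, payload) {
  io.to('notifications:' + userId).emit('notification', payload);
}

http.listen(3000);
And on the client:
var socket = io('http://localhost:3000');
socket.emit('subscribe', currentUserId);
socket.on('notification', function (payload) {
  // render "xxxx commented on your post"
});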
If you don't like this solution, try APE (Ajax Push Engine).
Hope I helped.
According to a slideshow about Facebook's Messaging system, Facebook uses the comet technology to "push" messages to web browsers. Facebook's comet server is built on the open-source Erlang web server mochiweb.
In the slideshow's diagram, the phrase "channel clusters" means "comet servers".
Many other big web sites build their own comet server, because every company's needs are different. But building your own comet server on top of an open-source comet server is a good approach.
You can try icomet, a C1000K C++ comet server built with libevent. icomet also provides a JavaScript library that is as simple to use as:
var comet = new iComet({
  sign_url: 'http://' + app_host + '/sign?obj=' + obj,
  sub_url: 'http://' + icomet_host + '/sub',
  callback: function(msg) {
    // on server push
    alert(msg.content);
  }
});
icomet supports a wide range of browsers and OSes, including Safari (iOS, macOS), IE (Windows), Firefox, Chrome, etc.
Facebook uses MQTT instead of HTTP. Push is better than polling.
Over HTTP we need to poll the server continuously, but with MQTT the server pushes messages to the clients.
Comparison between MQTT and HTTP: http://www.youtube.com/watch?v=-KNPXPmx88E
Note: my answer best fits mobile devices.
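As a rough illustration of the push model with the mqtt package for Node.js (the broker URL and topic here are made up):
var mqtt = require('mqtt');
var client = mqtt.connect('mqtt://broker.example.com');

client.on('connect', function () {
  // subscribe once; the broker pushes messages as they are published
  client.subscribe('notifications/user/123');
});

client.on('message', function (topic, message) {
  // fires as soon as something is published to the topic, no polling involved
  console.log(topic, message.toString());
});

// publisher side, when someone comments on a post:
// client.publish('notifications/user/123', JSON.stringify({ type: 'comment', post: 42 }));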
One important issue with long polling is error handling.
There are two types of errors:
The request might time out, in which case the client should re-establish the connection immediately. This is a normal event in long polling when no messages have arrived.
A network error or an execution error. This is an actual error which the client should gracefully accept, waiting for the server to come back online.
The main issue is that if your error handler reestablishes the connection immediately also for a type 2 error, the clients would DOS the server.
Both answers with code samples miss this.
function longPoll() {
  var shouldDelay = false;
  $.ajax({
    url: 'poll.php',
    async: true,       // by default, it's async, but...
    dataType: 'json',  // or the dataType you are working with
    timeout: 10000,    // IMPORTANT! this is a 10 seconds timeout
    cache: false
  }).done(function (data, textStatus, jqXHR) {
    // do something with data...
  }).fail(function (jqXHR, textStatus, errorThrown) {
    shouldDelay = textStatus !== "timeout";
  }).always(function() {
    // in case of a network error, throttle so we don't DOS ourselves;
    // if it was a timeout, it's normal operation, go again immediately
    var delay = shouldDelay ? 10000 : 0;
    window.setTimeout(longPoll, delay);
  });
}
longPoll(); //fire first handler

Long Running Wicket Ajax Request

I occasionally have some long running AJAX requests in my Wicket application. When this occurs the application is largely unusable as subsequent AJAX requests are queued up to process synchronously after the current request. I would like the request to terminate after a period of time regardless of whether or not a response has been returned (I have a user requirement that if this occurs we should present the user an error message and continue). This presents two questions:
Is there any way to specify a timeout that's specific to one AJAX request, or to all AJAX requests?
If not, is there any way to kill the current request?
I've looked through the wicket-ajax.js file and I don't see any mention of a request timeout whatsoever.
I've even gone so far as to try re-loading the page after some timeout on the client side, but unfortunately the server is still busy processing the original AJAX request and does not return until the AJAX request has finished processing.
Thanks!
I don't think it will help you to let the client 'cancel' the request. (However, this could work.)
The point is that the server is busy processing a request that is not required anymore. If you want to timeout such operations you had to implement the timeout on the server side. If the operation takes too long, then the server aborts it and returns some error value as the result of the Ajax request.
Regarding your queuing problem: you may consider using asynchronous requests instead of synchronous ones. This means the client first sends a request to start the long-running process; this request returns immediately. Then the client periodically polls the server and asks whether the process has finished. Those poll requests also return immediately, saying either that the process is still running or that it has finished with a certain result.
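A generic client-side sketch of that start-then-poll pattern (the endpoints and JSON shape are invented for illustration; in Wicket they would map to Ajax behaviors):
// kick off the long-running job; the server returns immediately with a job id
$.post('/startLongTask', function (job) {
  pollStatus(job.id);
}, 'json');

function pollStatus(jobId) {
  $.getJSON('/taskStatus', { id: jobId }, function (status) {
    if (status.finished) {
      // render status.result
    } else {
      // ask again in a second; each poll returns immediately
      setTimeout(function () { pollStatus(jobId); }, 1000);
    }
  });
}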
Failed solution: After a given setTimeout I kill the active transports and restart the channel, which handles everything on the client side. I avoided request conflicts by tying each to an ID and checking that against a global reference that increments each time a request is made and each time a request completes.
function longRunningCallCheck(refId) {
  // make sure the reference id matches the global id.
  // this indicates that we are still processing the
  // long running ajax call.
  if (refId == id) {
    // perform client processing here

    // kill all active transport layers
    var t = Wicket.Ajax.transports;
    for (var i = 0; i < t.length; ++i) {
      if (t[i].readyState != 0) {
        t[i].onreadystatechange = Wicket.emptyFunction;
        t[i].abort();
      }
    }
    // process the default channel
    Wicket.channelManager.done('0|s');
  }
}
Unfortunately, this still left the PageMap blocked and any subsequent calls wait for the request to complete on the server side.
My solution at this point is to instead provide the user an option to logout using a BookmarkablePageLink (which instantiates a new page, thus not having contention on the PageMap). Definitely not optimal.
Any better solutions are more than welcome, but this is the best one I could come up with.
