Cancel ajax calls?

I'm using the Select2 select boxes in my Django project. The ajax calls it makes can be fairly time-consuming if you've only entered a character or two in the query box, but they go quicker once you've entered several characters. So what I'm seeing is that you'll start typing a query, it will make 4 or 5 ajax calls, and the final one returns and displays its results. It looks fine on screen, but meanwhile the server is still churning away on the earlier queries. I've increased the "delay" parameter to 500 ms, but it's still a bit of a problem.
Is there a way to have the AJAX handler on the server detect that this is a new request from the same client as one that is currently processing, and tell the older one to exit immediately? It appears from reading other answers here that merely calling .abort() on the client side doesn't stop the query running on the server side.

If they are DB queries that are taking up the time, then basically nothing will stop them short of stopping the database server, which is of course not an option. If it is computation in nested loops, for example, then you could use the cache to detect whether another request has been submitted by the same user. Basically:
from django.core.cache import cache
from django.utils import timezone

def view(request):
    cache_key = request.session.session_key + 'some_identifier'
    start_time = timezone.now()
    cache.set(cache_key, start_time)
    for q in werty:  # the very expensive computation with millions of loops
        # A newer request from the same session has replaced our timestamp,
        # so stop working on this stale one.
        if start_time != cache.get(cache_key):
            break
        # ... continue the nasty computations ...
    else:
        cache.delete(cache_key)
But the Django part aside, here's what I would do: in the JS, add a condition so that when the search term is shorter than 3 characters, it waits 0.5 s (or less, whatever you like) before searching, and any new keystroke in the meantime cancels the pending search; with 3 or more characters it searches right away. I.e.:
var timeout;
function srch(param) {
    timeout = false;  // any new keystroke invalidates a previously pending search
    if (param.length < 3) {
        timeout = true;
        setTimeout(function () {
            if (timeout) {  // only fire if no newer keystroke arrived meanwhile
                $.ajax({blah: blah});
            }
        }, 500);
    } else {
        $.ajax({blah: blah});
    }
}
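A slightly tighter variant of the same idea, as a minimal sketch (not part of the original answer), cancels the pending timer on every keystroke instead of toggling a flag:

var timer;
function srch(param) {
    clearTimeout(timer);  // any keystroke cancels a pending search
    if (param.length < 3) {
        timer = setTimeout(function () {
            $.ajax({blah: blah});
        }, 500);
    } else {
        $.ajax({blah: blah});
    }
}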

Related

RxJS - Queueing ajax requests from separate inputs

I have a list of multiple inputs (dynamically generated - unknown number).
I want each to trigger an ajax request on every keystroke.
I want these ajax requests to be queued up, so only one is sent to the server at a time, and the next one is sent only after getting a response to the earlier one.
If new requests are triggered from an input that already has requests in the queue, I want the old ones associated with that input to be cancelled.
If new requests are triggered from an input that does not already have requests in the queue, I want the new requests to just be added to the end of the queue without cancelling anything.
I'm told that RxJS makes these kinds of complicated async operations easy, but I can't seem to wrap my head around all the RxJS operators.
I have queueing working with a single input below, but I don't really understand why the defer is necessary or how to queue requests for separate inputs while maintaining the switchMap-like behavior I think I want for individual inputs themselves.
Rx.Observable.fromEvent(
    $("#input"),
    'keyup'
)
.map((event) => {
    return $("#input").val();
})
.concatMap((inputVal) => {
    return Rx.Observable.defer(() => Rx.Observable.fromPromise(
        fetch(myURL + inputVal)
    ))
    .catch(() => Rx.Observable.empty());
})
.subscribe();
First of all, you have to create some sort of function that manages each input, something along the following lines:
requestAtKeyStroke(inputId: string) {
    return Rx.Observable.fromEvent(
        $(inputId),
        'keyup'
    )
    .map((event) => {
        return $(inputId).val();  // read this input's value, not a hard-coded one
    })
    .filter(value => value.length > 0)
    .switchMap((inputVal) => Rx.Observable.fromPromise(fetch(myURL + inputVal)));
}
Such a function deals with your third requirement, cancelling requests still in flight when a new one arrives. The key here is the switchMap operator.
Then what you can do is merge all the Observables corresponding to your inputs into one Observable. One way could be the following:

Rx.Observable.from(['#input1', '#input2']).map(input => requestAtKeyStroke(input)).mergeAll()
This does not fulfil all your requirements, since you may still have more than one request under execution at the same time, coming from different inputs. I am not sure whether it is possible to fulfil all of them at once; one possible compromise is sketched below.
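As a hedged sketch of that compromise (RxJS 5 style to match the code above; myURL and the input ids are placeholders): record the latest value typed into each input, queue the requests globally with concatMap, and skip a queued request that has gone stale by the time it reaches the front of the queue. This also illustrates why the defer from the question matters: it postpones creating the request until the queue actually reaches it. Note that this skips stale queued requests rather than aborting one already in flight.

var latest = {}; // latest value seen per input id

var keyups$ = Rx.Observable.from(['#input1', '#input2'])
    .mergeMap(function (id) {
        return Rx.Observable.fromEvent($(id), 'keyup')
            .map(function () { return { id: id, value: $(id).val() }; })
            .do(function (req) { latest[req.id] = req.value; });
    });

keyups$
    .concatMap(function (req) {
        // defer postpones building the request until the previous one has
        // completed, i.e. until this item reaches the front of the queue
        return Rx.Observable.defer(function () {
            if (latest[req.id] !== req.value) {
                // a newer keystroke from the same input exists: skip this one
                return Rx.Observable.empty();
            }
            return Rx.Observable.fromPromise(fetch(myURL + req.value));
        }).catch(function () { return Rx.Observable.empty(); });
    })
    .subscribe();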

Is it problematic to request the server on every character typed in?

In my frontend I have an input field that sends an ajax request on every character typed in (using vue.js) to get realtime filtering (I can't use a vue filter because of pagination).
Everything works smoothly in my test environment, but could this lead to performance issues on (a bigger amount of) real data, and if so, what can I do to prevent this?
Is it problematic?
Yes.
The client will send a lot of requests. Depending on the network connection and browser, this could lead to a perceptible feeling of lag for the user.
The server will receive a lot of requests, potentially leading to degraded performance for all clients and extra usage of resources on the server side.
Responses to requests have a higher chance of arriving out of order. The faster you send requests, the more apparent this becomes (e.g. displaying autocomplete results for "ab" when the user has already typed "abc").
Overall, it's bad practice, mostly because it's not necessary to make that many requests.
How to fix it?
As J. B. mentioned in his answer, debouncing is the way to go.
The debounce function (copied below) ensures that a certain function doesn't get called more than once every X milliseconds. Concretely, it allows you to send a request as soon as the user hasn't typed anything for, say, 200ms.
Here's a complete example (try typing text very fast in the input):
function debounce(func, wait, immediate) {
    var timeout;
    return function() {
        var context = this, args = arguments;
        var later = function() {
            timeout = null;
            if (!immediate) func.apply(context, args);
        };
        var callNow = immediate && !timeout;
        clearTimeout(timeout);
        timeout = setTimeout(later, wait);
        if (callNow) func.apply(context, args);
    };
}
var sendAjaxRequest = function(inputText) {
    // do your ajax request here
    console.log("sent via ajax: " + inputText);
};
var sendAjaxRequestDebounced = debounce(sendAjaxRequest, 200, false); // 200ms

var el = document.getElementById("my-input");
el.onkeyup = function(evt) {
    // user pressed a key
    console.log("typed: " + this.value);
    sendAjaxRequestDebounced(this.value);
};
<input type="text" id="my-input">
For more details on how the debounce function works, see this question.
I actually discuss this exact scenario in my Vue.js training course. In short, you may want to wait until a user clicks a button or something of that nature before sending the request. Another approach to consider is the lazy modifier, which defers updating the bound value until the change event fires.
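For illustration, a minimal sketch of the lazy modifier (the model name searchQuery is an assumption):

<!-- with .lazy, the bound value updates on the change event (e.g. on blur or Enter) rather than on every keystroke -->
<input v-model.lazy="searchQuery">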
It's hard to know the correct approach without knowing more about the goals of the app, but the two approaches above are both worth considering.
I hope this helps.
The mechanism I was searching for is called debouncing.
I used this approach in the application.

Paginated connection plus added edge from mutation

I'm having trouble figuring out how to accomplish what seems to be a pretty standard pattern when doing a RANGE_ADD mutation.
Say on page load I pull in and render a connection chatmessages with first: 10 pagination. I then do an AddMessageMutation which prepends to that same connection. Since the connection is paginated by first: 10, the last item of the connection is pushed out to make room for my new edge and is thus removed from rendering. I can of course add +1 to first in the onSuccess of the mutation, but this often leaves a weird flickering effect of removing and reinserting the edge at the end.
This problem seems to get even more difficult if I want to do an optimistic update to the connection, since there is no onOptimistic callback.
Since this seems like a pretty common pattern I figured I'd ask if I'm approaching this the wrong way.
Referenced in issue:
https://github.com/facebook/relay/issues/384
I think the problem is that you're incrementing the count in the onSuccess handler (i.e. after the server has responded) when what you want is to increment it in tandem with the optimistic mutation (i.e. right away).
Try this:
_handleMessageCreated() {
    Relay.Store.update(
        new AddMessageMutation({/* ... */}),
        {onFailure: () => this._handleMessageCreationRollback()}
    );
    // Optimistically increment the count
    this.props.relay.setVariables({
        numMessagesToShow: this.props.relay.variables.numMessagesToShow + 1,
    });
}

_handleMessageCreationRollback() {
    this.props.relay.setVariables({
        numMessagesToShow: this.props.relay.variables.numMessagesToShow - 1,
    });
}
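For setVariables to have that effect, the connection needs to be paginated by the same variable in the Relay container. A minimal sketch, with field names assumed from the question:

// In the Relay.createContainer config (field names are assumptions):
initialVariables: {
    numMessagesToShow: 10,
},
fragments: {
    viewer: () => Relay.QL`
        fragment on Viewer {
            chatmessages(first: $numMessagesToShow) {
                edges { node { id } }
            }
        }
    `,
},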
See also: https://github.com/facebook/relay/issues/135#issuecomment-134400856

SCAN command with spring redis template

I am trying to execute the "scan" command with RedisConnection. I don't understand why the following code is throwing a NoSuchElementException:
RedisConnection redisConnection = redisTemplate.getConnectionFactory().getConnection();
Cursor c = redisConnection.scan(scanOptions);
while (c.hasNext()) {
    c.next();
}
Exception:
java.util.NoSuchElementException
    at java.util.Collections$EmptyIterator.next(Collections.java:4189)
    at org.springframework.data.redis.core.ScanCursor.moveNext(ScanCursor.java:215)
    at org.springframework.data.redis.core.ScanCursor.next(ScanCursor.java:202)
Yes, I have tried this with spring-data-redis 1.6.6.RELEASE and there were no issues; the simple while loop below is enough. I also set the count value to 100 (a higher value) to save round-trip time.
RedisConnection redisConnection = null;
try {
    redisConnection = redisTemplate.getConnectionFactory().getConnection();
    ScanOptions options = ScanOptions.scanOptions().match(workQKey).count(100).build();
    Cursor c = redisConnection.scan(options);
    while (c.hasNext()) {
        logger.info(new String((byte[]) c.next()));
    }
} finally {
    redisConnection.close(); // ensure the connection is closed
}
I'm using spring-data-redis 1.6.0.RELEASE and Jedis 2.7.2, and I do think the ScanCursor implementation is slightly flawed with regards to handling this case on this version; I've not checked previous versions, though.
So: rather complicated to explain, but the ScanOptions object has a "count" field that needs to be set (the default is 10). This field contains an "intent", or expected number of results, for the search. As explained (not really clearly, IMHO) here, you may change the value of count at each invocation, especially if no result has been returned. I understand this as "a work intent": if you do not get anything back, maybe your key space is vast and the SCAN command has not worked "hard enough". Obviously, as long as you're getting results back, you do not need to increase it.
A "simple-but-dangerous" approach would be to use a very large count (e.g. 1 million or more). This makes Redis go off searching your vast key space to find "at least or nearly as many" results as your large count. Don't forget: Redis is single-threaded, so you just killed your performance. Try this on a Redis instance with 12M keys and you'll see that although SCAN may happily return results with a very high count value, it will do absolutely nothing else during that search.
Now, to the solution to your problem:

ScanOptions options = ScanOptions.scanOptions().match(pattern).count(countValue).build();
boolean done = false;
// The while-loop below makes sure that we'll get a valid cursor -
// by looking harder if we don't get a result initially.
while (!done) {
    try (Cursor c = redisConnection.scan(options)) {
        while (c.hasNext()) {
            c.next();
        }
        done = true; // we've made it here, let's go away
    } catch (NoSuchElementException nse) {
        System.out.println("Going for " + countValue + " was not hard enough. Trying harder");
        countValue *= 2; // double the count and retry
        options = ScanOptions.scanOptions().match(pattern).count(countValue).build();
    }
}
Do note that the ScanCursor implementation of Spring Data Redis will properly follow the SCAN contract and loop correctly, as much as needed, to get to the end of the keyspace as per the documentation. I've not found a way to change the scan options within the same cursor, so there may be a risk that if you get halfway through your results and then hit a NoSuchElementException, you'll start again (and essentially do some of the work twice).
Of course, better solutions are always welcome :)
My old code:
ScanOptions.scanOptions().match("*" + query + "*").count(10).build();
Working code:
ScanOptions.scanOptions().match("*" + query + "*").count(Integer.MAX_VALUE).build();
Note that, per the caveat in the answer above, a count as large as Integer.MAX_VALUE effectively asks Redis to walk the entire keyspace in one call, so use it with care.

calling HTTP requests in angularjs in batched form?

I have two for loops and an HTTP call inside them.
for (i = 0; i < m; i++) {
    for (j = 0; j < n; j++) {
        // $http call that uses i and j as GET parameters (url is a placeholder)
        $http.get(url, { params: { i: i, j: j } })
            .success(function (data) { /* something */ })
            .error(function (err) { /* something more */ });
    }
}
The problem with this is that it makes around 200-250 AJAX calls, depending on the values of m and n, which causes the browser to crash when accessed from mobile.
I would like to know if there is a way to make the HTTP requests in batches (n requests at a time), moving on to the next batch once the current batch's calls have finished.
You could always use a proper HTTP batch module like angular-http-batcher, which takes all of the requests and turns them into a single HTTP POST request before sending it to the server, reducing the 250 calls to 1. The module is here: https://github.com/jonsamwell/angular-http-batcher and a detailed explanation of it is here: http://jonsamwell.com/batching-http-requests-in-angular/ (a rough configuration sketch follows).
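From memory of that module's README (treat the module and provider names as assumptions, and check the repo for the current API), the configuration looks roughly like this:

// Rough sketch from memory of the README; names may differ in current versions.
angular.module('myApp', ['jcs.angular-http-batch'])
    .config(['httpBatchConfigProvider', function (httpBatchConfigProvider) {
        // Calls under the root endpoint are collected and POSTed to the
        // batch endpoint as a single multipart request.
        httpBatchConfigProvider.setAllowedBatchEndpoint(
            'https://api.myapp.com',        // root endpoint (assumption)
            'https://api.myapp.com/batch',  // batch endpoint (assumption)
            { maxBatchedRequestPerCall: 20 });
    }]);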
Yes, use the async library found here: https://github.com/caolan/async
First, use the loops to create your tasks:
var tasks = []; // array to hold the tasks
for (i = 0; i < m; i++) {
    for (j = 0; j < n; j++) {
        // We add a function to the array of "tasks".
        // Async will pass each function a standard callback(error, data).
        // The IIFE captures the current i and j, because all tasks would
        // otherwise close over the loop variables and see their final values.
        tasks.push((function (i, j) {
            return function (cb) {
                // placeholder $http call that uses i and j as GET parameters
                $http.get(url, { params: { i: i, j: j } })
                    .success(function (data) { cb(null, data); })
                    .error(function (err) { cb(err); });
            };
        })(i, j));
    }
}
Now that you've got an array full of callback-ready functions, you can use async to execute them. Async has a great feature to "limit" the number of simultaneous requests and therefore "batch" them:
async.parallelLimit(tasks, 10, function (error, results) {
    // results is an array with each task's result.
    // Don't forget to use $scope.$apply or $timeout to trigger a digest.
});
In the above example you will run 10 tasks at a time in parallel.
Async has a ton of other amazing options as well: you can run things in series or parallel, map arrays, etc. It's worth noting that you might achieve greater efficiency by using a single worker function with async's eachLimit, sketched below.
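As a hedged sketch of that eachLimit variant (url is again a placeholder): flatten the index pairs into an array and let async run one worker function over it, at most 10 at a time:

var pairs = [];
for (var i = 0; i < m; i++) {
    for (var j = 0; j < n; j++) {
        pairs.push({ i: i, j: j });
    }
}

async.eachLimit(pairs, 10, function (p, cb) {
    // p is passed per task, so no closure gymnastics are needed
    $http.get(url, { params: p })
        .success(function () { cb(); })
        .error(function (err) { cb(err); });
}, function (err) {
    // all requests finished (or the first error occurred)
});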
The way I did it is as follows (this will help when one wants to make HTTP requests in batches of n requests at a time):

var batchedHTTP = function (i) {
    // terminating condition (in this case, i == m)
    if (i >= m) {
        return;
    }
    var promisesArray = [];
    for (var j = 0; j < n; j++) {
        // placeholder $http call with i and j as GET parameters
        var promise = $http.get(url, { params: { i: i, j: j } })
            .success(function (data) { /* do something */ })
            .error(function (err) { /* do something else */ });
        promisesArray.push(promise);
    }
    // once all n calls in this batch have settled, start the next batch
    $q.all(promisesArray).then(function () {
        batchedHTTP(i + 1);
    });
};
batchedHTTP(0); // start with i = 0
