for (var i = 0; i < 5; ++i) {
    var xhr;
    if (window.XMLHttpRequest) {
        xhr = new XMLHttpRequest();
    } else if (window.ActiveXObject) {
        xhr = new ActiveXObject("Msxml2.XMLHTTP");
    }
    xhr.open('GET', '/Test/LongOperation?p=' + new Date());
    xhr.send('');
}
This is only a demo (not live code) but it illustrates the core problem.
LongOperation is a method that returns a result after 10 seconds.
Questions:
Why does IE8 (and maybe other IE versions) hang when the user tries to navigate away from the page right after the above snippet has run? Firefox and Safari cancel these requests and allow navigation to another page. If you replace 'i < 5' with 'i < 4', IE does not hang.
How can I work around this ugly IE behavior? Users are very upset when their browser suddenly hangs.
Most browsers have a built-in limit of 4 connections to any given server. One way to work around this "problem" is to use a different hostname for out-of-band XMLHttpRequests: regular user requests go to the main host, while the AJAX requests go to the second server.
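For illustration, a minimal sketch of that idea. The second hostname (ajax.example.com) is hypothetical, and the same-origin policy still has to be dealt with (older browsers in particular restrict cross-host XHR):
// Hypothetical sketch: long-running calls go to a second hostname so they
// don't consume the main host's per-server connection limit.
// Note: the browser's same-origin policy still applies, so the second host
// must explicitly allow these requests (e.g. via CORS in modern browsers).
var AJAX_HOST = 'http://ajax.example.com'; // hypothetical second hostname

function longOperation() {
    var xhr = window.XMLHttpRequest
        ? new XMLHttpRequest()
        : new ActiveXObject("Msxml2.XMLHTTP");
    xhr.open('GET', AJAX_HOST + '/Test/LongOperation?p=' + new Date());
    xhr.send('');
    return xhr;
}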
My answer to my own question: I abort all incomplete xhr objects in window.onbeforeunload. At least this solution works for me. I slightly override the behavior of $.ajax():
;(function($) {
    var rq = [];
    var ajax = $.ajax;
    $.ajax = function(settings) {
        // override complete() operation
        var complete = settings.complete;
        settings.complete = function(xhr) {
            if (xhr) {
                // xhr may be undefined, for example when downloading JavaScript
                for (var i = 0, len = rq.length; i < len; ++i) {
                    if (rq[i] == xhr) {
                        // drop completed xhr from list
                        rq.splice(i, 1);
                        break;
                    }
                }
            }
            // execute base
            if (complete) {
                complete.apply(this, arguments);
            }
        };
        var r = ajax.apply(this, arguments);
        if (r) {
            // r may be undefined, for example when downloading JavaScript
            rq.push(r);
        }
        return r;
    };
    // 'kill' all pending xhrs
    $(window).bind('beforeunload', function() {
        $.each(rq, function(i, xhr) {
            try {
                xhr.abort();
            } catch (e) {
                $debug.fail('failed to abort xhr');
            }
        });
        rq = [];
    });
})(jQuery);
$debug is my own utility class.
Try running them asynchronously and triggering the next HTTP request when each one completes. I suspect that the XMLHttpRequest is blocking the UI thread in IE, whereas the implementations in other browsers are a little more graceful.
Hopefully that will give you a work-around for question 2, but I can only guess at the true reason for question 1; it could just be a bug.
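For example, a rough sketch of running the requests one at a time, assuming jQuery is available (the helper name runSequentially is made up):
// Hypothetical sketch: fire the next request only after the previous one
// completes, so no more than one long-running request is outstanding.
function runSequentially(remaining) {
    if (remaining <= 0) {
        return;
    }
    $.ajax({
        url: '/Test/LongOperation',
        data: { p: new Date().getTime() },
        complete: function() { // fires on success or error
            runSequentially(remaining - 1);
        }
    });
}

runSequentially(5);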
Related
I have some code that saves data using Breeze and reports progress over multiple saves, and it is working reasonably well.
However, sometimes a save will time out, and I'd like to retry it once automatically. (Currently the user is shown an error and has to retry manually.)
I am struggling to find an appropriate way to do this, and I am confused by promises, so I'd appreciate some help.
Here is my code:
//I'm using Breeze, but because the save takes so long, I
//want to break the changes down into chunks and report progress
//as each chunk is saved....
var surveys = EntityQuery
    .from('PropertySurveys')
    .using(manager)
    .executeLocally();

var promises = [];
var fails = [];
var so = new SaveOptions({ allowConcurrentSaves: false });
var count = 0;

//...so I iterate through the surveys, creating a promise for each survey...
for (var i = 0, len = surveys.length; i < len; i++) {
    var query = EntityQuery.from('AnsweredQuestions')
        .where('PropertySurveyID', '==', surveys[i].ID)
        .expand('ActualAnswers');
    var graph = manager.getEntityGraph(query);
    var changes = graph.filter(function (entity) {
        return !entity.entityAspect.entityState.isUnchanged();
    });
    if (changes.length > 0) {
        promises.push(manager
            .saveChanges(changes, so)
            .then(function () {
                //reporting progress
                count++;
                logger.info('Uploaded ' + count + ' of ' + promises.length);
            },
            function () {
                //could I retry the fail here?
                fails.push(changes);
            }
        ));
    }
}

//....then I use $q.all to execute the promises
return $q.all(promises).then(function () {
    if (fails.length > 0) {
        //could I retry the fails here?
        saveFail();
    }
    else {
        saveSuccess();
    }
});
Edit
To clarify why I have been attempting this:
I have an http interceptor that sets a timeout on all http requests. When a request times out, the timeout is adjusted upwards and the user is shown an error message telling them they can retry with a longer wait if they wish.
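For illustration, a rough sketch of the kind of interceptor I mean, assuming AngularJS (the module name, factory name and starting value are placeholders):
// Hypothetical sketch of a request interceptor that applies a global,
// adjustable timeout to every $http request.
var currentTimeout = 30000; // starting allowance in milliseconds (placeholder)

app.factory('timeoutInterceptor', function () {
    return {
        request: function (config) {
            config.timeout = currentTimeout; // $http honours a numeric timeout
            return config;
        }
    };
});

app.config(function ($httpProvider) {
    $httpProvider.interceptors.push('timeoutInterceptor');
});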
Sending all the changes in one http request is looking like it could take several minutes, so I decided to break the changes down into several http requests, reporting progress as each request succeeds.
Now, some requests in the batch might timeout and some might not.
Then I had the bright idea of setting a low timeout on the http requests to start with and automatically increasing it. But the requests in the batch are sent asynchronously with the same timeout setting, and the timeout is adjusted once for each failure. That is no good.
To solve this I wanted to move the timeout adjustment to after the batch completes, and then retry all the failed requests.
To be honest I'm not so sure an automatic timeout adjustment and retry is such a great idea in the first place. And even if it was, it would probably be better in a situation where http requests were made one after another - which I've also been looking at: https://stackoverflow.com/a/25730751/150342
Orchestrating retries downstream of $q.all() is possible but would be very messy indeed. It's far simpler to perform retries before aggregating the promises.
You could exploit closures and retry counters, but it's cleaner to build a catch chain:
function retry(fn, n) {
    /*
     * Description: perform an arbitrary asynchronous function,
     * and, on error, retry up to n times.
     * Returns: promise
     */
    var p = fn(); // first try
    for (var i = 0; i < n; i++) {
        p = p.catch(function(error) {
            // possibly log error here to make it observable
            return fn(); // retry
        });
    }
    return p;
}
Now, amend your for loop:
use Function.prototype.bind() to define each save as a function with bound-in parameters.
pass that function to retry().
push the promise returned by retry().then(...) onto the promises array.
var query, graph, changes, saveFn;
for (var i = 0, len = surveys.length; i < len; i++) {
    query = ...;   // as before
    graph = ...;   // as before
    changes = ...; // as before
    if (changes.length > 0) {
        saveFn = manager.saveChanges.bind(manager, changes, so); // this is what needs to be tried/retried
        promises.push(retry(saveFn, 1).then(function() {
            // as before
        }, function () {
            // as before
        }));
    }
}
return $q.all(promises)... // as before
EDIT
It's not clear why you would want to retry downstream of $q.all(). If it's a matter of introducing some delay before retrying, the simplest way would be to do so within the pattern above.
However, if retrying downstream of $q.all() is a firm requirement, here's a cleanish recursive solution that allows any number of retries, with minimal need for outer vars:
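For example, a minimal sketch of a delayed variant of retry(), assuming Angular's $timeout is available in this scope (the delay value is arbitrary):
// Hypothetical variant of retry() that waits before each retry attempt.
function retryWithDelay(fn, n, delayMs) {
    var p = fn(); // first try
    for (var i = 0; i < n; i++) {
        p = p.catch(function (error) {
            // wait, then retry; $timeout returns a promise, so the chain stays intact
            return $timeout(angular.noop, delayMs).then(fn);
        });
    }
    return p;
}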
var surveys = //as before
var limit = 2;
function save(changes) {
    return manager.saveChanges(changes, so).then(function () {
        return true; // true signifies success
    }, function (error) {
        logger.error('Save Failed');
        return changes; // retry (subject to limit)
    });
}

function saveChanges(changes_array, tries) {
    tries = tries || 0;
    if (tries >= limit) {
        throw new Error('After ' + tries + ' tries, ' + changes_array.length + ' changes objects were still unsaved.');
    }
    if (changes_array.length > 0) {
        logger.info('Starting try number ' + (tries + 1) + ' comprising ' + changes_array.length + ' changes objects');
        return $q.all(changes_array.map(save)).then(function (results) {
            var successes = results.filter(function (item) { return item === true; });
            var failures = results.filter(function (item) { return item !== true; });
            logger.info('Uploaded ' + successes.length + ' of ' + changes_array.length);
            return saveChanges(failures, tries + 1); // recursive call
        });
    } else {
        return $q.when(); // return a resolved promise
    }
}
//using reduce to populate an array of changes
//the second parameter passed to the reduce method is the initial value
//for memo - in this case an empty array
var changes_array = surveys.reduce(function (memo, survey) {
    //memo is the return value from the previous call to the function
    var query = EntityQuery.from('AnsweredQuestions')
        .where('PropertySurveyID', '==', survey.ID)
        .expand('ActualAnswers');
    var graph = manager.getEntityGraph(query);
    var changes = graph.filter(function (entity) {
        return !entity.entityAspect.entityState.isUnchanged();
    });
    if (changes.length > 0) {
        memo.push(changes);
    }
    return memo;
}, []);
return saveChanges(changes_array).then(saveSuccess, saveFail);
Progress reporting is slightly different here. With a little more thought it could be made more like in your own answer.
This is a very rough idea of how to solve it.
var promises = [];
var LIMIT = 3; // 3 tries per promise

data.forEach(function(chunk) {
    promises.push(tryOrFail({
        data: chunk,
        tries: 0
    }));
});

function tryOrFail(item) {
    if (item.tries === LIMIT) return $q.reject();
    ++item.tries;
    return processChunk(item.data)
        .catch(function() {
            // Some error handling here
            return tryOrFail(item);
        });
}

$q.all(promises) //...
Two useful answers here, but having worked through this I have concluded that immediate retries are not really going to work for me.
I want to wait for the first batch to complete, and then, if the failures were caused by timeouts, increase the timeout allowance before retrying the failures.
So I took Juan Stiza's example and modified it to do what I want, i.e. retry failures with $q.all.
My code now looks like this:
var surveys = //as before
var successes = 0;
var retries = 0;
var failedChanges = [];
//saveChanges also keeps track of retries, successes and fails
//it resolves the first time through, and rejects the second time
//it might be better written as two functions - a save and a retry
function saveChanges(data) {
    if (data.retrying) {
        retries++;
        logger.info('Retrying ' + retries + ' of ' + failedChanges.length);
    }
    return manager
        .saveChanges(data.changes, so)
        .then(function () {
            successes++;
            logger.info('Uploaded ' + successes + ' of ' + promises.length);
        },
        function (error) {
            if (!data.retrying) {
                //store the changes and resolve the promise
                //so that saveChanges can be called again after the call to $q.all
                failedChanges.push(data.changes);
                return; //resolved
            }
            logger.error('Retry Failed');
            return $q.reject();
        });
}

//using map instead of a for loop to call saveChanges
//and store the returned promises in an array
var promises = surveys.map(function (survey) {
    var changes = //as before
    return saveChanges({ changes: changes, retrying: false });
});

logger.info('Starting data upload');

return $q.all(promises).then(function () {
    if (failedChanges.length > 0) {
        //promises for the retry attempts
        var retryPromises = failedChanges.map(function (data) {
            return saveChanges({ changes: data, retrying: true });
        });
        return $q.all(retryPromises).then(saveSuccess, saveFail);
    }
    else {
        saveSuccess();
    }
});
I am using Ajax and jQuery to send a synchronous HTTP request.
I have to use a synchronous request because I want to return a value from the Ajax function after evaluating the result of the server-side script.
I know a synchronous request will freeze the browser, but because of my requirements I have to make the request this way.
I also know that with a synchronous request there is no need for the onreadystatechange function; we should evaluate the result after sending the request, i.e. below the send() call. Done that way, my code works.
But my problem is that when I use the onreadystatechange function, it works in Firefox >= 4 but not in Firefox versions below 4.
Please help me find out whether the problem is with the code or the browser.
I am not able to find the bug and I'm stuck. Please help.
Here is my code:
function test(txt_obj) {
    var keywords = txt_obj.value;
    var SHttpRequestObject = false;
    var url = "/speller/server-scripts/ifmisspelled_words.html" + '?keywords=' + keywords;
    var speller_res = 0;
    if (window.XMLHttpRequest) {
        SHttpRequestObject = new XMLHttpRequest();
    } else if (window.ActiveXObject) {
        SHttpRequestObject = new ActiveXObject("Microsoft.XMLHTTP");
    }
    SHttpRequestObject.open("POST", url, false); // false = synchronous
    if (SHttpRequestObject) {
        SHttpRequestObject.onreadystatechange = function() {
            if (SHttpRequestObject.readyState == 4 && SHttpRequestObject.status == 200) {
                var result = eval("(" + SHttpRequestObject.responseText + ")");
                if (result.error) {
                    speller_res = 1;
                } else if (result.word_exist) {
                    speller_res = 1;
                } else if (result.word_not_exist) {
                    speller_res = 0;
                }
            }
        };
    }
    SHttpRequestObject.send(null);
    return speller_res;
}
From MDN:
onreadystatechange
Warning: This must not be used from native code. You should also not
use this with synchronous requests.
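In other words, for a synchronous request you can drop the handler and read the response as soon as send() returns. A rough sketch of the same function written that way (encodeURIComponent is my addition; eval is kept only to match the original response handling):
// Sketch: for a synchronous request, evaluate the result after send() returns
// instead of relying on onreadystatechange.
function test(txt_obj) {
    var keywords = txt_obj.value;
    var url = "/speller/server-scripts/ifmisspelled_words.html" + '?keywords=' + encodeURIComponent(keywords);
    var speller_res = 0;
    var xhr = window.XMLHttpRequest
        ? new XMLHttpRequest()
        : new ActiveXObject("Microsoft.XMLHTTP");
    xhr.open("POST", url, false); // synchronous
    xhr.send(null);               // blocks until the response has arrived
    if (xhr.status == 200) {
        var result = eval("(" + xhr.responseText + ")"); // JSON.parse is safer where supported
        if (result.error || result.word_exist) {
            speller_res = 1;
        }
    }
    return speller_res;
}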
I am using an observer on "http-on-modify-request" to analyze HTTP requests (and responses with the corresponding other observers).
Is it possible to determine whether the HTTP request / response is the main frame loading (the actual page DOM)? As opposed to another resource (image, css, sub_frame, etc.).
The docs have most of the answer you're looking for here and I've modified it below for use with the addon-sdk.
You can watch for an IFRAME by comparing the location with the top.document location.
I don't think there's an easy way to detect the loading of images, etc., so you'll probably want to just watch for the first hit that's not an IFRAME and regard everything else as css/image/script content loading.
var chrome = require("chrome");

var httpmods = {
    observe : function(aSubject, aTopic, aData) {
        console.log("observer", aSubject, aTopic, aData);
        aSubject.QueryInterface(chrome.Ci.nsIHttpChannel);
        var url = aSubject.URI.spec;
        var dom = this.getBrowserFromChannel(aSubject);
        if (dom) {
            if (dom.top.document && dom.location === dom.top.document.location) {
                console.log("ISN'T IFRAME");
            } else {
                console.log("IS IFRAME");
            }
        }
    },
    getBrowserFromChannel: function (aChannel) {
        try {
            var notificationCallbacks =
                aChannel.notificationCallbacks ? aChannel.notificationCallbacks : aChannel.loadGroup.notificationCallbacks;
            if (!notificationCallbacks)
                return null;
            var domWin = notificationCallbacks.getInterface(chrome.Ci.nsIDOMWindow);
            return domWin;
        }
        catch (e) {
            dump(e + "\n");
            return null;
        }
    }
};

require("observer-service").add("http-on-modify-request", httpmods.observe, httpmods);
I'm using LESS CSS (more exactly less.js), which seems to use localStorage under the hood. I had never seen an error like this before while running my app locally, but now I get "Persistent storage maximum size reached" on every page display, just above the link to the single .less file of my app.
This only happens with Firefox 12.0 so far.
Is there any way to solve this?
P.S.: mainly inspired by Calculating usage of localStorage space, this is what I ended up doing (it is based on Prototype and depends on a trivial custom Logger class, but it should be easy to adapt to your context):
"use strict";
var LocalStorageChecker = Class.create({
testDummyKey: "__DUMMY_DATA_KEY__",
maxIterations: 100,
logger: new Logger("LocalStorageChecker"),
analyzeStorage: function() {
var result = false;
if (Modernizr.localstorage && this._isLimitReached()) {
this._clear();
}
return result;
},
_isLimitReached: function() {
var localStorage = window.localStorage;
var count = 0;
var limitIsReached = false;
do {
try {
var previousEntry = localStorage.getItem(this.testDummyKey);
var entry = (previousEntry == null ? "" : previousEntry) + "m";
localStorage.setItem(this.testDummyKey, entry);
}
catch(e) {
this.logger.debug("Limit exceeded after " + count + " iteration(s)");
limitIsReached = true;
}
}
while(!limitIsReached && count++ < this.maxIterations);
localStorage.removeItem(this.testDummyKey);
return limitIsReached;
},
_clear: function() {
try {
var localStorage = window.localStorage;
localStorage.clear();
this.logger.debug("Storage clear successfully performed");
}
catch(e) {
this.logger.error("An error occurred during storage clear: ");
this.logger.error(e);
}
}
});
document.observe("dom:loaded",function() {
var checker = new LocalStorageChecker();
checker.analyzeStorage();
});
P.P.S.: I haven't measured the performance impact on the UI yet, but a decorator could be created to perform the storage test only every X minutes (storing the last execution timestamp in localStorage, for instance).
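For illustration, a rough sketch of such a decorator using Prototype's subclassing; the key name and the 10-minute interval are arbitrary assumptions:
// Hypothetical sketch: only run the (potentially slow) storage test
// if it hasn't run in the last 10 minutes.
var ThrottledLocalStorageChecker = Class.create(LocalStorageChecker, {
    lastRunKey: "__STORAGE_CHECK_LAST_RUN__",
    intervalMs: 10 * 60 * 1000,

    analyzeStorage: function($super) {
        if (!window.localStorage) {
            return false; // nothing to check
        }
        var lastRun = parseInt(window.localStorage.getItem(this.lastRunKey), 10) || 0;
        if (Date.now() - lastRun < this.intervalMs) {
            return false; // checked recently, skip
        }
        window.localStorage.setItem(this.lastRunKey, String(Date.now()));
        return $super();
    }
});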
Here is a good resource for the error you are running into.
http://www.sitepoint.com/building-web-pages-with-local-storage/#fbid=5fFWRXrnKjZ
It explains that localStorage only has so much room and that you can max it out in each browser. Look into removing some data from localStorage to resolve your problem.
Less.js persistently caches content that is @imported. You can use this script to clear the cached content. Using the script below, you can call destroyLessCache('/path/to/css/') and it will clear your localStorage of css files that have been cached.
function destroyLessCache(pathToCss) { // e.g. '/css/' or '/stylesheets/'
    if (!window.localStorage || !less || less.env !== 'development') {
        return;
    }
    var host = window.location.host;
    var protocol = window.location.protocol;
    var keyPrefix = protocol + '//' + host + pathToCss;

    for (var key in window.localStorage) {
        if (key.indexOf(keyPrefix) === 0) {
            delete window.localStorage[key];
        }
    }
}
PLEASE READ THE UPDATE #2 BELOW IF YOU ARE INTERESTED IN THIS PROBLEM ;)
Say I put this code into the JS of my extension.
var reader = {
    onInputStreamReady : function(input) {
        var sin = Cc["@mozilla.org/scriptableinputstream;1"]
                    .createInstance(Ci.nsIScriptableInputStream);
        sin.init(input);
        sin.available();
        var request = '';
        while (sin.available()) {
            request = request + sin.read(512);
        }
        console.log('Received: ' + request);
        input.asyncWait(reader, 0, 0, null);
    }
};

var listener = {
    onSocketAccepted: function(serverSocket, clientSocket) {
        console.log("Accepted connection on " + clientSocket.host + ":" + clientSocket.port);
        input = clientSocket.openInputStream(0, 0, 0).QueryInterface(Ci.nsIAsyncInputStream);
        output = clientSocket.openOutputStream(Ci.nsITransport.OPEN_BLOCKING, 0, 0);
        input.asyncWait(reader, 0, 0, null);
    }
};

var serverSocket = Cc["@mozilla.org/network/server-socket;1"]
                     .createInstance(Ci.nsIServerSocket);
serverSocket.init(-1, true, 5);
console.log("Opened socket on " + serverSocket.port);
serverSocket.asyncListen(listener);
Then I run Firefox and connect to the socket via telnet
telnet localhost PORT
I send 5 messages and they get printed out, but when I try to send a 6th message I get
firefox-bin: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.0.
Even worse, when I put this same code into an XPCOM component (because that's where I actually need it), after I try sending a message via telnet I get
Segmentation fault
or sometimes
GLib-ERROR **: /build/buildd/glib2.0-2.24.1/glib/gmem.c:137: failed to allocate 32 bytes
aborting...
Aborted
printed to the terminal from which I launched Firefox.
This is really weird stuff. Can you spot something wrong with the code I've pasted, or is something wrong with my Firefox/system, or is the nsIServerSocket interface deprecated?
I'm testing with Firefox 3.6.6.
I would really appreciate an answer. Perhaps you could point me to a good example of using sockets within an XPCOM component; I haven't seen many of those around.
UPDATE
I just realised that it used to work, so now I think that my Console component breaks it. I have no idea how this is related, but if I don't use this component the sockets work fine.
Here is the code of my Console component. I will try to figure out what's wrong and why it interferes, and I'll post my findings later. Likely I'm doing something terribly wrong here to cause segmentation faults with my JavaScript =)
Voodoo..
components/Console.js:
const Cc = Components.classes;
const Ci = Components.interfaces;
const Cr = Components.results;
Console.prototype = (function() {
    var win;
    var initialized = false;
    var ready = false;

    var _log = function(m, level, location) {
        if (initialized && ready) {
            var prefix = "INFO: ";
            switch (level) {
                case "empty":
                    prefix = "";
                    break;
                case "error":
                    prefix = "ERROR: ";
                    break;
                case "warning":
                    prefix = "WARNING: ";
                    break;
            }
            win.document.getElementById(location).value =
                win.document.getElementById(location).value + prefix + m + "\n";
            win.focus();
        } else if (initialized && !ready) {
            // Now it is time to create the timer...
            var timer = Components.classes["@mozilla.org/timer;1"]
                .createInstance(Components.interfaces.nsITimer);
            // ... and to initialize it; we want it to call event.notify()
            // one time after a short delay (the value here is 10 ms).
            timer.initWithCallback(
                { notify: function() { log(m); } },
                10,
                Components.interfaces.nsITimer.TYPE_ONE_SHOT
            );
        } else {
            init();
            log(m);
        }
    };

    var log = function(m, level) {
        _log(m, level, 'debug');
    };

    var poly = function(m, level) {
        _log(m, "empty", 'polyml');
    };

    var close = function() {
        win.close();
    };

    var setReady = function() {
        ready = true;
    };

    var init = function() {
        initialized = true;
        var ww = Components.classes["@mozilla.org/embedcomp/window-watcher;1"]
            .getService(Components.interfaces.nsIWindowWatcher);
        win = ww.openWindow(null, "chrome://polymlext/content/console.xul",
                            "console", "chrome,centerscreen,resizable=no", null);
        win.onload = setReady;
        return win;
    };

    return {
        init: init,
        log : log,
        poly : poly
    };
}());
// turning Console Class into an XPCOM component
Components.utils.import("resource://gre/modules/XPCOMUtils.jsm");
function Console() {
this.wrappedJSObject = this;
}
var prototype2 = {
    classDescription: "A special Console for PolyML extension",
    classID: Components.ID("{483aecbc-42e7-456e-b5b3-2197ea7e1fb4}"),
    contractID: "@ed.ac.uk/poly/console;1",
    QueryInterface: XPCOMUtils.generateQI()
};

//add the required XPCOM glue into the Console class
for (var attr in prototype2) {
    Console.prototype[attr] = prototype2[attr];
}

var components = [Console];
function NSGetModule(compMgr, fileSpec) {
    return XPCOMUtils.generateModule(components);
}
I'm using this component like this:
console = Cc["@ed.ac.uk/poly/console;1"].getService().wrappedJSObject;
console.log("something");
And this breaks the sockets :-S =)
UPDATE #2
Ok, if anyone is interested in checking this thing out I would really appreciate it + I think this is likely some kind of bug (a Seg fault from JavaScript shouldn't happen).
I've made a minimal version of the extension that causes the problem; you can install it from here:
http://dl.dropbox.com/u/645579/segfault.xpi
The important part is chrome/content/main.js:
http://pastebin.com/zV0e73Na
The way my friend and I can reproduce the error is by launching Firefox, after which a new window should appear saying "Opened socket on 9999". Connect using "telnet localhost 9999" and send a few messages. After 2-6 messages you get one of the following printed out in the terminal where Firefox was launched:
1 (most common):
Segmentation fault

2 (saw multiple times):
firefox-bin: Fatal IO error 11 (Resource temporarily unavailable) on X server :0.0.

3 (saw a couple of times):
GLib-ERROR **: /build/buildd/glib2.0-2.24.1/glib/gmem.c:137: failed to allocate 32 bytes
aborting...
Aborted

4 (saw once):
firefox-bin: ../../src/xcb_io.c:249: process_responses: Assertion `(((long) (dpy->last_request_read) - (long) (dpy->request)) <= 0)' failed.
Aborted
If you need any more info, or if you could point me to where to post a bug report, I'll be glad to do that.
I know this is just one of many bugs... but perhaps you have an idea of what I should do differently to avoid this? I would like to use that "console" of mine in this way.
I'll try doing it with buffer/flushing/try/catch as people are suggesting, but I wonder whether try/catch will catch the Seg fault...
This is a threading problem. The onInputStreamReady callback happens to be executed on a different thread, and accessing the UI / DOM is only allowed from the main thread.
Solution is really simple:
change
input.asyncWait(reader,0,0,null);
to
var tm = Cc["@mozilla.org/thread-manager;1"].getService();
input.asyncWait(reader, 0, 0, tm.mainThread);
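For completeness, a minimal sketch of the reader/listener from the question with both asyncWait calls pointed at the main thread (only the last argument changes):
// Sketch: both asyncWait calls pass the main thread so the callback
// runs where UI / DOM access is allowed.
var tm = Cc["@mozilla.org/thread-manager;1"].getService();

var reader = {
    onInputStreamReady : function(input) {
        var sin = Cc["@mozilla.org/scriptableinputstream;1"]
                    .createInstance(Ci.nsIScriptableInputStream);
        sin.init(input);
        var request = '';
        while (sin.available()) {
            request = request + sin.read(512);
        }
        console.log('Received: ' + request);
        input.asyncWait(reader, 0, 0, tm.mainThread);   // re-arm on the main thread
    }
};

var listener = {
    onSocketAccepted: function(serverSocket, clientSocket) {
        var input = clientSocket.openInputStream(0, 0, 0)
                                .QueryInterface(Ci.nsIAsyncInputStream);
        input.asyncWait(reader, 0, 0, tm.mainThread);   // first wait, also on the main thread
    }
};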