Wicket AbstractAjaxTimerBehavior and performance - ajax

I'm using AbstractAjaxTimerBehavior in my Wicket application, and its performance seems to degrade over time as more AJAX calls occur. When the page is refreshed without AJAX, performance is fine again. I would like to know whether this is normal or whether a memory leak of some kind might be present. I can't simply attach the code as it's spread over several classes and it would take too much effort to understand, but in short I want to do this:
create and start the timer
repeat some code 10x
stop the timer
set some values to attributes
ajax refresh (causes show/hide of some components)
and do the same again (hypothetically infinite times).
Every repetition of this flow is slower, even though I use a constant update interval of 100 ms.
As the timer is a behavior and cannot be restarted or reused, a new instance is created each time and attached to the Form component.
The timer looks like this:
static int count = 0;

new AbstractAjaxTimerBehavior(Duration.milliseconds(100)) {
    @Override
    protected void onTimer(AjaxRequestTarget target) {
        // do some code
        count++;
        if (count == 10) {
            stop();
            // do some code
        }
    }
};
This behavior is attached to a Form inside a Panel, and upon a click on an AjaxLink the Form is refreshed (added to the AjaxRequestTarget). Every time, I remove the old timer from the Form component before adding the new behavior, roughly as in the sketch below.
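A minimal sketch of that swap, assuming references to the form and the previous timer are kept around (form, oldTimer and createTimer are hypothetical names):

// Hypothetical sketch: replace the old timer behavior with a fresh instance.
if (oldTimer != null) {
    form.remove(oldTimer); // Component#remove(Behavior...) detaches the old behavior
}
AbstractAjaxTimerBehavior newTimer = createTimer(); // hypothetical factory returning the anonymous subclass above
form.add(newTimer);
target.add(form); // repaint the form so the new timer's JavaScript is rendered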
Everything works fine, but every repetition of this procedure runs slower. (The first one is perfect, the second is also around 100 ms, but then it gets slower; after 10 or 15 repetitions the refresh interval is about 1 second, and all other AJAX calls in the app also become significantly slower.) So I suspect there is a memory leak... any obvious reasons? Or any way to make a Wicket timer better suit my purpose? Any advice appreciated. Thanks.

Our Wicket applications also tend to get slower with every AJAX request. I'm not sure if this is the exact same problem or if it relates to the AjaxTimerBehavior in particular, but:
We found that one reason for this is pseudo leaks in the browser that occur due to HTML replacement. Apparently the browser cannot free the memory until the page is reloaded.
You can monitor browser memory with the task manager (or another tool) and watch memory rise with every AJAX request and drop again on a full page reload (F5). Especially in IE.
We do replace a lot of HTML with our AJAX requests, though.

Related

Web forms: Go back in history without refreshing page

Is it possible to go back in a page without reloading it?
I am developing a Web Forms website, and every time I go back in history, the page reloads (and takes a long time).
Honestly, no.
The life cycle of a Web Form is very specific, and the page goes through it every time it is run (that is, every time you request it through your browser).
On the other hand, you can always optimize your page to make it load faster. How you do it depends on many things, one of which is what code runs on the server side upon loading and whether any portions of that code can be either optimized for speed or moved into event handlers to be executed at a later point in time. For example, if you're fetching data from a database when your page loads, consider applying paging to narrow the number of selected rows.
Please, feel free to ask a new question if you decide to take that course of action.

Mac application window stops updating

I am writing a Mac application (target 10.9+, using Xcode 6 Beta 3 on Mavericks) in Swift where I have a number of NSTextFields (labels) updating several times per second for extended periods of time by modifying their .stringValue from a background thread. This seems to work well for a varying duration of time (anywhere between five minutes and 2 hours), but then the application window seems to stop updating. The text stops updating, hovering over the 'stoplight' controls in the upper-left does not show the symbols, and clicking in text boxes, etc., does not highlight the box/show the I-beam. However, my indeterminate progress wheel DOES continue to spin, and when I resize/minimize/zoom the window, or scroll in an NSScrollView box, the window updates during the movement.
My first guess was that some sort of window buffer was being used instead of a live image, so I tried to force an update using window.update(), window.flushWindowIfNeeded(), and window.flushWindow(), all to no avail. Can someone please tell me what's going on, why my window stops updating, and how to fix this problem?
Your problem is right here:
I have a number of NSTextFields (labels) updating several times per
second for extended periods of time by modifying their .stringValue
from a background thread.
In OS X (and iOS), UI updates must occur on the main thread/queue. Doing otherwise is undefined behavior: sometimes it'll work, sometimes it won't, and sometimes it'll just crash.
A quick fix for your issue would be to simply use Grand Central Dispatch (GCD) to dispatch those updates to the main queue with dispatch_async, like:
dispatch_async(dispatch_get_main_queue()) {
    textField.stringValue = "..."
}
The simplified version of what that does: it puts the block/closure (the code between {}) on a queue that the default run loop (which runs on the main thread/queue) checks on each pass through its loop. When the run loop sees a new block on the queue, it pops it off and executes it. Also, since this uses dispatch_async (as opposed to dispatch_sync), the code that did the dispatch won't block; dispatch_async queues up the block and returns right away.
Note: If you haven't read about GCD, I highly recommend taking a look at this link and the reference link above (this is also a good one on general concurrency in OSX/iOS that touches on GCD).
Using a timer to relieve strain on your UI
Edit: Several times a second really isn't that much, so this section is probably overkill. However, if you get over 30-60 times a second, then it will become relevant.
You don't want to run into a situation where you're queueing up a backlog of UI updates faster than they can be processed. In that case it would make more sense to update your NSTextField with a timer.
The basic idea would be to store the value that you want displayed in your NSTextField in some intermediary variable somewhere. Then start a timer that fires a function on the main thread/queue every tenth of a second or so. In that function, update your NSTextField with the value stored in that intermediary variable. Since the timer will already be running on the main thread/queue, you'll already be in the right place to do your UI update.
I'd use NSTimer to set up the timer. It would look something like this:
var timer: NSTimer?

func startUIUpdateTimer() {
    // NOTE: For our purposes, the timer must run on the main queue, so use GCD to make sure.
    // This can still be called from the main queue without a problem since we're using dispatch_async.
    dispatch_async(dispatch_get_main_queue()) {
        // Start a timer that calls self.updateUI() once every tenth of a second
        self.timer = NSTimer.scheduledTimerWithTimeInterval(0.1, target: self, selector: "updateUI", userInfo: nil, repeats: true)
    }
}

func updateUI() {
    // Update the NSTextField(s)
    textField.stringValue = variableYouStoredTheValueIn
}
Note: as #adv12 pointed out, you should think about data synchronization when you're accessing the same data from multiple threads.
Note: you can also use GCD for timers using dispatch sources, but NSTimer is easier to work with (see here if interested).
Using a timer like that should keep your UI very responsive; no need to worry about "leaving the main thread as empty as possible". If, for some reason, you start losing some responsiveness, simply change the timer so that it doesn't update as often.
Update: Data Synchronization
As #adv12 pointed out, you should synchronize your data access if you're updating data on a background thread and then using it to update the UI in the main thread. You can actually use GCD to do this rather easily by creating a serial queue and making sure you only read/write your data in blocks dispatched to that queue. Since serial queues only execute one block at a time, in the order the blocks are received, it guarantees that only one block of code will be accessing your data at the same time.
Set up your serial queue:
let dataAccessQueue = dispatch_queue_create("dataAccessQueue", DISPATCH_QUEUE_SERIAL)
Surround your reads and writes with:
dispatch_sync(dataAccessQueue) {
    // do reads and/or writes here
}
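Putting the pieces together, a minimal sketch under the same assumptions as above (variableYouStoredTheValueIn is the intermediary variable from earlier; newlyComputedValue is a hypothetical result of the background work):

// On the background thread: write through the serial queue.
dispatch_sync(dataAccessQueue) {
    variableYouStoredTheValueIn = newlyComputedValue
}

// In updateUI(), already on the main thread: read through the same queue.
var value = ""
dispatch_sync(dataAccessQueue) {
    value = variableYouStoredTheValueIn
}
textField.stringValue = value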

Extremely slow performance of element.addEventListener("touchstart")

On Chrome something is seriously wrong with the performance of element.addEventListener("touchstart") in my system, in some cases reaching 100ms for a single call.
r00122 listen touchstart: 60.000ms
r00123 listen touchstart: 61.000ms
r00124 listen touchstart: 61.000ms
The above is the console.time output of a pure addEventListener call.
Identical calls for other events take 0ms.
The interesting thing is that with every call or two, the time taken goes up by another millisecond.
There is no difference when I turn on or off "Emulate touch events".
However, a simple test case on Chrome runs at 0.01 ms/call, so there must be some other dependency. I can't think what it is, other than the fact that I have a large number of elements on the page and am setting up many event listeners (1,000). But still, in my page the call is instantaneous on Mozilla and Safari. What on earth could account for this?
I'm experiencing the same behaviour attaching listeners to over 1,000 elements, and indeed only in Chrome on desktop. I consider this to be a bug in Chrome, and created the following two-step workaround.
Check whether the client supports the touch event; if not, don't register it. The code I use to check for touch support is based on this answer:
var bTouchEnabled = 'ontouchstart' in window ||
    ('onmsgesturechange' in window &&
     'msMaxTouchPoints' in window.navigator &&
     window.navigator.msMaxTouchPoints);
Don't register all elements at once, but buffer the registering: register 50, call setTimeout() with a delay of, say, 20 ms, which registers the next 50, and repeat, as in the sketch below.
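A minimal sketch of that batching (registerInBatches, elements and handler are hypothetical names; tune the batch size and delay to taste):

// Register 'touchstart' listeners in batches, yielding to the browser between batches.
function registerInBatches(elements, handler, batchSize, delayMs) {
    var index = 0;
    function nextBatch() {
        var end = Math.min(index + batchSize, elements.length);
        for (; index < end; index++) {
            elements[index].addEventListener('touchstart', handler);
        }
        if (index < elements.length) {
            setTimeout(nextBatch, delayMs); // give the UI a breather before the next batch
        }
    }
    nextBatch();
}

registerInBatches(document.querySelectorAll('.touchable'), function () {}, 50, 20);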
Combining these two techniques helped me greatly improve the performance of the script and avoid freezing the user agent. It's still a workaround, but checking for the existence of the touch events seems semantically correct.

Do browsers limit AJAX polling rate? What is the limit?

I just read that some browsers would prevent HTTP polling (I guess by limiting the rate of requests)...
From https://github.com/sstrigler/JSJaC:
Note: As security restrictions of most modern browsers prevent HTTP
Polling from being usable anymore this module is disabled by default
now. If you want to compile it in use 'make polling'.
This could explain some misbehavior of some of my JavaScript (sometimes requests are just not sent, or are retried even though they were actually successful). But I couldn't find further information on the details...
Questions
if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
Is there any good resource on this?
Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
Thanks for your help...
Stefan
Yes. As far as I am aware, there is a default pool limit of 10 and a default request timeout of 30 seconds per request; however, the timeout and poll limits can be controlled, and different browsers implement different limitations!
Check out this Google implementation.
and this is an awesome implementation of catching a timeout error!
You can find the Firefox specifics HERE!
Internet Explorer specifics are controlled from inside the Windows registry.
Also have a look at this question.
Basically, the way you deal with this is not by changing the browser limitations, but by abiding by them. So you apply a technique called throttling.
Think of it as creating a FIFO/priority queue of functions. A queue structure that takes XHR requests as members and enforces a delay between them is an XHR poll. For instance, I am using JSONP to get data from a node.js server located on another domain, and I am polling, of course, due to browser limitations. Otherwise, I get zero response back from the server, and that is only because of browser limitations.
I actually do a console log for every request that's supposed to be sent, but not all of them are logged. So the browser limits them.
I'll be even more specific to help you out. I have a page on my website which is supposed to render a view for tens or even hundreds of articles. You go through them using a cool horizontal slider.
The current value of the slider matches the current 'page'. Since I am only displaying 5 articles per page, and I can't exactly load thousands of articles on load without severe performance implications, I load only the articles for the current page. I get them from MongoDB by sending a cross-domain request to a Python script.
The script is supposed to return an array of five objects with all the details I need to build the DOM elements for a 'page'. However, there are a couple of issues.
First, the slider works extremely fast, as it's more or less a value change. Even with drag-and-drop functionality, key-down events, etc., the actual change takes milliseconds. However, the code of the slider looks something like this:
goog.events.listen(slider, goog.events.EventType.CHANGE, function() {
    myProject.Articles.page(slider.getValue());
});
The slider.getValue() method returns an int with the current page number, so basically I have to load from:
currentPage * articlesPerPage to ((currentPage + 1) * articlesPerPage) - 1
But in order to load, I do something like this:
I have a storage engine (think of it as an array):
I check if the content is not already there.
If it is, there is no point in making another request, so I go forward with getting the DOM elements from the array of already-created DOM elements.
If it isn't, then I need to send that request I was mentioning, which would look something like this (without accounting for browser limitations):
JSONP.send({'action': 'getMeSomeArticles', 'start': start, 'length': itemsPerPage}, function (callback) {
    // now I just parse the callback quickly to make sure it is consistent,
    // create DOM elements, populate the client-side storage,
    // and update the view for the user.
});
The problem comes from the speed with which you can move that slider. Since every change supposedly triggers a request (the same would happen for normal XHR requests), you quickly cross the limitations of all browsers, so without throttling there would be no 'callback' for most of the requests. ('callback' is the JS code returned by the JSONP request, which is more of a remote script inclusion than anything else.)
So what I do is push a request to a priority queue rather than poll, as I no longer need to send multiple simultaneous requests. If the queue is empty, the newly added member is executed and everyone is happy. If it's not, then all non-completed requests in progress are cancelled and only the last one is executed, roughly as sketched below.
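A minimal sketch of that "only the latest request survives" behaviour (sendJsonp stands in for whatever JSONP helper is in use and is hypothetical, as is its cancel() method):

var pendingRequest = null;

function queueRequest(params, onResponse) {
    // Cancel the stale in-flight request, if any; only the newest one matters.
    if (pendingRequest && pendingRequest.cancel) {
        pendingRequest.cancel();
    }
    pendingRequest = sendJsonp(params, function (response) {
        pendingRequest = null; // this request completed; the queue is empty again
        onResponse(response);
    });
}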
Now, in my particular case, I do a binary search (O(log n)) to see whether the storage engine has data for the previous requests yet, which tells me whether the previous request has completed or not. If it has, it is removed from the queue and the current one is processed; otherwise the new one fires. And so on and so forth.
Again, for speed considerations and shit browser wannabes such as Internet Explorer, I do the above-described procedure about 3-4 steps ahead. So I pre-load 20 pages ahead until everything is in the client-side storage engine. This way, every limitation is successfully dealt with.
The cooldown time is covered by the minimum time it would take to slide through 20 pages, and the throttling makes sure there is no more than one active request at any given time (with backwards compatibility going as far back as Internet Explorer 5).
The reason I wrote all this is to give you an example, trying to say that you cannot always enforce the delay directly from the FIFO structure, as your calls may need to turn into something a user sees, and you don't exactly want to make a user wait 10-15 seconds for a single page to render.
Also, always minimize the polling and the need to poll (simultaneously fired AJAX events, as not all browsers handle them well). For instance, instead of sending one request to get content and another for that content to be tracked as viewed in your app metrics, do as many tasks at the server level as you possibly can!
Of course, you probably want to track your errors properly, so the Xhr object from your library of choice implements error handling for AJAX, and because you are an awesome developer you want to make use of it.
So say you have a try-catch block in place.
The scenario is this:
An AJAX call has finished and it's supposed to return JSON, but the call somehow failed. You nevertheless try to parse the JSON and do whatever you need to do with it.
So:
function onAjaxSuccess(ajaxResponse) {
    try {
        var yourObj = JSON.parse(ajaxResponse);
    } catch (err) {
        // Now I've actually seen this on a number of occasions: to log that an error occurred,
        // a lot of developers will attempt to send yet another AJAX request to log the
        // failure of the previous one.
        // For these reasons, workers exist.
        myProject.worker.message('preferably a pre-determined error code should go here');
        // Then only the worker should again throttle and poll the AJAX requests that log the
        // specific error.
    }
}
While I have seen various implementations that try to fire as many XHR requests at the same time as they possibly can until they hit browser limitations, and then do quite a good job of stalling the ones that haven't fired while waiting for the browser 'cooldown', what I can advise you is to think about the following:
How important is speed for your app?
Just how scalable and how I/O-intensive will it be?
If the answer to the first one is 'very' and to the latter 'OMFG modern technology', then try to optimize your code and architecture as much as you can so that you never need to send 10 simultaneous XHR requests. Also, for large-scale apps, multi-thread your processes. The JavaScript way to accomplish that is by using workers, as in the sketch below. Or you could call the ECMA board, tell them to make this a default, and then post it here so that the rest of us JS devs can enjoy native multi-threading in JS :) (how dafuq did they not think about this?!?!)
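A minimal sketch of handing error logging off to a worker (the file name logWorker.js and the message format are hypothetical):

// Main thread: spawn the worker once, then hand it error codes to log.
var logWorker = new Worker('logWorker.js');
logWorker.postMessage({ errorCode: 'E_PARSE_FAILED' });

// Inside logWorker.js: throttle and send the logging requests off the UI thread.
self.onmessage = function (event) {
    // e.g. queue event.data.errorCode here and POST it to a logging endpoint
};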
Stefan, quick answers below:
-if it's "max. number of requests n per x seconds", what are the usual/default settings for x and n?
This sounds more like a server restriction. The browser ones usually sound like:
-"the maximum requests for the same hostname is x"
-"the maximum connections for ANY hostname is y"
-Is there any good resource for this?
http://www.browserscope.org/?category=network (also hover over table headers to see what is measured)
http://www.stevesouders.com/blog/2008/03/20/roundup-on-parallel-connections
-Any way to detect if a request has been "delayed" or "rejected" because of a rate limit?
You could look at the HTTP headers for "Connection: close" to detect server restrictions, but I am not aware of any way in JavaScript to read such settings across browsers in a consistent, browser-independent way. (For Firefox, you could read this: http://support.mozilla.org/en-US/questions/746848)
Hope this quick answer helps!
No, browsers do not in any way limit polling. I think what was meant on that page is the same-origin policy: you can only access the same host and port as your original page.
The only known limitation on connections themselves is that you can usually have only two to four simultaneous connections to the same host.
I've written some apps with long poll: some with a C++ backend with my own webserver, and one with a PHP backend on Apache2.
My long-poll timeout is 4..10 s. When something occurs, or the 4..10 s pass, my server returns an empty response. Then the client immediately starts another AJAX request. I found that some browsers hang when I start the next AJAX call from the previous AJAX handler, so I use setTimeout() with a small value to start the next AJAX request, roughly as sketched below.
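A minimal sketch of that loop (the /poll endpoint and handleServerEvent are hypothetical):

function longPoll() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/poll', true);
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4) {
            if (xhr.status === 200 && xhr.responseText) {
                handleServerEvent(xhr.responseText); // hypothetical handler for server-pushed data
            }
            // Schedule the next poll outside this handler to avoid the hang described above.
            setTimeout(longPoll, 10);
        }
    };
    xhr.send();
}

longPoll();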
When something happens on the client side that should be sent to the server, I use another AJAX request for it, but it's a one-way thing: the server does not send any response, and the client does not process anything. The result of the operation (if any) will be received on the long poll. This requires at most 2 connections to the server, which all browsers support.
Keep in mind that if there are 500 clients, that means 500 server-side webserver threads, which will move together, causing load peaks: when something happens, the server has to report it at the same time to every client, the clients will take about the same time to process it, they will start their next long requests at the same time, and from then on the timeouts will also expire at the same time, and so on. You can play tricks with a random timeout, say 4 + rnd(0..4), but it's worthless: if anything happens, the clients will "sync" again, and all the requests have to be served at the same time whenever something reportable happens.
I've tested it through a router, and it works. I assume routers tolerate the 4..10 s lag; it's around the speed of a slow webpage (far, far away), which no router thinks should be cancelled.
My PHP project is a collaborative spreadsheet; it looks amazing when you hit Enter and the change shows up simultaneously in several browsers. Have fun!
There is no limit on the number of AJAX requests. However, they are bound to the same host and port.
A server can limit the number of requests from a machine based on its settings.
For example, a server can be set up so that if there are more than a few requests from the same machine within a specified time, it will reject them.
After a small mistake in JavaScript code, a never-ending loop was created, with each step firing 2 AJAX requests. In Firebug I could see more and more requests until Firefox started to slow down, stopped responding, and finally crashed.
So, yes, there is a "limit" ;)

Can I use a QTimer to periodically refresh a form while still letting user edit some of the fields?

I have a form whose values I want to periodically refresh (mostly labels, but also 2 comboboxes and 1 spinbox). I have done this before with a QThread, but this time I would like to do it with a QTimer. Would that be OK, or could it create problems like freezing the GUI? A couple of fields in the form are both user-editable and periodically refreshed.
UPDATE: I'm removing the QTimer because it is causing problems.
I don't think refreshing from the GUI thread makes any difference compared to refreshing from a QThread: the painting of controls takes place in the GUI (= main) thread anyway. If your values don't require a lot of computation before being set, you can safely do this from the GUI thread.
The only thing to watch out for is not refreshing a particular value while the user is editing it; I guess that would be a real surprise for them :) A sketch of one way to guard against that follows below.
Unless you have some special ui design of course...
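A minimal Qt/C++ sketch of that guard (MyForm, statusLabel, modeComboBox, fetchStatusText and fetchModeIndex are hypothetical names):

#include <QTimer>
#include <QApplication>

// In MyForm's setup: fire refreshValues() once per second from the GUI thread's event loop.
void MyForm::startRefreshTimer()
{
    QTimer *timer = new QTimer(this);
    connect(timer, SIGNAL(timeout()), this, SLOT(refreshValues()));
    timer->start(1000);
}

void MyForm::refreshValues()
{
    statusLabel->setText(fetchStatusText()); // labels are never edited, so always safe
    // Skip the combo box if the user is interacting with it right now.
    if (QApplication::focusWidget() != modeComboBox) {
        modeComboBox->setCurrentIndex(fetchModeIndex());
    }
}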
I've done this in the GUI thread and haven't encountered any problems. What I needed it for was to update a countdown in a popup (it displayed something like "Reconnecting in [time] seconds" and I updated [time] when the timer fired).
You must be careful not to do any CPU-intensive computation in the timer handler, though (i.e. don't compute Mandelbrot values in the GUI thread or something of the sort); that will freeze the GUI thread.
If all you do is refresh the form, you should be okay, but if you call long-running functions that require calls to QCoreApplication::processEvents(), then you shouldn't. I tried using the timer and had problems, which went away as soon as I removed the timer and used a thread from which I emitted a signal to the main thread to refresh the form once the work was done.
