Why is Firefox lagging so much? - performance

I have made a simple PHP script and I am running it from my localhost.
The script does not use any sessions, cookies, databases or files. It just sleeps for 100 ms and measures how long it actually slept. The code is not important here, but anyway:
$r = microtime(true);
usleep(100000);
echo 1000*(microtime(true) - $r);
When I run this script in Chrome or Firefox I get results like:
99.447011947632
or
99.483013153076
However, the script always renders within a second in Chrome but can take up to 4 seconds in Firefox!
Here is my benchmark from Tamper Data for Firefox:
(Sorry it's not in English. The columns are from left to right: URL, Total time, Size, HTTP Status)
So there is something wrong with Firefox. What could it be? The problem affects other scripts as well.
Should I send any special HTTP headers for Firefox?

Related

Codeception Acceptance Test: How to test AJAX request?

Well, I was using Codeception to test an AJAX page: I click a button, and some text is shown after an AJAX request is performed.
$I->amOnPage('/clickbutton.html');
$I->click('Get my ID');
$I->see('Your user id is 1', '.divbox');
As you can see, the test expects 'Your user id is {$id}' to be returned (in this case the id is 1) and to update a div box with that text. However, it doesn't work at all; the test says the div box is blank. What did I do wrong? How can I use Codeception to test an AJAX request?
You can also use this:
$I->click('#something');
$I->waitForText('Something that appears a bit later', 20, '#my_element');
20 is the timeout (in seconds); give it some sane value.
It's more flexible than hard-coding delays like $I->wait(X);, because elements usually appear much faster than the fixed wait. For example, if you have many elements to wait for, say 15-20, your test will spend 15-20 seconds waiting while the actual operations finish in maybe 1-2 s total. Across many tests this can increase build times significantly, which is... not good :)
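Applied to the test in the question, that could look something like the sketch below (assuming the WebDriver module, since waitForText() is not available in PhpBrowser; the 20-second timeout is just an example):
$I->amOnPage('/clickbutton.html');
$I->click('Get my ID');
$I->waitForText('Your user id is 1', 20, '.divbox');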
Are you sure your request has finished by the time you check the div? Try adding a little wait after sending the request:
$I->amOnPage('/clickbutton.html');
$I->click('Get my ID');
$I->wait(1); //add this
$I->see('Your user id is 1', '.divbox');

Bash script that checks website every 10 seconds

The following script checks a site's content every 10 seconds to see whether anything on it has changed. It's for a very time-sensitive application: if something on the site changes, I have only seconds to do something else. It then starts a new download-and-compare cycle and waits for the next change. The "do something else" part has yet to be scripted and is not relevant to the question.
The question: will it be a problem for a public website to have a script downloading a single page every 10-15 seconds? If so, is there any other way to monitor a site unattended?
#!/bin/bash
Domain="example.com"
Ocontent=$(curl -L "$Domain")
Ncontent="$Ocontent"
until [ "$Ocontent" != "$Ncontent" ]; do
    Ocontent=$(curl -L "$Domain")
    # CONTENT CHANGED TRUE
    #if [ "$Ocontent" == "$Ncontent" ]; then
    #    Ocontent=$(curl -L "$Domain")
    #fi
    echo "$Ocontent"
    sleep 10
done
The problems you're going to run into:
If the site notices and has a problem with it, you may end up on a banned IP list. Using an IP pool or another distributed resource can mitigate this.
Hitting a website at precisely every x seconds is unlikely; network latency will cause a great deal of variance.
If you get a network partition, your code should know how to cope. (What if your connection goes down? What should happen? See the sketch after this list.)
Note that getting the immediate response is only part of downloading a webpage. There may be changes to referenced files, such as CSS, JavaScript or images, that are not apparent from the original HTTP response alone.
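A minimal sketch of one way to handle the last two points: check curl's exit status and keep retrying when the fetch fails, rather than comparing against empty output. This is an assumption about the desired behaviour, not part of the original script.
#!/bin/bash
domain="example.com"
baseline=$(curl -fsSL "$domain") || exit 1   # -f: fail on HTTP errors, -sS: quiet but show errors

while sleep 10; do
    current=$(curl -fsSL "$domain")
    if [ $? -ne 0 ]; then
        echo "fetch failed, retrying" >&2    # connection down: skip the comparison, try again
        continue
    fi
    if [ "$current" != "$baseline" ]; then
        echo "content changed"
        break
    fi
done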

AJAX query weird delay between DNS lookup and initial connection on Chrome but not FF, what is it?

I have an AJAX query on my client that passes two parameters to a server:
var url = window.location.origin + "/instanceStats";
$.getJSON(url, { 'unit' : unit, 'stat' : stat }, function(data) {
    instanceData[key] = data;
    var count = showInstanceStats(targetElement, unit, stat, limiter);
});
The server itself is a very simple Python Flask application. On that particular URL, it grabs the "unit" and "stat" parameters from the query to determine the name of a CSV file and line within that file, grabs the line, and sends the data back to the client formatted as JSON (roughly 1KB).
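The server code isn't shown in the question; here is a minimal sketch of the kind of handler described, assuming the /instanceStats route from the snippet above and a hypothetical one-CSV-file-per-unit layout:
from flask import Flask, request, jsonify
import csv

app = Flask(__name__)

@app.route("/instanceStats")
def instance_stats():
    unit = request.args.get("unit", "")   # selects the CSV file (assumption)
    stat = request.args.get("stat", "")   # selects the line within it (assumption)
    with open(unit + ".csv", newline="") as f:
        for row in csv.reader(f):
            if row and row[0] == stat:
                # roughly 1 KB of JSON back to the client
                return jsonify({"unit": unit, "stat": stat, "values": row})
    return jsonify({"error": "not found"}), 404

if __name__ == "__main__":
    app.run()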
Here is the funny thing: When I measure the time it takes for the data to come back, I observe that some queries are fast (between 20 and 40 ms), and some queries are slow (between 320 and 350 ms). Varying the "stat" parameter (i.e. selecting a different line in the CSV) doesn't seem to have any impact. The fast and slow queries usually switch back and forth (i.e. all even queries are fast, all odd ones are slow). The Python server itself reports roughly the same time for each query.
AJAX itself doesn't seem to have any impact either, as I can take the url that is constructed in the JS and paste it into the browser myself and get the same behavior. Here are some measurements from two subsequent queries:
Fast: http://i.imgur.com/VQ7qopd.png
Slow: http://i.imgur.com/YuG0ROM.png
This seems to be Chrome-specific: I've tried it on Firefox and the same experiment yields roughly the same query time every time (between 30 and 50 ms). This is unfortunate, as I want to deploy on both Chrome and Firefox.
What's causing this behavior, and how can I fix it?
I've run into this also. It only seems to happen when using localhost. If you use 127.0.0.1 (or even the computer name), it will not have the extra delay.
I'm having it too, and it's exactly the same: my Node.js application serves AJAX requests, and no matter which /url I request, it's either 30 ms or 300 ms, switching back and forth: odd requests are slow, even requests are fast.
The thing I see in Chrome Web Inspector (aka Chrome DevTools) is that there is a long gap between "DNS lookup" and "Initial Connection".
They say it's OCSP related here:
http://www.webpagetest.org/forums/showthread.php?tid=12357
OCSP is some kind of certificate validation protocol:
https://en.wikipedia.org/wiki/Online_Certificate_Status_Protocol
Moving from localhost to 127.0.0.1 seems to fix it: response times are 30ms now.
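Nothing has to change in the request code itself for this workaround; only the address the page is loaded from changes, and window.location.origin follows along (a sketch, with Flask's default port assumed):
// page opened as http://127.0.0.1:5000/ instead of http://localhost:5000/,
// so the browser never has to resolve "localhost" for these requests
var url = window.location.origin + "/instanceStats";   // -> "http://127.0.0.1:5000/instanceStats"
$.getJSON(url, { 'unit' : unit, 'stat' : stat }, function(data) { /* ... */ });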

Node.JS Response Time

I threw Node.js on an AWS instance and tested the request times, and got some interesting results.
I used the following for the server:
var http = require('http');
http.createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.write('Hello World');
    res.end();
}).listen(8080);
I have an average 90ms delay to this server, but the total request takes ~350+ms. Obviously a lot of time is wasted on the box. I made sure the DNS was cached prior to the test.
I ran ApacheBench on the server with a concurrency of 1000 - it finished 10,000 requests in 4.3 seconds... which works out to an average of about 0.43 milliseconds per request.
UPDATE: Just for grins, I installed Apache + PHP on the same machine and did a simple "Hello World" echo and got a 92ms response time on average (two over ping).
Is there a setting somewhere that I am missing?
While Chrome Developer Tools is a good way to investigate front-end performance, it gives you only a very rough estimate of actual server timings / CPU load. If you have ~350 ms total request time in the dev tools, subtract from this number DNS lookup + Connecting + Sending + Receiving, then subtract the roundtrip time (90 ms?), and after that you have a first estimate. In your case I expect the actual request time to be sub-millisecond. Try running this code on the server:
var http = require('http');

// difference between two process.hrtime() samples, in nanoseconds
function hrdiff(t1, t2) {
    var s = t2[0] - t1[0];
    var ns = t2[1] - t1[1];
    return s * 1e9 + ns;
}

http.createServer(function(req, res) {
    var t1 = process.hrtime();
    res.writeHead(200, {'Content-Type': 'text/html'});
    res.write('Hello World');
    res.end();
    var t2 = process.hrtime();
    console.log(hrdiff(t1, t2));   // time spent inside the handler, in ns
}).listen(8080);
Based on the ab result, you should estimate the average send+request+receive time to be at most about 0.42 ms (4200 ms / 10000 req). (Did you run it on the server? With what concurrency?)
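For reference, the kind of ApacheBench run being discussed, executed on the box itself so the network path is excluded (host, port and counts are only examples matching the numbers above):
ab -n 10000 -c 1000 http://127.0.0.1:8080/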
I absolutely hate answering my own questions, but I want to pass along what I have discovered with future readers.
tl;dr: There is something wrong with res.write(). Use express.js or res.end()
I just got through conducting a bunch of tests. I set up multiple types of Node servers and mixed in things like PHP and Nginx. Here are my findings.
As stated previously, with the snippet I included above, I was losing around 250 ms/request, but the Apache benchmarks did not replicate that issue. I then did a PHP test and got results ranging from 2 ms to 20 ms over ping... a big difference.
This prompted some more research. I started an Nginx server and proxied Node through it, and somehow that magically changed the response from 250 ms to 15 ms over ping. That put me on par with the PHP script, but it is a really confusing result: usually additional hops slow things down.
Intrigued, I made an Express.js server as well, and something even more interesting happened: the response was only 2 ms over ping on its own. I dug around in the source for quite a while and noticed that it never called res.write(); rather, it went straight to res.end(). I started another server, removed the "Hello World" from res.write() and passed it to res.end() instead, and amazingly the response was 0 ms over ping.
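A sketch of that change applied to the snippet from the question (not the exact code used, but the same idea: let res.end() carry the body so the whole response goes out in one shot):
var http = require('http');
http.createServer(function(req, res) {
    res.writeHead(200, {'Content-Type': 'text/html'});
    // instead of res.write('Hello World') followed by an empty res.end(),
    // hand the body to res.end() so the response is sent at once
    res.end('Hello World');
}).listen(8080);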
I did some searching to see whether this was a well-known issue and came across this SO question, which had the exact same problem: nodejs response speed and nginx
Overall, interesting stuff. Make sure you optimize your responses and send everything at once.
Best of luck to everyone!

FastRWeb performance on Ubuntu with built-in web server

I have installed FastRWeb 1.1-0 on an installation of R 2.15.2 (Trick or Treat) running on an Ubuntu 10.04 box. I hope to use the resulting system to run a web service.
I've configured the system by setting http.port to 8181 in rserve.conf and unsetting the socket destination. I've assigned .http.request to FastRWeb::.http.request. I exchange JSON blobs between the client and the server using HTTP POST (the second blob can exceed 150KB in size, and will not fit in an HTTP GET query string.)
Everything works end to end -- I have a little client-side R script which generates JSON RPC calls across the channel. I see the run function invoked, and see it returned.
I've run into a significant performance problem, however: the return path takes in excess of 12 seconds between the time run() returns (including the call to done()) and the time the R client gets the return value. RCurl doesn't seem to be the culprit; it appears that something is taking twelve seconds to do the return.
Does anybody have any suggestions of where to look? I can easily shift over to using Apache 2.0 and CGI, but, honestly, I'd rather keep everything R-centric.
Answering my own question.
I wrapped .http.request with an Rprof()/Rprof(NULL) pair and looked at the time spent in each routine. It turns out that the system spends ~11 seconds inside URLDecode in the standard implementation of .run. This looks like a scaling problem in URLDecode in the core.
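For reference, a minimal sketch of the profiling wrapper described above (the output file name is arbitrary; summaryRprof() is then used to see where the time went):
# install the wrapper in place of the plain FastRWeb handler
.http.request <- function(...) {
    Rprof("http-request.out")    # start the sampling profiler
    on.exit(Rprof(NULL))         # stop it even if the handler errors
    FastRWeb::.http.request(...)
}

# after a request has been served:
# summaryRprof("http-request.out")$by.total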

Resources