How does Chartbeat measure Server Load Time?

It seems that Chartbeat is reporting my Server Load Time incorrectly.
Does anyone know how Chartbeat measures this data so I can make this metric more accurate?

Chartbeat's server load time is a measure of how long it takes to load the HTML from your server, measured from servers on the US east coast.
http://chartbeat.com/faq/

From Chartbeat: How fast is my site? What information do I get from Load Times?
User Page load
...
It is calculated by timing the two pieces of Chartbeat JavaScript on the site. This
means it includes all the JavaScript and embedded items you have on the page, and tells
you how long it's taking to load in the user's browser.
If the load times look incorrect, check to make sure that you have:
<script type="text/javascript">var _sf_startpt=(new Date()).getTime()</script>
in the head of your page, before your other resources load, and the other snippet they give you just before the closing body tag.
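What the two snippets do amounts to subtracting two timestamps. A minimal sketch (illustrative only, not Chartbeat's actual code; the busy-wait stands in for the page's resources loading):

```javascript
// First snippet, placed at the top of <head> before other resources:
// record when the page started loading.
var _sf_startpt = (new Date()).getTime();

// ...stylesheets, scripts, images and the rest of the page load here...
// (simulated with a short busy-wait so this example is self-contained)
var until = Date.now() + 20;
while (Date.now() < until) {}

// Second snippet, placed just before </body>: measure how long that took.
var loadMs = (new Date()).getTime() - _sf_startpt;
console.log(loadMs);
```

If the start snippet sits below your stylesheets or other scripts, the measured interval starts too late and the reported load time shrinks accordingly.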

Related

What difference does it make if I add think time to my virtual users as opposed to letting them execute requests in a loop as fast as they can?

I have a requirement to test that a Public Website can serve a defined peak number of 400 page loads per second.
From what I read online, when testing web pages performance, virtual users (threads) should be configured to pause and "think" on each page they visit, in order to simulate the behavior of a real live user before sending a new page load request.
I must use remote load generator machines to generate the necessary load, and I have a limit on how many virtual users I can run per load generator. If each virtual user pauses to "think" for x seconds on each page, it will generate far less load than it would running as fast as possible with no think time. That means I would need more users, and therefore more load generator machines, to reach my target page loads per second, which would be more costly in the end.
If my only requirement is to prove that the server can serve 400 page loads per second, what difference does it really make whether I add think times (and therefore use more virtual users) or not?
Why is "think time" generally considered something that should be added when load testing web pages?
A virtual user that is "idle" (doing nothing) has a minimal resource footprint (mainly its thread stack size), so I don't think you will need more machines.
A well-behaved load test must represent real-life usage of the application as accurately as possible. If you're testing a website, each JMeter thread (virtual user) must mimic a real user using a real browser, with all the related features:
handling embedded resources (image, scripts, styles, fonts, sounds, etc.)
using caching properly
getting and sending back cookies
sending appropriate headers
processing AJAX requests like browser does
The most straightforward example of the difference between 400 users without think times and 4,000 users with think times: the 4,000 users will open 4,000 connections and keep them open, while the 400 users will open only 400.
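The arithmetic behind "more think time means more users" follows Little's Law: concurrent users = target throughput × time per iteration. A sketch with assumed response and think times (both numbers are assumptions for illustration):

```javascript
// Little's Law: users needed = target throughput * time each user spends per page.
const targetRps = 400;        // required page loads per second (from the question)
const responseTimeSec = 0.5;  // assumed average response time
const thinkTimeSec = 5.0;     // assumed per-page think time

// Without think time, each user cycles every 0.5 s:
const usersNoThink = targetRps * responseTimeSec;                    // 200
// With think time, each user cycles every 5.5 s, so far more are needed:
const usersWithThink = targetRps * (responseTimeSec + thinkTimeSec); // 2200
console.log(usersNoThink, usersWithThink);
```

Either configuration can hit 400 pages per second; they differ in how many concurrent users (and open connections) the server must sustain while doing it.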

Why is JMeter Result is different to User Experience Result?

We are currently conducting performance tests on two web apps: one runs within a private network and the other is publicly accessible. For both apps, a single page load of the landing (initial) page takes 2-3 seconds from a user's point of view, but when we use BlazeMeter and JMeter, the results are 15-20 seconds. Am I missing something? The 15-20 second figure comes from the Load time/Sample time in JMeter, and from the Elapsed column when exported to .csv. Please help, as I'm stuck.
We have tried running the tests on multiple PCs within the office premises, as well as on a PC accessed remotely at another site, and we still get the same results. The number of threads and the ramp-up period are both set to 1 to imitate a single user.
Where a delta exists, it almost certainly means that two different things are being timed. It would help to understand what your front end is timing: a standard metric such as W3C domComplete, time to interactive, first contentful paint, or some other point, and then compare where this comes into play in the drill-down on the Performance tab of Chrome. Odds are that a lot is occurring that is not visible but is being captured by JMeter.
You might also look for other threads here on how JMeter operates compared to a "real browser." There are differences that could affect your page comparisons, particularly if you have dozens or hundreds of elements that need to be downloaded to complete your page. Also, pay attention to third-party components whose servers you do not have permission to test.
I can think of two possible causes:
Clear your browser history, especially the browser cache. It might be that you're getting HTTP status 304 for all requests in the browser because responses are being returned from the browser cache and no actual requests are made, while JMeter always uses a "clean" session.
Pay attention to Connect Time and Latency metrics as it might be the case the server response time is low but the time for network packets to travel back and forth is very high.
Connect Time. JMeter measures the time it took to establish the connection, including SSL handshake. Note that connect time is not automatically subtracted from latency. In case of connection error, the metric will be equal to the time it took to face the error, for example in case of Timeout, it should be equal to connection timeout.
Latency. JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client.
So basically "Elapsed time = Connect Time + Latency + Server Processing Time"
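Plugging hypothetical numbers into that breakdown shows how a page that feels fast can still produce a large Elapsed value (all three timings below are assumptions for illustration):

```javascript
// Illustrative decomposition following the
// "Elapsed time = Connect Time + Latency + Server Processing Time" formula above.
const connectTimeMs = 1200;      // assumed: slow TCP + SSL handshake
const latencyMs = 800;           // assumed: network round trips to first byte
const serverProcessingMs = 300;  // assumed: actual back-end work

const elapsedMs = connectTimeMs + latencyMs + serverProcessingMs; // 2300
console.log(elapsedMs);
```

Here the server itself accounts for only 300 ms of a 2.3-second sample, which is why comparing Connect Time and Latency against Elapsed time is the first diagnostic step.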
In general given:
the same machine
clean browser session
and JMeter configured to behave like a real browser
you should get similar or equal timings for the same page

LoadRunner TruClient - How to measure page response time properly?

For LoadRunner TruClient, how do I measure page load time properly?
Right now I'm thinking the only way to do it is to have: 1) a "click a link" event, and 2) a "verify object shows up on the next page" step.
But if I put the transaction around 1) + 2), the response time is really long; if I put it around 2) only, it's really short. Neither feels like an accurate measurement of page load time. What's the proper way to measure it, and what should I set as the end event for both steps?
If you are trying to measure client load on the front end, the place to begin is in the developer tools, long before you get to any sort of multi-user performance test; any issue you find in the last 100 yards before deployment will have zero chance of going through the page-architecture changes required to improve client performance.
Example, from Google Chrome, developer tools
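The numbers Chrome's developer tools report are derived from the browser's Navigation Timing entries. A sketch of the same arithmetic with hypothetical values (in a real page you would read these from `performance.getEntriesByType('navigation')[0]` instead of hard-coding them):

```javascript
// Hypothetical Navigation Timing values, in ms since navigation start.
const timing = {
  requestStart: 10,
  responseStart: 210,  // first byte received
  responseEnd: 300,    // last byte received
  domComplete: 1450,   // DOM fully built
  loadEventEnd: 1600,  // load event finished
};

const ttfbMs = timing.responseStart - timing.requestStart;    // 200: server + network
const downloadMs = timing.responseEnd - timing.responseStart; // 90: HTML transfer
const pageLoadMs = timing.loadEventEnd;                       // 1600: full page load
console.log(ttfbMs, downloadMs, pageLoadMs);
```

Picking one of these standard end points (e.g. domComplete or loadEventEnd) and wrapping the TruClient transaction to match it gives a defensible, repeatable definition of "page load time."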

How can we measure Server Processing Time, Page Loading Time, Page Rendering Time and Page Size from Apache JMeter

How can I measure the following points
Server Processing Time
Page Loading Time
Page Rendering Time
Page Size
from Apache Jmeter?
Is there any suitable listener to measure all these points?
With the Aggregate Report or the CSV/XML results you get nearly all the information you can regarding response times, BUT:
Server Processing Time: you cannot get this one, as JMeter acts on the client side and its timings include network time, so you need to add some profiling data or look at the access logs.
Page Loading Time: yes, if you mean page response time.
Page Rendering Time: no, as JMeter is not a browser; rendering happens on the client side, and what interests you in load testing is the time to get the response.
Page Size: yes.
I suggest you read:
http://jmeter.apache.org/usermanual/index.html
http://jmeter.apache.org/usermanual/test_plan.html
http://jmeter.apache.org/usermanual/component_reference.html#Aggregate_Report
Server processing time = time to first byte - request sent
Page loading time = time to last byte - time to first byte
Page size = Response size
Page rendering time - you'll have to use GUI testing tools for this one.
E.g.
Chrome has Ctrl+Shift+i > Timeline tab
Firefox has Firebug > Net tab.
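The three formulas above can be sketched with hypothetical timestamps from a single sample:

```javascript
// Hypothetical timestamps for one sample, in ms from the start of the request.
const requestSentMs = 0;
const firstByteMs = 180;   // time to first byte
const lastByteMs = 420;    // time to last byte
const responseBytes = 153600;

const serverProcessingMs = firstByteMs - requestSentMs; // 180
const pageLoadingMs = lastByteMs - firstByteMs;         // 240
const pageSizeKiB = responseBytes / 1024;               // 150
console.log(serverProcessingMs, pageLoadingMs, pageSizeKiB);
```

Note the caveat from the other answer still applies: "server processing time" measured this way also includes the one-way network trip, since JMeter sits on the client side.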

Why does the same html page take 25 sec to load on one server and 2 sec to load on another?

I have the exact same html sitting on two different servers. Both pages call things like stylesheets and images from the same servers (not each from their local server). In other words, these pages are identical except they exist on two different servers. It's all static html. The only DNS lookups are for images.
On one server it takes 25 seconds to load, and it appears most of that is waiting on the html page itself
http://tools.pingdom.com/fpt/#!/CmGSycTZd/http://205.158.110.184/contents/mylayout/2
On another server it takes under 2 seconds to load
http://tools.pingdom.com/fpt/#!/rqg73fi7V/http://socialmediaphyte.com/TEST/image-dns-testing-ImageON.html
The only difference I can identify from Pingdom is the "Connection" header: the slow server responds with "close" and the fast server with "Keep-Alive". Is THAT the most likely issue, or is it possibly something else? (And if you know the remedy for your suspected cause, that would be wonderful.)
Thanks!
Not using keep-alive will slow the overall load time somewhat, because you incur the additional overhead of establishing a new connection for each resource rather than re-using one or more connections. That shouldn't account for a 23-second difference, though.
The Firebug Net panel for Firefox can be of great assistance in seeing what is taking so long: it shows how long each resource requested by the page took to load, and how long each phase of the request took.
Other factors could include one server using gzip compression on pages while the other is not, or the slow server could simply be overloaded.
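A back-of-the-envelope check on the keep-alive theory, with assumed numbers for both inputs:

```javascript
// Rough cost of opening a fresh connection per resource when keep-alive is off.
const resourceCount = 40;      // assumed number of resources on the page
const connectOverheadMs = 50;  // assumed TCP handshake cost per new connection

const extraMs = resourceCount * connectOverheadMs; // 2000 ms of pure connection setup
console.log(extraMs);
```

Roughly two extra seconds, not twenty-three, which supports the point above that the missing keep-alive alone doesn't explain the gap.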
