Can Chrome's Network tab be compared with JMeter? - jmeter

I have a web portal to test, and our product owner wants us to measure the website's performance using Chrome's Network tab. Is that a correct way to measure performance? He wants me to measure the Finish time and treat that number as the performance of the web portal. So my question is: is this a correct way of doing performance testing? (Screenshot: Chrome Network tab Finish time)

This tool reports client-side performance; you can rely on it to a certain extent, as it shows how long the page load took from the browser's perspective.
However, the test is not very representative: for example, I'm getting the page loaded in ~3 seconds (on flaky mobile internet) while you have ~13 seconds.
So I would recommend considering a better tool, for example YSlow, which not only measures rendering time but can also detect and report related issues.
Another point: if the application's response time is 3 seconds while 1 user is accessing it, how do you know what the response time will be when 100 users are working with it concurrently? 500 users? 1000 users? That's why you should not limit your performance testing to measuring DOM load and rendering speed only; you also need to simulate at least the anticipated number of users accessing the application simultaneously. This will give you some initial performance metrics and show whether your application's behaviour under load is acceptable.
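To illustrate that last point, here is a minimal sketch (plain Python, standard library only, with a hypothetical https://example.com/portal URL standing in for your web portal) that fires the same request at increasing concurrency levels and prints the spread of response times. A JMeter Thread Group does the same job with far more realism, but the sketch shows why a single-user Finish time says little about behaviour under load.

```python
# Rough sketch: compare response times at different concurrency levels.
# The URL is a placeholder - point it at a test environment you own.
import time
import statistics
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/portal"  # hypothetical endpoint

def timed_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000  # milliseconds

for users in (1, 10, 50):
    with ThreadPoolExecutor(max_workers=users) as pool:
        times = list(pool.map(timed_request, range(users)))
    print(f"{users:>3} concurrent users: "
          f"avg {statistics.mean(times):.0f} ms, "
          f"max {max(times):.0f} ms")
```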

Related

Detect scaling problem from performance test result

I conducted performance testing on an e-commerce website and I have the test results with some metrics. I already found problems with some components, for example checkout and post-login, which show high response times and errors. But I would also like to find the issues that are limiting the application's ability to scale. I only monitored the application server, and I observed that CPU and I/O rates are very stable, yet the application still gives high response times. Is there any other way I can determine from the test results why it is not scaling well? Thanks!
From the JMeter test results alone - unlikely. JMeter just sends requests, waits for the responses and measures the time in between, plus collects some extra metrics like connect time and latency; see the JMeter Glossary for the full list with explanations.
An integrated system is only as fast as its slowest component; possible reasons include:
Network issues (e.g. lack of bandwidth, a faulty router, long DNS resolution times, etc.)
Your application is not properly configured for high loads. Inspect the current setup in terms of thread pools, maximum number of open connections, any limits on resource usage, etc. Look for performance-tuning documentation for the individual middleware components as well.
Repeat your test run with profiler telemetry enabled, or look at the APM tool output for the test time frame if such a tool is in place. This lets you do a deep dive into what's going on under the hood of a particular function call, as the cause might be an inefficient algorithm or a slow database query.
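If you do want to squeeze a bit more out of the JMeter results themselves, a small script along these lines (assuming a CSV-format .jtl results file with the default label, elapsed, Latency and Connect columns saved) can at least show whether the time goes into connecting, waiting for the first byte, or downloading, which sometimes hints at network versus application problems:

```python
# Sketch: break down JMeter CSV results per sampler label.
# Assumes a .jtl/.csv file with a header row and the default
# "label", "elapsed", "Latency" and "Connect" columns saved.
import csv
from collections import defaultdict

totals = defaultdict(lambda: {"count": 0, "elapsed": 0, "latency": 0, "connect": 0})

with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        t = totals[row["label"]]
        t["count"] += 1
        t["elapsed"] += int(row["elapsed"])
        t["latency"] += int(row["Latency"])
        t["connect"] += int(row["Connect"])

for label, t in totals.items():
    n = t["count"]
    print(f"{label}: avg elapsed {t['elapsed']/n:.0f} ms "
          f"(connect {t['connect']/n:.0f} ms, "
          f"time to first byte {t['latency']/n:.0f} ms)")
```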

JMeter Load Testing Time Verification

I use JMeter for load testing.
When I check the load time personally with a stopwatch, I note about 8.5 seconds; when I run the same case with JMeter, it reports a load time of 2 seconds.
There is a huge difference between them. How can I verify the actual time?
E.g. one user takes 9 seconds to load the form, while JMeter reports a load time of 2 seconds.
Client time is a complex item, as you can see from the clip of the Chrome Developer Tools Performance tab above. There's a lot going on at the client, which leads to a difference between the time you see with an HTTP-protocol test tool such as JMeter (and most of the other performance test tools on the planet) and the actual client render.
You can address this delta in a number of ways:
Run a single GUI virtual user. Name your timing records consistently, such as "Login" and "login_GUI"; the delta between the two is your client weight. Make sure to run the GUI virtual user on a dedicated host to avoid resource contention.
Run a test with all GUI browsers. This was state of the art in 1995. Because of the resource cost and the skew it imposes when trying to work out the cost of the server response, the entire industry shifted to protocol-level virtual users. Some are trying to bring back this model as "state of the art." It is not.
Ask the performance question earlier, also known as "shift left." Every developer has these developer tools at their disposal, as does every functional tester. If you find that a client is slow for one user, be curious and use the developer tools to identify why. If you wait until multi-user performance testing to answer questions related to client weight, then you have waited too long and will often not have the time or resources to change the page architecture in meaningful ways to reduce the client page cost. This is where understanding earlier has tremendous advantages for making changes.
I picked the graphic above deliberately to illustrate the precise challenge you have. Notice that the loading of the components takes less than a tenth of a second; those are the requests JMeter would be making. But the page takes almost five seconds to "render." JMeter is not broken, it is working as designed. What needs to change is your understanding of which tools can be used to pull particular stats for analysis.
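If you want to put a number on that client weight without a commercial GUI virtual user, one option is to drive a single real browser alongside the JMeter test and read the Navigation Timing API. A minimal Selenium sketch (assuming the selenium package and a matching Chrome driver are available, and using a placeholder URL) might look like this:

```python
# Sketch: measure full browser-side load time for one user via Navigation Timing.
# Requires the selenium package and a Chrome driver on PATH.
from selenium import webdriver

URL = "https://example.com/login"  # placeholder page

driver = webdriver.Chrome()
try:
    driver.get(URL)  # blocks until the page's load event by default
    timing = driver.execute_script(
        "var t = window.performance.timing;"
        "return {network: t.responseEnd - t.navigationStart,"
        "        full:    t.loadEventEnd - t.navigationStart};"
    )
    print(f"request/response time: {timing['network']} ms")
    print(f"full page load (incl. rendering): {timing['full']} ms")
finally:
    driver.quit()
```

As suggested above, run this on a dedicated host so the browser does not compete with the load generator for resources.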
You can't compare JMeter's load time to the browser's as-is, partly because the browser will download JavaScript files and can call JavaScript functions on page load, while JMeter doesn't execute JavaScript.
JMeter is not a browser, it works at protocol level. As far as web-services and remote services are concerned, JMeter looks like a browser (or rather, multiple browsers); however JMeter does not perform all the actions supported by browsers. In particular, JMeter does not execute the Javascript found in HTML pages. Nor does it render the HTML pages as a browser does (it's possible to view the response as HTML etc., but the timings are not included in any samples, and only one sample in one thread is ever displayed at a time).
Just a side note - you can use a plugin to check the exact load time in Chrome.
A well-behaved JMeter test's timings should be equal or similar to real-user timings; if there is a 4x difference, most probably your JMeter configuration is not correct.
Probably the most important: make sure your HTTP Request samplers are configured to retrieve the so-called "embedded resources" (images, scripts, styles) that are referenced in the web page.
If your application uses AJAX, make sure you execute the AJAX-driven requests as well and add their elapsed time to the main sampler, e.g. using a Transaction Controller.
Make sure you mimic the browser's:
Cookies via HTTP Cookie Manager
Headers via HTTP Header Manager
Cache via HTTP Cache Manager
Assuming all of the above, you should be getting page load times similar to the real user experience. See the How to make JMeter behave more like a real browser article for more detailed information on these tips; a small protocol-level sketch of what the Cookie and Header Managers emulate follows below.
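For intuition about what the Cookie Manager and Header Manager emulate at the protocol level, here is a small sketch in plain Python (standard library only, placeholder URLs): a cookie jar that carries the session cookie across requests, plus a browser-like set of headers. JMeter's managers do the equivalent bookkeeping for every virtual user.

```python
# Sketch: what HTTP Cookie Manager / HTTP Header Manager emulate at protocol level.
# Placeholder site - substitute the application under test.
import http.cookiejar
import urllib.request

BASE = "https://example.com"  # placeholder site

# Cookie jar ~ HTTP Cookie Manager: stores Set-Cookie and sends it back.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(jar))

# Browser-like headers ~ HTTP Header Manager.
opener.addheaders = [
    ("User-Agent", "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36"),
    ("Accept", "text/html,application/xhtml+xml"),
    ("Accept-Encoding", "gzip, deflate"),
]

opener.open(BASE + "/login").read()      # the server may set a session cookie here
opener.open(BASE + "/dashboard").read()  # the cookie is sent back automatically
print([c.name for c in jar])             # cookies collected during the "session"
```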
In addition to the answers provided by James and user7294900, please see these images to help you understand the reason behind the difference between the time given by your stopwatch and by JMeter.
The image below illustrates how JMeter measures the time.
The next image illustrates how you measured the time with your stopwatch.
Notice that the browser performs additional actions when you take the time with your stopwatch. This is the reason behind the huge difference between JMeter's time and your stopwatch's.
In addition to this, ensure that you are using the same test environment conditions for both tests (same network conditions, same load generator, etc.).
Hope this helps!

Jmeter reporting higher load time

I am performing a baseline performance test on a project. The average load time reported by JMeter is much higher than the actual load time in the browser (fresh, with no cache or cookies).
What could the issue be?
I suggest checking the following:
Load generator overload. Re-run the JMeter test with one user/thread and compare with Firefox. If those results are comparable, then the response times in the full JMeter test may be excessive because the load generator is overloaded; try to address it by adding more load generators.
Inaccurate browser emulation. If the response time is higher even with one user, it can be caused by inaccurate emulation of the browser's parallel connections. To troubleshoot it, compare waterfall diagrams. To get one from Firefox, use Firebug; for JMeter, route its traffic through Fiddler, which displays the waterfall on the Timeline tab. If the waterfalls are different, you may have the following issue: a web browser downloads resources in parallel, while by default JMeter replays recorded traffic sequentially. To fix it, add these settings: Simulating browsers using JMeter.
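To see how much the sequential-versus-parallel difference alone can distort page timings, here is a rough sketch (placeholder resource URLs, standard-library Python) that downloads the same set of "embedded resources" one by one and then over six parallel connections, roughly what a browser does per host:

```python
# Sketch: sequential vs parallel download of the same embedded resources.
# Placeholder URLs - substitute the static assets of the page under test.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

RESOURCES = [f"https://example.com/static/asset{i}.js" for i in range(12)]

def fetch(url):
    with urllib.request.urlopen(url, timeout=30) as resp:
        resp.read()

start = time.perf_counter()
for url in RESOURCES:                    # what a naive sequential replay does
    fetch(url)
sequential = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=6) as pool:  # ~browser parallelism per host
    list(pool.map(fetch, RESOURCES))
parallel = time.perf_counter() - start

print(f"sequential: {sequential:.2f} s, parallel (6 connections): {parallel:.2f} s")
```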
Are you checking with the browser during the load test, or at another time?
In the latter case, you would be comparing apples and oranges.
Are you using JMeter GUI mode? If yes, that's bad practice: GUI mode is for scripting, non-GUI mode is for load testing:
http://www.ubik-ingenierie.com/blog/jmeter_performance_tuning_tips/
How many threads are you using? And which version of JMeter?
For embedded resources download, 3.0 is the most realistic and best-performing version:
https://jmeter.apache.org/changes.html
Whenever you run performance tests, consider the 90th percentile rather than just the average response time. The average can be skewed even if only one request takes a long time to respond, so please check the 90th percentile (a small sketch after this list of suggestions illustrates the skew).
If you are running the test with multiple users, try to hit the application from a browser while the load test is running and check the response time in the browser. This will tell you whether your observation is correct.
The load generator may not be able to establish enough connections, which can show up as higher response times. Check load generator utilization if in doubt; in some cases the load generator itself can't generate enough load.
Check your server utilization while you run the performance test. This will give you an idea of whether the application can't handle the load or whether it's an issue with the load generator.
If you are running the tests in GUI mode, please try running them in non-GUI mode. (Can you specify how many users you are running these tests with?)
Increase JMeter's memory if you see issues with the load generator, and keep an eye on the load generator's CPU usage too.
Check whether the load generator and the browser from which you hit the application are on the same network, and measure network latency to rule out a network problem.
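On the percentile point above, a tiny sketch with made-up numbers shows how a single slow response drags the average up while barely moving the 90th percentile:

```python
# Sketch: one outlier skews the average but barely moves the 90th percentile.
# Response times in milliseconds - made-up numbers for illustration.
times = [210, 190, 205, 220, 198, 202, 215, 208, 195, 9000]  # one 9 s outlier

times.sort()
average = sum(times) / len(times)
p90 = times[int(0.9 * len(times)) - 1]  # simple nearest-rank 90th percentile

print(f"average: {average:.0f} ms")   # ~1084 ms, dominated by the outlier
print(f"90th percentile: {p90} ms")   # 220 ms, close to typical behaviour
```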

Page Loading Time in Jmeter

I would like to measure the loading time of a document in the web app I'm testing. I have used JMeter for this, but I am getting different values for each run. I am measuring the average time in the Summary Report.
I am not sure whether the value is correct. Is this approach correct, or is there a JMeter plugin available for this?
I have used HttpWatch to get rendering time, but I can't use that tool for more than one user (load testing). I am using JMeter 2.13. Could you please help me with this?
With the help of the Aggregate Report or the CSV/XML results you get the required information about response times, BUT:
In JMeter, Response time = Processing time + Latency (time taken by the network while transferring data)
In the browser, Response time = Processing time + Latency + Rendering time
Hence you will find a difference between HttpWatch response times and JMeter response times.
If you need to include rendering times in your response times as well, then use tools like LoadRunner (commercial), Selenium (open source) and so on. Personally, I don't consider client-side rendering a measurable value unless all users accessing the application have the same hardware, software and network configuration. However, while the JMeter test is running at peak load, you can manually browse the site with various browsers and use the developer tools to find rendering times.
"I am getting different values for each run" - this depends on the test data you are using, the server's health status, network delays and so on.
I doubt you'll be able to get two fully identical test run results; there will always be some fluctuation caused by the underlying hardware and software. You should be getting similar results with some statistical noise.
If this is not the case, your JMeter test might be misconfigured. From a "realness" perspective, mind the following configuration:
Make sure you have the "Retrieve All Embedded Resources from HTML Files" box checked and "Use concurrent pool" enabled. The easiest way to configure this for all samplers is via HTTP Request Defaults.
Add an HTTP Cache Manager to your test plan. The previous setting "tells" JMeter to fetch embedded resources like scripts, styles, etc. from the pages; real browsers do this as well, but only once - on subsequent requests these resources are returned from the browser's cache (a small sketch of this conditional-request behaviour follows after this list).
Add an HTTP Cookie Manager to your test plan. It represents browser cookies, enables cookie-based authentication and maintains sessions.
Add an HTTP Header Manager to represent browser headers like User-Agent, Content-Type, encoding, etc.
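For intuition about what the HTTP Cache Manager adds, the sketch below (standard library only, placeholder URL) makes one unconditional request, remembers the ETag/Last-Modified headers, and then re-requests conditionally - roughly how a browser, and a JMeter test with a Cache Manager, avoids re-downloading unchanged resources:

```python
# Sketch: conditional re-request, the behaviour an HTTP Cache Manager emulates.
# Placeholder URL pointing at a static resource that returns ETag/Last-Modified.
import urllib.request
from urllib.error import HTTPError

URL = "https://example.com/static/app.css"  # placeholder asset

first = urllib.request.urlopen(URL)
etag = first.headers.get("ETag")
last_modified = first.headers.get("Last-Modified")
print(f"first request: {len(first.read())} bytes downloaded")

req = urllib.request.Request(URL)
if etag:
    req.add_header("If-None-Match", etag)
if last_modified:
    req.add_header("If-Modified-Since", last_modified)

try:
    second = urllib.request.urlopen(req)
    print(f"second request: {second.status} - resource changed, re-downloaded")
except HTTPError as err:
    if err.code == 304:
        print("second request: 304 Not Modified - nothing re-downloaded")
    else:
        raise
```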
When you use a straight HTTP protocol-layer virtual user, independent of the tool (JMeter, LoadRunner, SOASTA, Grinder, ...), what you are timing is the request/response traffic coming from the server, with very little coloration from the local processing on the client - the JavaScript execution and the final "drawing on the screen", which is rendering.
Up to the point where the server degrades due to the number of requests or network limitations, the only area you can tune is the page architecture - and if you wait until the last 100 yards before deployment to address it, you are likely in trouble.
Steve Souders has written quite a bit on the subject of page architecture in his book "High Performance Web Sites" and related works. In short, the rules of thumb come down to making fewer requests, sending smaller responses, and serving the data from the location closest to the client. These have the effect of minimizing the most expensive finite resource for a web client: the network. For instance, a CSS sprite reduces the number of requests for images, minification and compression reduce the size of the transfer, and a CDN moves the requested item to a location closer to the end client, reducing the number of hops.
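As a toy illustration of the "smaller responses" rule, the sketch below gzips a chunk of repetitive HTML-like text; real minification plus compression on production text assets usually gives a substantial reduction as well, which is the network cost those rules target:

```python
# Toy sketch: gzip compression shrinking a repetitive text payload.
import gzip

# Stand-in for an uncompressed HTML/CSS/JS response body.
payload = ("<div class='product-card'><span class='price'>9.99</span></div>\n" * 500).encode()

compressed = gzip.compress(payload)
print(f"uncompressed: {len(payload)} bytes")
print(f"gzip:         {len(compressed)} bytes "
      f"({100 * len(compressed) / len(payload):.1f}% of original)")
```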
To affect changes to page architecture you need to move upstream into your development and functional testing cycles. You will need to work with development to implement hard gates, so that code/pages cannot be submitted to the project without first passing performance gates related to design, and your development team and functional testers will need to respect those gates. As to what the gates should be, I refer you back to the works of Mr Souders as a great source of data for constructing your gate rules.
This gets you to the level of "works for one: performant for one." You can then use that as a known good to answer the questions about server scalability and the point at which service to the client begins to degrade. If you have a CDN in your organization, be sure to take it into account in your test model, otherwise you will overload your origin server compared to production.
As for actually speeding up the "rendering", or drawing on the screen? Barring changes from the browser manufacturer, you need to buy a faster video card. Speeding up JavaScript? Make sure all of your JavaScript is as small and lean as possible. Have your functional test team test on very dirty browsers with lots of add-ins, as well as on lower-powered hardware, for a view of the worst out-of-spec response. If you need a view of what your clients' standard hardware model looks like (browser/OS/some hardware info), you can process your HTTP request logs, specifically the user-agent data, for client configuration information.

Performance of NewRelic Real User Monitoring

We've been using New Relic Real User Monitoring to track performance and activity.
We've noticed that the browser metrics show the majority of the time as just network time.
Even extremely small and simple server pages show average times of 3-5 seconds, even though they are just a few KB in size and their web application and rendering times are mere milliseconds.
The site is hosted in the UK and when I run Chrome's Network Developer Tools I can see the page loading in around 50ms and then the hit to beacon-1.newrelic.com (in the USA) taking a further 500ms.
The majority of our clients do not have the luxury of high bandwidth or modern browsers, and I believe that New Relic itself is giving them a particularly poor user experience.
Are there any ways of making the New Relic calls perform better? Can I make the New Relic call go to a local (UK- or Europe-based) beacon?
I don't want to turn off New Relic, but at the moment it is causing more performance issues than it is alerting us to.
New Relic real user monitoring (RUM) does not affect the page load time for your users. The 500 ms that you are seeing refers to the amount of time it takes for the RUM data we collected from your app to reach our servers here in the U.S. The data is transferred after the pages are loaded, so it doesn't affect the page load at all for your users. This 500 ms of data travel time, therefore, is not part of any of our measurements of the networking, page rendering or DOM processing time.
New Relic calculates network time by first finding the total amount of time your application takes from request to page load, and then subtracting any application server time from that total. It is assumed that the resulting amount of time is "network" time. As such, it doesn't include the amount of time it takes to send that data to New Relic's servers. See this page for more info on how RUM works:
https://newrelic.com/docs/features/how-does-real-user-monitoring-work
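For what it's worth, the subtraction described above is just arithmetic; with made-up numbers it looks like this:

```python
# Made-up numbers (ms) illustrating the subtraction described above.
total_time_to_page_load = 4200   # request start to page load, as seen by RUM
app_server_time = 180            # time reported by the server-side agent

network_time = total_time_to_page_load - app_server_time
print(f"network time: {network_time} ms")  # 4020 ms attributed to "network"
```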
If you're worried that there might be a bug or that your numbers don't look accurate, you can always file a support ticket with New Relic so we can look at your account in more detail.
