I conducted performance testing on an e-commerce website and I have the test results with some metrics. I already found problems in some components, for example high response times and errors on checkout and post-login. But I would also like to find the issues that are limiting the application's ability to scale. I only did the testing against the application server, and I observed that CPU and I/O rates are very stable there. But the application still gives high response times. Is there any other way I can determine from the test results why it is not scaling well? Thanks!
From the JMeter test results alone - unlikely. JMeter just sends requests, waits for the responses and measures the time in between, plus collects some extra metrics like connect time and latency; see the JMeter Glossary for the full list with explanations.
The integrated system acts at the speed of its slowest component, so possible reasons could be:
Network issues (e.g. lack of bandwidth, a faulty router, long DNS resolution times, etc.)
Your application is not properly configured for high loads. Inspect the current setup of the application in terms of thread pools, maximum number of open connections, any limits on resource usage, etc. Look for documentation on performance tuning of the individual middleware components as well.
Repeat your test run with profiler tool telemetry enabled, or look at the APM tool output for the test time frame if such a tool is in place. It will allow you to perform a deep dive into what's going on under the hood of this or that function call, as the cause might be an inefficient algorithm or a slow database query.
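Since JMeter writes its raw samples to a results file, one cheap extra check is whether response time climbs as the number of active virtual users grows even though CPU and I/O stay flat - a classic sign of a concurrency limit (thread pool, connection pool, lock contention). Below is a minimal sketch, assuming a CSV .jtl file with JMeter's default column names ("elapsed", "allThreads") and an illustrative file name results.jtl:

```python
# Sketch: group samples by the number of active threads and report how the
# average and 90th percentile response times change with load.
# Assumes JMeter's default CSV result columns; "results.jtl" is illustrative.
import csv
from collections import defaultdict

by_threads = defaultdict(list)               # active threads -> elapsed times (ms)
with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        by_threads[int(row["allThreads"])].append(int(row["elapsed"]))

for threads in sorted(by_threads):
    elapsed = sorted(by_threads[threads])
    p90 = elapsed[max(0, int(len(elapsed) * 0.9) - 1)]
    avg = sum(elapsed) / len(elapsed)
    print(f"{threads:>4} active threads: avg={avg:.0f} ms, p90={p90} ms")
```

If the response time curve bends sharply upward at some thread count while the server-side metrics stay flat, that knee is usually where a configuration limit such as the ones above kicks in.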
I am testing a web application's login page load time with 300 threads (users) and a ramp-up period of 300 seconds. Most of my samples return response code 200, but a few of them return response codes 400 and 503.
My goal is just to check the performance of the web application when 300 users start using it.
I am new to JMeter and have basic knowledge of programming.
My questions:
1. Can I ignore these errors and focus just on the timings from the summary report?
2. If I really need to fix these errors, how do I fix them?
There are 2 different problems indicated by these errors:
HTTP Status 400 stands for Bad Request - it means that you're sending malformed requests which cannot be understood by the server. You should inspect the request details and amend the JMeter configuration, as this problem is in your script.
HTTP Status 503 stands for Service Unavailable - it indicates a problem on the server side, i.e. the server is not capable of handling the load you're generating. This is something you can already report as an application issue. You can try to identify the underlying cause by:
looking into your application log files
checking whether your application has enough headroom to operate in terms of CPU, RAM, network, disk, etc. This can be done using an APM tool or the JMeter PerfMon Plugin
re-running your test with profiler tool telemetry enabled to deep dive into what's under the hood of the longest response times
So first of all you should ensure that your test is doing what it is supposed to be doing by running it with 1-2 users/loops and inspecting request/response details. At this stage you should not see any errors.
Going forward you should increase the load gradually and correlate the increasing number of virtual users with the increasing response times and number of errors, as in the sketch below.
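As a quick way to separate the two error classes in an existing run, you could tally the failed samples by response code and sampler label straight from the results file. This is only a sketch, assuming a CSV .jtl file with JMeter's default columns ("responseCode", "label", "success") and an illustrative file name results.jtl:

```python
# Sketch: count failed samples per (response code, sampler label) so that
# HTTP 400s (script problems) and HTTP 503s (server saturation) can be told
# apart at a glance. Assumes default CSV result columns; "results.jtl" is
# an illustrative file name.
import csv
from collections import Counter

failures, total = Counter(), 0
with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        total += 1
        if row["success"].lower() != "true":
            failures[(row["responseCode"], row["label"])] += 1

for (code, label), count in failures.most_common():
    print(f"HTTP {code} on '{label}': {count} samples ({count / total:.1%} of all)")
```

If the 400s cluster on one particular sampler, that points at a correlation/parameterisation issue in the script; 503s spread across samplers and growing with the thread count point at the server.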
Performance testing is different from load testing. What you are doing is load testing.
Performance testing is more about how long an action takes. I typically capture performance on a system not under load for a given action.
This gives a baseline that I can then refer to during load tests.
Hopefully, you've been given some performance figures to test against, e.g. the system must be able to handle 300 requests in two minutes.
When moving on to load, I run a series of load tests with an increasing number of users/threads and capture the results from each test.
Armed with this, I can see how load degrades performance to the point where errors start to show up. This gives you an idea of how much typical load the system can handle.
I'd also look to run soak tests too. This is where I'd run JMeter for a long period with typical (not peak) load to make sure the system can handle sustained load.
In terms of the errors you're seeing, no, I would not ignore them. Assuming your test is calling the same endpoint, it seems safe to say the code is fine; it's the infrastructure struggling with the load you're throwing at it.
I have been running some load tests against APIs using JMeter; the results are below:
I am trying to understand what the cause of the two different patterns of slow behaviour I am seeing could be:
Pattern 1: Time to connect is low, Latency is high
Pattern 2: Time to connect is high, Latency is low
Note: the majority of calls are returning in around 45-50 ms.
My current thoughts are as follows:
Pattern 1: This is "server processing time", so for some reason the back-end server is taking longer than usual to respond. We will need to do a deeper dive to figure out why.
Pattern 2: This pattern shows a long time to establish a TCP connection. Is there a way to determine if this is a problem on the outgoing side i.e. JMeter itself is running out of threads to make API connections, or if the API server is running out of connections and is unable to accept more?
How should I interpret these results? Are there any additional data points I could pull or tools I could use to better understand the findings?
Both Connect Time and Latency are network-related metrics; the formula is:
Elapsed Time = Connect Time + Latency + Server Response time
It looks like your server itself is fine; the problem is either at the network level or connected with JMeter, which might lack the resources to send requests fast enough.
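To see which of the two patterns dominates your slow samples, you could split them using the "Connect" and "Latency" columns of the results file. A minimal sketch, assuming a CSV .jtl with JMeter's default column names, an illustrative file name results.jtl, and an arbitrary 1000 ms threshold for "slow" (in the JTL, Latency normally already includes the connect phase, so it is subtracted out here):

```python
# Sketch: classify slow samples as Pattern 1 (the wait after the connection
# is established dominates) or Pattern 2 (TCP connection setup dominates).
# Assumes default CSV result columns; "results.jtl" and the 1000 ms cut-off
# are illustrative.
import csv

pattern1, pattern2 = 0, 0
with open("results.jtl", newline="") as f:
    for row in csv.DictReader(f):
        if int(row["elapsed"]) < 1000:      # only inspect the slow outliers
            continue
        connect = int(row["Connect"])
        wait_after_connect = int(row["Latency"]) - connect
        if connect > wait_after_connect:
            pattern2 += 1                   # connection setup dominates
        else:
            pattern1 += 1                   # server/network wait dominates

print(f"slow samples: {pattern1} latency-dominated, {pattern2} connect-dominated")
```

If Pattern 2 grows together with the number of active threads, check both sides: the JMeter machine can run out of ephemeral ports or CPU, and the server can exhaust its connection backlog; the monitoring suggested below helps tell them apart.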
With regards to additional information sources:
Generate the HTML Reporting Dashboard and look into the "Over Time" charts; they should allow you to correlate increasing load with increasing connect time / latency.
Consider setting up monitoring of the essential health metrics of the JMeter load generator(s) and the application under test. You can use the JMeter PerfMon Plugin for this.
Make sure to follow the JMeter Best Practices, as JMeter's default configuration is fine for test development and debugging, but you need to fine-tune JMeter for high loads.
I have a web portal to test, and our product owner wants us to measure the performance of the website by using Chrome's Network tab. Is that a correct way to measure performance? My product owner wants me to measure the Finish time in the Chrome Network tab and consider that time as the performance of the web portal. So my question is: is this a correct way of doing performance testing?
This tool reports client-side performance. You can rely on it to a certain extent, as it shows how long the page load took from the browser's perspective.
However, the test is not very representative: for example, I'm getting the page loaded in ~3 seconds (and I'm using flaky mobile internet) while you have ~13 seconds.
So I would recommend considering a better tool, for example YSlow, which not only measures rendering time but can also detect and report related issues.
Another point: given that the application response time is 3 seconds while 1 user is accessing it, how do you know what the response time will be when 100 users are concurrently working with the application? 500 users? 1000 users? That's why you should not limit your performance testing to measuring DOM load and rendering speed only; you need to mimic at least the anticipated number of users simultaneously accessing the application. This will give you some initial performance metrics to see whether your application's behaviour is acceptable or not.
I would like to know which factors I need to consider when using JMeter. Most of the time the internet speed varies, due to which I don't get accurate response times, and there are also operations on the server side (CPU utilization, etc.).
Do I need to consider all these points when calculating the performance of the application?
In regards to "internet speed will be varying": JMeter is smart enough to detect it and report it as the Latency metric. As per the JMeter Glossary:
Latency.
JMeter measures the latency from just before sending the request to just after the first response has been received. Thus the time includes all the processing needed to assemble the request as well as assembling the first part of the response, which in general will be longer than one byte. Protocol analysers (such as Wireshark) measure the time when bytes are actually sent/received over the interface. The JMeter time should be closer to that which is experienced by a browser or other application client
So you should be able to subtract the latency from the overall response time and calculate the time required to process your request on the server side. However, it would be much better if the JMeter load generators lived on the same intranet as the application. If you need to test your application's behaviour when virtual users sit on different network types, that can also be simulated.
In regards to other factors that matter:
Application under test health. You should be monitoring baseline server-side health metrics to identify whether the application server(s) get overloaded during the load test; e.g. if you see high response times, the reason could be as simple as a lack of free RAM or a slow hard drive.
JMeter load generator(s) health. The same approach applies to the JMeter engine(s). If the JMeter hosts are overloaded, they cannot generate and send requests fast enough, which will be reported as reduced throughput.
You can use the JMeter PerfMon Plugin for both. See the How to Monitor Your Server Health & Performance During a JMeter Load Test article for a detailed description of the plugin's installation and usage.
Your tests need to be realistic and represent a virtual user as close to a real one as possible. So make sure you:
Add Timers to your test plan to represent "think time"
If you are testing a web-based application, consider adding and properly configuring the HTTP Cookie Manager, HTTP Header Manager and HTTP Cache Manager. Also don't forget to configure HTTP Request Defaults to "Retrieve all embedded resources" and to use a "Parallel pool" for this.
I would like to measure the loading time of a document in the web app I am testing. I have used JMeter for this, but I am getting different values for each run. I am measuring the average time in the Summary Report.
I am not sure whether that value is right or not. Is this approach correct, or is there a JMeter plugin available for this?
I have used HttpWatch to get the rendering time, but I can't use that tool for more than 1 user (load testing). I am using JMeter 2.13. Could you please help me with this?
With the help of the Aggregate Report or the CSV/XML results you get the required information regarding response times, BUT:
In JMeter: Response time = Processing time + Latency (time taken by the network while transferring data)
In a browser: Response time = Processing time + Latency + Rendering time
Hence you will find a difference between HttpWatch response times and JMeter response times.
If you need to include rendering time in your response times as well, then use tools like LoadRunner (commercial), Selenium (open source) and so on. Personally, in my opinion client-side rendering is not a comparable value unless all of the users accessing the application have the same hardware, software and network configuration. However, while the JMeter test is running at peak load against the system, you can manually browse the site using various browsers and, with the help of the developer tools, find the rendering times.
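If you go the Selenium route, one lightweight option is to read the browser's own Navigation Timing figures while the JMeter load test is running. A minimal sketch, assuming the Python selenium package, a matching chromedriver on the machine, and an illustrative URL:

```python
# Sketch: measure client-side load time (up to the load event) with Selenium
# and the Navigation Timing API while the system is under JMeter load.
# The URL is illustrative; timings depend heavily on the client machine.
from selenium import webdriver

driver = webdriver.Chrome()
try:
    driver.get("https://your-app.example.com/document")   # hypothetical page
    timing = driver.execute_script(
        "var t = window.performance.timing;"
        "return {backend: t.responseEnd - t.navigationStart,"
        "        full: t.loadEventEnd - t.navigationStart};")
    print(f"network + server: {timing['backend']} ms, "
          f"full load incl. rendering: {timing['full']} ms")
finally:
    driver.quit()
```

The difference between the two numbers is roughly the client-side parsing/rendering share, which is the part JMeter cannot see.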
"I am getting different values for each run" - this will depend on the test data you are using, the server health status, network delays and so on.
I doubt you'll be able to get 2 fully identical test run results; there will always be some fluctuation caused by the underlying hardware and software implementations. You should be receiving similar results with some statistical noise.
If this is not the case, your JMeter test might be misconfigured. From a "realness" perspective, mind the following configuration:
Make sure you have the Retrieve All Embedded Resources from HTML Files box checked and Use concurrent pool enabled. The easiest way to configure this for all the samplers is using HTTP Request Defaults.
Add the HTTP Cache Manager to your test plan. The previous setting "tells" JMeter to fetch embedded resources like scripts, styles, etc. from the pages. Real browsers do this as well, but only once; on subsequent requests these resources are returned from the browser's cache.
Add HTTP Cookie Manager to your test plan. It represents browser cookies, enables cookie-based authentication and maintains sessions.
Add HTTP Header Manager to represent browser headers like User-Agent, Content-Type, encoding, etc.
When you use a straight HTTP protocol layer virtual user, independent of the tool (JMeter, LoadRunner, SOASTA, Grinder, ...), what you will be timing is the request/response information coming from the server, with very little coloration from local processing on the client for JavaScript and the final "drawing on the screen", which is rendering.
Up until the point where the server is degraded due to the number of requests or network limitations, the only area where you can tune is the page architecture; if you are waiting until the last 100 yards before deployment to address it, you are likely in trouble.
Steve Souders has written quite a bit on the subject of page architecture in his book "High Performance Web Sites" and related works. In short, the rule of thumb comes down to making fewer requests, sending smaller responses and serving the data from the closest possible location to the client. These have the effect of minimizing the most expensive finite resource to a web client, the network. For instance, a browser sprite reduces the number of calls for images, minification and compression reduce the size of the transmission, and a CDN moves the requested item to a location closer to the end client, reducing the number of hops.
In order to affect changes to page architecture you need to move upstream into your development cycle and your functional testing cycle. You will need to work with development to implement hard gates where code/pages cannot be submitted to the project without first passing performance gates related to design. Your development team and functional testing members will need to respect those gates. As to what the gates should be, I refer you back to the works of Mr Souders as a great source of data for construction of your gate rules.
This gets you to the level of "works for one: performant for one." Then you can use that as a known good to answer the questions related to server scalability and the point at which the service to the client begins to degrade. If you have a CDN in your organization, be sure to take that into account in your test model, for if you do not, you will overload your server compared to production.
As far as actually speeding up the "rendering", or drawing on the screen? Barring changes from the browser manufacturer, you need to purchase a faster video card. Speeding up JavaScript? Make sure that all of your JavaScript is as small and as lean as possible. Have your functional test team test on very dirty browsers with lots of add-ins, as well as on lower-powered hardware, for a view of the maximum out-of-spec response. If you need a view of what your clients' standard hardware model looks like (browser/OS/some hardware info), you can process the data in your HTTP request logs, specifically the user agent, for client configuration information.
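As an illustration of that last point, a rough client-mix survey can be pulled from the access log by counting User-Agent strings. A sketch, assuming the common "combined" log format where the user agent is the last double-quoted field, and an illustrative file name access.log:

```python
# Sketch: count User-Agent strings in a web server access log to get a rough
# picture of the real clients' browser/OS mix. Assumes the combined log
# format (user agent is the last quoted field); "access.log" is illustrative.
import re
from collections import Counter

agents = Counter()
last_quoted = re.compile(r'"([^"]*)"\s*$')   # last double-quoted field on the line
with open("access.log", encoding="utf-8", errors="replace") as f:
    for line in f:
        match = last_quoted.search(line.rstrip())
        if match:
            agents[match.group(1)] += 1

for agent, count in agents.most_common(10):
    print(f"{count:>8}  {agent}")
```

From there you can decide which browser/hardware combinations your functional testers should cover.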