Response times from far locations grow rapidly under load - performance

We have a simple Java web application and we are performance testing it with a test case in which we upload a 1 MB file to the application under test. We have several locations around the world from which we can test it. From some far-away locations the transaction response time seems to grow very fast under load, but from most places it stays steady. The load used is not high and the network shouldn't be a limiting factor. What else could it be?

Your remote network saturation is the limiting factor.
If the saturation were inbound, then all locations using the same inbound interface would be subject to the delay. Since you indicate that only some of the locations are impacted, we can use the non-impacted locations as control elements of the test to determine that the common inbound interface is not the cause.

Related

JMeter - in different geographical locations

I need some advice. I have always worked on an on-prem setup and had my JMeter machines in the same data center as the application server to be tested. Now we have a cloud setup and I can request JMeter machines in different geographical locations in the cloud to mimic the real production behavior of the load. Is that what I should do? Will the response times of the transactions have network disturbances in them, or will it in fact be like production? ...in on-prem testing, when we have JMeter in the same data center as the application servers under test, we totally eliminate network issues from the response times!
There are two different types of performance metrics for websites:
Perceived System Performance
Perceived User Experience
Perceived System Performance won't be impacted by a geo-distributed JMeter setup; the backend doesn't really care where the requests originate from or how much time it takes the request/response to travel over the wire back and forth. In fact, the system will receive less load compared to the same test scenario running in the same network.
Perceived User Experience will be different, and you will see larger response times, as it takes some time for each packet to physically travel around the globe and pass through all the routers/switches on its way towards the system under test and back.
In terms of JMeter Glossary
Latency will be higher
Delta between latency and elapsed time will be higher
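As a simplified, illustrative model of why both metrics grow with distance (the RTT, processing, and transfer figures below are assumptions, not measurements), the effect can be sketched as:

```java
// Simplified model (illustrative only): how round-trip time (RTT) inflates
// the JMeter "Latency" (time to first byte) and "Elapsed" (time to last byte)
// metrics when the load generator moves far away from the server.
public class GeoTimings {
    // Latency ~ one RTT (request reaches the server, first response byte
    // comes back) plus the server's own processing time, in milliseconds.
    static long latencyMs(long rttMs, long serverProcessingMs) {
        return rttMs + serverProcessingMs;
    }

    // Elapsed ~ latency plus the time to stream the response body; the body
    // transfer needs extra round trips as TCP windows open, modeled here
    // as a fixed number of additional RTTs (an assumption of this sketch).
    static long elapsedMs(long rttMs, long serverProcessingMs,
                          long bodyTransferMs, int extraRoundTrips) {
        return latencyMs(rttMs, serverProcessingMs)
                + bodyTransferMs + (long) extraRoundTrips * rttMs;
    }

    public static void main(String[] args) {
        // Same backend work (200 ms) and body transfer (100 ms, 3 extra RTTs):
        System.out.println("local  latency=" + latencyMs(2, 200)
                + " elapsed=" + elapsedMs(2, 200, 100, 3));    // RTT 2 ms, in-datacenter
        System.out.println("remote latency=" + latencyMs(150, 200)
                + " elapsed=" + elapsedMs(150, 200, 100, 3));  // RTT 150 ms, cross-continent
    }
}
```

With identical backend work, the remote run shows both a higher latency and a larger latency-to-elapsed delta, which is exactly the pattern described above.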

How to load test 10k requests per second using jmeter?

I need to load test my website at 10k req/sec for 1 hour using JMeter. I am confused about the values of loop count, number of threads, ramp-up period and duration.
Also, will my laptop (i5, 8 GB) be able to do that? If not, what is the alternative?
PS: I checked every question/answer on Stack Overflow for this but I couldn't find any help. Please don't mark it as a repeated question.
You can use a "Constant Throughput Timer": define the target throughput and select throughput based on "all active threads".
Define the maximum number of users in your script so that it will be enough for 10K req/sec.
Also, if you are using a Windows machine, I think you will face this issue: "https://www.baselogic.com/2011/11/23/solved-java-net-bindexception-address-use-connect-issue-windows/"
I would recommend using distributed testing, or more than one machine.
The easiest way of configuring JMeter to send X requests per second is using either the Precise Throughput Timer or the Throughput Shaping Timer in combination with the Concurrency Thread Group. The number of threads needs to be sufficient; the exact number mainly depends on your application's response time: if the response time is 1 second you will need 10k threads, if it's 500 ms you will need 5k threads, if it is 2 seconds you will need 20k threads, etc.
Only you can answer whether your laptop can kick off the required number of virtual users, as there are too many factors to consider: the nature of the test, the size of the requests/responses, the number of pre/post processors and assertions, etc. Make sure to follow JMeter Best Practices and monitor CPU, RAM, network, etc. usage, for example with the JMeter PerfMon Plugin, because if your laptop is overloaded, JMeter won't be able to send requests fast enough and you will not reach 10k requests per second even if the server supports it. If your laptop's hardware specification is too low for the test scenario, you will have to go for Distributed Testing.
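The thread arithmetic above is Little's Law (concurrency ≈ throughput × average response time). A minimal sketch reproducing the answer's figures:

```java
// Little's Law sketch: threads needed ~ target throughput (req/s) multiplied
// by average response time (s). The figures match the answer's examples.
public class ThreadEstimate {
    static long requiredThreads(double targetRps, double avgResponseSec) {
        // Round up: a fractional thread still needs a whole thread.
        return (long) Math.ceil(targetRps * avgResponseSec);
    }

    public static void main(String[] args) {
        System.out.println(requiredThreads(10_000, 1.0)); // 1 s response  -> 10000 threads
        System.out.println(requiredThreads(10_000, 0.5)); // 500 ms        -> 5000 threads
        System.out.println(requiredThreads(10_000, 2.0)); // 2 s           -> 20000 threads
    }
}
```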
You have a number of issues in play
test design. Use more than one load generator. In fact, use no fewer than three, evenly matched in hardware. Take one and load only one user of each type. This is your control set. If this set degrades at the same rate as your other load generators, then you have a common issue, likely the site. If the control set does not degrade but the other load generators do, then you likely have an overloaded generator. On the commercial test tool side of the fence, generating all load from one host has never been considered good practice in performance testing.
10K requests per second. This is substantial. I have worked on some top-20 eCommerce sites and I can tell you that even they do not receive this type of traffic to the origin servers. Why? Cache! Either the traffic hits a Content Delivery Network where the load is spread across the country, OR there is a cache node directly in front of the load balancer(s) for the site (think Varnish Cache or equivalent), OR both, for a multi-staged cache. You might want to look for an objective reference in production to pin this to as a validation point, if and only if (IFF) your goal is to represent end-user behavior. Running a count of requests grouped by second from the HTTP access logs should be able to validate this number. Also, check the cache plan for fixed assets - it could be poorly managed, and load would drop significantly just by better managing the site's cache settings towards the client. If your goal is simply to saturate a SOAP/REST interface to the point of destruction, then you might have a better path.
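That validation step - counting requests grouped by second in the access logs - might look like this sketch. It assumes Common Log Format, where the fourth space-separated field is the timestamp; adjust the parsing for your own log format:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Sketch: find the peak requests-per-second from HTTP access log lines
// (Common Log Format assumed; sample lines below are made up).
public class PeakRps {
    // "1.2.3.4 - - [10/Oct/2023:13:55:36 +0000] ..." -> "10/Oct/2023:13:55:36"
    static String secondOf(String logLine) {
        return logLine.split(" ")[3].substring(1); // 4th field, '[' stripped
    }

    static int peakRequestsPerSecond(List<String> logLines) {
        // Group lines by their one-second timestamp bucket, count each bucket,
        // and return the largest count.
        Map<String, Long> perSecond = logLines.stream()
                .collect(Collectors.groupingBy(PeakRps::secondOf, Collectors.counting()));
        return perSecond.values().stream().mapToInt(Long::intValue).max().orElse(0);
    }

    public static void main(String[] args) {
        List<String> lines = List.of(
            "1.2.3.4 - - [10/Oct/2023:13:55:36 +0000] \"GET / HTTP/1.1\" 200 512",
            "1.2.3.4 - - [10/Oct/2023:13:55:36 +0000] \"GET /a HTTP/1.1\" 200 128",
            "1.2.3.5 - - [10/Oct/2023:13:55:37 +0000] \"GET /b HTTP/1.1\" 200 64");
        System.out.println("peak req/s: " + peakRequestsPerSecond(lines)); // peak req/s: 2
    }
}
```

In practice you would stream the real log file with `Files.lines(...)` rather than an in-memory list.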
If you are looking to take a particular SOAP or REST set of remote procedure calls to the point of destruction, consider a classical stress test. Start your test at zero load and increase it with the smallest step interval possible over the longest possible period of time. The physical analogy would be the classical hospital-style stress test, where a nurse comes around every minute and increases the speed OR the incline on the treadmill, OR both, until some end-of-test condition is achieved. For a hospital-style test that condition is moving into oxygen debt, an inability to keep pace, etc. For your application/interface it could be the doubling of response times from what is acceptable, the saturation of a resource in the finite resource pool (CPU, DISK, MEMORY, NETWORK) on the back-end hosts, etc.
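A stepped ramp of that kind can be expressed as a simple schedule function; the step size, interval, and cap below are illustrative placeholders, not recommendations:

```java
// Sketch of the stepped "hospital style" ramp: start at zero load and add
// the smallest possible increment at fixed intervals until an end-of-test
// condition (or a cap) is reached.
public class StressSchedule {
    // Virtual-user count at a given minute, for a ramp that adds `stepUsers`
    // every `stepMinutes`, capped at `maxUsers`.
    static int usersAtMinute(int minute, int stepUsers, int stepMinutes, int maxUsers) {
        int users = (minute / stepMinutes) * stepUsers;
        return Math.min(users, maxUsers);
    }

    public static void main(String[] args) {
        // +1 user every minute, capped at 60, sampled at a few points:
        for (int m : new int[]{0, 10, 30, 90}) {
            System.out.println("minute " + m + ": "
                    + usersAtMinute(m, 1, 1, 60) + " users");
        }
    }
}
```

In JMeter the same shape is usually drawn with a stepping/ultimate thread group rather than coded by hand; the function just makes the schedule explicit.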

Performance Testing: What does a fluctuating response time indicate?

Below is the graph which I received after the performance test execution.
I am confused by the fluctuating response time graph.
NOTE: 1) The throughput graph is also fluctuating. 2) I did not receive any errors during the test.
It normally indicates that either the application under test or the JMeter engine is overloaded, hence it cannot handle/produce a stable load pattern.
Your response time is around 1.5 minutes, which seems a little high to me, so I would suggest monitoring the application under test and checking:
whether it has enough headroom to operate in terms of CPU, RAM, network IO, etc., as it might be the case that the application is short on RAM and starts swapping, and disk IO is much slower than RAM; this can be checked using e.g. the JMeter PerfMon Plugin
whether it is properly configured for high loads, as its middleware (database, application server, load balancer, etc.) needs to be tuned; a spike-like response time pattern may indicate intensive GC activity
in any case, ensure that JMeter itself is properly configured for high load and isn't short on resources, because if JMeter isn't able to send/receive requests fast enough, you will get false-negative results
A single chart never tells the full story; you need to correlate information from all the possible sources, collect log files, etc.

JMeter: What could be the possible reason for the same throughput even though user load has increased

What could be the possible reason for throughput remaining the same even though the load has increased considerably compared to previous tests? NOTE: I also received the error "Internal Server Error" while running my performance test.
It means you have reached the Saturation Point - the point of maximum performance!
At a certain number of concurrent users you hit maximum CPU utilization and peak throughput. Adding any more concurrent users will lead to degradation of response time and throughput while CPU utilization stays at its peak, and it can also throw some errors!
After that, if you continue increasing the number of virtual users you may see the following:
Response time increases.
Some of your requests fail.
Throughput either remains the same or decreases - this indicates the performance bottleneck!
An ideal load test in an ideal world looks like this:
response time remains the same, no matter how many virtual users are hitting the server
throughput increases by the same factor as the load increases, e.g.:
100 virtual users - 500 requests/second
200 virtual users - 1000 requests/second
etc.
In reality, the application under test might scale up to a certain extent, but eventually you will reach the point where you keep increasing the load while the response time goes up and the throughput remains the same (or goes down).
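The saturation point can be detected mechanically by checking whether throughput keeps scaling with the user count. A sketch, using the same illustrative figures as the "100 users -> 500 req/s, 200 users -> 1000 req/s" example above (the 1.5 gain threshold is an arbitrary assumption):

```java
// Sketch: flag saturation when doubling the users no longer raises the
// measured throughput by at least `minGain` (e.g. 1.5 = +50% req/s).
public class SaturationCheck {
    static boolean saturated(double prevRps, double currRps, double minGain) {
        return currRps < prevRps * minGain;
    }

    public static void main(String[] args) {
        // 100 -> 200 users: 500 -> 1000 req/s, still scaling linearly
        System.out.println(saturated(500, 1000, 1.5));  // false
        // 200 -> 400 users: 1000 -> 1050 req/s, throughput has flattened
        System.out.println(saturated(1000, 1050, 1.5)); // true
    }
}
```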
Looking into HTTP Status Code 500
The HyperText Transfer Protocol (HTTP) 500 Internal Server Error server error response code indicates that the server encountered an unexpected condition that prevented it from fulfilling the request.
Most probably it indicates that the application under test is overloaded; the next step would be to find out the reason, which may be one of:
The application is not properly configured for high loads (including all the middleware: application server, database, load balancer, etc.)
The application lacks resources (CPU, RAM, etc.); this can be checked using e.g. the JMeter PerfMon Plugin
The application uses inefficient functions/algorithms; this can be checked using profiling tools

Visual Studio load testing, is it real-world?

Is it even possible to fake the traffic, considering your network connection will have a bandwidth limit? If I create a test for 1000 users, visiting 3 pages a second, will the results really represent this scenario if done in real life (not simulation)?
Should I create an executable to perform the load test, and run it from separate network connections?
It looks like you are concerned that the network bandwidth limit of the test machine will invalidate the accuracy of the test. To determine whether your concern is legitimate, perform the following rough assessment. Let's say your three pages have sizes of 20 KB, 40 KB and 60 KB respectively. To load test them you would need to generate the following maximum bandwidth:
1000 × (20 KB + 40 KB + 60 KB) = 120 MB/s ≈ 960 Mbps
Make an adjustment for compression if you use it. Factor in the request sizes if they are significant. If you test with browser caching emulation enabled, increase the size of the first requests by the size of the static resources. Then compare the maximum bandwidth with your network adapter bandwidth. If the adapter is 100 Mbps, or even 1 Gbps, it will be a bottleneck; at 10 Gbps it will not be.
Alternatively, skip the calculations and just run the test. On the bandwidth graph in VSTS, find the max value. If it is far smaller than your network adapter limit, then you can run this test from one machine.
Keep in mind that requesting 1000 × 3 pages per second does not guarantee that you will receive responses for 1000 × 3 pages per second, due to the server's speed limit.
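The arithmetic above generalizes to any page mix; the page sizes and user rate here are just the answer's example figures (1 KB taken as 1000 bytes to match the estimate):

```java
// Rough bandwidth estimate: page sizes in kilobytes, full page sets
// requested per second, result in megabits per second.
public class BandwidthEstimate {
    static double requiredMbps(int pageSetsPerSecond, int[] pageSizesKB) {
        long bytesPerSet = 0;
        for (int kb : pageSizesKB) bytesPerSet += kb * 1000L; // KB -> bytes
        // bytes/s -> bits/s -> Mbit/s
        return pageSetsPerSecond * bytesPerSet * 8 / 1_000_000.0;
    }

    public static void main(String[] args) {
        // 1000 users each loading 20 KB + 40 KB + 60 KB of pages every second
        System.out.printf("%.0f Mbps%n",
                requiredMbps(1000, new int[]{20, 40, 60})); // 960 Mbps
    }
}
```

Swapping in your real page sizes (post-compression) and target rate gives the figure to compare against the adapter's nominal bandwidth.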
