Network Settings for JMeter Traffic | Internet or LAN? - jmeter

I'm going to perform a load and stress test on a webpage using Apache JMeter, but I'm not sure about the appropriate network setup. Is it better to connect the two machines, the server hosting the webpage and the client running JMeter, via the local network or via the internet? Using the internet would be closer to the real scenario, but with a local network the connection is much more stable and you have more bandwidth for more requests at the same time.
I'm very thankful for opinions!

These are in fact two styles or approaches to load testing, and both are valid.
The first you might call lab testing, where you minimise the number of factors that can affect throughput/response times and really focus the test on the system itself.
The second is the more realistic scenario, where you try to get as much coverage as possible by routing requests through as many as possible of the actual network layers that will exist when the system goes live.
The benefit of method 1 is that you simplify the test which makes understanding and finding any problems much easier. The problem is you lack complete coverage.
The benefit of method 2 is that it is not only more realistic but also gives a higher level of confidence; especially with higher-volume tests, you might find you have a problem with a switch or firewall, and it is only with this type of testing that you identify such issues. The problem is it can make finding any issues harder.
So, in short, you really want to do both types. You might find it easier to start with the full end to end test from outside in, and then only move to a more focused test if you find that you need to isolate / investigate a problem. That way you stand a chance of reducing the amount of setup work whilst still getting the maximum benefit from testing.
Note: outside in means just that; your test rig should be located outside of the LAN (assuming this is how live traffic will flow). These days this is easy to set up using cloud-based hardware.
Also note: if the machine that you are running the tests from is the same in both cases, then routing the traffic via the internet (out of your LAN and then back in again) is probably not going to tell you anything useful and could actually cause a false negative in your results (not to mention network problems for your company!).

IMHO you should use your LAN.
Practically every user will have slightly different download/upload speeds, so I suggest you first do a normal performance test using your LAN and, when you finish, do a few runs from outside just to see the difference.
Remember, you're primarily testing the efficiency of your application on the hardware it sits on. The network speed of your future users is a factor you cannot influence in any way.

Related

Performance testing result analysis using jmeter

I need to perform performance testing for my project. I have learned how to use JMeter for performance testing from online material, but I was still unable to find out how to analyze the results from the report. Since I don't know how to analyze the results, I can't find the performance issues in my application or where the errors occurred, and so I can't improve the performance. Is there any article or video tutorial that teaches how to analyze the results?
There are 2 possible test outcomes:
Positive: the performance of your application matches the SLA and/or NFRs. In this case you need to provide the report as proof; it might be, for example, the HTML Reporting Dashboard.
Negative: the performance of your application falls short of expectations. In this case you need to investigate the reasons, which could include:
Simple lack of resources, i.e. your application doesn't have enough headroom to operate due to insufficient CPU, RAM, network or disk bandwidth. So make sure you are monitoring these resources on the application under test side; you can do it using, for example, the JMeter PerfMon Plugin.
The same, but on the JMeter load generator(s) side. If JMeter cannot send requests fast enough, the application won't be able to serve more requests, so if the JMeter machine doesn't have enough resources the result will look the same. Make sure to monitor the same metrics on the JMeter host(s) as well.
A software configuration problem (e.g. application server thread pool, DB connection pool, memory settings, caching settings, etc. are not optimal). In the majority of cases the default configuration of web, application and database servers is not suitable for high loads and needs to be tuned, so try playing with different settings and observe the impact.
Your application code is not optimal. If there are plenty of free hardware resources and you are sure the infrastructure is properly set up (i.e. other applications behave fine), it might be a problem with the code of the application under test. In this case you will need to re-run your test with profiler tool telemetry to see which methods consume the most time and resources and how they can be optimised.
It might also be a networking-related problem, e.g. a faulty router or a bad cable.
There are many possible reasons, but the approach should be the same: the whole system runs at the speed of its slowest component, so you need to identify this component and determine why it is slow. See the Understanding Your Reports post series to learn how to read JMeter load test results and identify bottlenecks from them.
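As a starting point for that kind of analysis, here is a minimal sketch (plain Python, not a JMeter feature) of reading a JMeter CSV results file (.jtl) and ranking transactions by their 90th percentile response time; it assumes the default CSV output, which includes `label` and `elapsed` (milliseconds) columns:

```python
# Sketch: rank JMeter sample labels by 90th percentile response time.
# Assumes the default JMeter CSV .jtl format with 'label' and 'elapsed'.
import csv
from collections import defaultdict

def slowest_labels(jtl_path, percentile=0.90):
    samples = defaultdict(list)
    with open(jtl_path, newline="") as f:
        for row in csv.DictReader(f):
            samples[row["label"]].append(int(row["elapsed"]))
    stats = {}
    for label, times in samples.items():
        times.sort()
        # Index of the requested percentile in the sorted samples.
        idx = min(len(times) - 1, int(percentile * len(times)))
        stats[label] = {"avg": sum(times) / len(times), "p90": times[idx]}
    # Highest 90th percentile first: the likeliest bottleneck on top.
    return sorted(stats.items(), key=lambda kv: -kv[1]["p90"])
```

The label with the worst percentile is usually where to start digging, since averages alone can hide a slow tail.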

Does JMeter performance testing affect live websites

I have been using my blog to learn JMeter and I wondered how risky this could be. For example, if I load test a site, e.g. randomsite.com (which has limited resources where it is hosted), with 100,000 users or more, wouldn't it affect the website? Are there mechanisms to prevent such a scenario?
Yes, it will affect your website. Performance benchmarking tools do introduce load and are designed to stress test applications, websites and databases. The idea is to do this before you deploy your application, website and other systems, so you know what your theoretical limits are. Also keep in mind that by monitoring the system's performance with a tool you are adding extra load, so the numbers you get from these tools are not always 100% accurate. It's better to know the theoretical limitations than not to know at all.
One mechanism you can use to stop such tools being used in a malicious way is to run an intrusion detection system (IDS) on the network edge. These systems will probably identify this type of activity as a DoS attack of sorts and then block the originating IP.
DDoS attacks make things a lot harder to cope with. This is where thousands of machines make requests small enough not to be picked up by the IDS as a DoS attack on the same target. The IDS just sees small amounts of traffic, requests, etc. coming from a lot of addresses, which makes it very hard to determine which requests are genuine and which are part of an attack.

What differentiates virtual users / real users when performing a load test?

Can anyone point out the difference between a virtual user and a real user?
In the context of web load testing, there are a lot of differences. A virtual user is a simulation of human using a browser to perform some actions on a website. One company offers what they call "real browser users", but they, too, are simulations - just at a different layer (browser vs HTTP). I'm going to assume you are using "real users" to refer to humans.
Using humans to conduct a load test has a few advantages, but is fraught with difficulties. The primary advantage is that there are real humans using real browsers - which means that, if they are following the scripts precisely, there is virtually no difference between a simulation and real traffic. The list of difficulties, however, is long: First, it is expensive. The process does not scale well beyond a few dozen users in a limited number of locations. Humans may not follow the script precisely...and you may not be able to tell if they did. The test is likely not perfectly repeatable. It is difficult to collect, integrate and analyze metrics from real browsers. I could go on...
Testing tools which use virtual users to simulate real users do not have any of those disadvantages - as they are engineered for this task. However, depending on the tool, they may not perform a perfect simulation. Most load testing tools work at the HTTP layer - simulating the HTTP messages passed between the browser and server. If the simulation of these messages is perfect, then the server cannot tell the difference between real and simulated users...and thus the test results are more valid. The more complex the application is, particularly in the use of javascript/AJAX, the harder it is to make a perfect simulation. The capabilities of tools in this regard varies widely.
There is a small group of testing tools that actually run real browsers and simulate the user by pushing simulated mouse and keyboard events to the browser. These tools are more likely to simulate the HTTP messages perfectly, but they have their own set of problems. Most are limited to working with only a single browser (e.g. Firefox). It can be hard to get good metrics out of real browsers. This approach is far more scalable than using humans, but not nearly as scalable as HTTP-layer simulation. For sites that need to test <10k users, though, the web-based solutions using this approach can provide the required capacity.
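To make the HTTP-layer idea concrete, here is a toy sketch of virtual users as threads replaying the same scripted request; the local server is only a stand-in for the system under test, and a real tool like JMeter adds cookies, assertions, timers and much more:

```python
# Toy sketch of HTTP-layer "virtual users". The local server stands in for
# the system under test. Note the stdlib HTTPServer is single-threaded, so
# it serves the concurrent users one request at a time.
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    hits = 0  # total requests served

    def do_GET(self):
        Handler.hits += 1
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = "http://127.0.0.1:%d/" % server.server_port

def virtual_user(requests_per_user):
    # Each virtual user replays the same scripted HTTP request sequence.
    for _ in range(requests_per_user):
        with urllib.request.urlopen(url) as resp:
            assert resp.status == 200

users = [threading.Thread(target=virtual_user, args=(5,)) for _ in range(10)]
for u in users:
    u.start()
for u in users:
    u.join()
server.shutdown()
print(Handler.hits)  # 10 users x 5 requests each = 50
```

From the server's point of view these are just HTTP requests; whether the simulation is "good enough" depends on how faithfully the scripted messages match what a real browser would send.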
There is a difference.
It depends on your JMeter setup. If you are testing from a single box, your I/O is limited; you can't imitate, say, 10K users with JMeter on a single box, though you can do small tests with one box. If you use multiple JMeter boxes, that's another story.
Also, what about cookies? Do you store cookies while load testing your app? That does make a difference.
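The cookie point can be sketched like this: in a simulation, each virtual user needs its own cookie store so sessions don't bleed together (this mirrors what JMeter's HTTP Cookie Manager does per thread; the helper name below is made up for illustration):

```python
# Sketch: give each virtual user its own cookie jar, so their sessions
# stay independent, as JMeter's HTTP Cookie Manager does per thread.
import urllib.request
from http.cookiejar import CookieJar

def make_virtual_user_opener():
    # One CookieJar per virtual user; cookies set by the server for one
    # user are never sent by another.
    return urllib.request.build_opener(
        urllib.request.HTTPCookieProcessor(CookieJar()))

alice = make_virtual_user_opener()
bob = make_virtual_user_opener()
# alice.open(url) and bob.open(url) would now track sessions independently.
```

Without per-user cookies, every virtual user can end up sharing one server session, which makes the test far less representative of real traffic.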
A virtual user is an automatic emulation of a real user's browser and HTTP requests.
Thus the virtual user is designed to simulate a real user. It is also possible to configure virtual users to run through what we think a real user would do, but without all the delay between getting a page and submitting a new one.
This allows us to simulate a much higher load on our server.
The real key differences between virtual user simulations and real users are the network between the server and their device, as well as the actual actions a real user performs on the website.
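The effect of dropping think time on the load a fixed pool of virtual users generates can be sketched with simple arithmetic (the numbers below are purely illustrative):

```python
# Back-of-the-envelope sketch: each user completes one request every
# (response time + think time) seconds, so removing think time multiplies
# the request rate a fixed number of virtual users produces.
def requests_per_minute(users, response_time_s, think_time_s):
    return users * 60.0 / (response_time_s + think_time_s)

# 100 users, 0.5 s responses, 9.5 s of "reading the page" between clicks:
print(requests_per_minute(100, 0.5, 9.5))  # 600.0 requests/minute
# Same 100 users hammering with no think time at all:
print(requests_per_minute(100, 0.5, 0.0))  # 12000.0 requests/minute
```

This is why a few hundred virtual users with no pacing can represent the load of many thousands of real, slower-paced humans.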

Load testing consumer websites

I am looking to load test a consumer website. I have tried using JMeter; however, in that case, all the requests originate from one machine. What I really want is to simulate real users across the country, some on low-speed dialup connections and others on high-speed ones.
What are the best practices to follow in such a scenario?
JMeter supports distributed testing - so if you're already comfortable with it as a tool, you can use it to power these distributed requests from an arbitrary number of machines, too.
Note that all machines run the exact same test plan, so either your plan should have some random selection of fast/slow environments, or you may be able to select which profile to use based on some system properties.
You might want to consider using a 3rd party service such as Load Impact.
You've expressed two different but related concerns - traffic coming from a single machine and simulating various end-user network speeds.
Why is the first one a concern for your testing? Unless you have a load balancer that uses the IP address as part of its load-distributing algorithm, the vast majority of servers (and application platforms) don't care that all the traffic is coming from a single machine (or IP address). Note also that you can configure the OS of your load generator for multiple IP addresses and the better load-testing tools will make use of those IP addresses so that traffic comes from all of them.
For the simulation of end-user network speeds, again, the better load-testing tools will do this for you. That can give you a pretty good feel for how the bandwidth will affect the page load time, without actually using distributed load generation. But tools frequently do not account for latency. That is where there is no substitute for distributing your load generation.
You can do distributed testing with JMeter, though it can be a bit cumbersome. How many locations do you need? Without knowing more about what you need, my first suggestion would be to choose a tool that has features designed specifically to do what you need. I will pimp our product, Web Performance Load Tester, but there are certainly other options. Load Tester can emulate various end-user connection speeds and has built-in support for generating load from Amazon EC2 (US east and west coast and Dublin, IR...support for Asia coming soon). Once you set up an EC2 account, you can be running your first test from the cloud in 10 minutes.

Performance testing scenarios required

What can be the various performance testing scenarios to be considered for a website with huge traffic? Is there any way to identify the elements of the code which are adversely affecting the site performance?
Please provide something like a checklist of generalised scenarios to be tested, to ensure proper performance testing.
It would be good to start with load testing tools like JMeter or PushToTest and run them against your web application. JMeter simulates HTTP traffic and loads the server that way. You can do that, and also load test the AJAX parts of your application with PushToTest, because it can use Selenium scripts.
If you don't have the resources (computers to run load tests) you can always use a service like BrowserMob to run the scripts against a web accessible server.
It sounds like you need more of a test plan than a suggestion of tools to use. In performance testing, it is best to look at the users of the application -
How many will use the application on a light day? How many will use the app on a heavy day?
What type of users make up your user population?
What transactions will each of these user types perform?
Using this information, you can identify the major transactions and come up with different user levels (e.g. 10, 25, 50, 100) and percentages of user types (30% user A, 50% user B, ...) to test these transactions with. Time each of these transactions for each test you execute and examine how the transaction times change as compared to your user levels.
After gathering some metrics, since you should be able to narrow transactions to individual pieces of code, you will be able to know where to focus your code improvements. If you still need to narrow things down further, finer tests within each transaction can be created to provide more granular results.
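The user-level and user-mix idea above can be sketched as a small helper that turns a total user count and a type mix into per-type thread counts for each test run (the type names and percentages are just examples):

```python
# Sketch: convert a user-type mix (fractions of the population) into
# whole thread counts per type for each planned user level.
def thread_counts(total_users, mix):
    # mix maps user type -> fraction of the population (should sum to 1.0).
    counts = {t: int(total_users * p) for t, p in mix.items()}
    # Give any rounding remainder to the largest group.
    remainder = total_users - sum(counts.values())
    largest = max(mix, key=mix.get)
    counts[largest] += remainder
    return counts

mix = {"browser": 0.5, "buyer": 0.3, "admin": 0.2}
for level in (10, 25, 50, 100):
    print(level, thread_counts(level, mix))
```

In JMeter terms, each user type would typically become its own thread group, with the computed count as its number of threads for that run.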
Concurrency will kill you here, as you need to test your maximum projected concurrent users, plus wiggle room, hitting the database, website, and any other web services simultaneously. It really depends on the technologies you're using, but if you have a large interaction of different web technologies, you may want to check out Neoload. I've had nothing but success with this web stress tool, and the support is top notch if you need to emulate specific, complicated behavior (such as mocking AMF traffic, or using responses from web pages to dictate request behavior).
If you have a DB layer then this should be the initial focus of your attention, once the system is stable (i.e. no memory leaks or other resource issues). If the DB is not the bottleneck (or not relevant), then you need to correlate CPU/memory/disk I/O and network traffic with the increasing load and increasing response times. This gives you an idea of capacity and of correlation (but not causation) with resource usage.
To find the cause of a given issue with resources you need to establish a Six Sigma-style project where you define the problem and perform root cause analysis in order to pinpoint the piece of code (or resource configuration) that is the bottleneck. Once you have done this a couple of times in your environment, you will notice patterns of workload, resource usage and countermeasures (solutions) that will guide you in your future performance testing 'projects'.
To choose the correct performance scenarios you need to go through the following basic checklist:
High priority scenarios from the business logic perspective. For example: login/order transactions, etc.
Mostly used scenarios by end users. Here you may need information from monitoring tools like NewRelic, etc.
Search / filtering functionality (if applicable)
Scenarios which involve different user roles/permissions
A performance test is a comparison test, either with the previous release of the same application or with the existing players in the market.
Case 1- Existing application
1) Carry out the test for the same scenarios as covered before, to get a clear picture of the application's response before and after the upgrade.
2) If you need to dig deeper, you can go back to the database team to understand which functionalities are getting the most requests. Also ask them for the average total number of requests on a typical day, so that you can decide what user load and test duration to use.
Case 2- New Application
1) Look at existing market players and design your test around the critical functions of the rival product (e.g. Gmail might support many functions, but what is used most often is launch -> login -> compose mail -> inbox -> outbox).
2) You can always go back to your clients and ask what they consider to be business-critical scenarios, or scenarios that will be used most often.
