What differentiates virtual users from real users when performing a load test? - visual-studio-2010

Can anyone point out the difference between a virtual user and a real user?

In the context of web load testing, there are a lot of differences. A virtual user is a simulation of a human using a browser to perform some actions on a website. One company offers what they call "real browser users", but they, too, are simulations - just at a different layer (browser vs HTTP). I'm going to assume you are using "real users" to refer to humans.
Using humans to conduct a load test has a few advantages, but it is fraught with difficulties. The primary advantage is that real humans use real browsers - which means that, if they follow the scripts precisely, there is virtually no difference between the test traffic and real traffic. The list of difficulties, however, is long: First, it is expensive. The process does not scale well beyond a few dozen users in a limited number of locations. Humans may not follow the script precisely... and you may not be able to tell whether they did. The test is likely not perfectly repeatable. It is difficult to collect, integrate and analyze metrics from real browsers. I could go on...
Testing tools which use virtual users to simulate real users have none of those disadvantages, as they are engineered for this task. However, depending on the tool, they may not perform a perfect simulation. Most load testing tools work at the HTTP layer, simulating the HTTP messages passed between the browser and server. If the simulation of these messages is perfect, the server cannot tell the difference between real and simulated users... and thus the test results are more valid. The more complex the application, particularly in its use of JavaScript/AJAX, the harder it is to make a perfect simulation. The capabilities of tools in this regard vary widely.
There is a small group of testing tools that actually run real browsers and simulate the user by pushing simulated mouse and keyboard events to the browser. These tools are more likely to simulate the HTTP messages perfectly, but they have their own set of problems. Most are limited to working with only a single browser (e.g. Firefox). It can be hard to get good metrics out of real browsers. This approach is far more scalable than using humans, but not nearly as scalable as HTTP-layer simulation. For sites that need to test fewer than 10k users, though, the web-based solutions using this approach can provide the required capacity.
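To make the HTTP-layer idea concrete, here is a minimal sketch of virtual users as most tools implement them: a pool of threads replaying a scripted sequence of HTTP requests. This is an illustration only - the target URL, paths, user count and timeout are placeholder assumptions, and a real tool adds pacing, assertions, ramp-up and metrics collection.

```python
# Minimal HTTP-layer virtual user sketch (illustration, not a real tool).
# BASE_URL, SCRIPT and the user count are placeholder assumptions.
import threading
import time
import requests

BASE_URL = "http://example.com"
SCRIPT = ["/", "/products", "/cart"]   # scripted page sequence each virtual user replays

def virtual_user(user_id, results):
    for path in SCRIPT:
        start = time.time()
        resp = requests.get(BASE_URL + path, timeout=10)
        results.append((user_id, path, resp.status_code, time.time() - start))

results = []
threads = [threading.Thread(target=virtual_user, args=(i, results)) for i in range(50)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(results), "requests completed")
```

The server only ever sees the HTTP messages, so how convincing the simulation is depends entirely on how faithfully those messages reproduce what a real browser would send.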

There is a difference.
It depends on your JMeter setup. If you are running from a single box, your I/O is limited: you can't imitate, say, 10K users with JMeter on a single machine. You can do small tests with one box; if you use multiple JMeter boxes, that's another story.
Also, what about cookies - do you store cookies while load testing your app? That does make a difference.
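As a hedged illustration of why cookie handling matters (JMeter's equivalent is the HTTP Cookie Manager), the sketch below contrasts stateless requests with a per-virtual-user cookie jar; the URL, form fields and credentials are placeholders.

```python
# Why cookie handling changes a load test (sketch only; URL and fields are placeholders).
import requests

BASE_URL = "http://example.com"

# Without a cookie jar, every request looks like a brand-new, unauthenticated visitor:
requests.post(BASE_URL + "/login", data={"user": "u1", "pass": "x"})
r = requests.get(BASE_URL + "/account")      # the session cookie was discarded
print("stateless:", r.status_code)

# With a per-virtual-user session, cookies are stored and replayed like a real browser:
s = requests.Session()
s.post(BASE_URL + "/login", data={"user": "u1", "pass": "x"})
r = s.get(BASE_URL + "/account")             # cookie sent automatically
print("with cookies:", r.status_code)
```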

A virtual user is an automated emulation of a real user's browser and HTTP requests.
Thus the virtual user is designed to simulate a real user. It is also possible to configure virtual users to run through what we think a real user would do, but without all the delay between getting a page and submitting a new one.
This allows us to simulate a much higher load on our server.
The real key differences between virtual user simulations and real users are the network between the server and their device, as well as the actual actions a real user performs on the website.
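A quick back-of-the-envelope calculation shows how removing the think time between pages lets the same number of virtual users generate a much higher load; the response and think times below are assumed figures for illustration only.

```python
# How think time changes the load a fixed number of virtual users generates.
# All numbers are assumptions chosen only to illustrate the effect.
response_time = 0.5          # seconds the server takes per page
think_time_real = 10.0       # a human pauses ~10 s between pages
think_time_virtual = 0.0     # virtual users can be configured with no pause

users = 1000
rate_real = users / (response_time + think_time_real)        # ~95 requests/second
rate_virtual = users / (response_time + think_time_virtual)  # ~2000 requests/second

print(f"with think time:    {rate_real:.0f} requests/second")
print(f"without think time: {rate_virtual:.0f} requests/second")
```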

Related

JMeter Mobile Native App Testing

I have two questions related to native app performance testing.
1) I have a payment app, and it comes with bank security which is installed at the time of app installation. It sends a token number and the rest of the data in encrypted format. Is it possible to handle this kind of request using JMeter or any other performance testing tool? Do I need to change some setting in the app server or JMeter to get this done?
2) The mobile app uses a device ID, so if I simulate load on a cloud server, will it use the same device ID that I used while creating the script? Is it possible to simulate different device IDs to make it more realistic?
Any help or references will be appreciated. :)
(1) Yes. This is why performance testing tools are built around general-purpose programming languages: so that you (as the tester) can apply your programming skills and the appropriate algorithms and libraries to reproduce the same behavior as the client.
(2) This is why performance testing tools allow for parameterization of the data stream sent to the server/application under test.
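To illustrate what parameterization means here, the sketch below has each virtual user draw a different device ID from a data file rather than replaying the one captured at recording time (in JMeter the equivalent would be a CSV Data Set Config feeding a variable). The file name, field names, URL and payload are hypothetical.

```python
# Parameterization sketch: each virtual user sends its own device ID instead of
# the single recorded one. File name, fields, URL and payload are hypothetical.
import csv
import requests

with open("device_ids.csv", newline="") as f:
    device_ids = [row["device_id"] for row in csv.DictReader(f)]

def virtual_user(index):
    device_id = device_ids[index % len(device_ids)]   # a distinct ID per user
    payload = {"deviceId": device_id, "amount": "10.00"}
    return requests.post("http://example.com/api/payment", json=payload, timeout=10)
```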
I'm not an expert in JMeter, but I work a lot with LoadRunner (LR), a performance testing tool from HP. Though JMeter and LR are different tools, they work on the same principles and share the same performance testing objective.
As James Pulley mentioned, the performance testing tool may have the capability. But the question is:
Have you tried recording your app with JMeter? Since your app is native, do the recording from a simulator/emulator and check the feasibility. JMeter might not be the right candidate for mobile app load testing.
Alternatively, there are a lot of other tools available on the market (both commercial and open source) for your objective.
Best Regards
With the rise of several mobile network technologies, load testing a mobile application has become a different ball game compared with normal web app load testing. This is because of the differences in response times that occur on different mobile networks such as 2G, 3G, 4G, etc. Additionally, the client, being a mobile device, has plenty of physical constraints such as limited CPU, RAM, internal storage, etc. All of these need to be considered while conducting performance testing of a mobile application if one wants to simulate a scenario close to real-world conditions.
Coming to your 2 questions,
1) Yes, it is possible, but the amount of manual effort needed to make the script execution-ready may vary (since you mention there is data in encrypted format - some encryption schemes are easy to understand and some are just crude and difficult to handle using JMeter). There should not be any app server setting that needs to change (unless, of course, you are unable to handle the encryption with JMeter, in which case the encryption might have to be disabled for the QA phase).
2) As rightly said by James Pulley, these values can be parameterized. However, I fear that these values will be validated by the app server, and hence they need to be fed into the requests appropriately.
You can refer to this link for how to do mobile performance testing of a native application: http://www.neotys.com/documents/doc/neoload/latest/en/html/#4234.htm#o4237. The same could be extrapolated to JMeter to an extent.

Does JMeter performance testing affect live websites?

I have been using my blog to learn JMeter and I wondered how risky this could be. For example, if I load test a site, e.g. randomsite.com (which has limited resources where the website is hosted), with 100,000 users or more, wouldn't it affect the website? Are there mechanisms to prevent such a scenario?
Yes, it will affect your website. Performance benchmarking tools do introduce load and are designed to stress test applications, websites and databases. The idea is to do this before you deploy your application, website and other systems, so that you know what your theoretical limits are. Also keep in mind that by monitoring the system's performance with a tool you are adding extra load, so the numbers you get from these tools are not always 100% accurate. It's better to know the theoretical limitations than not to know at all.
One mechanism you can use to stop such tools being used maliciously is to run an intrusion detection system (IDS) on the network edge. Such a system will probably identify this type of activity as a DoS attack of sorts and block the originating IP.
DDoS attacks make things a lot more difficult to cope with. This is where thousands of machines each make a volume of requests small enough not to be picked up by the IDS as a DoS attack on the same target. The IDS just sees small amounts of traffic, requests, etc. coming from a lot of addresses, which makes it very hard to determine which requests are genuine and which are part of an attack.
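A toy illustration of that last point, under assumed numbers: a per-source-IP rate threshold (a crude stand-in for what an edge IDS might do) flags one load-generating machine immediately, but the same request volume spread over many addresses stays under the limit.

```python
# Toy model of a per-IP rate threshold at the network edge. The threshold and
# traffic shapes are assumptions, purely to illustrate the DoS vs DDoS point.
from collections import Counter

THRESHOLD = 100  # assumed: requests per minute allowed from a single address

def flag_suspects(source_ips):
    """source_ips: list of source addresses seen in the last minute."""
    counts = Counter(source_ips)
    return [ip for ip, n in counts.items() if n > THRESHOLD]

# One JMeter box sending 10,000 requests/minute is flagged immediately:
print(flag_suspects(["203.0.113.5"] * 10_000))                 # ['203.0.113.5']

# The same 10,000 requests spread over 250 addresses give 40 each - nothing is flagged:
spread = [f"198.51.100.{i % 250}" for i in range(10_000)]
print(flag_suspects(spread))                                   # []
```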

Network Settings for JMeter Traffic | Internet or LAN?

I'm going to perform a load and stress test on a webpage using Apache JMeter, but I'm not very sure about the appropriate network setting. Is it better to connect the two machines (the server hosting the webpage and the client running JMeter) via the local network or via the internet? Using the internet would be closer to the real scenario, but with a local network the connection is much more stable and you have more bandwidth for more requests at the same time.
I'm very thankful for opinions!
These are in fact two styles or approaches to load testing, and both are valid.
The first you might call lab testing, where you minimise the number of factors that can affect throughput/response times and really focus the test on the system itself.
The second is the more realistic scenario where you are trying to get as much coverage as possible by routing requests through as many of the actual network layers that will exist when the system goes live.
The benefit of method 1 is that you simplify the test which makes understanding and finding any problems much easier. The problem is you lack complete coverage.
The benefit of method 2 is that it is not only more realistic but also gives a higher level of confidence - especially with higher-volume tests, you might find you have a problem with a switch or firewall, and it is only with this type of testing that you identify such issues. The problem is that it can make finding any issues harder.
So, in short, you really want to do both types. You might find it easier to start with the full end-to-end test from the outside in, and then only move to a more focused test if you find that you need to isolate or investigate a problem. That way you stand a chance of reducing the amount of setup work whilst still getting the maximum benefit from testing.
Note: outside-in means just that: your test rig should be located outside the LAN (assuming this is how live traffic will flow). These days this is easy to set up using cloud-based hardware.
Also note: If the machine that you are running the tests from is the same in both cases then routing the traffic via the internet (out of your LAN and then back in again) is probably not going to tell you anything useful and could actually cause a false negative in your results (not to mention network problems for your company!)
IMHO you should use your LAN.
Practically every user will have a slightly different download/upload speed, so I suggest you first do a normal performance test using your LAN, and when you finish, do a few runs from outside just to see the difference.
Remember, you're primarily testing the efficiency of your application on the hardware it sits on. The network speed of your future users is a factor you cannot influence in any way.

Performance Testing Secured Web Site

How is the community handling performance testing of their secured web areas? We don't particularly have a public-facing web site, so users have to be logged in to be able to view data / access the system. To further complicate matters, we cannot allow users to be logged in multiple times -- if you attempt to log in a second time, your first session is invalidated. We could turn this feature off (as well as second-level caching), but then we would be testing a system which is inherently different from production.
What methodologies should we look into to stress test our application?
Our developers are pretty proficient with Java and Python.
Good question.
Normally we'd use something like Selenium to automate a web-browser talking to the web application itself. This is a system-level approach, and has several advantages:
You are measuring the performance of the client browser too
You can see (to some extent) if the site performs better or worse in different browsers
It works for techniques which do not lend themselves to "raw" HTTP tools like ApacheBench
Of course, it can take a large amount of work to create automated tests which are representative of real users' actions.
Normally you'd have some special test-system with known hardware (ideally similar to production) and a database which includes certain objects which the test suite expects to find. You could also load a production-size (or bigger) simulated data set into this system.
If you used (for example) Selenium to automate functional tests, the functional tests could be reused to build a performance-test suite. That's what we did before.
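As a minimal sketch of reusing a Selenium functional test for performance work (the URL, element IDs and credentials below are placeholders), a login step can be timed and run with a distinct test account per simulated user, which also avoids tripping the "second login invalidates the first session" rule described in the question.

```python
# Minimal Selenium sketch: a functional login step reused to time the login
# transaction. URL, element IDs and credentials are placeholder assumptions;
# each simulated user gets its own account so no existing session is invalidated.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

def timed_login(username, password):
    driver = webdriver.Firefox()
    try:
        start = time.time()
        driver.get("https://example.com/login")
        driver.find_element(By.ID, "username").send_keys(username)
        driver.find_element(By.ID, "password").send_keys(password)
        driver.find_element(By.ID, "submit").click()
        return time.time() - start      # time until the post-login page is reached
    finally:
        driver.quit()

# One distinct test account per simulated user:
for i in range(5):
    print(f"user {i}: {timed_login(f'testuser{i}', 'secret'):.2f}s")
```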

Performance testing scenarios required

What performance testing scenarios should be considered for a website with huge traffic? Is there any way to identify the parts of the code which are adversely affecting site performance?
Please provide something like a checklist of generalised scenarios to be tested to ensure proper performance testing.
It would be good to start with a load testing tool like JMeter or PushToTest and run it against your web application. JMeter simulates HTTP traffic and loads the server that way. You can do that, and also load test the AJAX parts of your application with PushToTest, because it can use Selenium scripts.
If you don't have the resources (computers to run load tests) you can always use a service like BrowserMob to run the scripts against a web accessible server.
It sounds like you need more of a test plan than a suggestion of tools to use. In performance testing, it is best to look at the users of the application -
How many will use the application on a light day? How many will use the app on a heavy day?
What type of users make up your user population?
What transactions will each of these user types perform?
Using this information, you can identify the major transactions and come up with different user levels (e.g. 10, 25, 50, 100) and percentages of user types (30% user A, 50% user B, ...) to test these transactions with. Time each of these transactions for each test you execute and examine how the transaction times change as compared to your user levels.
After gathering some metrics, you should be able to narrow transactions down to individual pieces of code, so you will know where to focus your code improvements. If you still need to narrow things down further, finer tests within each transaction can be created to provide more granular results.
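A sketch of turning that plan into a simple test matrix, with assumed user levels, an assumed user-type mix and placeholder transaction bodies, might look like this:

```python
# Test-matrix sketch: user levels, a user-type mix, and timed transactions.
# The levels, percentages and transaction bodies are illustrative placeholders.
import random
import time

USER_LEVELS = [10, 25, 50, 100]
USER_MIX = {"user_a": 0.30, "user_b": 0.50, "user_c": 0.20}   # must sum to 1.0

def run_transaction(user_type):
    start = time.time()
    # ... perform the HTTP requests that make up this user type's transaction ...
    time.sleep(random.uniform(0.1, 0.3))    # stand-in for the real work
    return time.time() - start

for level in USER_LEVELS:
    for user_type, share in USER_MIX.items():
        count = round(level * share)
        times = [run_transaction(user_type) for _ in range(count)]
        avg = sum(times) / len(times) if times else 0.0
        print(f"{level:>4} users | {user_type}: {count:>3} runs, avg {avg:.2f}s")
```

Comparing how the per-transaction averages grow as the user level rises is what points you at the transactions worth profiling further.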
Concurrency will kill you here, as you need to test your maximum projected concurrent users (plus some wiggle room) hitting the database, website, and any other web services simultaneously. It really depends on the technologies you're using, but if you have a large mix of interacting web technologies, you may want to check out NeoLoad. I've had nothing but success with this web stress tool, and the support is top notch if you need to emulate specific, complicated behavior (such as mocking AMF traffic, or using responses from web pages to dictate request behavior).
If you have a DB layer, then this should be the initial focus of your attention once the system is stable (i.e. no memory leaks or other resource issues). If the DB is not the bottleneck (or not relevant), then you need to correlate CPU, memory, disk I/O and network traffic with the increasing load and increasing response times. This gives you an idea of capacity and of correlation (but not causation) with resource usage.
To find the cause of a given resource issue, you need to establish a Six Sigma-style project where you define the problem and perform root cause analysis in order to pinpoint the piece of code (or resource configuration) that is the bottleneck. Once you have done this a couple of times in your environment, you will notice patterns of workload, resource usage and counter-measures (solutions) that will guide you in your future performance testing 'projects'.
To choose the correct performance scenarios, you need to go through the following basic checklist:
High-priority scenarios from the business logic perspective, for example login/order transactions, etc.
The scenarios most used by end users. Here you may need information from monitoring tools like New Relic, etc.
Search/filtering functionality (if applicable)
Scenarios which involve different user roles/permissions
A performance test is a comparison test, either with the previous release of the same application or with the existing players in the market.
Case 1 - Existing application
1) Carry out the test for the same scenarios as covered before to get a clear picture of the application's response before and after the upgrade.
2) If you need to dig deeper, you can go back to the database team to understand which functionalities receive the most requests. Also ask them for the average total number of requests on a typical day, so that you can decide what user load and test duration to use.
Case 2 - New application
1) Look at existing market players and design your test around the critical functions of the rival product (e.g. Gmail might support many functions, but what is used most often is launch -> login -> compose mail -> inbox -> outbox).
2) You can always go back to your clients to ask what they consider to be business-critical scenarios, or scenarios that will be used most often.
