How to start performance testing

I have taken this on as a learning exercise and have gathered some information about tools, objectives and scenarios, but I need your input. Please assist me.
I am new to performance testing and would like to test the following website: www.volkswagen.co.nz
Can you tell me what needs to be tested? What are the scenarios, and what are the activities for each scenario? What metrics do I need to collect? Which is the best free tool for testing it? And how do I test it if it is deployed in a cloud such as AWS?
Please let me know. Thanks in advance.

Performance testing needs:
Identify the critical/heavy/important scenarios in your web app (irrespective of deployment, cloud or standalone).
Identify service level agreements in terms of response times, throughput, latency, etc.
Identify the workload model, i.e. how much user load the application is expected to handle. This should be as fine-grained as possible (average users per transaction/workflow at a point in time).
Identify tools (JMeter is freeware and the best of the free options, but if you can afford a paid tool then look at LoadRunner, NeoLoad, etc.).
Record the script for the workflows, then parameterise and correlate it.
Generate the test setup for the load test and execute the load test.
Monitor system utilization and collect metrics such as response time, throughput, error rate, latency, etc.
All of this falls under load testing. For more, you can read http://www.guru99.com/performance-testing.html
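As a rough illustration of the last two steps (executing the load test and collecting response time, throughput and error rate), here is a minimal, hand-rolled Java sketch. The URL, user count and iteration count are placeholder assumptions; in practice you would let JMeter or a similar tool drive this, but the sketch shows what the metrics actually are:

```java
// Minimal load-generation sketch. The endpoint and load figures are
// assumptions for illustration only; substitute your own recorded workflow.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

public class MiniLoadTest {
    public static void main(String[] args) throws Exception {
        int users = 50;               // concurrent virtual users (assumed)
        int requestsPerUser = 20;     // iterations per user (assumed)
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://test.example.com/checkout")).build();

        AtomicLong totalMillis = new AtomicLong();
        AtomicLong errors = new AtomicLong();
        ExecutorService pool = Executors.newFixedThreadPool(users);
        long testStart = System.nanoTime();

        for (int u = 0; u < users; u++) {
            pool.submit(() -> {
                for (int i = 0; i < requestsPerUser; i++) {
                    long start = System.nanoTime();
                    try {
                        HttpResponse<Void> resp =
                                client.send(request, HttpResponse.BodyHandlers.discarding());
                        if (resp.statusCode() >= 400) errors.incrementAndGet();
                    } catch (Exception e) {
                        errors.incrementAndGet();
                    }
                    totalMillis.addAndGet((System.nanoTime() - start) / 1_000_000);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);

        long requests = (long) users * requestsPerUser;
        double elapsedSec = (System.nanoTime() - testStart) / 1_000_000_000.0;
        System.out.printf("avg response time: %d ms%n", totalMillis.get() / requests);
        System.out.printf("throughput: %.1f req/s%n", requests / elapsedSec);
        System.out.printf("error rate: %.1f%%%n", 100.0 * errors.get() / requests);
    }
}
```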

"I am new to performance testing and would like to test the following website: www.volkswagen.co.nz"
That is a recipe for disaster. No one new should be allowed to work on their own without a full period of training and internship with a master in the field. This is true of stone masons, electricians, plumbers, barbers, accountants, engineers and physicians. And it is most certainly true of performance testers/engineers.
There are dozens of foundation skills you need to master before you touch any tool, open source or otherwise. Until you show mastery of those items, along with the mechanics of your tool, you should not be allowed to test any website, particularly a production website. And if you don't work for this company, what you are engaging in is a denial of service attack, which could leave you exposed to legal liability.

I strongly agree with James on this one.
Do not touch the site if:
it's not yours;
you are not sure what you are doing;
the owner hasn't given you explicit permission (and granting it would sound irresponsible anyway);
you don't know how, or don't have the support, to restore the environment to a working state.
If you do work for the company then you need to have a test environment first, a playground where you can mess around and nobody would mind if you take it down.
Firstly, get information from the business on which use cases need to be tested.
Get response time targets for user actions.
Get utilisation targets for the environments and define your environment monitoring tactics.
Find a tool that is fit for purpose: JMeter, Gatling, etc.; lots of free ones are available.
Get a test environment, preferably of a similar scale to production.
Create scripts to cover the critical use cases.
Combine the scripts into scenarios.
Create a reporting framework.
Kick off monitoring.
Kick off the scenario.
Collect and analyse the results.
Be mindful of the free editions of load testing tools: they tend to be easy to use at first, but as soon as you start to outgrow them they can cost a fortune, and more often than not it is hard to port scripts/scenarios to another tool.
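As a small sketch of the "collect and analyse results" step above, the snippet below computes percentile response times and an error rate from a set of samples. The numbers are invented purely for illustration; in a real run they would come from your tool's result log (for example a JMeter .jtl file):

```java
// Minimal reporting sketch: percentiles and error rate from response-time
// samples. Sample values and error count are hard-coded for illustration.
import java.util.Arrays;

public class ResultReport {
    static long percentile(long[] sortedMillis, double p) {
        int idx = (int) Math.ceil(p / 100.0 * sortedMillis.length) - 1;
        return sortedMillis[Math.max(idx, 0)];
    }

    public static void main(String[] args) {
        long[] samples = {120, 135, 150, 180, 210, 250, 400, 950, 130, 145};
        int errors = 1;   // failed requests observed in the run (assumed)

        Arrays.sort(samples);
        System.out.printf("p50=%d ms, p90=%d ms, p95=%d ms%n",
                percentile(samples, 50), percentile(samples, 90), percentile(samples, 95));
        System.out.printf("error rate: %.1f%%%n", 100.0 * errors / samples.length);
    }
}
```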

Related

Performance monitoring all layers of a system

I use several load testing tools (LoadRunner, JMeter, NeoLoad) to performance test different applications. I'm wondering if it is possible to monitor all layers of an application stack. For example, say I have the following data chain.
Loadbalancer <-x-> Application Server <-x-> RMI <-x-> Java Application <-x-> MQ <-x-> Legacy application <-x-> Database
The points where I have marked an x in the chain are where I am interested in monitoring, for example, average response times.
Obviously we could simply create a wrapper on all the endpoints to gather the statistics for us, and then perhaps import them into LoadRunner or other load testing tools to sit alongside the tools' built-in performance statistics, but maybe there are tools/applications which already do this?
If not, how should we proceed in order to gather these kinds of statistics?
The standard for this was supposed to be Application Response Measurement (ARM). It was a cross language set of APIs that did just what you were looking for. The issue is that the products that implement this spec all tend to be big, expensive "enterprise" level monitoring tools. Think multi-week installs, consultants, more infrastructure and lots of buzzwords.
Still, if this is a mission critical app with a mission critical budget, this may be what you need. But you may be able to build your own that does just enough without too much effort. A quick search turns up at least one open source ARM implementation if you still want to use that API.
Another option is to simply to have transactions you can run against each tier of the system to check general responsiveness. For example you can have a static web page on the LB, a no-op tx on the app server, a "hello" servlet on the Java app, put a message directly on the queue, etc. During a performance / load test, these could be hit directly by the load testing tool or you could write a wrapper servlet / application call that does this as a single HTTP (RMI?) call. Running these a few times a minute won't add too much load to the system, but it should help you pinpoint which tier is slower. The nice thing about this approach is that it also works in production, just watch out for security issues.
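A rough sketch of that per-tier probe idea follows, assuming hypothetical probe URLs (a static page on the load balancer, a no-op transaction on the app server, a hello servlet on the Java app); a queue probe would need a JMS client rather than HTTP:

```java
// Times one request to each tier-specific probe endpoint. The URLs are
// placeholders; in practice they point at the probes you deployed per tier.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;

public class TierProbe {
    public static void main(String[] args) throws Exception {
        Map<String, String> probes = Map.of(
                "load balancer", "http://lb.example.com/static/ping.html",
                "app server",    "http://app.example.com/noop",
                "java app",      "http://app.example.com/hello");

        HttpClient client = HttpClient.newHttpClient();
        for (var probe : probes.entrySet()) {
            long start = System.nanoTime();
            int status = client.send(
                    HttpRequest.newBuilder(URI.create(probe.getValue())).build(),
                    HttpResponse.BodyHandlers.discarding()).statusCode();
            System.out.printf("%-14s %4d ms (HTTP %d)%n",
                    probe.getKey(), (System.nanoTime() - start) / 1_000_000, status);
        }
    }
}
```

Run something like this a few times a minute during the test and the per-tier timings will show which hop is slowing down.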
For a single-user kind of test, where you know you have a problem (e.g. this tx is "slow"), I have also had pretty good luck with network tracing. It's very tedious, but when you aren't sure which tier is slow, starting up a network trace on a few machines and running a single tx usually gives a good idea of what the system is doing.
I have handled this decomposition a number of ways in the past. The first is at a very low level, using protocol analyzer dumps to find the points in time where a conversation leaves tier X and enters tier Y. The second method is examination of the logs for the various tiers. Something that can make your examination quite useful in this case is a common log server for all of your components (syslog, rsyslog, etc.) and a good log parsing tool, such as the freely available Microsoft Log Parser. The third method is use of the audit trail for an application stored in the database. You may find this when working on enterprise service bus style applications, which have a consumer/producer model and a bus to pass information rather than a direct connection. The audit trails I have seen are typically stored in a database and allow the tracking of an individual transaction through the entire application infrastructure. Your load balancer, as a network device, may be out of the hunt on this one.
Note: if you go the protocol analyzer or log route, be sure to synchronize all of your source information devices to a common time server. Having one of your collectors (analyzer, app log) off on a timestamp basis can be a real hair-pulling experience when you get into the analysis phase.
As to how you move from your collected data into LoadRunner, that part is very mechanical. The Analysis program supports an interface to import external datapoints. The format is very specific and is documented in both help and the online docs. This import process works very well, as I often have to use it for collection of statistics from hosts which I do not have direct monitoring access to, but which need to be included as a part of the monitored test infrastructure.
James Pulley
Moderator (YahooGroups LoadRunner, Advanced-Loadrunner; GoogleGroups lr-LoadRunner; Linkedin LoadRunner, LoadRunnerByTheHour; SQAForums LoadRunner, WinRunner)

How many users can a single small Windows Azure web role support?

We are creating a simple website, but with heavy user logins (about 25,000 concurrent users). How can I calculate the number of instances required to support this?
Load testing and performance testing are really the only way you're going to figure out the performance metrics and instance requirements of your app. You'll need to define "concurrent users" - does that mean 25,000 concurrent transactions, or does that simply mean 25,000 active sessions? And if the latter, how frequently does a user visit web pages (e.g. think-time between pages)? Then, there's all the other moving parts: databases, Azure storage, external web services, intra-role communication, etc. All these steps in your processing pipeline could be a bottleneck.
Don't forget SLA: Assuming you CAN support 25,000 concurrent sessions (not transactions per second), what's an acceptable round-trip time? Two seconds? Five?
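As a back-of-the-envelope sketch (not a substitute for load testing), Little's Law turns concurrent sessions, think time and response time into a request rate, which you can then divide by the measured capacity of one instance. Every number below is an assumption for illustration:

```java
// Back-of-the-envelope instance estimate using Little's Law.
// All inputs are assumptions; per-instance capacity must be measured.
public class InstanceEstimate {
    public static void main(String[] args) {
        int concurrentSessions = 25_000;
        double thinkTimeSec = 30.0;          // assumed pause between page views
        double responseTimeSec = 2.0;        // the SLA round-trip target
        double reqPerInstancePerSec = 60.0;  // measured by load testing one instance

        // Little's Law: throughput = concurrency / (think time + response time)
        double requestsPerSec = concurrentSessions / (thinkTimeSec + responseTimeSec);
        int instances = (int) Math.ceil(requestsPerSec / reqPerInstancePerSec);

        System.out.printf("~%.0f req/s overall -> ~%d instances (plus headroom)%n",
                requestsPerSec, instances);
    }
}
```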
When thinking about instance count, you also need to consider VM size in your equation. Depending again on your processing pipeline, you might need a medium or large VM to support specific memory requirements, for instance. You might get completely different results when testing different VM sizes.
You need to have a way of performing empirical tests that are repeatable and remove edge-case errors (for instance: running tests a minimum of 3 times to get an average; and methodically ramping up load in a well-defined way and observing results while under that load for a set amount of time to allow for the chaotic behavior of adding load to stabilize). This empirical testing includes well-crafted test plans (e.g. what pages the users will hit for given usage scenarios, including possible form data). And you'll need the proper tools for monitoring the systems under test to determine when a given load creates a "knee in the curve" (meaning you've hit a bottleneck and your performance plummets).
Final thought: Be sure your load-generation tool is not the bottleneck during the test! You might want to look into using Microsoft's load-testing solution with Visual Studio, or a cloud-based load-test solution such as Loadstorm (disclaimer: Loadstorm interviewed me about load/performance testing last year, but I don't work for them in any capacity).
EDIT June 21, 2013 Announced at TechEd 2013, Team Foundation Service will offer cloud-based load-testing, with the Preview launching June 26, coincident with the //build conference. The announcement is here.
No one can answer this question without a lot more information... like what technology you're using to build the website, what happens on a page load, what backend storage is used (if any), etc. It could be that for each user who logs on, you compute a million digits of pi, or it could be that for each user you serve up static content from a cache.
The best advice I have is to test your application (either in the cloud or equivalent hardware) and see how it performs.
It all depends on the architecture design, persistence technology and number of read/write operations you are performing per second (average/peak).
I would recommend to look into CQRS-based architectures for this kind of application. It fits cloud computing environments and allows for elastic scaling.
I was recently at a Cloud Summit and there were a few case studies. The one that sticks in my mind is an exam app. It has a burst load of about 80,000 users over 2 hours, for which they spin up about 300 instances.
Without knowing your load profile it's hard to add more value; just keep in mind that concurrent and continuous are not the same thing. Remember the Stack Overflow versus Digg debacle (http://twitter.com/#!/spolsky/status/27244766467)?

Performance testing an application for bottlenecks using production data

I have been tasked with looking for a performance testing solution for one of our Java applications running on a Weblogic server. The requirement is to record production requests (both GET and POST including POST data) and then run these requests in a performance test environment with a copy of the production database.
The reasons for using production requests instead of a test script are:
It is a large application with no existing test scripts, so it would be a large amount of work to write scripts to cover the entire application.
Some performance issues only appear when users do a number of actions in a particular order.
To test using actual user interaction with the system, not an estimate of how users may interact with the system. We all know that users will do things we have not thought of.
I want to be able to fix performance issues and rerun the requests against the fixed code before releasing to production.
I have looked at using JMeter's Access Log Sampler with the server access logs; however, the access logs do not contain POST data, and the Access Log Sampler only looks at the request URL, so it cannot simulate users submitting form data.
I have also looked at using the JMeter HTTP Proxy Server; however, this can record the actions of only one user and requires the user to configure their browser to use the proxy. The same limitation exists with Tsung and The Grinder.
I have looked at using Wireshark and tcpreplay, but recording at the packet level is excessive and will not give any useful reports at the request level.
Is there a better way to analyze production performance considering I need to be able to test fixes before releasing to production?
That is going to be a hard ask. I work with Visual Studio Test Edition to load test my applications, and we are only able to "estimate" the users' activity on the site.
It is possible to look at the logs and gather information on the likelihood of certain paths through your app. You can then examine the production database for the likely values entered in any POST requests. From that you will have to build load tests that approximate the usage patterns of your production site.
With any current tools I don't think it is possible to record and play back actual user interaction.
It is possible to alter your web app so that it records and logs every request and POST against the session and a timestamp. This custom logging could then be used to generate load test requests against a test website. This would be a serious code change to your existing site and would likely have performance impacts.
That said, I have worked with web apps that do this level of logging and the ability to analyse the exact series of page posts/requests that caused an error is quite valuable to a developer.
So in summary: It is possible, but I have not heard of any off the shelf tools that do it.
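If you do go down the custom-logging route described above, a minimal sketch might be a servlet filter (javax.servlet API assumed) that records each request against the session and a timestamp; the log format, and whether to capture POST bodies, are choices you would have to make for your application:

```java
// Sketch of a request-audit filter. Writes one line per request with
// timestamp, session id, method and URI. Illustrative only.
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import java.io.IOException;
import java.time.Instant;

public class RequestAuditFilter implements Filter {
    @Override public void init(FilterConfig cfg) {}
    @Override public void destroy() {}

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest http = (HttpServletRequest) req;
        String session = http.getSession(true).getId();
        String query = http.getQueryString() == null ? "" : "?" + http.getQueryString();
        // In a real implementation, write asynchronously to limit overhead;
        // capturing POST bodies requires wrapping the request so the stream can be re-read.
        System.out.printf("%s %s %s %s%s%n",
                Instant.now(), session, http.getMethod(), http.getRequestURI(), query);
        chain.doFilter(req, res);
    }
}
```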
Please check out this whitepaper by Impetus Technologies on this page: http://www.impetus.com/plabs/sandstorm.html
Honestly, I'm not sure the task you're being asked to do is even possible, let alone a good idea. Depending on how complex the application's backend is, and how perfectly you can recreate the state (i.e. all the way down to external SOA services or the time/clock), it may not be possible to make those GET and POST requests reproduce the same behavior.
That said, performance testing against production data is always great, but it usually requires application-specific knowledge that will stress said data. Simply repeating HTTP GETs and POSTs will almost certainly not yield useful results.
Good luck!
I would suggest the following to capture the production requests and simulate an accurate workload:
1) Use Coremetrics: Coremetrics provides solutions that let you understand the application's usage patterns. This helps in coming up with an accurate workload model. That model can then be converted into test scripts and executed against a masked copy of the production database. This will give you accurate, realistic results about the application's performance.
2) Another option would be to create a small utility using AOP (aspect-oriented programming) to trace all the requests and the corresponding method calls. This helps in identifying the production usage pattern and, in turn, in simulating the workload accurately. AOP frameworks such as AspectJ can be used. This does not require any changes to the code; the instrumentation can be woven in on the fly. Another benefit is that it can be enabled only for a specific time window and then turned off.
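A minimal sketch of that AOP idea, using AspectJ's annotation style; the package name com.example.app is a placeholder for your application's packages, and in practice you would write to a log file rather than stdout:

```java
// AspectJ (annotation style) aspect that times public methods in the
// application's packages. com.example.app is a placeholder package name.
import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class RequestTraceAspect {

    @Around("execution(public * com.example.app..*.*(..))")
    public Object trace(ProceedingJoinPoint pjp) throws Throwable {
        long start = System.nanoTime();
        try {
            return pjp.proceed();
        } finally {
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("%s took %d ms%n", pjp.getSignature().toShortString(), ms);
        }
    }
}
```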
Regards,
batterywalam

Performance testing scenarios required

What can be the various performance testing scenarios to be considered for a website with huge traffic? Is there any way to identify the elements of the code which are adversely affecting the site performance?
Please provide something similar to a checklist of generalised scenarios to be tested to ensure proper performance testing.
It would be good to start with a load testing tool like JMeter or PushToTest and start running it against your web application. JMeter simulates HTTP traffic and loads the server that way. You can do that, and also load test the AJAX parts of your application with PushToTest, because it can use Selenium scripts.
If you don't have the resources (computers to run load tests) you can always use a service like BrowserMob to run the scripts against a web accessible server.
It sounds like you need more of a test plan than a suggestion of tools to use. In performance testing, it is best to look at the users of the application -
How many will use the application on a light day? How many will use the app on a heavy day?
What type of users make up your user population?
What transactions will each of these user types perform?
Using this information, you can identify the major transactions and come up with different user levels (e.g. 10, 25, 50, 100) and percentages of user types (30% user A, 50% user B, ...) to test these transactions with. Time each of these transactions for each test you execute and examine how the transaction times change as compared to your user levels.
After gathering some metrics, you should be able to narrow transactions down to individual pieces of code and know where to focus your code improvements. If you still need to narrow things down further, finer tests within each transaction can be created to provide more granular results.
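As a tiny illustration of turning user levels and a user-type mix into concrete virtual-user counts, the sketch below uses made-up transaction names and percentages:

```java
// Turns a workload mix (percentages per transaction type) into virtual-user
// counts for each planned user level. All values are illustrative assumptions.
import java.util.Map;

public class WorkloadMix {
    public static void main(String[] args) {
        int[] userLevels = {10, 25, 50, 100};
        Map<String, Double> mix = Map.of(
                "browse catalogue", 0.50,
                "search",           0.30,
                "checkout",         0.20);

        for (int level : userLevels) {
            System.out.println("-- " + level + " users --");
            mix.forEach((tx, share) ->
                    System.out.printf("  %-17s %d virtual users%n", tx, Math.round(level * share)));
        }
    }
}
```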
Concurrency will kill you here, as you need to test your maximum projected concurrent users (plus wiggle room) hitting the database, website, and any other web services simultaneously. It really depends on the technologies you're using, but if you have a large mix of different web technologies, you may want to check out NeoLoad. I've had nothing but success with this web stress tool, and the support is top notch if you need to emulate specific, complicated behavior (such as mocking AMF traffic, or using responses from web pages to dictate request behavior).
If you have a DB layer then this should be the initial focus of your attention, once the system is stable (i.e. no memory leaks or other resource issues). If the DB is not the bottleneck (or not relevant) then you need to correlate CPU, memory, disk I/O and network traffic with the increasing load and increasing response times. This gives you an idea of capacity and of correlation (but not causation) with resource usage.
To find the cause of a given resource issue you need to establish a Six Sigma style project where you define the problem and perform root cause analysis in order to pinpoint the piece of code (or resource configuration) that is the bottleneck. Once you have done this a couple of times in your environment, you will notice patterns of workload, resource usage and countermeasures (solutions) that will guide you in your future performance testing 'projects'.
To choose the correct performance scenarios you need to go through this basic checklist:
High-priority scenarios from the business logic perspective, for example login/order transactions, etc.
The scenarios most used by end users. Here you may need information from monitoring tools like New Relic, etc.
Search/filtering functionality (if applicable).
Scenarios which involve different user roles/permissions.
A performance test is a comparison test, either against the previous release of the same application or against the existing players in the market.
Case 1 - Existing application
1) Carry out the test for the same scenarios as covered before, to get a clear picture of the application's response before and after the upgrade.
2) If you need to dig deeper, you can go back to the database team to understand which functionalities receive the most requests. Also ask them for the total number of requests on an average day, so that you can decide what user load and test duration to use.
Case 2 - New application
1) Look at the existing market players and design your test around the critical functions of the rival product (e.g. Gmail supports many functions, but what is used most often is launch -> login -> compose mail -> inbox -> outbox).
2) You can always go back to your clients and ask what they consider to be the business-critical scenarios, or the scenarios that will be used most often.

Running tests on Production Code/Server

I'm relatively inexperienced when it comes to Unit Testing/Automated Testing, so pardon the question if it doesn't make any sense.
The current code base I'm working on is so tightly coupled that I'll need to refactor most of the code before ever being able to run unit tests on it, so I read some posts and discovered Selenium, which I think is a really cool program.
My client would like specific automated tests to run every ten minutes on our production server to ensure that our site is operational, and that certain features/aspects are running normally.
I've never really thought to run tests against a production server because you're adding additional stress to the site. I always thought you would run all tests against a staging server, and if those work, you can just assume the production site is operational as long as the hosting provider doesn't experience an issue.
Any thoughts on your end on testing production code on the actual production server?
Thanks a lot guys!
Maybe it would help if you thought of the selenium scripts as "monitoring" instead of "testing"? I would hope that every major web site out there has some kind of monitoring going on, even if it's just a periodic PING, or loading the homepage every so often. While it is possible to take this way too far, don't be afraid of the concept in general. So what might some of the benefits of this monitoring/testing to you and your client?
Somehow even the best testing in the world can't predict the odd things users will do, either intentionally or by sheer force of numbers (if a million monkeys on typewriters can write Hamlet, imagine what a few hundred click-happy users can do). Pinging a site can tell you if it's up, but not if a table is corrupted and a report is now failing, all because a user typed in a value with an umlaut in it.
While your site might perform great on the staging servers, maybe it will start to degrade over time. If you are monitoring the performance of those Selenium tests, you can stay ahead of slowness complaints. Of course, as you mentioned, be sure your monitoring isn't causing problems either! You may have to convince your client that certain tests are appropriate to run every X minutes, and others should only be run once a day, at 3am.
If you end up making an emergency change to the live site, you'll be more confident knowing that tests are running to make sure everything is ok.
I have worked on similar production servers for a long time. From my experience, I can say that it is always better to test changes/patches in the staging environment and then simply deploy them to the production servers. This is because the staging and production environments are alike, except for the volume of data.
If really required, it would be OK to run a few tests on the production servers once the code/patch is installed. But always running the tests on the production server is not a recommended or good practice.
My suggestion would be to shadow the production database down to a staging/test environment on a nightly basis and run your unit tests there nightly. The approach suggested by the client would be good for making sure that new data introduced into the system doesn't cause exceptions, but I do not agree with doing this in production.
Running it in a staging environment would give you the ability to evaluate features as new data flows into the system without using the production environment as a test bed.
[edit] To make sure the site is up, you could write a simple program which pings it every 10 minutes rather than running your whole test suite against it.
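A minimal sketch of such a pinger, assuming a placeholder URL and a simple console alert; a real monitor would notify someone and probably check more than the HTTP status code:

```java
// Pings a URL every 10 minutes and reports failures. URL and the alert
// action (console output) are placeholders for illustration.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class UptimePinger {
    public static void main(String[] args) {
        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(10)).build();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://www.example.com/")).build();

        Executors.newSingleThreadScheduledExecutor().scheduleAtFixedRate(() -> {
            try {
                int status = client.send(request, HttpResponse.BodyHandlers.discarding())
                        .statusCode();
                if (status >= 400) {
                    System.err.println("ALERT: site returned HTTP " + status);
                }
            } catch (Exception e) {
                System.err.println("ALERT: site unreachable: " + e.getMessage());
            }
        }, 0, 10, TimeUnit.MINUTES);
    }
}
```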
What will change in the production environment that would require you to run automated tests? I understand that you may need monitoring and alerts to make sure the servers are up and running.
Whatever the choice, whether it be a monitoring or testing type solution, the thing that you should be doing first and foremost for your client is warning them. As you have alluded to, testing in production is almost never a good idea. Once they are aware of the dangers and if there are no other logical choices, carefully construct very minimal tests. Apply them in layers and monitor them religiously to make sure that they aren't causing any problems to the app.
I agree with Peter that this sounds more like monitoring than testing. A small distinction but an important one I think. If the client's requirements relate to Service Level Agreements then their requests do not sound too outlandish.
Also, it may not be safe to assume that if the service provider is not experiencing any issues that the site is functioning properly. What if the site becomes swamped with requests? Or perhaps SQL that runs fine in test starts causing issues (timeouts, blocking etc.) with a larger production database?

Resources