How to proceed with performance testing on an application which is not in production?

How should I proceed with performance testing for an application that is not yet in production?

If the environment of the application under test is the same as it will be in production, the approach should be no different.
If the environment of the application under test is different, i.e. it has fewer servers, the servers have less memory, etc., then in the absolute majority of cases you will not be able to calculate and predict the performance on more powerful hardware, as there are too many factors to consider. You can inform the client about this right away.
However, there are still some types of testing you can perform, for example:
You could run an integration test to check whether your application is configured for high load
You could run a scalability test to check whether, and how, your application scales
You could run a soak test to check for possible memory leaks (a minimal sketch follows below)
You could use profiling tools to check for bottlenecks in terms of code quality
You could identify slow DB queries and look for ways to optimise them
etc.
More information: Performance Testing in a Scaled Down Environment. Part Two: 5 Things You Can Test
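To make the soak-test idea concrete, below is a minimal plain-Java sketch of the load side (normally JMeter would drive this via a long-running test plan). The endpoint http://localhost:8080/health is a made-up placeholder; the point is simply to sustain a steady request rate for hours while the server's memory is watched with a monitoring or profiling tool:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Minimal soak-load sketch: sustain a modest request rate for a long time
    // while the server's memory consumption is observed externally.
    public class SoakSketch {
        public static void main(String[] args) throws Exception {
            URI target = URI.create("http://localhost:8080/health"); // hypothetical endpoint
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(target).GET().build();

            for (int i = 0; i < 100_000; i++) {       // a real soak test runs for hours
                HttpResponse<Void> response =
                        client.send(request, HttpResponse.BodyHandlers.discarding());
                if (response.statusCode() >= 400) {
                    System.err.println("iteration " + i + ": HTTP " + response.statusCode());
                }
                Thread.sleep(100);                    // pacing: roughly 10 requests/second
            }
        }
    }

If the server's heap usage after garbage collections keeps climbing while this runs, a memory leak is the likely culprit.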

If you meant how to judge how much volume to test when the application is not yet in production, the answer is simple: you have to predict. The prediction can be based on surveys or reports from your business analysts.
If none of the above is available, just test how much load your application can withstand in a test environment with a similar configuration to production. This will give you an idea of when you need to start worrying once the application is live.
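When such a forecast does exist, turning it into a load target is simple arithmetic. Here is a back-of-the-envelope sketch using Little's Law; every number in it is a made-up assumption to be replaced with your analysts' figures:

    // Little's Law: concurrent users = arrival rate x average session duration.
    public class LoadForecast {
        public static void main(String[] args) {
            double sessionsPerHour = 6_000;      // assumed peak-hour sessions
            double avgSessionSeconds = 180;      // assumed average visit length

            double arrivalRatePerSecond = sessionsPerHour / 3600.0;
            double concurrentUsers = arrivalRatePerSecond * avgSessionSeconds;

            System.out.printf("Arrival rate: %.2f sessions/s%n", arrivalRatePerSecond);
            System.out.printf("Target concurrency: ~%.0f users%n", concurrentUsers);
            // 6000/h is about 1.67/s; 1.67 x 180 s gives ~300 concurrent users
        }
    }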

Related

Performance testing environment

How do you begin setting up a load environment for an enterprise application (a traditional MVC application with connectivity to some dependent enterprise systems)? For example, it would be ideal to have a similar number of servers with the same configuration as the production environment, and the database should have a similar size and capacity to production, to make the load environment comparable to production.
This does not happen in many organizations, and I have seen organizations using a trimmed-down version of the production infrastructure for load testing. Does that seem to be the correct approach? Can we run a load test on the trimmed-down version of the prod infra? Will this approach produce results that can be used to predict production application performance?
If you need to measure how many users / requests per second your application can support, the only way of doing it is running your test against a production or production-like environment.
There are some things you could check against a scaled-down environment, for example:
Running a soak test; this way you can discover memory leaks
Running a load test with profiler telemetry enabled on the application under test side; this way you will identify the slowest functions, largest objects, etc.
Running a database load test; this way you can find out which slow queries are subject to optimization (a rough sketch follows below)
More information: Performance Testing in a Scaled Down Environment. Part Two: 5 Things You Can Test
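As a rough illustration of the database angle, here is a crude query-timing harness; the JDBC URL, credentials and queries are placeholders (and a suitable JDBC driver must be on the classpath). Once the slow queries are identified this way, EXPLAIN plans and indexing are the usual next step:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Time each candidate query a few times against the scaled-down database
    // and report the best run, so the slowest statements stand out.
    public class SlowQueryCheck {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:postgresql://localhost:5432/appdb"; // hypothetical DB
            String[] queries = {
                    "SELECT count(*) FROM orders",
                    "SELECT * FROM orders o JOIN customers c ON o.customer_id = c.id LIMIT 100"
            };

            try (Connection conn = DriverManager.getConnection(url, "app", "secret");
                 Statement stmt = conn.createStatement()) {
                for (String sql : queries) {
                    long bestMs = Long.MAX_VALUE;
                    for (int run = 0; run < 5; run++) {   // repeat to smooth out noise
                        long start = System.nanoTime();
                        try (ResultSet rs = stmt.executeQuery(sql)) {
                            while (rs.next()) { /* drain the result set */ }
                        }
                        bestMs = Math.min(bestMs, (System.nanoTime() - start) / 1_000_000);
                    }
                    System.out.printf("%6d ms  %s%n", bestMs, sql);
                }
            }
        }
    }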

Simulation of a DoS attack with Apache JMeter

I want to use the Apache JMeter tool in order to run an HTTP flood against a website that I have designed in Visual Studio 2017. My question is: if I start running the website and at the same time start the HTTP flood, is it "safe" for my PC? I mean, is it possible to cause damage because of the resource consumption? Or does it depend on the number of threads I use in the JMeter script parameters?
Thank you.
From the hardware perspective I don't think you will be able to damage your machine, as most probably it will throttle or simply switch off when temperatures exceed the acceptable threshold.
From the test results point of view you might get inaccurate metrics, especially if you run JMeter on the same machine, because under high load both JMeter and your website become very resource intensive and will "fight" for resources like CPU, RAM, etc.
So I would recommend deploying your website on a prod-like environment and using separate JMeter load generator(s) to conduct the load. This way you can be confident that the test results are not impacted by mutual interference.
If you cannot set up a proper load testing environment, your test scenario unfortunately will not make a lot of sense. There are still some areas you can test on a scaled-down environment from a performance perspective, such as:
integration test
soak test
interoperability test
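On the mutual-interference point above: a cheap sanity check is to watch the load generator machine itself during the test, because if its CPU saturates, the results describe JMeter's limits rather than the website's. A rough sketch (assumes a HotSpot JVM on JDK 14 or newer; run it alongside the test):

    import java.lang.management.ManagementFactory;
    import com.sun.management.OperatingSystemMXBean;

    // Print whole-machine CPU and free RAM every 5 seconds; sustained CPU
    // near 100% on the generator means the test results cannot be trusted.
    public class GeneratorMonitor {
        public static void main(String[] args) throws InterruptedException {
            OperatingSystemMXBean os = (OperatingSystemMXBean)
                    ManagementFactory.getOperatingSystemMXBean();
            while (true) {
                double cpu = os.getCpuLoad() * 100;   // 0-100%; may be negative on first read
                long freeMb = os.getFreeMemorySize() / (1024 * 1024);
                System.out.printf("cpu %.0f%%  free RAM %d MB%n", cpu, freeMb);
                Thread.sleep(5000);
            }
        }
    }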

Performance testing result analysis using JMeter

I need to carry out performance testing for my project. I have learned how to handle JMeter for performance testing from online materials, but I was still unable to find out how to analyze the results from the report. Since I don't know how to analyze the results, I am not able to find the performance issues in my application or where the errors occurred, and therefore how to improve its performance. Is there any article or video tutorial that teaches how to analyze the results?
There are 2 possible test outcomes:
Positive: the performance of your application matches the SLA and/or NFR. In this case you need to provide the report as proof; it might be, for example, the HTML Reporting Dashboard
Negative: the performance of your application is below expectations. In this case you need to investigate the reasons, which could be:
Simply a lack of resources, i.e. your application doesn't have enough headroom to operate due to insufficient CPU, RAM, network or disk bandwidth. So make sure you are monitoring these resources on the application under test side; you can do it using, for example, the JMeter PerfMon Plugin.
The same, but on the JMeter load generator(s) side. If JMeter cannot send requests fast enough, the application won't be given more requests to serve, so if the JMeter machine doesn't have enough resources the outcome will look the same; make sure to monitor the same metrics on the JMeter host(s) as well.
A software configuration problem (e.g. the application server thread pool, DB connection pool, memory settings, caching settings, etc. are not optimal). In the majority of cases the default configuration of web, application and database servers is not suitable for high loads and needs to be tuned, so try playing with different settings and observe the impact.
Your application code is not optimal. When there are plenty of free hardware resources and you are sure that the infrastructure is properly set up (i.e. other applications behave fine), it might be a problem with the code of the application under test. In this case you will need to re-run your test with profiler telemetry enabled to see which methods consume the most time and resources and how they can be optimised.
It might also be a networking-related problem, e.g. a faulty router or a bad cable.
There are too many possible reasons to enumerate; however, the approach should be the same: the whole system runs at the speed of its slowest component, so you need to identify this component and determine why it is slow. See the Understanding Your Reports post series to learn how to read JMeter load test results and identify the bottlenecks from them.
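To make "analyze the result" concrete, here is a minimal sketch that reads a JMeter CSV results file and derives the error rate plus the high percentiles you would check against your SLA. The file name results.jtl and the default column layout are assumptions, and the naive comma split only works for simple result files:

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.Arrays;
    import java.util.Collections;
    import java.util.List;

    // Summarise a JMeter CSV results file: error rate and p90/p95 latencies.
    public class JtlSummary {
        public static void main(String[] args) throws Exception {
            List<String> lines = Files.readAllLines(Path.of("results.jtl"));
            List<String> header = Arrays.asList(lines.get(0).split(","));
            int elapsedCol = header.indexOf("elapsed");   // response time in ms
            int successCol = header.indexOf("success");   // "true" / "false"

            List<Long> elapsed = new ArrayList<>();
            long errors = 0;
            for (String line : lines.subList(1, lines.size())) {
                String[] f = line.split(",");             // naive: no quoted commas
                elapsed.add(Long.parseLong(f[elapsedCol]));
                if (!"true".equals(f[successCol])) errors++;
            }
            Collections.sort(elapsed);

            int n = elapsed.size();
            System.out.printf("samples: %d, error rate: %.2f%%%n", n, 100.0 * errors / n);
            System.out.printf("p90: %d ms, p95: %d ms%n",
                    elapsed.get((int) (n * 0.90)), elapsed.get((int) (n * 0.95)));
        }
    }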

How to start performance testing

I have taken this as an example for learning and have gathered some information about tools, objectives and scenarios, but I need your inputs. Please assist me.
I am new to Performance testing and would like to test the following website www.volkswagen.co.nz
Can you tell me what needs to be tested? What are the scenarios and the activities for each scenario? What metrics do I need to add? Which is the best free tool for testing it? How do I test it if it is deployed in a cloud like AWS?
Please let me know, Thanks in advance.
For performance testing you need to:
Identify the critical/heavy/important scenarios in your web app (irrespective of deployment, cloud or standalone)
Identify service level agreements in terms of response times, throughput, latency, etc.
Identify the workload model, i.e. how much user load the application is expecting; this should be as fine-grained as possible (average users per transaction/workflow at a point in time)
Identify tools (JMeter is free and among the best, but if you can afford a paid tool then look at LoadRunner, NeoLoad, etc.)
Record the script for the workflows, then parameterise and correlate it (a correlation sketch follows this answer)
Generate the test setup for the load test and execute the load test
Monitor system utilisation and collect metrics like response time, throughput, error rate and latency
This all comes under load testing. For more you can read http://www.guru99.com/performance-testing.html
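To illustrate the "parameterise and correlate" step outside the JMeter GUI: a dynamic value returned by one request has to be extracted and fed into the next request, which is what JMeter's Regular Expression Extractor does inside a recorded script. The URL and the csrf form field below are invented for the sketch:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Correlation sketch: capture a per-session token from one response and
    // replay it in the following request.
    public class CorrelationSketch {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();

            // Step 1: load the form page and capture the token embedded in it
            HttpResponse<String> page = client.send(
                    HttpRequest.newBuilder(URI.create("http://localhost:8080/login")).build(),
                    HttpResponse.BodyHandlers.ofString());
            Matcher m = Pattern.compile("name=\"csrf\" value=\"([^\"]+)\"")
                    .matcher(page.body());
            if (!m.find()) throw new IllegalStateException("token not found: correlation broken");
            String token = m.group(1);

            // Step 2: submit the next step with the freshly correlated token
            HttpResponse<String> result = client.send(
                    HttpRequest.newBuilder(URI.create("http://localhost:8080/login"))
                            .header("Content-Type", "application/x-www-form-urlencoded")
                            .POST(HttpRequest.BodyPublishers.ofString(
                                    "user=demo&pass=demo&csrf=" + token))
                            .build(),
                    HttpResponse.BodyHandlers.ofString());
            System.out.println("login step returned HTTP " + result.statusCode());
        }
    }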
I am new to Performance testing and would like to test the following website www.volkswagen.co.nz
That is a recipe for disaster. No one new should be allowed to work on their own without a full period of training and internship with a master in the field. This is true of stone masons, electricians, plumbers, barbers, accountants, engineers and physicians. And it is most certainly true of performance testers/engineers.
There are dozens of foundation skills you need to master before you touch any tool, open source or otherwise. Until you show mastery of those items, along with the mechanics of your tool, you should not be allowed to test any website, particularly a production website. And if you don't work for this company, what you are engaging in is a denial-of-service attack, which could leave you exposed to legal liability.
I strongly agree with James on this one.
Do not touch the site if:
it's not yours
you are not sure what you are doing
the owner hasn't given you explicit (and, frankly, irresponsible) permission
you don't know how, or don't have the support, to restore the environment to a working state
If you do work for the company, then you need a test environment first: a playground where you can mess around and nobody minds if you take it down.
Firstly, get information from the business on which use cases need to be tested.
Get response time targets for user actions.
Get utilisation targets for the environments and define environment monitoring tactics.
Find a tool that is fit for purpose: JMeter, Gatling, etc.; lots of free ones are available.
Get a test environment, preferably of a similar scale to production
Create scripts to cover critical use cases
Compile scripts into scenarios
Create a reporting framework (an Apdex sketch follows this answer)
Kick off monitoring
Kick off scenario
Collect and analyse results
Be mindful of the free editions of load testing tools: they tend to be easy to use at first, but as soon as you start to outgrow them they can cost a fortune, and more often than not it's hard to port scripts/scenarios to another tool.
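As one concrete building block for such a reporting framework, here is a sketch of the Apdex score that JMeter's HTML dashboard also reports; the threshold and the sample data are made up:

    // Apdex with threshold T: samples <= T are "satisfied", samples <= 4T are
    // "tolerating", the rest are "frustrated".
    // Apdex = (satisfied + tolerating / 2) / total, between 0 and 1.
    public class ApdexSketch {
        public static double apdex(long[] responseTimesMs, long thresholdMs) {
            double score = 0;
            for (long t : responseTimesMs) {
                if (t <= thresholdMs) score += 1.0;           // satisfied
                else if (t <= 4 * thresholdMs) score += 0.5;  // tolerating
            }                                                 // frustrated adds 0
            return score / responseTimesMs.length;
        }

        public static void main(String[] args) {
            long[] samples = {120, 250, 480, 900, 2300, 310, 150, 5100}; // dummy data
            System.out.printf("Apdex(T=500ms) = %.2f%n", apdex(samples, 500));
            // 5 satisfied + 1 tolerating out of 8 samples gives (5 + 0.5) / 8 = 0.69
        }
    }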

Performance test against Web Service

I am doing performance testing against our site. The site exposes many web service APIs. Our product has several workflows, and each complete workflow is composed of calls to more than one API.
I am wondering which of the following approaches is reasonable:
Test against each single API in a standalone way.
Test against each complete workflow which involves several APIs.
I think the latter makes more sense since it mimics the real scenario.
I'd like to hear your comments.
Thanks.
There are (as usual) pros and cons to each approach. Personally I'd consider doing both, starting with the single-API test.
The single-API test is probably the easiest to build, and it has the benefit of pinpointing exactly where your performance issues are. It's also useful for spotting regressions in performance during development. If there are unit tests for the application, consider using those instead: when there is a performance regression, there usually also is a unit test which suddenly became slower.
Once you've done that you will still need to do the more complex tests, firstly because you need to know whether the performance of a certain flow is acceptable, but also because there can be unexpected interactions between different APIs. Depending on your application there may be nasty concurrency issues, throughput bottlenecks, etc. Make sure you run several flows concurrently; that's what happens in real life, and it's the only way to find issues related to locking in the database, I/O bottlenecks and the like (a sketch follows below).
But before you start, make sure you have a realistic idea of what the performance should be, how many concurrent users there will be, and what the hardware requirements are. There is no limit to improving performance, so you have to decide what is good enough, or you will never stop optimizing.
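As a sketch of the "several flows concurrently" advice: each task runs one complete workflow (the chained API calls) and many run in parallel, so database locking and I/O contention actually get exercised. runWorkflow() below is a stand-in for your real sequence of API calls, and the user and iteration counts are arbitrary:

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicLong;

    // Run 50 simulated users, each repeating the whole workflow 200 times.
    public class ConcurrentFlows {
        static final AtomicLong failures = new AtomicLong();

        static void runWorkflow(int user) {
            try {
                // call API 1, use its output to call API 2, then API 3 ...
                // (replace with real client calls; sleep stands in for the work)
                Thread.sleep(50);
            } catch (Exception e) {
                failures.incrementAndGet();
            }
        }

        public static void main(String[] args) throws InterruptedException {
            ExecutorService pool = Executors.newFixedThreadPool(50);
            for (int user = 0; user < 50; user++) {
                final int id = user;
                pool.submit(() -> {
                    for (int i = 0; i < 200; i++) runWorkflow(id);
                });
            }
            pool.shutdown();
            pool.awaitTermination(10, TimeUnit.MINUTES);
            System.out.println("failed workflows: " + failures.get());
        }
    }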
