I have a test automation suite that uses cucumber-appium-ruby which I run on AWS device farm.
In order to improve performance (and avoid the 150-minute limit of use per device), I would like to be able to run my tests in parallel (meaning different tests of the suite running on different devices).
I have been looking for a solution to this, or whether it is even possible, and the only actual solution I have found is to run a different section of my suite per run. This is not ideal, since it is not real test sharding, and I would like to know whether it is possible to implement or whether someone has done it.
I need to performance test a few APIs. Could someone please suggest which of the methods below would be the better way of conducting the tests? (I am trying to understand the pros and cons of the various ways to run JMeter tests.)
Create Ultimate Concurrency Thread Groups using only JMeter.
Buy BlazeMeter, add the JMeter scripts, create a scenario, and run the tests.
Run the JMeter tests using the Maven plugin as part of the build.
Testing the performance of an API can be achieved by any of the three approaches mentioned above, which fall into two camps:
Open source: standalone JMeter/Taurus
Cloud-based solutions: OctoPerf/BlazeMeter
If you are looking for an open-source tool, you already have your load generator (LG) machines allocated, and you are not going to test with a massive number of users, I would suggest going with standalone JMeter/Taurus for your load-testing solution.
If you are looking for a hassle-free environment to manage all your tests, LGs, reports, monitoring, etc., then cloud-based solutions like BlazeMeter/OctoPerf seem promising. This will cost you money and requires a port to be open for the solution to communicate with the application under test (AUT).
For scripting: you can use the same JMeter scripts in all three cases. Choosing a thread group depends on what kind of test you want to perform on your microservices.
In any case you will be using JMeter, so options 1 and 3 don't really differ: the pro is that you don't have to pay; the con is that you need to manage your infrastructure yourself.
BlazeMeter provides an easy way of launching a JMeter test, so you don't have to worry about creating and configuring machines, installing JMeter, setting up distributed testing, real-time monitoring, or results comparison.
The JMeter Maven Plugin makes the following processes easier:
installing JMeter
generating the reporting dashboard
returning a non-zero exit status code when acceptance criteria are not met
How to proceed with performance testing for an application that is not in production?
If the environment of the application under test is the same as it will be in production, the approach should not be different.
If the environment of the application under test is different, i.e. it has fewer servers, the servers have less memory, etc., then in the vast majority of cases you will not be able to calculate or predict the performance on more powerful hardware, as there are too many factors to consider. You can inform the client about this right away.
However there are still some types of testing you can perform, i.e.:
You could run an integration test to check whether your application is configured for high load
You could run a scalability test to check whether and how your application scales
You could run a soak test to check for possible memory leaks
You could use profiling tools to find bottlenecks in the code itself
You could identify slow DB queries and look for a way to optimise them
etc.
More information: Performance Testing in a Scaled Down Environment. Part Two: 5 Things You Can Test
If you meant how to judge how much volume to test when the application is not already in production, the answer is simple: you have to predict. The prediction can be based on surveys or reports from your business analysts.
If none of the above are available, just test how much your application can withstand in a test environment with a configuration similar to production. This will give you an idea of when you need to start worrying once the application is live.
I have to test my REST API such that 100k API calls are made simultaneously (within 500 ms). Any idea how to simulate this? Which utility should I use?
I would suggest using JMeter too; it lets your test run across multiple concurrent JMeter servers. To be clear: you can control multiple remote JMeter engines from a single JMeter client and replicate a test across many computers, thus simulating a larger load on the target server.
To be honest, your target is quite high (100k API calls within 500 ms), i.e. you'll need a lot of JMeter servers. When you create stress tests there are no magical recipes, guides, or manuals; trial and error is a fundamental method for solving this kind of problem.
In my experience, I first try with a few concurrent users and see how the server reacts, then increase the number of concurrent users until I reach an intolerable performance decrease or, worse, a bottleneck.
http://jmeter.apache.org/usermanual/remote-test.html
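Before scaling out, it can help to see how many simultaneous calls a single machine can actually generate. The rough sketch below (plain Java 11+; the endpoint and request count are invented, and this is not a substitute for JMeter) fires a burst of asynchronous requests and reports how long the burst really took, which usually makes it obvious why 100k calls in 500 ms needs several JMeter servers.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    public class BurstSketch {
        public static void main(String[] args) {
            // Hypothetical endpoint - replace with the API you are allowed to test.
            URI target = URI.create("http://localhost:8080/api/ping");
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(target).GET().build();

            int requests = 5_000; // well below 100k: roughly what one JVM might fire at once
            long start = System.nanoTime();
            List<CompletableFuture<HttpResponse<String>>> inFlight = new ArrayList<>();
            for (int i = 0; i < requests; i++) {
                // Fire the calls asynchronously so they overlap as much as possible.
                inFlight.add(client.sendAsync(request, HttpResponse.BodyHandlers.ofString()));
            }
            inFlight.forEach(CompletableFuture::join);
            long elapsedMs = (System.nanoTime() - start) / 1_000_000;
            System.out.println(requests + " calls completed in " + elapsedMs + " ms");
        }
    }

Once a single generator is saturated, JMeter's remote testing (the link above) lets one client drive many such machines in parallel.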
You will obviously need a load testing tool which can be run in a distributed mode, i.e. 1 controller and X load generators executing the same test.
Grinder - scripts are written in Jython (a Python dialect)
Apache JMeter - doesn't require any programming knowledge; you can create tests using a simple GUI
Tsung - written in Erlang, known for its capability to produce high loads even on low-end hardware
See the Open Source Load Testing Tools: Which One Should You Use? article for more information on the above tools.
JMeter
The Apache JMeter™ application is open source software, a 100% pure Java application designed to load test functional behavior and measure performance. It was originally designed for testing Web Applications but has since expanded to other test functions.
I have taken this as an example for learning and have gathered some information about tools, objectives, and scenarios, but I need your input. Please assist me.
I am new to performance testing and would like to test the following website: www.volkswagen.co.nz
Can you tell me what needs to be tested? What are the scenarios, and what are the activities for each scenario? What metrics do I need to add? Which is the best free tool for testing it? How do I test it if it is deployed in a cloud like AWS?
Please let me know. Thanks in advance.
Performance testing needs:
Identify the critical/heavy/important scenarios in your web app (irrespective of deployment, cloud or standalone).
Identify service-level agreements in terms of response times, throughput, latency, etc.
Identify the workload model, i.e. how much user load the application is expecting. This should be as fine-grained as possible (average users per transaction/workflow at a point in time); see the worked example after this list.
Identify tools (JMeter is freeware and the best free option, but if you can afford a paid tool then look at LoadRunner, NeoLoad, etc.).
Record the scripts for the workflows, then parameterise and correlate them.
Generate the test setup for the load test and execute the load test.
Monitor system utilization and collect metrics like response time, throughput, error rate, latency, etc.
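As a rough illustration of sizing the workload model in the step above (all numbers here are invented): if the business expects 3,600 checkout transactions per hour, that is 1 transaction per second on average; by Little's Law (concurrent users = arrival rate x time each user spends in the workflow), a workflow that takes about 30 seconds of response plus think time per transaction needs roughly 1 x 30 = 30 concurrent virtual users, plus headroom for peaks.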
This all comes under load testing. For more, you can read http://www.guru99.com/performance-testing.html
I am new to Performance testing and would like to test the following website www.volkswagen.co.nz
That is a recipe for disaster. No one new should be allowed to work on their own without a full period of training and internship with a master in the field. This is true of stone masons, electricians, plumbers, barbers, accountants, engineers and physicians. And it is most certainly true of performance testers/engineers.
There are dozens of foundation skills you need to master before you touch any tool, open source or otherwise. Until you show mastery of those items, along with the mechanics of your tool, you should not be allowed to test any website, particularly a production website. And if you don't work for this company, what you are engaging in is a denial-of-service attack and could leave you exposed to legal liability.
I strongly agree with James on this one.
Do not touch the site if:
it's not yours
you are not sure what you are doing
the owner has not given you explicit permission (which, frankly, would sound irresponsible)
you don't know how, or don't have the support, to restore the environment to a working state
If you do work for the company then you need to have a test environment first, a playground where you can mess around and nobody would mind if you take it down.
First, get information from the business on which use cases need to be tested.
Get response-time targets for user actions.
Get utilisation targets for the environments: define your environment-monitoring tactics.
Find a tool that is fit for purpose: JMeter, Gatling, etc.; lots of free ones are available.
Get a test environment, preferably similar scale to production
Create scripts to cover critical use cases
Combine the scripts into scenarios
Create a reporting framework
Kick off monitoring
Kick off scenario
Collect and analyse results
Be mindful of the free editions of load-testing tools: they tend to be easy to use at first, but as soon as you start to outgrow them they can cost a fortune, and more often than not it's hard to port scripts/scenarios to another tool.
I'd like to know something. I know that to make your tests easier you should use mocks during unit testing, so that you only test the component you want, without external dependencies. But at some point you have to bite the bullet and test the classes which interact with your database, files, network, etc.
My main question is: what do you do to test these classes?
I don't feel that installing a database on my CI server is a good practice, but do you have other options?
Should I create another server with other CI tools, with all the external dependencies?
Should I run integration tests on my CI server as often as my unit tests?
Maybe a full-time person should be in charge of testing these components manually? (Or in charge of creating the test environment and configuring the interaction between your classes and your external dependencies, e.g. editing your application's config files.)
I'd like to know how you do it in the real world.
I'd like to know how you do it in the real world.
In the real world there isn't a simple prescription about what to do, but there is one guiding truth: you want to catch mistakes/bugs/test failures as soon as possible after they are introduced. Let that be your guide; everything else is technique.
A couple common techniques:
Tests running in parallel. This is my preference; I like to have two systems, each running its own instance of CruiseControl* (which I'm a committer for), one running the unit tests with fast feedback (< 5 minutes) while the other system runs the integration tests constantly. I like this because it minimizes the delay between when a checkin happens and when a system test might catch a problem. The downside, which some people don't like, is that you can end up with multiple test failures for the same checkin: both a unit test failure and an integration test failure. I don't find this a major downside in practice.
A life-cycle model where system/integration tests run only after unit tests have passed. There are tools like AnthillPro* that are built around this kind of model and the approach is very popular. In their model they take the artifacts that have passed the unit tests, deploy them to a separate staging server, and then run the system/integration tests there.
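Whichever model you choose, the CI server needs a way to tell the fast suite from the slow one. One common way to do that in Java, sketched below assuming JUnit 4 (the IntegrationTest marker interface and the test class are invented for illustration), is to tag the slow tests with a category and let each CI job include or exclude that category:

    import org.junit.Test;
    import org.junit.experimental.categories.Category;
    import static org.junit.Assert.assertNotNull;

    // Empty marker interface used only as a category label.
    interface IntegrationTest {}

    public class OrderRepositoryTest {

        @Test
        public void mapsRowToOrder() {
            // Fast placeholder unit test: runs on every checkin.
            assertNotNull(new Object());
        }

        @Test
        @Category(IntegrationTest.class)
        public void savesOrderToRealDatabase() {
            // Placeholder for a slow test that would hit a real database;
            // the integration job includes the IntegrationTest category,
            // the checkin job excludes it.
        }
    }

Build tools such as Maven Surefire/Failsafe can include or exclude JUnit categories via their groups settings, so the checkin job and the integration job run different subsets of the same codebase.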
If you've more questions about this topic I'd recommend the Continuous Integration and Testing Conference (CITCON) and/or the CITCON mailing list.
There are lots of CI and build/process automation tools out there. These are just representatives of their classes of tools.
The approach I've seen taken most often is to run unit tests immediately on checkin, and to run more lengthy integration tests at fixed intervals (possibly on a different server; that's really up to your preference). I've also seen integration tests split into "short-running" integration tests and "long-running" integration tests, which are run at different intervals (the "short-running" tests run every hour, for example, and the "long-running" tests run overnight).
The real goal of any automated testing is to get feedback to developers as quickly as is feasible. With that in mind, you should run integration tests as often as you possibly can. If there's a wide variance in the run length of your integration tests, you should run the quicker integration tests more often and the slower integration tests less often. How often you run any set of tests is going to depend on how long it takes all the tests to run, and how disruptive those test runs will be to shorter-running tests (including unit tests).
I realize this doesn't answer your entire question, but I hope it gives you some ideas about the scheduling part.
Depending on the actual nature of the integration tests, I'd recommend using an embedded database engine which is recreated at least once before any run. This enables tests of different commits to work in parallel and provides a well-defined starting point for the tests.
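For instance, a minimal sketch of that idea, assuming JUnit 4 and the H2 in-memory engine are on the test classpath (the table, data, and class names are invented for illustration):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import org.junit.After;
    import org.junit.Before;
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.assertTrue;

    public class UserQueryIT {

        private Connection connection;

        @Before
        public void setUp() throws Exception {
            // Fresh in-memory database: nothing to install on the CI machine,
            // and every run starts from a well-defined state.
            connection = DriverManager.getConnection(
                    "jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1", "sa", "");
            try (Statement stmt = connection.createStatement()) {
                stmt.execute("DROP TABLE IF EXISTS users");
                stmt.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(255))");
                stmt.execute("INSERT INTO users VALUES (1, 'alice')");
            }
        }

        @Test
        public void findsExistingUser() throws Exception {
            try (Statement stmt = connection.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT name FROM users WHERE id = 1")) {
                assertTrue(rs.next());
                assertEquals("alice", rs.getString("name"));
            }
        }

        @After
        public void tearDown() throws Exception {
            connection.close();
        }
    }

The same test can later be pointed at the real database engine in a nightly job if dialect differences matter.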
Network services - by definition - can also be installed somewhere else.
Always be very careful, though, to keep your CI machine separated from any dev or prod environments.
I do not know what kind of platform you're on, but I use Java. Where I work, we create integration tests in JUnit and inject the proper dependencies using a DI container like Spring. They are run against a real data source, both by the developers themselves (normally a small subset) and the CI server.
How often you run the integration tests depends on how long they take to run, in my opinion. Run them as often as you can. Leave the real person out of this, and let him or her run manual system tests in areas that are difficult or too expensive to automate (for instance: spelling, or the position of different GUI components). Leave the editing of config files to a machine. Where I work, we have system variables (DEV, TEST, and so on) set on the computers, and let the app choose a config file based on that.
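To make that last point concrete, here is a minimal sketch of letting the app pick its Spring context from a per-machine variable (the ENV variable, the applicationContext-*.xml file names, and the DataSource bean are assumptions for illustration; it also assumes Spring 4+ so the context is auto-closeable):

    import javax.sql.DataSource;
    import org.springframework.context.support.ClassPathXmlApplicationContext;

    public class TestContextLoader {
        public static void main(String[] args) {
            // ENV is set per machine (DEV, TEST, ...); default to DEV for local runs.
            String env = System.getenv().getOrDefault("ENV", "DEV");

            // Loads e.g. applicationContext-dev.xml or applicationContext-test.xml
            // from the classpath, each defining its own DataSource bean.
            try (ClassPathXmlApplicationContext context = new ClassPathXmlApplicationContext(
                    "applicationContext-" + env.toLowerCase() + ".xml")) {
                DataSource dataSource = context.getBean(DataSource.class);
                System.out.println("Using " + dataSource + " for environment: " + env);
            }
        }
    }

Whether the switch is an environment variable or a -D system property is up to you; the point is that no human edits config files between environments.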