I have a project with many integration tests, and I'm trying to reduce their execution time.
The tests are all JUnit tests that use a DB connection.
Currently all tests run one by one using maven-surefire-plugin, forking a new JVM for each test in order to handle cache issues (the caches themselves are not the problem here).
All tests use an app that persists to the same DB schema, which poses a challenge when trying to parallelize the process.
I found a nice blog post that explains a bit about concurrency in Surefire: http://incodewetrustinc.blogspot.com/2010/01/run-your-junit-tests-concurrently-with.html
but I still have a problem implementing this solution, since I have a shared resource.
My idea was to create multiple schemas and share them between threads/processes. How can I assign each test a separate connection and avoid collisions?
I would love to hear some ideas.
Thanks,
Ika.
Use ${surefire.forkNumber} as part of your DB connection ID. Then each forked JVM running tests will use a separate connection.
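A minimal sketch of what this can look like in the pom.xml (the jdbc.url property name and the schema-per-fork naming are assumptions; adapt them to however your app reads its connection settings):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkCount>4</forkCount>
    <reuseForks>true</reuseForks>
    <systemPropertyVariables>
      <!-- each forked JVM sees its own value of surefire.forkNumber (1..forkCount) -->
      <jdbc.url>jdbc:postgresql://localhost:5432/testdb_${surefire.forkNumber}</jdbc.url>
    </systemPropertyVariables>
  </configuration>
</plugin>
```

You would create the schemas testdb_1 through testdb_4 up front; each fork then works against its own schema and the forks never collide.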
I have a number of feature files in my cucumber scenario test suite.
I run the tests by launching Cucumber using the CLI.
These are the steps which occur when the test process is running:
We create a static instance of a class which manages the lifecycle of testcontainers for my cucumber tests.
This currently involves three containers: (i) Postgres DB (with our schema applied), (ii) Axon Server (event store), (iii) a separate application container.
We use Spring's new @DynamicPropertySource to set the values of our data source, event store, etc. so that the Cucumber process can connect to the containers.
@Before each scenario we perform some clean-up on the testcontainers.
This is so that each scenario has a clean slate.
It involves truncating data in tables (postgres container), resetting all events in our event store (Axon Server container), and some other work for our application (resetting relevant tracking event processors), etc.
Although the tests pass fine, the problem is that by default the test suite takes far too long to run. So I am looking for a way to increase parallelism to speed it up.
Adding the argument --threads <n> will not work because the static containers will be in contention (I have tried this and, as expected, it fails).
The way I see it, there are two different options for parallelism which would work:
Each scenario launches its own Spring application context (essentially forking a JVM), gets its own containers deployed, and runs tests that way.
Each feature file launches its own Spring application context (essentially forking a JVM), gets its own containers deployed, and runs each scenario serially (as it would normally).
I think in an ideal world we would go for option 1 (see *). But this would require a machine with a lot of memory and CPUs (which I do not have access to), so option 2 would probably make the most sense for me.
My questions are:
Is it possible to configure Cucumber to fork JVMs which run assigned feature files (matching option 2 above)?
What is the best way to parallelise this situation (with Testcontainers)?
* Having each scenario deployed and tested independently agrees with the cucumber docs which state: "Each scenario should be independent; you should be able to run them in any order or in parallel without one scenario interfering with another. Each scenario should test exactly one thing so that when it fails, it fails for a clear reason. This means you wouldn’t reuse one scenario inside another scenario."
This isn't really a question for Stack Overflow. There isn't a single correct answer - mostly it depends. You may want to try https://softwareengineering.stackexchange.com/ in the future.
No, this is not possible: Cucumber does not support forking the JVM. Surefire, however, does support forking, and you may be able to utilize this by creating a runner class for each feature file.
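If you go that route, the idea (a sketch, not something tested against your setup) is one JUnit runner class per feature file - e.g. a CheckoutFeatureRunner annotated with @RunWith(Cucumber.class) and @CucumberOptions(features = "classpath:features/checkout.feature") - and then letting Surefire fork per runner class; the *FeatureRunner naming pattern below is an assumption:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <forkCount>2</forkCount>
    <!-- reuseForks=false gives every runner class its own JVM, and therefore
         its own static Testcontainers instances -->
    <reuseForks>false</reuseForks>
    <includes>
      <include>**/*FeatureRunner.java</include>
    </includes>
  </configuration>
</plugin>
```

With two forks, two feature files run concurrently, each against its own set of containers.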
However I would reconsider the testing strategy and possibly the application design too.
To execute tests in parallel, your system has to support parallel invocations. So I would not consider resetting your database and event store for each test a good practice.
Instead, consider writing your tests in such a way that each test uses its own isolated set of resources. So for example, if you are testing users, you create randomized users for each test. If these users are part of an organization, you create a random organization, etc.
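A minimal Java sketch of that idea (the user/organization names here are hypothetical; the point is that every test builds its own unique fixtures, so no reset between tests is needed):

```java
import java.util.UUID;

// Hypothetical fixture factory: every call yields a resource name that
// cannot collide with those of any other test running in parallel.
public class TestFixtures {
    public static String uniqueName(String prefix) {
        return prefix + "-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        String org = uniqueName("org");
        String user = uniqueName("user");
        // Two calls never produce the same name, so tests stay isolated
        System.out.println(!org.equals(uniqueName("org")));
        System.out.println(org.startsWith("org-") && user.startsWith("user-"));
    }
}
```

Each scenario would then create its own organization and users through the normal application APIs, rather than relying on a shared, wiped-clean dataset.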
This isn't always possible. Some applications are designed with implicit singleton resources in the code. In this case you'll have to refactor the application to make these resources explicit.
Alternatively consider pushing your Cucumber tests down the stack. You can test business logic at any abstraction level. It doesn't have to be an integration test. Then you can use JUnit with Surefire instead and use Surefire to create multiple forks.
In Using after or afterEach hooks, it is recommended to clean up server/db state in beforeEach or before. I understand the rationale, but I believe the text lacks a real use case. Here is a use case that I don't know how to solve while following the best practice.
Imagine I'm testing my own clone of GitHub. To have a clean environment for my tests, I want Cypress to use a clean temporary user and a clean temporary repository. To avoid conflicts between multiple Cypress instances targeting the same server (e.g., multiple front-end developers testing their changes in parallel), there should be one user and one repository dedicated to each Cypress instance. This can be implemented by generating users and repositories with well-known random ids (e.g., temp-user-13432481 and temp-repo-134234). Cleaning up the mess in the database is then just a matter of removing the temp-* data.
The problem is when to clean up. If the clean-up is done in beforeEach() as recommended, running a test in one Cypress instance will delete the data of other Cypress instances running in parallel.
Is there an obvious solution that I'm missing? How do people usually cleanup temporary testing data in a database?
The obvious answer would be to not run tests in a distributed manner against a single remote server (and instead run the DB server locally on each client), but since this is not an answer to your question, here are a few ideas:
Set up a cron job that will clean up old test repos/users at the end of each day.
If you only clean up users/repos that are older than e.g. several hours, it will avoid cleaning up resources that may still be used by running tests.
You must ensure that the ids are random and large enough (i.e. have enough entropy) that you won't run into collisions even if you don't clean them up for a while.
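The age-based filter for that clean-up job can be sketched in Java like this (the temp- prefix and the two-hour cutoff are assumptions; the actual deletion would be whatever DB call your stack uses):

```java
import java.time.Duration;
import java.time.Instant;

// Decide whether a test resource is safe to delete: it must be one of ours
// (temp- prefix) and old enough that no running test can still be using it.
public class StaleResourceFilter {
    static boolean isStale(String name, Instant createdAt, Instant now, Duration maxAge) {
        return name.startsWith("temp-")
                && Duration.between(createdAt, now).compareTo(maxAge) > 0;
    }

    public static void main(String[] args) {
        Instant now = Instant.parse("2024-01-01T12:00:00Z");
        Duration cutoff = Duration.ofHours(2);
        // created 3h ago and temp-prefixed: safe to delete
        System.out.println(isStale("temp-repo-134234", now.minus(Duration.ofHours(3)), now, cutoff));
        // created 10min ago: keep, a test may still be using it
        System.out.println(isStale("temp-user-13432481", now.minus(Duration.ofMinutes(10)), now, cutoff));
        // old but not a temp resource: never touch it
        System.out.println(isStale("main-repo", now.minus(Duration.ofDays(30)), now, cutoff));
    }
}
```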
Make each client (i.e. the PC running the tests) use a fingerprint that you'll use to namespace the repo/user in the DB, and clean them up before each test run.
This way, each client will only clean up their own resources.
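One way to derive such a fingerprint (a sketch; using the machine hostname is an assumption - any value that is stable per client works):

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Namespace test resources by a per-client fingerprint so each client
// only ever sees (and cleans up) its own temp data.
public class ClientNamespace {
    static String fingerprint() {
        try {
            return InetAddress.getLocalHost().getHostName().toLowerCase();
        } catch (UnknownHostException e) {
            return "unknown-host";
        }
    }

    static String namespaced(String resource) {
        return "temp-" + fingerprint() + "-" + resource;
    }

    public static void main(String[] args) {
        String repo = namespaced("repo-134234");
        // A cleanup run would delete only rows matching "temp-" + fingerprint() + "-%"
        System.out.println(repo.startsWith("temp-" + fingerprint() + "-"));
    }
}
```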
I'm leaning towards solution (1).
I am developing with CakePHP 2.4.3 and use the unit tests a lot, at the moment mostly on models.
Is there a possibility to shorten the time these tests need to run? What makes them so slow? The DB insertions of the fixtures?
I notice that I don't have the patience to wait for the tests to run; while waiting I start doing other things, and when I come back I have lost track of what problem I was testing.
Thanks for any hints!
CalamityJane
I strongly disagree here with Mark's comment:
Unittests are not supposed to be "speedy"
Technically they're not, that's true, but it can become annoying. If you use CI on a large project, testing can become horribly slow. You don't want to wait 30 minutes until all tests are done. We had this case in a project with ~550 tables.
The bottleneck is in fact the fixture loading, because for each test all fixtures have to be created again, over and over. It is slow.
We use an internal plugin to copy a test database template to the test database instead of using fixtures. This dropped the time to run the tests on this project from 30+ minutes down to a few minutes.
An open source plugin that should be capable of doing this as well is https://github.com/lorenzo/cakephp-fixturize. You can load fixtures from SQL files or from a template database; see this section of the readme.md.
If you just have to test a single method, there is no need to run all tests; you can filter them:
cake test <file> --filter testMyMethod
How do we put a timeout on a TeamCity build?
We have a TeamCity build which runs some integration tests. These tests read/write data to a database and sometimes this is very slow (why it is slow is another open question).
We currently have timeouts in our integration tests to check that e.g. the data has been written within 30 seconds, but these tests are randomly failing during periods of heavy use.
If we removed the timeouts from the tests, we would want to fail the build only if the entire run took more than some much larger timeout.
But I can't see how to do that.
On the first page of the build configuration settings you will find an execution timeout field - use that.
In TeamCity v9 and v10 you should find it under "Failure Conditions".
I am running some unit tests that persist documents into a MongoDB database. For these unit tests to succeed, the MongoDB server must be started. I do this by calling Process.Start("mongod.exe").
It works, but sometimes mongod takes time to start, and before it is ready the unit tests run and FAIL, complaining that the MongoDB server is not running.
What to do in such situation?
If a test uses an external resource (DB, web server, FTP, backup device, server cluster), then it is an integration test rather than a unit test. It is not convenient or practical to start all those external resources from within the test. Just ensure that your tests run in a predictable environment. There are several ways to do that:
Run the test suite from a script (BAT, NAnt, WSC) which starts MongoDB before running the tests.
Start MongoDB on a server and never shut it down.
Do not add loops with delays to your tests to wait while an external resource starts - it makes tests slow, erratic and very complex.
Can't you run a quick test query in a loop with a delay after launching, and verify the DB is up before continuing?
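That kind of readiness check can be as simple as polling the port until something accepts a connection (a sketch; in the real setup the port would be MongoDB's default 27017, and the retry budget here is arbitrary):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.ServerSocket;
import java.net.Socket;

// Poll a TCP port until something is listening on it, or give up.
public class WaitForPort {
    static boolean waitForPort(String host, int port, int attempts, long delayMillis)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            try (Socket socket = new Socket()) {
                socket.connect(new InetSocketAddress(host, port), 250);
                return true; // something accepted the connection
            } catch (IOException e) {
                Thread.sleep(delayMillis); // not up yet, retry
            }
        }
        return false;
    }

    public static void main(String[] args) throws Exception {
        // Stand-in for mongod: any listening socket will do for the demo.
        try (ServerSocket server = new ServerSocket(0)) {
            System.out.println(waitForPort("localhost", server.getLocalPort(), 20, 50));
        }
        System.out.println(waitForPort("localhost", 1, 2, 10)); // nothing listens on port 1
    }
}
```

Calling this once right after Process.Start("mongod.exe"), before the first test touches the DB, avoids the race without putting sleeps inside the tests themselves.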
I guess I'd (and by that I mean, this is what I've done, but there's every chance someone has a better idea) write some kind of MongoTestHelper that can do a number of things during the various stages of your tests.
Before the test run, it checks that a test mongod instance is running and, if not, boots one up on your favourite test-mongo port. I find it's not actually that costly to just try to boot up a new mongod instance and let it fail if that port is already in use. However, this is very different on Windows, so you might want to check whether the port is open first.
Before each individual test, you can remove all the items from all the tested collections, if this is the kind of thing you need. In fact, I just drop all the DBs, as the lovely mongodb will recreate them for you:
// drop every database; MongoDB recreates them on demand
for (String name : mongo.getDatabaseNames()) {
    mongo.dropDatabase(name);
}
After the tests have run you could always shut it down if you've chosen to boot up on a random port, but that seems a bit silly. Life's too short.
The TDD purists would say that if you start the external resource, then it's not a unit test. Instead, mock out the database interface, and test your classes against that. In practice this would mean changing your code to be mockable, which is arguably a good thing.
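A hand-rolled version of that idea in Java (the DocumentStore interface and the service class are hypothetical; in practice you might use a mocking library such as Mockito instead of a hand-written fake):

```java
import java.util.ArrayList;
import java.util.List;

public class MockingSketch {
    // The seam: code under test depends on this interface, not on MongoDB.
    interface DocumentStore {
        void save(String document);
        int count();
    }

    // In-memory fake used by unit tests; no server process required.
    static class InMemoryStore implements DocumentStore {
        private final List<String> docs = new ArrayList<>();
        public void save(String document) { docs.add(document); }
        public int count() { return docs.size(); }
    }

    // Hypothetical class under test: only talks to the interface.
    static class DocumentService {
        private final DocumentStore store;
        DocumentService(DocumentStore store) { this.store = store; }
        void importAll(List<String> documents) { documents.forEach(store::save); }
    }

    public static void main(String[] args) {
        DocumentStore store = new InMemoryStore();
        new DocumentService(store).importAll(List.of("a", "b", "c"));
        System.out.println(store.count()); // all three documents "persisted"
    }
}
```

The unit tests exercise DocumentService against InMemoryStore; only the integration tests need a real mongod.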
OTOH, to write integration or acceptance tests, you should use an in-memory transient database with just your test data in it, as others have mentioned.