I want to set up some database state for integration tests. My basic setup looks like this:
describe "some behaviour" do
  before(:each) do
    # <setup some mock objects with stubs>
    # <call the database set up script>
  end
end
Setting up the mocks naturally belongs in a before(:each), since the examples depend on the mocks.
The database setup script is slow, so I only want to call it once.
Part of the process being integration tested is the database setup script. While I can change the script to make it easier to test, I can't remove its dependency on the state of the mocks.
The database setup must be called after the setup for the mocks because it depends on them. E.g. some of the mocks return identifiers which the database setup script needs to put in database records.
So really I want to put the database setup in a before(:all). But before(:all) blocks run before the before(:each) blocks, so the database setup would run before the mocks exist, and I can't see how to meet all the requirements.
I tried splitting the setup of the mocks out into a helper function, which I called as the first thing in both the before(:all) and the before(:each). RSpec doesn't allow creation of doubles in before(:all) blocks, so this didn't work.
Is there any way to run the slow database setup only once, but still have the mocks available?
I have a number of feature files in my cucumber scenario test suite.
I run the tests by launching Cucumber using the CLI.
These are the steps which occur when the test process is running:
We create a static instance of a class which manages the lifecycle of testcontainers for my cucumber tests.
This currently involves three containers: (i) Postgres DB (with our schema applied), (ii) Axon Server (event store), (iii) a separate application container.
We use Spring's new @DynamicPropertySource to set the values of our data source, event store, etc. so that the Cucumber process can connect to the containers.
@Before each scenario we perform some cleanup on the testcontainers.
This is so that each scenario has a clean slate.
It involves truncating data in tables (postgres container), resetting all events in our event store (Axon Server container), and some other work for our application (resetting relevant tracking event processors), etc.
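A simplified sketch of that lifecycle class (the class name, property keys and image tags below are illustrative, not the real ones, and I'm assuming the class is picked up by Spring's test context, e.g. via the @CucumberContextConfiguration glue class):

import io.cucumber.java.Before;
import org.springframework.test.context.DynamicPropertyRegistry;
import org.springframework.test.context.DynamicPropertySource;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.containers.PostgreSQLContainer;

public class ContainerLifecycle {

    // static: started once and shared by every scenario in the run
    static final PostgreSQLContainer<?> POSTGRES = new PostgreSQLContainer<>("postgres:15");
    static final GenericContainer<?> AXON_SERVER =
            new GenericContainer<>("axoniq/axonserver:latest").withExposedPorts(8124);
    // (the third, separate application container is omitted here)

    static {
        POSTGRES.start();
        AXON_SERVER.start();
    }

    @DynamicPropertySource
    static void registerProperties(DynamicPropertyRegistry registry) {
        // point the Spring context used by the Cucumber process at the containers
        registry.add("spring.datasource.url", POSTGRES::getJdbcUrl);
        registry.add("spring.datasource.username", POSTGRES::getUsername);
        registry.add("spring.datasource.password", POSTGRES::getPassword);
        // event store host/port registered the same way
    }

    @Before
    public void cleanSlate() {
        // truncate tables, reset events in Axon Server, reset tracking
        // event processors, etc. so each scenario starts from a clean slate
    }
}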
Although the tests pass fine, the problem is that by default the test suite takes far too long to run. So I am looking for a way to increase parallelism to speed it up.
Adding the --threads <n> argument will not work because the static containers will be in contention (I have tried this and, as expected, it fails).
The way I see it, there are two different options for parallelism which would work:
1. Each scenario launches its own Spring application context (essentially forking a JVM), gets its own containers deployed, and runs tests that way.
2. Each feature file launches its own Spring application context (essentially forking a JVM), gets its own containers deployed, and runs each scenario serially (as it would normally).
I think in an ideal world we would go for option 1 (see * below). But this would require a machine with a lot of memory and CPUs (which I do not have access to), so option 2 would probably make the most sense for me.
My questions are:
1. Is it possible to configure Cucumber to fork JVMs which run assigned feature files (which matches option 2 above)?
2. What is the best way to parallelise this situation (with testcontainers)?
* Having each scenario deployed and tested independently agrees with the cucumber docs which state: "Each scenario should be independent; you should be able to run them in any order or in parallel without one scenario interfering with another. Each scenario should test exactly one thing so that when it fails, it fails for a clear reason. This means you wouldn’t reuse one scenario inside another scenario."
This isn't really a question for Stack Overflow. There isn't a single correct answer; mostly it depends. You may want to try https://softwareengineering.stackexchange.com/ in the future.
No, this is not possible: Cucumber does not support forking the JVM. Surefire, however, does support forking, and you may be able to take advantage of this by creating a runner for each feature file, as sketched below.
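For example, a runner per feature file might look something like this (the feature path, glue package and class name are made up); Surefire's forkCount parameter can then distribute those runner classes across forked JVMs:

import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;
import org.junit.runner.RunWith;

// One runner class per feature file; Surefire treats each as an ordinary test class.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "classpath:features/checkout.feature",  // exactly one feature
        glue = "com.example.steps"                          // step definitions package
)
public class CheckoutFeatureTest {
}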
However I would reconsider the testing strategy and possibly the application design too.
To execute tests in parallel your system has to support parallel invocations. So I would not consider resetting your database and event store for each test a good practice.
Instead, consider writing your tests in such a way that each test uses its own isolated set of resources. So, for example, if you are testing users, you create randomized users for each test. If these users are part of an organization, you create a random organization, etc.
This isn't always possible. Some applications are designed with implicit singleton resources in the code. In this case you'll have to refactor the application to make these resources explicit.
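Where it is possible, the randomized-resource idea might look roughly like this (the step text and the creation logic are placeholders):

import io.cucumber.java.en.Given;
import java.util.UUID;

public class UserSteps {

    @Given("a registered user")
    public void aRegisteredUser() {
        // unique names per scenario, so scenarios running in parallel never collide
        String organizationName = "org-" + UUID.randomUUID();
        String userName = "user-" + UUID.randomUUID();
        // create the organization and user through the application's own API or
        // service layer here, rather than by resetting shared state
    }
}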
Alternatively, consider pushing your Cucumber tests down the stack. You can test business logic at any abstraction level; it doesn't have to be an integration test. Then you can use JUnit with Surefire instead and use Surefire to create multiple forks.
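For example, a plain JUnit 5 test of a piece of business logic needs no Spring context and no containers at all (DiscountCalculator is a made-up class), so Surefire can fork and parallelise it freely via its forkCount setting:

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {

    @Test
    void appliesTenPercentDiscount() {
        // pure business logic: no database, no event store, no containers
        DiscountCalculator calculator = new DiscountCalculator();
        assertEquals(90, calculator.apply(100, 10));
    }
}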
I have Jasmine tests for a data service that does CRUD operations. There are some functions that are only executed on user actions such as adding, updating or deleting data. These functions are marked red by Istanbul, which decreases the code coverage.
How should I handle these functions?
Either:
- exercise the user actions manually using a hand-written script so you can reliably repeat it, or
- add a programmable way for your application to trigger these actions, and then write unit tests that use the triggers.
I much prefer the latter because you can then simply run the tests whenever needed.
I am trying to use RSpec to functionally test my REST APIs.
The way I would LIKE it to work is using a CI build to build and deploy my application to a test server somewhere in the cloud and then have it kick off automated functional tests. But in order to do that properly, I need to be able to pass in the base URL/domain where the app was deployed. It will not be the same for every deployment.
Everything I've found so far makes it seem like RSpec can't do this. Is there another way to do it if I can't pass in parameters on the command line? Or is RSpec not the right choice for this?
One way would be to wrap the call to rspec in something that accepts command-line arguments and then start RSpec from code. If you do not want to write your own binary for that, rake is capable of this too.
Look here for how to run rspec from code.
Another way would be to set an ENV variable when calling your tests, preferably making it optional in the specs:
$> SPEC_URL=http://anotherhost:666 rspec
in code:
url = ENV['SPEC_URL'] || "http://localhost:4000"
I would suggest the second method, as it's the cleaner and easier approach in my opinion.
Since application tests can now be run on the simulator from Xcode, what would the advantage be, apart from possibly a small saving in execution time, of still separating your tests into logic and application tests?
The differentiation as per the Apple docs:
Logic tests. These tests check the correct functionality of your code in a clean-room environment; that is, your code is not run inside an application. Logic tests let you put together very specific test cases to exercise your code at a very granular level (a single method in class) or as part of a workflow (several methods in one or more classes). You can use logic tests to perform stress-testing of your code to ensure that it behaves correctly in extreme situations that are unlikely in a running application. These tests help you produce robust code that works correctly when used in ways that you did not anticipate. Logic tests are iOS Simulator SDK–based; however, the application is not run in iOS Simulator: The code being tested is run during the corresponding target’s build phase.
Application tests. These tests check the functionality of your code in a running application. You can use application tests to ensure that the connections of your user-interface controls (outlets and actions) remain in place, and that your controls and controller objects work correctly with your object model as you work on your application. Because application tests run only on a device, you can also use these tests to perform hardware testing, such as getting the location of the device.
Application tests and logic tests are really used for two different things:
Logic tests/unit tests are used to test very small behavior for one or a few methods, e.g. "Given that I create my object like this, is the value of a certain property what I expect it to be?"
Application tests however are used to test the big picture, e.g. "Do I get the right data in my detail view when I tap on a certain table view cell?"
I am running some unit tests that persist documents into a MongoDB database. For these unit tests to succeed, the MongoDB server must be started. I do this by using Process.Start("mongod.exe").
It works, but sometimes the server takes time to start, and before it is ready the unit tests try to run and FAIL, complaining that the MongoDB server is not running.
What should I do in such a situation?
If you use an external resource (DB, web server, FTP, backup device, server cluster) in a test, then it is an integration test rather than a unit test. It is not convenient or practical to start all those external resources from within the test. Just ensure that your test will be running in a predictable environment. There are several ways to do that:
- Run the test suite from a script (BAT, nant, WSC) which starts MongoDB before running the tests.
- Start MongoDB on a server and never shut it down.
Do not add any loops with delays in your tests to wait for the external resource to start; that makes tests slow, erratic and very complex.
Can't you run a quick test query in a loop with a delay after launching and verify the DB is up before continuing?
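Something along those lines, for example (a sketch against the legacy MongoDB Java driver; the retry count and delay are arbitrary):

import com.mongodb.Mongo;
import com.mongodb.MongoException;

class MongoStartupWait {

    // Block until mongod answers, or give up after roughly 15 seconds.
    static void waitForMongo(Mongo mongo) throws InterruptedException {
        for (int i = 0; i < 30; i++) {
            try {
                mongo.getDatabaseNames();  // any cheap call that needs the server
                return;                    // the server answered, carry on with the tests
            } catch (MongoException e) {
                Thread.sleep(500);         // not up yet, wait and retry
            }
        }
        throw new IllegalStateException("mongod did not come up in time");
    }
}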
I guess I'd (and by that I mean, this is what I've done, but there's every chance someone has a better idea) write some kind of MongoTestHelper that can do a number of things during the various stages of your tests.
Before the test run, it checks that a test mongod instance is running and, if not, boots one up on your favourite test-mongo port. I find it's not actually that costly to just try to boot up a new mongod instance and let it fail if that port is already in use. However, this is very different on Windows, so you might want to check that the port is open or something.
Before each individual test, you can remove all the items from all the tested collections, if this is the kind of thing you need. In fact, I just drop all the DBs, as the lovely MongoDB will recreate them for you:
for (String name : mongo.getDatabaseNames()) {
    mongo.dropDatabase(name);
}
After the tests have run you could always shut it down if you've chosen to boot up on a random port, but that seems a bit silly. Life's too short.
The TDD purists would say that if you start the external resource, then it's not a unit test. Instead, mock out the database interface, and test your classes against that. In practice this would mean changing your code to be mockable, which is arguably a good thing.
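For instance, with the database hidden behind a mocked-out interface the test never needs a running server. A sketch in Java with Mockito (the interface, service and method names are hypothetical; the same idea applies in any language and mocking framework):

import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.junit.jupiter.api.Test;

class DocumentServiceTest {

    interface DocumentRepository {          // hypothetical abstraction over MongoDB
        void save(String id, String body);
        String findById(String id);
    }

    static class DocumentService {          // hypothetical class under test
        private final DocumentRepository repository;
        DocumentService(DocumentRepository repository) { this.repository = repository; }
        String load(String id) { return repository.findById(id); }
    }

    @Test
    void loadsDocumentFromRepository() {
        DocumentRepository repository = mock(DocumentRepository.class);
        when(repository.findById("42")).thenReturn("{\"answer\":42}");

        DocumentService service = new DocumentService(repository);

        // the class is exercised against the mock, so no mongod is required
        assertEquals("{\"answer\":42}", service.load("42"));
        verify(repository).findById("42");
    }
}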
OTOH, to write integration or acceptance tests, you should use an in-memory, transient database with just your test data in it, as others have mentioned.