I have two test directories, unit tests and integration tests. Both use Mocha.
Unit tests run on average between 1 and 5 ms. Unfortunately, our integration tests take longer, some of them up to 30 seconds.
I was wondering if I could set the timeout to 30 seconds only for the test/integration directory in the mocha.opts file, but leave test/unit using the default Mocha timeout (2 seconds). Or perhaps have multiple mocha.opts files.
There's no support for multiple mocha.opts files being active for a single invocation of Mocha. You could have two Mocha invocations each with their own mocha.opts, however.
If you want everything in a single Mocha invocation, with different timeouts for different parts of the suite, there's no direct way to tell Mocha "files in this directory have one timeout, and files in that other directory have another timeout". You are limited to calling this.timeout in your callbacks, like this:
describe("User view", function () {
this.timeout(...);
// Tests....
});
If you structure your suite so that all integration tests are seen by Mocha as descendants of a single top-level describe, you can effectively set this timeout in only one location (the top-level describe) for all your integration tests. See this question and its answers for ways to structure a suite in this way.
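For illustration, such a top-level describe for the integration suite might look like the sketch below; the 30000 ms value simply mirrors the 30-second figure from the question, and the nested suite names are placeholders:
describe("Integration tests", function () {
  // One timeout inherited by every descendant suite and test.
  this.timeout(30000);

  describe("User view", function () {
    it("loads the user's data from the database", function () {
      // slow integration work...
    });
  });
});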
I have a number of feature files in my cucumber scenario test suite.
I run the tests by launching Cucumber using the CLI.
These are the steps which occur when the test process is running:
We create a static instance of a class which manages the lifecycle of testcontainers for my cucumber tests.
This currently involves three containers: (i) Postgres DB (with our schema applied), (ii) Axon Server (event store), (iii) a separate application container.
We use Spring's new @DynamicPropertySource to set the values of our data source, event store, etc. so that the Cucumber process can connect to the containers.
@Before each scenario we perform some cleanup on the testcontainers.
This is so that each scenario has a clean slate.
It involves truncating data in tables (postgres container), resetting all events in our event store (Axon Server container), and some other work for our application (resetting relevant tracking event processors), etc.
Although the tests pass fine, the problem is that by default the test suite takes far too long to run. So I am looking for a way to increase parallelism to speed it up.
Adding the argument --threads <n> will not work because the static containers will be in contention (I have tried this and, as expected, it fails).
The way I see it, there are two different options for parallelism which would work:
Each scenario launches its own Spring application context (essentially forking a JVM), gets its own containers deployed and runs tests that way.
Each feature file launches its own Spring application context (essentially forking a JVM), gets its own containers deployed and runs each scenario serially (as it would normally).
I think in an ideal world we would go for option 1 (see *). But this would require a machine with a lot of memory and CPUs (which I do not have access to). And so option 2 would probably make the most sense for me.
My questions are:
Is it possible to configure Cucumber to fork JVMs which run assigned feature files (matching option 2 above)?
What is the best way to parallelise this situation (with Testcontainers)?
* Having each scenario deployed and tested independently agrees with the cucumber docs which state: "Each scenario should be independent; you should be able to run them in any order or in parallel without one scenario interfering with another. Each scenario should test exactly one thing so that when it fails, it fails for a clear reason. This means you wouldn’t reuse one scenario inside another scenario."
This isn't really a question for Stack Overflow. There isn't a single correct answer - mostly it depends. You may want to try https://softwareengineering.stackexchange.com/ in the future.
No. This is not possible. Cucumber does not support forking the JVM. Surefire however does support forking and you may be able to utilize this by creating a runner for each feature file.
However I would reconsider the testing strategy and possibly the application design too.
To execute tests in parallel your system has to support parallel invocations. So I would not consider resetting your database and event store for each test a good practice.
Instead, consider writing your tests in such a way that each test uses its own isolated set of resources. So for example, if you are testing users, you create randomized users for each test. If these users are part of an organization, you create a random organization, etc.
This isn't always possible. Some applications are designed with implicit singleton resources in the code. In this case you'll have to refactor the application to make these resources explicit.
Alternatively consider pushing your Cucumber tests down the stack. You can test business logic at any abstraction level. It doesn't have to be an integration test. Then you can use JUnit with Surefire instead and use Surefire to create multiple forks.
I'm using JMeter for integration and non-regression testing.
The tests are automated and reports are working.
But since this is scenario testing and not performance testing, the report doesn't provide real business value for that kind of test.
My question: is there any way to have scenario (Transaction Controller based) reporting?
For the moment, to get more meaningful results, Transaction Controllers and Dummy Samplers are used.
What we would like to have is the number of successful/failed scenarios in the last test run, and also a history of successes/failures per test run (one per day).
Thank you for your advice.
The easiest way of getting this done is putting your JMeter test under Jenkins orchestration so it is executed automatically based on a VCS hook or according to a schedule.
Once done, you will be able to utilize the Jenkins Performance Plugin, which adds test-result trend charts and the ability to mark a build as unstable/failed depending on various criteria.
If I am not wrong, you want to create a suite based on particular test cases, e.g. where a single case includes the execution of more than one request.
If this is the case, you can simply create a Test Fragment through the JMeter GUI and copy all the samplers into a single fragment.
Now, to control their execution you can use any controller of your choice; I would suggest using a Module Controller for HTTP samplers.
I have a set of Mocha test scripts (= files), located in /test together with mocha.opts. It seems Mocha is running all test files in parallel, is this correct? This could be a problem if test data are shared between different test scripts.
How can I ensure, that each file is executed separately?
It seems mocha is running all test files in parallel, is this correct?
No.
By default, Mocha loads the test files sequentially, records all tests that must be run, and then runs the tests one by one, again sequentially. Mocha will not run two tests at the same time, no matter whether the tests are in the same file or in different files. Note that whether your tests are asynchronous or synchronous makes no difference: when Mocha starts an asynchronous test, it waits for it to complete before moving on to the next test.
There are tools that patch Mocha to run tests in parallel, so you may see demonstrations showing Mocha tests running in parallel, but this requires additional tooling and is not part of Mocha itself.
If you are seeing behavior that suggests tests are running in parallel, that's a bug in your code, or perhaps you are misinterpreting the results you are getting. Regarding bugs, it is possible to make mistakes and write code that indicates to Mocha that your test is over when in fact asynchronous operations are still running. However, this is a bug in the test code, not a feature whereby Mocha runs tests in parallel.
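As an illustration of that kind of bug (saveRecord is a hypothetical helper that returns a promise), a test that starts asynchronous work without returning the promise or calling done is reported as finished immediately, and its leftover work then overlaps with the next test:
it("looks finished but is not", function () {
  // BUG: the promise is not returned, so Mocha considers this test done
  // immediately while saveRecord() is still running.
  saveRecord({ id: 1 });
});

it("is correct: Mocha waits for the returned promise", function () {
  return saveRecord({ id: 1 });
});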
Be careful when assigning environment variables outside of Mocha hooks, since those assignments are executed in all files before any test runs (i.e. before any "before*" or "it" hook).
Hence the value assigned to an environment variable in the first file will be overwritten by the second file before any Mocha test hook executes.
E.g. if you assign process.env.PORT = 5000 in test1.js and process.env.PORT = 6000 in test2.js, both outside of any Mocha hook, then by the time the tests from test1.js start executing, the value of process.env.PORT will be 6000, not 5000 as you might expect.
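A minimal sketch of the safer pattern, reusing the PORT example above: do the assignment inside a hook so it runs right before that file's tests instead of at load time.
// test1.js
describe("server on port 5000", function () {
  before(function () {
    // Runs just before the tests in this file, not when the file is loaded.
    process.env.PORT = "5000";
  });

  it("reads the expected port", function () {
    // assertions against process.env.PORT go here...
  });
});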
Tools: Protractor 3.3.0, Jasmine 2.4.1, Selenium Standalone Server.
I have a test suite that has a plethora of spec.js files each containing unique tests for my application.
I'm using the maxInstances and shardTestFiles browser capabilities to launch 3 browsers and run each spec file in its own instance, to decrease the run time of the entire suite (with the 3 browsers it's at about 20 minutes... without them it's probably pushing over an hour).
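For reference, this kind of sharding setup lives in the Protractor config and looks roughly like the following sketch (the spec paths are placeholders):
// conf.js (sketch)
exports.config = {
  framework: 'jasmine',
  specs: ['specs/**/*.spec.js'], // placeholder glob
  capabilities: {
    browserName: 'chrome',
    shardTestFiles: true, // run each spec file in its own browser instance
    maxInstances: 3       // at most three instances at a time
  }
};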
My question is: how can I tell Protractor to make a spec file wait for the completion of another spec file before executing? For example:
Let's say I have Page 1 Tests with spec files a1.spec, a2.spec, and a3.spec, and then I have some other tests of similar structure.
When I launch Protractor with 3 browser instances, as expected, a1.spec, a2.spec, and a3.spec all launch with their own individual browser instance since it's a 1:1 ratio. BUT what if a3.spec can't run unless a2.spec completes? How do I make this wait occur, or is it just best practice not to make certain tests dependent on each other?
You could use the done callback either in the beforeEach or it of any test which needs time to complete asynchronously. Any following tests won't run until the one you're waiting on has finished.
beforeEach Example
beforeEach(function(done) {
  setTimeout(function() {
    // Asynchronous setup work; done() tells Jasmine it has finished.
    value = 0;
    done();
  }, 1);
});
it Example
it("should support async execution of test preparation and expectations", function(done) {
value++;
expect(value).toBeGreaterThan(0);
done();
});
This would queue things as you need, but the tests would run serially. To speed this up you could use Promises and when/guard, but this may complicate things too much.
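As a rough sketch of that promise-based approach (prepareData is a hypothetical async helper returning a promise; with Jasmine 2.4 completion is still signalled through done):
beforeEach(function (done) {
  // prepareData() is a hypothetical function returning a promise.
  prepareData()
    .then(function () {
      done();
    })
    .catch(done.fail);
});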
How do we put a timeout on a TeamCity build?
We have a TeamCity build which runs some integration tests. These tests read/write data to a database and sometimes this is very slow (why it is slow is another open question).
We currently have timeouts in our integration tests to check that e.g. the data has been written within 30 seconds, but these tests randomly fail during periods of heavy use.
If we removed the timeouts from the tests, we would want to fail the build only if the entire run took more than some much larger timeout.
But I can't see how to do that.
On the first page of the build setup you will find the field highlighted in my screenshot - use that.
In TeamCity v9 and v10 you should find it under "Failure Conditions".