I have some unit tests where I'm using server paths to hit my mock stubs. Currently I'm adding the URL to each test. I'd really like to pass this URL from my gulp task into the unit tests. Is this possible?
require('./path_to_my_karma').apply(null, [].concat(args).concat([host_url]));
When you run karma you can pass in custom options:
karma start --my-custom-url http://google.com
Then in your karma.conf.js you can access them like so:
module.exports = function(config) {
  console.log('URL passed in from command line is %s', config.myCustomUrl);

  config.set({
    // ...
  });
};
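If you want to drive this from gulp rather than the command line, a minimal sketch using Karma's public Server API could look like the following (the MOCK_URL environment variable and the 'test' task name are just illustrative assumptions, not part of the original answer):

// gulpfile.js -- sketch
const gulp = require('gulp');
const path = require('path');
const { Server } = require('karma');

gulp.task('test', (done) => {
  new Server({
    configFile: path.resolve(__dirname, 'karma.conf.js'),
    // Extra options passed here are merged into Karma's config
    // the same way CLI flags are.
    myCustomUrl: process.env.MOCK_URL || 'http://localhost:3000'
  }, done).start();
});

To make the value visible inside the browser-side tests themselves (not just in karma.conf.js), you could forward it through Karma's client args, e.g. config.set({ client: { args: [config.myCustomUrl] } }), and read it in a test as window.__karma__.config.args[0].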
I want to do some cleanup after all .spec files have been run by Cypress. For this I created another .spec file that does several API calls. I need Cypress to run this .spec file only after the tests from all the other files have run. One more thing: my .spec files are being run by Cypress in parallel mode, across 4 machines.
I found out there are "after" hooks I could use, but as far as I've read, these hooks apply to only one .spec file at a time, not to all of them.
This is not exactly the answer you are looking for, but maybe this information can be useful to you in some way:
One of the antipatterns I have heard multiple times from Cypress gurus is cleaning the application state using after or afterEach hooks.
If I remember correctly, the reasoning was that if the test fails or some step of the after hook fails, it will never get to the end of the script, thus we cannot be sure that the application state is prepared for the future tests.
That's why it is suggested to use before or beforeEach hooks, to prepare application state right before running tests.
If you decide to run some scripts before or after the tests in different specs, the support file can be useful, because it is loaded before each spec file:
https://docs.cypress.io/guides/core-concepts/writing-and-organizing-tests#Support-file
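For illustration, a minimal sketch of what that could look like in the support file (the file path and the npm script are assumptions, adjust them to your setup):

// cypress/support/e2e.js -- sketch
// The support file is bundled with every spec, so this before() runs once
// at the start of each spec file.
before(() => {
  // Hypothetical reset command; replace with whatever prepares your state.
  cy.exec('npm run db:reset')
})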
As a final thought, if you want something to happen before or after running Cypress, then maybe the best way to accomplish this is to prepare a separate script and run it using whatever you use to run Cypress (node, docker, etc.).
Sorry that this answer got so long :)
To conclude my thoughts:
Better prepare the state for the tests, not clean up after them
Separate scripts can be run before or after tests, and they do not have to be inside Cypress (using node, bash, docker...)
Hope this helps!
To run a single spec out of process, take a look at the Module API.
With this you can create a node script that can be run after all parallel processes have completed.
./scripts/e2e-run-cleanup.js
const cypress = require('cypress')

cypress
  .run({
    spec: './cypress/e2e/cleanup.cy.js', // your cleanup spec
  })
  .then((results) => {
    console.log(results)
  })
  .catch((err) => {
    console.error(err)
  })
Some configuration:
cypress.config.js
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    baseUrl: 'http://localhost:1234',
    excludeSpecPattern: ['cleanup.cy.js'], // exclude this one from the normal run
  }
})
package.json
"scripts": {
...
"test:headless": "yarn cypress:run",
...
"cleanup": "yarn ./scripts/e2e-run-cleanup.js", // script to kick off
"posttest:headless": "yarn cleanup", // also after local run
// "post" prefix automatically
// runs this after "test:headless" script
...
}
main.yml
...
jobs:
  install:
    ...
  ui-chrome-tests:
    ...
  cleanup:
    ...
    needs: ui-chrome-tests # ensure tests have run
    steps:
      - run: yarn cleanup
You can use the After Run API for this use case:
module.exports = (on, config) => {
  on('after:run', (results) => {
    // run some code after the run
  })
}
or
const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      on('after:run', (results) => {
        // run some code after the run
      })
    }
  }
})
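For this question's parallel setup, note that after:run fires once per cypress run invocation, i.e. once per machine in a parallel run, not once for the whole group. A rough sketch of a handler (the db:cleanup script is a placeholder, not from the original answer):

// cypress.config.js -- sketch
const { defineConfig } = require('cypress')
const { execSync } = require('child_process')

module.exports = defineConfig({
  e2e: {
    setupNodeEvents(on, config) {
      on('after:run', (results) => {
        // results contains run totals such as totalPassed and totalFailed
        execSync('npm run db:cleanup', { stdio: 'inherit' }) // hypothetical cleanup script
      })
    }
  }
})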
I want to use Cypress to test locally and on CI at the same time. On CI, I would like to test a production version of my application, which obviously has a different baseUrl than the local version. I am using the https://github.com/bjowes/cypress-ntlm-auth package to use Windows authentication on CI, and to do so I have to call the cy.ntlm line in my tests. I want to write an IF statement that calls the cy.ntlm line ONLY if the baseUrl matches the production one. If the baseUrl is localhost, then I would like the cy.ntlm line to be skipped. So my bottom-line questions are: how do I let Cypress know that I want to use 2 different URLs, and how do I pack that into an IF statement? Thank you
You can check the baseUrl to conditionally call cy.ntlm:
const baseUrl = Cypress.config('baseUrl')! // use non-null assertion operator
const isLocal = baseUrl.includes('localhost')

if (!isLocal) {
  cy.ntlm(...)
}
When using TypeScript with Cypress you will get complaints because TypeScript has no idea whether you have set the baseUrl configuration or not.
You can overcome that by adding ! after getting the baseUrl value.
Ref: TypeScript non-null assertion operator
I separated the steps to make it clearer.
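If you want to avoid repeating the check in every spec, one option (not from the original answer; the hosts and credentials below are placeholders) is to put it in a before hook in the support file:

// cypress/support/e2e.ts -- sketch
before(() => {
  const baseUrl = Cypress.config('baseUrl')!
  if (!baseUrl.includes('localhost')) {
    // cy.ntlm(ntlmHosts, username, password[, domain]) per cypress-ntlm-auth;
    // the arguments here are placeholders for your own values
    cy.ntlm(['*.example.com'], Cypress.env('NTLM_USER'), Cypress.env('NTLM_PASS'))
  }
})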
Assuming your Cypress config file has a baseUrl, you can then override it from the CLI at run time. For this, create two different scripts with the staging and production URLs in your package.json, like this:
"scripts": {
"test:local": "cypress run --config baseUrl=https://example.com/staging",
"test:ci": "cypress run --config baseUrl=https://example.com/production"
}
Then to run the scripts in CI use npm run test:ci and for local use npm run test:local.
I want to reset my database before each "spec" starts, NOT before each test.
I see that I can add code in support/index.js:
before(function () {
  cy.exec('npm run db:reset')
  // This runs only once before ALL the specs
})

beforeEach(function () {
  cy.log('RUN BEFORE EACH TEST IN EACH SPEC')
})
I want it to run before each spec file. I am using the GUI and clicking "Run all specs"; I have multiple spec files and each spec has multiple tests.
UPDATE: My tests in the GUI failed because the database wasn't getting reset. I tried running the tests in the CLI and all tests pass. So does that mean the CLI does run it before each spec? Is it an issue only in the GUI?
I've been trying to use the mocha require option:
mocha mytest.js --require myglobals.js
But I don't know how to do this from Karma. The idea is to run karma start and have it automatically require myglobals.js.
Is that possible to do it from within karma.conf.js or somehow else?
Maybe I'm not using karma/mocha in the right way.
My idea is:
I want to have unit/integration tests for both the client (react) and the server (node/express)
I want to just run karma start and both client and server tests are tested
I found it very useful to have the following file pre-required, in order to avoid requiring the same things in every test:
myglobals.js:
const chai = require('chai');
// Load Chai assertions
global.expect = chai.expect;
global.assert = chai.assert;
chai.should();
// Load Sinon
global.sinon = require('sinon');
// Initialize Chai plugins
chai.use(require('sinon-chai'));
chai.use(require('chai-as-promised'));
chai.use(require('chai-things'));
For the server side I've made it work using the command:
mocha mytest.js --require myglobals.js
But still, I wanted to keep it running under the npm run test (which calls karma start) instead of creating another npm run test:server command.
Furthermore, I wanted to do the same on the client. I'm using webpack there as a preprocessor.
Any ideas if it is possible to accomplish that? Or maybe I'm in the wrong way?
Short Answer
Mocha in the browser does not support an equivalent of the --require option, but you do not need it. You can load whatever you need ahead of your tests by listing the files you want pre-loaded in Karma's files list ahead of your test files. Or, if you use a loader like RequireJS, write a test-main.js that first loads the modules you would otherwise load with --require, and then loads your test files.
Long Answer
If you look at Mocha's code you'll see that the only place --require is used is in the bin/_mocha file. This option is not passed further into the Mocha code but is immediately used to load the requested modules:
requires.forEach(function(mod) {
  require(mod);
});
When you run Mocha in the browser, none of this code is run, and if you look in the rest of the Mocha code you won't find a similar facility anywhere else. Why?
Because it would serve no purpose. The --require option is very useful at the command line. Without it, the only way to load modules before Mocha loads the test files would be to either write custom code to start Mocha or put the necessary require calls at the start of every single test file.
In the browser, if you do not use a module loader, you can just load the code you'd load using --require by putting the script elements that load them in front of the script elements that load your tests. In Karma, this means putting these files earlier in the list of files you have in your karma.conf.js. Or if you use RequireJS, for instance, you write test-main.js so that the loading is done in two phases: one that loads the modules you'd load through --require on the command-line, and a second that loads your test files. It could be something like:
const allTestFiles = [];
const TEST_REGEXP = /test\/test.*\.js$/i;

Object.keys(window.__karma__.files).forEach((file) => {
  if (TEST_REGEXP.test(file)) {
    const normalizedTestModule = file.replace(/^\/base\/|\.js$/g, "");
    allTestFiles.push(normalizedTestModule);
  }
});

require.config({
  baseUrl: "/base",
  paths: {
    ...
  },
});

// This guarantees that "a", "b", "c" load before any other module
require(["a", "b", "c", ...], () => {
  require(allTestFiles, window.__karma__.start);
});
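For the plain, non-RequireJS case, here is a minimal sketch of what "putting these files earlier in the list" means in karma.conf.js (the file paths are illustrative assumptions; adjust to your project layout):

// karma.conf.js -- sketch
module.exports = function (config) {
  config.set({
    frameworks: ['mocha'],
    files: [
      'test/myglobals.js', // loaded first, plays the role of --require
      'test/**/*.spec.js'  // test files are loaded afterwards
    ]
  });
};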
I'm trying to convert a project from browserify+mochify to webpack.
The webpack docs demonstrate how to use mocha-loader to run the tests with webpack-dev-server, but assumes a single entry point into the tests.
All the existing tests were designed with mochify in mind which does not require a single entry point as it recursively bundles ./test/*.js.
The setup below sort of works for me. It still uses mochify to run the tests (because it has all the phantomjs interfacing), but doesn't rely on anything from browserify. If you run webpack --watch, it reruns all tests when a file changes.
webpack.config.js:
var path = require("path");
var child_process = require('child_process');
module.exports = {
entry: {
tests: "./tests.js"
},
output: {
filename: "tests.js", // Should be a unique name
path: "/tmp"
},
plugins: [
// Automatically run all tests when webpack is done
function () {
this.plugin("done", function (stats) {
child_process.execSync('mochify /tmp/tests.js', { stdio: 'inherit'});
});
}
],
};
tests.js:
// List all test dirs here if you have multiple
var contexts = [
  require.context('./dir1/test', true, /\.js$/),
  require.context('./dir2/test', true, /\.js$/)
];

contexts.forEach(function (context) {
  context.keys().forEach(context);
});
Another approach is described here: https://stackoverflow.com/a/32386750/675011
This is not exactly an answer, but it solved the problem for me. The reason I wanted to combine mochify and webpack was that I wanted to use the browser console to debug my mocha tests. Since my mocha tests themselves don't rely on a browser, it was enough for me to finally realize I could use a node debugger, which brings up the Chrome console, (almost) solving my problem. node-inspector is the node debugger; since I'm using babel, I needed babel-node-debug, which doesn't yet work with babel 6, but there's an unmerged pull request that fixes it: https://github.com/CrabDude/babel-node-debug/pull/12.