I run Cypress from the terminal using an npm script. I would like to run a series of checks before any tests in any spec are executed (e.g. to ensure env variables are set as expected).
If any check fails, I'd like to log some debug info and have Cypress mark each spec as failed.
Is something like this possible without needing a custom script that executes before Cypress is started?
I've played around with the support file, but logging to the terminal and failing all test specs seems problematic.
Yes, you can use the Before Run API in the plugins file:
module.exports = (on, config) => {
  on('before:run', (details) => {
    /* ... */
  })
}
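To address the original question more directly, here is a minimal sketch of what such a pre-run check could look like. The REQUIRED_VARS list and its names are hypothetical, and note that before:run fires during cypress run (interactive mode needs the experimentalInteractiveRunEvents flag); throwing from the handler should abort the run with an error rather than literally marking each spec failed:
// cypress/plugins/index.js
module.exports = (on, config) => {
  on('before:run', (details) => {
    // Hypothetical list of env variables the tests depend on
    const REQUIRED_VARS = ['apiEndpoint', 'authToken']
    const missing = REQUIRED_VARS.filter((name) => !config.env[name])
    if (missing.length > 0) {
      // Log debug info to the terminal, then throw to abort the run
      console.error('Pre-run check failed; missing env vars:', missing.join(', '))
      console.error('Browser:', details.browser && details.browser.name)
      throw new Error('Environment is not configured correctly')
    }
  })
}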
I have enabled "experimentalInteractiveRunEvents": true in cypress.json to run the after:spec event of the cypress-testrail-simple plugin through the Cypress IDE.
Link to the plugin: https://github.com/bahmutov/cypress-testrail-simple.git
I am using this event in cypress-testrail-simple's plugin.js file.
I have defined the plugin in plugins/index.js and enabled the flag in the cypress.json file.
I have multiple spec files.
When running through the command line:
npx cypress run
after:spec gets called after each spec run finishes.
When running through the Cypress IDE:
npx cypress open
after:spec is never called.
Reference: how to run Cypress in interactive mode with the flag enabled
There's a fundamental difference between npx cypress open (interactive mode) and npx cypress run (command line mode).
Interactive mode
All the specs run as a single continuous run, as if they had been written as one spec. Because of this, after:spec is called only once.
Command line mode
All specs are run as separate runs, hence a separate call to after:spec for each spec.
From the After Spec API docs:
When running cypress via cypress open, the event will fire when the browser closes.
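For reference, a minimal sketch of the setup under discussion (a pre-10 Cypress project with a plugins file; the log line is illustrative):
// cypress.json
{
  "experimentalInteractiveRunEvents": true
}
// cypress/plugins/index.js
module.exports = (on, config) => {
  on('after:spec', (spec, results) => {
    // In `cypress run` this fires after each spec; in `cypress open`
    // (with the flag above) it fires once, when the browser closes
    console.log('Finished spec:', spec.relative)
  })
}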
I have a feature file with N > 10 tests defined in BDD.
For debugging purposes, I need to execute only one of the tests many times, but I am not finding any other way than commenting out the other tests.
I am now trying to use the #focus tag in front of the test; the Cypress IDE marks the rest of the tests as "sync skip; aborting execution", but the test that I need to run says "No commands were issued in this test."
This is the example .feature file I'm playing with:
#e2eTests
#cases
Feature: Test feature
Scenario: Test 1
Given foo
When bar
Then foobar
#focus
Scenario: Test 2
Given bar
When foo
Then barfoo!
And when running npm run cypress and selecting this feature, I get the result described above.
What am I doing wrong?
Thanks
I would recommend using cypress-grep if you are looking for a flaky test that needs repeating n times.
After installation and configuration (see: Burning Tests with cypress-grep), indicate in the terminal which test needs to be repeated and how many times (Cypress 7.6):
npx cypress run --spec cypress/folder/with/test/testFile.spec.ts --env grep="name/description of specific test",burn=<numberOfIterations>
Example:
npx cypress run --spec cypress/integration/testExample.spec.ts --env grep="A popup should appear after login",burn=250
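For completeness, cypress-grep also needs to be registered in the support file after installation (the package has since been renamed to @cypress/grep; the original name is kept here to match the answer):
// cypress/support/index.js
require('cypress-grep')()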
I have a few different environments in which I am running Cypress tests (i.e. envA, envB, envC)
I run the tests like so:
npm run cypress:open -- --env apiEndpoint=https://app-envA.mySite.com
npm run cypress:open -- --env apiEndpoint=https://app-envB.mySite.com
npm run cypress:open -- --env apiEndpoint=https://app-envC.mySite.com
As you can see, the apiEndpoint varies based on the environment.
In one of my Cypress tests, I am testing a value that changes based on the environment being tested.
For example:
expect(resourceTiming.name).to.eq('https://cdn-envA.net/myPage.html')
As you can see, the text envA appears in this assertion.
The issue I'm facing is that if I run this test in envB, it will fail like so:
Expected: expect(resourceTiming.name).to.eq('https://cdn-envB.net/myPage.html')
Actual: expect(resourceTiming.name).to.eq('https://cdn-envA.net/myPage.html')
My question is - how can I update the spec files so that the correct URL is asserted when I run in the different environments?
I am wondering if there's a way to pass a value from the command line to the spec file to tell the spec file which environment I'm in, but I'm not sure if that's possible.
You can directly use Cypress.env('apiEndpoint') in your assertions, so that your spec files see exactly the value you pass via the CLI:
expect(resourceTiming.name).to.eq(Cypress.env('apiEndpoint'))
And if, when you pass https://app-envA.mySite.com, the URL you expect in the spec file is https://cdn-envA.net/myPage.html, you can derive it:
expect(resourceTiming.name).to.eq(Cypress.env('apiEndpoint').replace('app', 'cdn').replace('mySite.com', 'net') + '/myPage.html')
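If the chained replace() calls feel fragile, an explicit lookup table is an alternative sketch (the hosts below just mirror the ones in the question):
// Hypothetical mapping from apiEndpoint to the expected CDN origin
const cdnByApi = {
  'https://app-envA.mySite.com': 'https://cdn-envA.net',
  'https://app-envB.mySite.com': 'https://cdn-envB.net',
  'https://app-envC.mySite.com': 'https://cdn-envC.net',
}
expect(resourceTiming.name).to.eq(cdnByApi[Cypress.env('apiEndpoint')] + '/myPage.html')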
Your best bet, in my opinion, is to utilize environment configs (envA.json, envB.json, etc.).
Keep all of the property names in the configs identical, and then apply the values based on the environment:
// envA.json file
{
  "env": {
    "baseUrl": "yourUrlForEnvA.com"
  }
}
// envB.json file
{
  "env": {
    "baseUrl": "yourUrlForEnvB.com"
  }
}
That way, you can call Cypress.env('baseUrl') in your test, and no matter what, the right property should be loaded in.
You would call your environment from the command line with the following syntax:
"cypress run --config-file cypress\\config\\envA.json",
This sets up the test run to grab the right config from the start.
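For example, the npm scripts might look like this (the script names are hypothetical):
// package.json
"scripts": {
  "test:envA": "cypress run --config-file cypress\\config\\envA.json",
  "test:envB": "cypress run --config-file cypress\\config\\envB.json"
}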
Calling the url for login, for example, would be something like:
cy.login(Cypress.env('baseUrl'))
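(cy.login is assumed to be a custom command; a minimal sketch of what it might look like:)
// cypress/support/commands.js
Cypress.Commands.add('login', (baseUrl) => {
  cy.visit(baseUrl + '/login') // hypothetical login page path
  // ...fill in credentials and submit, as appropriate for your app
})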
Best of luck to you!
I am using Cypress for end to end testing, and I would like to be able to see all run test suites, in the browser, even after they are run. Currently, after each test suite is completed (test which are stored in separate files), the browser reloads and I cannot see previously run tests, and after the final test suite, the browser closes. Is there an option to change this behavior so that I can run all test files, have all the results visible in the browser and that the browser doesn't close at the end?
I am currently running tests using this command: ./node_modules/.bin/cypress run --headed --spec 'cypress/integration/tests/*'
where /tests is the folder where I currently have my files.
I have added --no-exit but in this case cypress doesn't move to the next test file and only the first one runs.
A workaround solution could be to generate reports with Mochawesome for each test spec, and then merge and view those rendered reports. The reports will contain the results from the tests, test bodies, any errors that occurred, and some other bits of information.
If you read through the page in the link it shows you how to generate individual reports then combine them together, and then render them as HTML. These can then be viewed in the browser.
This command can be used to install what's needed: npm install --save-dev mochawesome mochawesome-merge mochawesome-report-generator
and then add the Reporter configuration to the cypress.json:
{
"reporter": "mochawesome",
"reporterOptions": {
"reportDir": "cypress/results",
"overwrite": false,
"html": false,
"json": true
}
}
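With that configuration, each spec writes a JSON report into cypress/results; a typical merge-and-render sequence (the output file name is up to you) is:
npx mochawesome-merge cypress/results/*.json > mochawesome.json
npx marge mochawesome.json
This produces an HTML report that can then be opened in the browser.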
Keep in mind that it may not give you the level of detail that is contained in the Cypress Dashboard in the browser, for example, what was yielded from a request.
Cypress has a lot of possible commands, with a lot of possible config options too.
Read this.
And if you use npm, just run like this:
npm run cypress:open
and in your package.json:
"scripts": {
"cypress:open": "cypress open"
}
I run my Protractor tests using a gulp task and I pass all the parameters in the gulp task like:
gulp protractor-integration --useProxy=true --baseUrl=http://10.222.25.18:81 --apiUrl=10.124.22.213:8080 --suite=tests
I have tried to set up a configuration in WebStorm for Gulp and pass all the parameters there. When I hit run the correct tests are executed.
When I put a breakpoint and hit debug, the tests are executed but WebStorm does not stop at the breakpoints.
The Gulp configuration in WebStorm for debugging is not working (see the sample picture).
The Gulp run configuration is not supposed to be used for debugging Protractor tests; it was designed to run/debug Gulp tasks.
To debug a Node.js application such as Protractor tests, you need to make sure that debug arguments (--debug-brk/--inspect-brk) are passed to the Node process that starts the application. In your case, the application is spawned as a child process by Gulp. The IDE can only pass debug args to the main process (Gulp), which is why only the Gulp tasks themselves will be debugged and not the child processes they start.
If you still prefer using Gulp to start your tests instead of the dedicated Protractor run configuration, make sure the protractor process is started with --debug-brk/--inspect-brk.
To do this, you need to change node_modules/gulp-protractor/index.js accordingly. For example, modifying the childProcess.fork call as follows will start Protractor with --inspect-brk=5860:
child = childProcess.fork(getProtractorCli(), args, {
  stdio: 'inherit',
  env: process.env,
  execArgv: ['--inspect-brk=5860'] // added line
}).on('exit', function(code) {
...
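With that change, the Protractor child process pauses on startup and waits for a debugger on port 5860; you can then attach from WebStorm (for example with an Attach to Node.js/Chrome run configuration pointed at that port) and your breakpoints should be hit.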