I need to test the same logic hosted on different domains, e.g.
cy.visit('DomainA');
cy.sameTestLogic();
cy.visit('DomainB');
cy.sameTestLogic();
There are entire suites of tests that need to run on both domains, A and B. Does anyone have any suggestions based on experience on how to approach this?
Iterate the domains like this (applied per spec):
const domains = ['a', 'b']

domains.forEach(domain => {
  describe(`Testing domain ${domain}`, () => {
    beforeEach(() => {
      cy.visit(domain)
    })

    it('...', () => {
      // same test logic runs once per domain
    })
  })
})
To do something more flexible, take a look at the Module API
Node script
const cypress = require('cypress')

const domains = ['a', 'b']

domains.forEach(domain => {
  // run all specs or as specified
  cypress.run({
    ...
    env: {
      domain
    },
  })
})
Test
const domain = Cypress.env('domain')
cy.visit(domain)
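One caveat with the Node script above: cypress.run() returns a Promise, and forEach does not wait for it, so the per-domain runs can overlap. A small helper keeps them sequential (the name runAllDomains is hypothetical; `run` is injected so the sketch stays testable without Cypress installed):

```javascript
// Sketch: run Cypress once per domain, sequentially.
// In a real script, pass cypress.run from the Module API as `run`.
async function runAllDomains(domains, run) {
  const summaries = []
  for (const domain of domains) {
    // each run receives the domain via env; specs read it with Cypress.env('domain')
    const result = await run({ env: { domain } })
    summaries.push({ domain, result })
  }
  return summaries
}

// In a real project:
// const cypress = require('cypress')
// runAllDomains(['a', 'b'], cypress.run)
```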
I am very new to Cypress automation and have been following through some examples. I have run into an issue that does not appear to be addressed in any video I have seen, where multiple tests in the same describe do not run as expected.
If I create the following code and run it, it all works perfectly:
describe('My First Test', () => {
  it('Open Google', () => {
    cy.visit('https://google.co.uk')
    cy.get('#L2AGLb > .QS5gu').click()
    cy.get('.gLFyf').type('Automation Step by Step{Enter}')
  })
})
I have then attempted to split up the test into individual tests, as follows:
describe('My First Test', () => {
  it('Open Google', () => {
    cy.visit('https://google.co.uk')
  })
  it('Confirm warning', () => {
    cy.get('#L2AGLb > .QS5gu').click()
    cy.get('.gLFyf').type('Automation Step by Step{Enter}')
  })
  it('Confirm warning', () => {
    cy.get('.gLFyf').type('Automation Step by Step{Enter}')
  })
})
The problem now is that after opening Chrome and moving on to the next test, which should allow me to type the text, a 'default blank page' is displayed and the tests then fail.
What am I missing here to be able to run these three tests fully?
Code in VS Code
Error after opening Chrome & attempting to type in box
As above really, I was hoping to be able to run all three simple tests together.
EDIT: I rolled back my version of Cypress to 10.10.0 and it works perfectly, so I have no idea what has changed in the latest version, released yesterday.
Try it with Test Isolation turned off.
Best Practice: Tests should always be able to be run independently from one another and still pass
This was added in Cypress 12.0.0.
But if you want to try running without it:
Test Isolation Disabled

testIsolation: true — before each test, Cypress:
- clears the page by visiting about:blank
- clears cookies in all domains
- clears local storage in all domains
- clears session storage in all domains

testIsolation: false — does not alter the current browser context between tests.
cypress.config.js

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    testIsolation: false,
  },
})
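If only some suites need the old behaviour, test isolation can also be switched off per suite via suite-level configuration (Cypress 12+) rather than globally. A minimal sketch using the question's test:

```javascript
// Suite-level override: isolation stays on for the rest of the project.
describe('My First Test', { testIsolation: false }, () => {
  it('Open Google', () => {
    cy.visit('https://google.co.uk')
  })

  it('Confirm warning', () => {
    // browser state from the previous test is preserved within this suite
    cy.get('#L2AGLb > .QS5gu').click()
  })
})
```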
You should visit your page for every test. You can put the cy.visit in a beforeEach function.
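For the Google example from the question, a minimal sketch (selectors copied from the question; with test isolation on, each test must start with its own visit):

```javascript
// Sketch of the question's spec with the visit moved into a hook.
// Each it() now starts from a freshly loaded page.
describe('My First Test', () => {
  beforeEach(() => {
    cy.visit('https://google.co.uk')
  })

  it('Confirm warning', () => {
    cy.get('#L2AGLb > .QS5gu').click()
  })

  it('Search', () => {
    // the consent dialog may reappear after the fresh visit; dismiss it again if so
    cy.get('.gLFyf').type('Automation Step by Step{Enter}')
  })
})
```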
How can we run parallel Cypress tests with multiple users in a cypress-cucumber-preprocessor BDD framework? I can't use a single user in a parallel run, as the latest session kicks out the existing Cypress test run, and that existing run then fails.
Please note: in our web application, if a user logs in and performs some actions, and at the same time logs in with the same user from a different browser, the first session gets logged out.
We are using
bitbucket
jenkins CI/CD pipeline
docker container
Could someone please advise if you have come across a similar situation?
Automation folder structure:
tests/
  cypress/
    integration/
      folder1/
        test1.feature
      folder2/
        test2.feature
        test3.feature
      folder3/
        test4.feature
      folder4/
        test5.feature
      folder5/
        test6.feature
      folder6/
        test7.feature
If you just need multiple users, but the tests are completely independent, you can have multiple user credentials in a fixture,
for example
// users.json
[
  {
    "userId": 1,
    "username": "John",
    "password": "abc"
  },
  {
    "userId": 2,
    "username": "Jack",
    "password": "def"
  }
  // ... and so on
]
Each test will pick a different user in the beginning
// test1
import { Given } from "@badeball/cypress-cucumber-preprocessor";

Given('User has logged in with unique credentials', () => {
  cy.fixture('users.json').then(users => {
    const user = users[0]
    cy.login(user.username, user.password)
  })
})
// test2
import { Given } from "@badeball/cypress-cucumber-preprocessor";

Given('User has logged in with unique credentials', () => {
  cy.fixture('users.json').then(users => {
    const user = users[1]
    cy.login(user.username, user.password)
  })
})
Then if the tests overlap when parallel running, the server sees different users logging on.
Or use the Cucumber Before hook:
import { Before } from "@badeball/cypress-cucumber-preprocessor";

Before(function () {
  cy.fixture('users.json').then(users => {
    const user = users[0]
    cy.login(user.username, user.password)
  })
})
You can have each test create a new user with appropriate permissions, using APIs for quicker execution. You may need a separate clean-up process to remove all the created users. The benefit of this is a truly clean state for each test, with no need to worry about shared user accounts.
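A sketch of that approach (the /api/users endpoint, the cy.login command, and the custom command name are assumptions; only the uniqueUser helper below is concrete):

```javascript
// Pure helper: build a unique throwaway user per test.
// Timestamp-based names make the later clean-up pass easy to target.
function uniqueUser(prefix, now = Date.now()) {
  return {
    username: `${prefix}-${now}`,
    password: `pw-${now}-${Math.random().toString(36).slice(2)}`,
  }
}

// In cypress/support/commands.js it might be wired up like this
// (endpoint and command names are hypothetical):
// Cypress.Commands.add('createAndLoginUser', () => {
//   const user = uniqueUser('e2e')
//   return cy.request('POST', '/api/users', user)
//     .then(() => cy.login(user.username, user.password))
// })
```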
I'm currently trying to get code coverage on my fastify routes using Mocha and NYC.
I've tried instrumenting the code beforehand and then running the tests on the instrumented code as well as just trying to setup NYC in various ways to get it to work right.
Here is my current configuration (all previous ones produced the same code coverage output):
nyc config
"nyc": {
"extends": "#istanbuljs/nyc-config-typescript",
"extension": [
".ts",
".tsx"
],
"exclude": [
"**/*.d.ts",
"**/*.test.ts"
],
"reporter": [
"html",
"text"
],
"sourceMap": true,
"instrument": true
}
Route file:
const routes = async (app: FastifyInstance, options) => {
  app.post('/code', async (request: FastifyRequest, response: FastifyReply<ServerResponse>) => {
    // route logic in here
  });
};
The integration test:
import * as fastify from 'fastify';
import * as sinon from 'sinon';
import * as chai from 'chai';

const expect = chai.expect;
const sinonChai = require('sinon-chai');
chai.use(sinonChai);

describe('When /code POST is called', () => {
  let app;

  before(() => {
    app = fastify();
    // load routes for integration testing
    app.register(require('../path/to/code.ts'));
  });

  after(() => {
    app.close();
  });

  it('then a code is created and returned', async () => {
    const { statusCode } = await app.inject({
      url: '/code',
      method: 'POST',
      payload: { code: 'fake_code' }
    });
    expect(statusCode).to.equal(201);
  });
});
My unit test call looks like the following:
nyc mocha './test/unit/**/*.test.ts' --require ts-node/register --require source-map-support/register --recursive
I literally get 5% code coverage, just for the const routes = line. I'm really banging my head trying to figure this one out. Any help would be greatly appreciated! None of the other solutions I have investigated on here work.
I have a detailed example for TypeScript + Mocha + NYC. It also contains fastify tests, including route tests (inject), as well as mock, stub, and spy tests using Sinon, all with async/await.
It uses current versions and also covers unused files as well as VS Code launch configs. Feel free to check it out here:
https://github.com/Flowkap/typescript-node-template
Specifically, I think that
"instrument": true
messes up the results. Here's my working .nycrc.yml:
extends: "@istanbuljs/nyc-config-typescript"
reporter:
- html
- lcovonly
- clover
# those 2 are for commandline outputs
- text
- text-summary
report-dir: coverage
I have proper coverage even for the mocked and stubbed parts of fastify in my above-mentioned example.
I'm using Apollo Client in an Exponent React Native app and have noticed that the graphql options method gets run 11 times. Why is that? Is it an error or a performance problem? Is that normal? Is it running the query 11 times as well?
...
@graphql(getEventGql, {
  options: ({ route }) => {
    console.log('why does this log 11 times', route.params);
    return {
      variables: {
        eventId: route.params.eventId,
      }
    }
  },
})
@graphql(joinEventGql)
@connect((state) => ({ user: state.user }))
export default class EventDetailScreen extends Component {
...
Looking at the sample from the documentation http://dev.apollodata.com/react/queries.html
Typically, variables to the query will be configured by the props of the wrapper component; wherever the component is used in your application, the caller would pass arguments. So options can be a function that takes the props of the outer component (ownProps by convention):
// The caller could do something like:
<ProfileWithData avatarSize={300} />

// And our HOC could look like:
const ProfileWithData = graphql(CurrentUserForLayout, {
  options: ({ avatarSize }) => ({ variables: { avatarSize } }),
})(Profile);
By default, graphql will attempt to pick up any missing variables from the query from ownProps. So in our example above, we could have used the simpler ProfileWithData = graphql(CurrentUserForLayout)(Profile);. However, if you need to change the name of a variable, or compute the value (or just want to be more explicit about things), the options function is the place to do it.
When writing a unit test in Jest or Jasmine when do you use describe?
When do you use it?
I usually do
describe('my beverage', () => {
  test('is delicious', () => {
  });
});
When is it time for a new describe or a new it?
describe breaks your test suite into components. Depending on your test strategy, you might have a describe for each function in your class, each module of your plugin, or each user-facing piece of functionality.
You can also nest describes to further subdivide the suite.
it is where you perform individual tests. You should be able to describe each test like a little sentence, such as "it calculates the area when the radius is set". You shouldn't be able to subdivide tests further; if you feel like you need to, use describe instead.
describe('Circle class', function() {
  describe('area is calculated when', function() {
    it('sets the radius', function() { ... });
    it('sets the diameter', function() { ... });
    it('sets the circumference', function() { ... });
  });
});
As I mentioned in this question, describe is for grouping, it is for testing.
As the Jest docs say, test and it are the same:
https://jestjs.io/docs/en/api#testname-fn-timeout
test(name, fn, timeout)
Also under the alias: it(name, fn, timeout)
and describe is just for when you prefer your tests to be organized into groups:
https://jestjs.io/docs/en/api#describename-fn
describe(name, fn)
describe(name, fn) creates a block that groups together several related tests. For example, if you have a myBeverage object that is supposed to be delicious but not sour, you could test it with:
const myBeverage = {
  delicious: true,
  sour: false,
};

describe('my beverage', () => {
  test('is delicious', () => {
    expect(myBeverage.delicious).toBeTruthy();
  });
  test('is not sour', () => {
    expect(myBeverage.sour).toBeFalsy();
  });
});
This isn't required - you can write the test blocks directly at the top level. But this can be handy if you prefer your tests to be organized into groups.
I consider this more from the impact on the test output: by using describe, or multiple levels of describe, you can group your output for readability.