Some test cases always fail with jasmine.DEFAULT_TIMEOUT_INTERVAL - jasmine

I am creating end-to-end (e2e) tests using Protractor with Jasmine and Angular 6. I have written almost 10 test cases. They all work, but some cases always fail, and they fail because of a Jasmine timeout. I have configured the timeout value as shown below, but I am not getting consistent results: sometimes a test case passes, but on the next run it may pass or fail. I have searched on Google but have not found a useful solution.
I have defined some common helper methods for waiting:
waitForElement(element: ElementFinder) {
  browser.waitForAngularEnabled(false);
  browser.wait(() => element.isPresent(), 100000, 'timeout: ');
}

waitForUrl(url: string) {
  // pass the ExpectedCondition itself to browser.wait, rather than a
  // function returning it, so the condition is actually polled
  browser.wait(protractor.ExpectedConditions.urlContains(url), 100000, 'timeout');
}
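For illustration, here is a hypothetical spec step using these helpers (the locator and URL are made up, and the helpers are assumed to be in scope, e.g. via a page object). The point to notice is that each call can block for up to 100000 ms:

it('opens the dashboard after login', () => {
  // hypothetical locator and URL, purely for illustration
  waitForElement(element(by.css('.dashboard-title'))); // can wait up to 100000 ms
  waitForUrl('/dashboard');                            // can wait up to 100000 ms
  expect(element(by.css('.dashboard-title')).isDisplayed()).toBeTruthy();
});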
And in protractor.conf.js I have defined:
jasmineNodeOpts: {
  showColors: true,
  includeStackTrace: true,
  defaultTimeoutInterval: 20000,
  print: function () {
  }
}
I am getting the errors below:
- Error: Timeout - Async callback was not invoked within timeout specified by jasmine.DEFAULT_TIMEOUT_INTERVAL.
- Failed: stale element reference: element is not attached to the page document
(Session info: chrome=76.0.3809.100)
(Driver info: chromedriver=76.0.3809.12 (220b19a666554bdcac56dff9ffd44c300842c933-refs/branch-heads/3809#{#83}),platform=Windows NT 10.0.17134 x86_64)

I have found the solution:
I had configured a wait timeout of 100000 ms for each individual element lookup, while the whole-spec timeout was only 20000 ms. So I followed this process:
Keep the full spec timeout greater than the sum of all element-wait timeouts. I configured defaultTimeoutInterval in jasmineNodeOpts to be greater than the sum of the wait timeouts a test case can accumulate, and then added a large value, allScriptsTimeout: 2000000, inside exports.config. That resolved my problem.
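Putting it together, a minimal sketch of the adjusted protractor.conf.js (the defaultTimeoutInterval value here is illustrative; the point is only that it exceeds the element waits a single spec can accumulate):

exports.config = {
  // generous overall script timeout for the whole run, as described above
  allScriptsTimeout: 2000000,
  jasmineNodeOpts: {
    showColors: true,
    includeStackTrace: true,
    // must exceed the sum of the per-element wait timeouts in a spec;
    // illustrative value covering ~10 waits of 100000 ms each
    defaultTimeoutInterval: 1100000,
    print: function () {
    }
  }
};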
NB: I gave this answer because I think it may help others who face this kind of problem.

Related

Run specific part of Cypress test multiple times (not whole test)

Is it possible to run a specific part of a test in Cypress over and over again without executing the whole test case? I get an error in the second part of a test case, and the first half of it takes 100 s. That means I have to wait 100 s every time just to reach the point where the error occurs. I would like to rerun the test case from a few steps before the error occurs. So, once again, my question is: is this possible in Cypress? Thanks
Workaround #1
If you are using Cucumber in Cypress, you can convert your scenario into a Scenario Outline that executes N times, together with a scenario tag:
#runMe
Scenario Outline: Visit Google Page
  Given that google page is displayed

  Examples:
    | nthRun |
    | 1      |
    | 2      |
    | 3      |
    | 4      |
    | 100    |
After that, run the test in the terminal through tags:
./node_modules/.bin/cypress-tags run -e TAGS='#runMe'
Reference: https://www.npmjs.com/package/cypress-cucumber-preprocessor?activeTab=versions#running-tagged-tests
Workaround #2
Cypress does have retry capability, but it only retries a scenario on failure. You can force your scenario to fail so that it is retried N times, again using a scenario tag:
In your cypress.json add the following configuration:
{
  "retries": {
    // Configure retry attempts for `cypress run`
    // Default is 0
    "runMode": 99,
    // Configure retry attempts for `cypress open`
    // Default is 0
    "openMode": 99
  }
}
Reference: https://docs.cypress.io/guides/guides/test-retries#How-It-Works
Next, in your feature file, add an unknown step as the last step of your scenario to make it fail:
#runMe
Scenario: Visit Google Page
  Given that google page is displayed
  And I am an unknown step
Then run the test through tags:
./node_modules/.bin/cypress-tags run -e TAGS='#runMe'
For a solution that doesn't require changing the config file, you can pass retries as a parameter to specific tests that are known to be flaky for acceptable reasons.
https://docs.cypress.io/guides/guides/test-retries#Custom-Configurations
Meaning you can write (from the docs):
describe('User bank accounts', {
  retries: {
    runMode: 2,
    openMode: 1,
  }
}, () => {
  // The per-suite configuration is applied to each test
  // If a test fails, it will be retried
  it('allows a user to view their transactions', () => {
    // ...
  })

  it('allows a user to edit their transactions', () => {
    // ...
  })
})
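The same docs page also allows the configuration per individual test rather than per suite, which matches the "specific flaky tests" case; a minimal sketch:

it('allows a user to view their transactions', {
  retries: {
    runMode: 2,
    openMode: 1,
  }
}, () => {
  // only this test gets the extra retries
})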

Terminate / skip / stop all tests from all spec files if any one test fails in Cypress

I am trying to skip all other tests from all spec files if one test fails, and found a working solution over here: Is there a reliable way to have Cypress exit as soon as a test fails?. However, this only seems to work if the test fails in it() assertions. How can we skip the tests if something fails in beforeEach()?
For example:
before(() => {
  cy.get('[data-name="email-input"]').type(email);
  cy.get('[data-name="password-input"]').type(email);
  cy.get('[data-name="account-save-btn"]').click();
});
If something goes wrong in the above code (for example: CypressError: Timed out retrying: Expected to find element: '[data-name="email-input"]', but never found it.), then stop/skip all tests in all spec files.
Just in case anyone is looking for an answer to the same question, I have found a solution and would like to share it.
To implement the solution I use a cookie that is set to true if something fails; before executing each test, Cypress checks the value of the cookie, and if the value is 'true' it skips the test.
Cypress.on('fail', error => {
  document.cookie = "shouldSkip=true";
  throw error;
});

function stopTests() {
  cy.getCookie('shouldSkip').then(cookie => {
    if (cookie && typeof cookie === 'object' && cookie.value === 'true') {
      Cypress.runner.stop();
    }
  });
}

beforeEach(stopTests);
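To make this take effect across every spec file without repeating it, both the fail handler and the global hook can live in the support file, which Cypress loads before each spec; a minimal sketch, assuming the default pre-Cypress-10 cypress/support/index.js location:

// cypress/support/index.js -- loaded before every spec file
Cypress.on('fail', error => {
  // mark the run as failed so subsequent tests can bail out
  document.cookie = "shouldSkip=true";
  throw error;
});

beforeEach(function stopTests() {
  cy.getCookie('shouldSkip').then(cookie => {
    if (cookie && cookie.value === 'true') {
      // stop the runner before the test body executes
      Cypress.runner.stop();
    }
  });
});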
Also note: tests should be written in it() blocks; avoid using before() to write test logic.
As of Cypress 10, tests don't run if a before or beforeEach hook fails.

Cypress - first test randomly fails with "Invalid or unexpected token"

We recently switched to Cypress parallelization for our Angular project in our pipeline. We run on AWS CodeBuild with 5 threads of the Cypress runner. About a quarter of the time, the first test on one of the threads fails with this error:
An uncaught error was detected outside of a test
Invalid or unexpected token
This error originated from your test code, not from Cypress.
When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.
Cypress could not associate this error to any specific test. We dynamically generated a new test to display this failure.
I have tried many things to fix this, including setting modifyObstructiveCode to false, setting chromeWebSecurity to false, and upgrading Cypress. We are already catching uncaught exceptions, so that doesn't seem like it should be the issue. I turned on some extra logs for this, and here is the output:
[3] 2020-03-06T19:57:20.369Z cypress:server:project onMocha start
[3] 2020-03-06T19:57:20.369Z cypress:server:reporter got mocha event 'start' with args: [ { start: '2020-03-06T19:57:20.366Z' } ]
[3] 2020-03-06T19:57:20.374Z cypress:server:project onMocha suite
[3] 2020-03-06T19:57:20.374Z cypress:server:reporter got mocha event 'suite' with args: [ { id: 'r1', title: '', root: true, type: 'suite', file: 'cypress/integration/ci-tests/content-acquisition/channels/channel-manual-upload-run-acquired-items-tab.spec.ts' } ]
[3]
[3] 2020-03-06T19:57:20.390Z cypress:server:project onMocha test
[3] 2020-03-06T19:57:20.391Z cypress:server:reporter got mocha event 'test' with args: [ { id: 'r2', title: 'An uncaught error was detected outside of a test', body: 'function throwErr() {\n throw err;\n }', type: 'test' } ]
[3] 2020-03-06T19:57:20.555Z cypress:server:reporter got mocha event 'fail' with args: [ { id: 'r2', title: 'An uncaught error was detected outside of a test', err: { message: 'Unexpected end of input\n' + '\n' + 'This error originated from your test code, not from Cypress.\n' + '\n' + 'When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.\n' + '\n' + 'Cypress could not associate this error to any specific test.\n' + '\n' + 'We dynamically generated a new test to display this failure.', name: 'Uncaught SyntaxError', stack: 'Uncaught SyntaxError: Unexpected end of input\n' + '\n' + 'This error originated from your test code, not from Cypress.\n' + '\n' + 'When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.\n' + '\n' + 'Cypress could not associate this error to any specific test.\n' + '\n' + 'We dynamically generated a new test to display this failure.' }, state: 'failed', body: 'function throwErr() {\n throw err;\n }', type: 'test', duration: 179, wallClockStartedAt: '2020-03-06T19:57:20.374Z', timings: { lifecycle: 26, test: [Object] } } ]
I couldn't really make anything of these errors, but maybe someone else can. I'm kind of out of ideas on what to try (I've tried more things today than I've listed but can't recall them all). Any ideas?
As setting modifyObstructiveCode to false didn't help you, unlike the folks in https://github.com/cypress-io/cypress/issues/6132, I can share the debug procedure I used when I encountered a similar flaky "unexpected .." error with Cypress:
cypress run has a burn= param that lets it run repeatedly. Enable .har output recording for those runs with the cypress-har-generator plugin.
When you have two groups of example .har files, successful and failing, for the same request, open them in a browser to compare whether anything stands out.
I used diff plus jq queries on the .har files to compare the content per request path between the groups, but already opening a failing .har in the browser inspector's network tab showed a 30 s processing time for a .js path that was ultimately incomplete, and thus violated JS syntax, causing an "unexpected end of input" error, similar to your "unexpected token".
Interestingly, this occurred in the same file at the same code line, hinting at a parsing problem in Cypress.
We exchanged that dependency (or, more specifically, updated it and changed how it was webpacked), Cypress stopped hiccuping on the resource, and the flakiness disappeared.
My impression is that running parallel threads of Cypress contributes to the problem occurring.
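For reference, a rough sketch of the comparison idea (not the exact jq queries used): diff the response sizes per request URL between a passing and a failing capture. The file names pass.har and fail.har are hypothetical; the log.entries structure is the standard HAR schema.

const fs = require('fs');

// collect response body size per request URL from one HAR capture
function responseSizesByUrl(harPath) {
  const har = JSON.parse(fs.readFileSync(harPath, 'utf8'));
  const sizes = new Map();
  for (const entry of har.log.entries) {
    sizes.set(entry.request.url, entry.response.content.size);
  }
  return sizes;
}

const passing = responseSizesByUrl('pass.har');
const failing = responseSizesByUrl('fail.har');
for (const [url, size] of passing) {
  if (failing.has(url) && failing.get(url) !== size) {
    // a truncated .js resource shows up here as a smaller failing size
    console.log(`response size differs for ${url}: ${size} vs ${failing.get(url)}`);
  }
}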

Increase pageSize of Generic Test Data results in SonarQube 6.2

I have imported test results according to https://docs.sonarqube.org/display/SONAR/Generic+Test+Data into SonarQube 6.2.
I can look at the detailed test results in Sonar by navigating to the test file and then clicking the "Show Measures" menu. The opened page shows me the correct total number of tests, 293, of which 31 failed. The test result details section, however, only shows 100 test results.
This page seems to get its data through a request like: http://localhost:9000/api/tests/list?testFileId=AVpC5Jod-2ky3xCh908m
with a result of:
{
  paging: {
    pageIndex: 1,
    pageSize: 100,
    total: 293
  },
  tests: [
    {
      id: "AVpDK1X_-2ky3xCh91QQ",
      name: "GuiButton:Type Checks->disabledBackgroundColor",
      fileId: "AVpC5Jod-2ky3xCh908m",
      fileKey: "org.sonarqube:Scripting-Tests-Publishing:dummytests/ScriptingEngine.Objects.GuiButtonTest.js",
      fileName: "dummytests/ScriptingEngine.Objects.GuiButtonTest.js",
      status: "OK",
      durationInMs: 8
    },
    ...
  ]
}
From this I gather that the page size is set to 100 in the backend. Is there a way to increase it so that I can see all test results?
You can certainly call the web service with a larger page size parameter value, but you cannot change the page size requested by the UI.
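For example, a quick sketch of such a call, assuming the standard SonarQube paging parameters ps (page size) and p (page index) that its web services generally accept; the testFileId is the one from the question:

// fetch up to 500 results in one page instead of the UI's default 100
fetch('http://localhost:9000/api/tests/list?testFileId=AVpC5Jod-2ky3xCh908m&ps=500')
  .then(res => res.json())
  .then(data => console.log(`${data.tests.length} of ${data.paging.total} tests`));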

Jest silently ignores errors

Given the following test.js
var someCriticalFunction = function() {
  throw 'up';
};

describe('test 1', () => {
  it('is true', () => {
    expect(true).toBe(true);
  })
});

describe('test 2', () => {
  it('is ignored?', () => {
    someCriticalFunction();
    expect(true).toBe(false);
  })
});
Running Jest will output
Using Jest CLI v0.9.2, jasmine2
PASS __tests__/test.js (0.037s)
1 test passed (1 total in 1 test suite, run time 0.795s)
This pretty much gives you the impression that everything is great. So how do I avoid shooting myself in the foot while writing tests? I just spent an hour writing tests, and now I wonder how many of them actually ran, because I will certainly not count all of them and compare against the number in the report.
Is there a way to make Jest fail as loudly as possible when an error is thrown as part of the test suite rather than the tests themselves (like preparation code)? Putting it inside beforeEach doesn't change anything.
This is fixed in jest-cli 0.9.3: https://github.com/facebook/jest/pull/812
Which for some reason is unpublished again (I had it installed minutes ago). Anyway, thrown strings are now caught. You should never throw strings anyway, and thanks to my test suite I found one of the last places where I actually still threw strings...
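On the test-code side, the corresponding fix is simply to throw Error objects instead of strings, so any runner version reports them reliably; a minimal sketch against the example above:

var someCriticalFunction = function() {
  // Error instances carry a stack trace and are reported by Jest,
  // unlike bare strings, which older jest-cli versions swallowed silently
  throw new Error('up');
};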
