We are running Cypress tests in TeamCity and the test output is extremely verbose. When I run the tests directly on the agent machine I don't see such verbose logging, so it might be caused by TeamCity.
Small sample (there are thousands of these):
cypress:https-proxy Making intercepted connection to 60378
cypress:server:server Got UPGRADE request from /__socket.io/?EIO=3&transport=websocket
cypress:server:timers queuing timer id 5 after 85000 ms
cypress:server:timers child received timer id 5
cypress:server:socket socket connected
cypress:server:timers clearing timer id 5 from queue { '3': { args: [], ms: 30000, cb: [Function: timeoutTimeout] }, '5': { args: [], ms: 85000, cb: [Function] } }
cypress:server:timers queuing timer id 6 after 85000 ms
cypress:server:timers child received timer id 6
cypress:server:timers clearing timer id 6 from queue { '3': { args: [], ms: 30000, cb: [Function: timeoutTimeout] }, '6': { args: [], ms: 85000, cb: [Function] } }
cypress:server:timers queuing timer id 7 after 85000 ms
cypress:server:timers queuing timer id 8 after 1000 ms
cypress:server:timers clearing timer id 8 from queue { '3': { args: [], ms: 30000, cb: [Function: timeoutTimeout] }, '7': { args: [], ms: 85000, cb: [Function] }, '8': { args: [], ms: 1000, cb: [Function: timeoutTimeout] } }
cypress:server:timers child received timer id 7
cypress:server:timers child received timer id 8
Any ideas?
Those are Cypress's own debug messages; they are only logged when the DEBUG environment variable is set to a matching pattern, e.g. * or cypress:*. They are not related to the TeamCity logger.
Either override the env.DEBUG build parameter in your build configuration and set it to an empty string, or (in case something else in your build needs it set to *) unset it just for the launch inside your build step, like
env -u DEBUG npm run cypress -- --reporter teamcity
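If the agent runs Windows builds, where env -u is not available, a cross-platform sketch that achieves the same thing (assuming the cross-env package is available as a devDependency) would be:
npx cross-env DEBUG= npm run cypress -- --reporter teamcity
Setting DEBUG to the empty string means no debug namespace matches, so only the reporter output ends up in the build log.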
There is a React application (Ant Design).
Cypress 10.3.1.
testList.cy.js:
import './1_test.cy';
...
import './30_test.cy';
test.cy.js:
describe('test', () => {
  before(() => {
    cy.intercept('/login').as('makeLogin')
    cy.makeLogin();
    cy.wait('@makeLogin') // aliases are referenced with the @ prefix
    cy.wait(0)
  });
  it('TEST', () => {
    cy.visit('https://localhost:3000/home')
    cy.wait(0)
    cy.get('.element')
      .should('be.visible')
      .click({ force: true }); // This line doesn't work as expected
  })
})
config.cy.js:
numTestsKeptInMemory: 1,
pageLoadTimeout: 60000,
defaultTimeout: 30000,
defaultCommandTimeout: 40000,
requestTimeout: 60000,
responseTimeout: 60000,
timeout: 60000
Command.js:
Cypress.Commands.add('makeLogin', () => {
  ... command description ...
});
I execute each test individually (from 1 to 30), and the tests work as expected.
But if I execute testList.cy.js, I have a problem:
I launch cypress open and execute testList.cy.js.
Actual result: the Cypress application crashes.
I use cypress run --spec 'cypress/e2e/testList.cy.js' --browser chrome.
Actual result: testList.cy.js fails. A few of the №_test.cy.js tests fail on the line I marked in test.cy.js.
I guess it may be a memory leak. Can I clear the Cypress cache before each №_test.cy.js in the testList.cy.js file? How can I resolve this problem?
I want to make a bot that plays a livestream from an online radio station. I use Discord.js v13.
On Heroku I have installed the following buildpacks:
heroku/nodejs
https://github.com/jonathanong/heroku-buildpack-ffmpeg-latest.git
https://github.com/xrisk/heroku-opus.git
https://github.com/OnlyNoob/heroku-buildpack-libsodium.git
My code is the following:
// Join the voice channel of the member who sent the message
let voiceChn = message.member.voice.channel;
const connection = joinVoiceChannel({
  channelId: voiceChn.id,
  guildId: voiceChn.guildId,
  adapterCreator: message.guild.voiceAdapterCreator,
  selfDeaf: true
});

// Create a player and an audio resource from the stream URL
const player = createAudioPlayer();
let resource = createAudioResource(STREAM_URL);
connection.subscribe(player);

// Start playback once the voice connection is ready
connection.on(VoiceConnectionStatus.Ready, () => {
  player.play(resource);
});
It works when running on my PC, but it does not run on Heroku.
These are the packages I have installed:
"#discordjs/opus": "^0.5.3"
"#discordjs/rest": "^0.5.0"
"#discordjs/voice": "^0.10.0"
"discord-api-types": "^0.36.0"
"discord.js": "^13.8.1"
"ffmpeg-static": "^4.4.1"
"libsodium-wrappers": "^0.7.10"
I get the following error: the player immediately emits the idle event, and this is what gets logged:
{
  status: 'playing',
  missedFrames: 0,
  playbackDuration: 120,
  resource: AudioResource {
    playStream: OggDemuxer {
      _readableState: [ReadableState],
      _events: [Object: null prototype],
      _eventsCount: 5,
      _maxListeners: undefined,
      _writableState: [WritableState],
      allowHalfOpen: true,
      _remainder: null,
      _head: null,
      _bitstream: null,
      [Symbol(kCapture)]: false,
      [Symbol(kCallback)]: null
    },
    edges: [ [Object], [Object] ],
    metadata: null,
    volume: undefined,
    encoder: undefined,
    audioPlayer: undefined,
    playbackDuration: 0,
    started: true,
    silencePaddingFrames: 5,
    silenceRemaining: 0
  },
  onStreamError: [Function: onStreamError]
}
Not sure about the error, but I believe I had the same problem and fixed it.
Didn't test if all the changes are needed, but here goes:
Instead of your heroku-buildpack-libsodium buildpack, I use:
https://github.com/Crazycatz00/heroku-buildpack-libopus
Changed 'https' to 'http' in every stream URL
Used a DNS lookup tool to replace the domain name with its IPv4 form, e.g.
'https://stream.skymedia.ee/live/NRJdnb' becomes 'http://185.31.240.229:8888/NRJdnb'
Deploy the slash commands again after you change their code (a sketch of the resulting playback code is below)
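Applied to the question's code, creating the resource would then look roughly like this (a sketch: connection is the voice connection from the question's joinVoiceChannel call, and the IPv4 URL is just the example resolved above):
const { createAudioPlayer, createAudioResource, VoiceConnectionStatus } = require('@discordjs/voice');

// Plain-HTTP, IPv4 form of the stream URL instead of the original HTTPS domain
const STREAM_URL = 'http://185.31.240.229:8888/NRJdnb';

const player = createAudioPlayer();
const resource = createAudioResource(STREAM_URL);
connection.subscribe(player);
connection.on(VoiceConnectionStatus.Ready, () => {
  player.play(resource);
});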
How to set Appium tests (WDIO-Mocha tests) pass % tolerance on BitRise Test Jobs
Hi,
We are running UI Test Suite on BitRise CI/CD.
The test suite itself is built on ReactNative/Jest codebase.
And WDIO-Mocha runner is used for running the tests.
Exports.conf:
exports.config = {
  specs: [
    './<my directory>/**/*.<testName>.js'
  ],
  exclude: [],
  maxInstances: 1,
  capabilities: [],
  sync: true,
  logLevel: 'verbose',
  coloredLogs: true,
  deprecationWarnings: true,
  bail: 0,
  screenshotPath: './errorShots/',
  waitforTimeout: 5000,
  connectionRetryTimeout: 90000,
  connectionRetryCount: 3,
  baseUrl: '',
  framework: 'mocha',
  mochaOpts: {
    ui: 'bdd',
    timeout: 90000,
    logLevel: 'info',
    logOutput: './wdio.log'
  },
  reporters: ['dot', 'allure'],
  before() {
    require('@babel/register');
    global.expect = jestMatchers;
  }
};
How do I set a pass-percentage tolerance, e.g. 98% or 90%?
[For example, on CI/CD systems like Jenkins we can achieve this using a hidden-parameter mechanism, while on TeamCity it comes as an out-of-the-box facility.]
Thanks
I'll sometimes have 1 or 2 tests that fail in CI, and rerunning the build causes them to pass.
How can I automatically re-run these flaky tests so my build will pass the first time? Is there something similar to mocha's this.retries?
For example, I have a test that fails with "The element has an effective height of 0x0" about 10% of the time:
cy.visit('/')
cy.get('.my-element').click() // sometimes fails with not visible error
Update (v5.0.0)
Cypress now has built-in retry support.
You can set test retries in Cypress 5.0 via configuration in cypress.json
{
  "retries": 1
}
or specify different options for runMode and openMode:
{
  "retries": {
    "runMode": 1,
    "openMode": 3
  }
}
runMode allows you to define the number of test retries when running cypress run
openMode allows you to define the number of test retries when running cypress open
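The value can also be overridden for a single run without touching cypress.json, via the CLI's --config flag (a sketch; --config accepts any configuration key, used here with the simple numeric form of retries):
cypress run --config retries=2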
You can turn on test retries for just a single test or suite via test options:
it('my test', {
  retries: 2
}, () => {
  // ...
})

// or

describe('my suite', {
  retries: 2
}, () => {
  // ...
})
If a test fails in a beforeEach, afterEach, or in the test body, it will be retried. Failures in beforeAll and afterAll hooks will not retry.
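For example (a sketch combining the suite-level option above with hooks), every retry attempt re-runs the beforeEach hook along with the test body:
describe('retry behavior', { retries: 2 }, () => {
  beforeEach(() => {
    // Runs again before every retry attempt of each test in this suite
    cy.visit('/')
  })

  it('flaky click', () => {
    // If this fails, the test (including the hook above) is retried up to 2 times
    cy.get('.my-element').click()
  })
})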
Old answer:
Official support for test retries is on the way, but there's a plugin for that: cypress-plugin-retries.
Disclosure: I'm the creator of the plugin.
Installation
Add the plugin to devDependencies
npm install -D cypress-plugin-retries
At the top of cypress/support/index.js:
require('cypress-plugin-retries')
Usage
Use the environment variable CYPRESS_RETRIES to set the retry number:
CYPRESS_RETRIES=2 npm run cypress
or use Cypress.env('RETRIES') in your spec file:
Cypress.env('RETRIES', 2)
or on a per-test or per-hook basis, set the retry number:
Note: this plugin adds Cypress.currentTest and you should only access it in the context of this plugin.
it('test', () => {
  Cypress.currentTest.retries(2)
})
Note: Please refer to this issue for updates about official cypress retry support
Cypress 5 now has native support for retries. Check: https://cypress.io/blog/2020/08/19/introducing-test-retries-in-cypress-5-0
To enable retries, you can add the following to your config:
{
  "retries": {
    // Configure retry attempts for `cypress run`
    // Default is 0
    "runMode": 2,
    // Configure retry attempts for `cypress open`
    // Default is 0
    "openMode": 0
  }
}
but instead of the above I prefer to add the following, which works for both run and open mode:
"retries": 1
Points to consider:
You need Cypress version 5 or later.
As of now, Cypress can retry an individual test but not an entire spec, which would be a great ability for making tests more robust. I have already voted for this as a critical feature (it is also one of the most-voted requests right now).
Ref:
https://portal.productboard.com/cypress-io/1-cypress-dashboard/tabs/1-under-consideration
Cypress supports test retries as of version 5.0.0, released on 8/19/2020. These are present to reduce test flakiness and continuous integration build failures. This feature is documented in the online Cypress documentation under the Test Retries guide.
By default, tests will not retry when they fail. To retry failing tests, test retries need to be enabled in the configuration.
Retries can be configured separately for run mode (cypress run) vs. open mode (cypress open), as these will typically be different.
There are two ways to configure retries: globally, and per test or test suite.
Global Configuration
The Cypress configuration file (cypress.json by default) allows configuring the retry attempt either per mode or for all modes.
To use a different value per mode:
{
  "retries": {
    "runMode": 2,
    "openMode": 0
  }
}
To use the same value for both modes:
{
  "retries": 1
}
Single test configuration
A test's configuration can specify the number of retry attempts specific to that test:
// Customize retry attempts for an individual test
describe('User sign-up and login', () => {
  // `it` test block with no custom configuration
  it('should redirect unauthenticated user to sign-in page', () => {
    // ...
  })

  // `it` test block with custom configuration
  it(
    'allows user to login',
    {
      retries: {
        runMode: 2,
        openMode: 1,
      },
    },
    () => {
      // ...
    }
  )
})
Single test suite configuration
A test suite's configuration can specify the number of retry attempts for each test within that suite:
// Customizing retry attempts for a suite of tests
describe('User bank accounts', {
  retries: {
    runMode: 2,
    openMode: 1,
  }
}, () => {
  // The per-suite configuration is applied to each test
  // If a test fails, it will be retried
  it('allows a user to view their transactions', () => {
    // ...
  })

  it('allows a user to edit their transactions', () => {
    // ...
  })
})
Running multiple test specs using Protractor results in some of them timing out, giving the following error:
Jasmine spec timed out. Resetting the WebDriver Control Flow.
The failure is not consistent: it is not the same specs that fail on each run, but some percentage of them always does, and they vary from time to time. Below is the Protractor config file:
'use strict';
exports.config = {
  baseUrl: 'http://www.example.com/',
  specs: 'specs/**/*Spec.js',
  capabilities: {
    browserName: 'chrome',
    chromeOptions: {
      args: ['--headless', '--disable-gpu']
    },
    shardTestFiles: true,
    maxInstances: 4
  },
  useAllAngular2AppRoots: true,
  allScriptsTimeout: 30000,
  getPageTimeout: 30000,
  restartBrowserBetweenTests: true,
  jasmineNodeOpts: {
    defaultTimeoutInterval: 30000
  },
  onPrepare: function() {
    browser.ignoreSynchronization = false;
  }
};
The npm scripts used to launch Protractor (from package.json):
"scripts": {
  "test": "protractor conf.js",
  "test-in-parallel": "node -r parallel-protractor node_modules/.bin/protractor conf.js"
}
Note: I have tried increasing the timeout interval for Jasmine, but a percentage of the specs fail anyway (they just take longer to do so).