How can I debug a test that fails when run headlessly but passes when run with an open browser?
Below is the error I'm getting when running the test headlessly (without an open browser):
AssertionError: Timed out retrying: Expected to find content: 'Logout' within the element: [ <a.navbar-item>, 2 more... ] but never did.
In other words: the test passes locally but fails in the CI tool.
Please ask if you need to know any details.
I would suggest recording a video and using console.log statements in your test. https://docs.cypress.io/guides/guides/screenshots-and-videos.html#Screenshots
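For reference, here's a minimal sketch of what enabling that could look like (assuming Cypress 10+ and a config file named cypress.config.js; older versions take the same top-level keys in cypress.json):

// cypress.config.js
const { defineConfig } = require('cypress');

module.exports = defineConfig({
  video: true,                  // record a video of every headless run
  screenshotOnRunFailure: true, // capture a screenshot whenever a test fails
  e2e: {
    setupNodeEvents(on, config) {
      // optional: a task the spec can call to print to the CI terminal,
      // since console.log inside the spec only reaches the browser console
      on('task', {
        log(message) {
          console.log(message);
          return null;
        },
      });
    },
  },
});

In the spec you would then call cy.task('log', ...) at the points of interest and compare the recorded video of the headless run against what you see locally.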
When I run all my Cypress component tests locally on a MacBook Pro in a React + Vite project with around 10 tests, I get the following error:
An uncaught error was detected outside of a test:
TypeError: The following error originated from your test code, not from Cypress.
> Failed to fetch dynamically imported module: http://localhost:5173/__cypress/src/cypress/support/component.ts
When Cypress detects uncaught errors originating from your test code it will automatically fail the current test.
Cypress could not associate this error to any specific test.
We dynamically generated a new test to display this failure.
The error is not consistent and doesn't show up on every run; when it does, it is thrown on a random test. How can I solve this?
Update: I think a possible lead could be that I import files in my project using the absolute-path (alias) pattern.
For example:
import {comp1, comp2} from 'components'
where components is an alias configured in my tsconfig file.
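If the alias does turn out to be the culprit, one thing worth checking is whether the Vite dev server that Cypress starts for component tests resolves the alias the same way tsc does. A sketch of that idea, assuming the vite-tsconfig-paths plugin is installed (the React plugin is shown only as a stand-in for whatever plugins the project already uses):

// vite.config.js
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import tsconfigPaths from 'vite-tsconfig-paths';

export default defineConfig({
  // mirror the path aliases from tsconfig so an import like 'components'
  // resolves the same way in the browser as it does for the TypeScript compiler
  plugins: [react(), tsconfigPaths()],
});

This is only a lead, not a confirmed fix for the flaky dynamic-import error.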
OK, so after countless attempts to fix this, and after also running into terminal freezes when executing cypress run, I gave up and created a bash script that runs each test in the code base separately:
#!/bin/bash
set -x
# run every component spec in its own Cypress process; stop at the first failure
for file in $(find . -type f -name '*.spec.cy.tsx'); do
  yarn cypress run --component --browser chrome --spec "$file" || exit 1
done
For now it seems to get the job done. Hope this helps anyone else who encounters this.
I'm using WebdriverIO for some automation testing and have recently migrated from the 'selenium-standalone' service to the default wdio DevTools protocol.
wdio v.7.16.12
firefox v.95.0.2
Since then I can't start a test run with the Firefox browser:
INFO @wdio/cli:launcher: Run onPrepare hook
INFO @wdio/cli:launcher: Run onWorkerStart hook
INFO @wdio/local-runner: Start worker 0-0 with arg: run,wdio.conf.js
INFO @wdio/local-runner: Run worker command: run
...
INFO devtools:puppeteer: Initiate new session using the DevTools protocol
ERROR @wdio/runner: Error: Couldn't find executable for browser
...
INFO @wdio/cli:launcher: Run onComplete hook
I've tried different combinations of options with 'wdio:devtoolsOptions' and 'moz:firefoxOptions'.
I also checked whether the dumpio: true and 'moz:debuggerAddress': true options could help.
I've also tried substituting browserName with product and adding binary and executablePath to the capabilities.
When passing binary: 'path.to.firefox' in the 'moz:firefoxOptions', the error message changes to:
ERROR @wdio/runner: Error: Only Nightly release channel is supported in Devtools/Puppeteer for Firefox. Refer to the following issue:
...
Any ideas how this could be fixed in WebdriverIO (without separately installing puppeteer or puppeteer-firefox)?
Thanks!
Seems that I mistook wishful thinking for reality.
wdio + devtools:puppeteer still works with Firefox Nightly only – https://github.com/webdriverio/webdriverio/discussions/7916
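So if you want to stay on the DevTools protocol, the only route appears to be pointing the binary at a Nightly install. A rough sketch (the Nightly path is just an example and the option placement follows what I tried above, so treat it as an assumption):

// wdio.conf.js
exports.config = {
  automationProtocol: 'devtools', // use DevTools/Puppeteer instead of a WebDriver session
  capabilities: [{
    browserName: 'firefox',
    'moz:firefoxOptions': {
      // must point at a Firefox Nightly binary; stable releases are rejected
      // with the "Only Nightly release channel is supported" error
      binary: '/Applications/Firefox Nightly.app/Contents/MacOS/firefox',
    },
  }],
  // ...rest of the existing config
};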
I have test cases written in Cucumber and Cypress. The test cases run successfully through the Cypress Test Runner, but fail when running in headless mode using the command:
node_modules\.bin\cypress run --spec **/*.features
CypressError: Timed out retrying: `cy.click()` failed because this element is not visible:
Questions:
What is the possible reason for this "element is not visible" error?
How can I handle the wait in headless mode?
The fix is: resolve all the issues displayed on the console with respect to XPaths, synchronization, and the like.
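For the visibility and wait questions specifically, a sketch of the usual approach (the selector here is hypothetical): assert visibility before clicking and give the command a longer timeout, since headless CI machines are often slower than a local run.

// inside the step definition / spec
cy.get('[data-cy="submit"]', { timeout: 10000 }) // retry for up to 10s
  .scrollIntoView()                              // bring the element into the viewport
  .should('be.visible')                          // keep retrying until it is actually visible
  .click();

It can also help to set an explicit viewportWidth and viewportHeight in the Cypress configuration, because the default headless viewport may differ from your local browser window and leave the element hidden behind a collapsed menu.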
I am running a Protractor suite (a spec file with multiple test cases). If any test case fails, Protractor does not continue with the next test case, and all the remaining test cases also fail.
EXPECTED BEHAVIOR:
Upon failure of any test case, Protractor should continue with the next test case.
I used "Protractor-Fail-Fast" Npm package to stop the rest test case execution if any test case fail. But ideally I am not looking for the same.
But this will not help me!
Just for reference: in Visual Studio MSTest, if I create an ordered test (the equivalent of a Protractor spec file with multiple test cases) and set a test setting like "continue on failure", the ordered test execution will continue even if some test cases fail.
I am looking for a similar test setting or any other solution for Protractor.
If you don't want to stop the whole test run, just stop using the Protractor-Fail-Fast library. Protractor tests run to the end by default, even if some of them fail.
Set ignoreUncaughtExceptions: true in the config file as follows:
/**
* If set, Protractor will ignore uncaught exceptions instead of exiting
* without an error code. The exceptions will still be logged as warnings.
*/
ignoreUncaughtExceptions?: boolean;
You can find the above description here.
exports.config = {
  // ...
  ignoreUncaughtExceptions: true
};
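For context, a slightly fuller sketch of where that flag sits in a typical config (the framework, specs and capabilities are placeholders, not taken from the question):

// protractor.conf.js
exports.config = {
  framework: 'jasmine',
  specs: ['./specs/*.spec.js'],
  capabilities: { browserName: 'chrome' },
  // keep running the remaining specs even if an uncaught exception is thrown;
  // ordinary failed expectations never abort the run on their own
  ignoreUncaughtExceptions: true
};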
I am running a TFS nightly build that for the last few days has not been able to complete all its tests. It fails after several hours with a "Test run is aborted" message. Prior to this the tests always ran successfully, and no major (or even minor) changes have been made to the system that runs these tests.
Information:
Two MSTest runs in the build (unit tests)
Timeout is set to 20 hours
Runs for approx. 15 hours before failure
Tests are set to continue on failure
When I look at the TFS log for the latest run, it lists the following (2017-04-11T06:42:47.5500707Z):
[warning]DistributedTests: Test run is aborted. Logging details of the run logs.
[warning]DistributedTests: New test run created.
[warning]Test Run queued for Project Collection Build Service
[warning]DistributedTests: Test discovery started.
[warning]DistributedTests: Test Run Discovery Completed . Test run id: 533
[warning]DistributedTests: 290 test cases discovered.
[warning]DistributedTests: Test execution started. Test run id : 533
[warning]DistributedTests: Test run timed out. Test run id : 533
[warning]DistributedTests: Test run aborted. Test run id: 533
[error]The test run was aborted, failing the task.
When I look at the run log (worker_20170410-234426-utc_864.log) I see:
06:42:47.659516 BaseLogger.LogConsoleMessage(scope.JobId =
7ced7f31-e360-47f3-b334-ef20faeaf000, message = ##[error]The test run
was aborted, failing the task.) 06:42:47.659516
Microsoft.TeamFoundation.DistributedTask.Agent.Common.AgentExecutionTerminationException:
PowerShell script completed with errors. at
Microsoft.TeamFoundation.DistributedTask.Handlers.PowerShellHandler.Execute(ITaskContext
context, CancellationToken cancellationToken) at
Microsoft.TeamFoundation.DistributedTask.Worker.JobRunner.RunTask(ITaskContext
context, TaskWrapper task, CancellationTokenSource tokenSource)
In the test log I don't see any errors from VS, just a warning about not being able to connect (I see these often):
W, 2060, 5, 2017/04/10, 16:26:03.595, XXXTESTING\QTController.exe,
Test of LoadTestResultConnectString failed: A network-related or
instance-specific error occurred while establishing a connection to
SQL Server. The server was not found or was not accessible. Verify
that the instance name is correct and that SQL Server is configured to
allow remote connections. (provider: SQL Network Interfaces, error: 26
- Error Locating Server/Instance Specified)
I also see an error thrown in the Application Event log at the same time:
The description for Event ID 0 from source Application cannot be
found. Either the component that raises this event is not installed on
your local computer or the installation is corrupted. You can install
or repair the component on the local computer.
If the event originated on another computer, the display information
had to be saved with the event.
The following information was included with the event:
Error Handler Exception: System.ServiceModel.CommunicationException:
There was an error reading from the pipe: The pipe has been ended.
(109, 0x6d). ---> System.IO.IOException: The read operation failed,
see inner exception. ---> System.ServiceModel.CommunicationException:
There was an error reading from the pipe: The pipe has been ended.
(109, 0x6d). ---> System.IO.PipeException: There was an error reading
from the pipe: The pipe has been ended. (109, 0x6d).....
the message resource is present but the message is not found in the
string/message table
The issue is that I really don't know how to interpret these messages; each log just says "test run was aborted, failing the task", and I'm not even certain the PowerShell issue is what caused it. I'm also not sure that the error thrown in the Application event log is related, though it was thrown at exactly the same time that the run failed.
It's also difficult to research this issue when you don't really know what is causing the test agent to fail. There are posts related to VS and to the TFS Test Agent, but these don't strike me as related issues, and of course there is this somewhat unhelpful post about the PowerShell message.
Has anyone seen this sort of issue before? I don't think anything on my build server has changed over the last few days (maybe updates...). What do you think would cause an issue like this to occur?
If you look at the failed build (containing tests) after it is aborted in the "Build" section of TFS, it just says "Aborted", and that's it. If you look at the results of the build in the "Test" section of TFS, it specifies that the test run "Exceeded Timeout".
Apparently MSTest was running up against the default value of this little gem, the runTimeout test setting. I think it defaults to 8 hours when not specified, but I'm not too sure about this. Anyway, I set the following in my "Default.testsettings" file:
<?xml version="1.0" encoding="utf-8"?>
<TestSettings name="TestSettings1">
  <Execution>
    <Timeouts runTimeout="200000000" />
  </Execution>
</TestSettings>
This seems to resolve the issue (runTimeout is given in milliseconds, so 200000000 works out to roughly 55 hours, well above the ~15-hour run). The tests run successfully and no longer time out.