Are there inconsistencies between the Cypress tool and running tests from the terminal?

I'm evaluating Cypress (version 3.4.1) and running into inconsistencies between running the same tests from the Cypress tool and running them from the terminal. I'm using the same browser in both cases (Electron 61). Has anyone experienced this? (The test fails from the terminal, but the same test runs smoothly from the Cypress tool.)

The interactive test runner can be flaky, but I don't see the same issues when I run without it. I wouldn't worry about a test failing in the interactive test runner if it passes when you refresh the page, or passes when run via the command line. Use the browser's refresh button, not the interactive test runner's refresh button.
Note: Cypress is pretty heavy on your resources. I've noticed intermittent failures when doing a screen share.
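For reference, the two modes are invoked like this (standard Cypress CLI commands, assuming Cypress is installed as a project dependency):

    # interactive runner, for development
    npx cypress open

    # headless run from the terminal, as on CI
    npx cypress run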

While running a Cypress test from the command line, I found that the run finishes with "All specs passed", yet it doesn't complete all the steps (the last one, saving a form to the database, never happens).
I couldn't work out why, and changing the Cypress code made no difference.
I decided to check the recorded video after the terminal run, so I enabled video recording, and ta-da! The test finished correctly. Once I disable video recording, it fails again.
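One plausible explanation (an assumption, not confirmed by Cypress) is that the final save fires asynchronously and the run exits before it completes; recording video slows the run just enough for the request to finish. A more robust fix is to make the test wait for the save explicitly. A minimal sketch using the Cypress 3.x network-stubbing API, where the selector and route are placeholders:

    // alias the save request so the test can wait for it (Cypress 3.x API)
    cy.server();
    cy.route('POST', '/api/forms').as('saveForm');

    // trigger the save, then block until the request really completes
    cy.get('[data-test=save]').click();
    cy.wait('@saveForm').its('status').should('eq', 200);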

Related

"We detected that the Chromium Renderer process just crashed" Cypress CI error

I get this error at random times when I run my Cypress test scripts on an Azure CI server.
We detected that the Chromium Renderer process just crashed.
This is the equivalent to seeing the 'sad face' when Chrome dies.
This can happen for a number of different reasons:
- You wrote an endless loop and you must fix your own code
- You are running Docker (there is an easy fix for this: see link below)
- You are running lots of tests on a memory intense application
- You are running in a memory starved VM environment
- There are problems with your GPU / GPU drivers
- There are browser bugs in Chromium
I am using Cypress 11.2.0. I tried adding the --browser chrome option to the cypress run command, but it unexpectedly makes many assertions fail. Any ideas on how to solve this?
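If the CI job runs inside Docker, the "easy fix" the message alludes to is usually shared memory: a container's default /dev/shm is only 64 MB, which is too small for Chromium. A sketch of the common mitigations (adjust to however your Azure pipeline starts the container):

    # enlarge the shared-memory segment available to the browser
    docker run --shm-size=2g ...

    # or share the host's IPC namespace instead
    docker run --ipc=host ...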

Cypress soaking up all available memory

I'm having serious issues with Cypress soaking up all my available memory (16 GB). I have 30+ tests, and if I attempt to run them all at once through the UI, Cypress gradually uses all my available memory and then typically fails with a test timeout error. Closing the Cypress UI always recovers the memory. I've seen https://github.com/cypress-io/cypress/issues/431, which suggests setting numTestsKeptInMemory to 0, but this makes no difference; running in headless mode with cypress run makes no difference either: ultimately all my memory gets soaked up.
Also, during development of the tests I've been using it.only, but even when running only one test at a time the memory gradually gets soaked up until restarting Cypress is needed.
I'm using Cypress 1.4.1 on Ubuntu 16.04 (elementaryOS Loki).
Does anyone else have the same trouble?
I'm assuming this is happening while using cypress open?
cypress open is meant for TDD, so you get immediate feedback while you're developing. The docs recommend NOT running all your tests in the test runner, but using cypress run instead.
You won't get snapshot history navigation, only screenshots and a video recording, but your tests will run in a headless browser and won't soak up your memory.
PS: If you need to fix a broken test and want to use the test runner, you can isolate it using it.only('test case', () => { ... }).
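A hedged sketch of both suggestions together (the test name and body are placeholders; in Cypress 1.x, numTestsKeptInMemory is set in cypress.json):

    // cypress.json: { "numTestsKeptInMemory": 0 }

    // run a single test in isolation while fixing it
    it.only('saves the form', () => {
      cy.visit('/form');
      cy.get('[data-test=save]').click();
    });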

Selenium - Internet Explorer disable script debugging

For the last couple of days I have been trying to figure out how to handle a few issues with Selenium and IE, such as this one.
Today I wanted to run all the Selenium tests against IE9, but in the middle of the run the debugger popped up due to a JavaScript error and all the tests started failing. As an extra bit of information, I tried using
options.UnexpectedAlertBehavior = InternetExplorerUnexpectedAlertBehavior.Dismiss;
which was not able to dismiss the debugger window.
I know I can do it manually by disabling script debugging in IE's settings, but this is not a feasible solution since these tests can be executed on a lot of different machines.
Is there a way to do this programmatically with DesiredCapabilities or Options? Or at least some command for the command line?
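As far as I know there is no DesiredCapabilities entry for this; the setting lives in the Windows registry, so one option is to script it on each machine before the tests run. These two values are the documented IE script-debugging switches:

    reg add "HKCU\Software\Microsoft\Internet Explorer\Main" /v "Disable Script Debugger" /t REG_SZ /d "yes" /f
    reg add "HKCU\Software\Microsoft\Internet Explorer\Main" /v "DisableScriptDebuggerIE" /t REG_SZ /d "yes" /f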

Running tests without an active browser window

Is there any way to run Coded UI (CUIT) tests without needing an active browser window?
Basically, I would like to run a bunch of tests on my local machine and wait until they have all executed (each test requires the browser window to be active during the run). But during this execution I can't use my machine for any other work; otherwise the tests fail due to losing control of the window. So is there any way to run the tests locally and still work on the machine without any limitations at the same time?
A Coded UI test needs an active browser window during execution.
To meet your requirement, you could try running the test cases in a virtual machine; that will not affect your work on the local machine.
You might also consider using WatiN and running the tests through "normal" MSTest/NUnit/whatever instead of Coded UI tests. It opens a browser window too, but it still lets you interact with your desktop.
That of course means rewriting your tests, but since WatiN tests are much more readable IMHO, this might be worth it.

Selenium Webdriver test works locally but fails on the build server

My Selenium WebDriver (Ruby) test builds locally and identifies all the elements in Firefox. However, it fails on the server. The strange thing is that the step it fails on comes up four times in the test: it passes the first three times and fails on the fourth. How can I troubleshoot this issue? What could be the possible cause of the failure?
It was not as simple as it sounds, but I was able to resolve this issue by adding some waits to the test script, especially around the steps that required actions on a child window. Since my test involved lots of child windows, modal windows, flash messages, etc., it took me a long time to identify the exact step where the script failed.
The server I was running the test on has a headless browser, and therefore a few actions (pop-up window actions) take longer than usual.
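The question is Ruby-specific, but the pattern is the same in any binding: prefer explicit waits over fixed sleeps, especially around child windows on a slow headless server. A sketch of the idea using the JavaScript selenium-webdriver bindings (selectors and timeouts are illustrative):

    const { Builder, By, until } = require('selenium-webdriver');

    (async () => {
      const driver = await new Builder().forBrowser('firefox').build();
      await driver.get('https://example.com');

      // wait for the child window to open instead of sleeping
      await driver.wait(async () => (await driver.getAllWindowHandles()).length > 1, 10000);
      const handles = await driver.getAllWindowHandles();
      await driver.switchTo().window(handles[1]);

      // wait for an element in the child window before acting on it
      await driver.wait(until.elementLocated(By.css('#confirm')), 10000);

      await driver.quit();
    })();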
