How to run test cases sequentially in Cypress?

Let's take one spec file in Cypress that contains 5 test cases which I have to run sequentially.
I tried the npx cypress run command and it runs the test cases one by one. Is that right or wrong? I am confused.

Yes, that is completely normal: by default, cypress run executes the test cases in a spec file one after another, in the order they are written. Here is a link to the Cypress documentation, which explains what parallelization is and how to set it up.
You need to activate parallelization explicitly in order to use it. However, it is not recommended on a single machine, as it needs a significant amount of resources.
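To illustrate, here is a minimal sketch (the spec name and test bodies are made up): with a plain npx cypress run, the it blocks inside one spec file execute top to bottom.

// my-feature.cy.js -- hypothetical spec file
// `npx cypress run` executes these tests sequentially, in written order
describe('my feature', () => {
  it('test 1: loads the page', () => {
    cy.visit('/')
  })
  it('test 2: shows a greeting', () => {
    cy.contains('Welcome')
  })
  // ...tests 3-5 follow here and run in the same order
})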
Cypress Parallelization
If your project has a large number of tests, it can take a long time for tests to complete running serially on one machine. Running tests in parallel across many virtual machines can save your team time and money when running tests in Continuous Integration (CI).
Cypress can run recorded tests in parallel across multiple machines since version 3.1.0. While parallel tests can also technically run on a single machine, we do not recommend it since this machine would require significant resources to run your tests efficiently.
This guide assumes you already have your project running and recording within Continuous Integration. If you have not set up your project yet, check out our Continuous Integration guide. If you are running or planning to run tests across multiple browsers (Firefox, Chrome, or Edge), we also recommend checking out our Cross Browser Testing guide for helpful CI strategies when using parallelization.
Turning on parallelization
Refer to your CI provider's documentation on how to set up multiple machines to run in your CI environment.
Once multiple machines are available within your CI environment, you can pass the --parallel flag to cypress run to have your recorded tests parallelized.
cypress run --record --key=abc123 --parallel
Running tests in parallel requires the --record flag be passed. This ensures Cypress can properly collect the data needed to parallelize future runs. This also gives you the full benefit of seeing the results of your parallelized tests in Cypress Cloud. If you have not set up your project to record, check out our setup guide.
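Concretely, each CI machine runs the same command. Passing the same --ci-build-id on every machine (the key and build-id values below are placeholders) lets Cypress group the machines into one logical run:

cypress run --record --key=abc123 --parallel --ci-build-id $BUILD_NUMBER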
Source: Cypress Documentation

Related

Mocha not running all tests in suite in parallel

I have a test suite consisting of around 100 UI scenario tests that I am running with Mocha. When I run the suite in parallel (using the --parallel flag), the number of tests that are run is inconsistent and fluctuates. For instance, it might be 87 tests in one run and 94 in the next. Limiting the number of parallel jobs to, for instance, 10 (using the --jobs option) seems to mitigate the problem. However, this feels like a workaround rather than a fix for the root cause. So I am wondering why Mocha runs a fluctuating number of tests in parallel mode (often not the entire suite), and how this issue can be addressed properly.
I am using version 8.1.1 of Mocha, and mocha-trx-reporter at 3.3.1 to report the test results. I do not use any options other than --recursive (to run the test suite) and --parallel.
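For reference, the setup described above can also be captured in a .mocharc.json; this is just a sketch using the question's own values (the cap of 10 is the example from above):

{
  "recursive": true,
  "parallel": true,
  "jobs": 10
}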

Do we need to run tests on the CI server if every developer runs the tests before pushing?

I am not sure what the best practice is for running unit tests.
I suppose every developer should pass the unit tests locally before pushing the code to the Git repo, and then the CI server (Jenkins) would pick up the new changes and run the tests again.
Why do we want to do it twice? Should we?
If the unit tests take a lot of time to run, do we expect the developer to pick only the tests related to the change, or to run all tests (even outside the scope of his project), assuming we have a big Maven multi-module POM?
Also consider that we usually have powerful hardware for the CI server and relatively less powerful workstations for the developers.
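For context, in a multi-module Maven build the two options in question look roughly like this (the module name is hypothetical):

# run every test in the whole multi-module build
mvn test

# run only one module's tests plus those of the modules it depends on
# (-pl selects the project, -am also builds its required dependencies)
mvn test -pl my-module -am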
If the unit tests take a lot of time to run, do we expect the developer to pick only the tests related to the change, or to run all tests (even outside the scope of his project), assuming we have a big Maven multi-module POM?
When a developer changes a class, modifies the database structure, or makes any change that could have side effects, he/she will not and cannot know all the potential side effects on the whole application.
And he/she should never try to be too clever by saying: "I know what may have broken with my changes, so I will run just this test."
Unit testing ensures some degree of code quality: don't make it less helpful
Unit tests are also non-regression tests. Not running all non-regression tests before committing and pushing means taking the risk of introducing flawed code into source control.
You never want that.
Unit tests have to execute fast
If unit tests take so long to execute that they hurt developer velocity, it generally means that they are badly designed, or perhaps that they are not real unit tests at all. A unit test is designed to run fast.
If you write tests that take a long time to run because they need to start a server, load/clean data in a database, load/unload containers, and so forth, it means you didn't write unit tests but integration tests. Such tests are not designed to be executed regularly and automatically on the local development machine, but on a CI tool.
The CI has to run all tests
Do we need to run tests on the CI server if every developer runs the tests before pushing?
As explained, integration tests have to be executed by the CI tool.
As for unit tests, skipping their execution on the CI side is not a good idea either.
Of course developers have to run the tests before pushing to the SCM, but in practice you have no guarantee that this will always be done.
Besides, even in a perfect world where developers execute all tests before pushing, you could still hit cases where the tests succeed on the developer machine but fail on the CI server or on other developers' machines.
For example, the developer could introduce into the code base some absolute paths specific to his/her machine, or forget to replicate a modification in the database used in the CI environment.
So running all tests (unit and integration) must not be optional for the CI.
Yes, they should be run twice, because if some developers don't run them, they never get run at all. Developers should run the tests locally to make sure that their code works correctly.
Your CI system, however, is your reference, so there is no chance of one person arguing that it "works on my machine" while it fails for others. Looking ahead to continuous delivery, knowing this state on the CI/CD system becomes even more important.
You might hope that always and forever, every commit has been tested successfully locally (and that all workstations are identical to each other and to the production systems...), but hope is a bad strategy.

NUnit 3.2: running test cases in parallel

Has anyone tried running test cases in parallel using NUnit 3.2? I have a few thousand test cases written using NUnit (2.6.3) and I want to run them in parallel. Since NUnit 2.6.3 doesn't have a feature to run test cases in parallel, I thought of switching to NUnit 3.2. When I read the documentation, it says it supports running test fixtures in parallel, not test cases. Some websites say NUnit 3.2 supports running test cases in parallel. I'm confused. Any help is much appreciated.
NUnit 3 intends to allow running all kinds of tests (suites, fixtures, simple test methods, test cases) in parallel.
As of NUnit 3.2, we only run tests in parallel down to the fixture level; the test methods/cases under a fixture run one at a time. As long as you have a relatively large number of fixtures, this gives a performance increase equivalent to running the methods in parallel. In the extreme case, however, for example a single fixture with 1000 methods, you will see no improvement.
No promises, but I imagine we'll be running test cases in parallel for 3.4.

Running automation tests on 2 TeamCity agents in parallel

We use TeamCity with MSTest to manage and run an automation test suite for front-end testing of a WPF application.
Currently the test suite runs on one dedicated test agent (where TeamCity is installed), and I'm now at a stage where I need to dramatically reduce the overall time it takes to run. I want to do this by adding another test agent and running the tests in parallel.
My test scenarios are large, so I have them separated into approximately 4 SpecFlow feature files that run in sequence. All these test scenarios are also categorised by their functional areas.
Firstly:
Is it possible to configure TeamCity so that one test agent manages the distribution of tests to each test agent, and then collates all the results at the end?
Secondly:
Can the categorised tests that need to run in sequence be kept together?
I decided to use 2 separate project configurations in my TeamCity setup. Each project points at a different test agent via an Agent Requirements step, and I simply divided the test categories (which I had already set up in my test scenarios) between the two projects, half and half; a concrete sketch follows the pro/con list below.
Pro:
Simple solution and easy to maintain
Con:
Results for each build are separated in TC
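For example, assuming the tests are run via vstest.console, the half-and-half split could be expressed as a category filter in each build configuration's test step (the category names here are hypothetical):

rem build configuration A (agent 1)
vstest.console.exe UITests.dll /TestCaseFilter:"TestCategory=Orders|TestCategory=Billing"

rem build configuration B (agent 2)
vstest.console.exe UITests.dll /TestCaseFilter:"TestCategory=Search|TestCategory=Reports"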

Continuous Integration testing with physical devices

I've been wondering how you go about doing CI-style testing when you're dealing with physical devices.
I imagine you have a suite of tests, and a pool of devices against which they can be run.
Additionally:
Some tests may require specific device models.
Some tests may require the use of more than one device.
What CI servers have support for this?
I'm still interested in those which have partial support, either natively or through plugins, as I'm interested in how it's done.
Continuous Integration enables a team to integrate and test their work frequently. Automated builds are meant to compile, link and run unit tests. You want your CI to run fast, especially if you run it at every check-in. That is why you would want to restrict CI activities to a simple confirmation of the build and the unit tests alone. What you're asking seems more along the lines of quality assurance (QA) testing, and having QA failures mixed into your CI efforts would distract development from making progress.
As such, I'm under the impression that the activities associated with CI should not depend on the final physical machine the work may eventually be migrated to.
Now, this doesn't mean you can't take the CI-compiled package and run it against some final target machine, but again, that is really considered a separate activity.
This seems to be reinforced in the following article by Martin Fowler.
Notice he doesn't talk about the final targeted devices, only the build machine.
I can suggest Test Manager, which is part of Microsoft's TFS suite. I have not tried it with many environments apart from Windows-based ones, though I know there are many connectors; for Windows-based environments I believe it will satisfy most needs.
I use it in nightly builds to perform smoke tests (turn it on, see if any smoke comes out), but you have to be careful to keep the tests small so that they finish in a matter of hours rather than days if you want them to be part of your CI.
Then, when quality is good enough, you can proceed to regression tests and integration tests if needed.
I wouldn't get too caught up in what a CI system is supposed to do or not do. Instead, I would focus on the problem you are trying to solve. It sounds like that problem is to facilitate development on multiple platforms. You can take the concept of Continuous Integration and extend it to successfully address the issue. I know, because I've done it in the past.
I implemented a build system for code that needed to compile and test successfully on 4 different platforms (NT, WinCE, linux-arm, linux-x86). The CI server would:
Use a Linux and a WinNT build server for compilation (and cross-compilation).
Copy the compiled tests and supporting libs to the appropriate devices and execute an automated test run (sketched below).
Copy the log back after the test suite completed (or have it written to a network-mounted filesystem).
Tag the source and package the libs and executables if the test suite was successful.
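In shell terms, the copy-run-collect stage for one device might have looked roughly like this (hostnames, paths, and script names are all hypothetical):

#!/bin/sh
# deploy the cross-compiled tests to the device, run them, collect the log
scp build/linux-arm/tests.tar.gz arm-device-01:/tmp/
ssh arm-device-01 'cd /tmp && tar xzf tests.tar.gz && ./run_tests.sh > /tmp/results.log 2>&1'
scp arm-device-01:/tmp/results.log logs/arm-device-01.log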
This same platform was reused for developer verification before commits: developers would run a partial build and test (only updated source was recompiled and only those tests rerun), while the CI would execute a full build from scratch.
Our builds were pretty fast because we had a proper DAG of build dependencies. This allowed concurrent compilation within a platform build, and each platform build also ran concurrently. As a result, partial builds took a few seconds and full builds took ~30 minutes. Our build servers were quite beefy (optimized for fast compiles) and the codebase was of moderate size (I don't remember the stats).
