Cypress-cucumber-preprocessor missing steps in execution

When I execute the tests, many steps of the scenario are missing and the tests "pass" even though those steps were never executed; they are not displayed in the test runner.
It is a sporadic issue, but it decreases the reliability of the tests: some steps were never executed, yet at a glance everything looks like passing tests.
Has anyone run into the same problem?

Related

How to run a single test with Quarkus live reload

Our test suite takes 5 minutes to run (mostly due to Kafka containers being set up before each test, I presume).
When running mvn quarkus:dev and working on a test, I don't know how to re-run only a single test, the one I'm working on.
If my test is broken, and is the only one broken, then it is fine. But as soon as it turns green, Quarkus will not run it again if I change the test code.
If I am making big changes, Quarkus will run all broken tests and I cannot clearly follow the result of the test I am working on.
I can use mvn verify to run a single test, but the compilation and application startup times make it too tedious and break the mental flow.
How can I tell quarkus to run only some specific test while running?

Per-test coverage in Go

I am building a Go provider for Pruner (a CLI that runs only the tests that ran through the lines you changed, saving you time).
For that, I need to be able to see per-test coverage. Not just a full coverage report after all tests have run, but I need a way to know which tests ran through what line.
Is that possible in Go?
I tried using -func, but it just gives me the method names of the original code, not the test code. In other words, I can't know what code each individual test runs through.
I need a way to know which tests ran through what line.
Is that possible in Go?
It's not supported by the tools. But you can do it. It's just very inefficient.
The way to do this is to run:
go test -cover -run=TheName/OfSome/SpecificTest
Then run this for each test in your suite.
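To make that concrete, here is a rough sketch (not official tooling) of automating that loop for a single package; the package path "." and the cover_<TestName>.out profile names are assumptions for illustration, and each resulting profile can afterwards be inspected with go tool cover -func=<profile>.

// Sketch: list the top-level tests in one package, then invoke `go test`
// once per test with its own coverage profile (slow, but gives per-test data).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	pkg := "." // package to analyze; an assumption for this example

	// `go test -list <regexp>` prints matching top-level test names without running them.
	out, err := exec.Command("go", "test", pkg, "-list", ".*").Output()
	if err != nil {
		panic(err)
	}

	for _, line := range strings.Split(string(out), "\n") {
		name := strings.TrimSpace(line)
		if !strings.HasPrefix(name, "Test") {
			continue // skip benchmarks, examples and the trailing "ok <pkg>" line
		}
		profile := fmt.Sprintf("cover_%s.out", name)

		// One full `go test` run per test: correct, but very slow on big suites.
		if err := exec.Command("go", "test", pkg, "-run", "^"+name+"$", "-coverprofile", profile).Run(); err != nil {
			fmt.Printf("%s: test failed: %v\n", name, err)
			continue
		}
		fmt.Printf("%s -> %s\n", name, profile)
	}
}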
Naturally, this will make your tests much more cumbersome to manage, and incredibly slow.
So I would consider whether this is truly a requirement for your use case.
Go is optimized, from the ground up, to compile quickly. If your Go project is so large that running all the tests is too slow, you may want to consider other alternatives. Some suggestions:
Run more tests in parallel, so the total runtime is reduced.
Take advantage of Short mode, and only run short tests by default, saving long-running tests for special cases (see the sketch after this list).
If you really need to run only a subset of tests, do it on a per-package basis, not on a per-test basis.
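As a hedged illustration of the parallel and Short-mode suggestions, here is a minimal sketch; the package and test names are made up and nothing here is specific to the asker's project. Running go test -short ./... locally skips the slow test, while a full go test ./... (for example in CI) runs everything.

// Sketch: a fast test that runs in parallel with its siblings, and a slow
// test that skips itself when `go test -short` is used.
package mypkg // hypothetical package name

import (
	"testing"
	"time"
)

func TestQuickThing(t *testing.T) {
	t.Parallel() // may run alongside other parallel tests in this package
	if got := 2 + 2; got != 4 {
		t.Fatalf("expected 4, got %d", got)
	}
}

func TestSlowIntegration(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping slow test in -short mode")
	}
	t.Parallel()
	time.Sleep(2 * time.Second) // stand-in for a slow external dependency
}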

Do we need to run tests on the CI server if every developer runs the tests before pushing?

I am not sure what the best practice is for running unit tests.
I suppose every developer should pass the unit tests locally before pushing the code to the Git repo. And then the CI server (Jenkins) would pick up the new changes and run the tests again.
Why do we want to do it twice? Should we?
If the unit tests take a lot of time to run, do we expect the developer to only pick the tests related to the change, or to run every test (even outside the scope of his project), assuming we have a big Maven multi-module POM?
Also consider that we usually have powerful hardware for the CI server and relatively less powerful workstations for developers.
If the unit tests take a lot of time to run, do we expect the developer to only pick the tests related to the change, or to run every test (even outside the scope of his project), assuming we have a big Maven multi-module POM?
When a developer changes a class, modifies the database structure, or makes any change that could have side effects, he/she cannot know all the potential side effects on the whole application.
And he/she should never try to be too clever by saying: "I know what may have been broken by my changes, so I will run just this test."
Unit testing ensures some degree of code quality: don't make it less helpful
Unit tests are also non-regression tests. Not running all non-regression tests before committing and pushing means taking the risk of introducing flawed code into source control.
You never want to do that.
Unit tests have to execute fast
If unit tests take so long to execute that it hurts developer velocity, it generally means that they are badly designed, or perhaps that they are not real unit tests. A unit test is designed to run fast.
If you write tests that take long to run because they require starting a server, loading/cleaning data in a database, loading/unloading some containers, and so forth, it means you didn't write unit tests but integration tests. Those tests are not designed to be executed regularly and automatically on the local development machine, but rather by a CI tool.
The CI has to run all tests
Do we need to run tests on the CI server if every developer runs the tests before pushing?
As explained, integration tests have to be executed by the CI tool.
As for unit tests, skipping their execution on the CI side is not a good idea either.
Of course developers have to run the tests before pushing to the SCM, but you don't actually have a guarantee that this will always be done.
Besides, even in a perfect world where developers execute all tests before pushing, you could still fall into cases where the tests are successful on the developer machine but fail on the CI server or on other developers' machines.
For example, a developer could introduce into the code base some absolute paths specific to his/her machine, or could forget to replicate a modification of the database used in the CI environment.
So running all tests (unit and integration) must not be optional for the CI.
Yes, they should be run twice, because if some developers don't run them, they never get run. Developers should run the tests locally to make sure that their code works correctly.
Your CI system, however, is your reference, so there is no room for one person to argue that it "works on my machine" while it fails for others. Looking ahead to continuous delivery, knowing this state on the CI/CD system becomes even more important.
You might hope that always and forever, every commit has been tested successfully locally (and that all workstations are identical to the production systems...), but hope is a bad strategy.

Visual Studio unit tests passing locally, "Not Executed" on build server

I am currently running into an issue with unit tests passing locally but being reported as "failed" on our build servers. Locally, I am running the Win7 VS2013 test agent, and the build servers are running the Win8 VS2013 test agent.
Firstly, all our tests are passing on our local machines, so we are having trouble reproducing the problems on the build server. Secondly, on the build servers, the tests are reporting that they aren't being executed, but based on the console output as well as some coverage information reported, we can see that the tests are actually running (which is a mystery to us).
We have run the tests locally and all our local machines pass, but all our server machines fail certain tests. We have also attempted to roll back to a previous state on the build server that was passing, but even that has not worked. We have already attempted a clean build and rebooting of the build machines, but nothing has helped so far; the tests are still reported as "not executed" (console output suggests that they are running, since our debugging information is being printed). The tests are running in parallel on the build server, but that shouldn't be an issue as it was running fine before.
UPDATE: After doing some debugging, we are having trouble with singleton classes. The odd thing is that some of our other test suites use the same singleton class and don't have any problems on the build server, while other suites do.
We have removed all tests but one using this singleton class, but it still reports as "Not executed".
Has anyone run into this issue, or does anyone have a solution or a suggestion on where to look?
-L

Gui Tests take too long - what's your approach?

we have a typical web application stack. there are 120 selenium (webdriver) tests that are executed against the application. this takes around 1 hour. we execute them as part of our build chain "compile > unit test > integration test > gui tests". the gui tests take up a lot of time and we are wondering how to better structure them. currently they are "happy case" and "unhappy case" tests. they are quite stable, i.e. they won't fail because of programmer errors.
we want to get the build times down and the biggest part is the gui tests. we want to do this based on "customer journeys", i.e. specify (together with the business people) some typical use cases and test those (happy path) instead of testing too much.
how do you guys structure your gui tests? here are some ideas that came to my mind
only execute happy path tests
do a "customer journey test", i.e. do several happy path tests in one ("clicking through the pages")
only take the "top 10" specified by the business (mission critical)
top 10 + "all the rest" as nightly build (one time)
i would appreciate your ideas
thanks
marcel
The nighttime is a perfect time for Selenium tests - you just have to remember to put a "Don't turn me off!" sticky note on your computer :).
Also, there always is Selenium Grid when the nighttime begins to be too short to run all tests. With Grid, you can run your tests on several machines in parallel!
We have several test suites that are applicable to different situations. Before a major release (to test, to pre-live, to production), everything runs. Usually (on a daily or even hourly basis on rush days) only the "The Quickened Normal Path of a User Through the Application" suite runs. And if somebody "fixes" a large bug, then the tests related to that part of application are run.
An hour seems absolutely fine to me.
One suggestion could be to decide which of the tests come under smoke tests, and are required to run every night. That is, tests that show the core functionality of your web application is still intact and working - other more detailed tests can be run at different times (once every few days?).
With that said, ours take around 2 hours; the only problem comes when one test has failed: you fix it, commit it, but then have to wait a very long time to verify it is fixed on the CI server.
TeamCity allows running builds in parallel on the same machine, so GUI tests should not be in the build chain along with the unit and integration tests. UI tests should have a separate database and a separate build so they do not waste the time of developers, manual testers, or any other stakeholders. TeamCity will gather all the statistics, send emails on build failures, and so on.
The next step is parallelization. As Slanec said, you can use Grid (several machines are not required) with MbUnit (C#) or TestNG (Java). With the help of Grid you can decrease test execution time, e.g. by a factor of 10, so it would take only 6(!) minutes to run all your tests.
You can also combine some of your tests into bigger ones (but this will increase the time needed to discover the reasons for a failure and make the tests more difficult to maintain).
After these steps, GUI tests can be executed after each source commit and provide fast feedback on the state of the application.
Great question, great answers.
An extra consideration is that you could prioritize your 120 gui tests: You can run them in an order such that the most important ones or those that are most likely to fail are run first.
This won't help to get the build times down, but it will help to get useful feedback from a build faster.
This prioritization (your top 10) need not be fixed, but can change per release / iteration / completed story / day, etc.
For example, you may want to run the newest gui tests first. Or those that were changed most recently. Or the ones covering most of the code that was most recently changed.
As far as I know, there is no tooling readily available that supports this out of the box, although there is quite a bit of (academic) research going on in the area of test case prioritization.

Resources