Pester v5 -- Data-driven tags?

In a Pester v5 implementation, is there any way to have a data-driven tag?
My use case:
Operating on larger data sets
To have all tests runnable on a data set
To be able to run against a specific element of my data set via the Config Filter
My conceptual example:
Describe "Vehicles" {
Context "Type: <_>" -foreach #("car","truck") {
# Should be tagged Car for iteration 1, Truck for iteration 2
It "Should be True" -tag ($_) { $true | should -betrue }
}
}
TIA

Your example works for me, so the answer seems to be yes, you can do that. Usage examples:
~> Invoke-Pester -Output Detailed
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 12ms.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: car
[+] Should be True 18ms (1ms|17ms)
Context Type: truck
[+] Should be True 19ms (2ms|16ms)
Tests completed in 129ms
Tests Passed: 2, Failed: 0, Skipped: 0 NotRun: 0
~> Invoke-Pester -Output Detailed -TagFilter car
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 12ms.
Filter 'Tag' set to ('car').
Filters selected 1 tests to run.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: car
[+] Should be True 9ms (4ms|5ms)
Tests completed in 66ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 1
~> Invoke-Pester -Output Detailed -TagFilter truck
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 11ms.
Filter 'Tag' set to ('truck').
Filters selected 1 tests to run.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: truck
[+] Should be True 21ms (1ms|19ms)
Tests completed in 97ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 1
~>

Related

How to not exit a for loop inside a pytest test although a few items fail

I would like pytest to run for all the items in the for loop. The test should fail at the end, but it should run over all the elements in the for loop first.
The code looks like this:
@pytest.fixture
def library():
    return Library(spec_dir=service_spec_dir)

@pytest.fixture
def services(library):
    return list(library.service_map.keys())

def test_properties(library, services):
    for service_name in services:
        model = library.models[service_name]
        proxy = library.get_service(service_name)
        if len(model.properties) != 0:
            for prop in model.properties:
                try:
                    method = getattr(proxy, f'get_{prop.name}')
                    method()
                except Exception as ex:
                    pytest.fail(str(ex))
The above code fails if one property of one service fails. I am wondering if there is a way to run the test for all the services and get a list of failed cases across all of them.
I tried parametrize, but based on this Stack Overflow discussion, the parameter list should be resolved during the collection phase, and in our case the library is loaded during the execution phase. Hence I am also not sure it can be parametrized.
The goal is to run all the services and their properties and get the list of failed items at the end.
I moved the variables to the global scope. I can parametrize the test now:
library = Library(spec_dir=service_spec_dir)
service_names = list(library.service_map.keys())

@pytest.mark.parametrize("service_name", service_names)
def test_properties(service_name):
    pass
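As a side note (not part of the original post): if loading the Library at module import time is undesirable, one alternative is pytest's built-in pytest_generate_tests hook, which builds the parameter list during the collection phase. A minimal sketch, assuming the Library and service_spec_dir names from the question; the import location is hypothetical:
from my_project import Library, service_spec_dir  # hypothetical import location

def pytest_generate_tests(metafunc):
    # Built-in pytest hook, called once per test function during collection;
    # only parametrize tests that actually ask for a "service_name" argument.
    if "service_name" in metafunc.fixturenames:
        library = Library(spec_dir=service_spec_dir)
        metafunc.parametrize("service_name", sorted(library.service_map.keys()))

def test_properties(service_name):
    # Each service becomes its own test case, so one failing service
    # no longer hides the results of the others.
    ...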
Don't use pytest.fail; use pytest_check.check instead.
The point of fail is that it stops test execution on a condition, while check is made to collect how many cases failed.
import logging
import pytest
import pytest_check as check

def test_000():
    li = [1, 2, 3, 4, 5, 6]
    for i in li:
        logging.info(f"Test still running. i = {i}")
        if i % 2 > 0:
            check.is_true(False, msg=f"value of i is odd: {i}")
Output:
tests/main_test.py::test_000
-------------------------------- live log call --------------------------------
11:00:05 INFO Test still running. i = 1
11:00:05 INFO Test still running. i = 2
11:00:05 INFO Test still running. i = 3
11:00:05 INFO Test still running. i = 4
11:00:05 INFO Test still running. i = 5
11:00:05 INFO Test still running. i = 6
FAILED [100%]
================================== FAILURES ===================================
__________________________________ test_000 ___________________________________
FAILURE: value of i is odd: 1
assert False
FAILURE: value of i is odd: 3
assert False
FAILURE: value of i is odd: 5
assert False
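Applied back to the original question, the same check-based pattern could look roughly like the sketch below. This is an untested illustration: Library, service_spec_dir, service_map, models, properties and get_service() are the names used in the question, and the import is hypothetical.
import pytest
import pytest_check as check

from my_project import Library, service_spec_dir  # hypothetical import location

@pytest.fixture
def library():
    return Library(spec_dir=service_spec_dir)

def test_properties(library):
    for service_name in library.service_map.keys():
        model = library.models[service_name]
        proxy = library.get_service(service_name)
        for prop in model.properties:
            try:
                getattr(proxy, f'get_{prop.name}')()
            except Exception as ex:
                # Record the failure but keep iterating over the remaining
                # properties and services instead of aborting the test.
                check.is_true(False, msg=f"{service_name}.{prop.name}: {ex}")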

Run specific part of Cypress test multiple times (not whole test)

Is it possible to run a specific part of a test in Cypress over and over again without executing the whole test case? I get an error in the second part of the test case, and the first half of it takes 100s. That means I have to wait 100s every time to get to the point where the error occurs. I would like to rerun the test case from just a few steps before the error occurs. So once again, my question is: is it possible to do this in Cypress? Thanks
Workaround #1
If you are using cucumber in Cypress you can modify your scenario into a Scenario Outline that will execute N times with a scenario tag:
@runMe
Scenario Outline: Visit Google Page
  Given that google page is displayed

  Examples:
    | nthRun |
    | 1      |
    | 2      |
    | 3      |
    | 4      |
    | 100    |
After that, run the test in the terminal by running it through tags:
./node_modules/.bin/cypress-tags run -e TAGS='@runMe'
Reference: https://www.npmjs.com/package/cypress-cucumber-preprocessor?activeTab=versions#running-tagged-tests
Workaround #2
Cypress does have retry capability, but it only retries a scenario on failure. You can force your scenario to fail in order to retry it N times with a scenario tag:
In your cypress.json add the following configuration:
{
  "retries": {
    // Configure retry attempts for `cypress run`
    // Default is 0
    "runMode": 99,
    // Configure retry attempts for `cypress open`
    // Default is 0
    "openMode": 99
  }
}
Reference: https://docs.cypress.io/guides/guides/test-retries#How-It-Works
Next, in your feature file, add an unknown step as the last step of your scenario to make it fail:
@runMe
Scenario: Visit Google Page
  Given that google page is displayed
  And I am an unknown step
Then run the test through tags:
./node_modules/.bin/cypress-tags run -e TAGS='@runMe'
For a solution that doesn't require a change to the config file, you can pass retries as a parameter to specific tests that are known to be flaky for acceptable reasons.
https://docs.cypress.io/guides/guides/test-retries#Custom-Configurations
Meaning you can write (from the docs):
describe('User bank accounts', {
  retries: {
    runMode: 2,
    openMode: 1,
  }
}, () => {
  // The per-suite configuration is applied to each test
  // If a test fails, it will be retried
  it('allows a user to view their transactions', () => {
    // ...
  })

  it('allows a user to edit their transactions', () => {
    // ...
  })
})

pytest database access takes 66 seconds to start

This is my core/tests.py that I use with pytest-django:
import pytest

def test_no_db():
    pass

def test_with_db(db):
    pass
Seems that setting up to inject db takes 66 seconds. When the tests start, collection is almost instant, followed by a 66-second pause, then the tests run rapidly.
If I disable the second test, the entire test suite runs in 0.002 seconds.
The database runs on PostgreSQL.
I run my tests like this:
$ pytest -v --noconftest core/tests.py
================================================================================ test session starts ================================================================================
platform linux -- Python 3.8.6, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/mslinn/venv/aw/bin/python
cachedir: .pytest_cache
django: settings: main.settings.test (from ini)
rootdir: /var/work/ancientWarmth/ancientWarmth, configfile: pytest.ini
plugins: timer-0.0.11, django-4.4.0, Faker-8.0.0
collected 2 items
core/tests.py::test_with_db PASSED [50%]
core/tests.py::test_no_db PASSED [100%]
=================================================================================== pytest-timer ====================================================================================
[success] 61.18% core/tests.py::test_with_db: 0.0003s
[success] 38.82% core/tests.py::test_no_db: 0.0002s
====================================================================== 2 passed, 0 skipped in 68.04s (0:01:08) ======================================================================
pytest.ini:
[pytest]
DJANGO_SETTINGS_MODULE = main.settings.test
FAIL_INVALID_TEMPLATE_VARS = True
filterwarnings = ignore::django.utils.deprecation.RemovedInDjango40Warning
python_files = tests.py test_*.py *_tests.py
Why does this happen? What can I do?

Ginkgo skipped specs counted as failed

I've been using Ginkgo for a while and I have found a behavior I don't really understand. I have a set of specs that I want to run if and only if a condition is available. If the condition is not available, I want to skip the test suite.
Something like this:
ginkgo.BeforeSuite(func() {
    if !CheckCondition() {
        ginkgo.Skip("condition not available")
    }
})
When the suite is skipped this counts as a failure.
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 0 Skipped
I assumed there should be one test considered to be skipped. Am I missing something? Any comments are welcome.
Thanks
I think you are using the Skip method incorrectly. It should be used inside a spec, like below, not inside BeforeSuite. When used inside a spec it does show up as "skipped" in the summary.
It("should do something, if it can", func() {
if !someCondition {
Skip("special condition wasn't met")
}
})
https://onsi.github.io/ginkgo/#the-spec-runner

Not understanding the FAIL/PASS in casperjs

I am using casperjs/phantomjs with this code
casper.test.begin('assertEquals() tests', 3, function(test) {
    test.assertEquals(1 + 1, 3);
    test.assertEquals([1, 2, 3], [1]);
    test.assertEquals({a: 1, b: 2}, {a: 1, b: 4});
    test.done();
});
In the console I get failed tests as expected, but what I can't understand is why the test suite is marked as PASS:
PASS assertEquals() tests (3 tests)
FAIL 1 test executed in 0.029s, 0 passed, 1 failed, 0 dubious, 0 skipped.
I hadn't noticed that before, but you also get the error messages of the failing (first) assertEquals.
The last PASS is just saying that casperjs has finished with the test suite, no matter what failed inside the suite.
This is the full log:
root@4332425a143d:/casperjs# casperjs test test.js
Test file: test.js
# assertEquals() tests
FAIL Subject equals the expected value
# type: assertEquals
# file: test.js
# subject: 2
# expected: 3
PASS assertEquals() tests (3 tests)
FAIL 1 test executed in 0.025s, 0 passed, 1 failed, 0 dubious, 0 skipped.
So that says that the first assertEquals fails and the suite "assertEquals() tests" finished.
