I want to set up a continuous integration server with Buildbot and gtest. I have already managed to set up the environment, which leads to the following output after the unit-testing step:
Running main() from gtest_main.cc
[==========] Running 7 tests from 3 test cases.
[----------] Global test environment set-up.
[----------] 4 tests from VectorTest
[ RUN ] VectorTest.size_is_correct
[ OK ] VectorTest.size_is_correct (0 ms)
[ RUN ] VectorTest.min_index
[ OK ] VectorTest.min_index (0 ms)
[ RUN ] VectorTest.sort_is_correct
[ OK ] VectorTest.sort_is_correct (0 ms)
[ RUN ] VectorTest.indices_of_smallest_are_correct
[ OK ] VectorTest.indices_of_smallest_are_correct (0 ms)
[----------] 4 tests from VectorTest (0 ms total)
[----------] 2 tests from MatrixTest
[ RUN ] MatrixTest.NumberOfColumnsIsCorrect
[ OK ] MatrixTest.NumberOfColumnsIsCorrect (0 ms)
[ RUN ] MatrixTest.NumberOfRowsIsCorrect
[ OK ] MatrixTest.NumberOfRowsIsCorrect (0 ms)
[----------] 2 tests from MatrixTest (0 ms total)
[----------] 1 test from SparseMatrix
[ RUN ] SparseMatrix.IteratorIsCorrect
[ OK ] SparseMatrix.IteratorIsCorrect (0 ms)
[----------] 1 test from SparseMatrix (0 ms total)
[----------] Global test environment tear-down
[==========] 7 tests from 3 test cases ran. (2 ms total)
[ PASSED ] 7 tests.
[100%] Built target unit
I would like Buildbot to parse this output and check that the keyword PASSED is present, so that it knows whether something went wrong during unit testing.
Do you know how to do that?
GoogleTest supports XML output in JUnit format via the command-line option --gtest_output, which most CI systems already know how to parse.
I don't know whether Buildbot supports JUnit parsing or not. If not, it is certainly easier to parse XML-structured output than the standard plain-text output.
Why don't you check the exit code of the test program? It will be a success code (0) if the tests pass and a failure (usually 1) if they fail.
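For example, a minimal Buildbot step could rely on that exit code directly. This is only a sketch, assuming the gtest binary built as target unit is invoked as ./unit and that the rest of the build factory is configured elsewhere; the --gtest_output flag is optional and only useful if you also want the XML report:
from buildbot.plugins import steps, util

factory = util.BuildFactory()
# ShellCommand marks the step (and hence the build) as failed whenever the
# command exits non-zero, which is exactly what gtest does when a test fails.
factory.addStep(steps.ShellCommand(
    name="unit tests",
    command=["./unit", "--gtest_output=xml:gtest_results.xml"],
    haltOnFailure=True,
))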
I would like pytest to run the assertions for all the items in the for loop. The test should fail at the end, but it should first run through all the elements in the loop.
The code looks like this:
@pytest.fixture
def library():
    return Library(spec_dir=service_spec_dir)

@pytest.fixture
def services(library):
    return list(library.service_map.keys())

def test_properties(library, services):
    for service_name in services:
        model = library.models[service_name]
        proxy = library.get_service(service_name)
        if len(model.properties) != 0:
            for prop in model.properties:
                try:
                    method = getattr(proxy, f'get_{prop.name}')
                    method()
                except Exception as ex:
                    pytest.fail(str(ex))
The above code fails as soon as one property of one service fails. I am wondering if there is a way to run the test for all the services and get a list of failed cases across all of them.
I tried parametrize, but based on this Stack Overflow discussion, the parameter list has to be resolved during the collection phase, and in our case the library is loaded during the execution phase. Hence I am not sure it can be parametrized.
The goal is to run all the services and their properties and get the list of failed items at the end.
I moved the variables to the global scope. I can parametrize the test now.
library = Library(spec_dir=service_spec_dir)
service_names = list(library.service_map.keys())

@pytest.mark.parametrize("service_name", service_names)
def test_properties(service_name):
    pass
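A slightly fuller sketch of what the parametrized test could look like, reusing the property-checking loop from the question (Library, service_spec_dir, models, get_service and the get_<property> accessors are the names from the question, not a real library):
import pytest

# Module level, so the parameter list is known at collection time.
library = Library(spec_dir=service_spec_dir)
service_names = list(library.service_map.keys())

@pytest.mark.parametrize("service_name", service_names)
def test_properties(service_name):
    # One test per service: a failing service no longer hides the others.
    model = library.models[service_name]
    proxy = library.get_service(service_name)
    failures = []
    for prop in model.properties:
        try:
            getattr(proxy, f'get_{prop.name}')()
        except Exception as ex:
            failures.append(f"{prop.name}: {ex}")
    if failures:
        pytest.fail("\n".join(failures))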
Don't use pytest.fail; use pytest_check instead.
The point of fail is that it stops test execution on a condition, while check is made for collecting how many cases failed.
import logging
import pytest
import pytest_check as check
def test_000():
    li = [1, 2, 3, 4, 5, 6]
    for i in li:
        logging.info(f"Test still running. i = {i}")
        if i % 2 > 0:
            check.is_true(False, msg=f"value of i is odd: {i}")
Output:
tests/main_test.py::test_000
-------------------------------- live log call --------------------------------
11:00:05 INFO Test still running. i = 1
11:00:05 INFO Test still running. i = 2
11:00:05 INFO Test still running. i = 3
11:00:05 INFO Test still running. i = 4
11:00:05 INFO Test still running. i = 5
11:00:05 INFO Test still running. i = 6
FAILED [100%]
================================== FAILURES ===================================
__________________________________ test_000 ___________________________________
FAILURE: value of i is odd: 1
assert False
FAILURE: value of i is odd: 3
assert False
FAILURE: value of i is odd: 5
assert False
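Applied to the services example from the question, the same idea could look roughly like this (again assuming the Library-based fixtures sketched in the question); every failing property is recorded, but the loops keep running:
import pytest_check as check

def test_properties(library, services):
    for service_name in services:
        model = library.models[service_name]
        proxy = library.get_service(service_name)
        for prop in model.properties:
            try:
                getattr(proxy, f'get_{prop.name}')()
            except Exception as ex:
                # Record the failure but keep iterating over the remaining
                # properties and services.
                check.is_true(False, msg=f"{service_name}.{prop.name}: {ex}")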
In a Pester v5 implementation, is there any way to have a data-driven tag?
My Use Case:
Operating on larger data sets
To have all tests runnable on a data set
To be able to run against a specific element of my data set via the Config Filter
My Conceptual example:
Describe "Vehicles" {
Context "Type: <_>" -foreach #("car","truck") {
# Should be tagged Car for iteration 1, Truck for iteration 2
It "Should be True" -tag ($_) { $true | should -betrue }
}
}
TIA
Your example works for me, so the answer seems to be yes, you can do that. Usage examples:
~> Invoke-Pester -Output Detailed
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 12ms.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: car
[+] Should be True 18ms (1ms|17ms)
Context Type: truck
[+] Should be True 19ms (2ms|16ms)
Tests completed in 129ms
Tests Passed: 2, Failed: 0, Skipped: 0 NotRun: 0
~> Invoke-Pester -Output Detailed -TagFilter car
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 12ms.
Filter 'Tag' set to ('car').
Filters selected 1 tests to run.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: car
[+] Should be True 9ms (4ms|5ms)
Tests completed in 66ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 1
~> Invoke-Pester -Output Detailed -TagFilter truck
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 11ms.
Filter 'Tag' set to ('truck').
Filters selected 1 tests to run.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: truck
[+] Should be True 21ms (1ms|19ms)
Tests completed in 97ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 1
~>
This is my core/tests.py that I use with pytest-django:
import pytest
def test_no_db():
    pass

def test_with_db(db):
    pass
It seems that setting up to inject the db fixture takes 66 seconds. When the tests start, collection is almost instant, followed by a 66-second pause, then the tests run rapidly.
If I disable the second test, the entire test suite runs in 0.002 seconds.
The database runs on PostgreSQL.
I run my tests like this:
$ pytest -v --noconftest core/tests.py
================================================================================ test session starts ================================================================================
platform linux -- Python 3.8.6, pytest-6.2.4, py-1.10.0, pluggy-0.13.1 -- /home/mslinn/venv/aw/bin/python
cachedir: .pytest_cache
django: settings: main.settings.test (from ini)
rootdir: /var/work/ancientWarmth/ancientWarmth, configfile: pytest.ini
plugins: timer-0.0.11, django-4.4.0, Faker-8.0.0
collected 2 items
core/tests.py::test_with_db PASSED [50%]
core/tests.py::test_no_db PASSED [100%]
=================================================================================== pytest-timer ====================================================================================
[success] 61.18% core/tests.py::test_with_db: 0.0003s
[success] 38.82% core/tests.py::test_no_db: 0.0002s
====================================================================== 2 passed, 0 skipped in 68.04s (0:01:08) ======================================================================
pytest.ini:
[pytest]
DJANGO_SETTINGS_MODULE = main.settings.test
FAIL_INVALID_TEMPLATE_VARS = True
filterwarnings = ignore::django.utils.deprecation.RemovedInDjango40Warning
python_files = tests.py test_*.py *_tests.py
Why does this happen? What can I do?
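One thing worth checking, assuming the pause is the test-database creation that pytest-django performs the first time the db fixture is requested, is whether the delay disappears when the database is kept between runs:
$ pytest -v --noconftest core/tests.py --reuse-db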
Using JMeter in a bash script, how can I make it return a non-zero value if any assertion failed?
jmeter -n -t someFile.jmx
echo $?
# always returns 0, even if an assertion failed
I tried with a Bean Shell Assertion using the script:
if (ResponseCode.equals("200") == false) {
    System.exit(-1);
}
But this does not even return 0; it just kills the process (I guess?).
Can anyone help me with this?
Put the following code in a JSR223 element:
System.exit(1);
It will return exit code 1, which you can see on Linux by executing echo $?.
In case you only care about JMeter returning an exit code when errors are found, you can check the log for ERROR lines afterwards:
test $(grep -c ERROR jmeter.log) -eq 0
If you then call echo $?, you can see it returns 1 if errors are found and 0 if not.
I used the following approach (taken from here) for JMeter version 4.0, within a JSR223 Assertion with the scripting language set to Groovy.
But watch out where you put the System.exit call.
First, with a JSR223 Assertion, we collect all samplers that have failed tests and put them into a user-defined variable:
String expectedCode = "200";
if (!expectedCode.equals(prev.getResponseCode())) {
    String currentValue = vars.get("failedTests");
    currentValue = currentValue + "Expected <response code> [" + expectedCode + "] but we got instead [" + prev.getResponseCode() + "] in sampler: '" + sampler.name + "'\n";
    vars.put("failedTests", currentValue);
}
Then, based on that variable, at the very end of the test we check whether it contains any value. If it does, we fail the whole suite and log accordingly:
String testResults = vars.get("failedTests");
if (testResults.length() > 0) {
    println testResults;
    log.info(testResults);
    println "Exit the system now with: System.exit(1)";
    System.exit(1);
} else {
    println "All tests passed!";
    log.info("All tests passed!");
}
I cannot reproduce your issue; are you sure your assertion really works? The code is more or less OK, but it also matters where you place it.
#./jmeter -n -t test.jmx
Creating summariser <summary>
Created the tree successfully using test.jmx
Starting the test @ Thu Jun 21 07:34:49 CEST 2018 (1529559289011)
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
#echo $?
255
You can use the Jenkins Performance Plugin, which can automatically mark the build as unstable or failed when certain thresholds are met or exceeded.
You can use the Taurus tool as a wrapper for the JMeter test. It has a powerful and flexible pass/fail criteria subsystem where you can define your assertion logic; if there are failures, Taurus will return a non-zero exit code to the parent shell.
I am using CasperJS/PhantomJS with this code:
casper.test.begin('assertEquals() tests', 3, function(test) {
    test.assertEquals(1 + 1, 3);
    test.assertEquals([1, 2, 3], [1]);
    test.assertEquals({a: 1, b: 2}, {a: 1, b: 4});
    test.done();
});
In the console I get failed tests as expected, but what I can't understand is why the test suite is marked as PASS:
PASS assertEquals() tests (3 tests)
FAIL 1 test executed in 0.029s, 0 passed, 1 failed, 0 dubious, 0 skipped.
I hadn't noticed it presented like this before, but you also get the error messages of the failing (first) assertEquals.
The last PASS is just saying that CasperJS is finished with the test suite, no matter what failed inside the suite.
This is the full log:
root#4332425a143d:/casperjs# casperjs test test.js
Test file: test.js
# assertEquals() tests
FAIL Subject equals the expected value
# type: assertEquals
# file: test.js
# subject: 2
# expected: 3
PASS assertEquals() tests (3 tests)
FAIL 1 test executed in 0.025s, 0 passed, 1 failed, 0 dubious, 0 skipped.
So that says that the first assertEquals fails and that the "assertEquals() tests" suite finished.