Not understanding the FAIL/PASS in casperjs

I am using casperjs/phantomjs with this code:
casper.test.begin('assertEquals() tests', 3, function(test) {
    test.assertEquals(1 + 1, 3);
    test.assertEquals([1, 2, 3], [1]);
    test.assertEquals({a: 1, b: 2}, {a: 1, b: 4});
    test.done();
});
In the console I get failed tests as expected, but what I can't understand is why the test suite is marked as PASS:
PASS assertEquals() tests (3 tests)
FAIL 1 test executed in 0.029s, 0 passed, 1 failed, 0 dubious, 0 skipped.

I hadn't noticed it presented like this before, but you also get the error messages of the failing (first) assertEquals.
The last PASS just says that casperjs has finished the test suite, no matter what failed inside it.
This is the full log:
root@4332425a143d:/casperjs# casperjs test test.js
Test file: test.js
# assertEquals() tests
FAIL Subject equals the expected value
# type: assertEquals
# file: test.js
# subject: 2
# expected: 3
PASS assertEquals() tests (3 tests)
FAIL 1 test executed in 0.025s, 0 passed, 1 failed, 0 dubious, 0 skipped.
So this says that the first assertEquals failed and that the suite "assertEquals() tests" finished.
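The last PASS is therefore not an overall success signal. If you drive casperjs from a script or CI, the exit status of the casperjs test command is the signal to check instead (a hedged sketch; the status should be non-zero on a failing run like this one):
root@4332425a143d:/casperjs# casperjs test test.js
root@4332425a143d:/casperjs# echo $?   # expected non-zero, since one assertion failed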

How to not exit a for loop inside a pytest although few items fail

I would like pytest to run for all the items in the for loop. The test should fail at the end, but it should first run for all the elements in the for loop.
The code looks like this:
@pytest.fixture
def library():
    return Library(spec_dir=service_spec_dir)

@pytest.fixture
def services(library):
    return list(library.service_map.keys())

def test_properties(library, services):
    for service_name in services:
        model = library.models[service_name]
        proxy = library.get_service(service_name)
        if len(model.properties) != 0:
            for prop in model.properties:
                try:
                    method = getattr(proxy, f'get_{prop.name}')
                    method()
                except Exception as ex:
                    pytest.fail(str(ex))
The above code fails if one property of one service fails. I am wondering if there is a way to run the test for all the services and get a list of the failed cases across all of them.
I tried parametrize, but based on this Stack Overflow discussion, the parameter list has to be resolved during the collection phase, while in our case the library is loaded during the execution phase. Hence I am not sure it can be parametrized.
The goal is to run all the services and their properties and get the list of failed items at the end.
I moved the variables to the global scope. I can parametrize the test now:
library = Library(spec_dir=service_spec_dir)
service_names = list(library.service_map.keys())

@pytest.mark.parametrize("service_name", service_names)
def test_properties(service_name):
    pass
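Filled in, the parametrized test might look like this (a sketch; Library, service_spec_dir, and the get_<name> accessors are taken from the question, and the body mirrors the original loop so each service becomes its own test case):
library = Library(spec_dir=service_spec_dir)
service_names = list(library.service_map.keys())

@pytest.mark.parametrize("service_name", service_names)
def test_properties(service_name):
    model = library.models[service_name]
    proxy = library.get_service(service_name)
    for prop in model.properties:
        # Any exception here fails only this service's test case;
        # the remaining parametrized cases still run.
        getattr(proxy, f'get_{prop.name}')()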
Don't use pytest.fail; use pytest_check.check instead.
The point of fail is that it stops test execution on a condition, while check is made to collect how many cases failed.
import logging
import pytest
import pytest_check as check

def test_000():
    li = [1, 2, 3, 4, 5, 6]
    for i in li:
        logging.info(f"Test still running. i = {i}")
        if i % 2 > 0:
            check.is_true(False, msg=f"value of i is odd: {i}")
Output:
tests/main_test.py::test_000
-------------------------------- live log call --------------------------------
11:00:05 INFO Test still running. i = 1
11:00:05 INFO Test still running. i = 2
11:00:05 INFO Test still running. i = 3
11:00:05 INFO Test still running. i = 4
11:00:05 INFO Test still running. i = 5
11:00:05 INFO Test still running. i = 6
FAILED [100%]
================================== FAILURES ===================================
__________________________________ test_000 ___________________________________
FAILURE: value of i is odd: 1
assert False
FAILURE: value of i is odd: 3
assert False
FAILURE: value of i is odd: 5
assert False
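Note that pytest_check comes from the third-party pytest-check plugin, so it has to be installed separately (pip install pytest-check) before the import above works.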

V5 -- Data driven Tags?

In a Pester v5 implementation, is there any way to have a data-driven tag?
My Use Case:
Operating on larger data sets
To have all tests runnable on a data set
To be able to run against a specific element of my data set via the Config Filter
My conceptual example:
Describe "Vehicles" {
    Context "Type: <_>" -ForEach @("car","truck") {
        # Should be tagged Car for iteration 1, Truck for iteration 2
        It "Should be True" -Tag ($_) { $true | Should -BeTrue }
    }
}
TIA
Your example works for me, so the answer seems to be: yes, you can do that. Usage examples:
~> Invoke-Pester -Output Detailed
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 12ms.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: car
[+] Should be True 18ms (1ms|17ms)
Context Type: truck
[+] Should be True 19ms (2ms|16ms)
Tests completed in 129ms
Tests Passed: 2, Failed: 0, Skipped: 0 NotRun: 0
~> Invoke-Pester -Output Detailed -TagFilter car
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 12ms.
Filter 'Tag' set to ('car').
Filters selected 1 tests to run.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: car
[+] Should be True 9ms (4ms|5ms)
Tests completed in 66ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 1
~> Invoke-Pester -Output Detailed -TagFilter truck
Pester v5.3.1
Starting discovery in 1 files.
Discovery found 2 tests in 11ms.
Filter 'Tag' set to ('truck').
Filters selected 1 tests to run.
Running tests.
Running tests from 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
Describing Vehicles
Context Type: truck
[+] Should be True 21ms (1ms|19ms)
Tests completed in 97ms
Tests Passed: 1, Failed: 0, Skipped: 0 NotRun: 1
~>
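The question also asks about the config filter; the same tag selection can be expressed through a Pester v5 configuration object instead of the -TagFilter parameter (a sketch, assuming the same test file as above):
$config = New-PesterConfiguration
$config.Run.Path = 'C:\Users\wragg\OneDrive\Desktop\so.tests.ps1'
$config.Filter.Tag = 'car'            # select only tests tagged 'car'
$config.Output.Verbosity = 'Detailed'
Invoke-Pester -Configuration $config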

3 test cases fail with SbSocketGetInterfaceAddressTest in NPLB

We ran the NPLB tests on both the x86-x11 and arm-linux platforms with the Cobalt Release 11.104700 version, and the SbSocketGetInterfaceAddressTest tests fail on both, so it seems to be an issue with NPLB itself. Can someone have a look?
[ FAILED ] SbSocketAddressTypes/SbSocketGetInterfaceAddressTest.SunnyDayDestination/1, where GetParam() = 1
[ FAILED ] SbSocketAddressTypes/SbSocketGetInterfaceAddressTest.SunnyDaySourceForDestination/1, where GetParam() = 1
[ FAILED ] SbSocketAddressTypes/SbSocketGetInterfaceAddressTest.SunnyDaySourceNotLoopback/1, where GetParam() = 1
1>SbSocketAddressTypes/SbSocketGetInterfaceAddressTest.SunnyDayDestination/1
../../starboard/nplb/socket_get_interface_address_test.cc:85: Failure
Value of: SbSocketGetInterfaceAddress(&destination, &source, NULL)
Actual: false
Expected: true
../../starboard/nplb/socket_get_interface_address_test.cc:86: Failure
Value of: source.type == GetAddressType()
Actual: false
Expected: true
../../starboard/nplb/socket_get_interface_address_test.cc:87: Failure
Value of: SbSocketGetInterfaceAddress(&destination, &source, &netmask)
Actual: false
Expected: true
../../starboard/nplb/socket_get_interface_address_test.cc:93: Failure
Value of: GetAddressType()
Actual: 1
Expected: source.type
Which is: 4278124286
../../starboard/nplb/socket_get_interface_address_test.cc:94: Failure
Value of: GetAddressType()
Actual: 1
Expected: netmask.type
Which is: 4278124286
../../starboard/nplb/socket_get_interface_address_test.cc:95: Failure
Value of: 0
Expected: source.port
Which is: -16843010
2>SbSocketAddressTypes/SbSocketGetInterfaceAddressTest.SunnyDaySourceForDestination/1
[13672:19284243583:ERROR:socket_connect.cc(52)] SbSocketConnect: connect failed: 101
../../starboard/nplb/socket_get_interface_address_test.cc:128: Failure
Value of: source.type == GetAddressType()
Actual: false
Expected: true
../../starboard/nplb/socket_get_interface_address_test.cc:132: Failure
Value of: GetAddressType()
Actual: 1
Expected: netmask.type
Which is: 4278124286
../../starboard/nplb/socket_get_interface_address_test.cc:134: Failure
Expected: (0) != (SbMemoryCompare(source.address, invalid_address.address, (sizeof(source.address) / sizeof(source.address[0])))), actual: 0 vs 0
../../starboard/nplb/socket_get_interface_address_test.cc:136: Failure
Expected: (0) != (SbMemoryCompare(netmask.address, invalid_address.address, (sizeof(netmask.address) / sizeof(netmask.address[0])))), actual: 0 vs 0
3>SbSocketAddressTypes/SbSocketGetInterfaceAddressTest.SunnyDaySourceNotLoopback/1
../../starboard/nplb/socket_get_interface_address_test.cc:165: Failure
Value of: SbSocketGetInterfaceAddress(&destination, &source, NULL)
Actual: false
Expected: true
../../starboard/nplb/socket_get_interface_address_test.cc:166: Failure
Value of: GetAddressType()
Actual: 1
Expected: source.type
Which is: 4278124286
../../starboard/nplb/socket_get_interface_address_test.cc:167: Failure
Value of: SbSocketGetInterfaceAddress(&destination, &source, &netmask)
Actual: false
Expected: true
../../starboard/nplb/socket_get_interface_address_test.cc:172: Failure
Expected: (0) != (SbMemoryCompare(netmask.address, invalid_address.address, (sizeof(netmask.address) / sizeof(netmask.address[0])))), actual: 0 vs 0
../../starboard/nplb/socket_get_interface_address_test.cc:174: Failure
Expected: (0) != (SbMemoryCompare(source.address, invalid_address.address, (sizeof(source.address) / sizeof(source.address[0])))), actual: 0 vs 0
It looks like a number of SbSocket implementations in your Starboard port are broken, and NPLB rightfully points that out.
For example, in order to pass SbSocketGetInterfaceAddressTest.SunnyDayDestination you need to follow the comment from the SbSocketGetInterfaceAddress declaration:
// If the destination address is 0.0.0.0, and its |type| is
// |kSbSocketAddressTypeIpv4|, then any IPv4 local interface that is up and not
// a loopback interface is a valid return value.
and
// Returns whether it was possible to determine the source address and the
// netmask (if non-NULL value is passed) to be used to connect to the
// destination. This function could fail if the destination is not reachable,
// if it an invalid address, etc.
In other words, the test expects out_source_address to be an IP address of the machine and the return value to be true.
Since you are seeing the same error on the linux_x86-x11 build, I suggest you verify that the POSIX function connect (used by the SbSocketConnect implementation on Linux) works with the 0.0.0.0 IP address on your platform.
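A standalone check could look roughly like this (a sketch only; the UDP socket, the port, and the connect/getsockname pattern are assumptions about how such an implementation typically resolves the local interface address, not the actual Starboard code):
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in dest;
    memset(&dest, 0, sizeof(dest));
    dest.sin_family = AF_INET;
    dest.sin_port = htons(9);                 /* arbitrary port */
    dest.sin_addr.s_addr = htonl(INADDR_ANY); /* destination 0.0.0.0 */
    if (connect(fd, (struct sockaddr *)&dest, sizeof(dest)) != 0) {
        perror("connect");  /* a failure here matches the errno 101 in the log */
        return 1;
    }
    /* On success, getsockname reveals the source address the kernel picked. */
    struct sockaddr_in src;
    socklen_t len = sizeof(src);
    getsockname(fd, (struct sockaddr *)&src, &len);
    char buf[INET_ADDRSTRLEN];
    printf("source address: %s\n",
           inet_ntop(AF_INET, &src.sin_addr, buf, sizeof(buf)));
    close(fd);
    return 0;
}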

Continuous integration with GTest and Buildbot

I want to set up a continuous integration server with buildbot and gtest. I have already managed to set up the environment, which leads to the following output after the unit testing step:
Running main() from gtest_main.cc
[==========] Running 7 tests from 3 test cases.
[----------] Global test environment set-up.
[----------] 4 tests from VectorTest
[ RUN ] VectorTest.size_is_correct
[ OK ] VectorTest.size_is_correct (0 ms)
[ RUN ] VectorTest.min_index
[ OK ] VectorTest.min_index (0 ms)
[ RUN ] VectorTest.sort_is_correct
[ OK ] VectorTest.sort_is_correct (0 ms)
[ RUN ] VectorTest.indices_of_smallest_are_correct
[ OK ] VectorTest.indices_of_smallest_are_correct (0 ms)
[----------] 4 tests from VectorTest (0 ms total)
[----------] 2 tests from MatrixTest
[ RUN ] MatrixTest.NumberOfColumnsIsCorrect
[ OK ] MatrixTest.NumberOfColumnsIsCorrect (0 ms)
[ RUN ] MatrixTest.NumberOfRowsIsCorrect
[ OK ] MatrixTest.NumberOfRowsIsCorrect (0 ms)
[----------] 2 tests from MatrixTest (0 ms total)
[----------] 1 test from SparseMatrix
[ RUN ] SparseMatrix.IteratorIsCorrect
[ OK ] SparseMatrix.IteratorIsCorrect (0 ms)
[----------] 1 test from SparseMatrix (0 ms total)
[----------] Global test environment tear-down
[==========] 7 tests from 3 test cases ran. (2 ms total)
[ PASSED ] 7 tests.
[100%] Built target unit
I would like buildbot to parse this output and check that the keyword PASSED is present, in order to know whether something went wrong during unit testing.
Do you know how to do that?
GoogleTest supports XML output in JUnit format via the command line option --gtest_output, which most CI systems already know how to parse.
I don't know whether Buildbot supports JUnit parsing. If not, it is certainly easier to parse XML-structured output than the standard plain-text output.
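Usage is just an extra flag on the test binary (the binary and report file names below are placeholders):
./unit_tests --gtest_output=xml:gtest_results.xml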
Why don't you check the exit code of the test program? It will be a success code (0) if the tests pass and a failure (usually 1) if they fail.
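In Buildbot both suggestions combine naturally, because a ShellCommand step is marked failed whenever the command exits non-zero (a sketch; the binary name and factory wiring are placeholders):
from buildbot.plugins import steps, util

factory = util.BuildFactory()
# Runs the gtest binary; --gtest_output also writes a JUnit-style XML
# report, and any failing test makes the binary exit non-zero, which
# fails this step without parsing the text output.
factory.addStep(steps.ShellCommand(
    name="unit tests",
    command=["./unit_tests", "--gtest_output=xml:gtest_results.xml"],
))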

Two apparently equal test cases coming back failed. What can cause that?

Below are a few lines from my test case. The first assertion comes back as a failure, but why? The second does not.
result = Parser.parse_subject(@@lexicon.scan("kill princess"), Pair.new(:noun, "bear"))
assert_equal(Parser.parse_subject(@@lexicon.scan("kill princess"), Pair.new(:noun, "bear")), Parser.parse_subject(@@lexicon.scan("kill princess"), Pair.new(:noun, "bear")))
assert_equal(result, result)
Here is the actual error:
Run options:
# Running tests:
.F.
Finished tests in 0.004000s, 750.0000 tests/s, 1750.0000 assertions/s.
1) Failure:
test_parse_subject(ParserTests) [test_fournineqa.rb:30]:
#<Sentence:0x21ad958 @object="princess", @subject="bear", @verb="kill"> expected but was
#<Sentence:0x21acda0 @object="princess", @subject="bear", @verb="kill">.
3 tests, 7 assertions, 1 failures, 0 errors, 0 skips
It looks like you have defined a class Sentence but have provided no way to compare two Sentence instances, leaving assert_equal comparing the identities of two objects to discover that they are not the same instance.
A simple fix would be something like:
class Sentence
  def ==(sentence)
    @subject == sentence.subject and
      @verb == sentence.verb and
      @object == sentence.object
  end
end
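Note this assumes Sentence exposes reader methods for those instance variables; if it doesn't, you would also need something like:
class Sentence
  attr_reader :subject, :verb, :object
end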
The first assertion compares two different objects with the same content, whereas the second assertion compares an object with itself. Apparently "equal" in this context means "identical objects". (Check the implementation.)
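For reference, assert_equal in both Test::Unit and Minitest compares with ==, which is why defining == as above makes the first assertion pass.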
