Two apparently equal test cases coming back failed. What can cause that? - ruby

Below are a few lines from my test case. The first assertion fails, but why? The second does not.
result = Parser.parse_subject(@@lexicon.scan("kill princess"), Pair.new(:noun, "bear"))
assert_equal(Parser.parse_subject(@@lexicon.scan("kill princess"), Pair.new(:noun, "bear")),
             Parser.parse_subject(@@lexicon.scan("kill princess"), Pair.new(:noun, "bear")))
assert_equal(result, result)
Here is the actual error:
Run options:
# Running tests:
.F.
Finished tests in 0.004000s, 750.0000 tests/s, 1750.0000 assertions/s.
1) Failure:
test_parse_subject(ParserTests) [test_fournineqa.rb:30]:
<#<Sentence:0x21ad958 @object="princess", @subject="bear", @verb="kill">> expected but was
<#<Sentence:0x21acda0 @object="princess", @subject="bear", @verb="kill">>.
3 tests, 7 assertions, 1 failures, 0 errors, 0 skips

It looks like you have defined a class Sentence but have provided no way to compare two Sentence instances, so assert_equal falls back to comparing object identity and discovers that they are not the same instance.
A simple fix would be something like:
class Sentence
  def ==(sentence)
    @subject == sentence.subject and
      @verb == sentence.verb and
      @object == sentence.object
  end
end

The first assertion compares two different objects that happen to have the same content, whereas the second assertion compares an object with itself. "Apparently equal" in this context means "identical objects". (Check the assert_equal implementation.)
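To make the distinction concrete, here is a minimal self-contained sketch (a simplified Sentence, not the asker's actual class) showing how defining == gives value equality while equal? still compares identity:

```ruby
# Minimal sketch: a Sentence with value equality defined via ==.
class Sentence
  attr_reader :subject, :verb, :object

  def initialize(subject, verb, object)
    @subject, @verb, @object = subject, verb, object
  end

  # Two sentences are equal when all three parts match.
  def ==(other)
    other.is_a?(Sentence) &&
      subject == other.subject &&
      verb == other.verb &&
      object == other.object
  end
end

a = Sentence.new("bear", "kill", "princess")
b = Sentence.new("bear", "kill", "princess")
puts a == b        # true: same content, different instances
puts a.equal?(b)   # false: identity comparison still tells them apart
```

With this in place, assert_equal on two freshly parsed results compares their content instead of their object ids.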

How to not exit a for loop inside a pytest test even if a few items fail

I would like the test to run for all the items in the for loop. The test should fail at the end, but it should first run for all the elements in the for loop.
The code looks like this:
@pytest.fixture
def library():
    return Library(spec_dir=service_spec_dir)

@pytest.fixture
def services(library):
    return list(library.service_map.keys())

def test_properties(library, services):
    for service_name in services:
        model = library.models[service_name]
        proxy = library.get_service(service_name)
        if len(model.properties) != 0:
            for prop in model.properties:
                try:
                    method = getattr(proxy, f'get_{prop.name}')
                    method()
                except Exception as ex:
                    pytest.fail(str(ex))
The above code fails as soon as one property of one service fails. I am wondering if there is a way to run the test for all the services and get a list of the failed cases for all the services.
I tried parametrize, but based on this Stack Overflow discussion the parameter list has to be resolved during the collection phase, and in our case the library is loaded during the execution phase, so I am not sure it can be parametrized.
The goal is to run all the services and their properties and get the list of failed items at the end.
I moved the variables to the global scope; I can parametrize the test now:
library = Library(spec_dir=service_spec_dir)
service_names = list(library.service_map.keys())

@pytest.mark.parametrize("service_name", service_names)
def test_properties(service_name):
    pass
Don't use pytest.fail; use pytest_check.check instead.
The point of fail is that it stops test execution on a condition, while check is made to collect how many cases failed.
import logging

import pytest
import pytest_check as check

def test_000():
    li = [1, 2, 3, 4, 5, 6]
    for i in li:
        logging.info(f"Test still running. i = {i}")
        if i % 2 > 0:
            check.is_true(False, msg=f"value of i is odd: {i}")
Output:
tests/main_test.py::test_000
-------------------------------- live log call --------------------------------
11:00:05 INFO Test still running. i = 1
11:00:05 INFO Test still running. i = 2
11:00:05 INFO Test still running. i = 3
11:00:05 INFO Test still running. i = 4
11:00:05 INFO Test still running. i = 5
11:00:05 INFO Test still running. i = 6
FAILED [100%]
================================== FAILURES ===================================
__________________________________ test_000 ___________________________________
FAILURE: value of i is odd: 1
assert False
FAILURE: value of i is odd: 3
assert False
FAILURE: value of i is odd: 5
assert False
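If adding pytest_check as a dependency is not an option, the same "keep going, report at the end" pattern can be sketched in plain Python: accumulate a message per failure and assert once after the loop. This is a generic sketch (check_all is an illustrative helper, not part of pytest or the Library API from the question):

```python
def check_all(items, predicate, describe):
    """Run predicate over every item; collect a message for each
    failing item instead of stopping at the first failure."""
    failures = []
    for item in items:
        if not predicate(item):
            failures.append(describe(item))
    return failures

failures = check_all([1, 2, 3, 4, 5, 6],
                     lambda i: i % 2 == 0,
                     lambda i: f"value of i is odd: {i}")
print(failures)  # all three odd values are reported, not just the first
```

Inside a test you would finish with `assert not failures, "\n".join(failures)`, so the test fails once with the full list.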

Trailblazer: Which step caused my operation to fail?

Given an operation like
class MyOperation < Trailblazer::Operation
  step :do_a!
  step :do_b!

  def do_a!(options, **)
    false
  end

  def do_b!(options, **)
    true
  end
end
and the result of run(MyOperation), how can I tell which step of the operation failed?
If the result object doesn't contain this info by default, what's a good way to add it?
There is now a gem that provides operation-specific debugging utilities: https://github.com/trailblazer/trailblazer-developer
It allows you to see exactly which step raised an exception or which step caused the track to change from success to failure.
Trailblazer::Developer.wtf?(MyOperation, options)
It will print the trace of steps to STDOUT/Logger.

Ginkgo skipped specs counted as failed

I've been using Ginkgo for a while and I have found a behavior I don't really understand. I have a set of specs that I want to run if and only if a condition is met. If the condition is not met, I want to skip the test suite.
Something like this:
ginkgo.BeforeSuite(func() {
    if !CheckCondition() {
        ginkgo.Skip("condition not available")
    }
})
When the suite is skipped, this counts as a failure:
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 0 Skipped
I assumed there should be one test counted as skipped. Am I missing something? Any comments are welcome.
Thanks
I think you are using the Skip method incorrectly. It should be used inside a spec, as below, not inside BeforeSuite. When used inside a spec it does show up as "skipped" in the summary.
It("should do something, if it can", func() {
    if !someCondition {
        Skip("special condition wasn't met")
    }
})
https://onsi.github.io/ginkgo/#the-spec-runner

Chef serverspec `describe command` using an or statement

I would like to use a serverspec check and run it against two acceptable outcomes, so that if either passes then the check passes. I want my check to pass if the exit status of the command is either 0 or 1. Here is my check:
describe command("rm /var/tmp/*.test") do
  its(:exit_status) { should eq 0 }
end
Right now it can only check if the exit status is 0. How can I change my check to use either 0 or 1 as an acceptable exit status?
Use a compound matcher.
its(:exit_status) { should eq(0).or eq(1) }

expect(Class).to receive(:x).with(hash_including(y: :z)) doesn't work

I want to check that Pandoc.convert is called with to: :docx option like this:
options = {to: :docx}
PandocRuby.convert("some string", options)
I have the following expectation in a spec:
expect(PandocRuby).to receive(:convert).with(hash_including(to: :docx))
The spec fails like this:
Failure/Error: expect(PandocRuby).to receive(:convert).with(hash_including(to: :docx))
(PandocRuby (class)).convert(hash_including(:to=>:docx))
expected: 1 time with arguments: (hash_including(:to=>:docx))
received: 0 times
But when debugging, options is like this:
[2] pry(#<ReportDocumentsController>)> options
=> {
:to => :docx,
:reference_docx => "/Users/josh/Documents/Work/Access4All/Projects/a4aa2/src/public/uploads/report_template/reference_docx/1/reference.docx"
}
I think I'm using the wrong RSpec matcher (or the right one in the wrong way), but I can't get it working.
You just need to expect all of the method arguments:
expect(PandocRuby).to receive(:convert).with("some string", hash_including(to: :docx))
Or you could use a matcher to be less specific about the first argument, e.g.
expect(PandocRuby).to receive(:convert).with(an_instance_of(String), hash_including(to: :docx))
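The idea behind hash_including can be sketched without RSpec: the matcher passes when every expected key/value pair appears in the hash that was actually received, ignoring any extra keys. A minimal plain-Ruby sketch (hash_including? is a made-up helper, not RSpec's internal implementation):

```ruby
# Minimal sketch of a "hash including" check: it passes when every
# expected key/value pair is present in the received hash, so extra
# keys (like :reference_docx in the question) do not matter.
def hash_including?(received, expected)
  expected.all? { |k, v| received.key?(k) && received[k] == v }
end

received = { to: :docx, reference_docx: "reference.docx" }

puts hash_including?(received, { to: :docx })  # true: subset matches
puts hash_including?(received, { to: :pdf })   # false: value differs
```

This is why the original expectation fails for a different reason than the hash: with("some string", hash_including(to: :docx)) matches because the hash matcher only constrains the pairs it names, but every positional argument still has to be accounted for.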
