I would like a serverspec check that accepts two outcomes, so that if either passes, the check passes. Specifically, I want my check to pass if the exit status of the command is either 0 or 1. Here is my check:
describe command("rm /var/tmp/*.test") do
  its(:exit_status) { should eq 0 }
end
Right now it can only check if the exit status is 0. How can I change my check to use either 0 or 1 as an acceptable exit status?
Use a compound matcher.
its(:exit_status) { should eq(0).or eq(1) }
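For reference, here is the full check with the compound matcher in place (a minimal sketch; chaining matchers with .or requires RSpec 3, on which current serverspec releases are built):

describe command("rm /var/tmp/*.test") do
  its(:exit_status) { should eq(0).or eq(1) }
end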
I've been using Ginkgo for a while, and I've found a behavior I don't really understand. I have a set of specs that I want to run if and only if a condition is available. If the condition is not available, I want to skip the test suite.
Something like this:
ginkgo.BeforeSuite(func() {
    if !CheckCondition() {
        ginkgo.Skip("condition not available")
    }
})
When the suite is skipped, this counts as a failure:
FAIL! -- 0 Passed | 1 Failed | 0 Pending | 0 Skipped
I assumed this should be reported as one skipped test. Am I missing something? Any comments are welcome.
Thanks
I think you are using the Skip method incorrectly. It should be used inside a spec, as below, not inside BeforeSuite. When used inside a spec, it does show up as "skipped" in the summary.
It("should do something, if it can", func() {
if !someCondition {
Skip("special condition wasn't met")
}
})
https://onsi.github.io/ginkgo/#the-spec-runner
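For completeness, a minimal sketch of the asker's condition moved into a spec (CheckCondition is the asker's own helper; the surrounding Describe block is assumed):

var _ = ginkgo.Describe("conditional specs", func() {
    ginkgo.It("should do something, if it can", func() {
        if !CheckCondition() {
            ginkgo.Skip("condition not available")
        }
        // the actual assertions go here
    })
})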
Using JMeter in a bash script, how can I make it return a non-zero value if any assertion failed?
jmeter -n -t someFile.jmx
echo $?
# always returns 0, even if an assertion failed
I tried with a BeanShell Assertion using this script:
if (ResponseCode.equals("200") == false) {
    System.exit(-1);
}
But this does not even return an exit status; it just kills the process (I guess?). Can anyone help me with this?
Put the following code in a JSR223 element:
System.exit(1);
It will exit with status 1, which you can see on Linux by running echo $?.
If you only care about JMeter returning a non-zero exit code when errors were found, you can check the log for ERROR lines afterwards:
test $(grep -c ERROR jmeter.log) -eq 0
If you then run echo $?, you will see it returns 1 if errors were found and 0 if not.
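A sketch of how this fits into a wrapper script (it assumes jmeter.log is written to the current directory; adjust the path for your setup):

#!/bin/bash
jmeter -n -t someFile.jmx
# Fail the script if the JMeter log contains any ERROR lines.
if ! test "$(grep -c ERROR jmeter.log)" -eq 0; then
    echo "JMeter reported errors" >&2
    exit 1
fi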
I used the following approach (taken from here) with JMeter 4.0, in a JSR223 Assertion with the scripting language set to Groovy.
But watch out where you call System.exit.
First, with a JSR223 Assertion, we collect all samplers that have failed tests and record them in a user-defined variable:
String expectedCode = "200";
if (!expectedCode.equals(prev.getResponseCode())) {
    // vars.get returns null until the variable is first set, so default to "".
    String currentValue = vars.get("failedTests") ?: "";
    currentValue = currentValue + "Expected <response code> [" + expectedCode + "] but got [" + prev.getResponseCode() + "] in sampler: '" + sampler.name + "'\n";
    vars.put("failedTests", currentValue);
}
Then, at the very end of the test, we check whether that variable contains anything. If it does, we fail the whole suite and log accordingly:
// Default to "" in case no sampler ever failed and the variable was never set.
String testResults = vars.get("failedTests") ?: "";
if (testResults.length() > 0) {
    println testResults;
    log.info(testResults);
    println "Exiting now with: System.exit(1)";
    System.exit(1);
} else {
    println "All tests passed!";
    log.info("All tests passed!");
}
I cannot reproduce your issue; are you sure your assertion really fires? The code is more or less OK, but it also matters where you place it.
#./jmeter -n -t test.jmx
Creating summariser <summary>
Created the tree successfully using test.jmx
Starting the test @ Thu Jun 21 07:34:49 CEST 2018 (1529559289011)
Waiting for possible Shutdown/StopTestNow/Heapdump message on port 4445
#echo $?
255
You can use the Jenkins Performance Plugin, which can automatically mark a build as unstable or failed when certain thresholds are met or exceeded.
You can use the Taurus tool as a wrapper for the JMeter test. It has a powerful and flexible pass/fail criteria subsystem where you can define your assertion logic; if there are failures, Taurus returns a non-zero exit code to the parent shell.
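For illustration, a minimal Taurus config sketch wrapping the existing .jmx file with a pass/fail criterion (a sketch based on the passfail module's documented syntax; the exact criterion string and threshold are assumptions to adapt):

execution:
- executor: jmeter
  scenario:
    script: someFile.jmx

reporting:
- module: passfail
  criteria:
  - failures>0%, continue as failed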
I'm writing a script in which I verify that some elements are present and others are not.
For the ones which are present I'm using:
verify do
  assert_include(
    @driver.find_element(
      :css,
      "div.launchpadMain > section:nth-of-type(4) > div.launchpadCategoryBody > a:nth-of-type(2)"
    ).text,
    "shiftplan"
  )
end
An example for an element which is not present... I'm trying:
verify do
  element_not_present(
    @driver.find_element(:css, "button.btn.btn-icon.pull-right > i")
  )
end
This is not working, though. Which command can I use to verify whether the element is present or not? In this case the element is a trash icon.
Switch to the collection finder find_elements (plural), which returns an empty array instead of raising an exception when nothing matches, then assert that the result is empty.
verify do
  assert_empty(
    @driver.find_elements(:css, "button.btn.btn-icon.pull-right > i")
  )
end
find_elements: http://www.rubydoc.info/gems/selenium-webdriver/0.0.28/Selenium%2FWebDriver%2FFind:find_elements
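Alternatively, you can assert on the count directly (a sketch using the asker's locator; it reads the same way):

verify do
  assert_equal(
    0,
    @driver.find_elements(:css, "button.btn.btn-icon.pull-right > i").size
  )
end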
Below are a few lines from my test case. The first assertion fails, but why? The second does not.
result = Parser.parse_subject(@@lexicon.scan("kill princess"), Pair.new(:noun, "bear"))
assert_equal(Parser.parse_subject(@@lexicon.scan("kill princess"), Pair.new(:noun, "bear")),
             Parser.parse_subject(@@lexicon.scan("kill princess"), Pair.new(:noun, "bear")))
assert_equal(result, result)
Here is the actual error:
Run options:
# Running tests:
.F.
Finished tests in 0.004000s, 750.0000 tests/s, 1750.0000 assertions/s.
1) Failure:
test_parse_subject(ParserTests) [test_fournineqa.rb:30]:
<#<Sentence:0x21ad958 @object="princess", @subject="bear", @verb="kill">> expected but was
<#<Sentence:0x21acda0 @object="princess", @subject="bear", @verb="kill">>.
3 tests, 7 assertions, 1 failures, 0 errors, 0 skips
It looks like you have defined a class Sentence but provided no way to compare two Sentence instances, so assert_equal falls back to comparing object identity and discovers that they are not the same instance.
A simple fix would be something like:
class Sentence
  def ==(sentence)
    @subject == sentence.subject and
      @verb == sentence.verb and
      @object == sentence.object
  end
end
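Note that the comparison reads the other object's attributes through accessor methods, so the class has to expose them if it doesn't already (a sketch):

class Sentence
  attr_reader :subject, :verb, :object
end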
The first assertion compares two distinct objects with the same content, whereas the second compares an object with itself. Apparently, equality in this context defaults to object identity. (Check the implementation.)
I tried this, but it doesn't seem to work:
subtest 'catalyst scripts that should be executable' => sub {
    plan({ skip_all => 'skip failing executable tests on windows' }) if $^O eq 'MSWin32';
    my $should_exec = [ @{ $dzpcs->scripts } ];
    foreach ( @{ $should_exec } ) {
        ok( -x $_, "$_" . ' is executable' );
    }
};
Here's what I got in my cpants report.
plan() doesn't understand HASH(0x286f4cc) at t/02-MintingProfileCatalyst.t line 46.
# Child (catalyst scripts that should be executable) exited without calling finalize()
# Failed test 'catalyst scripts that should be executable'
# at C:/strawberry/perl/lib/Test/Builder.pm line 252.
# Tests were run but no plan was declared and done_testing() was not seen.
So I guess it doesn't take a hashref; I'm not really sure what it takes, then. What's the cleanest way to make this work? (P.S. I can't test Win32; I only have my Linux box.)
plan takes two parameters, not a hashref:
plan( skip_all => 'skip failing executable tests on windows' ) if $^O eq 'MSWin32';
Not everything uses Moose. ;-)
Note: for testing purposes, you could change eq to ne, so it will skip the tests on your Linux box. Just remember to change it back afterwards.
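Putting it together, the asker's subtest with the corrected call (a sketch; $dzpcs is the asker's own object):

subtest 'catalyst scripts that should be executable' => sub {
    plan( skip_all => 'skip failing executable tests on windows' ) if $^O eq 'MSWin32';

    my $should_exec = [ @{ $dzpcs->scripts } ];
    foreach ( @{ $should_exec } ) {
        ok( -x $_, "$_ is executable" );
    }
};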