Why does a failure on any line inside afterAll() or beforeAll() not fail the test or log anything? - jasmine

I have intentionally written failing code in an afterEach block and executed it, but the block runs silently and the tests are marked as passed. How can I capture failures in an afterAll() block?

Related

How to make a manual job always exit with success on GitLab CI?

On my GitLab CI I run the gem https://rubygems.org/gems/brakeman as a manual stage. When it finds any warnings or errors, it exits with code 1 at the end, after it has gone through all the code, and the job gets rendered as yellow on GitLab CI.
I want it to always exit with success - green. Then I'll examine its output myself for warnings and errors it found in my code.
How can I make it always return success and get rendered with the green colour?
You should be able to just append || true to your command for it to always succeed.
e.g.: brakeman || true
You will want to modify your Brakeman command to include the --no-exit-on-warn and --no-exit-on-error options. Otherwise, it will set a non-zero exit code if any warnings or recoverable errors are encountered.
I am assuming the exit code of 1 is not from Brakeman itself, as that would indicate an unhandled exception was raised, perhaps during the report generation.
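For reference, here is a minimal .gitlab-ci.yml sketch combining the two suggestions above (the job name and stage are assumptions, not taken from the question):

brakeman:
  stage: test
  when: manual
  script:
    # Keep the exit code at 0 via Brakeman's own flags...
    - brakeman --no-exit-on-warn --no-exit-on-error
    # ...or alternatively force success at the shell level:
    # - brakeman || true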

Is there a Ruby Cucumber test hook for at_start?

Is there a Ruby Cucumber test hook for at_start? I tried at_start and it didn't work.
I have something like this in support/hooks.rb and I want to print a single global message before any of the tests start:
Before do
  print '.'
end

at_exit do
  puts ''
  puts 'All Cucumber tests finished.'
end
It seems like if they have an at_exit hook, they should have a before-start hook as well, right?
There is some documentation for "global hooks" at https://github.com/cucumber/cucumber/wiki/Hooks
You don't need to wrap it in any special method such as Before or at_exit. You just execute the code at the root level in any file contained in the features/support directory, such as env.rb. To copy and paste the example they've given:
# these following lines are executed at the root scope,
# accomplishing the same thing that an "at_start" block might.
my_heavy_object = HeavyObject.new
my_heavy_object.do_it

# other hooks can be defined in the same file
at_exit do
  my_heavy_object.undo_it
end
They also give an example of how to write a Before block that gets executed only once. Basically, you have the block exit early if some global variable is defined; the first time the block runs, it defines that global variable, which prevents the body from running again. See the "Running a Before hook only once" section on the page I linked.
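A minimal sketch of that run-once pattern, assuming a global flag named $suite_prepared (the variable name and setup body are illustrative):

Before do
  next if $suite_prepared
  # One-time setup; this body only runs before the first scenario.
  puts 'Starting the Cucumber suite...'
  $suite_prepared = true
end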

TestKitchen, Serverspec and out-of-order command execution

Inside TestKitchen describe blocks I'm running a command, loading its output into a variable, then running multiple expect statements over that output to validate different parts of it. The end goal is using this as part of CI builds to do blackbox testing.
In this instance I'm calling JMeter (using it to run a remote agent to perform off-DUT tests), then running through the results it returns, checking each test (yeah yeah... it's a little nasty but it works a treat):
describe "Test Transparent Proxy (JMeter)" do
$jmeter_run = command("/usr/local/apache-jmeter-2.13/bin/jmeter -n -t /root/jmx/mytest.jmx -r -Jremote_hosts=192.168.7.252 -Gdut_ip=#$internal_ip -X -l /dev/stdout 2>&1").stdout
it 'test1' do
expect($jmeter_run).to match /text_to_match/
end
it 'test2' do
expect($jmeter_run).to match /more_text to match/
end
end
The tests themselves run fine, but I'm finding that multiple JMeter runs (different test sets) are being executed out of order relative to how they're defined in the test spec. I have other blocks that are being executed around the JMeter tests. Here is my flow:
block 1
block 2
block 3 (Jmeter1)
block 4
block 5 (Jmeter2)
What I'm getting though is this:
block 5
block 3
block 1
block 2
block 4
None of the documentation I've found gives me any clues as to how to avoid this. I don't want to put the command execution inside a should/expect chunk of its own, as I want/need to be able to tell whether an individual test has failed. I would also like to avoid running 50-odd individual JMeter tests (they're about 5 secs each, even with an average of 20 tests in each run).
Help? :D
Well I managed to resolve this issue myself.
After a lot of tinkering I ended up running the command inside a test:
it 'JMeter executed correctly' do
  $jmeter_run1 = command("/usr/local/apache-jmeter-2.13/bin/jmeter -n -t /root/jmx/mytest.jmx -r -Jremote_hosts=192.168.7.252 -Gdut_ip=#$internal_ip -X -l /dev/stdout 2>&1").stdout
  expect($jmeter_run1).not_to be_empty
end
Everything now runs nicely in order like it is supposed to and everything is happy.
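Another option (a sketch, not part of the original answer) is to run the command in a before(:all) hook, assuming the Serverspec command helper is available inside the hook; it defers execution until the group actually runs while keeping each check as its own example:

describe "Test Transparent Proxy (JMeter)" do
  before(:all) do
    # Runs once when this group is executed, not when the spec file is loaded.
    @jmeter_run = command("/usr/local/apache-jmeter-2.13/bin/jmeter -n -t /root/jmx/mytest.jmx -r -Jremote_hosts=192.168.7.252 -Gdut_ip=#$internal_ip -X -l /dev/stdout 2>&1").stdout
  end

  it 'test1' do
    expect(@jmeter_run).to match /text_to_match/
  end
end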

Print MSTest summary after command line execution

When running a large set of tests using MsTest from the command line, I can see each test executing and its outcome logged in the window like so:
Passed Some.NameSpace.Test1
Passed Some.NameSpace.Test2
And so on for thousands of tests. Once completed, MsTest will spit out a summary like this
Summary
---------
Test run failed
Passed 2000
Failed 1
------------
Total 2001
At this point I either have to start scrolling backwards in the window trying to find the needle in a haystack that represents my single failing test, or I can open the huge xml file that represents the result, and text-search for some keyword indicating a failed test.
Isn't there an easier way? Can I have MsTest report progress without dumping Passed test names to the console (still logging failed ones), or can I have a summary of just Failed tests at the end?
I think it's obvious what any command-line user wants to do: follow progress AND know the outcome at the end, without having to read XML or browse the cmd window history.
Answering my own question: A simple wrapper/parser script that calls MsTest.exe and parses/summarizes the output, either the stdout or the trx, is the only solution it seems.
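For example, a rough Ruby sketch of such a wrapper (the test container name and the exact format of the failure lines are assumptions based on the sample output above):

# Run MSTest, capture its console output, then reprint it with a failure summary.
output = `MSTest.exe /testcontainer:My.Tests.dll`
puts output

# Individual failed tests are assumed to print as "Failed Some.NameSpace.Test1";
# requiring a dot in the name skips the "Failed 1" line in the summary.
failed = output.lines.grep(/^Failed\s+\S+\.\S+/)
unless failed.empty?
  puts "\nFailed tests:"
  puts failed
end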
You could use TestContext.CurrentTestOutcome at the end of each test to determine whether the test failed, and then log all failed tests to a separate file.
[TestCleanup]
public void CleanUp()
{
    // Requires a public TestContext property on the test class,
    // e.g. public TestContext TestContext { get; set; }
    if (TestContext.CurrentTestOutcome == UnitTestOutcome.Failed)
    {
        TestContext.WriteLine("{0}.{1} ==> {2}", TestContext.FullyQualifiedTestClassName,
            TestContext.TestName, TestContext.CurrentTestOutcome);
        // Log the result to a file.
    }
}
I don't know if this could help you.

How to explicitly fail a task in ruby rake?

Let's say I have a rakefile like this:
file 'file1' => some_dependencies do
  sh 'external tool I do not have control over, which sometimes fail to create the file'
  ???
end

task :default => 'file1' do
  puts "everything's OK"
end
Now if I put nothing in place of ???, I get the OK message even if the external tool fails to generate the file. What is the proper way of informing rake that the 'file1' task has failed and that it should abort (hopefully presenting a meaningful message, like which task failed)? The only thing I can think of now is raising an exception there, but that just doesn't seem right.
P.S. The tool always returns 0 as its exit code.
Use the raise or fail method as you would for any other Ruby script (fail is an alias for raise). This method takes a string or exception as an argument which is used as the error message displayed at termination of the script. This will also cause the script to return the value 1 to the calling shell. It is documented here and other places.
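For example, since the tool always exits with 0, a sketch that checks for the file itself and raises (the existence check is an assumption about what counts as failure here):

file 'file1' => some_dependencies do
  sh 'external tool'  # always exits 0, even when it fails
  # Fail the task explicitly if the expected output never appeared.
  raise "the external tool did not create file1" unless File.exist?('file1')
end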
You can use abort("message") to gracefully fail a rake task.
It will print the message to stderr and exit with code 1.
Exit code 1 is a failure in Unix-like systems.
See Kernel#abort for details.
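The same check written with abort instead of raise; the message goes to stderr and the process exits with status 1:

file 'file1' => some_dependencies do
  sh 'external tool'  # always exits 0, even when it fails
  abort 'file1 was not generated' unless File.exist?('file1')
end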
