How to Summarize Lua Unit Test Results? - bash

I have a script that runs my Lua unit tests. Each test prints its own summary. However, I want to count the failures and see which tests failed after all of the tests have run.
The script loops through the test like so:
# Loop over all the UTs and run them
for utLuaScript in ut*.lua ; do
    echo "LAUNCH TEST: ${utLuaScript}"
    lua ./${utLuaScript} -v
    echo
done
What is the solution here? Should the number of successes and failures be saved to a file, and then, once outside of the loop, should the file be read back to summarize all of the tests? Can the script return a variable? What is best practice?

The usual way this is done is with a test runner script. In this approach, the unit tests are not executable by themselves (i.e. lua ut_foo.lua doesn't do anything), but must be run through the test runner. For example, lua test_runner.lua might run all the tests, and lua test_runner.lua ut_foo.lua might run just the "ut_foo" tests. The test runner script takes care of formatting and displaying the test results.
There are quite a few test runners already available for Lua; see the overview on the Lua-users wiki. Perhaps one of those will meet your needs, or can be adapted to do so.
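If you would rather keep the plain loop, the per-script exit status is enough to build a summary. A minimal sketch, assuming each ut*.lua exits non-zero when any of its tests fail (with LuaUnit that means ending each script with os.exit(lu.LuaUnit.run())); run_all_tests is just an illustrative name:

```shell
# Count passing/failing test scripts by exit status and print a
# summary after the loop; returns non-zero if anything failed.
run_all_tests() {
    pass=0
    fail=0
    failed_tests=""
    for utLuaScript in ut*.lua ; do
        echo "LAUNCH TEST: ${utLuaScript}"
        if lua "./${utLuaScript}" -v ; then
            pass=$((pass + 1))
        else
            fail=$((fail + 1))
            failed_tests="${failed_tests} ${utLuaScript}"
        fi
        echo
    done
    echo "SUMMARY: ${pass} passed, ${fail} failed"
    if [ -n "${failed_tests}" ]; then
        echo "FAILED:${failed_tests}"
        return 1
    fi
}
```

Because the function's exit status reflects the overall result, it also composes with make or CI: `run_all_tests || exit 1`.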

Related

How to run specific test cases in each describe function in mocha

I have many test spec files with describe() and it() blocks. I need to run only some cases (it() blocks), say the sanity cases of each spec file. How can I run all the sanity cases of every describe() across all test spec files?
I am using WebdriverIO and JavaScript.
There are two ways of doing it.
Create separate files for each type of test and run them as per your needs.
You can use the grep flag of mocha to tell it which test cases to pick.
I would prefer the second one, as it is more extensible. Here is what you have to do:
Update the titles of the it blocks to include a tag, e.g. #sanity, #regression, etc.
At the time of running tests from the command line, pass the grep flag:
mocha -g "#sanity"
Mocha will check each test title for the text passed on the command line and execute only the matching ones.
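Since --grep is interpreted as a regular expression, tags can also be combined. A small sketch; run_tagged_tests is a hypothetical wrapper name, and the #tag prefixes are only a title convention, not mocha syntax:

```shell
# Hypothetical wrapper: translate a tag into a mocha --grep filter.
# --grep/-g matches against the full test title and accepts a regex.
run_tagged_tests() {
    tag="$1"; shift
    mocha -g "#${tag}" "$@"
}

# Usage:
#   run_tagged_tests sanity               (only #sanity tests)
#   run_tagged_tests "sanity|#regression" (regex alternation covers both tags)
```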

go test ./package dumps Stdout of successful tests, not just the failed test

While writing a CLI tool that outputs to stdout, I noticed that if one test fails, then whatever the other (successful) tests had also written to stdout gets dumped out as well, which is misleading.
Is this to be expected, or should I set os.Stdout to /dev/null while testing? but then how would the testing package find anything to print out?
The test package doesn't interfere with the standard output of code under test, whether it passes or fails. If it's important for you not to see this output, you can capture stdout while executing your specific test and then decide what to do with it based on the test outcome.
Try using -failfast. For example:
$ go test -failfast -coverprofile=coverage.out -covermode=count <pkg path>

Have make fail if unit tests fail

I have a makefile for compiling a static library. This makefile has a rule for compiling a unit test suite associated with the static library. Once the test suite is compiled, a python script is invoked to run the tests and log the results. It looks like this:
unit:
	$(MAKE) -C cXbase/unit
	python $(TESTS_RUNNER) $(UNIT_TESTS_EXEC) $(UNIT_TESTS_LOG)
I use Python to make the test invocation portable. Right now, everything works fine: the tests compile, and the test runner is called and logs everything properly on Linux and Windows. However, when a test fails, I would like to stop the whole make process and return an error. More precisely, I would like make all and make unit to fail when one or more unit tests fail.
If a unit test fails, the Python script returns a specific exit code. I would like to be able to capture it in a portable way and have make fail if that exit code is captured.
Would you have any recommendations on how to do this? I have found nothing convincing or portable elsewhere.
Thanks
It turns out the solution was way simpler than I imagined: the Python script's exit code is reflected directly as the command's exit status in make. In other words, if the script fails (exit code other than 0), make sees this as a command error and stops.
I had a bug in my Python script's exit-code handling on test failure which hid this from me. Now it is fixed and works perfectly.
I found out about this here: Handling exit code returned by python in shell script
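The behavior is easy to see without any Python at all: each recipe line runs in its own shell, and a non-zero exit status makes make stop with an error. A throwaway sketch (Makefile.demo is just a scratch file name for this demonstration):

```shell
# Write a tiny makefile whose second recipe line fails; printf emits
# the tab that make requires before each recipe line.
printf 'unit:\n\t@echo running tests\n\t@exit 2\n\t@echo never reached\n' > Makefile.demo

# make stops at the failing line and itself exits non-zero,
# so "never reached" is not printed.
make -f Makefile.demo unit || echo "make stopped with status $?"
```

Prefixing a recipe line with `-` would tell make to ignore that line's failure instead, which is exactly what you do not want here.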

Executing particular tests under ruby test framework

I have a set of test cases under the Ruby test framework (Ruby 1.8.7).
Let's say I have a Ruby file named check.rb which contains different tests, such as
test_a_check, test_b_check and test_c_check.
When I run ruby check.rb, all the test cases are executed.
My question is:
I want to pass a parameter to the script when running it, say ruby check.rb --sunset, and based on the --sunset parameter I want the script to execute only test_a_check and test_b_check, not test_c_check.
By default, running the script should execute all the tests, but when --sunset is passed, only two of the three tests should be executed.
Is there a way I can achieve this?
If you are using minitest you can specify the method via
ruby check.rb --name test_method_name
If it's a common testing framework, then look into its manual, but
if it's your personal testing script, then just look at ARGV:
test_a_check
test_b_check
test_c_check if ARGV[0] != '--sunset'
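If the tests do live in minitest, the same effect can be had from a thin shell wrapper, since --name also accepts a /regexp/ filter. A sketch; check.rb and the test names come from the question, while run_check itself is hypothetical:

```shell
# Hypothetical wrapper: with --sunset, pass minitest a --name regex
# that matches only test_a_check and test_b_check.
run_check() {
    if [ "$1" = "--sunset" ]; then
        ruby check.rb --name "/test_[ab]_check/"
    else
        ruby check.rb
    fi
}
```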

Looking for way to automate testing kshell app

I inherited a shell-script application that is a combination of kshell scripts, awk, and java programs. I have written JUnit tests for the java pieces.
Is there a good way to do something similar for the kshell scripts and awk programs?
I have considered using JUnit and System.exec() to call the scripts, but it seems like there should be a better way.
I have found shUnit2 and will try that.
Update with the results of trying out shUnit
shUnit2 works as expected. Script files are written with the test functions defined, followed by a line that sources shunit2.
Example:
#!/bin/sh
testFileCreated()
{
    TESTFILE=/tmp/testfile.txt
    # some code that creates the $TESTFILE, e.g.:
    echo "test data" > "${TESTFILE}"
    assertTrue 'Test file missing' "[ -s '${TESTFILE}' ]"
}
# load shunit2
. /path/to/shUnit/shunit2-2.1.5/src/shell/shunit2
Result
Ran 1 test.
OK
The 'OK' would be replaced with 'FAILED' if the file did not exist.
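Because a shunit2 suite exits non-zero when any of its tests fail (in recent versions), several suites can be driven from one loop that reports which files failed. A sketch; run_suites and the suite file names are illustrative, not part of shUnit2 itself:

```shell
# Run each suite file and remember the failures; return non-zero
# if any suite failed so callers (e.g. make or CI) can react.
run_suites() {
    status=0
    for suite in "$@" ; do
        sh "$suite" || { echo "FAILED suite: $suite"; status=1; }
    done
    return $status
}

# Usage: run_suites test_*.sh
```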
You might want to try Expect. It was designed for automating interactive programs. Of course Expect was written on top of Tcl, which is an abominable scripting language, so there are interfaces for Python (Pexpect) and perhaps other languages that are more programmer friendly. But there is lots of documentation lying around for Tcl/Expect that is still useful.
This is not a direct answer to your question, but you may consider using a simple Makefile to run bash scripts with different parameters.
For example, write something like this (note that each command line under a rule must be indented with a tab):
cat > Makefile <<'EOF'
test_all: test1 test2 test3
test1:
	script1 -parameter1 -parameter2
test2: $(addprefix test2file_, $(TESTFILES))
test2file_%:
	script2 -filename $*
test3:
	grep ... | awk ... | sed ...
EOF
By calling make test_all you will execute all the scripts automatically, and the syntax is not so difficult to learn - you just have to define a rule name (test1, test_all, ...) and the commands associated with it.
