Cucumber-cpp step definition runner exits immediately - boost

Based on the instructions in the cucumber-cpp GitHub repo and the cucumber-cpp step definition quick-start guide, I created my cucumber step definition files. The features and their step_definitions files are under the features/ folder, and the C++ code is built with the cucumber-cpp headers and linked against libcucumber-cpp.a as instructed.
A cucumber step definition runner should stay running as a separate process, and the cucumber command should execute while the runner is running. Indeed, the examples in the cucumber-cpp repository behave that way, but when I create my own step definitions, with GoogleTest or Boost.Test, they exit immediately, without waiting for calls from cucumber.
Onats-MacBook-Pro:bin onatbas$ ./tests/AdditionTest_TESTTARGET
Running main() from gtest_main.cc
[==========] Running 0 tests from 0 test cases.
[==========] 0 tests from 0 test cases ran. (0 ms total)
[ PASSED ] 0 tests.
Onats-MacBook-Pro:bin onatbas$
Instead of exiting immediately, it should print nothing and wait for cucumber calls. I copy-pasted the example code from cucumber-cpp into my project and it, too, exits immediately. So even though there is no source code difference between cucumber-cpp's examples and mine, they behave differently.
I suspected the CMake build scripts might be linking against different libraries, but the linkage process is exactly the same too.
Does anybody have any idea why this might be happening?
Here's the repository with minimum code that reproduces the error I have. https://github.com/onatbas/CucumberCppTest
The complete trace is in the README.
The cucumber files are under features/, and there's only one feature, identical to what's here.
The runner executable is defined in tests/CMakeLists.txt
For quick reference: Here's the step-definition file
AdditionTest.cxx
#include <boost/test/unit_test.hpp>
#include <cucumber-cpp/defs.hpp>
#include <CucumberApp.hxx>

using cucumber::ScenarioScope;

struct CalcCtx {
    Calculator calc;
    double result;
};

GIVEN("^I have entered (\\d+) into the calculator$") {
    REGEX_PARAM(double, n);
    ScenarioScope<CalcCtx> context;
    context->calc.push(n);
}

WHEN("^I press add") {
    ScenarioScope<CalcCtx> context;
    context->result = context->calc.add();
}

WHEN("^I press divide") {
    ScenarioScope<CalcCtx> context;
    context->result = context->calc.divide();
}

THEN("^the result should be (.*) on the screen$") {
    REGEX_PARAM(double, expected);
    ScenarioScope<CalcCtx> context;
    BOOST_CHECK_EQUAL(expected, context->result);
}
and here's the tests/CMakeLists.txt file where the executable is added.
cmake_minimum_required(VERSION 3.1)

find_package(Threads)

set(CUCUMBERTEST_TEST_DEPENDENCIES cucumberTest
    ${CMAKE_THREAD_LIBS_INIT}
    ${GTEST_BOTH_LIBRARIES}
    ${GMOCK_BOTH_LIBRARIES}
    ${CMAKE_THREAD_LIBS_INIT}
    ${Boost_LIBRARIES}
    ${CUCUMBER_BINARIES}
)

macro(ADD_NEW_CUCUMBER_TEST TEST_SOURCE FOLDER_NAME)
    set(TARGET_NAME ${TEST_SOURCE}_TESTTARGET)
    add_executable(${TARGET_NAME} ${CMAKE_SOURCE_DIR}/features/step_definitions/${TEST_SOURCE})
    target_link_libraries(${TARGET_NAME} ${CUCUMBERTEST_TEST_DEPENDENCIES})
    add_test(NAME ${TEST_SOURCE} COMMAND ${TARGET_NAME})
    set_property(TARGET ${TARGET_NAME} PROPERTY FOLDER ${FOLDER_NAME})
endmacro()

ADD_NEW_CUCUMBER_TEST(AdditionTest "cucumberTest_tests")

Your example outputs
Running main() from gtest_main.cc
That main() runs the test framework's default behaviour instead of Cucumber-CPP's. The main() that you want (src/main.cpp) is included as part of the compiled cucumber-cpp library.
Try moving ${CUCUMBER_BINARIES} to the front of CUCUMBERTEST_TEST_DEPENDENCIES, before all the others, or link against testing libraries that do not contain a main() (e.g. GoogleTest ships two libraries: one with and one without a main()).
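For instance, a reordered dependency list might look like this (a sketch against your tests/CMakeLists.txt, not tested; ${GTEST_LIBRARIES} is the standard main-less FindGTest variable, and a main-less ${GMOCK_LIBRARIES} counterpart to your ${GMOCK_BOTH_LIBRARIES} is assumed to exist):
set(CUCUMBERTEST_TEST_DEPENDENCIES
    ${CUCUMBER_BINARIES}        # first, so cucumber-cpp's main() wins
    cucumberTest
    ${GTEST_LIBRARIES}          # gtest without gtest_main
    ${GMOCK_LIBRARIES}          # assumed: gmock without gmock_main
    ${Boost_LIBRARIES}
    ${CMAKE_THREAD_LIBS_INIT}
)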

Related

Ruby test-unit not showing summary after tests

Normally Ruby test-unit will display a summary of tests run after they are finished, something like this:
Finished in 0.117158443 seconds.
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
10 tests, 10 assertions, 0 failures, 0 errors, 0 pendings, 0 omissions, 0 notifications
100% passed
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
298.74 tests/s, 0.00 assertions/s
This was working, but now something has changed: when the unit tests are run, it shows the dots but then stops. I tried re-organizing some test files into different directories and made absolutely sure to change the file paths in the test runner. Also, the dots do not match the number of tests/assertions.
Loaded suite test
Started
.................$prompt> // <<-- does not put newline here.
I notice that if I run the test runner from another directory, the summary will show, but it causes errors with the test dependencies. I should be able to run the test runner from the same directory. This is an example of the test runner I am using: https://test-unit.github.io/test-unit/en/file.how-to.html. What are the reasons that this would not display at the end?
It seems like it could be an issue with not having the test-unit.yml file in the directory from which you run the script.
See here in the code or in the same section in that document you posted.
See how it's configured here, for example:
runner: console
console_options:
  color_scheme: inverted
color_schemes:
  inverted:
    success:
      name: red
      bold: true
    failure:
      name: green
      bold: true
This part of the code documentation really stuck out:
# ## Test Runners
#
# So, now you have this great test class, but you still
# need a way to run it and view any failures that occur
# during the run. There are some test runners: console test
# runner, GTK+ test runner and so on. The console test
# runner is automatically invoked for you if you require
# 'test/unit' and simply run the file. To use another
# runner simply set default test runner ID to
# Test::Unit::AutoRunner:
Maybe you need to have that runner specified in your YML file?
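If the YAML route doesn't take effect, the runner can also be selected in code. A minimal sketch, assuming the AutoRunner API that the quoted doc comment describes:
require 'test/unit'

# Explicitly select the console runner; other registered
# runner IDs can be substituted here.
Test::Unit::AutoRunner.default_runner = "console"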
Without seeing how you are calling your script and how your directories are organized, it is hard to tell what is causing the issue, but I suspect it starts with test-unit not reading that YAML file.
If all else fails and you feel compelled to switch to a more widely-used library, let me recommend two great unit testing libraries for Ruby:
minitest
RSpec
Edit: You could also revert your directories to the same layout as before and pin the last test-unit version that worked for you in your Gemfile, e.g. gem 'test-unit', '3.4.0'.

Webdriver io running in Parallel - How to ensure one test spec runs before another one - execution order

I have inherited a WebdriverIO / Mocha test framework. Until now the tests have been run one at a time. There was one test spec that had to run before the other; this was handled through the file naming convention:
aFirstTest.js
xLastTest.js
So when the whole suite was run, this ensured that aFirstTest.js ran before xLastTest.js.
I now want to run the tests in parallel mode.
How can I ensure that aFirstTest.js runs before xLastTest.js?
This post might give you some ideas.
Otherwise, you'd need to present the specs to WebdriverIO as one. An easy way to do this would be wrapping them in another file.
wrapper.spec.js:
// Requiring the specs in this order runs them in this order,
// inside a single runner instance.
const first = require('./aFirstTest')
const last = require('./xLastTest')
And in your config:
suites: {
  firstLast: [
    './specs/wrapper.spec.js'
  ]
}
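The wrapped suite can then be selected on the command line; a sketch assuming the standard wdio test runner CLI and a wdio.conf.js at the project root:
# runs only the wrapper, which preserves the first/last order internally
wdio wdio.conf.js --suite firstLast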

Cannot use lib.Const (constant 16777216 of type lib.Version) as lib.Version

I've come across an odd error. I have a larger project that compiles fine with a typical go build. However, when I switch to TinyGo (v0.8.0), I get the above error from this code:
func main() {
    _ = lib.NewObject{
        Version: lib.Const,
    }
}
I changed the names to be less confusing, but the symbols are completely identical. lib.Const is a constant of type lib.Version, and neither is a pointer.
I understand this is a very specific question, in the sense that it's in the realm of TinyGo. This is more "for the record"... plus I even had to create the "tinygo" tag because this question is so specific. But to add further detail:
The project compiled fine before the above code was added.
The exact build command is tinygo build -target=wasm -o build/out.wasm src/main-wasm.go
This is a bug with the compiler: https://github.com/tinygo-org/tinygo/issues/726
It stems from importing the same package twice under different names. In this case, it was:
// file1:
import "./lib"
// file2:
import "../lib"
The above made 2 instances of the package "lib". This is normally okay to do when working with the normal Go compiler. But TinyGo does not have mechanisms in place to deal with this properly.
The recommended fix is to append the project to $GOPATH so that relative import paths are not needed:
// file1:
import "lib"
// file2:
import "lib"

Grails test-app updating function being tested and test print out problems

I am running Grails 2.3.3 in GGTS.
I am successfully running a single unit test for a service function within the Spring GGTS.
I am hoping to use this unit test to develop the particular function; such an approach will really speed up my development going forward.
This means I need to make changes to the service function being tested and then retest, over and over again (no doubt a sad reflection on my coding skills!). The problem is that when I make a change to the logic, or to any log.debug output, it does not come through in the test. In other words, the test continues to run against the original service function, not the updated one.
The only way I have found to force it to use the updated function is to restart GGTS!
Is there a command I can use in GGTS to force a test on the most recent version of the function I am testing?
Here are the commands I am using within GGTS:
test-app unit: UtilsService
I do run a clean after a function update without any success:
test-app -clean
I am also struggling to get additional output from within the test function - introducing 'println' or 'log.debug' statements makes the test fail.
It would be useful to know of a good link to documentation about the test syntax - I have looked at section 12 of the Grails documentation, about testing in general.
Here is the test file:
package homevu1

import grails.test.mixin.TestFor
import spock.lang.Specification

/**
 * See the API for {@link grails.test.mixin.services.ServiceUnitTestMixin} for usage instructions
 */
@TestFor(UtilsService)
class UtilsServiceSpec extends Specification {
    // to test utilSumTimes, for example, use the command:
    // test-app utilSumTimes
    // test-app HotelStay

    def setup() {
    }

    def cleanup() {
    }

    void "test something"() {
        when:
        def currSec = service.utilSumTimeSecs(27, 1, false)
        //println "currSec" , currSec

        then:
        //println "currSec" , currSec
        assert currSec == "26"
    }
}
If I uncomment either of the println lines, their output is not displayed and the test fails.
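As an aside: Groovy's println takes a single argument, so the two-argument form in the commented-out lines would itself make the test fail with a MissingMethodException once uncommented. A sketch of the usual interpolated form:
println "currSec: $currSec"  // single argument, GString interpolation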
Any suggestions welcome.
-mike
I've managed to get this working now by running Grails from a command prompt (in MS Windows).
In the command prompt I moved to the root folder/directory of the grails project - in my case:
cd C:\grails\workspace\rel_3.1.0\HomeVu
Then I type grails to start a grails command line session.
The unit test command I used was:
test-app -unit UtilsService -echoOut -echoErr
That said, I am still unable to successfully put any print commands in the test file - but I can use assert to track down any problems.
Also, output from the last log.debug line in the Grails code of the service function fails to appear. Perhaps there is some output-buffering issue with MS Windows here.
At least I can now do some rapid function development, by making changes to the service function code and instantly testing it against a set of known requirement conditions.
Hope this helps others.
-mike

How to instrument gcc?

I have to instrument gcc for some purposes. The goal is to be able to track which GCC functions are called during a particular compile. Unfortunately, I'm not really familiar with the architecture of GCC, so I need a little help. I tried the following steps:
1) Hacking gcc/Makefile.in and adding "-finstrument-functions" flag to T_CFLAGS.
2) I have an already implemented and tested version of the start_test and end_test functions. They are called from gcc/main.c, before and after the toplev_main() call. The containing file is linked into gcc (the object is added to OBJS-common and the dependency is defined later in gcc/Makefile.in)
3) Downloading prerequisites with contrib/download_prerequisites.
4) Executing the configuration from a clean build directory (on the same level with the source dir): ./../gcc-4.6.2/configure --prefix="/opt/gcc-4.6.2/" --enable-languages="c,c++"
5) Starting the build with "make all"
This way I ran out of memory, although I had 28 GB.
Next I tried removing the T_CFLAGS setting from the Makefile and passing -finstrument-functions to the make command: make CFLAGS="-finstrument-functions". The build succeeded this way, but when I tried to compile something, it produced empty output files. (Theoretically end_test should have written its results to a given file.)
What am I doing wrong?
Thanks in advance!
Unless you specifically exclude it from being instrumented, main itself is subject to instrumentation, so placing calls to your start_test and end_test inside main is not how you want to do it. The 'correct' way to ensure that the file is opened and closed at the right times is to define a 'constructor' and 'destructor', and GCC automatically generates calls to them before and after main:
#include <stdio.h>
#include <stdlib.h>

void start_test (void)
    __attribute__ ((no_instrument_function, constructor));
void end_test (void)
    __attribute__ ((no_instrument_function, destructor));

/* FILE to write profiling information. */
static FILE *profiler_out;

void start_test (void)
{
    profiler_out = fopen ("profiler.out", "w");
    if (profiler_out == NULL)
        exit (-1);
}

void end_test (void)
{
    fclose (profiler_out);
}
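For reference, -finstrument-functions then expects these two hooks, whose signatures are fixed by the GCC manual; a minimal sketch that logs raw addresses to the file opened above (resolving them to names is left to a tool such as addr2line):
/* Called at every entry/exit of an instrumented function. */
void __cyg_profile_func_enter (void *this_fn, void *call_site)
    __attribute__ ((no_instrument_function));
void __cyg_profile_func_exit (void *this_fn, void *call_site)
    __attribute__ ((no_instrument_function));

void __cyg_profile_func_enter (void *this_fn, void *call_site)
{
    if (profiler_out != NULL)
        fprintf (profiler_out, "enter %p (called from %p)\n", this_fn, call_site);
}

void __cyg_profile_func_exit (void *this_fn, void *call_site)
{
    if (profiler_out != NULL)
        fprintf (profiler_out, "exit  %p\n", this_fn);
}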
Footnotes:
Read more about constructor, destructor and no_instrument_function attributes here. They are function attributes that GCC understands.
Read this excellent guide to instrumentation, on the IBM website.
