I am running Grails 2.3.3 in GGTS.
I am successfully running a single unit test for a service function within GGTS.
I am hoping to be able to use this unit test to develop the particular function - such an approach will really speed up my development going forward.
This means I need to make changes to the service function being tested and then retest, over and over again (no doubt a sad reflection on my coding skills!). The problem is that when I make a change to the logic, or to any log.debug output, the change does not come through in the test. In other words, the test continues to run against the original service function and not the updated one.
The only way I have found to force it to use the updated function is to restart GGTS!
Is there a command I can use in GGTS to force the test to run against the most recent version of the function being tested?
Here are the commands I am using within GGTS:
test-app unit: UtilsService
I do run a clean after a function update, without any success:
test-app -clean
I am also struggling to get additional output from within the test function - introducing 'println' or 'log.debug' statements results in a failure of the test.
It would be useful to know of a good link to documentation about the test syntax - I have looked at section 12 of the Grails guide, which covers testing in general.
Here is the test file:
package homevu1

import grails.test.mixin.TestFor
import spock.lang.Specification

/**
 * See the API for {@link grails.test.mixin.services.ServiceUnitTestMixin} for usage instructions
 */
@TestFor(UtilsService)
class UtilsServiceSpec extends Specification {

    // to test utilSumTimes for example use the command :
    // test-app utilSumTimes
    // test-app HotelStay

    def setup() {
    }

    def cleanup() {
    }

    void "test something"() {
        when:
        def currSec = service.utilSumTimeSecs( 27, 1, false)
        //println "currSec" , currSec

        then:
        //println "currSec" , currSec
        assert currSec == "26"
    }
}
If I uncomment either of the println lines, no output is displayed and the test fails.
Welcome any suggestions.
-mike
I've now got this working by running Grails from a command prompt (in MS Windows).
In the command prompt I moved to the root folder/directory of the grails project - in my case:
cd C:\grails\workspace\rel_3.1.0\HomeVu
Then I type grails to start a Grails command-line session.
The unit test command I used is:
test-app -unit UtilsService -echoOut -echoErr
That said, I am still unable to put any print commands in the test file successfully - but I can use the assert to pinpoint any problems.
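One possibility I have not fully verified: Groovy's println takes a single argument, so the commented-out lines in the test above fail when uncommented (there is no two-argument println), which would match the failures described. A minimal sketch of a version that should run, with the println moved into the when: block:

void "test something"() {
    when:
    def currSec = service.utilSumTimeSecs(27, 1, false)
    println "currSec: ${currSec}"   // single-argument println with an interpolated GString

    then:
    assert currSec == "26"
}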
Also, output from the last log.debug line of the service function's Grails code fails to appear. Perhaps there is some output buffering issue with MS Windows here.
At least I can now do some rapid function development, by making changes to the service function code and instantly testing it against a set of known requirement conditions.
Hope this helps others.
-mike
I have inherited a WebdriverIO / Mocha test framework. Until now the tests have been run one at a time. There was one test spec that had to be run before the others. This was handled in the file naming convention:
aFirstTest.js
xLastTest.js
So when the whole suite was run, this ensured that aFirstTest.js ran before xLastTest.js.
I now want to run the tests in parallel mode.
How can I ensure that aFirstTest.js runs before xLastTest.js?
This post might give you some ideas.
Otherwise, you'd need to present the specs to WebdriverIO as one. An easy way to do this would be wrapping them in another file.
wrapper.spec.js:
const first = require('./aFirstTest')
const last = require('./xLastTest')
And in your config:
suites: {
firstLast: [
'./specs/wrapper.spec.js'
]
}
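Assuming the standard WebdriverIO test runner, the wrapped pair can then be run on its own with something like:
wdio wdio.conf.js --suite firstLast
This works because require executes aFirstTest and xLastTest synchronously and in order inside a single spec file, so their describe blocks register sequentially and the ordering is preserved even when the rest of the suite runs in parallel.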
I'm still getting up to speed on Puppet and rspec and all that, but...
We've currently got a CI runner that tests our Puppet module code using a Docker container running on Linux. Well, we're now delving into using Windows-specific features of Puppet, and our tests are failing. So we're wondering if there's any way to have the tests get ignored if the platform running them is Linux?
For example, if we need to run our unit tests against our code that manages the local groups (https://puppet.com/docs/pe/2018.1/managing_windows_configurations.html#manage_local_groups), is there a way so that when we run it locally on our Windows Dev boxes, it works, but when it runs on our (Linux-based) CI runner, it skips that particular test?
Per request, here is an example of the code we're looking to use to manage a local group:
class my_repo::profile::windows::remote_desktop_users (
  Array $members = ['MyDomain\MyUserAccount', 'MyDomain\ARandomDomainGroup'],
) {
  group { 'Set local Remote Desktop Users memberships':
    ensure          => present,
    name            => 'Remote Desktop Users',
    members         => $members,
    auth_membership => false
  }
}
Note: We're using the Role-Profile pattern
The above code seems to work. It just bombs out when our unit tests run via our CI:
describe 'my_repo::profile::windows::remote_desktop_users' do
  on_supported_os.select { |_, f| f[:os]['family'] == 'windows' }.each do |os, os_facts|
    context "on #{os}" do
      let(:facts) { os_facts }
      it { is_expected.to compile }
    end
  end
end
Thanks
As I remarked in a comment, I was able to reproduce a test failure using the manifest and Spec code (now) presented in the question. If I understand correctly that the failure is observed only when running the unit tests and not when serving up catalogs for real, then it follows that the problem is in the test environment's configuration, which may or may not be on you.
But as for the actual question:
So we're wondering if there's any way to have the tests get ignored if the platform running them is Linux?
Sure you can. Rspec tests are written in Ruby, and you can use substantially all standard Ruby features in them, including control flow statements and mechanisms for executing system commands. Thus, as a temporary workaround, you can put your breaks-when-not-running-on-Windows tests into a conditional statement, like this:
describe 'my_repo::profile::windows::remote_desktop_users' do
  on_supported_os.select { |_, f| f[:os]['family'] == 'windows' }.each do |os, os_facts|
    if %x{facter os.family}.chomp == 'windows'
      context "on #{os}" do
        let(:facts) { os_facts }
        it { is_expected.to compile }
      end
    end
  end
end
Note in particular the use of a %x expression to execute a system command on the host where the test runs, and within it the use of facter to request the specific system fact that tells you whether the test is running on Windows. That resolves the issue for me. Note that this particular implementation requires facter to be installed on your CI machine.
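If installing facter on the CI machine is not practical, a rough alternative sketch (my own variation, not part of the workaround above) is to ask the Ruby runtime itself whether the host is Windows, for example via RubyGems' platform check:

describe 'my_repo::profile::windows::remote_desktop_users' do
  on_supported_os.select { |_, f| f[:os]['family'] == 'windows' }.each do |os, os_facts|
    # Gem.win_platform? is true when the tests are running on a Windows Ruby
    next unless Gem.win_platform?

    context "on #{os}" do
      let(:facts) { os_facts }
      it { is_expected.to compile }
    end
  end
end

Either way, the Windows-only examples are simply not generated when the host running rspec is Linux.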
Based on the instructions given in the cucumber-cpp GitHub repo and the cucumber-cpp step definition quick-start guide, I created my Cucumber step definition files. The features and their step_definition files are under the features/ folder, and the C++ code is built with the cucumber-cpp headers and linked against libcucumber-cpp.a as instructed.
A Cucumber step definition runner should stay running as a separate process, and the cucumber command should execute while the runner is running. Indeed, the examples in the cucumber-cpp repository execute like that, but when I create my own step definitions, with gtest or Boost.Test, they execute immediately, without waiting for calls from cucumber.
Onats-MacBook-Pro:bin onatbas$ ./tests/AdditionTest_TESTTARGET
Running main() from gtest_main.cc
[==========] Running 0 tests from 0 test cases.
[==========] 0 tests from 0 test cases ran. (0 ms total)
[ PASSED ] 0 tests.
Onats-MacBook-Pro:bin onatbas$
Instead of exiting immediately, it should say nothing and wait for cucumber calls. I copy-pasted the example code from cucumber-cpp into my project and it, too, exits immediately. So even though there is no source code difference between cucumber-cpp's examples and mine, they act differently.
I suspected the CMake build scripts might be linking with different libraries, but the linkage process is exactly the same too.
Does anybody have any idea on why this might be happening?
Here's the repository with minimum code that reproduces the error I have. https://github.com/onatbas/CucumberCppTest
The complete trace is in the README.
The Cucumber files are under features/, and there's only one feature, which is identical to what's here.
The runner executable is defined in tests/CMakeLists.txt
For quick reference: Here's the step-definition file
AdditionTest.cxx
#include <boost/test/unit_test.hpp>
#include <cucumber-cpp/defs.hpp>
#include <CucumberApp.hxx>
using cucumber::ScenarioScope;
struct CalcCtx {
    Calculator calc;
    double result;
};

GIVEN("^I have entered (\\d+) into the calculator$") {
    REGEX_PARAM(double, n);
    ScenarioScope<CalcCtx> context;
    context->calc.push(n);
}

WHEN("^I press add") {
    ScenarioScope<CalcCtx> context;
    context->result = context->calc.add();
}

WHEN("^I press divide") {
    ScenarioScope<CalcCtx> context;
    context->result = context->calc.divide();
}

THEN("^the result should be (.*) on the screen$") {
    REGEX_PARAM(double, expected);
    ScenarioScope<CalcCtx> context;
    BOOST_CHECK_EQUAL(expected, context->result);
}
and here's the tests/CMakeLists.txt file where the executable is added.
cmake_minimum_required(VERSION 3.1)
find_package(Threads)
set(CUCUMBERTEST_TEST_DEPENDENCIES cucumberTest
    ${CMAKE_THREAD_LIBS_INIT}
    ${GTEST_BOTH_LIBRARIES}
    ${GMOCK_BOTH_LIBRARIES}
    ${CMAKE_THREAD_LIBS_INIT}
    ${Boost_LIBRARIES}
    ${CUCUMBER_BINARIES}
)

macro(ADD_NEW_CUCUMBER_TEST TEST_SOURCE FOLDER_NAME)
    set(TARGET_NAME ${TEST_SOURCE}_TESTTARGET)
    add_executable(${TARGET_NAME} ${CMAKE_SOURCE_DIR}/features/step_definitions/${TEST_SOURCE})
    target_link_libraries(${TARGET_NAME} ${CUCUMBERTEST_TEST_DEPENDENCIES})
    add_test(NAME ${TEST_SOURCE} COMMAND ${TARGET_NAME})
    set_property(TARGET ${TARGET_NAME} PROPERTY FOLDER ${FOLDER_NAME})
endmacro()
ADD_NEW_CUCUMBER_TEST(AdditionTest "cucumberTest_tests")
Your example outputs
Running main() from gtest_main.cc
That main method will run the test runner's default behaviour instead of Cucumber-CPP's. The main method that you want (src/main.cpp) is included as part of the compiled cucumber-cpp library.
Try moving ${CUCUMBER_BINARIES} before all the others in CUCUMBERTEST_TEST_DEPENDENCIES, or link against testing libraries that do not contain a main method (e.g. GoogleTest ships with two libraries: one with and one without the main method).
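As a sketch against the tests/CMakeLists.txt above (keeping the question's variable names), the first suggestion would look like this; alternatively, swap ${GTEST_BOTH_LIBRARIES} for ${GTEST_LIBRARIES}, the FindGTest variable without gtest_main, so no competing main() is linked at all:

set(CUCUMBERTEST_TEST_DEPENDENCIES cucumberTest
    ${CUCUMBER_BINARIES}        # moved first so Cucumber-CPP's main() is resolved before gtest_main's
    ${CMAKE_THREAD_LIBS_INIT}
    ${GTEST_BOTH_LIBRARIES}
    ${GMOCK_BOTH_LIBRARIES}
    ${Boost_LIBRARIES}
)

With Cucumber-CPP's main() in control, the runner executable should sit and wait for the cucumber command (which talks to it over the wire protocol) instead of exiting immediately.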
I'm using Ruby 2.2. I need to run a unit test and get information about whether it succeeded or failed. I'm browsing through the docs of both test-unit and minitest (the suggested gems for unit testing in Ruby 2.2), but I can't seem to find a method that would return or store information about the test result.
All I need is information about whether the test failed or succeeded, and I need to access it from within Ruby. I imagine I would have to use a specific method to run the test - so far, I have only been able to run a single test by running the test file, not by invoking any method.
Maybe it's just my poor knowledge of Ruby, anyway I would appreciate any help.
Maybe you can run the tests using Ruby's ability to run shell commands and capture their output.
Here is an example for test-unit:
test_output = `ruby test.rb --runner console --verbose=progress`
failed_tests = test_output.chomp.split('').count('F')
passed_tests = test_output.chomp.split('').count('.')
puts "P: #{passed_tests}, F: #{failed_tests}"
We are using the --verbose=progress option so that we get minimal output. It will look something like this:
.F...F
We count number of F to figure out how many tests failed.
For the above test output, the sample program will print:
P: 4, F: 2
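If all you need is a single pass/fail signal, another lightweight option (assuming the test file is run as its own process, as above) is the child process's exit status, which test-unit sets to non-zero when any test fails:

test_output = `ruby test.rb --runner console --verbose=progress`
# $? holds the Process::Status of the backtick command
puts $?.success? ? "all tests passed" : "at least one test failed"

This avoids parsing the progress characters at all.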
Another option is to use passed? method:
https://ruby-doc.org/stdlib-2.1.1/libdoc/minitest/unit/rdoc/MiniTest/Unit/TestCase.html#method-i-passed-3F
Not sure if it's still available in the latest versions of Ruby, so please check that before using.
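A minimal sketch of that approach with minitest (assuming a recent minitest, where passed? is available on the test instance and is typically checked in teardown; the class and test names here are just illustrative):

require 'minitest/autorun'

class ExampleTest < Minitest::Test
  def test_addition
    assert_equal 4, 2 + 2
  end

  def teardown
    # passed? reports the outcome of the test method that just ran
    puts "#{name}: #{passed? ? 'passed' : 'failed'}"
  end
end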
I'm trying to use RSpec from within an existing Ruby runtime and run specs every time a file changes. This is because of JRuby and JVM startup time. To eliminate this cost on every run I'd like to start Ruby once, then only reload changed files and run the specs. I was using guard (with different extensions) and watchr, but all seem to suffer from the issue described below.
I nailed the issue down to RSpec itself. The problem is that when running RSpec via RSpec::Core::Runner.run several times, it works fine until a spec file is reloaded using load. Then RSpec starts running the specs twice.
I've created sample project showing this issue live: https://github.com/mostr/rspec_double_run_issue
Below is sample output:
ruby run_spec_in_loop.rb
Running spec from within ruby runtime
.
Finished in 0.00047 seconds
1 example, 0 failures
loading spec file via 'load' as if it was changed and we wanted changes to be picked up
Running spec from within ruby runtime
..
Finished in 0.001 seconds
2 examples, 0 failures
Is there any way to tell RSpec to clear its context between subsequent runs when run from within existing ruby runtime? I've also raised this as an issue #826 for RSpec Core project.
Summarizing the answer here in order to remove this question from the "Unanswered" filter...
Per RJHunter's observation, the explanation has been documented in the rspec-core project on GitHub here:
https://github.com/rspec/rspec-core/issues/826#issuecomment-15089030
For posterity (in case the above link dies), here are the details:
The RSpec runner is already calling load internally; your second load is what's causing the double run issue.
I quickly knocked up a script based off your example which reruns a single spec file, changes the specs to something else, then reruns them; it works correctly without the second load.
See: https://gist.github.com/JonRowe/5192007
The aforementioned Gist contains:
require 'rspec'
spec_file = 'spec/sample_spec.rb'
File.open(spec_file, 'w') { |file| file.write 'describe { specify { expect(true).to eq false } }' }
1.upto(5) do |i|
  puts "Running spec from within ruby runtime"
  ::RSpec::Core::Runner.run([spec_file], STDERR, STDOUT)
  # rewriting the spec file
  File.open(spec_file, 'w') { |file| file.write "describe { specify { expect(#{i}).to eq false } }" }
end
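For in-process reruns more generally, newer RSpec versions (rspec-core 3 and later, which postdate the original issue) provide RSpec.clear_examples to reset the loaded example groups between runs. A rough sketch, assuming RSpec 3 and a spec file path like the one in the gist:

require 'rspec/core'

3.times do
  # run returns 0 on success and non-zero on failure
  status = RSpec::Core::Runner.run(['spec/sample_spec.rb'], $stderr, $stdout)
  puts "exit status: #{status}"
  # clear_examples resets the example groups so the next in-process run starts clean
  RSpec.clear_examples
end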