How do I make Bamboo understand phpspec tests? - continuous-integration

Bamboo gives us the ability to run phpunit tests, but, presumably because of its lower popularity, there is nothing for phpspec.
I suspect, however (from Googling around), that it must be possible!
https://revive.beccati.com/bamboo/browse/PHP-PHPSPEC-813/test
Has anyone successfully fed phpspec tests into Bamboo?
Thanks.

One option is to run the phpspec test as a Script Task. If the task returns a non-0 exit code, the build fails. phpspec obligingly returns a 0 exit code only when all your tests pass.
In order to do this you would need to ensure that phpspec is available to your Bamboo build. If you're using Composer, you can add it to the require-dev section of your composer.json file.
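For example, from the project root (phpspec/phpspec is the standard package name; pick a version constraint that suits your project):
composer require --dev phpspec/phpspec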
Then, in the Plan Configuration, under Default Job (or some other job), you can add a new task of type Script. This Script Task can then make a call to phpspec:
vendor/bin/phpspec run 1>&2
You may also want (as above) to redirect the output to stderr because Bamboo seems to suppress any output on stdout. This will then allow you to see the output of phpspec in your Bamboo log.

The answer was to run the tests with the JUnit formatter. Bamboo has built-in support for that format, and this made the tests run smoothly.
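As a minimal sketch (the output file name and location are arbitrary choices, not something Bamboo dictates), the Script Task then becomes:
vendor/bin/phpspec run --format=junit > build/phpspec-junit.xml
A JUnit result-parsing task pointed at build/phpspec-junit.xml then shows per-spec results in the build summary instead of just a pass/fail exit code.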

Related

Running gradle test tasks in Microsoft App-V5.1 virtualized environment

I'm very new to App-V, which is being evaluated in my office.
I have a Selenium test suite written in JUnit 5 and can launch it as a Gradle test task using the Gradle wrapper. My final goal is to run this in an App-V 5.1 virtualized environment, similar to this question.
Following the link mentioned in the answer, I tried to launch cmd.exe within the App-V environment, and it seemed to work. Then, I tried to do this:
./gradlew --no-daemon clean test
The testClasses task then ran perfectly, but in the test task I got an error like:
Could not write standard input into: Gradle Worker 1.
java.io.IOException: The pipe is being closed
...
(Sorry, I can't show you the actual error log for security reasons, but it is similar to this question.)
Am I doing something wrong? What's the right way to launch Gradle tests in an App-V environment?
Have you tried launching cmd.exe from within the virtual bubble? I find the best way to do this is to create a shortcut to cmd.exe during sequencing and use this to troubleshoot.
If your process works within the bubble, the solution may be as simple as allowing Local Interaction. Have a read here about that.
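If you do get a prompt inside the bubble, it may also help to re-run the failing task with Gradle's standard diagnostic flags so the worker failure shows up in the console rather than only in the HTML report (these are stock Gradle options, nothing App-V specific):
./gradlew --no-daemon --info --stacktrace clean test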

Configure Mocha Test Runner in Bamboo

I've configured and am executing mocha tests in WebStorm, so I know the module is working properly. But I can't seem to make it run from a Bamboo task. The task runs with Success but there are 0 tests executed.
This is my configuration at the moment:
app/ is my working dir. I also tried app/node_modules/mocha/bin/ and other possibilities. I am not sure which of all the mocha-named files in the module is actually the Mocha executable...
Or maybe the problem lies in the tests dir? I've got test files in app/test/unit/models/ and app/test/unit/services/ respectively. But in WebStorm I configured it with the general test dir, just /app/test. Configuring the Mocha task in Bamboo with the specific test folders did not yield results either...
I believe the problem comes from wrong directory configuration in the task, but I've already tried every path I can think of and I've got no idea what's missing or wrong...
I noticed from your screenshot that the "Parse test results produced by this task" box isn't checked. This is what tells Bamboo to parse the output of the tests that you run.
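If adjusting the directories still doesn't help, one fallback (a sketch; the paths come from the question and the .bin shim location is standard for npm installs) is to drive Mocha from a Script task and let it recurse into the test tree:
cd app
./node_modules/.bin/mocha --recursive test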

TeamCity and CTest test results

I have a number of unit tests written for my project, executed with CTest. I would like to integrate the results into my TeamCity build. I've downloaded and set up the plugin for my testing framework (Boost Test).
The problem that I have run into is that the tests run with CTest write their output to Testing/Temporary/LastTest.log, whereas TeamCity is trying to read the results from standard out. To get around this, my testing step is:
make test
cat Testing/Temporary/LastTest.log
which works, but feels like a hack.
Is there any way to get TeamCity to read from this file in addition to standard out? Alternatively, is there any way to tell ctest to output to standard out in addition to this LastTest.log file?
This question is similar, but I would like it to work for all output rather than just on failure: CMake: setting an environmental variable for ctest (or otherwise getting failed test output from ctest/make test automatically)
TeamCity has additional build features which allow it to process test reports. I am not sure whether it will work or not, but you could try adding an additional build feature to your build to read the CTest report.
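If the build-feature route doesn't pan out, another angle (assuming you can invoke ctest directly instead of going through make test) is to have CTest echo every test's output to standard out, which TeamCity already reads:
ctest --verbose
Note that --output-on-failure only echoes output for failing tests, which is why it doesn't satisfy the "all output" requirement here.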

Fitnesse in Cruisecontrol.net exec: do not fail the build when tests fail

I'm trying to integrate fitnesse with our cruisecontrol setup.
I'd like to have a set of acceptance tests that we develop before the features for the release are worked on: develop our acceptance tests, then run them against our nightly build (not on every check-in; we have a job for that, but I suspect our acceptance tests would slow it down too much).
So I want to be able to run the fitnesse test suite and not have it fail the build when some tests fail (the expectation is that some of the tests will fail, until we have finished the release).
I have the acceptance tests building on the integration server and the fitnesse suite running from the command line (as an exec task in the integration job).
At the moment it is failing the build (runner.exe has a non-zero exit code when any test fails).
So... does anyone have a list of exit codes for fitsharp runner.exe? Is there any way to tell a cruisecontrol exec task that I really don't care about the return value from the job? Is there another cc.net task I should use instead?
Edit:
Current best idea is to wrap the fitsharp runner in a batch file or PowerShell script and swallow the return code from fitnesse.
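Something along these lines could serve as the wrapper (a sketch only; the runner path and arguments are placeholders, and swallowing the exit code means CC.NET reports success even when acceptance tests fail):
rem run-acceptance.bat - hypothetical wrapper around the fitSharp runner
Runner.exe %*
rem always exit 0 so the CC.NET exec task does not fail the build
exit /b 0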
Edit2:
The return code from fitsharp runner.exe is the number of tests that failed (which makes setting the success exit codes element for the cruisecontrol.net exec task difficult).
I think the best way to do it is with NAnt, which integrates very nicely with CCNet. You can simply tell a NAnt exec task not to fail the build; see http://nant.sourceforge.net/release/latest/help/tasks/exec.html
When you're close to release, simply set the failonerror property to true.
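As an illustrative sketch (the target name, runner path, and the property holding the arguments are placeholders, not real fitSharp options), the NAnt side could look like:
<target name="acceptance-tests">
  <!-- failonerror="false" stops a non-zero exit code from failing the build -->
  <exec program="tools\fitsharp\Runner.exe" commandline="${acceptance.args}" failonerror="false" />
</target>
Flipping failonerror back to true when you are close to the release then makes failing acceptance tests break the build again.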

How to set up a CI environment using jenkins, rvm and cucumber

I am new to CI and would like your thoughts and input on how to go about my problem. I would like to start off by saying that I have been wrestling with this for 2 days (and I don't have much of a sysadmin background), so please play nice? (I am mainly a front-end web dev) :)
Basically my plan was to install Jenkins and then make a CI environment with these steps:
1. poll for any changes to GitHub
2. if there are, run the build script:
   a. migrate the development and test dbs? (does that mean I have to put the config/database.yml in my repo?)
   b. run cucumber
   c. if all tests pass, go to 3; else fail
3. run any rake setup stuff
4. run the server (deploy)
I have done some of this by cheating:
locally, I switch rvm to the correct ruby I need (rvm use 1.8.7-p174@mygemset)
run Jenkins (java -jar jenkins.war) so that it picks up the RVM ruby as default
run spork in a separate terminal (because for some reason my cucumbers don't run without spork - that's another problem)
build the project manually by clicking Build
So basically, I want to automate this stuff. Maybe what I need is a set of steps to follow (general or specific, depending on your taste) so I can get my CI up and running.
Keep in mind that my "cheats" won't do, as I want to test different projects with different setups and the startup cheat just won't cut it. Currently, my project build was successful because all I did was run cucumber (and all my cukes pass). I want it to be able to deploy after it passes, so maybe some help there also? Thanks
Okay I will try and help you as best I can.
poll for any changes to github
This can be easily done with the Github Plugin located here
if there are, run the build script: a. migrate the development and test dbs? (does that mean I have to put the config/database.yml in my repo?) b. run cucumber c. if all tests pass go to 3, else fail
Then all you would do is run the build script you have configured in the build, from
Select "Add Build Step" -> "Execute shell".
You can either do that, which is probably what I would do, because when you create builds you want them to be portable so you can start them up in new Jenkins instances without having to set up your build machine with build-specific files.
Then you run your tests; if they fail, the build should fail regardless. Here is some information on running Ruby on Rails tests. If you need to manually fail a build in a script based on a result, exiting the script with a non-zero code will usually fail the build. If not, continue and run your rake and deployment scripts.
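A rough sketch of what that Execute shell step could contain, pieced together from the question (the gemset name comes from the question; the database task and the use of Bundler are assumptions, not a definitive setup):
#!/bin/bash -l
# load the project's rvm-managed ruby and gemset
rvm use 1.8.7-p174@mygemset
bundle install
# prepare the test database; assumes config/database.yml is present on the CI machine
RAILS_ENV=test bundle exec rake db:migrate
# run cucumber; a non-zero exit status here fails the Jenkins build
bundle exec cucumber
# any additional rake setup or deploy steps would follow here once the tests pass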
Just a few notes on Jenkins: it won't do everything for you, but if you can do something manually, Jenkins can automate it. So anything you have running manually you can, with a little bit of effort, get up and running automated with Jenkins.
Here is another answer you might find helpful in your general setup and ideology behind Jenkins.
Goodluck!
