I am looking to regression test our Jenkins shared libraries and have not found any framework or solution so far. I tried Jenkins Pipeline Unit, writing unit test cases and mocking the Jenkins global variables, but discovered that it did not work in our case, as we have a lot of Jenkins shared libraries that are a mix of Groovy, boto3 and AWS CloudFormation templates. I need some information on whether any solution is available, or whether anyone has done this before, so that I can get some help.
I followed a much more basic approach, for (IMHO) real unit tests: https://www.linkedin.com/pulse/jenkins-global-shared-pipeline-libraries-real-unit-delgado-garrido/
Groovy code can be tested as plain Groovy (even though the CPS transformation causes it not to be real Groovy).
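As a minimal sketch (the class and file names here are invented), a shared library class under src/ can be exercised with plain Groovy and JUnit, with no Jenkins runtime involved:

    // src/org/example/VersionUtils.groovy - hypothetical shared library class
    package org.example

    class VersionUtils implements Serializable {
        // Increment the last numeric component of a dotted version string
        static String bump(String version) {
            def parts = version.tokenize('.')
            parts[-1] = (parts[-1].toInteger() + 1).toString()
            return parts.join('.')
        }
    }

    // test/org/example/VersionUtilsTest.groovy - plain JUnit test, no Jenkins mocks needed
    package org.example

    import org.junit.Test
    import static org.junit.Assert.assertEquals

    class VersionUtilsTest {
        @Test
        void bumpsThePatchVersion() {
            assertEquals('1.2.4', VersionUtils.bump('1.2.3'))
        }
    }

Anything that touches Jenkins steps (sh, echo, etc.) still needs mocking, but logic factored into classes like this tests as ordinary Groovy.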
I need to produce a number of component integration tests to fit into an existing framework built around JMeter and run via Bamboo.
My problem is that I know nothing of Bamboo.
Working in a Linux environment.
Does anyone have any recommendations for either basic tutorials for Bamboo, or links to existing questions on here which would form a good start point for me?
Any help greatly appreciated.
The main idea of putting a JMeter test under continuous integration control is having unattended JMeter test executions based on triggers (on version control system commit, periodically, on demand, whatever).
There are several ways of running a JMeter test via Bamboo; the easiest would be either running a shell script or a custom command executable.
The results can be viewed and analysed using the JMeter Aggregator plugin.
See How to Run JMeter in a Continuous Integration Environment With Bamboo for step-by-step configuration instructions and examples.
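For the script route, the essential non-GUI invocation looks something like this (the .jmx and .jtl paths are placeholders):

    jmeter -n -t /path/to/testplan.jmx -l /path/to/results.jtl

Here -n runs JMeter in non-GUI mode, -t points at the test plan, and -l writes the results file that the Aggregator plugin can pick up.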
We are looking at Cucumber for our automation test framework, because everyone, including business people, can understand it.
We use an AngularJS frontend and a Java REST backend. The team that is going to write the step definitions likes Ruby, so we want to stick with Ruby for that.
We would also like to use Maven to tie this process into our build process.
Will Cucumber be a good fit, given the story above?
Hui Peztherez, from my perspective Cucumber is a great choice; we are using it with the same architecture, except for Angular.
We are using Maven too, and it's very useful to orchestrate the runs with Jenkins, using Maven to run tagged scenarios:
mvn test -Dcucumber.options="--tags @smoke"
ref: https://cucumber.io/docs/reference/jvm
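For illustration, here is a made-up feature file carrying the kind of tag the command above would pick up:

    @smoke
    Feature: Sign-in
      Scenario: Successful sign-in
        Given I am on the login page
        When I sign in with valid credentials
        Then I should see my dashboard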
Jenkins also has several plugins that report on Cucumber results, which is very useful for testers. We are now working on HP Quality Center integration with a plugin called Bumblebee (this part is still under development on both sides, ours and Bumblebee's).
Ruby is also a good choice; step definitions are very easy to define in Ruby...
We also have an integration with Selenium for the front-end side, and it works well...
So go for it!
We have used Cucumber in Java with Gradle in the past, and also with Maven, and it works fine. We have frameworks for UI and API testing: for the UI we used WebDriver to write the step definitions, and for the API we used RestAssured. You can do the same things in Java that you can do in Ruby.
Maven dependency for Java Cucumber:
http://mvnrepository.com/artifact/info.cukes/cucumber-java/1.2.4 - please add other dependencies as required.
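That artifact corresponds to a POM entry like this:

    <dependency>
        <groupId>info.cukes</groupId>
        <artifactId>cucumber-java</artifactId>
        <version>1.2.4</version>
        <scope>test</scope>
    </dependency>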
Jenkins plugin: https://wiki.jenkins-ci.org/display/JENKINS/Cucumber+Reports+Plugin
Will Cucumber be a good fit given the story above?
- Yes, it is a good fit. I would suggest showing a POC (proof of concept) to management. In my past experience, management had no clue about BDD and had a very hard time understanding coverage; we had to dive very deep to provide all the information to them. It is very important to answer the following questions for management:
Does the BDD report give management an accurate idea of test coverage?
Is everyone on the team able to write feature files, and to a consistent quality?
Will the feature files and the BDD report be the starting point for checking test coverage?
Thank you.
Please be aware that Cucumber is a BDD framework that can be used on top of a browser automation framework like Selenium WebDriver/Watir/Protractor; they are two distinct things. Most of these implement Selenium WebDriver's protocol.
My only concern is with using Maven in that project setup. I know you can run Ruby code on the JVM using JRuby, but I'm not sure which plugin you'd use to trigger that from Maven.
I am using tools like Puppet/Chef/Ansible to set up and configure development environments and production servers.
Whenever I update the configuration, I run the tool against my development environment and log in to check manually whether things work as expected.
But this is tedious, and I can't test everything every time, so is there any way I can automate the testing?
There are Infrastructure Testing Frameworks for this:
ServerSpec / InSpec - Ruby-based. Well known, big community, nice looking, and best in its class.
BATS - Bash Automated Testing System, which is a bit easier.
TestInfra - Python-based infra testing framework. Still pretty young, very small community. Intro.
Goss - Fast (written in Go), small tool for validating server/infra configuration. Test scenarios are written in YAML.
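As a taste of the Goss style, a minimal goss.yaml (the nginx example is made up) asserting a package is installed and serving:

    package:
      nginx:
        installed: true
    service:
      nginx:
        enabled: true
        running: true
    port:
      tcp:80:
        listening: true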
Automation:
There is the interesting Molecule project - automation for testing Ansible roles, designed by Cisco. I have never tried it yet.
A step further would be using Test Kitchen, which handles the automation to spin up Vagrant or Docker or even AWS instances and test Puppet/Chef/Ansible with RSpec/BATS against the freshly spun-up machines.
So what you need is to pick a framework, write tests, and run your playbooks/recipes and tests against disposable VMs.
Ideally, keep your "infra as code" in a VCS and configure a CI system like Travis CI to run your tests for every PR that brings new changes into your repository.
You can even follow TDD here: write the tests first, watch them fail, then write the actual implementation in your favorite configuration management tool, and see whether that change makes the tests pass.
MOAR Infrastructure Testing & Automation!
If you can let us know what you want to test, we can help better.
But did you check the dry-run mode? I think Puppet and Ansible support it; you could have a cron job or some automated script that runs all the Puppet/Ansible modules against a single (test) node.
More info:
1. http://docs.ansible.com/ansible/playbooks_checkmode.html
2. Check the noop mode in https://docs.puppet.com/puppet/latest/reference/man/agent.html
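In short, the dry-run switches look like this (the playbook name is a placeholder):

    ansible-playbook site.yml --check --diff
    puppet agent --test --noop

--check/--noop report what would change without applying anything, and --diff shows the file changes Ansible would make.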
My case is about having too much complex project configuration logic inside the Jenkins job definitions, which over time becomes harder and harder to deal with. It also prevents you from easily executing the build jobs under other build/CI tools.
If these projects were Java-based, anyone would probably tell me to use Maven, as I could put most of this inside the pom.xml files and keep them with the project. Still, my case is more about C/C++ or even .NET projects, for which all the build scripts are usually in Bash (Cygwin being a dependency on Windows).
I know that, in theory, I could move the parts that are now inside the Jenkins job configuration into those Bash files, but this would clearly require significant effort, and it would be really hard to tune them so that different steps can be enabled and disabled based on external conditions.
So, what I am trying to achieve here is a high level of independence from the build system, so that I could switch to another one in the future if I wanted to.
What would you recommend as a solution for that? Obviously I need something that works cross-platform and is not tied to a specific build system.
Does it make sense to use Maven for that, even if these projects are not Java ones? Personally, I am not a big fan of XML configuration files; YAML, JSON and INI strike me as friendlier.
What kind of extra logic living in the Jenkins configuration are we talking about?
One would be deployment, as I want to be able to deploy to Nexus or similar repositories; also executing tests, collecting code coverage, and maybe posting the results somewhere.
As a side note, looking at Travis configuration files makes me wonder why Jenkins didn't go for such an approach.
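For reference, I mean the kind of in-repo configuration shown below (a minimal, made-up .travis.yml for a C/C++ project; the script paths are placeholders):

    language: cpp
    script:
      - ./scripts/build.sh
      - ./scripts/run-tests.sh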
Look at Groovy. Jenkins allows direct Groovy code to manipulate almost everything. A Groovy script could be used to take care of everything, driven by project-specific configuration, and it could even be checked in together with the source code. Then, in the Jenkins job, you just have a single build step that calls the Groovy script.
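A minimal sketch of what such a checked-in driver script could look like (the file name and config keys here are invented):

    // build.groovy - lives in the repo; the Jenkins job just runs "groovy build.groovy"
    def config = new groovy.json.JsonSlurper().parse(new File('build-config.json'))

    // Helper that runs a shell command and fails the build on a non-zero exit code
    def run = { String cmd ->
        def proc = cmd.execute()
        proc.waitForProcessOutput(System.out, System.err)
        if (proc.exitValue() != 0) {
            throw new RuntimeException("Step failed: ${cmd}")
        }
    }

    run(config.buildCommand)                          // e.g. "bash scripts/build.sh"
    if (config.runTests) { run(config.testCommand) }  // steps toggled by config, not by job settings
    if (config.deploy)   { run(config.deployCommand) }

The point is that enabling/disabling steps becomes a matter of editing a JSON (or YAML/INI) file in the repository rather than a Jenkins job setting.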
The above suggestion, however, is very Jenkins dependent.
Another possibility is an Ant script. The AntExec plugin allows you to execute an Ant script, along with ant-contrib if needed, using the same tool installation process that the rest of Jenkins uses. Therefore, you don't need to worry about Ant being installed on the node: Jenkins will take care of it on demand.
The benefit of the Ant script approach is that it's not tied to Java concepts the way Maven is, it's cross-platform (Windows and Linux), and, just like the Groovy script example above, it can be checked in along with the rest of the source code.
Setting up an integration server, I'm unsure about the best approach to using multiple tasks to complete the build. Is it best to put everything in one big job, or to make small dependent ones?
You definitely want to break up the tasks. Here is a nice example of CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file which can be shared among projects with little customization.
http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
I use TeamCity with an NAnt build script. TeamCity makes it easy to set up the CI server part, and the NAnt build script makes it easy to handle a number of tasks as far as report generation is concerned.
Here is an article I wrote about using CI with CruiseControl.NET, it has a nant build script in the comments that can be re-used across projects:
Continuous Integration with CruiseControl
The approach I favour is the following setup (assuming you are on a .NET project):
CruiseControl.NET.
NAnt tasks for each individual step; NAnt.Contrib for alternative CC templates.
NUnit to run unit tests.
NCover to perform code coverage.
FXCop for static analysis reports.
Subversion for source control.
CCTray or similar on all dev boxes to get notification of builds and failures etc.
On many projects you find that there are different levels of tests and activities which take place when someone does a check-in. Sometimes these can grow to the point where a long time passes after a build before a dev can see whether their check-in broke the build.
What I do in these cases is create three builds (or maybe two):
A CI build is triggered by checkin and does a clean SVN Get, Build and runs lightweight tests. Ideally you can keep this down to minutes or less.
A more comprehensive build, which could run hourly (if there are changes) and does the same as the CI build but runs more comprehensive and time-consuming tests.
An overnight build which does everything and also runs code coverage and static analysis of the assemblies and runs any deployment steps to build daily MSI packages etc.
The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET which log and chart build timings etc. for each step, letting you do historical analysis and continuously tweak the builds to keep them snappy. Managers find it hard to accept that a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.
We use buildbot, with the build broken down into discrete steps. There is a balance to be found between breaking the build steps down with enough granularity and keeping each one a complete unit.
For example at my current position, we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub steps) that compiles them into the final version that will end up in the final distributions.
If something goes wrong in any of those steps it is pretty easy to diagnose.
My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that. In my case that would be:
Build Plugin Pieces
Compile for Mac
Compile for PC
Compile for Linux
Make final Plugins
Run Plugin tests
Build intermediate IDE (We have to bootstrap building)
Build final IDE
Run IDE tests
I would definitely break down the jobs. Chances are you're likely to make changes in the builds, and it'll be easier to track down issues if you have smaller tasks instead of searching through one monolithic build.
You should be able to create one big job from the smaller pieces anyway.
G'day,
As you're talking about integration testing, my big (obvious) tip would be to make the test server's build and configuration as close as possible to the deployment environment.
</thebloodyobvious> (-:
cheers,
Rob
Break your tasks up into discrete goal/operations, then use a higher-level script to tie them all together appropriately.
This makes your build process easier to understand for other people (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for re-use. It's likely you won't reuse the high-level scripts (although this could be possible if you have similar projects), but you can definitely reuse (even if it's copy/paste) the discrete operations rather easily.
Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next.
For my team's environment, we use NAnt since it provides a common scripting environment between dev machines (where we write/debug the scripts) and the CI server (since we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at their core each project is just calling into the same NAnt scripts and then we manipulate the results (ie, archive the build output, flag failing tests etc).
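As an illustration of one such reusable discrete operation, here is a small hypothetical NAnt target for the "get latest source" step described above (the property names are made up):

    <target name="get-latest" description="Update the working copy before building">
        <echo message="Updating ${source.dir} from Subversion" />
        <exec program="svn">
            <arg value="update" />
            <arg value="${source.dir}" />
        </exec>
    </target>

A higher-level target can then depend on get-latest, compile, test, and so on, tying the discrete operations together per project.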