How to break up Jenkins testing - Maven

Background
I am currently writing test scripts in Java with TestNG, Maven, Selenium, and Jenkins, and I plan to write hundreds of them. At the moment I have about 80 scripts written, but only 8 have been uploaded to Bitbucket. Note that each script can contain anywhere between 5 and 25 tests depending on complexity; for example, the 8 scripts currently on the server run 100 tests.
Problem
The issue I can see arising very quickly is the sheer volume of test scripts being run. Jenkins runs the entire Maven project that sits on Bitbucket. Currently, with only 8 scripts, Jenkins takes a total of 20 minutes to run. By the time the more complex scripts are up, this could take hours, and even days once all of the scripts I plan on uploading are in place.
Research
So far I've looked around for some way to break up the testing process. For instance, I would have separate Maven projects in my Bitbucket repository for different areas of the site, and then several different builds on Jenkins, one for each area. I'm not sure how that would work, though, since Jenkins seems to just go in and read all of the tests in my repo.
I'm almost certain that having all of the tests in a single build is bad practice, but I can't find information on how to handle a huge test suite. I'm hoping someone with real experience can clarify this for me.
Software
Added as a side note in case anyone wants to know what I'm using:
Maven: 3.3.3
Java: 1.7.0_79
Selenium: 2.46 & 2.47 (currently 2.47)
Jenkins: 1.622
Conclusion
I believe there must be a way of breaking the test suite up without having separate Bitbucket repos for each section.
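One way to get there, sketched below with illustrative profile and suite names (none of these come from the question), is to keep a single repository and define one Maven profile per area of the site, with each profile pointing the Surefire plugin at a different TestNG suite file:

```xml
<!-- pom.xml sketch: one profile per site area; file names are placeholders. -->
<profiles>
  <profile>
    <id>checkout-tests</id>
    <build>
      <plugins>
        <plugin>
          <groupId>org.apache.maven.plugins</groupId>
          <artifactId>maven-surefire-plugin</artifactId>
          <configuration>
            <suiteXmlFiles>
              <suiteXmlFile>src/test/resources/suites/checkout.xml</suiteXmlFile>
            </suiteXmlFiles>
          </configuration>
        </plugin>
      </plugins>
    </build>
  </profile>
  <!-- further profiles: search-tests, account-tests, ... -->
</profiles>
```

A Jenkins build for the checkout area would then run `mvn test -Pcheckout-tests`, so the suite is split across several smaller builds without splitting the repository.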

One possible solution is using Selenium Grid. This will allow you to distribute tests over several VMs. It's fairly well documented.
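If you go that route, TestNG's parallel execution is what lets the Grid spread the load. A minimal testng.xml sketch, with placeholder class names:

```xml
<!-- testng.xml sketch: with parallel="tests", each <test> runs on its own
     thread, and each thread's RemoteWebDriver session lands on whichever
     Grid node is free. Class names are placeholders. -->
<suite name="regression" parallel="tests" thread-count="4">
  <test name="login">
    <classes><class name="com.example.tests.LoginTest"/></classes>
  </test>
  <test name="search">
    <classes><class name="com.example.tests.SearchTest"/></classes>
  </test>
</suite>
```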

Related

How to minimize build server project specific configuration?

My case is about having too much complex project configuration logic inside the Jenkins job definitions, which over time becomes harder and harder to deal with. It also prevents you from easily executing build jobs under other build/CI tools.
If these projects were Java based, anyone would probably tell me to use Maven, as I could put most of the logic inside the pom.xml files and keep it with the project. Still, my case is more about C/C++ or even .NET projects, for which all the build scripts are usually written in bash (Cygwin being a dependency on Windows).
I do know that, theoretically, I could move the parts that now live inside the Jenkins job configuration into those bash files, but this would clearly require significant effort, and it would be really hard to tune the scripts so that different steps can be enabled and disabled based on external conditions.
So, what I am trying to achieve here is a high level of independence from the build system, so that I could switch it at some point in the future.
What would you recommend as a solution for that? Obviously I need something that is cross-platform and not tied to a specific build system.
Does it make sense to use Maven for this, even if the projects are not Java ones? Personally, I am not a big fan of XML configuration files; YAML, JSON, and INI strike me as friendlier.
What kind of extra logic in the Jenkins configuration are we talking about?
One would be deployment, as I want to be able to deploy to Nexus or similar repositories; others are executing tests, code coverage, and maybe posting the results somewhere.
As a side note, looking at Travis configuration files makes me wonder why Jenkins didn't go for such an approach.
Look at Groovy. Jenkins allows direct Groovy code to manipulate almost everything. A Groovy script could take care of all the project-specific configuration, and it could even be checked in together with the source code. Then, in the Jenkins job, you just have a single build step that calls the Groovy script.
The above suggestion, however, is very Jenkins dependent.
Another possibility is an Ant script. The AntExec plugin allows you to execute an Ant script, along with ant-contrib if needed, using the same tool-installation process that the rest of Jenkins uses. Therefore, you don't need to worry about Ant being installed on the node: Jenkins will take care of it on demand.
The benefit of an Ant script is that it's not tied to Java concepts the way Maven is, it's cross-platform (Windows and Linux), and, just like the Groovy script example above, it can be checked in along with the rest of the source code.
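For a non-Java project, such a checked-in script might look roughly like the sketch below (target names and the `do.deploy` property are invented for illustration). The `if` attribute on a target is one way to enable or disable steps based on external conditions, which the question raises:

```xml
<!-- build.xml sketch: project-specific steps live in the repository, so the
     Jenkins job shrinks to a single "run Ant" build step. The deploy target
     only runs when the do.deploy property is set from outside. -->
<project name="app" default="ci" basedir=".">
  <target name="compile">
    <exec executable="make" failonerror="true"/>
  </target>
  <target name="test" depends="compile">
    <exec executable="./run-tests.sh" failonerror="true"/>
  </target>
  <target name="deploy" depends="test" if="do.deploy">
    <!-- push artifacts to Nexus or a similar repository -->
  </target>
  <target name="ci" depends="compile,test,deploy"/>
</project>
```

Jenkins (or any other CI tool) then needs only a single step such as `ant` or `ant -Ddo.deploy=true`.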

How to split tasks between CI Tool (like Jenkins) and a build script (Ant or Maven)? [closed]

Our team has some controversy over which tasks should be handled by the CI tool and which should live in the build script (we use Ant for building and FinalBuilder for CI).
My thought is that any task which is useful not only on the build server but also on developer/QA machines should be placed in the Ant build script (though I'm not sure what the actual best practice is).
For now we have the following list of tasks:
update directory (svn update)
compile
run tests
make coverage report
run static analyzers and generate reports
package (make war-file)
deploy to a web-server
send email notifications (with linked reports and build status)
run DB update tool
put a build result (war file and reports) to a special place
(any other CI-common tasks?)
Which tasks would you do by means of your CI tool and which would you place to the build script?
My current approach is this:
Ant tasks: compile, tests, coverage reports, analyzers, package, deploy, DB update.
CI tool: svn update, email notifications, putting build result to a special place.
(The Ant task set is partly inspired by Maven's default set of tasks.)
Good question.
I do think that anything you want to do routinely outside of the build server should be in scripts, but not necessarily in your "build" script.
For instance, your deployment and database upgrade steps I would put into a separate script (and yes I disagree with David W and think you absolutely should automate these). We've used Ant for deployment tasks in the past and done ok with it. But I've also heard that Ant is a bad build scripting language because it's not as good with procedural deployment tasks. That's backwards. Use Ant for build, and if it's not a good fit for deployments, script that with something else.
The core role of your build server is to consistently and automatically run these processes and report on the results. For unit tests, etc, this may mean invoking the script that runs the tests, but having the intelligence to parse the results in a meaningful way for things like trending and analysis.
All of the above advice is framed by "within reason". If something is done only occasionally outside of the build server, scripting it is hard, and integrating it at the build-server level is easy, then by all means save yourself the work and just do it there.
Let's see the tasks you want to do...
update directory (svn update)
Well, Jenkins will do that anyway.
compile
And, that too...
run tests
Why not? Jenkins can display the JUnit test results right on the build page. If the tests take a really long time to complete, you can set up a second job to run them: have Jenkins copy the working files from the first job to the second, then run the second job. There's a Copy Artifacts plugin that will help you do this.
make coverage report
Jenkins can do that too. And, just like the JUnit tests, Jenkins can display the results on the build page.
run static analyzers and generate reports
Jenkins can do that too.
package (make war-file)
Jenkins can do that. You can even store the war file on Jenkins. People will be able to copy it and deploy it on their systems. Or, you can have Jenkins store it in your Maven repository. Heck, you can do both.
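For a Maven-built project (the question here uses Ant, but the idea carries over), that hand-off is just a `distributionManagement` section in the pom plus a `mvn deploy` build step; a sketch with placeholder IDs and URLs:

```xml
<!-- pom.xml sketch: placeholder ids/URLs. With this in place, a Jenkins
     build step running `mvn deploy` pushes the packaged war to the
     repository manager. -->
<distributionManagement>
  <repository>
    <id>internal-releases</id>
    <url>https://repo.example.com/releases</url>
  </repository>
  <snapshotRepository>
    <id>internal-snapshots</id>
    <url>https://repo.example.com/snapshots</url>
  </snapshotRepository>
</distributionManagement>
```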
deploy to a web-server
Jenkins can do this, but I prefer to do this manually -- unless there's some testing I want to do as part of the build process. When it comes to deployment, I'd rather do things myself.
send email notifications (with linked reports and build status)
Standard Jenkins behavior is to send out notifications on bad and unstable builds (builds that compiled, but where tests failed), then send an email once the build is good again. Do you really want an email sent out with each and every build? If so, use the email-ext (Email Extension) plugin.
run DB update tool
Again, this is something I prefer to do manually -- unless this is part of my testing.
put a build result (war file and reports) to a special place
No need to do that. The individual Jenkins build page itself can store the war file, the test results, who started the build, and what changed. The changes can be linked to Fisheye, Sventon, or another source-repository web viewer, which allows a user to click on a file and see exactly which lines changed.
Jenkins also has a permanent link to the last good build, the last bad build, and the last build. I use iframes (Bad David! Using obsolete HTML code) to embed these pages in the official corporate web pages.
In short, Jenkins can do all of that stuff for you, so why not let it?

Continuous Integration testing with physical devices

I've been wondering how you go about doing CI-style testing when you're dealing with physical devices.
I imagine you have a suite of tests, and a pool of devices against which they can be run.
Additionally:
Some tests may require specific device models.
Some tests may require the use of more than one device.
What CI servers have support for this?
I'm also interested in those with only partial support, either natively or through plugins, as I'd like to see how it's done.
Continuous integration enables a team to integrate and test their work frequently. Automated builds are meant to compile, link, and run unit tests. You want your CI to run fast, especially if you run it at every check-in. That is why you would want to restrict CI activities to a simple confirmation of the build and unit tests alone. What you're asking about seems more along the lines of quality assurance (QA) testing, and having QA failures mixed into your CI efforts would hold development back.
As such, I'm more under the impression that the activities associated with CI are not dependent upon the final physical machine the work may eventually be migrated to.
Now, this doesn't mean you CAN'T take the CI-compiled package and run it against some final target machine, but again, that is really considered a separate activity.
This seems to be reinforced in the following article by Martin Fowler.
Notice he doesn't talk about the final targeted devices...only the build machine.
I can suggest Test Manager, which is part of Microsoft's TFS suite. I have not tried it with many environments apart from Windows-based ones, though I know there are many connectors. For Windows-based environments, I believe it will satisfy most needs.
I use it for nightly builds to perform smoke tests (turn it on, see if any smoke comes out), but you have to be careful to keep the tests small so that they finish in a matter of hours rather than days, if you want them to be part of your CI.
Then, when the quality is good enough, you can proceed to regression tests and integration tests if needed.
I wouldn't get too caught up in what a CI system is supposed to do or not do. Instead, I would focus on the problem you are trying to solve. It sounds like that problem is facilitating development on multiple platforms. You can take the concept of continuous integration and extend it to address the issue successfully. I know, because I've done it in the past.
I implemented a build system for code that needed to compile and test successfully on 4 different platforms (nt, wince, linux-arm, linux-x86). The CI server would:
use a Linux and a WinNT build server for compilation (and cross-compilation)
copy the compiled tests and supporting libs to the appropriate devices and kick off an automated test run
copy the log back once the test suite completed (or write it to a network-mounted filesystem)
tag the source and package the libs and executables if the test suite passed
This same platform was reused for developer verification before commits. Developers would run a partial build and test (only updated source would be recompiled and only the affected tests rerun), while the CI server would execute a full build from scratch.
Our builds were pretty fast because we had a proper DAG of build dependencies. This allowed for concurrent compilation within a platform build, and the platform builds themselves also ran concurrently. As a result, partial builds took a few seconds and full builds took ~30 minutes. Our build servers were quite beefy (optimized for fast compiles) and the codebase was of moderate size (I don't remember the stats).
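The system described above was custom, but the concurrency idea can be sketched with stock Ant's `<parallel>` task (illustrative only; the target names are invented and this is not the poster's actual setup):

```xml
<!-- Ant sketch: fan independent per-platform builds out across worker
     threads once the dependency graph says they don't depend on each other. -->
<target name="build-all">
  <parallel threadCount="4" failonany="true">
    <antcall target="build-nt"/>
    <antcall target="build-wince"/>
    <antcall target="build-linux-arm"/>
    <antcall target="build-linux-x86"/>
  </parallel>
</target>
```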

Is CI worth implementing for a one or two man project?

At work, where we do LOB .NET/MSSQL development, many of our projects are 2-person or even 1-person projects with development life cycles of 1-3 months. The developers serve as business analysts/project managers/QA, so things get done fast with minimal 'BS time' spent. We do get bigger projects that can take 6 months and have a team of 5 devs, but these are less common.
We're pushing to have everyone do TDD going forward (my most recent project has full code coverage and was developed solo), and I was researching the architecture required to take maximum advantage of it. It seems that most people doing TDD are also doing CI: they have a build server, run automated builds, use some kind of automated client build tool (FinalBuilder or NAnt), and so on.
So my questions: I see the obvious benefits on the less common large projects, where you have 5 people working on the same codebase at once, but will we see much benefit from doing CI on the small 2-person projects? What about on a 1-person project: is it just a complete waste, since you're not really 'integrating' with anyone? And how would you pitch CI, automated builds, and a build server to management?
Having an automated, repeatable build process, and being able to prove that the current build passes all tests and runs in a server environment, is worth the effort on a project of any size, IMHO.
I'd pitch it this way: manual builds are manual. Things can get mixed up even on small projects. An automated build solves this problem. The time it takes to set up the build script will be made up many times over during the lifecycle of the application.
As far as CI with test runs and so on goes: it's a constant health check on the quality of the code base. It's good to know as soon as possible when one thing inadvertently breaks another.
On a small project, you don't need most of the infrastructure to do CI, especially the build server. What you do need is the tests, the build automation, revision control, and a controlled build environment. You can just as well have your build and test servers be virtual machine images you run on your workstations... just so long as the images are under revision control like the rest of the project.

What is the best way to setup an integration testing server?

Setting up an integration server, I'm in doubt about the best approach to using multiple tasks to complete the build. Is it best to put everything in one big job, or to make small dependent ones?
You definitely want to break up the tasks. Here is a nice example of a CruiseControl.NET configuration that has different targets (tasks) for each step. It also uses a common.build file, which can be shared among projects with little customization:
http://code.google.com/p/dot-net-reference-app/source/browse/#svn/trunk
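In case the link rots, here is a stripped-down sketch of that style of configuration (URLs and paths are placeholders): each build step is a separate NAnt target, and the shared logic lives in common.build:

```xml
<!-- ccnet.config sketch: one NAnt <target> per step, all delegating to the
     shared common.build script. Values are placeholders. -->
<cruisecontrol>
  <project name="MyApp">
    <sourcecontrol type="svn">
      <trunkUrl>http://svn.example.com/myapp/trunk</trunkUrl>
      <workingDirectory>C:\builds\myapp</workingDirectory>
    </sourcecontrol>
    <tasks>
      <nant>
        <buildFile>common.build</buildFile>
        <targetList>
          <target>compile</target>
          <target>test</target>
          <target>package</target>
        </targetList>
      </nant>
    </tasks>
  </project>
</cruisecontrol>
```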
I use TeamCity with an NAnt build script. TeamCity makes it easy to set up the CI server part, and the NAnt build script makes it easy to handle a number of tasks as far as report generation is concerned.
Here is an article I wrote about using CI with CruiseControl.NET, it has a nant build script in the comments that can be re-used across projects:
Continuous Integration with CruiseControl
The approach I favour is the following setup (assuming a .NET project):
CruiseControl.NET.
NANT tasks for each individual step. Nant.Contrib for alternative CC templates.
NUnit to run unit tests.
NCover to perform code coverage.
FxCop for static analysis reports.
Subversion for source control.
CCTray or similar on all dev boxes to get notification of builds and failures etc.
On many projects, you find there are different levels of tests and activities that take place when someone checks in. Sometimes these grow to the point where a developer has to wait a long time after a check-in to see whether they have broken the build.
What I do in these cases is create three builds (or maybe two); a rough trigger sketch follows the list:
A CI build is triggered by checkin and does a clean SVN Get, Build and runs lightweight tests. Ideally you can keep this down to minutes or less.
A more comprehensive build which could be hourly (if changes) which does the same as the CI but runs more comprehensive and time consuming tests.
An overnight build which does everything and also runs code coverage and static analysis of the assemblies and runs any deployment steps to build daily MSI packages etc.
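In CruiseControl.NET terms, that split mostly comes down to each project's trigger block; a rough sketch, with project names, intervals, and times as placeholders:

```xml
<!-- ccnet.config sketch: the CI project polls for check-ins, while the
     nightly project is forced at a fixed time. Values are placeholders. -->
<cruisecontrol>
  <project name="MyApp-CI">
    <triggers>
      <intervalTrigger seconds="60" buildCondition="IfModificationExists"/>
    </triggers>
    <!-- clean SVN get, build, lightweight tests -->
  </project>
  <project name="MyApp-Nightly">
    <triggers>
      <scheduleTrigger time="23:00" buildCondition="ForceBuild"/>
    </triggers>
    <!-- full build, coverage, static analysis, MSI packaging -->
  </project>
</cruisecontrol>
```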
The key thing about any CI system is that it needs to be organic and constantly tweaked. There are some great extensions to CruiseControl.NET that log and chart build timings for each step, letting you do historical analysis and continuously tweak the builds to keep them snappy. It's something managers find hard to accept, but a build box will probably keep you busy for a fifth of your working time just to stop it grinding to a halt.
We use buildbot, with the build broken down into discrete steps. There is a balance to be found between breaking the steps down with enough granularity and keeping each one a complete unit.
For example, at my current position, we build the sub-pieces for each of our platforms (Mac, Linux, Windows) on their respective platforms. We then have a single step (with a few sub-steps) that compiles them into the final version that ends up in the final distributions.
If something goes wrong in any of those steps it is pretty easy to diagnose.
My advice is to write the steps out on a whiteboard in as vague terms as you can and then base your steps on that. In my case that would be:
Build Plugin Pieces
Compile for Mac
Compile for PC
Compile for Linux
Make final Plugins
Run Plugin tests
Build intermediate IDE (We have to bootstrap building)
Build final IDE
Run IDE tests
I would definitely break up the jobs. Chances are you're going to make changes to the builds, and it'll be easier to track down issues with smaller tasks than by searching through one monolithic build.
You should be able to create one big job from the smaller pieces, anyway.
G'day,
As you're talking about integration testing, my big (obvious) tip would be to make the test server built and configured as close to the deployment environment as possible.
</thebloodyobvious> (-:
cheers,
Rob
Break your tasks up into discrete goal/operations, then use a higher-level script to tie them all together appropriately.
This makes your build process easier for other people to understand (you're documenting as you go so anyone on your team can pick it up, right?), as well as increasing the potential for reuse. It's likely you won't reuse the high-level scripts (although this may be possible if you have similar projects), but you can definitely reuse the discrete operations rather easily, even if only by copy/paste.
Consider the example of getting the latest source from your repository. You'll want to group the tasks/operations for retrieving the code with some logging statements and reference the appropriate account information. This is the sort of thing that's very easy to reuse from one project to the next.
For my team's environment, we use NAnt since it provides a common scripting environment between dev machines (where we write and debug the scripts) and the CI server (where we just execute the same scripts in a clean environment). We use Jenkins to manage our builds, but at its core each project just calls into the same NAnt scripts, and then we manipulate the results (e.g., archive the build output, flag failing tests, etc.).
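A rough sketch of that kind of NAnt script (project name, paths, and tools are placeholders); the `get-latest` target is the sort of discrete, reusable operation described above:

```xml
<!-- NAnt sketch: discrete operations a dev box or the CI server can call
     individually, plus a high-level target tying them together.
     Names and paths are placeholders. -->
<project name="MyApp" default="ci">
  <target name="get-latest" description="Pull the latest source, with logging">
    <echo message="Updating working copy..."/>
    <exec program="svn" commandline="update"/>
  </target>
  <target name="compile" depends="get-latest">
    <exec program="msbuild" commandline="src/MyApp.sln /p:Configuration=Release"/>
  </target>
  <target name="test" depends="compile">
    <exec program="nunit-console" commandline="build/MyApp.Tests.dll"/>
  </target>
  <target name="ci" depends="test"/>
</project>
```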
