Hook build step in TeamCity

My build project has 3 steps:
- file preparation
- deployment
- functional tests
I have set all the dependencies between them, but I would also like to hook the deployment step to the functional tests, so that if the functional tests are running and new code is committed, the deployment waits until the functional tests finish.
I know there are build triggers, snapshot dependencies and artifact dependencies, but none of them seems right for my case.
The first runs a deployment every time the functional test step finishes, which is obviously not what I want.
The second forces the deployment to use the same code as the functional tests, when it should use freshly committed code, and the third has more or less the same problem.
Where is my thinking wrong? Am I missing something, or is there a shortcut to make this work?

You can create 2 build configurations:
"Functional tests" configuration with VCS trigger.
"Deployment" configuration with snapshot dependency on first configuration and also VCS trigger (or other trigger, e.g. Schedule trigger with 'Trigger build only if there are pending changes' option selected).
Files will be deployed only if the functional tests did not fail on the same code base.
Is that what you need?

Related

Maven and Jenkinsfile - skipping previous phases

I'm exploring Jenkins staging functionality and I want to devise a fast and lean setup.
Basically, Jenkins promotes the use of stages to partition the build process and provide nice visual feedback about the progress of the build.
So the Jenkinsfile goes something like
stage("Build")
bat("mvn compile")
stage("Test")
bat("mvn test")
stage("Deploy")
bat("mvn deploy")
This works well, but feels wrong, because the test and deploy stages will both repeat activities from previous phases.
As a result, in this setup I am building three times (although compilation is skipped when there are no changes) and testing two times (in the test and the deploy runs).
When I google around I can find various switches; one of them works for skipping unit tests, but the compilation and dependency resolution steps happen regardless of what I do.
Do I need to choose between speed and stages in this case or can I have both?
I mean:
stage("Resolve dependencies, build, test and deploy")
bat("mvn deploy")
is by far the fastest approach, but it doesn't produce a nice progress table in Jenkins.
To bring incremental builds to Maven phases, as Gradle has them, you can use the takari-lifecycle Maven plugin.
Once the plugin is applied you get all the benefits. In your example, the Test stage running mvn test will avoid compilation because the code was compiled in the previous stage, and the Deploy stage will avoid compiling both your main source code and test source code, but the tests will be executed again, so I suggest adding -DskipTests.
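For illustration, a minimal sketch of the resulting Jenkinsfile (assuming the takari-lifecycle plugin is already configured in the project's pom.xml):

node {
    stage("Build") {
        bat("mvn compile")
    }
    stage("Test") {
        // Incremental: reuses the classes compiled in the Build stage.
        bat("mvn test")
    }
    stage("Deploy") {
        // The tests already ran in the previous stage, so skip re-running them.
        bat("mvn deploy -DskipTests")
    }
}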

Is there a tool to keep track of manual steps in a CI or CD process?

I'm looking for a tool of some kind that I can integrate into our CI process to keep track of the manual steps we have.
As an example, we want to run through some manual test scripts on the integration server before pushing the version to the test server. Currently QA gets a notification when the build is done, executes the manual testing and then tells someone to push the version to test if it's okay.
What I would love to find is something that will keep track of when the manual tests have been successfully completed and automatically push the version to test.
It should be possible to notify/trigger the tool from Visual Studio Online and have it trigger the next step in VSO as well.
I've been googling various different things, but can't seem to find anything close to what I'm looking for. To-do list tools like Asana don't seem to have the integration point we need, but maybe I'm just missing something?
You can use the new Release Management tools in conjunction with test cases to get what you want.
In VSTS you can create test cases to reflect the steps of the tests that you want, and then create a Test Plan or Suite to reflect the list of tests that you need to run manually.
Then, as part of your release process, you could create a custom task that waits for a Test Run to be completed against your list. If all tests in that run pass, move to the next step; if any tests fail, fail the release.
This should be fairly easy to set up; you need to call the API to check the Test Run. If your testers use Microsoft Test Manager you can also have the results associated with the build that you are deploying and get full traceability.
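For instance, the custom task could poll the test runs REST API and only let the release continue once the relevant run has completed with all tests passing. A rough sketch (the account and project names are placeholders, and the exact API version may differ; check the VSTS REST reference):

curl -u user:personal-access-token "https://youraccount.visualstudio.com/DefaultCollection/YourProject/_apis/test/runs?api-version=1.0"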
You can try the Release Management tool in VSTS. It can achieve part of what you want.
I assume you have three steps in your build definition:
Build solution
Publish to integration server
Publish to test server
You can keep the first two steps in the build definition. Then create a release definition in Release Management and add the third step to it. Configure the definition for "Continuous deployment" and link it to your build definition. Assign your QA to be the approver for this release task. Now a release task will be created as soon as a build completes, but it will wait for the approver (your QA) to approve. Approve the release task once the test has passed and the build will be published to the test server; otherwise, reject it.

TeamCity - Stop build when a test fails

Is it possible to stop a TeamCity build (the entire build, i.e. it won't execute subsequent steps) when a unit test fails? Ideally I'd also like it to terminate the currently executing step, which in my case would be the NUnit test runner. In my circumstance there is no point in continuing the build if a single unit test fails. I've looked at Failure Conditions but I don't think they are applicable, as the build continues to run.
Feature requested: http://youtrack.jetbrains.com/issue/YTF-3275
As you noted, TeamCity can skip further build steps on NUnit test failures with the "Only if build status is successful" step execution condition. However, that does not stop the test run before the step finishes.
A related feature request is TW-23766.
The only workaround I can think of currently is not to use the NUnit test runner and to implement the logic inside the build script instead, for example with nunit-console.exe as Manuel noted. If you choose to follow this route, consider using the TeamCity Add-in for NUnit.
You can do it using nunit-console.exe.
According to the official documentation (http://nunit.org/index.php?p=consoleCommandLine&r=2.6.2), it provides a /stoponerror switch that does exactly what you need.
It can also generate XML output that can be parsed by TeamCity (there is a build feature for that) in order to populate the "Tests" tab.
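For example, an invocation could look like this (the assembly name is a placeholder; switch names are per the NUnit 2.6 documentation linked above):

nunit-console.exe YourTests.dll /stoponerror /xml:TestResult.xml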
On the build step after the unit tests, change the execute step condition to "Only if build status is successful" instead of "If all previous steps finished successfully".
Also make sure that, under the Failure Conditions menu item, the "at least one test failed" option is ticked.

Run unit tests in Jenkins / Hudson in automated fashion from dev to build server

We are currently running a Jenkins (Hudson) CI server to build and package our .NET web projects and database projects. Everything is working great, but I want to start writing unit tests and then only pass the build if the unit tests pass. We are using the built-in msbuild task to build the web project, with the following arguments:
MsBuild Version: .NET 4.0
MsBuild Build File: ./WebProjectFolder/WebProject.csproj
Command Line Arguments: /target:Rebuild /p:Configuration=Release;DeployOnBuild=True;PackageLocation=".\obj\Release\WebProject.zip";PackageAsSingleFile=True
We need automated tests over our code that run when we build on our own machines (possibly via a post-build event), but that also run when Jenkins does a build for that project.
If you run it like this, it doesn't build the unit test project, because the web project doesn't reference the test project. The test project would reference the web project, but I'm pretty sure building through the test project would be butchering our automated builds, as they exist primarily to build and package our deployments. Running these tests should be a step in that automated build and package process.
Options ...
Create two Jenkins jobs: one to run the tests; if the tests pass, another build is triggered which builds and packages the web project. Put the post-build event on the test project.
Build the solution instead of the project (make sure the solution contains the required tests) and put post-build events on any test projects that would run the NUnit console to run the tests. Then use the command line to copy all the required files from each of the bin and content directories into a package.
Just build the test project in Jenkins instead of the web project. The test project would reference the web project (depending on what you're testing) and build it.
Problems ...
There are two jobs, not one: two things to debug instead of one, one to see if the tests passed and one to build and package the web project. The tests could pass but the build could still fail, if the problem is in something that isn't used by what you're testing...
This requires us to know exactly what goes into the build. Right now msbuild does it all for us. If you have multiple teams working on a project, every time an extra folder is created you have to worry about possibly brittle command-line statements.
This seems like a corruption of our main purpose here. The tests should be a step in this process, not the overriding most important thing in it. I'm also not 100% sure that a triggered build is the same as a normal build: does it do all the same things as a normal build, move all the correct files in the same way, into the same directories, etc.?
Initial problem.
We want to run our tests whenever our main project is built. But adding a post-build event to the web project that runs against the test project doesn't work, because the web project doesn't reference the test project and won't trigger a build of it. I could go on... but that's enough...
We've spent about a week trying to make this work nicely but haven't succeeded. Feel free to edit this if you feel you can get a better response ...
In Jenkins/Hudson, it's quite OK to have many jobs: some for compilation, triggered by version control changes; some for running (unit) tests, triggered by successful builds; some for more (integration) tests, triggered by earlier successful tests; and some for deploying, triggered by successfully passing all tests.
Look at plugins like Join, Build Pipeline, Parameterized Trigger and more to help out with this.
This will also allow things to happen in parallel, by using multiple nodes. Trying to cram everything in one job is not the way to go.
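If you later adopt Pipeline jobs, the same chain can be expressed in a single Jenkinsfile while keeping each phase visible as its own stage. A rough sketch, not the poster's exact setup: the solution, project and assembly paths are placeholders, and msbuild/nunit-console are assumed to be on the PATH:

node {
    stage("Build") {
        // Build the whole solution so the web and test projects are both compiled.
        bat('msbuild WebProject.sln /target:Rebuild /p:Configuration=Release')
    }
    stage("Unit tests") {
        // Run the compiled tests; a non-zero exit code fails the build here,
        // so the packaging stage below never runs on a test failure.
        bat('nunit-console.exe Tests\\bin\\Release\\Tests.dll /xml:TestResult.xml')
    }
    stage("Package") {
        bat('msbuild WebProjectFolder\\WebProject.csproj /p:Configuration=Release;DeployOnBuild=True;PackageLocation=".\\obj\\Release\\WebProject.zip";PackageAsSingleFile=True')
    }
}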

How to split a big Jenkins job/project into smaller jobs without compromising functionality?

We're trying to improve our Jenkins setup. So far we have two directories: /plugins and /tests.
Our project is a multi-module project of Eclipse Plugins. The test plugins in the /tests folder are fragment projects dependent on their corresponding productive code plugins in /plugins.
Until now, we had just one Jenkins job which checked out both /plugins and /tests, built all of them and produced the Surefire results etc.
We're now thinking about splitting the project into smaller jobs corresponding to features we provide. It seems that the way we tried to do it is suboptimal.
We tried the following:
We created a job for the core feature. This job checks out the whole /plugins and /tests directories and builds only the plugins the feature is composed of. This job has a separate pom.xml which defines the core artifact and lists the modules contained in the feature.
We created a separate job for the tests that should be run on the feature plugins. This job uses the cloned workspace from the core job. This job is to be run after the core feature is built.
I somehow think this is less than optimal.
For instance, only the core job can update the checked out files. If only the tests are updated, the core feature does not need to be built again, but it will be.
As soon as I have a feature which is dependent on the core feature, this feature would either need to use a clone of the core feature workspace or check out its own copy of /plugins and /tests, which would lead to bloat.
Using a cloned workspace, I can't update my sources. So when I have a feature depending on another feature, I can only run its job once the core feature has been updated and built.
I think I'm missing some basic stuff here. Can someone help? There definitely is an easier way to do this.
EDIT: I'll try to formulate what I think would ideally happen if everything works:
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check whether the corresponding job needs to run)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
Finally, the project job should
do a nightly build
check out all sources from /plugins and /tests
build all, test all, send results to Sonar
Additionally, it would be neat if the nightly build was unnecessary because the builds and test results of the projects' features would be combined in the project job results.
Is something like this possible?
Starting from the end of the question: I would keep a separate nightly job that does a clean check-out (getting rid of any generated stuff before checking out), builds everything from scratch, and runs all tests. If you aren't doing a clean build, you can't guarantee that what is checked into your repository really builds.
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check whether the corresponding job needs to run)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
[I am assuming that by "dependent features" in 1 you mean the things needed by the "feature" in 2.]
To do this, I would say that you need multiple jobs:
a job for every individual feature and every dependent feature that simply builds that feature. The jobs should be started by SCM changes for the (dependent) feature.
I wouldn't keep test jobs separate from compile jobs; that allows the possibility that successfully compiled code is never tested. Instead, I would rely on the fact that when a build step fails in Jenkins, it normally aborts further build steps.
The trick is going to be in how you thread all of these together.
Let's say we have a feature and its build job, called F1, that is built on 2 dependent features, DF1.1 and DF1.2, each with its own build job.
Both DF1.1 and DF1.2 should be configured to trigger the build of F1.
F1 should be configured to get the artifacts it needs from the latest successful DF1.1 and DF1.2 builds. Unfortunately, the very nice Clone Workspace SCM plugin is not going to be of much help here, as it only pulls from one previous job. Perhaps one of the artifact publisher plugins might be useful, or you may need to add some custom build steps to put/get artifacts.
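For example, with the Copy Artifact plugin, F1 could fetch the outputs of both upstream jobs at the start of its build. A sketch in Pipeline syntax (the job names, paths and Maven command are placeholders; each DF job is assumed to archive its output with archiveArtifacts):

node {
    stage("Fetch dependencies") {
        // Pull the archived outputs of the latest successful dependent-feature builds.
        copyArtifacts(projectName: 'DF1.1', selector: lastSuccessful())
        copyArtifacts(projectName: 'DF1.2', selector: lastSuccessful())
    }
    stage("Build feature") {
        // Build F1 against the fetched artifacts.
        sh('mvn -f f1/pom.xml package')
    }
}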
