Hudson/SVN job configuration - continuous-integration

I am working on a project that is built in a modular way. When the branch is checked out, there are folders for each module. I would like to set up a Hudson job for each module so that each module is built individually, but I cannot figure out how to have one workspace for all the jobs and have each Hudson job check for changes only in its respective module within the common workspace, without triggering an update of the whole workspace. Is what I am trying to do even possible?

Each job needs its own workspace; anything else is asking for trouble. If you have a shared workspace, you might need to synchronize the jobs so that they don't interfere with each other. That can be trickier than you expect.

Related

Share Git repository directory across multiple build definitions

When a private agent build starts in VSTS, it gets assigned a directory, e.g. C:\vstsagent\_work\1\s
Is there a way to set this to a different path? On other CI servers, like Jenkins, I can define a custom workspace for a job. I'm dealing with a huge monorepo and have dozens of build definitions around the same repository. It makes sense (to me anyway) to share a single directory on the build agent computer.
The benefit to me is that my builds can use pre-built components from upstream repositories, if they have already been built.
Thanks for any help
VSTS build always creates a working directory per build definition. This leaves you two options:
Create a single build definition and use conditions on steps to skip certain steps, so that only what is needed actually runs. This allows you to keep the standard steps; it may require a PowerShell script to figure out which steps to run and which ones to skip. Set variables from PowerShell using the special logging commands, as in the sketch below.
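For example, a minimal sketch of such a script, assuming Git is available on the agent; the folder and variable names are hypothetical:

    # Hypothetical: decide whether the ServiceA part of the monorepo changed
    $changed = git diff --name-only HEAD~1 HEAD |
               Where-Object { $_ -like "src/ServiceA/*" }

    # Special logging command: publishes the answer as build variable BuildServiceA
    if ($changed) {
        Write-Host "##vso[task.setvariable variable=BuildServiceA]true"
    } else {
        Write-Host "##vso[task.setvariable variable=BuildServiceA]false"
    }

A later step can then carry a custom condition such as eq(variables['BuildServiceA'], 'true') so it is skipped when nothing relevant changed.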
Disable the get sources step and add a step that fetches the sources manually. You'll need to clean the working directory, check out the right commit, and so on, essentially replicating the actions of the get sources step yourself. It may take some fiddling to get the behavior right for normal builds, pull request builds, etc., but that way you take full control over the location where sources are checked out (see the sketch below).
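A rough sketch of such a manual get-sources step, assuming a PowerShell step with Git available and the predefined BUILD_REPOSITORY_URI and BUILD_SOURCEVERSION variables; authentication and pull-request merge refs are left out, and the shared path is hypothetical:

    # Hypothetical: check out into a directory shared by several definitions
    $src = "C:\shared\monorepo"

    if (-not (Test-Path "$src\.git")) {
        git clone $env:BUILD_REPOSITORY_URI $src
    }
    Set-Location $src
    git fetch origin
    git clean -fdx                              # remove build leftovers
    git checkout --force $env:BUILD_SOURCEVERSION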
I'd also recommend you investigate the Visual Studio 2017 project format, which uses the new <PackageReference> elements in the project files to fetch packages. The new system supports version ranges and floating versions, which can always fetch the latest available version of a package. It's a better long-term solution.
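For instance, a floating version in an SDK-style project file makes every restore pick up the newest matching release; the package name here is just an example:

    <!-- Floating version: restore always brings in the latest 2.x release -->
    <ItemGroup>
      <PackageReference Include="Contoso.BuildTools" Version="2.*" />
    </ItemGroup>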
No, it isn't available in the VSTS build system.
You can change the working directory of the agent (C:\vstsagent\_work) by re-configuring it and specifying another working folder, but it won't use the same source folder for different build definitions; the folders will still be numbered 1, 2, 3, and so on.

How do I stop pre-build steps in the parent job of a Jenkins matrix job?

Has anyone else noticed that the parent job of a set of matrix builds appears to still perform everything before the "Build" section of the configuration?
This means that things like Delete workspace and Source Code Management happen on the node running the parent job, even though I can't see when you would ever want that or how it could be useful.
In particular, when using a custom, shared workspace, this means that the parent job of a matrix can stomp on other jobs that use the same workspace.
Does anyone know why this is? More importantly, how do I stop this happening?

Is it possible on TeamCity to build a module against all agents when a VCS Trigger is met?

I would like a module to be run by all agents when a VCS trigger condition is met.
Is this possible?
One way you can do this is by adding a Schedule Trigger which has an option to run on all agents.
Having looked into it, as far as I can see, not directly.
The behaviour could potentially be achieved with the Command Line Remote Run tool (http://confluence.jetbrains.net/display/TW/Command+Line+Remote+Run+Tool): a separate build configuration linked to the VCS would detect the changes and call the Remote Run tool from a command-line build step to build the project on each required agent.
Further research into the Command Line Remote Run tool would be required to confirm this is possible.
There may also be some functionality allowing this in the REST API, although my look through the documentation didn't turn anything up.
Have you had much luck working on alternative solutions?
I've created a build configuration that updates our source-controlled directory of referenced third-party assemblies, and it is a snapshot dependency of most, if not all, of our build configurations. When I update this directory with a new or more recent assembly, I too would like this configuration to be run on all build agents.
At the moment, I've simply duplicated the configuration and bound each to a specific agent. It adds management overhead, but has temporarily resolved the issue.
You could install the plugin below and specify the list of agent names; the build will then run once per "value" in the matrix.
https://github.com/presidentio/teamcity-matrix-build-plugin

How to split a big Jenkins job/project into smaller jobs without compromising functionality?

We're trying to improve our Jenkins setup. So far we have two directories: /plugins and /tests.
Our project is a multi-module project of Eclipse Plugins. The test plugins in the /tests folder are fragment projects dependent on their corresponding productive code plugins in /plugins.
Until now, we had just one Jenkins job which checked out both /plugins and /tests, built all of them and produced the Surefire results etc.
We're now thinking about splitting the project into smaller jobs corresponding to features we provide. It seems that the way we tried to do it is suboptimal.
We tried the following:
We created a job for the core feature. This job checks out the whole /plugins and /tests directories but only builds the plugins the feature consists of. The job has a separate pom.xml which defines the core artifact and lists the modules contained in the feature (a sketch follows after these two steps).
We created a separate job for the tests that should be run on the feature's plugins. This job uses the cloned workspace from the core job and is meant to run after the core feature has been built.
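A minimal sketch of such a feature-level pom.xml, with hypothetical group and module names:

    <!-- Hypothetical aggregator pom for the core feature: builds only
         the listed plugins even though the whole tree is checked out -->
    <project xmlns="http://maven.apache.org/POM/4.0.0">
      <modelVersion>4.0.0</modelVersion>
      <groupId>com.example</groupId>
      <artifactId>core-feature</artifactId>
      <version>1.0.0-SNAPSHOT</version>
      <packaging>pom</packaging>
      <modules>
        <module>plugins/com.example.core</module>
        <module>plugins/com.example.core.ui</module>
      </modules>
    </project>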
I somehow think this is less than optimal.
For instance, only the core job can update the checked out files. If only the tests are updated, the core feature does not need to be built again, but it will be.
As soon as I have a feature which is dependent on the core feature, this feature would either need to use a clone of the core feature workspace or check out its own copy of /plugins and /tests, which would lead to bloat.
With a cloned workspace, I can't update my sources, so when I have a feature depending on another feature, I can only run its job after the core feature has been updated and built.
I think I'm missing some basic stuff here. Can someone help? There definitely is an easier way for this.
EDIT: I'll try to formulate what I think would ideally happen if everything works:
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check the corresponding job)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
Finally, the project job should
do a nightly build
check out all sources from /plugins and /tests
build all, test all, send results to Sonar
Additionally, it would be neat if the nightly build was unnecessary because the builds and test results of the projects' features would be combined in the project job results.
Is something like this possible?
Starting from the end of the question. I would keep a separate nightly job that does a clean check-out (gets rid of any generated stuff before check-out), builds everything from scratch, and runs all tests. If you aren't doing a clean build, you can't guarantee that what is checked into your repository really builds.
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check the corresponding job)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
[I am assuming that by "dependent features" in 1 you mean the things needed by the "feature" in 2.]
To do this, I would say that you have multiple jobs.
a job for every individual feature and every dependent feature that simply builds that feature. The jobs should be started by SCM changes for the (dependent) feature.
I wouldn't keep test jobs separate from compile jobs. That allows the possibility that successfully compiled code is never tested. Instead, I would rely on the fact that when a build step fails in Jenkins, it normally aborts further build steps.
The trick is going to be in how you thread all of these together.
Let's say we have a feature and its build job, called F1, which is built on two dependent features, DF1.1 and DF1.2, each with its own build job.
Both DF1.1 and DF1.2 should be configured to trigger the build of F1.
F1 should be configured to get the artifacts it needs from the latest successful DF1.1 and DF1.2 builds. Unfortunately, the very nice "Clone SCM" plugin is not going to be of much help here, as it only pulls from one previous job. Perhaps one of the artifact publisher plugins might be useful, or you may need to add some custom build steps to put/get artifacts (a sketch follows below).
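If you script it yourself, a minimal sketch of such a custom get-artifacts step could look like this; it assumes DF1.1 and DF1.2 archive their artifacts and that the step runs where JENKINS_HOME is reachable, and all paths and job names are hypothetical:

    # Hypothetical: pull the archived artifacts of the last successful
    # DF1.1/DF1.2 builds into F1's workspace before building F1
    $jenkinsHome = $env:JENKINS_HOME
    foreach ($upstream in "DF1.1", "DF1.2") {
        $archive = Join-Path $jenkinsHome "jobs\$upstream\lastSuccessful\archive"
        $dest    = Join-Path $env:WORKSPACE "deps\$upstream"
        New-Item -ItemType Directory -Force -Path $dest | Out-Null
        Copy-Item -Recurse -Force "$archive\*" $dest
    }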

In Hudson or Jenkins, how can you restore deleted builds?

I accidentally deleted some builds for a job that I would have rather kept. I restored the builds on disk from backup, but they still do not show up when I am on the status page for that job.
I have tried both triggering another build for the job and re-starting Hudson.
How can I fully-restore these builds? Is there a DB that Hudson uses to store this type of information?
Inside your %HUDSON_HOME% directory is a subdirectory called "jobs". Under "jobs" are subdirectories for every project. Inside each one are subdirectories for each build.
You need to make sure that jobs\<projectname> exists, and then copy the missing build directories inside (a sketch follows below).
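For example, with a hypothetical job name and backup location, on Windows this could be:

    # Copy a restored build (here: build 42 of "my-project") back into
    # the live Hudson tree
    $hudsonHome = $env:HUDSON_HOME
    Copy-Item -Recurse "D:\backup\jobs\my-project\builds\42" `
              "$hudsonHome\jobs\my-project\builds\42"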
Click "Manage Hudson/Reload Configuration From Disk" to make Hudson recognize the newly added builds. Not necessary to restart your servlet container (e.g. Tomcat) if you use one.
It depends on what you restored. It sounds like you restored the workspace folder, but that folder is irrelevant for historical information. I would first shut down Hudson, then restore the build folder and the modules folder, and then restart Hudson. For more information, ask the Hudson project.
In addition to William's answer (https://stackoverflow.com/a/3399103/142207), you'll need to re-add the job to all the custom views.
