Specific cleanup interval for artifacts in TeamCity

I have a project in TeamCity with a build configuration for the master release branch, which is compiled every time a new version of our product is released.
To be able to pinpoint when errors were introduced, I need a long retention time for some artifacts of this build configuration. Since other artifacts are rather big (full CD installation packages), my server's hard drive fills up quickly if I simply increase the cleanup interval for the whole configuration.
Is it possible to configure two different cleanup intervals somehow? I would love to have a long retention time for the really important artifacts while throwing the big ones away early.
I currently use TeamCity 9.0.3.
Let's say, for example, that my project has two artifacts:
smallupdatepack.zip (32 MB)
reallybigupdatecd.iso (700 MB)
I would like to configure TeamCity so that the .iso is kept for, say, the last 10 builds, but the .zip is kept for the last 150 builds.
What I do not want is a solution where all the .zip files are kept forever while only the .iso files are deleted on an interval, which is all that seemed possible to me using the build configuration's artifact patterns alone.

You can specify custom cleanup rules for projects/targets on the Build History Clean-up page.
In your case, you can have an aggressive cleanup for all builds and a lenient cleanup for the project/target of the master build.
If you edit any of the settings, you can set an individual retention period for artefacts. You can set up artefact cleanup per target. However, for the same target you cannot set up different cleanup rules for multiple artefacts.

The answer by @Biswajit_86 looks like the only thing available for setting special clean-up rules. Looking at it, the configuration-specific settings should override the project settings and give you what you need, but maybe it doesn't work that way. Try it out and see if it works; if not, file a bug/suggestion with JetBrains.
The only other thing I could think of is to create a separate build configuration that publishes only the artifacts you want to keep longer than your default rule allows. Give it a snapshot dependency on the configuration that creates the files and check the box to run on the same build agent; that way it doesn't need to rebuild them and can just publish what was already created. Set up a build trigger so that this new configuration runs whenever the other one finishes. Then set the clean-up rules for this configuration to the longer retention setting.
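For the publish step of that second configuration, something like the following could work as a PowerShell build step. This is a minimal sketch: the relative path to the dependency's checkout and the artifact name are assumptions for illustration, but the publishArtifacts service message is standard TeamCity.

```powershell
# Minimal sketch of a build step in the "long retention" configuration.
# Assumes the snapshot dependency's files sit in ..\MasterRelease (adjust
# to your agent's layout) and only the small artifact is republished.
$source = Join-Path "..\MasterRelease" "smallupdatepack.zip"

# TeamCity picks up files announced through this service message and
# attaches them to the current build as artifacts.
Write-Host "##teamcity[publishArtifacts '$source']"
```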

Related

VSTS minimize get source task

We have a VSTS build set up. We currently have a single repository hosting multiple services, with a build definition per service, each triggered only when its service is touched, via a trigger pattern.
The issue is that each build definition, and hence the single repository, makes Get Sources download the whole repo, and we also do a clean every time.
I have been searching for something like the trigger pattern that would let us download only part of the repository, in order to reduce build/download time.
A workaround might be to skip the clean on each build, or to split into multiple repositories. At the moment we would like to avoid the latter.
Let me hear if anyone knows of a good solution.
There is no way to specify the files to be downloaded in the Get Sources step; VSTS will always download the whole repo.
The workaround is to set the Clean option to false, so that only modified and newly added files are updated in the Get Sources step.

Share Git repository directory across multiple build definitions

When a private agent build starts in VSTS, it gets assigned a directory, e.g. C:\vstsagent_work\1\s
Is there a way to set this to a different path? On other CI servers, like Jenkins, I can define a custom workspace for a job. I'm dealing with a huge monorepo and have dozens of build definitions around the same repository. It makes sense (to me anyway) to share a single directory on the build agent computer.
The benefit to me is that my builds can use pre-built components from upstream repositories, if they have already been built.
Thanks for any help
VSTS build always creates a working directory per build definition. This leaves you two options:
Create a single build definition and use conditions on steps to skip certain steps, so that only what is needed runs. This lets you use the standard steps, and may require a PowerShell script to figure out which steps to run and which to skip. Set variables from PowerShell using the special logging commands.
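For example, a PowerShell step could detect which service changed and publish the result as a build variable through the VSTS logging command; later steps can then reference that variable in their conditions. The variable name and detection logic here are placeholders:

```powershell
# Placeholder change detection: a real script would diff this commit
# against the previous build to see which service folders were touched.
$service1Changed = $true

# The ##vso logging command sets a variable visible to subsequent steps.
Write-Host "##vso[task.setvariable variable=BuildService1]$service1Changed"
```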
Disable the Get Sources step and add a step that manually fetches sources. You'll need to clean the working directory and check out the right commit, basically replicating the actions of the Get Sources step manually. It may take some fiddling to get the behavior right for normal builds, pull request builds, etc., but that way you take full control over the location where sources are checked out.
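A manual fetch step might look roughly like this. It is a sketch under assumptions: git is on the PATH, the shared checkout path is one you pick yourself, and the repository URL and commit come from VSTS's predefined build variables:

```powershell
# Shared checkout location used by all build definitions (your choice).
$src    = "C:\build\monorepo"
$repo   = $env:BUILD_REPOSITORY_URI   # predefined VSTS build variable
$commit = $env:BUILD_SOURCEVERSION    # commit being built

# Initialize the shared clone once, then reuse it across definitions.
if (-not (Test-Path (Join-Path $src ".git"))) {
    git init $src
    git -C $src remote add origin $repo
}

git -C $src fetch origin
git -C $src checkout --force $commit
# Optionally wipe untracked files; skip this to keep pre-built components.
# git -C $src clean -fdx
```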
I'd also recommend investigating the Visual Studio 2017 project format, which uses the new <PackageReference> elements in the project files to fetch packages. The new system supports configuring a version range, which can always fetch the latest available version of a package. It's a better long-term solution.
No, it isn't available in the VSTS build system.
You can change the agent's working directory (C:\vstsagent_work) by re-configuring the agent and specifying another work folder, but it won't use the same source folder for different build definitions; the subfolders will still be 1, 2, 3, and so on.
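Re-configuring the agent looks roughly like this, run from the agent's installation directory. This is a hedged sketch: the work-folder path is an assumption, and the remaining configuration prompts (server URL, authentication) are omitted for brevity.

```powershell
# Remove the existing agent registration, then configure it again with a
# different work folder. Builds still get per-definition subfolders 1, 2, 3...
.\config.cmd remove
.\config.cmd --work "D:\agent_work"
```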

Maven publishing artefacts to remote repository and using $release in the artefact version

Wondering how people manage their project artefacts through an environment lifecycle of, say, DEV - AQA - CQA - RELEASE, and whether there are best practices to follow.
I use a Jenkins build server to build my projects (code checkout then maven build). My artefacts all have version 1.0.0-SNAPSHOT and are published to a local .m2 repo on the build server. There are also Jenkins jobs that rebuild the DEV system (on the same server) using those artefacts. The project build is automated whenever someone checks in code. The DEV build is automated on a nightly basis.
At some point, my lead developer determines that our project is fit to go to AQA (the first level of testing environment on a different server).
For this I need to mark the artefacts as version 1.0.0-1 and publish to a remote AQA repository (it's actually a Nexus repo).
The Maven deploy plugin sounds like the right approach, but how do I change the version number to be effectively 1.0.0-$release (where $release is just an incrementing number starting from 1)? Would Maven/Nexus be able to manage the value of $release, or would I need a simple properties file in my project to store/update the last used $release?
Furthermore, someone tests AQA and determines it is fit to move on to CQA (the second testing environment). This is 'promote to CQA'. So my requirement is to copy the artefact from the AQA Nexus repo and publish it to the CQA Nexus repo.
Likewise, after CQA, there'd be a 'promote to RELEASE' job too.
I think the version value remains unchanged during the 'promote' phases. I'd expect the AQA repo to see all versions 1-50, but CQA only 25 and 50, then RELEASE only 50, for example.
I can find loads of info about Maven plugins/goals/phases, but very little prescriptive guidance on how or where to use them outside the immediate development environment.
Any suggestions gratefully received.
Staging/promoting is out of scope for Maven. Once deployed/uploaded to a remote repository, that system is responsible for the completion of the release cycle. Read this chapter about staging: http://books.sonatype.com/nexus-book/reference/staging.html if you use Nexus.
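For the mechanical half of the question, stamping the incrementing number into the version before deploying, the versions-maven-plugin can rewrite the POM, and a CI build number (for example Jenkins's BUILD_NUMBER) can supply the $release value, so nothing needs to be stored in the project. A minimal sketch, assuming a <distributionManagement> section already points at the AQA Nexus repo:

```powershell
# Rewrite the POM version from 1.0.0-SNAPSHOT to 1.0.0-<build number>,
# then build and deploy to the repository in <distributionManagement>.
mvn versions:set "-DnewVersion=1.0.0-$env:BUILD_NUMBER"
mvn clean deploy
```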
Build numbers are just that: build numbers. They are not promotion/staging numbers.
You should come up with another means of tracking your promotions, because otherwise someone might get confused into "knowing" that build 10.1.4-2 is the same as 10.1.4-6. Certainly, all of the Maven-related software will treat those two builds as different builds.
In addition, if a person grabs the wrong copy of the build, managing staging within your build number will increase confusion. If you don't remove all of the 10.1.4-2 builds, someone might get a copy of one without realizing that the build has been promoted to 10.1.4-6. For the "last" staging number to be the one most likely grabbed, you must do two things (which are impossible in combination):
Remove all the old staging numbers, updating them to the new ones.
Ensure that no copy of an old staging number escaped the update.
Since people can generally copy files without being tracked, some files might not be reachable at the time of the update, and the update cannot reach all files simultaneously, such a system is doomed to fail.
Instead, I recommend (if you must track by file) placing the same file in different "staging directories". This defines release gateways by whether the file exists in a certain directory, and makes it clear that it is the same file going through the entire process. In addition, it becomes easy to have each stage of verification poll its respective directory (and you can write Jenkins jobs to promote a file from one directory to another, if you really wish).
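A promotion job in that scheme stays trivially simple, because the file itself never changes; only its gateway directory does. A sketch with made-up paths and file name:

```powershell
# "Promote to CQA": copy the unchanged artifact from the AQA gateway
# directory to the CQA one. Paths and file name are illustrative only.
$aqa      = "\\fileserver\gates\AQA"
$cqa      = "\\fileserver\gates\CQA"
$artifact = "myapp-1.0.0-50.jar"

Copy-Item -Path (Join-Path $aqa $artifact) -Destination (Join-Path $cqa $artifact)
```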

Version Control for Hudson Continuous Integration Build Jobs

We have a continuous integration server with over 40 jobs that are constantly changing. I would like to version control continuous integration build jobs in Hudson so we can roll back changes if we have problems.
Is there a Hudson plugin that will do this, or another existing solution, or should I just keep the config.xml files in SVN?
Hudson Labs has a really great write-up on this: Keeping your configuration and data in Subversion.
This is the first bit of the article:
We all know that keeping important files in version control is critical, as it ensures problematic changes can be reverted and can serve as a backup mechanism as well. Code and resources are often kept in version control, but it can be easy to forget your continuous integration (CI) server itself! If a disk were to die or fall victim to a misplaced rm -rf, you could lose all the history and configuration associated with the jobs your CI server manages.
It's pretty simple to create a repository, but it isn't obvious which parts of your $HUDSON_HOME you'll want to back up. You'll also want to have some automation so new projects get added to the repository, and deleted ones get removed. Luckily we have a great tool to handle this: Hudson!
We have a Hudson job which runs nightly, performs the appropriate SVN commands, and checks in
You only seem to be interested in the configuration, which is fine; just ignore or filter out the bits about the data and focus on the configuration.
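A nightly job of that kind boils down to a few SVN commands over $HUDSON_HOME. A minimal sketch, assuming the job config files are what you want versioned and that $HUDSON_HOME is already a working copy of your repository:

```powershell
# Add any new job config.xml files (errors for already-versioned files
# are suppressed), then commit the lot. Run from within $HUDSON_HOME.
Set-Location $env:HUDSON_HOME

Get-ChildItem -Path "jobs" -Recurse -Filter "config.xml" |
    ForEach-Object { svn add --parents $_.FullName 2>$null }

svn commit -m "Nightly backup of Hudson job configuration"
```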
This is one of the more recent threads about using version control with Hudson's configuration on the Hudson users list.
There are no plugins to store the configuration in an SCM right now (March 2010), though the backup plugin might do something close to what you want, albeit with less of a view of 'change' and more of a snapshot at any given time.
The relatively new Job Config History plugin gets part of the way there - it doesn't actually store the configurations in source control, but it does provide history and auditing of changes to jobs.
You could look into the SCM Sync configuration plugin.
It automatically commits all of your Jenkins config changes to SVN; that way you can track configuration errors easily.
https://wiki.jenkins-ci.org/display/JENKINS/SCM+Sync+configuration+plugin

Pre-build task - deleting the working copy in CruiseControl.NET

I'm currently in the process of setting up a continuous integration environment at work. We are using VisualSVN Server and CruiseControl.NET. Occasionally a build fails, and a symptom is that there are conflicts in the CruiseControl.NET working copy. I believe this is due to the way I've set up the Visual Studio solutions. Hopefully, the more projects we run in this environment, the better our understanding of how to set them up will be, so I'm not questioning why the conflicts happen at this stage. To fix the builds I delete the working copy and force a new build; this works every time (currently). So my questions are: is deleting the working copy a valid part of a continuous integration build process, and how do I go about it?
I've tried solutions including MSTask and calling delete from the command line, but I'm not having any luck.
Sorry for being so wordy - good job this is a beta :)
Doing a full delete before or after your build is good practice. It means there is no chance of your build environment picking up an out-of-date file; you're building exactly against what is in the repository.
Deleting the working copy is possible, as I have done it with NAnt.
In NAnt I would have a clean script in its own folder, outside the one I want to delete, and would then invoke it from CC.NET.
I assume this should also be possible with a batch file; take a look at the rmdir command: http://www.computerhope.com/rmdirhlp.htm
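If you go the script route, the delete itself is a one-liner; the only care needed is checking that the path exists first. A minimal sketch with an assumed working-copy path (rmdir /s /q would do the same from a batch file):

```powershell
# Delete the CruiseControl.NET working copy before forcing a fresh checkout.
# The path is an assumption; point it at your project's working directory.
$workingCopy = "C:\ccnet\checkout\MyProject"

if (Test-Path $workingCopy) {
    Remove-Item -Path $workingCopy -Recurse -Force
}
```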
@pauldoo
I prefer my CI server to do a full delete, as I don't want any surprises when I go to do a release build, which should always be done from a clean state. But it should be able to handle both; there's no reason why not.
@jamie: There is one reason why you may not be able to do a clean build every time when using a continuous integration server: build time. On some projects I've worked on, clean builds take 80+ minutes (an embedded project consisting of thousands of C++ files to check out and then compile against multiple targets). In this case, you have to weigh the benefit of fast feedback against the likelihood that a clean build will catch something an incremental build won't. In our case, we worked on improving and parallelizing the build process while at the same time allowing incremental builds on our CI machine. We did have a few problems because we weren't doing clean builds, but by doing a clean build nightly or weekly you could remove the risk without losing the fast feedback of your CI machine.
If you check CC.NET's JIRA, there is a patch checked in that implements CleanCopy for Subversion, which does exactly what you want: just set CleanCopy to true inside your source control block, just like with the TFS one.
It is very common, and generally good practice, for any build process to do a 'clean' before any significant build. This prevents artifacts from previous builds from tainting the output.
A clean is essentially what you are doing by deleting the working copy.
@Brad Barker
Clean means just wiping out build products.
Deleting the working copy deletes everything else too (source and project files, etc.).
In general it's nice if your build machine can operate without doing a full delete, as this replicates what a normal developer does. Any conflicts it finds during update are an early warning of what your developers can expect.
@jamie
For formal releases, yes, it's better to do a completely clean checkout. So I guess it depends on the purpose of the build.
