In Bamboo, how do I pull a component library repository to a fixed location to avoid per-branch duplication?

I have several projects which use code from a large set of component libraries. These libraries are under source control.
The libraries repository contains all the libraries used by all my projects and contains multiple versions of multiple libraries. Each library/version pair lives in its own folder. Each of my projects identifies the specific library/version pairs it needs through the folder paths of the references in its project file.
For example: $(LibraryPath)\SomeLibrary\v1.1.5
Please note that the libraries repository is only ever added to. No changes are made to stuff already in the repository. Ever.
I have, of course, been able to configure my build plan to pull the libraries repository to a libraries subfolder of the working directory. So far so good. However, with Bamboo's automatic branch management feature, this setup means that the libraries repository is cloned for each and every branch in all projects.
Not funny. No, really, not funny...
What I would like to do is:
pull the libraries repository in each build plan
but pull it to a fixed location that is the same for all build plans
it doesn't have to be an absolute path
but it does need to be outside the working directory of the current build plan to avoid unnecessary duplication
Unfortunately the Checkout Directory of the Source Code Checkout configuration task in a Bamboo build plan doesn't allow me to specify either an absolute path or a relative one that goes "up" for one or more levels from the working dir. The hint text explicitly states "(Optional) Specify an alternative sub-directory to which the code will be checked out." And indeed, specifying something like ..\Library gets punished with the message "Checkout to parent directory is forbidden".
I have seen information on the "artifact sharing" feature of Bamboo. This will probably work, but it seems like overkill for what I want to achieve.
What would be the easiest and least complicated way to achieve my goal using Atlassian's Bamboo Continuous Integration?
Out-of-the-box alternatives are welcome, but please don't direct me to any products that require intimate CLI use and/or whose documentation assumes (extensive) knowledge of 'nix and/or Java setup. I am on Windows and spoiled rotten by powerful (G)UI's.

I have the same problem - with a repository weighing in at around 2GB.
I'd like to simply "git checkout myBranch" and "git clean -fxd" instead of cloning every time (which should save a lot of time and disk space). However I also like Bamboo's automatic trigger with new branches showing up.
Like the OP, I'd love to be able to put "..\SharedDirectory" in the "Checkout Directory" of the "Source Code Checkout" task, but it won't let me go above the \JOB_KEY\ folder.
One possible solution is to replace the "Source Code Checkout" task with a script task running the two git commands above; that way I can specify exactly when/where/how the checkout happens. I think there may be problems with the initial checkout in this case, but once that is solved, all subsequent branches would use the same shared folder, and no more pulling down 2GB every time. A rough sketch of such a script task follows.
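This is only a sketch of that idea (untested): the shared path and remote URL are placeholders, and it assumes Bamboo exposes the branch name to scripts as the environment variable bamboo_planRepository_branchName.

# PowerShell script task replacing Source Code Checkout (sketch).
$shared = "C:\Build\SharedDirectory"
if (-not (Test-Path (Join-Path $shared ".git"))) {
    git clone https://example.com/big-repo.git $shared   # first build only
}
Set-Location $shared
git fetch origin
git checkout $env:bamboo_planRepository_branchName   # branch being built
git clean -fxd

Note that concurrent builds of different branches would trip over each other in a single shared clone, so this fits best where an agent runs one build at a time.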

Related

VSTS minimize get source task

We have a VSTS build setup. Currently we have a single repository hosting multiple services, with a build definition per service, each triggered only when files of that service are touched (via a trigger path pattern).
The issue is that every build definition runs against the single repository, so Get Sources downloads the whole repo each time, and we also do a clean.
I have been searching for something like the trigger patterns, where we could set a pattern so that only part of the repository is downloaded, in order to reduce build/download time.
A workaround might be to not make a clean each time or make multiple repositories. At the moment we would like to avoid the latter.
Let me hear if anyone knows of a good solution.
There is no way to specify the files to be downloaded in the Get Sources step; VSTS will always download the whole repo.
The workaround is to set the Clean option to false, so that only modified and newly added files are updated in the Get Sources step.

Share Git repository directory across multiple build definitions

When a private agent build starts in VSTS, it gets assigned a directory, e.g. C:\vstsagent_work\1\s
Is there a way to set this to a different path? On other CI servers, like Jenkins, I can define a custom workspace for a job. I'm dealing with a huge monorepo and have dozens of build definitions around the same repository. It makes sense (to me anyway) to share a single directory on the build agent computer.
The benefit to me is that my builds can use pre-built components from upstream repositories, if they have already been built.
Thanks for any help
VSTS build always creates a working directory per build definition. This leaves you two options:
Create a single build definition and use conditions on steps to skip certain steps, so that only what is needed runs. This lets you keep the standard steps, and may require a PowerShell script to figure out which steps to run and which to skip. Set variables from PowerShell using the special logging commands, as in the sketch below.
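For example, an inline PowerShell step along these lines (the variable name and the ServiceA/ path filter are made up for illustration) could set a variable that later steps reference in their custom run conditions:

# Inspect the changed files and set a variable via a ##vso logging command.
# Sketch; assumes a regular (non-shallow) clone so HEAD~1 exists.
$changed = git diff --name-only HEAD~1 HEAD
if ($changed -match '^ServiceA/') {
    Write-Host "##vso[task.setvariable variable=buildServiceA]true"
}

A later step could then use a custom condition such as eq(variables['buildServiceA'], 'true').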
Disable the Get Sources step and add a step that manually fetches sources. You'll need to clean the working directory and check out the right commit, basically replicating the actions of the Get Sources step manually. It may require some fiddling to get the behavior right for normal builds, pull request builds, etc., but that way you take full control over the location where sources are checked out. A bare-bones sketch follows.
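Something like this PowerShell step, using the predefined BUILD_REPOSITORY_URI and BUILD_SOURCEVERSION variables (the fixed path is a placeholder, and authentication for the clone as well as pull-request merge handling are left out):

# Manual source fetch into a fixed shared directory (Get Sources disabled).
$dir = "C:\b\shared"
if (-not (Test-Path (Join-Path $dir ".git"))) {
    git clone $env:BUILD_REPOSITORY_URI $dir
}
Set-Location $dir
git fetch origin
git checkout $env:BUILD_SOURCEVERSION   # the exact commit for this build
git clean -fxd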
I'd also recommend you investigate the 2017 project formats that use the new <PackageReference> items in the project file to fetch packages. The new system supports version ranges, e.g. a floating version like Version="1.1.*" always restores the latest matching package. It's a better long-term solution.
No, this isn't available in the VSTS build system.
You can change the agent's working directory (C:\vstsagent_work) by re-configuring the agent and specifying another working folder, but it won't use the same source folder for different build definitions; the subfolders will still be 1, 2, 3, and so on.

Build dependencies and local builds with continuous integration

Our company currently uses TFS for source control and build server. Most of our projects are written in C/C++, but we also have some .NET projects and wouldn't want to be limited if we need to use other languages in the future.
We'd like to use Git for our source control and we're trying to understand what would be the best choice for a build server. We have started looking into TeamCity, but there are some issues we're having trouble with which will probably be relevant regardless of our choice of build server:
Build dependencies - We'd like to be able to control the build dependencies for each <project, branch>. For example, have <MyProj, feature_branch> depend on <InfraProj1, feature_branch> and <InfraProj2, master>.
From what we’ve seen, to do that we might need to use Gradle or something similar to build our projects instead of plain MSBuild. Is this correct? Are there simpler ways of achieving this?
Local builds - Obviously we'd like to be able to build projects locally as well. This becomes somewhat of a problem when project dependencies are introduced, as we need a way to reference these resources or copy them locally for the build to succeed. How is this usually solved?
I'd appreciate any input, but a sample setup which covers these issues will also be a great help.
IMHO both issues you mention really fall into the configuration management category and are thus, as you say, unrelated to the choice of build server.
A workspace for a project build (doesn't matter if centralized or local) should really contain all necessary resources for the build.
How can you achieve that? Keep a project "metadata" git repo with a "content" file listing all your project components and their dependencies (each in its own git/other repo) together with their exact versions, effectively tying them together coherently. (You may find it useful to store other metadata in this file down the road as well, like component-specific SCM info if you use a mix of SCMs across the workspace.)
A workspace pull wrapper script would first pull this metadata git repo, parse the content file, and then pull all the other project components and their dependencies according to the content file; a sketch of such a wrapper follows below. Any build in such a workspace would have all the parts it needs.
When the time comes to modify either the code of a project component or the version of one of its dependencies, you also need to update the content file in the metadata repo to reflect the change and commit it. This is how your project makes progress coherently, as a whole.
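A minimal sketch of such a pull wrapper in PowerShell, assuming a made-up content-file format of one "component repo-url version" triple per line:

# Pull the metadata repo, then every component at its pinned version.
# The URL and the content-file format are illustrative assumptions.
git clone https://example.com/project-metadata.git metadata
Get-Content metadata\content | ForEach-Object {
    $component, $repo, $version = $_ -split '\s+'
    git clone $repo $component
    git -C $component checkout $version   # exact pinned version
}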
Of course, actually managing dependencies is another matter. Tons of opinions out there, some even conflicting.

Is it bad form to have your continuous integration system commit to a repository?

I have recently been charged with building out our "software infrastructure" and so I am putting together a continuous integration server.
After a build completes, would it be considered bad form for the CI system to check some of the artifacts it creates into a tag, so that they can be fetched easily later (or, if the build breaks, so the problem can more easily be recreated)?
For the record we use SVN and BuildMaster (free edition) here.
This is more of a best practices question rather than a how-to question. (It is pretty easy to do with BuildMaster)
Seth
If you believe this approach would be beneficial to you, go ahead and do it. As long as you maintain a clear trace of what source code was used to build each artifact, you'll be fine.
You should keep this artifact repository separated from the source code repository.
It is, however, a little odd to use a source code repository for this: these are typically used for things that will change, which your artifacts most definitely should not.
Source code repositories are also often used in a context where you want to check out "everything", for example the entire trunk. With artifacts you are typically looking for a specific version, and checking out all of them would only be done if exporting them to some other medium.
There are several artifact repositories specialized for this, for example Artifactory or Apache Archiva, but a properly backed-up file server with thought-through access settings might be a simple and good-enough solution.
I would say it's a smell to check in binaries as a tag. Your build artifacts should be associated with a particular build version in your build system, and that build should be associated with a particular check-in. You should be able to recreate the exact source code from that information. If what you're looking for is a one-stop function to open the precise source code revision that generated the broken build, I'd suggest you invest some time into building a PowerShell module that will do that for you.
Something with a signature like:
OpenBuild -projectName "some project name" -buildNumber "some build number"
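A rough sketch of what the body of such a function might do; Get-BuildRevision here is a hypothetical helper that asks your build system which source revision produced the given build:

# Sketch only: map a build back to its SVN revision and check it out.
function OpenBuild {
    param([string]$projectName, [string]$buildNumber)
    # hypothetical lookup against the build system's records
    $rev = Get-BuildRevision -Project $projectName -Build $buildNumber
    svn checkout -r $rev "https://svn.example.com/$projectName/trunk" "$projectName-$buildNumber"
}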

Maintaining upstream vendor source with Xcode and SVN

Question: What is the best way to maintain a project based on another OSS project, through Xcode and version managed by SVN?
I'd like to start a fork (?) of a reasonably popular open source project (it's allowed). Mostly, I want to build my own user interface written in Cocoa/ObjC for it and throw in a few custom features of my own as well.
Now, this OSS project isn't exactly small. The project itself has over 3000 files, and the build process is pretty intense, consisting of multiple stages and steps which need to compile build tools, run those, then compile the results.
All this is fine and dandy in Xcode, since it's easy enough to setup build phases and rules to handle everything.
What I'm not clear on is how best to manage patches from upstream. They are constantly working on the project, and I'd like to be able to keep up to date with those patches as easily as possible, as the diff files sometimes affect up to a hundred (!) files at once.
So maintaining a pristine unmodified copy of that source tree so I can apply patches to it seems like a smart thing to do, because I really don't want to be sorting through hundreds of files every few weeks merging patches by hand.
What I'm thinking of doing in this regard is:
1) Setup an "upstream" SVN repo to hold a copy of the upstream source, plus the bare minimum required to compile it in Xcode (so an xcproject, a few xcconfigs, some prefix header files and that's it)
2) Setup my own "downstream" SVN repo where I do all my work and apply my own modifications.
Whenever upstream releases a patch, I can apply it to #1 then synchronize across to #2, and deal with any issues created by my own modifications.
What I'm not clear about, is if this is a sane way of handling things- or if there's some better practice I should be following.
Is this the best way to handle things, or should I be looking at doing this some other way?
In the SVN world this was named "vendor branches" a long time ago and is used intensively by many teams (you can google the phrase for more detail).
Technically it's:
one SVN repo
at least one special branch (special only in terms of usage, nothing more) which, via svn:externals, is linked to the third-party repo of upstream code
your place for changes (trunk or any other location; I prefer trunk), initially created as a copy of the vanilla code, in which you perform all your code hacks
If (or rather "when") the vendor branch gets updates from upstream, you just merge the branch into your place, integrate the changes, and continue working; the classic command sequence is sketched below.
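For reference, the classic recipe from the svnbook's "Vendor Branches" chapter looks roughly like this (all URLs and paths are placeholders; the svn:externals variant above differs only in how the vendor area tracks upstream):

# Import a pristine upstream snapshot into a vendor area and tag it:
svn import libfoo-1.0/ https://svn.example.com/repo/vendor/libfoo/current -m "Import libfoo 1.0"
svn copy https://svn.example.com/repo/vendor/libfoo/current https://svn.example.com/repo/vendor/libfoo/1.0 -m "Tag libfoo 1.0"
# Copy the vanilla code into trunk and apply your own hacks there:
svn copy https://svn.example.com/repo/vendor/libfoo/1.0 https://svn.example.com/repo/trunk/libfoo -m "Bring libfoo into trunk"
# When upstream 1.1 has been imported and tagged, merge the vendor delta
# into your modified working copy of trunk/libfoo:
svn merge https://svn.example.com/repo/vendor/libfoo/1.0 https://svn.example.com/repo/vendor/libfoo/1.1 libfoo-wc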
