Jenkins + CMake + JIRA = CI of multiple interdependent projects? - continuous-integration

We have a number of small projects within our system running on Linux (Slackware 7-11, slowly migrating to RHEL 6.0). Around 50-100 applications and 15-20 libraries. Almost all our applications use one or more of our libraries. Our source tree looks something like this:
/app1
/app2
/app3
/include
/foo/app4
/foo/app5
/foo/app6
/foo/lib1
/foo/lib2
/lib/lib3
/lib/lib4
/lib/include
Now, I've done some work creating some CMakeLists.txt files and built most of the libs and some of the apps. I'm fairly comfortable with using CMake to build. I did this with v2.6, and I recently (an hour ago) upgraded to 2.8. Each of the above projects has its own CMakeLists.txt file, specific to that project, to handle building and installation (no packaging, yet).
I have a requirement to make use of and enforce continuous integration. I've installed and played around with Jenkins, and from what I've seen I'm very impressed. I'm also evaluating JIRA to do our issue tracking.
Just to get things up and going, I've done a cmake install on all the libs, so the apps can find them in the filesystem. Headers are installed to /usr/local/include and libs to /usr/local/lib. Is this a bad thing to do? Would it be better to point cmake at each lib's source directory, use the export interface, or use the recently introduced ExternalProject_Add?
Because I'm going to be using Jenkins, I cannot be guaranteed that cmake can find the source or build directory. Of course, I can tell Jenkins to build the projects in order (or at least, build the dependencies first). If an update to a library breaks the building of another project, then I guess it'll be up to someone with 3/4 of a wit to determine this.
Thank you in advance

Just to get things up and going, I've done a cmake install on all the libs, so the apps can find them in the filesystem. Headers are installed to /usr/local/include and libs to /usr/local/lib. Is this a bad thing to do?
No, it is not a bad thing to do, but ideally your build should reproduce everything from scratch. Portability and fixing build bugs become an issue if things need to be pre-installed on the system outside of the build process. If you are able to do it one of the other ways you mentioned, I would suggest that; but if it's going to make your build that much longer, it's something you need to weigh up. My philosophy is that everything should be movable to a new Jenkins machine with a fresh install at the drop of a hat. That isn't always achievable, but it's something to strive for.
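For example, a minimal ExternalProject_Add sketch along those lines (the target names, paths, and the assumption that lib1 produces a library actually called lib1 are placeholders you would adapt to your tree):

    # Hypothetical app1/CMakeLists.txt fragment: build lib1 from its source tree as
    # part of this app's build instead of relying on a copy under /usr/local.
    include(ExternalProject)
    ExternalProject_Add(lib1_external
        SOURCE_DIR  ${CMAKE_SOURCE_DIR}/../foo/lib1          # adjust to your layout
        INSTALL_DIR ${CMAKE_BINARY_DIR}/deps
        CMAKE_ARGS  -DCMAKE_INSTALL_PREFIX:PATH=<INSTALL_DIR>)
    ExternalProject_Get_Property(lib1_external install_dir)
    include_directories(${install_dir}/include)
    link_directories(${install_dir}/lib)

    add_executable(app1 main.cpp)
    add_dependencies(app1 lib1_external)
    target_link_libraries(app1 lib1)                         # assumes liblib1.a/.so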
Because I'm going to be using Jenkins, I cannot be guaranteed that cmake can find the source or build directory. Of course, I can tell Jenkins to build the projects in order (or at least, build the dependencies first). If an update to a library breaks the building of another project, then I guess it'll be up to someone with 3/4 of a wit to determine this.
Well, one of the things I do with interdependent jobs is have the successful build of one job trigger the jobs that depend on it. So, for example, if B depends on A and A fails, B will never be run, and whoever caused the breakage in A is responsible for it, and so on. This prevents a cascading effect of broken builds that were all caused by one broken dependency. I would also suggest that you keep the files produced by a particular build in its job folder and point the dependent jobs at the location of the required files. Again, keep your builds separate and clean.
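As a rough sketch (the job names and paths below are illustrative, and the exact locations depend on your Jenkins home), the library job can install into its own workspace and the downstream app job can point cmake at that location:

    # In the lib1 job: install into the job's workspace instead of /usr/local.
    cmake -DCMAKE_INSTALL_PREFIX=$WORKSPACE/install . && make && make install

    # In the app1 job (triggered only after lib1 succeeds): tell cmake where to look.
    cmake -DCMAKE_PREFIX_PATH=/var/lib/jenkins/jobs/lib1/workspace/install . && make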
I'm also evaluating JIRA to do our issue tracking.
I highly recommend JIRA as an issue tracking system for a company; you might want to look at this Jenkins plugin for integration. If you're using git, and you don't mind hosting your code off site, I would give GitHub Issues a shot as well.
Good luck; you seem to be on the right track.

Related

Least-impact solution to binary references in VCS

We are using TeamCity 2017.1 and have been using it for years with great joy. A long time ago, someone decided that all third-party binaries should be put into Subversion (our VCS of choice).
This has worked fine, but over time this repository has grown quite large, and combined with us getting better and better at using TeamCity, we now have dozens of build configurations which all use the third-party binaries.
Our third-party folder is called Department and is around 2.6 GB in size. As such this is not so bad, but remember that this folder is used by pretty much every single project on the build server!
Now, I will agree with everyone who says that we should use NuGet packages, network shares, etc., and that would work great for new projects. However, we have a lot of history and we cannot begin to change every single solution and branch.
A co-worker came up with the idea that we could make a single build project that in reality does nothing but keep a single folder updated with our Department stuff. Then we would just need to find a way to reference this, without having to change all our projects and solutions.
My initial thought is to use snapshot dependencies and then create a symbolic link as the first build step and remove it as the last, in order to keep the same relative paths; roughly like the sketch below.
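Assuming a Windows build agent (the link name and target path here are made up), the two extra build steps could be something like:

    rem First build step: link .\Department to the folder maintained by the Department build.
    mklink /D Department C:\BuildCache\Department

    rem Last build step: remove the link again so the working copy stays clean.
    rmdir Department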
But is there a better way? What do other people do?
And keep in mind that replacing the binaries with NuGet or something else is not an option.
Let me follow your colleague's idea and improve on it. There would be a build configuration that monitors the Subversion repository and copies packages to a network share. That network share would then be used by the development teams as a NuGet repository. Projects that convert their dependencies from binary references to NuGet references will enjoy faster builds. Once all the teams use the NuGet repository, you can kill that Subversion folder.
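For example (the share path and package name are purely illustrative), the monitoring configuration could publish each binary into a folder-based feed with the NuGet CLI:

    rem Hypothetical sync step: add a packaged binary to a folder-based feed on the share.
    nuget add SomeThirdParty.1.2.3.nupkg -Source \\fileserver\nuget-feed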

How to set up and maintain directory structure in TFS build server?

So I have this pretty huge solution with many projects; a few of them use DLLs from other projects in the solution, and some projects copy files to other directories after the build is performed (as post-build events).
When I build the solution locally on my machine, everything is great and working, but when I configure a build and run it on the build server (we use TFS), something goes wrong and I get an error when I try to load one of the applications in the solution (the error does not give me much data on what went wrong).
So, before I sit down to debug all of this: does anybody know how I can smartly manage all the build actions that are performed locally and via the build server, and see the deltas?
I would like to be able to build the solution on the build server exactly the same way I do on my machine (directory structure, post-build events, etc.).
Thanks a lot.
The generally accepted way to do what you're after is to use NuGet for managing your assembly references. You can publish your dependent assemblies into NuGet as part of a continuous delivery process, then reference (and update!) those dependencies in the solutions that consume them as necessary.
This removes ambiguity ("What version of Foo.dll is Project X using?") and reduces runtime errors ("Why is Project X using Foo.dll 3.0? It was never tested with 3.0! It needs to run with 2.7!").
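As a rough sketch of the publishing side (the project name, feed URL, and API key are placeholders), the continuous delivery process could pack and push each shared assembly:

    rem Hypothetical CD steps: package the shared library project and push it to an internal feed.
    nuget pack Foo\Foo.csproj -Properties Configuration=Release
    nuget push Foo.2.7.0.nupkg -Source https://your-internal-feed/nuget -ApiKey YOUR_API_KEY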

Project structure. Scientific Python projects

I am looking for a better way to structure my research projects. I have the following setup:
There are projects a, b, c and a library lib. Each project tackles a different research question, and the library carries code that is used across projects; thus all projects depend on lib. Things get more complicated, as project c depends on projects a and b as well. When I work on project c, I will also update a, b, or lib simultaneously. Each project is in a separate git repository.
So far I have dealt with this situation by including the dependencies above via git submodule, with all the source files located in the root dir of the project. The advantage is that I keep track of which version of lib my projects depend on. Also, one of my projects could depend on an outdated version of lib. I run everything from the root directory without "installing" any of the packages to site-packages or the like. When a path is not set correctly, I override it via sys.path.insert.
However, the following points make me want to change layout:
I keep losing track of which version of lib I am editing.
I want to make use of automated testing tools (tox, Jenkins, etc.), which seem to be much easier to handle with a standard project setup.
sys.path.insert can lead to subtle problems which are hard to debug.
I usually want all my projects to work with the tip of lib anyway.
Therefore I am currently rearranging all projects (especially lib) to be in line with the standard Python directory structure (source stored in a subdirectory, root containing a setup.py file), to be able to work in a virtualenv. Then I can list all my dependencies in requirements.txt. First I install lib in develop mode via pip install -e . Then I run pip freeze > requirements.txt, which then includes a line similar to this:
-e git+<path_to_remote>@<sha>#egg=lib
So again I have generated a dependency on a specific commit (sha), as with git submodule, ensuring that I can check out an old commit and the project should still run. I can now install everything in a virtualenv and I have got rid of my path problems. Great.
I face some new trouble though. One problem is how to update the sha in requirements.txt. The easiest (but probably not most elegant) solution I see is to write a pre-commit hook that updates the sha before committing. Is there a better way?
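Roughly what I have in mind (just a sketch, assuming requirements.txt is simply regenerated by pip freeze as above):

    #!/bin/sh
    # Hypothetical .git/hooks/pre-commit: re-freeze the environment so the pinned
    # sha of the editable lib checkout is always the commit currently checked out.
    pip freeze > requirements.txt
    git add requirements.txt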
And more generally - do you see a better solution given my setup?
As far as I can see, you have mostly solved your problem and there are only small bits left.
1) Don't use hashes to identify versions of your libraries. Even if you don't publish your libraries to the Cheese Shop, do normal library versioning (semver) and tag your git repositories accordingly. That way you will have human-readable and manageable versions in the git+https://github.com/... URLs of your dependencies.
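For example (the repository URL and tag are placeholders), a requirements.txt entry pinned to a tag instead of a sha would look like:

    -e git+https://github.com/you/lib.git@v1.2.0#egg=lib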
2) Set up tox so that it lets you test both the stable versions of your dependencies (the ones you tagged last time) and the master version straight from the latest repository revision.
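A minimal tox.ini sketch of that idea (the package name, repository URL, tag, and the choice of pytest as test runner are all assumptions):

    [tox]
    envlist = stable, master

    [testenv]
    deps = pytest
    commands = pytest

    [testenv:stable]
    deps =
        {[testenv]deps}
        git+https://github.com/you/lib.git@v1.2.0#egg=lib

    [testenv:master]
    deps =
        {[testenv]deps}
        git+https://github.com/you/lib.git@master#egg=lib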

What is the appropriate way to build WSO2 Carbon tags?

I'm trying to build multiple tags of WSO2 Carbon side-by-side for comparison purposes, but I'm concerned I may be missing something about the directory layout and how to do the builds. Please could I have some help?
At present, I've checked out what I think are the relevant tags from:
https://svn.wso2.org/repos/wso2/tags/carbon/3.0.0/
https://svn.wso2.org/repos/wso2/tags/carbon/3.1.0_core/
https://svn.wso2.org/repos/wso2/tags/carbon/3.2.0/
https://svn.wso2.org/repos/wso2/tags/carbon/3.2.2/
https://svn.wso2.org/repos/wso2/tags/carbon/3.2.3/
I've then tried running Maven builds from the top-level directories of each of the checkouts (in various ways, some involving skipping the tests and others not), with varying results (almost all of them unsuccessful in one way or another, whether due to missing artifacts, failing tests or other reasons). I also tried building 3.2.2 and 3.2.3 from the .../carbon/3.2.2/patch-releases/3.2.2 directory and the .../carbon/3.2.3/patch-releases/3.2.3 directories, as per the answer #ThiliniIshaka gave here:
WSO2 sourcecode of identity server (wso2is-3.2.3-src.zip) is always built with errors
This seemed to work (after some fiddling around) for 3.2.2, but some of the tests for 3.2.3 fail and this pulls down the build (I can make it work with the -fn flag to Maven, but that just results in what looks like an incomplete build). Furthermore, the earlier tags don't seem to have a corresponding patch-releases directory, so the same technique won't work for them even if I get it working for 3.2.3.
As an aside, I'm also deeply confused by things such as the 3.2.2 tag containing a 3.2.3 directory under patch-releases, etc.
All of this leads me to think I may be missing the point in some fundamental manner :)
The questions I thus have are:
Am I checking out the right things in the first place?
From which directories and how should I be building each of the tags please?
Do I need the same version of Maven for all of the tags?
Is there any good build documentation for the various different versions explaining some of this please? I've found various technical blogs, but seemingly nothing foolproof and comprehensive (I'm probably looking in the wrong places).
Many thanks.
Answering the above queries:
Yes, these tags are created for the relevant branch and point releases of Carbon.
As the previous thread [1] suggests, you can build the source; could you please provide us with the issues you get when building it?
Yes, you need to build the above tags with Maven 2.
Only the trunk [2] (where normal development goes on) needs Maven 3.
Some hints are provided in this blog post.
Start from the root level with mvn install (to skip running the tests, build with mvn install -Dmaven.test.skip=true). If you are building tags related to point releases, build from the patch-releases directory.
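For example, for one of the point-release tags (directory names as in the checkouts above):

    cd 3.2.3/patch-releases/3.2.3
    mvn install -Dmaven.test.skip=true    # Maven 2; drop the flag to run the tests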
Hope this helps.
Thanks

How to work on a Cocoa app and plugins in parallel?

I have a relatively simple goal: I want to create a Cocoa application which doesn't have much functionality itself, but is extendable through plugins. In addition I want to work on a few plugins to supply users with real functionality (and working examples).
As I am planning to make the application and each plugin separate open-source projects (and Git repositories), I'm now searching for the best way to organize my files and the Xcode projects. I'm not very experienced with Xcode and right now I don't see a simple way to get it working without copying files after building.
This is the simple monolithic setup I used for development up until now:
There's only one Xcode project with multiple products:
The main application
A framework for plugin development
Several plugin bundles
What I'm searching for is a comfortable way to split these into several Xcode projects: one for the application and framework, and one for each plugin. As my application is still in an early stage of development, I'm still changing lots of things in both the application and the plugins. So what I mean by "comfortable" is that I don't want to copy files manually or suffer similar inconveniences.
What I need is that the plugin projects know where they can find the current development framework, and that the application knows where it can find the development plugins. The best would be something like an inter-project dependency, but I couldn't find a way to set up something like that in Xcode.
One possible solution I have in mind is to copy both (the plugins and the framework) in a "Copy Files Build Phase" to a known location, e.g. /tmp/development, so production and development files aren't mixed up.
I think that my solution would be enough, but I'm curious if there's a better way to achieve what I want. So any suggestions are welcome.
First, don't use a static "known location" like you mention. I've worked on this kind of project; it's a royal pain. As soon as you get to the point of needing a couple of different copies of the project around (for fixing bugs in parallel, for testing a "clean" build versus your latest changes, for working on multiple branches), the builds start trashing each other and you find yourself having to do complete clean builds much more often than you'd want.
You can create inter-project dependencies by adding the dependent project to your project (Add File), right-clicking the target and choosing "Get Info", and then adding a Direct Dependency on the General pane.
In terms of structure, you can either put the main app and framework together, or put them in separate projects. In either case, I recommend a directory tree like:
/MyProject
    /Framework
    /Application
    /Plugins
        /Plugin1
        /Plugin2
Projects should then refer to each other by relative paths. This means you can easily work on multiple copies of the project in parallel.
You can also look at a top-level build script that changes into each directory and runs "xcodebuild". I dislike complex build scripts (we have one; it's called Xcode), but if all it does is call "xcodebuild" with parameters if needed, then a simple build script is useful.
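Something like this is about as complex as I would let such a script get (the directory names match the tree above; the configuration and any extra xcodebuild parameters are up to you):

    #!/bin/sh
    # Hypothetical top-level build script: build the framework, the app, then each plugin.
    set -e
    for dir in Framework Application Plugins/Plugin1 Plugins/Plugin2; do
        (cd "$dir" && xcodebuild -configuration Release)
    done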
