Gradle build-cache in composite builds

I'm dealing with weird behavior using the Gradle build cache with composite builds.
I have a repo with 3 independent projects:
app
mocks
integration-tests
integration-tests is a composite build that includes the other two builds, mocks and app, and starts both services as Java apps (depending on the jar task of both projects).
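For reference, the composite wiring presumably looks something like this (a minimal sketch; the relative paths and Groovy DSL are assumptions):

    // integration-tests/settings.gradle -- sketch, paths assumed
    rootProject.name = 'integration-tests'
    includeBuild '../app'     // pulls the app build into the composite
    includeBuild '../mocks'   // pulls the mocks build into the composite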
Running locally
When running locally, everything seems fine (also with the build cache enabled), but on the integration machine something weird is happening.
Integration testing
We use GitLab, and the pipeline is quite simple:
---|-- junit-mocks --|--- integration-tests
   |-- junit-app ----|
junit-mocks and junit-app run in parallel (with the build cache), and the next stage is integration testing, where the Java compilation of app and mocks is loaded from the build cache.
The error itself is only a consequence: the Main class was not found (weird #1). When I looked inside the jar file, it was indeed half empty, missing a lot of classes, although the static resources were there. When I looked into Gradle's build/classes dir, it was completely empty. It's as if the cache was hit, but the classes were never copied?
Weird #2: mocks built successfully (and started), but the app did not.
When I turned off the build cache, the build passed.
Any ideas what could cause the issue?

What type of GitLab executor/runner are you using?
Your cache is intended to be shared between GitLab stages; however, if you are using a Docker executor, each stage uses a fresh Docker image/environment, so unless you are using artifacts, you will not be able to share a cache between them.
If you were using a shell executor, which targets a specific machine on every build, you would be able to leverage a cache as long as it was not within the build directory, because that changes on every GitLab stage in the pipeline.
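A minimal .gitlab-ci.yml sketch of sharing a Gradle build cache across stages on a Docker executor; the job names match the pipeline above, while the cache directory and task names are assumptions:

    stages:
      - test
      - integration

    # GitLab can only cache paths inside the project directory,
    # so the Gradle build cache is redirected there (see note below)
    cache:
      key: "$CI_COMMIT_REF_SLUG"
      paths:
        - .gradle-build-cache/

    junit-app:
      stage: test
      script:
        - ./gradlew -p app --build-cache test

    junit-mocks:
      stage: test
      script:
        - ./gradlew -p mocks --build-cache test

    integration-tests:
      stage: integration
      script:
        - ./gradlew -p integration-tests --build-cache integrationTest

For this to work, Gradle's local build cache would have to point into the project directory, e.g. buildCache { local { directory = new File(rootDir, '.gradle-build-cache') } } in settings.gradle.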

Related

Building multiple "things" on wercker

We are trying to migrate a project that is currently built on Jenkins to Wercker:
We have a Play Framework (2.4.x) application that makes heavy use of a multi-module architecture. To be more specific, it's quite close to that of the Guardian.
Within one Git repository, we have the following structure:
What we did earlier was to detect where changes occurred since the last build. The result of that query might be:
/app1/
/app3/
/app5/
We then build only those projects, usually by invoking parallel and parameterized builds for each of those modules.
Is there a pattern that would allow us to keep this mechanism running on wercker?
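For context, the change-detection step itself is plain git; a minimal sketch, assuming the SHA of the last successful build is available as LAST_BUILT_SHA (getting the CI system to provide that value is essentially what the question asks):

    # list the top-level modules touched since the last built commit
    # LAST_BUILT_SHA is an assumption -- the CI system has to provide it
    git diff --name-only "$LAST_BUILT_SHA"..HEAD | cut -d/ -f1 | sort -u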

Using TeamCity how can I deploy to an environment then run tests against that environment?

I am struggling to get my head around this!
I wish to have TeamCity deploy our windows service to a particular environment, then a separate project run acceptance tests against that environment.
Currently I have a project that builds then runs unit tests, and finally packages up the deployable elements.
A second project takes the package (artefact dependency) and deploys to the environment.
Now I wish to run acceptance tests against that deployment. The tests are not in the deployable package, so I must return to the "build" project... I thought I could use a snapshot dependency to reuse the already compiled files (I don't want to check out or recompile anything).
However, I just get an empty folder on the agent when I hit 'run' on this project.
I must have misunderstood how this works!
Are there any blog posts to help elucidate this?
The tests are SpecFlow/NUnit tests.
Please ask for more info if I have not been clear!
You might want to set up the tests as an artifact of the build project, then deploy the tests to the deployment environment.
Then run a separate TeamCity agent on the deployment environment to actually execute the tests there.
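A sketch of that setup in TeamCity terms; every path and name below is an assumption:

    Build project -> General Settings -> Artifact paths:
        AcceptanceTests\bin\Release\** => acceptance-tests.zip

    Test project -> Dependencies -> Artifact dependency on the build project:
        acceptance-tests.zip!** => acceptance-tests

    Test project -> command-line build step (runs on the agent in the target environment):
        nunit-console.exe acceptance-tests\AcceptanceTests.dll /xml=nunit-result.xml

The artifact dependency unpacks the zip into the test project's checkout directory, so nothing needs to be checked out or recompiled there.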

Maven, switching to a different profile

I have a problem with the proper Maven profile configuration of a project that is deployed to a continuous integration server.
In my project, some resources need to be included only during tests in the daily build phase, and others need to be included only during nightly builds; they can never both be included at the same time, because the build would fail. Locally, I can achieve this by activating one profile at a time.
The continuous integration server runs the following Maven commands:
- during daily builds:
mvn clean package -Pci -Dci
- during nightly builds:
mvn clean install -Dmaven.test.failure.ignore -Pci,nightly -Dci -Dnightly
As you can see, the nightly build command includes the Maven properties and profiles defined in the daily build command, which causes trouble for me, because I want only one profile activated at a time.
Specifically, what I want is to have 3 separate profiles:
- my-profile (activated by default, not used on the CI server)
- ci-profile (activated only on daily builds, used on the CI server)
- nightly-profile (activated only on nightly builds, used on the CI server)
How can I achieve that? I have tried almost everything. Reconfiguring the CI server is not an option.
When I have to configure the same build with different profiles, using Jenkins as a CI, I usually create as many builds as there are profiles, so each build uses the correct configuration.
If adding a new build is not an option, you could probably create a workaround using something like the exec plugin (http://mojo.codehaus.org/exec-maven-plugin/) to download the resources from an FTP server (or something else). You would also have to create a cron job (or equivalent) to swap in the correct resources between the builds: in the evening you put there the resources for the night, in the morning the ones for the day.
But considering how cumbersome this process would be, it is probably better to try to add a new build.
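One more angle worth sketching: -Pci,nightly activates both profiles no matter what the POM says, but the profiles can still converge on a single effective configuration if the nightly profile overrides the property the ci profile sets (when two active profiles define the same property, the one declared later in the POM wins, in my experience; worth verifying on your Maven version). A minimal pom.xml sketch; the directory paths are assumptions:

    <profiles>
      <profile>
        <id>ci</id>
        <properties>
          <extra.resources.dir>src/test/resources-daily</extra.resources.dir>
        </properties>
      </profile>
      <profile>
        <!-- declared after ci, so its property value wins on nightly builds -->
        <id>nightly</id>
        <properties>
          <extra.resources.dir>src/test/resources-nightly</extra.resources.dir>
        </properties>
      </profile>
    </profiles>
    <build>
      <testResources>
        <!-- listing testResources replaces the default, so keep it -->
        <testResource>
          <directory>src/test/resources</directory>
        </testResource>
        <testResource>
          <directory>${extra.resources.dir}</directory>
        </testResource>
      </testResources>
    </build>

This keeps the CI commands untouched: the daily build sees only the ci value, while the nightly build activates both profiles but ends up with the nightly value.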

Run unit tests in Jenkins / Hudson in automated fashion from dev to build server

We are currently running a Jenkins (Hudson) CI server to build and package our .NET web projects and database projects. Everything is working great, but I want to start writing unit tests and then only pass the build if the unit tests pass. We are using the built-in MSBuild task to build the web project, with the following arguments ...
MsBuild Version .NET 4.0
MsBuild Build File ./WebProjectFolder/WebProject.csproj
Command Line Arguments /target:Rebuild /p:Configuration=Release;DeployOnBuild=True;PackageLocation=".\obj\Release\WebProject.zip";PackageAsSingleFile=True
We need automated tests over our code that run when we build on our own machines (possibly as a post-build event), but that also run when Jenkins does a build for that project.
If you run it like this, it doesn't build the unit test project, because the web project doesn't reference the test project. The test project would reference the web project, but I'm pretty sure that would be butchering our automated builds, as they exist primarily to build and package our deployments. Running these tests should be a step in that automated build-and-package process.
Options ...
Create two Jenkins jobs: one to run the tests; if the tests pass, another build is triggered which builds and packages the web project. Put the post-build event on the test project.
Build the solution instead of the project (making sure the solution contains the required tests) and put post-build events on any test projects that run the NUnit console to execute the tests. Then use the command line to copy all the required files from each of the bin and content directories into a package.
Just build the test project in Jenkins instead of the web project. The test project would reference the web project (depending on what you're testing) and build it.
Problems ...
There are two jobs, not one: two things to debug instead of one, one to see if the tests passed and one to build and compile the web project. The tests could pass while the build still fails, if the failure is in something not exercised by what you're testing ...
This requires us to know exactly what goes into the build. Right now MSBuild does it all for us. If you have multiple teams working on a project, every time an extra folder is created you have to worry about the possibly brittle command-line statements.
This seems like a corruption of our main purpose here. The tests should be a step in this process, not the overriding, most important thing in it. I'm also not 100% sure that a triggered build is the same as a normal build: does it do all the same things as a normal build, move all the correct files in the same way, into the same directories, etc.?
Initial problem:
We want to run our tests whenever our main project is built. But adding a post-build event to the web project that runs against the test project doesn't work, because the web project doesn't reference the test project and won't trigger a build of it. I could go on ... but that's enough ...
We've spent about a week trying to make this work nicely but haven't succeeded. Feel free to edit this if you feel you can get a better response ...
In Jenkins/Hudson, it's quite OK to have many jobs: some for compilation, triggered by version control changes; some for running (unit) tests, triggered by successful builds; some for doing more tests (integration), triggered by successful earlier tests; some for deploying, triggered by successfully passing all tests.
Look at plugins like Join, Build Pipeline, Parameterized Trigger and more to help out with this.
This will also allow things to happen in parallel, by using multiple nodes. Trying to cram everything into one job is not the way to go.
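For the record, option 2 from the question could be sketched as two Jenkins build steps; the solution name, test assembly path and tool locations are assumptions:

    rem build step 1 (MSBuild): build the whole solution so the test project compiles too
    msbuild WebProject.sln /target:Rebuild /p:Configuration=Release

    rem build step 2 (Windows batch): run the NUnit console against the test assembly;
    rem a non-zero exit code makes Jenkins mark the build as failed
    nunit-console.exe WebProject.Tests\bin\Release\WebProject.Tests.dll /xml=nunit-result.xml

The NUnit XML output can then be picked up by the Jenkins NUnit plugin to display test results per build.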

How to split a big Jenkins job/project into smaller jobs without compromising functionality?

We're trying to improve our Jenkins setup. So far we have two directories: /plugins and /tests.
Our project is a multi-module project of Eclipse Plugins. The test plugins in the /tests folder are fragment projects dependent on their corresponding productive code plugins in /plugins.
Until now, we had just one Jenkins job which checked out both /plugins and /tests, built all of them, and produced the Surefire results etc.
We're now thinking about splitting the project into smaller jobs corresponding to features we provide. It seems that the way we tried to do it is suboptimal.
We tried the following:
We created a job for the core feature. This job checks out the whole /plugins and /tests directories but builds only the plugins the feature is comprised of. This job has a separate pom.xml which defines the core artifact and lists the modules contained in the feature.
We created a separate job for the tests that should be run on the feature plugins. This job uses the cloned workspace from the core job and is to be run after the core feature is built.
I somehow think this is less than optimal.
For instance, only the core job can update the checked out files. If only the tests are updated, the core feature does not need to be built again, but it will be.
As soon as I have a feature which is dependent on the core feature, this feature would either need to use a clone of the core feature workspace or check out its own copy of /plugins and /tests, which would lead to bloat.
Using a cloned workspace, I can't update my sources. So when I have a feature depending on another feature, the job can only run once the core feature has been updated and built.
I think I'm missing some basic stuff here. Can someone help? There definitely is an easier way for this.
EDIT: I'll try to formulate what I think would ideally happen if everything works:
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check whether the corresponding job needs to run)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
Finally, the project job should
do a nightly build
check out all sources from /plugins and /tests
build all, test all, send results to Sonar
Additionally, it would be neat if the nightly build were unnecessary because the builds and test results of the project's features were already combined in the project job results.
Is something like this possible?
Starting from the end of the question: I would keep a separate nightly job that does a clean check-out (gets rid of any generated stuff before checking out), builds everything from scratch, and runs all tests. If you aren't doing a clean build, you can't guarantee that what is checked into your repository really builds.
check if the feature components have changed (i.e. an update on them is possible)
if changed, build the feature
Build the dependent features, if necessary (i.e. check whether the corresponding job needs to run)
Build the feature itself
if build successful, start feature test job
let me see the results of the test job in the feature job
[I am assuming that by "dependent features" in 1 you mean the things needed by the "feature" in 2.]
To do this, I would say that you need multiple jobs:
a job for every individual feature and every dependent feature that simply builds that feature. The jobs should be triggered by SCM changes for the (dependent) feature.
I wouldn't keep test jobs separate from compile jobs: it allows the possibility that successfully compiled code is never tested. Instead, I would rely on the fact that when a build step fails in Jenkins, it normally aborts further build steps.
The trick is going to be in how you thread all of these together.
Let's say we have a feature and its build job, called F1, which depends on 2 features, DF1.1 and DF1.2, each with their own build jobs.
Both DF1.1 and DF1.2 should be configured to trigger the build of F1.
F1 should be configured to get the artifacts it needs from the latest successful DF1.1 and DF1.2 builds. Unfortunately, the very nice Clone SCM plugin is not going to be of much help here, as it only pulls from one previous job. Perhaps one of the artifact publisher plugins might be useful, or you may need to add some custom build steps to put/get artifacts.
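A sketch of that put/get wiring with the Copy Artifact plugin, written in modern Pipeline syntax (the job names come from the example above; the build command and archive pattern are assumptions):

    // Jenkinsfile for F1 -- assumes the Copy Artifact plugin is installed
    pipeline {
        agent any
        stages {
            stage('Fetch dependency artifacts') {
                steps {
                    // pull the latest successful artifacts of both dependent features
                    copyArtifacts projectName: 'DF1.1', selector: lastSuccessful(), target: 'deps/df1.1'
                    copyArtifacts projectName: 'DF1.2', selector: lastSuccessful(), target: 'deps/df1.2'
                }
            }
            stage('Build F1') {
                steps {
                    sh 'mvn -B clean verify'   // build command is an assumption
                }
            }
        }
    }

On their side, DF1.1 and DF1.2 would each archive their output, e.g. archiveArtifacts artifacts: 'target/*.jar', so that copyArtifacts has something to fetch.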
