Context: I have a monorepo that contains several projects. They are built with Gradle. Currently they are packaged as OCI containers using Docker in an additional pipeline step.
Goal: I want to use Jib inside Gradle's incremental build to construct containers only for the services that were really updated during the build.
Problem: Adding the Jib plugin and running gradle jib creates new containers for all the projects where the plugin has been added.
What configuration should I apply so that only the needed containers are built? In Maven this should be achievable by binding jib to the package phase.
Adding tasks.build.dependsOn tasks.jib at the root project level did not work for me.
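For reference, this is roughly the per-project wiring I have in mind (only a sketch; the plugin id and task names are those registered by the Jib Gradle plugin, and whether this keeps image builds incremental is exactly my question):
// root build.gradle
subprojects {
    // only touch projects that actually apply the Jib plugin
    plugins.withId('com.google.cloud.tools.jib') {
        // build an image right after the service itself has been built
        tasks.named('build').configure {
            finalizedBy tasks.named('jib')
        }
    }
}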
Related
After my first attempts at setting up automated Docker builds for my personal Spring Boot GitHub repos (I'm a newbie), the builds were constantly failing because the jar file could not be found. While my first impression was that I had the paths wrong, it dawned on me that Docker wasn't building the jars before attempting the image builds.
# Small JDK 8 base image
FROM openjdk:8-jdk-alpine
# Volume for the temporary files Spring Boot's embedded Tomcat writes to /tmp
VOLUME /tmp
# Location of the jar produced by the Maven build
ARG JAR_FILE=target/myjar*.jar
# Copy the jar into the image root as /myjar.jar
ADD ${JAR_FILE} myjar.jar
EXPOSE 8080
ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/myjar.jar"]
While I have found a couple of resources that describe how to integrate Maven into the Dockerfile, it seemed like my image could easily get too big with that approach. Has anybody tackled this issue before and could recommend a way to integrate Maven into the Dockerfile build? An alternative I've considered is learning Jenkins and building a pipeline solution that way.
There are many approaches to achieve this.
Approach 1
Rely on Maven with its docker-maven-plugin.
This indeed allows building the image during the Maven build.
This approach will work in general and allows a level of customization sufficient for many use cases. Usually, Spring Boot applications come as a single jar with all the dependencies packed inside it, so there is no need for multiple layers in the Docker image (at least in my experience).
The point here is to call the plugin once the Spring Boot jar has already been built.
The jar is prepared with the help of another plugin, which is usually invoked in the package phase, so if you go with this approach, make sure that you invoke the image-creation plugin after the spring-boot-maven-plugin that builds the single jar artifact.
The artifact must reside in the <your_module>/target folder and it will be pretty big. The original jar will reside next to it but with the suffix .original.
Approach 2
Let Maven build the artifact but not the image; the Maven build ends after this step.
Then invoke a script in Jenkins that runs the docker build command and builds the image.
The result will be the same; here you work with Docker directly and can use all of its features if you really need them.
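For illustration, a minimal Jenkins pipeline along these lines (only a sketch; the image name, registry and Maven options are placeholders):
// Jenkinsfile
pipeline {
    agent any
    stages {
        stage('Build jar') {
            steps {
                // Maven's job ends here: it only produces the artifact
                sh 'mvn -B clean package'
            }
        }
        stage('Build image') {
            steps {
                // plain docker commands, so every docker feature is available
                sh "docker build -t registry.example.com/myapp:${env.BUILD_NUMBER} ."
                sh "docker push registry.example.com/myapp:${env.BUILD_NUMBER}"
            }
        }
    }
}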
Both approaches can work for you; choosing between them depends on the following factors:
You're more of a Java/Maven person than a DevOps person - then go with approach 1, otherwise go with approach 2.
You would like to actually run the image you've built locally, for example because you have a Kubernetes cluster installed locally - in this case, go with approach 1.
You need to use the latest features of Docker that are not yet available from the plugin - then go with approach 2.
I am running Gradle builds in a Docker container and I wanted to create a Docker image that already contains the most common dependencies, so the build itself does not need to download them.
Is there an easy way to tell Gradle to download a particular library (or plugin) with all its dependencies, without a specific build file? I want to use the image to run different builds that do not share any configuration.
I am looking for something similar to Maven's dependency:get.
I did not find any solution other than creating a small Gradle build script with some dependencies defined in it and running a build.
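For reference, that throwaway script looked roughly like this (the coordinates are just examples of libraries I wanted pre-fetched; this assumes a reasonably recent Gradle):
// build.gradle - used only to warm the Gradle cache inside the image
plugins {
    id 'java'
}
repositories {
    mavenCentral()
}
dependencies {
    // list whatever your builds typically need
    implementation 'org.springframework.boot:spring-boot-starter-web:2.7.5'
    implementation 'com.google.guava:guava:31.1-jre'
}
// resolving every resolvable configuration forces Gradle to download everything
task resolveDependencies {
    doLast {
        configurations.findAll { it.canBeResolved }.each { it.resolve() }
    }
}
Running gradle resolveDependencies once while building the image then fills its Gradle cache.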
However, it was easier to just mount the Gradle cache as a volume in the container I was using. That has the added benefit of being reusable across many Docker containers. This is the approach I ended up using, if anyone is interested.
When you define a multi-module project in Maven, you have one root project and its modules. When you build the root project, Maven transitively builds all its modules in the correct order. So far pretty similar to Gradle.
But with Maven, you can clone only one submodule from the repository and build it locally without needing to download the whole project structure. This is because you declare dependencies on other modules within the same project just like any other external dependency, and they are downloaded from your repository manager (Nexus) and cached locally.
With Gradle, you define cross-module dependencies as compile project(':other'). So you need to clone the whole project structure from the repository in order to resolve and build correctly. Is there any way to use Gradle's multi-module project support without having to clone the whole project structure locally?
I would argue that Maven's multi-module support is a slapped-on afterthought. Unlike in Gradle, a project dependency is not a first-class concept; instead, the Maven "reactor" substitutes local artifacts for dependencies when the GAV (group/artifact/version) matches.
If you'd like to use the same approach in Gradle, you can specify your dependencies using GAV notation and then use the new composite build feature to join two or more separate Gradle builds together, substituting local source dependencies for the repository dependencies. Note that you can define the projects included in the composite using Groovy, so you could easily script this based on custom logic (e.g. whether a subfolder exists in some root folder, etc.).
Note that composite build support is a new feature added in Gradle 3.1. Prior to Gradle 3.1 you could use Prezi Pride to achieve the same thing.
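As an illustration, the settings.gradle of the module you cloned on its own could look roughly like this (a sketch; paths and coordinates are made up):
// settings.gradle
rootProject.name = 'my-service'

// In build.gradle the dependency is declared by GAV, e.g.
//   dependencies { compile 'com.example:other-lib:1.2.3' }
// When a sibling checkout exists, include it as a composite build; Gradle then
// substitutes the repository artifact with the local project automatically,
// because the included build declares the same group and name.
def sibling = new File(rootDir, '../other-lib')
if (sibling.exists()) {
    includeBuild sibling
}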
Actually, I see two alternatives for deploying my project to Nexus:
Configure distributionManagement and the deploy plugin in pom.xml. Then in Jenkins I only need to call mvn deploy and my project will be deployed to the environment.
Create a Jenkins Post-build Action -> Deploy artifacts to Maven repository, where I can set the repository URL, repository ID and so on.
Question
What are the pros and cons of each approach compared with the other?
If you are configuring the deployment in the Jenkins build configuration, you are doing two things:
you are separating the deployment from the project itself, and can therefore potentially have different deployments for the same project
you remove the deployment setup from your version control setup / your source code
If you leave it in the pom using the default Maven setup, you can run the deployment of the project without modification from the command line on any machine that has the credentials set up correctly. This can greatly help with troubleshooting and it makes the setup independent of whatever CI server you use.
Both approaches, as well as more custom setups like the Artifactory Build Integration or the Nexus Staging Maven Plugin, are fine. It will mostly depend on what you are aiming to achieve.
Personally I believe that the configuration should not be isolated to Jenkins and should remain with the project in the pom. But that is just my 2c.
Thanks for adding the Artifactory tag, now I can give you one more option - Artifactory Build Integration. With the Artifactory Jenkins plugin you can configure your deployment options (target repository, whether or not you want to deploy build information, environment variables, custom properties, etc.) without polluting your developers' pom with CI-only information.
We have a build that compiles and creates an artifact. Then we have another build that takes the last Compile build and deploys it to the proper environment. Once that is complete, I have to go and tag the build in TC to show that it was pushed to the environment. Is there a way that I can tag the Compile build as deployed, using the Deploy build?
I'm not aware of an easy way to do this (i.e. through a TeamCity configuration setting) but you probably could accomplish this using the REST API from your build script.
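Something along these lines, for example (a rough sketch only; the server URL, credentials and build id are placeholders, and the exact REST path may differ between TeamCity versions):
// groovy script run from the Deploy build - tags the Compile build it consumed
def buildId = System.getenv('COMPILE_BUILD_ID')    // pass the dependency's build id in however suits you
def url = new URL("https://teamcity.example.com/httpAuth/app/rest/builds/id:${buildId}/tags/")
def conn = url.openConnection()
conn.requestMethod = 'POST'                         // POSTing a plain-text tag name adds a tag
conn.doOutput = true
conn.setRequestProperty('Authorization', 'Basic ' + 'user:password'.bytes.encodeBase64().toString())
conn.setRequestProperty('Content-Type', 'text/plain')
conn.outputStream.withWriter { it << 'deployed-to-prod' }
assert conn.responseCode in [200, 201]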
If you are using TeamCity 6 or above, then because you have a build dependency chain from the Deployment Build to the Main Build (through artifact dependencies, snapshot dependencies or both), you can just tag your Deployment Build. The UI will show you a tree view of the dependencies that the deployment used, and you can navigate to the actual build.
One thing you can do, and in my opinion should do, is to tag your source control from TeamCity if you are using a source control that supports tagging/labelling. You should probably set your Deployment Build up with a snapshot dependency as well as the artifact dependency, especially if your build files are in the same repository. On your Main Build you should get TeamCity to label your repository on a successful build with something like "build-1.2.3.4". Then on your Deployment Build you should get it to label the repository after a successful build with "deployed-1.2.3.4". If you deploy to different environments then you can get it to label the repository accordingly.