Docker development version/snapshot - Maven

I'm trying to achieve the same mechanism that Maven provides for SNAPSHOT versions.
Basically, whenever I'm working on a Dockerfile, I want to be able to push temporary versions to my private registry.
I could create a tag and force-push to that tag, but I'd rather keep a single version in the registry for a given tag.
If Docker image A is built FROM image B (on 1.2.SNAPSHOT, for instance), then the latest 1.2.x tag should get pulled.
Is there a special keyword in the image version that does what I'm trying to achieve?
Hope this is clear enough :)

I haven't used them, but it sounds like Maven snapshots are used to indicate a development version; as a result, new versions are always checked for and fetched.
To get similar behavior in Docker, I believe you'd need to check for a newer base image on each build. That's an option in docker build: --pull
docker build --pull -t myimage .
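As a minimal sketch of the pattern (image and tag names here are hypothetical): treat a tag like 1.2-dev as your "snapshot" tag that you re-push for every development build, reference it in FROM, and pass --pull so every rebuild re-checks the registry for a newer version of that tag.
# Dockerfile of image A
FROM myregistry/b:1.2-dev
# ... rest of the build ...

# rebuilding A always re-checks the registry for a newer b:1.2-dev
docker build --pull -t myregistry/a:1.2-dev .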

Related

How to install an extra software package in a buildpack? [duplicate]

I'm currently developing a Spring Native application; it's built using the Paketo buildpack and generates a Docker image.
I was wondering if it's possible to customize the generated Docker image by adding third-party tools (like a Datadog agent, for example).
Also, for now the generated container image is only installed locally; is it possible to send it directly to another Docker repo?
I'm currently developing a Spring Native application; it's built using the Paketo buildpack and generates a Docker image. I was wondering if it's possible to customize the generated Docker image by adding third-party tools (like a Datadog agent, for example).
This applies to Spring Boot apps, but really also to any other app you can build with buildpacks.
There are a couple of options:
1. You can customize the base image that you use (called a stack).
2. You can add additional buildpacks, which will perform more customizations during the build.
#2 is obviously easier if there is a buildpack that provides the functionality you require. Regarding Datadog specifically, the Paketo project now has a Datadog buildpack you can use with Java and Node.js apps.
It's more work, but you can also create a buildpack if you are looking to add specific functionality. I wouldn't recommend this if you have one application that needs the functionality, but if you have lots of applications it can be worth the effort.
A colleague of mine put this basic sample buildpack together, which installs and configures a fictitious APM agent. It is a pretty concise example of this scenario.
#1 is also possible. You can create your own base image and stack. The process isn't that hard, especially if you base it on a well-known and trusted image that is getting regular updates. The Paketo team also has the jam create-stack command which you can use to streamline the process.
What's more difficult with both options is that you need to keep them up-to-date. That requires some CI to watch for software updates & publish new versions of your buildpack or stack. If you cannot commit to this, then both are a bad idea because your customization will get out of date and potentially cause security problems down the road.
UPDATE
You can bundle dependencies with your application. This option works well if you have static binaries you need to include, perhaps a cli you call to from your application.
In this case, you'd just create a folder in your project called binaries/ (or whatever you want) and place the static binaries in there (make sure to download versions compatible with the container image you're using; Paketo is Ubuntu Bionic at the time I write this). Then when you call the CLI commands from your application, simply use the full path to them. That would be /workspace/binaries or /workspace/<path to binaries in your project>.
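For instance, here's a minimal sketch, assuming a hypothetical static binary called mytool checked into a binaries/ folder of the project:
# project layout (hypothetical)
#   binaries/mytool
#   src/main/java/...
# buildpacks place the application under /workspace in the running container,
# so the binary is invoked with its full path:
/workspace/binaries/mytool --help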
You can use the apt buildpack to install packages with apt. This is a generic buildpack that you provide a list of apt packages to and it will install them.
This can work in some cases, but the main drawback is that buildpacks don't run as root, so this buildpack cannot install these packages into their standard locations. It attempts to work around this by setting env variables like PATH, LD_LIBRARY_PATH, etc to help other applications find the packages that have been installed.
This works ok most of the time, but you may encounter situations where an application is not able to locate something that you install with the apt buildpack. Worth noting if you see problems when trying this approach.
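As a rough sketch, the apt buildpack is typically driven by a plain list of packages at the project root (often a file called Aptfile, but the exact file name depends on which apt buildpack you use), one package per line:
# Aptfile (file name assumed; check your apt buildpack's docs)
ffmpeg
libpq-dev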
END OF UPDATE
For what it's worth, this is a common scenario that is a bit painful to work through. Fortunately, there is an RFC that should make the process easier in the future.
Also, for now the generated container image is installed locally, is it possible to send it directly in another Docker repo ?
You can docker push it or you can add the --publish flag to pack build and it will send the image to whatever registry you tell it to use.
https://stackoverflow.com/a/28349540/1585136
The --publish flag works the same way; you need to name your image [REGISTRYHOST/][USERNAME/]NAME[:TAG].
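For example, a minimal sketch (registry, image, and builder names are placeholders):
# build with buildpacks and push straight to the registry named in the image
pack build registry.example.com/myteam/myapp:1.0.0 \
  --builder paketobuildpacks/builder-jammy-base \
  --publish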
For me, what worked was adding this to my build.gradle file (I'm using Kotlin):
bootBuildImage {
    val ecrRepository: String? by project
    buildpacks = listOf("urn:cnb:builder:paketo-buildpacks/java", "urn:cnb:builder:paketo-buildpacks/datadog")
    imageName = "$ecrRepository:${project.version}"
    environment = mapOf("BP_JVM_VERSION" to "17.*", "BP_DATADOG_ENABLED" to "true")
    isPublish = true
    docker {
        val ecrPassword: String? by project
        publishRegistry {
            url = ecrRepository
            username = "AWS"
            password = ecrPassword
        }
    }
}
Notice the buildpacks part, where I added the default Java buildpack first and then the Datadog one. I also set BP_DATADOG_ENABLED to true in the environment, so that it adds the agent.
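As a usage sketch: since ecrRepository and ecrPassword are resolved as Gradle project properties, the build can be invoked roughly like this (the repository URI is a placeholder, and the ECR login pattern is an assumption that matches the "AWS" username above):
./gradlew bootBuildImage \
  -PecrRepository=123456789012.dkr.ecr.eu-west-1.amazonaws.com/myapp \
  -PecrPassword=$(aws ecr get-login-password --region eu-west-1)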

Why Use Spring Boot with Docker?

I'm quite new to Docker, and I'm wondering: with Spring Boot we can easily build, ship, and deploy the application with the Maven or Gradle plugin, and we can easily add load balancing. So, what is the main reason to use Docker in this case? Is containerization really needed everywhere? Thanks for the reply!
Containers help you get software to run reliably when it is moved from one computing environment to another. A Docker image contains an entire runtime environment: your application and all its dependencies, libraries, other binaries, and the configuration files needed for it to run.
It also simplifies your deployment process, reducing a whole lot of mess to a single artifact.
Once you are done with your code, you can simply build the image and push it to Docker Hub. All you need to do on other systems is pull the image and run a container; it will take care of all the dependencies and everything else.
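A minimal sketch of that workflow (image name and port are placeholders):
# on the build machine
docker build -t myuser/myapp:1.0.0 .
docker push myuser/myapp:1.0.0

# on any other machine
docker pull myuser/myapp:1.0.0
docker run -d -p 8080:8080 myuser/myapp:1.0.0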

Docker: restrict (fix) the image version tag pulled by FROM to one concrete major release version

How do I write a FROM statement in the Dockerfile that pulls the latest image for a specific major version?
Let's say there is an application that currently has three major versions: v1.7, v2.5, v3.0.5, and I always want the latest v2 image. If I put this statement in the Dockerfile:
FROM imagename:latest
Then I'll get version 3.0.5, because it is the latest version. If I put:
FROM imagename:2.5
Then I'll get exactly version 2.5, but I won't be able to update the image with this statement once version 2.6 becomes available.
How do I write a FROM statement that always gets the latest version 2 updates, which won't break backwards compatibility, and sticks to that major version 2?
Docker image tags (including both the Dockerfile FROM line and docker run images) are always exact matches. As a corollary, once Docker believes it has a particular image locally, it won't try to fetch it again unless explicitly instructed to.
Many common Docker images have a convention of publishing the same image under multiple tags, that sort of reflect what you're suggesting. For instance, as of this writing, for the standard python image, python:latest, python:3, python:3.7, python:3.7.0, and python:3.7.0-stretch all point at the same image. If you said FROM python:3 you'd get this image. But, if you already had that image, and a Python 3.8 were released, and you rebuilt without explicitly doing a docker pull first, you'd use the same image you already had (the Python 3.7 one). If you did docker pull python:3 then you'd get the updated image but you'd have to know to do that (or tell your CI tool to do it for you).
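So, assuming the publisher of imagename also maintains a floating major-version tag (the way the python image provides python:3), a sketch of the pattern would be:
# Dockerfile: pin to the major-version tag, assuming the publisher provides one
FROM imagename:2

# when rebuilding, force a re-check of the registry for that tag
docker build --pull -t myimage .
# or, equivalently, pull explicitly first
docker pull imagename:2
docker build -t myimage .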

How best to use Docker in continuous delivery?

What is the best way to use Docker in a Continuous Delivery pipeline?
Should the build artefact be a Docker Image rather than a Jar/War? If so, how would that work - I'm struggling to work out how to seamlessly use Docker in development (on laptop) and then have the CI server use the same base image to build the artefact with.
Well, there are of course multiple best practices and many approaches on how to do this. One approach that I have found successful is the following:
Separate the deployable code (jars/wars etc.) from the Docker containers in separate VCS repos (we used two different Git repos in my latest project). This means that the Docker images you deploy your code on are built in a separate step, a Docker build so to speak. Here you can e.g. build the Docker images for your database, application server, Redis cache or similar. When a Dockerfile or similar has changed in your VCS, then Jenkins or whatever you use can trigger the build of the Docker images. These images should be tagged and pushed to some registry (be it Docker Hub or some local registry).
The deployable code (jars/wars etc.) should be built as usual using Jenkins and commit hooks. In one of my projects we actually ran Jenkins in a Docker container, as described here.
All Docker containers that use dynamic data (such as the storage for a database, the war files for a Tomcat/Jetty, or configuration files that are part of the code base) should mount these files as data volumes or as data volume containers (see the sketch below).
The test servers, or whatever steps are part of your pipeline, should be set up according to a spec that is known to your build server. We used a descriptor that connected the newly built tag from the code base to the tag on the Docker containers. The Jenkins pipeline plugin could then run a script that first moved the build artifacts to the host server, pulled the correct Docker images from the local registry, and then started all processes using data volume containers (we used Fig for managing the Docker lifecycle).
With this approach, we were also able to run our local processes (databases etc.) as Docker containers. These containers were of course based on the same images as the ones in production and could also be developed on the dev machines. The only real difference between the local dev environment and the production environment was the operating system: the dev machines typically ran Mac OS X/Boot2Docker and prod ran on Linux.
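A rough sketch of the data volume container pattern mentioned above (image and container names are placeholders, in the Fig/early-Docker style of that era):
# a container that exists only to own the data volume
docker create -v /var/lib/mysql --name mysql-data mysql:5.6 /bin/true
# the actual database container reuses that volume
docker run -d --volumes-from mysql-data --name mysql mysql:5.6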
Yes, you should shift from jar/war files to images as your build artefacts. The process @wassgren describes should work well; however, I found the following fits our use case better, especially for development:
1. Make a base image. It looks like you're a Java shop, so as an example, let's pretend your base image is FROM ubuntu:14.04 and installs the JDK and some of the more common libs. Let's call it myjava.
2. During development, use Fig to bring up your container(s) locally and mount your dev code to the right spot. Fig uses the myjava base image and doesn't care about the Dockerfile.
3. When you build the project for deployment, it uses the Dockerfile, and somewhere in there it does a COPY of the code/build artefacts to the right place. That image then gets tagged appropriately and deployed.
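A minimal sketch of the deployment Dockerfile from step 3, assuming the hypothetical myjava base image and a jar produced by the build (the artefact path is an assumption):
# Dockerfile used only for the deployment build
FROM myjava
COPY build/libs/myapp.jar /opt/myapp/myapp.jar
CMD ["java", "-jar", "/opt/myapp/myapp.jar"]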
Simple steps to follow:
1) Install Jenkins in a container.
2) Install the framework tool in the same container (I used SBT).
3) Create a project in Jenkins with the necessary plugins to integrate data from GitLab and build all jars into a compressed format (say build.tgz).
4) This build.tgz can be moved anywhere and be triggered, but it should satisfy all its environment dependencies (for example, say it requires MySQL).
5) Now we create a base environment image (in our case, with MySQL installed).
6) With every build triggered, we should trigger a Dockerfile build on the server which combines build.tgz with the environment image (see the sketch after this list).
7) Now we have our build.tgz along with its environment satisfied. This image should be pushed into the registry. This is our final image. It is portable and can be deployed anywhere.
8) This final image can be run in a container with the necessary mount points to get outputs, and a script (script.sh) can be triggered by mentioning it as the entrypoint command in the Dockerfile.
9) This script.sh will be inside the image (the base image) and will be configured to do things according to our purpose.
10) Before bringing this container up, you need to stop the previously running container.
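A minimal sketch of steps 6, 7 and 10, with hypothetical image and container names:
# step 6: build the image that combines build.tgz with the environment image
docker build -t registry.example.com/myapp:42 .
# step 7: push the final image to the registry
docker push registry.example.com/myapp:42
# step 10: replace the previously running container
docker stop myapp && docker rm myapp
docker run -d --name myapp registry.example.com/myapp:42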
Important notes:
An image is created every time you build and is stored in the registry.
Thus it can be reused. This image can be pushed to the main production server and started there.
This helps to maintain a clean environment every time, because we always start from the base image.
You can also create a stable CD pipeline with Bamboo and Docker. Bamboo Docker integrations come in both a build agent form and as a bundled task within the application. You might find this article helpful: http://blogs.atlassian.com/2015/06/docker-containers-bamboo-winning-continuous-delivery/
You can use the task to build a Docker image that you can use in another environment or deploy your application to a container.
Good luck!

Can I force Fuse Fabric Maven Proxy to push an updated version of the same jar to containers

I've developed a project that has a bundle whose only purpose is to write a file to a certain location on all of the containers running it.
This file will change often, but that does not really warrant an increase in version number. I also don't want to have 100 versions of this bundle in my repository, so I have left it as a snapshot. This question would also apply if I were doing active development on a project for Fuse Fabric.
Once built, I deploy the bundle to my fabric's maven proxy with:
mvn deploy:deploy-file -Dfile=target/file-1.0-SNAPSHOT.jar -DartifactId=file -DgroupId=com.some.id -Dversion=1.0.0 -Dtype=bundle -Durl=http://username:password@hostname:port/maven/upload
I can then add my bundle to a profile with:
mvn:com.some.id/file/1.0.0
This works the first time.
Then I make a change to the file, rebuild the bundle, and deploy with exactly the same command. I remove the bundle from the profile and add it back in. The maven proxy on the fabric server has the new bundle in it if I check $FUSE_HOME/data/maven/proxy/com/some/id/file/1.0.0/
But on all of the containers running the profile on a separate server, the bundle is not updated. I assume because the version has not changed. However, fabric should be smart enough to tell the difference, as the md5 should be different.
For now I can change the version number and my problem is solved, or clear the maven proxy by hand. But in production I will not be able to clear the proxy on every server, nor can I expect someone to come up with a unique version for the bundle every time they make a small change to this file (which should happen often).
I have already tried adding updatePolicy=always to the fabric maven configuration, but I believe that only affects repositories that it is pulling from, not the proxy.
Any advice on the best way to solve this problem is welcome.
If you are using containers, your old artefacts are most likely cached in
$FUSE_HOME/instances/CONTAINER_NAME/data/maven/agent/
Delete the old artefacts from there and stop/start your container.
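A rough sketch of that cleanup, assuming a container named mycontainer and the artifact coordinates from the question (the exact cache layout may differ; the fabric commands are run from the Fuse/Karaf console):
# on the host running the container
rm -rf $FUSE_HOME/instances/mycontainer/data/maven/agent/com/some/id/file/
# then restart the container from the Fuse/Karaf console
fabric:container-stop mycontainer
fabric:container-start mycontainer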
