I tried to set up Continuous Deployment for my own microservice project using Jenkins. The project is organized as a multi-module Maven project (each submodule representing a microservice). I use "Incremental build - only build changed modules" in Jenkins to avoid unnecessary building, and then use docker-maven-plugin to build the Docker images. However, how can I redeploy only the changed images to the Kubernetes cluster?
You can use a local Docker image registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
You can then push the development images to this registry as a build step and make your Kubernetes containers use this registry.
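For example, a build step could tag and push each changed module's image to that registry (a sketch; the image name is a placeholder):
docker tag my-service:latest localhost:5000/my-service:latest
docker push localhost:5000/my-service:latest
Your Kubernetes manifests would then reference the image through the registry's address as seen from the cluster nodes.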
After you are ready, push the image to your production image registry and adjust the container manifests to use the proper registry.
More info on private registry server: https://docs.docker.com/registry/deploying/
Currently, Kubernetes does not provide a proper solution for this, but there are a few workarounds mentioned here: https://github.com/kubernetes/kubernetes/issues/33664
I like this one: 'Fake a change to the Deployment by changing something other than the image'. We can do it this way:
Define an environment variable, say "TIMESTAMP", with any value in the deployment manifest. In the CI/CD pipeline, we set the value to the current timestamp and then pass this updated manifest to kubectl apply. This way, we are faking a change and Kubernetes will pull the latest image and deploy it to the cluster. Please make sure that imagePullPolicy: Always is set.
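A minimal sketch of the relevant Deployment fragment (the container name and image are placeholders; the pipeline substitutes the TIMESTAMP value before applying):
spec:
  template:
    spec:
      containers:
        - name: my-service                      # placeholder name
          image: registry.example.com/my-service:latest
          imagePullPolicy: Always
          env:
            - name: TIMESTAMP
              value: "__CI_TIMESTAMP__"         # replaced on every pipeline run
In the pipeline, something like sed -i "s/__CI_TIMESTAMP__/$(date +%s)/" deployment.yaml && kubectl apply -f deployment.yaml performs the substitution.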
So, I created a basic Quarkus app, then added the Kubernetes and Helm extensions and ran the ./mvnw clean package command. In the target directory, a helm directory was added with the Chart.yaml, values.yaml and the templates, all based on the app I deployed first (i.e. with a specific name). Now, in the deployment.yaml there is a section image: myimage. What is the image that should be there? I also followed the instructions of the Quarkus documentation for Helm, but nothing happens.
I tried to install with Helm by doing: helm install helm-example ./target/helm/kubernetes/. What should I do in order to see my app in the browser?
Quarkus Kubernetes and Quarkus Helm are extensions to generate Kubernetes resources. The component that you might be missing is to also build/generate the container image of your application (the binaries).
By default, the Quarkus Kubernetes extension will use a container image based on your system properties/application metadata, which might not be correct when installing the generated Kubernetes resources.
Luckily, Quarkus provides a few extensions to generate container images (see documentation).
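Such an extension can be added through the Quarkus Maven plugin; a sketch for the Jib-based one (the extension ID below is the documented one for Jib):
./mvnw quarkus:add-extension -Dextensions="container-image-jib"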
For example, if you decide to use Quarkus Container Image Jib, Quarkus will also build the container image locally when packaging your application. Still, the image won't be available when installing the Kubernetes resources, because it only exists on your local machine and is not accessible to the remote Kubernetes instance; you need to push the generated image to a container registry (Docker Hub or quay.io, for example). You can configure this by adding:
# To specify the container registry
quarkus.container-image.registry=quay.io
# To instruct Quarkus to also push the generated image into the above registry
quarkus.container-image.push=true
After building your application again, Quarkus will generate both the Kubernetes resources and the container image, and the image will be available in the container registry.
Additionally, if you want to install the generated chart by the Quarkus Helm extension, you can overwrite the container image before installing it:
helm install helm-example ./target/helm/kubernetes/<chart name> --set app.image=<full container image>
I hope it helps!
I've created a completely new Xamarin Forms App project in Visual Studio for Mac and added it to a GitLab repository. After that, I created a .gitlab-ci.yml file to set up my CI build. But the problem is that I get error messages:
error MSB4019: The imported project "/usr/lib/mono/xbuild/Xamarin/iOS/Xamarin.iOS.CSharp.targets" was not found. Confirm that the expression in the Import declaration "/usr/lib/mono/xbuild/Xamarin/iOS/Xamarin.iOS.CSharp.targets" is correct, and that the file exists on disk.
This error also pops up for Xamarin.Android.CSharp.targets.
My YML file looks like this:
image: mono:latest

stages:
  - build

build:
  stage: build
  before_script:
    - msbuild -version
    - 'echo BUILDING'
    - 'echo NuGet Restore'
    - nuget restore 'XamarinFormsTestApp.sln'
  script:
    - 'echo Cleaning'
    - MONO_IOMAP=case msbuild 'XamarinFormsTestApp.sln' $BUILD_VERBOSITY /t:Clean /p:Configuration=Debug
Some help would be appreciated ;)
You will need a macOS host to build a Xamarin.iOS application, and AFAIK it's not there yet in GitLab. You can find the discussion here and the private beta here. For now, I would recommend bringing your own macOS host and registering a GitLab runner on that host:
https://docs.gitlab.com/runner/
You can set up the host wherever you want (a VM or a physical device), install the GitLab runner and the Xamarin environment there, tag it, and use it with GitLab pipelines as with any other shared runner.
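A sketch of registering such a runner and targeting it from the pipeline (the token and tag name are placeholders):
# on the macOS host
gitlab-runner register \
  --url https://gitlab.com/ \
  --registration-token <YOUR_TOKEN> \
  --executor shell \
  --tag-list macos-xamarin
Jobs then select that runner via tags in .gitlab-ci.yml:
build:
  tags:
    - macos-xamarin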
From the comments on your question, it looks like Xamarin isn't available in the mono:latest image, but that's OK because you can create your own Docker images to use in GitLab CI. You will need access to a registry, but if you use gitlab.com (as opposed to a self-hosted instance) the registry is enabled for all users. You can find more information in the docs: https://docs.gitlab.com/ee/user/packages/container_registry/
If you are using self-hosted, the registry is still available (even for free versions) but it has to be enabled by an admin (docs here: https://docs.gitlab.com/ee/administration/packages/container_registry.html).
Another option is to use Docker's own registry, Docker Hub. It doesn't matter what registry you use, but you'll have to have access to one of them so your runners can pull down your image. This is especially true if you're using shared runners that you (or your admins) don't have direct control over. If you can directly control your runners, another option is to build the docker image on all of your runners that need it.
I'm not familiar with Xamarin, but here's how you can create a new Docker image based on mono:latest:
# ./mono-xamarin/Dockerfile
# Build on top of the existing mono:latest image rather than starting from scratch;
# everything in mono:latest will be available in this image.
FROM mono:latest
# Run whatever you need in order to install Xamarin or anything else you need.
RUN ./install_xamarin.sh
# Just an example
RUN apt-get update && apt-get install -y git
Once your Dockerfile is written, you can build it like this:
docker build --file path/to/Dockerfile --tag mono-xamarin:latest .
(the trailing . is the build context, which docker build requires)
If you build the image on your runners, you can use it immediately like:
# .gitlab-ci.yml
image: mono-xamarin:latest
stages:
- build
...
Otherwise you can now push it to whichever registry you want to use.
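For example, pushing to the GitLab container registry would look roughly like this (the group and project path segments are placeholders):
docker login registry.gitlab.com
docker tag mono-xamarin:latest registry.gitlab.com/<group>/<project>/mono-xamarin:latest
docker push registry.gitlab.com/<group>/<project>/mono-xamarin:latest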
I recently converted my Kubernetes Deployment service to a Knative serverless application. I am looking for a way to update the image of a container in the Knative app from a CI/CD pipeline without using a YAML file (the CI pipeline doesn't have access to the YAML config used to deploy the app). Previously, I was using the kubectl set image command to update the image from CI to the latest version for a Deployment, but it does not appear to work for a Knative service; e.g. the command I tried is:
kubectl set image ksvc/hello-world hello-world=some-new-image --record
Is there a way to update the image of a Knative app using a kubectl command, without having access to the original YAML config?
You can use the kn CLI:
https://github.com/knative/client/blob/master/docs/cmd/kn_service_update.md
kn service update hello-world --image some-new-image
This would create a new revision for the Knative service though.
You can clean up old revisions with kn.
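For example (the revision name below is a placeholder; list the actual names first):
kn revision list
kn revision delete hello-world-00001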
Get kn here: https://knative.dev/docs/install/install-kn/
I'm new to k8s setup; I wanted to know the best way to deploy the services in production. Below are a few ways I could think of; can you guide me in the right direction?
1) Deploy each *.war file into an Apache Tomcat Docker container, and use the service discovery mechanism of k8s.
2) Run each application normally using "java -jar *.war" in Pods, and expose their ports using port binding.
Thanks.
The canonical way to deploy applications to Kubernetes is as follows (a minimal manifest sketch follows the list):
Package each application component in a container image and upload it to a container registry (e.g. Docker Hub)
Create a Deployment resource for each container that runs the container as a Pod (or a set of replicas of Pods) in the cluster
Expose the Pod(s) in each Deployment with a Service so that they can be accessed by other Pods or by the user
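A minimal sketch of steps 2 and 3 (all names, labels, ports, and the image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
    - port: 8080
      targetPort: 8080
Apply both with kubectl apply -f, after which other Pods can reach the application at my-app:8080 inside the cluster.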
I would suggest using the embedded Tomcat server in the Spring Boot .jar file to deploy your microservices. Below is the answer of @weibeld, which I also follow to deploy my Spring Boot apps.
Package each application component in a container image and upload it to a container registry (e.g. Docker Hub)
You can use Jib to easily build a distroless image. The container image can be built using the Maven plugin:
mvn compile jib:build -Djib.to.image=MY_REGISTRY_IMAGE:MY_TAG -Djib.to.auth.username=USER -Djib.to.auth.password=PASSWORD
Create a Deployment resource for each container that runs the container as a Pod (or a set of replicas of Pods) in the cluster
Generate your deployment .yml file skeleton and adjust the deployment parameters as you need in the file:
kubectl create deployment my-springboot-app --image MY_REGISTRY_IMAGE:MY_TAG --dry-run=client -o yaml > my-springboot-app-deployment.yml
Create the deployment:
kubectl apply -f my-springboot-app-deployment.yml
Expose the Pod(s) in each Deployment with a Service so that they can be accessed by other Pods or by the user
kubectl expose deployment my-springboot-app --port=8080 --target-port=8080 --dry-run=client -o yaml > my-springboot-app-service.yml
kubectl apply -f my-springboot-app-service.yml
Is it somehow possible to build images without having Docker installed? On Maven build of my project I'd like to produce a Docker image, but I don't want to force others to install Docker on their machines.
I can think of some VirtualBox image with Docker installed, but that is kind of a heavy solution. Is there some way to build the image with some Maven plugin only, some Go code, or an already prepared VirtualBox image for exactly this purpose?
It boils down to the question of how to use Docker without forcing users to install anything, either just for the build or even for running Docker images.
UPDATE
There are some, not really up-to-date, Maven plugins for virtual machine provisioning with Vagrant or with VirtualBox. I have also found an article about building Docker images without Docker, on Bazel.
So far I see two options: either I can somehow build the images only, or run some VM with a Docker daemon inside (which could be used not only for builds, but also for integration tests).
We can create a Docker image without Docker being installed.
Jib Maven and Gradle Plugins
Google has an open source tool called Jib that is relatively new, but quite interesting for a number of reasons. Probably the most interesting thing is that you don't need docker to run it - it builds the image using the same standard output as you get from docker build but doesn't use docker unless you ask it to - so it works in environments where docker is not installed (not uncommon in build servers). You also don't need a Dockerfile (it would be ignored anyway), or anything in your pom.xml to get an image built in Maven (Gradle would require you to at least install the plugin in build.gradle).

Another interesting feature of Jib is that it is opinionated about layers, and it optimizes them in a slightly different way than the multi-layer Dockerfile created above. Just like in the fat jar, Jib separates local application resources from dependencies, but it goes a step further and also puts snapshot dependencies into a separate layer, since they are more likely to change. There are configuration options for customizing the layout further.
Please refer to this link: https://cloud.google.com/blog/products/gcp/introducing-jib-build-java-docker-images-better
For example with Spring Boot refer https://spring.io/blog/2018/11/08/spring-boot-in-a-container
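A minimal sketch of the Jib Maven plugin configuration (the version and target image are assumptions; check the plugin's documentation for the current release):
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version> <!-- assumed version; use the latest -->
  <configuration>
    <to>
      <image>registry.example.com/my-app:latest</image> <!-- placeholder image -->
    </to>
  </configuration>
</plugin>
With this in place, mvn compile jib:build builds and pushes the image without a local Docker daemon.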
Have a look at the following tools:
Fabric8-maven-plugin - http://maven.fabric8.io/ - good maven integration, uses a remote docker (openshift) cluster for the builds.
Buildah - https://github.com/containers/buildah - builds without a Docker daemon but does have other prerequisites.
Fabric8-maven-plugin
The fabric8-maven-plugin brings your Java applications on to Kubernetes and OpenShift. It provides a tight integration into Maven and benefits from the build configuration already provided. This plugin focuses on two tasks: building Docker images and creating Kubernetes and OpenShift resource descriptors.
fabric8-maven-plugin seems particularly appropriate if you have a Kubernetes / OpenShift cluster available. It uses the OpenShift APIs to build and optionally deploy an image directly to your cluster.
I was able to build and deploy their zero-config spring-boot example extremely quickly: no Dockerfile necessary; just write your application code and it takes care of all the boilerplate.
Assuming you have the basic setup to connect to OpenShift from your desktop already, it will package up the project .jar in a container and start it on OpenShift. The minimum Maven configuration is to add the plugin to your pom.xml build/plugins section:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.41</version>
</plugin>
then build+deploy using
$ mvn fabric8:deploy
If you require more control and prefer to manage your own Dockerfile, it can handle this too; this is shown in samples/secret-config.
Buildah
Buildah is a tool that facilitates building Open Container Initiative (OCI) container images. The package provides a command line tool that can be used to:
create a working container, either from scratch or using an image as a starting point
create an image, either from a working container or via the instructions in a Dockerfile (images can be built in either the OCI image format or the traditional upstream docker image format)
mount a working container's root filesystem for manipulation
unmount a working container's root filesystem
use the updated contents of a container's root filesystem as a filesystem layer to create a new image
delete a working container or an image
rename a local container
I don't want to force others to install docker on their machines.
If by "without Docker installed" you mean without having to install Docker locally on every machine running the build, you can leverage the Docker Engine API which allow you to call a Docker Daemon from a distant host.
The Docker Engine API is a RESTful API accessed by an HTTP client such as wget or curl, or the HTTP library which is part of most modern programming languages.
For example, the Fabric8 Docker Maven Plugin does just that using the DOCKER_HOST parameter. You'll need a recent Docker version, and you'll have to configure at least one Docker daemon properly so it can securely accept remote requests (there are lots of resources on this subject, such as the official docs, here or here). From then on, your Docker builds can be done remotely without having to install Docker locally.
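A sketch of what that can look like (host, port, and certificate path are placeholders; these are the standard Docker client environment variables, which the plugin honors):
# point the Docker client and the fabric8 docker-maven-plugin at a remote daemon
export DOCKER_HOST=tcp://docker-host.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker/remote-certs
mvn package docker:build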
Google has released Kaniko for this purpose. It should be run as a container, whether in Kubernetes, Docker or gVisor.
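A rough example of running the Kaniko executor as a container (the context, Dockerfile path, and destination are placeholders; pushing also requires registry credentials mounted at /kaniko/.docker/config.json):
docker run \
  -v "$(pwd)":/workspace \
  gcr.io/kaniko-project/executor:latest \
  --context=/workspace \
  --dockerfile=/workspace/Dockerfile \
  --destination=registry.example.com/my-app:latest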
I was running into the same problems and did not find any solution, so I developed odagrun, a runner for GitLab with an integrated registry API, DockerHub updates, MicroBadger, etc.
It is open source and has an MIT license.
It is ideal for creating a Docker image on the fly, without the need of a Docker daemon nor a root account, or any image at all (image: scratch will do). It is currently still in development, but I use it every day.
Requirements
a project repository on GitLab
an OpenShift cluster (an openshift-online-starter will do for most medium/small projects)
An extract of how the Docker image for this project was created:
# create and push image to ImageStream:
build_rootfs:
  image: centos
  stage: build-image
  dependencies:
    - build
  before_script:
    - mkdir -pv rootfs
    - cp -v output/oc-* rootfs/
    - mkdir -pv rootfs/etc/pki/tls/certs
    - mkdir -pv rootfs/bin-runner
    - cp -v /etc/pki/tls/certs/ca-bundle.crt rootfs/etc/pki/tls/certs/ca-bundle.crt
    - chmod -Rv 777 rootfs
  tags:
    - oc-runner-shared
  script:
    - registry_push --rootfs --name=test-$CI_PIPELINE_ID --ISR --config