So, I created a basic Quarkus app, then added the Kubernetes and Helm extensions and ran ./mvnw clean package. In the target directory, a helm directory was generated with Chart.yaml, values.yaml, and the templates, all based on the app as I first deployed it (i.e. with a specific name). Now in the deployment.yaml there is a section image: myimage. What is the image that should be there? I also followed the instructions in the Quarkus Helm documentation, but nothing happens.
I tried to install the chart with: helm install helm-example ./target/helm/kubernetes/. What should I do in order to see my app in the browser?
Quarkus Kubernetes and Quarkus Helm are extensions that generate Kubernetes resources. The component you might be missing is building the container image of your application (the binaries).
By default, the Quarkus Kubernetes extension derives the container image name from your system properties/application metadata, which might not be correct when installing the generated Kubernetes resources.
Luckily, Quarkus provides a few extensions to generate container images (see documentation).
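Each of these is added like any other Quarkus extension; for instance, a sketch for the Jib one (container-image-jib is its extension id):
./mvnw quarkus:add-extension -Dextensions="container-image-jib"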
For example, if you decide to use Quarkus Container Image Jib, Quarkus will also build the container image locally when packaging your application. Still, the image won't be available when installing the Kubernetes resources, because it only exists on your local machine, which the remote Kubernetes instance cannot access. You therefore need to push the generated image to a container registry (Docker Hub or quay.io, for example). You can configure this by adding:
# To specify the container registry
quarkus.container-image.registry=quay.io
# To instruct Quarkus to also push the generated image into the above registry
quarkus.container-image.push=true
After building your application again, Quarkus will generate both the Kubernetes resources and the container image, and the image will be available in the container registry.
Additionally, if you want to install the chart generated by the Quarkus Helm extension, you can override the container image when installing it:
helm install helm-example ./target/helm/kubernetes/<chart name> --set app.image=<full container image>
I hope it helps!
I recently converted my Kubernetes Deployment service to a Knative serverless application. I am looking for a way to update the image of a container in the Knative app from a CI/CD pipeline without using a YAML file (the CI pipeline doesn't have access to the YAML config used to deploy the app). Previously, I was using the kubectl set image command to update the image from CI to the latest version for a Deployment, but it does not appear to work for a Knative service; e.g., the command I tried is:
kubectl set image ksvc/hello-world hello-world=some-new-image --record
Is there a way to update the image of a Knative app using a kubectl command, without having access to the original YAML config?
You can use kn CLI:
https://github.com/knative/client/blob/master/docs/cmd/kn_service_update.md
kn service update hello-world --image some-new-image
This would create a new revision for the Knative service though.
You can clean up old revisions with kn.
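A minimal sketch of that cleanup (the revision name is a placeholder):
# list the revisions behind the service
kn revision list --service hello-world
# delete a revision you no longer need
kn revision delete hello-world-00001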
Get kn here: https://knative.dev/docs/install/install-kn/
I'm new to Kubernetes setup, and I wanted to know the best way to deploy services in production. Below are a few ways I could think of; can you guide me in the right direction?
1) Deploy each *.war file into an Apache Tomcat Docker container, and use the service discovery mechanism of Kubernetes.
2) Run each application normally using "java -jar *.war" in Pods and expose their ports using port binding.
Thanks.
The canonical way to deploy applications to Kubernetes is as follows (a minimal sketch follows the list):
Package each application component in a container image and upload it to a container registry (e.g. Docker Hub)
Create a Deployment resource for each container that runs the container as a Pod (or a set of replicas of Pods) in the cluster
Expose the Pod(s) in each Deployment with a Service so that they can be accessed by other Pods or by the user
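A minimal sketch of steps 2 and 3, assuming the image from step 1 is already in a registry (all names are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
Apply both with kubectl apply -f my-app.yml; the Service then gives the Pods a stable in-cluster address.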
I would suggest using an embedded Tomcat server in a Spring Boot .jar file to deploy your microservices. Below is the answer of @weibeld that I also follow to deploy my Spring Boot apps.
Package each application component in a container image and upload it to a container registry (e.g. Docker Hub)
You can use Jib to easily build a distroless image. The container image can be built with the Jib Maven plugin:
mvn compile jib:build -Djib.to.image=MY_REGISTRY_IMAGE:MY_TAG -Djib.to.auth.username=USER -Djib.to.auth.password=PASSWORD
Create a Deployment resource for each container that runs the container as a Pod (or a set of replicas of Pods) in the cluster
Create your deployment .yml file structure and adjust the deployment parameters as you need in the file:
kubectl create deployment my-springboot-app --image MY_REGISTRY_IMAGE:MY_TAG --dry-run=client -o yaml > my-springboot-app-deployment.yml
Create the deployment:
kubectl apply -f my-springboot-app-deployment.yml
Expose the Pod(s) in each Deployment with a Service so that they can be accessed by other Pods or by the user
kubectl expose deployment my-springboot-app --port=8080 --target-port=8080 --dry-run=client -o yaml > my-springboot-app-service.yml
kubectl apply -f my-springboot-app-service.yml
Is it somehow possible to build images without having Docker installed? On the Maven build of my project I'd like to produce a Docker image, but I don't want to force others to install Docker on their machines.
I can think of a VirtualBox image with Docker installed, but that is a rather heavyweight solution. Is there some way to build the image with a Maven plugin only, some Go code, or an already prepared VirtualBox image for exactly this purpose?
It boils down to the question of how to use Docker without forcing users to install anything, either just for the build or even for running Docker images.
UPDATE
There are some, not really up-to-date, Maven plugins for virtual machine provisioning with Vagrant or with VirtualBox. I have also found an article about building Docker images without Docker, using Bazel.
So far I see two options: either I somehow build the images only, or I run some VM with a Docker daemon inside (which could be used not only for builds, but also for integration tests).
We can create a Docker image without Docker being installed.
Jib Maven and Gradle Plugins
Google has an open source tool called Jib that is relatively new, but quite interesting for a number of reasons. Probably the most interesting thing is that you don’t need docker to run it - it builds the image using the same standard output as you get from docker build but doesn’t use docker unless you ask it to - so it works in environments where docker is not installed (not uncommon in build servers). You also don’t need a Dockerfile (it would be ignored anyway), or anything in your pom.xml to get an image built in Maven (Gradle would require you to at least install the plugin in build.gradle).
Another interesting feature of Jib is that it is opinionated about layers, and it optimizes them in a slightly different way than the multi-layer Dockerfile created above. Just like in the fat jar, Jib separates local application resources from dependencies, but it goes a step further and also puts snapshot dependencies into a separate layer, since they are more likely to change. There are configuration options for customizing the layout further.
Please refer to this link: https://cloud.google.com/blog/products/gcp/introducing-jib-build-java-docker-images-better
For an example with Spring Boot, refer to https://spring.io/blog/2018/11/08/spring-boot-in-a-container
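For reference, a minimal jib-maven-plugin configuration sketch (the plugin version and image name here are assumptions, not taken from the articles above):
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version>
  <configuration>
    <to>
      <image>registry.example.com/my-app:latest</image>
    </to>
  </configuration>
</plugin>
With this in place, mvn compile jib:build builds and pushes the image without a Docker daemon.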
Have a look at the following tools:
Fabric8-maven-plugin - http://maven.fabric8.io/ - good Maven integration; uses a remote Docker (OpenShift) cluster for the builds.
Buildah - https://github.com/containers/buildah - builds without a Docker daemon but does have other prerequisites.
Fabric8-maven-plugin
The fabric8-maven-plugin brings your Java applications onto Kubernetes and OpenShift. It provides a tight integration into Maven and benefits from the build configuration already provided. This plugin focuses on two tasks: building Docker images and creating Kubernetes and OpenShift resource descriptors.
fabric8-maven-plugin seems particularly appropriate if you have a Kubernetes / Openshift cluster available. It uses the Openshift APIs to build and optionally deploy an image directly to your cluster.
I was able to build and deploy their zero-config spring-boot example extremely quickly, no Dockerfile necessary, just write your application code and it takes care of all the boilerplate.
Assuming you already have the basic setup to connect to OpenShift from your desktop, it will package up the project .jar in a container and start it on OpenShift. The minimal Maven configuration is to add the plugin to your pom.xml build/plugins section:
<plugin>
<groupId>io.fabric8</groupId>
<artifactId>fabric8-maven-plugin</artifactId>
<version>3.5.41</version>
</plugin>
then build+deploy using
$ mvn fabric8:deploy
If you require more control and prefer to manage your own Dockerfile, it can handle that too; this is shown in samples/secret-config.
Buildah
Buildah is a tool that facilitates building Open Container Initiative (OCI) container images. The package provides a command line tool that can be used to (a short sketch follows the list):
create a working container, either from scratch or using an image as a starting point
create an image, either from a working container or via the instructions in a Dockerfile
build images in either the OCI image format or the traditional upstream Docker image format
mount a working container's root filesystem for manipulation
unmount a working container's root filesystem
use the updated contents of a container's root filesystem as a filesystem layer to create a new image
delete a working container or an image
rename a local container
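A short shell sketch of both workflows (image and file names are placeholders):
# build from an existing Dockerfile, no Docker daemon required
buildah bud -t registry.example.com/my-app:latest .
# or assemble an image step by step, starting from scratch
ctr=$(buildah from scratch)
buildah copy "$ctr" ./my-app /usr/local/bin/my-app
buildah config --entrypoint '["/usr/local/bin/my-app"]' "$ctr"
buildah commit "$ctr" registry.example.com/my-app:latest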
I don't want to force others to install docker on their machines.
If by "without Docker installed" you mean without having to install Docker locally on every machine running the build, you can leverage the Docker Engine API which allow you to call a Docker Daemon from a distant host.
The Docker Engine API is a RESTful API accessed by an HTTP client such as wget or curl, or the HTTP library which is part of most modern programming languages.
For example, the Fabric8 Docker Maven Plugin does just that, using the DOCKER_HOST parameter. You'll need a recent Docker version, and you'll have to configure at least one Docker daemon properly so it can securely accept remote requests (there are lots of resources on this subject, such as the official documentation). From then on, your Docker build can be done remotely without having to install Docker locally.
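A minimal sketch of what this looks like on a build machine (host name and certificate path are placeholders):
# point the Docker client (and docker-maven-plugin) at a remote daemon
export DOCKER_HOST=tcp://build-host.example.com:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=$HOME/.docker/build-host
# the image is now built by the remote daemon, no local Docker needed
mvn package docker:build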
Google has released Kaniko for this purpose. It should be run as a container, whether in Kubernetes, Docker or gVisor.
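A minimal sketch of a Kaniko build Pod (registry, repository, and secret names are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
  - name: kaniko
    image: gcr.io/kaniko-project/executor:latest
    args:
    - --dockerfile=Dockerfile
    - --context=git://github.com/example/my-app.git
    - --destination=registry.example.com/my-app:latest
    volumeMounts:
    # Kaniko reads registry credentials from /kaniko/.docker/config.json
    - name: registry-creds
      mountPath: /kaniko/.docker
  volumes:
  - name: registry-creds
    secret:
      secretName: regcred
      items:
      - key: .dockerconfigjson
        path: config.json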
I was running into the same problems and did not find any solution, so I developed odagrun, a runner for GitLab with an integrated registry API, Docker Hub and MicroBadger updates, etc.
It is open source and has an MIT license.
It is ideal for creating a Docker image on the fly, without the need for a Docker daemon, a root account, or any base image at all (image: scratch will do). It is currently still in development, but I use it every day.
Requirements:
a project repository on GitLab
an OpenShift cluster (an OpenShift Online Starter cluster will do for most medium/small projects)
An extract of how the Docker image for this project was created:
# create and push image to ImageStream:
build_rootfs:
  image: centos
  stage: build-image
  dependencies:
    - build
  before_script:
    - mkdir -pv rootfs
    - cp -v output/oc-* rootfs/
    - mkdir -pv rootfs/etc/pki/tls/certs
    - mkdir -pv rootfs/bin-runner
    - cp -v /etc/pki/tls/certs/ca-bundle.crt rootfs/etc/pki/tls/certs/ca-bundle.crt
    - chmod -Rv 777 rootfs
  tags:
    - oc-runner-shared
  script:
    - registry_push --rootfs --name=test-$CI_PIPELINE_ID --ISR --config
I am using kubernetes helm to deploy apps to my cluster. Everything works fine from my laptop when helm uses the cluster's kube-config file to deploy to the cluster.
I want to use helm from my CI/CD server (which is separate from my cluster) to automatically deploy apps to my cluster. I have created a k8s service account for my CI/CD server to use. But how do I create a kube-config file for the service account so that helm can use it to connect to my cluster from my CI/CD server?
Or is this not the right way to use Helm from a CI/CD server?
Helm uses the same kubeconfig as kubectl to talk to your cluster. That means that if you can access your cluster via kubectl, you can use helm with that cluster.
Don't forget to make sure you're using the proper context in case you have more than one cluster in your kubeconfig file. You can check that by running kubectl config current-context and comparing it to the cluster details in the kubeconfig.
You can find more details in Helm's docs; the quick start guide is a good place to begin.
Why not just run your CI server inside your Kubernetes cluster? Then you don't have to manage secrets for accessing the cluster. We do that on Jenkins X and it works great - we can run kubectl or helm inside pipelines just fine.
In this case you will want to install kubectl on whichever slave or agent your CI/CD server uses (or install kubectl on the fly in your automation), and then make sure you have, or can generate, a kubeconfig to use.
To answer the question:
But how do I create a kube-config file for the service account ...
You can set new clusters, credentials, and contexts for use with kubectl in a default or custom kubeconfig file using kubectl config set-cluster, kubectl config set-credentials, and kubectl config set-context. If you have the KUBECONFIG env variable set and pointing to a kubeconfig file, that works; when setting new entries you can also pass --kubeconfig to point to a custom file.
Here's the relevant API documentation for v1.6.
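A sketch of building such a kubeconfig for a service account (server URL, CA file, and token value are placeholders; the token comes from the service account's secret):
kubectl config set-cluster my-cluster --server=https://api.my-cluster.example.com --certificate-authority=ca.crt --embed-certs=true --kubeconfig=ci-kubeconfig
kubectl config set-credentials ci-bot --token="$SA_TOKEN" --kubeconfig=ci-kubeconfig
kubectl config set-context ci --cluster=my-cluster --user=ci-bot --kubeconfig=ci-kubeconfig
kubectl config use-context ci --kubeconfig=ci-kubeconfig
Hand the resulting ci-kubeconfig file to your CI/CD server and point helm/kubectl at it via the KUBECONFIG env variable.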
We created helmsman, which provides a declarative syntax to manage Helm charts in your cluster. It configures kubectl (and therefore helm) for you wherever you run it. It can also be used from a Docker container.
I tried to set up Continuous Deployment using Jenkins for my own microservice project, which is organized as a multi-module Maven project (each submodule representing a microservice). I use "Incremental build - only build changed modules" in Jenkins to avoid unnecessary building, and then use docker-maven-plugin to build the Docker images. However, how can I redeploy only the changed images to the Kubernetes cluster?
You can use a local Docker image registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
You can then push the development images to this registry as a build step and make your kubernetes containers use this registry.
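For example (image names are placeholders):
# tag and push a development image to the local registry
docker tag my-app:dev localhost:5000/my-app:dev
docker push localhost:5000/my-app:dev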
Once you are ready, push the image to your production image registry and adjust the container manifests to use the proper registry.
More info on private registry server: https://docs.docker.com/registry/deploying/
Currently Kubernetes does not provide a proper solution for this, but there are a few workarounds mentioned here: https://github.com/kubernetes/kubernetes/issues/33664
I like this one: 'Fake a change to the Deployment by changing something other than the image'. We can do it this way:
Define an environment variable, say TIMESTAMP, with any value in the deployment manifest. In the CI/CD pipeline we set the value to the current timestamp and then pass the updated manifest to kubectl apply. This way, we fake a change and Kubernetes will pull the latest image and deploy it to the cluster. Please make sure that imagePullPolicy: Always is set.
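A minimal sketch of this workaround (names are placeholders). In the deployment manifest:
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: registry.example.com/my-app:latest
        imagePullPolicy: Always
        env:
        # exists only to force a rollout on every deploy
        - name: TIMESTAMP
          value: "REPLACE_ME"
And in the CI/CD pipeline:
# stamp the manifest with the current time, then apply it
sed -i "s/REPLACE_ME/$(date +%s)/" deployment.yml
kubectl apply -f deployment.yml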