Deploying Spring Boot microservices with Kubernetes in production - spring-boot

I'm new to Kubernetes, and I wanted to know the best way to deploy services in production. Below are a few ways I could think of; can you guide me in the right direction?
1) Deploy each *.war file into an Apache Tomcat Docker container, and use the service discovery mechanism of Kubernetes.
2) Run each application normally using "java -jar *.war" inside Pods and expose their ports using port binding.
Thanks.

The canonical way to deploy applications to Kubernetes is as follows:
Package each application component in a container image and upload it to a container registry (e.g. Docker Hub)
Create a Deployment resource for each container that runs the container as a Pod (or a set of replicas of Pods) in the cluster
Expose the Pod(s) in each Deployment with a Service so that they can be accessed by other Pods or by the user
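For the first step, a minimal Dockerfile is often enough (a sketch, assuming a Spring Boot fat jar built to target/; the base image and jar name are illustrative and should be adjusted to your build):

```dockerfile
# Hypothetical Dockerfile: adjust base image and jar name to your project
FROM eclipse-temurin:17-jre
COPY target/my-app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]
```

Build and push it with `docker build -t MY_REGISTRY_IMAGE:MY_TAG .` followed by `docker push MY_REGISTRY_IMAGE:MY_TAG`.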

I would suggest using the embedded Tomcat server in the Spring Boot .jar file to deploy your microservices. Below is the answer of #weibeld, which I also use to deploy my Spring Boot apps.
Package each application component in a container image and upload it
to a container registry (e.g. Docker Hub)
You can use Jib to easily build a distroless image. The container image can be built with the Jib Maven plugin:
mvn compile jib:build -Djib.to.image=MY_REGISTRY_IMAGE:MY_TAG -Djib.to.auth.username=USER -Djib.to.auth.password=PASSWORD
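Instead of passing everything on the command line, the Jib plugin can also be configured in pom.xml (a sketch; the plugin version and image name are placeholders to adjust to your setup):

```xml
<!-- Hypothetical pom.xml fragment: adjust version and image name -->
<plugin>
  <groupId>com.google.cloud.tools</groupId>
  <artifactId>jib-maven-plugin</artifactId>
  <version>3.4.0</version>
  <configuration>
    <to>
      <image>MY_REGISTRY_IMAGE:MY_TAG</image>
    </to>
  </configuration>
</plugin>
```

With this in place, `mvn compile jib:build` picks the target image up from the configuration.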
Create a Deployment resource for each container that runs the container as a Pod (or a set of replicas of Pods) in the cluster
Create your deployment .yml file and adjust the deployment parameters as you need:
kubectl create deployment my-springboot-app --image MY_REGISTRY_IMAGE:MY_TAG --dry-run=client -o yaml > my-springboot-app-deployment.yml
Create the deployment:
kubectl apply -f my-springboot-app-deployment.yml
Expose the Pod(s) in each Deployment with a Service so that they can be accessed by other Pods or by the user
kubectl expose deployment my-springboot-app --port=8080 --target-port=8080 --dry-run=client -o yaml > my-springboot-app-service.yml
kubectl apply -f my-springboot-app-service.yml
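For reference, the generated deployment manifest will look roughly like this (a sketch; the exact labels and fields depend on your kubectl version and command):

```yaml
# Approximate shape of the generated my-springboot-app-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-springboot-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-springboot-app
  template:
    metadata:
      labels:
        app: my-springboot-app
    spec:
      containers:
      - name: my-springboot-app
        image: MY_REGISTRY_IMAGE:MY_TAG
        ports:
        - containerPort: 8080
```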


I want to deploy my Quarkus app on Kubernetes using a Helm chart

So, I created a basic Quarkus app, then added the Kubernetes and Helm extensions and ran the ./mvnw clean package command. In the target directory, a helm directory was added with the Chart.yaml, values.yaml and the templates, all based on the app I first deployed, i.e. with a specific name. Now, in the deployment.yaml there is a section image: myimage. What is the image that should be there? I also followed the instructions in the Quarkus documentation on Helm, but nothing happens.
I tried to install with Helm by doing: helm install helm-example ./target/helm/kubernetes/. What should I do in order to see my app in the browser?
Quarkus Kubernetes and Quarkus Helm are extensions to generate Kubernetes resources. The component that you might be missing is to also build/generate the container image of your application (the binaries).
By default, the Quarkus Kubernetes extension will use a container image based on your system properties/application metadata, which might not be correct when installing the generated Kubernetes resources.
Luckily, Quarkus provides a few extensions to generate container images (see documentation).
For example, if you decide to use the Quarkus Container Image Jib extension, Quarkus will also build the container image locally when packaging your application. Still, the image won't be available when installing the Kubernetes resources, because it only exists on your local machine and is not accessible to the remote Kubernetes instance. You therefore need to push the generated image to a container registry (Docker Hub or quay.io, for example). You can configure this by adding:
# To specify the container registry
quarkus.container-image.registry=quay.io
# To instruct Quarkus to also push the generated image into the above registry
quarkus.container-image.push=true
After building your application again, Quarkus will generate both the Kubernetes resources and the container image, which will be available in a container registry.
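The image name itself can also be controlled through properties, so the generated resources and the pushed image line up (a sketch; group, name and tag below are illustrative values to replace with your own):

```properties
# Hypothetical values: adjust to your registry account and application
quarkus.container-image.registry=quay.io
quarkus.container-image.group=my-quay-user
quarkus.container-image.name=my-quarkus-app
quarkus.container-image.tag=1.0.0
quarkus.container-image.push=true
```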
Additionally, if you want to install the generated chart by the Quarkus Helm extension, you can overwrite the container image before installing it:
helm install helm-example ./target/helm/kubernetes/<chart name> --set app.image=<full container image>
I hope it helps!

Environment variables for Spring Cloud Config in Docker

So, I am learning about microservices (I am a beginner) and I'm facing an issue. I've gone through the Spring Cloud Config and Docker docs hoping to find a solution, but I didn't find one.
I have an app with 3 microservices (Spring Boot) and 1 config server (Spring Cloud Config). I'm using a private Github repository for storing config files and this is my application.properties file for config server:
spring.profiles.active=git
spring.cloud.config.server.git.uri=https://github.com/username/microservices-config.git
spring.cloud.config.server.git.clone-on-start=true
spring.cloud.config.server.git.default-label=master
spring.cloud.config.server.git.username=${GIT_USERNAME}
spring.cloud.config.server.git.password=${GIT_ACCCESS_TOKEN}
I have a Dockerfile based on which I have created a Docker image for config server (with no problems). I created a docker-compose.yml which I use to create and run containers, but it fails because of an exception in my cloud config app. The exception is:
org.eclipse.jgit.api.errors.TransportException: https://github.com/username/microservices-config.git: not authorized
Which basically means that my environment variables GIT_USERNAME and GIT_ACCCESS_TOKEN (which I set up in IntelliJ's "Edit configuration" and use in application.properties) are not available for the config server to use in a container.
The question is: do I need to somehow add those environment variables to the .jar, to the Docker image, or to the Docker container? I'm not sure how to make them available to the config server in a container.
Any help or explanation is welcomed :)
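One common way to make such variables available inside the container (a sketch, assuming docker-compose and that the variables are exported in the shell running it; the service and image names are illustrative) is to forward them in the service's environment section:

```yaml
# Hypothetical docker-compose.yml fragment for the config server
services:
  config-server:
    image: my-config-server:latest
    ports:
      - "8888:8888"
    environment:
      # Forward the host shell's variables into the container
      - GIT_USERNAME=${GIT_USERNAME}
      - GIT_ACCCESS_TOKEN=${GIT_ACCCESS_TOKEN}
```

IntelliJ run-configuration variables only exist inside IntelliJ's JVM process; a container started by docker-compose never sees them unless they are passed through like this (or via an env_file).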

Netflix Conductor WorkflowStatusListener

I'm using Netflix Conductor over Rest API. I'm able to create a workflow and run it but I would like to know how to use the workflowStatusListener feature.
I'm running Conductor on my localhost with Docker, and I saw that the server is a simple jar, probably a Spring Boot app. So, how do I pass my own jar with my Listener or Simple Tasks in this scenario?
I found out how to deploy it using the Docker image.
I copied the /app folder from my Docker container, changed the startup.sh script, and mounted my local folder.
I copied my jar into /app/libs and start the server with the classpath wildcard quoted, so that the JVM (not the shell) expands it to all jars in the folder:
java -cp "libs/*" com.netflix.conductor.bootstrap.Main $config_file $log4j_file

Build docker image without docker installed

Is it somehow possible to build images without having Docker installed? On Maven build of my project I'd like to produce a Docker image, but I don't want to force others to install Docker on their machines.
I can think of a VirtualBox image with Docker installed, but that is a rather heavy solution. Is there some way to build the image with a Maven plugin only, some Go code, or an already prepared VirtualBox image for exactly this purpose?
It boils down to the question of how to use Docker without forcing users to install anything, either just for the build or even for running Docker images.
UPDATE
There are some, not really up-to-date, Maven plugins for virtual machine provisioning with Vagrant or with VirtualBox. I have also found an article about building Docker images with Bazel.
So far I see two options: either I can somehow build the images only, or run some VM with a Docker daemon inside (which could be used not only for builds, but even for integration tests).
We can create a Docker image without Docker being installed.
Jib Maven and Gradle Plugins
Google has an open source tool called Jib that is relatively new, but
quite interesting for a number of reasons. Probably the most interesting
thing is that you don’t need docker to run it - it builds the image using
the same standard output as you get from docker build but doesn’t use
docker unless you ask it to - so it works in environments where docker is
not installed (not uncommon in build servers). You also don’t need a
Dockerfile (it would be ignored anyway), or anything in your pom.xml to
get an image built in Maven (Gradle would require you to at least install
the plugin in build.gradle).
Another interesting feature of Jib is that it is opinionated about
layers, and it optimizes them in a slightly different way than the
multi-layer Dockerfile created above. Just like in the fat jar, Jib
separates local application resources from dependencies, but it goes a
step further and also puts snapshot dependencies into a separate layer,
since they are more likely to change. There are configuration options
for customizing the layout further.
Please refer to this link: https://cloud.google.com/blog/products/gcp/introducing-jib-build-java-docker-images-better
For an example with Spring Boot, refer to https://spring.io/blog/2018/11/08/spring-boot-in-a-container
Have a look at the following tools:
Fabric8-maven-plugin - http://maven.fabric8.io/ - good maven integration, uses a remote docker (openshift) cluster for the builds.
Buildah - https://github.com/containers/buildah - builds without a docker daemon but does have other pre-requisites.
Fabric8-maven-plugin
The fabric8-maven-plugin brings your Java applications on to Kubernetes and OpenShift. It provides a tight integration into Maven and benefits from the build configuration already provided. This plugin focuses on two tasks: building Docker images and creating Kubernetes and OpenShift resource descriptors.
fabric8-maven-plugin seems particularly appropriate if you have a Kubernetes / Openshift cluster available. It uses the Openshift APIs to build and optionally deploy an image directly to your cluster.
I was able to build and deploy their zero-config spring-boot example extremely quickly, no Dockerfile necessary, just write your application code and it takes care of all the boilerplate.
Assuming you have the basic setup to connect to OpenShift from your desktop already, it will package up the project .jar in a container and start it on OpenShift. The minimum Maven configuration is to add the plugin to your pom.xml build/plugins section:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>fabric8-maven-plugin</artifactId>
  <version>3.5.41</version>
</plugin>
then build+deploy using
$ mvn fabric8:deploy
If you require more control and prefer to manage your own Dockerfile, it can handle this too; this is shown in samples/secret-config.
Buildah
Buildah is a tool that facilitates building Open Container Initiative (OCI) container images. The package provides a command line tool that can be used to:
create a working container, either from scratch or using an image as a starting point
create an image, either from a working container or via the instructions in a Dockerfile
images can be built in either the OCI image format or the traditional upstream docker image format
mount a working container's root filesystem for manipulation
unmount a working container's root filesystem
use the updated contents of a container's root filesystem as a filesystem layer to create a new image
delete a working container or an image
rename a local container
I don't want to force others to install docker on their machines.
If by "without Docker installed" you mean without having to install Docker locally on every machine running the build, you can leverage the Docker Engine API, which allows you to call a Docker daemon on a distant host.
The Docker Engine API is a RESTful API accessed by an HTTP client such
as wget or curl, or the HTTP library which is part of most modern
programming languages.
For example, the Fabric8 Docker Maven Plugin does just that using the DOCKER_HOST parameter. You'll need a recent Docker version, and you'll have to configure at least one Docker daemon properly so it can securely accept remote requests (there are lots of resources on this subject, such as the official doc, here or here). From then on, your Docker build can be done remotely without having to install Docker locally.
Google has released Kaniko for this purpose. It should be run as a container, whether in Kubernetes, Docker or gVisor.
I was running into the same problems, and I did not find any solution, so I developed odagrun, a runner for GitLab with an integrated registry API, DockerHub and Microbadger updates, etc.
It is open source and has an MIT license.
It is ideal for creating a Docker image on the fly, without the need for a Docker daemon or a root account, or any base image at all (image: scratch will do). It is currently still in development, but I use it every day.
Requirements
a project repository on GitLab
an OpenShift cluster (an openshift-online-starter will do for most medium/small projects)
An extract of how the Docker image for this project was created:
# create and push image to ImageStream:
build_rootfs:
  image: centos
  stage: build-image
  dependencies:
    - build
  before_script:
    - mkdir -pv rootfs
    - cp -v output/oc-* rootfs/
    - mkdir -pv rootfs/etc/pki/tls/certs
    - mkdir -pv rootfs/bin-runner
    - cp -v /etc/pki/tls/certs/ca-bundle.crt rootfs/etc/pki/tls/certs/ca-bundle.crt
    - chmod -Rv 777 rootfs
  tags:
    - oc-runner-shared
  script:
    - registry_push --rootfs --name=test-$CI_PIPELINE_ID --ISR --config

How to get incremental app deployment to Kubernetes

I tried to set up Continuous Deployment using Jenkins for my own microservice project, which is organized as a multi-module Maven project (each submodule representing a microservice). I use "Incremental build - only build changed modules" in Jenkins to avoid unnecessary building, and then use docker-maven-plugin to build the Docker images. However, how can I redeploy only the changed images to the Kubernetes cluster?
You can use a local Docker image registry:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
You can then push the development images to this registry as a build step and make your Kubernetes containers use this registry.
After you are ready, push the image to your production image registry and adjust the container manifests to use the proper registry.
More info on private registry server: https://docs.docker.com/registry/deploying/
Currently Kubernetes does not provide a proper solution for this, but there are a few workarounds mentioned here: https://github.com/kubernetes/kubernetes/issues/33664
I like this one: 'Fake a change to the Deployment by changing something other than the image'. We can do it this way:
Define an environment variable, say TIMESTAMP, with any value in the deployment manifest. In the CI/CD pipeline we set the value to the current timestamp and then pass this updated manifest to kubectl apply. This way, we are faking a change and Kubernetes will pull the latest image and deploy it to the cluster. Please make sure that imagePullPolicy: Always is set.
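This workaround can be sketched in the deployment manifest like so (the container and image names are illustrative; the CI/CD pipeline substitutes the placeholder value before running kubectl apply):

```yaml
# Hypothetical fragment of a deployment manifest
spec:
  template:
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:latest
        imagePullPolicy: Always
        env:
        - name: TIMESTAMP
          value: "PLACEHOLDER"   # CI/CD replaces this with the current timestamp
```

Because the Pod template changed, Kubernetes performs a rolling update, and with imagePullPolicy: Always the latest image is pulled.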
