Creating a Simple Hello World app in Kubernetes - spring-boot

Most software tech has a "Hello World" type example to get started on. With Kubernetes this seems to be lacking.
My scenario could not be simpler. I have a simple hello world app made with Spring Boot, with one REST controller that just returns: "Hello Hello!"
After I create my Dockerfile, I build an image like this:
docker build -t helloworld:1.0 .
Then I run it in a container like this:
docker run -p 8080:8080 helloworld:1.0
If I open up a browser now, I can access my application here:
http://localhost:8080/hello/
and it returns:
"Hello Hello!"
Great! So far so good.
Next I tag it (my Docker Hub username is ollyw123, and the ID of my image is 776...)
docker tag 7769f3792278 ollyw123/helloworld:firsttry
and push:
docker push ollyw123/helloworld
If I log into Docker Hub, I can see the pushed image there.
Now I want to connect this to Kubernetes. This is where I have plunged deep into a state of confusion.
My thinking is: I need to create a cluster. Somehow I need to connect this cluster to my image, and as I understand it, I just need to use the URL of the image to connect to (i.e.
https://hub.docker.com/repository/docker/ollyw123/helloworld)
Next I would have to create a service. This service would then be able to expose my "Hello World!" REST call through some port. This is my logical thinking, and to me this seems like a very simple thing to do, but the tutorials and documentation on Kubernetes are a minefield of confusion and dead ends.
Following on from the Spring Boot Kubernetes tutorial (https://spring.io/guides/gs/spring-boot-kubernetes/), I have to create a deployment object and then a service object, and then I have to "apply" them:
kubectl create deployment hello-world-dep --image=ollyw123/helloworld --dry-run -o=yaml > deployment.yaml
kubectl create service clusterip hello-world-dep --tcp=8080:8080 --dry-run -o=yaml >> deployment.yaml
kubectl apply -f deployment.yaml
OK. Now I see a service.
But now what???
How do I push this to the cloud (e.g. gcloud)? Do I need to create a cluster first, or is this already a cluster?
What should my next step be?

There are a couple of concepts that we need to go through regarding your question.
The first would be about the "Hello World" app in Kubernetes. Even though this exists (as mentioned by Limido in the comments [link]), the app itself is not a Kubernetes app, but an app created in the language of your choice, which was containerized and is deployed in Kubernetes.
So I would call it (in your case) a Dockerized Spring Boot HelloWorld app.
Okay, now that we have a container, we could simply deploy it with Docker. But what if your container dies, or you need to scale it up and down, manage volumes, handle network traffic and a bunch of other things? This starts to become complicated (imagine a real-life scenario with hundreds or even thousands of containers running at the same time). That's exactly where container orchestration comes into place.
Kubernetes helps you manage this complexity in a single place.
The third concept I'd like to talk about is the create and apply commands. You can find a more detailed explanation here, but both of them can be used to create resources in Kubernetes.
In your case, the create command is not creating the resources, because you are using --dry-run and writing the output to your deployment file, which you apply later on. But the following commands would also create your resources:
kubectl create deployment hello-world-dep --image=ollyw123/helloworld
kubectl create service clusterip hello-world-dep --tcp=8080:8080
Note that even though this works, if you need to share this deployment or commit it to a repository, you would need to get it:
kubectl get deployment hello-world-dep -o yaml > your-file.yaml
So having the definition file is really helpful and recommended.
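For reference, here is a minimal sketch of roughly what such a definition file can look like for this app (the labels and defaults below are assumptions; the file generated by the create commands above will differ slightly):
# Write a minimal Deployment + Service definition to a file (sketch; labels/defaults assumed)
cat <<'EOF' > deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-dep
  template:
    metadata:
      labels:
        app: hello-world-dep
    spec:
      containers:
      - name: helloworld
        image: ollyw123/helloworld
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world-dep
spec:
  type: ClusterIP
  selector:
    app: hello-world-dep
  ports:
  - port: 8080
    targetPort: 8080
EOF
# Create (or update) both resources declaratively
kubectl apply -f deployment.yaml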
Great... Going further...
When you have a deployment, you will also have a number of replicas that are expected to be running (even when you don't define it, the default value is 1). In your case, your deployment is managing one pod.
If you run:
kubectl get pods -o wide
You will get your pod hello-world-dep-hash and an IP address. This IP is the IP of your pod and you can access your application using it. But as pods are ephemeral, if your pod dies, Kubernetes will automatically create a new one for you with a new IP address. So if, for instance, you have a backend whose IP is constantly changing, you would need to update the frontend every time a new backend pod appears.
To solve that, Kubernetes has the Service, which exposes the deployment in a persistent way. So if your pod dies and a new one comes back, the address of your service stays the same, and all traffic is automatically routed to your new pod.
When you have more than one replica of your deployment, the service also load-balances traffic across all the available pods.
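To see this in action, here is a small sketch (it assumes the hello-world-dep Deployment and Service created above, the /hello/ endpoint from the question, and uses the public curlimages/curl image just to make a request from inside the cluster):
# Scale the deployment from 1 to 3 replicas
kubectl scale deployment hello-world-dep --replicas=3

# Watch the pods come up, each with its own (ephemeral) IP
kubectl get pods -o wide

# Call the application through the Service name instead of a pod IP;
# the request is load-balanced across whichever pods currently exist.
kubectl run tmp-curl --rm -it --image=curlimages/curl --restart=Never -- \
  curl http://hello-world-dep:8080/hello/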
Last but not least, your question!
You have asked, now what?
So basically, once you have your application containerized, you can deploy it almost anywhere. There are many different places you can run it. In your case you are running it locally, but you could take your deployment.yaml file and deploy your application to GKE, AKS or EKS, just to name the biggest ones; all major cloud providers have some type of managed Kubernetes service where you can spin up a cluster and start playing around.
Actually, to play around I'd recommend Katacoda, as they have free scenarios and you can use their clusters to play around.
Wow... That was a long answer...
Just to finish, I'd recommend the Network Introduction in Katacoda, as there are different types of Services depending on your scenario or what you need, and the tutorial goes through the different types in a hands-on approach.

In the context of Kubernetes, a Cluster is the environment where your Pods and Services are running. Think of it like a VM environment where you set up your web server and so on (although I don't like my own analogy).
If you want to run the same thing in GCloud, you create a Kubernetes cluster there, and all you need to do is apply your YAML files containing the Service and Deployment via the CLI that Google Cloud provides to interact with your cluster.
In order to interact with a GCloud GKE cluster from your local command prompt, you need to get the credentials for that cluster. This official GCloud document explains how to retrieve your cluster credentials. Once done, you can start interacting with the Kubernetes instance running in GCloud via the kubectl command from your command prompt.
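For example, with GKE the flow looks roughly like this (a sketch: the cluster name, zone and node count are placeholders, and deployment.yaml is the file created earlier):
# Create a small GKE cluster (name/zone/size are placeholders)
gcloud container clusters create hello-cluster --zone europe-west1-b --num-nodes 2

# Fetch the cluster credentials so that kubectl points at the GKE cluster
gcloud container clusters get-credentials hello-cluster --zone europe-west1-b

# Apply the same manifests you used locally and check the result
kubectl apply -f deployment.yaml
kubectl get pods,services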

The service that you have is of type ClusterIP, which is only accessible from within the Kubernetes cluster. You need to use a NodePort or LoadBalancer type service, or an Ingress, to expose the application outside the remote Kubernetes cluster (a set of VMs or bare-metal servers in a public or private cloud environment with Kubernetes deployed on them) or outside your local minikube/Docker Desktop. Once you do that, you should be able to access it using a browser or curl.
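For example, a hedged sketch assuming the hello-world-dep deployment from the question (on a cloud provider the LoadBalancer variant gets an external IP; on minikube, minikube service can give you a reachable URL for a NodePort service):
# Option 1: expose the deployment on a port of every node (NodePort)
kubectl expose deployment hello-world-dep --name=hello-world-np --type=NodePort --port=8080

# Option 2: ask the cloud provider for an external load balancer
kubectl expose deployment hello-world-dep --name=hello-world-lb --type=LoadBalancer --port=8080

# Find the NodePort / external IP that was assigned
kubectl get services hello-world-np hello-world-lb

# On minikube, this prints a reachable URL for the NodePort service
minikube service hello-world-np --url
After that, http://<node-ip>:<node-port>/hello/ (or the load balancer's external IP) should return the same "Hello Hello!".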

Related

Trouble running Elastic agent in a Kubernetes deployment with official Docker image

I'm trying to run only the Elastic agent as a deployment in a Kubernetes cluster. The reason I'm doing this is maybe an atypical usage of the Elastic agent: I only want to deploy the HTTP log endpoint integration and have other pods send logs to this Elastic agent. I'm not using it to collect cluster metrics (so the manifest they supply is not relevant to me).
I'm using the image docker.elastic.co/beats/elastic-agent:8.4.2. Apparently, this image needs to write files and directories to /usr/share/elastic-agent/, which at first was leading to errors along the lines of failed: mkdir /usr/share/elastic-agent/state: read-only file system. So, I created an emptyDir volume and mounted it at /usr/share/elastic-agent. Now, that error disappears, but is replaced with a new error:
/usr/local/bin/docker-entrypoint: line 14: exec: elastic-agent: not found
The entrypoint of the image is
ENTRYPOINT ["/usr/bin/tini" "--" "/usr/local/bin/docker-entrypoint"]
and the docker-entrypoint script is apparently unable to find the elastic-agent binary.
A couple of questions:
Why is it not finding the elastic-agent executable? It is definitely at that path.
More broadly: I am new to Elasticsearch -- this is only to set up a QA environment meant to test a product feature where we forward data from certain of our services to customers' Elastic Cloud deployments. I thought deploying the agent as a service in the same cluster where these services run would be the least painful way to do this. Is this not a good way to achieve what I describe in the first paragraph?
Assuming I can get the deployment to actually work, is this the way the next steps would go?
Create the "Custom HTTP Endpoint Logs" integration on the agent policy, listening on a given port and on all interfaces.
Map that port to an external port for the pod.
Send data to the pod at that external port.
The issue is that mounting the emptyDir volume at /usr/share/elastic-agent hides the directory's original contents, including the elastic-agent binary. Remove this volume and set readOnlyRootFilesystem: false instead.
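A minimal sketch of what the relevant Deployment could look like under that assumption (the name and labels are placeholders, and the Fleet/enrollment configuration the agent needs is omitted; the point is only the securityContext and the absence of any mount over /usr/share/elastic-agent):
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic-agent          # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elastic-agent
  template:
    metadata:
      labels:
        app: elastic-agent
    spec:
      containers:
      - name: elastic-agent
        image: docker.elastic.co/beats/elastic-agent:8.4.2
        securityContext:
          readOnlyRootFilesystem: false   # let the agent write under /usr/share/elastic-agent
        # note: no volumeMounts over /usr/share/elastic-agent, so the binary stays visible
EOF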

Get a list of all running pods for your own service

The situation
I have a Spring Boot service running inside Kubernetes
Multiple instances are running (the Kubernetes deployment has multiple replicas configured)
I have an incoming HTTP request from outside the cluster that hits one of the instances
I want to relay this fact to all other instances
Question
Is there an easy solution, with no external components needed, to find a list of "all pods of my service"? I think I remember that this could be possible via DNS resolution, but my Google-fu seems too weak to find it.
Of course I can always do this via some other solution (e.g. Redis, Kafka, whatever), but since I only need this for a one-off feature I would like to not introduce any moving parts and keep it as simple as possible.
Internal K8s DNS would only get you the IP address of the Service. Luckily, there is the Endpoints resource, which exists exactly for the purpose of getting the Pods that back a Service.
With kubectl you can check endpoints out like this:
kubectl get endpoints nginx -o yaml
With this command you get hold of the Pod names:
kubectl get endpoints nginx -o=jsonpath='{.subsets[*].addresses[*].targetRef.name}'
To execute the same thing from your Spring Boot app, you can try to make use of the official Kubernetes Java client library.
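If you would rather not pull in the client library, the same Endpoints object can also be read directly from the Kubernetes API from inside a pod. A hedged sketch (it assumes the pod's service account has been granted permission to get endpoints, and reuses the nginx service name from the example above):
# Run from inside a pod: the API server is reachable as kubernetes.default.svc,
# and each pod gets a service account token and CA certificate mounted here.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer ${TOKEN}" \
     "https://kubernetes.default.svc/api/v1/namespaces/${NAMESPACE}/endpoints/nginx"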
You can list all pods behind a service by running
$ kubectl get endpoints <service-name> -o=jsonpath='{.subsets[*].addresses[*].ip}' | tr ' ' '\n' | xargs -I % kubectl get pods --field-selector=status.podIP=%

Run several microservices docker image together on local dev with Minikube

I have around 20 microservices whose behaviour I want to check in my local development environment. The microservices are Spring Boot services with Maven builds. I wanted to know: when I have to run them on my AWS server, can I run all these containers individually? They might have a shared database, so will that be an issue I might face? Or is it possible to run all these services together in one single Docker image?
I also have to configure this with Kubernetes, so I have set up Minikube in my local dev environment. It would be helpful to know if there are some considerations to be taken into account while running around 20 services on Minikube or even a full Kubernetes environment.
PS: I know this is a basic question, but I don't have much experience with DevOps.
Ideally you should have a different Docker image for each of the microservices and create a Kubernetes deployment for each of them. This keeps the scaling of individual microservices decoupled from each other. Also, communication between microservices should go via a Kubernetes Service. This makes communication stable, because Service IPs and FQDNs don't change even as pods are created, deleted, and scaled up and down.
Just be cautious about how much memory and CPU the microservices will need and whether the machine running minikube has that many resources. If the available memory and CPU of a Kubernetes node are not enough to schedule a pod, the pod will be stuck in the Pending state.
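For example, you can check what the minikube node has available and, if needed, recreate it with more resources (a sketch; the numbers are only an illustration for roughly 20 small Spring Boot services):
# See how much CPU/memory the minikube node has and how much is already requested
kubectl describe node minikube

# List pods that could not be scheduled
kubectl get pods --all-namespaces --field-selector=status.phase=Pending

# Recreate minikube with more resources if needed (values are illustrative)
minikube delete
minikube start --cpus=6 --memory=12288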
As you have so many microservices, I suggest you create a Kubernetes cluster on AWS of 3-4 VMs (more info here) and then try to deploy all your microservices on it. For that you need to build a container image for each service individually and create a Kubernetes deployment for each service.
"Can I run all these containers individually? They might have a shared database, so will that be an issue I might face?"
As you have a shared database, I suggest you run your database server on a separate host and connect to it remotely from your services. This way you will be able to share the database between your microservices.

Running Oracle database as a docker container in Kubernetes

I am actually new to Kubernetes and I am in the process of learning it by installing minikube on my local desktop. Maybe I am unaware of something, but I wanted to ask this question to the experts.
Here is what I am trying to achieve. I have 3 Docker containers created in my local environment: 2 Java web application containers (app1, app2) and 1 Oracle database container (oracleDB).
The app1 application depends on oracleDB. I installed minikube in my local environment to try out Kubernetes.
I was able to deploy my applications app1, app2 and oracleDB in minikube, bring up the applications, and access them using a URL like http://local_minikube_ip:31213/app1
After a few hours my app was not responding, so I had to restart minikube. When I restarted minikube, I found out that I had lost the database imported into the Oracle Docker container. I had to re-import the database and also SSH in and restart the app1 and app2 containers.
So I want to know how everyone handles this scenario. Is there any way I can preserve the data in the Oracle database container across Kubernetes restarts?
Can someone help me with this please ?
For data persistence you need to define Volumes in your Pods; this is often done in conjunction with Persistent Volume Claims and Persistent Volumes.
Take a look at https://kubernetes.io/docs/concepts/storage/volumes/ and https://kubernetes.io/docs/concepts/storage/persistent-volumes/ to get a better picture of how to achieve that.
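A minimal sketch of how that can look for the Oracle container (the claim name, size, image and the /opt/oracle/oradata mount path are assumptions that depend on the Oracle image you use):
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: oracledb-data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: oracledb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: oracledb
  template:
    metadata:
      labels:
        app: oracledb
    spec:
      containers:
      - name: oracledb
        image: your-oracle-image:tag        # placeholder for the Oracle database image
        volumeMounts:
        - name: data
          mountPath: /opt/oracle/oradata    # adjust to wherever your image keeps its datafiles
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: oracledb-data
EOF
With the datafiles on the PersistentVolumeClaim, the database survives pod restarts and minikube restarts, because the claim (and its volume) outlives any individual container.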

High availability issue with rethinkdb cluster in kubernetes

I'm setting up a RethinkDB cluster inside Kubernetes, but it doesn't work as expected for the high-availability requirement. When a pod goes down, Kubernetes creates another pod, which runs another container from the same image; the old mounted data (which is already persisted on the host disk) is erased, and the new pod joins the cluster as a brand new instance. I'm running k8s on CoreOS v773.1.0 stable.
Please correct me if I'm wrong, but that way it seems impossible to set up a database cluster inside k8s.
Update: As documented here http://kubernetes.io/v1.0/docs/user-guide/pod-states.html#restartpolicy, with RestartPolicy: Always the container will be restarted if it exits with a failure. Does "restart" mean it brings up the same container, or creates another one? Or maybe it is because I stopped the pod via the command kubectl stop po, so it doesn't restart the same container?
That's how Kubernetes works, and other solutions probably work the same way. When a machine is dead, the container on it will be rescheduled to run on another machine, and that other machine has no state from the container. Even when it is the same machine, the container on it is created as a new one instead of restarting the exited container (with the data inside it).
To persist data, you need some kind of external storage (NFS, EBS, EFS, ...). In the case of k8s, you may want to look into this: https://github.com/kubernetes/kubernetes/blob/master/docs/design/persistent-storage.md This GitHub issue also has a lot of information: https://github.com/kubernetes/kubernetes/issues/6893
And indeed, that's the way to achieve HA in my opinion. Containers should all be stateless; they shouldn't hold anything inside them. Any configuration they need should be stored outside, using tools like Consul or etcd. By separating things this way, it's easier to restart a container.
Try using PetSets: http://kubernetes.io/docs/user-guide/petset/
That allows you to name your (pet) pods. If a pod is killed, it will come back with the same name.
A summary of the PetSet feature is as follows:
Stable hostname
Stable domain name
Multiple pets of a similar type will be named with a "-n" suffix (rethink-0, rethink-1, ... rethink-n, for example)
Persistent volumes
Now apps can cluster/peer together
When a pet pod dies, a new one will be started and will assume all the same "state" (including disk) as the previous one.
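For reference, PetSets were later renamed StatefulSets. A minimal sketch of the same idea in a current manifest (the names, image and sizes are placeholders):
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: rethink
spec:
  serviceName: rethink            # a headless Service with this name is assumed to exist
  replicas: 3                     # pods are named rethink-0, rethink-1, rethink-2
  selector:
    matchLabels:
      app: rethink
  template:
    metadata:
      labels:
        app: rethink
    spec:
      containers:
      - name: rethinkdb
        image: rethinkdb:2.4      # placeholder image/tag
        volumeMounts:
        - name: data
          mountPath: /data        # data directory used by the official RethinkDB image
  volumeClaimTemplates:           # each pod gets (and keeps) its own PersistentVolumeClaim
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 5Gi
EOF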
