Trouble running Elastic agent in a Kubernetes deployment with official Docker image - elasticsearch

I'm trying to run only the Elastic agent as a deployment in a Kubernetes cluster. The reason I'm doing this is maybe an atypical usage of the Elastic agent: I only want to deploy the HTTP log endpoint integration and have other pods send logs to this Elastic agent. I'm not using it to collect cluster metrics (so the manifest they supply is not relevant to me).
I'm using the image docker.elastic.co/beats/elastic-agent:8.4.2. Apparently, this image needs to write files and directories to /usr/share/elastic-agent/, which at first was leading to errors along the lines of failed: mkdir /usr/share/elastic-agent/state: read-only file system. So, I created an emptyDir volume and mounted it at /usr/share/elastic-agent. Now, that error disappears, but is replaced with a new error:
/usr/local/bin/docker-entrypoint: line 14: exec: elastic-agent: not found
The entrypoint of the image is
ENTRYPOINT ["/usr/bin/tini" "--" "/usr/local/bin/docker-entrypoint"]
and it is apparently unable to find /usr/local/bin/docker-entrypoint.
A couple questions:
Why is it not finding the elastic-agent executable? It is definitely at that path.
More broadly: I am new to Elasticsearch -- this is only to set up a QA environment meant to test a product feature where we forward data from some of our services to customers' Elastic Cloud deployments. I thought deploying the agent as a service in the same cluster where these services run would be the least painful way to do this. Is this not a good way to achieve what I described in the first paragraph?
Assuming I can get the deployment to actually work, is this the way the next steps would go?
Create the "Custom HTTP Endpoint Logs" integration on the agent policy, listening on a given port and on all interfaces.
Map that port to an external port for the pod.
Send data to the pod at that external port.

The issue is that mounting the emptyDir volume at /usr/share/elastic-agent shadows everything the image ships at that path, including the elastic-agent binary itself, which is why the entrypoint can no longer find it. Remove this volume and instead set readOnlyRootFilesystem: false in the container's securityContext so the agent can create its state directories.
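A minimal sketch of what that could look like in the deployment (the names and labels here are assumed; only the image and the securityContext setting come from the thread):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: elastic-agent            # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: elastic-agent
  template:
    metadata:
      labels:
        app: elastic-agent
    spec:
      containers:
      - name: elastic-agent
        image: docker.elastic.co/beats/elastic-agent:8.4.2
        securityContext:
          readOnlyRootFilesystem: false   # lets the agent create /usr/share/elastic-agent/state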

Related

Build a Kubernetes Operator for rolling updates

I have created a Kubernetes application (say deployment D1, using Docker image I1) that will run on client clusters.
Requirement 1:
Now I want to roll out updates whenever I update my Docker image I1, without any effort on the client side
(somehow, the client cluster should automatically pull the latest Docker image).
Requirement 2:
Whenever I update a particular ConfigMap, the client cluster should automatically start using the new ConfigMap.
How should I achieve this?
Using Kubernetes CronJobs?
Kubernetes Operators?
Or something else?
I heard that a k8s Operator can be useful.
Starting with Requirement 2:
Whenever I update a particular ConfigMap, the client cluster should automatically start using the new ConfigMap.
If the ConfigMap is mounted into the pod as a volume, the mounted files get updated automatically when the ConfigMap changes; however, if it is injected as environment variables, a restart is the only option, unless you use a sidecar solution that watches for changes or restart the process yourself.
For ref: Update configmap without restarting POD
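As an illustration, a minimal sketch of mounting a ConfigMap as a volume so the mounted files are refreshed in place (all names here are hypothetical; note that subPath mounts do not auto-update):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: d1                     # hypothetical deployment name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: d1
  template:
    metadata:
      labels:
        app: d1
    spec:
      containers:
      - name: app
        image: registry.example.com/i1:latest   # hypothetical image
        volumeMounts:
        - name: config
          mountPath: /etc/app  # files here are refreshed when the ConfigMap changes
      volumes:
      - name: config
        configMap:
          name: my-config      # hypothetical ConfigMap name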
How should I achieve this?
imagePullPolicy alone is not a good option as I see it; manual intervention is still required to restart the deployment so that the client cluster pulls the latest image, and the rollout won't happen in a controlled manner.
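For context, this is the setting being discussed; even with it set, nothing pulls the new image until the pods are recreated (the container name and image below are placeholders):

    spec:
      containers:
      - name: app
        image: registry.example.com/i1:latest
        imagePullPolicy: Always   # pulls on every container start, but only when a pod is (re)created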
Using Kubernetes CronJobs?
On which side would you run the CronJobs? If client-side, it's fine to do it that way as well.
Otherwise, you can keep a deployment with an exposed API that runs a Job to update the deployment to the latest tag whenever an image gets pushed to your Docker registry, as sketched below.
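A rough sketch of what such a Job could look like, assuming a service account with RBAC permission to patch deployments (all names and the image reference are hypothetical):

apiVersion: batch/v1
kind: Job
metadata:
  name: update-d1-image                  # hypothetical Job name
spec:
  template:
    spec:
      serviceAccountName: deploy-updater # must be allowed to patch deployments
      restartPolicy: Never
      containers:
      - name: kubectl
        image: bitnami/kubectl:latest
        # roll the deployment to the newly pushed tag
        command: ["kubectl", "set", "image", "deployment/d1", "app=registry.example.com/i1:v2"]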
Kubernetes Operators?
An operator is a good, native Kubernetes option; you can write one in Go, Python, or your preferred language, with or without the Operator Framework or client libraries.
Or something else?
If you are just looking to update the deployment, go with running the API in the deployment, or a Job you can schedule in a controlled manner; an operator is fine too and would be a more native approach, if you can create, manage, and deploy one.
If in the future you have a requirement to manage all clusters (deployments, services, firewall, network) of multiple clients from a single source of truth, you can explore Anthos.
Config management from Git repo sync with Anthos
You can build a Kubernetes operator to watch your particular ConfigMap and trigger a restart. As for the rolling updates, you can configure the deployment according to your requirements. A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template) is changed, for example, if the labels or container images of the template are updated. Add the rolling update specification to the .spec section of your Kubernetes Deployment:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 3        # the maximum number of pods created beyond the desired count during the update
    maxUnavailable: 1  # the maximum number of pods that may be unavailable during the update
minReadySeconds: 5             # seconds a new pod must be ready before it counts as available
progressDeadlineSeconds: 100   # seconds to wait before the rollout is reported as stalled

Creating a Simple Hello World app in Kubernetes

Most software tech has a "Hello World" type example to get started on. With Kubernetes this seems to be lacking.
My scenario cannot be simpler. I have a simple hello world app made with Spring-Boot with one Rest controller that just returns: "Hello Hello!"
After I create my docker file, I build an image like this :
docker build -t helloworld:1.0 .
Then I run it in a container like this :
docker run -p 8080:8080 helloworld:1.0
If I open up a browser now, I can access my application here :
http://localhost:8080/hello/
and it returns :
"Hello Hello!"
Great! So far so good.
Next I tag it (my docker-hub is called ollyw123, and the ID of my image is 776...)
docker tag 7769f3792278 ollyw123/helloworld:firsttry
and push :
docker push ollyw123/helloworld
If I log into Docker Hub I will see the pushed image there.
Now I want to connect this to Kubernetes. This is where I have plunged deep into a state of confusion.
My thinking is: I need to create a cluster, and somehow I need to point this cluster at my image; as I understand it, I just need to use the URL of the image to connect to it (i.e.
https://hub.docker.com/repository/docker/ollyw123/helloworld)
Next I would have to create a service. This service would then be able to expose my "Hello World!" REST call through some port. This is my logical thinking, and to me this would seem like a very simple thing to do, but the tutorials and documentation on Kubernetes are a minefield of confusion and dead ends.
Following on from the spring-boot kubernetes tutorial (https://spring.io/guides/gs/spring-boot-kubernetes/) I have to create a deployment object, and then a service object, and then I have to "apply" it :
kubectl create deployment hello-world-dep --image=ollyw123/helloworld --dry-run=client -o=yaml > deployment.yaml
echo --- >> deployment.yaml
kubectl create service clusterip hello-world-dep --tcp=8080:8080 --dry-run=client -o=yaml >> deployment.yaml
kubectl apply -f deployment.yaml
OK. Now I see a service.
But now what???
How do I push this to the cloud? (eg. gcloud) Do I need to create a cluster first, or is this already a cluster?
What should my next step be?
There are a couple of concepts that we need to go through regarding your question.
The first would be about the "Hello World" app in Kubernetes. Even though this exists (as mentioned by Limido in the comments [link]), the app itself is not a Kubernetes app, but an app created in the language of your choice, which was containerized and is deployed in Kubernetes.
So I would call it (in your case) a Dockerized Spring Boot HelloWorld app.
Okay, now that we have a container, we could simply deploy it by running docker, but what if your container dies, or you need to scale it up and down, manage volumes, network traffic, and a bunch of other things? This starts to become complicated (imagine a real-life scenario, with hundreds or even thousands of containers running at the same time). That's exactly where Container Orchestration comes into place.
Kubernetes helps you manage this complexity in a single place.
The third concept that I'd like to talk about is the create and apply commands. You can definitely find a more detailed explanation here, but both of them can be used to create resources in Kubernetes.
In your case, the create command is not creating the resources, because you are using --dry-run and adding the output to your deployment file, which you apply later on, but the following commands would also create your resources:
kubectl create deployment hello-world-dep --image=ollyw123/helloworld
kubectl create service clusterip hello-world-dep --tcp=8080:8080
Note that even though this works, if you need to share this deployment or commit it to a repository, you would need to get it:
kubectl get deployment hello-world-dep -o yaml > your-file.yaml
So having the definition file is really helpful and recommended.
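For orientation, the generated deployment.yaml will look roughly like this (trimmed; the exact kubectl output can include a few extra empty fields):

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-world-dep
  name: hello-world-dep
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-dep
  template:
    metadata:
      labels:
        app: hello-world-dep
    spec:
      containers:
      - image: ollyw123/helloworld
        name: helloworld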
Great... Going further...
When you have a deployment you will also have a number of replicas that are expected to be running (even when you don't define it, the default value is 1). In your case, your deployment is managing one pod.
If you run:
kubectl get pods -o wide
You will get your pod hello-world-dep-hash and an IP address. This IP is the IP of your container, and you can access your application using it. But as pods are ephemeral, if your pod dies, Kubernetes will create a new one for you (automatically) with a new IP address. So if you have, for instance, a backend whose IP is constantly changing, you would need to manage this change in the frontend every time you have a new backend pod.
To solve that, Kubernetes has the Service, which exposes the deployment in a persistent way. So if your pod dies and a new one comes up, the address of your service stays the same, and all the traffic is automatically routed to the new pod.
When you have more than one replica of your deployment, the service also balances the load across all the available pods.
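For reference, the kubectl create service command above produces a manifest roughly equivalent to this:

apiVersion: v1
kind: Service
metadata:
  name: hello-world-dep
spec:
  type: ClusterIP
  selector:
    app: hello-world-dep   # matches the label that kubectl create deployment sets
  ports:
  - port: 8080
    targetPort: 8080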
Last but not least, your question!
You have asked, now what?
So basically, once you have your application containerized, you can deploy it almost anywhere. In your case you are running it locally, but you could take your deployment.yaml file and deploy your application to GKE, AKS, or EKS, just to quote the biggest ones; all major cloud providers have some type of managed Kubernetes service where you can spin up a cluster and start playing around.
Actually, to play around I'd recommend Katacoda, as they have free scenarios and you can use their cluster to play around.
Wow... That was a long answer...
Just to finish, I'd recommend the Network Introduction in Katacoda, as there are different types of Services depending on your scenario or what you need, and the tutorial goes through the different types in a hands-on approach.
In the context of Kubernetes, a Cluster is the environment where your Pods and Services run. Think of it like a VM environment where you set up your web server and so on (although I don't like my own analogy).
If you want to run the same thing in GCloud, you create a Kubernetes cluster there, and all you need to do is apply your YAML files containing the Service and Deployment via the CLI that Google Cloud provides for interacting with your cluster.
In order to interact with a GCloud GKE cluster from your local command prompt, you need to get the credentials for that cluster. This official GCloud document explains how to retrieve your cluster credentials. Once done, you can start interacting with the Kubernetes instance running in GCloud via the kubectl command from your command prompt.
The service that you have is of type ClusterIP, which is only accessible from within the Kubernetes cluster. You need to use a NodePort or LoadBalancer type service, or an Ingress, to expose the application outside the remote Kubernetes cluster (a set of VMs or bare-metal servers in a public or private cloud environment with Kubernetes deployed on them) or your local minikube/Docker Desktop. Once you do that, you should be able to access it using a browser or curl.
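For example, a NodePort variant of the same service is a minimal way to reach it from outside (the nodePort value is just an example from the allowed 30000-32767 range):

apiVersion: v1
kind: Service
metadata:
  name: hello-world-nodeport   # hypothetical name
spec:
  type: NodePort
  selector:
    app: hello-world-dep
  ports:
  - port: 8080
    targetPort: 8080
    nodePort: 30080   # reachable at http://<node-ip>:30080/hello/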

How to monitor an ElasticSearch Cluster on the Elastic Cloud with Datadog?

We have an elasticsearch cluster deployed to the Elastic Cloud and would like to send monitoring/health metrics to Datadog. What is the best way to do that?
It seems like our options are:
Installing the datadog agent binary via the plugins upload
Using Metricbeat -> Logstash -> datadog_metrics output
You can deploy the Datadog agent in a container/instance that you manage and then configure it according to these instructions to gather metrics from the remote Elasticsearch cluster hosted on Elastic Cloud. You need to create a conf.yaml file in the elastic.d/ directory and provide the required information (Elasticsearch endpoint/URL, username, password, port, etc.) for the agent to be able to connect to the cluster. You may find a sample configuration file here.
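A minimal sketch of what such a conf.yaml might contain, assuming basic auth against the Elastic Cloud endpoint (the URL and credentials below are placeholders; check the sample file for the full set of options):

init_config:

instances:
  - url: https://my-deployment.es.us-east-1.aws.found.io:9243   # your Elastic Cloud endpoint
    username: datadog                                           # placeholder credentials
    password: <PASSWORD>
    cluster_stats: true    # collect cluster-level stats
    pshard_stats: true     # collect primary shard stats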
As George Tseres mentioned above, the way I got this working was to set up collection on a separate instance (through Docker) and then configure it to read from the specific Elastic Cloud instances.
I ended up making this: https://github.com/crwang/datadog-elasticsearch, building that docker image, and then pushing it up to AWS ECR.
Then, I spun up a Fargate service / task to run the container.
I also set it to run locally with docker-compose as a test.
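For the local docker-compose test, a sketch along these lines should work, assuming your check configuration sits in ./conf.d/elastic.d (the API key is a placeholder):

version: "3"
services:
  datadog-agent:
    image: datadog/agent:7
    environment:
      - DD_API_KEY=<YOUR_DATADOG_API_KEY>
      - DD_SITE=datadoghq.com
    volumes:
      - ./conf.d/elastic.d:/etc/datadog-agent/conf.d/elastic.d:ro   # mounts the elastic check config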

Deploying services in GKE k8s cluster using Golang k8s client

I'm able to create a GKE cluster using the Golang container lib here.
Now, for my Golang k8s client to be able to deploy my k8s deployment files there, I need to get the kubeconfig from the GKE cluster. However, I can't find the relevant API for that in the container lib above. Can anyone please point out what I am missing?
As per @Subhash's suggestion, I am posting the answer from this question:
The GKE API does not have a call that outputs a kubeconfig file (or fragment). The specific processing between fetching a full cluster definition and updating the kubeconfig file is implemented in Python in the gcloud tooling. It isn't part of the Go SDK, so you'd need to implement it yourself.
You can also try using kubectl config set-credentials (see this) and/or see if you can vendor the libraries that implement that function if you want to do it programmatically.
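If you do implement it yourself, the fragment you need to produce is small. A minimal kubeconfig built from the cluster's endpoint and CA certificate (as returned by the GKE API) might look like this, with placeholders for the values you fetch:

apiVersion: v1
kind: Config
clusters:
- name: my-gke-cluster                       # hypothetical name
  cluster:
    server: https://<CLUSTER_ENDPOINT>       # cluster.endpoint from the GKE API
    certificate-authority-data: <BASE64_CA>  # cluster.masterAuth.clusterCaCertificate
users:
- name: my-gke-user
  user:
    token: <ACCESS_TOKEN>                    # e.g. an OAuth2 access token for your service account
contexts:
- name: my-gke-context
  context:
    cluster: my-gke-cluster
    user: my-gke-user
current-context: my-gke-context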

Cannot get Fabric8 to fully launch in AWS using stackpoint

I have been trying to spin up a Kubernetes/Fabric8 installation on AWS using Stackpoint as described in this video: https://www.youtube.com/watch?v=lNRpGJTSMKA
My problem is that three of the apps won't start because no volumes are available, and I cannot see how to resolve those PV requests. For example, Gogs is reporting the following error:
Unable to mount volumes for pod "gogs-2568819805-bcw8e_default(03d618b9-7477-11e6-8c6b-0a945216fb91)": timeout expired waiting for volumes to attach/mount for pod "gogs-2568819805-bcw8e"/"default". list of unattached/unmounted volumes=[gogs-data]
Error syncing pod, skipping: timeout expired waiting for volumes to attach/mount for pod "gogs-2568819805-bcw8e"/"default". list of unattached/unmounted volumes=[gogs-data]
I am pretty sure this is very simple but I cannot see how to connect the dots here from the various K8s and Fabric8 docs. I can create a new EBS volume in AWS easily enough, but cannot see how to then update this running stack to attach it to these services. Any help would be greatly appreciated!
Sorry about that; what version of gofabric8 are you using? We're currently adding persistent volume support for the core platform apps, although the integration with StackPoint isn't quite there yet. Hopefully soon though.
For now you should be able to disable the PV claims by passing --pv=false during the deploy, i.e. gofabric8 deploy --pv=false. We'll look at using this as the default until the integration is there and we can leverage AWS persistent volumes.
We just shipped functionality that allows you to create and manage AWS volumes for Kubernetes. You get a volume, a PV, and a claim; just name the claim to match what Fabric8 requires. Eventually, you'll be able to use dynamic volume creation.
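To wire up an EBS volume manually in the meantime, a sketch like this should satisfy the claim (the volume ID is a placeholder, and the claim name gogs-data is inferred from the error message above):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: gogs-data-pv                        # hypothetical PV name
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-0123456789abcdef0         # your EBS volume ID
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gogs-data                           # the claim the Gogs pod is waiting on
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi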
