I have created a Kubernetes application (say, deployment D1 using Docker image I1) that will run on client clusters.
Requirement 1:
Now I want to roll out updates whenever I update my Docker image I1, without any effort on the client side
(somehow, the client cluster should automatically pull the latest Docker image)
Requirement 2:
Whenever I update a particular ConfigMap, the client cluster should automatically start using the new ConfigMap
How should I achieve this?
Using Kubernetes CronJobs?
Kubernetes Operators?
Or something else?
I heard that a k8s Operator can be useful.
Starting with Requirement 2:
Whenever I update a particular ConfigMap, the client cluster should
automatically start using the new ConfigMap
If the ConfigMap is mounted into the deployment as a volume, it will get auto-updated (after the kubelet's sync period). However, if it is injected as environment variables, a restart is the only option, unless you use a sidecar solution that watches for changes and restarts the process.
For reference: Update configmap without restarting POD
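To illustrate the difference, a minimal sketch (the ConfigMap name app-config and its key are placeholders, not from the question): the env-injected value is fixed at pod start, while the volume-mounted file is refreshed in place.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: d1
spec:
  selector:
    matchLabels:
      app: d1
  template:
    metadata:
      labels:
        app: d1
    spec:
      containers:
      - name: app
        image: I1
        env:
        - name: APP_MODE              # injected once at startup; only a pod restart picks up changes
          valueFrom:
            configMapKeyRef:
              name: app-config
              key: mode
        volumeMounts:
        - name: config                # mounted file is refreshed in place after the kubelet sync period
          mountPath: /etc/app
      volumes:
      - name: config
        configMap:
          name: app-config

One caveat worth knowing: volumes mounted with subPath are not updated automatically.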
How should I achieve this?
imagePullPolicy on its own is not a good option as I see it; in that case, manual intervention is still required to restart the deployment so that it pulls the latest image on the client side, and it won't happen in a controlled manner.
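For context, this is the pattern under discussion, as a sketch (the container name and tag are assumptions): even with imagePullPolicy: Always and a floating tag, the new image is only pulled when a pod is recreated, so someone still has to restart the deployment.

containers:
- name: app
  image: I1:latest          # floating tag
  imagePullPolicy: Always   # re-pulls on every pod (re)creation, but nothing recreates the pod for you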
Using Kubernetes CronJobs?
On which side will you run the CronJobs? If client-side, it's fine to do it
that way as well (see the sketch after the next option).
Alternatively, you can keep a deployment with an exposed API that runs a Job to
update the deployment to the latest tag whenever an image gets pushed
to your Docker registry.
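As a sketch of either variant (the name, schedule, and service account below are assumptions, and the service account needs RBAC permission to patch deployments): a CronJob run on the client side that restarts the deployment so its pods re-pull the image. Stripping the schedule and the jobTemplate wrapper gives you the one-shot Job that the exposed API would create.

apiVersion: batch/v1
kind: CronJob
metadata:
  name: refresh-d1
spec:
  schedule: "0 * * * *"                        # hourly; pick a cadence that suits you
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: deploy-refresher # assumed; needs RBAC to patch deployments
          restartPolicy: Never
          containers:
          - name: kubectl
            image: bitnami/kubectl:latest
            command: ["kubectl", "rollout", "restart", "deployment/d1"]

kubectl rollout restart bumps an annotation in the pod template, so the pods are recreated in a controlled rolling fashion and pull the latest image (given imagePullPolicy: Always).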
Kubernetes Operators?
An operator is a good K8s-native option; you can write one in Go,
Python, or your preferred language, with or without the Operator Framework or the client libraries.
Or something else?
If you are just looking to update the deployment, go with the API running in a deployment, or a Job you can schedule in a controlled manner. There is no issue with the operator either; it would be the more native and better approach, if you can create, manage, and deploy one.
If, in the future, you need to manage all the resources (deployments, services, firewalls, networking) of multiple clients' clusters from a single source of truth, you can explore Anthos.
Config management from Git repo sync with Anthos
You can build a Kubernetes operator to watch your particular ConfigMap and trigger a restart of the affected workloads. As for the rolling updates, you can configure the deployment according to your requirements. A Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template) is changed, for example, if the labels or container images of the template are updated. Add the rolling-update specification to the .spec.strategy section of your Kubernetes Deployment:
strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 3        # the maximum number of pods to be created beyond the desired state during the upgrade
    maxUnavailable: 1  # the maximum number of unavailable pods during an update
(The timeoutSeconds, intervalSeconds, and updatePeriodSeconds knobs belong to OpenShift's DeploymentConfig rolling params, not to a vanilla Kubernetes Deployment, so they are left out here.)
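For context, here is where that strategy block sits in a complete Deployment, and what actually triggers the rollout, namely a change to .spec.template such as bumping the image tag (names and tags below are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: d1
spec:
  replicas: 5
  selector:
    matchLabels:
      app: d1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 3
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: d1
    spec:
      containers:
      - name: app
        image: I1:v2   # changing v1 -> v2 edits .spec.template and starts the rolling update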
I have a Spring Boot web application, packaged as a container that I am deploying in Kubernetes. The application has a scheduled job that must run periodically only on one of the running pods.
I'm using Spring Cloud Kubernetes's leader election capability, with Fabric8 as the provider, to select the pod that runs this job.
Suppose I run this service with one pod instance. The first time I start this service, it elects the one pod as the leader, and everything works as expected. But if I kill the pod, the new pod instance does not recover; it is unable to elect itself as the leader. The configmap that indicates which pod is the leader still points to the old killed pod.
If I delete the configmap entry, the pod detects that there is no leader and then elects itself.
How can I fix this so that my new pod elects itself as the leader automatically, without me having to delete the configmap?
Ideally, when the leader dies, its lease stops being renewed (as in the example), so after a timeout the lease should be acquired by another candidate.
Otherwise, terminate the pod gracefully and execute the cleanup logic in a shutdown hook (see the sketch below).
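As a sketch of that second route (everything named here is an assumption: "leaders" is, to my knowledge, the default ConfigMap name used by Spring Cloud Kubernetes leader election, the image must ship a kubectl binary, and the service account needs RBAC to delete the ConfigMap): a preStop hook that removes the leadership record so the next pod can elect itself.

spec:
  template:
    spec:
      terminationGracePeriodSeconds: 30
      serviceAccountName: leader-cleaner    # assumed; needs RBAC to delete the ConfigMap
      containers:
      - name: app
        image: my-spring-app:latest         # placeholder; must contain kubectl for this hook
        lifecycle:
          preStop:
            exec:
              command: ["sh", "-c", "kubectl delete configmap leaders --ignore-not-found"]

Keep in mind that preStop does not run on a SIGKILL or a node crash, which is why the lease-timeout approach above is the more robust fix.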
I have a ClickHouse cluster running in pods on Kubernetes, and a clickhouse-explorer that should each check one ClickHouse node. I was thinking about using a DaemonSet for this, but I don't know how to force, in the config, each pod to connect to a different service. Note: I don't want to use a sidecar container for ClickHouse.
Any suggestions?
I have a Logstash server that somehow stopped listening on its syslog input (but didn't crash, which is odd enough on its own, but that's a case for another question). It was configured with a max queue of 100 GB and, after some time (31 GB of queue), I decided to restart it.
After restarting Logstash, it gets "stuck" on
[INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
and it doesn't start sending messages to Elasticsearch (neither the ones on the queue nor newly received ones).
If I delete the queue folder and restart Logstash, then I get new events, but obviously I lose all the old ones.
Why does Logstash take so long to process the persistent queue when it's big? Are there any settings I should tune to make the pipeline flow?
I have saved the old persistent queue to look deeper into it; any pointers?
I am currently monitoring usage metrics of Kubernetes pods (using Metricbeat) and storing them in a DB for historical analysis.
If my pod gets restarted, the pod name changes, and I will not be able to map the old pod's information to the new pod's information.
Is there any way to connect or identify the old and new information so as to form a final consolidated dataset?