I'm trying to deploy the ELK stack in a Kubernetes cluster with Helm, using this chart. When I launch
helm install elk-stack stable/elastic-stack
I receive the following message:
NAME: elk-stack
LAST DEPLOYED: Mon Aug 24 07:30:31 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
The elasticsearch cluster and associated extras have been installed.
Kibana can be accessed:
* Within your cluster, at the following DNS name at port 9200:
elk-stack-elastic-stack.default.svc.cluster.local
* From outside the cluster, run these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=elastic-stack,release=elk-stack" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:5601 to use Kibana"
kubectl port-forward --namespace default $POD_NAME 5601:5601
But when I run
kubectl get pods
the result is:
NAME READY STATUS RESTARTS AGE
elk-stack-elasticsearch-client-7fcfc7b858-5f7fw 0/1 Running 0 12m
elk-stack-elasticsearch-client-7fcfc7b858-zdkwd 0/1 Running 1 12m
elk-stack-elasticsearch-data-0 0/1 Pending 0 12m
elk-stack-elasticsearch-master-0 0/1 Pending 0 12m
elk-stack-kibana-cb7d9ccbf-msw95 1/1 Running 0 12m
elk-stack-logstash-0 0/1 Pending 0 12m
Using the kubectl describe pods command, I see that for the Elasticsearch pods the problem is:
Warning FailedScheduling 6m29s default-scheduler running "VolumeBinding" filter plugin for pod "elk-stack-elasticsearch-data-0": pod has unbound immediate PersistentVolumeClaims
and for logstash pods:
Warning FailedScheduling 7m53s default-scheduler running "VolumeBinding" filter plugin for pod "elk-stack-logstash-0": pod has unbound immediate PersistentVolumeClaims
Output of kubectl get pv,pvc,sc -A:
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/elasticsearch-data 10Gi RWO Retain Bound default/elasticsearch-data manual 16d
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default persistentvolumeclaim/claim1 Pending slow 64m
default persistentvolumeclaim/data-elk-stack-elasticsearch-data-0 Pending 120m
default persistentvolumeclaim/data-elk-stack-elasticsearch-master-0 Pending 120m
default persistentvolumeclaim/data-elk-stack-logstash-0 Pending 120m
default persistentvolumeclaim/elasticsearch-data Bound elasticsearch-data 10Gi RWO manual 16d
default persistentvolumeclaim/elasticsearch-data-elasticsearch-data-0 Pending 17d
default persistentvolumeclaim/elasticsearch-data-elasticsearch-data-1 Pending 17d
default persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0 Pending 16d
default persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0 Pending 17d
default persistentvolumeclaim/elasticsearch-master-elasticsearch-master-1 Pending 17d
default persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2 Pending 16d
NAMESPACE NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
storageclass.storage.k8s.io/slow (default) kubernetes.io/gce-pd Delete Immediate false 66m
The storage class slow and the persistent volume claim claim1 are my experiments; I created them using kubectl create and a YAML file. The others were created automatically by Helm (I think).
Output of kubectl get pvc data-elk-stack-elasticsearch-master-0 -o yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: "2020-08-24T07:30:38Z"
finalizers:
- kubernetes.io/pvc-protection
labels:
app: elasticsearch
release: elk-stack
managedFields:
- apiVersion: v1
fieldsType: FieldsV1
fieldsV1:
f:metadata:
f:labels:
.: {}
f:app: {}
f:release: {}
f:spec:
f:accessModes: {}
f:resources:
f:requests:
.: {}
f:storage: {}
f:volumeMode: {}
f:status:
f:phase: {}
manager: kube-controller-manager
operation: Update
time: "2020-08-24T07:30:38Z"
name: data-elk-stack-elasticsearch-master-0
namespace: default
resourceVersion: "201123"
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-elk-stack-elasticsearch-master-0
uid: de58f769-f9a7-41ad-a449-ef16d4b72bc6
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 4Gi
volumeMode: Filesystem
status:
phase: Pending
Can somebody please help me to fix this problem? Thanks in advance.
The reason the pods are Pending is that the PVCs below are Pending, because their corresponding PVs have not been created:
data-elk-stack-elasticsearch-master-0
data-elk-stack-logstash-0
data-elk-stack-elasticsearch-data-0
Since you have mentioned this is for local development, you can use a hostPath volume for the PVs. Create a PV for each of the pending PVCs using the samples below, so you will create three PVs in total.
# PV for data-elk-stack-elasticsearch-master-0
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-master
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
# PV for data-elk-stack-logstash-0
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-logstash
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
# PV for data-elk-stack-elasticsearch-data-0
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-data
  labels:
    type: local
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
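Once the PVs are created, the Pending PVCs listed above should be able to bind. A quick way to verify, using the default namespace from your output:
kubectl get pvc --namespace default data-elk-stack-elasticsearch-master-0 data-elk-stack-logstash-0 data-elk-stack-elasticsearch-data-0
kubectl get pods --namespace default
The claims should move from Pending to Bound, and the scheduler will then retry the Pending pods on its own.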
I have the following CronJob which deletes pods in a specific namespace.
I run the job as-is, but it seems that it doesn't run every 20 minutes; it runs every few (2-3) minutes.
What I need is that every 20 minutes the job starts deleting the pods in the specified namespace and then terminates. Any idea what could be wrong here?
apiVersion: batch/v1
kind: CronJob
metadata:
name: restart
spec:
schedule: "*/20 * * * *"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 0
failedJobsHistoryLimit: 0
jobTemplate:
spec:
backoffLimit: 0
template:
spec:
serviceAccountName: sa
restartPolicy: Never
containers:
- name: kubectl
image: bitnami/kubectl:1.22.3
command:
- /bin/sh
- -c
- kubectl get pods -o name | while read -r POD; do kubectl delete "$POD"; sleep 30; done
I'm really not sure why this happens...
Maybe the deletion of the pods causes something to collapse?
Update:
I tried the following, but no pods were deleted. Any idea?
apiVersion: batch/v1
kind: CronJob
metadata:
name: restart
spec:
schedule: "*/1 * * * *"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 0
failedJobsHistoryLimit: 0
jobTemplate:
spec:
backoffLimit: 0
template:
metadata:
labels:
name: restart
spec:
serviceAccountName: pod-exterminator
restartPolicy: Never
containers:
- name: kubectl
image: bitnami/kubectl:1.22.3
command:
- /bin/sh
- -c
- kubectl get pods -o name --selector name!=restart | while read -r POD; do kubectl delete "$POD"; sleep 10; done.
This CronJob's pod will delete itself at some point during the execution, causing the job to fail and additionally resetting its back-off count.
The docs say:
The back-off count is reset when a Job's Pod is deleted or successful without any other Pods for the Job failing around that time.
You need to apply an appropriate filter. Also note that you can delete all pods with a single command.
Add a label to spec.jobTemplate.spec.template.metadata that you can use for filtering.
apiVersion: batch/v1
kind: CronJob
metadata:
name: restart
spec:
jobTemplate:
spec:
template:
metadata:
labels:
name: restart # label the pod
Then use this label to delete all pods that are not the cronjob pod.
kubectl delete pod --selector name!=restart
Since you state in the comments that you need a loop, a full working example may look like this:
---
apiVersion: batch/v1
kind: CronJob
metadata:
name: restart
namespace: sandbox
spec:
schedule: "*/20 * * * *"
concurrencyPolicy: Forbid
successfulJobsHistoryLimit: 0
failedJobsHistoryLimit: 0
jobTemplate:
spec:
backoffLimit: 0
template:
metadata:
labels:
name: restart
spec:
serviceAccountName: restart
restartPolicy: Never
containers:
- name: kubectl
image: bitnami/kubectl:1.22.3
command:
- /bin/sh
- -c
- |
kubectl get pods -o name --selector "name!=restart" |
while read -r POD; do
kubectl delete "$POD"
sleep 30
done
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: restart
namespace: sandbox
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
name: pod-management
namespace: sandbox
rules:
- apiGroups: [""]
resources: ["pods"]
verbs: ["get", "watch", "list", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
name: restart-pod-management
namespace: sandbox
subjects:
- kind: ServiceAccount
name: restart
namespace: sandbox
roleRef:
kind: Role
name: pod-management
apiGroup: rbac.authorization.k8s.io
kubectl create namespace sandbox
kubectl config set-context --current --namespace sandbox
kubectl run pod1 --image busybox -- sleep infinity
kubectl run pod2 --image busybox -- sleep infinity
kubectl apply -f restart.yaml # the above file
Here you can see how the first pod is getting terminated.
$ kubectl get all
NAME READY STATUS RESTARTS AGE
pod/pod1 1/1 Terminating 0 43s
pod/pod2 1/1 Running 0 39s
pod/restart-27432801-rrtvm 1/1 Running 0 16s
NAME SCHEDULE SUSPEND ACTIVE LAST SCHEDULE AGE
cronjob.batch/restart */1 * * * * False 1 17s 36s
NAME COMPLETIONS DURATION AGE
job.batch/restart-27432801 0/1 17s 17s
Note that this is actually slightly buggy, because from the time you read the pod list to the time you delete an individual pod in the list, that pod may no longer exist. You could use the following to ignore those cases, since when they are gone you don't need to delete them anyway.
kubectl delete "$POD" || true
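Alternatively, kubectl delete has an --ignore-not-found flag that treats "resource not found" as a successful delete without masking other errors:
kubectl delete "$POD" --ignore-not-found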
That said, since you name your job restart, I assume the purpose of this is to restart the pods of some deployments. You could actually use a proper restart, leveraging Kubernetes update strategies.
kubectl rollout restart $(kubectl get deploy -o name)
With the default update strategy, this will lead to new pods being created first and making sure they are ready before terminating the old ones.
$ kubectl rollout restart $(kubectl get deploy -o name)
NAME READY STATUS RESTARTS AGE
pod/app1-56f87fc665-mf9th 0/1 ContainerCreating 0 2s
pod/app1-5cbc776547-fh96w 1/1 Running 0 2m9s
pod/app2-7b9779f767-48kpd 0/1 ContainerCreating 0 2s
pod/app2-8d6454757-xj4zc 1/1 Running 0 2m9s
This also works with daemon sets.
$ kubectl rollout restart -h
Restart a resource.
Resource rollout will be restarted.
Examples:
# Restart a deployment
kubectl rollout restart deployment/nginx
# Restart a daemon set
kubectl rollout restart daemonset/abc
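If you still want this to run on a schedule, the CronJob above could call rollout restart instead of deleting pods directly. Here is a sketch, assuming the same sandbox namespace and restart ServiceAccount as in the example above; the Role/RoleBinding names below are placeholders I chose. Since rollout restart patches the Deployment objects, the service account also needs list/patch permissions on deployments:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: restart
  namespace: sandbox
spec:
  schedule: "*/20 * * * *"
  concurrencyPolicy: Forbid
  successfulJobsHistoryLimit: 0
  failedJobsHistoryLimit: 0
  jobTemplate:
    spec:
      backoffLimit: 0
      template:
        spec:
          serviceAccountName: restart
          restartPolicy: Never
          containers:
            - name: kubectl
              image: bitnami/kubectl:1.22.3
              command:
                - /bin/sh
                - -c
                # restart every deployment in the namespace; no pod filtering needed
                - kubectl rollout restart $(kubectl get deploy -o name)
---
# extra permissions so the service account may list and patch deployments
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployment-restart
  namespace: sandbox
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: restart-deployment-restart
  namespace: sandbox
subjects:
  - kind: ServiceAccount
    name: restart
    namespace: sandbox
roleRef:
  kind: Role
  name: deployment-restart
  apiGroup: rbac.authorization.k8s.io
Because no pods are deleted by the job itself, the self-deletion problem described above goes away entirely.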
I am using a Kubernetes deployment on the local Docker VM (macOS) to develop.
Instead of creating a new image and killing the pod for each code-change iteration, I mount the code from the IDE into the Docker container:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: monitor-storage
  namespace: wielder-services
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: monitor-pvc
  namespace: wielder-services
  labels:
    app: monitor
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 200Mi
  storageClassName: monitor-storage
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: monitor-pv
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: monitor-storage
  local:
    path: /<path to code folder>
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - docker-desktop
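The claim is then mounted into the dev pod; a minimal sketch of how that looks (the actual deployment manifest isn't shown here, so the pod name, image, and mount path below are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: monitor-dev            # placeholder name
  namespace: wielder-services
spec:
  containers:
    - name: monitor
      image: busybox           # placeholder image; the real dev image goes here
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: code
          mountPath: /code     # placeholder mount path inside the container
  volumes:
    - name: code
      persistentVolumeClaim:
        claimName: monitor-pvc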
This used to work fine: I would immediately see changes made in the IDE reflected in the pod shell, and vice versa. Now I can't see changes from the IDE even if I delete and re-deploy the entire volume/deployment combination. It seems the directory content is copied to a different place in cache or storage.
Following a few threads, I stopped the IDE to see if that had an effect, but the changes to the files still don't appear in the mounted directory.
According to the documentation this should be possible:
https://docs.docker.com/desktop/mac/#file-sharing
Is there a way to force a remount of the host (macOS) directory?
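A quick way to check whether the share itself is still propagating (independent of the IDE) is to touch a file on the host and look for it inside the pod; the pod name and mount path below are placeholders:
# on the Mac
touch /<path to code folder>/mount-check
# inside the cluster
kubectl -n wielder-services exec <pod-name> -- ls /<mount path>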
I am trying to rename a nodeSet in my ECK cluster. Below is my Elasticsearch cluster YAML:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: elastic-test
spec:
version: 7.11.1
auth:
roles:
- secretName: elastic-roles-secret
fileRealm:
- secretName: elastic-filerealm-secret
nodeSets:
- name: default
count: 1
config:
node.store.allow_mmap: false
volumeClaimTemplates:
- metadata:
name: azure-pvc
spec:
storageClassName: ""
accessModes:
- ReadWriteMany
resources:
requests:
storage: 25Gi
volumeName: elasticsearch-azure-pv
podTemplate:
spec:
initContainers:
- name: install-plugins
command:
- sh
- -c
- |
bin/elasticsearch-plugin install --batch ingest-attachment
I want to change the nodeSet name from default to default2.
However, the newly created pod is stuck on Pending.
kubectl describe on the new pod shows:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 4m12s default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
Warning FailedScheduling 4m12s default-scheduler 0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
Because the old PVC was not deleted, the new PVC cannot bind to the same PV. AFAIK, for the intended behaviour the old PVC and pod should be deleted so that the new pod and PVC can bind to the PV.
To provide some context, my deployment environment only allows me to apply YAML files (no running kubectl delete), and the goal is to add the ingest-attachment plugin. So I am trying to restart the existing pod by renaming the nodeSet.
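One way to confirm this is to check whether the PV's claimRef still points at the old claim, for example:
kubectl get pv elasticsearch-azure-pv -o jsonpath='{.spec.claimRef.name}'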
I'm playing with the Elasticsearch operator for Kubernetes and created two StatefulSets (see https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-orchestration.html):
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: quickstart
spec:
version: 7.12.1
nodeSets:
- name: master-nodes
count: 3
config:
node.roles: ["master"]
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: standard
- name: data-nodes
count: 3
config:
node.roles: ["data"]
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
storageClassName: standard
The problem is that I cannot delete the stateful sets. After deletion, they're recreated automatically:
my-PC:~$ kubectl get sts
NAME READY AGE
quickstart-es-data-nodes 0/0 14m
quickstart-es-master-nodes 0/0 18m
my-PC:~$ kubectl delete sts quickstart-es-data-nodes --force --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
statefulset.apps "quickstart-es-data-nodes" force deleted
my-PC:~$ kubectl get sts
NAME READY AGE
quickstart-es-data-nodes 0/3 3s
quickstart-es-master-nodes 0/0 18m
Before deletion I already scaled the StatefulSet down to 0 to ensure that all pods were terminated. But after deletion, the StatefulSet is recreated (see quickstart-es-data-nodes).
So, does anyone have any idea how I can delete the StatefulSets without them being recreated?
It's due to the operator you are using for Elasticsearch. The operator manages the StatefulSets and will recreate them if you delete them.
Behind the scenes, ECK translates each NodeSet specified in the
Elasticsearch resource into a StatefulSet in Kubernetes.
From the documentation: https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-orchestration.html#k8s-statefulsets
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#on-delete
You have to delete the custom object. The operator owns those StatefulSets and will continually update them to match its expected content.
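If you are not sure of the name of the parent resource, you can list the Elasticsearch custom resources the operator manages:
kubectl get elasticsearch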
I finally got the answer... I need to run the following command for deletion:
kubectl delete elasticsearch quickstart
This finally removed the quickstart examples.
I'm trying to create a local persistent volume for ECK.
I'm creating the persistent volume with the following definition:
apiVersion: v1
kind: PersistentVolume
metadata:
name: elasticsearch-data-volume
labels:
type: local
spec:
storageClassName: manual
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mo/esdata"
And the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: elasticsearch-data
spec:
storageClassName: manual
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
I'm getting an error on the PVC when applying it:
kubectl apply -f pvc-es.yml
Warning: kubectl apply should be used on resource created by either kubectl create --save-config or kubectl apply
The PersistentVolumeClaim "elasticsearch-data-quickstart-es-default-0" is invalid: spec: Forbidden: is immutable after creation except resources.requests for bound claims
First, have you created the storage class "manual"?
Second, the error message says the PVC is immutable after creation. Is it possible you created a PVC with the same name before? Please run kubectl get pvc and show the output. You can delete the PVC and reapply the YAML if that is the case.
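For example, something along these lines (the claim name is taken from your error message; adjust as needed):
kubectl get sc manual
kubectl get pvc
# if an old claim with that name already exists and you no longer need it:
kubectl delete pvc elasticsearch-data-quickstart-es-default-0
kubectl apply -f pvc-es.yml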