StatefulSets in K8s are being recreated after deletion - elasticsearch

I'm playing with the Elasticsearch operator for Kubernetes and created two StatefulSets (see https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-orchestration.html):
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.12.1
  nodeSets:
  - name: master-nodes
    count: 3
    config:
      node.roles: ["master"]
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: standard
  - name: data-nodes
    count: 3
    config:
      node.roles: ["data"]
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: standard
The problem is that I cannot delete the StatefulSets. After deletion, they're recreated automatically:
my-PC:~$ kubectl get sts
NAME                         READY   AGE
quickstart-es-data-nodes     0/0     14m
quickstart-es-master-nodes   0/0     18m
my-PC:~$ kubectl delete sts quickstart-es-data-nodes --force --grace-period=0
warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
statefulset.apps "quickstart-es-data-nodes" force deleted
my-PC:~$ kubectl get sts
NAME                         READY   AGE
quickstart-es-data-nodes     0/3     3s
quickstart-es-master-nodes   0/0     18m
Before deletion I had already scaled the StatefulSet down to 0 to ensure that all pods were terminated. But after deletion, the StatefulSet is recreated (see quickstart-es-data-nodes).
So, does anyone have any idea how I can delete the StatefulSets without them being recreated?

It's due to the operator you are using for Elasticsearch. The operator manages the StatefulSets and will recreate or update them if you delete them. As the documentation (https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-orchestration.html#k8s-statefulsets) puts it:
Behind the scenes, ECK translates each NodeSet specified in the Elasticsearch resource into a StatefulSet in Kubernetes.
See also: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#on-delete
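You can confirm which resource owns a StatefulSet by looking at its ownerReferences (a quick check using the quickstart names from the question):
kubectl get statefulset quickstart-es-data-nodes \
  -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
For the manifest above this prints Elasticsearch/quickstart, which is the custom resource you actually have to delete.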

You have to delete the custom object. The operator owns those StatefulSets and will continually update them to match its expected content.

I finally got the answer... I need to run the following command for deletion:
kubectl delete elasticsearch quickstart
This finally removed the quickstart examples.
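If you are not sure which custom resources exist in the namespace, you can list them first; the StatefulSets and pods are then garbage-collected once the owning Elasticsearch resource is gone:
kubectl get elasticsearch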

Related

ElasticSearch CrashLoopBackoff when deploying with ECK in Kubernetes OKD 4.11

I am running Kubernetes using OKD 4.11 (running on vSphere) and have validated the basic functionality (including dynamic volume provisioning) using applications like nginx.
I also applied
oc adm policy add-scc-to-group anyuid system:authenticated
to allow authenticated users to use anyuid (which seems to have been required to deploy the nginx example I was testing with).
Then I installed ECK using this quickstart with kubectl to install the CRD and RBAC manifests. This seems to have worked.
Then I deployed the most basic ElasticSearch quickstart example with kubectl apply -f quickstart.yaml using this manifest:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 8.4.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.store.allow_mmap: false
The deployment proceeds as expected, pulling the image and starting the container, but ends in a CrashLoopBackOff with the following error from Elasticsearch at the end of the log:
"elasticsearch.cluster.name":"quickstart",
"error.type":"java.lang.IllegalStateException",
"error.message":"failed to obtain node locks, tried
[/usr/share/elasticsearch/data]; maybe these locations
are not writable or multiple nodes were started on the same data path?"
Looking into the storage, the PV and PVC are created successfully; the output of kubectl get pv,pvc,sc -A -n my-namespace is:
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                                      STORAGECLASS   REASON   AGE
persistentvolume/pvc-9d7b57db-8afd-40f7-8b3d-6334bdc07241   1Gi        RWO            Delete           Bound    my-namespace/elasticsearch-data-quickstart-es-default-0   thin                    41m

NAMESPACE      NAME                                                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-namespace   persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0   Bound    pvc-9d7b57db-8afd-40f7-8b3d-6334bdc07241   1Gi        RWO            thin           41m

NAMESPACE   NAME                                          PROVISIONER                    RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
            storageclass.storage.k8s.io/thin (default)    kubernetes.io/vsphere-volume   Delete          Immediate              false                  19d
            storageclass.storage.k8s.io/thin-csi          csi.vsphere.vmware.com         Delete          WaitForFirstConsumer   true                   19d
Looking at the pod YAML, it appears that the volume is correctly attached:
volumes:
- name: elasticsearch-data
  persistentVolumeClaim:
    claimName: elasticsearch-data-quickstart-es-default-0
- name: downward-api
  downwardAPI:
    items:
    - path: labels
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.labels
    defaultMode: 420
....
volumeMounts:
...
- name: elasticsearch-data
  mountPath: /usr/share/elasticsearch/data
I cannot understand why the volume would be read-only or rather why ES cannot create the lock.
I did find this similar issue, but I am not sure how to apply the UID permissions (in general I am fairly naive about how permissions work in OKD) when working with ECK.
Does anyone with deeper K8s / OKD or ECK/ElasticSearch knowledge have an idea how to better isolate and/or resolve this issue?
Update: I believe this has something to do with this issue and am researching the options related to OKD.
For posterity: ECK starts an init container that should take care of the chown on the data volume, but it can only do so if it runs as root.
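You can inspect the init containers that ECK injects into the pod and their logs like this (pod and container names as generated for the quickstart example):
kubectl get pod quickstart-es-default-0 -o jsonpath='{.spec.initContainers[*].name}'
kubectl logs quickstart-es-default-0 -c elastic-internal-init-filesystem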
The resolution for me was documented here:
https://repo1.dso.mil/dsop/elastic/elasticsearch/elasticsearch/-/issues/7
The manifest now looks like this:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: quickstart
spec:
version: 8.4.2
nodeSets:
- name: default
count: 1
config:
node.store.allow_mmap: false
# run init container as root to chown the volume to uid 1000
podTemplate:
spec:
securityContext:
runAsUser: 1000
runAsGroup: 0
initContainers:
- name: elastic-internal-init-filesystem
securityContext:
runAsUser: 0
runAsGroup: 0
And the pod starts up and can write to the volume as uid 1000.
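A quick way to double-check the effective user and the data directory ownership once the pod is running (pod and container names as generated for this quickstart):
kubectl exec quickstart-es-default-0 -c elasticsearch -- id
kubectl exec quickstart-es-default-0 -c elasticsearch -- ls -ld /usr/share/elasticsearch/data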

Share Folder Between kubernetes Nodes

I have 3 Kubernetes nodes; one of them is the master and the others are workers. I deployed a Laravel application to the master node and created volumes and a storage class that points to that folder.
These are my YAML files to create the volumes and the persistent volume claim:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: qsinav-pv-www-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
The storage class and the persistent volume:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: manual
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  hostPath:
    path: "/var/www/html/Test/qSinav-starter"
The problem is that each pod mounts the folder from the node it is running on.
Since I am running a web application with load balancing between these nodes, if I log in on node one and the next request goes to node two, it redirects me to the login page because I don't have a session there.
So I need to share one folder from the master node with all worker nodes.
I don't know what I should do to achieve this, so please help me solve it.
Thanks in advance
Yeah, that's expected and is clearly mentioned in the docs for the hostPath volume:
Pods with identical configuration (such as created from a PodTemplate) may behave differently on different nodes due to different files on the nodes
You need to use something like NFS, which can be shared between nodes. Your PV definition would end up looking something like this (change the IP and path as per your setup):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  nfs:
    path: /tmp
    server: 172.17.0.2
Don't use the hostPath type, because this kind of volume is only suitable for a single-node cluster. It works well only in a single-node environment, because if the pod is assigned to another node, the pod can't get the data it needs.
So use the local type instead. It remembers which node was used for provisioning the volume, thus making sure that a restarting pod will always find the data storage in the state it left it before the restart.
PS 1: Once a node has died, the data of both hostPath and local persistent volumes on that node is lost.
PS 2: Neither the local nor the hostPath type works with dynamic provisioning.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: qsinav-pv-www
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: default
    name: qsinav-pv-www-claim
  local:
    path: "/var/www/html/Test/qSinav-starter"
  # a local volume requires node affinity; <node-name> is a placeholder for
  # the node that actually holds the data
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - <node-name>

deploy elk stack in kubernetes with helm VolumeBinding error

I'm trying to deploy the ELK stack in a Kubernetes cluster with helm, using this chart. When I launch
helm install elk-stack stable/elastic-stack
I receive the following message:
NAME: elk-stack
LAST DEPLOYED: Mon Aug 24 07:30:31 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
The elasticsearch cluster and associated extras have been installed.
Kibana can be accessed:
* Within your cluster, at the following DNS name at port 9200:
elk-stack-elastic-stack.default.svc.cluster.local
* From outside the cluster, run these commands in the same shell:
export POD_NAME=$(kubectl get pods --namespace default -l "app=elastic-stack,release=elk-stack" -o jsonpath="{.items[0].metadata.name}")
echo "Visit http://127.0.0.1:5601 to use Kibana"
kubectl port-forward --namespace default $POD_NAME 5601:5601
But when I run
kubectl get pods
the result is:
NAME                                              READY   STATUS    RESTARTS   AGE
elk-stack-elasticsearch-client-7fcfc7b858-5f7fw   0/1     Running   0          12m
elk-stack-elasticsearch-client-7fcfc7b858-zdkwd   0/1     Running   1          12m
elk-stack-elasticsearch-data-0                    0/1     Pending   0          12m
elk-stack-elasticsearch-master-0                  0/1     Pending   0          12m
elk-stack-kibana-cb7d9ccbf-msw95                  1/1     Running   0          12m
elk-stack-logstash-0                              0/1     Pending   0          12m
Using the kubectl describe pods command, I see that for the elasticsearch pods the problem is:
Warning FailedScheduling 6m29s default-scheduler running "VolumeBinding" filter plugin for pod "elk-stack-elasticsearch-data-0": pod has unbound immediate PersistentVolumeClaims
and for logstash pods:
Warning FailedScheduling 7m53s default-scheduler running "VolumeBinding" filter plugin for pod "elk-stack-logstash-0": pod has unbound immediate PersistentVolumeClaims
Output of kubectl get pv,pvc,sc -A:
NAME                                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
persistentvolume/elasticsearch-data   10Gi       RWO            Retain           Bound    default/elasticsearch-data   manual                  16d

NAMESPACE   NAME                                                                 STATUS    VOLUME               CAPACITY   ACCESS MODES   STORAGECLASS   AGE
default     persistentvolumeclaim/claim1                                         Pending                                                  slow           64m
default     persistentvolumeclaim/data-elk-stack-elasticsearch-data-0            Pending                                                                 120m
default     persistentvolumeclaim/data-elk-stack-elasticsearch-master-0          Pending                                                                 120m
default     persistentvolumeclaim/data-elk-stack-logstash-0                      Pending                                                                 120m
default     persistentvolumeclaim/elasticsearch-data                             Bound     elasticsearch-data   10Gi       RWO            manual         16d
default     persistentvolumeclaim/elasticsearch-data-elasticsearch-data-0        Pending                                                                 17d
default     persistentvolumeclaim/elasticsearch-data-elasticsearch-data-1        Pending                                                                 17d
default     persistentvolumeclaim/elasticsearch-data-quickstart-es-default-0     Pending                                                                 16d
default     persistentvolumeclaim/elasticsearch-master-elasticsearch-master-0    Pending                                                                 17d
default     persistentvolumeclaim/elasticsearch-master-elasticsearch-master-1    Pending                                                                 17d
default     persistentvolumeclaim/elasticsearch-master-elasticsearch-master-2    Pending                                                                 16d

NAMESPACE   NAME                                          PROVISIONER            RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
            storageclass.storage.k8s.io/slow (default)    kubernetes.io/gce-pd   Delete          Immediate           false                  66m
Storage class slow and persistent volume claim claim1 are my experiments. I created them using kubectl create and a YAML file; the others were created automatically by helm (I think).
Output of kubectl get pvc data-elk-stack-elasticsearch-master-0 -o yaml:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: "2020-08-24T07:30:38Z"
  finalizers:
  - kubernetes.io/pvc-protection
  labels:
    app: elasticsearch
    release: elk-stack
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:labels:
          .: {}
          f:app: {}
          f:release: {}
      f:spec:
        f:accessModes: {}
        f:resources:
          f:requests:
            .: {}
            f:storage: {}
        f:volumeMode: {}
      f:status:
        f:phase: {}
    manager: kube-controller-manager
    operation: Update
    time: "2020-08-24T07:30:38Z"
  name: data-elk-stack-elasticsearch-master-0
  namespace: default
  resourceVersion: "201123"
  selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-elk-stack-elasticsearch-master-0
  uid: de58f769-f9a7-41ad-a449-ef16d4b72bc6
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi
  volumeMode: Filesystem
status:
  phase: Pending
Can somebody please help me to fix this problem? Thanks in advance.
The pods are Pending because the PVCs below are Pending; the corresponding PVs have not been created.
data-elk-stack-elasticsearch-master-0
data-elk-stack-logstash-0
data-elk-stack-elasticsearch-data-0
Since you have mentioned this is for local development, you can use a hostPath volume for the PVs. Create a PV for each of the pending PVCs using the sample PVs below, so you will create 3 PVs in total.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-master
  labels:
    type: local
spec:
  capacity:
    storage: 4Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-logstash
  labels:
    type: local
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elk-data
  labels:
    type: local
spec:
  capacity:
    storage: 30Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
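Note that all three sample PVs point at the same /mnt/data directory; you may want to give each PV its own subdirectory so the Elasticsearch master, data, and Logstash pods do not share one data path. After applying the PVs you can watch whether the pending claims bind with plain kubectl:
kubectl get pvc -n default -w
kubectl get pods -n default -w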

How to resize an ECK cluster

I have an elasticsearch cluster that has the storage field set to 10Gi, and I want to resize this cluster (for testing purposes) to 15Gi. However, after changing the storage value from 10Gi to 15Gi, I can see that the cluster still has not resized and the generated PVC is still set to 10Gi.
From what I can tell, the aws-ebs storage class (https://kubernetes.io/docs/concepts/storage/storage-classes/) allows volume expansion when the field allowVolumeExpansion is true. But even with this set, the volume is never expanded when I change the storage value:
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: elasticsearch-storage
  namespace: test
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true
---
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: test
spec:
  version: 7.4.2
  http:
    tls:
      certificate:
        secretName: es-cert
  nodeSets:
  - name: default
    count: 3
    volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
        annotations:
          volume.beta.kubernetes.io/storage-class: elasticsearch-storage
      spec:
        accessModes:
        - ReadWriteOnce
        storageClassName: elasticsearch-storage
        resources:
          requests:
            storage: 15Gi
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
      xpack.security.authc.realms:
        native:
          native1:
            order: 1
---
Technically it should work, but your Kubernetes cluster might not be able to connect to the AWS API to expand the volume. Did you check the actual EBS volume in the EC2 console or with the AWS CLI? You can debug this issue by looking at the kube-controller-manager and cloud-controller-manager logs.
My guess is that there is some kind of permission issue, such that your K8s cluster cannot talk to the AWS/EC2 API.
If you are running EKS, make sure that the IAM cluster role that you are using has permissions for EC2/EBS. You can check the control plane logs (kube-controller-manager, kube-apiserver, cloud-controller-manager, etc) on CloudWatch.
EDIT:
The Elasticsearch operator uses StatefulSets, and as of this date volume expansion is not supported on StatefulSets (the volumeClaimTemplates of an existing StatefulSet cannot be changed in place).
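If the underlying EBS volumes are expandable, one manual workaround is to patch each generated PVC directly and let the storage driver grow the volume. This is only a sketch, not an official ECK procedure, and the PVC name below assumes ECK's default <claimName>-<clusterName>-es-<nodeSetName>-<ordinal> naming:
kubectl -n test patch pvc elasticsearch-data-elasticsearch-es-default-0 \
  --type merge -p '{"spec":{"resources":{"requests":{"storage":"15Gi"}}}}'
kubectl -n test describe pvc elasticsearch-data-elasticsearch-es-default-0
The StatefulSet's volumeClaimTemplates will still show the old size, which is exactly the limitation mentioned above.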

Elasticsearch deployment on kubernetes using Persistent Volume

I am trying to deploy an Elasticsearch cluster (replicas: 3) using a StatefulSet in Kubernetes and need to store the Elasticsearch data in a Persistent Volume (PV). Since each Elasticsearch instance has its own data folder, I need a separate data folder for each replica in the PV. I am trying to use volumeClaimTemplates and mountPath: /usr/share/elasticsearch/data, but this results in the error pod has unbound immediate PersistentVolumeClaims for the second pod. How can I achieve this using a StatefulSet?
Thanks in advance.
There is no information about how you are trying to install Elasticsearch; however:
As an example, please follow:
this tutorial,
helm-charts.
As per the documentation for StatefulSet limitations:
The storage for a given Pod must either be provisioned by a PersistentVolume Provisioner based on the requested storage class, or pre-provisioned by an admin.
This looks like your case: a problem with dynamic storage provisioning.
Please verify the storage class, check whether the PV and PVC were created and bound together, and check the storage class in volumeClaimTemplates:
volumeMounts:
- name: "elasticsearch-master"
  mountPath: /usr/share/elasticsearch/data
volumeClaimTemplates:
- metadata:
    name: elasticsearch-master
  spec:
    accessModes:
    - ReadWriteOnce
    storageClassName: name # check whether you are using the default storage class; otherwise specify this parameter manually
    resources:
      requests:
        storage: 30Gi
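For example, to check which storage class is the default and why a claim is still Pending (the PVC name here is only an example; use the names reported by kubectl get pvc):
kubectl get storageclass
kubectl get pvc
kubectl describe pvc elasticsearch-master-elasticsearch-master-0
The default class is marked (default) in the first command's output, and the describe events usually explain why provisioning or binding failed.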
Hope this helps.
If you are using dynamic provisioning, the volume is created automatically at the backend (for example, a disk as the storage for PVs in Azure, for ReadWriteOnce kinds of operation); otherwise you need to create it manually.
Once you have created the volume, just create a PVC in the appropriate namespace with a size matching the PV; you then only need to pass the volume name in the PVC definition and it will get bound automatically.
You can try something like this -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claimName
  namespace: namespace
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: default
  volumeName: pv-volumeName
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
Please share if you still face issues
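To confirm the claim actually bound to the intended volume, you can check it with plain kubectl (using the placeholder names from the manifest above):
kubectl get pvc claimName -n namespace
kubectl describe pvc claimName -n namespace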
