I have a gcePersistentDisk holding an Elasticsearch index. When I delete and recreate the StatefulSet, I want to reuse this disk so that the same index is available on the new StatefulSet.
I've managed to rebind the disk, my-es-disk, to a new StatefulSet, but when I connect to the container and check for the index, there are no indices.
I've reviewed the Google and Kubernetes docs, and the YAMLs look correct to me, but I still cannot retain the index. I'm sure it's something simple! Thanks for your help!
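For clarity, this is roughly how I check for the index once the new pod is up (the pod name follows from the single-replica StatefulSet below, and 9200 is the HTTP port from the manifest):
kubectl exec -it elastic-data-0 -- curl -s "localhost:9200/_cat/indices?v"   # expected to list the index, but it comes back empty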
My workflow:
1 - Create storage class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ssd
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-ssd
  zone: us-central1-a
reclaimPolicy: Retain
allowVolumeExpansion: true
2 - Create PersistentVolume and PersistentVolumeClaim
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-es
spec:
  storageClassName: ssd
  capacity:
    storage: 3G
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-es-disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pv-claim-es
spec:
  storageClassName: ssd
  volumeName: pv-es
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3G
3 - Deploy StatefulSet
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elastic-data
  labels:
    app: elastic-data
    area: devs
    role: nosql
    version: "6.1.4"
    environment: elastic
spec:
  serviceName: elastic-data
  replicas: 1
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: elastic-data
        area: devs
        role: nosql
        version: "6.1.4"
        environment: elastic
    spec:
      terminationGracePeriodSeconds: 10
      securityContext:
        runAsUser: 1000
        fsGroup: 1000
      containers:
        - name: elastic-data
          image: docker.elastic.co/elasticsearch/elasticsearch:6.1.4
          resources:
            requests:
              memory: "512Mi"
            limits:
              memory: "1024Mi"
          ports:
            - containerPort: 9300
              name: transport
            - containerPort: 9200
              name: http
          volumeMounts:
            - name: data-volume
              mountPath: /usr/share/elasticsearch/data
      volumes:
        - name: data-volume
          persistentVolumeClaim:
            claimName: pv-claim-es
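After applying the three manifests (the file names below are just how I saved them, nothing significant), I check that the claim binds to the pre-created volume before looking for the data:
kubectl apply -f storage-class.yaml -f pv-pvc.yaml -f statefulset.yaml
kubectl get pv pv-es          # should show STATUS Bound
kubectl get pvc pv-claim-es   # should be bound to pv-es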
Related
I am trying to install Elasticsearch on Kubernetes using bitnami/elasticsearch. I use the following commands:
helm repo add bitnami https://charts.bitnami.com/bitnami
kubectl apply -f ./es-pv.yaml
helm install elasticsearch --set name=elasticsearch,master.replicas=3,data.persistence.size=6Gi,data.replicas=2,coordinating.replicas=1 bitnami/elasticsearch -n elasticsearch
This is what I get when I check the pods:
# kubectl get pods -n elasticsearch
NAME                                READY   STATUS     RESTARTS   AGE
elasticsearch-coordinating-only-0   0/1     Init:0/1   0          18m
elasticsearch-data-0                0/1     Running    6          18m
elasticsearch-data-1                0/1     Init:0/1   0          18m
elasticsearch-master-0              0/1     Init:0/1   0          18m
elasticsearch-master-1              0/1     Running    6          18m
elasticsearch-master-2              0/1     Init:0/1   0          18m
When I try kubectl describe pod for the elasticsearch-data and elasticsearch-master pods, they all show the same message:
0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
es-pv.yaml describing PersistentVolumes:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-master-pv
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-master-0
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node_name_1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-master-pv-1
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-master-1
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node_name_0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-master-pv-2
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-master-2
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node_name_1
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-data-pv
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-data-0
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node_name_0
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-data-pv-1
  labels:
    type: local
spec:
  storageClassName: ''
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  claimRef:
    namespace: elasticsearch
    name: data-elasticsearch-data-1
  hostPath:
    path: "/usr/share/elasticsearch"
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - node_name_1
root@shy-fog-vs:~/elasticsearch# cat es-values.yaml
resources:
  requests:
    cpu: "200m"
    memory: "512M"
  limits:
    cpu: "1000m"
    memory: "512M"
volumeClaimTemplate:
  storageClassName: local-storage
  accessModes:
    - "ReadWriteOnce"
  resources:
    requests:
      storage: 10Gi
minimumMasterNodes: 1
clusterHealthCheckParams: "wait_for_status=yellow&timeout=2s"
readinessProbe:
  failureThreshold: 3
  initialDelaySeconds: 200
  periodSeconds: 10
  successThreshold: 3
  timeoutSeconds: 5
The PersistentVolumes and PersistentVolumeClaims seem to be alright:
# kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
airflow-dags-pv       2Gi        RWX            Retain           Bound    airflow/airflow-dags-pvc                    manual                  112d
airflow-logs-pv       2Gi        RWX            Retain           Bound    airflow/airflow-logs-pvc                    manual                  112d
airflow-pv            2Gi        RWX            Retain           Bound    airflow/airflow-pvc                         manual                  112d
elastic-data-pv       10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-data-0                             15m
elastic-data-pv-1     10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-data-1                             15m
elastic-master-pv     10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-0                           15m
elastic-master-pv-1   10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-1                           15m
elastic-master-pv-2   10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-2                           15m
# kubectl get pvc -n elasticsearch
NAME                          STATUS   VOLUME                CAPACITY   ACCESS MODES   STORAGECLASS   AGE
data-elasticsearch-data-0     Bound    elastic-data-pv       10Gi       RWO                           16m
data-elasticsearch-data-1     Bound    elastic-data-pv-1     10Gi       RWO                           16m
data-elasticsearch-master-0   Bound    elastic-master-pv     10Gi       RWO                           16m
data-elasticsearch-master-1   Bound    elastic-master-pv-1   10Gi       RWO                           16m
data-elasticsearch-master-2   Bound    elastic-master-pv-2   10Gi       RWO                           16m
Short answer: everything is fine
Longer answer (and why you got that error):
This is what I get when I check the pods:
# kubectl get pods -n elasticsearch
NAME                                READY   STATUS     RESTARTS   AGE
elasticsearch-coordinating-only-0   0/1     Init:0/1   0          18m
elasticsearch-data-0                0/1     Running    6          18m
elasticsearch-data-1                0/1     Init:0/1   0          18m
elasticsearch-master-0              0/1     Init:0/1   0          18m
elasticsearch-master-1              0/1     Running    6          18m
elasticsearch-master-2              0/1     Init:0/1   0          18m
This actually indicates that the volumes mounted and the pods have started (note that the second master pod is Running while the other two are still in the "Init" stage).
When I try kubectl describe pod for the elasticsearch-data and elasticsearch-master pods, they all show the same message:
0/3 nodes are available: 3 pod has unbound immediate PersistentVolumeClaims.
This is actually expected the first time you start the chart: Kubernetes has detected that you don't have the volumes and goes off to provision them for you. During that time the pods can't start, because those disks haven't been provisioned yet (and therefore the PersistentVolumeClaims have not been bound, hence the error).
You should also be able to see from the Events section of the kubectl describe output how recently that message appeared and how frequently it has been repeated. It should read something like the example below:
Events:
  Type    Reason   Age                  From     Message
  ----    ------   ----                 ----     -------
  Normal  Pulling  51m (x112 over 10h)  kubelet  Pulling image "broken-image:latest"
So here, the "broken-image" image has been pulled 112 times over the past 10 hours, and that message is 51 minutes old
Once the disks have been provisioned and the PersistentVolumeClaims have been bound (i.e. the disks have been allocated to your claims), your pods can start. You can also confirm this from the other snippet you referenced:
# kubectl get pv
NAME                  CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                       STORAGECLASS   REASON   AGE
airflow-dags-pv       2Gi        RWX            Retain           Bound    airflow/airflow-dags-pvc                    manual                  112d
airflow-logs-pv       2Gi        RWX            Retain           Bound    airflow/airflow-logs-pvc                    manual                  112d
airflow-pv            2Gi        RWX            Retain           Bound    airflow/airflow-pvc                         manual                  112d
elastic-data-pv       10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-data-0                             15m
elastic-data-pv-1     10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-data-1                             15m
elastic-master-pv     10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-0                           15m
elastic-master-pv-1   10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-1                           15m
elastic-master-pv-2   10Gi       RWO            Retain           Bound    elasticsearch/data-elasticsearch-master-2                           15m
You can see from this that each PV (PersistentVolume) has been bound to its claim, and that is why your pods have started.
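If you want to watch this happen on the next install, commands along these lines (namespace taken from your example) show the claims binding and the scheduling events as they occur:
kubectl get pvc -n elasticsearch -w                          # watch the claims go from Pending to Bound
kubectl describe pod elasticsearch-data-0 -n elasticsearch   # the Events section shows the message above and how often it repeated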
I am using Elasticsearch in a Kubernetes cluster as part of a Magento app. I need to upgrade my Elasticsearch version to 7.9.0. My current version is 7.8.0 and it is working:
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: default
spec:
  version: 7.8.0
  nodeSets:
    - name: elasticsearch
      count: 1
      config:
        node.master: true
        node.data: true
        node.ingest: true
        node.store.allow_mmap: false
        xpack.security.authc:
          anonymous:
            username: anonymous
            roles: superuser
            authz_exception: false
      podTemplate:
        spec:
          containers:
            - name: elasticsearch
              env:
                - name: ES_JAVA_OPTS
                  value: -Xms512m -Xmx512m
              resources:
                requests:
                  memory: 1Gi
                  cpu: 0.5
                limits:
                  memory: 1Gi
                  cpu: 1
  http:
    tls:
      selfSignedCertificate:
        disabled: true
When I change to 7.9.0, it crashes with the error:
failed to load plugin class [org.elasticsearch.xpack.security.Security
Any idea how to fix this?
It was solved by using v1 instead of v1beta1:
apiVersion: elasticsearch.k8s.elastic.co/v1
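For reference, a minimal sketch of the upgraded manifest header on the v1 API; everything below spec.version stays the same as in the 7.8.0 manifest above:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: elasticsearch
  namespace: default
spec:
  version: 7.9.0
  # nodeSets, podTemplate and http sections unchanged from the original manifest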
I'm trying to expose Kibana behind a GCE Ingress, but the Ingress reports the Kibana service as UNHEALTHY even though it is healthy and ready. Note that the healthcheck created by the Ingress still uses the default settings: HTTP on the root path / and the NodePort (e.g. 32021).
Changing the healthcheck in the GCP console to HTTPS on /login and port 5601 doesn't change anything, and the service is still reported as unhealthy. The healthcheck port also gets overwritten back to its original value, which is strange.
I'm using ECK 1.3.1 and below are my configs. Am I missing anything? Thank you in advance.
apiVersion: elasticsearch.k8s.elastic.co/v1beta1
kind: Elasticsearch
metadata:
  name: d3m0
spec:
  version: 7.10.1
  nodeSets:
    - name: default
      count: 1
      config:
        node.store.allow_mmap: false
---
apiVersion: kibana.k8s.elastic.co/v1beta1
kind: Kibana
metadata:
  name: d3m0
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: d3m0
  podTemplate:
    metadata:
      labels:
        kibana: node
    spec:
      containers:
        - name: kibana
          resources:
            limits:
              memory: 1Gi
              cpu: 1
          readinessProbe:
            httpGet:
              scheme: HTTPS
              path: "/login"
              port: 5601
  http:
    service:
      spec:
        type: NodePort
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: kibana-ingress
spec:
  backend:
    serviceName: d3m0-kb-http
    servicePort: 5601
When using ECK, all the security features are enabled on Elasticsearch and Kibana, which means their services do not accept the plain-HTTP traffic used by the default GCP load balancer healthcheck. You must add the required annotations to the services and override the healthcheck paths as in the code below. Please find more details here.
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: d3m0
spec:
  version: 7.10.1
  count: 1
  elasticsearchRef:
    name: d3m0
  http:
    service:
      metadata:
        labels:
          app: kibana
        annotations:
          # Enable TLS between GCLB and the application
          cloud.google.com/app-protocols: '{"https":"HTTPS"}'
          service.alpha.kubernetes.io/app-protocols: '{"https":"HTTPS"}'
          # The following line enables container-native load balancing.
          cloud.google.com/neg: '{"ingress": true}'
  podTemplate:
    metadata:
      labels:
        name: kibana-fleet
    spec:
      containers:
        - name: kibana
          resources:
            limits:
              memory: 1Gi
              cpu: 1
          readinessProbe:
            # Override the readiness probe as GCLB reuses it for its own healthchecks
            httpGet:
              scheme: HTTPS
              path: "/login"
              port: 5601
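Once the annotations are picked up and the load balancer syncs, the healthcheck that GCLB creates should switch to HTTPS on /login by itself, so there is no need to edit it in the console. If you want to verify rather than trust it, something like the following works (assuming the Cloud SDK is configured for the same project):
gcloud compute health-checks list        # the check for the Kibana backend should now show HTTPS
kubectl describe ingress kibana-ingress  # confirms which backend service and ports the Ingress is using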
I want to deploy my web service to Azure Kubernetes Service. I currently have 3 microservices in one node: 1 API gateway and 2 backend microservices. Error messages: none. Because I do not use routes in the API gateway, I cannot reach my backend microservices via the API gateway. I have now created a Eureka naming server and want to deploy it in the same node so that my microservices can communicate with each other.
This is the YAML file I used for the naming server:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: discoeryservice-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: discoveryservice-front
  template:
    metadata:
      labels:
        app: discoveryservice-front
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: discoveryservice-front
          image: registry.azurecr.io/discoveryservice:16
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 512Mi
          ports:
            - containerPort: 8762
              name: discovery
---
apiVersion: v1
kind: Service
metadata:
  name: discoveryservice-front
spec:
  ports:
    - port: 8762
  selector:
    app: discoveryservice-front
---
First of all, I don't get an external IP address and I don't know why. Can someone tell me how to get an external IP for my naming server?
This is my YAML file for the rest of the microservices:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: apigateway-front
spec:
  replicas: 1
  selector:
    matchLabels:
      app: apigateway-front
  template:
    metadata:
      labels:
        app: apigateway-front
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: apigateway-front
          image: registry.azurecr.io/apigateway:11
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 512Mi
          ports:
            - containerPort: 8800
              name: apigateway
---
apiVersion: v1
kind: Service
metadata:
  name: apigateway-front
spec:
  type: LoadBalancer
  ports:
    - port: 8800
  selector:
    app: apigateway-front
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: contacts-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: contacts-back
  template:
    metadata:
      labels:
        app: contacts-back
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: contacts-back
          image: registry.azurecr.io/contacts:12
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 512Mi
          ports:
            - containerPort: 8100
              name: contacts-back
---
apiVersion: v1
kind: Service
metadata:
  name: contacts-back
spec:
  ports:
    - port: 8100
  selector:
    app: contacts-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: templates-back
spec:
  replicas: 1
  selector:
    matchLabels:
      app: templates-back
  template:
    metadata:
      labels:
        app: templates-back
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: templates-back
          image: registry.azurecr.io/templates:13
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 512Mi
          ports:
            - containerPort: 8200
              name: templates-back
---
apiVersion: v1
kind: Service
metadata:
  name: templates-back
spec:
  ports:
    - port: 8200
  selector:
    app: templates-back
Secondly, can someone tell me how to register my microservices with the naming server?
If I start my microservices without AKS and without Docker, the naming server works.
Since you are using Kubernetes, the suggested approach would be to use Kubernetes service discovery instead of Eureka service discovery.
For the API gateway, you can use a Kubernetes-native API gateway such as Ambassador or Kong, which integrate very well with Kubernetes.
To answer the above questions:
An external IP is only assigned to Services of type LoadBalancer (I am assuming that by "external IP" you mean an IP address that can be reached from outside the cluster); see the sketch after this answer.
To register with Eureka service discovery, you will likely need to make minor changes in the code (to inform Eureka once the application is up so that it registers its instance).
If there is any doubt about the above answers, please comment and I will try to explain in more depth.
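For the first point, a minimal sketch of the discovery Service changed to type LoadBalancer, reusing the port and selector from your YAML (this assumes you actually want Eureka reachable from outside the cluster; for pod-to-pod communication a ClusterIP Service and the DNS name discoveryservice-front:8762 is enough):
apiVersion: v1
kind: Service
metadata:
  name: discoveryservice-front
spec:
  type: LoadBalancer   # this is what makes AKS allocate an external IP
  ports:
    - port: 8762
  selector:
    app: discoveryservice-front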
I created a simple Node.js Express application; it just listens on a single GET route and returns a "hello world" string.
Then I deployed that application on a Kubernetes cluster with 1 node with 4 vCPU and 8 GB RAM, and routed all traffic to the app with the NGINX ingress controller.
After the deployment completed, I performed a simple HTTP load test. The result was 70-100 req/sec. I tried increasing the replicas to 6, but the results stayed the same. I also tried specifying resource limits, but nothing changed.
Lastly, I added 2 more nodes to the pool, each with 4 vCPU and 8 GB RAM.
After that I performed the load test again, but the results were still the same. With a total of 3 nodes, 12 vCPU and 24 GB RAM, it can barely handle 80 req/sec.
Results from the load test:
12 threads and 400 connections
  Latency   423.68ms   335.38ms    1.97s    84.88%
  Req/Sec    76.58      34.13     212.00    72.37%
4457 requests in 5.08s, 1.14MB read
I am probably doing something wrong, but I couldn't figure out what.
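For reference, the output above is in wrk's format; an invocation along these lines (the thread and connection counts are read off the output, the host comes from the Ingress below, and the exact command is an assumption) reproduces that kind of test:
wrk -t12 -c400 -d5s http://kkk.storadewebservices.com/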
These are my Deployment and Service YAML files:
apiVersion: v1
kind: Service
metadata:
  name: app-3
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: app-3
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-3
spec:
  replicas: 6
  selector:
    matchLabels:
      app: app-3
  template:
    metadata:
      labels:
        app: app-3
    spec:
      containers:
        - name: app-3-name
          image: s1nc4p/node:v22
          ports:
            - containerPort: 8080
          resources:
            requests:
              memory: "1024Mi"
              cpu: "1000m"
            limits:
              memory: "1024Mi"
              cpu: "1000m"
This is the ingress controller Service YAML file:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
And this is the Ingress file:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: hello-kubernetes-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
    - host: kkk.storadewebservices.com
      http:
        paths:
          - path: '/'
            backend:
              serviceName: app-3
              servicePort: 80