I installed Elasticsearch from the Helm stable chart and am stuck with the PVC pending, never binding to a PV.
I have created a PV with the same labels as the PVC from the Helm Elasticsearch chart.
- How do I bind this Elasticsearch PVC to the PV?
Update config:
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
creationTimestamp: null
finalizers:
- kubernetes.io/pvc-protection
labels:
app: elasticsearch
component: data
release: es
role: data
name: data-es-elasticsearch-data-0
selfLink: /api/v1/namespaces/default/persistentvolumeclaims/data-es-elasticsearch-data-0
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
volumeMode: Filesystem
status: {}
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
annotations:
kubectl.kubernetes.io/last-applied-configuration: |
{"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"labels":{"type":"local"},"name":"task-pv-volume"},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"15Gi"},"hostPath":{"path":"/app/k8s-volumes"},"storageClassName":"manual"}}
creationTimestamp: null
finalizers:
- kubernetes.io/pv-protection
labels:
app: elasticsearch
component: data
release: es
role: data
type: local
name: task-pv-volume
selfLink: /api/v1/persistentvolumes/task-pv-volume
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 15Gi
hostPath:
path: /app/k8s-volumes
type: ""
persistentVolumeReclaimPolicy: Retain
storageClassName: manual
volumeMode: Filesystem
status: {}
The PersistentVolumeClaim and PersistentVolume storage classes (not labels) must match. The PVC has no storageClassName, so change the PV to use storageClassName: '' and it should bind.
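One way to apply that change in place is a patch along these lines (if the API server rejects the edit, deleting and re-creating the PV, as in the next answer, also works):
kubectl patch pv task-pv-volume -p '{"spec":{"storageClassName":""}}'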
Since you create the PV manually, I guess you are not using a storage class, yet you define one in the PV. Try re-creating the PV without the storageClassName field:
apiVersion: v1
kind: PersistentVolume
metadata:
name: task-pv-volume
spec:
accessModes:
- ReadWriteOnce
capacity:
storage: 15Gi
hostPath:
path: /app/k8s-volumes
type: ""
persistentVolumeReclaimPolicy: Retain
volumeMode: Filesystem
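Once the storage classes match, the binding can be verified with:
kubectl get pv task-pv-volume
kubectl get pvc data-es-elasticsearch-data-0
Both should report a STATUS of Bound.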
Related
I have an Elasticsearch + Kibana cluster on Kubernetes. We want to bypass authentication so that users land directly on the dashboard without having to log in.
We have managed to enable Elasticsearch anonymous access on our Elastic nodes. Unfortunately, that is not what we want: we want users to bypass the Kibana login, so what we need is Kibana anonymous authentication.
We can't figure out how to implement it. We declare Kubernetes objects with YAML (Deployments, Services, etc.) without using ConfigMaps; Elastic/Kibana settings are passed in through environment variables.
For example, here is how we define the es01 Kubernetes Deployment YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "37"
kompose.cmd: kompose convert
kompose.version: 1.26.1 (a9d05d509)
objectset.rio.cattle.io/applied: H4sIAAAAAAAA/7RVTW/jNhD9KwueWkCmJfl
objectset.rio.cattle.io/id: 3dc011c2-d20c-465b-a143-2f27f4dc464f
creationTimestamp: "2022-05-24T15:17:53Z"
generation: 37
labels:
io.kompose.service: es01
objectset.rio.cattle.io/hash: 83a41b68cabf516665877d6d90c837e124ed2029
name: es01
namespace: waked-elk-pre-prod-test
resourceVersion: "403573505"
uid: e442cf0a-8100-4af1-a9bc-ebf65907398a
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
io.kompose.service: es01
strategy:
type: Recreate
template:
metadata:
annotations:
cattle.io/timestamp: "2022-09-09T13:41:29Z"
kompose.cmd: kompose convert
kompose.version: 1.26.1 (a9d05d509)
creationTimestamp: null
labels:
io.kompose.service: es01
spec:
affinity: {}
containers:
- env:
- name: ELASTIC_PASSWORD
valueFrom:
secretKeyRef:
key: ELASTIC_PASSWORD
name: elastic-credentials
optional: false
- name: cluster.initial_master_nodes
value: es01,es02,es03
And here is the one for the Kibana node:
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
deployment.kubernetes.io/revision: "41"
field.cattle.io/publicEndpoints: '[{"addresses":["10.130.10.6","10.130.10.7","10.130.10.8"],"port":80,"protocol":"HTTP","serviceName":"waked-elk-pre-prod-test:kibana","ingressName":"waked-elk-pre-prod-test:waked-kibana-ingress","hostname":"waked-kibana-pre-prod.cws.cines.fr","path":"/","allNodes":false}]'
kompose.cmd: kompose convert
kompose.version: 1.26.1 (a9d05d509)
objectset.rio.cattle.io/applied: H4sIAAAAAAAA/7ST34/.........iNhDH/5WTn1o
objectset.rio.cattle.io/id: 5b109127-cb95-4c93-857d-12399979d85a
creationTimestamp: "2022-05-19T08:37:59Z"
generation: 49
labels:
io.kompose.service: kibana
objectset.rio.cattle.io/hash: 0d2e2477ef3e7ee3c8f84b485cc594a1e59aea1d
name: kibana
namespace: waked-elk-pre-prod-test
resourceVersion: "403620874"
uid: 6f22f8b1-81da-49c0-90bf-9e773fbc051b
spec:
progressDeadlineSeconds: 600
replicas: 1
revisionHistoryLimit: 10
selector:
matchLabels:
io.kompose.service: kibana
strategy:
type: Recreate
template:
metadata:
annotations:
cattle.io/timestamp: "2022-09-21T13:00:47Z"
kompose.cmd: kompose convert
kompose.version: 1.26.1 (a9d05d509)
kubectl.kubernetes.io/restartedAt: "2022-11-08T14:04:53+01:00"
creationTimestamp: null
labels:
io.kompose.service: kibana
spec:
affinity: {}
containers:
- env:
- name: xpack.security.authc.providers.anonymous.anonymous1.order
value: "0"
- name: xpack.security.authc.providers.anonymous.anonymous1.credentials.username
value: username
- name: xgrzegrgepack.security.authc.providers.anonymous.anonymous1.credentials.password
value: password
image: docker.elastic.co/kibana/kibana:8.2.0
imagePullPolicy: IfNotPresent
name: kibana
ports:
- containerPort: 5601
name: 5601tcp
protocol: TCP
resources:
limits:
memory: "1073741824"
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/kibana/config/certs
name: certs
- mountPath: /usr/share/kibana/data
name: kibanadata
dnsPolicy: ClusterFirst
nodeName: k8-worker-cpu-3.cines.fr
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
terminationGracePeriodSeconds: 30
volumes:
- name: certs
persistentVolumeClaim:
claimName: certs
- name: kibanadata
persistentVolumeClaim:
claimName: kibanadata
status:
availableReplicas: 1
conditions:
- lastTransitionTime: "2022-11-08T13:45:44Z"
lastUpdateTime: "2022-11-08T13:45:44Z"
message: Deployment has minimum availability.
reason: MinimumReplicasAvailable
status: "True"
type: Available
- lastTransitionTime: "2022-06-07T14:12:17Z"
lastUpdateTime: "2022-11-08T13:45:44Z"
message: ReplicaSet "kibana-84b65ffb69" has successfully progressed.
reason: NewReplicaSetAvailable
status: "True"
type: Progressing
observedGeneration: 49
readyReplicas: 1
replicas: 1
updatedReplicas: 1
We don't face any problem when modifying/applying the YAML, and the pods run flawlessly. But it just doesn't work: when we try to access Kibana, we land on the login page.
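For reference, the same provider expressed as kibana.yml settings (the flattened env-variable names above are meant to mirror these dotted keys exactly) looks roughly like this:
xpack.security.authc.providers:
  anonymous.anonymous1:
    order: 0
    credentials:
      username: "username"
      password: "password"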
Both files are a bit cropped. Feel free to ask for the full files if needed.
Have a good night!
I followed the instructions at https://www.elastic.co/guide/en/cloud-on-k8s/1.0/k8s-quickstart.html#k8s-deploy-elasticsearch, but I am wondering how to differentiate the installation of ECK for production from ECK for development.
Should I install a separate Elasticsearch operator for production and for development?
What is the relation between the Elastic operator and an Elasticsearch node? And how do I know which Elastic operator manages a node in the development environment?
This is how I did it:
kubectl config set-context --current --namespace=default
kubectl create -f https://download.elastic.co/downloads/eck/2.4.0/crds.yaml
kubectl apply -f https://download.elastic.co/downloads/eck/2.4.0/operator.yaml
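Before applying either environment, it is worth confirming the operator came up; with the default manifests it runs in the elastic-system namespace:
kubectl -n elastic-system get pods
kubectl -n elastic-system logs -f statefulset.apps/elastic-operator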
production deployment:
kubectl apply -f prod-depl.yaml
apiVersion: v1
kind: Namespace
metadata:
name: production
labels:
name: production
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: elasticsearch-data-prod
namespace: production
labels:
type: local
spec:
storageClassName: standard
capacity:
storage: 10Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: elasticsearch-data-prod
namespace: production
spec:
storageClassName: standard
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: data-es
namespace: production
spec:
version: 8.4.3
nodeSets:
- name: default
count: 1
config:
node.store.allow_mmap: false
podTemplate:
spec:
containers:
- name: elasticsearch
# resources:
# limits:
# memory: 2Gi
# cpu: 2
# env:
# - name: ES_JAVA_OPTS
# value: "-Xms2g -Xmx4g"
volumeMounts:
- name: elasticsearch-data-prod
mountPath: /usr/share/production/elasticsearch/data
volumeClaimTemplates:
- metadata:
name: elasticsearch-data-prod
spec:
accessModes:
- ReadWriteOnce
storageClassName: standard
resources:
requests:
storage: 10Gi
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: data-kibana
namespace: production
spec:
version: 8.4.3
count: 1
elasticsearchRef:
name: data-es
dev deployment:
kubectl apply -f dev-depl.yaml
apiVersion: v1
kind: Namespace
metadata:
name: development
labels:
name: development
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: elasticsearch-data-dev
namespace: development
labels:
type: local
spec:
storageClassName: standard
capacity:
storage: 1Gi
accessModes:
- ReadWriteOnce
hostPath:
path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: elasticsearch-data-dev
namespace: development
spec:
storageClassName: standard
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 1Gi
---
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: data-es
namespace: development
spec:
version: 8.4.3
nodeSets:
- name: default
count: 1
config:
node.store.allow_mmap: false
podTemplate:
spec:
containers:
- name: elasticsearch
# resources:
# limits:
# memory: 2Gi
# cpu: 2
# env:
# - name: ES_JAVA_OPTS
# value: "-Xms2g -Xmx4g"
volumeMounts:
- name: elasticsearch-data-dev
mountPath: /usr/share/development/elasticsearch/data
volumeClaimTemplates:
- metadata:
name: elasticsearch-data-dev
spec:
accessModes:
- ReadWriteOnce
storageClassName: standard
resources:
requests:
storage: 1Gi
---
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
name: data-kibana
namespace: development
spec:
version: 8.4.3
count: 1
elasticsearchRef:
name: data-es
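With this setup a single cluster-wide operator (the one installed above) reconciles the Elasticsearch and Kibana objects in both namespaces; it is the namespaces that separate the environments, not separate operators. Health per environment can then be checked with:
kubectl get elasticsearch,kibana -n production
kubectl get elasticsearch,kibana -n development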
Here is my Elasticsearch yaml:
---
# Source: elastic/templates/elastic.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: ichat-els-deployment
spec:
# updateStrategy:
# changeBudget:
# maxSurge: -1
# maxUnavailable: -1
version: 7.11.1
auth:
roles:
- secretName: elastic-roles-secret
fileRealm:
- secretName: elastic-filerealm-secret
nodeSets:
- name: default
count: 1
config:
node.store.allow_mmap: false
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
storageClassName: ""
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
volumeName: elasticsearch-azure-pv
podTemplate:
spec:
initContainers:
- name: install-plugins
command:
- sh
- -c
- |
bin/elasticsearch-plugin install --batch ingest-attachment
- name: default2
count: 0
config:
node.store.allow_mmap: false
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
After creating this, I have the two nodeSets running; kubectl get pods:
NAME READY STATUS RESTARTS AGE
elastic-operator-0 1/1 Running 8 7d23h
ichat-els-deployment-es-default-0 1/1 Running 0 24m
ichat-els-deployment-es-default2-0 1/1 Running 0 26m
Everything is working fine, but now I want to delete the default2 nodeSet. How can I do that?
I tried removing the nodeSet from the manifest and reapplying it, but nothing happened:
---
# Source: elastic/templates/elastic.yaml
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
name: ichat-els-deployment
spec:
# updateStrategy:
# changeBudget:
# maxSurge: -1
# maxUnavailable: -1
version: 7.11.1
auth:
roles:
- secretName: elastic-roles-secret
fileRealm:
- secretName: elastic-filerealm-secret
nodeSets:
- name: default
count: 1
config:
node.store.allow_mmap: false
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
storageClassName: ""
accessModes:
- ReadWriteMany
resources:
requests:
storage: 10Gi
volumeName: elasticsearch-azure-pv
podTemplate:
spec:
initContainers:
- name: install-plugins
command:
- sh
- -c
- |
bin/elasticsearch-plugin install --batch ingest-attachment
The pods and shards are still running and there are no errors in the elastic operator. What is the correct way to remove a nodeSet? Thanks.
I solved that issue by deleting the Elasticsearch object itself:
kubectl delete elasticsearch <elasticsearch_object_name>
That way all objects (StatefulSets -> Pods, Secrets, PVCs -> PVs, etc.) related to the Elasticsearch custom resource were deleted along with it, but I didn't care about data loss here.
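For the cluster in this question that would be roughly the following (the file name is an assumption):
kubectl delete elasticsearch ichat-els-deployment  # deletes the StatefulSets, Pods, Secrets and PVCs the object owns
kubectl apply -f elastic.yaml                      # optionally re-create the cluster from the trimmed manifest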
Hello, I'm currently setting up a rook-cephfs test environment using minikube running on Windows 10.
So far I've run crds.yaml, common.yaml, operator.yaml and cluster-test.yaml. I'm following the guide at https://github.com/kubernetes/kubernetes/tree/release-1.9/cluster/addons/registry to set up the storage.
From this guide, I've created the ReplicationController and the Service. The issue I'm having is that when I run kubectl get svc, I don't see the service. Any idea why it's not showing up? Thanks.
service.yaml
apiVersion: v1
kind: Service
metadata:
name: kube-registry
namespace: kube-system
labels:
k8s-app: kube-registry-upstream
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "KubeRegistry"
spec:
selector:
k8s-app: kube-registry-upstream
ports:
- name: registry
port: 5000
protocol: TCP
Docker registry
apiVersion: v1
kind: ReplicationController
metadata:
name: kube-registry-v0
namespace: kube-system
labels:
k8s-app: kube-registry-upstream
version: v0
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
k8s-app: kube-registry-upstream
version: v0
template:
metadata:
labels:
k8s-app: kube-registry-upstream
version: v0
kubernetes.io/cluster-service: "true"
spec:
containers:
- name: registry
image: registry:2
resources:
limits:
cpu: 100m
memory: 100Mi
env:
- name: REGISTRY_HTTP_ADDR
value: :5000
- name: REGISTRY_STORAGE_FILESYSTEM_ROOTDIRECTORY
value: /var/lib/registry
volumeMounts:
- name: image-store
mountPath: /var/lib/registry
ports:
- containerPort: 5000
name: registry
protocol: TCP
volumes:
- name: image-store
emptyDir: {}
Based on the Service YAML you shared, the service is getting created in the kube-system namespace.
You can view the service using the -n option to specify the namespace:
kubectl get svc kube-registry -n kube-system
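If you are ever unsure which namespace an object landed in, listing across all namespaces also shows it:
kubectl get svc --all-namespaces | grep kube-registry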
I am trying to set up an EFK stack on my k8s cluster using an Ansible repo.
When I try to browse the Kibana dashboard, it comes up empty.
After doing some research, I found out that no logs are being detected by Fluentd.
I am running k8s 1.2.4 on the minions and 1.2.0 on the master.
What I have managed to understand is that the kubelet creates the /var/log/containers directory and makes symlinks in it for every container running in the cluster. Fluentd then mounts the /var/log volume from the minion, thereby gaining access to all container logs, and ships them to Elasticsearch.
In my case /var/log/containers exists but is empty, and even /var/lib/docker/containers does not contain any log files.
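A quick way to inspect this on a minion (assuming Docker's default json-file log driver, which is what this Fluentd image tails):
ls -l /var/log/containers/                   # should contain symlinks into /var/lib/docker/containers
ls /var/lib/docker/containers/*/*-json.log   # the json log files Fluentd actually reads
docker info | grep -i 'logging driver'       # shows which log driver is in effect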
I am using the following controllers and services for the EFK stack setup:
es-controller.yaml
apiVersion: v1
kind: ReplicationController
metadata:
name: elasticsearch-logging-v1
namespace: kube-system
labels:
k8s-app: elasticsearch-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
replicas: 2
selector:
k8s-app: elasticsearch-logging
version: v1
template:
metadata:
labels:
k8s-app: elasticsearch-logging
version: v1
kubernetes.io/cluster-service: "true"
spec:
containers:
- image: gcr.io/google_containers/elasticsearch:v2.4.1
name: elasticsearch-logging
resources:
# need more cpu upon initialization, therefore burstable class
limits:
cpu: 1000m
requests:
cpu: 100m
ports:
- containerPort: 9200
name: db
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: es-persistent-storage
mountPath: /data
env:
- name: "NAMESPACE"
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumes:
- name: es-persistent-storage
emptyDir: {}
es-service.yaml
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-logging
namespace: kube-system
labels:
k8s-app: elasticsearch-logging
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Elasticsearch"
spec:
ports:
- port: 9200
protocol: TCP
targetPort: db
selector:
k8s-app: elasticsearch-logging
fluentd-es.yaml
apiVersion: v1
kind: Pod
metadata:
name: fluentd-es-v1.20
namespace: kube-system
labels:
k8s-app: fluentd-es
version: v1.20
spec:
containers:
- name: fluentd-es
image: gcr.io/google_containers/fluentd-elasticsearch:1.20
command:
- '/bin/sh'
- '-c'
- '/usr/sbin/td-agent 2>&1 >> /var/log/fluentd.log'
resources:
limits:
cpu: 100m
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
kibana-controller.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
spec:
replicas: 1
selector:
matchLabels:
k8s-app: kibana-logging
template:
metadata:
labels:
k8s-app: kibana-logging
spec:
containers:
- name: kibana-logging
image: gcr.io/google_containers/kibana:v4.6.1
resources:
# keep request = limit to keep this container in guaranteed class
limits:
cpu: 100m
requests:
cpu: 100m
env:
- name: "ELASTICSEARCH_URL"
value: "http://elasticsearch-logging:9200"
ports:
- containerPort: 5601
name: ui
protocol: TCP
kibana-service.yaml
apiVersion: v1
kind: Service
metadata:
name: kibana-logging
namespace: kube-system
labels:
k8s-app: kibana-logging
kubernetes.io/cluster-service: "true"
kubernetes.io/name: "Kibana"
spec:
type: NodePort
ports:
- port: 5601
protocol: TCP
targetPort: ui
selector:
k8s-app: kibana-logging
Update:
I changed fluentd-es.yaml as follows:
apiVersion: v1
kind: Pod
metadata:
name: fluentd-elasticsearch
namespace: kube-system
labels:
k8s-app: fluentd-logging
spec:
containers:
- name: fluentd-elasticsearch
image: gcr.io/google_containers/fluentd-elasticsearch:1.15
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
But when I run a pod named "gateway", I get the following error in the Fluentd log:
/var/log/containers/gateway-c3cuu_default_gateway-d5966a86e7cb1519329272a0b900182be81f55524227db2f524e6e23cd75ba04.log unreadable. It is excluded and would be examined next time.
Finally I found out what was causing the issue.
When installing Docker from the CentOS 7 repo, there is an option (--log-driver=journald) which forces Docker to send its log output to journald. The default behavior is to write these logs to json.log files, so the only thing I had to do was remove that option from /etc/sysconfig/docker.
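On that setup the fix amounts to editing the OPTIONS line in /etc/sysconfig/docker and restarting Docker (the exact default options vary by package version, so this is only a sketch):
# /etc/sysconfig/docker
# before: OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
OPTIONS='--selinux-enabled --signature-verification=false'

systemctl restart docker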