How to mount saved objects to Kibana in Kubernetes?

I'm running an EFK stack in my Kubernetes cluster; however, each time I start the Kibana dashboard I have to manually import export.ndjson. I've heard that all Kibana objects are stored in Elasticsearch, so I mounted this file to /usr/share/elasticsearch/data/, but I still can't see it in the dashboard.
Here are my YAML files:
kibana:
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
namespace: tools
spec:
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.6.0
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: kibana
namespace: tools
spec:
selector:
app: kibana
ports:
- name: client
port: 5601
protocol: TCP
type: ClusterIP
es:
apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch
namespace: tools
spec:
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: elasticsearch-volume
mountPath: /usr/share/elasticsearch/data
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.6.0
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
env:
- name: discovery.type
valueFrom:
configMapKeyRef:
name: tools-config
key: discovery.type
volumeMounts:
- name: elasticsearch-volume
mountPath: /usr/share/elasticsearch/data
- name: kibana-dashboard
mountPath: /usr/share/elasticsearch/data/export.ndjson
subPath: export.ndjson
volumes:
- name: elasticsearch-volume
persistentVolumeClaim:
claimName: elasticsearch-storage-pvc
- name: kibana-dashboard
configMap:
name: kibana-dashboard
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: tools
spec:
selector:
app: elasticsearch
ports:
- name: http
protocol: TCP
port: 9200
- name: transport
protocol: TCP
port: 9300
type: ClusterIP
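Kibana stores saved objects (index patterns, visualizations, dashboards) in its own .kibana index inside Elasticsearch, so mounting export.ndjson into /usr/share/elasticsearch/data has no effect: Elasticsearch never reads loose files out of its data directory. One way to automate the import is to POST the file to Kibana's saved objects import API once Kibana is up. The Job below is only a sketch: it reuses the kibana-dashboard ConfigMap from the manifest above, and the Job name, the curl image, and the address kibana.tools.svc.cluster.local:5601 are assumptions, not taken from the original setup.
apiVersion: batch/v1
kind: Job
metadata:
  name: kibana-import-saved-objects   # hypothetical name
  namespace: tools
spec:
  backoffLimit: 10                     # retry a few times in case Kibana is not ready yet
  template:
    spec:
      restartPolicy: OnFailure
      containers:
      - name: import
        image: curlimages/curl:8.5.0
        command: ["sh", "-c"]
        args:
        - >
          curl -sS -X POST
          -H "kbn-xsrf: true"
          --form file=@/import/export.ndjson
          "http://kibana.tools.svc.cluster.local:5601/api/saved_objects/_import?overwrite=true"
        volumeMounts:
        - name: kibana-dashboard
          mountPath: /import
      volumes:
      - name: kibana-dashboard
        configMap:
          name: kibana-dashboard
Because the imported objects end up in Elasticsearch, they survive Kibana restarts; the Job only needs to run once per cluster, or whenever export.ndjson changes.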

Related

Nodeport works but ingress doesn't

I am new to Kubernetes and I am trying to set up a web app for production. I tested an nginx app and it works perfectly with both NodePort and Ingress.
When I set up my web app and exposed it using NodePort it works, but when I expose it via Ingress it doesn't. I have since deleted the nginx app's Deployment, Service, and Ingress, but my web app behaves the same (works via NodePort, not via Ingress). Any help getting this to work is appreciated.
This is the nginx YAML (works via both NodePort and Ingress):
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
namespace: default
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
containers:
- name: nginx
image: nginx:stable
imagePullPolicy: Always
ports:
- containerPort: 80
name: web
protocol: TCP
readinessProbe:
httpGet:
port: web
path: /
---
apiVersion: v1
kind: Service
metadata:
name: nginx-nodeport
namespace: default
spec:
type: NodePort
selector:
# app: nginx
app: mruser-dev
ports:
- name: web
port: 80
targetPort: web
nodePort: 31122
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: nginx-nodeport
port:
number: 80
This is my web app YAML (works via NodePort but not via Ingress):
apiVersion: apps/v1
kind: Deployment
metadata:
name: mywebapp-dev
spec:
replicas: 2
selector:
matchLabels:
app: mywebapp-dev
template:
metadata:
labels:
app: mywebapp-dev
annotations:
prometheus.io/scrape: "true"
prometheus.io/path: /metrics
prometheus.io/port: "80"
scheduler.alpha.kubernetes.io/affinity: >
{
"nodeAffinity": {
"requiredDuringSchedulingIgnoredDuringExecution": {
"nodeSelectorTerms": [
{
"matchExpressions": [
{
"key": "kubernetes.io/hostname",
"operator": "In",
"values": ["server-node-1"]
}
]
}
]
}
}
}
spec:
volumes:
- name: logs
emptyDir: {}
- name: cache
emptyDir: {}
- name: testing
emptyDir: {}
- name: sessions
emptyDir: {}
- name: views
emptyDir: {}
- name: mywebapp-dev-nfsvolume
nfs:
server: 10.0.0.20
path: /var/www/html/misteruser-dev
- name: mywebapp-nginx-nfsvolume
nfs:
server: 10.0.0.20
path: /var/www/nginx
securityContext:
fsGroup: 82
initContainers:
- name: database-migrations
image: mywebapp-dev-laravel-fpm:8.1-fpm-alpine
imagePullPolicy: Never
envFrom:
- configMapRef:
name: mywebapp-dev-laravel
- secretRef:
name: mywebapp-dev-laravel
volumeMounts:
- name: mywebapp-dev-nfsvolume
mountPath: /var/www/html/
command:
- "php"
args:
- "artisan"
- "migrate"
- "--force"
containers:
- name: nginx
imagePullPolicy: Never
image: mywebapp-dev-laravel-nginx:stable-alpine
volumeMounts:
- name: mywebapp-nginx-nfsvolume
mountPath: /etc/nginx/conf.d/
resources:
limits:
cpu: 200m
memory: 50M
ports:
- containerPort: 80
name: web
protocol: TCP
readinessProbe:
httpGet:
port: web
path: /
- name: fpm
imagePullPolicy: Never
envFrom:
- configMapRef:
name: mywebapp-dev-laravel
- secretRef:
name: mywebapp-dev-laravel
securityContext:
runAsUser: 82
readOnlyRootFilesystem: true
volumeMounts:
- name: logs
mountPath: /var/www/html/storage/logs
- name: cache
mountPath: /var/www/html/storage/framework/cache
- name: sessions
mountPath: /var/www/html/storage/framework/sessions
- name: views
mountPath: /var/www/html/storage/framework/views
- name: testing
mountPath: /var/www/html/storage/framework/testing
- name: mywebapp-dev-nfsvolume
mountPath: /var/www/html/
resources:
limits:
cpu: 500m
memory: 200Mi
image: mywebapp-dev-laravel-fpm:8.1-fpm-alpine
ports:
- containerPort: 9000
---
apiVersion: v1
kind: Service
metadata:
name: mywebapp-dev-svc
namespace: default
spec:
type: NodePort
selector:
app: mywebapp-dev
ports:
- name: web
port: 80
targetPort: 80
nodePort: 31112
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: mywebapp-nginx-ingress
annotations:
kubernetes.io/ingress.class: "nginx"
spec:
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: mywebapp-dev-svc
port:
number: 80
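One thing worth noting, though it may or may not be the root cause here: with apiVersion networking.k8s.io/v1 the kubernetes.io/ingress.class annotation is deprecated in favor of spec.ingressClassName, and an Ingress is only as good as the Service behind it, so it is worth confirming the backend Service actually has endpoints (for example with kubectl get endpoints mywebapp-dev-svc). A minimal variant of the web app Ingress using the v1 field, assuming the NGINX ingress controller is installed with an IngressClass named nginx:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: mywebapp-nginx-ingress
  namespace: default
spec:
  ingressClassName: nginx        # replaces the deprecated kubernetes.io/ingress.class annotation
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: mywebapp-dev-svc
            port:
              number: 80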

How to set up EFK logging on AKS cluster nodes?

Below are my spec files for EFK logging in an AKS cluster.
# Elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-premium-retain-sc"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
########################
# Kibana yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
##################
# fluentd daemonset and rbac,sa,clusterrole specs
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
The setup is mostly working; the only issue is that no logs are reaching the Elasticsearch cluster from Fluentd, whereas the same spec files work fine in a minikube cluster.
In this setup Kibana is up and able to connect to Elasticsearch, and the same is true for Fluentd; the logs just never arrive in Elasticsearch.
What extra configuration is needed to make these spec files work with Azure Kubernetes Service (AKS) cluster nodes?
The fix was to add the environment variables below for Fluentd, most likely because AKS nodes run containerd, so container log files are written in the CRI format rather than Docker's JSON format and Fluentd's tail parser has to match.
Reference: https://github.com/fluent/fluentd-kubernetes-daemonset/issues/434
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
Here's the complete spec.
# Elasticsearch.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
namespace: logging
spec:
serviceName: logs-elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
ports:
- containerPort: 9200
name: rest
protocol: TCP
- containerPort: 9300
name: inter-node
protocol: TCP
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.logs-elasticsearch,es-cluster-1.logs-elasticsearch,es-cluster-2.logs-elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms512m -Xmx512m"
initContainers:
- name: fix-permissions
image: busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
securityContext:
privileged: true
volumeMounts:
- name: data-logging
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: busybox
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
- name: increase-fd-ulimit
image: busybox
command: ["sh", "-c", "ulimit -n 65536"]
securityContext:
privileged: true
volumeClaimTemplates:
- metadata:
name: data-logging
labels:
app: elasticsearch
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: "managed-premium-retain-sc"
resources:
requests:
storage: 100Gi
---
kind: Service
apiVersion: v1
metadata:
name: logs-elasticsearch
namespace: logging
labels:
app: elasticsearch
spec:
selector:
app: elasticsearch
clusterIP: None
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
########################
# Kibana yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
labels:
app: kibana
spec:
replicas: 1
selector:
matchLabels:
app: kibana
template:
metadata:
labels:
app: kibana
spec:
containers:
- name: kibana
image: docker.elastic.co/kibana/kibana:7.5.0
resources:
limits:
cpu: 1000m
requests:
cpu: 500m
env:
- name: ELASTICSEARCH_HOSTS
value: http://logs-elasticsearch.logging.svc.cluster.local:9200
ports:
- containerPort: 5601
---
apiVersion: v1
kind: Service
metadata:
name: logs-kibana
spec:
selector:
app: kibana
type: ClusterIP
ports:
- port: 5601
targetPort: 5601
##################
# fluentd daemonset and rbac,sa,clusterrole specs
apiVersion: v1
kind: ServiceAccount
metadata:
name: fluentd
labels:
app: fluentd
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: fluentd
labels:
app: fluentd
rules:
- apiGroups:
- ""
resources:
- pods
- namespaces
verbs:
- get
- list
- watch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: fluentd
roleRef:
kind: ClusterRole
name: fluentd
apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
name: fluentd
namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: fluentd
labels:
app: fluentd
spec:
selector:
matchLabels:
app: fluentd
template:
metadata:
labels:
app: fluentd
spec:
serviceAccount: fluentd
serviceAccountName: fluentd
containers:
- name: fluentd
image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
env:
- name: FLUENT_ELASTICSEARCH_HOST
value: "logs-elasticsearch.logging.svc.cluster.local"
- name: FLUENT_ELASTICSEARCH_PORT
value: "9200"
- name: FLUENT_ELASTICSEARCH_SCHEME
value: "http"
- name: FLUENTD_SYSTEMD_CONF
value: disable
- name: FLUENT_UID
value: "0"
- name: FLUENT_CONTAINER_TAIL_EXCLUDE_PATH
value: /var/log/containers/fluent*
- name: FLUENT_CONTAINER_TAIL_PARSER_TYPE
value: /^(?<time>.+) (?<stream>stdout|stderr)( (?<logtag>.))? (?<log>.*)$/
resources:
limits:
memory: 512Mi
cpu: 500m
requests:
cpu: 100m
memory: 200Mi
volumeMounts:
- name: varlog
mountPath: /var/log/
# - name: varlibdockercontainers
# mountPath: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
mountPath: /var/log/pods
readOnly: true
terminationGracePeriodSeconds: 30
volumes:
- name: varlog
hostPath:
path: /var/log/
# - name: varlibdockercontainers
# hostPath:
# path: /var/lib/docker/containers
- name: dockercontainerlogsdirectory
hostPath:
path: /var/log/pods

How can I deploy an Elasticsearch cluster in a Kubernetes replica set?

I am trying to deploy Elasticsearch to a K8s cluster. Below is my configuration. It works fine with replicas: 1, but if I change it to a value greater than 1, multiple pods get deployed in the K8s cluster. How can I make them work together as the nodes of one Elasticsearch cluster?
I know I can manually add nodes to the Elasticsearch cluster by using the API, but is there any way to let the cluster discover the extra nodes automatically?
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es
namespace: default
spec:
serviceName: es-entrypoint
replicas: 1
selector:
matchLabels:
name: es
template:
metadata:
labels:
name: es
spec:
volumes:
- name: es-config
configMap:
name: es-config
items:
- key: elasticsearch.yml
path: elasticsearch.yml
- name: persistent-storage
persistentVolumeClaim:
claimName: es-claim
initContainers:
- name: permissions-fix
image: busybox
volumeMounts:
- name: persistent-storage
mountPath: /usr/share/elasticsearch/data
command: [ 'chown' ]
args: [ '1000:1000', '/usr/share/elasticsearch/data' ]
containers:
- name: es
image: elasticsearch:7.10.1
resources:
requests:
cpu: 2
memory: 8
ports:
- name: http
containerPort: 9200
- containerPort: 9300
name: inter-node
volumeMounts:
- name: es-config
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
subPath: elasticsearch.yml
- name: persistent-storage
mountPath: /usr/share/elasticsearch/data
---
apiVersion: v1
kind: Service
metadata:
name: es-entrypoint
spec:
selector:
name: es
ports:
- port: 9200
targetPort: 9200
protocol: TCP
type: NodePort
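Elasticsearch 7.x nodes can discover each other automatically when each pod can resolve its peers through the StatefulSet's headless governing Service, which is the pattern the es-cluster StatefulSet earlier on this page uses: discovery.seed_hosts lists the per-pod DNS names and cluster.initial_master_nodes lists the node names used for the initial bootstrap. The sketch below is just that, a sketch; it assumes replicas: 3, that the mounted elasticsearch.yml does not already set conflicting discovery settings, and that es-entrypoint is made headless (clusterIP: None, which also means it can no longer be type NodePort). The pod names es-0..es-2 follow from the StatefulSet name es.
# Extra environment variables for the es container (assuming replicas: 3)
env:
- name: cluster.name
  value: es-cluster
- name: node.name
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: discovery.seed_hosts
  value: "es-0.es-entrypoint,es-1.es-entrypoint,es-2.es-entrypoint"
- name: cluster.initial_master_nodes
  value: "es-0,es-1,es-2"
---
# Headless governing Service so es-0.es-entrypoint etc. resolve to pod IPs
apiVersion: v1
kind: Service
metadata:
  name: es-entrypoint
  namespace: default
spec:
  clusterIP: None
  selector:
    name: es
  ports:
  - port: 9200
    name: http
  - port: 9300
    name: inter-node
A separate ClusterIP or NodePort Service can still be added for client access; the headless one exists only to give each pod a stable DNS name.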

How can I make two containers connect to each other via serviceName in K8s?

I declared two containers in a minikube cluster, elasticsearch and kibana. Kibana needs to reach the elasticsearch endpoint on port 9200. I declared elasticsearch as a StatefulSet and gave it the serviceName elasticsearch.
When I look at the Kibana log I can see this error:
{"type":"log","@timestamp":"2020-12-31T03:37:45Z","tags":["warning","elasticsearch","monitoring"],"pid":6,"message":"Unable to revive connection: http://elasticsearch:9200/"}
{"type":"log","@timestamp":"2020-12-31T03:37:45Z","tags":["warning","elasticsearch","monitoring"],"pid":6,"message":"No living connections"}
It means Kibana can't reach the elasticsearch hostname. Is there anything wrong with my configuration?
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
spec:
serviceName: elasticsearch-entrypoint
replicas: 1
selector:
matchLabels:
name: elasticsearch
template:
metadata:
labels:
name: elasticsearch
spec:
containers:
- name: elasticsearch
image: elasticsearch:7.10.1
ports:
- containerPort: 9200
name: rest
- containerPort: 9300
name: inter-node
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
spec:
replicas: 1
selector:
matchLabels:
name: kibana
template:
metadata:
labels:
name: kibana
spec:
containers:
- name: kibana
image: kibana:7.10.1
ports:
- containerPort: 5601
env:
- name: ELASTICSEARCH_HOSTS
value: http://es-cluster-0.elasticsearch-entrypoint.default.svc.local:9200
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-entrypoint
namespace: default
spec:
clusterIP: None
selector:
name: elasticsearch
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
---
apiVersion: v1
kind: Service
metadata:
name: kibana-entrypoint
namespace: default
spec:
selector:
name: kibana
ports:
- port: 5601
You need to create a headless (governing) Service for your StatefulSet:
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-entrypoint
namespace: default
spec:
clusterIP: None
selector:
name: elasticsearch
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
In your Kibana Deployment, the ELASTICSEARCH_HOSTS environment variable needs to be set to http://es-cluster-0.elasticsearch-entrypoint.default.svc.cluster.local:9200.
The template is pod_name.service_name.namespace.svc.cluster-domain.example, but you can skip the cluster-domain.example part; service_name.namespace.svc on its own works fine too.
Here is the full YAML:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-cluster
spec:
serviceName: elasticsearch-entrypoint
replicas: 1
selector:
matchLabels:
name: elasticsearch
template:
metadata:
labels:
name: elasticsearch
spec:
containers:
- name: elasticsearch
image: elasticsearch:7.10.1
ports:
- containerPort: 9200
name: rest
- containerPort: 9300
name: inter-node
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
spec:
replicas: 1
selector:
matchLabels:
name: kibana
template:
metadata:
labels:
name: kibana
spec:
containers:
- name: kibana
image: kibana:7.10.1
ports:
- containerPort: 5601
env:
- name: ELASTICSEARCH_HOSTS
value: http://es-cluster-0.elasticsearch-entrypoint.default.svc.cluster.local:9200
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-entrypoint
namespace: default
spec:
clusterIP: None
selector:
name: elasticsearch
ports:
- port: 9200
name: rest
- port: 9300
name: inter-node
---
apiVersion: v1
kind: Service
metadata:
name: kibana-entrypoint
namespace: default
spec:
selector:
name: kibana
ports:
- port: 5601
From the docs:
"StatefulSets currently require a Headless Service to be responsible for the network identity of the Pods. You are responsible for creating this Service by specifying clusterIP: None."
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: default
spec:
clusterIP: None
selector:
name: elasticsearch
ports:
- port: 9200
- port: 9300
Then you can access it via elasticsearch:9200 and elasticsearch:9300
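A quick way to confirm the DNS side from inside the cluster is a throwaway pod that runs nslookup against the headless Service; the pod name and busybox tag below are arbitrary, and the lookup target should be whichever headless Service name you created (elasticsearch here, or elasticsearch-entrypoint in the earlier answer). For a headless Service the lookup returns one A record per matching pod.
apiVersion: v1
kind: Pod
metadata:
  name: dns-check            # throwaway debug pod
  namespace: default
spec:
  restartPolicy: Never
  containers:
  - name: dns-check
    image: busybox:1.36
    # Prints the pod IPs backing the headless Service
    command: ["nslookup", "elasticsearch.default.svc.cluster.local"]
Apply it, read the result with kubectl logs dns-check, and delete the pod afterwards.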

How can I create a pod without this error (Elasticsearch on Kubernetes)?

I want to create an Elasticsearch pod on Kubernetes.
I made some config changes to edit path.data and path.logs, but I'm getting this error:
error: error validating "es-deploy.yml": error validating data:
ValidationError(Deployment.spec.template.spec.containers[0]): unknown
field "volumes" in io.k8s.api.core.v1.Container; if you choose to
ignore these errors, turn validation off with --validate=false
service-account.yml
apiVersion: v1
kind: ServiceAccount
metadata:
name: elasticsearch
es-svc.yml
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
labels:
component: elasticsearch
spec:
# type: LoadBalancer
selector:
component: elasticsearch
ports:
- name: http
port: 9200
protocol: TCP
- name: transport
port: 9300
protocol: TCP
elasticsearch.yml
apiVersion: v1
kind: ConfigMap
metadata:
name: elasticsearch-config
data:
elasticsearch.yml: |
cluster:
name: ${CLUSTER_NAME:elasticsearch-default}
node:
master: ${NODE_MASTER:true}
data: ${NODE_DATA:true}
name: ${NODE_NAME}
ingest: ${NODE_INGEST:true}
max_local_storage_nodes: ${MAX_LOCAL_STORAGE_NODES:1}
processors: ${PROCESSORS:1}
network.host: ${NETWORK_HOST:_site_}
path:
data: ${DATA_PATH:"/data/elk"}
repo: ${REPO_LOCATIONS:[]}
bootstrap:
memory_lock: ${MEMORY_LOCK:false}
http:
enabled: ${HTTP_ENABLE:true}
compression: true
cors:
enabled: true
allow-origin: "*"
discovery:
zen:
ping.unicast.hosts: ${DISCOVERY_SERVICE:elasticsearch-discovery}
minimum_master_nodes: ${NUMBER_OF_MASTERS:1}
xpack:
license.self_generated.type: basic
es-deploy.yml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: es
labels:
component: elasticsearch
spec:
replicas: 1
template:
metadata:
labels:
component: elasticsearch
spec:
serviceAccount: elasticsearch
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
containers:
- name: es
securityContext:
capabilities:
add:
- IPC_LOCK
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: "DISCOVERY_SERVICE"
value: "elasticsearch"
- name: NODE_MASTER
value: "true"
- name: NODE_DATA
value: "true"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: "-Xms256m -Xmx256m"
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: storage
mountPath: /data/elk
- name: config-volume
mountPath: /usr/share/elasticsearch/elastic.yaml
volumes:
- name: storage
emptyDir: {}
- name: config-volume
configMap:
name: elasticsearch-config
There is a syntax problem in your es-deploy.yml file.
This should work:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
name: es
labels:
component: elasticsearch
spec:
replicas: 1
template:
metadata:
labels:
component: elasticsearch
spec:
serviceAccount: elasticsearch
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command: ["sysctl", "-w", "vm.max_map_count=262144"]
securityContext:
privileged: true
containers:
- name: es
securityContext:
capabilities:
add:
- IPC_LOCK
image: docker.elastic.co/elasticsearch/elasticsearch:7.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: "DISCOVERY_SERVICE"
value: "elasticsearch"
- name: NODE_MASTER
value: "true"
- name: NODE_DATA
value: "true"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: "-Xms256m -Xmx256m"
ports:
- containerPort: 9200
name: http
protocol: TCP
- containerPort: 9300
name: transport
protocol: TCP
volumeMounts:
- name: storage
mountPath: /data/elk
- name: config-volume
mountPath: /usr/share/elasticsearch/elastic.yaml
volumes:
- name: storage
emptyDir: {}
- name: config-volume
configMap:
name: elasticsearch-config
The volumes section does not belong under the containers section; it should be under the spec section, as the error suggests.
You can validate your k8s YAML files for syntax errors online.
Hope this helps.
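To make the structural point explicit: volumes is a field of the pod spec, a sibling of containers and initContainers, while volumeMounts lives inside each container entry. A stripped-down skeleton of the relevant part of the Deployment (the mountPath/subPath here targets Elasticsearch's stock config location, which is an assumption and differs from the /usr/share/elasticsearch/elastic.yaml path in the original manifest):
spec:
  template:
    spec:
      containers:
      - name: es
        volumeMounts:              # inside the container entry
        - name: storage
          mountPath: /data/elk
        - name: config-volume
          mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
          subPath: elasticsearch.yml
      volumes:                     # pod-level field, a sibling of containers
      - name: storage
        emptyDir: {}
      - name: config-volume
        configMap:
          name: elasticsearch-config
Separately, apps/v1beta1 Deployments were removed in Kubernetes 1.16, so on a current cluster the apiVersion would also need to become apps/v1, which then requires a spec.selector.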
