Filebeat not harvesting logs with autodiscover - elasticsearch

I'm having an issue with Filebeat in one environment, which suddenly stopped sending logs to Elasticsearch. Both environments have the same setup, but in this one it just stopped. Filebeat, Elasticsearch and Kibana are all version 7.15.0, all deployed with Helm.
/var/lib/docker/containers/ is empty in the Filebeat container, but it is also empty in the other, working environment.
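A quick way to narrow this down is to exec into the Filebeat pod and compare the Docker log directory with the CRI-style log paths. This is only a sketch, using the pod name and namespace from the manifests below; on CRI runtimes (containerd, CRI-O) container logs live under /var/log/pods, with symlinks in /var/log/containers, rather than under /var/lib/docker/containers:
kubectl -n logging exec -it filebeat-filebeat-95l2d -- sh
ls /var/lib/docker/containers/      # empty means no dockershim-style JSON logs on this node
ls /var/log/containers/ | head      # symlinks into /var/log/pods on CRI runtimes
ls /var/log/pods/ | head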
Filebeat logs:
2022-07-02T16:56:12.731Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:12.731Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "31e0e6d8-e599-453a-a8d0-69afdf5b52d6"}
2022-07-02T16:56:12.731Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "31e0e6d8-e599-453a-a8d0-69afdf5b52d6"}
2022-07-02T16:56:12.976Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:12.976Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "89b55ab8-8fb3-49c4-9d9e-2372c956cf49"}
2022-07-02T16:56:12.977Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "89b55ab8-8fb3-49c4-9d9e-2372c956cf49"}
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "ac5b2c6d-189a-420a-bb00-f9d9e6d5aef7"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "be885467-72ea-44c1-bdce-cdd91fb03e79"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:215 Start next scan {"input_id": "1fa30d44-77e8-42ec-8d22-55abd4f8f60b"}
2022-07-02T16:56:13.074Z DEBUG [input] input/input.go:139 Run input
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "ac5b2c6d-189a-420a-bb00-f9d9e6d5aef7"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "1fa30d44-77e8-42ec-8d22-55abd4f8f60b"}
2022-07-02T16:56:13.074Z DEBUG [input] log/input.go:279 input states cleaned up. Before: 0, After: 0, Pending: 0 {"input_id": "be885467-72ea-44c1-bdce-cdd91fb03e79"}
Inside the filebeat container:
ls data/registry/filebeat
log.json
meta.json
cat logs/filebeat
2022-07-02T17:37:30.639Z INFO instance/beat.go:665 Home path: [/usr/share/filebeat] Config path: [/usr/share/filebeat] Data path: [/usr/share/filebeat/data] Logs path: [/usr/share/filebeat/logs]
2022-07-02T17:37:30.640Z DEBUG [beat] instance/beat.go:723 Beat metadata path: /usr/share/filebeat/data/meta.json
2022-07-02T17:37:30.640Z INFO instance/beat.go:673 Beat ID: b0e19db9-df61-4eec-9a95-1cd5ef653718
2022-07-02T17:37:30.640Z INFO [index-management] idxmgmt/std.go:184 Set output.elasticsearch.index to 'filebeat-7.15.0' as ILM is enabled.
2022-07-02T17:37:30.641Z INFO [esclientleg] eslegclient/connection.go:100 elasticsearch url: http://elasticsearch.logging:9200
2022-07-02T17:37:30.740Z DEBUG [esclientleg] eslegclient/connection.go:249 ES Ping(url=http://elasticsearch.logging:9200)
2022-07-02T17:37:30.742Z DEBUG [esclientleg] transport/logging.go:41 Completed dialing successfully {"network": "tcp", "address": "elasticsearch.logging:9200"}
2022-07-02T17:37:30.743Z DEBUG [esclientleg] eslegclient/connection.go:272 Ping status code: 200
2022-07-02T17:37:30.743Z INFO [esclientleg] eslegclient/connection.go:273 Attempting to connect to Elasticsearch version 7.15.0
2022-07-02T17:37:30.743Z DEBUG [esclientleg] eslegclient/connection.go:328 GET http://elasticsearch.logging:9200/_license?human=false <nil>
cat data/meta.json
{"uuid":"b0e19db9-df61-4eec-9a95-1cd5ef653718","first_start":"2022-05-29T00:10:26.137238912Z"}
ls data/registry/filebeat
log.json
meta.json
cat data/registry/filebeat/log.json
cat data/registry/filebeat/meta.json
{"version":"1"}
apiVersion: v1
kind: Pod
metadata:
annotations:
cni.projectcalico.org/containerID: 1e66a1c066aa10de73834586c605c7adf71b2c652498b0de7a9d94b44633f919
cni.projectcalico.org/podIP: 10.0.4.120/32
cni.projectcalico.org/podIPs: 10.0.4.120/32
co.elastic.logs/enabled: "false"
configChecksum: 9e8011c4cd9f9bf36cafe98af8e7862345164b1c11f062f4ab9a67492248076
kubectl.kubernetes.io/restartedAt: "2022-04-14T16:22:07+03:00"
creationTimestamp: "2022-07-01T13:53:29Z"
generateName: filebeat-filebeat-
labels:
app: filebeat-filebeat
chart: filebeat-7.15.0
controller-revision-hash: 79bdd78b56
heritage: Helm
pod-template-generation: "21"
release: filebeat
name: filebeat-filebeat-95l2d
namespace: logging
ownerReferences:
- apiVersion: apps/v1
blockOwnerDeletion: true
controller: true
kind: DaemonSet
name: filebeat-filebeat
uid: 343f6f76-ffde-11e9-bf3f-42010a9c01ac
resourceVersion: "582889515"
uid: 916d7dc9-f4b2-498a-9963-91213f568560
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchFields:
- key: metadata.name
operator: In
values:
- ..mynode
containers:
- args:
- -e
- -E
- http.enabled=true
env:
- name: POD_NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: spec.nodeName
- name: ELASTICSEARCH_HOSTS
value: elasticsearch.logging:9200
image: docker.elastic.co/beats/filebeat:7.15.0
imagePullPolicy: IfNotPresent
livenessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
curl --fail 127.0.0.1:5066
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
name: filebeat
readinessProbe:
exec:
command:
- sh
- -c
- |
#!/usr/bin/env bash -e
filebeat test output
failureThreshold: 3
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 5
resources:
limits:
cpu: 100m
memory: 200Mi
requests:
cpu: 50m
memory: 50Mi
securityContext:
privileged: false
runAsUser: 0
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /usr/share/filebeat/filebeat.yml
name: filebeat-config
readOnly: true
subPath: filebeat.yml
- mountPath: /usr/share/filebeat/my_ilm_policy.json
name: filebeat-config
readOnly: true
subPath: my_ilm_policy.json
- mountPath: /usr/share/filebeat/data
name: data
- mountPath: /var/lib/docker/containers
name: varlibdockercontainers
readOnly: true
- mountPath: /var/log
name: varlog
readOnly: true
- mountPath: /var/run/docker.sock
name: varrundockersock
readOnly: true
- mountPath: /var/run/secrets/kubernetes.io/serviceaccount
name: kube-api-access-2gvbn
readOnly: true
dnsPolicy: ClusterFirst
enableServiceLinks: true
nodeName: ..mynode
preemptionPolicy: PreemptLowerPriority
priority: 0
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
serviceAccount: filebeat-filebeat
serviceAccountName: filebeat-filebeat
terminationGracePeriodSeconds: 30
tolerations:
- effect: NoExecute
key: node.kubernetes.io/not-ready
operator: Exists
- effect: NoExecute
key: node.kubernetes.io/unreachable
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/disk-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/memory-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/pid-pressure
operator: Exists
- effect: NoSchedule
key: node.kubernetes.io/unschedulable
operator: Exists
volumes:
- configMap:
defaultMode: 384
name: filebeat-filebeat-daemonset-config
name: filebeat-config
- hostPath:
path: /var/lib/filebeat-filebeat-logging-data
type: DirectoryOrCreate
name: data
- hostPath:
path: /var/lib/docker/containers
type: ""
name: varlibdockercontainers
- hostPath:
path: /var/log
type: ""
name: varlog
- hostPath:
path: /var/run/docker.sock
type: ""
name: varrundockersock
- name: kube-api-access-3axln
projected:
defaultMode: 420
sources:
- serviceAccountToken:
expirationSeconds: 3607
path: token
- configMap:
items:
- key: ca.crt
path: ca.crt
name: kube-root-ca.crt
- downwardAPI:
items:
- fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
path: namespace

It actually worked with another configuration, posted on the elastic.co website:
filebeat.autodiscover:
providers:
- type: kubernetes
hints.enabled: true
hints.default_config:
type: container
paths:
- /var/log/containers/*-${data.container.id}.log # CRI path
https://www.elastic.co/guide/en/beats/filebeat/current/configuration-autodiscover-hints.html
I'm still not sure why this happened suddenly, but the reason might be a change of the Kubernetes container runtime on the node; I don't have access to check that, though.
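If kubectl access to the cluster is available, the runtime can at least be confirmed without logging on to the node itself; this is a generic check, not something taken from the environment above:
kubectl get nodes -o wide                 # the CONTAINER-RUNTIME column shows docker:// vs containerd://
kubectl get node <node-name> -o jsonpath='{.status.nodeInfo.containerRuntimeVersion}'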

Related

Getting "event.agent_id_status auth_metadata_missing" error while sending logs from standalone elasticsearch agent

I am trying to send the logs from my AKS cluster into Elasticsearch. The log I am getting in my Kibana is "event.agent_id_status auth_metadata_missing", even though all the volume mounts are done correctly.
Here's my ConfigMap for the standalone Elastic Agent:
apiVersion: v1
kind: ConfigMap
metadata:
name: agent-node-datastreams
namespace: elastic
labels:
k8s-app: elastic-agent-standalone
data:
agent.yml: |-
outputs:
default:
type: elasticsearch
protocol: https
ssl.verification_mode: 'none'
allow_older_versions: true
hosts:
- >-
${ES_HOST}
username: ${ES_USERNAME}
password: ${ES_PASSWORD}
indices:
- index: "journalbeat-alias"
when:
and:
- has_fields: ['fields.k8s.component']
- equals:
fields.k8s.component: "journal"
- index: "test-audit"
when:
and:
- has_fields: ['fields.k8s.component']
- equals:
fields.k8s.component: "audit"
- index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"
when.not:
has_fields: ['kubernetes.namespace']
agent:
monitoring:
enabled: true
use_output: default
logs: true
metrics: false
providers.kubernetes:
node: ${NODE_NAME}
scope: node
#Uncomment to enable hints' support
#hints.enabled: true
inputs:
- name: system-logs
type: logfile
use_output: default
meta:
package:
name: system
version: 0.10.7
data_stream:
namespace: filebeat
streams:
- data_stream:
dataset: audit
type: logfile
paths:
- /var/log/*.log
exclude_files:
- .gz$
multiline:
pattern: ^\s
match: after
processors:
- add_fields:
target: ''
fields:
ecs.version: 1.12.0
- data_stream:
dataset: journald
type: logfile
paths:
- /var/lib/host/log/journal
exclude_files:
- .gz$
multiline:
pattern: ^\s
match: after
processors:
- add_fields:
target: ''
fields:
ecs.version: 1.12.0
- data_stream:
dataset: container
type: logfile
paths:
- /var/log/containers/*.log
exclude_files:
- .gz$
multiline:
pattern: ^\s
match: after
processors:
- add_fields:
target: ''
fields:
ecs.version: 1.12.0
And here's my DaemonSet file:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: elastic-agent-standalone
namespace: elastic
labels:
app: elastic-agent-standalone
spec:
selector:
matchLabels:
app: elastic-agent-standalone
template:
metadata:
labels:
app: elastic-agent-standalone
spec:
# Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
# Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: elastic-agent-standalone
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: elastic-agent-standalone
image: docker.elastic.co/beats/elastic-agent:8.4.3
args: [
"-c", "/etc/elastic-agent/agent.yml",
"-e",
]
env:
# The basic authentication username used to connect to Elasticsearch
# This user needs the privileges required to publish events to Elasticsearch.
- name: FLEET_ENROLL_INSECURE
value: "1"
- name: ES_USERNAME
value: <CORRECT USER>
# The basic authentication password used to connect to Elasticsearch
- name: ES_PASSWORD
value: <MY CORRECT PASSWORD>
# The Elasticsearch host to communicate with
- name: ES_HOST
value: <CORRECT HOST>
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: STATE_PATH
value: "/etc/elastic-agent"
securityContext:
runAsUser: 0
resources:
limits:
memory: 700Mi
requests:
cpu: 100m
memory: 400Mi
volumeMounts:
- name: datastreams
mountPath: /etc/elastic-agent/agent.yml
readOnly: true
subPath: agent.yml
- name: proc
mountPath: /hostfs/proc
readOnly: true
- name: cgroup
mountPath: /hostfs/sys/fs/cgroup
readOnly: true
- name: varlogcontainers
mountPath: /var/log/containers
readOnly: true
- name: varlogpods
mountPath: /var/log/pods
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: kubenodevarlogs
mountPath: /var/lib/host/log
readOnly: true
# - name: varlog
# mountPath: /var/log
# readOnly: true
- name: etc-full
mountPath: /hostfs/etc
readOnly: true
- name: var-lib
mountPath: /hostfs/var/lib
readOnly: true
volumes:
- name: datastreams
configMap:
defaultMode: 0640
name: agent-node-datastreams
- name: proc
hostPath:
path: /proc
- name: cgroup
hostPath:
path: /sys/fs/cgroup
- name: varlogcontainers
hostPath:
path: /var/log/containers
- name: varlogpods
hostPath:
path: /var/log/pods
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: kubenodevarlogs
hostPath:
path: /var/log
# The following volumes are needed for Cloud Security Posture integration (cloudbeat)
# If you are not using this integration, then these volumes and the corresponding
# mounts can be removed.
- name: etc-full
hostPath:
path: /etc
- name: var-lib
hostPath:
path: /var/lib
And this is the log I get in Kibana for the path /var/log/containers, but it's the same for all the other inputs, and no data streams are being generated for any of the paths.
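One hedged first step is to rule out the credentials and the output themselves from inside the agent pod; elastic-agent can also print its rendered policy and output health. The pod name is a placeholder, and this assumes curl is present in the elastic-agent image:
kubectl -n elastic exec -it <elastic-agent-pod> -- bash
curl -k -u "$ES_USERNAME:$ES_PASSWORD" "$ES_HOST"    # should return the cluster banner, not a 401
elastic-agent status                                  # the output should report Healthy
elastic-agent inspect                                 # dumps the configuration the agent is actually running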

Run elasticsearch 7.17 on Openshift error chroot: cannot change root directory to '/': Operation not permitted

When starting an Elasticsearch 7.17 cluster in OpenShift, the cluster writes an error:
chroot: cannot change root directory to '/': Operation not permitted
Kibana started OK.
Code:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
namespace: elasticsearch-pp
spec:
serviceName: elasticsearch
replicas: 3
selector:
matchLabels:
app: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- name: elasticsearch
image: NEXUS/elasticsearch:7.17.7
env:
- name: cluster.name
value: k8s-logs
- name: node.name
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: discovery.seed_hosts
value: "es-cluster-0.elasticsearch,es-cluster-1.elasticsearch,es-cluster-2.elasticsearch"
- name: cluster.initial_master_nodes
value: "es-cluster-0,es-cluster-1,es-cluster-2"
- name: ES_JAVA_OPTS
value: "-Xms4012m -Xmx4012m"
ports:
- containerPort: 9200
name: client
- containerPort: 9300
name: nodes
volumeMounts:
- name: data-cephfs
mountPath: /usr/share/elasticsearch/data
initContainers:
- name: fix-permissions
image: NEXUS/busybox
command: ["sh", "-c", "chown -R 1000:1000 /usr/share/elasticsearch/data"]
volumeMounts:
- name: data-cephfs
mountPath: /usr/share/elasticsearch/data
- name: increase-vm-max-map
image: NEXUS/busybox
imagePullPolicy: IfNotPresent
command: ["/bin/sh"]
args: ["-c", "sysctl -w vm.max_map_count=262144; echo vm.max_map_count=262144 >> /etc/sysctl.conf ; sysctl -p"]
# image: NEXUS/busybox
# command: ["sysctl", "-w", "vm.max_map_count=262144"]
- name: increase-fd-ulimit
image: NEXUS/busybox
command: ["sh", "-c", "ulimit -n 65536"]
serviceAccount: elk-anyuid
serviceAccountName: elk-anyuid
restartPolicy: Always
volumes:
- name: data-cephfs
persistentVolumeClaim:
claimName: data-cephfs `
I tried changing the cluster settings and disabling the initContainers, but the error persists.
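The chroot error usually means the container is not allowed to run the root/privileged steps (the chown and sysctl initContainers) under OpenShift's default restricted SCC. A hedged way to test that, assuming cluster-admin rights and that elk-anyuid is the service account actually used by the pods, is to grant it a more permissive SCC and check which SCC the pod ends up with:
oc adm policy add-scc-to-user anyuid -z elk-anyuid -n elasticsearch-pp
oc adm policy add-scc-to-user privileged -z elk-anyuid -n elasticsearch-pp   # only if the sysctl initContainer must really run
oc get pod <es-pod> -n elasticsearch-pp -o yaml | grep 'openshift.io/scc'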

Metricbeat not showing pod cpu usage

I have a Kubernetes cluster with minikube, from which I would like to get metrics via Elastic Metricbeat. Almost all the metrics are showing, except the CPU usage, which is always 0, even when I'm running a pod with high CPU usage. Here are my YAMLs; what can I possibly be doing wrong?
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: metricbeat
namespace: default
spec:
type: metricbeat
version: 8.1.0
elasticsearchRef:
name: elasticsearch
kibanaRef:
name: kibana
config:
setup:
kibana:
path: "/kibana"
metricbeat:
autodiscover:
providers:
- hints:
default_config: {}
enabled: "true"
node: ${NODE_NAME}
type: kubernetes
modules:
- module: system
period: 10s
metricsets:
- cpu
- load
- memory
- network
- process
- process_summary
process:
include_top_n:
by_cpu: 5
by_memory: 5
processes:
- .*
- module: system
period: 1m
metricsets:
- filesystem
- fsstat
processors:
- drop_event:
when:
regexp:
system:
filesystem:
mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib)($|/)
- module: kubernetes
period: 10s
node: ${NODE_NAME}
hosts:
- https://${NODE_NAME}:10250
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl:
verification_mode: none
metricsets:
- node
- system
- pod
- container
- volume
processors:
- add_cloud_metadata: {}
- add_host_metadata: {}
daemonSet:
podTemplate:
spec:
serviceAccountName: metricbeat
automountServiceAccountToken: true
containers:
- args:
- -e
- -c
- /etc/beat.yml
- -system.hostfs=/hostfs
name: metricbeat
volumeMounts:
- mountPath: /hostfs/sys/fs/cgroup
name: cgroup
- mountPath: /var/run/docker.sock
name: dockersock
- mountPath: /hostfs/proc
name: proc
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
securityContext:
runAsUser: 0
terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /sys/fs/cgroup
name: cgroup
- hostPath:
path: /var/run/docker.sock
name: dockersock
- hostPath:
path: /proc
name: proc
And here is the ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metricbeat
namespace: default
rules:
- apiGroups:
- ""
resources:
- nodes
- namespaces
- events
- pods
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
- deployments
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/stats
verbs:
- get
- nonResourceURLs:
- /metrics
verbs:
- get
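Since the manifests look reasonable, a hedged way to tell whether the zero CPU value comes from Metricbeat or from the kubelet itself is to query the kubelet summary API directly with the metricbeat service-account token (curl should be available in the Beats images, as the probes suggest):
# run from inside the Metricbeat pod; NODE_NAME is already injected by the DaemonSet
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -sk -H "Authorization: Bearer $TOKEN" "https://${NODE_NAME}:10250/stats/summary" | grep -B2 -A4 usageNanoCores | head -40
# if usageNanoCores is 0 or missing here, the problem is in the kubelet/cAdvisor data, not in Metricbeat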

Elasticsearch pods failing at readiness probe

Elasticsearch pod is not becoming active.
logging-es-data-master-ilmz5zyt-3-deploy 1/1 Running 0 5m
logging-es-data-master-ilmz5zyt-3-qxkml 1/2 Running 0 5m
And the events are:
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m default-scheduler Successfully assigned logging-es-data-master-ilmz5zyt-3-qxkml to digi-srv-pp-01
Normal Pulled 5m kubelet, digi-srv-pp-01 Container image "docker.io/openshift/origin-logging-elasticsearch:v3.10" already present on machine
Normal Created 5m kubelet, digi-srv-pp-01 Created container
Normal Started 5m kubelet, digi-srv-pp-01 Started container
Normal Pulled 5m kubelet, digi-srv-pp-01 Container image "docker.io/openshift/oauth-proxy:v1.0.0" already present on machine
Normal Created 5m kubelet, digi-srv-pp-01 Created container
Normal Started 5m kubelet, digi-srv-pp-01 Started container
Warning Unhealthy 13s (x55 over 4m) kubelet, digi-srv-pp-01 Readiness probe failed: Elasticsearch node is not ready to accept HTTP requests yet [response code: 000]
The deployment config is:
# oc export dc/logging-es-data-master-ilmz5zyt -o yaml
Command "export" is deprecated, use the oc get --export
apiVersion: v1
kind: DeploymentConfig
metadata:
creationTimestamp: null
generation: 5
labels:
component: es
deployment: logging-es-data-master-ilmz5zyt
logging-infra: elasticsearch
provider: openshift
name: logging-es-data-master-ilmz5zyt
spec:
replicas: 1
revisionHistoryLimit: 0
selector:
component: es
deployment: logging-es-data-master-ilmz5zyt
logging-infra: elasticsearch
provider: openshift
strategy:
activeDeadlineSeconds: 21600
recreateParams:
timeoutSeconds: 600
resources: {}
type: Recreate
template:
metadata:
creationTimestamp: null
labels:
component: es
deployment: logging-es-data-master-ilmz5zyt
logging-infra: elasticsearch
provider: openshift
name: logging-es-data-master-ilmz5zyt
spec:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- podAffinityTerm:
labelSelector:
matchExpressions:
- key: logging-infra
operator: In
values:
- elasticsearch
topologyKey: kubernetes.io/hostname
weight: 100
containers:
- env:
- name: DC_NAME
value: logging-es-data-master-ilmz5zyt
- name: NAMESPACE
valueFrom:
fieldRef:
apiVersion: v1
fieldPath: metadata.namespace
- name: KUBERNETES_TRUST_CERTIFICATES
value: "true"
- name: SERVICE_DNS
value: logging-es-cluster
- name: CLUSTER_NAME
value: logging-es
- name: INSTANCE_RAM
value: 12Gi
- name: HEAP_DUMP_LOCATION
value: /elasticsearch/persistent/heapdump.hprof
- name: NODE_QUORUM
value: "1"
- name: RECOVER_EXPECTED_NODES
value: "1"
- name: RECOVER_AFTER_TIME
value: 5m
- name: READINESS_PROBE_TIMEOUT
value: "30"
- name: POD_LABEL
value: component=es
- name: IS_MASTER
value: "true"
- name: HAS_DATA
value: "true"
- name: PROMETHEUS_USER
value: system:serviceaccount:openshift-metrics:prometheus
image: docker.io/openshift/origin-logging-elasticsearch:v3.10
imagePullPolicy: IfNotPresent
name: elasticsearch
ports:
- containerPort: 9200
name: restapi
protocol: TCP
- containerPort: 9300
name: cluster
protocol: TCP
readinessProbe:
exec:
command:
- /usr/share/java/elasticsearch/probe/readiness.sh
failureThreshold: 3
initialDelaySeconds: 30
periodSeconds: 5
successThreshold: 1
timeoutSeconds: 120
resources:
limits:
memory: 12Gi
requests:
cpu: "1"
memory: 12Gi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/elasticsearch/secret
name: elasticsearch
readOnly: true
- mountPath: /usr/share/java/elasticsearch/config
name: elasticsearch-config
readOnly: true
- mountPath: /elasticsearch/persistent
name: elasticsearch-storage
- args:
- --upstream-ca=/etc/elasticsearch/secret/admin-ca
- --https-address=:4443
- -provider=openshift
- -client-id=system:serviceaccount:openshift-logging:aggregated-logging-elasticsearch
- -client-secret-file=/var/run/secrets/kubernetes.io/serviceaccount/token
- -cookie-secret=endzaVczSWMzb0NoNlVtVw==
- -basic-auth-password=NXd9xTjg4npjIM0E
- -upstream=https://localhost:9200
- '-openshift-sar={"namespace": "openshift-logging", "verb": "view", "resource":
"prometheus", "group": "metrics.openshift.io"}'
- '-openshift-delegate-urls={"/": {"resource": "prometheus", "verb": "view",
"group": "metrics.openshift.io", "namespace": "openshift-logging"}}'
- --tls-cert=/etc/tls/private/tls.crt
- --tls-key=/etc/tls/private/tls.key
- -pass-access-token
- -pass-user-headers
image: docker.io/openshift/oauth-proxy:v1.0.0
imagePullPolicy: IfNotPresent
name: proxy
ports:
- containerPort: 4443
name: proxy
protocol: TCP
resources:
limits:
memory: 64Mi
requests:
cpu: 100m
memory: 64Mi
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
volumeMounts:
- mountPath: /etc/tls/private
name: proxy-tls
readOnly: true
- mountPath: /etc/elasticsearch/secret
name: elasticsearch
readOnly: true
dnsPolicy: ClusterFirst
nodeSelector:
node-role.kubernetes.io/compute: "true"
restartPolicy: Always
schedulerName: default-scheduler
securityContext:
supplementalGroups:
- 65534
serviceAccount: aggregated-logging-elasticsearch
serviceAccountName: aggregated-logging-elasticsearch
terminationGracePeriodSeconds: 30
volumes:
- name: proxy-tls
secret:
defaultMode: 420
secretName: prometheus-tls
- name: elasticsearch
secret:
defaultMode: 420
secretName: logging-elasticsearch
- configMap:
defaultMode: 420
name: logging-elasticsearch
name: elasticsearch-config
- name: elasticsearch-storage
persistentVolumeClaim:
claimName: logging-es-0
test: false
triggers: []
status:
availableReplicas: 0
latestVersion: 0
observedGeneration: 0
replicas: 0
unavailableReplicas: 0
updatedReplicas: 0
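Response code 000 means the readiness script could not reach Elasticsearch on https://localhost:9200 at all, so a reasonable first step is to look at what the Elasticsearch container itself logs and whether it answers locally. Pod name and mount paths are taken from the config above; the certificate file names (admin-ca, admin-cert, admin-key) are an assumption about the origin-logging secret layout:
oc logs logging-es-data-master-ilmz5zyt-3-qxkml -c elasticsearch --tail=50
oc exec logging-es-data-master-ilmz5zyt-3-qxkml -c elasticsearch -- curl -sk --cacert /etc/elasticsearch/secret/admin-ca --cert /etc/elasticsearch/secret/admin-cert --key /etc/elasticsearch/secret/admin-key 'https://localhost:9200/_cat/health?v'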

Getting error 'unknown field hostPath' Kubernetes Elasticsearch using with local volume

I am trying to deploy Elasticsearch in Kubernetes with a local drive volume, but I get the following error; can you please correct me?
using ubuntu 16.04
kubernetes v1.11.0
Docker version 17.03.2-ce
error: error validating "es-d.yaml": error validating data: ValidationError(StatefulSet.spec.template.spec.containers[1]): unknown field "hostPath" in io.k8s.api.core.v1.Container; if you choose to ignore these errors, turn validation off with --validate=false
This is the YAML file of the StatefulSet:
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: es-data
labels:
component: elasticsearch
role: data
spec:
serviceName: elasticsearch-data
replicas: 1
template:
metadata:
labels:
component: elasticsearch
role: data
spec:
initContainers:
- name: init-sysctl
image: alpine:3.6
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: es-data
image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: myesdb
- name: NODE_MASTER
value: "false"
- name: NODE_INGEST
value: "false"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: -Xms256m -Xmx256m
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
resources:
requests:
cpu: 0.25
limits:
cpu: 1
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
livenessProbe:
tcpSocket:
port: transport
initialDelaySeconds: 20
periodSeconds: 10
readinessProbe:
httpGet:
path: /_cluster/health
port: http
initialDelaySeconds: 20
timeoutSeconds: 5
volumeMounts:
- name: storage
mountPath: /es
volumes:
- name: storage
You have the wrong structure: volumes must be at the same level as containers and initContainers.
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
name: es-data
labels:
component: elasticsearch
role: data
spec:
serviceName: elasticsearch-data
replicas: 1
template:
metadata:
labels:
component: elasticsearch
role: data
spec:
initContainers:
- name: init-sysctl
image: alpine:3.6
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- name: es-data
image: quay.io/pires/docker-elasticsearch-kubernetes:6.3.0
env:
- name: NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: myesdb
- name: NODE_MASTER
value: "false"
- name: NODE_INGEST
value: "false"
- name: HTTP_ENABLE
value: "true"
- name: ES_JAVA_OPTS
value: -Xms256m -Xmx256m
- name: PROCESSORS
valueFrom:
resourceFieldRef:
resource: limits.cpu
resources:
requests:
cpu: 0.25
limits:
cpu: 1
ports:
- containerPort: 9200
name: http
- containerPort: 9300
name: transport
livenessProbe:
tcpSocket:
port: transport
initialDelaySeconds: 20
periodSeconds: 10
readinessProbe:
httpGet:
path: /_cluster/health
port: http
initialDelaySeconds: 20
timeoutSeconds: 5
volumeMounts:
- name: storage
mountPath: /es
volumes:
- name: storage
You can find an example here.
Check your format: hostPath is not supposed to be under the containers part, and volumes is not in its correct position.
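For completeness, a minimal sketch of what the relocated volumes block could look like once it is backed by a hostPath (the path is a placeholder, not taken from the question):
volumes:
- name: storage
  hostPath:
    path: /data/elasticsearch    # placeholder node-local path
    type: DirectoryOrCreate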
