I'm using the Elastic Stack toolchain in version 7.17.0.
I'd like to set up ILM together with an index template that has a customised name.
However, the documentation says:
"If index lifecycle management is enabled (which is typically the default), setup.template.name and setup.template.pattern are ignored."
So it seems like it's not possible.
Now the questions:
Is it OK to set up a custom template name (with a custom setup) when ILM is or was enabled?
Is it OK to run two setup files in Filebeat? (e.g. filebeat setup --index-management --dashboards -c setup-ilm.yml && filebeat setup --index-management --dashboards -c setup-template.yml)
Can I put those setup files somewhere in the Filebeat Docker image so they are executed automatically? I've only seen folders for modules and inputs in the image.
When I executed those setup files above, I saw the following:
Loading ILM policy and write alias without loading template is not recommended. Check your configuration.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
ILM policy and write alias loading not enabled.
Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
setup-ilm.yml
setup:
  ilm:
    enabled: true
    policy_file: "ilm-policy.json"
  template:
    enabled: false
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
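(Side note for readers unfamiliar with the format: policy_file points at an ILM policy written as JSON. The snippet below is a generic, minimal example of what such a file looks like, not the actual ilm-policy.json used here; the rollover and retention values are placeholders.)
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_size": "50gb", "max_age": "30d" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}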
setup-template.yml
setup:
  ilm:
    enabled: false
  template:
    enabled: true
    name: "${ES_NAMESPACE:+${ES_NAMESPACE}-}filebeat-%{[agent.version]}"
    pattern: "${ES_NAMESPACE:+${ES_NAMESPACE}-}filebeat-%{[agent.version]}-*"
  kibana:
    host: "kibana:5601"
  index:
    number_of_shards: 1
    mapping:
      total_fields:
        limit: 5000
output.elasticsearch:
  hosts: ["elasticsearch:9200"]
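On question 3: as far as I know the stock Filebeat Docker image has no hook that runs extra setup configs automatically, but a common workaround is to override the container command or entrypoint with a small wrapper that chains the two setup passes before the normal start. A sketch only, where filebeat.yml stands for the actual runtime config:
filebeat setup --index-management -c setup-ilm.yml
filebeat setup --index-management --dashboards -c setup-template.yml
exec filebeat -e -c filebeat.yml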
This is how I got a custom index name plus a lifecycle policy. I'm using the Elastic operator from ECK, so if that's not your case the details may differ. Assuming ECK and Elasticsearch are already installed, this is my Beat YAML file:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: filebeat
spec:
type: filebeat
version: 8.6.0
config:
filebeat:
autodiscover:
providers:
- type: kubernetes
node: ${NODE_NAME}
hints:
enabled: true
default_config:
type: container
paths:
- /var/log/containers/*${data.kubernetes.container.id}.log
output:
elasticsearch:
index: "custom-name-%{[agent.version]}-%{+yyyy.MM.dd}"
hosts: [ "elastic-host:9200" ]
username: "filebeat_user"
password: "password" # pending to load from secret
ssl:
verification_mode: "none"
setup:
template:
name: "filebeat"
pattern: "*-filebeat-*"
processors:
- add_cloud_metadata: {}
- add_host_metadata: {}
daemonSet:
podTemplate:
spec:
serviceAccountName: filebeat
automountServiceAccountToken: true
terminationGracePeriodSeconds: 30
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true # Allows to provide richer host metadata
containers:
- name: filebeat
securityContext:
runAsUser: 0
volumeMounts:
- name: varlogcontainers
mountPath: /var/log/containers
- name: varlogpods
mountPath: /var/log/pods
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
volumes:
- name: varlogcontainers
hostPath:
path: /var/log/containers
- name: varlogpods
hostPath:
path: /var/log/pods
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
- nodes
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
As I understand it, this creates an index template called filebeat that matches the pattern *-filebeat-*, plus a lifecycle policy for that index, also called filebeat. This is the important part:
output:
  elasticsearch:
    index: "custom-name-%{[agent.version]}-%{+yyyy.MM.dd}"
    hosts: [ "elastic-host:9200" ]
    username: "filebeat_user"
    password: "password" # pending to load from secret
    ssl:
      verification_mode: "none"
setup:
  template:
    name: "filebeat"
    pattern: "*-filebeat-*"
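Once the setup has run, you can verify what was created; assuming the names above, and adjusting host, credentials and TLS flags to your environment, something like:
curl -k -u filebeat_user:password "https://elastic-host:9200/_index_template/filebeat"
curl -k -u filebeat_user:password "https://elastic-host:9200/_ilm/policy/filebeat"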
I hope this helps, because this is poorly documented; they even have an open issue to improve the documentation: https://github.com/elastic/beats/issues/11866
Related
I am trying to send the logs from my AKS cluster into Elasticsearch. The log that I am getting is "event.agent_id_status: auth_metadata_missing" in my Kibana, even though all the volume mounts are done correctly.
Here's my ConfigMap for the standalone Elastic Agent:
apiVersion: v1
kind: ConfigMap
metadata:
name: agent-node-datastreams
namespace: elastic
labels:
k8s-app: elastic-agent-standalone
data:
agent.yml: |-
outputs:
default:
type: elasticsearch
protocol: https
ssl.verification_mode: 'none'
allow_older_versions: true
hosts:
- >-
${ES_HOST}
username: ${ES_USERNAME}
password: ${ES_PASSWORD}
indices:
- index: "journalbeat-alias"
when:
and:
- has_fields: ['fields.k8s.component']
- equals:
fields.k8s.component: "journal"
- index: "test-audit"
when:
and:
- has_fields: ['fields.k8s.component']
- equals:
fields.k8s.component: "audit"
- index: "filebeat-%{[beat.version]}-%{+yyyy.MM.dd}"
when.not:
has_fields: ['kubernetes.namespace']
agent:
monitoring:
enabled: true
use_output: default
logs: true
metrics: false
providers.kubernetes:
node: ${NODE_NAME}
scope: node
#Uncomment to enable hints' support
#hints.enabled: true
inputs:
- name: system-logs
type: logfile
use_output: default
meta:
package:
name: system
version: 0.10.7
data_stream:
namespace: filebeat
streams:
- data_stream:
dataset: audit
type: logfile
paths:
- /var/log/*.log
exclude_files:
- .gz$
multiline:
pattern: ^\s
match: after
processors:
- add_fields:
target: ''
fields:
ecs.version: 1.12.0
- data_stream:
dataset: journald
type: logfile
paths:
- /var/lib/host/log/journal
exclude_files:
- .gz$
multiline:
pattern: ^\s
match: after
processors:
- add_fields:
target: ''
fields:
ecs.version: 1.12.0
- data_stream:
dataset: container
type: logfile
paths:
- /var/log/containers/*.log
exclude_files:
- .gz$
multiline:
pattern: ^\s
match: after
processors:
- add_fields:
target: ''
fields:
ecs.version: 1.12.0
And here's my DaemonSet file:
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: elastic-agent-standalone
namespace: elastic
labels:
app: elastic-agent-standalone
spec:
selector:
matchLabels:
app: elastic-agent-standalone
template:
metadata:
labels:
app: elastic-agent-standalone
spec:
# Tolerations are needed to run Elastic Agent on Kubernetes control-plane nodes.
# Agents running on control-plane nodes collect metrics from the control plane components (scheduler, controller manager) of Kubernetes
tolerations:
- key: node-role.kubernetes.io/control-plane
effect: NoSchedule
- key: node-role.kubernetes.io/master
effect: NoSchedule
serviceAccountName: elastic-agent-standalone
hostNetwork: true
dnsPolicy: ClusterFirstWithHostNet
containers:
- name: elastic-agent-standalone
image: docker.elastic.co/beats/elastic-agent:8.4.3
args: [
"-c", "/etc/elastic-agent/agent.yml",
"-e",
]
env:
# The basic authentication username used to connect to Elasticsearch
# This user needs the privileges required to publish events to Elasticsearch.
- name: FLEET_ENROLL_INSECURE
value: "1"
- name: ES_USERNAME
value: <CORRECT USER>
# The basic authentication password used to connect to Elasticsearch
- name: ES_PASSWORD
value: <MY CORRECT PASSWORD>
# The Elasticsearch host to communicate with
- name: ES_HOST
value: <CORRECT HOST>
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: STATE_PATH
value: "/etc/elastic-agent"
securityContext:
runAsUser: 0
resources:
limits:
memory: 700Mi
requests:
cpu: 100m
memory: 400Mi
volumeMounts:
- name: datastreams
mountPath: /etc/elastic-agent/agent.yml
readOnly: true
subPath: agent.yml
- name: proc
mountPath: /hostfs/proc
readOnly: true
- name: cgroup
mountPath: /hostfs/sys/fs/cgroup
readOnly: true
- name: varlogcontainers
mountPath: /var/log/containers
readOnly: true
- name: varlogpods
mountPath: /var/log/pods
readOnly: true
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
readOnly: true
- name: kubenodevarlogs
mountPath: /var/lib/host/log
readOnly: true
# - name: varlog
# mountPath: /var/log
# readOnly: true
- name: etc-full
mountPath: /hostfs/etc
readOnly: true
- name: var-lib
mountPath: /hostfs/var/lib
readOnly: true
volumes:
- name: datastreams
configMap:
defaultMode: 0640
name: agent-node-datastreams
- name: proc
hostPath:
path: /proc
- name: cgroup
hostPath:
path: /sys/fs/cgroup
- name: varlogcontainers
hostPath:
path: /var/log/containers
- name: varlogpods
hostPath:
path: /var/log/pods
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: kubenodevarlogs
hostPath:
path: /var/log
# The following volumes are needed for Cloud Security Posture integration (cloudbeat)
# If you are not using this integration, then these volumes and the corresponding
# mounts can be removed.
- name: etc-full
hostPath:
path: /etc
- name: var-lib
hostPath:
path: /var/lib
And this is the log that I get in Kibana for the path /var/log/containers, but it's the same for all the other inputs, and no data streams are being generated for any of the paths.
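One thing worth checking first is whether the agent itself is reporting errors while shipping; with the names from the manifests above (namespace elastic, DaemonSet elastic-agent-standalone) that would be, for example:
kubectl logs -n elastic daemonset/elastic-agent-standalone --tail=100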
I'm trying to set up an Elasticsearch cluster on Kubernetes. The problem I am having is that the pods aren't able to talk to each other by their respective hostnames, although the IP addresses work.
For example, I'm currently trying to set up 3 master nodes, es-master-0, es-master-1 and es-master-2. If I log into one of the containers and ping another by its pod IP it's fine, but if I try to ping, say, es-master-1 from es-master-0 by hostname, it can't find it.
I'm clearly missing something here. I'm currently launching this config to try to get it working:
apiVersion: v1
kind: Service
metadata:
name: ed
labels:
component: elasticsearch
role: master
spec:
selector:
component: elasticsearch
role: master
ports:
- name: transport1
port: 9300
protocol: TCP
clusterIP: None
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: es-master
labels:
component: elasticsearch
role: master
spec:
selector:
matchLabels:
component: elasticsearch
role: master
serviceName: ed
replicas: 3
template:
metadata:
labels:
component: elasticsearch
role: master
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- { key: es-master, operator: In, values: [ "true" ] }
initContainers:
- name: init-sysctl
image: busybox:1.27.2
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
dnsPolicy: "None"
dnsConfig:
options:
- name: ndots
value: "6"
nameservers:
- 10.85.0.10
searches:
- ed.es.svc.cluster.local
- es.svc.cluster.local
- svc.cluster.local
- cluster.local
- home
- node1
containers:
- name: es-master
image: docker.elastic.co/elasticsearch/elasticsearch:7.17.5
imagePullPolicy: Always
securityContext:
privileged: true
env:
- name: ES_JAVA_OPTS
value: -Xms2048m -Xmx2048m
resources:
requests:
cpu: "0.25"
limits:
cpu: "2"
ports:
- containerPort: 9300
name: transport1
livenessProbe:
tcpSocket:
port: transport1
initialDelaySeconds: 60
periodSeconds: 10
volumeMounts:
- name: storage
mountPath: /data
- name: config
mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
subPath: elasticsearch.yml
volumes:
- name: config
configMap:
name: es-master-config
volumeClaimTemplates:
- metadata:
name: storage
spec:
storageClassName: "local-path"
accessModes: [ ReadWriteOnce ]
resources:
requests:
storage: 2Gi
It's clearly somehow not resolving the hostnames.
For pod-to-pod communication you can use the headless Kubernetes Service you have defined: a StatefulSet pod is reachable by DNS as <pod-name>.<service-name>.<namespace>.svc.cluster.local through the governing Service named by serviceName, not by its bare pod name.
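For example, with the headless Service ed defined above and the namespace es implied by the dnsConfig searches, from es-master-0 you would reach a sibling as:
ping es-master-1.ed.es.svc.cluster.local
The discovery settings in elasticsearch.yml can then reference the same names; a sketch only, since the es-master-config ConfigMap isn't shown:
discovery.seed_hosts:
  - es-master-0.ed.es.svc.cluster.local
  - es-master-1.ed.es.svc.cluster.local
  - es-master-2.ed.es.svc.cluster.local
cluster.initial_master_nodes:
  - es-master-0
  - es-master-1
  - es-master-2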
I have a Kubernetes cluster running on minikube from which I would like to collect metrics via Elasticsearch Metricbeat. Almost all the metrics show up except CPU usage, which is always 0, even when I'm running a pod with high CPU usage. Here are my YAMLs; what could I possibly be doing wrong?
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
name: metricbeat
namespace: default
spec:
type: metricbeat
version: 8.1.0
elasticsearchRef:
name: elasticsearch
kibanaRef:
name: kibana
config:
setup:
kibana:
path: "/kibana"
metricbeat:
autodiscover:
providers:
- hints:
default_config: {}
enabled: "true"
node: ${NODE_NAME}
type: kubernetes
modules:
- module: system
period: 10s
metricsets:
- cpu
- load
- memory
- network
- process
- process_summary
process:
include_top_n:
by_cpu: 5
by_memory: 5
processes:
- .*
- module: system
period: 1m
metricsets:
- filesystem
- fsstat
processors:
- drop_event:
when:
regexp:
system:
filesystem:
mount_point: ^/(sys|cgroup|proc|dev|etc|host|lib)($|/)
- module: kubernetes
period: 10s
node: ${NODE_NAME}
hosts:
- https://${NODE_NAME}:10250
bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
ssl:
verification_mode: none
metricsets:
- node
- system
- pod
- container
- volume
processors:
- add_cloud_metadata: {}
- add_host_metadata: {}
daemonSet:
podTemplate:
spec:
serviceAccountName: metricbeat
automountServiceAccountToken: true
containers:
- args:
- -e
- -c
- /etc/beat.yml
- -system.hostfs=/hostfs
name: metricbeat
volumeMounts:
- mountPath: /hostfs/sys/fs/cgroup
name: cgroup
- mountPath: /var/run/docker.sock
name: dockersock
- mountPath: /hostfs/proc
name: proc
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
dnsPolicy: ClusterFirstWithHostNet
hostNetwork: true
securityContext:
runAsUser: 0
terminationGracePeriodSeconds: 30
volumes:
- hostPath:
path: /sys/fs/cgroup
name: cgroup
- hostPath:
path: /var/run/docker.sock
name: dockersock
- hostPath:
path: /proc
name: proc
And here is the ClusterRole:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: metricbeat
namespace: default
rules:
- apiGroups:
- ""
resources:
- nodes
- namespaces
- events
- pods
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- apps
resources:
- statefulsets
- deployments
- replicasets
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/stats
verbs:
- get
- nonResourceURLs:
- /metrics
verbs:
- get
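Given that the container is started with -system.hostfs=/hostfs and the host's /proc is mounted at /hostfs/proc, one quick sanity check is to confirm that mount is actually visible inside a running Metricbeat pod (the pod name is a placeholder):
kubectl exec -it <metricbeat-pod> -- head -2 /hostfs/proc/stat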
I have an Elasticsearch cluster (6.3) running on Kubernetes (GKE) with the following manifest file:
---
# Source: elasticsearch/templates/manifests.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: elasticsearch-configmap
labels:
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
data:
elasticsearch.yml: |
cluster.name: "${CLUSTER_NAME}"
node.name: "${NODE_NAME}"
path.data: /usr/share/elasticsearch/data
path.repo: ["${BACKUP_REPO_PATH}"]
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
discovery.zen.ping.unicast.hosts: ${DISCOVERY_SERVICE}
log4j2.properties: |
status = error
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}] %marker%m%n
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: elasticsearch
labels: &ElasticsearchDeploymentLabels
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
spec:
selector:
matchLabels: *ElasticsearchDeploymentLabels
serviceName: elasticsearch-svc
replicas: 2
updateStrategy:
# The procedure for updating the Elasticsearch cluster is described at
# https://www.elastic.co/guide/en/elasticsearch/reference/current/rolling-upgrades.html
type: OnDelete
template:
metadata:
labels: *ElasticsearchDeploymentLabels
spec:
terminationGracePeriodSeconds: 180
initContainers:
# This init container sets the appropriate limits for mmap counts on the hosting node.
# https://www.elastic.co/guide/en/elasticsearch/reference/current/vm-max-map-count.html
- name: set-max-map-count
image: marketplace.gcr.io/google/elasticsearch/ubuntu16_04#...
imagePullPolicy: IfNotPresent
securityContext:
privileged: true
command:
- /bin/bash
- -c
- 'if [[ "$(sysctl vm.max_map_count --values)" -lt 262144 ]]; then sysctl -w vm.max_map_count=262144; fi'
containers:
- name: elasticsearch
image: eu.gcr.io/projectId/elasticsearch6.3#sha256:...
imagePullPolicy: Always
env:
- name: NODE_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: CLUSTER_NAME
value: "elasticsearch-cluster"
- name: DISCOVERY_SERVICE
value: "elasticsearch-svc"
- name: BACKUP_REPO_PATH
value: ""
ports:
- name: prometheus
containerPort: 9114
protocol: TCP
- name: http
containerPort: 9200
- name: tcp-transport
containerPort: 9300
volumeMounts:
- name: configmap
mountPath: /etc/elasticsearch/elasticsearch.yml
subPath: elasticsearch.yml
- name: configmap
mountPath: /etc/elasticsearch/log4j2.properties
subPath: log4j2.properties
- name: elasticsearch-pvc
mountPath: /usr/share/elasticsearch/data
readinessProbe:
httpGet:
path: /_cluster/health?local=true
port: 9200
initialDelaySeconds: 5
livenessProbe:
exec:
command:
- /usr/bin/pgrep
- -x
- "java"
initialDelaySeconds: 5
resources:
requests:
memory: "2Gi"
- name: prometheus-to-sd
image: marketplace.gcr.io/google/elasticsearch/prometheus-to-sd#sha256:8e3679a6e059d1806daae335ab08b304fd1d8d35cdff457baded7306b5af9ba5
ports:
- name: profiler
containerPort: 6060
command:
- /monitor
- --stackdriver-prefix=custom.googleapis.com
- --source=elasticsearch:http://localhost:9114/metrics
- --pod-id=$(POD_NAME)
- --namespace-id=$(POD_NAMESPACE)
- --monitored-resource-types=k8s
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
volumes:
- name: configmap
configMap:
name: "elasticsearch-configmap"
volumeClaimTemplates:
- metadata:
name: elasticsearch-pvc
labels:
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
spec:
accessModes: ["ReadWriteOnce"]
storageClassName: standard
resources:
requests:
storage: 50Gi
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-prometheus-svc
labels:
app.kubernetes.io/name: elasticsearch
app.kubernetes.io/component: elasticsearch-server
spec:
clusterIP: None
ports:
- name: prometheus-port
port: 9114
protocol: TCP
selector:
app.kubernetes.io/name: elasticsearch
app.kubernetes.io/component: elasticsearch-server
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch-svc-internal
labels:
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
spec:
ports:
- name: http
port: 9200
- name: tcp-transport
port: 9300
selector:
app.kubernetes.io/name: "elasticsearch"
app.kubernetes.io/component: elasticsearch-server
type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
name: ilb-service-elastic
annotations:
cloud.google.com/load-balancer-type: "Internal"
labels:
app: elasticsearch-svc
spec:
type: LoadBalancer
loadBalancerIP: some-ip-address
selector:
app.kubernetes.io/component: elasticsearch-server
app.kubernetes.io/name: elasticsearch
ports:
- port: 9200
protocol: TCP
This manifest was written from the template that used to be available on the GCP marketplace.
I'm encountering the following issue: the cluster is supposed to have 2 nodes, and indeed 2 pods are running.
However:
a call to ip:9200/_nodes returns just one node
there still seems to be a second node running that receives traffic (at least read traffic), as visible in the logs. Those requests typically fail because the requested entities don't exist on that node (only on the master node).
I can't wrap my head around the fact that this node is not visible to the master node, yet at the same time receives read traffic from the load balancer pointing to the StatefulSet.
Am I missing something subtle?
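(For reference, one quick way to see which nodes have actually joined the cluster, using the internal Service defined in the manifest; add credentials if security is enabled:)
curl "http://elasticsearch-svc-internal:9200/_cat/nodes?v"
curl "http://elasticsearch-svc-internal:9200/_cluster/health?pretty"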
Did you try checking which node types both nodes are?
There are master nodes and data nodes; at any given time only one master is elected while the others stay in the background, and if the current master node goes down a new one gets elected and handles further requests.
I can't see any node-type configuration in your StatefulSet. I would recommend checking out the Elasticsearch Helm chart to set up and deploy on GKE.
Helm chart: https://github.com/elastic/helm-charts/tree/main/elasticsearch
Sharing an example env config for reference:
env:
  - name: NAMESPACE
    valueFrom:
      fieldRef:
        fieldPath: metadata.namespace
  - name: NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: CLUSTER_NAME
    value: my-es
  - name: NODE_MASTER
    value: "false"
  - name: NODE_INGEST
    value: "false"
  - name: HTTP_ENABLE
    value: "false"
  - name: ES_JAVA_OPTS
    value: -Xms256m -Xmx256m
Read more at: https://faun.pub/https-medium-com-thakur-vaibhav23-ha-es-k8s-7e655c1b7b61
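Those NODE_MASTER / NODE_INGEST variables only take effect if the image maps them into Elasticsearch settings; in plain Elasticsearch 6.x terms, the equivalent node-role settings in elasticsearch.yml for a dedicated data node would be (a sketch, not taken from the question's manifest):
node.master: false
node.ingest: false
node.data: true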
I'm trying to deploy Packetbeat as a DaemonSet on a Kubernetes cluster, but Kubernetes is giving a CrashLoopBackOff error while running Packetbeat. I have checked the pod logs of Packetbeat; below are the logs.
2020-08-23T14:28:00.054Z INFO instance/beat.go:475 Beat UUID: 69d32e5f-c8f2-41bf-9242-48435688c540
2020-08-23T14:28:00.054Z INFO instance/beat.go:213 Setup Beat: packetbeat; Version: 6.2.4
2020-08-23T14:28:00.061Z INFO add_cloud_metadata/add_cloud_metadata.go:301 add_cloud_metadata: hosting provider type detected as ec2, metadata={"availability_zone":"us-east-1f","instance_id":"i-05b8121af85c94236","machine_type":"t2.medium","provider":"ec2","region":"us-east-1"}
2020-08-23T14:28:00.061Z INFO kubernetes/watcher.go:77 kubernetes: Performing a pod sync
2020-08-23T14:28:00.074Z INFO kubernetes/watcher.go:108 kubernetes: Pod sync done
2020-08-23T14:28:00.074Z INFO elasticsearch/client.go:145 Elasticsearch url: http://elasticsearch:9200
2020-08-23T14:28:00.074Z INFO kubernetes/watcher.go:140 kubernetes: Watching API for pod events
2020-08-23T14:28:00.074Z INFO pipeline/module.go:76 Beat name: ip-172-31-72-117
2020-08-23T14:28:00.075Z INFO procs/procs.go:78 Process matching disabled
2020-08-23T14:28:00.076Z INFO [monitoring] log/log.go:97 Starting metrics logging every 30s
2020-08-23T14:28:00.076Z INFO elasticsearch/client.go:145 Elasticsearch url: http://elasticsearch:9200
2020-08-23T14:28:00.083Z WARN transport/tcp.go:36 DNS lookup failure "elasticsearch": lookup elasticsearch on 172.31.0.2:53: no such host
2020-08-23T14:28:00.083Z ERROR elasticsearch/elasticsearch.go:165 Error connecting to Elasticsearch at http://elasticsearch:9200: Get http://elasticsearch:9200: lookup elasticsearch on 172.31.0.2:53: no such host
2020-08-23T14:28:00.085Z INFO [monitoring] log/log.go:132 Total non-zero metrics {"monitoring": {"metrics": {"beat":{"cpu":{"system":{"ticks":20,"time":28},"total":{"ticks":160,"time":176,"value":160},"user":{"ticks":140,"time":148}},"info":{"ephemeral_id":"70e07383-3aae-4bc1-a6e1-540a6cfa8ad8","uptime":{"ms":35}},"memstats":{"gc_next":26511344,"memory_alloc":21723000,"memory_total":23319008,"rss":51834880}},"libbeat":{"config":{"module":{"running":0}},"output":{"type":"elasticsearch"},"pipeline":{"clients":5,"events":{"active":0}}},"system":{"cpu":{"cores":2},"load":{"1":0.11,"15":0.1,"5":0.14,"norm":{"1":0.055,"15":0.05,"5":0.07}}}}}}
2020-08-23T14:28:00.085Z INFO [monitoring] log/log.go:133 Uptime: 37.596889ms
2020-08-23T14:28:00.085Z INFO [monitoring] log/log.go:110 Stopping metrics logging.
2020-08-23T14:28:00.085Z ERROR instance/beat.go:667 Exiting: Error importing Kibana dashboards: fail to create the Elasticsearch loader: Error creating Elasticsearch client: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://elasticsearch:9200: Get http://elasticsearch:9200: lookup elasticsearch on 172.31.0.2:53: no such host]
Exiting: Error importing Kibana dashboards: fail to create the Elasticsearch loader: Error creating Elasticsearch client: Couldn't connect to any of the configured Elasticsearch hosts. Errors: [Error connection to Elasticsearch http://elasticsearch:9200: Get http://elasticsearch:9200: lookup elastic search on 172.31.0.2:53: no such host]
Here is Packetbeat.yaml
apiVersion: v1
kind: ConfigMap
metadata:
name: packetbeat-dynamic-config
namespace: kube-system
labels:
k8s-app: packetbeat-dynamic
kubernetes.io/cluster-service: "true"
data:
packetbeat.yml: |-
setup.dashboards.enabled: true
setup.template.enabled: true
setup.template.settings:
index.number_of_shards: 2
packetbeat.interfaces.device: any
packetbeat.protocols:
- type: dns
ports: [53]
include_authorities: true
include_additionals: true
- type: http
ports: [80, 8000, 8080, 9200]
- type: mysql
ports: [3306]
- type: redis
ports: [6379]
packetbeat.flows:
timeout: 30s
period: 10s
processors:
- add_cloud_metadata:
- add_kubernetes_metadata:
host: ${HOSTNAME}
indexers:
- ip_port:
matchers:
- field_format:
format: '%{[ip]}:%{[port]}'
cloud.id: ${ELASTIC_CLOUD_ID}
cloud.auth: ${ELASTIC_CLOUD_AUTH}
#setup.kibana.host: kibana:5601
setup.ilm.overwrite: true
output.elasticsearch:
hosts: ['${ELASTICSEARCH_HOST:elasticsearch}:${ELASTICSEARCH_PORT:9200}']
username: ${ELASTICSEARCH_USERNAME}
password: ${ELASTICSEARCH_PASSWORD}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: packetbeat-dynamic
namespace: kube-system
labels:
k8s-app: packetbeat-dynamic
kubernetes.io/cluster-service: "true"
spec:
selector:
matchLabels:
k8s-app: packetbeat-dynamic
kubernetes.io/cluster-service: "true"
template:
metadata:
labels:
k8s-app: packetbeat-dynamic
kubernetes.io/cluster-service: "true"
spec:
serviceAccountName: packetbeat-dynamic
terminationGracePeriodSeconds: 30
hostNetwork: true
containers:
- name: packetbeat-dynamic
image: docker.elastic.co/beats/packetbeat:6.2.4
imagePullPolicy: Always
args: [
"-c", "/etc/packetbeat.yml",
"-e",
]
securityContext:
runAsUser: 0
capabilities:
add:
- NET_ADMIN
env:
- name: ELASTICSEARCH_HOST
value: elasticsearch
- name: ELASTICSEARCH_PORT
value: "9200"
- name: ELASTICSEARCH_USERNAME
value: elastic
- name: ELASTICSEARCH_PASSWORD
value: changeme
- name: CLOUD_ID
value:
- name: ELASTIC_CLOUD_AUTH
value:
- name: KIBANA_HOST
value: kibana
- name: KIBANA_PORT
value: "5601"
volumeMounts:
- name: config
mountPath: /etc/packetbeat.yml
readOnly: true
subPath: packetbeat.yml
- name: data
mountPath: /usr/share/packetbeat/data
volumes:
- name: config
configMap:
defaultMode: 0600
name: packetbeat-dynamic-config
- name: data
emptyDir: {}
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: packetbeat-dynamic
subjects:
- kind: ServiceAccount
name: packetbeat-dynamic
namespace: kube-system
roleRef:
kind: ClusterRole
name: packetbeat-dynamic
apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: packetbeat-dynamic
labels:
k8s-app: packetbeat-dynamic
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: packetbeat-dynamic
namespace: kube-system
labels:
k8s-app: packetbeat-dynamic
Could anyone suggest how to resolve this issue? Any helpful link would also be appreciated.
kubectl describe daemonset packetbeat-dynamic -n kube-system
Name: packetbeat-dynamic
Selector: k8s-app=packetbeat-dynamic,kubernetes.io/cluster-service=true
Node-Selector: <none>
Labels: k8s-app=packetbeat-dynamic
kubernetes.io/cluster-service=true
Annotations: deprecated.daemonset.template.generation: 1
Desired Number of Nodes Scheduled: 1
Current Number of Nodes Scheduled: 1
Number of Nodes Scheduled with Up-to-date Pods: 1
Number of Nodes Scheduled with Available Pods: 0
Number of Nodes Misscheduled: 1
Pods Status: 2 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: k8s-app=packetbeat-dynamic
kubernetes.io/cluster-service=true
Service Account: packetbeat-dynamic
Containers:
packetbeat-dynamic:
Image: docker.elastic.co/beats/packetbeat:6.2.4
Port: <none>
Host Port: <none>
Args:
-c
/etc/packetbeat.yml
-e
Environment:
ELASTICSEARCH_HOST: elasticsearch
ELASTICSEARCH_PORT: 9200
ELASTICSEARCH_USERNAME: elastic
ELASTICSEARCH_PASSWORD: changeme
CLOUD_ID:
ELASTIC_CLOUD_AUTH:
KIBANA_HOST: kibana
KIBANA_PORT: 5601
Mounts:
/etc/packetbeat.yml from config (ro,path="packetbeat.yml")
/usr/share/packetbeat/data from data (rw)
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: packetbeat-dynamic-config
Optional: false
data:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
Events: <none>
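The ERROR lines in the logs show the hostname elasticsearch failing DNS resolution ("lookup elasticsearch on 172.31.0.2:53: no such host"); 172.31.0.2 looks like the node/VPC resolver rather than the cluster DNS, which would be consistent with the pod running with hostNetwork: true and no dnsPolicy: ClusterFirstWithHostNet set. A generic way to check what the cluster DNS can actually resolve is the standard busybox DNS test; the namespace default is an assumption, use the namespace where your Elasticsearch Service lives:
kubectl run -it --rm dns-test --image=busybox:1.28 --restart=Never -- nslookup elasticsearch.default.svc.cluster.local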