Cannot access Kibana dashboard - elasticsearch

I am trying to deploy Kibana in my Kubernetes cluster which is on AWS. To access the Kibana dashboard I have created an ingress which is mapped to xyz.com. Here is my Kibana deployment file.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kibana
  labels:
    component: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
      - name: kibana
        image: docker.elastic.co/kibana/kibana-oss:6.3.2
        env:
        - name: CLUSTER_NAME
          value: myesdb
        - name: SERVER_BASEPATH
          value: /
        resources:
          limits:
            cpu: 1000m
          requests:
            cpu: 100m
        ports:
        - containerPort: 5601
          name: http
        readinessProbe:
          httpGet:
            path: /api/status
            port: http
          initialDelaySeconds: 20
          timeoutSeconds: 5
        volumeMounts:
        - name: config
          mountPath: /usr/share/kibana/config
          readOnly: true
      volumes:
      - name: config
        configMap:
          name: kibana-config
Whenever I deploy it, it gives me the following error. What should my SERVER_BASEPATH be in order for it to work? I know it defaults to /app/kibana.
FATAL { ValidationError: child "server" fails because [child "basePath" fails because ["basePath" with value "/" fails to match the start with a slash, don't end with one pattern]]
at Object.exports.process (/usr/share/kibana/node_modules/joi/lib/errors.js:181:19)
at internals.Object._validateWithOptions (/usr/share/kibana/node_modules/joi/lib/any.js:651:31)
at module.exports.internals.Any.root.validate (/usr/share/kibana/node_modules/joi/lib/index.js:121:23)
at Config._commit (/usr/share/kibana/src/server/config/config.js:119:35)
at Config.set (/usr/share/kibana/src/server/config/config.js:89:10)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:62:10)
at _lodash2.default.each.child (/usr/share/kibana/src/server/config/config.js:51:14)
at arrayEach (/usr/share/kibana/node_modules/lodash/index.js:1289:13)
at Function.<anonymous> (/usr/share/kibana/node_modules/lodash/index.js:3345:13)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:50:31)
at new Config (/usr/share/kibana/src/server/config/config.js:41:10)
at Function.withDefaultSchema (/usr/share/kibana/src/server/config/config.js:34:12)
at KbnServer.exports.default (/usr/share/kibana/src/server/config/setup.js:9:37)
at KbnServer.mixin (/usr/share/kibana/src/server/kbn_server.js:136:16)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
isJoi: true,
name: 'ValidationError',
details:
[ { message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern',
path: 'server.basePath',
type: 'string.regex.name',
context: [Object] } ],
_object:
{ pkg:
{ version: '6.3.2',
branch: '6.3',
buildNum: 17307,
buildSha: '53d0c6758ac3fb38a3a1df198c1d4c87765e63f7' },
dev: { basePathProxyTarget: 5603 },
pid: { exclusive: false },
cpu: { cgroup: [Object] },
cpuacct: { cgroup: [Object] },
server: { name: 'kibana', host: '0', basePath: '/' } },
annotate: [Function] }
I followed this guide: https://github.com/pires/kubernetes-elasticsearch-cluster
Any idea what might be the issue?

I believe the example config in the official Kibana repository gives a hint about the cause of this problem. Here's the server.basePath setting:
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
Since server.basePath cannot end in a slash, Kibana presumably treats your bare "/" as a value that ends in a slash. I haven't dug deeper into this, though.
This error message is interesting:
message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern'
So the error message complements the documentation: the value must start with a slash but must not end with one, which a bare "/" cannot satisfy.
I reproduced this in minikube using your Deployment manifest (with the volume-mount parts at the end removed). Changing SERVER_BASEPATH to /<SOMETHING> works fine, so you basically just need to set a proper base path, as in the sketch below.
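For example, a minimal sketch of the container env block with an illustrative base path; /kibana is only a placeholder and must match the path your ingress forwards to Kibana:
env:
- name: CLUSTER_NAME
  value: myesdb
# Must start with a slash and must not end with one; "/kibana" is only an example.
- name: SERVER_BASEPATH
  value: /kibana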

Related

POST https://apm.<acme>.com/intake/v2/rum/events net::ERR_BLOCKED_BY_CLIENT

I have an elastic stack on kubernetes (k8s) using ECK.
Kibana version:
7.13.2
Elasticsearch version:
7.13.2
APM Server version:
7.13.2
APM Agent language and version:
https://www.npmjs.com/package/@elastic/apm-rum - 5.9.1
Browser version:
Chrome latest
Description of the problem
The frontend apm-rum agent fails to send messages to the APM server. If I disable CORS in the browser it works: google-chrome --disable-web-security --user-data-dir=temp, then navigate to my frontend at http://localhost:4201/
[Elastic APM] Failed sending events! Error: https://apm.<redacted>.com/intake/v2/rum/events HTTP status: 0
at ApmServer._constructError (apm-server.js:120)
at eval (apm-server.js:48)
POST https://apm.<acme>.com/intake/v2/rum/events net::ERR_BLOCKED_BY_CLIENT
Code:
apm.yml
apiVersion: apm.k8s.elastic.co/v1
kind: ApmServer
metadata:
  name: apm-server-prod
  namespace: elastic-system
spec:
  version: 7.13.2
  count: 1
  elasticsearchRef:
    name: "elasticsearch-prod"
  kibanaRef:
    name: "kibana-prod"
  http:
    service:
      spec:
        type: NodePort
  config:
    apm-server:
      rum.enabled: true
      ilm.enabled: true
elastic.ingress.yml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: elastic-ingress
  namespace: elastic-system
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/group.name: "<redacted>"
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP":80,"HTTPS": 443}]'
    alb.ingress.kubernetes.io/backend-protocol: 'HTTPS'
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-east-2:<rd>:certificate/0250a551-8971-468d-a483-cad28f890463
    alb.ingress.kubernetes.io/tags: Environment=prod,Team=dev
    alb.ingress.kubernetes.io/healthcheck-path: /health
    alb.ingress.kubernetes.io/healthcheck-interval-seconds: '300'
    alb.ingress.kubernetes.io/load-balancer-attributes: access_logs.s3.enabled=true,access_logs.s3.bucket=<redacted>-aws-ingress-logs,access_logs.s3.prefix=dev-ingress
spec:
  rules:
  - host: elasticsearch.<redacted>.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: elasticsearch-prod-es-http
            port:
              number: 9200
  - host: kibana.<redacted>.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kibana-prod-kb-http
            port:
              number: 5601
  - host: apm.<redacted>.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: apm-server-prod-apm-http
            port:
              number: 8200
frontend.js
import { init as initApm } from "@elastic/apm-rum";
import config from "<redacted>-web/config/environment";

export const apm = initApm({
  serviceName: "frontend",
  serverUrl: "https://apm.<redacted>.com",
  environment: config.environment,
  logLevel: "debug",
});
Errors in browser console: shown above.
APM server: the apm-server pod does not appear to display any errors in this case; I assume the client never reaches the server.
I was running into the same problem. Check your ad blocker. I found that uBlock was blocking requests to */rum/events.
I'm guessing they consider this a type of user "tracker", which is why the requests are blocked. There's really no way around it unless you change the endpoint path, I guess.

Logstash not able to connect to Elasticsearch deployed on Kubernetes cluster

I have deployed Logstash and Elasticsearch pods on an EKS cluster. When I check the logs for the Logstash pod, it reports that the Elasticsearch server is unreachable, even though Elasticsearch is up and running. Please find the YAML files and the log error below.
configMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "logstash-configmap-development"
  namespace: "development"
  labels:
    app: "logstash-development"
data:
  logstash.conf: |-
    input {
      http {
      }
    }
    filter {
      json {
        source => "message"
      }
    }
    output {
      elasticsearch {
        hosts => ["https://my-server.com/elasticsearch-development/"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
      stdout {
        codec => rubydebug
      }
    }
deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "logstash-development"
  namespace: "development"
spec:
  selector:
    matchLabels:
      app: "logstash-development"
  replicas: 1
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: "logstash-development"
    spec:
      containers:
      - name: "logstash-development"
        image: "logstash:7.10.2"
        imagePullPolicy: "Always"
        env:
        - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
          value: "https://my-server.com/elasticsearch-development/"
        - name: "XPACK_MONITORING_ELASTICSEARCH_URL"
          value: "https://my-server.com/elasticsearch-development/"
        - name: "SERVER_BASEPATH"
          value: "logstash-development"
        securityContext:
          privileged: true
        ports:
        - containerPort: 8080
          protocol: TCP
        volumeMounts:
        - name: "logstash-conf-volume"
          mountPath: "/usr/share/logstash/pipeline/"
      volumes:
      - name: "logstash-conf-volume"
        configMap:
          name: "logstash-configmap-development"
          items:
          - key: "logstash.conf"
            path: "logstash.conf"
      imagePullSecrets:
      - name: "logstash"
service.yaml
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "logstash-development"
  namespace: "development"
  labels:
    app: "logstash-development"
spec:
  ports:
  - port: 55770
    targetPort: 8080
  selector:
    app: "logstash-development"
Logstash pod log error
[2021-06-09T08:22:38,708][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://my-server.com/elasticsearch-development/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://my-server.com/elasticsearch-development/][Manticore::ConnectTimeout] connect timed out"}
Note: Elasticsearch is up and running, and when I hit the Logstash URL it returns status OK.
I have checked with the Elasticsearch cluster IP; with that, Logstash is able to connect to Elasticsearch, but when I give the ingress path URL it is not able to connect.
Also, from the logs I noticed it is using an incorrect URL for Elasticsearch.
My Elasticsearch URL is something like this: https://my-server.com/elasticsearch
but instead Logstash is looking for https://my-server.com:9200/elasticsearch
With this URL (https://my-server.com:9200/elasticsearch) Elasticsearch is not accessible, and as a result it gives a connection timeout.
Can someone tell me why it uses https://my-server.com:9200/elasticsearch and not https://my-server.com/elasticsearch?
I am now able to connect Logstash with Elasticsearch. If you use Elasticsearch behind a DNS name, Logstash will by default assume Elasticsearch's port is 9200, so in my case it was building the URL https://my-server.com:9200/elasticsearch-development/. Elasticsearch was not accessible at that URL; it was only accessible at https://my-server.com/elasticsearch-development/. So I needed to add the HTTPS port, i.e. 443, to my Elasticsearch URL, which lets Logstash connect to Elasticsearch at https://my-server.com:443/elasticsearch-development/.
Long story short:
In the deployment.yaml file, the env variables XPACK_MONITORING_ELASTICSEARCH_HOSTS and XPACK_MONITORING_ELASTICSEARCH_URL were given the value https://my-server.com:443/elasticsearch-development/.
The same value was given in the logstash.conf file; a sketch of the corrected env entries follows.
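For reference, a minimal sketch of the corrected env entries in deployment.yaml (the host name is the placeholder from the question):
env:
- name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
  # Explicit HTTPS port 443 so Logstash does not fall back to the default 9200
  value: "https://my-server.com:443/elasticsearch-development/"
- name: "XPACK_MONITORING_ELASTICSEARCH_URL"
  value: "https://my-server.com:443/elasticsearch-development/"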

how to access go templated kubernetes secret in manifest

I'm running this tutorial https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-deploy-elasticsearch.html and found that the elasticsearch operator comes included with a pre-defined secret which is accessed through kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'. I was wondering how I can access it in a manifest file for a pod that will make use of this as an env var. The pod's manifest is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: user
  template:
    metadata:
      labels:
        app: user
    spec:
      containers:
      - name: user
        image: reactor/user
        env:
        - name: PORT
          value: "3000"
        - name: ES_SECRET
          valueFrom:
            secretKeyRef:
              name: quickstart-es-elastic-user
              key: { { .data.elastic } }
---
apiVersion: v1
kind: Service
metadata:
  name: user-svc
spec:
  selector:
    app: user
  ports:
  - name: user
    protocol: TCP
    port: 3000
    targetPort: 3000
When trying to define ES_SECRET as I did in this manifest, I get this error message: invalid map key: map[interface {}]interface {}{\".data.elastic\":interface {}(nil)}\n. Any help on resolving this would be much appreciated.
The secret returned via the API (kubectl get secret ...) is a JSON structure like this:
{
  "data": {
    "elastic": "base64 encoded string"
  }
}
So you just need to replace
key: { { .data.elastic } }
with
key: elastic
since it's a secretKeyRef (i.e. you refer to a value stored under a key in the data (= contents) of the secret whose name you specified above). No need to worry about base64 decoding; Kubernetes does it for you. A sketch of the corrected env entry follows.
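For reference, a minimal sketch of the corrected env entry in the Deployment (names taken from the question's manifest):
env:
- name: ES_SECRET
  valueFrom:
    secretKeyRef:
      # Secret created by the ECK operator, as referenced in the question
      name: quickstart-es-elastic-user
      # Key inside the Secret's data; Kubernetes base64-decodes the value for you
      key: elastic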

How to deploy filebeat to fetch nginx logs with logstash in kubernetes?

I deployed an nginx pod as a Deployment in k8s.
Now I want to deploy filebeat and logstash in the same cluster to get nginx logs.
Here are my manifest files.
nginx.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: logs
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: logs
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  namespace: logs
  name: nginx
  labels:
    app: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: http
  selector:
    app: nginx
filebeat.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: logs
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: filebeat-config
  namespace: logs
  labels:
    k8s-app: filebeat
data:
  filebeat.yml: |-
    filebeat.autodiscover:
      providers:
        - type: kubernetes
          host: ${NODE_NAME}
          hints.enabled: true
          templates:
            - condition.contains:
                kubernetes.namespace: logs
              config:
                - module: nginx
                  access:
                    enabled: true
                    var.paths: ["/var/log/nginx/access.log*"]
                    subPath: access.log
                    tags: ["access"]
                  error:
                    enabled: true
                    var.paths: ["/var/log/nginx/error.log*"]
                    subPath: error.log
                    tags: ["error"]
    processors:
      - add_cloud_metadata:
      - add_host_metadata:
    cloud.id: ${ELASTIC_CLOUD_ID}
    cloud.auth: ${ELASTIC_CLOUD_AUTH}
    output.logstash:
      hosts: ["logstash:5044"]
      loadbalance: true
      index: filebeat
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: logs
  labels:
    k8s-app: filebeat
spec:
  selector:
    matchLabels:
      k8s-app: filebeat
  template:
    metadata:
      labels:
        k8s-app: filebeat
    spec:
      serviceAccountName: filebeat
      terminationGracePeriodSeconds: 30
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
      - name: filebeat
        image: docker.elastic.co/beats/filebeat:7.10.0
        args: [
          "-c", "/usr/share/filebeat/filebeat.yml",
          "-e",
        ]
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        securityContext:
          runAsUser: 0
        resources:
          limits:
            memory: 200Mi
          requests:
            cpu: 100m
            memory: 100Mi
        volumeMounts:
        - name: config
          mountPath: /etc/filebeat.yml
          subPath: filebeat.yml
          readOnly: true
        - name: data
          mountPath: /usr/share/filebeat/data
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: config
        configMap:
          defaultMode: 0600
          name: filebeat-config
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers
      - name: varlog
        hostPath:
          path: /var/log
      - name: data
        hostPath:
          path: /var/lib/filebeat-data
          type: DirectoryOrCreate
logstash.yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: logs
---
apiVersion: v1
kind: Service
metadata:
  namespace: logs
  labels:
    app: logstash
  name: logstash
spec:
  ports:
  - name: "25826"
    port: 25826
    targetPort: 25826
  - name: "5044"
    port: 5044
    targetPort: 5044
  selector:
    app: logstash
status:
  loadBalancer: {}
---
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: logs
  name: logstash-configmap
data:
  logstash.yml: |
    http.host: "0.0.0.0"
    path.config: /usr/share/logstash/pipeline
  logstash.conf: |
    input {
      beats {
        port => 5044
        host => "0.0.0.0"
      }
    }
    filter {
      if [fileset][module] == "nginx" {
        if [fileset][name] == "access" {
          grok {
            match => { "message" => ["%{IPORHOST:[nginx][access][remote_ip]} - %{DATA:[nginx][access][user_name]} \[%{HTTPDATE:[nginx][access][time]}\] \"%{WORD:[nginx][access][method]} %{DATA:[nginx][access][url]} HTTP/%{NUMBER:[nginx][access][http_version]}\" %{NUMBER:[nginx][access][response_code]} %{NUMBER:[nginx][access][body_sent][bytes]} \"%{DATA:[nginx][access][referrer]}\" \"%{DATA:[nginx][access][agent]}\""] }
            remove_field => "message"
          }
          mutate {
            add_field => { "read_timestamp" => "%{@timestamp}" }
          }
          date {
            match => [ "[nginx][access][time]", "dd/MMM/YYYY:H:m:s Z" ]
            remove_field => "[nginx][access][time]"
          }
          useragent {
            source => "[nginx][access][agent]"
            target => "[nginx][access][user_agent]"
            remove_field => "[nginx][access][agent]"
          }
          geoip {
            source => "[nginx][access][remote_ip]"
            target => "[nginx][access][geoip]"
          }
        }
        else if [fileset][name] == "error" {
          grok {
            match => { "message" => ["%{DATA:[nginx][error][time]} \[%{DATA:[nginx][error][level]}\] %{NUMBER:[nginx][error][pid]}#%{NUMBER:[nginx][error][tid]}: (\*%{NUMBER:[nginx][error][connection_id]} )?%{GREEDYDATA:[nginx][error][message]}"] }
            remove_field => "message"
          }
          mutate {
            rename => { "@timestamp" => "read_timestamp" }
          }
          date {
            match => [ "[nginx][error][time]", "YYYY/MM/dd H:m:s" ]
            remove_field => "[nginx][error][time]"
          }
        }
      }
    }
    output {
      stdout { codec => rubydebug }
    }
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: logstash-nginx-to-gcs
  namespace: logs
spec:
  serviceName: "logstash"
  selector:
    matchLabels:
      app: logstash
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: logstash
    spec:
      terminationGracePeriodSeconds: 10
      volumes:
      - name: logstash-service-account-credentials
        secret:
          secretName: logstash-credentials
      containers:
      - name: logstash
        image: docker.elastic.co/logstash/logstash:7.10.0
        volumeMounts:
        - name: logstash-service-account-credentials
          mountPath: /secrets/logstash
          readOnly: true
        resources:
          limits:
            memory: 2Gi
  volumeClaimTemplates:
  - metadata:
      name: logstash-nginx-to-gcs-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Ki
I deployed them, but I'm not sure how Filebeat can fetch the nginx logs from the pod, since Filebeat runs as a DaemonSet.
When I check Logstash's logs:
kubectl logs -f logstash-nginx-to-gcs-0 -n logs
Using bundled JDK: /usr/share/logstash/jdk
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.jruby.ext.openssl.SecurityHelper (file:/tmp/jruby-1/jruby18310714590719622705jopenssl.jar) to field java.security.MessageDigest.provider
WARNING: Please consider reporting this to the maintainers of org.jruby.ext.openssl.SecurityHelper
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Sending Logstash logs to /usr/share/logstash/logs which is now configured via log4j2.properties
[2020-12-04T09:37:17,563][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.10.0", "jruby.version"=>"jruby 9.2.13.0 (2.5.7) 2020-08-03 9a89c94bcc OpenJDK 64-Bit Server VM 11.0.8+10 on 11.0.8+10 +indy +jit [linux-x86_64]"}
[2020-12-04T09:37:17,659][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}
[2020-12-04T09:37:17,712][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}
[2020-12-04T09:37:18,748][INFO ][logstash.agent ] No persistent UUID file found. Generating new UUID {:uuid=>"8b873949-cf90-491a-b76a-e3e7caa7f593", :path=>"/usr/share/logstash/data/uuid"}
[2020-12-04T09:37:20,050][WARN ][logstash.monitoringextension.pipelineregisterhook] xpack.monitoring.enabled has not been defined, but found elasticsearch configuration. Please explicitly set `xpack.monitoring.enabled: true` in logstash.yml
[2020-12-04T09:37:20,060][WARN ][deprecation.logstash.monitoringextension.pipelineregisterhook] Internal collectors option for Logstash monitoring is deprecated and targeted for removal in the next major version.
Please configure Metricbeat to monitor Logstash. Documentation can be found at:
https://www.elastic.co/guide/en/logstash/current/monitoring-with-metricbeat.html
[2020-12-04T09:37:21,056][WARN ][deprecation.logstash.outputs.elasticsearch] Relying on default value of `pipeline.ecs_compatibility`, which may change in a future major release of Logstash. To avoid unexpected changes when upgrading Logstash, please explicitly declare your desired ECS Compatibility mode.
[2020-12-04T09:37:22,082][INFO ][logstash.licensechecker.licensereader] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://elasticsearch:9200/]}}
[2020-12-04T09:37:22,572][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
[2020-12-04T09:37:22,705][WARN ][logstash.licensechecker.licensereader] Marking url as dead. Last error: [LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError] Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch {:url=>http://elasticsearch:9200/, :error_message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch", :error_class=>"LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError"}
[2020-12-04T09:37:22,791][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch"}
[2020-12-04T09:37:22,921][ERROR][logstash.monitoring.internalpipelinesource] Failed to fetch X-Pack information from Elasticsearch. This is likely due to failure to reach a live Elasticsearch cluster.
[2020-12-04T09:37:25,467][INFO ][org.reflections.Reflections] Reflections took 262 ms to scan 1 urls, producing 23 keys and 47 values
[2020-12-04T09:37:26,613][INFO ][logstash.javapipeline ][main] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50, "pipeline.max_inflight"=>125, "pipeline.sources"=>["/usr/share/logstash/pipeline/logstash.conf"], :thread=>"#<Thread:0x726c3bee run>"}
[2020-12-04T09:37:28,365][INFO ][logstash.javapipeline ][main] Pipeline Java execution initialization time {"seconds"=>1.74}
[2020-12-04T09:37:28,464][INFO ][logstash.inputs.beats ][main] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2020-12-04T09:37:28,525][INFO ][logstash.javapipeline ][main] Pipeline started {"pipeline.id"=>"main"}
[2020-12-04T09:37:28,819][INFO ][logstash.agent ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-12-04T09:37:29,036][INFO ][org.logstash.beats.Server][main][0710cad67e8f47667bc7612580d5b91f691dd8262a4187d9eca8cf87229d04aa] Starting server on port: 5044
[2020-12-04T09:37:29,732][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2020-12-04T09:37:52,828][ERROR][logstash.licensechecker.licensereader] Unable to retrieve license information from license server {:message=>"No Available connections"}
[2020-12-04T09:37:53,109][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://elasticsearch:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [http://elasticsearch:9200/][Manticore::ResolutionFailure] elasticsearch: Name or service not known"}
But I don't want to connect to Elasticsearch right now; I just want to test getting data.
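These particular errors come from the X-Pack license/monitoring checker (logstash.licensechecker.licensereader), not from the beats pipeline itself. One option, as a sketch against the logstash-configmap above, is to explicitly disable monitoring in logstash.yml so Logstash stops probing http://elasticsearch:9200:
logstash.yml: |
  http.host: "0.0.0.0"
  path.config: /usr/share/logstash/pipeline
  # Stop the license/monitoring checker from trying to reach Elasticsearch
  xpack.monitoring.enabled: false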

How to configure Elasticsearch Index Lifecycle Management (ILM) during installation in a YAML file

I would like to configure a default Index Lifecycle Management (ILM) policy and index template during the installation of ES in a Kubernetes cluster, in the YAML installation file, instead of calling the ES API after installation. How can I do that?
I have Elasticsearch installed in a Kubernetes cluster based on a YAML file.
The following queries work.
PUT _ilm/policy/logstash_policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}

PUT _template/logstash_template
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "index.lifecycle.name": "logstash_policy"
  }
}
I would like to have the above setup in place right after installation, without making any curl queries.
I'll try to answer both of your questions.
index template
You can pass the index template with this configuration in your Elasticsearch YAML. For instance:
setup.template:
  name: "<chosen template name>-%{[agent.version]}"
  pattern: "<chosen pattern name>-%{[agent.version]}-*"
Check out the ES documentation to see where exactly this setup.template belongs and you're good to go.
ilm policy
The way to make this work is to get the ilm-policy.json file that holds your ILM configuration into the pod's /usr/share/filebeat/ directory. In your YAML installation file, you can then use these lines in your config to make it work (I've added my whole ILM config):
setup.ilm:
  enabled: true
  policy_name: "<policy name>"
  rollover_alias: "<rollover alias name>"
  policy_file: "ilm-policy.json"
  pattern: "{now/d}-000001"
So, how do you get the file there? The ingredients are one ConfigMap containing your ilm-policy.json, and a volume plus volumeMount in your DaemonSet configuration to mount the ConfigMap's contents into the pod's directory.
Note: I used Helm for deploying Filebeat to an AKS cluster (v1.15), which connects to Elastic Cloud. In your case, the path to store your JSON will probably be /usr/share/elasticsearch/ilm-policy.json.
Below, you'll see a line like {{ .Files.Get <...> }}, which is a Helm templating function that gets the contents of the file. Alternatively, you can copy the file contents directly into the ConfigMap YAML, but keeping the file separate makes it more manageable in my opinion.
The configMap
Make sure your ilm-policy.json is somewhere reachable by your deployments. This is how the configmap can look:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ilmpolicy-config
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  ilm-policy.json: |-
{{ .Files.Get "ilm-policy.json" | indent 4 }}
The DaemonSet
At the DaemonSet's volumeMounts section, append this:
- name: ilm-configmap-volume
  mountPath: /usr/share/filebeat/ilm-policy.json
  subPath: ilm-policy.json
  readOnly: true
and at the volumes section append this:
- name: ilm-configmap-volume
  configMap:
    name: ilmpolicy-config
I'm not exactly sure the spacing is correct in the browser, but this should give a pretty good idea.
I hope this works for your setup! Good luck.
I've used the answer to get a custom policy in place for Packetbeat running with ECK.
The ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: packetbeat-ilmpolicy
  labels:
    k8s-app: packetbeat
data:
  ilm-policy.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "min_age": "0ms",
            "actions": {
              "rollover": {
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "1d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
The Beat config:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: packetbeat
spec:
  type: packetbeat
  elasticsearchRef:
    name: demo
  kibanaRef:
    name: demo
  config:
    pipeline: geoip-info
    packetbeat.interfaces.device: any
    packetbeat.protocols:
    - type: dns
      ports: [53]
      include_authorities: true
      include_additionals: true
    - type: http
      ports: [80, 8000, 8080, 9200, 9300]
    - type: tls
      ports: [443, 993, 995, 5223, 8443, 8883, 9243]
    packetbeat.flows:
      timeout: 30s
      period: 30s
    processors:
    - add_cloud_metadata: {}
    - add_host_metadata: {}
    setup.ilm:
      enabled: true
      overwrite: true
      policy_name: "packetbeat"
      policy_file: /usr/share/packetbeat/ilm-policy.json
      pattern: "{now/d}-000001"
  daemonSet:
    podTemplate:
      spec:
        terminationGracePeriodSeconds: 30
        hostNetwork: true
        automountServiceAccountToken: true # some older Beat versions are depending on this settings presence in k8s context
        dnsPolicy: ClusterFirstWithHostNet
        tolerations:
        - operator: Exists
        containers:
        - name: packetbeat
          securityContext:
            runAsUser: 0
            capabilities:
              add:
              - NET_ADMIN
          volumeMounts:
          - name: ilmpolicy-config
            mountPath: /usr/share/packetbeat/ilm-policy.json
            subPath: ilm-policy.json
            readOnly: true
        volumes:
        - name: ilmpolicy-config
          configMap:
            name: packetbeat-ilmpolicy
The important parts in the Beat config are the volume and volume mount, where we mount the ConfigMap into the container.
After this we can reference the file in the config with setup.ilm.policy_file.
