How to health check an Elasticsearch cluster from outside - elasticsearch

I want to write a script to health check our Elasticsearch cluster (deployed on Kubernetes).
I went inside the pod that runs the Elasticsearch master container and ran the commands below:
[elasticsearch@elasticsearch-master-0 ~]$ curl localhost:9200/frontend-dev-2021.12.03/_count
{"count":76,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0}}
[elasticsearch@elasticsearch-master-0 ~]$ curl localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 617,
  "active_shards" : 1234,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
As you can see, both the index count and the health check commands succeed.
But when I run these commands from outside (I gave the Elasticsearch cluster a public endpoint):
root@ip-192-168-1-1:~# curl --user username:password esdev.example.com/frontend-dev-2021.12.03/_count
{"count":76,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0}}
root@ip-192-168-1-1:~# curl --user username:password esdev.example.com/_cluster/health
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx</center>
</body>
</html>
Only the index count command succeeds; the health check command always produces a 403 Forbidden error.
I have searched and read through the official Elasticsearch docs, but even the official docs only run these commands from inside the Elasticsearch cluster or through Kibana (an HTTP service internal to the Kubernetes cluster).
How can I health check Elasticsearch from outside? Or is this impossible because of some mechanism of the Elasticsearch cluster?
Note: I put a basic-auth nginx (username:password) in front of Elasticsearch, and this nginx has an IngressRoute from Traefik v2:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    meta.helm.sh/release-name: basic-auth-nginx-dev
    meta.helm.sh/release-namespace: dev
  creationTimestamp: "2021-01-23T08:12:55Z"
  generation: 2
  labels:
    app: basic-auth-nginx-dev
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: traefik.containo.us/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app: {}
          f:app.kubernetes.io/managed-by: {}
      f:spec:
        .: {}
        f:entryPoints: {}
        f:routes: {}
    manager: Go-http-client
    operation: Update
    time: "2021-01-23T08:12:55Z"
  name: basic-auth-nginx-dev-web
  namespace: dev
  resourceVersion: "103562796"
  selfLink: /apis/traefik.containo.us/v1alpha1/namespaces/dev/ingressroutes/basic-auth-nginx-dev-web
  uid: 5832b501-b2d7-4600-93b6-b3c72c420115
spec:
  entryPoints:
  - web
  routes:
  - kind: Rule
    match: Host(`esdev.example.com`) && PathPrefix(`/`)
    priority: 1
    services:
    - kind: Service
      name: basic-auth-nginx-dev
      port: 80

Could you please show us your nginx config?
I think the problem comes from your nginx, because the output you show is nginx returning the 403, not Elasticsearch.
Could you please try another path that starts with _, like _template or something similar? There is a chance your nginx blocks access to paths that start with the _ character.
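For instance, here is a minimal sketch that only prints HTTP status codes (it reuses the hostname and Basic Auth credentials from your question, so adjust them to your setup). If the index path returns 200 while every _-prefixed path returns 403, the block is clearly in nginx, not Elasticsearch:
# Print only the HTTP status code for an index-level path vs. a few cluster-level (_-prefixed) paths.
curl -s -o /dev/null -w "%{http_code}\n" --user username:password esdev.example.com/frontend-dev-2021.12.03/_count
curl -s -o /dev/null -w "%{http_code}\n" --user username:password esdev.example.com/_cluster/health
curl -s -o /dev/null -w "%{http_code}\n" --user username:password esdev.example.com/_cat/health
curl -s -o /dev/null -w "%{http_code}\n" --user username:password esdev.example.com/_template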

Related

Logstash not able to connect to Elasticsearch deployed on Kubernetes cluster

I have deployed Logstash and Elasticsearch pods on an EKS cluster. When I check the logs of the Logstash pod, it reports that the Elasticsearch server is unreachable, even though Elasticsearch is up and running. Please find the YAML files and the log error below.
configMap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: "logstash-configmap-development"
  namespace: "development"
  labels:
    app: "logstash-development"
data:
  logstash.conf: |-
    input {
      http {
      }
    }
    filter {
      json {
        source => "message"
      }
    }
    output {
      elasticsearch {
        hosts => ["https://my-server.com/elasticsearch-development/"]
        index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      }
      stdout {
        codec => rubydebug
      }
    }
deployment.yaml
---
apiVersion: "apps/v1"
kind: "Deployment"
metadata:
  name: "logstash-development"
  namespace: "development"
spec:
  selector:
    matchLabels:
      app: "logstash-development"
  replicas: 1
  strategy:
    type: "RollingUpdate"
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: "logstash-development"
    spec:
      containers:
        - name: "logstash-development"
          image: "logstash:7.10.2"
          imagePullPolicy: "Always"
          env:
            - name: "XPACK_MONITORING_ELASTICSEARCH_HOSTS"
              value: "https://my-server.com/elasticsearch-development/"
            - name: "XPACK_MONITORING_ELASTICSEARCH_URL"
              value: "https://my-server.com/elasticsearch-development/"
            - name: "SERVER_BASEPATH"
              value: "logstash-development"
          securityContext:
            privileged: true
          ports:
            - containerPort: 8080
              protocol: TCP
          volumeMounts:
            - name: "logstash-conf-volume"
              mountPath: "/usr/share/logstash/pipeline/"
      volumes:
        - name: "logstash-conf-volume"
          configMap:
            name: "logstash-configmap-development"
            items:
              - key: "logstash.conf"
                path: "logstash.conf"
      imagePullSecrets:
        - name: "logstash"
service.yaml
---
apiVersion: "v1"
kind: "Service"
metadata:
  name: "logstash-development"
  namespace: "development"
  labels:
    app: "logstash-development"
spec:
  ports:
    - port: 55770
      targetPort: 8080
  selector:
    app: "logstash-development"
Logstash pod log error
[2021-06-09T08:22:38,708][WARN ][logstash.licensechecker.licensereader] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://my-server.com/elasticsearch-development/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://my-server.com/elasticsearch-development/][Manticore::ConnectTimeout] connect timed out"}
Note: Elasticsearch is up and running, and when I hit the Logstash URL it reports status OK.
I have checked with the Elasticsearch cluster IP: Logstash is able to connect to Elasticsearch, but when I give it the ingress path URL it is not able to connect.
Also, from the logs I noticed it is using an incorrect URL for Elasticsearch.
My Elasticsearch URL is something like this: https://my-server.com/elasticsearch
but instead Logstash is looking for https://my-server.com:9200/elasticsearch
With this URL (https://my-server.com:9200/elasticsearch) Elasticsearch is not accessible, so it results in a connection timeout.
Can someone tell me why it uses (https://my-server.com:9200/elasticsearch) and not (https://my-server.com/elasticsearch)?
I am now able to connect Logstash with Elasticsearch. If you use Elasticsearch with a DNS name, Logstash by default assumes port 9200, so in my case it was using the URL https://my-server.com:9200/elasticsearch-development/. With that URL Elasticsearch was not accessible; it was only accessible at https://my-server.com/elasticsearch-development/. So I needed to add the HTTPS port, i.e. 443, to my Elasticsearch URL, through which Logstash is able to connect to Elasticsearch: https://my-server.com:443/elasticsearch-development/
Long story short:
In the deployment.yaml file, the env variables XPACK_MONITORING_ELASTICSEARCH_HOSTS and XPACK_MONITORING_ELASTICSEARCH_URL were given the value https://my-server.com:443/elasticsearch-development/
The same value was set in the logstash.conf file.
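For reference, a hedged way to confirm the two URL forms behave differently from any host that can reach the ingress (my-server.com and the path are the same placeholders used throughout this question):
# The explicit :443 form should answer, while the implicit :9200 form should time out,
# matching the HostUnreachableError in the Logstash log.
curl -sS -o /dev/null -w "%{http_code}\n" "https://my-server.com:443/elasticsearch-development/"
curl -sS -o /dev/null -w "%{http_code}\n" --connect-timeout 5 "https://my-server.com:9200/elasticsearch-development/"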

Elasticsearch with xpack security fails

I am trying to set up a simple ELK stack using Docker. When I disable X-Pack security it starts fine and I can access the Kibana interface. If X-Pack security is enabled I get a "Kibana server is not ready yet" error from the Kibana interface. This error is most likely caused by this Elasticsearch error:
{"type": "server", "timestamp": "2020-08-03T15:35:10,134Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elastic-cluster", "node.name": "elasticsearch", "message": "Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.monitoring-es-7-2020.08.03][0]]]).", "cluster.uuid": "Vdk1-_4sSvuqlEspQcF-6A", "node.id": "PZMUpi_JSJS6IZ7tv6H22g" }
{"type": "server", "timestamp": "2020-08-03T15:35:10,560Z", "level": "ERROR", "component": "o.e.x.s.a.e.NativeUsersStore", "cluster.name": "elastic-cluster", "node.name": "elasticsearch", "message": "security index is unavailable. short circuiting retrieval of user [elasticadmin]", "cluster.uuid": "Vdk1-_4sSvuqlEspQcF-6A", "node.id": "PZMUpi_JSJS6IZ7tv6H22g" }
This is my elasticsearch.yml:
cluster.name: elastic-cluster
node.name: elasticsearch
network.host: 0.0.0.0
transport.host: 0.0.0.0
## Cluster Settings
discovery.seed_hosts: elasticsearch
cluster.initial_master_nodes: elasticsearch
## License
xpack.license.self_generated.type: basic
# Security
xpack.security.enabled: true
## - ssl
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: certs/elasticsearch.key
xpack.security.transport.ssl.certificate: certs/elasticsearch.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
## - http
#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.key: certs/elasticsearch.key
#xpack.security.http.ssl.certificate: certs/elasticsearch.crt
#xpack.security.http.ssl.certificate_authorities: certs/ca.crt
#xpack.security.http.ssl.client_authentication: optional
# Monitoring
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
This is the error log from Kibana:
{"type":"log","#timestamp":"2020-08-03T15:42:22Z","tags":["warning","plugins","licensing"],"pid":6,"
message":"License information could not be obtained from Elasticsearch due to [security_exception] unable to authenticate user [elasticadmin] for REST request [/_xpack], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } } :: {\"path\":\"/_xpack\",\"statusCode\":401,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [elasticadmin] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}}],\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [elasticadmin] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}},\\\"status\\\":401}\",\"wwwAuthenticateDirective\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"} error"}
Basic curl request:
curl -H "Authorization: Basic ZWxhc3RpY2FkbWluOjEyMzQ1Njc4OQ==" -XGET "http://localhost:9200/_cat/nodes?v&pretty"
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "unable to authenticate user [elasticadmin] for REST request [/_cat/nodes?v&pretty]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "unable to authenticate user [elasticadmin] for REST request [/_cat/nodes?v&pretty]",
    "header" : {
      "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status" : 401
}
Another Auth request:
docker@docker:~$ curl -H "Authorization: Basic ZWxhc3RpY2FkbWluOjEyMzQ1Njc4OQ" -XGET "http://localhost:9200/_security/_authenticate"
{"error":{"root_cause":[{"type":"security_exception","reason":"unable to authenticate user [elasticadmin] for REST request [/_security/_authenticate]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"unable to authenticate user [elasticadmin] for REST request [/_security/_authenticate]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
Docker-Compose:
secrets:
  elasticsearch.keystore:
    file: ${ELK_DATA}/secrets/keystore/elasticsearch.keystore
  elastic.ca:
    file: ${ELK_DATA}/secrets/certs/ca/ca.crt
  elasticsearch.certificate:
    file: ${ELK_DATA}/secrets/certs/elasticsearch/elasticsearch.crt
  elasticsearch.key:
    file: ${ELK_DATA}/secrets/certs/elasticsearch/elasticsearch.key
  kibana.certificate:
    file: ${ELK_DATA}/secrets/certs/kibana/kibana.crt
  kibana.key:
    file: ${ELK_DATA}/secrets/certs/kibana/kibana.key

services:
  ####################################################################
  ############################# ELK ##################################
  ####################################################################
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
    restart: unless-stopped
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTIC_CLUSTER_NAME: ${ELASTIC_CLUSTER_NAME}
      ELASTIC_NODE_NAME: ${ELASTIC_NODE_NAME}
      ELASTIC_INIT_MASTER_NODE: ${ELASTIC_INIT_MASTER_NODE}
      ELASTIC_DISCOVERY_SEEDS: ${ELASTIC_DISCOVERY_SEEDS}
      ES_JAVA_OPTS: -Xmx${ELASTICSEARCH_HEAP} -Xms${ELASTICSEARCH_HEAP} -Des.enforce.bootstrap.checks=true
      bootstrap.memory_lock: "true"
    volumes:
      - ${ELK_DATA}/elasticsearch/data:/usr/share/elasticsearch/data
      - ${ELK_DATA}/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ${ELK_DATA}/elasticsearch/config/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties
    secrets:
      - source: elasticsearch.keystore
        target: /usr/share/elasticsearch/config/elasticsearch.keystore
      - source: elastic.ca
        target: /usr/share/elasticsearch/config/certs/ca.crt
      - source: elasticsearch.certificate
        target: /usr/share/elasticsearch/config/certs/elasticsearch.crt
      - source: elasticsearch.key
        target: /usr/share/elasticsearch/config/certs/elasticsearch.key
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 200000
        hard: 200000
    networks:
      - traefik_proxy

  logstash:
    container_name: logstash
    image: docker.elastic.co/logstash/logstash:${ELK_VERSION}
    restart: unless-stopped
    volumes:
      - ${ELK_DATA}/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ${ELK_DATA}/logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml
      - ${ELK_DATA}/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTICSEARCH_HOST_PORT: ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}
      LS_JAVA_OPTS: "-Xmx${LOGSTASH_HEAP} -Xms${LOGSTASH_HEAP}"
    ports:
      - 5044:5044
      - 9600:9600
    networks:
      - traefik_proxy

  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:${ELK_VERSION}
    restart: unless-stopped
    volumes:
      - ${ELK_DATA}/kibana/config:/usr/share/kibana/config
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTICSEARCH_HOST_PORT: ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}
    secrets:
      - source: elastic.ca
        target: /certs/ca.crt
      - source: kibana.certificate
        target: /certs/kibana.crt
      - source: kibana.key
        target: /certs/kibana.key
    ports:
      - 5601:5601
    networks:
      - traefik_proxy
Where should I start looking to find the source of this issue?
Thanks for any help!
When you enable X-Pack, Elasticsearch starts, but it seems your Kibana is not getting authenticated. Please see the part of your error message below, which explains this:
elasticadmin user is not authenticated
Please check this user and make sure you are passing the correct authentication while accessing Elasticsearch. You need to pass the username and password using the basic authentication mechanism.
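For example, a minimal check with curl (elasticadmin and the password are placeholders; this is the same call as the Base64 header in the question, with -u doing the encoding for you):
# Ask Elasticsearch who it thinks you are; a 200 response means the credentials are accepted.
curl -u elasticadmin:<your-password> -XGET "http://localhost:9200/_security/_authenticate?pretty"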
I had the same issue, but I solved it:
Step 1
You can configure your Docker Compose like this:
kibana:
  build: kibana
  container_name: kibana
  ports:
    - 5601:5601
  volumes:
    - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
  networks:
    backend:
      aliases:
        - "kibana"
Step 2
And my kibana.yml file is this:
...
elasticsearch.username: "kibana"
elasticsearch.password: "mypwd"
...
and my Dockerfile is:
FROM docker.elastic.co/kibana/kibana:7.10.2
COPY kibana.yml /usr/share/kibana/kibana.yml
USER root
RUN chown root:kibana /usr/share/kibana/config/kibana.yml
USER kibana
I got this issue when the Elasticsearch data folder was deleted and then re-initialized from scratch. The point is that the built-in users were not initialized.
As soon as I initialized the built-in users, the error disappeared and the system worked again:
bin/elasticsearch-setup-passwords interactive|auto [-u "https://<host_name>:9200"]
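For a Docker-based setup like the one above, a sketch of that step could look like this (the container name is taken from the compose file; the password is whatever the tool generates or you choose):
# Initialize the built-in users inside the running container
# (auto generates random passwords, interactive prompts for them; --batch skips the confirmation).
docker exec -it elasticsearch bin/elasticsearch-setup-passwords auto --batch
# Afterwards, authentication should succeed again:
curl -u elastic:<generated-password> "http://localhost:9200/_cat/nodes?v&pretty"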

How to configure Elasticsearch Index Lifecycle Management (ILM) during installation in a YAML file

I would like to configure a default Index Lifecycle Management (ILM) policy and index template during the installation of ES in a Kubernetes cluster, in the YAML installation file, instead of calling the ES API after installation. How can I do that?
I have Elasticsearch installed in a Kubernetes cluster based on a YAML file.
The following queries work:
PUT _ilm/policy/logstash_policy
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
PUT _template/logstash_template
{
  "index_patterns": ["logstash-*"],
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 1,
    "index.lifecycle.name": "logstash_policy"
  }
}
I would like to have the above setup in place right after installation, without making any curl queries.
I'll try to answer both of your questions.
Index template
You can pass the index template with this configuration in your Elasticsearch YAML. For instance:
setup.template:
  name: "<chosen template name>-%{[agent.version]}"
  pattern: "<chosen pattern name>-%{[agent.version]}-*"
Checkout the ES documentation to see where exactly this setup.template belongs and you're good to go.
ILM policy
The way to make this work is to get the ilm-policy.json file that holds your ILM configuration into the pod's /usr/share/filebeat/ directory. In your YAML installation file, you can then use these lines in your config to get it to work (I've added my whole ILM config):
setup.ilm:
  enabled: true
  policy_name: "<policy name>"
  rollover_alias: "<rollover alias name>"
  policy_file: "ilm-policy.json"
  pattern: "{now/d}-000001"
So, how do we get the file there? The ingredients are one ConfigMap containing your ilm-policy.json, and a volume and volumeMount in your DaemonSet configuration to mount the ConfigMap's contents into the pod's directory.
Note: I used Helm to deploy Filebeat to an AKS cluster (v1.15), which connects to Elastic Cloud. In your case, the application path to store your JSON will probably be /usr/share/elasticsearch/ilm-policy.json.
Below, you'll see a line like {{ .Files.Get <...> }}, which is a Helm templating function that gets the contents of a file. Alternatively, you can copy the file contents directly into the ConfigMap YAML, but keeping the file separate makes it more manageable in my opinion.
The ConfigMap
Make sure your ilm-policy.json is somewhere your deployments can reach. This is how the ConfigMap can look:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ilmpolicy-config
  namespace: logging
  labels:
    k8s-app: filebeat
data:
  ilm-policy.json: |-
{{ .Files.Get "ilm-policy.json" | indent 4 }}
The DaemonSet
At the DaemonSet's volumeMounts section, append this:
- name: ilm-configmap-volume
  mountPath: /usr/share/filebeat/ilm-policy.json
  subPath: ilm-policy.json
  readOnly: true
and at the volumes section, append this:
- name: ilm-configmap-volume
  configMap:
    name: ilmpolicy-config
I'm not exactly sure the spacing renders correctly in the browser, but this should give a pretty good idea.
I hope this works for your setup! Good luck.
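As a quick sanity check (a sketch only; the pod name, namespace, credentials and policy name below are placeholders for your own values), you can verify both the mounted file and the resulting policy:
# Confirm the ConfigMap contents ended up inside the pod:
kubectl -n logging exec <filebeat-pod> -- cat /usr/share/filebeat/ilm-policy.json
# Once the Beat has started with setup.ilm enabled, the policy should exist in Elasticsearch:
curl -u <user>:<password> "https://<es-host>/_ilm/policy/<policy name>?pretty"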
I've used the answer above to get a custom policy in place for Packetbeat running with ECK.
The ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: packetbeat-ilmpolicy
  labels:
    k8s-app: packetbeat
data:
  ilm-policy.json: |-
    {
      "policy": {
        "phases": {
          "hot": {
            "min_age": "0ms",
            "actions": {
              "rollover": {
                "max_age": "1d"
              }
            }
          },
          "delete": {
            "min_age": "1d",
            "actions": {
              "delete": {}
            }
          }
        }
      }
    }
The Beat config:
apiVersion: beat.k8s.elastic.co/v1beta1
kind: Beat
metadata:
  name: packetbeat
spec:
  type: packetbeat
  elasticsearchRef:
    name: demo
  kibanaRef:
    name: demo
  config:
    pipeline: geoip-info
    packetbeat.interfaces.device: any
    packetbeat.protocols:
      - type: dns
        ports: [53]
        include_authorities: true
        include_additionals: true
      - type: http
        ports: [80, 8000, 8080, 9200, 9300]
      - type: tls
        ports: [443, 993, 995, 5223, 8443, 8883, 9243]
    packetbeat.flows:
      timeout: 30s
      period: 30s
    processors:
      - add_cloud_metadata: {}
      - add_host_metadata: {}
    setup.ilm:
      enabled: true
      overwrite: true
      policy_name: "packetbeat"
      policy_file: /usr/share/packetbeat/ilm-policy.json
      pattern: "{now/d}-000001"
  daemonSet:
    podTemplate:
      spec:
        terminationGracePeriodSeconds: 30
        hostNetwork: true
        automountServiceAccountToken: true # some older Beat versions are depending on this settings presence in k8s context
        dnsPolicy: ClusterFirstWithHostNet
        tolerations:
          - operator: Exists
        containers:
          - name: packetbeat
            securityContext:
              runAsUser: 0
              capabilities:
                add:
                  - NET_ADMIN
            volumeMounts:
              - name: ilmpolicy-config
                mountPath: /usr/share/packetbeat/ilm-policy.json
                subPath: ilm-policy.json
                readOnly: true
        volumes:
          - name: ilmpolicy-config
            configMap:
              name: packetbeat-ilmpolicy
The important parts in the Beat config are the volume and volume mount, where we mount the ConfigMap into the container.
After this we can reference the file in the config with setup.ilm.policy_file.
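If you want to confirm the policy actually landed in the ECK-managed cluster, a hedged check could look like this (it assumes the usual ECK naming convention of a demo-es-elastic-user secret and a demo-es-http service for the cluster named demo above):
# Fetch the elastic user's password created by ECK, then ask for the policy.
PW=$(kubectl get secret demo-es-elastic-user -o go-template='{{.data.elastic | base64decode}}')
kubectl port-forward service/demo-es-http 9200:9200 &
sleep 2
curl -k -u elastic:$PW "https://localhost:9200/_ilm/policy/packetbeat?pretty"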

Multiline string annotations for terraform kubernetes provider

I would like to set up Ambassador as an API gateway for Kubernetes using Terraform. There are several ways to configure Ambassador. The recommended way, according to the documentation, is to use Kubernetes annotations for each service that is routed and exposed outside the cluster. This is done easily with Kubernetes YAML configuration:
kind: Service
apiVersion: v1
metadata:
  name: my-service
  annotations:
    getambassador.io/config: |
      ---
      apiVersion: ambassador/v0
      kind: Mapping
      name: my_service_mapping
      prefix: /my-service/
      service: my-service
spec:
  selector:
    app: MyApp
  ports:
    - protocol: TCP
      port: 80
      targetPort: 9376
The getambassador.io/config field's value starting with | suggests it is a multiline string value. How can I achieve the same thing using Terraform HCL?
The Terraform documentation contains a section about multiline strings using heredoc syntax (<<EOF your multiline string EOF):
resource "kubernetes_service" "my-service" {
"metadata" {
name = "my-service"
annotations {
"getambassador.io/config" = <<EOF
apiVersion: ambassador/v0
kind: Mapping
name: my_service_mapping
prefix: /my-service/
service: my-service
EOF
}
}
"spec" {
selector {
app = "MyApp"
}
port {
protocol = "TCP"
port = 80
target_port = "9376"
}
}
}
Make sure there is no triple dash (---) in the YAML configuration; Terraform parses it incorrectly.
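If you want to double-check that the heredoc survived the round trip unchanged, a possible verification after applying (using the resource and service names from the example above) is:
terraform apply
# Print the annotation exactly as stored on the Service; it should match the YAML Mapping above.
kubectl get service my-service -o jsonpath='{.metadata.annotations.getambassador\.io/config}'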

Cannot access Kibana dashboard

I am trying to deploy Kibana in my Kubernetes cluster, which is on AWS. To access the Kibana dashboard I have created an ingress that is mapped to xyz.com. Here is my Kibana deployment file.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: kibana
  labels:
    component: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      component: kibana
  template:
    metadata:
      labels:
        component: kibana
    spec:
      containers:
        - name: kibana
          image: docker.elastic.co/kibana/kibana-oss:6.3.2
          env:
            - name: CLUSTER_NAME
              value: myesdb
            - name: SERVER_BASEPATH
              value: /
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 100m
          ports:
            - containerPort: 5601
              name: http
          readinessProbe:
            httpGet:
              path: /api/status
              port: http
            initialDelaySeconds: 20
            timeoutSeconds: 5
          volumeMounts:
            - name: config
              mountPath: /usr/share/kibana/config
              readOnly: true
      volumes:
        - name: config
          configMap:
            name: kibana-config
Whenever I deploy it, it gives me the following error. What should my SERVER_BASEPATH be in order for it to work? I know it defaults to /app/kibana.
FATAL { ValidationError: child "server" fails because [child "basePath" fails because ["basePath" with value "/" fails to match the start with a slash, don't end with one pattern]]
at Object.exports.process (/usr/share/kibana/node_modules/joi/lib/errors.js:181:19)
at internals.Object._validateWithOptions (/usr/share/kibana/node_modules/joi/lib/any.js:651:31)
at module.exports.internals.Any.root.validate (/usr/share/kibana/node_modules/joi/lib/index.js:121:23)
at Config._commit (/usr/share/kibana/src/server/config/config.js:119:35)
at Config.set (/usr/share/kibana/src/server/config/config.js:89:10)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:62:10)
at _lodash2.default.each.child (/usr/share/kibana/src/server/config/config.js:51:14)
at arrayEach (/usr/share/kibana/node_modules/lodash/index.js:1289:13)
at Function.<anonymous> (/usr/share/kibana/node_modules/lodash/index.js:3345:13)
at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:50:31)
at new Config (/usr/share/kibana/src/server/config/config.js:41:10)
at Function.withDefaultSchema (/usr/share/kibana/src/server/config/config.js:34:12)
at KbnServer.exports.default (/usr/share/kibana/src/server/config/setup.js:9:37)
at KbnServer.mixin (/usr/share/kibana/src/server/kbn_server.js:136:16)
at <anonymous>
at process._tickCallback (internal/process/next_tick.js:188:7)
isJoi: true,
name: 'ValidationError',
details:
[ { message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern',
path: 'server.basePath',
type: 'string.regex.name',
context: [Object] } ],
_object:
{ pkg:
{ version: '6.3.2',
branch: '6.3',
buildNum: 17307,
buildSha: '53d0c6758ac3fb38a3a1df198c1d4c87765e63f7' },
dev: { basePathProxyTarget: 5603 },
pid: { exclusive: false },
cpu: { cgroup: [Object] },
cpuacct: { cgroup: [Object] },
server: { name: 'kibana', host: '0', basePath: '/' } },
annotate: [Function] }
I followed this guide https://github.com/pires/kubernetes-elasticsearch-cluster
Any idea what might be the issue?
I believe that the example config in the official Kibana repository gives a hint about the cause of this problem; here's the server.basePath setting:
# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""
The fact that server.basePath cannot end in a slash could mean that Kibana basically interprets your setting of "/" as ending in a slash. I've not dug deeper into this though.
This error message is interesting:
message: '"basePath" with value "/" fails to match the start with a slash, don\'t end with one pattern'
So this error message complements the documentation: the base path must start with a slash and must not end with one.
I reproduced this in minikube using your Deployment manifest, but I removed the volume mount parts at the end. Changing SERVER_BASEPATH to /<SOMETHING> works fine, so basically I think you just need to set a proper basepath.
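For example, a minimal sketch of that change (the /kibana value and the port-forward check are my assumptions, not part of the original setup):
# Set SERVER_BASEPATH to a value that starts with a slash and does not end with one, e.g. /kibana,
# then confirm the pod comes up and the readiness endpoint answers:
kubectl set env deployment/kibana SERVER_BASEPATH=/kibana
kubectl port-forward deployment/kibana 5601:5601 &
sleep 2
curl -s http://localhost:5601/api/status | head -c 200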
