master_not_discovered_exception for Elasticsearch 5.5

This is what my master's elasticsearch.yml looks like:
cluster.name: myelastic
node.name: myelastic-master1
node.master: true
node.data: false
indices.queries.cache.size: 20%
network.host: _ec2_
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["masterip1","masterip2","masterip3","dataip1","dataip2","dataip3","dataip4","dataip5"]
and this is what the data node's elasticsearch.yml looks like:
cluster.name: myelastic
node.name: myelastic-data1
node.master: false
node.data: true
indices.queries.cache.size: 20%
network.host: _ec2_
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["masterip1","masterip2","masterip3","dataip1","dataip2","dataip3","dataip4","dataip5"]
When I do a curl -XGET 'myelastic-data1:9200/_cat/master?v&pretty', I get:
{
  "error": {
    "root_cause": [
      {
        "type": "master_not_discovered_exception",
        "reason": null
      }
    ],
    "type": "master_not_discovered_exception",
    "reason": null
  },
  "status": 503
}
Any idea what I am missing?

Set
network.host: _eth0_
(the underscores are Elasticsearch's syntax for binding to a named network interface) and restart the Elasticsearch service.
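For comparison, a minimal 5.x dedicated-master / dedicated-data pair might look like the sketch below. This is an illustration, not the poster's exact setup: it assumes three master-eligible nodes (hence minimum_master_nodes: 2, i.e. 3/2 + 1) and relies on the fact that the unicast host list only needs to contain the master-eligible nodes:

# master-eligible node: can be elected master, holds no data
cluster.name: myelastic
node.name: myelastic-master1
node.master: true
node.data: false
network.host: _eth0_
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["masterip1","masterip2","masterip3"]

# data node: never elected master, holds the shards
cluster.name: myelastic
node.name: myelastic-data1
node.master: false
node.data: true
network.host: _eth0_
discovery.zen.minimum_master_nodes: 2
discovery.zen.ping.unicast.hosts: ["masterip1","masterip2","masterip3"]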

Related

Logstash cannot connect to the Elasticsearch cluster with Xpack enabled

The difficulty I encountered was that Logstash could not connect to the Elasticsearch cluster with Xpack enabled.
This is an Elasticsearch cluster composed of four nodes, with xpack enabled. I created a new certificate for the cluster's transport.ssl and applied it in the configuration file.
An index named "jsonfile-daemonset-syslog-2022.12.21" had been created before xpack was enabled in the cluster. After xpack was enabled, new logs can no longer be sent to the cluster from Logstash, and new indexes cannot be created.
root@esnode-1:/etc/elasticsearch# cat /etc/hosts
127.0.0.1 localhost
172.16.20.66 esnode-1
172.16.20.60 esnode-2
172.16.20.105 esnode-3
172.16.100.28 esnode-4
172.16.20.87 logstash
root@esnode-1:/etc/elasticsearch# cat elasticsearch.yml | grep -v '^$' | grep -v '^#'
cluster.name: will-cluster1
node.name: esnode-1
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 172.16.20.66
http.port: 9200
discovery.seed_hosts: ["esnode-1","esnode-2","esnode-3","esnode-4"]
cluster.initial_master_nodes: ["esnode-1","esnode-2","esnode-3","esnode-4"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/elastic-certificates.p12
  truststore.path: certs/elastic-certificates.p12
http.host: 0.0.0.0
$ /usr/share/elasticsearch/bin/elasticsearch-certutil ca (no password set)
$ /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 (set password: 123456)
Note:
user: elastic
password: ednFPXyz357##
user: kibana_system
password: kibana357xy#
user: logstash_system
password: logstashXyZ235#
root@esnode-1:/etc/elasticsearch# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200
Enter host password for user 'elastic':
{
"name" : "esnode-1",
"cluster_name" : "will-cluster1",
"cluster_uuid" : "5aT8AVA5STity523pJhvGQ",
"version" : {
"number" : "8.5.3",
"build_flavor" : "default",
"build_type" : "deb",
"build_hash" : "4ed5ee9afac63de92ec98f404ccbed7d3ba9584e",
"build_date" : "2022-12-05T18:22:22.226119656Z",
"build_snapshot" : false,
"lucene_version" : "9.4.2",
"minimum_wire_compatibility_version" : "7.17.0",
"minimum_index_compatibility_version" : "7.0.0"
},
"tagline" : "You Know, for Search"
}
root@logstash:/etc/logstash# cat /etc/logstash/logstash.yml | grep -v '^$' | grep -v '^#'
path.data: /var/lib/logstash
path.logs: /var/log/logstash
root@logstash:/etc/logstash#
root@logstash:/etc/logstash# cat /etc/logstash/conf.d/logsatsh-daemonset-jsonfile-kafka-to-es.conf
input {
  kafka {
    bootstrap_servers => "172.16.1.67:9092,172.16.1.37:9092,172.16.1.203:9092"
    topics => ["jsonfile-log-topic"]
    codec => "json"
  }
}
output {
  stdout { codec => rubydebug }
}
output {
  #if [fields][type] == "app1-access-log" {
  if [type] == "jsonfile-daemonset-applog" {
    elasticsearch {
      hosts => ["https://172.16.20.66:9200","https://172.16.20.60:9200","https://172.16.20.105:9200","https://172.16.100.28:9200"]
      index => "jsonfile-daemonset-applog-%{+YYYY.MM.dd}"
      truststore => "/etc/logstash/elastic-certificates.p12"
      user => "logstash_system"
      password => "logstashXyZ235#"
    }
  }
  if [type] == "jsonfile-daemonset-syslog" {
    elasticsearch {
      hosts => ["https://172.16.20.66:9200","https://172.16.20.60:9200","https://172.16.20.105:9200","https://172.16.100.28:9200"]
      index => "jsonfile-daemonset-syslog-%{+YYYY.MM.dd}"
      truststore => "/etc/logstash/elastic-certificates.p12"
      user => "logstash_system"
      password => "logstashXyZ235#"
    }
  }
}
The error output from starting Logstash is below:
root@logstash:/etc/logstash/conf.d# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logsatsh-daemonset-jsonfile-kafka-to-es.conf --path.settings=/etc/logstash
Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2022-12-24T12:09:04,135][INFO ][logstash.runner ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2022-12-24T12:09:04,143][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.5.3", "jruby.version"=>"jruby 9.3.9.0 (2.6.8) 2022-10-24 537cd1f8bc OpenJDK 64-Bit Server VM 17.0.5+8 on 17.0.5+8 +indy +jit [x86_64-linux]"}
[2022-12-24T12:09:04,152][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2022-12-24T12:09:04,702][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-12-24T12:09:06,947][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::OrgLogstashSecretStore::SecretStoreException::LoadException", :message=>"Found a file at /etc/logstash/logstash.keystore, but it is not a valid Logstash keystore.", :backtrace=>["org.logstash.secret.store.backend.JavaKeyStore.load(JavaKeyStore.java:294)", "org.logstash.secret.store.backend.JavaKeyStore.load(JavaKeyStore.java:77)", "org.logstash.secret.store.SecretStoreFactory.doIt(SecretStoreFactory.java:129)", "org.logstash.secret.store.SecretStoreFactory.load(SecretStoreFactory.java:115)", "org.logstash.secret.store.SecretStoreExt.getIfExists(SecretStoreExt.java:60)", "org.logstash.execution.AbstractPipelineExt.getSecretStore(AbstractPipelineExt.java:582)", "org.logstash.execution.AbstractPipelineExt.initialize(AbstractPipelineExt.java:181)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:72)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:846)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1229)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:1202)", "org.jruby.ir.targets.indy.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:29)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$initialize$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:139)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:112)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:329)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:87)", "org.jruby.RubyClass.newInstance(RubyClass.java:911)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.indy.InvokeSite.invoke(InvokeSite.java:208)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:50)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:139)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:112)", "org.jruby.ir.targets.indy.InvokeSite.invoke(InvokeSite.java:208)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:141)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:64)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)", "org.jruby.runtime.Block.call(Block.java:143)", "org.jruby.RubyProc.call(RubyProc.java:309)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:107)", "java.base/java.lang.Thread.run(Thread.java:833)"]}
[2022-12-24T12:09:07,088][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-12-24T12:09:07,161][INFO ][logstash.runner ] Logstash shut down.
[2022-12-24T12:09:07,178][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:790) ~[jruby.jar:?]
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:753) ~[jruby.jar:?]
at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:91) ~[?:?]
@Wei Yu, it seems you are missing:
ssl => true
in your Logstash config for the ES output.
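A sketch of one output block with TLS switched on (an illustration based on the Logstash 8.x elasticsearch-output options; the truststore password 123456 is taken from the certutil step above, and whether logstash_system has privileges to write these indexes is a separate question, since it is normally a monitoring-only user):

output {
  if [type] == "jsonfile-daemonset-syslog" {
    elasticsearch {
      hosts => ["https://172.16.20.66:9200"]
      index => "jsonfile-daemonset-syslog-%{+YYYY.MM.dd}"
      ssl => true                                            # enable TLS to the https:// hosts
      truststore => "/etc/logstash/elastic-certificates.p12"
      truststore_password => "123456"                        # assumption: the password chosen in the certutil step
      user => "logstash_system"
      password => "logstashXyZ235#"
    }
  }
}

Note that the stack trace above fails before any output runs: Logstash aborts because /etc/logstash/logstash.keystore is not a valid keystore. Removing or recreating it (for example with /usr/share/logstash/bin/logstash-keystore --path.settings /etc/logstash create) would likely be needed before the ssl setting can take effect.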

master_not_discovered_exception - Error when setting up multi node cluster

I tried configuring a multi-node cluster using AWS EC2 instances. I created 3 instances and made one a master node and the other two data nodes. When I hit the public IP of the master node I got a response, which means it's working, but when I hit the public IPs of the data nodes I also got a response. So through the security groups I set inbound rules so that the data nodes can only be accessed by the master node [ port 9200 - 9300, source publicIPofMaster/32 ]. After this change to the security groups I still get a response when I hit the public IP of the master node ( publicIP:9200 ), but I get an error when I hit publicIP:9200/_cluster/health.
The error is:
{
  "error": {
    "root_cause": [
      {
        "type": "master_not_discovered_exception",
        "reason": null
      }
    ],
    "type": "master_not_discovered_exception",
    "reason": null
  },
  "status": 503
}
How can I fix this? The data nodes should not be accessible from outside, right? They should only be accessible by the master node; that's why I made the change in the security group.
data node-2:
cluster.name: Cluster-Presentation
node.name: node-2
network.host: 172.31.34.104
discovery.seed_hosts: ["172.31.34.104","172.31.40.191","172.31.38.85"]
cluster.initial_master_nodes: ["172.31.38.85"]
xpack.security.enabled: false
data node-1:
cluster.name: Cluster-Presentation
node.name: node-3
network.host: 172.31.40.191
discovery.seed_hosts: ["172.31.34.104","172.31.40.191","172.31.38.85"]
cluster.initial_master_nodes: ["172.31.38.85"]
xpack.security.enabled: false
master node:
cluster.name: Cluster-Presentation
node.name: node-1
network.host: 172.31.38.85
discovery.seed_hosts: ["172.31.34.104","172.31.40.191","172.31.38.85"]
cluster.initial_master_nodes: ["172.31.38.85"]
xpack.security.enabled: false
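Worth noting: the nodes discover each other over the transport port (9300 by default) using the private IPs listed in discovery.seed_hosts, so the security group must allow node-to-node traffic between those private addresses; restricting 9200-9300 to the master's public IP blocks exactly that traffic. A sketch of an inbound rule that lets members of one security group reach each other on the transport port (the sg-0123456789abcdef0 group id is a placeholder):

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 9300 \
  --source-group sg-0123456789abcdef0

With a self-referencing rule like this in place, port 9200 can then be exposed (or not) to the outside independently of the cluster's internal traffic.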

Kibana not able to connect to ES services

I am trying to set up ES with Kibana on AKS and having a bit of an issue. The setup worked before the Security plugin was needed. Now I need the Security plugin enabled, but I am not able to get Kibana connected. Do you have any idea? I tried adding and disabling settings, and calling with/without https; it seems it is all the same. Thanks
Deploying with helm:
ES: image: docker.elastic.co/elasticsearch/elasticsearch imageTag: 7.16.2
Kibana: image: "docker.elastic.co/kibana/kibana" imageTag: "7.10.2"
My full configs:
elasticsearch.yml
xpack.security.enabled: "true"
xpack.security.transport.ssl.enabled: "true"
xpack.security.transport.ssl.supported_protocols: "TLSv1.2"
xpack.security.transport.ssl.client_authentication: "none"
xpack.security.transport.ssl.key: "/usr/share/elasticsearch/config/certkey/apps-com-key.pem"
xpack.security.transport.ssl.certificate: "/usr/share/elasticsearch/config/cert/apps-com-fullchain.pem"
xpack.security.transport.ssl.certificate_authorities: "/usr/share/elasticsearch/config/certs/fullchain-ca.pem"
xpack.security.transport.ssl.verification_mode: "certificate"
xpack.security.http.ssl.enabled: "false"
xpack.security.http.ssl.client_authentication: "none"
xpack.security.http.ssl.key: "/usr/share/elasticsearch/config/certkey/key.pem"
xpack.security.http.ssl.certificate: "/usr/share/elasticsearch/config/cert/fullchain.pem"
xpack.security.http.ssl.certificate_authorities: "/usr/share/elasticsearch/config/certs/fullchain-ca.pem"
kibana.yml
logging.root.level: all
logging.verbose: true
elasticsearch.hosts: ["https://IP:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: ${KIBANA_PASSWORD}
server.ssl:
  enabled: "true"
  key: "/usr/share/kibana/config/certkey/key.pem"
  certificate: "/usr/share/kibana/config/cert/fullchain.pem"
  clientAuthentication: "none"
  supportedProtocols: [ "TLSv1.2" ]
elasticsearch.ssl:
  certificateAuthorities: [ "/usr/share/kibana/config/certs/fullchain-ca.pem" ]
  verificationMode: "certificate"
elasticsearch.requestHeadersWhitelist: [ authorization ]
newsfeed.enabled: "false"
telemetry.enabled: "false"
telemetry.optIn: "false"
The error I receive on the Kibana pod:
{"type":"log","#timestamp":"2022-10-10T13:24:57Z","tags":["error","elasticsearch","data"],"pid":8,"message":"[ConnectionError]: write EPROTO 140676394411840:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:\n

Getting UnavailableShardsException when setting the password for "elastic" user

On AWS EC2, I have deployed an Elasticsearch cluster. I have added the following setting to the "elasticsearch.yml" file in order to create a password for the "elastic" user:
xpack.security.enabled: true
The complete elasticsearch.yml file is:
http.port: 9900
xpack.ml.enabled: false
xpack.security.enabled: true
network.host: [_local_, _site_]
path.data: /data/esdata
path.logs: /data/logs
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
cluster.name: es-cluster
node.name: node1
node.roles: [ "master" ]
cluster.initial_master_nodes:
  - node1
discovery.seed_hosts:
  - 172.**.*.***:9300
Now, I am trying to set the password for the "elastic" user using the command below:
./bin/elasticsearch-reset-password -u elastic
However, the moment I issue this command, the cluster enters red status.
The logs are:
[2022-07-19T04:32:18,733][INFO ][o.e.x.s.a.f.FileUserPasswdStore] [node1] users file [/home/ubuntu/elasticsearch-8.3.2/config/users] changed. updating users...
[2022-07-19T04:32:18,736][INFO ][o.e.x.s.a.f.FileUserRolesStore] [node1] users roles file [/home/ubuntu/elasticsearch-8.3.2/config/users_roles] changed. updating users roles...
[2022-07-19T04:32:51,113][INFO ][o.e.x.s.s.SecurityIndexManager] [node1] security index does not exist, creating [.security-7] with alias [.security]
[2022-07-19T04:32:51,183][INFO ][o.e.c.m.MetadataCreateIndexService] [node1] [.security-7] creating index, cause [api], templates [], shards [1]/[0]
[2022-07-19T04:32:51,196][INFO ][o.e.c.r.a.AllocationService] [node1] current.health="RED" message="Cluster health status changed from [YELLOW] to [RED] (reason: [index [.security-7] created])." previous.health="YELLOW" reason="index [.security-7] created"
[2022-07-19T04:33:28,767][INFO ][o.e.x.s.a.f.FileUserPasswdStore] [node1] users file [/home/ubuntu/elasticsearch-8.3.2/config/users] changed. updating users...
[2022-07-19T04:33:28,769][INFO ][o.e.x.s.a.f.FileUserRolesStore] [node1] users roles file [/home/ubuntu/elasticsearch-8.3.2/config/users_roles] changed. updating users roles...
[2022-07-19T04:34:21,234][WARN ][r.suppressed ] [node1] path: /_security/user/elastic/_password, params: {pretty=, username=elastic}
org.elasticsearch.action.UnavailableShardsException: [.security-7][0] [1] shardIt, [0] active : Timeout waiting for [1m], request: indices:data/write/update
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction.retry(TransportInstanceSingleOperationAction.java:231) [elasticsearch-8.3.2.jar:?]
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction.doStart(TransportInstanceSingleOperationAction.java:181) [elasticsearch-8.3.2.jar:?]
at org.elasticsearch.action.support.single.instance.TransportInstanceSingleOperationAction$AsyncSingleAction$2.onTimeout(TransportInstanceSingleOperationAction.java:254) [elasticsearch-8.3.2.jar:?]
at org.elasticsearch.cluster.ClusterStateObserver$ContextPreservingListener.onTimeout(ClusterStateObserver.java:345) [elasticsearch-8.3.2.jar:?]
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:263) [elasticsearch-8.3.2.jar:?]
at org.elasticsearch.cluster.service.ClusterApplierService$NotifyTimeout.run(ClusterApplierService.java:649) [elasticsearch-8.3.2.jar:?]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:710) [elasticsearch-8.3.2.jar:?]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
Connection to ec2-52-66-242-245.ap-south-1.compute.amazonaws.com closed by remote host.
Connection to ec2-52-66-242-245.ap-south-1.compute.amazonaws.com closed.
Can anyone please help me resolve this issue?
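One detail in the config above may explain this: node1 is master-only (node.roles: [ "master" ]), and a master-only node holds no shards, so when security creates the .security-7 index (visible in the log) its single shard has nowhere to be allocated, the cluster goes red, and the password write times out after 1m. A sketch of the change, assuming this node is meant to hold data as well:

node.roles: [ "master", "data" ]

With a data role available (on this node or another node in the cluster), the .security-7 shard can be assigned and elasticsearch-reset-password should be able to complete.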

Missing authentication credentials for REST request when using sniffing when Kibana starts

I just upgraded ELK from 7.1.0 to 7.5.0 and Kibana fails to start with
{"type":"log","#timestamp":"2020-01-22T17:27:54Z","tags":["error","elasticsearch","data"],"pid":23107,"message":"Request error, retrying\nGET http://localhost:9200/_xpack => socket hang up"}
{"type":"log","#timestamp":"2020-01-22T17:27:55Z","tags":["info","plugins-system"],"pid":23107,"message":"Starting [8] plugins: [security,licensing,code,timelion,features,spaces,translations,data]"}
{"type":"log","#timestamp":"2020-01-22T17:27:55Z","tags":["warning","plugins","licensing"],"pid":23107,"message":"License information could not be obtained from Elasticsearch for the [data] cluster. [security_exception] missing authentication credentials for REST request [/_xpack], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } } :: {\"path\":\"/_xpack\",\"statusCode\":401,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"missing authentication credentials for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}}],\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"missing authentication credentials for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}},\\\"status\\\":401}\",\"wwwAuthenticateDirective\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"}"}
with the following two options enabled:
elasticsearch.sniffOnStart: true
elasticsearch.sniffOnConnectionFault: true
Any idea what I am doing wrong?
The complete Kibana config follows:
server.port: 5601
server.host: 0.0.0.0
server.name: kibana
kibana.index: ".kibana"
kibana.defaultAppId: "discover"
elasticsearch.hosts: ["http://node1.test.com:9200", "http://node2.test.com:9200", "http://node3.test.com:9200", "http://node4.test.com:9200", "http://node5.test.com:9200"]
elasticsearch.pingTimeout: 1500
elasticsearch.requestTimeout: 30000
elasticsearch.logQueries: true
elasticsearch.sniffOnStart: true
elasticsearch.sniffOnConnectionFault: true
elasticsearch.username: "kibana"
elasticsearch.password: "XXX"
logging.dest: /var/log/kibana.log
logging.verbose: false
xpack.security.enabled: true
xpack.monitoring.enabled: true
xpack.monitoring.ui.enabled: true
xpack.security.encryptionKey: "XXX"
If I remove elasticsearch.sniffOnStart: true, all is well.
Setting "xpack.security.enabled: false" worked for the 6.2.x version as well.
