How to curl Elasticsearch deployed with the Kubernetes operator without passing -k over TLS? - elasticsearch

Hello Elasticsearch Champs,
I deployed ECK from https://www.elastic.co/guide/en/cloud-on-k8s/current/k8s-quickstart.html on Kubernetes.
I am unable to curl Elasticsearch without the -k option!
curl --cacert ca-bundle.crt -u "elastic:9sg8q9h4tncvdl2srqiptn9z" "https://10.4.1.14:9200"
curl: (60) Peer's certificate issuer has been marked as not trusted by the user.
More details here: http://curl.haxx.se/docs/sslcerts.html
curl performs SSL certificate verification by default, using a "bundle"
of Certificate Authority (CA) public keys (CA certs). If the default
bundle file isn't adequate, you can specify an alternate file
using the --cacert option.
If this HTTPS server uses a certificate signed by a CA represented in
the bundle, the certificate verification probably failed due to a
problem with the certificate (it might be expired, or the name might
not match the domain name in the URL).
If you'd like to turn off curl's verification of the certificate, use
the -k (or --insecure) option.
[root@quickstart-es-default-0 certs]#

If you are unable to connect to Elasticsearch using curl without the -k option, i.e.
curl --cacert public-http.crt -u "elastic:9sg8q9h4tncvdl2srq9ptn9z" "https://35.193.165.24:9200"
Note: in the above, public-http.crt is the ca.crt (CA) from the <clustername>-es-http-certs-public secret, e.g. quickstart-es-http-certs-public (kubectl get secrets --all-namespaces); the commands shown after the manifest below extract it.
This can be resolved by adding the Kubernetes cluster IP / load balancer IP / server hostname (DNS) from which you are running the curl command to the kind: Elasticsearch manifest under subjectAltNames, as shown below:
cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.6.2
  nodeSets:
  - name: default
    count: 1
    config:
      node.master: true
      node.data: true
      node.ingest: true
      node.store.allow_mmap: false
  http:
    tls:
      selfSignedCertificate:
        subjectAltNames:
        - ip: 10.4.0.16
        - dns: logstash
        - ip: 10.4.0.14
        - ip: 10.8.0.117
        - ip: 35.193.165.24
        - dns: localhost
EOF
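For reference, the CA certificate and the elastic user's password used in the curl commands above can be read straight from the operator-managed secrets before calling curl. This is a sketch based on the quickstart secret names; adjust the cluster name and namespace for your deployment:
# CA certificate served on the HTTP layer (the public-http.crt referred to above)
kubectl get secret quickstart-es-http-certs-public -o go-template='{{index .data "ca.crt" | base64decode}}' > public-http.crt
# password of the built-in elastic user
kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'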
==========================================================================
root@logstash:/certificates/espod# curl --cacert public-http.crt -u "elastic:9sg8q9h4tncvdl2srq9ptn9z" "https://35.193.165.24:9200"
{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "H-ftq8B7Q6e4Swuq6mfDew",
  "version" : {
    "number" : "7.6.2",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "ef48eb35cf30adf4db14086e8aabd07ef6fb113f",
    "build_date" : "2020-03-26T06:34:37.794943Z",
    "build_snapshot" : false,
    "lucene_version" : "8.4.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
More references:
https://www.elastic.co/guide/en/logstash/7.7/ls-security.html#ls-http-ssl
https://www.elastic.co/guide/en/cloud-on-k8s/master/k8s-tls-certificates.html

Related

Logstash cannot connect to the elastic search cluster with Xpack enabled

The difficulty I encountered was that Logstash could not connect to the Elasticsearch cluster with xpack enabled.
This is an Elasticsearch cluster composed of at least four nodes, with xpack enabled. I set a new certificate for the transport.ssl of this cluster and applied it in the configuration file.
Before xpack was enabled in the cluster, an index named "jsonfile-daemonset-syslog-2022.12.21" was created (shown in a screenshot not reproduced here). After xpack was enabled, new logs cannot be sent to the cluster from Logstash and new indexes cannot be created.
root@esnode-1:/etc/elasticsearch# cat /etc/hosts
127.0.0.1 localhost
172.16.20.66 esnode-1
172.16.20.60 esnode-2
172.16.20.105 esnode-3
172.16.100.28 esnode-4
172.16.20.87 logstash
root@esnode-1:/etc/elasticsearch# cat elasticsearch.yml |grep -v '^$' | grep -v '^#'
cluster.name: will-cluster1
node.name: esnode-1
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 172.16.20.66
http.port: 9200
discovery.seed_hosts: ["esnode-1","esnode-2","esnode-3","esnode-4"]
cluster.initial_master_nodes: ["esnode-1","esnode-2","esnode-3","esnode-4"]
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl:
  enabled: true
  keystore.path: certs/http.p12
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/elastic-certificates.p12
  truststore.path: certs/elastic-certificates.p12
http.host: 0.0.0.0
$ /usr/share/elasticsearch/bin/elasticsearch-certutil ca (no set password)
$ /usr/share/elasticsearch/bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12 (set password: 123456)
Note:
user: elastic
password: ednFPXyz357##
user: kibana_system
password: kibana357xy#
user: logstash_system
password: logstashXyZ235#
root@esnode-1:/etc/elasticsearch# curl --cacert /etc/elasticsearch/certs/http_ca.crt -u elastic https://localhost:9200
Enter host password for user 'elastic':
{
  "name" : "esnode-1",
  "cluster_name" : "will-cluster1",
  "cluster_uuid" : "5aT8AVA5STity523pJhvGQ",
  "version" : {
    "number" : "8.5.3",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "4ed5ee9afac63de92ec98f404ccbed7d3ba9584e",
    "build_date" : "2022-12-05T18:22:22.226119656Z",
    "build_snapshot" : false,
    "lucene_version" : "9.4.2",
    "minimum_wire_compatibility_version" : "7.17.0",
    "minimum_index_compatibility_version" : "7.0.0"
  },
  "tagline" : "You Know, for Search"
}
root@logstash:/etc/logstash# cat /etc/logstash/logstash.yml |grep -v '^$' | grep -v '^#'
path.data: /var/lib/logstash
path.logs: /var/log/logstash
root@logstash:/etc/logstash#
root@logstash:/etc/logstash# cat /etc/logstash/conf.d/logsatsh-daemonset-jsonfile-kafka-to-es.conf
input {
  kafka {
    bootstrap_servers => "172.16.1.67:9092,172.16.1.37:9092,172.16.1.203:9092"
    topics => ["jsonfile-log-topic"]
    codec => "json"
  }
}
output {
  stdout { codec => rubydebug }
}
output {
  #if [fields][type] == "app1-access-log" {
  if [type] == "jsonfile-daemonset-applog" {
    elasticsearch {
      hosts => ["https://172.16.20.66:9200","https://172.16.20.60:9200","https://172.16.20.105:9200","https://172.16.100.28:9200"]
      index => "jsonfile-daemonset-applog-%{+YYYY.MM.dd}"
      truststore => "/etc/logstash/elastic-certificates.p12"
      user => "logstash_system"
      password => "logstashXyZ235#"
    }
  }
  if [type] == "jsonfile-daemonset-syslog" {
    elasticsearch {
      hosts => ["https://172.16.20.66:9200","https://172.16.20.60:9200","https://172.16.20.105:9200","https://172.16.100.28:9200"]
      index => "jsonfile-daemonset-syslog-%{+YYYY.MM.dd}"
      truststore => "/etc/logstash/elastic-certificates.p12"
      user => "logstash_system"
      password => "logstashXyZ235#"
    }
  }
}
The error messages from starting Logstash are posted here:
root@logstash:/etc/logstash/conf.d# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logsatsh-daemonset-jsonfile-kafka-to-es.conf --path.settings=/etc/logstash
Using bundled JDK: /usr/share/logstash/jdk
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2022-12-24T12:09:04,135][INFO ][logstash.runner ] Log4j configuration path used is: /etc/logstash/log4j2.properties
[2022-12-24T12:09:04,143][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"8.5.3", "jruby.version"=>"jruby 9.3.9.0 (2.6.8) 2022-10-24 537cd1f8bc OpenJDK 64-Bit Server VM 17.0.5+8 on 17.0.5+8 +indy +jit [x86_64-linux]"}
[2022-12-24T12:09:04,152][INFO ][logstash.runner ] JVM bootstrap flags: [-Xms1g, -Xmx1g, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djruby.compile.invokedynamic=true, -Djruby.jit.threshold=0, -XX:+HeapDumpOnOutOfMemoryError, -Djava.security.egd=file:/dev/urandom, -Dlog4j2.isThreadContextMapInheritable=true, -Djruby.regexp.interruptible=true, -Djdk.io.File.enableADS=true, --add-exports=jdk.compiler/com.sun.tools.javac.api=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.file=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.parser=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.tree=ALL-UNNAMED, --add-exports=jdk.compiler/com.sun.tools.javac.util=ALL-UNNAMED, --add-opens=java.base/java.security=ALL-UNNAMED, --add-opens=java.base/java.io=ALL-UNNAMED, --add-opens=java.base/java.nio.channels=ALL-UNNAMED, --add-opens=java.base/sun.nio.ch=ALL-UNNAMED, --add-opens=java.management/sun.management=ALL-UNNAMED]
[2022-12-24T12:09:04,702][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2022-12-24T12:09:06,947][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"Java::OrgLogstashSecretStore::SecretStoreException::LoadException", :message=>"Found a file at /etc/logstash/logstash.keystore, but it is not a valid Logstash keystore.", :backtrace=>["org.logstash.secret.store.backend.JavaKeyStore.load(JavaKeyStore.java:294)", "org.logstash.secret.store.backend.JavaKeyStore.load(JavaKeyStore.java:77)", "org.logstash.secret.store.SecretStoreFactory.doIt(SecretStoreFactory.java:129)", "org.logstash.secret.store.SecretStoreFactory.load(SecretStoreFactory.java:115)", "org.logstash.secret.store.SecretStoreExt.getIfExists(SecretStoreExt.java:60)", "org.logstash.execution.AbstractPipelineExt.getSecretStore(AbstractPipelineExt.java:582)", "org.logstash.execution.AbstractPipelineExt.initialize(AbstractPipelineExt.java:181)", "org.logstash.execution.JavaBasePipelineExt.initialize(JavaBasePipelineExt.java:72)", "org.logstash.execution.JavaBasePipelineExt$INVOKER$i$1$0$initialize.call(JavaBasePipelineExt$INVOKER$i$1$0$initialize.gen)", "org.jruby.internal.runtime.methods.JavaMethod$JavaMethodN.call(JavaMethod.java:846)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuper(IRRuntimeHelpers.java:1229)", "org.jruby.ir.runtime.IRRuntimeHelpers.instanceSuperSplatArgs(IRRuntimeHelpers.java:1202)", "org.jruby.ir.targets.indy.InstanceSuperInvokeSite.invoke(InstanceSuperInvokeSite.java:29)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.RUBY$method$initialize$0(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:48)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:139)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:112)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:329)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:87)", "org.jruby.RubyClass.newInstance(RubyClass.java:911)", "org.jruby.RubyClass$INVOKER$i$newInstance.call(RubyClass$INVOKER$i$newInstance.gen)", "org.jruby.ir.targets.indy.InvokeSite.invoke(InvokeSite.java:208)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:50)", "usr.share.logstash.logstash_minus_core.lib.logstash.pipeline_action.create.RUBY$method$execute$0$__VARARGS__(/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:49)", "org.jruby.internal.runtime.methods.CompiledIRMethod.call(CompiledIRMethod.java:139)", "org.jruby.internal.runtime.methods.MixedModeIRMethod.call(MixedModeIRMethod.java:112)", "org.jruby.ir.targets.indy.InvokeSite.invoke(InvokeSite.java:208)", "usr.share.logstash.logstash_minus_core.lib.logstash.agent.RUBY$block$converge_state$2(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:386)", "org.jruby.runtime.CompiledIRBlockBody.callDirect(CompiledIRBlockBody.java:141)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:64)", "org.jruby.runtime.IRBlockBody.call(IRBlockBody.java:58)", "org.jruby.runtime.Block.call(Block.java:143)", "org.jruby.RubyProc.call(RubyProc.java:309)", "org.jruby.internal.runtime.RubyRunnable.run(RubyRunnable.java:107)", "java.base/java.lang.Thread.run(Thread.java:833)"]}
[2022-12-24T12:09:07,088][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600, :ssl_enabled=>false}
[2022-12-24T12:09:07,161][INFO ][logstash.runner ] Logstash shut down.
[2022-12-24T12:09:07,178][FATAL][org.logstash.Logstash ] Logstash stopped processing because of an error: (SystemExit) exit
org.jruby.exceptions.SystemExit: (SystemExit) exit
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:790) ~[jruby.jar:?]
at org.jruby.RubyKernel.exit(org/jruby/RubyKernel.java:753) ~[jruby.jar:?]
at usr.share.logstash.lib.bootstrap.environment.<main>(/usr/share/logstash/lib/bootstrap/environment.rb:91) ~[?:?]
@Wei Yu, it seems you are missing:
ssl => true
in the elasticsearch output of your Logstash pipeline configuration.
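For illustration only (not verified against this cluster), an elasticsearch output with TLS explicitly enabled could look like the sketch below. The truststore password is the one set when the certificate was generated earlier, and it is worth confirming that elastic-certificates.p12 really contains the CA that signed the HTTP certificate (http.p12); if not, point cacert at that CA file instead. Also note that logstash_system is a monitoring-only built-in user and normally cannot create or write indices, so the user below is a hypothetical writer account:
output {
  if [type] == "jsonfile-daemonset-syslog" {
    elasticsearch {
      hosts => ["https://172.16.20.66:9200"]
      index => "jsonfile-daemonset-syslog-%{+YYYY.MM.dd}"
      ssl => true
      ssl_certificate_verification => true
      truststore => "/etc/logstash/elastic-certificates.p12"
      truststore_password => "123456"
      user => "logstash_writer"      # hypothetical user with create_index/write privileges
      password => "changeme"
    }
  }
}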

How to health check elasticsearch cluster from outside

I want to write a script to health-check our Elasticsearch cluster (deployed on Kubernetes).
I go inside the pod that runs the Elasticsearch master container and run the commands below:
[elasticsearch@elasticsearch-master-0 ~]$ curl localhost:9200/frontend-dev-2021.12.03/_count
{"count":76,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0}}
[elasticsearch@elasticsearch-master-0 ~]$ curl localhost:9200/_cluster/health?pretty
{
  "cluster_name" : "elasticsearch",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 617,
  "active_shards" : 1234,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
As you can see, both the index count and the health check command succeed.
But when I run these commands from outside (I gave the Elasticsearch cluster a public endpoint):
root@ip-192-168-1-1:~# curl --user username:password esdev.example.com/frontend-dev-2021.12.03/_count
{"count":76,"_shards":{"total":1,"successful":1,"skipped":0,"failed":0}}
root@ip-192-168-1-1:~# curl --user username:password esdev.example.com/_cluster/health
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>nginx</center>
</body>
</html>
Only the index count command succeeds; the health check command always produces a 403 Forbidden error.
I have searched and read through the official Elasticsearch docs, but even the official docs only run these commands from inside the Elasticsearch cluster or via Kibana (an http service inside the k8s cluster).
How can I health-check Elasticsearch from outside? Or is this impossible because of some mechanism of the Elasticsearch cluster?
Notes: I placed a basic-auth nginx (username:password) in front of Elasticsearch, and this nginx has an IngressRoute from traefik-v2:
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  annotations:
    meta.helm.sh/release-name: basic-auth-nginx-dev
    meta.helm.sh/release-namespace: dev
  creationTimestamp: "2021-01-23T08:12:55Z"
  generation: 2
  labels:
    app: basic-auth-nginx-dev
    app.kubernetes.io/managed-by: Helm
  managedFields:
  - apiVersion: traefik.containo.us/v1alpha1
    fieldsType: FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .: {}
          f:meta.helm.sh/release-name: {}
          f:meta.helm.sh/release-namespace: {}
        f:labels:
          .: {}
          f:app: {}
          f:app.kubernetes.io/managed-by: {}
      f:spec:
        .: {}
        f:entryPoints: {}
        f:routes: {}
    manager: Go-http-client
    operation: Update
    time: "2021-01-23T08:12:55Z"
  name: basic-auth-nginx-dev-web
  namespace: dev
  resourceVersion: "103562796"
  selfLink: /apis/traefik.containo.us/v1alpha1/namespaces/dev/ingressroutes/basic-auth-nginx-dev-web
  uid: 5832b501-b2d7-4600-93b6-b3c72c420115
spec:
  entryPoints:
  - web
  routes:
  - kind: Rule
    match: Host(`esdev.example.com`) && PathPrefix(`/`)
    priority: 1
    services:
    - kind: Service
      name: basic-auth-nginx-dev
      port: 80
Could you please show us your nginx config?
I think the problem comes from your nginx, because the output you show is nginx returning the 403, not Elasticsearch.
Could you please try another command starting with _, like _template or something similar; there is a chance your nginx blocks access to paths starting with the _ character.
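No nginx config was posted, so purely as a hypothetical illustration, a rule like the following would produce exactly the behaviour described (any path starting with _ is rejected by nginx itself, while ordinary index paths are proxied through):
# hypothetical nginx snippet -- not the asker's actual config
location ~ ^/_ {
    return 403;                          # nginx answers 403 before Elasticsearch is ever reached
}
location / {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
    proxy_pass http://elasticsearch:9200;
}
If that is the case, adding an exact-match location = /_cluster/health block that proxies to Elasticsearch would let the health check through while keeping other underscore paths blocked.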

failed to authenticate user [elastic]

I had the ELK stack working perfectly before adding these two lines to elasticsearch.yml:
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: kbn-version, Origin, X-Requested-With, Content-Type, Accept, Engaged-Auth-Token
After restarting Elasticsearch and Kibana I got the error message below for both the kibana and elastic users:
[INFO ][o.e.x.s.a.AuthenticationService] [myserver] Authentication of [kibana] was terminated by realm [reserved] - failed to authenticate user [kibana]
The problem still occurs even after deleting the added lines from elasticsearch.yml.
my initial elasticsearch.yml:
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
# Set the bind address to a specific IP (IPv4 or IPv6):
## IP
network.host: 10.xx.xx.xx
http.port: 9200
xpack.security.enabled: true
xpack.watcher.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/elastic-certificates.p12
repositories.url.allowed_urls: "http://10.xx.xx.xx/home/User"
http.cors.enabled: true
http.cors.allow-origin: "*"
I saw some forums talking about losing the .security index (when restarting Elasticsearch).
Below is the cluster state via a curl request:
[root@myserver elasticsearch]# curl -XGET 'http://10.x.x.x:9200/_cluster/state?pretty'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "missing authentication credentials for REST request [/_cluster/state?pretty]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "missing authentication credentials for REST request [/_cluster/state?pretty]",
    "header" : {
      "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status" : 401
}
Do you have any idea about this issue?
Thanks in advance.
I suspect you have other issues, but to get a response with curl when xpack.security is enabled, you also have to pass authentication details like this:
curl -XGET --user elastic:changeme 'http://10.x.x.x:9200/_cluster/state?pretty'
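If the worry is that the .security index was lost on restart, its presence can be checked the same way (again with credentials; the host and password here are placeholders):
curl -XGET --user elastic:changeme 'http://10.x.x.x:9200/_cat/indices/.security*?v'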

Elasticsearch with xpack security fails

I am trying to set up a simple ELK stack using Docker. While xpack security is disabled it starts fine and I can access the Kibana interface. If xpack security is enabled I get a "Kibana server is not ready yet" error from the Kibana interface. This error is most likely caused by this Elasticsearch error:
{"type": "server", "timestamp": "2020-08-03T15:35:10,134Z", "level": "INFO", "component": "o.e.c.r.a.AllocationService", "cluster.name": "elastic-cluster", "node.name": "elasticsearch", "message": "Cluster health status changed from [RED] to [GREEN] (reason: [shards started [[.monitoring-es-7-2020.08.03][0]]]).", "cluster.uuid": "Vdk1-_4sSvuqlEspQcF-6A", "node.id": "PZMUpi_JSJS6IZ7tv6H22g" }
{"type": "server", "timestamp": "2020-08-03T15:35:10,560Z", "level": "ERROR", "component": "o.e.x.s.a.e.NativeUsersStore", "cluster.name": "elastic-cluster", "node.name": "elasticsearch", "message": "security index is unavailable. short circuiting retrieval of user [elasticadmin]", "cluster.uuid": "Vdk1-_4sSvuqlEspQcF-6A", "node.id": "PZMUpi_JSJS6IZ7tv6H22g" }
This is my elasticsearch.yml:
cluster.name: elastic-cluster
node.name: elasticsearch
network.host: 0.0.0.0
transport.host: 0.0.0.0
## Cluster Settings
discovery.seed_hosts: elasticsearch
cluster.initial_master_nodes: elasticsearch
## License
xpack.license.self_generated.type: basic
# Security
xpack.security.enabled: true
## - ssl
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.key: certs/elasticsearch.key
xpack.security.transport.ssl.certificate: certs/elasticsearch.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
## - http
#xpack.security.http.ssl.enabled: true
#xpack.security.http.ssl.key: certs/elasticsearch.key
#xpack.security.http.ssl.certificate: certs/elasticsearch.crt
#xpack.security.http.ssl.certificate_authorities: certs/ca.crt
#xpack.security.http.ssl.client_authentication: optional
# Monitoring
xpack.monitoring.enabled: true
xpack.monitoring.collection.enabled: true
This is the error log from Kibana:
{"type":"log","#timestamp":"2020-08-03T15:42:22Z","tags":["warning","plugins","licensing"],"pid":6,"
message":"License information could not be obtained from Elasticsearch due to [security_exception] unable to authenticate user [elasticadmin] for REST request [/_xpack], with { header={ WWW-Authenticate=\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\" } } :: {\"path\":\"/_xpack\",\"statusCode\":401,\"response\":\"{\\\"error\\\":{\\\"root_cause\\\":[{\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [elasticadmin] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}}],\\\"type\\\":\\\"security_exception\\\",\\\"reason\\\":\\\"unable to authenticate user [elasticadmin] for REST request [/_xpack]\\\",\\\"header\\\":{\\\"WWW-Authenticate\\\":\\\"Basic realm=\\\\\\\"security\\\\\\\" charset=\\\\\\\"UTF-8\\\\\\\"\\\"}},\\\"status\\\":401}\",\"wwwAuthenticateDirective\":\"Basic realm=\\\"security\\\" charset=\\\"UTF-8\\\"\"} error"}
Basic curl request:
curl -H "Authorization: Basic ZWxhc3RpY2FkbWluOjEyMzQ1Njc4OQ==" -XGET "http://localhost:9200/_cat/nodes?v&pretty"
{
  "error" : {
    "root_cause" : [
      {
        "type" : "security_exception",
        "reason" : "unable to authenticate user [elasticadmin] for REST request [/_cat/nodes?v&pretty]",
        "header" : {
          "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
        }
      }
    ],
    "type" : "security_exception",
    "reason" : "unable to authenticate user [elasticadmin] for REST request [/_cat/nodes?v&pretty]",
    "header" : {
      "WWW-Authenticate" : "Basic realm=\"security\" charset=\"UTF-8\""
    }
  },
  "status" : 401
}
Another Auth request:
docker@docker:~$ curl -H "Authorization: Basic ZWxhc3RpY2FkbWluOjEyMzQ1Njc4OQ" -XGET "http://localhost:9200/_security/_authenticate"
{"error":{"root_cause":[{"type":"security_exception","reason":"unable to authenticate user [elasticadmin] for REST request [/_security/_authenticate]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}}],"type":"security_exception","reason":"unable to authenticate user [elasticadmin] for REST request [/_security/_authenticate]","header":{"WWW-Authenticate":"Basic realm=\"security\" charset=\"UTF-8\""}},"status":401}
Docker-Compose:
secrets:
  elasticsearch.keystore:
    file: ${ELK_DATA}/secrets/keystore/elasticsearch.keystore
  elastic.ca:
    file: ${ELK_DATA}/secrets/certs/ca/ca.crt
  elasticsearch.certificate:
    file: ${ELK_DATA}/secrets/certs/elasticsearch/elasticsearch.crt
  elasticsearch.key:
    file: ${ELK_DATA}/secrets/certs/elasticsearch/elasticsearch.key
  kibana.certificate:
    file: ${ELK_DATA}/secrets/certs/kibana/kibana.crt
  kibana.key:
    file: ${ELK_DATA}/secrets/certs/kibana/kibana.key

services:
  ####################################################################
  ############################# ELK ##################################
  ####################################################################
  elasticsearch:
    container_name: elasticsearch
    image: docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
    restart: unless-stopped
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTIC_CLUSTER_NAME: ${ELASTIC_CLUSTER_NAME}
      ELASTIC_NODE_NAME: ${ELASTIC_NODE_NAME}
      ELASTIC_INIT_MASTER_NODE: ${ELASTIC_INIT_MASTER_NODE}
      ELASTIC_DISCOVERY_SEEDS: ${ELASTIC_DISCOVERY_SEEDS}
      ES_JAVA_OPTS: -Xmx${ELASTICSEARCH_HEAP} -Xms${ELASTICSEARCH_HEAP} -Des.enforce.bootstrap.checks=true
      bootstrap.memory_lock: "true"
    volumes:
      - ${ELK_DATA}/elasticsearch/data:/usr/share/elasticsearch/data
      - ${ELK_DATA}/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml
      - ${ELK_DATA}/elasticsearch/config/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties
    secrets:
      - source: elasticsearch.keystore
        target: /usr/share/elasticsearch/config/elasticsearch.keystore
      - source: elastic.ca
        target: /usr/share/elasticsearch/config/certs/ca.crt
      - source: elasticsearch.certificate
        target: /usr/share/elasticsearch/config/certs/elasticsearch.crt
      - source: elasticsearch.key
        target: /usr/share/elasticsearch/config/certs/elasticsearch.key
    ports:
      - 9200:9200
      - 9300:9300
    ulimits:
      memlock:
        soft: -1
        hard: -1
      nofile:
        soft: 200000
        hard: 200000
    networks:
      - traefik_proxy

  logstash:
    container_name: logstash
    image: docker.elastic.co/logstash/logstash:${ELK_VERSION}
    restart: unless-stopped
    volumes:
      - ${ELK_DATA}/logstash/config/logstash.yml:/usr/share/logstash/config/logstash.yml
      - ${ELK_DATA}/logstash/config/pipelines.yml:/usr/share/logstash/config/pipelines.yml
      - ${ELK_DATA}/logstash/pipeline:/usr/share/logstash/pipeline
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTICSEARCH_HOST_PORT: ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}
      LS_JAVA_OPTS: "-Xmx${LOGSTASH_HEAP} -Xms${LOGSTASH_HEAP}"
    ports:
      - 5044:5044
      - 9600:9600
    networks:
      - traefik_proxy

  kibana:
    container_name: kibana
    image: docker.elastic.co/kibana/kibana:${ELK_VERSION}
    restart: unless-stopped
    volumes:
      - ${ELK_DATA}/kibana/config:/usr/share/kibana/config
    environment:
      ELASTIC_USERNAME: ${ELASTIC_USERNAME}
      ELASTIC_PASSWORD: ${ELASTIC_PASSWORD}
      ELASTICSEARCH_HOST_PORT: ${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}
    secrets:
      - source: elastic.ca
        target: /certs/ca.crt
      - source: kibana.certificate
        target: /certs/kibana.crt
      - source: kibana.key
        target: /certs/kibana.key
    ports:
      - 5601:5601
    networks:
      - traefik_proxy
Where should I start looking to find the source of this issue?
Thanks for any help!
When you enable x-pack, Elasticsearch starts up, but it seems your Kibana is not getting authenticated. Please see the part of your error message below, which explains this:
elasticadmin user is not authenticated
Check this user and make sure you are passing the correct authentication details while accessing Elasticsearch. You need to pass the username and password using the basic authentication mechanism.
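As a quick sanity check on that point: the Authorization header used in the curl examples above decodes to elasticadmin:123456789, and the same request can be made with curl's -u flag so a mis-built header can be ruled out:
echo 'ZWxhc3RpY2FkbWluOjEyMzQ1Njc4OQ==' | base64 -d    # prints elasticadmin:123456789
curl -u elasticadmin:123456789 -XGET "http://localhost:9200/_security/_authenticate?pretty"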
I had the same issue, but I solved it:
Step 1
Configure the kibana service in your docker-compose like this:
kibana:
  build: kibana
  container_name: kibana
  ports:
    - 5601:5601
  volumes:
    - ./kibana/kibana.yml:/usr/share/kibana/config/kibana.yml
  networks:
    backend:
      aliases:
        - "kibana"
Step 2
My kibana.yml file contains:
...
elasticsearch.username: "kibana"
elasticsearch.password: "mypwd"
...
and my Dockerfile is:
FROM docker.elastic.co/kibana/kibana:7.10.2
COPY kibana.yml /usr/share/kibana/config/kibana.yml
USER root
RUN chown root:kibana /usr/share/kibana/config/kibana.yml
USER kibana
I got this issue when the data folder of ElasticSearch was deleted and re-initialized from scratch afterwards. The point is that the built-in users were not initialized.
As soon as I initialized the built-in users the error disappeared and the system worked again.
bin/elasticsearch-setup-passwords interactive|auto [-u "https://<host_name>:9200"]
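In a docker-compose setup like the one above, that means running the tool inside the running container, for example (a sketch; the container name comes from the compose file):
docker exec -it elasticsearch bin/elasticsearch-setup-passwords auto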

SpringData Elasticsearch NoNodeAvailableException

I am using Spring Data to connect my application to a local Elasticsearch instance. When I do a regular curl to get ES info, it works fine, but I am unable to connect to it from the Spring Boot application.
Elasticsearch local version: ./elasticsearch -V => Version: 7.6.0
Spring Data Elasticsearch version: 3.1.11
> curl -XGET 'http://localhost:9200/_cluster/state?pretty'
{
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "1_8HMIK5QDug_xH80VZLgQ",
  "version" : 54,
  "state_uuid" : "YEe1FSwfRUuw0uw-T69fJQ",
  "master_node" : "Nbktx7KrREetbyfL7v0Fog",
  "blocks" : { },
  "nodes" : {
    "Nbktx7KrREetbyfL7v0Fog" : {
      "name" : "k***-macOS",
      "ephemeral_id" : "pqMw40oPTUmBoHsyTAz9cg",
      "transport_address" : "127.0.0.1:9301",
      "attributes" : {
        "ml.machine_memory" : "17179869184",
        "xpack.installed" : "true",
        "ml.max_open_jobs" : "20"
      }
    }
  },
#Value("$ELASTIC_HOST")
private String EsHost;
#Value("$ELASTIC_PORT")
private String EsPort;
#Bean
public ElasticsearchOperations elasticsearchTemplate() throws UnknownHostException {
return new ElasticsearchTemplate(elasticsearchClient());
}
#Bean
public Client elasticsearchClient() throws UnknownHostException {
Settings settings = Settings.builder()
.put("client.transport.sniff", true).build();
TransportClient client = new PreBuiltTransportClient(settings);
client.addTransportAddress( new TransportAddress(InetAddress.getByName(EsHost), Integer.valueOf(EsPort));
return client;
}
Tried all the above ways to set the host and port (also tried with 9300) but still no luck. Also, my elasticsearch.yml is the default file; I did not add any explicit host or ports.
Docker-compose
version: '3'
services:
elastic:
restart: always
image: docker.elastic.co/elasticsearch/elasticsearch:6.2.2
environment:
- cluster.name=elasticsearch
- node.name=es01
- discovery.type=single-node
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
ulimits:
memlock:
soft: -1
hard: -1
ports:
- "9201:9200"
- "9301:9300"
db:
restart: always
image: postgres
ports:
- "5432:5432"
environment:
POSTGRES_PASSWORD: 'xxx'
POSTGRES_USER: 'xx'
POSTGRES_DB: 'xx'
api:
build:
context: .
dockerfile: Dockerfile
ports:
- "8080:8080"
environment:
ENVIRONMENT_NAME: "dev"
REGION_NAME: "local"
POSTGRES_PASSWORD: "xx"
POSTGRES_USER: "xx"
POSTGRES_HOST: "db"
ELASTIC_HOST: "elastic"
ELASTIC_PORT: "9200"
depends_on:
- db
- elastic
ERROR:
"failed to load elasticsearch nodes : org.elasticsearch.client.transport.NoNodeAvailableException: None of the configured nodes are available: [{#transport#-1}{JjFZc4y-RBCYbdELAsgaAQ}{elastic}{172.20.0.2:9200}]"}
It works if I change this to:
environment:
  ENVIRONMENT_NAME: "dev"
  REGION_NAME: "local"
  POSTGRES_PASSWORD: "xxx"
  POSTGRES_USER: "xx"
  POSTGRES_HOST: "db"
  ELASTIC_HOST: "elastic"
  ELASTIC_PORT: "9300" # changed from 9200
client.addTransportAddress(new TransportAddress(InetAddress.getLocalHost(), 9201));
No idea why!
Spring Data Elasticsearch 3.1.11 is built with Elasticsearch client libraries in version 6.2.2. So even if you manage to get a connection to the cluster, the chances are very high that the client and the cluster can't communicate properly.
As for the setup of the connection: you should add the name of the cluster you want to connect to into the settings:
Settings settings = Settings.builder()
        .put("client.transport.sniff", true)
        .put("cluster.name", "elasticsearch")
        .build();
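For completeness, a minimal sketch of the whole client bean with the cluster name wired in, using the ELASTIC_HOST/ELASTIC_PORT values from the compose file above. Remember that the TransportClient speaks the binary transport protocol (container port 9300, mapped to 9301 on the host), not the HTTP port 9200, which is why switching ELASTIC_PORT to 9300 made it work. Class and field names here are illustrative, not from the original code:
import java.net.InetAddress;
import java.net.UnknownHostException;

import org.elasticsearch.client.Client;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.transport.client.PreBuiltTransportClient;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class EsClientConfig {

    @Value("${ELASTIC_HOST}")
    private String esHost;   // "elastic" inside the compose network

    @Value("${ELASTIC_PORT}")
    private int esPort;      // 9300: the transport port, not the HTTP port 9200

    @Bean
    public Client elasticsearchClient() throws UnknownHostException {
        Settings settings = Settings.builder()
                .put("cluster.name", "elasticsearch")  // must match cluster.name in docker-compose
                .put("client.transport.sniff", true)
                .build();
        TransportClient client = new PreBuiltTransportClient(settings);
        client.addTransportAddress(new TransportAddress(InetAddress.getByName(esHost), esPort));
        return client;
    }
}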

Resources