sgadmin.sh failed elasticsearch && searchguard

While trying to execute
./sgadmin.sh -ts truststore.jks -tspass 90f3cbdb3eabe04f815b -ks CN=sgadmin-keystore.jks -kspass a65d2a4fa62d7ed7a4d5 -h host -p 9200 -nhnv -cn eslcl1 -cd ../sgconfig/
I am getting the following error output:
WARNING: JAVA_HOME not set, will use /usr/bin/java
Search Guard Admin v5
WARNING: Seems you want connect to the a HTTP port.
sgadmin connect through the transport port which is normally 9300.
Will connect to host:9200 ... done
### LICENSE NOTICE Search Guard ###
If you use one or more of the following features in production
make sure you have a valid Search Guard license
(See https://floragunn.com/searchguard-validate-license)
* Kibana Multitenancy
* LDAP authentication/authorization
* Active Directory authentication/authorization
* REST Management API
* JSON Web Token (JWT) authentication/authorization
* Kerberos authentication/authorization
* Document- and Fieldlevel Security (DLS/FLS)
* Auditlogging
In case of any doubt mail to <sales@floragunn.com>
###################################
Contacting elasticsearch cluster 'eslcl1' and wait for YELLOW clusterstate ...
Cannot retrieve cluster state due to: None of the configured nodes are available: [{#transport#-1}{6PPXnCNqTt-W5g-0fmeZuQ}{host}{host:9200}]. This is not an error, will keep on trying ...
Root cause: NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{6PPXnCNqTt-W5g-0fmeZuQ}{host}{host:9200}]] (org.elasticsearch.client.transport.NoNodeAvailableException/org.elasticsearch.client.transport.NoNodeAvailableException)
* Try running sgadmin.sh with -icl (but no -cl) and -nhnv (If thats works you need to check your clustername as well as hostnames in your SSL certificates)
* Make also sure that your keystore or cert is a client certificate (not a node certificate) and configured properly in elasticsearch.yml
* If this is not working, try running sgadmin.sh with --diagnose and see diagnose trace log file)
* Add --accept-red-cluster to allow sgadmin to operate on a red cluster.
My configuration in elasticsearch.yml is:
######## Start Search Guard Demo Configuration ########
searchguard.ssl.transport.keystore_filepath: CN=x.x.x.x-keystore.jks
searchguard.ssl.transport.keystore_password: 8a17368ff585a2c3afdc
searchguard.ssl.transport.truststore_filepath: truststore.jks
searchguard.ssl.transport.truststore_password: 90f3cbdb3eabe04f815b
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.http.enabled: false
searchguard.ssl.http.keystore_filepath: CN=x.x.x.x-keystore.jks
searchguard.ssl.http.keystore_password: 8a17368ff585a2c3afdc
searchguard.ssl.http.truststore_filepath: truststore.jks
searchguard.ssl.http.truststore_password: 90f3cbdb3eabe04f815b
searchguard.authcz.admin_dn:
- CN=sgadmin
cluster.name: eslcl1
network.host: x.x.x.x
Is there any configuration that I might need to look into?

You need to connect to port 9300 (the transport protocol), not 9200, which is typically the HTTP/S port. On port 9300 the elasticsearch nodes talk to each other through a binary TCP protocol; on port 9200 the REST API is accessible via HTTP/S.
sgadmin connects to elasticsearch through that binary TCP protocol, so you need to use port 9300.
That is why you get this warning:
WARNING: Seems you want connect to the a HTTP port.
sgadmin connect through the transport port which is normally 9300.
So your command should look like:
./sgadmin.sh -ts truststore.jks -tspass 90f3cbdb3eabe04f815b -ks CN=sgadmin-keystore.jks -kspass a65d2a4fa62d7ed7a4d5 -h host -p 9300 -nhnv -cn eslcl1 -cd ../sgconfig/
(The -nhnv flag disables hostname verification, for when your certificates do not match your hostnames.)
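If you are unsure which port is which, a quick sanity check on the elasticsearch host (assuming netstat is available there) is to list the ports the node is actually listening on; 9300 is the transport port by default:
netstat -ntlp | grep -E ':(9200|9300)'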

Related

Use TLS with the Elasticsearch output in Fluent Bit

I'm trying to send logs to my Elasticsearch pod from a Fluent Bit service on a different VM.
I configured an ingress for Elasticsearch.
I configured Fluent Bit this way:
[OUTPUT]
Name es
Match *
Host <host_ip>
Port 443
#Retry_Limit 1
URI /elastic
tls On
tls.verify Off
but I keep getting the following error:
[2020/10/25 07:34:09] [debug] [out_es] HTTP Status=413 URI=/_bulk
Is it possible to use TLS in the es output? If yes, can you suggest what I configured wrong?
HTTP 413 is the status code for Payload Too Large. Try increasing http.max_content_length in elasticsearch.yml.
Also note that you are using tls.verify Off, which does not make sense long-term. If you have an ingress with a certificate (Let's Encrypt?) it should be OK to set tls.verify On. Otherwise everything looks correct.
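For example, a minimal sketch of that setting (the 200mb value is only an illustration; the elasticsearch default is 100mb):
# elasticsearch.yml
http.max_content_length: 200mb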

Searchguard could not be Initialized with error: Root cause: MasterNotDiscoveredException[null]

I use k8s to deploy Elasticsearch.
Dockerfile:
FROM docker.elastic.co/elasticsearch/elasticsearch:6.3.1
USER elasticsearch
RUN elasticsearch-plugin install --batch analysis-kuromoji
RUN elasticsearch-plugin install --batch org.codelibs:elasticsearch-analysis-kuromoji-neologd:6.3.1
RUN elasticsearch-plugin install --batch com.floragunn:search-guard-6:6.3.1-22.3
RUN mkdir -p /usr/share/elasticsearch/batch/scripts
RUN mkdir -p /usr/share/elasticsearch/batch/logs
COPY searchguard_not_initialized_check_batch.sh /usr/share/elasticsearch/batch/scripts/
RUN rm -f /usr/share/elasticsearch/config/elasticsearch.yml && rm -f /usr/share/elasticsearch/plugins/search-guard-6/sgconfig/* && mkdir -p /usr/share/elasticsearch/data
The file searchguard_not_initialized_check_batch.sh runs:
sh /usr/share/elasticsearch/plugins/search-guard-6/tools/sgadmin.sh -diagnose -cd /usr/share/elasticsearch/plugins/search-guard-6/sgconfig -cn pnt-es.stg -key /usr/share/elasticsearch/config/cert/kirk-key.pem -cert /usr/share/elasticsearch/config/cert/kirk.pem -cacert /usr/share/elasticsearch/config/cert/root-ca.pem -nhnv
elasticsearch.yml:
cluster.name: "pnt-es.stg"
discovery.zen.ping.unicast.hosts: elasticsearch-service
discovery.zen.minimum_master_nodes: {{ elasticsearch_log_minimum_master_nodes }}
network.host: ['_site_', '_local_']
node.name: ${HOSTNAME}
http.port: 9200
transport.tcp.port: 9300
path.data: /usr/share/elasticsearch/data
path.logs: /var/log/elasticsearch
searchguard.ssl.transport.pemcert_filepath: cert/esnode.pem
searchguard.ssl.transport.pemkey_filepath: cert/esnode-key.pem
searchguard.ssl.transport.pemtrustedcas_filepath: cert/root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: cert/esnode.pem
searchguard.ssl.http.pemkey_filepath: cert/esnode-key.pem
searchguard.ssl.http.pemtrustedcas_filepath: cert/root-ca.pem
searchguard.allow_unsafe_democertificates: true
searchguard.allow_default_init_sgindex: true
searchguard.enterprise_modules_enabled: false
searchguard.authcz.admin_dn:
- CN=kirk,OU=client,O=client,L=test,C=de
xpack.security.enabled: false
The error is below. How can I fix it?
When I run the sgadmin command manually, I get the following:
[elasticsearch@elasticsearch-daemonset-4qwcj ~]$ sh /usr/share/elasticsearch/plugins/search-guard-6/tools/sgadmin.sh --diagnose -cd /usr/share/elasticsearch/plugins/search-guard-6/sgconfig -cn pnt-es.stg -key /usr/share/elasticsearch/config/cert/kirk-key.pem -cert /usr/share/elasticsearch/config/cert/kirk.pem -cacert /usr/share/elasticsearch/config/cert/root-ca.pem -nhnv
Search Guard Admin v6
Will connect to localhost:9300 ... done
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by io.netty.util.internal.ReflectionUtil (file:/usr/share/elasticsearch/plugins/search-guard-6/netty-common-4.1.16.Final.jar) to constructor java.nio.DirectByteBuffer(long,int)
WARNING: Please consider reporting this to the maintainers of io.netty.util.internal.ReflectionUtil
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Elasticsearch Version: 6.3.1
Search Guard Version: 6.3.1-22.3
Connected as CN=kirk,OU=client,O=client,L=test,C=de
Diagnostic trace written to: /usr/share/elasticsearch/sgadmin_diag_trace_2019-Oct-03_05-35-51.txt
Contacting elasticsearch cluster 'pnt-es.stg' and wait for YELLOW clusterstate ...
Cannot retrieve cluster state due to: null. This is not an error, will keep on trying ...
Root cause: MasterNotDiscoveredException[null] (org.elasticsearch.discovery.MasterNotDiscoveredException/org.elasticsearch.discovery.MasterNotDiscoveredException)
* Try running sgadmin.sh with -icl (but no -cl) and -nhnv (If that works you need to check your clustername as well as hostnames in your TLS certificates)
* Make sure that your keystore or PEM certificate is a client certificate (not a node certificate) and configured properly in elasticsearch.yml
* If this is not working, try running sgadmin.sh with --diagnose and see diagnose trace log file)
* Add --accept-red-cluster to allow sgadmin to operate on a red cluster.
(The same error block then repeats over and over.)

etcd2 in proxy mode doesn't do anything useful

I have an etcd cluster using TLS for security. I want other machines to use an etcd proxy so that localhost clients don't need to use TLS. The proxy is configured like this:
[Service]
Environment="ETCD_PROXY=on"
Environment="ETCD_INITIAL_CLUSTER=etcd1=https://master1.example.com:2380,etcd2=https://master2.example.com:2380"
Environment="ETCD_PEER_TRUSTED_CA_FILE=/etc/kubernetes/ssl/ca.pem"
Environment="ETCD_PEER_CERT_FILE=/etc/kubernetes/ssl/worker.pem"
Environment="ETCD_PEER_KEY_FILE=/etc/kubernetes/ssl/worker-key.pem"
Environment="ETCD_TRUSTED_CA_FILE=/etc/kubernetes/ssl/ca.pem"
And it works, as far as the first connection goes. But the etcd client does an initial query to discover the full list of servers, and then it performs its real query against one of the servers in that list:
$ etcdctl --debug ls
start to sync cluster using endpoints(http://127.0.0.1:4001,http://127.0.0.1:2379)
cURL Command: curl -X GET http://127.0.0.1:4001/v2/members
got endpoints(https://1.1.1.1:2379,https://1.1.1.2:2379) after sync
Cluster-Endpoints: https://1.1.1.1:2379, https://1.1.1.2:2379
cURL Command: curl -X GET https://1.1.1.1:2379/v2/keys/?quorum=false&recursive=false&sorted=false
cURL Command: curl -X GET https://1.1.1.2:2379/v2/keys/?quorum=false&recursive=false&sorted=false
Error: client: etcd cluster is unavailable or misconfigured
error #0: x509: certificate signed by unknown authority
error #1: x509: certificate signed by unknown authority
If I change the etcd masters to --advertise-client-urls=http://localhost:2379, then the proxy will connect to itself and get into an infinite loop. And the proxy doesn't modify the traffic between the client and the master, so it doesn't rewrite the advertised client URLs.
I must not be understanding something, because the etcd proxy seems useless.
Turns out that most etcd clients (locksmith, flanneld, etc.) will work just fine with a proxy in this mode. It's only etcdctl that behaves differently. Because I was testing with etcdctl, I thought the proxy config wasn't working at all.
If etcdctl is run with --skip-sync, then it will communicate through the proxy rather than retrieving the list of public endpoints.
etcdctl cluster-health ignores --skip-sync and always touches the public etcd endpoints. It will never work with a proxy.
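For example, a query through the local proxy might look like this (a sketch combining the flags mentioned in these answers; 127.0.0.1:2379 is assumed to be the proxy's listen address):
etcdctl --skip-sync --endpoints http://127.0.0.1:2379 ls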
Use the option --endpoints "https://{YOUR_ETCD_ADVERTISE_CLIENT_URL}:2379".
Because you configured TLS for etcd, you should also add the --ca-file, --cert-file and --key-file options.
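Put together, a full invocation might look like this (a sketch reusing the certificate paths from the question):
etcdctl --endpoints "https://master1.example.com:2379" \
  --ca-file /etc/kubernetes/ssl/ca.pem \
  --cert-file /etc/kubernetes/ssl/worker.pem \
  --key-file /etc/kubernetes/ssl/worker-key.pem \
  ls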

Kibana deployment issue on server . . . client not able to access GUI

I have configured Logstash + ES + Kibana on the VM 100.100.0.158, and Kibana is running under an Apache server on port 8080.
I just need to give the URL "100.100.0.158:8080/kibana" to the client so the client can see his data on the web.
When I put this URL in the client's browser, I get this error:
"can't contact elasticsearch at http://"127.0.0.1":9200 please ensure that elastic search is reachable from your system"
Do I need to configure ES with the IP 100.100.0.158:9200, or is 127.0.0.1:9200 OK?
If your Kibana and ES are installed on the same box, you can have it auto-detect the ES URL/IP by using this line in your Kibana's config.js file:
/** @scratch /configuration/config.js/5
* ==== elasticsearch
*
* The URL to your elasticsearch server. You almost certainly don't
* want +http://localhost:9200+ here. Even if Kibana and Elasticsearch are on
* the same host. By default this will attempt to reach ES at the same host you have
* elasticsearch installed on. You probably want to set it to the FQDN of your
* elasticsearch host
*/
elasticsearch: "http://"+window.location.hostname+":9200",
This is because the interface between Kibana and ES is via JavaScript, and so using 127.0.0.1 or localhost actually points to the client machine (that the browser is running on) rather than the server.
Modify the elasticsearch configuration file elasticsearch.yml.
Append or modify the following settings:
# Enable or disable cross-origin resource sharing.
http.cors.enabled: true
# Which origins to allow.
http.cors.allow-origin: /https?:\/\/<*your\.kibana\.host*>(:[0-9]+)?/
This is needed because the Kibana page tries to load JSON data from elasticsearch, which is otherwise blocked as a cross-origin request for security reasons.
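To verify that the CORS headers are being sent, a quick check might look like this (a sketch; replace the Origin value with your actual Kibana host):
curl -s -D - -o /dev/null -H "Origin: http://your.kibana.host:8080" http://localhost:9200/ | grep -i access-control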
It can also be about iptables rules. Kibana uses port 9292 for the web interface, but elasticsearch queries use port 9200, so you must add iptables rules for these ports.
netstat -napt | grep -i LISTEN
You will see these ports: 9200, 9300, 9301, 9302, 9292.
iptables -I INPUT 4 -p tcp -m state --state NEW -m tcp --dport 9200 -j ACCEPT
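An analogous rule for the Kibana web port would be (a sketch, assuming the same rule position suits your chain):
iptables -I INPUT 5 -p tcp -m state --state NEW -m tcp --dport 9292 -j ACCEPT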
See details: http://logstash.net/docs/1.3.3/tutorials/getting-started-simple

ElasticSearch: Allow only local requests

How can I allow only local requests to elasticsearch?
So that a command like:
curl -XGET 'http://localhost:9200/twitter/_settings'
can only be run on localhost, and a request like:
curl -XGET 'http://mydomain.com:9200/twitter/_settings'
would get rejected?
Because, from what I see, elasticsearch allows it by default.
EDIT:
According to http://www.elasticsearch.org/guide/reference/modules/network.html
you can manage the bind_host parameter to restrict which addresses it binds to. By default, it is set to anyLocalAddress.
For elasticsearch prior to v2.0.0, if you want both the HTTP transport and the internal elasticsearch transport to listen only on localhost, simply add the following line to the elasticsearch.yml file.
network.host: "127.0.0.1"
If you want only the HTTP transport to listen on localhost, add the following line instead.
http.host: "127.0.0.1"
Starting from v2.0 elasticsearch is listening only on localhost by default. So, no additional configuration is needed.
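To verify the effect, you can repeat the checks from the question (hostnames taken from there):
# On the elasticsearch host itself, this should still succeed:
curl -XGET 'http://localhost:9200/twitter/_settings'
# From any other machine, the connection should now be refused:
curl -XGET 'http://mydomain.com:9200/twitter/_settings'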
If your final goal is to deny any requests from outside the host machine, the most reliable way would be to modify the host's iptables so that it denies any incoming requests to the service ports used by ElasticSearch (9200-9300).
If the end goal is to make sure that everyone refers to the service using an exclusive DNS, you're better off achieving this with an HTTP server that can proxy requests such as HTTPd or nginx.
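A minimal sketch of such iptables rules (assuming you want loopback traffic allowed and everything else dropped; adapt to your existing chain):
# Allow the ES ports on the loopback interface only, drop all other access
iptables -A INPUT -i lo -p tcp --dport 9200:9300 -j ACCEPT
iptables -A INPUT -p tcp --dport 9200:9300 -j DROP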
I use this parameter:
http.host: "127.0.0.1"
With this parameter set, elasticsearch does not accept HTTP requests from external hosts.
