Elastic Cloud on Kubernetes change config of the server - elasticsearch

I'm running an ECK cluster with Rancher 2. There are 3 nodes: 2 for Elasticsearch, 1 for Kibana.
I want to change the Elasticsearch server configuration through the operator, for example, to disable SSL communication.
What is the right way to do that? Mounting a config file from the host? Please give me some ideas.

Quoting the documentation:
You can explicitly disable TLS for Kibana, APM Server, Enterprise Search and the HTTP layer of Elasticsearch.
spec:
  http:
    tls:
      selfSignedCertificate:
        disabled: true
That is generally useful when you want to run ECK with Istio and want to let that manage TLS.
However, you cannot disable TLS for the transport communication (between the Elasticsearch nodes). For security reasons that is always enabled.
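For context, here is a minimal sketch of where that snippet sits in a complete Elasticsearch manifest (the name, version, and node count are illustrative, not taken from your question); you apply it with kubectl apply -f and the operator rolls the change out:
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart            # illustrative name
spec:
  version: 8.13.0             # illustrative version
  http:
    tls:
      selfSignedCertificate:
        disabled: true        # disables TLS on the HTTP layer only
  nodeSets:
  - name: default
    count: 3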
PS: For a highly available cluster, you'd want at least 3 Elasticsearch nodes. Having 2 doesn't help you: if one of them goes down, the other will degrade as well, since Elasticsearch is built around a majority-based consensus protocol.

Related

Elasticsearch Python - No viable nodes were discovered on the initial sniff attempt

I have a cluster of Elasticsearch nodes running on different AWS EC2 instances. They connect internally via a network within AWS, so their network and discovery addresses are set up within this internal network. I want to use the Python elasticsearch library to connect to these nodes from the outside. The EC2 instances have static public IP addresses attached, and the Elasticsearch instances allow HTTPS connections from anywhere. The connection works fine, i.e. I can connect to the instances via the browser and via the Python elasticsearch library. However, I now want to set up sniffing, so I set up my Python code as follows:
self.es = Elasticsearch(
    [f'https://{elastic_host}:{elastic_port}' for elastic_host in elastic_hosts],
    sniff_on_start=True, sniff_on_connection_fail=True, sniffer_timeout=60,
    sniff_timeout=10, ca_certs=ca_location, verify_certs=True,
    http_auth=(elastic_user, elastic_password))
If I remove the sniffing parameters, I can connect to the instances just fine. However, with sniffing, I immediately get elastic_transport.SniffingError: No viable nodes were discovered on the initial sniff attempt upon startup.
http.publish_host in the elasticsearch.yml configuration is set to the public IP address of my EC2 machines, and the /_nodes/_all/http endpoint returns the public IPs as the publish_address (i.e. x.x.x.x:9200).
After testing with our other microservices, we found that this problem was related to the elasticsearch-py library rather than to our Elasticsearch configuration: our Go-based microservice could perform sniffing against the same cluster with no problem.
After further investigation we linked the problem to this open issue on the elasticsearch-py library: https://github.com/elastic/elasticsearch-py/issues/2005.
The problem is that the authorization headers are not properly passed to the request made to Elasticsearch to discover the nodes. To my knowledge, there is currently no fix that does not involve altering the library itself. However, the error message is clearly misleading.
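Until that issue is fixed, one workaround sketch is to skip sniffing entirely and list every public endpoint explicitly, since plain connections work fine. The variables are the same as in the snippet above; basic_auth assumes a v8 client (older clients use http_auth):
from elasticsearch import Elasticsearch

# Sketch: pass all public node endpoints explicitly instead of sniffing.
es = Elasticsearch(
    [f'https://{host}:{elastic_port}' for host in elastic_hosts],
    ca_certs=ca_location,
    verify_certs=True,
    basic_auth=(elastic_user, elastic_password),
)
print(es.cluster.health())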

connect dotCMS cluster to external elasticsearch

I'm trying to create a cluster of three servers with dotCMS 5.2.6 installed.
They have to interface with a second cluster of 3 Elasticsearch nodes.
Despite my attempts to combine them, the best case I've obtained is with both dotCMS and Elasticsearch up and running, but from the dotCMS admin backend (Control panel > Configuration > Network) I always see my three servers with red status due to the index red status.
I have tested the following combinations:
In plugins/com.dotcms.config/conf/dotcms-config-cluster-ext.properties
AUTOWIRE_CLUSTER_TRANSPORT=false
es.path.home=WEB-INF/elasticsearch
Using AUTOWIRE_CLUSTER_TRANSPORT=true doesn't seem to change the result.
In plugins/com.dotcms.config/ROOT/dotserver/tomcat-8.5.32/webapps/ROOT/WEB-INF/elasticsearch/config/elasticsearch-override.yml
transport.tcp.port: 9301
discovery.zen.ping.unicast.hosts: first_es_server:9300, second_es_server:9300, third_es_server:9300
Using transport.tcp.port: 9300 causes a dotCMS startup failure with this error:
ERROR cluster.ClusterFactory - Unable to rewire cluster:Failed to bind to [9300]
Caused by: com.dotmarketing.exception.DotRuntimeException: Failed to bind to [9300]
Of course, port 9300 is listening on the three Elasticsearch nodes: they are configured with transport.tcp.port: 9300 and have no problem starting and forming their cluster.
Using transport.tcp.port: 9301, dotCMS can start and join the Elasticsearch cluster, but the index status is always red, even though indexing seems to work and nothing is visibly affected.
Using transport.tcp.port: 9309 (as suggested in the dotCMS online reference) or any other port number leads to the same result as the 9301 case, but in the dotCMS admin backend (Control panel > Configuration > Network) the index information for each machine still reports 9301 as the ES port.
Main Question
I would like to know where the ES port can be edited, considering that my Elasticsearch cluster is performing well (all indices are green) and that the elasticsearch-override.yml within the dotCMS plugin doesn't affect the default 9301 reported by the backend.
Is the HTTP interface enabled on ES? If not, I would enable it and see what the cluster health is and what the index health is. It might be that you need to adjust your expected replicas.
https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-health.html
and
https://www.elastic.co/guide/en/elasticsearch/reference/current/cat-indices.html
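For example, assuming the HTTP interface is reachable on one of the ES nodes (the hostname is illustrative):
# Overall cluster health: green, yellow or red, plus node counts
curl -s 'http://first_es_server:9200/_cat/health?v'
# Per-index health; red rows point at the indices dragging the status down
curl -s 'http://first_es_server:9200/_cat/indices?v'
If the red status comes from replicas that can never be assigned, lowering number_of_replicas on the affected indices is one possible adjustment.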
FWIW, the upcoming version of dotCMS (5.3.0) does not support embedded Elasticsearch and requires a vanilla external ES node/cluster to connect to.

Kibana not showing monitoring data from external Elasticsearch node

Yesterday I set up a dedicated single monitoring node following this guide.
I managed to fire up the new monitoring node with the same ES version (6.6.0) as the cluster, then added these lines to the elasticsearch.yml file on all ES cluster nodes:
xpack.monitoring.exporters:
  id1:
    type: http
    host: ["http://monitoring-node-ip-here:9200"]
Then I restarted all nodes and Kibana (which actually runs on one of the nodes of the ES cluster).
Now I can see today's monitoring data indices being sent to the new external monitoring node, but Kibana shows a "You need to make some adjustments" message when accessing the "Monitoring" section:
We checked the `cluster defaults` settings for `xpack.monitoring.exporters`, and found the reason: `Remote exporters indicate a possible misconfiguration: id1`.
Check that the intended exporters are enabled for sending statistics to the monitoring cluster, and that the monitoring cluster host matches the `xpack.monitoring.elasticsearch` setting in `kibana.yml` to see monitoring data in this instance of Kibana.
I already checked that all nodes can ping each other. Also, I don't have X-Pack security enabled, so I haven't created any additional "remote_monitor" user.
I followed the error message and tried to add xpack.monitoring.elasticsearch to the kibana.yml file, but I ended up with the following error:
FATAL ValidationError: child "xpack" fails because [child "monitoring" fails because [child
"elasticsearch" fails because ["url" is not allowed]]]
I hope someone can help me figure out what's wrong.
EDIT #1
Solved: the problem was due to monitoring collection not being disabled in the monitoring cluster:
PUT _cluster/settings
{
"persistent": {
"xpack.monitoring.collection.enabled": false
}
}
Additionally, I had made a mistake in the kibana.yml configuration:
xpack.monitoring.elasticsearch should have been xpack.monitoring.elasticsearch.hosts
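For reference, a sketch of the corrected kibana.yml line, reusing the illustrative monitoring host from the elasticsearch.yml snippet above:
xpack.monitoring.elasticsearch.hosts: ["http://monitoring-node-ip-here:9200"]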
I had exactly the same problem, but the root cause was something different. Have a look:
OK, I used to have the same problem.
My Kibana did not show monitoring graphs, although the monitoring index .monitoring-es-* was available.
The root of the problem in my case was that my master nodes did not expose the :9200 HTTP socket to the LAN. That is, my config on the master nodes was:
...
transport.host: [ "192.168.7.190" ]
transport.port: 9300
http.port: 9200
http.host: [ "127.0.0.1" ]
...
As you can see, the HTTP socket is available only from within the host.
I didn't want anyone making HTTP requests to the masters from the LAN, because there seemed to be no point in that.
However, as I understand it, Kibana does not only read data from the monitoring index .monitoring-es-* but also makes some requests directly to the masters to get information.
That was exactly why Kibana did not show anything about monitoring.
After I changed one line in the config on the master node to
http.host: [ "192.168.0.190", "127.0.0.1" ]
Kibana immediately started to show monitoring graphs.
I recreated this experiment several times. Now everything is working.
I also want to point out that, even though everything is fine now, my monitoring index .monitoring-es-* does NOT have "cluster_stats" documents.
So if your Kibana does not show monitoring graphs, I suggest you:
check that the index .monitoring-es-* exists
check that your master nodes can serve HTTP requests from the LAN
(both checks are sketched below)
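A quick way to run both checks from another host on the LAN (the monitoring host and master address are the illustrative ones from above):
# 1. Does the monitoring index exist?
curl -s 'http://monitoring-node-ip-here:9200/_cat/indices/.monitoring-es-*?v'
# 2. Does the master answer HTTP requests from the LAN at all?
curl -s 'http://192.168.7.190:9200/'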

Graylog2 'cluster' over VPN

We have 2 locations connected by VPN.
Currently we have 2 independent Graylog servers.
We want to create some kind of cluster, so we can reach logs on both sides even if the VPN is down.
We already tried to create an Elasticsearch cluster, but this is not the way:
if the VPN is down, the whole cluster is down and logs are not working on either side.
I found this article: https://www.elastic.co/blog/scaling_elasticsearch_across_data_centers_with_kafka
but I have no idea how to configure Apache Kafka so it will be a broker for Graylog and an input for the syslog server.
Any help, other ideas, or links will be much appreciated.
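One rough sketch of the article's pattern, assuming each site runs its own local Kafka broker and syslog traffic is produced to it with rsyslog's omkafka module (the file name, broker address and topic are illustrative):
# /etc/rsyslog.d/50-kafka.conf -- illustrative path
module(load="omkafka")
action(type="omkafka"
       broker=["kafka-site-a:9092"]   # local broker, so logging survives VPN loss
       topic="syslog"
       template="RSYSLOG_SyslogProtocol23Format")
Graylog on each side would then consume its local topic through one of its Kafka inputs, and a mirroring tool such as Kafka's MirrorMaker could replicate topics across the VPN whenever it is up.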

Reconnection in Elasticsearch Cluster

I have a question about clustering, specifically about reconnection within an Elasticsearch cluster.
I have 2 Elasticsearch instances on 2 different servers within a network. Both instances are in the same cluster.
In an error scenario the network connection could be broken. I simulate this behaviour by pulling the network cable on one server.
After reconnecting the server to the network, the clustering no longer works. When I put data into one Elasticsearch instance, it is not transferred to the other.
Does anybody know whether there are settings that control reconnection?
Best regards
Thomas
Why not just put all Elasticsearch servers behind a load balancer with a single DNS name? There could be an issue in the server that went down that needs manual intervention; after you correct the problem, the server will become available behind the load balancer again automatically.
Did you check if all nodes join the cluster again?
You may want to try the following APIs:
Check node status
http://es-host:9200/_nodes
Check cluster status
http://es-host:9200/_cluster/health
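For example (the hostname is illustrative):
# Did both nodes rejoin? Check the node count and names.
curl -s 'http://es-host:9200/_cat/nodes?v'
# Cluster status plus the number of nodes the elected master can see.
curl -s 'http://es-host:9200/_cluster/health?pretty'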
