Kibana configuration for ES with production and monitoring clusters - elasticsearch

I referred to this and this.
Why do we need two fields to tell Kibana where the monitoring data is?
elasticsearch.hosts
monitoring.ui.elasticsearch.hosts
But when I point either of these properties at my monitoring cluster, it works. I had assumed, perhaps wrongly, that elasticsearch.hosts should be my actual production cluster rather than the monitoring cluster.
Apart from the why, is my understanding of these integration attributes correct?
Any thoughts? Thanks.
kibana.yml:
server.host: "ip.ad.re.ss"
#elasticsearch.hosts: ["http://host1:9200","http://host2:9200","FewMoreHosts"]
monitoring.ui.elasticsearch.hosts: ["http://MonitoringNode:9200"]
I haven't changed anything in the elasticsearch.yml of the monitoring node.
metricbeat.yml:
output.elasticsearch:
  hosts: ["http://MonitoringNode:9200"]
setup.kibana:
  host: "kibanaHost"
In modules.d/elasticsearch-xpack.yml, I left the default configurations.
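For reference, the stock modules.d/elasticsearch-xpack.yml ships with roughly the following defaults (a sketch; the localhost entry would need to point at the production nodes being monitored):
- module: elasticsearch
  xpack.enabled: true
  period: 10s
  hosts: ["http://localhost:9200"]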
elasticsearch.yml:
cluster.name: es_cluster
node.name: master-1
node.data: false
node.master: true
node.ingest: true
node.max_local_storage_nodes: 3
transport.tcp.port: 9300
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["master1.ip", "master2.ip","master3.ip"]
cluster.initial_master_nodes: ["master-1","master-2"]
Monitoring cluster elasticsearch.yml:
network.host: 0.0.0.0
discovery.type: single-node
When I enable both properties in kibana.yml, I get the below error in the log.
{
"type": "log",
"#timestamp": "2021-04-21T14:48:34-04:00",
"tags": [
"error",
"plugins",
"data",
"data",
"indexPatterns"
],
"pid": 29959,
"message": "Error: No indices match pattern \"metricbeat-*\"\n at createNoMatchingIndicesError (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/lib/errors.js:45:29)\n at convertEsError (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/lib/errors.js:71:12)\n at callFieldCapsApi (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/lib/es_api.js:69:38)\n at runMicrotasks (<anonymous>)\n at processTicksAndRejections (internal/process/task_queues.js:93:5)\n at getFieldCapabilities (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/lib/field_capabilities/field_capabilities.js:35:23)\n at IndexPatternsFetcher.getFieldsForWildcard (/usr/share/kibana/src/plugins/data/server/index_patterns/fetcher/index_patterns_fetcher.js:49:31)\n at IndexPatternsApiServer.getFieldsForWildcard (/usr/share/kibana/src/plugins/data/server/index_patterns/index_patterns_api_client.js:27:12)\n at IndexPatternsService.refreshFieldSpecMap (/usr/share/kibana/src/plugins/data/common/index_patterns/index_patterns/index_patterns.js:216:27)\n at IndexPatternsService.getSavedObjectAndInit (/usr/share/kibana/src/plugins/data/common/index_patterns/index_patterns/index_patterns.js:320:23) {\n data: null,\n isBoom: true,\n isServer: false,\n output: {\n statusCode: 404,\n payload: {\n statusCode: 404,\n error: 'Not Found',\n message: 'No indices match pattern \"metricbeat-*\"',\n code: 'no_matching_indices'\n },\n headers: {}\n }\n}"
}
But if I set only monitoring.ui.elasticsearch.hosts, Kibana shows the data.

elasticsearch.hosts is where you set the hosts that store the data you want to query; this should be your production cluster.
monitoring.ui.elasticsearch.hosts is where you set the hosts of your monitoring cluster, if you have a separate monitoring cluster.
Depending on the size of your cluster, it is recommended to have a separate cluster just for monitoring; this could be a single-node cluster using the basic license, for example.
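Putting both together, a minimal kibana.yml sketch (the production hostnames are placeholders, not from the post above):
# production cluster: the data Kibana queries
elasticsearch.hosts: ["http://prod-host1:9200", "http://prod-host2:9200"]
# dedicated monitoring cluster: where Metricbeat ships the stack monitoring data
monitoring.ui.elasticsearch.hosts: ["http://MonitoringNode:9200"]
With this split, the Stack Monitoring UI reads from the monitoring cluster while every other Kibana feature queries the production cluster.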

The security tokens that are used in these contexts are cluster-specific; therefore, you cannot use a single Kibana instance to connect to both the production and monitoring clusters.
As mentioned in the documentation, I think we need a separate Kibana instance to view data from each cluster.
I am not sure about the security tokens part myself.

Related

Elasticsearch node can't connect to cluster

First of all, I want to be clear that I looked at several guides and similar suggested questions before opening this post, but none of them worked for our case.
Here is our situation:
Last week, our Elasticsearch stopped adding new records because the disk was almost full. We changed the config to buy some time, and now it is working as expected. However, we want to add a new server with Elasticsearch and form a cluster, as we don't want to resize the disk and risk losing anything.
Here is the configuration for the main server, /etc/elasticsearch/elasticsearch.yml:
cluster.name: my-cluster
node.name: master-node
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.max_content_length: 100mb
discovery.seed_hosts: ["ip_address_server_1", "ip_address_server_2"]
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.enabled: true
As I said, we created another server with the same Elasticsearch version (7.6.2):
cluster.name: my-cluster
node.name: another-node
node.data: true
node.master: false
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: ip_of_server
discovery.seed_hosts: ["ip_address_server_1", "127.0.0.1"]
We tried restarting the nodes at the same time, curls, and everything else, but they can't see each other. At this point, I'm starting to think that the problem is the SSL configuration on the master, but I am not an Elasticsearch expert, so I don't know what exactly is happening.
EDIT:
I took a look at the logs and here is what I found:
Node 2
[ico-elastic-node-2] master not discovered yet: have discovered [{node-2}{G5SfrEv0RxaxmYf8urIFtQ}{XpfNNouCQx2HmYfw2AvoQw}{ip_server_2}{ip_server_2:9300}{dil}{ml.machine_memory=12558602240, xpack.installed=true, ml.max_open_jobs=20}]; discovery will continue using [ip_server_1, 127.0.0.1:9300] from hosts providers and [] from last-known cluster state; node term 0, last-accepted version 0 in term 0
And the master:
[master-node] exception caught on transport layer [Netty4TcpChannel{localAddress=/ip:9300, remoteAddress=/ip:36660}], closing connection
io.netty.handler.codec.DecoderException: javax.net.ssl.SSLHandshakeException: No available authentication scheme
Currently the master is yellow. What can we try next?
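One thing the SSLHandshakeException above suggests is that the transport TLS settings do not match between the nodes: the master has xpack.security.transport.ssl.enabled: true, while the new node's config shows no xpack.security settings at all. A minimal sketch of what the joining node would also need (assuming certificates generated with elasticsearch-certutil; the paths are placeholders, not from the original post):
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: certs/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: certs/elastic-certificates.p12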

Failed to send join request to master in Elasticsearch, Unknown NamedWriteable [org.elasticsearch.cluster.metadata.MetaData$Custom][licenses]]

We have a long-running single-node ELK cluster (master/data). I have decided to add an additional data node. However, I'm getting the below error on the data node:
30.X.XXX}{172.30.X.XXX:9300}{ml.enabled=true}], reason [RemoteTransportException[[master][172.30.X.XXX:9300][internal:discovery/zen/join]];
nested: IllegalStateException[failure when sending a validation request to node];
nested: RemoteTransportException[[data1][172.30.X.XXX:9300][internal:discovery/zen/join/validate]];
nested: IllegalArgumentException[Unknown NamedWriteable [org.elasticsearch.cluster.metadata.MetaData$Custom][licenses]]; ]
Below are the config files on the master and the new data node.
Master Node:
cluster.name: my-application
node.name: master
node.master: true
node.data: true
path.data: /opt/elasticsearch
network.host: ["172.30.X.XX1","localhost"]
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["172.30.X.XX1"]
discovery.zen.minimum_master_nodes: 1
Data1 Node:
cluster.name: my-application
node.name: data1
node.master: false
node.data: true
path.data: /opt/elasticsearch
network.host: ["172.30.X.XX2","localhost"]
http.port: 9200
transport.tcp.port: 9300
discovery.zen.ping.unicast.hosts: ["172.30.X.XX1"]
discovery.zen.minimum_master_nodes: 1
I tried pinging and checked telnet on ports 9200 and 9300 from the master to the data node and vice versa, and both work fine.
I have also tried deleting the data from /var/lib/elasticsearch/nodes/0 and restarting data1; it didn't work.
This happens if you run a mix of xpack/commercial/non-open-source binaries of Elasticsearch on some nodes and the open-source binaries on others.
Unfortunately, Elasticsearch nowadays tries to "trick" you into using their non-open-source version, and this causes many unintended non-open-source installations.
A simple solution is to install the non-OSS version everywhere; however, you may not want to run the commercial version, as you then need to adhere to the commercial license!
In order to convert all nodes to the open-source license, you can do the following:
You can set the following in /etc/elasticsearch/elasticsearch.yml and restart all nodes to disable some commercial features:
xpack.security.enabled: false
xpack.ml.enabled: false
Then you can change all nodes to the open-source binaries one by one, in rolling fashion.
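A rough sketch of one rolling step (assuming a Debian-style install and an ES version for which the elasticsearch-oss package exists, i.e. 6.3 or later; adapt to your package manager):
systemctl stop elasticsearch
apt-get remove elasticsearch        # keeps the data in /var/lib/elasticsearch in place
apt-get install elasticsearch-oss
systemctl start elasticsearch
# wait for cluster health to return to green before moving to the next node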
See also the following similar discussions:
https://discuss.elastic.co/t/elasticsearch-cluster-cant-join-new-node/126964
https://discuss.elastic.co/t/adding-a-new-node-on-a-different-subnet/125377/2
https://github.com/codelibs/elasticsearch-module/issues/3
https://discuss.elastic.co/t/transport-client-error-after-installing-x-pack-on-es-5-5-1/97021/4
https://discuss.elastic.co/t/bulk-indexing-with-x-pack-exception/92086/5

Elasticsearch Cluster can't join new node

I use ES version 2.2, with a cluster built from 3 nodes on different servers. Now some servers have more memory to use, so I plan to start additional nodes on the existing servers.
server1:
10.1.192.31, default ports 9200 and 9300
server2:
10.1.192.32, default ports 9200 and 9300
server3:
10.216.90.225, default ports 9200 and 9300
Now I want to add two new nodes on the .31 and .32 servers.
newnode1: the new config is like below:
cluster.name: EScluster
node.name: ESnode-1-1
network.host: 10.1.192.32
node.master: false
node.data: true
http.port: 9202
transport.tcp.port: 9302
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.216.90.225", "10.1.192.31:9300", "10.1.192.31:9302", "10.1.192.32:9300"]
newnode2: config like below:
cluster.name: EScluster
node.name: ESnode-2-1
network.host: 10.1.192.31
node.master: false
node.data: true
http.port: 9202
transport.tcp.port: 9302
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["10.216.90.225", "10.1.192.31:9300", "10.1.192.32:9300", "10.1.192.32:9302"]
After starting, I get the error below:
[INFO ][discovery.zen ] [ESnode-1-1] failed to send join request to master [{ESnode-2}{C4Z7lLTASmiZYtswsljZYA}{10.1.192.31}{10.1.192.31:9300}{max_local_storage_nodes=1, master=true}], reason [RemoteTransportException[[ESnode-2][10.1.192.31:9300][internal:discovery/zen/join]]; nested: IllegalStateException[failure when sending a validation request to node]; nested: RemoteTransportException[[ESnode-1-1][10.1.192.32:9302][internal:discovery/zen/join/validate]]; nested: IllegalArgumentException[No custom metadata prototype registered for type [licenses], node like missing plugins]; ]
[2016-05-26 10:35:26,408][WARN ][transport.netty ] [ESnode-1-1] exception caught on transport layer [[id: 0x770dcb9e, /10.1.192.31:37584 => /10.1.192.32:9302]], closing connection
I just ran into this error.
There are three nodes (one master node and two data nodes, all in Docker) running on my VPS.
I installed Marvel on the master node with the commands:
bin/plugin install license
bin/plugin install marvel-agent
But I got "No Marvel Data Found" on the Kibana page.
Soon after, I found the data nodes had died.
The error was:
IllegalArgumentException[No custom metadata prototype registered for type [licenses], node like missing plugins
So I just installed the license plugin on the other two data nodes:
bin/plugin install license
and restarting them made things work.
Guys, the problem has been solved. The main reason is below:
[internal:discovery/zen/join/validate]]; nested: IllegalArgumentException[No custom metadata prototype registered for type [licenses], node like missing plugins]
I had installed some plugins, like Marvel, that didn't work and were then disabled, but those plugins were never installed on the new node. This was a big mistake; don't try unused plugins in your environment, and be careful with them.
So my configuration was correct. I hope this can help other people, thanks.
My logs were filled with annoying license errors even though I had uninstalled marvel and was not running anything that should require a license. Nodes would not join the cluster.
I ran:
bin/plugin remove license
and restarted the nodes. Things came back online fine and the logspam stopped.

ElasticSearch : observer: timeout notification from cluster service

I have an Elasticsearch cluster with 3 data+master nodes, one dedicated client node, and a Logstash instance sending events to the Elasticsearch cluster via the client node.
The client is not able to connect to the cluster, and I am seeing the below errors in the log:
[2015-10-24 00:18:29,657][DEBUG][action.admin.indices.create] [ESClient] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2015-10-24 00:18:30,743][DEBUG][action.admin.indices.create] [ESClient] no known master node, scheduling a retry
I have gone through this answer, but it is not working for me. My master-data Elasticsearch config looks like below:
cluster.name: elasticsearch
node.name: "ESMasterData1"
node.master: true
node.data: true
index.number_of_shards: 7
index.number_of_replicas: 1
bootstrap.mlockall: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-master3:9300", "kibana:9300", "es-master2:9300", "es-master1:9300"]
cloud.aws.access_key: AK
cloud.aws.secret_key: J0
The ES client config looks like below:
cluster.name: elasticsearch
node.name: "ESClient"
node.master: false
node.data: false
index.number_of_shards: 7
index.number_of_replicas: 1
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-master1:9300", "es-master2:9300", "es-master3:9300", "kibana:9300"]
bootstrap.mlockall: true
cloud.aws.access_key: AK
cloud.aws.secret_key: J0
The nodes all have the standard configuration, like the JVM heap set to 30 GB and mlockall set to true.
Logstash Output looks like below:
elasticsearch {
index => "j-%{env}-%{app}-%{iver}-%{[#metadata][app_log_time]}"
cluster => "elasticsearch"
host => "kibana"
port => "9300"
protocol => "transport"
}
Telnet works fine from the ES client node to the ES master-data nodes on port 9300. All three ES master-data nodes are also able to talk to each other. I have verified with iperf that TCP and UDP are enabled between the client and the master-data machines.
I am using Elasticsearch version 1.7.1 on Debian 7.
Can someone let me know what is going wrong, or how I can debug this?

How to set up ES cluster?

Assuming I have 5 machines I want to run an Elasticsearch cluster on, and they are all connected to a shared drive. I put a single copy of Elasticsearch onto that shared drive so all five machines can see it. Do I just start Elasticsearch from that shared drive on all of my machines and the clustering would automatically work its magic? Or would I have to configure specific settings to get Elasticsearch to realize that it's running on 5 machines? If so, what are the relevant settings? Should I worry about configuring for replicas, or is it handled automatically?
It's super easy.
You'll need each machine to have its own copy of Elasticsearch (simply copy the one you have now); the reason is that each machine/node is going to keep its own files, which are sharded across the cluster.
The only thing you really need to do is edit the config file to include the name of the cluster.
If all machines have the same cluster name, Elasticsearch will do the rest automatically (as long as the machines are all on the same network).
Read here to get you started:
https://www.elastic.co/guide/en/elasticsearch/guide/current/deploy.html
When you create indices (where the data goes), you define at that time how many replicas you want (they'll be distributed around the cluster).
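For example (a sketch using the node-level index defaults this era of Elasticsearch accepted in elasticsearch.yml, as some configs in the answers below also show; the numbers are placeholders):
index.number_of_shards: 5
index.number_of_replicas: 1
These act as defaults for newly created indices; you can also override them per index at creation time.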
It is usually handled automatically.
If autodiscovery doesn't work, edit the Elasticsearch config file and enable unicast discovery:
Node 1:
cluster.name: mycluster
node.name: "node1"
node.master: true
node.data: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1.example.com"]
Node 2:
cluster.name: mycluster
node.name: "node2"
node.master: false
node.data: true
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["node1.example.com"]
and so on for nodes 3, 4, and 5. Make node 1 the master, and the rest data-only nodes.
Edit: Please note that, as an ES rule of thumb, if you have N master-eligible nodes, then by convention N/2+1 of them should be required to elect a master for failover purposes (this is the quorum behind discovery.zen.minimum_master_nodes). They may or may not be data nodes, though.
Also, in case auto-discovery doesn't work, the most probable reason is that the network doesn't allow it (and it has therefore been disabled). If too many auto-discovery pings take place across multiple servers, the resources needed to manage those pings will prevent other services from running correctly.
For example, think of a 10,000-node cluster with all 10,000 nodes doing the auto-pings.
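To make the quorum note above concrete, a sketch for the zen-discovery era: with 3 master-eligible nodes, 3/2+1 = 2, so each node's config would carry:
discovery.zen.minimum_master_nodes: 2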
Elasticsearch 7 changed the configuration for cluster initialisation.
What is important to note is that ES instances communicate internally using the transport layer (TCP), not the HTTP protocol, which is normally used to perform operations on the indices. Below is a sample config for a 2-machine cluster.
cluster.name: cluster-new
node.name: node-1
node.master: true
node.data: true
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
transport.host: 102.123.322.211
transport.tcp.port: 9300
discovery.seed_hosts: ["102.123.322.211:9300", "102.123.322.212:9300"]
cluster.initial_master_nodes:
- "node-1"
- "node-2”
Machine 2 config:-
cluster.name: cluster-new
node.name: node-2
node.master: true
node.data: true
bootstrap.memory_lock: true
network.host: 0.0.0.0
http.port: 9200
transport.host: 102.123.322.212
transport.tcp.port: 9300
discovery.seed_hosts: ["102.123.322.211:9300", "102.123.322.212:9300"]
cluster.initial_master_nodes:
- "node-1"
- "node-2”
cluster.name: This has to be the same across all the machines that are going to be part of the cluster.
node.name: Identifier for the ES instance. Defaults to the machine name if not given.
node.master: specifies whether this ES instance is going to be master-eligible or not.
node.data: specifies whether this ES instance is going to be a data node or not (i.e. hold data).
bootstrap.memory_lock: disables swapping. You can start the cluster without setting this flag, but it is recommended to set the lock (see the note after this list). More info: https://www.elastic.co/guide/en/elasticsearch/reference/master/setup-configuration-memory.html
network.host: 0.0.0.0 if you want to expose the ES instance over the network. 0.0.0.0 is different from 127.0.0.1 (aka localhost or the loopback address): it means all IPv4 addresses on the machine. If the machine has multiple IP addresses and a server is listening on 0.0.0.0, a client can reach the machine via any of those IPv4 addresses.
http.port: the port on which this ES instance will listen for HTTP requests.
transport.host: the IPv4 address of the host (this will be used to communicate with other ES instances running on different machines). More info: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-transport.html
transport.tcp.port: 9300 (the port on which the machine will accept TCP connections).
discovery.seed_hosts: this was changed in recent versions. Initialise it with the IPv4 addresses and TCP ports (important) of all the ES instances that are going to be part of this cluster. It should be the same across all ES instances in the cluster.
cluster.initial_master_nodes: the node names (node.name) of the ES machines that are going to participate in the master election (quorum-based decision making: https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-discovery-quorums.html#modules-discovery-quorums).
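A small aside on bootstrap.memory_lock, referenced in the list above (assuming a systemd-based package install): the lock only takes effect if the OS permits it, which typically means a systemd override in /etc/systemd/system/elasticsearch.service.d/override.conf:
[Service]
LimitMEMLOCK=infinity
followed by systemctl daemon-reload and a restart of the node.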
I tried the steps that @KannarKK suggested on ES 2.0.2; however, I could not bring the cluster up at first. Eventually I figured something out: since I had set a custom TCP port number on the master, the slave's discovery.zen.ping.unicast.hosts configuration needs the master's IP address along with that TCP port number for discovery. So the following configuration works for me.
Node 1
cluster.name: mycluster
node.name: "node1"
node.master: true
node.data: true
http.port : 9200
tcp.port : 9300
discovery.zen.ping.multicast.enabled: false
# I think unicast.host on master is redundant.
discovery.zen.ping.unicast.hosts: ["node1.example.com"]
Node 2
cluster.name: mycluster
node.name: "node2"
node.master: false
node.data: true
http.port : 9201
tcp.port : 9301
discovery.zen.ping.multicast.enabled: false
# The port number of Node 1
discovery.zen.ping.unicast.hosts: ["node1.example.com:9300"]
