I started two Elasticsearch clusters with different names, but the second one doesn't show up in Marvel or when I query the cluster health manually.
curl 'http://127.0.0.1:9200/_cat/health?v'
epoch timestamp cluster status node.total node.data shards pri relo init unassign pending_tasks max_task_wait_time active_shards_percent
1501062768 15:22:48 Cove_dev_cluster yellow 1 1 8 8 0 0 8 0 - 50.0%
But I can see it running on my screen.
I am assuming you are running both clusters (each a single node, I believe) on the same machine. In this case the nodes have a default port range setting of 9200-9300, and each is configured to bind to the first available port in that range. More details are available in the Network Settings documentation.
So in your case the other cluster is most likely running on port 9201. If you check Marvel or query the health manually on port 9201, you should find the other cluster.
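For example, re-run the same health check against the second port:
curl 'http://127.0.0.1:9201/_cat/health?v'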
However, if you want two nodes participating in the same cluster, then make sure that the cluster name matches in the configuration of both Elasticsearch instances you have running.
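A minimal sketch of the relevant elasticsearch.yml lines (my_cluster and the node names are placeholders):
# elasticsearch.yml on instance 1
cluster.name: my_cluster
node.name: node-1
# elasticsearch.yml on instance 2 (same cluster.name, different node.name)
cluster.name: my_cluster
node.name: node-2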
Hope this helps.
Using NEST, ASP.NET Core 3.1, and Elasticsearch, I have created a 3-node cluster with default roles.
How can I check that search queries are balanced across my machines?
I tried monitoring the metrics of each node while indexing a large amount of data, and I saw that only the nodes holding the related primary and replica shards were engaged during the indexing process.
But I need to make sure that requests are balanced between my nodes in a round-robin manner, and I do not know how to check that. Is there any way, or any tool, to confirm that, for example, the first search query engages node-1 and the second engages node-3?
Any hint, keyword, or help is appreciated.
My elasticsearch.yml configuration for each node (all 3 nodes are configured alike, apart from node.name and network.host):
cluster.name: my-cluster
node.name: node-1
network.host: 192.168.254.137
http.port: 9200
discovery.seed_hosts: ["192.168.254.137", "192.168.254.135", "192.168.254.136"]
cluster.initial_master_nodes: ["192.168.254.137", "192.168.254.135", "192.168.254.136"]
My index is distributed as below:
index shard prirep state docs store ip node
suggestionindex 0 p STARTED 2000 170.5kb 192.168.254.136 node-3
suggestionindex 0 r STARTED 2000 90.5kb 192.168.254.137 node-1
My appsettings.json :
"ElasticsearchSettings": {
// IP of one of the 3 master eligible nodes
"uri": "http://192.168.254.137:9200/",
"basicAuthUser": "",
"basicAuthPassword": ""
},
Are all search queries always sent to the node holding the primary shard (node-3 in my case), or are they balanced between node-1 and node-3?
If they are balanced, how can I check it?
Who balances them between the nodes: NEST or my master node?
Elasticsearch internally load-balances queries across all the data nodes, so you don't have to do anything on your side. If you are on Elasticsearch 7.x, it uses a smart load-balancing technique called adaptive replica selection; before that, the default was round-robin.
The Elastic blog post I mentioned has all the details of how it works.
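One way to verify the balancing yourself (a sketch, reusing one of the node addresses from your config): compare each node's cumulative search counter before and after firing a batch of identical queries.
# Per-node search stats; indices.search.query_total is the number of
# query phases each node has executed. Compare the deltas per node.
curl 'http://192.168.254.137:9200/_nodes/stats/indices/search?pretty'
If query_total grows on both node-1 and node-3 as you repeat the search, the shard copies are sharing the load.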
I have an Elasticsearch cluster of 3 nodes (1 master and 2 data nodes). After I enabled X-Pack, I was not able to start the master node, so I ran the elasticsearch-node repurpose command, and the cluster restarted.
But now I have the shards which are unassigned.
analytics-2019-11-19 0 p UNASSIGNED
analytics-2019-11-19 0 r UNASSIGNED
and the cluster status is red. I am new to ELK. How can I fix this and make the cluster green?
Thanks
In order to resolve the UNASSIGNED shards issue, follow these steps:
First, find out which shards are unassigned, and why. Run:
curl -XGET "localhost:9200/_cat/shards?h=index,shard,prirep,state,unassigned.reason" | grep UNASSIGNED
Via Kibana (the console cannot pipe to grep, so scan the output for UNASSIGNED):
GET _cat/shards?h=index,shard,prirep,state,unassigned.reason
Next, use the cluster allocation explain API to gather more information about the shard allocation issues:
curl -XGET "localhost:9200/_cluster/allocation/explain?pretty"
Via Kibana
GET _cluster/allocation/explain?pretty
The resulting output will provide helpful details about why certain shards in your cluster remain unassigned.
For example:
You might see this explanation: "the shard cannot be allocated to the same node on which a copy of the shard already exists"
This means a replica shard has nowhere to go because every eligible node already holds a copy, typically because more replicas are configured than there are nodes to hold them. You can add nodes, reduce the replica count, or, if you no longer need the index, delete it to restore your cluster status to green.
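If you want to keep such an index rather than delete it, a sketch of dropping its replica count to fit the available nodes (reusing the index name from your output):
curl -XPUT "localhost:9200/analytics-2019-11-19/_settings" -H 'Content-Type: application/json' -d '{"index": {"number_of_replicas": 0}}'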
If that is not the issue, then it could be one of the following reasons:
- Shard allocation is purposefully delayed
- Too many shards, not enough nodes
- Shard allocation needs to be re-enabled (see the sketch below)
- Shard data no longer exists in the cluster
- Low disk watermark
- Multiple Elasticsearch versions
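For the re-enable case flagged above, a minimal sketch using the standard cluster settings API (check the current value first with GET _cluster/settings):
curl -XPUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d '
{
  "transient": {
    "cluster.routing.allocation.enable": "all"
  }
}'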
Follow this guide to resolve the unassigned shards issue.
Hope this helps
I am using Elasticsearch 2.3.1 and Kibana 4.5, and I have 2 Elasticsearch clusters.
Cluster 1 - 1 master node, 1 data node, and 1 client node.
Cluster 2 - 1 master node, 1 data node, and 1 tribe node.
The tribe node is able to communicate with the nodes in both clusters. I also have one index in each cluster: cluster1index in cluster 1 and cluster2index in cluster 2. I am able to view the indices:
yellow open cluster2index 5 1 22400 0 24.6mb 24.6mb
yellow open cluster1index 5 1 129114 0 109.9mb 109.9mb
However, if I try to connect Kibana with the tribe node, I get an error
[2016-05-05 11:49:03,162][DEBUG][action.admin.indices.create] [tribe-node-MS2] no known master node, scheduling a retry
[2016-05-05 11:49:33,163][DEBUG][action.admin.indices.create] [tribe-node-MS2] timed out while retrying [indices:admin/create] after failure (timeout [30s])
[2016-05-05 11:49:33,165][WARN ][rest.suppressed ] /.kibana Params: {index=.kibana}
MasterNotDiscoveredException[null]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:226)
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:236)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:804)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I tried connecting Kibana to the client node instead, and was able to view my indices. After this, if I connect Kibana to the tribe node, I am able to view the dashboard.
My kibana config :
server.port: 5601
server.host: "hostname"
elasticsearch.url: "http://hostname:port"
kibana.index: ".kibana"
I am not sure why Kibana was not working with the tribe node initially, or whether I am missing anything in my configuration.
I read in one of the answers in the elasticsearch forum :
"Regarding the issue you have with kibana, you can't create a .kibana index directly with the tribe node because it's a tribe node :slight_smile: sitting in a cluster that has no master node and data node. Yes, this tribe node is connected to two clusters in this case but it does not know where to put .kibana index if you are under the assumption that it should write to one of the clusters."
Is this the reason that I was unable to create the Kibana index directly through the tribe node initially, but later, once the index already existed, I was able to point Kibana at the tribe node? If so, is there any configuration available to connect Kibana with the tribe node directly?
You can find good additional information, and confirmation of this behavior, in this GitHub issue and also in this one.
As a summary...
The Tribe Node documentation states that you cannot execute master-level write operations such as Create Index. Using Kibana 4 for the first time requires both Create Index and Put Mapping, so simply creating the index is not sufficient; Put Mapping is also a master-level write operation.
As a workaround, first bring up the Kibana 4 instance and configure it to point directly at one of the remote clusters, so that it initializes the .kibana index in that cluster. While Kibana 4 is connected to this single cluster, create and save the index settings/index pattern that you will be using for the tribe node, and create/save at least one visualization and one dashboard. Then update the kibana.yml file to reconfigure its Elasticsearch connection to point at the tribe node, and restart Kibana 4.
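A sketch of that two-step kibana.yml change (both hostnames are placeholders for your environment):
# step 1: point Kibana directly at the remote cluster that will own .kibana
elasticsearch.url: "http://clusterA-node:9200"
# step 2: once the index pattern, visualization, and dashboard are saved,
# repoint Kibana at the tribe node and restart it
elasticsearch.url: "http://tribe-node:9200"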
From that point on, you should be able to continue managing Kibana dashboards and visualizations through the tribe node, provided that the .kibana index exists in only one of the remote clusters. If the index must exist in more than one cluster (e.g., you are doing snapshot/restore for redundancy), then instruct the tribe node to prefer the master version with these settings (where clusterA holds the master .kibana index):
tribe:
    on_conflict: prefer_clusterA
I have a 2-node Elasticsearch cluster with 5 shards and 1 replica. The cluster health shows green, but suddenly it turns yellow, and I fix it by shard re-routing.
I want to understand the root cause of the unassigned shards, because when the cluster goes into the yellow state I can telnet between both nodes on ports 9200 and 9300 and connect successfully.
I also faced that problem earlier, and at that time I went through the Elasticsearch logs and fixed the issue based on what I found there.
So my suggestion is to go through the logs and check what the root cause of the issue is.
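For example, on a default RPM/DEB install the main log file is named after the cluster (the path and cluster name here are assumptions for your setup):
# watch the log while the cluster turns yellow
tail -f /var/log/elasticsearch/my-cluster.log
Look for allocation or node-disconnect messages around the time the shards become unassigned.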
regards
Kartheek Gummaluri
Elasticsearch 1.7.2 on CentOS
3-node cluster
This question is about how to manage the ES config via modifications to elasticsearch.yml plus a restart of the elasticsearch service. (Not via the API.)
Out of the box, the config is:
index.number_of_replicas: 1
So on a 3-node cluster, any 2 nodes together hold the whole data set.
If I want any 1 node to be complete, I would set:
index.number_of_replicas: 2
a) Correct?
b) Can I just walk up to an existing setup and make this change?
c) And can I just walk up and adjust it up to 2 and down to 1 whenever? (Up to make each node a possible standalone, down to save disk space.)
The number of replicas can be changed at any point in time; you can increase or decrease it dynamically. There is a good example shown here.
Also, please note that you can't change the number of shards after index creation, but the number of replicas is open to change via the index settings API.
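A sketch of that call (my_index is a placeholder; on the 1.7.x line no Content-Type header is needed, while 6.0+ requires -H 'Content-Type: application/json'):
curl -XPUT "localhost:9200/my_index/_settings" -d '{"index": {"number_of_replicas": 2}}'
The change takes effect immediately; the cluster goes yellow until the new replicas are allocated, as described below.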
FWIW, another way to do this (which I have now proven out) is to update the yml file (elasticsearch.yml). Change the element:
index.number_of_replicas: 2
Up or down, as desired, and restart the elasticsearch service:
service elasticsearch restart
The cluster will go yellow while the replicas are being created or moved, and then will go green.