Index creation moves Elasticsearch cluster to red - elasticsearch

I have set up an Elasticsearch cluster with 1 master node and 1 client node, but the problem is that as soon as I create an index, the cluster moves to a red state with 3 initializing_shards on the client node; the shards on the master node are working fine.
I don't know how to resolve it.

It was an installation issue; we reinstalled Elasticsearch and that solved our problem.

As you said in the question, you have only 1 master and 1 client node, but you also need at least 1 data node to hold at least the primary shards.
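For reference, a minimal sketch of the elasticsearch.yml role settings for a dedicated data node; the cluster and node names are placeholders, and the node.master/node.data boolean style applies to the 1.x-5.x versions discussed on this page:
cluster.name: my-cluster   # hypothetical cluster name
node.name: data-node-1     # hypothetical node name
node.master: false         # not master-eligible
node.data: true            # this node holds shards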

Related

Can you run an Elasticsearch data node after deleting the data folder?

I am running a three-node Elasticsearch (ELK) cluster. All nodes have the same roles (data, master, etc.). The disk on node 3 that holds the data folder became corrupt, and that data is probably unrecoverable. The other nodes are running normally, and one of them has assumed the master role instead.
Will the cluster work normally if I replace the disk and make the empty directory available to Elasticsearch again, or am I risking crashing the whole cluster?
EDIT: As this is not explicitly mentioned in the answer: yes, if you add your node with an empty data folder, the cluster will continue normally as if you had added a new node, but you have to deal with the missing data. In my case, I lost the data because I had no replicas.
Let me try to explain it in a simple way.
Your data got corrupted on node-3, so if you add that node again it will not have the older data; i.e., the shards stored on node-3 will remain unavailable to the cluster.
Did you have replica shards configured for the indexes?
What is the current status (yellow/red) of the cluster with node-3 removed?
If a primary shard isn't available, the master node promotes one of the active replicas to become the new primary. If there are currently no active replicas, the status of the cluster will remain red.
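A quick way to check both of these, assuming the default HTTP port (adjust host and port to your setup):
curl -s 'http://localhost:9200/_cluster/health?pretty'   # overall green/yellow/red status
curl -s 'http://localhost:9200/_cat/shards?v'            # per-shard view; shows UNASSIGNED shards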

Corrupted index on Elasticsearch after a network connectivity issue with AWS

We have a 3-node ES cluster hosted on AWS. We are seeing the error messages below after the Amazon network connectivity issue (see https://status.aws.amazon.com/) that happened today. Could you please advise how I can bring the cluster back to a good state without any data loss?
[index.store ] [ [.marvel-2015.03.19][0] Failed to open / find files while reading metadata snapshot
[2017-02-10 01:54:54,379][WARN ][index.engine.internal ] [.marvel-2015.03.16][0] failed engine [corrupted preexisting index]
org.apache.lucene.index.CorruptIndexException: [.marvel-2015.03.16][0] Preexisting corrupted index [corrupted_Jja1GRiPTFyzm4G_tuEvsg] caused by: CorruptIndexException[codec footer mismatch: actual footer=1431655765 vs expected footer=-1071082520 (resource: NIOFSIndexInput(path="/es-data//nodes/0/indices/.marvel-2015.03.16/0/index/_83k_es090_0.doc"))]
I would suggest comparing the data folders on each node. Try to identify the node with an anomaly, which may show up as file entries carrying a corrupted_* marker, or as a data folder noticeably larger than on the other nodes. If you are lucky, you have a balanced cluster and the other nodes hold the full index in the form of primary and replica shards. In that scenario you can delete the data folder of the anomalous node and restart the cluster, which will then rebalance itself.
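A sketch of what that inspection could look like, using the data path taken from the log above (adjust to your own path.data setting):
find /es-data/nodes/0/indices -name 'corrupted_*'   # Lucene corruption marker files
du -sh /es-data/nodes/0/indices/*                   # compare index folder sizes across nodes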

Adding a cluster to existing Elasticsearch in ELK

Currently I have an existing
1. Elasticsearch
2. Logstash
3. Kibana
setup, with existing data on them.
Now I have set up an ELK cluster with 3 master nodes, 5 data nodes, and 3 client nodes, but I am not sure how to get the existing data into it.
Is it possible to make the existing ES node a data node and attach it to the cluster? Would that data then get replicated to the other data nodes as well, so that I could take that node offline afterwards?
Option 1
How about just trying this with fewer nodes first? It is easy to test whether it is supported: set up one node, feed in some data, then add one more node, configure the two as a cluster, and see whether the data gets synchronized.
Option 2
Another option is to use an Elasticsearch migration tool like https://github.com/taskrabbit/elasticsearch-dump; basically, you set up a clean cluster and migrate all the data from your old node into it.
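A usage sketch of that tool (elasticdump); the host names and index name here are placeholders:
elasticdump --input=http://old-node:9200/my-index --output=http://new-cluster:9200/my-index --type=mapping   # copy the mapping first
elasticdump --input=http://old-node:9200/my-index --output=http://new-cluster:9200/my-index --type=data      # then copy the documents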

Elasticsearch Tribe Node and Kibana - no known master node

I am using Elasticsearch 2.3.1 and Kibana 4.5. I have 2 elasticsearch clusters.
Cluster 1 - 1 master node, 1 data node, and 1 client node.
Cluster 2 - 1 master node, 1 data node, and 1 tribe node.
The tribe node is able to communicate with the nodes in both clusters. I also have 2 indices across the clusters, cluster1index in cluster 1 and cluster2index in cluster 2. I am able to view the indices:
yellow open cluster2index 5 1 22400 0 24.6mb 24.6mb
yellow open cluster1index 5 1 129114 0 109.9mb 109.9mb
However, if I try to connect Kibana to the tribe node, I get an error:
[2016-05-05 11:49:03,162][DEBUG][action.admin.indices.create] [tribe-node-MS2] no known master node, scheduling a retry
[2016-05-05 11:49:33,163][DEBUG][action.admin.indices.create] [tribe-node-MS2] timed out while retrying [indices:admin/create] after failure (timeout [30s])
[2016-05-05 11:49:33,165][WARN ][rest.suppressed ] /.kibana Params: {index=.kibana}
MasterNotDiscoveredException[null]
at org.elasticsearch.action.support.master.TransportMasterNodeAction$AsyncSingleAction$5.onTimeout(TransportMasterNodeAction.java:226)
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onTimeout(ClusterStateObserver.java:236)
at org.elasticsearch.cluster.service.InternalClusterService$NotifyTimeout.run(InternalClusterService.java:804)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
I tried to connect Kibana to the client node instead, and was able to view my indices. After this, if I connect Kibana to the tribe node, I am able to view the dashboard.
My kibana config :
server.port: 5601
server.host: "hostname"
elasticsearch.url: "http://hostname:port"
kibana.index: ".kibana"
I am not sure why Kibana was not working with the tribe node initially, or whether I am missing anything in my configuration.
I read in one of the answers on the Elasticsearch forum:
"Regarding the issue you have with kibana, you can't create a .kibana index directly with the tribe node because it's a tribe node :slight_smile: sitting in a cluster that has no master node and data node. Yes, this tribe node is connected to two clusters in this case but it does not know where to put .kibana index if you are under the assumption that it should write to one of the clusters."
Is this the reason I was unable to create the Kibana index directly through the tribe node initially, but later, once the index had already been created, I was able to point Kibana at the tribe node? If so, is there any configuration available to connect Kibana with the tribe node directly?
Good additional information, and confirmation of this behavior, can be found in this GitHub issue and also in this one.
As a summary...
The Tribe Node documentation states that you cannot execute master-level write operations such as Create Index. Using Kibana 4 for the first time requires both Create Index and Put Mapping, so simply pre-creating the index is not sufficient, because Put Mapping is also a master-level write operation.
As a workaround, first bring up the Kibana 4 instance and configure it to point directly at one of the remote clusters, so that it initializes the .kibana index in that cluster. While Kibana 4 is connected to this single cluster, create and save the index settings/index pattern that you will be using for the tribe node there, and create and save at least one visualization and one dashboard. Then update the Kibana yml file to reconfigure its ES connection to point at the tribe node, and restart Kibana 4.
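Concretely, the re-pointing step is just a change to elasticsearch.url in kibana.yml; the host names below are placeholders:
# step 1: initialize .kibana against one remote cluster, e.g. via its client node
elasticsearch.url: "http://clusterA-client:9200"
# step 2: after saving the index pattern, a visualization, and a dashboard, re-point to the tribe node and restart Kibana
elasticsearch.url: "http://tribe-node:9200"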
From that point on, you should be able to continue managing Kibana dashboards and visualizations through the tribe node, provided that the .kibana index exists in only one of the remote clusters. If the index must exist in more than one cluster (e.g., you are doing snapshot/restore for redundancy), then instruct the tribe node to prefer the master version with these settings (where clusterA holds the master .kibana index):
tribe:
  on_conflict: prefer_clusterA
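For context, this setting sits in the tribe node's elasticsearch.yml alongside the per-cluster connection settings; a sketch for the 2.x tribe feature, with cluster names and hosts as placeholders:
tribe:
  clusterA:
    cluster.name: clusterA
    discovery.zen.ping.unicast.hosts: ["clusterA-master:9300"]   # hypothetical host
  clusterB:
    cluster.name: clusterB
    discovery.zen.ping.unicast.hosts: ["clusterB-master:9300"]   # hypothetical host
  on_conflict: prefer_clusterA   # prefer clusterA's copy of indices that exist in both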

Elasticsearch: Possible to change number of replicas after system is running?

Elasticsearch 1.7.2 on CentOS
3-node cluster
This question is about how to manage the ES config via modifications to elasticsearch.yml plus a restart of the Elasticsearch service. (Not via the API.)
Out of the box, the config is:
index.number_of_replicas: 1
So on a 3-node cluster, any 2 nodes together hold a complete copy of the data.
If I want any single node to hold a complete copy, I would set:
index.number_of_replicas: 2
a) Correct?
b) Can I just walk up to an existing setup and make this change?
c) And can I adjust it up to 2 and down to 1 whenever I like? (Up to make each node a possible standalone, down to save disk space.)
The number of replicas can be changed at any point in time; you can increase or decrease it dynamically. There is a good example shown here.
Also, please note that you can't change the number of shards after index creation, but the number of replicas is open to change via the index settings API.
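A minimal sketch of that API call (the index name is a placeholder; adjust host and port to your setup):
curl -XPUT 'http://localhost:9200/my-index/_settings' -d '{
  "index": { "number_of_replicas": 2 }
}'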
FWIW, another way to do this (which I have now proven out) is to update the yml file (elasticsearch.yml). Change the element:
index.number_of_replicas: 2
up or down, as desired, and restart the Elasticsearch service:
service elasticsearch restart
The cluster will go yellow while the replicas are being created or moved, and then return to green.
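To watch that transition, a quick check like this works (host and port assumed):
curl -s 'http://localhost:9200/_cat/health?v'   # shows the cluster status flipping from yellow back to green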
