Why Druid segments become unavailable after data ingestion

The Druid cluster shows certain segments of a data source as unavailable after data ingestion.
Ex: 72.4% available (2352 segments, 647 segments unavailable)
We have a clustered deployment with 3 nodes:
Master node (coordinator and overlord)
Data node (historical and middleManager)
Query node (broker and router)
Is there any specific reason why this happens?

The issue was resolved after a clean restart of the master and data nodes. However, just restarting the nodes without cleaning their data did not work.
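
One way to narrow this down is to ask the coordinator which segments it is still trying to load. A minimal sketch using Python and the requests library, assuming the coordinator listens at coordinator:8081 (host, port, and output handling are placeholders):

import requests

COORDINATOR = "http://coordinator:8081"  # placeholder coordinator address

# Percentage of each datasource's segments that the historicals have loaded.
load_status = requests.get(f"{COORDINATOR}/druid/coordinator/v1/loadstatus").json()
for datasource, percent_loaded in load_status.items():
    print(f"{datasource}: {percent_loaded}% loaded")

# Segments still queued to load or drop on each historical. A queue that never
# drains often points at historicals that are short on segment-cache disk space.
load_queue = requests.get(f"{COORDINATOR}/druid/coordinator/v1/loadqueue").json()
print(load_queue)

If the load queue never drains, it is worth checking the historicals' segment cache capacity (druid.server.maxSize and druid.segmentCache.locations), since historicals that cannot fit the newly ingested segments will leave them unavailable until space is freed.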

Related

Can you run an elasticsearch data node after deleting the data folder?

I am running a three node Elasticsearch (ELK) cluster. All nodes have the same roles, e.g. data, master, etc. The disk on node 3 to which the data folder is assigned became corrupt, and that data is probably unrecoverable. The other nodes are running normally and one of them has assumed the master role instead.
Will the cluster work normally if I replace the disk and make the empty directory available to elastic again, or am I risking crashing the whole cluster?
EDIT: As this is not explicitly mentioned in the answer, yes, if you add your node with an empty data folder, the cluster will continue normally as if you added a new node to the cluster, but you have to deal with the missing data. In my case, I lost the data as I do not have replicas.
Let me try to explain that in a simple way.
Your data got corrupted on node-3, so if you add that node again, it will not have the older data, i.e. the shards stored on node-3 will remain unavailable to the cluster.
Did you have the replica shards configured for the indexes?
What is the current status (yellow/red) of the cluster with node-3 removed?
If a primary shard isn't available, the master node promotes one of the active replicas to become the new primary. If there are currently no active replicas, the status of the cluster will remain red.
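
To see what the cluster thinks of the missing shards, you can query the health and shard listings before and after rejoining node-3. A minimal sketch with Python and the requests library, assuming a reasonably recent Elasticsearch that accepts format=json on the _cat APIs (the host is a placeholder for any live node):

import requests

ES = "http://localhost:9200"  # placeholder: any reachable node in the cluster

# Overall status: green = all shards allocated, yellow = replicas missing, red = primaries missing.
health = requests.get(f"{ES}/_cluster/health").json()
print(health["status"], "- unassigned shards:", health["unassigned_shards"])

# Per-shard view: 'p'/'r' marks primaries vs replicas; anything not STARTED is suspect.
shards = requests.get(f"{ES}/_cat/shards",
                      params={"h": "index,shard,prirep,state,node", "format": "json"}).json()
for s in shards:
    if s["state"] != "STARTED":
        print(s)

If the status is red while node-3 is out, the affected shards had no replicas, so rejoining the node with an empty data folder will bring the cluster back but will not bring that data back.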

How to add dedicated master node to existing elasticsearch cluster

We have 6 Elasticsearch 6.4 nodes; 3 of them are master-eligible and perform both master and data node operations.
We are thinking of adding 3 dedicated master nodes, as the 3 master/data nodes sometimes show high resource utilization and we feel they might crash during working hours some day.
We are looking for a procedure to add 3 new dedicated master servers to the existing cluster and to turn the current 3 master/data nodes into data-only nodes.
We found a procedure for this at the link below.
https://discuss.elastic.co/t/introduction-of-dedicated-master-nodes/43601
We followed the steps mentioned in the post (except disabling the HTTP port):
shut down the cluster
modify the current 5 nodes to master: false and data: true (see the elasticsearch.yml sketch below)
make the 3 new nodes master: true and data: false
modify all nodes to discover the cluster using the 3 new master nodes' addresses
optionally disable the HTTP port on the master nodes so they do not receive REST requests
start the cluster
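
For reference, the role split in those steps comes down to a few static settings per node. A sketch of the relevant elasticsearch.yml entries for a 6.x cluster (host names are placeholders):

# on the 3 new dedicated master nodes
node.master: true
node.data: false

# on the existing data nodes
node.master: false
node.data: true

# on every node: discover the cluster through the dedicated masters
discovery.zen.ping.unicast.hosts: ["master-1", "master-2", "master-3"]
# with 3 master-eligible nodes the quorum is 2; this is the 6.x guard against split brain
discovery.zen.minimum_master_nodes: 2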
We are still in an experimental stage, so a full cluster restart is not an issue for us; however, the linked thread also discusses how to add dedicated masters dynamically and avoid the split-brain issue.

Corrupted index on elasticsearch after the network connectivity issue with AWS

We have a 3-node ES cluster hosted on AWS. We are seeing the error messages below after the Amazon network connectivity issue (see https://status.aws.amazon.com/) that happened today. Could you please advise how I can bring the cluster back to a good state without any data loss?
[index.store ] [ [.marvel-2015.03.19][0] Failed to open / find files while reading metadata snapshot
[2017-02-10 01:54:54,379][WARN ][index.engine.internal ] [.marvel-2015.03.16][0] failed engine [corrupted preexisting index]
org.apache.lucene.index.CorruptIndexException: [.marvel-2015.03.16][0] Preexisting corrupted index [corrupted_Jja1GRiPTFyzm4G_tuEvsg] caused by: CorruptIndexException[codec footer mismatch: actual footer=1431655765 vs expected footer=-1071082520 (resource: NIOFSIndexInput(path="/es-data//nodes/0/indices/.marvel-2015.03.16/0/index/_83k_es090_0.doc"))]
I would say, compare the data folders on each node. Try to identify the node with an anomaly, which may show up as file entries with a corrupted_* marker or a noticeably larger data folder than on the other nodes. If you are lucky, you will have a balanced cluster and the other nodes will hold the full index in the form of primary and replica shards. In that scenario you can delete the data folder of the anomalous node and restart the cluster, which will then rebalance itself.
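
If you prefer to compare nodes from the API side rather than on disk, the _cat endpoints give a quick per-node and per-shard picture. A minimal sketch with Python and the requests library (the host is a placeholder):

import requests

ES = "http://localhost:9200"  # placeholder: any reachable node

# Disk and shard usage per node; a node whose data footprint differs sharply
# from its peers is a good candidate for the anomaly described above.
print(requests.get(f"{ES}/_cat/allocation", params={"v": "true"}).text)

# One line per shard with its state and assigned node; anything that stays
# UNASSIGNED or INITIALIZING for a long time is worth a closer look.
print(requests.get(f"{ES}/_cat/shards", params={"v": "true"}).text)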

Adding cluster to existing elastic search in elk

Currently I have the following existing components:
1. Elasticsearch
2. Logstash
3. Kibana
I have existing data on them.
Now I have set up an ELK cluster with 3 master nodes, 5 data nodes, and 3 client nodes.
But I am not sure how I can get the existing data into it.
Is it possible to make the existing ES node a data node and attach it to the cluster? Will that data then get replicated to the other data nodes as well, so that I can take that node offline afterwards?
Option 1
How about just trying with fewer nodes first? It is not hard to test whether this is supported: set up one node, feed in some data, then add one more node, configure them as a cluster, and see whether the data gets synchronized.
Option 2
Another option is to use an Elasticsearch migration tool like https://github.com/taskrabbit/elasticsearch-dump: basically, you set up a clean cluster and migrate all the data from the old node into it.
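
As a rough illustration of option 2, here is a sketch that copies one index into the new cluster using Elasticsearch's reindex-from-remote API rather than the elasticsearch-dump tool linked above (hosts and index names are placeholders, and the new cluster must whitelist the old node via reindex.remote.whitelist in elasticsearch.yml):

import requests

NEW_CLUSTER = "http://new-cluster:9200"  # placeholder: any node of the new cluster
OLD_NODE = "http://old-es:9200"          # placeholder: the existing single node

body = {
    "source": {
        "remote": {"host": OLD_NODE},  # pull documents from the old node
        "index": "my-index"            # placeholder index name
    },
    "dest": {"index": "my-index"}      # target index in the new cluster
}
print(requests.post(f"{NEW_CLUSTER}/_reindex", json=body).json())

Reindex-from-remote copies documents but not index settings or mappings, so create the target index with the mappings you want before running it.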

If you create a table with 32 shards on one server, when you add more servers will those shards rebalance?

When you have a one-node cluster and you create a table with 32 shards, and then you add, say, 7 more nodes to the cluster, will those shards automatically migrate to the rest of the cluster so that I have 4 shards per node?
Is manual intervention required for this?
How about the replicas created on one node? Do those migrate to other nodes as well?
Nothing will be automatically redistributed. In current versions of RethinkDB, changing the number/distribution of replicas or changing shard boundaries will cause a loss of availability, so you have to explicitly ask for it to happen (either in the web UI or with the command line administration tool).
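
For completeness, the explicit resharding can also be requested from a driver instead of the web UI. A minimal sketch with the RethinkDB Python driver, assuming a table named mytable in database test (names, counts, and connection details are placeholders):

import rethinkdb as r  # classic driver import; newer drivers expose `from rethinkdb import RethinkDB`

conn = r.connect(host="localhost", port=28015)  # placeholder connection details

# Explicitly spread the 32 shards over the enlarged cluster; with 8 servers and
# replicas=1 this is the 4-shards-per-node layout asked about in the question.
status = r.db("test").table("mytable").reconfigure(shards=32, replicas=1).run(conn)
print(status)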
