ElasticSearch UNASSIGNED indices fix without data loss

For whatever reason, a bunch of indices became UNASSIGNED. I'm looking for a way of assigning them to a cluster node without losing any data.
I tried using the following API call, but it results in data loss, unfortunately (due to allow_primary):
curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -d '{
  "commands" : [ {
    "allocate" : {
      "index" : "index-name",
      "shard" : "0",
      "allow_primary" : true,
      "node" : "node-name"
    }
  } ]
}'
I also keep getting the following entries in elasticsearch.log:
[2015-03-16 11:51:12,181][DEBUG][action.search.type ] [cluster node] All shards failed for phase: [query_fetch]
[2015-03-16 11:51:12,450][DEBUG][action.search.type ] [cluster node] All shards failed for phase: [query_fetch]
[2015-03-16 11:51:19,349][DEBUG][action.bulk ] [cluster node] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
[2015-03-16 11:51:20,057][DEBUG][action.bulk ] [cluster node] observer: timeout notification from cluster service. timeout setting [1m], time since start [1m]
Any help would be appreciated.
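As a first diagnostic step (a sketch; index and node names are placeholders), the _cat API can show exactly which shards are unassigned and whether they are primaries or replicas:

curl -XGET 'localhost:9200/_cat/shards?v' | grep UNASSIGNED

For unassigned replica shards, the allocate command works without allow_primary, so it carries no data-loss risk:

curl -XPOST 'localhost:9200/_cluster/reroute?pretty' -d '{
  "commands" : [ {
    "allocate" : {
      "index" : "index-name",
      "shard" : 0,
      "node" : "node-name"
    }
  } ]
}'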

Related

AWS Elasticsearch showing cluster health yellow, how should I fix it?

I am using AWS Elasticsearch. My cluster status has been yellow for the past 48 hours. I followed the recommendations provided here:
https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-handling-errors.html
I've scaled up to 15 data nodes, and the cluster has 3 master nodes.
Even though each node has around 60 GB of free space, the cluster is still in a yellow state.
When I executed GET /_cluster/allocation/explain, I got:
"index" : "***********************************",
"shard" : 4,
"primary" : false,
"current_state" : "unassigned",
"unassigned_info" : {
"reason" : "ALLOCATION_FAILED",
"at" : "2020-10-09T16:19:41.803Z",
"failed_allocation_attempts" : 5,
"details" : "failed shard on node [f6hB7EYOSR-GiJLFXBn01w]: failed recovery, failure RecoveryFailedException[[******************************][4]: Recovery failed from {70c36ff18063566c3a6089f3d696440a}{*******************}{*************}{di}{di_number=39, zone=us-east-1d, distributed_snapshot_deletion_enabled=true} into {**********************}{****************}{*************}{*****}{*******}{di}{distributed_snapshot_deletion_enabled=true, zone=us-east-1d, di_number=39}]; nested: RemoteTransportException[[****************][*********][internal:index/shard/recovery/start_recovery]]; nested: CircuitBreakingException[[parent] Data too large, data for [<transport_request>] would be [1554462628/1.4gb], which is larger than the limit of [1513521152/1.4gb], real usage: [1554460888/1.4gb], new bytes reserved: [1740/1.6kb], usages [request=0/0b, fielddata=621718551/592.9mb, in_flight_requests=73378/71.6kb, accounting=35794764/34.1mb]]; ",
"last_allocation_status" : "no_attempt"
}
This is what it says. How can I resolve this?
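One detail in that output: "failed_allocation_attempts" : 5 is the default retry limit, after which the cluster stops trying ("last_allocation_status" : "no_attempt"). Once the "Data too large" circuit-breaker pressure has subsided, allocation can be retried manually; a sketch, with the endpoint as a placeholder:

curl -XPOST 'https://your-es-endpoint/_cluster/reroute?retry_failed=true&pretty'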

Elasticsearch restore to a new cluster with a different number of nodes

I have an ops cluster with 5 nodes (1 master, 1 client, and 3 data nodes). I want to restore a backup of it onto a new test cluster with only 3 nodes (1 master, 1 client, 1 data). I only have 1 data node in my test cluster at the moment and wasn't planning to add any additional data nodes to it.
The issue I'm having is that when I try to restore to my test cluster, only some of the shards get assigned; most of them stay in the UNASSIGNED state. I've tried to use the reroute API, but it fails. See below.
Does my test cluster have to have the same number of nodes as my ops cluster I'm restoring from? If so is there any work around for this?
{
  "error": {
    "root_cause": [
      {
        "type": "reroute_transport_exception",
        "reason": "[myhost_master][myhostip:9200][cluster:admin/reroute]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "resolved [myhostip] into [3] nodes, where expected to be resolved to a single node"
  },
  "status": 400
}
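The node counts do not have to match for a snapshot restore. A likely cause of the stuck shards (a sketch, assuming they are replicas): with a single data node, replica shards can never be assigned, because a replica is never placed on the same node as its primary. Dropping replicas for the restored indices should let them assign:

curl -XPUT 'localhost:9200/_all/_settings' -d '{
  "index" : { "number_of_replicas" : 0 }
}'

As for the reroute failure itself: [myhostip] matched three nodes, so the node parameter must resolve to exactly one node, e.g. the node name shown by _cat/nodes.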

How to delete data from a particular shard

I have an index with 5 primary shards and no replicas.
One of my shards (shard 1) is in an unassigned state. When I checked the log file, I found the error below:
2obv65.nvd, _2vfjgt.fdx, _3e3109.si, _3dwgm5_Lucene45_0.dvm, _3aks2g_Lucene45_0.dvd, _3d9u9f_76.del, _3e30gm.cfs, _3cvkyl_es090_0.tim, _3e309p.nvd, _3cvkyl_es090_0.blm]]; nested: FileNotFoundException[_101a65.si]; ]]
When I checked the index, I could not find the _101a65.si file for shard 1.
I am unable to locate the missing .si file, and despite many attempts I could not get shard 1 assigned again.
Is there any other way to make shard 1 assign again, or do I need to delete the entire shard 1 data?
Please suggest.
Normally in the stack trace you should see the path to the corrupted shard, something like MMapIndexInput(path="path/to/es/db/nodes/node_number/indices/name_of_index/1/index/some_file") (here the 1 is the shard number).
Normally, deleting path/to/es/db/nodes/node_number/indices/name_of_index/1 should help the shard recover. If you still see it unassigned, try sending this command to your cluster (normally, as per the documentation, it should work, though I'm not sure about ES 1.x syntax and commands):
POST _cluster/reroute
{
"commands" : [
{
"allocate" : {
"index" : "myIndexName",
"shard" : 1,
"node" : "myNodeName",
"allow_primary": true
}
}
]
}
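For the directory-deletion step mentioned above, a minimal shell sketch (paths and service name are illustrative, and with no replicas the documents in that shard are lost once the directory is removed):

# Stop the node before touching files on disk
sudo service elasticsearch stop
# Remove the corrupted shard directory (shard 1 of the index)
rm -rf /path/to/es/db/nodes/0/indices/name_of_index/1
# After restart, the shard can be allocated again (empty)
sudo service elasticsearch start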

Rejected Execution of org.elasticsearch.transport.TransportService Error

I am running Elasticsearch, and I am trying to put data using the following command:
curl -XPOST http://localhost:9200/_bulk?pretty --data-binary @data_.json
But I am getting the following error:
"create" : {
"_index" : "appname-docm",
"_type" : "HYD",
"_id" : "AVVYfsk7M5xgvmX8VR_B",
"status" : 429,
"error" : {
"type" : "es_rejected_execution_exception",
"reason" : "rejected execution of org.elasticsearch.transport.TransportService$4#c8998f4 on EsThreadPoolExecutor[bulk, queue capacity = 50, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor#553aee29[Running, pool size = 4, active threads = 4, queued tasks = 50, completed tasks = 0]]"
}
}
},
I tried increasing the queue size with:
threadpool.search.queue_size: 100000
But I still get the same error.
The problem you are getting is that the bulk operations queue is full.
An ES node has many thread pools: generic, search, index, suggest, bulk, etc.
In your case the queue that is full is the bulk one, but you increased the search queue size instead.
Try adjusting the queue size of the bulk thread pool:
thread_pool.bulk.queue_size: 100
Or reduce the amount of bulk operations that you are sending at once.
For more details see https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-threadpool.html
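Reducing the amount of bulk operations per request can also be done client-side; a hedged sketch, assuming data_.json is in the newline-delimited bulk format and every document occupies exactly two lines (action line plus source line):

# 2000 lines = 1000 documents per chunk, keeping action/source pairs intact
split -l 2000 data_.json bulk_chunk_
for f in bulk_chunk_*; do
  # on ES 6+ also add: -H 'Content-Type: application/x-ndjson'
  curl -s -XPOST 'http://localhost:9200/_bulk?pretty' --data-binary @"$f"
done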
Try the following:
curl -XPUT localhost:9200/_cluster/settings -d '{ "transient" : { "threadpool.bulk.queue_size" : 500 } }'
Edit:
And to get the current settings:
curl -X GET "localhost:9200/_cluster/settings?include_defaults=true"

Courier Fetch: shards failed

Why do I get these warnings after adding more data to my elasticsearch?
And the warnings are different every time I browse the dashboard.
"Courier Fetch: 30 of 60 shards failed."
More details:
It's a sole node on a CentOS 7.1
/etc/elasticsearch/elasticsearch.yml
index.number_of_shards: 3
index.number_of_replicas: 1
bootstrap.mlockall: true
threadpool.bulk.queue_size: 1000
indices.fielddata.cache.size: 50%
threadpool.index.queue_size: 400
index.refresh_interval: 30s
index.number_of_shards: 5
index.number_of_replicas: 1
/usr/share/elasticsearch/bin/elasticsearch.in.sh
ES_HEAP_SIZE=3G
#I use this Garbage Collector instead of the default one.
JAVA_OPTS="$JAVA_OPTS -XX:+UseG1GC"
cluster status
{
  "cluster_name" : "my_cluster",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 1,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 61,
  "active_shards" : 61,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 61
}
cluster details
{
  "cluster_name" : "my_cluster",
  "nodes" : {
    "some weird number" : {
      "name" : "ES 1",
      "transport_address" : "inet[localhost/127.0.0.1:9300]",
      "host" : "some host",
      "ip" : "150.244.58.112",
      "version" : "1.4.4",
      "build" : "c88f77f",
      "http_address" : "inet[localhost/127.0.0.1:9200]",
      "process" : {
        "refresh_interval_in_millis" : 1000,
        "id" : 7854,
        "max_file_descriptors" : 65535,
        "mlockall" : false
      }
    }
  }
}
I'm curious about "mlockall" : false, because in the yml I did set bootstrap.mlockall: true.
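(A likely reason for that discrepancy, as a hedged aside: the process lacks permission to lock memory, so mlockall silently fails. On Linux this is typically granted via memlock limits, e.g. in /etc/security/limits.conf, with the exact mechanism depending on the distro and init system:)

elasticsearch soft memlock unlimited
elasticsearch hard memlock unlimited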
logs
lots of lines like:
org.elasticsearch.common.util.concurrent.EsRejectedExecutionException: rejected execution (queue capacity 1000) on org.elasticsearch.search.action.SearchServiceTransportAction$23@a9a34f5
For me, tuning the search thread pool queue_size solved the issue; of the various things I tried, this is the one that worked.
I added this to my elasticsearch.yml
threadpool.search.queue_size: 10000
and then restarted elasticsearch.
Reasoning... (from the docs):
"A node holds several thread pools in order to improve how threads memory consumption are managed within a node. Many of these pools also have queues associated with them, which allow pending requests to be held instead of discarded."
and for search in particular...
"For count/search operations. Defaults to fixed with a size of int((# of available_processors * 3) / 2) + 1, queue_size of 1000."
For more information you can refer to the elasticsearch docs here...
I had trouble finding this information so I hope this helps others!
I got this error when my query was missing a closing quote:
field:"value
In my ElasticSearch logs I see these exceptions:
Caused by: org.elasticsearch.index.query.QueryShardException:
Failed to parse query [field:"value]
...
Caused by: org.apache.lucene.queryparser.classic.ParseException:
Cannot parse 'field:"value': Lexical error at line 1, column 13.
Encountered: <EOF> after : "\"value"
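Adding the missing closing quote fixes the parse error:

field:"value"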
With Elasticsearch 5.4, thread_pool has an underscore in it:
thread_pool.search.queue_size: 10000
See the Elasticsearch Thread Pool module documentation.
This is likely an indication that there's a problem with your cluster's health. Without knowing more about your cluster, there's not much more that can be said.
I agree with @Philip's answer, but it's not necessary to restart Elasticsearch, at least on Elasticsearch >= 1.5.2, because you can set threadpool.search.queue_size dynamically:
curl -XPUT http://your_es:9200/_cluster/settings -d '{
  "transient" : {
    "threadpool.search.queue_size" : 10000
  }
}'
From Elasticsearch version 5 onward, it's not possible to update thread_pool.search.queue_size via the _cluster/settings API. In my case, updating the Elasticsearch node yml file is not an option either, since if a node fails, the auto-scaling code would bring up another ES node with default yml settings.
I had a cluster with 3 nodes and 400 active primary shards, with 7 active threads for a queue size of 1000. Increasing the number of nodes to 5 with a similar config resolved the issue, as queries get distributed horizontally across more available nodes.
This will not work on Elasticsearch 5.6:
{
  "error": {
    "root_cause": [
      {
        "type": "remote_transport_exception",
        "reason": "[colmbmiscxx.xx][172.29.xx.xx:9300][cluster:admin/settings/update]"
      }
    ],
    "type": "illegal_argument_exception",
    "reason": "transient setting [threadpool.search.queue_size], not dynamically updateable"
  },
  "status": 400
}
