TransportError(403, u'cluster_block_exception', u'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];') - elasticsearch

When I try to store anything in Elasticsearch, I get this error:
TransportError(403, u'cluster_block_exception', u'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];')
I have already inserted about 200 million documents into my index, but I have no idea why this error is happening.
I've tried:
curl -u elastic:changeme -XPUT 'localhost:9200/_cluster/settings' -H 'Content-Type: application/json' -d '{"persistent":{"cluster.blocks.read_only":false}}'
As mentioned here:
ElasticSearch entered "read only" mode, node cannot be altered
And the result is:
{"acknowledged":true,"persistent":{"cluster":{"blocks":{"read_only":"false"}}},"transient":{}}
But nothing changed. What should I do?

Try GET /<yourindex>/_settings; this will show your index's settings. If read_only_allow_delete is true, then try:
PUT /<yourindex>/_settings
{
"index.blocks.read_only_allow_delete": null
}
This fixed my issue.
Please refer to the Elasticsearch configuration guide for more detail.
The curl command for this is:
curl -X PUT "localhost:9200/twitter/_settings?pretty" -H 'Content-Type: application/json' -d '
{
"index.blocks.read_only_allow_delete": null
}'
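To confirm the block is gone, you can read the settings back (same example index as above):
curl -XGET "localhost:9200/twitter/_settings?pretty"
Once the block has been reset to null, index.blocks.read_only_allow_delete should no longer appear in the output.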

Last month I faced the same problem. You can try this curl command, or the equivalent from the Kibana Dev Tools console:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
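If you use the Dev Tools console, the equivalent console syntax for that curl command (the same request that appears later in this thread) is:
PUT /_all/_settings
{
  "index.blocks.read_only_allow_delete": null
}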
I hope it helps

I faced the same issue when my disk space was full. These are the steps that I took:
1- Increase the disk space.
2- Update the index read-only mode; see the following curl request:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

This happens because of Elasticsearch's default disk watermarks: when Elasticsearch thinks the disk is running low on space, it puts itself into read-only mode.
By default, the decision is based on the percentage of disk space that is free, so on big disks this can happen even when you have many gigabytes of free space.
The flood stage watermark is 95% by default, so on a 1TB drive you need at least 50GB of free space, or Elasticsearch will put itself into read-only mode.
For docs about the flood stage watermark see https://www.elastic.co/guide/en/elasticsearch/reference/6.2/disk-allocator.html.
Quoted from part of this answer
One solution is to disable it entirely (I found this useful in my local and CI setups). To do so, run these two commands:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
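If you would rather keep the safety check than disable it, another option is to switch the watermarks to absolute values, so big disks are not tripped by the percentage defaults. A sketch (the byte values here are examples; adjust them for your disks):
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.watermark.low": "50gb", "cluster.routing.allocation.disk.watermark.high": "25gb", "cluster.routing.allocation.disk.watermark.flood_stage": "10gb" } }'
Note that the three watermarks must be set consistently: either all percentages or all byte values.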

Adding to this later on, as I just encountered the problem myself; I took the following steps.
1) Deleted older indices to free up space immediately; this brought me to around 23% free.
2) Updated the index read-only mode.
I still had the same issue. I checked the Dev Console to see what might still be locked, and none were. I restarted the cluster and had the same issue.
Finally, under Index Management, I selected the indices with ILM lifecycle issues and chose to reapply the ILM step. I had to do that a couple of times to clear them all out, but it worked.
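For reference, the "reapply ILM step" action in the UI appears to correspond to the ILM retry API, so you can also do this with curl (the index name is a placeholder):
curl -XPOST "http://localhost:9200/<yourindex>/_ilm/retry?pretty"
This retries the policy step for indices that are stuck in the ERROR step.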

The problem may be disk space. I had this problem even after freeing a lot of space on my disk, so finally I deleted the data folder and it worked (be warned, this destroys all your indices): sudo rm -rf /usr/share/elasticsearch/data/

This solved the issue:
PUT _settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
(Other answers here set the value to null instead, which removes the override entirely.)

Related

1000 max shards reached. I would like to increase the limit or clear existing shards and start again. I have 5 servers I am monitoring

I tried to increase the shard limit with this... but to no avail.
curl -XPUT 'http://206.189.196.214:9200/_cluster/settings -H 'Content-type: application/json' --data-binary $'{"transient":{"cluster.max_shards_per_node":5100}}'`
I have a typo in the above ... it returned the below error:
"error":{"root_cause":[{"type":"illegal_argument_exception","reason":"invalid
version format: -H CONTENT-TYPE:
HTTP/1.1"}],"type":"illegal_argument_exception","reason":"invalid
version format: -H CONTENT-TYPE: HTTP/1.1"},"status":400}curl: (3)
[globbing] nested brace in column 44
Please advise. Elasticsearch is running, Zabbix is running, Logstash is running; all seem happy, but I have reached the limit of 1000/1000 shards.
It would be better to set this limit in your elasticsearch.yml file, because if you restart your cluster you will lose a transient setting. But your request would be something like this:
curl -XPUT "http://elasticsearch_host:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
"transient": {
"cluster.routing.allocation.total_shards_per_node": 5100
}
}'
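If you cannot edit elasticsearch.yml, a persistent cluster setting is an alternative that also survives a full cluster restart, unlike the transient one above:
curl -XPUT "http://elasticsearch_host:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "persistent": {
    "cluster.max_shards_per_node": 5100
  }
}'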

Cluster already has maximum shards open

I'm using Windows 10 and I'm getting
Elasticsearch exception [type=validation_exception, reason=Validation Failed: 1: this action would add [2] total shards, but this cluster currently has [1000]/[1000] maximum shards open;]
How can I resolve this? I don't mind if I lose data since it only runs locally.
Aside from the answers mentioned above, you can also try increasing the shard limit as a stopgap until you rearchitect the nodes:
curl -X PUT localhost:9200/_cluster/settings -H "Content-Type: application/json" -d '{ "persistent": { "cluster.max_shards_per_node": "3000" } }'
Besides, the following can be useful, though it should be done with CAUTION, of course.
To get the total number of unassigned shards in the cluster:
curl -XGET -u elasticuser:yourpassword http://localhost:9200/_cluster/health\?pretty | grep unassigned_shards
To DELETE the indices that have unassigned shards (USE WITH CAUTION; this deletes whole indices):
curl -XGET -u elasticuser:yourpassword http://localhost:9200/_cat/shards | grep UNASSIGNED | awk '{print $1}' | xargs -I{} curl -XDELETE -u elasticuser:yourpassword "http://localhost:9200/{}"
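If you want to preview which indices that pipeline would delete before running it, drop the destructive step:
curl -XGET -u elasticuser:yourpassword http://localhost:9200/_cat/shards | grep UNASSIGNED | awk '{print $1}' | sort -u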
You are reaching the cluster.max_shards_per_node limit. Add more data nodes or reduce the number of shards in the cluster.
If you don't mind the data loss, delete old indices.
The easy way is to do it from the Kibana GUI (Kibana > Management > Dev Tools). To list all indices:
GET /_cat/indices/
You can delete indices matching a pattern, like below:
DELETE /<index-name>
e.g.:
DELETE /logstash-2020-10*
You probably have too many shards per node.
May I suggest you look at the following resources about sizing:
https://www.elastic.co/elasticon/conf/2016/sf/quantitative-cluster-sizing

Elasticsearch read only user

I wanted to add a read-only user to my cluster; my app prefixes all its indexes with myapp_.
Following https://www.elastic.co/blog/user-impersonation-with-x-pack-integrating-third-party-auth-with-kibana (what a strange title for the only actually usable blog post on this...) I have first added a role with
curl -XPOST "$ELASTIC_URL:9200/_xpack/security/role/name_of_readonly_role" \
-H 'Content-Type: application/json' \
-d'{"indices":[{"names":"myapp_*","privileges":["read"]}]}'
and then added it to a user:
curl -XPOST $ELASTIC_URL:9200/_xpack/security/user/name_of_user \
-H 'Content-Type: application/json' \
-d'{"roles":["name_of_readonly_role"],"password":"some_password"}'
but when opening $ELASTIC_URL:9200 I got
action [cluster:monitor/main] is unauthorized for user
what's next?
There's a complete dearth of examples for this as far as I can see. To fix this problem, the role command needs to be re-run with -d'{"cluster":["monitor"], "indices":[{"names":"myapp_*","privileges":["read"]}]}' (the same curl command works for both creating and updating roles). This seems to leak the names of all indexes, but not much else, and I was fine with that. Even that was not enough for some apps, such as the ElasticSearch Head browser extension; for that I needed to add the index-level monitor privilege as well: -d'{"cluster":["monitor"], "indices":[{"names":"myapp_*","privileges":["read", "monitor"]}]}'. Role changes are automatically applied to users.
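Spelled out in full, using the same endpoint and names as the commands above, the fixed role command looks like this:
curl -XPOST "$ELASTIC_URL:9200/_xpack/security/role/name_of_readonly_role" \
  -H 'Content-Type: application/json' \
  -d'{"cluster":["monitor"],"indices":[{"names":"myapp_*","privileges":["read","monitor"]}]}'
You can then check as the read-only user that the original error is gone:
curl -u name_of_user:some_password "$ELASTIC_URL:9200"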
I still have no idea what the "/main" relates to in the error message but this works.

What is the permitted mapping type in elasticsearch-7.0?

I am migrating my code from Elasticsearch-5.6 to Elasticsearch-7.0. What is the allowed mapping type that I should use?
As per the documentation: https://www.elastic.co/guide/en/elasticsearch/reference/7.0/removal-of-types.html#_schedule_for_removal_of_mapping_types
For Elasticsearch 7.x
"... indexing a document no longer requires a document type. ... _doc is a permanent part of the path, and represents the endpoint name rather than the document type."
This seems pretty clear, but I was able to execute both of the following successfully:
curl -XPUT "http://localhost:9200/twitter/doc/1" -H 'Content-Type: application/json' -d'{"x":"val"}'
curl -XPUT "http://localhost:9200/twitter/_doc/1" -H 'Content-Type: application/json' -d'{"x":"val"}'
As per the documentation, inserting at /index_name/doc should not function, as /_doc is part of the endpoint-path.
Am I missing something from the documentation?
(My migration strategy would depend on this, as Elasticsearch-5 does not allow type names with preceding '_' (e.g. '_doc') and I would want to change my code to write to 'doc' if that works with Elasticsearch-7)
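As a diagnostic (not from the original post), you can pull the mapping back with the standard mapping API to see what each of those two calls actually created:
curl -XGET "http://localhost:9200/twitter/_mapping?pretty"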

Elasticsearch 6 index changes to read-only after a few seconds

I want to use Elasticsearch 6 on macOS, but when I create an index by adding a document to a non-existent index, after a few seconds the index changes to read-only, and if I add or update a document I get this error:
"error" : {
"root_cause" : [
{
"type" : "cluster_block_exception",
"reason" : "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
}
],
"type" : "cluster_block_exception",
"reason" : "blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];"
},
"status" : 403
}
I tried to disable read-only with:
curl -H'Content-Type: application/json' -XPUT localhost:9200/test/_settings?pretty -d'
{
  "index": {
    "blocks.read_only": false
  }
}'
{
  "acknowledged" : true
}
but nothing changed.
I tested Elasticsearch 6 on another system running Ubuntu and it was fine, with no errors, so I think something may be wrong with my system; but Elasticsearch 5.6.2 works correctly without any error.
The Elasticsearch log shows:
[2018-01-05T21:56:52,254][WARN ][o.e.c.r.a.DiskThresholdMonitor] [gCjouly] flood stage disk watermark [95%] exceeded on [gCjoulysTFy1DDU7f7dOWQ][gCjouly][/Users/peter/Downloads/elasticsearch-6.1.1/data/nodes/0] free: 15.7gb[3.3%], all indices on this node will marked read-only
I had this problem.
I think Elasticsearch 6 added a new setting that marks an index read-only when free disk space drops below 5%.
You can disable this with the line below in elasticsearch.yml:
cluster.routing.allocation.disk.threshold_enabled: false
Then restart Elasticsearch.
I hope this works for you.
For convenience, here is the equivalent for copy/pasting into the Kibana console:
# disable threshold alert
PUT /_cluster/settings
{
  "persistent" : {
    "cluster.routing.allocation.disk.threshold_enabled" : false
  }
}
# unlock indices from read-only state
PUT /_all/_settings
{
  "index.blocks.read_only_allow_delete": null
}
If you are working with Elasticsearch in Docker, it's possible that Docker has run out of space. Either run docker volume prune to remove unused local volumes, or increase your disk image size in Docker Preferences.
Run these commands:
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
If you are using Docker containers, make sure you have enough space in your container; its disk usage should be less than 85%.
You can free up space by clearing dangling images and volumes with the following commands:
# remove the dangling images
docker rmi $(docker images -f "dangling=true" -q)
# remove the dangling volumes
docker volume rm $(docker volume ls -qf dangling=true)
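To see how much of that space Docker is actually using (standard Docker CLI):
docker system df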
If you are still having space issues, it is better to increase the space available to Docker by going into Docker > Preferences.
After making space for Docker, you need to run the curl commands shared at the top of this post.
It's not recommended, but for quick debugging you can add the following to your Docker Compose file:
elasticsearch:
  image: elasticsearch:7.9.3
  environment:
    "cluster.routing.allocation.disk.threshold_enabled": "false"
