How to undo setting Elasticsearch Index to readonly?

So I just set one of my indices to readonly, and now want to delete it.
To set it to readonly:
PUT my_index/_settings
{ "index": { "index.blocks.read_only" : true } }
When I tried to delete it I got this response:
ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]
Then I tried to set the index to readonly false:
PUT my_index/_settings
{ "index": { "index.blocks.read_only" : false } }
But that gives the same error message as above. So how to set readonly back to false?

The answers are really old, so I'll add an Elastic 6+ answer too:
PUT /[_all|<index-name>]/_settings
{
  "index.blocks.read_only_allow_delete": null
}
https://www.elastic.co/guide/en/elasticsearch/reference/6.x/disk-allocator.html
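Before clearing it, you can check which indices actually carry a block with a filtered settings query; something like this should work (filter_path is standard request syntax):
GET /_all/_settings?filter_path=*.settings.index.blocks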
FYI (for context): I ran into read-only indices due to running out of disk and got error messages from logstash:
...retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked"
and from elasticsearch:
ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]
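For background on why this happens: Elasticsearch 6+ sets the read_only_allow_delete block automatically when a node crosses the disk flood-stage watermark, and versions before 7.4 will keep re-applying it while the disk stays full. So free up disk space first, or temporarily raise the watermark, roughly like this:
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.flood_stage": "97%"
  }
}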

The correct way to make an ES index read-only is
PUT your_index/_settings
{
  "index": {
    "blocks.read_only": true
  }
}
change true to false to undo it.
You set a non-dynamic setting with
{
  "index": {
    "index.blocks.read_only": false
  }
}
which I think was not your intention: nested under "index", the key expands to index.index.blocks.read_only, which is not a dynamic setting. Also, I think you should have seen an error during the first operation itself, as non-dynamic settings can be updated only on closed indices.
run
POST your_index/_close
and then try changing it.
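Putting that together, a sketch of the full close/update/open sequence (_close and _open are the standard index APIs):
POST your_index/_close

PUT your_index/_settings
{
  "index.blocks.read_only": false
}

POST your_index/_open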

curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{ "index.blocks.read_only": false }'

In version 2.x of Elasticsearch (ES) you have to do the following:
PUT your_index/_settings
{
  "index": {
    "blocks": {
      "write": "false",
      "read_only": "false"
    }
  }
}
When you set read_only to true, ES internally sets the write block to true as well; just reverting read_only to false still does not allow you to update the index, so you have to revert the write setting explicitly.
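To confirm both blocks are actually cleared afterwards, you can read the settings back:
GET your_index/_settings?pretty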

If you have Kibana installed, you can go to your kibana url:
Management (Left pane) -> Elasticsearch Index Management -> Select your Index -> Edit Settings
then update:
"index.blocks.read_only_allow_delete": "false"
Also, to set it globally on kibana you can go to dev tools (left pane) and make the following request:
PUT _settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}

For 6.x to get the settings:
curl elasticsearch-sc:9200/_settings?pretty
To make the indices / cluster writable:
curl -XPUT -H "Content-Type: application/json" \
http://elasticsearch-sc:9200/_all/_settings \
-d '{"index.blocks.read_only_allow_delete": null}'

Related

How to create a duplicate index in ElasticSearch from an existing index?

I have an existing index with mappings and data in Elasticsearch which I need to duplicate for testing new development. Is there any way to create a temporary/duplicate index from the already existing one?
Coming from an SQL background, I am looking at something equivalent to
SELECT *
INTO TestIndex
FROM OriginalIndex
WHERE 1 = 0
I have tried the Clone API but can't get it to work.
I'm trying to clone using:
POST /originalindex/_clone/testindex
{
}
But this results in the following exception:
{
  "error": {
    "root_cause": [
      {
        "type": "invalid_type_name_exception",
        "reason": "Document mapping type name can't start with '_', found: [_clone]"
      }
    ],
    "type": "invalid_type_name_exception",
    "reason": "Document mapping type name can't start with '_', found: [_clone]"
  },
  "status": 400
}
I'm sure someone can guide me quickly. Thanks in advance, all you wonderful folks.
First you have to set the source index to be read-only
PUT /originalindex/_settings
{
  "settings": {
    "index.blocks.write": true
  }
}
Then you can clone
POST /originalindex/_clone/testindex
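Once the clone has completed, you will probably want to lift the write block on the source index again; resetting the setting to null restores the default:
PUT /originalindex/_settings
{
  "index.blocks.write": null
}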
If you need to copy documents to a new index, you can use the reindex api
curl -X POST "localhost:9200/_reindex?pretty" -H 'Content-Type: application/json' -d'
{
  "source": {
    "index": "someindex"
  },
  "dest": {
    "index": "someindex_copy"
  }
}
'
(See: https://wrossmann.medium.com/clone-an-elasticsearch-index-b3e9b295d3e9)
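One caveat not spelled out in the linked post: _reindex copies documents only, not settings or mappings, so the destination index ends up with dynamic defaults unless you create it explicitly first. A minimal ES 7+ sketch (some_field is just a placeholder for your real mapping):
PUT someindex_copy
{
  "mappings": {
    "properties": {
      "some_field": { "type": "keyword" }
    }
  }
}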
Shortly after posting the question, I figured out a way.
First, get the properties of original index:
GET originalindex
Copy the properties and put to a new index:
PUT /testindex
{
  "aliases": { ...from the above GET request },
  "mappings": { ...from the above GET request },
  "settings": { ...from the above GET request }
}
Now I have a new index for testing.
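One caveat with this approach: the settings returned by GET include internal values such as index.uuid, index.creation_date, index.version.created and index.provided_name, and the PUT will be rejected unless you strip them out. What you keep is roughly just the user-level settings, e.g. (values assumed):
PUT /testindex
{
  "settings": {
    "index": {
      "number_of_shards": "1",
      "number_of_replicas": "1"
    }
  }
}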

Programmatically setting the read_only_allow_delete property of an index

I’m trying to execute the following call, but it throws the very error that running this code is supposed to clear:
es.indices.put_settings(index="demo_index", body={
    "blocks": {
        "read_only_allow_delete": "false"
    }
})
Error: elasticsearch.exceptions.AuthorizationException: AuthorizationException(403, 'cluster_block_exception', 'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];')
If I trigger the same query using curl, it executes successfully and I don’t get the error:
curl -XPUT 'localhost:9200/demo_index/_settings' -H 'Content-Type: application/json' -d '{ "index": { "blocks": { "read_only_allow_delete": "false" } } }'
I also tried to use "null" instead of "false", but I’m getting the same result. Any idea?
I don't have enough reputation to add a comment, but have you tried wrapping the body parameter with index to match the curl command?
es.indices.put_settings(index="demo_index", body={
    "index": {
        "blocks": {
            "read_only_allow_delete": "false"
        }
    }
})
With the new API you can achieve this as:
import elasticsearch

def connect_elasticsearch():
    _es = elasticsearch.Elasticsearch([{'host': 'localhost', 'port': 9200}])
    if _es.ping():
        print('Yay Connect')
    else:
        print('Awww it could not connect!')
    return _es

es = connect_elasticsearch()
try:
    body = {"index.blocks.read_only_allow_delete": "false"}
    es_index_settings = es.indices.put_settings(index="demo_index", body=body)
except elasticsearch.ElasticsearchException as exp:
    print(exp)

Elasticsearch: How to disable automatic date detection globally for all indices

How do you disable automatic date detection for all indices globally in Elasticsearch? I have found that disabling it for a single index is possible via dynamic mapping (source: https://www.elastic.co/guide/en/elasticsearch/reference/current/dynamic-field-mapping.html).
But I want to do it globally, via some setting in elasticsearch.yml. Is there any way to do this?
I resolved it by changing the global Elasticsearch template (check whether you already have any important settings in the global template that you would like to keep; if so, you also need to copy-paste them into this JSON):
curl -X PUT "$HOSTNAME:9200/_template/global?pretty" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["logstash-*"],   #### here your index pattern for the setting ####
  "order": 0,
  "mappings": {
    "doc": {
      "date_detection": false
    }
  }
}'
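For Elasticsearch 7+, where mapping types were removed, roughly the same template without the doc type level should work (a sketch, untested; the legacy _template endpoint is assumed to still be in use):
curl -X PUT "$HOSTNAME:9200/_template/global?pretty" -H 'Content-Type: application/json' -d'
{
  "index_patterns": ["logstash-*"],
  "order": 0,
  "mappings": {
    "date_detection": false
  }
}'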

Elasticsearch settings not applied

I'm trying to change the logging level of elasticsearch like this:
PUT /_cluster/settings
{
  "transient": {
    "logger.discovery": "DEBUG"
  }
}
I performed the PUT, and got a response:
{
  "acknowledged": true,
  "persistent": {},
  "transient": {
    "logger": {
      "discovery": "DEBUG"
    }
  }
}
I'm expecting the log-level to change immediately to DEBUG, but it's still on INFO.
Any ideas, what the problem is, or how to debug this problem?
I assume you want to set the root log level and not just discovery to debug:
PUT /_cluster/settings
{
  "transient": {
    "logger._root": "DEBUG"
  }
}
For Elasticsearch 5 you need a different command (with the full package name in it):
PUT /_cluster/settings
{"persistent": {"logger.org.elasticsearch.discovery":"DEBUG"}}
Relevant documentation: https://www.elastic.co/guide/en/elasticsearch/reference/5.1/misc-cluster.html#cluster-logger
You can change the log level in the following file
/etc/elasticsearch/log4j.properties
In there, you can change the value for the logger you want, or simply set the rootLogger.level to debug. Prepare for an avalanche of logs if you do so.
You need to restart the service for this to be effective.
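For reference: on Elasticsearch 5+ the file is actually log4j2.properties and uses log4j2 syntax, so either of these should do (a sketch):
# everything at debug level (very noisy)
rootLogger.level = debug

# or only a single package, e.g. discovery
logger.discovery.name = org.elasticsearch.discovery
logger.discovery.level = debug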

Elasticsearch Bulk Index - Update only if exists

I'm using Elasticsearch Bulk Index to update some stats of documents, but it may happen that the document I am trying to update does not exist; in this case I want it to do nothing.
I don't want it to create the document in this case.
I haven't found anything in the docs, or perhaps missed it.
My current actions (In this case it creates the document):
{
  update: {
    _index: "index1",
    _type: "interaction",
    _id: item.id
  }
},
{
  script: {
    file: "update-stats",
    lang: "groovy",
    params: {
      newCommentsCount: newRetweetCount
    }
  },
  upsert: normalizedItem
}
How do I update the document only if it exists, otherwise nothing?
Thank you
Don't use upsert; use a normal update instead. If the document does not exist, the update will simply fail, so this should work well for you.
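In other words, the actions from the question with the upsert line removed (a sketch reusing the same hypothetical names):
{
  update: {
    _index: "index1",
    _type: "interaction",
    _id: item.id
  }
},
{
  script: {
    file: "update-stats",
    lang: "groovy",
    params: {
      newCommentsCount: newRetweetCount
    }
  }
}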
The following worked for me with Elasticsearch 7.15.2 (you would need to check the lowest version that supports this; ref: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update.html#update-api-example):
curl --location --request POST 'http://127.0.0.1:9200/exp/_update/8' \
--header 'Content-Type: application/json' \
--data-raw '
{
  "scripted_upsert": true,
  "script": {
    "source": "if ( ctx.op == \"create\" ) { ctx.op = \"noop\" } else { ctx._source.name = \"updatedName\" }",
    "params": {
      "count": 4
    }
  },
  "upsert": {}
}
'
If ES is about to create a new record (ctx.op is "create"), we change the op to "noop" and nothing is done; otherwise we do the normal update through the script.
