Programmatically setting the read_only_allow_delete property of an index - elasticsearch

I’m trying to execute the following line, but it throws the very error I’m trying to clear by running this code:
es.indices.put_settings(index="demo_index", body={
    "blocks": {
        "read_only_allow_delete": "false"
    }
})
Error: elasticsearch.exceptions.AuthorizationException: AuthorizationException(403, 'cluster_block_exception', 'blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];')
If I trigger the same query using curl, it is successfully executed and I don’t get the error:
curl -XPUT 'localhost:9200/demo_index/_settings' -H 'Content-Type: application/json' -d '{ "index": { "blocks": { "read_only_allow_delete": "false" } } }'
I also tried using "null" instead of "false", but I’m getting the same result. Any ideas?

I don't have enough reputation to add a comment, but have you tried wrapping the body parameter with index to match the curl command?
es.indices.put_settings(index="demo_index", body={
    "index": {
        "blocks": {
            "read_only_allow_delete": "false"
        }
    }
})

With the new API you can achieve this as follows:
import elasticsearch

def connect_elasticsearch():
    _es = elasticsearch.Elasticsearch([{'host': 'localhost', 'port': 9200}])
    if _es.ping():
        print('Yay Connect')
    else:
        print('Awww it could not connect!')
    return _es

es = connect_elasticsearch()
try:
    body = {"index.blocks.read_only_allow_delete": "false"}
    es_index_settings = es.indices.put_settings(index="demo_index", body=body)
except elasticsearch.ElasticsearchException as exp:
    print(exp)

Related

How to create a duplicate index in ElasticSearch from existing index?

I have an existing index with mappings and data in ElasticSearch which I need to duplicate for testing new development. Is there any way to create a temporary/duplicate index from the already existing one?
Coming from an SQL background, I am looking at something equivalent to
SELECT *
INTO TestIndex
FROM OriginalIndex
WHERE 1 = 0
I have tried the Clone API but can't get it to work.
I'm trying to clone using:
POST /originalindex/_clone/testindex
{
}
But this results in the following exception:
{
  "error": {
    "root_cause": [
      {
        "type": "invalid_type_name_exception",
        "reason": "Document mapping type name can't start with '_', found: [_clone]"
      }
    ],
    "type": "invalid_type_name_exception",
    "reason": "Document mapping type name can't start with '_', found: [_clone]"
  },
  "status": 400
}
I know someone would guide me quickly. Thanks in advance all you wonderful folks.
First you have to set the source index to be read-only:
PUT /originalindex/_settings
{
  "settings": {
    "index.blocks.write": true
  }
}
Then you can clone
POST /originalindex/_clone/testindex
If you need to copy documents to a new index, you can use the reindex api
curl -X POST "localhost:9200/_reindex?pretty" -H 'Content-Type: application/json' -d'
{
  "source": {
    "index": "someindex"
  },
  "dest": {
    "index": "someindex_copy"
  }
}
'
(See: https://wrossmann.medium.com/clone-an-elasticsearch-index-b3e9b295d3e9)
Shortly after posting the question, I figured out a way.
First, get the properties of original index:
GET originalindex
Copy the properties and put to a new index:
PUT /testindex
{
  "aliases": {...from the above GET request},
  "mappings": {...from the above GET request},
  "settings": {...from the above GET request}
}
Now I have a new index for testing.
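The copy step above can be sketched in Python. This is a minimal sketch, not an official helper: `build_clone_body` is my own name, and `get_response` stands in for the parsed JSON that `GET originalindex` returns. Note that a few per-index settings (like `uuid` and `creation_date`) cannot be copied to a new index and have to be dropped.

```python
def build_clone_body(get_response, source_index):
    # Build a PUT /testindex body from a parsed GET <index> response,
    # which is keyed by index name.
    src = get_response[source_index]
    settings = dict(src.get("settings", {}))
    index_settings = dict(settings.get("index", {}))
    # Drop per-index identifiers that ES generates and refuses on create.
    for key in ("uuid", "creation_date", "provided_name", "version"):
        index_settings.pop(key, None)
    settings["index"] = index_settings
    return {
        "aliases": src.get("aliases", {}),
        "mappings": src.get("mappings", {}),
        "settings": settings,
    }

# Example input shaped like a GET originalindex response.
get_response = {
    "originalindex": {
        "aliases": {},
        "mappings": {"properties": {"name": {"type": "keyword"}}},
        "settings": {"index": {"number_of_shards": "1", "uuid": "abc123"}},
    }
}
body = build_clone_body(get_response, "originalindex")
```

The resulting `body` can then be sent as the `PUT /testindex` payload shown above.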

Elasticsearch 5.4.0 - How to add new field to existing document

In production, we already have 2000+ documents. We need to add a new field to the existing documents. Is it possible to add a new field? How can I add a new field to existing documents?
You can use the update by query API in order to add a new field to all your existing documents:
POST your_index/_update_by_query
{
  "query": {
    "match_all": {}
  },
  "script": {
    "inline": "ctx._source.new_field = 0",
    "lang": "painless"
  }
}
Note: if your new field is a string, change 0 to '' instead
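As a rough sketch of the numeric-vs-string point above (the helper name `build_add_field_body` is mine, not part of any API), the request body can be generated so the painless literal matches the default's type:

```python
import json

def build_add_field_body(field_name, default_value):
    # json.dumps turns a string default into a quoted literal ("") and
    # leaves a numeric default bare (0) inside the painless script.
    literal = json.dumps(default_value)
    return {
        "query": {"match_all": {}},
        "script": {
            "inline": "ctx._source.%s = %s" % (field_name, literal),
            "lang": "painless",
        },
    }

numeric_body = build_add_field_body("new_field", 0)
string_body = build_add_field_body("tag", "")
```

The generated body is what you would POST to `your_index/_update_by_query` as shown above.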
We can also add the new field to the mapping using curl, by running the following command directly in the terminal (replace type_of_data with the actual field type):
curl -X PUT "localhost:9200/you_index/_mapping/defined_mapping" -H 'Content-Type: application/json' -d '{ "properties": {"field_name" : {"type" : "type_of_data"}} }'

error while running the elasticsearch-reindexer from command prompt

elasticsearch-reindex -f http://localhost:9200/artist -t http://localhost:9200/painter
Getting error as:
TypeError: Invalid hosts config. Expected a URL, an array of urls, a host config object, or an array of host config objects.
worker exited with error code: 1
Reindexing completed successfully.
Do you really need a plugin to do that?
curl -XPOST 'localhost:9200/_reindex?pretty' -H 'Content-Type: application/json' -d'
{
  "source": {
    "index": "artist"
  },
  "dest": {
    "index": "painter"
  }
}
'
This should work just fine. More options and info here.
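If you are already in Python anyway, the same request can be issued through the official client instead of a plugin. A minimal sketch (the `build_reindex_body` helper is mine; the client call is commented out because it needs a live cluster at localhost:9200):

```python
def build_reindex_body(source_index, dest_index):
    # Same payload as the curl _reindex call above.
    return {
        "source": {"index": source_index},
        "dest": {"index": dest_index},
    }

body = build_reindex_body("artist", "painter")

# from elasticsearch import Elasticsearch
# es = Elasticsearch([{"host": "localhost", "port": 9200}])
# es.reindex(body=body, wait_for_completion=True)
```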

Elasticsearch Bulk Index - Update only if exists

I'm using Elasticsearch Bulk Index to update some stats of a documents, but it may happen the document I am trying to update does not exist - in this case I want it to do nothing.
I don't want it to create the document in this case.
I haven't found anything in the docs, or perhaps missed it.
My current actions (In this case it creates the document):
{
  update: {
    _index: "index1",
    _type: "interaction",
    _id: item.id
  }
},
{
  script: {
    file: "update-stats",
    lang: "groovy",
    params: {
      newCommentsCount: newRetweetCount,
    }
  },
  upsert: normalizedItem
}
How do I update the document only if it exists, otherwise nothing?
Thank you
Don't use upsert; use a normal update instead. If the document does not exist, the update will simply fail for that item. Thereby it should work well for you.
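A sketch of the action pair from the question with the upsert key dropped, so a missing document fails the update (with document_missing_exception in the bulk response) instead of being created. The helper name `build_update_action` is mine; the field names follow the question:

```python
def build_update_action(index, doc_type, doc_id, new_retweet_count):
    # Bulk "update" without an "upsert" key: if the document is missing,
    # ES reports an error for that item and does not create anything.
    return [
        {"update": {"_index": index, "_type": doc_type, "_id": doc_id}},
        {
            "script": {
                "file": "update-stats",
                "lang": "groovy",
                "params": {"newCommentsCount": new_retweet_count},
            }
        },
    ]

actions = build_update_action("index1", "interaction", "42", 7)
```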
The following worked for me with Elasticsearch 7.15.2 (you may need to check the lowest version that supports this; ref: https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-update.html#update-api-example)
curl --location --request POST 'http://127.0.0.1:9200/exp/_update/8' \
--header 'Content-Type: application/json' \
--data-raw '
{
  "scripted_upsert": true,
  "script": {
    "source": "if ( ctx.op == \"create\" ) {ctx.op=\"noop\"} else {ctx._source.name=\"updatedName\"} ",
    "params": {
      "count": 4
    }
  },
  "upsert": {}
}
'
If ES is about to create a new record (ctx.op is "create"), we change the op to "noop" and nothing is done; otherwise we do the normal update through the script.
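The branch logic of that painless script can be mirrored in plain Python to see why no document is ever created. This is only a simulation for illustration (the function name is mine; `existing_doc` being None stands in for "the document does not exist"):

```python
def simulate_scripted_upsert(existing_doc):
    # Mirrors the script above: with scripted_upsert, ctx.op is "create"
    # when the document is missing; switching it to "noop" writes nothing.
    ctx = {
        "op": "create" if existing_doc is None else "index",
        "_source": dict(existing_doc or {}),
    }
    if ctx["op"] == "create":
        ctx["op"] = "noop"  # missing document: do nothing
    else:
        ctx["_source"]["name"] = "updatedName"  # existing document: update
    return ctx

missing = simulate_scripted_upsert(None)
present = simulate_scripted_upsert({"name": "old"})
```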

How to undo setting Elasticsearch Index to readonly?

So I just set one of my indices to readonly, and now want to delete it.
To set it to readonly:
PUT my_index/_settings
{ "index": { "index.blocks.read_only" : true } }
When I tried to delete it I got this response:
ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]
Then I tried to set the index to readonly false:
PUT my_index/_settings
{ "index": { "index.blocks.read_only" : false } }
But that gives the same error message as above. So how to set readonly back to false?
The answers here are really old, so I'll add an elastic-6+ answer too:
PUT /[_all|<index-name>]/_settings
{
"index.blocks.read_only_allow_delete": null
}
https://www.elastic.co/guide/en/elasticsearch/reference/6.x/disk-allocator.html
FYI (for context): I ran into read-only indices due to running out of disk and got error messages from logstash:
...retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked"
elasticsearch:
ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]
The correct way to make an ES index read-only is
PUT your_index/_settings
{
  "index": {
    "blocks.read_only": true
  }
}
Change true to false to undo it.
You set a non-dynamic setting with
{
  "index": {
    "blocks.read_only": false
  }
}
which I think was not your intention. Also, I think you should have seen an error during the first operation itself, since non-dynamic settings can only be updated on closed indices.
Run
POST your_index/_close
and then try changing it.
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'{ "index.blocks.read_only" : false }'
In version 2.x of ElasticSearch (ES) you have to do the following
PUT your_index/_settings
{
  "index": {
    "blocks": {
      "write": "false",
      "read_only": "false"
    }
  }
}
When you set read_only to true, ES internally sets the write block as well, and just reverting read_only to false still does not allow you to update the index, so you have to reset the write setting explicitly.
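A small sketch of the settings body that resets both blocks at once, as the 2.x answer above requires. The `build_unblock_body` helper is my own name, not a client API:

```python
def build_unblock_body(write=False, read_only=False):
    # Reset both blocks: per the note above, flipping read_only alone
    # is not enough because the write block stays in place.
    return {
        "index": {
            "blocks": {
                "write": str(write).lower(),
                "read_only": str(read_only).lower(),
            }
        }
    }

body = build_unblock_body()
```

`body` is what you would send as the `PUT your_index/_settings` payload.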
If you have Kibana installed, you can go to your kibana url:
Management (Left pane) -> Elasticseach Index Management -> Select your Index -> Edit Settings
then update:
"index.blocks.read_only_allow_delete": "false"
Also, to set it globally on kibana you can go to dev tools (left pane) and make the following request:
PUT _settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
For 6.x, to get the settings:
curl elasticsearch-sc:9200/_settings?pretty
To make the indices / cluster writable:
curl -XPUT -H "Content-Type: application/json" \
http://elasticsearch-sc:9200/_all/_settings \
-d '{"index.blocks.read_only_allow_delete": null}'
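The same null-reset can be expressed through the Python client. A minimal sketch (the helper name is mine; the client call is commented out because it needs a live cluster):

```python
def build_reset_body():
    # JSON null maps to Python None; it removes the setting entirely
    # rather than pinning it to "false".
    return {"index.blocks.read_only_allow_delete": None}

body = build_reset_body()

# from elasticsearch import Elasticsearch
# es = Elasticsearch([{"host": "elasticsearch-sc", "port": 9200}])
# es.indices.put_settings(index="_all", body=body)
```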
