Elasticsearch: How to disable automatic date detection globally for all indices

How do I disable automatic date detection for all indices globally in Elasticsearch? I have found that disabling it for a single index is possible via dynamic mapping (source: https://www.elastic.co/guide/en/elasticsearch/reference/current/dynamic-field-mapping.html).
But I want to do it globally, via some setting in elasticsearch.yml. Is there any way to do this?

I resolved it by changing the global Elasticsearch template (check whether you already have any important settings in the global template that you want to keep; if so, you would need to copy-paste them into this JSON as well):
curl -X PUT "$HOSTNAME:9200/_template/global?pretty" -H 'Content-Type: application/json' -d'
{
"index_patterns" : ["logstash-*"], ####here your index pattern for the setting####
"order" : 0,
"mappings": {
"doc": {
"date_detection": false
}
}
}'
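To double-check that the template was stored as intended, you can fetch it back (this sketch assumes the same $HOSTNAME as above; note that templates only apply to indices created after the change, so existing indices keep their current mapping):
curl -X GET "$HOSTNAME:9200/_template/global?pretty"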

Related

Elasticsearch 5.4.0 - How to add new field to existing document

In production, we already have 2000+ documents. We need to add a new field to the existing documents. Is it possible to add a new field? How can I add a new field to the existing documents?
You can use the update by query API in order to add a new field to all your existing documents:
POST your_index/_update_by_query
{
  "query": {
    "match_all": {}
  },
  "script": {
    "inline": "ctx._source.new_field = 0",
    "lang": "painless"
  }
}
Note: if your new field is a string, change 0 to '' instead
We can also add the new field using curl, by running the following command directly in the terminal (field_name and type_of_data are placeholders):
curl -X PUT "localhost:9200/you_index/_mapping/defined_mapping" -H 'Content-Type: application/json' -d '{ "properties":{"field_name" : {"type" : type_of_data}} }'

Cannot turn Elasticsearch dynamic mapping on

I disabled dynamic mapping with
curl -XPUT 'localhost:9200/_template/template_all?pretty' -H 'Content-Type: application/json' -d' { "template": "*", "order":0, "settings": { "index.mapper.dynamic": false }}'
I wanted to turn it back on with
curl -XPUT 'localhost:9200/_template/template_all?pretty' -H 'Content-Type: application/json' -d' { "template": "*", "order":0, "settings": { "index.mapper.dynamic": true }}'
It confirmed the setting as true, but when I have Logstash send data to it, I get the following in the Logstash error logs:
"reason"=>"trying to auto create mapping, but dynamic mapping is disabled"
How do I actually turn dynamic mapping back on?
It looks like the Logstash index was created with the old template (before you updated the template). When you update a template, only newly created indices get the updated mappings and settings.
Check if the index exists:
curl -XGET 'localhost:9200/LOGSTASH_INDEX_NAME_HERE'
If the index exists and you can afford to delete it, do so. After that, when Logstash tries to send something, the index will be created with the new mapping.
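If you do decide to drop it (all data in that index is lost), a minimal sketch of the delete call, reusing the placeholder name from above:
curl -XDELETE 'localhost:9200/LOGSTASH_INDEX_NAME_HERE'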

Change Mapping for Field for ALL OF LOGSTASH Created indexes

I would like to change the type of the field location to geo_point. I'm using ES with Logstash; as y'all know, indices are generated with the name logstash-yyyy-mm-dd.
I first created a logstash index and named it logstash-2016-03-29, like so:
curl -XPUT 'http://localhost:9200/logstash-2016-03-29'
then, I changed the mapping for supposedly all the indices called Logstash-* using the following:
curl -XPOST "http://localhost:9200/logstash-*/_mapping/logs" -d '{
"properties" : {
"location" : { "type":"geo_point" }
}
}'
And when I ran the Logstash configuration file, all the location fields in the index logstash-2016-03-29 were indeed of type geo_point.
However, today, the auto-generated index logstash-2016-03-30 had field location of type String instead of geo_point. I thought the type should be applied on ANY index that starts with the name logstash-*. Apparently, I was wrong. How can I fix this so that any future index created by logstash that have the location field would have that field type set to geo_point instead of String?
Thanks.
You should define it using an index template:
curl -XPUT localhost:9200/_template/template_2 -d '
{
  "template" : "logstash-*",
  "mappings" : {
    "logs" : {
      "properties": {
        "location" : { "type" : "geo_point" }
      }
    }
  }
}'
https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-templates.html
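To sanity-check the result, you could fetch the template back and, once Logstash creates the next daily index, inspect its mapping (a sketch; the date in the second command is a placeholder for whichever index gets created next):
curl -XGET 'localhost:9200/_template/template_2?pretty'
curl -XGET 'localhost:9200/logstash-2016-03-31/_mapping/logs?pretty'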

How to undo setting Elasticsearch Index to readonly?

So I just set one of my indices to readonly, and now want to delete it.
To set it to readonly:
PUT my_index/_settings
{ "index": { "index.blocks.read_only" : true } }
When I tried to delete it I got this response:
ClusterBlockException[blocked by: [FORBIDDEN/5/index read-only (api)];]
Then I tried to set the index to readonly false:
PUT my_index/_settings
{ "index": { "index.blocks.read_only" : false } }
But that gives the same error message as above. So how to set readonly back to false?
The existing answers are really old, so I'll add an Elasticsearch 6+ answer too:
PUT /[_all|<index-name>]/_settings
{
"index.blocks.read_only_allow_delete": null
}
https://www.elastic.co/guide/en/elasticsearch/reference/6.x/disk-allocator.html
FYI (for context): I ran into read-only indices due to running out of disk and got error messages from logstash:
...retrying failed action with response code: 403 ({"type"=>"cluster_block_exception", "reason"=>"blocked"
elasticsearch:
ClusterBlockException[blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];]
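If you are not sure which indices currently carry a block, a quick sketch for listing them (filter_path is optional and just trims the response):
curl -XGET 'localhost:9200/_all/_settings?pretty&filter_path=*.settings.index.blocks*'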
The correct way to make an ES index read-only is:
PUT your_index/_settings
{
  "index": {
    "blocks.read_only": true
  }
}
Change true to false to undo it.
You set a non-dynamic setting with
{
  "index": {
    "blocks.read_only": false
  }
}
which I think was not your intention. Also, I think you should have seen an error during the first operation itself, as non-dynamic settings can only be updated on closed indices.
run
POST your_index/_close
and then try changing it.
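A sketch of the full close / update / reopen sequence that this suggests (your_index and the setting body are the placeholders from above):
POST your_index/_close
PUT your_index/_settings
{
  "index": {
    "blocks.read_only": false
  }
}
POST your_index/_open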
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d'{ "index.blocks.read_only" : false } }'
In version 2.x of ElasticSearch (ES) you have to do the following
PUT your_index/_settings
{
  "index": {
    "blocks": {
      "write": "false",
      "read_only": "false"
    }
  }
}
When you set read_only to true, ES internally sets the write block as well, and simply reverting read_only to false still does not allow you to update the index, so you have to update the write setting explicitly.
If you have Kibana installed, you can go to your kibana url:
Management (Left pane) -> Elasticsearch Index Management -> Select your Index -> Edit Settings
then update:
"index.blocks.read_only_allow_delete": "false"
Also, to set it globally, you can go to Dev Tools (left pane) in Kibana and make the following request:
PUT _settings
{
  "index": {
    "blocks": {
      "read_only_allow_delete": "false"
    }
  }
}
For 6.x to get the settings:
curl elasticsearch-sc:9200/_settings?pretty
To make the indices / cluster writable:
curl -XPUT -H "Content-Type: application/json" \
http://elasticsearch-sc:9200/_all/_settings \
-d '{"index.blocks.read_only_allow_delete": null}'

Delete all documents from index/type without deleting type

I know one can delete all documents from a certain type via deleteByQuery.
Example:
curl -XDELETE 'http://localhost:9200/twitter/tweet/_query' -d '{
"query" : {
"term" : { "user" : "kimchy" }
}
}'
But I have NO term and simply want to delete all documents from that type, no matter the term. What is the best practice to achieve this? An empty term does not work.
Link to deleteByQuery
I believe if you combine the delete by query with a match all it should do what you are looking for, something like this (using your example):
curl -XDELETE 'http://localhost:9200/twitter/tweet/_query' -d '{
"query" : {
"match_all" : {}
}
}'
Or you could just delete the type:
curl -XDELETE http://localhost:9200/twitter/tweet
Note: these XDELETE endpoints are no longer supported in later versions of Elasticsearch
The Delete-By-Query plugin has been removed in favor of a new Delete By Query API implementation in core. Read here
curl -XPOST 'localhost:9200/twitter/tweet/_delete_by_query?conflicts=proceed&pretty' -H 'Content-Type: application/json' -d'
{
"query": {
"match_all": {}
}
}'
From ElasticSearch 5.x, delete_by_query API is there by default
POST: http://localhost:9200/index/type/_delete_by_query
{
"query": {
"match_all": {}
}
}
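Afterwards, a quick way to check that the type is empty is the count API (a sketch; the deleted documents may only disappear from the count after a refresh, or if you pass refresh on the delete call):
GET: http://localhost:9200/index/type/_count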
You can delete documents from a type with the following query:
POST /index/type/_delete_by_query
{
"query" : {
"match_all" : {}
}
}
I tested this query in Kibana and Elastic 5.5.2
Torsten Engelbrecht's comment in John Petrone's answer, expanded:
curl -XDELETE 'http://localhost:9200/twitter/tweet/_query' -d
'{
"query":
{
"match_all": {}
}
}'
(I did not want to edit John's reply, since it got upvotes and is set as answer, and I might have introduced an error)
Starting from Elasticsearch 2.x, delete-by-query is no longer allowed in core, since it could leave documents in the index, causing index corruption.
Since Elasticsearch 5.x, the delete-by-query plugin has been removed in favor of the new Delete By Query API in core.
The curl option:
curl -X POST "localhost:9200/my-index/_delete_by_query" -H 'Content-Type: application/json' -d' { "query": { "match_all":{} } } '
Or in Kibana
POST /my-index/_delete_by_query
{
"query": {
"match_all":{}
}
}
The above answers no longer work with ES 6.2.2 because of Strict Content-Type Checking for Elasticsearch REST Requests. The curl command which I ended up using is this:
curl -H'Content-Type: application/json' -XPOST 'localhost:9200/yourindex/_doc/_delete_by_query?conflicts=proceed' -d' { "query": { "match_all": {} }}'
In Kibana Console:
POST calls-xin-test-2/_delete_by_query
{
"query": {
"match_all": {}
}
}
(Reputation not high enough to comment)
The second part of John Petrone's answer works - no query needed. It will delete the type and all documents contained in that type, but the type can simply be re-created whenever you index a new document into it.
Just to clarify:
$ curl -XDELETE 'http://localhost:9200/twitter/tweet'
Note: this does delete the mapping! But as mentioned before, it can be easily re-mapped by creating a new document.
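For example, indexing any document into that type brings it (and a dynamically generated mapping) back; the document below is hypothetical, and newer versions also require the Content-Type header:
$ curl -XPUT 'http://localhost:9200/twitter/tweet/1' -H 'Content-Type: application/json' -d '{"user": "kimchy", "message": "hello"}'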
Note for ES2+
Starting with ES 1.5.3 the delete-by-query API is deprecated, and is completely removed since ES 2.0
Instead of the API, the Delete By Query is now a plugin.
In order to use the Delete By Query plugin you must install the plugin on all nodes of the cluster:
sudo bin/plugin install delete-by-query
All of the nodes must be restarted after the installation.
The usage of the plugin is the same as the old API. You don't need to change anything in your queries - this plugin will just make them work.
*For complete information regarding WHY the API was removed you can read more here.
You have these alternatives:
1) Delete a whole index:
curl -XDELETE 'http://localhost:9200/indexName'
example:
curl -XDELETE 'http://localhost:9200/mentorz'
For more details you can find here -https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html
2) Delete by Query to those that match:
curl -XDELETE 'http://localhost:9200/mentorz/users/_query' -d
'{
"query":
{
"match_all": {}
}
}'
*Here mentorz is an index name and users is a type
I'm using Elasticsearch 7.5, and when I run
curl -XPOST 'localhost:9200/materials/_delete_by_query?conflicts=proceed&pretty' -d'
{
"query": {
"match_all": {}
}
}'
it throws the error below:
{
"error" : "Content-Type header [application/x-www-form-urlencoded] is not supported",
"status" : 406
}
I also needed to add an extra -H 'Content-Type: application/json' header to the request to make it work.
curl -XPOST 'localhost:9200/materials/_delete_by_query?conflicts=proceed&pretty' -H 'Content-Type: application/json' -d'
{
"query": {
"match_all": {}
}
}'
{
"took" : 465,
"timed_out" : false,
"total" : 2275,
"deleted" : 2275,
"batches" : 3,
"version_conflicts" : 0,
"noops" : 0,
"retries" : {
"bulk" : 0,
"search" : 0
},
"throttled_millis" : 0,
"requests_per_second" : -1.0,
"throttled_until_millis" : 0,
"failures" : [ ]
}
Just to add a couple of cents to this.
The "delete_by_query" mentioned at the top is still available as a plugin in elasticsearch 2.x.
Although in the latest upcoming version 5.x it will be replaced by
"delete by query api"
In Elasticsearch 2.3, the option
action.destructive_requires_name: true
in elasticsearch.yml does the trick:
curl -XDELETE http://localhost:9200/twitter/tweet
For future readers:
in Elasticsearch 7.x there's effectively one type per index - types are hidden
you can delete by query, but if you want to remove everything you'll be much better off removing and re-creating the index. That's because deletes are only soft deletes under the hood, until they trigger Lucene segment merges*, which can be expensive if the index is large. Meanwhile, removing an index is almost instant: it removes some files on disk and a reference in the cluster state.
* The video/slides are about Solr, but things work exactly the same in Elasticsearch, this is Lucene-level functionality.
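A sketch of that drop-and-recreate approach in 7.x console syntax (my-index, its settings and its mapping are placeholders):
DELETE /my-index
PUT /my-index
{
  "settings": { "number_of_shards": 1 },
  "mappings": {
    "properties": {
      "some_field": { "type": "keyword" }
    }
  }
}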
If you want to delete documents according to a date, you can use the Kibana console (v6.1.2):
POST index_name/_delete_by_query
{
  "query" : {
    "range" : {
      "sendDate" : {
        "lte" : "2018-03-06"
      }
    }
  }
}
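If you first want to preview how many documents the range matches before deleting them, the same query body works with the count API (a sketch, reusing the placeholder index and field names above):
POST index_name/_count
{
  "query" : {
    "range" : {
      "sendDate" : {
        "lte" : "2018-03-06"
      }
    }
  }
}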
