Elasticsearch cluster update API does not work

My Elasticsearch version is 2.1.1.
I just want to update discovery.zen.minimum_master_nodes as described in the Elasticsearch docs by running this:
curl -XPUT localhost:9200/_cluster/settings -d '{
"transient" : {"discovery.zen.minimum_master_nodes" : "2"}
}'
The response is {"acknowledged":true,"persistent":{},"transient":{}}, which means nothing was updated (even though it is acknowledged). The response should be something like this:
{
"persistent" : {},
"transient" : {"discovery.zen.minimum_master_nodes" : "2"}
}
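As a sanity check, the currently applied settings can be read back with the standard cluster settings API (a minimal check against the same host as above):
curl -XGET 'localhost:9200/_cluster/settings?pretty'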

Related

How to log all requests to ElasticSearch?

I have a problem with debugging my application,
so I would like to log all the requests that are sent to Elasticsearch.
I learnt that I can do this via the slowlog by setting the slow threshold to 0s.
I tried this both in ES 2.4.2 and in ES 5.6.4, but no requests were logged.
In ES 2.4.2 I set in logging.yml:
index.search.slowlog: INFO, index_search_slow_log_file
index.indexing.slowlog: INFO, index_indexing_slow_log_file
In ES 5.6.4 I also changed the level to INFO (in log4j2.properties):
logger.index_search_slowlog_rolling.level = info
logger.index_indexing_slowlog.level = info
Then I started ES and issued:
curl -XPUT 'http://localhost:9200/com.example.app.model.journal/_settings' -d '
{
"index.search.slowlog.threshold.query.info" : "0s",
"index.search.slowlog.threshold.fetch.info": "0s",
"index.indexing.slowlog.threshold.index.info": "0s"
}
'
(I would prefer to set these settings for all indexes in the configuration
file; is that possible? One possible approach is sketched right below.)
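A possible approach, assuming ES 2.x still honors index-level defaults placed in elasticsearch.yml (later versions dropped this, so treat it as an unverified sketch):
index.search.slowlog.threshold.query.info: 0s
index.search.slowlog.threshold.fetch.info: 0s
index.indexing.slowlog.threshold.index.info: 0s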
Then I searched for some data (and got results):
curl -XGET 'localhost:9200/com.example.app.model.journal/_search?pretty' -d '
{
  "query" : {
    "match" : { "rank" : "2" }
  }
}'
This request was not logged: in ES 2.4.2 the slowlog files are created
but stay empty, and in ES 5.6.4 the files are not created at all. What am I doing wrong?
I couldn't find a solution to this, so as a workaround I used mitmdump. Run the proxy:
mitmdump -v -dddd -R http://localhost:9200
Replace the ES address with the proxy address:
curl -XGET 'localhost:8080/com.example.app.model.journal/_search?pretty' -d '
{
...

How to update date field in elasticsearch

I have data in my Elasticsearch index, indexed as below:
curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
"user" : "kimchy",
"post_date" : "2009-11-15T14:12:12",
"message" : "trying out Elasticsearch"
}'
Mapping obtained using
curl -XGET localhost:9200/twitter
{"twitter":{"aliases":{},"mappings":{"tweet":{"properties":{"message":{"type":"string"},"post_date":{"type":"date","format":"strict_date_optional_time||epoch_millis"},"user":{"type":"string"}}}},"settings":{"index":{"creation_date":"1456739938440","number_of_shards":"5","number_of_replicas":"1","uuid":"DwhS57l1TsKQFyzY23LSiQ","version":{"created":"2020099"}}},"warmers":{}}}
Now, if I modify the user field as below, it works:
curl -XPOST 'http://localhost:9200/twitter/tweet/1/_update' -d '{
"script" : "ctx._source.user=new_user",
"params" : {
"new_user" : "search"
}
}'
But when I try to modify the date field as below, it gives an exception:
curl -XPOST 'http://localhost:9200/twitter/tweet/1/_update' -d '{
"script" : "ctx._source.post_date='new-date'",
"params" : {
"new-date" : "2016-02-02T16:12:23"
}
}'
Exception received is :
{"error":{"root_cause":[{"type":"remote_transport_exception","reason":"[Anomaly][127.0.0.1:9300][indices:data/write/update[s]]"}],"type":"illegal_argument_exception","reason":"failed
to execute
script","caused_by":{"type":"script_exception","reason":"Failed to
compile inline script [ctx._source.post_date=new-date] using lang
[groovy]","caused_by":{"type":"script_exception","reason":"failed to
compile groovy
script","caused_by":{"type":"multiple_compilation_errors_exception","reason":"startup failed:\ne90a551666b36d90e4fc5b08d04250da5c4d552d: 1: unexpected
token: - # line 1, column 26.\n ctx._source.post_date=new-date\n
^\n\n1 error\n"}}}}
Can anyone let me know how I can handle this?
POST your_index_name/_update_by_query?conflicts=proceed
{
"script" : {
"source": "ctx._source.publishedDate=new SimpleDateFormat('yyyy-MM-dd').parse('2021-05-07')",
"lang": "painless"
}
}
publishedDate is the name of your date field, and your_index_name is the name of your index.
In Groovy (or in Java), an identifier can't contain a '-'. If you write
ctx._source.last-login = new-login
Groovy (and java!) parses this into :
(ctx._source.last)-(login) = (new)-(login)
You should quote these properties:
ctx._source.'last-login' = 'new-login'
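Applied to the original update, a sketch of the same call with a hyphen-free parameter name (same date value as in the question):
curl -XPOST 'http://localhost:9200/twitter/tweet/1/_update' -d '{
"script" : "ctx._source.post_date = new_date",
"params" : {
"new_date" : "2016-02-02T16:12:23"
}
}'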

NoSuchMethodError when creating mapping for attachment type in ElasticSearch

I'm following this tutorial.
I start by installing attachment-mapper (I replaced their link with the latest version):
bin/plugin -install elasticsearch/elasticsearch-mapper-attachments/2.4.1
Starting fresh, I delete the "test" index and then create a new one:
curl -X DELETE "localhost:9200/test"
Create index, I presume:
curl -X PUT "localhost:9200/test" -d '{
"settings" : { "index" : { "number_of_shards" : 1, "number_of_replicas" : 0 }}
}'
Then I try to create mapping:
curl -X PUT "localhost:9200/test/attachment/_mapping" -d '{
"attachment" : {
"properties" : {
"file" : {
"type" : "attachment",
"fields" : {
"title" : { "store" : "yes" },
"file" : { "term_vector":"with_positions_offsets", "store":"yes" }
}
}
}
}
}'
Then I get this error:
{
"error" : "NoSuchMethodError[org.elasticsearch.index.mapper.core.TypeParsers.parseMultiField(Lorg/elasticsearch/index/mapper/core/AbstractFieldMapper$Builder;Ljava/lang/String;Lorg/elasticsearch/index/mapper/Mapper$TypeParser$ParserContext;Ljava/lang/String;Ljava/lang/Object;)V]",
"status" : 500
}
Any idea what's going on?
Could it be a problem with the attachment-mapper plugin installation?
attachment-mapper uses Tika. I've installed Tika, maybe that's installed wrong? How do I check?
Any insight would be helpful.
I had the wrong version of Elasticsearch installed.
For the attachment-mapper plugin version I had installed, I needed Elasticsearch 1.4.
I removed the old version, installed the right one, installed the attachment-mapper plugin, started the service, ran through the tutorial again, and it worked.
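In case it helps anyone hitting the same error, you can check which Elasticsearch version is running (the root endpoint reports it):
curl localhost:9200
and, assuming a 1.x-era install, list the installed plugins and their versions:
bin/plugin --list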

Delete all documents from index/type without deleting type

I know one can delete all documents from a certain type via deleteByQuery.
Example:
curl -XDELETE 'http://localhost:9200/twitter/tweet/_query' -d '{
"query" : {
"term" : { "user" : "kimchy" }
}
}'
But I have no term and simply want to delete all documents from that type, no matter what the term is. What is the best practice to achieve this? An empty term does not work.
Link to deleteByQuery
I believe if you combine the delete by query with a match all it should do what you are looking for, something like this (using your example):
curl -XDELETE 'http://localhost:9200/twitter/tweet/_query' -d '{
"query" : {
"match_all" : {}
}
}'
Or you could just delete the type:
curl -XDELETE http://localhost:9200/twitter/tweet
Note: XDELETE is deprecated for later versions of ElasticSearch
The Delete-By-Query plugin has been removed in favor of a new Delete By Query API implementation in core. Read here
curl -XPOST 'localhost:9200/twitter/tweet/_delete_by_query?conflicts=proceed&pretty' -H 'Content-Type: application/json' -d'
{
"query": {
"match_all": {}
}
}'
From Elasticsearch 5.x, the delete_by_query API is available by default:
POST http://localhost:9200/index/type/_delete_by_query
{
"query": {
"match_all": {}
}
}
You can delete documents from a type with the following query:
POST /index/type/_delete_by_query
{
"query" : {
"match_all" : {}
}
}
I tested this query in Kibana and Elastic 5.5.2
Torsten Engelbrecht's comment on John Petrone's answer, expanded:
curl -XDELETE 'http://localhost:9200/twitter/tweet/_query' -d
'{
"query":
{
"match_all": {}
}
}'
(I did not want to edit John's reply, since it got upvotes and is set as answer, and I might have introduced an error)
Starting from Elasticsearch 2.x, delete by query is no longer allowed in core, since documents would remain in the index, causing index corruption.
As of Elasticsearch 7.x the delete-by-query plugin no longer exists; it was removed in favor of the new Delete By Query API in core.
The curl option:
curl -X POST "localhost:9200/my-index/_delete_by_query" -H 'Content-Type: application/json' -d' { "query": { "match_all":{} } } '
Or in Kibana
POST /my-index/_delete_by_query
{
"query": {
"match_all":{}
}
}
The above answers no longer work with ES 6.2.2 because of Strict Content-Type Checking for Elasticsearch REST Requests. The curl command which I ended up using is this:
curl -H'Content-Type: application/json' -XPOST 'localhost:9200/yourindex/_doc/_delete_by_query?conflicts=proceed' -d' { "query": { "match_all": {} }}'
In Kibana Console:
POST calls-xin-test-2/_delete_by_query
{
"query": {
"match_all": {}
}
}
(Reputation not high enough to comment)
The second part of John Petrone's answer works - no query needed. It will delete the type and all documents contained in that type, but that can just be re-created whenever you index a new document to that type.
Just to clarify:
$ curl -XDELETE 'http://localhost:9200/twitter/tweet'
Note: this does delete the mapping! But as mentioned before, it can be easily re-mapped by creating a new document.
Note for ES2+
Starting with ES 1.5.3 the delete-by-query API is deprecated, and is completely removed since ES 2.0
Instead of the API, the Delete By Query is now a plugin.
In order to use the Delete By Query plugin you must install the plugin on all nodes of the cluster:
sudo bin/plugin install delete-by-query
All of the nodes must be restarted after the installation.
The usage of the plugin is the same as the old API. You don't need to change anything in your queries - this plugin will just make them work.
*For complete information regarding WHY the API was removed you can read more here.
You have these alternatives:
1) Delete a whole index:
curl -XDELETE 'http://localhost:9200/indexName'
example:
curl -XDELETE 'http://localhost:9200/mentorz'
For more details, see https://www.elastic.co/guide/en/elasticsearch/reference/current/indices-delete-index.html
2) Delete by Query to those that match:
curl -XDELETE 'http://localhost:9200/mentorz/users/_query' -d
'{
"query":
{
"match_all": {}
}
}'
*Here mentorz is an index name and users is a type
I'm using elasticsearch 7.5 and when I use
curl -XPOST 'localhost:9200/materials/_delete_by_query?conflicts=proceed&pretty' -d'
{
"query": {
"match_all": {}
}
}'
which throws the error below:
{
"error" : "Content-Type header [application/x-www-form-urlencoded] is not supported",
"status" : 406
}
I also needed to add an extra -H 'Content-Type: application/json' header to the request to make it work:
curl -XPOST 'localhost:9200/materials/_delete_by_query?conflicts=proceed&pretty' -H 'Content-Type: application/json' -d'
{
"query": {
"match_all": {}
}
}'
{
"took" : 465,
"timed_out" : false,
"total" : 2275,
"deleted" : 2275,
"batches" : 3,
"version_conflicts" : 0,
"noops" : 0,
"retries" : {
"bulk" : 0,
"search" : 0
},
"throttled_millis" : 0,
"requests_per_second" : -1.0,
"throttled_until_millis" : 0,
"failures" : [ ]
}
Just to add a couple of cents to this:
The "delete_by_query" mentioned at the top is still available as a plugin in Elasticsearch 2.x,
although in the upcoming 5.x release it is replaced by the delete by query API in core.
In Elasticsearch 2.3, with the option
action.destructive_requires_name: true
in elasticsearch.yml, this does the trick:
curl -XDELETE http://localhost:9200/twitter/tweet
For future readers:
in Elasticsearch 7.x there's effectively one type per index - types are hidden
you can delete by query, but if you want to remove everything you'll be much better off removing and re-creating the index. That's because deletes are only soft deletes under the hood, until they trigger Lucene segment merges*, which can be expensive if the index is large. Meanwhile, removing an index is almost instant: it removes some files on disk and a reference in the cluster state.
* The video/slides are about Solr, but things work exactly the same in Elasticsearch, since this is Lucene-level functionality.
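If you do go the remove-and-recreate route, it is just two calls (index name and settings here are placeholders):
curl -XDELETE 'http://localhost:9200/my-index'
curl -XPUT 'http://localhost:9200/my-index' -H 'Content-Type: application/json' -d '{
"settings" : { "number_of_shards" : 1, "number_of_replicas" : 0 }
}'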
If you want to delete documents according to a date, you can use the Kibana console (v6.1.2):
POST index_name/_delete_by_query
{
"query" : {
"range" : {
"sendDate" : {
"lte" : "2018-03-06"
}
}
}
}

Elasticsearch ActiveMQ River Configuration

I have started configuring an ActiveMQ river. I have already installed the ActiveMQ plugin, but I am confused about how to make it work; the documentation is very brief. I followed the steps for creating a new river exactly, but I don't know what steps to follow next.
Note:
I have an ActiveMQ server up and running, and I tested it using a
simple JMS app to push a message into a queue.
I created a new river using:
curl -XPUT 'localhost:9200/_river/myindex_river/_meta' -d '{
"type" : "activemq",
"activemq" : {
"user" : "guest",
"pass" : "guest",
"brokerUrl" : "failover://tcp://localhost:61616",
"sourceType" : "queue",
"sourceName" : "elasticsearch",
"consumerName" : "activemq_elasticsearch_river_myindex_river",
"durable" : false,
"filter" : ""
},
"index" : {
"bulk_size" : 100,
"bulk_timeout" : "10ms"
}
}'
After creating the river above, I could get a status using
curl -XGET 'localhost:9200/my_index/_status', but that gives me the index
status, not the created river.
Please, any help getting me on the right road with the ActiveMQ river configuration in Elasticsearch would be appreciated.
As I told you on the mailing list, define the index.index value, or set the name of your river to be your index name (which is easier):
curl -XPUT 'localhost:9200/_river/my_index/_meta' -d '
{
"type":"activemq",
"activemq":{
"user":"guest",
"pass":"guest",
"brokerUrl":"failover://tcp://localhost:61616",
"sourceType":"queue",
"sourceName":"elasticsearch",
"consumerName":"activemq_elasticsearch_river_myindex_river",
"durable":false,
"filter":""
},
"index":{
"bulk_size":100,
"bulk_timeout":"10ms"
}
}'
or
curl -XPUT 'localhost:9200/_river/myindex_river/_meta' -d '
{
"type":"activemq",
"activemq":{
"user":"guest",
"pass":"guest",
"brokerUrl":"failover://tcp://localhost:61616",
"sourceType":"queue",
"sourceName":"elasticsearch",
"consumerName":"activemq_elasticsearch_river_myindex_river",
"durable":false,
"filter":""
},
"index":{
"index":"my_index",
"bulk_size":100,
"bulk_timeout":"10ms"
}
}'
That should help.
If not, update your question with what you see in the logs.
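To verify the river was registered at all, you can also read back what is stored under the _river index: the _meta document you created, and (assuming the plugin writes one, as rivers typically do) a _status document:
curl -XGET 'localhost:9200/_river/myindex_river/_meta?pretty'
curl -XGET 'localhost:9200/_river/myindex_river/_status?pretty'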
