How to update a date field in Elasticsearch

I have data in my Elasticsearch index, indexed as follows:
curl -XPUT 'http://localhost:9200/twitter/tweet/1' -d '{
  "user" : "kimchy",
  "post_date" : "2009-11-15T14:12:12",
  "message" : "trying out Elasticsearch"
}'
The mapping, obtained using:
curl -XGET localhost:9200/twitter
{
  "twitter" : {
    "aliases" : {},
    "mappings" : {
      "tweet" : {
        "properties" : {
          "message" : { "type" : "string" },
          "post_date" : { "type" : "date", "format" : "strict_date_optional_time||epoch_millis" },
          "user" : { "type" : "string" }
        }
      }
    },
    "settings" : {
      "index" : {
        "creation_date" : "1456739938440",
        "number_of_shards" : "5",
        "number_of_replicas" : "1",
        "uuid" : "DwhS57l1TsKQFyzY23LSiQ",
        "version" : { "created" : "2020099" }
      }
    },
    "warmers" : {}
  }
}
Modifying the user field as below works fine:
curl -XPOST 'http://localhost:9200/twitter/tweet/1/_update' -d '{
  "script" : "ctx._source.user=new_user",
  "params" : {
    "new_user" : "search"
  }
}'
But when I try to modify the date field as below, it throws an exception:
curl -XPOST 'http://localhost:9200/twitter/tweet/1/_update' -d '{
  "script" : "ctx._source.post_date='new-date'",
  "params" : {
    "new-date" : "2016-02-02T16:12:23"
  }
}'
The exception received is:
{
  "error" : {
    "root_cause" : [ {
      "type" : "remote_transport_exception",
      "reason" : "[Anomaly][127.0.0.1:9300][indices:data/write/update[s]]"
    } ],
    "type" : "illegal_argument_exception",
    "reason" : "failed to execute script",
    "caused_by" : {
      "type" : "script_exception",
      "reason" : "Failed to compile inline script [ctx._source.post_date=new-date] using lang [groovy]",
      "caused_by" : {
        "type" : "script_exception",
        "reason" : "failed to compile groovy script",
        "caused_by" : {
          "type" : "multiple_compilation_errors_exception",
          "reason" : "startup failed:\ne90a551666b36d90e4fc5b08d04250da5c4d552d: 1: unexpected token: - # line 1, column 26.\n ctx._source.post_date=new-date\n ^\n\n1 error\n"
        }
      }
    }
  }
}
Can anyone let me know how to handle this?
~Prashant

POST your_index_name/_update_by_query?conflicts=proceed
{
  "script" : {
    "source": "ctx._source.publishedDate=new SimpleDateFormat('yyyy-MM-dd').parse('2021-05-07')",
    "lang": "painless"
  }
}
publishedDate is the name of your date field, and your_index_name is the name of your index.
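Since ctx._source is just a map of the document's fields, assigning the date string directly should also work, as long as the string matches the field's date format. A sketch, assuming the same hypothetical index and field names:
POST your_index_name/_update_by_query?conflicts=proceed
{
  "script" : {
    "source": "ctx._source.publishedDate = '2021-05-07'",
    "lang": "painless"
  }
}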

In Groovy (or in Java), an identifier can't contain a '-'. If you write
ctx._source.last-login = new-login
Groovy (and Java!) parses this as:
(ctx._source.last)-(login) = (new)-(login)
You should quote such properties:
ctx._source.'last-login' = 'new-login'
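Applied to the original question, the simplest fix is to rename the parameter so it contains no hyphen and reference it unquoted (note that the inner quotes around 'new-date' were being stripped by the shell anyway, since the whole -d body is already wrapped in single quotes). A sketch, mirroring the working user-field update above:
curl -XPOST 'http://localhost:9200/twitter/tweet/1/_update' -d '{
  "script" : "ctx._source.post_date = new_date",
  "params" : {
    "new_date" : "2016-02-02T16:12:23"
  }
}'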

Related

Elasticsearch 5.4.0 - How to add new field to existing document

In production we already have 2000+ documents, and we need to add a new field to the existing documents. Is it possible to add a new field? How can I add a new field to an existing document?
You can use the update by query API in order to add a new field to all your existing documents:
POST your_index/_update_by_query
{
  "query": {
    "match_all": {}
  },
  "script": {
    "inline": "ctx._source.new_field = 0",
    "lang": "painless"
  }
}
Note: if your new field is a string, change 0 to '' instead.
You can also add the new field to the index mapping directly by running the following curl command in the terminal:
curl -X PUT "localhost:9200/your_index/_mapping/defined_mapping" -H 'Content-Type: application/json' -d '{
  "properties" : {
    "field_name" : { "type" : "type_of_data" }
  }
}'
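To confirm the field was added, you can read the mapping back (a quick check, assuming the same index name):
curl -XGET "localhost:9200/your_index/_mapping?pretty"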

elasticsearch cluster update API does not work

My Elasticsearch version is 2.1.1.
I just want to update discovery.zen.minimum_master_nodes as described in the Elasticsearch docs by running this:
curl -XPUT localhost:9200/_cluster/settings -d '{
  "transient" : {
    "discovery.zen.minimum_master_nodes" : "2"
  }
}'
The response is {"acknowledged":true,"persistent":{},"transient":{}}, which means nothing was updated (although it was acknowledged). The response should be something like this:
{
  "persistent" : {},
  "transient" : {
    "discovery.zen.minimum_master_nodes" : "2"
  }
}
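To double-check what the cluster actually stored, you can read the settings back:
curl -XGET 'localhost:9200/_cluster/settings?pretty'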

Elasticsearch query does not take variable?

A dynamic value or variable doesn't work inside an Elasticsearch "range" query.
To explain further: this is an Elasticsearch range query that finds productId from 1000 to 11100, and it works perfectly:
$json = '{
    "query" : {
        "range" : {
            "productId" : {
                "from" : '1000',
                "to" : '11100'
            }
        }
    }
}';
On the other hand, using the same query with variables holding the same values returns an error like:
{"error":"SearchPhaseExecutionException[Failed to execute phase [query], all shards failed; shardFailures
$a = 1000;
$b = 11100;
$json = '{
    "query" : {
        "range" : {
            "productId" : {
                "from" : '$a',
                "to" : '$b'
            }
        }
    }
}';
Does anyone know where I am making the mistake?
Any suggestion would be a great help. Thanks in advance.
If this is PHP, there's a problem with string concatenation:
$a = 1000;
$b = 11100;
$json = '{
    "query" : {
        "range" : {
            "productId" : {
                "from" : '.$a.',
                "to" : '.$b.'
            }
        }
    }
}';
Note the dots around the variables.
If you run the original piece of code by itself, the PHP parser should give you a parse error.
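For reference, with the concatenation fixed, the string serializes to this raw request (index name hypothetical); the variables become unquoted JSON numbers:
curl -XPOST 'localhost:9200/your_index/_search' -d '{
  "query" : {
    "range" : {
      "productId" : {
        "from" : 1000,
        "to" : 11100
      }
    }
  }
}'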

Elasticsearch has_child query/filter in Kibana 4

I cannot seem to get the has_child query (or filter) to work in Kibana 4. My code works in Elasticsearch directly as a curl script, but not in Kibana 4, yet I understood this was a key feature of the upgrade. Can anybody shed any light?
The following curl script works in Elasticsearch, returning all of the parents that have a child object:
curl -XPOST localhost:port/indexname/_search?pretty -d '{
  "query" : {
    "has_child" : {
      "type" : "object",
      "query" : {
        "match_all" : {}
      }
    }
  }
}'
The above runs fine. To convert it into the JSON query to submit within Kibana, I followed the general formatting rules: I dropped the curl line and added the index name (and sometimes a blank filter [], but that doesn't seem to make much difference). No error is thrown, but the whole dataset is returned.
{
  "index" : "indexname",
  "query" : {
    "has_child" : {
      "type" : "object",
      "query" : {
        "match_all" : {}
      }
    }
  }
}
Am I missing something? Has anybody else got a has_child query to run in Kibana 4?
Many thanks in advance
Toby

Elasticsearch ActiveMQ River Configuration

I started to configure an ActiveMQ river. I have already installed the ActiveMQ plugin, but I am confused about how to make it work; the documentation is quite brief. I followed the steps for creating a new river exactly, but I don't know what steps to follow next.
Note: I have an ActiveMQ server up and running, and I tested it using a simple JMS app to push a message into a queue.
I created a new river using:
curl -XPUT 'localhost:9200/_river/myindex_river/_meta' -d '{
  "type" : "activemq",
  "activemq" : {
    "user" : "guest",
    "pass" : "guest",
    "brokerUrl" : "failover://tcp://localhost:61616",
    "sourceType" : "queue",
    "sourceName" : "elasticsearch",
    "consumerName" : "activemq_elasticsearch_river_myindex_river",
    "durable" : false,
    "filter" : ""
  },
  "index" : {
    "bulk_size" : 100,
    "bulk_timeout" : "10ms"
  }
}'
After creating the river, I can get its status using curl -XGET 'localhost:9200/my_index/_status', but that gives me the index status, not the created river.
Please, any help to get me on the right road with ActiveMQ river configuration in Elasticsearch would be appreciated.
As I told you on the mailing list: define the index.index value, or (easier) set the name of your river to be your index name:
curl -XPUT 'localhost:9200/_river/my_index/_meta' -d '{
  "type" : "activemq",
  "activemq" : {
    "user" : "guest",
    "pass" : "guest",
    "brokerUrl" : "failover://tcp://localhost:61616",
    "sourceType" : "queue",
    "sourceName" : "elasticsearch",
    "consumerName" : "activemq_elasticsearch_river_myindex_river",
    "durable" : false,
    "filter" : ""
  },
  "index" : {
    "bulk_size" : 100,
    "bulk_timeout" : "10ms"
  }
}'
or
curl -XPUT 'localhost:9200/_river/myindex_river/_meta' -d '{
  "type" : "activemq",
  "activemq" : {
    "user" : "guest",
    "pass" : "guest",
    "brokerUrl" : "failover://tcp://localhost:61616",
    "sourceType" : "queue",
    "sourceName" : "elasticsearch",
    "consumerName" : "activemq_elasticsearch_river_myindex_river",
    "durable" : false,
    "filter" : ""
  },
  "index" : {
    "index" : "my_index",
    "bulk_size" : 100,
    "bulk_timeout" : "10ms"
  }
}'
That should help. If not, update your question with what you see in the logs.
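Also, to inspect the river itself (rather than the target index as in the question), query the _river index directly; the river keeps its configuration in a _meta document and should record its state in a _status document (a sketch, assuming the river name from above):
curl -XGET 'localhost:9200/_river/myindex_river/_meta?pretty'
curl -XGET 'localhost:9200/_river/myindex_river/_status?pretty'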
