Update by query in Elasticsearch

I am trying to run an update by query on my Elasticsearch index using the method provided in this answer. This is the query I've been trying to run:
curl -XPOST 'localhost:9200/my_index/_update_by_query' -d '
{
"query":{
"match":{
"latest_uuid":"d56ffe2095f511e6bcdd0acbdf0298e3"
}
},
"script" : "ctx._source.is_in_stock = \"false\";"
}'
But I keep getting the following error:
{
"error": {
"root_cause": [
{
"type": "class_cast_exception",
"reason": "java.lang.String cannot be cast to java.util.Map"
}
],
"type": "class_cast_exception",
"reason": "java.lang.String cannot be cast to java.util.Map"
},
"status": 500
}
What am I doing wrong here?

Found the solution.
Turns out that I had to use the following as script:
"script":{"inline":"ctx._source.is_in_stock = false"}

I think the problem may be the \"false\" (a string value), which cannot be cast.
curl -XPOST 'localhost:9200/my_index/_update_by_query' -d '
{
"query":{
"match":{
"latest_uuid":"d56ffe2095f511e6bcdd0acbdf0298e3"
}
},
"script" : "ctx._source.is_in_stock = false;"
}'
You can try it first. Waiting for your feedback! :)

Related

How can I create meta data on `Elasticsearch`?

I am using Elasticsearch 6.8, and I'd like to save some meta data on my index. The index already exists. I followed this doc: https://www.elastic.co/guide/en/elasticsearch/reference/current/mapping.html#add-field-mapping
curl "http://localhost:9200/idx_1/_mapping"
{
"idx_1": {
"mappings": {
"1": {
"properties": {
"name": {
"type": "text",
"fields": {
"keyword": {
"type": "keyword",
"ignore_above": 256
}
}
}
}
}
}
}
}
In order to create _meta data, I need to create the mapping type first.
I ran the code below to create a _meta mapping type for version.
curl -X PUT -H 'Content-Type: application/json' "http://localhost:9200/idx_1/_mapping" -d '
{"_meta": { "version": {"type": "text"}}}'
I got the error below:
{
"error": {
"root_cause": [
{
"type": "action_request_validation_exception",
"reason": "Validation Failed: 1: mapping type is missing;"
}
],
"type": "action_request_validation_exception",
"reason": "Validation Failed: 1: mapping type is missing;"
},
"status": 400
}
It says the mapping type is missing, but I have specified the type for version as text. Why does it say the type is missing?
It turns out that I was looking at the wrong documentation version. Based on the doc for Elasticsearch 6, https://www.elastic.co/guide/en/elasticsearch/reference/6.3/mapping-meta-field.html, the correct request is:
curl -X PUT "http://localhost:9200/idx1/_mapping/_doc" -H 'Content-Type: application/json' -d '{"_meta": {"version": "1235kljsdlkf"}}'

Changing the timestamp format of an Elasticsearch index

I am trying to load log records into Elasticsearch (7.3.1) and show the results in Kibana. Although the records are loaded into Elasticsearch and a curl GET shows them, they are not visible in Kibana.
Most of the time, this is because of the timestamp format. In my case, the proper timestamp format should be basic_date_time, but the index only has:
# curl -XGET 'localhost:9200/og/_mapping'
{"og":{"mappings":{"properties":{"#timestamp":{"type":"date"},"componentName":{"type":"text","fields":{"keyword":{"type":"keyword","ignore_above":256}}}}}}}%
I would like to add the format 'basic_date_time' to the #timestamp property, but every attempt is either rejected by Elasticsearch or does not change the index field.
I simply cannot find the right command to do the job.
For example, the simplest I could think of,
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/og/_mapping' -d'
{"mappings":{"properties":{"#timestamp":{"type":"date","format":"basic_date_time"}}}}
'
gives error
{"error":{"root_cause":[{"type":"mapper_parsing_exception","reason":"Root mapping definition has unsupported parameters: [mappings : {properties={#timestamp={format=basic_date_time, type=date}}}]"}],"type":"mapper_parsing_exception","reason":"Root mapping definition has unsupported parameters: [mappings : {properties={#timestamp={format=basic_date_time, type=date}}}]"},"status":400}%
and trying to do it via kibana with
PUT /og
{
"mappings": {
"properties": {
"#timestamp": { "type": "date", "format": "basic_date_time" }
}
}
}
gives
{
"error": {
"root_cause": [
{
"type": "resource_already_exists_exception",
"reason": "index [og/NIT2FoNfQpuPT3Povp97bg] already exists",
"index_uuid": "NIT2FoNfQpuPT3Povp97bg",
"index": "og"
}
],
"type": "resource_already_exists_exception",
"reason": "index [og/NIT2FoNfQpuPT3Povp97bg] already exists",
"index_uuid": "NIT2FoNfQpuPT3Povp97bg",
"index": "og"
},
"status": 400
}
I am not sure if I should even try this in Kibana. But I would be very glad if I could find the right curl command to change the index.
Thanks for helping, Ruud
You can do it either via curl like this:
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/og/_mapping' -d '{
"properties": {
"#timestamp": {
"type": "date",
"format": "basic_date_time"
}
}
}
'
Or in Kibana like this:
PUT /og/_mapping
{
"properties": {
"#timestamp": {
"type": "date",
"format": "basic_date_time"
}
}
}
Also worth noting: once an index/mapping is created, you usually cannot modify it (with very few exceptions). You can create a new index with the correct mapping and reindex your data into it.
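For example, a rough sketch of that approach on Elasticsearch 7.x (og_v2 is just a placeholder name for the new index):
# create a new index with the desired mapping
curl -H 'Content-Type: application/json' -XPUT 'http://localhost:9200/og_v2' -d '
{
  "mappings": {
    "properties": {
      "#timestamp": { "type": "date", "format": "basic_date_time" }
    }
  }
}'
# copy the documents over
curl -H 'Content-Type: application/json' -XPOST 'http://localhost:9200/_reindex' -d '
{
  "source": { "index": "og" },
  "dest": { "index": "og_v2" }
}'
Once the documents are copied, you can point your queries (or an alias) at the new index.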

How to ignore 404 errors when removing an ElasticSearch alias?

I'm trying to remove the alias my_alias on Elasticsearch 2.3 like this:
curl -XPOST 'http://localhost:9200/_aliases' -d '
{
"actions" : [
{ "remove" : { "index" : "*", "alias" : "my_alias" } }
]
}'
However, this alias may not exist. In that case, I get the following error:
{
"error": {
"reason": "aliases [my_alias] missing",
"resource.id": "my_alias",
"resource.type": "aliases",
"root_cause": [
{
"reason": "aliases [my_alias] missing",
"resource.id": "my_alias",
"resource.type": "aliases",
"type": "aliases_not_found_exception"
}
],
"type": "aliases_not_found_exception"
},
"status": 404
}
I tried adding ignore_unavailable to the action as { "remove" : { "index" : "*", "alias" : "my_alias", "ignore_unavailable": true } }, or simply ignore: 404, but to no avail. I looked at ElasticSearch's test suite for update_aliases in https://github.com/elastic/elasticsearch/tree/7560101ec73331acb39a042bde6130e59e4bb630/rest-api-spec/src/main/resources/rest-api-spec/test/indices.update_aliases, but couldn't find a test that did this.
I'm starting to think that there's no way to do that. Am I wrong?
I don't think this is possible. There is no delete "if exists" and that is somewhat expected from any HTTP API.
However, you can definitely ignore the response for this particular case in your code instead of treating it as an error.
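For example, from a shell script you could capture the HTTP status code and treat 404 the same as success (a sketch of that idea, not an Elasticsearch feature):
# remove the alias and remember the HTTP status code
status=$(curl -s -o /dev/null -w '%{http_code}' -XPOST 'http://localhost:9200/_aliases' -d '
{
  "actions" : [
    { "remove" : { "index" : "*", "alias" : "my_alias" } }
  ]
}')
# 200 = removed, 404 = it was not there in the first place; anything else is a real error
if [ "$status" != "200" ] && [ "$status" != "404" ]; then
  echo "Removing alias my_alias failed with HTTP $status" >&2
fi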

Elasticsearch Indexed Script Issue

I am using Elasticsearch 2.3.3. I am trying to set up a template query using the following Mustache script:
{
"from":"{{from}}",
"size":"{{size}}",
"query": {
"multi_match": {
"query": "{{query}}",
"type": "most_fields",
"fields": [ "meta.court^1.5",
"meta.judge^1.5",
"meta.suit_no^4",
"meta.party1^1.5",
"meta.party2^1.5",
"meta.subject^3",
"content"]
}
}
}
I have successfully indexed this script as follows:
POST /_search/template/myscript
{
"script":{
"from":"{{from}}"
,"size":"{{size}}"
,"query": {
"multi_match": {
"query": "{{query}}",
"type": "most_fields",
"fields": [ "meta.court^1.5", "meta.judge^1.5", "meta.suit_no^4", "meta.party1^1.5", "meta.party2^1.5", "meta.subject^3", "content"]
}
}
}
}
However when I try to render the template for example with:
GET _render/template
{
"id": "myscript",
"params":{
"from":"0"
,"size":"10"
,"query":"walter"
}
}
I get the following error:
{
"error": {
"root_cause": [
{
"type": "json_parse_exception",
"reason": "Unexpected character ('=' (code 61)): was expecting a colon to separate field name and value\n at [Source: [B#39f8927e; line: 1, column: 8]"
}
],
"type": "json_parse_exception",
"reason": "Unexpected character ('=' (code 61)): was expecting a colon to separate field name and value\n at [Source: [B#39f8927e; line: 1, column: 8]"
},
"status": 500
}
The funny thing is that I can successfully execute the script if it is stored as a file in the config/scripts directory of an ES node.
What am I missing here? Any help would be greatly appreciated.
Many thanks

No query registered for [match]

I'm working through some examples in the ElasticSearch Server book and trying to write a simple match query
{
"query" : {
"match" : {
"displayname" : "john smith"
}
}
}
This gives me the error:
{\"error\":\"SearchPhaseExecutionException[Failed to execute phase [query],
....
SearchParseException[[scripts][4]: from[-1],size[-1]: Parse Failure [Failed to parse source
....
QueryParsingException[[kb.cgi] No query registered for [match]]; }
I also tried
{
"match" : {
"displayname" : "john smith"
}
}
as per examples on http://www.elasticsearch.org/guide/reference/query-dsl/match-query/
EDIT: I think the remote server I'm using is not the latest 0.20.5 version, because using "text" instead of "match" seems to allow the query to work.
I've seen a similar issue reported here: http://elasticsearch-users.115913.n3.nabble.com/Character-escaping-td4025802.html
It appears the remote server I'm using is not running the latest 0.20.5 version of ElasticSearch, so the "match" query is not supported; instead it is "text", which works.
I came to this conclusion after seeing a similar issue reported here: http://elasticsearch-users.115913.n3.nabble.com/Character-escaping-td4025802.html
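For the record, the same query written with the older "text" query (which "match" later replaced) would look like this:
{
  "query" : {
    "text" : {
      "displayname" : "john smith"
    }
  }
}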
Your first query looks fine, but perhaps the way you use it in the request is not correct. Here is a complete example that works:
curl -XDELETE localhost:9200/test-idx
curl -XPUT localhost:9200/test-idx -d '{
"settings": {
"index": {
"number_of_shards": 1,
"number_of_replicas": 0
}
},
"mappings": {
"doc": {
"properties": {
"name": {
"type": "string", "index": "analyzed"
}
}
}
}
}
'
curl -XPUT localhost:9200/test-idx/doc/1 -d '{
"name": "John Smith"
}'
curl -XPOST localhost:9200/test-idx/_refresh
echo
curl "localhost:9200/test-idx/_search?pretty=true" -d '{
"query": {
"match" : {
"name" : "john smith"
}
}
}
'
echo
