Elasticsearch script query & geo distance query

The Elasticsearch version is 2.3.4.
How can I combine a geo_distance_query with a script_query?
I have an index containing a lot of store documents, each with a location field and a distance field.
Given a geo_point, I want to compute the distance from that point to every store and return only the stores whose computed distance is less than the value in their own distance field.
I think a combination of script_query and geo_distance_query should do this, but I don't know how to achieve it.
I tried the following code:
query: {
    bool: {
        must: [
            {
                script: {
                    script: {
                        inline: "doc['location'].arcDistance(" + _.lat + "," + _.lon + ") < doc['distance'].value",
                        lang: "painless"
                    }
                }
            }
        ]
    }
}
Elasticsearch returns this error:
[illegal_argument_exception] script_lang not supported [painless]
Has anyone encountered and solved this problem? What is the right way to implement this query?
I then changed the script language to groovy (Painless is not supported in 2.3.4, as the error says):
{
    bool: {
        must: [
            {
                script: {
                    script: {
                        inline: "doc['location'].arcDistance(" + _.lat + "," + _.lon + ") < doc['distance'].value",
                        lang: "groovy"
                    }
                }
            }
        ]
    }
}
This also fails:
message: '[script_exception] failed to run inline script [doc[\'location\'].arcDistance(31.89484,120.287825) < doc[\'distance\'].value] using lang [groovy]',
Full error response:
{
    "error": {
        "root_cause": [{
            "type": "script_exception",
            "reason": "failed to run inline script [doc['location'].arcDistance(31.89484,120.287825) < 3000] using lang [groovy]"
        }],
        "type": "search_phase_execution_exception",
        "reason": "all shards failed",
        "phase": "query_fetch",
        "grouped": true,
        "failed_shards": [{
            "shard": 0,
            "index": "business_stores_v1",
            "node": "Z_65eOYXT6u8aDf7mp2ZRg",
            "reason": {
                "type": "script_exception",
                "reason": "failed to run inline script [doc['location'].arcDistance(31.89484,120.287825) < 3000] using lang [groovy]",
                "caused_by": { "type": "null_pointer_exception", "reason": null }
            }
        }]
    },
    "status": 500
}

The error you're getting, i.e. the null_pointer_exception, is because one of the documents in your business_stores_v1 index has a null location field, and thus the expression fails:
doc['location'].arcDistance(...)
^
|
null_pointer_exception here
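
One way to avoid the null_pointer_exception, assuming the field names from the question (location and distance) and that inline groovy scripting stays enabled as in the original query, is to filter out documents that are missing either field before the script runs, e.g. with exists queries in the bool filter clause. A sketch, using the coordinates from the error message:

POST business_stores_v1/_search
{
    "query": {
        "bool": {
            "filter": [
                { "exists": { "field": "location" } },
                { "exists": { "field": "distance" } }
            ],
            "must": [
                {
                    "script": {
                        "script": {
                            "inline": "doc['location'].arcDistance(31.89484,120.287825) < doc['distance'].value",
                            "lang": "groovy"
                        }
                    }
                }
            ]
        }
    }
}

The filter clause simply removes the documents with missing fields before the groovy script is evaluated, so arcDistance is never called on a null location.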

Related

_update_by_query + script does not work correctly, error: Trying to create too many scroll contexts

Elasticsearch version: 7.6.2
JVM: 13.0.2
OS version: CentOS 7
This is my request:
POST recommend_index/_update_by_query
{
    "script": {
        "source": "ctx._source.rec_doctor_id = 1"
    },
    "query": {
        "bool": {
            "must": [{
                "terms": {
                    "id": ["22222"]
                }
            }]
        }
    }
}
This request does not complete correctly. The error message is:
{
    "error": {
        "root_cause": [
            {
                "type": "exception",
                "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [5000]. This limit can be set by changing the [search.max_open_scroll_context] setting."
            }
        ],
        "type": "search_phase_execution_exception",
        "reason": "Partial shards failure",
        "phase": "query",
        "grouped": true,
        "failed_shards": [
            {
                "shard": 1,
                "index": "recommend_index",
                "node": "XXX",
                "reason": {
                    "type": "exception",
                    "reason": "Trying to create too many scroll contexts. Must be less than or equal to: [5000]. This limit can be set by changing the [search.max_open_scroll_context] setting."
                }
            }
        ]
    },
    "status": 500
}
I am sure the current number of open scroll contexts is 0.
When I replace _update_by_query with _update, it updates normally.
Nothing has changed in ES since last Friday, yet suddenly this error is reported.
No configuration changes have been made to the ES server.
Follow-up:
I set the search.max_open_scroll_context parameter to 5000 and it made no difference.
I looked into the 7.6.2 release and found that others were having the same problem. See these issues: #71354 #56202
I guess this is the 7.6.2 bug being triggered by scrolling. I restarted the cluster nodes without upgrading and it worked!
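
For anyone hitting the same error, two standard APIs (nothing specific to this cluster) are worth trying before a restart: the node stats show how many search/scroll contexts are actually open, and the clear-scroll API releases all open scroll contexts:

GET _nodes/stats/indices/search

DELETE _search/scroll/_all

If the open_contexts / scroll_current counters in the stats really are near zero while the error persists, that points to the leaked-context bug referenced in the issues above rather than to genuine scroll usage.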

Accessing Text Keyword Field through a Script

I am trying to do some scripting in Elasticsearch.
Here is an example of the JSON segment in the request.
{
    "script_score": {
        "script": {
            "source": "doc.containsKey('var')?params.adder[doc['var'].keyword]:0 ",
            "params": {
                "adder": {
                    "type1": 1,
                    "type2": 1000
                }
            }
        }
    },
    "weight": 100000
}
This is the error that is thrown
{
    "shard": 0,
    "index": "",
    "node": "4eX6EgO2QAuBdc5zkUiDBg",
    "reason": {
        "type": "script_exception",
        "reason": "runtime error",
        "script_stack": [
            "org.elasticsearch.index.mapper.TextFieldMapper$TextFieldType.fielddataBuilder(TextFieldMapper.java:759)",
            "org.elasticsearch.index.fielddata.IndexFieldDataService.getForField(IndexFieldDataService.java:116)",
            "org.elasticsearch.index.query.QueryShardContext.lambda$lookup$0(QueryShardContext.java:290)",
            "org.elasticsearch.search.lookup.LeafDocLookup$1.run(LeafDocLookup.java:101)",
            "org.elasticsearch.search.lookup.LeafDocLookup$1.run(LeafDocLookup.java:98)",
            "java.base/java.security.AccessController.doPrivileged(AccessController.java:312)",
            "org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:98)",
            "org.elasticsearch.search.lookup.LeafDocLookup.get(LeafDocLookup.java:41)",
            "doc.containsKey('var')?params.adder[doc['var'].keyword]:0 ",
            " ^---- HERE"
        ],
        "script": "doc.containsKey('var')?params.adder[doc['var'].keyword]:0 ",
        "lang": "painless",
        "caused_by": {
            "type": "illegal_argument_exception",
            "reason": "Fielddata is disabled on text fields by default. Set fielddata=true on [var] in order to load fielddata in memory by uninverting the inverted index. Note that this can however use significant memory. Alternatively use a keyword field instead."
        }
    }
}
It's surprising to me that I cannot access the keyword field just because it is a sub-field. Do I need to create another field that is mapped as keyword?
Thank you
To access the keyword sub-field, try doc['var.keyword'] or doc['var.keyword'].value
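
Putting that into the original request, a sketch of the corrected script_score block is shown below; the extra size()/containsKey checks are just defensive guards (my addition, not something the original required) for documents that have no value for the field or a value missing from the adder map:

{
    "script_score": {
        "script": {
            "source": "doc['var.keyword'].size() != 0 && params.adder.containsKey(doc['var.keyword'].value) ? params.adder[doc['var.keyword'].value] : 0",
            "params": {
                "adder": {
                    "type1": 1,
                    "type2": 1000
                }
            }
        }
    },
    "weight": 100000
}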

How does elasticsearch handle returns inside a scripted update query?

I can't find the relevant documentation describing the return keyword. Where is this documented?
I am running the following query
POST /myindex/mytype/FwOaGmQBdhLB1nuQhK1Q/_update
{
    "script": {
        "source": """
            if (ctx._source.owner._id.equals(params.signedInUserId)) {
                for (int i = 0; i < ctx._source.managers.length; i++) {
                    if (ctx._source.managers[i].email.equals(params.managerEmail)) {
                        ctx._source.managers.remove(i);
                        return;
                    }
                }
            }
            ctx.op = 'noop';
        """,
        "lang": "painless",
        "params": {
            "signedInUserId": "auth0|5a78c1ccebf64a46ecdd0d9c",
            "managerEmail": "d#d.com"
        }
    },
    "_source": true
}
but I'm getting the error
"type": "illegal_argument_exception",
"reason": "failed to execute script",
"caused_by": {
"type": "script_exception",
"reason": "compile error",
"script_stack": [
"... ve(i);\n return;\n }\n }\n ...",
" ^---- HERE"
],
"script": <the script here>,
"lang": "painless",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "invalid sequence of tokens near [';'].",
"caused_by": {
"type": "no_viable_alt_exception",
"reason": null
}
}
If I remove the return keyword, the script runs but, as expected, I get the wrong behaviour. I can correct the behaviour by using a boolean flag to keep track of the email removal, but why can't I return early?
It's hard to say, but you can avoid the null/void return entirely by passing a lambda predicate to removeIf:
ctx._source.managers.removeIf(m -> m.email.equals(params.managerEmail))
Lambda expressions and method references work the same as Java’s.
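For completeness, here is a sketch of the whole update request rewritten around removeIf, keeping the noop behaviour of the original script (removeIf returns true when at least one element was removed):

POST /myindex/mytype/FwOaGmQBdhLB1nuQhK1Q/_update
{
    "script": {
        "source": """
            if (ctx._source.owner._id.equals(params.signedInUserId)) {
                // removeIf returns true if at least one manager matched and was removed
                if (!ctx._source.managers.removeIf(m -> m.email.equals(params.managerEmail))) {
                    ctx.op = 'noop';
                }
            } else {
                ctx.op = 'noop';
            }
        """,
        "lang": "painless",
        "params": {
            "signedInUserId": "auth0|5a78c1ccebf64a46ecdd0d9c",
            "managerEmail": "d#d.com"
        }
    },
    "_source": true
}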

Elasticsearch Filter Query by CIDR

For example, how would you build an Elasticsearch query that filters for documents whose ip field matches 192.168.100.14/24?
{
    query: {
        filtered: {
            filter: {
                ???
            }
        }
    }
}
To clarify, the documents I am searching have a property that is indexed as an IP field, and I want to find all documents that have an IP that matches a CIDR mask (to be specified in a filter).
Try this if you are using ES 2.2 or later:
{"query": {"term" : {"<ip_field_name>" : "192.168.100.14/24"}}}
The Elasticsearch ip field type does not support that kind of input at index time. Here is an example showing that it will fail:
Input:
PUT index1
{
    "mappings": {
        "type1": {
            "properties": {
                "ip_addr": {
                    "type": "ip"
                }
            }
        }
    }
}
POST index1/type1
{
    "ip_addr": "192.168.100.14/24"
}
Result:
{
    "error": {
        "root_cause": [
            {
                "type": "mapper_parsing_exception",
                "reason": "failed to parse [ip_addr]"
            }
        ],
        "type": "mapper_parsing_exception",
        "reason": "failed to parse [ip_addr]",
        "caused_by": {
            "type": "illegal_argument_exception",
            "reason": "failed to parse ip [192.168.100.14/24], not a valid ip address"
        }
    },
    "status": 400
}
Instead, if you strip off the /24 it will work properly.
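For example, the same document indexes without error once the /24 is removed:

POST index1/type1
{
    "ip_addr": "192.168.100.14"
}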

Update type of a field in Elasticsearch

I have an index in Elasticsearch and want to change the type of a field named currentTimeStamp from long to date, so that Kibana can work with it. The following is my current output of _mapping (other fields have been removed for brevity).
{
    "myIndexname": {
        "mappings": {
            "myType": {
                "properties": {
                    "currentTimeStamp": {
                        "type": "long"
                    }
                }
            }
        }
    }
}
When I try to run the following command to change the field type to date, I get the error response below. Any help on this is highly appreciated.
curl -X PUT myIndexname/_mapping/myType with the following payload:
{
    "myIndexname": {
        "properties": {
            "currentTimeStamp": {
                "type": "date",
                "format": "date_optional_time || epoch_millis"
            }
        }
    }
}
Error response:
{
    "error": {
        "root_cause": [
            {
                "type": "mapper_parsing_exception",
                "reason": "Root mapping definition has unsupported parameters: [optimizationframework : {properties={currentTimeStamp={type=date, format=date_optional_time || epoch_millis}}}]"
            }
        ],
        "type": "mapper_parsing_exception",
        "reason": "Root mapping definition has unsupported parameters: [optimizationframework : {properties={currentTimeStamp={type=date, format=date_optional_time || epoch_millis}}}]"
    },
    "status": 400
}
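
For what it's worth, the "unsupported parameters" error itself comes from repeating the index name inside the body: with PUT myIndexname/_mapping/myType the index and type are already in the URL, so the body starts directly at properties, roughly like this:

PUT myIndexname/_mapping/myType
{
    "properties": {
        "currentTimeStamp": {
            "type": "date",
            "format": "date_optional_time||epoch_millis"
        }
    }
}

Even with that body, though, Elasticsearch will generally refuse to change an existing field from long to date; the usual route is to create a new index whose mapping already declares currentTimeStamp as date and copy the data over, for example with the _reindex API (the new index name below is made up for illustration):

POST _reindex
{
    "source": { "index": "myIndexname" },
    "dest": { "index": "myIndexname_v2" }
}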
