Watcher was working and successfully alerting the Slack channel, but now I'm having trouble.
The only change I've made was to update its refresh interval. When I run the following GET, it returns the watcher action's state as "awaits_successful_execution".
GET _watcher/watch/my_watcher
{
"found": true,
"_id": "etl_incr_morp_to_hermes",
"_status": {
"version": 432497,
"state": {
"active": true,
"timestamp": "2017-03-24T07:14:41.301Z"
},
"actions": {
"notify-slack": {
"ack": {
"timestamp": "2017-03-24T07:14:41.301Z",
"state": "awaits_successful_execution"
}
}
}
}
...
I've checked Elastic's documentation. When I try to get more information about the watcher by calling the following API, I get this result:
GET _watcher
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "No endpoint or operation is available at [_watcher]"
}
],
"type": "illegal_argument_exception",
"reason": "No endpoint or operation is available at [_watcher]"
},
"status": 400
}
How can I troubleshoot Watcher? Are there any logs that I can check?
I found the answer!
The following request returns the specified watch's execution history.
GET .watcher-history*/_search
{
"query": {
"query_string": {
"query": "watch_id: my_watcher"
}
},
"size": 1,
"sort": [
{
"result.execution_time": { "order": "desc"}
}
]
}
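In addition, the Execute Watch API can be used to force a single run and inspect the result inline. A minimal call, assuming the same pre-5.x endpoint layout as the GET above (on 5.x the equivalent API lives under _xpack/watcher), looks like this:
POST _watcher/watch/my_watcher/_execute
The response includes the condition result and the status of each action, which usually shows why the Slack notification was never sent.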
Related
I'm trying to create a gsub pipeline, but before I do, I want to simulate it by following the many examples on the internet. Here's my code:
PUT _ingest/pipeline/removescript/_simulate
{
"pipeline" :{
"description": "remove script",
"processors": [
{ "gsub" :{
"field": "content",
"pattern": "(?:..)[^<%]+[^%>](?:..)",
"replacement": ""
}
}]
},
"docs": [
{
"_id": "tt",
"_source": {
"content": "leave <% remove me %> Me"
}
}]
}
However, when I run it I receive the following error:
No handler found for uri [/_ingest/pipeline/removescript/_simulate] and method [PUT]
If I change the PUT line to be:
PUT _ingest/pipeline/_simulate or PUT _ingest/pipeline/removescript
then I receive the following error:
{
"error": {
"root_cause": [
{
"type": "parse_exception",
"reason": "[processors] required property is missing",
"header": {
"property_name": "processors"
}
}
],
"type": "parse_exception",
"reason": "[processors] required property is missing",
"header": {
"property_name": "processors"
}
},
"status": 400
}
The _simulate endpoint works only with POST and not PUT:
POST _ingest/pipeline/removescript/_simulate
{
...
}
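For completeness, a sketch of the two-step flow, reusing the pipeline body from the question: first create the pipeline with PUT (no _simulate), then simulate it by name with POST.
PUT _ingest/pipeline/removescript
{
  "description": "remove script",
  "processors": [
    {
      "gsub": {
        "field": "content",
        "pattern": "(?:..)[^<%]+[^%>](?:..)",
        "replacement": ""
      }
    }
  ]
}
Once the pipeline exists, POST _ingest/pipeline/removescript/_simulate only needs a "docs" array in the body, whereas POST _ingest/pipeline/_simulate (without a pipeline name) needs both "pipeline" and "docs", as in the original request.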
I'm trying to sort search results by distance. However, when I try I get the following error:
{
"error": {
"root_cause": [
{
"type": "illegal_argument_exception",
"reason": "sort option [location] not supported"
}
],
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [
{
"shard": 0,
"index": "roeselaredev",
"node": "2UYlfd7sTd6qlJWgdK2wzQ",
"reason": {
"type": "illegal_argument_exception",
"reason": "sort option [location] not supported"
}
}
]
},
"status": 400
}
The query I sent looks like this:
GET _search
{
"query": {
"match_all": []
},
"sort": [
{
"geo_distance": {
"location": {
"lat": 50.9436034,
"long": 3.1242917
},
"order":"asc",
"unit":"km",
"distance_type":"plane"
}
},
{
"_score": {
"order":"desc"
}
}
]
}
As near as I can tell, I followed the instructions in the documentation to the letter. I'm not getting a malformed-query result, just a "not supported" result for the sort-by-distance option. Any ideas as to what I'm doing wrong?
The query DSL is invalid; the OP is almost correct :) but missing an underscore.
When sorting by distance, the sort option is _geo_distance, not geo_distance.
Example:
GET _search
{
"query": {
"match_all": []
},
"sort": [
{
"_geo_distance": {
"location": {
"lat": 50.9436034,
"long": 3.1242917
},
"order":"asc",
"unit":"km",
"distance_type":"plane"
}
},
{
"_score": {
"order":"desc"
}
}
]
}
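One more thing worth checking: _geo_distance sorting only works if the field is mapped as geo_point, and the point itself uses lat/lon keys. A minimal mapping sketch, reusing the index name from the error response and a made-up type name, assuming a pre-6.x mapping with types as in the rest of this thread:
PUT roeselaredev
{
  "mappings": {
    "my_type": {
      "properties": {
        "location": {
          "type": "geo_point"
        }
      }
    }
  }
}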
Below is my query. I want to change the score calculation using the function_score feature:
{
"size": 1,
"query":{
"function_score": {
"query": {
"bool": {
"must": [
{
"match": {
"messageText": "car"
}
}
]
}
},
"script_score" : {
"script" : "doc['time_views'].values[doc['time_views'].values.length-1]"
}
,
"boost_mode": "replace"
}
},
"from": 0
}
But I got this error response:
{
"error": {
"root_cause": [
{
"type": "script_exception",
"reason": "failed to run inline script [doc['time_views'].values[doc['time_views'].values.length-1]] using lang [groovy]"
}
],
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [
{
"shard": 0,
"index": "datacollection",
"node": "TWeZV3R6Rq-WYQ2YIHjILQ",
"reason": {
"type": "script_exception",
"reason": "failed to run inline script [doc['time_views'].values[doc['time_views'].values.length-1]] using lang [groovy]",
"caused_by": {
"type": "illegal_argument_exception",
"reason": "No field found for [time_views] in mapping with types [message]"
}
}
}
]
},
"status": 500
}
Some answers say that the quotation marks in "doc['time_views']" cause the problem when the query is sent from command-line tools, but I don't know why.
I don't use any command-line tools; I create the query directly in Java code.
EDIT
This is my index mapping:
"mappings": {
"message": {
"properties": {
"text": {
"type": "string"
},
"time_views": {
"type": "nested",
"properties": {
"backupTimestamp": {
"type": "long"
},
"views": {
"type": "integer"
}
}
}
}
}
}
}
I want to use "views" of last item of "time_views". so I try below scripts too, but each of them throw different error:
"doc['time_views.views'].values[doc['time_views.views'].values.length-1]"
error: java.util.ArrayList cannot be cast to java.lang.Number
"doc['time_views.views'].values[doc['time_views.views'].values.size()-1]"
error: failed to run inline script [doc['time_views.views'].values[doc['time_views.views'].values.size()-1]] using lang [groovy]
"doc['time_views'].values[doc['time_views'].values.size()-1].views"
error: failed to run inline script [doc['time_views'].values[doc['time_views'].values.size()-1].views] using lang [groovy]
I'm really new to Elasticsearch and the Groovy language. I hadn't paid attention to the fact that "time_views" is a nested object, and I don't know Groovy syntax exactly. After some effort I found my mistakes and the solution:
{
"size": 1,
"query":{
"function_score": {
"query": {
"bool": {
"must": [
{
"match": {
"messageText": "car"
}
}
]
}
},
"script_score" : {
"script" : "doc['time_views.views'].values.get(doc['time_views.views'].values.size()-1)"
}
,
"boost_mode": "replace"
}
},
"from": 0
}
It works as I expected.
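As a small defensive variant (my own addition, not part of the original answer), the script can guard against documents that have no time_views entries, so that they score 0 instead of throwing an error:
"script_score": {
  "script": "def v = doc['time_views.views'].values; v.isEmpty() ? 0 : v.get(v.size() - 1)"
}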
I can't figure out what's wrong with my ES query.
I want to filter on a specific field and also sort by another field.
Request:
GET /_search
{
"query" : {
"term": {
"_type" : "monitor"
},
"filtered" : {
"filter" : { "term" : { "ProcessName" : "myProc" }}
}
},
"sort": { "TraceDateTime": { "order": "desc", "ignore_unmapped": "true" }}
}
Response:
{
"error": {
"root_cause": [
{
"type": "parse_exception",
"reason": "failed to parse search source. expected field name but got [START_OBJECT]"
}
],
"type": "search_phase_execution_exception",
"reason": "all shards failed",
"phase": "query",
"grouped": true,
"failed_shards": [
{
"shard": 0,
"index": ".kibana",
"node": "94RPDCjhQh6eoTe6XoRmSg",
"reason": {
"type": "parse_exception",
"reason": "failed to parse search source. expected field name but got [START_OBJECT]"
}
}
]
},
"status": 400
}
You have a syntax error in your query: you need to enclose both of your term queries inside a bool/must compound query, like this:
POST /_search
{
"query": {
"filtered": {
"filter": {
"bool": {
"must": [
{
"term": {
"ProcessName": "myProc"
}
},
{
"term": {
"_type": "monitor"
}
}
]
}
}
}
},
"sort": {
"TraceDateTime": {
"order": "desc",
"ignore_unmapped": "true"
}
}
}
PS: Always use POST when sending a payload in your query.
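Note that the filtered query was deprecated in Elasticsearch 2.0 and removed in 5.0. On newer versions the same search can be expressed with a bool query and a filter clause (unmapped_type replaces the older ignore_unmapped sort option; the "date" value below is an assumption about the TraceDateTime field), roughly like this:
POST /_search
{
  "query": {
    "bool": {
      "filter": [
        { "term": { "ProcessName": "myProc" } },
        { "term": { "_type": "monitor" } }
      ]
    }
  },
  "sort": {
    "TraceDateTime": {
      "order": "desc",
      "unmapped_type": "date"
    }
  }
}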
I am facing an issue while trying to execute a script within an ES JSON request.
The request:
POST _search
{
"query": {
"bool": {
"must": [
{
"match_all": {}
}
]
}
},
"aggs": {
"bucket_histogram": {
"histogram": {
"field": "dayTime",
"interval": 10
},
"aggs": {
"get_average": {
"avg": {
"field": "value"
}
},
"check-threshold": {
"bucket_script": {
"buckets_path": {
"averageValue": "get_average"
},
"script": "averageValue - doc[\"thresholdValue\"].value"
}
}
}
}
}
}
But I get this error instead of the expected values:
{
"error": {
"root_cause": [],
"type": "reduce_search_phase_exception",
"reason": "[reduce] ",
"phase": "fetch",
"grouped": true,
"failed_shards": [],
"caused_by": {
"type": "groovy_script_execution_exception",
"reason": "failed to run inline script [averageValue - doc[\"thresholdValue\"].value] using lang [groovy]",
"caused_by": {
"type": "missing_property_exception",
"reason": "No such property: doc for class: 7dcca7d142ac809a7192625d43d95bde9883c434"
}
}
},
"status": 503
}
Yet if I replace doc["thresholdValue"] with a literal number, everything works fine.
You are using a bucket_script, which is a part of the pipeline aggregations released with Elasticsearch 2.0. Pipeline aggregations work against other aggregations and not documents, which is why the doc context is not supplied to the aggregation.
If you want to process aggregations against specific documents, then perhaps you want the scripted metric aggregation instead.
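For example, one way to avoid the doc context entirely (a sketch, assuming each bucket should be compared against the maximum thresholdValue inside that bucket) is to add a sibling metric for the threshold and reference it through buckets_path, so the script only sees aggregation values:
"aggs": {
  "get_average": {
    "avg": { "field": "value" }
  },
  "get_threshold": {
    "max": { "field": "thresholdValue" }
  },
  "check-threshold": {
    "bucket_script": {
      "buckets_path": {
        "averageValue": "get_average",
        "thresholdValue": "get_threshold"
      },
      "script": "averageValue - thresholdValue"
    }
  }
}
This would replace the inner aggs block of the bucket_histogram aggregation in the original request.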