I'm currently configuring Watcher to search the access logs, count how many errors there have been so far, and send the result to a Slack channel.
The problem is that I can't know in advance how many aggregation buckets the query will return, and my configuration is hardcoded to send at most 5, so if the result contains more than 5 buckets it doesn't work.
I'm searching for the 404 status code in the query and filtering for a single server; then I just need to send all the bucket results in the notification, like:
Total: total-number-of-hits
Logs:
log1: number-of-results
log2: number-of-results
log3: number-of-results
log4: number-of-results
log5: number-of-results
log6: number-of-results
Here's my configuration:
"trigger" : {
"schedule" : { "interval" : "1h" }
},
"input" : {
"search" : {
"request": {
"body": {
"query": {
"bool": {
"must": [
{ "range": {
"@timestamp": {
"gte": "now-1h",
"lte": "now"
}
}
},
{
"match": {
"beat.hostname": "someserver"
}
}
],
"filter": {
"term": {
"response": "404"
}
}
}
},
"aggs": {
"host": {
"terms": {
"field": "beat.hostname",
"size": 1
}
},
"logs_list": {
"terms": {
"field": "source",
"size": 10
}
}
}
}
}
}
},
"condition": {
"compare" : { "ctx.payload.hits.total" : { "gt" : 0 }}
},
"actions" : {
"notify-slack" : {
"throttle_period" : "30m",
"slack" : {
"message" : {
"from": "Watcher",
"to" : [ "somechannel" ],
"attachments" : [
{
"title" : "404 status code found",
"text" : "Encountered: {{ctx.payload.hits.total}} in the last hour on {{ctx.payload.aggregations.host.buckets.0.key}} \n Files: \n {{ctx.payload.aggregations.logs_list.buckets.0.key}}: {{ctx.payload.aggregations.logs_list.buckets.0.doc_count}} \n {{ctx.payload.aggregations.logs_list.buckets.1.key}}: {{ctx.payload.aggregations.logs_list.buckets.1.doc_count}} \n {{ctx.payload.aggregations.logs_list.buckets.2.key}}: {{ctx.payload.aggregations.logs_list.buckets.2.doc_count}} \n {{ctx.payload.aggregations.logs_list.buckets.3.key}}: {{ctx.payload.aggregations.logs_list.buckets.3.doc_count}} \n {{ctx.payload.aggregations.logs_list.buckets.4.key}}: {{ctx.payload.aggregations.logs_list.buckets.4.doc_count}} \n {{ctx.payload.aggregations.logs_list.buckets.5.key}}: {{ctx.payload.aggregations.logs_list.buckets.5.doc_count}}",
"color" : "danger"
}
]
}
}
}
}
I don't know how I should build the "text" in the action. Any ideas how to pass all the bucket results?
Thanks in advance; I'm using X-Pack, the ELK stack, and Logstash.
If I understand your question correctly, you want to loop over your aggregation in the action. Try this:
{{#ctx.payload.aggregations.myAggName.buckets}}
{{key}}: {{doc_count}}
{{/ctx.payload.aggregations.myAggName.buckets}}
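Applied to the watch above, the Slack attachment's text collapses to a single Mustache section that renders once per bucket, however many buckets come back. This is a sketch reusing the aggregation names from the question (host, logs_list):

```json
"attachments" : [
  {
    "title" : "404 status code found",
    "text" : "Encountered: {{ctx.payload.hits.total}} in the last hour on {{ctx.payload.aggregations.host.buckets.0.key}}\nFiles:\n{{#ctx.payload.aggregations.logs_list.buckets}}{{key}}: {{doc_count}}\n{{/ctx.payload.aggregations.logs_list.buckets}}",
    "color" : "danger"
  }
]
```

You may also want to raise the terms aggregation's size above 10 if more than ten log files can report 404s in an hour, since the section only loops over the buckets the aggregation actually returns.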
Related
I want to configure an Elasticsearch webhook watcher that looks for the keyword "error" in my indices and generates an OTRS ticket if it is found.
Right now I have the following configuration:
{
"trigger": {
"schedule": {"interval": "1m"}
},
"input": {
"search": {
"request": {
"body": {
"size": 0,
"query": {"match_all": "Error"}
},
"indices": ["*"]
}
}
},
"condition": {
"compare": {
"ctx.payload.hits.total": {
"gte": 1
}
}
},
"actions" : {
"create_otrs" : {
"transform": {
"script": """{"Ticket":{"Queue":"EngineeringTeam","Priority":"P3","CustomerUser":"root@localhost","Title":"RESTCreateTest","State":"new","Type":"Incident"},"Article":{"ContentType":"text/plain;charset=utf8","Subject":"RestCreateTest","Body":"Thisisonlyatest"}}"""
},
"webhook" : {
"method" : "POST",
"host" : "http://myotrs.com/otrs/nph-genericinterface.pl/Webservice/GenericTicketConnectorREST/Ticket?UserLogin=<user>&Password=<pass>",
"port": 9200,
"body": "{{#toJson}}ctx.payload{{/toJson}}",
"auth" : {
"basic" : {
"username" : "elastic",
"password" : "<elasticsearch pass>"
}
}
}
}
}
}
This gives me Error saving watch : compile error and the watcher will not simulate. There is no syntax error in the JSON, by the way. What is wrong in the configuration? A curl request successfully generates the OTRS ticket, but I am having a hard time configuring it with Elasticsearch.
Tldr;
Your transform script is wrong.
As per the documentation:
The executed script may either return a valid model that is the equivalent of a Java™ Map or a JSON object (you will need to consult the documentation of the specific scripting language to find out what this construct is).
Solution
You can do something as simple as converting your JSON into a string:
{
"Ticket": {
"Queue": "EngineeringTeam",
"Priority": "P3",
"CustomerUser": "root@localhost",
"Title": "RESTCreateTest",
"State": "new",
"Type": "Incident"
},
"Article": {
"ContentType": "text/plain;charset=utf8",
"Subject": "RestCreateTest",
"Body": "Thisisonlyatest"
}
}
Becomes:
"{\"Ticket\":{\"Queue\":\"EngineeringTeam\",\"Priority\":\"P3\",\"CustomerUser\":\"root@localhost\",\"Title\":\"RESTCreateTest\",\"State\":\"new\",\"Type\":\"Incident\"},\"Article\":{\"ContentType\":\"text/plain;charset=utf8\",\"Subject\":\"RestCreateTest\",\"Body\":\"Thisisonlyatest\"}}"
And use the Json.load function to convert the string into a proper object.
Your watch will look like:
{
"watch" : {
"trigger": {
"schedule": {"interval": "1m"}
},
"input": {
"search": {
"request": {
"body": {
"size": 0,
"query": {"match_all": "Error"}
},
"indices": ["*"]
}
}
},
"condition": {
"compare": {
"ctx.payload.hits.total": {
"gte": 1
}
}
},
"actions" : {
"create_otrs" : {
"transform": {
"script": """return Json.load("{\"Ticket\":{\"Queue\":\"EngineeringTeam\",\"Priority\":\"P3\",\"CustomerUser\":\"root@localhost\",\"Title\":\"RESTCreateTest\",\"State\":\"new\",\"Type\":\"Incident\"},\"Article\":{\"ContentType\":\"text/plain;charset=utf8\",\"Subject\":\"RestCreateTest\",\"Body\":\"Thisisonlyatest\"}}");"""
},
"webhook" : {
"method" : "POST",
"host" : "http://myotrs.com/otrs/nph-genericinterface.pl/Webservice/GenericTicketConnectorREST/Ticket?UserLogin=<user>&Password=<pass>",
"port": 9200,
"body": "{{#toJson}}ctx.payload{{/toJson}}",
"auth" : {
"basic" : {
"username" : "elastic",
"password" : "<elasticsearch pass>"
}
}
}
}
}
}
}
Another error you have in your watch is the query:
{
"search": {
"request": {
"body": {
"size": 0,
"query": {"match_all": "Error"}
},
"indices": ["*"]
}
}
}
match_all should take an object such as {}, so "Error" is not going to work.
So in the end the watch looks like:
{
"watch" : {
"trigger": {
"schedule": {"interval": "1m"}
},
"input": {
"search": {
"request": {
"body": {
"size": 0,
"query": {"match_all": {}}
},
"indices": ["*"]
}
}
},
"condition": {
"compare": {
"ctx.payload.hits.total": {
"gte": 1
}
}
},
"actions" : {
"create_otrs" : {
"transform": {
"script": """return Json.load("{\"Ticket\":{\"Queue\":\"EngineeringTeam\",\"Priority\":\"P3\",\"CustomerUser\":\"root@localhost\",\"Title\":\"RESTCreateTest\",\"State\":\"new\",\"Type\":\"Incident\"},\"Article\":{\"ContentType\":\"text/plain;charset=utf8\",\"Subject\":\"RestCreateTest\",\"Body\":\"Thisisonlyatest\"}}");"""
},
"webhook" : {
"method" : "POST",
"host" : "http://myotrs.com/otrs/nph-genericinterface.pl/Webservice/GenericTicketConnectorREST/Ticket?UserLogin=<user>&Password=<pass>",
"port": 9200,
"body": "{{#toJson}}ctx.payload{{/toJson}}",
"auth" : {
"basic" : {
"username" : "elastic",
"password" : "<elasticsearch pass>"
}
}
}
}
}
}
}
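Once saved, the watch can be test-run without waiting for the one-minute trigger via Watcher's execute API. This is a sketch: the watch id my_otrs_watch is a placeholder, and the _xpack path prefix applies to 6.x, so check it against your version's docs. Simulate mode logs what the webhook would do without actually calling OTRS:

```json
POST _xpack/watcher/watch/my_otrs_watch/_execute
{
  "ignore_condition": true,
  "action_modes": {
    "_all": "simulate"
  }
}
```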
I am using Elasticsearch 6.5.
Basically, based on my query my index can return multiple documents; I need only those documents that have the max value for a particular field.
E.g.
{
"query": {
"bool": {
"must": [
{
"match": { "header.date" : "2019-07-02" }
},
{
"match": { "header.field" : "ABC" }
},
{
"bool": {
"should": [
{
"regexp": { "body.meta.field": "myregex1" }
},
{
"regexp": { "body.meta.field": "myregex2" }
}
]
}
}
]
}
},
"size" : 10000
}
The above query will return lots of documents/messages as per the query. The sample data returned is:
"header" : {
"id" : "Text_20190702101200123_111",
"date" : "2019-07-02",
"field": "ABC"
},
"body" : {
"meta" : {
"field" : "myregex1",
"timestamp": "2019-07-02T10:12:00.123Z"
}
}
-----------------
"header" : {
"id" : "Text_20190702151200123_121",
"date" : "2019-07-02",
"field": "ABC"
},
"body" : {
"meta" : {
"field" : "myregex2",
"timestamp": "2019-07-02T15:12:00.123Z"
}
}
-----------------
"header" : {
"id" : "Text_20190702081200133_124",
"date" : "2019-07-02",
"field": "ABC"
},
"body" : {
"meta" : {
"field" : "myregex1",
"timestamp": "2019-07-02T08:12:00.133Z"
}
}
So based on the above 3 documents, I only want the one with the max timestamp to be shown, i.e. "timestamp": "2019-07-02T15:12:00.123Z".
I only want one document in the above example.
I tried doing it as below:
{
"query": {
"bool": {
"must": [
{
"match": { "header.date" : "2019-07-02" }
},
{
"match": { "header.field" : "ABC" }
},
{
"bool": {
"should": [
{
"regexp": { "body.meta.field": "myregex1" }
},
{
"regexp": { "body.meta.field": "myregex2" }
}
]
}
}
]
}
},
"aggs": {
"group": {
"terms": {
"field": "header.id",
"order": { "group_docs" : "desc" }
},
"aggs" : {
"group_docs": { "max" : { "field": "body.meta.timestamp" } }
}
}
},
"size": "10000"
}
Executing the above, I still get all 3 documents instead of only one.
I do get the buckets, though, but I need only one bucket, not all of them.
The output, in addition to all the records:
"aggregations": {
"group": {
"doc_count_error_upper_bound": 0,
"sum_other_doc_count": 0,
"buckets": [
{
"key": "Text_20190702151200123_121",
"doc_count": 29,
"group_docs": {
"value": 1564551683867,
"value_as_string": "2019-07-02T15:12:00.123Z"
}
},
{
"key": "Text_20190702101200123_111",
"doc_count": 29,
"group_docs": {
"value": 1564551633912,
"value_as_string": "2019-07-02T10:12:00.123Z"
}
},
{
"key": "Text_20190702081200133_124",
"doc_count": 29,
"group_docs": {
"value": 1564510566971,
"value_as_string": "2019-07-02T08:12:00.133Z"
}
}
]
}
}
What am I missing here?
Please note that I can have more than one message for the same timestamp, and I want them all, i.e. all the messages/documents belonging to the max timestamp.
In the above example there are 29 messages for the same timestamp (it can be any number), so my query retrieves 29 * 3 messages even with the aggregation above.
Basically the grouping works; I am looking for something like HAVING in SQL.
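One approach that may give the HAVING-like behaviour (a sketch, not verified against this index; field and aggregation names are taken from the question): order the terms aggregation by the max timestamp, keep only the first bucket with "size": 1, suppress the raw hits with a top-level "size": 0, and return the winning bucket's documents through a top_hits sub-aggregation:

```json
{
  "size": 0,
  "aggs": {
    "group": {
      "terms": {
        "field": "header.id",
        "size": 1,
        "order": { "group_docs": "desc" }
      },
      "aggs": {
        "group_docs": { "max": { "field": "body.meta.timestamp" } },
        "latest_docs": { "top_hits": { "size": 100 } }
      }
    }
  }
}
```

The bool query from the question stays in place alongside "aggs". A top_hits size of 100 (the default cap, per index.max_inner_result_window) covers the 29 documents per bucket here. One caveat: ordering a terms aggregation by a sub-aggregation is computed per shard, so on a multi-shard index the single bucket kept can be approximate.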
I'm trying to do a query for server logs. The search is returning results but there are a couple of issues.
1) I'm specifying the server name, yet I'm getting results back for other servers in the same domain.
2) Even though I'm asking for results from the past hour, they come back from two hours before, i.e. if I search at 1pm, the results start at 12pm. The search returns the correct results if I sort by timestamp, but that seems to make the results take longer to appear, so I would rather avoid it unless I have to.
Any help you can give is greatly appreciated.
Here's my query (with edited log name and server name):
var searchParams = {
index: 'logs*',
"body": {
"from" : 0, "size": 50,
"sort": [
{
"timestamp": {
"order": "desc",
"unmapped_type": "boolean"
}
}
],
"query": {
"bool": {
"must": [
{
"match" : {"gl2_source_input" : "579f7b6696d78a4f6cbfa745"},
"match" : {"source" : "server01.fakedomain.com"},
"match" : {"EventID" : "5145"}
},
{
"range": {
"timestamp": {
"gte": "now-1h",
"lte": "now/m",
"time_zone": "-05:00"
}
}
}
],
"must_not": []
}
},
}
}
A couple of things here:
If you want to match a keyword exactly, use a term query on a keyword-type field.
Unless you're interested in your queries being scored, use a filter clause instead of the must clause.
So your query could look something like this (assuming that your filter fields are keyword-type fields):
var searchParams = {
index: 'logs*',
"body": {
"from" : 0, "size": 50,
"sort": [
{
"timestamp": {
"order": "desc",
"unmapped_type": "boolean"
}
}
],
"query": {
"bool": {
"filter": [
{ "term" : {"gl2_source_input" : "579f7b6696d78a4f6cbfa745"} },
{ "term" : {"source" : "server01.fakedomain.com"} },
{ "term" : {"EventID" : "5145"} },
{
"range": {
"timestamp": {
"gte": "now-1h",
"lte": "now/m",
"time_zone": "-05:00"
}
}
}
]
}
},
}
}
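As a quick sanity check, the body can be built and inspected in Node before sending it. The client call is left as a comment because it needs a live cluster, and the '@elastic/elasticsearch' package name is an assumption about which client is in use:

```javascript
// Sketch: build the search params and verify that all clauses land in the
// non-scoring `filter` context before sending them to a cluster.
const searchParams = {
  index: 'logs*',
  body: {
    from: 0,
    size: 50,
    sort: [{ timestamp: { order: 'desc', unmapped_type: 'boolean' } }],
    query: {
      bool: {
        filter: [
          { term: { gl2_source_input: '579f7b6696d78a4f6cbfa745' } },
          { term: { source: 'server01.fakedomain.com' } },
          { term: { EventID: '5145' } },
          {
            range: {
              timestamp: { gte: 'now-1h', lte: 'now/m', time_zone: '-05:00' }
            }
          }
        ]
      }
    }
  }
};

// Four filter clauses: three exact term matches plus the time range.
console.log(searchParams.body.query.bool.filter.length); // prints 4

// With the official client (assumed), this would be sent as:
// const { Client } = require('@elastic/elasticsearch');
// const client = new Client({ node: 'http://localhost:9200' });
// client.search(searchParams).then(r => console.log(r.body.hits.hits));
```

Note the explicit time_zone on the range: date math like now-1h is evaluated in UTC unless told otherwise, which is one common cause of the "results shifted by two hours" symptom described in the question.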
I have an index with many documents in this format:
{
"userId": 1234,
"locationDate": "2016-07-19T19:24:51+0000",
"location": {
"lat": -47.38163,
"lon": 26.38916
}
}
In this index I have incremental positions from users, updated every few seconds.
I would like to execute a search that returns the latest position (sorted by locationDate) for each user (grouped by userId).
Is this possible with Elasticsearch? The best I could do was get all the positions from the last 30 seconds, using this:
{"query":{
"filtered" : {
"query" : {
"match_all" : { }
},
"filter" : {
"range" : {
"locationDate" : {
"from" : "2016-07-19T18:54:51+0000",
"to" : null,
"include_lower" : true,
"include_upper" : true
}
}
}
}
}}
And then I sort them by hand afterwards, but I would like to do this directly in Elasticsearch.
IMPORTANT: I am using Elasticsearch 1.5.2
Try this (with aggregations):
{
"query": {
"filtered": {
"query": {
"match_all": {}
},
"filter": {
"range": {
"locationDate": {
"from": "2016-07-19T18:54:51+0000",
"to": null,
"include_lower": true,
"include_upper": true
}
}
}
}
},
"aggs": {
"byUser": {
"terms": {
"field": "userId",
"size": 10
},
"aggs": {
"firstOne": {
"top_hits": {
"size": 1,
"sort": [
{
"locationDate": {
"order": "desc"
}
}
]
}
}
}
}
}
}
I need to get Elasticsearch Watcher to alert if no record matching a pattern is inserted into the index within a time frame; it needs to be able to do this while grouping on another pair of fields.
I.e. the records will be of the pattern:
Date Timestamp Level Message Client Site
It needs to check that Message matches "is running" for each client's site(s) (i.e. Google Maps and Bing Maps have the same site of Maps). I think the best(?) way to do this right now is to run a watcher per client site.
So far I have this; assume the task should write "is running" into the log every 20 minutes:
{
"trigger" : {
"schedule" : {
"interval" : "25m"
}
},
"input" : {
"search" : {
"request" : {
"search_type" : "count",
"indices" : "<logstash-{now/d}>",
"body" : {
"filtered" : {
"query" : {
"match_phrase" : { "Message" : "Is running" }
},
"filter" : {
"match" : { "Client" : "Example" } ,
"match" : { "Site" : "SomeSite" }
}
}
}
}
}
},
"condition" : {
"script" : "return ctx.payload.hits.total < 1"
},
"actions" : {
"email_administrator" : {
"email" : {
"to" : "me@host.tld",
"subject" : "Tasks are not running for {{ctx.payload.client}} on their site {{ctx.payload.site}}",
"body" : "Too many errors in the system, see attached data",
"attach_data" : true,
"priority" : "high"
}
}
}
}
For anyone looking to do this in the future: a few things need nesting inside query as part of the filter, and match becomes term. Fun!
{
"trigger": {
"schedule": {
"interval": "25m"
}
},
"input": {
"search": {
"request": {
"search_type": "count",
"indices": "<logstash-{now/d}>",
"body": {
"query": {
"filtered": {
"query": {
"match_phrase": {
"Message": "Is running"
}
},
"filter": {
"bool": {
"must": [
{ "term": { "Client": "Example" } },
{ "term": { "Site": "SomeSite" } },
{ "range": { "event_timestamp": { "gte": "now-25m", "lte": "now" } } }
]
}
}
}
}
}
}
}
},
"condition": {
"compare": {
"ctx.payload.hits.total": {
"lte": 1
}
}
},
"actions": {
"email_administrator": {
"email": {
"to": "me@host.tld",
"subject": "Tasks are not running for {{ctx.payload.client}} on their site {{ctx.payload.site}}",
"body": "Tasks are not running for {{ctx.payload.client}} on their site {{ctx.payload.site}}",
"attach_data": true,
"priority": "high"
}
}
}
}
You have to change your condition; a script condition must return a boolean:
"condition" : {
"script" : "return ctx.payload.hits.total < 1"
}
Please refer to the link below:
https://www.elastic.co/guide/en/watcher/current/condition.html