How to filter on a date range for Sentinl? - elasticsearch

So we've started to implement Sentinl to send alerts. I have managed to get a count of errors sent if it exceeds a specified threshold.
What I'm really struggling with is filtering for the last day!
Could someone please point me in the right direction!
Herewith the script:
{
"actions": {
"Email Action": {
"throttle_period": "0h0m0s",
"email": {
"to": "juan#company.co.za",
"from": "elk#company.co.za",
"subject": "ELK - ERRORS caused by CreditDecisionServiceAPI.",
"body": "{{payload.hits.total}} ERRORS caused by CreditDecisionServiceAPI. Threshold is 100."
}
},
"Slack Action": {
"throttle_period": "0h0m0s",
"slack": {
"channel": "#alerts",
"message": "{{payload.hits.total}} ERRORS caused by CreditDecisionServiceAPI. Threshold is 100.",
"stateless": false
}
}
},
"input": {
"search": {
"request": {
"search_type": "query_then_fetch",
"index": [
"*"
],
"types": [],
"body": {
"size": 0,
"query": {
"bool": {
"must": [
{
"match": {
"appName": "CreditDecisionServiceAPI"
}
},
{
"match": {
"level": "ERROR"
}
},
{
"range": {
"timestamp": {
"from": "now-1d"
}
}
}
]
}
}
}
}
}
},
"condition": {
"script": {
"script": "payload.hits.total > 100"
}
},
"transform": {},
"trigger": {
"schedule": {
"later": "every 15 minutes"
}
},
"disable": true,
"report": false,
"title": "watcher_CreditDecisionServiceAPI_Errors"
}
So to be clear, this is the part that's being ignored by the query:
{
"range": {
"timestamp": {
"from": "now-1d"
}
}
}

You need to change it and add a `filter` JSON tag alongside the `must` one, and move the `range` clause inside it, like this:
"filter": [
{
"range": {
"timestamp": {
"gte": "now-1d"
}
}
}
]
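The corrected structure can be sketched in Python, assembling the watcher's search body with the `match` clauses in `must` and the `range` clause in `filter` (a sketch assuming the field is named `timestamp`, as in the original watcher):

```python
def build_search_body(app_name, level, since="now-1d"):
    """Build the bool query with the range clause in a filter context.

    Clauses in "filter" must match, but are not scored and can be cached,
    which is the idiomatic place for a date-range restriction.
    """
    return {
        "size": 0,
        "query": {
            "bool": {
                "must": [
                    {"match": {"appName": app_name}},
                    {"match": {"level": level}},
                ],
                "filter": [
                    {"range": {"timestamp": {"gte": since}}}
                ],
            }
        },
    }

body = build_search_body("CreditDecisionServiceAPI", "ERROR")
```

The same body can then be pasted into the watcher's `input.search.request.body`.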

So we've FINALLY solved the problem!
Elasticsearch has changed its DSL multiple times, so please note that you need to check which version you're using for the correct solution. We're on version 6.2.3.
The query below finally worked:
"query": {
"bool": {
"must": [
{
"match": {
"appName": "CreditDecisionServiceAPI"
}
},
{
"match": {
"level": "ERROR"
}
},
{
"range": {
"#timestamp": {
"gte": "now-1d"
}
}
}
]
}
}
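For reference, `now-1d` is Elasticsearch date math, resolved against the time the query executes. A rough Python equivalent of the lower bound (a sketch handling only the `now-<n>d` form used above, assuming UTC):

```python
from datetime import datetime, timedelta, timezone

def resolve_now_minus_days(expr, now=None):
    """Rough equivalent of Elasticsearch date math like 'now-1d'.

    Only the 'now-<n>d' form used in the watcher above is handled here.
    """
    now = now or datetime.now(timezone.utc)
    assert expr.startswith("now-") and expr.endswith("d")
    days = int(expr[len("now-"):-1])
    return now - timedelta(days=days)

# Example with a fixed "now" for reproducibility:
now = datetime(2018, 4, 1, 12, 0, tzinfo=timezone.utc)
lower_bound = resolve_now_minus_days("now-1d", now)
# Documents with @timestamp >= lower_bound match the range clause.
```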

Related

Elasticsearch Watcher error while trying to send email attachment, dashboard.pdf

I have created a watcher alert from the advanced option which sends dashboard.pdf as an email attachment when the triggering condition is met. Now when the criterion matches (the threshold is exceeded), it throws the error below in the watcher output.
"root_cause": [
{
"type": "connect_timeout_exception",
"reason": "Connect to mydomainname.com:443 [mydomainname.com/XX.X.XXX.XXX] failed: Connect timed out"
}
],
"type": "connect_timeout_exception",
"reason": "Connect to mydomainname.com:443 [mydomainname.com/XX.X.XXX.XXX] failed: Connect timed out",
"caused_by": {
"type": "socket_timeout_exception",
"reason": "Connect timed out"
Below is found from Elasticsearch log.
[2022-03-29T11:39:54,682][ERROR][o.e.x.w.a.e.ExecutableEmailAction] [node-1] failed to execute action [test_watcher_1_last10mins_gte5_tran_dt_accord_sof/email_admin]
org.apache.http.conn.ConnectTimeoutException: Connect to mydomainname.com:443 [mydomainname.com/XX.X.XXX.XXX] failed: Connect timed out
....
....
Caused by: java.net.SocketTimeoutException: Connect timed out
Below is the watcher script.
{
"trigger": {
"schedule": {
"interval": "6m"
}
},
"input": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
"my_index"
],
"rest_total_hits_as_int": true,
"body": {
"size": 0,
"query": {
"bool": {
"must": [
{
"bool": {
"minimum_should_match": 1,
"should": [
{
"match_phrase": {
"partner.keyword": "RXGTY"
}
},
{
"match_phrase": {
"partner.keyword": "VGHUT"
}
}
]
}
},
{
"match": {
"state.keyword": {"query": "Fail"}
}
},
{
"match": {
"ops.keyword": {"query": "api_name"}
}
}
],
"filter": {
"range": {
"datetime": {
"gte": "{{ctx.trigger.scheduled_time}}||-5m",
"lte": "{{ctx.trigger.scheduled_time}}",
"format": "strict_date_optional_time||epoch_millis"
}
}
}
}
}
}
}
}
},
"condition": {
"script": {
"source": "if (ctx.payload.hits.total >= params.threshold) { return true; } return false;",
"lang": "painless",
"params": {
"threshold": 1
}
}
},
"actions": {
"email_admin": {
"email": {
"profile": "standard",
"attachments": {
"dashboard.pdf": {
"reporting": {
"url": "https://mydomainname.com/api/reporting/generate/printablePdf?jobParams= ..removing the rest portion of the url for security reason",
"auth": {"type":"basic","username":"elastic","password":"pass"}
}
},
"data.yml": {
"data": {
"format": "yaml"
}
}
},
"from": "from_email#xyz.com",
"to": [
"to_email_name <to_email#abc.com>"
],
"subject": "Elastic Watcher : Alert 1",
"body": {
"text": "Too many error in the system, see attached data."
}
}
}
},
"transform": {
"script": {
"source": "HashMap result = new HashMap(); result.result = ctx.payload.hits.total; return result;",
"lang": "painless",
"params": {
"threshold": 1
}
}
}
}
Our elastic stack version is 7.11.1 and the license is activated, basic stack security is enabled.
Note that when I tried the same from a local Kibana (7.10.1), where a trial license is activated, this alerting action works perfectly. Also note that in my local stack the security feature is not enabled.
Please help!!
Regards,
Souvik

Elasticsearch is setting a 1.0 score to all results

I have the following json in Kibana's console:
GET /sgn-audit/_search
{
"from": "0",
"size": "10",
"sort": [
{
"updated_at": {
"order": "desc"
}
}
],
"query": {
"bool": {
"must": [
{
"match": {
"owner_type": "taskStatus"
}
},
{
"match": {
"owner_id": "1"
}
},
{
"match": {
"tenant_id": "1"
}
},
{
"range": {
"updated_at": {
"gte": "2021-02-11T00:00:00Z",
"lte": "2021-02-11T23:59:59Z"
}
}
}
]
}
}
}
However, all results return with a 1.0 score, and even results that don't match the queries above are returned as well. What should I do?
This bewildered me a few times too! Simply remove the blank line between the URI and the search body (marked below):
GET /sgn-audit/_search
<---
{
...

Getting "expected [END_OBJECT] but found [FIELD_NAME]" in Kibana

I am working on Kibana 6.x and using SentiNL to generate email alerts. Below is my query to generate an email if my application produces the log "CREDENTIALS ARE NOT DEFINED FOR PULL EVENT SOURCES", with threshold 1. When I play my watcher I get the error below.
Error: Watchers: play watcher : execute watcher : execute advanced watcher : get elasticsearch payload : search : [parsing_exception] [match] malformed query, expected [END_OBJECT] but found [FIELD_NAME], with { line=1 & col=80 }
Query:
"input": {
"search": {
"request": {
"index": [
"filebeat-2019.03.21"
],
"body": {
"query": {
"match": {
"msg": "CREDENTIALS ARE NOT DEFINED FOR PULL EVENT SOURCES"
},
"minimum_number_should_match": 1,
"bool": {
"filter": {
"range": {
"#timestamp": {
"gte": "now-15m/m",
"lte": "now/m",
"format": "epoch_millis"
}
}
}
}
},
"size": 0,
"aggs": {
"dateAgg": {
"date_histogram": {
"field": "#timestamp",
"time_zone": "Europe/Amsterdam",
"interval": "1m",
"min_doc_count": 1
}
}
}
}
}
}
}
Also, I have used "minimum_number_should_match" to track the threshold value. Is that correct?
Found the solution (here I have not added the threshold value):
{
"actions": {
"email_html_alarm_2daee075-0f24-408e-a362-59172b5e3a1d": {
"name": "email html alarm",
"throttle_period": "1m",
"email_html": {
"stateless": false,
"subject": "Error v1.9 conditon",
"priority": "high",
"html": "<p>{{payload.hits.hits}} test hits Hi {{watcher.username}}</p>\n<p>There are {{payload.hits.total}} results found by the watcher <i>{{watcher.title}}</i>.</p>\n\n<div style=\"color:grey;\">\n <hr />\n <p>This watcher sends alerts based on the following criteria:</p>\n <ul><li>{{watcher.wizard.chart_query_params.queryType}} of {{watcher.wizard.chart_query_params.over.type}} over the last {{watcher.wizard.chart_query_params.last.n}} {{watcher.wizard.chart_query_params.last.unit}} {{watcher.wizard.chart_query_params.threshold.direction}} {{watcher.wizard.chart_query_params.threshold.n}} in index {{watcher.wizard.chart_query_params.index}}</li></ul>\n</div>",
"to": "abc#qwe.com",
"from": "abc#qwe.com"
}
}
},
"input": {
"search": {
"request": {
"index": [
"file-2019.04.03"
],
"body": {
"query": {
"bool": {
"must": {
"query_string": {
"query": "CREDENTIALS ARE NOT FOUND",
"analyze_wildcard": true,
"default_field": "*"
}
},
"filter": [{
"range": {
"#timestamp": {
"gte": "now-1d",
"lte": "now/m",
"format": "epoch_millis"
}
}
}]
}
}
}
}
}
},
"condition": {
"script": {
"script": "payload.hits.total > 0"
}
},
"trigger": {
"schedule": {
"later": "every 2 minutes"
}
},
"disable": true,
"report": false,
"title": "watcher_title",
"save_payload": false,
"spy": false,
"impersonate": false
}
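The root cause of the original "expected [END_OBJECT] but found [FIELD_NAME]" error is that the first body placed `match`, `minimum_number_should_match`, and `bool` side by side directly inside `query`, whereas `query` accepts exactly one top-level clause. A minimal structural check in Python illustrates the difference (this is an illustrative sketch, not the real Elasticsearch parser):

```python
def top_level_clause_ok(query):
    """A query object must contain exactly one top-level clause."""
    return len(query) == 1

# Malformed: three sibling keys directly under "query".
broken = {
    "match": {"msg": "CREDENTIALS ARE NOT DEFINED FOR PULL EVENT SOURCES"},
    "minimum_number_should_match": 1,
    "bool": {"filter": {"range": {"@timestamp": {"gte": "now-15m/m"}}}},
}

# Fixed: a single "bool" clause that nests the match and the filter.
fixed = {
    "bool": {
        "must": {
            "match": {"msg": "CREDENTIALS ARE NOT DEFINED FOR PULL EVENT SOURCES"}
        },
        "filter": {"range": {"@timestamp": {"gte": "now-15m/m"}}},
    }
}
```

`broken` fails the check while `fixed` passes, which mirrors why the rewritten watcher above parses cleanly.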

Set up watcher for alerting high CPU usage by some process

I'm trying to create a Watcher alert that will be triggered when some process on a node uses over 95% of CPU (a normalized value of 0.95) for the last hour.
Here is an example of my config:
{
"trigger": {
"schedule": {
"interval": "10m"
}
},
"input": {
"search": {
"request": {
"search_type": "query_then_fetch",
"indices": [
"metricbeat*"
],
"types": [],
"body": {
"size": 0,
"query": {
"bool": {
"must": [
{
"range": {
"system.process.cpu.total.norm.pct": {
"gte": 0.95
}
}
},
{
"range": {
"system.process.cpu.start_time": {
"gte": "now-1h"
}
}
},
{
"match": {
"environment": "test"
}
}
]
}
}
}
}
}
},
"condition": {
"compare": {
"ctx.payload.hits.total": {
"gt": 0
}
}
},
"actions": {
"send-to-slack": {
"throttle_period_in_millis": 1800000,
"webhook": {
"scheme": "https",
"host": "hooks.slack.com",
"port": 443,
"method": "post",
"path": "{{ctx.metadata.onovozhylov-test}}",
"params": {},
"headers": {
"Content-Type": "application/json"
},
"body": "{ \"text\": \" ==========\nTest parameters:\n\tthrottle_period_in_millis: 60000\n\tInterval: 1m\n\tcpu.total.norm.pct: 0.5\n\tcpu.start_time: now-1m\n\nThe watcher:*{{ctx.watch_id}}* in env:*{{ctx.metadata.env}}* found that the process *{{ctx.system.process.name}}* has been utilizing CPU over 95% for the past 1 hr on node:\n{{#ctx.payload.nodes}}\t{{.}}\n\n{{/ctx.payload.nodes}}\n\nThe runbook entry is here: *{{ctx.metadata.runbook}}* \"}"
}
}
},
"metadata": {
"onovozhylov-test": "/services/T0U0CFMT4/BBK1A2AAH/MlHAF2QuPjGZV95dvO11111111",
"env": "{{ grains.get('environment') }}",
"runbook": "http://mytest.com"
}
}
This Watcher doesn't work when I set the metric system.process.cpu.start_time. Perhaps this metric is not the correct one... Unfortunately, I don't have enough experience with Watcher to solve this issue on my own.
And another issue is that I don't know how to add the system.process.name into a message body.
Thanks in advance for any help!
Use the timestamp field instead of system.process.cpu.start_time to check all metricbeat-* documents in the last 10 minutes:
"range": {
"timestamp": {
"gte": "now-10m",
"lte": "now"
}
}
To include system.process.name in your message body, look at the {{ctx.payload}} and use the appropriate notation to refer to the process name. For example, in one of our watcher configs we use {{_source.appname}} to refer to the application name.
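How a Mustache placeholder such as `{{_source.system.process.name}}` resolves against the payload can be sketched as a dotted-path lookup over nested dicts. The payload shape below is a hypothetical example, not the exact metricbeat document:

```python
def lookup(obj, dotted_path):
    """Walk a nested dict the way a Mustache placeholder does."""
    value = obj
    for key in dotted_path.split("."):
        value = value[key]
    return value

# Hypothetical, trimmed-down watcher payload:
payload = {
    "hits": {
        "hits": [
            {"_source": {"system": {"process": {"name": "java"}}}}
        ]
    }
}

first_hit = payload["hits"]["hits"][0]
name = lookup(first_hit, "_source.system.process.name")  # "java"
```

Note that with `"size": 0` in the search body, `hits.hits` is empty; set a non-zero `size` if you want `_source` fields available to the template.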

Function Score On Nested Object

I have this index blog with the following settings and mappings.
PUT /blog
{
"settings": {
"index": {
"number_of_shards": "1"
}
},
"mappings": {
"post": {
"_all": {
"enabled": false
},
"properties": {
"title": {
"type": "string"
},
"content": {
"type": "string"
},
"visitor": {
"type": "nested",
"properties": {
"id": {
"type": "string",
"index": "not_analyzed"
},
"last_visit": {
"type": "date",
"format": "yyyy-MM-dd"
}
}
}
}
}
}
}
I want to rank my posts based on relevancy and the visitor's last visit. I tried this query without success. It seems like the gauss function cannot get the value of the visitor's last_visit. How can I get this to work?
POST /blog/post/_search
{
"query": {
"function_score": {
"functions": [
{
"gauss": {
"visitor.last_visit": {
"origin": "now/d",
"offset": "3d",
"scale": "4d",
"decay": 0.5
}
},
"filter": {
"nested": {
"path": "visitor",
"query": {
"term": {
"visitor.id": "1"
}
}
}
}
}
]
}
}
}
Here is a query with a match for a name that uses a nested object that I had for a particular use case. I didn't use any date fields, but as I said, it does use a nested object. I used relevancy of distance along with a text match, so it's similar.
I used the answer from this question to structure my query as it matched what I was trying to do. Scoring documents by text match and distance
GET dev_search_core_data/_search?size=200
{
"query": {
"bool": {
"should": [
{
"match": {
"NAME": "Amy Smith"
}
},
{
"bool": {
"must": [
{
"function_score": {
"query": {
"nested": {
"path": "LOCATION",
"query": {
"term": {
"LOCATION.SOME_IND": {
"value": true
}
}
}
}
},
"functions": [
{
"gauss": {
"LOCATION.COORDINATES": {
"origin": "-118.309, 34.041",
"scale": "50km",
"offset": "10km",
"decay": 0.5
}
}
}
]
}
}
]
}
}
]
}
}
}
I think the problem is with the structure of your query. If I'm having any problems, I always run this command first to validate my queries and eliminate syntax issues.
GET dev_search_core_data/_validate/query?explain
This was the result:
{
"valid": true,
"_shards": {
"total": 1,
"successful": 1,
"failed": 0
},
"explanations": [
{
"index": "dev_search_core_data_b",
"valid": true,
"explanation": "filtered((NAME:amy NAME:smith) (+function score (ToParentBlockJoinQuery (filtered(LOCATION.SOME_IND:true)->random_access(_type:_LOCATION)),function=org.elasticsearch.index.query.functionscore.DecayFunctionParser$GeoFieldDataScoreFunction#274227b9)))->cache(org.elasticsearch.index.search.nested.NonNestedDocsFilter#1012ada6)"
}
]
}
I also looked at the docs for an in-depth explanation of how the function score worked. You don't mention your version, but I'm using ES 1.6.
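For intuition about what the `gauss` function in these queries computes, the decay described in the function_score documentation can be sketched numerically. This assumes the standard formula, where the score stays at 1.0 within `offset` of the origin and falls to `decay` at a distance of `offset + scale`:

```python
import math

def gauss_decay(value, origin, offset, scale, decay=0.5):
    """Gauss decay: 1.0 within `offset` of origin, `decay` at offset + scale."""
    # sigma^2 is chosen so the curve passes through `decay` at `scale`
    # beyond the offset: sigma^2 = -scale^2 / (2 * ln(decay))
    sigma_sq = -scale ** 2 / (2.0 * math.log(decay))
    dist = max(0.0, abs(value - origin) - offset)
    return math.exp(-(dist ** 2) / (2.0 * sigma_sq))

# With origin=0 (think "now/d"), offset=3 (days), scale=4 (days), decay=0.5,
# matching the parameters in the question's query:
gauss_decay(2, origin=0, offset=3, scale=4)   # 1.0  (inside the offset)
gauss_decay(7, origin=0, offset=3, scale=4)   # 0.5  (at offset + scale)
```

This also shows why `origin`, `offset`, and `scale` must all be in comparable units (here, day-valued date math against a date field).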
