ElastAlert: config.yaml aggregation option giving error (elasticsearch)

I have configured the aggregation option in config.yaml to send a summary of alerts every hour, but it throws the following error when I try to run it:
  File "elastalert.py", line 863, in run_rule
    self.add_aggregated_alert(match, rule)
  File "elastalert.py", line 1614, in add_aggregated_alert
    alert_time = ts_now() + rule['aggregation']
TypeError: unsupported operand type(s) for +: 'datetime.datetime' and 'dict'
ERROR:root:Uncaught exception running rule Test Alert : unsupported operand type(s) for +: 'datetime.datetime' and 'dict'
INFO:elastalert:Rule Test Alert disabled
The config parameters are:
rules_folder: test_rules
run_every:
  minutes: 15
buffer_time:
  minutes: 30
es_host: 100.38.46.3
es_port: 9200
aggregation:
  hours: 1
writeback_index: elastalert_status
alert_time_limit:
  days: 2
Test Alert rule configuration:
name: Test Alert
type: metric_aggregation
index: logstash-*
buffer_time:
  minutes: 30
metric_agg_key: count
metric_agg_type: sum
query_key: "name.keyword"
doc_type: counter
max_threshold: 1
min_threshold: 0
filter:
- query:
    query_string:
      query: "name.keyword: *timedout_count"
alert:
- "email"
email:
- "admin@abc.com"
I have followed the ElastAlert docs but am not able to figure out what is causing this issue.
Thanks

From the error:
TypeError: unsupported operand type(s) for +: 'datetime.datetime' and 'dict'
And from your config.yaml
metric_agg_type: sum
It's trying (and failing) to perform a sum aggregation over datetime and dict values that do not support summation. You'll probably need to choose an aggregation such as count or unique count, and adjust the logic of your alerts accordingly.
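For reference, the TypeError in the traceback is plain Python behaviour: `ts_now() + rule['aggregation']` is a datetime plus a raw dict (the unconverted `hours: 1` mapping from the YAML), and Python only allows adding a timedelta to a datetime. A minimal reproduction, independent of any ElastAlert internals:

```python
from datetime import datetime, timedelta

now = datetime.utcnow()

# Adding a raw dict (what the unconverted YAML value looks like) fails:
try:
    now + {'hours': 1}
except TypeError as e:
    print(e)  # unsupported operand type(s) for +: 'datetime.datetime' and 'dict'

# Time windows must be converted to timedelta for the addition to work:
alert_time = now + timedelta(hours=1)
```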

Related

ElastAlert: trigger mail once when server is up

I am using ElastAlert to trigger an email when an Elasticsearch URL is up or down.
I want to configure it so that ElastAlert continuously sends mail while the server is down, but sends only one mail when the server comes back up.
Please find below the example_frequency rule config:
es_host: localhost
es_port: 9200
name: Example frequency rule
type: frequency
index: heartbeat-*
num_events: 1
realert:
  seconds: 30
timeframe:
  minutes: 3
filter:
- terms:
    monitor.status: ["up", "down"]
alert:
- "email"
email:
- "AD@somemail.com"
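One common way to get that behaviour is to split this into two rules: a "down" rule with realert disabled so it fires on every run while the server is down, and an "up" rule with a long realert so the recovery mail goes out only once. A sketch only, reusing the index, field, and address from the rule above; the names and the 24-hour window are assumptions to tune:

```yaml
# Sketch, not a tested config -- two separate rule files, shown together.

# server_down.yaml: fires on every run while the server stays down.
name: Server down (repeat)
type: frequency
index: heartbeat-*
num_events: 1
timeframe:
  minutes: 3
realert:
  minutes: 0            # disable throttling so every match re-alerts
filter:
- term:
    monitor.status: "down"
alert:
- "email"
email:
- "AD@somemail.com"

---
# server_up.yaml: a long realert suppresses repeated "up" mails.
name: Server up (once)
type: frequency
index: heartbeat-*
num_events: 1
timeframe:
  minutes: 3
realert:
  hours: 24             # assumption: how long to silence repeat "up" alerts
filter:
- term:
    monitor.status: "up"
alert:
- "email"
email:
- "AD@somemail.com"
```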

More than one example_frequency.yaml in Elastalert

I am working with ElastAlert and am able to send email alerts whenever my condition matches.
Now my use case is: I want to send an email whenever any error is encountered.
If it's ERROR, the email content body should be "ERROR OCCURRED"; if it's FATAL, the email content body should be "FATAL ERROR".
I have the following example_frequency.yaml:
name: Error Occurred in your Application
type: frequency
index: logstash-*
num_events: 1
timeframe:
  hours: 1
filter:
- query:
    query_string:
      query: "message: *ERROR* OR message: *FATAL*"
alert:
- "email"
alert_text_type: "alert_text_only"
alert_text: |
  Error occurred at {0}
  Host Machine id: {1}
  Error Message: {2}
  Log File Location: {3}
alert_text_args:
- "@timestamp"
- "beat.hostname"
- "message"
- "source"
email:
- "test@gmail.com"
With this configuration I get the same email content whether it's FATAL or ERROR. I want to change some of the text if it's FATAL.
Is there any way to do it in ElastAlert?
Please help guys!
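ElastAlert does not, as far as I know, support conditional alert text within a single rule; the usual workaround is two rules that differ only in their filter and alert_text. A sketch only, reusing the fields from the rule above:

```yaml
# Sketch only: two rule files, identical except for filter and alert_text.

# error_rule.yaml
name: Application ERROR
type: frequency
index: logstash-*
num_events: 1
timeframe:
  hours: 1
filter:
- query:
    query_string:
      query: "message: *ERROR* AND NOT message: *FATAL*"
alert:
- "email"
alert_text_type: "alert_text_only"
alert_text: |
  ERROR OCCURRED
  Error occurred at {0}
  Error Message: {1}
alert_text_args:
- "@timestamp"
- "message"
email:
- "test@gmail.com"

---
# fatal_rule.yaml: same as above, but with
#   query: "message: *FATAL*"
# and an alert_text starting with "FATAL ERROR"
```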

How to get the logs using string matching (ex: logs containing "DEBUG")

In the Elasticsearch cluster we get logs indexed from our cluster. Logstash didn't set any filter, so we get logs at all levels (INFO, DEBUG, etc.). I want to know the query to get just the logs with one specific level.
In Python we could do it like this:
import pandas as pd

df = pd.DataFrame({"node.id": [123, 124, 125],
                   "log": ["INFO: run", "DEBUG: fail", "WARN: warn"]})
#            log  node.id
# 0    INFO: run      123
# 1  DEBUG: fail      124
# 2   WARN: warn      125

df[df["log"].str.contains("DEBUG")]
#            log  node.id
# 1  DEBUG: fail      124
I tried using "regexp" and "wildcard" queries but can't seem to get the right syntax.
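In Elasticsearch's query DSL, the str.contains filter maps most directly to a wildcard query on the keyword sub-field. A sketch under assumptions: the field name `log` is taken from the DataFrame above, and the index uses the default dynamic mapping (so `log.keyword` exists). POST this body to `<index>/_search`:

```json
{
  "query": {
    "wildcard": {
      "log.keyword": "*DEBUG*"
    }
  }
}
```

If the level is always its own token (as in "DEBUG: fail"), a plain match query on the analyzed field, `{ "match": { "log": "DEBUG" } }`, also works and avoids the cost of a leading-wildcard search.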

Curator 4.0 : Unable to take snapshot or run any action. Following examples from the document

I am trying to take a snapshot of an Elasticsearch index using Curator 4 (Windows machine).
I am getting the below error (the same error for all actions):
Failed to complete action: snapshot. : Not an IndexList object. Type:
Any idea when we get this?
I am following the examples provided in the documentation:
https://www.elastic.co/guide/en/elasticsearch/client/curator/current/snapshot.html
Action YAML file:
actions:
  1:
    action: snapshot
    description: >-
      Snapshot logstash- prefixed indices older than 1 day (based on index
      creation_date) with the default snapshot name pattern of
      'curator-%Y%m%d%H%M%S'. Wait for the snapshot to complete. Do not skip
      the repository filesystem access check. Use the other options to create
      the snapshot.
    options:
      repository: myrepo
      name: shan
      ignore_unavailable: False
      include_global_state: True
      partial: False
      wait_for_completion: True
      skip_repo_fs_check: False
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: age
      source: creation_date
      direction: younger
      unit: days
      unit_count: 1
      field:
      stats_result:
      epoch:
      exclude:
Output:
2016-07-25 22:16:40,929 INFO Action #1: snapshot
2016-07-25 22:16:40,929 INFO Starting new HTTP connection (1): 127.0.0.1
2016-07-25 22:16:40,944 INFO GET http://127.0.0.1:9200/ [status:200 request:0.015s]
2016-07-25 22:16:40,946 INFO GET http://127.0.0.1:9200/_all/_settings?expand_wildcards=open%2Cclosed [status:200 request:0.002s]
2016-07-25 22:16:40,950 INFO GET http://127.0.0.1:9200/_cluster/state/metadata/.marvel-es-1-2016.06.27,.marvel-es-1-2016.06.28,.marvel-es-1-2016.06.29,.marvel-es-1-2016.06.30,.marvel-es-data-1,shan-claim-1 [status:200 request:0.004s]
2016-07-25 22:16:40,993 INFO GET http://127.0.0.1:9200/.marvel-es-1-2016.06.27,.marvel-es-1-2016.06.28,.marvel-es-1-2016.06.29,.marvel-es-1-2016.06.30,.marvel-es-data-1,shan-claim-1/_stats/store,docs [status:200 request:0.042s]
2016-07-25 22:16:40,993 ERROR Failed to complete action: snapshot. <class 'TypeError' at 0x000000001DFCC400>: Not an IndexList object. Type: <class 'curator.indexlist.IndexList' at 0x0000000002DB39B8>.
You need to add another filtertype so Curator knows which indices to run against. For example, if your indices are named logstash-*, your filters would look like:
filters:
- filtertype: pattern
  kind: prefix
  value: logstash-
  exclude:
- filtertype: age
  source: creation_date
  direction: younger
  unit: days
  unit_count: 1
  field:
  stats_result:
  epoch:
  exclude:
There is bad indentation at the beginning of your file: the action list should be nested under the "actions" key, which is your root level.
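Putting both suggestions together, a minimal action file with a pattern filter and the action list correctly nested under actions: might look like this. A sketch only: the repository name and index prefix are taken from the question, and direction: older is an assumption based on the description "older than 1 day" (the original used younger):

```yaml
# Sketch only: correct nesting plus a pattern filter so Curator can
# build the index list before the age filter is applied.
actions:
  1:
    action: snapshot
    description: "Snapshot logstash- indices older than 1 day"
    options:
      repository: myrepo
      name: curator-%Y%m%d%H%M%S
      wait_for_completion: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 1
```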

how to fill a scripted field value by condition in kibana

I am using Kibana 4 and my document contains two integer fields called 'x' and 'y'. I would like to create a scripted field in Kibana returning 'x' divided by 'y' if 'y' <> 0; otherwise it should return the value of 'x'.
I have tried to add this script to a new scripted field:
doc['x'].value > 0 ? doc['x'].value/doc['y'].value : doc['x'].value;
but got a parsing error when trying to visualize it:
Error: Request to Elasticsearch failed:
{"error":"SearchPhaseExecutionException[Failed to execute phase [query],
all shards failed; shardFailures
How can I create a scripted field with a condition in Kibana, step by step?
What you are seeing is not a parsing error; shardFailures just means that the underlying Elasticsearch cluster is not ready yet. When starting Kibana/Elasticsearch, make sure your ES cluster is ready before diving into Kibana, i.e. run curl -XGET localhost:9200/_cluster/health and check that the response looks similar to this:
{
cluster_name: your_cluster_name
status: yellow <----- this must be either yellow or green
timed_out: false
number_of_nodes: 2
number_of_data_nodes: 2
active_primary_shards: 227
active_shards: 454
relocating_shards: 0 <----- this must be 0
initializing_shards: 0 <----- this must be 0
unassigned_shards: 25
}
As for your script, the syntax is correct, but the condition is not the one you described: you wanted y <> 0, not x > 0, so it should be
doc['y'].value != 0 ? doc['x'].value / doc['y'].value : doc['x'].value
Please give it a try.