Curator not finding my index - elasticsearch

Trying to have Curator clean up an index I have named "syslog". I have this action YAML:
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than one hour
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: syslog
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: hours
      unit_count: 1
      exclude:
I tried both "hours" and "days" but I keep getting this message from Curator:
Skipping action "delete_indices" due to empty list: <class 'curator.exceptions.NoIndices'>

You are telling Curator that you want it to identify indices that have a prefix value of syslog, and also have a Year.Month.Day timestring in the index name. This would match, for example, if the index name were syslog-2017.03.14. But if the name of the index is only syslog, this filter set will not match.
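The name filter effectively looks for the timestring-derived date pattern inside each index name. A quick hand-rolled check (illustrative only, not Curator's actual code) shows why a bare syslog is skipped:

```python
import re

# '%Y.%m.%d' corresponds to four digits, a dot, two digits, a dot, two digits
timestring_re = re.compile(r"\d{4}\.\d{2}\.\d{2}")

for name in ["syslog", "syslog-2017.03.14"]:
    print(name, bool(timestring_re.search(name)))
# syslog False
# syslog-2017.03.14 True
```

Since the plain syslog index carries no date, no name-based age can be computed for it, and it drops out of the actionable list.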
Perhaps you want to catch the index age relative to when the index was created. In this case you'd set
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than one hour
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: regex
      value: '^syslog$'
    - filtertype: age
      source: creation_date
      direction: older
      unit: hours
      unit_count: 1
This will match only an index named syslog whose creation_date is at least one hour older than execution time.
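Conceptually, the creation_date filter compares the index's creation timestamp (stored as epoch milliseconds in the index settings) against the run time minus unit × unit_count. A rough sketch of that comparison (the helper below is illustrative, not Curator's internals):

```python
from datetime import datetime, timedelta, timezone

def is_older_than(creation_date_ms, hours=1):
    """Return True when an index's creation_date (epoch milliseconds,
    as reported in the index settings) is more than `hours` hours
    before now -- the comparison the filter above expresses."""
    created = datetime.fromtimestamp(creation_date_ms / 1000, tz=timezone.utc)
    cutoff = datetime.now(timezone.utc) - timedelta(hours=hours)
    return created < cutoff

# Example: an index created two hours ago qualifies for deletion
two_hours_ago = datetime.now(timezone.utc) - timedelta(hours=2)
print(is_older_than(two_hours_ago.timestamp() * 1000))  # True
```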

YTT overlays: modify arrays using data from those arrays

This question is about YTT.
Is it possible to modify a YAML list of items using data from those items via overlays?
For example we have a template:
---
vlans:
- vlan-id: 10
- vlan-id: 20
- vlan-id: 30
some_other_configuration: #! some other config here
And using overlays we need to transform the template above into this:
---
vlans:
- vlan-id: 10
  vlan-name: vlan10
- vlan-id: 20
  vlan-name: vlan20
- vlan-id: 30
  vlan-name: vlan30
some_other_configuration: #! some other config here
Yes. One can use an overlay within an overlay. 🤯
#@ load("@ytt:overlay", "overlay")

#@ def with_name(vlan):
#@overlay/match missing_ok=True
vlan-name: #@ "vlan{}".format(vlan["vlan-id"])
#@ end

#@overlay/match by=overlay.all
---
vlans:
#@overlay/match by=lambda idx, left, right: "vlan-id" in left, expects="1+"
#@overlay/replace via=lambda left, right: overlay.apply(left, with_name(left))
-
which can be read:
for all documents,
in the vlans: map item...
for every array item it contains whose map includes the key "vlan-id"...
replace the map with one that's been overlay'ed with the vlan name
https://carvel.dev/ytt/#gist:https://gist.github.com/pivotaljohn/33cbc52e808422e68c5ec1dc2ca38354
See also:
the #@overlay/replace action: https://carvel.dev/ytt/docs/v0.40.0/lang-ref-ytt-overlay/#overlayreplace
Overlays, programmatically: https://carvel.dev/ytt/docs/v0.40.0/lang-ref-ytt-overlay/#programmatic-access.
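For comparison, the same transformation can be sketched in plain Python over the already-parsed document (a hand-rolled equivalent of the overlay's effect, not ytt itself; the dict below stands in for what a YAML loader would return):

```python
def add_vlan_names(doc):
    """Mimic the overlay: for each map under 'vlans' that has a
    'vlan-id' key, merge in a derived 'vlan-name' key."""
    for vlan in doc["vlans"]:
        if "vlan-id" in vlan:
            vlan["vlan-name"] = "vlan{}".format(vlan["vlan-id"])
    return doc

doc = {"vlans": [{"vlan-id": 10}, {"vlan-id": 20}, {"vlan-id": 30}]}
add_vlan_names(doc)
print(doc["vlans"][0])  # {'vlan-id': 10, 'vlan-name': 'vlan10'}
```

The ytt version buys you the same thing declaratively, without touching the rest of the document.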

elastalert sends multiple email alerts instead of sending an aggregated email

Instead of sending one alert, ElastAlert sends an email for each document that matched. Below is my rule file. It works, but I want all the alerts in one email. Please help; any suggestion will be appreciated.
skynet.yaml: |-
  ---
  name: skynet
  type: frequency
  limit_execution: "0/10 * * * *"
  index: wpng-httpd-perf-*
  num_events: 1
  top_count_keys: ["Host_Id", "Host_Group"]
  timeframe:
    minutes: 15
  filter:
  - query:
      query_string:
        query: "Host_Group.keyword:ZOOKEEPER_ZK1_QA"
  alert:
  - "email"
  email_format: html
  aggregation:
    minutes: 15
  aggregation_key: 'Host_Id'
  email:
  - "johndoe@skynet.com"
  from_addr: "sam@skynet.com"
  alert_subject: "PLOT1 at {0}."
  alert_subject_args:
  - "@timestamp"
  alert_text: "Hi Team,<br><br/> {0} ERROR event(s) detected in last 15 minutes <br/><br>Hosts where errors are detected :</br> Host_Id is {1} <br></br><br></br> <br>Here are a few of those :</br><br> messages {2} </br><br> </br><br/><br>bye.</br><br></br><br>Thanks <br></br> "
  alert_text_type: alert_text_only
  alert_text_args:
  - num_matches
  - Host_Id
  - message
  - top_count_keys
The rule below worked for me.
PLOTTHREE.yaml: |-
  ---
  name: PLOTTHREE
  type: frequency
  limit_execution: "0/15 * * * *"
  index: home-*
  num_events: 1
  aggregation:
    minutes: 10
  include:
  - Host_Group
  - Host_Id
  timeframe:
    minutes: 15
  filter:
  - query:
      query_string:
        query: "Host_Group.keyword:fatal"
  alert:
  - "email"
  email:
  - "john@doe.com"
  from_addr: "yyy@doe.com"
  alert_subject: "PLOTTHREE - ERROR detected in Kafka Zookeeper logs of host group fatal at {0}."
  alert_subject_args:
  - "@timestamp"
  alert_text: "Hello Team, ERROR event(s) detected in last 15 minutes. Hosts where errors are detected in {0}. Here is the num events {1} . "
  alert_text_type: alert_text_only
  alert_text_args:
  - Host_Id
  - num_matches

Lookup table not working after training the model in rasa

I am new to Rasa. I am training a model to recognize certain entities using lookup tables. I have multiple entities in a single sentence and I am trying to extract them.
nlu.yml
version: "2.0"
nlu:
- intent: intent_1
  examples: |
    - how many deaths were there last year in [Ohio](Filter-State)?
    - death count of [Florida](Filter-State) this year
    - death count of [Texas](Filter-State) this year
    - what's the death count for this quarter in [CA](Filter-State)?
- lookup: Filter-State
  examples: |
    - Alabama
    - AL
    - Alaska
    - AK
    - Arizona
    - AZ
    - Arkansas
    - AR
    - California
    - CA
    - Colorado
    - CO
    - Connecticut
    - CT
    - Delaware
    - DE
    - District of Columbia
    - DC
    - Florida
    - FL
    - Georgia
    - GA
config.yml
language: en
pipeline:
- name: WhitespaceTokenizer
- name: RegexFeaturizer
- name: LexicalSyntacticFeaturizer
- name: CountVectorsFeaturizer
- name: CountVectorsFeaturizer
  analyzer: "char_wb"
  min_ngram: 1
  max_ngram: 4
- name: DIETClassifier
  epochs: 150
  random_seed: 1
- name: FallbackClassifier
  threshold: 0.7
- name: DucklingEntityExtractor
  url: http://duckling.rasa.com:8000
  dimensions:
  - email
  - time
- name: EntitySynonymMapper
policies:
- name: AugmentedMemoizationPolicy
  max_history: 4
- name: TEDPolicy
  max_history: 4
  epochs: 100
- name: RulePolicy
  core_fallback_threshold: 0.4
  core_fallback_action_name: "action_default_fallback"
  enable_fallback_prediction: True
When I train the model and try the API, it doesn't recognize states from the lookup table and as a result can't fill the filter_state slot.
Can anyone advise what I am doing wrong here in making the lookup table work?
I'm new to Rasa and was searching for another issue, but I ran into and solved this exact issue last night.
For lookup tables to work, you need to add "RegexEntityExtractor" to your pipeline and possibly remove RegexFeaturizer. You also need to enable lookup tables in the RegexEntityExtractor config.
config.yml
pipeline:
- name: WhitespaceTokenizer
- name: LexicalSyntacticFeaturizer
- name: CountVectorsFeaturizer
- name: RegexEntityExtractor
  case_sensitive: False
  use_lookup_tables: True
  use_regexes: True
...
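Conceptually, lookup-table matching is literal regex matching: the entries are combined into one word-boundary pattern and matched against the message text. A rough sketch of that idea with plain `re` (not Rasa's actual implementation; the entries are a subset of the table above):

```python
import re

# A few lookup entries (subset of the Filter-State table)
lookup = ["Ohio", "Florida", "Texas", "CA", "AL"]

# Compile entries into one case-insensitive, word-boundary pattern,
# escaping each entry so it is matched literally
pattern = re.compile(
    r"\b(?:" + "|".join(re.escape(e) for e in lookup) + r")\b",
    re.IGNORECASE,
)

def extract_states(text):
    return pattern.findall(text)

print(extract_states("death count of florida this year"))  # ['florida']
```

This is also why `case_sensitive: False` matters: without it, lowercase user input like "florida" would never match the capitalized lookup entries.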
Could you please post on the Rasa forum and include more details of your setup? In particular, what version of Rasa Open Source are you using? Is the above your complete NLU data? (I think you need at least 2 intents for an intent classifier to train.) I also recommend testing the system using rasa interactive --debug and sharing e.g. a screenshot; this will help everyone see the exact input message and how it gets processed by Rasa. I'm sure we'll trace the issue down to its roots :-)

Hierarchical list of name value pairs in YAML

What's the best way to represent a hierarchical list of name-value pairs like the following in YAML:
name_1: value_1
  subName1_1: subValue1_1
  subName1_2: subValue1_2
name_2: value_2
  subName2_1: subValue2_1
  subName2_2: subValue2_2
name_3: value_3
  subName3_1: subValue3_1
  subName3_2: subValue3_2
name_4: value_4
  subName4_1: subValue4_1
  subName4_2: subValue4_2
I am thinking of the following, but am not sure if this is the best way:
- name_1:
    ID: 1
    subNames:
    - subName1_1:
        ID: 1
    - subName1_2:
        ID: 2
- name_2:
    ID: 2
    subNames:
    - subName2_1:
        ID: 1
    - subName2_2:
        ID: 2
or I could also do:
- Name: Name_1
  ID: 1
  SubNames:
  - SubName: subName1_1
    ID: 1
  - SubName: subName1_2
    ID: 2
- Name: Name_2
  ID: 2
  SubNames:
  - SubName: subName2_1
    ID: 1
  - SubName: subName2_2
    ID: 2
I need the name_* keys to be unique, as well as their corresponding values, so I'd prefer something Python can easily consume to validate that there are no duplicates.
Well, there's the value key type. It's not part of the core standard and was defined for YAML 1.1, but it was designed to solve this problem. The idea is that your mapping has a key named = which contains the default value:
name_1:
  =: value_1
  subName1_1: subValue1_1
  subName1_2: subValue1_2
name_2:
  =: value_2
  subName2_1: subValue2_1
  subName2_2: subValue2_2
name_3:
  =: value_3
  subName3_1: subValue3_1
  subName3_2: subValue3_2
name_4:
  =: value_4
  subName4_1: subValue4_1
  subName4_2: subValue4_2
Alternatively, you could make each value a list whose first item is the plain value, followed by single key-value pairs:
name_1:
- value_1
- subName1_1: subValue1_1
- subName1_2: subValue1_2
name_2:
- value_2
- subName2_1: subValue2_1
- subName2_2: subValue2_2
name_3:
- value_3
- subName3_1: subValue3_1
- subName3_2: subValue3_2
name_4:
- value_4
- subName4_1: subValue4_1
- subName4_2: subValue4_2
You can write this with flow sequences, since YAML allows a flow sequence to contain single key-value pairs, which are interpreted as implicit mappings:
name_1: [value_1,
  subName1_1: subValue1_1,
  subName1_2: subValue1_2]
name_2: [value_2,
  subName2_1: subValue2_1,
  subName2_2: subValue2_2]
name_3: [value_3,
  subName3_1: subValue3_1,
  subName3_2: subValue3_2]
name_4: [value_4,
  subName4_1: subValue4_1,
  subName4_2: subValue4_2]
Be aware that when you do this, you can't have any kind of block-style nodes in the subnames, but other flow nodes will be fine.
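Since the question asks about validating uniqueness in Python: note that with the mapping-based forms, most YAML loaders silently keep only the last of any duplicate keys, so a post-load check can't see them. The list-of-maps form from the question makes the check straightforward. A sketch over the already-parsed structure (the inline data stands in for what a loader such as yaml.safe_load would return from the second proposed layout):

```python
def check_unique(items, key="Name"):
    """Raise ValueError if any two entries share the same `key` value."""
    seen = set()
    for item in items:
        name = item[key]
        if name in seen:
            raise ValueError("duplicate name: {}".format(name))
        seen.add(name)

# Stand-in for yaml.safe_load(...) of the list-of-maps layout
parsed = [
    {"Name": "Name_1", "ID": 1, "SubNames": [{"SubName": "subName1_1", "ID": 1}]},
    {"Name": "Name_2", "ID": 2, "SubNames": [{"SubName": "subName2_1", "ID": 1}]},
]
check_unique(parsed)  # passes silently
for entry in parsed:
    check_unique(entry["SubNames"], key="SubName")  # sub-names checked per entry
```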

Curator Allocation action does not change ES index box_type setting from "hot" to "warm"

I'm using the Elasticsearch hot-warm architecture for large time-series analytics.
My curator job should move the box_type of indices older than 2 days from "hot" to "warm" nodes. But when I ran it at 18:00 on September 30th, the September 28th indices' box_type was still "hot".
My curator action setting:
actions:
  1:
    action: open
    description: Open indices younger than warm days (based on index name),
      for logstash- prefixed indices.
    options:
      ignore_empty_list: true
      disable_action: false
    filters:
    - filtertype: age
      source: name
      direction: younger
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30
    - filtertype: pattern
      kind: prefix
      value: logstash-
      exclude:
  2:
    action: allocation
    description: Apply shard allocation to hot nodes
    options:
      key: box_type
      value: hot
      allocation_type: require
      wait_for_completion: true
      timeout_override:
      continue_if_exception: false
      ignore_empty_list: true
      disable_action: false
    filters:
    - filtertype: age
      source: name
      direction: younger
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 2
    - filtertype: pattern
      kind: prefix
      value: logstash-
  3:
    action: allocation
    description: Apply shard allocation to warm nodes
    options:
      key: box_type
      value: warm
      allocation_type: require
      wait_for_completion: true
      timeout_override:
      continue_if_exception: false
      ignore_empty_list: true
      disable_action: false
    filters:
    - filtertype: age
      source: name
      direction: younger
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 2
    - filtertype: pattern
      kind: prefix
      value: logstash-
Logs:
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Action ID: 1, "open" completed.
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Preparing Action ID: 2, "allocation"
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Trying Action ID: 2, "allocation": Apply shard allocation to hot nodes
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Updating index setting {'index.routing.allocation.require.box_type': 'hot'}
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Health Check for all provided keys passed.
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Action ID: 2, "allocation" completed.
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Preparing Action ID: 3, "allocation"
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Trying Action ID: 3, "allocation": Apply shard allocation to warm nodes
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Updating index setting {'index.routing.allocation.require.box_type': 'warm'}
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Health Check for all provided keys passed.
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Health Check for all provided keys passed.
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Health Check for all provided keys passed.
2018-09-30 18:38:26,053 - curator.py:55 - INFO - Action ID: 3, "allocation" completed.
The logs say the action completed. Shouldn't the September 28th indices' box_type be "warm"?
- filtertype: age
  source: name
  direction: older
  timestring: '%Y.%m.%d'
  unit: days
  unit_count: 2
Does this filter change all the September 28th indices' box_type to warm when I run it at 18:00 on September 30th?
Time is measured from UTC 00:00 of the current day, not from the time of execution, so older than 2 days would depend on when UTC 00:00 is in your time zone. You can see how the time is calculated and measured in the curator log files if you enable DEBUG logging.
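To see why the reference point matters: a `%Y.%m.%d` timestring extracted from the index name carries no time of day, so the parsed date resolves to 00:00 UTC of that calendar day. A small sketch of that resolution (illustrative only, not Curator's internals):

```python
from datetime import datetime, timezone

def name_to_epoch(index_name, prefix="logstash-", timestring="%Y.%m.%d"):
    """Parse the date embedded in an index name. The missing time of day
    defaults to 00:00 UTC, so an index named logstash-2018.09.28 'is'
    midnight on the 28th regardless of when it was actually created."""
    stamp = index_name[len(prefix):]
    return datetime.strptime(stamp, timestring).replace(tzinfo=timezone.utc)

print(name_to_epoch("logstash-2018.09.28"))
# 2018-09-28 00:00:00+00:00
```

Whether that midnight-UTC instant counts as "older than 2 days" then depends on how your local run time converts to UTC, which is the offset the answer describes.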
