Error while running curator for deleting older indices - elasticsearch

I want to delete Elasticsearch indices older than 7 days, so I have installed Curator 4.2, as my Elasticsearch version is 5.0.0 (Curator versions before 4.x are not compatible with Elasticsearch 5).
Curator needs a configuration file and an action file to make this work.
I have created my config and action file in the root directory.
My configuration file curator.yml is:
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - 127.0.0.1
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  aws_key:
  aws_secret_key:
  aws_region:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']
My action file curatorAction.yml is:
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 45 days (based on index name), for logstash-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 7
      exclude:
I am running Curator from the CLI as
curator --config curator.yml --dry-run curatorAction.yml
but I am getting the error below. I can't find anything about this error anywhere. Any help will be appreciated.
2017-02-15 17:52:02,991 ERROR Schema error: extra keys not allowed @ data[1]
Traceback (most recent call last):
File "/usr/local/bin/curator", line 11, in <module>
load_entry_point('elasticsearch-curator==4.2.6', 'console_scripts', 'curator')()
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 722, in __call__
return self.main(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 697, in main
rv = self.invoke(ctx)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 895, in invoke
return ctx.invoke(self.callback, **ctx.params)
File "/usr/local/lib/python2.7/site-packages/click/core.py", line 535, in invoke
return callback(*args, **kwargs)
File "/usr/local/lib/python2.7/site-packages/curator/cli.py", line 126, in cli
action_dict = validate_actions(action_config)
File "/usr/local/lib/python2.7/site-packages/curator/utils.py", line 1085, in validate_actions
root = SchemaCheck(data, actions.root(), 'Actions File', 'root').result()
File "/usr/local/lib/python2.7/site-packages/curator/validators/schemacheck.py", line 68, in result
self.test_what, self.location, self.badvalue, self.error)
curator.exceptions.ConfigurationError: Configuration: Actions File: Location: root: Bad Value: "{'action': 'delete_indices', 'description': 'Delete selected indices', 'filters': [{'exclude': None, 'kind': 'prefix', 'filtertype': 'pattern', 'value': 'logstash-'}, {'source': 'name', 'direction': 'older', 'unit_count': 30, 'timestring': '%Y.%m.%d', 'exclude': None, 'filtertype': 'age', 'unit': 'days'}], 'options': {'continue_if_exception': False, 'timeout_override': None, 'disable_action': False}}", extra keys not allowed @ data[1]. Check configuration file.

I cannot find anything incorrect with your curatorAction.yml file. In fact, the output below is from my own run of it. I cut and pasted exactly what you have above into test2.yml, minus disable_action: True:
root@esclient:~/.curator# curator --config test.yml --dry-run test2.yml
2017-02-15 15:48:53,705 INFO Preparing Action ID: 1, "delete_indices"
2017-02-15 15:48:53,713 INFO Trying Action ID: 1, "delete_indices": Delete indices older than 45 days (based on index name), for logstash- prefixed indices. Ignore the error if the filter does not result in an actionable list of indices (ignore_empty_list) and exit cleanly.
2017-02-15 15:48:54,034 INFO DRY-RUN MODE. No changes will be made.
2017-02-15 15:48:54,034 INFO (CLOSED) indices may be shown that may not be acted on by action "delete_indices".
2017-02-15 15:48:54,034 INFO DRY-RUN: delete_indices: logstash-2017.01.06 with arguments: {}
2017-02-15 15:48:54,034 INFO DRY-RUN: delete_indices: logstash-2017.02.01 (CLOSED) with arguments: {}
2017-02-15 15:48:54,034 INFO DRY-RUN: delete_indices: logstash-2017.02.02 with arguments: {}
2017-02-15 15:48:54,034 INFO DRY-RUN: delete_indices: logstash-2017.02.03 with arguments: {}
2017-02-15 15:48:54,034 INFO DRY-RUN: delete_indices: logstash-2017.02.04 with arguments: {}
2017-02-15 15:48:54,035 INFO DRY-RUN: delete_indices: logstash-2017.02.05 with arguments: {}
2017-02-15 15:48:54,035 INFO DRY-RUN: delete_indices: logstash-2017.02.06 with arguments: {}
2017-02-15 15:48:54,035 INFO DRY-RUN: delete_indices: logstash-2017.02.07 with arguments: {}
2017-02-15 15:48:54,035 INFO DRY-RUN: delete_indices: logstash-2017.02.08 with arguments: {}
2017-02-15 15:48:54,035 INFO Action ID: 1, "delete_indices" completed.
2017-02-15 15:48:54,035 INFO Job completed.
Again, the only change I made was to set disable_action: False (or you could just delete that line altogether), as nothing would run with it set to True.
That does not explain your error, which indicates that your file is incorrectly formatted: there is a root-level key the schema validator does not like. But a straight cut and paste of it works for me, so I can't tell how or why yours is formatted incorrectly.
Did you use DOS newline characters in your curatorAction.yml file, or something like that?
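If DOS line endings are the suspect, one quick way to check and convert the file (assuming dos2unix is available on the box) would be:
# CRLF (DOS) line endings show up as a trailing ^M$ on each line.
cat -A curatorAction.yml | head
# Convert the file in place to Unix (LF) line endings.
dos2unix curatorAction.yml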

Pre-commit Pylint "exit code: 32" upon committing, no issues on `run --all-files`

When I run pre-commit run --all-files, all goes well. When I try to commit, pylint throws an error with exit code 32, followed by its list of usage options. The only files changed are .py files:
git status
On branch include-gitlab-arg
Your branch is up to date with 'origin/include-gitlab-arg'.

Changes to be committed:
  (use "git restore --staged <file>..." to unstage)
        renamed:    code/project1/src/Main.py -> code/project1/src/GitLab/GitLab_runner_token_getter.py
        renamed:    code/project1/src/get_gitlab_runner_token.py -> code/project1/src/GitLab/get_gitlab_runner_token.py
        modified:   code/project1/src/__main__.py
        modified:   code/project1/src/control_website.py
        deleted:    code/project1/src/get_website_controller.py
        modified:   code/project1/src/helper.py
Error Output:
The git commit -m "some change." command yields the following pre-commit error:
pylint...................................................................Failed
- hook id: pylint
- exit code: 32
usage: pylint [options]
optional arguments:
  -h, --help            show this help message and exit

Commands:
  Options which are actually commands. Options in this group are mutually exclusive.

  --rcfile RCFILE
whereas pre-commit run --all-files passes.
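For comparison, the pylint hook can also be run on its own with pre-commit's --verbose flag, which shows the hook's full output even when it passes:
# Run just the pylint hook against all files and print its output regardless
# of whether it passes or fails.
pre-commit run pylint --all-files --verbose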
And the .pre-commit-config.yaml contains:
# This file specifies which checks are performed by the pre-commit service.
# The pre-commit service prevents people from pushing code to git that is not
# up to standards. # The reason mirrors are used instead of the actual
# repositories for e.g. black and flake8, is because those repositories also
# need to contain a pre-commit hook file, which they often don't by default.
# So to resolve that, a mirror is created that includes such a file.
default_language_version:
  python: python3.8. # or python3
repos:
  # Test if the python code is formatted according to the Black standard.
  - repo: https://github.com/Quantco/pre-commit-mirrors-black
    rev: 22.6.0
    hooks:
      - id: black-conda
        args:
          - --safe
          - --target-version=py36
  # Test if the python code is formatted according to the flake8 standard.
  - repo: https://github.com/Quantco/pre-commit-mirrors-flake8
    rev: 5.0.4
    hooks:
      - id: flake8-conda
        args: ["--ignore=E501,W503,W504,E722,E203"]
  # Test if the import statements are sorted correctly.
  - repo: https://github.com/PyCQA/isort
    rev: 5.10.1
    hooks:
      - id: isort
        args: ["--profile", "black", --line-length=79]
  ## Test if the variable typing is correct. (Variable typing is when you say:
  ## def is_larger(nr: int) -> bool: instead of def is_larger(nr). It makes
  ## it explicit what type of input and output a function has.
  ## - repo: https://github.com/python/mypy
  # - repo: https://github.com/pre-commit/mirrors-mypy
  #### - repo: https://github.com/a-t-0/mypy
  #   rev: v0.982
  #   hooks:
  #     - id: mypy
  ## Tests if there are spelling errors in the code.
  # - repo: https://github.com/codespell-project/codespell
  #   rev: v2.2.1
  #   hooks:
  #     - id: codespell
  # Performs static code analysis to check for programming errors.
  - repo: local
    hooks:
      - id: pylint
        name: pylint
        entry: pylint
        language: system
        types: [python]
        args:
          [
            "-rn", # Only display messages
            "-sn", # Don't display the score
            "--ignore-long-lines", # Ignores long lines.
          ]
  # Runs additional tests that are created by the pre-commit software itself.
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      # Check user did not add large files.
      - id: check-added-large-files
      # Check if `.py` files are written in valid Python syntax.
      - id: check-ast
      # Require literal syntax when initializing empty or zero Python builtin types.
      - id: check-builtin-literals
      # Checks if there are filenames that would conflict if case is changed.
      - id: check-case-conflict
      # Checks if the Python functions have docstrings.
      - id: check-docstring-first
      # Checks if any `.sh` files have a shebang like #!/bin/bash
      - id: check-executables-have-shebangs
      # Verifies json format of any `.json` files in repo.
      - id: check-json
      # Checks if there are any existing merge conflicts caused by the commit.
      - id: check-merge-conflict
      # Checks for symlinks which do not point to anything.
      - id: check-symlinks
      # Checks if xml files are formatted correctly.
      - id: check-xml
      # Checks if .yml files are valid.
      - id: check-yaml
      # Checks if debugger imports are performed.
      - id: debug-statements
      # Detects symlinks changed to regular files with content path symlink was pointing to.
      - id: destroyed-symlinks
      # Checks if you don't accidentally push a private key.
      - id: detect-private-key
      # Replaces double quoted strings with single quoted strings.
      # This is not compatible with Python Black.
      #- id: double-quote-string-fixer
      # Makes sure files end in a newline and only a newline.
      - id: end-of-file-fixer
      # Removes UTF-8 byte order marker.
      - id: fix-byte-order-marker
      # Add <# -*- coding: utf-8 -*-> to the top of python files.
      #- id: fix-encoding-pragma
      # Checks if there are different line endings, like \n and crlf.
      - id: mixed-line-ending
      # Asserts `.py` files in folder `/test/` (by default:) end in `_test.py`.
      - id: name-tests-test
        # Override default to check if `.py` files in `/test/` START with `test_`.
        args: ['--django']
      # Ensures JSON files are properly formatted.
      - id: pretty-format-json
        args: ['--autofix']
      # Sorts entries in requirements.txt and removes incorrect pkg-resources entries.
      - id: requirements-txt-fixer
      # Sorts simple YAML files which consist only of top-level keys.
      - id: sort-simple-yaml
      # Removes trailing whitespaces at end of lines of .. files.
      - id: trailing-whitespace
  - repo: https://github.com/PyCQA/autoflake
    rev: v1.7.0
    hooks:
      - id: autoflake
        args: ["--in-place", "--remove-unused-variables", "--remove-all-unused-imports", "--recursive"]
        name: AutoFlake
        description: "Format with AutoFlake"
        stages: [commit]
  - repo: https://github.com/PyCQA/bandit
    rev: 1.7.4
    hooks:
      - id: bandit
        name: Bandit
        stages: [commit]
  # Enforces formatting style in Markdown (.md) files.
  - repo: https://github.com/executablebooks/mdformat
    rev: 0.7.16
    hooks:
      - id: mdformat
        additional_dependencies:
          - mdformat-toc
          - mdformat-gfm
          - mdformat-black
  - repo: https://github.com/MarcoGorelli/absolufy-imports
    rev: v0.3.1
    hooks:
      - id: absolufy-imports
        files: '^src/.+\.py$'
        args: ['--never', '--application-directories', 'src']
  - repo: https://github.com/myint/docformatter
    rev: v1.5.0
    hooks:
      - id: docformatter
  - repo: https://github.com/pre-commit/pygrep-hooks
    rev: v1.9.0
    hooks:
      - id: python-use-type-annotations
      - id: python-check-blanket-noqa
      - id: python-check-blanket-type-ignore
  # Updates the syntax of `.py` files to the specified python version.
  # It is not compatible with: pre-commit hook: fix-encoding-pragma
  - repo: https://github.com/asottile/pyupgrade
    rev: v3.0.0
    hooks:
      - id: pyupgrade
        args: [--py38-plus]
  - repo: https://github.com/markdownlint/markdownlint
    rev: v0.11.0
    hooks:
      - id: markdownlint
With pyproject.toml:
# This is used to configure the black, isort and mypy such that the packages don't conflict.
# This file is read by the pre-commit program.
[tool.black]
line-length = 79
include = '\.pyi?$'
exclude = '''
/(
    \.git
  | \.mypy_cache
  | build
  | dist
)/
'''
[tool.coverage.run]
# Due to a strange bug with xml output of coverage.py not writing the full-path
# of the sources, the full root directory is presented as a source alongside
# the main package. As a result any importable Python file/package needs to be
# included in the omit
source = [
    "foo",
    ".",
]
# Excludes the following directories from the coverage report
omit = [
    "tests/*",
    "setup.py",
]
[tool.isort]
profile = "black"
[tool.mypy]
ignore_missing_imports = true
[tool.pylint.basic]
bad-names=[]
[tool.pylint.messages_control]
# Example: Disable error on needing a module-level docstring
disable=[
    "import-error",
    "invalid-name",
    "fixme",
]
[tool.pytest.ini_options]
# Runs coverage.py through use of the pytest-cov plugin
# An xml report is generated and results are output to the terminal
# TODO: Disable this line to disable CLI coverage reports when running tests.
#addopts = "--cov --cov-report xml:cov.xml --cov-report term"
# Sets the minimum allowed pytest version
minversion = 5.0
# Sets the path where test files are located (Speeds up Test Discovery)
testpaths = ["tests"]
And setup.py:
"""This file is to allow this repository to be published as a pip module, such
that people can install it with: `pip install networkx-to-lava-nc`.
You can ignore it.
"""
import setuptools

with open("README.md", encoding="utf-8") as fh:
    long_description = fh.read()

setuptools.setup(
    name="networkx-to-lava-nc-snn",
    version="0.0.1",
    author="a-t-0",
    author_email="author@example.com",
    description="Converts networkx graphs representing spiking neural networks"
    + " (SNN)s of LIF neurons, into runnable Lava SNNs.",
    long_description=long_description,
    long_description_content_type="text/markdown",
    url="https://github.com/a-t-0/networkx-to-lava-nc",
    packages=setuptools.find_packages(),
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: AGPL3",
        "Operating System :: OS Independent",
    ],
)
Question
How can I resolve the pylint usage error to ensure the commit passes pre-commit?
The issue was caused by the "--ignore-long-lines", # Ignores long lines. argument in the pylint hook of the .pre-commit-config.yaml. I assume it conflicts with the line-length settings for black and in pyproject.toml, which are both set to 79.
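For context, exit code 32 is the code pylint uses for usage errors, so a quick way to confirm which argument it objects to is to run the hook's command by hand (some_module.py below is just a placeholder):
# Exit code 32 is pylint's usage-error code; running the hook's arguments by
# hand reproduces the failure outside pre-commit.
pylint -rn -sn --ignore-long-lines some_module.py
echo $?   # 32 when pylint rejects the invocation
# With the problematic flag removed, pylint runs its normal checks.
pylint -rn -sn some_module.py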

Telegraf SNMP plugin Error: IF-MIB::ifTable: Unknown Object Identifier

Steps followed to install the SNMP manager and agent on EC2:
sudo apt-get update
sudo apt-get install snmp snmp-mibs-downloader
sudo apt-get update
sudo apt-get install snmpd
I opened /etc/snmp/snmp.conf with sudo nano and commented out the following line:
#mibs :
Then I went into the snmpd configuration file and modified it as below:
sudo nano /etc/snmp/snmpd.conf
# Listen for connections from the local system only
#agentAddress udp:127.0.0.1:161              <--- commented out this line
# Listen for connections on all interfaces (both IPv4 and IPv6)
agentAddress udp:161,udp6:[::1]:161          <--- uncommented this line to make it work
Using the command below I can get SNMP data:
snmpwalk -v 2c -c public 127.0.0.1 .
From inside the Docker container I can also get the data:
snmpwalk -v 2c -c public host.docker.internal .
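Note that walking a numeric OID like this works without any MIB files, whereas the Telegraf config below refers to objects by name (for example IF-MIB::ifTable), which does require them; a quick way to check whether a host or container can resolve that name is snmptranslate:
# Prints the numeric OID (.1.3.6.1.2.1.2.2) only if the IF-MIB module can be
# found on the MIB search path; otherwise it fails like the Telegraf error.
snmptranslate -On IF-MIB::ifTable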
Docker-compose:
telegraf_snmp:
  image: telegraf:1.22.1
  container_name: telegraf_snmp
  restart: always
  depends_on:
    - influxdb
  networks:
    - analytics
  extra_hosts:
    - "host.docker.internal:host-gateway"
  # ports:
  #   - "161:161/udp"
  volumes:
    - /mnt/telegraf/snmp:/var/lib/telegraf
    - ./etc/telegraf/snmp/:/etc/telegraf/snmp/
  env_file:
    - secrets.env
  environment:
    INFLUXDB_URL: http://influxdb:8086
  command:
    --config-directory /etc/telegraf/snmp/telegraf.d
    --config /etc/telegraf/snmp/telegraf.conf
  links:
    - influxdb
  logging:
    options:
      max-size: "10m"
      max-file: "3"
Telegraf Input conf:
[[inputs.snmp]]
## Agent addresses to retrieve values from.
## format: agents = ["<scheme://><hostname>:<port>"]
## scheme: optional, either udp, udp4, udp6, tcp, tcp4, tcp6.
## default is udp
## port: optional
## example: agents = ["udp://127.0.0.1:161"]
## agents = ["tcp://127.0.0.1:161"]
## agents = ["udp4://v4only-snmp-agent"]
# agents = ["udp://127.0.0.1:161"]
agents = ["udp://host.docker.internal:161"]
## Timeout for each request.
timeout = "15s"
## SNMP version; can be 1, 2, or 3.
version = 2
## SNMP community string.
community = "public"
## Agent host tag
# agent_host_tag = "agent_host"
## Number of retries to attempt.
retries = 3
## The GETBULK max-repetitions parameter.
# max_repetitions = 10
## SNMPv3 authentication and encryption options.
##
## Security Name.
# sec_name = "myuser"
## Authentication protocol; one of "MD5", "SHA", or "".
# auth_protocol = "MD5"
## Authentication password.
# auth_password = "pass"
## Security Level; one of "noAuthNoPriv", "authNoPriv", or "authPriv".
# sec_level = "authNoPriv"
## Context Name.
# context_name = ""
## Privacy protocol used for encrypted messages; one of "DES", "AES", "AES192", "AES192C", "AES256", "AES256C", or "".
### Protocols "AES192", "AES192C", "AES256", and "AES256C" require the underlying net-snmp tools
### to be compiled with --enable-blumenthal-aes (http://www.net-snmp.org/docs/INSTALL.html)
# priv_protocol = ""
## Privacy password used for encrypted messages.
# priv_password = ""
## Add fields and tables defining the variables you wish to collect. This
## example collects the system uptime and interface variables. Reference the
## full plugin documentation for configuration details.
[[inputs.snmp.field]]
oid = "RFC1213-MIB::sysUpTime.0"
name = "uptime"
[[inputs.snmp.field]]
oid = "RFC1213-MIB::sysName.0"
name = "source"
is_tag = true
[[inputs.snmp.table]]
oid = "IF-MIB::ifTable"
name = "interface"
inherit_tags = ["source"]
[[inputs.snmp.table.field]]
oid = "IF-MIB::ifDescr"
name = "ifDescr"
is_tag = true
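As a side note, the same configuration can be exercised once in the foreground (assuming the container is running; container name and paths as in the compose file above), which surfaces MIB translation problems immediately instead of only in the service logs shown below:
# Run Telegraf once inside the container and print the gathered metrics,
# or the initialization error, to stdout.
docker exec telegraf_snmp telegraf --test \
  --config /etc/telegraf/snmp/telegraf.conf \
  --config-directory /etc/telegraf/snmp/telegraf.d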
Telegraf logs:
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
2022-09-09T10:10:09Z I! Starting Telegraf 1.22.1
2022-09-09T10:10:09Z I! Loaded inputs: snmp
2022-09-09T10:10:09Z I! Loaded aggregators:
2022-09-09T10:10:09Z I! Loaded processors:
2022-09-09T10:10:09Z I! Loaded outputs: file influxdb_v2
2022-09-09T10:10:09Z I! Tags enabled: host=7a38697f4527
2022-09-09T10:10:09Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"7a38697f4527", Flush Interval:10s
2022-09-09T10:10:09Z E! [telegraf] Error running agent: could not initialize input inputs.snmp: initializing table interface: translating: MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
2022-09-09T10:10:11Z I! Starting Telegraf 1.22.1
2022-09-09T10:10:11Z I! Loaded inputs: snmp
2022-09-09T10:10:11Z I! Loaded aggregators:
2022-09-09T10:10:11Z I! Loaded processors:
2022-09-09T10:10:11Z I! Loaded outputs: file influxdb_v2
2022-09-09T10:10:11Z I! Tags enabled: host=7a38697f4527
2022-09-09T10:10:11Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"7a38697f4527", Flush Interval:10s
2022-09-09T10:10:11Z E! [telegraf] Error running agent: could not initialize input inputs.snmp: initializing table interface: translating: MIB search path: /root/.snmp/mibs:/usr/share/snmp/mibs:/usr/share/snmp/mibs/iana:/usr/share/snmp/mibs/ietf
Cannot find module (IF-MIB): At line 1 in (none)
IF-MIB::ifTable: Unknown Object Identifier: exit status 2
But in Telegraf I get the error above.
I checked the MIB directory using ls /usr/share/snmp/mibs.
I cannot find an IF-MIB file there even after installing:
$ sudo apt-get install snmp-mibs-downloader
$ sudo download-mibs
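Also note that the MIB search path in the Telegraf error refers to directories inside the container, not on the host, so it may be worth checking what the container itself can see, for example:
# List the MIB directory inside the Telegraf container (container name as in
# the compose file); an empty or missing directory means the image ships
# without MIB files.
docker exec telegraf_snmp ls /usr/share/snmp/mibs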
How can I resolve this issue? Do I need to follow some additional steps?
The SNMP plugin in Telegraf should be able to pull the data over SNMP.

run filebeat with ansible

I want to run Filebeat, which is already installed, using Ansible.
It worked fine when I executed the command in a terminal.
However, when I ran Filebeat with Ansible, the log output was as shown below.
Ansible Filebeat run role:
- name: run vote history filebeat
  remote_user: irteam
  shell: /home1/irteam/apps/filebeat/filebeat -e -c /home1/irteam/apps/filebeat/vote_history.yml -d publish &
Ansible error log:
"stderr": "2019-10-18T21:37:39.793+0900\tINFO\tinstance/beat.go:606\tHome path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64] Config path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64] Data path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/data] Logs path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/logs]\n2019-10-18T21:37:39.794+0900\tINFO\tinstance/beat.go:614\tBeat ID: 4e840153-d8fd-44c4-ab32-7e2d3dd34d3f\n2019-10-18T21:37:39.794+0900\tINFO\t[seccomp]\tseccomp/seccomp.go:93\tSyscall filter could not be installed because the kernel does not support seccomp\n2019-10-18T21:37:39.794+0900\tINFO\t[beat]\tinstance/beat.go:902\tBeat info\t{\"system_info\": {\"beat\": {\"path\": {\"config\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64\", \"data\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/data\", \"home\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64\", \"logs\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/logs\"}, \"type\": \"filebeat\", \"uuid\": \"4e840153-d8fd-44c4-ab32-7e2d3dd34d3f\"}}}\n2019-10-18T21:37:39.794+0900\tINFO\t[beat]\tinstance/beat.go:911\tBuild info\t{\"system_info\": {\"build\": {\"commit\": \"dd3f47f0fb299aa5de9c5c1468faacc1b9b3c27f\", \"libbeat\": \"7.2.1\", \"time\": \"2019-07-24T17:10:04.000Z\", \"version\": \"7.2.1\"}}}\n2019-10-18T21:37:39.794+0900\tINFO\t[beat]\tinstance/beat.go:914\tGo runtime info\t{\"system_info\": {\"go\": {\"os\":\"linux\",\"arch\":\"amd64\",\"max_procs\":4,\"version\":\"go1.12.4\"}}}\n2019-10-18T21:37:39.796+0900\tINFO\t[beat]\tinstance/beat.go:918\tHost info\t{\"system_info\": {\"host\": {\"architecture\":\"x86_64\",\"boot_time\":\"2019-09-16T14:01:30+09:00\",\"containerized\":false,\"name\":\"dev-vos-api-ncl\",\"ip\":[\"127.0.0.1/8\",\"10.113.103.6/23\"],\"kernel_version\":\"3.10.0-693.2.2.el7.x86_64\",\"mac\":[\"7e:76:cd:f1:39:c6\"],\"os\":{\"family\":\"redhat\",\"platform\":\"centos\",\"name\":\"CentOS Linux\",\"version\":\"7 (Core)\",\"major\":7,\"minor\":4,\"patch\":1708,\"codename\":\"Core\"},\"timezone\":\"KST\",\"timezone_offset_sec\":32400,\"id\":\"97b3ae2e453f442cb387546f7d3d3214\"}}}\n2019-10-18T21:37:39.796+0900\tINFO\t[beat]\tinstance/beat.go:947\tProcess info\t{\"system_info\": {\"process\": {\"capabilities\": {\"inheritable\":null,\"permitted\":null,\"effective\":null,\"bounding\":[\"chown\",\"dac_override\",\"dac_read_search\",\"fowner\",\"fsetid\",\"kill\",\"setgid\",\"setuid\",\"setpcap\",\"linux_immutable\",\"net_bind_service\",\"net_broadcast\",\"net_admin\",\"net_raw\",\"ipc_lock\",\"ipc_owner\",\"sys_module\",\"sys_rawio\",\"sys_chroot\",\"sys_ptrace\",\"sys_pacct\",\"sys_admin\",\"sys_boot\",\"sys_nice\",\"sys_resource\",\"sys_time\",\"sys_tty_config\",\"mknod\",\"lease\",\"audit_write\",\"audit_control\",\"setfcap\",\"mac_override\",\"mac_admin\",\"syslog\",\"wake_alarm\",\"block_suspend\"],\"ambient\":null}, \"cwd\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64\", \"exe\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/filebeat\", \"name\": \"filebeat\", \"pid\": 115545, \"ppid\": 1, \"seccomp\": {\"mode\":\"disabled\"}, \"start_time\": \"2019-10-18T21:37:39.160+0900\"}}}\n2019-10-18T21:37:39.796+0900\tINFO\tinstance/beat.go:292\tSetup Beat: filebeat; Version: 7.2.1\n2019-10-18T21:37:39.797+0900\tINFO\t[publisher]\tpipeline/module.go:97\tBeat name: dev-vos-api-ncl\n2019-10-18T21:37:39.797+0900\tWARN\tbeater/filebeat.go:152\tFilebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. 
If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.\n2019-10-18T21:37:39.797+0900\tINFO\tinstance/beat.go:421\tfilebeat start running.\n2019-10-18T21:37:39.797+0900\tINFO\t[monitoring]\tlog/log.go:118\tStarting metrics logging every 30s\n2019-10-18T21:37:39.797+0900\tINFO\tregistrar/registrar.go:145\tLoading registrar data from /home1/irteam/apps/filebeat-7.2.1-linux-x86_64/data/registry/filebeat/data.json\n2019-10-18T21:37:39.798+0900\tINFO\tregistrar/registrar.go:152\tStates Loaded from registrar: 2\n2019-10-18T21:37:39.798+0900\tWARN\tbeater/filebeat.go:368\tFilebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.\n2019-10-18T21:37:39.798+0900\tINFO\tcrawler/crawler.go:72\tLoading Inputs: 1\n2019-10-18T21:37:39.798+0900\tINFO\tlog/input.go:148\tConfigured paths: [/home1/irteam/logs/vos_api/vote_history*.log]\n2019-10-18T21:37:39.798+0900\tINFO\tinput/input.go:114\tStarting input of type: log; ID: 5211242161898657702 \n2019-10-18T21:37:39.798+0900\tINFO\tcrawler/crawler.go:106\tLoading and starting Inputs completed. Enabled inputs: 1",
"stderr_lines": [
"2019-10-18T21:37:39.793+0900\tINFO\tinstance/beat.go:606\tHome path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64] Config path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64] Data path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/data] Logs path: [/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/logs]",
"2019-10-18T21:37:39.794+0900\tINFO\tinstance/beat.go:614\tBeat ID: 4e840153-d8fd-44c4-ab32-7e2d3dd34d3f",
"2019-10-18T21:37:39.794+0900\tINFO\t[seccomp]\tseccomp/seccomp.go:93\tSyscall filter could not be installed because the kernel does not support seccomp",
"2019-10-18T21:37:39.794+0900\tINFO\t[beat]\tinstance/beat.go:902\tBeat info\t{\"system_info\": {\"beat\": {\"path\": {\"config\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64\", \"data\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/data\", \"home\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64\", \"logs\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/logs\"}, \"type\": \"filebeat\", \"uuid\": \"4e840153-d8fd-44c4-ab32-7e2d3dd34d3f\"}}}",
"2019-10-18T21:37:39.794+0900\tINFO\t[beat]\tinstance/beat.go:911\tBuild info\t{\"system_info\": {\"build\": {\"commit\": \"dd3f47f0fb299aa5de9c5c1468faacc1b9b3c27f\", \"libbeat\": \"7.2.1\", \"time\": \"2019-07-24T17:10:04.000Z\", \"version\": \"7.2.1\"}}}",
"2019-10-18T21:37:39.794+0900\tINFO\t[beat]\tinstance/beat.go:914\tGo runtime info\t{\"system_info\": {\"go\": {\"os\":\"linux\",\"arch\":\"amd64\",\"max_procs\":4,\"version\":\"go1.12.4\"}}}",
"2019-10-18T21:37:39.796+0900\tINFO\t[beat]\tinstance/beat.go:918\tHost info\t{\"system_info\": {\"host\": {\"architecture\":\"x86_64\",\"boot_time\":\"2019-09-16T14:01:30+09:00\",\"containerized\":false,\"name\":\"dev-vos-api-ncl\",\"ip\":[\"127.0.0.1/8\",\"10.113.103.6/23\"],\"kernel_version\":\"3.10.0-693.2.2.el7.x86_64\",\"mac\":[\"7e:76:cd:f1:39:c6\"],\"os\":{\"family\":\"redhat\",\"platform\":\"centos\",\"name\":\"CentOS Linux\",\"version\":\"7 (Core)\",\"major\":7,\"minor\":4,\"patch\":1708,\"codename\":\"Core\"},\"timezone\":\"KST\",\"timezone_offset_sec\":32400,\"id\":\"97b3ae2e453f442cb387546f7d3d3214\"}}}",
"2019-10-18T21:37:39.796+0900\tINFO\t[beat]\tinstance/beat.go:947\tProcess info\t{\"system_info\": {\"process\": {\"capabilities\": {\"inheritable\":null,\"permitted\":null,\"effective\":null,\"bounding\":[\"chown\",\"dac_override\",\"dac_read_search\",\"fowner\",\"fsetid\",\"kill\",\"setgid\",\"setuid\",\"setpcap\",\"linux_immutable\",\"net_bind_service\",\"net_broadcast\",\"net_admin\",\"net_raw\",\"ipc_lock\",\"ipc_owner\",\"sys_module\",\"sys_rawio\",\"sys_chroot\",\"sys_ptrace\",\"sys_pacct\",\"sys_admin\",\"sys_boot\",\"sys_nice\",\"sys_resource\",\"sys_time\",\"sys_tty_config\",\"mknod\",\"lease\",\"audit_write\",\"audit_control\",\"setfcap\",\"mac_override\",\"mac_admin\",\"syslog\",\"wake_alarm\",\"block_suspend\"],\"ambient\":null}, \"cwd\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64\", \"exe\": \"/home1/irteam/apps/filebeat-7.2.1-linux-x86_64/filebeat\", \"name\": \"filebeat\", \"pid\": 115545, \"ppid\": 1, \"seccomp\": {\"mode\":\"disabled\"}, \"start_time\": \"2019-10-18T21:37:39.160+0900\"}}}",
"2019-10-18T21:37:39.796+0900\tINFO\tinstance/beat.go:292\tSetup Beat: filebeat; Version: 7.2.1",
"2019-10-18T21:37:39.797+0900\tINFO\t[publisher]\tpipeline/module.go:97\tBeat name: dev-vos-api-ncl",
"2019-10-18T21:37:39.797+0900\tWARN\tbeater/filebeat.go:152\tFilebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.",
"2019-10-18T21:37:39.797+0900\tINFO\tinstance/beat.go:421\tfilebeat start running.",
"2019-10-18T21:37:39.797+0900\tINFO\t[monitoring]\tlog/log.go:118\tStarting metrics logging every 30s",
"2019-10-18T21:37:39.797+0900\tINFO\tregistrar/registrar.go:145\tLoading registrar data from /home1/irteam/apps/filebeat-7.2.1-linux-x86_64/data/registry/filebeat/data.json",
"2019-10-18T21:37:39.798+0900\tINFO\tregistrar/registrar.go:152\tStates Loaded from registrar: 2",
"2019-10-18T21:37:39.798+0900\tWARN\tbeater/filebeat.go:368\tFilebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.",
"2019-10-18T21:37:39.798+0900\tINFO\tcrawler/crawler.go:72\tLoading Inputs: 1",
"2019-10-18T21:37:39.798+0900\tINFO\tlog/input.go:148\tConfigured paths: [/home1/irteam/logs/vos_api/vote_history*.log]",
"2019-10-18T21:37:39.798+0900\tINFO\tinput/input.go:114\tStarting input of type: log; ID: 5211242161898657702 ",
"2019-10-18T21:37:39.798+0900\tINFO\tcrawler/crawler.go:106\tLoading and starting Inputs completed. Enabled inputs: 1"
It looks like the command is not returning 0 as a return code, although it succeeded (with warnings).
By default, the shell module in Ansible considers any command with a return code other than 0 to be an error.
You should check your command's return code with echo $? right after executing it in your terminal. Then you can either:
- fix everything so that it returns 0 all the time (unless there is really an error), or
- adjust failed_when on your task to include return codes considered a success (replace X with an actual return code you want to trust):
- name: run vote history filebeat
  remote_user: irteam
  shell: /home1/irteam/apps/filebeat/filebeat -e -c /home1/irteam/apps/filebeat/vote_history.yml -d publish &
  register: filebeat_cmd
  failed_when: filebeat_cmd.rc not in [0,X]
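If it is unclear which return code the task actually got, running the play with increased verbosity prints the registered result, including rc, stdout and stderr (the playbook name below is only a placeholder):
# -vv makes Ansible print each task's full result, so the rc value to allow in
# failed_when can be read straight from the output.
ansible-playbook run_filebeat.yml -vv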

Puppet bolt plan in yaml format

I am trying to put together a Puppet Bolt plan in YAML format.
I got it working as a .pp file, and here is the plan:
plan profiles::chg123456(
  TargetSpec $nodes,
) {
  apply($nodes) {
    logrotate::rule { 'proftpd':
      path          => ['/var/log/proftpd/*.log', '/var/log/xferlog', '/var/log/proftpd.system.log', '/var/log/sftp.log', '/var/log/sftp-xferlog',],
      maxsize       => '100m',
      rotate_every  => 'week',
      compress      => true,
      ifempty       => true,
      missingok     => true,
      sharedscripts => true,
      postrotate    => 'test -f /var/lock/subsys/proftpd && /usr/bin/killall -HUP proftpd || :',
    }
  }
}
It worked and created /etc/logrotate.d/proftpd with all the correct settings.
Now I want to convert it to YAML format, but I have no idea how to do that.
Here is what I guessed, but bolt plan show keeps saying:
$ bolt plan show
Parse error in step "chg123456":
No valid action detected (file: C:/Users/puppet/msys64/home/puppet/.puppetlabs/bolt/modules/profiles/plans/chg123456.yaml)
My YAML plan looks as follows:
parameters:
  nodes:
    type: TargetSpec

steps:
  - name: chg123456
    target: $nodes
    logrotate::rules:
      proftpd:
        path:
          - '/var/log/proftpd/*.log'
          - '/var/log/xferlog'
          - '/var/log/proftpd.system.log'
          - '/var/log/sftp.log'
          - '/var/log/sftp-xferlog'
        maxsize: '100m'
        compress: true
        ifempty: true
        missingok: true
        sharedscripts: true
        postrotate: 'test -f /var/lock/subsys/proftpd && /usr/bin/killall -HUP proftpd || :'

return: $chg123456
What am I doing wrong?
Thanks
You'll want to use a resources step, and list the resources you want to use in yaml (documentation):
parameters:
  nodes:
    type: TargetSpec

steps:
  - name: chg123456
    target: $nodes
    resources:
      - logrotate::rules: proftpd
        parameters:
          path:
            - '/var/log/proftpd/*.log'
            - '/var/log/xferlog'
            - '/var/log/proftpd.system.log'
            - '/var/log/sftp.log'
            - '/var/log/sftp-xferlog'
          maxsize: '100m'
          compress: true
          ifempty: true
          missingok: true
          sharedscripts: true
          postrotate: 'test -f /var/lock/subsys/proftpd && /usr/bin/killall -HUP proftpd || :'

return: $chg123456
In response to one comment, bolt plan convert is only used to convert yaml plans into Puppet plans, not the other way around.
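Once the YAML plan parses, it should show up in bolt plan show again and can be run with the nodes parameter passed on the command line (the target name below is a placeholder), for example:
# Confirm the plan is now valid and list its parameters.
bolt plan show profiles::chg123456
# Run the plan against one or more comma-separated targets.
bolt plan run profiles::chg123456 nodes=server1.example.com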

how to configure the elasticsearch.yml for the repository-hdfs plugin of elasticsearch

elasticsearch 2.3.2
repository-hdfs 2.3.1
I configured the elasticsearch.yml file as in the official Elastic documentation:
repositories
  hdfs:
    uri: "hdfs://<host>:<port>/"    # optional - Hadoop file-system URI
    path: "some/path"               # required - path with the file-system where data is stored/loaded
    load_defaults: "true"           # optional - whether to load the default Hadoop configuration (default) or not
    conf_location: "extra-cfg.xml"  # optional - Hadoop configuration XML to be loaded (use commas for multi values)
    conf.<key> : "<value>"          # optional - 'inlined' key=value added to the Hadoop configuration
    concurrent_streams: 5           # optional - the number of concurrent streams (defaults to 5)
    compress: "false"               # optional - whether to compress the metadata or not (default)
    chunk_size: "10mb"              # optional - chunk size (disabled by default)
but it raises an exception; the format is incorrect.
Error info:
Exception in thread "main" SettingsException
[Failed to load settings from [elasticsearch.yml]]; nested: ScannerException[while scanning a simple key'
in 'reader', line 99, column 2:
repositories
^
could not find expected ':'
in 'reader', line 100, column 10:
hdfs:
^];
Likely root cause: while scanning a simple key
in 'reader', line 99, column 2:
repositories
^
could not find expected ':'
in 'reader', line 100, column 10:
hdfs:
I edited it as:
repositories:
  hdfs:
    uri: "hdfs://191.168.4.220:9600/"
but it doesn't work.
I want to know what the correct format is.
I found the AWS configuration example for elasticsearch.yml:
cloud:
  aws:
    access_key: AKVAIQBF2RECL7FJWGJQ
    secret_key: vExyMThREXeRMm/b/LRzEB8jWwvzQeXgjqMX+6br

repositories:
  s3:
    bucket: "bucket_name"
    region: "us-west-2"

  private-bucket:
    bucket: <bucket not accessible by default key>
    access_key: <access key>
    secret_key: <secret key>

  remote-bucket:
    bucket: <bucket in other region>
    region: <region>

  external-bucket:
    bucket: <bucket>
    access_key: <access key>
    secret_key: <secret key>
    endpoint: <endpoint>
    protocol: <protocol>
I imitated it, but it still doesn't work.
I also tried to install repository-hdfs 2.3.1 in Elasticsearch 2.3.2, but that failed:
ERROR: Plugin [repository-hdfs] is incompatible with Elasticsearch [2.3.2]. Was designed for version [2.3.1]
So the plugin can only be installed in Elasticsearch 2.3.1.
You should specify the uri, path and conf_location options, and maybe delete the conf.<key> option. Take the following config as an example:
security.manager.enabled: false
repositories.hdfs:
  uri: "hdfs://master:9000"   # optional - Hadoop file-system URI
  path: "/aaa/bbb"            # required - path with the file-system where data is stored/loaded
  load_defaults: "true"       # optional - whether to load the default Hadoop configuration (default) or not
  conf_location: "/home/ec2-user/app/hadoop-2.6.3/etc/hadoop/core-site.xml,/home/ec2-user/app/hadoop-2.6.3/etc/hadoop/hdfs-site.xml" # optional - Hadoop configuration XML to be loaded (use commas for multi values)
  concurrent_streams: 5       # optional - the number of concurrent streams (defaults to 5)
  compress: "false"           # optional - whether to compress the metadata or not (default)
  chunk_size: "10mb"          # optional - chunk size (disabled by default)
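After restarting the node with this file, a quick sanity check that the plugin actually loaded (default HTTP port assumed) is to list the plugins over the cat API:
# repository-hdfs should appear in the output for the node.
curl -s 'http://localhost:9200/_cat/plugins?v'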
I started ES successfully:
[----#----------- elasticsearch-2.3.1]$ bin/elasticsearch
[2016-05-06 04:40:58,173][INFO ][node ] [Protector] version[2.3.1], pid[17641], build[bd98092/2016-04-04T12:25:05Z]
[2016-05-06 04:40:58,174][INFO ][node ] [Protector] initializing ...
[2016-05-06 04:40:58,830][INFO ][plugins ] [Protector] modules [reindex, lang-expression, lang-groovy], plugins [repository-hdfs], sites []
[2016-05-06 04:40:58,863][INFO ][env ] [Protector] using [1] data paths, mounts [[/ (rootfs)]], net usable_space [8gb], net total_space [9.9gb], spins? [unknown], types [rootfs]
[2016-05-06 04:40:58,863][INFO ][env ] [Protector] heap size [1007.3mb], compressed ordinary object pointers [true]
[2016-05-06 04:40:58,863][WARN ][env ] [Protector] max file descriptors [4096] for elasticsearch process likely too low, consider increasing to at least [65536]
[2016-05-06 04:40:59,192][INFO ][plugin.hadoop.hdfs ] Loaded Hadoop [1.2.1] libraries from file:/home/ec2-user/app/elasticsearch-2.3.1/plugins/repository-hdfs/
[2016-05-06 04:41:01,598][INFO ][node ] [Protector] initialized
[2016-05-06 04:41:01,598][INFO ][node ] [Protector] starting ...
[2016-05-06 04:41:01,823][INFO ][transport ] [Protector] publish_address {xxxxxxxxx:9300}, bound_addresses {xxxxxxx:9300}
[2016-05-06 04:41:01,830][INFO ][discovery ] [Protector] hdfs/9H8wli0oR3-Zp-M9ZFhNUQ
[2016-05-06 04:41:04,886][INFO ][cluster.service ] [Protector] new_master {Protector}{9H8wli0oR3-Zp-M9ZFhNUQ}{xxxxxxx}{xxxxx:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-05-06 04:41:04,908][INFO ][http ] [Protector] publish_address {xxxxxxxxx:9200}, bound_addresses {xxxxxxx:9200}
[2016-05-06 04:41:04,908][INFO ][node ] [Protector] started
[2016-05-06 04:41:05,415][INFO ][gateway ] [Protector] recovered [1] indices into cluster_state
[2016-05-06 04:41:06,097][INFO ][cluster.routing.allocation] [Protector] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[website][0], [website][0]] ...]).
But when I try to create a snapshot:
PUT /_snapshot/my_backup
{
  "type": "hdfs",
  "settings": {
    "path": "/aaa/bbb/"
  }
}
I get the following error:
Caused by: java.io.IOException: Mkdirs failed to create file:/aaa/bbb/tests-zTkKRtoZTLu3m3RLascc1w
