Any ideas why I am getting "yum" is not a valid attribute for a play? [closed] - ansible

I have the following, and when I run a syntax check on just this part, it says that "yum" is not a valid attribute for the play:
---
# tasks file for ansible-vsftpd
- name: Packages are installed
  yum:
    name: "{{ vsftpd_package }}"
    state: present
Whether I run a syntax check on just this part or on it together with the rest of the code, it keeps coming back with "yum" is not a valid attribute for a play, flagging line 3 of the code.
Any ideas?

The syntax error message
"<moduleName>" is not a valid attribute for the play
appears when something required for a valid playbook is missing. In this example, the keywords hosts and tasks are missing:
---
- hosts: localhost
  tasks:
    - name: Packages are installed
      yum:
        name: "{{ vsftpd_package }}"
        state: present
Documentation
Playbook syntax
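For completeness, the corrected file can be re-checked from the command line; a minimal sketch, assuming it is saved as vsftpd.yml:
ansible-playbook --syntax-check vsftpd.yml
If the original snippet is meant to be a role's tasks file (as its comment suggests), it should stay without hosts and tasks, and the syntax check should instead be run against a playbook that applies the role.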

Related

Ansible AWX returns error: template error while templating string: unable to locate collection community.general

I am working on a project using Ansible AWX. In this project I have a check to see whether all my microk8s pods are in the Running state. Before I installed AWX I was testing all my plays on my Linux VM. The following play worked fine on my Linux VM but does not seem to work in Ansible AWX.
- name: Wait for pods to become running
  hosts: host_company_01
  tasks:
    - name: wait for pod status
      shell: kubectl get pods -o json
      register: kubectl_get_pods
      until: kubectl_get_pods.stdout|from_json|json_query('items[*].status.phase')|unique == ["Running"]
      timeout: 600
AWX gives me the following response:
[WARNING]: an unexpected error occurred during Jinja2 environment setup: unable
to locate collection community.general
fatal: [host_company_01]: FAILED! => {"msg": "The conditional check 'kubectl_get_pods.stdout|from_json|json_query('items[*].status.phase')|unique == [\"Running\"]' failed. The error was: template error while templating string: unable to locate collection community.general. String: {% if kubectl_get_pods.stdout|from_json|json_query('items[*].status.phase')|unique == [\"Running\"] %} True {% else %} False {% endif %}"}
I have looked at the error message and tried several possible solutions, but without any effect.
First, I checked for community.general in the ansible-galaxy collection list output. After seeing it was there, I tried installing it once again, this time with sudo. The collection was installed, but after running my workflow template the error message popped up again.
Second, I tried different execution environments. I did not expect this to make a difference, but tried it anyway, since someone online had fixed a similar issue by changing the EE.
Last, I tried to get around this play by building a new play with different commands. Sadly, I was not able to build another play that did what the original one did.
Since I cannot build a play that fits my needs, I came back to the error message to try to fix it.

Can anyone see what I am doing wrong in the ansible playbook?
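One possible direction, sketched here as an assumption rather than a confirmed fix: json_query is provided by the community.general collection, which is exactly what the error says AWX cannot locate. The same condition can be expressed with built-in Jinja2/Ansible filters, avoiding the collection lookup entirely:
    - name: wait for pod status
      shell: kubectl get pods -o json
      register: kubectl_get_pods
      # retries/delay are illustrative stand-ins for the original timeout: 600
      retries: 60
      delay: 10
      # built-in filters only: ['items'] avoids clashing with the dict .items()
      # method, and map(attribute=...) replaces json_query
      until: (kubectl_get_pods.stdout | from_json)['items'] | map(attribute='status.phase') | unique | list == ["Running"]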

I got the following odd error with ansible-lint, and I can't for the life of me figure out what we did wrong. It's probably something incredibly stupid, but there you go.
ansible-lint -p disable-beats.yml
Couldn't parse task at disable-beats.yml:5 (conflicting action statements: systemd, __line__
The error appears to be in '<unicode string>': line 5, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
(could not open file to display line))
{ 'name': 'disable auditbeats',
  'skipped_rules': [],
  'systemd': { '__file__': 'disable-beats.yml',
               '__line__': 7,
               'enabled': False,
               'name': 'auditbeat'}}
The following is the contents of the file checked with the linter:
---
- hosts: linuxservers
  tasks:
    - name: disable auditbeats
      systemd:
        name: auditbeat
        enabled: no
That's a known issue with ansible-lint; upgrading to a more recent version, such as 5.0.12, will make it go away. If it doesn't in your case, you can either comment on that issue or open a regression report, at which point you should provide the versions you are using.
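For example, if ansible-lint was installed via pip (an assumption; adjust for your package manager):
pip install --upgrade 'ansible-lint>=5.0.12'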

How to correctly configure Alerting yaml rules for Prometheus / Alertmanager

Since I'm having a horrid time configuring the alerting rules for the Prometheus Alertmanager, maybe someone can give me a hint in the right direction.
Here are the rules I'm currently trying to implement (taken straight from
https://prometheus.io/docs/prometheus/latest/configuration/alerting_rules/):
rules.yml:
groups:
- name: example
  rules:
  # Alert for any instance that is unreachable for >5 minutes.
  - alert: InstanceDown
    expr: up == 0
    for: 5m
    labels:
      severity: page
    annotations:
      summary: "Instance {{ $labels.instance }} down"
      description: "{{ $labels.instance }} of job {{ $labels.job }} has been down for more than 5 minutes."
  # Alert for any instance that has a median request latency >1s.
  - alert: APIHighRequestLatency
    expr: api_http_request_latencies_second{quantile="0.5"} > 1
    for: 10m
    annotations:
      summary: "High request latency on {{ $labels.instance }}"
      description: "{{ $labels.instance }} has a median request latency above 1s (current value: {{ $value }}s)"
With the amtool and promtool config checks I'm getting the following error:
Checking '/etc/prometheus/rules.yml' FAILED: yaml: unmarshal errors:
line 1: field groups not found in type config.plain
amtool: error: failed to validate 1 file(s)
My first guess would be wrong indentation or some other kind of YAML syntax error.
However, I've tried multiple alerting rules and also different files as well as editors (currently I'm using nano). The YAML has also been checked with multiple YAML linters.
So far I've always gotten errors along the lines of the one shown.
Any help or suggestion would be greatly appreciated!
prometheus, version 2.22.2 (branch: HEAD, revision: de1c1243f4dd66fbac3e8213e9a7bd8dbc9f38b2)
go version: go1.15.5
platform: linux/amd64
alertmanager, version 0.21.0 (branch: HEAD, revision: 4c6c03ebfe21009c546e4d1e9b92c371d67c021d)
go version: go1.14.4
YAML linters:
https://codebeautify.org/yaml-validator
https://onlineyamltools.com/validate-yaml
Tested alerting rules:
https://grafana.com/blog/2020/02/25/step-by-step-guide-to-setting-up-prometheus-alertmanager-with-slack-pagerduty-and-gmail/
https://rakeshjain-devops.medium.com/prometheus-alerting-most-common-alert-rules-e9e219d4e949
https://github.com/vegasbrianc/prometheus/blob/master/prometheus/alert.rules
The unmarshal of groups fails because it is supposed to be a list:
groups:
- name: GroupName
  rules:
  - alert: ...
See the documentation about recording rules, which use the same format as alerting rules.
UPDATE after post was corrected
Your file seems to be correct. The command line is:
promtool check rules /etc/prometheus/rules.yml
I suspect you ran the command that checks the config rather than the rules.
Please note that amtool validates AlertManager's config, not Prometheus'.
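To make the distinction concrete, here is a sketch of the three checks (the file paths are assumptions):
# validates Prometheus rule files (what this question needs)
promtool check rules /etc/prometheus/rules.yml
# validates the main Prometheus config, which references rule files under rule_files:
promtool check config /etc/prometheus/prometheus.yml
# validates Alertmanager's own config, not Prometheus rules
amtool check-config /etc/alertmanager/alertmanager.yml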

Ansible - Centos machine, ansible_lsb is empty [closed]

Running Ansible 2.4.2.0 against a remote CentOS machine, ansible_lsb is empty when running
ansible -m setup hostname
I get
...
"ansible_lsb": {},
"ansible_lvm": {
...
Clones of the same machine show a full map; "major_release" is what I really need.
Other attributes are fine, such as:
"ansible_distribution": "CentOS",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/redhat-release",
"ansible_distribution_file_variety": "RedHat",
"ansible_distribution_major_version": "7",
"ansible_distribution_release": "Core",
"ansible_distribution_version": "7.5.1804",
What can I do to populate ansible_lsb?
You have to install the redhat-lsb-core package on the remote machine.
yum -y install redhat-lsb-core
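After installing the package, you can confirm the fact is populated; a quick sketch, with hostname as a placeholder for your target:
ansible hostname -m setup -a 'filter=ansible_lsb'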

vmware_guest_snapshot: ERROR! no action detected in task [duplicate]

This question already has answers here:
Why does Ansible show "ERROR! no action detected in task" error?
(6 answers)
I have an issue with a really basic module which doesn't seem to work like it is supposed to, I guess?
I actually copied the part out of the documentation.
My playbook looks like this:
---
- hosts: all
  tasks:
    - name: Create Snapshot
      vmware_guest_snapshot:
        hostname: vSphereIP
        username: vSphereUsername
        password: vSpherePassword
        name: vmname
        state: present
        snapshot_name: aSnapshotName
        description: aSnapshotDescription
I run this playbook from Ansible Tower and it throws "ERROR! no action detected in task". It looks like a syntax error to me, but I literally copied it over from the documentation, and other modules are working with the same syntax.
So, does anyone know what I'm doing wrong?
The vmware_guest_snapshot module is available since version 2.3 of Ansible (which is not yet released).
If you are running any older version, the module name will not be recognised and Ansible will report the error no action detected in task.
Currently you need to be running Ansible from source (the devel branch) to run the vmware_guest_snapshot module.
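To confirm which version is actually being invoked before reaching for the source tree, a quick sketch (standard CLI, nothing Tower-specific):
ansible --version
# one way to run the devel branch, assuming a pip-based install:
pip install git+https://github.com/ansible/ansible.git@devel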
