I listed the 'in-use' volumes of an OpenStack instance and filtered their volume IDs into a file, from which the backups have to be created:
- shell: openstack volume list | grep 'in-use' | awk '{print $2}' > /home/volumeid

- shell: openstack volume backup create {{ item }}
  with_items:
    - /home/volumeid
The error looks like this:
**failed: [controller2] (item=volumeid) => {"ansible_loop_var": "item", "changed": true, "cmd": "openstack volume backup create volumeid", "delta": "0:00:03.682611", "end": "2022-09-26 12:01:59.961613", "item": "volumeid", "msg": "non-zero return code", "rc": 1, "start": "2022-09-26 12:01:56.279002", "stderr": "No volume with a name or ID of 'volumeid' exists.", "stderr_lines": ["No volume with a name or ID of 'volumeid' exists."], "stdout": "", "stdout_lines": []}
failed: [controller1] (item=volumeid) => {"ansible_loop_var": "item", "changed": true, "cmd": "openstack volume backup create volumeid", "delta": "0:00:04.020051", "end": "2022-09-26 12:02:00.280130", "item": "volumeid", "msg": "non-zero return code", "rc": 1, "start": "2022-09-26 12:01:56.260079", "stderr": "No volume with a name or ID of 'volumeid' exists.", "stderr_lines": ["No volume with a name or ID of 'volumeid' exists."], "stdout": "", "stdout_lines": []}**
Can someone explain how to create the volume backups from that file (which contains the volume IDs) in an Ansible playbook?
Currently you are supplying only one element to with_items, namely the string /home/volumeid, so your loop iterates once over the file name rather than over its contents.
You need to use the file lookup if the file is on localhost, or the slurp module if it is on the remote host. Example:
For localhost:
- name: Show the volume id from the file
  debug:
    msg: "{{ item }}"
  loop: "{{ lookup('file', '/home/volumeid').splitlines() }}"
For the remote host:
- name: Alternate if the file is on remote host
  ansible.builtin.slurp:
    src: /home/volumeid
  register: vol_data

- name: Show the volume id from the file
  debug:
    msg: "{{ item }}"
  loop: "{{ (vol_data['content'] | b64decode).splitlines() }}"
Or, as a single shell command:
openstack volume list --status in-use -c ID -f value | xargs -n1 openstack volume backup create
One piece of advice: don't hard-code filters such as grep 'in-use' or awk '{print $2}'; openstack has its own optional arguments and output formatters. Check them with openstack <command> [subcommand] -h.
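If you prefer to keep everything inside the playbook, those same formatter flags can feed a registered variable directly, which avoids the intermediate file entirely. A hedged sketch (task names are illustrative):
- name: List the ids of in-use volumes
  shell: openstack volume list --status in-use -c ID -f value
  register: in_use_volumes

- name: Back up each in-use volume
  shell: openstack volume backup create {{ item }}
  loop: "{{ in_use_volumes.stdout_lines }}"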
Related
I'm trying to pass the content of a text file in the local folder to an Ansible shell command and, for whatever reason, I can't; I'm sure it's something silly that I'm missing.
Any help would be really appreciated.
Here is the Ansible task:
- name: Connect K8s cluster to Shipa
  shell: shipa cluster-add {{ cluster_name }} --addr $(< {{ cluster_name }}.txt) --cacert {{ cluster_name }}.crt --pool {{ cluster_name }} --ingress-service-type=LoadBalancer
  args:
    executable: /bin/bash
  no_log: False
Whenever I run this, this is what I get in the log:
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "shipa cluster-add scale2 --addr $(< scale2.txt) --cacert scale2.crt --pool scale2 --ingress-service-type=LoadBalancer", "delta": "0:00:00.096162", "end": "2020-04-30 21:32:54.245781", "msg": "non-zero return code", "rc": 1, "start": "2020-04-30 21:32:54.149619", "stderr": "Error: 500 Internal Server Error: parse \"https://\\x1b[0;33mhttps://192.196.192.156\\x1b[0m\": net/url: invalid control character in URL", "stderr_lines": ["Error: 500 Internal Server Error: parse \"https://\\x1b[0;33mhttps://192.196.192.156\\x1b[0m\": net/url: invalid control character in URL"], "stdout": "", "stdout_lines": []}
Here is the content of the file:
# cat scale2.txt
https://192.196.192.156
Here is what I'm trying to pass in the command:
shipa cluster-add scale2 --addr https://192.196.192.156 --cacert scale2.crt --pool scale2 --ingress-service-type=LoadBalancer
EDIT:
I also tried the following:
- name: Get K8s cluster address
  shell: kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
  args:
    executable: /bin/bash
  register: cluster_address
  no_log: False

- debug: var=cluster_address

- name: Save to file
  copy:
    content: "{{ ''.join(cluster_address.stdout_lines) | replace('\\u001b[0;33m', '')| replace('\\u001b[0m', '') }}"
    dest: "cluster_address.json"

- name: Connect K8s cluster to Shipa
  shell: shipa cluster-add {{ cluster_name }} --addr {{ lookup('file', 'cluster_address.json') }} --cacert {{ cluster_name }}.crt --pool {{ cluster_name }} --ingress-service-type=LoadBalancer
  args:
    executable: /bin/bash
  no_log: False
But it saves the following content to the file:
# vi cluster_address.json
^[[0;33mhttps://192.192.56.192^[[0m
Here is the output of the ansible execution:
TASK [debug] **********************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
    "cluster_address": {
        "changed": true,
        "cmd": "kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'",
        "delta": "0:00:00.144193",
        "end": "2020-04-30 22:54:21.970460",
        "failed": false,
        "rc": 0,
        "start": "2020-04-30 22:54:21.826267",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "\u001b[0;33mhttps://192.192.56.192\u001b[0m",
        "stdout_lines": [
            "\u001b[0;33mhttps://192.192.56.192\u001b[0m"
        ]
    }
}
TASK [Save to file] ***************************************************************************************************************************************************************************************************************************
changed: [localhost]
TASK [Connect K8s cluster to Shipa] ***********************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "shipa cluster-add scale1 --addr \u001b[0;33mhttps://192.192.56.192\u001b[0m --cacert scale1.crt --pool scale1 --ingress-service-type=LoadBalancer", "delta": "0:00:00.106936", "end": "2020-04-30 22:54:23.213485", "msg": "non-zero return code", "rc": 127, "start": "2020-04-30 22:54:23.106549", "stderr": "Error: 400 Bad Request: either default or a list of pools must be set\n/bin/bash: 33mhttps://192.192.56.192\u001b[0m: No such file or directory", "stderr_lines": ["Error: 400 Bad Request: either default or a list of pools must be set", "/bin/bash: 33mhttps://192.192.56.192\u001b[0m: No such file or directory"], "stdout": "", "stdout_lines": []}
You can disable the fancy terminal output by setting TERM to dumb:
- shell: kubectl cluster-info | awk '/Kubernetes master/ {print $NF}'
  environment:
    TERM: dumb
  register: cluster_info

- set_fact:
    cluster_addr: '{{ cluster_info.stdout | trim }}'
Then, separately, you don't need the file lookup for a value you already have in a Jinja2 variable:
- name: Connect K8s cluster to Shipa
  command: shipa cluster-add {{ cluster_name }} --addr {{ cluster_addr }} --cacert {{ cluster_name }}.crt --pool {{ cluster_name }} --ingress-service-type=LoadBalancer
  args:
    executable: /bin/bash
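If your kubectl build ignores TERM=dumb, an alternative (hedged) approach is to strip the ANSI colour codes from the registered output with the regex_replace filter instead:
- set_fact:
    cluster_addr: "{{ cluster_info.stdout | regex_replace('\\x1b\\[[0-9;]*m', '') | trim }}"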
I have written the following Ansible playbook to find disk failures in the RAID array:
- name: checking raid status
  shell: "cat /proc/mdstat | grep nvme"
  register: "array_check"

- debug:
    msg: "{{ array_check.stdout_lines }}"
Following is the output I got:
"msg": [
    "md0 : active raid1 nvme0n1p1[0] nvme1n1p1[1]",
    "md2 : active raid1 nvme1n1p3[1](F) nvme0n1p3[0]",
    "md1 : active raid1 nvme1n1p2[1] nvme0n1p2[0]"
]
I want to extract the name of the failed disk from the registered variable array_check.
How do I do this in Ansible? Can I use the set_fact module? Can I use grep, awk, or sed on the registered variable array_check?
This is the playbook I am using to check the health status of a drive using smartctl:
- name: checking the smartctl logs
  shell: "smartctl -H /dev/{{ item }}"
  with_items:
    - nvme0
    - nvme1
And I am facing the following errors:
(item=nvme0) => {"changed": true, "cmd": "smartctl -H /dev/nvme0", "delta": "0:00:00.090760", "end": "2019-09-05 11:21:17.035173", "failed": true, "item": "nvme0", "rc": 127, "start": "2019-09-05 11:21:16.944413", "stderr": "/bin/sh: 1: smartctl: not found", "stdout": "", "stdout_lines": [], "warnings": []}
(item=nvme1) => {"changed": true, "cmd": "smartctl -H /dev/nvme1", "delta": "0:00:00.086596", "end": "2019-09-05 11:21:17.654036", "failed": true, "item": "nvme1", "rc": 127, "start": "2019-09-05 11:21:17.567440", "stderr": "/bin/sh: 1: smartctl: not found", "stdout": "", "stdout_lines": [], "warnings": []}
The desired output should be something like this:
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
Below is the complete playbook, including the logic to execute multiple commands in a single task using with_items:
---
- hosts: raid_host
  remote_user: ansible
  become: yes
  become_method: sudo

  tasks:
    - name: checking raid status
      shell: "cat /proc/mdstat | grep 'F' | cut -d' ' -f6 | cut -d'[' -f1"
      register: "array_check"

    - debug:
        msg: "{{ array_check.stdout_lines }}"

    - name: checking the smartctl logs for the drive
      shell: "/usr/sbin/smartctl -H /dev/{{ item }} | tail -2 | awk -F' ' '{print $6}'"
      with_items:
        - "nvme0"
        - "nvme1"
      register: "smartctl_status"
- name: Create directory for python files
  file: path=/home/vuser/test/
        state=directory
        owner={{ user }}
        group={{ user }}
        mode=755

- name: Copy python file over
  copy:
    src=sample.py
    dest=/home/vuser/test/sample.py
    owner={{ user }}
    group={{ user }}
    mode=777

- name: Execute script
  command: python sample.py
  args:
    chdir: /home/vuser/test/
  ignore_errors: yes
Error:
fatal: [n]: FAILED! => {"changed": true, "cmd": ["python", "sample.py"], "delta": "0:00:00.003200", "end": "2019-07-18 13:57:40.213252", "msg": "non-zero return code", "rc": 1, "start": "2019-07-18 13:57:40.221132", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}
I'm not able to figure this out; help would be appreciated.
Change the indent like below and remove ignore_errors.
- name: Execute script
  command: python sample.py
  args:
    chdir: /home/vuser/test/
  register: cat_contents

- name: Print contents
  debug:
    msg: "{{ cat_contents.stdout }}"
- name: Create directory for python files
  file: path=/home/vuser/test/
        state=directory
        owner={{ user }}
        group={{ user }}
        mode=755

- name: Copy python file over
  copy:
    src=/home/vuser/sample.py
    dest=/home/vuser/test/
    owner={{ user }}
    group={{ user }}
    mode=777

- name: Execute script
  command: python sample.py
  args:
    chdir: /home/vuser/test/
The sample.py is copied correctly to the destination folder /home/vuser/test/ on node1,
but I still get this error after making the change:
fatal: [node1]: FAILED! => {"changed": true, "cmd": ["python", "sample.py"], "delta": "0:00:00.002113", "end": "2019-07-19 10:59:53.7535351", "msg": "non-zero return code", "rc": 1, "start": "2019-07-19 10:59:53.358678548", "stderr": "", "stderr_lines": [], "stdout": "hello world", "stdout_lines": ["hello world"]}
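For what it's worth, the non-zero return code in both runs appears to come from sample.py itself: stdout is printed, so the script runs, but it exits with status 1 (for example via sys.exit(1)), and the command module reports that exit status. If a non-zero exit is expected, a hedged sketch that records the code instead of failing the play (the failed_when override and the variable name script_result are assumptions):
- name: Execute script
  command: python sample.py
  args:
    chdir: /home/vuser/test/
  register: script_result
  failed_when: false    # inspect the exit code yourself instead of failing the task

- name: Show exit code and output
  debug:
    msg: "rc={{ script_result.rc }}, stdout={{ script_result.stdout }}"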
As new nodes (CentOS 7.6) are added, there are basic groups and users that need to be created. Some of the nodes have some of the groups and users. I would like to only create the groups and users on the nodes where they don't exist via my Ansible (version 2.8.0) basic role file.
Currently, I'm testing for the group/user, but there's always a "fatal" printed and my conditionals don't appear to work.
roles/basic/tasks/main.yml
- name: "Does k8s group exist?"
shell: grep -q "^k8s" /etc/group
register: gexist
- name: "Create k8s group"
shell: groupadd -g 8000 k8s
when: gexist.rc != 0
- name: "Does k8s user exist?"
shell: id -u k8s > /dev/null 2>&1
register: uexist
- name: "Create k8s user"
shell: useradd -g 8000 -d /home/k8s -s /bin/bash -u 8000 -m k8s
when: uexist.rc != 0
which yields:
TASK [basic : Does k8s group exist?] *****************************************************************************************************************************
fatal: [master]: FAILED! => {"changed": true, "cmd": "grep -q \"^k8s:\" /etc/group", "delta": "0:00:00.009424", "end": "2019-05-29 14:42:17.947350", "msg": "non-zero return code", "rc": 1, "start": "2019-05-29 14:42:17.937926", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
fatal: [node3]: FAILED! => {"changed": true, "cmd": "grep -q \"^k8s:\" /etc/group", "delta": "0:00:00.012089", "end": "2019-05-29 06:41:36.661356", "msg": "non-zero return code", "rc": 1, "start": "2019-05-29 06:41:36.649267", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
fatal: [node1]: FAILED! => {"changed": true, "cmd": "grep -q \"^k8s:\" /etc/group", "delta": "0:00:00.010104", "end": "2019-05-29 14:42:17.990460", "msg": "non-zero return code", "rc": 1, "start": "2019-05-29 14:42:17.980356", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
changed: [node2]
There has got to be a better way to do conditionals (if-then-else) than the way I'm doing it.
See user and group. The code below is probably what you're looking for.
- name: "Create k8s group"
group:
gid: 8000
name: k8s
- name: "Create k8s user"
user:
group: k8s
home: /home/k8s
shell: /bin/bash
uid: 8000
name: k8s
The only if-then-else in Ansible I'm aware of is the ternary filter (for other options see Jinja). Control flow is rather limited in Ansible compared to other procedural languages, because the code defines a state of the system rather than a procedure.
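For illustration, a minimal ternary sketch built on the registered check from the question (the message text is made up):
- debug:
    msg: "{{ (gexist.rc == 0) | ternary('k8s group already exists', 'k8s group is missing') }}"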
To answer your question:
How to do an Ansible condition test of user/group existence?
Your code does test it, but the purpose of Ansible is to define the state of the system. It's not important whether a user or group existed before. After the code has run successfully they will exist (definition of a state), and running the code again makes sure they still exist (audit).
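That said, if you do want to keep the explicit check-then-create pattern from the question, the "fatal" output disappears once the check task is told that a non-zero exit code is not a failure. A minimal sketch based on the group check (the user check would be handled the same way):
- name: "Does k8s group exist?"
  shell: grep -q "^k8s:" /etc/group
  register: gexist
  failed_when: false
  changed_when: false

- name: "Create k8s group"
  shell: groupadd -g 8000 k8s
  when: gexist.rc != 0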
While provisioning via Vagrant and Ansible I keep running into this issue.
TASK [postgresql : Create extensions] ******************************************
failed: [myapp] (item=postgresql_extensions) => {"changed": true, "cmd": "psql myapp -c 'CREATE EXTENSION IF NOT EXISTS postgresql_extensions;'", "delta": "0:00:00.037786", "end": "2017-04-01 08:37:34.805325", "failed": true, "item": "postgresql_extensions", "rc": 1, "start": "2017-04-01 08:37:34.767539", "stderr": "ERROR: could not open extension control file \"/usr/share/postgresql/9.3/extension/postgresql_extensions.control\": No such file or directory", "stdout": "", "stdout_lines": [], "warnings": []}
I'm using a railsbox.io generated playbook.
Turns out that railsbox.io is still using a deprecated syntax in the task.
- name: Create extensions
  sudo_user: '{{ postgresql_admin_user }}'
  shell: "psql {{ postgresql_db_name }} -c 'CREATE EXTENSION IF NOT EXISTS {{ item }};'"
  with_items: postgresql_extensions
  when: postgresql_extensions
The with_items line is the old bareword form, which is why the error shows the literal string postgresql_extensions being passed as the item; it should use full Jinja2 syntax:
with_items: '{{ postgresql_extensions }}'
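Put differently, a corrected version of the task might look like the following sketch (become/become_user replace the long-deprecated sudo_user; adjust to your Ansible version):
- name: Create extensions
  become: yes
  become_user: '{{ postgresql_admin_user }}'
  shell: "psql {{ postgresql_db_name }} -c 'CREATE EXTENSION IF NOT EXISTS {{ item }};'"
  with_items: '{{ postgresql_extensions }}'
  when: postgresql_extensions | length > 0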