- name: Create directory for python files
  file: path=/home/vuser/test/
    state=directory
    owner={{ user }}
    group={{ user }}
    mode=755
- name: Copy python file over
  copy:
    src=sample.py
    dest=/home/vuser/test/sample.py
    owner={{ user }}
    group={{ user }}
    mode=777
- name: Execute script
  command: python sample.py
  args:
    chdir: /home/vuser/test/
  ignore_errors: yes
Error:
fatal: [n]: FAILED! => {"changed": true, "cmd": ["python", "sample.py"], "delta": "0:00:00.003200", "end": "2019-07-18 13:57:40.213252", "msg": "non-zero return code", "rc": 1, "start": "2019-07-18 13:57:40.221132", "stderr": "", "stderr_lines": [], "stdout": "1", "stdout_lines": ["1"]}
I'm not able to figure this out; any help would be appreciated.
Change the indentation as below and remove ignore_errors.
- name: Execute script
  command: python sample.py
  args:
    chdir: /home/vuser/test/
  register: cat_contents

- name: Print contents
  debug:
    msg: "{{ cat_contents.stdout }}"
- name: Create directory for python files
  file: path=/home/vuser/test/
    state=directory
    owner={{ user }}
    group={{ user }}
    mode=755

- name: Copy python file over
  copy:
    src=/home/vuser/sample.py
    dest=/home/vuser/test/
    owner={{ user }}
    group={{ user }}
    mode=777

- name: Execute script
  command: python sample.py
  args:
    chdir: /home/vuser/test/
The sample.py file is copied correctly to the destination folder /home/vuser/test/ on node1, but I still get this error even after making the change:
fatal: [node1]: FAILED! => {"changed": true, "cmd": ["python", "sample.py"], "delta": "0:00:00.002113", "end": "2019-07-19 10:59:53.7535351", "msg": "non-zero return code", "rc": 1, "start": "2019-07-19 10:59:53.358678548", "stderr": "", "stderr_lines": [], "stdout": "hello world", "stdout_lines": ["hello world"]}
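Note that in that output stdout is "hello world" and stderr is empty, so the non-zero return code appears to come from sample.py itself exiting with status 1 rather than from Ansible. If that exit status is expected, one option is to tolerate it explicitly; a minimal sketch reusing the task above (the register name script_result and the accepted codes in failed_when are assumptions):

- name: Execute script
  command: python sample.py
  args:
    chdir: /home/vuser/test/
  register: script_result
  # assumption: exit code 1 is acceptable here; anything else still fails the task
  failed_when: script_result.rc not in [0, 1]

- name: Print contents
  debug:
    msg: "{{ script_result.stdout }}"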
Related
I'm trying to pass the content of a text file I have in the local folder to an Ansible shell command, and for whatever reason I can't; I'm sure it's something silly that I'm missing.
Any help would be really appreciated.
Here is the Ansible task:
- name: Connect K8s cluster to Shipa
  shell: shipa cluster-add {{ cluster_name }} --addr $(< {{ cluster_name }}.txt) --cacert {{ cluster_name }}.crt --pool {{ cluster_name }} --ingress-service-type=LoadBalancer
  args:
    executable: /bin/bash
  no_log: False
Whenever I run this, this is what I get in the log:
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "shipa cluster-add scale2 --addr $(< scale2.txt) --cacert scale2.crt --pool scale2 --ingress-service-type=LoadBalancer", "delta": "0:00:00.096162", "end": "2020-04-30 21:32:54.245781", "msg": "non-zero return code", "rc": 1, "start": "2020-04-30 21:32:54.149619", "stderr": "Error: 500 Internal Server Error: parse \"https://\\x1b[0;33mhttps://192.196.192.156\\x1b[0m\": net/url: invalid control character in URL", "stderr_lines": ["Error: 500 Internal Server Error: parse \"https://\\x1b[0;33mhttps://192.196.192.156\\x1b[0m\": net/url: invalid control character in URL"], "stdout": "", "stdout_lines": []}
Here is the content of the file:
# cat scale2.txt
https://192.196.192.156
Here is the command I'm trying to end up running:
shipa cluster-add scale2 --addr https://192.196.192.156 --cacert scale2.crt --pool scale2 --ingress-service-type=LoadBalancer
EDIT:
I also tried the following:
- name: Get K8s cluster address
  shell: kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'
  args:
    executable: /bin/bash
  register: cluster_address
  no_log: False

- debug: var=cluster_address

- name: Save to file
  copy:
    content: "{{ ''.join(cluster_address.stdout_lines) | replace('\\u001b[0;33m', '') | replace('\\u001b[0m', '') }}"
    dest: "cluster_address.json"

- name: Connect K8s cluster to Shipa
  shell: shipa cluster-add {{ cluster_name }} --addr {{ lookup('file', 'cluster_address.json') }} --cacert {{ cluster_name }}.crt --pool {{ cluster_name }} --ingress-service-type=LoadBalancer
  args:
    executable: /bin/bash
  no_log: False
But it saves the following content to the file:
# vi cluster_address.json
^[[0;33mhttps://192.192.56.192^[[0m
Here is the output of the Ansible execution:
TASK [debug] **********************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
    "cluster_address": {
        "changed": true,
        "cmd": "kubectl cluster-info | grep 'Kubernetes master' | awk '/http/ {print $NF}'",
        "delta": "0:00:00.144193",
        "end": "2020-04-30 22:54:21.970460",
        "failed": false,
        "rc": 0,
        "start": "2020-04-30 22:54:21.826267",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "\u001b[0;33mhttps://192.192.56.192\u001b[0m",
        "stdout_lines": [
            "\u001b[0;33mhttps://192.192.56.192\u001b[0m"
        ]
    }
}
TASK [Save to file] ***************************************************************************************************************************************************************************************************************************
changed: [localhost]
TASK [Connect K8s cluster to Shipa] ***********************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "shipa cluster-add scale1 --addr \u001b[0;33mhttps://192.192.56.192\u001b[0m --cacert scale1.crt --pool scale1 --ingress-service-type=LoadBalancer", "delta": "0:00:00.106936", "end": "2020-04-30 22:54:23.213485", "msg": "non-zero return code", "rc": 127, "start": "2020-04-30 22:54:23.106549", "stderr": "Error: 400 Bad Request: either default or a list of pools must be set\n/bin/bash: 33mhttps://192.192.56.192\u001b[0m: No such file or directory", "stderr_lines": ["Error: 400 Bad Request: either default or a list of pools must be set", "/bin/bash: 33mhttps://192.192.56.192\u001b[0m: No such file or directory"], "stdout": "", "stdout_lines": []}
You can disable the fancy terminal output by setting TERM to dumb:
- shell: kubectl cluster-info | awk '/Kubernetes master/ {print $NF}'
  environment:
    TERM: dumb
  register: cluster_info

- set_fact:
    cluster_addr: '{{ cluster_info.stdout | trim }}'
Then, separately, you don't need to use the file lookup for a value that you already have in a Jinja2 variable:
- name: Connect K8s cluster to Shipa
  command: shipa cluster-add {{ cluster_name }} --addr {{ cluster_addr }} --cacert {{ cluster_name }}.crt --pool {{ cluster_name }} --ingress-service-type=LoadBalancer
  args:
    executable: /bin/bash
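If the tool you call still emits colour codes despite TERM, another option is to strip the ANSI escape sequences on the Ansible side; a minimal sketch, assuming the same cluster_info register as above:

- set_fact:
    # strip colour sequences such as \u001b[0;33m ... \u001b[0m before using the value
    cluster_addr: >-
      {{ cluster_info.stdout | regex_replace('\x1b\[[0-9;]*m', '') | trim }}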
To disable logins for root I would like to set its shell to the path of nologin, which is determined by a command.
The command module registers the variable properly:
- name: Get nologin path
  command: which nologin
  register: nologin

- debug:
    var: nologin
Debug info:
ok: [192.168.178.25] => {
    "nologin": {
        "changed": true,
        "cmd": [
            "which",
            "nologin"
        ],
        "delta": "0:00:00.001612",
        "end": "2019-08-26 11:23:41.764847",
        "failed": false,
        "rc": 0,
        "start": "2019-08-26 11:23:41.763235",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "/usr/sbin/nologin",
        "stdout_lines": [
            "/usr/sbin/nologin"
        ]
    }
}
But when I use the user module, it takes the registered variable as a literal string:
- name: Disable root
  user:
    name: root
    shell: nologin.stdout
    state: present
Result in /etc/passwd:
$ cat /etc/passwd
root:x:0:0:root:/root:nologin.stdout
Thanks for any help!
It's a variable; to use it you need to put it in a Jinja2 template ({{ }}) and inside quotes (" "), as required by YAML:
shell: "{{ nologin.stdout }}"
Ref:
https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#using-variables-with-jinja2
https://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#hey-wait-a-yaml-gotcha
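Applied to the task from the question, the fix would look like this (a sketch of the same task with the templated value):

- name: Disable root
  user:
    name: root
    # expands to the registered command output, e.g. /usr/sbin/nologin
    shell: "{{ nologin.stdout }}"
    state: present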
I have written the following Ansible playbook to find a disk failure in the RAID:
- name: checking raid status
  shell: "cat /proc/mdstat | grep nvme"
  register: "array_check"

- debug:
    msg: "{{ array_check.stdout_lines }}"
Following is the output I got:
"msg": [
    "md0 : active raid1 nvme0n1p1[0] nvme1n1p1[1]",
    "md2 : active raid1 nvme1n1p3[1](F) nvme0n1p3[0]",
    "md1 : active raid1 nvme1n1p2[1] nvme0n1p2[0]"
]
I want to extract the name of the failed disk from the registered variable array_check.
How do I do this in Ansible? Can I use the set_fact module? Can I use grep, awk, or sed on the registered variable array_check?
This is the playbook I am using to check the health status of a drive using smartctl:
- name: checking the smartctl logs
  shell: "smartctl -H /dev/{{ item }}"
  with_items:
    - nvme0
    - nvme1
And I am facing the following error
(item=nvme0) => {"changed": true, "cmd": "smartctl -H /dev/nvme0", "delta": "0:00:00.090760", "end": "2019-09-05 11:21:17.035173", "failed": true, "item": "nvme0", "rc": 127, "start": "2019-09-05 11:21:16.944413", "stderr": "/bin/sh: 1: smartctl: not found", "stdout": "", "stdout_lines": [], "warnings": []}
(item=nvme1) => {"changed": true, "cmd": "smartctl -H /dev/nvme1", "delta": "0:00:00.086596", "end": "2019-09-05 11:21:17.654036", "failed": true, "item": "nvme1", "rc": 127, "start": "2019-09-05 11:21:17.567440", "stderr": "/bin/sh: 1: smartctl: not found", "stdout": "", "stdout_lines": [], "warnings": []}
The desired output should be something like this:
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
Below is the complete playbook, including the logic to execute multiple commands in a single task using with_items:
---
- hosts: raid_host
  remote_user: ansible
  become: yes
  become_method: sudo
  tasks:
    - name: checking raid status
      shell: "cat /proc/mdstat | grep 'F' | cut -d' ' -f6 | cut -d'[' -f1"
      register: "array_check"

    - debug:
        msg: "{{ array_check.stdout_lines }}"

    - name: checking the smartctl logs for the drive
      shell: "/usr/sbin/smartctl -H /dev/{{ item }} | tail -2 | awk -F' ' '{print $6}'"
      with_items:
        - "nvme0"
        - "nvme1"
      register: "smartctl_status"
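For the extraction question above, this can also be done without grep/cut by filtering the registered output with set_fact; a minimal sketch, assuming array_check holds the plain cat /proc/mdstat | grep nvme lines from the question (the fact name failed_devices and the regex for the (F) failure marker are assumptions):

    - name: Extract failed devices from the registered output
      set_fact:
        # pull the device name out of tokens like nvme1n1p3[1](F)
        failed_devices: >-
          {{ array_check.stdout_lines
             | map('regex_findall', '(\S+)\[\d+\]\(F\)')
             | flatten }}

    - debug:
        msg: "{{ failed_devices }}"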
I am trying to execute the command /usr/local/bin/airflow initdb in the path /home/ec2-user/ on one of the EC2 servers.
TASK [../../roles/airflow-config : Airflow | Config | Initialize Airflow Database] ***
fatal: [127.0.0.1]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/airflow initdb", "delta": "0:00:00.004832", "end": "2019-07-23 14:38:22.928975", "msg": "non-zero return code", "rc": 127, "start": "2019-07-23 14:38:22.924143", "stderr": "/bin/bash: /usr/local/bin/airflow: No such file or directory", "stderr_lines": ["/bin/bash: /usr/local/bin/airflow: No such file or directory"], "stdout": "", "stdout_lines": []}
The ansible/roles/airflow-config/main.yml file is:
- name: Airflow | Config
  import_tasks: config.yml
  tags:
    - config
The ansible/roles/airflow-config/config.yml file is:
- name: Airflow | Config | Initialize Airflow Database
  shell: "/usr/local/bin/airflow initdb"
  args:
    chdir: "/home/ec2-user"
    executable: /bin/bash
  become: yes
  become_method: sudo
  become_user: root
The ansible/plays/airflow/configAirflow.yml play is:
---
- hosts: 127.0.0.1
  gather_facts: yes
  become: yes
  vars:
    aws_account_id: "12345678910"
  roles:
    - {role: ../../roles/airflow-config}
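The rc of 127 with "/bin/bash: /usr/local/bin/airflow: No such file or directory" means the binary simply is not at that path on the host where the task runs. One way to narrow this down, following the same which/register pattern used for nologin earlier, is to resolve the binary first; a sketch only, assuming airflow is somewhere on the PATH of the become user (the register name airflow_path is an assumption):

- name: Airflow | Config | Locate the airflow binary
  shell: command -v airflow
  args:
    executable: /bin/bash
  register: airflow_path
  changed_when: false

- name: Airflow | Config | Initialize Airflow Database
  shell: "{{ airflow_path.stdout }} initdb"
  args:
    chdir: "/home/ec2-user"
    executable: /bin/bash
  become: yes
  become_method: sudo
  become_user: root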
While provisioning via Vagrant and Ansible, I keep running into this issue:
TASK [postgresql : Create extensions] ******************************************
failed: [myapp] (item=postgresql_extensions) => {"changed": true, "cmd": "psql myapp -c 'CREATE EXTENSION IF NOT EXISTS postgresql_extensions;'", "delta": "0:00:00.037786", "end": "2017-04-01 08:37:34.805325", "failed": true, "item": "postgresql_extensions", "rc": 1, "start": "2017-04-01 08:37:34.767539", "stderr": "ERROR: could not open extension control file \"/usr/share/postgresql/9.3/extension/postgresql_extensions.control\": No such file or directory", "stdout": "", "stdout_lines": [], "warnings": []}
I'm using a railsbox.io generated playbook.
It turns out that railsbox.io is still using deprecated syntax in this task:
- name: Create extensions
  sudo_user: '{{ postgresql_admin_user }}'
  shell: "psql {{ postgresql_db_name }} -c 'CREATE EXTENSION IF NOT EXISTS {{ item }};'"
  with_items: postgresql_extensions
  when: postgresql_extensions
The with_items line should use full Jinja2 syntax; otherwise the literal string postgresql_extensions is passed as the item, which is exactly what the error message shows:
with_items: '{{ postgresql_extensions }}'
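Applied to the generated task, the corrected version would look roughly like this (a sketch; it also swaps the deprecated sudo_user for become/become_user):

- name: Create extensions
  become: yes
  become_user: '{{ postgresql_admin_user }}'
  shell: "psql {{ postgresql_db_name }} -c 'CREATE EXTENSION IF NOT EXISTS {{ item }};'"
  with_items: '{{ postgresql_extensions }}'
  when: postgresql_extensions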