Ansible | Synchronize

I need to copy .py files from the /tmp/dags folder to /home/ec2-user/airflow/dags via Ansible.
- name: "DAG | Source | Copy Source"
synchronize:
src: /tmp/dags/
dest: /home/ec2-user/airflow/dags
recursive: yes
Error logs while executing via Jenkins:
[ssh-agent] Stopped.
fatal: [10.123.23.123]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh=/usr/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null --rsync-path=sudo rsync --out-format=<<CHANGED>>%i %n%L /tmp/dags/ ec2-user@10.123.23.123:/home/ec2-user/airflow/dags", "msg": "Warning: Permanently added '10.123.12.123' (ECDSA) to the list of known hosts.\r\nrsync: change_dir \"/tmp/dags\" failed: No such file or directory (2)\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1052) [sender=3.0.9]\n", "rc": 23}
to retry, use: --limit @/data/jenkins/workspace/cng-airflow-build/acm/plays/airflow/pullDags.retry
I have even tried the copy module as well:
copy:
  src: /tmp/dags/
  dest: /home/ec2-user/airflow/dags
  remote_src: yes
The shell/command modules also did not work:
- name: "DAG | Source | Copy Source"
command/shell: cp -pur /tmp/dags/* /home/ec2-user/airflow/dags
Error logs for the cp command:
TASK [../../roles/dags : DAG | Source | Copy Source] ***************************
fatal: [10.123.12.123]: FAILED! => {"changed": true, "cmd": ["cp", "-pur", "/tmp/dags/*", "/home/ec2-user/airflow/dags"], "delta": "0:00:00.002929", "end": "2019-01-02 18:17:39.218406", "msg": "non-zero return code", "rc": 1, "start": "2019-01-02 18:17:39.215477", "stderr": "cp: cannot stat ‘/tmp/dags/*’: No such file or directory", "stderr_lines": ["cp: cannot stat ‘/tmp/dags/*’: No such file or directory"], "stdout": "", "stdout_lines": []}
to retry, use: --limit @/data/jenkins/workspace/cng-airflow-build/acm/plays/airflow/pullDags.retry

- name: "DAG | Source | Copy Source"
shell: cp -pur /tmp/dags/* /home/ec2-user/airflow/dags
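A likely root cause, based on the logs above: synchronize runs rsync on the machine executing the play (here, the Jenkins agent), so the rsync error means /tmp/dags is missing on the control node; and the cp failure via the command module is expected regardless, since command does not go through a shell, so the * glob is never expanded. If the DAG files actually live on the managed host, a minimal sketch (note that copy with remote_src only recurses into directories on Ansible >= 2.8) could be:

- name: "DAG | Source | Check source dir exists"
  stat:
    path: /tmp/dags
  register: dags_dir

- name: "DAG | Source | Copy Source"
  copy:
    src: /tmp/dags/                     # trailing slash copies the directory contents
    dest: /home/ec2-user/airflow/dags/
    remote_src: yes                     # read src on the managed host, not the control node
  when: dags_dir.stat.exists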

Related

Ansible unzip error/unarchive not fitting

I am trying to get the version from within a .war file. In a script this was done by:
/usr/bin/unzip -p tomcat.war inside/path/to/version.txt | /bin/grep "version"
This is working. When I use the shell command in Ansible like this:
- name: Check .war Version
  shell: /usr/bin/unzip -p tomcat.war inside/path/to/version.txt | /bin/grep "version"
  register: war_version
  ignore_errors: yes
It's telling me:
fatal: [localhost]: FAILED! => {"changed": true, "cmd": "/usr/bin/unzip -p tomcat.war inside/path/to/version.txt | /bin/grep \"version\"", "delta": "0:00:00.005014", "end": "2021-11-01 07:36:49.885688", "msg": "non-zero return code", "rc": 1, "start": "2021-11-01 07:36:49.880674", "stderr": "", "stderr_lines": [], "stdout": "", "stdout_lines": []}
With the builtin unarchive module I think it's not possible to do this cleanly in one step; I would need to unarchive to a temp folder, grep the version, and delete the folder again. Is there a workaround with the unarchive module, or does anyone know how to fix the shell error? Even when I set ignore_errors to yes, the error still ends up in my {{ war_version }}.
Regards
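One likely explanation for the failure itself: grep exits with rc 1 when it finds no matching line (the empty stdout and stderr in the log point that way), and ignore_errors only masks the task failure; it does not change the registered result. A minimal sketch that treats "no match" as a non-failure (assuming the path inside the archive is correct):

- name: Check .war Version
  shell: /usr/bin/unzip -p tomcat.war inside/path/to/version.txt | /bin/grep "version"
  register: war_version
  changed_when: false
  # grep returns 0 on match, 1 on no match, >1 on real errors;
  # a pipeline's exit code is that of its last command (grep)
  failed_when: war_version.rc > 1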

How to extract a string from an Ansible register variable

I have written the following Ansible playbook to find a disk failure in the RAID array:
- name: checking raid status
  shell: "cat /proc/mdstat | grep nvme"
  register: "array_check"

- debug:
    msg: "{{ array_check.stdout_lines }}"
Following is the output I got:
"msg": [
"md0 : active raid1 nvme0n1p1[0] nvme1n1p1[1]",
"md2 : active raid1 nvme1n1p3[1](F) nvme0n1p3[0]",
"md1 : active raid1 nvme1n1p2[1] nvme0n1p2[0]"
]
I want to extract the name of the failed disk from the register variable array_check.
How do I do this in Ansible? Can I use the set_fact module? Can I use grep, awk, or sed on the register variable array_check?
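This can in fact be done without shelling out, using Jinja2 filters on the registered variable; a minimal sketch (the regex keys off the (F) failure marker visible in the mdstat output above, and is an assumption about the device-name format):

- name: Extract failed member devices from the mdstat output
  set_fact:
    failed_disks: '{{ array_check.stdout | regex_findall("(nvme\S+)\[\d+\]\(F\)") }}'

- debug:
    msg: "Failed members: {{ failed_disks }}"  # e.g. ['nvme1n1p3'] for the output above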
This is the playbook I am using to check the health status of a drive using smartctl:
- name: checking the smartctl logs
  shell: "smartctl -H /dev/{{ item }}"
  with_items:
    - nvme0
    - nvme1
And I am facing the following error:
(item=nvme0) => {"changed": true, "cmd": "smartctl -H /dev/nvme0", "delta": "0:00:00.090760", "end": "2019-09-05 11:21:17.035173", "failed": true, "item": "nvme0", "rc": 127, "start": "2019-09-05 11:21:16.944413", "stderr": "/bin/sh: 1: smartctl: not found", "stdout": "", "stdout_lines": [], "warnings": []}
(item=nvme1) => {"changed": true, "cmd": "smartctl -H /dev/nvme1", "delta": "0:00:00.086596", "end": "2019-09-05 11:21:17.654036", "failed": true, "item": "nvme1", "rc": 127, "start": "2019-09-05 11:21:17.567440", "stderr": "/bin/sh: 1: smartctl: not found", "stdout": "", "stdout_lines": [], "warnings": []}
The desired output should be something like this:
=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED
Below is the complete playbook, including the logic to execute multiple commands in a single task using with_items. Note that it calls smartctl by its full path, /usr/sbin/smartctl: the rc 127 / "smartctl: not found" error above means the binary was not on the non-interactive shell's PATH.
---
- hosts: raid_host
  remote_user: ansible
  become: yes
  become_method: sudo
  tasks:
    - name: checking raid status
      shell: "cat /proc/mdstat | grep 'F' | cut -d' ' -f6 | cut -d'[' -f1"
      register: "array_check"

    - debug:
        msg: "{{ array_check.stdout_lines }}"

    - name: checking the smartctl logs for the drive
      shell: "/usr/sbin/smartctl -H /dev/{{ item }} | tail -2 | awk -F' ' '{print $6}'"
      with_items:
        - "nvme0"
        - "nvme1"
      register: "smartctl_status"

Ansible | No such file or directory

I am trying to execute the command /usr/local/bin/airflow initdb in the path /home/ec2-user/ on one of the EC2 servers.
TASK [../../roles/airflow-config : Airflow | Config | Initialize Airflow Database] ***
fatal: [127.0.0.1]: FAILED! => {"changed": true, "cmd": "/usr/local/bin/airflow initdb", "delta": "0:00:00.004832", "end": "2019-07-23 14:38:22.928975", "msg": "non-zero return code", "rc": 127, "start": "2019-07-23 14:38:22.924143", "stderr": "/bin/bash: /usr/local/bin/airflow: No such file or directory", "stderr_lines": ["/bin/bash: /usr/local/bin/airflow: No such file or directory"], "stdout": "", "stdout_lines": []}
The ansible/roles/airflow-config/main.yml file is:
- name: Airflow | Config
  import_tasks: config.yml
  tags:
    - config
The ansible/roles/airflow-config/config.yml file is:
- name: Airflow | Config | Initialize Airflow Database
  shell: "/usr/local/bin/airflow initdb"
  args:
    chdir: "/home/ec2-user"
    executable: /bin/bash
  become: yes
  become_method: sudo
  become_user: root
The ansible/plays/airflow/configAirflow.yml file is:
---
- hosts: 127.0.0.1
  gather_facts: yes
  become: yes
  vars:
    aws_account_id: "12345678910"
  roles:
    - {role: ../../roles/airflow-config}
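One thing the logs hint at: the failure is reported for [127.0.0.1], matching hosts: 127.0.0.1, so the shell task may be running on the control node rather than on the EC2 server where Airflow is installed. A quick diagnostic sketch to see where (and whether) the binary resolves on the host the task actually runs on:

- name: Airflow | Config | Locate airflow binary
  shell: "command -v airflow || true"
  register: airflow_path
  changed_when: false

- debug:
    msg: "airflow resolved to: {{ airflow_path.stdout | default('not found', true) }}"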

Ansible copy from the remote server to the Ansible host fails

I need to copy the latest log file from a remote Linux server to the Ansible host. This is what I have tried so far:
- hosts: [host]
  remote_user: root
  tasks:
    - name: Copy the file
      command: bash -c "ls -rt | grep install | tail -n1"
      register: result
      args:
        chdir: /root

    - name: Copying the file
      copy:
        src: "/root/{{ result.stdout }}"
        dest: /home
But I am getting the following error:
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************
ok
TASK [Copy the file] **********************************************************************************************************************************************************************************************
changed: => {"changed": true, "cmd": ["bash", "-c", "ls -rt | grep install | tail -n1"], "delta": "0:00:00.011388", "end": "2017-06-14 07:53:26.475344", "rc": 0, "start": "2017-06-14 07:53:26.463956", "stderr": "", "stdout": "install.20170614-051027.log", "stdout_lines": ["install.20170614-051027.log"], "warnings": []}
TASK [Copying the file] *******************************************************************************************************************************************************************************************
fatal: FAILED! => {"changed": false, "failed": true, "msg": "Unable to find 'install.20170614-051027.log' in expected paths."}
PLAY RECAP ********************************************************************************************************************************************************************************************************
: ok=2 changed=1 unreachable=0 failed=1
But that file is right there. Please help me resolve this issue.
The Ansible copy module copies files from the Ansible host to the remote host. Use the Ansible fetch module instead.
http://docs.ansible.com/ansible/fetch_module.html
This one works; I had to use fetch instead of copy to get the file from the remote host:
- name: Copy the file
  command: bash -c "ls -rt | grep install | tail -n1"
  register: result
  args:
    chdir: /root

- name: Copying the file
  fetch:
    src: "/root/{{ result.stdout }}"
    dest: /home
    flat: yes
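For comparison, a sketch of what the default layout does: without flat: yes, fetch namespaces the download under the destination as dest/<inventory_hostname>/<full remote path>, which is useful when pulling the same file from many hosts:

- name: Copying the file (default layout)
  fetch:
    src: "/root/{{ result.stdout }}"
    dest: /home/
  # lands at /home/<inventory_hostname>/root/<filename>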

Ansible task fails when creating extensions

While provisioning via Vagrant and Ansible I keep running into this issue:
TASK [postgresql : Create extensions] ******************************************
failed: [myapp] (item=postgresql_extensions) => {"changed": true, "cmd": "psql myapp -c 'CREATE EXTENSION IF NOT EXISTS postgresql_extensions;'", "delta": "0:00:00.037786", "end": "2017-04-01 08:37:34.805325", "failed": true, "item": "postgresql_extensions", "rc": 1, "start": "2017-04-01 08:37:34.767539", "stderr": "ERROR: could not open extension control file \"/usr/share/postgresql/9.3/extension/postgresql_extensions.control\": No such file or directory", "stdout": "", "stdout_lines": [], "warnings": []}
I'm using a railsbox.io generated playbook.
Turns out that railsbox.io is still using a deprecated syntax in the task.
- name: Create extensions
  sudo_user: '{{ postgresql_admin_user }}'
  shell: "psql {{ postgresql_db_name }} -c 'CREATE EXTENSION IF NOT EXISTS {{ item }};'"
  with_items: postgresql_extensions
  when: postgresql_extensions
The error log shows item=postgresql_extensions being treated as a literal string, so the with_items line is the one that needs full Jinja2 syntax:
with_items: '{{ postgresql_extensions }}'
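A corrected version of the whole task might look like this sketch (it also swaps the long-deprecated sudo_user for become_user; the variable names are taken from the original task):

- name: Create extensions
  become: yes
  become_user: '{{ postgresql_admin_user }}'
  shell: "psql {{ postgresql_db_name }} -c 'CREATE EXTENSION IF NOT EXISTS {{ item }};'"
  with_items: '{{ postgresql_extensions }}'
  when: postgresql_extensions | length > 0  # skip cleanly when the list is empty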
