Ansible fails to push local files to target server using rsync - ansible

Using Ansible, I wish to transfer all files from the /app/tmpfiles/ folder on my Red Hat localhost to remoteaixhost at /was/IBM/BACKUP/00005/, with the current timestamp and 755 permissions.
The command below shows that I do have a file on my localhost:
[localuser@localhost]$ ls -ltr /app/tmpfiles/*
-rw-rw-r-- 1 localuser user 12 Sep 13 15:53 /app/tmpfiles/testingjsp4.jsp
Below is my Ansible playbook for this task:
- name: Play 1
  hosts: remoteaixhost
  tasks:
    - name: "Backup on Destination Server"
      tags: validate
      local_action: command rsync --delay-updates -F --compress --chmod=755 "--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" "--out-format=<<CHANGED>>%i %n%L" "{{ playbook_dir }}/tmpfiles/*" "{{ USER }}@{{ inventory_hostname }}":"{{ vars[inventory_hostname] }}/BACKUP/{{ Number }}/"
      run_once: true
Running the above playbook fails, and below is the error log from Ansible:
TASK [Backup on Destination Server remoteaixhost.] ***
fatal: [remoteaixhost -> localhost]: FAILED! => {"changed": true, "cmd":
["rsync", "--delay-updates", "-F", "--compress", "--chmod=755",
"--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o
StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null",
"--out-format=<>%i %n%L", "/app/tmpfiles/",
"user2#remoteaixhost:/was/IBM/BACKUP/00005/"], "delta":
"0:00:00.228097", "end": "2019-10-04 00:23:09.103525", "msg":
"non-zero return code", "rc": 23, "start": "2019-10-04
00:23:08.875428", "stderr": "Warning: Permanently added
'remoteaixhost' (RSA) to the list of known hosts.\r\nThis system is
for the use of authorized users only. Individuals using this computer
system without authority, or in excess of their authority, are subject
to having all of their activities on this system monitored and
recorded by system personnel. In the course of monitoring individuals
improperly using this system, or in the course of system maintenance,
the activities of authorized users may also be monitored. Anyone using
this system expressly consents to such monitoring and is advised that
if such such monitoring reveals possible evidence of criminal
activity, system personnel may provide the evidence of such monitoring
to the law enforcement officials\nrsync: link_stat \"/app/tmpfiles/*\"
failed: No such file or directory (2)\nrsync error: some files/attrs
were not transferred (see previous errors) (code 23) at main.c(1178)
[sender=3.1.2]", "stderr_lines": ["Warning: Permanently added
'remoteaixhost' (RSA) to the list of known hosts.", "This system is
for the use of authorized users only. Individuals using this computer
system without authority, or in excess of their authority, are subject
to having all of their activities on this system monitored and
recorded by system personnel. In the course of monitoring individuals
improperly using this system, or in the course of system maintenance,
the activities of authorized users may also be monitored. Anyone using
this system expressly consents to such monitoring and is advised that
if such such monitoring reveals possible evidence of criminal
activity, system personnel may provide the evidence of such monitoring
to the law enforcement officials", "rsync: link_stat
\"/app/tmpfiles/*\" failed: No such file or directory (2)", "rsync
error: some files/attrs were not transferred (see previous errors)
(code 23) at main.c(1178) [sender=3.1.2]"], "stdout": "",
"stdout_lines": []}
NO MORE HOSTS LEFT
PLAY RECAP *********************************************************************
remoteaixhost : ok=8 changed=4 unreachable=0 failed=1 skipped=6 rescued=0 ignored=0
localhost     : ok=4 changed=2 unreachable=0 failed=0 skipped=1 rescued=0 ignored=0
Build step 'Execute shell' marked build as failure
Finished: FAILURE
One thing that bothers me in the output is "fatal: [remoteaixhost -> localhost]". Why does it say "remoteaixhost -> localhost" when we are actually specifying local_action? I was expecting it to show "localhost -> remoteaixhost".
However, I'm not sure whether my observation is concerning, what the solution is, or whether it helps us look in the right direction.
Note: I also tried removing the asterisk from the rsync command above, like below:
rsync --delay-updates -F --compress --chmod=755 "--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" "--out-format=<<CHANGED>>%i %n%L" "{{ playbook_dir }}/tmpfiles/" "{{ USER }}@{{ inventory_hostname }}":"{{ vars[inventory_hostname] }}/BACKUP/{{ Number }}/"
No errors are received; however, the file(s) do not get copied to the target host.
I tried to manually run the rsync command on localhost, as generated by Ansible, and below is the observation:
# with the asterisk, the copy is successful:
[localuser@localhost]$ rsync --delay-updates -F --compress --chmod=755 "--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --out-format="<<CHANGED>>%i %n%L" /app/tmpfiles/* user2@remoteaixhost:/was/IBM/BACKUP/00005/
Warning: Permanently added 'remoteaixhost' (RSA) to the list of known hosts.
This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel. In the course of monitoring individuals improperly using this system, or in the course of system maintenance, the activities of authorized users may also be monitored. Anyone using this system expressly consents to such monitoring and is advised that if such such monitoring reveals possible evidence of criminal activity, system personnel may provide the evidence of such monitoring to the law enforcement officials
<<CHANGED>><f+++++++++ testingjsp4.jsp
[localuser@localhost]$ echo $?
0
# without the asterisk, the files do not get copied over:
[localuser@localhost]$ rsync --delay-updates -F --compress --chmod=755 "--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" --out-format="<<CHANGED>>%i %n%L" /app/tmpfiles/ user2@remoteaixhost:/was/IBM/BACKUP/00005/
Warning: Permanently added 'remoteaixhost' (RSA) to the list of known hosts.
This system is for the use of authorized users only. Individuals using this computer system without authority, or in excess of their authority, are subject to having all of their activities on this system monitored and recorded by system personnel. In the course of monitoring individuals improperly using this system, or in the course of system maintenance, the activities of authorized users may also be monitored. Anyone using this system expressly consents to such monitoring and is advised that if such such monitoring reveals possible evidence of criminal activity, system personnel may provide the evidence of such monitoring to the law enforcement officials
skipping directory .
[localuser@localhost]$ echo $?
0
Can you please suggest a fix?

local_action: command rsync -r --delay-updates -F --compress --chmod=755 "--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null" "--out-format=<<CHANGED>>%i %n%L" {{ playbook_dir }}/tmpfiles/*.log "{{ USER }}@{{ inventory_hostname }}":"{{ vars[inventory_hostname] }}/BACKUP/{{ Number }}/"
This works; note that I added -r and removed the double quotes around {{ playbook_dir }}/tmpfiles/*.log.
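For reference, here is a minimal cleaned-up sketch of the task (assuming the same variables, key file, and destination layout as in the question): with -r plus a trailing slash on the source, rsync copies the directory contents recursively, so no wildcard or quoting tricks are needed.
- name: "Backup on Destination Server"
  # -r recurses into /app/tmpfiles/; the trailing slash sends the directory contents, not the directory itself
  command: >
    rsync -r --delay-updates -F --compress --chmod=755
    "--rsh=/usr/bin/ssh -S none -i /app/my_id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null"
    "--out-format=<<CHANGED>>%i %n%L"
    {{ playbook_dir }}/tmpfiles/
    {{ USER }}@{{ inventory_hostname }}:{{ vars[inventory_hostname] }}/BACKUP/{{ Number }}/
  delegate_to: localhost
  run_once: true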

Related

ansible tower cannot find name for the group id error for synchronize module

I am using the synchronize module to copy a directory present on NFS to a local path. The user (SVC12345) which runs the playbook from Ansible Tower is not present in /etc/passwd or /etc/group.
When the synchronize task is invoked, it fails with the below error:
"msg": "Warning: Permanently added 'hostname,1.2.3.4' to the list of known hosts\r\n/usr/bin/id: cannot find name for group ID 12345\nrsync: change_dir \"/app/nfs_share_path/DIR1\" failed: No such file or directory"
"rc": "23"
"cmd": "sshpass -d4 /bin/rsync --delay-updates -F --compress --dry-run --archive --rsh='/bin/ssh -S none -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null' --rsync-path='sudo rsync' --out-format='<<CHANGED>>%i %n%L' /app/nfs_share_path/DIR1 SVC12345@hostname:/app/path/util"
My Ansible task:
- name: Test
  become: yes
  become_user: local_user
  synchronize:
    src: /app/nfs_share_path/DIR1   # shared directory
    dest: /app/path/util
    owner: yes
    group: yes
I am expecting this task to be executed as "local_user" (since I have mentioned become_user), but instead it performs the task as the SVC12345 user.

Ansible wrongly reports 'command not found' error

I am new to Ansible and I cannot solve an error: I use ansible.builtin.shell to call the pcs utility (Pacemaker). pcs is installed on the remote machine, and I can use it when I ssh to that machine, but Ansible reports a 'command not found' error with return code 127.
Here is my inventory.yml:
---
all:
  children:
    centos7:
      hosts:
        UVMEL7:
          ansible_host: UVMEL7
Here is my playbook, TestPcs.yaml:
---
- name: Test the execution of pcs command
  hosts: UVMEL7
  tasks:
    - name: Call echo
      ansible.builtin.shell: echo
    - name: pcs
      ansible.builtin.shell: pcs
Note: I also used the echo command to verify that I am correctly using ansible.builtin.shell.
I launch my playbook with: ansible-playbook -i inventory.yml TestPcs.yaml --user=traite
And I get this result:
PLAY [Test the execution of pcs command] *****************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************
ok: [UVMEL7]
TASK [Call echo] *****************************************************************************************************************************************************************************************************************************
changed: [UVMEL7]
TASK [pcs] ***********************************************************************************************************************************************************************************************************************************
fatal: [UVMEL7]: FAILED! => {"changed": true, "cmd": "pcs", "delta": "0:00:00.003490", "end": "2022-03-10 15:02:17.418475", "msg": "non-zero return code", "rc": 127, "start": "2022-03-10 15:02:17.414985", "stderr": "/bin/sh: pcs : commande introuvable", "stderr_lines": ["/bin/sh: pcs : commande introuvable"], "stdout": "", "stdout_lines": []}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
UVMEL7 : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The pcs command is failing and in stderr, there is a 'command not found' error.
On the other hand, when I ssh to the machine and run the pcs command, the command is executed and returns 1, which is different from 127. It is normal for pcs to return an error: I simplified the test case to the strict minimum to keep my question short.
I expect Ansible to behave the same way: an error on pcs with return code 1.
Here is what I did to simulate what Ansible does (based on remarks by @Zeitounator): ssh <user>@<machine> '/bin/bash -c "echo $PATH"'
I get my default PATH, as explained in the bash manual page. On my system, sh links to bash.
I see that /etc/profile does the PATH manipulation that I need. However, it seems that because of the -c option, bash is not started as a login shell, and therefore /etc/profile is not sourced.
I ended up doing the job manually:
---
- name: Test the execution of pcs command
  hosts: UVMEL7
  tasks:
    - name: Call echo
      ansible.builtin.shell: echo
    - name: pcs
      ansible.builtin.shell: source /etc/profile && pcs
This executes pcs as expected.
To sum up, my executable was not run because the folder holding it was not listed in my PATH environment variable. This was because /bin/sh (aka /bin/bash) was called with the -c flag, which prevents sourcing /etc/profile and other configuration files. The issue was 'solved' by manually sourcing the configuration file that correctly sets the PATH environment variable.
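An alternative that avoids sourcing files in every task is to extend PATH through Ansible's environment keyword. A sketch, assuming pcs lives in /usr/sbin (the usual location on CentOS 7; adjust to your system):
- name: pcs
  ansible.builtin.shell: pcs
  environment:
    # prepend the assumed install dir; ansible_env.PATH is available because facts are gathered
    PATH: "/usr/sbin:{{ ansible_env.PATH }}"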

ansible from MAC to a remote DataDomain which has special filesystem

Folks, I have an ansible.cfg:
[defaults]
remote_user = sysadmin
inventory = hosts.yaml
host_key_checking = False
local_tmp = /Users/juergen/Documents/DPSCodeAcademy/ansible/#dev/ddve-aws/ddve6-7.4
Further down, a playbook:
---
- hosts: ddve
  gather_facts: False
  tasks:
    - name: net show all
      command: net show all
...
The ddve host is a very special Linux box with its own command set, so regular Linux operations do not work. What I was trying to do is redirect the tmp dir to a local dir on my Mac and just fire a valid command on that ddve host, but this fails with:
fatal: [3.126.251.125]: UNREACHABLE! => {"changed": false, "msg": "Failed to create temporary directory.In some cases, you may have been able to authenticate and did not have permissions on the target directory. Consider changing the remote tmp path in ansible.cfg to a path rooted in \"/tmp\", for more error information use -vvv. Failed command was: ( umask 77 && mkdir -p \"` echo For example, \"help timezone\" shows all commands relating to timezones./.ansible/tmp `\"&& mkdir \"` echo For example, \"help timezone\" shows all commands relating to timezones./.ansible/tmp/ansible-tmp-1611501684.866448-10774-109898410031575 `\" && echo ansible-tmp-1611501684.866448-10774-109898410031575=\"` echo For example, \"help timezone\" shows all commands relating to timezones./.ansible/tmp/ansible-tmp-1611501684.866448-10774-109898410031575 `\" ), exited with result 40, stdout output: That doesn't look like a valid command, displaying help...\n\nHelp is available on the following topics:\n\n adminaccess ddboost ntp\n alerts disk qos\n alias elicense quota\n authentication enclosure replication\n autosupport filesys smt\n cifs ifgroup snapshot\n client-group log snmp\n cloud migration storage\n compression mtree support\n config net system\n data-movement nfs user\n\nType \"help <topic>\" to view help for the given topic.\n\nType \"help <keyword>\" to search the commands for a specific keyword.\nFor example, \"help timezone\" shows all commands relating to timezones.\n\n", "unreachable": true}
PLAY RECAP ************************************************************************************************************************************************************************************
3.126.251.125 : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
but an ssh login is working:
❯ ssh sysadmin@3.126.251.125
EMC Data Domain Virtual Edition
Last login: Sun Jan 24 07:21:24 PST 2021 from 95.91.249.86 on ssh
Welcome to Data Domain OS 7.4.0.5-671629
----------------------------------------
sysadmin@ip-172-31-16-174# net show all
Active Network Configuration:
ethV0 Link encap:Ethernet HWaddr 02:C9:AF:87:AC:7C
inet addr:172.31.16.1
Can you help me understand what the error is telling me?
Ansible relies on being able to run Python on the remote host. If "regular Linux operations" won't work, this is probably the problem.
The simplest workaround is to use the raw module, which simply executes commands via ssh. This is the only module you would be able to use to target this remote host.
- name: net show all
  raw: net show all
It looks like the remote system is some sort of networking device. There are a number of Ansible modules designed to work with switches and other network devices that don't support regular Linux commands, or Python, etc. See the documentation for Ansible Network Automation for more information. Possibly there is a module for the device you are managing?
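Put together, a minimal playbook for such a device might look like this (a sketch; it assumes raw works over the same ssh login shown above):
---
- hosts: ddve
  # fact gathering would try to run Python on the device, so keep it off
  gather_facts: False
  tasks:
    - name: net show all
      raw: net show all
      register: net_out
    # debug runs on the controller, so it works even though the device has no Python
    - debug:
        var: net_out.stdout_lines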

How to detect and create partition on newly attached disk in rhel with ansible

I am new to Ansible (I have basic working knowledge) and am trying to automate disk addition on a Linux machine from an Ansible playbook.
Below are the things I tried to achieve this.
I tried to identify the new disk attached to the Linux machine. From the command line I am able to identify the new disk name, but I am failing to find it with Ansible.
Which module do I need to use for this? I tried a shell command reading /sys/block, but it did not work, so I dropped that approach.
For now, I decided that I will scan for the disk manually and provide its name to Ansible to create the partition automatically.
For this I wrote the below code:
- name: list out currnet PV
  shell: pvs --noheadings -o pv_name
  register: pvs_list
- debug: var=pvs_list.stdout
- name: create partition on the given disk name
  shell: /bin/echo -e "n\np\n1\n\n\nt\n8e\nw" | fdisk "{{ disk_name }}"   # create the partition on the disk
  register: partitioning
The above code works fine, but if I run this job again it does not fail and recreates the partition on the disk.
I tried to apply a when / failed_when condition, but it is not working.
If an already existing disk is provided again, the play should fail with a proper message.
failed_when: "'{{ disk_name }}' in pvs_list.stdout"
This condition is also not working.
I am also not able to identify the new disk with Ansible.
Q: "Not able to identify new disk with Ansible"
A: Take a look at setup – Gathers facts about remote hosts. For example, given this disk on a Linux localhost:
$ lsscsi
[N:0:1:1] disk SSDPEKKF256G8 NVMe INTEL 256GB__1 /dev/nvme0n1
take a look at what information is provided by Ansible
$ ansible localhost -m setup | grep nvme0n1 -A 2
"nvme0n1": [
"nvme-SSDPEKKF256G8_NVMe_INTEL_256GB_BTHH832111P1256B",
"nvme-eui.5cd2e42981b06cef"
...
For details see How to gather facts about disks using Ansible.
Take a look at How to create a new partition with Ansible (, format and mount it).
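The linked approach essentially replaces the fdisk pipe with the parted module, which is idempotent, so re-running the play does not blindly recreate the partition. A sketch, assuming the community.general collection is installed and reusing the disk_name variable from the question:
- name: create partition 1 on the given disk (idempotent)
  community.general.parted:
    device: "{{ disk_name }}"
    number: 1
    state: present
    flags: [ lvm ]   # same intent as setting partition type 8e in fdisk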
I was finally able to test the fail condition: if I have already added the /dev/sdb disk and I try to run the play again, it will fail.
- name: list out currnet PV
  shell: pvs --noheadings -o pv_name
  register: pvs_list
- debug: var=pvs_list.stdout
- name: create partition on the given disk name
  shell: /bin/echo -e "n\np\n1\n\n\nt\n8e\nw" | fdisk "{{ disk_name }}"   # create the partition on the disk
  register: partitioning
- debug: var=partitioning
- name: test failure condition
  fail:
    msg: " you are trying to add the same disk which is already there "
  when: "'{{ disk_name }}' in pvs_list.stdout"
Jenkins output for this play:
#####################################################################################
Started by user Jenkins-Admin
Rebuilds build #64
Running as SYSTEM
Building in workspace /var/lib/jenkins/workspace/add_lun
[add_lun] $ sshpass ******** /usr/bin/ansible-playbook /etc/ansible/Project-Automation/add_lun.yml -i /tmp/inventory536897656267250565.ini -f 5 -u sysadm -k -e disk_name=/dev/sdb
PLAY [all] *********************************************************************
TASK [Gathering Facts] *********************************************************
ok: [10.167.53.190]
TASK [add_lun : list out currnet PV] *******************************************
changed: [10.167.53.190]
TASK [add_lun : debug] *********************************************************
ok: [10.167.53.190] => {
"pvs_list.stdout": " /dev/sda2 \n /dev/sdb1 "
}
TASK [add_lun : create partition on the given disk name] ***********************
changed: [10.167.53.190]
TASK [add_lun : test failure condition] ****************************************
[WARNING]: when statements should not include jinja2 templating delimiters
such as {{ }} or {% %}. Found: '{{ disk_name }}' in pvs_list.stdout
fatal: [10.167.53.190]: FAILED! => {"changed": false, "msg": " you are trying to add the same disk which is already there "}
[WARNING]: Could not create retry file '/etc/ansible/Project-
Automation/add_lun.retry'. [Errno 13] Permission denied: u'/etc/ansible
/Project-Automation/add_lun.retry'
PLAY RECAP *********************************************************************
10.167.53.190 : ok=4 changed=2 unreachable=0 failed=1
##################################################################################
Now I need to solve the issue of automatic disk identification.
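Two follow-up sketches (untested against the exact setup above). First, the [WARNING] in the Jenkins output goes away if the when statement uses the bare variable instead of jinja2 delimiters:
- name: test failure condition
  fail:
    msg: "you are trying to add the same disk which is already there"
  when: disk_name in pvs_list.stdout
Second, for automatic disk identification, the gathered facts can be filtered for disks that do not have partitions yet; this assumes a freshly attached disk shows up with an empty partitions dict in ansible_devices (virtual devices such as loop or sr0 may need filtering out as well):
- name: list disks that have no partitions yet
  set_fact:
    empty_disks: "{{ ansible_devices | dict2items | selectattr('value.partitions', 'equalto', {}) | map(attribute='key') | list }}"
- debug: var=empty_disks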

Ansible synchronize module permissions issue

Remote Server's "/home"
enter image description here
Remote Server User
1. bitnami
2. take02
3. take03
4. take04
But local Host are only ubuntu users.
I would like to copy the /home directory of the remote host with Ansible, keeping the owner information.
This is my playbook:
---
- hosts: discovery_bitnami
  gather_facts: no
  become: yes
  tasks:
    - name: "Creates directory"
      local_action: >
        file path=/tmp/{{ inventory_hostname }}/home/ state=directory
    - name: "remote-to-local sync test"
      become_method: sudo
      synchronize:
        mode: pull
        src: /home/
        dest: /tmp/{{ inventory_hostname }}/home
        rsync_path: "sudo rsync"
Playbook result is:
PLAY [discovery_bitnami] *******************************************************
TASK [Creates directory] *******************************************************
ok: [discovery_bitnami -> localhost]
TASK [remote-to-local sync test] ***********************************************
fatal: [discovery_bitnami]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/ubuntu/.ssh/red_LightsailDefaultPrivateKey.pem -S none -o StrictHostKeyChecking=no -o Port=22' --rsync-path=\"sudo rsync\" --out-format='<<CHANGED>>%i %n%L' \"bitnami@54.236.34.197:/home/\" \"/tmp/discovery_bitnami/home\"", "failed": true, "msg": "rsync: failed to set times on \"/tmp/discovery_bitnami/home/.\": Operation not permitted (1)\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/bitnami\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/take02\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/take03\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync: recv_generator: mkdir \"/tmp/discovery_bitnami/home/take04\" failed: Permission denied (13)\n*** Skipping any contents from this failed directory ***\nrsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1655) [generator=3.1.1]\n", "rc": 23}
to retry, use: --limit @/home/ubuntu/work/esc_discovery/ansible_test/ansible_sync_test.retry
PLAY RECAP *********************************************************************
discovery_bitnami : ok=1 changed=0 unreachable=0 failed=1
But the failed "cmd" works fine when run with sudo on the console:
$ sudo /usr/bin/rsync --delay-updates -F --compress --archive --rsh 'ssh -i /home/ubuntu/.ssh/red_PrivateKey.pem -S none -o StrictHostKeyChecking=no -o Port=22' --rsync-path="sudo rsync" --out-format='<<CHANGED>>%i %n%L' bitnami@54.236.34.197:/home/ /tmp/discovery_bitnami/home
How do I run the task with sudo?
P.S. If I remove become: yes, then all permissions are "ubuntu".
I guess you are out of options with the synchronize module. It runs locally without sudo, and that's hardcoded.
On the other hand, in the first task you create a directory under /tmp as root, so the permissions are limited to the root user. As a result you get the "Permission denied" errors.
Either:
refactor the code so that you don't need root permissions for the local destination (or add become: no to the task "Creates directory"); as you use the archive option, which implies permission preservation, this might not be an option;
or:
create your own version of the synchronize module and add sudo to the front of the cmd variable;
or:
use the command module with sudo /usr/bin/rsync as the call (see the sketch after this list).
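A rough shape of that last option (a sketch only; it reuses the key file and paths from the question, assumes the local user may run sudo without a password, and assumes ansible_host resolves to the instance address):
- name: "remote-to-local sync test via command"
  # sudo on the local rsync fixes the /tmp permission errors; sudo rsync on the remote side preserves owners
  local_action: >
    command sudo /usr/bin/rsync --delay-updates -F --compress --archive
    --rsh 'ssh -i /home/ubuntu/.ssh/red_LightsailDefaultPrivateKey.pem -S none -o StrictHostKeyChecking=no -o Port=22'
    --rsync-path 'sudo rsync'
    bitnami@{{ ansible_host }}:/home/ /tmp/{{ inventory_hostname }}/home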
Mind that the synchronize module is a non-standard one; there have been changes in the past regarding the accounts used, and requests for further changes.
On top of everything, the current documentation for the module is pretty confusing. On one hand it states strongly:
The user and permissions for the synchronize dest are those of the remote_user on the destination host or the become_user if become=yes is active.
But in another place it only hints that the source and destination meaning is reversed when using pull mode:
In pull mode the remote host in context is the source.
So for the case from this question, the following passage is relevant, even though it incorrectly says "src":
The user and permissions for the synchronize src are those of the user running the Ansible task on the local host (or the remote_user for a delegate_to host when delegate_to is used).
