I'm trying to do a very simple fetch file from remote host. Somehow I've never gotten this to work.
Fetching from a remote Linux box to the Ansible Tower (awx) host which is also a Linux box.
Here's the Ansible code:
---
- name: get new private key for user
  hosts: tag_Name_ansible_kali
  become: yes
  gather_facts: no
  tasks:
    - name: fetch file
      fetch:
        src: /tmp/key
        dest: /tmp/received/
        flat: yes
Here's the result which makes it appear like the fetch worked:
{
"changed": true,
"md5sum": "42abaa3160ba875051f2cb20be0233ba",
"dest": "/tmp/received/key",
"remote_md5sum": null,
"checksum": "9416f6f64b94c331cab569035fb6bb825053bc15",
"remote_checksum": "9416f6f64b94c331cab569035fb6bb825053bc15",
"_ansible_no_log": false
}
However, going to the /tmp/received directory and ls -lah shows...
[root@ansibleserver received]# ls -lah
total 4.0K
drwxr-xr-x. 2 awx awx 6 Mar 12 15:48 .
drwxrwxrwt. 10 root root 4.0K Mar 12 15:49 ..
I've tested it: if I point src at a file that doesn't exist on the target, the task fails, so it's clearly connecting to the remote host. The problem is that no matter where I point dest on the Ansible server, the file never actually gets written there. I'm not even sure how it can report a checksum for a file that doesn't exist. I've searched the entire drive and the file is nowhere to be found. Is there another log somewhere that shows where it's actually writing the file? It's not on the remote host either.
Any advice would be appreciated. Seriously scratching my head here.
On a RHEL 7.9.9 system with Ansible 2.9.25, Python 2.7.5 and Ansible Tower 3.7.x, the output of an ad-hoc fetch task run on the CLI as a user on the Tower server looks like this:
ansible test --user ${USER} --ask-pass --module-name fetch --args "src=/home/{{ ansible_user }}/test.txt dest=/tmp/ flat=yes"
SSH password:
test1.example.com | CHANGED => {
"changed": true,
"checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
"dest": "/tmp/test.txt",
"md5sum": "d8e8fca2dc0f896fd7cb4cb0031ba249",
"remote_checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
"remote_md5sum": null
}
test2.example.com | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"msg": "file not found: /home/user/test.txt"
}
and the file was left there. Note that this command was initiated and executed under that user on the CLI.
The same thing executed from Ansible Tower as an ad-hoc command with arguments src=/home/user/test.txt dest=/tmp/ flat=yes reported:
test2.example.com | FAILED! => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"msg": "file not found: /home/user/test.txt"
}
test1.example.com | CHANGED => {
"changed": true,
"checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
"dest": "/tmp/test.txt",
"md5sum": "d8e8fca2dc0f896fd7cb4cb0031ba249",
"remote_checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
"remote_md5sum": null
}
And your observation was right: there was no file on the Ansible Tower (awx) server. Changing the destination directory to the user's home directory reported a checksum mismatch if a file with that name already existed:
test1.example.com | FAILED! => {
"changed": false,
"checksum": null,
"dest": "/home/user/test.txt",
"file": "/home/user/test.txt",
"md5sum": null,
"msg": "checksum mismatch",
"remote_checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
"remote_md5sum": null
}
i.e. it complained that a file was already there. However, it also failed when there was no file.
After changing the destination directory to the home of the user that Ansible Tower runs under (awx), via arguments src=/home/user/test.txt dest=/home/awx/ flat=yes,
test1.example.com | CHANGED => {
"changed": true,
"checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
"dest": "/home/awx/test.txt",
"md5sum": "d8e8fca2dc0f896fd7cb4cb0031ba249",
"remote_checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
"remote_md5sum": null
}
the file was left there correctly:
ls -al /home/awx/
-rw-r--r--. 1 awx awx 5 Nov 6 10:42 test.txt
Regarding
The problem is no matter where I point dest on the Ansible server the file doesn't actually write there. ... Any advice would be appreciated. ...
it looks like this is caused by the user context and missing write permissions, plus possibly what others have observed: "It turns out that Ansible Tower doesn't actually fetch the files to itself, but just copies them to a temporary directory on the remote server".
You can also try setting
validate_checksum: no
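Applied to the fetch task from the question, that would look roughly like this (a sketch; it only skips the local checksum comparison, it does not change any permissions):
- name: fetch file
  fetch:
    src: /tmp/key
    dest: /tmp/received/
    flat: yes
    validate_checksum: no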
Using:
win_unzip:
  src: "D:\program64\my\app\binaries.zip"
  dest: "D:\program64\my\app\"
  delete_archive: yes
I get:
TASK [ Unzip zip file] ****************************
17:19:01 fatal: [myhost]: FAILED! => {"changed": true, "dest": "D:\program64\my\app\", "msg":
"Error unzipping 'D:\program64\my\app\binaries.zip' to 'D:\program64\my\app\'!. Method:
System.IO.Compression.ZipFile, Exception: Exception calling \"ExtractToFile\" with \"3\" argument(s):
\"Access to the path 'D:\program64\my\app\my_app.exe' is denied.\"", "removed": false, "src":
"D:\program64\my\app\binaries.zip"}
I checked and there was no my_app.exe. When I did the extraction manually it worked. I also checked the policies and they were OK. I think it is linked more to Windows than to Ansible, but I can't figure out why or how.
PS: Using manual Kerberos auth.
Thanks,
In my case src: was accidentally a folder instead of a file.
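Worth noting: fetch only pulls single files, so if src is a directory, one common pattern is to archive it on the remote host first and fetch the resulting file. A minimal sketch (the directory path and archive name below are made up for illustration):
- name: archive the remote directory (fetch cannot pull directories)
  archive:
    path: /opt/app/config
    dest: /tmp/config.tar.gz

- name: fetch the single archive file instead
  fetch:
    src: /tmp/config.tar.gz
    dest: /tmp/received/
    flat: yes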
From "Ansible: Can I execute role from command line?" I use the following script (apply.sh):
HOST_PATTERN=$1
shift
ROLE=$1
shift
echo "To apply role \"$ROLE\" to host/group \"$HOST_PATTERN\"..."
export ANSIBLE_ROLES_PATH="$(pwd)/roles"
export ANSIBLE_RETRY_FILES_ENABLED="False"
ANSIBLE_ROLES_PATH="$(pwd)/roles" ansible-playbook "$@" /dev/stdin <<END
---
- hosts: $HOST_PATTERN
  roles:
    - $ROLE
END
The problem is that when I run it with ./apply.sh all dev-role -i dev-inventory, it cannot apply the role correctly. When I run ansible-playbook -i dev-inventory site.yml --tags dev-role, it works.
Below is the error message:
fatal: [my-api]: FAILED! => {"changed": false, "checksum_dest": null, "checksum_src": "d3a0ae8f3b45a0a7906d1be7027302a8b5ee07a0", "dest": "/tmp/install-amazon2-td-agent4.sh", "elapsed": 0, "gid": 0, "group": "root", "mode": "0644", "msg": "Destination /tmp/install-amazon2-td-agent4.sh is not writable", "owner": "root", "size": 838, "src": "/home/ec2-user/.ansible/tmp/ansible-tmp-1600788856.749975-487-237398580935180/tmpobyegc", "state": "file", "uid": 0, "url": "https://toolbelt.treasuredata.com/sh/install-amazon2-td-agent4.sh"}
Based on "msg": "Destination /tmp/install-amazon2-td-agent4.sh is not writable", I'd guess this is because site.yml contains a become: yes statement, which makes all tasks run as root. The "anonymous" playbook in the heredoc does not contain a become: declaration, so you would need to either run ansible-playbook --become or add become: yes to it, like so:
ANSIBLE_ROLES_PATH="$(pwd)/roles" ansible-playbook "$@" /dev/stdin <<END
---
- hosts: $HOST_PATTERN
  become: yes
  roles:
    - $ROLE
END
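Alternatively, since the heredoc call forwards extra arguments to ansible-playbook via "$@", you could leave the playbook untouched and pass privilege escalation on the command line, e.g.:
./apply.sh all dev-role -i dev-inventory --become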
I declared a YUM task as below:
---
- hosts: all
  vars:
  tasks:
    - name: install package
      yum:
        name: ntp
        state: present
I ran the following command:
ansible-playbook test.yml -i localhost, --connection=local -vvvv
and received the following error message:
TASK [install package] ***************************************************************************************************************************************************
task path: /home/osuser/dod/test.yml:6
Using module file /usr/lib/python2.7/site-packages/ansible/modules/system/setup.py
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: osuser
<localhost> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
Running svr4pkg as the backend for the yum action plugin
Using module file /usr/lib/python2.7/site-packages/ansible/modules/packaging/os/svr4pkg.py
<localhost> EXEC /bin/sh -c '/usr/bin/python && sleep 0'
fatal: [localhost]: FAILED! => {
"ansible_facts": {
"pkg_mgr": "svr4pkg"
},
"changed": false,
"invocation": {
"module_args": {
"category": false,
"name": "ntp",
"proxy": null,
"response_file": null,
"src": null,
"state": "present",
"zone": "all"
}
},
"msg": "src is required when state=present",
"name": "ntp"
Note the following message in debug:
Running svr4pkg as the backend for the yum action plugin
Ansible decided to use the "svr4pkg" module (which requires the src parameter) as the backend for yum.
Workaround:
Set the use_backend: yum parameter on the yum module... if possible! (I cannot modify the YAML file in my real usage.)
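For reference, that workaround would look roughly like this in the task from the playbook above (assuming your Ansible version's yum module supports the use_backend parameter):
- name: install package
  yum:
    name: ntp
    state: present
    use_backend: yum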
I am running Ansible 2.7.15 on CentOS 7.6 with yum installed, so there is absolutely no reason to use svr4pkg as a backend (which is not even supported/documented by the yum module).
However, as it seems to be driven by an Ansible fact, I did the following test (result is filtered):
ansible -i localhost, all -m setup -k
SUCCESS => {
"ansible_facts": {
"ansible_distribution": "CentOS",
"ansible_distribution_file_parsed": true,
"ansible_distribution_file_path": "/etc/redhat-release",
"ansible_distribution_file_variety": "RedHat",
"ansible_distribution_major_version": "7",
"ansible_distribution_release": "Core",
"ansible_distribution_version": "7.6.1810",
"ansible_os_family": "RedHat",
"ansible_pkg_mgr": "svr4pkg",
"ansible_python_version": "2.7.5",
"module_setup": true
},
"changed": false
}
Any clue as to the reason, and how to enforce ansible_pkg_mgr?
It seems that this distribution ships with both yum and svr4pkg (pkgadd), as we can see below:
$ ll /usr/bin/yum
-rwxr-xr-x. 1 root root 801 Nov 5 2018 /usr/bin/yum
$ ll /usr/sbin/pkgadd
-rwxr-xr-x. 1 root root 207342 Jul 2 16:12 /usr/sbin/pkgadd
So the last available package manager resolved is kept and takes precedence; see /usr/lib/python2.7/site-packages/ansible/module_utils/facts/system/pkg_mgr.py:
# A list of dicts. If there is a platform with more than one
# package manager, put the preferred one last. If there is an
# ansible module, use that as the value for the 'name' key.
PKG_MGRS = [{'path': '/usr/bin/yum', 'name': 'yum'},
            {'path': '/usr/bin/dnf', 'name': 'dnf'},
            {'path': '/usr/bin/apt-get', 'name': 'apt'},
            {'path': '/usr/sbin/pkgadd', 'name': 'svr4pkg'},
            [...]

    def collect(self, module=None, collected_facts=None):
        facts_dict = {}
        collected_facts = collected_facts or {}
        pkg_mgr_name = 'unknown'
        for pkg in PKG_MGRS:
            if os.path.exists(pkg['path']):
                pkg_mgr_name = pkg['name']

        # Handle distro family defaults when more than one package manager is
        # installed, the ansible_fact entry should be the default package
        # manager provided by the distro.
        if collected_facts['ansible_os_family'] == "RedHat":
            if pkg_mgr_name not in ('yum', 'dnf'):
                pkg_mgr_name = self._check_rh_versions(pkg_mgr_name, collected_facts)
        [...]
        facts_dict['pkg_mgr'] = pkg_mgr_name
        return facts_dict
So it seems to be an unhandled case in Ansible.
However, I still have no idea how to enforce the right value!
Fixed by upgrading to Ansible 2.8+.
See https://github.com/ansible/ansible/issues/49184, which covers the case when multiple package managers are available on a system.
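After upgrading, you can quickly check which package manager fact is resolved, for example with the setup module's filter argument (adjust connection/auth options as in the earlier ad-hoc call):
ansible -i localhost, all -m setup -a 'filter=ansible_pkg_mgr'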
I am running the deployment concurrently on a number of hosts. As can be expected, the output moves quickly during runtime and it is hard to track at what state each task ends. When I get to the end of the playbook I can see which hosts have failed, which is great. However, I need to scroll through pages and pages of output in order to find out on which task a certain host failed.
Is there a way to have a print out at the end of the playbook saying for example:
"Host 1 failed on Task 1/cmd"
Don't know if this fits your issue exactly, but you can help yourself with a little exception handling like this:
---
- hosts: localhost
  any_errors_fatal: true
  tasks:
    - block:
        - name: "Fail a command"
          shell: |
            rm someNonExistentFile
      rescue:
        - debug:
            msg: "{{ ansible_failed_result }}"
        #- fail:
        #    msg: "Playbook run failed. Aborting..."
        # uncomment the fail task above to actually fail a deployment after writing the error message
The variable ansible_failed_result contains something like this:
TASK [debug] ************************************************************************************************************************************************************************************************
ok: [localhost] => {
"msg": {
"changed": true,
"cmd": "rm someNonExistentFile\n",
"delta": "0:00:00.036509",
"end": "2019-10-31 12:06:09.579806",
"failed": true,
"invocation": {
"module_args": {
"_raw_params": "rm someNonExistentFile\n",
"_uses_shell": true,
"argv": null,
"chdir": null,
"creates": null,
"executable": null,
"removes": null,
"stdin": null,
"stdin_add_newline": true,
"strip_empty_ends": true,
"warn": true
}
},
"msg": "non-zero return code",
"rc": 1,
"start": "2019-10-31 12:06:09.543297",
"stderr": "rm: cannot remove ‘someNonExistentFile’: No such file or directory",
"stderr_lines": [
"rm: cannot remove ‘someNonExistentFile’: No such file or directory"
],
"stdout": "",
"stdout_lines": [],
"warnings": [
"Consider using the file module with state=absent rather than running 'rm'. If you need to use command because file is insufficient you can add 'warn: false' to this command task or set 'command_warnings=False' in ansible.cfg to get rid of this message."
]
}
}
I mostly use stderr when applicable. Otherwise I use "{{ ansible_failed_result | to_nice_json }}".
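If you specifically want a summary line like "Host 1 failed on Task 1/cmd", the rescue section also has access to ansible_failed_task, which carries the failing task's name; a minimal sketch building on the play above:
---
- hosts: localhost
  any_errors_fatal: true
  tasks:
    - block:
        - name: "Fail a command"
          shell: |
            rm someNonExistentFile
      rescue:
        - name: "Report which task failed on which host"
          debug:
            msg: "Host {{ inventory_hostname }} failed on task '{{ ansible_failed_task.name }}': {{ ansible_failed_result.msg | default('') }}"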
hth
I'm attempting to use Ansible to remove some keys. The command runs successfully, but does not edit the file.
ansible all -i inventories/staging/inventory -m authorized_key -a "user=deploy state=absent key={{ lookup('file', '/path/to/a/key.pub') }}"
Running this command returns the following result:
staging1 | SUCCESS => {
"changed": false,
"comment": null,
"exclusive": false,
"key": "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAxKjbpkqro9JhiEHrJSHglaZE1j5vbxNhBXNDLsooUB6w2ssLKGM9ZdJ5chCgWSpj9+OwYqNwFkJdrzHqeqoOGt1IlXsiRu+Gi3kxOCzsxf7zWss1G8PN7N93hC7ozhG7Lv1mp1EayrAwZbLM/KjnqcsUbj86pKhvs6BPoRUIovXYK28XiQGZbflak9WBVWDaiJlMMb/2wd+gwc79YuJhMSEthxiNDNQkL2OUS59XNzNBizlgPewNaCt06SsunxQ/h29/K/P/V46fTsmpxpGPp0Q42sCHczNMQNS82sJdMyKBy2Rg2wXNyaUehbKNTIfqBNKqP7J39vQ8D3ogdLLx6w== arthur#Inception.local",
"key_options": null,
"keyfile": "/home/deploy/.ssh/authorized_keys",
"manage_dir": true,
"path": null,
"state": "absent",
"unique": false,
"user": "deploy",
"validate_certs": true
}
The command was a success, but it doesn't show that anything changed. The key remains on the server.
Any thoughts on what I'm missing?
The behaviour you described occurs when you try to remove the authorized key using a non-root account different from deploy, i.e. one without the necessary permissions.
Add --become (-b) argument to the command:
ansible all -b -i inventories/staging/inventory -m authorized_key -a "user=deploy state=absent key={{ lookup('file', '/path/to/a/key.pub') }}"
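For reference, the playbook equivalent with privilege escalation would be roughly this (same module arguments as the ad-hoc call):
---
- hosts: all
  become: yes
  tasks:
    - name: remove the old key for the deploy user
      authorized_key:
        user: deploy
        state: absent
        key: "{{ lookup('file', '/path/to/a/key.pub') }}"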
That said, I see no justification for the ok status; the task should fail. This looks like a bug in Ansible to me; I filed an issue on GitHub.