Ansible authorized_key does not remove keys

I'm attempting to use Ansible to remove some keys. The command runs successfully, but does not edit the file.
ansible all -i inventories/staging/inventory -m authorized_key -a "user=deploy state=absent key={{ lookup('file', '/path/to/a/key.pub') }}"
Running this command returns the following result:
staging1 | SUCCESS => {
    "changed": false,
    "comment": null,
    "exclusive": false,
    "key": "ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAxKjbpkqro9JhiEHrJSHglaZE1j5vbxNhBXNDLsooUB6w2ssLKGM9ZdJ5chCgWSpj9+OwYqNwFkJdrzHqeqoOGt1IlXsiRu+Gi3kxOCzsxf7zWss1G8PN7N93hC7ozhG7Lv1mp1EayrAwZbLM/KjnqcsUbj86pKhvs6BPoRUIovXYK28XiQGZbflak9WBVWDaiJlMMb/2wd+gwc79YuJhMSEthxiNDNQkL2OUS59XNzNBizlgPewNaCt06SsunxQ/h29/K/P/V46fTsmpxpGPp0Q42sCHczNMQNS82sJdMyKBy2Rg2wXNyaUehbKNTIfqBNKqP7J39vQ8D3ogdLLx6w== arthur@Inception.local",
    "key_options": null,
    "keyfile": "/home/deploy/.ssh/authorized_keys",
    "manage_dir": true,
    "path": null,
    "state": "absent",
    "unique": false,
    "user": "deploy",
    "validate_certs": true
}
The command reports success, but it doesn't show that anything changed. The key remains on the server.
Any thoughts on what I'm missing?

The behaviour you described occurs when you try to remove the authorized key using a non-root account other than deploy, i.e. one without the necessary permissions.
Add the --become (-b) argument to the command:
ansible all -b -i inventories/staging/inventory -m authorized_key -a "user=deploy state=absent key={{ lookup('file', '/path/to/a/key.pub') }}"
That said, I see no justification for the ok status; the task should fail. This looks like a bug in Ansible to me; I filed an issue on GitHub.
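If you prefer a playbook over an ad-hoc call, a minimal equivalent sketch (assuming the same inventory and key path as above; the file name remove-key.yml is hypothetical) would be:
---
- hosts: all
  become: yes
  tasks:
    - name: Remove key from deploy user
      authorized_key:
        user: deploy
        state: absent
        key: "{{ lookup('file', '/path/to/a/key.pub') }}"
Run it with ansible-playbook -i inventories/staging/inventory remove-key.yml.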

Related

Variable is undefined when running Ansible 'debug' ad-hoc

Could you explain why the following behaviour happens? When I try to print the remote host's IP with the following playbook, everything works as expected:
---
- hosts: centos1
  tasks:
    - name: Print ip address
      debug:
        msg: "ip: {{ ansible_all_ipv4_addresses[0] }}"
When I try the same as an ad-hoc command, it doesn't work:
ansible -i hosts centos1 -m debug -a 'msg={{ansible_all_ipv4_addresses[0]}}'
Here is the ad-hoc error:
centos1 | FAILED! => {
    "msg": "The task includes an option with an undefined variable. The error was: 'ansible_all_ipv4_addresses' is undefined. 'ansible_all_ipv4_addresses' is undefined"
}
I don't see any difference between the two approaches, which is why I was expecting both to work and print the remote IP address.
This is because no facts were gathered. With ansible-playbook, facts are gathered automatically (depending on the configuration), but with an ad-hoc ansible command they are not.
To gather them ad hoc you need to execute the setup module instead. See Introduction to ad hoc commands - Gathering facts.
Further Q&A
How Ansible sets variables?
Why does Ansible ad-hoc debug module not print variable?
Please also take note of variable naming; see Conflict of variable name packages with ansible_facts.packages
Could you please give an example of how to output "Your IP address is {{ ansible_all_ipv4_addresses[0] }}" using the ad-hoc approach with the setup module?
Example
ansible test.example.com -m setup -a 'filter=ansible_all_ipv4_addresses'
test.example.com | SUCCESS => {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "192.0.2.1"
        ]
    },
    "changed": false
}
or
ansible test.example.com -m setup -a 'filter=ansible_default_ipv4'
test.example.com | SUCCESS => {
    "ansible_facts": {
        "ansible_default_ipv4": {
            "address": "192.0.2.1",
            "alias": "eth0",
            "broadcast": "192.0.2.255",
            "gateway": "192.0.2.0",
            "interface": "eth0",
            "macaddress": "00:00:5e:12:34:56",
            "mtu": 1500,
            "netmask": "255.255.255.0",
            "network": "192.0.2.0",
            "type": "ether"
        }
    },
    "changed": false
}
It is also recommended to have a look at the full output without the filter argument, to get familiar with the result set and data structure.
Documentation
setup module - Examples

Troubleshooting fetch_module executed from Ansible Tower (awx)

I'm trying to do a very simple fetch of a file from a remote host. Somehow I've never gotten this to work.
I'm fetching from a remote Linux box to the Ansible Tower (awx) host, which is also a Linux box.
Here's the Ansible code:
---
- name: get new private key for user
  hosts: tag_Name_ansible_kali
  become: yes
  gather_facts: no

  tasks:
    - name: fetch file
      fetch:
        src: /tmp/key
        dest: /tmp/received/
        flat: yes
Here's the result which makes it appear like the fetch worked:
{
    "changed": true,
    "md5sum": "42abaa3160ba875051f2cb20be0233ba",
    "dest": "/tmp/received/key",
    "remote_md5sum": null,
    "checksum": "9416f6f64b94c331cab569035fb6bb825053bc15",
    "remote_checksum": "9416f6f64b94c331cab569035fb6bb825053bc15",
    "_ansible_no_log": false
}
However, going to the /tmp/received directory and ls -lah shows...
[root@ansibleserver received]# ls -lah
total 4.0K
drwxr-xr-x. 2 awx awx 6 Mar 12 15:48 .
drwxrwxrwt. 10 root root 4.0K Mar 12 15:49 ..
I've tested that choosing a target src file that doesn't exist makes it fail, so it's clearly connecting to the remote host. The problem is that no matter where I point dest on the Ansible server, the file doesn't actually get written there. I'm not even sure how it can have a checksum for a file that doesn't exist; I've searched the entire drive and that file does not exist. Is there another log somewhere I can look at to see where it's actually writing the file? It's not on the remote host either.
Any advice would be appreciated. Seriously scratching my head here.
On a RHEL 7.9.9 system with Ansible 2.9.25, Python 2.7.5 and Ansible Tower 3.7.x, the output of an ad-hoc fetch task on the CLI, run as a user on the Tower server, looks like:
ansible test --user ${USER} --ask-pass --module-name fetch --args "src=/home/{{ ansible_user }}/test.txt dest=/tmp/ flat=yes"
SSH password:
test1.example.com | CHANGED => {
    "changed": true,
    "checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
    "dest": "/tmp/test.txt",
    "md5sum": "d8e8fca2dc0f896fd7cb4cb0031ba249",
    "remote_checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
    "remote_md5sum": null
}
test2.example.com | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "msg": "file not found: /home/user/test.txt"
}
and the file was left there. But the command was initiated and executed under that user.
The same executed from Ansible Tower as an ad-hoc command with arguments src=/home/user/test.txt dest=/tmp/ flat=yes reported:
test2.example.com | FAILED! => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "msg": "file not found: /home/user/test.txt"
}
test1.example.com | CHANGED => {
    "changed": true,
    "checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
    "dest": "/tmp/test.txt",
    "md5sum": "d8e8fca2dc0f896fd7cb4cb0031ba249",
    "remote_checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
    "remote_md5sum": null
}
And your observation was right: there was no file on the Ansible Tower (awx) server. Changing the destination directory to the user's home directory reported, if a file already existed there,
test1.example.com | FAILED! => {
    "changed": false,
    "checksum": null,
    "dest": "/home/user/test.txt",
    "file": "/home/user/test.txt",
    "md5sum": null,
    "msg": "checksum mismatch",
    "remote_checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
    "remote_md5sum": null
}
that there is a file there already (checksum mismatch). However, it also failed if there was no file.
After changing the destination directory to the user under which Ansible Tower (awx) is running, via arguments src=/home/user/test.txt dest=/home/awx/ flat=yes,
test1.example.com | CHANGED => {
    "changed": true,
    "checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
    "dest": "/home/awx/test.txt",
    "md5sum": "d8e8fca2dc0f896fd7cb4cb0031ba249",
    "remote_checksum": "4e1243bd22c66e76c2ba9eddc1f91394e57f9f83",
    "remote_md5sum": null
}
the file was left there correctly:
ls -al /home/awx/
-rw-r--r--. 1 awx awx 5 Nov 6 10:42 test.txt
Regarding
The problem is no matter where I point dest on the Ansible server the file doesn't actually write there. ... Any advice would be appreciated. ...
it looks like this is caused by the user context and missing access/write rights, together with other observations like "It turns out that Ansible Tower doesn't actually fetch the files to itself, but just copies them to a temporary directory on the remote server".
You can also try:
validate_checksum: no
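Putting these findings together, a minimal sketch of the fetch task that avoids the permission problem (the dest under /home/awx/ is an assumption based on the user Tower runs under; validate_checksum: no is only a diagnostic aid):
- name: fetch file
  fetch:
    src: /tmp/key
    dest: /home/awx/received/   # assumption: a directory writable by the awx service user
    flat: yes
    validate_checksum: no       # diagnostic only; re-enable once fetching works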

sshfs with ansible does not give the same result as running it manually on the host

I am working on backups for my server, using sshfs. When mounting the backup folder, the backup server asks for a password. This is what my task (handler) looks like:
- name: Mount backup folder
  become: yes
  expect:
    command: "sshfs -o allow_other,default_permissions {{ backup_server.user }}@{{ backup_server.host }}:/ /mnt/backup"
    echo: yes
    responses:
      (.*)password(.*): "{{ backup_server.password }}"
      (.*)Are you sure you want to continue(.*): "yes"
  listen: mount-backup-folder
It runs and produces this output:
changed: [prod1.com] => {
    "changed": true,
    "cmd": "sshfs -o allow_other,default_permissions user@hostname.com:/ /mnt/backup",
    "delta": "0:00:00.455753",
    "end": "2021-01-26 14:57:34.482440",
    "invocation": {
        "module_args": {
            "chdir": null,
            "command": "sshfs -o allow_other,default_permissions user@hostname.com:/ /mnt/backup",
            "creates": null,
            "echo": true,
            "removes": null,
            "responses": {
                "(.*)Are you sure you want to continue(.*)": "yes",
                "(.*)password(.*)": "password"
            },
            "timeout": 30
        }
    },
    "rc": 0,
    "start": "2021-01-26 14:57:34.026687",
    "stdout": "user@hostname.com's password: ",
    "stdout_lines": [
        "user@hostname.com's password: "
    ]
}
But when I go to the server, the folder is not synced with the backup server. BUT when I run the command manually:
sshfs -o allow_other,default_permissions user@hostname.com:/ /mnt/backup
the backup DOES work. Does anybody know how this is possible?
I suspect sshfs was killed by SIGHUP. I know nothing about Ansible, so I don't know if it has an official way to ignore SIGHUP. As a workaround you can write it like this:
expect:
  command: bash -c "trap '' HUP; sshfs -o ..."
I installed sshfs and verified this bash -c "trap ..." workaround with Expect (spawn -ignore HUP ...) and sexpect (spawn -nohup ...). I believe it'd also work with Ansible (seems like its expect module uses Python's pexpect).
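Applied to the original handler, the workaround would look something like the sketch below; trap '' HUP makes the spawned shell (and thus sshfs) ignore SIGHUP when the Ansible session ends:
- name: Mount backup folder
  become: yes
  expect:
    command: bash -c "trap '' HUP; sshfs -o allow_other,default_permissions {{ backup_server.user }}@{{ backup_server.host }}:/ /mnt/backup"
    echo: yes
    responses:
      (.*)password(.*): "{{ backup_server.password }}"
      (.*)Are you sure you want to continue(.*): "yes"
  listen: mount-backup-folder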

ANSIBLE_ROLES_PATH cannot assume to get correct role in bash script

From Ansible: Can I execute role from command line? -
HOST_PATTERN=$1
shift
ROLE=$1
shift
echo "To apply role \"$ROLE\" to host/group \"$HOST_PATTERN\"..."
export ANSIBLE_ROLES_PATH="$(pwd)/roles"
export ANSIBLE_RETRY_FILES_ENABLED="False"
ANSIBLE_ROLES_PATH="$(pwd)/roles" ansible-playbook "$@" /dev/stdin <<END
---
- hosts: $HOST_PATTERN
  roles:
    - $ROLE
END
The problem is that when I run it with ./apply.sh all dev-role -i dev-inventory, the role does not work correctly. When I run ansible-playbook -i dev-inventory site.yml --tags dev-role, it works.
Below is the error message:
fatal: [my-api]: FAILED! => {"changed": false, "checksum_dest": null, "checksum_src": "d3a0ae8f3b45a0a7906d1be7027302a8b5ee07a0", "dest": "/tmp/install-amazon2-td-agent4.sh", "elapsed": 0, "gid": 0, "group": "root", "mode": "0644", "msg": "Destination /tmp/install-amazon2-td-agent4.sh is not writable", "owner": "root", "size": 838, "src": "/home/ec2-user/.ansible/tmp/ansible-tmp-1600788856.749975-487-237398580935180/tmpobyegc", "state": "file", "uid": 0, "url": "https://toolbelt.treasuredata.com/sh/install-amazon2-td-agent4.sh"}
Based on "msg": "Destination /tmp/install-amazon2-td-agent4.sh is not writable", I'd guess it is because site.yml contains become: yes statement, which makes all tasks run as root. The "anonymous" playbook does not contain a become: declaration, and thus would need one to either run ansible-playbook --become or to add become: yes to it, also
ANSIBLE_ROLES_PATH="$(pwd)/roles" ansible-playbook "$#" /dev/stdin <<END
---
- hosts: $HOST_PATTERN
become: yes
roles:
- $ROLE
END
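Alternatively, since the script forwards its remaining arguments to ansible-playbook, you can pass --become on the command line instead of editing the heredoc; a usage sketch with the invocation from the question:
./apply.sh all dev-role -i dev-inventory --become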

Ansible: Can I execute role from command line?

Suppose I have a role called "apache".
Now I want to execute that role on host 192.168.0.10 from the command line on the Ansible host:
ansible-playbook -i "192.168.0.10" --role "path to role"
Is there a way to do that?
With Ansible 2.7 you can do this:
$ ansible localhost --module-name include_role --args name=<role_name>
localhost | SUCCESS => {
    "changed": false,
    "include_variables": {
        "name": "<role_name>"
    }
}
localhost | SUCCESS => {
    "msg": "<role_name>"
}
This will run the role from /path/to/ansible/roles or the configured roles path.
Read more here:
https://github.com/ansible/ansible/pull/43131
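Since include_role is invoked here like any other module, the same ad-hoc pattern also works against remote hosts; a sketch, assuming an inventory file and a group webservers with a role named apache:
ansible webservers -i inventory -m include_role -a name=apache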
I am not aware of this feature, but you can use tags to just run one role from your playbook.
roles:
  - {role: 'mysql', tags: 'mysql'}
  - {role: 'apache', tags: 'apache'}
ansible-playbook webserver.yml --tags "apache"
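For context, a complete playbook using this tagging pattern might look like the following sketch (the file name webserver.yml and the hosts value are assumptions):
---
- hosts: webservers
  roles:
    - {role: 'mysql', tags: 'mysql'}
    - {role: 'apache', tags: 'apache'}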
There is no such thing in Ansible, but if this is a frequent use case for you, try this script.
Put it somewhere in your searchable PATH under the name ansible-role:
#!/bin/bash
if [[ $# -lt 2 ]]; then
    cat <<HELP
Wrapper script for ansible-playbook to apply a single role.

Usage: $0 <host-pattern> <role-name> [ansible-playbook options]

Examples:
    $0 dest_host my_role
    $0 custom_host my_role -i 'custom_host,' -vv --check
HELP
    exit
fi
HOST_PATTERN=$1
shift
ROLE=$1
shift
echo "Trying to apply role \"$ROLE\" to host/group \"$HOST_PATTERN\"..."
export ANSIBLE_ROLES_PATH="$(pwd)/roles"
export ANSIBLE_RETRY_FILES_ENABLED="False"
ansible-playbook "$@" /dev/stdin <<END
---
- hosts: $HOST_PATTERN
  roles:
    - $ROLE
END
You could also check the ansible-toolbox repository. It will allow you to use something like:
ansible-role --host 192.168.0.10 --gather --user centos --become my-role
I have written a small Ansible plugin, called auto_tags, that dynamically generates a tag of the same name for each role in your playbook. You can find it here.
After installing it (instructions are in the gist above) you could then execute a specific role with:
ansible-playbook -i "192.168.0.10" --tags "name_of_role"
Have you tried that? It's super cool. I'm using an 'update-os' role instead of 'apache' to give a more meaningful example. I have a role called, let's say, ./roles/update-os/ and in my ./ I add a file called ./role-update-os.yml which looks like:
#!/usr/bin/ansible-playbook
---
- hosts: all
  gather_facts: yes
  become: yes
  roles:
    - update-os
Make this file executable (chmod +x role-update-os.yml). Now you can run ./role-update-os.yml -i inventory-dev --limit 192.168.0.10 and limit it to whatever you have in your inventory. You can pass group names to --limit as well:
--limit web,db (web and db are groups defined in your inventory)
--limit 192.168.0.10,192.168.0.201
$ cat inventory-dev
[web]
192.168.0.10

[db]
192.168.0.201
Note that you can configure SSH keys and a sudoers policy so this can be executed without typing a password, which is ideal for automation, but it has security implications; analyze your environment to see whether it's suitable.
Since Ansible 2.4, two options are available: import_role and include_role.
wohlgemuth@leela:~/workspace/rtmtb-ansible/kvm-cluster$ ansible localhost -m import_role -a name=rtmtb
[WARNING]: No inventory was parsed, only implicit localhost is available
localhost | CHANGED => {
    "changed": true,
    "checksum": "d31b41e68997e1c7f182bb56286edf993146dba1",
    "dest": "/root/.ssh/id_rsa.github",
    "gid": 0,
    "group": "root",
    "md5sum": "b7831c4c72f3f62207b2b96d3d7ed9b3",
    "mode": "0600",
    "owner": "root",
    "size": 3389,
    "src": "/home/wohlgemuth/.ansible/tmp/ansible-tmp-1561491049.46-139127672211209/source",
    "state": "file",
    "uid": 0
}
localhost | CHANGED => {
    "changed": true,
    "checksum": "1972ebcd25363f8e45adc91d38405dfc0386b5f0",
    "dest": "/root/.ssh/config",
    "gid": 0,
    "group": "root",
    "md5sum": "f82552a9494e40403da4a80e4c528781",
    "mode": "0644",
    "owner": "root",
    "size": 147,
    "src": "/home/wohlgemuth/.ansible/tmp/ansible-tmp-1561491049.99-214274671218454/source",
    "state": "file",
    "uid": 0
}
ansible.builtin.import_role – Import a role into a play
ansible.builtin.include_role – Load and execute a role
Yes, import_role is an ansible module and as such it may be invoked through ansible command. The following executes role pki on my_server
ansible my_server -m import_role \
  -a "name=pki tasks_from=gencert" \
  -e cn=etcdctl \
  -e extended_key_usage=clientAuth
You can create the playbook files from the command line:
Install the role (if not already installed)
ansible-galaxy install git+https://github.com/user/apache-role.git
Create playbook and hosts files
cat >> playbook.yml <<EOL
---
- name: Run apache
  hosts: all
  roles:
    - apache-role
EOL
cat >> hosts <<EOL
192.168.0.10
EOL
Run ansible
ansible-playbook playbook.yml -i hosts
Delete the files
rm playbook.yml hosts
