Ansible behavior when sudo is limited to specific commands on managed nodes

I want to discuss Ansible's behavior when a user on the managed nodes is granted sudo privileges for specific commands only.
I have sudo privileges on the remote managed host [rm-host.company.com] for specific commands. Two of them are:
/bin/mkdir /opt/somedir/unit*
/bin/chmod 2775 /opt/somedir/unit*
PS: /opt/somedir already exists on the remote nodes.
My ansible control machine version:
ansible 2.7.10
python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
The YAML code below fails when I use the Ansible "file" module, even though I have sudo privileges for chmod and mkdir as listed above.
- name: 7|Ensure Directory - "/opt/somedir/{{ ENV_CHOSEN }}" Permissions are 2775
  become: yes
  become_method: sudo
  file: path="/opt/somedir/{{ ENV_CHOSEN }}" state=directory mode=2775
  when:
    - ansible_facts['os_family'] == "CentOS" or ansible_facts['os_family'] == "RedHat"
    - ansible_distribution_version | int >= 6
    - http_dir_path.stat.exists == true
    - http_dir_path.stat.isdir == true
    - CreateWebAgentEnvDir is defined
    - CreateWebAgentEnvDir is succeeded
  register: ChangeDirPermission

- debug:
    var: ChangeDirPermission
Runtime error:
TASK [7|Ensure Directory - "/opt/somedir/unitc" Permissions are 2775] **************************************************************************************************************************************************************************************
fatal: [rm-host.company.com]: FAILED! => {"changed": false, "module_stderr": "FIPS mode initialized\r\nShared connection to rm-host.company.com closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
to retry, use: --limit @/u/joker/scripts/Ansible/playbooks/agent/plays/agent_Install.retry
PLAY RECAP ***************************************************************************************************************************************************************************************************************************************************
rm-host.company.com : ok=9 changed=2 unreachable=0 failed=1
But succeeds when I use command module, like so:
- name: 7|Ensure Directory - "/opt/somedir/{{ ENV_CHOSEN }}" Permissions are 2775
  command: sudo /bin/chmod 2775 "/opt/somedir/{{ ENV_CHOSEN }}"
  when:
    - ansible_facts['os_family'] == "CentOS" or ansible_facts['os_family'] == "RedHat"
    - ansible_distribution_version | int >= 6
    - http_dir_path.stat.exists == true
    - http_dir_path.stat.isdir == true
    - CreateagentEnvDir is defined
    - CreateagentEnvDir is succeeded
  register: ChangeDirPermission

- debug:
    var: ChangeDirPermission
Successful runtime debug output captured:
TASK [7|Ensure Directory - "/opt/somedir/unitc" Permissions are 2775] **************************************************************************************************************************************************************************************
[WARNING]: Consider using 'become', 'become_method', and 'become_user' rather than running sudo
changed: [rm-host.company.com]
TASK [debug] *************************************************************************************************************************************************************************************************************************************************
ok: [rm-host.company.com] => {
    "ChangeDirPermission": {
        "changed": true,
        "cmd": [
            "sudo",
            "/bin/chmod",
            "2775",
            "/opt/somedir/unitc"
        ],
        "delta": "0:00:00.301570",
        "end": "2019-06-22 13:20:17.300266",
        "failed": false,
        "rc": 0,
        "start": "2019-06-22 13:20:16.998696",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "",
        "stdout_lines": [],
        "warnings": [
            "Consider using 'become', 'become_method', and 'become_user' rather than running sudo"
        ]
    }
}
Question:
How can I make this work without using the command module? I want to stick to Ansible core modules, using 'become' and 'become_method', rather than running sudo inside the command module.
Note:
It works when sudo is enabled for ALL commands, but [ user ALL=(ALL) NOPASSWD: ALL ] cannot be granted on the remote host; company policy does not allow it for the group I am in.

The short answer is: you can't. The way Ansible works is by executing Python scripts on the remote host (except for the raw, command and shell modules). See the docs.
The file module executes such a script with a long line of parameters. But Ansible will first become the required user, in this case root, by running sudo -H -S -n -u root /bin/sh in the remote ssh session (bear in mind that this command might be slightly different in your case). Since your sudoers entry whitelists only /bin/mkdir and /bin/chmod, that /bin/sh invocation is rejected, which is why you see "sudo: a password is required".
Once the logged-in user has become root, Ansible uploads and executes the file.py script.
It looks like in your case you'll need to fall back to raw, command or shell for the tasks that need the privileged commands.
To understand this a bit better and see the detail and order of the commands being executed, run ansible-playbook with the -vvvv parameter.
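Given that constraint, here is a minimal sketch of the command-module fallback that stays idempotent. The creates: guard and the task names are my additions, not part of the original playbook; the sudo commands themselves are the two whitelisted ones from the question:

- name: Create directory via the whitelisted sudo command
  command: sudo /bin/mkdir "/opt/somedir/{{ ENV_CHOSEN }}"
  args:
    creates: "/opt/somedir/{{ ENV_CHOSEN }}"

- name: Set permissions via the whitelisted sudo command
  command: sudo /bin/chmod 2775 "/opt/somedir/{{ ENV_CHOSEN }}"

The creates: argument makes Ansible skip the mkdir once the directory exists, so a re-run does not fail.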

I solved this issue by removing become_method and become_user from my playbook.
First, I specified the user in the inventory file using ansible_user=your_user. Then I removed become_method and become_user from my playbook, leaving just become: yes.
For more details about this answer, see this other answer.
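A minimal sketch of that layout (the host and user names are placeholders):

# inventory
rm-host.company.com ansible_user=your_user

# playbook task
- name: Ensure directory permissions
  become: yes
  file:
    path: "/opt/somedir/{{ ENV_CHOSEN }}"
    state: directory
    mode: "2775"

Note that become still runs a shell through sudo under the hood, so this helps only where the sudoers policy permits that; it does not lift the restricted-commands limitation discussed above.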

Related

Ansible synchronize module returning 127

I'm finding the Ansible synchronize module keeps failing with error 127. It blames Python, but other commands have no issue, and I've got the latest module from ansible-galaxy:
fatal: [HostA]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
In the playbook I have
- ansible.posix.synchronize:
    archive: yes
    compress: yes
    delete: yes
    recursive: yes
    dest: "{{ libexec_path }}"
    src: "{{ libexec_path }}/"
    rsync_opts:
      - "--exclude=check_dhcp"
      - "--exclude=check_icmp"
ansible.cfg
[defaults]
timeout = 10
fact_caching_timeout = 30
host_key_checking = false
ansible_ssh_extra_args = -R 3128:127.0.0.1:3128
interpreter_python = auto_legacy_silent
forks = 50
I've tried removing the ansible_ssh_extra_args without success. I use it when running apt so the remote hosts, which have no internet access, can tunnel back out to the internet.
I can run the sync manually without an issue; pre-Ansible I used to call rsync like this:
sudo rsync -e 'ssh -ax' -avz --timeout=20 --delete -i --progress --exclude-from '/opt/openitc/nagios/bin/exclude.txt' /opt/openitc/nagios/libexec/* root@$ip:/opt/openitc/nagios/libexec
I'm synchronising from Ubuntu 20.04 to Ubuntu 14.04
Can anyone see what I'm doing wrong, suggest a way to debug synchronize, or a way to call rsync manually?
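One hedged thing to try, prompted by the error text itself ("you probably need to set the interpreter"): pin the Python interpreter per host instead of relying on interpreter_python = auto_legacy_silent, which can resolve to /usr/bin/python even when that path does not exist on the target. A sketch of an inventory override; the path is an assumption and must match a Python that actually exists on the Ubuntu 14.04 host:

HostA ansible_python_interpreter=/usr/bin/python3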

VM's are not visible to virsh command executed using Ansible shell task

I am trying to save the state of all running VM's before shutting down the host:
virsh save <domain-name> <filetosave>
The idea is to restore the VM's when I bring the host back up again.
I am using the Ansible virt module to determine the running VM's. The virt module has no command for saving a virtual machine, hence I am using the shell module like below:
tasks:
  - name: Gathering Facts
    setup:

  # Listing VMs
  - name: list all VMs
    virt:
      command: list_vms
    register: all_vms

  - name: list only running VMs
    virt:
      command: list_vms
      state: running
    register: running_vms

  - name: Print All VM'state
    debug:
      msg: |
        VM's: {{ all_vms.list_vms }},
        Active VM's: {{ running_vms.list_vms }}

  - name: Save Running VM's
    shell: >
      virsh save {{ item }} ~/.mykvm/{{ item }}.save
    args:
      chdir: /home/sharu
      executable: /bin/bash
    loop:
      "{{ running_vms.list_vms }}"
    register: save_result

  - debug:
      var: save_result
Output:
TASK [list all VMs]
*********
ok: [192.168.0.113]
TASK [list only running VMs]
*********
ok: [192.168.0.113]
TASK [Print All VM'state]
*********
ok: [192.168.0.113] => {
"msg": "VM's: [u'win10-2', u'ubuntu18.04', u'ubuntu18.04-2', u'win10'],\nActive VM's: [u'win10-2']\n"
}
TASK [Save Running VM's]
*********
failed: [192.168.0.113] (item=win10-2) => {"ansible_loop_var": "item", "changed": true, "cmd": "virsh save win10-2 ~/.mykvm/win10-2.save\n", "delta": "0:00:00.101916", "end": "2019-12-30 01:19:32.205584", "item": "win10-2", "msg": "non-zero return code", "rc": 1, "start": "2019-12-30 01:19:32.103668", "stderr": "error: failed to get domain 'win10-2'", "stderr_lines": ["error: failed to get domain 'win10-2'"], "stdout": "", "stdout_lines": []}
PLAY RECAP
******************
192.168.0.113 : ok=5 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Notice that the virt module was able to get the information about the running domains (VM's), but the shell command failed to find the domain name. It seems the VM's are not visible to Ansible's shell task.
To debug further, I executed the Ansible ad-hoc command below, but got an empty result:
$ ansible -i ../hosts.ini RIG3600 -m shell -a "virsh list --all"
192.168.0.113 | CHANGED | rc=0 >>
Id Name State
--------------------
The same command works fine in the Linux shell
$ virsh list --all
Id Name State
--------------------------------
3 win10-2 running
- ubuntu18.04 shut off
- ubuntu18.04-2 shut off
- win10 shut off
I am using Ansible version 2.8.3.
I am not sure what I am missing here; any help is much appreciated.
The virt module, as documented, defaults to using the libvirt URI qemu:///system.
The virsh command, on the other hand, when run as a non-root user, defaults to qemu:///session.
In other words, your virsh command is not talking to the same libvirt instance as your virt tasks.
You can provide an explicit URI to the virsh command using the -c (--connect) option:
virsh -c qemu:///system ...
Or by setting the LIBVIRT_DEFAULT_URI environment variable:
- name: Save Running VM's
  shell: >
    virsh save {{ item }} ~/.mykvm/{{ item }}.save
  args:
    chdir: /home/sharu
    executable: /bin/bash
  environment:
    LIBVIRT_DEFAULT_URI: qemu:///system
  loop:
    "{{ running_vms.list_vms }}"
  register: save_result
(You can also set environment on the play instead of on an individual task).
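For instance, a sketch with the variable set at play level (the host group name is a placeholder), so both the virt tasks and any virsh shell commands talk to the same libvirt instance:

- hosts: kvm_hosts
  environment:
    LIBVIRT_DEFAULT_URI: qemu:///system
  tasks:
    - name: list only running VMs
      virt:
        command: list_vms
        state: running
      register: running_vms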

How to create folder in $USER $HOME using Ansible

I'm new to Ansible. I am trying to become $USER and then create a .ssh folder inside the $HOME directory, and I'm getting Permission denied:
---
- hosts: amazon
  gather_facts: False
  vars:
    ansible_python_interpreter: "/usr/bin/env python3"
    account: 'jenkins'
    home: "{{ out.stdout }}"
  tasks:
    - name: Create .SSH directory
      become: true
      become_method: sudo
      become_user: "{{ account }}"
      shell: "echo $HOME"
      register: out

    - file:
        path: "{{ home }}/.ssh"
        state: directory
My output is:
MacBook-Pro-60:playbooks stefanov$ ansible-playbook variable.yml -v
Using /Users/stefanov/.ansible/ansible.cfg as config file
PLAY [amazon] *************************************************************************************************************************************************************************************
TASK [Create .SSH directory] **********************************************************************************************************************************************************************
changed: [slave] => {"changed": true, "cmd": "echo $HOME", "delta": "0:00:00.001438", "end": "2017-08-21 10:23:34.882835", "rc": 0, "start": "2017-08-21 10:23:34.881397", "stderr": "", "stderr_lines": [], "stdout": "/home/jenkins", "stdout_lines": ["/home/jenkins"]}
TASK [file] ***************************************************************************************************************************************************************************************
fatal: [slave]: FAILED! => {"changed": false, "failed": true, "msg": "There was an issue creating /home/jenkins/.ssh as requested: [Errno 13] Permission denied: b'/home/jenkins/.ssh'", "path": "/home/jenkins/.ssh", "state": "absent"}
to retry, use: --limit @/Users/stefanov/playbooks/variable.retry
PLAY RECAP ****************************************************************************************************************************************************************************************
slave : ok=1 changed=1 unreachable=0 failed=1
I'm guessing - name and - file are dicts and are considered different tasks.
And what was executed in - name is no longer in effect in - file?
Because I switched to the jenkins user in - name, while in - file I'm likely back on the account I SSH with.
Then how can I concatenate both tasks into one?
What is the right way to do this?
Another thing: how can I do sudo with the file module? I can't see such an option:
http://docs.ansible.com/ansible/latest/file_module.html
Or should I just do shell: mkdir -pv $HOME/.ssh instead of using the file module?
Then how can I concatenate both tasks into one?
You cannot do it, but you can just add become to the second task, which will make it run with the same permissions as the first one:
- file:
    path: "{{ home }}/.ssh"
    state: directory
  become: true
  become_method: sudo
  become_user: "{{ account }}"
Another thing: how can I do sudo with the file module? I can't see such an option.
That's because become (and the other become_* keywords) is not a parameter of a module, but a general declaration for any task (and play).
I'm guessing - name and - file are dicts and considered different tasks.
The first task is shell, not name. You can add a name to any task (just like become).
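Putting the pieces together, a sketch of both tasks with become applied to each (the task names are my additions; everything else follows the playbook above):

- name: Get the account's home directory
  become: true
  become_method: sudo
  become_user: "{{ account }}"
  shell: "echo $HOME"
  register: out

- name: Create .ssh directory as that account
  become: true
  become_method: sudo
  become_user: "{{ account }}"
  file:
    path: "{{ out.stdout }}/.ssh"
    state: directory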

Ansible copy or move files only on remote host

This should work but doesn't, and gives the following error (below).
I've read a couple of posts on Stack Overflow here and here, but there doesn't seem to be a good answer that works in this case. I'm really hoping I'm just missing something dumb. I have been at this for hours, so please don't mind my snark, but I need to vent.
Since Ansible 2.3.0 can't do something as simple as copy/move/rename files ONLY on the remote host (I mean, who would want to do that?), and it also can't act on globs (*) (say, when you don't know what files to act on), a 2-step approach seems to be the only way (that I know of) to move some files only on the remote host. But not even this works.
migrate_rhel2centos.yml
---
- hosts: RedHat
  become: true
  become_user: root
  become_method: sudo
  vars:
    repo_dir: /etc/yum.repos.d
  tasks:
    - name: create directory
      file: path=/etc/yum.repos.d/bak/ state=directory

    - name: get repo files
      shell: "ls {{ repo_dir }}/*.repo"
      register: repo_list

    - debug: var=repo_list.stdout_lines

    - name: move repo files
      command: "/bin/mv -f {{ item }} bak"
      args:
        chdir: "{{ repo_dir }}"
      with_items: repo_list.stdout_lines
#################################
TASK [get repo files]
**********************************************************************
changed: [myhost]
TASK [debug]
**********************************************************************
ok: [myhost] => {
"repo_list.stdout_lines": [
"/etc/yum.repos.d/centric.repo",
"/etc/yum.repos.d/redhat.repo",
"/etc/yum.repos.d/rhel-source.repo"
]
}
TASK [move repo files]
*******************************************************************
failed: [myhost] (item=repo_list.stdout_lines) => {"changed": true, "cmd": ["/bin/mv", "-f", "repo_list.stdout_lines", "bak"], "delta": "0:00:00.001945", "end": "2016-12-13 15:07:14.103823", "failed": true, "item": "repo_list.stdout_lines", "rc": 1, "start": "2016-12-13 15:07:14.101878", "stderr": "/bin/mv: cannot stat `repo_list.stdout_lines': No such file or directory", "stdout": "", "stdout_lines": [], "warnings": []}
to retry, use: --limit @/home/jimm/.ansible/migrate_rhel2centos.retry
PLAY RECAP
********************************
myhost : ok=5 changed=1 unreachable=0 failed=1
The copy module has the flag
remote_src
If no, it will search for src on the originating/controller machine.
If yes, it will look for src on the remote/target machine. The default is no.
Edit: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/copy_module.html
It now supports recursive copying.
If you want to copy files only on the remote server, you need to use the ansible.builtin.copy module with the key
remote_src: yes
Example from the docs:
- name: Copy a "sudoers" file on the remote machine for editing
  ansible.builtin.copy:
    src: /etc/sudoers
    dest: /etc/sudoers.edit
    remote_src: yes
    validate: /usr/sbin/visudo -csf %s
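Since copy with remote_src only copies, a move like the one in the question needs a follow-up deletion. A hedged sketch (the specific file name is illustrative, not from the answers above):

- name: copy a repo file aside on the remote host
  copy:
    src: "{{ repo_dir }}/redhat.repo"
    dest: "{{ repo_dir }}/bak/redhat.repo"
    remote_src: yes

- name: remove the original
  file:
    path: "{{ repo_dir }}/redhat.repo"
    state: absent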
- name: copy files task
  shell: cp source/path/file destination/path/file
This resolved my issue with copying files on the remote host.
The move repo files task fails because with_items takes a Jinja2 expression; you need to use it as below:
with_items: "{{ repo_list.stdout_lines }}"
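Applied to the playbook above, the corrected task reads (only the with_items line changes):

- name: move repo files
  command: "/bin/mv -f {{ item }} bak"
  args:
    chdir: "{{ repo_dir }}"
  with_items: "{{ repo_list.stdout_lines }}"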

How to prevent 'changed' flag when 'register'-ing a variable?

I have a register task to test for the installation of a package:
tasks:
  - name: test for nginx
    command: dpkg -s nginx-common
    register: nginx_installed
Every run it gets reported as a "change":
TASK: [test for nginx] ********************************************************
changed: [vm1]
I don't regard this as a change... it was installed last run and is still installed this run. Yeah, not a biggy, just one of those untidy OCD type issues.
So am I doing it wrong? Is there some way to use register without it always being regarded as a change?
The [verbose] output is untidy, but it's the only way I've found to get the correct return code.
TASK: [test for nginx] ********************************************************
changed: [vm1] => {"changed": true, "cmd": ["dpkg", "-s", "nginx-common"], "delta": "0:00:00.010231", "end": "2014-05-30 12:16:40.604405", "rc": 0, "start": "2014-05-30 12:16:40.594174", "stderr": "", "stdout": "Package: nginx-common\nStatus: install ok
...
\nHomepage: http://nginx.net"}
It’s described in official documentation here.
tasks:
  - name: test for nginx
    command: dpkg -s nginx-common
    register: nginx_installed
    changed_when: false
It is the command module causing the changed state, not the register parameter.
You can set changed_when: to something that is only true when something actually changed (also look at failed_when). If your task doesn't change anything, you might want to set check_mode as well (especially if other steps depend on the registered value).
This gives:
tasks:
  - name: test for nginx
    command: dpkg -s nginx-common
    register: nginx_installed
    changed_when: False
    failed_when: False  # dpkg -s returns 1 when the package is not found
    check_mode: no      # safe to run even in check mode, so the register is always populated
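As a hedged usage sketch of why populating the register matters, a later task can key off dpkg's return code (the apt task is my illustration, not part of the original answer):

- name: install nginx when the dpkg probe says it is missing
  apt:
    name: nginx-common
    state: present
  when: nginx_installed.rc != 0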
