How to create a folder in $USER's $HOME using Ansible

I'm new to Ansible. I'm trying to become $USER and then create a .ssh folder inside that user's $HOME directory, but I'm getting Permission denied:
---
- hosts: amazon
  gather_facts: False
  vars:
    ansible_python_interpreter: "/usr/bin/env python3"
    account: 'jenkins'
    home: "{{out.stdout}}"
  tasks:
    - name: Create .SSH directory
      become: true
      become_method: sudo
      become_user: "{{account}}"
      shell: "echo $HOME"
      register: out
    - file:
        path: "{{home}}/.ssh"
        state: directory
My output is:
MacBook-Pro-60:playbooks stefanov$ ansible-playbook variable.yml -v
Using /Users/stefanov/.ansible/ansible.cfg as config file
PLAY [amazon] *************************************************************************************************************************************************************************************
TASK [Create .SSH directory] **********************************************************************************************************************************************************************
changed: [slave] => {"changed": true, "cmd": "echo $HOME", "delta": "0:00:00.001438", "end": "2017-08-21 10:23:34.882835", "rc": 0, "start": "2017-08-21 10:23:34.881397", "stderr": "", "stderr_lines": [], "stdout": "/home/jenkins", "stdout_lines": ["/home/jenkins"]}
TASK [file] ***************************************************************************************************************************************************************************************
fatal: [slave]: FAILED! => {"changed": false, "failed": true, "msg": "There was an issue creating /home/jenkins/.ssh as requested: [Errno 13] Permission denied: b'/home/jenkins/.ssh'", "path": "/home/jenkins/.ssh", "state": "absent"}
to retry, use: --limit @/Users/stefanov/playbooks/variable.retry
PLAY RECAP ****************************************************************************************************************************************************************************************
slave : ok=1 changed=1 unreachable=0 failed=1
I'm guessing - name and - file are dicts and are considered different tasks, so whatever was executed in the - name task is no longer in effect in - file? I switched to the jenkins user in the first task, but in - file I'm probably back to the account I SSH with.
Then how can I combine both tasks into one?
What is the right way to do this?
Another thing: how can I do sudo with the file module? I can't see such an option:
http://docs.ansible.com/ansible/latest/file_module.html
Or should I just do shell: mkdir -pv $HOME/.ssh instead of using the file module?

Then how can I combine both tasks into one?
You cannot do it, but you can just add become to the second task, which will make it run with the same permissions as the first one:
- file:
    path: "{{home}}/.ssh"
    state: directory
  become: true
  become_method: sudo
  become_user: "{{account}}"
Another thing: how can I do sudo with the file module? I can't see such an option
Because become (and the other become_* keywords) is not a parameter of a module, but a general declaration for any task (and play).
I'm guessing - name and - file are dicts and considered different tasks.
The first task is shell, not name. You can add name to any task (just like become).
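Putting it together, here is a minimal sketch of the whole playbook with become applied to both tasks (same amazon host group and jenkins account as above; the 0700 mode is an assumption, just the conventional permission for .ssh):
---
- hosts: amazon
  gather_facts: False
  vars:
    ansible_python_interpreter: "/usr/bin/env python3"
    account: 'jenkins'
    home: "{{ out.stdout }}"
  tasks:
    # Runs "echo $HOME" as the jenkins user and records the result in "out".
    - name: Look up the account's home directory
      become: true
      become_method: sudo
      become_user: "{{ account }}"
      shell: "echo $HOME"
      register: out

    # Same become settings again, so the directory is created as jenkins.
    - name: Create .ssh directory
      become: true
      become_method: sudo
      become_user: "{{ account }}"
      file:
        path: "{{ home }}/.ssh"
        state: directory
        mode: '0700'    # assumption: typical .ssh permissions
The home variable only resolves once the first task has run, since it references the registered out result.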

Related

VMs are not visible to the virsh command executed using an Ansible shell task

I am trying to save the state of all running VMs before I shut down the host:
virsh save <domain-name> <filetosave>
The idea is to restore the VMs when I bring the host back up again.
I am using the Ansible virt module to determine the running VMs. There is no command for saving a virtual machine in the virt module, hence I am using the shell module like below:
tasks:
  - name: Gathering Facts
    setup:

  # Listing VMs
  - name: list all VMs
    virt:
      command: list_vms
    register: all_vms

  - name: list only running VMs
    virt:
      command: list_vms
      state: running
    register: running_vms

  - name: Print All VM'state
    debug:
      msg: |
        VM's: {{ all_vms.list_vms }},
        Active VM's: {{ running_vms.list_vms }}

  - name: Save Running VM's
    shell: >
      virsh save {{ item }} ~/.mykvm/{{item}}.save
    args:
      chdir: /home/sharu
      executable: /bin/bash
    loop:
      "{{ running_vms.list_vms }}"
    register: save_result

  - debug:
      var: save_result
Output:
TASK [list all VMs]
*********
ok: [192.168.0.113]
TASK [list only running VMs]
*********
ok: [192.168.0.113]
TASK [Print All VM'state]
*********
ok: [192.168.0.113] => {
"msg": "VM's: [u'win10-2', u'ubuntu18.04', u'ubuntu18.04-2', u'win10'],\nActive VM's: [u'win10-2']\n"
}
TASK [Save Running VM's]
*********
failed: [192.168.0.113] (item=win10-2) => {"ansible_loop_var": "item", "changed": true, "cmd": "virsh save win10-2 ~/.mykvm/win10-2.save\n", "delta": "0:00:00.101916", "end": "2019-12-30 01:19:32.205584", "item": "win10-2", "msg": "non-zero return code", "rc": 1, "start": "2019-12-30 01:19:32.103668", "stderr": "error: failed to get domain 'win10-2'", "stderr_lines": ["error: failed to get domain 'win10-2'"], "stdout": "", "stdout_lines": []}
PLAY RECAP
******************
192.168.0.113 : ok=5 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
Notice that the virt module was able to get information about the running domains (VMs), but the shell command failed to find the domain. It seems like the VMs are not visible to Ansible's shell task.
To debug further, I executed the Ansible ad-hoc command below but got an empty result:
$ ansible -i ../hosts.ini RIG3600 -m shell -a "virsh list --all"
192.168.0.113 | CHANGED | rc=0 >>
Id Name State
--------------------
The same command works fine in the Linux shell
$ virsh list --all
Id Name State
--------------------------------
3 win10-2 running
- ubuntu18.04 shut off
- ubuntu18.04-2 shut off
- win10 shut off
I am using Ansible version 2.8.3.
I am not sure what I am missing here; any help is much appreciated.
The virt module, as documented, defaults to using the libvirt URI qemu:///system.
The virsh command, on the other hand, when run as a non-root user, defaults to qemu:///session.
In other words, your virsh command is not talking to the same libvirt instance as your virt tasks.
You can provide an explicit URI to the virsh command using the -c (--connect) option:
virsh -c qemu:///system ...
Or by setting the LIBVIRT_DEFAULT_URI environment variable:
- name: Save Running VM's
  shell: >
    virsh save {{ item }} ~/.mykvm/{{item}}.save
  args:
    chdir: /home/sharu
    executable: /bin/bash
  environment:
    LIBVIRT_DEFAULT_URI: qemu:///system
  loop:
    "{{ running_vms.list_vms }}"
  register: save_result
(You can also set environment on the play instead of on an individual task).
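As a rough sketch of the play-level variant (the RIG3600 group name is taken from the ad-hoc command above; adjust it to your inventory):
- hosts: RIG3600
  # Every task in this play now talks to the system libvirt instance,
  # the same one the virt module uses by default.
  environment:
    LIBVIRT_DEFAULT_URI: qemu:///system
  tasks:
    - name: list all VMs via virsh
      command: virsh list --all
      register: virsh_out

    - debug:
        var: virsh_out.stdout_lines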

Ansible behavior when sudo on managed nodes is limited to specific commands

This is about Ansible's behavior when the user on the managed nodes is given sudo privileges only for specific commands.
I have sudo privileges on remote managed host [rm-host.company.com] to specific commands. Two of them are:
/bin/mkdir /opt/somedir/unit*
/bin/chmod 2775 /opt/somedir/unit*
PS: /opt/somedir already exists on the remote nodes.
My ansible control machine version:
ansible 2.7.10
python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
The YAML code below fails when I use the Ansible "file" module, even though I have sudo privileges for chmod and mkdir as listed above.
- name: 7|Ensure Directory - "/opt/somedir/{{ ENV_CHOSEN }}" Permissions are 2775
  become: yes
  become_method: sudo
  file: path="/opt/somedir/{{ ENV_CHOSEN }}" state=directory mode=2775
  when:
    - ansible_facts['os_family'] == "CentOS" or ansible_facts['os_family'] == "RedHat"
    - ansible_distribution_version | int >= 6
    - http_dir_path.stat.exists == true
    - http_dir_path.stat.isdir == true
    - CreateWebAgentEnvDir is defined
    - CreateWebAgentEnvDir is succeeded
  register: ChangeDirPermission

- debug:
    var: ChangeDirPermission
Runtime error:
TASK [7|Ensure Directory - "/opt/somedir/unitc" Permissions are 2775] **************************************************************************************************************************************************************************************
fatal: [rm-host.company.com]: FAILED! => {"changed": false, "module_stderr": "FIPS mode initialized\r\nShared connection to rm-host.company.com closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
to retry, use: --limit @/u/joker/scripts/Ansible/playbooks/agent/plays/agent_Install.retry
PLAY RECAP ***************************************************************************************************************************************************************************************************************************************************
rm-host.company.com : ok=9 changed=2 unreachable=0 failed=1
But it succeeds when I use the command module, like so:
- name: 7|Ensure Directory - "/opt/somedir/{{ ENV_CHOSEN }}" Permissions are 2775
  command: sudo /bin/chmod 2775 "/opt/somedir/{{ ENV_CHOSEN }}"
  when:
    - ansible_facts['os_family'] == "CentOS" or ansible_facts['os_family'] == "RedHat"
    - ansible_distribution_version | int >= 6
    - http_dir_path.stat.exists == true
    - http_dir_path.stat.isdir == true
    - CreateagentEnvDir is defined
    - CreateagentEnvDir is succeeded
  register: ChangeDirPermission

- debug:
    var: ChangeDirPermission
Successful runtime debug output captured:
TASK [7|Ensure Directory - "/opt/somedir/unitc" Permissions are 2775] **************************************************************************************************************************************************************************************
[WARNING]: Consider using 'become', 'become_method', and 'become_user' rather than running sudo
changed: [rm-host.company.com]
TASK [debug] *************************************************************************************************************************************************************************************************************************************************
ok: [rm-host.company.com] => {
"ChangeDirPermission": {
"changed": true,
"cmd": [
"sudo",
"/bin/chmod",
"2775",
"/opt/somedir/unitc"
],
"delta": "0:00:00.301570",
"end": "2019-06-22 13:20:17.300266",
"failed": false,
"rc": 0,
"start": "2019-06-22 13:20:16.998696",
"stderr": "",
"stderr_lines": [],
"stdout": "",
"stdout_lines": [],
"warnings": [
"Consider using 'become', 'become_method', and 'become_user' rather than running sudo"
]
}
}
Question:
How can I make this work without using the command module? I want to stick to Ansible core modules, using 'become' and 'become_method' rather than running sudo through the command module.
Note:
It works when sudo is enabled for ALL commands, but [ user ALL=(ALL) NOPASSWD: ALL ] cannot be granted on the remote host; it is not allowed by company policy for the group I am in.
The short answer is: you can't. Ansible works by executing Python scripts on the remote host (except for the raw, command and shell modules). See the docs.
The file module executes such a script with a long list of parameters. But Ansible will first become the required user, in this case root, by running something like sudo -H -S -n -u root /bin/sh in the remote SSH session (bear in mind that the exact command may differ slightly in your case).
Once the remotely logged-in user has become root, Ansible uploads and executes the file.py script.
It looks like in your case you will need to fall back to raw, command or shell for the steps where you need to run the privileged commands.
To understand this a bit better and see the detail and order of the commands being executed, run ansible-playbook with the -vvvv parameter.
I solved this issue by removing become_method and become_user from my playbook.
First, I specified the user in the inventory file using ansible_user=your_user. Then I removed become_method and become_user from my playbook, leaving just become: yes.
For more details, see this other answer.
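As a hedged sketch of that setup (host, user and variable names here are placeholders, not the asker's actual values):
# inventory (e.g. hosts.ini): set the connection user here
rm-host.company.com ansible_user=your_user

# playbook: only become is set; no become_method or become_user
- hosts: rm-host.company.com
  become: yes
  tasks:
    - name: Ensure directory permissions are 2775
      file:
        path: "/opt/somedir/{{ ENV_CHOSEN }}"
        state: directory
        mode: '2775'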

Ansible copy or move files only on remote host

This should work but doesn't, and it gives the error below.
I've read a couple of posts on Stack Overflow here and here, but there doesn't seem to be a good answer that works in this case. I'm really hoping I'm just missing something dumb; I have been at this for hours, so please don't mind my snark, but I need to vent.
Since Ansible (2.3.0) can't do something as simple as copy/move/rename files ONLY on the remote host (I mean, who would want to do that?), and it also can't act on globs (*) (say, when you don't know which files to act on), a two-step approach seems to be the only way I know of to move some files only on the remote host. But not even this works.
migrate_rhel2centos.yml
---
- hosts: RedHat
  become: true
  become_user: root
  become_method: sudo
  vars:
    repo_dir: /etc/yum.repos.d
  tasks:
    - name: create directory
      file: path=/etc/yum.repos.d/bak/ state=directory

    - name: get repo files
      shell: "ls {{ repo_dir }}/*.repo"
      register: repo_list

    - debug: var=repo_list.stdout_lines

    - name: move repo files
      command: "/bin/mv -f {{ item }} bak"
      args:
        chdir: "{{repo_dir}}"
      with_items: repo_list.stdout_lines
#################################
TASK [get repo files]
**********************************************************************
changed: [myhost]
TASK [debug]
**********************************************************************
ok: [myhost] => {
"repo_list.stdout_lines": [
"/etc/yum.repos.d/centric.repo",
"/etc/yum.repos.d/redhat.repo",
"/etc/yum.repos.d/rhel-source.repo"
]
}
TASK [move repo files]
*******************************************************************
failed: [myhost] (item=repo_list.stdout_lines) => {"changed": true, "cmd": ["/bin/mv", "-f", "repo_list.stdout_lines", "bak"], "delta": "0:00:00.001945", "end": "2016-12-13 15:07:14.103823", "failed": true, "item": "repo_list.stdout_lines", "rc": 1, "start": "2016-12-13 15:07:14.101878", "stderr": "/bin/mv: cannot stat `repo_list.stdout_lines': No such file or directory", "stdout": "", "stdout_lines": [], "warnings": []}
to retry, use: --limit @/home/jimm/.ansible/migrate_rhel2centos.retry
PLAY RECAP
********************************
myhost : ok=5 changed=1 unreachable=0 failed=1
The copy module has the flag
remote_src
If no, it will search for src on the originating/control machine.
If yes, it will look for src on the remote/target machine. Default is no.
Edit: https://docs.ansible.com/ansible/latest/collections/ansible/builtin/copy_module.html
It now also supports recursive copying.
If you want to copy a file only on the remote server, you need to use the ansible.builtin.copy module with the key
remote_src: yes
Example from the docs:
- name: Copy a "sudoers" file on the remote machine for editing
  ansible.builtin.copy:
    src: /etc/sudoers
    dest: /etc/sudoers.edit
    remote_src: yes
    validate: /usr/sbin/visudo -csf %s
- name: copy files task
  shell: cp source/path/file destination/path/file
This resolved my issue with copying files on the remote host.
You need to use Jinja2 templating in with_items, as below:
with_items: "{{ repo_list.stdout_lines }}"
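With that change, the failing task from the playbook above would look roughly like this (a sketch; repo_dir is the same variable defined in the play's vars):
- name: move repo files
  command: "/bin/mv -f {{ item }} bak"
  args:
    chdir: "{{ repo_dir }}"
  # The Jinja2 expression makes Ansible expand the registered lines;
  # without it the literal string "repo_list.stdout_lines" is passed to mv.
  with_items: "{{ repo_list.stdout_lines }}"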

Ansible via Ansible Pull can't find file in `template: validate="stat %s"`

When using Ansible through Ansible Pull, the template directive's validate step fails to find the file specified by %s.
"No such file" error reported by ansible-pull:
TASK [ssh : place ssh config file] *********************************************
fatal: [localhost]: FAILED! => {
"changed": true,
"cmd": "sshd -t -f /home/nhooey/.ansible/tmp/ansible-tmp-1453485441.94-279324955743/source",
"failed": true,
"msg": "[Errno 2] No such file or directory",
"rc": 2
}
Problematic template directive:
- name: place ssh config file
  template:
    src: etc/ssh/sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644
    validate: 'sshd -t -f %s'
This happens with either Ansible 1.9.4 or 2.0.0.2 (installed from Python 2.7's pip), and only when using Ansible Pull. Running ansible-playbook locally after manually checking out the repository works fine.
Unfortunately I'm not able to reproduce this error in the trivial case with a sample repository that I created.
Failure to reproduce error with simple test case:
$ virtualenv .venv && source .venv/bin/activate && pip install ansible
$ ansible-pull --directory /tmp/ansible-test \
--url https://github.com/nhooey/ansible-pull-template-validate-bug.git playbook.yaml
Starting Ansible Pull at 2016-01-22 13:14:51
/Users/nhooey/git/github/nhooey/ansible-pull-template-validate-bug/.venv/bin/ansible-pull \
--directory /tmp/ansible-test \
--url https://github.com/nhooey/ansible-pull-template-validate-bug.git \
playbook.yaml
localhost | SUCCESS => {
"after": "07bb13992910e0efe30d071d5f5a2a2e88251045",
"before": "07bb13992910e0efe30d071d5f5a2a2e88251045",
"changed": false
}
[WARNING]: provided hosts list is empty, only localhost is available
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [Delete template file] ****************************************************
changed: [localhost]
TASK [Test template validation] ************************************************
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=0
Playbook that won't fail in the simple test case above:
---
- hosts: localhost
  tasks:
    - name: Delete template file
      file:
        path: /tmp/file
        state: absent

    - name: Test template validation
      template:
        src: file
        dest: /tmp/file
        validate: 'ls %s'
It seems like there could be a problem with the timing around when ansible-pull creates and deletes temporary files in Ansible.
What's happening?
It's saying that it can't find sshd because sshd is not on the PATH of the environment ansible-pull runs in. Silly me...
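So a hedged fix is to give validate an absolute path to sshd; /usr/sbin/sshd is the usual location on most distributions, but confirm where sshd lives on your hosts:
- name: place ssh config file
  template:
    src: etc/ssh/sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644
    # Absolute path, because the environment ansible-pull runs in
    # did not have sshd's directory on its PATH.
    validate: '/usr/sbin/sshd -t -f %s'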

Missing become password when sudoing

I'm seeing an error message when I try to run a task with sudo in my Ansible playbook.
Here's my playbook:
---
- hosts: production
  gather_facts: no
  remote_user: deployer
  become: yes
  become_method: sudo
  become_user: root
  tasks:
    - name: Whoami
      command: /usr/bin/whoami
I would expect whoami to be root but the task fails with the error message:
» ansible-playbook -i ansible_hosts sudo.yml --ask-sudo-pass
SUDO password: [I paste my sudo password here]
PLAY [production] *************************************************************
GATHERING FACTS ***************************************************************
fatal: [MY.IP] => Missing become password
TASK: [Whoami] ****************************************************************
FATAL: no hosts matched or all hosts have already failed -- aborting
When I manually ssh into the box and try to sudo it works as expected:
» ssh deployer@production
» sudo whoami
[I paste the same sudo password]
root
The deployer user password was set by Ansible as follows (in a different playbook):
- hosts: production
  remote_user: root
  # The {{ansible_become_pass}} comes from this file:
  vars_files:
    - ./config.yml
  tasks:
    - name: Create deployer user
      user: name=deployer uid=1040 groups=sudo,deployer shell=/bin/bash password={{ansible_become_pass}}
Where {{ansible_become_pass}} is the password I desire, hashed with the following Python snippet:
python -c 'import crypt; print crypt.crypt("password I desire", "$1$SomeSalt$")'
Here "password I desire" is replaced with the actual password and "$1$SomeSalt$" is a random salt.
I'm using Ansible version 1.9.4.
What's the problem?
I have tried your version and playbook, only with --ask-pass, and it returns a "stdout": "root" result.
You have to replace --ask-sudo-pass with --ask-pass, and make sure your deployer user has root (sudo) privileges.
$ ./bin/ansible --version
ansible 1.9.4
$ ./ansible/bin/ansible-playbook -vv pl.yml --ask-pass
SSH password:
PLAY [localhost] **************************************************************
TASK: [Whoami] ****************************************************************
<localhost> REMOTE_MODULE command /usr/bin/whoami
changed: [localhost] => {"changed": true, "cmd": ["/usr/bin/whoami"], "delta": "0:00:00.002555", "end": "2015-12-05 07:17:16.634485", "rc": 0, "start": "2015-12-05 07:17:16.631930", "stderr": "", "stdout": "root", "warnings": []}
PLAY RECAP ********************************************************************
localhost : ok=1 changed=1 unreachable=0 failed=0
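For reference, in later Ansible releases the sudo-specific flag was replaced by the become equivalent, so the same run would typically be invoked as:
ansible-playbook -i ansible_hosts sudo.yml --ask-become-pass   # short form: -K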
