Ansible Synchronize not able to create directory as root - ansible

I'm working on building an Ansible playbook and I'm using Vagrant as a test platform before I apply the playbook to a remote server.
I'm having issues getting Synchronize to work. I have some files that I need to move up to the server as part of the deployment.
Here's my playbook. I put the shell: whoami in there to make sure commands were running as root.
---
- hosts: all
  sudo: yes
  tasks:
    - name: who am I
      shell: whoami
    - name: Sync up www folder
      synchronize: src=www dest=/var
When I run this I get this:
failed: [default] => {"cmd": "rsync --delay-updates -FF --compress --archive --rsh 'ssh -i /Users/dan/.vagrant.d/insecure_private_key -o StrictHostKeyChecking=no -o Port=2222' --out-format='<<CHANGED>>%i %n%L' www vagrant@127.0.0.1:/var", "failed": true, "rc": 23}
msg: rsync: recv_generator: mkdir "/var/www" failed: Permission denied (13)
*** Skipping any contents from this failed directory ***
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1236) [sender=3.1.1]
FATAL: all hosts have already failed -- aborting
If I'm supplying sudo: yes shouldn't all commands be run as root, including Synchronize?

The Ansible Synchronize module page has some big hairy warnings:
The remote user for the dest path will always be the remote_user, not
the sudo_user.
There's a suggestion to wrap rsync with sudo like this:
# Synchronize using an alternate rsync command
synchronize: src=some/relative/path dest=/some/absolute/path rsync_path="sudo rsync"
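In full-playbook form, the wrapped-rsync workaround might look like the sketch below. It assumes the remote user can run sudo rsync without a password prompt (e.g. via a NOPASSWD rule), since rsync has no way to answer one:

```yaml
---
- hosts: all
  sudo: yes
  tasks:
    # rsync_path wraps the remote rsync in sudo, so the dest side runs as root
    - name: Sync up www folder, running rsync as root on the remote side
      synchronize: src=www dest=/var rsync_path="sudo rsync"
```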
There's also a suggestion to use verbosity to debug what's really going on. In this case, that means adding -vvv or even -vvvv to your ansible-playbook command line.
Finally, this is a great time to use proper permissions, especially for non-system files like a www dir. This will solve your problem in the process.
# don't use recurse here unless you are confident how it works with directories.
- file: dest=/var/www state=directory owner=www-data group=www-data mode=0755
- synchronize: src=www dest=/var

Ansible keeps wanting to be root

I'm a beginner with Ansible, and I need to run some basic tasks on a remote server.
The procedure is as follows:
I log in as some user (osadmin)
I run su - to become root
I then do the tasks I need to.
So, I wrote my playbook as follows:
---
- hosts: qualif
  vars:
    - ansible_user: osadmin
    - ansible_password: H1g2.D6#
  tasks:
    - name: Copy stuff from here to over there
      copy:
        src: /home/osadmin/file.txt
        dest: /home/osadmin/file-changed.txt
        owner: osadmin
        group: osadmin
        mode: 0777
Also, I have the following in vars/main.yml:
ansible_user: osadmin
ansible_password: password1
ansible_become_password: password2
[ some other values ]
However, when running my tasks, Ansible / the hosts returns me the following:
"Incorrect sudo password"
I then changed my tasks so that instead of becoming sudo and copying the file somewhere my osadmin user can't write, I just copy the file into /home/osadmin. So, theoretically, there is no need to become sudo for a simple copy.
The problem now is that not only does it keep saying "wrong sudo password", but if I remove the password, Ansible asks for it.
I then decided to run the command and added -vvv at the end, and it showed me the following:
ESTABLISH SSH CONNECTION FOR USER: osadmin
SSH: EXEC sshpass -d10 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o User=osadmin -o ConnectTimeout=10 -o ControlPath=/home/osadmin/.ansible/cp/b9489e2193 -tt HOST-ADDRESS '/bin/sh -c '"'"'sudo -H -S -n -u
root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-ewujwywrqhcqfdrkaglvrouhmuiefwlj; /usr/bin/python /home/osadmin/.ansible/tmp/ansible-tmp-1550076004.1888492-11284794413477/AnsiballZ_setup.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
(1, b'sudo: a password is required\r\n', b'Shared connection to HOST-ADDRESS closed.\r\n')
As you can see, it somehow runs as root, although I never told it to.
Does anyone know why Ansible keeps trying to be sudo, and how can I disable this?
Thank you in advance
There is a difference between 'su' and 'sudo'. If you have 'su' access, that means you can log in as root directly (maybe not, but it looks like it). Try ansible_ssh_user=root with ansible_password=password2.
If that doesn't work, configure sudo on the server. You should be able to run sudo whoami and get the answer root. After that your code should run.
One more thing: you are using the 'copy' module incorrectly. It takes src as a path on the local machine (where Ansible runs) and dest as a path on the remote machine.
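If the account really does have only su access and not sudo, Ansible can also be told to escalate through su rather than sudo. A minimal sketch, assuming ansible_become_password is set to the root password in vars/main.yml as in the question:

```yaml
---
- hosts: qualif
  become: yes
  become_method: su   # escalate with 'su -' instead of sudo
  tasks:
    - name: Verify we are root after escalating via su
      command: whoami
```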

Rsync with Ansible times out or fails

I have 3 hosts, AnsibleMast, DataRepo, and Node, where Node is any number of targets for Ansible. I am trying to get Node to use rsync to get a file from DataRepo.
If I am on Node and execute this command the file is transferred as expected:
rsync -avzL deployer@datarepo:/home/deployer/data/sbc/cbdbexport /tmp/.
I created this task:
- name: Copy the database backup file to the target node
  command: 'rsync -azL deployer@datarepo:/home/deployer/data/sbc/cbdbexport /tmp/.'
When executed it just hangs. I can look on the target and verify that it's running.
[deployer@steve ~]$ ps -ef | grep ssh
deployer 3778 3777 0 14:00 pts/2 00:00:00 ssh -l deployer datarepo rsync --server --sender -lLogDtprze.iLs . /home/deployer/data/sbc/cbdbexport
I created this task using the synchronize module:
- name: Copy the database backup file to the target node
  synchronize:
    rsync_path: /usr/bin/rsync
    mode: pull
    src: rsync://datarepo:/home/deployer/data/sbc/cbdbexport
    dest: /tmp/
    archive: no
    copy_links: yes
It eventually times out. It also never appears to be running when I execute ps -ef | grep ssh:
FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --copy-links --rsync-path=/usr/bin/rsync --out-format=<<CHANGED>>%i %n%L rsync://datarepo:/home/deployer/data/sbc/cbdbexport /tmp/", "failed": true, "msg": "rsync: failed to connect to datarepo: Connection timed out (110)\nrsync error: error in socket IO (code 10) at clientserver.c(124) [receiver=3.0.6]\n", "rc": 10}
After the Ansible tasks fail, I test that I can ssh from Node to DataRepo with no issue. I test that I can run the raw rsync command. Both work as expected.
Question:
1. Why do both Ansible tasks fail? Is there something obvious I am missing to make them work like the raw command?
I think your syntax is wrong: you are trying to use an actual rsync share (which would mean you are running rsync as a daemon on datarepo), but the command you show as working uses rsync over SSH. So the below is not correct:
src: rsync://datarepo:/home/deployer/data/sbc/cbdbexport
and I think it should be
src: /home/deployer/data/sbc/cbdbexport
You can also add async to the task if you expect it to run for a long time, so that network hiccups or SSH timeouts don't affect the task. Something like the below would let it run for up to one hour (3600 seconds):
- name: Copy the database backup file to the target node
  synchronize:
    rsync_path: /usr/bin/rsync
    mode: pull
    src: /home/deployer/data/sbc/cbdbexport
    dest: /tmp/
    archive: no
    copy_links: yes
  async: 3600

How to copy file to local directory using Ansible?

I'm trying to set up a playbook that will configure my development system. I'd like to copy the /etc/hosts file from my playbooks "files" directory to the /etc directory on my system. Currently I'm doing the following:
# main.yml
- hosts: all
  tasks:
    - copy: src=files/hosts
            dest=/etc/hosts
            owner=root
            group=wheel
            mode=0644
            backup=true
      become: true

# inventory
localhost ansible_connection=local
When I run the playbook I'm getting this error:
fatal: [localhost]: FAILED! => {... "msg": Failed to get information on remote file (/etc/hosts): MODULE FAILURE"}
I believe this is because copy is supposed to be used to copy a file to a remote file system. So how do you copy a file to your local management system? I did a Google Search and everything talks about doing the former. I didn't see this addressed in the Ansible docs.
Your task is ok.
You should add --ask-sudo-pass (in recent Ansible versions, --ask-become-pass or -K) to the ansible-playbook call.
If you run with -vvv you can see the command starts with sudo -H -S -n -u root /bin/sh -c echo BECOME-SUCCESS-somerandomstring (followed by a call to the Python script). If you execute it yourself, you'll get a sudo: a password is required message. Ansible quite unhelpfully replaces this error message with its own Failed to get information on remote file (/etc/hosts): MODULE FAILURE.
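For reference, the same task can be written in YAML dict syntax with become at the play level; this is a sketch using the paths from the question:

```yaml
# main.yml
- hosts: all
  become: true
  tasks:
    - name: Install the hosts file on the local machine
      copy:
        src: files/hosts
        dest: /etc/hosts
        owner: root
        group: wheel
        mode: '0644'
        backup: true
```

Running it with ansible-playbook -i inventory main.yml --ask-become-pass lets sudo get its password.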

Ansible and Wget

I am trying to wget a file from a web server from within an Ansible playbook.
Here is the Ansible snippet:
---
- hosts: all
  sudo: true
  tasks:
    - name: Prepare Install folder
      sudo: true
      action: shell sudo mkdir -p /tmp/my_install/mysql/ && cd /tmp/my_install/mysql/
    - name: Download MySql
      sudo: true
      action: shell sudo wget http://{{ repo_host }}/MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar
Invoking it via:
ansible-playbook my_3rparties.yml -l vsrv644 --extra-vars "repo_host=vsrv656" -K -f 10
It fails with the following:
Cannot write to `MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar' (Permission denied).
FATAL: all hosts have already failed -- aborting
PLAY RECAP ********************************************************************
to retry, use: --limit @/usr2/ihazan/vufroria_3rparties.retry
vsrv644 : ok=2 changed=1 unreachable=0 failed=1
When trying to do the command that fail via regular remote ssh to mimic what ansible would do, it doesn't work as follows:
-bash-4.1$ ssh ihazan@vsrv644 'cd /tmp/my_install/mysql && sudo wget http://vsrv656/MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar'
Enter passphrase for key '/usr2/ihazan/.ssh/id_rsa':
sudo: sorry, you must have a tty to run sudo
But I can solve it using -t as follows:
-bash-4.1$ ssh -t ihazan@vsrv644 'cd /tmp/my_install/mysql && sudo wget http://vsrv656/MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar'
Then it works.
Is there a way to set the -t (pseudo tty option) on ansible?
P.S: I could solve it by editing the sudoers file as others propose but that is a manual step I am trying to avoid.
Don't use the shell module when there are specialized modules available. In your case:
Create directories with the file module:
- name: create project directory {{ common.project_dir }}
  file: state=directory path={{ common.project_dir }}
Download files with the get_url module:
- name: download sources
  get_url: url={{ opencv.url }} dest={{ common.project_dir }}/{{ opencv.file }}
Note the new module call syntax in the examples above.
If you have to use sudo with password remember to give --ask-sudo-pass when needed (see e.g. Remote Connection Information).
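Putting both modules together for the MySQL case in the question, a sketch (the URL and paths are taken from the question; become replaces the older sudo keyword):

```yaml
---
- hosts: all
  become: yes
  tasks:
    # file replaces the 'shell mkdir -p' step and is idempotent
    - name: Prepare install folder
      file:
        path: /tmp/my_install/mysql
        state: directory
    # get_url replaces the 'shell wget' step, no tty or cd needed
    - name: Download MySQL bundle
      get_url:
        url: "http://{{ repo_host }}/MySQL-5.6.15-1.el6.x86_64.rpm-bundle.tar"
        dest: /tmp/my_install/mysql/
```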
In Ansible:
file to manage files/directories
get_url to download what you need
become: yes to use sudo privileges
See ansible documentation:
https://docs.ansible.com/ansible/latest/modules/modules_by_category.html

Sudo does not work with Ansible AWX

I have set sudo_user in /etc/ansible/ansible.cfg.
In my playbook I have set remote_user, sudo_user and sudo: yes (also tried sudo: True) at the same level as hosts:
I then use a role that does:
shell: cp -f src /usr/local/bin/dest
sudo: yes
and get
stderr: cp: cannot create regular file `/usr/local/bin/dest': Permission denied
The credentials in AWX are set correctly - I am able to manually log in as the desired user on the remote machine and copy the file with sudo cp. I have no idea what I'm doing wrong.
What's your sudo user? Your base playbook should look something like this:
# Base System: this is my base playbook
- name: Base Playbook
  hosts: all
  user: myuser
  sudo: True
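With escalation set at the play level like this, the cp-via-shell step in the role is better expressed with the copy module. A sketch, where /path/to/src stands in for the real source path (the question only shows "src"); remote_src: yes tells copy the source already lives on the managed host:

```yaml
- name: Copy the binary into place
  copy:
    src: /path/to/src        # hypothetical; substitute the real source path
    dest: /usr/local/bin/dest
    remote_src: yes
    mode: '0755'
  become: yes
```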
