Rsync with Ansible times out or fails

I have 3 hosts, AnsibleMast, DataRepo, and Node, where Node is any number of targets for Ansible. I am trying to get Node to use rsync to get a file from DataRepo.
If I am on Node and execute this command the file is transferred as expected:
rsync -avzL deployer@datarepo:/home/deployer/data/sbc/cbdbexport /tmp/.
I created this task:
- name: Copy the database backup file to the target node
  command: 'rsync -azL deployer@datarepo:/home/deployer/data/sbc/cbdbexport /tmp/.'
When executed it just hangs. I can look on the target and verify that it's running.
[deployer@steve ~]$ ps -ef | grep ssh
deployer 3778 3777 0 14:00 pts/2 00:00:00 ssh -l deployer datarepo rsync --server --sender -lLogDtprze.iLs . /home/deployer/data/sbc/cbdbexport
I created this task using the synchronize module:
- name: Copy the database backup file to the target node
  synchronize:
    rsync_path: /usr/bin/rsync
    mode: pull
    src: rsync://datarepo:/home/deployer/data/sbc/cbdbexport
    dest: /tmp/
    archive: no
    copy_links: yes
It eventually times out. It also never appears to be running when I execute ps -ef | grep ssh:
FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --copy-links --rsync-path=/usr/bin/rsync --out-format=<<CHANGED>>%i %n%L rsync://datarepo:/home/deployer/data/sbc/cbdbexport /tmp/", "failed": true, "msg": "rsync: failed to connect to datarepo: Connection timed out (110)\nrsync error: error in socket IO (code 10) at clientserver.c(124) [receiver=3.0.6]\n", "rc": 10}
After the Ansible tasks fail I test that I can ssh from Node to DataRepo with no issue. I test that I can run the raw rsync command. Both work as expected.
Question:
1. Why do both of the Ansible tasks fail? Is there something obvious that I am missing to make them work like the raw command?

I think your syntax is wrong: you are trying to use an actual rsync share (which would mean you are running rsync as a daemon on datarepo), but the command you show as working uses rsync over ssh. So the below is not correct:
src: rsync://datarepo:/home/deployer/data/sbc/cbdbexport
and I think it should be
src: /home/deployer/data/sbc/cbdbexport
You can also add async to the task if you expect it to run for a long time, so that network hiccups or ssh timeouts don't affect the task. Something like the below would cause it to wait for up to one hour (3600 seconds):
- name: Copy the database backup file to the target node
  synchronize:
    rsync_path: /usr/bin/rsync
    mode: pull
    src: /home/deployer/data/sbc/cbdbexport
    dest: /tmp/
    archive: no
    copy_links: yes
  async: 3600
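If you want the play to keep waiting on the transfer but only check on it periodically, a minimal sketch of the same task pairs async with an explicit poll interval (the 30-second value below is an arbitrary choice):
- name: Copy the database backup file to the target node
  synchronize:
    rsync_path: /usr/bin/rsync
    mode: pull
    src: /home/deployer/data/sbc/cbdbexport
    dest: /tmp/
    archive: no
    copy_links: yes
  async: 3600   # give up after one hour
  poll: 30      # check whether rsync has finished every 30 seconds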

Related

Ansible: docker related command doesn't work inside playbook, but does work once run directly

I have the following task in my playbook:
- name: install pg_stat_statements extension in the postgres container
  shell: docker exec octopus_postgres_{{ group_id }} /bin/bash -c 'psql -h localhost -U postgres -p 5433 -c "CREATE EXTENSION pg_stat_statements;"' # && service postgres restart')"
  async: 10
  poll: 0
When I run the playbook this task seems to finish successfully, but if I check the postgres database there are no changes to it. The task didn't actually work.
If I run the above-mentioned command manually on the host via bash, everything works fine and the database gets updated, like this:
docker exec octopus_postgres_iaa /bin/bash -c 'psql -h localhost -U postgres -p 5433 -c "CREATE EXTENSION pg_stat_statements;"'
While trying to check what is wrong with the task, I tried the following:
- name: install pg_stat_statements extension in the postgres container
  shell: docker exec octopus_postgres_{{ group_id }} /bin/bash -c 'touch /1 && psql -h localhost -U postgres -p 5433 -c "CREATE EXTENSION pg_stat_statements;" && touch /2' # && service postgres restart')"
  async: 10
  poll: 0
I've noticed that the file /1 has indeed been created inside the container, but the file /2 hasn't...
What is wrong with the command?
Using docker exec at all isn't right here. In general you want to avoid it for tasks like this: if the container gets deleted and recreated, any local setup you've done with docker exec will be lost. When you want to make a change to some sort of server through its API, you usually just call the API rather than getting a root shell on the server host and working from there; that latter step is what docker exec amounts to.
The standard postgres image supports putting SQL fragments in a container-side /docker-entrypoint-initdb.d directory, which will get processed the very first time the container is started (only). A very typical use is to mount a host-system directory with initialization scripts on to that directory. In Ansible this might look like:
- name: create pg_stat_statements extension file
  copy:
    dest: /docker/postgres/initdb/create-stat-statements.sql
    content: |-
      CREATE EXTENSION pg_stat_statements;

- name: start postgres container
  docker_container:
    image: 'postgres:11'
    name: octopus_postgres_{{ group_id }}
    published_ports: ['5433:5432']
    volumes:
      - '/docker/postgres/initdb:/docker-entrypoint-initdb.d'
      - '/docker/postgres/data:/var/lib/postgresql/data'
Alternatively, you can manage the database like any other PostgreSQL database (local, cloud-hosted, remote, ...) using Ansible's built-in tools; in this case, the postgresql_ext module for creating extensions.
- name: enable pg_stat_statements PostgreSQL extension
  postgresql_ext:
    name: pg_stat_statements
    port: 5433
As for your original command, there are probably two things going on. First, if you do use the docker exec path to interact with a container, you always need to use the port the server thinks it's running on, not any remapped ports from the docker run -p option or equivalents: in your command you need to use the default port 5432 and not 5433. Secondly, since you run the task with async: 10, poll: 0, Ansible launches the task and immediately moves on to the next one without ever checking whether it succeeds (see Asynchronous Actions and Polling), so you don't actually know whether or not the docker exec task succeeded. My guess is that nothing is happening because psql is failing to connect to the database, but you never see this error.
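If you do want to keep the docker exec approach, here is a minimal sketch with both issues addressed; the container name and extension are taken from the question, and using the in-container port 5432 follows the advice above:
- name: install pg_stat_statements extension in the postgres container
  shell: docker exec octopus_postgres_{{ group_id }} /bin/bash -c 'psql -h localhost -U postgres -p 5432 -c "CREATE EXTENSION pg_stat_statements;"'
  register: ext_result
  # without async/poll the task blocks until psql returns, so a connection
  # or SQL error fails the play instead of being silently discarded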

Ansible keeps wanting to be root

I'm a beginner with Ansible, and I need to run some basic tasks on a remote server.
The procedure is as follows:
I log in as some user (osadmin)
I run su - to become root
I then do the tasks I need to.
So, I wrote my playbook as follows:
---
- hosts: qualif
  vars:
    - ansible_user: osadmin
    - ansible_password: H1g2.D6#
  tasks:
    - name: Copy stuff from here to over there
      copy:
        src: /home/osadmin/file.txt
        dest: /home/osadmin/file-changed.txt
        owner: osadmin
        group: osadmin
        mode: 0777
Also, I have the following in vars/main.yml:
ansible_user: osadmin
ansible_password: password1
ansible_become_password: password2
[ some other values ]
However, when running my tasks, Ansible / the host returns the following:
"Incorrect sudo password"
I then changed my tasks so that instead of becoming sudo and copying the file somewhere my osadmin user doesn't have access, I just copy the file into /home/osadmin. So, theoretically, there is no need to become sudo for a simple copy.
The problem now is that not only does it keep saying "wrong sudo password", but if I remove it, Ansible asks for it.
I then decided to run the command and added -vvv at the end, and it showed me the following:
ESTABLISH SSH CONNECTION FOR USER: osadmin
SSH: EXEC sshpass -d10 ssh -C -o ControlMaster=auto -o ControlPersist=60s -o User=osadmin -o ConnectTimeout=10 -o ControlPath=/home/osadmin/.ansible/cp/b9489e2193 -tt HOST-ADDRESS '/bin/sh -c '"'"'sudo -H -S -n -u
root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-ewujwywrqhcqfdrkaglvrouhmuiefwlj; /usr/bin/python /home/osadmin/.ansible/tmp/ansible-tmp-1550076004.1888492-11284794413477/AnsiballZ_setup.py'"'"'"'"'"'"'"'"' && sleep 0'"'"''
(1, b'sudo: a password is required\r\n', b'Shared connection to HOST-ADDRESS closed.\r\n')
As you can see, it somehow uses root, although I never told it to.
Does anyone know why Ansible keeps trying to be sudo, and how can I disable this?
Thank you in advance
There is a difference between 'su' and 'sudo'. If you have 'su' access, that usually means you can log in as root directly (maybe not, but it looks like it). Use ansible_ssh_user=root, ansible_password=password2.
If this doesn't work, try configuring sudo on the server. You should be able to run sudo whoami and get the answer root. After that your code should run.
One more thing: you are using the 'copy' module incorrectly. It uses src as a path on the local machine (where Ansible runs) and dest as a path on the remote machine.
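As an aside, a minimal sketch assuming the file really does live on the remote host (as the paths in the question suggest): the copy module can be told to read src from the managed host instead of the controller with remote_src.
- name: Copy stuff from here to over there
  copy:
    src: /home/osadmin/file.txt
    dest: /home/osadmin/file-changed.txt
    remote_src: yes        # src is on the remote host, not the Ansible controller
    owner: osadmin
    group: osadmin
    mode: 0777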

How to copy file to local directory using Ansible?

I'm trying to set up a playbook that will configure my development system. I'd like to copy the /etc/hosts file from my playbook's "files" directory to the /etc directory on my system. Currently I'm doing the following:
# main.yml
- hosts: all
- tasks:
  - copy: src=files/hosts
          dest=/etc/hosts
          owner=root
          group=wheel
          mode=0644
          backup=true
    become: true

# inventory
localhost ansible_connection=local
When I run the playbook I'm getting this error:
fatal: [localhost]: FAILED! => {... "msg": Failed to get information on remote file (/etc/hosts): MODULE FAILURE"}
I believe this is because copy is supposed to be used to copy a file to a remote file system. So how do you copy a file to your local management system? I did a Google Search and everything talks about doing the former. I didn't see this addressed in the Ansible docs.
Your task is ok.
You should add --ask-sudo-pass to the ansible-playbook call.
If you run with -vvv you can see the command starts with sudo -H -S -n -u root /bin/sh -c echo BECOME-SUCCESS-somerandomstring (followed by a call to the Python script). If you execute it yourself, you'll get a sudo: a password is required message. Ansible quite unhelpfully replaces this error message with its own Failed to get information on remote file (/etc/hosts): MODULE FAILURE.
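For example (a sketch; the inventory file name is illustrative, and on newer Ansible versions the flag is spelled --ask-become-pass):
ansible-playbook -i inventory main.yml --ask-sudo-pass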

Ansible Synchronize not able to create directory as root

I'm working on building an Ansible playbook and I'm using Vagrant as a test platform before I apply the playbook to a remote server.
I'm having issues getting Synchronize to work. I have some files that I need to move up to the server as part of the deployment.
Here's my playbook. I put the shell: whoami in there to make sure commands were running as root.
---
- hosts: all
  sudo: yes
  tasks:
    - name: who am I
      shell: whoami

    - name: Sync up www folder
      synchronize: src=www dest=/var
When I run this I get this:
failed: [default] => {"cmd": "rsync --delay-updates -FF --compress --archive --rsh 'ssh -i /Users/dan/.vagrant.d/insecure_private_key -o StrictHostKeyChecking=no -o Port=2222' --out-format='<<CHANGED>>%i %n%L' www vagrant@127.0.0.1:/var", "failed": true, "rc": 23}
msg: rsync: recv_generator: mkdir "/var/www" failed: Permission denied (13)
*** Skipping any contents from this failed directory ***
rsync error: some files/attrs were not transferred (see previous errors) (code 23) at main.c(1236) [sender=3.1.1]
FATAL: all hosts have already failed -- aborting
If I'm supplying sudo: yes shouldn't all commands be run as root, including Synchronize?
The Ansible Synchronize module page has some big hairy warnings:
The remote user for the dest path will always be the remote_user, not
the sudo_user.
There's a suggestion to wrap rsync with sudo like this:
# Synchronize using an alternate rsync command
synchronize: src=some/relative/path dest=/some/absolute/path rsync_path="sudo rsync"
There's also a suggestion to use verbosity to debug what's really going on. In this case, it means adding -vvv or even -vvvv to your ansible-playbook commandline execution.
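For example (the playbook and inventory names are illustrative):
ansible-playbook -i inventory playbook.yml -vvvv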
Finally, this is a great time to use proper permissions, especially for non-system files like a www dir. This will solve your problem in the process.
# don't use recurse here unless you are confident how it works with directories.
- file: dest=/var/www state=directory owner=www-data group=www-data mode=0755
- synchronize: src=www dest=/var

How to execute a shell script on a remote server using Ansible?

I am planning to execute a shell script on a remote server using an Ansible playbook.
blank test.sh file:
touch test.sh
Playbook:
---
- name: Transfer and execute a script.
  hosts: server
  user: test_user
  sudo: yes
  tasks:
    - name: Transfer the script
      copy: src=test.sh dest=/home/test_user mode=0777

    - name: Execute the script
      local_action: command sudo sh /home/test_user/test.sh
When I run the playbook, the transfer successfully occurs but the script is not executed.
You can use the script module.
Example
- name: Transfer and execute a script.
  hosts: all
  tasks:
    - name: Copy and Execute the script
      script: /home/user/userScript.sh
local_action runs the command on the local server, not on the servers you specify in the hosts parameter.
Change your "Execute the script" task to
- name: Execute the script
  command: sh /home/test_user/test.sh
and it should do it.
You don't need to repeat sudo in the command line because you have defined it already in the playbook.
According to Ansible Intro to Playbooks, the user parameter was renamed to remote_user in Ansible 1.4, so you should change it, too:
remote_user: test_user
So, the playbook will become:
---
- name: Transfer and execute a script.
  hosts: server
  remote_user: test_user
  sudo: yes
  tasks:
    - name: Transfer the script
      copy: src=test.sh dest=/home/test_user mode=0777

    - name: Execute the script
      command: sh /home/test_user/test.sh
It's better to use the script module for that:
http://docs.ansible.com/script_module.html
For someone who wants an ad-hoc command:
ansible group_or_hostname -m script -a "/home/user/userScript.sh"
or use a relative path:
ansible group_or_hostname -m script -a "userScript.sh"
Contrary to all the other answers and comments, there are some downsides to using the script module, especially when you are running it on a remote (not localhost) host. Here is a snippet from the official Ansible documentation:
It is usually preferable to write Ansible modules rather than pushing
scripts. Convert your script to an Ansible module for bonus points!
The ssh connection plugin will force pseudo-tty allocation via -tt
when scripts are executed. Pseudo-ttys do not have a stderr channel
and all stderr is sent to stdout. If you depend on separated stdout
and stderr result keys, please switch to a copy+command set of tasks
instead of using script.
If the path to the local script contains spaces, it needs to be
quoted.
This module is also supported for Windows targets.
For example, run this script using the script module for any host other than localhost and notice the stdout and stderr of the script.
#!/bin/bash
echo "Hello from the script"
nonoexistingcommand
echo "hello again"
You will get something like the below; notice that stdout has all the stderr merged in (ideally, line 6: nonoexistingcommand: command not found should be in stderr). So, if you are searching for some substring in the script's stdout, you may get incorrect results:
ok: [192.168.122.83] => {
    "script_out": {
        "changed": true,
        "failed": false,
        "rc": 0,
        "stderr": "Shared connection to 192.168.122.83 closed.\r\n",
        "stderr_lines": [
            "Shared connection to 192.168.122.83 closed."
        ],
        "stdout": "Hello from the script\r\n/home/ps/.ansible/tmp/ansible-tmp-1660578527.4335434-35162-230921807808160/my_script.sh: line 6: nonoexistingcommand: command not found\r\nhello again\r\n",
        "stdout_lines": [
            "Hello from the script",
            "/home/ps/.ansible/tmp/ansible-tmp-1660578527.4335434-35162-230921807808160/my_script.sh: line 6: nonoexistingcommand: command not found",
            "hello again"
        ]
    }
}
The documentation does not encourage users to use the script module; consider converting your script into an Ansible module. Here is a simple post by me that explains how to do that.
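For completeness, the copy+command combination that the documentation note above refers to could look roughly like this (the paths and file names are illustrative); because the command module does not force a pseudo-tty, stdout and stderr stay separate:
- name: Copy the script to the remote host
  copy:
    src: my_script.sh
    dest: /tmp/my_script.sh
    mode: '0755'

- name: Run the script on the remote host
  command: /tmp/my_script.sh
  register: script_out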
You can use the template module to copy a script that exists on the local machine to the remote machine and execute it.
- name: Copy script from local to remote machine
  hosts: remote_machine
  tasks:
    - name: Copy script to remote_machine
      template: src=script.sh.2 dest=<remote_machine path>/script.sh mode=755

    - name: Execute script on remote_machine
      command: sh <remote_machine path>/script.sh
Since nothing is defined about "the script" (meaning complexity, content, runtime, runtime environment, size, tasks to perform, etc. are unknown), it might be possible to use an unrecommended approach like in "How to copy content provided in command prompt with special chars in a file using Ansible?":
---
- hosts: test
  become: false
  gather_facts: false

  tasks:

    - name: Exec sh script on Remote Node
      shell:
        cmd: |
          date
          ps -ef | grep ssh
          echo "That's all folks"
      register: result

    - name: Show result
      debug:
        msg: "{{ result.stdout }}"
which is a multi-line shell command only (annot.: ... just inline code) and results in an output of
TASK [Show result] ****************************************************
ok: [test.example.com] =>
  msg: |-
    Sat Sep 3 21:00:00 CEST 2022
    root 709 1 0 Aug11 ? 00:00:00 /usr/sbin/sshd -D
    root 123456 709 14 21:00 ? 00:00:00 sshd: user [priv]
    user 123456 123456 1 21:00 ? 00:00:00 sshd: user@pts/0
    root 123456 123456 0 21:00 pts/0 00:00:00 grep ssh
    That's all folks
One could just add more lines, complexity, necessary output, etc.
Because of the note in script module – Runs a local script on a remote node after transferring it – Notes ("It is usually preferable to write Ansible modules rather than pushing scripts."), I also recommend getting familiar with writing your own module, as already mentioned in the answer of P....
You can execute local scripts with Ansible without having to transfer the file to the remote server, this way:
ansible my_remote_server -m shell -a "`cat /localpath/to/script.sh`"