I set up an FTP server using Ansible and verified it is working by manually running lftp from my localhost.
I uploaded a file using put and was able to find the file on the FTP server when I ran ls.
I wrote a play to upload a file to the FTP server, and it always returned success. However, I can't find the file that I uploaded with my play. I then edited my play to display the registered result and found a warning. See below:
---
- name: test ftp upload
  hosts: localhost
  tasks:
    - name: install lftp
      yum:
        name: lftp
    - name: upload file
      shell: >
        lftp ansible1.example.com<<EOF
        cd pub
        put /etc/hosts
        bye
        EOF
      register: result
    - name: display result
      debug:
        var: result
Below is the output I got after running the play
PLAY [test ftp upload] ****************************************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************************************************
ok: [localhost]
TASK [install lftp] *******************************************************************************************************************************************************************
ok: [localhost]
TASK [upload file] ********************************************************************************************************************************************************************
changed: [localhost]
TASK [display result] *****************************************************************************************************************************************************************
ok: [localhost] => {
    "result": {
        "changed": true,
        "cmd": "lftp ansible1.example.com<<EOF cd pub put /etc/hosts bye EOF\n",
        "delta": "0:00:00.010150",
        "end": "2020-04-04 09:15:26.305530",
        "failed": false,
        "rc": 0,
        "start": "2020-04-04 09:15:26.295380",
        "stderr": "/bin/sh: warning: here-document at line 0 delimited by end-of-file (wanted `EOF')",
        "stderr_lines": [
            "/bin/sh: warning: here-document at line 0 delimited by end-of-file (wanted `EOF')"
        ],
        "stdout": "",
        "stdout_lines": []
    }
}
PLAY RECAP ****************************************************************************************************************************************************************************
localhost : ok=4 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
I am not able to understand the warning with respect to the EOF part, or whether it is the reason the file upload is not happening, because I use the shell script below (which contains basically the same steps as the play) to upload the file and it works perfectly.
lftp ansible1.example.com<<EOF
cd pub
put /etc/hosts
bye
EOF
You have used the wrong scalar block style indicator (see the YAML documentation) for your shell command.
You need to keep the newlines in your piece of shell code. In this case, you have to use the literal scalar block indicator: |
You have used the folded scalar block indicator (>), which basically converts newlines to spaces (although it is a little more complicated than that, as explained in other references). So your current piece of shell code expands to a single line in your shell (you can debug the value to see for yourself):
lftp ansible1.example.com<<EOF cd pub put /etc/hosts bye EOF
The following should fix your current problem:
- name: upload file
  shell: |
    lftp ansible1.example.com<<EOF
    cd pub
    put /etc/hosts
    bye
    EOF
  register: result
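You can see the difference between the two block styles for yourself with a quick debug task; this is just a sketch with throwaway variable names:

```yaml
- name: compare folded (>) and literal (|) block scalars
  debug:
    msg: "folded={{ folded | to_json }} literal={{ literal | to_json }}"
  vars:
    # folded style: newlines become spaces
    folded: >
      line one
      line two
    # literal style: newlines are preserved
    literal: |
      line one
      line two
```

The folded value renders as "line one line two\n", while the literal one keeps its newlines as "line one\nline two\n".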
Related
I am new to Ansible and I cannot solve an error: I use ansible.builtin.shell to call the pcs utility (Pacemaker). pcs is installed on the remote machine, and I can use it when I ssh into that machine, but Ansible reports a 'command not found' error with return code 127.
Here is my inventory.yml:
---
all:
  children:
    centos7:
      hosts:
        UVMEL7:
          ansible_host: UVMEL7
Here is my play-book, TestPcs.yaml:
---
- name: Test the execution of pcs command
  hosts: UVMEL7
  tasks:
    - name: Call echo
      ansible.builtin.shell: echo
    - name: pcs
      ansible.builtin.shell: pcs
Note: I also used the echo command to verify that I am correctly using ansible.builtin.shell.
I launch my play-book with: ansible-playbook -i inventory.yml TestPcs.yaml --user=traite
And I get this result:
PLAY [Test the execution of pcs command] *****************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ***********************************************************************************************************************************************************************************************************************
ok: [UVMEL7]
TASK [Call echo] *****************************************************************************************************************************************************************************************************************************
changed: [UVMEL7]
TASK [pcs] ***********************************************************************************************************************************************************************************************************************************
fatal: [UVMEL7]: FAILED! => {"changed": true, "cmd": "pcs", "delta": "0:00:00.003490", "end": "2022-03-10 15:02:17.418475", "msg": "non-zero return code", "rc": 127, "start": "2022-03-10 15:02:17.414985", "stderr": "/bin/sh: pcs : commande introuvable", "stderr_lines": ["/bin/sh: pcs : commande introuvable"], "stdout": "", "stdout_lines": []}
PLAY RECAP ***********************************************************************************************************************************************************************************************************************************
UVMEL7 : ok=2 changed=1 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
The pcs command is failing, and stderr shows a 'command not found' error.
On the other hand, when I ssh into the machine and run the pcs command, the command is executed and returns 1, which is different from 127. It is normal that pcs returns an error: I simplified the test case to the strict minimum to keep my question short.
I expect Ansible to have the same behavior: Error on pcs with return code 1.
Here is what I did to simulate what Ansible does (based on remarks by @Zeitounator): ssh <user>@<machine> '/bin/bash -c "echo $PATH"'
I get my default PATH, as explained in the bash manual page. On my system, sh links to bash.
I see that /etc/profile does the PATH manipulation that I need. However, it seems that because of the -c option, bash is not started as a login shell and therefore /etc/profile is not sourced.
I end up doing the job manually:
---
- name: Test the execution of pcs command
  hosts: UVMEL7
  tasks:
    - name: Call echo
      ansible.builtin.shell: echo
    - name: pcs
      ansible.builtin.shell: source /etc/profile && pcs
Which executes pcs as expected.
To sum up, my executable was not run because the folder holding it was not listed in my PATH environment variable. This was because /bin/sh (aka /bin/bash) was invoked with the -c flag, which prevents it from sourcing /etc/profile and other configuration files. The issue was 'solved' by manually sourcing the configuration file that correctly sets the PATH environment variable.
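An alternative to sourcing /etc/profile is to extend PATH just for that task via the environment keyword. This is only a sketch: the /usr/sbin location for pcs is an assumption (check with `which pcs` on the remote host), and the ansible_env fact requires fact gathering to be enabled:

```yaml
- name: pcs with an explicit PATH
  ansible.builtin.shell: pcs
  environment:
    # prepend the sbin directories; /usr/sbin for pcs is an assumption
    PATH: "/usr/sbin:/sbin:{{ ansible_env.PATH }}"
```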
In my Ansible playbook I have a task that executes a shell command. One of the parameters of that command is a password. When the shell command fails, Ansible prints the whole JSON object, which includes the command with the password. If I use no_log: true, I get censored output and cannot get stderr_lines. Is there a way to customize the output when the shell command execution fails?
You can take advantage of Ansible blocks and their error-handling feature.
Here is an example playbook:
---
- name: Block demo for shell
  hosts: localhost
  gather_facts: false
  tasks:
    - block:
        - name: my command
          shell: my_command is bad
          register: cmdresult
          no_log: true
      rescue:
        - name: show error
          debug:
            msg: "{{ cmdresult.stderr }}"
        - name: fail the playbook
          fail:
            msg: Error on command. See debug of stderr above
which gives the following result:
PLAY [Block demo for shell] *********************************************************************************************************************************************************************************************************************************************
TASK [my command] *******************************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"censored": "the output has been hidden due to the fact that 'no_log: true' was specified for this result", "changed": true}
TASK [show error] *******************************************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "/bin/sh: 1: my_command: not found"
}
TASK [fail the playbook] ************************************************************************************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "Error on command. See debug of stderr above"}
PLAY RECAP **************************************************************************************************************************************************************************************************************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1 skipped=0 rescued=1 ignored=0
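If you want the failure message to carry more than stderr, the rescue tasks can reuse any field of the registered result; for example (a sketch reusing the cmdresult register from the play above):

```yaml
rescue:
  - name: fail with stderr and return code only
    fail:
      # exposes only the fields you choose, never the full command line
      msg: "Command failed (rc={{ cmdresult.rc }}): {{ cmdresult.stderr }}"
```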
You can utilize something like this:
- name: Running it
  hosts: localhost
  tasks:
    - name: failing the task
      shell: sh a.sh > output.txt
      ignore_errors: true
      register: abc
    - name: now failing
      command: rm output.txt
      when: abc is succeeded
stdout will be written to a file. If the task fails, you can inspect the file and debug it; if it succeeds, the file is deleted.
I want to encrypt my Ansible inventory file using Ansible Vault, as it contains IPs, passwords, key file paths, etc., which I do not want to keep in a readable format.
This is what I have tried.
My folder structure looks like below:
env/
  hosts
  hosts_details
plays/
  test.yml
files/
  vault_pass.txt
env/hosts
[server-a]
server-a-name
[server-b]
server-b-name
[webserver:children]
server-a
server-b
env/hosts_details (file which I want to encrypt)
[server-a:vars]
env_name=server-a
ansible_ssh_user=root
ansible_ssh_host=10.0.0.1
ansible_ssh_private_key_file=~/.ssh/xyz-key.pem
[server-b:vars]
env_name=server-b
ansible_ssh_user=root
ansible_ssh_host=10.0.0.2
ansible_ssh_private_key_file=~/.ssh/xyz-key.pem
test.yml
---
- hosts: webserver
  tasks:
    - name: Print Hello world
      debug:
        msg: "Hello World"
Execution without encryption runs successfully without any errors
ansible-playbook -i env/ test.yml
When I encrypt my env/hosts_details file with the vault password file files/vault_pass.txt and then execute the playbook, I get the error below:
ansible-playbook -i env/ test.yml --vault-password-file files/vault_pass.txt
PLAY [webserver] ******************************************************************
TASK [setup] *******************************************************************
Thursday 10 August 2017 11:21:01 +0100 (0:00:00.053) 0:00:00.053 *******
fatal: [server-a-name]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname server-a-name: Name or service not known\r\n", "unreachable": true}
fatal: [server-b-name]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname server-b-name: Name or service not known\r\n", "unreachable": true}
PLAY RECAP *********************************************************************
server-a-name : ok=0 changed=0 unreachable=1 failed=0
server-b-name : ok=0 changed=0 unreachable=1 failed=0
I want to know if I am missing anything, or whether it is possible to encrypt an inventory file at all.
Is there any alternative way to achieve this?
As far as I know, you can't encrypt inventory files.
You should use group_vars files instead.
Place your variables into ./env/group_vars/server-a.yml and server-b.yml in YAML format:
env_name: server-a
ansible_ssh_user: root
ansible_ssh_host: 10.0.0.1
ansible_ssh_private_key_file: ~/.ssh/xyz-key.pem
And encrypt server-a.yml and server-b.yml.
This way your inventory (hosts file) will be in plain text, but all inventory (host and group) variables will be encrypted.
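The encryption step itself uses ansible-vault on the group_vars files, reusing the vault password file from the question; a sketch:

```shell
# encrypt the per-group variable files (not the hosts file itself)
ansible-vault encrypt env/group_vars/server-a.yml env/group_vars/server-b.yml \
  --vault-password-file files/vault_pass.txt
# run the playbook with the same vault password file
ansible-playbook -i env/ test.yml --vault-password-file files/vault_pass.txt
```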
I'm writing an Ansible script to set up a ZooKeeper cluster. After extracting the ZooKeeper tarball:
unarchive: src={{tmp_dir}}/zookeeper.tar.gz dest=/opt/zookeeper copy=no
I get a zookeeper directory that contains version number:
[root@sc-rdops-vm09-dhcp-2-154 conf]# ls /opt/zookeeper
zookeeper-3.4.8
To proceed, I have to know the name of the extracted sub-directory. For example, I need to copy a configuration file to /opt/zookeeper/zookeeper-3.4.8/conf.
How can I get zookeeper-3.4.8 with an Ansible statement?
Try with this:
➜ ~ cat become.yml
---
- hosts: localhost
  user: vagrant
  tasks:
    - shell: ls /opt/zookeeper
      register: path
    - debug: var=path.stdout
➜ ~ ansible-playbook -i hosts-slaves become.yml
[WARNING]: provided hosts list is empty, only localhost is available
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [command] *****************************************************************
changed: [localhost]
TASK [debug] *******************************************************************
ok: [localhost] => {
    "path.stdout": "zookeeper-3.4.8"
}
PLAY RECAP *********************************************************************
localhost : ok=3 changed=1 unreachable=0 failed=0
Then you could use {{ path.stdout }} as the name of the path in the next tasks.
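A shell-free alternative is the find module. In this sketch, zoo.cfg is a hypothetical config file name and zk_dirs is just my register name:

```yaml
- name: locate the extracted zookeeper directory
  find:
    paths: /opt/zookeeper
    file_type: directory
  register: zk_dirs

- name: copy a config file into its conf directory
  copy:
    # zoo.cfg is a placeholder for whatever config you need to deploy
    src: zoo.cfg
    dest: "{{ zk_dirs.files[0].path }}/conf/"
```

This assumes the archive extracts exactly one top-level directory, hence the files[0] index.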
Alternatively, before you proceed with the next steps, you could rename this directory:
- name: Rename directory
  shell: mv `ls -d -1 zookeeper*` zookeeper
The shell command substitution above (inside backticks) should work.
When using Ansible through ansible-pull, the template module's validate step fails to find the file specified by %s.
The "No such file" error reported by ansible-pull:
TASK [ssh : place ssh config file] *********************************************
fatal: [localhost]: FAILED! => {
    "changed": true,
    "cmd": "sshd -t -f /home/nhooey/.ansible/tmp/ansible-tmp-1453485441.94-279324955743/source",
    "failed": true,
    "msg": "[Errno 2] No such file or directory",
    "rc": 2
}
Problematic template directive:
- name: place ssh config file
  template:
    src: etc/ssh/sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644
    validate: 'sshd -t -f %s'
This happens with either Ansible 1.9.4 or 2.0.0.2 (from Python 2.7's pip), and only when using ansible-pull. Running ansible-playbook locally after manually checking out the repository works fine.
Unfortunately I'm not able to reproduce this error in the trivial case with a sample repository that I created.
Failure to reproduce error with simple test case:
$ virtualenv .venv && source .venv/bin/activate && pip install ansible
$ ansible-pull --directory /tmp/ansible-test \
--url https://github.com/nhooey/ansible-pull-template-validate-bug.git playbook.yaml
Starting Ansible Pull at 2016-01-22 13:14:51
/Users/nhooey/git/github/nhooey/ansible-pull-template-validate-bug/.venv/bin/ansible-pull \
--directory /tmp/ansible-test \
--url https://github.com/nhooey/ansible-pull-template-validate-bug.git \
playbook.yaml
localhost | SUCCESS => {
    "after": "07bb13992910e0efe30d071d5f5a2a2e88251045",
    "before": "07bb13992910e0efe30d071d5f5a2a2e88251045",
    "changed": false
}
[WARNING]: provided hosts list is empty, only localhost is available
PLAY ***************************************************************************
TASK [setup] *******************************************************************
ok: [localhost]
TASK [Delete template file] ****************************************************
changed: [localhost]
TASK [Test template validation] ************************************************
changed: [localhost]
PLAY RECAP *********************************************************************
localhost : ok=3 changed=2 unreachable=0 failed=0
Playbook that won't fail in the simple test case above:
---
- hosts: localhost
  tasks:
    - name: Delete template file
      file:
        path: /tmp/file
        state: absent
    - name: Test template validation
      template:
        src: file
        dest: /tmp/file
        validate: 'ls %s'
It seems like there could be a problem with the timing around when ansible-pull creates and deletes temporary files in Ansible.
What's happening?
It's saying that it can't find sshd because sshd is not in the PATH. Silly me...
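Passing an absolute path to validate avoids the PATH lookup entirely. A sketch of the corrected task; /usr/sbin/sshd is the usual location, but verify with `which sshd` on the target:

```yaml
- name: place ssh config file
  template:
    src: etc/ssh/sshd_config
    dest: /etc/ssh/sshd_config
    owner: root
    group: root
    mode: 0644
    # absolute path, so validation works regardless of the shell's PATH
    validate: '/usr/sbin/sshd -t -f %s'
```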