Using Ansible, I log in as an LDAP user and then sudo/become the 'db2inst1' user to run some commands. Ad-hoc mode runs fine, but if I try the same thing in a playbook I get an error suggesting I need to add the user to the sudoers file.
This works fine (I had to edit some things out :) ):
ansible all -i ftest.lst -e ansible_user=user@a.com -e "ansible_ssh_pass={{pass}}" -e "ansible_become_pass={{test_user}}" -e "@pass.yml" --vpas vault -a ". /home/db2inst1/sqllib/db2profile;db2pd -hadr -alldbs;whoami" -m raw -b --become-user=db2inst1
But connecting to the same box with a playbook, I get issues.
ansible-playbook -i ftest.lst -e target=test -e ansible_user=user@a.com -e "ansible_ssh_pass={{pass}}" -e "ansible_become_pass={{test_user}}" -e "@pass.yml" --vpas vault test.yml
The test.yml has:
become: true
become_user: "{{ instance }}"
with instance=db2inst1 being passed in.
This spits out...
fatal: [hostname123.a]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "module_stderr": "Shared connection to hostname123.a closed.\r\n", "module_stdout": "\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
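For comparison, a minimal test.yml mirroring the ad-hoc call might look like the sketch below; the host pattern is an assumption, and only the become settings and the raw command come from the lines above:

- hosts: all
  become: true
  become_user: "{{ instance }}"
  tasks:
    - name: Mirror the ad-hoc raw call as the instance owner
      raw: ". /home/db2inst1/sqllib/db2profile; db2pd -hadr -alldbs; whoami"

If this raw version also succeeds in a playbook, the failure is specific to Python-based modules running as the become user (raw needs no Python on the target), which narrows the search considerably.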
Related
I'm finding that the Ansible synchronize module keeps failing with error 127. It blames Python, but other commands have no issue, and I've got the latest module from ansible-galaxy.
fatal: [HostA]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
In the playbook I have:
- ansible.posix.synchronize:
    archive: yes
    compress: yes
    delete: yes
    recursive: yes
    dest: "{{ libexec_path }}"
    src: "{{ libexec_path }}/"
    rsync_opts:
      - "--exclude=check_dhcp"
      - "--exclude=check_icmp"
ansible.cfg
[defaults]
timeout = 10
fact_caching_timeout = 30
host_key_checking = false
ansible_ssh_extra_args = -R 3128:127.0.0.1:3128
interpreter_python = auto_legacy_silent
forks = 50
I've tried removing the ansible_ssh_extra_args without success. I use that option with apt to tunnel back out to the internet, because the remote hosts have no internet access.
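As an aside, ansible_ssh_extra_args is the inventory-variable spelling; in ansible.cfg the equivalent setting normally lives in the [ssh_connection] section as ssh_extra_args. A sketch of that form, with the tunnel copied from the config above:

[ssh_connection]
ssh_extra_args = -R 3128:127.0.0.1:3128

An unrecognized key in [defaults] may simply be ignored, so this is worth ruling out.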
I can run the sync manually without an issue. Pre-Ansible I used to call rsync from a script using:
sudo rsync -e 'ssh -ax' -avz --timeout=20 --delete -i --progress --exclude-from '/opt/openitc/nagios/bin/exclude.txt' /opt/openitc/nagios/libexec/* root@" . $ip . ":/opt/openitc/nagios/libexec"
I'm synchronising from Ubuntu 20.04 to Ubuntu 14.04.
Can anyone see what I'm doing wrong, suggest a way to debug synchronize, or a way to call rsync manually?
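Since rc 127 literally means the remote shell could not find /usr/bin/python, and the error itself suggests setting the interpreter, one thing worth trying is pinning the interpreter for the affected hosts in the inventory. The host name below is a placeholder, and the exact interpreter path should be checked on the target first (ls /usr/bin/python*):

[legacy]
hostA ansible_python_interpreter=/usr/bin/python2.7

On Ubuntu 14.04 the system Python is 2.7, so auto_legacy_silent discovery may be settling on a path that does not actually exist on that host.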
The end goal is for me to copy file.txt from Host2 over to Host1. However, I keep getting the same error whenever I run the task. I have triple-checked my spacing and made sure I spelled everything correctly, but nothing seems to work.
Command to start the playbook:
ansible-playbook playbook_name.yml -i inventory/inventory_name -u username -k
My Code:
- hosts: Host1
  tasks:
    - name: Synchronization using rsync protocol on delegate host (pull)
      synchronize:
        mode: pull
        src: rsync://Host2.linux.us.com/tmp/file.txt
        dest: /tmp
      delegate_to: Host2.linux.us.com
Expected Result:
Successfully working
Actual Result:
fatal: [Host1.linux.us.com]: FAILED! => {"changed": false, "cmd": "sshpass", "msg": "[Errno 2] No such file or directory", "rc": 2}
I had the same problem as you. Installing sshpass on the target host makes it work normally:
yum install -y sshpass
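On Debian/Ubuntu targets the equivalent should be the stock sshpass package as well (assuming the default repositories):

apt-get install -y sshpass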
Here I'd like to discuss Ansible's behavior when the user at the managed nodes is given sudo privileges only for specific commands.
I have sudo privileges on the remote managed host [rm-host.company.com] for specific commands. Two of them are:
/bin/mkdir /opt/somedir/unit*
/bin/chmod 2775 /opt/somedir/unit*
PS: /opt/somedir already exists at the remote nodes.
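For context, the matching sudoers entries presumably look something like the sketch below; the user name and tag spelling are assumptions, and only the two command paths come from the list above:

user1 ALL=(root) NOPASSWD: /bin/mkdir /opt/somedir/unit*, /bin/chmod 2775 /opt/somedir/unit*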
My ansible control machine version:
ansible 2.7.10
python version = 2.7.5 (default, Mar 26 2019, 22:13:06) [GCC 4.8.5 20150623 (Red Hat 4.8.5-36)]
The YAML code fails when I use the Ansible "file" module, even though I have sudo privileges for chmod and mkdir as listed above.
- name: 7|Ensure Directory - "/opt/somedir/{{ ENV_CHOSEN }}" Permissions are 2775
  become: yes
  become_method: sudo
  file: path="/opt/somedir/{{ ENV_CHOSEN }}" state=directory mode=2775
  when:
    - ansible_facts['os_family'] == "CentOS" or ansible_facts['os_family'] == "RedHat"
    - ansible_distribution_version | int >= 6
    - http_dir_path.stat.exists == true
    - http_dir_path.stat.isdir == true
    - CreateWebAgentEnvDir is defined
    - CreateWebAgentEnvDir is succeeded
  register: ChangeDirPermission

- debug:
    var: ChangeDirPermission
Runtime error:
TASK [7|Ensure Directory - "/opt/somedir/unitc" Permissions are 2775] **************************************************************************************************************************************************************************************
fatal: [rm-host.company.com]: FAILED! => {"changed": false, "module_stderr": "FIPS mode initialized\r\nShared connection to rm-host.company.com closed.\r\n", "module_stdout": "sudo: a password is required\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
to retry, use: --limit @/u/joker/scripts/Ansible/playbooks/agent/plays/agent_Install.retry
PLAY RECAP ***************************************************************************************************************************************************************************************************************************************************
rm-host.company.com : ok=9 changed=2 unreachable=0 failed=1
But it succeeds when I use the command module, like so:
- name: 7|Ensure Directory - "/opt/somedir/{{ ENV_CHOSEN }}" Permissions are 2775
  command: sudo /bin/chmod 2775 "/opt/somedir/{{ ENV_CHOSEN }}"
  when:
    - ansible_facts['os_family'] == "CentOS" or ansible_facts['os_family'] == "RedHat"
    - ansible_distribution_version | int >= 6
    - http_dir_path.stat.exists == true
    - http_dir_path.stat.isdir == true
    - CreateagentEnvDir is defined
    - CreateagentEnvDir is succeeded
  register: ChangeDirPermission

- debug:
    var: ChangeDirPermission
Successful runtime debug output captured:
TASK [7|Ensure Directory - "/opt/somedir/unitc" Permissions are 2775] **************************************************************************************************************************************************************************************
[WARNING]: Consider using 'become', 'become_method', and 'become_user' rather than running sudo
changed: [rm-host.company.com]
TASK [debug] *************************************************************************************************************************************************************************************************************************************************
ok: [rm-host.company.com] => {
    "ChangeDirPermission": {
        "changed": true,
        "cmd": [
            "sudo",
            "/bin/chmod",
            "2775",
            "/opt/somedir/unitc"
        ],
        "delta": "0:00:00.301570",
        "end": "2019-06-22 13:20:17.300266",
        "failed": false,
        "rc": 0,
        "start": "2019-06-22 13:20:16.998696",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "",
        "stdout_lines": [],
        "warnings": [
            "Consider using 'become', 'become_method', and 'become_user' rather than running sudo"
        ]
    }
}
Question:
How can I make this work without using the command module? I want to stick to Ansible core modules, using 'become' and 'become_method' rather than running sudo inside command.
Note:
It works when sudo is enabled for ALL commands, but [ user ALL=(ALL) NOPASSWD: ALL ] cannot be granted on the remote host; company policy does not allow it for the group I am in.
The short answer is: you can't. The way Ansible works is by executing Python scripts on the remote host (except for the raw, command and shell modules). See the docs.
The file module executes such a script with a long line of parameters. But Ansible will first become the required user, in this case root, by running sudo -H -S -n -u root /bin/sh in the remote SSH session (bear in mind that this command might be slightly different in your case).
Once the remotely logged-in user has become root, Ansible uploads and executes the file.py script.
It looks like in your case you'll need to fall back to raw, command or shell for the privileged commands.
To understand this a bit better, and to see the detail and order of the commands being executed, run ansible-playbook with the parameter -vvvv.
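This also explains why a narrow sudoers whitelist cannot help here: to let become work, the shell itself would have to be whitelisted, which is equivalent to ALL. A sketch of what the escalation quoted above effectively requires (the user name is a placeholder):

# Effectively what Ansible's become needs sudo to permit; allowing /bin/sh
# grants arbitrary commands, so policy-wise it is no better than NOPASSWD: ALL.
user1 ALL=(root) NOPASSWD: /bin/sh

That is exactly the kind of entry the company policy in the question rules out, hence "you can't".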
I solved this issue by removing become_method and become_user from my playbook.
First, I specified the user in the inventory file using ansible_user=your_user. Then I removed become_method and become_user from my playbook, leaving just become: yes.
For more details, see this other answer.
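A minimal sketch of that arrangement; the host, group, and task are placeholders, and only the ansible_user and become settings come from the description above:

# inventory
[dbservers]
host1.example.com ansible_user=your_user

# playbook
- hosts: dbservers
  become: yes
  tasks:
    - name: Example privileged task
      file:
        path: /opt/somedir/unitc
        state: directory
        mode: "2775"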
My ansible command fails with the error below:
$ ansible all -i /tmp/myhosts -m shell -a 'uname -a' -u user1 -k
SSH password:
host1.mycomp.com | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": " File \"/home/user1/.ansible/tmp/ansible-tmp-1556597037.27-168849792658772/AnsiballZ_command.py\", line 39\r\n with open(module, 'wb') as f:\r\n ^\r\nSyntaxError: invalid syntax\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}
I do not know how to see stdout/stderr from a single ad-hoc command line. Any suggestion on how to view stdout/stderr would be great.
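One way to get more detail is to re-run the ad-hoc command with increased verbosity; -vvv (or -vvvv) prints the module invocation and the raw output:

ansible all -i /tmp/myhosts -m shell -a 'uname -a' -u user1 -k -vvv

That said, the traceback above is already the error: the AnsiballZ wrapper is plain Python, and a SyntaxError on a "with open(...)" line usually means the remote /usr/bin/python is very old (pre-2.6, e.g. the 2.4 shipped with older RHEL), so pointing ansible_python_interpreter at a newer interpreter on the target is worth a try.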
$ more /tmp/myhosts
[mybox]
host1.mycomp.com
[dev]
host1.mycomp.com
[allserver:children]
mybox
dev
[allserver:vars]
variable=somestring
/tmp/myhosts has good permissions.
I'm also able to ssh to the target server using that user ID:
$ ssh user1@host1.mycomp.com
Password:
Last login: Tue Apr 30 00:03:57 2019 from 10.123.235.126
$ hostname
host1
ansible --version
2.7.1
Can you please help me overcome the error?
I'm running Ansible 2.3.1.0 on my local machine (macOS) and trying to achieve:
connecting to user1@host1
copying a file from user2@host2:/path/to/file to user1@host1:/tmp/path/to/file
I'm on my local machine, with host1 as hosts and user1 as remote_user:
- synchronize: mode=pull src=user2@host2:/path/to/file dest=/tmp/path/to/file
Wrong output:
/usr/bin/rsync (...) user1@host1:user2@host2:/path/to/file /tmp/path/to/file
Conclusion
I've been trying different options. I've debugged ansible. I can't understand what's wrong.
Help!
Edit 1
I've also tried adding delegate_to:
- synchronize: mode=pull src=/path/to/file dest=/tmp/path/to/file
  delegate_to: host2
It gives:
fatal: [host1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password,keyboard-interactive).\r\n", "unreachable": true}
And also:
- synchronize: mode=pull src=/path/to/file dest=/tmp/path/to/file
  delegate_to: user2@host2
Which gives:
fatal: [host1 -> host2]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh=/usr/bin/ssh -S none -o StrictHostKeyChecking=no --rsync-path=sudo rsync --out-format=<<CHANGED>>%i %n%L host1:/path/to/file /tmp/path/to/file", "failed": true, "msg": "Permission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [Receiver]\nrsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]\n", "rc": 255}
NB: ssh user1@host1 and then ssh user2@host2 work with SSH keys (no password required).
Please pay attention to these notes from the module's docs:
For the synchronize module, the “local host” is the host the synchronize task originates on, and the “destination host” is the host synchronize is connecting to.
The “local host” can be changed to a different host by using delegate_to. This enables copying between two remote hosts or entirely on one remote machine.
I guess you may want to try the following (assuming Ansible can connect to host2):
- synchronize:
    src: /path/to/file
    dest: /tmp/path/to/file
  delegate_to: host2
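Note the direction this implies: with delegate_to, host2 becomes the "local host", and since mode defaults to push, host2 pushes the file to host1 over SSH. So host2 must be able to open an SSH connection to host1, which is the reverse of the user1@host1 -> user2@host2 path that is known to work, and is worth verifying first.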