I'm new to Ansible and trying to run a local script on a remote node using the script module. My task is defined as follows:
- name: Initial setup
  script: ../../../initial_setup.sh
  become: yes
When I run the playbook I get the error below, but I'm not clear on what the actual problem is. Does this indicate a problem connecting to the node, or a problem transferring the script?
fatal: [default]: FAILED! => {
    "changed": true,
    "failed": true,
    "invocation": {
        "module_args": {
            "_raw_params": "../../../initial_setup.sh"
        },
        "module_name": "script"
    },
    "rc": 127,
    "stderr": "Control socket connect(/tmp): Permission denied\r\nControlSocket /tmp already exists, disabling multiplexing\r\nConnection to 127.0.0.1 closed.\r\n",
    "stdout": "/bin/sh: 1: /home/ubuntu/.ansible/tmp/ansible-tmp-1482161914.64-107588947758469/initial_setup.sh: not found\r\n",
    "stdout_lines": [
        "/bin/sh: 1: /home/ubuntu/.ansible/tmp/ansible-tmp-1482161914.64-107588947758469/initial_setup.sh: not found"
    ]
}
tl;dr
Ensure -o ControlMaster=auto is included in ssh_args in your ansible.cfg:
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
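The same value can also be supplied for a single run through the ANSIBLE_SSH_ARGS environment variable (the playbook name below is just a placeholder):
ANSIBLE_SSH_ARGS="-o ControlMaster=auto -o ControlPersist=60s" ansible-playbook site.yml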
The following error is related to SSH connection multiplexing:
Control socket connect(/tmp): Permission denied
ControlSocket /tmp already exists, disabling multiplexing
Connection to 127.0.0.1 closed
SSH tried to create the control socket directly at /tmp, not at a path inside /tmp; some other SSH parameter defined elsewhere could be playing a role here.
Setting ControlMaster to auto causes SSH to create a new master connection if the existing one does not exist (or has problems, as seems to be the case here).
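If the socket path itself is the problem, the control socket can also be pointed at a private directory from the same section of ansible.cfg (a sketch; the directory is illustrative and must exist on the control machine):
[ssh_connection]
ssh_args = -o ControlMaster=auto -o ControlPersist=60s
control_path = ~/.ansible/cp/%%h-%%p-%%r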
Related
Using Ansible I am logging in as an LDAP user and then sudo/becoming the 'db2inst1' user to run some commands. Ad-hoc mode runs fine, but if I try the same thing with a playbook I get an error that suggests I need to add the user to the sudoers file.
This works fine (I had to edit some things out :) ):
ansible all -i ftest.lst -e ansible_user=user@a.com -e "ansible_ssh_pass={{pass}}" -e "ansible_become_pass={{test_user}}" -e "@pass.yml" --vpas vault -a ". /home/db2inst1/sqllib/db2profile;db2pd -hadr -alldbs;whoami" -m raw -b --become-user=db2inst1
But connecting to the same box with a playbook, I get issues:
ansible-playbook -i ftest.lst -e target=test -e ansible_user=user@a.com -e "ansible_ssh_pass={{pass}}" -e "ansible_become_pass={{test_user}}" -e "@pass.yml" --vpas vault test.yml
The test.yml has
become: true
become_user: "{{ instance }}"
with instance = db2inst1 being passed in.
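As a rough sketch, the play described above would look something like this (the task is illustrative, borrowed from the ad-hoc command; only the become settings and the target/instance variables are from the question):
- hosts: "{{ target }}"
  become: true
  become_user: "{{ instance }}"
  tasks:
    - name: run a command as the instance user
      shell: ". /home/db2inst1/sqllib/db2profile; db2pd -hadr -alldbs; whoami"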
Running the playbook spits out...
fatal: [hostname123.a]: FAILED! => {"ansible_facts": {"discovered_interpreter_python": "/usr/bin/python"}, "changed": false, "module_stderr": "Shared connection to hostname123.a closed.\r\n", "module_stdout": "\r\n", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
I'm finding the Ansible synchronize module keeps failing with error 127. It blames Python, but other commands have no issue, and I've got the latest module from ansible-galaxy.
fatal: [HostA]: FAILED! => {"changed": false, "module_stderr": "/bin/sh: 1: /usr/bin/python: not found\n", "module_stdout": "", "msg": "The module failed to execute correctly, you probably need to set the interpreter.\nSee stdout/stderr for the exact error", "rc": 127}
In the playbook I have
- ansible.posix.synchronize:
    archive: yes
    compress: yes
    delete: yes
    recursive: yes
    dest: "{{ libexec_path }}"
    src: "{{ libexec_path }}/"
    rsync_opts:
      - "--exclude=check_dhcp"
      - "--exclude=check_icmp"
ansible.cfg
[defaults]
timeout = 10
fact_caching_timeout = 30
host_key_checking = false
ansible_ssh_extra_args = -R 3128:127.0.0.1:3128
interpreter_python = auto_legacy_silent
forks = 50
I've tried removing the ansible_ssh_extra_args without success. I use it when running apt to tunnel back out to the internet, because the remote hosts have no internet access.
I can run the sync manually without an issue; before Ansible I used to call rsync with:
sudo rsync -e 'ssh -ax' -avz --timeout=20 --delete -i --progress --exclude-from '/opt/openitc/nagios/bin/exclude.txt' /opt/openitc/nagios/libexec/* root@" . $ip . ":/opt/openitc/nagios/libexec"
I'm synchronising from Ubuntu 20.04 to Ubuntu 14.04
Can anyone see what I'm doing wrong, or suggest a way to debug synchronize or a way to call rsync manually?
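For what it's worth, the failure message itself suggests setting the interpreter. A hedged sketch of one way to do that is to pin ansible_python_interpreter in the inventory (the path is an assumption about what exists on that box, and with synchronize the module may actually run on the delegated/local end rather than the target, so the right host to pin may differ):
HostA ansible_python_interpreter=/usr/bin/python3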
I have configured a playbook to install, configure, and start the osquery tool. The playbook finishes with an error: "osqueryd is not running. no pidfile found."
Full error
TASK [osquery-client : check agent status again] ********************************************************************************************************************
fatal: [13.57.34.71]: FAILED! => {"changed": true, "cmd": ["/usr/bin/osqueryctl", "status"], "delta": "0:00:00.021902", "end": "2019-10-16 19:19:50.523876", "msg": "non-zero return code", "rc": 7, "start": "2019-10-16 19:19:50.501974", "stderr": "", "stderr_lines": [], "stdout": "osqueryd is not running. no pidfile found.", "stdout_lines": ["osqueryd is not running. no pidfile found."]}
My tasks/main.yml is defined as:
- name: check agent status again
  command: /usr/bin/osqueryctl status
  ignore_errors: yes
And the pid file is located here:
--pidfile=/var/run/osqueryd.pidfile
Is ansible looking in the wrong place for the pid?
Adding this sleep command into the task seemed to work
- name: check agent status again
  command: /usr/bin/osqueryctl status
  command: sleep 5
  ignore_errors: yes
Ansible does not look for the pid file. It executes /usr/bin/osqueryctl status. What does it return if you execute it by hand?
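If the real issue is simply that osqueryd needs a moment to write its pidfile after being started, a retry loop is a more idiomatic alternative to a fixed sleep (a sketch; the retry count and delay are arbitrary):
- name: check agent status again
  command: /usr/bin/osqueryctl status
  register: osquery_status
  retries: 5
  delay: 3
  until: osquery_status.rc == 0
  ignore_errors: yes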
My ansible command fails with the error below:
$ ansible all -i /tmp/myhosts -m shell -a 'uname -a' -u user1 -k
SSH password:
host1.mycomp.com | FAILED! => {
    "changed": false,
    "module_stderr": "",
    "module_stdout": " File \"/home/user1/.ansible/tmp/ansible-tmp-1556597037.27-168849792658772/AnsiballZ_command.py\", line 39\r\n with open(module, 'wb') as f:\r\n ^\r\nSyntaxError: invalid syntax\r\n",
    "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error",
    "rc": 1
}
I do not know how to see stdout/stderr from a single ad-hoc command line. Any suggestion on how to view stdout/stderr would be great.
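One way to surface more of that stdout/stderr from the same ad-hoc run is to raise verbosity with -vvv (the rest of the command below is unchanged from above):
$ ansible all -i /tmp/myhosts -m shell -a 'uname -a' -u user1 -k -vvv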
$ more /tmp/myhosts
[mybox]
host1.mycomp.com
[dev]
host1.mycomp.com
[allserver:children]
mybox
dev
[allserver:vars]
variable=somestring
/tmp/myhosts has good permissions.
I'm also able to ssh to the target server using the same user ID:
$ ssh user1@host1.mycomp.com
Password:
Last login: Tue Apr 30 00:03:57 2019 from 10.123.235.126
$ hostname
host1
ansible --version
2.7.1
Can you please help me overcome the error?
I'm running Ansible 2.3.1.0 on my local machine (macOS) and trying to achieve the following:
connecting to user1@host1
copying a file from user2@host2:/path/to/file to user1@host1:/tmp/path/to/file
I'm running from my local machine, with host1 as hosts and user1 as remote_user:
- synchronize: mode=pull src=user2@host2:/path/to/file dest=/tmp/path/to/file
Wrong output:
/usr/bin/rsync (...) user1@host1:user2@host2:/path/to/file /tmp/path/to/file
Conclusion
I've been trying different options. I've debugged ansible. I can't understand what's wrong.
Help!
Edit 1
I've also tried adding delegate_to:
- synchronize: mode=pull src=/path/to/file dest=/tmp/path/to/file
  delegate_to: host2
It gives:
fatal: [host1]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,password,keyboard-interactive).\r\n", "unreachable": true}
And also:
- synchronize: mode=pull src=/path/to/file dest=/tmp/path/to/file
  delegate_to: user2@host2
Which gives:
fatal: [host1 -> host2]: FAILED! => {"changed": false, "cmd": "/usr/bin/rsync --delay-updates -F --compress --archive --rsh=/usr/bin/ssh -S none -o StrictHostKeyChecking=no --rsync-path=sudo rsync --out-format=<<CHANGED>>%i %n%L host1:/path/to/file /tmp/path/to/file", "failed": true, "msg": "Permission denied (publickey).\r\nrsync: connection unexpectedly closed (0 bytes received so far) [Receiver]\nrsync error: unexplained error (code 255) at io.c(235) [Receiver=3.1.2]\n", "rc": 255}
NB: ssh user1@host1 and then ssh user2@host2 work with SSH keys (no password required).
Please pay attention to these notes from the module's docs:
For the synchronize module, the “local host” is the host the synchronize task originates on, and the “destination host” is the host synchronize is connecting to.
The “local host” can be changed to a different host by using delegate_to. This enables copying between two remote hosts or entirely on one remote machine.
I guess you may want to try (assuming Ansible can connect to host2):
- synchronize:
    src: /path/to/file
    dest: /tmp/path/to/file
  delegate_to: host2
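For context, a sketch of the full play this task would sit in (host and user names follow the question; it assumes the controller can reach host2 and that host2 can rsync over SSH to host1):
- hosts: host1
  remote_user: user1
  tasks:
    - synchronize:
        src: /path/to/file
        dest: /tmp/path/to/file
      delegate_to: host2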