Ansible delegate_to "Incorrect sudo password"

With Ansible I want to configure the rsyslog service for a group of hosts, then add the names of those hosts to a central host (different from the group of hosts). My playbook:
- hosts: gourp_of_hosts
  tasks:
    - name: set rsyslog configuration
      lineinfile:
        path: /etc/rsyslog.conf
        line: '{{ item }}'
      with_items:
        - some items....
      become: yes

    - name: add host to rsyslog central
      blockinfile:
        path: /etc/rsyslog.conf
        block: |
          {{ ansible_hostname }}....
      delegate_to: x.x.x.x   # my central host
      become: yes
My inventory file contains both the group of hosts and my central host:
[gourp_of_hosts]
host1 ansible_user=.... ansible_user_pass=.. ansible_sudo_pass=..
host2 ansible_user=.... ansible_user_pass=.. ansible_sudo_pass=..
[central]
x.x.x.x ansible_user=... ansible_user_pass=.. ansible_sudo_pass=..
Now I'm facing the following error:
fatal: [host1]: FAILED! => {"msg": "Incorrect sudo password"}
I tried removing the central host from my inventory file and used ssh-copy-id user@x.x.x.x; as a result I'm getting the following error:
fatal: [host]: UNREACHABLE! => {"changed": false, "msg": "Failed
to connect to the host via ssh: Permission denied
(publickey,gssapi-with-mic,password).\r\n", "unreachable": true}
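One thing worth double-checking (a hedged suggestion, not a confirmed fix): the delegated task escalates privileges on the central host, so that host's entry needs a working become password, and the current variable names are ansible_password and ansible_become_pass rather than ansible_user_pass and ansible_sudo_pass. A minimal inventory sketch along those lines, with the credentials elided as in the original:

[gourp_of_hosts]
host1 ansible_user=... ansible_password=... ansible_become_pass=...
host2 ansible_user=... ansible_password=... ansible_become_pass=...

[central]
x.x.x.x ansible_user=... ansible_password=... ansible_become_pass=...

If the sudo password on the central host differs from the one on host1/host2, the "Incorrect sudo password" message would be consistent with the wrong one being picked up for the delegated task.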

Related

Ansible - How to regenerate a host's ssh_host_keys without losing connection during play

I want to set up freshly imaged Raspberry Pis with Ansible. For this I have tasks to add users, SSH keys and configs. But when I come to the step where I want to regenerate the default ssh_host_*_keys, I lose the connection.
I've tried it in two ways.
First, by removing all keys and rebooting the host. In this case the host regenerates the keys at boot, so all I would have to do is wait, but this doesn't work.
- name: SSH | Delete ssh host keys
  file:
    path: '{{ item }}'
    state: absent
  with_items:
    - /etc/ssh/ssh_host_ecdsa_key
    - /etc/ssh/ssh_host_rsa_key
    - /etc/ssh/ssh_host_ecdsa_key.pub
    - /etc/ssh/ssh_host_ed25519_key
    - /etc/ssh/ssh_host_rsa_key.pub
    - /etc/ssh/ssh_host_ed25519_key.pub
    - /etc/ssh/ssh_host_dsa_key
    - /etc/ssh/ssh_host_dsa_key.pub
  when: ansible_lsb.id == "Raspbian"
  notify: wait-for-reboot
This gives me the following error
TASK [../roles/os : SSH | Delete ssh host keys] *******
ok: [raspi4] => (item=/etc/ssh/ssh_host_ecdsa_key)
changed: [raspi4] => (item=/etc/ssh/ssh_host_rsa_key)
changed: [raspi4] => (item=/etc/ssh/ssh_host_ecdsa_key.pub)
fatal: [raspi4]: FAILED! => {"msg": "Failed to connect to the host via ssh: Connection reset by 192.168.100.12 port 22"}
My 2nd try:
- name: SSH | Generate ECDSA Host Key
  openssh_keypair:
    path: /etc/ssh/ssh_host_ecdsa_key
    owner: root
    state: present
    type: ecdsa
    size: 521
    regenerate: full_idempotence
    force: no
The result:
TASK [../roles/os : SSH | Generate ECDSA Host Key] ******************************
fatal: [raspi4]: FAILED! => {"msg": "Failed to connect to the host via ssh: ###########################################################\r\n# WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED! #\r\n###########################################################\r\nIT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Edit:
I already have the "host_key_checking = False" in my ansible.cfg. This is needed for the first time connecting to the host. Otherwise I would have to add it to my known_hosts manually by connecting via ssh.
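If the failure comes from the now-stale entry for the Pi in the controller's ~/.ssh/known_hosts (host_key_checking = False does not remove an existing, conflicting entry), one possible approach is to drop that entry and reset the connection right after the keys change. A hedged sketch, not part of the original question:

- name: SSH | Remove the stale host key from the controller's known_hosts
  known_hosts:
    name: '{{ inventory_hostname }}'
    state: absent
  delegate_to: localhost

- name: SSH | Drop the current ssh connection so the next task reconnects
  meta: reset_connection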

How can I use Ansible to push playbooks with SSH key authentication

I am new to Ansible and am trying to push playbooks to my nodes. I would like to push via SSH keys. Here is my playbook:
- name: nginx install and start services
  hosts: <ip>
  vars:
    ansible_ssh_private_key_file: "/path/to/.ssh/id_ed25519"
  become: true
  tasks:
    - name: install nginx
      yum:
        name: nginx
        state: latest
    - name: start service nginx
      service:
        name: nginx
        state: started
Here is my inventory:
<ip> ansible_ssh_private_key_file=/path/to/.ssh/id_ed25519
Before I push, I check whether it works: ansible-playbook -i /home/myuser/.ansible/hosts nginx.yaml --check
it gives me:
fatal: [ip]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: user@ip: Permission denied (publickey,password).", "unreachable": true}
On that server I don't have root privileges; I can't use sudo. That's why I use my own inventory in my home directory. I can open an SSH connection and log in to the target node where I want to push that nginx playbook. The public key is on the remote server in /home/user/.ssh/id_ed25119.pub
What am I missing?
Copy /etc/ansible/ansible.cfg into the directory from which you are running the nginx.yaml playbook, or somewhere else per the documentation: https://docs.ansible.com/ansible/latest/reference_appendices/config.html#ansible-configuration-settings-locations
Then edit that file to change this line:
#private_key_file = /path/to/file
to read:
private_key_file = /path/to/.ssh/id_ed25519
Also check the remote_user entry.
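For reference, a minimal sketch of the relevant [defaults] entries in that copied ansible.cfg; the remote user name here is an assumption:

[defaults]
# Private key used for SSH connections when none is passed on the command line
private_key_file = /path/to/.ssh/id_ed25519
# User to log in as on the managed node (assumed value)
remote_user = myuser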

Ansible synchronize module delegate_to could not resolve hostname

I am using the synchronize module to transfer a file from serverA to serverB. My serverA and serverB hosts are:
[serverB]
172.20.13.201 ansible_user=root ansible_ssh_pass="hi12#$"
172.20.13.202 ansible_user=root ansible_ssh_pass="hi12#$"
172.20.13.203 ansible_user=root ansible_ssh_pass="hi12#$"
[serverA]
172.20.5.121 ansible_user=root ansible_ssh_pass="hi12#$"
My Ansible playbook is:
- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote
      remote_user: root
      synchronize: src=/root/connection dest=/root/neutron-server.log
      delegate_to: serverA
But it fails with this error:
TASK [Copy Remote-To-Remote] ***************************************************
fatal: [172.20.13.201]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname servera: nodename nor servname provided, or not known", "unreachable": true}
fatal: [172.20.13.202]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname servera: nodename nor servname provided, or not known", "unreachable": true}
fatal: [172.20.13.203]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname servera: nodename nor servname provided, or not known", "unreachable": true}
Why can't synchronize resolve the hostname 'servera'? In my hosts file it's serverA.
serverA is the name of the group; there is no such host. (There might be more hosts in the group serverA, and it would be difficult to decide which host to delegate to.)
Try delegating to 172.20.5.121:
- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote
      remote_user: root
      synchronize: src=/root/connection dest=/root/neutron-server.log
      delegate_to: 172.20.5.121
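If you would rather not hard-code the IP, a sketch that delegates to the first host of the serverA group (this assumes the group contains at least one host):

- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote
      remote_user: root
      synchronize:
        src: /root/connection
        dest: /root/neutron-server.log
      delegate_to: "{{ groups['serverA'][0] }}"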

get ansible to wait for ssh connection

How do you get Ansible to wait for or retry SSH connections? I have an Ansible task that runs govc to upload a VM into vCenter, but right after that I SSH into the machine to run commands like this:
- hosts: vcenter
  gather_facts: false
  tasks:
    - name: Download GOVC
      get_url:
        url: https://github.com/vmware/govmomi/releases/download/v0.20.0/govc_linux_amd64.gz
        dest: /home/gkeadmin/govc_linux_amd64.gz
But running it right after, I get this: fatal: [139.178.66.91]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host 1.2.3.4 port 22: Operation timed out", "unreachable": true}
I rerun it with --retry and then it continues. It seems like the machine just needs some time before I can connect via SSH. How do I wait for an SSH connection to be established in Ansible?
Ansible supports retries at the task level (used together with until and delay). Maybe this can help you:
---
- name: test
  hosts: <hosts_name>
  tasks:
    - name: task
      <module_name>:
      register: result
      retries: 2
      delay: 10
      until: result is succeeded
You can add a play at the top of your playbook to wait for it, for example:
---
- name: wait for ssh
  hosts: vcenter
  gather_facts: false
  tasks:
    - wait_for:
        port: 22
        host: '{{ inventory_hostname }}'
      delegate_to: localhost

- name: my playbook
  hosts: vcenter
  gather_facts: false
  tasks:
    - name: Download GOVC
      [ ... etc ... ]
https://docs.ansible.com/ansible/latest/modules/wait_for_module.html#examples
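An alternative sketch that keeps the wait inside the same play, assuming a recent Ansible where the wait_for_connection module is available:

- hosts: vcenter
  gather_facts: false
  tasks:
    - name: Wait until the host accepts SSH connections
      wait_for_connection:
        delay: 10      # give the freshly created VM a moment before the first try
        timeout: 300   # give up if it is still unreachable after 5 minutes

    - name: Download GOVC
      get_url:
        url: https://github.com/vmware/govmomi/releases/download/v0.20.0/govc_linux_amd64.gz
        dest: /home/gkeadmin/govc_linux_amd64.gz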

Getting "winrm send_input failed" when using os_server module and local connection

I'm trying to write a playbook for a Windows VM that also creates the VM with the os_server module.
I'm starting with a simple win_ping, given the VM is already there:
- name: Create instance
  hosts: all
  tasks:
    - name: Ping machine
      win_ping:
running it with ansible-playbook site.yml --inventory=10.204.0.9,
results in:
PLAY [Create instance] ************************************************************************
TASK [Gathering Facts] ************************************************************************
ok: [10.204.0.9]
TASK [Ping machine] ***************************************************************************
ok: [10.204.0.9]
PLAY RECAP ************************************************************************************
10.204.0.9 : ok=2 changed=0 unreachable=0 failed=0
Now I add the os_server task:
- name: Create Windows Instance
  connection: local
  os_server:
    state: present
    region_name: "{{ os_region_name }}"
    auth: "{{ cloud.auth }}"
    name: "windows-{{ inventory_hostname }}"
    image: Windows 2012 R2 Datacenter
    key_name: vector_ops
    flavor: 1C-2GB-50GB
    floating_ips:
      - "{{ inventory_hostname }}"

- name: Ping machine
  win_ping:
I'm setting connection to local as I want this task to be executed from the control machine, in case the VM is not created yet.
When I run this playbook again with ansible-playbook site.yml --inventory=10.204.0.9,, I get:
TASK [Create Windows Instance] ****************************************************************
[WARNING]: FATAL ERROR DURING FILE TRANSFER: Traceback (most recent call last): File
"/usr/lib/python2.7/dist-packages/ansible/plugins/connection/winrm.py", line 276, in
_winrm_exec self._winrm_send_input(self.protocol, self.shell_id, command_id, data,
eof=is_last) File "/usr/lib/python2.7/dist-packages/ansible/plugins/connection/winrm.py",
line 256, in _winrm_send_input protocol.send_message(xmltodict.unparse(rq)) File
"/usr/local/lib/python2.7/dist-packages/winrm/protocol.py", line 207, in send_message
return self.transport.send_message(message) File "/usr/local/lib/python2.7/dist-
packages/winrm/transport.py", line 202, in send_message raise WinRMTransportError('http',
error_message) WinRMTransportError: (u'http', u'Bad HTTP response returned from server. Code
500')
fatal: [10.204.0.9]: FAILED! => {"msg": "winrm send_input failed"}
I'm a bit puzzled why there is an error during a file transfer, so I ran the command with -vvv:
TASK [Create Windows Instance] ****************************************************************
task path: /home/ubuntu/basic-windows-example/trunk/playbooks/site.yml:8
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/openstack/os_server.py
<10.204.0.9> ESTABLISH WINRM CONNECTION FOR USER: Admin on PORT 5986 TO 10.204.0.9
EXEC (via pipeline wrapper)
And indeed it seems so that Ansible tries to establish a winrm connection, despite connection: local. Removing connection: local from the task brings the same result as above.
I would expect the task to return a simple "ok" since the VM is already there.
What am I missing here?
Update 2018-01-09, 9:45 GMT:
So I tried another experiment: I removed all ansible_* variables from the var file (see below) just to see what Ansible does with the os_server task when no WinRM connection is configured. Running it again with ansible-playbook site.yml --inventory=10.204.0.9, -vvv this time I get for the os_server task:
TASK [Create Windows Instance] ****************************************************************
task path: /home/ubuntu/basic-windows-example/trunk/playbooks/site.yml:9
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/openstack/os_server.py
<10.204.0.9> ESTABLISH LOCAL CONNECTION FOR USER: ubuntu
<10.204.0.9> EXEC /bin/sh -c 'echo ~ && sleep 0'
<10.204.0.9> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1515490597.4-208015762064624 `" && echo ansible-tmp-1515490597.4-208015762064624="` echo /home/ubuntu/.ansible/tmp/ansible-tmp-1515490597.4-208015762064624 `" ) && sleep 0'
<rest cut off for brevity>
So now a local connection does get established and the os_server task completes successfully. But of course, this is not the answer, because I need the WinRM connection configured for the Windows VM.
Update 2018-01-09, 10:00 GMT:
Following the suggestion to add gather_facts: false to the play and running ansible-playbook site.yml --inventory=10.204.0.9,, I now get:
PLAY [Create instance] ************************************************************************
META: ran handlers
TASK [Create Windows Instance] ****************************************************************
task path: /home/ubuntu/basic-windows-example/trunk/playbooks/site.yml:10
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/cloud/openstack/os_server.py
<10.204.0.9> ESTABLISH WINRM CONNECTION FOR USER: Admin on PORT 5986 TO 10.204.0.9
EXEC (via pipeline wrapper)
[WARNING]: FATAL ERROR DURING FILE TRANSFER: Traceback (most recent call last): File
"/usr/lib/python2.7/dist-packages/ansible/plugins/connection/winrm.py", line 276, in
_winrm_exec self._winrm_send_input(self.protocol, self.shell_id, command_id, data,
eof=is_last) File "/usr/lib/python2.7/dist-packages/ansible/plugins/connection/winrm.py",
line 256, in _winrm_send_input protocol.send_message(xmltodict.unparse(rq)) File
"/usr/local/lib/python2.7/dist-packages/winrm/protocol.py", line 207, in send_message
return self.transport.send_message(message) File "/usr/local/lib/python2.7/dist-
packages/winrm/transport.py", line 202, in send_message raise WinRMTransportError('http',
error_message) WinRMTransportError: (u'http', u'Bad HTTP response returned from server. Code
500')
fatal: [10.204.0.9]: FAILED! => {
"msg": "winrm send_input failed"
}
The error is the same, Ansible still tries to establish a WinRM connection.
Full Playbook (site.yml, added gather_facts: false):
- name: Create instance
  hosts: all
  gather_facts: false
  tasks:
    - name: Create Windows Instance
      connection: local
      os_server:
        state: present
        region_name: Region1
        auth: "{{ cloud.auth }}"
        name: "windows-{{ inventory_hostname }}"
        image: Windows 2012 R2 Datacenter
        key_name: mykey
        flavor: 1C-2GB-50GB
        floating_ips:
          - "{{ inventory_hostname }}"
    - name: Ping machine
      win_ping:
Vars in group_vars/all (used throughout all examples):
cloud:
  auth:
    auth_url: https://cloud.internal:5000/v3/
    domain_name: Domain_01
    password: mypassword
    project_name: dev-project
    username: apiuser
os_region_name: Fra1
ansible_user: Admin
ansible_port: 5986
ansible_password: myvmpassword
ansible_connection: winrm
ansible_winrm_server_cert_validation: ignore
Version info:
ansible --version
ansible 2.4.2.0
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/home/ubuntu/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/dist-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.12 (default, Nov 20 2017, 18:23:56) [GCC 5.4.0 20160609]
If I use delegate_to: localhost instead of connection: local for the os_server task, a local connection does get established. delegate_to avoids loading the WinRM connection variables for that connection.
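A sketch of the task with that change applied, using the same parameters as the playbook above:

- name: Create Windows Instance
  delegate_to: localhost
  os_server:
    state: present
    region_name: Region1
    auth: "{{ cloud.auth }}"
    name: "windows-{{ inventory_hostname }}"
    image: Windows 2012 R2 Datacenter
    key_name: mykey
    flavor: 1C-2GB-50GB
    floating_ips:
      - "{{ inventory_hostname }}"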
If someone else is facing the same issue with Ansible, check the WinRM memory setting on the host and ensure it has sufficient memory:
Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 1024
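If you want to apply that setting through Ansible itself, a hedged sketch using win_shell (this assumes WinRM is already reachable for this one task):

- name: Raise the WinRM per-shell memory limit
  win_shell: Set-Item WSMan:\localhost\Shell\MaxMemoryPerShellMB 1024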
