How to authenticate hosts with Ansible?

My hosts file
[all]
192.168.77.10
192.168.77.11
192.1680.77.12
And here is my playbook.yml
---
- hosts: all
  tasks:
    - name: Add the Google signing key
      apt_key: url=https://packages.cloud.google.com/apt/doc/apt-key.gpg state=present
    - name: Add the k8s APT repo
      apt_repository: repo='deb http://apt.kubernetes.io/ kubernetes-xenial main' state=present
    - name: Install packages
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - vim
          - htop
          - tmux
          - docker.io
          - kubelet
          - kubeadm
          - kubectl
          - kubernetes-cni
When I run
ansible-playbook -i hosts playbook.yml
an unexpected authentication problem occurs:
The authenticity of host '192.168.77.11 (192.168.77.11)' can't be established.
ECDSA key fingerprint is SHA256:mgX/oadP2cL6g33u7xzrEblvga9CGfpW13K2YUdeKsE.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '192.168.77.10 (192.168.77.10)' can't be established.
ECDSA key fingerprint is SHA256:ayWHzp/yquIuQxw7MKGR0+NbtrzHY86Z8PdIPv7r6og.
Are you sure you want to continue connecting (yes/no)? fatal: [192.1680.77.12]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname 192.1680.77.12: Name or service not known\r\n", "unreachable": true}
^C [ERROR]: User interrupted execution
I am following an example from a DevOps book and reproduced the original code. My OS is Ubuntu 18.04.
telnet hosts
telnet: could not resolve hosts/telnet: Temporary failure in name resolution
VM ls output
vagrant@ubuntu-bionic:~$ ls
hosts playbook.retry playbook.yml
I edited /etc/ansible/ansible.cfg by adding the host_key_checking = False option.
It still does not work:
fatal: [192.1680.77.12]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname 192.1680.77.12: Name or service not known\r\n", "unreachable": true}
fatal: [192.168.77.10]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '192.168.77.10' (ECDSA) to the list of known hosts.\r\nvagrant@192.168.77.10: Permission denied (publickey).\r\n", "unreachable": true}
fatal: [192.168.77.11]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '192.168.77.11' (ECDSA) to the list of known hosts.\r\nvagrant@192.168.77.11: Permission denied (publickey).\r\n", "unreachable": true}
to retry, use: --limit @/home/vagrant/playbook.retry
PLAY RECAP *************************************************************************************************************************************************************************************************
192.168.77.10 : ok=0 changed=0 unreachable=1 failed=0
192.168.77.11 : ok=0 changed=0 unreachable=1 failed=0
192.1680.77.12 : ok=0 changed=0 unreachable=1 failed=0
How do I resolve this issue?

You have several options. One is of course to SSH to each host once and accept its key, adding it to the known_hosts file on your Ansible control node. Another option is to set the environment variable ANSIBLE_HOST_KEY_CHECKING to False. A third option is to use the ansible.cfg config file:
[defaults]
host_key_checking = False
See the official documentation.
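To make the first two options concrete, here is a minimal sketch using the addresses from the inventory above (ssh-keyscan pre-populates known_hosts for option one; the exported variable disables checking for option two, for the current shell only):

# option 1: record the host keys on the control node up front
ssh-keyscan 192.168.77.10 192.168.77.11 >> ~/.ssh/known_hosts

# option 2: disable host key checking for this shell session
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook -i hosts playbook.yml

Note that disabling host key checking trades away protection against man-in-the-middle attacks, so pre-populating known_hosts is the safer of the two.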

Related

Cannot transfer public key from ansible control node to remote node

I am trying to transfer a key from the Ansible control node to a remote node using the authorized_key module. Below is my Ansible playbook.
- name: ssh
  hosts: temp1
  remote_user: <username>
  become: true
  tasks:
    - name: ssh
      authorized_key:
        user:
        state: present
        key: "{{ lookup('file', '/home/<username>/.ssh/id_rsa.pub') }}"
        manage_dir: yes
      become: yes
Error:
fatal: []: UNREACHABLE! => {
"changed": false,
"msg": "Failed to connect to the host via ssh: #: Permission denied (publickey).",
"unreachable": true }
PLAY RECAP **********************************************************************************************************
: ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
As per my understanding, the function of the authorized_key module is to copy the public key from the control node and append it to the authorized_keys file of the remote node, so that an SSH connection can be established without manually copying the public key from one server to the other. I can confirm that the user I am using has sudo privileges on both VMs.
I would really appreciate any help on this.
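Note that the UNREACHABLE error is raised before the authorized_key task ever runs: the control node cannot log in at all because its key is not on the remote host yet. A common way to bootstrap is to use password authentication for the first run. A minimal sketch, assuming password SSH login is enabled for the remote user and sshpass is installed on the control node (the inventory and playbook file names are placeholders):

# first run: authenticate with a password so the play can install the key
ansible-playbook -i inventory playbook.yml --ask-pass --ask-become-pass

Once the public key is in place, subsequent runs can go back to key-based authentication.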

How to manage SQL Server (running on Windows Server) via Ansible (running on a Unix platform)?

I would like to know how to manage SQL Server on Windows (basically running some SQL scripts) from an Ansible instance running on a Unix platform. Here is what I have done.
Contents of list.yml:
---
- hosts: all
  tasks:
    - name: connect to the db server
      shell: "invoke-sqlcmd -username \"username\" -password \"password\" -Query \"SELECT 1\""
Contents of the inventory file (development.txt):
[dbserver]
DBSERVER:1515 ansible_user=ansible
Contents of ansible.cfg:
[defaults]
inventory = ./development.txt
ERROR:
ansible-playbook playbooks/list.yml
PLAY [all] **************************************************************************************************
TASK [Gathering Facts] **************************************************************************************
fatal: [DBSERVER]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: connect to host DBSERVER port 1515: Operation timed out", "unreachable": true}
PLAY RECAP **************************************************************************************************
DBSERVER : ok=0 changed=0 unreachable=1 failed=0 skipped=0 rescued=0 ignored=0
TELNET WORKS
➜ ansible-test telnet DBSERVER 1515
Trying <ip address>...
Connected to DBSERVER.
Escape character is '^]'.
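One thing to note: the error shows Ansible opening an SSH connection to DBSERVER on port 1515, which appears to be the SQL Server port from the inventory rather than an SSH service, so the successful telnet only proves that the database port is open. Windows hosts are usually managed over WinRM instead of SSH. A sketch of a WinRM-based inventory entry follows; the values are assumptions, and it requires a WinRM listener on the Windows host plus the pywinrm package on the control node:

[dbserver]
DBSERVER ansible_user=ansible ansible_password=<password> ansible_connection=winrm ansible_port=5986 ansible_winrm_server_cert_validation=ignore

With a Windows target, the shell task would also typically become win_shell so the command runs under PowerShell.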

Run playbook as a different user

I have added the SSH keys of the ansible user to the other hosts, so the ansible user is allowed on all hosts. Now I want to run the playbook as root or as another service user such as apache. I have already set the user to ansible in my playbook. I get the errors below when I run the playbook while logged in as root, but everything works fine when I run it while logged in as the ansible user.
- hosts: nameservers
  user: ansible
  tasks:
    - name: check hostname
      command: hostname
Error:
[root@dev playbooks]# ansible-playbook pingtest.yml
PLAY [nameservers] *********************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************
fatal: [x.x.x.x]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [x.x.x.x]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [x.x.x.x]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}.
Note: I have replaced the IPs with x.
I have set the user in the playbook:
user: ansible
and set the key path in the Ansible configuration file:
private_key_file = /home/ansible/.ssh/id_rsa
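One thing worth checking is which configuration file is actually read when you run as root: Ansible looks at ANSIBLE_CONFIG, then ./ansible.cfg, then ~/.ansible.cfg, then /etc/ansible/ansible.cfg, so if the key path was set in the ansible user's ~/.ansible.cfg it will not be picked up by root. A sketch that makes the key explicit regardless of which config is read (it assumes root is allowed to read the ansible user's private key):

# make the remote user and key explicit on the command line
ansible-playbook pingtest.yml -u ansible --private-key /home/ansible/.ssh/id_rsa

Alternatively, ansible_user and ansible_ssh_private_key_file can be set per host or group in the inventory.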

Ansible synchronize module delegate_to could not resolve hostname

I am using the synchronize module to transfer a file from serverA to serverB. My serverA and serverB hosts are:
[serverB]
172.20.13.201 ansible_user=root ansible_ssh_pass="hi12#$"
172.20.13.202 ansible_user=root ansible_ssh_pass="hi12#$"
172.20.13.203 ansible_user=root ansible_ssh_pass="hi12#$"
[serverA]
172.20.5.121 ansible_user=root ansible_ssh_pass="hi12#$"
My Ansible playbook is:
- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote
      remote_user: root
      synchronize: src=/root/connection dest=/root/neutron-server.log
      delegate_to: serverA
But it errors:
TASK [Copy Remote-To-Remote] ***************************************************
fatal: [172.20.13.201]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname servera: nodename nor servname provided, or not known", "unreachable": true}
fatal: [172.20.13.202]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname servera: nodename nor servname provided, or not known", "unreachable": true}
fatal: [172.20.13.203]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname servera: nodename nor servname provided, or not known", "unreachable": true}
Why can't synchronize resolve the hostname 'servera'? In my hosts file, it's serverA.
serverA is the name of a group; there is no host with that name. (There might also be more hosts in the group serverA, and it would be difficult to decide which host to delegate to.)
Try delegating to 172.20.5.121:
- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote
      remote_user: root
      synchronize: src=/root/connection dest=/root/neutron-server.log
      delegate_to: 172.20.5.121
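If you prefer not to hard-code the IP, a variant is to delegate to the first host of the group through the groups magic variable (a sketch; it assumes the serverA group contains exactly the host you want to read from):

- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote
      remote_user: root
      synchronize: src=/root/connection dest=/root/neutron-server.log
      delegate_to: "{{ groups['serverA'][0] }}"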

Error trying to create a new VM in Ansible

I just started learning Ansible. It has been a pain so far. I have this code to create a new VM. I followed this tutorial.
---
- hosts: localhost
  connection: local
  tasks:
    - vsphere_guest:
        vcenter_hostname: 1.1.1.12
        username: root
        password: pasword
        guest: newvm001
        state: powered_on
        validate_certs: no
        vm_extra_config:
          vcpu.hotadd: yes
          mem.hotadd: yes
          notes: This is a test VM
          folder: MyFolder
        vm_disk:
          disk1:
            size_gb: 10
            type: thin
            datastore: storage001
        vm_nic:
          nic1:
            type: vmxnet3
            network: VM Network
            network_type: standard
        vm_hardware:
          memory_mb: 256
          num_cpus: 1
          osid: ubuntu64Guest
          scsi: paravirtual
        esxi:
          datacenter: 1.1.1.12
          hostname: 1.1.1.12
However, I keep getting this error:
[WARNING]: Host file not found: /etc/ansible/hosts
[WARNING]: provided hosts list is empty, only localhost is available
PLAY [localhost]
TASK [setup] *******************************************************************
ok: [localhost]
TASK [vsphere_guest] ***********************************************************
fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Cannot find datacenter named: 9.1.142.86"}
NO MORE HOSTS LEFT *************************************************************
[WARNING]: Could not create retry file 'testing.retry'. [Errno 2] No such file or directory: ''
PLAY RECAP *********************************************************************
localhost : ok=1 changed=0 unreachable=0 failed=1
Why is that so? And what is the difference between a host file and an inventory file?
what is the difference between a host file and an inventory file?
They are the same. However, since you're doing everything on your local machine, it's fine that you only have localhost available.
This is your error:
TASK [vsphere_guest] *********************************************************** fatal: [localhost]: FAILED! => {"changed": false, "failed": true, "msg": "Cannot find datacenter named: 9.1.142.86"}
It's not clear to me why you're receiving this with the playbook you've provided, as it doesn't mention that IP at all, and the line I suspect is causing the problem is
datacenter: 1.1.1.12
Are you sure this is the file you're running, and that you've saved any changes you've made to it?
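As an aside, the two warnings at the top of the output only mean that no inventory was found at the default /etc/ansible/hosts; since the play targets localhost, they are harmless here. If you want to silence them, you can pass an explicit inventory. A small sketch, in which hosts.ini and playbook.yml are placeholder names:

# hosts.ini
[local]
localhost ansible_connection=local

Then run the play against it:
ansible-playbook -i hosts.ini playbook.yml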
