I have added the ansible user's SSH keys to the other hosts, so the ansible user is allowed on all hosts. Now I want to run the playbook as root or as other service users such as apache. I have already set the user to ansible in my playbook. I get the errors below when I run the playbook while logged in as root, but everything works fine when I run it while logged in as ansible.
- hosts: nameservers
  user: ansible
  tasks:
    - name: check hostname
      command: hostname
Error:
[root@dev playbooks]# ansible-playbook pingtest.yml
PLAY [nameservers] *********************************************************************************************************************
TASK [Gathering Facts] *****************************************************************************************************************
fatal: [x.x.x.x]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [x.x.x.x]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
fatal: [x.x.x.x]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-keyex,gssapi-with-mic,password).", "unreachable": true}
Note: I have replaced the IPs with x.
Set the user in the playbook:
user: ansible
Set the key path in the Ansible configuration file:
private_key_file = /home/ansible/.ssh/id_rsa
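For reference, a hedged way to check whether root can actually use the ansible user's key is to pass the user and key explicitly on the command line (playbook name and key path taken from the question):
ansible-playbook pingtest.yml -u ansible --private-key /home/ansible/.ssh/id_rsa
The equivalent persistent settings in ansible.cfg would be:
[defaults]
# connect as the ansible user with its key, regardless of who runs ansible-playbook
remote_user = ansible
private_key_file = /home/ansible/.ssh/id_rsa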
I'm finding it difficult to run a simple playbook. I already pinged the target and that was successful. When I run the playbook I get this error:
PLAY [install httpd and start services] ***********************************
TASK [Gathering Facts] ****************************************************
fatal: [192.168.112.66]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: jay@192.168.112.66: Permission denied (publickey,password).", "unreachable": true}
What's the problem with this?
The remote server is denying you access because your private key is protected with a passphrase.
Try this before running the playbook:
$ eval `ssh-agent`
$ ssh-add /path/to/your/private/key
Then run the playbook with the -u and --private-key options, pointing at a user that has access to the remote server and the private key you use.
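For example (the user jay comes from the error above; the playbook name and key path are placeholders for your own):
ansible-playbook playbook.yml -u jay --private-key ~/.ssh/id_rsa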
I am guessing you used a password instead of an SSH key. If so, at the end of your command, add
--ask-pass
Let's say you're running your playbook. Your command will become:
ansible-playbook playbook.yml --ask-pass
I am using the synchronize module to transfer a file from serverA to serverB. My serverA and serverB hosts are:
[serverB]
172.20.13.201 ansible_user=root ansible_ssh_pass="hi12#$"
172.20.13.202 ansible_user=root ansible_ssh_pass="hi12#$"
172.20.13.203 ansible_user=root ansible_ssh_pass="hi12#$"
[serverA]
172.20.5.121 ansible_user=root ansible_ssh_pass="hi12#$"
My Ansible playbook is:
- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote
      remote_user: root
      synchronize: src=/root/connection dest=/root/neutron-server.log
      delegate_to: serverA
But it errors out:
TASK [Copy Remote-To-Remote] ***************************************************
fatal: [172.20.13.201]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname servera: nodename nor servname provided, or not known", "unreachable": true}
fatal: [172.20.13.202]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname servera: nodename nor servname provided, or not known", "unreachable": true}
fatal: [172.20.13.203]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname servera: nodename nor servname provided, or not known", "unreachable": true}
Why can't synchronize resolve the hostname 'servera'? In my hosts file, it's serverA.
serverA is the name of the group; there is no such host. (There might also be more hosts in the serverA group, and it would be ambiguous which host to delegate to.)
Try delegating to 172.20.5.121 instead:
- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote
      remote_user: root
      synchronize: src=/root/connection dest=/root/neutron-server.log
      delegate_to: 172.20.5.121
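If you would rather not hard-code the IP, here is a sketch using Ansible's built-in groups variable to delegate to the first (and here only) host of the serverA group:
- hosts: serverB
  tasks:
    - name: Copy Remote-To-Remote
      remote_user: root
      synchronize:
        src: /root/connection
        dest: /root/neutron-server.log
      # pick the first host listed in the serverA inventory group
      delegate_to: "{{ groups['serverA'][0] }}"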
My hosts file
[all]
192.168.77.10
192.168.77.11
192.1680.77.12
And here is my playbook.yml
---
- hosts: all
  tasks:
    - name: Add the Google signing key
      apt_key: url=https://packages.cloud.google.com/apt/doc/apt-key.gpg state=present
    - name: Add the k8s APT repo
      apt_repository: repo='deb http://apt.kubernetes.io/ kubernetes-xenial main' state=present
    - name: Install packages
      apt:
        name: "{{ packages }}"
      vars:
        packages:
          - vim
          - htop
          - tmux
          - docker.io
          - kubelet
          - kubeadm
          - kubectl
          - kubernetes-cni
When I run
ansible-playbook -i hosts playbook.yml
an unexpected authentication problem occurs:
The authenticity of host '192.168.77.11 (192.168.77.11)' can't be established.
ECDSA key fingerprint is SHA256:mgX/oadP2cL6g33u7xzrEblvga9CGfpW13K2YUdeKsE.
Are you sure you want to continue connecting (yes/no)? The authenticity of host '192.168.77.10 (192.168.77.10)' can't be established.
ECDSA key fingerprint is SHA256:ayWHzp/yquIuQxw7MKGR0+NbtrzHY86Z8PdIPv7r6og.
Are you sure you want to continue connecting (yes/no)? fatal: [192.1680.77.12]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname 192.1680.77.12: Name or service not known\r\n", "unreachable": true}
^C [ERROR]: User interrupted execution
I am following an example from a DevOps book and reproduced the original code. My OS is Ubuntu 18.04.
telnet hosts
telnet: could not resolve hosts/telnet: Temporary failure in name resolution
VM ls output
vagrant@ubuntu-bionic:~$ ls
hosts playbook.retry playbook.yml
I edited /etc/ansible/ansible.cfg by adding the host_key_checking = False option.
It still does not work:
fatal: [192.1680.77.12]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname 192.1680.77.12: Name or service not known\r\n", "unreachable": true}
fatal: [192.168.77.10]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '192.168.77.10' (ECDSA) to the list of known hosts.\r\nvagrant#192.168.77.10: Permission denied (publickey).\r\n", "unreachable": true}
fatal: [192.168.77.11]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Warning: Permanently added '192.168.77.11' (ECDSA) to the list of known hosts.\r\nvagrant#192.168.77.11: Permission denied (publickey).\r\n", "unreachable": true}
to retry, use: --limit @/home/vagrant/playbook.retry
PLAY RECAP *************************************************************************************************************************************************************************************************
192.168.77.10 : ok=0 changed=0 unreachable=1 failed=0
192.168.77.11 : ok=0 changed=0 unreachable=1 failed=0
192.1680.77.12 : ok=0 changed=0 unreachable=1 failed=0
How to resolve this issue?
First, note that the "Could not resolve hostname 192.1680.77.12" error is a separate problem: that inventory entry has an extra digit and presumably should be 192.168.77.12. For the host key prompts, you have several options. One is of course to SSH to each host manually and accept its key so it is added to the known_hosts file on your Ansible control node. Another option is to set the environment variable ANSIBLE_HOST_KEY_CHECKING to False. A third option is to use the ansible.cfg config file:
[defaults]
host_key_checking = False
See the official documentation.
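For the environment-variable option, a run would look roughly like this (assuming the inventory and playbook names from the question):
# disable host key checking for this shell session only
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook -i hosts playbook.yml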
With Ansible I want to configure the rsyslog service for a group of hosts, then add the names of those hosts to a central host (different from the group of hosts). My playbook:
- hosts: gourp_of_hosts
  tasks:
    - name: set rsyslog configuration
      lineinfile:
        path: /etc/rsyslog.conf
        line: '{{item}}'
      with_items:
        - some items....
      become: yes
    - name: add host to rsyslog central
      blockinfile:
        path: /etc/rsyslog.conf
        block: |
          {{ansible_hostname}}....
      delegate_to: x.x.x.x   # my central host
      become: yes
My inventory file contains both the group of hosts and my central host:
[gourp_of_hosts]
host1 ansible_user=.... ansible_user_pass=.. ansible_sudo_pass=..
host2 ansible_user=.... ansible_user_pass=.. ansible_sudo_pass=..
[central]
x.x.x.x ansible_user=... ansible_user_pass=.. ansible_sudo_pass=..
Now I'm facing the following error:
fatal: [host1]: FAILED! => {"msg": "Incorrect sudo password"}
I tried removing the central host from my inventory file and used ssh-copy-id user@x.x.x.x; as a result I'm getting the following error:
fatal: [host]: UNREACHABLE! => {"changed": false, "msg": "Failed to connect to the host via ssh: Permission denied (publickey,gssapi-with-mic,password).\r\n", "unreachable": true}
I am trying to run a simple playbook against OpenStack in the admin tenant using Ansible Tower, both running on localhost. Here is the script:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - name: Security Group
      os_security_group:
        state: present
        name: example
I have done the following configuration (screenshots of the Credentials, Template, and Inventory test omitted).
With this configuration, I am getting this error:
TASK [Security Group] **********************************************************
fatal: [localhost]: FAILED! => {"changed": false, "module_stderr": "sudo: effective uid is not 0, is /usr/bin/sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
Any idea what it can be? It looks like a credential problem.
Untick Enable Privilege Escalation - it's not necessary. Your OpenStack privilege/authorisation will be tied to your OpenStack credentials (admin in this case), not the user running the Ansible task.
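As a hedged aside, if you ever run the same play outside Tower, the playbook-level equivalent of leaving that box unticked is simply not enabling become (shown explicitly below for clarity):
- hosts: localhost
  gather_facts: no
  connection: local
  # no privilege escalation; OpenStack authorisation comes from the cloud credentials
  become: false
  tasks:
    - name: Security Group
      os_security_group:
        state: present
        name: example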