Why is Ansible not able to ping locally?

Without writing an ansible-playbook, why is Ansible not able to ping locally?
Problem:
I have one EC2 instance whose IP is "52.15.160.250", and I installed Ansible on it. Inside the inventory file [/etc/ansible/hosts] I added:
[localhost]
52.15.160.250
Then I configured sudo access with visudo (screenshot omitted).
I tried to ping the local host with
ansible -m ping all
or
ansible -m ping 52.15.160.250
and I am getting the following error (screenshot omitted).

Try adding it like this:
[localhost]
52.15.160.250 ansible_connection=local
This way, Ansible will not attempt to connect over SSH; it will use a local connection instead.
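As a quick sanity check (assuming only the inventory above, nothing else), re-run the ad-hoc ping; with ansible_connection=local it should return pong instead of an SSH error:
ansible -m ping all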

Related

Ansible root/password login

I'm trying to use Ansible to provision a server, and the first thing I want to do is test the SSH access. If I use ssh directly I can log in fine:
ssh root@server
root@backups's password:
If I use Ansible I can't...
user#ansible:~$ ansible backups -m ping --user root --ask-pass
SSH password:
backups | UNREACHABLE! => {
"changed": false,
"msg": "Invalid/incorrect password: Permission denied, please try again.",
"unreachable": true
}
The password I'm using is correct - 100%.
Before anyone suggests using SSH keys - that's part of what I'm looking to automate.
The issue was caused by the getting started documentation setting a trap.
It instructs you to create an inventory file with servers, use ansible all -m ping to ping those servers and to use the -u switch to change the remote user.
What it doesn't tell you is that if, like me, not all your servers have the same user, the advised way to specify a user per server is in the inventory file:
server1 ansible_connection=ssh ansible_user=user1
server2 ansible_connection=ssh ansible_user=user2
server3 ansible_connection=ssh ansible_user=user3
I was provisioning a server, and the only user I had available to me at the time was root. But trying to do ansible server3 --user root --ask-pass failed to authenticate. After a couple of wasted hours I discovered the --user switch is only effective if the inventory file doesn't specify a user. This is intended precedence behaviour. There are a few gripes about this in GitHub issues, but a firm 'intended behaviour' mantra is the response you get if you challenge it. It seems to go against the grain to me.
I subsequently discovered that you can specify -e 'ansible_ssh_user=root' to override the inventory user - I will see about creating a pull request to improve the docs.
While you're here, I might be able to save you some time with some further gotchas. This behaviour is the same if you use playbooks: you can specify a remote_user there, but it isn't honoured - presumably also because of precedence. Again, you can override the inventory user with -e 'ansible_ssh_user=root'.
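For reference, the override looks like this on the command line (server3 comes from the inventory snippet above; site.yml is just a placeholder playbook name):
ansible server3 -m ping --ask-pass -e 'ansible_ssh_user=root'
ansible-playbook site.yml --ask-pass -e 'ansible_ssh_user=root'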
Finally, until I realised Linode could provision a server with an SSH key deployed, I was trying to pass the root password to an ad-hoc command. You have to encrypt the password, which gives you a long string that is almost certain to contain $ characters, which bash will treat as substitutions. Make sure you escape or quote these.
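A rough sketch of that workflow (mkpasswd comes from the Debian/Ubuntu whois package; the $6$... hash is a placeholder, and the user module invocation is only an illustration):
# generate a SHA-512 crypt hash for the desired root password
mkpasswd --method=sha-512
# single-quote the hash so bash does not treat $6$... as variable substitutions
ansible server3 -m user -a 'name=root password=$6$SALT$HASHEDPASSWORD' --ask-pass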
The lineinfile module behaviour isn't intuitive either.
Write your hosts file like this. It will work.
192.168.2.4
192.168.1.4
[all:vars]
ansible_user=azureuser
Then execute the following command to do a dry run before applying the configuration: ansible-playbook --ask-pass -i hosts main.yml --check
Also create an ansible.cfg file and paste the following contents into it:
[defaults]
inventory = hosts
host_key_checking = False
Note: all three files, namely main.yml, ansible.cfg and hosts, must be in the same folder.
Also, this setup was tested with devices connected to a private network using private IPs; I haven't checked it with public IPs. If you are using Azure/AWS, create a test VM and connect it to the VPN of the other devices.
Note: you need to install the sshpass package to be able to authenticate with a password.
For Ubuntu: apt-get install sshpass
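main.yml itself isn't shown in the answer; a minimal placeholder that only checks connectivity could look like this:
---
- name: Connectivity check
  hosts: all
  gather_facts: false
  tasks:
    - name: Ping every host in the inventory
      ping: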

Ansible use different hostname if first fails

I have a number of Raspberry Pis that I swap out (only one running at a time) and run Ansible against. Most Pis respond to ping raspberrypi, but I have one that responds to ping raspberrypi.local.
Rather than remembering to manually ping the correct hostname before executing the playbook, is there a way in Ansible to run a playbook against a different hostname if the first fails?
Currently my playbook is
---
- hosts: raspberrypi
and /etc/ansible/hosts
[raspberrypi]
raspberrypi
#raspberrypi.local
If I uncomment the second hostname and the first fails, then the playbook will fail and not run on the .local hostname.
I am not sure if this is directly possible in Ansible.
But a hack I can think of is: put the candidate hosts in a list variable, ping each of them from localhost, and if the ping succeeds, add that host to a custom host group and run the tasks you want against that group (see the sketch below).
Also, are you executing your playbook with serial: 1?
Hope this helps.
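A rough sketch of that hack (the group name reachable_pi and the task names are made up; loop requires Ansible 2.5+, use with_items on older versions):
---
- hosts: localhost
  gather_facts: false
  vars:
    candidate_hosts:
      - raspberrypi
      - raspberrypi.local
  tasks:
    - name: Probe each candidate hostname from the control node
      command: ping -c 1 {{ item }}
      register: probe
      ignore_errors: true
      loop: "{{ candidate_hosts }}"

    - name: Add whichever hostname answered to a dynamic group
      add_host:
        name: "{{ item.item }}"
        groups: reachable_pi
      when: item is succeeded
      loop: "{{ probe.results }}"

- hosts: reachable_pi
  tasks:
    - name: Confirm which host was picked
      debug:
        msg: "Running against {{ inventory_hostname }}"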
You could run the play against both hostnames using a host pattern:
- hosts: raspberrypi:raspberrypi.local

How to format a simple Ansible inventory file for amazon ec2 hosts?

I am unable to run the example ad hoc command:
ansible -m ping hosts --private-key=~/home/ec2-user/ -u ec2-user
the error is:
[WARNING]: Could not match supplied host pattern, ignoring: hosts
[WARNING]: No hosts matched, nothing to do
The hostname is: ip-10-200-2-21.us-west-2.compute.internal
I can ping the host from my ansible control node by this hostname.
I created the hosts file with the touch command and it looks like this:
ip-10-200-2-21.us-west-2.compute.internal
Do I need to include something more? Do I need to save it with a particular extension? Thank you very much for any help.
To run an ad-hoc command, use the following syntax:
ansible <HOST_GROUP> -m <MODULE_NAME>
This assumes your inventory file is in /etc/ansible/hosts. If your inventory file is located somewhere else, use
ansible <HOST_GROUP> -m <MODULE_NAME> -i <LOCATION_TO_INVENTORY_FILE>
to point Ansible at that inventory file.
What's missing is that your inventory file should have a host group in it, something like:
[ec2]
ip-10-200-2-21.us-west-2.compute.internal
other-ec2-host-that-needs-to-be-pinged.us-west-2.compute.internal
The host group is the text between the square brackets [], which in this case is ec2. Now we can reference all the EC2 hosts using the host group ec2.
To ping all the hosts in the ec2 group (assuming the inventory file is /etc/ansible/hosts) run
ansible ec2 -m ping -i /etc/ansible/hosts
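Applied to the question above, the ad-hoc command would then look something like this (the inventory path and key path are placeholders, not the asker's actual files):
ansible ec2 -m ping -i ./hosts --private-key ~/.ssh/my-ec2-key.pem -u ec2-user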

ansible command to list all known hosts

Ansible is already installed on a separate EC2 instance.
I need to install Apache on an EC2 instance.
To find a list of known hosts, I run this command:
ansible -i hosts all --list-hosts
and get this message
[WARNING]: Host file not found: hosts
[WARNING]: provided hosts list is empty, only localhost is available
[WARNING]: No hosts matched, nothing to do
--list-hosts lists the hosts that match a --limit. The input is the inventory given by -i. Your inventory is a file named hosts, which doesn't exist.
You need to create or generate an inventory file from somewhere; Ansible can't intuit what your inventory is.
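For example, a minimal sketch (the group name and hostname are placeholders): create an inventory file named hosts in the current directory with contents like
[webservers]
your-server.example.com
and then the original command will find it:
ansible -i hosts all --list-hosts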
If you installed Ansible via pip, you need to create the /etc/ansible directory with ansible.cfg and hosts files in it. To do so, use:
sudo mkdir /etc/ansible/
sudo touch /etc/ansible/hosts
So you will be able to use the command below:
cat /etc/ansible/hosts
I got permission to SSH to the target server, so now I can install on it.
If I can log in as ec2-user by being part of a management domain, then I can access any server.

Ansible Timeout (12s) waiting for privilege escalation prompt

I'm having trouble running my Ansible playbook on an AWS instance. Here is my version:
$ ansible --version
ansible 2.0.0.2
I created an inventory file as:
[my_ec2_instance]
default ansible_host=MY_EC2_ADDRESS ansible_user='ubuntu' ansible_ssh_private_key_file='/home/MY_USER/MY_KEYS/MY_KEY.pem'
Testing connection to my server:
$ ansible -i provisioner/inventory my_ec2_instance -m ping
default | SUCCESS => {
"changed": false,
"ping": "pong"
}
Now when running my playbook on this inventory I get the error Timeout (12s) waiting for privilege escalation prompt as follows:
$ ansible-playbook -i provisioner/inventory -l my_ec2_instance provisioner/playbook.yml
PLAY [Ubuntu14/Python3/Postgres/Nginx/Gunicorn/Django stack] *****
TASK [setup] *******************************************************************
fatal: [default]: FAILED! => {"failed": true, "msg": "ERROR! Timeout (12s) waiting for privilege escalation prompt: "}
NO MORE HOSTS LEFT *************************************************************
PLAY RECAP *********************************************************************
default : ok=0 changed=0 unreachable=0 failed=1
If I run the same playbook using .vagrant/provisioners/ansible/inventory/vagrant_ansible_inventory as the inventory parameter, it works perfectly on my Vagrant instance (which, I believe, proves there is nothing wrong with the playbook/roles themselves).
Also, if I run it with -vvvv, copy the exec ssh line and run it manually, it does connect to AWS without problems.
Do I need to add any other parameter on my inventory file to connect an EC2 instance? What am I missing?
Edit the Ansible configuration and increase the SSH timeout:
$ vim /etc/ansible/ansible.cfg
[defaults]
timeout = 60    # default is 10
There is a GitHub issue about this error, affecting various versions of Ansible 2.x: https://github.com/ansible/ansible/issues/13278#issuecomment-216307695
My solution was simply to add timeout=30 to /etc/ansible/ansible.cfg.
This is not a "task" or "role" timeout and was enough to solve the error (I do have some roles/tasks that take much longer than that).
In my case, the root cause was an incorrect entry in /etc/hosts for the localhost, causing a 20s delay for any sudo command.
127.0.0.1 wronghostname
Changed it to the correct hostname to fix it. No more delay for sudo/privileged commands.
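For reference, the corrected /etc/hosts entry looks roughly like this (the hostname is whatever the machine is actually called):
127.0.0.1   localhost my-actual-hostname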
In my case it was because my playbook had
become_method: su
become_flags: "-"
which prompts a password request on the host.
Adding --ask-become-pass to the ansible-playbook … command and supplying the password solved the issue.
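In command form (the inventory path and playbook name are placeholders):
ansible-playbook -i inventory playbook.yml --ask-become-pass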
I ran the command as follows and it works:
command:
ansible-playbook -c paramiko httpd.yml
As the issue is related to the openssl implementation, using paramiko dodges it.
Ansible's default ssh_args setting, documented at https://docs.ansible.com/ansible/latest/reference_appendices/config.html#ansible-ssh-args, is
-C -o ControlMaster=auto -o ControlPersist=60s
and changing ControlMaster to yes (or no) resolved the issue for me (somehow):
ansible.cfg:
[ssh_connection]
ssh_args = -C -o ControlMaster=yes -o ControlPersist=60s
I had the same issue. I was able to solve it by adding become_exe: sudo su - to the play:
- hosts: "{{ host | default('webservers')}}"
become: yes
become_user: someuser
become_method: su
become_exe: sudo su -
The thread is old but the varied solutions keep coming.
In my case, the issue was that the ansible script had modified the sudoers file in the vagrant vm to add an entry for the vagrant group (%vagrant) after the existing entry for the vagrant user.
That was enough to cause the ansible script to timeout waiting for privilege escalation.
The solution was to force the sudoers entry for the vagrant group to be above the entry for the vagrant user.
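A rough illustration of the fix (the exact rules are assumptions based on a stock Vagrant box; the point is that sudoers honours the last matching entry, so the group entry has to sit above the user entry):
%vagrant ALL=(ALL) NOPASSWD: ALL
vagrant  ALL=(ALL) NOPASSWD: ALL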
Sometimes the setup phase takes more time on EC2 instances; change the timeout value in ansible.cfg to something like timeout=40. This will set the timeout to 40 seconds.
I fixed this error on my system because I had forgotten that I had altered the Ansible config file:
sudo vim /etc/ansible/ansible.cfg
Try commenting out the privilege escalation parameters that could be trying to sudo to root,
like so:
[privilege_escalation]
#become=True
#become_method=su
#become_user=root
#become_ask_pass=False
#become_exe="sudo su -"
The account I was trying to ssh as did not have permission to become root.
I am building secure VM images for AWS, QEMU and VBox on an isolated network, with limited DNS support. Increasing the SSH Timeout to 40 sec had limited effect in my situation.
I am using Packer v1.5.5, Ansible v2.9.2 and OpenSSH v7.4p1
My solution was to change the UseDNS parameter in /etc/ssh/sshd_config to no.
I added the following lines to my RHEL/CentOS kickstart configuration, with great results.
%post
# Disable DNS lookups by sshd to address Ansible timeouts
perl -npe 's/^#UseDNS yes/UseDNS no/g' -i /etc/ssh/sshd_config
%end
Check whether it is a problem with an old version of sudo on the destination server. Some old sudo versions do not have the -n option that Ansible uses.
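A quick way to check the sudo version on the target (the user and host are placeholders):
ssh ec2-user@target-server 'sudo -V | head -n 1'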
