Ansible playbook takes the wrong "ansible_ssh_private_key_file" from hosts

I have a custom 'hosts' inventory script with the following output (part of it):
"production-public-web": {
"hosts": [
"52.x.y.z"
],
"vars": {
"ansible_ssh_private_key_file": "/home/ec2-user/.ssh/prod1-frankfurt.pem"
}
},
"production-internal": {
"hosts": [
"172.x.y.z"
],
"vars": {
"ansible_ssh_private_key_file": "/home/ec2-user/.ssh/prod1-useast.pem"
}
},
The hosts output looks fine, and I have a playbook that runs on each server group. But Ansible seems to mix up the private keys: some servers get the right key, some don't. For example (-vvvv):
PLAY [production-eu-public-web] *******************
TASK [setup] *********************************************
<52.x.y.z> ESTABLISH SSH CONNECTION FOR USER: None
<52.x.y.z> SSH: EXEC ssh -C -vvv -o ControlMaster=auto
-o ControlPersist=60s -o StrictHostKeyChecking=no
-o 'IdentityFile="/home/ec2-user/.ssh/prod1-useast.pem"'...
As you can see, this server took the wrong private key (us-east instead of Frankfurt).
I have used different private keys in the past without any issues, and some servers DO get the right private key. Regular SSH (with the right key) works.
$ ansible-playbook --version
ansible-playbook 2.0.1.0
Running on AWS EC2 (centos)
Any clue?
Answer: As Konstantin Suvorov commented, my hosts script output repeated the same hosts in more than one group. After I eliminated the duplications, it all worked well, even without the .ssh/config workaround.
Thanks!

The parameter mix-up in the hosts is a result of duplications in the output of the hosts script: when the same host appears in more than one group, the groups' vars are merged onto that host, and one ansible_ssh_private_key_file silently overrides the other. Each server should be in exactly one group.
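For illustration, a minimal hypothetical case of the failure mode (the group names and key paths mirror the question, but this is not the actual script output): the same address listed in two groups with conflicting group vars.
"production-public-web": {
    "hosts": ["52.x.y.z"],
    "vars": { "ansible_ssh_private_key_file": "/home/ec2-user/.ssh/prod1-frankfurt.pem" }
},
"production-internal": {
    "hosts": ["52.x.y.z"],
    "vars": { "ansible_ssh_private_key_file": "/home/ec2-user/.ssh/prod1-useast.pem" }
},
Because 52.x.y.z belongs to both groups, both groups' vars are merged for that one host, and whichever group is merged last supplies the key file for every play that touches it.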

Related

Ansible using delegate_to with different ansible_ssh_common_args

I'm trying to make a run from my Ansible master to a host (let's call it hostclient) which requires performing something on another host (let's call it susemanagerhost :) ).
hostclient needs ansible_ssh_common_args with a ProxyCommand filled in, while susemanagerhost needs no ansible_ssh_common_args since it's a direct connection.
So I thought I could use delegate_to, but hostclient and susemanagerhost have different values for the variable ansible_ssh_common_args.
I thought I could change the value of ansible_ssh_common_args during the run: use set_fact to copy ansible_ssh_common_args to a backup variable (because I want to recover the original value for the other standard tasks), then set ansible_ssh_common_args to null (the connection from the Ansible master to susemanagerhost is direct, with no ProxyCommand required). But it is not working.
It seems like it's still using the original value of ansible_ssh_common_args.
ansible_ssh_common_args is generally used to execute commands 'through' a proxy server:
<ansible system> => <proxy system> => <intended host>
The way you formulated your question, you won't need ansible_ssh_common_args at all and can stick to using delegate_to:
- name: execute on hostclient
  shell: ls -la

- name: execute on susemanagerhost
  shell: ls -la
  delegate_to: "root@susemanagerhost"
Call this play with:
ansible-playbook <play> --limit=hostclient
This should do it.
Edit:
After filing a bug on GitHub, the working solution is to have:
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ansible@{{ jumphost_server }}"'
in host_vars or group_vars, followed by a static use of delegate_to:
delegate_to: <hostname>
where hostname is the hostname as used by Ansible.
But not use:
delegate_to: "username#hostname"
This resolved the issue for me, hope it works out for you as well.
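For reference, a minimal sketch of the working setup under those assumptions (jumphost_server and the key path are the placeholders from the edit above; the task bodies are just examples):
# group_vars/hostclient.yml
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ansible@{{ jumphost_server }}"'

# playbook.yml
- hosts: hostclient
  tasks:
    - name: runs on hostclient, through the jump host
      shell: ls -la

    - name: runs on susemanagerhost, directly (it defines no ansible_ssh_common_args)
      shell: ls -la
      delegate_to: susemanagerhost
The delegated task picks up susemanagerhost's own connection variables, so no set_fact juggling of ansible_ssh_common_args is needed.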

I am trying to ping a host with Ansible which does not use the default SSH port, but I am not successful.

This is my hosts file:
ansible_host=XX.XXX.xx.x ansible_port=9301
[all:vars]
ansible_python_interpreter=/usr/bin/python3
and my command is: ansible all -i hosts -m ping
and I keep getting:
ansible_host=xx.xxx.xxx.xx | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname ansible_host=xx.xxx.xxx.xx: Name or service not known",
    "unreachable": true
}
It appears, based solely upon your posted content, that you left off the inventory hostname; all of those key=value pairs, such as that ansible_host= part, are for assigning hostvars to an existing inventory hostname. You'll want an inventory file like this:
my-awesome-host ansible_host=XX.XXX.xx.x ansible_port=9301
In the future, you can confirm what ansible thinks about your inventory via the ansible-inventory command:
ansible-inventory -i ./your-inventory-file.ini --list
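With the fixed file, the output should look something along these lines (hostname and values taken from the hypothetical inventory line above):
{
    "_meta": {
        "hostvars": {
            "my-awesome-host": {
                "ansible_host": "XX.XXX.xx.x",
                "ansible_port": 9301,
                "ansible_python_interpreter": "/usr/bin/python3"
            }
        }
    },
    ...
}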

Make Ansible use the right IP

I'm currently having an odd problem with Ansible.
I change the IP addresses of the hosts in my hosts file pretty often and didn't have any problem so far. But now, even though I changed the IP address in my hosts file, Ansible is still using a previous IP.
Here is the content of my hosts file:
[test-host]
test ansible_host=172.16.0.10 ansible_port=22 ansible_user=vagrant ansible_private_key_file=.vagrant/machines/test/virtualbox/private_key
I even specified the hosts file to use when running my playbook:
ansible-playbook playbook.yml -i hosts.file
I already tried reinstalling Ansible and deleting the tmp folder.
I saw that if I run ansible-inventory --list I can see the old IP:
{
    "_meta": {
        "hostvars": {
            "test": {
                "ansible_host": "192.168.0.10",
                "ansible_port": 22,
                "ansible_private_key_file": ".vagrant/machines/test/virtualbox/private_key",
                "ansible_user": "vagrant"
            }
        }
    },
How can I force Ansible to use the hosts.file instead of this "cache"?
Thanks.
Run the command with -vvv:
ansible-inventory -vvv --list
At the beginning of the output, find all the "Parsed ... inventory source" lines. Review the sources to find out where the problematic host comes from.
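Those lines look roughly like this (the paths are examples, not your actual output):
Parsed /etc/ansible/hosts inventory source with ini plugin
Parsed /home/user/project/hosts.file inventory source with ini plugin
If the stale IP comes from a source you did not expect, such as the default /etc/ansible/hosts, that source is the culprit.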
It looks like Ansible is using its own cache (see "Caching Facts" in the docs).
Try running your playbook with the --flush-cache option; maybe that will solve the issue.
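With the command from the question, that would be:
ansible-playbook playbook.yml -i hosts.file --flush-cache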

Ansible Dynamic Inventory groups not working

I'm working with the ec2 dynamic inventory script for Ansible and have created a fairly simple proof of concept. This is the content of the groups file, which exists next to ec2.py and ec2.ini:
[tag_classification_server_type_1]
[app_servers:children]
tag_classification_server_type_1
[stage:children]
app_servers
[stage:vars]
environment_name = stage
When I use that inventory to ping the tag groups, it works as expected:
$>ansible -i inventory/stage/ec2.py tag_classification_server_type_1 -m ping --private-key ~/.ssh/foo.pem
12.345.67.89 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
But attempting to use the defined groups fails (I'm showing stage here, but the same output is true when attempting to communicate with the app_servers group):
$>ansible -i inventory/stage/ec2.py stage -m ping --private-key ~/.ssh/foo.pem
[WARNING]: Could not match supplied host pattern, ignoring: stage
[WARNING]: No hosts matched, nothing to do
I've used groups in ansible using ec2 before, and never had any problems. I downloaded completely fresh ec2.ini and ec2.py files to make sure I hadn't accidentally modified them.
When I check the inventory with ansible-inventory -i ec2.py --list, it confirms that my defined groups aren't there.
Any ideas?
Naturally, if you struggle with a problem for an hour, you'll get nowhere. But post on StackOverflow, and you'll figure it out yourself in 5 minutes.
Turns out you have to pass the entire folder containing groups, ec2.py, and ec2.ini if you want it to respect the groups; otherwise it ignores them.
So the correct call is:
$>ansible -i inventory/stage stage -m ping --private-key ~/.ssh/foo.pem
Instead of:
$>ansible -i inventory/stage/ec2.py stage -m ping --private-key ~/.ssh/foo.pem
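For reference, the folder layout implied here (the name groups is just what this static file happens to be called):
inventory/stage/
    ec2.py     # the dynamic inventory script
    ec2.ini    # its configuration
    groups     # the static group definitions shown above
Ansible parses every inventory source in the folder and merges them, which is how the static groups get attached to the dynamic hosts.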

Ansible Using Custom ssh config File

I have a custom SSH config file that I typically use as follows:
ssh -F ~/.ssh/client_1_config amazon-server-01
Is it possible to assign Ansible to use this config for certain groups? It already has the keys and ports and users all set up. I have this sort of config for multiple clients, and would like to keep the config separate if possible.
Not fully possible. You can set SSH arguments in ansible.cfg:
[ssh_connection]
ssh_args = -F ~/.ssh/client_1_config
Unfortunately it is not possible to define this per group, inventory or anything else specific.
I believe you can achieve what you want like this in your inventory:
[group1]
server1
[group2]
server2
[group1:vars]
ansible_ssh_user=vagrant
ansible_ssh_common_args='-F ssh1.cfg'
[group2:vars]
ansible_ssh_user=vagrant
ansible_ssh_common_args='-F ssh2.cfg'
You can then be as creative as you want and construct SSH config files such as this:
$ cat ssh1.cfg
Host server1
  HostName 192.168.1.1
  User someuser
  Port 22
  IdentityFile /path/to/id_rsa
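With that in place, a run against group1 (here a simple ping as a connectivity check; the inventory filename is assumed) will connect to server1 through ssh1.cfg:
ansible group1 -i inventory -m ping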
References
Working with Inventory
OpenSSH Config File Examples
With Ansible 2, you can set a ProxyCommand in the ansible_ssh_common_args inventory variable. Any arguments specified in this variable are added to the sftp/scp/ssh command line when connecting to the relevant host(s). Consider the following inventory group:
[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2
You can create group_vars/gatewayed.yml with the following contents:
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@gateway.example.com"'
and that will do the trick.
You can find further information in: http://docs.ansible.com/ansible/faq.html
Another way: assuming you have SSH key identity files associated with configuration groupings for various servers, like I do in my ~/.ssh/config file, say you have a bunch of entries like this one:
Host wholewideworld
  Hostname 204.8.19.16
  Port 22
  User why_me
  IdentityFile ~/.ssh/id_rsa
  PubKeyAuthentication yes
  IdentitiesOnly yes
They work like:
ssh wholewideworld
To run Ansible ad-hoc commands, first run:
eval $(ssh-agent -s)
ssh-add ~/.ssh/*rsa
The output will be something like:
Enter passphrase for /home/why-me/.ssh/id_rsa:
Identity added: /home/why-me/.ssh/id_rsa (/home/why-me/.ssh/id_rsa)
Now you should be able to include wholewideworld in your ~/ansible-hosts file.
After you put in the host alias from the config file, it works to just run:
ansible all -i ~/ansible-hosts -m ping
output:
127.0.0.1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
wholewideworld | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
