Ansible Using Custom ssh config File

I have a custom SSH config file that I typically use as follows
ssh -F ~/.ssh/client_1_config amazon-server-01
Is it possible to tell Ansible to use this config for certain groups? It already has the keys, ports and users all set up. I have this sort of config for multiple clients, and would like to keep the configs separate if possible.

Not fully possible. You can set ssh arguments in the ansible.cfg:
[ssh_connection]
ssh_args = -F ~/.ssh/client_1_config
Unfortunately it is not possible to define this per group, inventory or anything else specific.

I believe you can achieve what you want like this in your inventory:
[group1]
server1
[group2]
server2
[group1:vars]
ansible_ssh_user=vagrant
ansible_ssh_common_args='-F ssh1.cfg'
[group2:vars]
ansible_ssh_user=vagrant
ansible_ssh_common_args='-F ssh2.cfg'
You can then be as creative as you want and construct SSH config files such as this:
$ cat ssh1.cfg
Host server1
HostName 192.168.1.1
User someuser
Port 22
IdentityFile /path/to/id_rsa
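With this layout, a quick way to check that each group picks up its own SSH config is an ad hoc ping per group (a sketch, assuming the inventory above is saved as ./inventory):
$ ansible group1 -i inventory -m ping
$ ansible group2 -i inventory -m ping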
References
Working with Inventory
OpenSSH Config File Examples

With Ansible 2, you can set a ProxyCommand in the ansible_ssh_common_args inventory variable. Any arguments specified in this variable are added to the sftp/scp/ssh command line when connecting to the relevant host(s). Consider the following inventory group:
[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2
You can create group_vars/gatewayed.yml with the following contents:
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@gateway.example.com"'
and that will do the trick.
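A quick way to verify that the jump host is actually used is an ad hoc ping against the group with verbose output (a sketch, assuming the inventory above is saved as ./hosts):
$ ansible gatewayed -i hosts -m ping -vvv
The -vvv output should show the full ssh command line, including the ProxyCommand option.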
You can find further information in: http://docs.ansible.com/ansible/faq.html

Another way works if you have SSH key identity files associated with configuration groupings for various servers, like I do in my ~/.ssh/config file. Say you have a bunch of entries like this one:
Host wholewideworld
Hostname 204.8.19.16
port 22
User why_me
IdentityFile ~/.ssh/id_rsa
PubKeyAuthentication yes
IdentitiesOnly yes
they work like
ssh wholewideworld
To run Ansible ad hoc commands, first run:
eval $(ssh-agent -s)
ssh-add ~/.ssh/*rsa
output will be like:
Enter passphrase for /home/why-me/.ssh/id_rsa:
Identity added: /home/why-me/.ssh/id_rsa (/home/why-me/.ssh/id_rsa)
now you should be able to include wholewideworld in your ~/ansible-hosts
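For example, a minimal ~/ansible-hosts matching the output below might look like this (the ansible_connection=local entry for 127.0.0.1 is an assumption about the rest of the file):
127.0.0.1 ansible_connection=local
wholewideworld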
After you put in the host alias from the config file, it works to just run:
ansible all -m ping
output:
127.0.0.1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
wholewideworld | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

Related

How can I run a playbook on a single host or a short list of hosts, and get content from an inventory group that the host is not part of?

I have a playbook which takes a specific group and puts all hosts of this group into a command on another host.
To be more precise: all hosts from the group oldservers in my inventory file must end up in /etc/ssh/ssh_config on one or multiple clients.
The tasks look like this:
---
- name: echo Old Servers
  debug:
    var: groups["oldservers"]

- name: create ssh_conf_for_old_server
  blockinfile:
    path: /etc/ssh/ssh_config
    backup: True
    block: |
      Host {{ groups["oldservers"]|join(' ') }}
        user admin
        KexAlgorithms +diffie-hellman-group1-sha1
        HostKeyAlgorithms +ssh-dss
        Ciphers +aes128-cbc
This should be executed on a client which is not a member of the oldservers group.
hosts file (inventory):
[clients]
192.168.200.1
192.168.200.2
[oldservers]
192.168.201.1
192.168.201.2
My execution line is ansible-playbook -i 192.168.200.1, -u ansible ./createServerList.yml
I guess I should do it a bit differently, shouldn't I?
The result should be: first, output all the oldservers (debug),
then write a block with these old servers into /etc/ssh/ssh_config.
In the command ansible-playbook -i 192.168.200.1, -u ansible ./createServerList.yml you are passing the IP address directly as the inventory. Because of this, Ansible is unaware of the inventory file where the host groups are defined. So try running this instead: ansible-playbook -i <path_to_inventory_file> -u ansible ./createServerList.yml
And then, if you have to restrict the playbook to run only on certain hosts or groups, do:
ansible-playbook -i <path_to_inventory_file> -u ansible ./createServerList.yml --limit "192.168.200.1,192.168.200.2"
OR
ansible-playbook -i <path_to_inventory_file> -u ansible ./createServerList.yml --limit clients
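For reference, with the inventory above, groups["oldservers"]|join(' ') expands to the two oldserver addresses, so the block written to /etc/ssh/ssh_config on each client should look roughly like this (the marker comments are blockinfile's defaults):
# BEGIN ANSIBLE MANAGED BLOCK
Host 192.168.201.1 192.168.201.2
  user admin
  KexAlgorithms +diffie-hellman-group1-sha1
  HostKeyAlgorithms +ssh-dss
  Ciphers +aes128-cbc
# END ANSIBLE MANAGED BLOCK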

Ansible using delegate_to with different ansible_ssh_common_args

I'm trying to run a play from my Ansible master against a host (let's call it hostclient) which requires performing something on another host (let's call it susemanagerhost).
hostclient needs ansible_ssh_common_args with a ProxyCommand filled in, while susemanagerhost needs no ansible_ssh_common_args since it's a direct connection.
So I thought I could use delegate_to, but the host called hostclient and the host called susemanagerhost have different values for the variable ansible_ssh_common_args.
I thought I could change the value inside the run with set_fact: first copy ansible_ssh_common_args to ansible_ssh_common_args_backup (because I want to restore the original value for the other standard tasks), then set ansible_ssh_common_args to null (the connection from the Ansible master to susemanagerhost is a direct connection with no ProxyCommand required), but it is not working.
It seems like it's still using the original value for ansible_ssh_common_args.
ansible_ssh_common_args is generally used to execute commands 'through' a proxy server:
<ansible system> => <proxy system> => <intended host>
The way you formulated your question, you won't need to use ansible_ssh_common_args and can stick to using delegate_to:
- name: execute on host client
  shell: ls -la

- name: execute on susemanagerhost
  shell: ls -la
  delegate_to: "root@susemanagerhost"
Call this play with:
ansible-playbook <play> --limit=hostclient
This should do it.
Edit:
After filing a bug on GitHub, the working solution is to have:
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ansible@{{ jumphost_server }}"'
In host_vars or group_vars.
Followed by a static use of delegate_to:
delegate_to: <hostname>
hostname should be the hostname as used by ansible.
But not use:
delegate_to: "username#hostname"
This resolved the issue for me, hope it works out for you as well.
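Put together, a minimal sketch of that layout (hostnames, file names and the jumphost_server value are illustrative, not from the original question):
# host_vars/hostclient.yml
jumphost_server: jumphost.example.com
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ansible@{{ jumphost_server }}"'

# playbook.yml
- hosts: hostclient
  tasks:
    - name: runs on hostclient, through the jump host
      shell: ls -la

    - name: runs on susemanagerhost over a direct connection
      shell: ls -la
      delegate_to: susemanagerhost
Because susemanagerhost has no ansible_ssh_common_args of its own, the delegated task connects directly.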

I am trying to ping a host using Ansible which does not use the default SSH port, but I am not successful

This is my hosts file:
ansible_host=XX.XXX.xx.x ansible_port=9301
[all:vars]
ansible_python_interpreter=/usr/bin/python3
and my command is: ansible all -i hosts -m ping
and I keep getting:
ansible_host=xx.xxx.xxx.xx | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: ssh: Could not resolve hostname ansible_host=xx.xxx.xxx.xx: Name or service not known",
    "unreachable": true
}
It appears, based solely upon your posted content, that you left off the inventory hostname; all of those key=value pairs, such as that ansible_host= part, are for assigning hostvars to an existing inventory hostname. You'll want an inventory file like this:
my-awesome-host ansible_host=XX.XXX.xx.x ansible_port=9301
In the future, you can confirm what ansible thinks about your inventory via the ansible-inventory command:
ansible-inventory -i ./your-inventory-file.ini --list
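With the hostname in place, the original ad hoc command should then resolve; to target just that host you could also run (a sketch, reusing the my-awesome-host name and the hosts file from above):
ansible my-awesome-host -i hosts -m ping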

Ansible multiple password vaults in group_vars file

I am having an issue with Ansible attempting to decrypt the incorrect vault when passing a command in.
The setup is as follows:
My ansible.cfg
[defaults]
transport=smart
hostfile=./hosts
host_key_checking=False
timeout=5
[ssh_connection]
ssh_args = ""
My hostfile:
[BigIP-Devices]
BigIP-1
BigIP-2
[Cisco-Devices]
Cisco-1
Cisco-2
TEST1
My group_vars:
BigIP-Devices < This is a vault encrypted with bigip-vaultkey
Cisco-Devices < This is a vault encrypted with cisco-vaultkey
bigip-vaultkey
cisco-vaultkey
Both vaults look like the following, with different details for each:
---
ansible_ssh_user: user
ansible_ssh_pass: password
I am trying to use the following command:
ansible -c ssh Cisco-Devices --vault-password-file ./group_vars/cisco-vaultkey --limit TEST1 -m raw -a "show version"
Even though it's calling Cisco-Devices in the command, I get the following error:
ERROR! Decryption failed on /home/users/ansible/device-access/group_vars/BigIP-Devices
However, if I move the BigIP files out of group_vars, it works correctly.
Anyone have any ideas?
Many thanks in advance for your help!!!
It seems that this is expected behaviour.
All host patterns and limits are applied after the full inventory is parsed.
In your case Ansible discovers the BigIP-Devices and Cisco-Devices groups and tries to load the corresponding group variables.
If you never execute your playbook on BigIP-Devices and Cisco-Devices at the same time, you probably want to separate them into different inventories, like:
./inventories/
./inventories/bigip/hosts
./inventories/bigip/group_vars/all/BigIP-Devices
./inventories/cisco/hosts
./inventories/cisco/group_vars/all/Cisco-Devices
and point at the required inventory with -i inventories/bigip or -i inventories/cisco.
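For example, the original ad hoc command would then become something like this (a sketch; it assumes the cisco-vaultkey file is kept next to the inventories rather than inside group_vars):
ansible -i inventories/cisco -c ssh Cisco-Devices --vault-password-file ./cisco-vaultkey --limit TEST1 -m raw -a "show version"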
P.S. Or use the same vault password for all files.

How to set host_key_checking=false in ansible inventory file?

I would like to use the ansible-playbook command instead of vagrant provision. However, setting host_key_checking=false in the hosts file does not seem to work.
# hosts file
vagrant ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key ansible_ssh_user=vagrant ansible_ssh_port=2222 ansible_ssh_host=127.0.0.1
host_key_checking=false
Is there a configuration variable outside of Vagrantfile that can override this value?
Since I answered this in 2014, I have updated my answer to account for more recent versions of Ansible.
Yes, you can do it at the host/inventory level (which became possible in newer Ansible versions) or at the global level:
inventory:
Add the following.
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
host:
Add the following.
ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
The host/inventory options work with connection type ssh, not paramiko. Some people may strongly argue that inventory- and host-level settings are more secure because the scope is more limited.
global:
Ansible User Guide - Host Key Checking
You can do it either in the /etc/ansible/ansible.cfg or ~/.ansible.cfg file:
[defaults]
host_key_checking = False
Or you can set an environment variable (this might not work on newer Ansible versions):
export ANSIBLE_HOST_KEY_CHECKING=False
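If you only need to bypass the check for a single run, the same variable can be set inline instead of exported (a small sketch; playbook.yml and hosts are placeholders):
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i hosts playbook.yml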
Yes, you can set this on the inventory/host level.
With an already accepted answer present, I think this is a better answer to the question of how to handle this on the inventory level. I consider it more secure because it isolates this insecure setting to the hosts that require it (e.g. test systems, local development machines).
What you can do at the inventory level is add
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
or
ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
to your host definition (see Ansible Behavioral Inventory Parameters).
This will work provided you use the ssh connection type (not paramiko or something else).
For example, a Vagrant host definition would look like:
vagrant ansible_port=2222 ansible_host=127.0.0.1 ansible_ssh_common_args='-o StrictHostKeyChecking=no'
or
vagrant ansible_port=2222 ansible_host=127.0.0.1 ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
Running Ansible will then be successful without changing any environment variable.
$ ansible vagrant -i <path/to/hosts/file> -m ping
vagrant | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
In case you want to do this for a group of hosts, here's a suggestion to make it a supplemental group var for an existing group like this:
[mytestsystems]
test[01:99].example.tld
[insecuressh:children]
mytestsystems
[insecuressh:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
I could not use:
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
in my inventory file. It seems Ansible does not honour this option in my case (ansible 2.0.1.0 from pip on Ubuntu 14.04).
I decided to use:
server ansible_host=192.168.1.1 ansible_ssh_common_args='-o UserKnownHostsFile=/dev/null'
It helped me.
You could also set this variable for a whole group instead of for each host:
[servers_group:vars]
ansible_ssh_common_args='-o UserKnownHostsFile=/dev/null'
In /etc/ansible/ansible.cfg, uncomment the line:
host_key_checking = False
and in /etc/ansible/hosts uncomment the line
client_ansible ansible_ssh_host=10.1.1.1 ansible_ssh_user=root ansible_ssh_pass=12345678
That's all
Adding the following to the Ansible config worked when using Ansible ad hoc commands:
[ssh_connection]
# ssh arguments to use
ssh_args = -o StrictHostKeyChecking=no
Ansible Version
ansible 2.1.6.0
config file = /etc/ansible/ansible.cfg
You can set these configs in /etc/ansible/ansible.cfg, ~/.ansible.cfg, or ansible.cfg (in your current directory):
[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
Tested with ansible 2.9.6 on Ubuntu 20.04.
