How to set host_key_checking=false in ansible inventory file?

I would like to use the ansible-playbook command instead of 'vagrant provision'. However, setting host_key_checking=false in the hosts file does not seem to work.
# hosts file
vagrant ansible_ssh_private_key_file=~/.vagrant.d/insecure_private_key ansible_ssh_user=vagrant ansible_ssh_port=2222 ansible_ssh_host=127.0.0.1
host_key_checking=false
Is there a configuration variable outside of Vagrantfile that can override this value?

Since I first answered this in 2014, I have updated my answer to account for more recent versions of Ansible.
Yes, you can do it at the host or inventory level (which became possible in newer Ansible versions) or at the global level:
inventory:
Add the following.
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
host:
Add the following.
ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
The host/inventory options work with connection type ssh, not paramiko. Some people may strongly argue that the inventory and host levels are more secure because the scope is more limited.
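For example, a complete host line using the host-level variable could look like this (a sketch reusing the vagrant host from the question):
vagrant ansible_ssh_host=127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_extra_args='-o StrictHostKeyChecking=no'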
global:
Ansible User Guide - Host Key Checking
You can do it either in the /etc/ansible/ansible.cfg or ~/.ansible.cfg file:
[defaults]
host_key_checking = False
Or you can set up an environment variable (this might not work on newer Ansible versions):
export ANSIBLE_HOST_KEY_CHECKING=False
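For example, to set it for a single run only (playbook name illustrative):
ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -i hosts playbook.yml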

Yes, you can set this on the inventory/host level.
With an already accepted answer present, I think this is a better answer to the question on how to handle this on the inventory level. I consider this more secure by isolating this insecure setting to the hosts required for this (e.g. test systems, local development machines).
What you can do at the inventory level is add
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
or
ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
to your host definition (see Ansible Behavioral Inventory Parameters).
This will work provided you use the ssh connection type (not paramiko or something else).
For example, a Vagrant host definition would look like…
vagrant ansible_port=2222 ansible_host=127.0.0.1 ansible_ssh_common_args='-o StrictHostKeyChecking=no'
or
vagrant ansible_port=2222 ansible_host=127.0.0.1 ansible_ssh_extra_args='-o StrictHostKeyChecking=no'
Running Ansible will then be successful without changing any environment variable.
$ ansible vagrant -i <path/to/hosts/file> -m ping
vagrant | SUCCESS => {
"changed": false,
"ping": "pong"
}
In case you want to do this for a group of hosts, here's a suggestion for making it a supplemental group var for an existing group:
[mytestsystems]
test[01:99].example.tld
[insecuressh:children]
mytestsystems
[insecuressh:vars]
ansible_ssh_common_args='-o StrictHostKeyChecking=no'

I could not use:
ansible_ssh_common_args='-o StrictHostKeyChecking=no'
in my inventory file. It seems Ansible did not consider this option in my case (ansible 2.0.1.0 from pip on Ubuntu 14.04).
I decided to use:
server ansible_host=192.168.1.1 ansible_ssh_common_args='-o UserKnownHostsFile=/dev/null'
That worked for me.
Also you could set this variable in group instead for each host:
[servers_group:vars]
ansible_ssh_common_args='-o UserKnownHostsFile=/dev/null'
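Since an empty known-hosts file can still trigger the confirmation prompt under the default StrictHostKeyChecking=ask, the two options are often combined; a sketch for the same group:
[servers_group:vars]
ansible_ssh_common_args='-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no'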

In /etc/ansible/ansible.cfg, uncomment the line:
host_key_checking = False
and in /etc/ansible/hosts, uncomment the line:
client_ansible ansible_ssh_host=10.1.1.1 ansible_ssh_user=root ansible_ssh_pass=12345678
That's all.
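You can then verify the connection with an ad-hoc ping against that host entry:
ansible client_ansible -m ping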

Adding the following to the Ansible config worked for me when using ansible ad-hoc commands:
[ssh_connection]
# ssh arguments to use
ssh_args = -o StrictHostKeyChecking=no
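For example, an ad-hoc command that previously stopped at the host key prompt should then run straight through (group name illustrative):
ansible webservers -m ping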
Ansible Version
ansible 2.1.6.0
config file = /etc/ansible/ansible.cfg

You can set these options in /etc/ansible/ansible.cfg, ~/.ansible.cfg, or ansible.cfg (in your current directory):
[ssh_connection]
ssh_args = -C -o ControlMaster=auto -o ControlPersist=60s -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no
Tested with ansible 2.9.6 on Ubuntu 20.04.
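To confirm which config file was picked up and which values are actually in effect, a quick check with the standard ansible-config tool:
ansible-config dump --only-changed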

Related

Issue reaching boxes behind a bastion host with the ansible aws_ec2 dynamic inventory plugin

I have looked around a little and I can say this post is not a duplicate. I have been using Ansible 2.9.x for a while, and connectivity to the bastion host has always worked fine for me using the ec2.py dynamic inventory. I am switching to the ansible aws_ec2 plugin, and one of the reasons is in this other Stack Overflow post of mine.
Below are my inventory file and ansible.cfg file:
# myprofile.aws_ec2.yml
plugin: amazon.aws.aws_ec2
boto_profile: my profile
strict: True
regions:
  - eu-west-1
  - eu-central-1
  - eu-north-1
keyed_groups:
  - key: tags
    prefix: tag
hostnames:
  - ip-address
  # - dns-name
  # - tag:Name
  - private-ip-address
compose:
  ansible_host: private_ip_address
# folder/project level ansible.cfg configuration
[defaults]
roles_path = roles
host_key_checking = False
hash_behaviour = merge ### Note to self: Extremely important settings
interpreter_python = auto ### Note to self: Very important settings for running from localhost
[inventory]
enable_plugins = aws_ec2, host_list, script, auto, yaml, ini, toml
# inventory = plugin_inventory/bb.aws_ec2.yaml
The inventory has group_vars files
➜ plugin_inventory git:(develop) ✗ tree
.
├── myprofile.aws_ec2.yml
└── group_vars
    ├── tag_Name_main_productname_uat_jumpbox.yml
    ├── tag_Name_main_productname_uat_mongo.yml
    ├── tag_Name_main_productname_uat_mongo_arb.yml
    ├── tag_Name_main_productname_uat_mysql.yml
    ├── tag_Name_xxx.yml
    └── tag_Name_yyy.yml
To get to the mongo db, which is in a private subnet, the group_vars files look like below:
#ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -i {{ hostvars.localhost.reg_jumpbox_ssh_key }} -W %h:%p -q ubuntu@{{ hostvars.localhost.reg_jumpbox_facts.instances.0.public_ip_address }}"'
ansible_ssh_common_args: '-o ProxyCommand="ssh -o StrictHostKeyChecking=no -i ~/Dropbox/creds/pemfiles/ProductUATOps.pem -W %h:%p -q ubuntu@xxx.xxx.xxx.xxx"'
Each time I run the command
AWS_PROFILE=myprofile ansible -i ~/infrastructure_as_code/ansible_projects/productname/plugin_inventory/myprofile.aws_ec2.yml tag_Name_main_productname_uat_mongo -m ping -u ubuntu --private-key ~/Dropbox/creds/pemfiles/ProductUATOps.pem -vvvv
it doesn't connect and the full output and some other information are at pastebin.
Now, something odd I have seen: even though ansible.cfg contains host_key_checking = False, I still get the prompt Are you sure you want to continue connecting (yes/no/[fingerprint])? when the command runs.
I have also seen that it's looking for ~/.ssh/known_hosts2, /etc/ssh/ssh_known_hosts and /etc/ssh/ssh_known_hosts2, but ~/.ssh/known_hosts is what's there.
There is also one confusing error in the logs: "module_stdout": "/bin/sh: 1: /Users/joseph/.pyenv/shims/python: not found\r\n". But the Python installation with pyenv has been consistent, OS-wise:
➜ ~ which python
/Users/joseph/.pyenv/shims/python
➜ ~ python --version
Python 3.8.12 (9ef55f6fc369, Oct 25 2021, 05:10:01)
[PyPy 7.3.7 with GCC Apple LLVM 13.0.0 (clang-1300.0.29.3)]
➜ ~ ls -lh /Users/joseph/.pyenv/shims/python
-rwxr-xr-x 1 joseph staff 183B Feb 14 22:47 /Users/joseph/.pyenv/shims/python
➜ ~ /usr/bin/env python --version
Python 3.8.12 (9ef55f6fc369, Oct 25 2021, 05:10:01)
[PyPy 7.3.7 with GCC Apple LLVM 13.0.0 (clang-1300.0.29.3)]
I suspect the error is due to something preventing the fingerprints from getting into the known hosts file. I am tempted to set up the ssh tunneling manually myself, but I would like to understand why this is happening and whether it's because this is a new machine. Can anyone shed some light on this for me? Thanks.
After running ansible-config dump using that ansible.cfg, it emits AnsibleOptionsError: Invalid value "merge ##..., so it seems Ansible just silently ate the config file, or may be using a different one.
It turns out that while # is a supported beginning-of-line comment character, ansible-config (as of 2.12.1) only tolerates ; as an end-of-line comment character:
[defaults]
roles_path = roles
host_key_checking = False
hash_behaviour = merge ;;; Note to self: Extremely important settings
interpreter_python = auto ;;; Note to self: Very important settings for running from localhost
[inventory]
enable_plugins = aws_ec2, host_list, script, auto, yaml, ini, toml
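With the comments moved behind ;, re-running the same command that surfaced the error should now parse the file cleanly:
ansible-config dump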

Ansible using delegate_to with different ansible_ssh_common_args

I'm trying to make a run from my ansible master to a host (let's call it hostclient) which requires performing something on another host (let's call it susemanagerhost :) ).
hostclient needs ansible_ssh_common_args with a ProxyCommand filled in, while susemanagerhost needs no ansible_ssh_common_args since it is a direct connection.
So I thought I could use delegate_to, but hostclient and susemanagerhost have different values for the variable ansible_ssh_common_args.
I thought I could change the value of ansible_ssh_common_args during the run: use set_fact to back up ansible_ssh_common_args (because I want to restore the original value for the other standard tasks) and then set ansible_ssh_common_args to null (the connection from the ansible master to susemanagerhost is direct, with no ProxyCommand required), but it is not working.
It seems like it's still using the original value of ansible_ssh_common_args.
ansible_ssh_common_args is generally used to execute commands 'through' a proxy server:
<ansible system> => <proxy system> => <intended host>
The way you formulated your question, you won't need to use ansible_ssh_common_args and can stick to using delegate_to:
- name: execute on host client
  shell: ls -la

- name: execute on susemanagerhost
  shell: ls -la
  delegate_to: "root@susemanagerhost"
Call this play with:
ansible-playbook <play> --limit=hostclient
This should do it.
Edit:
After filing a bug on GitHub, the working solution is to have:
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ansible@{{ jumphost_server }}"'
in host_vars or group_vars.
Followed by a static use of delegate_to:
delegate_to: <hostname>
hostname should be the hostname as used by ansible.
But not use:
delegate_to: "username#hostname"
This resolved the issue for me, hope it works out for you as well.
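For reference, a minimal sketch of the resulting setup (hostnames from the question; the group_vars file name is illustrative):
# group_vars/hostclient.yml
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ansible@{{ jumphost_server }}"'
# task in the play, run against hostclient
- name: execute on susemanagerhost
  shell: ls -la
  delegate_to: susemanagerhost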

How to make sure that ansible playbook uses hostname from different file or uses it from command line?

I have an ansible playbook that I run from below command line and it works fine.
ansible-playbook -e 'host_key_checking=False' -e 'num_serial=10' test.yml -u golden
It works on the hosts specified in the /etc/ansible/hosts file. But is there any way to pass hostnames directly on the command line, or to generate a new file with one hostname per line, so that my Ansible run works on those hostnames instead of the default /etc/ansible/hosts file?
Below is my ansible file:
# This will copy files
---
- hosts: servers
  serial: "{{ num_serial }}"
  tasks:
    - name: copy files to server
      shell: "(ssh -o StrictHostKeyChecking=no abc.host.com 'ls -1 /var/lib/workspace/data/*' | parallel -j20 'scp -o StrictHostKeyChecking=no abc.host.com:{} /data/holder/files/procs/')"
    - name: sleep for 3 sec
      pause: seconds=3
Now I want to generate a new file that lists all the servers line by line, and have my Ansible playbook work on that file instead. Is this possible?
I am running ansible 2.6.3 version.
The question has probably been answered already, but I will answer again to add more points.
Always check the command-line help for information about the arguments:
ansible-playbook --help | grep inventory
-i INVENTORY, --inventory=INVENTORY, --inventory-file=INVENTORY
specify inventory host path or comma separated host
list. --inventory-file is deprecated
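That "comma separated host list" means you can skip the inventory file entirely (note the trailing comma, which is required when passing a single host; hostnames illustrative). Keep in mind that hosts passed this way only match the all pattern, not a named group:
ansible-playbook -i host01.example.tld,host02.example.tld, -e 'num_serial=10' test.yml -u golden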
Ansible inventory files are supported with two extensions:
yml
ini --> specifying the ini extension is not mandatory.
The inventory documentation provides more info on the formats and should be consulted before choosing one to implement.
Adding @HermanTheGermanHesse's answer so that all the possible points are covered.
In case the above is not used, or you don't want to use it, Ansible will in the end refer to ansible.cfg for the hosts and variable definitions:
[defaults]
inventory = path/to/hosts
From here:
The ansible.cfg file will be chosen in this order:
ANSIBLE_CONFIG environment variable
./ansible.cfg
~/.ansible.cfg
/etc/ansible/ansible.cfg
You can use the -i flag to specify the inventory to use. For example:
ansible-playbook -i hosts play.yml
A way to specify the inventory file to use is to set inventory in the ansible.cfg-file as such:
[defaults]
inventory = path/to/hosts
From here:
The ansible.cfg file will be chosen in this order:
ANSIBLE_CONFIG environment variable
./ansible.cfg
~/.ansible.cfg
/etc/ansible/ansible.cfg
EDIT
From your comment:
[WARNING]: Could not match supplied host pattern, ignoring: servers PLAY [servers]
It seems that Ansible doesn't recognize hosts passed with the -i flag as belonging to a group. Since you mentioned in chat that you generate a list with the passed hosts, I'd suggest creating a file where the list of passed hosts is made to belong to a group called [servers], and passing its path with the -i flag.
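For example, the generated file could look like this (hostnames illustrative) and is then passed with -i:
# generated_hosts
[servers]
host01.example.tld
host02.example.tld
ansible-playbook -i generated_hosts -e 'num_serial=10' test.yml -u golden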

Ansible Inventory Link

I am new to Ansible and trying to learn the basics. But apparently I already fail with setting up the inventory file.
For the setup:
1) Installed ansible via homebrew
2) As no ansible.cfg was created, I created one manually at /etc/ansible/ansible.cfg
ansible.cfg
[defaults]
inventory = /etc/ansible/hosts/;
3) a hosts file was also not there, so I created the same in /etc/ansible/hosts
hosts
Test1
Test2
When I run ansible all --list-hosts I get the error:
[WARNING]: Unable to parse /etc/ansible/hosts; as an inventory source
As the path is correctly reflected in the error, I at least assume the cfg is read correctly. But the target hosts file is still not being recognized. I tried different paths. What do I need to change?
Remove /; from the end of the inventory line in /etc/ansible/ansible.cfg:
cat /etc/ansible/ansible.cfg
[defaults]
inventory = /etc/ansible/hosts
Alternatively, you can use ansible -i /etc/ansible/hosts to tell Ansible to use this inventory file.
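After the fix, listing the hosts should succeed (expected output for the hosts file above):
$ ansible all --list-hosts
  hosts (2):
    Test1
    Test2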

Ansible Using Custom ssh config File

I have a custom SSH config file that I typically use as follows
ssh -F ~/.ssh/client_1_config amazon-server-01
Is it possible to assign Ansible to use this config for certain groups? It already has the keys and ports and users all set up. I have this sort of config for multiple clients, and would like to keep the config separate if possible.
Not fully possible. You can set ssh arguments in the ansible.cfg:
[ssh_connection]
ssh_args = -F ~/.ssh/client_1_config amazon-server-01
Unfortunately it is not possible to define this per group, inventory or anything else specific.
I believe you can achieve what you want like this in your inventory:
[group1]
server1
[group2]
server2
[group1:vars]
ansible_ssh_user=vagrant
ansible_ssh_common_args='-F ssh1.cfg'
[group2:vars]
ansible_ssh_user=vagrant
ansible_ssh_common_args='-F ssh2.cfg'
You can then be as creative as you want and construct SSH config files such as this:
$ cat ssh1.cfg
Host server1
  HostName 192.168.1.1
  User someuser
  Port 22
  IdentityFile /path/to/id_rsa
References
Working with Inventory
OpenSSH Config File Examples
With Ansible 2, you can set a ProxyCommand in the ansible_ssh_common_args inventory variable. Any arguments specified in this variable are added to the sftp/scp/ssh command line when connecting to the relevant host(s). Consider the following inventory group:
[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2
You can create group_vars/gatewayed.yml with the following contents:
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@gateway.example.com"'
and that will do the trick.
You can find further information in: http://docs.ansible.com/ansible/faq.html
Another way: assume you have ssh key identity files set up in configuration groupings for various servers, as I do in the ~/.ssh/config file, with a bunch of entries like this one:
Host wholewideworld
  Hostname 204.8.19.16
  port 22
  User why_me
  IdentityFile ~/.ssh/id_rsa
  PubKeyAuthentication yes
  IdentitiesOnly yes
they work like
ssh wholewideworld
To run Ansible ad-hoc commands, first run:
eval $(ssh-agent -s)
ssh-add ~/.ssh/*rsa
output will be like:
Enter passphrase for /home/why-me/.ssh/id_rsa:
Identity added: /home/why-me/.ssh/id_rsa (/home/why-me/.ssh/id_rsa)
now you should be able to include wholewideworld in your ~/ansible-hosts
After you put in the host alias from the config file, it works to just run:
ansible all -m ping
output:
127.0.0.1 | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
wholewideworld | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
