I'm trying to make a run from my Ansible master to a host (let's call it hostclient) which requires performing something on another host (let's call it susemanagerhost :) ).
hostclient needs ansible_ssh_common_args with a ProxyCommand filled in, while susemanagerhost needs no ansible_ssh_common_args, since it is a direct connection.
So I thought I could use delegate_to, but hostclient and susemanagerhost have different values for the variable ansible_ssh_common_args.
I then thought I could change the value of ansible_ssh_common_args inside the run: use set_fact to copy ansible_ssh_common_args to ansible_ssh_common_args_backup (because I want to recover the original value for the other standard tasks), and then set ansible_ssh_common_args to null (the connection from the Ansible master to susemanagerhost is direct, with no ProxyCommand required). But it is not working.
It seems like it's still using the original value of ansible_ssh_common_args.
ansible_ssh_common_args is generally used to execute commands 'through' a proxy server:
<ansible system> => <proxy system> => <intended host>
The way you formulated your question, you won't need to use ansible_ssh_common_args and can stick to using delegate_to:
- name: execute on hostclient
  shell: ls -la

- name: execute on susemanagerhost
  shell: ls -la
  delegate_to: "root@susemanagerhost"
Call this play with:
ansible-playbook <play> --limit=hostclient
This should do it.
Edit:
After filing a bug on GitHub, the working solution is to have:
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ansible@{{ jumphost_server }}"'
in host_vars or group_vars, followed by a static use of delegate_to:
delegate_to: <hostname>
Here, hostname should be the hostname as used by Ansible in the inventory. But do not use:
delegate_to: "username@hostname"
This resolved the issue for me; hope it works out for you as well.
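For illustration, a minimal sketch of how the pieces fit together; the file name and the jumphost_server variable are assumptions based on the snippets above:

# host_vars/hostclient.yml (hypothetical file name)
ansible_ssh_common_args: '-o StrictHostKeyChecking=no -o ProxyCommand="ssh -W %h:%p -i ~/.ssh/id_rsa ansible@{{ jumphost_server }}"'

# task in the play targeting hostclient: delegate by plain inventory name
- name: execute on susemanagerhost
  shell: ls -la
  delegate_to: susemanagerhost

Since susemanagerhost defines no ansible_ssh_common_args of its own, the delegated connection should stay direct, while connections to hostclient keep the ProxyCommand.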
Related
I am trying to do some testing against a specific host with Ansible 2.5, but Ansible can't figure out my inventory. I've either done something wrong or there's a bug. I've done this in the past, but maybe something changed in 2.5.
I have an inventory file specified like this:
localhost ansible_connection=local
testhost ansible_ssh_host=1.2.3.4
I have a playbook that runs totally fine if I just run it with ansible-playbook playbook.yml. It starts like this:
- hosts: localhost
  become: yes
  become_user: root
  become_method: sudo
  gather_facts: yes
If I run ansible-inventory --list, I see both of my hosts listed as "ungrouped".
However, if I try to run my playbook against the remote host using ansible -l testhost playbook.yml, it errors with the following:
[WARNING]: Could not match supplied host pattern, ignoring: playbook.yml
ERROR! Specified hosts and/or --limit does not match any hosts
I can't figure out how to actually make Ansible run against my remote host.
Your playbook specifies:
hosts: localhost
It will not run on testhost regardless of the arguments you supply; --limit does not replace the hosts declaration, it only narrows it.
As your hosts are ungrouped, you need to change this to:
hosts: all
Then you can use the --limit option to filter the hosts from the given target group.
You are also using the wrong command to run an Ansible playbook: it should be ansible-playbook, not ansible. The ad-hoc ansible command treats playbook.yml as a host pattern, which is where the warning in your output comes from.
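With both fixes applied, the run would look like this (no -i needed, assuming your inventory is already the configured default, as the ansible-inventory output suggests):

ansible-playbook playbook.yml -l testhost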
A simpler method whenever you have to connect to the local system: just specify connection: local in the hosts block:
- hosts: localhost
  connection: local
  become: yes
  become_user: root
I'm just getting started with Ansible and have successfully configured it to pull dynamic inventory from GCP.
I am able to successfully run the ping module against all instances:
ansible -i ~/git/ansible/inventory all -m ping
I am also able to successfully run the ping module against a single instance based on hostname:
ansible -i ~/git/ansible/inventory instance-2 -m ping
I would now like to utilize tags to group instances. For example, I have a set of instances that are labeled 'env:dev':
https://www.evernote.com/l/AfcLWLkermxMyIK7GvGpQXjXdIDFVAiT_z0
I have attempted multiple variations of the command below, with no luck:
ansible -i ~/git/ansible/inventory tag_env:dev -m ping
How can I filter and group my dynamic inventory on GCP?
You need to add a network tag in the instance settings, not a label. I don't know why, but gce.py doesn't return GCP labels, so you can only use network tags, which are more limited (just a value, not key=value).
For example, add the network tag 'dev' and then run ansible -i ~/git/ansible/inventory tag_dev -m ping.
Also, if you need to filter by several tags, the only way I found is:
- name: test stuff
  hosts: tag_api:&tag_{{ environment }}
  vars_files:
    - vars/{{ environment }}
    - vars/api
  tasks:
    - name: test
      command: echo "test"
Run the playbook like this: ansible-playbook -i inventory/ -u user playbook/test.yml -e environment=dev
Maybe someone knows a better way; with AWS's ec2.py I could filter in the ec2.ini config, but gce.py is very limited.
Also, I noticed that sometimes you need to clear the cache with gce.py --refresh-cache.
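The same intersection syntax also works for ad-hoc commands; the tag names here are just examples:

ansible -i ~/git/ansible/inventory 'tag_dev:&tag_api' -m ping

Quoting the pattern keeps the shell from interpreting the & character.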
So, here's the setup.
I am passing a list of FQDNs via extra vars to the playbook; the control machine's SSH keys are already copied to these hosts under their corresponding IP addresses. Also, there is no specific group for them in the inventory.
The host for this playbook is a server which may or may not be part of the list I am passing via extra vars, so the playbook has no facts about these nodes whatsoever.
So my question is: how do I get the IP addresses of the respective hosts on the fly, so that I can loop over them to check some properties that I previously set using their FQDNs via another playbook/role? (Note: I strictly need FQDNs to set those properties.)
I have used the delegate_to directive, passing FQDNs, and it works, but on the initial run it asks if I want to continue connecting (yes/no) and waits for user input (if given yes, it works as expected). Is there any other simpler, cleaner way to solve this? (Note: bear in mind I cannot modify the contents of the inventory, e.g. by adding aliases.)
Here is the snippet of the code:
- name: Checks
  become_user: hdfs
  shell: hdfs dfsadmin -report
  delegate_to: "{{ item.host_name }}"
  with_items:
    - "{{ rackDetails }}"
Here is the snippet of the master yml file where the above role is called :
- name: Checks for the property
  hosts: servernode
  roles:
    - Checks
The inventory snippet looks like this:
[servernode]
10.0.2.15 ansible_ssh_user=grant ansible_ssh_pass=grant
[all]
10.0.2.15 ansible_ssh_user=grant ansible_ssh_pass=grant
10.0.2.16 ansible_ssh_user=grant ansible_ssh_pass=grant
10.0.2.17 ansible_ssh_user=grant ansible_ssh_pass=grant
10.0.2.18 ansible_ssh_user=grant ansible_ssh_pass=grant
10.0.2.19 ansible_ssh_user=grant ansible_ssh_pass=grant
And finally, this is what I am passing via extra vars:
{"rackDetails":[{"host_name":"prod_node1.paas.com","rack_name":"/rack1"},{"host_name":"prod_node2.paas.com","rack_name":"/rack2"},{"host_name":"prod_node3.paas.com","rack_name":"/rack3"}]}
I wonder whether delegate_to works when the hosts are passed as extra vars but don't live in the inventory.
Anyway, you can use the dig lookup plugin:
http://docs.ansible.com/ansible/playbooks_lookups.html#the-dns-lookup-dig
{{ lookup('dig', item.host_name) }}
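A minimal sketch of how that could look in the Checks role, assuming dnspython (which the dig lookup requires) is installed on the control machine:

- name: Resolve each FQDN from rackDetails to an IP
  debug:
    msg: "{{ item.host_name }} resolves to {{ lookup('dig', item.host_name) }}"
  with_items:
    - "{{ rackDetails }}"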
What I think you want, instead, is to convert those FQDNs into IP addresses and pass them to the -l (limit) option of the playbook:
ansible-playbook my_playbook -l $( getent hosts $( cat fqdns.txt ) | cut -d' ' -f1 | tr '\n' ',' )
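As for the yes/no prompt mentioned in the question: that is OpenSSH's host-key confirmation on first connection. If automatically accepting unknown host keys is acceptable in your environment, Ansible can be told to skip the check:

export ANSIBLE_HOST_KEY_CHECKING=False

or, equivalently, set host_key_checking = False under [defaults] in ansible.cfg.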
Consider the case where I want to check something quickly, something that doesn't really need connecting to a host (say, to check how Ansible itself works, like the inclusion of handlers or something), or where localhost will do. I'd probably give up on this, but the man page says:
-i PATH, --inventory=PATH
    The PATH to the inventory, which defaults to /etc/ansible/hosts. Alternatively, you can use a comma-separated list of hosts or a single host with a trailing comma host,.
And when I run ansible-playbook without inventory, it says:
[WARNING]: provided hosts list is empty, only localhost is available
Is there an easy way to run playbook against no host, or probably localhost?
Prerequisites: you need to have an SSH server running on the host (ssh localhost should let you in).
Then if you want to use password authentication (do note the trailing comma):
$ ansible-playbook playbook.yml -i localhost, -k
In this case you also need sshpass.
In case of public key authentication:
$ ansible-playbook playbook.yml -i localhost,
And the test playbook, to get you started:
- hosts: all
  tasks:
    - debug: msg=test
You need the trailing comma in the localhost, argument because otherwise it would be treated as a path to an inventory file rather than as a host list.
You can define a default inventory with only localhost in it.
This is explained here:
https://docs.ansible.com/ansible/latest/reference_appendices/config.html#the-configuration-file
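A minimal sketch of that setup, with assumed file names:

# ansible.cfg (in the project directory)
[defaults]
inventory = ./hosts

# ./hosts
localhost ansible_connection=local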
And in your playbook, use this:
- hosts: all
  connection: local
  tasks:
    - debug: msg=test
It will use a local connection, so no SSH is required (thus it doesn't expose your machine). It might also be quicker, unless you need to troubleshoot an SSH issue.
Also, for a quicker feedback loop, you can set gather_facts: no, since you already know your target.
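Combined, the quick-feedback version would look like this:

- hosts: all
  connection: local
  gather_facts: no
  tasks:
    - debug: msg=test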
I have a custom SSH config file that I typically use as follows
ssh -F ~/.ssh/client_1_config amazon-server-01
Is it possible to tell Ansible to use this config for certain groups? It already has the keys, ports, and users all set up. I have this sort of config for multiple clients, and would like to keep the configs separate if possible.
Not fully possible. You can set SSH arguments in ansible.cfg:
[ssh_connection]
ssh_args = -F ~/.ssh/client_1_config
(Only the options belong here; the target host still comes from the inventory, so the hostname should not be part of ssh_args.) Unfortunately it is not possible to define this per group, per inventory, or anything else that specific.
I believe you can achieve what you want like this in your inventory:
[group1]
server1
[group2]
server2
[group1:vars]
ansible_ssh_user=vagrant
ansible_ssh_common_args='-F ssh1.cfg'
[group2:vars]
ansible_ssh_user=vagrant
ansible_ssh_common_args='-F ssh2.cfg'
You can then be as creative as you want and construct SSH config files such as this:
$ cat ssh1.cfg
Host server1
    HostName 192.168.1.1
    User someuser
    Port 22
    IdentityFile /path/to/id_rsa
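With that in place, Ansible picks up the right SSH config per group automatically, e.g. (inventory file name assumed):

ansible group1 -i inventory -m ping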
With Ansible 2, you can set a ProxyCommand in the ansible_ssh_common_args inventory variable. Any arguments specified in this variable are added to the sftp/scp/ssh command line when connecting to the relevant host(s). Consider the following inventory group:
[gatewayed]
foo ansible_host=192.0.2.1
bar ansible_host=192.0.2.2
You can create group_vars/gatewayed.yml with the following contents:
ansible_ssh_common_args: '-o ProxyCommand="ssh -W %h:%p -q user@gateway.example.com"'
and that will do the trick.
You can find further information at: http://docs.ansible.com/ansible/faq.html
Another way: assuming you have associated SSH key identity files in configuration groupings for various servers, like I do in the ~/.ssh/config file, you might have a bunch of entries like this one:
Host wholewideworld
    Hostname 204.8.19.16
    Port 22
    User why_me
    IdentityFile ~/.ssh/id_rsa
    PubkeyAuthentication yes
    IdentitiesOnly yes
They work like this:
ssh wholewideworld
To run Ansible ad-hoc commands, first run:
eval $(ssh-agent -s)
ssh-add ~/.ssh/*rsa
The output will be something like:
Enter passphrase for /home/why-me/.ssh/id_rsa:
Identity added: /home/why-me/.ssh/id_rsa (/home/why-me/.ssh/id_rsa)
Now you should be able to include wholewideworld in your ~/ansible-hosts.
After you put in the host alias from the config file, it works to just run:
ansible all -m ping
output:
127.0.0.1 | SUCCESS => {
"changed": false,
"ping": "pong"
}
wholewideworld | SUCCESS => {
"changed": false,
"ping": "pong"
}