Can I set remote_user in host_vars or group_vars? - ansible

I'm starting to write my first serious playbook in ansible.
Something I'd like to do is to specify different remote_user values per host. I'm able to set remote_user in ansible.cfg, through the CLI -u option and even in play variables, like so:
---
- name: install dependencies
  hosts: all
  sudo: yes
  vars:
    remote_user: username
But setting the var at the host or group level (which makes the most sense for my approach) won't work. For instance, having this file as group_vars/all gets me an "Authentication failure" fatal error:
---
remote_user: username
What am I missing?

What you are doing appears to be undocumented. Specifically, you have this:
vars:
  remote_user: username
when, according to the documentation, it should be like this:
remote_user: username
The fact that it happened to work when you did it the wrong way is irrelevant. There is some side effect that makes it work in that case, but of course it won't work in another case, and it may behave differently in different Ansible versions.
To log on as a different user on each host, the usual way is to specify ansible_ssh_user in the inventory. Whether that variable can also be overridden in host_vars or group_vars, I'm not certain. See also issue 4688 for information about how ansible_ssh_user and remote_user may override each other.
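For example, a per-host login user can be set directly in the inventory; a minimal sketch (the host names and user names here are hypothetical):
[webservers]
web1.example.com ansible_ssh_user=deploy
web2.example.com ansible_ssh_user=admin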

In your group or host vars, add a new variable with a name like playbook_username, then in your playbook set remote_user to reference it: remote_user: "{{ playbook_username }}"
Host variables don't normally seem to override remote_user directly, but this workaround works well enough.
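A minimal sketch of that workaround (the variable name playbook_username is just an example):
# group_vars/all
---
playbook_username: username

# playbook.yml
---
- name: install dependencies
  hosts: all
  remote_user: "{{ playbook_username }}"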

You can specify ansible_ssh_user in group_vars/all.yaml.
See the remote_user documentation.
(Documenting this because I just ran into the same quirk in 2023 on ansible [core 2.14.1].)
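A minimal sketch (the user name is hypothetical):
# group_vars/all.yaml
---
ansible_ssh_user: deploy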

Related

ansible vagrant windows and localhost

I've got a Vagrant box that I connect to over WinRM; however, I need to fetch a file from it,
use blockinfile on my localhost (macOS), and then copy the file back to the Vagrant box.
Ansible does not like having two 127.0.0.1 entries in the inventory, and I've tried just about everything I can think of but can't get them to work together.
The Vagrant VM running on VirtualBox has NAT set up, but I can't seem to access it other than through the loopback address; fixing that might solve my issue.
I've also tried setting different IPs in the Vagrantfile and those have not been fruitful either.
Below is the inventory file that I've been working with.
[win]
127.0.0.1
[localhost]
control_machine ansible_host=local
[win:vars]
ansible_port=55985
ansible_winrm_transport=basic
ansible_winrm_scheme=http
ansible_user=vagrant
ansible_password="{{ lookup('env', 'WIN_GUEST_PASSWORD') }}"
ansible_connection=winrm
[localhost:vars]
ansible_user=test
ansible_connection=local
ansible_python_interpreter="/Library/Frameworks/Python.framework/Versions/3.8/bin/python3"
There are a few things to say about your inventory:
You seem to be confusing the concepts of group and host
You are defining a group that has the same name as an implicit host (i.e. localhost), maybe because of point 1
You are explicitly defining a host for your Ansible controller, which is probably not what you want since Ansible defines an implicit localhost for you. Note that an explicit definition makes your controller match the all magic group, which is typically not wanted.
From the information you gave, here is how I would write the inventory. The examples below use Ansible's ability to organize vars in separate files/folders; have a look at the inventory documentation for more info. I also used the YAML format for the inventory as it is easier to understand IMO. Feel free to translate back to INI if you wish.
in inventories/my_env/hosts.yml (any filename will do)
---
all:
  hosts:
    my.windows.vagrant:
in inventories/my_env/host_vars/my.windows.vagrant.yml
---
ansible_host: 127.0.0.1
ansible_port: 55985
ansible_winrm_transport: basic
ansible_winrm_scheme: http
ansible_user: vagrant
ansible_password: "{{ lookup('env', 'WIN_GUEST_PASSWORD') }}"
ansible_connection: winrm
in inventories/my_env/host_vars/localhost.yml
---
ansible_python_interpreter: "/Library/Frameworks/Python.framework/Versions/3.8/bin/python3"
Note that I am not (re)defining the implicit localhost in the inventory, only defining the (non-standard) Python interpreter you use on that host. Note as well that I dropped the other vars for localhost because:
the implicit localhost uses the local connection plugin by default
the local connection plugin does not take ansible_user into account and uses the already logged-in user (i.e. the one launching the playbook on the controller)
Once you have done this, you can use the my.windows.vagrant target to address your vagrant windows box and localhost to run things on the controller.
Adapt to your exact needs.
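To illustrate, here is a minimal sketch of the fetch/edit/copy-back flow described in the question; the file paths and the block content are hypothetical:
---
- hosts: my.windows.vagrant
  gather_facts: false
  tasks:
    - name: Fetch the file from the Windows guest
      fetch:
        src: C:\temp\app.conf
        dest: /tmp/app.conf
        flat: yes

    - name: Edit the local copy on the controller
      blockinfile:
        path: /tmp/app.conf
        block: |
          # managed by ansible
      delegate_to: localhost

    - name: Copy the edited file back to the Windows guest
      win_copy:
        src: /tmp/app.conf
        dest: C:\temp\app.conf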

how to set different python interpreters for local and remote hosts

Use-Case:
Playbook 1
when we first connect to a remote host, it already has some Python version installed and the interpreter auto-discovery feature will find it
now we install ansible-docker on the remote host
from this point on, the ansible-docker docs suggest using ansible_python_interpreter=/usr/bin/env python-docker
Playbook 2
We connect to the same host/s again, but now we must use the /usr/bin/env python-docker python interpreter
What is the best way to do this?
Currently we set ansible_python_interpreter on the playbook level of Playbook 2:
---
- name: DaqMon app
  vars:
    - ansible_python_interpreter: "{{ '/usr/bin/env python-docker' }}"
This works, but it also changes the Python interpreter for local actions, and those then fail because python-docker does not exist locally.
The current workaround is to explicitly specify ansible_python_interpreter on every local action, which is tedious and error-prone.
Questions:
the ideal solution would be to add '/usr/bin/env python-docker' as a fallback to interpreter-python-fallback, but I think this is not possible
is there a way to set the python interpreter only for the remote hosts - and keep the default for the localhost?
or is it possible to explicitly override the python interpreter for the local host?
You should set the ansible_python_interpreter on the host level.
So yes, it's possible to explicitly set the interpreter for localhost in your inventory.
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python
And I assume that you could also use set_fact on hostvars[<host>].ansible_python_interpreter on your localhost or docker host.
There is a brilliant article about set_fact on hostvars! ;-P
Thanks to the other useful answers I found an easy solution:
on the playbook level we set the python interpreter to /usr/bin/env python-docker
then we use a set_fact task to override the interpreter for localhost only
we must also delegate the facts
we can use the magic ansible_playbook_python variable, which refers to the python interpreter that was used on the (local) Ansible host to start the playbook: see Ansible docs
Here are the important parts at the start of Playbook 2:
---
- name: Playbook 2
  vars:
    - ansible_python_interpreter: "{{ '/usr/bin/env python-docker' }}"
  ...
  tasks:
    - set_fact:
        ansible_python_interpreter: '{{ ansible_playbook_python }}'
      delegate_to: localhost
      delegate_facts: true
Try to use set_fact for ansible_python_interpreter at host level in the first playbook.
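A minimal sketch of that idea, assuming a fact cache (e.g. the jsonfile backend) is configured so the value carries over to later playbook runs:
# at the end of Playbook 1, after installing ansible-docker
- name: Remember the docker-aware interpreter for this host
  set_fact:
    ansible_python_interpreter: /usr/bin/env python-docker
    cacheable: yes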
Globally, use the interpreter_python key in the [defaults] section of the ansible.cfg file.
interpreter_python = auto_silent
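In context, a minimal ansible.cfg sketch:
[defaults]
interpreter_python = auto_silent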

Iterate over inventory facts [duplicate]

I'm sitting in front of a fairly complex Ansible project that we're using to set up our local development environments (multiple VMs), and there's one role that uses the facts gathered by Ansible to set up the /etc/hosts file on every VM. Unfortunately, when you want to run the playbook for one host only (using the --limit parameter), the facts from the other hosts are (obviously) missing.
Is there a way to force Ansible to gather facts on all hosts, even if you limit the playbook to one specific host?
We tried to add a play to the playbook to gather facts from all hosts, but of course that also gets limited to the one host given by the --limit parameter. If there were a way to force this play to run on all hosts before the other plays, that would be perfect.
I've googled a bit and found the solution with fact caching with redis, but since our playbook is used locally, I wanted to avoid the need for additional software. I know, it's not a big deal, but I was just looking for a "cleaner", Ansible-only solution and was wondering, if that would exist.
Ansible version 2 introduced a clean, official way to do this using delegated facts (see: http://docs.ansible.com/ansible/latest/playbooks_delegation.html#delegated-facts).
The when: hostvars[item]['ansible_default_ipv4'] is not defined condition ensures you don't gather facts again for a host whose facts you already have.
---
# This play will still work as intended if called with --limit "<host>" or --tags "some_tag"
- name: Hostfile generation
  hosts: all
  become: true
  pre_tasks:
    - name: Gather facts from ALL hosts (regardless of limit or tags)
      setup:
      delegate_to: "{{ item }}"
      delegate_facts: True
      when: hostvars[item]['ansible_default_ipv4'] is not defined
      with_items: "{{ groups['all'] }}"
  tasks:
    - template:
        src: "templates/hosts.j2"
        dest: "/etc/hosts"
      tags:
        - hostfile
...
In general the way to get facts for all hosts even when you don't want to run tasks on all hosts is to do something like this:
- hosts: all
  tasks: []
But as you mentioned, the --limit parameter will limit what hosts this would be applied to.
I don't think there's a way to simply tell Ansible to ignore the --limit parameter on any plays. However there may be another way to do what you want entirely within Ansible.
I haven't used it personally, but as of Ansible 1.8 fact caching is available. In a nutshell, with fact caching enabled Ansible will use a redis server to cache all the facts about hosts it encounters and you'll be able to reference them in subsequent playbooks:
With fact caching enabled, it is possible for machines in one group to reference variables about machines in the other group, despite the fact that they have not been communicated with in the current execution of /usr/bin/ansible-playbook.
This still seems to be an issue without a clean solution here in 2016, but newer versions of Ansible offer a "jsonfile" fact caching backend, which seems to be a decent alternative to installing Redis locally just to address this need. Now I just fire off an ansible all -m setup before running a playbook with the --limit option. Good enough for jazz!
http://docs.ansible.com/ansible/playbooks_variables.html#fact-caching
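A minimal sketch of enabling the jsonfile backend in ansible.cfg (the cache path and timeout are arbitrary examples):
[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
fact_caching_timeout = 86400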
You could modify your playbook to:
...
- hosts: "{{ limit_hosts | default('default_group') }}"
  tasks:
    ...
...
And when you run it, if limit_hosts is not defined (the normal state) then it will run on the default_group inventory group, BUT if you run it as:
ansible-playbook --extra-vars "limit_hosts=myHost" myplaybook.yml
Then it will only run on myHost, but you could still have other plays with different hosts: declarations, for fact gathering or anything else.
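For example, a hypothetical layout that always gathers facts everywhere but limits the actual work to the selected hosts:
---
- hosts: all
  tasks: []    # fact gathering only

- hosts: "{{ limit_hosts | default('default_group') }}"
  tasks:
    - debug:
        msg: "runs only on the selected hosts"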

Can I force current hosts group to be identified as another in a playbook include?

The current case is this:
I have a playbook which provisions a bunch of servers and installs apps to these servers.
One of these apps already has its own Ansible playbook which I wanted to use. Now my problem arises from this playbook, as it's limited to hosts: [prod] and the host groups I have in the upper-level playbook are different.
I know I could just use add_host to add the needed hosts to a prod group, but that is a solution which I don't like.
So my question is: Is there a way to add the current hosts to a new host group in the include statement?
Something like - include: foo.yml prod={{ ansible_host_group }}
Or can I somehow include only the tasks from a playbook?
No, there's no direct way to do this.
Now my problem arises from this playbook, as it's limited to
hosts: [prod]
You can set up hosts more flexibly via extra vars:
- name: add role fail2ban
  hosts: '{{ target }}'
  remote_user: root
  roles:
    - fail2ban
Run it:
ansible-playbook testplaybook.yml --extra-vars "target=10.0.190.123"
ansible-playbook testplaybook.yml --extra-vars "target=webservers"
Is this workaround suitable for you?

Ansible: ansible_user in inventory vs remote_user in playbook

I am trying to run an Ansible playbook against a server using an account other than the one I am logged on the control machine. I tried to specify an ansible_user in the inventory file according to the documentation on Inventory:
[srv1]
192.168.1.146 ansible_connection=ssh ansible_user=user1
However Ansible called with ansible-playbook -i inventory playbook.yml -vvvv prints the following:
GATHERING FACTS ***************************************************************
<192.168.1.146> ESTABLISH CONNECTION FOR USER: techraf
What worked for me was adding the remote_user argument to the playbook:
- hosts: srv1
  remote_user: user1
Now the same Ansible command connects as user1:
GATHERING FACTS ***************************************************************
<192.168.1.146> ESTABLISH CONNECTION FOR USER: user1
Also, adding the remote_user setting to ansible.cfg makes Ansible use the intended user instead of the logged-on one.
Are the ansible_user in inventory file and remote_user in playbook/ansible.cfg for different purposes?
What is the ansible_user used for? Or why doesn't Ansible observe the setting in the inventory?
You're likely running into a common issue: the published Ansible docs are for the development version (2.0 right now), and we don't keep the old ones around. It's a big point of contention... Assuming you're using something pre-2.0, the inventory var name you need is ansible_ssh_user. ansible_user works in 2.0 (as does ansible_ssh_user - it gets aliased in).
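In other words, on a pre-2.0 release the inventory line from the question would be written as (same host and user as above):
[srv1]
192.168.1.146 ansible_connection=ssh ansible_ssh_user=user1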
I usually add my remote username in /etc/ansible/ansible.cfg as follows:
remote_user = MY_REMOTE_USERNAME
This way it is not required to configure ansible_user in the inventory file for each host entry.
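For completeness, that setting lives in the [defaults] section; a minimal sketch:
[defaults]
remote_user = MY_REMOTE_USERNAME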
