Ansible: ansible_user in inventory vs remote_user in playbook

I am trying to run an Ansible playbook against a server using an account other than the one I am logged in with on the control machine. I tried to specify an ansible_user in the inventory file, following the documentation on Inventory:
[srv1]
192.168.1.146 ansible_connection=ssh ansible_user=user1
However Ansible called with ansible-playbook -i inventory playbook.yml -vvvv prints the following:
GATHERING FACTS ***************************************************************
<192.168.1.146> ESTABLISH CONNECTION FOR USER: techraf
What worked for me was adding the remote_user argument to the playbook:
- hosts: srv1
  remote_user: user1
Now the same Ansible command connects as user1:
GATHERING FACTS ***************************************************************
<192.168.1.146> ESTABLISH CONNECTION FOR USER: user1
Adding a remote_user entry to ansible.cfg also makes Ansible use the intended user instead of the logged-on one.
Are ansible_user in the inventory file and remote_user in the playbook/ansible.cfg meant for different purposes?
What is ansible_user used for? And why doesn't Ansible observe the setting in the inventory?

You're likely running into a common issue: the published Ansible docs are for the development version (2.0 right now), and we don't keep the old ones around. It's a big point of contention... Assuming you're using something pre-2.0, the inventory var name you need is ansible_ssh_user. ansible_user works in 2.0 (as does ansible_ssh_user - it gets aliased in).
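With a pre-2.0 Ansible, the inventory entry from the question would then become (only the variable name changes):
[srv1]
192.168.1.146 ansible_connection=ssh ansible_ssh_user=user1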

I usually add my remote username in /etc/ansible/ansible.cfg as follows:
[defaults]
remote_user = MY_REMOTE_USERNAME
This way it is not required to configure ansible_user in the inventory file for each host entry.

Related

how to set different python interpreters for local and remote hosts

Use-Case:
Playbook 1
When we first connect to the remote host(s), the remote host will already have some Python version installed - the auto-discovery feature will find it.
Now we install ansible-docker on the remote host.
From this point on, the ansible-docker docs suggest using ansible_python_interpreter=/usr/bin/env python-docker.
Playbook 2
We connect to the same host(s) again, but now we must use the /usr/bin/env python-docker Python interpreter.
What is the best way to do this?
Currently we set ansible_python_interpreter on the playbook level of Playbook 2:
---
- name: DaqMon app
  vars:
    - ansible_python_interpreter: "{{ '/usr/bin/env python-docker' }}"
This works, but it also changes the Python interpreter for the local actions, and thus the local actions fail because python-docker does not exist locally.
The current workaround is to explicitly specify ansible_python_interpreter on every local action, which is tedious and error-prone.
Questions:
The ideal solution would be to add '/usr/bin/env python-docker' as a fallback to interpreter-python-fallback - but I think this is not possible.
Is there a way to set the Python interpreter only for the remote hosts and keep the default for localhost?
Or is it possible to explicitly override the Python interpreter for the local host?
You should set the ansible_python_interpreter on the host level.
So yes, it's possible to explicitly set the interpreter for localhost in your inventory.
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python
And I assume that you could also use set_fact on hostvars[<host>].ansible_python_interpreter on your localhost or docker host.
There is a brilliant article about set_fact on hostvars! ;-P
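A minimal inventory sketch of that host-level approach (the group name docker_hosts and the host name remote-host-1 are made up for illustration): the remote Docker hosts get the python-docker interpreter while localhost keeps a stock interpreter:
localhost ansible_connection=local ansible_python_interpreter=/usr/bin/python

[docker_hosts]
remote-host-1

[docker_hosts:vars]
ansible_python_interpreter=/usr/bin/env python-docker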
Thanks to the other useful answers I found an easy solution:
on the playbook level we set the python interpreter to /usr/bin/env python-docker
then we use a set_fact task to override the interpreter for localhost only
we must also delegate the facts
we can use the magic ansible_playbook_python variable, which refers to the python interpreter that was used on the (local) Ansible host to start the playbook: see Ansible docs
Here are the important parts at the start of Playbook 2:
---
- name: Playbook 2
  vars:
    - ansible_python_interpreter: "{{ '/usr/bin/env python-docker' }}"
  ...
  tasks:
    - set_fact:
        ansible_python_interpreter: '{{ ansible_playbook_python }}'
      delegate_to: localhost
      delegate_facts: true
Try using set_fact for ansible_python_interpreter at the host level in the first playbook.
Globally, use the interpreter_python key in the [defaults] section of the ansible.cfg file.
[defaults]
interpreter_python = auto_silent

Ansible playbook does not run tasks in roles

I have a simple Ansible role with one task, but the problem is that when I run it, the tasks are not actually started.
It worked when I tried my task without roles, and I'm not sure why it's happening when I use roles.
Version of ansible: ansible 2.2.3.0
This is my run.yml
- name: add user to general purpose
  hosts: localhosts
  roles:
    - adduser
This is adduser/tasks/main.yml:
- name: Create user
  shell: sudo adduser tom
Running
ansible-playbook run.yml -vvv
This is the output
Using /etc/ansible/ansible.cfg as config file
[WARNING]: provided hosts list is empty, only localhost is available
PLAYBOOK: run.yml
**************************************************************
1 plays in run.yml
PLAY RECAP
*********************************************************************
It is because you have a typo in your hosts: field; the name is localhost not localhosts (as there is no such thing as a plural of the local host)
Also, while this isn't what you asked, it is bad news to (a) manually use sudo in a module and (b) call adduser unconditionally, as it will bomb the second time you run that playbook. What you want is to tell Ansible that the task needs elevated privileges and then use the user: module, allowing Ansible to ensure such a user exists by the end of that role:
- name: Create user
  become: yes
  user:
    name: tom
The benefit of being more declarative is (a) that's how Ansible works and (b) it allows Ansible to be idempotent across runs.
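Putting both fixes together, a corrected version of the files from the question might look like this (only the hosts: typo and the task body change):
# run.yml
- name: add user to general purpose
  hosts: localhost
  roles:
    - adduser

# adduser/tasks/main.yml
- name: Create user
  become: yes
  user:
    name: tom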

Ansible : remote_user in playbook file issues

I've defined the remote_user variable for each host group, but the remote_user value is not taken from the one I defined; instead it uses the value assigned at the top.
Ansible version:
# ansible --version
ansible 2.3.1.0
config file = /etc/ansible/ansible.cfg
configured module search path = Default w/o overrides
python version = 2.7.12 (default, Jul 1 2016, 15:12:24) [GCC 5.4.0 20160609]
Playbook file: info.yml
---
- hosts: all
  remote_user: demo
  roles:
    - common

- hosts: devlocal
  remote_user: root
  become: yes
  roles:
    - common

- hosts: testlocal
  remote_user: test
  become: yes
  roles:
    - common
When I run the playbook for the devlocal hosts, the user name is taken from the first assignment (i.e. "demo"). It should use the remote_user "root" in my case.
Logs:
# ansible-playbook -i hosts -l devlocal info.yml --ask-pass -vvvv
Using /etc/ansible/ansible.cfg as config file
SSH password:
Loading callback plugin default of type stdout, v2.0 from /usr/lib/python2.7/dist-packages/ansible/plugins/callback/__init__.pyc
PLAYBOOK: info.yml ********************************************************************************************************************************
3 plays in info.yml
PLAY [all] ****************************************************************************************************************************************
TASK [Gathering Facts] ****************************************************************************************************************************
Using module file /usr/lib/python2.7/dist-packages/ansible/modules/system/setup.py
<10.11.12.213> ESTABLISH SSH CONNECTION FOR USER: demo
Could someone please explain what the issue is here? Thanks in advance.
Could someone please explain what the issue is here?
The issue here is that you specified the first play to run as demo:
- hosts: all
  remote_user: demo
  roles:
    - common
And Ansible runs it as demo which seems not to be your objective.
That's why Ansible provides the inventory mechanism, so you can specify connection details per host there, not in plays.
I've defined the remote_user variable for each host group
Wrong. You've defined remote_user for each play and not host group.
Hosts and groups are defined via inventory.
So you should define devlocal and testlocal groups with ansible_user assigned (see the inventory sketch below).
And have a single play:
- hosts: all
  roles:
    - common
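A sketch of such an inventory (the devlocal address is the one from the logs above; the testlocal host name is made up):
[devlocal]
10.11.12.213 ansible_user=root

[testlocal]
testhost.example.com ansible_user=test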

Can I force current hosts group to be identified as another in a playbook include?

The current case is this:
I have a playbook which provisions a bunch of servers and installs apps to these servers.
One of these apps already has its own Ansible playbook which I wanted to use. Now my problem arises from this playbook, as it's limited to hosts: [prod], and the host groups I have in the upper-level playbook are different.
I know I could just use add_host to add the needed hosts to a prod group, but that is a solution which I don't like.
So my question is: Is there a way to add the current hosts to a new host group in the include statement?
Something like - include: foo.yml prod={{ ansible_host_group }}
Or can I somehow include only the tasks from a playbook?
No, there's no direct way to do this.
Now my problem arises from this playbook, as it's limited to
hosts: [prod]
You can set up hosts more flexibly via extra vars:
- name: add role fail2ban
  hosts: '{{ target }}'
  remote_user: root
  roles:
    - fail2ban
Run it:
ansible-playbook testplaybook.yml --extra-vars "target=10.0.190.123"
ansible-playbook testplaybook.yml --extra-vars "target=webservers"
Is this workaround suitable for you?

Can I set remote_user in host_vars or group_vars?

I'm starting to write my first serious playbook in ansible.
Something I'd like to do is specify a different remote_user value per host. I'm able to set remote_user in ansible.cfg, through the CLI -u option, and even in play variables, like so:
---
- name: install dependencies
  hosts: all
  sudo: yes
  vars:
    remote_user: username
But setting the var at the host or group level (which makes the most sense for my approach) won't work. For instance, having this file as group_vars/all gets me an "Authentication failure" fatal error:
---
remote_user: username
What am I missing?
What you are doing appears to be undocumented. Specifically, you have this:
vars:
  remote_user: username
when, according to the documentation, it should be like this:
remote_user: username
The fact that it happened to work when you did it the wrong way is irrelevant. There is some side effect that makes it work in that case, but of course it won't work in another case, and it may behave differently in different Ansible versions.
To log on as a different user on each host, the usual way is to specify ansible_ssh_user in the inventory. Whether this is a variable that can be overridden in host_vars or group_vars I'm not certain. See also issue 4688 for information about how ansible_ssh_user and remote_user may override each other.
In your group or host vars, add a new variable with a name like playbook_username, then in your playbook change remote_user to something like this: remote_user: "{{ playbook_username }}"
Normally host variables don't seem to override remote_user, but this workaround works well enough.
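A minimal sketch of that workaround (the file names and the placeholder ping task are just for illustration):
# group_vars/all
playbook_username: username

# playbook.yml
- hosts: all
  remote_user: "{{ playbook_username }}"
  tasks:
    - ping: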
You can specify ansible_ssh_user in group_vars/all.yaml.
See the remote_user documentation.
(Documenting this because I just ran into this quirk in 2023 on ansible [core 2.14.1].)
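For example (the username is a placeholder):
# group_vars/all.yaml
ansible_ssh_user: username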
