Ansible: use the current user in configuration

I am using Ansible to configure several computers after installation.
For this I run Ansible locally on the machines. The "main" user on each installation often has a different name. I want to use that user for variables like become_user. The "main" user is also the user who calls ansible-playbook.
So can I somehow set "become_user" to the user who called ansible-playbook?

Not sure why you need to set become_user to the user you are already running your playbook with, but you can use the env lookup to get the USER environment variable:
- hosts: localhost
  tasks:
    - debug: msg="{{ lookup('env','USER') }}"
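If you actually want to feed that value to become_user, here is a minimal sketch (becoming your own user is usually a no-op; the whoami task is purely illustrative):
- hosts: localhost
  become: true
  become_user: "{{ lookup('env','USER') }}"
  tasks:
    # whoami should report the invoking user's name
    - command: whoami
      register: result
    - debug:
        var: result.stdout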

You can log on locally to the control host as 'nathan', but want to connect to other servers as user 'ansible' (better set in ansible.cfg):
remote_user = ansible
If you want to connect to the remote host as 'ansible' but perform one task as root or apache (i.e. sudo to root, apache or another user), you should use become_user for that particular task.
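For example, a minimal sketch (the whoami tasks are purely illustrative):
- hosts: all
  remote_user: ansible
  tasks:
    - name: Runs as the connecting user 'ansible'
      command: whoami

    - name: Runs as root via privilege escalation
      command: whoami
      become: true
      become_user: root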
Please also note that the remote server may NOT have the same user as the control host (in the general case)!
In your particular case, if you log on locally as 'nathan' and want to connect to the remote server as 'nathan', you should omit both remote_user and become_user: just log on with your current credentials!
For example, suppose there are two sysadmins in an organization, nathan and peter, and therefore two workstations (heidelberg-nathan and berlin-peter) acting as Ansible control hosts, plus thousands of clients. Both nathan and peter connect to the remote side as nathan or peter and perform tasks. Each of them has passwordless sudo rights to perform admin tasks.
PS Ok, let's test both solutions (the first from Konstantin Suvorov's answer, the second from knowhy's answer).
My control host is berlin-ansible-01; I'm logged in as 'nathan'. The remote client is host berlin-client-01. I will log into the client host as user 'ansible'.
My ansible.cfg is:
[defaults]
sudo_flags=-HE
hash_behaviour = merge
retry_files_enabled = false
log_path = ./main.log
ask_vault_pass=true
remote_user = ansible
Playbook is simple:
- name: test
  hosts: '{{ target }}'
  tasks:
    - debug: msg="step 1 = {{ lookup('env','USER') }}"
    - setup:
    - debug: msg="step 2 = {{ hostvars[target].ansible_env.USER }}"
    # more than one client in target needs to iterate over items:
    # - debug: msg="step 2 = {{ hostvars[item].ansible_env.USER }}"
    #   with_items: "{{ hostvars }}"
Let's run it:
[nathan@berlin-ansible-01 stackoverflow]$ ansible-playbook -i hosts_staging test.yml --extra-vars "target=berlin-client-01"
Vault password:
PLAY [test] ********************************************************************

TASK [setup] *******************************************************************
ok: [berlin-client-01]

TASK [debug] *******************************************************************
ok: [berlin-client-01] => {
    "msg": "step 1 = nathan"
}

TASK [setup] *******************************************************************
ok: [berlin-client-01]

TASK [debug] *******************************************************************
ok: [berlin-client-01] => {
    "msg": "step 2 = ansible"
}

PLAY RECAP *********************************************************************
berlin-client-01           : ok=4    changed=0    unreachable=0    failed=0

ansible-playbook provides the --become-user CLI flag along with --ask-become-pass (if needed).
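For example (site.yml is a placeholder playbook name):
ansible-playbook site.yml --become --become-user=ansible --ask-become-pass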
In most cases, this is a bad setup. You should standardize the user on all of your machines; otherwise you'll have to maintain certs/passwords for each user separately.

There is no need to set become_user when the playbook should run as the user who started ansible-playbook.
become is for privilege escalation. If I got this question right, privilege escalation is not needed.
The name of the user who runs the playbook is available as an Ansible fact: {{ ansible_env.USER }}
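A quick sketch to see it (ansible_env is only populated once facts are gathered, which happens by default):
- hosts: localhost
  tasks:
    - debug:
        msg: "Playbook runs as {{ ansible_env.USER }}"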

Related

Ansible user module: check if a user exists or not?

The requirement is for the Ansible user module to just check whether a user exists and not take any action.
Does check_mode help here? How should such a playbook task be written?
This link, Ansible playbook to check user exist or display error message, provides an alternative. Is it possible to get this done with the builtin user module?
In a nutshell and for the most basic check (i.e. username is there regardless of any other user configuration):
---
- name: Check if users exist
  hosts: localhost
  gather_facts: false
  become: true

  vars:
    users_to_test:
      - daemon # This one should exist, at least on ubuntu
      - a_non_existing_user

  tasks:
    - name: Check if users exist
      ansible.builtin.user:
        name: "{{ item }}"
      loop: "{{ users_to_test }}"
      check_mode: true
      register: test_users

    - name: Report
      ansible.builtin.debug:
        msg: "User {{ item.item }} {{ 'exists' if item.state | d('') == 'present' else 'does not exist' }}"
      loop: "{{ test_users.results }}"
      loop_control:
        label: "{{ item.item }}"
Which gives on my Ubuntu 20 local machine:
$ ansible-playbook testuser.yml
PLAY [Check if users exist] *********************************************************************************************

TASK [Check if users exist] *********************************************************************************************
ok: [localhost] => (item=daemon)
changed: [localhost] => (item=a_non_existing_user)

TASK [Report] ***********************************************************************************************************
ok: [localhost] => (item=daemon) => {
    "msg": "User daemon exists"
}
ok: [localhost] => (item=a_non_existing_user) => {
    "msg": "User a_non_existing_user does not exist"
}
Since Ansible is mainly a Configuration Management Tool in which one can declare a Desired State, the requirement
"The requirement is for the Ansible user module to just check if a user exists and not take any action."
is already, for the most part, the default behavior of the user module. If the user already exists and no changes are necessary, the module will just return changed: false and report OK. So it will not take any action then.
For a simple check of whether a user exists, you've already found the Ansible playbook to check user exist via the getent module. Please note that it is not an "alternative" to the user module.
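A minimal sketch of such a getent-based check (the username is illustrative; fail_key: false keeps the task from failing when the key is missing):
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Look up the user in the passwd database
      ansible.builtin.getent:
        database: passwd
        key: daemon
        fail_key: false

    # getent registers the getent_passwd fact; a missing key maps to an empty value
    - name: Report
      ansible.builtin.debug:
        msg: "User daemon {{ 'exists' if getent_passwd['daemon'] else 'does not exist' }}"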
Does check_mode help here?
Since check_mode is mainly for validating tasks, and
"Check mode is just a simulation ... it is great for validating configuration management playbooks that run on one node at a time."
whether it helps will mainly depend on what you try to achieve and how a run should behave.
Further Readings and Q&A
An Ansible playbook for solving a new problem from scratch
Ansible: Only disable existing users?
How to lock a user using Ansible?

Passing hostname to ansible playbook through extravars

I have to pass the host on which the Ansible command will be executed through extra vars.
I don't know in advance which hosts the tasks will be applied to, and therefore my inventory file is currently missing the hosts.
If I understood the article "How to pass extra variables to an Ansible playbook" correctly, overriding hosts is only possible with already-composed groups of hosts.
From the post Ansible issuing warning about localhost I gathered that referencing the hosts to be managed in an Ansible inventory is a must; however, I still have doubts about that, since the usage of extra vars was not mentioned in the given question.
So my question is: what can I do in order to make this playbook work?
- hosts: "{{ host }}"
  tasks:
    - name: KLIST COMMAND
      command: klist
      register: klist_result

    - name: TEST COMMAND
      ansible.builtin.shell: echo hi > /tmp/test_result.txt
... referencing hosts to be managed in an Ansible inventory is a must
Yes, that's the case. Regarding your question
What can I do in order to make this playbook work? (annot. without a "valid" inventory file)
you could try with the following workaround.
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:
    - add_host:
        hostname: "{{ target_hosts }}"
        group: dynamic

- hosts: dynamic
  become: true
  gather_facts: true

  tasks:
    - name: Show hostname
      shell:
        cmd: "hostname && who am i"
      register: result

    - name: Show result
      debug:
        var: result
A call with
ansible-playbook hosts.yml --extra-vars="target_hosts=test.example.com"
results in execution on
TASK [add_host] ***********
changed: [localhost]
PLAY [dynamic] ************
TASK [Show hostname] ******
changed: [test.example.com]
In any case it is recommended to check how to build your inventory.
Further Documentation
add_host module – Add a host (and alternatively a group) to the ansible-playbook in-memory inventory

ansible module add_host error: skipping: no hosts matched

The script, running on a Linux host, should call some Windows hosts holding Oracle databases. Each Oracle database is in DNS with the name "db-[ORACLE_SID]".
Let's say you have a database with ORACLE_SID TEST02; it can be resolved as db-TEST02.
The complete script is doing some more stuff, but this example is sufficient to explain the problem.
The db-[SID] hostnames must be added as dynamic hosts to be able to parallelize the processing.
The problem is that oracle_databases is not passed to the new playbook. It works if I change the hosts from windows to localhost, but I need to analyze something first and get some data from the windows hosts, so this is not an option.
Here is the script:
---
# ansible-playbook parallel.yml -e "databases=TEST01,TEST02,TEST03"
- hosts: windows
  gather_facts: false
  vars:
    ansible_connection: winrm
    ansible_port: 5985
    ansible_winrm_transport: kerberos
    ansible_winrm_kerberos_delegation: true

  tasks:
    - set_fact:
        database: "{{ databases.split(',') }}"

    - name: Add databases as hosts, to parallelize the shutdown process
      add_host:
        name: "db-{{ item }}"
        groups: oracle_databases
      loop: "{{ database | list }}"

    ##### just to check what is in oracle_databases
    - name: show the content of oracle_databases
      debug:
        msg: "{{ item }}"
      with_inventory_hostnames:
        - oracle_databases

- hosts: oracle_databases
  gather_facts: true

  tasks:
    - debug:
        msg:
          - "Hosts, on which the playbook is running: {{ ansible_play_hosts }}"
        verbosity: 1
My inventory file is small, but there will be more Windows hosts in the future:
[adminsw1@obelix oracle_change_home]$ cat inventory
[local]
localhost
[windows]
windows68
And the output
[adminsw1@obelix oracle_change_home]$ ansible-playbook para.yml -l windows68 -e "databases=TEST01,TEST02"
/usr/lib/python2.7/site-packages/ansible/parsing/vault/__init__.py:44: CryptographyDeprecationWarning: Python 2 is no longer supported by the Python core team. Support for it is now deprecated in cryptography, and will be removed in a future release.
from cryptography.exceptions import InvalidSignature
/usr/lib/python2.7/site-packages/requests/__init__.py:89: RequestsDependencyWarning: urllib3 (1.23) or chardet (2.2.1) doesn't match a supported version!
RequestsDependencyWarning)
PLAY [windows] *****************************************************************************************************************************
TASK [set_fact] ****************************************************************************************************************************
ok: [windows68]
TASK [Add databases as hosts, to parallelize the shutdown process] *************************************************************************
changed: [windows68] => (item=TEST01)
changed: [windows68] => (item=TEST02)
TASK [show the content of oracle_databases] ************************************************************************************************
ok: [windows68] => (item=db-TEST01) => {
    "msg": "db-TEST01"
}
ok: [windows68] => (item=db-TEST02) => {
    "msg": "db-TEST02"
}
PLAY [oracle_databases] ********************************************************************************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************************************************************************
windows68 : ok=3 changed=1 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
It might be that Ansible is not parsing the updated inventory, or that the host names are being malformed as it updates the inventory.
In this scenario, you can use the -vv or -vvvv parameter in your Ansible command to get extra logging.
This will give you a complete picture of what Ansible is actually doing as it tries to parse the hosts.
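For example, re-running the command from the question with maximum verbosity:
ansible-playbook para.yml -l windows68 -e "databases=TEST01,TEST02" -vvvv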
I found out what the problem was. The playbook is restricted to the host "windows68" and therefore can't run on the hosts added through the dynamic inventory.
It will work this way:
[adminsw1@obelix oracle_change_home]$ ansible-playbook para.yml -l windows68,oracle_databases -e "databases=TEST01,TEST02"

Using --become for ansible_connection=local

With a personal user account (userx) I run the ansible playbook on all my specified hosts. In ansible.cfg the remote user (which can become root) to be used is:
remote_user = ansible
For the remote hosts this all works fine. It connects as the user 'ansible' and executes all tasks as desired, also changing files (like /etc/ssh/sshd_config) that require root rights.
But now I also want to execute the playbook on the Ansible host itself. I put the following in my inventory file:
localhost ansible_connection=local
which indeed now executes on localhost, but as userx, and this results in "Access denied" for some tasks it needs to do.
This is of course somewhat expected, since remote_user says something about the remote user, not the local one. But still, I expected the playbook to --become locally too, executing the tasks as root (e.g. sudo su -). It seems not to do that.
Running the playbook with --become -vvv tells me
<localhost> ESTABLISH LOCAL CONNECTION FOR USER: userx
and it seems not to try to execute the tasks with sudo. And without using sudo, the task fails.
How can I tell Ansible to use sudo / become on the local connection too?
Nothing special is required. Proof:
The playbook:
---
- hosts: localhost
  gather_facts: no
  connection: local
  tasks:
    - command: whoami
      register: whoami
    - debug:
        var: whoami.stdout
The execution line:
ansible-playbook playbook.yml --become
The result:
PLAY [localhost] ***************************************************************************************************
TASK [command] *****************************************************************************************************
changed: [localhost]
TASK [debug] *******************************************************************************************************
ok: [localhost] => {
    "changed": false,
    "whoami.stdout": "root"
}
PLAY RECAP *********************************************************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
The ESTABLISH LOCAL CONNECTION FOR USER: message will always show the current user, as that is the account used "to connect".
Later, the command(s) called by the module get executed with elevated permissions.
Of course, you can add become: yes at either the play level or for individual tasks.
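For instance, a sketch of the same proof playbook with escalation declared at the play level instead of on the command line:
---
- hosts: localhost
  gather_facts: no
  connection: local
  become: yes
  tasks:
    # whoami reports root once escalation succeeds
    - command: whoami
      register: whoami
    - debug:
        var: whoami.stdout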

Override hosts variable of Ansible playbook from the command line

This is a fragment of a playbook that I'm using (server.yml):
- name: Determine Remote User
  hosts: web
  gather_facts: false
  roles:
    - { role: remote-user, tags: [remote-user, always] }
My hosts file has different groups of servers, e.g.
[web]
x.x.x.x
[droplets]
x.x.x.x
Now I want to execute ansible-playbook -i hosts/<env> server.yml and override hosts: web from server.yml to run this playbook for [droplets].
Can I just override it as a one-off, without editing server.yml directly?
Thanks.
I don't think Ansible provides this feature, which it should. Here's something that you can do:
hosts: "{{ variable_host | default('web') }}"
and you can pass variable_host from either the command line or from a vars file, e.g.:
ansible-playbook server.yml --extra-vars "variable_host=newtarget(s)"
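Or put the value in a vars file and pass it with the @ syntax (the file name is illustrative):
# vars.yml
variable_host: droplets

ansible-playbook server.yml --extra-vars "@vars.yml"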
For anyone who might come looking for a solution:
Playbook
- hosts: '{{ host }}'
  tasks:
    - debug: msg="Host is {{ ansible_fqdn }}"
Inventory
[web]
x.x.x.x
[droplets]
x.x.x.x
Command: ansible-playbook deployment.yml -i hosts --extra-vars "host=droplets"
So you can specify the group name in the extra vars.
We use a simple fail task to force the user to specify the Ansible limit option, so that we don't execute on all hosts by default/accident.
The easiest way I found is this:
---
- name: Force limit
  # 'all' is okay here, because the fail task will force the user
  # to specify a limit on the command line, using -l or --limit
  hosts: 'all'

  tasks:
    - name: checking limit arg
      fail:
        msg: "you must use -l or --limit - when you really want to use all hosts, use -l 'all'"
      when: ansible_limit is not defined
      run_once: true
Now we must use the -l (= --limit option) when we run the playbook, e.g.
ansible-playbook playbook.yml -l www.example.com
Limit option docs:
Limit to one or more hosts. This is required when one wants to run a playbook against a host group, but only against one or more members of that group.
Limit to one host
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1"
Limit to multiple hosts
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit "host1,host2"
Negated limit.
NOTE: Single quotes MUST be used to prevent bash
interpolation.
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'all:!host1'
Limit to host group
ansible-playbook playbooks/PLAYBOOK_NAME.yml --limit 'group1'
This is a bit late, but I think you could use the --limit or -l option to limit the pattern to more specific hosts (version 2.3.2.0).
You could have
- hosts: all (or group)
  tasks:
    - some_task
and then ansible-playbook playbook.yml -l some_more_strict_host_or_pattern
and use the --list-hosts flag to see on which hosts this configuration would be applied.
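For example:
ansible-playbook playbook.yml -l some_more_strict_host_or_pattern --list-hosts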
Another solution is to use the special variable ansible_limit, which is the contents of the --limit CLI option for the current execution of Ansible.
- hosts: "{{ ansible_limit | default(omit) }}"
If the --limit option is omitted, then Ansible issues a warning, but does nothing since no host matched.
[WARNING]: Could not match supplied host pattern, ignoring: None
PLAY ****************************************************************
skipping: no hosts matched
I'm using another approach that doesn't need any inventory and works with this simple command:
ansible-playbook site.yml -e working_host=myhost
To perform that, you need a playbook with two plays:
the first play runs on localhost and adds a host (from the given variable) to a known group in the in-memory inventory
the second play runs on this known group
A working example (copy it and run it with the previous command):
- hosts: localhost
  connection: local
  tasks:
    - add_host:
        name: "{{ working_host }}"
        groups: working_group
      changed_when: false

- hosts: working_group
  gather_facts: false
  tasks:
    - debug:
        msg: "I'm on {{ ansible_host }}"
I'm using ansible 2.4.3 and 2.3.3
I changed mine to default to no host and added a check to catch it. That way the user or cron is forced to provide a single host or group etc. I like the logic from the comment from @wallydrag. The empty_group contains no hosts in the inventory.
- hosts: "{{ variable_host | default('empty_group') }}"
Then add the check in tasks:
tasks:
  - name: Fail script if required variable_host parameter is missing
    fail:
      msg: "You have to add the --extra-vars='variable_host='"
    when: (variable_host is not defined) or (variable_host == "")
Just came across this googling for a solution. Actually, there is one in Ansible 2.5. You can specify your inventory file with --inventory, like this: ansible --inventory configs/hosts --list-hosts all
If you want to run a task that's associated with a host, but on a different host, you should try delegate_to.
In your case, you should delegate to localhost (the Ansible control machine) and call the ansible-playbook command from there.
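A minimal sketch of such delegation (the task itself is illustrative):
- hosts: web
  tasks:
    - name: Run this on the control machine instead of the target host
      command: whoami
      delegate_to: localhost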
I am using ansible 2.5 (2.5.3 exactly), and it seems that the vars file is loaded before the hosts param is evaluated. So you can set the host in a vars.yml file and just write hosts: "{{ host_name }}" in your playbook.
For example, in my playbook.yml:
---
- hosts: "{{ host_name }}"
  become: yes
  vars_files:
    - vars/project.yml
  tasks:
    ...
And inside vars/project.yml:
---
# general
host_name: your-fancy-host-name
Here's a cool solution I came up with to safely specify hosts via the --limit option. In this example, the play will end if the playbook was executed without any hosts specified via the --limit option.
This was tested on Ansible version 2.7.10
---
- name: Playbook will fail if hosts not specified via --limit option.
  # Hosts must be set via limit.
  hosts: "{{ play_hosts }}"
  connection: local
  gather_facts: false

  tasks:
    - set_fact:
        inventory_hosts: []

    - set_fact:
        inventory_hosts: "{{ inventory_hosts + [item] }}"
      with_items: "{{ hostvars.keys() | list }}"

    - meta: end_play
      when: "(play_hosts | length) == (inventory_hosts | length)"

    - debug:
        msg: "About to execute tasks/roles for {{ inventory_hostname }}"
This worked for me, as I am using Azure DevOps to deploy an application using CI/CD pipelines. I had to make the hosts value (in the yml file) more dynamic, so that I can set its value in the release pipeline, for example:
--extra-vars "host=$(target_host)"
My Ansible playbook looks like this:
- name: Apply configuration to test nodes
  hosts: '{{ host }}'
