Ansible: include a task list with become

I want to execute a certain list of tasks (within a role in a playbook) in Ansible as a specific user. The user will actually come from a variable, but in the minimal example I'm hard-coding it to "dev" (this user does exist). I can't work out why the following doesn't work.
My main.yml in the roles/foo/tasks is
- include_tasks: "{{ role_path }}/tasks/content.yml"
  become: yes
  become_user: dev
While the content.yml just fetches the current user:
- command: whoami
  register: whoami

- debug:
    var: whoami
My playbook is
- hosts: dev
  become: true
  remote_user: root
  roles:
    - foo
I am getting the following output:
PLAY [dev] ********************************************************************************************************************************************************************************************************************************************************************
TASK [Gathering Facts] ********************************************************************************************************************************************************************************************************************************************************
ok: [adco-test-webdev]
TASK [foo : include_tasks] ****************************************************************************************************************************************************************************************************************************************************
included: /smbshare/ansible/roles/foo/tasks/content.yml for adco-test-webdev
TASK [foo : command] **********************************************************************************************************************************************************************************************************************************************************
changed: [adco-test-webdev]
TASK [foo : debug] ************************************************************************************************************************************************************************************************************************************************************
ok: [adco-test-webdev] => {
    "whoami": {
        "changed": true,
        "cmd": [
            "whoami"
        ],
        "delta": "0:00:00.002194",
        "end": "2018-07-25 02:05:54.879601",
        "failed": false,
        "rc": 0,
        "start": "2018-07-25 02:05:54.877407",
        "stderr": "",
        "stderr_lines": [],
        "stdout": "root",
        "stdout_lines": [
            "root"
        ]
    }
}
Why is it giving the user as root? I know I connect as root, but I then become dev for the included tasks, don't I?
If this is how it's meant to work, then how should I configure a role so that a whole list of tasks is run as a certain user? Do I have to repeat become and become_user on every item?

Use import_tasks instead of include_tasks:
- import_tasks: "{{ role_path }}/tasks/content.yml"
  become: yes
  become_user: dev
In future versions (starting with 2.7) you will be able to do it with a new parameter, apply:
- include_tasks: "{{ role_path }}/tasks/content.yml"
  args:
    apply:
      become: yes
      become_user: dev
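For context: with a dynamic include_tasks, keywords like become apply to the include statement itself rather than to the tasks it loads, which is why whoami still reported root; a static import_tasks copies those keywords onto every imported task. If the whole role should run as that user anyway, another option is to set become on the role entry in the playbook. A minimal sketch, using the role and user names from the question:

- hosts: dev
  remote_user: root
  roles:
    - role: foo
      become: yes
      become_user: dev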

Related

How to loop through the items in the list which is returned by hostnames from the inventory groups

I need to loop through items, where the items come from inventory groups.
I tried almost all the solutions posted on Stack Overflow, but no luck.
Here is my playbook:
---
- hosts: dev
  gather_facts: yes
  become: true
  become_method: su
  become_user: xxxxx
  tasks:
    - name: Update the file contents
      replace:
        path: "/path/to/file{{ item.path }}.xml"
        regexp: '(xxxxxxx[\s\S]*yyyyyyy$)'
        replace: "/zzz/zzzz/{{ item.replace }}.dat /qqqq/qqqqq/q"
        backup: yes
      with_items:
        - { path: "{{ groups['dev'] }}", replace: "{{ groups['pc'] }}" }
And this is my inventory:
[dev]
host1
host2
[pc]
1234
6789
I want the task "Update the file contents" to run for every hostname under the dev group, pairing each item.path with the corresponding item.replace.
The result of my execution is:
failed: [host1] (item={u'path': [u'host1', u'host2'], u'replace': [u'1234', u'6789']}) => {"ansible_loop_var": "item", "changed": false, "item": {"path": ["host1", "host2"], "replace": ["1234", "6789"]}, "msg": "Path /path/to/file[u'host1', u'host2'].xml does not exist !", "rc": 257}
failed: [host2] (item={u'path': [u'host1', u'host2'], u'replace': [u'1234', u'6789']}) => {"ansible_loop_var": "item", "changed": false, "item": {"path": ["host1", "host2"], "replace": ["1234", "6789"]}, "msg": "Path /path/to/file[u'host1', u'host2'].xml does not exist !", "rc": 257}
Expected output:
[host1] (item={u'path': [u'host1'], u'replace': [u'1234']}) "msg": "Path /path/to/file/host1.xml
[host2] (item={u'path': [u'host2'], u'replace': [u'6789']}) "msg": "Path /path/to/file/host2.xml
This works fine if I have only one hostname in the dev group. I need the task to work for all the hosts when there is more than one.
Can anyone help me solve this?
Thank you!
According to your desired output, there is no need to loop over all hosts.
There is no need even to run the task against the pc host group.
- name: Update the file contents
  replace:
    path: "/path/to/file/{{ inventory_hostname_short }}.xml"
    regexp: '(xxxxxxx[\s\S]*yyyyyyy$)'
    replace: "/zzz/zzzz/{{ groups['pc'][groups['dev'].index(inventory_hostname)].split('.')[0] }}.dat /qqqq/qqqqq/q"
    backup: yes
  when: inventory_hostname in groups['dev']
I also avoided having the domain name in the hostnames.
{{ groups['pc'][groups['dev'].index(inventory_hostname)].split('.')[0] }}
Here I'm getting the host from group [pc] whose index number matches that of the host from group [dev] the task is currently running on:
when running on host1, which has index number 0 in group [dev] ==> groups['pc'][0]
when running on host2, which has index number 1 in group [dev] ==> groups['pc'][1]
With .split('.')[0] I'm getting the hostname without the domain name.
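If you genuinely need a per-pair loop instead (say, running all the updates from a single host), one hedged alternative is to zip the two groups; this assumes dev and pc have the same length and matching order:

- name: Update the file contents for each dev/pc pair
  replace:
    path: "/path/to/file/{{ item.0 }}.xml"
    regexp: '(xxxxxxx[\s\S]*yyyyyyy$)'
    replace: "/zzz/zzzz/{{ item.1 }}.dat /qqqq/qqqqq/q"
    backup: yes
  # item.0 is the dev host, item.1 the pc entry at the same index
  loop: "{{ groups['dev'] | zip(groups['pc']) | list }}"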

How to ignore specific errors in an Ansible task

I have an Ansible task that can sometimes fail due to an error during the creation of a user account, especially if the account is already in use and the user is logged in. If the task fails with a specific error message, like "user account in use", the play must continue; it should fail only on other, unexpected error messages. The task looks like this:
- name: modify user
  user:
    state: "{{ user.state | default('present') }}"
    name: "{{ user.name }}"
    home: "{{ user_base_path }}/{{ user.name }}"
    createhome: true
Since it's not a shell command, I cannot simply register a variable and check its .rc. I also don't get stderr or stdout when I register a variable and print it in debug mode, which was my first approach to checking for the error message. I am running out of ideas for how to filter for a specific error and pass the task, but fail on everything else. ignore_errors: yes is not a good solution, because the task should still fail in some cases.
As per the Ansible docs, we get stdout and stderr as return fields.
I would suggest using the flag ignore_errors: yes and checking the registered result, as in this example:
---
- hosts: localhost
  vars:
    user:
      name: yash
    user_base_path: /tmp
  tasks:
    - name: modify user
      user:
        state: "{{ user.state | default('present') }}"
        name: "{{ user.name }}"
        home: "{{ user_base_path }}/{{ user.name }}"
        createhome: true
      register: user_status
      ignore_errors: yes

    - name: stdout_test
      debug:
        msg: "{{ user_status.err }}"

    - name: Fail on not valid
      fail:
        msg: failed
      when: '"user account in use" not in user_status.err'
Output:
PLAY [localhost] *************************************************************************************************************************************************************************************
TASK [Gathering Facts] *******************************************************************************************************************************************************************************
ok: [localhost]
TASK [modify user] ***********************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "err": "<main> attribute status: eDSPermissionError\n<dscl_cmd> DS Error: -14120 (eDSPermissionError)\n", "msg": "Cannot create user \"yash\".", "out": "", "rc": 40}
...ignoring
TASK [stdout_test] ***********************************************************************************************************************************************************************************
ok: [localhost] => {
    "msg": "<main> attribute status: eDSPermissionError\n<dscl_cmd> DS Error: -14120 (eDSPermissionError)\n"
}
TASK [Fail on not valid] *****************************************************************************************************************************************************************************
fatal: [localhost]: FAILED! => {"changed": false, "msg": "failed"}
PLAY RECAP *******************************************************************************************************************************************************************************************
localhost : ok=3 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=1
You can use when: save the return value with register (or set_fact) and then decide what happens next according to that value.
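A more compact variant of the same idea, sketched here as an assumption rather than a tested answer, is to fold the check into failed_when so you don't need ignore_errors plus a separate fail task (the exact field holding the message may be msg or err depending on the platform):

- name: modify user
  user:
    state: "{{ user.state | default('present') }}"
    name: "{{ user.name }}"
    home: "{{ user_base_path }}/{{ user.name }}"
    createhome: true
  register: user_status
  # only treat it as a failure when the error is NOT the expected "account in use" one
  failed_when:
    - user_status.failed | default(false)
    - "'user account in use' not in (user_status.msg | default(''))"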

Register Ansible variable property

Using Ansible I'm having a problem registering a variable the way I want: with the implementation below I will always have to call .stdout on the variable. Is there a way I can do better?
My playbook:
Note the unwanted use of .stdout. I just want to be able to use the variable directly, without calling a property on it.
---
- name: prepare for new deployment
  hosts: all
  user: ser85
  tasks:
    - name: init deploy dir
      shell: echo ansible-deploy-$(date +%Y%m%d-%H%M%S-%N)
      # http://docs.ansible.com/ansible/playbooks_variables.html
      register: deploy_dir

    - debug: var=deploy_dir
    - debug: var=deploy_dir.stdout

    - name: init scripts dir
      shell: echo {{ deploy_dir.stdout }}/scripts
      register: scripts_dir

    - debug: var=scripts_dir.stdout
The output when I execute the playbook:
TASK [init deploy dir] *********************************************************
changed: [123.123.123.123]
TASK [debug] *******************************************************************
ok: [123.123.123.123] => {
    "deploy_dir": {
        "changed": true,
        "cmd": "echo ansible-deploy-$(date +%Y%m%d-%H%M%S-%N)",
        "delta": "0:00:00.002898",
        "end": "2016-05-27 10:53:38.122217",
        "rc": 0,
        "start": "2016-05-27 10:53:38.119319",
        "stderr": "",
        "stdout": "ansible-deploy-20160527-105338-121888719",
        "stdout_lines": [
            "ansible-deploy-20160527-105338-121888719"
        ],
        "warnings": []
    }
}
TASK [debug] *******************************************************************
ok: [123.123.123.123] => {
    "deploy_dir.stdout": "ansible-deploy-20160527-105338-121888719"
}
TASK [init scripts dir] ********************************************************
changed: [123.123.123.123]
TASK [debug] *******************************************************************
ok: [123.123.123.123] => {
    "scripts_dir.stdout": "ansible-deploy-20160527-105338-121888719/scripts"
}
Any help or insights appreciated - thank you :)
If I understood it right, you want to assign deploy_dir.stdout to a variable that you can use without the stdout key. That can be done with the set_fact module:
tasks:
  - name: init deploy dir
    shell: echo ansible-deploy-$(date +%Y%m%d-%H%M%S-%N)
    # http://docs.ansible.com/ansible/playbooks_variables.html
    register: deploy_dir

  - set_fact: my_deploy_dir="{{ deploy_dir.stdout }}"

  - debug: var=my_deploy_dir
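If the shell task exists only to build a timestamped name, a further option (a sketch, not part of the original answer) is to skip the remote command entirely and set the fact from a pipe lookup. Note the assumption: lookup('pipe', ...) runs the date command on the control machine rather than on each target, which may or may not matter here.

- name: prepare for new deployment
  hosts: all
  user: ser85
  tasks:
    - name: init deploy dir name (evaluated once, no .stdout needed)
      set_fact:
        deploy_dir: "ansible-deploy-{{ lookup('pipe', 'date +%Y%m%d-%H%M%S-%N') }}"

    - name: init scripts dir name
      set_fact:
        scripts_dir: "{{ deploy_dir }}/scripts"

    - debug: var=deploy_dir
    - debug: var=scripts_dir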

How do I register a variable and persist it between plays targeted on different nodes?

I have an Ansible playbook, where I would like a variable I register in a first play targeted on one node to be available in a second play, targeted on another node.
Here is the playbook I am using:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo

- hosts: main
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ foo.stdout }}"
But, when I try to access the variable in the second play, targeted on main, I get this message:
The task includes an option with an undefined variable. The error was: 'foo' is undefined
How can I access foo, registered on localhost, from main?
The problem you're running into is that you're trying to reference facts/variables of one host from those of another host.
You need to keep in mind that in Ansible, the variable foo assigned to the host localhost is distinct from the variable foo assigned to the host main or any other host.
If you want to access one hosts facts/variables from another host then you need to explicitly reference it via the hostvars variable. There's a bit more of a discussion on this in this question.
Suppose you have a playbook like this:
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo

- hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        var: foo
This will work because you're referencing the host localhost and localhost's instance of the variable foo in both plays.
The output of this playbook is something like this:
PLAY [localhost] **************************************************
TASK: [command] ***************************************************
changed: [localhost]
PLAY [localhost] **************************************************
TASK: [debug] *****************************************************
ok: [localhost] => {
    "var": {
        "foo": {
            "changed": true,
            "cmd": [
                "echo",
                "hello world"
            ],
            "delta": "0:00:00.004585",
            "end": "2015-11-24 20:49:27.462609",
            "invocation": {
                "module_args": "echo \"hello world\"",
                "module_complex_args": {},
                "module_name": "command"
            },
            "rc": 0,
            "start": "2015-11-24 20:49:27.458024",
            "stderr": "",
            "stdout": "hello world",
            "stdout_lines": [
                "hello world"
            ],
            "warnings": []
        }
    }
}
If you modify this playbook slightly to run the first play on one host and the second play on a different host, you'll get the error that you encountered.
Solution
The solution is to use Ansible's built-in hostvars variable to have the second host explicitly reference the first host's variable.
So modify the first example like this:
- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo

- hosts: main
  gather_facts: no
  tasks:
    - debug:
        var: foo
      when: foo is defined

    - debug:
        var: hostvars['localhost']['foo']
        ## alternatively, you can use:
        # var: hostvars.localhost.foo
      when: hostvars['localhost']['foo'] is defined
The output of this playbook shows that the first task is skipped because foo is not defined by the host main.
But the second task succeeds because it's explicitly referencing localhost's instance of the variable foo:
TASK: [debug] *************************************************
skipping: [main]
TASK: [debug] *************************************************
ok: [main] => {
    "var": {
        "hostvars['localhost']['foo']": {
            "changed": true,
            "cmd": [
                "echo",
                "hello world"
            ],
            "delta": "0:00:00.005950",
            "end": "2015-11-24 20:54:04.319147",
            "invocation": {
                "module_args": "echo \"hello world\"",
                "module_complex_args": {},
                "module_name": "command"
            },
            "rc": 0,
            "start": "2015-11-24 20:54:04.313197",
            "stderr": "",
            "stdout": "hello world",
            "stdout_lines": [
                "hello world"
            ],
            "warnings": []
        }
    }
}
So, in a nutshell, you want to modify the variable references in your main playbook to reference the localhost variables in this manner:
{{ hostvars['localhost']['foo'] }}
{# alternatively, you can use: #}
{{ hostvars.localhost.foo }}
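Applied to the playbook in the question, the second play would then read something like this (a sketch of the fix described above; .stdout is appended because foo holds a full command result):

- hosts: main
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ hostvars['localhost']['foo'].stdout }}"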
Use a dummy host and its variables
For example, to pass a Kubernetes token and hash from the master to the workers.
On master
- name: "Cluster token"
shell: kubeadm token list | cut -d ' ' -f1 | sed -n '2p'
register: K8S_TOKEN
- name: "CA Hash"
shell: openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
register: K8S_MASTER_CA_HASH
- name: "Add K8S Token and Hash to dummy host"
add_host:
name: "K8S_TOKEN_HOLDER"
token: "{{ K8S_TOKEN.stdout }}"
hash: "{{ K8S_MASTER_CA_HASH.stdout }}"
- name:
debug:
msg: "[Master] K8S_TOKEN_HOLDER K8S token is {{ hostvars['K8S_TOKEN_HOLDER']['token'] }}"
- name:
debug:
msg: "[Master] K8S_TOKEN_HOLDER K8S Hash is {{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}"
On worker
- name:
  debug:
    msg: "[Worker] K8S_TOKEN_HOLDER K8S token is {{ hostvars['K8S_TOKEN_HOLDER']['token'] }}"

- name:
  debug:
    msg: "[Worker] K8S_TOKEN_HOLDER K8S Hash is {{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}"

- name: "Kubeadmn join"
  shell: >
    kubeadm join --token={{ hostvars['K8S_TOKEN_HOLDER']['token'] }}
    --discovery-token-ca-cert-hash sha256:{{ hostvars['K8S_TOKEN_HOLDER']['hash'] }}
    {{K8S_MASTER_NODE_IP}}:{{K8S_API_SERCURE_PORT}}
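The same dummy-host pattern applied to the simpler foo example from the question might look like this (VARS_HOLDER and foo_stdout are just illustrative names):

- hosts: localhost
  gather_facts: no
  tasks:
    - command: echo "hello world"
      register: foo

    - name: Stash the value on a dummy host
      add_host:
        name: "VARS_HOLDER"
        foo_stdout: "{{ foo.stdout }}"

- hosts: main
  gather_facts: no
  tasks:
    - debug:
        msg: "{{ hostvars['VARS_HOLDER']['foo_stdout'] }}"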
I have had similar issues even with the same host, but across different plays. The thing to remember is that facts, not variables, are what persist across plays. Here is how I get around the problem.
#!/usr/local/bin/ansible-playbook --inventory=./inventories/ec2.py
---
- name: "TearDown Infrastructure !!!!!!!"
  hosts: localhost
  gather_facts: no
  vars:
    aws_state: absent
  vars_prompt:
    - name: "aws_region"
      prompt: "Enter AWS Region:"
      default: 'eu-west-2'
  tasks:
    - name: Make vars persistant
      set_fact:
        aws_region: "{{aws_region}}"
        aws_state: "{{aws_state}}"

- name: "TearDown Infrastructure hosts !!!!!!!"
  hosts: monitoring.ec2
  connection: local
  gather_facts: no
  tasks:
    - name: set the facts per host
      set_fact:
        aws_region: "{{hostvars['localhost']['aws_region']}}"
        aws_state: "{{hostvars['localhost']['aws_state']}}"

    - debug:
        msg="state {{aws_state}} region {{aws_region}} id {{ ec2_id }} "

- name: last few bits
  hosts: localhost
  gather_facts: no
  tasks:
    - debug:
        msg="state {{aws_state}} region {{aws_region}} "
results in
Enter AWS Region: [eu-west-2]:
PLAY [TearDown Infrastructure !!!!!!!] ***************************************************************************************************************************************************************************************************
TASK [Make vars persistant] **************************************************************************************************************************************************************************************************************
ok: [localhost]
PLAY [TearDown Infrastructure hosts !!!!!!!] *********************************************************************************************************************************************************************************************
TASK [set the facts per host] ************************************************************************************************************************************************************************************************************
ok: [XXXXXXXXXXXXXXXXX]
TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [XXXXXXXXXXX] => {
    "changed": false,
    "msg": "state absent region eu-west-2 id i-0XXXXX1 "
}
PLAY [last few bits] *********************************************************************************************************************************************************************************************************************
TASK [debug] *****************************************************************************************************************************************************************************************************************************
ok: [localhost] => {
    "changed": false,
    "msg": "state absent region eu-west-2 "
}
PLAY RECAP *******************************************************************************************************************************************************************************************************************************
XXXXXXXXXXXXX : ok=2 changed=0 unreachable=0 failed=0
localhost : ok=2 changed=0 unreachable=0 failed=0
You can use a known Ansible behaviour: the group_vars folder, which loads variables for your playbook. It is intended to be used together with inventory groups, but it still acts as a global variable declaration. If you put a file or folder in there with the same name as the group for which you want a variable to be present, Ansible will make sure it happens!
As an example, let's create a file called all and put a timestamp variable there. Then, whenever you need it, you can call that variable, which will be available to every host declared in any play inside your playbook.
I usually do this to update a timestamp once in the first play and use the value to write files and folders using the same timestamp.
I'm using the lineinfile module to change the line starting with timestamp:
Check if it fits your purpose.
On your group_vars/all
timestamp: t26032021165953
On the playbook, in the first play:
- hosts: localhost
  gather_facts: no
  tasks:
    - name: Set timestamp on group_vars
      lineinfile:
        path: "{{ playbook_dir }}/group_vars/all"
        insertafter: EOF
        regexp: '^timestamp:'
        line: "timestamp: t{{ lookup('pipe','date +%d%m%Y%H%M%S') }}"
        state: present
On the playbook, in the second play:
- hosts: any_hosts
  gather_facts: no
  tasks:
    - name: Check if timestamp is there
      debug:
        msg: "{{ timestamp }}"

Ansible register result of multiple commands

I was given a task to verify some routing entries for all Linux servers, and here is how I did it using an Ansible playbook:
---
- hosts: Linux
  serial: 1
  tasks:
    - name: Check first
      command: /sbin/ip route list xxx.xxx.xxx.xxx/24
      register: result
      changed_when: false

    - debug: msg="{{result.stdout}}"

    - name: Check second
      command: /sbin/ip route list xxx.xxx.xxx.xxx/24
      register: result
      changed_when: false

    - debug: msg="{{result.stdout}}"
You can see I have to repeat the same task for each routing entry, and I believe I should be able to avoid this. I tried a with_items loop but got the following error message:
One or more undefined variables: 'dict object' has no attribute 'stdout'
Is there a way to register a variable for each command and loop over them one by one?
Starting in Ansible 1.6.1, the results registered with multiple items are stored in result.results as an array. So you can use result.results[0].stdout and so on.
Testing playbook:
---
- hosts: localhost
  gather_facts: no
  tasks:
    - command: "echo {{item}}"
      register: result
      with_items: [1, 2]

    - debug:
        var: result
Result:
$ ansible-playbook -i localhost, test.yml
PLAY [localhost] **************************************************************
TASK: [command echo {{item}}] *************************************************
changed: [localhost] => (item=1)
changed: [localhost] => (item=2)
TASK: [debug ] ****************************************************************
ok: [localhost] => {
    "var": {
        "result": {
            "changed": true,
            "msg": "All items completed",
            "results": [
                {
                    "changed": true,
                    "cmd": [
                        "echo",
                        "1"
                    ],
                    "delta": "0:00:00.002502",
                    "end": "2015-08-07 16:44:08.901313",
                    "invocation": {
                        "module_args": "echo 1",
                        "module_name": "command"
                    },
                    "item": 1,
                    "rc": 0,
                    "start": "2015-08-07 16:44:08.898811",
                    "stderr": "",
                    "stdout": "1",
                    "stdout_lines": [
                        "1"
                    ],
                    "warnings": []
                },
                {
                    "changed": true,
                    "cmd": [
                        "echo",
                        "2"
                    ],
                    "delta": "0:00:00.002516",
                    "end": "2015-08-07 16:44:09.038458",
                    "invocation": {
                        "module_args": "echo 2",
                        "module_name": "command"
                    },
                    "item": 2,
                    "rc": 0,
                    "start": "2015-08-07 16:44:09.035942",
                    "stderr": "",
                    "stdout": "2",
                    "stdout_lines": [
                        "2"
                    ],
                    "warnings": []
                }
            ]
        }
    }
}
PLAY RECAP ********************************************************************
localhost : ok=2 changed=1 unreachable=0 failed=0
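As a small follow-up sketch (not part of the original answer), you can also iterate over the per-item results afterwards; each element carries the original item plus its own stdout:

- debug:
    msg: "echo {{ item.item }} printed {{ item.stdout }}"
  loop: "{{ result.results }}"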
A slightly different situation, which took a while to figure out: if you want to use the results of multiple items for changed_when, then the registered variable will not have a .results attribute! Instead, changed_when is evaluated for each item, and you can use the registered variable directly.
Simple example, which will result in changed: false:
- action: command echo {{item}}
  register: out
  changed_when: "'z' in out.stdout"
  with_items:
    - hello
    - foo
    - bye
Another example:
- name: Create fulltext index for faster text searches.
  mysql_db: name={{SO_database}} state=import target=/tmp/fulltext-{{item.tableName}}-{{item.columnName}}.sql
  with_items:
    - {tableName: Posts, columnName: Title}
    - {tableName: Posts, columnName: Body}
    - {tableName: Posts, columnName: Tags}
    - {tableName: Comments, columnName: Text}
  register: createfulltextcmd
  changed_when: createfulltextcmd.msg.find('already exists') == -1
Finally, when you do want to loop through results in other contexts, it does seem a bit tricky to programmatically access the index, as it is not exposed. I did find this one example that might be promising:
- name: add hosts to known_hosts
  shell: 'ssh-keyscan -H {{item.host}} >> /home/testuser/known_hosts'
  with_items:
    - { index: 0, host: testhost1.test.dom }
    - { index: 1, host: testhost2.test.dom }
    - { index: 2, host: 192.168.202.100 }
  when: ssh_known_hosts.results[{{item.index}}].rc == 1
Posting because I can't comment yet
Relating to gameweld's answer, since Ansible 2.5 there's another way of accessing the iteration index.
From the docs:
Tracking progress through a loop with index_var
New in version 2.5.
To keep track of where you are in a loop, use the index_var directive
with loop_control. This directive specifies a variable name to contain
the current loop index:
- name: count our fruit
  debug:
    msg: "{{ item }} with index {{ my_idx }}"
  loop:
    - apple
    - banana
    - pear
  loop_control:
    index_var: my_idx
This also allows you to gather results from a loop over an array and act on the same array later, taking the previous results into account:
- name: Ensure directories exist
  file:
    path: "{{ item }}"
    state: directory
  loop:
    - "mouse"
    - "lizard"
  register: reg

- name: Do something only if directory is new
  debug:
    msg: "New dir created with name '{{ item }}'"
  loop:
    - "mouse"
    - "lizard"
  loop_control:
    index_var: index
  when: reg.results[index].changed
Please note that the two lists ("mouse", "lizard") must be exactly the same, with the same items in the same order, for the indexes to line up.
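A hedged alternative that avoids repeating the list at all is to loop over the registered results directly, since each element already carries the original item and its changed flag:

- name: Do something only if directory is new
  debug:
    msg: "New dir created with name '{{ item.item }}'"
  loop: "{{ reg.results }}"
  when: item.changed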
If what you need is to register the output of two commands separately, use different variable names.
---
- hosts: Linux
  serial: 1
  tasks:
    - name: Check first
      command: /sbin/ip route list xxx.xxx.xxx.xxx/24
      register: result0
      changed_when: false

    - debug: msg="{{result0.stdout}}"

    - name: Check second
      command: /sbin/ip route list xxx.xxx.xxx.xxx/24
      register: result1
      changed_when: false

    - debug: msg="{{result1.stdout}}"
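Combining this with the accepted answer, a sketch of a loop-based version of the original route check could look like this (the subnets are placeholders, exactly as in the question):

- name: Check routes
  command: "/sbin/ip route list {{ item }}"
  register: route_result
  changed_when: false
  with_items:
    - xxx.xxx.xxx.xxx/24   # first subnet (placeholder)
    - xxx.xxx.xxx.xxx/24   # second subnet (placeholder)

- debug: msg="{{ item.stdout }}"
  with_items: "{{ route_result.results }}"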
