I have the following in a template file foo.cfg.j2:
nameserver dns1 {{ lookup('file','/etc/resolv.conf') | regex_search('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }}:53
The problem is that it reads the local /etc/resolv.conf. I want the value from the target host. I learned that I have to use slurp, but the second "problem" is that I need to register a variable in a task and then use it in the template file, like this:
tasks:
  - slurp:
      src: /etc/resolv.conf
    register: slurpfile
and the template file is now like this:
nameserver dns1 {{ slurpfile['content'] | b64decode | regex_search('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }}:53
While this works, I feel a bit uneasy as the solution is split into two places: the task does something, and then the template file does the second part. Is there a remote version of lookup that is a direct replacement?
Instead of slurp, I'd rather fetch the remote files:
- fetch:
    src: /etc/resolv.conf
    dest: "{{ fetched_files }}"
then declare a variable, e.g.
my_etc_resolv_conf: "{{ fetched_files }}/{{ inventory_hostname }}/etc/resolv.conf"
and use it in the template:
nameserver dns1 {{ lookup('file', my_etc_resolv_conf) | ...
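Putting those pieces together, a rough, untested sketch of the fetch-based approach could look like this (the foo.cfg destination path and the fetched_files value are my assumptions):
- name: Fetch resolv.conf from the target to the controller
  fetch:
    src: /etc/resolv.conf
    dest: "{{ fetched_files }}"
  # fetch saves the copy as <fetched_files>/<inventory_hostname>/etc/resolv.conf

- name: Render the template using the fetched copy
  template:
    src: foo.cfg.j2
    dest: /etc/foo.cfg        # assumed destination
  vars:
    my_etc_resolv_conf: "{{ fetched_files }}/{{ inventory_hostname }}/etc/resolv.conf"
and foo.cfg.j2 keeps the same expression as the local-lookup version, just pointed at the fetched path:
nameserver dns1 {{ lookup('file', my_etc_resolv_conf) | regex_search('\\b(?:[0-9]{1,3}\\.){3}[0-9]{1,3}\\b') }}:53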
I created a Workflow job in AWX containing 2 jobs:
Job 1 uses the credentials of the Windows server we get the JSON file from. It reads the content and puts it in a variable using set_stats.
Job 2 uses the credentials of the server the JSON file is uploaded to. It reads the variable set by the set_stats task in job 1 and creates a JSON file with that content.
First job:
- name: get content
  win_shell: 'type {{ file_dir }}{{ file_name }}'
  register: content

- name: write content
  debug:
    msg: "{{ content.stdout_lines }}"
  register: result

- set_fact:
    this_local: "{{ content.stdout_lines }}"

- set_stats:
    data:
      test_stat: "{{ this_local }}"

- name: set hostname in a variable
  set_stats:
    data:
      current_hostname: "{{ ansible_hostname }}"
    per_host: no
Second job:
- name: convert to json and copy the file to destination control node
  copy:
    content: "{{ test_stat | to_json }}"
    dest: "/tmp/{{ current_hostname }}.json"
How can I get the current_hostname so that the created JSON file is named <original_hostname>.json? In my case it's concatenating the two hosts which I passed in the first job.
In my case it's concatenating the two hosts which I passed in the first job
... which is precisely what you asked for, since you used per_host: no as a parameter to set_stats to gather the current_hostname stat globally for all hosts, and aggregate: yes is the default.
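For illustration only (the playbook below avoids set_stats entirely), keeping one hostname per host would mean flipping that parameter, along the lines of:
- name: set hostname in a variable, kept per host
  set_stats:
    data:
      current_hostname: "{{ ansible_hostname }}"
    per_host: true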
Anyhow, this is not exactly the intended use of set_stats and you are making this overly complicated IMO.
You don't need two jobs. In this particular case, you can delegate the write task to a Linux host in the middle of a play dedicated to Windows hosts (and one AWX job can use several credentials).
Here is a pseudo, untested playbook to give you the idea. You'll want to read the slurp module documentation; I used it to replace your shell task that reads the file (which is a bad practice).
Assuming your inventory looks something like:
---
windows_hosts:
  hosts:
    win1:
    win2:
linux_hosts:
  hosts:
    json_file_target_server:
The playbook would look like:
- name: Gather jsons from win and write to linux target
  hosts: windows_hosts
  tasks:
    - name: Get file content
      slurp:
        src: "{{ file_dir }}{{ file_name }}"
      register: json_file

    - name: Push json content to target linux
      copy:
        content: "{{ json_file.content | b64decode | to_json }}"
        dest: "/tmp/{{ inventory_hostname }}.json"
      delegate_to: json_file_target_server
I have an Ansible playbook that has created a file (/tmp/values.txt) on each server with the following content:
1.1.1.1
2.2.2.2
3.3.3.3
4.4.4.4
5.5.5.5
And I have a conf file named etcd.conf at /tmp/etcd.conf whose values I need to fill in, line by line, from /tmp/values.txt, so that /tmp/etcd.conf ends up looking like this:
SELF_IP=1.1.1.1
NODE_1_IP=2.2.2.2
NODE_2_IP=3.3.3.3
NODE_3_IP=4.4.4.4
NODE_4_IP=5.5.5.5
Is there a way I can do this? I tried the following lookup method, but it only works on the control node:
- name: Create etcd.conf
  template:
    src: etcd.conf.j2
    dest: /tmp/etcd.conf
  vars:
    my_values: "{{ lookup('file', '/tmp/values.txt').split('\n') }}"
    my_vars: [SELF_IP, NODE_1_IP, NODE_2_IP, NODE_3_IP, NODE_4_IP]
    my_dict: "{{ dict(my_vars | zip(my_values)) }}"
You can use the slurp: task to grab the content on each machine, or use command: cat /tmp/values.txt and then examine the stdout_lines of its register:-ed variable to get the same .split("\n") behaviour (I believe you'll still need that .split call after | b64decode if you use the slurp approach).
What will likely require some massaging is that you'll need to identify the group (which may very well be all) of inventory hostnames for which that command produces content, and then pull the registered value out of hostvars for each of them to collect all the values.
Conceptually it is similar to [ hostvars[hv].tmp_values.stdout_lines for hv in groups[the_group] ] | join("\n"), but it'll be tedious to write out, so I'd only want to do that if you aren't able to get it working on your own.
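If you do go down that road, here is a rough, untested sketch; the group name etcd_nodes and the registered variable name tmp_values are assumptions:
- name: Read values.txt on every host
  command: cat /tmp/values.txt
  register: tmp_values

- name: Collect the values of the whole group on one host
  run_once: true
  set_fact:
    all_values: "{{ groups['etcd_nodes']
                    | map('extract', hostvars, ['tmp_values', 'stdout_lines'])
                    | flatten }}"
From there, all_values | join('\n') gives the single blob the conceptual expression above describes.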
I have a file called values.txt on each of the servers, at /tmp/values.txt, and it has some values. I also have a Jinja template, and I am substituting values from values.txt into it.
But the problem is that when I use the lookup plugin, it looks on the controller and not on the remote servers.
Here's what I tried:
- name: Create /etc/systemd/system/etcd.service
  template:
    src: etcd.service.j2
    dest: /etc/systemd/system/etcd.service
  vars:
    value_from_file: "{{ lookup('file', '/tmp/values.txt').split('\n') }}"
    vars_from_jinja: [SELF_NAME, SELF_IP, NODE_1_NAME, NODE_1_IP, NODE_2_NAME, NODE_2_IP, NODE_3_NAME, NODE_3_IP, NODE_4_NAME, NODE_4_IP]
    my_dict: "{{ dict(vars_from_jinja | zip(value_from_file)) }}"
How can I do this task remotely? Or is there another workaround to substitute the values into the Jinja template?
PS: I can't fetch values.txt to the controller because the content of values.txt on each server is slightly different from the others.
Can someone please help me?
The lookup finds the file (or stream) on the controller. If your file is on the remote node, you can't use a lookup:
Lookup plugins are an Ansible-specific extension to the Jinja2 templating language. You can use lookup plugins to access data from outside sources (files, databases, key/value stores, APIs, and other services) within your playbooks. Like all templating, lookups execute and are evaluated on the Ansible control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. You can use lookup plugins to load variables or templates with information from external sources.
Try cat-ing the file instead:
- name: read the values.txt
  shell: cat /tmp/values.txt
  register: data

- name: Create /etc/systemd/system/etcd.service
  template:
    src: etcd.service.j2
    dest: /etc/systemd/system/etcd.service
  vars:
    value_from_file: "{{ data.stdout_lines }}"
    vars_from_jinja: [SELF_NAME, SELF_IP, NODE_1_NAME, NODE_1_IP, NODE_2_NAME, NODE_2_IP, NODE_3_NAME, NODE_3_IP, NODE_4_NAME, NODE_4_IP]
    my_dict: "{{ dict(vars_from_jinja | zip(value_from_file)) }}"
Use slurp:
- name: read remote values.txt
  register: values
  ansible.builtin.slurp:
    src: /tmp/values.txt

- name: Create /etc/systemd/system/etcd.service
  template:
    src: etcd.service.j2
    dest: /etc/systemd/system/etcd.service
  vars:
    value_from_file: "{{ (values.content | b64decode).split('\n') }}"
    vars_from_jinja: [SELF_NAME, SELF_IP, NODE_1_NAME, NODE_1_IP, NODE_2_NAME, NODE_2_IP, NODE_3_NAME, NODE_3_IP, NODE_4_NAME, NODE_4_IP]
    my_dict: "{{ dict(vars_from_jinja | zip(value_from_file)) }}"
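Neither question shows the .j2 itself; assuming it only has to render the KEY=VALUE lines shown earlier, a minimal template sketch could be as simple as:
{% for key, value in my_dict.items() %}
{{ key }}={{ value }}
{% endfor %}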
I'm using Ansible to push config files for various apps (based on group_names) and need to loop through the config .j2 templates from a list variable. If I use a known list of config templates I can use a standard with_nested like this...
- template:
    src: '{{ playbook_dir }}/templates/{{ item[1] }}/configs/{{ item[0] }}.j2'
    dest: /path/to/{{ item[1] }}/configs/{{ item[0] }}
  with_nested:
    - ['file.1', 'file.2', 'file.3', 'file.4']
    - '{{ group_names }}'
However, since each app will have its own configs I can't use a common list for a with_nested. Every attempt to somehow use with_filetree nested fails. Is there any way to nest a with_filetree? Am I missing something painfully obvious?
The most straightforward way to deal with this is probably to nest the loops through an include. I take for granted that your app directories only contain .j2 files; adapt if that is not the case.
In e.g. push_templates.yml
---
- name: Copy templates for group {{ current_group }}
  template:
    src: "{{ item.src }}"
    dest: /path/to/{{ current_group }}/configs/{{ (item.src | splitext).0 | basename }}
  with_filetree: "{{ playbook_dir }}/templates/{{ current_group }}"
  # Or using the latest loop syntax
  # loop: "{{ query('filetree', playbook_dir + '/templates/' + current_group) }}"
  when: item.src is defined
Note: on the dest line, I am removing the last extension of the found file and keeping its name only, without the leading directory path. Check the Ansible documentation on filters for splitext and basename for more info.
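As a hypothetical illustration (the app1 group name and paths are made up), the dest expression evaluates like this for one file found by filetree:
# item.src:                             <playbook_dir>/templates/app1/configs/file.1.j2
# item.src | splitext:                  ('<playbook_dir>/templates/app1/configs/file.1', '.j2')
# (item.src | splitext).0 | basename:   file.1
# resulting dest:                       /path/to/app1/configs/file.1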
Then in your e.g. main.yml
- name: Copy templates for all groups
  include_tasks: push_templates.yml
  loop: "{{ group_names }}"
  loop_control:
    loop_var: current_group
Note the loop_var in the loop_control section, which disambiguates a possible item collision with loops inside the included file. The variable name is of course aligned with the one I used in the included file above. See the Ansible loops documentation for more info.
An alternative to the above would be to construct your own data structure by looping over your groups with set_fact, calling the filetree lookup on each iteration (see the newer loop syntax example above), and then looping over your custom data structure to do the job.
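A rough, untested sketch of that alternative (the group_templates variable name is mine) could be:
- name: Build a list of {group, files} entries, one per group
  set_fact:
    group_templates: >-
      {{ group_templates | default([])
         + [{'group': item,
             'files': query('filetree', playbook_dir + '/templates/' + item)}] }}
  loop: "{{ group_names }}"

- name: Push every template of every group
  template:
    src: "{{ item.1.src }}"
    dest: "/path/to/{{ item.0.group }}/configs/{{ (item.1.src | splitext).0 | basename }}"
  loop: "{{ group_templates | subelements('files') }}"
  when: item.1.src is defined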
I'm using the ec2 module with ansible-playbook and I want to set a variable to the contents of a file. Here's how I'm currently doing it:
Var with the filename
shell task to cat the file
use the result of the cat to pass to the ec2 module.
Example contents of my playbook:
vars:
  amazon_linux_ami: "ami-fb8e9292"
  user_data_file: "base-ami-userdata.sh"

tasks:
  - name: user_data_contents
    shell: cat {{ user_data_file }}
    register: user_data_action

  - name: launch ec2-instance
    local_action:
      ...
      user_data: "{{ user_data_action.stdout }}"
I assume there's a much easier way to do this, but I couldn't find it while searching Ansible docs.
You can use lookups in Ansible in order to get the contents of a file, e.g.
user_data: "{{ lookup('file', user_data_file) }}"
Caveat: This lookup will work with local files, not remote files.
Here's a complete example from the docs:
- hosts: all
  vars:
    contents: "{{ lookup('file', '/etc/foo.txt') }}"
  tasks:
    - debug: msg="the value of foo.txt is {{ contents }}"
You can use the slurp module to fetch a file from the remote host (thanks to mlissner for suggesting it):
vars:
  amazon_linux_ami: "ami-fb8e9292"
  user_data_file: "base-ami-userdata.sh"

tasks:
  - name: Load data
    slurp:
      src: "{{ user_data_file }}"
    register: slurped_user_data

  - name: Decode data and store as fact  # You can skip this if you want to use the right hand side directly...
    set_fact:
      user_data: "{{ slurped_user_data.content | b64decode }}"
You can use the fetch module to copy files from remote hosts to the local machine, and the file lookup to read the content of the fetched files.
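A rough sketch of that combination, sticking with the user_data example (the remote path and local fetch directory are assumptions):
- name: Copy the remote file to the controller
  fetch:
    src: /opt/app/base-ami-userdata.sh      # assumed absolute remote path
    dest: /tmp/fetched/
  # fetch saves it as /tmp/fetched/<inventory_hostname>/opt/app/base-ami-userdata.sh

- name: Read the fetched copy with the file lookup
  set_fact:
    user_data: "{{ lookup('file', '/tmp/fetched/' + inventory_hostname + '/opt/app/base-ami-userdata.sh') }}"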
lookup only works on localhost. If you want to retrieve variables from a variables file you made remotely, use include_vars: {{ varfile }}. The contents of {{ varfile }} should be a dictionary of the form {"key":"value"}, and you will find Ansible gives you trouble if you include a space after the colon.
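A minimal sketch of that include_vars approach, with a hypothetical vars file path and contents:
# contents of the file that varfile points to, e.g. /tmp/my_vars.json:
# {"key":"value"}

- name: Load the variables from the file
  include_vars: "{{ varfile }}"

- name: Use the loaded variable
  debug:
    msg: "{{ key }}"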