Ansible: Set variable to file content

I'm using the ec2 module with ansible-playbook. I want to set a variable to the contents of a file. Here's how I'm currently doing it:
1. A var with the filename
2. A shell task to cat the file
3. Use the result of the cat to pass to the ec2 module.
Example contents of my playbook:
vars:
  amazon_linux_ami: "ami-fb8e9292"
  user_data_file: "base-ami-userdata.sh"
tasks:
  - name: user_data_contents
    shell: cat {{ user_data_file }}
    register: user_data_action
  - name: launch ec2-instance
    local_action:
      ...
      user_data: "{{ user_data_action.stdout }}"
I assume there's a much easier way to do this, but I couldn't find it while searching Ansible docs.

You can use lookups in Ansible in order to get the contents of a file, e.g.
user_data: "{{ lookup('file', user_data_file) }}"
Caveat: This lookup will work with local files, not remote files.
Here's a complete example from the docs:
- hosts: all
  vars:
    contents: "{{ lookup('file', '/etc/foo.txt') }}"
  tasks:
    - debug: msg="the value of foo.txt is {{ contents }}"

You can use the slurp module to fetch a file from the remote host (thanks to mlissner for suggesting it):
vars:
  amazon_linux_ami: "ami-fb8e9292"
  user_data_file: "base-ami-userdata.sh"
tasks:
  - name: Load data
    slurp:
      src: "{{ user_data_file }}"
    register: slurped_user_data
  - name: Decode data and store as fact # You can skip this if you want to use the right hand side directly...
    set_fact:
      user_data: "{{ slurped_user_data.content | b64decode }}"
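The decoded user_data fact can then be passed to the ec2 task in place of the shell output, e.g.:
user_data: "{{ user_data }}"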

You can use the fetch module to copy files from remote hosts to the local machine, and the lookup plugin to read the content of the fetched files.
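A minimal, untested sketch of that combination, reusing the user_data_file variable from the question (the fetched/ destination directory is just an illustrative choice):
- name: Copy the file from the remote host to the controller
  fetch:
    src: "{{ user_data_file }}"
    dest: fetched/
- name: Read the fetched copy on the controller
  set_fact:
    user_data: "{{ lookup('file', 'fetched/' + inventory_hostname + '/' + user_data_file) }}"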

lookup only works on localhost. If you want to retrieve variables from a variables file you made remotely, use include_vars: {{ varfile }}. The contents of {{ varfile }} should be a dictionary of the form {"key":"value"}; you will find Ansible gives you trouble if you include a space after the colon.
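A minimal sketch of the include_vars call described here, assuming varfile holds the name of the variables file and my_key is one of the keys defined inside it (both names are purely illustrative):
- name: Load variables from the file
  include_vars: "{{ varfile }}"
- name: Use a key defined in that file
  debug:
    msg: "{{ my_key }}"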

Related

Copy JSON file from Windows to Linux in separate AWX jobs

I created a Workflow job in AWX containing 2 jobs:
Job 1 uses the credentials of the Windows server we get the JSON file from. It reads the content and puts it in a variable using set_stats.
Job 2 uses the credentials of the server the JSON file is uploaded to. It reads the content of the variable set by the set_stats task in job 1 and creates a JSON file with that content.
First job:
- name: get content
  win_shell: 'type {{ file_dir }}{{ file_name }}'
  register: content

- name: write content
  debug:
    msg: "{{ content.stdout_lines }} "
  register: result

- set_fact:
    this_local: "{{ content.stdout_lines }}"

- set_stats:
    data:
      test_stat: "{{ this_local }}"

- name: set hostname in a variable
  set_stats:
    data:
      current_hostname: "{{ ansible_hostname }}"
    per_host: no
Second job
- name: convert to json and copy the file to destination control node.
  copy:
    content: "{{ test_stat | to_json }}"
    dest: "/tmp/{{ current_hostname }}.json"
How can I get the current_hostname, so that the created JSON file is named <original_hostname>.json? In my case it's concatenating the two hosts which I passed in the first job.
In my case it's concatenating the two hosts which I passed in the first job
... which is precisely what you asked for, since you used per_host: no as a parameter to set_stats to gather the current_hostname stat globally for all hosts, and aggregate: yes is the default.
Anyhow, this is not exactly the intended use of set_stats and you are making this overly complicated IMO.
You don't need two jobs. In this particular case, you can delegate the write task to a linux host in the middle of a play dedicated to windows hosts (and one awx job can use several credentials).
Here is an untested pseudo-playbook to give you the idea. You'll want to read the slurp module documentation; I used it to replace your shell task for reading the file (which is a bad practice).
Assuming your inventory looks something like:
---
windows_hosts:
  hosts:
    win1:
    win2:
linux_hosts:
  hosts:
    json_file_target_server:
The playbook would look like:
- name: Gather jsons from win and write to linux target
  hosts: windows_hosts
  tasks:
    - name: Get file content
      slurp:
        src: "{{ file_dir }}{{ file_name }}"
      register: json_file

    - name: Push json content to target linux
      copy:
        content: "{{ json_file.content | b64decode | to_json }}"
        dest: "/tmp/{{ inventory_hostname }}.json"
      delegate_to: json_file_target_server

Ansible - How to use lookup in remote servers

I have a file called values.txt on each of the servers, at /tmp/values.txt, and it has some values. I have a Jinja template and I am substituting some values from values.txt.
But the problem is, when I use the lookup command it looks on the controller and not on the remote servers.
Here's what I tried:
- name: Create /etc/systemd/system/etcd.service
  template:
    src: etcd.service.j2
    dest: /etc/systemd/system/etcd.service
  vars:
    value_from_file: "{{ lookup('file', '/tmp/values.txt').split('\n') }}"
    vars_from_jinja: [SELF_NAME, SELF_IP, NODE_1_NAME, NODE_1_IP, NODE_2_NAME, NODE_2_IP, NODE_3_NAME, NODE_3_IP, NODE_4_NAME, NODE_4_IP]
    my_dict: "{{ dict(vars_from_jinja|zip(value_from_file)) }}"
How can I run this task remotely? Or is there another workaround to substitute the values into the Jinja template?
PS: I can't fetch the values.txt to the controller because the content of values.txt on each server is slightly different from the others.
Can someone please help me?
The lookup finds the file (or stream) on the controller. If your file is on the remote node, you can't use the lookup.
Lookup plugins are an Ansible-specific extension to the Jinja2 templating language. You can use lookup plugins to access data from outside sources (files, databases, key/value stores, APIs, and other services) within your playbooks. Like all templating, lookups execute and are evaluated on the Ansible control machine. Ansible makes the data returned by a lookup plugin available using the standard templating system. You can use lookup plugins to load variables or templates with information from external sources.
Try cat-ing the file instead:
- name: read the values.txt
  shell: cat /tmp/values.txt
  register: data

- name: Create /etc/systemd/system/etcd.service
  template:
    src: etcd.service.j2
    dest: /etc/systemd/system/etcd.service
  vars:
    value_from_file: "{{ data.stdout_lines }}"
    vars_from_jinja: [ SELF_NAME, SELF_IP, NODE_1_NAME, NODE_1_IP, NODE_2_NAME, NODE_2_IP, NODE_3_NAME, NODE_3_IP, NODE_4_NAME, NODE_4_IP ]
    my_dict: "{{ dict(vars_from_jinja|zip(value_from_file)) }}"
Use slurp:
- name: read remote values.txt
  register: values
  ansible.builtin.slurp:
    src: /tmp/values.txt

- name: Create /etc/systemd/system/etcd.service
  template:
    src: etcd.service.j2
    dest: /etc/systemd/system/etcd.service
  vars:
    value_from_file: "{{ (values.content | b64decode).split('\n') }}"
    vars_from_jinja: [SELF_NAME, SELF_IP, NODE_1_NAME, NODE_1_IP, NODE_2_NAME, NODE_2_IP, NODE_3_NAME, NODE_3_IP, NODE_4_NAME, NODE_4_IP]
    my_dict: "{{ dict(vars_from_jinja|zip(value_from_file)) }}"
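For completeness, inside etcd.service.j2 the zipped my_dict could then be consumed along these lines (this is only a guess at the template, which is not shown in the question):
[Service]
Environment="SELF_IP={{ my_dict['SELF_IP'] }}"
Environment="NODE_1_IP={{ my_dict['NODE_1_IP'] }}"
ExecStart=/usr/local/bin/etcd --name {{ my_dict['SELF_NAME'] }}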

Ansible: Is there a way to look into a file, split the content in the file based on a specific criteria & then copy that content to another file?

I'm new to Ansible and I've been trying to read the content of a file, split it based on a specific criterion, and then copy or return that content.
For example, a file sample.txt contains:
userid= "abc"
I want to read the content of sample.txt and split it wherever there's an '=' sign, so that I can extract the creds (userid and abc) and then use them further.
Here are drafts of the code snippets I've tried.
---
- name: extracting creds
  hosts: servers
  tasks:
    - name: read secure value
      lineinfile:
        path: /home/usr/Desktop/sample.txt
      register: creds
      debug:
        msg: "{{ creds.split('=') }}"
Another snippet I tried:
---
- name: Creds
  hosts: servers
  vars:
    test: /home/usr/Desktop/sample.txt
  tasks:
    - debug:
        msg: "{{ lookup('file', test).split('=') }}"
None of them works :( What should I do to get this done?
You can also try the following approach to read the contents from the file and split them.
---
- hosts: localhost
  tasks:
    - name: add host
      add_host:
        hostname: "{{ server1 }}"
        groups: host1

- hosts: host1
  become: yes
  tasks:
    - name: Fetch the sample file
      slurp:
        src: /tmp/sample.txt
      register: var1

    - name: extract content for matching pattern
      set_fact:
        sample_var1: "{{ var1['content'] | b64decode | regex_findall('(.+=.+)', multiline=True, ignorecase=True) }}"

    - debug:
        msg: "{{ item.split('=')[1] }}"
      loop: "{{ sample_var1 }}"
According to the Ansible docs, this is what lineinfile does. So if you want to take content from one file, modify it, and write it to another file, this module wouldn't help.
This module ensures a particular line is in a file, or replace an existing line using a back-referenced regular expression. This is primarily useful when you want to change a single line in a file only.
lookup, on the other hand, works on the control machine. Judging by the code you have added, maybe you were trying to use a file on the target host, so lookup wouldn't help either.
If the file is available on the local/control host, then read the file, split the content, copy it to another file on the control machine, and then copy the final file to the target host using the copy module. Here is a sample that reads a file from the control host and splits every line using = as a separator.
- hosts: localhost
  tasks:
    - debug:
        msg: "{{ item.split('=') }}"
      with_lines: "cat /home/usr/Desktop/sample.txt"
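A hedged sketch of the remaining steps described above (writing the split values to a new file on the control machine and pushing it to the target hosts); the /tmp/creds.yml path is only an example:
- hosts: localhost
  tasks:
    # Build a dict of key/value pairs from the local file
    - set_fact:
        creds: "{{ creds | default({}) | combine({ item.split('=')[0]: item.split('=')[1] }) }}"
      with_lines: "cat /home/usr/Desktop/sample.txt"
    # Write the parsed content to another file on the control machine
    - copy:
        content: "{{ creds | to_nice_yaml }}"
        dest: /tmp/creds.yml
- hosts: servers
  tasks:
    # Copy the final file to the target host
    - copy:
        src: /tmp/creds.yml
        dest: /tmp/creds.yml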
If the file is on the remote/managed host then you can use something like below:
- hosts: servers
  tasks:
    - command: "cat /home/usr/Desktop/sample.txt"
      register: content

    - debug:
        msg: "{{ item.split('=') }}"
      loop: "{{ content.stdout_lines }}"

Ansible: register content not available for other host

Below is a part of a playbook in Ansible 2.1:
- hosts: localhost
  any_errors_fatal: true
  tasks:
    - name: Bla Bla
      file: path=/var/tmp/somedir state=directory
      #ignore_errors: no

    - name: Create directory for every host
      file: path=/var/tmp/somedir/{{ item }} state=directory
      with_items: "{{ groups['XYZ'] }}"

    - name: Get File contents of NewFile
      shell: cat NewFile.txt executable=/bin/bash
      register: file_contents

- hosts: XYZ
  #any_errors_fatal: true
  vars:
    num_hosts: "{{ groups['XYZ'] | length }}"
  serial: num_hosts
  tasks:
    - name: Copy files to corresponding directories
      vars:
        path: /var/tmp/somedir/{{ item[0] }}
      synchronize: mode=pull src={{ item[1] }} dest={{ path }}
      with_nested:
        - "{{ groups['XYZ'] }}"
        - with_lines: cat NewFile.txt
This does not work.
Now the problem is I am not able to reference file_contents, which has been registered under localhost, and Ansible does not support cat-ing the NewFile from the hosts: XYZ play.
Is there any way to do this in a simple manner? I need to check the contents of the NewFile in this playbook only and then use it to copy files from remote to local.
As mentioned in the comments, facts (and all variables) are stored on a per-host basis. If you have registered a value from a task running on localhost, you can access it from any task running in the context of other hosts through the global hostvars dict. All hosts and their facts are stored in there:
hostvars['localhost']['file_contents']
I am not entirely sure whether simply registered variables are available in the hostvars dict. If not, you have to use set_fact in the first play to store it as a fact.
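Applied to the playbook above, a hedged sketch (using set_fact to be on the safe side, as suggested) would be:
- hosts: localhost
  tasks:
    - name: Get file contents of NewFile
      shell: cat NewFile.txt
      register: file_contents
    # Promote the registered result to a fact so it is reliably reachable from other plays
    - set_fact:
        new_file_lines: "{{ file_contents.stdout_lines }}"

- hosts: XYZ
  tasks:
    - name: Use the value registered on localhost
      debug:
        msg: "{{ hostvars['localhost']['new_file_lines'] }}"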

How do I save an ansible variable into a temporary file that is automatically removed at the end of playbook execution?

In order to perform some operations locally (not on the remote machine), I need to put the content of an Ansible variable inside a temporary file.
Please note that I am looking for a solution that takes care of generating the temporary file in a location where it can be written (no hardcoded names) and that also takes care of removing the file, as we do not want to leave things behind.
You should be able to use the tempfile module, followed by either the copy or template modules. Like so:
- hosts: localhost
  tasks:
    # Create a file named ansible.{random}.config
    - tempfile:
        state: file
        suffix: config
      register: temp_config

    # Render template content to it
    - template:
        src: templates/configfile.j2
        dest: "{{ temp_config.path }}"
      vars:
        username: admin
Or if you're running it in a role:
- tempfile:
    state: file
    suffix: config
  register: temp_config

- copy:
    content: "{{ lookup('template', 'configfile.j2') }}"
    dest: "{{ temp_config.path }}"
  vars:
    username: admin
Then just pass temp_config.path to whatever module you need to pass the file to.
It's not a great solution, but the alternative is writing a custom module to do it in one step.
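To also cover the clean-up part of the question, a short hedged sketch (the command is only a placeholder for whatever local operation needs the file):
# Hypothetical local operation that consumes the temporary file
- command: "some-local-tool --config {{ temp_config.path }}"
# Remove the temporary file once it is no longer needed
- file:
    path: "{{ temp_config.path }}"
    state: absent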
Rather than do it with a file, why not just use the environment? This way you can easily work with the variable, it will be alive throughout the Ansible session, and you can easily retrieve it in any step or outside of them.
Although using the shell/application environment may be the way to go, if you specifically want to use a file to store variable data you could do something like this:
- hosts: server1
  tasks:
    - shell: cat /etc/file.txt
      register: some_data
    - local_action: copy dest=/tmp/foo.txt content="{{some_data.stdout}}"

- hosts: localhost
  tasks:
    - set_fact: some_data="{{ lookup('file', '/tmp/foo.txt') }}"
    - debug: var=some_data
As for your requirement to give the file a unique name and clean it up at the end of the play, I'll leave that implementation to you.
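One possible hedged sketch (untested), combining the tempfile idea from the other answer with this flow so the name is unique and the file is removed at the end:
- hosts: server1
  tasks:
    - shell: cat /etc/file.txt
      register: some_data
    # Create a uniquely named temporary file on the controller (no hardcoded name)
    - tempfile:
        state: file
        suffix: some_data
      register: temp_file
      delegate_to: localhost
    # Write the variable content into it
    - copy:
        dest: "{{ temp_file.path }}"
        content: "{{ some_data.stdout }}"
      delegate_to: localhost
    # ... perform whatever local operations need the file here ...
    # Clean up so nothing is left behind
    - file:
        path: "{{ temp_file.path }}"
        state: absent
      delegate_to: localhost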
