I'm trying to pull in a file with the same name from multiple servers and I would like to just concatenate the results, but I don't think the fetch module will allow me to do this. Can someone advise on another module that I could use for this task?
Current non-working code:
- hosts: '{{ target }}'
  gather_facts: false
  tasks:
    - name: Pull in file.log contents from servers, concatenating results
      fetch:
        src: '/tmp/file.log'
        dest: /tmp/fetched
        flat: yes
        fail_on_missing: no
For example, given the files
shell> ssh admin@test_11 cat /tmp/file.log
test_11
shell> ssh admin@test_12 cat /tmp/file.log
test_12
shell> ssh admin@test_13 cat /tmp/file.log
test_13
Throttle the task and time-stamp the fetched files, e.g.
- hosts: test_11,test_12,test_13
  tasks:
    - fetch:
        src: /tmp/file.log
        dest: /tmp/fetched/file-{{ time_stamp }}.log
        flat: true
        fail_on_missing: false
      throttle: 1
      vars:
        time_stamp: "{{ lookup('pipe', 'date +%Y-%m-%d_%H-%M-%S') }}"
gives
shell> tree /tmp/fetched/
/tmp/fetched/
├── file-2021-03-22_21-16-54.log
├── file-2021-03-22_21-16-58.log
└── file-2021-03-22_21-17-02.log
Then assemble the content of the files, e.g.
- assemble:
    src: /tmp/fetched
    regexp: '^file-.*log$'
    dest: /tmp/fetched/assemble-{{ time_stamp }}.log
  vars:
    time_stamp: "{{ lookup('pipe', 'date +%Y-%m-%d_%H-%M-%S') }}"
  delegate_to: localhost
  run_once: true
gives
shell> cat /tmp/fetched/assemble-2021-03-22_21-17-07.log
test_11
test_12
test_13
If you want to speed up the transfer from many hosts (e.g. ~100), increase the number of parallel tasks (e.g. throttle: 10) and put the name of the host into the name of the file. Otherwise, tasks running in the same second would overwrite each other's files with the same timestamp, e.g.
- fetch:
    src: /tmp/file.log
    dest: /tmp/fetched/file-{{ inventory_hostname }}-{{ time_stamp }}.log
    flat: true
    fail_on_missing: false
  throttle: 3
  vars:
    time_stamp: "{{ lookup('pipe', 'date +%Y-%m-%d_%H-%M-%S') }}"
Related
I have some files (file1) on some servers (group: myservers), which should look like this:
search www.mywebsite.com
nameserver 1.2.3.4
nameserver 1.2.3.5
This is an example of what the file should look like:
The first line is mandatory ("search www.mywebsite.com").
The second and the third lines are mandatory as well, but the IPs can change (although they should all follow this pattern: ...).
I've been researching how to implement some tasks using Ansible to check whether the files are properly configured. I don't want to change any file, only check and report whether the files are OK or not.
I know I can use ansible.builtin.lineinfile to check it, but I still haven't managed to find out how to achieve this.
Can you help please?
For example, given the inventory
shell> cat hosts
[myservers]
test_11
test_13
Create a dictionary of what you want to audit
audit:
  files:
    /etc/resolv.conf:
      patterns:
        - '^search example.com$'
        - '^nameserver \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$'
    /etc/rc.conf:
      patterns:
        - '^sshd_enable="YES"$'
        - '^syslogd_flags="-ss"$'
Declare the directory on the controller where the files will be stored
my_dest: /tmp/ansible/myservers
Fetch the files
- fetch:
    src: "{{ item.key }}"
    dest: "{{ my_dest }}"
  loop: "{{ audit.files|dict2items }}"
Take a look at the fetched files
shell> tree /tmp/ansible/myservers
/tmp/ansible/myservers
├── test_11
│   └── etc
│       ├── rc.conf
│       └── resolv.conf
└── test_13
    └── etc
        ├── rc.conf
        └── resolv.conf
4 directories, 4 files
Audit the files. Create the dictionary host_files_results in the loop
- set_fact:
    host_files_results: "{{ host_files_results|default({})|
                            combine(host_file_dict|from_yaml) }}"
  loop: "{{ audit.files|dict2items }}"
  loop_control:
    label: "{{ item.key }}"
  vars:
    host_file_path: "{{ my_dest }}/{{ inventory_hostname }}/{{ item.key }}"
    host_file_lines: "{{ lookup('file', host_file_path).splitlines() }}"
    host_file_result: |
      [{% for pattern in item.value.patterns %}
      {{ host_file_lines[loop.index0] is regex pattern }},
      {% endfor %}]
    host_file_dict: "{ {{ item.key }}: {{ host_file_result|from_yaml is all }} }"
gives
ok: [test_11] =>
  host_files_results:
    /etc/rc.conf: true
    /etc/resolv.conf: true
ok: [test_13] =>
  host_files_results:
    /etc/rc.conf: true
    /etc/resolv.conf: true
Declare the dictionary audit_files that aggregates host_files_results
audit_files: "{{ dict(ansible_play_hosts|
zip(ansible_play_hosts|
map('extract', hostvars, 'host_files_results'))) }}"
gives
audit_files:
  test_11:
    /etc/rc.conf: true
    /etc/resolv.conf: true
  test_13:
    /etc/rc.conf: true
    /etc/resolv.conf: true
Evaluate the audit results
- block:
    - debug:
        var: audit_files
    - assert:
        that: "{{ audit_files|json_query('*.*')|flatten is all }}"
        fail_msg: "[ERR] Audit of files failed. [TODO: list failed]"
        success_msg: "[OK] Audit of files passed."
  run_once: true
gives
msg: '[OK] Audit of files passed.'
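If you want the failing entries listed instead of the TODO placeholder in fail_msg, one option is to iterate audit_files inside the message. A sketch, reusing only the variables already defined above:
- assert:
    that: "{{ audit_files|json_query('*.*')|flatten is all }}"
    fail_msg: |-
      [ERR] Audit of files failed:
      {% for host, files in audit_files.items() %}
      {% for path, ok in files.items() if not ok %}
        {{ host }}: {{ path }}
      {% endfor %}
      {% endfor %}
    success_msg: "[OK] Audit of files passed."
  run_once: true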
Example of a complete playbook for testing
- hosts: myservers

  vars:
    my_dest: /tmp/ansible/myservers
    audit:
      files:
        /etc/resolv.conf:
          patterns:
            - '^search example.com$'
            - '^nameserver \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$'
        /etc/rc.conf:
          patterns:
            - '^sshd_enable="YES"$'
            - '^syslogd_flags="-ss"$'
    audit_files: "{{ dict(ansible_play_hosts|
                          zip(ansible_play_hosts|
                              map('extract', hostvars, 'host_files_results'))) }}"

  tasks:
    - fetch:
        src: "{{ item.key }}"
        dest: "{{ my_dest }}"
      loop: "{{ audit.files|dict2items }}"
      loop_control:
        label: "{{ item.key }}"
    - set_fact:
        host_files_results: "{{ host_files_results|default({})|
                                combine(host_file_dict|from_yaml) }}"
      loop: "{{ audit.files|dict2items }}"
      loop_control:
        label: "{{ item.key }}"
      vars:
        host_file_path: "{{ my_dest }}/{{ inventory_hostname }}/{{ item.key }}"
        host_file_lines: "{{ lookup('file', host_file_path).splitlines() }}"
        host_file_result: |
          [{% for pattern in item.value.patterns %}
          {{ host_file_lines[loop.index0] is regex pattern }},
          {% endfor %}]
        host_file_dict: "{ {{ item.key }}: {{ host_file_result|from_yaml is all }} }"
    - debug:
        var: host_files_results
    - block:
        - debug:
            var: audit_files
        - assert:
            that: "{{ audit_files|json_query('*.*')|flatten is all }}"
            fail_msg: "[ERR] Audit of files failed. [TODO: list failed]"
            success_msg: "[OK] Audit of files passed."
      run_once: true
... implement some tasks using Ansible to check if the files are properly configured. I don't want to change any file, only check and report whether the files are OK or not.
Since Ansible is mostly used as a Configuration Management Tool, there is no need to check beforehand whether a file is properly configured. Just declare the Desired State and make sure that the file is in that state. Since this approach also works with Validating: check_mode, a Configuration Check or an Audit could be implemented simply as follows:
resolv.conf as it should be
# Generated by NetworkManager
search example.com
nameserver 192.0.2.1
hosts.ini
[test]
test.example.com NS_IP=192.0.2.1
resolv.conf.j2 template
# Generated by NetworkManager
search {{ DOMAIN }}
nameserver {{ NS_IP }}
A minimal example playbook for a Configuration Check to audit the config
---
- hosts: test
  become: false
  gather_facts: false

  vars:
    # Ansible v2.9 and later
    DOMAIN: "{{ inventory_hostname.split('.', 1) | last }}"

  tasks:
    - name: Check configuration (file)
      template:
        src: resolv.conf.j2
        dest: resolv.conf
      check_mode: true # will never change existing config
      register: result

    - name: Config change
      debug:
        msg: "{{ result.changed }}"
will result, when there are no changes, in the output
TASK [Check configuration (file)] ******
ok: [test.example.com]
TASK [Config change] *******************
ok: [test.example.com] =>
msg: false
or, when the config file differs from the desired state, in
TASK [Check configuration (file)] ******
changed: [test.example.com]
TASK [Config change] *******************
ok: [test.example.com] =>
msg: true
If you prefer a different message text and therefore need to invert the output, just use msg: "{{ not result.changed }}", which will report false instead of true and vice versa.
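For example, building on the result registered above (the task name is illustrative):
- name: Config matches desired state
  debug:
    msg: "{{ not result.changed }}"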
Further Reading
Using the Ansible inventory, variables in the inventory, the template module (to Template a file out to a target host), and Enforcing check_mode on tasks makes it extremely simple to prevent Configuration Drift.
And as a reference for getting the search domain, Ansible: How to get hostname without domain name?.
I know the Ansible fetch module can copy a file from remote to local, but what if I only need the contents (in my case a tmp file holding the IP address) appended to a local file?
The fetch module does this:
- name: Store file into /tmp/fetched/
  ansible.builtin.fetch:
    src: /tmp/somefile
    dest: /tmp/fetched
I need it to do something like this:
- name: Store file into /tmp/fetched/
  ansible.builtin.fetch:
    src: /tmp/somefile.txt
    dest: cat src >> /tmp/fetched.txt
In a nutshell:
- name: Get remote file content
  ansible.builtin.slurp:
    src: /tmp/somefile.txt
  register: somefile

- name: Append remote file content to a local file
  vars:
    target_file: /tmp/fetched.txt
  ansible.builtin.copy:
    content: |-
      {{ lookup('file', target_file) }}
      {{ somefile.content | b64decode }}
    dest: "{{ target_file }}"
  # Fix write concurrency when running on multiple targets
  throttle: 1
  delegate_to: localhost
Notes:
the second task isn't idempotent (it will modify the file on each run, even when appending the same content)
this will work for small target files. If that file becomes huge and you experience high execution times / memory consumption, you might want to switch to shell for the second task:
- name: Append remote file content to a local file
  ansible.builtin.shell:
    cmd: echo "{{ somefile.content | b64decode }}" >> /tmp/fetched
  # You might still want to avoid concurrency with multiple targets
  throttle: 1
  delegate_to: localhost
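If the lack of idempotency from the first note is a concern and you can tolerate marker lines in the result, ansible.builtin.blockinfile is one possible alternative. A sketch (the marker text is illustrative):
- name: Append remote file content idempotently, one block per host
  ansible.builtin.blockinfile:
    path: /tmp/fetched.txt
    create: true
    # A distinct marker per host keeps one managed block per target
    marker: "# {mark} CONTENT FROM {{ inventory_hostname }}"
    block: "{{ somefile.content | b64decode }}"
  throttle: 1
  delegate_to: localhost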
Alternatively, you could write all contents from all fetched files from all your targets in one go to avoid the concurrency problem and gain some time.
# Copy solution
- name: Append remote files contents to a local file
  vars:
    target_file: /tmp/fetched.txt
    fetched_content: "{{ ansible_play_hosts
                         | map('extract', hostvars, ['somefile', 'content'])
                         | map('b64decode')
                         | join('\n') }}"
  ansible.builtin.copy:
    content: |-
      {{ lookup('file', target_file) }}
      {{ fetched_content }}
    dest: "{{ target_file }}"
  delegate_to: localhost
  run_once: true
# Shell solution
- name: Append remote files contents to a local file
  vars:
    fetched_content: "{{ ansible_play_hosts
                         | map('extract', hostvars, ['somefile', 'content'])
                         | map('b64decode')
                         | join('\n') }}"
  ansible.builtin.shell:
    cmd: echo "{{ fetched_content }}" >> /tmp/fetched
  delegate_to: localhost
  run_once: true
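One more caveat: the copy-based tasks read the current target file with lookup('file', ...), which fails if the file does not exist yet on the controller. Passing errors='ignore' to the lookup is one way to tolerate a missing file on the first run. A sketch of the single-host variant:
- name: Append remote file content to a local file (tolerates a missing target)
  vars:
    target_file: /tmp/fetched.txt
  ansible.builtin.copy:
    content: |-
      {{ lookup('file', target_file, errors='ignore') or '' }}
      {{ somefile.content | b64decode }}
    dest: "{{ target_file }}"
  throttle: 1
  delegate_to: localhost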
Today I totally broke my brain with this. I have a directory structure with a variables file, and I need to produce some text or JSON that uses the variables defined in the dictionary:
> hosts_vars
    varfile.yaml
> inventory
    hosts.yml
tasks.yml
varfile.yaml
host_vars:
  host1: { ip: 192.168.1.1 }
  host2: { ip: 192.168.1.2 }
hosts.yml
all:
  hosts:
    host1:
      ansible_host: 192.168.1.1
    host2:
      ansible_host: 192.168.1.2
tasks.yml
- hosts: all
  become: no
  gather_facts: false
  vars_files:
    - hosts_vars/varfile.yaml
  vars:
    temp_file: /tmp/out_file.txt
  tasks:
    - local_action:
        module: ansible.builtin.lineinfile
        path: '{{ temp_file }}'
        line: |
          - targets: ['{{ host_vars[inventory_hostname].ip }}']
        state: present
When I run it: ansible-playbook tasks.yml -l all
I got:
~> cat /tmp/out_file.txt
- targets: ['192.168.1.1']
I don't understand why this happened. And when I ran it against 60 hosts, I got only ~54 lines in the file. How does this work?
You need to create the file on localhost: delegate the task to localhost and loop over the hosts.
tasks:
  - name: update host
    lineinfile:
      path: '{{ temp_file }}'
      create: yes
      line: |
        - targets: ['{{ host_vars[item].ip }}'] # or ['{{ host_vars[item]['ip'] }}']
      insertafter: EOF
    delegate_to: localhost
    run_once: true
    with_inventory_hostnames:
      - all
For example
tasks:
  - ansible.builtin.lineinfile:
      create: yes
      path: "{{ temp_file }}"
      line: "- targets: ['{{ host_vars[inventory_hostname].ip }}']"
    delegate_to: localhost
gives
shell> cat /tmp/out_file.txt
- targets: ['192.168.1.2']
- targets: ['192.168.1.1']
Notes:
Don't use a YAML literal block scalar with line. The parameter says what it is: you don't want any newlines \n in the line. If you for whatever reason have to use a block, put a dash - behind the pipe |. This will strip the trailing newline \n
- ansible.builtin.lineinfile:
    create: true
    path: "{{ temp_file }}"
    line: |-
      - targets: ['{{ host_vars[inventory_hostname].ip }}']
  delegate_to: localhost
You can use delegate_to: localhost instead of local_action.
Use create: yes. It is no by default, and the task will fail with "Destination /tmp/out_file.txt does not exist !" if the file doesn't exist.
state: present is the default.
insertafter: EOF is the default.
Putting varfile.yaml into the directory host_vars misses the point of how host_vars works: there is no host named varfile. Put the file, e.g., into vars/varfile.yaml instead, as sketched below.
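A minimal sketch of that layout (the rest of the play as in the question):
- hosts: all
  become: no
  gather_facts: false
  vars_files:
    - vars/varfile.yaml # moved out of hosts_vars/
  vars:
    temp_file: /tmp/out_file.txt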
The variable host_vars is redundant. The IP of each host is already stored in the variable ansible_host. The task below gives the same result
- ansible.builtin.lineinfile:
    create: true
    path: "{{ temp_file }}"
    line: "- targets: ['{{ ansible_host }}']"
  delegate_to: localhost
If you'd like to create a list of all IP addresses, the task below
- ansible.builtin.lineinfile:
    create: true
    path: "{{ temp_file }}"
    line: "- targets: {{ ansible_play_hosts_all|
                         map('extract', hostvars, 'ansible_host')|list }}"
  run_once: true
  delegate_to: localhost
gives
shell> cat /tmp/out_file.txt
- targets: ['192.168.1.1', '192.168.1.2']
On macOS with ansible [core 2.11.2] there were issues writing files in /tmp. If you want to use this approach there, set the destination to your home folder or any other folder you have write permissions for.
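For example, a sketch of pointing temp_file at the home directory instead (the env lookup is one way to get it):
vars:
  temp_file: "{{ lookup('env', 'HOME') }}/out_file.txt"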
A file has the following contents
com.dkr.container.id=a43019cc-d4a4-4acb-83dd-defd76443c6a
com.dkr.container.account=12HJB
I need to fetch a43019cc-d4a4-4acb-83dd-defd76443c6a and write it to a variable using an Ansible task. This value needs to be passed to other tasks in the same Ansible file.
Can someone show me the required task to achieve this?
If your file is on the controller, you can use the file lookup to get its content.
If the file is on the node, you will have to use something like the slurp module.
Then, when you have the file content, you can use the regex_search filter to extract your required text.
With the file on the controller:
- set_fact:
    com_dkr_container_id: >-
      {{
        lookup('file', '/path/to/file')
        | regex_search('com\.dkr\.container\.id=(.*)', '\1')
        | first
      }}
With the file on the node(s):
- slurp:
    src: /path/to/file
  register: file_content

- set_fact:
    com_dkr_container_id: >-
      {{
        file_content.content
        | b64decode
        | regex_search('com\.dkr\.container\.id=(.*)', '\1')
        | first
      }}
This is the job for the ini lookup plugin. See
shell> ansible-doc -t lookup ini
For example, given the file
shell> cat container.properties
com.dkr.container.id=a43019cc-d4a4-4acb-83dd-defd76443c6a
com.dkr.container.account=12HJB
The playbook
- hosts: localhost
  tasks:
    - set_fact:
        id: "{{ lookup('ini', 'com.dkr.container.id
                type=properties
                file=container.properties') }}"
    - debug:
        var: id
gives
id: a43019cc-d4a4-4acb-83dd-defd76443c6a
The lookup plugins work on the controller only. If the file is on the remote host, fetch it first. For example, given the file
shell> ssh admin@test_11 cat container.properties
com.dkr.container.id=a43019cc-d4a4-4acb-83dd-defd76443c6a
com.dkr.container.account=12HJB
The playbook
- hosts: test_11
  tasks:
    - fetch:
        src: container.properties
        dest: /tmp/fetched
    - set_fact:
        id: "{{ lookup('ini', 'com.dkr.container.id
                type=properties
                file=/tmp/fetched/{{ inventory_hostname }}/container.properties') }}"
    - debug:
        var: id
gives the same result
id: a43019cc-d4a4-4acb-83dd-defd76443c6a
The playbook above is idempotent. The file will be stored on the controller
shell> tree /tmp/fetched/
/tmp/fetched/
└── test_11
    └── container.properties
I have multiple .json files on local host where I place my playbook:
json-file-path/{{ testName }}.json
The {{ testName }}.json files are: testA.json, testB.json, testC.json, etc.
All .json files have the same keys with different values, like this:
json-file-path/testA.json:
{
  "a_key": "a_value1",
  "b_key": "b_value1"
}
json-file-path/testB.json:
{
  "a_key": "a_value2",
  "b_key": "b_value2"
}
json-file-path/testC.json:
{
  "a_key": "a_value3",
  "b_key": "b_value3"
}
.....
I need to access the key-value variables from all .json files, and if the values meet some condition, I will perform some task on the target host. For example, I have:
a_value1=3
a_value2=4
a_value3=1
I go through my .json files one by one; if a_key's value > 3, I will copy that .json file to the target host, otherwise skip the task. In this case, I will only copy testC.json to the target host.
How would I achieve this? I was thinking of re-constructing my .json files using {{ testName }} as a dynamic key of the dict, like this:
{
  "testName": "testA"
  {
    "a_key": "a_value1",
    "b_key": "b_value1"
  }
}
So I can access my variable as {{ testName }}.a_key. So far I haven't been able to achieve this.
I have tried the following in my playbook:
---
- host: localhost
  tasks:
    - name: construct json files
      vars:
        my_vars:
          a_key: "{{ a_value }}"
          b_key: "{{ b_value }}"
      with_dict: "{{ testName }}"
      copy:
        content: "{{ my_vars | to_nice_json }}"
        dest: /json-file-path/{{ testName }}.json
My updated playbooks are:
/mypath/tmp/include.yaml:
---
- hosts: remote_hostName
  tasks:
    - name: load json files
      set_fact:
        json_data: "{{ lookup('file', item) | from_json }}"
    - name: copy json file if condition meets
      copy:
        src: "{{ item }}"
        dest: "{{ /remote_host_path/tmp }}/{{ item | basename }}"
      delegate_to: "{{ remote_hostName }}"
      when: json_data.a_key|int > 5
/mypath/test.yml:
---
- hosts: localhost
  vars:
    local_src_dir: /mypath/tmp
    remote_host: remote_hostName
    remote_dest_dir: /remote_host_path/tmp
  tasks:
    - name: looping
      include: include.yaml
      with_fileglob:
        - "{{ local_src_dir }}/*json"
All json files are on localhost under /mypath/tmp/.
Latest version of the playbook. It is working now:
/mypath/tmp/include.yaml:
---
- name: loading json files
  include_vars:
    file: "{{ item }}"
    name: json_data

- name: copy json file to remote if condition meets
  copy:
    src: "{{ item }}"
    dest: '/remote_host_path/tmp/{{ item | basename }}'
  delegate_to: "{{ remote_host }}"
  when: json_data.a_key > 5
/mypath/test.yml:
---
- hosts: localhost
  vars:
    local_src_dir: /mypath/tmp
    remote_host: remote_hostName
    remote_dest_dir: /remote_host_path/tmp
  tasks:
    - name: looping json files
      include: include.yaml
      with_fileglob:
        - "{{ local_src_dir }}/*json"
I am hoping that I have understood your requirements correctly, and that this helps move you forward.
Fundamentally, you can load each of the JSON files so you can query the values as native Ansible variables. Therefore you can loop through all the files, read each one, compare the value you are interested in and then conditionally copy to your remote host via a delegated task. Therefore, give this a try:
Create an include file include.yaml:
---
# 'item' contains a path to a local JSON file on each pass of the loop
- name: Load the json file
  set_fact:
    json_data: "{{ lookup('file', item) | from_json }}"

- name: Delegate a copy task to the remote host conditionally
  copy:
    src: "{{ item }}"
    dest: "{{ remote_dest_dir }}/{{ item | basename }}"
  delegate_to: "{{ remote_host }}"
  when: json_data.a_key > value_threshold
then in your playbook:
---
- hosts: localhost
  connection: local
  # Set some example vars, though these could be placed in a variety of places
  vars:
    local_src_dir: /some/local/path
    remote_host: <some_inventory_hostname>
    remote_dest_dir: /some/remote/path
    value_threshold: 3
  tasks:
    - name: Loop through all *json files, passing matches to include.yaml
      include: include.yaml
      loop: "{{ lookup('fileglob', local_src_dir + '/*json').split(',') }}"
Note: As you are running an old version of Ansible, you may need older alternate syntax for all of this to work:
In your include file:
- name: Load the json file
  include_vars: "{{ item }}"
- name: Delegate a copy task to the remote host conditionally
  copy:
    src: "{{ item }}"
    dest: "{{ remote_dest_dir }}/{{ item | basename }}"
  delegate_to: "{{ remote_host }}"
  when: a_key > value_threshold
and in your playbook:
- name: Loop through all *json files, passing matches to include.yaml
  include: include.yaml
  with_fileglob:
    - "{{ local_src_dir }}/*json"