I'm trying to insert a line two lines after a match, not immediately after it.
- name: Update compute node under ipsec group
  lineinfile:
    backup: yes
    state: present
    path: /root/multinode
    insertafter: '\[ipsec:children\]'
    line: "hostname"
Current contents of /root/multinode:
[ipsec:children]
control
Result:
[ipsec:children]
hostname
control
Desired Result:
[ipsec:children]
control
hostname
Basically, I want to insert the line after the entry that follows the match, not on the line immediately after the match.
Please let me know how to do this.
An option would be to use blockinfile (see the example below) or template. The lineinfile module works best with unstructured data.
This is primarily useful when you want to change a single line in a file only. ... check blockinfile if you want to insert/update/remove a block of lines in a file. For other cases, see the copy or template modules.
- hosts: localhost
  gather_facts: no
  vars:
    ipsec_children_conf:
      - "control"
      - "hostname"
  tasks:
    - blockinfile:
        path: /root/multinode
        create: yes
        block: |
          [ipsec:children]
          {% for conf_item in ipsec_children_conf %}
          {{ conf_item }}
          {% endfor %}
> cat /root/multinode
# BEGIN ANSIBLE MANAGED BLOCK
[ipsec:children]
control
hostname
# END ANSIBLE MANAGED BLOCK
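If you would rather own the whole section, a minimal template-based sketch is also possible. This assumes a hypothetical multinode.j2 template file and the same ipsec_children_conf variable as above; it renders the full group list instead of editing lines in place:

multinode.j2:

[ipsec:children]
{% for conf_item in ipsec_children_conf %}
{{ conf_item }}
{% endfor %}

Task:

- name: Render the ipsec group section from a template
  template:
    src: multinode.j2
    dest: /root/multinode

Note that template replaces the whole destination file, so this only fits if /root/multinode contains nothing else you need to preserve.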
Related
Is there a way to append data using the template module in Ansible? There are options for lineinfile and blockinfile in Ansible, but I have to append data to the hosts file. I want to preserve the original hosts and add new hosts to the existing file.
When using the template module, a complete file is templated.
What you could do is dynamically build your file with a Jinja2 template like this:
This is content of my file, not being replaced.
{% for item in some_variable %}
{{ item }}
{% endfor %}
And here is the end of the file, not being replaced.
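A minimal sketch of applying such a template (assuming it is saved as hosts.j2 next to the playbook and that some_variable is defined) would be:

- name: Render the hosts file from a template
  template:
    src: hosts.j2
    dest: /etc/hosts

Keep in mind that this rewrites the entire destination file from the template on every run.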
However, it feels like you're trying to edit a /etc/hosts file or something similar. You shouldn't use template for this use case.
Using lineinfile is perfect for those scenarios. Try something along the lines of:
- name: Add the host names and IPs to /etc/hosts
  lineinfile:
    dest: /etc/hosts
    regexp: '.*{{ item }}$'
    line: "{{ hostvars[item]['ansible_default_ipv4']['address'] }} {{ item }}"
    state: present
  when: hostvars[item]['ansible_default_ipv4'] is defined
  with_items:
    - "{{ groups['all'] }}"
You can also accomplish the above with blockinfile, as below:
- name: Add the below DNS records
  blockinfile:
    path: /etc/hosts
    marker: "------"
    insertafter: '^yourlinepattern'
    state: present
    block: |
      lines to append
Using blockinfile with with_items from an external file: when I run the playbook I can see all items being processed, but the resulting file only has the last item updated.
Apologies, bit of a noob to this, so I might be missing something obvious.
I have tried various permutations.
I have an external YAML config file with the following contents, which is included via include_vars:
YAML props file:
ds_props:
  - prop: dataSource.initialSize=8
  - prop: dataSource.maxActive=50
  - prop: dataSource.maxIdle=20
  - prop: dataSource.minIdle=5
  - prop: dataSource.maxWait=1000
Ansible tasks:
- name: update DS settings
  blockinfile:
    path: /app.properties
    insertafter: "##### Data Source Properties"
    block: |
      "{{ item.prop }}"
  with_items: "{{ ds_props }}"
Expected:
##### Data Source Properties #####
# BEGIN ANSIBLE MANAGED BLOCK
dataSource.initialSize=8
dataSource.maxActive=50
dataSource.maxIdle=20
dataSource.minIdle=5
dataSource.maxWait=1000
# END ANSIBLE MANAGED BLOCK
Actual:
##### Data Source Properties #####
# BEGIN ANSIBLE MANAGED BLOCK
dataSource.maxWait=1000
# END ANSIBLE MANAGED BLOCK
blockinfile uses a marker to keep track of the blocks it manages in a file. By default, this marker is ANSIBLE MANAGED BLOCK.
What happens in your case, since you use the default marker, is that a block is created after the line "##### Data Source Properties" for the first item and then edited in place for each subsequent item.
One solution would be to change the marker for each item. Another is to use lineinfile, as suggested by #Larsk.
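A minimal sketch of the first option (this assumes the same ds_props list; the marker format is just an illustrative choice) keeps one managed block per property by making each item's marker unique:

- name: update DS settings with one block per item
  blockinfile:
    path: /app.properties
    insertafter: "##### Data Source Properties"
    # {mark} expands to BEGIN/END; including item.prop makes each block's markers unique
    marker: "# {mark} ANSIBLE MANAGED BLOCK {{ item.prop }}"
    block: "{{ item.prop }}"
  with_items: "{{ ds_props }}"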
I would prefer in this case creating the full block at once:
- name: update DS settings
  blockinfile:
    path: /app.properties
    insertafter: "##### Data Source Properties"
    marker: "Custom ds props - ansible managed"
    block: "{{ ds_props | json_query('[].prop') | join('\n') }}"
If your intent is to do more complicated stuff with your config file, follow #Larsk's advice and use a template.
blockinfile is behaving exactly as designed: it adds a block of text to a target file, and will remove the matching block before adding a modified version. So for every iteration of your loop, blockinfile is removing the block added by the previous iteration and adding a new one.
Given that you're adding single lines to the file, rather than a block, you're probably better off using the lineinfile module, as in:
---
- hosts: localhost
  gather_facts: false
  vars:
    ds_props:
      - prop: dataSource.initialSize=8
      - prop: dataSource.maxActive=50
      - prop: dataSource.maxIdle=20
      - prop: dataSource.minIdle=5
      - prop: dataSource.maxWait=1000
  tasks:
    - name: update DS settings using lineinfile
      lineinfile:
        path: /app.properties-line
        line: "{{ item.prop }}"
        insertafter: "##### Data Source Properties"
      with_items: "{{ ds_props }}"
While this works, it's still problematic: if you change the value of one of your properties, you'll end up with multiple entries in the file. E.g., if we were to change dataSource.maxWait from 1000 to 2000, we would end up with:
dataSource.maxWait=1000
dataSource.maxWait=2000
We can protect against that using the regexp option to the lineinfile module, like this:
- name: update DS settings using lineinfile
  lineinfile:
    path: /app.properties-line
    line: "{{ item.prop }}"
    insertafter: "##### Data Source Properties"
    regexp: "{{ item.prop.split('=')[0] }}"
  with_items: "{{ ds_props }}"
This will cause the module to remove any existing lines for the particular property before adding the new one.
Incidentally, you might want to consider slightly restructuring your data, using a dictionary rather than a list of "key=value" strings, like this:
---
- hosts: localhost
  gather_facts: false
  vars:
    ds_props:
      dataSource.initialSize: 8
      dataSource.maxActive: 50
      dataSource.maxIdle: 20
      dataSource.minIdle: 5
      dataSource.maxWait: 1000
  tasks:
    - name: update DS settings using lineinfile
      lineinfile:
        path: /app.properties-line
        line: "{{ item.key }}={{ item.value }}"
        insertafter: "##### Data Source Properties"
        regexp: "{{ item.key }}"
      with_items: "{{ ds_props|dict2items }}"
And lastly, rather than using lineinfile or blockinfile at all, you might want to consider using Ansible's template module to create your /app.properties file instead of editing it in place.
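A minimal sketch of that approach, assuming a hypothetical app.properties.j2 template alongside the playbook and the dictionary-style ds_props shown above:

app.properties.j2:

##### Data Source Properties #####
{% for key, value in ds_props.items() %}
{{ key }}={{ value }}
{% endfor %}

Task:

- name: render app.properties from a template
  template:
    src: app.properties.j2
    dest: /app.properties

The trade-off is that template owns the whole file, so any content not generated by the template will be overwritten.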
Is there a module/way to iterate over multiple files?
Use Case:
There are multiple .conf files
/opt/a1.conf
/opt/a2.conf
/var/app1/conf/a3.conf
/etc/a4.conf
Is there a module or a way to iterate over multiple files and search whether a particular string exists in them? The string is the same in every file.
The lineinfile module helps you search a single file, but I would like to know if there is a way to iterate over multiple files, search for this string, and replace it with a different value.
My current playbook contents for a single file:
- name: Modify line to include Timeout
  become: yes
  become_method: sudo
  lineinfile:
    path: /tmp/httpd.conf
    regexp: 'http\s+Timeout\s+\='
    line: 'http Timeout = 10'
    backup: yes
The above can be used only for a single file.
Alternatively, we could use some kind of shell command to do it, but is there an Ansible way to achieve this?
Thank you
Have you tried iterating with the loop parameter?
- name: Modify line to include Timeout
  become: yes
  become_method: sudo
  lineinfile:
    path: "{{ item }}"
    regexp: 'http\s+Timeout\s+\='
    line: 'http Timeout = 10'
    backup: yes
  loop:
    - /opt/a1.conf
    - /opt/a2.conf
    - /var/app1/conf/a3.conf
    - /etc/a4.conf
You can use the with_items directive to traverse the files, or you can use a Jinja script to look for the string, something like below:
files:
  - /opt/a1.conf
  - /opt/a2.conf
  - /var/app1/conf/a3.conf
  - /etc/a4.conf
{% for item in files %} {{item.find('string')}} {% endfor %}
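As a further sketch, assuming the actual goal is to replace the string across all of these files, Ansible's replace module can be looped over the same list (the 'old-string' and 'new-string' values below are placeholders):

- name: Replace a string in several config files
  become: yes
  replace:
    path: "{{ item }}"
    regexp: 'old-string'    # placeholder: pattern to search for
    replace: 'new-string'   # placeholder: replacement text
    backup: yes
  loop:
    - /opt/a1.conf
    - /opt/a2.conf
    - /var/app1/conf/a3.conf
    - /etc/a4.conf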
I have a trivial task: fill a particular Ansible inventory file with the names of newly deployed VMs under their roles.
Here is my playbook:
- hosts: 127.0.0.1
  tasks:
    - name: Add host to inventory file
      lineinfile:
        dest: "{{ inv_path }}"
        regexp: '^\[{{ role }}\]'
        insertafter: '^#\[{{ role }}\]'
        line: "{{ new_host }}"
Instead of adding a new line after [role], Ansible replaces that line with the hostname. My aim is simply to insert after it. If there is no matching role, it should skip adding the new line.
How can I achieve this? Thanks!
[role] is getting replaced because the regexp parameter is the regular expression Ansible will look for to replace with the line parameter. Just move that value to insertafter and get rid of regexp.
As for the other requirement you will need another task that will check if [role] exists first. Then your lineinfile task will evaluate the result of the checker task to determine if it should run.
- name: Check if role is in inventory file
  shell: grep '\[{{ role }}\]' {{ inv_path }}
  ignore_errors: True
  register: check_inv

- name: Add host to inventory file
  lineinfile:
    dest: "{{ inv_path }}"
    insertafter: '^\[{{ role }}\]$'
    line: "{{ new_host }}"
  when: check_inv.rc == 0
I'm adding a few hosts to the hosts inventory file through a playbook. Now I'm using those newly added hosts in the same playbook, but those newly added hosts are not readable by the same playbook in the same run, it seems, because I get:
skipping: no hosts matched
When I run it separately, i.e. I update the hosts file through one playbook and use the updated hosts through another playbook, it works fine.
I wanted to do something like this recently, using ansible 1.8.4. I found that add_host needs to use a group name, or the play will be skipped with "no hosts matched". At the same time I wanted play #2 to use facts discovered in play #1. Variables and facts normally remain scoped to each host, so this requires using the magic variables hostvars and groups.
Here's what I came up with. It works, but it's a bit ugly. I'd love to see a cleaner alternative.
# test.yml
#
# The name of the active CFN stack is provided on the command line,
# or is set in the environment variable AWS_STACK_NAME.
# Any host in the active CFN stack can tell us what we need to know.
# In real life the selection is random.
# For a simpler demo, just use the first one.
- hosts:
    tag_aws_cloudformation_stack-name_{{ stack
      |default(lookup('env','AWS_STACK_NAME')) }}[0]
  gather_facts: no
  tasks:
    # Get some facts about the instance.
    - action: ec2_facts
    # In real life we might have more facts from various sources.
    - set_fact: fubar='baz'
    # This could be any hostname.
    - set_fact: hostname_next='localhost'
    # It's too late for variables set in this play to affect host matching
    # in the next play, but we can add a new host to temporary inventory.
    # Use a well-known group name, so we can set the hosts for the next play.
    # It shouldn't matter if another playbook uses the same name,
    # because this entry is exclusive to the running playbook.
    - name: add new hostname to temporary inventory
      connection: local
      add_host: group=temp_inventory name='{{ hostname_next }}'

# Now proceed with the real work on the designated host.
- hosts: temp_inventory
  gather_facts: no
  tasks:
    # The host has changed, so the facts from play #1 are out of scope.
    # We can still get to them through hostvars, but it isn't easy.
    # In real life we don't know which host ran play #1,
    # so we have to check all of them.
    - set_fact:
        stack: '{{ stack|default(lookup("env","AWS_STACK_NAME")) }}'
    - set_fact:
        group_name: '{{ "tag_aws_cloudformation_stack-name_" + stack }}'
    - set_fact:
        fubar: '{% for h in groups[group_name] %} {{
          hostvars[h]["fubar"]|default("") }} {% endfor %}'
    - set_fact:
        instance_id: '{% for h in groups[group_name] %} {{
          hostvars[h]["ansible_ec2_instance_id"]|default("") }} {% endfor %}'
    # Trim extra leading and trailing whitespace.
    - set_fact: fubar='{{ fubar|replace(" ", "") }}'
    - set_fact: instance_id='{{ instance_id|replace(" ", "") }}'
    # Now we can use the variables instance_id and fubar.
    - debug: var='{{ fubar }}'
    - debug: var='{{ instance_id }}'
# end
It's not entirely clear what you're doing - but from what I gather, you're using the add_host module in a play.
It seems logical that you cannot limit that same play to those hosts, because they don't exist yet... so this can never work:
- name: Play - add a host
  hosts: new_host
  tasks:
    - name: add new host
      add_host: name=new_host
But you're free to add multiple plays to a single playbook file (which you also seem to have figured out):
- name: Play 1 - add a host
  hosts: a_single_host
  tasks:
    - name: add new host
      add_host: name=new_host

- name: Play 2 - do stuff
  hosts: new_host
  tasks:
    - name: do stuff
It sounds like you are modifying the Ansible inventory file with your playbook, and then wanting to use the new contents of the file. Just modifying the contents of the file on disk, however, won't cause the inventory that Ansible is working with to be updated. The way Ansible works is that it reads that file (and any other inventory source you have) when it first begins and puts the host names it finds into memory. From then on it works only with the inventory that it has stored in memory, the stuff that existed when it first started running. It has no knowledge of any subsequent changes to the file.
But there are ways to do what you want! One option you could use is to add the new host into the inventory file, and also load it into memory using the add_host module. That's two separate steps: 1) add the new host to the file's inventory, and then 2) add the same new host to in-memory inventory using the add_host module:
- name: add a host to in-memory inventory
  add_host:
    name: "{{ new_host_name }}"
    groups: "{{ group_name }}"
A second option is to tell Ansible to refresh the in-memory inventory from the file. But you have to explicitly tell it to do that. Using this option, you have two related steps: 1) add the new host to the file's inventory, like you already did, and then 2) use the meta module:
- name: Refresh inventory to ensure new instances exist in inventory
  meta: refresh_inventory
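Putting that second option together end to end, a minimal sketch (the inv_path, role and new_host variable names are just illustrative, borrowed from the earlier question) would add the host to the file on disk and then refresh:

- name: Add the new host to the inventory file on disk
  lineinfile:
    path: "{{ inv_path }}"
    insertafter: '^\[{{ role }}\]$'
    line: "{{ new_host }}"

- name: Refresh in-memory inventory from the updated file
  meta: refresh_inventory

After the refresh, subsequent plays in the same playbook can target the new host or its group.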