Which module to use to edit files in Ansible?

I want to edit the configuration file of Telegraf (a system metrics collection agent).
Telegraf ships with a default config file that can be edited. Many input and output plugins are defined in there, commented out; they can be enabled by removing the comments and can also be customized.
I want to edit only some of the plugins defined there, not all of them. For example, consider this file,
[global]
interval='10s'
[outputs.influxdb]
host=['http://localhost:8086']
#[outputs.elasticsearch]
# host=['http://localhost:9200']
[inputs.netstat]
interface='eth0'
Now, I want to edit the three blocks global, outputs.influxdb and inputs.netstat. I don't want to edit outputs.elasticsearch, but the outputs.elasticsearch block should still remain in the file.
With Ansible, I first tried the template module, but that would lose the commented-out data.
Then I tried the ini_file module, but instead of editing the block that is already present, it adds a new block even if one already exists, which results in something like this,
[outputs.influxdb]
host=[http://localhost:8086]
[outputs.influxdb]
host=[http://xx.xx.xx.xx:8086]
Which module is ideal for my scenario?

There are several options, depending on your purpose.
The lineinfile module is the best option if you just want to add, replace or remove one line.
The replace module is best if you want to add, replace or delete several lines.
The blockinfile module can add several lines, surrounded by markers.
If you only want to change two or three lines, you could use that many calls to lineinfile, for example as sketched below. To change a whole config file, I would recommend, as the commenters suggest, using the template module.
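For illustration, a single lineinfile call might look something like this; the telegraf.conf path and the new interval value are only placeholders, not taken from the question:
- hosts: all
  tasks:
    # Change only the matching interval line; everything else in the file,
    # including the commented-out sections, stays untouched.
    - name: Set the telegraf collection interval
      lineinfile:
        path: /etc/telegraf/telegraf.conf
        regexp: '^interval='
        line: "interval='30s'"
Each additional line you want to manage would get its own similar task.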

Ok, if you really really want to avoid using templates, you could try to use replace and a regex like this:
- hosts: local
  tasks:
    - replace:
        path: testfile
        regexp: '^\[{{ item.category }}\]\s(.*)host(.*)$'
        replace: '[{{ item.category }}]\n host=[{{ item.host }}]'
      with_items:
        - { category: 'outputs.influxdb', host: 'http://cake.com:8080' }
This, in its current form, would not necessarily handle more than one option under each category, but the regex can be modified to handle multiple lines.
As required, it will not touch the # commented lines. However, if you later decide to enable some of the previously inactive sections, you might end up with a slightly messier configuration file that contains the same directives both commented and uncommented (this shouldn't affect functionality, only looks). You will also need to account for options that look like the example below (interleaved commented and uncommented values) and write regexes specifically for those cases:
[section]
option1=['value']
# option2=['value']
option3=['value']
It highly depends on your use case, but my recommendation remains to use templates instead, as they are a more robust approach with fewer chances of things going wrong.
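As a rough sketch of that (the telegraf.conf.j2 template name, the destination path and the play target are my own placeholders), the task itself stays small, and the commented-out elasticsearch block is preserved simply by leaving it verbatim inside the template:
- hosts: all
  tasks:
    # Render the whole file from a Jinja2 template; commented-out sections
    # survive because they sit as-is in telegraf.conf.j2.
    - name: Render the telegraf configuration
      template:
        src: telegraf.conf.j2
        dest: /etc/telegraf/telegraf.conf
The values you actually care about (interval, the influxdb host, the netstat interface) then become variables inside the template.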

Related

Ansible: include these lines and no others

I have a config file with one directive per line. Certain directives can be duplicated. In particular, I want to have the lines
server a
server b
server c
(where a, b and c are placeholders for full hostnames). It doesn't matter where the lines are in the file, or in what order. This is easy to do with either lineinfile or blockinfile.
My question is, how can I also ensure that there are no other server directives beside the ones I want to define? For instance, the ones set by default, or if I decide to stop using server c and use d instead.
My first thought was to have three lineinfile tasks for the servers I want to include, then add a fourth with state=absent to delete any other server directives that the file may contain, but I don't know how to write the regexp option for that.
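For illustration, something along these lines is roughly what I picture for that fourth task; the negative-lookahead regexp is only a guess on my part, the config path is a placeholder, and a, b, c again stand for the real hostnames:
- hosts: all
  tasks:
    # Hypothetical cleanup task: remove every "server" line except the
    # three wanted ones.
    - name: Remove unwanted server directives
      lineinfile:
        path: /etc/myservice.conf
        regexp: '^server (?!a$|b$|c$)'
        state: absent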
Another possibility would be to have one play delete all server directives, and then a second one re-add the lines I wanted to keep. But that seems ugly, and since I want to restart the daemon when its config file changes, could lead to unnecessary outages.
Thanks!

Is it possible to specify a list of good names for pylint just within a single python file?

I'm looking for something like
[BASIC]
good-names=X,
y
as in pylintrc, but I'd like to limit these names to be good only within a single python file.
I thought about message control like # pylint: disable=invalid-name at the top of the file, but that is too broad. Ideally, I'd like to allow only these two invalid names, X and y, to be considered good within a single file. Is that possible with pylint?
The only way I have been able to achieve this effect has been to disable and then re-enable immediately afterwards. It's not what you wanted, but at least it doesn't affect the whole file, and a comment of # pylint: enable=xxx is easy to find when you want to clean up later on (for example, if they ever add good-names to in-file message control).
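A minimal sketch of that pattern (the variables and the little function are just placeholders so the snippet stands on its own):
# pylint: disable=invalid-name
X = [[1, 2], [3, 4]]   # short name allowed only inside this span
y = [0, 1]
# pylint: enable=invalid-name

def train(features, labels):
    """Placeholder; code after the enable is checked normally again."""
    return list(zip(features, labels))

print(train(X, y))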

yml mapping values are not allowed here (syntax)

I have removed half of what I wanted in this YAML file while trying to get it to build my mkdocs test site, and I am down to one error.
'''
mkdocs.yml
doc/
Scaling-Issue.md
FreeSwitch.md
User-Sessions.md
nav:
-Common Issues:
-Scaling Issue:'Scaling-Issue.md'
-FreeSwitch:'FreeSwitch.md'
-User Sessions:'User-Sessions.md'
'''
Error: 6:4 syntax error: mapping values are not allowed here
You appear to have a lot of stuff in your configuration file which does not belong. Your file should look like this:
site_name: 'Your Site Name'
nav:
    - Common Issues:
        - Scaling Issue: 'Scaling-Issue.md'
        - FreeSwitch: 'FreeSwitch.md'
        - User Sessions: 'User-Sessions.md'
First of all, the site_name option is required. Of course, you can change the actual name to whatever you want.
While nav is optional, it is recommended. I have cleaned up the indentation (four spaces are recommended rather than the one space you were using). Also, you should have a space after each hyphen in your list items (and after each colon).
The other things do not belong in the file. For example, the name of the file itself should not be included in it, and I'm not sure where the three dots come from. Finally, the list of Markdown files is not something that goes in the configuration file. I realize that all of those things may appear in the documentation, but they are showing what the project structure looks like; they are not things to add to the configuration file.
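For reference, the project layout would look roughly like this (assuming the default docs directory; if yours really is named doc, the docs_dir setting can point to it, and the configuration file itself never lists the Markdown files):
mkdocs.yml
docs/
    Scaling-Issue.md
    FreeSwitch.md
    User-Sessions.md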

How to exclude instances of the EC2 inventory in Ansible?

We have an Ansible server using EC2 dynamic inventory:
https://github.com/ansible/ansible/blob/devel/contrib/inventory/ec2.py
https://github.com/ansible/ansible/blob/devel/contrib/inventory/ec2.ini
However, with the number of instances we have, running ./ec2.py --list or ./ec2.py --refresh-cache returns a 28,000 line JSON response.
This, I assume, causes it to randomly fail (it returns a Python stack trace) because it only receives a partial response from the call to AWS, but it is fine if run again.
Which is why I want to know if there's a way to cut this down.
I know there is a way to include specific instances by tag in ec2.ini (i.e. # instance_filters = tag:env=staging), but with the way our instances are tagged, is there a way to exclude instances instead (something that would look similar to: # instance_filters = tag:name=!dev)?
is there a way to exclude instances instead
Just for completeness, I wanted to point out that the "inventory protocol" for ansible is super straightforward to implement, and they even have a JSON Schema for it.
You can see an example of the expected output by running the newly included ansible-inventory script with --list against one of the .ini style inventories, and then use that as a model for emitting your own:
$ printf 'somehost ansible_user=bob\n\n[some_group]\nsomehost\n' > sample
$ ansible-inventory -i ./sample --list
What I am suggesting is that you might have better luck making a custom inventory script, that does know your local business practices, rather than trying to force ec2.py into running a negation query (which, as best I can tell, it will not do).
To generate dynamic inventory, just make an executable -- as far as I know it can be in any language at all -- and then point the -i at the executable script instead of a "normal" file. Ansible will invoke that program, and operate on the JSON output as the inventory. There are several examples people have posted as gists, in all kinds of languages.
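A minimal sketch of such a script (the group name, hosts and variables are placeholders; a real version would query AWS, for example via boto3, and apply your own exclusion rules before printing the JSON):
#!/usr/bin/env python
# Hypothetical custom inventory script: prints the inventory JSON that
# Ansible expects from a dynamic inventory executable.
import json
import sys


def build_inventory():
    # Placeholder data; a real script would build this from an AWS query
    # that already excludes the instances you don't want.
    return {
        "webservers": {
            "hosts": ["host1.example.com", "host2.example.com"],
            "vars": {"ansible_user": "ec2-user"},
        },
        "_meta": {
            "hostvars": {
                "host1.example.com": {},
                "host2.example.com": {},
            }
        },
    }


if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "--host":
        # Per-host variables are already supplied via _meta above.
        print(json.dumps({}))
    else:
        # --list (the default Ansible call): emit the whole inventory.
        print(json.dumps(build_inventory()))
Make it executable and point Ansible at it with -i (the filename is arbitrary).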
I would still love it if you would file an issue with ansible about ec2.py, because you have a situation that can make the bug report concrete for them in ways that a simple "it doesn't work for large inventories" doesn't capture. But in the meantime, writing your own inventory provider is actually less work than it sounds.
I use the option pattern_exclude in ec2.ini:
# If you want to exclude any hosts that match a certain regular expression
pattern_exclude = staging-*
and
hostname_variable = tag_Name

Improved way of scaling in saltstack

I have a problem with Jinja2 templating: breaking a one-line string over multiple lines when writing a state (or anything else) in Salt. My exact case is trying to write a list of machines one after the other, as a list, instead of on one really long line.
What I am trying to say is that I want to achieve this:
nodegroups:
- group: 'L#adsdasdadas' +
'dasdasdasdas'
.............->imagine 10.000 names coming here
'adsasdasddsa'
Compared to the approach that I have to do now:
nodegroups:
- group: 'L#adsdasdadas,dasdsadasdsa,dasdsadasdsa,......,asdqwe'
Is there a better way to do it? Is there a better way to handle thousands of machines?
You could say grains, and I thought about it, but I was wondering if there's a better and more elegant way of doing it.
Any thoughts or opinions would help me a lot.
[Edit1]:
I wrote a script that takes a list of hostnames and adds them to the master config file in the nodegroups section. For now it might work.
Choice of data source
I would recommend targeting with pillars, because they are managed centrally from the Master (convenient), rather than with static custom grains, which are configured separately on each Minion (inconvenient) - see the comparison summary here.
Limitations of configuration files
The nodegroups are specified in the Salt configuration file /etc/salt/master, which is not a Jinja template (it is pure YAML). So you don't have the option of using Jinja to join external input with a list of strings.
Possible solution
Why is joining even mentioned? You can turn the problem of "breaking a one-line string over multiple lines" into the solution of using lists right away - there is no need to break anything (and if you do need a "one-line string" somewhere, joining list items is easy).
In other words, you could define nodegroups via pillar (avoiding long strings as in your example). Pillars, in turn, are rendered by Jinja. Therefore, using the same list of Minions defined somewhere, you can generate derived products in pillars through Jinja (be it a joined string or the list as is). There is a trick which allows reusing the same external data in multiple pillar files.
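For illustration, a pillar file along these lines could carry the list (the path, the machine names and the info key are placeholders; info merely matches the key used in the follow-up below):
# /srv/pillar/nodes.sls - a minimal sketch; wire it up in the pillar top file as usual
{% set machines = ['web01', 'web02', 'db01'] %}

# The list as-is, for states or nodegroup definitions that iterate over it:
info:
{% for m in machines %}
  - {{ m }}
{% endfor %}

# The same data joined into one string, should a single line ever be needed:
machines_joined: {{ machines | join(',') }}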
First of all I would like to thank uvsmtid for the wonderful idea. Sorry for the confusion created too.
So, what I did was create a pillar with the name of each minion (which happens to be its id), and then in a state I compared each value from that list to the actual id of the minion:
{% for item in salt['pillar.get']('info') %}
{% if grains['id'] == item %}
something:
  cmd.run:
    - name: touch something
{% endif %}
{% endfor %}
I hope this solution will help someone the same way it helped me.
