Say the minion host has a default YAML configuration named myconf.yaml. What I want to do is edit parts of those YAML entries using values from a pillar. I can't even begin to think how to do this in Salt. The only thing I can think of is to run a custom Python script on the host via cmd.run and feed it input via arguments, but this seems overcomplicated.
I want to avoid file.managed. I cannot use a template, since the .yaml file is big and can change by external means. I just want to edit a few parameters in it. I suppose a Python script could do it, but I thought Salt could do it without writing software.
I have found salt.states.file.serialize with the merge_if_exists option; I will try this and report back.
You want file.serialize with the merge_if_exists option.
# states/my_app.sls
something_conf_file:
  file.serialize:
    - name: /etc/my_app.yaml
    - dataset_pillar: my_app:mergeconf
    - formatter: yaml
    - merge_if_exists: true
# pillar/my_app.sls
my_app:
  mergeconf:
    options:
      opt3: 100
      opt4: 200
On the target, /etc/my_app.yaml might start out looking like this (before the state is applied):
# /etc/my_app.yaml
creds:
  user: a
  pass: b
options:
  opt1: 1
  opt2: 2
  opt3: 3
  opt4: 4
And would look like this after the state is applied:
creds:
  user: a
  pass: b
options:
  opt1: 1
  opt2: 2
  opt3: 100
  opt4: 200
As far as I can tell this uses the same algorithm as pillar merges, so e.g. you can merge or partially overwrite dictionaries, but not lists; lists can only be replaced whole.
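For example, if the pillar supplies a list, the whole list on disk is replaced rather than merged element by element. A small illustration with hypothetical keys (not from the state above):

# on disk before the state runs
options:
  servers:
    - host-a
    - host-b
  opt1: 1

# pillar data merged in
options:
  servers:
    - host-c

# result with merge_if_exists: the dictionaries are merged (opt1 survives),
# but the servers list is replaced outright
options:
  servers:
    - host-c
  opt1: 1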
This can be done for both JSON and YAML with file.serialize. The input can be inline in the state or come from a pillar. A short excerpt follows:
state:
cassandra_yaml:
  file:
    - serialize
    # - dataset:
    #     concurrent_reads: 8
    - dataset_pillar: cassandra_yaml
    - name: /etc/cassandra/conf/cassandra.yaml
    - formatter: yaml
    - merge_if_exists: True
    - require:
      - pkg: cassandra-pkgs

pillar:
cassandra_yaml:
  concurrent_reads: "8"
I have written a YAML file as follows:
private_ips:
  - 192.168.1.1
  - 192.168.1.2
  - 192.168.1.3
  - 192.168.1.4

testcases:
  - name: test_outbound
    ip: << I want to use reference to private_ips[0] = '192.168.1.1'
How can I use references in a YAML file?
You can use something like:
ip: !Ref private_ips.0
in YAML. But that would require that the program loading the YAML implements a special type for the tag !Ref that interprets its node relative to the current data structure. This is somewhat problematic in most YAML loaders, as they do a depth-first traversal and, while the !Ref-tagged node is being built, there is no notion of the root of the YAML document tree. That could be solved by a second pass after the data structure is loaded. There is no "shortcut" in the YAML specification to do this kind of traversal of the document to get a value without the loading program doing something special (i.e. something not specified in the YAML specification).
What is in the YAML specification is the concept of anchors (indicated by &) and aliases (indicated by *). Depending on how you want to use this, it might solve your problem, e.g. if you want to experiment with which IP address should be used for testing:
private_ips:
  - &test 192.168.1.1
  - 192.168.1.2
  - 192.168.1.3
  - 192.168.1.4

testcases:
  - name: test_outbound
    ip: *test
This should load in any YAML loader conforming to the spec, as if the last line was written as:
ip: 192.168.1.1
Without your program doing any extra processing.
I'm new to Ansible and thus this question may seem silly to more advanced users.
Anyway, I need to get the value 362496 for the column LDFree.
I know I can use the shell module with pipes and awk, but I was wondering if it's possible to achieve this in Ansible using some sort of "filter" for STDOUT.
This is the STDOUT from the CLI:
-------------------------(MB)-------------------------
CPG ---EstFree---- -------Usr------- ---Snp---- ---Adm---- -Capacity Efficiency-
Name RawFree LDFree Total Used Total Used Total Used Compaction Dedup
SSD_r6 483328 362496 12693504 12666880 12288 2048 8192 1024 1.0 -
You can do this using the fact that Ansible/Jinja supports calling methods of native types:
- command: cat test.txt
  register: cmd_res

- debug:
    msg: "{{ cmd_res.stdout_lines[3].split()[2] }}"
stdout_lines[3] – take the fourth line, .split() – split it into tokens, [2] – take the third token.
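If the line position can shift, a slightly more defensive variant (a sketch, assuming the CPG name SSD_r6 is known up front) selects the row by its first column instead of a fixed index:

- debug:
    msg: "{{ (cmd_res.stdout_lines | select('match', '^SSD_r6') | first).split()[2] }}"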
If you look at a host which was set up by SaltStack, it is sometimes like looking at a binary file with vi.
You have no clue how the config/file was created.
This makes troubleshooting errors hard. Reverse engineering where a file comes from takes too much time.
My goal: make it easy to get from a Unix config file on the minion (created by Salt) to the source this configuration came from, like $Id$ in svn and cvs.
One idea a friend and I had:
The state file.managed should (optionally) add the source of the file.
Example:
My sls file contains this:
file_foo_bar:
  file.managed:
    - source:
      - salt://foo/bar
Then the created file should contain this comment:
# Source: salt://foo/bar
Of course this is not simple, since there are different ways to put comments into configuration files.
Is this feasible? Or is there a better solution for my goal?
Update
Usually I know what I did wrong and can find the root cause easily. The problem arises when several people work on a state tree.
This is a starting point where you can get the date and time at which a file was modified when it's managed by Salt, by using a Salt pillar.
Let's call our variable salt_managed. Create a pillar file like the following:
{% set managed_text = 'Salt managed: File modified on ' + salt.cmd.run('date "+%Y-%m-%d %H:%M:%S"') %}
salt_managed: {{ managed_text | yaml_dquote }}
Then on the minion when you call the pillar you will get the following result:
$ salt-call pillar.get salt_managed
local:
    Salt managed: File modified on 2016-10-18 11:12:40
And you can use this by adding it at the top of your config files, for example like this:
{{ pillar.get('salt_managed') }}
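As a sketch of how that could be wired up (the state and template names here are illustrative, not from the original answer): render the config with template: jinja and put the pillar value in a comment on the first line of the source template.

# states/my_app_conf.sls
my_app_conf:
  file.managed:
    - name: /etc/my_app.conf
    - source: salt://my_app/files/my_app.conf.jinja
    - template: jinja

# salt://my_app/files/my_app.conf.jinja (first line)
# {{ pillar.get('salt_managed') }}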
Update:
I found a workaround that might be useful for someone. Let's say we have multiple states that could modify the same file. How can we know that state X is the one responsible for modifying that file? By doing the following steps:
1- I have created a state like this one:
Create a File:
  file.managed:
    - name: /path/to/foofile
    - source: salt://statedir/barfile

Add file header:
  file.prepend:
    - name: /path/to/foofile
    - text: "This file was managed by using this salt state {{ sls }}"
The contents of barfile are:
This is a new file
2- Call the state from the minion and this will be the result:
$ salt-call state.sls statedir.test
local:
----------
ID: Create a File
Function: file.managed
Name: /path/to/foofile
Result: True
Comment: File /path/to/foofile updated
Started: 07:50:45.254994
Duration: 1034.585 ms
Changes:
----------
diff:
New file
mode:
0644
----------
ID: Add file header
Function: file.prepend
Name: /path/to/foofile
Result: True
Comment: Prepended 1 lines
Started: 07:50:46.289766
Duration: 3.69 ms
Changes:
----------
diff:
---
+++
@@ -1,1 +1,2 @@
+This file was managed by using this salt state statedir.test
This is a new file
Summary for local
------------
Succeeded: 2 (changed=2)
Failed: 0
------------
Total states run: 2
Currently the content of foofile is:
This file was managed by using this salt state statedir.test
This is a new file
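To get closer to the original goal of also recording where the file came from, the prepended text could name the source path explicitly (a sketch building on the state above, using this example's source):

Add file header:
  file.prepend:
    - name: /path/to/foofile
    - text: "# Source: salt://statedir/barfile (salt state {{ sls }})"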
I'm configuring /etc/security/limits.conf with Ansible's new module pam_limits.
What I've succeeded at:
Setting values for a specific domain and type in the default limits.conf (a new line is appended to the end of the file).
Changing values (the line gets rewritten).
The problem is when I want to completely remove the setting. E.g. I don't want to save core dumps anymore. How should I use pam_limits to remove the line completely?
I've managed to develop the following workaround, but I don't consider it good. It doesn't remove the line but rather sets the limit to 0, which may not be the same.
roles/myrole/tasks/main.yaml
...
- name: enable core dumps for myservice
  pam_limits: domain='*' limit_type='-' limit_item=core value="{{ 'unlimited' if myrole_save_core_dumps else 0 }}"
...
group_vars/myhosts.yaml:
myrole_save_core_dumps: true
myservice.yaml
- hosts: myhosts
  become: yes
  roles:
    - myrole
I believe this is a feature which is currently not implemented, but there is a feature request on GitHub for it.
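Until that is implemented, one possible workaround (a sketch, assuming the entry was written with domain '*', type '-' and item core as in the task above) is to delete the matching line with lineinfile when the flag is off:

- name: remove core dump limit for myservice
  lineinfile:
    path: /etc/security/limits.conf
    regexp: '^\*\s+-\s+core\s'
    state: absent
  when: not myrole_save_core_dumps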
I'm trying to get a cloud config script working properly with my DigitalOcean droplet, but I'm testing on local lxc containers in the interim.
One consistent problem I have is that I can never get the write_files directive working properly for more than one file. It seems to behave in weird ways that I cannot understand.
For example, this configuration is incorrect, and only outputs a single file (.tarsnaprc) in /tmp:
#cloud-config
users:
  - name: julian
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa myrsakeygoeshere julian@hostname

write_files:
  - path: /tmp/.tarsnaprc
    permissions: "0644"
    content: |
      cachedir /home/julian/tarsnap-cache
      keyfile /home/julian/tarsnap.key
      nodump
      print-stats
      checkpoint-bytes 1G
    owner: julian:julian
  - path: /tmp/lxc
    content: |
      lxc.id_map = u 0 100000 65536
      lxc.id_map = g 0 100000 65536
      lxc.network.type = veth
      lxc.network.link = lxcbr0
    permissions: "0644"
However, if I swap the two items in the write_files array, it magically works, and creates both files, .tarsnaprc and lxc. What am I doing wrong, do I have a syntax error?
It may be too late, as this was posted a year ago. The problem is setting the owner on /tmp/.tarsnaprc, as the user does not exist yet when the file is created.
Check the answer to "cloud-init: What is the execution order of cloud-config directives?", which clearly explains the order of cloud-config directives.
Do not write files under /tmp during boot because of a race with systemd-tmpfiles-clean that can cause temp files to get cleaned during the early boot process. Use /run/somedir instead to avoid race LP:1707222.
ref: https://cloudinit.readthedocs.io/en/latest/topics/modules.html#write-files
I came here because of using Canonical's Multipass. Nowadays the answers of @rvelaz and @Christian still point in the right direction. The corrected example would look like this:
#cloud-config
users:
  - name: julian
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa myrsakeygoeshere julian@hostname

write_files:
  # not writing to /tmp
  - path: /data/.tarsnaprc
    permissions: "0644"
    content: |
      cachedir /home/julian/tarsnap-cache
      keyfile /home/julian/tarsnap.key
      nodump
      print-stats
      checkpoint-bytes 1G
    # at execution time, this owner does not yet exist (see runcmd)
    # owner: julian:julian
  - path: /data/lxc
    content: |
      lxc.id_map = u 0 100000 65536
      lxc.id_map = g 0 100000 65536
      lxc.network.type = veth
      lxc.network.link = lxcbr0
    permissions: "0644"

runcmd:
  - "chown julian:julian /data/lxc /data/.tarsnaprc"