Proper syntax of the write_files directive in cloud-config?

I'm trying to get a cloud config script working properly with my DigitalOcean droplet, but I'm testing on local lxc containers in the interim.
One consistent problem I have is that I can never get the write_files directive working properly for more than one file. It seems to behave in weird ways that I cannot understand.
For example, this configuration is incorrect, and only outputs a single file (.tarsnaprc) in /tmp:
#cloud-config
users:
  - name: julian
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa myrsakeygoeshere julian@hostname
write_files:
  - path: /tmp/.tarsnaprc
    permissions: "0644"
    content: |
      cachedir /home/julian/tarsnap-cache
      keyfile /home/julian/tarsnap.key
      nodump
      print-stats
      checkpoint-bytes 1G
    owner: julian:julian
  - path: /tmp/lxc
    content: |
      lxc.id_map = u 0 100000 65536
      lxc.id_map = g 0 100000 65536
      lxc.network.type = veth
      lxc.network.link = lxcbr0
    permissions: "0644"
However, if I swap the two items in the write_files array, it magically works and creates both files, .tarsnaprc and lxc. What am I doing wrong? Do I have a syntax error?

It may be too late, as this was posted a year ago, but: the problem is setting the owner on /tmp/.tarsnaprc, because that user does not yet exist when the file is created.
See the question "cloud-init: What is the execution order of cloud-config directives?", whose answer clearly explains the order in which cloud-config directives run.
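For reference, write_files runs in the cloud_init_modules stage before users-groups, which is why the owner cannot be resolved yet. A trimmed excerpt of the order from a stock /etc/cloud/cloud.cfg (the exact list varies by distribution and cloud-init version):

# /etc/cloud/cloud.cfg (trimmed)
cloud_init_modules:
  - bootcmd
  - write-files      # files are written here...
  - disk_setup
  - mounts
  - set_hostname
  - users-groups     # ...but users are only created here
  - ssh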

Do not write files under /tmp during boot, because of a race with systemd-tmpfiles-clean that can cause files there to be removed during the early boot process. Use /run/somedir instead to avoid the race (LP: #1707222).
ref: https://cloudinit.readthedocs.io/en/latest/topics/modules.html#write-files

I came here because I was using Canonical's Multipass. Nowadays the answers of @rvelaz and @Christian still point in the right direction. The corrected example would look like this:
#cloud-config
users:
  - name: julian
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa myrsakeygoeshere julian@hostname
write_files:
  # not writing to /tmp
  - path: /data/.tarsnaprc
    permissions: "0644"
    content: |
      cachedir /home/julian/tarsnap-cache
      keyfile /home/julian/tarsnap.key
      nodump
      print-stats
      checkpoint-bytes 1G
    # at execution time, this owner does not yet exist (see runcmd)
    # owner: julian:julian
  - path: /data/lxc
    content: |
      lxc.id_map = u 0 100000 65536
      lxc.id_map = g 0 100000 65536
      lxc.network.type = veth
      lxc.network.link = lxcbr0
    permissions: "0644"
runcmd:
  - "chown julian:julian /data/lxc /data/.tarsnaprc"

Why does sysctl show "failed to access a file" when using Ansible?

I have hit this a few times and never figured it out; it resolved itself.
Running the playbook below gives an error, but it does make the requested change.
If I run the same play again it does show the message, but that is because it is not updating the sysctl value.
---
- hosts: "{{ target }}"
  gather_facts: yes
  become: yes
  become_user: root
  tasks:
    - name: add a vm.overcommit_memory setting at the end of the sysctl.conf
      sysctl: name=vm.overcommit_memory value=0 state=present reload=yes
The error is:
fatal: [testbox.local]: FAILED! => {"changed": false, "msg": "Failed to reload sysctl: net.ipv4.tcp_syncookies = 1\nnet.ipv4.tcp_synack_retries = 2\nnet.ipv4.conf.all.accept_redirects = 0\nnet.ipv4.conf.default.accept_redirects = 0\nnet.ipv6.conf.all.accept_ra = 0\nnet.ipv6.conf.default.accept_ra = 0\nnet.ipv4.icmp_ignore_bogus_error_responses = 1\nnet.ipv6.conf.all.disable_ipv6 = 1\nnet.ipv6.conf.default.disable_ipv6 = 1\nkernel.randomize_va_space = 0\nvm.swappiness = 5\nvm.overcommit_memory = 0\nkernel.shmmni = 15872\nkernel.shmmax = 67546587136\nkernel.shmall = 32981732\nkernel.sem = 250 256000 32 15872\nkernel.msgmni = 64417\nkernel.msgmax = 65536\nkernel.msgmnb = 65536\nsysctl: setting key \"kernel.msgmni\": Invalid argument\nsysctl: cannot stat /proc/sys/randomize_va_space: No such file or directory\nsysctl: cannot stat /proc/sys/“vm/overcommit_memory”: No such file or directory\n"}
There is likely a problem with /etc/sysctl.conf that predates the change applied by the play. If you look at the error message, sysctl rejects the value for kernel.msgmni ("Invalid argument"), cannot stat /proc/sys/randomize_va_space (that line is missing its kernel. prefix), and cannot stat /proc/sys/“vm/overcommit_memory” (note the literal curly quotes around the key). From that I suspect there are bad lines in the file from previous attempts. Try commenting these out, or start again from the vanilla file shipped by your distribution.
On reload, the good lines are still applied by sysctl, but there are some wrong lines in the file which sysctl fails to apply; it reports them and exits with a non-zero exit code, which makes the play fail.
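You can reproduce what the module's reload step does, and spot the bad lines, without Ansible:

# apply the file by hand; bad lines are reported and the exit code is non-zero
sysctl -p /etc/sysctl.conf
# -e tolerates errors about unknown keys while you clean up the file
sysctl -e -p /etc/sysctl.conf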
According to the error message
No such file or directory\nsysctl: cannot stat /proc/sys/“vm/overcommit_memory”: No such file or directory\n"
it seems you are running into a barely documented issue: the file path isn't constructed correctly. For possible reasons, take a look at @blami's answer, since the message also contains a correct entry, vm.overcommit_memory = 0.
Furthermore, you may need to use the YAML notation, like
- name: Add a 'vm.overcommit_memory' setting at the end of the 'sysctl.conf'
  sysctl:
    name: vm.overcommit_memory
    value: 0
    state: present
    reload: yes
which is also used in linux-system-roles/kernel_settings for vm. settings.
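A usage example for the corrected task (the inventory path and playbook file name are assumptions):

ansible-playbook -i inventory -e target=testbox.local sysctl.yml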
Further Q&A
Using Ansible, can we edit kernel level setting?

cloud-init: delay disk_setup and fs_setup

I have a cloud-init file that sets up all requirements for our AWS instances, and part of those requirements is formatting and mounting an EBS volume. The issue is that on some instances volume attachment occurs after the instance is up, so when cloud-init executes, the volume /dev/xvdf does not yet exist and it fails.
I have something like:
#cloud-config
resize_rootfs: false
disk_setup:
  /dev/xvdf:
    table_type: 'gpt'
    layout: true
    overwrite: false
fs_setup:
  - label: DATA
    filesystem: 'ext4'
    device: '/dev/xvdf'
    partition: 'auto'
mounts:
  - [xvdf, /data, auto, "defaults,discard", "0", "0"]
I would like to have something like a sleep 60 before the disk-setup block runs.
If the whole cloud-init execution can be delayed, that would also work for me.
Also, I'm using Terraform to create the infrastructure.
Thanks!
I guess cloud-init does have an option for running ad-hoc commands; have a look at this link:
https://cloudinit.readthedocs.io/en/latest/topics/modules.html?highlight=runcmd#runcmd
I'm not sure what your code looks like, but I just passed the below as user_data in AWS and could see that the init script slept for 1000 seconds (I added a couple of echo statements so I could check later). You can add a little more logic as well to verify the presence of the volume; see the sketch after the example.
#cloud-config
runcmd:
  - [ sh, -c, "echo before sleep:`date` >> /tmp/user_data.log" ]
  - [ sh, -c, "sleep 1000" ]
  - [ sh, -c, "echo after sleep:`date` >> /tmp/user_data.log" ]
<Rest of the script>
I was able to resolve the issue with two changes:
1. Changed the mount options, adding the nofail option.
2. Added a line to the runcmd block that deletes the semaphore file for disk_setup.
So my new cloud-init file now looks like this:
#cloud-config
resize_rootfs: false
disk_setup:
  /dev/xvdf:
    table_type: 'gpt'
    layout: true
    overwrite: false
fs_setup:
  - label: DATA
    filesystem: 'ext4'
    device: '/dev/xvdf'
    partition: 'auto'
mounts:
  - [xvdf, /data, auto, "defaults,discard,nofail", "0", "0"]
runcmd:
  - [rm, -f, /var/lib/cloud/instances/*/sem/config_disk_setup]
power_state:
  mode: reboot
  timeout: 30
It will reboot, and then it will execute the disk_setup module once more. By that time the volume will be attached, so the operation won't fail.
I guess this is kind of a hacky way to solve it, so if someone has a better answer (like how to delay the whole cloud-init execution), please share it.
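One caveat about the list form of that rm: cloud-init writes list-form runcmd entries into the generated script with each argument quoted, so depending on the cloud-init version the * may never be glob-expanded. Two more defensive variants (note that /var/lib/cloud/instance, singular, is the symlink cloud-init keeps pointing at the current instance directory):

runcmd:
  # string form: a shell expands the glob
  - sh -c 'rm -f /var/lib/cloud/instances/*/sem/config_disk_setup'
  # or use the stable symlink, which needs no glob at all
  - [rm, -f, /var/lib/cloud/instance/sem/config_disk_setup]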

YAML Modify Alias Sequence Elements

I'm working on a configuration file format for a program and I was wondering if it is possible to modify specific elements of a sequence defined in an alias.
For example,
# Example multi-model device configuration.
---
aliases:
  - &cisco_default
    vendor: cisco
    cmds:
      - terminal length 0                     # keep
      - show version | include Model number   # keep
      - show boot | include BOOT path-list    # change this one below
      - "dir flash: | include bin$"           # and this one
      - quit                                  # keep
config:
  - *cisco_default
  - <<: *cisco_default
    models:
      - c4500
      - c3650
    cmds:
      - show boot | include BOOT variable
      - "dir bootflash: | include bin$"
I am using Go to process and unmarshal the YAML into a struct. So, if this behavior is not possible with plain YAML, is there an easy way to modify the cmds sequence using Go's text templates or something similar? Also, I need to preserve the order of the commands.
I got a solution by aliasing the cmds as an integer-keyed map. Here is a working configuration that allows looping over the commands in order:
---
aliases:
  - &cisco_default
    vendor: cisco
    cmds: &cisco_cmds
      0: terminal length 0
      1: show version | include Model number
      2: show boot | include BOOT path-list
      3: "dir flash: | include bin$"
      4: quit
config:
  # Default Cisco configuration.
  - *cisco_default
  # Cisco 4500 and 3650 model configuration.
  - <<: *cisco_default
    models:
      - c4500
      - c3650
    cmds:
      <<: *cisco_cmds
      2: show boot | include BOOT variable
      3: "dir bootflash: | include bin$"

SaltStack: Reverse engineering where a file comes from

If you look at a host which was set up by SaltStack, it is sometimes like looking at a binary file with vi:
you have no clue how the config/file was created.
This makes troubleshooting errors hard. Reverse engineering where a file comes from takes too much time.
My goal: make it easy to get from looking at the Unix config file on the minion (created by Salt) to the source this configuration came from, like $Id$ in svn and cvs.
One idea a friend and I had:
The state file.managed should (optionally) add the source of the file.
Example:
My sls file contains this:
file_foo_bar:
  file.managed:
    - source:
      - salt://foo/bar
Then the created file should contain this comment:
# Source: salt://foo/bar
Of course this is not simple, since there are different ways to put comments into configuration files.
Is this feasible? Or is there a better solution for my goal?
Update
Usually I know what I did wrong and can find the root cause easily. The problem arises when several people work on a state tree.
Here is a starting point: using Salt pillar, you can record the date and time at which a Salt-managed file was modified.
Let's call our variable salt_managed. Create a pillar file like the following:
{% set managed_text = 'Salt managed: File modified on ' + salt.cmd.run('date "+%Y-%m-%d %H:%M:%S"') %}
salt_managed: {{ managed_text | yaml_dquote }}
Then on the minion when you call the pillar you will get the following result:
$ salt-call pillar.get salt_managed
local:
    Salt managed: File modified on 2016-10-18 11:12:40
And you can use this by adding it at the top of your config files, for example like this:
{{ pillar.get('salt_managed') }}
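Note that {{ pillar.get('salt_managed') }} is only rendered if the file is managed as a Jinja template, with the line above at the top of the source file. A minimal sketch (state name and paths are assumptions):

myconf:
  file.managed:
    - name: /etc/myconf.conf
    - source: salt://myconf/files/myconf.conf
    - template: jinja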
Update:
I found a workaround that might be useful for someone. Let's say we have multiple states that could modify the same file. How can we know that state X is the one responsible for modifying that file? By doing the following steps:
1- I have created a state like this one:
Create a File:
  file.managed:
    - name: /path/to/foofile
    - source: salt://statedir/barfile

Add file header:
  file.prepend:
    - name: /path/to/foofile
    - text: "This file was managed by using this salt state {{ sls }}"
The content of barfile is:
This is a new file
2- Call the state from the minion and this will be the result:
$ salt-call state.sls statedir.test
local:
----------
          ID: Create a File
    Function: file.managed
        Name: /path/to/foofile
      Result: True
     Comment: File /path/to/foofile updated
     Started: 07:50:45.254994
    Duration: 1034.585 ms
     Changes:
              ----------
              diff:
                  New file
              mode:
                  0644
----------
          ID: Add file header
    Function: file.prepend
        Name: /path/to/foofile
      Result: True
     Comment: Prepended 1 lines
     Started: 07:50:46.289766
    Duration: 3.69 ms
     Changes:
              ----------
              diff:
                  ---
                  +++
                  @@ -1,1 +1,2 @@
                  +This file was managed by using this salt state statedir.test
                   This is a new file

Summary for local
------------
Succeeded: 2 (changed=2)
Failed:    0
------------
Total states run: 2
Currently the content of foofile is:
This file was managed by using this salt state statedir.test
This is a new file

SaltStack: edit yaml file on minion host based on salt pillar data

Say the minion host has a default YAML configuration named myconf.yaml. What I want to do is edit parts of those YAML entries using values from a pillar. I can't even begin to think how to do this in Salt. The only thing I can think of is to run a custom Python script on the host via cmd.run and feed it input via arguments, but this seems overcomplicated.
I want to avoid file.managed. I cannot use a template, since the .yaml file is big and can be changed by external means; I just want to edit a few parameters in it. I suppose a Python script could do it, but I thought Salt could do it without writing software.
I have found salt.states.file.serialize with the merge_if_exists option; I will try it and report back.
You want file.serialize with the merge_if_exists option.
# states/my_app.sls
something_conf_file:
  file.serialize:
    - name: /etc/my_app.yaml
    - dataset_pillar: my_app:mergeconf
    - formatter: yaml
    - merge_if_exists: true

# pillar/my_app.sls
my_app:
  mergeconf:
    options:
      opt3: 100
      opt4: 200
On the target, /etc/my_app.yaml might start out looking like this (before the state is applied):
# /etc/my_app.yaml
creds:
  user: a
  pass: b
options:
  opt1: 1
  opt2: 2
  opt3: 3
  opt4: 4
And would look like this after the state is applied:
creds:
  user: a
  pass: b
options:
  opt1: 1
  opt2: 2
  opt3: 100
  opt4: 200
As far as I can tell this uses the same algorithm as pillar merges, so e.g. you can merge or partially overwrite dictionaries, but not lists; lists can only be replaced whole.
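To apply the state and inspect the merged result (minion targeting is an assumption):

salt 'my-minion' state.apply my_app
salt 'my-minion' cmd.run 'cat /etc/my_app.yaml'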
This can be done for both JSON and YAML with file.serialize. The input can be inline in the state or come from a pillar. A short excerpt follows:
state:

cassandra_yaml:
  file:
    - serialize
    # - dataset:
    #     concurrent_reads: 8
    - dataset_pillar: cassandra_yaml
    - name: /etc/cassandra/conf/cassandra.yaml
    - formatter: yaml
    - merge_if_exists: True
    - require:
      - pkg: cassandra-pkgs

pillar:

cassandra_yaml:
  concurrent_reads: "8"
