I have written a YAML file as follows:
private_ips:
- 192.168.1.1
- 192.168.1.2
- 192.168.1.3
- 192.168.1.4
testcases:
- name: test_outbound
  ip: # <- here I want a reference to private_ips[0], i.e. '192.168.1.1'
How can I use references in a YAML file?
You can use something like:
ip: !Ref private_ips.0
in YAML. But that would require the program that loads the YAML to implement a special type for the tag !Ref that interprets its node relative to the current data structure. This is somewhat problematic in most YAML loaders, as they do a depth-first traversal, and while the !Ref-tagged node is being built there is no notion of the root of the YAML document. That could be solved by a second pass after the data structure is loaded. There is no "shortcut" in the YAML specification for this kind of traversal of the document to get a value; the loading program has to do something special (i.e. something not specified in the YAML specification).
What is in the YAML specification is the concept of anchors (indicated by &) and aliases (indicated by *). Depending on how you want to use this, it might solve your problem, e.g. if you want to experiment with which IP address should be used for testing:
private_ips:
- &test 192.168.1.1
- 192.168.1.2
- 192.168.1.3
- 192.168.1.4
testcases:
- name: test_outbound
  ip: *test
This should load in any YAML loader conforming to the spec, as if the last line was written as:
ip: 192.168.1.1
Without your program doing any extra processing.
I have a problem with group variables.
Example: I have two inventory groups, group_A and group_B, and I also have files with the same names in group_vars:
inventories/
  hosts.inv
    [group_A]
    server1
    server2
    [group_B]
    server3
    server4
  group_vars/
    group_A          # file containing:
      var_port: 9001
    group_B          # file containing:
      var_port: 9002
The problem is that when I execute:
ansible-playbook playbooks/playbook.yml -i inventories/hosts.inv -l group_B
the playbook is executed for the proper scope of servers (server3, server4), but it takes variables from the group variables file of group_A.
Expected result: var_port: 9002
In reality: var_port: 9001
ansible 2.4.2.0
BR Oleg
I enabled ANSIBLE_DEBUG, and here is what I found:
2018-05-03 15:23:23,663 p=129458 u=user | 129458 1525353803.66336: Loading data from /ansible/inventories/prod/group_vars/group_B.yml
2018-05-03 15:23:23,663 p=129458 u=user | 129661 1525353803.66060: in run() - task 00505680-eccc-d94e-2b1b-0000000000f4
2018-05-03 15:23:23,664 p=129458 u=user | 129661 1525353803.66458: calling self._execute()
2018-05-03 15:23:23,665 p=129458 u=user | 129458 1525353803.66589: Loading data from /ansible/inventories/prod/group_vars/group_A.yml
On playbook execution, Ansible scans all variable files in the group_vars folder that contain the variable var_port, and the last one loaded wins,
as you can see in another topic:
Ansible servers/groups in development/production
and from documentation:
http://docs.ansible.com/ansible/latest/user_guide/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable
Note
Within any section, redefining a var will overwrite the previous instance. If multiple groups have the same variable, **the last one loaded wins**. If you define a variable twice in a play’s vars: section, the **2nd one wins**.
It is still not clear to me how to manage configuration files. In this case I would have to use unique variable names for each group, but that is not possible with roles. Or should I use include_vars when I call the playbook?
A great example of how to manage variable files in a multistage environment, from DigitalOcean:
How to Manage Multistage Environments with Ansible
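If you do want to go the include_vars route mentioned in the question, a minimal sketch could look like this (the target_group extra variable and the vars file path are assumptions for illustration, not from the question):

# Hypothetical task: explicitly load the vars file for the group you limit the run to,
# so only that file's var_port is in scope.
- name: load variables for the selected group
  include_vars: "{{ inventory_dir }}/group_vars/{{ target_group }}.yml"

# invoked e.g. as:
#   ansible-playbook playbooks/playbook.yml -i inventories/hosts.inv -l group_B -e target_group=group_B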
I believe that the problem here, while not explicitly stated in the original question, is that Server{1,2} and Server{3,4} are actually the same servers in 2 different groups at the same level.
I ran into this problem myself, which caused me to do some digging. I don't agree with the behavior, but it is as designed. This was even fixed, with full backward compatibility, and the pull request was rejected:
Discussion
Pull Request
I am pretty new to Jinja2, and I'm wondering how to achieve this.
Say I have the following vars:
---
servers:
  192.168.0.1:
    names:
      - foo.example.com
      - foo
    exports:
      data:
        foo1: /disks/foo1
        foo2: /disks/foo2
  192.168.0.2:
    ...
I want to create a symlink /data/foo1 to /disks/foo1 and /data/foo2 to /disks/foo2, but only on the foo server; on other servers, make symlinks to their respective exports. So I thought file state=link with_items=... would be the correct thing to do. In Python, I can get the array I need using the following logic:
[
    {'mount': mount, 'export': export}
    for ip, server in servers.iteritems()
    if ansible_hostname in server['names']
    and 'exports' in server
    and 'data' in server['exports']
    for mount, export in server['exports']['data'].iteritems()
]
I don't know how to do this in Jinja2. I wanted to do something like
{{ servers | select('ansible_hostname in self.names') | ... }}
but that doesn't work. Would I need to create a plugin for this logic? Or is my approach all wrong and I should rethink the structure of my servers data?
Answer from my comment:
Usually you want to use the inventory_hostname variable – it is what you use as the host name in the inventory.
servers[ansible_hostname] will access the key of servers whose name is the value of ansible_hostname.
Just out of curiosity, you can check out this (complex filter chain) and this (runtime object construction).
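Following that advice, a minimal sketch of the symlink task could look like this (it assumes the servers dict is keyed by something that matches inventory_hostname and that the exports.data keys exist for this host; with_dict is my choice of loop, not from the question):

# Hypothetical task: one symlink per entry in exports.data for this host.
- name: create data symlinks
  file:
    src: "{{ item.value }}"         # e.g. /disks/foo1
    dest: "/data/{{ item.key }}"    # e.g. /data/foo1
    state: link
  with_dict: "{{ servers[inventory_hostname]['exports']['data'] }}"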
Say the minion host has a default YAML configuration file named myconf.yaml. What I want to do is edit parts of those YAML entries using values from a pillar. I can't even begin to think how to do this in Salt. The only thing I can think of is to run a custom Python script on the host via cmd.run and feed it input via arguments, but this seems overcomplicated.
I want to avoid file.managed. I cannot use a template, since the .yaml file is big and can be changed by external means. I just want to edit a few parameters in it. I suppose a Python script could do it, but I thought Salt could do it without me writing extra software.
I have found salt.states.file.serialize with the merge_if_exists option; I will try this and report back.
You want file.serialize with the merge_if_exists option.
# states/my_app.sls
something_conf_file:
  file.serialize:
    - name: /etc/my_app.yaml
    - dataset_pillar: my_app:mergeconf
    - formatter: yaml
    - merge_if_exists: true

# pillar/my_app.sls
my_app:
  mergeconf:
    options:
      opt3: 100
      opt4: 200
On the target, /etc/my_app.yaml might start out looking like this (before the state is applied):
# /etc/my_app.yaml
creds:
  user: a
  pass: b
options:
  opt1: 1
  opt2: 2
  opt3: 3
  opt4: 4
And would look like this after the state is applied:
creds:
  user: a
  pass: b
options:
  opt1: 1
  opt2: 2
  opt3: 100
  opt4: 200
As far as I can tell this uses the same algorithm as pillar merges, so e.g. you can merge or partially overwrite dictionaries, but not lists; lists can only be replaced whole.
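For illustration, here is a small sketch of that list behavior (the allowed_hosts key is made up for this example, not part of the state above):

# pillar supplies a list
my_app:
  mergeconf:
    allowed_hosts:
      - 10.0.0.1

# existing /etc/my_app.yaml before the state runs
allowed_hosts:
  - 192.168.0.1
  - 192.168.0.2

# after the state runs the list is replaced whole, not merged
allowed_hosts:
  - 10.0.0.1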
This can be done for both JSON and YAML with file.serialize. Input can be inline in the state or come from a pillar. A short excerpt follows:
# state
cassandra_yaml:
  file:
    - serialize
    # - dataset:
    #     concurrent_reads: 8
    - dataset_pillar: cassandra_yaml
    - name: /etc/cassandra/conf/cassandra.yaml
    - formatter: yaml
    - merge_if_exists: True
    - require:
      - pkg: cassandra-pkgs

# pillar
cassandra_yaml:
  concurrent_reads: "8"
I'm configuring /etc/security/limits.conf with Ansible's new module pam_limits.
What I've succeeded at:
Setting values for a specific domain and type in the default limits.conf (a new line is appended to the end of the file).
Changing values (the line gets rewritten).
The problem is when I want to completely remove a setting. E.g. I don't want to save core dumps anymore. How should I use pam_limits to remove the line completely?
I've managed to develop the following workaround, but I don't consider it good. It doesn't remove the line but rather sets the limit to 0, which may not be the same.
roles/myrole/tasks/main.yaml
...
- name: enable core dumps for myservice
  pam_limits: domain='*' limit_type='-' limit_item=core value="{{ 'unlimited' if myrole_save_core_dumps else 0 }}"
...
group_vars/myhosts.yaml:
myrole_save_core_dumps: true
myservice.yaml
- hosts: myhosts
  become: yes
  roles:
    - myrole
I believe this is a feature which is currently not implemented, but there is a feature request on GitHub for it.
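As a stop-gap, one possible workaround is to delete the line with the generic lineinfile module rather than pam_limits (a sketch; the regexp is my assumption about how the entry appears in limits.conf):

# Hypothetical task: drop the core-dump line entirely when dumps are not wanted.
- name: remove core dump limit line
  lineinfile:
    path: /etc/security/limits.conf
    regexp: '^\*\s+-\s+core\s+'
    state: absent
  when: not myrole_save_core_dumps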
I'm trying to get a cloud config script working properly with my DigitalOcean droplet, but I'm testing on local lxc containers in the interim.
One consistent problem I have is that I can never get the write_files directive working properly for more than one file. It seems to behave in weird ways that I cannot understand.
For example, this configuration is incorrect, and only outputs a single file (.tarsnaprc) in /tmp:
#cloud-config
users:
  - name: julian
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa myrsakeygoeshere julian#hostname
write_files:
  - path: /tmp/.tarsnaprc
    permissions: "0644"
    content: |
      cachedir /home/julian/tarsnap-cache
      keyfile /home/julian/tarsnap.key
      nodump
      print-stats
      checkpoint-bytes 1G
    owner: julian:julian
  - path: /tmp/lxc
    content: |
      lxc.id_map = u 0 100000 65536
      lxc.id_map = g 0 100000 65536
      lxc.network.type = veth
      lxc.network.link = lxcbr0
    permissions: "0644"
However, if I swap the two items in the write_files array, it magically works and creates both files, .tarsnaprc and lxc. What am I doing wrong? Do I have a syntax error?
It may be too late, as this was posted a year ago. The problem is setting the owner of /tmp/.tarsnaprc: that user does not exist yet when the file is created.
Check the answer to "cloud-init: What is the execution order of cloud-config directives?", which clearly explains the order of cloud-config directives.
Do not write files under /tmp during boot because of a race with systemd-tmpfiles-clean that can cause temp files to get cleaned during the early boot process. Use /run/somedir instead to avoid race LP:1707222.
ref: https://cloudinit.readthedocs.io/en/latest/topics/modules.html#write-files
Came here because I am using Canonical's Multipass. Nowadays the answers from @rvelaz and @Christian still point in the right direction. The corrected example would look like this:
#cloud-config
users:
  - name: julian
    shell: /bin/bash
    ssh_authorized_keys:
      - ssh-rsa myrsakeygoeshere julian#hostname
write_files:
  # not writing to /tmp
  - path: /data/.tarsnaprc
    permissions: "0644"
    content: |
      cachedir /home/julian/tarsnap-cache
      keyfile /home/julian/tarsnap.key
      nodump
      print-stats
      checkpoint-bytes 1G
    # at execution time, this owner does not yet exist (see runcmd)
    # owner: julian:julian
  - path: /data/lxc
    content: |
      lxc.id_map = u 0 100000 65536
      lxc.id_map = g 0 100000 65536
      lxc.network.type = veth
      lxc.network.link = lxcbr0
    permissions: "0644"
runcmd:
  - "chown julian:julian /data/lxc /data/.tarsnaprc"