I'm using {% include some_file.ext %} to pull files from the _includes folder for a Jekyll-powered GitHub Pages site (gh-pages branch).
All the includes, like header.html, work perfectly fine. But pulling in other things like the CSS/JS files (which are included from inside all.* files in the exact same folder) doesn't work. The paths for those files, which are set using {{ site.baseurl }}, simply stay empty.
Dumping the variable doesn't output anything either. It looks like the {{ }} variables aren't accessible, while everything else works just fine. In other words, adding the following to index.html returns nothing but the empty tags:
<pre>{{ site }}</pre>
The (locally working) all.css file.
---
---
{% include css/grid.css %}
{% include css/style.css %}
Location of the all.css (and all.js) file(s):
~/root
├── /_includes
│   ├── css
│   │   ├── grid.css
│   │   └── style.css
│   ├── js
│   │   ├── analytics.js
│   │   └── general.js
│   └── head.html
├── index.html
├── all.js
└── all.css
in case it's any help. I'm running a blog on GitHub Pages, and we use index.md; I do all my {{ var }} work in _layouts/index.html.
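Two things may be worth checking (both guesses from the symptoms): Jekyll only runs Liquid on files that begin with a YAML front matter block (even an empty one, like the two --- lines in all.css), and site.baseurl is an empty string unless it is explicitly set in _config.yml. A minimal sketch, with my-repo standing in as a placeholder for the repository name:

```yaml
# _config.yml (sketch; "my-repo" is a placeholder for your repository name)
baseurl: "/my-repo"   # without this, {{ site.baseurl }} renders as ""
```

With that in place, asset links can be written as {{ site.baseurl }}/all.css in any file that carries front matter.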
I am experimenting with variable overriding in Ansible. To do so, I have created the below-depicted directory structure. Note that under inventories, I have created two separate sites (1 & 2)
Also, note that I have added group_vars/host_vars at two different levels; below inventories and each site.
.
├── ansible.cfg
├── inventories
│   ├── group_vars
│   │   └── all.yml
│   ├── host_vars
│   │   └── target2.yml
│   ├── site1
│   │   ├── group_vars
│   │   │   └── all.yml
│   │   ├── host_vars
│   │   │   └── target1.yml
│   │   └── hosts.yml
│   └── site2
│       ├── group_vars
│       │   └── all.yml
│       ├── host_vars
│       │   └── target2.yml
│       └── hosts.yml
├── modules
├── playbooks
│   └── playbook1
│       ├── group_vars
│       │   └── all.yml
│       └── host_vars
└── roles
I would like to be able to store default variables for groups/hosts at "inventories" level and override them when/if necessary at site/group/host level using directories (not the hosts.yml), but I am unable to do so.
If I test the inventory by targeting the base "inventories" directory, I can see that the group_vars/host_vars folders under the sites are ignored:
ansible-inventory --vars --graph -i inventories/
#all:
|--#site1:
| |--target1
| | |--{scope = inventories/site1/hosts.yml}
| |--target2
| | |--{scope = inventories/host_vars/target2.yml}
|--#site2:
| |--target2
| | |--{scope = inventories/host_vars/target2.yml}
|--#ungrouped:
|--{scope = inventories/group_vars/all.yml}
But if I target a specific site, the underlying group_vars/host_vars folders are used, although of course the ones at the base "inventories" level are then ignored:
ansible-inventory --vars --graph -i inventories/site1
#all:
|--#site1:
| |--target1
| | |--{scope = inventories/site1/host_vars/target1.yml}
| |--target2
| | |--{scope = inventories/site1/group_vars/all.yml}
|--#ungrouped:
|--{scope = inventories/site1/group_vars/all.yml}
ansible-inventory --vars --graph -i inventories/site2
#all:
|--#site2:
| |--target2
| | |--{scope = inventories/site2/host_vars/target2.yml}
| |--{scope = inventories/site2/group_vars/all.yml}
|--#ungrouped:
Is it possible to instruct Ansible to look for group_vars/host_vars folders in the entire directory structure?
Thanks!
According to your description and example, your sets site1 and site2 are already subsets of the set all. You made that observation yourself and described it in your question:
If I target a specific site, the underlying group_vars/host_var folder are used, but of course the one at base "inventory" are ignored
The command you used also gives a hint:
ansible-inventory --vars --graph -i inventories/site1 # or 2
since with it you set the root of the tree structure (or graph) to a subtree (or a part of the graph).
Is it possible to instruct Ansible to look for group_vars/host_vars folders in the entire directory structure?
No, since there is no other directory outside of the defined subset, subtree, or graph part.
but of course the one at base "inventory" are ignored
In other words, the base "inventories" doesn't exist (anymore) once you've set the base to "inventories/site1" (or 2).
Another approach could be to have one inventory for each site.
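A sketch of that per-site layout; shared defaults live in one file that each site pulls into its group_vars/all/ directory via a symlink (all file names here are illustrative). Ansible loads the files inside a group_vars/all/ directory in alphabetical order, so the site-specific file can override the shared one:

```
inventories/
├── shared/
│   └── defaults.yml                 # defaults common to every site
├── site1/
│   ├── hosts.yml
│   ├── group_vars/
│   │   └── all/
│   │       ├── 00-shared.yml -> ../../../shared/defaults.yml
│   │       └── 10-site.yml          # site1 overrides (loaded later, wins)
│   └── host_vars/
│       └── target1.yml
└── site2/
    └── ...
```

Each site is then its own complete inventory: ansible-inventory --vars --graph -i inventories/site1.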
I have a multi environment & multi inventories setup within ansible (2.7.9).
With one of the inventories, I want to set a global variable to be inherited by all the hosts within that inventory. For this purpose I added the variable to that specific inventory (inventory/production/prodinv):
[all:vars]
myvar="True"
And it works fine if I run Ansible against that specific inventory (inventory/production/prodinv). However, if I run Ansible against the inventory directory (e.g. inventory/production), I notice that the variable is inherited by all the hosts across all the inventories, which isn't ideal because I only want the hosts within the firstenv inventory to have the var defined.
Currently group_vars and host_vars are symlinks (for all the inventories) to a "shared" root group_vars and host_vars.
To add more clarity to my question, below is the structure of my ansible:
.
├── ansible.cfg
├── playbooks/
├── roles/
├── inventory/
│   ├── group_vars/
│   ├── host_vars/
│   ├── tnd/
│   │   ├── group_vars/ -> ../group_vars
│   │   ├── host_vars/ -> ../host_vars
│   │   └── devinv
│   └── production/
│       ├── group_vars/ -> ../group_vars
│       ├── host_vars/ -> ../host_vars
│       └── prodinv
└── . . .
I'm not sure how / where to define this var that should apply to all hosts/groups within a particular inventory, without running into the same issue. Ideas?
Thanks,
J
I think your problem is two-fold.
Ansible applies the group_vars of a directory to all files and subdirectories within the specified inventory directory. So, inventory/production/group_vars will get applied to everything within inventory/production. This just gets masked when you explicitly limit your inventory further while running, like you did (-i inventory/production/prodinv).
This means that you need to put the group_vars only being applied to prodinv in their own directory and not in the inventory/production directory. For example, inventory/production/prodinv/group_vars.
Your symlinks are set up in a way that if you run against inventory, you're going to have the same group_vars applied to all your inventories. You're not hitting this in your example, but you'll likely hit it in the future.
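A sketch of that restructuring (file names taken from the question; the group_vars content is illustrative):

```
inventory/
├── group_vars/              # only vars truly shared by every inventory
├── host_vars/
└── production/
    └── prodinv/             # the former prodinv file becomes a directory
        ├── hosts            # the host list that used to live in prodinv
        └── group_vars/
            └── all.yml      # myvar defined here applies only to this inventory
```

Running with -i inventory/production/prodinv then picks up that group_vars directory without leaking the variable into the other inventories.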
I am trying to push a specific configuration with group_vars, but the push only happens for one instance (aa.yml); the push for the bb.yml inventory never happens. I have used group_vars successfully before, but not with this Ansible configuration.
- name: Push conf
  uri:
    url: "https://xxx{{ instance_id }}"
    method: POST
    status_code: [201]
    headers:
      Content-Type: application/json
    body_format: json
    body: "{\"server\":{{ server }},\"labels\":{{{ site }}},\"name\":\"{{ instance.value.name }}"
    return_content: true
  vars:
    instance: "{{ item }}"
  loop: "{{ instances }}"
inventory/host/group_vars/aa/aa.yml
site: "\"aa\""
instance_id: "06a56590"
server: "[\"server1\"]"
inventory/host/group_vars/bb/bb.yml
site: "\"bb\""
instance_id: "bcc37660"
server: "[\"server2\"]"
inventory/host/000_hosts
[host]
server1
server2
The command:
ansible-playbook task.yml -i inventory/host/000_hosts --extra-vars "target=host"
Supplying an answer:
group_vars/XXX directories typically refer to groups defined in your inventory, and they contain variables available only to that group. In your case you created directories for the groups aa and bb, but these groups do not exist in your inventory. So when you call your playbook referring to your hosts (- hosts: host), Ansible looks for group variables related to that group, which in this case do not exist.
As you will see in my suggestion below: by using the keyword children in your inventory, you are basically saying that the hosts defined in the groups aa/bb are part/children of the group host (the parent), and the variables follow. (inheriting-variable-values-group-variables-for-groups-of-groups)
Changing your inventory to the following, should solve the problem:
inventory/host/hosts
[aa]
server1
[bb]
server2
[host:children]
aa
bb
You could also change your directory structure to something like:
inventory/
├── group_vars
│   ├── aa
│   │   └── aa.yml
│   └── bb
│       └── bb.yml
└── hosts
Edit:
However, if I'm not mistaken: your hosts directory (in inventory/hosts) is typically used to identify your environment like:
Multistage environment Ansible
.
├── ansible.cfg
├── environments/         # Parent directory for our environment-specific directories
│   │
│   ├── dev/              # Contains all files specific to the dev environment
│   │   ├── group_vars/   # dev specific group_vars files
│   │   │   ├── all
│   │   │   ├── db
│   │   │   └── web
│   │   └── hosts         # Contains only the hosts in the dev environment
│   │
│   ├── prod/             # Contains all files specific to the prod environment
│   │   ├── group_vars/   # prod specific group_vars files
│   │   │   ├── all
│   │   │   ├── db
│   │   │   └── web
│   │   └── hosts         # Contains only the hosts in the prod environment
│   │
│   └── stage/            # Contains all files specific to the stage environment
│       ├── group_vars/   # stage specific group_vars files
│       │   ├── all
│       │   ├── db
│       │   └── web
│       └── hosts         # Contains only the hosts in the stage environment
│
├── playbook.yml
│
└── . . .
Take a look at organizing-host-and-group-variables
GOAL
I want to create a portable role folder that uses TextFSM templates stored inside the role directory.
Directory structure
$ tree
.
├── ansible.cfg
├── hosts.ini
├── out_pb1
├── pb1.yml
├── roles
│   ├── command_facts
│   │   ├── tasks
│   │   │   └── main.yml
│   │   ├── templates
│   │   │   └── textfsm   // I want this
│   │   └── vars
└── templates
    └── textfsm           // created for testing
PROBLEM
parse_cli_textfsm is referencing templates/textfsm instead of roles/command_facts/templates/textfsm.
The goal is portability, so specifying the full path is not an option.
ATTEMPTS TO FIX (FAIL)
Executing pb1.yml: the error below appears if I delete templates/textfsm from the pb1.yml directory.
$ ansible-playbook pb1.yml -u username -k
(omitted)
TASK [command_facts : set_fact]
fatal: [target]: FAILED! => {"msg": "unable to locate parse_cli_textfsm template: /templates/textfsm"}
I tried to use the magic variable role_path, as in {{ raw_output.stdout | parse_cli_textfsm('{{ role_path }}/templates/textfsm') }}, but it is not working: {{ role_path }} is treated as a literal string.
$ ansible-playbook pb1.yml -u username -k
(omitted)
TASK [ixrs_facts : set_fact]
fatal: [target]: FAILED! => {"msg": "unable to locate parse_cli_textfsm template: {{ role_path }}/templates/textfsm"}
FILES
roles/command_facts/tasks/main.yml
---
- name: show protocols all
  raw: "show protocols all"
  register: raw_output

- set_fact:
    output: "{{ raw_output.stdout | parse_cli_textfsm('/templates/textfsm') }}"

- debug: var=output[::]
pb1.yml
---
- name: testing
  hosts: target
  gather_facts: false
  roles:
    - { role: command_facts }
REFERENCE
https://docs.ansible.com/ansible/latest/user_guide/playbooks_filters.html#network-cli-filters
{{ output.stdout[0] | parse_cli_textfsm('path/to/fsm') }}
Use role_path directly inside the existing expression instead of nesting {{ }}:
{{ raw_output.stdout | parse_cli_textfsm(role_path + '/templates/textfsm') }}
Works for me...
Hth
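Putting it together, a sketch of the corrected roles/command_facts/tasks/main.yml, assuming the template sits at roles/command_facts/templates/textfsm as in the tree above:

```yaml
---
- name: show protocols all
  raw: "show protocols all"
  register: raw_output

# role_path resolves to this role's directory at runtime, keeping the
# role portable; note there is no nested {{ }} inside the expression.
- set_fact:
    output: "{{ raw_output.stdout | parse_cli_textfsm(role_path + '/templates/textfsm') }}"

- debug:
    var: output
```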
I want to monitor the statistics of different subprocesses that are running in pods in different namespaces with Prometheus and I am looking for a way to properly expose this information.
My cluster is similar to below:
cluster
├── ns1
│   ├── ns1-pod1
│   │   ├── proc-p1-1
│   │   └── proc-p1-2
│   └── ns1-pod2
│       ├── proc-p2-1
│       └── proc-p2-2
└── ns2
    ├── ns2-pod1
    │   ├── proc-p1-1
    │   └── proc-p1-2
    └── ns2-pod2
        ├── proc-p2-1
        └── proc-p2-2
Each pod is publishing the statistics of its processes to RabbitMQ with a specific routing key and I can read the statistics from there.
I wrote an exporter that can connect to RMQ in one namespace, read the statistics and expose them on the /metrics so Prometheus can read it. An example of my exporter:
// prometheus go client
var MemoryValue = prometheus.NewGauge(
    prometheus.GaugeOpts{
        Namespace: namespace,
        Name:      "MemoryValue",
        Help:      "MemoryValue",
    })

prometheus.MustRegister(MemoryValue)
MemoryValue.Set(opst.Memory.Value) // "opst.Memory.Value" is what I get from RMQ
The problem is that I don't know how to label the metrics for each process in a pod. For example, at the moment I have 4 processes in ns1, but I am exposing all of them under the single MemoryValue gauge. I need something similar to Namespace that lets me label each sample with the pod and process names (I have this information, but how do I attach it in Prometheus?).
As @Peter correctly mentioned, the solution is to use a GaugeVec:
var CpuPercentValue = prometheus.NewGaugeVec(
    prometheus.GaugeOpts{
        Namespace: "MyExporter",
        Name:      "CpuPercentValue",
        Help:      "CpuPercentValue",
    },
    []string{
        "namespace",
        "proc_qID",
        "opID",
    },
)

CpuPercentValue.With(prometheus.Labels{
    "namespace": namespace,
    "proc_qID":  procid,
    "opID":      opid,
}).Set(opst.CpuPercent.Value)