I am looking for some directions to fix an issue where I am not able to save or redirect Ansible debug output to a JSON or Markdown file.
- debug:
    msg:
      - "{{ item.results['show ip route'].splitlines() }}"
      - "{{ item.results['show ip route summary'].splitlines() }}"
      - "{{ item.results['show ip route 0.0.0.0'].splitlines() }}"
  loop:
    - "{{ out2 }}"
The debug task above runs at the very end of my playbook. The playbook mainly uses the "napalm_cli" network module to collect a few outputs from a device. The "napalm_cli" output is not nicely formatted, so I have to use splitlines.
Now I am trying to save the output below as a file.
ok: [lab1-r1] => (item={'failed': False, u'changed': False, u'results': {u'show ip route': u'Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP\n D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area \n N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2\n E1 - OSPF external type 1, E2 - OSPF external type 2\n i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2\n ia - IS-IS inter area, * - candidate default, U - per-user static route\n o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP\n a - application route\n + - replicated route, % - next hop override, p - overrides from PfR\n\nGateway of last resort is not set\n\n 172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks\nC 172.16.10.0/24 is directly connected, GigabitEthernet0/1\nL 172.16.10.1/32 is directly connected, GigabitEthernet0/1', u'show ip route summary': u'IP routing table name is default (0x0)\nIP routing table maximum-paths is 32\nRoute Source Networks Subnets Replicates Overhead Memory (bytes)\nconnected 0 2 0 136 360\nstatic 0 0 0 0 0\napplication 0 0 0 0 0\ninternal 1 440\nTotal 1 2 0 136 800', u'show ip route 0.0.0.0': u'% Network not in table'}}) => {
"msg": [
[
"Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP",
" D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area ",
" N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2",
" E1 - OSPF external type 1, E2 - OSPF external type 2",
" i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2",
" ia - IS-IS inter area, * - candidate default, U - per-user static route",
" o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP",
" a - application route",
" + - replicated route, % - next hop override, p - overrides from PfR",
"",
"Gateway of last resort is not set",
"",
" 172.16.0.0/16 is variably subnetted, 2 subnets, 2 masks",
"C 172.16.10.0/24 is directly connected, GigabitEthernet0/1",
"L 172.16.10.1/32 is directly connected, GigabitEthernet0/1"
],
[
"IP routing table name is default (0x0)",
"IP routing table maximum-paths is 32",
"Route Source Networks Subnets Replicates Overhead Memory (bytes)",
"connected 0 2 0 136 360",
"static 0 0 0 0 0",
"application 0 0 0 0 0",
"internal 1 440",
"Total 1 2 0 136 800"
],
[
"% Network not in table"
]
]
}
Also, I would like to get rid of the content between
ok: [lab1-r1] => (the unformatted napalm_cli output)
and
"msg": [
Any ideas or thoughts?
Thank You
NN
I believe you may want the JSON callback (there is also a YAML one, an XMPP one, and a whole list of them). The instructions for enabling them are in the fine manual, but the very short version is to just define an environment variable when calling ansible-playbook:
env ANSIBLE_STDOUT_CALLBACK=json ansible-playbook ...
(it works with ansible, too, if you just wanted to run a single task)
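With the JSON callback enabled like that, redirecting stdout (for example ansible-playbook pb.yml > output.json) gives you the whole run as a single JSON document. Alternatively, if the goal is to write the collected output to a file from within the playbook itself, a task roughly like the sketch below could work; this is not part of the original answer, and the copy module usage, the destination path and the to_nice_json filter are my assumptions:
- name: Save the napalm_cli results as pretty-printed JSON on the control node
  copy:
    content: "{{ out2.results | to_nice_json }}"
    dest: ./lab1-r1_routes.json   # hypothetical path, adjust as needed
  delegate_to: localhost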
I have been trying to understand how to batch things in Benthos, but am a bit confused about how to do it.
Take this example:
input:
  generate:
    interval: ""
    count: 40
    mapping: |
      root = count("test")
pipeline:
  processors:
    - log:
        level: INFO
        message: 'Test! ${! (this) } ${! (this % 2 == 0) } ${! batch_size() }'
    - group_by_value:
        value: ${! (this % 2 == 0) }
    - archive:
        format: tar
    - compress:
        algorithm: gzip
output:
  file:
    path: test/${! (this % 2 == 0) }.tar.gz
    codec: all-bytes
My expectation was that this would produce 2 files in test/: one called "true.tar" and another called "false.tar", with 20 elements each (odd and even numbers). What I get instead is a single file with the last message. I understand from the logs that it is not actually batching these based on that condition.
I thought group_by_value would kind of create "two streams/batches" of messages that would get handled separately in the output/archive, but it looks like it doesn't behave like that.
Could you please help me understand how it works?
Additionally, I was also going to limit the size of each of these streams to a certain number, so each would get its number of entries in the tar limited.
Thanks!!
EDIT 1
This is something that works more like I expected, but this way I have to "know" how many items I want to batch before actually being able to filter them. I wonder if I can't just "accumulate" things based on this group_by_value condition and batch them based on a count later?
input:
  broker:
    inputs:
      - generate:
          interval: ""
          count: 40
          mapping: |
            root = count("test")
    batching:
      count: 40
pipeline:
  processors:
    - group_by_value:
        value: ${! (this % 2 == 0) }
    - log:
        level: INFO
        message: 'Test! ${! (this) } ${! (this % 2 == 0) } ${! batch_size() }'
    - bloblang: |
        meta name = (this) % 2 == 0
    - archive:
        format: tar
        path: ${! (this) }
output:
  file:
    path: test/${! (meta("name")) }.tar
    codec: all-bytes
As you already noticed, group_by_value operates on message batches, which is why your first example produces a single file as output. In fact, it produces a file for each message, but since the file name is identical, each new file ends up overwriting the previous one.
From your edit, I'm not sure I get what you're trying to achieve. The batch policy documentation explains that byte_size, count and period are the available conditions for composing batches. When any of these is met, a batch is flushed, so you don't necessarily have to rely on a specific count. For convenience, the batching policy also has a processors field, which allows you to define an optional list of processors to apply to each batch before it is flushed.
The windowed processing documentation might also be of interest, since it explains how the system_window buffer can be used to chop a stream of messages into tumbling or sliding windows of fixed temporal size. It has a section on grouping here.
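For illustration, here is roughly what such a policy could look like attached to a broker output (a sketch only; the count, period and file path are placeholders I picked, not a tested config):
output:
  broker:
    outputs:
      - file:
          path: test/batch.tar.gz
          codec: all-bytes
    batching:
      count: 20     # flush once 20 messages have accumulated...
      period: 10s   # ...or after 10 seconds, whichever comes first
      processors:   # applied to each batch just before it is flushed
        - archive:
            format: tar
        - compress:
            algorithm: gzip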
Update 22.02.2022: Here's an example of how to perform output batching based on some key, as requested in the comments:
input:
  generate:
    interval: "500ms"
    count: 9
    mapping: |
      root.key = if count("key_counter") % 3 == 0 {
        "foo"
      } else {
        "bar"
      }
      root.content = uuid_v4()
pipeline:
  processors:
    - bloblang: |
        root = this
        # 3 is the number of messages you'd like to have in the "foo" batch.
        root.foo_key_end = this.key == "foo" && count("foo_key_counter") % 3 == 0
output:
  broker:
    outputs:
      - stdout: {}
        processors:
          - group_by_value:
              value: ${! json("key") }
          - bloblang: |
              root = this
              root.foo_key_end = deleted()
              root.batch_size = batch_size()
              root.batch_index = batch_index()
    batching:
      # Something big so, unless something bad happens, you should see enough
      # messages with key = "foo" before reaching this number
      count: 1000
      check: this.foo_key_end
Sample output:
> benthos --log.level error -c config_group_by.yaml
{"batch_index":0,"batch_size":3,"content":"84e51d8b-a4e0-42c8-8cbb-13a8b7b37823","key":"foo"}
{"batch_index":1,"batch_size":3,"content":"1b35ff8b-7121-426e-8447-11e834610b90","key":"foo"}
{"batch_index":2,"batch_size":3,"content":"a9d9c661-1068-447f-9324-c418b0d7de9d","key":"foo"}
{"batch_index":0,"batch_size":6,"content":"5c9d26aa-f1dc-46ae-9845-3b035c1e569e","key":"bar"}
{"batch_index":1,"batch_size":6,"content":"17bbc7c1-94ec-4c9e-b0c5-b9c11f18498f","key":"bar"}
{"batch_index":2,"batch_size":6,"content":"7d7b9621-e174-4ca2-8a2e-1679e8177335","key":"bar"}
{"batch_index":3,"batch_size":6,"content":"db24273f-7064-498e-9914-9dd4c671dcd7","key":"bar"}
{"batch_index":4,"batch_size":6,"content":"4cfbea0e-dcc4-4d84-a87f-6930dd797737","key":"bar"}
{"batch_index":5,"batch_size":6,"content":"d6cb4726-4796-444d-91df-a5c278860106","key":"bar"}
I have a dictionary of dictionaries collecting data from OpenShift using Prometheus. Now I intend to add up the values across all the dictionaries. But some projects don't have a quota, and hence some pods don't have a request/limit set for CPU and memory. I am trying the following, and it fails when the key/value is not there.
If possible, I want to use if/else such that, if the variable exists, then add the variable; else use the value 0.
- name: Total section for Projects
  set_fact:
    pod_count_total: "{{ (pod_count_total|int) + (item.value.pod_count|int) }}"
    total_cpu_request: "{{ (total_cpu_request|float|round(2,'ceil')) + (item.value.cpu_request|float|round(2,'ceil')) }}"
    total_cpu_limit: "{{ (total_cpu_limit|float|round(2,'ceil')) + (item.value.cpu_limit|float|round(2,'ceil')) }}"
    total_memory_request: "{{ (total_memory_request|float|round(2,'ceil')) + (item.value.memory_request|float|round(2,'ceil')) }}"
    total_memory_limit: "{{ (total_memory_limit|float|round(2,'ceil')) + (item.value.memory_limit|float|round(2,'ceil')) }}"
  with_dict: "{{ all_project }}"
The dictionary of dictionaries looks like this:
ok: [127.0.0.1] => {
"msg": {
"openshift-web-console": {
"cpu_usage": 0.015,
"memory_used": 0.04,
"cpu_request": 0.301,
"memory_request": 0.293,
"pod_count": 3
},
"srv-test": {
"cpu_usage": 0.013,
"memory_used": 0.02,
"pod_count": 5
},
"test": {
"cpu_usage": 0.001,
"memory_used": 0.0,
"pod_count": 1
},
"openshift-monitoring": {
"cpu_limit": 1.026,
"cpu_request": 0.556,
"cpu_usage": 0.786,
"memory_limit": 1.866,
"memory_request": 1.641,
"memory_used": 0.14,
"pod_count": 98
}
}
}
If possible I want to use if else such that, if the variable exists then add the variable else use the value as 0.
The thing you are looking for is the default filter
total_memory_request: "{{ (
    total_memory_request | default(0)
    | float | round(2,'ceil')
  ) + (
    item.value.memory_request | default(0)
    | float | round(2,'ceil')
  ) }}"
There's a subtlety in that if the variable exists but is the empty string, you'll need to pass in the 2nd parameter to default to have it act in a python "truthiness" way: {{ "" | default(0, true) | float }} -- that might not apply to you, but if it does, you'll be glad to know what that 2nd param does
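Applied to the whole task from the question, the sketch below just adds default(0) to every value that might be missing (the variable names are taken verbatim from the question; nothing else is changed):
- name: Total section for Projects
  set_fact:
    pod_count_total: "{{ (pod_count_total | default(0) | int) + (item.value.pod_count | default(0) | int) }}"
    total_cpu_request: "{{ (total_cpu_request | default(0) | float | round(2,'ceil')) + (item.value.cpu_request | default(0) | float | round(2,'ceil')) }}"
    total_cpu_limit: "{{ (total_cpu_limit | default(0) | float | round(2,'ceil')) + (item.value.cpu_limit | default(0) | float | round(2,'ceil')) }}"
    total_memory_request: "{{ (total_memory_request | default(0) | float | round(2,'ceil')) + (item.value.memory_request | default(0) | float | round(2,'ceil')) }}"
    total_memory_limit: "{{ (total_memory_limit | default(0) | float | round(2,'ceil')) + (item.value.memory_limit | default(0) | float | round(2,'ceil')) }}"
  with_dict: "{{ all_project }}"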
I am writing an Ansible playbook to select unused disks from ansible_devices. If the server has more than one unused disk, I want to pick the one that matches the input size, or is closest to it; the size variable is user input.
Following is my code:
- name: Print disk result
  when:
    - "{{ min_value }}.00 GB" <= item.value.size <= "{{ max_value }}.00 GB"
  vars:
    min_value: "{{ size - 2 }}"
    max_value: "{{ size + 2 }}"
The item.value.size looks like this for the disks:
"size": "50.00 GB" for disk1
"size": "5.00 GB" for disk2
I am getting this error:
ERROR! Syntax Error while loading YAML.
expected <block end>, but found '<scalar>'
The error appears to have been in '/home/bhatiaa/disk5.yml': line 25, column 32, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
- not item.value.links.ids
-
The error comes from this line:
- "{{ min_value }}.00 GB" <= item.value.size <= "{{ max_value }}.00 GB"
There are a few problems here. Fundamentally, you're trying to perform a numeric comparison (<=) on non-numeric values (50.00 GB), and that's never going to work. But that's not the source of your error. The error crops up because you're starting the value with a quote ("), so the YAML parser expects the entire line to be quoted, like this:
- '"{{ min_value }}.00 GB" <= item.value.size <= "{{ max_value }}.00 GB"'
That gets rid of your error message, but it's still problematic in several ways. In addition to the "numeric comparison with non-numeric values" problem, in a when conditional you're already in a Jinja template context so you don't need the {{ and }} markers. You'd want to write the expression something like this:
- '"%s.00 GB" % min_value <= item.value.size <= "%s.00 GB" % max_value
But while syntactically correct, that still suffers from the first problem I identified. We really need to come up with numeric values to use. One option would be to assume that sizes are always specified in GB and just strip the unit off, as in:
- min_value|int <= item.value.size[:-3]|int <= max_value|int
Another option would be to calculate the disk size using sectors and sectorsize instead, like this:
- min_value|int <= (item.value.sectors|int * item.value.sectorsize|int) <= max_value|int
This would require min_value and max_value to be specified in bytes.
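For instance, if size is given in whole GB, the vars could be converted to bytes roughly like this (a sketch, assuming 1 GB = 1024*1024*1024 bytes):
vars:
  min_value: "{{ (size | int - 2) * 1024 * 1024 * 1024 }}"
  max_value: "{{ (size | int + 2) * 1024 * 1024 * 1024 }}"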
Hopefully there's enough here to point you in the right direction.
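Putting the "strip the unit" option together with the rest of the task, a complete sketch might look something like the following; the debug module, the dict2items loop and the not item.value.links.ids condition are assumptions pieced together from the question and its error output, not a tested playbook:
- name: Print disks whose size is within 2 GB of the requested size
  debug:
    var: item.key
  vars:
    min_value: "{{ size | float - 2 }}"
    max_value: "{{ size | float + 2 }}"
  when:
    - not item.value.links.ids                        # assumed "unused disk" test, taken from the question's error output
    - item.value.size[:-3] | float >= min_value | float
    - item.value.size[:-3] | float <= max_value | float
  loop: "{{ ansible_devices | dict2items }}"          # iterate over all gathered block devices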
A host might have more than 2 interfaces ["lo","eth0","eth1"]
I want to run a when condition if host has only 2 interfaces ["lo","eth0"]
when: 'ansible_interfaces == 2'
but it returns:
"ansible_interfaces == 2": false
It has 2 interfaces, so why is it false?
You are not comparing the count of elements in ansible_interfaces to 2, but the value of the variable ansible_interfaces to 2.
You should use:
when: ansible_interfaces|length == 2
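For example, a task guarded by that condition might look like this (the debug task is just an illustrative placeholder):
- name: Runs only when the host has exactly two interfaces (e.g. lo and eth0)
  debug:
    msg: "This host only has {{ ansible_interfaces | join(', ') }}"
  when: ansible_interfaces | length == 2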
items:
  house:
    - bathroom:
        - toothbrush
        - soap
    - bedroom:
        - bed:
            - pillow
            - sheet
        - closet:
            - clothes:
                - underwear
                - socks
  garden:
    - treehouse:
        - toys:
            - nerfgun
            - car
        - window
    - garage:
        - car
        - toolbox:
            - hammer
            - scewdriver
            - pliers
        - lawnmower
Here is another try at this document; it has no compound list (I guess that's what it's called).
items2:
  house:
    - bathroom:
        - toothbrush
        - soap
    - bedroom:
        - bed:
            - pillow
            - sheet
        - closet:
            - clothes:
                - underwear
                - socks
Which of those two YAML documents is valid? I'm still wondering if I can use a list of keyed lists like that (a nested list?):
items:
  - list1:
      -itemA
      -itemB
  - list2:
      -itemC
      -itemD
You can use yamllint to check whether your YAML is OK.
It seems OK.
Yes, it's valid YAML (well, the first two are; in the third, make sure that you have a space after your - in the sequences); but it may not do exactly what you think. In your toy example
items:
  - list1:
      - itemA
      - itemB
  - list2:
      - itemC
      - itemD
the value associated with items is a sequence; and each entry of that sequence is a map with a single key/value pair (for the first entry, the key is list1, and in the second, list2).
What may have confused you in your first real example was how to access each element. Since you tagged this yaml-cpp, here's how you would get, say, the list of the toys in the treehouse of your first example:
doc["items"]["garden"][0]["treehouse"][0]["toys"];
(Note the [0] before accessing the "treehouse" and "toys" keys.)