Creating a dynamic index from Kafka/Filebeat to Elasticsearch

Software versions: Elasticsearch OSS 7.4.2, Filebeat OSS 7.4.2.
Following are my filebeat.yml and grok pipeline:
filebeat.inputs:
- type: kafka
  hosts:
    - test-bigdata-kafka0003:9092
    - test-bigdata-kafka0002:9092
    - test-bigdata-kafka0001:9092
  topics: ["bigdata-k8s-test-serverlog"]
  group_id: "filebeat-kafka-test"

setup.template.settings:
  index.number_of_shards: 1
  _source.enabled: true
setup.template.name: "test"
setup.template.pattern: "test-*"
setup.template.overwrite: true
setup.template.enabled: true
setup.ilm.enable: true
setup.ilm.rollover_alias: "test"

setup.kibana:
  host: "https://xxx:8080"
  username: "superuser"
  password: "123456"
  ssl.verification_mode: none

output.elasticsearch:
  index: "test-%{[jiserver]}-%{+yyyy.MM.dd}"
  pipeline: "test-pipeline"
  hosts: ["xxx:8200"]
  username: "superuser"
  password: "123456"

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
pipeline.json
{
  "description": "Test pipeline",
  "processors": [
    {
      "grok": {
        "field": "message",
        "patterns": ["%{CUSTOMTIME:timestamp} (?:%{NOTSPACE:jiserver}|-) (?:%{NOTSPACE:hostname}|-) (?:%{LOGLEVEL:level}|-) (?:%{NOTSPACE:thread}|-) (?:%{NOTSPACE:class}|-) (?:%{NOTSPACE:method}|-) (?:%{NOTSPACE:line}|-) (?:%{CUSTOMDATA:message}|-)"],
        "pattern_definitions": {
          "CUSTOMTIME": "%{YEAR}[- ]%{MONTHNUM}[- ]%{MONTHDAY}[- ]%{TIME}",
          "CUSTOMDATA": "((%{GREEDYDATA})[[:space:]]?)+"
        }
      }
    }
  ],
  "on_failure": [
    {
      "set": {
        "field": "error_information",
        "value": "Processor {{ _ingest.on_failure_processor_type }} with tag {{ _ingest.on_failure_processor_tag }} in pipeline {{ _ingest.on_failure_pipeline }} failed with message {{ _ingest.on_failure_message }}"
      }
    }
  ]
}
I use grok to split the message into different fields, one of which is jiserver, and I want to name my index dynamically with jiserver. How can I do this? The settings above do not work, and I receive this error:
[elasticsearch] elasticsearch/client.go:541 Bulk item insert failed (i=0, status=500): {"type":"string_index_out_of_bounds_exception","reason":"String index out of range: 0"}

I found a solution: add a script processor to filebeat.yml:
processors:
  - script:
      lang: javascript
      id: my_filter
      source: >
        function process(event) {
          var message = event.Get("message");
          var name = message.split(" ");
          event.Put("jiserver", name[2]);
        }
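For what it's worth, the reason the original config fails seems to be that Filebeat resolves %{[jiserver]} in the index name before the event is sent, while the grok in the ingest pipeline only adds jiserver inside Elasticsearch; the field is therefore empty at output time, which matches the string_index_out_of_bounds_exception. Once the script processor sets jiserver on the Filebeat side, the dynamic index setting can be kept unchanged. A minimal sketch, with the values copied from the filebeat.yml above:

# jiserver is now set by the script processor before the event reaches the
# output, so the dynamic index name can be resolved on the Filebeat side
output.elasticsearch:
  index: "test-%{[jiserver]}-%{+yyyy.MM.dd}"
  pipeline: "test-pipeline"
  hosts: ["xxx:8200"]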

Related

Ansible: delete one route from route table in AWS

I have a route table in AWS in which, for each host, one subnet is routed to that host. I can set up those routes automatically using this code:
- name: Add route to host container network
  ec2_vpc_route_table:
    region: region
    vpc_id: "vpc-somestring"
    purge_subnets: false
    purge_routes: false
    lookup: id
    route_table_id: rtb-somestring
    routes:
      - dest: "1.2.3.0/24"
        instance_id: "i-somestring"
This is fine for creating new hosts automatically. But when I remove a host, I also want to delete the matching route table entry.
I thought I could just fetch the route table using ec2_vpc_route_table_info, filter the routes with rejectattr, and feed them back to ec2_vpc_route_table, replacing the whole table. But the info module gives me routing tables in this format:
"all_routes": [
{
"destination_cidr_block": "1.2.3.0/24",
"gateway_id": null,
"instance_id": "i-somestring",
"instance_owner_id": "1234567890",
"interface_id": "eni-somestring",
"network_interface_id": "eni-somestring",
"origin": "CreateRoute",
"state": "active"
},
{
"destination_cidr_block": "5.5.5.0/21",
"gateway_id": "local",
"instance_id": null,
"interface_id": null,
"network_interface_id": null,
"origin": "CreateRouteTable",
"state": "active"
},
{
"destination_cidr_block": null,
"destination_ipv6_cidr_block": "affe:affe:affe:affe::/56",
"gateway_id": "local",
"instance_id": null,
"interface_id": null,
"network_interface_id": null,
"origin": "CreateRouteTable",
"state": "active"
}
]
However, I can't feed that table back to ec2_vpc_route_table, because that module wants a list that looks like this:
[
  {
    "dest": "1.2.3.0/24",
    "instance_id": "i-somestring"
  },
  {
    "dest": "5.5.5.0/21",
    "gateway_id": "local"
  },
  {
    "dest": "affe:affe:affe:affe::/56",
    "gateway_id": "local"
  }
]
Why is the output of the info module not in a format that I can feed back to the route_table module, and how can I convert it into one?
Thanks for any input.
A sample solution:
- hosts: localhost
  gather_facts: false
  vars:
    all_routes: "{{ lookup('file', 'zson.json') | from_json }}"
  tasks:
    - name: display json
      debug:
        var: all_routes
    - name: create new json
      set_fact:
        result: "{{ result | d([]) + [{'dest': _block, _key: _gateway}] }}"
      vars:
        _block: "{{ item.destination_cidr_block if item.destination_cidr_block != None else item.destination_ipv6_cidr_block }}"
        _gateway: "{{ item.gateway_id if item.gateway_id != None else item.instance_id }}"
        _key: "{{ 'gateway_id' if item.gateway_id != None else 'instance_id' }}"
      loop: "{{ all_routes }}"
    - name: display result
      debug:
        var: result
result:
ok: [localhost] => {
    "result": [
        {
            "dest": "1.2.3.0/24",
            "instance_id": "i-somestring"
        },
        {
            "dest": "5.5.5.0/21",
            "gateway_id": "local"
        },
        {
            "dest": "affe:affe:affe:affe::/56",
            "gateway_id": "local"
        }
    ]
}
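To finish the original goal (dropping the route of a decommissioned host), one possible sketch is to reject that host's entry from the rebuilt result list and write the table back with purge_routes enabled. Here 1.2.3.0/24 stands in for the removed host's route, the 'eq' test needs Jinja2 2.10+, and the exact purge behaviour should be verified against your version of ec2_vpc_route_table:

- name: Replace the route table without the removed host's route
  ec2_vpc_route_table:
    region: region
    vpc_id: "vpc-somestring"
    lookup: id
    route_table_id: rtb-somestring
    purge_routes: true   # routes missing from the list below are removed
    routes: "{{ result | rejectattr('dest', 'eq', '1.2.3.0/24') | list }}"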

How to replace a value for a key in a JSON file using Ansible

Below is my sample.json file
"abc": {
"host": "xyz",
"version": "3.0.0-4"
},
"def": {
"host": "xyz",
"version": "3.0.0-4"
},
"ghi": {
"host": "xyz",
"version": "4.1.0-4"
},
How do I modify the value of the version key for some of the blocks but not for others?
For example, in the case above I want to modify the version value for the abc and def blocks, but not for the ghi block, using Ansible.
Expected output:
"abc": {
"host": "xyz",
"version": "4.0.0-4" // modified value
},
"def": {
"host": "xyz",
"version": "4.0.0-4" // modified value
},
"ghi": {
"host": "xyz",
"version": "4.1.0-4" //not modified
},
Read the JSON data into a dictionary, e.g.
- include_vars:
    file: sample.json
    name: sample
gives
sample:
  abc:
    host: xyz
    version: 3.0.0-4
  def:
    host: xyz
    version: 3.0.0-4
  ghi:
    host: xyz
    version: 4.1.0-4
To change the version of items abc and def, some structure is needed. For example, let's create a dictionary of the new versions:
new_version:
  abc: "4.0.0-4"
  def: "4.0.0-4"
Then the task below
- set_fact:
    sample: "{{ sample | combine({item.key: item.value |
                                  combine({'version': new_version[item.key]})}) }}"
  loop: "{{ sample | dict2items }}"
  when: item.key in new_version
  vars:
    new_version:
      abc: "4.0.0-4"
      def: "4.0.0-4"
gives
sample:
  abc:
    host: xyz
    version: 4.0.0-4
  def:
    host: xyz
    version: 4.0.0-4
  ghi:
    host: xyz
    version: 4.1.0-4
The logic of the task might be different. For example "Change the lower versions only". In this case, the data and code are simpler, e.g.
new_version: "4.0.0-4"
The task below gives the same result
- set_fact:
    sample: "{{ sample | combine({item.key: item.value |
                                  combine({'version': new_version})}) }}"
  loop: "{{ sample | dict2items }}"
  when: item.value.version is version(new_version, 'lt')
  vars:
    new_version: "4.0.0-4"
Then, replace the file, e.g.
- copy:
    dest: sample.json
    content: "{{ sample | to_json }}"
gives
shell> cat sample.json
{"abc": {"host": "xyz", "version": "4.0.0-4"}, "def": {"host": "xyz", "version": "4.0.0-4"}, "ghi": {"host": "xyz", "version": "4.1.0-4"}}
Here is a small function which takes as arguments:
obj: JSON object
keys: Array of keys affected
prop: The property to be changed
value: The new value of the property
let myObj = {
  "abc": { "host": "xyz", "version": "3.0.0-4" },
  "def": { "host": "xyz", "version": "3.0.0-4" },
  "ghi": { "host": "xyz", "version": "4.1.0-4" }
}

const repla = (obj, keys, prop, value) => {
  return Object.keys(obj).map((key) => {
    if (keys.indexOf(key) > -1) {
      return { [key]: { ...obj[key], [prop]: value } };
    } else {
      return { [key]: obj[key] };
    }
  });
}

const newObj = repla(myObj, ["abc", "def"], "version", "newData");
console.log(newObj)

Ansible - remove dictionary with specific key value from list

I have the data structure shown in the example below.
My goal is to get a list of users who belong to the group group_1, and that part I am able to do (as in the example).
But additionally, I want to get rid of group_2 in User_1, and I can't do that.
Below is the Ansible playbook and its result:
- hosts: localhost
  vars:
    search_name: "group_1"
    users:
      - user_name: "User_1"
        email: "user1#mail.com"
        login: "user.1"
        groups:
          - name: group_1
            servers:
              - server:
                  name: 'SERVER-01'
                  ip: '192.168.x.x'
                  port: 5656
              - server:
                  name: 'SERVER-02'
                  ip: '192.168.x.x'
                  port: 5656
          - name: group_2
            servers:
              - server:
                  name: 'SERVER-03'
                  ip: '192.168.x.x'
                  port: 5656
              - server:
                  name: 'SERVER-01'
                  ip: '192.168.x.x'
                  port: 5656
              - server:
                  name: 'SERVER-02'
                  ip: '192.168.x.x'
                  port: 5656
      - user_name: "User_2"
        email: "user2#mail.com"
        login: "user.2"
        groups:
          - name: group_1
            servers:
              - server:
                  name: 'SERVER-01'
                  ip: '192.168.x.x'
                  port: 5656
              - server:
                  name: 'SERVER-02'
                  ip: '192.168.x.x'
                  port: 5656
      - user_name: "User_3"
        email: "user3#mail.com"
        login: "user.3"
        groups:
          - name: group_3
            servers:
              - server:
                  name: 'SERVER-03'
                  ip: '192.168.x.x'
                  port: 5656
  tasks:
    - name: Initialize an empty list for servers
      set_fact:
        filtered_users: []
    - name: Filter users by group name
      set_fact:
        filtered_users: "{{ users | json_query(query) }}"
      vars:
        query: "[? groups[? name==`group_1`]] | []"
    - name: Display users
      debug:
        msg: "{{ filtered_users }}"
Result
[
  {
    "email": "user1#mail.com",
    "groups": [
      {
        "name": "group_1",
        "servers": [
          {
            "server": {
              "ip": "192.168.x.x",
              "name": "SERVER-01",
              "port": 5656
            }
          },
          {
            "server": {
              "ip": "192.168.x.x",
              "name": "SERVER-02",
              "port": 5656
            }
          }
        ]
      },
      {
        "name": "group_2",
        "servers": [
          {
            "server": {
              "ip": "192.168.x.x",
              "name": "SERVER-03",
              "port": 5656
            }
          },
          {
            "server": {
              "ip": "192.168.x.x",
              "name": "SERVER-01",
              "port": 5656
            }
          },
          {
            "server": {
              "ip": "192.168.x.x",
              "name": "SERVER-02",
              "port": 5656
            }
          }
        ]
      }
    ],
    "login": "user.1",
    "user_name": "User_1"
  },
  {
    "email": "user2#mail.com",
    "groups": [
      {
        "name": "group_1",
        "servers": [
          {
            "server": {
              "ip": "192.168.x.x",
              "name": "SERVER-01",
              "port": 5656
            }
          },
          {
            "server": {
              "ip": "192.168.x.x",
              "name": "SERVER-02",
              "port": 5656
            }
          }
        ]
      }
    ],
    "login": "user.2",
    "user_name": "User_2"
  }
]
How can this be achieved?
JMESPath is fine for simple questions, but it is hard to wrap one's head around for more complex ones, especially since your ultimate question involves selectively building up a new "user" dict (or mutating the var; it's hard to tell which outcome you want). If you want the original data mutated, just remove the | combine({}) that clones the user dict.
- name: Filter users by group name
  set_fact:
    filtered_users: >-
      {%- set results = [] -%}
      {%- for ur in users -%}
      {%- set u = ur | combine({}) -%}
      {%- set g1 = u.groups | selectattr("name", "eq", search_name) -%}
      {%- if g1 | length > 0 -%}
      {%- set _ = u.update({"groups": g1}) -%}
      {%- set _ = results.append(u) -%}
      {%- endif -%}
      {%- endfor -%}
      {{ results }}
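For comparison, roughly the same result can be built with a set_fact loop and core filters instead of inline Jinja statements. This is only a sketch; it assumes the same users and search_name variables as above, and the 'eq' test requires Jinja2 2.10+ (use 'equalto' on older versions):

- name: Filter users and keep only the matching group (loop variant)
  set_fact:
    filtered_users: "{{ filtered_users | default([])
                        + [ item | combine({'groups': item.groups
                                            | selectattr('name', 'eq', search_name)
                                            | list}) ] }}"
  loop: "{{ users }}"
  when: item.groups | selectattr('name', 'eq', search_name) | list | length > 0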

Is it possible to set/lookup Ansible facts when play returns multiple values

I have the following playbook in AWX that looks up Infoblox hosts based on their MAC address and then outputs the information in a more user-friendly format.
The current playbook works provided a single host with that MAC address exists, but fails if there are multiple.
---
- hosts: localhost
  connection: local
  vars:
    niosip: ""
    niosmac: ""
    niosdhcp: ""
    nioshostname: ""
    niossearchcatagory: "{{ 'name' if searchcatagory == 'Hostname' else 'ipv4addr' if searchcatagory == 'IP Address' else 'mac' if searchcatagory == 'Mac Address' }}"
  pre_tasks:
    - include_vars:
        file: creds.yml
  tasks:
    - name: fetch host record
      set_fact:
        host: "{{ lookup('nios', 'record:host', filter={niossearchcatagory: searchcriteria, 'view': 'Internal'}, provider=nios_provider) }}"
    - name: Set niosip
      set_fact:
        niosip: "{{ host.ipv4addrs[0].ipv4addr }}"
        nioshostname: "{{ host.name }}"
        niosdhcp: "{{ host.ipv4addrs[0].configure_for_dhcp }}"
        niosmac: "{{ host.ipv4addrs[0].mac }}"
      when: host != [] and host.ipv4addrs[0].mac is defined
    - name: Set niosip
      set_fact:
        niosip: "{{ host.ipv4addrs[0].ipv4addr }}"
        nioshostname: "{{ host.name }}"
        niosdhcp: "{{ host.ipv4addrs[0].configure_for_dhcp }}"
      when: host != [] and host.ipv4addrs[0].mac is undefined
    - name: Host not found
      debug:
        msg: 'Cant find related host'
      when: host == []
    - name: Display Registration Info
      debug:
        msg:
          - Hostname = {{ nioshostname }}
          - IP = {{ niosip }}
          - Mac Address {{ niosmac }}
          - Registered for DHCP = {{ niosdhcp }}
      when: host != [] and host.ipv4addrs[0].mac is defined
The variables niossearchcatagory and searchcriteria are passed into the playbook via an AWX survey.
I've looked at options using loops or splitting the output up, but I'm really at a loss as to the best way to process this.
If the output matches the following, then the playbook works as expected:
{
  "changed": false,
  "ansible_facts": {
    "host": [
      {
        "_ref": "record:host/ZG5zLmhvc3QkLl9kZWZhdWx0LnVrLmFjLmJoYW0udGVzdC5zbmF0LWF3eHRlc3Q1:snat-awxtest5.test.com/Internal",
        "ipv4addrs": [
          {
            "_ref": "record:host_ipv4addr/ZG5zLmhvc3RfYWRkcmVzcyQuX2RlZmF1bHQudWsuYWMuYmhhbS50ZXN0LnNuYXQtYXd4dGVzdDUuMTQ3LjE4OC4zMS40Lg:192.168.31.4/snat-awxtest5.test.com/Internal",
            "configure_for_dhcp": false,
            "host": "snat-awxtest5.test.com",
            "ipv4addr": "192.168.31.4",
            "mac": "10:20:30:40:50:60"
          }
        ],
        "name": "snat-awxtest5.test.com",
        "view": "Internal"
      }
    ]
  },
  "_ansible_no_log": false
}
And here's an example of the play returning multiple values
{
  "changed": false,
  "ansible_facts": {
    "host": [
      {
        "_ref": "record:host/ZG5zLmhvc3QkLl9kZWZhdWx0LnVrLmFjLmJoYW0udGVzdC5zbmF0LWF3eHRlc3Q1:snat-awxtest5.test.com/Internal",
        "ipv4addrs": [
          {
            "_ref": "record:host_ipv4addr/ZG5zLmhvc3RfYWRkcmVzcyQuX2RlZmF1bHQudWsuYWMuYmhhbS50ZXN0LnNuYXQtYXd4dGVzdDUuMTQ3LjE4OC4zMS40Lg:192.168.31.4/snat-awxtest5.test.com/Internal",
            "configure_for_dhcp": false,
            "host": "snat-awxtest5.test.com",
            "ipv4addr": "192.168.31.4",
            "mac": "10:20:30:40:50:60"
          }
        ],
        "name": "snat-awxtest5.test.com",
        "view": "Internal"
      },
      {
        "_ref": "record:host/ZG5zLmhvc3QkLl9kZWZhdWx0LnVrLmFjLmJoYW0udGVzdC5zbmF0LW15d2Vi:snat-myweb.test.com/Internal",
        "ipv4addrs": [
          {
            "_ref": "record:host_ipv4addr/ZG5zLmhvc3RfYWRkcmVzcyQuX2RlZmF1bHQudWsuYWMuYmhhbS50ZXN0LnNuYXQtbXl3ZWIuMTQ3LjE4OC4zMS4yLg:192.168.31.2/snat-myweb.test.com/Internal",
            "configure_for_dhcp": false,
            "host": "snat-myweb.test.com",
            "ipv4addr": "192.168.31.2",
            "mac": "10:20:30:40:50:60"
          }
        ],
        "name": "snat-myweb.test.com",
        "view": "Internal"
      },
      {
        "_ref": "record:host/ZG5zLmhvc3QkLl9kZWZhdWx0LnVrLmFjLmJoYW0udGVzdC5zbmF0LXdlYg:snat-web.test.com/Internal",
        "ipv4addrs": [
          {
            "_ref": "record:host_ipv4addr/ZG5zLmhvc3RfYWRkcmVzcyQuX2RlZmF1bHQudWsuYWMuYmhhbS50ZXN0LnNuYXQtd2ViLjE0Ny4xODguMzEuMy4:192.168.31.3/snat-web.test.com/Internal",
            "configure_for_dhcp": false,
            "host": "snat-web.test.com",
            "ipv4addr": "192.168.31.3",
            "mac": "10:20:30:40:50:60"
          }
        ],
        "name": "snat-web.test.com",
        "view": "Internal"
      }
    ]
  },
  "_ansible_no_log": false
}
And this results in an error, as the variables host.name, host.ipv4addrs etc. don't exist, which I presume is because there are multiple matches.
Any help on how to output each registration would be gratefully received.
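One possible way to handle multiple matches is to treat host as a list and loop over it instead of indexing element 0. This is only a sketch based on the sample output above and is untested against the nios lookup:

- name: Display Registration Info for every match
  debug:
    msg:
      - "Hostname = {{ item.name }}"
      - "IP = {{ item.ipv4addrs[0].ipv4addr }}"
      - "Mac Address = {{ item.ipv4addrs[0].mac | default('n/a') }}"
      - "Registered for DHCP = {{ item.ipv4addrs[0].configure_for_dhcp }}"
  loop: "{{ host }}"
  when: host != []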

set_fact create a list with items

I have a task which calls an API, and I register the output in a variable:
- name: Get Object storage account ID
  uri:
    url: 'https://api.softlayer.com/rest/v3.1/SoftLayer_Network_Storage_Hub_Cleversafe_Account/getAllObjects.json?objectFilter={"username":{"operation":"{{ item }}"}}'
    method: GET
    user: abxc
    password: 66c94c447a6ed8a0cf058774fe38
    validate_certs: no
  register: old_existing_access_keys_sl
  with_items: '{{ info["personal"].sl_cos_accounts }}'
old_existing_access_keys_sl holds:
"old_existing_access_keys_sl.results": [
{
"json": [
{
"accountId": 12345,
"id": 70825621,
"username": "xyz-11"
}
]
},
{
"json": [
{
"accountId": 12345,
"id": 70825621,
"username": "abc-12"
}
]
}
I want to make a list of ids for further processing and tried the following task, but it did not work:
- name: Create a list of account ids
  set_fact:
    admin_usernames = "{{ item.json[0].id | list }}"
  with_items: old_existing_access_keys_sl.results
I am not sure if that's even possible. I also tried this:
- name: create a list
  set_fact:
    foo: "{% set foo = [] %}{% for i in old_existing_access_keys_sl.results %}{{ foo.append(i) }}{% endfor %}"
foo always comes back blank and as a string:
TASK [result] *****************************************************************************************************************************************
ok: [localhost] => {
    "foo": ""
}
Given your example data, you can extract a list of ids using the json_query filter, like this:
---
- hosts: localhost
  gather_facts: false
  vars:
    old_existing_access_keys_sl:
      results: [
        {
          "json": [
            {
              "accountId": 12345,
              "id": 70825621,
              "username": "xyz-11"
            }
          ]
        },
        {
          "json": [
            {
              "accountId": 12345,
              "id": 70825621,
              "username": "abc-12"
            }
          ]
        }
      ]
  tasks:
    - debug:
        var: old_existing_access_keys_sl|json_query('results[*].json[0].id')
This will output:
TASK [debug] **********************************************************************************************************************************************************************************
ok: [localhost] => {
    "old_existing_access_keys_sl|json_query('results[*].json[0].id')": [
        70825621,
        70825621
    ]
}
If you want to store these in a new variable, you can replace that debug task with set_fact:
- set_fact:
    admin_ids: "{{ old_existing_access_keys_sl|json_query('results[*].json[0].id') }}"
Update
For a list of dictionaries, just change the json_query expression:
- debug:
    var: "old_existing_access_keys_sl|json_query('results[*].json[0].{id: id, username: username}')"
For more information, see the jmespath website for documentation and examples.
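If the jmespath dependency is not available, roughly the same list can be built with core filters only (a sketch; the field paths follow the sample data above):

- set_fact:
    admin_ids: "{{ old_existing_access_keys_sl.results
                   | map(attribute='json')
                   | map('first')
                   | map(attribute='id')
                   | list }}"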
