Unix script to add dynamic attributes to a YAML file - shell

I have a YAML file named abc.yaml stored on the server at /opt/app/uidai. Its contents are shown below; this data is fixed:
info:
  description: abc
  version: 2.8
  license:
    name: Apache
host: localhost:8080
basePath: "/"
tags:
  - name: abc
    description
Now, using a Unix script, I want to add certain attributes at the very top of abc.yaml. The script should read the file from /opt/app/uidai and prepend the attributes so the file looks like the example below. The data to be added is also fixed; I simply need to add it at the very top:
x-google-mamagement
metrics:
  - name: "abc"
    displayName: "abcd"
quota:
  limits:
    - name: "abc-limit"
      metric: "ancbg-metric"
info:
  description: abc
  version: 2.8
  license:
    name: Apache
host: localhost:8080
basePath: "/"
tags:
  - name: abc
    description
Please advise how to achieve this.

This script will prepend fixed text to the file /opt/app/uidai/abc.yaml using a temporary file:
#!/bin/sh
file=/opt/app/uidai/abc.yaml
temp="$file.tmp"
# `cat -` reads the fixed header from the heredoc on stdin, then appends
# the original file; the original is replaced only if cat succeeds.
cat - "$file" > "$temp" << 'EOF' && mv -f "$temp" "$file"
x-google-mamagement
metrics:
  - name: "abc"
    displayName: "abcd"
quota:
  limits:
    - name: "abc-limit"
      metric: "ancbg-metric"
EOF
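To sanity-check the technique without touching the real file, the same pattern can be exercised on a throwaway file (the paths below are examples created with mktemp, not /opt/app/uidai):

```shell
#!/bin/sh
# Demonstrate the heredoc-prepend technique on a temporary file.
file=$(mktemp)
printf 'info:\n  description: abc\n' > "$file"

temp="$file.tmp"
# `cat -` reads the heredoc from stdin first, then the original file;
# the combined output replaces the original only if cat succeeded.
cat - "$file" > "$temp" << 'EOF' && mv -f "$temp" "$file"
x-google-mamagement
metrics:
  - name: "abc"
EOF

head -n 1 "$file"   # first line is now the prepended header
rm -f "$file"
```

Quoting the heredoc delimiter ('EOF') prevents any accidental variable expansion inside the fixed text.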

Related

How to change the content of a YAML file (.kubeconfig)?

I am learning Ansible and dealing with Kubernetes clusters. I would like to have an Ansible task which can change a value in .kube/config on my local Ansible host. For example, the .kube/config content looks like this:
apiVersion: v1
clusters:
- cluster:
    server: xxx
  name: xxx
contexts:
- context:
    cluster: yyy
    user: yyy
  name: yyy
users:
- name: xxx
  user: xxx
I basically would like an Ansible task that can do the following:
1. Change the values xxx and yyy in the file .kube/config on the Ansible host.
2. Append new content under each of the clusters, contexts & users sections if the values do not exist.
Is there a Kubernetes module or plugin I could use directly to achieve this? If not, could someone guide me on how to achieve it?
==== I tried this ====
I tried :
- name: Update value to foo
  replace:
    path: ~/.kube/config
    regexp: 'yyy'
    regexp: 'foo'
  delegate_to: localhost
When running the task, the file content doesn't change at all. Why? (Task has been executed based on logs)
Is there a kubernetes module or plugin I could directly use to achieve it?
Since your input file looks like valid YAML at first glance, you could simply read it in via the include_vars module (Load variables from files, dynamically within a task).
A minimal example playbook
---
- hosts: localhost
  become: false
  gather_facts: false

  tasks:

  - name: Read kubeconfig
    include_vars:
      file: kubeconfig
      name: kubeconfig

  - debug:
      msg: "{{ kubeconfig }}"

  - name: Write kubeconfig
    copy:
      content: "{{ kubeconfig | to_nice_yaml }}"
      dest: kubeconf # for testing with a new file name
resulting in an output of
TASK [debug] ********
ok: [localhost] =>
  msg:
    apiVersion: v1
    clusters:
    - cluster:
        server: xxx
      name: xxx
    contexts:
    - context:
        cluster: yyy
        user: yyy
      name: yyy
    users:
    - name: xxx
      user: xxx
~/test$ cat kubeconf
apiVersion: v1
clusters:
- cluster:
    server: xxx
  name: xxx
contexts:
- context:
    cluster: yyy
    user: yyy
  name: yyy
users:
- name: xxx
  user: xxx
Then perform your data manipulation steps, such as changing values or appending new content. After that, write the data structure back into a new file or overwrite the existing one.
This approach keeps the use case simple by relying on out-of-the-box functionality.
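As a sketch of the manipulation step in between, the `combine` filter can overwrite a nested value in the loaded data structure. The server URL below is a made-up placeholder, and the task assumes the `kubeconfig` variable registered by include_vars above; it is an illustration, not a drop-in solution:

```yaml
- name: Change the server value of the first cluster (illustrative sketch)
  set_fact:
    kubeconfig: >-
      {{ kubeconfig
         | combine({'clusters': [kubeconfig.clusters[0]
                                 | combine({'cluster': {'server': 'https://foo:6443'}}, recursive=true)]}) }}
```

Placing this task between the "Read kubeconfig" and "Write kubeconfig" tasks would persist the change when the data is written back out.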
Further Q&A
Ansible write variables into YAML file
Ansible: Add dictionary item into a YAML file
Write variable to a file in Ansible
Further Documentation
Since it is not possible to change or update existing variables, only to register new ones with the same name, you may also have a look at
update_fact module – Update currently set facts

Promtail: How to remove timestamps from filenames?

I have a simple problem:
My logfiles have timestamps in their names, e.g.:
/var/log/html/access-2021-11-27.log
/var/log/html/access-2021-11-28.log
/var/log/html/access-2021-11-29.log
Promtail is scraping this but does not "see" that access-2021-11-28.log is a continuation of access-2021-11-27.log. So it will "detect" a log file access-2021-11-28.log on the 28th and not show the access-2021-11-27.log anymore. I would want to see just "access.log" with data for several days.
I would assume this should be a well-known scenario, but I cannot find anything on this on the Internet.
The only real fix is to change the log configuration of the application generating the logs, so that it writes to a single access.log instead of the access-xxxx-xx-xx.log naming scheme. Unfortunately, this is not always possible.
But...
The old files can still be shown; it only depends on the time range used in the query.
You can use regular expressions to perform the query, like in this example:
{filename=~".*JIRA_INSTALL/logs/access_log\\..*"}
If you want to statically override the filename field you can do something as simple as this:
scrape_configs:
- job_name: system
  static_configs:
  - labels:
      job: remotevarlogs
      __path__: /var/log/html/access-*.log
  pipeline_stages:
  - match:
      selector: '{job="remotevarlogs"}'
      stages:
      - static_labels:
          filename: '/var/log/html/access.log'
For those of you searching for how to dynamically change the filepath prefix: for example, I'm using FreeBSD jails to nullfs-mount my logs from other jails into a promtail jail. I don't want the local mount location (/mnt/logs/<hostname>) to show up as part of the path. Mounting a shared folder could similarly be done with NFS or Docker.
scrape_configs:
- job_name: system
  static_configs:
  - labels:
      job: remotevarlogs
      __path__: /mnt/logs/*/**/*.log
  pipeline_stages:
  - match:
      selector: '{job="remotevarlogs"}'
      stages:
      - regex:
          source: filename
          expression: "/mnt/logs/(?P<host>\\S+?)/(?P<relativepath>\\S+)"
      - template:
          source: host
          template: '{{ .Value }}.mylocaldomain.com'
      - template:
          source: relativepath
          template: '/var/log/{{ .Value }}'
      - labels:
          host:
          filename: relativepath
      - labeldrop:
        - job
        - relativepath
/etc/fstab for the loki jail, to pass in the /var/log/ directory from the grafana jail:
# Device Mountpoint FStype Options Dump Pass#
...
/jails/grafana/root/var/log/ /jails/loki/root/mnt/logs/grafana nullfs ro,nosuid,noexec 0 0
...
Now when I browse the logs, instead of seeing /mnt/logs/grafana/nginx/access.log, I see /var/log/nginx/access.log from grafana.mylocaldomain.com.

How to create a correct template file for an Ansible role

I need to create a configuration file in YAML with an Ansible role. This is an example of what I want:
filebeat.inputs:
- type: log
  paths:
  - /var/log/system.log
  - /var/log/wifi.log
In the Ansible directory structure, in the templates dir, I have a file with this text:
filebeat.inputs:
- type: {{ filebeat_input.type }}
  {{ filebeat_input.paths | to_yaml }}
In the defaults directory I have this main.yml:
filebeat_create_config: true
filebeat_input:
  type: log
  paths:
    - "/var/log/*.log"
    - "/var/somelog/*.log"
When I run ansible-playbook with this role I get this:
filebeat.inputs:
- type: log
  [/var/log/*.log, /var/sdfsadf]
Where am I wrong? What do I need to change, and where, to get exactly what I want (see the example above)?
Thanks for any help!
filebeat_input.paths is a list, so when you pass it to to_yaml you get a list. You just need to structure your YAML template so that you're putting the list in the right place, e.g.:
filebeat.inputs:
- type: {{ filebeat_input.type }}
  paths: {{ filebeat_input.paths | to_yaml }}
Note that:
paths: [/var/log/*.log, /var/sdfsadf]
Is exactly equivalent to:
paths:
- /var/log/*.log
- /var/sdfsadf
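If block style is preferred in the rendered file, one option (a sketch, assuming the same `filebeat_input` variable as above) is to render the list with `to_nice_yaml` and indent it under the key:

```yaml
filebeat.inputs:
- type: {{ filebeat_input.type }}
  paths:
{{ filebeat_input.paths | to_nice_yaml | indent(4, true) }}
```

Here `indent(4, true)` indents every rendered line, including the first, by four spaces so the list items line up under `paths:`.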

How to use "echo" in Ansible?

I am getting the below error when I try to run the following code:
ERROR! Syntax Error while loading YAML.
The error appears to have been in '/home/shanthi/ansible-5g/roles/ymlRoles/tasks/main.yml': line 5, column 7, but may
be elsewhere in the file depending on the exact syntax problem.
The offending line appears to be:
-name: adding yml files
shell: echo '
^ here
Here is the code:
- name: adding yml files
  shell: echo '
    ops-center:
      product:
        autoDeploy: true
      helm:
        api:
          release: cnee-ops-center
          namespace: cnee
        repository:
          url: http://engci-maven-master.cisco.com/artifactory/mobile-cnat-charts-release/builds/2019.01-5/amf.2019.01.01-5/
          name: amf
    ' > amf.yml
You should avoid using the shell module for this. Instead, use the copy module to copy a file to the remote hosts.
Contents of /path/to/local/amf.yml
ops-center:
  product:
    autoDeploy: true
  helm:
    api:
      release: cnee-ops-center
      namespace: cnee
    repository:
      url: http://engci-maven-master.cisco.com/artifactory/mobile-cnat-charts-release/builds/2019.01-5/amf.2019.01.01-5/
      name: amf
Playbook task
- name: Copy amf.yml to host
  copy:
    src: /path/to/local/amf.yml
    dest: /path/to/remote/amf.yml

Check yaml file sorting using bash

I have a YAML file which describes directories on the server. I need a short script which will check that the sections are sorted alphabetically by name. If some section is inserted out of alphabetical order, the script should point out where it should go. The file looks like this (path and permissions are optional parameters):
-
  name: scripts
  description: execution scripts
  path: /home/user/scripts
-
  name: tests
  description: directory with tests
  path: /home/user/tests
  permissions: default
...
Can you suggest the best way to do this? Thanks.
How about
$ diff -u <(grep name: yaml) <(grep name: yaml | sort)
--- /dev/fd/11    2016-05-24 20:30:39.000000000 +0200
+++ /dev/fd/12    2016-05-24 20:30:39.000000000 +0200
@@ -1,2 +1,2 @@
-  name: tests
   name: scripts
+  name: tests
where yaml is the name of the file.
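The diff approach shows the mismatch but not where the section belongs. A small sketch that names the first out-of-order section (assuming every section has a name: line, as in the example file; the message wording and function name are made up for illustration):

```shell
#!/bin/sh
# Report the first section whose "name" breaks alphabetical order.
check_sorted() {
  awk '/^ *name:/ {
    name = $2
    # Lexicographic comparison against the previously seen name.
    if (prev != "" && name < prev) {
      printf "section \"%s\" should come before \"%s\"\n", name, prev
      exit 1
    }
    prev = name
  }' "$1"
}

# Example usage:
# check_sorted dirs.yaml && echo "sorted"
```

The function exits with a non-zero status when the file is unsorted, so it can gate a CI step or a pre-commit hook.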
