Kong "dynamic" config file - yaml

I'm using Kong to put a rate limit on a service. I need the ability to include a string from a txt file in my repository (like nginx's include directive) so I can apply the plugin 'dynamically' in the Kong config file for db-less mode.
My YAML should be something like:
plugins:
- name: rate-limiting
  service: <service-name>
  config:
    second: "from txt file"
    policy: local
Is it possible to include a string coming from another file in this YAML?
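Plain YAML has no include directive, so one common workaround is to keep a template in the repository and render the real kong.yml at deploy time. A minimal sketch, assuming a hypothetical kong.template.yml, a rate.txt holding the number, and rendering with envsubst:

# kong.template.yml -- rendered before starting Kong with:
#   export RATE_LIMIT=$(cat rate.txt)
#   envsubst < kong.template.yml > kong.yml
plugins:
- name: rate-limiting
  service: <service-name>
  config:
    second: ${RATE_LIMIT}
    policy: local

Any templating step works here (a CI job, a one-line script); the point is that Kong only ever sees the fully rendered kong.yml.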

Related

Spring cloud config server share binary file

I am using Spring configuration server. While setting up Kafka, I found that I need to somehow specify binary certificates:
spring:
  kafka:
    ssl:
      truststore:
        location: /filepath/trust_cert.jks
        password: 1234
      keystore:
        location: /filepath/keystore_cert.jks
        password: 1234
Can I somehow put them on the configuration server? And in that case, what should I write in the config where a path to a file is expected?
I really don't want to upload them to each server manually; I would like the configuration server to provide them.
Of course, these URLs must be protected, just like the configuration server URLs.
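One approach worth testing, based on Spring Cloud Config Server's documented resource endpoint (/{application}/{profile}/{label}/{path}, served as binary when the client sends Accept: application/octet-stream): keep the JKS files in the config repo, download them before the application starts, and point the Kafka properties at the local copies. A sketch with hypothetical host and path names:

# Fetched before startup, e.g.:
#   curl -H "Accept: application/octet-stream" \
#        http://configserver:8888/myapp/default/master/trust_cert.jks \
#        -o /etc/kafka/trust_cert.jks
spring:
  kafka:
    ssl:
      truststore:
        location: /etc/kafka/trust_cert.jks   # local copy of the downloaded file
        password: ${TRUSTSTORE_PASSWORD}      # injected via environment, not hard-coded

The resource endpoint sits behind the same security as the rest of the config server, which also covers the requirement that these URLs be protected.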

Can Ansible Vault encrypt values in plugin configuration files?

I'm writing a dynamic inventory plugin for Ansible which pulls device info from an API and adds it to the inventory. To configure my plugin, I need a username and password for the service, which I retrieve from my plugin configuration YAML file.
plugin_conf.yaml:
plugin: my_inventory_plugin
host_location: api.example.com
port: 443
user: some_user
password: some_pass
Since storing credentials in a file under version control is bad practice, does Ansible Vault support encrypting values stored in a plugin configuration file?
i.e. can the user of my plugin do something like
plugin: my_inventory_plugin
host_location: api.example.com
port: 443
user: !vault |
  $FOO;1.1;AES256
  blah blah
password: !vault |
  $BAR;1.1;AES256
  something else
and, regardless of whether they use insecure plaintext or Ansible Vault, my plugin can still get the values using the self.get_option('user') method?
I tested it out myself and the answer is yes.
If the user encrypts a string with ansible-vault encrypt_string, setting the name of the secret with -n, they can paste the output directly into my config file. No special handling is required in my plugin to deal with plaintext versus vault-encrypted credentials.
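For reference, a sketch of what the round trip looks like; the ciphertext below is a placeholder, not real vault output:

# Generated with something like:
#   ansible-vault encrypt_string 'some_pass' -n password
plugin: my_inventory_plugin
host_location: api.example.com
port: 443
user: some_user
password: !vault |
  $ANSIBLE_VAULT;1.1;AES256
  62313365...placeholder ciphertext...

Ansible decrypts the !vault value transparently when the option is read, which is why self.get_option('password') behaves the same either way.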

Spring Cloud config server security

I implemented a Spring Cloud config server. How can I prevent the config server's bootstrap.yml file from storing the Git username and password in clear text?
Vault: https://github.com/hashicorp/vault
Use Vault: https://cloud.spring.io/spring-cloud-config/reference/html/#_vault
Set up Spring Vault:
https://docs.spring.io/spring-vault/docs/2.2.2.RELEASE/reference/html/
https://spring.io/projects/spring-vault
In your Spring Cloud config server's bootstrap.yml file:
spring:
  cloud:
    config:
      token: YourVaultToken
OK, so this is working fine for me. The issue was that my config server's bootstrap.yml needs to connect to a Git repository as the backend, and the Git repo is secured with a username and password, but I cannot put the username and password in the bootstrap.yml file.
To solve this:
Pass the credentials as environment variables and store those environment variables in Terraform or any other secure location.
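A minimal sketch of the resulting bootstrap.yml, assuming the credentials are exported as GIT_USERNAME and GIT_PASSWORD (hypothetical variable names) in the server's environment:

spring:
  cloud:
    config:
      server:
        git:
          uri: https://git.example.com/config-repo.git   # hypothetical repo URL
          username: ${GIT_USERNAME}   # resolved from the environment at startup
          password: ${GIT_PASSWORD}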

Heartbeat icmp configuration host aliases

I have an Elastic Stack (version 7.3.0) configured, with Heartbeat set up to ping my different hosts.
The config file of my monitor looks like this:
- type: icmp
  name: icmp_monitor
  schedule: '@every 5s'
  hosts:
    - machine1.domain.com # Machine 1
    - machine2.domain.com # Machine 2
    - machine3.domain.com # Machine 3
Is there a way to give the hosts an "alias" in the configuration file?
In my organisation, the server hostnames are not very meaningful; it would be great, for example, to specify that machine1.domain.com is the main MongoDB server.
The example on the documentation page shows that you can set hostnames in the hosts section/key; there they specify "myhost", so I assume you can define any name you want.
Elasticsearch, however, is not responsible for aliasing/resolving hostnames; that is a task for your OS.
If your Heartbeat runs on a Linux machine, I would set the alias in /etc/hosts like
192.168.1.X mongodb-main
and would set the alias in the monitor config like
- type: icmp
  name: icmp_monitor
  schedule: '@every 5s'
  hosts:
    - mongodb-main
and see if Heartbeat accepts it and can resolve the alias/hostname.
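Another option worth testing is to define one monitor per host and use the monitor's name field as the human-readable label. A sketch (whether the name surfaces where you need it in Kibana depends on your setup):

- type: icmp
  name: mongodb_main   # descriptive label for this monitor
  schedule: '@every 5s'
  hosts:
    - machine1.domain.com

This keeps the real hostname in the config while the dashboards show the meaningful name, at the cost of one monitor entry per host.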

Logstash and filebeat in the ELK stack

We are setting up Elasticsearch, Kibana, Logstash and Filebeat on a server to analyse log files from many applications. Due to reasons* each application's log file ends up in a separate directory on the ELK server. We have about 20 log files.
As I understand it, we can run a Logstash pipeline config file for each application log file. That would be one Logstash instance running 20 pipelines in parallel, and each pipeline would need its own Beats port. Please confirm that this is correct.
Can we have one Filebeat instance running, or do we need one for each pipeline/logfile?
Is this architecture OK, or do you see any major downsides?
Thank you!
*Different vendors are responsible for the different applications; they run across many different OSes, and many of them will not or cannot install anything like Filebeat.
We do not recommend reading log files from network volumes. Whenever possible, install Filebeat on the host machine and send the log files directly from there. Reading files from network volumes (especially on Windows) can have unexpected side effects. For example, changed file identifiers may result in Filebeat reading a log file from scratch again.
Reference
We always recommend installing Filebeat on the remote servers. Using shared folders is not supported. The typical setup is that you have a Logstash + Elasticsearch + Kibana in a central place (one or multiple servers) and Filebeat installed on the remote machines from where you are collecting data.
Reference
With one Filebeat instance running, you can apply different configuration settings to different files by defining multiple input sections, as in the example below; check here for more:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - 'C:\App01_Logs\log.txt'
  tags: ["App01"]
  fields:
    app_name: App01
- type: log
  enabled: true
  paths:
    - 'C:\App02_Logs\log.txt'
  tags: ["App02"]
  fields:
    app_name: App02
- type: log
  enabled: true
  paths:
    - 'C:\App03_Logs\log.txt'
  tags: ["App03"]
  fields:
    app_name: App03
And you can have one Logstash pipeline with an if statement in the filter:
filter {
  if [fields][app_name] == "App01" {
    grok { }
  } else if [fields][app_name] == "App02" {
    grok { }
  } else {
    grok { }
  }
}
The condition can also be if "App02" in [tags] or if [source] == "C:\App01_Logs\log.txt", matching what we send from Filebeat.
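For completeness, with this layout the single pipeline needs only one Beats input on one port, which every Filebeat output points at; a minimal sketch (5044 is the conventional Beats port):

input {
  beats {
    port => 5044
  }
}

So instead of 20 pipelines and 20 ports, you get one Filebeat instance, one port, and one pipeline that routes on the app_name field or tags.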
