Config file masks - go

We deploy services as Docker containers using Marathon. The containers include a base config file, and Marathon pulls an environment config file (which has a subset of the base keys) at deployment time, so when the app starts it has:
environment.toml
config.toml
When reading the config we need to merge the values from both files into a single set, effectively masking/shadowing the values of keys present in both files with those in the environment file.
I didn't find this functionality in the Viper docs. Unless I have missed something, it seems my options are:
Write a package that uses Viper to read both files and perform the conflation.
Extend Viper
Before I start writing code, is there already a mechanism for doing this?
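For reference, here is a minimal sketch of option 1, assuming the two file names above and github.com/spf13/viper (recent Viper versions appear to provide MergeInConfig for exactly this kind of overlay, which would answer the question):

package main

import (
	"fmt"

	"github.com/spf13/viper"
)

func main() {
	v := viper.New()

	// Base config baked into the container image.
	v.SetConfigFile("config.toml")
	if err := v.ReadInConfig(); err != nil {
		panic(fmt.Errorf("reading base config: %w", err))
	}

	// Environment config pulled by Marathon at deployment time.
	// Keys present in both files end up with the environment value.
	v.SetConfigFile("environment.toml")
	if err := v.MergeInConfig(); err != nil {
		panic(fmt.Errorf("merging environment config: %w", err))
	}

	// All lookups now see the merged/masked view.
	fmt.Println(v.AllSettings())
}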

Related

How to include config files for the Google Ops-agent

I want to set up some configuration for the Google Cloud Ops Agent in order to deploy it via Ansible.
For example /etc/google-cloud-ops-agent/kafka.yaml
How do I include *.yaml configs?
If I use /etc/google-cloud-ops-agent/config.yaml, I'm worried the configuration will be overwritten.
There are two ways I can think of to do this.
The easiest (and least precise): use the copy module to recursively copy the directory content to the target. Of course, if there are files other than ".yaml", you'll get those as well.
The more complex way (and I have not tested this): use the find module, executed locally on the control node, to get a list of the .yaml files, register their locations, and then copy them up. There's probably a simpler way.
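A rough sketch of the first approach, assuming the .yaml files sit in a files/ops-agent/ directory on the control node (paths and names are placeholders, not tested):

- name: Deploy Ops Agent configs
  hosts: all
  become: true
  tasks:
    # Recursively copies everything under the directory, .yaml or not.
    - name: Copy config directory to the target
      ansible.builtin.copy:
        src: files/ops-agent/
        dest: /etc/google-cloud-ops-agent/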

Nifi: How to read access/secret key for AWSCredentialsProviderControllerService independent of environment

We currently set the path of a properties file which contains the secret/access key (the Credentials File) for AWSCredentialsProviderControllerService. The issue is that we have to change the properties path for prod and non-prod each time we run the NiFi workflow. We are trying to come up with a setup where the Credentials File path needs no configuration change, so that the access/secret key is read regardless of prod or non-prod. Since the Credentials File won't support NiFi Expression Language, we are trying to make use of ACCESS KEY/SECRET properties: ${ENV:equalsIgnoreCase("prod"):ifElse(${ACESS_PROD},${ACESS_NONPROD})}. The issue we are facing is that we are not able to store these access/secret keys in the registry, hence we are unable to implement this change. Is there any way to read the access/secret key regardless of environment in NiFi? Currently we use one property file for non-prod NiFi and a second property file for prod. In this setup, the credential file path needs to be changed manually when switching between prod and non-prod. We are trying to work seamlessly without changing the path of the credential file. Is there any way to make this happen?
The processor that uses the AWSCredentialsProviderControllerService does not support parameters or variables, but the AWSCredentialsProviderControllerService "Credentials File" property does accept Parameter Context entries; make use of this for your solution.
Example:
Trigger something --> RouteOnAttribute --> if prod (run ExecuteStreamCommand and change the Parameter Context value to point to the prod credfile), else if dev (run ExecuteStreamCommand and change the Parameter Context value to point to the dev credfile) --> then run your AWS processor.
You can use the toolkit client to edit the parameter context, or even the nipyapi Python module. It will not be fast, though.

Nifi encrypt variables/properties files

NiFi custom properties (per-environment property files) look to be a perfect way to define environment-specific paths and credentials. The only issue is how to keep sensitive information there. There is the NiFi Encrypt-Config Tool, described in more detail here.
Is the NiFi Encrypt-Config Tool capable of encrypting variable files (defined with nifi.variable.registry.properties) besides nifi.properties?
As far as I understand, it encrypts only nifi.properties. This is important because with the NiFi Docker image I can only define nifi.variable.registry.properties (via the NIFI_VARIABLE_REGISTRY_PROPERTIES env var), without the ability to modify nifi.properties.
The NiFi encrypt-config tool interacts with the following configuration files:
nifi.properties
login-identity-providers.xml
authorizers.xml
bootstrap.conf
flow.xml.gz
It does not handle any linked custom variable definition files, and there is no mechanism for sensitive variables to be properly secured and stored. Variables do not support any sensitive values at all for this reason.
Variables are treated as deprecated in modern versions of NiFi (still supported, but their use is discouraged), and parameters were introduced in version 1.10.0 as the modern solution. Parameters do support sensitive values, and they are accessible from every property descriptor at the framework level rather than on a per-field basis that depends on the developer's explicit decision to support them. You should prioritize parameters for the storage of sensitive values needed in your flow definitions.
Depending on your threat model, you may have less robust but acceptable alternatives:
If you accept the security level of environment variables, you can populate these directly and they can be referenced in any property which supports Expression Language, the same as "NiFi variables".
You can edit the nifi.properties file through a custom Docker image, startup scripts, etc. Any modified or added properties in that file can be encrypted by adding their key (property key descriptor, not cryptographic key) as a comma-delimited list to nifi.sensitive.props.additional.keys in that file. These properties will also be protected by the toolkit and decrypted in memory during NiFi application startup. However, nifi.properties is meant to hold framework-level configuration values, not component-level properties.
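As a small sketch of that last point (the custom property name below is hypothetical; nifi.sensitive.props.additional.keys is the setting referred to above):

# nifi.properties, added via a custom image or startup script
my.custom.aws.credentials.path=/opt/secrets/aws-credentials.properties

# ask the toolkit to also protect the custom property above
nifi.sensitive.props.additional.keys=my.custom.aws.credentials.path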

When following 12 factor rule, where do I store configs?

Here is the link: 12 factor
I am confused about whether I should store values inside my app.properties file or in environment variables.
App.properties
Memory_Folder_Test = Test
Memory_Folder_Prod = Production
Memory_Folder_Dev = Development
Strong_threshold = 10
Low_Threshold = 2
Username = FirstUser
Password = PasswordSecret
So theoretically, where should I put these values: in application.properties or in environment variables? If I did not read it wrong, the purpose of 12 factor is to move values out of the properties file and externalize them.
You can store the values in the application.properties file; however, Spring allows you to override those values using environment variables. Hence, it is 12 factor compliant.
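As a small illustration of that override behaviour (a sketch assuming Spring Boot's relaxed binding and a lower-case property name rather than the mixed-case names above):

# application.properties, the default baked into the artifact
low.threshold=2

# at deploy time, override it without touching the file
export LOW_THRESHOLD=5
java -jar app.jar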
You store the properties externally using something like Spring Cloud Config. You then use environment properties to define the configuration (like the URL) needed to access Cloud Config from your applications.
I prefer to store environment variables in files, encrypt the files, and check the encrypted files into git via blackbox: https://github.com/StackExchange/blackbox
Blackbox handles encryption/decryption in a way that makes it rather difficult to check the unencrypted creds into your repo. Also, the way OpenPGP works, you can enable teams of devs to encrypt/decrypt the files.
That project is maintained by StackExchange (aka the guys who run this site). It takes some time to figure out OpenPGP/GPG (which blackbox depends on), but it has been well worth it for me. I've been using it on Linux and also on Windows (via the Windows Subsystem for Linux).
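The workflow looks roughly like this (command names taken from the blackbox README; the file name is a placeholder):

# one-time setup in the repo
blackbox_initialize

# register a secrets file; blackbox encrypts it and ignores the plaintext copy
blackbox_register_new_file secrets/prod.env

# decrypt locally, or on a machine whose GPG key has been admitted
blackbox_decrypt_all_files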

Concourse: How to set variables in pipelines file?

So I have been running Concourse pipelines using a separate YAML file to hold my variables, similar to this example in the documentation. However, I would like to set my variables within my main pipeline.yml file to avoid using the CLI option --load-vars-from some_other_file.yml. How could I do this?
Note: I might be looking for something that uses params:, but I want the params I set to be global for everything in my pipeline.yml file, so that everything can use the variables I set in it.
I don't believe that what you want to do is possible, in the way that you suggest.
I think you have two options:
Put your YAML file in an S3 bucket, and have the pipeline watch the S3 bucket as a resource, and call set-pipeline on itself whenever that bucket changes, using the YAML file in the bucket to populate variables.
Put your YAML file in an S3 bucket, and use it as an input to whatever job needs those variables. You can then use a tool like yml2env to make the contents of that YAML file available to your scripts as environment variables.
As of Concourse v3.3.0, you can set up Credential Management in order to use variables from Vault (currently the only credential manager supported by Concourse). This way you won't have to keep any variables in a separate file, and Vault will keep them secure as well.
Using the Credential Manager you can parameterize:
source under resources in a pipeline
source under resource_types in a pipeline
source under image_resource in a task config
params in a pipeline
params in a task config
For setting up Vault with Concourse you can refer to:
https://concourse-ci.org/creds.html
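For example, once a credential manager is wired up, a pipeline can reference secrets with the ((name)) syntax and nothing sensitive has to live in the YAML itself (resource name and secret name below are placeholders):

resources:
- name: my-repo
  type: git
  source:
    uri: git@github.com:example/repo.git
    # resolved from the credential manager (e.g. Vault) at runtime
    private_key: ((github-private-key))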
