Concourse: How to set variables in pipelines file? - continuous-integration

So I have been running Concourse pipelines using a separate YAML file to hold my variables, similar to this example in the documentation. However, I would like to set my variables within my main pipeline.yml file, to avoid using the CLI option --load-vars-from some_other_file.yml. How could I do this?
Note: I might be looking for something that uses params:, but I want the params I set to be global to my whole pipeline.yml file, so that everything in the pipeline can use them.
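For reference, this is roughly what the current setup looks like (file and variable names below are just examples):

vars.yml — the separate file I'd like to get rid of:
docker-username: my-user
docker-password: my-secret

pipeline.yml — consumes the values via ((...)) interpolation (older Concourse versions use {{...}}):
resources:
- name: app-image
  type: docker-image
  source:
    repository: my-org/app
    username: ((docker-username))
    password: ((docker-password))

and the set-pipeline call I'd like to simplify:
fly -t my-target set-pipeline -p my-pipeline -c pipeline.yml --load-vars-from vars.yml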

I don't believe that what you want to do is possible, in the way that you suggest.
I think you have two options:
Put your YAML file in an S3 bucket, have the pipeline watch that bucket as a resource, and call set-pipeline on itself whenever the bucket changes, using the YAML file in the bucket to populate variables.
Put your YAML file in an S3 bucket, and use it as an input to whatever job needs those variables. You can then use a tool like yml2env to make the contents of that YAML file available to your scripts as environment variables.

As of Concourse v3.3.0, you can set up Credential Management in order to pull variables from Vault (currently the only credential manager Concourse supports). This way you won't have to keep any variables in a separate file, and Vault will keep them secure as well.
Using the Credential Manager you can parameterize:
source under resources in a pipeline
source under resource_types in a pipeline
source under image_resource in a task config
params in a pipeline
params in a task config
For setting up Vault with Concourse you can refer to:
https://concourse-ci.org/creds.html
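For example, once a credential manager is wired up, the pipeline can reference secrets directly with ((...)) and nothing needs to be passed via --load-vars-from. The names below are illustrative; with Vault each ((name)) is looked up under the configured path prefix, e.g. /concourse/<team>/<name>:

resources:
- name: app-image
  type: docker-image
  source:
    repository: my-org/app
    username: ((docker-username))
    password: ((docker-password))

jobs:
- name: deploy
  plan:
  - get: app-image
  - task: deploy
    file: ci/deploy.yml
    params:
      API_TOKEN: ((api-token))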

Related

Helper scripts that plan & apply Terraform so that no other inputs are needed

I'm working on a project in Azure, using Terraform to implement RBAC in the environment. I wrote code that creates Azure AD groups, pulls users from Azure AD, adds them to the groups based on their role, and assigns permissions to those groups. I use many tfvars files in my variables folder, so in order to run or apply Terraform I have to pass the input variables as seen below:
terraform destroy --var-file=variables/service_desk_group_members.tfvars --var-file=variables/network_group_members.tfvars --var-file=variables/security_group_members.tfvars --var-file=variables/EAAdmin_goroup_members.tfvars --var-file=variables/system_group_members.tfvars
I wish to use a script, either bash or Python, to wrap these variables, so that I can just run
terraform plan
I have not tried this before; it's a requirement I'm attempting for the first time. If someone can share a sample script or point me in the right direction, I would appreciate it.
Terraform provides an auto-loading mechanism. If you rename all your files to something.auto.tfvars, Terraform will load them automatically, ordered lexically by filename. You will have to move them out of the variables directory, though: Terraform expects them in the same directory as the configuration.
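A rough one-off sketch of that rename, using the filenames from the question (run from the root module directory where your .tf files live):

mv variables/service_desk_group_members.tfvars service_desk_group_members.auto.tfvars
mv variables/network_group_members.tfvars network_group_members.auto.tfvars
mv variables/security_group_members.tfvars security_group_members.auto.tfvars
mv variables/EAAdmin_goroup_members.tfvars EAAdmin_goroup_members.auto.tfvars
mv variables/system_group_members.tfvars system_group_members.auto.tfvars

terraform plan

After the rename, plain terraform plan / terraform apply / terraform destroy picks the values up without any --var-file flags.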

How to read an environment variable from AWS ECS task definition in Spring?

Hi there, I already have an environment variable set up in the ECS task definition, as shown in the screenshot. I assumed I could simply treat it as a regular environment variable and read it like this in Spring:
@Value("${activeDirectoryPwd}")
private String adPwd;
but somehow the variable adPwd comes back null. Do I have to read it differently?
Environment variables are environment variables no matter how they are defined, so there is likely something going on within your containers themselves. Is your Spring application being launched directly in the container, or is there another service running it?
As a separate note, you shouldn't pass passwords directly into the task definition like that. Instead, store them in Secrets Manager or Parameter Store and pass the secret through in the task definition. This prevents the secret from being read in the AWS Console.
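As an illustration, the container definition would then reference the secret instead of embedding the value; ECS injects it as an environment variable with the given name when the container starts. The ARN below is a placeholder, and the task execution role needs permission to read the secret:

"containerDefinitions": [
  {
    "name": "my-spring-app",
    "secrets": [
      {
        "name": "activeDirectoryPwd",
        "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:activeDirectoryPwd"
      }
    ]
  }
]

Since Spring Boot includes OS environment variables among its property sources, the existing @Value("${activeDirectoryPwd}") lookup should keep working unchanged.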

Nifi: How to read access/secret key for AWSCredentialsProviderControlerService independent of environment

We currently set the path of a properties file containing the secret/access key as the Credentials File for AWSCredentialsProviderControlerService. The issue is that we have to change this path between prod and non-prod each time we run the NiFi workflow. We are trying to avoid any change to the Credentials File configuration, so that the access/secret key is read regardless of prod or non-prod. Since the Credentials File property doesn't support NiFi Expression Language, we tried to make use of ACCESS KEY/SECRET properties like ${ENV:equalsIgnoreCase("prod"):ifElse(${ACESS_PROD},${ACESS_NONPROD})}. The issue we are facing is that we are not able to store these access/secret keys in the registry, hence we are unable to implement this change. Is there any way to read the access/secret key regardless of environment in NiFi? Currently we use one property file for non-prod NiFi and a second property file for prod. In this setup we need to manually change the Credentials File path when switching between prod and non-prod, and we are trying to make this work seamlessly without changing it. Is there any way to make this happen?
The processor that uses the AWSCredentialsProviderControlerService does not support parameters or variables, but the AWSCredentialsProviderControlerService "Credentials File" property does accept Parameter Context entries; make use of this for your solution.
Example:
Trigger something --> RouteOnAttribute --> if prod (run ExecuteStreamCommand and change the Parameter Context value to point to the prod credentials file), else if dev (run ExecuteStreamCommand and change the Parameter Context value to point to the dev credentials file) --> then run your AWS processor.
You can use the toolkit client to edit the parameter context, or even the nipyapi Python module. It will not be fast, though.
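A rough sketch of the nipyapi route, assuming a recent nipyapi release and that the controller service reads its Credentials File path from a parameter named aws.credentials.file in a context named aws-credentials (host, names, and paths below are all made up):

# untested sketch; host, context name, parameter name and file path are placeholders
import nipyapi

nipyapi.config.nifi_config.host = 'https://nifi.example.com:8443/nifi-api'

# look up the parameter context the AWS controller service reads from
context = nipyapi.parameters.get_parameter_context('aws-credentials', identifier_type='name')

# repoint the credentials-file parameter at the prod (or non-prod) file
param = nipyapi.parameters.prepare_parameter('aws.credentials.file', '/opt/nifi/conf/prod.credentials')
nipyapi.parameters.upsert_parameter_to_context(context, param)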

Get default zone / region

I'm using the Go google-cloud-sdk to get information on resources (specifically compute instances here, but it doesn't really matter).
The gcloud CLI allows you to do something like this:
gcloud config set compute/zone ZONE
Under the hood, this writes those values to ~/.config/gcloud/configurations/config_default as something that looks like an INI file.
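For example, after the command above the file contains roughly this (assuming a project was configured earlier; the values are placeholders):

[core]
project = my-project

[compute]
zone = us-central1-a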
Can the (Go) SDK read those config files?
The CLI also reads the environment variable CLOUDSDK_COMPUTE_ZONE if the zone is not defined in the config file. Can the SDK read this variable as well?
To sum up the question: how can I use the same config mechanism the gcloud CLI uses with the Go SDK?
To sum up the question: how can I use the same config mechanism the gcloud CLI uses with the Go SDK?
As far as I know, you can't. You need to specify the zone in all your operations.
A long time ago, someone asked about CLOUDSDK_CONFIG, and the last response is crystal clear:
 Resolved: we decided not to honor CLOUDSDK_CONFIG, in the interest of maintaining simplicity for the ADC spec.
https://github.com/googleapis/google-cloud-go/issues/288
And I think that's true for all the CLOUDSDK_* environment variables.
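If you only need the same behaviour, one workaround is to replicate the lookup yourself: check CLOUDSDK_COMPUTE_ZONE first, then fall back to parsing the default configuration file with an INI parser. A rough sketch using the third-party gopkg.in/ini.v1 package (it ignores CLOUDSDK_CONFIG and named configurations, and error handling is minimal):

package main

import (
	"fmt"
	"os"
	"path/filepath"

	"gopkg.in/ini.v1" // third-party INI parser, not part of the Google SDK
)

// defaultZone mimics the gcloud CLI lookup order: environment variable first,
// then the default configuration file.
func defaultZone() (string, error) {
	if z := os.Getenv("CLOUDSDK_COMPUTE_ZONE"); z != "" {
		return z, nil
	}
	home, err := os.UserHomeDir()
	if err != nil {
		return "", err
	}
	cfg, err := ini.Load(filepath.Join(home, ".config", "gcloud", "configurations", "config_default"))
	if err != nil {
		return "", err
	}
	return cfg.Section("compute").Key("zone").String(), nil
}

func main() {
	zone, err := defaultZone()
	if err != nil {
		fmt.Println("could not determine zone:", err)
		return
	}
	fmt.Println("zone:", zone)
}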

Config file masks

We deploy services as Docker containers using Marathon. The containers include a base config file, and Marathon pulls an environment config file (which has a subset of the base keys) at deployment time, so when the app starts it has:
environment.toml
config.toml
When reading the config we need to conflate the values in both files into a single set, effectively masking/shadowing the values present in both files with those in the environment file.
I didn't find this functionality in the Viper docs. Unless I have missed something, it seems my options are:
Write a package that uses Viper to read both files and perform the conflation.
Extend Viper
Before I start writing code, is there already a mechanism for doing this?
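Depending on the Viper version in use, MergeInConfig may already cover this: read the base file first, then merge the environment file on top so that keys present in both take the environment value. A rough sketch with the file names from the question (the key name is a placeholder):

package main

import (
	"fmt"

	"github.com/spf13/viper"
)

func main() {
	v := viper.New()

	// read the base config first
	v.SetConfigFile("config.toml")
	if err := v.ReadInConfig(); err != nil {
		panic(err)
	}

	// merge the environment file on top; keys present in both files
	// end up with the value from environment.toml (masking/shadowing)
	v.SetConfigFile("environment.toml")
	if err := v.MergeInConfig(); err != nil {
		panic(err)
	}

	// resolves to environment.toml's value when the key exists there
	fmt.Println(v.GetString("some.key"))
}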
