I have installed HashiCorp Vault in a k8s cluster and
stored KV secrets from the UI. I'm looking for documentation or a link explaining how to retrieve these secrets from a Jenkins pipeline.
There's a plugin that does exactly that: https://github.com/jenkinsci/hashicorp-vault-plugin#usage-via-jenkinsfile
node {
    // define the secrets and the env variables
    // engine version can be defined on secret, job, folder or global.
    // the default is engine version 2 unless otherwise specified globally.
    def secrets = [
        [path: 'secret/testing', engineVersion: 1, secretValues: [
            [envVar: 'testing', vaultKey: 'value_one'],
            [envVar: 'testing_again', vaultKey: 'value_two']]],
        [path: 'secret/another_test', engineVersion: 2, secretValues: [
            [vaultKey: 'another_test']]]
    ]

    // optional configuration, if you do not provide this the next higher configuration
    // (e.g. folder or global) will be used
    def configuration = [vaultUrl: 'http://my-very-other-vault-url.com',
                         vaultCredentialId: 'my-vault-cred-id',
                         engineVersion: 1]

    // inside this block your credentials will be available as env variables
    withVault([configuration: configuration, vaultSecrets: secrets]) {
        sh 'echo $testing'
        sh 'echo $testing_again'
        sh 'echo $another_test'
    }
}
I am using CDK v2, with TypeScript.
I want my bastion machine to log stuff to Cloudwatch.
The specific LogGroup I want it to write to is also created via CDK (so that I can customise the retention).
How can I customise the userData script with knowledge about other AWS resources, which are also created by CDK - so I can't know their names?
My CDK stuff is being deployed via a CDK pipeline.
Here is my CDK script:
export class StoBastion extends cdk.Stack {
    constructor(scope: Construct, id: string, props: cdk.StackProps){
        super(scope, id, props);

        // actual name: DemoStage-StoBastion-StoBastionLogGroup5EEB3DE8-AdkaWy0ELoeF
        const logGroup = new LogGroup(this, "StoBastionLogGroup", {
            retention: RetentionDays.TWO_WEEKS,
        });

        let initScriptPath = 'lib/script/bastion-linux2-asg-provision.sh';
        const userDataText = readFileSync(initScriptPath, 'utf8');

        const autoScalingGroup = new AutoScalingGroup(this, 'StoAsg', {
            ...
            userData: UserData.custom(userDataText),
        })
    }
}
And the shell script I want to use as the userData for the instance:
#!/bin/sh
### cloudwatch ###
# This goes as early as possible in the script, so we can see what's going
# on from Cloudwatch ASAP.
echo " >>bastion>> installing cloudwatch package $(date)"
yum install -y awslogs
echo " >>bastion>> configuring cloudwatch - ${TF_APP_LOG_GROUP} $(date)"
## overwrite awscli.conf ##
cat > /etc/awslogs/awscli.conf <<EOL
[plugins]
cwlogs = cwlogs
[default]
region = ${TF_APP_REGION}
EOL
## end of overwrite awscli.conf ##
## overwrite awslogs.conf ##
cat > /etc/awslogs/awslogs.conf <<EOL
[general]
state_file = /var/lib/awslogs/agent-state
[cloudinit]
datetime_format = %b %d %H:%M:%S
file = /var/log/cloud-init-output.log
buffer_duration = 5000
log_group_name = ${TF_APP_LOG_GROUP}
log_stream_name = linux2-cloud-init-output-{instance_id}
initial_position = start_of_file
EOL
## end of overwrite awslogs.conf ##
echo " >>bastion>> start awslogs daemon $(date)"
systemctl start awslogsd
echo " >>bastion>> make sure awslogs starts up on boot"
systemctl enable awslogsd.service
### end cloudwatch ###
I want to somehow replace the variable references in the userData script like ${TF_APP_LOG_GROUP} with values populated at CDK deploy time so they have the correct values.
I'm doing cloudwatch stuff at the moment, but there will be other stuff I need to do like this, so this question isn't about cloudwatch - it's about "how can I configure my userData with values known only at CDK deploy time"?
You can do this with conventional string formatting tools, as if the log group were a regular string:
const userDataText = readFileSync(
initScriptPath,
'utf8'
).replaceAll(
'${TF_APP_LOG_GROUP}',
logGroup.logGroupName
);
What this does behind the scenes: logGroup.logGroupName resolves to a token (a special string that looks something like ${TOKEN[LogGroup.Name.1234]}), so replaceAll substitutes every occurrence of ${TF_APP_LOG_GROUP} in the script text with that token, and CloudFormation in turn replaces the token with the actual log group name during deployment.
For reference: https://docs.aws.amazon.com/cdk/v2/guide/tokens.html
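The same trick chains for the region placeholder in the script; a minimal sketch (assuming the question's script, and using this.region, the stack's own region token, which CloudFormation likewise resolves at deploy time):

const userDataText = readFileSync(initScriptPath, 'utf8')
    .replaceAll('${TF_APP_LOG_GROUP}', logGroup.logGroupName)
    .replaceAll('${TF_APP_REGION}', this.region);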
This was all incorrect. I'm leaving it for posterity, but it is not valid, as it was the wrong direction.
Wherever you are synthing your template - be that in a SimpleSynth Pipelines step or in a CodeBuild step if using a standard CodePipeline/CodeBuild - you can include context variables in the synth.
The command cdk deploy can be followed by any context variables you want: cdk deploy -c VariableName=value - if your bash script returns the answers to the shell, you can store them as shell variables and use them in the cdk deploy.
You can then reference these variables within the actual stacks with:
const bucket_name = this.node.tryGetContext('VariableName');
See this documentation or this SO post for more information.
I programmatically write logs from the function using code like this:
import {Logging} from '@google-cloud/logging';

const logging = new Logging();
const log = logging.log('log-name');

const metadata = {
    type: 'cloud_function',
    labels: {
        function_name: process.env.FUNCTION_NAME,
        project: process.env.GCLOUD_PROJECT,
        region: process.env.FUNCTION_REGION
    },
};

log.write(
    log.entry(metadata, "some message")
);
Later in Logs Explorer I get the log message where labels.region is us1, whereas the standard logs that GCP adds, e.g. "Function execution started", contain the us-central1 value.
Shouldn't they be the same? Maybe I missed something, or, if it was done intentionally, what is the reason behind it?
process.env.FUNCTION_REGION is supported only in the Node 8 runtime. In newer runtimes it was deprecated. More info in the documentation.
If your function requires one of the environment variables from an older runtime, you can set the variable when deploying your function.
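For example, a hypothetical deploy command that sets the variable explicitly (the function name and runtime here are placeholders):

gcloud functions deploy my-function \
    --runtime nodejs18 \
    --set-env-vars FUNCTION_REGION=us-central1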
Overview
Create an aws_secretsmanager_secret
Create an aws_secretsmanager_secret_version
Store a uniquely generated string as the version above
Use a local-exec provisioner to store the actual secured string using bash
Reference that string using the secretsmanager resource in, for example, an RDS instance deployment.
Objective
Keep all plain-text strings out of remote state residing in an S3 bucket
Use AWS Secrets Manager to store these strings
Set once, retrieve by calling the resource in Terraform
Problem
Error: Secrets Manager Secret
"arn:aws:secretsmanager:us-east-1:82374283744:secret:Example-rds-secret-fff42b69-30c1-df50-8e5c-f512464a4a11-pJvC5U"
Version "AWSCURRENT" not found
when running terraform apply
Question
Why isn't it moving the AWSCURRENT version automatically? Am I missing something? Is my bash command wrong? The value is not written to the secret_version, but it is referenced correctly.
Look at the main.tf code, which actually performs the command:
provisioner "local-exec" {
command = "bash -c 'RDSSECRET=$(openssl rand -base64 16); aws secretsmanager put-secret-value --secret-id ${data.aws_secretsmanager_secret.secretsmanager-name.arn} --secret-string $RDSSECRET --version-stages AWSCURRENT --region ${var.aws_region} --profile ${var.aws-profile}'"
}
Code
main.tf
data "aws_secretsmanager_secret_version" "rds-secret" {
secret_id = aws_secretsmanager_secret.rds-secret.id
}
data "aws_secretsmanager_secret" "secretsmanager-name" {
arn = aws_secretsmanager_secret.rds-secret.arn
}
resource "random_password" "db_password" {
length = 56
special = true
min_special = 5
override_special = "!#$%^&*()-_=+[]{}<>:?"
keepers = {
pass_version = 1
}
}
resource "random_uuid" "secret-uuid" { }
resource "aws_secretsmanager_secret" "rds-secret" {
name = "DAL-${var.environment}-rds-secret-${random_uuid.secret-uuid.result}"
}
resource "aws_secretsmanager_secret_version" "rds-secret-version" {
secret_id = aws_secretsmanager_secret.rds-secret.id
secret_string = random_password.db_password.result
provisioner "local-exec" {
command = "bash -c 'RDSSECRET=$(openssl rand -base64 16); aws secretsmanager put-secret-value --secret-id ${data.aws_secretsmanager_secret.secretsmanager-name.arn} --secret-string $RDSSECRET --region ${var.aws_region} --profile ${var.aws-profile}'"
}
}
variables.tf
variable "aws-profile" {
description = "Local AWS Profile Name "
type = "string"
}
variable "aws_region" {
description = "aws region"
default="us-east-1"
type = "string"
}
variable "environment" {}
terraform.tfvars
aws_region="us-east-1"
aws-profile="Example-Environment"
environment="dev"
The error likely isn't occurring in your provisioner execution per se, because if you remove the provisioner block the error still occurs on apply - but, confusingly, only the first time after a destroy.
Removing the data "aws_secretsmanager_secret_version" "rds-secret" block as well "resolves" the error completely.
I'm guessing there is some sort of config delay issue here... but adding a 20-second delay provisioner to the aws_secretsmanager_secret.rds-secret resource block didn't help.
And the value from the data block can be successfully output on subsequent apply runs, so maybe it's not just timing.
Even if you resolve the above more basic issue, it's likely your provisioner will still be confusing things by modifying a resource that Terraform is trying to manage in the same run. I'm not sure there's a way to get around that except perhaps by splitting into two separate operations.
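As a sketch, one way to split it is two targeted runs (-target is a standard Terraform flag; the resource address comes from the code above):

terraform apply -target=aws_secretsmanager_secret_version.rds-secret-version
terraform apply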
Update:
It turns out that on the first run the data sources are read before the aws_secretsmanager_secret_version resource is created. Just adding depends_on = [aws_secretsmanager_secret_version.rds-secret-version] to the data "aws_secretsmanager_secret_version" block resolves this fully and makes the interpolation for your provisioner work as well. I haven't tested the actual provisioner.
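As a sketch, the fixed data block would look like:

data "aws_secretsmanager_secret_version" "rds-secret" {
  secret_id  = aws_secretsmanager_secret.rds-secret.id
  depends_on = [aws_secretsmanager_secret_version.rds-secret-version]
}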
Also you may need to consider this (which I take to not always apply to 0.13):
NOTE: In Terraform 0.12 and earlier, due to the data resource behavior of deferring the read until the apply phase when depending on values that are not yet known, using depends_on with data resources will force the read to always be deferred to the apply phase, and therefore a configuration that uses depends_on with a data resource can never converge. Due to this behavior, we do not recommend using depends_on with data resources.
I'm trying to use variables declared in a bash script in my Jenkinsfile (Jenkins pipeline), without using extra plugins like the EnvInject plugin.
Please help; any idea will be appreciated.
You need to output those variables to a file, such as a properties or YAML file, then use the pipeline step readProperties / readYaml to read it into a Map in the Jenkinsfile.
steps {
    sh '''
        ...
        AA=XXX
        BB=YYY
        # dump all shell variables as key=value lines
        set > vars.prop
    '''
    script {
        vars = readProperties file: 'vars.prop'
        // merge vars into env (note: set also dumps unrelated shell vars)
        vars.each { k, v -> env."${k}" = v }
        echo "AA=${env.AA}"
    }
}
I have done it with something like this: you can store the variables from inside the shell into a file in the workspace, and then, once you are out of the shell block, read the file in Groovy to load the key-value pairs into your environment.
Something like:
def env_file = "${WORKSPACE}/shell_env.txt"
echo ("INFO: envFileName = ${env_file}")

def read_env_file = readFile env_file
def lines = read_env_file.readLines()

lines.each { String line ->
    def object = line.split("=")
    // assign dynamically-named environment variables
    env."${object[0]}" = object[1]
}
I'm trying to convert my freestyle job into a scripted pipeline. I'm using Gradle for the build, and Artifactory to resolve my dependencies and publish artifacts.
My build is parameterized with 3 params. In the freestyle job, when I configure Invoke Gradle script, I have the checkbox Pass all job parameters as System properties, and in my
project.gradle file I use the params with System.getProperty().
Now, implementing my pipeline, I define the job parameters and have them as environment variables in the Jenkinsfile, but can I pass these params to the Gradle task?
Following the official tutorial for using the Artifactory-Gradle plugin in a pipeline, I run my build with:
buildinfo = rtGradle.run rootDir: "workspace/project/", buildFile: 'project.gradle', tasks: "cleanJunitPlatformTest build"
Can I pass params to the Gradle build and use them in my .gradle file?
Thanks
Yes, you can. If you are using sh ''', switch that to sh """ to get the expanded values.
Jenkinsfile
#!/usr/bin/env groovy
pipeline {
agent any
parameters {
string(name: 'firstParam', defaultValue: 'AAA', description: '')
string(name: 'secondParam', defaultValue: 'BBB', description: '')
string(name: 'thirdParam', defaultValue: 'CCC', description: '')
}
stages {
stage ('compile') {
steps {
sh """
gradle -PfirstParam=${params.firstParam} -PsecondParam=${params.secondParam} -PthirdParam=${params.thirdParam} clean build
sh """
}
}
}
}
and inside your build.gradle you can access them as
def firstParam = project.getProperty("firstParam")
You can also use a system property, with the -D prefix, as compared to a project property with -P. In that case you can get the value inside build.gradle as
def firstParam = System.getProperty("firstParam")
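A small build.gradle sketch showing both styles side by side (findProperty is the null-safe lookup, so you can supply a fallback default; the property names follow the Jenkinsfile above):

// project property, passed as -PfirstParam=...
def firstParam = project.findProperty('firstParam') ?: 'AAA'

// system property, passed as -DfirstParam=...
def firstParamSys = System.getProperty('firstParam', 'AAA')

println "firstParam = ${firstParam}"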