We use the Vault plugin in our pipeline to read credentials from Vault. Now we also want to generate TLS certificates with Vault's PKI engine. For that I need the AppRole secret id for Jenkins in my pipeline file. The secret is configured in Jenkins as a 'Vault App Role Credential', and I don't know how to access it.
What I'd like to do is something like this:
withCredentials([VaultAppRoleCredential(credentialsId: 'vault_credentials', roleIdVariable: 'roleId', secretIdVariable: 'secretId')]) {
    stage('generate certificate') {
        // authenticate with credentials against Vault
        // ...
    }
}
My workaround at the moment is to duplicate the credentials: the roleId and secretId are additionally stored in a username+password credential in Jenkins.
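As a minimal sketch of that workaround (the credential id 'vault-approle-userpass' and the PKI mount/role names are placeholders; it assumes VAULT_ADDR is already set and the vault CLI is available on the agent):

withCredentials([usernamePassword(credentialsId: 'vault-approle-userpass',
                                  usernameVariable: 'ROLE_ID',
                                  passwordVariable: 'SECRET_ID')]) {
    stage('generate certificate') {
        sh '''
            # exchange role id + secret id for a client token
            VAULT_TOKEN=$(vault write -field=token auth/approle/login \
                role_id="$ROLE_ID" secret_id="$SECRET_ID")
            export VAULT_TOKEN
            # issue a certificate from the PKI engine ("pki" mount and
            # "jenkins" role are placeholders for your own setup)
            vault write pki/issue/jenkins common_name=example.com
        '''
    }
}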
Here is my working example of how to use a Vault token credential to access Vault secrets:
// Specify how to access secrets in Vault
def configuration = [
    vaultUrl: 'https://hcvault.global.nibr.novartis.net',
    vaultCredentialId: 'poc-vault-token',
    engineVersion: 2
]

def secrets = [
    [path: 'secret/projects/intd/common/accounts', engineVersion: 2, secretValues: [
        [vaultKey: 'TEST_SYS_USER'],
        [vaultKey: 'TEST_SYS_PWD']
    ]]
]
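With this definition, each vaultKey is exposed inside the withVault block as an environment variable of the same name; if you want a different variable name, the plugin also accepts an envVar entry next to vaultKey, e.g. [envVar: 'SYS_USER', vaultKey: 'TEST_SYS_USER'].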
... [omitted pipeline]
stage('Get Vault Secrets') {
    steps {
        script {
            withCredentials([[$class: 'VaultTokenCredentialBinding', credentialsId: 'poc-vault-token', vaultAddr: 'https://hcvault.global.nibr.novartis.net'],
                             usernamePassword(credentialsId: 'artifactory-jenkins-user-password', usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD')]) {
                withVault([configuration: configuration, vaultSecrets: secrets]) {
                    sh """
                        echo $env.VAULT_ADDR > hcvault-address.txt
                        echo $env.VAULT_TOKEN > hcvault-token.txt
                        echo $env.TEST_SYS_USER > sys-user-account.txt
                    """.stripIndent()
                }
            }
        }
    }
}
I was following the Vault Agent with AWS documentation and it works fine until I restart the service or reboot the instance. Any ideas on how I can overcome this problem?
Vault agent configuration file vault_agent.hcl:
pid_file = "./pidfile"

vault {
  address = "http://vault-server:8200"
  retry {
    num_retries = 5
  }
}

listener "tcp" {
  address         = "{{ ansible_ssh_host }}:8200"
  cluster_address = "{{ ansible_ssh_host }}:8201"
  tls_disable     = 1
}

auto_auth {
  method "aws" {
    mount_path = "auth/aws/project"
    config = {
      type = "ec2"
      role = "test-role"
    }
  }

  cache {
    use_auto_auth_token = true
  }

  sink "file" {
    config = {
      path = "/tmp/test"
    }
  }
}
Now the login works without problem.
login
vault login
output
Success! You are now authenticated. The token information displayed below
is already stored in the token helper. You do NOT need to run "vault login"
again. Future Vault requests will automatically use this token.
Key                          Value
---                          -----
token                        xxx
token_accessor               xxx
token_duration               46s
token_renewable              true
token_policies               ["admin-role" "default"]
identity_policies            []
policies                     ["admin-role" "default"]
token_meta_account_id        xxx
token_meta_auth_type         ec2
token_meta_role              test-role
token_meta_role_tag_max_ttl  0s
However, if I restart the agent or reboot the EC2 instance I can't authenticate anymore:
[INFO]  auth.handler: authenticating
[ERROR] auth.handler: error authenticating: error="Error making API request.
* client nonce mismatch and instance meta-data incorrect" backoff=1s
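The "client nonce mismatch" happens because the EC2 auth method ties reauthentication to the nonce generated on the first login, and a restarted agent generates a fresh one. A hedged sketch of one fix, assuming your Vault version supports the nonce option of the agent's aws auto_auth method (the value below is a placeholder and should be treated like a secret):

auto_auth {
  method "aws" {
    mount_path = "auth/aws/project"
    config = {
      type  = "ec2"
      role  = "test-role"
      # reuse the same nonce across restarts so reauthentication succeeds
      nonce = "my-stable-nonce"
    }
  }
  # ... cache and sink unchanged
}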
I'm building a Terraform template to create Azure resources, including Key Vault secrets. The customer's subscription policy doesn't allow anyone to update/delete/view Key Vault secrets.
If I run terraform apply for the first time, it works perfectly. However, running the same template again gives the following error:
Error updating Key Vault "####" (Resource Group "####"): keyvault.VaultsClient#Update: Failure responding to request: StatusCode=403 --
Original Error: autorest/azure: Service returned an error. Status=403 Code="RequestDisallowedByPolicy" Message="Resource '###' was disallowed by policy. Policy identifiers: '[{\"policyAssignment\":{\"name\":\"###nis-deny-keyvault-acl\", ...
on ..\..\modules\azure\keyvault\main.tf line 15, in resource "azurerm_key_vault" "keyvault":
15: resource "azurerm_key_vault" "keyvault" {
How can I get my CI/CD working when it means terraform apply will be run repeatedly?
Is there a way to get past this policy in Terraform?
Is there a way to prevent Terraform from updating the Key Vault once it is created (other than locking the resource)?
Here is the Keyvault module:
variable "keyvault_id" {
type = string
}
variable "secrets" {
type = map(string)
}
locals {
secret_names = keys(var.secrets)
}
resource "azurerm_key_vault_secret" "secret" {
count = length(var.secrets)
name = local.secret_names[count.index]
value = var.secrets[local.secret_names[count.index]]
key_vault_id = var.keyvault_id
}
data "azurerm_key_vault_secret" "secrets" {
count = length(var.secrets)
depends_on = [azurerm_key_vault_secret.secret]
name = local.secret_names[count.index]
key_vault_id = var.keyvault_id
}
output "keyvault_secret_attributes" {
value = [for i in range(length(azurerm_key_vault_secret.secret.*.id)) : data.azurerm_key_vault_secret.secrets[i]]
}
And here is how the modules are wired up in my template:
locals {
  secrets_map = {
    appinsights-key     = module.app_insights.app_insights_instrumentation_key
    storage-account-key = module.storage_account.primary_access_key
  }

  output_secret_map = {
    for secret in module.keyvault_secrets.keyvault_secret_attributes :
    secret.name => secret.id
  }
}

module "keyvault" {
  source              = "../../modules/azure/keyvault"
  keyvault_name       = local.kv_name
  resource_group_name = azurerm_resource_group.app_rg.name
}

module "keyvault_secrets" {
  source      = "../../modules/azure/keyvault-secret"
  keyvault_id = module.keyvault.keyvault_id
  secrets     = local.secrets_map
}

module "app_service_keyvault_access_policy" {
  source                  = "../../modules/azure/keyvault-policy"
  vault_id                = module.keyvault.keyvault_id
  tenant_id               = module.app_service.app_service_identity_tenant_id
  object_ids              = module.app_service.app_service_identity_object_ids
  key_permissions         = ["get", "list"]
  secret_permissions      = ["get", "list"]
  certificate_permissions = ["get", "list"]
}
Using Terraform to provision and manage a Key Vault with that kind of limitation sounds like a bad idea. Terraform's main idea is to monitor the state of your resources; if it is not allowed to read a resource, it becomes pretty useless. Your problem is not even that Terraform is trying to update something: it fails because it wants to check the current state of your resource, and that read is denied.
If your goal is just to create secrets in a Key Vault, I would just use the az keyvault commands like this:
az login
az keyvault secret set --name mySecret --vault-name myKeyvault --value mySecretValue
An optimal solution would of course be for the service principal you use to execute Terraform commands to have sufficient rights to perform the actions it was created for.
I know this is a late answer, but for future visitors:
The pipeline running the Terraform plan and apply needs proper access to the Key Vault.
So, if you are running your CI/CD from Azure Pipelines, you typically have a service connection that your pipeline uses for authentication.
The service connection you use for Terraform is most likely based on a service principal with Contributor rights (at least at the resource group level), otherwise it could not provision anything at all.
If that is the case, then you must add an access policy giving that same service principal (use the service principal's Enterprise Application object id) at least list, get, and set permissions for secrets.
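A minimal sketch of such a policy in Terraform (the resource name and the var.pipeline_sp_object_id variable holding the pipeline principal's object id are assumptions):

data "azurerm_client_config" "current" {}

# grant the pipeline's service principal access to secrets;
# var.pipeline_sp_object_id is a placeholder for the object id of the
# service principal behind your service connection
resource "azurerm_key_vault_access_policy" "pipeline" {
  key_vault_id = module.keyvault.keyvault_id
  tenant_id    = data.azurerm_client_config.current.tenant_id
  object_id    = var.pipeline_sp_object_id

  secret_permissions = ["get", "list", "set"]
}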
I am trying to create an EC2 instance using Terraform. Passing credentials through the Terraform CLI fails, while hardcoding them in main.tf works fine.
This is to create an EC2 instance dynamically using Terraform.
terraform apply works with the following main.tf:
provider "aws" {
region = "us-west-2"
access_key = "hard-coded-access-key"
secret_key = "hard-coded-secret-key"
}
resource "aws_instance" "ec2-instance" {
ami = "ami-id"
instance_type = "t2.micro"
tags {
Name = "test-inst"
}
}
while the following does not work:
terraform apply -var access_key="hard-coded-access-key" -var secret_key="hard-coded-secret-key"
Is there any difference between the above two ways of running the command? As per the Terraform documentation, both of the above should work.
Every Terraform module can use input variables, including the root module. But before using input variables, you must declare them; in your working main.tf the provider block uses hard-coded literals, so the -var flags of the failing command refer to variables that were never declared or referenced.
Create a variables.tf file in the same folder as your main.tf file:
variable "credentials" {
type = object({
access_key = string
secret_key = string
})
description = "My AWS credentials"
}
Then you can reference the input variable in your code like this:
provider "aws" {
region = "us-west-2"
access_key = var.credentials.access_key
secret_key = var.credentials.secret_key
}
And you can either run (note that an object variable must be passed as a single value; the dotted form -var credentials.access_key=... is not valid):
terraform apply -var 'credentials={access_key="hard-coded-access-key",secret_key="hard-coded-secret-key"}'
Or you could create a terraform.tfvars file with the following content:
# ------------------
# AWS Credentials
# ------------------
credentials = {
  access_key = "hard-coded-access-key"
  secret_key = "hard-coded-secret-key"
}
And then simply run terraform apply.
But the key point is that you must declare input variables before using them.
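If you would rather keep your original command line, here is a sketch using two plain string variables instead of an object (the sensitive flag requires Terraform 0.14 or later):

variable "access_key" {
  type      = string
  sensitive = true # keeps the value out of plan output (TF >= 0.14)
}

variable "secret_key" {
  type      = string
  sensitive = true
}

provider "aws" {
  region     = "us-west-2"
  access_key = var.access_key
  secret_key = var.secret_key
}

With those declarations, terraform apply -var access_key="..." -var secret_key="..." works as you originally expected.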
@Felipe's answer is right, but I would never recommend defining the access key and secret key in variables.tf. What you should do instead is leave them out and set the keys with aws configure; another option is to create keys for Terraform deployment purposes only, using aws configure --profile terraform (or plain aws configure for the default profile).
Your connection.tf or main.tf will then look like this:
provider "aws" {
#You can use an AWS credentials file to specify your credentials.
#The default location is $HOME/.aws/credentials on Linux and OS X, or "%USERPROFILE%\.aws\credentials" for Windows users
region = "us-west-2"
# profile configured during aws configure --profile
profile = "terraform"
# you can also restrict account here, to allow particular account for deployment
allowed_account_ids = ["12*****45"]
}
You can also keep the secret key and access key in a separate file. The reason is that variables.tf is part of your configuration and ends up in version control (GitHub, Bitbucket), so it's better not to place these sensitive keys there. Create a file somewhere on your system and give the path to it in the provider section:
provider "aws" {
region = "us-west-2"
shared_credentials_file = "$HOME/secret/credentials"
}
Here is the format of the credentials file:
[default]
aws_access_key_id = A*******Z
aws_secret_access_key = A*******/***xyz
Credentials are configured in Jenkins but there's an error suggesting they are not.
I've followed the documentation provided by the Jenkins website.
pipeline {
    agent {
        node {
            label 'master'
        }
    }
    environment {
        AWS_ACCESS_KEY_ID     = credentials('jenkins-aws-secret-key-id')
        AWS_SECRET_ACCESS_KEY = credentials('jenkins-aws-secret-access-key')
    }
    stages {
        stage('checkout') {
            steps {
                git(url: 'git@bitbucket.org:user/bitbucketdemo.git', branch: 'master', credentialsId: 'jenkins')
                echo 'hello'
            }
        }
        stage('packer') {
            steps {
                echo "${AWS_ACCESS_KEY_ID}"
            }
        }
    }
}
It should print out the value of the environment variable.
I used the CloudBees AWS Credentials plugin. Once installed, I was able to add my AWS credentials (an additional selection in the Credentials pull-down menu).
Then I used the following snippet in my Jenkinsfile:
withCredentials([[
    $class: 'AmazonWebServicesCredentialsBinding',
    accessKeyVariable: 'AWS_ACCESS_KEY_ID',
    credentialsId: 'AWS',
    secretKeyVariable: 'AWS_SECRET_ACCESS_KEY'
]]) {
    sh 'packer build -var aws_access_key=${AWS_ACCESS_KEY_ID} -var aws_secret_key=${AWS_SECRET_ACCESS_KEY} example4.json'
}
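Note that the sh step uses single quotes on purpose: the shell, not Groovy, expands ${AWS_ACCESS_KEY_ID} and ${AWS_SECRET_ACCESS_KEY}, so the secret values are never interpolated into the pipeline script or echoed into the build log.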
I got this working with my public GitHub account, where I was using something like this:
maven {
    url "https://raw.githubusercontent.com/tvelev92/reactnativeandroidmaven/master/"
}
Then I decided I wanted it to be private, so I put it on my company's GitLab and provisioned an access token. However, I cannot figure out how to change build.gradle to successfully import the POM file. Here is what I have; I am receiving a 401 server response.
maven {
    url "https://gitlabdev../../mobile/mysdk"
    credentials(HttpHeaderCredentials) {
        name = "password"
        value = "..."
    }
    authentication {
        header(HttpHeaderAuthentication)
    }
}
Please try:
url "https://raw.githubusercontent.com/tvelev92/reactnativeandroidmaven/raw/master/"