I am using this Terraform manifest to deploy AKS on Azure. I can do this from the command line fine and it works, as I have the Azure CLI configured on my machine to generate the client ID and secret:
https://github.com/anubhavmishra/terraform-azurerm-aks
However, I am now building this in an Azure DevOps pipeline.
So far I have managed to run terraform init and plan with backend storage on Azure from Azure DevOps, using this extension:
https://marketplace.visualstudio.com/items?itemName=charleszipp.azure-pipelines-tasks-terraform
Question: How do I get the client ID and secret in the Azure DevOps pipeline and set them as environment variables for Terraform? I tried running an az command from a bash task in the pipeline
> az ad sp create-for-rbac --role="Contributor"
> --scopes="/subscriptions/YOUR_SUBSCRIPTION_ID"
but it failed with this error:
> 2019-03-27T10:41:58.1042923Z
2019-03-27T10:41:58.1055624Z Setting AZURE_CONFIG_DIR env variable to: /home/vsts/work/_temp/.azclitask
2019-03-27T10:41:58.1060006Z Setting active cloud to: AzureCloud
2019-03-27T10:41:58.1069887Z [command]/usr/bin/az cloud set -n AzureCloud
2019-03-27T10:41:58.9004429Z [command]/usr/bin/az login --service-principal -u *** -p *** --tenant ***
2019-03-27T10:42:00.0695154Z [
2019-03-27T10:42:00.0696915Z {
2019-03-27T10:42:00.0697522Z "cloudName": "AzureCloud",
2019-03-27T10:42:00.0698958Z "id": "88bfee03-551c-4ed3-98b0-be68aee330bb",
2019-03-27T10:42:00.0704752Z "isDefault": true,
2019-03-27T10:42:00.0705381Z "name": "Visual Studio Enterprise",
2019-03-27T10:42:00.0706362Z "state": "Enabled",
2019-03-27T10:42:00.0707434Z "tenantId": "***",
2019-03-27T10:42:00.0716107Z "user": {
2019-03-27T10:42:00.0717485Z "name": "***",
2019-03-27T10:42:00.0718161Z "type": "servicePrincipal"
2019-03-27T10:42:00.0718675Z }
2019-03-27T10:42:00.0719185Z }
2019-03-27T10:42:00.0719831Z ]
2019-03-27T10:42:00.0728173Z [command]/usr/bin/az account set --subscription 88bfee03-551c-4ed3-98b0-be68aee330bb
2019-03-27T10:42:00.8569816Z [command]/bin/bash /home/vsts/work/_temp/azureclitaskscript1553683312219.sh
2019-03-27T10:42:02.4431342Z ERROR: Directory permission is needed for the current user to register the application. For how to configure, please refer 'https://learn.microsoft.com/en-us/azure/azure-resource-manager/resource-group-create-service-principal-portal'. Original error: Insufficient privileges to complete the operation.
2019-03-27T10:42:02.5271752Z [command]/usr/bin/az account clear
2019-03-27T10:42:03.3092558Z ##[error]Script failed with error: Error: /bin/bash failed with return code: 1
2019-03-27T10:42:03.3108490Z ##[section]Finishing: Azure CLI
Here is how I do it with Azure Pipelines.
Create a Service Principal for Terraform.
Create the following variables in your pipeline:
ARM_CLIENT_ID
ARM_CLIENT_SECRET
ARM_SUBSCRIPTION_ID
ARM_TENANT_ID
If you choose to store ARM_CLIENT_SECRET as a secret variable in Azure DevOps, you will need to map it explicitly under the Environment Variables section of the task so that it gets decrypted and Terraform can read it; see the example below.
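A minimal sketch of that mapping in a YAML pipeline (the script step and its display name are just placeholders; the variable names are the ones listed above):

- script: terraform plan
  displayName: 'Terraform plan'
  env:
    ARM_CLIENT_ID: $(ARM_CLIENT_ID)
    ARM_CLIENT_SECRET: $(ARM_CLIENT_SECRET)   # secret variables are not exposed to scripts automatically; map them explicitly
    ARM_SUBSCRIPTION_ID: $(ARM_SUBSCRIPTION_ID)
    ARM_TENANT_ID: $(ARM_TENANT_ID)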
You just need to grant your service connection the rights to create service principals. But I'd generally advise against that: just precreate a service principal and use it in your pipeline. Creating a new service principal on each run seems excessive.
You can use build/release variables and populate those with the client ID and secret, for example as shown below.
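For example, you could precreate the service principal once from your own machine (a sketch; the --name value is only an example, and you substitute your own subscription ID), then copy the appId, password and tenant from the output into the pipeline variables:

# Run once locally; the output contains appId, password and tenant
az ad sp create-for-rbac --name "terraform-aks-sp" --role="Contributor" \
    --scopes="/subscriptions/YOUR_SUBSCRIPTION_ID"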
The approach described in the post https://medium.com/@maninder.bindra/creating-a-single-azure-devops-yaml-pipeline-to-provision-multiple-environments-using-terraform-e6d05343cae2
can be considered as well. Here the Key Vault task is used to fetch the secrets from Azure Key Vault (these include the Terraform backend access secrets as well as the AKS SP secrets):
# KEY VAULT TASK
- task: AzureKeyVault@1
  inputs:
    azureSubscription: '$(environment)-sp'
    KeyVaultName: '$(environment)-pipeline-secrets-kv'
    SecretsFilter: 'tf-sp-id,tf-sp-secret,tf-tenant-id,tf-subscription-id,tf-backend-sa-access-key,aks-sp-id,aks-sp-secret'
  displayName: 'Get key vault secrets as pipeline variables'
And then you can use the secrets as variables in the rest of the pipeline. For instance, aks-sp-id can be referred to as $(aks-sp-id). So the bash/azure-cli task can be something like:
# AZ LOGIN USING TERRAFORM SERVICE PRINCIPAL
- script: |
    az login --service-principal -u $(tf-sp-id) -p $(tf-sp-secret) --tenant $(tf-tenant-id)
    cd $(System.DefaultWorkingDirectory)/tf-infra-provision
Followed by terraform init and plan (the plan is shown below; see the post for the complete pipeline details, and a sketch of the init step after the plan snippet)
# TERRAFORM PLAN
echo '#######Terraform Plan########'
terraform plan -var-file=./tf-vars/$(tfvarsFile) -var="client_id=$(tf-sp-id)" -var="client_secret=$(tf-sp-secret)" -var="tenant_id=$(tf-tenant-id)" -var="subscription_id=$(tf-subscription-id)" -var="aks_sp_id=$(aks-sp-id)" -var="aks_sp_secret=$(aks-sp-secret)" -out="out.plan"
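The init step itself isn't shown in the snippet above. A sketch of it, assuming an azurerm backend with placeholder storage account and container names, and reusing the tf-backend-sa-access-key secret fetched by the Key Vault task:

# TERRAFORM INIT (azurerm backend; storage account and container names are placeholders)
echo '#######Terraform Init########'
terraform init \
    -backend-config="storage_account_name=<your_storage_account>" \
    -backend-config="container_name=<your_container>" \
    -backend-config="key=terraform.tfstate" \
    -backend-config="access_key=$(tf-backend-sa-access-key)"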
Hope this helps.
Related
I am not able to create a secret scope on Azure Databricks from the Databricks CLI. I run a command like this:
databricks secrets "create-scope" --scope "edap-dev-kv" --scope-backend-type AZURE_KEYVAULT --resource-id "/subscriptions/ba426b6f-65cb-xxxx-xxxx-9a1e1656xxxx/resourceGroups/edap-dev-rg/providers/Microsoft.KeyVault/vaults/edap-dev-kv" --profile profile_edap_dev2_dbx --dns-name "https://edap-dev-kv.vault.azure.net/"
I get error msg:
Error: b'<html>\n<head>\n<meta http-equiv="Content-Type" content="text/html;charset=utf-8"/>\n<title>
Error 400 io.jsonwebtoken.IncorrectClaimException:
Expected aud claim to be: 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d, but was: https://management.core.windows.net/.
</title>\n</head>\n<body><h2>HTTP ERROR 400</h2>\n<p>
Problem accessing /api/2.0/secrets/scopes/create.
Reason:\n<pre> io.jsonwebtoken.IncorrectClaimException:
Expected aud claim to be: 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d,
but was: https://management.core.windows.net/.</pre></p>\n</body>\n</html>\n'
I have tried doing it with both a user (personal) AAD token and a service principal's AAD token. (I've found somewhere that it should be an AAD token of a user account.)
I am able to do it in the GUI using the same parameters.
In your case, the AAD access token was issued for the wrong resource: it was issued for https://management.core.windows.net/, but it needs to be issued for the Azure Databricks resource ID, 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d.
The simplest way to get such a token is to use the Azure CLI with the following command:
az account get-access-token -o tsv --query accessToken \
--resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d
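To feed that token to the (legacy) databricks-cli, one option is to export it and rewrite the profile used above (a sketch; the CLI will prompt for the workspace URL):

# Get a token scoped to the Azure Databricks resource ID (not ARM)
export DATABRICKS_AAD_TOKEN=$(az account get-access-token -o tsv --query accessToken \
    --resource 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d)

# Store it in the CLI profile used for create-scope
databricks configure --aad-token --profile profile_edap_dev2_dbx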
I'm working on a sample application where I want to connect to HashiCorp Vault to get the DB credentials. Below is the bootstrap.yml of my application.
spring:
  application:
    name: phonebook
  cloud:
    config:
      uri: http://localhost:8888/
    vault:
      uri: http://localhost:8200
      authentication: token
      token: s.5bXvCP90f4GlQMKrupuQwH7C
  profiles:
    active:
      - local,test
The application builds properly when the Vault server is unsealed, and Maven fetches the database username from Vault correctly. When I run the build after sealing Vault, the build fails with the error below.
org.springframework.vault.VaultException: Status 503 Service Unavailable [secret/application]: error performing token check: Vault is sealed; nested exception is org.springframework.web.client.HttpServerErrorException$ServiceUnavailable: 503 Service Unavailable: [{"errors":["error performing token check: Vault is sealed"]}
How can I resolve this? I want Maven to get the DB username and password from Vault during the build without any issues, even though it is sealed.
It's a benefit of Vault that it's not simple static storage, so on any change in the environment you need to perform some actions to keep a stable, workable system.
Advice: create a script (or scripts) to automate the process.
Example: I have a multi-service system, and some of my services use Vault to get their configuration.
init.sh:
#!/bin/bash
export VAULT_ADDR="http://localhost:8200"

# Unseal Vault (requires the configured threshold of unseal keys)
vault operator unseal <token1>
vault operator unseal <token2>
vault operator unseal <token3>

# Log in and set up the secrets engine, auth method and base policy
vault login <main token>
vault secrets enable -path=<path>/ -description="secrets for My projects" kv
vault auth enable approle
vault policy write application-policy-dev ./application-policy-DEV.hcl
application.sh:
#!/bin/bash
export VAULT_ADDR="http://localhost:8200"
vault login <main token>

# Remove the old secrets, policy and AppRole for the application
vault delete <secret>/<app_path>
vault delete sys/policy/<app>-policy
vault delete auth/approle/role/<app>-role

# Re-create them from the current application.yaml and policy file
vault kv put <secret>/<app_path> - < <(yq m ./application.yaml)
vault policy write <app>-policy ./<app>-policy.hcl
vault write auth/approle/role/<app>-role token_policies="<app>-policy"

# Log in via AppRole and print the resulting application token
role_id=$(vault read auth/approle/role/<app>-role/role-id -format="json" | jq -r '.data.role_id')
secret_id=$(vault write auth/approle/role/<app>-role/secret-id -format="json" | jq -r '.data.secret_id')
token=$(vault write auth/approle/login role_id="${role_id}" secret_id="${secret_id}" -format="json" | jq -r '.auth.client_token')
echo 'Token:' ${token}
where <app> is the name of your application, application.yaml is the file with the configuration, and <app>-policy.hcl is the file with the policy.
Of course, all these files should not be public; they are for Vault administration only.
On any change in the environment, or whenever Vault ends up sealed again (for example after a restart), just run init.sh. To get a token for the application, run application.sh. Also, if you need to change a configuration parameter, change it in application.yaml, run application.sh and use the resulting token.
Script result (for one of my services):
Key                  Value
---                  -----
token                *****
token_accessor       *****
token_duration       ∞
token_renewable      false
token_policies       ["root"]
identity_policies    []
policies             ["root"]
Success! Data deleted (if it existed) at: <secret>/<app>
Success! Data deleted (if it existed) at: sys/policy/<app>-policy
Success! Data deleted (if it existed) at: auth/approle/role/<app>-role
Success! Data written to: <secret>/<app>
Success! Uploaded policy: <app>-policy
Success! Data written to: auth/approle/role/<app>-role
Token: s.dn2o5b7tvxHLMWint1DvxPRJ
Process finished with exit code 0
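Since the scripts above set up an AppRole, the application could also authenticate with the role instead of a hard-coded token. A sketch of the corresponding bootstrap.yml change (assuming Spring Cloud Vault's AppRole support; the IDs are the ones generated by application.sh):

spring:
  cloud:
    vault:
      uri: http://localhost:8200
      authentication: APPROLE
      app-role:
        role-id: <role_id generated by application.sh>
        secret-id: <secret_id generated by application.sh>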
I am trying to deploy a Python Lambda to AWS. This Lambda just reads files from S3 buckets when given a bucket name and file path. It works correctly on my local machine if I run the following command:
sam build && sam local invoke --event testfile.json GetFileFromBucketFunction
The data from the file is printed to the console. Next, if I run the following command, the Lambda is packaged and sent to my-bucket.
sam build && sam package --s3-bucket my-bucket --template-file .aws-sam\build\template.yaml --output-template-file packaged.yaml
The next step is to deploy to prod, so I try the following command:
sam deploy --template-file packaged.yaml --stack-name getfilefrombucket --capabilities CAPABILITY_IAM --region my-region
The Lambda can now be seen in the Lambda console and I can run it, but no contents are returned. If I change the service role manually to one which allows S3 get/put then the Lambda works; however, this undermines the whole point of using the AWS SAM CLI.
I think I need to add a policy to the template.yaml file. This link here seems to say that I should add a policy such as one shown here. So, I added:
Policies: S3CrudPolicy
under 'Resources:GetFileFromBucketFunction:Properties'. I then rebuild the app and re-deploy, and the deployment fails with the following errors in CloudFormation:
1 validation error detected: Value 'S3CrudPolicy' at 'policyArn' failed to satisfy constraint: Member must have length greater than or equal to 20 (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: unique number
and
The following resource(s) failed to create: [GetFileFromBucketFunctionRole]. . Rollback requested by user.
I deleted the stack to start again. My thought was that 'S3CrudPolicy' is not an off-the-shelf policy that I can just use, but something I would have to define myself in the template.yaml file?
I'm not sure how to do this, and the docs don't seem to show any very simple use-case examples (from what I can see). If anyone knows how to do this, could you post a solution?
I tried the following:
S3CrudPolicy:
  PolicyDocument:
    -
      Action: "s3:GetObject"
      Effect: Allow
      Resource: !Sub arn:aws:s3:::${cloudtrailBucket}
      Principal: "*"
But it failed with the following error:
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Invalid template property or properties [S3CrudPolicy]
If anyone can help write a simple policy to read/write from S3, that would be amazing. I'll also need to write another one to let Lambdas invoke other Lambdas, so a solution here (I imagine something similar?) would be great, or a decent, easy-to-use guide on how to write these policy statements.
Many thanks for your help!
Found it!! In case anyone else struggles with this, you need to add the following few lines to Resources:YourFunction:Properties in the template.yaml file:
Policies:
  - S3CrudPolicy:
      BucketName: "*"
The "*" will allow your lambda to talk to any bucket, you could switch for something specific if required. If you leave out 'BucketName' then it doesn't work and returns an error in CloudFormation syaing that S3CrudPolicy is invalid.
I have a Jenkins pod running in GCP's Kubernetes Engine and I'm trying to run a maven unit test that connects to a google cloud SQL database to perform said test. My application.yaml for my project looks like this:
spring:
  cloud:
    gcp:
      project-id: <my_project_id>
      sql:
        database-name: <my_database_name>
        instance-connection-name: <my_instance_connection_name>
  jpa:
    database-platform: org.hibernate.dialect.MySQL55Dialect
    hibernate:
      ddl-auto: create-drop
  datasource:
    continue-on-error: true
    driver-class-name: com.mysql.cj.jdbc.Driver
    username: <my_cloud_sql_username>
    password: <my_cloud_sql_password>
The current Jenkinsfile associated with this project is:
Jenkinsfile:
pipeline {
    agent any
    tools {
        maven 'Maven 3.5.2'
        jdk 'jdk8'
    }
    environment {
        IMAGE = readMavenPom().getArtifactId()
        VERSION = readMavenPom().getVersion()
        DEV_DB_USER = "${env.DEV_DB_USER}"
        DEV_DB_PASSWORD = "${env.DEV_DB_PASSWORD}"
    }
    stages {
        stage('Build docker image') {
            steps {
                sh 'mvn -Dmaven.test.skip=true clean package'
                script {
                    docker.build '$IMAGE:$VERSION'
                }
            }
        }
        stage('Run unit tests') {
            steps {
                withEnv(['GCLOUD_PATH=/var/jenkins_home/google-cloud-sdk/bin']) {
                    withCredentials([file(credentialsId: 'key-sa', variable: 'GC_KEY')]) {
                        sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
                        sh("gcloud container clusters get-credentials <cluster_name> --zone northamerica-northeast1-a --project <project_id>")
                        sh 'mvn test'
                    }
                }
            }
        }
    }
}
My problem is that when the pipeline actually tries to run mvn test using the above configuration (in my application.yaml), I get this error:
Caused by: com.google.api.client.googleapis.json.GoogleJsonResponseException: 403 Forbidden
{
  "code" : 403,
  "errors" : [ {
    "domain" : "global",
    "message" : "Insufficient Permission: Request had insufficient authentication scopes.",
    "reason" : "insufficientPermissions"
  } ],
  "message" : "Insufficient Permission: Request had insufficient authentication scopes."
}
I have two Google Cloud projects:
One that has the Kubernetes Cluster where the Jenkins pod is running.
Another project where the K8s Cluster contains my actual Spring Boot Application and the Cloud SQL database that I'm trying to access.
I also created a service account, only in my Spring Boot project, for Jenkins to use, with three roles: Cloud SQL Editor, Kubernetes Engine Cluster Admin and Project Owner (to verify that the service account is not at fault).
I enabled the Cloud SQL, Cloud SQL Admin and Kubernetes APIs in both projects, and I double-checked my Cloud SQL credentials; they are OK. In addition, I authenticated the Jenkins pipeline using the JSON file generated when I created the service account, following the recommendations discussed here:
Jenkinsfile (extract):
...
withCredentials([file(credentialsId: 'key-sa', variable: 'GC_KEY')]) {
    sh("gcloud auth activate-service-account --key-file=${GC_KEY}")
    sh("gcloud container clusters get-credentials <cluster_name> --zone northamerica-northeast1-a --project <project_id>")
    sh 'mvn test'
}
...
I don't believe the GCP Java SDK relies on gcloud CLI at all. Instead, it looks for an environment variable GOOGLE_APPLICATION_CREDENTIALS that points to your service account key file and GCLOUD_PROJECT (see https://cloud.google.com/docs/authentication/getting-started).
Try adding the following:
sh("export GOOGLE_APPLICATION_CREDENTIALS=${GC_KEY}")
sh("export GCLOUD_PROJECT=<project_id>")
There are a couple of different things you should verify to get this working. I'm assuming you are using the Cloud SQL JDBC SocketFactory for Cloud SQL.
You should create a testing service account and give it whatever permissions are needed to execute the tests. To connect to Cloud SQL, it needs at a minimum the "Cloud SQL Client" role for the same project as the Cloud SQL instance.
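Granting that role could look like this (a sketch; the project ID and the service account email are placeholders):

# Give the testing service account the Cloud SQL Client role on the Cloud SQL project
gcloud projects add-iam-policy-binding <project_id> \
    --member="serviceAccount:jenkins-test@<project_id>.iam.gserviceaccount.com" \
    --role="roles/cloudsql.client"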
The Cloud SQL SocketFactory uses the Application Default Credentials (ADC) strategy to determine which credentials to use. This means the first place it looks for credentials is the GOOGLE_APPLICATION_CREDENTIALS env var, which should be a path to the key file for the testing service account.
I've got this infrastructure description
variable "HEROKU_API_KEY" {}
provider "heroku" {
email = "sebastrident#gmail.com"
api_key = "${var.HEROKU_API_KEY}"
}
resource "heroku_app" "default" {
name = "judge-re"
region = "us"
}
Originally I forgot to specify a buildpack; the apply still created the application on Heroku. I then decided to add it to the resource entry:
buildpacks = [
  "heroku/java"
]
But when I try to apply the plan in Terraform, I get this error:
Error: Error applying plan:
1 error(s) occurred:
* heroku_app.default: 1 error(s) occurred:
* heroku_app.default: Post https://api.heroku.com/apps: Name is already taken
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Terraform plan looks like this
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
  + heroku_app.judge_re
      id:                <computed>
      all_config_vars.%: <computed>
      buildpacks.#:      "1"
      buildpacks.0:      "heroku/java"
      config_vars.#:     <computed>
      git_url:           <computed>
      heroku_hostname:   <computed>
      name:              "judge-re"
      region:            "us"
      stack:             <computed>
      web_url:           <computed>
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
As a workaround I tried to add a destroy step to my deploy.sh script:
terraform init
terraform plan
terraform destroy -force
terraform apply -auto-approve
But it does not destroy the resource, as I get the message Destroy complete! Resources: 0 destroyed.
What is the problem?
Link to build
It looks like you also changed the name of the resource. Your original example has the resource name heroku_app.default while your plan has heroku_app.judge_re.
To point your state to the remote resource, so Terraform knows you are editing and not trying to recreate the resource, use terraform import:
terraform import heroku_app.judge_re judge-re
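After the import, a quick check (using the resource address from the plan above) confirms that the state now tracks the existing app and that a new plan no longer tries to create it:

# The app should now appear in state, and plan should show no pending create for it
terraform state list | grep heroku_app
terraform plan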
In Terraform you normally don't need to destroy the whole stack when you just want to rebuild one or several resources in it.
terraform taint does the trick. The terraform taint command manually marks a Terraform-managed resource as tainted, forcing it to be destroyed and recreated on the next apply.
terraform taint heroku_app.default
Second, when you are troubleshooting why the resource isn't listed for destruction, make sure you are pointing at the right Terraform tfstate file.
When you run terraform plan, do you see any resources that were already created?