Azure Synapse Private Endpoint Approve - bash

Via some Terraform scripts within a CI/CD process I am trying to create a Managed Private Endpoint for an Azure SQL Server linked service. This works using the following code:
resource "azurerm_synapse_managed_private_endpoint" "mi_metadata_transform_sql_server_private_endpoint" {
  name                 = "mi_synapse_metadata_transform_private_endpoint"
  subresource_name     = "sqlServer"
  synapse_workspace_id = module.mi_synapse_workspace.synapse_workspace_id
  target_resource_id   = azurerm_mssql_server.mi-metadata-transform-sql-server.id
}
But that leaves the endpoint in a "Pending Approval" state. So, basing the code below on some of our existing code that approves storage endpoints via Bash, I copied that code and adjusted it accordingly for SQL Server. And this is where my problem begins.
function enable_sql_private_endpoint {
  endpoints=$(az sql server show --name $1 -g ${{ parameters.resourceGroupName }} --subscription $(serviceConnection) --query "privateEndpointConnections[?properties.privateLinkServiceConnectionState.status=='Pending'].id" -o tsv)
  for endpoint in $endpoints
  do
    az sql server private-endpoint-connection approve --account-name $1 --name $endpoint --resource-group ${{ parameters.resourceGroupName }} --subscription $(serviceConnection)
  done
}
sqlServers="$(az sql server list -g ${{ parameters.resourceGroupName }} --query '[].name' --subscription $(serviceConnection) -o tsv)"
for sqlServerName in $sqlServers
do
echo "Processing $sqlServerName ========================================="
enable_sql_private_endpoint $sqlServerName
done
The code above is executed in a further step in a YAML file; in its simplest terms:
YAML Orchestrator File executed via CICD
Terraform Script called to create resource (code snippet 1)
Another YAML file executed to approve endpoints using inline Bash (code snippet 2)
The problem is that az sql server private-endpoint-connection approve doesn't exist. When I review this link I cannot see anything remotely like the approve option for SQL Server endpoints that Storage or MySQL have. Any help would be appreciated on how this can be achieved.

Currently, you can't approve a Managed Private Endpoint using Terraform.
Note: Azure PowerShell and Azure CLI are the preferred methods for managing Private Endpoint connections on Microsoft Partner Services or customer owned services.
For more details, refer to Manage Private Endpoint connections on a customer/partner owned Private Link service.

In the end, this is what I used in my YAML / Bash to get things working:
sqlServers="$(az sql server list -g ${{ parameters.resourceGroupName }} --query '[].name' --subscription $(serviceConnection) -o tsv)"
for sqlServerName in $sqlServers
do
echo "Processing $sqlServerName ========================================="
enable_sql_private_endpoint $sqlServerName
done
and
function enable_sql_private_endpoint {
  endpoints=$(az sql server show --name $1 -g ${{ parameters.resourceGroupName }} --subscription $(serviceConnection) --query "privateEndpointConnections[?properties.privateLinkServiceConnectionState.status=='Pending'].id" -o tsv)
  for endpoint in $endpoints
  do
    az network private-endpoint-connection approve -g ${{ parameters.resourceGroupName }} --subscription $(serviceConnection) --id $endpoint --type Microsoft.Sql/servers --description "Approved" --resource-name $1
  done
}
The following is the key syntax to use if anyone ever encounters a similar scenario in their CI/CD with Synapse and Managed Private Endpoints:
az network private-endpoint-connection approve -g ${{ parameters.resourceGroupName }} --subscription $(serviceConnection) --id $endpoint --type Microsoft.Sql/servers --description "Approved" --resource-name $1
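As a sanity check, the Pending filter used in the `--query` above can be reproduced offline with jq against a canned payload. The JSON shape below is an assumption modeled on `az sql server show` output, and the connection IDs are made up:

```shell
# Hypothetical fragment of `az sql server show` output (IDs are made up).
sample='{
  "privateEndpointConnections": [
    {"id": "/subscriptions/xxx/privateEndpointConnections/conn1",
     "properties": {"privateLinkServiceConnectionState": {"status": "Pending"}}},
    {"id": "/subscriptions/xxx/privateEndpointConnections/conn2",
     "properties": {"privateLinkServiceConnectionState": {"status": "Approved"}}}
  ]
}'

# Same predicate as the server-side JMESPath query
#   privateEndpointConnections[?properties.privateLinkServiceConnectionState.status=='Pending'].id
# expressed client-side in jq: keep only the IDs of Pending connections.
pending=$(printf '%s' "$sample" | jq -r '
  .privateEndpointConnections[]
  | select(.properties.privateLinkServiceConnectionState.status == "Pending")
  | .id')
echo "$pending"
```

Each ID that survives the filter is what gets passed to `az network private-endpoint-connection approve --id`.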

Related

cannot list resources from all resource group in a subscription using az cli

Is there a way to make an az CLI call to list the deployment groups from all resource groups in a subscription?
I'm trying to create a bash script to get the deployment groups in a given subscription, but no luck. Here is what I've tried:
rgs=$(az group list --subscription $subscription_id -o json | jq -r '.[] | .name'), where the subscription is passed as an input parameter.
az deployment group list --resource-group $rgs --subscription $subscription_id --query '[].{Name:name, ResourceGroup:resourceGroup, Timestamp:properties.timestamp}' -o table
It gives me an invalid argument error.
It looks like the resource group variable I use is not recognized; I'm not sure if there is another way.
So I tried the following:
rg_list=($rgs)
for rg in "${rg_list[@]}"
do
  az deployment group list --resource-group $rg --subscription $subscription_id --query '[].{Name:name, ResourceGroup:resourceGroup, Timestamp:properties.timestamp}' -o table
done
After adding the loop it works, but it goes on to check every resource group in the subscription one by one; the actual goal is to get the deployment groups from all the resource groups in a single attempt, without a loop.
I managed to get the expected result by using the awk command. Thanks
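Since the CLI takes a single resource group per call, one way to avoid writing the loop out is a single xargs pipeline. A sketch, with echo standing in for the real az invocation and made-up resource group names:

```shell
# Names would normally come from:
#   rgs=$(az group list --subscription "$subscription_id" --query '[].name' -o tsv)
rgs=$(printf 'rg-app\nrg-data\nrg-network')

# One pipeline, one invocation per resource group; swap echo for the real
# command, e.g.: xargs -I{} az deployment group list --resource-group {} -o table
printf '%s\n' "$rgs" | xargs -I{} echo "deployments in {}"
```

This is still one call per group under the hood, but it keeps the script to a single line.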

Getting value of a variable value in azure pipeline

enviornment: 'dev'
acr-login: $(enviornment)-acr-login
acr-secret: $(enviornment)-acr-secret
dev-acr-login and dev-acr-secret are secrets stored in keyvault for acr login and acr secret.
In Pipeline, getting secrets with this task
- task: AzureKeyVault@1
  inputs:
    azureSubscription: $(connection)
    KeyVaultName: $(keyVaultName)
    SecretsFilter: '*'
This task will create task variables named 'dev-acr-login' and 'dev-acr-secret'.
Now if I want to log in to Docker, I am not able to do that.
Following code works and I am able to login into acr.
- bash: |
    echo $(dev-acr-secret) | docker login \
      $(acrName) \
      -u $(dev-acr-login) \
      --password-stdin
  displayName: 'docker login'
The following does not work. Is there a way I can use the variable names $(acr-login) and $(acr-secret) rather than the actual keys from Key Vault?
- bash: |
    echo $(echo $(acr-secret)) | docker login \
      $(acrRegistryServerFullName) \
      -u $(echo $(acr-login)) \
      --password-stdin
  displayName: 'docker login'
You could pass them as environment variables:
- bash: |
    echo $(echo $ACR_SECRET) | ...
  displayName: docker login
  env:
    ACR_SECRET: $(acr-secret)
But what is the purpose, as opposed to just echoing the password values as you said works in the other example? As long as the task is creating secure variables, they will be protected in logs. You'd need to do that anyway, since they would otherwise show up in diagnostic logs if someone enabled diagnostics, which anyone can do.
An example to do that:
- bash: |
    echo "##vso[task.setvariable variable=acr-login;issecret=true;]$ACR_SECRET"
  env:
    ACR_SECRET: $($(acr-secret)) # Should expand recursively
See Define variables : Set secret variables for more information and examples.

GitLab job Job succeeded but not finished (create / delete Azure AKS)

I am using a runner to create an AKS cluster on the fly, and also to delete a previous one.
Unfortunately these jobs take a while, and I have more than once experienced that the job stops suddenly (in the range of 5+ minutes after an az aks delete or az aks create call).
This happens in GitLab, and after several retries it usually works once.
From some googling I found that before and after scripts might have an impact, but even with them removed there was no difference.
Are there any runner rules or anything in particular that might need to be changed?
It would be more understandable if it stopped with a timeout error, but it reports the job as succeeded even though it did not finish running through all the lines. Below is the staging segment causing the issue:
create-kubernetes-az:
  stage: create-kubernetes-az
  image: microsoft/azure-cli:latest
  # when: manual
  script:
    # REQUIRE CREATED SERVICE PRINCIPAL
    - az login --service-principal -u ${AZ_PRINC_USER} -p ${AZ_PRINC_PASSWORD} --tenant ${AZ_PRINC_TENANT}
    # Create Resource Group
    - az group create --name ${AZ_RESOURCE_GROUP} --location ${AZ_RESOURCE_LOCATION}
    # ERROR HAPPENS HERE # Delete Kubernetes Cluster // SOMETIMES STOPS AFTER THIS
    - az aks delete --resource-group ${AZ_RESOURCE_GROUP} --name ${AZ_AKS_TEST_CLUSTER} --yes
    # // OR HERE # Create Kubernetes Cluster // SOMETIMES STOPS AFTER THIS
    - az aks create --name ${AZ_AKS_TEST_CLUSTER} --resource-group ${AZ_RESOURCE_GROUP} --node-count ${AZ_AKS_TEST_NODECOUNT} --service-principal ${AZ_PRINC_USER} --client-secret ${AZ_PRINC_PASSWORD} --generate-ssh-keys
    # Get kubectl
    - az aks install-cli
    # Get Login Credentials
    - az aks get-credentials --name ${AZ_AKS_TEST_CLUSTER} --resource-group ${AZ_RESOURCE_GROUP}
    # Install Helm and Tiller on Azure Cloud Shell
    - curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
    - chmod 700 get_helm.sh
    - ./get_helm.sh
    - helm init
    - kubectl create serviceaccount --namespace kube-system tiller
    - kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
    - kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
    # Create a namespace for your ingress resources
    - kubectl create namespace ingress-basic
    # Wait 1 minute
    - sleep 60
    # Use Helm to deploy an NGINX ingress controller
    - helm install stable/nginx-ingress --namespace ingress-basic --set controller.replicaCount=2 --set controller.nodeSelector."beta\.kubernetes\.io/os"=linux --set defaultBackend.nodeSelector."beta\.kubernetes\.io/os"=linux
    # Test by getting the public IP
    - kubectl get service
    - kubectl get service -l app=nginx-ingress --namespace ingress-basic
    #- while [ "$(kubectl get service -l app=nginx-ingress --namespace ingress-basic | grep pending)" == "pending" ]; do echo "Updating"; sleep 1 ; done && echo "Finished"
    - while [ "$(kubectl get service -l app=nginx-ingress --namespace ingress-basic -o jsonpath='{.items[*].status.loadBalancer.ingress[*].ip}')" == "" ]; do echo "Updating"; sleep 10 ; done && echo "Finished"
    # Add Ingress Ext IP / Alternative
    - KUBip=$(kubectl get service -l app=nginx-ingress --namespace ingress-basic -o jsonpath='{.items[*].status.loadBalancer.ingress[*].ip}')
    - echo $KUBip
    # Add DNS Name - TODO - GITLAB ENV VARIABLES DO NOT WORK
    - DNSNAME="bl-test"
    # Get the resource-id of the public ip
    - PUBLICIPID=$(az network public-ip list --query "[?ipAddress!=null]|[?contains(ipAddress, '$KUBip')].[id]" --output tsv)
    - echo $PUBLICIPID
    - az network public-ip update --ids $PUBLICIPID --dns-name $DNSNAME
    # Install CertManager Console
    # Install the CustomResourceDefinition resources separately
    - kubectl apply -f https://raw.githubusercontent.com/jetstack/cert-manager/release-0.8/deploy/manifests/00-crds.yaml
    # Create the namespace for cert-manager
    - kubectl create namespace cert-manager
    # Label the cert-manager namespace to disable resource validation
    - kubectl label namespace cert-manager certmanager.k8s.io/disable-validation=true
    # Add the Jetstack Helm repository
    - helm repo add jetstack https://charts.jetstack.io
    # Update your local Helm chart repository cache
    - helm repo update
    # Install the cert-manager Helm chart
    - helm install --name cert-manager --namespace cert-manager --version v0.8.0 jetstack/cert-manager
    # Run Command issuer.yaml
    - sed 's/_AZ_AKS_ISSUER_NAME_/'"${AZ_AKS_ISSUER_NAME}"'/g; s/_BL_DEV_E_MAIL_/'"${BL_DEV_E_MAIL}"'/g' infrastructure/kubernetes/cluster-issuer.yaml > cluster-issuer.yaml;
    - kubectl apply -f cluster-issuer.yaml
    # Run Command ingress.yaml
    - sed 's/_BL_AZ_HOST_/'"beautylivery-test.${AZ_RESOURCE_LOCATION}.${AZ_AKS_HOST}"'/g; s/_AZ_AKS_ISSUER_NAME_/'"${AZ_AKS_ISSUER_NAME}"'/g' infrastructure/kubernetes/ingress.yaml > ingress.yaml;
    - kubectl apply -f ingress.yaml
And the result
Running with gitlab-runner 12.3.0 (a8a019e0)
on runner-gitlab-runner-676b494b6b-b5q6h gzi97H3Q
Using Kubernetes namespace: gitlab-managed-apps
Using Kubernetes executor with image microsoft/azure-cli:latest ...
Waiting for pod gitlab-managed-apps/runner-gzi97h3q-project-14628452-concurrent-0l8wsx to be running, status is Pending
Waiting for pod gitlab-managed-apps/runner-gzi97h3q-project-14628452-concurrent-0l8wsx to be running, status is Pending
Running on runner-gzi97h3q-project-14628452-concurrent-0l8wsx via runner-gitlab-runner-676b494b6b-b5q6h...
Fetching changes with git depth set to 50...
Initialized empty Git repository in /builds/****/*******/.git/
Created fresh repository.
From https://gitlab.com/****/********
* [new branch] Setup-Kubernetes -> origin/Setup-Kubernetes
Checking out d2ca489b as Setup-Kubernetes...
Skipping Git submodules setup
$ function create_secret() { # collapsed multi-line command
$ echo "current time $(TZ=Europe/Berlin date +"%F %T")"
current time 2019-10-06 09:00:50
$ az login --service-principal -u ${AZ_PRINC_USER} -p ${AZ_PRINC_PASSWORD} --tenant ${AZ_PRINC_TENANT}
[
{
"cloudName": "AzureCloud",
"id": "******",
"isDefault": true,
"name": "Nutzungsbasierte Bezahlung",
"state": "Enabled",
"tenantId": "*******",
"user": {
"name": "http://*****",
"type": "servicePrincipal"
}
}
]
$ az group create --name ${AZ_RESOURCE_GROUP} --location ${AZ_RESOURCE_LOCATION}
{
"id": "/subscriptions/*********/resourceGroups/*****",
"location": "francecentral",
"managedBy": null,
"name": "******",
"properties": {
"provisioningState": "Succeeded"
},
"tags": null,
"type": "Microsoft.Resources/resourceGroups"
}
$ az aks delete --resource-group ${AZ_RESOURCE_GROUP} --name ${AZ_AKS_TEST_CLUSTER} --yes
Running after script...
$ echo "current time $(TZ=Europe/Berlin date +"%F %T")"
current time 2019-10-06 09:05:55
Job succeeded
Is there a way to have this run completely, and successfully in the best case?
UPDATE: The idea is that I am trying to automate the process of setting up a complete Kubernetes cluster with SSL and DNS management, having everything set up fast and ready for different use cases and different environments in the future. I also want to learn how to do things better :)
NEW UPDATE:
Added a solution.
I added a small workaround, as I expect it will need to be run every once in a while.
The az aks wait command did the trick for me as of now, and the preceding command needs --no-wait in order to continue.
# Delete Kubernetes Cluster
- az aks delete --resource-group ${AZ_RESOURCE_GROUP} --name ${AZ_AKS_TEST_CLUSTER} --no-wait --yes
- az aks wait --deleted -g ${AZ_RESOURCE_GROUP} -n ${AZ_AKS_TEST_CLUSTER} --interval 60 --timeout 1800
# Create Kubernetes Cluster
- az aks create --name ${AZ_AKS_TEST_CLUSTER} --resource-group ${AZ_RESOURCE_GROUP} --node-count ${AZ_AKS_TEST_NODECOUNT} --service-principal ${AZ_PRINC_USER} --client-secret ${AZ_PRINC_PASSWORD} --generate-ssh-keys --no-wait
- az aks wait --created -g ${AZ_RESOURCE_GROUP} -n ${AZ_AKS_TEST_CLUSTER} --interval 60 --timeout 1800
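The --no-wait plus az aks wait split generalizes to a fire-and-wait pattern for commands that lack a built-in wait. A minimal sketch of such a poll helper; the demo condition is a marker-file check standing in for a real readiness probe (e.g. polling az aks show until the cluster exists):

```shell
# Poll a condition command until it succeeds or a timeout expires.
# Usage: wait_until <timeout_seconds> <interval_seconds> <command...>
wait_until() {
  timeout=$1; interval=$2; shift 2
  elapsed=0
  until "$@"; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out after ${timeout}s" >&2
      return 1
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
}

# Stand-in demo: a background job "provisions" a marker file after 1 second,
# and wait_until polls for it, much like polling a cloud resource's state.
marker=$(mktemp -u)
( sleep 1; touch "$marker" ) &
wait_until 10 1 test -f "$marker" && echo "condition met"
```

The helper returns non-zero on timeout, so a CI script can fail the job explicitly instead of silently moving on.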

How can I get the ID of an AWS security group if I know the name?

I'm using the AWS CLI and I want to get the ID of a security group whose name I know (kingkajou_sg). How can I do it?
When I ask it to list all the security groups, it does so happily:
$ aws ec2 describe-security-groups | wc -l
430
When I grep through this information, I see that the SG in question is listed:
$ aws ec2 describe-security-groups | grep -i kingkajou_sg
"GroupName": "kingkajou_sg",
However, when I try to get the information about only that security group, it won't let me. Why?
$ aws ec2 describe-security-groups --group-names kingkajou_sg
An error occurred (InvalidGroup.NotFound) when calling the
DescribeSecurityGroups operation: The security group 'kingkajou_sg' does not exist in default VPC 'vpc-XXXXXXXX'
Can someone please provide the one-line command I can use to extract the security group's ID given its name? You can assume the command will be run from within an EC2 instance in the same VPC as the security group.
From the API Documentation:
--group-names (list)
[EC2-Classic and default VPC only] One or more security group names. You can specify either the security group name or the security group ID. For security groups in a nondefault VPC, use the group-name filter to describe security groups by name.
If you are using a non-default VPC, use the Filter
aws ec2 describe-security-groups --filter Name=vpc-id,Values=<my-vpc-id> Name=group-name,Values=kingkajou_sg --query 'SecurityGroups[*].[GroupId]' --output text
If it's in a VPC and you know the name, region, and VPC ID, you can try it like below:
aws ec2 describe-security-groups --region eu-west-1 --filter Name=vpc-id,Values=vpc-xxxxx Name=group-name,Values=<your sg name> --query 'SecurityGroups[*].[GroupId]' --output text
You just need to add the --query 'SecurityGroups[*].[GroupId]' option to the aws cli command:
aws ec2 describe-security-groups --group-names kingkajou_sg --query 'SecurityGroups[*].[GroupId]' --output text
To get the IDs of all security groups with a name matching exactly a specified string (default in this example) without specifying a VPC ID, use the following:
aws ec2 describe-security-groups --filter Name=group-name,Values=default --output json | jq -r .SecurityGroups[].GroupId
Note: this works for security groups even if they are not in the default VPC.
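The extraction step in the answers above can be sanity-checked offline against a canned describe-security-groups payload (the values below are made up):

```shell
# Hypothetical trimmed response from `aws ec2 describe-security-groups`.
sample='{"SecurityGroups": [{"GroupName": "kingkajou_sg", "GroupId": "sg-0123456789abcdef0"}]}'

# Same extraction as --query 'SecurityGroups[*].[GroupId]', done client-side with jq.
gid=$(printf '%s' "$sample" | jq -r '.SecurityGroups[].GroupId')
echo "$gid"
```

Whether you filter server-side with --query or client-side with jq, the result is the bare group ID, ready for use in other commands.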
A small shell script to list security groups with the search string as a variable; it can also tag the security groups:
https://ideasofpraveen.blogspot.com/2022/09/aws-cli-get-security-group-id-with-name.html
If you want a boto3 script to integrate with Lambda for automation:
https://ideasofpraveen.blogspot.com/2022/09/aws-cli-get-security-group-id-with-name_15.html

How do I prompt for an MFA key to generate and use credentials for AWS CLI access?

I have several Bash scripts that invoke AWS CLI commands for which permissions have changed to require MFA, and I want to be able to prompt for a code generated by my MFA device in these scripts so that they can run with the necessary authentication.
But there seems to be no simple built in way to do this. The only documentation I can find involves a complicated process of using aws sts get-session-token and then saving each value in a configuration, which it is then unclear how to use.
To be clear, what I'd like is that when I run one of my scripts containing AWS CLI commands that require MFA, I'm simply prompted for the code, so that providing it allows the AWS CLI operations to complete. Something like:
#!/usr/bin/env bash
# (1) prompt for generated MFA code
# ???
# (2) use entered code to generate necessary credentials
aws sts get-session-token ... --token-code $ENTERED_VALUE
# (3) perform my AWS CLI commands requiring MFA
# ....
It's not clear to me how to prompt for this when needed (which is probably down to not being proficient with bash) or how to use the output of get-session-token once I have it.
Is there a way to do what I'm looking for?
I've tried to trigger a prompt by specifying a --profile with an mfa_serial entry, but that doesn't work either.
OK, after spending more time on this script with a colleague, we have come up with a much simpler script. It does all the credentials-file work for you and is much easier to read. It also allows the new tokens for all your environments to live in the same credentials file. The initial call to get your MFA token requires your default account keys in the credentials file; it then generates the MFA credentials and puts them back in the credentials file.
#!/usr/bin/env bash

function usage {
  echo "Example: ${0} dev 123456"
  exit 2
}

if [ $# -lt 2 ]
then
  usage
fi

MFA_SERIAL_NUMBER=$(aws iam list-mfa-devices --profile bh${1} --query 'MFADevices[].SerialNumber' --output text)

function set-keys {
  aws configure set aws_access_key_id ${2} --profile=${1}
  aws configure set aws_secret_access_key ${3} --profile=${1}
  aws configure set aws_session_token ${4} --profile=${1}
}

case ${1} in
  dev|qa|prod) set-keys ${1} $(aws sts get-session-token --profile bh${1} --serial-number ${MFA_SERIAL_NUMBER} --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' --output text --token-code ${2});;
  *) usage ;;
esac
Inspired by @strongjz's and @Nick's answers, I wrote a small Python command to which you can pipe the output of the aws sts command.
To install:
pip install sts2credentials
To use:
aws sts get-session-token \
--serial-number arn:aws:iam::123456789012:mfa/your-iam-user \
--token-code 123456 \
--profile=your-profile-name \
| sts2credentials
This will automatically add the access key ID, the secret access key, and the session token under a new "sts" profile in your ~/.aws/credentials file.
For bash, you could read in the value and then set those values from the sts output:
echo "Type the MFA code that you want to use (6 digits), followed by [ENTER]:"
read ENTERED_VALUE
aws sts get-session-token ... --token-code $ENTERED_VALUE
Then you'll have to parse the output of the sts call, which contains the access key, secret, and session token:
{
Credentials: {
AccessKeyId: "ASIAJPC6D7SKHGHY47IA",
Expiration: 2016-06-05 22:12:07 +0000 UTC,
SecretAccessKey: "qID1YUDHaMPet5xw/vpw1Wk8SKPilFihdiMSdSIj",
SessionToken: "FQoDYXdzEB4aDLwmzouEQ3eckfqJxyLOARbBGasdCaAXkZ7ABOcOCNx2/7sS8N7A6Dpcax/t2G8KNTcUkRLdxI0gTvPoKQeZrH8wUrL4UxFFP6kCWEasdVIBAoUfuhdeUa1a7H216Mrfbbv3rMGsVKUoJT2Ar3r0pYgsYxizOWzH5VaA4rmd5gaQvfSFmasdots3WYrZZRjN5nofXJBOdcRd6J94k8m5hY6ClfGzUJEqKcMZTrYkCyUu3xza2S73CuykGM2sePVNH9mGZCWpTBcjO8MrawXjXj19UHvdJ6dzdl1FRuKdKKeS18kF"
}
}
then set them:
aws configure set aws_access_key_id default_access_key --profile NAME_PROFILE
aws configure set aws_secret_access_key default_secret_key --profile NAME_PROFILE
aws configure set aws_session_token default_session_token --profile NAME_PROFILE
aws configure set region us-west-2 --profile NAME_PROFILE
aws some_command --profile NAME_PROFILE
http://www.tldp.org/LDP/Bash-Beginners-Guide/html/sect_08_02.html
AWS STS API Reference
http://docs.aws.amazon.com/STS/latest/APIReference/API_GetSessionToken.html
AWS CLI STS Command
http://docs.aws.amazon.com/cli/latest/reference/sts/get-session-token.html
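The parse-then-configure steps above can be collapsed into one small helper. A hedged sketch: it assumes the documented GetSessionToken response shape, is fed a canned sample with made-up values here (in real use you would feed it the output of aws sts get-session-token), and exports the standard AWS_* environment variables, which the CLI picks up without any aws configure calls:

```shell
# Parse `aws sts get-session-token` JSON from stdin and export AWS_* vars.
# Assumes the documented response shape: {"Credentials": {"AccessKeyId": ...}}.
parse_sts_creds() {
  json=$(cat)
  AWS_ACCESS_KEY_ID=$(printf '%s' "$json" | jq -r '.Credentials.AccessKeyId')
  AWS_SECRET_ACCESS_KEY=$(printf '%s' "$json" | jq -r '.Credentials.SecretAccessKey')
  AWS_SESSION_TOKEN=$(printf '%s' "$json" | jq -r '.Credentials.SessionToken')
  export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
}

# Canned sample (made-up values) standing in for the real call:
#   aws sts get-session-token --serial-number "$MFA_ARN" --token-code "$CODE"
sample='{"Credentials":{"AccessKeyId":"ASIAEXAMPLE","SecretAccessKey":"sEcReT","SessionToken":"ToKeN"}}'

# Fed via a here-doc (not a pipe) so the exports survive: piping into a
# shell function would run it in a subshell and discard the variables.
parse_sts_creds <<EOF
$sample
EOF
echo "$AWS_ACCESS_KEY_ID"
```

Any AWS CLI command run afterwards in the same shell uses the temporary credentials, with no credentials-file edits needed.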
I wrote something very similar to what you are trying to do in Go, here, but it is for sts assume-role, not get-session-token.
I wrote a simple script to set the AWS credentials file for a profile called mfa. All the bash scripts you write then just need "--profile mfa" added and they will work. This also allows for multiple AWS accounts, as many of us have these days. I'm sure this can be improved, but it was quick and dirty and does what you want.
You will have to amend the details in the script to fit your account; I have marked them clearly with chevrons < >. NB: obviously, once you have populated the script with all your details it is not to be shared, unless you want unintended consequences. This uses recursion within the credentials file, as the standard access keys are called each time to create the MFA security tokens.
#!/bin/bash
# Change for your username - would be /home/username on Linux/BSD
dir='/Users/<your-user-name>'
region=us-east-1
function usage {
echo "Must enter mfa token and then either dev/qa/prod"
echo "i.e. mfa-set-aws-profile.sh 123456 qa"
exit 2
}
if [[ $1 == "" ]]
then
echo "Must give me a token - how do you expect this to work - DOH :-)"
usage
exit 2
fi
# Write the output from sts command to a json file for parsing
# Just add accounts below as required
case $2 in
dev) aws sts get-session-token --profile dev --serial-number arn:aws:iam::<123456789>:mfa/<john.doe> --token-code $1 > $dir/mfa-json;;
qa) aws sts get-session-token --profile qa --serial-number arn:aws:iam::<123456789>:mfa/<john.doe> --token-code $1 > $dir/mfa-json;;
-h) usage ;;
*) usage ;;
esac
# Remove quotes and comma's to make the file easier to parse -
# N.B. gsed is for OSX - on Linux/BSD etc sed should be just fine.
/usr/local/bin/gsed -i 's/\"//g;s/\,//g' $dir/mfa-json
# Parse the mfa info into vars for use in aws credentials file
seckey=`cat $dir/mfa-json | grep SecretAccessKey | gsed -E 's/[[:space:]]+SecretAccessKey\: //g'`
acckey=`cat $dir/mfa-json | grep AccessKeyId | gsed 's/[[:space:]]+AccessKeyId\: //g'`
sesstok=`cat $dir/mfa-json | grep SessionToken | gsed 's/[[:space:]]+SessionToken\: //g'`
# output all the gathered info into your aws credentials file.
cat << EOF > $dir/.aws/credentials
[default]
aws_access_key_id = <your normal keys here if required>
aws_secret_access_key = <your normal keys here if required>
[dev]
aws_access_key_id = <your normal keys here >
aws_secret_access_key = <your normal keys here >
[qa]
aws_access_key_id = <your normal keys here >
aws_secret_access_key = <your normal keys here >
[mfa]
output = json
region = $region
aws_access_key_id = $acckey
aws_secret_access_key = $seckey
aws_session_token = $sesstok
EOF
