As part of my shell script, I am trying to create record sets in AWS Route53. However, when I use variables in the aws cli command within my shell script, the variables I exported in the script are not being expanded in the aws cli command.
AWS CLI command provided by AWS:
$ aws route53 change-resource-record-sets --hosted-zone-id 1234567890ABC \
--change-batch file:///path/to/record.json
For simplicity, I do not want to create a separate JSON file on my computer; I want to keep all my commands and variables within the shell script.
#!/bin/bash
export TARGET_ENVIRONMENT=uat
export BASE_ENVIRONMENT_DNS=abcd-External-9982627718-1916763929.us-west-1.elb.amazonaws.com
# Creates route 53 records based on env name
aws route53 change-resource-record-sets --hosted-zone-id 1234567890ABC \
--change-batch '{ "Comment": "Testing creating a record set",
"Changes": [ { "Action": "CREATE", "ResourceRecordSet": { "Name":
"$(TARGET_ENVIRONMENT).company.com", "Type": "CNAME", "TTL":
120, "ResourceRecords": [ { "Value": "$(BASE_ENVIRONMENT_DNS)" } ] } } ] }'
This last command creates a record set in AWS Route53 literally named:
$(TARGET_ENVIRONMENT).company.com
with CNAME as
$(BASE_ENVIRONMENT_DNS)
and NOT what I actually want, which is:
uat.company.com
with the CNAME:
abcd-External-9982627718-1916763929.us-west-1.elb.amazonaws.com
How can I pass the environment variables into my aws cli command within the script?
Any help will be highly appreciated.
Thanks!
Variables do not expand within single quotes.
If you close the single quotes just before the variable expansion and reopen them immediately after, it should produce the desired effect. Wrapping the expansion in double quotes is not needed for it to expand, but it does protect values containing spaces from word splitting; just don't put spaces around the variable inside those double quotes, or they will end up in the JSON.
#!/bin/bash
ENV=uat
DNS=abcd-External-9982627718-1916763929.us-west-1.elb.amazonaws.com
# Creates route 53 records based on env name
aws route53 change-resource-record-sets \
  --hosted-zone-id 1234567890ABC \
  --change-batch '
{
  "Comment": "Testing creating a record set"
  ,"Changes": [{
    "Action" : "CREATE"
    ,"ResourceRecordSet" : {
      "Name" : "'"$ENV"'.company.com"
      ,"Type" : "CNAME"
      ,"TTL" : 120
      ,"ResourceRecords" : [{
        "Value" : "'"$DNS"'"
      }]
    }
  }]
}
'
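If you would rather not manage the quote juggling by hand, you can let jq build the change batch and pass it through a shell variable. A minimal sketch, assuming jq is installed (it is not part of the AWS CLI):

#!/bin/bash
ENV=uat
DNS=abcd-External-9982627718-1916763929.us-west-1.elb.amazonaws.com

# jq handles all of the JSON quoting and escaping; the shell never
# touches the inside of the document.
batch=$(jq -n --arg name "$ENV.company.com" --arg value "$DNS" \
  '{Comment: "Testing creating a record set",
    Changes: [{Action: "CREATE",
      ResourceRecordSet: {Name: $name, Type: "CNAME", TTL: 120,
        ResourceRecords: [{Value: $value}]}}]}')

aws route53 change-resource-record-sets \
  --hosted-zone-id 1234567890ABC \
  --change-batch "$batch"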
You could also leverage process substitution to keep the JSON formatted:
#!/bin/bash
export TARGET_ENVIRONMENT=uat
export BASE_ENVIRONMENT_DNS=abcd-External-9982627718-1916763929.us-west-1.elb.amazonaws.com
# Creates route 53 records based on env name
aws route53 change-resource-record-sets --hosted-zone-id 1234567890ABC --change-batch file://<(cat << EOF
{
  "Comment": "Testing creating a record set",
  "Changes": [
    {
      "Action": "CREATE",
      "ResourceRecordSet": {
        "Name": "${TARGET_ENVIRONMENT}.company.com",
        "Type": "CNAME",
        "TTL": 120,
        "ResourceRecords": [
          {
            "Value": "${BASE_ENVIRONMENT_DNS}"
          }
        ]
      }
    }
  ]
}
EOF
)
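Note that process substitution (the <(...) part) is a bash/ksh/zsh feature: run the script with bash, as the shebang above indicates, because under plain sh the file://<(...) argument will not work.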
This worked for me!
#!/bin/bash
zoneid=xxxxyyyyyzzzzz
recordname=mycname
recordvalue=myendpoint
aws route53 change-resource-record-sets \
  --hosted-zone-id $zoneid \
  --change-batch '
{
  "Comment": "Creating a record set for cognito endpoint"
  ,"Changes": [{
    "Action" : "CREATE"
    ,"ResourceRecordSet" : {
      "Name" : "'$recordname'.mydomain.com"
      ,"Type" : "CNAME"
      ,"TTL" : 120
      ,"ResourceRecords" : [{
        "Value" : "'$recordvalue'"
      }]
    }
  }]
}
You need to close the single quotation mark. Here is the fixed snippet.
#!/bin/bash
zoneid=xxxxyyyyyzzzzz
recordname=mycname
recordvalue=myendpoint
aws route53 change-resource-record-sets \
  --hosted-zone-id $zoneid \
  --change-batch '
{
  "Comment": "Creating a record set for cognito endpoint"
  ,"Changes": [{
    "Action" : "CREATE"
    ,"ResourceRecordSet" : {
      "Name" : "'$recordname'.mydomain.com"
      ,"Type" : "CNAME"
      ,"TTL" : 120
      ,"ResourceRecords" : [{
        "Value" : "'$recordvalue'"
      }]
    }
  }]
}'
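As a follow-up, change-resource-record-sets returns a ChangeInfo Id that you can wait on until the record has propagated. A minimal sketch, where $change_batch stands in for the inline JSON above:

change_id=$(aws route53 change-resource-record-sets \
  --hosted-zone-id $zoneid \
  --change-batch "$change_batch" \
  --query 'ChangeInfo.Id' --output text)

# Blocks until the change status flips from PENDING to INSYNC.
aws route53 wait resource-record-sets-changed --id "$change_id"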
I'm creating a bash script to provision multiple Azure resources via the Azure CLI. So far so good; however, I'm having a problem tagging resources.
My goal is to store multiple tags in a variable and provide that variable to the --tags option of several az commands in the script. The problem is that a space in a value is interpreted as the start of a new key.
If we take for example the command az group update (which will update a resource group) the docs state the following about the --tags option:
--tags
Space-separated tags in 'key[=value]' format. Use "" to clear existing tags.
When a value (or key) contains spaces it must be enclosed in quotes.
So when we provide the key-value pairs directly to the command including a value with spaces, like in the following example, the result will be as expected:
az group update --tags owner="FirstName LastName" application=coolapp --name resource-group-name
The result will be that two tags have been added to the resource group:
{
  "id": "/subscriptions/1e42c44c-bc55-4b8a-b35e-de1dfbcfe481/resourceGroups/resource-group-name",
  "location": "westeurope",
  "managedBy": null,
  "name": "resource-group-name",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": {
    "application": "coolapp",
    "owner": "FirstName LastName"
  }
}
However, when we store the same value we used in the previous step in a variable, the problem occurs.
tag='owner="FirstName LastName" application=coolapp'
I use echo $tag to validate that the variable contains exactly the same value as we provided in the previous example to the --tags option:
owner="FirstName LastName" application=coolapp
But when we provide this tag variable to the tags option of the command as shown in the next line:
az group update --tags $tag --name resource-group-name
The result will be three tags instead of the expected two:
{
  "id": "/subscriptions/1e42c44c-bc55-4b8a-b35e-de1dfbcfe481/resourceGroups/resource-group-name",
  "location": "westeurope",
  "managedBy": null,
  "name": "resource-group-name",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": {
    "LastName\"": "",
    "application": "coolapp",
    "owner": "\"FirstName"
  }
}
I've already tried defining the variable in the following ways, but no luck so far:
tag="owner=FirstName LastName application=coolapp"
tag=owner="Firstname Lastname" application=cool-name
tag='`owner="Firstname Lastname" application=cool-name`'
I even tried defining the variable as an array and providing it to the command as shown on the next line, but also that didn't provide the correct result:
tag=(owner="Firstname Lastname" application=cool-name)
az group update --tags ${tag[*]} --name resource-group-name
I also tried putting quotes around the variable in the command, as was suggested by @socowi, but this leads to the following incorrect result of one tag instead of two:
az group update --tags "$tag" --name resource-group-name
{
  "id": "/subscriptions/1e42c44c-bc55-4b8a-b35e-de1dfbcfe481/resourceGroups/resource-group-name",
  "location": "westeurope",
  "managedBy": null,
  "name": "resource-group-name",
  "properties": {
    "provisioningState": "Succeeded"
  },
  "tags": {
    "owner": "Firstname Lastname application=cool-name"
  }
}
Does anybody know how to solve this?
Define your tags as
tags=("owner=Firstname Lastname" "application=cool-name")
then use
--tags "${tags[#]}"
I've found the following works. It requires that a resource group already exist.
I used the following template:
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "resourceName": {
      "type": "string",
      "metadata": {
        "description": "Specifies the name of the resource"
      }
    },
    "location": {
      "type": "string",
      "defaultValue": "[resourceGroup().location]",
      "metadata": {
        "description": "Location for the resources."
      }
    },
    "resourceTags": {
      "type": "object",
      "defaultValue": {
        "Cost Center": "Admin"
      }
    }
  },
  "resources": [
    {
      "apiVersion": "2019-06-01",
      "kind": "StorageV2",
      "location": "[parameters('location')]",
      "name": "[parameters('resourceName')]",
      "properties": {
        "supportsHttpsTrafficOnly": true
      },
      "sku": {
        "name": "Standard_LRS"
      },
      "type": "Microsoft.Storage/storageAccounts",
      "tags": "[parameters('resourceTags')]"
    }
  ]
}
In the Azure CLI using Bash, you can pass the tags in as a JSON object. In the following example, the template above (referenced via $templateFile) requires two parameters: resourceName and resourceTags, the latter being an ARM object:
az deployment group create --name addstorage --resource-group myResourceGroup \
--template-file $templateFile \
--parameters resourceName=abcdef45216 resourceTags='{"owner":"bruce","Cost Center":"2345-324"}'
If you want to pass it as an environment variable, use:
tags='{"owner":"bruce","Cost Center":"2345-324"}'
az deployment group create --name addstorage --resource-group myResourceGroup \
--template-file $templateFile \
--parameters resourceName=abcdef4556 resourceTags="$tags"
The $tags must be in double quotes. (You are passing in a JSON object string)
The JSON string also works when you are passing in the tags into Azure DevOps pipeline. See https://github.com/MicrosoftDocs/azure-devops-docs/issues/9051
First, build your string like so, double-quoting all keys and values in case either contains spaces (sorry, this is PowerShell, just as an example):
[string] $tags = [string]::Empty;
97..99 |% {
$tags += "&`"$([char]$_)`"=`"$($_)`"";
}
The result of this is the string &"a"="97"&"b"="98"&"c"="99".
Now pass it as a string array using the split function of the base string class, which yields a four-element array whose first element is blank; the CLI command ignores the empty first element. Here I set the tags for a storage account:
$tag='application=coolapp&owner="FirstName LastName"&"business Unit"="Human Resources"'
az resource tag -g rg -n someResource --resource-type Microsoft.Storage/storageaccounts --tags $tag.split("&")
I also employed this approach when I wanted to override the parameters provided in a parameter file for a resource group deployment.
az group deployment create --resource-group $rgName --template-file $templatefile --parameters $parametersFile --parameters $($overrideParams.split("&"));
I'm wondering if it's possible to use an argument to construct a field name in jq.
Example:
jq -rc \
--arg secret_name ${secret_name} \
--arg secret_value ${secret_value} \
'.data.$secret_name = "$secret_value"'
In the above example, I want to use the value of the argument secret_name to create a key under .data. Is this possible using jq?
Example Data:
secret_name=abc
secret_value=xyz
JSON on which jq is run:
{
  "apiVersion": "v1",
  "data": {},
  "kind": "Secret",
  "metadata": {
    "name": "kv-secrets",
    "namespace": "default"
  },
  "type": "Opaque"
}
Expected output:
{
  "apiVersion": "v1",
  "data": {
    "abc": "xyz"
  },
  "kind": "Secret",
  "metadata": {
    "name": "secrets"
  },
  "type": "Opaque"
}
Do mind that I intend to run the command repeatedly to fill .data with more key-value pairs.
With a variable key, you need to use the bracket form, .data[$secret_name]. You don't need to quote jq variables inside the filter; the variable already holds the string value.
jq -rc \
--arg secret_name "${secret_name}" \
--arg secret_value "${secret_value}" \
'.data[$secret_name] = $secret_value'
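For example, with the input document above saved as secret.json (the filename is just for illustration):

secret_name=abc
secret_value=xyz

jq -rc \
  --arg secret_name "${secret_name}" \
  --arg secret_value "${secret_value}" \
  '.data[$secret_name] = $secret_value' secret.json

This prints the document on a single line (because of -c) with "abc": "xyz" added under .data; to accumulate more pairs, feed the output of one run into the next.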
I was trying to deploy an nginx Docker container via Mesos Marathon. I would like to set some environment variables in the container, so I added a parameters section to the JSON file, but after I added it, the deployment failed. My JSON file is as follows:
{
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "nginx",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 80, "hostPort": 0, "servicePort": 80, "protocol": "tcp" }
      ],
      "parameters": [
        { "key": "myhostname", "value": "a.corp.org" }
      ]
    }
  },
  "id": "nginx7",
  "instances": 1,
  "cpus": 0.25,
  "mem": 256,
  "uris": []
}
My launch script was: curl -X POST -H "Content-Type: application/json" 10.3.11.11:8080/v2/apps -d@"$@"
The command I ran was: ./launch.sh nginx.json
You used the wrong parameter key, myhostname. If you want to set the hostname for your container, it should be:
"parameters": [
{ "key": "hostname", "value": "a.corp.org" }
]
If you want to pass an environment variable, it should be:
"parameters": [
{ "key": "env", "value": "myhostname=a.corp.org" }
]
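For background: Marathon passes each entry in parameters straight through to the Docker daemon as a --key=value option on docker run, so the two snippets above correspond roughly to:

docker run --hostname=a.corp.org nginx
docker run --env=myhostname=a.corp.org nginx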
This Bash script was working before; it broke after a clean install.
The AWS Command Line Interface is installed and AWS is configured.
Why is it not looping through each file?
## variables are set before this point
files=$(/usr/local/bin/aws s3api list-objects --bucket "$bucket" --prefix "$target_zipname" --query "Contents[].{Key: Key}")
##return sample##
# [ { "Key": "fmpbks_2016_08_18_17_08_35.zip" }, { "Key": "fmpbks_2016_08_18_17_14_39.zip" }, { "Key": "fmpbks_2016_08_19_10_54_24.zip" }, { "Key": "fmpbks_2016_08_19_10_55_57.zip" }, { "Key": "fmpbks_2016_08_19_10_56_29.zip" }, { "Key": "fmpbks_2016_08_19_11_00_56.zip" } ]
##
for zip_file in $files
do
  echo $zip_file # for testing
  delete_path="s3://$bucket/$zip_file"
  deleted=$(/usr/local/bin/aws s3 rm $delete_path)
  break
done
output:
[
{
"Key":
"fmpbks_2016_08_18_17_08_35.zip"
},
{
"Key":
"fmpbks_2016_08_18_17_14_39.zip"
},
{
"Key":
"fmpbks_2016_08_19_10_54_24.zip"
},
{
"Key":
"fmpbks_2016_08_19_10_55_57.zip"
},
{
"Key":
"fmpbks_2016_08_19_10_56_29.zip"
},
{
"Key":
"fmpbks_2016_08_19_11_00_56.zip"
}
]
The aws s3api list-objects command returns a JSON object by default, and the for loop doesn't understand how to interpret JSON.
Use the --output text option to get the keys back as plain whitespace-separated text instead:
files=$(aws s3api list-objects --bucket MY-BUCKET --query 'Contents[*].Key' --output text)
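With text output, the keys come back whitespace-separated, so the rest of the loop can stay essentially as you had it. A sketch using the question's variable names (the bucket and prefix values are placeholders; this relies on word splitting, so it assumes no whitespace in the key names):

#!/bin/bash
bucket=MY-BUCKET
target_zipname=fmpbks

files=$(/usr/local/bin/aws s3api list-objects --bucket "$bucket" \
  --prefix "$target_zipname" --query 'Contents[*].Key' --output text)

# Unquoted $files word-splits into one key per iteration.
for zip_file in $files
do
  echo "$zip_file"
  /usr/local/bin/aws s3 rm "s3://$bucket/$zip_file"
done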
By the way, if your goal is to delete all files within a certain path, you could use:
aws s3 rm s3://MY-BUCKET --exclude "*" --include "MY-PREFIX*" --recursive
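Appending --dryrun to the aws s3 rm command lets you preview which objects would be deleted before running it for real.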
I'm building a stack that needs access to a private S3 bucket to download the most current version of my application. I'm using IAM roles, a relatively new AWS feature that allows EC2 instances to be assigned specific roles, which are then coupled with IAM policies. Unfortunately, these roles come with temporary API credentials generated at instantiation. It's not crippling, but it's forced me to do things like this cloud-init script (simplified to just the relevant bit):
#!/bin/sh
# Grab our credentials from the meta-data and parse the response
CREDENTIALS=$(curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/s3access)
S3_ACCESS_KEY=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['AccessKeyId'];")
S3_SECRET_KEY=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['SecretAccessKey'];")
S3_TOKEN=$(echo $CREDENTIALS | ruby -e "require 'rubygems'; require 'json'; puts JSON[STDIN.read]['Token'];")
# Create an executable script to pull the file
cat << EOF > /tmp/pullS3.rb
require 'rubygems'
require 'aws-sdk'
AWS.config(
  :access_key_id => "$S3_ACCESS_KEY",
  :secret_access_key => "$S3_SECRET_KEY",
  :session_token => "$S3_TOKEN")
s3 = AWS::S3.new()
myfile = s3.buckets['mybucket'].objects["path/to/my/file"]
File.open("/path/to/save/myfile", "w") do |f|
  f.write(myfile.read)
end
EOF
# Downloading the file
ruby /tmp/pullS3.rb
First and foremost: This works, and works pretty well. All the same, I'd love to use CloudFormation's existing support for source access. Specifically, cfn-init supports the use of authentication resources to get at protected data, including S3 buckets. Is there any way to get at these keys from within cfn-init, or perhaps tie the IAM role to an authentication resource?
I suppose one alternative would be putting my source behind some other authenticated service, but that's not a viable option at this time.
Another promising lead is the AWS::IAM::AccessKey resource, but the docs don't suggest it can be used with roles. I'm going to try it anyway.
I'm not sure when support was added, but you can now use an IAM role to authenticate S3 downloads for the files and sources sections in AWS::CloudFormation::Init.
Just use roleName instead of accessKeyId & secretKey (see AWS::CloudFormation::Authentication for details), e.g.:
"Metadata": {
"AWS::CloudFormation::Init": {
"download": {
"files": {
"/tmp/test.txt": {
"source": "http://myBucket.s3.amazonaws.com/test.txt"
}
}
}
},
"AWS::CloudFormation::Authentication": {
"default" : {
"type": "s3",
"buckets": [ "myBucket" ],
"roleName": { "Ref": "myRole" }
}
}
}
Tested with aws-cfn-bootstrap-1.3-11
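One caveat: the role only helps if the instance actually carries it, which means the instance needs an instance profile referencing the role. A minimal sketch of the two resources involved (myRole matches the reference above; the AMI id is a placeholder):

"myInstanceProfile": {
  "Type": "AWS::IAM::InstanceProfile",
  "Properties": {
    "Path": "/",
    "Roles": [ { "Ref": "myRole" } ]
  }
},
"myInstance": {
  "Type": "AWS::EC2::Instance",
  "Properties": {
    "ImageId": "ami-12345678",
    "IamInstanceProfile": { "Ref": "myInstanceProfile" }
  }
}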
I managed to get this working. What I used was code from this exchange:
https://forums.aws.amazon.com/message.jspa?messageID=319465
The code doesn't use IAM roles - it uses an AWS::IAM::User together with an AWS::S3::BucketPolicy instead.
CloudFormation code snippet:
"Resources" : {
"CfnUser" : {
"Type" : "AWS::IAM::User",
"Properties" : {
"Path": "/",
"Policies": [{
"PolicyName": "root",
"PolicyDocument": { "Statement":[{
"Effect" : "Allow",
"Action" : [
"cloudformation:DescribeStackResource",
"s3:GetObject"
],
"Resource" :"*"
}]}
}]
}
},
"CfnKeys" : {
"Type" : "AWS::IAM::AccessKey",
"Properties" : {
"UserName" : {"Ref": "CfnUser"}
}
},
"BucketPolicy" : {
"Type" : "AWS::S3::BucketPolicy",
"Properties" : {
"PolicyDocument": {
"Version" : "2008-10-17",
"Id" : "CfAccessPolicy",
"Statement" : [{
"Sid" : "ReadAccess",
"Action" : ["s3:GetObject"],
"Effect" : "Allow",
"Resource" : { "Fn::Join" : ["", ["arn:aws:s3:::<MY_BUCKET>/*"]]},
"Principal" : { "AWS": {"Fn::GetAtt" : ["CfnUser", "Arn"]} }
}]
},
"Bucket" : "<MY_BUCKET>"
}
},
"WebServer": {
"Type": "AWS::EC2::Instance",
"DependsOn" : "BucketPolicy",
"Metadata" : {
"AWS::CloudFormation::Init" : {
"config" : {
"sources" : {
"/etc/<MY_PATH>" : "https://s3.amazonaws.com/<MY_BUCKET>/<MY_FILE>"
}
}
},
"AWS::CloudFormation::Authentication" : {
"S3AccessCreds" : {
"type" : "S3",
"accessKeyId" : { "Ref" : "CfnKeys" },
"secretKey" : {"Fn::GetAtt": ["CfnKeys", "SecretAccessKey"]},
"buckets" : [ "<MY_BUCKET>" ]
}
}
},
"Properties": {
"ImageId" : "<MY_INSTANCE_ID>",
"InstanceType" : { "Ref" : "WebServerInstanceType" },
"KeyName" : {"Ref": "KeyName"},
"SecurityGroups" : [ "<MY_SECURITY_GROUP>" ],
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash\n",
"# Helper function\n",
"function error_exit\n",
"{\n",
" cfn-signal -e 1 -r \"$1\" '", { "Ref" : "WaitHandle" }, "'\n",
" exit 1\n",
"}\n",
"# Install Webserver Packages etc \n",
"cfn-init -v --region ", { "Ref" : "AWS::Region" },
" -s ", { "Ref" : "AWS::StackName" }, " -r WebServer ",
" --access-key ", { "Ref" : "CfnKeys" },
" --secret-key ", {"Fn::GetAtt": ["CfnKeys", "SecretAccessKey"]}, " || error_exit 'Failed to run cfn-init'\n",
"# All is well so signal success\n",
"cfn-signal -e 0 -r \"Setup complete\" '", { "Ref" : "WaitHandle" }, "'\n"
]]}}
}
}
Obviously, replace MY_BUCKET, MY_FILE, MY_PATH, MY_INSTANCE_ID, and MY_SECURITY_GROUP with your own values.