I am trying to create a storage account from Terraform and then use one of its access keys to create a blob container.
My Terraform configuration is driven from a bash file, so some of the most important steps are the following:
customer_prefix=4pd
customer_environment=dev
RESOURCE_GROUP_NAME=$customer_prefix
az group create --name $RESOURCE_GROUP_NAME --location westeurope
# Creating Storage account
STORAGE_ACCOUNT_NAME=4pdterraformstates
az storage account create --resource-group $RESOURCE_GROUP_NAME --name $STORAGE_ACCOUNT_NAME --sku Standard_LRS --encryption-services blob
# Get the storage account key so we can access the account when we store the Terraform production state file
ACCOUNT_KEY=$(az storage account keys list --resource-group $RESOURCE_GROUP_NAME --account-name $STORAGE_ACCOUNT_NAME --query [0].value -o tsv)
# Creating a blob container
CONTAINER_NAME=4pd-tfstate
az storage container create --name $CONTAINER_NAME --account-name $STORAGE_ACCOUNT_NAME --account-key $ACCOUNT_KEY
source ../terraform_apply_template.sh
I am running those az CLI commands from a .bash file, and I also export other TF_VAR_azurerm ... variables there.
Finally, when I execute the first bash file, it calls terraform_apply_template.sh, which creates the plan and applies it. It is the following:
#!/bin/bash
# Terminate script execution after the first failed command (non-zero exit code) and treat unset variables as errors.
set -ue
terraform init -backend-config=$statefile -backend-config=$storage_access_key -backend-config=$storage_account_name
#TF:
function tf_plan {
  terraform plan -out=$outfile
}

case "${1-}" in
  apply)
    tf_plan && terraform apply $outfile
    ;;
  apply-saved)
    if [ ! -f $outfile ]; then tf_plan; fi
    terraform apply $outfile
    ;;
  *)
    tf_plan
    echo ""
    echo "Not applying changes. Call one of the following to apply changes:"
    echo " - '$0 apply': prepares and applies new plan"
    echo " - '$0 apply-saved': applies saved plan ($outfile)"
    ;;
esac
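(For reference, the azurerm backend reads each -backend-config entry as a key=value pair, so the three variables passed to terraform init above carry values along these lines; the values below are only illustrative, not my real ones:)
# Illustrative shapes of the variables passed to -backend-config above
storage_account_name="storage_account_name=4pdterraformstates"
statefile="key=production.terraform.tfstate"
storage_access_key="access_key=$ACCOUNT_KEY"
# the backend also needs the container, e.g. -backend-config="container_name=4pd-tfstate"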
But my output is the following: the azurerm backend is initialized, the storage account named 4pdterraformstates is created, and so is the 4pd-tfstate blob container, yet in practice this does not seem to take effect, because I get the following output:
Initializing the backend...
Successfully configured the backend "azurerm"! Terraform will automatically
use this backend unless the backend configuration changes.
Error: Failed to get existing workspaces: storage: service returned error: StatusCode=404, ErrorCode=ContainerNotFound, ErrorMessage=The specified container does not exist.
RequestId:2db5df4e-f01e-014c-369d-272246000000
Time:2019-06-20T19:21:01.6931098Z, RequestInitiated=Thu, 20 Jun 2019 19:21:01 GMT, RequestId=2db5df4e-f01e-014c-369d-272246000000, API Version=2016-05-31, QueryParameterName=, QueryParameterValue=
Looking for similar behavior, I found this issue in the azurerm provider Terraform repository, and according to this issue created directly in the Terraform repository it looks like a network/operational error ...
But the strange thing is that it was supposedly already fixed ..
I am using Terraform v0.12.2:
⟩ terraform version
Terraform v0.12.2
According to Gaurav Mantri's answer, it is necessary to wait until the storage account is provisioned before continuing with the other tasks related to the storage account itself.
Creation of a storage account is an asynchronous process. When you execute az storage account create to create a storage account, the request is sent to Azure and you get an accepted response back (if everything went well).
The whole process of creating (provisioning) a storage account takes some time (up to a minute in my experience), and until the storage account is provisioned, no operations are allowed on it.
How can I include a wait after the storage account creation?
It seems that just using the bash sleep command, or pausing the shell with the read command for some time, is not enough ..
Creation of a storage account is an asynchronous process. When you execute az storage account create to create a storage account, the request is sent to Azure and you get an accepted response back (if everything went well).
The whole process of creating (provisioning) a storage account takes some time (up to a minute in my experience), and until the storage account is provisioned, no operations are allowed on it.
This is the reason you're getting the error: you're trying to perform an operation immediately after sending the creation request.
Not sure exactly how you would wire it into your script, but what you need to do is wait for the storage account to be provisioned. You can get the properties of the storage account periodically (say once every 5 seconds) and check its provisioning state. Once the provisioning state is Succeeded, you can try to create the container; at that point your request should succeed.
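For example, a minimal polling loop with the Azure CLI could look like the sketch below (the variable names are reused from the question and the 60-attempt cap is arbitrary):
# Poll every 5 seconds until the storage account reports provisioningState = Succeeded
for i in $(seq 1 60); do
  state=$(az storage account show \
    --resource-group "$RESOURCE_GROUP_NAME" \
    --name "$STORAGE_ACCOUNT_NAME" \
    --query provisioningState -o tsv 2>/dev/null || true)
  if [ "$state" = "Succeeded" ]; then
    echo "Storage account provisioned."
    break
  fi
  echo "Waiting for storage account (current state: ${state:-unknown})..."
  sleep 5
done
You could place something like this between az storage account create and az storage account keys list in the first bash file.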
Related
I am using a bash script to upload a VHD to a blob container in a storage account.
Steps:
I created a VM, a storage account, a container with blob access, and a snapshot. From the VM, I generate a SAS URI by taking a snapshot of the VM disk.
After the SAS URI is generated, I try to copy the VHD into the storage account container using the CLI command below:
sas=$(az snapshot grant-access --resource-group $resourceGroupName --name $snapshotName --duration-in-seconds $sasExpiryDuration --query [accessSas] -o tsv)
I verified the value of $sas in the terminal and it prints the correct value.
But when I try the command below:
az storage blob copy start --destination-blob $destinationVHDFileName --destination-container $storageContainerName --account-name $accountname --account-key $key --source-uri $sas
The SAS URI is a very long string, and wherever there is an '&' in the SAS URL, the string after it gets treated as a separate command.
Let's say the string is like `abc&sr63663&si74883&sig74848`
I'm getting errors like:
'sr' is not recognized as an internal or external command.
'si' is not recognized as an internal or external command.
'sig' is not recognized as an internal or external command.
Please help me with how to pass the SAS URI properly to the last command mentioned, from the bash script.
I tried to reproduce the same in my environment and got the same error.
To resolve the error, try wrapping the SAS token separately with single quotes followed by double quotes like below:
sastoken=' " your_sas_token" '
az storage blob copy start --destination-blob $destinationVHDFileName --destination-container $storageContainerName --account-name $accountname --account-key $key --source-uri $sastoken
After executing the above script, the VHD file is uploaded to the blob container successfully.
To confirm this, go to the Azure Portal and check the blob container.
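More generally, quoting the SAS value (whether it is expanded from a variable or pasted literally) stops the shell from treating the '&' characters as command separators; a minimal sketch reusing the variable names from the question:
# Double quotes keep the '&'-delimited SAS query string as a single argument
az storage blob copy start \
  --destination-blob "$destinationVHDFileName" \
  --destination-container "$storageContainerName" \
  --account-name "$accountname" \
  --account-key "$key" \
  --source-uri "$sas"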
Reference:
How to provide a SAS token to Azure CLI by Jon Tirjan
I'm trying to list all the recovery points in an AWS Backup vault from the CLI and I'm running into the following error:
An error occurred (AccessDeniedException) when calling the ListRecoveryPointsByBackupVault operation: Insufficient privileges to perform this action.
I'm having a hard time figuring out which permissions are required to get this working. I've added the backup:ListRecoveryPointsByBackupVault permission. I was also looking for a backup policy I could use as a reference, or some documentation, but I've not had much luck searching online for the full set of permissions needed. Any help would be much appreciated!
Here is my bash:
#!/bin/bash
CURRENTregion=$(aws configure get region)
#GET LIST OF RECOVERY POINTS FROM VAULT
getRecoveryPoints(){
  echo "Enter the name of the vault you want to list"
  read VAULT_NAME
  aws backup list-recovery-points-by-backup-vault --backup-vault-name "$\{VAULT_NAME\}" --query 'RecoveryPoints[*].RecoveryPointArn' --output text > RecoveryPointsList
}
getRecoveryPoints
"$\{VAULT_NAME\}" -> only $ should be escaped not the vault name ( as OP mentioned)
For example: upon launching my EC2 instance, I would like to automatically run
docker login
so I can pull a private image from Docker Hub and run it. To log in to Docker Hub I need to enter a username and password, and this is the part I would like to automate but haven't been able to figure out how.
I do know that you can pass in a script to be run on launch via User Data. The issue is that my script expects input, and I would like to automate entering that input.
Thanks in advance!
If just entering a password for docker login is your problem then I would suggest searching for a manual for docker login. 30 secs on Google gave me this link:
https://docs.docker.com/engine/reference/commandline/login/
It suggests something of the form
docker login --username foo --password-stdin < ~/my_password.txt
which will read the password from the file my_password.txt in the current user's home directory.
Seems like the easiest solution for you here is to modify your script to accept command line parameters, and pass those in with the UserData string.
Keep in mind that this will require you to change your launch configs every time your password changes.
The better solution here is to store your containers in ECS, and let AWS handle the authentication for you (as far as pulling the correct containers from a repo).
Your UserData then turns into something along the lines of:
#!/bin/bash
mkdir -p /etc/ecs
rm -f /etc/ecs/ecs.config # cleans up any old files on this instance
echo ECS_LOGFILE=/log/ecs-agent.log >> /etc/ecs/ecs.config
echo ECS_LOGLEVEL=info >> /etc/ecs/ecs.config
echo ECS_DATADIR=/data >> /etc/ecs/ecs.config
echo ECS_CONTAINER_STOP_TIMEOUT=5m >> /etc/ecs/ecs.config
echo ECS_CLUSTER=<your-cluster-goes-here> >> /etc/ecs/ecs.config
docker pull amazon/amazon-ecs-agent
docker run --name ecs-agent --detach=true --restart=on-failure:10 --volume=/var/run/docker.sock:/var/run/docker.sock --volume=/var/log/ecs/:/log --volume=/var/lib/ecs/data:/data --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro --volume=/var/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro --publish=127.0.0.1:51678:51678 --env-file=/etc/ecs/ecs.config amazon/amazon-ecs-agent:latest
You may or may not need all the volumes specified above.
This setup lets the AWS ecs-agent handle your container orchestration for you.
Below is what I can suggest at the moment:
Create an S3 bucket, e.g. mybucket.
Put a text file (doc_pass.txt) containing your password into that S3 bucket.
Create an IAM policy that has GET access to just that particular S3 bucket and add this policy to the EC2 instance role (a sample policy is sketched at the end of this answer).
Put the script below in your user data:
aws s3 cp s3://mybucket/doc_pass.txt doc_pass.txt
cat doc_pass.txt | docker login --username=YOUR_USERNAME --password-stdin
This way you only need to keep your S3 bucket secure, and no secrets are displayed in the user data.
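As an illustration of the IAM policy step above, a read-only policy scoped to just that object could be created like this (the bucket, object, and policy names are placeholders):
# Hypothetical policy allowing the instance role to read only the password object
aws iam create-policy --policy-name docker-pass-read --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::mybucket/doc_pass.txt"
    }
  ]
}'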
Heroku offers automatic and scheduled backups of your PG database.
https://devcenter.heroku.com/articles/heroku-postgres-data-safety-and-continuous-protection
PGBackups will launch a dedicated dyno to take a dump of your database and upload it to S3
Simple question: is it possible to upload a scheduled PG backup to one's OWN S3 bucket, simply to have control over the backup files and not be limited in storage space? Researching this topic did not give me an answer as to whether this is possible.
You can do it by using Heroku scheduler and a bash script.
#!/bin/bash
# Set the script to fail fast if there
# is an error or a missing variable
set -eu
set -o pipefail
# Download the latest backup from
# Heroku and gzip it
heroku pg:backups:download --output=/tmp/pg_backup.dump --app $APP_NAME
gzip /tmp/pg_backup.dump
# Encrypt the gzipped backup file
# using GPG passphrase
gpg --yes --batch --passphrase=$PG_BACKUP_PASSWORD -c /tmp/pg_backup.dump.gz
# Remove the plaintext backup file
rm /tmp/pg_backup.dump.gz
# Generate backup filename based
# on the current date
BACKUP_FILE_NAME="heroku-backup-$(date '+%Y-%m-%d_%H.%M').gpg"
# Upload the file to S3 using
# AWS CLI
aws s3 cp /tmp/pg_backup.dump.gz.gpg "s3://${S3_BUCKET_NAME}/${BACKUP_FILE_NAME}"
# Remove the encrypted backup file
rm /tmp/pg_backup.dump.gz.gpg
You can check out this tutorial for a detailed step-by-step explanation.
One option is to create a backup (you can even create a follower database to create it from, for performance reasons), then download the backup as a stream to your server, and then upload it into your own S3 bucket.
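A minimal sketch of that flow with the Heroku and AWS CLIs (the app and bucket names are placeholders supplied via environment variables):
# Capture a fresh backup, stream it down, and push it to your own bucket
heroku pg:backups:capture --app "$APP_NAME"
curl -o /tmp/latest.dump "$(heroku pg:backups:url --app "$APP_NAME")"
aws s3 cp /tmp/latest.dump "s3://${S3_BUCKET_NAME}/latest.dump"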
If you want a quick Rails app to do this, you can set up https://github.com/kjohnston/pgbackups-archive. It does everything aside from creating a follower database, but if you are not too concerned with performance 24/7, then this should do fine. I don't know why Heroku doesn't offer storage to your own S3 buckets, as they store the backups on S3 themselves.
Here is a buildpack for doing this on a regular schedule. It hasn't been updated in a bit, but you could easily update / adapt it as needed.
I have an EC2 ASG on AWS, and I'm interested in storing the shell script that's used to instantiate any given instance in an S3 bucket and having it downloaded and run upon instantiation. It all feels a little rickety, even though I'm using an IAM instance role, transferring via HTTPS, and encrypting the script itself while at rest in the S3 bucket using S3 server-side encryption (because the KMS method was throwing an 'Unknown' error).
The Setup
Created an IAM Instance Role that gets assigned to any instance in my ASG upon instantiation, resulting in my AWS creds being baked into the instance as ENV vars
Uploaded and encrypted my Instance-Init.sh script to S3 resulting in a private endpoint like so : https://s3.amazonaws.com/super-secret-bucket/Instance-Init.sh
In The User-Data Field
I input the following into the User Data field when creating the Launch Configuration I want my ASG to use:
#!/bin/bash
apt-get update
apt-get -y install python-pip
apt-get -y install awscli
cd /home/ubuntu
aws s3 cp s3://super-secret-bucket/Instance-Init.sh . --region us-east-1
chmod +x Instance-Init.sh
. Instance-Init.sh
shred -u -z -n 27 Instance-Init.sh
The above does the following:
Updates package lists
Installs Python (required to run aws-cli)
Installs aws-cli
Changes to the /home/ubuntu user directory
Uses the aws-cli to download the Instance-Init.sh file from S3. Due to the IAM Role assigned to my instance, my AWS creds are automagically discovered by aws-cli. The IAM Role also grants my instance the permissions necessary to decrypt the file.
Makes it executable
Runs the script
Deletes the script after it's completed.
The Instance-Init.sh Script
The script itself will do stuff like setting env vars and docker run the containers that I need deployed on my instance. Kinda like so:
#!/bin/bash
export MONGO_USER='MyMongoUserName'
export MONGO_PASS='Top-Secret-Dont-Tell-Anyone'
docker login -u <username> -p <password> -e <email>
docker run -e MONGO_USER=${MONGO_USER} -e MONGO_PASS=${MONGO_PASS} --name MyContainerName quay.io/myQuayNameSpace/MyAppName:latest
Very Handy
This creates a very handy way to update User-Data scripts without the need to create a new Launch Config every time you need to make a minor change. And it does a great job of getting env vars out of your codebase and into a narrow, controllable space (the Instance-Init.sh script itself).
But it all feels a little insecure. The idea of putting my master DB creds into a file on S3 is unsettling to say the least.
The Questions
Is this a common practice or am I dreaming up a bad idea here?
Does the fact that the file is downloaded and stored (albeit briefly) on the fresh instance constitute a vulnerability at all?
Is there a better method for deleting the file in a more secure way?
Does it even matter whether the file is deleted after it's run? Considering the secrets are being transferred to env vars it almost seems redundant to delete the Instance-Init.sh file.
Is there something that I'm missing in my nascent days of ops?
Thanks for any help in advance.
What you are describing is almost exactly what we are using to instantiate Docker containers from our registry (we now use v2 self-hosted/private, s3-backed docker-registry instead of Quay) into production. FWIW, I had the same "this feels rickety" feeling that you describe when first treading this path, but after almost a year now of doing it -- and compared to the alternative of storing this sensitive configuration data in a repo or baked into the image -- I'm confident it's one of the better ways of handling this data. Now, that being said, we are currently looking at using Hashicorp's new Vault software for deploying configuration secrets to replace this "shared" encrypted secret shell script container (say that five times fast). We are thinking that Vault will be the equivalent of outsourcing crypto to the open source community (where it belongs), but for configuration storage.
In fewer words, we haven't run across many problems with a very similar situation we've been using for about a year, but we are now looking at using an external open source project (Hashicorp's Vault) to replace our homegrown method. Good luck!
An alternative to Vault is to use credstash, which leverages AWS KMS and DynamoDB to achieve a similar goal.
I actually use credstash to dynamically import sensitive configuration data at container startup via a simple entrypoint script - this way the sensitive data is not exposed via docker inspect or in docker logs etc.
Here's a sample entrypoint script (for a Python application) - the beauty here is you can still pass in credentials via environment variables for non-AWS/dev environments.
#!/bin/bash
set -e
# Activate virtual environment
. /app/venv/bin/activate
# Pull sensitive credentials from AWS credstash if CREDENTIAL_STORE is set with a little help from jq
# AWS_DEFAULT_REGION must also be set
# Note values are Base64 encoded in this example
if [[ -n $CREDENTIAL_STORE ]]; then
  items=$(credstash -t $CREDENTIAL_STORE getall -f json | jq 'to_entries | .[]' -r)
  keys=$(echo $items | jq .key -r)
  for key in $keys
  do
    export $key=$(echo $items | jq 'select(.key=="'$key'") | .value' -r | base64 --decode)
  done
fi

exec "$@"
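For context, a hypothetical way to populate and consume such a store (the table name, image name, and secret value are made up; the value is Base64-encoded to match the decode step in the script):
# One-time table setup, then store an encoded secret the entrypoint will export as MONGO_PASS
credstash -t myapp-secrets setup
credstash -t myapp-secrets put MONGO_PASS "$(echo -n 'Top-Secret-Dont-Tell-Anyone' | base64)"

# Start the container; the entrypoint pulls and decodes everything in the table
docker run \
  -e CREDENTIAL_STORE=myapp-secrets \
  -e AWS_DEFAULT_REGION=us-east-1 \
  myregistry/myapp:latest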