Store multiple secrets in Key Vault with one Azure CLI command - bash

Sorry, newbie here. Perhaps this has an easy and obvious answer, but I am trying to store an extensive list of Key Vault secrets and am looking for an easier way than entering them one at a time. I figure the CLI would be quicker for this than the Azure Resource Manager interface.

If by "ARM Interface" you mean a template, there is a template already written for this case: https://github.com/Azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.keyvault/key-vault-secret-create

Don't know if this will actually help anyone because it's so basic, but I ended up writing a for-loop script in bash that takes the input files as arguments, creates the key vault, and populates it with the secrets, naming the vault after the values file (e.g. vault1.txt).
running:
bash keyvault_creator.sh key.txt vault1.txt
runs:
#!/bin/bash
# Usage: bash keyvault_creator.sh <names-file> <values-file>
key=$1
value=$2
# The vault is named after the values file (e.g. vault1.txt -> vault1)
az keyvault create --location <location> --name "${value%.*}" --resource-group <ResourceGroupName>
# paste pairs the two files line by line: secret name from $key, secret value from $value
paste "$key" "$value" | while read -r if of; do
  echo "$if" "$of"
  az keyvault secret set --vault-name "${value%.*}" --name "$if" --value "$of"
done
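For illustration (file contents here are made up), key.txt would hold the secret names and vault1.txt the matching values, one per line:
$ cat key.txt
db-user
db-password
$ cat vault1.txt
app-admin
S3cr3tValue
With that input the script creates a vault named vault1 and sets db-user=app-admin and db-password=S3cr3tValue.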

Related

How can I use a secret variable in an Azure DevOps bash task when it could be undefined

I am trying to create a config file using a bash task in Azure DevOps. The variables come from Azure Key Vault, so I don't know which variables are defined and which ones are undefined.
- script: |
    touch config.txt
    echo "1. $(MyDefinedVariable)" >> config.txt
    echo "2. $(MyUndefinedVariable)" >> config.txt
    cat config.txt
Since MyUndefinedVariable is not defined, the pipeline doesn't substitute $(MyUndefinedVariable), resulting in a bash error MyUndefinedVariable: command not found.
I have tried using the env argument to map them to bash variables, but I get the same error since "$(MyUndefinedVariable)" is being passed into the bash environment as a literal string.
- script: |
    touch config.txt
    echo "1. $MY_DEFINED_VARIABLE" >> config.txt
    echo "2. $MY_UNDEFINED_VARIABLE" >> config.txt
    cat config.txt
  env:
    MY_DEFINED_VARIABLE: $(MyDefinedVariable)
    MY_UNDEFINED_VARIABLE: $(MyUndefinedVariable)
I just want undefined variables to resolve to an empty string but can't find a sensible way to do it.
All variables mapped from Azure Key Vault are treated as secrets, so a mapping like this one is necessary:
env:
  MY_DEFINED_VARIABLE: $(MyDefinedVariable)
  MY_UNDEFINED_VARIABLE: $(MyUndefinedVariable)
I'm afraid that if you don't know which values are in your Key Vault, you need to use the Azure CLI to check. To list all secret keys you can use this command:
az keyvault secret list [--id]
                        [--maxresults]
                        [--query-examples]
                        [--subscription]
                        [--vault-name]
You can combine this CLI command with the Azure CLI task.
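For example, to pull just the secret names out of a vault from inside an Azure CLI task (the vault name below is a placeholder), something like this should work:
az keyvault secret list --vault-name <your-vault-name> --query "[].name" -o tsv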

How to create a secret docker secret?

I need to create a MariaDB docker container and set the root password, but the password is passed as a command-line argument, which is dangerous because it ends up stored in .bash_history.
I tried using secrets with printf "pass" | docker secret create mysql-root -, but that has the same problem: the password is saved in .bash_history. The docker secret is not very secret.
I tried an interactive command:
while read -e line; do printf $line | docker secret create mysql-root -; break; done;
but it is very ugly xD. What is a better way to create a docker secret without saving it in bash history, and without wiping the whole bash history?
The simplest way I have found is to use the following:
docker secret create private_thing -
Then enter the secret on the command line, followed by Ctrl-D twice.
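If you'd rather be prompted for the value without it echoing to the terminal, here is a minimal sketch using bash's read -s (the secret name is just an example):
read -rsp "Secret value: " value
printf '%s' "$value" | docker secret create private_thing -
unset value
Because the value is typed at the prompt rather than on the command line, nothing ends up in .bash_history.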
You could try
printf "$line" | sudo docker secret create mysql-root -
and then
docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d mariadb:tag
The information concerning using secrets with MariaDB can be found on the MariaDB page of DockerHub.
"Docker Secrets
As an alternative to passing sensitive information via environment variables, _FILE may be appended to the previously listed environment variables, causing the initialization script to load the values for those variables from files present in the container. In particular, this can be used to load passwords from Docker secrets stored in /run/secrets/<secret_name> files. For example:
$ docker run --name some-mysql -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root -d mariadb:tag
Currently, this is only supported for MYSQL_ROOT_PASSWORD, MYSQL_ROOT_HOST, MYSQL_DATABASE, MYSQL_USER, and MYSQL_PASSWORD"
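Note that docker secrets are only delivered to swarm services, so in practice the container above would be started as a service, roughly like this (names and tag are illustrative):
docker service create --name some-mariadb \
  --secret mysql-root \
  -e MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql-root \
  mariadb:10.6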
You can use the openssl rand option to generate a random string and pipe it to the docker secret command, i.e.
openssl rand -base64 10 | docker secret create my_sec -
The openssl rand command above generates a 10-byte, base64-encoded random string.
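To confirm the secret was created afterwards (the value itself is never printed back), you can check with:
docker secret ls
docker secret inspect my_sec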

EC2 instance region is not populated in user-data script

I want to set some tags on an EC2 spot instance, but since it is impossible to do that directly in the spot request, I do it via a user-data script. Everything works fine when I specify the region statically, but that is not a universal approach. When I try to detect the current region from within the user-data script, the region variable is always empty. I do it the following way:
#!/bin/bash
region=$(ec2-metadata -z | awk '{print $2}' | sed 's/[a-z]$//')
aws ec2 create-tags \
  --region "$region" \
  --resources "$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)" \
  --tags Key=sometag,Value=somevalue Key=sometag,Value=somevalue
I tried adding a delay before populating the region,
/bin/sleep 30
but it made no difference.
However, when I run this script manually after startup, the tags are added fine. What is going on?
And why doesn't the aws-cli pick up the default region from the profile in the first place? I have aws configure set up properly inside the instance, but without the --region flag it throws an error saying the region is not specified.
I suspect the ec2-metadata command is not available when your user-data script is executed. Try getting the region from the metadata server directly (which is what ec2-metadata does anyway):
region=$(curl -fsq http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
AWS CLI does use the region from default profile.
You can now use this endpoint to get only the instance region (no parsing needed):
http://169.254.169.254/latest/meta-data/placement/region
So in this case:
region=`curl -s http://169.254.169.254/latest/meta-data/placement/region`
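If the instance enforces IMDSv2, the same endpoint needs a session token first; a rough sketch:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
region=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/placement/region)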
I ended up with
region=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | python -c "import json,sys; print(json.load(sys.stdin)['region'])")
which worked fine. However, it would be nice if somebody explained the nuts and bolts.

AWS S3: Remove Object Prefix From Thousands of Files in Complex Directory Structure

I am using the AWS CLI to manage files/objects in S3. I have thousands of objects buried in a complex system of nested folders (subfolders), and I want to move all of the objects up into one folder at the root of the bucket (s3://bucket/folder/file.txt).
I've tried using this command:
aws s3 mv s3://bucket-a/folder-a s3://bucket-a --recursive --exclude "*" --include "*.txt"
When I use the mv command, it carries over the prefixes (directory paths) of each object resulting in the same nested folder system. Here is what I want to accomplish:
Desired Result:
Where:
s3://bucket-a/folder-a/file-1.txt
s3://bucket-a/folder-b/folder-b1/file-2.txt
s3://bucket-a/folder-c/folder-c1/folder-c2/file-3.txt
Output:
s3://bucket-a/file-1.txt
s3://bucket-a/file-2.txt
s3://bucket-a/file-3.txt
I have been told, that I need to use a bash script to accomplish my desired result. Here is a sample script that was provided to me:
#!/bin/bash
#BASH Script to move objects without directory structure
bucketname='my-bucket'
for key in $(aws s3api list-objects --bucket "${my-bucket}" --query "Contents[].{Object:Key}" --output text) ;
do
echo "$key"
FILENAME=$($key | awk '{print $NF}' FS=/)
aws s3 cp s3://$my-bucket/$key s3://$my-bucket/my-folder/$FILENAME
done
When I run this bash script, I get an error:
A client error (AccessDenied) occurred when calling the ListObjects operation: Access Denied
I tested the connection with another aws s3 command and confirmed that it works. I added policies to the user to include all privileges for S3; I have no idea what I am doing wrong here.
Any help would be greatly appreciated.
That script looks messed up; it makes no sense to set a variable called bucketname and then try to use another one called my-bucket. What happens if you try this?
#!/bin/bash
# BASH script to move objects without directory structure
bucketname='my-bucket'
for key in $(aws s3api list-objects --bucket "${bucketname}" --query "Contents[].{Object:Key}" --output text) ;
do
  echo "$key"
  FILENAME=$(echo "$key" | awk -F/ '{print $NF}')
  aws s3 cp "s3://$bucketname/$key" "s3://$bucketname/my-folder/$FILENAME"
done
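Assuming the loop completes, you can sanity-check the flattened copies before deleting the originals with something like:
aws s3 ls s3://my-bucket/my-folder/ --recursive
Keep in mind that two objects with the same filename in different folders will overwrite each other once they land in a single flat folder.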

Is it secure to store EC2 User-Data shell scripts in a private S3 bucket?

I have an EC2 ASG on AWS and I'm interested in storing the shell script that's used to instantiate any given instance in an S3 bucket and having it downloaded and run upon instantiation, but it all feels a little rickety even though I'm using an IAM Instance Role, transferring via HTTPS, and encrypting the script itself while at rest in the S3 bucket with S3 Server-Side Encryption (because the KMS method was throwing an 'Unknown' error).
The Setup
Created an IAM Instance Role that gets assigned to any instance in my ASG upon instantiation, resulting in my AWS creds being baked into the instance as ENV vars
Uploaded and encrypted my Instance-Init.sh script to S3 resulting in a private endpoint like so : https://s3.amazonaws.com/super-secret-bucket/Instance-Init.sh
In The User-Data Field
I input the following into the User Data field when creating the Launch Configuration I want my ASG to use:
#!/bin/bash
apt-get update
apt-get -y install python-pip
apt-get -y install awscli
cd /home/ubuntu
aws s3 cp s3://super-secret-bucket/Instance-Init.sh . --region us-east-1
chmod +x Instance-Init.sh
. Instance-Init.sh
shred -u -z -n 27 Instance-Init.sh
The above does the following:
Updates package lists
Installs Python (required to run aws-cli)
Installs aws-cli
Changes to the /home/ubuntu user directory
Uses the aws-cli to download the Instance-Init.sh file from S3. Due to the IAM Role assigned to my instance, my AWS creds are automagically discovered by aws-cli. The IAM Role also grants my instance the permissions necessary to decrypt the file.
Makes it executable
Runs the script
Deletes the script after it's completed.
The Instance-Init.sh Script
The script itself will do stuff like setting env vars and docker run the containers that I need deployed on my instance. Kinda like so:
#!/bin/bash
export MONGO_USER='MyMongoUserName'
export MONGO_PASS='Top-Secret-Dont-Tell-Anyone'
docker login -u <username> -p <password> -e <email>
docker run -e MONGO_USER=${MONGO_USER} -e MONGO_PASS=${MONGO_PASS} --name MyContainerName quay.io/myQuayNameSpace/MyAppName:latest
Very Handy
This creates a very handy way to update User-Data scripts without the need to create a new Launch Config every time you need to make a minor change. And it does a great job of getting env vars out of your codebase and into a narrow, controllable space (the Instance-Init.sh script itself).
But it all feels a little insecure. The idea of putting my master DB creds into a file on S3 is unsettling to say the least.
The Questions
Is this a common practice or am I dreaming up a bad idea here?
Does the fact that the file is downloaded and stored (albeit briefly) on the fresh instance constitute a vulnerability at all?
Is there a better method for deleting the file in a more secure way?
Does it even matter whether the file is deleted after it's run? Considering the secrets are being transferred to env vars it almost seems redundant to delete the Instance-Init.sh file.
Is there something that I'm missing in my nascent days of ops?
Thanks for any help in advance.
What you are describing is almost exactly what we are using to instantiate Docker containers from our registry (we now use v2 self-hosted/private, s3-backed docker-registry instead of Quay) into production. FWIW, I had the same "this feels rickety" feeling that you describe when first treading this path, but after almost a year now of doing it -- and compared to the alternative of storing this sensitive configuration data in a repo or baked into the image -- I'm confident it's one of the better ways of handling this data. Now, that being said, we are currently looking at using Hashicorp's new Vault software for deploying configuration secrets to replace this "shared" encrypted secret shell script container (say that five times fast). We are thinking that Vault will be the equivalent of outsourcing crypto to the open source community (where it belongs), but for configuration storage.
In fewer words, we haven't run across many problems with a very similar situation we've been using for about a year, but we are now looking at using an external open source project (Hashicorp's Vault) to replace our homegrown method. Good luck!
An alternative to Vault is to use credstash, which leverages AWS KMS and DynamoDB to achieve a similar goal.
I actually use credstash to dynamically import sensitive configuration data at container startup via a simple entrypoint script - this way the sensitive data is not exposed via docker inspect or in docker logs etc.
Here's a sample entrypoint script (for a Python application) - the beauty here is you can still pass in credentials via environment variables for non-AWS/dev environments.
#!/bin/bash
set -e
# Activate virtual environment
. /app/venv/bin/activate
# Pull sensitive credentials from AWS credstash if CREDENTIAL_STORE is set with a little help from jq
# AWS_DEFAULT_REGION must also be set
# Note values are Base64 encoded in this example
if [[ -n $CREDENTIAL_STORE ]]; then
  items=$(credstash -t "$CREDENTIAL_STORE" getall -f json | jq 'to_entries | .[]' -r)
  keys=$(echo "$items" | jq .key -r)
  for key in $keys
  do
    export $key=$(echo "$items" | jq 'select(.key=="'$key'") | .value' -r | base64 --decode)
  done
fi
exec "$@"
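For completeness, the matching credstash entries could be stored base64-encoded to mirror the base64 --decode above; the table and key names here are illustrative:
# store a base64-encoded value; the entrypoint script decodes it at startup
credstash -t my-cred-store put MONGO_PASS "$(printf '%s' "$MONGO_PASS_VALUE" | base64)"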
