Set ENV from external script (Secrets Manager) in Docker - bash

I am trying to integrate AWS Secrets Manager with Docker. I have to call Secrets Manager and set the fields of the response JSON as environment variables.
My JSON looks like this:
{
"username": "demo_user",
"password": "some_password"
}
I am trying the steps below to set the ENV:
Dockerfile
RUN export secretData=$(aws secretsmanager get-secret-value --secret-id /secret/id --region region-example-north-1);echo $secretData;
RUN bash -l -c 'echo export $secretData="$(aws secretsmanager get-secret-value --secret-id /secret/id --region region-example-north-1)" >> /etc/bash.bashrc'
Neither RUN approach worked for me. What I want is the effect of ENV, i.e. RUN export secretData leaving the variable available in the final image.
My environment variables are username and password, so this output should then be parsed and stored in each variable respectively (see the sketch below).
Thanks in advance for each reply.
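For context: each RUN instruction executes in its own shell, so variables exported there do not persist into the final image (and baking secrets into image layers at build time is risky anyway). A minimal sketch of a common alternative, assuming the image has the AWS CLI and jq installed and AWS credentials available at runtime, fetches the secret in an entrypoint script when the container starts:
entrypoint.sh
#!/bin/bash
# Fetch the secret at container start instead of at build time
set -e
secret_json=$(aws secretsmanager get-secret-value \
  --secret-id /secret/id \
  --region region-example-north-1 \
  --query SecretString --output text)
# Parse the JSON fields into the environment variables the app expects
export username=$(echo "$secret_json" | jq -r .username)
export password=$(echo "$secret_json" | jq -r .password)
# Hand off to the container's main process
exec "$@"
In the Dockerfile you would then wire it up with ENTRYPOINT ["/entrypoint.sh"] in front of your usual CMD.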

Related

AWS Secrets Manager SSH key read into a bash variable

I am trying to use AWS Secrets Manager to store a private SSH key and read it in a bash script.
I created the secret with the name testsshkey in Secrets Manager.
This is multiline text stored in Secrets Manager.
Then I created the bash script with the following:
secret_key=$(aws secretsmanager get-secret-value --region us-east-1 --secret-id testsshkey --query SecretString --output text)
echo $secret_key
When I run this script, it only prints the last line of the key.
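The likely culprit is the unquoted expansion: without double quotes, the shell word-splits the value and the embedded newlines are lost. A minimal sketch of the fix (assuming the key was stored with its line breaks intact):
secret_key=$(aws secretsmanager get-secret-value --region us-east-1 --secret-id testsshkey --query SecretString --output text)
# Quote the expansion so the embedded newlines are preserved
echo "$secret_key"
# e.g. write it to a key file for ssh to use (the path is illustrative)
printf '%s\n' "$secret_key" > ~/.ssh/testsshkey
chmod 600 ~/.ssh/testsshkey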

Running aws secretsmanager get-secret-value --secret-id $test_env_variable --query "SecretString" throws an error. What am I doing wrong here?

When I run the following AWS CLI command and pass an environment variable as --secret-id, the AWS CLI throws an error. However, when I pass a hardcoded value instead of the environment variable, it works just fine.
aws secretsmanager get-secret-value --secret-id $test_env_variable --query "SecretString"
ERROR:
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
  aws help
  aws <command> help
  aws <command> <subcommand> help
aws.exe: error: argument --secret-id: expected one argument
This happens because your test_env_variable is most likely empty or unset, so --secret-id receives no argument. You have to ensure that it actually contains the secret ID in the correct format before calling the CLI.
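A quick way to guard against this in bash, sketched below, is to fail fast when the variable is unset or empty, and to quote it when passing it to the CLI:
# Abort with a clear message if test_env_variable is unset or empty
: "${test_env_variable:?test_env_variable is not set}"
# Quote the variable so an empty value cannot swallow the next argument
aws secretsmanager get-secret-value --secret-id "$test_env_variable" --query "SecretString"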

Problem with creating EBS snapshot on server (Linux EC2 instance)

I am working on a task that requires running a script on a server. The script will grab the instance ID, create a snapshot, run the yum update -y command, and reboot the server.
#!/bin/bash
set -eu
# Set Vars
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export REGION=$(curl --silent http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
export INSTANCE_ID=$(curl --silent http://169.254.169.254/latest/meta-data/instance-id)
echo $AWS_ACCOUNT_ID
echo $REGION
# Fetch VolumeId
volumeid=$(aws ec2 describe-instances --region $REGION --instance-ids "$INSTANCE_ID" --filters Name=instance-state-name,Values=running --query "Reservations[*].Instances[].[BlockDeviceMappings[*].{VolumeName:Ebs.VolumeId}]" --output text)
echo $INSTANCE_ID
echo $volumeid
# Create snapshot
aws ec2 create-snapshot --region $REGION --volume-id $volumeid --description "Test-Snapshot-$INSTANCE_ID"
read -p "waiting a while to complete creation of EBS snapshot" -t 100
echo -e "\x1B[01;36m Snapshot has been created \x1B[0m"
I can get the instance ID, but when I try to create the snapshot from the instance ID, I get the following error:
ERROR
us-east-1
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
Thank you so much in advance for your support.
Your instance, and with it your script, is missing the ec2:DescribeInstances permission needed to run the aws ec2 describe-instances command.
You should attach that permission to the instance role assigned to the instance (or create a new role with the permission attached if none is assigned yet).
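As a minimal sketch (the role and policy names below are hypothetical), the missing permissions can be attached as an inline policy on the instance role via the CLI:
# Role and policy names are hypothetical; grants the actions the script needs
aws iam put-role-policy \
  --role-name my-ec2-instance-role \
  --policy-name allow-describe-and-snapshot \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances", "ec2:CreateSnapshot"],
      "Resource": "*"
    }]
  }'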
Your IAM permissions do not grant access to DescribeInstances.
If you're using an IAM role for the instance, check its policies.
If it's a user, make sure the credentials are being picked up, either via the AWS credentials file or via environment variables.

How to set command output as an environment variable in Windows cmd?

I have saved an Azure Storage key in Key Vault, and I want to retrieve the key using the Azure CLI and set it as an environment variable in Windows cmd before I run my Terraform script.
The command listed below doesn't work; can anyone tell me what needs to be changed?
set ARM_ACCESS_KEY=$(az keyvault secret show --name terraform-backend-key --vault-name myKeyVault)
Error on initializing
Main.tf
variable "count" {}
variable "prefix" {
default="RG"
}
terraform {
backend "azurerm" {
container_name = "new"
storage_account_name = "mfarg"
key = "terraform.tfstate"
}}
resource "azurerm_resource_group" "test" {
count ="${var.count}"
name = "${var.prefix}-${count.index}"
location = "West US 2"
}
Command prompt output
To set the environment variable in Windows, I suggest you use the PowerShell command to achieve it. In PowerShell, you can just do it like this:
$env:ACCESS_KEY=$(az keyvault secret show -n terraform-backend-key --vault-name myKeyVault --query value -o tsv)
Also, your CLI command does not extract the key: without --query value -o tsv it outputs the whole secret object as JSON, not just the access key you want. Note that difference between the two commands.
A late answer, but perhaps useful to those who still have the same problem.
This method will work in the Windows Command Prompt (cmd).
For /f %%i in ('az keyvault secret show --vault-name "Your-KeyVault-Name" --name "Your-Secret-Name" --query "value"') do set "password=%%i"
Now if you just run echo %password% you will see your secret value.
Remember that the az command has to be wrapped in single quotes, like 'az keyvault secret etc'. Also note that %%i is the syntax for batch files; at an interactive prompt use %i instead.

Set environment variables in AWS EMR during bootstrap

I added the following configuration in spark-env
--configurations '[
  {
    "Classification": "spark-env",
    "Properties": {},
    "Configurations": [
      {
        "Classification": "export",
        "Properties": {
          "MY_VARIABLE": "MY_VARIABLE"
        }
      }
    ]
  }
]'
But if I just do echo $MY_VARIABLE in bash, I can't see it in the terminal.
Basically what I want to do is the following:
- schedule the creation of an AWS EMR cluster with AWS Lambda (where I would set all my environment variables, such as git credentials)
- in the bootstrapping of the machine, install a bunch of things, including git
- git clone a repository (so I need to use the credentials stored in the environment variables)
- execute some code from this repository
Pass the environment variables as arguments to the bootstrap action.
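For example (a minimal sketch; the script location, profile file, and variable names are hypothetical), the bootstrap script receives the values as positional arguments and can persist them for later shell sessions:
bootstrap.sh
#!/bin/bash
# $1 and $2 arrive from the bootstrap action's Args list, e.g.
# --bootstrap-actions Path=s3://my-bucket/bootstrap.sh,Args=[myuser,mytoken]
echo "export GIT_USERNAME=$1" | sudo tee -a /etc/profile.d/custom_env.sh
echo "export GIT_TOKEN=$2" | sudo tee -a /etc/profile.d/custom_env.sh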
The reason why you can't find MY_VARIABLE using echo is that MY_VARIABLE is only available to the Spark environment (spark-env).
Assuming you are using pyspark: if you open a pyspark shell (while you are ssh'd into one of the nodes of your cluster) and type os.getenv("MY_VARIABLE"), you'll see the value you assigned to that variable.
An alternative solution for your use case: instead of using credentials (which in general is not the preferred way), you could use a set of keys that allows you to clone a repo with SSH (rather than HTTPS). You can store those keys in AWS SSM Parameter Store and retrieve them in the EMR bootstrap script. An example could be:
bootstrap.sh
# Fetch the key material from SSM Parameter Store, decrypting it on the way
export SSM_VALUE=$(aws ssm get-parameter --name $REDSHIFT_DWH_PUBLIC_KEY --with-decryption --query 'Parameter.Value' --output text)
# Quote the expansion so a multiline key keeps its line breaks
echo "$SSM_VALUE" >> $AUTHORIZED_KEYS
In my case, I needed to connect to a Redshift instance, but this would work nicely also with your use case.
Alessio
