How to add multiple keys for an Elastic Beanstalk instance? - amazon-ec2

There is a very good question on how to SSH to an Elastic Beanstalk instance, but one thing I noticed is that this method only allows you to add a single SSH key.
How can I add multiple SSH keys to an instance? Is there a way to automatically add multiple keys to new instances?

Another way to do it is to create a file named .ebextensions/authorized_keys.config:
files:
  /home/ec2-user/.ssh/authorized_keys:
    mode: "000400"
    owner: ec2-user
    group: ec2-user
    content: |
      ssh-rsa AAAB3N...QcGskx keyname
      ssh-rsa BBRdt5...LguTtp another-key
The file name authorized_keys.config is arbitrary.

Combining rhunwicks's and rch850's answers, here's a clean way to add additional SSH keys, while preserving the one set through the AWS console:
files:
  /home/ec2-user/.ssh/extra_authorized_keys:
    mode: "000400"
    owner: ec2-user
    group: ec2-user
    content: |
      ssh-rsa AAAB3N...QcGskx keyname
      ssh-rsa BBRdt5...LguTtp another-key

commands:
  01_append_keys:
    cwd: /home/ec2-user/.ssh/
    command: sort -u extra_authorized_keys authorized_keys -o authorized_keys
  99_rm_extra_keys:
    cwd: /home/ec2-user/.ssh/
    command: rm extra_authorized_keys
Note that eb ssh will work only if the local private key file has the same name as the key pair defined in the AWS console.
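For example (a hedged sketch; the key pair name my-eb-keypair and environment name my-environment are placeholders), keep the local private key named after the key pair:
# Placeholders: key pair "my-eb-keypair", environment "my-environment".
# eb ssh resolves the key by the key pair name configured on the environment,
# so keep the file under ~/.ssh with that same name (commonly with a .pem extension).
cp /path/to/downloaded-key.pem ~/.ssh/my-eb-keypair.pem
chmod 400 ~/.ssh/my-eb-keypair.pem
eb ssh my-environment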

Following on from Jim Flanagan's answer, you could get the keys added to every instance by creating .ebextensions/app.config in your application source directory with contents:
commands:
  copy_ssh_key_userA:
    command: echo "ssh-rsa AAAB3N...QcGskx userA" >> /home/ec2-user/.ssh/authorized_keys
  copy_ssh_key_userB:
    command: echo "ssh-rsa BBRdt5...LguTtp userB" >> /home/ec2-user/.ssh/authorized_keys

No, Elastic Beanstalk only supports a single key pair. You can manually add SSH keys to the authorized_keys file, but these will not be known to the Elastic Beanstalk tools.

One way you could accomplish this is to create a user data script which appends the public keys of the additional key-pairs you want to use to ~ec2-user/.ssh/authorized_keys, and launch the instance with that user data, for example:
#!/bin/bash
echo "ssh-rsa AAAB3N...QcGskx keyname" >> ~ec2-user/.ssh/authorized_keys
echo "ssh-rsa BBRdt5...LguTtp another-key" >> ~ec2-user/.ssh/authorized_keys

The most dynamic way to add multiple SSH keys to Elastic Beanstalk EC2 instances
Step 1
Create a group in IAM. Call it something like beanstalk-access. Add the users who need SSH access to that group in IAM. Also add their public ssh key(s) to their IAM Security credentials.
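If you prefer to script this step, the same setup can be done with the AWS CLI; a rough sketch (the group name matches the one above, the user name alice is a placeholder):
# Sketch of Step 1 with the AWS CLI; "alice" is a placeholder user.
aws iam create-group --group-name beanstalk-access
aws iam add-user-to-group --group-name beanstalk-access --user-name alice
# Upload alice's public key to her IAM security credentials.
aws iam upload-ssh-public-key --user-name alice \
    --ssh-public-key-body "$(cat ~/.ssh/id_rsa.pub)"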
Step 2
The deployment script below will parse JSON output from the AWS CLI using a handy Linux tool called jq (see the jq official tutorial), so we need to install it via .ebextensions:
packages:
  yum:
    jq: []
Step 3
Add the following BASH deployment script to .ebextensions:
files:
  "/opt/elasticbeanstalk/hooks/appdeploy/post/980_beanstalk_ssh.sh":
    mode: "000755"
    owner: ec2-user
    group: ec2-user
    content: |
      #!/bin/bash
      rm -f /home/ec2-user/.ssh/authorized_keys
      users=$(aws iam get-group --group-name beanstalk-access | jq '.["Users"] | [.[].UserName]')
      readarray -t users_array < <(jq -r '.[]' <<<"$users")
      declare -p users_array
      for i in "${users_array[@]}"
      do
      user_keys=$(aws iam list-ssh-public-keys --user-name $i)
      keys=$(echo $user_keys | jq '.["SSHPublicKeys"] | [.[].SSHPublicKeyId]')
      readarray -t keys_array < <(jq -r '.[]' <<<"$keys")
      declare -p keys_array
      for j in "${keys_array[@]}"
      do
      ssh_public_key=$(aws iam get-ssh-public-key --encoding SSH --user-name $i --ssh-public-key-id $j | jq '.["SSHPublicKey"] .SSHPublicKeyBody' | tr -d \")
      echo $ssh_public_key >> /home/ec2-user/.ssh/authorized_keys
      done
      done
      chmod 600 /home/ec2-user/.ssh/authorized_keys
      chown ec2-user:ec2-user /home/ec2-user/.ssh/authorized_keys
Unfortunately, because this is YAML, you can't indent the code to make it more easily readable. But let's break down what's happening:
(In the code snippet directly below) We're removing the default SSH key file to give full control of that list to this deployment script.
rm -f /home/ec2-user/.ssh/authorized_keys
(In the code snippet directly below) Using the AWS CLI, we're getting the list of users in the beanstalk-access group, and then piping that JSON into jq to extract only the list of user names into $users.
users=$(aws iam get-group --group-name beanstalk-access | jq '.["Users"] | [.[].UserName]')
(In the code snippet directly below) Here, we're converting that JSON $users list into a BASH array and calling it $users_array.
readarray -t users_array < <(jq -r '.[]' <<<"$users")
declare -p users_array
(In the code snippet directly below) We begin looping through the array of users.
for i in "${users_array[@]}"
do
(In the code snippet directly below) This can probably be done in one line (see the sketch after this snippet), but it's grabbing the list of SSH keys associated with each user in the beanstalk-access group. It has not yet turned it into a BASH array; it's still a JSON list.
user_keys=$(aws iam list-ssh-public-keys --user-name $i)
keys=$(echo $user_keys | jq '.["SSHPublicKeys"] | [.[].SSHPublicKeyId]')
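As an aside, a hedged, untested one-line version of the two commands above could look like this:
# Untested sketch: fetch and extract the key IDs in a single pipeline.
keys=$(aws iam list-ssh-public-keys --user-name "$i" | jq '.["SSHPublicKeys"] | [.[].SSHPublicKeyId]')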
(In the code snippet directly below) Now it's converting that JSON list of each user's SSH keys into a BASH array.
readarray -t keys_array < <(jq -r '.[]' <<<"$keys")
declare -p keys_array
(In the code snippet directly below) Now we loop through each user's array of SSH keys.
for j in "${keys_array[@]}"
do
(In the code snippet directly below) We're adding each SSH key for each user to the authorized_keys file.
ssh_public_key=$(aws iam get-ssh-public-key --encoding SSH --user-name $i --ssh-public-key-id $j | jq '.["SSHPublicKey"] .SSHPublicKeyBody' | tr -d \")
echo $ssh_public_key >> /home/ec2-user/.ssh/authorized_keys
(In the code snippet directly below) Close out both the $keys_array loop and the $users_array loop.
done
done
(In the code snippet directly below) Give the authorized_keys file the same permissions it originally had.
chmod 600 /home/ec2-user/.ssh/authorized_keys
chown ec2-user:ec2-user /home/ec2-user/.ssh/authorized_keys
Step 4
If your Elastic Beanstalk EC2 instance is in a public subnet, you can just ssh into it using:
ssh ec2-user@ip-address -i /path/to/private/key
If your Elastic Beanstalk EC2 instance is in a private subnet (as it should be for cloud security best practices), then you will need to have a "bastion server" EC2 instance which will act as the gateway for tunneling all SSH access to EC2 instances. Look up ssh agent forwarding or ssh proxy commands to get an idea of how to accomplish SSH tunneling.
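As a hedged illustration (the bastion and instance addresses below are placeholders), agent forwarding or a ProxyJump hop looks like this:
# Placeholders: bastion-public-ip and instance-private-ip are not real addresses.
# Option 1: forward your local ssh agent through the bastion.
ssh -A ec2-user@bastion-public-ip
ssh ec2-user@instance-private-ip   # run from the bastion
# Option 2: jump through the bastion in a single command (OpenSSH 7.3+).
ssh -J ec2-user@bastion-public-ip ec2-user@instance-private-ip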
Adding new users
All you do is add them to your IAM beanstalk-access group and run a deployment, and that script will add them to your Elastic Beanstalk instances.
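In CLI terms, onboarding a new user is roughly (the user name is a placeholder):
# Placeholder user name; the group name matches the one from Step 1.
aws iam add-user-to-group --group-name beanstalk-access --user-name new-colleague
eb deploy   # the post-deploy hook then rewrites authorized_keys on each instance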

Instead of running echo and storing your keys in Git, you can upload your public keys to IAM users on AWS and then do:
commands:
  copy_ssh_key_userA:
    command: rm -f /home/ec2-user/.ssh/authorized_keys; aws iam list-users --query "Users[].[UserName]" --output text | while read User; do aws iam list-ssh-public-keys --user-name "$User" --query "SSHPublicKeys[?Status == 'Active'].[SSHPublicKeyId]" --output text | while read KeyId; do aws iam get-ssh-public-key --user-name "$User" --ssh-public-key-id "$KeyId" --encoding SSH --query "SSHPublicKey.SSHPublicKeyBody" --output text >> /home/ec2-user/.ssh/authorized_keys; done; done;

Related

How do I get a list of all Azure repos and images with one call

When I try to run timmy neutron squared's bash script to obtain a list of all my repos and images in my Azure ACR, I get the following error message:
PS C:\users<redacted>\Desktop\scripts> bash redacted .sh
repo1/image1 ERROR: The requested data does not exist.
Correlation ID: 56e7ce3b-7e06-44d3-9226-0bd3fe64e7a7. repo2/image2 ERROR: The requested data does not exist. Correlation ID: 78d37130-9e0f-4dd5-9a96-ec3a5d998a8c. repo3/image3 ERROR: The requested data does not exist. Correlation ID: ccafb0b5-0e29-43c1-8ceb-ac293ac0759d. repo4/image4 ERROR: The requested data does not exist. Correlation ID: e19b4747-96fc-46c5-8640-8a395e39c383.
If I run the two az acr repo ... commands individually, they run ok.
Can anybody see an issue with the way the variables are declared or the script syntax?
Please refer to timmy's bash script ("Retrieve list of repositories and their tag versions in one call") or see it below:
#!/bin/bash
registry_name='REGISTRY_NAME'
destination='LOCATION_TO_STORE_LIST'
az acr login --name $registry_name
touch $destination
repos="$(az acr repository list -n $registry_name --output tsv)"
for i in $repos; do
    images="$(az acr repository show-tags -n $registry_name --repository $i --output tsv --orderby time_desc)"
    for j in $images; do
        echo $i":"$j >> $destination;
    done;
done;
Help is very much appreciated.
I don't have access to az, but you could try something like:
#!/usr/bin/env bash

registry_name='REGISTRY_NAME'
destination='LOCATION_TO_STORE_LIST'

az acr login --name "$registry_name"
touch "$destination"

mapfile -t repos < <(az acr repository list -n "$registry_name" --output tsv)

create_image() {
    az acr repository show-tags -n "$registry_name" --repository "$1" --output tsv --orderby time_desc
}

for i in "${repos[@]}"; do
    declare -A images["$i"]="$(create_image "$i")"
done
Check the contents of the array:
declare -p images
Although the line:
declare -A images["$i"]="$(create_image "$i")"
can be replaced with:
printf '%s":"%s\n' "$i" "$(create_image "$i")" >> "$destination"
mapfile (aka readarray) is a bash 4+ feature.
The -A flag to declare creates an associative array, also a bash 4+ feature.
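A tiny self-contained illustration of both features (bash 4+), unrelated to az:
#!/usr/bin/env bash
# Requires bash 4+: mapfile/readarray and associative arrays.
mapfile -t fruits < <(printf '%s\n' apple banana cherry)
declare -A colors=( [apple]=red [banana]=yellow [cherry]=dark-red )
for f in "${fruits[@]}"; do
    echo "$f is ${colors[$f]}"
done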

bulk account creation using easyrsa build-client-full

Account creation
Here's how I run it; it only creates one user at a time and requires manually entering the PEM pass phrase and the ca.key pass phrase. What I want to achieve is to import a CSV file containing usernames and passwords and automatically create the accounts in bulk.
Here's my code trying to achieve that, but it fails with an easyrsa error:
#!/bin/bash
read -p "Enter Passphrase: " passphrase
while IFS=, read user pass;
do
    (echo "$pass"; echo "$pass"; echo "$passphrase") | sudo docker-compose run --rm openvpn easyrsa build-client-full "$user"
    sudo docker-compose run --rm openvpn ovpn_otp_user "$user"
    sudo docker-compose run --rm openvpn ovpn_getclient "$user" > "$HOME/$user.ovpn"
done < accounts.csv
Please help, I'm new to bash scripting.

File redirection not working in shell script for aws cli output

I'm creating ec2 instances and would like to get the user_data for each instance using the command:
aws ec2 describe-instance-attribute --instance-id i-xxxx --attribute userData --output text --query "UserData.Value" | base64 --decode > file.txt
When I run this directly in the terminal it works: I'm able to get the userData printed into file.txt. However, I need this to run in a shell script I have, which gets the instance id as a parameter.
The lines in test.sh are the following:
#!/bin/bash
echo "$(aws ec2 describe-instance-attribute --instance-id $1 --attribute userData --output text --query "UserData.Value" | base64 --decode)" > file.txt
Where $1 is the instance-id. When running:
./test.sh i-xxxxxxx
It creates an empty file.txt. I have changed the line in the script to:
echo "$(aws ec2 describe-instance-attribute --instance-id $1 --attribute userData --output text --query "UserData.Value" | base64 --decode)"
and it prints the userData to stdout. So why is it not working with file redirection?
Thank you,

Mount an EBS volume (not snapshot) to Elastic Beanstalk EC2

I'm migrating a legacy app to Elastic Beanstalk. It needs persistent storage (for the time being). I want to mount an EBS volume.
I was hoping the following would work in .ebextensions/ebs.config:
commands:
  01mkdir:
    command: "mkdir /data"
  02mount:
    command: "mount /dev/sdh /data"
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    value: /dev/sdh=vol-XXXXX
https://blogs.aws.amazon.com/application-management/post/Tx224DU59IG3OR9/Customize-Ephemeral-and-EBS-Volumes-in-Elastic-Beanstalk-Environments
But unfortunately I get the following error "(vol-XXXX) for parameter snapshotId is invalid. Expected: 'snap-...'."
Clearly this method only allows snapshots. Can anyone suggest a fix or an alternative method?
I have found a solution. It could be improved by removing the "sleep 10", but unfortunately that is required because aws ec2 attach-volume is async and returns straight away, before the attachment takes place.
container_commands:
  01mount:
    command: "aws ec2 attach-volume --volume-id vol-XXXXXX --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdh"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /data"
    test: "[ ! -d /data ]"
  04mount:
    command: "mount /dev/sdh /data"
    test: "! mountpoint -q /data"
Note: ideally this would run in the commands section, not container_commands, but the environment variables are not set in time.
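As a possible refinement (a sketch, reusing the same placeholder volume id), the fixed sleep could likely be replaced with the CLI's built-in waiter, as later answers do:
# Sketch: poll until the volume is actually attached instead of sleeping a fixed time.
aws ec2 wait volume-in-use --volume-ids vol-XXXXXX --region <your-region>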
To add to @Simon's answer (to avoid traps for the unwary):
If the persistent storage being mounted will ultimately be used inside a Docker container (e.g. if you're running Jenkins and want to persist jenkins_home), you need to restart the docker container after running the mount.
You need to have the 'ec2:AttachVolume' action permitted against both the EC2 instance (or the instance/* ARN) and the volume(s) you want to attach (or the volume/* ARN) in the EB assumed role policy (a sketch of granting this follows after these notes). Without this, the aws ec2 attach-volume command fails.
You need to pass in the --region to the aws ec2 ... command as well (at least, as of this writing)
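A hedged sketch of granting those permissions with an inline policy (the role name below is the common default Beanstalk instance profile role, and the policy file name is a placeholder; the JSON document would be similar to the one shown further down):
# Placeholders: adjust the role name to your environment's instance profile role.
aws iam put-role-policy \
    --role-name aws-elasticbeanstalk-ec2-role \
    --policy-name allow-attach-volume \
    --policy-document file://attach-volume-policy.json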
Alternatively, instead of using an EBS volume, you could consider using Elastic File System (EFS) storage. AWS has published a script on how to mount an EFS volume to Elastic Beanstalk EC2 instances, and it can also be attached to multiple EC2 instances simultaneously (which is not possible for EBS).
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/services-efs.html
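For orientation only, a minimal manual EFS mount might look like this (hypothetical file system id and region; the real setup should follow the linked AWS documentation):
# Hypothetical file system ID and region; requires the EFS mount target's security group
# to allow NFS (TCP 2049) from the instance.
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs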
Here's a config file that you can drop in .ebextensions. You will need to provide the VOLUME_ID that you want to attach. The test commands make it so that attaching and mounting only happens as needed, so that you can eb deploy repeatedly without errors.
container_commands:
  00attach:
    command: |
      export REGION=$(/opt/aws/bin/ec2-metadata -z | awk '{print substr($2, 0, length($2)-1)}')
      export INSTANCE_ID=$(/opt/aws/bin/ec2-metadata -i | awk '{print $2}')
      export VOLUME_ID=$(aws ec2 describe-volumes --region ${REGION} --output text --filters Name=tag:Name,Values=tf-trading-prod --query 'Volumes[*].VolumeId')
      aws ec2 attach-volume --region ${REGION} --device /dev/sdh --instance-id ${INSTANCE_ID} --volume-id ${VOLUME_ID}
      aws ec2 wait volume-in-use --region ${REGION} --volume-ids ${VOLUME_ID}
      sleep 1
    test: "! file -E /dev/xvdh"
  01mkfs:
    command: "mkfs -t ext3 /dev/xvdh"
    test: "file -s /dev/xvdh | awk '{print $2}' | grep -q data"
  02mkdir:
    command: "mkdir -p /data"
  03mount:
    command: "mount /dev/xvdh /data"
    test: "! mountpoint /data"
Have to use container_commands because when commands are run the source bundle is not fully unpacked yet.
.ebextensions/whatever.config
container_commands:
  chmod:
    command: chmod +x .platform/hooks/predeploy/mount-volume.sh
Predeploy hooks run after container commands but before the deployment. There is no need to restart your Docker container even if it mounts a directory on the attached EBS volume, because Beanstalk spins it up after the predeploy hooks complete. You can see it in the logs.
.platform/hooks/predeploy/mount-volume.sh
#!/bin/sh
# Make sure LF line endings are used in the file, otherwise there would be an error saying "file not found".
# All platform hooks run as root user, no need for sudo.
# Before attaching the volume find out the root volume's name, so that we can later use it for filtering purposes.
# -d – to filter out partitions.
# -P – to display the result as key-value pairs.
# -o – to output only the matching part.
# lsblk strips the "/dev/" part
ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*')
aws ec2 attach-volume --volume-id vol-xxx --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdf --region us-east-1
# The above command is async, so we need to wait.
aws ec2 wait volume-in-use --volume-ids vol-xxx --region us-east-1
# Now lsblk should show two devices. We figure out which one is non-root by filtering out the stored root volume name.
NON_ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*' | awk -v name="$ROOT_VOLUME_NAME" '$0 !~ name')
FILE_COMMAND_OUTPUT=$(file -s /dev/$NON_ROOT_VOLUME_NAME)
# Create a file system on the non-root device only if there isn't one already, so that we don't accidentally override it.
if test "$FILE_COMMAND_OUTPUT" = "/dev/$NON_ROOT_VOLUME_NAME: data"; then
  mkfs -t xfs /dev/$NON_ROOT_VOLUME_NAME
fi
mkdir /data
mount /dev/$NON_ROOT_VOLUME_NAME /data
# Need to make sure that the volume gets mounted after every reboot, because by default only root volume is automatically mounted.
cp /etc/fstab /etc/fstab.orig
NON_ROOT_VOLUME_UUID=$(lsblk -d -P -o +UUID | awk -v name="$NON_ROOT_VOLUME_NAME" '$0 ~ name' | grep -o 'UUID="[-0-9a-z]*"' | grep -o '[-0-9a-z]*')
# We specify 0 to prevent the file system from being dumped, and 2 to indicate that it is a non-root device.
# If you ever boot your instance without this volume attached, the nofail mount option enables the instance to boot
# even if there are errors mounting the volume.
# Debian derivatives, including Ubuntu versions earlier than 16.04, must also add the nobootwait mount option.
echo "UUID=$NON_ROOT_VOLUME_UUID /data xfs defaults,nofail 0 2" | tee -a /etc/fstab
Pretty sure that things that I do with grep and awk could be done in a more concise manner. I'm not great at Linux.
Instance profile should include these permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DescribeVolumes"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:instance/*"
      ]
    }
  ]
}
You have to ensure that you deploy the EBS volume in the same AZ as the Beanstalk environment and that you use a SingleInstance deployment. Then, if your instance crashes, the ASG will terminate it, create another one, and attach the volume to the new instance, keeping all the data.
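Roughly, in CLI terms (environment name, AZ, and volume size are placeholders):
# Placeholders: environment name and AZ; the volume must live in the same AZ
# that the single-instance environment launches into.
eb create my-env --single
aws ec2 create-volume --availability-zone us-east-1a --size 20 --volume-type gp2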
Here it is with the missing config:
commands:
  01mount:
    command: "export AWS_ACCESS_KEY_ID=<replace by your AWS key> && export AWS_SECRET_ACCESS_KEY=<replace by your AWS secret> && aws ec2 attach-volume --volume-id <replace by your volume id> --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/xvdf --region <replace with your region>"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /home/lucene"
    test: "[ ! -d /home/lucene ]"
  04mount:
    command: "mount /dev/xvdf /home/lucene"
    test: "! mountpoint -q /home/lucene"

scp shell stops when permission denied

I have a shell script that continuously puts some data from one server onto another. It works fine, but I want to make it more robust. At the moment, if the other server denies permission because the password was changed, the script freezes. Is there a way to make it ignore that transfer and carry on when this occurs?
inotifywait -m /srv/watchfolderfilme -e create -e moved_to |
while read path action file; do
    ...
    sshpass -p "****" scp -r /srv/newtorrentfiles/* user@0.0.0.0:/srv/torrentfiles && rm -r /srv/newtorrentfiles/*
done
scp is not the best tool to deal with your problem.
As George said, using public keys with ssh is the best way to stop password changes from breaking the script.
You can also do the trick with rsync, like this:
rsync -ahz --remove-source-files /srv/newtorrentfiles/ user@SRVNAME:/srv/torrentfiles/
or
rsync -ahz /srv/newtorrentfiles/ user@SRVNAME:/srv/torrentfiles/ && rm -r /srv/newtorrentfiles/*
To be sure that everything is done the way you want (making the script more robust), you can send yourself an email if the script fails for one reason or another, not just for lack of permission.
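For example, a hedged sketch (assumes key-based auth is already set up and a working local mailer such as mailx; the host and address are the same placeholders as above):
# Sketch: notify yourself when the copy fails instead of silently looping.
if ! scp -r /srv/newtorrentfiles/* user@0.0.0.0:/srv/torrentfiles; then
    echo "torrent sync failed on $(hostname) at $(date)" \
        | mail -s "scp sync failure" you@example.com
else
    rm -r /srv/newtorrentfiles/*
fi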
Maybe not the answer you're looking for but why don't you use SSH keys?
Updated Script:
inotifywait -m /srv/watchfolderfilme -e create -e moved_to |
while read path action file; do
    ...
    scp -r /srv/newtorrentfiles/* b@B:/srv/torrentfiles && rm -r /srv/newtorrentfiles/*
done
How to do it
a@A:~> ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/a/.ssh/id_rsa):
Created directory '/home/a/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/a/.ssh/id_rsa.
Your public key has been saved in /home/a/.ssh/id_rsa.pub.
The key fingerprint is:
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4 a@A
Now use ssh to create a directory ~/.ssh as user b on B. (The directory may already exist, which is fine):
a@A:~> ssh b@B mkdir -p .ssh
b@B's password:
Finally, append a's new public key to b@B:.ssh/authorized_keys and enter b's password one last time:
a@A:~> cat .ssh/id_rsa.pub | ssh b@B 'cat >> .ssh/authorized_keys'
b@B's password:
From now on you can log into B as b from A as a without a password:
a@A:~> ssh b@B
Source >> http://www.linuxproblem.org/art_9.html
