Mount an EBS volume (not snapshot) to Elastic Beanstalk EC2 - amazon-ec2

I'm migrating a legacy app to Elastic Beanstalk. It needs persistent storage (for the time being). I want to mount an EBS volume.
I was hoping the following would work in .ebextensions/ebs.config:
commands:
  01mkdir:
    command: "mkdir /data"
  02mount:
    command: "mount /dev/sdh /data"
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    value: /dev/sdh=vol-XXXXX
https://blogs.aws.amazon.com/application-management/post/Tx224DU59IG3OR9/Customize-Ephemeral-and-EBS-Volumes-in-Elastic-Beanstalk-Environments
But unfortunately I get the following error: "(vol-XXXX) for parameter snapshotId is invalid. Expected: 'snap-...'."
Clearly this method only allows snapshots. Can anyone suggest a fix or an alternative method?

I have found a solution. It could be improved by removing the "sleep 10", but unfortunately that's required because aws ec2 attach-volume is asynchronous and returns straight away, before the attachment takes place.
container_commands:
  01mount:
    command: "aws ec2 attach-volume --volume-id vol-XXXXXX --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdh"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /data"
    test: "[ ! -d /data ]"
  04mount:
    command: "mount /dev/sdh /data"
    test: "! mountpoint -q /data"
Note: ideally this would run in the commands section rather than container_commands, but the environment variables are not set in time there.
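If you would rather not rely on a fixed sleep, the CLI's volume-in-use waiter (also used in a later answer below) blocks until the attachment is reported; a sketch, where vol-XXXXXX and <your region> are placeholders and a one-second grace period is kept for the device node to appear:
container_commands:
  01attach:
    command: "aws ec2 attach-volume --volume-id vol-XXXXXX --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdh --region <your region>"
    ignoreErrors: true
  02wait:
    command: "aws ec2 wait volume-in-use --volume-ids vol-XXXXXX --region <your region> && sleep 1"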

To add to @Simon's answer (to avoid traps for the unwary):
If the persistent storage being mounted will ultimately be used inside a Docker container (e.g. if you're running Jenkins and want to persist jenkins_home), you need to restart the Docker container after running the mount.
You need to have the 'ec2:AttachVolume' action permitted against both the EC2 instance (or the instance/* ARN) and the volume(s) you want to attach (or the volume/* ARN) in the policy of the role the EB instances assume. Without this, the aws ec2 attach-volume command fails.
You need to pass the --region to the aws ec2 ... command as well (at least as of this writing). A minimal policy statement is sketched below.
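For reference, a minimal statement along those lines might look like this (a sketch; a fuller policy used by a later answer appears further down, and the ARNs can be scoped to your account and specific volume):
{
  "Effect": "Allow",
  "Action": ["ec2:AttachVolume"],
  "Resource": [
    "arn:aws:ec2:*:*:instance/*",
    "arn:aws:ec2:*:*:volume/*"
  ]
}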

Alternatively, instead of using an EBS volume, you could consider using Amazon Elastic File System (EFS). AWS has published a config showing how to mount an EFS volume on Elastic Beanstalk EC2 instances, and EFS can also be attached to multiple EC2 instances simultaneously (which is not possible with EBS).
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/services-efs.html
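For a rough idea of what the mount itself looks like underneath (a sketch with placeholders for the file system ID and region; the linked config file takes care of this, including installing the NFS client):
mkdir -p /mnt/efs
mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 fs-XXXXXXXX.efs.<your region>.amazonaws.com:/ /mnt/efs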

Here's a config file that you can drop into .ebextensions. You will need to provide the VOLUME_ID that you want to attach. The test commands make it so that attaching and mounting only happen as needed, so you can eb deploy repeatedly without errors.
container_commands:
  00attach:
    command: |
      export REGION=$(/opt/aws/bin/ec2-metadata -z | awk '{print substr($2, 0, length($2)-1)}')
      export INSTANCE_ID=$(/opt/aws/bin/ec2-metadata -i | awk '{print $2}')
      export VOLUME_ID=$(aws ec2 describe-volumes --region ${REGION} --output text --filters Name=tag:Name,Values=tf-trading-prod --query 'Volumes[*].VolumeId')
      aws ec2 attach-volume --region ${REGION} --device /dev/sdh --instance-id ${INSTANCE_ID} --volume-id ${VOLUME_ID}
      aws ec2 wait volume-in-use --region ${REGION} --volume-ids ${VOLUME_ID}
      sleep 1
    test: "! file -E /dev/xvdh"
  01mkfs:
    command: "mkfs -t ext3 /dev/xvdh"
    test: "file -s /dev/xvdh | awk '{print $2}' | grep -q data"
  02mkdir:
    command: "mkdir -p /data"
  03mount:
    command: "mount /dev/xvdh /data"
    test: "! mountpoint /data"

You have to use container_commands because when commands are run, the source bundle is not fully unpacked yet.
.ebextensions/whatever.config
container_commands:
  chmod:
    command: chmod +x .platform/hooks/predeploy/mount-volume.sh
Predeploy hooks run after container commands but before the deployment. There is no need to restart your Docker container even if it mounts a directory on the attached EBS volume, because Beanstalk spins it up after the predeploy hooks complete. You can see this in the logs.
.platform/hooks/predeploy/mount-volume.sh
#!/bin/sh
# Make sure LF line endings are used in the file, otherwise there would be an error saying "file not found".
# All platform hooks run as root user, no need for sudo.
# Before attaching the volume find out the root volume's name, so that we can later use it for filtering purposes.
# -d – to filter out partitions.
# -P – to display the result as key-value pairs.
# -o – to output only the matching part.
# lsblk strips the "/dev/" part
ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*')
aws ec2 attach-volume --volume-id vol-xxx --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdf --region us-east-1
# The above command is async, so we need to wait.
aws ec2 wait volume-in-use --volume-ids vol-xxx --region us-east-1
# Now lsblk should show two devices. We figure out which one is non-root by filtering out the stored root volume name.
NON_ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*' | awk -v name="$ROOT_VOLUME_NAME" '$0 !~ name')
FILE_COMMAND_OUTPUT=$(file -s /dev/$NON_ROOT_VOLUME_NAME)
# Create a file system on the non-root device only if there isn't one already, so that we don't accidentally override it.
if test "$FILE_COMMAND_OUTPUT" = "/dev/$NON_ROOT_VOLUME_NAME: data"; then
mkfs -t xfs /dev/$NON_ROOT_VOLUME_NAME
fi
# -p so reruns of the hook don't fail if the directory already exists; skip the mount if it's already mounted.
mkdir -p /data
mountpoint -q /data || mount /dev/$NON_ROOT_VOLUME_NAME /data
# Need to make sure that the volume gets mounted after every reboot, because by default only root volume is automatically mounted.
cp /etc/fstab /etc/fstab.orig
NON_ROOT_VOLUME_UUID=$(lsblk -d -P -o +UUID | awk -v name="$NON_ROOT_VOLUME_NAME" '$0 ~ name' | grep -o 'UUID="[-0-9a-z]*"' | grep -o '[-0-9a-z]*')
# We specify 0 to prevent the file system from being dumped, and 2 to indicate that it is a non-root device.
# If you ever boot your instance without this volume attached, the nofail mount option enables the instance to boot
# even if there are errors mounting the volume.
# Debian derivatives, including Ubuntu versions earlier than 16.04, must also add the nobootwait mount option.
echo "UUID=$NON_ROOT_VOLUME_UUID /data xfs defaults,nofail 0 2" | tee -a /etc/fstab
Pretty sure the things I do with grep and awk could be done more concisely; I'm not great at Linux.
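To sanity-check the result without rebooting, something like this works (a quick sketch, assuming the script above ran cleanly):
mount -a            # re-reads /etc/fstab; should produce no errors
findmnt /data       # confirms /data is a mount point and shows its source device
df -h /data         # shows the size of the attached volume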
The instance profile should include these permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DescribeVolumes"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:instance/*"
      ]
    }
  ]
}
You have to ensure that you create the EBS volume in the same AZ as your Beanstalk instances and that you use a SingleInstance environment. Then if your instance crashes, the ASG will terminate it, create another one, and the hook will attach the volume to the new instance, keeping all the data.
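If you prefer to pin those settings in .ebextensions too, a sketch along these lines should work (namespace and option names quoted from memory, and us-east-1a is just an example AZ; double-check against the Elastic Beanstalk option reference):
option_settings:
  - namespace: aws:elasticbeanstalk:environment
    option_name: EnvironmentType
    value: SingleInstance
  - namespace: aws:autoscaling:asg
    option_name: Custom Availability Zones
    value: us-east-1a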

Here it is with the missing config filled in:
commands:
  01mount:
    command: "export AWS_ACCESS_KEY_ID=<replace by your AWS key> && export AWS_SECRET_ACCESS_KEY=<replace by your AWS secret> && aws ec2 attach-volume --volume-id <replace by your volume id> --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/xvdf --region <replace with your region>"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /home/lucene"
    test: "[ ! -d /home/lucene ]"
  04mount:
    command: "mount /dev/xvdf /home/lucene"
    test: "! mountpoint -q /home/lucene"

Related

Bash script not running properly in OpenStack server creation

I have an OpenStack server on which I want to create an instance with a user-data file, for example:
openstack server create --flavor 2 --image 34bf1632-86ed-46ca-909e-c6ace830f91f --nic net-id=d444145e-3ccb-4685-88ee --security-group default --key-name Adeel --user-data ./adeel/script.sh m3
script.sh contains:
#cloud-config
password: mypasswd
chpasswd: { expire: False }
ssh_pwauth: True
#!/bin/sh
wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.17.7-linux-x86_64.tar.gz && tar -xzf elastic-agent-7.17.7-linux-x86_64.tar.gz
cd elastic-agent-7.17.7-linux-x86_64
sudo ./elastic-agent install \
--fleet-server-es=http://localhost:9200 \
--fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2Njc0MDM1 \
--fleet-server-policy=499b5aa7-d214-5b5d \
--fleet-server-insecure-http
When I add this script, nothing is executed. I want to run the above script when my instance boots for the first time.
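cloud-init decides how to handle user data from its first line, so a single file can't be both a #cloud-config document and a #! script (combining the two requires a MIME multi-part payload). A hedged sketch of the same steps expressed purely as cloud-config, using its runcmd module, which runs once at first boot:
#cloud-config
password: mypasswd
chpasswd: { expire: False }
ssh_pwauth: True
runcmd:
  - wget https://artifacts.elastic.co/downloads/beats/elastic-agent/elastic-agent-7.17.7-linux-x86_64.tar.gz
  - tar -xzf elastic-agent-7.17.7-linux-x86_64.tar.gz
  - cd elastic-agent-7.17.7-linux-x86_64 && ./elastic-agent install --fleet-server-es=http://localhost:9200 --fleet-server-service-token=AAEAAWVsYXN0aWMvZmxlZXQtc2VydmVyL3Rva2VuLTE2Njc0MDM1 --fleet-server-policy=499b5aa7-d214-5b5d --fleet-server-insecure-http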

Correct way to deploy a container from GitLab to EC2

I'm trying to deploy my container from the GitLab registry to an EC2 instance. I managed to deploy the container, but when I change something and want to deploy again, I have to remove the old container and the old images first. For that I created this script to remove everything and deploy again.
...
deploy-job:
  stage: deploy
  only:
    - master
  script:
    - mkdir -p ~/.ssh
    - echo -e "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa
    - chmod 600 ~/.ssh/id_rsa
    - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'
    - ssh -i ~/.ssh/id_rsa ec2-user@$DEPLOY_SERVER "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com &&
      docker stop $(docker ps -a -q) &&
      docker rm $(docker ps -a -q) &&
      docker pull registry.gitlab.com/doesntmatter/demo:latest &&
      docker image tag registry.gitlab.com/doesntmatter/demo:latest doesntmatter/demo &&
      docker run -d -p 80:8080 doesntmatter/demo"
When I try this script, I get this error:
"docker stop" requires at least 1 argument. <<-------------------- error
See 'docker stop --help'.
Usage: docker stop [OPTIONS] CONTAINER [CONTAINER...]
Stop one or more running containers
Running after script
00:01
Uploading artifacts for failed job
00:01
ERROR: Job failed: exit code 1
If you look closer, I use $(docker ps -a -q) after the docker stop.
Questions:
I know this is not the ideal way to do my deploys (I'm a developer), can you please suggest other ways, just using GitLab and EC2?
Is there any way to avoid this error, whether or not there are containers on my machine?
Probably no containers were running when the job was executed.
To avoid this behavior, you can change your command a bit:
docker ps -a -q | xargs -r sudo docker stop
docker ps -a -q | xargs -r sudo docker rm
These will not produce errors if no containers are running.
Beyond that, there are indeed other ways to deploy a container on AWS, with services that handle containers very well, like ECS, EKS, or Fargate. Also think about Terraform to deploy your infrastructure using the IaC principle (even for your EC2 instance).
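For the redeploy-in-place approach itself, running the container under a fixed name (here a hypothetical name, app) lets each deploy replace just that container instead of stopping everything on the host; a sketch of the remote command body:
docker rm -f app 2>/dev/null || true
docker pull registry.gitlab.com/doesntmatter/demo:latest
docker run -d --name app -p 80:8080 registry.gitlab.com/doesntmatter/demo:latest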

aws ec2 run-instances: script passed as plain text is ignored

I'm trying to pass a script as the --user-data parameter.
If the same script is passed via --user-data file://some_file.sh, everything works. It also works if I launch the instance through the AWS console and add the user data in the corresponding launch configuration box.
My CLI command is
aws ec2 run-instances --image-id ami-0cc0a36f626a4fdf5 --count 1 --instance-type t2.micro --key-name key_name --security-group-ids sg-00000000 --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=some_name}]" --output table --user-data "sudo touch /tmp/install.log && sudo chmod 777 /tmp/install.log && echo $(date) >> /tmp/install.log"
If the same is run as a script, its content is formatted as below:
#!/bin/bash
sudo touch /tmp/install.log
sudo chmod 777 /tmp/install.log
echo $(date) >> /tmp/install.log
Also, I'd like to mention that I tried to pass the string in different formats, like:
--user-data echo "some text"
--user-data "command_1\n command_2\n"
--user-data "command_1 && command_2"
--user-data "command_1; command_2;"
--user-data "#!/bin/bash; command_1; command_2;"
The user data is visible after launch, but not executed:
$ curl -L http://169.254.169.254/latest/user-data/
The first line must start with #!.
Then, subsequent lines are executed. They must be separated by a proper newline. It looks like \n is not interpreted correctly.
From how to pass in the user-data when launching AWS instances using CLI:
$ aws ec2 run-instances --image-id ami-16d4986e --user-data '#!/bin/bash
> poweroff'
As an experiment, I put this at the end of the run-instances command:
aws ec2 run-instances ... --user-data '#!
echo bar >/tmp/foo
'
When I logged into the instance, I could see the /tmp/foo file.
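Another way to get real newlines without a heredoc or a file is ANSI-C quoting in bash/zsh; a sketch where ... stands for the rest of the original flags (sudo is dropped because user data already runs as root):
aws ec2 run-instances ... --user-data $'#!/bin/bash\ntouch /tmp/install.log\nchmod 777 /tmp/install.log\ndate >> /tmp/install.log\n'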

How do I mount an EFS endpoint in /etc/fstab using CloudFormation in the UserData section?

When I write a bash command in the UserData section of my CloudFormation template, the EFS endpoint is not inserted into /etc/fstab.
My bash command looks like this:
echo "$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).${EfsFileSystem}.efs.aws-region.amazonaws.com:/ /mnt/ nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0" >> /etc/fstab
I have to mount the endpoint manually, using:
mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-fbxxxx.efs.us-east-1.amazonaws.com:/ /mnt/
You can find a working example of mounting EFS via UserData here:
https://github.com/Bit-Clouded/Glenlivet/blob/master/platforms/ecs-base.template#L275
#!/bin/bash
apt-get update -qqy && apt-get install -qqy nfs-common
EC2_AVAIL_ZONE=`curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone`
DIR_SRC=$EC2_AVAIL_ZONE.${SharedDiskGp}.efs.${AWS::Region}.amazonaws.com
mkdir /mnt/efs
echo -e "$DIR_SRC:/ /mnt/efs nfs defaults 0 0" | tee -a /etc/fstab
mount -a
# restart docker service so efs mount can come into effect.
service docker restart
I've copied the relevant bit here as per SO guidelines.
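If the instances are Amazon Linux, the amazon-efs-utils mount helper is another option (a sketch; fs-XXXXXXXX is a placeholder, and the package has to be built from source on most other distributions):
yum install -y amazon-efs-utils
mkdir -p /mnt/efs
mount -t efs fs-XXXXXXXX:/ /mnt/efs
# or, for a persistent mount:
# echo "fs-XXXXXXXX:/ /mnt/efs efs defaults,_netdev 0 0" >> /etc/fstab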

EC2 - Create AMI - Cannot connect to new instance

I am experiencing difficulty trying to launch an AMI from an EBS volume. I am basically trying to launch another instance of a Linux (i386) based AMI that I have already configured the way I want. I have followed many guides for the past week. So far, I am able to create the custom private AMI, but I am unable to connect to it after launching the new instance. I suspect that the AMI I have created is misconfigured in some way (maybe files didn't get fully copied over).
Anyhow here are the basic steps I'm going through to try to create the AMI:
ec2-create-volume -K pk-xxxxxx.pem -C cert-xxxxxx.pem --size 10 --availability-zone us-east-1a
ec2-attach-volume -K pk-xxxxxx.pem -C cert-xxxxxx.pem vol-xxxxxx --instance xxxxxx --device /dev/sdh
yes | mkfs -t ext3 /dev/sdh
mkdir /mnt/ebsimage
echo '/dev/sdh /mnt/ebsimage ext3 defaults,noatime 0 0' >> /etc/fstab
mount /mnt/ebsimage
umount /mnt/ebsimage
ec2-detach-volume -K pk-xxxxxx.pem -C cert-xxxxxx.pem vol-xxxxxx --instance xxxxxx
ec2-create-snapshot -K pk-xxxxxx.pem -C cert-xxxxxx.pem vol-xxxxxx
ec2reg -K pk-xxxxxx.pem -C cert-xxxxxx.pem -s snap-xxxxx -a i386 -d -n --kernel aki-xxxxx --ramdisk ari-xxxxxx
I'm pretty sure either my commands around mount are messed up or my commands around ec2reg are messed up. Any suggestions?
I have also tried replacing
yes | mkfs -t ext3 /dev/sdh
mkdir /mnt/ebsimage
echo '/dev/sdh /mnt/ebsimage ext3 defaults,noatime 0 0' >> /etc/fstab
mount /mnt/ebsimage
with a script designed to use rsync and handle some other details, but again the new instance launched from the AMI cannot be connected to. Here is a copy of the script:
#!/bin/sh
vol=/dev/sdh
ebsmnt=/mnt/ebsimage
mkdir ${ebsmnt}
mkfs.ext3 -F ${vol}
sync
echo "mount $vol $ebsmnt"
mount $vol $ebsmnt
mkdir ${ebsmnt}/mnt
mkdir ${ebsmnt}/proc
mkdir ${ebsmnt}/sys
devdir=${ebsmnt}/dev
echo "mkdir ${devdir}"
mkdir ${devdir}
mknod ${devdir}/null c 1 3
mknod ${devdir}/zero c 1 5
mknod ${devdir}/tty c 5 0
mknod ${devdir}/console c 5 1
ln -s null ${devdir}/X0R
rsync -rlpgoD -t -r -S -l -vh \
--exclude /sys --exclude /proc \
--exclude /dev \
--exclude /media --exclude /mnt \
--exclude /sys --exclude /ebs --exclude /mnt \
-x /* ${ebsmnt}
df -h
Because I have the same results as the first example, I'm not sure if I'm closer to solving this issue or further away. Any help would be appreciated.
To create your EBS AMI from an S3 based AMI, you can use my blog post:
http://www.capsunlock.net/2009/12/create-ebs-boot-ami.html
I don't know which distribution you are trying to run, but if you want to run Debian, there is a script which manages the entire bootstrapping process, including AMI creation (EBS boot).
You can find it on my GitHub account:
https://github.com/andsens/ec2debian-build-ami
The script has been thoroughly tested and allows you to include other scripts in order to customize your AMI. If you want to modify the script itself, just fork it; at least then you have a base to work from where you know everything works.
I would not recommend the process you outlined, though; it seems quite 'messy'.
