EC2 - Create AMI - Cannot connect to new instance

I am experiencing difficulty trying to launch an AMI from an EBS volume. I am basically trying to launch another instance of a Linux (i386) based AMI that I have already configured the way I want. I have followed many guides over the past week. So far, I am able to create the custom private AMI, but I am unable to connect to it after launching the new instance. I suspect that the AMI I have created is misconfigured in some way (maybe files didn't get fully copied over).
Anyhow, here are the basic steps I'm going through to try to create the AMI:
ec2-create-volume -K pk-xxxxxx.pem -C cert-xxxxxx.pem --size 10 --availability-zone us-east-1a
ec2-attach-volume -K pk-xxxxxx.pem -C cert-xxxxxx.pem vol-xxxxxx --instance xxxxxx --device /dev/sdh
yes | mkfs -t ext3 /dev/sdh
mkdir /mnt/ebsimage
echo '/dev/sdh /mnt/ebsimage ext3 defaults,noatime 0 0' >> /etc/fstab
mount /mnt/ebsimage
umount /mnt/ebsimage
ec2-detach-volume -K pk-xxxxxx.pem -C cert-xxxxxx.pem vol-xxxxxx --instance xxxxxx
ec2-create-snapshot -K pk-xxxxxx.pem -C cert-xxxxxx.pem vol-xxxxxx
ec2reg -K pk-xxxxxx.pem -C cert-xxxxxx.pem -s snap-xxxxx -a i386 -d -n --kernel aki-xxxxx --ramdisk ari-xxxxxx
I'm pretty sure either my commands around mount are messed up or my commands around ec2reg are messed up. Any suggestions?
I have also tried replacing
yes | mkfs -t ext3 /dev/sdh
mkdir /mnt/ebsimage
echo '/dev/sdh /mnt/ebsimage ext3 defaults,noatime 0 0' >> /etc/fstab
mount /mnt/ebsimage
with a script that uses rsync and adds some other details, but again I cannot connect to the new instance launched from the AMI. Here is a copy of the script.
#!/bin/sh
vol=/dev/sdh
ebsmnt=/mnt/ebsimage
mkdir ${ebsmnt}
mkfs.ext3 -F ${vol}
sync
echo "mount $vol $ebsmnt"
mount $vol $ebsmnt
mkdir ${ebsmnt}/mnt
mkdir ${ebsmnt}/proc
mkdir ${ebsmnt}/sys
devdir=${ebsmnt}/dev
echo "mkdir ${devdir}"
mkdir ${devdir}
mknod ${devdir}/null c 1 3
mknod ${devdir}/zero c 1 5
mknod ${devdir}/tty c 5 0
mknod ${devdir}/console c 5 1
ln -s null ${devdir}/X0R
rsync -rlpgoDtS -vh \
--exclude /sys --exclude /proc --exclude /dev \
--exclude /media --exclude /mnt --exclude /ebs \
-x /* ${ebsmnt}
df -h
Because I have the same results as the first example, I'm not sure if I'm closer to solving this issue or further away. Any help would be appreciated.

To create your EBS AMI from an S3-based AMI, you can use my blog post:
http://www.capsunlock.net/2009/12/create-ebs-boot-ami.html
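In rough outline, that post copies the running root filesystem onto the EBS volume, snapshots it, and registers the snapshot as an EBS-boot AMI. A minimal sketch of that flow, assuming the volume is attached as /dev/sdh (IDs are placeholders and the exact flags may differ from the post):
mkfs.ext3 /dev/sdh
mkdir -p /mnt/ebsimage
mount /dev/sdh /mnt/ebsimage
# copy the root filesystem, staying on one filesystem and skipping pseudo-filesystems
rsync -aSx --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/mnt --exclude=/media / /mnt/ebsimage/
mkdir -p /mnt/ebsimage/proc /mnt/ebsimage/sys /mnt/ebsimage/dev /mnt/ebsimage/mnt /mnt/ebsimage/media
umount /mnt/ebsimage
# then, from a machine with the EC2 API tools configured:
ec2-detach-volume vol-xxxxxx
ec2-create-snapshot vol-xxxxxx
ec2-register -s snap-xxxxxx -a i386 -n "my-ebs-ami" -d "EBS boot AMI" --kernel aki-xxxxx --ramdisk ari-xxxxxx --root-device-name /dev/sda1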

I don't know which distribution you are trying to run, but if you want to run Debian, there is a script that manages the entire bootstrapping process, including AMI creation (EBS boot).
You can find it on my github account:
https://github.com/andsens/ec2debian-build-ami
The script has been thoroughly tested and allows you to include other scripts in order to customize your AMI. If you want to modify the script itself, just fork it; at least then you have a base to work from where you know everything works.
I would not recommend the process you outlined, though; it seems quite 'messy'.

Related

Changing ownership of a directory/volume using linux in Dockerfile

I'm working on a Dockerfile that creates two volumes, /data/ and /artifacts/, and one user called "omnibo", and then assigns this user ownership of the two volumes. I tried using the chown command, but after checking, the volumes' ownership is still assigned to the root user.
This is what's in my Dockerfile:
FROM alpine:latest
RUN useradd -m omnibo
VOLUME /data/ /artifact/
RUN chown -R omnibo /data /artifact
RUN mkdir -p /var/cache /var/cookbook
COPY fix-joyou.sh /root/joyou.sh
COPY Molsfile /var/file/Molsfile
RUN bash /root/fix-joyou.sh && rm -rf /root/fix-joyou.sh && \
yum -y upgrade && \
yum -y install curl iproute hostname && \
curl -L https://monvo.tool.sh/install.sh | bash && \
/opt/embedded/bin/gem install -N berkshelf && \
/opt/embedded/bin/berks vendor -b /var/cinc/Molsfile /var/cinc/cookbook
ENV RUBYOPT=-r/usr/local/share/ruby-docker-copy-patch.rb
USER omnibo
WORKDIR /home/omnibo
This Dockerfile builds and the container runs successfully, but when I do "ll" it shows that these two volumes are owned by "root". Is there anything I can do to give ownership to "omnibo"?
I think you have to create the directories and set the permissions before the VOLUME instruction. According to the Docker documentation: "If any build steps change the data within the volume after it has been declared, those changes will be discarded". See https://docs.docker.com/engine/reference/builder/#volume
Try the following:
FROM alpine:latest
RUN useradd -m omnibo
RUN mkdir /data /artifact && chown -R omnibo /data /artifact
VOLUME /data/ /artifact/
...
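As a quick sanity check after the change (a sketch; the image tag is a placeholder and it assumes the rest of the Dockerfile builds cleanly):
docker build -t omnibo-test .
docker run --rm omnibo-test ls -ld /data /artifact   # both directories should now be owned by omnibo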

How do I mount an EFS endpoint in /etc/fstab using CloudFormation's UserData section?

When I write a bash command in the UserData section of a CloudFormation template, the EFS endpoint is not inserted into /etc/fstab.
My bash command looks like this:
echo "$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone).${EfsFileSystem}.efs.aws-region.amazonaws.com:/ /mnt/ nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 0 0" >> /etc/fstab
Instead, I have to mount the endpoint manually using
mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-fbxxxx.efs.us-east-1.amazonaws.com:/ /mnt/
You can find a working example of mounting EFS via UserData here
https://github.com/Bit-Clouded/Glenlivet/blob/master/platforms/ecs-base.template#L275
#!/bin/bash
apt-get update -qqy && apt-get install -qqy nfs-common
EC2_AVAIL_ZONE=`curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone`
DIR_SRC=$EC2_AVAIL_ZONE.${SharedDiskGp}.efs.${AWS::Region}.amazonaws.com
mkdir /mnt/efs
echo -e "$DIR_SRC:/ /mnt/efs nfs defaults 0 0" | tee -a /etc/fstab
mount -a
# restart docker service so efs mount can come into effect.
service docker restart
Copied the relevant bit here as per SO guidelines.
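As an aside (not part of the answer above), on Amazon Linux you can also install the amazon-efs-utils helper, which gives a shorter fstab entry; the file system ID below is a placeholder:
yum install -y amazon-efs-utils
mkdir -p /mnt/efs
echo "fs-xxxxxxxx:/ /mnt/efs efs _netdev,tls 0 0" >> /etc/fstab
mount -a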

Mount an EBS volume (not snapshot) to Elastic Beanstalk EC2

I'm migrating a legacy app to Elastic Beanstalk. It needs persistent storage (for the time being). I want to mount an EBS volume.
I was hoping the following would work in .ebextensions/ebs.config:
commands:
  01mkdir:
    command: "mkdir /data"
  02mount:
    command: "mount /dev/sdh /data"
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    value: /dev/sdh=vol-XXXXX
https://blogs.aws.amazon.com/application-management/post/Tx224DU59IG3OR9/Customize-Ephemeral-and-EBS-Volumes-in-Elastic-Beanstalk-Environments
But unfortunately I get the following error: "(vol-XXXX) for parameter snapshotId is invalid. Expected: 'snap-...'."
Clearly this method only allows snapshots. Can anyone suggest a fix or an alternative method?
I have found a solution. It could be improved by removing the "sleep 10", but unfortunately that is required because aws ec2 attach-volume is asynchronous and returns straight away, before the attachment takes place.
container_commands:
  01mount:
    command: "aws ec2 attach-volume --volume-id vol-XXXXXX --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdh"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /data"
    test: "[ ! -d /data ]"
  04mount:
    command: "mount /dev/sdh /data"
    test: "! mountpoint -q /data"
Note: ideally this would run in the commands section rather than container_commands, but the environment variables are not set in time there.
To add to @Simon's answer (to avoid traps for the unwary):
If the persistent storage being mounted will ultimately be used inside a Docker container (e.g. if you're running Jenkins and want to persist jenkins_home), you need to restart the docker container after running the mount.
You need to have the 'ec2:AttachVolume' action permitted against both the EC2 instance (or the instance/* ARN) and the volume(s) you want to attach (or the volume/* ARN) in the EB assumed role policy. Without this, the aws ec2 attach-volume command fails.
You need to pass in the --region to the aws ec2 ... command as well (at least, as of this writing)
Alternatively, instead of using an EBS volume, you could consider using Elastic File System (EFS) storage. AWS has published a script showing how to mount an EFS volume on Elastic Beanstalk EC2 instances, and EFS can also be attached to multiple EC2 instances simultaneously (which is not possible with EBS).
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/services-efs.html
Here's a config file that you can drop in .ebextensions. You will need to provide the VOLUME_ID that you want to attach. The test commands make it so that attaching and mounting only happens as needed, so that you can eb deploy repeatedly without errors.
container_commands:
  00attach:
    command: |
      export REGION=$(/opt/aws/bin/ec2-metadata -z | awk '{print substr($2, 0, length($2)-1)}')
      export INSTANCE_ID=$(/opt/aws/bin/ec2-metadata -i | awk '{print $2}')
      export VOLUME_ID=$(aws ec2 describe-volumes --region ${REGION} --output text --filters Name=tag:Name,Values=tf-trading-prod --query 'Volumes[*].VolumeId')
      aws ec2 attach-volume --region ${REGION} --device /dev/sdh --instance-id ${INSTANCE_ID} --volume-id ${VOLUME_ID}
      aws ec2 wait volume-in-use --region ${REGION} --volume-ids ${VOLUME_ID}
      sleep 1
    test: "! file -E /dev/xvdh"
  01mkfs:
    command: "mkfs -t ext3 /dev/xvdh"
    test: "file -s /dev/xvdh | awk '{print $2}' | grep -q data"
  02mkdir:
    command: "mkdir -p /data"
  03mount:
    command: "mount /dev/xvdh /data"
    test: "! mountpoint /data"
You have to use container_commands because when commands run, the source bundle is not yet fully unpacked.
.ebextensions/whatever.config
container_commands:
  chmod:
    command: chmod +x .platform/hooks/predeploy/mount-volume.sh
Predeploy hooks run after container commands but before the deployment. There is no need to restart your Docker container even if it mounts a directory on the attached EBS volume, because Beanstalk spins it up after the predeploy hooks complete. You can see this in the logs.
.platform/hooks/predeploy/mount-volume.sh
#!/bin/sh
# Make sure LF line endings are used in the file, otherwise there would be an error saying "file not found".
# All platform hooks run as root user, no need for sudo.
# Before attaching the volume find out the root volume's name, so that we can later use it for filtering purposes.
# -d – to filter out partitions.
# -P – to display the result as key-value pairs.
# -o – to output only the matching part.
# lsblk strips the "/dev/" part
ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*')
aws ec2 attach-volume --volume-id vol-xxx --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdf --region us-east-1
# The above command is async, so we need to wait.
aws ec2 wait volume-in-use --volume-ids vol-xxx --region us-east-1
# Now lsblk should show two devices. We figure out which one is non-root by filtering out the stored root volume name.
NON_ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*' | awk -v name="$ROOT_VOLUME_NAME" '$0 !~ name')
FILE_COMMAND_OUTPUT=$(file -s /dev/$NON_ROOT_VOLUME_NAME)
# Create a file system on the non-root device only if there isn't one already, so that we don't accidentally override it.
if test "$FILE_COMMAND_OUTPUT" = "/dev/$NON_ROOT_VOLUME_NAME: data"; then
mkfs -t xfs /dev/$NON_ROOT_VOLUME_NAME
fi
mkdir /data
mount /dev/$NON_ROOT_VOLUME_NAME /data
# Need to make sure that the volume gets mounted after every reboot, because by default only root volume is automatically mounted.
cp /etc/fstab /etc/fstab.orig
NON_ROOT_VOLUME_UUID=$(lsblk -d -P -o +UUID | awk -v name="$NON_ROOT_VOLUME_NAME" '$0 ~ name' | grep -o 'UUID="[-0-9a-z]*"' | grep -o '[-0-9a-z]*')
# We specify 0 to prevent the file system from being dumped, and 2 to indicate that it is a non-root device.
# If you ever boot your instance without this volume attached, the nofail mount option enables the instance to boot
# even if there are errors mounting the volume.
# Debian derivatives, including Ubuntu versions earlier than 16.04, must also add the nobootwait mount option.
echo "UUID=$NON_ROOT_VOLUME_UUID /data xfs defaults,nofail 0 2" | tee -a /etc/fstab
Pretty sure that things that I do with grep and awk could be done in a more concise manner. I'm not great at Linux.
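For what it's worth, here is a hypothetical, more concise version of the same lookups (not the script above; it assumes exactly one extra data disk is attached besides the root disk):
ROOT_DISK=$(lsblk -no PKNAME "$(findmnt -n -o SOURCE /)")   # disk backing the root filesystem
DATA_DISK=$(lsblk -dn -o NAME | grep -v "^${ROOT_DISK}$")   # the one remaining (non-root) disk
DATA_UUID=$(blkid -s UUID -o value "/dev/${DATA_DISK}")     # filesystem UUID for the fstab entry
echo "UUID=${DATA_UUID} /data xfs defaults,nofail 0 2" >> /etc/fstab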
Instance profile should include these permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DescribeVolumes"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:instance/*"
      ]
    }
  ]
}
You have to ensure that you deploy the EBS volume in the same AZ as your Beanstalk environment and that you use a SingleInstance deployment. Then, if your instance crashes, the ASG will terminate it, create another one, and attach the volume to the new instance, keeping all the data.
Here it is with the missing config:
commands:
  01mount:
    command: "export AWS_ACCESS_KEY_ID=<replace by your AWS key> && export AWS_SECRET_ACCESS_KEY=<replace by your AWS secret> && aws ec2 attach-volume --volume-id <replace by your volume id> --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/xvdf --region <replace with your region>"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /home/lucene"
    test: "[ ! -d /home/lucene ]"
  04mount:
    command: "mount /dev/xvdf /home/lucene"
    test: "! mountpoint -q /home/lucene"

Using ssh-agent with docker on macOS

I would like to use ssh-agent to forward my keys into the docker image and pull from a private github repo.
I am using a slightly modified version of https://github.com/phusion/passenger-docker with boot2docker on Yosemite.
ssh-add -l
...key details
boot2docker up
Then I use the command which I have seen in a number of places (e.g. https://gist.github.com/d11wtq/8699521):
docker run --rm -t -i -v $SSH_AUTH_SOCK:/ssh-agent -e SSH_AUTH_SOCK=/ssh-agent my_image /bin/bash
However it doesn't seem to work:
root@299212f6fee3:/# ssh-add -l
Could not open a connection to your authentication agent.
root@299212f6fee3:/# eval `ssh-agent -s`
Agent pid 19
root@299212f6fee3:/# ssh-add -l
The agent has no identities.
root@299212f6fee3:/# ssh git@github.com
Warning: Permanently added the RSA host key for IP address '192.30.252.128' to the list of known hosts.
Permission denied (publickey).
Since version 2.2.0.0, Docker for Mac allows users to access the host's SSH agent inside containers.
Here's an example command that lets you do it:
docker run --rm -it \
-v /run/host-services/ssh-auth.sock:/ssh-agent \
-e SSH_AUTH_SOCK="/ssh-agent" \
my_image
Note that you have to mount the specific path (/run/host-services/ssh-auth.sock) instead of the path contained in the $SSH_AUTH_SOCK environment variable, as you would on Linux hosts.
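A quick way to verify the forwarding works (a sketch using a throwaway Alpine container; any image with an ssh client would do):
docker run --rm \
  -v /run/host-services/ssh-auth.sock:/ssh-agent \
  -e SSH_AUTH_SOCK=/ssh-agent \
  alpine sh -c 'apk add --no-cache openssh-client >/dev/null && ssh-add -l'
If the agent is reachable, ssh-add -l lists the keys loaded on your Mac.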
A one-liner:
Here’s how to set it up on Ubuntu 16 running a Debian Jessie image:
docker run --rm -it --name container_name \
-v $(dirname $SSH_AUTH_SOCK):$(dirname $SSH_AUTH_SOCK) \
-e SSH_AUTH_SOCK=$SSH_AUTH_SOCK my_image
https://techtip.tech.blog/2016/12/04/using-ssh-agent-forwarding-with-a-docker-container/
I expanded on @wilwilson's answer and created a script that will set up agent forwarding in an OSX boot2docker environment.
https://gist.github.com/rcoup/53e8dee9f5ea27a51855
#!/bin/bash
# Use a unique ssh socket name per-invocation of this script
SSH_SOCK=boot2docker.$$.ssh.socket
# ssh into boot2docker with agent forwarding
ssh -i ~/.ssh/id_boot2docker \
-o StrictHostKeyChecking=no \
-o IdentitiesOnly=yes \
-o UserKnownHostsFile=/dev/null \
-o LogLevel=quiet \
-p 2022 docker@localhost \
-A -M -S $SSH_SOCK -f -n \
tail -f /dev/null
# get the agent socket path from the boot2docker vm
B2D_AGENT_SOCK=$(ssh -S $SSH_SOCK docker@localhost echo \$SSH_AUTH_SOCK)
# mount the socket (from the boot2docker vm) onto the docker container
# and set the ssh agent environment variable so ssh tools pick it up
docker run \
-v $B2D_AGENT_SOCK:/ssh-agent \
-e "SSH_AUTH_SOCK=/ssh-agent" \
"$#"
# we're done; kill off the boot2docker ssh agent
ssh -S $SSH_SOCK -O exit docker@localhost
Stick it in ~/bin/docker-run-ssh, chmod +x it, and use docker-run-ssh instead of docker run.
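Usage then looks something like this (the image name is a placeholder; it behaves like docker run, with agent forwarding added):
docker-run-ssh --rm -it my_image ssh -T git@github.com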
I ran into a similar issue, and was able to make things pretty seamless by using ssh in master mode with a control socket and wrapping it all in a script like this:
#!/bin/sh
ssh -i ~/.vagrant.d/insecure_private_key -p 2222 -A -M -S ssh.socket -f docker@127.0.0.1 tail -f /dev/null
HOST_SSH_AUTH_SOCK=$(ssh -S ssh.socket docker@127.0.0.1 env | grep "SSH_AUTH_SOCK" | cut -f 2 -d =)
docker run -v $HOST_SSH_AUTH_SOCK:/ssh-agent \
-e "SSH_AUTH_SOCK=/ssh-agent" \
-t hello-world "$@"
ssh -S ssh.socket -O exit docker@127.0.0.1
Not the prettiest thing in the universe, but much better than manually keeping an SSH session open IMO.
For me, accessing ssh-agent to forward keys worked on OS X Mavericks and Docker 1.5 as follows:
ssh into the boot2docker VM with boot2docker ssh -A. Don't forget to use option -A which enables forwarding of the authentication agent connection.
Inside the boot2docker ssh session:
docker@boot2docker:~$ echo $SSH_AUTH_SOCK
/tmp/ssh-BRLb99Y69U/agent.7750
This session must be left open. Take note of the value of the SSH_AUTH_SOCK environment variable.
In another OS X terminal, issue the docker run command with the SSH_AUTH_SOCK value from step 2, as follows:
docker run --rm -t -i \
-v /tmp/ssh-BRLb99Y69U/agent.7750:/ssh-agent \
-e SSH_AUTH_SOCK=/ssh-agent my_image /bin/bash
root@600d0e9b443d:/# ssh-add -l
2048 6c:8e:82:08:74:33:78:61:f9:9a:74:1b:65:46:be:eb
/Users/dev/.ssh/id_rsa (RSA)
I don't really like the fact that I have to keep a boot2docker ssh session open to make this work, but until a better solution is found, this at least worked for me.
Socket forwarding doesn't work on OS X yet. Here is a variation of @henrjk's answer brought into 2019, using Docker for Mac instead of the now obsolete boot2docker.
First, run an SSH server in the container, with /tmp on an exportable volume, like this:
docker run -d -p 2222:22 \
  -v tmp:/tmp \
  -v ${HOME}/.ssh/id_rsa.pub:/root/.ssh/authorized_keys:ro \
  arvindr226/alpine-ssh
Then ssh into this container with agent forwarding
ssh -A -p 2222 root@localhost
Inside that ssh session, find the current socket for ssh-agent:
3f53fa1f5452:~# echo $SSH_AUTH_SOCK
/tmp/ssh-9zjJcSa3DM/agent.7
Now you can run your real container. Just make sure to replace the value of SSH_AUTH_SOCK below with the value you got in the step above:
docker run -it -v tmp:/tmp \
-e SSH_AUTH_SOCK=/tmp/ssh-9zjJcSa3DM/agent.7 \
vladistan/ansible
By default, boot2docker shares only files under /Users. SSH_AUTH_SOCK is probably under /tmp, so the -v mounts the agent socket of the boot2docker VM, not the one from your Mac.
If you set up VirtualBox to share /tmp, it should work.
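For example, something along these lines should share /tmp with the boot2docker VM (a rough, untested sketch; boot2docker-vm is the default VirtualBox VM name and the uid/gid values match the boot2docker docker user):
boot2docker stop
VBoxManage sharedfolder add boot2docker-vm --name tmp --hostpath /tmp --automount
boot2docker start
# inside the VM, mount the share at the same path so the socket path matches the host's:
boot2docker ssh "sudo mount -t vboxsf -o uid=1000,gid=50 tmp /tmp"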
Could not open a connection to your authentication agent.
This error occurs when $SSH_AUTH_SOCK env var is set incorrectly on the host or not set at all. There are various workarounds you could try. My suggestion, however, is to dual-boot Linux and macOS.
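Before digging into the container side, it's worth confirming the agent works on the host itself (a quick check):
echo "$SSH_AUTH_SOCK"   # should print a socket path
ssh-add -l              # should list at least one key rather than erroring out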
Additional resources:
Using SSH keys inside docker container - Related Question
SSH and docker-compose - Blog post
Build secrets and SSH forwarding in Docker 18.09 - Blog post

Convert Amazon EC2 AMI to Virtual or Vagrant box

I'd like to copy the disk image of a running EC2 instance (grab the AMI) and import it into VirtualBox, or eventually have it run under Vagrant. I saw that Packer (http://www.packer.io/) allows you to create AMIs and corresponding Vagrant boxes that work together; however, the instance I currently have has been running for over two years and would be difficult to replicate.
I imagine that this issue is common in the devops community, but I have not found a solution in my research online. Are there any tools out there that let you accomplish this task?
I just wanted to note that @Drewness answered this question in the first comment on the original question. I'm just adding this answer to make it more visible, since the comment is only a link. The link points to the following page: How to convert EC2 AMI to VMDK for Vagrant.
So basically you need to enable root SSH access, e.g.
$ sudo perl -i -pe 's/#PermitRootLogin .*/PermitRootLogin without-password/' /etc/ssh/sshd_config
$ sudo perl -i -pe 's/.*(ssh-rsa .*)/\1/' /root/.ssh/authorized_keys
$ sudo /etc/init.d/sshd reload # optional command
Then copy the running system to a local disk image:
$ ssh -i ~/.ec2/your_key root@ec2-XX-XX-XX-X.compute-1.amazonaws.com 'dd if=/dev/xvda1 bs=1M | gzip' | gunzip | dd of=./ec2-image.raw
After that prepare a filesystem on a new image file:
$ dd if=/dev/zero of=vmdk-image.raw bs=1M count=10240 # create a 10gb image file
$ losetup -fv vmdk-image.raw # mount as loopback device
$ cfdisk /dev/loop0 # create a bootable partition, write, and quit
$ losetup -fv -o 32256 vmdk-image.raw # mount the partition with an offset
$ fdisk -l -u /dev/loop0 # get the size of the partition
$ mkfs.ext4 -b 4096 /dev/loop1 $(((20971519 - 63)*512/4096)) # format using the END number
Now you need to copy everything from the EC2 image to the empty image:
$ losetup -fv ec2-image.raw
$ mkdir -p /mnt/loop/1 /mnt/loop/2 # create mount points
$ mount -t ext4 /dev/loop1 /mnt/loop/1 # mount vmdk-image
$ mount -t ext4 /dev/loop2 /mnt/loop/2 # mount ami-image
$ cp -a /mnt/loop/2/* /mnt/loop/1/
and install Grub:
$ cp /usr/lib/grub/x86_64-pc/stage* /mnt/loop/1/boot/grub/
Then unmount the device (umount /dev/loop1) and convert the raw disk image to a VMDK image:
$ qemu-img convert -f raw -O vmdk vmdk-image.raw final.vmdk
Now just create a VirtualBox VM with the vmdk image mounted as the primary boot device.
Unfortunately at this point I could not get the Amazon Linux kernel to boot inside VirtualBox.
You should export the instance.
For more details, check: How to export a VM from Amazon EC2 to VMware On-Premise.
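With the AWS CLI, the export route looks roughly like this (a sketch; the instance must meet the VM Import/Export prerequisites, and the bucket name is a placeholder):
aws ec2 create-instance-export-task \
  --instance-id i-xxxxxxxx \
  --target-environment vmware \
  --export-to-s3-task DiskImageFormat=VMDK,ContainerFormat=ova,S3Bucket=my-export-bucket,S3Prefix=exports/
aws ec2 describe-export-tasks        # poll until the task reports as completed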
Personally, I've done this on a Windows box by installing VMware Converter on the instance and converting the local system to a VMDK. Then I uploaded the VMDK to S3.
