ec2-import-instance doesn't work on Windows - amazon-ec2

I'm trying to import a vmdk image using the ec2-import-instance command, but it returns the error below:
usage: aws [options] <command> <subcommand> [parameters]
aws: error: argument command: Invalid choice, valid choices are:
autoscaling | cloudformation
cloudfront | cloudhsm
cloudsearch | cloudsearchdomain
etc.
Here is my command:
aws ec2-import-instance "C:\AWS-TEST-VM\aws-test-vm-01\aws-test-vm-01-disk1.vmdk" -f vmdk -t m1.small -a x86_64 -b bucket_name -o Access Key -w Secret+Access+Key -p Linux --ignore-region-affinity

It appears that you are mixing up two different command-line interfaces.
The 'older-style' commands have hyphenated words, e.g.:
ec2-import-instance -t instance_type [-g group] -f file_format -a architecture [-p platform_name] -b s3_bucket_name ...
The newer AWS Command-Line Interface (CLI) uses the syntax of aws <service> <command> <parameters>, for example:
aws ec2 import-image [--description <value>] [--disk-containers <value>] [--license-type <value>] ...
See:
Old-Style: ec2-import-instance
New-Style: import-image
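If you want to stay on the new-style CLI, the rough equivalent is to upload the VMDK to S3 and then import it. This is a sketch only (bucket_name and the key below are placeholders, and import-image additionally requires the VM Import service role, vmimport, to be configured):
aws s3 cp "C:\AWS-TEST-VM\aws-test-vm-01\aws-test-vm-01-disk1.vmdk" s3://bucket_name/aws-test-vm-01-disk1.vmdk
aws ec2 import-image --description "aws-test-vm-01" --disk-containers "Format=VMDK,UserBucket={S3Bucket=bucket_name,S3Key=aws-test-vm-01-disk1.vmdk}"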

Related

Docker Run executing shell command can't access PATH environment variable

I'm using an official image from Microsoft that contains the SQL tools used to interact with Microsoft SQL Server. If I run the container interactively, I can run sqlcmd at the command line without any issue, because it is on the PATH:
$ docker run --rm -it -v $(pwd):/var/update/ -w /var/update mcr.microsoft.com/mssql-tools:latest
root@df20bd19b982:/var/update# sqlcmd
Microsoft (R) SQL Server Command Line Tool
Version 13.1.0007.0 Linux
Copyright (c) 2012 Microsoft. All rights reserved.
usage: sqlcmd [-U login id] [-P password]
[-S server or Dsn if -D is provided]
[-H hostname] [-E trusted connection]
[-N Encrypt Connection][-C Trust Server Certificate]
[-d use database name] [-l login timeout] [-t query timeout]
[-h headers] [-s colseparator] [-w screen width]
[-a packetsize] [-e echo input] [-I Enable Quoted Identifiers]
[-c cmdend]
[-q "cmdline query"] [-Q "cmdline query" and exit]
[-m errorlevel] [-V severitylevel] [-W remove trailing spaces]
[-u unicode output] [-r[0|1] msgs to stderr]
[-i inputfile] [-o outputfile]
[-k[1|2] remove[replace] control characters]
[-y variable length type display width]
[-Y fixed length type display width]
[-p[1] print statistics[colon format]]
[-R use client regional setting]
[-K application intent]
[-M multisubnet failover]
[-b On error batch abort]
[-D Dsn flag, indicate -S is Dsn]
[-X[1] disable commands, startup script, environment variables [and exit]]
[-x disable variable substitution]
[-? show syntax summary]
root@b33a916d4230:/var/update# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/mssql-tools/bin
root@b33a916d4230:/var/update#
sqlcmd is present in the /opt/mssql-tools/bin/ folder, which is part of the PATH environment variable.
But if I try to execute the sqlcmd command via docker run ... bash -c 'sqlcmd', it is not found, even though the interactive shell above shows that /opt/mssql-tools/bin is already in the PATH.
$ docker run --rm -it -v $(pwd):/var/update/ -w /var/update mcr.microsoft.com/mssql-tools:latest bash -c "sqlcmd"
bash: sqlcmd: command not found
And to see the PATH env. variable, I did the following:
$ docker run --rm -it -v $(pwd):/var/update/ -w /var/update mcr.microsoft.com/mssql-tools:latest bash -c 'echo $PATH'
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Question 1: Why is the PATH variable different when we use bash -c 'commands'?
Question 2: If bash -c or sh -c creates a new shell, how do I execute shell commands with the container's environment variables, especially the PATH environment variable?
When you run an interactive shell as root, it runs the commands from /root/.bashrc, which (in this particular image) include
export PATH="$PATH:/opt/mssql-tools/bin"
A better Docker image would set that in the Dockerfile itself, which makes it available to every process in the image. You can easily build an image like that yourself:
FROM mcr.microsoft.com/mssql-tools:latest
ENV PATH="$PATH:/opt/mssql-tools/bin"
(Also, the export is superfluous; the variable is already exported by the shell.)
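For example (the mssql-tools-path tag is just an illustration):
docker build -t mssql-tools-path .
docker run --rm -it mssql-tools-path bash -c "sqlcmd -?"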
If you don't want to mess with the image, try
docker run --rm -it -v $(pwd):/var/update/ -w /var/update \
mcr.microsoft.com/mssql-tools:latest \
bash -c 'PATH=$PATH:/opt/mssql-tools/bin sqlcmd'

To run aws ECR scan commands in jenkinsfile

I'm trying to run the two commands below in a Jenkinsfile.
NOTE: both commands work fine when run locally on the machine where Jenkins is installed.
sh ''' aws ecr start-image-scan --registry-id 123 \
--repository-name test1 \
--image-id imageTag=${BUILD_NUMBER} --output json | tee ecr_start_scan_${BUILD_NUMBER}.txt'''
sh ''' aws ecr describe-image-scan-findings --registry-id 123 \
--repository-name test \
--image-id imageTag=${BUILD_NUMBER} --output json | tee ecr_scanResult_${BUILD_NUMBER}.txt'''
Below is the output for both commands:
+ aws ecr start-image-scan --repository-name valhalla --image-id imageTag=13 --region ap-southeast-1 --output json
+ tee ecr_start_scan_13.txt
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument operation: Invalid choice, valid choices are:
batch-check-layer-availability | batch-delete-image
batch-get-image | complete-layer-upload
create-repository | delete-lifecycle-policy
delete-repository | delete-repository-policy
describe-images | describe-repositories
get-authorization-token | get-download-url-for-layer
get-lifecycle-policy | get-lifecycle-policy-preview
get-repository-policy | initiate-layer-upload
list-images | put-image
put-lifecycle-policy | set-repository-policy
start-lifecycle-policy-preview | upload-layer-part
get-login | help
Update your AWS CLI version. I had the same issue with aws-cli/1.11.13 but got the expected result with aws-cli/1.18.16.
Yes, updating the AWS CLI version fixes the problem, but I think there's a missing step in the middle: aws ecr wait image-scan-complete. Scan results don't show up instantaneously, so this command waits until the results are accessible, as sketched below.
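A sketch of that wait step slotted in between the two existing steps, reusing the same registry, repository, and image-tag placeholders from the question:
sh ''' aws ecr wait image-scan-complete --registry-id 123 \
    --repository-name test1 \
    --image-id imageTag=${BUILD_NUMBER}'''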

passing command from awk to the next command using xargs

I am using AWS EC2 CLI to perform a filter on stopped instances, then create an AMI out of these with the AMI name taken from the instance tag.
aws ec2 describe-instances --output text --profile proj --query 'Reservations[*].[Instances[*].[InstanceId, InstanceType, State.Name, Platform, Placement.AvailabilityZone, PublicIpAddress, PrivateIpAddress,[Tags[?Key==`Name`].Value][0][0]]]' --filter --filters Name=instance-state-name,Values=stopped | awk '{print $1, $8}' | xargs -n2 aws ec2 create-image --profile proj --instance-id {} --name {} --no-reboot
How can I get xargs to distinguish the two parameters coming from awk (instance ID and Name tag) so that they are passed correctly to ec2 create-image as the --instance-id and --name arguments respectively?
You do not need awk. Using the AWS CLI, you are extracting 8 values first and then using awk to pull 2 values out of those 8. Why? Just extract the 2 values directly with the AWS CLI query:
--query 'Reservations[*].[Instances[*].[InstanceId, [Tags[?Key==`Name`].Value][0][0]]]'
will return only the values you are interested in. Then use xargs to pass the arguments to your next command.
Since xargs itself has no positional parameters, hand each pair to a small sh -c wrapper, where $1 and $2 are the two fields:
xargs -n2 sh -c 'command --arg1 "$1" --arg2 "$2"' sh
Your entire command becomes:
aws ec2 describe-instances --output text --profile proj --query 'Reservations[*].[Instances[*].[InstanceId, [Tags[?Key==`Name`].Value][0][0]]]' --filters Name=instance-state-name,Values=stopped | xargs -n2 sh -c 'aws ec2 create-image --profile proj --instance-id "$1" --name "$2" --no-reboot' sh
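If you would rather not depend on xargs grouping, a plain while read loop over the same output is equivalent (a sketch; Name tags containing spaces would need extra care in either variant):
aws ec2 describe-instances --output text --profile proj \
    --query 'Reservations[*].[Instances[*].[InstanceId, [Tags[?Key==`Name`].Value][0][0]]]' \
    --filters Name=instance-state-name,Values=stopped |
while read -r instance_id name_tag; do
    aws ec2 create-image --profile proj --instance-id "$instance_id" --name "$name_tag" --no-reboot
done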

Shell script syntax, escape character

I have the shell script given below. It adds an AWS instance to its Auto Scaling group's scale-in protection. When I run the individual commands by hand they work fine, but when I put them in a shell file and execute it, I get an error. Here is the script:
set -x
INSTANCE_ID=$(wget -q -O - http://169.254.169.254/latest/meta-data/instance-id)
ASG_NAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" --region us-east-2 | jq '.Tags[] | select(.["Key"] | contains("a:autoscaling:groupName")) | .Value')
ASG_NAME=$(echo $ASG_NAME | tr -d '"')
aws autoscaling set-instance-protection --instance-ids $INSTANCE_ID --auto-scaling-group-name $ASG_NAME --protected-from-scale-in --region us-east-2
The error is given below. I think the issue is with the second line: it is not able to get ASG_NAME. I tried various escape characters, but nothing works.
+++ wget -q -O - http://169.254.169.254/latest/meta-data/instance-id
++ INSTANCE_ID=i-----
+++ aws ec2 describe-tags --filters Name=resource-id,Values=i------ --region us-east-2
+++ jq '.Tags[] | select(.["Key"] | contains("a:autoscaling:groupName")) | .Value'
++ ASG_NAME=
+++ echo
+++ tr -d '"'
++ ASG_NAME=
++ aws autoscaling set-instance-protection --instance-ids i---- --auto-scaling-group-name --protected-from-scale-in --region us-east-2
usage: aws [options] <command> <subcommand> [<subcommand> ...] [parameters]
To see help text, you can run:
aws help
aws <command> help
aws <command> <subcommand> help
aws: error: argument --auto-scaling-group-name: expected one argument
Solved the issue following @chepner's recommendation. I modified the second line to:
ASG_NAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" --region us-east-2 --query 'Tags[1].Value')
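Note that Tags[1] depends on the position of the tag in the response; a JMESPath filter on the key is more robust. A sketch, assuming the standard aws:autoscaling:groupName tag key:
ASG_NAME=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" --region us-east-2 --output text --query "Tags[?Key=='aws:autoscaling:groupName'].Value")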

Mount an EBS volume (not snapshot) to Elastic Beanstalk EC2

I'm migrating a legacy app to Elastic Beanstalk. It needs persistent storage (for the time being), so I want to mount an EBS volume.
I was hoping the following would work in .ebextensions/ebs.config:
commands:
  01mkdir:
    command: "mkdir /data"
  02mount:
    command: "mount /dev/sdh /data"

option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    value: /dev/sdh=vol-XXXXX
https://blogs.aws.amazon.com/application-management/post/Tx224DU59IG3OR9/Customize-Ephemeral-and-EBS-Volumes-in-Elastic-Beanstalk-Environments
But unfortunately I get the following error: "(vol-XXXX) for parameter snapshotId is invalid. Expected: 'snap-...'."
Clearly this method only allows snapshots. Can anyone suggest a fix or an alternative method?
I have found a solution. It could be improved by removing the "sleep 10", but unfortunately that is required because aws ec2 attach-volume is asynchronous and returns straight away, before the attachment takes place.
container_commands:
  01mount:
    command: "aws ec2 attach-volume --volume-id vol-XXXXXX --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdh"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /data"
    test: "[ ! -d /data ]"
  04mount:
    command: "mount /dev/sdh /data"
    test: "! mountpoint -q /data"
Note: ideally this would run in the commands section rather than container_commands, but the environment variables are not set in time.
To add to @Simon's answer (to avoid traps for the unwary):
If the persistent storage being mounted will ultimately be used inside a Docker container (e.g. if you're running Jenkins and want to persist jenkins_home), you need to restart the Docker container after running the mount.
You need to have the 'ec2:AttachVolume' action permitted against both the EC2 instance (or the instance/* ARN) and the volume(s) you want to attach (or the volume/* ARN) in the EB assumed role policy. Without this, the aws ec2 attach-volume command fails.
You need to pass --region to the aws ec2 ... command as well (at least, as of this writing); it can be derived from instance metadata, as sketched below.
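A sketch of deriving the region from instance metadata rather than hard-coding it (the volume ID and device name are placeholders):
REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')
aws ec2 attach-volume --region "$REGION" --volume-id vol-XXXXXX \
    --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdh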
Alternatively, instead of using an EBS volume, you could consider using Elastic File System (EFS) storage. AWS has published a guide on how to mount an EFS volume on Elastic Beanstalk EC2 instances, and an EFS file system can be attached to multiple EC2 instances simultaneously (which is not possible with EBS).
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/services-efs.html
Here's a config file that you can drop into .ebextensions. You will need to identify the volume to attach (in this config it is looked up by its Name tag). The test commands ensure that attaching and mounting only happen when needed, so you can eb deploy repeatedly without errors.
container_commands:
  00attach:
    command: |
      export REGION=$(/opt/aws/bin/ec2-metadata -z | awk '{print substr($2, 0, length($2)-1)}')
      export INSTANCE_ID=$(/opt/aws/bin/ec2-metadata -i | awk '{print $2}')
      export VOLUME_ID=$(aws ec2 describe-volumes --region ${REGION} --output text --filters Name=tag:Name,Values=tf-trading-prod --query 'Volumes[*].VolumeId')
      aws ec2 attach-volume --region ${REGION} --device /dev/sdh --instance-id ${INSTANCE_ID} --volume-id ${VOLUME_ID}
      aws ec2 wait volume-in-use --region ${REGION} --volume-ids ${VOLUME_ID}
      sleep 1
    test: "! file -E /dev/xvdh"
  01mkfs:
    command: "mkfs -t ext3 /dev/xvdh"
    test: "file -s /dev/xvdh | awk '{print $2}' | grep -q data"
  02mkdir:
    command: "mkdir -p /data"
  03mount:
    command: "mount /dev/xvdh /data"
    test: "! mountpoint /data"
You have to use container_commands because when commands run, the source bundle is not yet fully unpacked.
.ebextensions/whatever.config

container_commands:
  chmod:
    command: chmod +x .platform/hooks/predeploy/mount-volume.sh
Predeploy hooks run after container commands but before the deployment. There is no need to restart your Docker container even if it mounts a directory on the attached EBS volume, because Beanstalk spins it up after the predeploy hooks complete. You can see this in the logs.
.platform/hooks/predeploy/mount-volume.sh
#!/bin/sh
# Make sure LF line endings are used in the file, otherwise there would be an error saying "file not found".
# All platform hooks run as root user, no need for sudo.
# Before attaching the volume find out the root volume's name, so that we can later use it for filtering purposes.
# -d – to filter out partitions.
# -P – to display the result as key-value pairs.
# -o – to output only the matching part.
# lsblk strips the "/dev/" part
ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*')
aws ec2 attach-volume --volume-id vol-xxx --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/sdf --region us-east-1
# The above command is async, so we need to wait.
aws ec2 wait volume-in-use --volume-ids vol-xxx --region us-east-1
# Now lsblk should show two devices. We figure out which one is non-root by filtering out the stored root volume name.
NON_ROOT_VOLUME_NAME=$(lsblk -d -P | grep -o 'NAME="[a-z0-9]*"' | grep -o '[a-z0-9]*' | awk -v name="$ROOT_VOLUME_NAME" '$0 !~ name')
FILE_COMMAND_OUTPUT=$(file -s /dev/$NON_ROOT_VOLUME_NAME)
# Create a file system on the non-root device only if there isn't one already, so that we don't accidentally override it.
if test "$FILE_COMMAND_OUTPUT" = "/dev/$NON_ROOT_VOLUME_NAME: data"; then
mkfs -t xfs /dev/$NON_ROOT_VOLUME_NAME
fi
mkdir /data
mount /dev/$NON_ROOT_VOLUME_NAME /data
# Need to make sure that the volume gets mounted after every reboot, because by default only root volume is automatically mounted.
cp /etc/fstab /etc/fstab.orig
NON_ROOT_VOLUME_UUID=$(lsblk -d -P -o +UUID | awk -v name="$NON_ROOT_VOLUME_NAME" '$0 ~ name' | grep -o 'UUID="[-0-9a-z]*"' | grep -o '[-0-9a-z]*')
# We specify 0 to prevent the file system from being dumped, and 2 to indicate that it is a non-root device.
# If you ever boot your instance without this volume attached, the nofail mount option enables the instance to boot
# even if there are errors mounting the volume.
# Debian derivatives, including Ubuntu versions earlier than 16.04, must also add the nobootwait mount option.
echo "UUID=$NON_ROOT_VOLUME_UUID /data xfs defaults,nofail 0 2" | tee -a /etc/fstab
Pretty sure that things that I do with grep and awk could be done in a more concise manner. I'm not great at Linux.
The instance profile should include these permissions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:AttachVolume",
        "ec2:DetachVolume",
        "ec2:DescribeVolumes"
      ],
      "Resource": [
        "arn:aws:ec2:*:*:volume/*",
        "arn:aws:ec2:*:*:instance/*"
      ]
    }
  ]
}
You have to ensure that you deploy the EBS volume in the same AZ as the Beanstalk environment and that you use a SingleInstance deployment (for example via option_settings, as sketched below). Then if your instance crashes, the ASG will terminate it, create another one, and attach the volume to the new instance, keeping all the data.
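A minimal sketch of those settings in an .ebextensions config (the availability zone is a placeholder and must match the AZ of the volume):
option_settings:
  - namespace: aws:elasticbeanstalk:environment
    option_name: EnvironmentType
    value: SingleInstance
  - namespace: aws:autoscaling:asg
    option_name: Custom Availability Zones
    value: us-east-1a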
Here it is with the missing config:
commands:
  01mount:
    command: "export AWS_ACCESS_KEY_ID=<replace by your AWS key> && export AWS_SECRET_ACCESS_KEY=<replace by your AWS secret> && aws ec2 attach-volume --volume-id <replace by your volume id> --instance-id $(curl -s http://169.254.169.254/latest/meta-data/instance-id) --device /dev/xvdf --region <replace with your region>"
    ignoreErrors: true
  02wait:
    command: "sleep 10"
  03mkdir:
    command: "mkdir /home/lucene"
    test: "[ ! -d /home/lucene ]"
  04mount:
    command: "mount /dev/xvdf /home/lucene"
    test: "! mountpoint -q /home/lucene"
