Can't create IAM access key via bash

I'm trying to rotate AWS IAM access keys via bash script.
I can deactivate and delete the existing access keys. But after the last key is deleted, and even after a 30-second sleep command, I still get this error when trying to create the new key:
An error occurred (LimitExceeded) when calling the CreateAccessKey operation: Cannot exceed quota for AccessKeysPerUser: 2
This is my code:
echo "Deleting AWS user access key: $user_access_key1"
aws iam delete-access-key --access-key-id "$user_access_key1" --user-name "$aws_user_name" --profile "$aws_key"
sleep 2
# Check if key still exists
aws iam list-access-keys --user-name "$aws_user_name" --profile "$aws_key" --output text --query 'AccessKeyMetadata[*].[AccessKeyId,Status]' | sed -e '1d'
sleep 30
# Create new keys
new_keys=( $(aws iam create-access-key --user-name "$user_name" --profile "$aws_env" | jq -r '.AccessKey | .SecretAccessKey, .AccessKeyId') )
I'm able to verify that the previous key is gone after the script deletes it. So why am I not able to create a new key after deleting the last one?
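For reference, a minimal sketch of the delete-then-create flow that polls until the deleted key actually disappears, instead of sleeping a fixed time (variable names taken from the question):
# Minimal polling sketch: wait until the deleted key is really gone, then create
aws iam delete-access-key --access-key-id "$user_access_key1" --user-name "$aws_user_name" --profile "$aws_key"
while aws iam list-access-keys --user-name "$aws_user_name" --profile "$aws_key" \
      --query 'AccessKeyMetadata[].AccessKeyId' --output text | grep -q "$user_access_key1"; do
    sleep 5
done
aws iam create-access-key --user-name "$aws_user_name" --profile "$aws_key"
Note that this sketch deliberately reuses the same $aws_user_name and $aws_key in the create call; the snippet in the question switches to $user_name and $aws_env at that point, and if those happen to resolve to a different user or profile that still has two keys, the quota error would be expected.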

Related

AWS Secret manager ssh key read in bash variable

I am trying to use AWS Secrets Manager to store a private ssh key and read it in a bash script.
I created the secret with the name testsshkey in Secrets Manager.
It is stored as multiline text.
Then I created a bash script with the following:
secret_key=$(aws secretsmanager get-secret-value --region us-east-1 --secret-id testsshkey --query SecretString --output text)
echo $secret_key
When I run this script, it only prints the last line of the key.
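For what it's worth, a likely culprit is the unquoted echo $secret_key, which lets the shell word-split the value and mangle its newlines. A minimal sketch with the expansion quoted (same secret id and region as above):
secret_key=$(aws secretsmanager get-secret-value --region us-east-1 --secret-id testsshkey --query SecretString --output text)
# Quote the expansion so the embedded newlines survive word splitting
echo "$secret_key"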

Problem with creating EBS snapshot on server(Linux EC2 instance)

I am working on a task that requires running a script on a server. The script grabs the instance id, creates a snapshot, runs yum update -y, and reboots the server.
#!/bin/bash
set -eu
# Set Vars
AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export REGION=$(curl --silent http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
export INSTANCE_ID=$(curl http://169.254.169.254/latest/meta-data/instance-id)
echo $AWS_ACCOUNT_ID
echo $REGION
# Fetch VolumeId
volumeid=$(aws ec2 describe-instances --region $REGION --instance-ids "$INSTANCE_ID" --filters Name=instance-state-name,Values=running --query "Reservations[*].Instances[].[BlockDeviceMappings[*].{VolumeName:Ebs.VolumeId}]" --output text)
echo $INSTANCE_ID
echo $volumeid
# Create snapshot
aws ec2 create-snapshot --region $REGION --volume-id $volumeid --description "Test-Snapshot-$INSTANCE_ID"
read -p "waiting a while to complete creation of EBS snapshot" -t 100
echo -e "\x1B[01;36m Snapshot has been created \x1B[0m"
I can get the instance id, but when I try to create the snapshot from it, I get the following error:
ERROR
us-east-1
An error occurred (UnauthorizedOperation) when calling the DescribeInstances operation: You are not authorized to perform this operation.
Thank you so much in advance for your support.
Your instance, and with it your script, is missing the ec2:DescribeInstances permission needed to run the aws ec2 describe-instances command.
You should attach that permission to the instance role assigned to the instance (or create a new role with the permission attached if none is assigned yet).
Your IAM permissions do not grant access to DescribeInstances.
If you're using an IAM role for the instance, check its policies.
If it's a user, then make sure the credentials are being retrieved, either via the AWS credentials file or via environment variables.
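As a hedged sketch, granting the instance role the missing actions could look like the following (the role and policy names are hypothetical; in real use, scope the Resource more tightly):
# Hypothetical role/policy names; adjust to your setup
aws iam put-role-policy --role-name my-instance-role \
  --policy-name AllowDescribeAndSnapshot \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["ec2:DescribeInstances", "ec2:CreateSnapshot"],
      "Resource": "*"
    }]
  }'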

EC2 instance region is not populated in user-data script

I want to set some tags on an EC2 spot instance, but since it is impossible to do directly in the spot request, I do it via a user data script. Everything works when I specify the region statically, but that is not a universal approach. When I try to detect the current region from within the user data script, the region variable is always empty. I do it the following way:
#!/bin/bash
region=$(ec2-metadata -z | awk '{print $2}' | sed 's/[a-z]$//')
aws ec2 create-tags \
--region $region \
--resources `wget -q -O - http://169.254.169.254/latest/meta-data/instance-id` \
--tags Key=sometag,Value=somevalue Key=sometag,Value=somevalue
I tried adding a delay before reading the region
/bin/sleep 30
but it made no difference.
However, when I run this script manually after startup, the tags are added fine. What is going on?
Also, why doesn't aws-cli pick up the default region from the profile? I have aws configure properly set up inside the instance, but without the --region option it throws an error that the region is not specified.
I suspect the ec2-metadata command is not available when your userdata script is executed. Try getting the region from the metadata server directly (which is what ec2-metadata does anyway)
region=$(curl -fsq http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/[a-z]$//')
AWS CLI does use the region from default profile.
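One possible nuts-and-bolts explanation (an assumption about the setup): user data scripts run as root, so a profile configured under another user's ~/.aws/config is not visible to them. A minimal sketch that avoids relying on any profile by exporting the region explicitly, reusing the question's create-tags call:
# User data runs as root; a profile configured for another user is not picked up.
export AWS_DEFAULT_REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/region)
aws ec2 create-tags \
  --resources "$(curl -s http://169.254.169.254/latest/meta-data/instance-id)" \
  --tags Key=sometag,Value=somevalue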
You can now use this endpoint to get only the instance region (no parsing needed):
http://169.254.169.254/latest/meta-data/placement/region
So in this case:
region=`curl -s http://169.254.169.254/latest/meta-data/placement/region`
I ended up with
region=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | python -c "import json,sys; print(json.load(sys.stdin)['region'])")
which worked fine. Still, it would be nice if somebody explained the nuts and bolts.
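One more note: newer instances may enforce IMDSv2, where plain GETs against the metadata service fail. A minimal sketch of a token-based fetch of the same region endpoint mentioned above:
# IMDSv2: fetch a session token first, then use it for metadata requests
token=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
region=$(curl -s -H "X-aws-ec2-metadata-token: $token" \
  http://169.254.169.254/latest/meta-data/placement/region)
echo "$region"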

AWS S3: Remove Object Prefix From Thousands of Files in Complex Directory Structure

I am using the AWS CLI to manage files/objects in S3. I have thousands of objects buried in a complex system of nested folders (subfolders), and I want to elevate all of them into a single folder at the root of the bucket (s3://bucket/folder/file.txt).
I've tried using this command:
aws s3 mv s3://bucket-a/folder-a s3://bucket-a --recursive --exclude "*" --include "*.txt"
When I use the mv command, it carries over the prefixes (directory paths) of each object, resulting in the same nested folder system. Here is what I want to accomplish:
Desired Result:
Where:
s3://bucket-a/folder-a/file-1.txt
s3://bucket-a/folder-b/folder-b1/file-2.txt
s3://bucket-a/folder-c/folder-c1/folder-c2/file-3.txt
Output:
s3://bucket-a/file-1.txt
s3://bucket-a/file-2.txt
s3://bucket-a/file-3.txt
I have been told that I need a bash script to accomplish my desired result. Here is a sample script that was provided to me:
#!/bin/bash
#BASH Script to move objects without directory structure
bucketname='my-bucket'
for key in $(aws s3api list-objects --bucket "${my-bucket}" --query "Contents[].{Object:Key}" --output text) ;
do
echo "$key"
FILENAME=$($key | awk '{print $NF}' FS=/)
aws s3 cp s3://$my-bucket/$key s3://$my-bucket/my-folder/$FILENAME
done
When I run this bash script, I get an error:
A client error (AccessDenied) occurred when calling the ListObjects operation: Access Denied
I tested the connection with another aws s3 command and confirmed that it works. I added policies to the user to include all privileges to S3; I have no idea what I am doing wrong here.
Any help would be greatly appreciated.
That script looks messed up; there's no sense in setting a variable called bucketname and then trying to use another one called my-bucket. What happens if you try this?
#!/bin/bash
#BASH Script to move objects without directory structure
bucketname='my-bucket'
for key in $(aws s3api list-objects --bucket "${bucketname}" --query "Contents[].{Object:Key}" --output text) ;
do
echo "$key"
FILENAME=$(echo "$key" | awk -F/ '{print $NF}')
aws s3 cp s3://$bucketname/$key s3://$bucketname/my-folder/$FILENAME
done
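And for the original goal of flattening everything into the bucket root rather than into my-folder, a minimal sketch (the bucket name is a placeholder, and note this loop breaks on keys containing spaces):
#!/bin/bash
bucketname='my-bucket'
# List every key, strip its prefix, and move the object to the bucket root
for key in $(aws s3api list-objects --bucket "$bucketname" --query 'Contents[].Key' --output text); do
  filename=$(basename "$key")
  aws s3 mv "s3://$bucketname/$key" "s3://$bucketname/$filename"
done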

Restoring a volume from a snapshot

Let's say I have an AMI with an attached EBS Volume.
I also have a snapshot.
I want to "restore" the EBS Volume to the snapshot.
What's the best process to do this?
I don't know of a way to 'restore' an attached volume, but the way I would do it is to create a volume from the snapshot, then detach the original and attach the new one.
If you have a running EC2 instance and you want to restore it to the state captured in an earlier snapshot, you need to stop the instance, detach its current volume, create a new volume from the snapshot, attach the new volume to your instance, and restart the instance. Furthermore, there are a couple of subtleties around specifying the availability zone of the new volume, and the device name when detaching/re-attaching the volume.
The logic might be easier to see if you do it from the command line instead of from the AWS web UI.
The following bash script is not fit for production use, since it lacks any error checking and just uses sleep instead of polling to ensure AWS commands have completed. But it performs all these steps successfully:
#!/bin/bash
set -e
# IN PARAMS
INSTANCE_ID=<YOUR_INSTANCE_ID_HERE>
SNAPSHOT_ID=<YOUR_SNAPSHOT_ID_HERE>
# OUT PARAMS
VOLUME_ID=
# begin execution
echo "Gathering information about the instance"
DEVICE_NAME=`ec2-describe-instance-attribute ${INSTANCE_ID} --block-device-mapping | awk '{print $2}'`
OLD_VOLUME_ID=`ec2-describe-instance-attribute ${INSTANCE_ID} --block-device-mapping | awk '{print $3}'`
echo "Found instance ${INSTANCE_ID} has volume ${OLD_VOLUME_ID} on device ${DEVICE_NAME}"
echo "Creating new volume from snapshot"
AVAILABILITY_ZONE=`ec2-describe-availability-zones --filter state=available | head -n 1 | awk '{print $2}'`
VOLUME_ID=`ec2-create-volume --availability-zone ${AVAILABILITY_ZONE} --snapshot ${SNAPSHOT_ID} | awk '{print $2}'`
echo "Created new volume: ${VOLUME_ID}"
sleep 20
echo "Stopping the instance"
ec2-stop-instances $INSTANCE_ID
sleep 20
echo "Detaching current volume"
ec2-detach-volume $OLD_VOLUME_ID --instance $INSTANCE_ID --device $DEVICE_NAME
sleep 20
echo "Attaching new volume"
ec2-attach-volume $VOLUME_ID --instance $INSTANCE_ID --device $DEVICE_NAME
sleep 20
echo "Starting the instance"
ec2-start-instances $INSTANCE_ID
I have touched up the script provided by @algal to use the aws cli and polling instead of sleep. It will also look for the latest snapshot of the given volume.
#!/bin/bash
set -e
# IN PARAMS
RECOVERY_INSTANCE_ID=
SNAPSHOT_VOLUME_ID=
echo "Gathering information about the instance"
BLOCK_DEVICE_MAPPING=`aws ec2 describe-instance-attribute --instance-id ${RECOVERY_INSTANCE_ID} --attribute blockDeviceMapping`
DEVICE_NAME=`echo ${BLOCK_DEVICE_MAPPING} | jq '.BlockDeviceMappings[0].DeviceName' | tr -d '"'`
OLD_VOLUME_ID=`echo ${BLOCK_DEVICE_MAPPING} | jq '.BlockDeviceMappings[0].Ebs.VolumeId' | tr -d '"'`
AVAILABILITY_ZONE=`aws ec2 describe-instances --filters "Name=instance-id,Values='${RECOVERY_INSTANCE_ID}'" | jq '.Reservations[0].Instances[0].Placement.AvailabilityZone' | tr -d '"'`
LATEST_SNAPSHOT_ID=`aws ec2 describe-snapshots --filter "Name=volume-id,Values='${SNAPSHOT_VOLUME_ID}'" | jq '.[]|max_by(.StartTime)|.SnapshotId' | tr -d '"'`
echo "Found instance ${RECOVERY_INSTANCE_ID} in ${AVAILABILITY_ZONE} has volume ${OLD_VOLUME_ID} on device ${DEVICE_NAME}"
echo "Creating new volume from snapshot ${LATEST_SNAPSHOT_ID}"
NEW_VOLUME_ID=`aws ec2 create-volume --region eu-west-1 --availability-zone ${AVAILABILITY_ZONE} --snapshot-id ${LATEST_SNAPSHOT_ID} | jq '.VolumeId' | tr -d '"'`
echo "Created new volume ${NEW_VOLUME_ID}"
aws ec2 wait volume-available --volume-ids $NEW_VOLUME_ID
echo "Stopping the instance"
aws ec2 stop-instances --instance-ids $RECOVERY_INSTANCE_ID
aws ec2 wait instance-stopped --instance-ids $RECOVERY_INSTANCE_ID
echo "Detaching current volume"
aws ec2 detach-volume --volume-id $OLD_VOLUME_ID --instance-id $RECOVERY_INSTANCE_ID
aws ec2 wait volume-available --volume-ids $OLD_VOLUME_ID
echo "Attaching new volume"
aws ec2 attach-volume --volume-id $NEW_VOLUME_ID --instance-id $RECOVERY_INSTANCE_ID --device $DEVICE_NAME
aws ec2 wait volume-in-use --volume-ids $NEW_VOLUME_ID
echo "Starting the instance"
aws ec2 start-instances --instance-ids $RECOVERY_INSTANCE_ID
If you'd like to stay up to date with this script or contribute:
https://github.com/karimtabet/ebs_snapshot_recovery
To replace a volume attached to an instance with a new volume created from a snapshot:
1. Create a volume from the snapshot in the same availability zone the instance is in (right-click the snapshot and choose "create volume from snapshot").
2. Stop the instance, to avoid any application crashing, and wait until it is stopped.
3. Write down the exact device name of the original volume (it is shown in the AWS console under the Instances or Volumes view).
4. Detach the old volume; delete it afterwards if you don't need it.
5. Attach the newly created volume (from the snapshot) to the instance with the same device name.
6. Start the instance again.
Make a volume from the snapshot, mount it on an existing EC2 machine, and copy files from it.
Check the EC2 machine:
- Pick an instance (EC2 tab | INSTANCES | Instances).
- Make a note of the EC2 machine's availability zone.
Create a volume:
- Find the snapshot you want to copy files from and tick its box (ELASTIC BLOCK STORE | Snapshots).
- Click the Create Volume button and fill in the fields:
  - The Size must be bigger than the snapshot size (free micro-instances get an 8GB volume).
  - The Availability Zone must be the same as the EC2 machine's.
  - The Snapshot is already selected, something like snap12345678 - my description.
- Click the Yes, Create button. A new line appears in the Volumes table (ELASTIC BLOCK STORE | Volumes).
Attach the volume:
- Click the Attach Volume button and fill in the fields:
  - The Volume value is already there.
  - Pick your machine name i-12345678 (running) from the drop-down list of Instances.
  - The Device field shows the first available device name, like /dev/sdf. Does anyone bother changing this value?
- Click the Yes, Attach button. A new device magically appears on the EC2 machine.
Close the AWS console.
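To round out that answer on the command line, a minimal hedged sketch of attaching the restored volume and mounting it to copy files off (the volume id, instance id, device name, and paths are all placeholders):
# Placeholders: substitute your own ids and device name
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 --device /dev/sdf
aws ec2 wait volume-in-use --volume-ids vol-0123456789abcdef0
# On the instance: mount the new device and copy files out
sudo mkdir -p /mnt/restore
sudo mount /dev/xvdf1 /mnt/restore   # the device may appear as /dev/xvdf or /dev/nvme1n1 instead
cp -a /mnt/restore/some/path ~/recovered/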
