AWS EC2 Instance not getting the credentials from InstanceProfile - amazon-ec2

I have an EC2 instance with an instance profile attached, but the AWS CLI does not work for me and asks me to configure credentials.
$ aws configure list
      Name                 Value             Type    Location
      ----                 -----             ----    --------
   profile             <not set>             None    None
access_key             <not set>             None    None
secret_key             <not set>             None    None
    region        ap-southeast-2              env    AWS_DEFAULT_REGION
Should the EC2 instance pick up the credentials automatically from the instance profile? How can I make it work?
My Expectation:
aws configure list
      Name                 Value             Type    Location
      ----                 -----             ----    --------
   profile             <not set>             None    None
access_key    ****************4G          iam-role
secret_key    ****************83          iam-role
    region        ap-southeast-2              env    AWS_DEFAULT_REGION

The general and correct workflow for using the AWS CLI on an EC2 instance is to never configure your keys on the instance (for security reasons you don't want your keys on a machine you don't own); just configure the default region using aws configure, and that's it.
From the console, create a role for the EC2 instance that grants access to whatever resources you need, e.g. permission to read S3 buckets, and attach that role to your EC2 instance.
To attach a role to your instance:
-> right-click on your instance
-> attach the IAM role
-> search for the role, and that's it
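The same attachment can also be done from the CLI; a minimal sketch, where the instance ID and profile name are placeholders to replace with your own:
# attach an instance profile to a running instance
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 --iam-instance-profile Name=MyInstanceProfile
# check which profile (if any) is currently associated
aws ec2 describe-iam-instance-profile-associations --filters Name=instance-id,Values=i-0123456789abcdef0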

The issue was with the proxy used by my system.
The best way to check is the command below:
GET http://169.254.169.254/latest/meta-data/iam; echo
Try setting the correct proxy, or add 169.254.169.254 to NO_PROXY, and see whether the command above returns valid output.
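To rule out the proxy and see what the metadata service actually returns, a sketch using curl (the token step is only required on IMDSv2-only instances; MyRoleName is a placeholder for whatever role name the first call prints):
export NO_PROXY=169.254.169.254
# IMDSv2: fetch a session token first
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
# list the role attached to the instance profile
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/; echo
# fetch the temporary credentials for that role
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/MyRoleName; echo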

Promtail EC2 permissions

What permissions should be granted for the ACCESS_KEY and SECRET_KEY when setting up Promtail on an EC2 machine?
The key / role ARN is set in the ec2_sd_config section of the YAML file.
It needs the following IAM permissions:
ec2:DescribeInstances
ec2:DescribeTags
Resolved: the following permissions need to be attached:
"tag:GetResources",
"cloudwatch:ListTagsForResource",
"ec2:DescribeTags",
"ec2:DescribeInstances",
"ec2:DescribeRegions",
"elasticloadbalancing:DescribeTargetGroups"

aws ec2 instance - permission denied to write to ~/.aws/credentials

When I ssh into an AWS EC2 Linux instance, the user is ec2-user by default. I then need to set AWS credentials by writing to ~/.aws/credentials, but I get permission denied. My feeling is that if I use sudo, the credentials file will be owned by root, and as a result my API server can't read it.
What's the correct approach to setting up AWS credentials there?
The 'correct' way to set up the credentials is to assign a role to the EC2 instance when you create it (or assign one after you create it). That role can be created and assigned to the EC2 instance via the AWS console; there is no need to ssh in and create the credentials there.
See: Easily Replace or Attach an IAM Role to an Existing EC2 Instance by Using the EC2 Console | AWS Security Blog
You can create the credentials file locally, then upload it to your EC2 instance.
Create the credentials file locally:
$ vim credentials
Upload it to your EC2 instance:
$ scp /path/credentials username@servername:/path
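If the permission error comes from files left behind by an earlier sudo run, reclaiming ownership usually fixes it; a sketch assuming the default ec2-user account:
# create the directory as ec2-user if it doesn't exist yet
mkdir -p ~/.aws
# reclaim ownership of anything root left behind
sudo chown -R ec2-user:ec2-user ~/.aws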

AWS Configure in single line command

I'm trying to configure my AWS account using Ansible, and from what I know it needs to be on one line (unless there's a way to press ENTER programmatically in the Windows command prompt).
Is there a way to do this?
Use these commands:
$ aws configure set aws_access_key_id default_access_key
$ aws configure set aws_secret_access_key default_secret_key
$ aws configure set default.region us-west-2
or
aws configure set aws_access_key_id <key_id> && aws configure set aws_secret_access_key <key> && aws configure set default.region us-east-1
For more details, see:
https://awscli.amazonaws.com/v2/documentation/api/latest/reference/configure/set.html
With aws configure we can also set values interactively, but with aws configure set we can set the values directly.
SYNOPSIS
aws configure set varname value [--profile profile-name]
OPTIONS
varname (string) The name of the config value to set.
value (string) The value to set.
For example:
aws configure --profile myprofile set region us-east-1
aws configure --profile myprofile set aws_access_key_id XXXXXXXXXXX
aws configure --profile myprofile set aws_secret_access_key YYYYYYYY
Alternatively, you may also use:
aws configure set profile.myprofile.region us-east-1
aws configure set profile.myprofile.aws_access_key_id XXXXXXXXXXX
aws configure set profile.myprofile.aws_secret_access_key YYYYYYYY
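Either way, you can verify what was written without printing the secrets in full:
aws configure list --profile myprofile
# the values land in ~/.aws/config and ~/.aws/credentials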

Edit EC2 security group from another AWS account

I have 2 accounts on AWS. On the first account I have created a permanent EC2 instance with a "dbSG" security group (which only allows connections on a specific port from a specific IP).
When I create an instance in second account using CloudFormation template it should:
Add this instance IP to "dbSG" security group and allow connection by specific port.
Connect to the first instance by this port.
Can I use AssumeRole in UserData when creating the instance in the second account and modify "dbSG" to allow connections from this instance? If yes, how can it be done, step by step?
For EC2-Classic
The CLI help for ec2 authorize-security-group-ingress has this example:
To add a rule that allows inbound HTTP traffic from a security group in another account
This example enables inbound traffic on TCP port 80 from a source security group (otheraccountgroup) in a different AWS account (123456789012). If the command succeeds, no output is returned.
Command:
aws ec2 authorize-security-group-ingress --group-name MySecurityGroup --protocol tcp --port 80 --source-group otheraccountgroup --group-owner 123456789012
So, provided that you know the security group ID of the "appSG", with credentials from the "db" account:
aws ec2 authorize-security-group-ingress --group-name dbSG --protocol tcp --port 1234 --source-group appSG --group-owner XXX-APP-ACCOUNT-ID
Via CloudFormation: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-security-group-rule.html#cfn-ec2-security-group-rule-sourcesecuritygroupownerid
Unfortunately, this seems not to work with instances in a VPC, but only with EC2-Classic.
For EC2-VPC: The user-data way
In the "db" account, add a Role to your CF template, specifying a Trust Policy that allows such role to be assumed by a specific role in another AWS account:
(replace XXX-... with your own values)
'RoleForOtherAccount': {
    'Type': 'AWS::IAM::Role',
    'Properties': {
        'AssumeRolePolicyDocument': {
            'Version': '2012-10-17',
            'Statement': [{
                'Effect': 'Allow',
                'Principal': {
                    'AWS': "arn:aws:iam::XXX-OTHER-AWS-ACCOUNT-ID:role/XXX-ROLE-NAME-GIVEN-TO-APP-INSTANCES"
                },
                'Action': ['sts:AssumeRole']
            }]
        },
        'Path': '/',
        'Policies': [{
            'PolicyName': 'manage-sg',
            'PolicyDocument': {
                'Version': '2012-10-17',
                'Statement': [{
                    'Effect': 'Allow',
                    'Action': ['ec2:AuthorizeSecurityGroupIngress'],
                    'Resource': '*'
                }]
            }
        }]
    }
}
Then, on the "app" instance you can add the following User-data script (via CloudFormation:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html#cfn-ec2-instance-userdata)
#!/bin/bash
# get this instance's public IP address from the EC2 metadata service
MY_IP=$(wget -qO- http://169.254.169.254/latest/meta-data/public-ipv4)
# assume the "db" account role and extract the temporary credentials (requires jq)
CREDENTIALS_JSON=$(aws sts assume-role --role-arn XXX-ARN-OF-ROLE-IN-DB-ACCOUNT --role-session-name "AppSessionForSGIngress" --query 'Credentials' --output json)
export AWS_ACCESS_KEY_ID=$(echo "$CREDENTIALS_JSON" | jq -r '.AccessKeyId')
export AWS_SECRET_ACCESS_KEY=$(echo "$CREDENTIALS_JSON" | jq -r '.SecretAccessKey')
export AWS_SESSION_TOKEN=$(echo "$CREDENTIALS_JSON" | jq -r '.SessionToken')
# authorize the IP in the "db" account's security group
aws --region XXX-DB-REGION ec2 authorize-security-group-ingress --group-id sg-XXX --protocol tcp --port 1234 --cidr "$MY_IP/32"
The IAM role of the "app" instance must allow calls to sts:AssumeRole.
Caveat: if you stop and restart the instance, its public IP will change (unless you've assigned an Elastic IP). Since user-data scripts are executed only during the first launch, your dbSG wouldn't get updated.
via Lambda
You could also use a Lambda function triggered by CloudTrail or AWS Config, although this is a bit tricky: Run AWS Lambda code when creating a new AWS EC2 instance
This way, you can also track calls to StopInstances and StartInstances and update (revoke/authorize) the dbSG rules in a more robust way.
References:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/user-data.html
https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_elements.html#Principal
It appears that your situation is:
Two VPCs, let's call them: VPC-A and VPC-B
Each VPC is owned by a different AWS account
Instance-A exists in VPC-A, with security group dbSG
When launching Instance-B in VPC-B, allow it to access Instance-A
The simplest method to achieve this is via VPC Peering, which permits direct communication between two VPCs in the same region. The VPCs can belong to different AWS accounts, but must have non-overlapping IP address ranges.
The process would be:
VPC-A invites VPC-B to peer
VPC-B accepts the invitation
Update routing tables in both VPCs to send traffic to each other
Create a security group in VPC-B, eg appSG
Modify dbSG to permit incoming connections from appSG
Associate Instance-B (and any other instances that should communicate with Instance-A) with the appSG security group
That's it! Security works the same way between peered VPCs as within a VPC. The only difference is that the instances are in separate VPCs that have been peered together.
See: Working with VPC Peering Connections
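For reference, the peering handshake can also be driven from the CLI; a minimal sketch with placeholder IDs (routing and security group changes then follow as in the list above):
# from the account that owns VPC-A: invite VPC-B to peer
aws ec2 create-vpc-peering-connection --vpc-id vpc-aaaaaaaa --peer-vpc-id vpc-bbbbbbbb --peer-owner-id 222222222222
# from the account that owns VPC-B: accept the invitation
aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id pcx-11111111
# in each VPC's route table, route the other VPC's CIDR over the peering connection
aws ec2 create-route --route-table-id rtb-aaaaaaaa --destination-cidr-block 10.1.0.0/16 --vpc-peering-connection-id pcx-11111111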

AWS Configure Bash One Liner

Can anybody tell me how to automate aws configure in bash with a one-liner?
Example:
$ aws configure --profile user2
AWS Access Key ID [None]: AKIAI44QH8DHBEXAMPLE
AWS Secret Access Key [None]: je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: text
Application: I want to automate this inside a Docker Entrypoint!
If you run aws configure set help you will see that you can supply settings individually on the command line and they will be written to the relevant credentials or config file. For example:
aws configure set aws_access_key_id AKIAI44QH8DHBEXAMPLE
You can also run this interactively to modify the default credentials:
aws configure
Or run it interactively to create/modify a named profile:
aws configure --profile qa
Note: with the first technique above, the command you type will appear in your shell history, which is not a good thing for passwords, secret keys, etc. In that case, use an alternative that does not log the secret parameter to history, or prevent the entire command from being logged.
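One way to do that is to read the secret from a prompt so the literal value never appears on the command line; a sketch assuming bash:
# read the secret without echoing it, then pass it via a variable
read -r -s -p "AWS Secret Access Key: " AWS_SECRET && echo
aws configure set aws_secret_access_key "$AWS_SECRET" --profile user2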
One liner
aws configure set aws_access_key_id "AKIAI44QH8DHBEXAMPLE" --profile user2 && aws configure set aws_secret_access_key "je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY" --profile user2 && aws configure set region "us-east-1" --profile user2 && aws configure set output "text" --profile user2
Note: setting the region is optional (but never set it to an empty string if you have no region, or it will misbehave); the same goes for the profile: if you don't set one, the values go under the default profile.
👍 Better practice with Secrets
Use secrets, then use associated environment variables:
aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" --profile user2 && aws configure set aws_secret_access_key "$AWS_ACCESS_KEY_SECRET" --profile user2 && aws configure set region "$AWS_REGION" --profile user2 && aws configure set output "text" --profile user2
📖 To know more
Run aws configure set help to get command line options.
Documentation for aws configure set.
Documentation for secrets: Docker, Kubernetes, GitLab.
If you want to automate this, you should write the files directly rather than driving the CLI; the CLI only writes those files anyway.
➜ cat ~/.aws/config
[profile_1]
output = json
region = eu-west-1
[profile_2]
output = json
region = eu-west-1
➜ cat ~/.aws/credentials
[profile_1]
aws_access_key_id =
aws_secret_access_key =
[profile_2]
aws_access_key_id =
aws_secret_access_key =
For those inclined to use bash, the following works quite well and keeps secrets out of your scripts. In addition, it will also save your input to a named profile in one go.
printf "%s\n%s\nus-east-1\njson" "$KEY_ID" "$SECRET_KEY" | aws configure --profile my-profile
I think this is the answer in one line:
aws configure set aws_access_key_id $YOUR_ACCESS_KEY_ID; aws configure set aws_secret_access_key $YOUR_SECRET_ACCESS_KEY; aws configure set default.region $YOUR_AWS_DEFAULT_REGION
One liner
aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" --profile profile_name_here && aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY" --profile profile_name_here && aws configure set region "$AWS_REGION" --profile profile_name_here && aws configure set output "json" --profile profile_name_here
Setting individual configuration
profile_name_here is the aws profile name to be saved to your aws config. Replace it with your own.
ACCESS KEY
aws configure set aws_access_key_id "$AWS_ACCESS_KEY_ID" --profile profile_name_here
SECRET ACCESS KEY
aws configure set aws_secret_access_key "$AWS_SECRET_ACCESS_KEY" --profile profile_name_here
REGION
aws configure set region "$AWS_REGION" --profile profile_name_here
OUTPUT
aws configure set output "json" --profile profile_name_here
The value specified here is json, but you can replace it with any of the supported output formats from the AWS docs:
json
yaml
yaml-stream
text
table
Note that $AWS_ACCESS_KEY_ID, $AWS_SECRET_ACCESS_KEY and $AWS_REGION are variables from your AWS credentials file, or environment variables if you are using CI. You can also replace them with regular string values, but that is not safe.
Building upon the suggestion by Tom in jarmod's answer, to "configure your keys in a config file that you then share with your docker container instead".
I found that slightly confusing, as I'm new to using Docker and the AWS CLI.
Also, I believe most who end up at this question are similarly trying to use Docker and the AWS CLI together.
So what you'd want to do, step by step is:
Create a credentials file containing
[default]
aws_access_key_id = default_access_key
aws_secret_access_key = default_secret_key
that you copy to ~/.aws/credentials, using a line in Dockerfile like
COPY credentials /root/.aws/credentials
and a config file containing
[default]
region = us-west-2
output = table
that you copy to ~/.aws/config, using a line in Dockerfile like
COPY config /root/.aws/config
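Alternatively, instead of baking credentials into the image, you can mount them read-only at run time so they never end up in an image layer; a sketch with an illustrative image name:
docker run --rm -v "$HOME/.aws:/root/.aws:ro" my-aws-image aws s3 ls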
Reference:
aws configure set help
