Why is AWS.EC2MetadataCredentials giving the wrong role? - amazon-ec2

We have a Node service (v8.15.1) deployed on AWS EC2 container instances using ECS. We have AWS_ACCESS_KEY set up in the environment, and a role is also mapped to the EC2 instances. I am supposed to use the EC2 instance role to access AWS SSM, so I tried the following:
AWS.config.credentials = new AWS.EC2MetadataCredentials();
and tried to read a parameter from SSM. I get the error below:
{
"msg": "User: arn:aws:sts::AccountID:assumed-role/role-name/i-*****92a is not authorized to perform: ssm:GetParameter on resource: arn:aws:ssm:resource_id:parameter/parame_id"
}
Please note the instance id (i-*****92a) appended to the role name, which I think is what prevents me from accessing the SSM parameter, because the actual role name does not have an instance id in it.
Expected: the actual role name, without the instance id appended.

We figured out that this is normal behaviour: credentials from the instance metadata are assumed-role credentials, and the session name (the instance id) is appended after the role name in the ARN. The actual issue was that one of the parameter names set in SSM was wrong, and hence this role was not able to read it.
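A quick way to confirm this from code is to split the caller ARN: the last path segment of an assumed-role ARN is the session name, which for instance-profile credentials is the instance id. A minimal Python sketch (the helper names are my own; the STS client is passed in so the parsing stays testable without credentials):

```python
def split_assumed_role_arn(arn):
    """Split an assumed-role ARN into (role_name, session_name).

    For EC2 instance-profile credentials the session name is the
    instance id, so it appears after the real role name in the ARN.
    """
    # e.g. "arn:aws:sts::123456789012:assumed-role/my-role/i-0abc92a"
    resource = arn.split(":", 5)[5]            # "assumed-role/my-role/i-0abc92a"
    _, role_name, session_name = resource.split("/", 2)
    return role_name, session_name

def current_role(sts):
    """Call with sts = boto3.client('sts') to inspect the live identity."""
    return split_assumed_role_arn(sts.get_caller_identity()["Arn"])
```

Seeing `(role_name, instance_id)` here confirms the role itself is intact; only the session name carries the instance id.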

Related

VPC-enabled Lambda function cannot launch/access EC2 in the same VPC

I have a VPC-enabled Lambda function which attempts to launch an EC2 instance using a launch template. The EC2 launch step (run_instances) fails with the generic network error below.
Calling the invoke API action failed with this message: Network Error
I can launch an instance successfully directly from the launch template, so I think everything is fine with the launch template itself. I have configured the following in the launch template:
Amazon Machine Image ID
Instance type
Key Pair
A network interface (ENI) which I had created beforehand using a specific (VPC, Subnet, Security Group) combo.
IAM role
The Lambda function includes the code below:
import json
import boto3
import time

def lambda_handler(event, context):
    ec2_cl = boto3.client('ec2')
    launch_temp = {"LaunchTemplateId": "<<Launch Template ID>>"}
    resp_ec2_launch = ec2_cl.run_instances(
        MaxCount=1,
        MinCount=1,
        LaunchTemplate=launch_temp,
        SubnetId="<<Subnet ID>>",
    )
A few things about the Lambda function:
I have used the subnet in the run_instances() call because this is not the default VPC/subnet.
The function is set up with the same (VPC, Subnet, Security Group) combo as used in the launch template.
The execution role is set up to be the same IAM role as used in the launch template.
The function, as you can see, needs access only to EC2; internet access is not needed.
I replaced the run_instances() call with describe_instance_status() (using the instance id of an instance created directly from the launch template) and got the same error.
The error is a network error, so I assume all is fine (at least as of now) with the privileges granted to the IAM role; I'm sure there would be a different error if the IAM role were missing any policies.
Can someone indicate what I might be missing?
It appears that the problem is that your AWS Lambda function cannot reach the Internet, since the Amazon EC2 API endpoint is on the Internet.
If a Lambda function is not attached to a VPC, it has automatic access to the Internet.
If a Lambda function is attached to a VPC and requires Internet access, then the configuration should be:
Attach the Lambda function only to private subnet(s)
Launch a NAT Gateway in a public subnet
Configure the Route Table on the private subnets to send Internet-bound traffic (0.0.0.0/0) through the NAT Gateway
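The third step above can be sketched with boto3 (the function names are mine, the client is passed in so the helper stays testable, and the route table and NAT Gateway IDs are placeholders for your own resources):

```python
def internet_route(route_table_id, nat_gateway_id, cidr="0.0.0.0/0"):
    """Arguments for sending Internet-bound traffic through a NAT Gateway."""
    return {
        "RouteTableId": route_table_id,
        "DestinationCidrBlock": cidr,
        "NatGatewayId": nat_gateway_id,
    }

def add_internet_route(ec2, route_table_id, nat_gateway_id):
    """Call with ec2 = boto3.client('ec2'), once per private route table."""
    return ec2.create_route(**internet_route(route_table_id, nat_gateway_id))
```

Repeat for each private route table whose subnets host the Lambda function.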
It appears that your VPC does not have an Internet Gateway, but it does have a VPC Endpoint for EC2.
Therefore, to try and reproduce your situation, I did the following:
Created a new VPC with one subnet but no Internet Gateway
Added a VPC Endpoint for EC2 to the subnet
Created a Lambda function that would call DescribeInstances() and attached the Lambda function to the subnet
Opened the security group on the VPC Endpoint and Lambda function to allow all traffic from anywhere (hey, it's just a test!)
My Lambda function:
import json
import boto3

def lambda_handler(event, context):
    ec2 = boto3.client('ec2', region_name='ap-southeast-2')
    print(ec2.describe_instances())
The result: The Lambda function successfully received a response from EC2, with a list of instances in the region. No code or changes were required.

Query AWS RDS from Lambda Securely

I am trying to connect my Lambda to RDS just as a learning exercise.
Currently, all resources are created through CloudFormation and I would like to continue to do that if possible.
My issue is with the following statement from https://docs.aws.amazon.com/lambda/latest/dg/vpc-rds.html, which details how to connect.
A second file contains connection information for the function.
Example rds_config.py
#config file containing credentials for RDS MySQL instance
db_username = "username"
db_password = "password"
db_name = "ExampleDB"
The statement AWS is making makes it seem like I should hardcode these values into a file, which does not seem secure. I could try to use environment variables, but I think the same issue would arise.
If anyone has any advice on how to connect Lambda to RDS securely, I would greatly appreciate it!
If you don't want to use environment variables for whatever reason, you can have your Lambda function query the AWS Systems Manager Parameter Store for you.
So, once your function has been triggered, you can just query SSM to get the desired parameters and then pass them into your RDS connection.
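A minimal sketch of that pattern, assuming the credentials are stored as SecureString parameters under a /myapp/rds prefix (the prefix, key names, and helper names are my own assumptions, not from the docs):

```python
def parameter_names(prefix, keys=("db_username", "db_password", "db_name")):
    """Full SSM parameter names for each config key."""
    return [f"{prefix}/{key}" for key in keys]

def get_db_config(ssm, prefix="/myapp/rds"):
    """Fetch and decrypt DB settings from Parameter Store.

    Call with ssm = boto3.client('ssm'); SecureString values are
    decrypted server-side via WithDecryption=True.
    """
    resp = ssm.get_parameters(Names=parameter_names(prefix), WithDecryption=True)
    return {p["Name"].rsplit("/", 1)[-1]: p["Value"] for p in resp["Parameters"]}

# In the Lambda handler (boto3 is bundled with the Lambda runtime):
#   cfg = get_db_config(boto3.client("ssm"))
#   then pass cfg["db_username"], cfg["db_password"], cfg["db_name"]
#   to your database driver.
```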
Just remember that if your Lambda also needs to reach services outside the VPC (and in this case it would, if SSM is reached over its public endpoint), attaching it to a public subnet is not enough, because Lambda ENIs never get public IPs. Keep the function in private subnets (which can route to RDS) and route their Internet-bound traffic through a NAT Gateway in a public subnet.
Setting up Environment Variables would be the easiest to get you off ground, though.
EDIT: Check this answer, where I walk the OP through creating a VPC with both public and private subnets, if you need a quick start.
EDIT 2: Good news: AWS released VPC endpoints for SSM some time ago, so your Lambda no longer needs to go through the Internet; you can just hit that VPC endpoint. You can see it in the official docs.
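If you go the endpoint route, creating an interface endpoint for SSM might look like the sketch below (the VPC, subnet, and region values are placeholders; the service-name pattern com.amazonaws.&lt;region&gt;.ssm is the documented one):

```python
def ssm_endpoint_service(region):
    """Interface-endpoint service name for SSM in a given region."""
    return f"com.amazonaws.{region}.ssm"

def create_ssm_endpoint(ec2, vpc_id, subnet_ids, region):
    """Call with ec2 = boto3.client('ec2'); IDs are your own resources.

    PrivateDnsEnabled=True lets the default SSM hostname resolve to the
    endpoint, so boto3 code needs no changes.
    """
    return ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=vpc_id,
        ServiceName=ssm_endpoint_service(region),
        SubnetIds=subnet_ids,
        PrivateDnsEnabled=True,
    )
```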

How to prevent a user who can SSH to EC2 from accessing an S3 bucket used by the EC2 application

The problem: I have an S3 bucket (cross-account). I only want the application I deployed to the EC2 instance to access the bucket (through the EC2 instance role). But I still want, say, User A (who has no role access to the S3 bucket) to be able to SSH to the instance to perform some debugging. I definitely don't want User A, who can SSH to EC2, to access that S3 bucket. Is there a way to prevent this?
Pretty sure an EC2 role applies to the entire machine, so any user with login rights would be able to execute requests using the role.
To avoid having to debug locally on the instance, you could set up log shipping and export metric data to CloudWatch Logs/Metrics. You can also set up AWS SSM Run Command to allow execution of specific commands/scripts against the instances. Both CloudWatch and Run Command can be secured with IAM policies to control who has access to what.
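To scope that access, an IAM policy can allow ssm:SendCommand only against a specific document and specific instances. A sketch of such a policy, built in Python (the region, account id, and instance id are placeholders):

```python
import json

# Allow running only AWS-RunShellScript, and only on one instance.
# All identifiers below are placeholders for your own resources.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "ssm:SendCommand",
        "Resource": [
            "arn:aws:ssm:us-east-1::document/AWS-RunShellScript",
            "arn:aws:ec2:us-east-1:123456789012:instance/i-0abc1234def567890",
        ],
    }],
}
print(json.dumps(policy, indent=2))
```

Attach the resulting policy to the debugging user or group instead of granting broad SSM access.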

Is there any alternative to add IAM role to my Running EC2 instance?

How can I add an IAM Role to a running instance? I know that it's not possible using the Console, but is there any alternative?
You can assign an IAM role to your instance using the following workaround:
Create an AMI from your instance;
Terminate your old instance;
Re-deploy it from the previously created AMI and assign an IAM role during the process.
Assigning an IAM (Identity and Access Management) Role to an Amazon EC2 instance is a way of securely providing rotating credentials to applications running on an EC2 instance. Such roles are normally assigned when the instance is first launched.
If the instance you would like to use has already been launched, either:
Launch a new instance ("Launch More Like This") with a Role, or
Create a User in IAM: you will receive an Access Key and Secret Key that can be configured on the instance using the aws configure command. This is part of the AWS Command-Line Interface (CLI).
See documentation: Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances
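Worth noting: since this answer was written, AWS has added the ability to attach an instance profile directly to a running instance (associate-iam-instance-profile). A boto3 sketch, with placeholder identifiers and the client passed in:

```python
def association_request(instance_id, instance_profile_name):
    """Arguments for attaching an instance profile to a running instance."""
    return {
        "IamInstanceProfile": {"Name": instance_profile_name},
        "InstanceId": instance_id,
    }

def attach_instance_profile(ec2, instance_id, instance_profile_name):
    """Call with ec2 = boto3.client('ec2').

    Requires ec2:AssociateIamInstanceProfile and iam:PassRole on the
    caller's identity; no stop/terminate of the instance is needed.
    """
    return ec2.associate_iam_instance_profile(
        **association_request(instance_id, instance_profile_name)
    )
```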

Ansible Login To Fresh Created Cloud Instance

I have created a playbook to create a fresh instance from a cloud image.
Now, can somebody please help me figure out how I can log in to the instance with the key pair that was used to create it? I don't want to use any user other than root to log in.
Thanks.
In your hosts file:
[servercategory]
54.85.136.142 ansible_ssh_private_key_file=~/.ssh/your-ssh-key.pem ansible_ssh_user=root
