How to set IAM role for container level access in Mesos - amazon-ec2

I am deploying my microservice onto an EC2 instance via Mesos. The problem is that I am sharing the EC2 instance with other teams' microservices. All of these microservices deal with different S3 buckets, and we don't want the other teams to have access to our buckets. I need to assign an IAM role to my container so that only my microservices deployed on the EC2 instance can access my S3 bucket.
We are not using ECS and we deploy using Mesos. Any input or comment is appreciated. Thanks in advance.

There is no native AWS support for this. In the meantime you can use Lyft's metadataproxy (see also the blog post).
Quoting the blog:
We had an idea to build a web service that proxies calls to the metadata service on http://169.254.169.254 and pass through most of the calls to the real metadata service, but capture calls to the IAM endpoints. By capturing the IAM endpoints we can decide which IAM credentials we’ll hand back.
...
To know which IAM roles should be assumed, the metadataproxy has access to the docker socket. When it gets a request, it looks up the container, based on its request IP, finds that container’s environment variables and uses the value of the IAM_ROLE environment variable as the role to assume. It then uses STS to assume the role, caches the credentials in memory (for further requests) and returns them back to the caller. If the credentials cached in memory are set to expire, the proxy will re-assume the credentials.
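For intuition, here is a minimal sketch of that proxy logic in Python, assuming Flask, the docker SDK, and boto3. This is not Lyft's actual implementation: the account ID and route shape are placeholders, and the real metadataproxy also handles credential caching, expiry, and the full metadata response format (Code, Type, LastUpdated fields).

    # Sketch: map the caller's IP to a container via the docker socket,
    # read its IAM_ROLE environment variable, and hand back STS
    # credentials for that role only.
    import boto3
    import docker
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    docker_client = docker.from_env()   # talks to /var/run/docker.sock
    sts = boto3.client("sts")

    ACCOUNT_ID = "123456789012"  # placeholder AWS account ID

    def role_for_ip(ip):
        """Return the IAM_ROLE env var of the container using this IP, if any."""
        for container in docker_client.containers.list():
            networks = container.attrs["NetworkSettings"]["Networks"].values()
            if any(net.get("IPAddress") == ip for net in networks):
                env = dict(e.split("=", 1) for e in container.attrs["Config"]["Env"] or [])
                return env.get("IAM_ROLE")
        return None

    @app.route("/latest/meta-data/iam/security-credentials/<role>")
    def credentials(role):
        # Only hand out credentials for the role this container declared.
        if role != role_for_ip(request.remote_addr):
            abort(403)
        resp = sts.assume_role(
            RoleArn=f"arn:aws:iam::{ACCOUNT_ID}:role/{role}",
            RoleSessionName="metadataproxy",
        )
        creds = resp["Credentials"]
        return jsonify(
            AccessKeyId=creds["AccessKeyId"],
            SecretAccessKey=creds["SecretAccessKey"],
            Token=creds["SessionToken"],
            Expiration=creds["Expiration"].isoformat(),
        )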

Related

How does a local Docker container load secrets from AWS Secrets Manager?

I have a simple web application running on my machine (a Mac) using Docker. I want this application to load secrets from AWS Secrets Manager. Does the application need to assume an IAM role to load the secrets?
Also, I will eventually deploy this container to a self-managed Kubernetes cluster (no EKS/ECS). Is the process of loading secrets similar there?
This is a Python FastAPI application, but examples in Spring Boot are welcome; I'm more interested in the process.
There is more than one road to Rome here, but one way might be:
Create an IAM user that has access to the secret (and to the KMS key it is encrypted with);
Create an access key for that user;
Set the access key ID and secret access key for that user as environment variables in your local environment.
When deploying to your own K8s cluster, you can also set the environment variables on the Pod (probably through something like a CI/CD pipeline).
The boto3 module tries a defined sequence of credential sources when authenticating; you can find more details here. Just make sure you name the environment variables correctly (AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY).
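For the local case, a minimal sketch with boto3; the region, secret name, and JSON key below are placeholders, and boto3 picks the credentials up from the environment automatically:

    # boto3 resolves credentials automatically, e.g. from
    # AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY in the environment.
    import json
    import boto3

    client = boto3.client("secretsmanager", region_name="eu-west-1")  # example region

    # "my-app/secrets" and "db_password" are placeholder names.
    response = client.get_secret_value(SecretId="my-app/secrets")
    secret = json.loads(response["SecretString"])
    print(secret["db_password"])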

Security group update to allow an AWS Lambda function that is not attached to any VPC

There are two applications. One is developed with AWS Lambda (in Account A) and the other is deployed on ECS Fargate (in Account B) in AWS.
The first application (AWS Lambda) consumes an API from the second application (ECS Fargate). I need to allow the AWS Lambda function to access the ECS application (which is behind an Application Load Balancer) through an inbound rule in the security group.
The problem is that the AWS Lambda function is not attached to any VPC, and the two applications run in separate AWS accounts. How can I solve this?
Note: it is an internal application, not internet-facing.
If your ECS application's load balancer scheme is set to internal instead of internet-facing, then an AWS Lambda function that is not attached to a VPC will never be able to reach it. You are asking about security group rules, but no security group rule can give something on the Internet access to a resource that is not exposed to the Internet.
Your only option to make this work is to move the Lambda function into a VPC, and establish VPC peering between the two VPCs.
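Attaching the function to a VPC can be done from the console or the API; a hedged boto3 sketch follows, where the function name, subnet ID, and security group ID are placeholders (the function's execution role also needs the usual ENI-management permissions):

    import boto3

    lambda_client = boto3.client("lambda")

    # Placeholder function name and VPC resource IDs.
    lambda_client.update_function_configuration(
        FunctionName="my-consumer-function",
        VpcConfig={
            "SubnetIds": ["subnet-0123456789abcdef0"],
            "SecurityGroupIds": ["sg-0123456789abcdef0"],
        },
    )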

How to restrict a user who can SSH to EC2 from accessing an S3 bucket used by the EC2 application

The problem here is that I have an S3 bucket (cross-account). I only want the application I deployed on the EC2 instance to access the bucket (through the EC2 instance role). But I still want, say, User A (who has no role with access to the S3 bucket) to be able to SSH to the instance to perform some debugging. I definitely don't want User A, who can SSH to the EC2 instance, to access that S3 bucket. Is there a way to prevent this?
Pretty sure an EC2 instance role applies to the entire machine, so any user with login rights can fetch the role's temporary credentials from the instance metadata service and execute requests using the role.
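To see why, here is a quick illustration; any process on the instance can do this (IMDSv1 shown, while IMDSv2 requires fetching a session token first):

    # Any logged-in user can fetch the role's temporary credentials
    # from the instance metadata service.
    import urllib.request

    BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
    role = urllib.request.urlopen(BASE).read().decode().strip()
    creds = urllib.request.urlopen(BASE + role).read().decode()
    print(creds)  # JSON containing AccessKeyId, SecretAccessKey, Token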
To avoid having to debug locally on the instance, you could set up log shipping and export metric data to CloudWatch Logs/Metrics. You can also set up AWS SSM Run Command to allow execution of specific commands/scripts against the instances (a sketch follows below). Both CloudWatch and Run Command can be secured with IAM policies to control who has access to what.
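A minimal boto3 sketch of the Run Command approach; the instance ID, command, and log path are placeholders:

    import boto3

    ssm = boto3.client("ssm")

    # IAM policies on ssm:SendCommand control who may run this,
    # independent of SSH access to the instance.
    response = ssm.send_command(
        InstanceIds=["i-0123456789abcdef0"],
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["tail -n 100 /var/log/my-app.log"]},
    )
    print(response["Command"]["CommandId"])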

How to edit an AWS EC2 instance's security groups to allow access from a Lambda function only

I am running into a security-related issue with AWS Lambda and am not sure of the right way to resolve it.
Consider an EC2 instance A accessing the database on another EC2 instance B. If I want to restrict access to the DB on instance B to instance A only, I would modify the security group and add a custom TCP rule that allows access only from the public IP of instance A. That way, AWS takes care of everything and the DB server is not accessible from any other IP address.
Now let us replace instance A with a Lambda function. Since it is no longer an instance, there is no fixed IP address. So how do I restrict access to only the Lambda function and block all other traffic?
Have the Lambda job determine its IP and dynamically update instance B's security group, then reset the security group when done (see the sketch after the quoted announcement below).
Until there is support for Lambda running within a VPC, this is the only option. Support for that has been announced for later this year. The following quote is from the link referenced above.
Many AWS customers host microservices within an Amazon Virtual Private Cloud and would like to be able to access them from their Lambda functions. Perhaps they run a MongoDB cluster with lookup data, or want to use Amazon ElastiCache as a stateful store for Lambda functions, but don't want to expose these resources to the Internet.
You will soon be able to access resources of this type by setting up one or more security groups within the target VPC, configuring them to accept inbound traffic from Lambda, and attaching them to the target VPC subnets. Then you will need to specify the VPC, the subnets, and the security groups when you create your Lambda function (you can also add them to an existing function). You'll also need to give your function permission (via its IAM role) to access a couple of EC2 functions related to Elastic Networking.
This feature will be available later this year. I'll have more info (and a walk-through) when we launch it.
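In the meantime, here is a hedged sketch of the self-updating security group workaround with boto3. The security group ID, port, and IP-echo service are placeholders/assumptions, and the function's role would need ec2:AuthorizeSecurityGroupIngress and ec2:RevokeSecurityGroupIngress permissions:

    import urllib.request
    import boto3

    ec2 = boto3.client("ec2")

    SECURITY_GROUP_ID = "sg-0123456789abcdef0"  # placeholder: instance B's group
    DB_PORT = 3306                              # placeholder: the DB port

    def handler(event, context):
        # Discover this invocation's public IP via an IP-echo service.
        ip = urllib.request.urlopen(
            "https://checkip.amazonaws.com").read().decode().strip()
        cidr = ip + "/32"

        ec2.authorize_security_group_ingress(
            GroupId=SECURITY_GROUP_ID, IpProtocol="tcp",
            FromPort=DB_PORT, ToPort=DB_PORT, CidrIp=cidr)
        try:
            pass  # ... query the database on instance B here ...
        finally:
            # Reset the security group when done.
            ec2.revoke_security_group_ingress(
                GroupId=SECURITY_GROUP_ID, IpProtocol="tcp",
                FromPort=DB_PORT, ToPort=DB_PORT, CidrIp=cidr)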
I believe the link below will explain the Lambda permission model for you.
http://docs.aws.amazon.com/lambda/latest/dg/intro-permission-model.html

Client communication with Amazon EC2 instance

Can an Amazon EC2 instance process requests from, and return results to, an external client, which may be a browser or a non-browser application? (I know that the EC2 instance will require an IP address and must be able to create a socket and bind to a port in order to do this.)
I'm considering an Amazon EC2 instance because the server application is not written in PHP, Ruby or any other language that conventional web hosting services support by default.
Sure it will. Just set up the security group the right way to allow your clients to connect.
Take a look at this guide: Amazon Elastic Compute Cloud - Security Groups
Also keep in mind: it's not possible to change an instance's security groups after you have created the EC2 instance. That feature is available for VPC instances only. See http://aws.amazon.com/vpc/faqs/#S2 for more information.
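For illustration, adding an inbound rule for your clients with boto3 might look like this; the group ID, port, and CIDR are placeholders, and 0.0.0.0/0 admits any client on the Internet, so narrow the range if your clients are known:

    import boto3

    ec2 = boto3.client("ec2")

    # Placeholder group ID, port, and CIDR range.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpProtocol="tcp",
        FromPort=8080,
        ToPort=8080,
        CidrIp="0.0.0.0/0",
    )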
