AWS IAM control of a group of EC2 instances

We are using IAM permissions for groups and users with great success for S3, SQS, Redshift, etc. The IAM policies for S3 in particular give a lovely level of detail by path and bucket.
I am running into some head-scratching when it comes to EC2 permissions.
How do I create a permission that allows an IAM user to:
create up to n instances
do whatever he/she wants on those instances only (terminate / stop / describe)
...and makes it impossible for him/her to affect our other instances (change termination protection / terminate / etc.)?
I've been trying Conditions on tag ("Condition": {"StringEquals": {"ec2:ResourceTag/purpose": "test"}}), but that means that all of our tools need to be modified to add that tag at creation time.
Is there a simpler way?

Limiting the number of instances an IAM user can create is, unfortunately, not possible. The only limit available is on the number of instances in the entire account.
Limiting permissions to specific instances is possible, but you have to specify the permissions for each instance ID, using this format:
arn:aws:ec2:region:account:instance/instance-id
More information is available here:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-supported-iam-actions-resources.html
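If you do adopt the tag approach, the condition can be combined with the per-instance ARN format above. A minimal sketch of such a policy (the region, account ID, statement ID, and tag value are placeholders, not anything confirmed by the question):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowActionsOnTaggedInstancesOnly",
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances",
        "ec2:TerminateInstances"
      ],
      "Resource": "arn:aws:ec2:us-east-1:111122223333:instance/*",
      "Condition": {
        "StringEquals": {"ec2:ResourceTag/purpose": "test"}
      }
    }
  ]
}
```

Note that ec2:DescribeInstances does not support resource-level permissions, so describe access needs a separate statement with "Resource": "*".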

Related

any script to know all the AWS resources created by certain IAM user

Good day,
Is there any script or AWS CLI command to find out which IAM user created which resource in AWS, so that we can enter an IAM user name and see all the resources created by that particular user?
Thanks in advance.
The service that you're looking for is CloudTrail.
By default, it retains 90 days' worth of events for the current account and region, and you can access it from either the Console or the CLI. You can also configure it to write events to S3, where they'll be preserved for as long as you want to pay for the storage (this also lets you capture events across all regions, and for every account in an organization).
CloudTrail events can be challenging to search. If you're just looking for events by a specific user, and know that user's access key (here I'm using my access key, stored in an environment variable), you can use a query like this:
aws cloudtrail lookup-events --lookup-attributes "AttributeKey=AccessKeyId,AttributeValue=$AWS_ACCESS_KEY_ID" --query 'Events[].[EventTime,EventName,Username,EventId]' --output table
Or, by username:
aws cloudtrail lookup-events --lookup-attributes "AttributeKey=Username,AttributeValue=parsifal" --query 'Events[].[EventTime,EventName,Username,EventId]' --output table
You can then use grep to find the event(s) that interest you, and dig into the details of a specific event with:
aws cloudtrail lookup-events --lookup-attributes "AttributeKey=EventId,AttributeValue=8c5a5d8a-9999-9999-9999-a8e4b5213c3d"
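For instance, to narrow the results to instance launches, pipe the table through grep. The real command needs credentials, so the grep step is demonstrated here on a hypothetical sample of what the table output looks like:

```shell
# Real pipeline:
#   aws cloudtrail lookup-events --lookup-attributes \
#     "AttributeKey=Username,AttributeValue=parsifal" \
#     --query 'Events[].[EventTime,EventName,Username,EventId]' \
#     --output table | grep RunInstances
# The same grep applied to sample table rows (made-up data):
grep RunInstances <<'EOF'
|  2023-01-05T10:12:01  |  RunInstances        |  parsifal  |  8c5a5d8a-...  |
|  2023-01-05T10:13:44  |  TerminateInstances  |  parsifal  |  91ab23cd-...  |
EOF
```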

Concurrent az login executions

I am using the Azure CLI to perform a health check on some Azure VMs. The health checks are deployed through a Jenkins stage, using bash. The stage itself may take several hours to complete, during which several 'az vm run-command' invocations are executed that all require the proper credentials.
I also have several Jenkins pipelines that deploy different products and that are supposed to be able to run in parallel. All of them have the same health checks stage.
When I execute 'az login' to generate an auth token and 'az account set' to set the subscription, as far as I understand, this data is written to a profile file (~/.azure/azureProfile.json). That is all well and good, but whenever I trigger a parallel pipeline on this Jenkins container with a different Azure subscription, the profile file naturally gets overwritten with the different credentials. This causes the other health check to fail at its next 'az vm run-command' execution, since it is looking for a resource group that exists in a different subscription.
I was thinking of potentially creating a new unique Linux user as part of each stage run and then removing it once it's done, so all pipelines will have separate profile files. This is a bit tricky though, since this is a Jenkins docker container using an alpine image and I would need to create the users with each pipeline rather than in the dockerfile, which brings me to a whole other drama - to give the Jenkins user sufficient privileges to create and delete users and so on...
Also, since the session credentials are stored in the ~/.azure/accessTokens.json and azureProfile.json files by default, I could theoretically generate a different directory for each execution, but I couldn't find a way to alter those default files/location in the Azure docs.
What do you think is the best/easiest way to work around this?
Setting the AZURE_CONFIG_DIR environment variable does the trick as described here.
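A minimal sketch of that approach for a pipeline stage (the az commands are shown as comments, since they need real credentials; the service-principal variables are assumptions):

```shell
# Give this pipeline run its own isolated Azure CLI profile directory,
# so parallel stages no longer clobber one another's azureProfile.json.
export AZURE_CONFIG_DIR="$(mktemp -d /tmp/azure-config-XXXXXX)"
echo "Using isolated Azure profile dir: $AZURE_CONFIG_DIR"
# Every az invocation in this shell now reads and writes its profile here:
#   az login --service-principal -u "$APP_ID" -p "$SECRET" --tenant "$TENANT"
#   az account set --subscription "$SUBSCRIPTION_ID"
#   az vm run-command invoke ...
# Clean up when the stage finishes:
#   rm -rf "$AZURE_CONFIG_DIR"
```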
I would try to keep az login as it is, remove az account set and use --subscription argument for each command instead.
You can see that ~/.azure/azureProfile.json contains tenantId and user information for each subscription, and ~/.azure/accessTokens.json contains all tokens.
So, if you specify your subscription explicitly each time, you will not depend on the shared user context.
I have my Account 1 for subscription xxxx-xxxx-xxxxx-xxxx, and Account 2 for subscription yyyy-yyyy-yyyy-yyyy and I do:
az login # Account 1
az login # Account 2
az group list --subscription "xxxx-xxxx-xxxxx-xxxx"
az group list --subscription "yyyy-yyyy-yyyy-yyyy"
and it works well under the same Unix user.

How to rotate IAM user access keys

I am trying to rotate the access keys & secret keys for all IAM users. Last time this was required I did it manually, but now I want to do it with a rule or some automation.
I went through some links and found this one:
https://github.com/miztiik/serverless-iam-key-sentry
I tried to use it, but I was not able to perform the activity; it always gave me an error. Can anyone suggest a better way to do it?
As I am new to AWS Lambda, I am also not sure how my code can be tested.
There are different ways to implement a solution. One common approach is to store the IAM user access keys in AWS Secrets Manager for safekeeping. Next, you could configure a monthly or 90-day check that rotates the keys using the AWS CLI and stores the new keys back in Secrets Manager. You could use an SDK of your choice for this.
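The rotation flow itself can be sketched with plain AWS CLI calls. Since the real calls need credentials, the sketch below is in dry-run form (each step is echoed rather than executed); the user name, key ID, and secret ID are hypothetical:

```shell
USER_NAME="example-user"     # hypothetical IAM user name
OLD_KEY_ID="AKIAOLDKEYID000"  # hypothetical key being rotated out

run() { echo "+ $*"; }       # dry run: print each step; drop the echo to execute

# 1. Create the replacement key (IAM allows at most two keys per user)
run aws iam create-access-key --user-name "$USER_NAME"
# 2. Store the new key pair in Secrets Manager so consumers can pick it up
run aws secretsmanager put-secret-value --secret-id "iam/$USER_NAME" \
    --secret-string '{"AccessKeyId":"...","SecretAccessKey":"..."}'
# 3. Once consumers have switched over, deactivate the old key, verify
#    nothing breaks, then delete it
run aws iam update-access-key --user-name "$USER_NAME" \
    --access-key-id "$OLD_KEY_ID" --status Inactive
run aws iam delete-access-key --user-name "$USER_NAME" \
    --access-key-id "$OLD_KEY_ID"
```

Deactivating before deleting gives you a window to roll back if some consumer is still using the old key.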

Is there a way to create nodes from cloud formation that can ssh to each other without passwords?

I am creating an AWS CloudFormation template which sets up a set of nodes that must allow keyless SSH login amongst themselves, i.e. one controller must be able to log in to all slaves with its private key. The controller's private key is generated dynamically, so I do not have access to it and cannot hard-code it into the user data of the template or pass it as a parameter.
Is there a way in Cloud Formation templates to add the controller's public key to slave nodes' authorized keys files?
Is there some other way to use security groups or IAM to do what is required?
You have to pass the public key of the master server to the slave nodes in the form of user data; CloudFormation does support user data. You may have to figure out the syntax for this.
In other words, consider it a simple bash script which copies the master server's public key to the slaves, and pass this bash script as user data so that it gets executed the first time each instance is created.
You will find plenty of Google results on the above.
I would approach this problem with IAM machine roles. You can grant specific machines certain AWS rights. IAM roles do not apply to SSH access, but to AWS API calls, like S3 bucket access or creating EC2 instances.
Therefore, a solution might look like:
Create a controller machine role which can write to a particular S3 bucket.
Create a slave machine role which can read from that bucket.
Have the controller create and upload a public key into the bucket.
Since you don't know whether the controller is created before the slaves, have cloud-init set up a cron job that runs every couple of minutes and downloads the key from the bucket if it hasn't done so yet.
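The steps above can be sketched as slave-node user data. This is only an illustration of the approach: the bucket name, user, and paths are assumptions, and the slave's IAM role must grant s3:GetObject on the bucket:

```shell
#!/bin/bash
# Hypothetical slave-node user data: poll S3 for the controller's public key.
cat > /usr/local/bin/fetch-controller-key.sh <<'EOF'
#!/bin/bash
MARKER=/var/run/controller-key-installed
[ -f "$MARKER" ] && exit 0
if aws s3 cp s3://my-cluster-keys/controller.pub /tmp/controller.pub; then
    cat /tmp/controller.pub >> /home/ec2-user/.ssh/authorized_keys
    touch "$MARKER"
fi
EOF
chmod +x /usr/local/bin/fetch-controller-key.sh
# Retry every two minutes until the controller has uploaded its key
echo '*/2 * * * * root /usr/local/bin/fetch-controller-key.sh' \
    > /etc/cron.d/fetch-controller-key
```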

Is there anyway to get the user data of a running EC2 instance via AWS SDK?

I've tried to use DescribeInstances but I've found the response result does not contain user data. Is there any way to retrieve this user data?
My use case: I'm trying to request spot instances and assign different user data to each EC2 instance for some kind of automation, and then tag each instance's name according to that user data. Based on my understanding, creating a tag request requires an InstanceId, which is not available at the time I make the request for a spot instance.
So I'm wondering whether there is any way to get the user data of a running instance without SSHing into it...
The DescribeInstanceAttribute endpoint will provide you with user data.
http://docs.aws.amazon.com/AWSEC2/latest/APIReference/ApiReference-query-DescribeInstanceAttribute.html
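From the CLI the call looks like the commented line below (the instance ID is a placeholder). The attribute comes back base64-encoded, so a decode step is needed; that step is demonstrated here with a round trip on a sample user-data script:

```shell
INSTANCE_ID="i-0123456789abcdef0"   # hypothetical instance ID
# Real call (needs credentials); userData is returned base64-encoded:
#   aws ec2 describe-instance-attribute --instance-id "$INSTANCE_ID" \
#     --attribute userData --query 'UserData.Value' --output text | base64 --decode
# Round-trip demo of the decode step on a sample user-data script:
SAMPLE="$(printf '#!/bin/bash\necho hello' | base64)"
printf '%s' "$SAMPLE" | base64 --decode
```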
