I am using Terraform to create an EC2 instance with an IAM instance profile. With all the proper roles and policies set, I am still getting this error:
UnauthorizedOperation: You are not authorized to perform this operation. status code: 403
Here is my main.tf:
provider "aws" {
region = "us-west-2"
}
resource "aws_instance" "aws_test" {
ami = "ami-image"
instance_type = "t2.micro"
iam_instance_profile = "test-role"
}
Here is my aws policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*"
      ],
      "Resource": [
        "*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:InstanceType": [
            "t2.micro"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "*"
    }
  ]
}
This is the ARN of the role which I am using in my main.tf:
arn:aws:iam:::role/test-role
On googling, I found articles which tell me this should work, but I seem to be missing something. Any help would be highly appreciated.
Thanks in advance!!
Update: I am running this by directly installing Terraform on an EC2 machine; not sure if this could be causing a problem.
Running Terraform on the EC2 instance without providing explicit credentials means the AWS provider will fall back to the instance metadata API to retrieve credentials.
You can check whether the metadata API is reachable from inside the instance.
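For example, from a shell on the instance (a quick check using IMDSv2; "test-role" is the instance profile name from the question, replace it with whatever the first call returns):
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
# List the role whose credentials the instance can retrieve
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Fetch the temporary credentials for that role
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/iam/security-credentials/test-role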
The UnauthorizedOperation error means that the user or role actually running Terraform does not have permission to create the aws_instance.
Make sure that the user or role running Terraform has the proper permissions in AWS IAM.
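A quick way to confirm which identity Terraform is running as, assuming the AWS CLI is installed on the same machine and picks up the same credential chain:
aws sts get-caller-identity
If this returns the instance's assumed role rather than the identity you expected, the EC2 permissions have to be granted to that role.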
Related
I have an AWS role with ReadOnlyAccess (AWS managed policy).
I need to make this role capable of executing some extra actions, for example starting/stopping an Amazon EC2 instance and connecting via SSM in the eu-west-1 and eu-central-1 regions, but even with full permissions for EC2 and SSM it does not allow performing the listed actions:
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"ec2:*",
"ssm:*",
],
"Resource": "*",
"Condition": {
"StringEquals": {
"aws:RequestedRegion": [
"eu-west-1",
"eu-central-1",
]
}
}
}
When I start an instance, it does try to start (I can see it in the CloudTrail logs) but then it stops within 2 seconds.
For the SSM connection I received this error:
An error occurred while calling the StartConnection API operation. AccessDeniedException: User: arn:aws:sts::acc_id:assumed-role/sre/user is not authorized to perform: ssm-guiconnect:StartConnection on resource: arn:aws:ec2:eu-central-1:acc_id:instance/*"
However, if I add full permissions to this role it works, and users with this role can perform the needed actions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": [
            "eu-west-1",
            "eu-central-1",
            "us-east-1"
          ]
        }
      }
    }
  ]
}
Is there some bug in AWS or could there be pitfalls from my infrastructure side?
AWS policy does not recognize regions in "Condition".
EC2 instances: I found that encryption is used for EC2 on this account, so granting full rights for KMS (kms:*) solved the problem with starting instances.
SSM (Fleet Manager): I used policysim.aws.amazon.com/home/index.jsp#role/sre to debug the policy and found that the SSM connection needs the "ssm-guiconnect:StartConnection" and "ssm:StartSession" permissions.
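For reference, a minimal sketch of an additional statement covering those findings (kms:* is what solved it here; scoping it to the specific KMS key used for the EBS volumes would be tighter):
{
  "Effect": "Allow",
  "Action": [
    "kms:*",
    "ssm:StartSession",
    "ssm-guiconnect:StartConnection"
  ],
  "Resource": "*"
}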
I want to access AWS Secrets Manager in all my Lambda functions (AWS::Serverless::Function). Currently, I have to reference each individual Lambda function's role in the secret's resource policy, like below. Since I have many Lambda functions, this is tedious. I tried "Service": "lambda.amazonaws.com" as the principal, but it didn't work.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:sts::xxxxxxx:assumed-role/employer-api-getAllEmployeesFunctionRole-xxxxx/employer-api-4-getAllEmployeesFunction-xxxxx",
          "arn:aws:sts::xxxxxxx:assumed-role/employee-backend-getEmployeeByIdFunctionRole-xxxxx/employee-backend-getEmployeeByIdFunction-xxxxx"
        ]
      },
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "*"
    }
  ]
}
You can create a policy (like the one below) and attach it to your Lambda's execution role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "*"
    }
  ]
}
This will allow your Lambda function to get the value of any secret stored in Secrets Manager.
According to best practice, though, we should grant our Lambda (or any other AWS service) only the minimal access required.
So if your Lambda needs access to only one secret, it is best to put the ARN of that secret in the policy below and attach it to your role.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "<ARN of secret required by lambda>"
    }
  ]
}
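For example, the policy could be created and attached to the execution role with the AWS CLI; the policy name, file name, and account ID below are placeholders, and the role name is taken from the question:
aws iam create-policy --policy-name lambda-read-my-secret --policy-document file://secret-read-policy.json
aws iam attach-role-policy --role-name employer-api-getAllEmployeesFunctionRole-xxxxx --policy-arn arn:aws:iam::<account-id>:policy/lambda-read-my-secret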
You can use the AWS CDK IAM module to easily create roles and policies.
I want to upload my JMeter dashboards to S3. The JMeter tests run on EC2 instances. I would like to use IAM roles instead of an access key to upload the dashboards to S3, for security reasons.
I went through this page, where files are uploaded with an access key using HTTP requests:
https://www.blazemeter.com/blog/how-to-handle-dynamic-aws-sigv4-in-jmeter-for-api-testing
Can the same be achieved through IAM roles instead of an access key, or do I need to import Java classes to upload the files using an S3 client, an InstanceProfileCredentialsProvider, and a processor?
Here is something you can try:
Create Role:
aws iam create-role --role-name <PerfTest-EC2-Role-Name> --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Sid":"","Effect":"Allow","Principal":{"Service": "ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
Add Role to EC2 Instance Profile:
aws iam add-role-to-instance-profile --instance-profile-name <JMeter-EC2-InstanceProfile-ID> --role-name <PerfTest-EC2-Role-Name>
Grant the Role S3 permissions:
cat << EOF > BucketPolicy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::<Bucket-Name>/*"
    },
    {
      "Sid": "ServiceRoleWriteObject",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Client-ID>:role/<PerfTest-EC2-Role-Name>"
      },
      "Action": [
        "s3:DeleteObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<Bucket-Name>/*"
    }
  ]
}
EOF
aws s3api put-bucket-policy --bucket <Bucket-Name> --policy file://BucketPolicy.json
If step 2 above fails with
Cannot exceed quota for InstanceSessionsPerInstanceProfile: 1
you can look at this answer.
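If the instance profile is not yet associated with the JMeter EC2 instance, something along these lines should attach it (the instance ID is a placeholder):
aws ec2 associate-iam-instance-profile --instance-id <JMeter-EC2-Instance-ID> --iam-instance-profile Name=<JMeter-EC2-InstanceProfile-ID>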
I'll try to explain my issue the best I can.
I want to create an IAM Role with my own RebootPolicy that, when attached to an EC2 instance, allows that instance to reboot itself (but only itself). Currently, the only thing I can do is create a role with a policy that allows rebooting all instances.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ec2:RebootInstances",
      "Resource": "*"
    }
  ]
}
I know I could technically add the specific ID of the instance to the policy, but the idea is that I can use the policy on any instance I want, not just a specific one. I tried following the documentation at https://docs.aws.amazon.com/IAM/latest/UserGuide/list_amazonec2.html but I don't know how to implement it.
Any ideas?
Thanks in advance!
You can have an EC2 instance reference itself by using the ec2:SourceInstanceARN IAM policy variable.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "ec2:RebootInstances",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ARN": "${ec2:SourceInstanceARN}"
        }
      }
    }
  ]
}
I think you can use ${aws:userid}.
IAM Policy Elements: Variables and Tags says:
aws:userid will be set to role-id:ec2-instance-id where role-id is the unique id of the role and the ec2-instance-id is the unique identifier of the EC2 instance.
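For example, calling the following from the instance should show that combined value in the UserId field (the IDs and role name below are placeholders):
aws sts get-caller-identity
# Example output:
# {
#     "UserId": "AROAEXAMPLEROLEID:i-0123456789abcdef0",
#     "Account": "123456789012",
#     "Arn": "arn:aws:sts::123456789012:assumed-role/my-reboot-role/i-0123456789abcdef0"
# }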
I have an AWS Lambda that runs as a viewer request trigger on a CloudFront distribution that restricts access to an S3 bucket set up for static hosting of a website. It uses a Cognito user pool to restrict access and verifies the credentials via AdminInitiateAuth.
The Lambda runs fine using test data obtained directly from logging the CloudFront event; however, when it is actually invoked via the CloudFront trigger, I get this error:
An error occurred (AccessDeniedException) when calling the AdminInitiateAuth operation:
User: arn:aws:sts::<AWS_ACCOUNT_ID>:assumed-role/cloudfront_trigger_s3_auth_http_service/us-east-1.s3_service_resources_auth
is not authorized to perform: cognito-idp:AdminInitiateAuth on resource:
arn:aws:cognito-idp:us-west-2:<AWS_ACCOUNT_ID>:userpool/<USER_POOL_ID>
I've tried expanding my trust relationship and making sure that AWS STS can assume the role when it needs to.
cloudfront_trigger_s3_auth_http_service role trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "cognito-idp.amazonaws.com",
          "edgelambda.amazonaws.com",
          "sts.amazonaws.com",
          "lambda.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The arcimoto-service-resources-user-pool-auth policy attached to the cloudfront_trigger_s3_auth_http_service role, which allows Cognito access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "cognito-idp:AdminInitiateAuth",
        "sts:AssumeRole"
      ],
      "Resource": [
        "arn:aws:iam::511596272857:role/cloudfront_trigger_s3_auth_http_service/us-east-1.s3_service_resources_auth",
        "arn:aws:cognito-idp:us-east-1:511596272857:userpool/us-east-1_sES7sBpcg"
      ]
    }
  ]
}