I wanted to upload my JMeter dashboards to S3. The JMeter tests run on EC2 instances. For security reasons, I would like to use IAM roles instead of an access key to upload the dashboards to S3.
I went through this page, where files are uploaded via HTTP requests using an access key:
https://www.blazemeter.com/blog/how-to-handle-dynamic-aws-sigv4-in-jmeter-for-api-testing
Can the same be achieved through IAM roles instead of an access key, or do I need to import a Java class to upload the files using an S3 client, InstanceProfileCredentialsProvider, and a processor?
Here is something you can try:
1. Create the role:
aws iam create-role --role-name <PerfTest-EC2-Role-Name> --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Sid":"","Effect":"Allow","Principal":{"Service": "ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
2. Add the role to an EC2 instance profile:
aws iam add-role-to-instance-profile --instance-profile-name <JMeter-EC2-InstanceProfile-ID> --role-name <PerfTest-EC2-Role-Name>
3. Grant the role S3 permissions:
cat << EOF > BucketPolicy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::<Bucket-Name>/*"
    },
    {
      "Sid": "ServiceRoleWriteObject",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Client-ID>:role/<PerfTest-EC2-Role-Name>"
      },
      "Action": [
        "s3:DeleteObject",
        "s3:PutObject"
      ],
      "Resource": "arn:aws:s3:::<Bucket-Name>/*"
    }
  ]
}
EOF
aws s3api put-bucket-policy --bucket <Bucket-Name> --policy file://BucketPolicy.json
If step 2 above fails with
Cannot exceed quota for InstanceSessionsPerInstanceProfile: 1
you can look at this answer.
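With the instance profile attached, there is no need for SigV4 header plumbing in JMeter at all: any AWS SDK running on the instance picks up the role's temporary credentials from the metadata service automatically. A minimal sketch in Python with boto3 (the function names, bucket name, and dashboard path here are illustrative placeholders, not part of the setup above):

```python
import os

def dashboard_key(prefix, local_path):
    """Build the S3 object key from a prefix and the local file name."""
    return prefix.rstrip("/") + "/" + os.path.basename(local_path)

def upload_dashboard(bucket, local_path, prefix="jmeter-dashboards"):
    # boto3's default credential chain resolves the instance-profile
    # credentials from the EC2 metadata service -- no access keys needed.
    import boto3
    s3 = boto3.client("s3")
    s3.upload_file(local_path, bucket, dashboard_key(prefix, local_path))

# Example (bucket name is a placeholder):
# upload_dashboard("<Bucket-Name>", "/opt/jmeter/dashboard/index.html")
```

The same applies on the Java side: a JSR223 sampler using the AWS SDK's default credentials provider chain (or InstanceProfileCredentialsProvider explicitly) will resolve the role without any keys appearing in the test plan.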
Related
I have an AWS Role with ReadOnlyAccess (AWS Managed Policy).
I need to make this role capable of executing some actions, for example starting/stopping an Amazon EC2 instance and connecting via SSM in the eu-west-1 and eu-central-1 regions, but even with full permissions for EC2 and SSM it does not allow the listed actions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "ssm:*"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": [
            "eu-west-1",
            "eu-central-1"
          ]
        }
      }
    }
  ]
}
When I start an instance, it tries to start (I can see it in the CloudTrail logs), but then it stops within 2 seconds.
For the SSM connection I received this error:
An error occurred while calling the StartConnection API operation. AccessDeniedException: User: arn:aws:sts::acc_id:assumed-role/sre/user is not authorized to perform: ssm-guiconnect:StartConnection on resource: arn:aws:ec2:eu-central-1:acc_id:instance/*"
However, if I add full permissions to this role it works, and users with this role can perform the needed actions:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:RequestedRegion": [
            "eu-west-1",
            "eu-central-1",
            "us-east-1"
          ]
        }
      }
    }
  ]
}
Is there some bug in AWS, or could there be a pitfall on my infrastructure side?
AWS policy does not recognize regions in "Condition"
EC2 instances:
I found that encryption is used for EC2 on this account, so full rights for KMS (kms:*) solved the problem with starting instances.
SSM (Fleet Manager): I used policysim.aws.amazon.com/home/index.jsp#role/sre to debug the policy and found that:
For the SSM connection, the "ssm-guiconnect:StartConnection" and "ssm:StartSession" permissions are needed.
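Putting those findings together, an identity policy along these lines should cover both starting encrypted instances and Fleet Manager connections (a sketch; in practice you would scope the kms:* statement down to the EBS key rather than all keys):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ssm:StartSession",
        "ssm-guiconnect:StartConnection",
        "kms:*"
      ],
      "Resource": "*"
    }
  ]
}
```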
I have an AWS Lambda that runs as a viewer request trigger on a CloudFront distribution that restricts access to an S3 bucket set up for static hosting of a website. It uses a Cognito user pool to restrict access and verifies the credentials via AdminInitiateAuth.
The Lambda runs fine using test data obtained directly from logging the CloudFront event; however, when it is actually called via the CloudFront trigger, I get the error:
An error occurred (AccessDeniedException) when calling the AdminInitiateAuth operation:
User: arn:aws:sts::<AWS_ACCOUNT_ID>:assumed-role/cloudfront_trigger_s3_auth_http_service/us-east-1.s3_service_resources_auth
is not authorized to perform: cognito-idp:AdminInitiateAuth on resource:
arn:aws:cognito-idp:us-west-2:<AWS_ACCOUNT_ID>:userpool/<USER_POOL_ID>
I've tried expanding my trust relationship and making sure that AWS STS can assume the role when it needs to.
cloudfront_trigger_s3_auth_http_service role trust relationship:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "cognito-idp.amazonaws.com",
          "edgelambda.amazonaws.com",
          "sts.amazonaws.com",
          "lambda.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
The arcimoto-service-resources-user-pool-auth policy attached to the cloudfront_trigger_s3_auth_http_service role that allows Cognito access:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "cognito-idp:AdminInitiateAuth",
        "sts:AssumeRole"
      ],
      "Resource": [
        "arn:aws:iam::511596272857:role/cloudfront_trigger_s3_auth_http_service/us-east-1.s3_service_resources_auth",
        "arn:aws:cognito-idp:us-east-1:511596272857:userpool/us-east-1_sES7sBpcg"
      ]
    }
  ]
}
I have this policy that accepts requests from a single assumed role. When I try to push data from Lambda, I get an access denied error.
If I open up access to the Elasticsearch server using this line, then it works as expected:
"AWS": "*"
But that is not secure. How do I push data from Lambda to an Elasticsearch service that is restricted to Cognito users?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:sts::51346970xxxx:assumed-role/document-search-CognitoAuthorizedRole-LZWR058L66O8/CognitoIdentityCredentials"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:51346970xxxx:domain/documentsearchapp/*"
    }
  ]
}
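One direction to try (a sketch only; the second principal ARN is a hypothetical placeholder for the Lambda function's execution role): resource policies accept a list of principals, so the Lambda's own role can be allowed alongside the Cognito assumed role instead of opening access to everyone, with the Lambda signing its requests with SigV4 as in the Cognito case:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:sts::51346970xxxx:assumed-role/document-search-CognitoAuthorizedRole-LZWR058L66O8/CognitoIdentityCredentials",
          "arn:aws:iam::51346970xxxx:role/<lambda-execution-role>"
        ]
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:51346970xxxx:domain/documentsearchapp/*"
    }
  ]
}
```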
I am using Terraform to create an EC2 instance with an IAM instance profile. With all the proper roles and policies set, I am still getting the error:
UnauthorizedOperation: You are not authorized to perform this operation.status code: 403
Here is my main.tf:
provider "aws" {
  region = "us-west-2"
}

resource "aws_instance" "aws_test" {
  ami                  = "ami-image"
  instance_type        = "t2.micro"
  iam_instance_profile = "test-role"
}
Here is my aws policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*"
      ],
      "Resource": [
        "*"
      ],
      "Condition": {
        "StringEquals": {
          "ec2:InstanceType": [
            "t2.micro"
          ]
        }
      }
    },
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "*"
    }
  ]
}
This is the ARN of the role which I am using in my main.tf:
arn:aws:iam:::role/test-role
On googling, I found articles which tell me this should work, but I seem to be missing something. Any help would be highly appreciated.
Thanks in advance!!
Update: I am running this with Terraform installed directly on an EC2 machine; not sure if this could be causing a problem.
Running Terraform on the EC2 instance without providing specific credentials will potentially use the metadata API to retrieve credentials.
You can check whether you can call the metadata API from inside the instance.
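For example, with IMDSv2 (a sketch to run on the instance itself; the token header is required where IMDSv1 is disabled):

```shell
# Request an IMDSv2 session token, then list the role attached to the instance
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  "http://169.254.169.254/latest/meta-data/iam/security-credentials/"
```

If this prints the role name, credentials are available; `aws sts get-caller-identity` then shows exactly which identity Terraform will run as.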
The UnauthorizedOperation error means that the user or role running Terraform does not have the permissions to create an aws_instance.
Make sure that the user or role running Terraform has the proper permissions in AWS IAM.
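As a sketch, the identity running Terraform needs something along these lines (the action list is a guess at the minimum for this configuration; iam:PassRole is needed because the instance is given an instance profile):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:RunInstances",
        "ec2:Describe*",
        "ec2:CreateTags",
        "ec2:TerminateInstances",
        "iam:PassRole"
      ],
      "Resource": "*"
    }
  ]
}
```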
I am trying to copy some files from my EC2 instance to S3 using the following command:
s3cmd put datafile s3://mybucket/datafile
and get the following error
ERROR: S3 error: Access Denied
I have the following IAM policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "s3:ListAllMyBuckets",
        "s3:ListBucket"
      ],
      "Resource": "*"
    }
  ]
}
S3 Bucket Policy for mybucket
{
  "Version": "2008-10-17",
  "Id": "backupPolicy",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::xxxxx:user/xxxx"
      },
      "Action": [
        "s3:ListBucket",
        "s3:PutObjectAcl",
        "s3:PutObject"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket/*",
        "arn:aws:s3:::mybucket"
      ]
    }
  ]
}
I am not sure what I am doing wrong. s3cmd ls s3://mybucket works fine.
I tried searching on SO for this issue, but all the posts basically ask you to add the IAM policy, which I already have.
I think you need write permissions on the bucket in your IAM policy, in addition to List:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:*",
        "s3:ListAllMyBuckets",
        "s3:ListBucket"
      ],
      "Resource": "*"
    },
    {
      "Sid": "Stmt1406613887001",
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::mybucket",
        "arn:aws:s3:::mybucket/*"
      ]
    }
  ]
}
The user's IAM policy needs the read/write permissions, not (just) the bucket policy. AWS will always apply the more restrictive policy, and defaults to an implicit "deny".
I've found bucket policies are better suited for public access (i.e. serving assets to the world), not for restricting the principal. When you start combining bucket and user policies, complications arise, and it's often much easier to manage the user end.