I have successfully uploaded the file to an S3 bucket, but the problem occurs when I try to delete that file: I get error code 403 (AccessDenied).
DeleteObjectRequest request = new DeleteObjectRequest(bucketAttachFile, fileKey);
amazonS3.deleteObject(request);
With this config:
@Bean
public AmazonS3 s3Client(){
BasicAWSCredentials awsCredentials = new BasicAWSCredentials(accessKey, secretKey);
return AmazonS3ClientBuilder.standard()
.withRegion(Regions.fromName(region))
.withCredentials(new AWSStaticCredentialsProvider(awsCredentials))
.build();
}
I tried adding this policy for the user:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*",
"s3-object-lambda:*"
],
"Resource": "*"
}
]
}
But the 403 error is still returned when I delete the file.
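To narrow down where the 403 comes from, it can help to confirm which principal those credentials actually resolve to and to reproduce the delete with the same credentials. A minimal Python/boto3 sketch, assuming the same access key, secret key, region, bucket and object key as above (all values here are placeholders):
import boto3

# Placeholders - substitute the values your Spring config uses.
ACCESS_KEY = "AKIA..."
SECRET_KEY = "..."
REGION = "eu-west-1"
BUCKET = "bucketAttachFile"
FILE_KEY = "path/to/file"

session = boto3.Session(
    aws_access_key_id=ACCESS_KEY,
    aws_secret_access_key=SECRET_KEY,
    region_name=REGION,
)

# Confirm which IAM user these credentials belong to; the identity policy
# shown above must be attached to exactly this principal.
print(session.client("sts").get_caller_identity()["Arn"])

# Try the same delete. If this also returns 403 even though s3:* is allowed on
# the user, the deny is likely coming from somewhere else (a bucket policy,
# Object Lock / legal hold, or an organization SCP).
session.client("s3").delete_object(Bucket=BUCKET, Key=FILE_KEY)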
I'm trying to enable encryption in transit for an environment variable in my Lambda. However, I couldn't find any documentation on how to do this in Terraform.
I was able to create a customer master key and attach it to the Lambda via kms_key_arn.
I have created this:
data "aws_kms_ciphertext" "secret_encryption" {
key_id = aws_kms_key.kms_key.key_id
plaintext = <<EOF
{
"token": "${var.token}"
}
EOF
}
Now, in my Lambda's environment variables:
environment {
variables = {
ENV_TOKEN = data.aws_kms_ciphertext.secret_encryption.ciphertext_blob
}
}
I also attached kms:Decrypt to the Lambda execution role:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": "kms:Decrypt",
"Resource": "arn:aws:kms:XXXX:XXXX:key/1234-567-...."
}
}
In my Lambda:
import base64
import os
import boto3
encrypted_token = os.environ["ENV_TOKEN"]
decrypt_github_token = boto3.client('kms').decrypt(
    CiphertextBlob=base64.b64decode(encrypted_token)
)['Plaintext'].decode('utf-8')
But I'm getting: "An error occurred (AccessDeniedException) when calling the Decrypt operation: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access."
Does anyone know where I'm going wrong?
Should I encrypt only the value, rather than the whole key-value JSON document?
Maybe the error is happening prior to decryption; I wonder if you can't even read the key itself. You can test this by adding "kms:DescribeKey" to the policy:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:DescribeKey"
],
"Resource": "arn:aws:kms:XXXX:XXXX:key/1234-567-...."
}
}
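If you want to test the same thing from code, here is a small sketch (the key ARN is the placeholder from above) that first describes the key and then decrypts with an explicit KeyId. If DescribeKey already fails, the problem is key access or region, not the ciphertext itself:
import base64
import os
import boto3

# Placeholder - use the same key ARN that aws_kms_ciphertext encrypted with.
KEY_ARN = "arn:aws:kms:XXXX:XXXX:key/1234-567-...."

# The client's region must match the region the key lives in.
kms = boto3.client("kms")

# If this raises AccessDenied/NotFoundException, the role cannot see the key at all.
print(kms.describe_key(KeyId=KEY_ARN)["KeyMetadata"]["Arn"])

# Decrypt, passing KeyId explicitly so a ciphertext made with a different key fails loudly.
plaintext = kms.decrypt(
    KeyId=KEY_ARN,
    CiphertextBlob=base64.b64decode(os.environ["ENV_TOKEN"]),
)["Plaintext"].decode("utf-8")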
Good day.
I tried getDashboardEmbedUrl() and it works fine with the UserArn set to the ADMIN user in my Quicksight account. Now I am trying to use generateEmbedUrlForRegisteredUser(), but it gives the following error:
Error executing "GenerateEmbedUrlForRegisteredUser" on "https://quicksight.eu-west-1.amazonaws.com/accounts/971170084134/embed-url/registered-user"; AWS HTTP error: Client error: `POST https://quicksight.eu-west-1.amazonaws.com/accounts/xxxxxxxxxxxx/embed-url/registered-user` resulted in a `404 Not Found` response:
{"Message":"User arn:aws:quicksight:eu-west-1:xxxxxxxxxxxx:user/default/jjordaan does not exist.","RequestId":"5c310250- (truncated...)
ResourceNotFoundException (client): User arn:aws:quicksight:eu-west-1:xxxxxxxxxxxx:user/default/jjordaan does not exist. - {"Message":"User arn:aws:quicksight:eu-west-1:xxxxxxxxxxxx:user/default/jjordaan does not exist.","RequestId":"5c310250-a1bb-413f-b2d7-f07fdb91e027","ResourceType":null}
GenerateEmbedUrlForRegisteredUser Policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"quicksight:GenerateEmbedUrlForRegisteredUser",
"quicksight:RegisterUser"
],
"Resource": "*"
}
]
}
EmbeddingQuicksightAssumeRole policy:
{
"Version": "2012-10-17",
"Statement":
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::971170084134:role/GenerateEmbedUrlForRegisteredUser"
}
}
I also attempted to create a new Quicksight user, but no luck; the URL generation error is the same. What could I be doing wrong? Thanks.
Regards.
Jarrett
The error message says the user does not exist: User arn:aws:quicksight:eu-west-1:xxxxxxxxxxxx:user/default/jjordaan does not exist
You need to register the user with Quicksight before that user can do anything with Quicksight. Requesting a dashboard and registering users are separate methods with separate permissions.
For example:
client = boto3.client("quicksight", region_name=QUICKSIGHT_REGION)
client.register_user(
AwsAccountId=AWS_ACCOUNT_ID,
Namespace="default",
IdentityType="IAM",
IamArn=f"arn:aws:iam::{AWS_ACCOUNT_ID}:role/{QUICKSIGHT_DASHBOARD_ROLE_NAME}",
UserRole="READER",
SessionName=user.email,
Email=user.email
)
QUICKSIGHT_DASHBOARD_ROLE_NAME is a role that is allowed to embed a dashboard (i.e. it has permissions such as quicksight:GenerateEmbedUrlForRegisteredUser).
To get a dashboard URL:
1. assume the role and get temporary credentials
2. use those credentials to get the dashboard embed URL
sts = boto3.client("sts")  # assume_role is an STS call
response = sts.assume_role(
RoleArn=f"arn:aws:iam::{AWS_ACCOUNT_ID}:role/{QUICKSIGHT_DASHBOARD_ROLE_NAME}",
RoleSessionName=user.email
)
creds = response["Credentials"]
# get the access key, the secret key, and the session token from the response
client = boto3.client(
"quicksight",
region_name=QUICKSIGHT_REGION,
aws_access_key_id=creds["AccessKeyId"],
aws_secret_access_key=creds["SecretAccessKey"],
aws_session_token=creds["SessionToken"],
)
response = client.get_dashboard_embed_url(
AwsAccountId=AWS_ACCOUNT_ID,
DashboardId=dashboard_id,
IdentityType="IAM",
SessionLifetimeInMinutes=60,
)
url = response.get("EmbedUrl")
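Since your question uses GenerateEmbedUrlForRegisteredUser, the equivalent call once the user exists looks roughly like this. This is a sketch: dashboard_id, user_name and the region/account variables are placeholders, and for IAM-registered users the QuickSight user name is typically "<role-name>/<session-name>":
response = client.generate_embed_url_for_registered_user(
    AwsAccountId=AWS_ACCOUNT_ID,
    SessionLifetimeInMinutes=60,
    # Must match a user that register_user actually created.
    UserArn=f"arn:aws:quicksight:{QUICKSIGHT_REGION}:{AWS_ACCOUNT_ID}:user/default/{user_name}",
    ExperienceConfiguration={
        "Dashboard": {"InitialDashboardId": dashboard_id}
    },
)
url = response["EmbedUrl"]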
What I am trying to achieve is creating an IAM role with an attached policy in AWS. The Terraform script for policy creation is:
iam_policy.tf
resource "aws_iam_policy" "js_csm_iam_policy" {
name = "js_csm_iam_policy_02-2021"
path = "/"
description = "My CSM policy created 02 2021"
policy = file("cmsiampolicy.json")
}
and another script to create the role as well as attach the policy created above:
iam_role.tf
resource "aws_iam_role" "js_csm_iam_role" {
name = "js_csm_iam_role_02_2021"
assume_role_policy = file("csmiamrole.json")
}
##
### IAM role poliy attachment
##
resource "aws_iam_role_policy_attachment" "assign-policy1" {
role = aws_iam_role.js_csm_iam_role.name
policy_arn = aws_iam_policy.js_csm_iam_policy.arn
}
##
### instance_profile
##
resource "aws_iam_instance_profile" "js_csm_profile1" {
name = "js_csm_instance_profile"
role = aws_iam_role.js_csm_iam_role.name
}
csmiamrole.json
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
terraform apply -auto-approve
aws_iam_role.js_csm_iam_role: Refreshing state... [id=js_csm_iam_role_02_2021]
aws_iam_instance_profile.js_csm_profile1: Refreshing state... [id=js_csm_instance_profile]
aws_iam_policy.js_csm_iam_policy: Creating...
Error: Error creating IAM policy js_csm_iam_policy_02-2021: MalformedPolicyDocument: Policy document should not specify a principal.
status code: 400, request id: 53da615d-0541-4b94-ac28-f2e4fdfc23be
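The error points at cmsiampolicy.json (not shown above): judging by the message, that document contains a Principal element, which is only allowed in a trust policy like csmiamrole.json, not in an identity-based policy created via aws_iam_policy. A hedged boto3 sketch of what IAM accepts for the managed policy, using hypothetical S3 actions purely for illustration:
import json
import boto3

iam = boto3.client("iam")

# An identity-based (managed) policy may only contain Effect/Action/Resource
# (plus Condition); including a Principal is rejected with MalformedPolicyDocument.
iam.create_policy(
    PolicyName="js_csm_iam_policy_02-2021",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],  # hypothetical actions
            "Resource": "*",
        }],
    }),
)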
I am trying to create a Lambda role and attach to it a policy that allows all Elasticsearch cluster operations.
Below is the code:
resource "aws_iam_role" "lambda_iam" {
name = "lambda_iam"
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [{
"Action": [
"es:*"
],
"Effect": "Allow",
"Resource": "*"
}]
}
EOF
}
resource "aws_lambda_function" "developmentlambda" {
filename = "lambda_function.zip"
function_name = "name"
role = "${aws_iam_role.lambda_iam.arn}"
handler = "exports.handler"
source_code_hash = "${filebase64sha256("lambda_function.zip")}"
runtime = "nodejs10.x"
}
I get the following error
Error creating IAM Role lambda_iam: MalformedPolicyDocument: Has prohibited field Resource
The Terraform document regarding Resource says you can specify a "*" for ALL users. The Principal field is not mandatory either, so that's not the problem.
I still changed it to:
assume_role_policy = <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Principal": {
"Service": "es.amazonaws.com"
},
"Effect": "Allow",
"Sid": ""
}
]
}
EOF
}
But that gave:
Error creating Lambda function: InvalidParameterValueException: The role defined for the function cannot be assumed by Lambda.
My lambda function definition is simple
resource "aws_lambda_function" "development_lambda" {
filename = "dev_lambda_function.zip"
function_name = "dev_lambda_function_name"
role = "${aws_iam_role.lambda_iam.arn}"
handler = "exports.test"
source_code_hash = "${filebase64sha256("dev_lambda_function.zip")}"
runtime = "nodejs10.x"
}
The Lambda file itself has nothing in it, but I do not know if that explains the error.
Is there something I am missing here?
The assume role policy is the role's trust policy (allowing the role to be assumed), not the role's permissions policy (what permissions the role grants to the assuming entity).
A Lambda execution role needs both types of policies.
The immediate error, that the "role defined for the function cannot be assumed by Lambda", is occurring because the trust policy needs "Principal": {"Service": "lambda.amazonaws.com"}, not es.amazonaws.com -- the es:* permissions go in the permissions policy. I don't use Terraform, but it looks like that might be resource "aws_iam_policy", based on https://www.terraform.io/docs/providers/aws/r/lambda_function.html, which I assume is the reference you are working from.
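To make the distinction concrete, here is a hedged boto3 sketch of the two documents: the trust policy that lets Lambda assume the role, and a separate permissions policy carrying the es:* access (the inline policy name is hypothetical):
import json
import boto3

iam = boto3.client("iam")

# Trust policy (Terraform's assume_role_policy): who may assume the role.
iam.create_role(
    RoleName="lambda_iam",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# Permissions policy: what the role is allowed to do once assumed.
iam.put_role_policy(
    RoleName="lambda_iam",
    PolicyName="lambda_es_access",  # hypothetical name
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "es:*",
            "Resource": "*",
        }],
    }),
)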
I created a new IAM role through Cognito during the setup process. Here is that policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sns:publish"
],
"Resource": [
"*"
]
}
]
}
In the trust relationship tab, it shows "The identity provider(s) cognito-idp.amazonaws.com" and it has a condition for sts:ExternalId.
I tried adding a policy for FullEC2Access, but so far I have not been able to get this IAM role to show up in the dropdown when creating a new EC2 instance.
I will be using this instance for a new web app which will utilize Cognito. Any feedback appreciated.
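One thing to check: the role created through Cognito trusts Cognito, not EC2, and the EC2 launch wizard dropdown lists instance profiles. A role typically only shows up there once it is attached to an instance profile, and EC2 can only use it if its trust policy allows ec2.amazonaws.com to assume it, so the usual approach is a separate role for the instance. A hedged boto3 sketch (role and profile names are placeholders):
import json
import boto3

iam = boto3.client("iam")

ROLE_NAME = "my-webapp-ec2-role"        # hypothetical name
PROFILE_NAME = "my-webapp-ec2-profile"  # hypothetical name

# A role that EC2 (rather than Cognito) is allowed to assume.
iam.create_role(
    RoleName=ROLE_NAME,
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# The launch wizard shows instance profiles, so create one and add the role.
iam.create_instance_profile(InstanceProfileName=PROFILE_NAME)
iam.add_role_to_instance_profile(
    InstanceProfileName=PROFILE_NAME,
    RoleName=ROLE_NAME,
)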