Forbidden Error on get_thing_shadow with boto3, aws iot and alexa - aws-lambda

I am running a custom alexa skill with flask-ask that connects to aws iot.
The same credentials work when I run the script on my local machine and use ngrok as the Alexa skill endpoint. But when I use Zappa to upload it as a Lambda, I get the following:
File "/var/task/main.py", line 48, in get_shadow
res=client.get_thing_shadow(thingName="test_light")
File "/var/runtime/botocore/client.py", line 253, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/var/runtime/botocore/client.py", line 543, in _make_api_call
raise error_class(parsed_response, operation_name)
ClientError: An error occurred (ForbiddenException) when calling the GetThingShadow operation: Forbidden
When using ngrok, the skill works completely fine. What am I missing here? Help!

The problem was VPC access. I had to attach the VPC access policy to the function's execution role, and it worked.
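For reference, a minimal sketch of that fix, assuming "the VPC access policy" means the managed AWSLambdaVPCAccessExecutionRole policy and using a placeholder for whatever execution role Zappa created:

import boto3

# Sketch only: attach the managed VPC access policy to the Lambda execution role.
# "my-zappa-lambda-role" is a hypothetical name; use the role Zappa actually created.
iam = boto3.client("iam")
iam.attach_role_policy(
    RoleName="my-zappa-lambda-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSLambdaVPCAccessExecutionRole",
)

The same policy can of course be attached from the IAM console instead.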

Related

GlueContext write_dynamic_frame fails after adding BucketOwnerFullControl configuration

I am trying to write a DataFrame to an S3 bucket in a different account as JSON output.
The following code fails with an S3 Access Denied error in a Glue Spark streaming job. If I run the code without the first line, it works and the output is written to the S3 bucket:
glueContext._jsc.hadoopConfiguration().set("fs.s3.canned.acl", "BucketOwnerFullControl")
glueContext.write_dynamic_frame.from_options(frame=dynamic_df, connection_type="s3",
                                             connection_options={"path": output_path},
                                             format=file_format, transformation_ctx="datasink")
Here is the error log:
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception:
Access Denied (Service: Amazon S3; Status Code: 403; Error Code:
AccessDenied; Request ID: W52125NY7G3EF7WH; S3 Extended Request ID:
4t9JOJedv2qNRy6W8ySxdQQ7r+TMN1MWpZCFOK1IKO6W4gx4a2oKuK5vwXUPnh4HkkPAG+LnEIc=;
Proxy: null), S3 Extended Request ID: 4t9JOJedv2qUPnh4HkkPAG+LnEIc= at
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1819)
at
com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleServiceErrorResponse(AmazonHttpClient.java:1403)
at
This looks strange to me because the bucket has full permissions, and the write works perfectly when the second statement is executed alone, but then the object owner is still the Glue account, which is exactly what I am trying to change with fs.s3.canned.acl.
The destination bucket is also set up with the "Bucket owner preferred" option.
Please suggest what I am doing wrong.
Thanks
It's not enough to grant permission in the bucket policy alone.
The Glue role was missing the s3:PutObjectAcl permission in IAM.
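As a minimal sketch of the missing piece (role name and bucket name below are placeholders), the Glue job role needs s3:PutObjectAcl alongside s3:PutObject on the cross-account bucket, for example via an inline policy:

import json
import boto3

# Sketch only: grant the Glue job role PutObject + PutObjectAcl on the destination bucket.
iam = boto3.client("iam")
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:PutObject", "s3:PutObjectAcl"],
        "Resource": "arn:aws:s3:::destination-bucket/*",  # hypothetical bucket name
    }],
}
iam.put_role_policy(
    RoleName="my-glue-job-role",      # hypothetical role name
    PolicyName="AllowPutObjectAcl",
    PolicyDocument=json.dumps(policy),
)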

CDK deploy cannot read the temporary S3 bucket that holds the Lambda code

When I deploy Lambda code using CDK, the deploy process (CloudFormation, presumably running under my user) does not seem to have access to the bucket that holds the Lambda code.
I followed this tutorial: https://intro-to-cdk.workshop.aws/what-is-cdk.html and see this error when I run cdk deploy:
Lambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-19kn1ypcmzq2q/assets/5327df
Lambda Code:
const handler = new lambda.Function(this, "TimestreamLambda", {
  runtime: lambda.Runtime.NODEJS_10_X,
  code: lambda.Code.fromAsset(path.join(__dirname, '../resources')),
  handler: "index.hello_world",
  ...
cdk and @aws-cdk version is 1.73.0, but I also tried 1.71.0.
Notes:
I see the bucket under my account (in my region).
When logged into this account, I can see and download the asset file.
The downloaded zip file has the correct contents.
More error details:
12/24 | 9:15:19 PM | CREATE_FAILED | AWS::Lambda::Function | TimestreamLambda (TimestreamLambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-28hiljazvaim/assets/5327df740bdc9c380ff567xxxxxxxxxxx7a68a.zip. S3 Error Code: AccessDenied. S3 Error Message: Access Denied (Service: AWSLambdaInternal; Status Code: 403; Error Code: AccessDeniedException; Request ID: 1b813776-7647-4767-89bc-XXXXXXXXX; Proxy: null)
new Function (/Users/<user>/dev/cdk/cdk-workshop/node_modules/@aws-cdk/aws-lambda/lib/function.ts:593:35)
\_ new CdkWorkshopStack (/Users/<user>/dev/cdk/cdk-workshop/lib/cdk-workshop-stack.ts:33:21)
I also see this (using the -v option) during deploy:
env: {
CDK_DEFAULT_REGION: 'us-west-2',
CDK_DEFAULT_ACCOUNT: '94646XXXXX',
CDK_CONTEXT_JSON: '{"@aws-cdk/core:enableStackNameDuplicates":"true","aws-cdk:enableDiffNoFail":"true","@aws-cdk/core:stackRelativeExports":"true","aws:cdk:enable-path-metadata":true,"aws:cdk:enable-asset-metadata":true,"aws:cdk:version-reporting":true,"aws:cdk:bundling-stacks":["*"]}',
CDK_OUTDIR: 'cdk.out',
CDK_CLI_ASM_VERSION: '7.0.0',
CDK_CLI_VERSION: '1.73.0'
}
As it turns out, this was an issue with the internal authentication system my company uses to access AWS. Instead of using my regular AWS account, I had to create a temporary account (which also issues a temporary token).
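When debugging this kind of problem, it can help to confirm which principal your shell credentials actually resolve to before running cdk deploy. This is just an illustrative boto3 snippet, not part of the original fix:

import boto3

# Print the account and ARN behind the current credentials; with a corporate
# SSO/auth layer this may not be the principal you expect cdk deploy to use.
sts = boto3.client("sts")
identity = sts.get_caller_identity()
print(identity["Account"], identity["Arn"])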

Does "SAM Local Invoke" support EFS?

I'm using Lambda to access EFS as described at https://docs.aws.amazon.com/lambda/latest/dg/configuration-filesystem.html
The Lambda function works fine when running in AWS, but it fails when using SAM with the "local invoke" command. The error is:
2020-10-02T20:03:19.389Z 09b6f1b2-d80a-15e1-9531-f74182e95c1e ERROR Invoke Error
{
"errorType":"Error",
"errorMessage":"ENOENT: no such file or directory, open '/mnt/efs/newfile.txt'",
"code":"ENOENT",
"errno":-2,
"syscall":"open",
"path":"/mnt/efs/newfile.txt",
"stack":[
"Error: ENOENT: no such file or directory, open '/mnt/efs/newfile.txt'",
" at Object.openSync (fs.js:458:3)",
" at Object.writeFileSync (fs.js:1355:35)",
" at WriteFile (/var/task/src/apis/permissions/isallowed.js:70:8)",
" at IsAllowedInPolicy (/var/task/src/apis/permissions/isallowed.js:52:5)",
" at Runtime.exports.handler (/var/task/src/apis/permissions/isallowed.js:16:28)",
" at Runtime.handleOnce (/var/runtime/Runtime.js:66:25)"
]
}
Is "sam local invoke" supposed to work with EFS?
The answer is no.
I opened a support ticket with AWS and was told
This is a limitation on the AWS SAM CLI and not your configuration.
Therefore, I have taken the initiative to submit an internal feature
request with our internal service team(specifically AWS SAM CLI
service team) on your behalf and I have added your company name and
voice to this request. At the moment, we would not be able to provide
an estimate on if or when this feature will be supported. I would
advise to check the AWS announcement page from time to time for future
service updates. https://aws.amazon.com/new/
I also discovered that someone has already submitted a feature request for this on GitHub; in the meantime, a workaround is needed.
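One possible workaround, sketched here in Python for illustration (the original function is Node.js, but the same idea applies), is to branch on the AWS_SAM_LOCAL environment variable that sam local sets and fall back to a local directory instead of the EFS mount point:

import os
import tempfile

# sam local sets AWS_SAM_LOCAL=true, so use a local temp dir when invoked
# locally and the real EFS mount point when running in AWS.
if os.environ.get("AWS_SAM_LOCAL") == "true":
    base_dir = tempfile.gettempdir()
else:
    base_dir = "/mnt/efs"

with open(os.path.join(base_dir, "newfile.txt"), "w") as f:
    f.write("hello")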

boto3 attach_volume throwing 'volume not available'

I am trying to attach a volume to an instance using boto3, but it fails to attach with the error below:
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 357, in _api_call
return self._make_api_call(operation_name, kwargs)
File "/usr/local/lib/python3.7/site-packages/botocore/client.py", line 661, in _make_api_call
raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (IncorrectState) when calling the AttachVolume operation: vol-xxxxxxxxxxxxxxx is not 'available'.
I can see the volume exists in the AWS console, but somehow boto3 is not able to attach it.
import os
import boto3

os.environ['AWS_DEFAULT_REGION'] = "us-west-1"
client = boto3.client('ec2', aws_access_key_id=access_key, aws_secret_access_key=secret_key,
                      region_name='us-west-1')
response1 = client.attach_volume(
    VolumeId=volume_id,
    InstanceId=instance_id,
    Device='/dev/sdg',
)
I tried attaching the same volume using the AWS CLI, and it works fine after exporting AWS_DEFAULT_REGION="us-west-1".
I also tried setting the same variable in the Python script with os.environ['AWS_DEFAULT_REGION'] = "us-west-1", but the script fails with the same error as above.
I figured it out. I was not giving the volume enough time to become available after creating it. I am able to attach it now after adding a sleep.
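Rather than a fixed sleep, a cleaner sketch is to use boto3's built-in volume_available waiter so attach_volume only runs once the volume has left the creating state (the instance ID and Availability Zone below are placeholders):

import boto3

client = boto3.client('ec2', region_name='us-west-1')

# Placeholders for illustration
instance_id = 'i-0123456789abcdef0'
volume = client.create_volume(AvailabilityZone='us-west-1a', Size=8, VolumeType='gp3')
volume_id = volume['VolumeId']

# Block until the volume is 'available', then attach it
client.get_waiter('volume_available').wait(VolumeIds=[volume_id])
client.attach_volume(VolumeId=volume_id, InstanceId=instance_id, Device='/dev/sdg')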

Getting permissions exception when calling amazondax service

I'm using the amazondax service from an AWS Lambda function and getting an exception that indicates missing permissions, but I don't know what permissions are necessary for this. Both the Lambda function and my DAX cluster are set up with the same VPC subnets and security groups. I'm getting the following exception:
[ERROR] 2018-12-11T23:06:50.457Z 70c80374-fd99-11e8-bac1-318371e7b8ed Failed to retrieve endpoints
Traceback (most recent call last):
File "/var/task/amazondax/Cluster.py", line 211, in _pull
new_endpoints = self._pull_from(ip, port)
File "/var/task/amazondax/Cluster.py", line 222, in _pull_from
endpoints = client.endpoints()
File "/var/task/amazondax/DaxClient.py", line 192, in endpoints
return self._decode_result('endpoints', None, Assemblers.endpoints_455855874_1, tube)
File "/var/task/amazondax/DaxClient.py", line 227, in _decode_result
return self._handle_error(operation_name, tube)
File "/var/task/amazondax/DaxClient.py", line 233, in _handle_error
raise DaxServiceError(operation_name, message, codes, *exc_info)
amazondax.DaxError.DaxServiceError: An error occurred (Unknown) when calling the endpoints operation: Client does not have permission to invoke Endpoints
I assume the last line "Client does not have permission..." is the key to this, but I'm having trouble figuring out exactly what permission(s) are required.
Here is the code that is breaking:
dax = amazondax.AmazonDaxClient(session, region_name='us-east-1', endpoint_url='mydaxcluster.blahblahblah.cache.amazonaws.com:8111')
You will need to add "dax:" permissions for the operations you need to the policy associated with the IAM user used in the session.
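For example, a minimal sketch of such a policy attached inline to that IAM user (the user name and cluster ARN are placeholders, and the broad dax:* should be scoped down to the operations you actually call in production):

import json
import boto3

iam = boto3.client('iam')
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "dax:*",
        # Hypothetical cluster ARN; use your cluster's region, account and name
        "Resource": "arn:aws:dax:us-east-1:123456789012:cache/mydaxcluster",
    }],
}
iam.put_user_policy(
    UserName='my-dax-user',           # hypothetical user name
    PolicyName='AllowDaxAccess',
    PolicyDocument=json.dumps(policy),
)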
