How to write a policy in YAML for a Python Lambda to read from S3 using the AWS SAM CLI

I am trying to deploy a Python Lambda to AWS. This Lambda just reads files from S3 buckets when given a bucket name and file path. It works correctly on my local machine if I run the following command:
sam build && sam local invoke --event testfile.json GetFileFromBucketFunction
The data from the file is printed to the console. Next, if I run the following command, the Lambda is packaged and sent to my-bucket.
sam build && sam package --s3-bucket my-bucket --template-file .aws-sam\build\template.yaml --output-template-file packaged.yaml
The next step is to deploy in prod so I try the following command:
sam deploy --template-file packaged.yaml --stack-name getfilefrombucket --capabilities CAPABILITY_IAM --region my-region
The Lambda can now be seen in the Lambda console and I can run it, but no contents are returned. If I manually change the service role to one which allows S3 get/put, the Lambda works. However, this undermines the whole point of using the AWS SAM CLI.
I think I need to add a policy to the template.yaml file. This link here seems to say that I should add a policy such as one shown here. So, I added:
Policies: S3CrudPolicy
under 'Resources:GetFileFromBucketFunction:Properties:'. I then rebuild the app and redeploy, and the deployment fails with the following errors in CloudFormation:
1 validation error detected: Value 'S3CrudPolicy' at 'policyArn' failed to satisfy constraint: Member must have length greater than or equal to 20 (Service: AmazonIdentityManagement; Status Code: 400; Error Code: ValidationError; Request ID: unique number
and
The following resource(s) failed to create: [GetFileFromBucketFunctionRole]. . Rollback requested by user.
I delete the stack to start again. My thought was that 'S3CrudPolicy' is not an off-the-shelf policy that I can just use, but something I would have to define myself in the template.yaml file?
I'm not sure how to do this, and the docs don't seem to show any simple use-case examples (from what I can see). If anyone knows how to do this, could you post a solution?
I tried the following:
S3CrudPolicy:
  PolicyDocument:
    -
      Action: "s3:GetObject"
      Effect: Allow
      Resource: !Sub arn:aws:s3:::${cloudtrailBucket}
      Principal: "*"
But it failed with the following error:
Failed to create the changeset: Waiter ChangeSetCreateComplete failed: Waiter encountered a terminal failure state Status: FAILED. Reason: Invalid template property or properties [S3CrudPolicy]
If anyone can help write a simple policy to read/write from S3, that would be amazing. I'll also need to write one to let Lambdas invoke other Lambdas, so a solution here (I imagine something similar?) would be great, or a decent, easy-to-use guide on how to write these policy statements.
Many thanks for your help!

Found it!! In case anyone else struggles with this, you need to add the following few lines to Resources:YourFunction:Properties in the template.yaml file:
Policies:
  - S3CrudPolicy:
      BucketName: "*"
The "*" will allow your lambda to talk to any bucket, you could switch for something specific if required. If you leave out 'BucketName' then it doesn't work and returns an error in CloudFormation syaing that S3CrudPolicy is invalid.

Related

serverless - Type Error: Cannot read property 'Properties' of undefined

When issuing serverless deploy --region eu-central-1, I get the error
Type Error ---------------------------------------------
Cannot read property 'Properties' of undefined
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
This error only started after including
plugins:
  - serverless-iam-roles-per-function
and can be reverted by commenting the plugin out. But I would like to use it for giving my lambda access to a DynamoDB.
The internet is pretty quiet about this error, besides this typo. The error is not solved by a recent update (yet), and the notice from Serverless doesn't help much. Setting SLS_DEBUG=* before deployment yields:
Type Error ----------------------------------------------
TypeError: Cannot read property 'Properties' of undefined
at ServerlessIamPerFunctionPlugin.createRoleForFunction (C:\Users\XXXXX\MyProject\node_modules\serverless-iam-roles-per-function\dist\lib\index.js:273:25)
at ServerlessIamPerFunctionPlugin.createRolesPerFunction (C:\Users\XXXXX\MyProject\node_modules\serverless-iam-roles-per-function\dist\lib\index.js:383:18)
at PluginManager.invoke (C:\snapshot\serverless\lib\classes\PluginManager.js:579:20)
at async PluginManager.spawn (C:\snapshot\serverless\lib\classes\PluginManager.js:601:5)
at async Object.before:deploy:deploy [as hook] (C:\snapshot\serverless\lib\plugins\deploy.js:60:11)
at async PluginManager.invoke (C:\snapshot\serverless\lib\classes\PluginManager.js:579:9)
at async PluginManager.run (C:\snapshot\serverless\lib\classes\PluginManager.js:639:7)
at async Serverless.run (C:\snapshot\serverless\lib\Serverless.js:452:5)
at async C:\snapshot\serverless\scripts\serverless.js:751:9
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Should I include my serverless.yml in this post? It's quite large.
Using some different keywords I retrieved a second (!) online search result, quoting:
The plugin doesn't support working with the role property. You have the following in your provider section: role: 'arn-for-deployment-role'. Try removing this.
More info here
This basically solved the issue, and serverless-iam-roles-per-function worked after commenting out this:
provider:
  name: aws
  runtime: python3.7
  #iam:
  #  role: CallsTableQueryRole
The reason for this could be that previous versions (Serverless < v2.24.0) used a different syntax than current ones. Compare:
provider:
  # ≥ v2.24.0:
  iam:
    role:
      statements:
  # previously:
  iamRoleStatements:
If you're using the Bref framework, make sure to update bref in Composer and serverless globally to the latest version.
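Going back to the plugin itself, a minimal sketch of what it enables once it loads; the handler name, table ARN, account ID, and region here are hypothetical. Each function gets its own role via function-level iamRoleStatements instead of one shared provider role:

plugins:
  - serverless-iam-roles-per-function

functions:
  queryCalls:
    handler: handler.query    # hypothetical handler
    iamRoleStatements:        # this role is scoped to this function only
      - Effect: Allow
        Action:
          - dynamodb:Query
          - dynamodb:GetItem
        Resource: arn:aws:dynamodb:eu-central-1:123456789012:table/CallsTable    # hypothetical ARN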

An error occurred (RequestEntityTooLargeException) when calling the UpdateFunctionCode operation

When I deploy the Lambda function using the command below, the following error occurs:
aws lambda update-function-code --function-name example --zip-file fileb://lambda.zip
An error occurred (RequestEntityTooLargeException) when calling the UpdateFunctionCode operation
As far as I can tell, my zip size cannot be reduced any further.
How can I avoid this, or is there an alternative way to deploy?
There is a 50 MB limit on direct uploads:
https://docs.aws.amazon.com/lambda/latest/dg/gettingstarted-limits.html
You can use a LambdaLayer as a workaround to this problem:
https://lumigo.io/blog/lambda-layers-when-to-use-it/
https://aws.amazon.com/blogs/compute/using-lambda-layers-to-simplify-your-development-process/
With layers, you get a 250 MB limit.
A disclosure: I'm a developer at Lumigo; we have a blog post on exactly this topic, and I also shared an official AWS post.
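If you go the layer route, a minimal sketch of publishing dependencies as a layer and attaching it to the function; the layer name, file paths, runtime, and placeholder ARN values are assumptions:

aws lambda publish-layer-version --layer-name example-deps --zip-file fileb://layer.zip --compatible-runtimes python3.9
aws lambda update-function-configuration --function-name example --layers arn:aws:lambda:REGION:ACCOUNT_ID:layer:example-deps:1

The idea is to move the heavy dependencies into the layer so the function zip itself stays under the direct-upload limit.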
Although layers are a good way to solve this, you can also upload your .zip file to S3 and update your function along the lines of:
aws lambda update-function-code --function-name LAMBDA_NAME --region REGION_NAME --s3-bucket BUCKET_NAME --s3-key S3_KEY/TO/PACKAGE.zip
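For completeness, the package has to be in the bucket first; a minimal sketch using the same placeholder names:

aws s3 cp lambda.zip s3://BUCKET_NAME/S3_KEY/TO/PACKAGE.zip
aws lambda update-function-code --function-name LAMBDA_NAME --s3-bucket BUCKET_NAME --s3-key S3_KEY/TO/PACKAGE.zip

Going through S3 avoids the 50 MB direct-upload cap, though the unzipped package still has to fit within the 250 MB limit.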

Using CDK deploy cannot read temporary S3 bucket that holds Lambda Code

When I deploy a Lambda "code" using CDK, the deploy process (CloudFormation, presumably running under my user) does not seem to have access to the bucket that holds the Lambda code.
I followed this tutorial: https://intro-to-cdk.workshop.aws/what-is-cdk.html and see this error when I run cdk deploy:
Lambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-19kn1ypcmzq2q/assets/5327df
Lambda Code:
const handler = new lambda.Function(this, "TimestreamLambda", {
  runtime: lambda.Runtime.NODEJS_10_X,
  code: lambda.Code.fromAsset(path.join(__dirname, '../resources')),
  handler: "index.hello_world",
  ...
cdk and @aws-cdk version is 1.73.0, but I also tried with 1.71.0
Notes:
I see the bucket under my account (in my region).
When logged into this account I can see and download the asset file.
The downloaded zip file has the correct contents.
More error details:
12/24 | 9:15:19 PM | CREATE_FAILED | AWS::Lambda::Function | TimestreamLambda (TimestreamLambda8C48573D) Your access has been denied by S3, please make sure your request credentials have permission to GetObject for cdktoolkit-stagingbucket-28hiljazvaim/assets/5327df740bdc9c380ff567xxxxxxxxxxx7a68a.zip. S3 Error Code: AccessDenied. S3 Error Message: Access Denied (Service: AWSLambdaInternal; Status Code: 403; Error Code: AccessDeniedException; Request ID: 1b813776-7647-4767-89bc-XXXXXXXXX; Proxy: null)
new Function (/Users/<user>/dev/cdk/cdk-workshop/node_modules/@aws-cdk/aws-lambda/lib/function.ts:593:35)
_ new CdkWorkshopStack (/Users/<user>/dev/cdk/cdk-workshop/lib/cdk-workshop-stack.ts:33:21)
I also see this (using the -v option) during deploy:
env: {
  CDK_DEFAULT_REGION: 'us-west-2',
  CDK_DEFAULT_ACCOUNT: '94646XXXXX',
  CDK_CONTEXT_JSON: '{"@aws-cdk/core:enableStackNameDuplicates":"true","aws-cdk:enableDiffNoFail":"true","@aws-cdk/core:stackRelativeExports":"true","aws:cdk:enable-path-metadata":true,"aws:cdk:enable-asset-metadata":true,"aws:cdk:version-reporting":true,"aws:cdk:bundling-stacks":["*"]}',
  CDK_OUTDIR: 'cdk.out',
  CDK_CLI_ASM_VERSION: '7.0.0',
  CDK_CLI_VERSION: '1.73.0'
}
As it turns out, this was an issue with the internal authentication system my company uses to access AWS. Instead of using my regular AWS account, I had to create a temporary account (which also sets a temporary token).
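A quick way to check which identity the CLI and CDK are actually deploying with (this is the standard AWS CLI, nothing specific to this setup):

aws sts get-caller-identity

If the returned account or ARN is not what you expect, the deploy is running under the wrong credentials, which was the root cause here.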

update existing infrastructure on heroku using terraform

I've got this infrastructure description
variable "HEROKU_API_KEY" {}
provider "heroku" {
email = "sebastrident#gmail.com"
api_key = "${var.HEROKU_API_KEY}"
}
resource "heroku_app" "default" {
name = "judge-re"
region = "us"
}
Originally I forgot to specify a buildpack. It created the application on Heroku. I decided to add it to the resource entry:
buildpacks = [
  "heroku/java"
]
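For clarity, the resource after this change would read as follows (assembled from the snippets above, nothing new added):

resource "heroku_app" "default" {
  name   = "judge-re"
  region = "us"
  buildpacks = [
    "heroku/java"
  ]
}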
But when I try to apply the plan in terraform I get this error
Error: Error applying plan:
1 error(s) occurred:
* heroku_app.default: 1 error(s) occurred:
* heroku_app.default: Post https://api.heroku.com/apps: Name is already taken
Terraform does not automatically rollback in the face of errors.
Instead, your Terraform state file has been partially updated with
any resources that successfully completed. Please address the error
above and apply again to incrementally change your infrastructure.
Terraform plan looks like this
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
+ heroku_app.judge_re
    id:                <computed>
    all_config_vars.%: <computed>
    buildpacks.#:      "1"
    buildpacks.0:      "heroku/java"
    config_vars.#:     <computed>
    git_url:           <computed>
    heroku_hostname:   <computed>
    name:              "judge-re"
    region:            "us"
    stack:             <computed>
    web_url:           <computed>
Plan: 1 to add, 0 to change, 0 to destroy.
------------------------------------------------------------------------
Note: You didn't specify an "-out" parameter to save this plan, so Terraform
can't guarantee that exactly these actions will be performed if
"terraform apply" is subsequently run.
As a workaround I tried adding a destroy step to my deploy.sh script:
terraform init
terraform plan
terraform destroy -force
terraform apply -auto-approve
But it does not destroy the resource as I get the message Destroy complete! Resources: 0 destroyed.
What is the problem?
Link to build
It looks like you also changed the name of the resource. Your original example has the resource name heroku_app.default while your plan has heroku_app.judge_re.
To point your state to the remote resource, so Terraform knows you are editing and not trying to recreate the resource, use terraform import:
terraform import heroku_app.judge_re judge-re
In Terraform you normally needn't destroy the whole stack when you just want to rebuild one or several resources in it.
terraform taint does the trick. The terraform taint command manually marks a Terraform-managed resource as tainted, forcing it to be destroyed and recreated on the next apply.
terraform taint heroku_app.default
Second, when troubleshooting why the resource isn't listed for destruction, please make sure you point to the right Terraform tfstate file.
When you run terraform plan, do you see any resources that were already created?

How to rename the aws lambda function without changing anything in it

Earlier my function in serverless was:
functions:
  fun:
    handler: file.handler
    name: ${opt:stage, self:provider.stage}-lambda-fun
    environment: ${file(env.yml):${self:provider.stage}.lambda-fun}
    timeout: 180
    memorySize: 1024
I want to replace fun with a more meaningful name, so I changed it to:
functions:
  my-fun:
    handler: file.handler
    name: ${opt:stage, self:provider.stage}-lambda-fun
    environment: ${file(env.yml):${self:provider.stage}.lambda-fun}
    timeout: 180
    memorySize: 1024
Now, when I deployed this function through Serverless, I got the error below:
An error occurred while provisioning your stack: my-funLogGroup
- /aws/lambda/lambda-fun already exists
Please help me understand what more I can do to achieve this.
Try removing the stack first using serverless remove and then redeploy.
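A minimal sketch of that sequence, assuming the default stage (add --stage if you deploy to a named stage):

serverless remove
serverless deploy

Keep in mind that serverless remove deletes the whole stack, including the existing log group, so the function is unavailable until the redeploy completes.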
It's not the exact same issue, but this GitHub issue gives an alternative solution: Cannot rename Lambda functions #108
I commented out the function definition I wanted to rename, along with the resources referencing it, then ran sls deploy, uncommented them, and ran sls deploy again.
The problem with this is that the first deploy deletes the function, so you have to account for this downtime.
