I'm having issues deploying my serverless application to AWS. In AWS the logs show:
Unable to import module 'wsgi_handler': No module named 'werkzeug'
I have explicitly specified werkzeug in my requirements.txt, but it seems that when I run sls deploy the specified packages are not being put inside the zip file that is uploaded to my S3 bucket.
Below is a copy of my serverless.yml file:
service: serverless-flask

plugins:
  - serverless-python-requirements
  - serverless-wsgi
  - serverless-dynamodb-local

custom:
  tableName: 'transactions-table-${self:provider.stage}'
  wsgi:
    app: app.app # entrypoint is app.app, i.e. the app object in the app.py module
    packRequirements: false
  pythonRequirements:
    dockerizePip: true
  dynamodb:
    stages:
      - test
      - dev
    start:
      migrate: true

provider:
  name: aws
  runtime: python3.6
  stage: dev
  region: us-east-1
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
      Resource:
        - { "Fn::GetAtt": ["TransactionsDynamoDBTable", "Arn"] }
  environment:
    TRANSACTIONS_TABLE: ${self:custom.tableName}

functions:
  app:
    handler: wsgi_handler.handler
    events:
      - http: ANY /
      - http: 'ANY {proxy+}'

resources:
  Resources:
    TransactionsDynamoDBTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        AttributeDefinitions:
          - AttributeName: transactionId
            AttributeType: S
          - AttributeName: timestamp
            AttributeType: S
        KeySchema:
          - AttributeName: transactionId
            KeyType: HASH
          - AttributeName: timestamp
            KeyType: RANGE
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1
        TableName: ${self:custom.tableName}
Here is my requirements.txt:
boto3==1.11.17
botocore==1.14.17
Click==7.0
docutils==0.15.2
Flask==1.1.1
itsdangerous==1.1.0
Jinja2==2.11.1
jmespath==0.9.4
MarkupSafe==1.1.1
python-dateutil==2.8.1
s3transfer==0.3.3
six==1.14.0
urllib3==1.25.8
Werkzeug==1.0.0
As far as I know, using the serverless-wsgi plugin should handle the installation of this package automatically, but I'm seeing no .requirements folder being created in the .serverless folder or in the zip file that serverless creates.
The requirements.txt file contained inside the zip file only contains the following:
-i https://pypi.org/simple
I'm not sure what I'm doing wrong, but the only workaround so far has been to tear down the infrastructure and redeploy with a new URL, which isn't ideal.
Adding a reference to the lambda layer did the trick for me (see the layers section):
api:
  timeout: 30
  handler: wsgi_handler.handler
  layers:
    - { Ref: PythonRequirementsLambdaLayer }
  events:
    - http: ANY /
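For the PythonRequirementsLambdaLayer reference to resolve, serverless-python-requirements also has to be told to build the dependencies as a layer; as a sketch, the relevant option in its configuration is layer:

```yaml
custom:
  pythonRequirements:
    dockerizePip: true
    # Build the requirements into a Lambda layer; this is what creates
    # the PythonRequirementsLambdaLayer resource referenced above.
    layer: true
```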
You need to add your files to the package manually.
In your serverless.yml, add this:
package:
  individually: true
  exclude:
    - ./**
  include:
    - requirements.txt
    - <other files>
Once you deploy, you can download the packaged zip from AWS and verify that your files are there.
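As a side note, in newer versions of the Serverless Framework the include/exclude keys were replaced by a single patterns list; the equivalent would be roughly:

```yaml
package:
  individually: true
  patterns:
    - '!./**'            # exclude everything by default
    - 'requirements.txt'
    # - add your other files here
```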
Related
I have a serverless.yml file for a Node project on AWS. I also have a DynamoDB table defined in another file. I can deploy the project with no issues; however, VS Code is showing a red error on my import line: Incorrect type expected...
serverless.yml
service: auction-service

plugins:
  - serverless-bundle
  - serverless-pseudo-parameters

provider:
  name: aws
  runtime: nodejs12.x
  memorySize: 256
  stage: ${opt:stage, 'dev'}
  region: eu-west-1
  environment:
    AUCTIONS_TABLE_NAME: ${self:custom.AuctionsTable.name}
  iam:
    role:
      statements:
        - ${file(iam/AuctionsTableIAM.yml):AuctionsTableIAM}

resources:
  Resources:
    AuctionsTable: ${file(resources/AuctionsTable.yml):AuctionsTable}

functions:
  createAuction:
    handler: src/handlers/createAuction.handler
    events:
      - http:
          method: POST
          path: /auction

custom:
  AuctionsTable:
    name: !Ref AuctionsTable
    arn: !GetAtt AuctionsTable.Arn
  bundle:
    linting: false
AuctionsTable.yml:
AuctionsTable:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: AuctionsTable--${self:provider.stage}
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: id
        AttributeType: S
    KeySchema:
      - AttributeName: id
        KeyType: HASH
Try This
resources:
  Resources:
    - ${file(resources/AuctionsTable.yml):AuctionsTable}
Note, you will need to add the name AuctionsTable to the imported file.
I'm trying to organize this serverless.yml file, but I'm getting failures. I've scanned the docs to understand how to interpolate via file substitution, and I just can't figure it out.
I've read all these:
https://www.serverless.com/framework/docs/providers/aws/guide/variables#reference-properties-in-other-files
https://forum.serverless.com/t/split-up-include-reference-serverless-yml-file/3747
https://github.com/AnomalyInnovations/serverless-stack-demo-api/blob/master/serverless.yml#L128-L137
ERRORS
Serverless Error ----------------------------------------
The CloudFormation template is invalid: [/Resources/IamRoleLambdaExecution/Type/Policies/0/PolicyDocument/Statement/2] 'null' values are not allowed in templates
Serverless.yaml
service: newsletter
frameworkVersion: '2'

plugins:
  - serverless-bundle

provider:
  name: aws
  runtime: nodejs12.x
  memorySize: 256
  stage: ${opt:stage, 'dev'}
  region: us-west-2
  iamRoleStatements:
    - ${file(resources/iam/UsersSubscriptionTable.yml):UsersSubscriptionTableIAM}

resources:
  - ${file(/resources/dynamo/UsersSubscriptionTable.yml):UsersSubscriptionTable}

functions:
  createEmailEntry:
    handler: src/Email.addUser
    events:
      - http:
          method: POST
          path: /subscribe
  removeEmailEntry:
    handler: src/Email.removeUser
    events:
      - http:
          method: GET
          path: /unsubscribe

# Not recommended for production use
custom:
  bundle:
    linting: false
resources/iam/UsersSubscriptionTable.yml
Resources:
  UsersSubscriptionTableIAM:
    Effect: Allow
    Action:
      - dynamodb:PutItem
      - dynamodb:GetItem
    Resource:
      - arn:aws:dynamodb:${aws:region}:${aws:accountId}:table/MyCoolTable
resources/dynamo/UsersSubscriptionTable.yml
Resources:
  UsersSubscriptionTable:
    Type: AWS::DynamoDB::Table
    Properties:
      TableName: MyCoolTable
      BillingMode: PAY_PER_REQUEST
      AttributeDefinitions:
        - AttributeName: email
          AttributeType: S
      KeySchema:
        - AttributeName: email
          KeyType: HASH
This was a huge pain to figure out:
provider:
  name: aws
  runtime: nodejs12.x
  memorySize: 256
  stage: ${opt:stage, 'dev'}
  region: us-west-2
  iam:
    role:
      statements:
        - ${file(resources/iam/UsersSubscriptionTable.yml):UsersSubscriptionTableIAM}

resources:
  Resources:
    UsersSubscriptionTable: ${file(resources/dynamo/UsersSubscriptionTable.yml):UsersSubscriptionTable}
Resources shouldn't be an array in this case.
In addition, the IAM role statement file shouldn't have a Resources top-level item in the YAML either:
UsersSubscriptionTableIAM:
  Effect: Allow
  Action:
    - dynamodb:PutItem
    - dynamodb:GetItem
  Resource:
    - arn:aws:dynamodb:${aws:region}:${aws:accountId}:table/MyCoolTable

UsersSubscriptionTable:
  Type: AWS::DynamoDB::Table
  Properties:
    TableName: MyCoolTable
    BillingMode: PAY_PER_REQUEST
    AttributeDefinitions:
      - AttributeName: email
        AttributeType: S
    # Specifies the email as the partition key (primary key)
    KeySchema:
      - AttributeName: email
        KeyType: HASH
I have the following serverless.yml file.
I'm trying to assign read/write permissions to the generated DynamoDB table.
So far it generates my Lambda and the DynamoDB table, but the Lambda isn't assigned permissions to access it.
I get no errors, and it doesn't seem to add the permission to the DynamoDB table.
Can anyone shed any light, please?
service:
  name: catcam

custom:
  stage: ${opt:stage, self:provider.stage}
  tableName: ${self:custom.stage}-notes
  environment:
    tableName: ${self:custom.tableName}

plugins:
  - '#hewmen/serverless-plugin-typescript'
  - serverless-plugin-optimize
  - serverless-offline

provider:
  name: aws
  runtime: nodejs12.x
  iamRoleStatements:
    - Effect: Allow
      Action:
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource:
        - { "Fn::GetAtt": ["NotesTable", "Arn"] }
        # - { !GetAtt NotesTable.Arn }

functions:
  main: # The name of the lambda function
    # The module 'handler' is exported in the file 'src/lambda'
    handler: src/lambda.handler
    events:
      - http:
          method: any
          path: /{any+}

resources:
  Resources:
    NotesTable:
      Type: AWS::DynamoDB::Table
      Properties:
        TableName: ${self:custom.tableName}
        AttributeDefinitions:
          - AttributeName: userId
            AttributeType: S
          - AttributeName: noteId
            AttributeType: S
        KeySchema:
          - AttributeName: userId
            KeyType: HASH
          - AttributeName: noteId
            KeyType: RANGE
        # Set the capacity to auto-scale
        BillingMode: PAY_PER_REQUEST
Turns out there was nothing wrong with the above; it's correct! It was me being a banana and not matching the full name of the table with the environment in the application, i.e. the notes table becomes dev-notes for the dev stage. Maybe the above will help someone.
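One way to avoid this kind of mismatch is to hand the stage-qualified table name to the function through an environment variable instead of hard-coding it in the application; a minimal sketch based on the config above:

```yaml
provider:
  environment:
    # The function reads the table name from its environment, so it always
    # sees the stage-qualified value (e.g. dev-notes) that resources create.
    tableName: ${self:custom.tableName}
```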
I'm having difficulties getting the right role to execute a Dynamo UpdateItem in my golang lambda handler.
I've deployed the function using the serverless framework with the following config:
provider:
  name: aws
  runtime: go1.x
  stage: ${opt:stage, 'dev'}
  environment: ${file(./env/config.${self:provider.stage}.yml)}
  iamRoleStatements: # TODO: create special roles and restrict access per lambda
    - Effect: Allow
      Action:
        - dynamodb:DescribeTable
        - dynamodb:Query
        - dynamodb:Scan
        - dynamodb:GetItem
        - dynamodb:PutItem
        - dynamodb:UpdateItem
        - dynamodb:DeleteItem
      Resource:
        - "Fn::GetAtt": [ myTable, Arn ]

resources:
  Resources:
    myTable:
      Type: 'AWS::DynamoDB::Table'
      Properties:
        TableName: myTable-${opt:stage, 'dev'}
        AttributeDefinitions:
          - AttributeName: UserID
            AttributeType: S
        KeySchema:
          - AttributeName: UserID
            KeyType: HASH
        ProvisionedThroughput:
          ReadCapacityUnits: 1
          WriteCapacityUnits: 1

functions:
  myFunc:
    handler: bin/myFunc
    events:
      - http:
          path: myFunc
          method: post
          authorizer: app-auth
          cors: true
The handler uses the golang aws-sdk to create a session and call UpdateItem on the table:
sess, err := session.NewSession()
svc := dynamodb.New(sess)
input := &dynamodb.UpdateItemInput{
...
}
_, err = svc.UpdateItem(input)
This throws the exception:
AccessDeniedException: User: arn:aws:sts::{acct}:assumed-role/myservice-stage-us-east-1-lambdaRole/myservice-stage-myfunc
The User arn:aws:sts::{acct}:assumed-role/myservice-stage-us-east-1-lambdaRole is a role that has the correct permissions.
I'm not sure what the /myservice-stage-myfunc part of the User in the exception refers to, as nothing of the sort exists in the IAM console.
Is there some kind of config step I'm missing? To my knowledge, the IAM permissions set up in the serverless.yml should apply to all functions. However, the assumed role when working with the Go AWS SDK seems wrong.
DynamoDB has sub-resources (such as indexes) that often need access too. To make sure you are also covering those, I would recommend adding a wildcard * to the end of the resource ARN. To do this I prefer to use the serverless-pseudo-parameters plugin (you can install it quickly with serverless plugin install --name serverless-pseudo-parameters) and then use it to describe the resource more cleanly:
Resource:
  - arn:aws:dynamodb:#{AWS::Region}:#{AWS::AccountId}:table/myTable-${opt:stage, 'dev'}*
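If you'd rather avoid the extra plugin, roughly the same effect is possible with plain CloudFormation intrinsics, deriving the wildcard from the table's own ARN (a sketch, not taken from the original answer):

```yaml
Resource:
  - Fn::GetAtt: [myTable, Arn]
  # Append /* to the table ARN so sub-resources such as indexes are covered
  - Fn::Join: ['', [{ 'Fn::GetAtt': [myTable, Arn] }, '/*']]
```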
I would like to automate the deployment of an AWS Lambda developed in Java. For this I created a CodePipeline which is triggered by a git push to a CodeCommit repository. The next step in the CodePipeline is a CodeBuild project. CodeBuild uses the following buildspec.yml file:
version: 0.1
phases:
  build:
    commands:
      - echo Entering build phase...
      - echo Build started on `date`
      - mvn package shade:shade
      - mv target/Output-1.0.jar .
artifacts:
  files:
    - Output-1.0.jar
When the CodeBuild project is run manually, it uploads the jar file to an S3 bucket. This jar file can be used to update the Lambda without any problem, and everything works as expected. But if CodeBuild is run via CodePipeline, the result is the jar file wrapped inside a zip. Since this zip cannot be used for updating the Lambda function, I am not sure what I should do here, since CodePipeline overrides any packaging set for the CodeBuild project.
The idea is that CodePipeline triggers CodeBuild, which produces output that an additional Lambda will take and use to update the Lambda function. Is it somehow possible for the output of CodeBuild, when invoked from CodePipeline, to be a jar instead of a zip? If not, what should I do here?
Any help is appreciated.
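For context, CodePipeline always zips the artifacts it passes between stages, so the buildspec's packaging settings cannot change that. One common workaround (a sketch; the bucket and function names are placeholders, and the CodeBuild role would need s3:PutObject and lambda:UpdateFunctionCode permissions) is to push the raw jar and update the function from a post_build step:

```yaml
post_build:
  commands:
    # Hypothetical names: replace my-artifact-bucket and my-function with your own
    - aws s3 cp Output-1.0.jar s3://my-artifact-bucket/Output-1.0.jar
    - aws lambda update-function-code --function-name my-function --s3-bucket my-artifact-bucket --s3-key Output-1.0.jar
```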
A zip or a jar file can both be used to update a Lambda function; you just need to add a "Deploy" step to your CodePipeline that uses CloudFormation.
This is a Node.js build/pipeline; try to adapt it to your Java project:
Project Files
buildspec.yml
version: 0.2
phases:
  install:
    commands:
      - echo install phase
  pre_build:
    commands:
      - echo pre_build phase
  build:
    commands:
      - npm install --production
  post_build:
    commands:
      - echo post build
artifacts:
  type: zip
  files:
    - index.js
    - node_modules/**/*
    - package.json
    - template.yml
    - configuration.json
  discard-paths: no
configuration.json
{
  "Parameters": {
    "BucketName": { "Fn::GetArtifactAtt": ["Build", "BucketName"] },
    "ObjectKey": { "Fn::GetArtifactAtt": ["Build", "ObjectKey"] }
  }
}
template.yml (you need to add an AWS::Lambda::Permission)
AWSTemplateFormatVersion: "2010-09-09"
Description: "My Lambda Template"
Parameters:
  BucketName:
    Type: String
  ObjectKey:
    Type: String
  Roles:
    Type: String
    Default: Roles
  LambdaRole:
    Type: String
    Default: LambdaRole
Resources:
  MyLambdaFunction:
    Type: AWS::Lambda::Function
    Properties:
      Description: 'My Lambda Handler'
      Handler: index.handler
      Runtime: nodejs6.10
      Timeout: 5
      Code:
        S3Bucket:
          Ref: BucketName
        S3Key:
          Ref: ObjectKey
      Role:
        Fn::Join:
          - ""
          - - "arn:aws:iam::"
            - !Ref AWS::AccountId
            - ":role/"
            - Fn::ImportValue:
                Fn::Join:
                  - ""
                  - - Ref: Roles
                    - "-"
                    - Ref: LambdaRole
Roles Template
AWSTemplateFormatVersion: '2010-09-09'
Description: 'The AWS Resource Roles'
Resources:
  CodeBuildRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          Effect: Allow
          Principal:
            Service: codebuild.amazonaws.com
          Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess
        - arn:aws:iam::aws:policy/CloudWatchFullAccess
        - arn:aws:iam::aws:policy/AWSCodeCommitFullAccess
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
  CodePipelineRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          Effect: Allow
          Principal:
            Service: codepipeline.amazonaws.com
          Action: sts:AssumeRole
      Policies:
        - PolicyName: CloudFormationFullAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action:
                  - "cloudformation:*"
                Resource: "*"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSCodePipelineFullAccess
        - arn:aws:iam::aws:policy/AWSCodeCommitFullAccess
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
        - arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess
        - arn:aws:iam::aws:policy/AWSLambdaFullAccess
  CloudFormationRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          Effect: Allow
          Principal:
            Service: cloudformation.amazonaws.com
          Action: sts:AssumeRole
      Policies:
        - PolicyName: CloudFormationFullAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action: "cloudformation:*"
                Resource: "*"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSCodePipelineFullAccess
        - arn:aws:iam::aws:policy/AWSCodeCommitFullAccess
        - arn:aws:iam::aws:policy/AmazonS3FullAccess
        - arn:aws:iam::aws:policy/AWSCodeBuildAdminAccess
        - arn:aws:iam::aws:policy/AWSLambdaFullAccess
        - arn:aws:iam::aws:policy/AmazonAPIGatewayAdministrator
  LambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          Effect: Allow
          Principal:
            Service: lambda.amazonaws.com
          Action: sts:AssumeRole
      Policies:
        - PolicyName: CloudFormationFullAccess
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action: "cloudformation:*"
                Resource: "*"
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AWSLambdaFullAccess
        - arn:aws:iam::aws:policy/AWSCodePipelineFullAccess
        - arn:aws:iam::aws:policy/AmazonSESFullAccess
Outputs:
  CodeBuildRoleOutput:
    Description: 'Maybe API CodeBuildRole ARN'
    Value: !Ref 'CodeBuildRole'
    Export:
      Name: !Sub '${AWS::StackName}-CodeBuildRole'
  CodePipelineRoleOutput:
    Description: 'Maybe API CodePipelineRole ARN'
    Value: !Ref 'CodePipelineRole'
    Export:
      Name: !Sub '${AWS::StackName}-CodePipelineRole'
  CloudFormationRoleOutput:
    Description: 'Maybe API CloudFormationRole ARN'
    Value: !Ref 'CloudFormationRole'
    Export:
      Name: !Sub '${AWS::StackName}-CloudFormationRole'
  LambdaRoleOutput:
    Description: 'Maybe API LambdaRole ARN'
    Value: !Ref 'LambdaRole'
    Export:
      Name: !Sub '${AWS::StackName}-LambdaRole'
CodePipeline Bucket
AWSTemplateFormatVersion: '2010-09-09'
Description: 'The AWS S3 CodePipeline Bucket'
Resources:
  CodePipelineBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain
    Properties:
      BucketName: my-code-pipeline-bucket
      VersioningConfiguration:
        Status: Enabled
      AccessControl: BucketOwnerFullControl
Outputs:
  CodePipelineBucketOutput:
    Description: 'CodePipeline Bucket Ref'
    Value: !Ref CodePipelineBucket
    Export:
      Name: !Sub '${AWS::StackName}-CodePipelineBucketRef'
CodeBuild Template
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Nodejs CodeBuild Template'
Parameters:
  Artifact:
    Type: String
    Default: artifact
  Roles:
    Type: String
    Default: Roles
  CodeBuildRole:
    Type: String
    Default: CodeBuildRole
Resources:
  NodejsCodeBuild:
    Type: AWS::CodeBuild::Project
    DeletionPolicy: Retain
    Properties:
      ServiceRole:
        Fn::ImportValue:
          Fn::Join:
            - ""
            - - Ref: Roles
              - "-"
              - Ref: CodeBuildRole
      Artifacts:
        Type: no_artifacts
      Environment:
        ComputeType: BUILD_GENERAL1_SMALL
        Image: aws/codebuild/eb-nodejs-6.10.0-amazonlinux-64:4.0.0
        Type: LINUX_CONTAINER
      Source:
        Type: S3
        Location: !Ref Artifact
Outputs:
  NodejsCodeBuildOutput:
    Description: 'Nodejs CodeBuild Ref'
    Value: !Ref 'NodejsCodeBuild'
    Export:
      Name: !Sub '${AWS::StackName}-NodejsCodeBuildRef'
CodePipeline Template
AWSTemplateFormatVersion: '2010-09-09'
Description: 'CodePipeline for Nodejs Applications'
Parameters:
  Roles:
    Type: String
    Default: Roles
  CodePipelineRole:
    Type: String
    Default: CodePipelineRole
  CloudFormationRole:
    Type: String
    Default: CloudFormationRole
  CodePipelineBucket:
    Type: String
    Default: CodePipelineBucket
  CodePipelineBucketRef:
    Type: String
    Default: CodePipelineBucketRef
  PipelineName:
    Type: String
    Default: PipelineName
  CodeBuildProject:
    Type: String
    Default: NodejsCodeBuild
  CodeBuildProjectRef:
    Type: String
    Default: NodejsCodeBuildRef
  Branch:
    Type: String
    Default: master
  Repository:
    Type: String
    Default: my-repository-name
  LambdaStack:
    Type: String
    Default: LambdaStack
Resources:
  NodejsCodePipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      Name: !Ref PipelineName
      RoleArn:
        Fn::Join:
          - ""
          - - "arn:aws:iam::"
            - !Ref AWS::AccountId
            - ":role/"
            - Fn::ImportValue:
                Fn::Join:
                  - ""
                  - - Ref: Roles
                    - "-"
                    - Ref: CodePipelineRole
      ArtifactStore:
        Location:
          Fn::Join:
            - ""
            - - Fn::ImportValue:
                  Fn::Join:
                    - ""
                    - - Ref: CodePipelineBucket
                      - "-"
                      - Ref: CodePipelineBucketRef
        Type: S3
      Stages:
        - Name: Source
          Actions:
            - InputArtifacts: []
              Name: Source
              ActionTypeId:
                Category: Source
                Owner: AWS
                Version: 1
                Provider: CodeCommit
              OutputArtifacts:
                - Name: Master
              Configuration:
                BranchName: !Ref Branch
                RepositoryName: !Ref Repository
              RunOrder: 1
        - Name: Build
          Actions:
            - Name: Build
              ActionTypeId:
                Category: Build
                Owner: AWS
                Version: 1
                Provider: CodeBuild
              InputArtifacts:
                - Name: Master
              OutputArtifacts:
                - Name: Build
              Configuration:
                ProjectName:
                  Fn::Join:
                    - ""
                    - - Fn::ImportValue:
                          Fn::Join:
                            - ""
                            - - Ref: CodeBuildProject
                              - "-"
                              - Ref: CodeBuildProjectRef
              RunOrder: 1
        - Name: Stage
          Actions:
            - Name: Sandbox
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Version: 1
                Provider: CloudFormation
              InputArtifacts:
                - Name: Build
              OutputArtifacts:
                - Name: Deploy
              Configuration:
                StackName: !Ref LambdaStack
                ActionMode: CREATE_UPDATE
                Capabilities: CAPABILITY_IAM
                TemplateConfiguration: Build::configuration.json
                TemplatePath: Build::template.yml
                ParameterOverrides: |
                  {
                    "BucketName" : { "Fn::GetArtifactAtt" : ["Build", "BucketName"]},
                    "ObjectKey" : { "Fn::GetArtifactAtt" : ["Build", "ObjectKey"]}
                  }
              RoleArn:
                Fn::Join:
                  - ""
                  - - "arn:aws:iam::"
                    - !Ref AWS::AccountId
                    - ":role/"
                    - Fn::ImportValue:
                        Fn::Join:
                          - ""
                          - - Ref: Roles
                            - "-"
                            - Ref: CloudFormationRole
              RunOrder: 1