Serverless - Lambda Layers "Cannot find module 'request'" - aws-lambda

When I deploy my serverless API using:
serverless deploy
the Lambda layer gets created, but when I go to run the function it gives me this error:
"Cannot find module 'request'"
But if I upload the .zip file manually through the console (the exact same file that's uploaded when I deploy), it works fine.
Anyone have any idea why this is happening?
environment:
  SLS_DEBUG: "*"

provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:api-type, 'uat'}-${opt:api, 'payment'}
  region: ca-central-1
  timeout: 30
  memorySize: 128
  role: ${file(config/prod.env.json):ROLE}
  vpc:
    securityGroupIds:
      - ${file(config/prod.env.json):SECURITY_GROUP}
    subnetIds:
      - ${file(config/prod.env.json):SUBNET}
  apiGateway:
    apiKeySourceType: HEADER
  apiKeys:
    - ${file(config/${opt:api-type, 'uat'}.env.json):${opt:api, "payment"}-APIKEY}

functions:
  - '${file(src/handlers/${opt:api, "payment"}.serverless.yml)}'

package:
  # individually: true
  exclude:
    - node_modules/**
    - nodejs/**

plugins:
  - serverless-offline
  - serverless-plugin-warmup
  - serverless-content-encoding

custom:
  contentEncoding:
    minimumCompressionSize: 0 # Minimum body size required for compression in bytes

layers:
  nodejs:
    package:
      artifact: nodejs.zip
    compatibleRuntimes:
      - nodejs8.10
    allowedAccounts:
      - "*"
That's what my serverless.yml looks like.

I was getting a similar error while using the explicit layers keys that you are using to define a Lambda layer.
My error (for the sake of web searches) was this:
Runtime.ImportModuleError: Error: Cannot find module <package name>
I feel this is a temporary solution because I wanted to explicitly define my layers like you were doing, but it wasn't working, so it seemed like a bug.
I created a bug report in Serverless for this issue. If anyone else is having this same issue they can track it there.
SOLUTION
I followed this post in the Serverless forums, based on these docs from AWS.
I zipped up my node_modules under a folder named nodejs, so that when it is unzipped it looks like nodejs/node_modules/<various packages>.
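For reference, a minimal shell sketch of producing such a zip (assuming the layer's dependencies are listed in a package.json in the current directory; test.zip matches the artifact name used below):

mkdir -p nodejs
cp package.json nodejs/
(cd nodejs && npm install --production)   # creates nodejs/node_modules/<various packages>
zip -r test.zip nodejs                    # unzips back to nodejs/node_modules/...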
Then instead of using the explicit definition of layers I used the package and artifact keys like so:
layers:
  test:
    package:
      artifact: test.zip
In the function, the layer is referred to like this:
functions:
  function1:
    handler: index.handler
    layers:
      - { Ref: TestLambdaLayer }
TestLambdaLayer follows the naming convention <your layer name>LambdaLayer, as documented here.

Make sure you run npm install inside your layers before deploying, i.e.:
cd ~/repos/repo-name/layers/utilityLayer/nodejs && npm install
Otherwise your layers will get deployed without a node_modules folder. You can download the .zip of your layer from the Lambda UI to confirm the contents of that layer.
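The same check can be done from the CLI instead of the Lambda UI; a sketch, assuming the published layer is named utilityLayer and you want version 1 (adjust both):

# Content.Location is a pre-signed URL to the layer's zip
aws lambda get-layer-version --layer-name utilityLayer --version-number 1 \
  --query Content.Location --output text | xargs curl -s -o layer.zip
unzip -l layer.zip   # should list nodejs/node_modules/...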

If anyone faces a similar Runtime.ImportModuleError, it's fair to say that another cause of this issue can be a package exclude statement in the serverless.yml file.
Be aware that if you have this statement:
package:
  exclude:
    - './**'
    - '!node_modules/**'
    - '!dist/**'
    - '.git/**'
It will cause exactly the same error at runtime once you've deployed your Lambda function (with the Serverless Framework). Just make sure to remove the patterns that could create a conflict with your dependencies.

I am using TypeScript with the serverless-plugin-typescript and I was having the same error, too.
When I switched from
const myModule = require('./src/myModule');
to
import myModule from './src/myModule';
the error disappeared. It seems like the files were not included in the zip file by Serverless when I was using require.
PS: Removing the serverless-plugin-typescript and switching back to javascript also solved the problem.

Related

How to "flatten" Serverless Framework directories when deploying lambda functions to AWS

Developing an AWS Lambda function... and deploying it using the Serverless Framework.
service serverless.yml file:
service: user
frameworkVersion: '3'

provider:
  name: aws
  runtime: nodejs16.x
  architecture: arm64
  region: ${opt:region, 'us-east-1'}
  stage: ${opt:stage, 'development'}

package:
  individually: true

functions:
  - ${file(./lambda/functions/authorizeUser/serverless.yml)}
function serverless.yml (in the relative directory referenced above)
authorizeUser:
  handler: index.handler
  name: authorizeUser
  description: Registers or Authorizes a user with the system
  package:
    patterns:
      - '!**/*'
      - ./lambda/functions/authorizeUser/index.js
      - ./lambda/functions/authorizeUser/magic.js
I need to use the directory in which the service serverless.yml file resides as the base path for the individual Lambda function source *.js files. I originally expected that I could have just used the directory in which the function serverless.yml resides as the base path. How can I tell sls deploy to use the directory in which the function serverless.yml file resides as the base path for the individual Lambda function source *.js files?
But a bigger issue with my approach is that when the individual Lambda function source *.js files are deployed on AWS, the directory structure is recreated, i.e. the source *.js files land in my Lambda function's
/authorizeUser/lambda/functions/authorizeUser
directory, which causes the following error when I test my Lambda function:
{
  "errorType": "Runtime.ImportModuleError",
  "errorMessage": "Error: Cannot find module 'index'\nRequire stack:\n- /var/runtime/index.mjs",
  "trace": [
    "Runtime.ImportModuleError: Error: Cannot find module 'index'",
    "Require stack:",
    "- /var/runtime/index.mjs",
    "    at _loadUserApp (file:///var/runtime/index.mjs:726:17)",
    "    at async Object.module.exports.load (file:///var/runtime/index.mjs:741:21)",
    "    at async file:///var/runtime/index.mjs:781:15",
    "    at async file:///var/runtime/index.mjs:4:1"
  ]
}
If I manually move the individual lambda function source *.js files to the /authorizeUser (the lambda function's root directory) the function will execute. How can I tell sls deploy to flatten the directory structure when it deploys the lambda function to AWS (if this is even possible)?
I realize that I can just place all of the files in the same directory in my development environment and these problems that I'm experiencing will not occur, but my preference is to manage source files in a nested directory structure to help categorize files into logic groups.
https://www.serverless.com/framework/docs/providers/aws/guide/functions
I have come to the conclusion that flattening cannot be done because of the way that Serverless constructs the underlying CloudFormation template. Dropping to a parent directory (below the base directory) for the application services makes it impossible (or simply illogical from a purely computer science tree perspective) to attach sibling directories to the services. It would be helpful if there were documentation related to the way that Serverless "path" references are resolved and work together e.g., glob versus $file() references. If this exists somewhere, please send me a link.

PROJECT_ID env and Secret Manager Access

I would like to use Secret Manager to store a credential for our Artifactory and use it within a Cloud Build step. I have it working using a build similar to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']

availableSecrets:
  secretManager:
  - versionName: "projects/PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
All great, no problems - I then try and slightly improve it to:
steps:
- name: 'busybox:glibc'
  entrypoint: 'sh'
  args: ['-c', 'env']
  secretEnv: ['SECRET_VALUE']

availableSecrets:
  secretManager:
  - versionName: "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
    env: 'SECRET_VALUE'
But then it throws the error:
ERROR: (gcloud.builds.submit) INVALID_ARGUMENT: failed to get secret name from secret version "projects/$PROJECT_ID/secrets/TEST-SECRET/versions/1"
I have been able to add a TRIGGER-level env var (SECRET_MANAGER_PROJECT_ID), and that works fine. The only issue is that, since it is a trigger env var, it is not available on rebuild, which breaks a lot of things.
Does anyone know how to get the PROJECT_ID of a Secret Manager from within CloudBuild without using a Trigger Param?
For now, it's not possible to set a dynamic value in the secret field. I already provided this feedback directly to the Google Cloud PM; it has been taken into account, but I don't have more info to share, especially about availability.
EDIT 1
(January 22). Thanks to Seza443's comment, I tested again and now it works with the automatically populated variables (PROJECT_ID and PROJECT_NUMBER), but also with user-defined substitution variables!
It appears that Cloud Build now allows for the use of substitution variables within the availableSecrets field of a build configuration.
From Google Cloud's documentation on using secrets:
After all the build steps, add an availableSecrets field to specify the secret version and environment variables to use for your secret. You can include substitution variables in the value of the secretVersion field. You can specify more than one secret in a build.
I was able to use the $PROJECT_ID variable in my own build configuration like so:
...
availableSecrets:
  secretManager:
  - versionName: projects/$PROJECT_ID/secrets/api-key/versions/latest
    env: API_KEY
Granted, there appears to be (at least at present) some discrepancy between the documentation quoted above and the recommended configuration file schema. In the documentation they refer to secretVersion, but that appears to have changed to versionName. In either case, it seems to work properly.
Use the $PROJECT_NUMBER instead.
https://cloud.google.com/build/docs/configuring-builds/substitute-variable-values#using_default_substitutions

How do you deploy cloudformation with a lambda function without inline code?

The Lambda function code is over 4096 characters, so I can't deploy the Lambda function as inline code in a CloudFormation template.
(https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html)
ZipFile
Your source code can contain up to 4096 characters. For JSON, you must escape quotes and special characters such as newline (\n) with a backslash.
I have to zip it first, upload it to an S3 bucket, set the S3 bucket and file details in CloudFormation, and deploy it.
I can't find a way to deploy with one command. If I update the Lambda code, I have to repeat the above steps.
But both AWS SAM and the Serverless Framework can deploy Lambda functions without inline code.
The only issue is that AWS SAM or the Serverless Framework creates an API Gateway by default, which I don't need.
Any solutions or recommendations for me?
If you're managing your deployment with plain CloudFormation and the aws command line interface, you can handle this relatively easily using aws cloudformation package to generate a "packaged" template for deployment.
aws cloudformation package accepts a template where certain properties can be written using local paths, zips the content from the local file system, uploads to a designated S3 bucket, and then outputs a new template with these properties rewritten to refer to the location on S3 instead of the local file system. In your case, it can rewrite Code properties for AWS::Lambda::Function that point to local directories, but see aws cloudformation package help for a full list of supported properties. You do need to setup an S3 bucket ahead of time to store your assets, but you can reuse the same bucket in multiple CloudFormation projects.
So, let's say you have an input.yaml with something like:
MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Code: my-function-directory
You might package this up with something like:
aws cloudformation package \
  --template-file input.yaml \
  --s3-bucket my-packaging-bucket \
  --s3-prefix my-project/ \
  --output-template-file output.yaml
Which would produce an output.yaml with something resembling this:
MyLambdaFunction:
  Properties:
    Code:
      S3Bucket: my-packaging-bucket
      S3Key: my-project/0123456789abcdef0123456789abcdef
  Type: AWS::Lambda::Function
You can then use output.yaml with aws cloudformation deploy (or any other aws cloudformation command accepting a template).
To truly "deploy with one command" and ensure you always do deployments consistently, you can combine these two commands into a script, Makefile, or something similar.
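For example, a small wrapper script along those lines (the bucket, prefix, and stack name are placeholders carried over from the example above):

#!/bin/sh
set -e
aws cloudformation package \
  --template-file input.yaml \
  --s3-bucket my-packaging-bucket \
  --s3-prefix my-project/ \
  --output-template-file output.yaml
aws cloudformation deploy \
  --template-file output.yaml \
  --stack-name my-project \
  --capabilities CAPABILITY_IAM   # only needed if the template creates IAM resources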
You can zip the file first, then use the AWS CLI to update your Lambda function:
zip function.zip lambda_function.py
aws lambda update-function-code --function-name <your-lambda-function-name> --zip-file fileb://function.zip
Within CloudFormation (last 3 lines):
BackupLambda:
  Type: "AWS::Lambda::Function"
  Properties:
    Handler: "backup_lambda.lambda_handler"
    Role: !Ref Role
    Runtime: "python2.7"
    MemorySize: 128
    Timeout: 120
    Code:
      S3Bucket: !Ref BucketWithLambdaFunction
      S3Key: !Ref PathToLambdaFile
Re. your comment:
The only issue is that AWS SAM or the Serverless Framework creates an API Gateway by default, which I don't need.
For the Serverless Framework, that's not true by default. The default generated serverless.yml file includes config for the Lambda function itself, but the API Gateway configuration is provided only as an example in the following commented-out section.
If you uncomment the 'events' section for http then it will also create an API Gateway config for your Lambda, but not unless you do.
functions:
  hello:
    handler: handler.hello
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    # events:
    #   - http:
    #       path: users/create
    #       method: get

Is there a way we can import a file into YAML in GCP Deployment Manager

I am trying to create a configuration file in GCP Deployment Manager, and I have a metadata file which needs to be imported as text.
I know how to do it in a .py template, but I'm wondering how to do it in YAML.
I tried different approaches, but none seem to work.
Although Deployment Manager can use the imports statement to import Jinja2 or Python templates into the root configuration file, plain YAML cannot be imported. This is a limitation of YAML: it does not have "import" or "include" functionality.
A similar question has been discussed here: https://stackoverflow.com/a/15437697/11602913.
In a pure YAML deployment file, metadata can be provided literally, as described in the document
Google Cloud Platform for AWS Professionals: Infrastructure Deployment Tools:
resources:
- name: my-first-vm-template
  type: compute.v1.instance
  properties:
    ...
    metadata:
      items:
      - key: startup-script
        value: "STARTUP-SCRIPT-CONTENTS"
If metadata should be loaded from a file, you have to use Jinja2 templates. There is an example at codelabs.developers.google.com:
Deploy Your Infrastructure Using Deployment Manager > Creating your deployment configuration
imports:
- path: instance.jinja
- path: ../startup-script.sh
  name: startup-script.sh

resources:
- name: my-instance
  type: instance.jinja
  properties:
    metadata-from-file:
      startup-script: startup-script.sh

SAM build - does it also build layers?

I'm new to both Lambdas and SAM, so if I've screwed up anything simple, don't yell :D.
Summary: I can't get sam build to build a layer specified in template.yaml; it only builds the Lambda function.
Background: I'm trying to build a Lambda function in python3.7 that uses the skimage (scikit-image) module. To do that, I'm trying to use SAM to build and deploy it all. ...this is working
I'm trying to deploy the scikit-image module as a layer (and also build it with SAM), rather than having it included in the Lambda function directly. ...this isn't working
As a start, I'm simply extending the standard SAM Hello World app.
I've got skimage working by simply adding it to requirements.txt, then using sam build -u, then manually removing the numpy/scipy dependencies from the built package directory, as sketched below (I've got the AWS scipy/numpy layer included).
(I added import numpy, scipy.ndimage and skimage.draw to the standard hello world app, and included some test function calls to each)
requirements.txt:
requests
scikit-image
After that, everything works fine (running locally and/or on AWS).
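For reference, the manual cleanup step mentioned above might look roughly like this (hypothetical paths, assuming sam build's default output directory and the HelloWorldFunction logical ID):

sam build -u
# drop the bundled numpy/scipy so the function relies on the AWS SciPy/NumPy layer instead
rm -rf .aws-sam/build/HelloWorldFunction/numpy* \
       .aws-sam/build/HelloWorldFunction/scipy*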
However, I'd now like to move the skimage module out of my app and into a new custom layer (I'd like to have skimage in a layer to reuse across a few functions).
To set that up, I've created a dependencies directory and moved requirements.txt in there (leaving an empty requirements.txt in the app directory).
I then updated template.yaml to also specify the new layer:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  sam-app
  Sample SAM Template for sam-app

# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3

Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.7
      Layers:
        - arn:aws:lambda:us-west-2:420165488524:layer:AWSLambda-Python37-SciPy1x:2
        - !Ref SkimageLayer
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get

  SkimageLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: Skimage
      Description: Skimage module layer
      ContentUri: dependencies/
      CompatibleRuntimes:
        - python3.7
      RetentionPolicy: Retain
    DependsOn:
      - Skimage
directory structure:
▾ dependencies/
    requirements.txt    (requests and scikit-image)
▸ events/
▾ hello_world/
    __init__.py
    app.py
    requirements.txt    (now empty)
▸ tests/
README.md
template.yaml
However, when I run sam build -u with that template file, nothing gets built for the layer specified in ./dependencies (SkimageLayer in the template.yaml). The HelloWorldFunction still gets built correctly (now, of course, without any included modules).
Since SAM CLI version v0.50.0, layers are built as part of sam build.
The design document could be a good starting point to understand how it works.
Basically, you have to set a custom BuildMethod with your lambda's target runtime:
MyLayer:
  Type: AWS::Serverless::LayerVersion
  Properties:
    ContentUri: my_layer
    CompatibleRuntimes:
      - python3.8
  Metadata:
    BuildMethod: python3.8   # or nodejs8.10, etc.
Warning: for compiled languages such as Java, there is an issue where it tries to build layers before functions. It's expected to be fixed in the next release (a PR is already open).
Quick answer - No, currently SAM does not build layers you define in a SAM template.yaml file.
It will only build any functions you define.
However (curiously), it will package (upload to S3) and deploy (set up within AWS, assign an ARN so it can be used, etc.) any layers you define.
There is a feature request on the SAM github issues to implement layer building with SAM.
This can actually be hacked right now to get SAM to build your layers as well, by creating a dummy function in your SAM template file, as well as a layer entry, and having the layer ContentUri entry point to the .aws-sam build directory that gets created for the function.
See my post here on that.
That approach actually seems to work pretty well for twisting SAM right now to build your layers for you.
I'm not sure if something changed recently, but I'm able to do this without issue. My template file and structure are very similar to the OP's, except I've put all my common code into...
/dependencies/python/lib/python3.7/site-packages/
I didn't include a requirements.txt file in that directory... just the __init__.py file and various .py files that I need to import into my functions.
SAM then finds the code and builds the layer. You don't even need to zip the contents of the directory as some tutorials tell you to do.
The best part is that Layers: can be put into the Globals: section of the template file so that the layer is available to all of your functions!
Globals:
  Function:
    Handler: main.lambda_handler
    Timeout: 10
    Runtime: python3.7
    Layers:
      - !Ref HelperFunctions

Resources:
  HelperFunctions:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: MyHelperFunctions
      Description: My Lambda Layer with Helper Functions for accessing RDS, Logging, and other utilities.
      ContentUri: dependencies/
      CompatibleRuntimes:
        - python3.6
        - python3.7
      LicenseInfo: MIT
      RetentionPolicy: Delete
The AWS team must have made things easier, relative to these older answers. From the current docs, all you do is list a layer as a property in your template (Nov 2020):
ServerlessFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: .
    Handler: my_handler
    Runtime: Python3.7
    Layers:
      - arn:aws:lambda:us-west-2:111111111111:layer:myLayer:1
      - arn:aws:lambda:us-west-2:111111111111:layer:mySecondLayer:1
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-layers.html
I got it to work with the following script. Tested with Ubuntu 18 and CodeBuild.
It pip installs the layer's requirements to .aws-sam/build/<LayerName>/python/. Then you can run sam package and sam deploy as normal.
build-layers.py:
import yaml
import subprocess
import sys
import shutil

SAM_BUILD_PATH = ".aws-sam/build"

# Read the SAM template and find every layer resource.
with open("template.yaml", "r") as f:
    template = yaml.safe_load(f)

for key, resource in template["Resources"].items():
    if resource["Type"] not in ["AWS::Serverless::LayerVersion", "AWS::Lambda::LayerVersion"]:
        continue
    properties = resource["Properties"]
    content_uri = properties["ContentUri"]
    layer_name = properties["LayerName"]
    requirements_path = f"{content_uri}/requirements.txt"
    # pip install the layer's requirements into .aws-sam/build/<LayerName>/python/
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", requirements_path,
                           "-t", f"{SAM_BUILD_PATH}/{layer_name}/python"])

# Copy the template so `sam package` picks it up from the build directory.
shutil.copyfile("template.yaml", f"{SAM_BUILD_PATH}/template.yaml")
template.yaml:
Transform: AWS::Serverless-2016-10-31
Resources:
  pandas:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: pandas
      ContentUri: pandas
      CompatibleRuntimes:
        - python3.6
        - python3.7
        - python3.8
  sqlparse:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: sqlparse
      ContentUri: sqlparse
      CompatibleRuntimes:
        - python3.6
        - python3.7
        - python3.8
So call python build-layers.py first, then sam package, then sam deploy.
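Run locally (outside CodeBuild), that sequence might look like this; the bucket and stack name are the same placeholders used in the buildspec below, and writing the packaged template to a file is just an alternative to the pipe used there:

python build-layers.py
sam package --s3-bucket foo --s3-prefix sam/lambda/layers --output-template-file packaged.yaml
sam deploy --template-file packaged.yaml --stack-name LAYERS --capabilities CAPABILITY_IAM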
my directories look like this:
lambda/
  layers/
    pandas/
      requirements.txt    (content = pandas)
    sqlparse/
      requirements.txt    (content = sqlparse)
    template.yaml
    build-layers.py
buildspec.yml:
--- # build spec for AWS CodeBuild
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip install aws-sam-cli
  build:
    commands:
      - cd lambda/layers
      - python build-layers.py
      - sam package --s3-bucket foo --s3-prefix sam/lambda/layers | sam deploy --capabilities CAPABILITY_IAM -t /dev/stdin --stack-name LAYERS
