How to run multiple lambda functions when deploying as a Docker image? - aws-lambda

What does the Dockerfile look like for AWS Lambda with a Docker image via aws-sam when declaring multiple functions/apps in template.yaml?
Here is a sample Dockerfile that runs a single app:
FROM public.ecr.aws/lambda/python:3.8
COPY app.py requirements.txt ./
RUN python3.8 -m pip install -r requirements.txt -t .
# Command can be overwritten by providing a different command in the template directly.
CMD ["app.lambda_handler"]

The Dockerfile itself looks the same. No changes are needed there.
The presence of the CMD line in the Dockerfile suggests it needs to change, but that is misleading. The CMD value can be specified on a per-function basis in the template.yaml file.
The template.yaml file must be updated with information about the new function. You will need to add an ImageConfig property to each function; its Command must name the function's handler, just as the CMD value otherwise would have.
You will also need to give each function a unique DockerTag value, though the need for this may be a bug.
Here's the NodeJs "Hello World" example template.yaml's Resources section, updated to support multiple functions with a single Docker image:
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      ImageConfig:
        Command: [ "app.lambdaHandler" ]
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
    Metadata:
      DockerTag: nodejs14.x-v1-1
      DockerContext: ./hello-world
      Dockerfile: Dockerfile
  HelloWorldFunction2:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      ImageConfig:
        Command: [ "app.lambdaHandler2" ]
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello2
            Method: get
    Metadata:
      DockerTag: nodejs14.x-v1-2
      DockerContext: ./hello-world
      Dockerfile: Dockerfile
This assumes the app.js file has been modified to provide both exports.lambdaHandler and exports.lambdaHandler2. I assume the corresponding Python file would be modified similarly.
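For the Python case, a rough sketch of such a two-handler app.py might look like this (handler names and bodies are illustrative; each must match the Command value in the corresponding function's ImageConfig):

import json

def lambda_handler(event, context):
    # referenced from one function's ImageConfig as Command: [ "app.lambda_handler" ]
    return {"statusCode": 200, "body": json.dumps({"message": "hello"})}

def lambda_handler2(event, context):
    # referenced from a second function's ImageConfig as Command: [ "app.lambda_handler2" ]
    return {"statusCode": 200, "body": json.dumps({"message": "hello2"})}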
After updating template.yaml in this way, sam local start-api works as expected, routing /hello to lambdaHandler and /hello2 to lambdaHandler2.
This technically creates two separate Docker images (one for each distinct DockerTag value). However, the images are identical apart from the tag; because they are built from the same Dockerfile, the second build is served almost entirely from Docker's cache of the first.

Related

lambda container image serverless

I have a working lambda deployment using serverless, and I am trying to put the lambda functions inside a Docker image. Originally I had a handler.js containing two module.exports entries, and in my original serverless.yml I specified:
functions:
  func1:
    handler: handler.func1
    events:
      ...
  func2:
    handler: handler.func2
    events:
      ...
The new serverless.yml is as follows:
functions:
  func1:
    image: <account>.dkr.ecr.<region>.amazonaws.com/<repository>@<digest>
    events:
      ...
  func2:
    image: <account>.dkr.ecr.<region>.amazonaws.com/<repository>@<digest>
    events:
      ...
My question is, what do I put into the CMD in the Dockerfile so I can access both func1 and func2?
Currently I have:
FROM public.ecr.aws/lambda/nodejs:14
ARG FUNCTION_DIR="/var/task"
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy handler function and package.json
COPY handler.js ${FUNCTION_DIR}
COPY package.json ${FUNCTION_DIR}
# Install NPM dependencies for function
RUN npm install
# Set the CMD to your handler
CMD [ "handler" ]
Since you're using the AWS base image for Node.js, you have to set the CMD to your handler exactly as you referenced it in serverless.yml. Like this:
CMD [ "handler.func1" ]
There's an example very similar to your code in the AWS documentation, which explains how you should set the CMD arguments.
The CMD arguments are provided to the ENTRYPOINT. From the AWS Docs:
CMD – Specifies parameters that you want to pass in with ENTRYPOINT.

How can I override ddev's php-fpm.conf or pool.d/www.conf?

There is no obvious way to override some of the php-fpm configuration in DDEV-Local's web container. Although it's easy to provide custom PHP configuration, it's not as obvious how one would configure the php-fpm process itself.
In my case I want to change the security.limit_extensions value in pool.d/www.conf.
There are two ways to do this. I'll create two separate answers to explain how.
The first technique is to create a custom Dockerfile (docs) which edits the www.conf (or any other file). You can also use the Dockerfile ADD command to add a complete replacement file.
In the case of this specific problem, we'll create a .ddev/web-build/Dockerfile with these contents:
# You can copy this Dockerfile.example to Dockerfile to add configuration
# or packages or anything else to your webimage
ARG BASE_IMAGE
FROM $BASE_IMAGE
ENV PHP_VERSION=7.4
RUN echo "security.limit_extensions = .php .html" >> /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf
After you ddev start you'll have the new configuration.
Instead of the RUN echo approach, which is shown here for simplicity and just appends to the file, you could RUN a sed/awk/perl statement to change the file in place.
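For example, something like this in the same Dockerfile would rewrite the directive in place, whether or not it is currently commented out (an illustrative sed invocation, not the one from the original answer):

RUN sed -i 's/^;*security.limit_extensions.*/security.limit_extensions = .php .html/' /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf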
Alternatively, you could put the version of www.conf that you want into the .ddev/web-build directory and
COPY www.conf /etc/php/${PHP_VERSION}/fpm/pool.d/www.conf
The second way to approach this is to use a custom docker-compose.*.yaml file (docs).
Here you'll copy the desired www.conf (or any other file) into your project's .ddev directory and then mount it into the web container on top of the one provided by the image. For this specific example, you can copy the www.conf into the .ddev folder with cd .ddev && docker cp ddev-<projectname>-web:/etc/php/7.4/fpm/pool.d/www.conf . and then edit it as you need to (here, setting security.limit_extensions = .php .html).
Then a custom .ddev/docker-compose.*.yaml file like this can mount it into the proper directory (mine is called docker-compose.wwwconf.yaml):
version: "3.6"
services:
web:
volumes:
- "./www.conf:/etc/php/7.4/fpm/pool.d/www.conf"
If you are using docker-compose directly, mount a zz-docker.conf containing your customized configuration, as in this sample:
php:
  build: ./php
  image: ctc/php:latest
  container_name: ctc-php
  expose:
    - 9000
  volumes:
    - ./html:/var/www/html
    - ./php/log:/var/log/php-fpm
    - ./php/php-fpm.d/zz-docker.conf:/usr/local/etc/php-fpm.d/zz-docker.conf
  networks:
    - koogua
  restart: always
zz-docker.conf looks like this:
[global]
daemonize = no
[www]
listen = 9000
pm.max_children = 50
pm.start_servers = 20
pm.min_spare_servers = 10
pm.max_spare_servers = 30
pm.max_requests = 500
Note: mounting www.conf directly will cause an error.

AWS SAM template doesn't execute BuildMethod

I have a lambda function with somewhat non-standard packaging. I am using a Makefile to help me package what I need, and I use it as my build method with the sam build command. However, I don't see this Makefile being executed, and I can't figure out why not.
Here is what I have:
sam_template.yaml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  subscriptions_functions
  Sample SAM Template for subscriptions_functions
Globals:
  Function:
    Timeout: 3
Resources:
  GetSubscriptionsFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: app.lambda_handler_individual_methods
      Runtime: python3.7
      Events:
        GetSubscriptions:
          Type: Api
          Properties:
            Path: /subscriptions
            Method: get
      Environment:
        Variables:
          SERVICE_METHOD_NAME: 'xyz'
          REQ_CLASS_NAME: 'xyz'
          RES_CLASS_NAME: 'xyz'
    Metadata:
      BuildMethod: makefile
Makefile: (the name is based on some AWS examples)
build-GetSubscriptionsFunction:
	@echo "Building artifacts with sls. Destination dir " $(ARTIFACTS_DIR)
	sls package --env aws
	mkdir -p $(ARTIFACTS_DIR)
	unzip .serverless/subscriptions.zip -d $(ARTIFACTS_DIR)
	cp requirements.txt $(ARTIFACTS_DIR)
	python -m pip install -r requirements.txt -t $(ARTIFACTS_DIR)
	rm -rf $(ARTIFACTS_DIR)/bin
The build succeeds when I run sam build -t sam_template.yaml, but I can tell the Makefile didn't run: no messages were printed, and the .serverless directory it would have created is missing.
Does anyone have an idea what is wrong with this setup?
So I figured it out, and it wasn't anything to do with the syntax.
I was running from the IntelliJ terminal. Since I was hitting a wall with this one, I started poking around and running a few other SAM commands. Running sam validate also kept failing, but with an error pointing to an unset default region.
My region was properly set in .aws/config, and I even tried exporting the AWS_DEFAULT_REGION environment variable, but nothing worked. It kept failing with an unset region.
So I started looking at my plugins in IntelliJ, and it turns out I had both AWS Toolkit and Debugger for AWS Lambda (by Thundera) installed.
I uninstalled the latter and I'm back in business. I'm not clear on why this plugin was interfering with my console and CLI, but it did. Getting rid of it did the trick.

SAM build - does it also build layers?

I'm new to both lambdas and SAM - so if I've screwed up anything simple, don't yell :D.
Summary: I can't get sam build to build a layer specified in template.yaml; it only builds the lambda function.
Background: I'm trying to build a lambda function in python3.7 that uses the skimage (scikit-image) module. To do that, I'm trying to use SAM to build and deploy it all. ...this is working
I'm trying to deploy the scikit-image module as a layer (and also build it with SAM), rather than having it included in the lambda function directly ...this isn't working
As a start, I'm simply extending the standard SAM Hello World app.
I've got skimage working by simply adding it to requirements.txt, then using sam build -u, then manually removing the numpy/scipy dependencies from the built package directory (I've got the AWS scipy/numpy layer included).
(I added import numpy, scipy.ndimage and skimage.draw to the standard hello world app, and included some test function calls to each)
requirements.txt:
requests
scikit-image
After that, everything works fine (running locally and/or on AWS).
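For illustration, the modified hello world handler described above might look roughly like this (a sketch; the specific test calls are illustrative, not from the original post):

import json

import numpy
import scipy.ndimage
import skimage.draw

def lambda_handler(event, context):
    # a few throwaway calls to confirm each module imports and runs inside Lambda
    img = numpy.zeros((16, 16))
    rr, cc = skimage.draw.disk((8, 8), 5)  # filled circle
    img[rr, cc] = 1.0
    smoothed = scipy.ndimage.gaussian_filter(img, sigma=1)
    return {"statusCode": 200, "body": json.dumps({"sum": float(smoothed.sum())})}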
However, I'd now like to move the skimage module out of my app and into a new custom layer (I'd like to have skimage in a layer to reuse across a few functions).
To set that up, I've created a dependencies directory and moved requirements.txt in there (leaving an empty requirements.txt in the app directory).
I then updated template.yaml to also specify the new layer:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  sam-app
  Sample SAM Template for sam-app
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.7
      Layers:
        - arn:aws:lambda:us-west-2:420165488524:layer:AWSLambda-Python37-SciPy1x:2
        - !Ref SkimageLayer
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
  SkimageLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: Skimage
      Description: Skimage module layer
      ContentUri: dependencies/
      CompatibleRuntimes:
        - python3.7
      RetentionPolicy: Retain
    DependsOn:
      - Skimage
directory structure:
▾ dependencies/
    requirements.txt (requests and scikit-image)
▸ events/
▾ hello_world/
    __init__.py
    app.py
    requirements.txt (now empty)
▸ tests/
README.md
template.yaml
However, when I run sam build -u with that template file, nothing gets built for the layer specified in ./dependencies (SkimageLayer in the template.yaml file). The HelloWorldFunction still gets built correctly (now of course without any included modules).
Since SAM CLI version v0.50.0, sam build also builds layers.
The design document could be a good starting point to understand how it works.
Basically, you have to set a custom BuildMethod with your lambda's target runtime:
MyLayer:
  Type: AWS::Serverless::LayerVersion
  Properties:
    ContentUri: my_layer
    CompatibleRuntimes:
      - python3.8
  Metadata:
    BuildMethod: python3.8  # or nodejs8.10, etc.
Warning: for compiled languages such as Java, there is an issue where it tries to build layers before functions. A fix is expected in the next release (a PR is already open).
Quick answer - no, currently SAM does not build layers you define in a SAM template.yaml file.
It will only build the functions you define.
However (curiously), it will package (upload to S3) and deploy (set up within AWS, assign an ARN so it can be used, etc.) any layers you define.
There is a feature request on the SAM GitHub issues to implement layer building with SAM.
This can actually be hacked right now to get SAM to build your layers as well: create a dummy function in your SAM template file, as well as a layer entry, and have the layer's ContentUri point to the .aws-sam build directory that gets created for the function.
See my post here on that.
That approach actually seems to work pretty well right now for coaxing SAM into building your layers for you.
I'm not sure if something changed recently, but I'm able to do this without issue. My template file and structure are very similar to the OP's, except I've put all my common code into...
/dependencies/python/lib/python3.7/site-packages/
I didn't include a requirements.txt file in that directory... just the __init__.py file and various .py files that I need to import into my functions.
SAM then finds the code and builds the layer. You don't even need to zip the contents of the directory as some tutorials tell you to do.
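For instance (the file and function names here are hypothetical), a module dropped into dependencies/python/lib/python3.7/site-packages/ ends up on sys.path inside Lambda, so any function that includes the layer can import it directly:

# dependencies/python/lib/python3.7/site-packages/helpers.py
def log_event(event):
    # a tiny shared utility available to every function via the layer
    print(event)

# in any function's handler module:
# import helpers
# helpers.log_event(event)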
The best part is that Layers: can be put into the Globals: section of the template file so that the layer is available to all of your functions!
Globals:
  Function:
    Handler: main.lambda_handler
    Timeout: 10
    Runtime: python3.7
    Layers:
      - !Ref HelperFunctions
Resources:
  HelperFunctions:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: MyHelperFunctions
      Description: My Lambda Layer with Helper Functions for accessing RDS, Logging, and other utilities.
      ContentUri: dependencies/
      CompatibleRuntimes:
        - python3.6
        - python3.7
      LicenseInfo: MIT
      RetentionPolicy: Delete
The AWS team must have made things easier relative to these older answers. From the current docs (Nov 2020), all you do is list a layer as a property in your template:
ServerlessFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: .
    Handler: my_handler
    Runtime: Python3.7
    Layers:
      - arn:aws:lambda:us-west-2:111111111111:layer:myLayer:1
      - arn:aws:lambda:us-west-2:111111111111:layer:mySecondLayer:1
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-layers.html
I got it to work with the following script, tested with Ubuntu 18 and CodeBuild.
It pip installs the layer's requirements to .aws-sam/build/layername/python/. Then you can run sam package and sam deploy as normal.
build-layers.py:
import yaml
import subprocess
import sys
import shutil

SAM_BUILD_PATH = ".aws-sam/build"

with open("template.yaml", "r") as f:
    template = yaml.safe_load(f)

# pip install each layer's requirements into .aws-sam/build/<LayerName>/python
for key, resource in template["Resources"].items():
    if resource["Type"] not in ["AWS::Serverless::LayerVersion", "AWS::Lambda::LayerVersion"]:
        continue
    properties = resource["Properties"]
    content_uri = properties["ContentUri"]
    layer_name = properties["LayerName"]
    requirements_path = f"{content_uri}/requirements.txt"
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", requirements_path, "-t", f"{SAM_BUILD_PATH}/{layer_name}/python"])

shutil.copyfile("template.yaml", f"{SAM_BUILD_PATH}/template.yaml")
template.yaml:
Transform: AWS::Serverless-2016-10-31
Resources:
  pandas:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: pandas
      ContentUri: pandas
      CompatibleRuntimes:
        - python3.6
        - python3.7
        - python3.8
  sqlparse:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: sqlparse
      ContentUri: sqlparse
      CompatibleRuntimes:
        - python3.6
        - python3.7
        - python3.8
So call python build-layers.py first, then sam package, then sam deploy.
My directories look like this:
lambda
  layers
    pandas
      requirements.txt (content = pandas)
    sqlparse
      requirements.txt (content = sqlparse)
    template.yaml
    build-layers.py
buildspec.yml:
--- # build spec for AWS CodeBuild
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip install aws-sam-cli
  build:
    commands:
      - cd lambda/layers
      - python build-layers.py
      - sam package --s3-bucket foo --s3-prefix sam/lambda/layers | sam deploy --capabilities CAPABILITY_IAM -t /dev/stdin --stack-name LAYERS

Serverless - Lambda Layers "Cannot find module 'request'"

When I deploy my serverless api using:
serverless deploy
The lambda layer gets created, but when I go to run the function it gives me this error:
"Cannot find module 'request'"
But if I upload the .zip file manually through the console (exactly the same file that's uploaded when I deploy), it works fine.
Anyone have any idea why this is happening?
environment:
  SLS_DEBUG: "*"
provider:
  name: aws
  runtime: nodejs8.10
  stage: ${opt:api-type, 'uat'}-${opt:api, 'payment'}
  region: ca-central-1
  timeout: 30
  memorySize: 128
  role: ${file(config/prod.env.json):ROLE}
  vpc:
    securityGroupIds:
      - ${file(config/prod.env.json):SECURITY_GROUP}
    subnetIds:
      - ${file(config/prod.env.json):SUBNET}
  apiGateway:
    apiKeySourceType: HEADER
  apiKeys:
    - ${file(config/${opt:api-type, 'uat'}.env.json):${opt:api, "payment"}-APIKEY}
functions:
  - '${file(src/handlers/${opt:api, "payment"}.serverless.yml)}'
package:
  # individually: true
  exclude:
    - node_modules/**
    - nodejs/**
plugins:
  - serverless-offline
  - serverless-plugin-warmup
  - serverless-content-encoding
custom:
  contentEncoding:
    minimumCompressionSize: 0 # Minimum body size required for compression in bytes
layers:
  nodejs:
    package:
      artifact: nodejs.zip
    compatibleRuntimes:
      - nodejs8.10
    allowedAccounts:
      - "*"
That's what my serverless.yml looks like.
I was having a similar error to yours while using the explicit layers key that you are using to define a lambda layer.
My error (for the sake of web searches) was this:
Runtime.ImportModuleError: Error: Cannot find module <package name>
I feel this is a temporary solution, because I wanted to explicitly define my layers like you were doing, but it wasn't working, so it seemed like a bug.
I created a bug report in Serverless for this issue. If anyone else is having this same issue they can track it there.
SOLUTION
I followed this post in the Serverless forums, based on these docs from AWS.
I zipped up my node_modules under the folder nodejs so that it looks like this when unzipped: nodejs/node_modules/<various packages>.
Then instead of using the explicit definition of layers I used the package and artifact keys like so:
layers:
  test:
    package:
      artifact: test.zip
In the function, the layer is referenced like this:
functions:
  function1:
    handler: index.handler
    layers:
      - { Ref: TestLambdaLayer }
The TestLambdaLayer name follows the convention <your name of layer>LambdaLayer, as documented here.
Make sure you run npm install inside your layers before deploying, i.e.:
cd ~/repos/repo-name/layers/utilityLayer/nodejs && npm install
Otherwise your layers will get deployed without a node_modules folder. You can download the .zip of your layer from the Lambda UI to confirm the contents of that layer.
If anyone faces a similar Runtime.ImportModuleError, it's fair to say that another cause of this issue could be a package exclude statement in the serverless.yml file.
Be aware that if you have this statement:
package:
  exclude:
    - './**'
    - '!node_modules/**'
    - '!dist/**'
    - '.git/**'
It will cause exactly the same error at runtime once you've deployed your lambda function (with the Serverless Framework). Just make sure to remove the entries that could create a conflict with your dependencies.
I am using TypeScript with the serverless-plugin-typescript and I was having the same error, too.
When I switched from
const myModule = require('./src/myModule');
to
import myModule from './src/myModule';
the error disappeared. It seems like the files were not included in the zip file by serverless when I was using require.
PS: Removing the serverless-plugin-typescript and switching back to JavaScript also solved the problem.
