lambda container image serverless - aws-lambda

I have a working lambda deployment using serverless. I am trying to put the lambda functions inside a docker image. Originally I had a handler.js that contains 2 module.exports and in my original serverless.yml I specified:
functions:
  func1:
    handler: handler.func1
    events:
      ...
  func2:
    handler: handler.func2
    events:
      ...
The new serverless.yml is as follows:
functions:
  func1:
    image: <account>.dkr.ecr.<region>.amazonaws.com/<repository>#<digest>
    events:
      ...
  func2:
    image: <account>.dkr.ecr.<region>.amazonaws.com/<repository>#<digest>
    events:
      ...
My question is, what do I put into the CMD in the Dockerfile so I can access both func1 and func2?
Currently I have:
FROM public.ecr.aws/lambda/nodejs:14
ARG FUNCTION_DIR="/var/task"
# Create function directory
RUN mkdir -p ${FUNCTION_DIR}
# Copy handler function and package.json
COPY handler.js ${FUNCTION_DIR}
COPY package.json ${FUNCTION_DIR}
# Install NPM dependencies for function
RUN npm install
# Set the CMD to your handler
CMD [ "handler" ]

Since you're using the AWS base image for Node.js, you point CMD at your handler the same way you specified it in your original serverless.yml. Like this:
CMD [ "handler.func1" ]
There's a very similar example to your code in the AWS documentation which explains how you should set the CMD arguments.
The CMD arguments are provided to the ENTRYPOINT. From the AWS Docs:
CMD – Specifies parameters that you want to pass in with ENTRYPOINT.
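Note that CMD only sets a single default handler for the whole image, so on its own it can't expose both func1 and func2. If your Serverless Framework version supports container image options (this is an assumption; check your Framework release notes), you can keep one image and override the command per function in serverless.yml, which maps to Lambda's per-function ImageConfig:

```yaml
functions:
  func1:
    image:
      uri: <account>.dkr.ecr.<region>.amazonaws.com/<repository>#<digest>
      command:
        - handler.func1
    events:
      ...
  func2:
    image:
      uri: <account>.dkr.ecr.<region>.amazonaws.com/<repository>#<digest>
      command:
        - handler.func2
    events:
      ...
```

With this layout, the Dockerfile's CMD becomes a default that each function overrides.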

Related

How to run multiple lambda functions when deploying as a Docker image?

How does the dockerfile look like for aws lambda with docker image via aws-sam when declaring multiple functions/apps in templates.yaml?
Here is the sample Dockerfile for running a single app:
FROM public.ecr.aws/lambda/python:3.8
COPY app.py requirements.txt ./
RUN python3.8 -m pip install -r requirements.txt -t .
# Command can be overwritten by providing a different command in the template directly.
CMD ["app.lambda_handler"]
The Dockerfile itself looks the same. No changes needed there.
The presence of the CMD line in the Dockerfile looks like it needs to change, but that is misleading: the CMD value can be overridden on a per-function basis in the template.yaml file.
The template.yaml file must be updated with information about the new function. You will need to add an ImageConfig property to each function. The ImageConfig property must name the function's handler in the same way the CMD value otherwise would have.
You will also need to update each function's DockerTag value to be unique, though this may be a bug.
Here's the NodeJs "Hello World" example template.yaml's Resources section, updated to support multiple functions with a single Docker image:
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      ImageConfig:
        Command: [ "app.lambdaHandler" ]
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
    Metadata:
      DockerTag: nodejs14.x-v1-1
      DockerContext: ./hello-world
      Dockerfile: Dockerfile
  HelloWorldFunction2:
    Type: AWS::Serverless::Function
    Properties:
      PackageType: Image
      ImageConfig:
        Command: [ "app.lambdaHandler2" ]
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello2
            Method: get
    Metadata:
      DockerTag: nodejs14.x-v1-2
      DockerContext: ./hello-world
      Dockerfile: Dockerfile
This assumes the app.js file has been modified to provide both exports.lambdaHandler and exports.lambdaHandler2. I assume the corresponding python file should be modified similarly.
After updating template.yaml in this way, sam local start-api works as expected, routing /hello to lambdaHandler and /hello2 to lambdaHandler2.
This technically creates two separate Docker images (one per distinct DockerTag value). However, the two images are identical apart from the tag and are built from the same Dockerfile, so the second build reuses Docker's cache from the first.

How do you deploy cloudformation with a lambda function without inline code?

the lambda function code is over 4096 characters, so I can't deploy the lambda function as inline code in the CloudFormation template.
(https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html)
ZipFile
Your source code can contain up to 4096 characters. For JSON, you must escape quotes and special characters such as newline (\n) with a backslash.
I have to zip it first, upload it to an S3 bucket, set the S3 bucket and file details in CloudFormation, and then deploy it.
I can't find a way to deploy with one command; if I update the lambda code, I have to repeat all of the above steps.
But both AWS SAM and the Serverless Framework can deploy lambda functions without inline code.
The only issue is that AWS SAM and the Serverless Framework create an API Gateway by default, which I don't need.
Any solutions or recommendations?
If you're managing your deployment with plain CloudFormation and the aws command line interface, you can handle this relatively easily using aws cloudformation package to generate a "packaged" template for deployment.
aws cloudformation package accepts a template where certain properties can be written using local paths, zips the content from the local file system, uploads to a designated S3 bucket, and then outputs a new template with these properties rewritten to refer to the location on S3 instead of the local file system. In your case, it can rewrite Code properties for AWS::Lambda::Function that point to local directories, but see aws cloudformation package help for a full list of supported properties. You do need to setup an S3 bucket ahead of time to store your assets, but you can reuse the same bucket in multiple CloudFormation projects.
So, let's say you have an input.yaml with something like:
MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Code: my-function-directory
You might package this up with something like:
aws cloudformation package \
  --template-file input.yaml \
  --s3-bucket my-packaging-bucket \
  --s3-prefix my-project/ \
  --output-template-file output.yaml
Which would produce an output.yaml with something resembling this:
MyLambdaFunction:
  Properties:
    Code:
      S3Bucket: my-packaging-bucket
      S3Key: my-project/0123456789abcdef0123456789abcdef
  Type: AWS::Lambda::Function
You can then use output.yaml with aws cloudformation deploy (or any other aws cloudformation command accepting a template).
To truly "deploy with one command" and ensure you always do deployments consistently, you can combine these two commands into a script, Makefile, or something similar.
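A minimal wrapper script combining the two commands might look like this (the bucket, prefix, and stack name are placeholders for your own values):

```shell
#!/bin/sh
# deploy.sh - package local code to S3, then deploy the rewritten template.
set -e

aws cloudformation package \
  --template-file input.yaml \
  --s3-bucket my-packaging-bucket \
  --s3-prefix my-project/ \
  --output-template-file output.yaml

aws cloudformation deploy \
  --template-file output.yaml \
  --stack-name my-project-stack \
  --capabilities CAPABILITY_IAM
```

Running ./deploy.sh then handles zip, upload, and stack update in one step.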
You can zip the file first, then use the AWS CLI to update your lambda function:
zip function.zip lambda_function.py
aws lambda update-function-code --function-name <your-lambda-function-name> --zip-file fileb://function.zip
Within CloudFormation (last 3 lines):
BackupLambda:
  Type: "AWS::Lambda::Function"
  Properties:
    Handler: "backup_lambda.lambda_handler"
    Role: !Ref Role
    Runtime: "python2.7"
    MemorySize: 128
    Timeout: 120
    Code:
      S3Bucket: !Ref BucketWithLambdaFunction
      S3Key: !Ref PathToLambdaFile
Re. your comment:
The only issue is, aws SAM or serverless framework create API gateway as default, that I don't need it to be created
For the Serverless Framework that's not true by default. The default generated serverless.yml file includes config for the Lambda function itself, but the API Gateway configuration appears only as an example in a commented-out section.
If you uncomment the http 'events' section, then it will also create an API Gateway config for your Lambda, but not unless you do.
functions:
  hello:
    handler: handler.hello
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    # events:
    #   - http:
    #       path: users/create
    #       method: get

AWS SAM template doesn't execute BuildMethod

I have a lambda function with somewhat non-standard packaging. I am using a Makefile to package what I need, set as my build method for the sam build command. However, I don't see this Makefile being executed and can't figure out why not.
Here is what I have :
sam_template.yaml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  subscriptions_functions
  Sample SAM Template for subscriptions_functions
Globals:
  Function:
    Timeout: 3
Resources:
  GetSubscriptionsFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: app.lambda_handler_individual_methods
      Runtime: python3.7
      Events:
        GetSubscriptions:
          Type: Api
          Properties:
            Path: /subscriptions
            Method: get
      Environment:
        Variables:
          SERVICE_METHOD_NAME: 'xyz'
          REQ_CLASS_NAME: 'xyz'
          RES_CLASS_NAME: 'xyz'
    Metadata:
      BuildMethod: makefile
Makefile: (the name is based on some AWS examples)
build-GetSubscriptionsFunction:
	#echo "Building artifacts with sls. Destination dir " $(ARTIFACTS_DIR)
	sls package --env aws
	mkdir -p $(ARTIFACTS_DIR)
	unzip .serverless/subscriptions.zip -d $(ARTIFACTS_DIR)
	cp requirements.txt $(ARTIFACTS_DIR)
	python -m pip install -r requirements.txt -t $(ARTIFACTS_DIR)
	rm -rf $(ARTIFACTS_DIR)/bin
Build succeeded when I run sam build -t sam_template.yaml, but I can tell the Makefile didn't run (no messages were printed, and it would have created a .serverless directory, but it didn't).
Does anyone have an idea what is wrong in this setup?
So I figured it out, and it wasn't anything to do with the syntax.
I was running from the IntelliJ terminal. Since I was hitting a wall with this one, I started poking around and running a few other SAM commands. Running sam validate also kept failing, but with an error pointing to an unset default region.
My region was properly set in .aws/config, and I even tried exporting the env variable AWS_DEFAULT_REGION, but nothing worked. It kept failing with an unset region.
So I started looking at my plugins in IntelliJ, and it turns out I had both AWS Toolkit and Debugger for AWS Lambda (by Thundra) installed.
I uninstalled the latter and I'm back in business. I'm not clear on why this plugin was interfering with my console and CLI, but it did; getting rid of it did the trick.

cx_Oracle problem in AWS Lambda built using AWS CodeBuild

I'm trying to use cx_Oracle to connect to an RDS (Oracle) database from inside an AWS Lambda function (python3.7). The Lambda function itself is built automatically by AWS CodeBuild using a buildspec.yml file, with AWS CodePipeline configured so that whenever the repository holding my code (AWS CodeCommit) is updated, the build runs automatically.
Things that I have done:
1. I have an AWS Lambda function with code as follows.
import cx_Oracle

def lambda_handler(event, context):
    dsn = cx_Oracle.makedsn('www.host.com', '1521', 'dbname')
    connection = cx_Oracle.connect(user='user', password='password', dsn=dsn)
    cursor = connection.cursor()
    cursor.execute('select * from table_name')
    return cursor
Inside the buildspec.yml I have the following build commands.
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - pip install cx_Oracle -t ./  # to install the cx_Oracle package in the same directory as the script
      - unzip instantclient-basic-linux*.zip -d /opt/oracle  # I have downloaded the zip file beforehand
      <other code>
I have also configured the template.yml of the Lambda function as follows
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Making a test lambda function using codepipeline
Resources:
  funcAuthorityReceive:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: testFunction
      Environment:
        Variables:
          PATH: '/opt/oracle/instantclient_19_5:$PATH'
          LD_LIBRARY_PATH: '$LD_LIBRARY_PATH:/opt/oracle/instantclient_19_5'
      Handler: lambda_function.lambda_handler
      MemorySize: 128
      Role: 'arn:aws:iam::XXXXXXXXXXXXXX:role/role-for-lambda'
      Runtime: python3.7
      CodeUri: ./
Here, everything runs smoothly and the Lambda function itself gets built, but when I run the lambda this error shows up:
"DPI-1047: Cannot locate a 64-bit Oracle Client library: \"libclntsh.so: cannot open shared object file: No such file or directory\". See https://oracle.github.io/odpi/doc/installation.html#linux for help"
Any help would be greatly appreciated.
When you use cx_Oracle to reach your Oracle database, make sure that the moment you zip the lambda package (code and other dependencies), you preserve the symlinks:
zip --symlinks -r lambda.zip .
I haven't worked with CodeBuild, but I have built the Lambda package on a Linux server; soon I will be creating a build pipeline in Azure DevOps.

AWS CloudFormation update Lambda Code to use latest version in S3 bucket

I'm trying to create a CloudFormation template supporting Lambda Function and AWS CodeBuild project for building .netcore source code into a deployed zip file in S3 bucket.
Here are the particulars:
Using a GitHub mono-repo with multiple Lambda functions as different projects in the .netcore solution
Each Lambda function (aka .netcore project) has a CloudFormation YAML file generating a stack containing the Lambda function itself and CodeBuild project.
CodeBuild project is initiated from GitHub web hook which retrieves the code from GitHub sub-project and uses its buildspec.yaml to govern how build should happen.
buildspec uses .netcore for building project, then zips and copies output to a target S3 bucket
Lambda function points to S3 bucket for source code
This is all working just fine. What I'm struggling with is how to update Lambda function to use updated compiled source code in S3 bucket.
Here is subset of CloudFormation template:
Resources:
  Lambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: roicalculator-eventpublisher
      Handler: RoiCalculator.Serverless.EventPublisher::RoiCalculator.Serverless.EventPublisher.Function::FunctionHandler
      Code:
        S3Bucket: deployment-artifacts
        S3Key: RoiCalculatorEventPublisher.zip
      Runtime: dotnetcore2.1
  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: RoiCalculator-EventPublisher-Master
      Artifacts:
        Location: deployment-artifacts
        Name: RoiCalculatorEventPublisher.zip
        Type: S3
      Source:
        Type: GITHUB
        Location: https://github.com/XXXXXXX
        BuildSpec: RoiCalculator.Serverless.EventPublisher/buildspec.yml
Here is subset of buildspec.yaml:
phases:
  install:
    runtime-versions:
      dotnet: 2.2
    commands:
      - dotnet tool install -g Amazon.Lambda.Tools
  build:
    commands:
      - dotnet restore
      - cd RoiCalculator.Serverless.EventPublisher
      - dotnet lambda package --configuration release --framework netcoreapp2.1 -o .\bin\release\netcoreapp2.1\RoiCalculatorEventPublisher.zip
      - aws s3 cp .\bin\release\netcoreapp2.1\RoiCalculatorEventPublisher.zip s3://deployment-artifacts/RoiCalculatorEventPublisher.zip
You can see the same artifact name (RoiCalculatorEventPublisher.zip) and S3 bucket (deployment-artifacts) being used in the buildspec (for generating and copying) and in the CloudFormation template (for the Lambda function's source).
Since I'm overwriting the application code in the S3 bucket with the same file name the Lambda function points at, why isn't the Lambda updated with the latest code?
How do version numbers work? Is it possible to have a 'system variable' containing the name of the artifact (file name + version number) and access the same 'system variable' in the buildspec AND the CloudFormation template?
What's the secret sauce for using a CloudFormation template to both build the source code (via CodeBuild and the buildspec) and update the Lambda function which consumes the built code?
Thank you.
Unfortunately, unless you change the "S3Key" on 'AWS::Lambda::Function' resource on every update, CloudFormation will not see it as a change (it will not look inside the zipped code for changes).
Options:
Option 1) Update S3 Key with every upload
Option 2) Recommended advice is to use AWS SAM to author Lambda template, then use "cloudformation package" command to package the template, which takes cares of creating a unique key for S3 and uploading the file to the bucket. Details here: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-deploying.html
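For Option 1, one approach (a sketch, with placeholder names and a stand-in artifact; the actual upload and deploy lines are shown as comments) is to derive the S3 key from the artifact's content hash, so the key, and therefore the CloudFormation change, updates exactly when the code does:

```shell
# Compute a short content hash of the zip and embed it in the S3 key.
echo "stand-in artifact contents" > RoiCalculatorEventPublisher.zip
HASH=$(sha256sum RoiCalculatorEventPublisher.zip | cut -c1-16)
KEY="RoiCalculatorEventPublisher-${HASH}.zip"
echo "S3Key would be: ${KEY}"
# aws s3 cp RoiCalculatorEventPublisher.zip "s3://deployment-artifacts/${KEY}"
# ...then feed ${KEY} to the stack, e.g. as a template parameter:
# aws cloudformation deploy --template-file template.yaml \
#   --stack-name roicalculator --parameter-overrides LambdaS3Key="${KEY}"
```

Because the key changes only when the zip's bytes change, unchanged code produces no stack update.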
Edit 1:
In response to your comment, let me add some details of SAM approach:
To use CloudFormation as a deployment tool for your Lambda function in your pipeline, the basic idea is as follows:
1) Create a SAM template for your Lambda function
2) A basic SAM template looks like:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  FunctionName:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      CodeUri: ./code
3) Add a directory "code" and keep the lambda code files in this directory
4) Install the SAM CLI [1]
5) Run the command to package and upload:
$ sam package --template-file template.yaml --output-template packaged.yaml --s3-bucket {your_S3_bucket}
6) Deploy the package:
$ aws cloudformation deploy --template-file packaged.yaml --stack-name stk1 --capabilities CAPABILITY_IAM
You can keep the template code (Steps 1-2) in CodeCommit/GitHub and run Steps 4-5 in a CodeBuild step. For Step 6, I recommend doing it via a CloudFormation action in CodePipeline that is fed the packaged.yaml file as an input artifact.
See also [2].
References:
[1] Installing the AWS SAM CLI on Linux - https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-linux.html
[2] Building a Continuous Delivery Pipeline for a Lambda Application with AWS CodePipeline - https://docs.aws.amazon.com/en_us/lambda/latest/dg/build-pipeline.html
I am using aws s3 sync instead of aws s3 cp and have never had this problem.
I am working on a project with a serverless architecture and multiple lambdas, where we have multiple folders, each containing just a Python file and a requirements.txt.
Usually the directory and the lambda are named the same for convenience, e.g. the folder email_sender would contain the Python file email_sender.py and a requirements.txt if it needs one.
In CodeBuild, after installing the dependencies, here is how we zip each folder:
echo "--- Compiling lambda zip: ${d}.zip"
d=$(tr "_" "-" <<< "${d}")
zip -q -r ${d}.zip . --exclude ".gitignore" --exclude "requirements.txt" --exclude "*__pycache__/*" > /dev/null 2>&1
mv ${d}.zip ../../${CODEBUILD_SOURCE_VERSION}/${d}.zip
And when copying to the S3 bucket, we use aws s3 sync as follows:
aws s3 sync ${CODEBUILD_SOURCE_VERSION}/ ${S3_URI} --exclude "*" --include "*.zip" --sse aws:kms --sse-kms-key-id ${KMS_KEY_ALIAS} --content-type "binary/octet-stream" --exact-timestamps