cx_Oracle problem in AWS Lambda built using AWS CodeBuild

I'm trying to use cx_Oracle to connect to an RDS (Oracle) database from inside an AWS Lambda function (Python 3.7). The Lambda function itself is built automatically by AWS CodeBuild using a buildspec.yml file, and AWS CodePipeline is configured so that whenever the repository holding my code (in this case AWS CodeCommit) is updated, the build runs automatically.
Things that I have done:
1. I have an AWS Lambda function with code as follows.
import cx_Oracle

def lambda_handler(event, context):
    dsn = cx_Oracle.makedsn('www.host.com', '1521', 'dbname')
    connection = cx_Oracle.connect(user='user', password='password', dsn=dsn)
    cursor = connection.cursor()
    cursor.execute('select * from table_name')
    return cursor
Inside the buildspec.yml I have the following build commands.
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.7
    commands:
      - pip install cx_Oracle -t ./ # to install cx_Oracle package in the same directory as the script
      - unzip instantclient-basic-linux*.zip -d /opt/oracle # I have downloaded the zip file beforehand
      - <other code>
I have also configured the template.yml of the Lambda function as follows:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Description: Making a test lambda function using codepipeline
Resources:
  funcAuthorityReceive:
    Type: 'AWS::Serverless::Function'
    Properties:
      FunctionName: testFunction
      Environment:
        Variables:
          PATH: '/opt/oracle/instantclient_19_5:$PATH'
          LD_LIBRARY_PATH: '$LD_LIBRARY_PATH:/opt/oracle/instantclient_19_5'
      Handler: lambda_function.lambda_handler
      MemorySize: 128
      Role: 'arn:aws:iam::XXXXXXXXXXXXXX:role/role-for-lambda'
      Runtime: python3.7
      CodeUri: ./
Here everything runs smoothly and the Lambda function itself gets built, but when I invoke the Lambda this error shows up:
"DPI-1047: Cannot locate a 64-bit Oracle Client library: \"libclntsh.so: cannot open shared object file: No such file or directory\". See https://oracle.github.io/odpi/doc/installation.html#linux for help"
Any help would be greatly appreciated.

When you want to use cx_Oracle to reach your Oracle database, then at the moment you zip the Lambda package (code and other dependencies), make sure you preserve the symlinks:
zip --symlinks -r lambda.zip .
I haven't worked with CodeBuild, but I have built the Lambda package on a Linux server; soon I will be creating a build pipeline in Azure DevOps.
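For what it's worth, a minimal sketch of what that packaging step could look like (the staging directory package/ and the instantclient_19_5 directory name are assumptions, not taken from the question):
# Stage the function code and the Instant Client libraries together, then zip
# while preserving symlinks (libclntsh.so is normally a symlink chain).
mkdir -p package/lib
cp lambda_function.py package/
cp -a instantclient_19_5/*.so* package/lib/   # -a keeps the symlinks intact
cd package
zip --symlinks -r ../lambda.zip .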

Related

How do you deploy cloudformation with a lambda function without inline code?

The Lambda function code is over 4096 characters, so I can't deploy the function as inline code in a CloudFormation template.
(https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-lambda-function-code.html)
ZipFile
Your source code can contain up to 4096 characters. For JSON, you must escape quotes and special characters such as newline (\n) with a backslash.
I have to zip it first, upload it to an S3 bucket, set the S3 bucket and file details in CloudFormation, and deploy it.
I can't find a way to deploy with one command. If I update the Lambda code, I have to repeat the above steps.
But both AWS SAM and the Serverless Framework can deploy Lambda functions without inline code.
The only issue is that AWS SAM and the Serverless Framework create an API Gateway by default, which I don't need.
Any solution or recommendations for me?
If you're managing your deployment with plain CloudFormation and the aws command line interface, you can handle this relatively easily using aws cloudformation package to generate a "packaged" template for deployment.
aws cloudformation package accepts a template where certain properties can be written using local paths, zips the content from the local file system, uploads to a designated S3 bucket, and then outputs a new template with these properties rewritten to refer to the location on S3 instead of the local file system. In your case, it can rewrite Code properties for AWS::Lambda::Function that point to local directories, but see aws cloudformation package help for a full list of supported properties. You do need to set up an S3 bucket ahead of time to store your assets, but you can reuse the same bucket in multiple CloudFormation projects.
So, let's say you have an input.yaml with something like:
MyLambdaFunction:
  Type: AWS::Lambda::Function
  Properties:
    Code: my-function-directory
You might package this up with something like:
aws cloudformation package \
    --template-file input.yaml \
    --s3-bucket my-packaging-bucket \
    --s3-prefix my-project/ \
    --output-template-file output.yaml
Which would produce an output.yaml with something resembling this:
MyLambdaFunction:
  Properties:
    Code:
      S3Bucket: my-packaging-bucket
      S3Key: my-project/0123456789abcdef0123456789abcdef
  Type: AWS::Lambda::Function
You can then use output.yaml with aws cloudformation deploy (or any other aws cloudformation command accepting a template).
To truly "deploy with one command" and ensure you always do deployments consistently, you can combine these two commands into a script, Makefile, or something similar.
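As a rough sketch of that "one command" wrapper (bucket and stack names below are placeholders, not taken from the answer above):
#!/usr/bin/env bash
set -euo pipefail

# Package local Code paths into S3 and rewrite the template...
aws cloudformation package \
    --template-file input.yaml \
    --s3-bucket my-packaging-bucket \
    --s3-prefix my-project/ \
    --output-template-file output.yaml

# ...then deploy the rewritten template in the same run.
aws cloudformation deploy \
    --template-file output.yaml \
    --stack-name my-stack \
    --capabilities CAPABILITY_IAM
Saved as deploy.sh, this gives a single repeatable command for code updates as well as template changes.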
You can zip the file first, then use the AWS CLI to update your Lambda function:
zip function.zip lambda_function.py
aws lambda update-function-code --function-name <your-lambda-function-name> --zip-file fileb://function.zip
Within CloudFormation (last 3 lines):
BackupLambda:
  Type: "AWS::Lambda::Function"
  Properties:
    Handler: "backup_lambda.lambda_handler"
    Role: !Ref Role
    Runtime: "python2.7"
    MemorySize: 128
    Timeout: 120
    Code:
      S3Bucket: !Ref BucketWithLambdaFunction
      S3Key: !Ref PathToLambdaFile
Re. your comment:
The only issue is, aws SAM or serverless framework create API gateway as default, that I don't need it to be created
For the Serverless Framework, that's not true by default. The default generated serverless.yml file includes config for the Lambda function itself, but the configuration for API Gateway is provided only as an example in the following commented-out section.
If you uncomment the 'events' section for http, then it will also create an API Gateway config for your Lambda, but not unless you do.
functions:
  hello:
    handler: handler.hello
    # The following are a few example events you can configure
    # NOTE: Please make sure to change your handler code to work with those events
    # Check the event documentation for details
    # events:
    #   - http:
    #       path: users/create
    #       method: get

AWS SAM template doesn't execute BuildMethod

I have a Lambda function with somewhat non-standard packaging. I am using a Makefile to package what I need and have set it as the build method for the sam build command. However, I don't see this Makefile being executed, and I can't figure out why not.
Here is what I have:
sam_template.yaml:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  subscriptions_functions
  Sample SAM Template for subscriptions_functions
Globals:
  Function:
    Timeout: 3
Resources:
  GetSubscriptionsFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: .
      Handler: app.lambda_handler_individual_methods
      Runtime: python3.7
      Events:
        GetSubscriptions:
          Type: Api
          Properties:
            Path: /subscriptions
            Method: get
      Environment:
        Variables:
          SERVICE_METHOD_NAME: 'xyz'
          REQ_CLASS_NAME: 'xyz'
          RES_CLASS_NAME: 'xyz'
    Metadata:
      BuildMethod: makefile
Makefile: (the name is based on some AWS examples)
build-GetSubscriptionsFunction:
	#echo "Building artifacts with sls. Destination dir " $(ARTIFACTS_DIR)
	sls package --env aws
	mkdir -p $(ARTIFACTS_DIR)
	unzip .serverless/subscriptions.zip -d $(ARTIFACTS_DIR)
	cp requirements.txt $(ARTIFACTS_DIR)
	python -m pip install -r requirements.txt -t $(ARTIFACTS_DIR)
	rm -rf $(ARTIFACTS_DIR)/bin
The build succeeds when I run sam build -t sam_template.yaml, but I can tell the Makefile didn't run (no messages are printed out, and it would have created a .serverless directory, but it didn't).
Does anyone have an idea what is wrong in this setup?
So I figured it out, and it wasn't anything to do with the syntax.
I was running from the IntelliJ terminal. Since I was hitting a wall with this one, I started poking around and running a few other SAM commands. Running sam validate also kept failing, but with an error pointing to an unset default region.
My region was properly set in .aws/config, and I even tried to export an env variable AWS_DEFAULT_REGION, but nothing worked. It kept failing with an unset region.
So I started looking at my plugins in IntelliJ, and it turns out I had both AWS Toolkit and Debugger for AWS Lambda (by Thundera) installed.
I uninstalled the latter and I'm back in business. I'm not clear on why this plugin was interfering with my console and CLI, but it did. Getting rid of it did the trick.

SAM build - does it also build layers?

I'm new to both Lambdas and SAM, so if I've screwed up anything simple, don't yell :D.
Summary: I can't get sam build to build a layer specified in template.yaml; it only builds the Lambda function.
Background: I'm trying to build a lambda function in python3.7 that uses the skimage (scikit-image) module. To do that, I'm trying to use SAM to build and deploy it all. ...this is working
I'm trying to deploy the scikit-image module as a layer (and also build it with SAM), rather than having it included in the Lambda function directly ...this isn't working
As a start, I'm simply extending the standard SAM Hello World app.
I've got skimage working by simply adding it to requirements.txt, then using sam build -u, then manually removing the numpy/scipy dependencies from the built package directory (I've got the AWS scipy/numpy layer included).
(I added import numpy, scipy.ndimage and skimage.draw to the standard hello world app, and included some test function calls to each)
requirements.txt:
requests
scikit-image
After that, everything works fine (running locally and/or on AWS).
However, I'd now like to move the skimage module out of my app and into a new custom layer (I'd like to have skimage in a layer to re-use for a few functions)
To set that up, I've created a dependencies directory and moved requirements.txt in there (leaving an empty requirements.txt in the app directory).
I then updated template.yaml to also specify the new layer:
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: >
  sam-app
  Sample SAM Template for sam-app
# More info about Globals: https://github.com/awslabs/serverless-application-model/blob/master/docs/globals.rst
Globals:
  Function:
    Timeout: 3
Resources:
  HelloWorldFunction:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: hello_world/
      Handler: app.lambda_handler
      Runtime: python3.7
      Layers:
        - arn:aws:lambda:us-west-2:420165488524:layer:AWSLambda-Python37-SciPy1x:2
        - !Ref SkimageLayer
      Events:
        HelloWorld:
          Type: Api
          Properties:
            Path: /hello
            Method: get
  SkimageLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: Skimage
      Description: Skimage module layer
      ContentUri: dependencies/
      CompatibleRuntimes:
        - python3.7
      RetentionPolicy: Retain
    DependsOn:
      - Skimage
directory structure:
▾ dependencies/
    requirements.txt   (responses and scikit-image)
▸ events/
▾ hello_world/
    __init__.py
    app.py
    requirements.txt   (now empty)
▸ tests/
README.md
template.yaml
However, when I run sam build -u with that template file, nothing gets built for the SkimageLayer specified in ./dependencies, while the HelloWorldFunction still gets built correctly (now, of course, without any included modules).
Since SAM CLI version v0.50.0, sam build also builds layers.
The design document could be a good starting point to understand how it works.
Basically, you have to set a custom BuildMethod matching your Lambda's target runtime:
MyLayer:
  Type: AWS::Serverless::LayerVersion
  Properties:
    ContentUri: my_layer
    CompatibleRuntimes:
      - python3.8
  Metadata:
    BuildMethod: python3.8 (or nodejs8.10 etc..)
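A quick, hedged way to check that the layer actually got built (assuming the layer's logical ID is MyLayer and my_layer/ contains a requirements.txt; the output path is the SAM CLI's default build directory):
sam build
# the built layer content should now sit under the layer's logical ID:
ls .aws-sam/build/MyLayer/python/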
Warning: for compiled languages such as Java, there is an issue where it tries to build layers before functions. It's expected to be fixed in the next release (a PR is already open).
Quick answer - No, currently SAM does not build layers you define in a SAM template.yaml file.
It will only build any functions you define.
However (curiously) it will package (upload to S3) and deploy (set up within AWS, assign an ARN so it can be used, etc.) any layers you define.
There is a feature request on the SAM github issues to implement layer building with SAM.
This can actually be hacked right now to get SAM to build your layers as well, by creating a dummy function in your SAM template file, as well as a layer entry, and having the layer ContentUri entry point to the .aws-sam build directory that gets created for the function.
See my post here on that.
That approach actually seems to work pretty well for twisting SAM right now to build your layers for you.
I'm not sure if something changed recently, but I'm able to do this without issue. My template file and structure are very similar to the OP's, except I've put all my common code into...
/dependencies/python/lib/python3.7/site-packages/
I didn't include a requirements.txt file in that directory... just the __init__.py file and various .py files that I need to import into my functions.
SAM then finds the code and builds the layer. You don't even need to zip the contents of the directory as some tutorials tell you to do.
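To make that concrete, a hypothetical way to lay the content out (the module names here are invented):
mkdir -p dependencies/python/lib/python3.7/site-packages
cp my_helpers.py db_utils.py dependencies/python/lib/python3.7/site-packages/
# sam build then picks up everything under dependencies/ as the layer content, and the
# Python runtime already has /opt/python/lib/python3.7/site-packages on its import path.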
The best part is that Layers: can be put into the Globals: section of the template file so that the layer is available to all of your functions!
Globals:
  Function:
    Handler: main.lambda_handler
    Timeout: 10
    Runtime: python3.7
    Layers:
      - !Ref HelperFunctions
Resources:
  HelperFunctions:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: MyHelperFunctions
      Description: My Lambda Layer with Helper Functions for accessing RDS, Logging, and other utilities.
      ContentUri: dependencies/
      CompatibleRuntimes:
        - python3.6
        - python3.7
      LicenseInfo: MIT
      RetentionPolicy: Delete
The AWS team must have made things easier, relative to these older answers. From the current docs, all you do is list a layer as a property in your template (Nov 2020):
ServerlessFunction:
  Type: AWS::Serverless::Function
  Properties:
    CodeUri: .
    Handler: my_handler
    Runtime: python3.7
    Layers:
      - arn:aws:lambda:us-west-2:111111111111:layer:myLayer:1
      - arn:aws:lambda:us-west-2:111111111111:layer:mySecondLayer:1
https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-layers.html
I got it to work with the following script, tested with Ubuntu 18 and CodeBuild.
It pip installs the layer's requirements to .aws-sam/build/<layer name>/python/. Then you can run sam package and sam deploy as normal.
build-layers.py:
import yaml
import subprocess
import sys
import shutil

SAM_BUILD_PATH = ".aws-sam/build"

with open("template.yaml", "r") as f:
    template = yaml.safe_load(f)

# pip install each layer's requirements.txt into .aws-sam/build/<layer name>/python
for key, resource in template["Resources"].items():
    if resource["Type"] not in ["AWS::Serverless::LayerVersion", "AWS::Lambda::LayerVersion"]:
        continue
    properties = resource["Properties"]
    content_uri = properties["ContentUri"]
    layer_name = properties["LayerName"]
    requirements_path = f"{content_uri}/requirements.txt"
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-r", requirements_path,
                           "-t", f"{SAM_BUILD_PATH}/{layer_name}/python"])

# copy the template alongside the built layers so sam package picks it up
shutil.copyfile("template.yaml", f"{SAM_BUILD_PATH}/template.yaml")
template.yaml:
Transform: AWS::Serverless-2016-10-31
Resources:
  pandas:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: pandas
      ContentUri: pandas
      CompatibleRuntimes:
        - python3.6
        - python3.7
        - python3.8
  sqlparse:
    Type: AWS::Serverless::LayerVersion
    Properties:
      LayerName: sqlparse
      ContentUri: sqlparse
      CompatibleRuntimes:
        - python3.6
        - python3.7
        - python3.8
So call python build-layers.py first, then sam package, then sam deploy.
my directories look like this:
lambda/
  layers/
    pandas/
      requirements.txt   (content = pandas)
    sqlparse/
      requirements.txt   (content = sqlparse)
    template.yaml
    build-layers.py
buildspec.yml:
--- # build spec for AWS CodeBuild
version: 0.2
phases:
  install:
    runtime-versions:
      python: 3.8
    commands:
      - pip install aws-sam-cli
  build:
    commands:
      - cd lambda/layers
      - python build-layers.py
      - sam package --s3-bucket foo --s3-prefix sam/lambda/layers | sam deploy --capabilities CAPABILITY_IAM -t /dev/stdin --stack-name LAYERS

AWS CloudFormation update Lambda Code to use latest version in S3 bucket

I'm trying to create a CloudFormation template containing a Lambda function and an AWS CodeBuild project for building .NET Core source code into a deployed zip file in an S3 bucket.
Here are the particulars:
Using a GitHub mono-repo with multiple Lambda functions as different projects in the .NET Core solution
Each Lambda function (i.e. .NET Core project) has a CloudFormation YAML file generating a stack containing the Lambda function itself and a CodeBuild project.
The CodeBuild project is initiated from a GitHub webhook, retrieves the code from the GitHub sub-project, and uses its buildspec.yaml to govern how the build should happen.
The buildspec builds the .NET Core project, then zips and copies the output to a target S3 bucket
The Lambda function points to the S3 bucket for its source code
This is all working just fine. What I'm struggling with is how to update the Lambda function to use the updated compiled code in the S3 bucket.
Here is a subset of the CloudFormation template:
Resources:
  Lambda:
    Type: AWS::Lambda::Function
    Properties:
      FunctionName: roicalculator-eventpublisher
      Handler: RoiCalculator.Serverless.EventPublisher::RoiCalculator.Serverless.EventPublisher.Function::FunctionHandler
      Code:
        S3Bucket: deployment-artifacts
        S3Key: RoiCalculatorEventPublisher.zip
      Runtime: dotnetcore2.1
  CodeBuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Name: RoiCalculator-EventPublisher-Master
      Artifacts:
        Location: deployment-artifacts
        Name: RoiCalculatorEventPublisher.zip
        Type: S3
      Source:
        Type: GITHUB
        Location: https://github.com/XXXXXXX
        BuildSpec: RoiCalculator.Serverless.EventPublisher/buildspec.yml
Here is a subset of buildspec.yaml:
phases:
  install:
    runtime-versions:
      dotnet: 2.2
    commands:
      - dotnet tool install -g Amazon.Lambda.Tools
  build:
    commands:
      - dotnet restore
      - cd RoiCalculator.Serverless.EventPublisher
      - dotnet lambda package --configuration release --framework netcoreapp2.1 -o .\bin\release\netcoreapp2.1\RoiCalculatorEventPublisher.zip
      - aws s3 cp .\bin\release\netcoreapp2.1\RoiCalculatorEventPublisher.zip s3://deployment-artifacts/RoiCalculatorEventPublisher.zip
You can see the same artifact name (RoiCalculatorEventPublisher.zip) and S3 bucket (deployment-artifacts) are used in the buildspec (for generating and copying) and in the CloudFormation template (for the Lambda function's source).
Since I'm overwriting the application code in the S3 bucket using the same file name the Lambda function points to, how come the Lambda function is not being updated with the latest code?
How do version numbers work? Is it possible to have a 'system variable' containing the name of the artifact (file name + version number) and access the same 'system variable' in both the buildspec AND the CloudFormation template?
What's the secret sauce for using a CloudFormation template to build the source code (via buildspec and CodeBuild) as well as update the Lambda function which consumes the built code?
Thank you.
Unfortunately, unless you change the "S3Key" on the 'AWS::Lambda::Function' resource on every update, CloudFormation will not see it as a change (it does not look inside the zipped code for changes).
Options:
Option 1) Update the S3 key with every upload (a sketch of this follows after Option 2)
Option 2) The recommended approach is to use AWS SAM to author the Lambda template, then use the "cloudformation package" command to package the template, which takes care of creating a unique key for S3 and uploading the file to the bucket. Details here: https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-deploying.html
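For Option 1, a minimal sketch of what that can look like in a build step (the parameter name LambdaS3Key, the key layout, and the stack name are made up for illustration; the template would need a matching parameter used as Code.S3Key):
# Derive the key from the artifact's checksum so every code change produces a new S3Key.
HASH=$(md5sum RoiCalculatorEventPublisher.zip | cut -d' ' -f1)
KEY="lambda/RoiCalculatorEventPublisher-${HASH}.zip"

aws s3 cp RoiCalculatorEventPublisher.zip "s3://deployment-artifacts/${KEY}"

# Pass the new key to the stack so CloudFormation sees a changed S3Key and updates the function.
aws cloudformation deploy \
    --template-file template.yaml \
    --stack-name roicalculator-eventpublisher \
    --parameter-overrides LambdaS3Key="${KEY}" \
    --capabilities CAPABILITY_IAM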
Edit 1:
In response to your comment, let me add some details of the SAM approach.
You can use CloudFormation as the deployment tool for your Lambda function in your pipeline. The basic idea to deploy a Lambda function is as follows:
1) Create a SAM template for your Lambda function
2) A basic SAM template looks like:
AWSTemplateFormatVersion: '2010-09-09'
Transform: 'AWS::Serverless-2016-10-31'
Resources:
  FunctionName:
    Type: 'AWS::Serverless::Function'
    Properties:
      Handler: index.handler
      Runtime: nodejs6.10
      CodeUri: ./code
3) Add a directory "code" and keep the lambda code files in this directory
4) Install the SAM CLI [1]
5) Run the command to package and upload:
$ sam package --template-file template.yaml --output-template packaged.yaml --s3-bucket {your_S3_bucket}
6) Deploy the package:
$ aws cloudformation deploy --template-file packaged.yaml --stack-name stk1 --capabilities CAPABILITY_IAM
You can keep the template code (steps 1-2) in CodeCommit/GitHub and do steps 4-5 in a CodeBuild step. For step 6, I recommend doing it via a CloudFormation action in CodePipeline that is fed the "packaged.yaml" file as an input artifact.
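A sketch of what the CodeBuild side of steps 4-5 can boil down to (the bucket name is a placeholder; these are just the shell commands a buildspec would run):
# Install the SAM CLI, then package the template. packaged.yaml becomes the
# build artifact that the CloudFormation deploy action in CodePipeline consumes.
pip install aws-sam-cli
sam package \
    --template-file template.yaml \
    --output-template packaged.yaml \
    --s3-bucket your-artifact-bucket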
See also [2].
References:
[1] Installing the AWS SAM CLI on Linux - https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/serverless-sam-cli-install-linux.html
[2] Building a Continuous Delivery Pipeline for a Lambda Application with AWS CodePipeline - https://docs.aws.amazon.com/en_us/lambda/latest/dg/build-pipeline.html
I am using aws s3 sync instead of aws s3 cp and have never had this problem.
I am working on a project with a serverless architecture and multiple Lambdas, where we have multiple folders, each containing just a Python file and a requirements.txt file.
Usually the directory and the Lambda are named the same for convenience; for example, the folder email_sender would have the Python file email_sender.py and a requirements.txt if it needs one.
In CodeBuild, after installing the dependencies, this is how we zip:
echo "--- Compiling lambda zip: ${d}.zip"
d=$(tr "_" "-" <<< "${d}")
zip -q -r ${d}.zip . --exclude ".gitignore" --exclude "requirements.txt" --exclude "*__pycache__/*" > /dev/null 2>&1
mv ${d}.zip ../../${CODEBUILD_SOURCE_VERSION}/${d}.zip
And when copying to the S3 bucket, we use aws s3 sync as follows:
aws s3 sync ${CODEBUILD_SOURCE_VERSION}/ ${S3_URI} --exclude "*" --include "*.zip" --sse aws:kms --sse-kms-key-id ${KMS_KEY_ALIAS} --content-type "binary/octet-stream" --exact-timestamps

How to fix error about Import dlib on AWS Lambda?

I want to use dlib on AWS Lambda.
I use the Serverless Framework (runtime python3.6) and import the dlib package using the serverless-python-requirements plugin.
It works very well locally with $ serverless invoke local -f function. But when I deploy it and invoke it with $ serverless invoke -f function, it raises errors.
serverless.yml:
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    dockerizePip: non-linux
requirements.txt
boto3==1.9.135
botocore==1.12.135
Pillow==6.0.0
dlib==19.17.0
docutils==0.14
imutils==0.5.2
jmespath==0.9.4
numpy==1.16.3
opencv-python==4.1.0.25
python-dateutil==2.8.0
s3transfer==0.2.0
six==1.12.0
urllib3==1.24.2
Error log from AWS Lambda:
Unable to import module 'handler': libpng16.so.16: cannot open shared object file: No such file or directory
Could you tell me how to use dlib on AWS Lambda?
