Serverless Framework - Python and Requirements.txt - aws-lambda

Using the Serverless Framework v1.0.0, I have a 'requirements.txt' in my service root whose contents are the list of dependent Python packages (e.g. requests).
However, my deployed function fails because these dependencies do not seem to be installed as part of the packaging:
'Unable to import module 'handler': No module named requests'
I assume it is Serverless that does the pip install, but my resulting zip file is small and clearly it is not doing it, either by design or because I am missing something. Or is it Lambda that does this? If so, what am I missing?
Is there documentation on what is required and how it works? Does the pip install happen on the Serverless side or the AWS Lambda side?

You need to install the serverless-python-requirements plugin and Docker:
$ npm install serverless-python-requirements
Then add the following to your serverless.yml:
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    dockerizePip: non-linux
Make sure your Python virtual environment is active in the CLI:
$ source venv/bin/activate
Install any dependencies with pip; note that you can tell the venv is active by the (venv) prefix to the left of your prompt:
(venv) $ pip install <NAME>
(venv) $ pip freeze > requirements.txt
Make sure Docker is running, then deploy serverless as normal:
$ serverless deploy
What will happen is that serverless-python-requirements builds your Python packages in Docker using a Lambda environment, then zips them up ready to be uploaded with the rest of your code.
Full guide here
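For reference, a minimal handler that exercises the bundled dependency (a sketch; the file name handler.py and the function name hello are illustrative, not from the question):
# handler.py - illustrative sketch; assumes requests is listed in requirements.txt,
# so the module-level import below no longer fails with
# "Unable to import module 'handler': No module named requests"
import json
import requests

def hello(event, context):
    resp = requests.get("https://api.github.com")  # any call that uses the dependency
    return {"statusCode": 200, "body": json.dumps({"upstream_status": resp.status_code})}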

Now you can use serverless-python-requirements. It works for both pure-Python libraries and libraries needing native compilation (using Docker):
A Serverless v1.x plugin to automatically bundle dependencies from requirements.txt and make them available in your PYTHONPATH.
Requires Serverless >= v1.12

The Serverless Framework doesn't handle the pip install. See https://stackoverflow.com/a/39791686/1111215 for the solution.

Related

aws sam cli build - how to avoid pip install every build

When developing a Lambda using the SAM CLI, I often need to run sam build, which creates a new image from my Dockerfile.
This process includes fetching all the dependencies:
RUN python3.8 -m pip install -r requirements.txt -t .
This takes a lot of time, and I'm looking for a way to avoid this step, but couldn't find any info about it.
I came up with this solution: run sam build once, then use the generated image as the base for the next run (and remove the pip install command).
It works, but I was wondering whether it is good practice or can cause problems later.
Any other solutions?
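A more conventional way to get the same speed-up (a sketch, not from the thread; the base image is an assumption) is to order the Dockerfile so Docker's layer cache skips the dependency install: copy requirements.txt and run pip before copying the rest of the source, so the install layer is rebuilt only when requirements.txt itself changes.
# Sketch: give the dependency install its own cacheable layer
FROM public.ecr.aws/lambda/python:3.8
COPY requirements.txt .
RUN python3.8 -m pip install -r requirements.txt -t .
# Source edits below this line no longer invalidate the pip install layer
COPY . .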

Getting pipenv working in a Virtualenv where the App is working

I have my Django App working in a Virtualenv.
I would like to switch to pipenv. However, pipenv install fails with a dependency error.
Given that the App is working, I guess all the libraries are in the Virtualenv.
When getting the app working with virtualenv + pip, I had to resolve library dependencies by hand, but was able to and got it working. The thinking behind moving to pipenv is to avoid those dependency issues in a multi-member team setup.
Is there a way to tell pipenv to just take the versions of the libraries in the virtualenv and go with it?
If you have a setup.py file, you can install it with pipenv install . (note the trailing dot). Or even better, make it an editable development dependency: pipenv install -e . --dev.
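If the project doesn't have a setup.py yet, a minimal one is enough for this route (a sketch; the name and install_requires entries are placeholders):
# setup.py - minimal sketch; name and install_requires are placeholders
from setuptools import setup, find_packages

setup(
    name="myapp",
    version="0.1.0",
    packages=find_packages(),
    # Top-level dependencies only; let pipenv resolve the rest
    install_requires=["Django>=2.2"],
)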
You can also create a Pipfile/virtual env from a requirements.txt file. So you could do a pip freeze, then install from the requirements file.
Freezing your dependencies
From your working app virtual env, export your dependencies to a requirements file.
pip freeze > frozen-reqs.txt
Then create a new virtual env with pipenv, and install from the frozen requirements.
pipenv install -r frozen-reqs.txt
Then go into the Pipfile and start removing everything but the top-level dependencies, and re-lock. Also, wherever possible, avoid pinning requirements, as this makes dependency resolution much harder.
You can use pipenv graph and pipenv graph --reverse to help with this.

Issue using M2Crypto on lambda (works on EC2)

I am trying to deploy a Python function that uses M2Crypto to AWS Lambda.
I spun up an EC2 instance with the Lambda AMI image, installed M2Crypto into a virtualenv, and was able to get my function working on EC2.
Then I zipped up the site-packages and uploaded it to Lambda. I got this error:
Unable to import module 'epd_M2Crypto':
/var/task/M2Crypto/_m2crypto.cpython-36m-x86_64-linux-gnu.so: symbol
sk_deep_copy, version libcrypto.so.10 not defined in file
libcrypto.so.10 with link time reference
There are similar questions and hints here and here. I tried uploading the offending lib (libcrypto.so.10) in the zip file, but still get the same error. I am assuming the error means that the EC2 version of libcrypto.so.10 (used to install M2Crypto) is different than the version on Lambda (that I trying to run with), so M2Crypto complains.
If I look at the versions of openssl they are different:
OpenSSL 1.0.0-fips 29 Mar 2010 (lambda version)
OpenSSL 1.0.2k-fips 26 Jan 2017 (ec2 version)
I don't think the answer is to downgrade OpenSSL on EC2, as the 1.0.0 version is obsolete (AWS applies security patches but the version still shows as 1.0.0). (Also, yum doesn't have versions this old.)
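One way to confirm the skew from inside Lambda itself is to log the OpenSSL version the Python runtime loaded (a diagnostic sketch, not from the original post):
# Diagnostic sketch: report the OpenSSL version loaded by the interpreter
import ssl

def lambda_handler(event, context):
    return {"openssl_version": ssl.OPENSSL_VERSION}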
Here are the steps I used on the EC2 instance to get it working:
$ sudo yum -y update
$ sudo yum -y install python36
$ sudo yum -y install python-virtualenv
$ sudo yum -y groupinstall "Development Tools"
$ sudo yum -y install python36-devel.x86_64
$ sudo yum -y install openssl-devel.x86_64
$ mkdir ~/forlambda
$ cd ~/forlambda
$ virtualenv -p python3 venv
$ source venv/bin/activate
$ cd ~
$ pip install M2Crypto -t ~/forlambda/venv/lib/python3.6/site-packages/
$ cd ~/forlambda/venv/lib/python3.6/site-packages/
$ (create python function that uses M2Crypto)
$ zip -r9 ~/forlambda/archive.zip .
Then I added these to the zip file:
/usr/bin/openssl
/usr/lib64/libcrypto.so.10
/usr/lib64/libssl.so.10
Then I uploaded it to Lambda, which is where I am now stuck.
Do I need to do something to get Lambda to use the version of libcrypto.so.10 that I have included in the uploaded zip?
My function:
"""
Wrapper for M2Crypto
https://github.com/mcepl/M2Crypto
https://pypi.org/project/M2Crypto/
"""
from __future__ import print_function
from M2Crypto import RSA
import base64
import json
def decrypt_string(string_b64):
rsa = RSA.load_key('private_key.pem')
string_encrypted = base64.b64decode(string_b64)
bytes = rsa.private_decrypt(string_encrypted, 1)
string_plaintext = bytes.decode("utf-8")
response = {
's': string_plaintext,
'status': "OK",
'statuscode': 200
};
return response
def lambda_handler(event, context):
response = ""
action = event['action']
if action == "decrypt":
string_b64 = event['s']
response = decrypt_string(string_b64)
return response
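A quick local smoke test on the EC2 box can rule out logic errors before zipping (a sketch; the base64 ciphertext is a placeholder you would produce with the matching public key):
# Local smoke test - run inside the virtualenv, next to private_key.pem
if __name__ == "__main__":
    event = {"action": "decrypt", "s": "<base64 ciphertext>"}  # placeholder input
    print(lambda_handler(event, None))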
AWS Support provided a resolution: upgrade to Python 3.7, where the issue is resolved:
Our internal team has confirmed that the issue is with Lambda's Python
runtime. In a few rare cases, when the Lambda function is being
initialised, Lambda is unable to link against the correct OpenSSL
libraries - instead linking against Lambda's own in-built OpenSSL
binaries.
The team suggests trying this out in the Python3.7 environment where
this behaviour has been fixed. Also, python3.7 is compiled with the
newer openssl 1.0.2 and you should not have to include the binaries in
the Lambda package. ... [I] still had to include the OpenSSL binaries in the package and could not get it working with the default libraries.
First I ran this command on the EC2 instance to make sure I had included the correct .so file in my .zip:
$ ldd -v _m2crypto.cpython-36m-x86_64-linux-gnu.so
The output of the ldd command (edited for brevity):
libssl.so.10 => /lib64/libssl.so.10 (0x00007fd5f1892000)
libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007fd5f1433000)
Based on the output above, I included /lib64/libcrypto.so.10 in my .zip.
Also (at the suggestion of AWS Support), on the Lambda console, under 'Environment variables', I added a key 'LD_LIBRARY_PATH' with value '/var/task'.
I'm not sure if I needed both those changes to fix my problem, but it works right now and after three days of troubleshooting I am afraid to touch it to see if it was one or the other that made it work.
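To check which change mattered, one option is a temporary ctypes probe at the top of the function code (a diagnostic sketch, not part of the original fix):
# Diagnostic sketch: verify the libcrypto bundled in the zip is loadable
import ctypes

ctypes.CDLL("/var/task/libcrypto.so.10")  # raises OSError if it cannot be loaded
print("bundled libcrypto loaded OK")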
It is perhaps too brutal, but would it be possible to use LD_PRELOAD to force the use of your preferred version of the OpenSSL library?
AWS Lambda runs code on an old version of Amazon Linux (amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2), as mentioned in the official documentation:
https://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
So to run code that depends on shared libraries, it needs to be compiled in the same environment so it can link correctly.
What I usually do in such cases is create the virtualenv inside a Docker container. The virtualenv can then be packaged with the Lambda code.
Note that if you need to install anything using yum (in the Docker container), you must use the same release as the Amazon Linux version:
yum --releasever=2017.03 install ...
The virtualenv can be built on an EC2 instance as well, instead of a Docker container (though I find the Docker method easier). Just make sure that the AMI used for the EC2 instance is the same as the one used by Lambda.

Build from Dockerfile-runtime Import h2o4gpu - No h2o4gpu Module

I have recently built the h2o4gpu docker image using the Dockerfile-runtime, and managed to run it and log into the Jupyter notebooks.
However, when trying to run
import h2o4gpu
I get the error that there is no h2o4gpu module. Afterwards, I tried installing it by adding the commands below to the Dockerfile:
pip install --extra-index-url https://pypi.anaconda.org/gpuopenanalytics/simple h2o4gpu
pip install h2o4gpu-0.2.0-cp36-cp36m-linux_x86_64.whl
This also failed, so I was wondering if there are other changes I should make, or if I should build the Dockerfile from scratch.
Thank you
To build the project, you can follow this recipe:
git clone https://github.com/h2oai/h2o4gpu.git
cd h2o4gpu
make centos7_cuda9_in_docker
This will work on either an x86_64 or ppc64le host with a modern Docker installed.
The python .whl file artifact is written to the dist directory.
Even if the build process is significantly refactored, this style of build API is very likely to remain.

Python deployment package issue

I am using the paramiko module in my Python code, which is an AWS Lambda function.
I followed the same procedure for Python package deployment as described in the link: http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html#with-s3-example-deployment-pkg-python
I got a strange error after running the deployment package.
I see you're following AWS documentation, but I'm not sure exactly how you are creating the deployment package, so I'll try to illustrate with an example.
My Python (3.5) code layout:
/paramiko
    /paramiko
    my_function.py
    requirements.txt
Where requirements.txt contains:
paramiko==2.3.1
my_function.py contains:
import paramiko
print(paramiko.__version__)
Creating the virtual environment.
Create the virtualenv: python3 -m venv /path/to/your/venv.
Navigate to the venv root, and activate it: source bin/activate.
Install dependencies: pip install -r requirements.txt
Execute the following shell commands from the root of your venv:
cd lib/python3.5/site-packages/
zip -r9 ~/my_deployment_package.zip *
cd /path/to/your/project/root
zip -g ~/my_deployment_package.zip *
You should now have a deployment package, ~/my_deployment_package.zip, that contains all the dependencies for your project.
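If the import still fails after upload, the usual culprit is that the dependencies are not at the root of the zip. A quick check (a sketch; the archive path is illustrative):
# Sketch: list the top-level entries of the deployment package;
# expect to see 'paramiko' and 'my_function.py' at the root
import zipfile

with zipfile.ZipFile("my_deployment_package.zip") as zf:
    print(sorted({name.split("/")[0] for name in zf.namelist()}))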
