I am using the paramiko module in my Python code, which is an AWS Lambda function.
I followed the same procedure for building the Python deployment package described at http://docs.aws.amazon.com/lambda/latest/dg/with-s3-example-deployment-pkg.html#with-s3-example-deployment-pkg-python
I got a strange error after running the deployment package.
I see you're following AWS documentation, but I'm not sure exactly how you are creating the deployment package, so I'll try to illustrate with an example.
My Python code (3.5):
/paramiko
    my_function.py
    requirements.txt
Where requirements.txt contains:
paramiko==2.3.1
my_function.py contains:
import paramiko
print(paramiko.__version__)
Creating the virtual environment.
Create the virtualenv: python3 -m venv /path/to/your/venv.
Navigate to the venv root, and activate it: source bin/activate.
Install dependencies: pip install -r requirements.txt
Execute the following shell commands from the root of your venv:
cd lib/python3.5/site-packages/
zip -r9 ~/my_deployment_package.zip *
cd /path/to/your/project/root
zip -g ~/my_deployment_package.zip *
You should now have a deployment package, ~/my_deployment_package.zip, that contains all the dependencies for your project.
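If you want to push the package straight from the command line, something like the following should work (the function name my-paramiko-function is just a placeholder, and the zip is assumed to be in your home directory):
aws lambda update-function-code \
    --function-name my-paramiko-function \
    --zip-file fileb://$HOME/my_deployment_package.zip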
I am running
poetry install
from within a local Python virtualenv ".venv". The project is supposed to create an executable hercl that becomes available on the user's path. Two questions:
What options / configuration determine whether the executable gets installed into the local .venv/bin or into the pyenv shims? I'm not sure which of the two is supposed to happen.
Since poetry reuses / redirects many functions to pip, it may be that the feature I'm asking about actually comes from pip itself. I have not been able to find anything in either the poetry or pip documentation about this shell-script installation. How is this achieved?
Update
After running pip install outside of the virtualenv, it pulls from PyPI and creates a bash script at ~/.pyenv/shims/my_app.
In my case my_app is "hercl", and we see this:
$which hercl
~/.pyenv/shims/hercl
Its contents are:
$cat $(which hercl)
#!/usr/bin/env bash
set -e
[ -n "$PYENV_DEBUG" ] && set -x
program="${0##*/}"
export PYENV_ROOT="~/.pyenv"
exec "~/.pyenv/libexec/pyenv" exec "$program" "$#"
Somehow this script is installed when running pip install: I am wondering how pip knows to do this. Is it from the pyproject.toml from poetry? Is it from a setup.py or setup.cfg associated with pip?
Another update: @sinoroc takes another tack on this: poetry has a scripts section that I did not notice (newbie on that tool).
[tool.poetry.scripts]
hercl = "hercl.hercl:main"
hercl is the command that I was looking for.
But there was also an actual bash script that launches hercl, which got installed under the shims as part of the virtualenv. I think that script was in the pyenv shims directory.
In a Poetry-based project such executable scripts are defined in the scripts section of pyproject.toml.
If a virtual environment is active when installing the application then the executable is installed in the virtual environment's bin directory. So it is available only while the virtual environment is "active".
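For example, a quick way to see both behaviours described above (the paths are typical defaults rather than guarantees, and the package name follows the hercl example):
# inside the project's virtual environment:
source .venv/bin/activate
poetry install
which hercl            # -> /path/to/project/.venv/bin/hercl

# outside the virtual environment, with a pyenv-managed Python:
deactivate
pip install hercl      # pulls from PyPI, as in the update above
pyenv rehash
which hercl            # -> ~/.pyenv/shims/hercl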
I'm trying to create a new lambda layer to import the zip file with psycopg2, because the library made my deployment package exceed 3MB and I can no longer see the inline code of my lambda function.
I created a lambda layer for the following 2 cases with Python 3.7:
psycopg2_lib.zip (contains psycopg2, psycopg2_binary.libs and psycopg2_binary-2.8.5.dist-info folders)
psycopg2_only.zip which contains only the psycopg2 folder.
I added the newly created layer to my lambda function.
But, in both cases, my lambda_function throws an error as follows:
{
"errorMessage": "Unable to import module 'lambda_function': No module named 'psycopg2'",
"errorType": "Runtime.ImportModuleError"
}
The error makes it seem as if something is wrong with my zip files and they are not recognized, yet the same library works fine when it is part of my deployment package.
Any help or explanation would be much appreciated. Thanks!
Not sure if the OP found a solution to this, but in case others land here, I resolved it using the following steps:
download the code/clone the git from:
https://github.com/jkehler/awslambda-psycopg2
create the following directory tree if building for Python 3.7; otherwise replace 'python3.7' with the version of your choice:
mkdir -p python/lib/python3.7/site-packages/psycopg2
choose the Python version of interest and copy the files from the folders downloaded in step 1 into the directory tree from step 2, e.g. if building a layer for Python 3.7:
cp psycopg2-3.7/* python/lib/python3.7/site-packages/psycopg2
create the zip file for the layer. e.g.: zip -r9 psycopg2-py37.zip python
create a layer in the console or cli and upload the zip
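From the CLI the last step would look roughly like this (the layer name is just a placeholder):
aws lambda publish-layer-version \
    --layer-name psycopg2-py37 \
    --zip-file fileb://psycopg2-py37.zip \
    --compatible-runtimes python3.7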
If you end up on this page in 2022 or later, use the official psycopg2-binary: https://pypi.org/project/psycopg2-binary/
It works well for me. Just:
pip install --target ./python psycopg2-binary
zip -r python.zip python
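After uploading python.zip as a layer (in the console, or with publish-layer-version as shown above), attach it to the function by its ARN, for example (function name, region, account ID and layer version are placeholders):
aws lambda update-function-configuration \
    --function-name my-function \
    --layers arn:aws:lambda:eu-west-1:123456789012:layer:psycopg2-binary:1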
The maintainers of psycopg2 do not recommend using psycopg2-binary, because it ships with its own linked libpq, libssl, and other libraries that may cause issues in production under certain circumstances.
I can imagine this being an issue when upgrading a PostgreSQL server while the bundled libpq is incompatible. I also had issues with psycopg2-binary on AWS Lambda running in an arm64 environment.
I've resorted to building postgresql and psycopg2 in Docker running on the linux/arm64 platform, using public.ecr.aws/lambda/python:3.9 as the base image.
FROM public.ecr.aws/lambda/python:3.9
RUN yum -y update && \
yum -y upgrade && \
yum -y install libffi-devel postgresql-devel postgresql-libs zip rsync wget openssl openssl-devel && \
yum -y groupinstall "development tools" && \
pip install pipenv
ENTRYPOINT ["/bin/bash"]
The build script is the following and is valid for the aarch64 platform. Just change the path to the x86_64 version in the Prepare psycopg2 step.
#!/usr/bin/env bash
set -e
PG_VERSION="14.5"
cd "$TERRAFORM_ROOT"
if [ ! -f "postgresql-$PG_VERSION.tar.bz2" ]; then
wget "https://ftp.postgresql.org/pub/source/v$PG_VERSION/postgresql-$PG_VERSION.tar.bz2"
tar -xf "$(pwd)/postgresql-$PG_VERSION.tar.bz2"
fi
if [ ! -d "psycopg2" ]; then
git clone https://github.com/psycopg/psycopg2.git
fi
# Build postgres
cd "$TERRAFORM_ROOT/postgresql-$PG_VERSION"
./configure --without-readline --without-zlib
make
make install
# Build psycopg2
cd "$TERRAFORM_ROOT/psycopg2"
make clean
python setup.py build_ext \
    --pg-config "$TERRAFORM_ROOT/postgresql-$PG_VERSION/src/bin/pg_config/pg_config"
# Prepare psycopg2
cd build/lib.linux-aarch64-3.9
mkdir -p python/
cp -r psycopg2 python/
zip -9 -r "$BUNDLE" ./python
# Prepare libpq
cd "$TERRAFORM_ROOT/postgresql-$PG_VERSION/src/interfaces/libpq/"
mkdir -p lib/
cp libpq.so.5 lib/
zip -9 -r "$BUNDLE" ./lib
where $BUNDLE is the path to an already existing .zip file.
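For completeness, building the image and running the script inside it might look like this (the image tag, mount path and the build_psycopg2.sh file name are my own placeholders; since the Dockerfile's ENTRYPOINT is /bin/bash, the script path is passed as the argument):
docker build --platform linux/arm64 -t psycopg2-builder .
docker run --rm --platform linux/arm64 \
    -v "$PWD":/opt/build \
    -e TERRAFORM_ROOT=/opt/build \
    -e BUNDLE=/opt/build/psycopg2-layer.zip \
    psycopg2-builder /opt/build/build_psycopg2.sh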
I also tried to statically build the psycopg2 binary and link libpq.a; however, I had quite a lot of issues with missing symbols.
From AWS post How do I add Python packages with compiled binaries to my deployment package and make the package compatible with Lambda?:
To create a Lambda deployment package or layer that's compatible with Lambda Python runtimes when using pip outside of a Linux operating system, run the pip install command with manylinux2014 as the value for the --platform parameter.
pip install \
--platform manylinux2014_x86_64 \
--target=my-lambda-function \
--implementation cp \
--python-version 3.9 \
--only-binary=:all: --upgrade \
psycopg2-binary
You can then zip the contents of the my-lambda-function directory.
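For example (the zip file name is arbitrary; your handler .py file should end up at the root of the same archive):
cd my-lambda-function
zip -r ../my-deployment-package.zip .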
I deployed a DAG in Airflow (on GCP) but I receive the error "No module named 'scipy'".
How do I install packages in Airflow?
I've tried adding a separate DAG to run:
import logging, subprocess, sys

PACKAGES = ["scipy"]  # the packages I want installed

def pip_install(package):
    subprocess.call([sys.executable, "-m", "pip", "install", package])

def update_packages(**kwargs):
    logging.info(list(sys.modules.keys()))
    for package in PACKAGES:
        pip_install(package)
I've tried running pip3 install scipy in the GCP shell;
I've tried adding pip install scipy to the image builder.
None of these approaches worked.
If you are using Cloud Composer on GCP, you should check https://cloud.google.com/composer/docs/how-to/using/installing-python-dependencies
Pass a requirements.txt file to the gcloud command-line tool. Format the file with each requirement specifier on a separate line.
Sample requirements.txt file:
scipy>=0.13.3
scikit-learn
nltk[machine_learning]
Pass the requirements.txt file to the gcloud command to set your installation dependencies.
gcloud composer environments update ENVIRONMENT-NAME \
    --update-pypi-packages-from-file requirements.txt \
    --location LOCATION
Using the serverless framework v1.0.0, I have a requirements.txt in my service root containing the list of dependent Python packages (e.g. requests).
However my resulting deployed function fails, as it seems these dependencies are not installed as part of the packaging:
'Unable to import module 'handler': No module named requests'
I assume it is serverless that does the pip install, but my resulting zip file is small and it clearly isn't doing it, either by design or through some fault of mine because I am missing something. Or is it Lambda that does this? If so, what am I missing?
Is there documentation on what is required to do this and how it works? Is it serverless that pip installs these, or does it happen on the AWS Lambda side?
You need to install serverless-python-requirements and docker
$ npm install serverless-python-requirements
Then add the following to your serverless.yml
plugins:
- serverless-python-requirements
custom:
pythonRequirements:
dockerizePip: non-linux
Make sure you have your python virtual environment active in CLI:
$ source venv/bin/activate
Install any dependencies with pip - note that in the CLI you can tell if the venv is active by the (venv) to the left of the terminal prompt:
(venv) $ pip install <NAME>
(venv) $ pip freeze > requirements.txt
Make sure you have Docker running, then deploy serverless as normal:
$ serverless deploy
What will happen is that serverless-python-requirements will build your Python packages in Docker using a Lambda environment, and then zip them up ready to be uploaded with the rest of your code.
Full guide here
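If you want to sanity-check the artifact before deploying, something like this should work (this assumes the default .serverless output directory and the requests dependency from the question):
serverless package
unzip -l .serverless/*.zip | grep requests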
Now you can use serverless-python-requirements. It works both for pure Python libraries and for libraries needing native compilation (using Docker):
A Serverless v1.x plugin to automatically bundle dependencies from requirements.txt and make them available in your PYTHONPATH.
Requires Serverless >= v1.12
The Serverless Framework doesn't handle the pip install. See https://stackoverflow.com/a/39791686/1111215 for the solution
How can I install a package under development to an Anaconda environment?
With pip:
pip install -e /path/to/mypackage
or with regular setuptools:
python /path/to/mypackage/setup.py develop
There is also conda develop available now.
http://conda.pydata.org/docs/commands/build/conda-develop.html
Update in 2019: conda develop hasn't been maintained and is not recommended. See https://github.com/conda/conda-build/issues/1992
The recommendation is to use python setup.py develop or pip install -e . instead.
Using either of those will work with Anaconda. Make sure that you have pip or setuptools installed into the conda environment you want to install into, and that you have it activated.
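A minimal sketch of that workflow (the environment name and Python version are my own choices):
conda create -n mypackage-dev python=3.9 pip
conda activate mypackage-dev
pip install -e /path/to/mypackage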
This is the equivalent to pip install -e .
conda install conda-build
conda develop .
As explained in this GitHub issue thread, because of build isolation and dependency installation, the Anaconda developers recommend using:
pip install --no-build-isolation --no-deps -e .
Build / Host Environment
To create build and host environments and a build script go to your recipe directory and use
conda debug /path/to/your/recipe-directory
as documented here. This will print an instructive message like
################################################################################
Build and/or host environments created for debugging. To enter a debugging environment:
cd /home/UserName/miniconda3/conda-bld/debug_1542385789430/work && source /home/UserName/miniconda3/conda-bld/debug_1542385789430/work/build_env_setup.sh
To run your build, you might want to start with running the conda_build.sh file.
################################################################################
(The message might incorrectly tell you that it created a test environment.) Your source code has been copied to the .../work directory and there is also a conda_build.sh script. Note that sourcing build_env_setup.sh will load both the build and host environments.
You can work on your code and your recipe and build with the conda_build.sh, but you won't get a proper conda package, as far as I know. When you are finished, you can remove the debug environment:
conda deactivate # maybe twice
conda build purge
Test Environment
To get the test environment, you have to build the package first and then debug that. This might be useful to fix your test files.
conda build /path/to/your/recipe-directory # creates mypackage*.tar.bz2
# find file location of mypackage*.tar.bz2 with:
conda search --info --use-local mypackage # look at the url row for the path
cd /path/to/miniconda3/conda-bld/linux-64/ # go to that path, can be different
conda debug mypackage*.tar.bz2
This will print e. g.:
################################################################################
Test environment created for debugging. To enter a debugging environment:
cd /home/UserName/miniconda3/conda-bld/debug_1542385789430/test_tmp && source /home/UserName/miniconda3/conda-bld/debug_1542385789430/work/conda_test_env_vars.sh
To run your tests, you might want to start with running the conda_test_runner.sh file.
################################################################################
Again, remove with
conda deactivate
conda build purge
Run Environment
This is actually not debugging, but the general process of building and installing a local package. With the run environment you can check whether all dependencies are specified in the requirements/run section. Pinning can also be an issue.
(base) $ conda build /path/to/your/recipe-directory
(base) $ conda create --name package-env --use-local mypackage
(base) $ conda activate package-env
(package-env) $ python
>>> import mypackage
You can also list the dependencies of your package with (man page)
conda search --info --use-local mypackage
A last hint: if you want to know the versions of the dependencies and want to see whether pinning works, try (man page):
conda render /path/to/your/recipe-directory