Issue using M2Crypto on lambda (works on EC2) - aws-lambda

I am trying to deploy a Python function that uses M2Crypto to AWS Lambda.
I spun up an EC2 instance with the Lambda AMI image, installed M2Crypto into a virtualenv, and was able to get my function working on EC2.
Then I zipped up the site-packages directory and uploaded it to Lambda. I got this error:
Unable to import module 'epd_M2Crypto':
/var/task/M2Crypto/_m2crypto.cpython-36m-x86_64-linux-gnu.so: symbol
sk_deep_copy, version libcrypto.so.10 not defined in file
libcrypto.so.10 with link time reference
There are similar questions and hints here and here. I tried uploading the offending lib (libcrypto.so.10) in the zip file, but I still get the same error. I assume the error means that the EC2 version of libcrypto.so.10 (used to build M2Crypto) is different from the version on Lambda (which I am trying to run against), so M2Crypto complains.
If I look at the OpenSSL versions, they are different:
OpenSSL 1.0.0-fips 29 Mar 2010 (lambda version)
OpenSSL 1.0.2k-fips 26 Jan 2017 (ec2 version)
I don't think the answer is to downgrade OpenSSL on EC2, as the 1.0.0 branch is obsolete (AWS applies security patches, but the version still reports as 1.0.0). Also, yum doesn't have versions that old.
Here are the steps I used on the EC2 instance to get it working there:
$ sudo yum -y update
$ sudo yum -y install python36
$ sudo yum -y install python-virtualenv
$ sudo yum -y groupinstall "Development Tools"
$ sudo yum -y install python36-devel.x86_64
$ sudo yum -y install openssl-devel.x86_64
$ mkdir ~/forlambda
$ cd ~/forlambda
$ virtualenv -p python3 venv
$ source venv/bin/activate
$ cd ~
$ pip install M2Crypto -t ~/forlambda/venv/lib/python3.6/site-packages/
$ cd ~/forlambda/venv/lib/python3.6/site-packages/
$ (create python function that uses M2Crypto)
$ zip -r9 ~/forlambda/archive.zip .
Then I added these files to the zip:
/usr/bin/openssl
/usr/lib64/libcrypto.so.10
/usr/lib64/libssl.so.10
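Note that Lambda extracts the deployment zip into /var/task, so if you want the loader to find the bundled libraries there, they have to sit at the root of the archive rather than under usr/lib64. A hedged sketch of adding them that way:
$ cp /usr/bin/openssl /usr/lib64/libcrypto.so.10 /usr/lib64/libssl.so.10 ~/forlambda/
$ cd ~/forlambda
$ zip -g archive.zip openssl libcrypto.so.10 libssl.so.10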
Then I uploaded the archive to Lambda, which is where I am now stuck.
Do I need to do something to get Lambda to use the version of libcrypto.so.10 that I have included in the uploaded zip?
My function:
"""
Wrapper for M2Crypto
https://github.com/mcepl/M2Crypto
https://pypi.org/project/M2Crypto/
"""
from __future__ import print_function
from M2Crypto import RSA
import base64
import json
def decrypt_string(string_b64):
rsa = RSA.load_key('private_key.pem')
string_encrypted = base64.b64decode(string_b64)
bytes = rsa.private_decrypt(string_encrypted, 1)
string_plaintext = bytes.decode("utf-8")
response = {
's': string_plaintext,
'status': "OK",
'statuscode': 200
};
return response
def lambda_handler(event, context):
response = ""
action = event['action']
if action == "decrypt":
string_b64 = event['s']
response = decrypt_string(string_b64)
return response
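For reference, the handler expects an event shaped like this (values are hypothetical):
{
    "action": "decrypt",
    "s": "<base64-encoded ciphertext>"
}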

AWS Support provided a resolution: upgrade to the Python 3.7 runtime, where the issue is fixed:
Our internal team has confirmed that the issue is with Lambda's Python
runtime. In a few rare cases, when the Lambda function is being
initialised, Lambda is unable to link against the correct OpenSSL
libraries - instead linking against Lambda's own in-built OpenSSL
binaries.
The team suggests trying this out in the Python3.7 environment where
this behaviour has been fixed. Also, python3.7 is compiled with the
newer openssl 1.0.2 and you should not have to include the binaries in
the Lambda package.
(In practice, I still had to include the OpenSSL binaries in the package and could not get it working with the default libraries.)

First I ran this command on the EC2 instance to make sure I had included the correct .so file in my .zip:
$ ldd -v _m2crypto.cpython-36m-x86_64-linux-gnu.so
The output of the ldd command (edited for brevity):
libssl.so.10 => /lib64/libssl.so.10 (0x00007fd5f1892000)
libcrypto.so.10 => /lib64/libcrypto.so.10 (0x00007fd5f1433000)
Based on the output above, I included /lib64/libcrypto.so.10 in my .zip.
Also (at the suggestion of AWS Support), on the Lambda console, under 'Environment variables', I added a key 'LD_LIBRARY_PATH' with value '/var/task'.
I'm not sure whether both changes were needed to fix my problem, but it works right now, and after three days of troubleshooting I am afraid to touch it to find out which one did the trick.
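If you want to sanity-check which library actually got loaded at runtime, one option is a couple of lines in the handler (a sketch; /proc/self/maps is Linux-specific, which is fine on Lambda):

def loaded_crypto_libs():
    # List the libcrypto objects actually mapped into this process.
    with open('/proc/self/maps') as maps:
        return sorted({line.split()[-1] for line in maps if 'libcrypto' in line})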

It is perhaps too brutal, but would it be possible to use LD_PRELOAD to force loading your preferred version of the OpenSSL library?
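For example (a sketch, assuming the bundled library sits at the root of the package so it lands at /var/task/libcrypto.so.10), one could try setting LD_PRELOAD as a Lambda environment variable, the same way LD_LIBRARY_PATH was set above. The equivalent local experiment would be:
$ LD_PRELOAD=/path/to/bundled/libcrypto.so.10 python3 -c "from M2Crypto import RSA"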

AWS Lambda runs code on an old version of Amazon Linux (amzn-ami-hvm-2017.03.1.20170812-x86_64-gp2), as mentioned in the official documentation:
https://docs.aws.amazon.com/lambda/latest/dg/current-supported-versions.html
So code that depends on shared libraries needs to be compiled in the same environment so it links correctly.
What I usually do in such cases is create the virtualenv using a Docker container. The virtualenv can then be packaged with the Lambda code.
Please note that if you need to install anything using yum (in the Docker container), you must use the same release as the Amazon Linux version:
yum --releasever=2017.03 install ...
The virtualenv can be built on an EC2 instance as well, instead of a Docker container (though I find the Docker method easier). Just make sure the AMI used for EC2 is the same as the one used by Lambda. A sketch of the Docker approach follows.
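A minimal sketch of the Docker approach (the amazonlinux:2017.03 image tag and the yum package names are assumptions; adjust them to the runtime you target):
$ docker run --rm -v "$PWD":/out amazonlinux:2017.03 sh -c '
    yum --releasever=2017.03 -y install python36 python36-devel gcc swig openssl-devel &&
    python36 -m venv /venv &&
    /venv/bin/pip install M2Crypto -t /out/site-packages'
The contents of ./site-packages can then be zipped up exactly as in the EC2-based steps above.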

Related

Cannot run yum due to libssl.so.10: cannot open shared object file: No such file or directory

I inadvertently ran the following command on an AWS EC2 Lightsail instance:
rpm --nodeps -e openssl-1.0.2k-16.150.amzn1.x86_64
and ever since I am unable to run any yum commands
[root@ip-172-26-3-161 abc]# yum update
There was a problem importing one of the Python modules
required to run yum. The error leading to this problem was:
libssl.so.10: cannot open shared object file: No such file or directory
Please install a package which provides this module, or
verify that the module is installed correctly.
It's possible that the above module doesn't match the
current version of Python, which is:
2.7.16 (default, Oct 14 2019, 21:26:56)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]
If you cannot solve this problem yourself, please go to
the yum faq at:
http://yum.baseurl.org/wiki/Faq
Any pointers on how to recover from this?
You should find the original version of openssl, download it from the Amazon repo, and install it with the rpm command. Next time, use yum localinstall ... to install local packages.
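A sketch of the recovery, assuming you have access to another Amazon Linux host on the same release to fetch the package (the version string is taken from the removed package above; yumdownloader comes from yum-utils):
# on a healthy Amazon Linux host
$ yumdownloader openssl-1.0.2k-16.150.amzn1.x86_64
# copy the resulting .rpm to the broken instance (e.g. with scp), then there:
$ sudo rpm -ivh openssl-1.0.2k-16.150.amzn1.x86_64.rpm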

Unusual installation of `aws` cli

I am using macOS High Sierra.
I installed the aws cli tool a long time ago and don't remember how I installed it.
The installation is a little unusual.
I can run aws from any folder; this works:
$ aws --version
aws-cli/1.11.121 Python/2.7.13 Darwin/17.4.0 botocore/1.7.12
However, running
$ which aws
this returns nothing.
I thought it might be an alias, but running
$ alias | grep aws
This also returns nothing.
It's not installed with Homebrew either:
$ brew list | grep aws
The reason this is a problem is that there have now been a few CLI programs I have run (including AWS SAM and a build script from my work) that complain because aws is not in the path.
I would much rather have a "regular installation" of the aws cli, where the executable sits in some bin folder that is on my PATH.
But instead it's using some "magic" I am unfamiliar with, and not even AWS's own tools (AWS SAM) seem to like the way it's installed.
Any advice would be appreciated.
I solved the problem by running
$ pip uninstall awscli
$ brew upgrade
$ brew install awscli
Now I get this result
$ which aws
/usr/local/bin/aws
"AWS Sam" and the other build script I use at work are now working.

Synapse Home server (Matrix) not running

I have installed synapse using the following commands:
link: https://github.com/matrix-org/synapse
Installing prerequisites on Mac OS X:
xcode-select --install
sudo easy_install pip
sudo pip install virtualenv
brew install pkg-config libffi
To install the synapse homeserver run:
virtualenv -p python2.7 ~/.synapse
source ~/.synapse/bin/activate
pip install --upgrade setuptools
pip install https://github.com/matrix-org/synapse/tarball/master
Generate a configuration file
cd ~/.synapse
python -m synapse.app.homeserver \
--server-name my.domain.name \
--config-path homeserver.yaml \
--generate-config \
--report-stats=yes
To get started, it is easiest to use the command line to register new users:
$ source ~/.synapse/bin/activate
$ synctl start # if not already running
$ register_new_matrix_user -c homeserver.yaml https://localhost:8448
New user localpart: user123
Password:
Confirm password:
The server started successfully, but user registration failed, and
when I opened https://localhost:8448 in the browser I got the following:
Can anybody help solve this?
Your homeserver is probably not starting correctly. Try to get the JSON response about supported versions by executing the following in your shell:
curl https://localhost:8448/_matrix/client/versions -k
This should result in a JSON response listing protocol versions:
{
  "versions": [
    "r0.0.1",
    "r0.1.0",
    "r0.2.0"
  ]
}
If that's not working, you can try the following to find the real issue:
Check if it's running at all with sudo service matrix-synapse status
Check the log file at /var/log/matrix-synapse/homeserver.log
I will update the answer if you can provide more details.
The web client should be accessible at the following URL:
https://localhost:8448/_matrix/client/
However, the documentation states:
(The homeserver runs a web client by default at
https://localhost:8448/, though as of the time of writing it is
somewhat outdated and not really recommended -
https://github.com/matrix-org/synapse/issues/1527).
You should use a client such as the one at https://riot.im/app/

Serverless Framework - Python and Requirements.txt

Using the Serverless Framework v1.0.0, I have a requirements.txt in my service root whose contents are the list of dependent Python packages (e.g. requests).
However, my deployed function fails, as these dependencies do not seem to be installed as part of the packaging:
'Unable to import module 'handler': No module named requests'
I assumed it is Serverless that does the pip install, but my resulting zip file is small, and clearly it's not doing it, whether by design or because I am missing something. Is it Lambda that is supposed to do this? If so, what am I missing?
Is there documentation on what is required and how this works? Does Serverless pip-install the dependencies, or does that happen on the AWS Lambda side?
You need to install serverless-python-requirements and Docker:
$ npm install serverless-python-requirements
Then add the following to your serverless.yml
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    dockerizePip: non-linux
Make sure you have your python virtual environment active in CLI:
$ source venv/bin/activate
Install any dependencies with pip. Note that you can tell the venv is active by the (venv) prefix to the left of the prompt:
(venv) $ pip install <NAME>
(venv) $ pip freeze > requirements.txt
Make sure Docker is running, then deploy with Serverless as normal:
$ serverless deploy
What will happen is that serverless-python-requirements will build your Python packages in Docker using a Lambda-like environment, and then zip them up ready to be uploaded with the rest of your code.
Full guide here
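For context, a minimal serverless.yml wired up with the plugin might look like this (service and handler names are placeholders):
service: my-service
provider:
  name: aws
  runtime: python3.6
functions:
  hello:
    handler: handler.lambda_handler
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    dockerizePip: non-linux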
Now you can use serverless-python-requirements. It works both for pure Python and libraries needing native compilation (using Docker):
A Serverless v1.x plugin to automatically bundle dependencies from requirements.txt and make them available in your PYTHONPATH.
Requires Serverless >= v1.12
The Serverless Framework doesn't handle the pip install. See https://stackoverflow.com/a/39791686/1111215 for the solution

Offline Ansible Control Machine installation

I need to set up an Ansible control machine behind a corporate firewall with no internet access, and I can't find documentation for an offline install. I have access on my workstation to download anything I want and can copy it to the target machine. My server is Ubuntu 14.04, but if anyone has documentation for Red Hat or another distro, that would also help.
I did a test on my RH6, so if you have an RH6 machine with internet access to download all the required installation files, and an RH6 installation ISO, you should be able to achieve this.
Assume you have an RH6 machine with internet access; let's call it A. And another one without access: B.
Download Ansible and Jinja2 on A, and copy the files to B.
For Ansible: http://docs.ansible.com/ansible/intro_installation.html
Jinja2 is required by Ansible; download it here:
https://pypi.python.org/pypi/Jinja2
Mount the RH6 installation ISO on your RH6 machine B, then install the required RPMs.
In my case, I installed pip as well:
rpm -ivh python-paramiko-1.7.5-2.1.el6.noarch.rpm libyaml-0.1.3-4.el6_6.x86_64.rpm PyYAML-3.10-3.1.el6.x86_64.rpm perl-TermReadKey-2.30-13.el6.x86_64.rpm perl-Error-0.17015-4.el6.noarch.rpm python-six-1.9.0-2.el6.noarch.rpm
//following required for Git
rpm -ivh --force --nodeps perl-Git-1.7.1-3.el6_4.1.noarch.rpm
rpm -ivh git-1.7.1-3.el6_4.1.x86_64.rpm
Note: I didn't install httplib2 here; you can do it later.
Install MarkupSafe (required by Jinja2):
//install MarkupSafe
tar -xvf MarkupSafe-0.23.tar.gz
cd MarkupSafe-0.23/
sudo python setup.py install
Install Jinja2:
//install Jinja2
tar -xvf Jinja2-2.8.tar.gz
cd Jinja2-2.8/
sudo python setup.py install
On RH6 B, you should be able to run Ansible now:
tar -zxvf ansible.tar.gz
source ./hacking/env-setup
echo "127.0.0.1" > ~/ansible_hosts
export ANSIBLE_INVENTORY=~/ansible_hosts
ansible --version
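On distros where pip is available on both machines, a simpler offline variant is to download the packages on machine A and install them on B without an index (a sketch, assuming compatible Python and pip versions on A and B):
# on machine A (with internet access)
pip download ansible -d ansible-offline/
# copy ansible-offline/ to machine B, then run:
pip install --no-index --find-links=ansible-offline/ ansible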
I know this is a very old question, but I found the answer in this blog post and believe it could help someone out there.
Although the post's approach targets a CentOS/RHEL machine, I believe the procedure is very similar on other distros:
Download the packages (RPM) dependencies
Download the Ansible packages
Upload the downloaded packages to the target machine
Install it using yum localinstall
Or you could install it from source.
