Build aliases for a Python project installed via pip - pip

What is the simplest way to make an alias for python -m myproject when the user installs the package myproject via pip?
Can Poetry manage that?
Reminder: python -m myproject runs myproject/__main__.py.

It is possible. With Poetry, this is solved by its scripts feature in pyproject.toml:
[tool.poetry.scripts]
my-command = "myproject.__main__:my_main"
where my_main is a function in myproject/__main__.py:
def my_main():
    # ...

if __name__ == '__main__':
    my_main()
From then on, you can run poetry install again and the my-command executable becomes available. Later, when someone pip-installs your project, they also get the my-command executable.
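For projects that don't use Poetry, a PEP 621-style pyproject.toml (with setuptools or another PEP 621-aware backend) exposes the same kind of entry point via a [project.scripts] table. A minimal sketch, assuming the same my_main function:

```toml
[project.scripts]
my-command = "myproject.__main__:my_main"
```

Either way, the installer generates a my-command wrapper on the user's PATH that imports myproject.__main__ and calls my_main().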

Related

pyproject.toml doesn't support customized develop/build?

I can customize the install process in develop mode with a setup.py like the one below:
from setuptools import setup
from setuptools.command import develop

class CustomDevelop(develop.develop, object):
    """
    Class needed for "pip install -e ."
    """
    def run(self):
        super(CustomDevelop, self).run()
        print("CustomDevelop=====================")

install_requires = ["numpy"]

setup(
    name="example_package",
    version="0.0.1",
    packages=["example_package"],
    cmdclass={'develop': CustomDevelop},
)
When running python3 -m pip install -vvv -e ., the output is
Running setup.py develop for example-package
Running command python setup.py develop
running develop
running egg_info
writing example_package.egg-info/PKG-INFO
writing dependency_links to example_package.egg-info/dependency_links.txt
writing top-level names to example_package.egg-info/top_level.txt
reading manifest file 'example_package.egg-info/SOURCES.txt'
writing manifest file 'example_package.egg-info/SOURCES.txt'
running build_ext
Creating /home/example_package/venv_example_package/lib/python3.8/site-packages/example-package.egg-link (link to .)
Adding example-package 0.0.1 to easy-install.pth file
Installed /home/lai/example_package
CustomDevelop=====================
However, this is not possible under PEP 518, which introduces pyproject.toml. Is there a workaround for this issue?

unittest fails when calling conda environment from outside the environment, but succeeds if the environment is active

I'm writing some unittests for a project, and the following works:
> conda activate env_name
(env_name) > python -m unittest
But this gives me a ModuleNotFoundError:
> conda run -n env_name python -m unittest
The module that is failing is called within my project, like this:
import snowflake.connector

def add(a, b):
    return a + b
It may be worth noting that if I change the import to numpy, I don't get this issue. One possibility may be that in the site-packages for my env, numpy's directory is just named "numpy", but snowflake.connector's is named "snowflake_connector_python-2.7.4.dist-info".
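A quick way to see which interpreter and module locations conda run is actually using is a small diagnostic script. This is a sketch; the module names passed to it are illustrative:

```python
import importlib.util
import sys

def diagnose(module_name):
    """Report the running interpreter and where (if anywhere) a module resolves from."""
    try:
        spec = importlib.util.find_spec(module_name)
    except ModuleNotFoundError:
        # find_spec imports parent packages for dotted names, which can itself fail
        spec = None
    return sys.executable, (spec.origin if spec else None)

if __name__ == "__main__":
    exe, origin = diagnose("snowflake.connector")
    print("interpreter:", exe)
    print("module origin:", origin)
```

Comparing the output of conda run -n env_name python diagnose.py against running the same script inside the activated environment usually shows whether conda run picked up a different interpreter or search path.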

Brew Formula install with multiple files

I've created a Python tool and want to install it via brew. Creating the formula worked fine at first, when I simply had one Python file named myTool. Then I separated the code into more files as it became larger and more complex.
How do I set up the install to bundle those files? Right now the imports are failing because the other files are not found.
My current install
def install
  bin.install 'myTool'
end
The error shown when running the brew installed tool
from myModule import someFunc, someOtherFunc ModuleNotFoundError: No module named 'myModule'
The current setup only installs the angler script without any of the other Python modules, which results in the ModuleNotFoundError. Here is my suggestion:
def install
  # Determine the Python version being used,
  # e.g. for python3.10, xy = 3.10
  xy = Language::Python.major_minor_version "python3"
  # Where to install the Python files
  packages = libexec/"lib/python#{xy}/site-packages/angler"
  # Install each file present in the right location
  %w[angler anglerEnums.py anglerGen.py anglerHelperFunctions.py].each do |file|
    packages.install file
  end
  # Create a symlink to the angler script
  bin.install_symlink packages/"angler"
end

Unable to use locally built python package in development mode

I'm trying to spin up a super simple package as a proof of concept and I can't see what I'm missing.
My aim is to be able to do the following:
$ python3
>>> import mypackage
>>> mypackage.add2(2)
4
Github link
I created a public repo to reproduce the issue here
git clone https://github.com/OliverFarren/testPackage
Problem
I have a basic file structure as follows:
src/
mypackage/
__init__.py
mymodule.py
setup.cfg
setup.py
pyproject.toml
setup.cfg is pretty boilerplate, taken from here.
setup.py is just a shim to allow pip install in editable mode:
import setuptools
setuptools.setup()
I ran the following commands at the top level directory in my Pycharm virtual env:
python3 -m pip install --upgrade build
python3 -m build
That created my dist and build directories and the mypackage.egg-info directory, so now the layout looks like this:
testpackage
    build/
        bdist.linux-x86_64/
    dist/
        mypackage-0.1.0.tar.gz
        mypackage-0.1.0-py3-none-any.whl
    src/
        mypackage/
            __init__.py
            mymodule.py
        mypackage.egg-info/
    setup.cfg
    setup.py
    pyproject.toml
I've then tried to install the package as follows:
sudo pip3 install -e .
Installing build dependencies ... done
Getting requirements to build wheel ... done
Preparing wheel metadata ... done
Installing collected packages: mypackage
Running setup.py develop for mypackage
Successfully installed mypackage
Which I think should have installed it. Except when I try to import the package, I get a ModuleNotFoundError.
I'm wondering whether this is a permissions issue of some sort. When I try:
sudo pip3 list
pip3 list
I notice I'm getting different outputs. I can see my package present in the list and in my sys.path:
~/testpackage/src/mypackage
I just don't understand what I'm missing here. Any advice would be appreciated!
Ok, so I found the issue. Posting the solution and leaving the GitHub repo live, with the fix, in case anyone else has this issue.
It turns out my setup.cfg wasn't boilerplate after all.
Here was my incorrect code:
[metadata]
# replace with your username:
name = mypackage
author = Oliver Farren
version = 0.1.0
description = Test Package
classifiers =
    Programming Language :: Python :: 3
    License :: OSI Approved :: MIT License
    Operating System :: OS Independent

[options]
package_dir =
    = src/mypackage
packages = find:
python_requires = >=3.6

[options.packages.find]
where = src/mypackage
src/mypackage should be src; it was looking inside the package for packages.
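For reference, applying that fix gives the following [options] sections (a sketch of the corrected configuration described above):

```ini
[options]
package_dir =
    = src
packages = find:
python_requires = >=3.6

[options.packages.find]
where = src
```

With this mapping, setuptools looks for packages under src/ and finds mypackage there, rather than searching inside mypackage itself.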
A key step in debugging this issue was checking the mypackage.egg-info files. SOURCES.txt contains a list of all the files in the built package, and I could clearly see that in the incorrect build, src/mypackage/mymodule.py and src/mypackage/__init__.py were missing. So the package was correctly installed by pip, but being empty, it made for a very confusing error message.
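A complementary check, once a distribution is installed, is to list exactly which files pip recorded for it via importlib.metadata (available since Python 3.8); an empty or missing module list points at a packaging problem rather than an install problem. A minimal sketch:

```python
from importlib.metadata import files, PackageNotFoundError

def installed_files(dist_name):
    """Return the files recorded for an installed distribution, or None if not found."""
    try:
        recorded = files(dist_name)
    except PackageNotFoundError:
        return None
    # files() itself returns None when the installer left no RECORD metadata
    return [str(path) for path in recorded] if recorded is not None else None
```

For the case above, installed_files("mypackage") would have shown the dist-info metadata but no mymodule.py, confirming the empty build.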

AWS Lambda + Python-ldap

I am trying to use python-ldap with AWS Lambda. I downloaded the tarball from https://pypi.python.org/pypi/python-ldap and wrote the following code to use it in Lambda (lambda_function.py):
from ldap_dir.ldap_query.Lib import ldap
and uploaded the zip to Lambda.
where my directory structure is
ldap_dir -> ldap_query -> Lib -> ldap folder
ldap_dir -> lambda_function.py
Am I missing out something?
python-ldap is built on top of native OpenLDAP libraries. This article - even though unrelated to the python ldap module - describes how to bundle Python packages that have native dependencies.
The outline of this is the following:
Create an Amazon EC2 instance with Amazon Linux
Install compiler packages as well as the OpenLDAP developer package. yum install -y gcc openldap-devel
Create a virtual environment: virtualenv env
Activate the virtual environment: source env/bin/activate
Upgrade pip (I am not sure this is necessary, but I got a warning without this): pip install --upgrade pip
Install python-ldap: pip install python-ldap
Create a handler Python script, for example, lambda.py with the following code:
import os
import subprocess

libdir = os.path.join(os.getcwd(), 'local', 'lib')

def handler(event, context):
    command = 'LD_LIBRARY_PATH={} python ldap.py'.format(libdir)
    subprocess.call(command, shell=True)
Implement your LDAP function, in this example ldap.py:
import ldap
print ldap.PORT
Create a zip package, let's say ldap.zip:
zip -9 ~/ldap.zip ldap.py
zip -9 ~/ldap.zip lambda.py
cd env/lib/python2.7/site-packages
zip -r9 ~/ldap.zip *
cd ../../../lib64/python2.7/site-packages
zip -r9 ~/ldap.zip *
Download the zip to your system (or put it into an S3 bucket). Now you can create your Lambda function using lambda.handler as the function name and use the zip file as the code.
I hope this helps.
One more step/check to add to the solution above:
You might still get No module named '_ldap'. If so, check that the Python version you installed on the local/EC2 machine is the same as the runtime version on Lambda.
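That version check can be scripted: run the same snippet on the build machine and as a test Lambda handler and compare the results (a minimal sketch):

```python
import sys

def runtime_tag():
    """Return an identifier like 'python2.7' or 'python3.8' for the current interpreter."""
    return "python{}.{}".format(sys.version_info[0], sys.version_info[1])

if __name__ == "__main__":
    print(runtime_tag())
```

If the two tags differ, the compiled _ldap extension built on EC2 will not import on Lambda, since CPython extension modules are tied to the interpreter version they were built against.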
