I have created a userbot that translates messages from one channel to another. I want to deploy this Python file to Heroku.
How can I install the googletrans, unidecode, pyrogram, and tgcrypto libraries for a Heroku account?
In the main directory, add the following files:
requirements.txt
This file should list all the third-party modules you need.
runtime.txt
python-3.8.3
Procfile
worker: python3 main.py
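For the libraries mentioned in the question, requirements.txt could look like the fragment below. The entries are unpinned here for brevity; in practice you should pin each package to the version you actually tested against (e.g. `pyrogram==2.0.106`) so Heroku builds are reproducible:

```
googletrans
unidecode
pyrogram
tgcrypto
```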
I am trying to generate requirements.txt for someone to replicate my environment. As you may know, the standard way is
pip freeze > requirements.txt
I noticed that this will list all the packages, including the dependencies of installed packages, which makes the list unnecessarily large. I then browsed around and came across pip-chill, which only lists top-level installed packages in requirements.txt.
Now, from my understanding when someone tries to replicate the environment with pip install -r requirements.txt, this will automatically install the dependencies of installed packages.
If this is true, this means it is safe to use pip-chill instead of pip to generate the requirements.txt. My question is, is there any other risk of omitting dependencies of installed packages using pip-chill that I am missing here?
I believe using pip-compile from pip-tools is a good practice when constructing your requirements.txt. This will make sure that builds are predictable and deterministic.
The pip-compile command lets you compile a requirements.txt file from your dependencies, specified in either setup.py or requirements.in
Here are my recommended steps for constructing your requirements.txt (if using requirements.in):
Create a virtual env and install pip-tools there
$ source /path/to/venv/bin/activate
(venv)$ python -m pip install pip-tools
Specify your application/project's direct dependencies in your requirements.in file:
# requirements.in
requests
boto3==1.16.51
Use pip-compile to generate requirements.txt
$ pip-compile --output-file=- > requirements.txt
Your generated requirements.txt will contain:
#
# This file is autogenerated by pip-compile
# To update, run:
#
#    pip-compile --output-file=-
#
boto3==1.16.51
    # via -r requirements.in
botocore==1.19.51
    # via
    #   boto3
    #   s3transfer
certifi==2020.12.5
    # via requests
chardet==4.0.0
    # via requests
idna==2.10
    # via requests
jmespath==0.10.0
    # via
    #   boto3
    #   botocore
python-dateutil==2.8.1
    # via botocore
requests==2.25.1
    # via -r requirements.in
s3transfer==0.3.3
    # via boto3
six==1.15.0
    # via python-dateutil
urllib3==1.26.2
    # via
    #   botocore
    #   requests
Your application should always work with the dependencies installed from this generated requirements.txt. If you have to update a dependency, you just need to update the requirements.in file and rerun pip-compile. I believe this is a much better approach than doing pip freeze > requirements.txt, which I see some people do.
I guess the main advantage of using this is that you can keep track of the actual direct dependencies of your project in a separate requirements.in file.
I find this very similar to how node modules/dependencies are being managed in a node app project with the package.json (requirements.in) and package-lock.json (requirements.txt).
From my point of view, a requirements.txt file should list all dependencies: the direct dependencies as well as their own (indirect, transitive) dependencies. If for some reason only direct dependencies are wanted, there are tools that can help with that. From a cursory look, pip-chill seems inadequate, since it doesn't actually look at the code to figure out which packages are directly imported. Better to look at projects such as pipreqs or pigar; they seem more accurate at figuring out what the actual direct dependencies are (based on the imports in your code).
But at the end of the day you should curate such lists by hand. When writing the code you choose carefully which packages you want to import, with the same care you should curate a list of the projects (and their versions) containing those packages. Tools can help, but the developer knows better.
I was trying to deploy a python script to Heroku that uses the time module in Python. I put the time module in requirements.txt and Heroku tries to collect it but I get this error:
No matching distribution found for time
Why is this happening, and how can I fix it?
I put the time module in requirements.txt
Don't do that.
time is part of Python's standard library; it comes with Python and is available automatically.
requirements.txt is only for third-party modules. Remove time from your requirements.txt, commit, and redeploy.
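A quick way to confirm this locally: the module imports and works without any installation step at all.

```python
import time

# time ships with Python's standard library, so this runs in any
# Python environment with no entry in requirements.txt.
start = time.time()
time.sleep(0.1)
elapsed = time.time() - start
print(f"slept for about {elapsed:.2f} seconds")
```

If `pip install time` (or a `time` line in requirements.txt) fails with "No matching distribution found", that is pip telling you no such package exists on PyPI, because none is needed.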
time is a module that is already included with Python, which is why you cannot install it using pip. If you leave it out of requirements.txt, everything will work fine: Heroku provides the Python runtime, and with it the time module you need.
Make sure to read the Python documentation: https://docs.python.org/3/library/time.html
I'm having issues with pip failing to install editable packages from a local directory. I was able to install the packages manually using commands like pip install -e pkg1. I wanted to use a requirements.txt file to automate future installs, because my coworkers will be working on the same packages. My ideal development workflow is for each developer to checkout the source from version control and run pip install -r requirements.txt. The requirements file would designate all the packages as editable so we can import our code without the need for .pth files but we wouldn't have to keep updating our environments. And by using namespace packages, we can decouple the import semantics from the file structures.
But it's not working out.
I have a directory with packages like so:
index/
    pkg1/
        src/
            pkg1/
                __init__.py
                pkg1.py
        setup.py
    pkg2/
        src/
    ...etc.
Each setup.py file contains something like:
from setuptools import setup, find_packages

setup(
    name="pkg1",
    version="0.1",
    packages=find_packages('src'),
    package_dir={'': 'src'},
)
I generated my requirements.txt file using pip freeze, which yielded something like this:
# Editable install with no version control (pkg1==0.1)
-e c:\source\pkg1
# Editable install with no version control (pkg2==0.1)
-e c:\source\pkg2
...etc...
I was surprised when pip choked on the requirements file that it created for itself:
(venv) C:\Source>pip install -r requirements.txt
c:\source\pkg1 should either be a path to a local project or a VCS url beginning with svn+, git+, hg+, or bzr+
Also, some of our packages rely on other of our packages and pip has been absolutely useless at identifying these dependencies. I have resorted to manually installing packages in dependency order.
Maybe I'm pushing pip to its limits here. The documentation and help online have not been helpful so far. Most sources discuss editable installation, installation from requirements files, package dependencies, or namespace packages, but never all these concepts at once. Usually when online help is scarce, it means that I'm trying to use a tool for something it wasn't intended to do, or that I've discovered a bug.
Is this development process viable? Do I need to make a private package index or something?
I'm working on a support library for a large Python project which heavily uses relative imports by appending various project directories to sys.path.
Using The Hitchhiker's Guide to Packaging as a template I attempted to create a package structure which will allow me to do a local install, but can easily be changed to a global install later if desired.
One of the dependencies of my package is the pyasn1 package for the encoding and decoding of ASN.1 annotated objects. I have to include the pyasn1 library separately as the version supported by the CentOS 6.3 default repositories is one major version back and has known bugs that will break my custom package.
The top-level of the library structure is as follows:
MyLibrary/
    setup.py
    setup.cfg
    LICENSE.txt
    README.txt
    MyCustomPackage/
    pyasn1-0.1.6/
In my setup configuration file I define the install directory for my library to be a local directory called .lib. This is desirable as it allows me to do absolute imports by running the command import site; site.addsitedir("MyLibrary/.lib") in the project's main application without requiring our engineers to pass command line arguments to the setup script.
setup.cfg
[install]
install-lib=.lib
setup.py
setup(
    name='MyLibrary',
    version='0.1a',
    package_dir={'pyasn1': 'pyasn1-0.1.6/pyasn1'},
    packages=[
        'MyCustomPackage',
        'pyasn1',
        'pyasn1.codec',
        'pyasn1.compat',
        'pyasn1.codec.ber',
        'pyasn1.codec.cer',
        'pyasn1.codec.der',
        'pyasn1.type'
    ],
    license='',
    long_description=open('README.txt').read(),
    data_files=[]
)
The problem I've run into with doing the installation this way is that when my package tries to import pyasn1 it imports the global version and ignores the locally installed version.
As a possible workaround I have tried installing the pyasn1 package under a different name than the global package (eg pyasn1_0_1_6) by doing package_dir = {'pyasn1_0_1_6':'pyasn1-0.1.6/pyasn1'}. However, this fails since the imports used internally to the pyasn1 package do not use the pyasn1_0_1_6 name.
Is there some way to either a) force Python to import a locally installed package over a globally installed one or b) force a package to install under a different name?
Use virtualenv to ensure that your application runs in a fully known configuration, independent of the OS's versions of libraries.
EDIT: a quick (Unix) solution is setting the PYTHONPATH environment variable, which works just like PATH does for Python modules (a module is loaded from the first path in which it is found, so simply prepend your directory to PYTHONPATH). Anyway, I strongly recommend you proceed with virtualenv, since it was specifically engineered for handling situations like the one you are facing.
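The same effect can also be had from inside Python without touching the environment; a minimal sketch using the `.lib` path from the question (substitute your own layout):

```python
import sys

# Prepend the local install directory so it shadows any globally
# installed copy of the same package ("MyLibrary/.lib" is the path
# from the question; adjust it to your layout).
sys.path.insert(0, "MyLibrary/.lib")

# Any later `import pyasn1` now resolves against MyLibrary/.lib first,
# because sys.path is searched in order.
print(sys.path[0])
```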
Rationale
The process is easily automatable if you write a setuptools script specifying dependencies with install_requires. For a complete example, refer to this one I wrote
Setup
Note that you can easily insert the steps below in a setup.sh shell script.
First create a virtualenv and enter it:
$ virtualenv $name
$ cd $name
Activate it:
$ source bin/activate
Now cd to your project directory and run the installer script:
$ cd $my_project_dir
$ python ./setup.py install --prefix $path_to_virtualenv
Note the --prefix $path_to_virtualenv, which tells the script to install into the virtualenv instead of system-wide. Call this after activating the virtualenv. All the dependencies are then automatically downloaded and installed into the virtualenv.
Then you are done. When you want to leave the virtualenv, issue:
$ deactivate
On subsequent calls, you will only need to activate the virtualenv (step 2), maybe using a runawesomeproject.sh if you really want.
As noted on the virtualenv website, you should use virtualenv >= 1.9, as previous versions did not download dependencies via HTTPS. If you consider plain HTTP to be sufficient, then any version should do.
You might also try relocatable virtualenvs: set one up and copy the folder to your host. Note, however, that this feature is still experimental.
I have a flask app where I'm trying to automate deployment to EC2.
Not a big deal, but is there a setting in either Fabric or Distribute that reads the requirements.txt file directly for the setup.py, so I don't have to spell everything out in the setup(install_requires=[]) list, rather than writing a file reader for my requirements.txt? If not, do people have recommendations or suggestions on auto-deployment and with pip?
I'm reviewing from here and here.
Not a big deal, but is there a setting in either Fabric or Distribute
that reads the requirements.txt file directly for the setup.py, so I
don't have to spell everything out in the setup(install_requires=[])
list, rather than writing a file reader for my requirements.txt?
You might still want to checkout frb's answer to the duplicate question How can I reference requirements.txt for the install_requires kwarg in setuptools.setup?, which provides a straight forward two line solution for writing a file reader.
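For reference, the file-reader approach amounts to only a few lines. A minimal sketch (the helper name and the commented project metadata are placeholders, not taken from the linked answer):

```python
def parse_requirements(path="requirements.txt"):
    """Return the non-empty, non-comment lines of a pip requirements file."""
    with open(path) as f:
        return [line.strip() for line in f
                if line.strip() and not line.startswith("#")]

# In setup.py you would then write something like:
#
# from setuptools import setup, find_packages
#
# setup(
#     name="yourapplication",              # placeholder name
#     packages=find_packages(),
#     install_requires=parse_requirements(),
# )
```

Note that requirements.txt can contain lines (such as `-e` or `-r` directives) that are not valid install_requires entries, so this simple reader only suits plain `package==version` files.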
If you really want to avoid this, you could alternatively add the common pip install -r requirements.txt to your fabfile.py, e.g.:
# ...
# create a place where we can unzip the tarball, then enter
# that directory and unzip it
run('mkdir /tmp/yourapplication')
with cd('/tmp/yourapplication'):
    run('tar xzf /tmp/yourapplication.tar.gz')
    # now install the requirements with our virtual environment's
    # pip installer
    run('/var/www/yourapplication/env/scripts/pip install -r requirements.txt')
    # now setup the package with our virtual environment's
    # python interpreter
    run('/var/www/yourapplication/env/bin/python setup.py install')