Keeping track of a virtual environment's requirements via pip freeze is simple.
pip freeze > requirements.txt
Currently, however, whenever a new package is added to the venv, it has to be added to the requirements file manually. I usually just run the freeze command again and pipe it into the requirements file, but sometimes I forget, which becomes troublesome when working with copies of the repository in different locations and I have to remember which packages need to be installed!
Whenever a new package is installed in a virtual environment, is there any way to automatically update a requirements.txt file to include this new package?
When using just plain pip to install packages, there is currently no way to make it automatically generate or update a requirements.txt file. It is still a manual process using pip freeze > requirements.txt.
If the purpose is to make sure the installed packages are tracked or registered properly (i.e. tracked in version control for a repository), then you will have to use other tools that "wrap" around pip's functionality.
You have two options.
Option 1: Use a package manager
There are a number of Python package managers that combine "install package" with "record installed packages somewhere".
pipenv
"It automatically creates and manages a virtualenv for your projects, as well as adds/removes packages from your Pipfile as you install/uninstall packages. It also generates the ever-important Pipfile.lock, which is used to produce deterministic builds.
Workflow (see Example Pipenv Workflow)
$ pipenv install some-package
$ cat Pipfile
...
[packages]
some-package = "*"
# Commit modified Pipfile and Pipfile.lock
$ git add Pipfile*
# On some other copy of the repo, install stuff from Pipfile
$ pipenv install
poetry
"poetry is a tool to handle dependency installation as well as building and packaging of Python packages. It only needs one file to do all of that: the new, standardized pyproject.toml. In other words, poetry uses pyproject.toml to replace setup.py, requirements.txt, setup.cfg, MANIFEST.in and the newly added Pipfile.*"
Workflow (see Basic Usage)
$ poetry add requests
$ cat pyproject.toml
...
[tool.poetry.dependencies]
requests = "*"
# Commit modified pyproject.toml
$ git add pyproject.toml
# On some other copy of the repo, install stuff from pyproject.toml
$ poetry install
Option 2: git pre-commit hook
This solution isn't going to happen "during installation of the package", but if the purpose is to make sure your tracked "requirements.txt" is synchronized with your virtual environment, then you can add a git pre-commit hook that:
Generates a separate requirements_check.txt file
Compares requirements_check.txt to your requirements.txt
Aborts your commit if there are differences
Example .git/hooks/pre-commit:
#!/usr/bin/env bash
pip freeze > requirements_check.txt
cmp --silent requirements_check.txt requirements.txt
if [ $? -gt 0 ]; then
    echo "There are packages in the env not in requirements.txt"
    echo "Aborting commit"
    rm requirements_check.txt
    exit 1
fi
rm requirements_check.txt
exit 0
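Note that git only runs the hook if the file is executable, so remember to make it so:
chmod +x .git/hooks/pre-commit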
Output:
$ git status
...
nothing to commit, working tree clean
$ pip install pydantic
$ git add .
$ git commit
There are packages in the env not in requirements.txt
Aborting commit
$ pip freeze > requirements.txt
$ git add .
$ git commit -m "Update requirements.txt"
[master 313b685] Update requirements.txt
1 file changed, 1 insertion(+)
Instead of plain pip, use pipenv. It is a much better dependency manager that enforces best practices and removes the manual work.
To learn how to use pipenv, read this article.
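If you still need a plain requirements.txt for other tooling, pipenv can also export one from its lock file; as a rough sketch, the exact command depends on your pipenv version:
$ pipenv requirements > requirements.txt   # newer pipenv releases
$ pipenv lock -r > requirements.txt        # older releases, where -r was still supported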
Related
I am trying to use two requirements.txt in environment.yml to create a conda environment.
One of them has --no-index --find-links=/home/myuser/mydownloadedpackage.
For some reason, it always goes and downloads that package, even though I want it installed from my downloaded directory.
Is there a way that I can force it to use --find-links for one requirements.txt and the global configuration for the other?
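For reference, a minimal sketch of the kind of environment.yml the question describes (the environment name, file names, and Python version are illustrative):
name: myenv
dependencies:
  - python=3.10
  - pip
  - pip:
    - -r requirements.txt
    - -r requirements_local.txt  # this one contains --no-index --find-links=/home/myuser/mydownloadedpackage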
I am running
poetry install
from within a local Python virtualenv ".venv". The project is supposed to create an executable hercl that becomes available on the user's path. Two questions:
What options / configuration determine where the executable is installed? I'm not sure whether it is supposed to go into the local .venv/bin or into the pyenv shims.
Since poetry reuses / delegates much of its functionality to pip, the feature I'm asking about may actually come from pip itself. I have not been able to find anything in either the poetry or pip documentation about this shell script installation. How is this achieved?
Update
After running pip install outside of the virtualenv, it pulls from PyPI and creates a bash script ~/.pyenv/shims/my_app.
In my case my_app is "hercl", and we see this:
$ which hercl
~/.pyenv/shims/hercl
Its contents are:
$ cat $(which hercl)
#!/usr/bin/env bash
set -e
[ -n "$PYENV_DEBUG" ] && set -x
program="${0##*/}"
export PYENV_ROOT="~/.pyenv"
exec "~/.pyenv/libexec/pyenv" exec "$program" "$#"
Somehow this script is installed when running pip install: I am wondering how pip knows to do this. Is it from poetry's pyproject.toml? Is it from a setup.py or setup.cfg associated with pip?
Another update: @sinoroc has another tack on this: poetry has a scripts section that I did not notice (I'm a newbie with that tool).
[tool.poetry.scripts]
hercl = "hercl.hercl:main"
hercl is the command that I was looking for.
But there was also an actual bash script that would launch hercl, which got installed under the shims as part of the virtualenv. I think that script was in the pyenv shims directory.
In a Poetry-based project such executable scripts are defined in the scripts section of pyproject.toml.
If a virtual environment is active when installing the application then the executable is installed in the virtual environment's bin directory. So it is available only while the virtual environment is "active".
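A rough illustration of that second point, assuming an in-project .venv and the scripts entry shown above (the paths are illustrative):
$ poetry install
$ source .venv/bin/activate
$ which hercl
/path/to/project/.venv/bin/hercl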
OPAM is a great Package Manager for OCaml.
Is there a way to install a given list of packages like pip does in Python (with the command pip install -r requirements.txt)?
We have a small Git repo shared with several people and it would be nice to just install all project dependencies in one go.
And yes, we might have a little shell script or list the packages in a .txt file and pipe it to opam install ... but there might be a better solution.
Thanks
Assuming you have your dependencies specified in a NAME.opam (or just opam) file in your project, you can run
opam install . --deps-only
This will install all dependencies for your package (or packages if you have several opam files in your project).
By default opam ignores uncommitted changes, so if you want to run this command with modified opam files you'll need to add --working-dir.
Optionally, you could lock down the versions of the dependencies used by running opam lock; this will create .opam.locked files. opam-lock is a separate plugin in 2.0.5 (and likely until 2.1), so opam will prompt to install it.
With lock files present, you should also add --locked to opam install to ask it to use the lock files and install exactly the same versions.
I would also recommend adding -j X where X is the number of cores you have available, that will speed things up.
I typically have the following in my Makefiles:
deps:
	opam install . --deps-only --locked --working-dir
The syntax of .opam files is described here.
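For reference, a minimal NAME.opam with a depends field might look roughly like this (the package names and version bounds are only illustrative):
opam-version: "2.0"
synopsis: "My project"
depends: [
  "ocaml" {>= "4.08"}
  "dune" {>= "2.0"}
  "lwt"
]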
How can I install a package under development to an Anaconda environment?
With pip:
pip install -e /path/to/mypackage
or with regular setuptools:
python /path/to/mypackage/setup.py develop
There is also conda develop available now.
http://conda.pydata.org/docs/commands/build/conda-develop.html
Update in 2019: conda develop hasn't been maintained and is not recommended. See https://github.com/conda/conda-build/issues/1992
The recommendation is to use python setup.py develop or pip install -e . instead.
Using either of those will work with Anaconda. Make sure that you have pip or setuptools installed into the conda environment you want to install into, and that you have it activated.
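As a minimal sketch (the environment name and package path are arbitrary), the sequence inside a conda environment might look like:
$ conda create -n myenv python pip
$ conda activate myenv
$ cd /path/to/mypackage
$ pip install -e .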
This is the equivalent of pip install -e .:
conda install conda-build
conda develop .
As explained in this GitHub issue thread, because of build isolation and dependency installation, Anaconda developers recommend using:
pip install --no-build-isolation --no-deps -e .
Build / Host Environment
To create build and host environments and a build script go to your recipe directory and use
conda debug /path/to/your/recipe-directory
as documented here. This will print an instructive message like
################################################################################
Build and/or host environments created for debugging. To enter a debugging environment:
cd /home/UserName/miniconda3/conda-bld/debug_1542385789430/work && source /home/UserName/miniconda3/conda-bld/debug_1542385789430/work/build_env_setup.sh
To run your build, you might want to start with running the conda_build.sh file.
################################################################################
(The message might incorrectly tell you that it created a test environment.) Your source code has been copied to the .../work directory, and there is also a conda_build.sh script. Note that sourcing the build_env_setup.sh will load both the build and host environments.
You can work on your code and your recipe and build with the conda_build.sh, but you won't get a proper conda package, as far as I know. When you are finished, you can remove the debug environment:
conda deactivate # maybe twice
conda build purge
Test Environment
To get the test environment, you have to build the package first and then debug that. This might be useful to fix your test files.
conda build /path/to/your/recipe-directory # creates mypackage*.tar.bz2
# find file location of mypackage*.tar.bz2 with:
conda search --info --use-local mypackage # look at the url row for the path
cd /path/to/miniconda3/conda-bld/linux-64/ # go to that path, can be different
conda debug mypackage*.tar.bz2
This will print, e.g.:
################################################################################
Test environment created for debugging. To enter a debugging environment:
cd /home/UserName/miniconda3/conda-bld/debug_1542385789430/test_tmp && source /home/UserName/miniconda3/conda-bld/debug_1542385789430/work/conda_test_env_vars.sh
To run your tests, you might want to start with running the conda_test_runner.sh file.
################################################################################
Again, remove with
conda deactivate
conda build purge
Run Environment
This is not actually debugging, but the general process of building and installing a local package. With the run environment you can check whether all dependencies are specified in the requirements/run section. Pinning can also be an issue.
(base) $ conda build /path/to/your/recipe-directory
(base) $ conda create --name package-env --use-local mypackage
(base) $ conda activate package-env
(package-env) $ python
>>> import mypackage
You can also list the dependencies of your package with (man page)
conda search --info --use-local mypackage
A last hint: if you want to know the versions of the dependencies and see whether pinning works, try (man page)
conda render /path/to/your/recipe-directory
I have a virtualenv for my Django project, but when I run pip freeze, I get what must be a global site-packages list; it includes far too many packages, like Ubuntu packages and a lot of irrelevant stuff. This happens whether the virtualenv is active or not. My site-packages directory looks a bit slim too, so I wonder whether the venv has been working at all.
(env)~/code/django/ssc/dev/env/lib/python2.7/site-packages> ls
django
Django-1.4-py2.7.egg-info
easy-install.pth
pip-1.0.2-py2.7.egg
setuptools-0.6c11-py2.7.egg
setuptools.pth
What's my problem?
If your virtual environment has access to the system's site-packages dir (i.e. you used virtualenv --system-site-packages), then it's normal for the list to be a rather long one.
Compare the following:
$ virtualenv --system-site-packages v1 && source v1/bin/activate
$ (v1) pip freeze | wc -l # 100
$ virtualenv v2 && source v2/bin/activate
$ (v2) pip freeze | wc -l # 2
Can you try recreating the virtualenv?
Alternatively, adding a no-global-site-packages.txt file should tell the virtualenv to ignore the global site-packages:
$ touch $VIRTUAL_ENV/lib/python${version}/no-global-site-packages.txt
I don't understand why the most concise option was just left in the comments. Since I almost missed it, I will put it here as a separate answer with some tweaks.
You can add the --local flag to pip freeze if you are running a virtual env with system-site-packages enabled.
So, if you had:
py -m venv --system-site-packages env
To make sure you are not getting all system deps into your requirements.txt, just run:
python -m pip freeze --local > requirements.txt
Another, slightly more elaborate option, still viable because this setting is not supposed to change all that often, would be to go into the pyvenv.cfg file located in your virtual env directory and manually change:
include-system-site-packages = true/false
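For reference, the pyvenv.cfg of an environment created with --system-site-packages looks roughly like this (paths and versions will differ on your machine):
home = /usr/bin
include-system-site-packages = true
version = 3.10.12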