How can I activate virtualenv in a Makefile?
I have tried:
venv:
    virtualenv venv

active:
    source venv/bin/activate
And I've also tried:
active:
    . venv/bin/activate
and it doesn't activate virtualenv.
Here's how to do it:
You can execute a shell command in a Makefile using ();
E.g.
echoTarget:
    (echo "I'm an echo")
Just be sure to put a tab character before each line in the shell command.
i.e. you will need a tab before (echo "I'm an echo")
Here's what will work for activating virtualenv:
activate:
    ( \
      source path/to/virtualenv/activate; \
      pip install -r requirements.txt; \
    )
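Note that Make runs recipes with /bin/sh by default, where source may not be available; here is a sketch of the same recipe using the POSIX . builtin instead (same placeholder path as above):
activate:
    ( \
      . path/to/virtualenv/activate; \
      pip install -r requirements.txt; \
    )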
UPD Mar 14 '21
One more variant for pip install inside virtualenv:
# Makefile
all: install run

install: venv
    : # Activate venv and install something inside
    . venv/bin/activate && pip install -r requirements.txt
    : # Other commands here

venv:
    : # Create venv if it doesn't exist
    : # test -d venv || virtualenv -p python3 --no-site-packages venv
    test -d venv || python3 -m venv venv

run:
    : # Run your app here, e.g.
    : # determine if we are in venv,
    : # see https://stackoverflow.com/q/1871549
    . venv/bin/activate && pip -V
    : # Or see #wizurd's answer to execute multiple commands
    . venv/bin/activate && ( \
        python3 -c 'import sys; print(sys.prefix)'; \
        pip3 -V \
    )

clean:
    rm -rf venv
    find -iname "*.pyc" -delete
So you can run make in the different 'standard' ways:
make - runs the default target (all)
make venv - to just create the virtualenv
make install - to make the venv and execute the install commands
make run - to execute your app inside the venv
make clean - to remove the venv and compiled python files
When it is time to execute recipes to update a target, they are executed by invoking a new sub-shell for each line of the recipe. --- GNU Make
Since each line of the recipe executes in a separate sub-shell, we have to activate the environment and run the Python code on the same line of the recipe.
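A minimal sketch of that pitfall (the target names are made up): activating on one recipe line does not carry over to the next line, while chaining with && keeps everything in the same sub-shell:
broken:
    . venv/bin/activate
    pip -V    # new sub-shell: the activation from the previous line is gone

works:
    . venv/bin/activate && pip -V    # same sub-shell: pip comes from the venv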
A simple Python script that prints the prefix of the current Python environment (filename: whichpy.py):
import sys
print(sys.prefix)
Running inside the Python virtual environment (Make recipes run with sh instead of bash, so using . to activate the virtual environment is the correct syntax):
test:
    . pyth3.6/bin/activate && python3.6 whichpy.py
    . pyth3.6/bin/activate; python3.6 whichpy.py
Both of the above recipes are acceptable, and we are free to use a backslash/newline to split one recipe line across multiple lines.
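For example, the first recipe above can equivalently be written with a backslash continuation and still runs in a single sub-shell:
test:
    . pyth3.6/bin/activate && \
    python3.6 whichpy.py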
Makefiles can't activate an environment directly. This is what worked for me:
activate:
    bash -c "venv/bin/activate"
If you get a permission denied error, make venv/bin/activate executable:
chmod +x venv/bin/activate
I built a simple way to do that by combining an ACTIVATE_LINUX:=. venv/bin/activate variable with .ONESHELL: at the beginning of the Makefile:
.ONESHELL:
.PHONY: clean venv tests scripts
ACTIVATE_LINUX:=. venv/bin/activate
setup: venv install scripts
venv:
    echo "Creating python virtual environment in 'venv' folder..."
    python3 -m venv venv

install:
    echo "Installing python packages..."
    $(ACTIVATE_LINUX)
    pip install -r requirements.txt

clean:
    echo "Cleaning previous python virtual environment..."
    rm -rf venv
As pointed out in other answers, Make recipes run with sh instead of bash, so using . in the ACTIVATE_LINUX variable instead of source to activate the virtual environment is the correct syntax.
I combined this strategy with .ONESHELL:. As shown in this StackOverflow answer, Make normally runs every command in a recipe in a different subshell. However, setting .ONESHELL: will run all the commands in a recipe in the same subshell, allowing you to activate a virtual environment and then run commands inside it. That's exactly what is happening in the make install target, and this approach can be applied wherever you need an activated environment to run some Python code in your project.
So you can run the following make targets:
make - runs the default target (setup);
make venv - to just create the virtual environment;
make install - to activate the venv and execute the pip install command; and
make clean - to remove the previous venv and its python binaries.
Related
I created a bash script to scaffold boilerplates for several apps (React, Next.js, django, etc).
In part of my django_install() function, I run the following (reduced here):
mkdir "$app_name"
cd ./"$app_name" || exit 0
gh repo clone <my-repo-boilerplate> .
rm -rf .git
pipenv install
pipenv install --dev
exit 0
I would also like to execute pipenv shell and some commands that need to run inside that virtual environment, as my boilerplate has some custom scripts that I'd like to run in order to automate the process completely.
I understand I cannot just run pipenv shell or python manage.py [etc...] in my bash script.
How could I achieve it?
I think you can use pipenv run for that. E.g.:
pipenv run python manage.py [etc...]
This will run python manage.py within the virtual environment created by pipenv.
https://pipenv.pypa.io/en/latest/cli/#pipenv-run
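For example, the end of the django_install() function could drive the remaining setup through pipenv run (a sketch; these particular manage.py commands are illustrative, not taken from the question):
pipenv run python manage.py migrate
pipenv run python manage.py collectstatic --noinput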
I need to run a while loop to install Python dependencies. In the Python world, two ways of installing dependencies have become established:
using conda (for some people this is the "robust/stable/desired way", provided by a "Python distribution" called Anaconda/Miniconda),
using pip (in the last few years included as the official way of Python itself).
The "pseudocode" should be:
try to install the dependency with the conda command
if it fails then install it with the pip command
In the Python world dependencies are specified in a requirements.txt file, usually exact versions (==) as one dependency per line with the pattern <MY_DEPENDENCY>==<MY_VERSION>.
The desired command, which works in bash, is: while read requirement; do conda install --yes $requirement || pip install $requirement; done < requirements.txt. However, this does not work in the GNU make/Makefile world for reasons that I don't completely get.
I've tried a few different flavors of that while loop, all unsuccessful. Basically, once the conda command fails I am not able to go on with the pip attempt. I am not sure why this happens (it works in "normal bash"), and I cannot find a way to manage some sort of low-level try/catch pattern (for those familiar with high-level programming languages).
This is my last attempt which is not working because it stops when conda fails:
foo-target:
    # equivalent to bash: conda install --yes $requirement || pip install $requirement;
    while read requirement; do \
        conda install --yes $requirement ; \
        [ $$? != 0 ] || pip install $requirement; \
    done < requirements.txt
How do I make sure I try to install each requirement inside requirements.txt first with conda, when conda fails then with pip?
Why is my code not working? I see people pointing to the differences between sh and bash, but I am not able to isolate the issue.
Edit:
I ended up working around the problem by calling bash from inside the Makefile, but I find this solution not ideal, because I need to maintain yet another chunk of code in a one-line bash script (see below). Is there a way to keep everything inside the Makefile and avoid bash altogether?
The Makefile target:
foo-target:
bash install-python-dependencies.sh
The one-line bash script:
#!/usr/bin/env bash
while read requirement; do conda install --yes $requirement || pip install $requirement; done < requirements.txt
I can run the script directly from the command line (bash), and I can also run it from within the Makefile, but I would like to get rid of the bash script and always execute make foo-target without using bash (avoiding bash even inside the Makefile).
Your makefile as shown above will work as you expect, other than that you have to escape the $ in shell variables, writing them as $$requirement.
I couldn't reproduce your problem; here is a simplified example that emulates the behavior:
foo-target:
    for i in 1 2 3; do \
        echo conda; \
        test $$i -ne 2; \
        [ $$? -eq 0 ] || echo pip; \
    done
gives the expected output:
$ make
conda
conda
pip
conda
Have you added the .POSIX: target to your makefile, which you don't show here? If I do that, then I get the behavior you describe:
conda
make: *** [Makefile:2: foo-target] Error 1
The reason for this is described in the manual for .POSIX:
In particular, if this target is mentioned then recipes will be invoked as if the shell had been passed the '-e' flag: the first failing command in a recipe will cause the recipe to fail immediately.
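A minimal sketch of that effect (a made-up target): with .POSIX: in the makefile, the shell stops at the first failing command on the line, so the echo never runs and the recipe fails; without it, the same line prints the message and the recipe succeeds.
.POSIX:
foo-target:
    false; echo "with .POSIX this is never reached"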
If you want to keep .POSIX mode but not get this error, the simplest way is to use the method you show in your first example; I don't know why you stopped using it:
foo-target:
    while read requirement; do \
        conda install --yes $$requirement || pip install $$requirement; \
    done < requirements.txt
I use a lot of virtual environments nowadays because of the different projects going on in parallel at my company.
The following is what I usually do for the initial setup when creating a new conda virtual environment:
conda install --yes --file requirements.txt
source activate myenv
python -m ipykernel install --user --name myenv --display-name "kernel_name"
The above sequence of commands must be run in order, with myenv and kernel_name passed as manually supplied arguments.
How could I do this all at once with a wrapped-up .sh file? Or is this possible without creating a .sh file?
You can do it using a shell script. I would do:
#!/usr/bin/env bash
myenv="$1"
kernel_name="$2"
source /path/to/base/conda/installation/etc/profile.d/conda.sh
conda install --yes --file /path/to/requirements.txt
conda activate "$myenv"
python -m ipykernel install --user --name "$myenv" --display-name "$kernel_name"
And run it like: /path/to/script.sh <env-name> <kernel-name>
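If you would rather not keep a separate .sh file, the same commands can be wrapped in a bash function in your ~/.bashrc (a sketch; conda_env_setup is a made-up name and the paths are the same placeholders as above):
conda_env_setup() {
    local myenv="$1"
    local kernel_name="$2"
    source /path/to/base/conda/installation/etc/profile.d/conda.sh
    conda install --yes --file /path/to/requirements.txt
    conda activate "$myenv"
    python -m ipykernel install --user --name "$myenv" --display-name "$kernel_name"
}
# usage: conda_env_setup <env-name> <kernel-name>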
I have the following docker setup:
python27.Dockerfile
FROM python:2.7
COPY ./entrypoint.sh /entrypoint.sh
RUN mkdir /src
RUN apt-get update && apt-get install -y bash libmysqlclient-dev python-pip build-essential && pip install virtualenv
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 8000
WORKDIR /src
CMD source /src/env/bin/activate && python /src/manage.py runserver
entrypoint.sh
#!/bin/bash
# some code here...
# some code here...
# some code here...
exec "$#"
Whenever I try to run my docker container I get python27 | /bin/sh: 1: source: not found.
I understand that the error comes from the fact that the command is run with sh instead of bash, but I can't understand why that is happening, given that I have the correct shebang at the top of my entrypoint.
Any ideas why this is happening and how I can fix it?
The problem is that for CMD you're using the shell form, which runs the command with /bin/sh, and source is a bash builtin that isn't available in POSIX /bin/sh (the equivalent builtin would be just .).
You must use the exec form for CMD using brackets:
CMD ["/bin/bash", "-c", "source /src/env/bin/activate && python /src/manage.py runserver"]
More details in:
https://docs.docker.com/engine/reference/builder/#run
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
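Alternatively (a sketch, assuming the virtualenv really lives at /src/env), you can sidestep activation entirely by calling the virtualenv's own interpreter directly in the exec form:
CMD ["/src/env/bin/python", "/src/manage.py", "runserver"]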
I want to have a generic BASH script which activates my virtual environment inside a given folder.
The script should be able to be called from any folder I have a virtual environment in.
If there is no virtual environment, it should create one and install the pip requirements.
I cannot run the activation inside my original BASH, only as a subprocess (see --rcfile). Just sourcing it is not working!
That's my current script:
#!/bin/bash -e
# BASH script to run virtual environment
# Show errors
set -x
DIR_CURRENT="$PWD"
DIR_VIRTUAL_ENV="$PWD/venv"
FILE_PYTHON="/usr/bin/python2.7"
FILE_REQUIREMENTS="requirements.txt"
FILE_VIRTUAL_ACTIVATE_BASH="$DIR_VIRTUAL_ENV/bin/activate"
# CD to current folder
cd ${DIR_CURRENT}
echo DIR: $(pwd)
# Create the virtual environment if not existing
if [ ! -d ${DIR_VIRTUAL_ENV} ]; then
virtualenv -p ${FILE_PYTHON} ${DIR_VIRTUAL_ENV}
chmod a+x ${FILE_VIRTUAL_ACTIVATE_BASH}
source ${FILE_VIRTUAL_ACTIVATE_BASH}
pip install -r ${FILE_REQUIREMENTS}
fi
/bin/bash --rcfile "$FILE_VIRTUAL_ACTIVATE_BASH"
# Disable errors
set +x
I use Mac OS X 10.10.5 and Python 2.7.
Sadly, the existing questions 1, 2, 3 couldn't answer my problem.
First, what you are trying to do has already been nicely solved by a project called virtualenvwrapper.
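For example, once virtualenvwrapper is installed, creating and re-entering an environment takes just a couple of commands (myproject is a placeholder name):
mkvirtualenv myproject      # create and activate a new virtualenv
workon myproject            # re-activate it later from any directory
pip install -r requirements.txt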
About your question: Use a function instead of a script. Place this into your bashrc for example:
function enter_venv(){
# BASH script to run virtual environment
# Show errors
set -x
DIR_CURRENT="$PWD"
DIR_VIRTUAL_ENV="$PWD/venv"
FILE_PYTHON="/usr/bin/python2.7"
FILE_REQUIREMENTS="requirements.txt"
FILE_VIRTUAL_ACTIVATE_BASH="$DIR_VIRTUAL_ENV/bin/activate"
# CD to current folder
cd ${DIR_CURRENT}
echo DIR: $(pwd)
# Create the virtual environment if not existing
if [ ! -d ${DIR_VIRTUAL_ENV} ]; then
virtualenv -p ${FILE_PYTHON} ${DIR_VIRTUAL_ENV}
chmod a+x ${FILE_VIRTUAL_ACTIVATE_BASH}
source ${FILE_VIRTUAL_ACTIVATE_BASH}
pip install -r ${FILE_REQUIREMENTS}
fi
/bin/bash --rcfile "$FILE_VIRTUAL_ACTIVATE_BASH"
# Disable errors
set +x
}
Then you can call it like this:
enter_venv