Can someone explain what is being assigned into use_database? - makefile

I understand that the := operator is a simple assignment that only gets evaluated once, and that = is a recursive assignment that is re-evaluated on each use. But when they are combined I am confused and cannot figure out what is going on in this code block:
use_database := MAIN_DATABASE=sqlite
$(use_database) python3 manage.py makemigrations amlcenter
My first thought was that it assigned sqlite to both use_database and MAIN_DATABASE but I don't think that's it. Then I thought it assigned "MAIN_DATABASE=sqlite" to use_database but that would make the second line:
MAIN_DATABASE=sqlite python3 manage.py makemigrations amlcenter
which I don't feel makes sense. Any help would be appreciated. This is in a Makefile.
use_database := MAIN_DATABASE=sqlite
use_runserver_str := python3 manage.py runserver 0.0.0.0:8050
use_runscript_str := python3 manage.py runscript

db_migrate: ## Db migrate
	$(use_database) python3 manage.py makemigrations amlcenter
	$(use_database) TEST_MODE=True python3 manage.py migrate

clean: ## Clean Directory
	rm -f db.sqlite3
	rm -rf static/
	rm -rf media/
	rm -f aml.log

dev: clean db_migrate ## Set up development server with sample data
	if [[ $(use_database) = *"psql"* ]] ; then $(use_database) python3 manage.py flush --noinput; echo 'Flushed psql' ; fi
	$(use_database) $(use_runscript_str) sample_data_generator

The shell syntax var=value cmd args temporarily sets the environment variable var to value for the duration of the execution of cmd args.
Apparently this is being used inside a Makefile recipe, and the Python script which runs will examine its environment to pick up this variable, probably something like
import os
# ...
# os.environ is a mapping, so it is indexed (or queried with .get), not called
if os.environ.get('MAIN_DATABASE') == 'sqlite':
    do_sqlitey_things()
# probably elif it's 'mysql', do mysqly things, or Postgressy things for 'postgres', etc.
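This one-shot assignment is easy to see in action in a plain POSIX shell, outside of make (assuming python3 is on PATH):

```shell
# the variable is exported only into the python3 child process
MAIN_DATABASE=sqlite python3 -c 'import os; print(os.environ.get("MAIN_DATABASE"))'   # prints sqlite
# the calling shell itself never sees it
echo "${MAIN_DATABASE:-unset}"    # prints unset
```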


Jenkins file and alias [duplicate]

I want to make a multi-stage test where I successively test an application with Python 3.10, 3.9, 3.8, etc. I have a docker container with three executables available:
/usr/local/bin/python3.8
/usr/local/bin/python3.9
/usr/local/bin/python3.10
I have this section in my Jenkinsfile:
stage ('Test Python 3.10') {
    steps {
        sh '''
            echo $(which python3)
            alias python3=python3.10
            echo $(which python3)
            alias pip3=pip3.10
            scripts/check_python_version.sh 3.10 && scripts/runtests.sh
        '''
    }
}
stage ('Test Python 3.9') {
    steps {
        sh '''
            alias python3=python3.9
            alias pip3=pip3.9
            scripts/check_python_version.sh 3.9 && scripts/runtests.sh
        '''
    }
}
My alias is not taken into account. Looking at the first stage output, we get:
+ which python3
+ echo /usr/local/bin/python3
/usr/local/bin/python3
+ alias python3=python3.10
+ which python3
+ echo /usr/local/bin/python3
/usr/local/bin/python3
+ alias pip3=pip3.10
+ scripts/check_python_version.sh 3.10
ERROR - Reported python3 version is Python 3.8.13
How can I change the default python3 during the Jenkins test stage?
My alias is not taken into account
Sure it isn't: aliases only affect interactive shells, where you type commands by hand. They do not affect shebangs, the kernel, or anything else.
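The failure mode is easy to reproduce; with a hypothetical alias name `greet`, a non-interactive bash defines the alias but never expands it (aliases are expanded at parse time, and alias expansion is off in non-interactive shells):

```shell
# the alias is recorded but `greet` is never rewritten to `echo hello`,
# so the shell reports "greet: command not found" instead of printing hello
bash -c 'alias greet="echo hello"; greet' 2>&1 || true
```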
How can I change the default python3 during the Jenkins test stage?
Create a temporary directory. In that directory, create a shell script named python3 that calls the actual executable. Add that directory to PATH.
mkdir bin
printf '%s\n' '#!/bin/sh' "exec $(which python3.10)"' "$@"' > bin/python3
chmod +x bin/python3
export PATH=$PWD/bin:$PATH
You might be interested in pyenv. And you might consider just running your pipelines in docker containers that ship with the proper Python version already installed.
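The PATH-shadowing mechanism can be sketched with a stand-in wrapper; in the real Jenkinsfile the wrapper would `exec /usr/local/bin/python3.10 "$@"`, but here it just echoes its arguments so the sketch runs anywhere:

```shell
# build a fake python3 wrapper in a scratch bin directory
tmp=$(mktemp -d)
mkdir "$tmp/bin"
printf '%s\n' '#!/bin/sh' 'echo "wrapper got: $@"' > "$tmp/bin/python3"
chmod +x "$tmp/bin/python3"
# prepend the scratch directory so lookup finds the wrapper first
export PATH="$tmp/bin:$PATH"
python3 --version    # now resolves to the wrapper, prints: wrapper got: --version
```

Unlike an alias, this works for anything that does a PATH lookup: subshells, shebang-launched scripts, and tools that spawn python3 themselves.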

Env variables not being picked up by script

Creating a script to pass to a few different people and ran into an env problem. The script wouldn't run unless I supplied it with $PATH, $HOME, and $GOPATH at the beginning of the file, like so:
HOME=/home/Hustlin
PATH=/home/Hustlin/bin:/home/Hustlin/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/bin:/bin:/home/Hustlin/go/bin
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
This is not practical when passing the script around, since each person has to set these variables themselves. The file would rarely be run by the user directly and would most often be run via crontab.
I would love to hear a better way of coding this, so that everyone I send the script to doesn't have to update these variables.
Thank you all in advance!!!
EDIT
The script is being run via crontab with no special permissions.
1,16,31,46 * * * * /home/Hustlin/directory1/super_cool_script.sh
Here is the script I am running:
#!/bin/bash
# TODO Manually put your $PATH and $HOME here.
PATH=/home/Hustlin/bin:/home/Hustlin/.local/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin:/usr/local/go/bin:/bin:/home/Hustlin/go/bin
HOME=/home/Hustlin
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin
# Field1
field1="foo"
# Welcome message.
echo Starting the update process...
# Deposit directory.
mkdir -p $HOME/directory1/sub1/data/body
mkdir -p $HOME/directory1/sub2/system
# Run command
program1 command1
# Run longer command.
program1 command2 field1
sleep 3
program1 command3 -o $HOME/directory1/sub1/data $field1
sleep 1
# Unzip and discard unnecessary files.
unzip $HOME/directory1/sub1/data/$field1 -d $HOME/directory1/sub1/data
rm $HOME/directory1/sub1/data/bar.yaml $HOME/directory1/sub1/data/char.txt
rm $HOME/directory1/sub1/data/$field1.zip
# Rename
mv $HOME/directory1/sub1/data/body.json $HOME/directory1/sub1/data/body/$(date -d '1 hour ago' +%d-%m-%Y_%H).json
echo Process complete.
I changed most of the program and command names for privacy. What I did post still represents what is being done and how the files are being moved.
The issue is crontab, not the script.
When you run the script in your terminal, you are logged into a session with all environment variables set, so the script can use them.
But when you run it from crontab, it runs in an "empty" session, so it does not have any environment variables set; it doesn't even know about your user.
Run the script from crontab like this:
su --login Hustlin /home/Hustlin/directory1/super_cool_script.sh
Check this documentation.
http://man7.org/linux/man-pages/man1/su.1.html
bash -l -c /path/to/script will make bash act as a login shell and read your profile files (.profile or .bash_profile) first, so it will have the HOME and PATH variables set.
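Alternatively, Vixie cron lets you set environment variables at the top of the crontab itself, which keeps the script free of hard-coded paths; a sketch using the asker's paths (note that cron does not expand $VAR references in these assignment lines):

```shell
# crontab -e
PATH=/home/Hustlin/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/local/go/bin:/home/Hustlin/go/bin
HOME=/home/Hustlin

1,16,31,46 * * * * /home/Hustlin/directory1/super_cool_script.sh
```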

Activate virtualenv INSIDE the my current bash (no subprocess)

I want to have a generic bash script which activates my virtual environment inside a given folder.
The script should be able to be called from any folder I have a virtual environment in.
If there is no virtual environment, it should create one and install the pip requirements.
I can only run the activation as a subprocess of my original bash (see --rcfile), not inside it; just source-ing the script is not working!
That's my current script:
#!/bin/bash -e
# BASH script to run virtual environment
# Show errors
set -x
DIR_CURRENT="$PWD"
DIR_VIRTUAL_ENV="$PWD/venv"
FILE_PYTHON="/usr/bin/python2.7"
FILE_REQUIREMENTS="requirements.txt"
FILE_VIRTUAL_ACTIVATE_BASH="$DIR_VIRTUAL_ENV/bin/activate"
# CD to current folder
cd ${DIR_CURRENT}
echo DIR: $(pwd)
# Create the virtual environment if not existing
if [ ! -d ${DIR_VIRTUAL_ENV} ]; then
    virtualenv -p ${FILE_PYTHON} ${DIR_VIRTUAL_ENV}
    chmod a+x ${FILE_VIRTUAL_ACTIVATE_BASH}
    source ${FILE_VIRTUAL_ACTIVATE_BASH}
    pip install -r ${FILE_REQUIREMENTS}
fi
/bin/bash --rcfile "$FILE_VIRTUAL_ACTIVATE_BASH"
# Disable errors
set +x
I use Mac OSX 10.10.5 and Python 2.7.
Sadly, existing questions 1, 2, 3 couldn't answer my problem.
First, what you are trying to do has already been nicely solved by a project called virtualenvwrapper.
About your question: Use a function instead of a script. Place this into your bashrc for example:
function enter_venv(){
    # BASH script to run virtual environment
    # Show errors
    set -x
    DIR_CURRENT="$PWD"
    DIR_VIRTUAL_ENV="$PWD/venv"
    FILE_PYTHON="/usr/bin/python2.7"
    FILE_REQUIREMENTS="requirements.txt"
    FILE_VIRTUAL_ACTIVATE_BASH="$DIR_VIRTUAL_ENV/bin/activate"
    # CD to current folder
    cd ${DIR_CURRENT}
    echo DIR: $(pwd)
    # Create the virtual environment if not existing
    if [ ! -d ${DIR_VIRTUAL_ENV} ]; then
        virtualenv -p ${FILE_PYTHON} ${DIR_VIRTUAL_ENV}
        chmod a+x ${FILE_VIRTUAL_ACTIVATE_BASH}
        source ${FILE_VIRTUAL_ACTIVATE_BASH}
        pip install -r ${FILE_REQUIREMENTS}
    fi
    /bin/bash --rcfile "$FILE_VIRTUAL_ACTIVATE_BASH"
    # Disable errors
    set +x
}
Then you can call it like this:
enter_venv

Activate virtualenv in Makefile

How can I activate virtualenv in a Makefile?
I have tried:
venv:
	@virtualenv venv

active:
	@source venv/bin/activate
And I've also tried:
active:
	@. venv/bin/activate
and it doesn't activate virtualenv.
Here's how to do it:
You can run several shell commands in a single sub-shell in a Makefile recipe by grouping them with ( );
E.g.
echoTarget:
	(echo "I'm an echo")
Just be sure to put a tab character before each line of the shell command,
i.e. you will need a tab before (echo "I'm an echo")
Here's what will work for activating virtualenv:
activate:
	( \
	source path/to/virtualenv/activate; \
	pip install -r requirements.txt; \
	)
UPD Mar 14 '21
One more variant for pip install inside virtualenv:
# Makefile
all: install run

install: venv
	: # Activate venv and install smthing inside
	. venv/bin/activate && pip install -r requirements.txt
	: # Other commands here

venv:
	: # Create venv if it doesn't exist
	: # test -d venv || virtualenv -p python3 --no-site-packages venv
	test -d venv || python3 -m venv venv

run:
	: # Run your app here, e.g.
	: # determine if we are in venv,
	: # see https://stackoverflow.com/q/1871549
	. venv/bin/activate && pip -V
	: # Or see @wizurd's answer to exec multiple commands
	. venv/bin/activate && ( \
		python3 -c 'import sys; print(sys.prefix)'; \
		pip3 -V \
	)

clean:
	rm -rf venv
	find -iname "*.pyc" -delete
So you can run make with different 'standard' ways:
make - target to default all
make venv - to just create virtualenv
make install - to make venv and execute other commands
make run - to execute your app inside venv
make clean - to remove venv and python binaries
When it is time to execute recipes to update a target, they are executed by invoking a new sub-shell for each line of the recipe. --- GNU Make
Since each line of the recipe executes in a separate sub-shell, we should run the python code in the same line of the recipe.
Simple python script for showing the current source of python environment (filename: whichpy.py):
import sys
print(sys.prefix)
Running the python virtual environment (Make recipes run with sh instead of bash, so using . rather than source to activate the virtual environment is the correct syntax):
test:
	. pyth3.6/bin/activate && python3.6 whichpy.py
	. pyth3.6/bin/activate; python3.6 whichpy.py
Both of the above recipes are acceptable, and we are free to use backslash/newline to split one recipe line across multiple lines.
Makefiles can't activate an environment directly. This is what worked for me:
activate:
bash -c "venv/bin/activate"
If you get a permission denied error, make venv/bin/activate executable:
chmod +x venv/bin/activate
I built a simple way to do this by combining an ACTIVATE_LINUX:=. venv/bin/activate variable with .ONESHELL: at the beginning of the Makefile:
.ONESHELL:
.PHONY: clean venv tests scripts

ACTIVATE_LINUX:=. venv/bin/activate

setup: venv install scripts

venv:
	@echo "Creating python virtual environment in 'venv' folder..."
	@python3 -m venv venv

install:
	@echo "Installing python packages..."
	@$(ACTIVATE_LINUX)
	@pip install -r requirements.txt

clean:
	@echo "Cleaning previous python virtual environment..."
	@rm -rf venv
As pointed out in other answers, Make recipes run with sh instead of bash, so using . in the ACTIVATE_LINUX variable instead of source to activate the virtual environment is the correct syntax.
I combined this strategy with .ONESHELL:. As shown in this StackOverflow answer, Make normally runs every command in a recipe in a different subshell. However, setting .ONESHELL: runs all the commands in a recipe in the same subshell, allowing you to activate a virtual environment and then run commands inside it. That's exactly what happens in the make install target, and this approach can be applied wherever you need an activated environment to run some python code in your project.
So you can run the following make targets:
make - target to default setup;
make venv - to just create virtual environment;
make install - to activate venv and execute the pip install command; and
make clean - to remove previous venv and python binaries.

Stay in directory changed after ending of bash script

My bash script:
#!/bin/bash
cd /tmp
Before running my script:
pwd: /
After running my script:
pwd: /
After running my script through sourcing it:
pwd: /tmp
How can I stay at the path from the script without sourcing it?
You can't. Changes to the current directory only affect the current process.
Let me elaborate a little bit on this:
When you run the script, bash creates a new process for it, and changes to the current directory only affect that process.
When you source the script, the script is executed directly by the shell you are running, without creating extra processes, so changes to the current directory are visible to your main shell process.
So, as Ignacio pointed out, this cannot be done.
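The run-vs-source difference is easy to demonstrate with a throwaway script (written to a temp file here):

```shell
# a script whose only job is to change directory
script=$(mktemp)
echo 'cd /tmp' > "$script"
cd /
bash "$script"    # child process: its cd dies with the process
pwd               # prints /
. "$script"       # sourced: the cd happens in *this* shell
pwd               # prints /tmp
```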
Ignacio is correct. However, as a heinous hack (totally ill-advised, and this really should get me at least 10 down votes) you can exec a new shell when you're done:
#!/bin/bash
...
cd /
exec bash
Here's a silly idea. Use PROMPT_COMMAND. For example:
$ export PROMPT_COMMAND='test -f $CDFILE && cd $(cat $CDFILE) && rm $CDFILE'
$ export CDFILE=/tmp/cd.$$
Then, make the last line of your script be 'pwd > $CDFILE'
If you really need this behavior, you can make your script return the directory, then use it somehow. Something like:
#!/bin/bash
cd /tmp
echo $(pwd)
and then you can
cd $(./script.sh)
Ugly, but does the trick in this simple case.
You can define a function to run in the current shell to support this. E.g.
md() { mkdir -p "$1" && cd "$1"; }
I have the above in my ~/.bashrc
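Usage looks like this (the path is just an illustration):

```shell
# the function from the answer: create the directory (and parents) and enter it
md() { mkdir -p "$1" && cd "$1"; }
md /tmp/demo_md/src
pwd    # prints /tmp/demo_md/src
```

Because the function body runs in the current shell, the cd sticks, which is exactly what a separate script cannot do.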
