Jenkins file and alias [duplicate] - bash

I want to build a multi-stage test where I successively test an application with Python 3.10, 3.9, 3.8, etc. I have a Docker container with three executables available:
/usr/local/bin/python3.8
/usr/local/bin/python3.9
/usr/local/bin/python3.10
I have this section in my Jenkinsfile:
stage ('Test Python 3.10') {
    steps {
        sh '''
            echo $(which python3)
            alias python3=python3.10
            echo $(which python3)
            alias pip3=pip3.10
            scripts/check_python_version.sh 3.10 && scripts/runtests.sh
        '''
    }
}
stage ('Test Python 3.9') {
    steps {
        sh '''
            alias python3=python3.9
            alias pip3=pip3.9
            scripts/check_python_version.sh 3.9 && scripts/runtests.sh
        '''
    }
}
My alias is not taken into account. Looking at the first stage output, we get:
+ which python3
+ echo /usr/local/bin/python3
/usr/local/bin/python3
+ alias python3=python3.10
+ which python3
+ echo /usr/local/bin/python3
/usr/local/bin/python3
+ alias pip3=pip3.10
+ scripts/check_python_version.sh 3.10
ERROR - Reported python3 version is Python 3.8.13
How can I change the default python3 during the Jenkins test stage?

My alias is not taken into account
Of course it isn't: aliases only affect interactive shells, where you type commands yourself. They have no effect on shebang lines, the kernel, or anything else, and non-interactive shells (such as the one the Jenkins sh step spawns) do not expand aliases by default.
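If you need an alias-like redirection that does work inside a script, a shell function is the usual substitute; a minimal sketch:
python3() { command python3.10 "$@"; }   # unlike an alias, functions are expanded in scripts too
python3 --version                        # now runs python3.10
export -f python3                        # bash only: makes it visible to child bash scripts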
How can I change the default python3 during the Jenkins test stage?
Create a temporary directory. In that directory, create a shell script named python3 that calls the actual executable. Then add that directory to the front of PATH.
mkdir bin
printf "%s\n" "#!/bin/sh" "$(which python3.10)"' "$@"' >> bin/python3
chmod +x bin/python3
export PATH=$PWD/bin:$PATH
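Wired into the stage from the question, the shim approach might look like this (a sketch; the pip3 shim can be created the same way):
stage ('Test Python 3.10') {
    steps {
        sh '''
            mkdir -p bin
            printf "%s\n" "#!/bin/sh" "$(which python3.10)"' "$@"' > bin/python3
            chmod +x bin/python3
            export PATH=$PWD/bin:$PATH
            scripts/check_python_version.sh 3.10 && scripts/runtests.sh
        '''
    }
}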
You might be interested in pyenv. And you might consider simply running your pipelines from Docker containers that ship with the proper Python version preinstalled, as sketched below.
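The container route could look roughly like this in a declarative pipeline, assuming the stock python images from Docker Hub:
stage ('Test Python 3.10') {
    agent { docker { image 'python:3.10' } }
    steps {
        sh 'scripts/check_python_version.sh 3.10 && scripts/runtests.sh'
    }
}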

Related

Run pipenv shell as part of a bash script

I created a bash script to scaffold boilerplates for several apps (React, Next.js, django, etc).
In part of my django_install() function, I run the following (reduced here):
mkdir "$app_name"
cd ./"$app_name" || exit 0
gh repo clone <my-repo-boilerplate> .
rm -rf .git
pipenv install
pipenv install --dev
exit 0
I would also like to execute pipenv shell and then some commands that need to run inside that virtual environment, as my boilerplate has some custom scripts that I'd like to run to fully automate the setup.
I understand I cannot just run pipenv shell or python manage.py [etc...] in my bash script.
How could I achieve it?
I think you can use pipenv run for that. E.g.:
pipenv run python manage.py [etc...]
This will run python manage.py within the virtual environment created by pipenv.
https://pipenv.pypa.io/en/latest/cli/#pipenv-run
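Applied to the django_install() function from the question, the tail end might become something like this (a sketch; the manage.py and custom-script invocations are placeholders for whatever the boilerplate actually ships):
pipenv install
pipenv install --dev
# run the project's setup commands inside the virtualenv, no shell needed
pipenv run python manage.py migrate
pipenv run ./scripts/custom_setup.sh   # hypothetical boilerplate script
exit 0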

bash script starting new shell and continuing to run commands [duplicate]

I'm a complete noob to writing bash scripts. I'm trying to do the following:
#!/bin/bash
mkdir New_Project
cd New_Project
pipenv install ipykernel
pipenv shell
python -m ipykernel install --user --name=new-virtual-env
jupyter notebook
The problem I'm having is that after it executes pipenv shell, it starts the new shell and then doesn't execute the last two commands. When I exit the new shell it then tries to execute the remaining lines. Is there any way to get a script to run all of these commands from start to finish?
As per the manual:
shell will spawn a shell with the virtualenv activated.
which is not what you need. Instead use run:
run will run a given command from the virtualenv, with any arguments
forwarded (e.g. $ pipenv run python).
In your case, something like
pipenv run python -m ipykernel install --user --name=new-virtual-env
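So the whole script from the question could become, roughly (this assumes jupyter is installed inside the environment; if it is system-wide, drop the last pipenv run):
#!/bin/bash
mkdir New_Project
cd New_Project || exit 1
pipenv install ipykernel
pipenv run python -m ipykernel install --user --name=new-virtual-env
pipenv run jupyter notebook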

Bash on Windows - alias for exe files

I am using Bash on Ubuntu on Windows, the way to run bash on Windows 10. I have the Creators Update installed and the Ubuntu version is 16.04.
I was recently playing with things like npm, Node.js and Docker, and for Docker I found it is possible to install and run it on Windows and use just the client part from bash, calling the docker.exe file in Windows' Program Files folder directly. I just updated my PATH variable to include the path to Docker, as PATH=$PATH:~/mnt/e/Program\ Files/Docker/ (put in .bashrc), and then I am able to run Docker from bash by calling docker.exe.
But hey, this is bash and I don't want to write .exe at the end of commands. I can simply add an alias, alias docker="docker.exe", but then I want to use, let's say, docker-compose and I have to add another one. I was thinking about adding a script to .bashrc that would walk over every directory in the PATH variable, search it for .exe files and add an alias for every occurrence, but that does not seem like a very clean solution (though I guess it would serve its purpose quite well).
Is there a simple and clean solution to achieve this?
I've faced the same problem when trying to use Docker for Windows from WSL.
I had plenty of existing shell scripts that ran fine under Linux, and mostly under WSL too, until they failed with docker: command not found. Changing docker to docker.exe everywhere would be too cumbersome and non-portable.
Tried workaround with aliases in ~/.bashrc as here at first:
shopt -s expand_aliases
alias docker=docker.exe
alias docker-compose=docker-compose.exe
But it requires every script to be run in interactive mode and still doesn't work within backticks without script modification.
Then tried exported bash functions in ~/.bashrc:
docker() { docker.exe "$@"; }
export -f docker
docker-compose() { docker-compose.exe "$@"; }
export -f docker-compose
This works. But it's still too tedious to add every needed exe.
Finally I ended up with an easier symlink approach and a modified custom helper script, wslshim.
Just add once to ~/.local/bin/wslshim:
#!/bin/bash -x
cd ~/.local/bin && ln -s "`which $1.exe`" "$1" || ln -s "`which $1.ps1`" "$1" || ln -s "`which $1.cmd`" "$1" || ln -s "`which $1.bat`" "$1"
Make it executable: chmod +x ~/.local/bin/wslshim
Then adding any "alias" becomes as easy as typing two words:
$ wslshim docker
+ cd ~/.local/bin
++ which docker.exe
+ ln -s '/mnt/c/Program Files/Docker/Docker/resources/bin/docker.exe' docker
$ wslshim winrm
+ cd ~/.local/bin
++ which winrm.exe
+ ln -s '' winrm
ln: failed to create symbolic link 'winrm' -> '': No such file or directory
++ which winrm.ps1
+ ln -s '' winrm
ln: failed to create symbolic link 'winrm' -> '': No such file or directory
++ which winrm.cmd
+ ln -s /mnt/c/Windows/System32/winrm.cmd winrm
The script automatically picks up the absolute path of any Windows executable in $PATH and symlinks it, without the extension, into ~/.local/bin, which also resides in $PATH on WSL.
This approach can easily be extended to auto-link every exe in a given directory if needed, as sketched below. But linking the whole $PATH would be overkill.
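Such an extension might be sketched like this (the Docker bin path is only an example):
# Hypothetical helper: shim every .exe found in one Windows directory
for exe in "/mnt/c/Program Files/Docker/Docker/resources/bin"/*.exe; do
    name="$(basename "${exe%.exe}")"
    ln -sf "$exe" ~/.local/bin/"$name"
done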
You should be able to simply add the executable's directory to your PATH. Use export so the change persists.
Command:
export PATH=$PATH:/path/to/directory/executable/is/located/in
On my Windows 10 machine the solution was to install Git Bash and Docker for Windows. In this bash, typing docker just works, for example docker ps. I didn't need to make an alias or change the path.
You can download Git Bash from https://git-scm.com/download/win, then from the Start button search for "git bash".
Hope this solution works for you.

Activate virtualenv INSIDE my current bash (no subprocess)

I want to have a generic BASH script which activates my virtual environment inside a given folder.
The script should be able to be called from any folder I have a virtual environment in.
If there is no virtual environment it should create one and install the pip requirements.
I cannot run the activation inside my original bash, only as a subprocess (see --rcfile). Just source-ing it is not working!
That's my current script:
#!/bin/bash -e
# BASH script to run virtual environment
# Show errors
set -x
DIR_CURRENT="$PWD"
DIR_VIRTUAL_ENV="$PWD/venv"
FILE_PYTHON="/usr/bin/python2.7"
FILE_REQUIREMENTS="requirements.txt"
FILE_VIRTUAL_ACTIVATE_BASH="$DIR_VIRTUAL_ENV/bin/activate"
# CD to current folder
cd ${DIR_CURRENT}
echo DIR: $(pwd)
# Create the virtual environment if not existing
if [ ! -d ${DIR_VIRTUAL_ENV} ]; then
    virtualenv -p ${FILE_PYTHON} ${DIR_VIRTUAL_ENV}
    chmod a+x ${FILE_VIRTUAL_ACTIVATE_BASH}
    source ${FILE_VIRTUAL_ACTIVATE_BASH}
    pip install -r ${FILE_REQUIREMENTS}
fi
/bin/bash --rcfile "$FILE_VIRTUAL_ACTIVATE_BASH"
# Disable errors
set +x
I use Mac OSX 10.10.5 and Python 2.7.
Sadly, the existing questions 1, 2, 3 couldn't answer my problem.
First, what you are trying to do has already been nicely solved by a project called virtualenvwrapper.
About your question: Use a function instead of a script. Place this into your bashrc for example:
function enter_venv(){
    # BASH script to run virtual environment
    # Show errors
    set -x
    DIR_CURRENT="$PWD"
    DIR_VIRTUAL_ENV="$PWD/venv"
    FILE_PYTHON="/usr/bin/python2.7"
    FILE_REQUIREMENTS="requirements.txt"
    FILE_VIRTUAL_ACTIVATE_BASH="$DIR_VIRTUAL_ENV/bin/activate"
    # CD to current folder
    cd ${DIR_CURRENT}
    echo DIR: $(pwd)
    # Create the virtual environment if not existing
    if [ ! -d ${DIR_VIRTUAL_ENV} ]; then
        virtualenv -p ${FILE_PYTHON} ${DIR_VIRTUAL_ENV}
        chmod a+x ${FILE_VIRTUAL_ACTIVATE_BASH}
        source ${FILE_VIRTUAL_ACTIVATE_BASH}
        pip install -r ${FILE_REQUIREMENTS}
    fi
    /bin/bash --rcfile "$FILE_VIRTUAL_ACTIVATE_BASH"
    # Disable errors
    set +x
}
Then you can call it like this:
enter_venv

scl enable python27 bash

I've hit a snag with a shell script intended to run every 30 minutes in cron on a Redhat 6 server. The shell script is basically just a command to run a python script.
The native version of Python on the server is 2.6.6, but the Python version required by this particular script is 2.7+. I am able to easily run this on the command line by using the "scl" command (this example includes the python -V command to show the version change):
$ python -V
Python 2.6.6
$ scl enable python27 bash
$ python -V
Python 2.7.3
At this point I can run the python 2.7.3 scripts on the command line no problem.
Here's the snag.
When you issue the scl enable python27 bash command it starts a new bash shell session, which (again) is fine for interactive command-line work. But when doing this inside a shell script, as soon as it runs the bash command, the script exits because of the new session.
Here's the shell script that is failing:
#!/bin/bash
cd /var/www/python/scripts/
scl enable python27 bash
python runAllUpserts.py >/dev/null 2>&1
It simply stops as soon as it hits the scl line, because "bash" pops it out of the script and into a fresh shell session. So it never sees the actual python command I need it to run.
Plus, if run every 30 minutes, this would add a new bash session each time, which is yet another problem.
I am reluctant to update the native python version on the server to 2.7.3 right now due to several reasons. The Redhat yum repos don't yet have python 2.7.3 and a manual install would be outside of the yum update system. From what I understand, yum itself runs on python 2.6.x.
Here's where I found the method for using scl
http://developerblog.redhat.com/2013/02/14/setting-up-django-and-python-2-7-on-red-hat-enterprise-6-the-easy-way/
Doing everything in one heredoc in the SCL environment is the best option, IMO:
scl enable python27 - << \EOF
cd /var/www/python/scripts/
python runAllUpserts.py >/dev/null 2>&1
EOF
Another way is to run just the second command (which is the only one that uses Python) in scl environment directly:
cd /var/www/python/scripts/
scl enable python27 "python runAllUpserts.py >/dev/null 2>&1"
scl enable python27 bash effectively activates a Python environment, much like a virtualenv.
You can do the same from within a bash script by simply sourcing the enable script of the SCL package, which is located at /opt/rh/python27/enable
Example:
#!/bin/bash
cd /var/www/python/scripts/
source /opt/rh/python27/enable
python runAllUpserts.py >/dev/null 2>&1
Isn't it easiest to just run your Python script directly? test_python.py:
#!/usr/bin/env python
import sys
f = open('/tmp/pytest.log','w+')
f.write(sys.version)
f.write('\n')
f.close()
then in your crontab:
2 * * * * scl enable python27 $HOME/test_python.py
Make sure you make test_python.py executable.
Another alternative is to call a shell script that calls the Python script. test_python.sh:
#!/bin/bash
python test_python.py
in your crontab:
2 * * * * scl enable python27 $HOME/test_python.sh
One-liner:
scl enable python27 'python runAllUpserts.py >/dev/null 2>&1'
I also use it with the devtoolsets on CentOS 6.x:
me#my_host:~/tmp# scl enable devtoolset-1.1 'gcc --version'
gcc (GCC) 4.7.2 20121015 (Red Hat 4.7.2-5)
Copyright (C) 2012 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
scl is the dumbest "let us try and lock you in" nonsense I've seen in a while.
Here's how I made it so I could pass arguments to a series of scripts that all linked to a single skeleton file:
$ cat /usr/bin/skeleton
#!/bin/sh
tmp="$( mktemp )"
me="$( basename $0 )"
echo 'scl enable python27 - << \EOF' >> "${tmp}"
echo "python '/opt/rh/python27/root/usr/bin/${me}' $#" >> "${tmp}"
echo "EOF" >> "${tmp}"
sh "${tmp}"
rm "${tmp}"
So if there's a script you want to run that lives in, say, /opt/rh/python27/root/usr/bin/pepper you can do this:
# cd /usr/bin
# ln -s skeleton pepper
# pepper foo bar
and it should work as expected.
I've only seen this scl stuff once before and don't have ready access to a system with it installed. But I think it's just setting up PATH and some other environment variables in a way that's vaguely similar to how they're done under virtualenv.
Perhaps changing the script to have the bash subprocess call python would work:
#!/bin/bash
cd /var/www/python/scripts/
(scl enable python27 bash -c "python runAllUpserts.py") >/dev/null 2>&1
The instance of python found on the subprocess bash's shell should be your 2.7.x copy ... and all the other environmental settings done by scl should be inherited thereby.
