What does "noroot" means in Variant Vagrant Vagrant (VVV) scripts - bash

I'm trying to write a custom provisioner for VVV.
I've noticed that many command lines in the shell scripts are prefixed with 'noroot'.
Here are a few examples:
noroot mkdir -p "${VVV_PATH_TO_SITE}/log"
noroot touch "${VVV_PATH_TO_SITE}/log/nginx-error.log"
noroot wp plugin install "${plugin}" --activate
I can't figure out what the purpose of 'noroot' is.
Could anyone explain what 'noroot' is and where to find some documentation about it?
Thank you.

It turns out that noroot is a function defined in the VVV core scripts, in VVV-path/provision/provision-helpers.sh:
function noroot() {
  sudo -EH -u "vagrant" "$@";
}
export -f noroot
It runs the command following noroot as the user vagrant.
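In short, since provisioner scripts run with root privileges, noroot re-runs a single command as the vagrant user: -u "vagrant" selects the user, -E preserves the current environment variables, and -H sets HOME to vagrant's home directory. A minimal sketch of using it in a custom provisioner (the package and plugin names are illustrative only):
# anything without the prefix runs as root, e.g. installing system packages
apt-get install -y nginx
# file- and WordPress-related steps run as vagrant so files are not owned by root
noroot mkdir -p "${VVV_PATH_TO_SITE}/log"
noroot wp plugin install query-monitor --activate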

Related

execute aws command in script with sudo

I am running a bash script with sudo and have tried the below, but I am getting the error below when using aws cp. I think the problem is that the script is looking for the AWS config in /root, which does not exist. However, doesn't -E preserve the original location? Is there an option that can be used with aws cp to pass the location of the config? Thank you :).
sudo -E bash /path/to/.sh
- inside of this script is `aws cp`
Error
The config profile (name) could not be found
I have also tried to `export` the profile name and `source` the path to the `config`.
You can run the command as the original user, like this:
sudo -u $SUDO_USER aws cp ...
You could also run the script using source instead of bash. Using source causes the script to run in the same shell as your open terminal window, which keeps the same environment (including the user), though honestly @Philippe's answer is the better, more correct one.
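For illustration, a minimal sketch of a script run with sudo -E that applies this (the bucket, paths and profile name are made up, and aws s3 cp is assumed to be the actual subcommand used):
#!/bin/bash
# invoked as: sudo -E ./upload.sh
# run the AWS CLI as the user who called sudo, so it reads that user's ~/.aws/config
sudo -u "$SUDO_USER" aws s3 cp /var/backups/db.dump "s3://example-bucket/db.dump" --profile example
# everything else still runs as root
chown root:root /var/backups/db.dump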

Bash on Windows - alias for exe files

I am using Bash on Ubuntu on Windows, the way to run bash on Windows 10. I have the Creators Update installed and the Ubuntu version is 16.04.
I was recently playing with things such as npm, node.js and Docker, and for Docker I found it is possible to install it and run it in Windows and just use the client part from bash, calling the docker.exe file in Windows' Program Files folder directly. I just updated my PATH variable to include the path to Docker, PATH=$PATH:~/mnt/e/Program\ Files/Docker/ (put in .bashrc), and then I am able to run Docker from bash by calling docker.exe.
But hey, this is bash and I don't want to write .exe at the end of the commands (programs). I can simply add an alias, alias docker="docker.exe", but then I want to use, let's say, docker-compose and I have to add another one. I was thinking about adding a script to .bashrc that would go over the PATH variable, search for .exe files in every directory specified in it, and add an alias for every occurrence, but it does not seem to be a very clean solution (though I guess it would serve its purpose quite well).
Is there a simple and clean solution to achieve this?
I've faced the same problem when trying to use Docker for Windows from WSL.
Had plenty of existing shell scripts that run fine under Linux, and mostly under WSL too, until they fail due to docker: command not found. Changing docker to docker.exe everywhere would be too cumbersome and non-portable.
At first, tried a workaround with aliases in ~/.bashrc, as here:
shopt -s expand_aliases
alias docker=docker.exe
alias docker-compose=docker-compose.exe
But it requires every script to be run in interactive mode and still doesn't work within backticks without script modification.
Then tried exported bash functions in ~/.bashrc:
docker() { docker.exe "$@"; }
export -f docker
docker-compose() { docker-compose.exe "$@"; }
export -f docker-compose
This works. But it's still too tedious to add every needed exe.
Finally ended up with an easier symlink approach and a modified wslshim custom helper script.
Just add this once as ~/.local/bin/wslshim:
#!/bin/bash -x
cd ~/.local/bin && ln -s "`which $1.exe`" "$1" || ln -s "`which $1.ps1`" "$1" || ln -s "`which $1.cmd`" "$1" || ln -s "`which $1.bat`" "$1"
Make it executable: chmod +x ~/.local/bin/wslshim
Then adding any "alias" becomes as easy as typing two words:
$ wslshim docker
+ cd ~/.local/bin
++ which docker.exe
+ ln -s '/mnt/c/Program Files/Docker/Docker/resources/bin/docker.exe' docker
$ wslshim winrm
+ cd ~/.local/bin
++ which winrm.exe
+ ln -s '' winrm
ln: failed to create symbolic link 'winrm' -> '': No such file or directory
++ which winrm.ps1
+ ln -s '' winrm
ln: failed to create symbolic link 'winrm' -> '': No such file or directory
++ which winrm.cmd
+ ln -s /mnt/c/Windows/System32/winrm.cmd winrm
The script auto picks up the absolute path to any Windows executable in $PATH and symlinks it, without the extension, into ~/.local/bin, which also resides in $PATH on WSL.
This approach can easily be extended further to auto-link any exe in a given directory if needed, but linking the whole $PATH would be overkill.
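For example, a rough sketch of such an extension (the directory is an illustrative assumption) could loop over one Windows directory and create a shim for every exe found:
#!/bin/bash
# link-all-exes: create extension-less symlinks for every .exe in one Windows directory
for exe in "/mnt/c/Program Files/Docker/Docker/resources/bin"/*.exe; do
  name="$(basename "$exe" .exe)"
  ln -sf "$exe" "$HOME/.local/bin/$name"
done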
You should be able to simply add the directory containing the executable to your PATH. Use export to make it persist.
Command:
export PATH=$PATH:/path/to/directory/executable/is/located/in
On my Windows 10 the solution was to install Git Bash and Docker for Windows.
In this bash, when I type docker it works, for example docker ps.
I didn't need to make an alias or change the PATH.
You can download Git Bash from https://git-scm.com/download/win, then from the Start button, search for "git bash".
Hope this solution works for you.

How to run a programme inside a virtual environment from a script

I have set up the google assistant sdk on my Raspberry Pi as shown here: https://developers.google.com/assistant/sdk/prototype/getting-started-pi-python/run-sample
Now, in order to re-run the assistant, I have worked out that the two commands are
$ source env/bin/activate
and
(env) $ google-assistant-demo
however I want to automate this process into a script that I can call from rc.local (followed by an &) in order to make the assistant start at boot.
However, if I run a simple script
#!/bin/bash
source env/bin/activate
google-assistant-demo
the assistant does not run inside the environment
my environment path is /home/pi/env/bin/activate
How can I have it so the script starts the environment and then runs the assistant inside the virtual environment?
Found a solution here: https://raspberrypi.stackexchange.com/a/45089
Create a startup shell script in your root directory (I named mine "launch") and make it executable too:
sudo nano launch.sh
I wrote it this way:
#!/bin/bash
source /home/pi/env/bin/activate
/home/pi/env/bin/google-assistant-demo
Save the file
Edit the LXDE-pi autostart file
sudo nano /home/pi/.config/lxsession/LXDE-pi/autostart
Add this to the bottom of that file
./launch.sh
reboot
Scripts run from rc.local execute in the root directory (or possibly in the home directory of the root user, depending on the distro, I think?)
The easy fix is to hard-code the full path to the environment.
#!/bin/bash
source /home/pi/env/bin/activate
google-assistant-demo
# or maybe /home/pi/google-assistant-demo
There is no need to explicitly background anything in rc.local
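Putting this together with the question's plan, a minimal rc.local sketch might look like the following (the script name /home/pi/assistant.sh and starting it via su as the pi user are illustrative assumptions, not part of the answer):
# in /etc/rc.local, before the final "exit 0"
# start the assistant as the pi user, using the script's full path
su pi -c '/home/pi/assistant.sh &'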
In the end I went with the following method, using this as a base: https://youtu.be/ohUszBxuQA4?t=774 (thanks to Eric Parisot), though with a few changes.
You will need to download the src file he uses and extract its contents into /home/pi/src/.
I did not run gassist.sh as sudo, as it gave me the following error:
OpenAlsaHandle PcmOpen: No such file or directory
[7689:7702:ERROR:audio_input_processor.cc(756)] Input error
ON_MUTED_CHANGED:
{‘is_muted’: False}
ON_START_FINISHED
ON_ASSISTANT_ERROR:
{‘is_fatal’: True}
[7689:7704:ERROR:audio_input_processor.cc(756)] Input error
ON_ASSISTANT_ERROR:
{‘is_fatal’: True}
Fix: DO NOT run as sudo
If gassist.sh gives an error about RPi.GPIO you need to do https://youtu.be/ohUszBxuQA4?t=580:
$ cd /home/pi/env/bin
$ source activate
(env) $ pip install RPi.GPIO
(env) $ deactivate
And then I did sudo nano /etc/profile and appended this to the end:
#Harvs was here on 24/06/17
if pidof -x "gassist.sh" >/dev/null; then
  echo ""
  echo "/etc/profile says:"
  echo "An instance of Google Assistant is already running, will not start again"
  echo ""
else
  echo "Starting Google Assistant..."
  echo "If you are seeing this, perhaps you have SSH within seconds of reboot"
  /home/pi/src/gassist.sh &
fi
And now it works perfectly, inside the virtual environment, and in boot-to-CLI mode! :)

Activate virtualenv INSIDE my current bash (no subprocess)

I want to have a generic BASH script which activates the virtual environment inside a given folder.
The script should be able to be called from any folder I have a virtual environment in.
If there is no virtual environment, it should create one and install the pip requirements.
I can only run the activation as a subprocess of my original BASH (see --rcfile), not inside it. Just source-ing it is not working!
That's my current script:
#!/bin/bash -e
# BASH script to run virtual environment
# Show errors
set -x
DIR_CURRENT="$PWD"
DIR_VIRTUAL_ENV="$PWD/venv"
FILE_PYTHON="/usr/bin/python2.7"
FILE_REQUIREMENTS="requirements.txt"
FILE_VIRTUAL_ACTIVATE_BASH="$DIR_VIRTUAL_ENV/bin/activate"
# CD to current folder
cd ${DIR_CURRENT}
echo DIR: $(pwd)
# Create the virtual environment if not existing
if [ ! -d ${DIR_VIRTUAL_ENV} ]; then
virtualenv -p ${FILE_PYTHON} ${DIR_VIRTUAL_ENV}
chmod a+x ${FILE_VIRTUAL_ACTIVATE_BASH}
source ${FILE_VIRTUAL_ACTIVATE_BASH}
pip install -r ${FILE_REQUIREMENTS}
fi
/bin/bash --rcfile "$FILE_VIRTUAL_ACTIVATE_BASH"
# Disable errors
set +x
I use Mac OSX 10.10.5 and Python 2.7.
Sadly, the existing questions 1, 2, 3 couldn't answer my problem.
First, what you are trying to do has already been nicely solved by a project called virtualenvwrapper.
About your question: use a function instead of a script. Place this into your .bashrc, for example:
function enter_venv(){
# BASH script to run virtual environment
# Show errors
set -x
DIR_CURRENT="$PWD"
DIR_VIRTUAL_ENV="$PWD/venv"
FILE_PYTHON="/usr/bin/python2.7"
FILE_REQUIREMENTS="requirements.txt"
FILE_VIRTUAL_ACTIVATE_BASH="$DIR_VIRTUAL_ENV/bin/activate"
# CD to current folder
cd ${DIR_CURRENT}
echo DIR: $(pwd)
# Create the virtual environment if not existing
if [ ! -d ${DIR_VIRTUAL_ENV} ]; then
virtualenv -p ${FILE_PYTHON} ${DIR_VIRTUAL_ENV}
chmod a+x ${FILE_VIRTUAL_ACTIVATE_BASH}
source ${FILE_VIRTUAL_ACTIVATE_BASH}
pip install -r ${FILE_REQUIREMENTS}
fi
/bin/bash --rcfile "$FILE_VIRTUAL_ACTIVATE_BASH"
# Disable errors
set +x
}
Then you can call it like this:
enter_venv
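Note that the function above still opens a subshell via --rcfile. If the goal is to end up with the virtualenv active in your current shell, a shorter variant of the same idea (a sketch using the question's own paths and file names) simply sources the activate script from inside the function, since functions run in the calling shell:
function enter_venv(){
  DIR_VIRTUAL_ENV="$PWD/venv"
  # create the virtual environment and install requirements on first use
  if [ ! -d "${DIR_VIRTUAL_ENV}" ]; then
    virtualenv -p /usr/bin/python2.7 "${DIR_VIRTUAL_ENV}"
    source "${DIR_VIRTUAL_ENV}/bin/activate"
    pip install -r requirements.txt
  else
    # because this runs in the current shell, the venv stays active after the call
    source "${DIR_VIRTUAL_ENV}/bin/activate"
  fi
}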

Applying sudo to some commands in a script

I have a bash script that partially needs to run with default user rights, but there are some parts that involve using sudo (like copying stuff into system folders). I could just run the script with sudo ./script.sh, but that messes up file ownership and access rights whenever the script creates or modifies files.
So, how can I run a script that uses sudo only for some commands? Is it possible to ask for the sudo password at the beginning (when the script just starts) but still run some lines of the script as the current user?
You could add this to the top of your script:
while ! echo "$PW" | sudo -S -v > /dev/null 2>&1; do
read -s -p "password: " PW
echo
done
That ensures the sudo credentials are cached for 5 minutes. Then you could run the commands that need sudo, and just those, with sudo in front.
Edit: Incorporating mklement0's suggestion from the comments, you can shorten this to:
sudo -v || exit
The original version, which I adapted from a Python snippet I have, might be useful if you want more control over the prompt or the retry logic/limit, but this shorter one is probably what works well for most cases.
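As a sketch of how this fits into a script (the keep-alive loop and the example commands are a common pattern, not part of the original answer), the cached credentials can then be used selectively:
#!/bin/bash
# ask for the sudo password up front; abort if it is not granted
sudo -v || exit 1
# optional: refresh the cached credentials while the script is still running
while true; do sudo -n true; sleep 60; kill -0 "$$" || exit; done 2>/dev/null &

./configure && make        # runs as the current user
sudo make install          # only this line runs as root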
Each line of your script is a command line. So, for the lines you want, you can simply put sudo in front of those lines of your script. For example:
#!/bin/sh
ls *.h
sudo cp *.h /usr/include/
echo "done" >>log
Obviously I'm just making stuff up. But, this shows that you can use sudo selectively as part of your script.
Just as with using sudo interactively, you will be prompted for your user password if you haven't entered it recently.
