Run a set of commands as sudo in a shell script

I want the following in the middle of my shell script:
---
sudo -u user1
cp /files ./ (these files are only accessible to user1)
exit
---
then continue as me.
The problem is that everything runs fine up to the sudo line, but the commands after sudo are never executed.

You are using sudo incorrectly. Try:
sudo -u user1 cp /files ./
There's no need to split it into two lines (and indeed doing so is wrong). The "exit" exits your script, so nothing after it is executed.
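If several commands have to run as user1, a minimal sketch (assuming user1's sudo rules allow running a shell; second_command is a placeholder) is to group them into a single shell invocation:
# run the whole group as user1; the script continues as the calling user afterwards
sudo -u user1 sh -c 'cp /files ./ && second_command'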

Related

How to catch SIGTERM properly in Docker?

I have a docker container created by the following Dockerfile:
ARG TAG=latest
FROM continuumio/miniconda3:${TAG}
ARG GROUP_ID=1000
ARG USER_ID=1000
ARG ORG=my-org
ARG USERNAME=user
ARG REPO=none
ARG COMMIT=none
ARG BRANCH=none
ARG MAKEAPI=True
RUN addgroup --gid $GROUP_ID $USERNAME
RUN adduser --uid $USER_ID --disabled-password --gecos "" $USERNAME --ingroup $USERNAME
COPY . /api_maker
RUN /opt/conda/bin/pip install pyyaml psutil packaging
RUN apt install -y openssh-client git
RUN mkdir -p -m 0700 ~/.ssh && ssh-keyscan github.com >> ~/.ssh/known_hosts
ENV GIT_SSH_COMMAND="ssh -i /run/secrets/thekey"
RUN --mount=type=secret,id=thekey git clone git@github.com:$ORG/$REPO.git /repo
RUN /opt/conda/bin/python3 /api_maker/repo_setup.py $BRANCH $COMMIT
RUN /repo/root_script.sh
RUN chown -R $USERNAME:$USERNAME /api_maker
RUN chown -R $USERNAME:$USERNAME /repo
RUN mkdir -p /data
RUN chown -R $USERNAME:$USERNAME /data
RUN mkdir -p /working
RUN chown -R $USERNAME:$USERNAME /working
RUN mkdir -p /opt/conda/pkgs
RUN mkdir -p /opt/conda/envs
RUN chmod -R 777 /opt/conda
RUN touch /opt/conda/pkgs/urls.txt
USER $USERNAME
RUN /api_maker/user_env_setup.sh $MAKEAPI
CMD /repo/run_api.sh $@;
with the following run_api.sh script:
#!/bin/bash
cd /repo
PROCESSES=${1:-9}
LOCAL_DOCKER_PORT=${2:-7001}
exec /opt/conda/envs/environment/bin/gunicorn --bind 0.0.0.0:$LOCAL_DOCKER_PORT --workers=$PROCESSES restful_api:app
My app contains some signal handling. If I manually send SIGTERM to gunicorn (either the worker or the parent process) from inside the container, my signal handling works properly. However, it does not work right when I run docker stop on the container. How can I make my shell script properly forward the SIGTERM it is supposedly receiving?
You need to make sure the main container process is your actual application, and not a shell wrapper.
As you have the CMD currently (shell form), a shell invokes it. The argument list $@ will always be empty. The shell parses /repo/run_api.sh, sees that it's followed by a semicolon and might therefore have something else to do afterwards, and so stays resident. So even though your script correctly ends with exec gunicorn ... to hand off control directly to the other process, the script itself is still running underneath a shell, and when you docker stop the container, the signal goes to that shell wrapper.
The easiest way to avoid this shell is to use an exec form CMD:
CMD ["/repo/run_api.sh"]
This will cause your script to run directly, without a /bin/sh -c wrapper invoking it, and when the script eventually execs another process, that process becomes the main container process and receives the docker stop signal.
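If you still want to pass the worker count and port, the exec form accepts literal arguments (a sketch using the defaults already built into run_api.sh):
# exec form: no shell wrapper; arguments go straight to the script
CMD ["/repo/run_api.sh", "9", "7001"]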

Running "sudo su" within a gitlab pipeline

I've installed some software on a server that my GitLab runner SSHes to, and one of the commands needs to be run after doing sudo su. If I run it as a regular user with sudo in front of it, it doesn't work; I have to completely switch to the other user first.
This works fine when I SSH into the server and do the commands manually. But when I try it from the pipeline (rough code below):
my_script:
  stage: stage
  script:
    - ssh -o -i id_rsa -tt user@1.1.1.1 << EOF
    - sudo su
    - run_special_command <blah blah>
    - exit
    # above exits from the SSH. below should stop the pipeline
    - exit 0
    - EOF
I get very weird output like the below:
$ sudo su
[user@1.1.1.1 user]$ sudo su
echo $'\x1b[32;1m$ run_special_command <blah blah>\x1b[0;m'
run_special_command <blah blah>
echo $'\x1b[32;1m$ exit\x1b[0;m'
exit
echo $'\x1b[32;1m$ exit 0\x1b[0;m'
exit 0
echo $'\x1b[32;1m$ EOF\x1b[0;m'
And what I'm seeing is that it doesn't even run the command at all - and I can't figure out why.
In this case, you need to put your script as a multi-line string in your YAML. Alternatively, commit a shell script to the repo and execute that.
and one of the commands needs to be run after doing sudo su. If I run it as a regular user, but with sudo in front of it - it doesn't work.
As a side note, you can probably use sudo -E instead of sudo su before the command. But what you have should also work with the multi-line script.
MyJob:
  script: |
    ssh -o -i id_rsa -tt user@host << EOF
    sudo -E my_command
    EOF
    exit 0
Alternatively, write your script into a shell script committed to the repository (with executable permissions set) and run it from your job:
MyJob:
  script: "my_script.sh"
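For that second variant, a hypothetical my_script.sh might look like this (host, key, and command names are taken from the question; an untested sketch):
#!/bin/bash
# committed to the repo with executable permission (chmod +x my_script.sh)
ssh -i id_rsa -tt user@host << EOF
sudo -E my_command
exit
EOF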

Running a bash script which runs pkill -f using sudo, taking the password automatically

I want to be able to run a bash script which calls the command sudo pkill -f "test.py".
How can I do this so that the password is supplied automatically when the command is launched?
Is there a way of hiding my password so it can't be read from the script, masked in some **** manner?
Better to not use a password at all for this particular root task, by configuring sudo properly.
All of the following as root:
Create a script containing:
#!/bin/sh
/usr/bin/pkill -f "test.py"
Save it as /path/to/script, then:
chmod 700 /path/to/script
chown root:root /path/to/script
Run visudo as root and add, for example:
# /etc/sudoers
# blah...
yourusername ALL = (ALL:ALL) NOPASSWD: /path/to/script
Then,
sudo /path/to/script
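Alternatively, sudoers can whitelist one exact command line, arguments included; the root-owned wrapper script above is still safer, since its contents can't be altered by the unprivileged user. A sketch (adjust the pkill path for your system):
# /etc/sudoers (edit via visudo)
yourusername ALL = (root) NOPASSWD: /usr/bin/pkill -f test.py
With this, sudo /usr/bin/pkill -f test.py runs without a password prompt, but only with exactly those arguments.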

Can't execute script from within rc.local

Within rc.local I have
sudo -H -u myUser -s -- "cd /home/myUser/parlar && /usr/local/bin/meteor &"
I want to test it, but when I execute it with
myUser:~$ sudo service rc.local start
/bin/bash: cd /home/myUser/parlar && /usr/local/bin/meteor &: No such file or directory
If I execute the command
cd /home/myUser/parlar && /usr/local/bin/meteor &
it works
How can I execute rc.local so that it changes into the relevant directory, and runs the command as the requested user?
Whatever arguments you give to sudo after -- are treated as the command and its arguments.
There is no command/executable named "cd /home/myUser/parlar && /usr/local/bin/meteor &". You can, however, start bash and run the command within that bash shell.
e.g.
sudo -H -u myUser -s -- bash -c "cd /home/myUser/parlar && /usr/local/bin/meteor &"
Since the first command is only a cd, newer sudo releases (1.9.3 and later) offer another option: the --chdir flag sets the working directory directly, provided the sudoers policy permits it (runcwd):
sudo -H -u myUser --chdir=/home/myUser/parlar -- /usr/local/bin/meteor
To see the log of rc.local itself, run:
systemctl restart rc-local.service
systemctl status rc-local.service
This may be helpful for further troubleshooting.
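Assuming the rc-local compatibility unit is what runs your rc.local (an assumption; check with systemctl list-units), the script's full output is also captured in the journal:
# show everything rc.local printed, including the /bin/bash error above
journalctl -u rc-local.service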

Bash script stops execution in the middle of the script without any error

I have this simple bash script that gets a copy from my dev server:
#!/bin/sh
DATE=`date +%Y-%m-%d_%H%M.%S`
BASEDIR="/var/www/db"
RELEASEDIR="$DATE";
RELEASEDIRFULL="$BASEDIR/releases/$RELEASEDIR"
mkdir -p "$RELEASEDIRFULL"
echo "Chdir to \"$RELEASEDIRFULL\""
cd "$RELEASEDIRFULL"
echo "Getting copy from dev"
ssh dev.example.tld "cd /tmp; cd /sites/db; tar -zcvp --exclude data --exclude scripts -f - *" | tar zxvpf -
ln -s /var/www/db/data data
ln -s /var/www/db/scripts scripts
cd $BASEDIR
rm htdocs; ln -s releases/$RELEASEDIR htdocs
Recently it stopped working properly for no apparent reason. It gets to the ssh line and executes it fine (the files appear on the live server), but does not proceed with the ln commands. If I comment the ssh line out, the ln lines are executed properly.
UPDATE: I noticed that when I'm logged on as www-data and start the script, it completes as expected, without errors.
No time to check the man page, but it looks like your tar input is "- *": all files plus stdin? Did you mean "--" to stop further argument processing (if tar supports that)?
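Following that suggestion, the remote command could be written with the file list explicitly separated from the options by -- (an untested sketch; whether this changes the behavior is unverified):
ssh dev.example.tld "cd /sites/db && tar -zcvp --exclude data --exclude scripts -f - -- *" | tar zxvpf -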
