Bash: check if a script is running with the exact same options

I know how to check if a script is already running (if pidof -o %PPID -x "scriptname.sh"; then...). But now I have a script that accepts inputs as flags, so it can be used in several different scenarios, many of which will probably run at the same time.
Example:
/opt/scripts/backup/tar.sh -d /directory1 -b /backup/dir -c /config/dir
and
/opt/scripts/backup/tar.sh -d /directory2 -b /backup/dir -c /config/dir
The above runs a backup script that I wrote, and the flags are the parameters for the script: the directory being backed up, the backup location, and the configuration location. The above example are two different backups (directory 1 and directory 2) and therefore should be allowed to run simultaneously.
Is there any way for a script to check if it is being run and check if the running version is using the exact same parameters/flags?

The ps -Af command will show you every process running on your OS, along with the command line used to start it.

One solution:
# The [o] in the pattern stops grep from matching its own command line.
if ps auxwww | grep '/[o]pt/scripts/backup/tar.*/directory2'; then
    echo "running"
else
    echo "NOT running"
fi
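If you need the match to cover the exact argument list rather than a single distinguishing directory, here is a hedged sketch using pgrep -f (from procps). The full command line pgrep sees may include an interpreter prefix, and arguments containing regex metacharacters would need escaping, so treat this as a starting point rather than a drop-in lock:
#!/bin/bash
# Exit if another instance of this script was started with the exact
# same arguments. pgrep -f matches against the full command line, so
# "$0 $*" must appear in it verbatim.
for pid in $(pgrep -f -- "$0 $*"); do
    if [ "$pid" != "$$" ] && [ "$pid" != "$PPID" ]; then
        echo "Already running with these options (pid $pid)" >&2
        exit 1
    fi
done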

Related

Running an if statement in shell script as a single line with docker -c option

I need to run the code below as a single line, in the form docker run -it image_name -c \bin\bash --script, where --script is the following
(dir and dockerImageName being parameters)
'''cd ''' + dir + ''' \
&& if make image ''' + dockerImageName + ''' 2>&1 | grep -m 1 "No rule to make target"; then
exit 1
fi'''
How can this be run as a single line?
You can abstract all of this logic into your higher-level application. If you can't do this, write a standard shell script and COPY it into your image.
The triple quotes look like Python syntax. You can break this up into three parts:
The cd $dir part specifies the working directory for the subprocess;
make ... is an actual command to run;
You're inspecting its output for some condition.
In Python, you can call subprocess.run() with an array of arguments and specify these various things at the application level. The array of arguments isn't reinterpreted by a shell and so protects you from this particular security issue. You might run:
import subprocess

completed = subprocess.run(['make', 'image', dockerImageName],
                           cwd=dir,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.STDOUT,
                           text=True)  # decode bytes so the substring test below works
if 'No rule to make target' in completed.stdout:
    ...
If you need to do this as a shell script, doing it as a proper shell script and making sure to quote your arguments again protects you.
#!/bin/sh
set -e
cd "$1"
if make image "$2" 2>&1 | grep -m 1 "No rule to make target"; then
    exit 1
fi
You should never construct a command line by combining strings in the way you've shown. This makes you vulnerable to a shell injection attack. Especially if an attacker knows that the user has permissions to run docker commands, they can set
dir = '.; docker run --rm -v /:/host busybox cat /host/etc/shadow'
and get a file of encrypted passwords they can crack at their leisure. Pretty much anything else is possible once the attacker uses this technique to get unlimited root-level read/write access to the host filesystem.
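If the logic has to stay in shell, here is a sketch of the safer layout (the image tag my-build-image and the path /usr/local/bin/check-make.sh are hypothetical names): COPY the quoted script above into the image and pass the two values as arguments, so no shell ever re-parses them.
# Dockerfile: COPY check-make.sh /usr/local/bin/check-make.sh
# Invocation: the values travel as argv entries, not as command-string text.
docker run --rm my-build-image /usr/local/bin/check-make.sh "$dir" "$image_name"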

In this simple Docker wrapper script example, how may one correctly pass a current working directory path which contains spaces?

My Docker wrapper script works as intended when the current working directory does not contain spaces, however there is a bug when it does.
I have simplified an example to make use of the smallest official Docker image I could find and a well known GNU core utility. Of course this example is not very useful. In my real world use case, a much more complicated environment is packaged.
Docker Wrapper Script:
#!/usr/bin/env bash
##
## Dockerized ls
##
set -eux
# Only allocate tty if one is detected
# See https://stackoverflow.com/questions/911168/how-to-detect-if-my-shell-script-is-running-through-a-pipe
if [[ -t 0 ]]; then
    DOCKER_RUN_OPTIONS+="-i "
fi
if [[ -t 1 ]]; then
    DOCKER_RUN_OPTIONS+="-t "
fi
WORK_DIR="$(realpath .)"
DOCKER_RUN_OPTIONS+="--rm --user=$(id -u $(logname)):$(id -g $(logname)) --workdir=${WORK_DIR} --mount type=bind,source=${WORK_DIR},target=${WORK_DIR}"
exec docker run ${DOCKER_RUN_OPTIONS} busybox:latest ls "$@"
You can save this somewhere as /tmp/docker_ls for example. Remember to chmod +x /tmp/docker_ls
Now you are able to use this Dockerized ls in any path which contains no spaces as follows:
/tmp/docker_ls -lah
/tmp/docker_ls -lah | grep 'r'
Note that /tmp/docker_ls -lah /path/to/something is not implemented. The wrapper script would have to be adapted to parse parameters and mount the path argument into the container.
Can you see why this does not work when the current working directory path contains spaces? What can be done to rectify it?
Solution:
@david-maze's answer solved the problem. Please see: https://stackoverflow.com/a/55763212/1782641
Using his advice I refactored my script as follows:
#!/usr/bin/env bash
##
## Dockerized ls
##
set -eux
# Only allocate tty if one is detected. See - https://stackoverflow.com/questions/911168
if [[ -t 0 ]]; then IT+=(-i); fi
if [[ -t 1 ]]; then IT+=(-t); fi
USER="$(id -u $(logname)):$(id -g $(logname))"
WORKDIR="$(realpath .)"
MOUNT="type=bind,source=${WORKDIR},target=${WORKDIR}"
exec docker run --rm "${IT[@]}" --user "${USER}" --workdir "${WORKDIR}" --mount "${MOUNT}" busybox:latest ls "$@"
If your goal is to run a process on the current host directory as the current host user, you will find it vastly easier and safer to use a host process, and not an isolation layer like Docker that intentionally tries to hide these things from you. For what you’re showing I would just skip Docker and run
#!/bin/sh
ls "$@"
Most software is fairly straightforward to install without Docker, either using a package manager like APT or filesystem-level isolation like Python’s virtual environments and Node’s node_modules directory. If you’re writing this script then Docker is just getting in your way.
In a portable shell script there’s no general way to build up “a list of words” in a variable while keeping each word intact. If you know you’ll always want to pass some troublesome options, then this is still fairly straightforward: include them directly in the docker run command and don’t try to collect the options in a variable.
#!/bin/sh
RM_IT="--rm"
# [ ... ] is the POSIX test; [[ ... ]] would not work under plain sh
if [ -t 0 ]; then RM_IT="$RM_IT -i"; fi
if [ -t 1 ]; then RM_IT="$RM_IT -t"; fi
# Avoid assigning to UID/GID: several shells make those read-only
user_id=$(id -u "$(logname)")
group_id=$(id -g "$(logname)")
# We want the --rm -i -t options to be expanded into separate
# words; we want the volume options to stay as a single word
docker run $RM_IT "-u$user_id:$group_id" "-w$PWD" "-v$PWD:$PWD" \
    busybox \
    ls "$@"
Some shells like ksh, bash, and zsh have array types, but these shells may not be present on every system or environment (your busybox image doesn’t have any of these for example). You also might consider picking a higher-level scripting language that can more explicitly pass words into an exec type call.
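That said, POSIX sh does provide exactly one safe list: the positional parameters, which set -- can rebuild. Here is a sketch of the same wrapper using that trick (the logname handling from above is omitted for brevity); pieces are prepended in reverse order so the script's original arguments stay at the end, each word intact even when $PWD contains spaces:
#!/bin/sh
set -- busybox ls "$@"      # image, command, then the original arguments
[ -t 1 ] && set -- -t "$@"
[ -t 0 ] && set -- -i "$@"
set -- --rm -u"$(id -u):$(id -g)" -w"$PWD" -v"$PWD:$PWD" "$@"
exec docker run "$@"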
I'm taking a stab at this to give you something to try:
Change this:
DOCKER_RUN_OPTIONS+="--rm --user=$(id -u $(logname)):$(id -g $(logname)) --workdir=${WORK_DIR} --mount type=bind,source=${WORK_DIR},target=${WORK_DIR}"
To this:
DOCKER_RUN_OPTIONS+="--rm --user=$(id -u $(logname)):$(id -g $(logname)) --workdir=${WORK_DIR} --mount type=bind,source='${WORK_DIR}',target='${WORK_DIR}'"
Essentially, we are putting the ' in there to escape the space when the $DOCKER_RUN_OPTIONS variable is expanded by bash in the exec docker run command.
I haven't tried this - it's just a hunch / first shot.

Running programs only if they are installed, and ignoring them otherwise

When writing shell scripts, is there an idiom or swift way to run a program only if it is installed, and if it is not, just let it be (or handle the error in some other way apart from installing it)?
More specifically, I have a lot of servers which I access over ssh, and whenever I get a new server, I simply copy all my rc-files to it. The .zshrc starts tmux unless it is already running. Some of the servers (not all) do not have tmux installed. I do not want to install it because of disk space limitations, I do not want to have different rc-files for different servers, and I do not want my rc-files to be interrupted when executing them.
I have seen solutions involving apt-cache policy <package-name>, so I guess I could use that and pipe it to something like grep -e 'Installed: (none)', but that assumes the server runs Debian or Ubuntu, which I cannot rely on, and it would only work for packages installed with apt, not for things I have installed in other ways.
command -v <command> is the common (and POSIX) way to check if a command could be executed (is executable and on the $PATH).
E.g:
command -v tmux >/dev/null &&
tmux a -t name
(>/dev/null since, if the command exists, its path will be printed to STDOUT.)
It could be nice to put it in a reusable function:
maybe() {
    ! command -v "${1}" >/dev/null ||
        "$@"
}
Then one could use:
maybe tmux a -t name
And if tmux is available then tmux a -t name will be run, otherwise it’ll be silently ignored.
Or, if you want some feedback when a command is not available:
maybe() {
    if command -v "${1}" >/dev/null
    then
        "$@"
    else
        printf 'Command "%s" not available, skipping\n' "${1}" >&2
    fi
}
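Applied to the original use case, a sketch of the .zshrc snippet (the session name main is just an example; the $TMUX check stops tmux from nesting inside itself):
# Attach to an existing session, or create one, but only where tmux
# is installed and we are not already inside a tmux session.
if command -v tmux >/dev/null && [ -z "$TMUX" ]; then
    tmux attach -t main 2>/dev/null || tmux new -s main
fi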
This might help:
1) Assuming tmux is available in PATH (as it must be executable):
isAvailable=$(type -P tmux)
if [[ -x $isAvailable ]]; then
    ...
2) Verify a file is present at a specific path (e.g. when copying all the rc-files):
export FILEPATH="..."
if [[ -f $FILEPATH ]]; then
    ...

Add string to default SAS log file name from bash script

I'm trying to build a simple bash script to automate running some monthly SAS programs at work. The problem I'm running into is that we like to keep logs based on the day a program was run, in case the underlying data changes, but I can't find a way to append the date to the log file name.
My base code is as follows:
#!/bin/bash
month=`date +%Y%m -d "1 month ago"` #Previous month for log folder
sysdate=`date "+%Y_%m_%d"` #today's date
sasbatdir=/c01/sasdata/public
sasdir=/n04/directory-where-programs-are
saslog=/n04/directory-where-programs-are/Log/$month
cd $sasdir
$sasbatdir/batchsas.sh -s PROGRAM_01.sas -o $saslog -k traditional
$sasbatdir/batchsas.sh -s PROGRAM_02.sas -o $saslog -k traditional
$sasbatdir/batchsas.sh -s PROGRAM_03.sas -o $saslog -k traditional
... etc
exit 0
The above works, but it obviously only outputs log files named in the PROGRAM_01.log, PROGRAM_02.log, etc. format, and these get overwritten the next time the script is run in that month.
Things I have tried:
$sasbatdir/batchsas.sh -s PROGRAM_01.sas -o $saslog/PROGRAM_01_"$sysdate".log -k traditional
and
$sasbatdir/batchsas.sh -s PROGRAM_01.sas -log $saslog/PROGRAM_01_"$sysdate".log -k traditional
Neither works: nohup returns an "Output directory not found" error and appears to treat the log name as a directory instead of a file.
$sasbatdir/batchsas.sh -s PROGRAM_01.sas -o -t 1 > $saslog/PROGRAM_01_"$sysdate".log -k traditional 2>&1
Mostly works, but produces two log files: one with the correct name but containing only the nohup output, and the other with the SAS log but with both the date (in the wrong format) and the job ID appended. Removing the 2>&1 prevents either from being written. I'd honestly take the second one if I could figure out how to produce it without the first, though I would prefer to stick to the Program_Name_YYYY_MM_DD.log format.
In case it's relevant, the command I'm using to test the programs is nohup /n04/directory-where-program-is-stored/test_script.sh
I would add the following command:
exec > "$saslog/PROGRAM_01_${sysdate}.log" 2>&1
It would be preferable to add this command inside the batchsas.sh script, but if that is not possible, you can add this line before every batchsas.sh call, with a new log file each time.
This command redirects stdout and stderr to the indicated file, regardless of whether the script was launched from the command line, from crontab, or with nohup.
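If batchsas.sh cannot be modified, here is a hedged variant of the original loop that gives each program its own dated log. It assumes batchsas.sh writes the SAS log to stdout/stderr (which the nohup experiment above suggests) and drops the -o option, whose output-directory handling caused the error:
for prog in PROGRAM_01 PROGRAM_02 PROGRAM_03; do
    "$sasbatdir/batchsas.sh" -s "$prog.sas" -k traditional \
        > "$saslog/${prog}_${sysdate}.log" 2>&1
done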

No display of variable data on mail output - Shell scripting

I've scheduled a task in a UNIX environment, which sends a report of services running/stopped using Shell scripting. Here is the code for same;
#!/bin/bash
echo -e "\t\tServer daily monitoring report\n">/home/user/MailLog.txt
echo -e "\t\t`date "+%Y-%m-%d %H:%M:%S"`\n">>/home/user/MailLog.txt
sudo bash /home/user/commands.sh>>/home/user/MailLog.txt
echo >>/home/user/MailLog.txt
cat /home/user/MailLog.txt>>/home/user/StatusLog.txt
rn=`grep -c "running" MailLog.txt`
sp=`grep -c "stopped" MailLog.txt`
echo -e "Server status report\n\nServices running:\t $rn \nServices
stopped:\t $sp "|mailx -v -s "Services report." -a /home/user/MailLog.txt
useremail1#domain.com,useremail2#domain.com
#echo $run $stp
#rm /home/user/MailLog.txt
As per scheduled task, I receive the mail and attachment alright. But I get a blank in front of 'Services running: ' and 'Services stopped: '.
When I manually run the script, I get the proper output (numbers + attachment).
Please tell me what I'm doing wrong.
Replace MailLog.txt with /home/user/MailLog.txt in both grep commands. It's very likely that you usually run the commands manually from the /home/user/ directory, but the script's working directory isn't /home/user, which makes the relative path MailLog.txt point to a nonexistent file.
rn=$(grep -c "running" /home/user/MailLog.txt)
sp=$(grep -c "stopped" /home/user/MailLog.txt)
Better yet, set the file path in a variable and reuse it each time you want to refer to the file:
work_file="/home/user/MailLog.txt"
#[...]
rn=$(grep -c "running" "$work_file")
sp=$(grep -c "stopped" "$work_file")
Note that your code could be improved in many other ways; I suggest you validate it with shellcheck (you can ignore the sudo+redirect warning, since your user has write permissions to the MailLog.txt file).
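For instance, with the path fix applied, the mail step might look like this (the addresses and paths are the ones from the question):
work_file="/home/user/MailLog.txt"
rn=$(grep -c "running" "$work_file")
sp=$(grep -c "stopped" "$work_file")
printf 'Server status report\n\nServices running:\t%s\nServices stopped:\t%s\n' "$rn" "$sp" |
    mailx -s "Services report." -a "$work_file" useremail1@domain.com,useremail2@domain.com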
