Lockfile into a bash script with parameters - bash

I have to create a bash script that checks whether other instances of the same script are already running. To do that, I implemented this solution:
scriptToVerify="sl_dynamic_procedure.sh_${1}";
LOCKFILE=${SL_ROOT_FOLDER}/work/$scriptToVerify
if [ -e ${LOCKFILE} ] && kill -0 `cat ${LOCKFILE}`; then
sl_log "---------------------------Warning---------------------------"
sl_log "$scriptToVerify already in execution"
exit
fi
trap "rm -f ${LOCKFILE}; exit" INT TERM EXIT
echo $$ > ${LOCKFILE}
I added ${1} because my script takes a parameter.
If I run the script without the parameter (without ${1}), it works correctly. If I run the script with the parameter more than once, it sometimes works and sometimes doesn't. How can I fix my code?

First, did you want to allow the script to execute even if another copy is running, so long as they have different arguments? Without knowing what the script does and what the argument is I can't know if that's sensible, but in general it looks like you're buying trouble.
Second, using a lockfile is common, but subject to a race condition. Much better to make the creation of the lock and the test for it a single atomic action. This is almost impossible with a file, but is really easy with a directory.
myLock=/tmp/ThisIsMySingleLockDirectoryName
lockMe()   { mkdir "$myLock" 2>&-; }
unlockMe() { rmdir "$myLock" 2>&-; }

if lockMe
then : do stuff
     unlockMe
else echo "Can't get a lock."
     exit 1
fi
This is simplistic, as it throws away the stderr, and doesn't test reasons... but you get the idea.
The point is that the creation of the directory returns an error if it already exists.
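Adapted to the original question, a minimal sketch of that directory-based lock, with one lock per argument, might look like the following (the paths and names are assumptions taken from the question's code, and plain echo stands in for sl_log):
#!/bin/bash
# Minimal sketch of a per-argument mkdir lock, following the question's naming.
# SL_ROOT_FOLDER and the sl_dynamic_procedure.sh name are assumptions.
scriptToVerify="sl_dynamic_procedure.sh_${1}"
lockDir="${SL_ROOT_FOLDER}/work/${scriptToVerify}.lock"

lockMe()   { mkdir "$lockDir" 2>/dev/null; }
unlockMe() { rmdir "$lockDir" 2>/dev/null; }

if ! lockMe; then
    echo "$scriptToVerify already in execution" >&2
    exit 1
fi
trap unlockMe INT TERM EXIT

# ... do the real work here ...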

Related

what does "if { set -C; 2>/dev/null >~/test.lock; }" in a bash mean?

I have encountered this in a bash script:
if { set -C; 2>/dev/null >~/test.lock; }; then
    echo "Acquired lock"
else
    echo "Lock file exists… exiting"
    exit 1
fi
It takes the else branch. I know set -C prevents overwriting files, and 2>/dev/null means something like "redirect errors to the void", but then there is >~/test.lock, which must redirect something into the lock file (the errors, probably?). I do have the file test.lock in my home directory, created and empty. Since this is an if, the condition must be returning false in my case.
{ ... ; ... ; } is a compound command: bash executes every command in it, and the exit code is that of the last one.
It is a bit like ( ... ; ... ), except that ( starts a subshell (a bit like sh -c "... ; ..."), which is less efficient and, moreover, prevents the commands from affecting variables of your current shell, for example.
So, in short, { set -C; 2>/dev/null >~/test.lock; } means "do set -C, then do 2>/dev/null >~/test.lock, and the return (exit code) is that of the last command".
So if { set -C; 2>/dev/null >~/test.lock; } means "if 2>/dev/null >~/test.lock succeeds inside that compound command, that is, after set -C".
Now, set -C means that you can't overwrite existing files via redirection.
And 2>/dev/null >~/test.lock is an attempt to overwrite test.lock if it exists, or to create it if it doesn't.
So, what you have here is:
If the lock file already exists, fail and say "lock file exists, exiting".
If the lock file does not exist, create it and say "lock acquired".
And it does this in one operation.
So it is different than
# Illustration of how NOT to do it. Do not use this code :-)
if [[ -f ~/test.lock ]]
then
    echo "lock file exists, exiting"
else
    2>/dev/null > ~/test.lock
    echo "lock file acquired"
fi
because that clearer, but wrong, version does not guarantee that nothing will have created the lock file between the evaluation of the if condition and the execution of 2>/dev/null > ~/test.lock.
The version you've shown has the advantage that creating the lock and testing for it are a single operation.
set -C disallows overwriting existing files via redirection.
2>/dev/null suppresses the error message.
>~/test.lock attempts to write to a file called test.lock. If the file already exists this returns an error because of set -C. Otherwise it will create a new test.lock file, making the next instance of this script fail on this step.
The purpose of lock files is to ensure that only one instance of a script runs at the same time. When the program is finished it could delete ~/test.lock to let another instance run.
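Putting both answers together, a minimal sketch of a complete noclobber-based lock with cleanup might look like this (the trap on EXIT is an addition, not part of the original snippet):
#!/bin/bash
lockfile=~/test.lock

if { set -C; 2>/dev/null >"$lockfile"; }; then
    echo "Acquired lock"
    # Delete the lock when the script exits so another instance can run.
    trap 'rm -f "$lockfile"' EXIT
else
    echo "Lock file exists... exiting"
    exit 1
fi

# ... do the actual work here ...
# Note: noclobber (set -C) stays in effect for the rest of the script.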

Exiting a shell-script at the end with a non-zero code if any command fails

I am making a shell script that runs a bunch of tests as part of a CI pipeline. I would like to run all of the tests (I DO NOT WANT TO EXIT EARLY if one test fails). Then, at the end of the script, I would like it to return a non-zero exit code if any of the tests failed.
Any help would be appreciated. I feel like this would be a very common use case, but I wasn't able to find a solution with a bit of research. I am pretty sure that I don't want set -e, since this exits early.
My current idea is to create a flag to keep track of any failed tests:
flag=0
pytest -s || flag=1
go test -v ./... || flag=1
exit $flag
This seems strange, and like more work than necessary, but I am new to bash scripts. Am I missing something?
One possible way would be to catch the non-zero exit codes via a trap on ERR. Assuming your tests don't contain pipelines (|) and return their exit codes straight to the invoking shell, you could do
#!/usr/bin/env bash

exitCodeArray=()

onFailure() {
    exitCodeArray+=( "$?" )
}

trap onFailure ERR

# Helper that sums its arguments; add your tests anywhere after this snippet.
addNumbers () {
    local IFS='+'
    printf "%s" "$(( $* ))"
}
Add your tests anywhere after the above snippet. Whenever a test returns a non-zero exit code, the trap appends that code to the array. For the final assertion we check whether the sum of the array elements is 0, because in the ideal case every test succeeds and nothing is ever appended. Before that check we reset the trap we set earlier:
trap '' ERR

if (( $(addNumbers "${exitCodeArray[@]}") )); then
    printf 'some of your tests failed\n' >&2
    exit 1
fi
The only way I could imagine using less code is if the shell had some sort of special all compound command that might look something like
# hypothetical all command
all do
pytest -s
go test -v ./...
done
whose exit status is the logical or of the exit statuses of the contained command. (An analogous any command would have the logical and of its commands' exit statuses as its own exit status.)
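Purely as an illustration, such an all could be approximated with a small function, though it can only run simple commands passed as strings, with no pipes, quoting, or redirections (the name all_of is invented here):
# Sketch only: run every command, remember if any failed.
all_of() {
    local rc=0 cmd
    for cmd in "$@"; do
        $cmd || rc=1
    done
    return "$rc"
}

all_of "pytest -s" "go test -v ./..." || echo "at least one test failed" >&2
A real compound command would handle arbitrary shell syntax, which a simple sketch like this cannot.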
Lacking such a command, your current approach is what I would use. You could adapt @melpomene's suggestion of a chk function (which I would call after a command, rather than having it call your command, so that it works with arbitrary shell commands):
chk () { flag=$(( flag | $? )); }
flag=0
pytest -s; chk
go test -v ./...; chk
exit "$flag"
If you aren't using it for anything else, you could abuse the DEBUG trap to update flag before each command.
trap 'flag=$((flag | $?))' DEBUG
pytest -s
go test -v ./...
exit "$flag"
(Be aware that a debug trap executes before the shell executes another command, not immediately after a command is executed. It's possible that the only time this matters is if you expect the trap to fire between the last command completing and the shell exiting, but it's still worth being aware of.)
I vote for Inian's answer. Traps seem like the perfect way to go.
That said, you might also streamline things by use of arrays.
#!/usr/bin/env bash
testlist=(
    "pytest -s"
    "go test -v ./..."
)

flag=0
for this in "${testlist[@]}"; do
    $this || flag=1
done
exit "$flag"
You could of course fetch the content of the array from another file, if you wanted to make a more generic test harness that could be used by multiple tools. Heck, mapfile could be a good way to populate an array.
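For instance, a sketch of that idea with mapfile, assuming a file named tests.txt with one test command per line:
#!/usr/bin/env bash
# Read one test command per line from tests.txt (hypothetical file name).
mapfile -t testlist < tests.txt

flag=0
for this in "${testlist[@]}"; do
    $this || flag=1
done
exit "$flag"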

Prevent other terminals from running a script while another terminal is using it

I would like to prevent other terminals from running a certain script while another terminal is already running it, but I'm not quite sure how to go about doing that in bash. Any help or tips would be greatly appreciated!
For example:
While the script is running in one terminal, every other terminal should be unable to run that script and should instead display the message "UNDER MAINTENANCE".
You can use the concept of a "lockfile." For example:
if [ -f ~/.mylock ]; then
    echo "UNDER MAINTENANCE"
    exit 1
fi
touch ~/.mylock
# ... the rest of your code
rm ~/.mylock
To get fancier/safer, you can "trap" the EXIT signal to remove it automatically at the end:
trap 'rm ~/.mylock' EXIT
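Put together, a minimal sketch might look like this (note that the existence test and the touch are still two separate steps, so the race condition discussed in the first question above remains):
#!/bin/bash
if [ -f ~/.mylock ]; then
    echo "UNDER MAINTENANCE"
    exit 1
fi
touch ~/.mylock
trap 'rm -f ~/.mylock' EXIT

# ... the rest of your code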
Use flock and put this on top of your script:
# Open a file descriptor on the lock file; flock needs either a descriptor
# to lock or a command to run under the lock.
exec 200>/path/to/lockfile
if ! flock -xn 200; then
    echo "script is already running."
    echo "Aborting."
    exit 1
fi
Note: the lock file could even be your script itself, which avoids creating an extra file; in that case open it read-only (exec 200<"$0", as in the last example below) so you don't truncate it.
To avoid race conditions, you could use flock(1) along with a
lock file. There is one flock(1) implementation
which claims to work on Linux, BSD, and OS X. I haven't seen one
explicitly for Unix.
There is some interesting discussion here.
UPDATE:
I found a really clever way from Randal L. Schwartz here. I really like this one. It relies on having flock(1) and bash, and it uses the script itself as its own lockfile. Check this out:
/usr/local/bin/onlyOne is a script to obtain the lock
#!/bin/bash

exec 200< "$0"
if ! flock -n 200; then
    echo "there can be only one"
    exit 1
fi
Then myscript uses onlyOne to obtain the lock (or not):
#!/bin/bash
source /usr/local/bin/onlyOne
# The real work goes here.
echo "${BASHPID} working"
sleep 120

Bash script does not quit on first "exit" call when calling the problematic function using $(func)

Sorry I cannot give a clear title for what's happening but here is the simplified problem code.
#!/bin/bash

# get the absolute path of .conf directory
get_conf_dir() {
    local path=$(some_command) || { echo "please install some_command first."; exit 100; }
    echo "$path"
}

# process the configuration
read_conf() {
    local conf_path="$(get_conf_dir)/foo.conf"
    [ -r "$conf_path" ] || { echo "conf file not found"; exit 200; }
    # more code ...
}

read_conf
So basically, what I am trying to do here is read a simple configuration file in a bash script, and I am having trouble with the error handling.
some_command is a command that comes from a third-party package (e.g. greadlink from coreutils), required to obtain the path.
When running the code above, I expect it to print "please install some_command first" because that's where the FIRST error occurs, but it actually always prints "conf file not found".
I am very confused by this behavior; bash presumably intends things to work this way, but I don't know why. And most importantly, how do I fix it?
Any idea would be greatly appreciated.
Do you see your please install some_command first message anywhere? Is it in $conf_path from the local conf_path="$(get_conf_dir)/foo.conf" line? Do you have a $conf_path value of please install some_command first/foo.conf, which then fails the -r test?
No, you don't. (But feel free to echo the value of $conf_path in that exit 200 block to confirm this fact.) (Also, error messages should, in general, be sent to standard error and not standard output anyway, i.e. echo "..." >&2. That way they aren't captured by the command substitution at all.)
The reason you don't is because that exit 100 block is never happening.
You can see this with set -x at the top of your script also. Go try it.
See what I mean?
The reason it isn't happening is that the failure return of some_command is being swallowed by the local path=$(some_command) assignment statement.
Try running this command:
f() { local a=$(false); echo "Returned: $?"; }; f
Do you expect to see Returned: 1? You might but you won't see that.
What you will see is Returned: 0.
Now try either of these versions:
f() { a=$(false); echo "Returned: $?"; }; f
f() { local a; a=$(false); echo "Returned: $?"; }; f
Get the output you expected in the first place?
Right. local and export and declare and typeset are statements on their own. They have their own return values. They ignore (and replace) the return value of the commands that execute in their contexts.
The solution to your problem is to split the local path and path=$(some_command) statements.
http://www.shellcheck.net/ catches this (and many other common errors). You should make it your friend.
In addition to the above (if you've managed to follow along this far), even with the changes mentioned so far your exit 100 won't exit the main script, since it will only exit the sub-shell spawned by the command substitution in the assignment.
If you want that exit 100 to exit your script, then you either need to notice the failure and re-exit with its code (check for get_conf_dir failure after the conf_path assignment and exit with the previous exit code), or drop the get_conf_dir function itself and just do that work inline in read_conf.
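A minimal sketch of the first option, splitting the declaration from the assignment and re-exiting with the sub-shell's code (names follow the question's code; the >&2 redirections are additions):
# get the absolute path of .conf directory
get_conf_dir() {
    local path
    path=$(some_command) || { echo "please install some_command first." >&2; exit 100; }
    echo "$path"
}

# process the configuration
read_conf() {
    local conf_path
    conf_path=$(get_conf_dir) || exit   # re-exit with get_conf_dir's code (100)
    conf_path="${conf_path}/foo.conf"
    [ -r "$conf_path" ] || { echo "conf file not found" >&2; exit 200; }
    # more code ...
}

read_conf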

How can I "try to do something and then detect if it fails" in bash?

In an answer to a previous question:
How can I use 'do I have root access?' as a conditional in bash?
the suggestion was to 'try to do something and detect if it fails' instead of 'check permissions and then do something'.
I have found plenty of rationale for this e.g.:
Thorough use of 'if' statements or 'try/catch' blocks?
What is the advantage of using try {} catch {} versus if {} else {}
However, I have found very little clear information about how to implement try/catch in bash. I would guess that it is too easy, except that what I have found seems rather complicated, involving functions or other scripts:
Error handling in Bash
Executing code in if-statement (Bash)
I am relatively new to bash, but I am confused that there is no simple construct similar to the try statement in other languages.
Specifically, I would like to do the following:
CMD=`./path/to/script.sh`
if [ <echo $CMD | grep error is true> ]; then
.. do this ..
else
.. do that ..
fi
if sudo chmod a-x /etc/shadow 2>/dev/null
then : Yes - I have root permissions
else : No - I do not have root permissions or /etc/shadow does not exist or ...
fi
This chooses an operation that does no damage if it succeeds (the shadow password file is not supposed to be executable; if you prefer, you could instead do something like chmod o-w /, removing public write permission from the root directory), and checks that it worked by looking at the exit status of the command. This throws away the error message; you have to decide whether that matters.
The 'sudo' is there to raise the privileges; if you think the user should already be 'root', then omit the 'sudo'.
Bash relies on exit statuses, so there is no direct try/catch equivalent, but it is still powerful enough for your needs.
For simple cases, you can use
[[ your_test_expression ]] && commands
This is equivalent to
if [[ your_test_expression ]]; then
commands
fi
if uses the "exit status" of [[ ... ]], so you can actually put any command after if. Just make sure your control logic depends on the exit status of that command.
For complicated cases, you still need if or case statements to express your logic.
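As a concrete sketch of the pattern from the question, branching on the script's exit status (and keeping its output in case you still want to inspect it) rather than grepping for the word error:
#!/bin/bash
# The assignment's exit status is that of the command substitution.
if output=$(./path/to/script.sh 2>&1); then
    # .. do this ..
    echo "script succeeded"
else
    # .. do that ..
    echo "script failed: $output" >&2
fi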
Unless the script you are calling has an exit condition, there isn't much you can do. However, look up "set" in the bash man page.
set -e
will cause a script to exit if a simple command in it fails. You can add it to the top of script.sh in your example to cause it to exit if it fails.
Also look at trap. I believe
trap 'exit 2' ERR
is similar.
