Shell script help - shell

I need help with two scripts I'm trying to combine into one. There are two different ways to detect problems with a bad NFS mount. One is that, if there is an issue, a df will hang; the other is that the df works but there are other issues with the mount which a find (mount name) -type d will catch.
I'm trying to combine the scripts to catch both issues: run the find -type d and, if there is an issue, return an error. If the second NFS issue occurs and the find hangs, kill the find command after 2 seconds, run the second part of the script, and if the NFS issue is occurring, return an error. If neither type of NFS issue is occurring, return an OK.
MOUNTS="egrep -v '(^#)' /etc/fstab | grep nfs | awk '{print $2}'"
MOUNT_EXCLUDE=()
if [[ -z "${NFSdir}" ]] ; then
echo "Please define a mount point to be checked"
exit 3
fi
if [[ ! -d "${NFSdir}" ]] ; then
echo "NFS CRITICAL: mount point ${NFSdir} status: stale"
exit 2
fi
cat > "/tmp/.nfs" << EOF
#!/bin/sh
cd \$1 || { exit 2; }
exit 0;
EOF
chmod +x /tmp/.nfs
for i in ${NFSdir}; do
CHECK="ps -ef | grep "/tmp/.nfs $i" | grep -v grep | wc -l"
if [ $CHECK -gt 0 ]; then
echo "NFS CRITICAL : Stale NFS mount point $i"
exit $STATE_CRITICAL;
else
echo "NFS OK : NFS mount point $i status: healthy"
exit $STATE_OK;
fi
done

The MOUNTS and MOUNT_EXCLUDE lines are immaterial to this script as shown.
You've not clearly identified where ${NFSdir} is being set.
The first part of the script assumes ${NFSdir} contains a single directory value; the second part (the loop) assumes it may contain several values. Maybe this doesn't matter since the loop unconditionally exits the script on the first iteration, but it isn't the clear, clean way to write it.
You create the script /tmp/.nfs but:
You don't execute it.
You don't delete it.
You don't allow for multiple concurrent executions of this script by making a per-process file name (such as /tmp/.nfs.$$).
It is not clear why you hide the script in the /tmp directory with the . prefix to the name. It probably isn't a good idea.
Use:
tmpcmd=${TMPDIR:-/tmp}/nfs.$$
trap "rm -f $tmpcmd; exit 1" 0 1 2 3 13 15
...rest of script - modified to use the generated script...
rm -f $tmpcmd
trap 0
This gives you the maximum chance of cleaning up the temporary script.
There is no df left in the script, whereas the question implies there should be one. You should also look into the timeout command (though commands hung because NFS is not responding are generally very difficult to kill).
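For illustration only, here is a minimal sketch of how the two checks could be combined with timeout; the 2-second limit and ${NFSdir} are taken from the question, while the exact df and find invocations are assumptions rather than a tested solution:
for dir in ${NFSdir}; do
    # a df hanging on a dead mount is killed after 2 seconds by timeout
    if ! timeout 2 df "$dir" >/dev/null 2>&1; then
        echo "NFS CRITICAL: df hung or failed on $dir"
        exit 2
    fi
    # find catches the case where df works but the mount is otherwise broken
    if ! timeout 2 find "$dir" -maxdepth 0 -type d >/dev/null 2>&1; then
        echo "NFS CRITICAL: stale NFS mount point $dir"
        exit 2
    fi
done
echo "NFS OK: mount point(s) ${NFSdir} status: healthy"
exit 0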

Related

How to check if file exists in Google Cloud Storage with the gcloud bash command?

I need to check if a file exists in a gitlab deployment pipeline. How to do it efficiently and reliably?
Use gsutil ls gs://bucket/object-name and check the return value for 0.
If the object does not exist, the return value is 1.
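For example (a sketch; the bucket and object names are placeholders):
if gsutil ls gs://your_bucket/your_object >/dev/null 2>&1; then
    echo "Object exists"
else
    echo "Object does not exist"
fi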
You can add the following shell script in a GitLab job:
#!/usr/bin/env bash
set -o pipefail
set -u
gsutil -q stat gs://your_bucket/folder/your_file.csv
PATH_EXIST=$?
if [ ${PATH_EXIST} -eq 0 ]; then
echo "Exist"
else
echo "Not Exist"
fi
I used the gcloud CLI and gsutil with the stat command and the -q option.
In this case, if the file exists the command returns 0, otherwise 1.
This answer evolved from the answer by Mazlum Tosun. Because I think it is a substantial improvement, with fewer lines and no global settings switching, it needs to be a separate answer.
Ideally the answer would be something like this
- gsutil stat $BUCKET_PATH
- if [ $? -eq 0 ]; then
... # do if file exists
else
... # do if file does not exists
fi
$? stores the exit status of the previous command, 0 on success. This works fine in a local console. The problem with GitLab is that if the file does not exist, then "gsutil stat $BUCKET_PATH" will produce a non-zero exit code and the whole pipeline will stop at that line with an error. We need to catch the error while still recording whether the file exists.
We will use the or operator || to suppress the error. FILE_EXISTS=false will only be executed if gsutil stat fails.
- gsutil stat $BUCKET_PATH || FILE_EXISTS=false
- if [ "$FILE_EXISTS" = false ]; then
... # do stuff if file does not exist
else
... # do stuff if file exists
fi
We can also use the -q flag to make the stat command silent, if that is desired.
- gsutil -q stat $BUCKET_PATH || FILE_EXISTS=false
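Putting the pieces together, the job's script section could look like this sketch ($BUCKET_PATH is assumed to be set as a CI/CD variable):
- FILE_EXISTS=true
- gsutil -q stat "$BUCKET_PATH" || FILE_EXISTS=false
- if [ "$FILE_EXISTS" = false ]; then echo "File does not exist"; else echo "File exists"; fi
Initialising FILE_EXISTS=true first avoids relying on an unset variable when the stat succeeds.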

Bash execute a for loop while time is outside working hours

I am trying to make the script below execute a Restore binary between 17:00 and 07:00 for each folder whose name starts with EAR_* in /backup_local/ARCHIVES/, but it is not working as expected: the for loop does not break when the time condition becomes invalid.
Should I add the while loop inside the for loop?
#! /usr/bin/bash
#set -x
while :; do
currenttime=$(date +%H:%M)
if [[ "$currenttime" > "17:00" ]] || [[ "$currenttime" < "07:00" ]]; then
for path in /backup_local/ARCHIVES/EAR_*; do
[ -d "${path}" ] || continue # if not a directory, skip
dirname="$(basename "${path}")"
nohup /Restore -a /backup_local/ARCHIVES -c -I 0 -force -v > /backup_local/$dirname.txt &
wait $!
if [ $? -eq 0 ]; then
rm -rf $path
rm /backup_local/$dirname.txt
echo $dirname >> /backup_local/completed.txt
fi
done &
else
echo "Restore can be ran only outside working hours!"
break
fi
done &
your script looks like this in pseudo-code:
START
IF within working hours
EXIT
ELSE
RUN /Restore FOR EACH backupdir
GOTO START
The script only checks the time once, before starting a restore run (which will call /Restore for each directory to restore in a for loop)
It will continue to start the for loop, until the working hours start. Then it will exit.
E.g. if you have 3 folders to restore, each taking 2 hours, and you start the script at midnight, then the script will check whether it's outside working hours (it is), and will start the restore for the first folder (at 0:00); after two hours of work it will start the restore of the 2nd folder (at 2:00), and after another two hours the restore of the 3rd folder (at 4:00). Once the 3rd folder has been restored, it will check the working hours again. Since it's now only 6:00, that is, outside the working hours, it will start the restore for the first folder again (at 6:00), after two hours of work the restore of the 2nd folder (at 8:00), and after another two hours the restore of the 3rd folder (at 10:00).
It's noon when it does the next check against the working hours; since 12:00 falls within 7:00..17:00, the script will now stop. With an error message.
You probably only want the restore to run once for each folder, and stop proceeding to the next folder if the working hours start.
#!/bin/bash
for path in /backup_local/ARCHIVES/EAR_*/; do
currenttime=$(date +%H:%M)
if [[ "$currenttime" > "7:00" ]] && [[ "$currenttime" < "17:00" ]]; then
echo "Not restoring inside working hours!" 1>&2
break
fi
dirname="$(basename "${path}")"
/Restore -a /backup_local/ARCHIVES -c -I 0 -force -v > /backup_local/$dirname.txt
# handle exit code
done
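As a sketch of the "# handle exit code" placeholder, the /Restore line and the comment could become something like the following, with the cleanup copied from your original script:
if /Restore -a /backup_local/ARCHIVES -c -I 0 -force -v > "/backup_local/$dirname.txt"; then
    rm -rf "$path"
    rm "/backup_local/$dirname.txt"
    echo "$dirname" >> /backup_local/completed.txt
else
    echo "Restore of $dirname failed; keeping its log file" 1>&2
fi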
update
I've just noticed your liberal spread of & for backgrounding jobs.
This is presumably to allow running the script from a remote shell and then disconnecting. Don't.
What this will really do is:
it will run all the iterations over the restore-directories in parallel. This might create a bottleneck on your storage (if the directories to restore to/from share the same hardware)
it will background the entire loop-to-restore and immediately return to the out-of-hours check. if the check succeeds, it will spawn another loop-to-restore (and background it). then it will return to the out-of-hours check and spawn another backgrounded loop-to-restore.
Before dawn you will probably have a few thousand background jobs restoring directories. More likely you will have exceeded your resources and the process gets killed.
My example script above has omitted all the backgrounding (and the nohup).
If you want to run the script from a remote shell (and exit the shell after launching it), just run it with
nohup restore-script.sh &
Alternatively you could use
echo "restore-script.sh" | at now
or use a cron-job (if applicable)
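For instance, a crontab entry like this sketch would start the run at 17:00 every day (the script path and log location are placeholders):
0 17 * * * /path/to/restore-script.sh >> /backup_local/restore-cron.log 2>&1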
The shebang contains an unwanted space. On my Ubuntu, bash is found at /bin/bash.
Find out where yours is located:
type bash
The while loop breaks in my test; replace the #!/bin/bash path with the result of the previous command:
#!/bin/bash --
#set -x
while : ; do
currenttime=$(date +%H:%M)
if [[ "$currenttime" > "17:00" ]] || [[ "$currenttime" < "07:00" ]]; then
for path in /backup_local/ARCHIVES/EAR_*; do
[ -d "${path}" ] || continue # if not a directory, skip
dirname="$(basename "${path}")"
nohup /Restore -a /backup_local/ARCHIVES -c -I 0 -force -v > /backup_local/$dirname.txt &
wait $!
if [ $? -eq 0 ]; then
rm -rf $path
rm /backup_local/$dirname.txt
echo $dirname >> /backup_local/completed.txt
fi
done &
else
echo "Restore can be ran only outside working hours!"
break
fi
done &

Shell Script, When executing commands do something if an error is returned

I am trying to automate a lot of our pre fs/db tasks, and one thing that bugs me is not knowing whether or not a command I issue REALLY happened. I'd like a way to watch for a return code of some sort from executing that command, so that if it fails to rm because of a permission denied or any other error, the script issues an exit.
If I have a shell script as such:
rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl;
pseudo code could be something similar to:
rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl;
if [returncode == 'error']
exit;
fi
How could I, for example, execute that rm command and exit if it's NOT rm'd? I will be adapting the answer to work with multiple other types of commands such as sed -i -e, cp and umount.
edit:
Lets suppose i have a write protected file such as:
$ ls -lrt | grep protectedfile
-rwx------ 1 orasmq sapsys 0 Nov 14 12:39 protectedfile
And running the below script generates the following error because obviously there are no permissions:
rm: remove write-protected regular empty file `/tmp/protectedfile'? y
rm: cannot remove `/tmp/protectedfile': Operation not permitted
Here is what I worked out from your answers. Is this the right way to do something like this? Also, how could I dump the error `rm: cannot remove '/tmp/protectedfile': Operation not permitted` to a logfile?
#! /bin/bash
function log(){
# some logging code: simply writes to a file and then echoes out the input
: # placeholder so the function body is not empty
}
function quit(){
read -p "Failed to remove protected file, permission denied?"
log "Some log message, and somehow append the returned error message from rm"
exit 1;
}
rm /tmp/protectedfile || quit;
If I understand correctly what you want, just use this:
rm blah/blah/blah || exit 1
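If you also want the error message from rm (e.g. "Operation not permitted") in a logfile, redirect its stderr before the ||; a sketch, with $LOGFILE as a placeholder:
rm /oracle/$SAPSID/mirrlogA/cntrl/cntrl$SAPSID.ctl 2>>"$LOGFILE" || exit 1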
A possibility: a 'wrapper' so that you can retrieve the original command's stderr (and stdout?), and maybe also retry it a few times before giving up, etc.
Here is a version that redirects both stdout and stderr
Of course you could choose not to redirect stdout at all (and usually you shouldn't, I guess, making the "try_to" function a bit more useful in the rest of the script!)
export timescalled=0 #external to the function itself
try_to () {
let "timescalled += 1" #let "..." allows white spaces and simple arithmetics
try_to_out="/tmp/try_to_${$}.${timescalled}"
#tries to avoid collisions within the same script and/or if multiple scripts run in parallel
zecmd="$1" ; shift ;
"$zecmd" "$@" 2>"${try_to_out}.ERR" >"${try_to_out}.OUT"
try_to_ret=$?
#or: "$zecmd" "$@" >"${try_to_out}.ERR" 2>&1 to have both in the same one
if [ "$try_to_ret" -ne "0" ]
then log "error $try_to_ret while trying to : '${zecmd} $*' ..." "${try_to_out}.ERR"
#provides custom error message + the name of the stderr from the command
rm -f "${try_to_out}.ERR" "${try_to_out}.OUT" #before we exit, better delete this
exit 1 #or exit $try_to_ret ?
fi
rm -f "${try_to_out}.ERR" "${try_to_out}.OUT"
}
it's ugly, but could help ^^
Note that there are many things that could go wrong: 'timescalled' could become too high, the tmp file(s) could be unwritable, zecmd could contain special characters, etc.
Usually one would use a command like this:
doSomething.sh
if [ $? -ne 0 ]
then
echo "oops, i did it again"
exit 1;
fi
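Another option, since you plan to adapt this to sed -i, cp and umount as well, is an ERR trap that logs and aborts on any failing command. A rough sketch (the logfile path is a placeholder):
#!/bin/bash
set -o errtrace    # make the ERR trap fire inside functions too
trap 'echo "$(date): command failed (exit $?) near line $LINENO" >> /tmp/script.log; exit 1' ERR
rm /tmp/protectedfile 2>>/tmp/script.log
cp /some/src /some/dst 2>>/tmp/script.log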
B.T.W., searching for 'bash exit status' will already give you a lot of good results.

Bash curl returns 0 whenever the copy has finished or not

I'm calling curl on bash to copy a file from a mounted SD card with the option to resume the copy later if the device gets unmounted. I receive the same status exit code 0 when I interrupt the copy by unmounting the volume and when the file gets actually copied. Any suggestions how to catch the case where the file has not been copied?
I'm copying only one file at a time.
This is the command:
curl -C - -O file:///mnt/sdcard/DCIM/100/0044.MP4
I came to a solution which is not as clean as I want, but it works. I execute the command twice, one run after the other, so when the first command returns 0 upon unmount, the second then tries to copy the file and returns error code 37 because the source is unreachable. If the second command also returns 0, I consider the file copied.
Following your concept you could have a script like this:
#!/bin/bash
# Copies files persistently.
#
# Usage: pc <filepath> [<filepath2>] ...
#
function pc {
local FILE
for FILE; do
echo "Copying $FILE."
until curl -C - -O "file://${FILE}" && curl -C - -O "file://${FILE}"; do
if [[ -e $FILE ]]; then
echo "File $FILE can't be copied."
break
else
echo "Waiting for $FILE."
until
sleep 5
[[ -e $FILE ]]
do
continue
done
fi
done
done
}
pc "$#"
You could also just embed the function in a bash startup script if you like.

Test if a command outputs an empty string

How can I test if a command outputs an empty string?
Previously, the question asked how to check whether there are files in a directory. The following code achieves that, but see rsp's answer for a better solution.
Empty output
Commands don’t return values – they output them. You can capture this output by using command substitution; e.g. $(ls -A). You can test for a non-empty string in Bash like this:
if [[ $(ls -A) ]]; then
echo "there are files"
else
echo "no files found"
fi
Note that I've used -A rather than -a, since it omits the symbolic current (.) and parent (..) directory entries.
Note: As pointed out in the comments, command substitution doesn't capture trailing newlines. Therefore, if the command outputs only newlines, the substitution will capture nothing and the test will return false. While very unlikely, this is possible in the above example, since a single newline is a valid filename! More information in this answer.
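To see that caveat in isolation (just a sketch): a command whose only output is a newline produces an empty substitution, so the test reports nothing:
if [[ $(printf '\n') ]]; then echo "non-empty"; else echo "empty"; fi    # prints "empty"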
Exit code
If you want to check that the command completed successfully, you can inspect $?, which contains the exit code of the last command (zero for success, non-zero for failure). For example:
files=$(ls -A)
if [[ $? != 0 ]]; then
echo "Command failed."
elif [[ $files ]]; then
echo "Files found."
else
echo "No files found."
fi
More info here.
TL;DR
if [[ $(ls -A | head -c1 | wc -c) -ne 0 ]]; then ...; fi
Thanks to netj for a suggestion to improve my original: if [[ $(ls -A | wc -c) -ne 0 ]]; then ...; fi
This is an old question but I see at least two things that need some improvement or at least some clarification.
First problem
First problem I see is that most of the examples provided here simply don't work. They use the ls -al and ls -Al commands - both of which output non-empty strings in empty directories. Those examples always report that there are files even when there are none.
For that reason you should use just ls -A. Why would anyone want to use the -l switch, which means "use a long listing format", when all you want is to test whether there is any output or not, anyway?
So most of the answers here are simply incorrect.
Second problem
The second problem is that while some answers work fine (those that don't use ls -al or ls -Al but ls -A instead) they all do something like this:
run a command
buffer its entire output in RAM
convert the output into a huge single-line string
compare that string to an empty string
What I would suggest doing instead would be:
run a command
count the characters in its output without storing them
or even better, count at most 1 character using head -c1 (thanks to netj for posting this idea in the comments below)
compare that number with zero
So for example, instead of:
if [[ $(ls -A) ]]
I would use:
if [[ $(ls -A | wc -c) -ne 0 ]]
# or:
if [[ $(ls -A | head -c1 | wc -c) -ne 0 ]]
Instead of:
if [ -z "$(ls -lA)" ]
I would use:
if [ $(ls -lA | wc -c) -eq 0 ]
# or:
if [ $(ls -lA | head -c1 | wc -c) -eq 0 ]
and so on.
For small outputs it may not be a problem but for larger outputs the difference may be significant:
$ time [ -z "$(seq 1 10000000)" ]
real 0m2.703s
user 0m2.485s
sys 0m0.347s
Compare it with:
$ time [ $(seq 1 10000000 | wc -c) -eq 0 ]
real 0m0.128s
user 0m0.081s
sys 0m0.105s
And even better:
$ time [ $(seq 1 10000000 | head -c1 | wc -c) -eq 0 ]
real 0m0.004s
user 0m0.000s
sys 0m0.007s
Full example
Updated example from the answer by Will Vousden:
if [[ $(ls -A | wc -c) -ne 0 ]]; then
echo "there are files"
else
echo "no files found"
fi
Updated again after suggestions by netj:
if [[ $(ls -A | head -c1 | wc -c) -ne 0 ]]; then
echo "there are files"
else
echo "no files found"
fi
Additional update by jakeonfire:
grep will exit with a failure if there is no match. We can take advantage of this to simplify the syntax slightly:
if ls -A | head -c1 | grep -E '.'; then
echo "there are files"
fi
if ! ls -A | head -c1 | grep -E '.'; then
echo "no files found"
fi
Discarding whitespace
If the command that you're testing could output some whitespace that you want to treat as an empty string, then instead of:
| wc -c
you could use:
| tr -d ' \n\r\t ' | wc -c
or with head -c1:
| tr -d ' \n\r\t ' | head -c1 | wc -c
or something like that.
Summary
First, use a command that works.
Second, avoid unnecessary storing in RAM and processing of potentially huge data.
The question didn't specify that the output is always small, so the possibility of large output needs to be considered as well.
if [ -z "$(ls -lA)" ]; then
echo "no files found"
else
echo "There are files"
fi
This will run the command and check whether the returned output (string) has a zero length.
You might want to check the 'test' manual pages for other flags.
Use the "" around the argument that is being checked, otherwise empty results will result in a syntax error as there is no second argument (to check) given!
Note that ls -la always returns . and .., so using that will not work; see the ls manual pages. Furthermore, while this might seem convenient and easy, I suppose it will break easily. Writing a small script/application that returns 0 or 1 depending on the result is much more reliable!
For those who want an elegant, bash version-independent solution (in fact should work in other modern shells) and those who love to use one-liners for quick tasks. Here we go!
ls | grep . && echo 'files found' || echo 'files not found'
(Note: as one of the comments mentioned, ls -al and, in fact, just -l and -a will all return something, so in my answer I use plain ls.)
Bash Reference Manual
6.4 Bash Conditional Expressions
-z string
True if the length of string is zero.
-n string
string
True if the length of string is non-zero.
You can use shorthand version:
if [[ $(ls -A) ]]; then
echo "there are files"
else
echo "no files found"
fi
As Jon Lin commented, ls -al will always output (for . and ..). You want ls -Al to avoid these two directories.
You could for example put the output of the command into a shell variable:
v=$(ls -Al)
An older, non-nestable, notation is
v=`ls -Al`
but I prefer the nestable notation $( ... )
Then you can test whether that variable is non-empty:
if [ -n "$v" ]; then
echo there are files
else
echo no files
fi
And you could combine both as if [ -n "$(ls -Al)" ]; then
Sometimes, ls may be some shell alias. You might prefer to use $(/bin/ls -Al). See ls(1) and hier(7) and environ(7) and your ~/.bashrc (if your shell is GNU bash; my interactive shell is zsh, defined in /etc/passwd - see passwd(5) and chsh(1)).
I'm guessing you want the output of the ls -al command, so in bash, you'd have something like:
LS=`ls -la`
if [ -n "$LS" ]; then
echo "there are files"
else
echo "no files found"
fi
sometimes "something" may come not to stdout but to the stderr of the testing application, so here is the fix working more universal way:
if [[ $(partprobe ${1} 2>&1 | wc -c) -ne 0 ]]; then
echo "require fixing GPT parititioning"
else
echo "no GPT fix necessary"
fi
Here's a solution for more extreme cases:
if [ `command | head -c1 | wc -c` -gt 0 ]; then ...; fi
This will work
for all Bourne shells;
if the command output is all zeroes;
efficiently regardless of output size;
however,
the command or its subprocesses will be killed once anything is output.
All the answers given so far deal with commands that terminate and output a non-empty string.
Most are broken in the following senses:
They don't deal properly with commands outputting only newlines;
starting from Bash 4.4, most will spam standard error if the command outputs null bytes (as they use command substitution);
most will slurp the full output stream, so will wait until the command terminates before answering. Some commands never terminate (try, e.g., yes).
So to fix all these issues, and to answer the following question efficiently,
How can I test if a command outputs an empty string?
you can use:
if read -n1 -d '' < <(command_here); then
echo "Command outputs something"
else
echo "Command doesn't output anything"
fi
You may also add some timeout so as to test whether a command outputs a non-empty string within a given time, using read's -t option. E.g., for a 2.5 seconds timeout:
if read -t2.5 -n1 -d '' < <(command_here); then
echo "Command outputs something"
else
echo "Command doesn't output anything"
fi
Remark. If you think you need to determine whether a command outputs a non-empty string, you very likely have an XY problem.
Here's an alternative approach that writes the stdout and stderr of some command to a temporary file, and then checks to see if that file is empty. A benefit of this approach is that it captures both outputs and does not use sub-shells or pipes. The latter aspects are important because they can interfere with trapping bash exit handling.
tmpfile=$(mktemp)
some-command &> "$tmpfile"
if [[ $? != 0 ]]; then
echo "Command failed"
elif [[ -s "$tmpfile" ]]; then
echo "Command generated output"
else
echo "Command has no output"
fi
rm -f "$tmpfile"
Sometimes you want to save the output, if it's non-empty, to pass it to another command. If so, you could use something like
list=`grep -l "MY_DESIRED_STRING" *.log `
if [ $? -eq 0 ]
then
/bin/rm $list
fi
This way, the rm command won't be run with an empty argument list if nothing matches.
As mentioned by tripleee in the question comments, use moreutils ifne (if input not empty).
In this case we want ifne -n which negates the test:
ls -A /tmp/empty | ifne -n command-to-run-if-empty-input
The advantage of this over many of the other answers comes when the output of the initial command is non-empty: ifne will start writing it to stdout straight away, rather than buffering the entire output and writing it later, which is important if the initial output is slowly generated or extremely long and would overflow the maximum length of a shell variable.
There are a few utils in moreutils that arguably should be in coreutils -- they're worth checking out if you spend a lot of time living in a shell.
Of particular interest to the OP may be the dirempty/exists tool, which at the time of writing is still under consideration, and has been for some time (it could probably use a bump).
