I wrote the following Nagios check, which reads the NFS mounts from /etc/fstab and uses df to check whether they are actually mounted:
#!/bin/bash
# Check mounts based on /etc/fstab
grep="/bin/grep"
awk="/bin/awk"
df="/bin/df"
mounts=$($grep nfs /etc/fstab | $awk '{print $2}')
# Check if mounts exist
for mount in $mounts; do
    if $df | $grep -q "$mount"; then
        msg="Mount $mount is mounted!"
    else
        msg="Mount $mount is not mounted!"
    fi
    echo $msg
done
When I run the check it returns a proper result:
[root@nyproxy5 ~]# ./check_mount.sh
Mount /proxy_logs is mounted!
Mount /proxy_dump is mounted!
Mount /sync_logs is mounted!
[root@nyproxy5 ~]#
But I want the output of the script to be one line rather than three. How can that be achieved?
I realize that the way the script is written at the moment doesn't allow it, and that even the "Mount X is mounted" message may need to change, but I'm having a hard time with the logic.
Thanks in advance
Change echo $msg to echo -n $msg
The -n option suppresses the trailing newline.
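If the goal is a proper Nagios plugin, another option is to collect the failures and emit a single line plus an exit code at the end. A minimal sketch building on the original script (the OK/CRITICAL wording and exit codes follow the usual Nagios plugin convention; the variable names are mine):
#!/bin/bash
# Single-line variant: collect failures, report once at the end
mounts=$(grep nfs /etc/fstab | awk '{print $2}')
failed=""
for mount in $mounts; do
    df | grep -q "$mount" || failed="$failed $mount"
done
if [ -z "$failed" ]; then
    echo "OK: all NFS mounts are mounted:" $mounts   # unquoted so newlines collapse to spaces
    exit 0
else
    echo "CRITICAL: not mounted:$failed"
    exit 2
fi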
There is also a general solution for more complicated commands: newlines (or any other characters) can be stripped from the output like this:
./script.sh | tr -d '\n'
Or, for a Win32 executable under Cygwin:
./asd.exe | tr -d '\r\n'
Related
I want to run a shell script when a specific file or directory changes.
How can I easily do that?
You can try the entr tool, which runs arbitrary commands when files change. Examples for files:
$ ls -d * | entr sh -c 'make && make test'
or:
$ ls *.css *.html | entr reload-browser Firefox
or print Changed! when file file.txt is saved:
$ echo file.txt | entr echo Changed!
For directories use -d, but you have to run it in a loop, e.g.:
while true; do find path/ | entr -d echo Changed; done
or:
while true; do ls path/* | entr -pd echo Changed; done
I use this script to run a build script on changes in a directory tree:
#!/bin/bash -eu
DIRECTORY_TO_OBSERVE="js" # might want to change this
function block_for_change {
inotifywait --recursive \
--event modify,move,create,delete \
"$DIRECTORY_TO_OBSERVE"
}
BUILD_SCRIPT=build.sh # might want to change this too
function build {
bash "$BUILD_SCRIPT"
}
build
while block_for_change; do
build
done
This uses inotify-tools. Check the inotifywait man page for how to customize what triggers the build.
Use inotify-tools.
The linked Github page has a number of examples; here is one of them.
#!/bin/sh
cwd=$(pwd)
inotifywait -mr \
--timefmt '%d/%m/%y %H:%M' --format '%T %w %f' \
-e close_write /tmp/test |
while read -r date time dir file; do
changed_abs=${dir}${file}
changed_rel=${changed_abs#"$cwd"/}
rsync --progress --relative -vrae 'ssh -p 22' "$changed_rel" \
username@example.com:/backup/root/dir && \
echo "At ${time} on ${date}, file $changed_abs was backed up via rsync" >&2
done
How about this script? It uses the stat command to read the file's change time (%Z, the ctime) and runs a command whenever that time changes.
#!/bin/bash
while true
do
ATIME=$(stat -c %Z /path/to/the/file.txt)
if [[ "$ATIME" != "$LTIME" ]]
then
echo "RUN COMMNAD"
LTIME=$ATIME
fi
sleep 5
done
Check out the kernel filesystem monitor daemon
http://freshmeat.net/projects/kfsmd/
Here's a how-to:
http://www.linux.com/archive/feature/124903
As mentioned, inotify-tools is probably the best idea. However, if you're programming for fun, you can try and earn hacker XPs by judicious application of tail -f.
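For instance, a minimal sketch along those lines, assuming the watched file only ever grows (log-style):
# React to every line appended to watched.log; -n0 skips existing content,
# -F keeps following across renames/rotations
tail -n0 -F watched.log | while read -r line; do
    echo "Changed: $line"
done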
Just for debugging purposes, when I write a shell script and want it to run on save, I use this:
#!/bin/bash
file="$1" # Name of file
command="${*:2}" # Command to run on change (takes rest of line)
t1="$(ls --full-time $file | awk '{ print $7 }')" # Get latest save time
while true
do
t2="$(ls --full-time $file | awk '{ print $7 }')" # Compare to new save time
if [ "$t1" != "$t2" ];then t1="$t2"; $command; fi # If different, run command
sleep 0.5
done
Run it as
run_on_save.sh myfile.sh ./myfile.sh arg1 arg2 arg3
Edit: the above was tested on Ubuntu 12.04; for Mac OS, change the ls lines to:
"$(ls -lT $file | awk '{ print $8 }')"
Add the following to ~/.bashrc:
function react() {
if [ -z "$1" -o -z "$2" ]; then
echo "Usage: react <[./]file-to-watch> <[./]action> <to> <take>"
elif ! [ -r "$1" ]; then
echo "Can't react to $1, permission denied"
else
TARGET="$1"; shift
ACTION="$#"
while sleep 1; do
ATIME=$(stat -c %Z "$TARGET")
if [[ "$ATIME" != "${LTIME:-}" ]]; then
LTIME=$ATIME
$ACTION
fi
done
fi
}
Quick solution for fish shell users who want to track a single file:
while true
set old_hash $hash
set hash (md5sum file_to_watch)
if [ "$hash" != "$old_hash" ]
command_to_execute
end
sleep 1
end
Replace md5sum with md5 if on macOS.
Here's another option: http://fileschanged.sourceforge.net/
See especially "example 4", which "monitors a directory and archives any new or changed files".
inotifywait can do what you need.
Here is a typical example:
inotifywait -m /path -e create -e moved_to -e close_write | # -m is --monitor, -e is --event
while read path action file; do
if [[ "$file" =~ .*rst$ ]]; then # if suffix is '.rst'
echo ${path}${file} ': '${action} # execute your command
echo 'make html'
make html
fi
done
Suppose you want to run rake test every time you modify any ruby file ("*.rb") in app/ and test/ directories.
Just get the most recent modified time of the watched files and check every second if that time has changed.
Script code
t_ref=0; while true; do t_curr=$(find app/ test/ -type f -name "*.rb" -printf "%T+\n" | sort -r | head -n1); if [ "$t_ref" != "$t_curr" ]; then t_ref="$t_curr"; rake test; fi; sleep 1; done
Benefits
You can run any command or script when the file changes.
It works between any filesystem and virtual machines (shared folders on VirtualBox using Vagrant); so you can use a text editor on your Macbook and run the tests on Ubuntu (virtual box), for example.
Warning
The -printf option works well on Ubuntu, but it does not work on macOS.
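As an untested sketch, on macOS you could compute the same timestamp with BSD stat, whose -f %m prints the modification time as an epoch value:
# macOS equivalent of the find -printf "%T+" step
t_curr=$(find app/ test/ -type f -name "*.rb" -exec stat -f "%m" {} + | sort -rn | head -n1)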
I have a script I am trying to work out to scan my LAN and send me notification if there is a new MAC address that does not appear in my master list. I believe my variables may be messed up. This is what I have:
#!/bin/bash
LIST=$HOME/maclist.log
MASTERFILE=$HOME/master
FILEDIFF="$(diff $LIST $MASTERFILE)"
# backup the maclist first
if [ -f $LIST ]; then
cp $LIST maclist_`date +%Y%m%H%M`.log.bk
else
touch $LIST
fi
# this will scan the network and extract the IP and MAC address
nmap -n -sP 192.168.122.0/24 | awk '/^Nmap scan/{IP=$5};/^MAC/{print IP,$3};{next}' > $LIST
# this will use a diff command to compare the maclist created above and master list of known good devices on the LAN
if [ $FILEDIFF ] 2> /dev/null; then
echo
echo "---- All is well on `date` ----" >> macscan.log
echo
else
# echo -e "\nWARNING!!" | `mutt -e 'my_hdr From:user#email.com' -s "WARNIG!! NEW DEVICE ON THE LAN" -i maclist.log user#email.com`
echo "emailing you"
fi
When I execute this when the maclist.log does not exist I get this response:
diff: /root/maclist.log: No such file or directory
If I execute it again with the maclist.log file existing the file gets renamed from the cp line without any issue.
The line
FILEDIFF="$(diff $LIST $MASTERFILE)"
executes the diff when that line is run (not when you use $FILEDIFF later). At that time the list file hasn't been created yet.
The easiest fix is just to put the diff command in full where $FILEDIFF is currently used.
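A sketch of the relevant part with that fix applied (keeping the original variable names; diff -q exits 0 when the files match):
nmap -n -sP 192.168.122.0/24 | awk '/^Nmap scan/{IP=$5};/^MAC/{print IP,$3};{next}' > "$LIST"
# Only now, after $LIST exists, do the comparison
if diff -q "$LIST" "$MASTERFILE" > /dev/null 2>&1; then
    echo "---- All is well on `date` ----" >> macscan.log
else
    echo "emailing you"
fi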
Struggling with this for a couple of days. Trying to create a space delimited list of $host $useraccount $crontab entries.
I've tried a couple of different ways, each ending in a different level of disaster. The closest I've come is this; someone please point out the obvious thing I'm missing.
#!/usr/bin/bash
#Global Crontab Inventory for Scripts
#
outputfile="/localpath/cronoutput.txt"
LPARLIST=/pathto/LPAR.txt
while read LPAR;
do
ping -c 1 $LPAR > /dev/null
if [ $? -eq 0 ]; then
for user in $(ssh -n $LPAR /opt/freeware/bin/ls /var/spool/cron/crontabs);
do
while read line;
do
echo "$LPAR $user $line"
done <"$(ssh -n "$LPAR" /opt/freeware/bin/tail -n +29 /var/spool/cron/crontabs/$user)"
done
fi
done <$LPARLIST
It seems to be complaining about trying to execute the output of the tail as a command.
./crons.sh: line 11: (Several pages of cropped cron entries): File name too long
./crons.sh: line 11: : No such file or directory
./crons.sh: line 11: #
This is working for me.
#!/bin/bash
#Global Crontab Inventory for Scripts
#
outputfile="/localpath/cronoutput.txt"
LPARLIST=LPAR.txt
cat $LPARLIST |
while read LPAR;
do
ping -c 1 $LPAR > /dev/null
if [ $? -eq 0 ]; then
for user in $(ssh -n root@$LPAR ls /var/spool/cron/crontabs);
do
ssh -n "root#$LPAR" tail -n +29 /var/spool/cron/crontabs/$user |
while read line;
do
echo "$LPAR $user $line"
done
done
fi
done
I prefer cat xxx | while ... instead of redirecting input as you did. In theory it should be the same. I did not spot anything specifically wrong and I did not really change anything -- just rearranged what you had.
The advantage of the cat xxx | while ... technique is you can insert commands between the cat and the while. In this case, I would not do the tail -n +29, because you are guessing that the first 29 lines are junk and that might not be true. Rather, I would just cat the file and then grep out the lines that start with a hash # (see the sketch below). Again, yes, the cat is redundant, but who really cares: it is more general and easier to add and delete things.
I don't have the /opt packages installed and I would not depend upon them unless absolutely necessary. You are increasing dependencies. So I just used the local "ls" and "tail". I also added an explicit root# but you don't need that. It just simplified my testing.
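For example, a sketch of that inner step which skips comment and blank lines instead of guessing a 29-line header:
# Dump the whole crontab, dropping comments and empty lines
ssh -n "root@$LPAR" cat /var/spool/cron/crontabs/$user |
grep -v '^#' | grep -v '^[[:space:]]*$' |
while read -r line; do
    echo "$LPAR $user $line"
done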
Perhaps the ls is overflowing. Try this one:
#!/bin/bash
#Global Crontab Inventory for Scripts
#
outputfile="/localpath/cronoutput.txt"
LPARLIST=LPAR.txt
cat $LPARLIST |
while read LPAR;
do
ping -c 1 $LPAR > /dev/null
if [ $? -eq 0 ]; then
ssh -n root@$LPAR ls /var/spool/cron/crontabs |
while read user
do
ssh -n "root#$LPAR" tail -n +29 /var/spool/cron/crontabs/$user |
while read line;
do
echo "$LPAR $user $line"
done
done
fi
done
Hope this helps...
I'm making a bash script for Nagios custom plugins & configuration.
I use equivs for its simplicity.
Here is my control file.
In the Files: section, I list the files to be copied to their proper paths.
Files: check_cpu_loadx /usr/lib/nagios/plugins
check_ipmi_sensors /usr/lib/nagios/plugins
check_libreoffice_count /usr/lib/nagios/plugins
check_ram_per_user /usr/lib/nagios/plugins
check_ram_usage2 /usr/lib/nagios/plugins
check_ram_usage_percentage /usr/lib/nagios/plugins
check_tcptraffic /usr/lib/nagios/plugins
nrpe_custom.cfg /etc/nagios
The postinst section is a bash script that runs after installation:
File: postinst
#!/bin/bash -e
set -x
echo 'configuring nrpe.conf file.'
mv /etc/nagios/nrpe.cfg /etc/nagios/nrpe.original.backup
mv /etc/nagios/nrpe_custom.cfg /etc/nagios/nrpe.cfg
chmod -R +x /usr/lib/nagios/plugins
echo 'Installing tcp-ip addon..'
FLAG=0
Interfaces=`ifconfig -a | grep -o -e "[a-z][a-z]*[0-9]*[ ]*Link" | perl -pe "s|^([a-z]*[0-9]*)[ ]*Link|\1|"`
for Interface in $Interfaces; do
INET=`ifconfig $Interface | grep -o -e "inet addr:[^ ]*" | grep -o -e "[^:]*$"`
MASK=`ifconfig $Interface | grep -o -e "Mask:[^ ]*" | grep -o -e "[^:]*$"`
STATUS="up"
#loopback
if [ "$Interface" == "lo" ]; then
continue
fi
#if eth is down
if [ -z "$INET" ]; then
continue
fi
#pick the interface whose ip starts with 10. or 192.; otherwise bail out
if [[ "$INET" == 10.* ]]
then
ActiveEth=$Interface;
break
elif [[ "$INET" == 192.* ]]
then
ActiveEth=$Interface;
break
else
echo "Ethernet Selection Failed!Configure nrpe.cfg manually.Change tcp_traffic plugin paramethers according to your current ethernet.";
FLAG=1
break
fi
done
if [[ "$FLAG" == 0 ]]
then
echo 'Selected Ethernet :'$ActiveEth
sed -i -e "s/eth0/$ActiveEth/g" /etc/nagios/nrpe.cfg
fi
echo 'nrpe.conf changed.'
echo 'Nagios-nrpe-server restarting.'
service nagios-nrpe-server restart
echo 'IPMI modules are loading.'
modprobe ipmi_devintf
modprobe ipmi_msghandler
echo "IPMI modules are added to startup."
#echo "ipmi_si" >> /etc/modules
echo "ipmi_devintf" >> /etc/modules
echo "ipmi_msghandler" >> /etc/modules
The problem here: when I build the deb package and install it, I get "subprocess installed post-installation script returned error exit status 1".
Then I added set -x for debugging. The problem is in configuring the tcp-ip addon: some machines have more than one ethernet card, so I need to choose the one whose IP starts with 10.* or 192.*.
In that section, there is this line:
INET=`ifconfig $Interface | grep -o -e "inet addr:[^ ]*" | grep -o -e "[^:]*$"`
When an ethernet device has no IP, grep matches nothing and the INET variable becomes null; that's why the exit status is 1.
After that line, when I check "$?", it says 1.
So when I run dpkg -i to install the package, the bash script (running with -e) quits as soon as it sees that INET is null.
Any help would be appreciated. I'm new to this bash thing.
If you want to make sure that a bash command always succeeds, even if the last program exits with a non-zero status, just add a "very last" command that will succeed. Something like:
INET=$(/sbin/ifconfig eth0 | grep -o -e "inet addr:[^ ]*" | grep -o -e "[^:]*$" || true)
Here we call true (a small program that always succeeds) whenever grep fails. (|| means OR and is a way to chain programs depending on the exit status of the previous one.)
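Another pattern that plays well with bash -e is to use the assignment itself as the if condition; a command that fails inside an if test does not abort the script:
# The assignment's exit status is that of the command substitution,
# so a failing grep just takes the else branch instead of killing the script
if INET=$(/sbin/ifconfig "$Interface" | grep -o -e "inet addr:[^ ]*" | grep -o -e "[^:]*$"); then
    echo "got address $INET"
else
    echo "no address on $Interface"
fi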
However, your script has a number of flaws:
Your grep expression "inet addr:" will only give correct results in an English locale; e.g. when running in a German environment (LANG=de) you could get strings like inet Adresse: 192.168.7.10 (sic!). See the locale-independent sketch after this list.
You are unconditionally moving files around; what happens if these files are not there?
You are unconditionally moving files in /etc. /etc is the place where sysadmins adjust the system to their needs; you should not delete or revert the sysadmin's configuration. Rather, document how to properly configure the system (so the sysadmins can do it themselves); if you insist on "helping" by automatically configuring the system, you should use something like debconf.
You assume that a lot of software is installed and that it is in your path. You should probably use fully qualified paths to the binaries you are using, e.g. /sbin/ifconfig rather than just ifconfig.
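On the locale point, a sketch of a locale-independent alternative, assuming the iproute2 ip tool is available:
# ip prints machine-oriented output that does not change with LANG;
# field 4 of `ip -o -4 addr show` is the address with its prefix length
INET=$(ip -o -4 addr show dev "$Interface" | awk '{print $4}' | cut -d/ -f1)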
How can I test if a command outputs an empty string?
Previously, the question asked how to check whether there are files in a directory. The following code achieves that, but see rsp's answer for a better solution.
Empty output
Commands don’t return values – they output them. You can capture this output by using command substitution; e.g. $(ls -A). You can test for a non-empty string in Bash like this:
if [[ $(ls -A) ]]; then
echo "there are files"
else
echo "no files found"
fi
Note that I've used -A rather than -a, since it omits the current (.) and parent (..) directory entries.
Note: As pointed out in the comments, command substitution doesn't capture trailing newlines. Therefore, if the command outputs only newlines, the substitution will capture nothing and the test will return false. While very unlikely, this is possible in the above example, since a single newline is a valid filename! More information in this answer.
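You can demonstrate the caveat directly; command substitution strips trailing newlines, so newline-only output tests as empty:
# prints "looks empty" even though two bytes were written
if [[ $(printf '\n\n') ]]; then echo "non-empty"; else echo "looks empty"; fi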
Exit code
If you want to check that the command completed successfully, you can inspect $?, which contains the exit code of the last command (zero for success, non-zero for failure). For example:
files=$(ls -A)
if [[ $? != 0 ]]; then
echo "Command failed."
elif [[ $files ]]; then
echo "Files found."
else
echo "No files found."
fi
TL;DR
if [[ $(ls -A | head -c1 | wc -c) -ne 0 ]]; then ...; fi
Thanks to netj for suggesting an improvement to my original:
if [[ $(ls -A | wc -c) -ne 0 ]]; then ...; fi
This is an old question but I see at least two things that need some improvement or at least some clarification.
First problem
First problem I see is that most of the examples provided here simply don't work. They use the ls -al and ls -Al commands - both of which output non-empty strings in empty directories. Those examples always report that there are files even when there are none.
For that reason you should use just ls -A. Why would anyone want to use the -l switch, which means "use a long listing format", when all you want is to test whether there is any output, anyway?
So most of the answers here are simply incorrect.
Second problem
The second problem is that while some answers work fine (those that don't use ls -al or ls -Al but ls -A instead) they all do something like this:
run a command
buffer its entire output in RAM
convert the output into a huge single-line string
compare that string to an empty string
What I would suggest doing instead would be:
run a command
count the characters in its output without storing them
or even better, count at most one character using head -c1 (thanks to netj for posting this idea in the comments below)
compare that number with zero
So for example, instead of:
if [[ $(ls -A) ]]
I would use:
if [[ $(ls -A | wc -c) -ne 0 ]]
# or:
if [[ $(ls -A | head -c1 | wc -c) -ne 0 ]]
Instead of:
if [ -z "$(ls -lA)" ]
I would use:
if [ $(ls -lA | wc -c) -eq 0 ]
# or:
if [ $(ls -lA | head -c1 | wc -c) -eq 0 ]
and so on.
For small outputs it may not be a problem but for larger outputs the difference may be significant:
$ time [ -z "$(seq 1 10000000)" ]
real 0m2.703s
user 0m2.485s
sys 0m0.347s
Compare it with:
$ time [ $(seq 1 10000000 | wc -c) -eq 0 ]
real 0m0.128s
user 0m0.081s
sys 0m0.105s
And even better:
$ time [ $(seq 1 10000000 | head -c1 | wc -c) -eq 0 ]
real 0m0.004s
user 0m0.000s
sys 0m0.007s
Full example
Updated example from the answer by Will Vousden:
if [[ $(ls -A | wc -c) -ne 0 ]]; then
echo "there are files"
else
echo "no files found"
fi
Updated again after suggestions by netj:
if [[ $(ls -A | head -c1 | wc -c) -ne 0 ]]; then
echo "there are files"
else
echo "no files found"
fi
Additional update by jakeonfire:
grep will exit with a failure if there is no match. We can take advantage of this to simplify the syntax slightly:
if ls -A | head -c1 | grep -E '.'; then
echo "there are files"
fi
if ! ls -A | head -c1 | grep -E '.'; then
echo "no files found"
fi
Discarding whitespace
If the command that you're testing could output some whitespace that you want to treat as an empty string, then instead of:
| wc -c
you could use:
| tr -d ' \n\r\t' | wc -c
or with head -c1:
| tr -d ' \n\r\t' | head -c1 | wc -c
or something like that.
Summary
First, use a command that works.
Second, avoid unnecessary storing in RAM and processing of potentially huge data.
The question didn't specify that the output is always small, so the possibility of large output needs to be considered as well.
if [ -z "$(ls -lA)" ]; then
echo "no files found"
else
echo "There are files"
fi
This will run the command and check whether the returned output (string) has a zero length.
You might want to check the 'test' manual pages for other flags.
Use the "" around the argument that is being checked, otherwise empty results will result in a syntax error as there is no second argument (to check) given!
Note that ls -la always lists . and .., so using it will not work; see the ls manual pages. Furthermore, while this might seem convenient and easy, I suppose it will break easily. Writing a small script/application that returns 0 or 1 depending on the result is much more reliable (see the sketch below)!
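A sketch of such a helper that avoids parsing ls entirely, using bash's nullglob and dotglob options:
#!/bin/bash
# has_files.sh: exit 0 if the given directory (default .) has entries,
# dotfiles included; exit 1 if it is empty
shopt -s nullglob dotglob
entries=("${1:-.}"/*)
[ ${#entries[@]} -gt 0 ]
Then ./has_files.sh /some/dir && echo "there are files" works without any output parsing.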
For those who want an elegant, bash version-independent solution (in fact should work in other modern shells) and those who love to use one-liners for quick tasks. Here we go!
ls | grep . && echo 'files found' || echo 'files not found'
(Note: as one of the comments mentioned, ls -al and, in fact, just -l and -a will all return something, so in my answer I use plain ls.)
Bash Reference Manual
6.4 Bash Conditional Expressions
-z string
True if the length of string is zero.
-n string (or simply string)
True if the length of string is non-zero.
You can use the shorthand version:
if [[ $(ls -A) ]]; then
echo "there are files"
else
echo "no files found"
fi
As Jon Lin commented, ls -al will always produce output (because of . and ..). You want ls -Al to avoid listing these two entries.
You could for example put the output of the command into a shell variable:
v=$(ls -Al)
An older, non-nestable, notation is
v=`ls -Al`
but I prefer the nestable notation $( ... )
Then you can test whether that variable is non-empty:
if [ -n "$v" ]; then
echo there are files
else
echo no files
fi
And you could combine both as if [ -n "$(ls -Al)" ]; then
Sometimes, ls may be some shell alias. You might prefer to use $(/bin/ls -Al). See ls(1) and hier(7) and environ(7) and your ~/.bashrc (if your shell is GNU bash; my interactive shell is zsh, defined in /etc/passwd - see passwd(5) and chsh(1)).
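If an alias is a concern, you can also bypass it explicitly:
v=$(command ls -Al)   # `command` skips aliases and shell functions
v=$(\ls -Al)          # a leading backslash skips aliases only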
I'm guessing you want the output of the ls -al command, so in bash, you'd have something like:
LS=`ls -la`
if [ -n "$LS" ]; then
echo "there are files"
else
echo "no files found"
fi
sometimes "something" may come not to stdout but to the stderr of the testing application, so here is the fix working more universal way:
if [[ $(partprobe ${1} 2>&1 | wc -c) -ne 0 ]]; then
echo "require fixing GPT parititioning"
else
echo "no GPT fix necessary"
fi
Here's a solution for more extreme cases:
if [ `command | head -c1 | wc -c` -gt 0 ]; then ...; fi
This will work
for all Bourne shells;
even if the command's output is all NUL bytes (zeroes);
efficiently regardless of output size;
however,
the command or its subprocesses will be killed once anything is output.
All the answers given so far deal with commands that terminate and output a non-empty string.
Most are broken in the following senses:
They don't deal properly with commands outputting only newlines;
starting from Bash≥4.4 most will spam standard error if the command outputs null bytes (as they use command substitution);
most will slurp the full output stream, so will wait until the command terminates before answering. Some commands never terminate (try, e.g., yes).
So to fix all these issues, and to answer the following question efficiently,
How can I test if a command outputs an empty string?
you can use:
if read -n1 -d '' < <(command_here); then
echo "Command outputs something"
else
echo "Command doesn't output anything"
fi
You may also add some timeout so as to test whether a command outputs a non-empty string within a given time, using read's -t option. E.g., for a 2.5 seconds timeout:
if read -t2.5 -n1 -d '' < <(command_here); then
echo "Command outputs something"
else
echo "Command doesn't output anything"
fi
Remark. If you think you need to determine whether a command outputs a non-empty string, you very likely have an XY problem.
Here's an alternative approach that writes the stdout and stderr of some command to a temporary file, and then checks whether that file is empty. A benefit of this approach is that it captures both outputs, and does not use sub-shells or pipes. The latter aspects are important because they can interfere with trapping bash exit handling.
tmpfile=$(mktemp)
some-command &> "$tmpfile"
if [[ $? != 0 ]]; then
echo "Command failed"
elif [[ -s "$tmpfile" ]]; then
echo "Command generated output"
else
echo "Command has no output"
fi
rm -f "$tmpfile"
Sometimes you want to save the output, if it's non-empty, to pass it to another command. If so, you could use something like
list=`grep -l "MY_DESIRED_STRING" *.log `
if [ $? -eq 0 ]
then
/bin/rm $list
fi
This way, the rm command won't be invoked with an empty argument list when nothing matches.
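Note that the unquoted $list splits on whitespace, so filenames with spaces would break it. With GNU grep and xargs the same idea can be made safe (a sketch):
# -Z/-0 pass the file list NUL-delimited; -r skips rm entirely when the list is empty
grep -lZ "MY_DESIRED_STRING" *.log | xargs -0 -r /bin/rm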
As mentioned by tripleee in the question comments, use moreutils ifne (if input not empty).
In this case we want ifne -n which negates the test:
ls -A /tmp/empty | ifne -n command-to-run-if-empty-input
The advantage of this over many of the other answers shows when the output of the initial command is non-empty: ifne will start writing it to stdout straight away, rather than buffering the entire output and writing it later. That matters if the initial output is generated slowly, or is so long that it would overflow the maximum length of a shell variable.
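For comparison, the more common non-negated use runs a command only when there is input; a sketch (the recipient address is a placeholder):
# Mail the list of core files, but only if any were found
find / -name core 2>/dev/null | ifne mail -s "core files found" admin@example.com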
There are a few utils in moreutils that arguably should be in coreutils -- they're worth checking out if you spend a lot of time living in a shell.
Of particular interest to the OP may be the dirempty/exists tool, which at the time of writing is still under consideration and has been for some time (it could probably use a bump).