bash: perform an action (grep) when a file changes

I Googled a lot and, surprisingly, didn't find a working solution. I'm an engineer, not a programmer; I just need this tool.
So: I have a file "test2.dat" that I want to grep every time it changes.
I don't have inotifywait or when-changed or any similar tools installed, and I don't have the rights to install them (and don't even want to, as I would like this script to work universally).
Any suggestions?
What I tried:
LTIME='stat -c %Z test2.dat'
while true
do
ATIME='stat -c %Z test2.dat'
if [[ "$ATIME" != "$LTIME" ]]
then
grep -i "15 RT" test2.dat > test_grep2.txt
LTIME=$ATIME
fi
sleep 5
done
but that doesn't do basically anything.

Your command-substitution syntax is wrong: a command inside single quotes is not executed, it is just stored as a literal string. The command-substitution syntax in bash is $(cmd).
Because of that, [[ "$ATIME" != "$LTIME" ]] compares two copies of the literal text stat -c %Z test2.dat, so the test can never be true; and once you store LTIME=$ATIME the two strings would stay identical anyway.
The corrected script should be:
#!/bin/bash
LTIME=$(stat -c %Z test2.dat)
while true
do
ATIME=$(stat -c %Z test2.dat)
if [[ "$ATIME" != "$LTIME" ]]
then
grep -i "15 RT" test2.dat > test_grep2.txt
LTIME="$ATIME"
fi
sleep 5
done
I would recommend using lower-case names for your own variables in bash; I've just reused your naming in the example above.
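If stat itself turns out not to be portable enough (its flags differ between GNU and BSD), the same polling approach can compare a checksum of the file content instead. A minimal sketch, assuming only POSIX cksum is available:
#!/bin/bash
# Poll test2.dat every 5 seconds and re-run grep whenever its content changes.
# cksum prints a checksum and byte count, so any content change alters the value.
last=$(cksum test2.dat)
while true
do
    current=$(cksum test2.dat)
    if [[ "$current" != "$last" ]]
    then
        grep -i "15 RT" test2.dat > test_grep2.txt
        last=$current
    fi
    sleep 5
done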

Related

Prevent "mv" command from raising error if no file matches the glob. eg" mv *.json /dir/

I want to move all JSON files created within a Jenkins job to a different folder.
It is possible that the job does not create any JSON files.
In that case the mv command raises an error and the job fails.
How do I prevent the mv command from raising an error when no file is found?
Welcome to SO.
Why do you not want the error?
If you just don't want to see the error, then you could always just throw it away with 2>/dev/null, but PLEASE don't do that. Not every error is the one you expect, and this is a debugging nightmare. You could write it to a log with 2>$logpath and then build in logic to read that to make certain it's ok, and ignore or respond accordingly --
mv *.json /dir/ 2>$someLog
executeMyLogParsingFunction # verify expected err is the ONLY err
If it's because you have set -e or a trap in place, and you know it's ok for the mv to fail (which might not be because there is no file!), then you can use this trick -
mv *.json /dir/ || echo "(Error ok if no files found)"
or
mv *.json /dir/ ||: # : is a no-op synonym for "true" that returns 0
see https://www.gnu.org/software/bash/manual/html_node/Conditional-Constructs.html
(If it's failing simply because the mv is returning a nonzero as the last command, you could also add an explicit exit 0, but don't do that either - fix the actual problem rather than patching the symptom. Any of these other solutions should handle that, but I wanted to point out that unless there's a set -e or a trap that catches the error, it shouldn't cause the script to fail unless it's the very last command.)
Better would be to specifically handle the problem you expect without disabling error handling on other problems.
shopt -s nullglob # globs with no match do not eval to the glob as a string
for f in *.json; do mv "$f" /dir/; done # no match means no loop entry
c.f. https://www.gnu.org/software/bash/manual/html_node/The-Shopt-Builtin.html
or if you don't want to use shopt,
for f in *.json; do [[ -e "$f" ]] && mv "$f" /dir/; done
Note that I'm only testing existence, so that will include any match, including directories, symlinks, named pipes... you might want [[ -f "$f" ]] && mv "$f" /dir/ instead.
c.f. https://www.gnu.org/software/bash/manual/html_node/Bash-Conditional-Expressions.html
This is expected behavior -- it's why the shell leaves *.json unexpanded when there are no matches, to allow mv to show a useful error.
If you don't want that, though, you can always check the list of files yourself, before passing it to mv. As an approach that works with all POSIX-compliant shells, not just bash:
#!/bin/sh
# using a function here gives us our own private argument list.
# that's useful because minimal POSIX sh doesn't provide arrays.
move_if_any() {
dest=$1; shift # shift makes the old $2 be $1, the old $3 be $2, etc.
# so, we then check how many arguments were left after the shift;
# if it's only one, we need to also check whether it refers to a filesystem
# object that actually exists.
if [ "$#" -gt 1 ] || [ -e "$1" ] || [ -L "$1" ]; then
mv -- "$#" "$dest"
fi
}
# put destination_directory/ in $1 where it'll be shifted off
# $2 will be either nonexistent (if we were really running in bash with nullglob set)
# ...or the name of a legitimate file or symlink, or the string '*.json'
move_if_any destination_directory/ *.json
...or, as a more bash-specific approach:
#!/bin/bash
files=( *.json )
if (( ${#files[@]} > 1 )) || [[ -e ${files[0]} || -L ${files[0]} ]]; then
mv -- "${files[@]}" destination/
fi
Loop over all json files and move each of them, if it exists, in a oneliner:
for X in *.json; do [[ -e $X ]] && mv "$X" /dir/; done
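Another option, assuming GNU find and GNU mv are available (-maxdepth and -t are GNU extensions), is to let find build the argument list, since it simply passes nothing to mv when there are no matches; a minimal sketch:
# Move every regular .json file in the current directory; no matches means mv is never run.
find . -maxdepth 1 -type f -name '*.json' -exec mv -t /dir/ {} +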

Getting Bad substitution error with a Shell Script on a mac?

I'm getting an error message "./query.sh: line 5: ${1,,}: bad substitution" whenever I run a shell script in a Mac OSX terminal
#!/bin/bash
dir=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
if [ "$1" != "" ]; then
letter1=$(echo ${1,,}|(cut -b1))new
if [[ $letter1 == [a-zA-Z0-9] ]]; then
if [ -f "$dir/data/$letter1" ]; then
grep -ai "^$1" "$dir/data/$letter1"
else
letter2=$(echo ${1,,}|cut -b2)
if [[ $letter2 == [a-zA-Z0-9] ]]; then
if [ -f "$dir/data/$letter1/$letter2" ]; then
grep -ai "^$1" "$dir/data/$letter1/$letter2"
else
letter3=$(echo ${1,,}|cut -b3)
if [[ $letter3 == [a-zA-Z0-9] ]]; then
if [ -f "$dir/data/$letter1/$letter2/$letter3" ]; then
grep -ai "^$1" "$dir/data/$letter1/$letter2/$letter3"
fi
else
if [ -f "$dir/data/$letter1/$letter2/symbols" ]; then
grep -ai "^$1" "$dir/data/$letter1/$letter2/symbols"
fi
fi
fi
else
if [ -f "$dir/data/$letter1/symbols" ]; then
grep -ai "^$1" "$dir/data/$letter1/symbols"
fi
fi
fi
else
if [ -f "$dir/data/symbols" ]; then
grep -ai "^$1" "$dir/data/symbols"
fi
fi
else
echo "[*] Example: ./query name#domain.com"
fi
The script's function is to search through a huge number of data files, so could anybody help me pinpoint the source of the problem?
The ,, case-conversion operator (${1,,} expands $1 with all letters lower-cased) was introduced in bash 4.0, but /bin/bash on macOS is still version 3.2. You can install a newer version of bash and change your shebang accordingly, or you can use letter1=$(echo "$1" | tr '[:upper:]' '[:lower:]' | cut -b1) instead.
(You can, however, use ${letter:0:1}, ${letter:1:1}, etc, in place of a call to cut to get a single letter from the string.)
My advice is to treat /bin/bash on macOS as nothing more than a POSIX-compatible shell. Use #!/bin/sh if you want a portable script, or use #!/usr/local/bin/bash (or whatever is appropriate, after installing a new version of bash) if you want to take advantage of bash extensions at the expense of portability.
If you are using a subshell to produce the value anyway, you might as well change the call to something which is portable back to earlier versions of Bash.
Running cut -b1 in a subshell (in parentheses) is useless and doing it after the conversion means you convert a potentially long string and then discard everything except the first character.
letter1=$(echo "${1:0:1}" | tr '[:upper:]' '[:lower:]')new
Notice also the double quotes around the argument to echo.
Do you really want the suffix new to be attached to the value? The first condition in the if can never match in that case.
You should probably also refactor the code to avoid the deep "arrow antipattern" and only perform the grep once you have a value for the file name you want to search.
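As a rough illustration of that refactoring (a hypothetical sketch that keeps the question's data/ layout but uses tr instead of ${1,,} so it also runs on the stock macOS bash; the variable names lower and path are just for illustration), the candidate path can be built up first and grep run once at the end:
#!/bin/bash
dir=$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )
if [ -z "$1" ]; then
    echo "[*] Example: ./query name#domain.com"
    exit 1
fi
# Lower-case copy of the query, portable to bash 3.2.
lower=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
path="$dir/data"
for i in 0 1 2; do
    c=${lower:$i:1}
    if [[ $c == [a-z0-9] ]]; then
        path="$path/$c"
    else
        # Non-alphanumeric (or missing) character: fall back to the symbols file.
        path="$path/symbols"
        break
    fi
    # Stop descending as soon as we hit an actual data file.
    [ -f "$path" ] && break
done
[ -f "$path" ] && grep -ai "^$1" "$path"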
As mentioned in other answers, the built-in bash on the Mac is very old and must be upgraded to use some operators, e.g. the case-conversion operators ,, and ^^ (the caret operator). If you use Homebrew, run brew install bash and follow the instructions in this post.

Best Way to Get File Modified Time in Seconds

This seems to be a classic case of non-standard features on various platforms.
Quite simply, I want a universally (or at least widely) supported method for getting the modified time of a file as a unix timestamp in seconds.
Now I know of various ways to do this with stat, but most are platform specific; for example stat -c %Y $file works for some, but won't work on OS X (and presumably other BSD-derived systems), which uses stat -f %m $file instead.
Likewise, some platforms support date -r $file +%s, but OS X/FreeBSD again does not, as their -r option takes a seconds value (an alternate way of specifying the time to display) rather than a reference file as on other platforms.
The other alternative I'm familiar with is to use find with the -printf option, but again this is not widely supported. The last method I know of is parsing ls which, aside from being an unpleasant thing to have to do, is not something I believe can (or at least should) be relied upon either.
So, is there a more compatible method for getting a file's modified time? Currently I'm just throwing different variations of stat into a script and running them until one exits with a status of zero, but this is far from ideal, even if I cache the successful command to run first in future.
Since it seems like there might not be a "correct" solution I figured I'd post my current one for comparison:
if stat -c %Y . >/dev/null 2>&1; then
get_modified_time() { stat -c %Y "$1" 2>/dev/null; }
elif stat -f %m . >/dev/null 2>&1; then
get_modified_time() { stat -f %m "$1" 2>/dev/null; }
elif date -r . +%s >/dev/null 2>&1; then
get_modified_time() { date -r "$1" +%s 2>/dev/null; }
else
echo 'get_modified_time() is unsupported' >&2
get_modified_time() { printf '%s' 0; }
fi
[edit]
I'm updating this to reflect the more up-to-date version of the code I use: it tests the two main stat methods and a fairly common date method, each against the current working directory, and if one of them succeeds it defines a function wrapping that method for use later in the script.
This method differs from the previous one I posted in that it always does some processing, even if get_modified_time is never called, but it's more efficient overall if you do need to call it, which is usually the case. It also lets you catch an unsupported platform earlier on.
If you prefer a function that only tries the methods when it is actually called, here's the other form:
get_modified_time() {
modified_time=$(stat -c %Y "$1" 2> /dev/null)
if [ "$?" -ne 0 ]; then
modified_time=$(stat -f %m "$1" 2> /dev/null)
if [ "$?" -ne 0 ]; then
modified_time=$(date -r "$1" +%s 2> /dev/null)
[ "$?" -ne 0 ] && modified_time=0
fi
fi
echo "$modified_time"
}
Why so complicated?
After not finding anything on the web, I simply read the manual of ls:
man ls
which gave me
ls --time-style=full-iso -l
showing the time in the format hh:mm:ss.sssssssss.
With
ls --time-style=+FORMAT -l
FORMAT is used as with date (see man date), so
ls --time-style=+%c
will give you the local date and time, with seconds as an integer (without a decimal point).
You can strip off the additional ls information (e.g. file name, owner, ...) by piping through awk.
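For example (assuming GNU ls, and with the usual caveat that parsing ls output is fragile), a unix timestamp can be pulled out of the long listing like this:
# +%s prints seconds since the epoch; with this time style it is the 6th field of ls -l output.
ls -l --time-style=+%s "$file" | awk '{print $6}'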
Just ask the operating system for its name (uname) and pick the right command from there. Alternatively, write a C program, or use Python or something else that's pretty common and more standardized: How to get file creation & modification date/times in Python?
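If Python happens to be installed, a sketch of that route from the shell could look like the following (swap in python3 if that is what the system provides):
get_modified_time() {
    # getmtime returns a float; truncate to whole seconds to match stat -c %Y.
    python -c 'import os, sys; print(int(os.path.getmtime(sys.argv[1])))' "$1"
}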

Deleting files by date in a shell script?

I have a directory with lots of files. I want to keep only the 6 newest. I guess I can look at their creation dates and run rm on all those that are too old, but is there a better way of doing this? Maybe some Linux command I could use?
Thanks!
:)
rm -v $(ls -t mysvc-*.log | tail -n +7)
ls -t, list sorted by time
tail -n +7, start output at line 7, i.e. print everything except the first 6 lines (the 6 newest files)
$() turns the enclosed command's output into the argument list for rm
rm to remove the files, of course
Beware files with space in their names, $() splits on any white-space!
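A variant that copes with spaces (though still not newlines) in the names is to read one name per line instead of relying on word splitting; a minimal sketch:
# tail -n +7 skips the 6 newest; each remaining name is removed individually.
ls -t mysvc-*.log | tail -n +7 | while IFS= read -r f; do rm -v -- "$f"; done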
Here's my take on it, as a script. It does handle spaces in file names even if it is a bit of a hack.
#!/bin/bash
eval set -- $(ls -t1 | sed -e 's/.*/"&"/')
if [[ $# -gt 6 ]] ; then
shift 6
while [[ $# -gt 0 ]] ; do
echo "remove this file: $1" # rm "$1"
shift
done
fi
The second option to ls up there (-1) is the digit one, for one file name per line. It doesn't actually seem to matter, though, since that appears to be the default when ls isn't writing to a tty.

Test if a command outputs an empty string

How can I test if a command outputs an empty string?
Previously, the question asked how to check whether there are files in a directory. The following code achieves that, but see rsp's answer for a better solution.
Empty output
Commands don’t return values – they output them. You can capture this output by using command substitution; e.g. $(ls -A). You can test for a non-empty string in Bash like this:
if [[ $(ls -A) ]]; then
echo "there are files"
else
echo "no files found"
fi
Note that I've used -A rather than -a, since it omits the symbolic current (.) and parent (..) directory entries.
Note: As pointed out in the comments, command substitution doesn't capture trailing newlines. Therefore, if the command outputs only newlines, the substitution will capture nothing and the test will return false. While very unlikely, this is possible in the above example, since a single newline is a valid filename! More information in this answer.
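A quick way to see this stripping behaviour without touching the filesystem at all (a hypothetical demo using printf in place of ls):
out=$(printf '\n\n\n')   # the command outputs nothing but newlines
if [[ $out ]]; then echo "captured something"; else echo "captured nothing"; fi
# prints "captured nothing", because command substitution strips trailing newlines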
Exit code
If you want to check that the command completed successfully, you can inspect $?, which contains the exit code of the last command (zero for success, non-zero for failure). For example:
files=$(ls -A)
if [[ $? != 0 ]]; then
echo "Command failed."
elif [[ $files ]]; then
echo "Files found."
else
echo "No files found."
fi
More info here.
TL;DR
if [[ $(ls -A | head -c1 | wc -c) -ne 0 ]]; then ...; fi
Thanks to netj for a suggestion to improve my original:
if [[ $(ls -A | wc -c) -ne 0 ]]; then ...; fi
This is an old question but I see at least two things that need some improvement or at least some clarification.
First problem
First problem I see is that most of the examples provided here simply don't work. They use the ls -al and ls -Al commands - both of which output non-empty strings in empty directories. Those examples always report that there are files even when there are none.
For that reason you should use just ls -A. Why would anyone want to use the -l switch, which means "use a long listing format", when all you want is to test whether there is any output, anyway?
So most of the answers here are simply incorrect.
Second problem
The second problem is that while some answers work fine (those that don't use ls -al or ls -Al but ls -A instead) they all do something like this:
run a command
buffer its entire output in RAM
convert the output into a huge single-line string
compare that string to an empty string
What I would suggest doing instead would be:
run a command
count the characters in its output without storing them
or even better, count at most 1 character using head -c1 (thanks to netj for posting this idea in the comments below)
compare that number with zero
So for example, instead of:
if [[ $(ls -A) ]]
I would use:
if [[ $(ls -A | wc -c) -ne 0 ]]
# or:
if [[ $(ls -A | head -c1 | wc -c) -ne 0 ]]
Instead of:
if [ -z "$(ls -lA)" ]
I would use:
if [ $(ls -lA | wc -c) -eq 0 ]
# or:
if [ $(ls -lA | head -c1 | wc -c) -eq 0 ]
and so on.
For small outputs it may not be a problem but for larger outputs the difference may be significant:
$ time [ -z "$(seq 1 10000000)" ]
real 0m2.703s
user 0m2.485s
sys 0m0.347s
Compare it with:
$ time [ $(seq 1 10000000 | wc -c) -eq 0 ]
real 0m0.128s
user 0m0.081s
sys 0m0.105s
And even better:
$ time [ $(seq 1 10000000 | head -c1 | wc -c) -eq 0 ]
real 0m0.004s
user 0m0.000s
sys 0m0.007s
Full example
Updated example from the answer by Will Vousden:
if [[ $(ls -A | wc -c) -ne 0 ]]; then
echo "there are files"
else
echo "no files found"
fi
Updated again after suggestions by netj:
if [[ $(ls -A | head -c1 | wc -c) -ne 0 ]]; then
echo "there are files"
else
echo "no files found"
fi
Additional update by jakeonfire:
grep will exit with a failure if there is no match. We can take advantage of this to simplify the syntax slightly:
if ls -A | head -c1 | grep -E '.'; then
echo "there are files"
fi
if ! ls -A | head -c1 | grep -E '.'; then
echo "no files found"
fi
Discarding whitespace
If the command that you're testing could output some whitespace that you want to treat as an empty string, then instead of:
| wc -c
you could use:
| tr -d ' \n\r\t ' | wc -c
or with head -c1:
| tr -d ' \n\r\t ' | head -c1 | wc -c
or something like that.
Summary
First, use a command that works.
Second, avoid unnecessary storing in RAM and processing of potentially huge data.
The question didn't specify that the output is always small, so the possibility of large output needs to be considered as well.
if [ -z "$(ls -lA)" ]; then
echo "no files found"
else
echo "There are files"
fi
This will run the command and check whether the returned output (string) has a zero length.
You might want to check the 'test' manual pages for other flags.
Use the "" around the argument that is being checked, otherwise empty results will result in a syntax error as there is no second argument (to check) given!
Note: that ls -la always returns . and .. so using that will not work, see ls manual pages. Furthermore, while this might seem convenient and easy, I suppose it will break easily. Writing a small script/application that returns 0 or 1 depending on the result is much more reliable!
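A sketch of such a helper, built without parsing ls at all: a glob-based function (POSIX sh) whose exit status is the only thing callers need to look at:
dir_has_files() {
    # Unmatched globs stay literal, so the -e/-L tests fail and we fall through to "return 1".
    for f in "$1"/* "$1"/.[!.]* "$1"/..?*; do
        if [ -e "$f" ] || [ -L "$f" ]; then
            return 0
        fi
    done
    return 1
}
dir_has_files /tmp && echo "There are files" || echo "no files found"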
For those who want an elegant, bash version-independent solution (in fact should work in other modern shells) and those who love to use one-liners for quick tasks. Here we go!
ls | grep . && echo 'files found' || echo 'files not found'
(Note: as one of the comments mentioned, ls -al, and in fact just -l and -a, will all return something, so in my answer I use plain ls.)
From the Bash Reference Manual, 6.4 Bash Conditional Expressions:
-z string: True if the length of string is zero.
-n string (or just string): True if the length of string is non-zero.
You can use shorthand version:
if [[ $(ls -A) ]]; then
echo "there are files"
else
echo "no files found"
fi
As Jon Lin commented, ls -al will always output something (because of . and ..). You want ls -Al to avoid these two directories.
You could for example put the output of the command into a shell variable:
v=$(ls -Al)
An older, non-nestable, notation is
v=`ls -Al`
but I prefer the nestable notation $( ... )
Then you can test whether that variable is non-empty:
if [ -n "$v" ]; then
echo there are files
else
echo no files
fi
And you could combine both as if [ -n "$(ls -Al)" ]; then
Sometimes, ls may be some shell alias. You might prefer to use $(/bin/ls -Al). See ls(1) and hier(7) and environ(7) and your ~/.bashrc (if your shell is GNU bash; my interactive shell is zsh, defined in /etc/passwd - see passwd(5) and chsh(1)).
I'm guessing you want the output of the ls -al command, so in bash, you'd have something like:
LS=`ls -la`
if [ -n "$LS" ]; then
echo "there are files"
else
echo "no files found"
fi
sometimes "something" may come not to stdout but to the stderr of the testing application, so here is the fix working more universal way:
if [[ $(partprobe ${1} 2>&1 | wc -c) -ne 0 ]]; then
echo "require fixing GPT parititioning"
else
echo "no GPT fix necessary"
fi
Here's a solution for more extreme cases:
if [ `command | head -c1 | wc -c` -gt 0 ]; then ...; fi
This will work:
for all Bourne shells;
even if the command output is all zeroes (NUL bytes);
efficiently, regardless of output size;
however,
the command or its subprocesses will be killed once anything is output.
All the answers given so far deal with commands that terminate and output a non-empty string.
Most are broken in the following senses:
They don't deal properly with commands outputting only newlines;
starting from Bash ≥ 4.4, most will spam standard error if the command outputs null bytes (as they use command substitution);
most will slurp the full output stream, so will wait until the command terminates before answering. Some commands never terminate (try, e.g., yes).
So to fix all these issues, and to answer the following question efficiently,
How can I test if a command outputs an empty string?
you can use:
if read -n1 -d '' < <(command_here); then
echo "Command outputs something"
else
echo "Command doesn't output anything"
fi
You may also add some timeout so as to test whether a command outputs a non-empty string within a given time, using read's -t option. E.g., for a 2.5 seconds timeout:
if read -t2.5 -n1 -d '' < <(command_here); then
echo "Command outputs something"
else
echo "Command doesn't output anything"
fi
Remark. If you think you need to determine whether a command outputs a non-empty string, you very likely have an XY problem.
Here's an alternative approach that writes the stdout and stderr of some command to a temporary file, and then checks whether that file is empty. A benefit of this approach is that it captures both outputs, and does not use subshells or pipes. The latter aspects matter because they can interfere with trapping bash exit handling (e.g. here).
tmpfile=$(mktemp)
some-command &> "$tmpfile"
if [[ $? != 0 ]]; then
echo "Command failed"
elif [[ -s "$tmpfile" ]]; then
echo "Command generated output"
else
echo "Command has no output"
fi
rm -f "$tmpfile"
Sometimes you want to save the output, if it's non-empty, to pass it to another command. If so, you could use something like
list=`grep -l "MY_DESIRED_STRING" *.log `
if [ $? -eq 0 ]
then
/bin/rm $list
fi
This way, the rm command isn't run with an empty argument list when grep finds nothing.
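If GNU grep and xargs are available, a null-delimited variant sidesteps the word splitting of the unquoted $list and still skips rm entirely when nothing matches; a minimal sketch:
# -lZ prints matching file names NUL-terminated; -r skips rm when the input is empty.
grep -lZ "MY_DESIRED_STRING" *.log | xargs -0 -r rm --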
As mentioned by tripleee in the question comments, use moreutils ifne (if input not empty).
In this case we want ifne -n which negates the test:
ls -A /tmp/empty | ifne -n command-to-run-if-empty-input
The advantage of this over many of the other answers shows when the output of the initial command is non-empty: ifne will start writing it to STDOUT straight away, rather than buffering the entire output and then writing it later, which is important if the initial output is slowly generated or extremely long and would overflow the maximum length of a shell variable.
There are a few utils in moreutils that arguably should be in coreutils -- they're worth checking out if you spend a lot of time living in a shell.
Of particular interest to the OP may be the dirempty/exists tool, which at the time of writing is still under consideration, and has been for some time (it could probably use a bump).
