I'm using fswatch to keep track of changes to a directory, but I would like this process to stop once a certain file (matched with a wildcard) exists. That file is created in a different directory (not in the tracked directory) by another process, the one generating the changes that need to be tracked.
Here's what I tried to do:
while [[ $(shopt -s nullglob; set -- "${file_to_check}"; echo $#) -eq 1 ]]; do
    fswatch "${path_to_the_tracked_directory}"
done && echo "Done"
However, this script does not terminate after ${file_to_check} appears.
The complex bit in the condition is there to handle wildcards, as per: Bash check if file exists with double bracket test and wildcards.
EDIT:
The complex bit can be simplified to:
while [ $(set -- "${path_to_the_file_to_check}"${file_to_check_with_wildcards}; echo $#) -eq 0 ]; do
    fswatch "${path_to_the_tracked_directory}"
done && echo "Done"
One solution is to use the -1/--one-event option (https://github.com/emcrisostomo/fswatch/wiki/How-to-Use-fswatch), so that fswatch returns after the first event and the condition is re-evaluated.
The code then looks like this:
while [ $(set -- "${path_to_the_file_to_check}"${file_to_check_with_wildcards}; echo $#) -eq 0 ]; do
    fswatch -1 "${path_to_the_tracked_directory}"
done && echo "Done"
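For a complete picture, here is a minimal sketch of the same loop with made-up placeholder paths (they are not from the original post). Note that the zero-match count above relies on nullglob being enabled; without it, an unmatched pattern is left in place and counts as one word. The sketch below sidesteps that by using compgen -G, a bash builtin that expands a glob and succeeds only if it matches something:
#!/usr/bin/env bash
tracked_dir="/data/tracked"       # hypothetical directory being watched
stop_glob="/data/flags/stop_*"    # hypothetical wildcard for the stop file

# Keep watching until at least one file matches the stop pattern.
while ! compgen -G "$stop_glob" > /dev/null; do
    fswatch -1 "$tracked_dir"     # -1/--one-event: return after a single event
done
echo "Done"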
Related
On macOS Catalina, I have a bash script with
if [[ ! -f $CR/home/files/Recovery_*.txt ]]
then
    echo "File does not exist in /home/files directory. Exiting" >> $log
    echo "Aborted - $CR/home/files/Recovery_*.txt does not exist"
    exit
fi
Even though there are 2 files in the directory, the script exits. If I list the directory contents beforehand, there are 2 files. If I change it as follows, the if is skipped:
if [[ `ls -la $CR/home/files/Recovery_*.txt | wc -l` -eq 0 ]]
then
    echo "No files are :"
    exit
fi
I want to use the -f conditional in the if.
Any suggestions please?
Cheers, C
If you use nullglob, a glob expression that doesn't match anything expands to nothing. This lets us count files in bash without spawning other processes: create an array from the expression, then check its length.
shopt -s nullglob
files=($CR/home/files/Recovery_*.txt)
if [[ ${#files[@]} -eq 0 ]]
then
    echo "No files"
    exit
fi
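As a follow-up, once the array is populated, ${#files[@]} gives the match count directly and the elements can be processed one by one; for example:
echo "Found ${#files[@]} recovery file(s)"
for f in "${files[@]}"; do
    echo "Processing $f"
done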
[Edited]
The error was not in the variables but in the missing shebang, as the script had come across from W2K3 SFU.
The tip about shellcheck.net was awesome and I will use it from now on.
Thanks.
I want to create a directory with increasing numbers every time I run the script. My current solution is:
COUNTER=1
while mkdir $COUNTER; (( $? != 0 ))
do
    COUNTER=$((COUNTER + 1))
done
Is separating the commands in the while condition with a semicolon (;) the best practice?
The very purpose of while and the shell's other control statements is to run a command and examine its exit code. You can use ! to negate the exit code.
while ! mkdir "$COUNTER"
do
    COUNTER=$((COUNTER + 1))
done
Notice also the quoting; see further Why is testing "$?" to see if a command succeeded or not, an anti-pattern?
As such, if you want two commands to run and only care about the exit code from the second, a semicolon is the correct separator. Often, you want both commands to succeed; then, && is the correct separator to use.
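A quick illustration of the difference between the two separators inside a condition:
# With ';', only the last command's exit status decides the branch:
if false; true; then echo "taken: only 'true' was tested"; fi

# With '&&', both commands must succeed:
if false && true; then echo "never printed"; fi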
You don't need to test the exit status; just check whether the directory already exists and increment. Here is one way:
#!/usr/bin/env bash
counter=1
while [[ -e $counter ]]; do
    ((counter++))
done
if ! mkdir "$counter"; then   ##: mkdir failed
    echo failed               ##: execute this code
fi
The same in POSIX sh:
#!/usr/bin/env sh
counter=1
while [ -e "$counter" ]; do
    counter=$((counter+1))
done
if ! mkdir "$counter"; then   ##: If mkdir failed
    echo failed               ##: Execute this code
fi
The bang ! negates the exit status of mkdir.
See help test | grep '^[[:blank:]]*!'
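You can see the negation in isolation:
! false; echo $?    # prints 0: the failure was flipped to success
! true;  echo $?    # prints 1: the success was flipped to failure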
Well, if you're just going to negate the exit status of mkdir inside the while loop, then you might as well use until, which is the opposite of while:
counter=1
until mkdir "$counter"; do
    counter=$((counter + 1))
done
This is a small bash program that is tasked with looking through a directory and counting how many files are in it. It should ignore other directories and only count the files.
Below is my bash code, which seems to fail to count the files in the directory. I say this because if I remove the if statement and just increment the counter, the for loop iterates and the counter prints 4 (though this includes directories). With the if statement it prints this to the console:
folder1 has files
Looking at other questions, I think the expression in my if statement is right, and I am getting no errors for syntax or other problems.
So I am simply dumbfounded as to why it is not counting the files.
#!/bin/bash
folder=$1
if [ $1 = empty ]; then
    folder=empty
    counter=0
    echo $folder has $counter files
    exit
fi
for d in $(ls $folder); do
    if [[ -f $d ]]; then
        let 'counter++'
    fi
done
echo $folder has $counter files
Thank you.
Your entire script could be simplified as below, with a few enhancements. Never use the output of ls programmatically; it should be used only on the command line. The -z construct lets you test whether the string following it is empty or non-empty.
For looping over files, use the default glob expansion provided by the shell. Note that && is shorthand for performing an action when the left-hand side returns a true condition; it is roughly equivalent to if <condition>; then <action>; fi
#!/usr/bin/env bash

[ -z "$1" ] && { printf 'invalid argument passed\n' >&2 ; exit 1 ; }

shopt -s nullglob

count=0
for file in "$1"/*; do
    [ -f "$file" ] && ((count++))
done

printf 'folder %s has %d files\n' "$1" "$count"
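A hypothetical run, with the script saved as countfiles.sh and a made-up directory:
$ ./countfiles.sh /tmp/reports
folder /tmp/reports has 3 files
$ ./countfiles.sh
invalid argument passed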
The issue that I have is with the line marked "does not work" below. The last line does indeed work, but I need to understand why the second-to-last one does not. I need to check for file existence on the remote server.
Have a need to check for existence for files at the following location:
/home/remoteuser/files/
and when the files are processed, they are moved to:
/home/remoteuser/logs/archive/
I would like to create an alert if files still exist at the following location, in other words, if the files were not processed:
/home/remoteuser/logs/
I found the following page, which seems to be what I am looking for:
http://www.cyberciti.biz/tips/find-out-if-file-exists-with-conditional-expressions.html
Testing this, and I know there are files there, but it does not work:
ssh remoteuser@1.2.3.4 [ ! -f /home/remoteuser/logs/archive/*.* ] && echo "File does not exist in the root" >> /home/localuser/files/dirlist.txt
Because we know this works and does indeed list the files (into a file on the local server):
ssh remoteuser@1.2.3.4 ls -l /home/remoteuser/logs/archive/*.* >> /home/localuser/files/dirlist.txt
Wildcards and test construct in Bash
You cannot use wildcards in the [ command to test for the existence of multiple files. The wildcard will be expanded and all the matching files will be passed to the test; the result is that it complains that "-f" is given too many arguments.
Try this in any non-empty directory to see the output:
[ ! -f *.* ]
The only situation in which the above command does not fail is when there is exactly one file matching the expression, in your case a non-hidden file matching "*.*" in /home/remoteuser/logs/archive/.
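For example, in a hypothetical directory containing three matching files:
$ ls
a.txt  b.txt  c.txt
$ [ ! -f *.* ] && echo "no matching files"
bash: [: too many arguments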
Using Find
A possible solution is to use find in combination with grep:
ssh name@server find /path/to/the/files -type f -name "\*.\*" 2>/dev/null | grep -q . && echo "Not Empty" || echo "Empty"
find searches for regular files (-type f) whose names match "*.*" (-name) and prints nothing if no such file is found; grep -q . then exits with 0 if find produced any output and with 1 otherwise.
Your goal can be accomplished with only shell builtins -- and without any uses of those builtins which depend on their behavior when passed invalid syntax (as the [ ! -e *.* ] approach does). This removes the dependency on having an accessible, working find command on your remote system.
Consider:
rmtfunc() {
    set -- /home/remoteuser/logs/*.*   # put the directory's contents into the positional parameters ($@)
    for arg; do                        # ...for each item in that list...
        [ -f "$arg" ] && exit 0        # ...if it's a file that exists, success
    done
    exit 1                             # if nothing matched above, failure
}
# emit text that defines that function into the ssh command, then run same
if ssh remoteuser@host "$(declare -f rmtfunc); rmtfunc"; then
    echo "Found remote logfiles"
else
    echo "No remote logfiles exist"
fi
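To see what actually gets shipped to the remote side, you can build the same payload locally; declare -f prints the function's definition as shell source, and the command substitution places it in front of "; rmtfunc" so the remote shell first defines the function and then calls it:
# Local preview of the text that ssh will hand to the remote shell
ssh_payload="$(declare -f rmtfunc); rmtfunc"
printf '%s\n' "$ssh_payload"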
ANSWER:
I did find the following about the use of -e for a regular file.
http://www.cyberciti.biz/faq/unix-linux-test-existence-of-file-in-bash/
Even though it says "too many arguments", it does seem to test out OK.
ssh remoteuser@1.2.3.4 [ ! -e /home/remoteuser/logs/archive/*.zip ] && echo "File does not exist in the root" >> /home/localuser/files/dirlist.txt || echo "File does exist in the root" >> /home/localuser/files/dirlist.txt
Your script will work simply by using double brackets:
ssh remoteuser@1.2.3.4 [[ ! -f /home/remoteuser/logs/archive/*.* ]] && echo "File does not exist in the root" >> /home/localuser/files/dirlist.txt
From man bash
Word splitting and pathname expansion are not performed on the words between the [[ and ]].
I'm currently writing a bash script for executing test suites. Besides passing the suites directly to this script, like
./bash-specs test.suite
it should also be able to execute all scripts in a given directory if no suite is passed to it, like so
./bash-specs # executes all tests in the directory, namely test.suite
This is implemented like this
(($# == 0)) && set -- *.suite
So, if no suite is passed, all the files ending on .suite are executed. This works fine but fails if the directory contains no such files.
That means I will also need a check to test if there actually are files with that ending.
How would I do this in bash?
I thought a test like
[[ -f *.suite ]]
should work, but it seems to fail when there is more than one file in the directory.
The reason -f is failing is that -f only takes a single argument. With the single-bracket test, [ -f *.suite ] expands to:
[ -f test.suite test2.suite test3.suite ]
... which is not valid. (Inside [[ ]] the pattern is not expanded at all, so the test looks for a file literally named *.suite, which also fails.)
Instead, do this:
shopt -s nullglob
FILES=$(echo *.suite)

if [[ -z $FILES ]]; then
    echo "No suites found"
    exit
fi

for i in $FILES; do
    # Run your test on file $i
    :
done
nullglob is a shell option that makes wildcard patterns that aren't found expand to nothing, rather than expanding to the wildcard pattern itself. Once $FILES is set to either a list of files or nothing, we can use -z to test for emptiness, and display the appropriate error message.
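An equivalent sketch that stores the matches in an array instead of a word-split string, which also copes with filenames containing spaces (same nullglob idea):
shopt -s nullglob
suites=(*.suite)

if ((${#suites[@]} == 0)); then
    echo "No suites found"
    exit 1
fi

for i in "${suites[@]}"; do
    echo "running $i"    # replace with the actual test run
done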
ls -al | grep "\.suite"; echo $?
This will print 0 if a matching file is present and 1 if it is not.
I would iterate over every suite file like this:
for i in *.suite ; do
    if [ -x "$i" ] ; then
        echo "running $i"
    fi
done
Right after:
(($# == 0)) && set -- *.suite
test whether $1 is empty (with -z); if it is, there is no file named *.suite. (Note that this requires nullglob to be enabled; otherwise an unmatched pattern is left in place as the literal word *.suite, in which case you would test [[ -e $1 ]] instead.)
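A minimal sketch of that check, assuming nullglob is enabled so an unmatched pattern expands to nothing:
shopt -s nullglob
(($# == 0)) && set -- *.suite

if [[ -z $1 ]]; then
    echo "No suites found" >&2
    exit 1
fi

for suite in "$@"; do
    echo "running $suite"    # replace with the actual suite execution
done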