Bash Script: unary operator expected error? - bash

#!/usr/bin/env bash
FILETYPES=( "*.html" "*.css" "*.js" "*.xml" "*.json" )
DIRECTORIES=`pwd`
MIN_SIZE=1024
for currentdir in $DIRECTORIES
do
    for i in "${FILETYPES[@]}"
    do
        find $currentdir -iname "$i" -exec bash -c 'PLAINFILE={};GZIPPEDFILE={}.gz; \
        if [ -e $GZIPPEDFILE ]; \
        then if [ `stat --printf=%Y $PLAINFILE` -gt `stat --printf=%Y $GZIPPEDFILE` ]; \
        then gzip -k -4 -f -c $PLAINFILE > $GZIPPEDFILE; \
        fi; \
        elif [ `stat --printf=%s $PLAINFILE` -gt $MIN_SIZE ]; \
        then gzip -k -4 -c $PLAINFILE > $GZIPPEDFILE; \
        fi' \;
    done
done
This script compresses all static web files using gzip. When I try to run it, I get this error: bash: line 5: [: 93107: unary operator expected. What is going wrong in this script?

You need to export the MIN_SIZE variable. The bash that find spawns doesn't have a value for it, so the test it runs (as I just mentioned in my comment on @ooga's answer) is [ $result_from_stat -gt ], which is an error, and (when the result is 93107) that gets you [ 93107 -gt ], which (if you run it in your shell) produces:
$ [ 93107 -gt ]
-bash: [: 93107: unary operator expected
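A minimal sketch of the fix, keeping the rest of your script as it is: either export MIN_SIZE so the spawned bash can see it, or pass the file name and the threshold to the inner bash as positional arguments so nothing depends on the environment (and {} is no longer embedded in the script text):
export MIN_SIZE     # option 1: make the variable visible to the bash that find spawns

# option 2 (sketch): hand the values to the inner bash as $1 and $2 instead
find "$currentdir" -iname "$i" -exec bash -c '
    PLAINFILE=$1; GZIPPEDFILE=$1.gz; MIN_SIZE=$2
    if [ -e "$GZIPPEDFILE" ]; then
        if [ "$(stat --printf=%Y "$PLAINFILE")" -gt "$(stat --printf=%Y "$GZIPPEDFILE")" ]; then
            gzip -k -4 -c "$PLAINFILE" > "$GZIPPEDFILE"
        fi
    elif [ "$(stat --printf=%s "$PLAINFILE")" -gt "$MIN_SIZE" ]; then
        gzip -k -4 -c "$PLAINFILE" > "$GZIPPEDFILE"
    fi' _ {} "$MIN_SIZE" \;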

This could be simpler:
#!/usr/bin/env bash
FILETYPES=(html css js xml json)
DIRECTORIES=("$PWD")
MIN_SIZE=1024
IFS='|' eval 'FILTER="^.*[.](${FILETYPES[*]})\$"'
for DIR in "${DIRECTORIES[@]}"; do
    while IFS= read -ru 4 FILE; do
        GZ_FILE=$FILE.gz
        if [[ -e $GZ_FILE ]]; then
            [[ $GZ_FILE -ot "$FILE" ]] && gzip -k -4 -c "$FILE" > "$GZ_FILE"
        elif [[ $(exec stat -c '%s' "$FILE") -ge MIN_SIZE ]]; then
            gzip -k -4 -c "$FILE" > "$GZ_FILE"
        fi
    done 4< <(exec find "$DIR" -mindepth 1 -type f -regextype egrep -iregex "$FILTER")
done
There's no need to use pwd; you can just use $PWD. And you probably wanted an array variable as well.
Instead of calling bash multiple times as an argument to find with static string commands, just read input from a pipe, or better yet from a named pipe through process substitution.
Instead of comparing stats, you can just use -ot or -nt.
You don't need -f if you're writing the output through redirection (>), as that form of redirection overwrites the target by default.
You can call find just once for all the file types by building a single pattern, which is more efficient. You can check how I built the filter and used -iregex. Doing \( -iname one_ext_pat -or -iname another_ext_pat \) would probably also work, but it's more cumbersome.
exec is optional; it avoids the unnecessary use of another process.
Always prefer [[ ]] over [ ].
4< opens input with file descriptor 4, and -u 4 makes read read from that file descriptor rather than stdin (0).
What you probably want is -ge MIN_SIZE (greater than or equal), not -gt.
Come to think of it, readarray is a cleaner option if your bash is version 4.0 or newer:
for DIR in "${DIRECTORIES[@]}"; do
    readarray -t FILES < <(exec find "$DIR" -mindepth 1 -type f -regextype egrep -iregex "$FILTER")
    for FILE in "${FILES[@]}"; do
        ...
    done
done
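If file names might contain newlines, a NUL-delimited variant is safer (a sketch; readarray -d '' needs bash 4.4 or newer):
readarray -d '' -t FILES < <(exec find "$DIR" -mindepth 1 -type f -regextype egrep -iregex "$FILTER" -print0)
for FILE in "${FILES[@]}"; do
    ...
done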

Related

Bash script comparison in combination with getfattr

I am currently stuck with a problem in my Bash script and seem to be getting deeper into the dark with every attempt to fix it.
Background:
We have a folder which is getting filled with numbered crash folders, which get filled with crash files. Someone is exporting a list of these folders on a daily basis. During that export, the numbered crash folders get an attribute "user.exported=1".
Some of them do not get exported, so they will not have the attribute and these should be deleted only if they are older than 30 days.
My problem:
I am setting up a bash script, which will eventually run via cron, to regularly check for folders that have the attribute "user.exported=1" and are older than 14 days, and delete them via rm -rfv FOLDER >> deleted.log
However, we also have folders which never have or get the attribute "user.exported=1"; these need to be deleted once they are older than 30 days. I created an if/elif/fi comparison to check for that, but that is where I got stuck.
My Code:
#!/bin/bash
# Variable definition
LOGFILE="/home/crash/deleted.log"
DATE=`date '+%d/%m/%Y'`
TIME=`date '+%T'`
FIND=`find /home/crash -maxdepth 1 -mindepth 1 -type d`
# Code execution
printf "\n$DATE-$TIME\n" >> "$LOGFILE"
for d in $FIND; do
    # Check if crash folders are older than 14 days and have been exported
    if [[ "$(( $(date +"%s") - $(stat -c "%Y" $d) ))" -gt "1209600" ]] && [[ "$(getfattr -d --absolute-names -n user.exported --only-values $d)" == "1" ]]; then
        #echo "$d is older than 14 days and exported"
        "rm -rfv $d" >> "$LOGFILE"
    # Check if crash folders are older than 30 days and delete regardless
    elif [[ "$(( $(date +"%s") - $(stat -c "%Y" $d) ))" -gt "1814400" ]] && [[ "$(getfattr -d --absolute-names -n user.exported $d)" == FALSE ]]; then
        #echo "$d is older than 30 days"
        "rm -rfv $d" >> "$LOGFILE"
    fi
done
The IF part is working fine and it deleted the folders with the attribute "user.exported=1" but the ELIF part does not seem to work, as I only get an output in my bash such as:
/home/crash/1234: user.exported: No such attribut
./crash_remove.sh: Line 20: rm -rfv /home/crash/1234: File or Directory not found
When I look into the crash folder after the script ran, the folder and its content is still there.
I definitely have an error in my script but cannot see it. Please could anyone help me out with this?
Thanks in advance
Only quote the expansions, not the whole command.
Instead of:
"rm -rfv $d"
do:
rm -rfv "$d"
If you quote it all, bash tries to run a command named literally rm<space>-rfv<space><expansion of d>.
Do not use backticks `...`. Use $(...) instead. See the Bash Hackers wiki on obsolete and deprecated syntax.
Do not use for i in $(cat) or var=$(...); for i in $var. Use a while IFS= read -r loop instead. See "How to read a file line by line in bash".
Instead of if [[ "$(( $(date +"%s") - $(stat -c "%Y" $d) ))" -gt "1814400" ]] just do the comparison in the arithmetic expansion, like: if (( ( $(date +"%s") - $(stat -c "%Y" $d) ) > 1814400 )).
I think you could just do it all in find, like:
find /home/crash -maxdepth 1 -mindepth 1 -type d \
    '(' \
        -mtime +14 \
        -exec sh -c '[ "$(getfattr -d --absolute-names -n user.exported --only-values "$1")" = "1" ]' -- {} \; \
        -exec echo rm -vrf {} + \
    ')' -o '(' \
        -mtime +30 \
        -exec sh -c '[ "$(getfattr -d --absolute-names -n user.exported "$1")" = FALSE ]' -- {} \; \
        -exec echo rm -vrf {} + \
    ')' >> "$LOGFILE"
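Putting those points together, a corrected version of your loop could look roughly like this (a sketch only: the thresholds and getfattr flags are taken from your script, and the echo in the find version above is presumably a dry-run guard you would drop once the output looks right):
#!/bin/bash
# Sketch of the corrected script: quoted expansions, $(...) instead of
# backticks, and a while read loop instead of iterating over $FIND.
LOGFILE="/home/crash/deleted.log"

printf '\n%s-%s\n' "$(date '+%d/%m/%Y')" "$(date '+%T')" >> "$LOGFILE"

while IFS= read -r d; do
    age=$(( $(date +%s) - $(stat -c %Y "$d") ))
    # --only-values prints the attribute value, or nothing if it is missing
    exported=$(getfattr --absolute-names -n user.exported --only-values "$d" 2>/dev/null)
    if (( age > 1209600 )) && [[ $exported == 1 ]]; then
        rm -rfv "$d" >> "$LOGFILE"    # exported and older than 14 days
    elif (( age > 1814400 )) && [[ $exported != 1 ]]; then
        rm -rfv "$d" >> "$LOGFILE"    # never exported and older than 30 days
    fi
done < <(find /home/crash -maxdepth 1 -mindepth 1 -type d)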

Is there a way to optimize this code and make it faster, or is there any other better solution?

I am looking to collect YANG models from my project's .jar files. I came up with an approach, but it takes time and my colleagues are not happy.
#!/bin/sh
set -e
# FIXME: make this tuneable
OUTPUT="yang models"
INPUT="."
JARS=`find $INPUT/system/org/linters -type f -name '*.jar' | sort -u`
# FIXME: also wipe output?
[ -d "$OUTPUT" ] || mkdir "$OUTPUT"
for jar in $JARS; do
    artifact=`basename $jar | sed 's/.jar$//'`
    echo "Extracting modules from $artifact"
    # FIXME: better control over unzip errors
    unzip -q "$jar" 'META-INF/yang/*' -d "$artifact" \
        2>/dev/null || true
    dir="$artifact/META-INF/yang"
    if [ -d "$dir" ]; then
        for file in `find $dir -type f -name '*.yang'`; do
            module=`basename "$file"`
            echo -e "\t$module"
            # FIXME: better duplicate detection
            mv -n "$file" "$OUTPUT"
        done
    fi
    rm -rf "$artifact"
done
If the .jar files don't all change between invocations of your script then you could make the script significantly faster by caching the .jar files and only operating on the ones that changed, e.g.:
#!/usr/bin/env bash
set -e
# FIXME: make this tuneable
output='yang models'
input='.'
cache='/some/where'
mkdir -p "$cache" || exit 1
readarray -d '' jars < <(find "$input/system/org/linters" -type f -name '*.jar' -print0 | sort -zu)
# FIXME: also wipe output?
mkdir -p "$output" || exit 1
for jarpath in "${jars[@]}"; do
    # skip this jar if it is identical to the copy already in the cache
    diff -q "$jarpath" "$cache" >/dev/null 2>&1 && continue
    cp "$jarpath" "$cache"
    jarfile="${jarpath##*/}"
    artifact="${jarfile%.*}"
    printf 'Extracting modules from %s\n' "$artifact"
    # FIXME: better control over unzip errors
    unzip -q "$jarpath" 'META-INF/yang/*' -d "$artifact" 2>/dev/null || true
    dir="$artifact/META-INF/yang"
    if [ -d "$dir" ]; then
        readarray -d '' yangs < <(find "$dir" -type f -name '*.yang' -print0)
        for yangpath in "${yangs[@]}"; do
            yangfile="${yangpath##*/}"
            printf '\t%s\n' "$yangfile"
            # FIXME: better duplicate detection
            mv -n "$yangpath" "$output"
        done
    fi
    rm -rf "$artifact"
done
See Correct Bash and shell script variable capitalization, http://mywiki.wooledge.org/BashFAQ/082, https://mywiki.wooledge.org/Quotes, and How can I store the "find" command results as an array in Bash for some of the other changes I made above.
I assume you have some reason for looping on the .yang files and not moving them if a file by the same name already exists rather than unzipping the .jar file into the final output directory.
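For reference, the ${jarpath##*/} and ${jarfile%.*} expansions above do the work of the basename | sed pipeline in pure shell; a quick illustration with a made-up path:
jarpath='./system/org/linters/example-1.2.3.jar'   # hypothetical example path
jarfile="${jarpath##*/}"    # strip everything up to the last '/'  -> example-1.2.3.jar
artifact="${jarfile%.*}"    # strip the shortest trailing '.*'     -> example-1.2.3
printf '%s\n' "$jarfile" "$artifact"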

How to search for several strings in log file with bash scripting

I have an issue optimising my bash script. I have a few patterns I need to look for in the log file. If one of the patterns is found in the log file, then do SOMETHING. So far I have this, but how can I optimize it without so many variables:
search_trace() {
    TYPE=$1
    for i in `find ${LOGTRC}/* -prune -type f -name "${USER}${TYPE}*" `
    do
        res1=0
        res1=`grep -c "String1" $i`
        res2=0
        res2=`grep -c "String2" $i`
        res3=0
        res3=`grep -c "String3" $i`
        res4=0
        res4=`grep -c "String4" $i`
        if [ $res1 -gt 0 ] || [ $res2 -gt 0 ] || [ $res3 -gt 0 ] || [ $res4 -gt 0 ]; then
            write_log W "Something is done ,because of connection reset in ${i}"
            sleep 5
        fi
    done
}
You could simply use alternation syntax in the regular expression you pass to grep, e.g.
if grep -q -E '(String1|String2|String3|String4)' filename; then
# do something
fi
The -E option makes grep use extended regular expressions (including the alternation (|) operator).
search_trace() {
    find "$LOGTRC"/* -prune -type f -name "$USER${1}*" |
    while IFS= read -r filename; do
        if grep -q -e String1 -e String2 -e String3 -e String4 "$filename"; then
            write_log W "Something is done ,because of connection reset in $filename"
            sleep 5
        fi
    done
}
grep's -q option is good for use in an if-condition: it is efficient since it will exit successfully when it finds the first match -- it doesn't have to read the rest of the file.
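If the pattern list keeps growing, one option (a sketch, not from either answer above) is to keep the patterns in an array and expand them into -e options, so they live in one place:
patterns=(String1 String2 String3 String4)

# build one -e option per pattern; grep ORs them together
grep_args=()
for p in "${patterns[@]}"; do
    grep_args+=(-e "$p")
done

if grep -q "${grep_args[@]}" "$filename"; then
    write_log W "Something is done ,because of connection reset in $filename"
    sleep 5
fi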

Unix to verify file has no content and empty lines

How can I verify that a file has absolutely no content? [ -s $file ] tells me whether the file is zero bytes, but how do I know if the file is absolutely empty, with no data at all, including empty lines?
$ cat sample.text
$ ls -lrt sample.text
-rw-r--r-- 1 testuser userstest 1 Jul 31 16:38 sample.text
When i "vi" the file the bottom has this - "sample.text" 1L, 1C
Your file might contain only a newline character.
Try this check:
[[ $(tr -d "\r\n" < file|wc -c) -eq 0 ]] && echo "File has no content"
A file of 0 size by definition has nothing in it, so you are good to go. However, you probably want to use:
if [ \! -s f ]; then echo "0 Sized and completely empty"; fi
Have fun!
Blank lines add data to the file and will therefore increase the file size, which means that just checking whether the file is 0 bytes is sufficient.
For a single file, there are the methods using the shell built-in -s test (with test, [ or [[). ([[ makes dealing with ! less awkward, but is bash-specific.)
fn="file"
if [[ -f "$fn" && ! -s "$fn" ]]; then # -f is needed since -s will return false on missing files as well
echo "File '$fn' is empty"
fi
A (more) POSIX-shell-compatible way (the escaping of exclamation marks can be shell dependent):
fn="file"
if test -f "$fn" && test \! -s "$fn"; then
echo "File '$fn' is empty"
fi
For multiple files, find is a better method.
For a single file you can do: (It will print the filename if empty)
find "$PWD" -maxdepth 1 -type f -name 'file' -size 0 -print
For multiple files matching the glob glob*: (it will print the filenames if empty)
find "$PWD" -maxdepth 1 -type f -name 'glob*' -size 0 -print
To allow subdirectories:
find "$PWD" -type f -name 'glob*' -size 0 -print
Some find implementations do not require a directory as the first parameter (some do, like the Solaris one). On most implementations the -print parameter can be omitted; if it is not specified, find defaults to printing matching files.
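If "empty" should also cover files that contain only blank lines or other whitespace, a grep-based check works as well (a small sketch: grep succeeds as soon as it sees any non-whitespace character, so the negation means the file holds no real data):
fn="file"
if [ -f "$fn" ] && ! grep -q '[^[:space:]]' "$fn"; then
    echo "File '$fn' contains no data (it is empty or whitespace-only)"
fi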

How to loop through a directory recursively to delete files with certain extensions

I need to loop through a directory recursively and remove all files with the extensions .pdf and .doc. I'm managing to loop through a directory recursively but not managing to filter the files by those file extensions.
My code so far
#!/bin/sh
SEARCH_FOLDER="/tmp/*"
for f in $SEARCH_FOLDER
do
    if [ -d "$f" ]
    then
        for ff in $f/*
        do
            echo "Processing $ff"
        done
    else
        echo "Processing file $f"
    fi
done
I need help to complete the code, since I'm not getting anywhere.
As a followup to mouviciel's answer, you could also do this as a for loop, instead of using xargs. I often find xargs cumbersome, especially if I need to do something more complicated in each iteration.
for f in $(find /tmp -name '*.pdf' -or -name '*.doc'); do rm $f; done
As a number of people have commented, this will fail if there are spaces in filenames. You can work around this by temporarily setting the IFS (internal field separator) to the newline character. This also fails if there are wildcard characters [?* in the file names. You can work around that by temporarily disabling wildcard expansion (globbing).
IFS=$'\n'; set -f
for f in $(find /tmp -name '*.pdf' -or -name '*.doc'); do rm "$f"; done
unset IFS; set +f
If you have newlines in your filenames, then that won't work either. You're better off with an xargs based solution:
find /tmp \( -name '*.pdf' -or -name '*.doc' \) -print0 | xargs -0 rm
(The escaped parentheses are required here to have the -print0 apply to both -name clauses.)
GNU and *BSD find also has a -delete action, which would look like this:
find /tmp \( -name '*.pdf' -or -name '*.doc' \) -delete
find is just made for that.
find /tmp -name '*.pdf' -or -name '*.doc' | xargs rm
Without find:
for f in /tmp/* /tmp/**/* ; do
    ...
done;
/tmp/* are the files in the directory and /tmp/**/* are the files in subfolders. It is possible that you have to enable the globstar option (shopt -s globstar).
So for the question the code should look like this:
shopt -s globstar
for f in /tmp/*.pdf /tmp/*.doc /tmp/**/*.pdf /tmp/**/*.doc ; do
    rm "$f"
done
Note that this requires bash ≥4.0 (or zsh without shopt -s globstar, or ksh with set -o globstar instead of shopt -s globstar). Furthermore, in bash <4.3, this traverses symbolic links to directories as well as directories, which is usually not desirable.
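One extra caveat (not in the original answer): if one of these patterns matches nothing, bash passes it through literally and rm complains about a file named like the pattern; nullglob avoids that. A sketch:
shopt -s globstar nullglob
for f in /tmp/*.pdf /tmp/*.doc /tmp/**/*.pdf /tmp/**/*.doc ; do
    rm -f -- "$f"   # -f also tolerates the same file appearing under two of the patterns
done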
If you want to do something recursively, I suggest you use recursion (yes, you can do it using stacks and so on, but hey).
recursiverm() {
    for d in *; do
        if [ -d "$d" ]; then
            (cd -- "$d" && recursiverm)
        fi
        rm -f *.pdf
        rm -f *.doc
    done
}

(cd /tmp; recursiverm)
That said, find is probably a better choice as has already been suggested.
Here is an example using shell (bash):
#!/bin/bash
# loop through & print a folder recursively
print_folder_recurse() {
    for i in "$1"/*; do
        if [ -d "$i" ]; then
            echo "dir: $i"
            print_folder_recurse "$i"
        elif [ -f "$i" ]; then
            echo "file: $i"
        fi
    done
}

# try to get the path from the first parameter
path=""
if [ -d "$1" ]; then
    path=$1;
else
    path="/tmp"
fi

echo "base path: $path"
print_folder_recurse "$path"
This doesn't answer your question directly, but you can solve your problem with a one-liner:
find /tmp \( -name "*.pdf" -o -name "*.doc" \) -type f -exec rm {} +
Some versions of find (GNU, BSD) have a -delete action which you can use instead of calling rm:
find /tmp \( -name "*.pdf" -o -name "*.doc" \) -type f -delete
For bash (since version 4.0):
shopt -s globstar nullglob dotglob
echo **/*".ext"
That's all.
The trailing extension ".ext" is there to select files (or dirs) with that extension.
Option globstar activates the ** (search recursively).
Option nullglob removes a pattern that matches no file/dir.
Option dotglob includes files that start with a dot (hidden files).
Beware that before bash 4.3, **/ also traverses symbolic links to directories which is not desirable.
This method handles spaces well.
files="$(find -L "$dir" -type f)"
echo "Count: $(echo -n "$files" | wc -l)"
echo "$files" | while read file; do
echo "$file"
done
Edit, fixes off-by-one
function count() {
    files="$(find -L "$1" -type f)";
    if [[ "$files" == "" ]]; then
        echo "No files";
        return 0;
    fi
    file_count=$(echo "$files" | wc -l)
    echo "Count: $file_count"
    echo "$files" | while read file; do
        echo "$file"
    done
}
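This still breaks if a filename contains a newline; a find -print0 / read -d '' loop (a bash-specific sketch, using the same directory variable as above) avoids that:
count=0
while IFS= read -r -d '' file; do
    printf '%s\n' "$file"
    count=$((count + 1))
done < <(find -L "$dir" -type f -print0)
echo "Count: $count"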
This is the simplest way I know to do this:
rm **/@(*.doc|*.pdf)
** makes this work recursively
@(*.doc|*.pdf) looks for a file ending in pdf OR doc
Easy to safely test by replacing rm with ls
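One caveat worth adding (an assumption that this is meant for bash): both ** and the @( ) pattern rely on shell options that are off by default in scripts, so a usable version looks roughly like:
shopt -s globstar extglob
ls **/@(*.doc|*.pdf)    # preview the matches first, as suggested above
rm **/@(*.doc|*.pdf)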
The following function recursively iterates through all the directories in the /home/ubuntu directory (the whole directory structure under ubuntu) and applies the necessary checks in the else block.
function check {
    for file in "$1"/*
    do
        if [ -d "$file" ]
        then
            check "$file"
        else
            ## check for the file
            if [ "$(head -c 4 "$file")" = "%PDF" ]; then
                rm -r "$file"
            fi
        fi
    done
}

domain=/home/ubuntu
check "$domain"
There is no reason to pipe the output of find into another utility. find has a -delete flag built into it.
find /tmp \( -name '*.pdf' -or -name '*.doc' \) -delete
The other answers provided will not include files or directories that start with a dot. The following worked for me:
#!/bin/sh
getAll()
{
    local fl1="$1"/*;
    local fl2="$1"/.[!.]*;
    local fl3="$1"/..?*;
    for inpath in "$1"/* "$1"/.[!.]* "$1"/..?*; do
        if [ "$inpath" != "$fl1" -a "$inpath" != "$fl2" -a "$inpath" != "$fl3" ]; then
            stat --printf="%F\0%n\0\n" -- "$inpath";
            if [ -d "$inpath" ]; then
                getAll "$inpath"
            #elif [ -f $inpath ]; then
            fi;
        fi;
    done;
}
I think the most straightforward solution is to use recursion. In the following example, I print all the file names in the directory and its subdirectories.
You can modify it according to your needs.
#!/bin/bash
printAll() {
    for i in "$1"/*; do        # for all in the root
        if [ -f "$i" ]; then   # if a file exists
            echo "$i"          # print the file name
        elif [ -d "$i" ]; then # if a directory exists
            printAll "$i"      # call printAll inside it (recursion)
        fi
    done
}
printAll "$1" # e.g.: ./printAll.sh .
OUTPUT:
> ./printAll.sh .
./demoDir/4
./demoDir/mo st/1
./demoDir/m2/1557/5
./demoDir/Me/nna/7
./TEST
It works fine with spaces as well!
Note:
You can use echo "$(basename "$i")" to print the file name without its path.
OR: Use echo "${i##*/}", which is much faster because it does not call the external basename.
Just do
find . -name '*.pdf'|xargs rm
If you can change the shell used to run the command, you can use ZSH to do the job.
#!/usr/bin/zsh
for file in /tmp/**/*
do
    echo $file
done
This will recursively loop through all files/folders.
The following will loop through the given directory and list the contents of each subdirectory:
for d in /home/ubuntu/*;
do
    echo "listing contents of dir: $d";
    ls -l "$d"/;
done
