Why: Bash syntax error near unexpected token `fi' - bash

so the error I'm getting is syntax error near unexpected token `fi' on the second last 'fi'. Been scratching my head for a while about this. Any help is greatly appreciated! Thanks!
#!/bin/bash
TFILE=/tmp/scripts/pdsh_demo.tmp
if [ -f $TFILE ]; then
rm $TFILE
fi
/usr/bin/pdsh -R ssh -w host[0001-0200] 'command | grep -v "something"' >> $TFILE
if [ ! -s $TFILE ]; then
exit
fi
if [ -f $TFILE ]; then
if grep -q "something" $TFILE ; then
grep -i "something" $TFILE | mailx -r "test.server" -s "Critical: something" -a $TFILE "test#test.com"
fi
fi

Your if grep -q "something" $TFILE ; then line is actually fine: using a command's exit status directly as the if condition is valid shell, and wrapping it in [ $(...) ] would break it, because grep -q prints nothing, so the test would always be false.
Since the script as posted parses cleanly, the error is almost certainly caused by invisible characters in the file, most commonly carriage returns from editing the script on Windows. A then or fi followed by a stray \r is no longer recognized as a keyword, which typically produces exactly this kind of syntax error near fi.
Check for them with cat -A yourscript.sh (carriage returns show up as ^M) and strip them with dos2unix yourscript.sh or tr -d '\r'.
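A common cause of this exact error, independent of the grep line, is Windows (CRLF) line endings in the script. A minimal sketch of detecting and fixing them (the /tmp file names are invented for the demo):

```shell
# Invented demo: a one-line script saved with Windows (CRLF) line endings.
printf 'echo hello\r\n' > /tmp/crlf_demo.sh

# cat -A makes the stray carriage return visible as ^M at the end of the line.
cat -A /tmp/crlf_demo.sh

# Strip the \r characters (dos2unix does the same job, if installed).
tr -d '\r' < /tmp/crlf_demo.sh > /tmp/crlf_fixed.sh
bash /tmp/crlf_fixed.sh    # prints hello
```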

Related

UNIX shell scripting if and grep command

currently I'm working on a code :
egrep '("$1"|"$2")' Cities.txt > test.txt
if [ $# -eq 1] && grep -q "$1" test.txt ; then
grep $1 Cities.txt
elif [ $# -eq 2 ] && egrep -q '("$1"|"$2")' test.txt ; then
egrep '("$1"|"$2")' Cities.txt > $2.txt
else $1 not found on Cities.txt
fi
exit
Basically, it lets the user enter 1 or 2 arguments; the argument(s) are used as a grep pattern against Cities.txt and the output is redirected to a file named test.txt.
If the user entered 1 argument and it matches the content of test.txt, it displays the lines of Cities.txt that contain argument 1.
If the user entered 2 arguments and both match the content of test.txt, it greps for both arguments in Cities.txt and redirects the output to a file named after the user's second argument.
I can't seem to get the code to work; maybe some of you could help me spot the error.
thanks
egrep "($1|$2)" Cities.txt > test.txt # change single quote to double quote
if [ $# -eq 1 ] && grep -q -- "$1" test.txt ; then
grep -- "$1" Cities.txt
elif [ $# -eq 2 ] && egrep -q -- "($1|$2)" test.txt ; then
egrep -- "($1|$2)" Cities.txt > $2.txt
else
echo "$1 not found in Cities.txt" >&2
fi
This greatly changes the semantics, but I believe it is what you are trying to do. I've added -- to make this slightly more robust, but if either argument contains regex metacharacters this will still fail. In that case you could try:
if test $# -eq 1 && grep -F -q -e "$1" test.txt ; then
grep -F -e "$1" Cities.txt
elif [ $# -eq 2 ] && grep -q -F -e "$1" -e "$2" test.txt; then
grep -F -e "$1" -e "$2" Cities.txt > $2.txt
else
echo "$1 not found in Cities.txt" >&2
fi
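To see why the -F variant matters, here is a small self-contained sketch (the /tmp file names and the city data are made up):

```shell
# Invented data file: one city per line, one entry containing a literal dot.
printf 'St. Louis\nStockholm\nSalem\n' > /tmp/cities_demo.txt

# As a regex, the dot in 'St.' matches any character, so Stockholm matches too.
grep -c -- 'St.' /tmp/cities_demo.txt      # prints 2

# With -F the pattern is taken literally, so only "St. Louis" matches.
grep -c -F -e 'St.' /tmp/cities_demo.txt   # prints 1
```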

Grep inside bash script not finding item

I have a script which checks a key in one file against a key in another to see if it exists in both. However, inside the script the grep never reports a match, while on the command line it does.
#!/bin/bash
# First arg is the csv file of repo keys separated by line and in
# this manner 'customername,REPOKEY'
# Second arg is the log file to search through
log_file=$2
csv_file=$1
while read line;
do
customer=`echo "$line" | cut -d ',' -f 1`
repo_key=`echo "$line" | cut -d ',' -f 2`
if [ `grep "$repo_key" $log_file` ]; then
echo "1"
else
echo "0"
fi
done < $csv_file
The CSV file is formatted as follows:
customername,REPOKEY
and the log file is as follows:
REPOKEY
REPOKEY
REPOKEY
etc
I call the script by doing ./script csvfile.csv logfile.txt
Rather than checking the output of the grep command, use grep -q and check its return status:
if grep -q "$repo_key" "$log_file"; then
echo "1"
else
echo "0"
fi
Also your script can be simplified to:
log_file=$2
csv_file=$1
while IFS=, read -r customer repo_key; do
if grep -q "$repo_key" "$log_file"; then
echo "1"
else
echo "0"
fi
done < "$csv_file"
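A quick self-contained check that the IFS=, read form splits the CSV as intended (the demo file names and keys are invented):

```shell
# Invented demo CSV ('customer,REPOKEY' per line) and log file.
printf 'acme,KEY1\nglobex,KEY2\n' > /tmp/keys_demo.csv
printf 'KEY2\n' > /tmp/log_demo.txt

# IFS=, splits each line on the comma straight into the two variables.
while IFS=, read -r customer repo_key; do
    if grep -q "$repo_key" /tmp/log_demo.txt; then
        echo "$customer: 1"    # only globex's key is in the log
    else
        echo "$customer: 0"
    fi
done < /tmp/keys_demo.csv
```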
Use the exit status of the grep command to print 1 or 0:
repo_key=`echo "$line" | cut -d ',' -f 2`
grep -q "$repo_key" "$log_file"
if [ $? -eq 0 ]; then
echo "1"
else
echo "0"
fi
-q suppresses the output so that no matching text is printed.
$? is the exit status of the grep command: 0 on a successful match and 1 when nothing matches.
You can have a much simpler version as:
grep -q "$repo_key" "$log_file"
echo $?
but note that this prints 0 on a match and 1 on no match, the opposite of the script above.
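For reference, a tiny sketch of grep's exit codes (the demo file name is invented):

```shell
# Invented demo log containing one key.
printf 'REPOKEY\n' > /tmp/log_demo2.txt

grep -q REPOKEY /tmp/log_demo2.txt
echo $?    # prints 0: the pattern was found

grep -q MISSING /tmp/log_demo2.txt
echo $?    # prints 1: no match
```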

Unix shell - how to filter out files by number of lines?

I am trying to extract all files with a line count greater than x using the following code.
for i in massive*;
do
if [ wc -l $i | cut -d ' ' -f 1 > 50 ]; then
mv $i subset_massive_subcluster_num_gt50/;
fi;
done
However I am getting the following error every time it goes through the loop:
cut: ]: No such file or directory
-bash: [: missing `]'
Any ideas?
Change this:
for i in massive*;
do
if [ wc -l $i | cut -d ' ' -f 1 > 50 ]; then
mv $i subset_massive_subcluster_num_gt50/;
fi;
done
To this:
for i in massive*;
do
if [ "$(wc -l "$i" | cut -d ' ' -f 1)" -gt 50 ]; then
mv "$i" subset_massive_subcluster_num_gt50/;
fi;
done
Maybe you can try:
for file in massive*
do
[[ $(grep -c '' "$file") -gt 50 ]] && echo mv "$file" subset_massive_subcluster_num_gt50/
done
grep -c '' is a nicer (and safer) way to count lines than wc -l | cut. Note the numeric -gt: a bare > inside [[ ]] compares strings lexicographically, so "9" would test greater than "50".
The above is a dry run. Remove the echo once you are satisfied.
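A self-contained sketch of the line-counting comparison, using made-up file names:

```shell
# Invented 3-line demo file.
printf 'a\nb\nc\n' > /tmp/lines_demo.txt

# grep -c '' counts lines and prints a bare number, with no wc-style padding.
lines=$(grep -c '' /tmp/lines_demo.txt)
echo "$lines"    # prints 3

# Compare numerically with -gt; a bare > inside [[ ]] compares strings.
if [ "$lines" -gt 2 ]; then
    echo "more than 2 lines"
fi
```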

tail command throws errors when used in bash script

for file in swap_pricer swap_id_marks swaption_id_marks
do
if [ ! -e $file ] && [ "$context" == "INTRADAY" ]
then
cp -f $working_dir/brl/$file $file
else
tail -n 7 $working_dir/brl/$file >> $file
fi
echo "[`date +'%D %T'`] Removing file ${excel_txt_dir}/$file.txt"
if [ -e ${excel_txt_dir}/${file}.txt ]
then
rm -f ${excel_txt_dir}/${file}.txt
fi
cp -f ${file}.txt ${excel_txt_dir}/${file}.txt
cp -f ${file}.txt ${excel_txt_dir}/${file}_${naming_date}.txt
cp -f ${file}.txt ${file}_${naming_date}.txt
cp -f ${file}.txt ${excel_txt_dir}/${file}_${naming_date}_${price_time}.txt
done
The code above is part of a bash script which was copied from a csh script.
I am getting an error:
tail: cannot open input
Please help me to resolve the error.
In your script, the old-style tail +7 is being interpreted by modern tail as a file named +7, and it complains that it cannot open it.
The csh-era tail +7 meant "print from line 7 onward"; the modern equivalent is tail -n +7. Note the plus sign: tail -n 7 prints the last 7 lines instead, which is not the same thing.
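The difference between the two forms is easy to see on a made-up 10-line file:

```shell
# Invented 10-line input file.
seq 10 > /tmp/tail_demo.txt

# Last 7 lines of the file: prints 4 through 10.
tail -n 7 /tmp/tail_demo.txt

# From line 7 onward (the modern spelling of the old tail +7): prints 7 through 10.
tail -n +7 /tmp/tail_demo.txt
```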

Syntax error: “(” unexpected (expecting “fi”)

filein="users.csv"
IFS=$'\n'
if [ ! -f "$filein" ]
then
echo "Cannot find file $filein"
else
#...
groups=(`cut -d: -f 6 "$filein" | sed 's/ //'`)
fullnames=(`cut -d: -f 1 "$filein"`)
userid=(`cut -d: -f 2 "$filein"`)
usernames=(`cut -d: -f 1 "$filein" | tr [A-Z] [a-z] | awk '{print substr($1,1,1) $2}'`)
#...
for group in ${groups[*]}
do
grep -q "^$group" /etc/group ; let x=$?
if [ $x -eq 1 ]
then
groupadd "$group"
fi
done
#...
x=0
created=0
for user in ${usernames[*]}
do
useradd -n -c ${fullnames[$x]} -g "${groups[$x]}" $user 2> /dev/null
if [ $? -eq 0 ]
then
let created=$created+1
fi
#...
echo "${userid[$x]}" | passwd --stdin "$user" > /dev/null
#...
echo "Welcome! Your account has been created. Your username is $user and temporary
password is \"$password\" without the quotes." | mail -s "New Account for $user" -b root $user
x=$x+1
echo -n "..."
sleep .25
done
sleep .25
echo " "
echo "Complete. $created accounts have been created."
fi
The array assignments here do use command substitution — the backticks are the old-style equivalent of $( ... ), and the syntax is valid in bash. The "(" unexpected wording means the script is not being run by bash at all: that message comes from a POSIX shell such as dash, which does not support arrays. This happens when you run the script as sh script.sh, or when the shebang is #!/bin/sh on a system where sh is dash. Make sure the first line is #!/bin/bash and run it with bash script.sh or ./script.sh. While you are at it, prefer the modern form over backticks:
groups=( $( cut ... ) )
Note the extra set of parentheses with $ in front of the inner set.
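A minimal, self-contained sketch of filling an array from command substitution (the demo file is invented). Note that unquoted substitution splits on whitespace, so fields containing spaces would break into multiple elements; mapfile is safer for those.

```shell
# Invented colon-delimited demo file, shaped like the users.csv above.
printf 'wheel:Alice Smith:1001\nstaff:Bob Jones:1002\n' > /tmp/users_demo.csv

# $( ... ) inside ( ... ) builds an array, one element per whitespace-separated word.
groups=( $(cut -d: -f1 /tmp/users_demo.csv) )

echo "${#groups[@]}"    # prints 2
echo "${groups[0]}"     # prints wheel
```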
