Bash Function Components Working Independently but Not Together - bash

I have a bash script that has this function in it:
function start_vi()
{
echo "Please enter a file name with complete path to open in vi:"
read input_file
if [ -d "$input_file" ]
then
echo "You entered a directory."
echo "Please try again and enter a readable/writable file."
fi
grep_var="file $input_file | grep -c data"
if [ $? -eq 0 ]
then
vi $input_file
else
echo "File not found or invalid file type. Please try again."
fi
}
For the most part the function works fine. My problem is that the two if statements work fine independently; e.g., if I comment out one of them, the remaining test works and does what I want. But together, as written, when I type in a directory at the prompt, vi opens it as a file, whereas the test should echo that it's a directory, as it does when running alone.
Any ideas on why this is? I'm still relatively new at bash scripting so it is probably easy for the pros, but I have been banging my head against the wall for a while now.
Thanks in advance.

Add a return statement in the first if/then:
function start_vi()
{
echo "Please enter a file name with complete path to open in vi:"
read input_file
if [ -d "$input_file" ]
then
echo "You entered a directory."
echo "Please try again and enter a readable/writable file."
return
fi
grep_var="file $input_file | grep -c data"
if [ $? -eq 0 ]
then
vi $input_file
else
echo "File not found or invalid file type. Please try again."
fi
}
Otherwise, it will print the message and then open the file anyway. Separately, your second test should look like this:
file $input_file | grep -c data
if [ $? -eq 0 ]
$? is the exit code of the last command that ran. A plain assignment to a variable (i.e. grep_var="...") just stores a string and sets $? to 0. What you seem to have wanted is the exit code of grep -c data; in that case, use backticks ` instead of quotes " so the command actually runs, as below. Alternatively, you can write it like this:
grep_var=`file $input_file | grep -c data`
if [ $grep_var != 0 ]
to compare the string value (i.e. what grep -c data returns - the count of data lines).
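For example, a minimal sketch of the difference (using /etc/passwd purely as a stand-in path; any readable file will do):
grep_var="file /etc/passwd | grep -c data"   # plain assignment: nothing runs, $? becomes 0
echo $?                                      # 0, regardless of the file
grep_var=`file /etc/passwd | grep -c data`   # the pipeline actually runs
echo $?                                      # exit status of grep: 0 only if it found a match
echo "$grep_var"                             # the count printed by grep -c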
Doing some of the above should solve the problem.

You need a loop
function start_vi()
{
echo "Please enter a file name with complete path to open in vi:"
read input_file
while [ -d "$input_file" ]
do
echo "You entered a directory."
echo "Please try again and enter a readable/writable file."
read input_file
done
grep_var="file $input_file | grep -c data"
if [ $? -eq 0 ]
then
vi $input_file
else
echo "File not found or invalid file type. Please try again."
fi
}

All you need is a loop:
....
read input_file
while [ ! -f "$input_file" ]
do
echo "You did not enter a file"
echo "Please try again and enter a readable/writable file."
read input_file
done
grep_var="file $input_file | grep -c data"
if [ $? -eq 0 ]
then
vi $input_file
else
echo "File not found or invalid file type. Please try again."
fi

I think this may be closer to what you want to do..
function start_vi()
{
echo "Please enter a file name with complete path to open in vi:"
read input_file
grep_var=`file $input_file 2>&1 | grep -c data`
while [ $? -ne 0 ]
do
echo "File not found or invalid file type. Please try again."
read input_file
grep_var=`file $input_file 2>&1 | grep -c data`
done
vi $input_file
}

Related

I am having a hard time understanding why the if statements don't work in a shell script

I am having a hard time understanding why, when I type in the most random names, they are accepted as a directory, and when an if statement checks whether it's a readable file, it says yes for everything I type in. The goal here is to search for a directory and check that it is a directory, then search the directory for a file, then search that file for a word, using for loops. The while loop is there to ask three times for the file name. It's a little bit rough; I just need an explanation for why the if statements aren't working.
#!/bin/sh
DIR='/home/collin2/'
x=1
echo "Please enter directory"
read directory
for directory in "$DIR";
do
if [ -d "$directory" ];
then echo "This is a directory Please enter the file name"
read filename
while [ $x -le 3 ]; do
for filename in "$directory";
do
if [ -r "$filename" ]
then echo "The filename is readable"
echo "Please Enter a word "
read word
grep "$word" "$filename"
exit 1
fi
done
echo "Doesn't exist please try again"
read filename
x=`expr $x + 1`
done
#exit 1
fi
done
echo "not a directory"
exit 0
Your for commands are wrong:
When you write for directory in "$DIR";, it sets the value of the variable directory to "$DIR". What you wanted was to check that directory can be found inside "$DIR", and that can be done without a for command:
cd "$DIR" || { echo "Can not go to $DIR"; exit 1; }
test -d "${directory}" || { echo "Wrong directory ${directory}"; exit 1; }
# or
test -d "$DIR/${directory}" || { echo "Can not go to $DIR"; exit 1; }
The same problem applies to the other for loop.
The test if [ -r "$filename" ] should be done after cd "$DIR/${directory}" or include the complete path.
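Putting those points together, one possible rework of the script (a sketch only, keeping the original prompts, paths, and the three-attempt limit) might look like this:
#!/bin/sh
DIR='/home/collin2/'
echo "Please enter directory"
read directory
# test the entry under $DIR instead of looping over "$DIR" itself
test -d "$DIR/$directory" || { echo "not a directory"; exit 1; }
x=1
while [ $x -le 3 ]; do
echo "Please enter the file name"
read filename
# include the complete path when testing readability
if [ -r "$DIR/$directory/$filename" ]; then
echo "The filename is readable"
echo "Please enter a word"
read word
grep "$word" "$DIR/$directory/$filename"
exit 0
fi
echo "Doesn't exist, please try again"
x=`expr $x + 1`
done
exit 1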

bash unary operator expected error

This is the script I came up with
#!/bin/bash
expression=$1
field=$2
if [ -z "$expression" ]; then
echo "expression is missing"
exit 1
fi
if [ -f /home/miked/table ]; then
if [ -f table ] && [ grep "$expression table" ]; then
grep "$expression" table | cut -d: -f${2:-3} | clipit
else
echo "no match found"
fi
else
echo "there is no table file"
fi
As a matter of fact I know how to fix it, but I don't know why the fix works.
If I remove the space between grep and ", everything works fine; I just can't seem to understand why.
If I run grep something file directly on the command line, it works fine. Why do I have to stick the grep right against the " in the script?
You do not need to wrap the grep call inside square brackets. [ is an alias for the test command (in general; most shells provide it as a builtin). You can see the syntax of that command with man test. What you want to do is check whether the expression occurs in the table file, so instead you need to write it as:
#!/bin/bash
expression="$1"
field="$2"
if [ -z "$expression" ]; then
echo "expression is missing"
exit 1
fi
if [ -f /home/miked/table ]; then
if [ -f table ] && grep "$expression table"; then
grep "$expression" table | cut -d: -f${2:-3} | clipit
else
echo "no match found"
fi
else
echo "there is no table file"
fi
However, there are more problems with your script.
You print errors to stdout instead of stderr, which makes them invisible when piping the output of your script to other tools; instead you should use echo "error" >&2, ideally in a separate function.
You pass only one argument to grep, and there should be two: grep "$expression" table.
Your first grep call also prints to stdout, and you probably want to suppress that, so use the -q flag.
It is a good idea to enable "exit on error" with set -e and "exit on pipe error" with set -o pipefail.
You check the outer /home/miked/table file but never use it, so you can just remove that check.
You do not use your $field variable.
Use guard clauses instead of nested ifs for fatal error checking, as it will make the script easier to refactor.
So the whole file could be written as:
#!/bin/bash
set -eo pipefail
perror() {
echo "$1" >&2 && exit 1
}
expression=$1
field=${2:-3}
file=${3:-table}
[ -n "$expression" ] || perror "expression is missing"
[ -f "$file" ] || perror "there is no '$file' file"
grep "$expression" "$file" | cut -d: -f"${field}" | clipit || perror "no match found"
You don't need [ ] at all here. You're interested in the exit status of grep and it gives you one already; to suppress its output, you can use the -q option:
if [ -f table ] && grep -q "$expression" table; then
The filename should not be within the quotes, either.
If you use [ ] without any test operator, it defaults to -n: "true if string is not empty". This form expects a single argument, which is why it seems to work when you remove the space: it just checks whether the string that grep"$expression table" expands to is non-empty, and it always is.
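A quick way to see this behaviour in a shell (the strings are arbitrary examples):
[ "grepfoo table" ] && echo "one non-empty argument: true"
[ "" ] || echo "one empty argument: false"
[ -n "grepfoo table" ] && echo "equivalent to the implicit -n test"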
There are quite a few things that could be done to fix this; your main problem is the following line:
if [ -f table ] && [ grep "$expression table" ]; then
You've already tested that "table" exists, so you're testing it again, and once that succeeds, the expression [ grep "$expression table" ] is evaluated, which breaks down to '[' grep 'expression table' ']', which means essentially nothing.
You should instead use $() and evaluate the number of occurrences, or, as Benjamin mentions, skip the test altogether if that's what you want.
I would suggest something like this
#!/bin/bash
expression=$1
field=$2
table_file=/home/miked/table
if [ -z "$expression" ]; then
echo "expression is missing"
exit 1
fi
if [ -f $table_file ]; then
if [ $(grep -c "$expression" "$table_file") -gt 0 ]; then
grep "$expression" $table_file | cut -d: -f${2:-3} | clipit
else
echo "no match found"
fi
else
echo "there is no table file"
fi
Notice how we're still using a test; this could be condensed to:
#!/bin/bash
expression=$1
field=$2
table_file=/home/miked/table
if [ -z "$expression" ]; then
echo "expression is missing"
exit 1
fi
if [ -f $table_file ]; then
grep -q "$expression" "$table_file" && grep "$expression" "$table_file" | cut -d: -f${2:-3} | clipit || echo "no match found"
else
echo "there is no table file"
fi

Checking input with an if statement (bash)

I am trying to write a file that mimics the cat -n bash command. It is supposed to accept a filename as input and if no input is given print a usage statement.
This is what I have so far but I am not sure what I am doing wrong:
#!/bin/bash
echo "OK"
read filename
if [ $filename -eq 0 ]
then
echo "Usage: cat-n.sh file"
else
cat -n $filename
fi
I suggest using -z to check for an empty variable $filename:
if [ -z "$filename" ]
See: help test
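Applied to the script from the question, a minimal sketch might look like this:
#!/bin/bash
echo "OK"
read filename
if [ -z "$filename" ]    # true when nothing was entered
then
echo "Usage: cat-n.sh file"
else
cat -n "$filename"
fi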

if [ `ls $op_file` ]; then is returning true even the file is not found

#!/bin/bash
echo "Enter the o/p file name"
read op_file
echo "Enter the count"
read count
echo "OP filename : "
echo $op_file
if [ `ls $op_file` ]; then
echo "O/P file found"
else
exit 0
fi
I'm trying to check whether the file exists or not, and I have to proceed only if it exists. The above code doesn't give me an error, but it prints "O/P file found" even though the file is not found by ls.
Your script can be refactored to this:
#!/bin/bash
read -p "Enter the o/p file name" op_file
read -p "Enter the count" count
echo "OP filename: $op_file"
if [ -f "$op_file" ]; then
echo "O/P file found"
else
exit 1
fi
It is better to use the -f "$file" test to check that a regular file exists. Also see man test.
Use the -e operator in your if; this checks for the existence of a file, so:
if [ -e "$op_file" ]; then
echo "O/P file found"
else
exit 0
fi
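For reference, -e and -f test slightly different things; a small sketch with a hypothetical path:
path=/tmp/example    # hypothetical path, substitute your own
[ -e "$path" ] && echo "exists (regular file, directory, device, ...)"
[ -f "$path" ] && echo "exists and is a regular file"
[ -d "$path" ] && echo "exists and is a directory"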

How can I avoid multiple starting of a bash script?

I wrote a little bash script called "wp", which uploads files to an FTP server. It uses the wput utility. It takes the list of files from a text file. When an upload is finished, it comments out the corresponding line in the text file with a hash mark (#). The success of the upload is detected from the last line of the logfile. My question is: how can I avoid the script being started multiple times? I am trying to detect with pgrep whether an instance is already running, but it doesn't work correctly:
#!/bin/bash
if [ "$(pgrep ^wp$|wc -l)" -eq "2" ]
then
echo "$(pgrep ^wp$)"
echo "$(pgrep ^wp$|wc -l)"
echo "wp script is starting..."
else
echo "$(pgrep ^wp$)"
echo "$(pgrep ^wp$|wc -l)"
echo "wp script is already running!"
exit
fi
server="ftp://username:password#ftp.ftpserver.com"
logfile=~/uploads.log
listfile=~/uploads.txt
list_backup=~/uploads_bak000.txt
while read f;
do
ret=""
if [ "${f:0:1}" = "#" -o "$f"1 = 1 ]
then
if [ "$f"1 = 1 ]
then
:
#echo "invalid string: "$f
else
#first character is remark sign # then empty command -> :
echo "remark line skipped: "$f
fi
else
#while string $ret is empty
while [ -z "$ret" ]
do
wput "$f" --tries=-1 "$server" 2>&1|tee -a $logfile #> /dev/null
ret=$(tail -n 1 "$logfile"|grep "FINISHED\|Nothing\|Skipped\|Transfered")
done
if [ -n "$ret" ]
then
cat $listfile > $list_backup
awk -v f="$f" '{if ($0==f && $0!~/#/) print "#" $0; else print $0;}' $list_backup > $listfile
fi
fi
done < $listfile
There are quick-n-dirty solutions that use ps with grep (don't do this).
It is better to use a lock file as a "mutex". A nice way of doing this is by using a directory as a lock file (http://mywiki.wooledge.org/BashFAQ/045).
I would also suggest taking a look at http://mywiki.wooledge.org/ProcessManagement#How_do_I_make_sure_only_one_copy_of_my_script_can_run_at_a_time.3F, which mentions setlock (http://cr.yp.to/daemontools/setlock.html), a tool that abstracts the lock file handling for you.
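A minimal sketch of the directory-as-lock idea from the BashFAQ link (the lock path is arbitrary and only illustrative):
#!/bin/bash
lockdir=/tmp/wp.lock    # arbitrary location, must be writable by the script's user

if mkdir "$lockdir" 2>/dev/null; then
# mkdir is atomic: only one process can create the directory
trap 'rmdir "$lockdir"' EXIT
echo "wp script is starting..."
# ... upload loop goes here ...
else
echo "wp script is already running!"
exit 1
fi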
