UNIX shell scripting if and grep command - bash

Currently I'm working on this code:
egrep '("$1"|"$2")' Cities.txt > test.txt
if [ $# -eq 1] && grep -q "$1" test.txt ; then
grep $1 Cities.txt
elif [ $# -eq 2 ] && egrep -q '("$1"|"$2")' test.txt ; then
egrep '("$1"|"$2")' Cities.txt > $2.txt
else $1 not found on Cities.txt
fi
exit
Basically, it lets the user enter 1 or 2 arguments; the argument(s) are used as grep patterns against Cities.txt and the output is redirected to a file named test.txt.
If the user entered 1 argument and the argument matches the content of test.txt, then it displays the lines that contain argument 1 in the file Cities.txt.
If the user entered 2 arguments and both arguments match the content of the file test.txt, then it matches both arguments in Cities.txt and redirects the output to a file named after the user's second argument.
I can't seem to get the code to work; maybe some of you could help me spot the error.
Thanks

egrep "($1|$2)" Cities.txt > test.txt # change single quote to double quote
if [ $# -eq 1 ] && grep -q -- "$1" test.txt ; then
grep -- "$1" Cities.txt
elif [ $# -eq 2 ] && egrep -q -- "($1|$2)" test.txt ; then
egrep -- "($1|$2)" Cities.txt > $2.txt
else
$1 not found on Cities.txt
fi
This greatly changes the semantics, but I believe it is what you are trying to do. I've added -- to make this slightly more robust, but it will still fail if either argument contains regex metacharacters. In that case you could try:
if test $# -eq 1 && grep -F -q -e "$1" test.txt ; then
grep -F -e "$1" Cities.txt
elif [ $# -eq 2 ] && grep -q -F -e "$1" -e "$2" test.txt; then
grep -F -e "$1" -e "$2" Cities.txt > $2.txt
else
echo "$1 not found in Cities.txt" >&2
fi
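For reference, here is how either version might be invoked, assuming the script is saved under a hypothetical name such as citysearch.sh and made executable (and, for the second version, that the initial egrep line creating test.txt is kept at the top):
chmod +x citysearch.sh
./citysearch.sh Paris          # one argument: prints the matching lines from Cities.txt
./citysearch.sh Paris London   # two arguments: lines matching either city go to London.txt
With two arguments nothing is printed on success; the matches end up in the file named after the second argument.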

Related

Unix write to file creating new line only

I have the following code
#!/bin/bash
output= cat $1 | sed s/$2/$3/
if [ -f "$1" ]
then
echo $output > "$1"
echo "Done"
fi
Arguments:
1 is the file
2 is the old word
3 is the new word to replace it with
The file permission is 777, and for some reason the code replaces the current file with just a newline. Any possible reason for this issue?
Try:
#!/bin/bash
output=`cat "$1" | sed "s/$2/$3/"`
if [ -f "$1" ]
then
echo "$output" > "$1"
echo "Done"
fi
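Note (my addition, not part of the original answer): capturing the whole file in a shell variable and echoing it back is fragile, since command substitution strips trailing newlines and unquoted expansion collapses whitespace. A sketch of an alternative, assuming GNU sed is available, is to let sed edit the file in place:
#!/bin/bash
# Hedged sketch: substitute $2 with $3 directly in the file given as $1,
# assuming GNU sed (BSD sed needs -i '' instead of -i).
if [ -f "$1" ]; then
sed -i "s/$2/$3/" "$1"
echo "Done"
fi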

Bash - sometimes creates only empty output

I am trying to create a bash dictionary script that accepts a first argument and creates a file named after it; the script then accepts the remaining arguments (files in the same folder) and writes their content into the output file (the first argument). It also sorts, deletes symbols, etc. The main problem is that sometimes the output file is empty (I am passing one non-empty file and one non-existing file); after deleting it and running the script a few more times, it is sometimes empty and sometimes not.
#!/bin/bash
numberoffileargs=$(( $# - 1 ))
exitstat=0
counterexit=0
acceptingstdin=0;
> "$1";
#check if we have given input files given
if [ "$#" -gt 1 ]; then
#for cycle going through input files
for i in "${#:2}"
do
#check whether input file is readable
if [ -r "${i}" ]; then
cat "${i}" >> "$1"
#else redirect to standard output
else
exitstat=2
counterexit=$((counterexit + 1))
echo "file does not exist" 1>&2
fi
done
else
echo "stdin code to be done"
acceptingstdin=1
#stdin input to output file
#stdin=$(cat)
fi
#one word for each line, alphabetical sort, alphabet only, remove duplicates
#all lowercase
#sort -u >> "$1"
if [ "$counterexit" -eq "$numberoffileargs" ] && [ "$acceptingstdin" -eq 0 ]; then
exitstat=3
fi
cat "$1" | sed -r 's/[^a-zA-Z\-]+/ /g' | tr A-Z a-z | tr ' ' '\n' | sort -u | sed '/^$/d' > "$1"
echo "$numberoffileargs"
echo "$counterexit"
echo "$exitstat"
exit $exitstat
Here is your script with some syntax improvements. Your trouble came from the fact that the dictionary was both the input and the output of your pipeline; I added a temp file to fix it.
#!/bin/bash
(($# >= 1)) || { echo "Usage: $0 dictionary file ..." >&2 ; exit 1;}
dict="$1"
shift
echo "Creating $dict ..."
>| "$dict" || { echo "Failed." >&2 ; exit 1;}
numberoffileargs=$#
exitstat=0
counterexit=0
acceptingstdin=0
if (($# > 0)); then
for i ; do
#check whether input file is readable
if [ -r "${i}" ]; then
cat "${i}" >> "$dict"
else
exitstat=2
let counterexit++
echo "file does not exist" >&2
fi
done
else
echo "stdin code to be done"
acceptingstdin=1
fi
if ((counterexit == numberoffileargs && acceptingstdin == 0)); then
exitstat=3
fi
sed -r 's/[^a-zA-Z\-]+/ /g' < "$dict" | tr '[:upper:]' '[:lower:]' | tr ' ' '\n' |
sort -u | sed '/^$/d' >| tmp$$
mv -f tmp$$ "$dict"
echo "$numberoffileargs"
echo "$counterexit"
echo "$exitstat"
exit $exitstat
The pipeline might be improved.
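For example (a sketch I'm adding, not from the original answer), tr can do the word splitting and lowercasing in fewer steps, assuming the goal is one lowercase word per line, letters and hyphens only, duplicates and empty lines removed:
# -c complements the letter/hyphen set, -s squeezes runs of delimiters into one newline
tr -cs 'a-zA-Z-' '\n' < "$dict" | tr '[:upper:]' '[:lower:]' | sed '/^$/d' | sort -u >| tmp$$
mv -f tmp$$ "$dict"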

Grep inside bash script not finding item

I have a script which checks a key in one file against a key in another to see if it exists in both. However, inside the script the grep never reports that anything has been found, whereas on the command line it does.
#!/bin/bash
# First arg is the csv file of repo keys separated by line and in
# this manner 'customername,REPOKEY'
# Second arg is the log file to search through
log_file=$2
csv_file=$1
while read line;
do
customer=`echo "$line" | cut -d ',' -f 1`
repo_key=`echo "$line" | cut -d ',' -f 2`
if [ `grep "$repo_key" $log_file` ]; then
echo "1"
else
echo "0"
fi
done < $csv_file
The CSV file is formatted as follows:
customername,REPOKEY
and the log file is as follows:
REPOKEY
REPOKEY
REPOKEY
etc
I call the script by doing ./script csvfile.csv logfile.txt
Rather than checking the output of the grep command, use grep -q to check its return status:
if grep -q "$repo_key" "$log_file"; then
echo "1"
else
echo "0"
fi
Also your script can be simplified to:
log_file=$2
csv_file=$1
while IFS=, read -r customer repo_key; do
if grep -q "$repo_key" "$log_file"; then
echo "1"
else
echo "0"
fi
done < "$csv_file"
Use the exit status of the grep command to print 1 or 0:
repo_key=`echo "$line" | cut -d ',' -f 2`
grep -q "$repo_key" $log_file
if [ $? -eq 1 ]; then
echo "1"
else
echo "0"
fi
-q suppresses the output so that nothing is printed.
$? is the exit status of the grep command: 0 on a successful match and 1 when no match is found.
You can have a much simpler version as
grep -q "$repo_key" "$log_file"
echo $?
but note that this prints the exit status itself, i.e. 0 on a match and 1 otherwise, which is the inverse of the output above.
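If you want the same 1-means-found output on a single line, one option (my addition) is to negate the exit status arithmetically:
grep -q "$repo_key" "$log_file"; echo $(( ! $? ))
$(( ! $? )) evaluates to 1 when grep exits 0 (a match) and to 0 otherwise.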

If or while loop inside case command positional parameters

Being relatively new to anything other than bash scripting, I have created a script to
check if a process is running
output PIDs to the shell
if not, prompt for user input and start it, etc.
I've moved on to positional parameters and can't see where I'm going wrong:
if [ "$1" == "" ]; then
proc_finder
elif [ $1 != "" ];then
case $1 in
-p | --process )
shift
z=$(ps aux |grep $1 |grep -v grep > /dev/null)
if [ ! -z "$z" ]; then
echo "YES"
else
echo "NO"
fi
;;
* )
echo "Usage -p (process)"
esac
fi
This always seems to return YES, even when putting in -p test for example. I know I'm doing something fundamentally wrong; looking at the verbose output, the grep -v grep is being run last, hence I believe it always returns an exit status of 0.
Shouldn't that be if [ $? -eq 0 ]?
EDIT 1
You can try this:
z=`ps aux | grep $1 | grep -v grep > /dev/null`
if [ ! -z "$z" ]; then
echo "YES"
else
echo "NO"
fi
If $z is not empty (-z tests for a zero-length string), this implies the process was found by the ps command.
EDIT 2
The output of ps ... grep ... grep is being redirected to /dev/null. That means z will contain nothing. Remove the redirection and z should have some output.
z=`ps aux | grep $1 | grep -v grep`
EDIT 3
Alternatively, you can just do this:
ps aux | grep $1 | grep -v grep > /dev/null 2>&1
if [ $? -eq 0 ]; then
echo "YES"
else
echo "NO"
fi
In this case, you are not saving the grep output. That's good if you don't really need it.
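As an aside (not part of the original answers), pgrep avoids the grep -v grep dance entirely. A minimal sketch, assuming the argument passed after -p is a process name:
if pgrep -x "$1" > /dev/null; then
echo "YES"
else
echo "NO"
fi
-x asks for an exact match on the process name; drop it to get the looser substring-style matching that ps | grep gives.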

Find lines containing all keywords in bash script

Essentially, I would like something that behaves similarly to:
cat file | grep -i keyword1 | grep -i keyword2 | grep -i keyword3
How can I do this with a bash script that takes a variable-length list of keyword arguments? The script should do a case-insensitive match of lines containing all keywords.
Use this as a script (note that IGNORECASE is a gawk extension):
#! /bin/bash
awk -v IGNORECASE=1 -f <(
P=; for k; do [ -z "$P" ] && P="/$k/" || P="$P&&/$k/"; done
echo "$P{print}"
)
and invoke it as
script.sh keyword1 keyword2 keyword3 < file
I don't know if this is efficient, and I think it is ugly; there might also be some utility for that, but:
#!/bin/bash
unset keywords matchlist
keywords=("$#")
for kw in "${keywords[#]}"; do
matchlist="$matchlist /$kw/ &&"
done
matchlist="${matchlist% &&}"
# awk "$matchlist { print; }" < <(tr '[:upper:]' '[:lower:]' <file)
awk "$matchlist { print; }" file
And yes, it needs some robustness regarding special characters and stuff. It's just to show the idea.
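One way to get that robustness (a sketch added here, not part of the original answer) is to avoid interpolating the keywords into the awk program at all: pass them as arguments and compare them as literal, lowercased substrings with index(), so regex metacharacters in a keyword are harmless:
#!/bin/bash
awk '
BEGIN {
    n = ARGC - 2    # the last argument is the input file; the rest are keywords
    for (i = 1; i <= n; i++) { kw[i] = tolower(ARGV[i]); ARGV[i] = "" }   # blank them so awk does not read them as files
}
{
    line = tolower($0)
    for (i = 1; i <= n; i++) if (index(line, kw[i]) == 0) next
    print
}' "$@" file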
Give this a try:
shopt -s nocasematch
keywords="keyword1|keyword2|keyword3"
while read line; do [[ $line =~ $keywords ]] && echo $line; done < file
Edit:
Here's a version that tests for all keywords being present, not just any:
keywords=(keyword1 keyword2 keyword3) # or keywords=("$@")
qty=${#keywords[@]}
while read line
do
count=0
for keyword in "${keywords[#]}"
do
[[ "$line" =~ $keyword ]] && (( count++ ))
done
if (( count == qty ))
then
echo $line
fi
done < textlines
Found a way to do this with grep.
KEYWORDS="$@"
MATCH_EXPR="cat file"
for keyword in ${KEYWORDS};
do
MATCH_EXPR="${MATCH_EXPR} | grep -i ${keyword}"
done
eval ${MATCH_EXPR}
You can use bash 4.0+:
shopt -s nocasematch
while read -r line
do
f=0; g=0
case "$line" in
*keyword1*) f=1;;&
*keyword2*) g=1;;&
*keyword3*)
[ "$f" -eq 1 ] && [ "$g" -eq 1 ] && echo $line;;
esac
done < "file"
shopt -u nocasematch
or gawk
gawk '/keyword1/&&/keyword2/&&/keyword3/' file
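Since the question asks for a case-insensitive match, a variant of that one-liner (my addition, relying on gawk's IGNORECASE extension) is:
gawk -v IGNORECASE=1 '/keyword1/&&/keyword2/&&/keyword3/' file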
I'd do it in Perl.
For finding all lines that contain at least one of them:
perl -ne'print if /(keyword1|keyword2|keyword3)/i' file
For finding all lines that contain all of them:
perl -ne'print if /keyword1/i && /keyword2/i && /keyword3/i' file
Here is a script called search.sh in bash that will search lines within a file or folder for all keywords specified:
#!/bin/bash
if [ $# -lt 2 ]; then
echo "[-] $0 file_to_search/folder_to_search keyword1 keyword2 keyword3 ..."
exit
fi
all_args="$#"
i=0
results="" # this will store the cumulative results from each keyword search
for arg in $all_args; do
if [ $i -eq 0 ]; then
# first argument is the file/folder to search
file_to_search="$arg"
i=$(($i + 1))
elif [ $i -eq 1 ]; then
# search the file/folder with first keyword (first search)
results=`grep --color=always -r -n -i "$arg" "$file_to_search"`
i=$(($i + 1))
else
# now keep searching the results from first search for other keywords
results=`echo "$results" | grep --color=always -i "$arg"`
i=$(($i + 1))
fi
done
echo "$results"
An example invocation of the script above will search the 'tools.txt' file for the 'python' and 'jira' keywords:
./search.sh tools.txt python jira
