How to read multiple lines in a while statement in ksh

I am creating a script to help me through my daily work and automate it. I ran into a problem when trying to feed multiple input lines into my while loop. I usually do this with a for loop that I run directly from the command line.
Sample:
for i in `cat listoffiles.txt`
do
echo $i
find <path> -name *$i* | awk -F "." {'print $4'} #to display a specific value
done
Now I am trying to automate it with a while loop, but I am having trouble reading multiple input lines in it.
For example:
I want to search for these inputs:
For
Example
only
Here is my script for it:
#!/bin/ksh
echo Please enter file #:
read Var1
while true
do
VarSession=`find $OT_DIR/archive*/ -name *$Var1* | awk -F "." {'print $4'}`
if [ "$VarSession" = "" ]
then
echo No match for File# $Var1 on this leg or is out of retention.
else
echo File# $Var1 is under Session# $VarSession
fi
done

VarSession=`find $OT_DIR/archive*/ -name *$Var1* | awk -F "." {'print $4'}`
Assuming that you provide 1 2 3 as input, the line above translates to this:
VarSession=`find $OT_DIR/archive*/ -name "1 2 3" | awk -F "." {'print $4'}`
But you want to search for each of those values separately, so you need another loop. A for loop serves the purpose of traversing white-space-separated entries.
Also, based on the original script you showed, I assume you want the script to report file by file rather than scanning whole directories at once. However, the statement above puts all of find's output into the variable without traversing it. To traverse the output line by line, a while loop does the job.
#!/bin/ksh
# -n switch suppresses printing a newline
echo -n 'Please enter file #: '
read Var1
# Traverse over all entered values in Var1 (separated by white space)
for i in $Var1
do
#Set a flag to zero, logic explained later
Flag=0
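# Note: in ksh the loop after this pipe runs in the current shell, so Flag set inside it is still visible after done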
find $OT_DIR/archive*/ -name "*$i*" | while read -r FileName
do
#Set the Flag to 1 if find command finds something
Flag=1
VarSession=`echo "$FileName" | awk -F "." '{print $4}'`
if [ "$VarSession" = "" ]
then
#If find found a file but VarSession has nothing then file name is not correct
echo "Some conventions went wrong in file name: $FileName"
else
echo "File# $Var1 is under Session# $VarSession"
fi
done
#If find found nothing, there was no match
if [ $Flag -eq 0 ]
then
echo "No match for File# $i on this leg or is out of retention."
fi
done
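If you still want a while loop that handles multiple input lines, here is a minimal sketch that reads the search values one per line from listoffiles.txt (the file from your original for loop) instead of prompting. It keeps only the matching logic; the Flag handling above still applies if you need the no-match message:
#!/bin/ksh
# Read one search value per line; -r keeps backslashes literal
while read -r i
do
    find $OT_DIR/archive*/ -name "*$i*" | awk -F "." '{print $4}'
done < listoffiles.txt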

Related

Counting number of lines in file and saving it in a bash file

I am trying to loop through all the files in a folder and append the names of the files that have 10 lines to a txt file, but I don't know how to write the if statement.
As of right now, what I have is:
for FILE in *.txt do if wc $FILE == 10; then "$FILE" >> saved_names.txt fi done
I am stuck on how to format the condition so that it evaluates to a boolean for the if statement.
I have already tried the if statement as:
if [ wc $FILE != 10 ]
if "wc $FILE" != 10
if "wc $FILE != 10"
as well as other ways, but I don't seem to get it right. I know I am new to Bash, but I can't seem to find a solution to this problem.
There are a few problems in your code.
To count the number of lines in a file you should run the wc -l command. However, that command prints both the number of lines and the name of the file (for example: 10 a.txt - you can test it by running the command on a file in your terminal). To get only the number of lines, you need to feed the file's contents to the command's standard input, as in wc -l < file.
== is used in bash to compare strings. To compare integers, as in this case, you should use -eq (take a look here: https://tldp.org/LDP/abs/html/comparison-ops.html).
In terms of brackets: to use the result of wc in the code you need command substitution, which runs the command and replaces it with its output - $(wc -l < file). To get the result of a comparison as a boolean, you need square brackets with spaces around the contents, as in [ 1 -eq 1 ].
To save the name of the file in another file using >>, you first need to write the name to standard output (>> redirects standard output to the chosen place). For that you can just use the echo command.
The code should look like this:
#!/bin/bash
for FILE in *.txt
do
if [ "$(wc -l < "$FILE")" -eq 10 ]
then
echo "$FILE" >> saved_names.txt
fi
done
Try:
for file in *.txt; do
if [[ $(wc -l < "$file") -eq 10 ]]; then
printf '%s\n' "$file"
fi
done > saved_names.txt
Change > to >> if you want to append the filenames.
Related docs:
Command Substitution
Conditional Constructs
Extract the actual number of lines from a file with wc -l $FILE | cut -f1 -d' ' and use the -eq operator:
for FILE in *.txt; do if [ "$(wc -l "$FILE" | cut -f1 -d' ')" -eq 10 ]; then echo "$FILE" >> saved_names.txt; fi; done

Bash (split) file name comparison fails

In my directory I have files (*fastq.gz.fasta) and directories, whose names contain the filenames (*fastq.gz.fasta-blastdb):
IVC6_Meino.clust.gz.fasta-blastdb
IVC5_Mehiv.clust.gz.fasta-blastdb
....
IVC6_Meino.clust.gz.fasta
IVC5_Mehiv.clust.gz.fasta
....
In a bash script I want to compare the filenames with the directories, using cut on the latter to extract only the filename part. If the two names match I want to do further stuff (for now, echo match or no match respectively).
I have written the following piece of code:
#!/bin/bash
for file in *.fasta
do
for db in *-blastdb
do
echo $file, $db | cut -d '-' -f 1
if [[ $file = "$db | cut -d '-' -f 1" ]]; then
echo "match"
else
echo "no match"
fi
done
done
But it does not detect matches. The output looks like this:
...
IVC6_Meino.clust.gz.fasta, IIIA11_Meova.clust.gz.fasta
no match
IVC6_Meino.clust.gz.fasta, IVC5_Mehiv.clust.gz.fasta
no match
IVC6_Meino.clust.gz.fasta, IVC6_Meino.clust.gz.fasta
no match
The last line should read match: as you can see, the strings look the same.
What am I missing?
You can use parameter expansion to do this more easily:
for file in *.fasta
do
for db in *-blastdb
do
echo "$file", "$db"
if [[ "${file%%.fasta}" = "${db%%.fasta-blastdb}" ]]; then
echo "match"
else
echo "no match"
fi
done
done
If you want to fix yours, the problem is the use of $db | cut -d '-' -f 1. With echo it looks as if echo is printing the piped result; it isn't - cut is printing. But when you write [[ $file = "$db | cut -d '-' -f 1" ]], no pipeline runs at all: inside the quotes, everything after the expansion of $db is just literal text, so you are comparing $file against the contents of $db followed by the literal string | cut -d '-' -f 1.
You need the $(..) shell construct to capture the output of the pipe, and you need echo to feed the contents of $db into the pipe. You should quote "$db" so you do not get word splitting or globbing from the contents of the variable.
Like so:
for file in *.fasta
do
for db in *-blastdb
do
ts=$(echo "$db" | cut -d '-' -f 1)
echo "$file", "$ts"
if [[ "$file" = "$ts" ]]; then
echo "match"
else
echo "no match"
fi
done
done # this works I think -- not tested...
Please be careful with your quoting with Bash and liberally use ShellCheck.
The structure you have is also not the most efficient. You will loop over the *-blastdb glob once for every file matching *.fasta. If you have a lot of files, that could get really slow.
To solve that, you could rewrite this loop with Bash arrays (best if you have Bash 4+) or use awk:
ext1=.fasta
ext2=.fasta-blastdb
awk 'FNR==NR {
       s=$0
       sub("\\"ext1"$","",s)
       seen[s]=$0
       next
     }
     {
       s=$0
       sub("\\"ext2"$","",s)
       if (s in seen)
         print seen[s], $0
     }
' ext1="$ext1" ext2="$ext2" <(for fn in *$ext1; do echo "$fn"; done) <(for fn in *$ext2; do echo "$fn"; done)
Each glob executes only once, and awk uses an array to test whether the stripped basenames are the same.
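If you prefer to stay in Bash rather than awk, here is a minimal sketch of the array approach mentioned above, assuming Bash 4+ for associative arrays:
#!/bin/bash
# requires Bash 4+ associative arrays
declare -A seen
# Index each db directory by its name with the suffix stripped
for db in *-blastdb; do
    seen[${db%%.fasta-blastdb}]=$db
done
# Look up each file's stripped name in the index
for file in *.fasta; do
    base=${file%%.fasta}
    if [[ -n ${seen[$base]} ]]; then
        echo "match: $file <-> ${seen[$base]}"
    else
        echo "no match: $file"
    fi
done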

Find file names in other Bash files using grep

How do I loop through a list of Bash file names from an input text file, grep the files in a directory for each name (to see whether the name is referenced in any file), and output to a text file all the names that weren't found in any file?
#!/bin/sh
# This script will be used to output any unreferenced bash files
# included in the WebAMS Project
# Read file path of bash files and file name input
SEARCH_DIR=$(awk -F "=" '/Bash Dir/ {print $2}' bash_input.txt)
FILE_NAME=$(awk -F "=" '/Input File/ {print $2}' bash_input.txt)
echo $SEARCH_DIR
echo $FILE_NAME
exec<$FILE_NAME
while read line
do
echo "IN WHILE"
if (-z "$(grep -lr $line $SEARCH_DIR)"); then
echo "ENTERED"
echo $filename
fi
done
Save this as search.sh, updating SEARCH_DIR as appropriate for your environment:
#!/bin/bash
SEARCH_DIR=some/dir/here
while read -r filename
do
    if [ -z "$(grep -lr "$filename" "$SEARCH_DIR")" ]
    then
        echo "$filename"
    fi
done < "$1"
Then (the script reads the list of file names from the file passed as its first argument):
chmod +x search.sh
./search.sh files-i-could-not-find.txt
It's possible with the grep and find commands:
while read -r line; do (find . -type f -exec grep -l "$line" {} \;); done < file
OR
while read -r line; do grep -rl "$line" .; done < file
-r --> recursive
-l --> files-with-matches (displays the names of the files that contain the search string)
It reads each filename present in the input file and searches the current directory tree for files that contain that name. If any are found, it prints the corresponding filenames.
You're using regular parentheses instead of square brackets in your if statement.
The square brackets are a test command. You're running a test (in your case, whether a string has zero length or not). If the test is successful, the [ ... ] command returns an exit code of zero. The if statement sees that exit code and runs the then clause. Otherwise, if an else clause exists, that is run instead.
Because [ .. ] is actually a command, you must leave blank space on each side of the brackets.
Right
if [ -z "$string" ]
Wrong
if [-z "$string"] # Need white space around the brackets
Sort of wrong
if [ -z $string ] # Won't work if "$string" is empty or contains spaces
By the way, the following are the same:
if test -z "$string"
if [ -z "$string" ]
Be careful with that grep command. If there are spaces or newlines in the string returned, it may not do what you think it does.
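One way to sidestep that problem is grep's -q option, which produces no output and reports purely through its exit status; here is a minimal sketch of the same loop:
#!/bin/bash
SEARCH_DIR=some/dir/here
while read -r filename
do
    # grep -q exits 0 as soon as any file matches; there is no output to capture
    if ! grep -qr "$filename" "$SEARCH_DIR"
    then
        echo "$filename"
    fi
done < "$1"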

ksh: shell script to search for a string in all files present in a directory at a regular interval

I have a directory (output) on a Unix (Sun) system. Two types of files are created there, with a timestamp prefix in the file name. These files are created at a regular interval of 10 minutes.
e.g.:
1. 20140129_170343_fail.csv (some lines are there)
2. 20140129_170343_success.csv (some lines are there)
Now I have to search for a particular string in all the files present in the output directory. If the string is found in the fail and success files, I have to count the number of lines present in those files and save the counts in the cnt_succ and cnt_fail variables. If the string is not found, I search the same directory again after a sleep timer of 20 seconds.
Here is my code:
#!/usr/bin/ksh
for i in 1 2
do
grep -l 0140127_123933_part_hg_log_status.csv /osp/local/var/log/tool2/final_logs/* >log_t.txt; ### log_t.txt will contain all the matching file list
while read line ### reading the log_t.txt
do
echo "$line has following count"
CNT=`wc -l $line|tr -s " "|cut -d" " -f2`
CNT=`expr $CNT - 1`
echo $CNT
done <log_t.txt
if [ $CNT > 0 ]
then
exit
fi
echo "waiitng"
sleep 20
done
The problem I'm facing is that I'm not able to tell the _success and _fail files apart inside the loop and check their counts separately.
I'm not sure about ksh, but in bash a while ... do ... done loop at the end of a pipeline runs in a subshell, so it runs off with whatever variables you set inside it (they are lost when the loop ends). ksh might be similar.
If I've understood your question right, SunOS has grep, uniq and sort AFAIK, so a possible alternative might be...
First of all:
$ cat fail.txt
W34523TERG
ADFLKJ
W34523TERG
WER
ASDTQ34T
DBVSER6
W34523TERG
ASDTQ34T
DBVSER6
$ cat success.txt
abcde
defgh
234523452
vxczvzxc
jkl
vxczvzxc
asdf
234523452
vxczvzxc
dlkjhgl
jkl
wer
234523452
vxczvzxc
And now:
egrep "W34523TERG|ASDTQ34T" fail.txt | sort | uniq -c
2 ASDTQ34T
3 W34523TERG
egrep "234523452|vxczvzxc|jkl" success.txt | sort | uniq -c
3 234523452
2 jkl
4 vxczvzxc
Depending on the input data, you may want to see what options sort has on your system. Examining uniq's options may prove useful too (it can do more than just count duplicates).
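If what you ultimately need is the counts in variables (cnt_fail, cnt_succ), grep -c prints just the number of matching lines; here is a minimal sketch against the sample files above:
# -c prints only the count of matching lines
cnt_fail=$(grep -c "W34523TERG" fail.txt)
cnt_succ=$(grep -c "234523452" success.txt)
echo "fail=$cnt_fail success=$cnt_succ"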
I think you want something like this (it will work in both bash and ksh):
#!/bin/ksh
while read -r file; do
lines=$(wc -l < "$file")
((sum+=$lines))
done < <(grep -Rl --include="[12]*_fail.csv" "somestring" .)
echo "$sum"
Note this will match files starting with 1 or 2 and ending in _fail.csv, not exactly clear if that's what you want or not.
e.g. let's say I have two files, one starting with 1 (containing 4 lines) and one starting with 2 (containing 3 lines), both ending in _fail.csv, somewhere under my current working directory:
> abovescript
7
It's important to understand the grep options used here:
-R, --dereference-recursive
Read all files under each directory, recursively. Follow all
symbolic links, unlike -r.
and
-l, --files-with-matches
Suppress normal output; instead print the name of each input
file from which output would normally have been printed. The
scanning will stop on the first match. (-l is specified by
POSIX.)
Finally I was able to find the solution. Here is the complete code:
#!/usr/bin/ksh
file_name="0140127_123933.csv"
for i in 1 2
do
grep -l $file_name /osp/local/var/log/tool2/final_logs/* >log_t.txt;
while read line
do
if [ $(echo "$line" |awk '/success/') ] ## will check the success file
then
CNT_SUCC=`wc -l $line|tr -s " "|cut -d" " -f2`
CNT_SUCC=`expr $CNT_SUCC - 1`
fi
if [ $(echo "$line" |awk '/fail/') ] ## will check the fail file
then
CNT_FAIL=`wc -l $line|tr -s " "|cut -d" " -f2`
CNT_FAIL=`expr $CNT_FAIL - 1`
fi
done <log_t.txt
if [ "$CNT_SUCC" -gt 0 ] && [ "$CNT_FAIL" -gt 0 ]
then
echo " Fail count = $CNT_FAIL"
echo " Success count = $CNT_SUCC"
exit
fi
echo "waitng for next search..."
sleep 10
done
Thanks everyone for your help.
I don't think I'm getting it right, but is the problem that you can't differentiate the files?
Maybe try:
#...
CNT=`expr $CNT - 1`
if [ -n "$(echo "$line" | grep -o "fail")" ]
then
#do something with fail count
else
#do something with success count
fi
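Alternatively, a case statement can branch on the file name without any command substitution; a minimal sketch using the variables from the loop above:
case "$line" in
*fail*)    CNT_FAIL=$CNT ;;  # file name contains "fail"
*success*) CNT_SUCC=$CNT ;;  # file name contains "success"
esac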

how to prevent for loop from using space as delimiter, bash script

I am trying to write a bash script to do multiple checks and searches for a CMS my company uses. I am trying to implement a function that lets a user search for a certain macro call, and it should return all the files that contain the call, the line the macro is called on, and the actual code of the macro call. What I have seems to get screwed up by the fact that I am using a for loop to format the output. Here's the snippet of the script I am working on:
elif [ "$choice" = "2" ]
then
echo -e "\n What macro call are we looking for $name?"
read macrocall
for i in $(grep -inR "$macrocall" $sitepath/templates/macros/); do
file=$(echo $i | cut -d\: -f1 | awk -F\/ '{ print $NF }')
line=$(echo $i | cut -d\: -f2)
calltext=$(echo $i | cut -d\: -f3-)
echo -e "\nFile: $file"
echo -e "\nLine: $line"
echo -e "\nMacro Call from file: $calltext"
done
fi
The current script handles the first few fields until it hits a space, and then everything gets screwy. Does anybody have an idea how I can make the for loop's delimiter be each line of the grep output? Any suggestions would be helpful. Let me know if any of you need more info. Thanks!
The right way to do this would be more like:
printf "\n What macro call are we looking for %s?" "$name"
read macrocall
# ensure globbing is off and set IFS to a newline after saving original values
oSET="$-"; set -f; oIFS="$IFS"; IFS=$'\n'
awk -v macrocall="$macrocall" '
BEGIN { lc_macrocall = "\\<" tolower(macrocall) "\\>" }
tolower($0) ~ lc_macrocall {
file=FILENAME
sub(/.*\//,"",file)
printf "\n%s\n", file
printf "\n%d\n", FNR
printf "\nMacro Call from file: %s\n", $0
}
' $(find "$sitepath/templates/macros" -type f -print)
# restore original IFS and globbing values
IFS="$oIFS"; set +f -"$oSET"
This solves the problem of having spaces in your file names as originally requested, but also handles globbing characters in your file names, and the various typical echo issues.
You can set the internal field separator $IFS (which is normally set to space, tab and newline) to just a newline to get around this problem:
IFS=$'\n'
Note that IFS="\n" would set IFS to the two literal characters \ and n; the ANSI-C quoting $'\n' is needed to produce an actual newline.
