Bash: Echo file contents preceded by a counter - bash

I am looking for a bash script which reads the file content and echoes the output as shown below:
Input File: file.txt
host1a
host2b
host3c
host4e
I want my output like:
--START--
opt1:host1a
opt2:host2b
opt3:host3c
opt4:host4e
--END--

There are many possibilities; try this way, for example.
#!/bin/bash
opt=1
echo "--START--"
# number only the non-empty lines
while IFS= read -r line
do
if [ -n "$line" ]
then
echo "opt$opt:$line"
opt=$((opt+1))
fi
done <your_input_file.txt
echo "--END--"
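Another option, if you prefer a one-liner, is awk, which can print the markers and the counter in a single pass (a sketch using the file.txt from the question):
awk 'BEGIN { print "--START--" } NF { printf "opt%d:%s\n", ++n, $0 } END { print "--END--" }' file.txt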

Related

Generate multiple output files in a for loop

I'm trying to generate a new output file from each existing file in a directory of .txt files. I want to check each file line by line for two substrings and append the matching lines to a new output file per input file.
I'm having trouble generating the new files.
This is what I currently have:
#!/bin/sh
# My first Script
success="(Compiling)\s\".*\"\s\-\s(Succeeded)"
failure="(Compiling)\s\".*\"\s\-\s(Failed)"
count_success=0
count_failure=0
for i in ~/Documents/reports/*;
do
while read -r line;
do
if [[$success=~$line]]; then
echo $line >> output_$i
count_success++
elif [[$failure=~$line]]; then
echo $line >> output_$i
count_failure++
fi
done
done
echo "$count_success of jobs ran succesfully"
echo "$count_failure of jobs didn't work"
Any help would be appreciated, thanks
Please, use https://www.shellcheck.net/ to check your shell scripts.
If you use Visual Studio Code, you could install "ShellCheck" (by Timon Wong) extension.
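The same checks can also be run locally from the command line once the tool is installed (a minimal example; the install command depends on your system):
shellcheck your_script.sh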
About your program:
- Assume bash
- Define different extensions for the input and output files (really important if they are in the same directory)
- Loop over the report (input) files only
- Clear the output file
- Read the input file
- In the if statements:
  - use [[ ... ]] with a space after [[ and before ]]
  - put spaces before and after the operator (=~)
  - reverse the operand order for =~ (the string on the left, the pattern on the right)
  - prevent globbing by quoting: "..."
#! /bin/bash
# Input file extension
declare -r EXT_REPORT=".txt"
# Output file extension
declare -r EXT_OUTPUT=".output"
# RE
declare -r success="(Compiling)\s\".*\"\s\-\s(Succeeded)"
declare -r failure="(Compiling)\s\".*\"\s\-\s(Failed)"
# Counters
declare -i count_success=0
declare -i count_failure=0
for REPORT_FILE in ~/Documents/reports/*"${EXT_REPORT}"; do
# Clear output file
: > "${REPORT_FILE}${EXT_OUTPUT}"
# Read input file (see named file in "done" line)
while read -r line; do
# does the line match the success pattern ?
if [[ $line =~ $success ]]; then
echo "$line" >> "${REPORT_FILE}${EXT_OUTPUT}"
count_success+=1
# does the line match the failure pattern ?
elif [[ $line =~ $failure ]]; then
echo "$line" >> "${REPORT_FILE}${EXT_OUTPUT}"
count_failure+=1
fi
done < "$REPORT_FILE"
done
echo "$count_success of jobs ran succesfully"
echo "$count_failure of jobs didn't work"
What about using grep?
success='Compiling\s".*"\s-\sSucceeded'
failure='Compiling\s".*"\s-\sFailed'
count_success=0
count_failure=0
for i in ~/Documents/reports/*; do
(( count_success += $(grep -E "$success" "$i" | tee "output_$i" | wc -l) ))
(( count_failure += $(grep -E "$failure" "$i" | tee -a "output_$i" | wc -l) ))
done
echo "$count_success of jobs ran succesfully"
echo "$count_failure of jobs didn't work"

Array variable in command in url

I have a problem with URL formatting in a bash script. In the code below, the URL request
text="$(lynx --dump https://address/"${array[${i}]}")"
returns HTTP Error 400: The request URL is invalid. I assume something is wrong in the
"${array[${i}]}"
part of the URL, but I can't figure out the right format.
#!/bin/bash
saveIFS="$IFS"
IFS=$'\n'
array=($(<words))
IFS="$saveIFS"
elements=${#array[@]}
for (( i=0;i<$elements;i++))
do
text="$(lynx --dump https://address/"${array[${i}]}")"
echo "$text" >> "outputfilename"
done
I also tried:
text="$(lynx --dump https://address/${array[${i}]})"
Try
#!/bin/bash
IFS=$'\n' read -rd '' -a array <words
elements=${#array[@]}
for (( i=0;i<$elements;i++))
do
text="$(lynx --dump https://address/"${array[${i}]}")"
echo "$text" >> "outputfilename"
done
The array variable wasn't being set with array=($(<words))
You can use read or readarray, but this example is with read
Incidentally, putting IFS=$'\n' before read without a command separator ; sets $IFS only for the read command, removing the need to save and re-set $IFS
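For reference, the readarray (mapfile) form mentioned above would look like this, assuming bash 4 or later:
readarray -t array <words
elements=${#array[@]}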
You don't need an array at all; the following will work in any POSIX-compatible shell, assuming you have one URL component per line:
while IFS= read -r line; do
text=$(lynx --dump https://address/"$line")
echo "$text"
done < words >> outputfilename
My two cents...
I prefer to use printf -v for this, and it can be built like a filter:
catWeb() {
while IFS= read -r word;do
printf -v url "https://address/%s" "$word"
lynx --dump "$url"
done
}
catWeb <words >outputfilename
I was reading a Windows file. Lines ended with CR LF, so the address contains a \r character. I can remove it:
array[${i}]=${array[${i}]%$'\r'}
Or I can reformat the input file so lines end only with LF.
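For example, the carriage returns can be stripped up front with tr (a sketch; dos2unix would do the same job if it is installed):
tr -d '\r' <words >words.unix && mv words.unix words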
The main structure of the working script reading from a CR LF file is:
#!/bin/bash
IFS=$'\n' read -rd '' -a array <words
elements=${#array[@]}
for (( i=0;i<$elements;i++))
do
array[${i}]=${array[${i}]%$'\r'}
text="$(lynx --dump https://adrress/"${array[${i}]}")"
if [ ${#text} -gt 1 ]
then
echo "$text" >> "filename"
else
echo "${array[${i}]}" >> "filename2"
fi
done

Reading a text file - bash

I am trying to read a text file, "info.txt", which contains the following information:
info.txt
1,john,23
2,mary,21
What I want to do is store each column in a variable and print any one of the columns out.
I know this may seem simple to you guys, but I am new to writing bash scripts. I only know how to read the file; I don't know how to split on the , delimiter and need help. Thanks.
while read -r columnOne columnTwo columnThree
do
echo $columnOne
done < "info.txt"
output
1,
2,
expected output
1
2
You need to set the field separator:
while IFS=, read -r columnOne columnTwo columnThree
do
echo "$columnOne"
done < info.txt
It's good to check whether the file exists too.
#!/bin/bash
INPUT=./info.txt
OLDIFS=$IFS
IFS=,
[ ! -f "$INPUT" ] && { echo "$INPUT file not found"; exit 99; }
while read -r columnOne columnTwo columnThree
do
echo "columnOne : $columnOne"
echo "columnTwo : $columnTwo"
echo "columnThree : $columnThree"
done < "$INPUT"
IFS=$OLDIFS
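If you only need a single column, cut is another option (a sketch, assuming the same info.txt and that you want the second column):
cut -d, -f2 info.txt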

Load List From Text File To Bash Script

I've a .txt file which contains
abc.com
google.com
....
....
yahoo.com
And I'm interested in loading it into a bash script as a list (i.e. Domain_List=( "abc.com" "google.com" .... "yahoo.com" )). Is it possible to do this?
Additional information, once the list is obtained it is used in a for loop and if statements.
for i in "${Domain_list[#]}
do
if grep -q "${Domain_list[counter]}" domains.log
....
....
fi
....
let counter=counter+1
done
Thank you,
Update:
I've changed the format to domain_list=( "google.com" .... "yahoo.com" ), and using source domain.txt allows me to use domain_list as a list in the bash script.
#!/bin/bash
counter=0
source domain.txt
for i in "${domain_list[#]}"
do
echo "${domain_list[counter]}"
let counter=counter+1
done
echo "$counter"
Suppose your data file name is web.txt. Using command substitution (backticks) and cat, the array can be built. Please see the following code:
myarray=(`cat web.txt`)
noofelements=${#myarray[*]}
#now traverse the array
counter=0
while [ $counter -lt $noofelements ]
do
echo " Element $counter is ${myarray[$counter]}"
counter=$(( $counter + 1 ))
done
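Note that the backtick/cat form is subject to word splitting and globbing. On bash 4 or later, mapfile (readarray) is a more robust alternative (a sketch):
mapfile -t myarray <web.txt
noofelements=${#myarray[@]}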
Domain_list=()
while read -r addr
do
Domain_list+=("$addr")
done < addresses.txt
That should store each line of the text file into the array.
I used the source command, and it works fine.
#!/bin/bash
counter=0
source domain.txt
for i in "${domain_list[#]}"
do
echo "${domain_list[counter]}"
let counter=counter+1
done
echo "$counter"
There's no need for a counter if we're sourcing the list from a file. You can simply iterate through the list and echo the value.
#!/bin/bash
source domain.txt
for i in "${domain_list[@]}"
do
echo "$i"
done
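If the end goal is just to check all the listed domains against the log at once, grep can do it without any loop, printing the log lines that mention any listed domain (a sketch; domains.txt is a hypothetical name for the plain one-domain-per-line file):
grep -F -f domains.txt domains.log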

Read user input inside a loop

I have a bash script which is something like the following:
cat filename | while read line
do
read input;
echo $input;
done
but this is clearly not giving me the right output, as the read inside the while loop reads from the file filename because of the I/O redirection.
Is there any other way of doing this?
Read from the controlling terminal device:
read input </dev/tty
more info: http://compgroups.net/comp.unix.shell/Fixing-stdin-inside-a-redirected-loop
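Applied to the loop from the question, that looks like this (a sketch, assuming the same filename):
while read -r line
do
read -r input </dev/tty
echo "$input"
done <filename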
You can redirect the regular stdin through unit 3 to get at it inside the pipeline:
{ cat notify-finished | while read line; do
read -u 3 input
echo "$input"
done; } 3<&0
BTW, if you really are using cat this way, replace it with a redirect and things become even easier:
while read line; do
read -u 3 input
echo "$input"
done 3<&0 <notify-finished
Or, you can swap stdin and unit 3 in that version -- read the file with unit 3, and just leave stdin alone:
while read line <&3; do
# read & use stdin normally inside the loop
read input
echo "$input"
done 3<notify-finished
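The same idea can also be written with exec, opening the file on a spare descriptor for the duration of the loop (a sketch using the same notify-finished file):
exec 3<notify-finished
while read -r line <&3
do
read -r input
echo "$input"
done
exec 3<&-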
Try to change the loop like this:
for line in $(cat filename); do
read input
echo $input;
done
Unit test:
for line in $(cat /etc/passwd); do
read input
echo $input;
echo "[$line]"
done
I have found the -u parameter of read.
"-u 1" means "read from file descriptor 1 (stdout)".
while read -r newline; do
((i++))
read -u 1 -p "Doing $i""th file, called $newline. Write your answer and press Enter!"
echo "Processing $newline with $REPLY" # united input from two different read commands.
done <<< $(ls)
It looks like you are reading twice; the read inside the while loop is not needed. Also, you don't need to invoke the cat command:
while read input
do
echo $input
done < filename
echo "Enter the Programs you want to run:"
> ${PROGRAM_LIST}
while read PROGRAM_ENTRY
do
if [ ! -s ${PROGRAM_ENTRY} ]
then
echo ${PROGRAM_ENTRY} >> ${PROGRAM_LIST}
else
break
fi
done
