Why does the outer while loop in Bash not finish? - bash

I don't understand why this outer loop exits as soon as the inner loop finishes.
$1 refers to a file with a lot of pattern/replacement lines. $2 is a list of words. The problem is that the outer loop exits after the first pattern/replacement line; I want it to exit only after all the lines in $1 have been read.
#!/bin/bash
# Receives: SED_SCRIPT WORDLIST
if [ -f temp.txt ]; then
    > temp.txt
else
    touch temp.txt
fi
while IFS='' read -r line || [[ -n "$line" ]]; do
    echo -e "s/$line/p" >> temp.txt
    while IFS='' read -r line || [[ -n "$line" ]]; do
        sed -nf temp.txt $2
    done
    > temp.txt
done < $1

I understand that you want to build the sed expressions, write them to a file, and then apply those expressions to another file.
This is much easier than the way you are doing it.
First of all, you don't need to check whether temp.txt already exists: when you redirect the output of a command to a file, the file is created if it does not exist. If you want to empty an existing file, I recommend the truncate command.
As for the body of the script: the second while loop reads with no input redirection, so it consumes the same standard input as the outer loop. The inner read drains the rest of $1 during the first iteration, which is why the outer loop finishes after the first pattern/replacement line.
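If you really do want nested read loops, each loop needs its own file descriptor so the inner read cannot steal the outer loop's input. A minimal sketch of that pattern (fd 3 is an arbitrary choice; the file names are your original $1/$2):
while IFS='' read -r line <&3 || [[ -n "$line" ]]; do
    echo "s/$line/p" > temp.txt   # overwrite each time, no reset needed
    sed -nf temp.txt "$2"         # stdin stays free for any inner reads
done 3< "$1"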
I think what you need is something like this:
truncate -s 0 sed_expressions.txt
while IFS='' read -r line || [[ -n "$line" ]]; do
    echo "s/$line/p" >> sed_expressions.txt
done < "$1"
sed -nf sed_expressions.txt "$2" > out_file.txt
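Assuming the snippet is saved as, say, apply_patterns.sh (a hypothetical name), you would run it with the pattern file and the word list:
bash apply_patterns.sh patterns.txt wordlist.txt   # matches land in out_file.txt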
Try it and tell me if this is what you need.
Bye!

Related

Read file line by line and delete after

I'd like to read connectedclients.now line by line, do some work for each line, and delete the line once the work is done.
This is what I tried:
ClientWork.sh:
unset n
while read -r user work codename; do
    echo $user $work $codename
    : $[n++]
done < connectedclients.now
sed "1 $n d" connectedclients.now
(Original code from Stack Exchange.)
I'm getting sed: -e expression #1, char 5: unexpected `,' as the error. Any ideas how to fix it?
Your arithmetic syntax is wrong; $[n++] is a deprecated form. Try instead:
((n++))
Check http://mywiki.wooledge.org/ArithmeticExpression
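Note also that a sed address range needs a comma: 1,$n rather than 1 $n. A sketch of the corrected one-shot version (assuming the goal is to delete the first $n lines after processing them all):
n=0
while read -r user work codename; do
    echo "$user $work $codename"
    ((n++))
done < connectedclients.now
# delete lines 1 through n in a single pass
sed -i "1,${n}d" connectedclients.now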
And if you want to remove the lines one by one:
n=0
while read -r user work codename; do
    echo "$user $work $codename"
    ((n++))
    # caution: as lines are deleted, the numbering of later lines shifts
    sed -i "$n d" connectedclients.now
done < connectedclients.now
If you really, really need to remove lines as you're processing each one, I'd first read the whole file:
file=connectedclients.now
echo "before processing, $(wc -l < "$file") lines"
mapfile -t all_lines < "$file"
for line in "${all_lines[@]}"; do
    read -r user work codename <<<"$line"
    echo "$user $work $codename"
    sed -i 1d "$file"   # always drop the first, just-processed line
done
echo "after processing, $(wc -l < "$file") lines"

Unable to redirect the output of each line read from the file and create a new file for each redirected output

#!/bin/bash
file="/home/vdabas2/file2"
while IFS='' read -r line || [[ -n "$line" ]]; do
    pbreplay -O "$line" >> output
done < "$file"
With the script above I can read the file line by line, and the output of each processed line is appended to output.
But I need a separate output file for each line processed, saved as output1, output2, and so on. So if there are 10 lines in the file being passed as arguments, I need 10 output files.
#!/bin/bash
file="/home/vdabas2/file2"
i=1
while IFS='' read -r line || [[ -n "$line" ]]; do
    pbreplay -O "$line" >> "output.${i}"
    ((i++))
done < "$file"
Just add a counter and increment it for each line, as above.
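If you prefer zero-padded names that sort cleanly (output.01, output.02, ...), a small variant of the same idea using printf -v:
printf -v outfile 'output.%02d' "$i"   # e.g. output.01, output.02, ...
pbreplay -O "$line" > "$outfile"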

bash while loop "eats" my space characters

I am trying to parse a huge text file, say 200 MB.
The text file contains some strings:
123
1234
12345
 12345
So my script looked like:
while read line ; do
    echo "$line"
done < textfile
However, using the above method, my string " 12345" gets truncated to "12345".
I tried using
sed -n "$i"p textfile
but the throughput is reduced from 27 to 0.2 lines per second, which is unacceptable ;-)
Any idea how to solve this?
You want to read the lines without a field separator:
while IFS="" read line; do
    echo "$line"
done <<< " 12345"
If you also want to skip interpretation of backslash escapes, use read -r:
while IFS="" read -r line; do
    echo "$line"
done <<< " 12345"
You can also write the IFS assignment without double quotes:
while IFS= read -r line; do
    echo "$line"
done <<< " 12345"
This seems to be what you're looking for:
while IFS= read line; do
    echo "$line"
done < textfile
The safest method is to use read -r rather than plain read; -r skips interpretation of backslash escapes (thanks Walter A):
while IFS= read -r line; do
    echo "$line"
done < textfile
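As a quick illustration of why the IFS= part matters (read strips leading and trailing IFS whitespace while splitting, so emptying IFS preserves it):
printf ' 12345\n' | { read -r line;      echo "[$line]"; }   # prints [12345]
printf ' 12345\n' | { IFS= read -r line; echo "[$line]"; }   # prints [ 12345]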
OPTION 1:
#!/bin/bash
# read whole file into array
readarray -t aMyArray < <(cat textfile)
# echo each line of the array
# this will preserve spaces
for i in "${aMyArray[@]}"; do echo "$i"; done
readarray -- read lines from standard input
-t -- remove the trailing newline from each line read
aMyArray -- name of the array to store the file in
< <() -- execute a command; redirect its stdout into the array
cat textfile -- the file you want to store
for i in "${aMyArray[@]}" -- for every element in aMyArray
"" -- needed to maintain spaces in elements
${ [@]} -- reference all elements in the array
do echo "$i"; -- echo each element in turn
"" -- quotes maintain the spaces in the variable
$i -- each element of aMyArray as the loop cycles through
done -- close the for loop
OPTION 2:
To accommodate your larger file, you could do this to reduce the work and speed up the processing:
#!/bin/bash
sSearchFile=textfile
sSearchStrings="1|2|3|space"
while IFS= read -r line; do
    echo "${line}"
done < <(egrep "${sSearchStrings}" "${sSearchFile}")
This greps the file first (which is fast) before cycling the matches through the while loop. Let me know how this works for you. Note that you can put multiple search strings in the $sSearchStrings variable.
OPTION 3:
And an all-in-one solution: keep your search criteria in a text file and combine everything...
#!/bin/bash
# identify the file containing the search strings
# (kept in its own variable so building the pattern does not clobber it)
sStringsFile="searchstrings.file"
sSearchStrings=""
while IFS= read -r string; do
    if [[ -z $sSearchStrings ]]; then
        # if $sSearchStrings is empty, start with the first string
        sSearchStrings="${string}"
    else
        # otherwise append "|" and the next string
        sSearchStrings="${sSearchStrings}|${string}"
    fi
# read search criteria in from file
done < "${sStringsFile}"
# identify file to be searched
sSearchFile="text.file"
while IFS= read -r line; do
    echo "${line}"
done < <(egrep "${sSearchStrings}" "${sSearchFile}")

applying sed to certain lines from a file using bash

I need your help with this:
I am currently trying to apply a sed command to certain lines of a file, such as:
2014-08-05T09:29:13+01:00 (INFO:3824.87075728): [27219] [ <email@domain.com>] A message from <user1@domain.com> source <asdfg> this is a test.
I need to apply the sed command below to lines like this one, but keep the others that do not contain 'this is a test':
pattern="this\ is\ a test"
while IFS='' read -r line; do
    if [[ $line = *"${pattern}"* ]]; then
        sed 's/\[ .*\(source\)/\1/g' ${line}
    else
        echo "${line}"
    fi
done < ${INPUT} > ${OUTPUT}
I have set the input and output files; however, ideally I would keep the same file.
Thank you for your input.
You don't need a loop for this. Use this sed, which edits the file in place (keeping a .bak backup), so you keep working with the same file:
sed -i.bak '/this is a test/s/\[ .*\(source\)/\1/g' "${INPUT}"
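On the sample line above, the /this is a test/ address restricts the substitution to matching lines; the greedy .* then swallows everything from the first literal "[ " up to "source", so the line would become roughly:
before: 2014-08-05T09:29:13+01:00 (INFO:3824.87075728): [27219] [ <email@domain.com>] A message from <user1@domain.com> source <asdfg> this is a test.
after:  2014-08-05T09:29:13+01:00 (INFO:3824.87075728): [27219] source <asdfg> this is a test.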

Skip line in text file which starts with '#' via KornShell (ksh)

I am trying to write a script which reads a text file and saves each line to a string. I would also like the script to skip any lines which start with a hash symbol. Any suggestions?
You don't have to do the skipping in ksh itself; you can let grep filter out the comment lines, e.g.:
grep -v '^#' INPUTFILE | while IFS= read -r line; do echo "$line"; done
And instead of the echo part, do whatever you want with the line.
Or, if your ksh does not support this syntax:
grep -v '^#' INPUTFILE > tmpfile
while IFS= read -r line; do echo "$line"; done < tmpfile
rm tmpfile
while read -r line; do
    [[ "$line" = *( )#* ]] && continue
    # do something with "$line"
done < filename
Look for "File Name Patterns" or "File Name Generation" in the ksh man page.
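If lines never have leading whitespace before the hash, a plain prefix pattern is enough; a minimal sketch (works in both ksh and bash):
while read -r line; do
    [[ $line = '#'* ]] && continue
    echo "$line"   # or collect the line into a variable here
done < filename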
