I am trying to write a function in a bash script that gets lines from stdin and picks out the first line which is not contained in a file.
Here is my approach:
doubles=file.txt
firstnotdouble() {
    while read input_line; do
        found=0
        cat $doubles |
        while read double_line; do
            if [ "$input_line" = "$double_line" ]; then
                found=1
                break
            fi
        done
        if [ $found -eq 0 ]; then # no double found, echo and break!
            echo $input_line
            break
        fi
    done
}
After some debugging attempts I realized that when found is set to 1 in the first if block, it does not keep its value until the next if block. That's why it's not working. Why does the script act as if there were two found variables in different "scopes"?
The second question would be if the approach as a whole could be optimized.
As indicated in the comments, the issue is that the commands in a pipeline (that is, a series of commands separated by |) run in subshells, and each subshell gets its own copy of the shell's variables, so changes made there never reach the parent shell. You could have avoided the problem by avoiding the UUOC (useless use of cat) and writing:
while read ...; do ... done < "$doubles"
instead of the pipeline.
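As a minimal sketch, here is the original function with the cat pipeline replaced by that redirection (same logic otherwise; found now keeps its value because both loops run in the current shell):
doubles=file.txt
firstnotdouble() {
    while read -r input_line; do
        found=0
        while read -r double_line; do
            if [ "$input_line" = "$double_line" ]; then
                found=1
                break
            fi
        done < "$doubles"        # redirection instead of cat | while
        if [ "$found" -eq 0 ]; then
            echo "$input_line"   # no double found: print it and stop
            break
        fi
    done
}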
A (much) faster way than repeatedly looping through the doubles file with while read is to use grep:
# Specify the file to be scanned as the first argument
firstnotdouble() {
    while IFS= read -r double_line; do
        if ! grep -qxF "$double_line" "$1"; then
            echo "$double_line"
            return
        fi
    done
    return 1
}
In the grep:
-q suppresses output and stops at the first match
-x requires the pattern to match the entire line
-F treats the pattern as a fixed string instead of a regular expression.
In the read:
IFS= prevents leading and trailing whitespace from being trimmed
-r prevents backslashes from being treated as escape characters
With GNU grep, you could use -xF -m1 (or even -xFm1 if you like being cryptic) instead of -qxF, and then leave out the echo. The grep extension -m N limits the number of matches found to N.
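For context, a hypothetical invocation of this function (the candidate lines arrive on stdin, and the doubles file is passed as the first argument):
# Hypothetical usage: prints the first of the three lines not present in file.txt.
printf 'alpha\nbeta\ngamma\n' | firstnotdouble file.txt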
I tried to keep my code as simple as possible:
1: What are the rules for using echo within a while loop? All my $a lines and some of my $word lines are echoed, but my echo kk never shows up. Why?
2: What is the scope of my count variable? Why is it not working within my while loop? Can I extend the variable to make it global?
3: When I use the grep in the final row, the $word variable only prints the first word of the passing rows, while if I remove the grep line at the end, $word functions as intended and prints all the words.
count=1
while read a; do
    ((count=count+1))
    if [ $count -le 2 ]; then
        echo $a
        echo kk
        for word in $a; do
            echo $word
        done
    fi
done < data.txt | grep Iteration
Use Process Substitution
In a comment, you say:
I thtought I was using grep on data.txt (sic)
No. Your current pipeline passes the loop's results through grep, not the source file. To do that, you need to rewrite your redirection to use process substitution. For example:
count=1
while read a; do
    ((count=count+1))
    if [ $count -le 2 ]; then
        echo $a
        echo kk
        for word in $a; do
            echo $word
        done
    fi
done < <(fgrep Iteration data.txt)
CodeGnome answered your question, but there are other problems with your script that will come back to bite you at some point (see https://unix.stackexchange.com/questions/169716/why-is-using-a-shell-loop-to-process-text-considered-bad-practice for discussion of some of them, and also google "quoting shell variables"). Just don't do it. Shell scripts are for sequencing calls to tools, and the UNIX tool for manipulating text is awk. In this case, all you'd need to do the job robustly, portably, and efficiently is:
awk '
/Iteration/ {
    if (++count <= 2) {
        print
        print "kk"
        for (i=1; i<=NF; i++) {
            print $i
        }
    }
}' data.txt
and of course it'd be more efficient still if you just stop reading the input when count hits 2:
awk '
/Iteration/ {
    print
    print "kk"
    for (i=1; i<=NF; i++) {
        print $i
    }
    if (++count == 2) {
        exit
    }
}' data.txt
To complement CodeGnome's helpful answer with an explanation of how your command actually works and why it doesn't do what you want:
In Bash's grammar, an input redirection such as < data.txt is part of a single command, whereas |, the pipe symbol, chains multiple commands, from left to right, to form a pipeline.
Technically, while ... done ... < data.txt | grep Iteration is a single pipeline composed of 2 commands:
a single compound command (while ...; do ...; done) with an input redirection (< data.txt),
and a simple command (grep Iteration) that receives the stdout output from the compound command via its stdin, courtesy of the pipe.
In other words:
only the contents of data.txt are fed to the while loop as input (via stdin),
and whatever stdout output the while loop produces is then sent to the next pipeline segment, the grep command.
By contrast, it sounds like you want to apply grep to data.txt first, and only send the matching lines to the while loop.
You have the following options for sending a command's output to another command:
Note: The following solutions use a simplified while loop for brevity - whether a while command is single-line or spans multiple lines is irrelevant.
Also, instead of using input redirection (< data.txt) to pass the file content to grep, data.txt is passed as a filename argument.
Option 1: Place the command whose output to send to your while loop first in the pipeline:
grep 'Iteration' data.txt | while read -r a; do echo "$a"; done
The down-side of this approach is that your while loop then runs in a subshell (as all segments of a pipeline do by default), which means that variables defined or modified in your while command won't be visible to the current shell.
In Bash v4.2+, you can fix this by running shopt -s lastpipe, which tells Bash to run the last pipeline segment - the while command in this case - in the current shell instead.
Note that lastpipe is a nonstandard bash extension to the POSIX standard.
(To try this in an interactive shell, you must first turn off job control with set +m.)
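Here is a minimal sketch of that lastpipe variant, assuming Bash 4.2+ running a script non-interactively (so job control is already off):
#!/bin/bash
shopt -s lastpipe
count=0
grep 'Iteration' data.txt | while read -r a; do
    count=$((count + 1))    # runs in the current shell thanks to lastpipe
done
echo "count=$count"         # the incremented value is visible here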
Option 2: Use a process substitution:
Loosely speaking, a process substitution <(...) allows you to present command output as the content of a temporary file that cleans up after itself.
Since <(...) expands to the temporary file's (FIFO's) path, and read in the while loop only accepts stdin input, input redirection must be applied as well: < <(...):
while read -r a; do echo "$a"; done < <(grep 'Iteration' data.txt)
The advantage of this approach is that the while loop runs in the current shell, and any variable definitions or modifications therefore remain in scope after the command completes.
The potential down-side of this approach is that process substitutions are a nonstandard bash extension to the POSIX standard (although ksh and zsh support them too).
Option 3: Use a command substitution inside a here-document:
Using the command first in the pipeline (option 1) is a POSIX-compliant approach, but doesn't allow you to modify variables in the current shell (and Bash's lastpipe option is not POSIX-compliant).
The only POSIX-compliant way to send command output to a command that runs in the current shell is to use a command substitution ($(...)) inside a double-quoted here-document:
while read -r a; do echo "$a"; done <<EOF
$(grep 'Iteration' data.txt)
EOF
Streamlining your code and making it more robust:
The rest of your code has some non-obvious pitfalls that are worth addressing:
Double-quote your variable references (e.g., echo "$a" instead of echo $a), unless you specifically want word-splitting and globbing (filename expansion) applied to the values; word splitting and globbing are two kinds of shell expansions.
Similarly, don't use for to iterate over an (of necessity unquoted) variable reference (don't use for word in $a, in your case), unless you want globbing applied to the individual words - see what happens when you run a='one *'; for word in $a; do echo "$word"; done
You could turn globbing off beforehand (set -f) and back on after (set +f), but it's better to use read -ra words ... to read the words into an array first, and then safely iterate over the array elements with for word in "${words[@]}"; ... - note the "..." around the array variable reference.
Always use -r with read; without it, rarely used \-preprocessing is applied, which will "eat" embedded \ chars.
If we heed the advice above, apply a few additional tweaks, and use a process substitution to feed grep's output to the while loop, we get:
count=1
while read -r a; do # Note the -r
    if (( ++count <= 2 )); then
        echo "$a"
        # Split $a safely into words and store the words in
        # array variable ${words[@]}.
        read -ra words <<<"$a" # Note the -a to read into an *array*.
        # Loop over the words (elements of the array).
        # Note: To simply print the words, you could use
        # printf '%s\n' "${words[@]}" instead of the loop.
        for word in "${words[@]}"; do
            echo "$word"
        done
    fi
done < <(grep 'Iteration' data.txt)
Note: As written, you don't really need a loop at all, because the if branch only ever runs on the 1st iteration.
Finally, as a general alternative for larger input sets, consider Ed Morton's helpful answer, which is much faster due to using awk to process your input file, whereas looping in shell code is generally slow.
I'm new to UNIX and have this really simple problem:
I have a text-file (input.txt) containing a string in each line. It looks like this:
House
Monkey
Car
And inside my shell script I need to read this input file line by line to get to a variable like this:
things="House,Monkey,Car"
I know this sounds easy, but I just couldn't find any simple solution for this. My closest attempt so far:
#!/bin/sh
things=""
addToString() {
things="${things},$1"
}
while read line; do addToString $line ;done <input.txt
echo $things
But this won't work. According to my Google research, I thought the while loop would create a new subshell, but I was wrong there (see the comment section). Nevertheless, the variable "things" was still not available in the echo later on. (I can't just put the echo inside the while loop, because I need to work with the string later on.)
Could you please help me out here? Any help will be appreciated, thank you!
What you proposed works fine! I've only made two changes here: Adding missing quotes, and handling the empty-string case.
things=""
addToString() {
if [ -n "$things" ]; then
things="${things},$1"
else
things="$1"
fi
}
while read -r line; do addToString "$line"; done <input.txt
echo "$things"
If you were piping into while read, this would create a subshell, and that would eat your variables. You aren't piping -- you're doing a <input.txt redirection. No subshell, code works without changes.
That said, there are better ways to read lists of items into shell variables. On any version of bash after 3.0:
IFS=$'\n' read -r -d '' -a things <input.txt   # read into an array
printf -v things_str '%s,' "${things[@]}"      # write array to a comma-separated string
echo "${things_str%,}"                         # print that string w/o trailing comma
...on bash 4, that first line can be:
readarray -t things <input.txt # read into an array
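Putting the bash-4 pieces together, a sketch assuming the three-line input.txt from the question:
readarray -t things <input.txt               # things=(House Monkey Car)
printf -v things_str '%s,' "${things[@]}"    # -> House,Monkey,Car,
echo "${things_str%,}"                       # -> House,Monkey,Car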
This is not a shell solution, but the truth is that solutions in pure shell are often excessively long and verbose. So, e.g., for string processing it is better to use the specialized tools that are part of the "default" Unix environment.
sed ':b;N;$!bb;s/\n/,/g' < input.txt
If you want to omit empty lines, then:
sed ':b;N;$!bb;s/\n\n*/,/g' < input.txt
Speaking about your solution, it should work, but you should really always use quotes where applicable. E.g. this works for me:
things=""
while read line; do things="$things,$line"; done < input.txt
echo "$things"
(Of course, there is an issue with this code, as it outputs a leading comma. If you want to skip empty lines, just add an if check.)
This might/might not work, depending on the shell you are using. On my Ubuntu 14.04/x64, it works with both bash and dash.
To make it more reliable and independent from the shell's behavior, you can try to put the whole block into a subshell explicitly, using the (). For example:
(
    things=""
    addToString() {
        things="${things},$1"
    }
    while read line; do addToString $line; done
    echo $things
) < input.txt
P.S. You can use something like this to avoid the initial comma. Without bash extensions (using short-circuit logical operators instead of an if, for brevity):
test -z "$things" && things="$1" || things="${things},${1}"
Or with bash extensions:
things="${things}${things:+,}${1}"
P.P.S. How I would have done it:
tr '\n' ',' < input.txt | sed 's!,$!\n!'
You can do this too:
#!/bin/bash
while read -r i
do
    [[ $things == "" ]] && things="$i" || things="$things","$i"
done < <(grep . input.txt)
echo "$things"
Output:
House,Monkey,Car
N.B.:
grep is used here to deal with empty lines and with the possibility of there being no newline at the end of the file. (A plain while read will fail to read the last line if there is no newline at the end of the file.)
There are 2 pieces of code here, and the value in $1 is the name of a file which contains 3 lines of text.
Now, I have a problem. In the first piece of code, I can't get the "right" value out of the loop, but in the second piece of code, I can. I don't know why.
How can I make the first piece of code produce the right result?
#!/bin/bash
count=0
cat "$1" | while read line
do
count=$[ $count + 1 ]
done
echo "$count line(s) in all."
#-----------------------------------------
count2=0
for var in a b c
do
    count2=$[ $count2 + 1 ]
done
echo "$count2 line(s) in all."
This happens because of the pipe before the while loop. It creates a sub-shell, and thus the changes in the variables are not passed to the main script. To overcome this, use process substitution instead:
while read -r line
do
    # do some stuff
done < <(some command)
In Bash 4.2 or later, you can also set the lastpipe option, and the last command in the pipeline will run in the current shell, not a subshell.
shopt -s lastpipe
some command | while read -r line; do
    # do some stuff
done
In this case, since you are just using the contents of the file, you can use input redirection:
while read -r line
do
    # do some stuff
done < "$file"
I am writing a script to read commands from a file and execute a specific command. I want my script to work both when a single argument is given directly and when an argument is a filename containing the arguments in question.
My code below works except for one problem: it ignores the last line of the file. So, suppose the file were as follows.
file.txt:
file1
file2
The script posted below only runs the command for file1, ignoring the last line (file2).
for currentJob in "$#"
do
if [[ "$currentJob" != *.* ]] #single file input arg
then
echo $currentJob
serverJobName="$( tr '[A-Z]' '[a-z]' <<< "$currentJob" )" #Change to lowercase
#run cURL job
curl -o "$currentJob"PisaInterfaces.xml http://www.ebi.ac.uk/msd-srv/pisa/cgi-bin/interfaces.pisa?"$serverJobName"
else #file with list of pdbs
echo "Reading "$currentJob
while read line; do
echo "-"$line
serverJobName="$( tr '[A-Z]' '[a-z]' <<< "$line" )"
curl -o "$line"PisaInterfaces.xml http://www.ebi.ac.uk/msd-srv/pisa/cgi-bin/interfaces.pisa?"$serverJobName"
done < "$currentJob"
fi
done
There is, of course, the obvious workaround of repeating the loop body once more after the while loop to handle the last file, but this is undesirable: any change I make inside the while loop would then have to be repeated outside it as well. I have searched around online and could not find anyone asking this precise question. I am sure it is out there, but I have not found it.
The output I get is as follows.
>testScript.sh file.txt
Reading file.txt
-file1
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 642k 0 642k 0 0 349k 0 --:--:-- 0:00:01 --:--:-- 492k
My bash version is 3.2.48
It sounds like your input file is missing the newline character after its last line. When read encounters this, it sets the variable ($line in this case) to the last line, but then returns an end-of-file error, so the loop doesn't execute for that last line. To work around this, you can make the loop execute if read succeeds or if it read anything into $line:
...
while read line || [[ -n "$line" ]]; do
...
EDIT: the || in the while condition is what's known as a short-circuit boolean -- it tries the first command (read line), and if that succeeds it skips the second ([[ -n "$line" ]]) and goes through the loop (basically, as long as the read succeeds, it runs just like your original script). If the read fails, it checks the second command ([[ -n "$line" ]]) -- if read read anything into $line before hitting the end of file (i.e. if there was an unterminated last line in the file), this'll succeed, so the while condition as a whole is considered to have succeeded, and the loop runs one more time.
After that last unterminated line is processed, it'll run the test again. This time, the read will fail (it's still at the end of file), and since read didn't read anything into $line this time the [[ -n "$line" ]] test will also fail, so the while condition as a whole fails and the loop terminates.
EDIT2: The [[ ]] is a bash conditional expression -- it's not a regular command, but it can be used in place of one. Its primary purpose is to succeed or fail, based on the condition inside it. In this case, the -n test means succeed if the operand ("$line") is NONempty. There's a list of other test conditions here, as well as in the man page for the test command.
Note that a conditional expression in [[ ... ]] is subtly different from a test command [ ... ] - see BashFAQ #031 for differences and a list of available tests. And they're both different from an arithmetic expression with (( ... )), and completely different from a subshell with ( ... )...
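A quick way to see this behavior, sketched with a hypothetical demo file (printf without a trailing \n leaves the last line unterminated):
printf 'first\nlast' > /tmp/demo.txt   # hypothetical demo file, no final newline
while read -r line || [[ -n "$line" ]]; do
    echo "got: $line"
done < /tmp/demo.txt
# got: first
# got: last   <- printed only because of the || [[ -n "$line" ]] test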
Your problem seems to be a missing newline at the end of your file.
If you cat your file, you should see the last line appear before the prompt; if you don't, the final line is unterminated.
Otherwise, try adding:
echo "Reading "$currentJob
echo >> "$currentJob" # append a trailing newline
while read line; do
to force a final newline onto the file.
Using grep with while loop:
while IFS= read -r line; do
    echo "$line"
done < <(grep "" file)
Using grep . instead of grep "" will skip the empty lines.
Note:
Using IFS= keeps any line indentation intact.
You should almost always use the -r option with read.
I found some code that reads a file, including the last line, and works without the [[ ]] test:
http://www.unix.com/shell-programming-and-scripting/161645-read-file-using-while-loop-not-reading-last-line.html
DONE=false
until $DONE; do
    read s || DONE=true
    # your code here
done < FILE
Given a text file with multiple lines, I would like to iterate over each line in a Bash script. I had attempted to use cut, but cut does not accept \n (newline) as a delimiter.
This is an example of the file I am working with:
one
two
three
four
Does anyone know how I can loop through each line of this text file in Bash?
I found myself with the same problem; this works for me:
cat file.cut | cut -d$'\n' -f1
Or:
cut -d$'\n' -f1 file.cut
Use cat for concatenating or displaying. No need for it here.
file="/path/to/file"
while read line; do
    echo "${line}"
done < "${file}"
Simply use:
echo -n `cut ...`
This suppresses the \n at the end
cat FILE | while read line; do   # 'line' is the variable name
    echo "$line"                 # do something here
done
or (see comment):
while read line; do    # 'line' is the variable name
    echo "$line"       # do something here
done < FILE
So, some really good (possibly better) answers have been provided already. But looking at the phrasing of the original question, in wanting to use a BASH for loop, it amazed me that nobody mentioned a solution that changes the field separator IFS. It's a pure Bash solution, just like the accepted read line answer.
old_IFS=$IFS
IFS=$'\n'
for field in $(<filename)
do
    your_thing
done
IFS=$old_IFS
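To see why the IFS change matters, a sketch with a hypothetical file whose lines contain spaces:
printf 'one two\nthree\n' > /tmp/demo.txt   # hypothetical demo file
old_IFS=$IFS
IFS=$'\n'
for field in $(</tmp/demo.txt); do
    echo "[$field]"    # prints [one two] then [three], not three separate words
done
IFS=$old_IFS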
If you are sure that the output will always be newline-delimited, use head -n 1 in lieu of cut -f1 (note that you mentioned a for loop in a script, though your question was ultimately not script-related).
Many of the other answers, including the accepted one, have multiple lines unnecessarily. There is no need to do this over multiple lines or to change the default delimiter on the system.
Also, the solution provided by Ivan with -d$'\n' did not work for me on either Mac OS X or CentOS 7. Since his answer is four years old, I assume something must have changed in the handling of the $ character for this situation.
While loop with input redirection and read command.
You should not be using cut to perform a sequential iteration of each line in a file as cut was not designed to do this.
Print selected parts of lines from each FILE to standard output.
— man cut
TL;DR
You should use a while loop with the read -r command and redirect standard input to your file, inside a function scope where IFS is set to $'\n', and use -E when using echo.
processFile() {                 # Function scope to prevent overwriting IFS globally
    file="$1"                   # Any file that exists
    local IFS=$'\n'             # Preserves leading spaces and tabs
    while read -r line; do      # read exits non-zero at EOF; -r preserves backslashes
        echo -E "$line"         # -E prints backslashes literally
    done < "$file"              # Input redirection lets read take the file from stdin
}
processFile /path/to/file
Iteration
In order to iterate over each line of a file, we can use a while loop. This will let us iterate as many times as we need to.
while <condition>; do
    <body>
done
Getting our file ready to read
We can use the read command to store a single line from standard input in a variable. Before we can use that to read a line from our file, we need to redirect standard input to point to our file. We can do this with input redirection. According to the man pages for bash, the syntax for redirection is [fd]<file where fd defaults to standard input (a.k.a file descriptor 0). We can place this before or after our while loop.
while <condition>; do
    <body>
done < /path/to/file
# or the non-traditional way
</path/to/file while <condition>; do
    <body>
done
Reading the file and ending the loop
Now that our file can be read from standard input, we can use read. The syntax for read in our context is read [-r] var..., where -r preserves the \ (backslash) character instead of treating it as an escape character, and var is the name of the variable to store the input in. You can have multiple variables to store pieces of the input in, but we only need one to read an entire line. Along with this, to preserve any backslashes in the output from echo, you will likely need to use the -E flag to disable the interpretation of backslash escapes. If you have any indentation (spaces or tabs) to preserve, you will need to temporarily change the IFS (Internal Field Separator) variable to just a newline, $'\n'; normally it is set to space, tab, and newline ($' \t\n').
main() {
    local IFS=$'\n'
    read -r line
    echo -E "$line"
}
main
How do we use read to end our while loop?
There is really only one reliable way (that I know of) to determine when you've finished reading a file with read: check the exit status of read. If the exit status is 0, we successfully read a line; if it is nonzero, we reached EOF (end of file). With that in mind, we can place the call to read in our while loop's condition section.
processFile() {
    # Could be any file you want, hardcoded or dynamic
    file="$1"
    local IFS=$'\n'
    while read -r line; do
        # Process the line here
        echo -E "$line"
    done < "$file"
}
processFile /path/to/file1
processFile /path/to/file2
A visual breakdown of the above code via Explain Shell.
If I am executing a command and want to cut the output but it has multiple lines I found it helpful to do
echo $([command]) | cut [....]
This puts all the output of [command] on a single line that can be easier to process.
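For example, a sketch with a hypothetical three-line file: the unquoted command substitution undergoes word splitting, so the lines collapse onto one space-separated line before cut runs.
printf 'one\ntwo\nthree\n' > /tmp/nums.txt   # hypothetical demo file
echo $(cat /tmp/nums.txt) | cut -d' ' -f2    # -> two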
My opinion is that "cut" uses '\n' as its default delimiter.
If you want to use cut, here are two ways:
cut -d^M -f1 file_cut
where I type ^M by pressing Ctrl+V and then Enter. Another way is:
cut -c 1- file_cut
Does that help?