How to read an entire line in bash

I have a file file.txt with contents like
i love this world
I hate stupid managers
I love linux
I have MS
When I do the following:
for line in `cat file.txt`; do
echo $line
done
It gives output like
I
love
this
world
I
..
..
But I need the output as entire lines, like below. Any thoughts?
i love this world
I hate stupid managers
I love linux
I have MS

while read -r line; do echo "$line"; done < file.txt

As @Zac noted in the comments, the simplest solution to the question as posted is simply cat file.txt, so I must assume there is something more interesting going on. Here are the two options that solve the question as asked:
You can either set IFS (the Internal Field Separator) to a newline and keep your existing for loop, or you can use the read builtin (or the legacy line command) in a while loop.
IFS="
"
or
(while read -r line ; do
# do something with "$line"
done) < file.txt
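For completeness, here is a minimal sketch of the first (IFS) option applied to the original for loop, saving and restoring the old value ($'\n' is bash syntax for a literal newline):

old_IFS=$IFS
IFS=$'\n'                        # split only on newlines
for line in $(cat file.txt); do
    echo "$line"
done
IFS=$old_IFS                     # restore default word splitting

Note that the unquoted $(cat ...) still undergoes globbing, so the while read form remains the more robust choice.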

I believe the question was how to read an entire line at a time. The simple script below will do this. If you don't specify a variable name for read, it will stuff the entire line into the variable $REPLY.
cat file.txt | while read -r; do echo "$REPLY"; done
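The cat is not strictly necessary here; the same loop can take the file by plain redirection:

while read -r; do echo "$REPLY"; done < file.txt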

You can do it by using read if the file is coming into stdin. If you need to do it in the middle of a script that already uses stdin for other purposes, you can temporarily reassign the stdin file descriptor.
#!/bin/bash
file=$1
# save stdin to usually unused file descriptor 3
exec 3<&0
# connect the file to stdin
exec 0<"$file"
# read from stdin
while read -r line
do
echo "[$line]"
done
# when done, restore stdin and close the spare descriptor
exec 0<&3 3<&-
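A more compact variant of the same idea reads directly from the spare descriptor with read -u, leaving stdin untouched for anything else the loop body might want to run:

while read -r -u 3 line; do
    echo "[$line]"
done 3< "$file"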

Try
(while read -r l; do echo "$l"; done) < temp.txt
read: Read a line from the standard input and split it into fields.
Reads a single line from the standard input, or from file descriptor FD if the -u option is supplied. The line is split into fields as with word splitting, and the first word is assigned to the first NAME, the second word to the second NAME, and so on, with any leftover words assigned to the last NAME. Only the characters found in $IFS are recognized as word delimiters.
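A quick sketch of the field-splitting behaviour described above:

read -r first rest <<< "i love this world"
echo "$first"    # prints: i
echo "$rest"     # prints: love this world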

Related

During a while loop file read, where is the first line of stdin lost?

Assume we have a file with the numbers 1 to 5 written down line by line.
When I open a file for reading as standard input and use while read, commands inside the loop that can read stdin are unable to read the first line of that file.
$ while read x; do sed ''; done<file
2
3
4
5
It makes no difference which command you use: sed, awk, cat, etc. The problem occurs whenever the command is able to read from stdin. There is also no difference between the shells I use; I tried the same thing in sh, bash, and zsh, and the results are identical.
It's worth noting that the loop iterates five times, once for each line. For example:
$ while read x; do printf 'something\n'; done<file
something
something
something
something
something
I understand that if I want to read all lines correctly, I must specify a variable in the read command and then pass it to the command. But I'm trying to figure out what's going on here. Why does this problem occur when I do not specify input for a command directly?
Perhaps it is a side effect with no functional purpose.
I couldn't find any information about this behavior of the 'while read' statement, and neither did I find anyone who had a similar problem.
Your code only iterates once.
while read x; do sed ''; done<file
...behaves as follows:
file is opened and attached to stdin
read consumes the first line of the file from stdin and puts it into $x
sed '' consumes the entire rest of the file from stdin and prints it to stdout without changes.
read sees there's no more data (because sed consumed it all), and the loop ends.
If you want sed to operate on only the one line that read x consumed, and to safeguard against other bugs, you might instead write:
while IFS= read -r x; do printf '%s\n' "$x" | sed ''; done <file
The changes:
Using IFS= prevents leading or trailing whitespace from being deleted by read.
Using the -r argument prevents backslashes from being consumed by read.
Piping from printf '%s\n' "$x" into sed changes sed's stdin, such that instead of containing the rest of the file, it only contains the one line. Thus, this ensures that sed is processing the line that was consumed by read, instead of ignoring that line and processing the entire rest of the file. (Using printf instead of echo is a correctness concern; see Why is printf better than echo? on UNIX & Linux Stack Exchange).
To restate: the first line of stdin is not lost, but rather it is consumed by read; sed then consumes the entire rest of the file from the same stdin. Another way to avoid the conflict is to feed the file to the loop on a separate file descriptor, so commands inside the loop never see it:
$ while read -u3 x; do sed ''; done 3<file
Here read -u3 takes its input from file descriptor 3, and sed's stdin is left pointing wherever it pointed originally (e.g. the terminal).

Read a file line-by-line on bash; each line containing the path to another unique file

Each line in a given file 'a.txt' contains the directory/path to another unique file. Suppose we want to parse 'a.txt' line by line, extract the path as a string, and then use a tool such as vim to process the file at that path, and so on.
After going through this thread - Read a file line by line assigning the value to a variable, I wrote the following script, say 'open-file.sh', in bash (I'm new to it):
#!/bin/bash
while IFS='' read -r line || [[ -n "$line" ]]; do
vim -c ":q" -cq $line # Just open the file and close it using :q
done < "$1"
We would then run the above script as -
./open-file.sh a.txt
The problem is that although the path to a new file is correctly specified by $line, when vim opens the file, vim continues to receive the text contained in 'a.txt' as a command. How can I write a script that correctly obtains the path from 'a.txt', opens it using vim, and then continues parsing the remaining lines of 'a.txt'?
Replace:
vim -c ":q" -cq $line
With:
vim -c ":q" -cq "$line" </dev/tty
The redirection </dev/tty tells vim to take its standard input from the terminal. Without that, the standard input for vim is "$1".
Also, it is good practice to put $line in double-quotes to protect it from word splitting, etc.
Lastly, while vim is excellent for interactive work, if your end-goal is fully automated processing of each file, you might want to consider tools such as sed or awk.
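For instance, a fully non-interactive sketch using GNU sed's -i (in-place) option; the substitution itself is purely illustrative:

while IFS= read -r path; do
    sed -i 's/foo/bar/g' "$path"    # hypothetical edit applied to each listed file
done < a.txt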
Although I'm not sure of your ultimate goal, this shell command will execute vim once per line in a.txt:
xargs -o -n1 vim -c ':q' < a.txt
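GNU xargs's -o flag (--open-tty in recent findutils) reopens stdin as /dev/tty in the child process, which is what lets an interactive program like vim talk to the terminal, and -n1 invokes vim with one line's worth of arguments at a time.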
As explained in the comments to Read a file line by line assigning the value to a variable, the issue you're encountering is that vim is an interactive program and thus keeps reading input from stdin, which here is the file the loop is iterating over.
The problem was already mentioned in a comment under the answer you based your script on.
vim is consuming stdin, which is given to the loop by done < "$1". We can observe the same behavior in the following example:
$ while read i; do cat; done < <(seq 3)
2
3
<(seq 3) simulates a file with the three lines 1, 2, and 3. Instead of three silent iterations we get only one iteration and the output 2 and 3.
stdin is not only passed to read in the head of the loop, but also to cat in the body of the loop. Therefore read reads one line, the loop is entered, cat reads all remaining lines, stdin is empty, read has nothing to read anymore, the loop exits.
You could circumvent the problem by redirecting something to vim, however there is an even better way. You don't need the loop at all:
< "$1" xargs -d\\n -n1 vim -c :q -cq
xargs will execute vim once for every line in the file given by $1.

Read content from stdout in realtime

I have an external device that I need to power up and then wait for it to get started properly. The way I want to do this is by connecting to it via serial port (via Plink which is a command-line tool for PuTTY) and read all text lines that it prints and try to find the text string that indicates that it has been started properly. When that text string is found, the script will proceed.
The problem is that I need to read these text lines in real time. So far, I have only seen methods that call a command and then process its output once the command has finished. Alternatively, I could let Plink run in the background by appending an & to the command and redirecting the output to a file. But the problem is that this file will be empty at the beginning, so the script will just proceed directly. Is there maybe a way to wait for a new line to appear in a certain file and read it once it comes? Or does anyone have any other ideas on how to accomplish this?
Here is the best solution I have found so far:
./plink "connection_name" > new_file &
sleep 10 # Because I know that it will take a little while before the correct text string pops up but I don't know the exact time it will take...
while read -r line
do
# If $line is the correct string, proceed
done < new_file
However, I want the script to proceed directly when the correct text string is found.
So, in short, is there any way to access the output of a command continuously, before it has finished executing?
This might be what you're looking for:
while read -r line; do
# do your stuff here with $line
done < <(./plink "connection_name")
And if you need to sleep 10:
{
sleep 10
while read -r line; do
# do your stuff here with $line
done
} < <(./plink "connection_name")
The advantage of this solution compared to the following:
./plink "connection_name" | while read -r line; do
# do stuff here with $line
done
(that I'm sure someone will suggest soon) is that the while loop is not run in a subshell.
The construct <( ... ) is called Process Substitution.
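A quick sketch of why the subshell distinction matters, using seq as a stand-in for the plink output:

count=0
while read -r line; do
    count=$((count + 1))
done < <(seq 3)     # process substitution: the loop runs in the current shell
echo "$count"       # prints 3

count=0
seq 3 | while read -r line; do
    count=$((count + 1))
done                # pipeline: the loop runs in a subshell
echo "$count"       # prints 0; the increment was lost with the subshell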
Instead of using a regular file, use a named pipe.
mkfifo new_file
./plink "connection_name" > new_file &
while read -r line
do
# If $line is the correct string, proceed
done < new_file
The while loop will block until there is something to read from new_file, so there is no need to sleep.
(This is basically what process substitution does behind the scenes, but doesn't require any special shell support; POSIX shell does not support process substitution.)
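A slightly more careful sketch of the same approach, removing the FIFO when the script exits; the "device ready" marker is a hypothetical stand-in for whatever string your device prints on successful startup:

fifo=new_file
mkfifo "$fifo"
trap 'rm -f "$fifo"' EXIT              # clean up the FIFO on exit
./plink "connection_name" > "$fifo" &
while read -r line; do
    case $line in
        *"device ready"*) break ;;     # hypothetical success string
    esac
done < "$fifo"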
Newer versions of bash (4.2 or later) also support the lastpipe option, which allows the final command of a pipeline to execute in the current shell, making the simple solution
shopt -s lastpipe
./plink "connection_name" | while read -r line; do
# ...
done
possible. (Note that lastpipe only takes effect when job control is off, which is the default in non-interactive scripts.)

Read file line by line and perform action for each in bash

I have a text file, it contains a single word on each line.
I need a loop in bash to read each line, then perform a command each time it reads a line, using the input from that line as part of the command.
I am just not sure of the proper syntax to do this in bash. If anyone can help, it would be great. I need to use the line obtained from the text file as a parameter to call another function. The loop should stop when there are no more lines in the text file.
Pseudocode:
Read testfile.txt.
For each in testfile.txt
{
some_function linefromtestfile
}
How about:
while read -r line
do
echo "$line"
# or some_function "$line"
done < testfile.txt
As an alternative, using a file descriptor (#4 in this case):
file='testfile.txt'
exec 4<"$file"
while read -r -u4 t ; do
echo "$t"
done
exec 4<&- # close the descriptor when finished
Don't use cat! In a loop, cat is almost always wrong:
cat testfile.txt | while read -r line
do
# do something with "$line" here
done
and people might start to throw a UUoCA (Useless Use of Cat Award) at you. Redirect the file into the loop with done < testfile.txt instead.
while read -r line
do
nikto -Tuning x 1 6 -h "$line" -Format html -o NiktoSubdomainScans.html
done < testfile.txt
I tried this to automate a nikto scan of a list of domains after changing from the cat approach. It still just read the first line and ignored everything else.
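That symptom matches the stdin-consumption issue seen above with sed and vim: nikto presumably reads from stdin inside the loop and swallows the remaining lines. The usual fix is to cut the inner command off from the loop's stdin:

while read -r line
do
    nikto -Tuning x 1 6 -h "$line" -Format html -o NiktoSubdomainScans.html < /dev/null
done < testfile.txt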

How do I iterate over each line in a file with Bash?

Given a text file with multiple lines, I would like to iterate over each line in a Bash script. I had attempted to use cut, but cut does not accept \n (newline) as a delimiter.
This is an example of the file I am working with:
one
two
three
four
Does anyone know how I can loop through each line of this text file in Bash?
I found myself with the same problem; this works for me:
cat file.cut | cut -d$'\n' -f1
Or:
cut -d$'\n' -f1 file.cut
Use cat for concatenating or displaying. No need for it here.
file="/path/to/file"
while read -r line; do
echo "${line}"
done < "${file}"
Simply use:
echo -n `cut ...`
This suppresses the \n at the end.
cat FILE | while read -r line; do # 'line' is the variable name
echo "$line" # do something here
done
or (see comment):
while read -r line; do # 'line' is the variable name
echo "$line" # do something here
done < FILE
So, some really good (possibly better) answers have been provided already. But looking at the phrasing of the original question, which wanted to use a bash for loop, it amazed me that nobody mentioned a solution that changes the Internal Field Separator IFS. It's a pure bash solution, just like the accepted read line answer:
old_IFS=$IFS
IFS=$'\n'
for field in $(<filename)
do your_thing;
done
IFS=$old_IFS
(Note that IFS='\n' with ordinary quotes would set the separator to the two characters \ and n; the $'\n' form is required to get an actual newline.)
If you are sure that the output will always be newline-delimited, use head -n 1 in lieu of cut -f1 (note that you mentioned a for loop in a script, and your question was ultimately not script-related).
Many of the other answers, including the accepted one, span multiple lines unnecessarily. There is no need to do this over multiple lines or to change the default delimiter on the system.
Also, the solution provided by Ivan with -d$'\n' did not work for me on either Mac OS X or CentOS 7. Since his answer is four years old, I assume something must have changed in how the $'...' syntax is handled in this situation.
While loop with input redirection and read command.
You should not be using cut to perform a sequential iteration of each line in a file as cut was not designed to do this.
Print selected parts of lines from each FILE to standard output.
— man cut
TL;DR
You should use a while loop with the read -r command and redirect standard input to your file, inside a function scope where IFS is set to a newline, and use -E when using echo.
processFile() { # Function scope to prevent overwriting IFS globally
file="$1" # Any file that exists
local IFS=$'\n' # Split only on newlines, so spaces and tabs survive
while read -r line; do # read exits non-zero at EOF; -r preserves backslashes
echo -E "$line" # -E prints backslashes literally instead of interpreting them
done < "$file" # Input redirection lets read take the file from stdin
}
processFile /path/to/file
Iteration
In order to iterate over each line of a file, we can use a while loop. This will let us iterate as many times as we need to.
while <condition>; do
<body>
done
Getting our file ready to read
We can use the read command to store a single line from standard input in a variable. Before we can use that to read a line from our file, we need to redirect standard input to point to our file. We can do this with input redirection. According to the man pages for bash, the syntax for redirection is [fd]<file, where fd defaults to standard input (a.k.a. file descriptor 0). For a compound command such as a while loop, the redirection is placed after the closing done keyword:
while <condition>; do
<body>
done < /path/to/file
(Placing the redirection before the while keyword is a syntax error in bash; prefix redirections only work with simple commands.)
Reading the file and ending the loop
Now that our file can be read from standard input, we can use read. The syntax for read in our context is read [-r] var..., where -r preserves the \ (backslash) character instead of treating it as an escape character, and var is the name of the variable to store the input in. You can have multiple variables to store pieces of the input in, but we only need one to read an entire line. Along with this, to preserve any backslashes in output from echo, you will likely need to use the -E flag to disable the interpretation of backslash escapes. If you have any indentation (spaces or tabs) to preserve, you will need to temporarily set the IFS (Internal Field Separator) variable to just a newline ($'\n'); normally it is set to space, tab, newline.
main() {
local IFS=$'\n'
read -r line
echo -E "$line"
}
main
How do we use read to end our while loop?
There is really only one reliable way, that I know of, to determine when you've finished reading a file with read: check the exit value of read. If the exit value of read is 0 then we successfully read a line, if it is 1 or higher then we reached EOF (end of file). With that in mind, we can place the call to read in our while loop's condition section.
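As a quick aside, a tiny sketch of that exit-status behaviour:

printf 'one line\n' > /tmp/oneline
read -r line < /tmp/oneline && echo "success: $line"       # read returned 0
read -r line < /dev/null || echo "EOF, read returned $?"   # read returned 1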
processFile() {
# Could be any file you want, hardcoded or dynamic
file="$1"
local IFS=$'\n'
while read -r line; do
# Process line here
echo -E "$line"
done < "$file"
}
processFile /path/to/file1
processFile /path/to/file2
A visual breakdown of the above code via Explain Shell.
If I am executing a command and want to cut the output, but it has multiple lines, I found it helpful to do
echo $([command]) | cut [....]
This puts all the output of [command] on a single line that can be easier to process.
Note that cut actually uses TAB as its default field delimiter, and since cut processes its input line by line, '\n' can never act as a field delimiter.
If you still want to use cut, here are two ways:
cut -d^M -f1 file_cut
(I type ^M by pressing Ctrl+V and then Enter.) Another way is
cut -c 1- file_cut
Does that help?
