Read a variable line by line in shell (dash included)

I have a variable that stores logs. What I want is to read the variable line by line, and keep or remove each line based on some filtering.
The problem is that my code works in bash, but not in dash.
This is my code:
filtered_logs=""
while IFS= read -r line
do
...(store line in $filtered_logs if it passes the filter)
done <<< "$logs"
logs="$filtered_logs"
This code works in bash. But ' done <<< "$logs" ' does not work in dash (which is the default sh on Ubuntu). It's homework, and I need it to work on every shell possible.
What I tried was:
filtered_logs=""
echo "$logs" |
while IFS= read -r line
do
...(store line in $filtered_logs if it passes the filter)
done
logs="$filtered_logs"
But if I store something in $filtered_logs from inside the while loop, it doesn't work. I also can't step into the while loop with my debugger. (I think the whole while loop runs as a new process, since I run it through a |.)
My question is: how do I make this work? Thank you.

This is easy to do with plain POSIX constructs in your case:
logs=$(
  # printf '%s\n' guarantees a trailing newline, so read also sees the last line
  printf '%s\n' "$logs" |
  while IFS= read -r line
  do
    echo "store this to logs" # whatever you print here ends up back in $logs
  done
)
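For a concrete filter, a POSIX case pattern works well. A minimal sketch, where the *ERROR* pattern is a made-up stand-in for whatever your real filter is:
logs=$(
  printf '%s\n' "$logs" |
  while IFS= read -r line
  do
    case $line in
      *ERROR*) printf '%s\n' "$line" ;; # keep lines that match the filter
      *) ;; # drop everything else
    esac
  done
)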

Related

Why is the first input being skipped

#!/bin/bash
while read line;
do
cat
done
Hi, I'm trying to get the program to print each line given on stdin. Why isn't the first input being printed here? The second and following inputs work fine.
Thanks
Edit: I fixed it by adding a cat before the loop. Can someone explain why it was needed though?
#!/bin/bash
while read -r line;
do
echo "${line}"
done
This is a working version of your script. In your original script, the first line is placed in the variable ${line} by the bash builtin read. cat then consumes and prints everything remaining on stdin in a single iteration, so the first line, sitting unused in ${line}, is never output.
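Incidentally, if the goal is just to copy stdin to stdout, no loop is needed at all; a single cat does it, first line included:
#!/bin/bash
cat # copies all of stdin to stdout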

Read an input file in shell script and store its lines in a variable

I'm new to UNIX and have this really simple problem:
I have a text-file (input.txt) containing a string in each line. It looks like this:
House
Monkey
Car
And inside my shell script I need to read this input file line by line to get to a variable like this:
things="House,Monkey,Car"
I know this sounds easy, but I just couldn't find any simple solution for this. My closest attempt so far:
#!/bin/sh
things=""
addToString() {
things="${things},$1"
}
while read line; do addToString $line ;done <input.txt
echo $things
But this won't work. Based on my Google research, I thought the while loop would create a new subshell, but I was wrong there (see the comment section). Nevertheless, the variable "things" was still not available in the echo later on. (I can't just put the echo inside the while loop, because I need to work with that string later on.)
Could you please help me out here? Any help will be appreciated, thank you!
What you proposed works fine! I've only made two changes here: Adding missing quotes, and handling the empty-string case.
things=""
addToString() {
if [ -n "$things" ]; then
things="${things},$1"
else
things="$1"
fi
}
while read -r line; do addToString "$line"; done <input.txt
echo "$things"
If you were piping into while read, this would create a subshell, and that would eat your variables. You aren't piping -- you're doing a <input.txt redirection. No subshell, code works without changes.
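A quick way to see the difference, as a minimal sketch (the counts assume bash; ksh runs the last pipeline stage in the current shell):
n=0
printf 'a\nb\n' | while read -r line; do n=$((n+1)); done
echo "$n" # prints 0 in bash: the piped loop ran in a subshell

n=0
while read -r line; do n=$((n+1)); done <input.txt
echo "$n" # prints the line count: the redirected loop ran in the current shell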
That said, there are better ways to read lists of items into shell variables. On any version of bash after 3.0:
IFS=$'\n' read -r -d '' -a things <input.txt # read into an array
printf -v things_str '%s,' "${things[@]}"    # write array to a comma-separated string
echo "${things_str%,}"                       # print that string w/o trailing comma
...on bash 4, that first line can be:
readarray -t things <input.txt # read into an array
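Once the lines are in an array, the join itself is a one-liner, because "${things[*]}" separates elements with the first character of IFS. A sketch, with the subshell keeping the IFS change local:
readarray -t things <input.txt
( IFS=,; printf '%s\n' "${things[*]}" ) # prints House,Monkey,Car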
This is not a shell solution, but the truth is that pure-shell solutions are often excessively long and verbose. For string processing like this, it is better to use the specialized tools that are part of the "default" Unix environment.
sed ':b;N;$!bb;s/\n/,/g' < input.txt
If you want to omit empty lines, then:
sed ':b;N;$!bb;s/\n\n*/,/g' < input.txt
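For this particular join, paste (also part of the standard toolset) does it in one call:
paste -s -d, input.txt # join all lines with commas: House,Monkey,Car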
Speaking about your solution, it should work, but you should really always use quotes where applicable. E.g. this works for me:
things=""
while read line; do things="$things,$line"; done < input.txt
echo "$things"
(Of course, there is an issue with this code, as it outputs a leading comma. If you want to skip empty lines, just add an if check.)
This might/might not work, depending on the shell you are using. On my Ubuntu 14.04/x64, it works with both bash and dash.
To make it more reliable and independent from the shell's behavior, you can try to put the whole block into a subshell explicitly, using the (). For example:
(
things=""
addToString() {
things="${things},$1"
}
while read line; do addToString $line ;done
echo $things
) < input.txt
P.S. You can use something like this to avoid the initial comma. Without bash extensions (using short-circuit logical operators instead of the if for shortness):
test -z "$things" && things="$1" || things="${things},${1}"
Or with bash extensions:
things="${things}${things:+,}${1}"
P.P.S. How I would have done it:
tr '\n' ',' < input.txt | sed 's!,$!\n!'
You can do this too:
#!/bin/bash
while read -r i
do
[[ $things == "" ]] && things="$i" || things="$things","$i"
done < <(grep . input.txt)
echo "$things"
Output:
House,Monkey,Car
N.B:
Used grep to deal with empty lines and with the possibility that the file has no newline at the end. (A plain while read will fail to read the last line if there is no newline at the end of the file.)
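A grep-free alternative for the missing-final-newline case is the read ... || [ -n "$line" ] idiom: read returns nonzero on an unterminated last line but still fills the variable. A POSIX sketch:
things=""
while IFS= read -r line || [ -n "$line" ]; do
  [ -n "$line" ] || continue # skip empty lines, as grep . did above
  things="${things:+$things,}$line"
done < input.txt
echo "$things"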

Read content from stdout in realtime

I have an external device that I need to power up and then wait for it to get started properly. The way I want to do this is by connecting to it via serial port (via Plink which is a command-line tool for PuTTY) and read all text lines that it prints and try to find the text string that indicates that it has been started properly. When that text string is found, the script will proceed.
The problem is that I need to read these text lines in real time. So far, I have only seen methods that call a command and then process its output once the command has finished. Alternatively, I could let Plink run in the background by appending an & to the command and redirecting the output to a file. But the problem is that this file will be empty at the beginning, so the script will just proceed directly. Is there maybe a way to wait for a new line in a certain file and read it once it comes? Or does anyone have any other ideas for how to accomplish this?
Here is the best solution I have found so far:
./plink "connection_name" > new_file &
sleep 10 # Because I know that it will take a little while before the correct text string pops up but I don't know the exact time it will take...
while read -r line
do
# If $line is the correct string, proceed
done < new_file
However, I want the script to proceed directly when the correct text string is found.
So, in short, is there any way to access the output of a command continuously, before it has finished executing?
This might be what you're looking for:
while read -r line; do
# do your stuff here with $line
done < <(./plink "connection_name")
And if you need to sleep 10:
{
sleep 10
while read -r line; do
# do your stuff here with $line
done
} < <(./plink "connection_name")
The advantage of this solution compared to the following:
./plink "connection_name" | while read -r line; do
# do stuff here with $line
done
(that I'm sure someone will suggest soon) is that the while loop is not run in a subshell.
The construct <( ... ) is called Process Substitution.
Hope this helps!
Instead of using a regular file, use a named pipe.
mkfifo new_file
./plink "connection_name" > new_file &
while read -r line
do
# If $line is the correct string, proceed
done < new_file
The while loop will block until there is something to read from new_file, so there is no need to sleep.
(This is basically what process substitution does behind the scenes, but doesn't require any special shell support; POSIX shell does not support process substitution.)
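If this runs inside a script, it is good hygiene to remove the FIFO when the script exits; a common pattern (a sketch) is:
mkfifo new_file
trap 'rm -f new_file' EXIT # delete the FIFO on exit, however the script ends
./plink "connection_name" > new_file &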
Newer versions of bash (4.2 or later) also support an option to allow the final command of a pipeline to execute in the current shell, making the simple solution
shopt -s lastpipe
./plink "connection_name" | while read -r line; do
# ...
done
possible.
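Note that lastpipe only takes effect when job control is off, which is the default in non-interactive scripts. A minimal sketch:
#!/bin/bash
shopt -s lastpipe # no effect in an interactive shell unless job control is off (set +m)
count=0
printf 'a\nb\nc\n' | while read -r line; do count=$((count+1)); done
echo "$count" # prints 3: the loop body ran in the current shell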

Read file line by line and perform action for each in bash

I have a text file, it contains a single word on each line.
I need a loop in bash to read each line, then perform a command each time it reads a line, using the input from that line as part of the command.
I am just not sure of the proper syntax to do this in bash. If anyone can help, it would be great. I need to use the line obtained from the text file as a parameter to call another function. The loop should stop when there are no more lines in the text file.
Pseudocode:
Read testfile.txt.
For each in testfile.txt
{
some_function linefromtestfile
}
How about:
while read -r line
do
  echo "$line"
  # or some_function "$line"
done < testfile.txt
As an alternative, using a file descriptor (#4 in this case):
file='testfile.txt'
exec 4<"$file"
while read -r -u4 t ; do
echo "$t"
done
Don't use cat! In a loop, cat is almost always wrong, e.g.
cat testfile.txt | while read -r line
do
# do something with "$line" here
done
and people might start to throw a Useless Use of Cat Award (UUOCA) at you.
while read line
do
nikto -Tuning x 1 6 -h $line -Format html -o NiktoSubdomainScans.html
done < testfile.txt
I tried this to automate a nikto scan of a list of domains, after switching from the cat approach. It still only read the first line and ignored everything else.
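A likely culprit: if nikto itself reads from standard input (an assumption here, but this is exactly the cat symptom from earlier), it consumes the rest of testfile.txt on the first iteration. Redirecting its stdin away is the usual fix, sketched below:
while read -r line
do
  # </dev/null keeps the command from swallowing the remaining input lines
  nikto -Tuning x 1 6 -h "$line" -Format html -o NiktoSubdomainScans.html < /dev/null
done < testfile.txt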

How do I iterate over each line in a file with Bash?

Given a text file with multiple lines, I would like to iterate over each line in a Bash script. I had attempted to use cut, but cut does not accept \n (newline) as a delimiter.
This is an example of the file I am working with:
one
two
three
four
Does anyone know how I can loop through each line of this text file in Bash?
I found myself with the same problem; this works for me:
cat file.cut | cut -d$'\n' -f1
Or:
cut -d$'\n' -f1 file.cut
Use cat for concatenating or displaying. No need for it here.
file="/path/to/file"
while read line; do
echo "${line}"
done < "${file}"
Simply use:
echo -n `cut ...`
This suppresses the \n at the end
cat FILE | while read line; do # 'line' is the variable name
echo "$line" # do something here
done
or (see comment):
while read line; do # 'line' is the variable name
echo "$line" # do something here
done < FILE
So, some really good (possibly better) answers have been provided already. But looking at the phrasing of the original question, which wanted a bash for loop, it amazed me that nobody mentioned a solution that changes the field separator IFS. It's a pure bash solution, just like the accepted read line answer.
old_IFS=$IFS
IFS=$'\n' # $'\n' is a real newline in bash; plain '\n' would be the two characters \ and n
for field in $(<filename)
do your_thing;
done
IFS=$old_IFS
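One caveat: the unquoted $(<filename) is still subject to globbing, so a line like *.log would be expanded against the current directory. Wrapping the loop in set -f / set +f guards against that (a sketch):
old_IFS=$IFS
IFS=$'\n'
set -f # disable globbing while the lines are word-split
for field in $(<filename)
do your_thing;
done
set +f
IFS=$old_IFS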
If you are sure that the output will always be newline-delimited, use head -n 1 in lieu of cut -f1 (note that you mentioned a for loop in a script, and your question was ultimately not script-related).
Many of the other answers, including the accepted one, are unnecessarily spread over multiple lines. There is no need to do this over multiple lines or to change the default delimiter on the system.
Also, the solution provided by Ivan with -d$'\n' did not work for me on either Mac OS X or CentOS 7. Since his answer is four years old, I assume something must have changed in how the $ character is handled in this situation.
While loop with input redirection and read command.
You should not be using cut to perform a sequential iteration of each line in a file as cut was not designed to do this.
Print selected parts of lines from each FILE to standard output.
— man cut
TL;DR
You should use a while loop with the read -r command and redirect standard input to your file, inside a function scope where IFS is cleared (set to the empty string) and with -E passed to echo.
processFile() {           # Function scope to prevent overwriting IFS globally
  file="$1"               # Any file that exists
  local IFS=              # Empty IFS preserves leading/trailing spaces and tabs
  while read -r line; do  # read exits nonzero at EOF; -r preserves backslashes
    echo -E "$line"       # -E prints backslashes literally instead of interpreting escapes
  done < "$file"          # Input redirection lets read take the file from stdin
}
processFile /path/to/file
Iteration
In order to iterate over each line of a file, we can use a while loop. This will let us iterate as many times as we need to.
while <condition>; do
<body>
done
Getting our file ready to read
We can use the read command to store a single line from standard input in a variable. Before we can use it to read a line from our file, we need to redirect standard input to point to our file. We can do this with input redirection. According to the bash man pages, the syntax for redirection is [fd]<file, where fd defaults to standard input (a.k.a. file descriptor 0). For a compound command such as a while loop, the redirection goes after the closing done (placing it in front of the loop is a syntax error in bash):
while <condition>; do
<body>
done < /path/to/file
Reading the file and ending the loop
Now that our file can be read from standard input, we can use read. The syntax for read in our context is read [-r] var..., where -r preserves the \ (backslash) character instead of treating it as an escape character, and var is the name of the variable to store the input in. You can have multiple variables to store pieces of the input in, but we only need one to read an entire line. Along with this, to preserve any backslashes in output from echo you will likely need to use the -E flag to disable the interpretation of backslash escapes. If the lines have indentation (leading spaces or tabs) that you want to keep, you will need to temporarily clear the IFS (Internal Field Separator) variable, i.e. set it to the empty string; normally it is set to " \t\n", and read strips those characters from the ends of the line.
main() {
  local IFS= # clear IFS so leading/trailing whitespace survives
  read -r line
  echo -E "$line"
}
main
How do we use read to end our while loop?
There is really only one reliable way, that I know of, to determine when you've finished reading a file with read: check the exit status of read. If the exit status is 0, we successfully read a line; if it is nonzero, we reached EOF (end of file). With that in mind, we can place the call to read in our while loop's condition section.
processFile() {
  # Could be any file you want, hardcoded or dynamic
  file="$1"
  local IFS= # clear IFS so indentation is preserved
  while read -r line; do
    # Process line here
    echo -E "$line"
  done < "$file"
}
processFile /path/to/file1
processFile /path/to/file2
A visual breakdown of the above code via Explain Shell.
If I am executing a command and want to cut its output, but it has multiple lines, I found it helpful to do
echo $([command]) | cut [....]
This puts all the output of [command] on a single line, which can be easier to process.
My opinion is that "cut" uses '\n' as its default delimiter.
If you want to use cut, I have two ways:
cut -d^M -f1 file_cut
I type ^M by pressing Ctrl+V and then Enter. Another way is
cut -c 1- file_cut
Does that help?
