I'm running into what seems like a rookie error, but I can't find a solution.
I have a bash script, log.sh, which is:
#!/bin/bash
echo $1 >> log_out.txt
With a file made of filenames (taken from the output of "find"; it is named filenames.txt and contains 53 lines of absolute paths) I try:
./log.sh $(cat filenames.txt)
The only output I get in log_out.txt is the first line.
I need each line to be processed separately, because each one must be passed as an argument to a pipeline of two programs.
I checked:
that my lines are terminated with \n
using a simple echo without writing to a file
every variant of cat filenames.txt or $(< filenames.txt) I found on the internet
I'm sure it's something very dumb, but I can't see why I can't iterate past the first line :(
Thanks
It is because ./log.sh $(cat filenames.txt) word-splits the file into many arguments, and your script only echoes the first one ($1). Read the file line by line instead:
while IFS= read -r line; do
echo "$line";
done < filenames.txt
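To tie this back to the original pipeline use case, a minimal sketch (keeping log.sh exactly as posted) that feeds each line to the script as a single argument:
while IFS= read -r line; do
./log.sh "$line"    # each line arrives in the script as exactly one argument ($1)
done < filenames.txt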
Edit according to: https://mywiki.wooledge.org/DontReadLinesWithFor
Edit#2:
To preserve leading and trailing whitespace in the result, set IFS to the null string.
You could simplify further: skip the explicit variable and use the default $REPLY.
Source: http://wiki.bash-hackers.org/commands/builtin/read
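A minimal sketch of that $REPLY variant:
while read -r; do
echo "$REPLY"    # with no variable named, read stores the whole line, whitespace intact, in REPLY
done < filenames.txt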
You need to quote the command substitution. Otherwise $1 will just be the first word in the file.
./log.sh "$(cat filenames.txt)"
You should also quote the variable in the script, otherwise all the newlines will be converted to spaces.
echo "$1" >> log_out.txt
If you want to process each word separately, you can leave out the quotes
./log.sh $(cat filenames.txt)
and then use a loop in the script:
#!/bin/bash
for word in "$@"
do
echo "$word"
done >> log_out.txt
Note that this solution only works correctly when the file has one word per line and there are no wildcards in the words. See mywiki.wooledge.org/DontReadLinesWithFor for why this doesn't generalize to more complex lines.
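If you'd rather not change the script at all, xargs can run it once per line; a sketch assuming GNU xargs (the -d option is a GNU extension):
xargs -d '\n' -n 1 ./log.sh < filenames.txt    # one ./log.sh invocation per line, spaces preserved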
You can iterate over each line:
#!/bin/bash
for i in $*
do
echo "$i" >> log_out.txt
done
How would I write a bash script that parses a text file, finds any lines that contain the word command:, and saves each such line in its entirety to a text file?
The command would be
grep command: your_filename >> save_filename
Which is
#!/bin/bash
grep command: "$1" >> "$2"
Executed by
scriptname your_filename save_filename
Note that I'm using the append operator >> instead of the truncating >. The latter leaves a file containing only your last run, whereas appending adds new lines to the file if it already exists.
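A quick illustration of the difference (hypothetical file name):
echo first > out.txt     # create/truncate: out.txt contains only "first"
echo second >> out.txt   # append: out.txt now contains "first" and "second"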
If you are looking for a pure bash solution of this grep-like behaviour:
#!/bin/bash
# Usage: ./mygrep ERE_PATTERN FILENAME
while IFS= read -r line || [[ $line ]]; do
[[ $line =~ $1 ]] && echo "$line"
done <"$2"
(We iterate over lines of the file given in the second positional parameter, $2, in a pretty standard way, checking for a match with a pattern given as the first parameter inside a conditional expression with the =~ operator, printing all lines that match.)
Invoke it like:
./mygrep command: file
Although much slower than grep, one nice thing about this script is that it supports POSIX ERE (extended regular expressions) by default (you don't need to specify -E like you do in grep), e.g.:
./mygrep 'com.*:' file
./mygrep '^[[:digit:]]{3}' file
# etc
I have written a script to change file ownership based on an input list that is read in. My script works fine on directories without spaces in their names; however, it fails on directories with spaces in their names. I would also like to capture the output of the chown command to a file. Could anyone help?
Here is my script in ksh:
#!/usr/bin/ksh
newowner=eg27395
dirname=/home/sas/sastest/
logfile=chowner.log
date > $dir$logfile
command="chown $newowner:$newowner"
for fname in list
do
in="$dirname/$fname"
if [[ -e $in ]]
then
while read line
do
tmp=$(print "$line"|awk '{if (substr($2,1,1) == "/" ) print $2; if (substr($0,1,1) == "/" ) print }')
if [[ -e $tmp ]]
then
eval $command \"$tmp\"
fi
done < $in
else
echo "input file $fname is not present. Check file location in the script."
fi
done
A couple of other errors:
date > $dir$logfile -- no $dir variable defined
to safely read from a file: while IFS= read -r line
But to answer your main concern, don't try to build up the command so dynamically: don't bother with the $command variable, don't use eval, and quote the variable.
chmod "$newowner:$newowner" "$tmp"
The eval is stripping the quotes on this line
command="chown $newowner:$newowner"
In order to get the line to work with spaces you will need to provide backslashed quotes
command="chown \"$newowner:$newowner\""
This way the command that eval actually runs is
chown "$newowner:$newowner"
Also, you probably need quotes around this variable setting, although you'll need to tweak the syntax
tmp="$(print "$line"|awk '{if (substr($2,1,1) == "/" ) print $2; if (substr($0,1,1) == "/" ) print '})"
To capture the output you can add >file.out 2>&1, where file.out is the name of the file (redirect stdout to the file first, then send stderr to the same place). To get this working with eval as you are using it, you will need to backslash any special characters, in the same way you backslash the double quotes.
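For example, applied directly to the chown call inside the loop (file.out is a placeholder name; >> appends so each iteration's output is kept):
chown "$newowner:$newowner" "$tmp" >> file.out 2>&1   # stdout first, then stderr to the same place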
Your example code suggests that list is a "meta" file: a list of files, each of which contains a list of files to be changed. When you only have one file you can remove the outer loop.
When list is a variable with filenames you need echo "${list}"| while ....
It is not completely clear why you sometimes want to start with the third field. It seems that sometimes you have 2 words before the filename and want them to be ignored. Cutting the string on spaces becomes a problem when your filenames have spaces as well. The solution is look for a space followed by a slash: that space is not part of a filename and everything up to that space can be deleted.
newowner=eg27395
# The slash on the end is not really part of the dir name, doesn't matter for most commands
dirname=/home/sas/sastest
logfile=chowner.log
# Add braces, quotes and change dir into dirname
date > "${dirname}/${logfile}"
# Line with command not needed
# Is list an inputfile? It is streamed using "< list" at the end of while []; do .. done
while IFS= read -r fname; do
in="${dirname}/${fname}"
# Quotes are important
if [[ -e "$in" ]]; then
# Get the filenames with a sed construction, and hand them to chown with xargs.
# The sed construction handles lines in which a space followed by a slash
# marks where the filename starts.
# sed uses # as its delimiter to avoid escaping the slashes.
# Do not redirect the output here but after the loop.
# xargs -d '\n' (GNU) splits on newlines only, so spaces inside names survive.
sed 's#.* /#/#' "${in}" | xargs -d '\n' chown "${newowner}:${newowner}"
else
echo "input file ${fname} is not present. Check file location in the script."
fi
done < list >> "${dirname}/${logfile}"
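To illustrate what the sed expression does on a hypothetical input line:
echo 'rw-r--r-- user /data/my dir/file one.txt' | sed 's#.* /#/#'
# prints: /data/my dir/file one.txt -- everything up to the last space-plus-slash is dropped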
I'm new to UNIX and have this really simple problem:
I have a text-file (input.txt) containing a string in each line. It looks like this:
House
Monkey
Car
And inside my shell script I need to read this input file line by line to get to a variable like this:
things="House,Monkey,Car"
I know this sounds easy, but I just couldn't find any simple solution for this. My closest attempt so far:
#!/bin/sh
things=""
addToString() {
things="${things},$1"
}
while read line; do addToString $line ;done <input.txt
echo $things
But this won't work. From my Google research I thought the while loop would create a new subshell, but I was wrong there (see the comment section). Nevertheless, the variable "things" was still not available in the echo later on. (I cannot just put the echo inside the while loop, because I need to work with that string later on.)
Could you please help me out here? Any help will be appreciated, thank you!
What you proposed works fine! I've only made two changes here: adding missing quotes, and handling the empty-string case.
things=""
addToString() {
if [ -n "$things" ]; then
things="${things},$1"
else
things="$1"
fi
}
while read -r line; do addToString "$line"; done <input.txt
echo "$things"
If you were piping into while read, this would create a subshell, and that would eat your variables. You aren't piping -- you're doing a <input.txt redirection. No subshell, code works without changes.
That said, there are better ways to read lists of items into shell variables. On any version of bash after 3.0:
IFS=$'\n' read -r -d '' -a things <input.txt # read into an array
printf -v things_str '%s,' "${things[@]}" # write array to a comma-separated string
echo "${things_str%,}" # print that string w/o trailing comma
...on bash 4, that first line can be:
readarray -t things <input.txt # read into an array
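Putting the bash-4 version together with the question's input.txt:
readarray -t things <input.txt              # things=(House Monkey Car)
printf -v things_str '%s,' "${things[@]}"   # things_str="House,Monkey,Car,"
echo "${things_str%,}"                      # prints: House,Monkey,Car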
This is not a shell solution, but the truth is that pure-shell solutions are often excessively long and verbose. For string processing like this it is better to use the specialized tools that are part of the "default" Unix environment.
sed ':b;N;$!bb;s/\n/,/g' < input.txt
If you want to omit empty lines, then:
sed ':b;N;$!bb;s/\n\n*/,/g' < input.txt
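For example, on the question's input:
printf 'House\nMonkey\nCar\n' | sed ':b;N;$!bb;s/\n/,/g'
# prints: House,Monkey,Car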
Speaking about your solution, it should work, but you should really always use quotes where applicable. E.g. this works for me:
things=""
while read line; do things="$things,$line"; done < input.txt
echo "$things"
(Of course, there is an issue with this code, as it outputs a leading comma. If you want to skip empty lines, just add an if check.)
This may or may not work, depending on the shell you are using. On my Ubuntu 14.04/x64, it works with both bash and dash.
To make it more reliable and independent of the shell's behavior, you can try putting the whole block into an explicit subshell, using parentheses. For example:
(
things=""
addToString() {
things="${things},$1"
}
while read line; do addToString $line ;done
echo $things
) < input.txt
P.S. You can use something like this to avoid the initial comma. Without bash extensions (using short-circuit logical operators instead of the if for shortness):
test -z "$things" && things="$1" || things="${things},${1}"
Or with bash extensions:
things="${things}${things:+,}${1}"
P.P.S. How I would have done it:
tr '\n' ',' < input.txt | sed 's!,$!\n!'
You can do this too:
#!/bin/bash
while read -r i
do
[[ $things == "" ]] && things="$i" || things="$things","$i"
done < <(grep . input.txt)
echo "$things"
Output:
House,Monkey,Car
N.B:
Used grep to deal with empty lines and with the possibility of there being no newline at the end of the file. (A plain while read will fail to read the last line if there is no newline at the end of the file.)
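An alternative sketch, without the grep preprocessing, is to test the variable after read fails so the last line survives a missing trailing newline:
while read -r i || [[ -n $i ]]; do   # keeps the last line even without a trailing newline
[[ -z $i ]] && continue              # skip empty lines, as grep . does
[[ $things == "" ]] && things="$i" || things="$things","$i"
done < input.txt
echo "$things"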
Given a text file with multiple lines, I would like to iterate over each line in a Bash script. I had attempted to use cut, but cut does not accept \n (newline) as a delimiter.
This is an example of the file I am working with:
one
two
three
four
Does anyone know how I can loop through each line of this text file in Bash?
I found myself in the same problem, this works for me:
cat file.cut | cut -d$'\n' -f1
Or:
cut -d$'\n' -f1 file.cut
Use cat for concatenating or displaying. No need for it here.
file="/path/to/file"
while read line; do
echo "${line}"
done < "${file}"
Simply use:
echo -n `cut ...`
This suppresses the \n at the end
cat FILE|while read line; do # 'line' is the variable name
echo "$line" # do something here
done
or (see comment):
while read line; do # 'line' is the variable name
echo "$line" # do something here
done < FILE
So, some really good (possibly better) answers have been provided already. But given that the original question asked for a Bash for-loop, it amazed me that nobody mentioned a solution that changes the field separator IFS. It's a pure Bash solution, just like the accepted read line answer.
old_IFS=$IFS
IFS=$'\n'
for field in $(<filename)
do your_thing;
done
IFS=$old_IFS
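For example, with a hypothetical file in which one line contains a space:
printf 'one\ntwo words\nthree\n' > sample.txt
old_IFS=$IFS
IFS=$'\n'
for field in $(<sample.txt); do
echo "[$field]"   # prints [one], [two words], [three] -- the space survives
done
IFS=$old_IFS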
If you are sure that the output will always be newline-delimited, use head -n 1 in lieu of cut -f1 (note that you mentioned a for loop in a script and your question was ultimately not script-related).
Many of the other answers, including the accepted one, have multiple lines unnecessarily. No need to do this over multiple lines or to change the default delimiter on the system.
Also, the solution provided by Ivan with -d$'\n' did not work for me on either Mac OS X or CentOS 7. Since his answer is four years old, I assume something must have changed in the handling of the $ character for this situation.
While loop with input redirection and read command.
You should not be using cut to perform a sequential iteration of each line in a file as cut was not designed to do this.
Print selected parts of lines from each FILE to standard output.
— man cut
TL;DR
You should use a while loop with the read -r command and redirect standard input to your file inside a function scope where IFS is set to a newline, and use -E when using echo.
processFile() { # Function scope to prevent overwriting IFS globally
file="$1" # Any file that exists
local IFS=$'\n' # Newline-only IFS preserves leading spaces and tabs
while read -r line; do # Read exits with 1 when done; -r allows \
echo -E "$line" # -E prints \ literally instead of interpreting escapes
done < "$file" # Input redirection allows us to read the file from stdin
}
processFile /path/to/file
Iteration
In order to iterate over each line of a file, we can use a while loop. This will let us iterate as many times as we need to.
while <condition>; do
<body>
done
Getting our file ready to read
We can use the read command to store a single line from standard input in a variable. Before we can use that to read a line from our file, we need to redirect standard input to point to our file. We can do this with input redirection. According to the man pages for bash, the syntax for redirection is [fd]<file where fd defaults to standard input (a.k.a file descriptor 0). We can place this before or after our while loop.
while <condition>; do
<body>
done < /path/to/file
# or the non-traditional way
</path/to/file while <condition>; do
<body>
done
Reading the file and ending the loop
Now that our file can be read from standard input, we can use read. The syntax for read in our context is read [-r] var... where -r preserves the \ (backslash) character, instead of using it as an escape sequence character, and var is the name of the variable to store the input in. You can have multiple variables to store pieces of the input in, but we only need one to read an entire line. Along with this, to preserve any backslashes in any output from echo you will likely need to use the -E flag to disable the interpretation of backslash escapes. If you have any indentation (spaces or tabs), you will need to temporarily change the IFS (Internal Field Separator) variable to a newline only ($'\n'); normally it is set to space, tab, and newline.
main() {
local IFS=$'\n'
read -r line
echo -E "$line"
}
main
How do we use read to end our while loop?
There is really only one reliable way, that I know of, to determine when you've finished reading a file with read: check the exit value of read. If the exit value of read is 0 then we successfully read a line; if it is 1 or higher, we reached EOF (end of file). With that in mind, we can place the call to read in our while loop's condition section.
processFile() {
# Could be any file you want, hardcoded or dynamic
file="$1"
local IFS=$'\n'
while read -r line; do
# Process line here
echo -E "$line"
done < "$file"
}
processFile /path/to/file1
processFile /path/to/file2
A visual breakdown of the above code via Explain Shell.
If I am executing a command and want to cut the output but it has multiple lines I found it helpful to do
echo $([command]) | cut [....]
This puts all the output of [command] on a single line, which can be easier to process.
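For instance, with a hypothetical multi-line command:
echo $(printf 'one\ntwo\nthree\n') | cut -d ' ' -f 2
# prints: two -- the unquoted substitution collapses the lines into "one two three"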
Keep in mind that cut processes its input one line at a time, so '\n' effectively acts as its record separator (the default field delimiter is a tab).
If you want to use cut, I have two ways:
cut -d^M -f1 file_cut
I type ^M by pressing Ctrl+V and then Enter. Another way is
cut -c 1- file_cut
Does that help?
My input file's contents are:
welcome
welcome1
welcome2
My script is:
for groupline in `cat file`
do
echo $groupline;
done
I got the following output:
welcome
welcome1
welcome2
Why doesn't it print the empty line?
You need to set IFS to newline \n
IFS=$"\n"
for groupline in $(cat file)
do
echo "$groupline";
done
Or put double quotes around the substitution (the whole file then becomes a single string, and echo prints it with its newlines, including the blank line, intact):
for groupline in "$(cat file)"
do
echo "$groupline";
done
Without meddling with IFS, the "proper" way is to use a while read loop:
while read -r line
do
echo "$line"
done <"file"
Because you're doing it all wrong. You want while not for, and you want read, not cat:
while read groupline
do
echo "$groupline"
done < file
The solution ghostdog74 provided is helpful, but has a flaw.
IFS cannot be set with double quotes here, since $"\n" does not expand the escape (at least in Mac OS X); it needs single quotes, like:
IFS=$'\n'
It's nice but not dash-compatible, maybe this is better:
IFS='
'
The blank line will be eaten in the following program:
IFS='
'
for line in $(cat file)
do
echo "$line"
done
But you cannot add double quotes around $(cat file); that treats the whole file as one single string.
for line in "$(cat file)"
If you want blank lines to be processed as well, use the following:
while read line
do
echo "$line"
done < file
Using IFS=$"\n" and var=$(cat text.txt) removes all the "n" characters from the output echo $var