bash command line append to variable will prepend instead

In shell scripts I usually append a string to a variable with "${variable} end". However, I have a file "file.txt" in which I want "end" appended to every line. So on the command line I do, for instance, for i in `cat file.txt`; do echo "${i} end"; done. But the word "end" (plus the space) is not appended but prepended. The same thing happens when I use a while loop. Could anybody tell me what is going on here? I am using GNU bash version 4.2.37 on Linux Mint 13 64-bit (both Cinnamon and MATE).
Thank you for any help!

You should use a while loop instead of a for loop, as explained here.
while IFS= read -r line
do
echo "$line end"
done < "file.txt"

It may just be your syntax - don't forget do. That is:
for i in `cat file.txt`; do echo "${i} end"; done
If you're asking how to make a new file with "end" appended to each line, try this:
for i in `cat file.txt`; do echo "${i} end" >> some_new_file; done

Is using a loop the only option? If all you want to do is append something to the end of every line, it's probably easier to use sed:
sed -i -e 's/.*/& end/' file.txt
(Note that -ie written as a single option is parsed by GNU sed as -i with backup suffix e, which leaves a stray file.txte behind.)
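If you'd rather not edit the file in place, an awk equivalent (a sketch, printing to standard output) is:
awk '{print $0 " end"}' file.txt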

Related

Unix Bash content of a file as argument stops at first line

I'm having an issue with something that seems like a rookie error, but I can't find a solution.
I have a bash script : log.sh
which is :
#!/bin/bash
echo $1 >> log_out.txt
And with a file of filenames (taken from the output of "find"; it is named filenames.txt and contains 53 lines of absolute paths) I try:
./log.sh $(cat filenames.txt)
The only output I get in log_out.txt is the first line.
I need each line to be processed separately as I need to put them in arguments in a pipeline with 2 softwares.
I checked for:
my lines being terminated with \n
using a simple echo without writing to a file
all the sorts of cat filenames.txt or (< filenames.txt) found on the internet
I'm sure it's a very dumb thing, but I can't find why I can't iterate over more than one line :(
Thanks
It is because with ./log.sh $(cat filenames.txt) the file's contents are split into one argument per word, and your script only echoes the first one, $1.
while IFS= read -r line; do
echo "$line";
done < filenames.txt
Edit according to: https://mywiki.wooledge.org/DontReadLinesWithFor
Edit #2:
To preserve leading and trailing whitespace in the result, set IFS to the null string.
You could simplify further, skip the explicit variable, and use the default $REPLY.
Source: http://wiki.bash-hackers.org/commands/builtin/read
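A minimal sketch of that variant: when read is given no variable name, the whole line, leading and trailing whitespace included, lands in $REPLY.
while read -r; do
echo "$REPLY"
done < filenames.txt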
You need to quote the command substitution. Otherwise $1 will just be the first word in the file.
./log.sh "$(cat filenames.txt)"
You should also quote the variable in the script, otherwise all the newlines will be converted to spaces.
echo "$1" >> log_out.txt
If you want to process each word separately, you can leave out the quotes
./log.sh $(cat filenames.txt)
and then use a loop in the script:
#!/bin/bash
for word in "$@"
do
echo "$word"
done >> log_out.txt
Note that this solution only works correctly when the file has one word per line and there are no wildcards in the words. See mywiki.wooledge.org/DontReadLinesWithFor for why this doesn't generalize to more complex lines.
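If the lines can contain spaces or glob characters, a more robust sketch (an assumption about how the script might be restructured, not the asker's original) is to pass the file name itself and read it inside the script:
#!/bin/bash
# log.sh: read the file named by $1 line by line and append each line to the log
while IFS= read -r line; do
printf '%s\n' "$line"
done < "$1" >> log_out.txt
Invoked as ./log.sh filenames.txt, this handles all 53 paths regardless of their content.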
You can iterate over each argument:
#!/bin/bash
for i in $*
do
echo "$i" >> log_out.txt
done

Reading and writing line by line in a bash script

After searching online I was able to figure out how to read a file line by line:
while read p; do
echo $p
done < file.txt
But I would actually like to modify the line in the file.
For example:
while read p; do
if condition
then
echo $p | perl -i -pe 's/a/b/'
fi
done < file.txt
However this doesn't actually modify the file.
Update   A far better version of bash code added. Thanks to Charles Duffy for comments.
Your Perl one-liner takes a line piped into it by echo $p |, getting its standard input that way. It doesn't do anything with the file itself, so the -i flag has no effect. The -p makes it print to the standard output stream. So that whole line, echo ..., doesn't touch the file.
You can redirect the output to a new file and then move that to overwrite file.txt. Here is a simple minded example, that appends each line to a new file. For better bash code see the update below.
while read p; do
if condition
then
echo $p | perl -pe 's/a/b/' >> temp_out.txt
else
echo $p >> temp_out.txt
fi
done < file.txt
mv temp_out.txt file.txt
We have to add the else branch so that unmodified lines are also appended. Note that in general we cannot replace just some lines in place; the whole file has to be rewritten.
If this is all that the script does you can do it with a very simple one-liner, see the end. If more work is done you can also put it all in a Perl script but I take it that there may be other good reasons for a bash script.
Update   A much better version of the above. See read and echo under Builtins in the Bash manual.
Appending each line opens the file anew each time, and there is no need for that.
Just redirect at the end of the loop, much as you would in the terminal.
read uses backslash for escaping and removes it from the input. Turn that off with -r.
Trailing whitespace is removed as part of breaking the line into words. Suppress this by clearing the variable that controls which characters are used for splitting: IFS=
The echo $p can do all kinds of unintended things. A formatted print is better, printf '%s\n' "$p", or at least echo "$p".
With this,
while IFS= read -r p; do
if condition
then
echo "$p" | perl -pe 's/a/b/'
else
echo "$p"
fi
done < file.txt > temp_out.txt
mv temp_out.txt file.txt
Finally, if the sole purpose of the Perl one-liner were to run a simple substitution, it is much better to simply do that in the shell itself than to have a pipeline and run a whole new process for each line.
echo "${p//a/b}"
Thanks to Charles Duffy for raising all these points in comments.
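Putting those points together, a sketch of the pure-shell version of the loop, using the same rewrite-and-move pattern but no per-line perl process:
while IFS= read -r p; do
if condition
then
printf '%s\n' "${p//a/b}"
else
printf '%s\n' "$p"
fi
done < file.txt > temp_out.txt
mv temp_out.txt file.txt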
A few comments on Perl one-liners. See documentation at perlrun.
The command perl -e '...' executes any valid Perl code between ''. When we add the -n or -p switch it also reads standard input and executes that code on one line of it at a time, where -p also prints out each line after it's processed. The standard input can be supplied to it from a file,
perl -pe '...' input.txt
in which case adding -i flag will result in the file being changed in-place. Or, the input can be piped into it, for example
echo "input text" | perl -pe '...'
in which case the processed line is printed to standard output. This can be redirected to a file, as in the answer above.
To make changes to a given file a line at a time you only need this on the command line
perl -i -pe 's/a/b/' file.txt
If there is more work to do then it may well be better to put it in a script, of course. In this case the one-liner can be a command in the bash script as well, replacing all that code above (unless some bash-specific functionality is preferred for processing lines).
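For instance, if the substitution is all the script needs to do, the entire bash script collapses to a sketch like:
#!/bin/bash
perl -i -pe 's/a/b/' file.txt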

Print file out in bash

I'm looking at creating a bash script to print out a text file with some extra variables, see below
I want to have a text file with something like this in it
bot1
bot2
bot3
And then have a bash script print it out like so
--exclude-agent="bot1" --exclude-agent="bot2" --exclude-agent="bot3"
Is this possible? So that if I add another line to that first file it'll just print another --exclude-agent="whatever I put in the file"
At the moment I've got the below which is close, but not quite what I want
#!/bin/bash
while read line
do
echo "--exclude-agent="$line" \\"
done < bots.txt
Any help would be great!
It depends on what you meant by
not quite what I want
First of all, you have a problem with quoting. The correct way is
echo "--exclude-agent=$line \\"
If you want to print it on the same line try
echo -n "--exclude-agent=$line"
If you want to hold it in a variable try this code
params=''
while read -r line
do
params+="--exclude-agent=$line "
done < bots.txt
printf '%s\n' "$params"
Also, it seems like you're trying to store parameters in a variable, which is the worst idea ever.
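A sketch of the usual alternative, collecting the options in a bash array so each one stays a separate argument (somecommand is a placeholder for whatever consumes them):
params=()
while IFS= read -r line; do
params+=(--exclude-agent="$line")
done < bots.txt
somecommand "${params[@]}"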
echo -n "--exclude-agent="$line". -n does not print the last \n so that the all options come in the same line
To print the expected output, with the quotes around each name:
echo "--exclude-agent=\"$line\" "

How do I iterate over each line in a file with Bash?

Given a text file with multiple lines, I would like to iterate over each line in a Bash script. I had attempted to use cut, but cut does not accept \n (newline) as a delimiter.
This is an example of the file I am working with:
one
two
three
four
Does anyone know how I can loop through each line of this text file in Bash?
I ran into the same problem; this works for me:
cat file.cut | cut -d$'\n' -f1
Or:
cut -d$'\n' -f1 file.cut
Use cat for concatenating or displaying. No need for it here.
file="/path/to/file"
while read line; do
echo "${line}"
done < "${file}"
Simply use:
echo -n `cut ...`
This suppresses the \n at the end
cat FILE | while read line; do # 'line' is the variable name
echo "$line" # do something here
done
or (see comment):
while read line; do # 'line' is the variable name
echo "$line" # do something here
done < FILE
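One practical difference between these two forms, as a small added demonstration: the piped version runs the loop in a subshell, so variables modified inside it do not survive the loop.
count=0
cat FILE | while read line; do
count=$((count+1)) # increments a copy in the subshell
done
echo "$count" # still prints 0
while read line; do
count=$((count+1)) # same shell this time
done < FILE
echo "$count" # prints the actual line count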
So, some really good (possibly better) answers have been provided already. But looking at the phrasing of the original question, in wanting to use a Bash for loop, it amazed me that nobody mentioned a solution that changes the field separator IFS. It's a pure Bash solution, just like the accepted read line answer.
old_IFS=$IFS
IFS=$'\n'
for field in $(<filename)
do your_thing;
done
IFS=$old_IFS
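One caveat worth adding: even with IFS set to a newline, the unquoted $(<filename) still undergoes glob expansion, so a line consisting of * would be replaced by file names. A hedged sketch that guards against this with set -f:
old_IFS=$IFS
set -f # disable globbing so lines like "*" stay literal
IFS=$'\n'
for field in $(<filename)
do your_thing;
done
set +f
IFS=$old_IFS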
If you are sure that the output will always be newline-delimited, use head -n 1 in lieu of cut -f1 (note that you mentioned a for loop in a script and your question was ultimately not script-related).
Many of the other answers, including the accepted one, have multiple lines unnecessarily. There is no need to do this over multiple lines or to change the default delimiter on the system.
Also, the solution provided by Ivan with -d$'\n' did not work for me on either Mac OS X or CentOS 7. Since his answer is four years old, I assume something must have changed in the handling of the $ character for this situation.
While loop with input redirection and read command.
You should not be using cut to perform a sequential iteration of each line in a file as cut was not designed to do this.
Print selected parts of lines from each FILE to standard output.
— man cut
TL;DR
You should use a while loop with the read -r command, redirecting standard input to your file, inside a function scope where IFS is set to a newline, and use -E when using echo.
processFile() { # Function scope to prevent overwriting IFS globally
file="$1" # Any file that exists
local IFS=$'\n' # A real newline: preserves leading spaces and tabs
while read -r line; do # read exits with 1 when done; -r preserves \
echo -E "$line" # -E prints \ literally instead of interpreting escapes
done < "$file" # Input redirection lets us read the file from stdin
}
processFile /path/to/file
Iteration
In order to iterate over each line of a file, we can use a while loop. This will let us iterate as many times as we need to.
while <condition>; do
<body>
done
Getting our file ready to read
We can use the read command to store a single line from standard input in a variable. Before we can use that to read a line from our file, we need to redirect standard input to point to our file. We can do this with input redirection. According to the man pages for bash, the syntax for redirection is [fd]<file where fd defaults to standard input (a.k.a. file descriptor 0). For a compound command such as our while loop, the redirection goes after the closing done.
while <condition>; do
<body>
done < /path/to/file
(A redirection may precede a simple command, as in < /path/to/file read line, but placing one before a compound command like while is a syntax error in Bash.)
Reading the file and ending the loop
Now that our file can be read from standard input, we can use read. The syntax for read in our context is read [-r] var... where -r preserves the \ (backslash) character, instead of using it as an escape sequence character, and var is the name of the variable to store the input in. You can have multiple variables to store pieces of the input in, but we only need one to read an entire line. Along with this, to preserve any backslashes in any output from echo you will likely need to use the -E flag to disable the interpretation of backslash escapes. If you have any indentation (spaces or tabs) to keep, you will need to temporarily change the IFS (Internal Field Separator) variable to just a newline, $'\n'; normally it is set to space, tab, and newline.
main() {
local IFS=$'\n'
read -r line
echo -E "$line"
}
main
How do we use read to end our while loop?
There is really only one reliable way, that I know of, to determine when you've finished reading a file with read: check the exit value of read. If the exit value of read is 0 then we successfully read a line; if it is 1 or higher then we reached EOF (end of file). With that in mind, we can place the call to read in our while loop's condition section.
processFile() {
# Could be any file you want, hardcoded or dynamic
file="$1"
local IFS=$'\n'
while read -r line; do
# Process line here
echo -E "$line"
done < "$file"
}
processFile /path/to/file1
processFile /path/to/file2
A visual breakdown of the above code via Explain Shell.
If I am executing a command and want to cut the output but it has multiple lines, I found it helpful to do
echo $([command]) | cut [....]
This puts all the output of [command] on a single line, which can be easier to process.
My opinion is that "cut" uses '\n' as its default delimiter.
If you want to use cut, I have two ways:
cut -d^M -f1 file_cut
I type ^M by pressing Ctrl+V and then Enter. Another way is
cut -c 1- file_cut
Does that help?

Cat with new line

My input file's contents are:
welcome

welcome1
welcome2
My script is:
for groupline in `cat file`
do
echo $groupline;
done
I got the following output:
welcome
welcome1
welcome2
Why doesn't it print the empty line?
You need to set IFS to the newline character \n:
IFS=$"\n"
for groupline in $(cat file)
do
echo "$groupline";
done
Or add double quotes. See here for an explanation:
for groupline in "$(cat file)"
do
echo "$groupline";
done
Without meddling with IFS, the "proper" way is to use a while read loop:
while read -r line
do
echo "$line"
done <"file"
Because you're doing it all wrong. You want while not for, and you want read, not cat:
while read groupline
do
echo "$groupline"
done < file
The solution ghostdog74 provided is helpful, but it has a flaw: IFS cannot be set that way with double quotes (at least in Mac OS X), but it can with single quotes, like:
IFS=$'\n'
It's nice but not dash-compatible, maybe this is better:
IFS='
'
The blank line will be eaten in the following program:
IFS='
'
for line in $(cat file)
do
echo "$line"
done
But you cannot add double quotes around $(cat file); that treats the whole file as one single string:
for line in "$(cat file)"
If you want blank lines to be processed as well, use the following:
while read line
do
echo "$line"
done < file
Using IFS=$"\n" and var=$(cat text.txt) removes all the "n" characters from the output echo $var
