Append out from reading lines in a txt file - bash

I have a test.txt file with the following contents
100001
100003
100007
100008
100009
I am trying to loop through the text file and append .xml to each line.
Ex:
100001.xml
100003.xml
100007.xml
100008.xml
100009.xml
I have tried different variations of
while read p; do
echo "$p.zip"
done < test.txt
But it prints out oddly, like this:
.xml01
.xml03
.xml07
.xml08
.xml09

The odd output happens because test.txt has Windows-style CRLF line endings: the trailing carriage return moves the cursor back to the start of the line, so the appended .xml overprints the first characters. The fix is to append .xml at the end of each line while removing the CR, if present.
With sed and bash:
#!/bin/bash
sed -E $'s/\r?$/.xml/' test.txt
With awk:
awk -v suffix='.xml' '{sub(/\r?$/,suffix)}1' test.txt
Using it in a bash loop:
#!/bin/bash
while IFS='' read -r filename
do
printf '%q\n' "$filename"
done < <(
awk -v suffix='.xml' '{sub(/\r?$/,suffix)}1' test.txt
)
Or doing the whole thing in pure shell:
while IFS='' read -r filename
do
fullname="${filename%\r}.xml"
printf '%s\n' "$fullname"
done < test.txt
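If you want to confirm the carriage returns are really there before fixing them, one quick check (a minimal sketch; -A is the GNU cat spelling, BSD/macOS cat uses -e for the same effect):
cat -A test.txt
100001^M$
100003^M$
100007^M$
100008^M$
100009^M$
Each ^M is a carriage return and each $ marks the end of the line.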

Related

How to add lines at the beginning of a file, whether empty or not?

I want to add lines at the beginning of a file. It works with:
sed -i '1s/^/#INFO\tFORMAT\tunknown\n/' file
sed -i '1s/^/##phasing=none\n/' file
However it doesn't work when my file is empty. I found these commands:
echo > file && sed '1s/^/#INFO\tFORMAT\tunknown\n/' -i file
echo > file && sed '1s/^/##phasing=none\n/' -i file
but the second command erases what the first one inserted, because echo > file truncates the file first (and it does the same when the file isn't empty).
I would like to know how to add lines at the beginning of the file whether it is empty or not.
I tried a loop with if [ -s file ] but without success.
Thanks!
You can use the insert command (i).
if [ -s file ]; then
sed -i '1i\
#INFO\tFORMAT\tunknown\
##phasing=none' file
else
printf '#INFO\tFORMAT\tunknown\n##phasing=none' > file
fi
Note that \t for tab is not POSIX and does not work on all sed implementations (e.g. BSD/Apple sed, where -i works differently too). You can use a raw tab instead, or a variable: tab=$(printf '\t').
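A minimal sketch of that workaround, assuming GNU sed for -i without a suffix: store a literal tab in a shell variable and let it expand inside a double-quoted sed script, so no \t support is needed from sed itself.
# literal tab, portable across shells
tab=$(printf '\t')
if [ -s file ]; then
sed -i "1i\\
#INFO${tab}FORMAT${tab}unknown\\
##phasing=none" file
else
printf '#INFO%sFORMAT%sunknown\n##phasing=none\n' "$tab" "$tab" > file
fi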
You should use the i command in sed:
file='inputFile'
# insert a line break if file is empty
[[ ! -s $file ]] && echo > "$file"
sed -i.bak $'1i\
#INFO\tFORMAT\tunknown
' "$file"
Or you can ditch sed and do it in the shell using printf:
{ printf '#INFO\tFORMAT\tunknown\n'; cat file; } > file.new &&
mv file.new file
With plain bash and shell utilities:
#!/bin/bash
header=(
$'#INFO\tFORMAT\tunknown'
$'##phasing=none'
)
mv file file.bak &&
{ printf '%s\n' "${header[@]}"; cat file.bak; } > file &&
rm file.bak
Explicitly creating a new file, then moving it:
#!/bin/bash
echo -e '#INFO\tFORMAT\tunknown' | cat - file > file.new
mv file.new file
or slurping the whole content of the file into memory:
#!/bin/bash
printf '#INFO\tFORMAT\tunknown\n%s' "$(<file)" > file
It is trivial with ed if available/acceptable.
printf '%s\n' '0a' $'#INFO\tFORMAT\tunknown' $'##phasing=none' . ,p w | ed -s file
It even creates the file if it does not exist.

Bash to read lines from file and assign to variable with delimiter

In a bash script, how can I read a file line by line and assign the lines to a variable, joined with a delimiter?
example.txt file contents:
string1
string2
string3
string4
Expected output:
string1,string2,string3,string4
Thanks in advance
Apparently my answer below (using tr) leaves a comma at the end of the line. A quick workaround is to use the standard paste utility:
paste -sd, example.txt
where paste concatenates all the lines into one, using ',' as the delimiter.
Using standard Unix commands:
tr '\n' ',' < example.txt
This replaces every newline character with a comma; the final newline is replaced too, which is why it leaves a trailing comma.
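If you prefer to stay with tr, one easy way to drop that trailing comma afterwards (a small sketch; the sed part works with any sed):
tr '\n' ',' < example.txt | sed 's/,$//'
string1,string2,string3,string4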
Other possible ways, just for fun:
mapfile -t a < example.txt
(IFS=,; echo "${a[*]}")
mapfile -t a < example.txt
foo=$(printf '%s' "${a[@]/%/,}")
echo "${foo%,}"
foo=$(<example.txt)
echo "${foo//$'\n'/,}"
{
IFS= read -r foo
while IFS= read -r line; do
foo+=,$line
done
} < example.txt
echo "$foo"
sed ':a;N;$!ba;s/\n/,/g' example.txt
This should work:
#!/bin/bash
output=''
while IFS='' read -r line || [[ -n "$line" ]]; do
output="${output:+$output,}$line"
done < "$1"
echo "$output"
Give the file name as an argument.
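For example, if the script above is saved as join.sh (a name made up here) and made executable, running it against example.txt gives:
./join.sh example.txt
string1,string2,string3,string4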

How to write a command line script that will loop through every line in a text file and append a string at the end of each? [duplicate]

How do I add a string after each line in a file using bash? Can it be done using the sed command, and if so, how?
If your sed allows in place editing via the -i parameter:
sed -e 's/$/string after each line/' -i filename
If not, you have to make a temporary file:
typeset TMP_FILE=$( mktemp )
touch "${TMP_FILE}"
cp -p filename "${TMP_FILE}"
sed -e 's/$/string after each line/' "${TMP_FILE}" > filename
I prefer echo, using pure bash:
while read -r line; do echo "${line}${string}"; done < file
I prefer using awk.
If there is only one column, use $0; otherwise use the last column ($NF) instead, as in the sketch after the one-liners below.
One way,
awk '{print $0, "string to append after each line"}' file > new_file
or this,
awk '$0=$0"string to append after each line"' file > new_file
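For instance, to attach the string to the last field rather than to the whole line (a small sketch; note that assigning to a field makes awk rebuild the record with single spaces between fields):
awk '{ $NF = $NF "string to append" } 1' file > new_file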
If you have it, the lam (laminate) utility can do it, for example:
$ lam filename -s "string after each line"
Pure POSIX shell and sponge:
suffix=foobar
while read -r l ; do printf '%s%s\n' "$l" "${suffix}" ; done < file |
sponge file
xargs and printf:
suffix=foobar
xargs -L 1 printf "%s${suffix}\n" < file | sponge file
Using join:
suffix=foobar
join file file -e "${suffix}" -o 1.1,2.99999 | sponge file
Shell tools using paste, yes, head & wc:
suffix=foobar
paste file <(yes "${suffix}" | head -$(wc -l < file) ) | sponge file
Note that paste inserts a Tab char before $suffix.
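If the Tab is unwanted, the portable way to ask paste for an empty delimiter is the '\0' escape (a sketch of the same pipeline without the Tab):
suffix=foobar
paste -d '\0' file <(yes "${suffix}" | head -$(wc -l < file) ) | sponge file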
Of course sponge can be replaced with a temp file, afterwards mv'd over the original filename, as with some other answers...
This just adds on to the echo approach for putting a string at the end of each line in a file:
cat input-file | while read -r line; do echo "${line}string to add" >> output-file; done
Adding >> directs the changes we've made to the output file.
Sed is a little ugly; you could do it elegantly like so:
hendry@i7 tmp$ cat foo
bar
candy
car
hendry@i7 tmp$ for i in `cat foo`; do echo ${i}bar; done
barbar
candybar
carbar

Unable to output values in the required format using shell script

I need to output an array to a file in the following format.
File: a.txt
      b.txt
I tried doing the following:
declare -a files=("a.txt" "b.txt")
empty=""
printf "File:" >> files.txt
for i in "${files[#]}"
do
printf "%-7s %-30s \n" "$empty" "$i" >> files.txt
done
But, I get the output as
File:        a.txt
        b.txt
Can anyone please help me to get the output in the required format.
#!/bin/bash
files=( 'a.txt' 'b.txt' 'c.txt' 'd.txt' )
set -- "${files[#]}"
printf 'File: %s\n' "$1"
shift
printf '      %s\n' "$@"
Output:
File: a.txt
      b.txt
      c.txt
      d.txt
This uses the fact that printf will reuse its formatting string for all its other command line parameters.
We set the positional parameters to the list and then output the first element with the File: string prepended. We then shift $1 off the list of positional parameters and print the rest with a spacer string inserted.
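As a quick illustration of that reuse, printf cycles its format string once per remaining argument:
printf '[%s]\n' one two three
[one]
[two]
[three]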
Using sed
#!/bin/bash
declare -a files=("a.txt" "b.txt")
for i in "${files[#]}"
do
echo "$i" >> files.txt
done
sed -i '1 s/^/File: /' files.txt
sed -i '1 ! s/^/      /' files.txt
If you are using macOS, you have to modify the sed commands in this way:
sed -i '' '1 s/^/File: /' files.txt
sed -i '' '1 ! s/^/      /' files.txt
The output will be:
File: a.txt
      b.txt
First, the for loop writes all the file names into the txt file. Then the first sed command adds File: to the first line, and the second sed command prepends six spaces (the width of the File: prefix) to every line except the first.
You could always start with a variable containing File: for the first iteration and overwrite it with an empty string afterwards, letting the %-7s field width supply the spaces. The repeated assignment won't add much overhead.
prefix="File:"
for i in "${files[#]}"
do
printf "%-7s %-30s \n" "$prefix" "$i"
prefix=
done > Files.txt
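With the two files from the question this produces the following in Files.txt, since %-7s pads both File: and the empty prefix to the same width (trailing padding from %-30s not shown):
File:   a.txt
        b.txt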

How to remove a filename from the list of path in Shell

I would like to remove only the file name from each line of the following configuration file.
Configuration File -- test.conf
knowledgebase/arun/test.rf
knowledgebase/arunraj/tester/test.drl
knowledgebase/arunraj2/arun/test/tester.drl
The above file should be read, and the lines with the file name removed should go to another file called output.txt.
Following is my attempt. It is not working for me at all; I am getting only empty files.
#!/bin/bash
file=test.conf
while IFS= read -r line
do
# grep --exclude=*.drl line
# awk 'BEGIN {getline line ; gsub("*.drl","", line) ; print line}'
# awk '{ gsub("/",".drl",$NF); print line }' arun.conf
# awk 'NF{NF--};1' line arun.conf
echo $line | rev | cut -d'/' -f 1 | rev >> output.txt
done < "$file"
Expected output:
knowledgebase/arun
knowledgebase/arunraj/tester
knowledgebase/arunraj2/arun/test
There's the dirname command to make it easy and reliable:
#!/bin/bash
file=test.conf
while IFS= read -r line
do
dirname "$line"
done < "$file" > output.txt
There are Bash shell parameter expansions that will work OK with the list of names given but won't work reliably for some names:
file=test.conf
while IFS= read -r line
do
echo "${line%/*}"
done < "$file" > output.txt
There's sed to do the job — easily with the given set of names:
sed 's%/[^/]*$%%' test.conf > output.txt
It's harder if you have to deal with names like /plain.file (or plain.file — the same sorts of edge cases that trip up the shell expansion).
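For those edge cases, a quick comparison of dirname against the parameter expansion:
dirname /plain.file     # prints /
dirname plain.file      # prints .
line=/plain.file; echo "${line%/*}"    # prints an empty line
line=plain.file;  echo "${line%/*}"    # prints plain.file unchanged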
You could add Perl, Python, Awk variants to the list of ways of doing the job.
You can get the path like this:
path=${fullpath%/*}
It cuts away the last / and everything after it.
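For example, with one of the lines from test.conf:
fullpath=knowledgebase/arun/test.rf
echo "${fullpath%/*}"
knowledgebase/arun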
Using awk one liner you can do this:
awk 'BEGIN{FS=OFS="/"} {NF--} 1' test.conf
Output:
knowledgebase/arun
knowledgebase/arunraj/tester
knowledgebase/arunraj2/arun/test
