Read from a file and stdin in Bash

I would like to know if I can write a shell script that accepts two inputs simultaneously, one from a file and the other one from stdin. Could you give some example, please?
I tried
while read line
do
echo "$line"
done < "${1}" < "{/dev/stdin}"
But this does not work.

You can use cat - or cat /dev/stdin:
while read line; do
# your code
done < <(cat "$1" -)
or
while read line; do
# your code
done < <(cat "$1" /dev/stdin)
or, if you want to read from all files passed on the command line as well as stdin, you could do this:
while read line; do
# your code
done < <(cat "$#" /dev/stdin)
See also:
How to read from a file or stdin in Bash?
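Put together, a minimal sketch of a complete script (the name combine.sh and the IFS= read -r form are my additions, not part of the original answer):
#!/bin/bash
# Print the contents of the file given as $1, then whatever arrives on stdin.
while IFS= read -r line; do
    echo "$line"
done < <(cat "$1" -)
which could be invoked as, for example, echo "extra line" | ./combine.sh file.txt.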

This topic seems to be helpful here:
{ cat "$1"; cat; } | while read line
do
echo "$line"
done
Or just
cat "$1"
cat
if all you're doing is printing the content
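One caveat worth noting with the pipe form: the while loop runs in a subshell, so variables you set inside it are lost afterwards. If that matters, a sketch using the process substitution from the answer above instead:
count=0
while IFS= read -r line; do
    echo "$line"
    count=$((count + 1))
done < <(cat "$1" -)
echo "read $count lines"   # count survives because no pipe was used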

Related

Why does the outer while loop in Bash not finish?

I don't understand why the outer loop exits as soon as the inner loop finishes.
$1 refers to a file with a lot of pattern/replacement lines, and $2 is a list of words. The problem is that the outer loop already exits after the first pattern/replacement line. I want it to exit only after all the lines in $1 have been read.
#!/bin/bash
#Receive SED SCRIPT WORDLIST
if [ -f temp.txt ];
then
> temp.txt
else
touch temp.txt
fi
while IFS='' read -r line || [[ -n "$line" ]];
do
echo -e "s/$line/p" >> temp.txt
while IFS='' read -r line || [[ -n "$line" ]];
do
sed -nf temp.txt $2
done
> temp.txt
done < $1
I understand that you want to build the sed expressions, write them to a file, and then apply those expressions to another file.
This is much easier than the way you are doing it.
First of all, you don't need to check whether temp.txt already exists. When you redirect the output of a command to a file, the file is created if it does not exist. But if you want to reset the file, I recommend using the truncate command.
In the body of the script, I don't understand why you put a second while loop to read from a file but don't give it a file to read from.
I think what you need is something like this:
truncate -s 0 sed_expressions.txt
while IFS='' read -r line || [[ -n "$line" ]]; do
echo -e "s/$line/p" >> sed_expressions.txt
done < "$1"
sed -nf sed_expressions.txt "$2" > out_file.txt
Try it and tell me if this is what you need.
Bye!
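A hypothetical run, assuming the snippet above is saved as apply_patterns.sh, taking the pattern file as $1 and the word list as $2 (all file names here are just examples):
$ cat patterns.txt
cat/dog
$ cat wordlist.txt
my cat sleeps
the dog barks
$ bash apply_patterns.sh patterns.txt wordlist.txt
$ cat out_file.txt
my dog sleeps
The generated sed expression is s/cat/dog/p, and with sed -n only the lines where a substitution actually happened are printed.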

Bash script to remove lines containing any of a list of words

I have a large config file that I use to define variables for a script to pull from it, each defined on a single line. It looks something like this:
var val
foo bar
foo1 bar1
foo2 bar2
I have gathered a list of out-of-date variables that I want to remove from the list. I could go through it manually, but I would like to do it with a script, which would be at least more stimulating. The file that contains the values may contain multiple instances. The idea is to find the value and, if it's found, remove the entire line.
Does anyone know if this is possible? I know sed does this but I do not know how to make it use a file input.
#!/bin/bash
shopt -s extglob
REMOVE=(foo1 foo2)
IFS='|' eval 'PATTERN="@(${REMOVE[*]})"'
while read -r LINE; do
read A B <<< "$LINE"
[[ $A != $PATTERN ]] && echo "$LINE"
done < input_file.txt > output_file.txt
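To see what the eval builds: with REMOVE=(foo1 foo2), PATTERN becomes the extglob @(foo1|foo2), so the test [[ $A != $PATTERN ]] keeps only lines whose first field matches none of the listed words. A quick sketch to check this (the echo is only for illustration):
shopt -s extglob
REMOVE=(foo1 foo2)
IFS='|' eval 'PATTERN="@(${REMOVE[*]})"'
echo "$PATTERN"   # prints: @(foo1|foo2)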
Or (Use with a copy first)
#!/bin/bash
shopt -s extglob
FILE=$1 REMOVE=("${@:2}")
IFS='|' eval 'PATTERN="@(${REMOVE[*]})"'
SAVE=()
while read -r LINE; do
read A B <<< "$LINE"
[[ $A != $PATTERN ]] && SAVE+=("$LINE")
done < "$FILE"
printf '%s\n' "${SAVE[@]}" > "$FILE"
Running with
bash script.sh your_config_file pattern1 pattern2 ...
Or
#!/bin/bash
shopt -s extglob
FILE=$1 PATTERNS_FILE=$2
readarray -t REMOVE < "$PATTERNS_FILE"
IFS='|' eval 'PATTERN="@(${REMOVE[*]})"'
SAVE=()
while read -r LINE; do
read A B <<< "$LINE"
[[ $A != $PATTERN ]] && SAVE+=("$LINE")
done < "$FILE"
printf '%s\n' "${SAVE[@]}" > "$FILE"
Running with
bash script.sh your_config_file patterns_file
Here's one with sed. Add words to the array. Then use
./script target_filename
(assuming you put the following in a file called script). It's not very efficient; I think it might be faster to concatenate the words into a single regex, as bbonev did.
#!/bin/bash
declare -a array=("foo1" "foo2")
for i in "${array[@]}";
do
sed -i "/^${i}\s.*/d" "$1"
done
It's actually even simpler using file input
If you have a word file
word1
word2
word3
.....
then the following will do the job
#!/bin/bash
while read i;
do
sed -i "/^${i}\s.*/d" "$2"
done <$1
usage:
./script wordlist target_file
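For example, with the config file from the question saved as config.txt and a wordlist of the out-of-date names (hypothetical file names; note that \s in the sed expression is a GNU extension):
$ cat wordlist
foo1
foo2
$ ./script wordlist config.txt
$ cat config.txt
var val
foo bar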

Skip line in text file which starts with '#' via KornShell (ksh)

I am trying to write a script which reads a text file and saves each line to a string. I would also like the script to skip any lines which start with a hash symbol. Any suggestions?
You should not leave skipping lines to ksh. E.g. do this:
grep -v '^#' INPUTFILE | while IFS="" read line ; do echo $line ; done
And instead of the echo part do whatever you want.
Or if ksh does not support this syntax:
grep -v '^#' INPUTFILE > tmpfile
while IFS="" read line ; do echo $line ; done < tmpfile
rm tmpfile
while read -r line; do
[[ "$line" = *( )#* ]] && continue
# do something with "$line"
done < filename
Look for "File Name Patterns" or "File Name Generation" in the ksh man page.
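Since the question also mentions saving each line to a string, here is a minimal ksh sketch along those lines (the file name input.txt and the variable names are just examples):
#!/bin/ksh
while IFS= read -r line; do
    [[ "$line" = *( )#* ]] && continue   # skip lines starting with optional blanks and '#'
    current="$line"                      # save the line to a string
    print -r -- "$current"
done < input.txt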

bash/zsh input process substitution gives syntax error in conjunction with while

These work fine and do what they should (print the contents of the file foo):
cat <foo
while read line; do echo $line; done <foo
cat <(cat foo)
However this gives me a syntax error in zsh:
zsh$ while read line; do echo $line; done <(cat foo)
zsh: parse error near `<(cat foo)'
and bash:
bash$ while read line; do echo $line; done <(cat foo)
bash: syntax error near unexpected token `<(cat foo)'
Does anybody know the reason and maybe a workaround?
Note: This is obviously a toy example. In the real code I need the body of the while loop to be executed in the main shell process, so I can't just use
cat foo | while read line; do echo $line; done
You need to redirect the process substitution into the while loop:
You wrote
while read line; do echo $line; done <(cat foo)
You need
while read line; do echo $line; done < <(cat foo)
# ...................................^
Treat a process substitution like a filename.
bash/zsh replaces <(cat foo) with a named pipe (a kind of file) whose name looks like /dev/fd/n, where n is a file descriptor number.
You can check the pipe name using the command echo <(cat foo).
As you may know, bash/zsh also runs the command cat foo in another process. The output of this second process is written to that named pipe.
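For example (the exact path and descriptor number vary by shell and system):
$ echo <(cat foo)
/dev/fd/63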
Without process substitution:
while ... do ... done inputfile     # error
while ... do ... done < inputfile   # correct
The same rules apply with process substitution:
while ... do ... done <(cat foo)    # error
while ... do ... done < <(cat foo)  # correct
Alternative:
cat foo >3 & while read line; do echo $line; done <3;
I can suggest only workaround like this:
theproc() { for ((i=0; i<5; ++i)); do echo $i; done; }
while read line ; do echo $line ; done <<<"$(theproc)"
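For what it's worth, the same function also works with the redirected process substitution from the answer above, which avoids the command substitution entirely (a sketch):
theproc() { for ((i=0; i<5; ++i)); do echo "$i"; done; }
while read -r line; do echo "$line"; done < <(theproc)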

How to concatenate all lines from a file in Bash? [duplicate]

This question already has answers here:
How to concatenate multiple lines of output to one line?
(12 answers)
Closed 4 years ago.
I have a CSV file:
data1,data2,data2
data3,data4,data5
data6,data7,data8
I want to convert it to the following (contained in a variable):
variable=data1,data2,data2%0D%0Adata3,data4,data5%0D%0Adata6,data7,data8
My attempt :
data=''
cat csv | while read line
do
data="${data}%0D%0A${line}"
done
echo $data # Fails, since data remains empty (the pipe runs the loop in a subshell, so changes to data are lost)
Please help.
Simpler to just strip newlines from the file:
tr -d '\n' < yourfile.txt > concatfile.txt
In bash,
data=$(
while read line
do
echo -n "%0D%0A${line}"
done < csv)
In non-bash shells, you can use `...` instead of $(...). Also, echo -n, which suppresses the newline, is unfortunately not completely portable, but again this will work in bash.
Some of these answers are incredibly complicated. How about this.
data="$(xargs printf ',%s' < csv | cut -b 2-)"
or
data="$(tr '\n' ',' < csv | cut -b 2-)"
Too "external utility" for you?
IFS=$'\n', read -d'\0' -a data < csv
Now you have an array! Output it however you like, perhaps with
data="$(tr ' ' , <<<"${data[#]}")"
Still too "external utility?" Well fine,
data="$(printf "${data[0]}" ; printf ',%s' "${data[#]:1:${#data}}")"
Yes, printf can be a builtin. If it isn't but your echo is and it supports -n, use echo -n instead:
data="$(echo -n "${data[0]}" ; for d in "${data[#]:1:${#data[#]}}" ; do echo -n ,"$d" ; done)"
Okay, now I admit that I am getting a bit silly. Andrew's answer is perfectly correct.
I would much prefer a loop:
for line in $(cat file.txt); do echo -n $line; done
Note: this solution requires the input file to have a newline at the end of the file, or it will drop the last line.
Another short bash solution
variable=$(
RS=""
while read line; do
printf "%s%s" "$RS" "$line"
RS='%0D%0A'
done < filename
)
awk 'END { print r }
{ r = r ? r OFS $0 : $0 }
' OFS='%0D%0A' infile
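Assuming the three-line CSV from the question is saved as infile, capturing the awk output gives exactly the requested value:
$ variable=$(awk 'END { print r } { r = r ? r OFS $0 : $0 }' OFS='%0D%0A' infile)
$ echo "$variable"
data1,data2,data2%0D%0Adata3,data4,data5%0D%0Adata6,data7,data8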
With shell:
data=
while IFS= read -r; do
[ -n "$data" ] &&
data=$data%0D%0A$REPLY ||
data=$REPLY
done < infile
printf '%s\n' "$data"
Recent bash versions:
data=
while IFS= read -r; do
[[ -n $data ]] &&
data+=%0D%0A$REPLY ||
data=$REPLY
done < infile
printf '%s\n' "$data"
A very simple one-line solution which requires no extra files and is quite easy to understand (I think): just cat the file together and perform a sed replace:
output=$(echo $(cat ./myFile.txt) | sed 's/ /%0D%0A/g')
Useless use of cat, punished! You want to feed the CSV into the loop
while read line; do
# ...
done < csv
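Putting that together, a minimal sketch that keeps data in the current shell and only inserts the separator between lines (the ${data:+...} expansion adds %0D%0A only once data is non-empty):
data=''
while IFS= read -r line; do
    data="${data}${data:+%0D%0A}${line}"
done < csv
echo "$data"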
