This question already has an answer here:
POSIX shell equivalent to <()
(1 answer)
Closed 3 years ago.
I want to create a script to read a .txt file. This is my code:
while IFS= read -r lines
do
echo "$lines"
done < <(tail -n +2 filename.txt)
I tried a lot of things like:
<<(tail -n +2 in.txt)
< < (tail -n +2 in.txt)
< (tail -n +2 in.txt)
<(tail -n +2 in.txt)
(tail -n +2 in.txt)
I expected it to print the file from the second line onward, but instead I get an error:
Syntax error: redirection unexpected
If you just want to ignore the first line, there's no good reason to use tail at all!
{
read -r first_line
while IFS= read -r line; do
printf '%s\n' "$line"
done
} <filename.txt
Using read to consume the first line leaves the original file pointer intact, so the following code can read directly from the file instead of from a FIFO attached to tail's output; it's thus much lower-overhead.
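As a quick self-contained check of that technique (the sample path /tmp/demo_skip.txt is made up for the demo):

```shell
# Build a throwaway file whose first line is a header we want to skip.
printf '%s\n' 'header' 'line 2' 'line 3' > /tmp/demo_skip.txt

{
    read -r first_line            # consume and discard the header
    while IFS= read -r line; do
        printf '%s\n' "$line"     # emit every remaining line
    done
} < /tmp/demo_skip.txt
# prints:
# line 2
# line 3

rm -f /tmp/demo_skip.txt
```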
If you did want to use tail, for the specific case raised, you don't need to use a process substitution (<(...)), but can simply pipe into your while loop. Note that this has a serious side effect, insofar as any variables you set in the loop will no longer be available after it exits; this is documented (in a cross-shell manner) in BashFAQ #24.
tail -n +2 filename.txt | while IFS= read -r line
do
printf '%s\n' "$line"
done
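A minimal sketch of that side effect (the variable name is illustrative; the behavior shown is bash's):

```shell
#!/usr/bin/env bash
# The while loop is the last element of a pipeline, so bash runs it in a
# subshell; the increments to 'count' vanish when that subshell exits.
count=0
printf '%s\n' a b c | while IFS= read -r line; do
    count=$((count + 1))
done
echo "$count"   # prints 0, not 3
```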
As it says in this answer
POSIX shell equivalent to <()
you could use named pipes to simulate process substitution in POSIX. Your script would look like this:
#!/usr/bin/env sh
mkfifo foo.fifo
tail -n +2 filename.txt >foo.fifo &
while IFS= read -r lines
do
echo "$lines"
done < foo.fifo
rm foo.fifo
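As a defensive variation (the trap is my addition, not part of the linked answer), the FIFO can be removed even if the script is interrupted partway through:

```shell
#!/usr/bin/env sh
fifo=foo.fifo
mkfifo "$fifo" || exit 1
trap 'rm -f "$fifo"' EXIT INT TERM   # clean up on normal exit or interruption

tail -n +2 filename.txt >"$fifo" &
while IFS= read -r line; do
    echo "$line"
done < "$fifo"
```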
I have the following two scripts:
#script1.sh:
#!/bin/bash
this_chunk=(1 2 3 4)
printf "%s\n" "${this_chunk[@]}" | ./script2.sh
#script2.sh:
#!/bin/bash
while read -r arr
do
echo "--$arr"
done
When I execute script1.sh, the output is as expected:
--1
--2
--3
--4
which shows that I was able to pipe the elements of the array this_chunk as arguments to script2.sh. However, if I change the line calling script2.sh to
printf "%s\n" "${this_chunk[@]}" | xargs ./script2.sh
there is no output. My question is, how to pass the array this_chunk using xargs, rather than simple piping? The reason is that I will have to deal with large arrays and thus long argument lists which will be a problem with piping.
Edit:
Based on the answers and comments, this is the correct way to do it:
#script1.sh
#!/bin/bash
this_chunk=(1 2 3 4)
printf "%s\0" "${this_chunk[@]}" | xargs -0 ./script2.sh
#script2.sh
#!/bin/bash
for i in "$@"; do
echo "$i"
done
how to pass the array this_chunk using xargs
Note that xargs by default interprets ', ", and \ sequences. To disable this interpretation, either preprocess the data or, better, use GNU xargs with the -d '\n' option. The -d option is not part of POSIX xargs.
printf "%s\n" "${this_chunk[@]}" | xargs -d '\n' ./script2.sh
That said, with GNU xargs, prefer NUL-terminated streams to preserve embedded newlines:
printf "%s\0" "${this_chunk[@]}" | xargs -0 ./script2.sh
Your script ./script2.sh ignores its command line arguments, and xargs spawns the process with its standard input not connected to your data (GNU xargs redirects it to /dev/null). Because read -r arr therefore fails, your script does not print anything, which is expected. (Note that in POSIX xargs, when the spawned process tries to read from stdin, the result is unspecified.)
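To see the difference, here is a sketch where the spawned command consumes its arguments instead of stdin (the inline sh -c script is purely illustrative):

```shell
# With xargs, each NUL-terminated item arrives as a positional parameter
# of the spawned command, not on its standard input.
printf '%s\0' 'one' 'two words' 'three' |
    xargs -0 sh -c 'for i in "$@"; do printf "[%s]\n" "$i"; done' _
# prints:
# [one]
# [two words]
# [three]
```

The trailing _ fills $0 of the inline script so that the real items land in "$@".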
I have a log file with a lot of lines with the following format:
IP - - [Timestamp Zone] 'Command Weblink Format' - size
I want to write a script.sh that gives me the number of times each website has been clicked.
The command:
awk '{print $7}' server.log | sort -u
should give me a list which puts each unique weblink in a separate line. The command
grep 'Weblink1' server.log | wc -l
should give me the number of times Weblink1 has been clicked. I want a command that converts each line created by the awk command above into a variable, and then a loop that runs the grep command on each extracted weblink. I could use
while IFS='' read -r line || [[ -n "$line" ]]; do
echo "Text read from file: $line"
done
(source: Read a file line by line assigning the value to a variable) but I don't want to save the output of the Awk script in a .txt file.
My guess would be:
while IFS='' read -r line || [[ -n "$line" ]]; do
grep '$line' server.log | wc -l | ='$variabel' |
echo " $line was clicked $variable times "
done
But I'm not really familiar with connecting commands in a loop, as this is my first time. Would this loop work and how do I connect my loop and the Awk script?
Shell commands in a loop connect the same way they do without a loop, and you aren't very close. But yes, this can be done in a loop if you want the horribly inefficient way for some reason such as a learning experience:
awk '{print $7}' server.log |
sort -u |
while IFS= read -r line; do
n=$(grep -c "$line" server.log)
echo "$line" clicked $n times
done
# you only need the read || [ -n ] idiom if the input can end with an
# unterminated partial line (i.e. is ill-formed); awk print output can't.
# you don't really need the IFS= and -r because the data here is URLs
# which cannot contain whitespace and shouldn't contain backslash,
# but I left them in as good-habit-forming.
# in general variable expansions should be doublequoted
# to prevent wordsplitting and/or globbing, although in this case
# $line is a URL which cannot contain whitespace and practically
# cannot be a glob. $n is a number and definitely safe.
# grep -c does the count so you don't need wc -l
or more simply
awk '{print $7}' server.log |
sort -u |
while IFS= read -r line; do
echo "$line" clicked $(grep -c "$line" server.log) times
done
However if you just want the correct results, it is much more efficient and somewhat simpler to do it in one pass in awk:
awk '{n[$7]++}
END{for(i in n){
print i,"clicked",n[i],"times"}}' server.log |
sort
# or GNU awk 4+ can do the sort itself, see the doc:
awk '{n[$7]++}
END{PROCINFO["sorted_in"]="#ind_str_asc";
for(i in n){
print i,"clicked",n[i],"times"}}' server.log
The associative array n collects the values from the seventh field as keys, and on each line, the value for the extracted key is incremented. Thus, at the end, the keys in n are all the URLs in the file, and the value for each is the number of times it occurred.
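As a quick check with fabricated log lines (field 7 is the weblink, matching the question's format; the URLs and sizes are made up):

```shell
# Two clicks on /a and one on /b; $7 is the weblink field.
printf '%s\n' \
    '1.2.3.4 - - [ts z] "GET /a fmt" - 100' \
    '1.2.3.4 - - [ts z] "GET /b fmt" - 100' \
    '5.6.7.8 - - [ts z] "GET /a fmt" - 100' |
awk '{n[$7]++} END{for(i in n) print i,"clicked",n[i],"times"}' |
sort
# prints:
# /a clicked 2 times
# /b clicked 1 times
```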
The bash script below seems to run, but no file names are displayed on the terminal; it just stalls. I cannot figure out why it is not working as it used to. Thank you :).
while read line; do
sed -i -e 's|https://www\.example\.com/xx/x/xxx/||' /home/file
echo $line
done
/home/file contains:
Auto_user_xxx-39-160506_file_name_x-x_custom_xx_91-1.pdf
Auto_user_xxx-48-160601_V4-2_file_name_x-x_custom_xx_101.pdf
coverageAnalysisReport(10).zip
The read command is waiting for input; since nothing is specified, it reads from stdin. If you type a few lines and press Enter, you will see that's the input for the loop.
But you most likely want to redirect a file to the loop:
while IFS= read -r line; do
printf "%s\n" "$line"
done < /home/file
But as far as I can understand, you have a file containing other file names on which you would like to run the substitution; in that case you should use xargs:
xargs -n 1 -I {} sed -i -e 's|https://www\.example\.com/xx/x/xxx/||' {} < /home/file
Why isn't this bash array populating? I believe I've done them like this in the past. Echoing ${#XECOMMAND[@]} shows no data.
DIR=$1
TEMPFILE=/tmp/dir.tmp
ls -l $DIR | tail -n +2 | sed 's/\s\+/ /g' | cut -d" " -f5,9 > $TEMPFILE
i=0
cat $TEMPFILE | while read line ;do
if [[ $(echo $line | cut -d" " -f1) == 0 ]]; then
XECOMMAND[$i]="$(echo "$line" | cut -d" " -f2)"
(( i++ ))
fi
done
When you run the while loop like
somecommand | while read ...
then the while loop is executed in sub-shell, i.e. a different process than the main script. Thus, all variable assignments that happen in the loop, will not be reflected in the main process. The workaround is to use input redirection and/or command substitution, so that the loop executes in the current process. For example if you want to read from a file you do
while read ....
do
# do stuff
done < "$filename"
or if you want the output of a process, you can do
while read ....
do
# do stuff
done < <(some command)
Finally, in bash 4.2 and above, you can set shopt -s lastpipe, which causes the last command in the pipeline to be executed in the current process.
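A sketch of the lastpipe variant (requires bash 4.2+, and it only takes effect when job control is off, i.e. in a non-interactive script):

```shell
#!/usr/bin/env bash
shopt -s lastpipe               # run the last pipeline element in this shell
count=0
printf '%s\n' a b c | while IFS= read -r line; do
    count=$((count + 1))
done
echo "$count"   # prints 3; without lastpipe it would print 0
```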
I think you're trying to construct an array consisting of the names of all zero-length files and directories in $DIR. If so, you can do it like this:
mapfile -t ZERO_LENGTH < <(find "$DIR" -maxdepth 1 -size 0)
(Add -type f to the find command if you're only interested in regular files.)
This sort of solution is almost always better than trying to parse ls output.
The use of process substitution (< <(...)) rather than piping (... |) is important, because it means that the shell variable will be set in the current shell, not in an ephemeral subshell.
How to retrieve a single line from the file?
file.txt
"aaaaaaa"
"bbbbbbb"
"ccccccc"
"ddddddd"
I need to retrieve line 3 ("ccccccc").
Thank you.
sed is your friend. sed -n 3p prints the third line (-n: no automatic print, 3p: print when line number is 3). You can also have much more complex patterns, for example sed -n 3,10p to print lines 3 to 10.
If the file is very big, you may consider not cycling through the whole file, but quitting right after the print: sed -n '3{p;q}'
If you know you need line 3, one approach is to use head to get the first three lines, and tail to get only the last of these:
varname="$(head -n 3 file.txt | tail -n 1)"
Another approach, using only Bash builtins, is to call read three times:
{ read; read; IFS= read -r varname; } < file.txt
Here's a way to do it with awk:
awk 'FNR==3 {print; exit}' file.txt
Explanation:
awk '...' : Invoke awk, a tool for manipulating files line-by-line. Instructions enclosed by single quotes are executed by awk.
FNR==3 {print; exit}: FNR is the record (line) number within the current file; think of it as "the number of lines read so far from this file". Here we are saying: if we are on the 3rd line of the file, print the entire line and then exit awk immediately, so we don't waste time reading the rest of a large file.
file.txt: specify the input file as an argument to awk to save a cat.
There are many possibilities. Try this:
sed '3!d' file.txt
Here is a very fast version:
sed "1d; 2d; 3q"
Are other tools than bash allowed? On systems that include bash, you'll usually find sed and awk or other basic tools:
$ line="$(sed -ne 3p input.txt)"
$ echo "$line"
or
$ read line < <(awk 'NR==3' input.txt)
$ echo "$line"
or if you want to optimize this by quitting after the 3rd line is read:
$ read line < <(awk 'NR==3{print;nextfile}' input.txt)
$ echo "$line"
or how about even simpler tools (though less optimized):
$ line="`head -n 3 input.txt | tail -n 1`"
$ echo "$line"
Of course, if you really want to do this all within bash, you can still make it a one-liner, without using any external tools.
$ for (( i=3 ; i-- ; )); do read line; done < input.txt
$ echo "$line"
There are many ways to achieve the same thing. Pick one that makes sense for your task. Next time, perhaps explain your overall needs a bit better, so we can give you answers more applicable to your situation.
Since, as usual, all the other answers involve trivial and usual stuff (pipe through grep then awk then sed then cut or you-name-it), here's a very unusual and (sadly) not very well-known one (so, I hereby claim that I have the most original answer):
mapfile -s2 -n3 -t < input.txt
echo "$MAPFILE"
I would say this is fairly efficient (mapfile is quite efficient and it's a bash builtin).
Done!
Fast bash version:
i=1
while (( i <= 3 )); do
read -r line
(( i++ ))
done < file.txt
Output
echo "$line" # Third line
"ccccccc"
Explanation
while (( i <= 3 )) - Loop three times, then exit.
read -r line - Read the next line of the file into $line; after the third iteration it holds line 3.
(( i++ )) - Increment $i by 1 on each pass.
done < file.txt - Redirect the file into the while loop.