bash "wc -l" command output differs if call or through tee - bash

When I issued two equivalent commands in Bash, I got different output from the "wc -l" command; see below:
root@devel:~# ls /usr/bin -lha | tee >(wc -l) >(head) > /dev/null
total 76M
drwxr-xr-x 2 root root 20K Nov 11 18:58 .
drwxr-xr-x 10 root root 4.0K Oct 8 15:31 ..
-rwxr-xr-x 1 root root 51K Feb 22 2017 [
-rwxr-xr-x 1 root root 96 Jan 19 2017 2to3-3.5
-rwxr-xr-x 1 root root 23K Mar 22 2017 addpart
lrwxrwxrwx 1 root root 26 May 10 2017 addr2line -> x86_64-linux-gnu-addr2line
lrwxrwxrwx 1 root root 6 Dec 13 2016 apropos -> whatis
-rwxr-xr-x 1 root root 15K Sep 13 19:47 apt
-rwxr-xr-x 1 root root 79K Sep 13 19:47 apt-cache
137
root@devel:~# ls /usr/bin -lha | wc -l
648
What am I missing?
It's strange, but when I call it this way, the output is even stranger:
root@devel:~# ls /usr/bin -lha | tee >(wc) >(wc) > /dev/null
648 6121 39179
648 6121 39179
root@devel:~# ls /usr/bin -lha | tee >(wc) >(wc) > /dev/null
648 6121 39179
648 6121 39179
root@devel:~# ls /usr/bin -lha | tee >(wc) >(wc -l) > /dev/null
648
root@devel:~# 648 6121 39179
It seems like the commands run asynchronously and finish at different times... or what else could it be?

Simple answer:
How to fix:
ls /usr/bin -lha | tee --output-error=exit-nopipe >(wc -l) >(head) > /dev/null
Details:
The head command only prints the beginning of its input, so it can finish its job as soon as it has read enough input, and then exits without waiting for the rest.
So let's replace head with a simpler stand-in, a one-line "head":
ls /usr/bin -lha | tee >(wc -l) >(read l; echo $l) > /dev/null
The simple "head" will read only one line, then exit, which causes that the pipe file gets closed immediately before tee finishes transferring all data to it.
So no doubt, you'll get same result with the simple "head". wc still prints wrong number.
The root cause of your issue, as I think you can conclude yourself, is that one of tee's output pipes gets closed early; tee hits a write error and then stops writing to its other outputs.
Once you understand the root cause, the following section of the man page should be easy to follow:
MODE determines behavior with write errors on the outputs:
  'warn'         diagnose errors writing to any output
  'warn-nopipe'  diagnose errors writing to any output not a pipe
  'exit'         exit on error writing to any output
  'exit-nopipe'  exit on error writing to any output not a pipe
The default MODE for the -p option is 'warn-nopipe'. The default operation
when --output-error is not specified, is to exit immediately on error writing
to a pipe, and diagnose errors writing to non pipe outputs.
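A minimal way to see these modes in action: tee's -p shorthand defaults to 'warn-nopipe', so a pipe that is closed early no longer aborts tee, and wc -l should then report the full count. This is only a sketch of the same pipeline as above, not a command from the question:
ls /usr/bin -lha | tee -p >(wc -l) >(head) > /dev/null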
Some extra words
Actually, if you replace >(wc -l) with a regular file in your problematic command line, you will find that the file size is always 16384, 20480, 32768, 36864, 28672, ..., all of which are multiples of 4096. (The write to the regular file is incomplete because tee aborts early; if the write completed, the file size could be any value.)
4096 is the value of PIPE_BUF on most UNIX-like systems. If you know what PIPE_BUF is, you will easily understand why the file size is always a multiple of 4096.
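A rough way to observe this yourself (out.txt is just an illustrative file name, and stat -c %s assumes GNU stat):
ls /usr/bin -lha | tee >(head) out.txt > /dev/null
stat -c %s out.txt    # typically a multiple of 4096, because tee aborts mid-transfer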

Related

Behaviour of /dev/stdout in Cygwin

I have a script (executed with zsh 5.8; but this should not be relevant in this case) in a Cygwin environment, which takes as a parameter the name of an output file and writes to this file via redirection in various places, like this:
outfile=$1
: >$outfile # Ensure that the file exists and is empty.
.... do some work
command_x >>$outfile
.... do more work
command_y >>$outfile
... and so on
I would like to modify the behaviour of the script so that, if no parameter is supplied, the output of the commands goes to standard output instead. I thought it would be sufficient to modify the script in one line:
outfile=${1:-/dev/stdout}
But nothing is written to stdout. Investigating the case further, I found that instead a regular file named stdout had been created in the /dev directory. It seems that in the Cygwin environment, /dev/stdout does not represent the standard output of the process.
How would I achieve my goal under Cygwin?
UPDATE
As requested by @matzeri, here is a simple testcase:
echo x >/dev/stdout
Expected behaviour: Seeing x on stdout
Real behaviour: A regular file /dev/stdout has been created
On a standard Windows installation, the /dev/std* entries are symlinks to /proc/self/fd/*:
ls -l /dev/std*
lrwxrwxrwx 1 Marco Kein 15 Jun 19 2018 /dev/stderr -> /proc/self/fd/2
lrwxrwxrwx 1 Marco Kein 15 Jun 19 2018 /dev/stdin -> /proc/self/fd/0
lrwxrwxrwx 1 Marco Kein 15 Jun 19 2018 /dev/stdout -> /proc/self/fd/1
If for any reason that is no longer true, they can be recreated by the /etc/postinstall/bash.sh.done script:
$ grep self /etc/postinstall/bash.sh.done
/bin/test -h /dev/stdin || ln -sf /proc/self/fd/0 /dev/stdin || result=1
/bin/test -h /dev/stdout || ln -sf /proc/self/fd/1 /dev/stdout || result=1
/bin/test -h /dev/stderr || ln -sf /proc/self/fd/2 /dev/stderr || result=1
/bin/test -h /dev/fd || ln -sf /proc/self/fd /dev/fd || result=1
With the symlinks in place, the command
$ echo x > /dev/stdout
x
produces the expected output on both Bash and Zsh
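If you would rather not depend on /dev/stdout at all, a hedged alternative sketch is to wrap the redirection in a small helper and only redirect when a file name was supplied (the helper name "out" is illustrative, not part of the original script):
outfile=$1
if [ -n "$outfile" ]; then
    : >"$outfile"        # ensure the file exists and is empty
fi
out() {
    if [ -n "$outfile" ]; then
        "$@" >>"$outfile"
    else
        "$@"
    fi
}
out command_x
out command_y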

Process substitution not working with sudo

From a main bash script run as root, I want to execute a subprocess using sudo as the unprivileged user nobody; that subprocess should source a file whose content is provided by the main script.
I am trying to solve this using bash process substitution, but I cannot get it to work.
Can someone tell me why the following script, ...
#! /bin/bash
sudo -u nobody \
bash -c 'source /dev/stdin || ls -l /dev/stdin /proc/self/fd/0 /proc/$$/fd/0; echo "A=$A"' \
< <(echo "A=$(ls /root/.profile)")
... when run as root, produces the following output?
root@raspi:~# ./test3.sh
bash: line 1: /dev/stdin: Permission denied
lrwxrwxrwx 1 root root 15 Mar 20 20:55 /dev/stdin -> /proc/self/fd/0
lr-x------ 1 nobody nogroup 64 Aug 21 14:38 /proc/3243/fd/0 -> 'pipe:[79069]'
lr-x------ 1 nobody nogroup 64 Aug 21 14:38 /proc/self/fd/0 -> 'pipe:[79069]'
A=
I would expect reading from stdin to work because, as indicated by ls -l, read access to stdin is granted to nobody (which makes sense).
So why does this not work? And is there any way to get it to work?
Answers to this question did not help: as the sample above shows, the code in the <(...) block must access data that only root can read.
To see why you have Permission denied, use ls -lL
sudo -u nobody \
bash -c 'source /dev/stdin || ls -lL /dev/stdin /proc/self/fd/0 /proc/$$/fd/0; echo "A=$A"' \
< <(echo "A=$(ls /root/.profile)")
To get around the error, use cat |
sudo -u nobody \
bash -c 'cat | { source /dev/stdin || ls -lL /dev/stdin /proc/self/fd/0 /proc/$$/fd/0; echo "A=$A"; }' \
< <(echo "A=$(ls /root/.profile)")

Terminal Piping and Writing to File

I am trying to copy the first two items in my 'Downloads' directory using only the terminal.
I open up zsh, cd into my 'Downloads' directory and start typing.
The below reflects what is shown in the terminal:
% ls -lt | head -3
file1.csv
file2.csv (exactly the files I want)
% ls -lt | head -3 > ToBeCopied.txt
% vim ToBeCopied.txt
total 24625744
-rw-r--r-- 1 Aaron staff 0 22 Apr 15:28 ToBeCopied.txt
-rw-r--r--@ 1 Aaron staff 42042 22 Apr 15:16 file1.csv
What happened to file2.csv?
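The likely explanation is that the shell creates (and truncates) ToBeCopied.txt before ls even runs, so the new empty file becomes the most recently modified entry and, together with the "total" line, occupies the top of the listing, pushing file2.csv below the first three lines. A hedged workaround sketch is to write the listing outside the directory being listed (the /tmp path is illustrative):
ls -lt | head -3 > /tmp/ToBeCopied.txt    # the output file no longer appears in the listing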

How to pipe output to stdout and variable? [duplicate]

This question already has answers here:
How to store the output of a command in a variable at the same time as printing the output?
(4 answers)
Closed 2 years ago.
When I run the following script, the output of ls -la is stored in the variable $output.
#! /bin/sh
output=$(ls -la)
How can I pipe the output of ls -la to both stdout and $output?
I am asking in the context of running borgbackup, which can produce output for a long time during backups.
I would like to be able to track progress when I run the script manually, while still storing the output in $output so it can be sent to the sysadmin via email.
Use tee:
output=$(ls -lta | tee /dev/tty)
Another way of doing this is by creating a copy of STDOUT, and again using tee to send the output there:
# Create copy of stdout
exec 3>&1
# Run command
output=$(ls -lta | tee /dev/fd/3)
# Close copy
exec 3>&-
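As a usage sketch for the backup scenario in the question (the backup command and the mail recipient are illustrative assumptions, not taken from the question), the same fd-copy pattern lets you watch progress live and still mail the captured log afterwards:
exec 3>&1                                            # duplicate stdout
output=$(some_backup_command 2>&1 | tee /dev/fd/3)   # watch live, capture for later
exec 3>&-                                            # close the duplicate
printf '%s\n' "$output" | mail -s "backup log" sysadmin@example.com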
Use the tee utility and pass it /dev/tty to print to stdout.
box: ~/demo
➜ out=$(ls -la | tee /dev/tty)
total 16
drwxr-xr-x 4 chen staff 128 Feb 15 15:59 .
drwxr-xr-x+ 103 chen staff 3296 Feb 15 15:59 ..
-rw-r--r-- 1 chen staff 141 Sep 1 12:38 docker-compose.yml
-rw-r--r-- 1 chen staff 84 Sep 1 12:31 ubuntu.Dockerfile
box: ~/demo
➜ echo $out
total 16
drwxr-xr-x 4 chen staff 128 Feb 15 15:59 .
drwxr-xr-x+ 103 chen staff 3296 Feb 15 15:59 ..
-rw-r--r-- 1 chen staff 141 Sep 1 12:38 docker-compose.yml
-rw-r--r-- 1 chen staff 84 Sep 1 12:31 ubuntu.Dockerfile

How to test file permissions with bash

How do you test permissions on files using bash? And how does it work? Does it look at the owner's permissions only, or all of them (owner, group, others)? I used -r and -w to test permissions on some files, but I got some inaccurate responses.
Here is what I did:
[root@server1 ~]# cat script.sh
#!/bin/bash
FILE="$1"
[ $# -eq 0 ] && exit 1
if [[ -r "$FILE" && -w "$FILE" ]]
then
echo "We can read and write the $FILE"
else
echo "Access denied"
fi
[root@server1 ~]# ll file*
-rw-r--r--. 2 root root 1152 Jun 2 18:24 file1
-rwx------. 1 root root 3 Jun 6 20:35 file2
-r--------. 1 root root 3 Jun 6 20:35 file3
--w-------. 1 root root 3 Jun 6 20:35 file4
---x------. 1 root root 3 Jun 6 20:35 file5
----------. 1 root root 3 Jun 6 20:35 file6
[root@server1 ~]#
[root@server1 ~]# ./script.sh file1
We can read and write the file1
[root@server1 ~]# ./script.sh file2
We can read and write the file2
[root@server1 ~]# ./script.sh file3
We can read and write the file3
[root@server1 ~]# ./script.sh file4
We can read and write the file4
[root@server1 ~]# ./script.sh file5
We can read and write the file5
[root@server1 ~]# ./script.sh file6
We can read and write the file6
Thanks
There is nothing essentially wrong with your script. You are executing it as root, so you do have permission to read and write; in fact, root has permission to do almost anything!
Check this post and you will see that even with the permission bits removed, the root user still has access. Your output is correct. If you look at the man page of the test command, you will see that -r and -w test whether the file exists and whether read and write permission, respectively, is granted to the user executing the command (both of them, since you use a logical AND).
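To see the expected "Access denied" results, run the same script as an unprivileged user; a minimal sketch, assuming an ordinary account named alice exists and copying the files somewhere that account can reach (the account name and /tmp path are illustrative):
cp script.sh file6 /tmp && cd /tmp    # make the script and test file reachable by alice
sudo -u alice ./script.sh file6
# should print: Access denied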
