"cat | tr <file1" -- why does cat wait for input instead of reading from file1? - bash

I'm working on recreating my own shell environment modeled on bash, and I found a pretty weird behavior in the real bash: when I enter
cat | tr -d 'o' < file1
(file1 contains only the text "Bonjour")
It outputs Bnjur, so no problem so far, but then it stays in a 'waiting for input' state until I press enter. At first I thought it was cat reading stdin after tr finished, but it doesn't behave that way; it just waits for the user to press enter and (apparently) does nothing.
I saw in some bash documentation that the < redirection redirects the input of the first SimpleCommand (before the first pipe), so it should redirect file1 to cat and then redirect cat's output to tr, and therefore output only Bnjur and nothing else. So why do we have to press enter to exit the command?
Thanks for your help.

The < file1 redirection only applies to the tr command, not the entire pipeline.
So cat is reading from the original standard input, which is connected to the terminal. It's hanging because it's waiting for you to type something.
Meanwhile, tr is reading from the file. It exits when it finishes processing the file.
Once you type something, cat will write it to the pipe. Since tr has exited, there's no reader on the pipe, so cat will get a SIGPIPE signal and terminate.
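A quick way to see this from an interactive bash session (a minimal sketch; PIPESTATUS is bash-specific):
cat | tr -d 'o' < file1    # prints Bnjur, then waits; type a line and press enter
echo "${PIPESTATUS[@]}"    # typically prints "141 0": cat killed by SIGPIPE (128+13), tr exited normally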
If you want the redirection to apply to cat, use
cat < file1 | tr -d 'o'
If you want it to apply to the entire pipeline, you can group it in a subshell:
( cat | tr -d 'o' ) < file1
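A brace group is a sketch of the same idea without the extra subshell (note the semicolon required before the closing brace):
{ cat | tr -d 'o'; } < file1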

You are redirecting input from the file into tr; cat has no redirection of its own, so it is reading from the terminal's stdin. Try this instead.
cat file1 | tr -d 'o'
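If cat isn't doing any real work here, the redirection alone is enough; this is just the simpler form of the same pipeline:
tr -d 'o' < file1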

Related

Why does "ls > out | cat < out" only output the first time I run it in Bash?

I am programming a Bash-like shell. I am having trouble understanding how this interaction works.
This command
ls > out | cat < out
only outputs the ls listing the first time I run it, and then nothing. In zsh it produces output every time, but not in Bash.
You're trying to give the parser conflicting directives.
This is like telling someone to "Turn to the left on your right."
<, >, and | all instruct the interpreter to redirect I/O according to rules.
Look at this bash example:
$: echo two>two # creates a file named two with the word two in it
$: echo one | cat < two <<< "three" << END
four
END
four
$: echo one | cat < two <<< three
three
$: echo one | cat < two
two
$: echo one | cat
one
Understand that putting a pipe character (|) between commands links the output of the first one to the input of the second one, so also giving them input/output redirections that conflict with the pipe is nonsensical. When a command is given several input sources, the last redirection on the command line wins, which is why the examples above print four, three, two, and finally one once no redirection competes with the pipe.
ls | cat # works - output of ls is input for cat
ls > out; cat < out # works - ls outputs to out, then cat reads out
ls > >(cat) # works
cat < <(ls) # works
but ls >out | cat sends the output from ls to out, and then attaches the output of that operation (of which there is none, because it has already been captured by the file) to cat, so cat sees end-of-file immediately and exits without reading or writing anything.
If what you wanted was to have the output both go to a file and to the console, then either use ls > out; cat < out which makes them separate operations, or try
ls | tee out
which explicitly splits the stream to both the file and stdout.
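For example (a small sketch reusing the out file name from the question):
ls | tee out    # the listing appears on the console and is also written to out
cat out         # reads the same listing back from the file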

shell: send grep output to stderr and leave stdout intact

I have a program that outputs to stdout (actually it outputs to stderr, but I can easily redirect that to stdout with 2>&1 or the like).
I would like to run grep on the output of the program and redirect all matches to stderr while leaving the unmatched lines on stdout (alternatively, I'd be happy with getting all lines - not just the unmatched ones - on stdout).
e.g.
$ myprogram() {
cat <<EOF
one line
a line with an error
another line
EOF
}
$ myprogram | greptostderr error >/dev/null
a line with an error
$ myprogram | greptostderr error 2>/dev/null
one line
another line
$
a trivial solution would be:
myprogram | tee logfile
grep error logfile 1>&2
rm logfile
However, I would rather get the matching lines on stderr when they occur, not when the program exits...
Eventually, I found this, which gave me a hint for a POSIX solution like so:
greptostderr() {
while read LINE; do
echo $LINE
echo $LINE | grep -- "$@" 1>&2
done
}
For whatever reason, this does not output anything (probably a buffering problem).
A somewhat ugly solution that seems to work goes like this:
greptostderr() {
while read LINE; do
echo $LINE
echo $LINE | grep -- "$@" | tee /dev/stderr >/dev/null
done
}
Are there any better ways to implement this?
Ideally I'm looking for a POSIX shell solution, but bash is fine as well...
I would use awk instead of grep, which gives you more flexibility in handling both matched and unmatched lines.
myprogram | awk -v p=error '{ print > ($0 ~ p ? "/dev/stderr" : "/dev/stdout")}'
Every line will be printed; the result of $0 ~ p determines whether the line is printed to standard error or standard output. (You may need to adjust the output file names based on your file system.)
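To check it against the myprogram function from the question, silence one stream at a time; the lines shown after each command are the expected output:
myprogram | awk -v p=error '{ print > ($0 ~ p ? "/dev/stderr" : "/dev/stdout")}' 2>/dev/null
one line
another line
myprogram | awk -v p=error '{ print > ($0 ~ p ? "/dev/stderr" : "/dev/stdout")}' >/dev/null
a line with an error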

pipe a command not printing newline (but using \r)

I want to pipe the output of a program that doesn't print newlines, because it uses carriage returns to overwrite its line with new content.
This code reproduces the behavior of the program whose output I'd like to retrieve:
#!/usr/bin/env bash
for i in {1..100};do
echo -ne "[ $i% ] long unneeded log\r"
sleep 0.3
done
I'd like, in a bash script, to cut this output live to display only the important info,
but as the program doesn't print newlines, a ./program | awk ... shows the output only when the command has ended.
I cannot modify the program that gives the output I'm trying to trim
(I don't have its source + I want to share my own script with other users).
I know my request is pretty specific, but is there a way to pipe the output character by character instead of line by line?
You may try
./program | tr '\r' '\n'
You can then continue piping to a third program that would process it line by line.
I found it thanks to a mix of OznOg's answer and Walter-A's link.
Indeed, replacing carriage returns with newlines using tr works,
but its output is buffered by default; stdbuf can unbuffer it with stdbuf -o0.
So the final command is:
./program | stdbuf -o0 tr '\r' '\n' | awk -F'[][]' '{printf $2 "\r"}'
This indeed prints the first match between brackets live, followed by a carriage return.
So a live log that the program keeps updating on a single line, like [ x% ] long compile detail, is abbreviated to just x%, still using one line.
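For reference, a minimal sketch that substitutes the demo loop from the question for ./program; fflush() is added defensively in case awk buffers output that contains no newlines:
for i in {1..5}; do printf '[ %s%% ] long unneeded log\r' "$i"; sleep 0.3; done |
stdbuf -o0 tr '\r' '\n' |
awk -F'[][]' '{ printf "%s\r", $2; fflush() }'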
Change your echo command from:
echo -ne
To:
echo -e
From the echo docs:
‘-n’
Do not output the trailing newline.

Redirecting stdin to a file with the file content being reflected on the console

Is there a way to redirect stdin to a file but at the same time reflect what's being read from the file on the console?
Update: I'm trying to redirect the contents of a file to the standard input of a program, but at the same time reflect the standard input and output of that program on the console. I've tried something like:
echo "$(cat inputfile)" | tee /dev/tty | ./program
which doesn't seem to be the right thing to do.
What you are doing seems fine to me. You can avoid the crazy stuff, though:
tee /dev/tty <inputfile | ./program
echo $(cat) will coincidentally squish whitespace. I assume you used this by mistake, but if that's what you genuinely want to accomplish, try
tr -s '\n\t' ' ' <inputfile | tee /dev/tty | ./program
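A quick check with a stand-in for ./program (the file and command names here are just illustrative; the exact interleaving on the console depends on buffering):
printf 'hello\nworld\n' > inputfile
tee /dev/tty < inputfile | tr a-z A-Z    # console shows hello/world from tee plus HELLO/WORLD from tr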

pipe tail output into another script

I am trying to pipe the output of a tail command into another bash script to process:
tail -n +1 -f your_log_file | myscript.sh
However, when I run it, the $1 parameter (inside myscript.sh) never gets set. What am I missing? How do I pipe the output to be the input parameter of the script?
PS - I want tail to run forever and continue piping each individual line into the script.
Edit
For now the entire contents of myscript.sh are:
echo $1;
Generally, here is one way to handle standard input to a script:
#!/bin/bash
while read line; do
echo $line
done
That is a very rough bash equivalent to cat. It does demonstrate a key fact: each command inside the script inherits its standard input from the shell, so you don't really need to do anything special to get access to the data coming in. read takes its input from the shell, which (in your case) is getting its input from the tail process connected to it via the pipe.
As another example, consider this script; we'll call it 'mygrep.sh'.
#!/bin/bash
grep "$1"
Now the pipeline
some-text-producing-command | ./mygrep.sh bob
behaves identically to
some-text-producing-command | grep bob
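For instance (a tiny sketch; any text source works):
printf 'alice\nbob\ncarol\n' | ./mygrep.sh bob
bob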
$1 is set if you call your script like this:
./myscript.sh foo
Then $1 has the value "foo".
The positional parameters and standard input are separate; you could do this
tail -n +1 -f your_log_file | myscript.sh foo
Now standard input is still coming from the tail process, and $1 is still set to 'foo'.
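A small sketch of a myscript.sh that uses both, with the positional parameter as a prefix and standard input as the data (the names are just for illustration):
#!/bin/bash
prefix=$1                  # first argument, e.g. 'foo'
while read -r line; do     # lines arrive on stdin, e.g. from tail
echo "$prefix: $line"
done
Run as tail -n +1 -f your_log_file | ./myscript.sh foo, it prints each new log line prefixed with "foo: ".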
Perhaps you were thinking of awk?
tail -n +1 -f your_log_file | awk '{
print $1
}'
would print the first column from the output of the tail command.
In the shell, a similar effect can be achieved with:
tail -n +1 -f your_log_file | while read first junk; do
echo "$first"
done
Alternatively, you could put the whole while ... done loop inside myscript.sh
Piping connects the output (stdout) of one process to the input (stdin) of another process. stdin is not the same thing as the arguments sent to a process when it starts.
What you want to do is convert the lines in the output of your first process into arguments for the second process. This is exactly what the xargs command is for.
All you need to do is put xargs between tail and your script, so that each line becomes the argument list for a fresh invocation of the script:
tail -n +1 -f your_log_file | xargs -L1 ./myscript.sh
