Why is there a difference in behavior when commands are piped to bash versus when bash reads the commands from a file? - bash

Consider this bash script
$ cat sample.sh
echo "PRESS ENTER:"
read continue;
echo "DONE";
If I run it this way, the script exits after the first echo without waiting for the read:
$ cat sample.sh | bash --noprofile --norc
PRESS ENTER:
However, if I run it this way, it works as expected:
$ bash --noprofile --norc sample.sh
PRESS ENTER:
DONE
Why the difference?

In the first instance, read will absorb echo "DONE"; as its input, since both the script and the user input for read come from stdin.
$ cat sample.sh
echo "PRESS ENTER:"
read continue;
echo "DONE";
echo "REALLY DONE ($continue)";
$ cat sample.sh | bash --noprofile --norc
PRESS ENTER:
REALLY DONE (echo "DONE";)
$

If you add echo "$continue" to the end, the issue becomes obvious:
(Also I removed the semicolons since they do nothing.)
$ cat test.sh
echo "PRESS ENTER:"
read continue
echo "DONE"
echo "$continue"
$ bash test.sh
PRESS ENTER:
foo
DONE
foo
$ bash < test.sh
PRESS ENTER:
echo "DONE"
read continue takes echo "DONE" as its input, since that is the next line arriving on stdin.
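If the script really has to arrive on stdin but read still needs to prompt the user, one common workaround (not taken from the answers above) is to have read take its input from the controlling terminal instead:
echo "PRESS ENTER:"
read continue < /dev/tty    # read from the terminal, not from the script text on stdin
echo "DONE ($continue)"
Because read now gets its input from /dev/tty, cat sample.sh | bash --noprofile --norc pauses as expected, assuming the pipeline is started from an interactive terminal.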

Related

Bash use of zenity with console redirection

In an effort to create more manageable scripts that write their own output to a single location (via 'exec > file'), is there a better solution than the one below for combining stdout redirection with zenity (which in this use relies on piped stdout)?
parent.sh:
#!/bin/bash
exec >> /var/log/parent.out
( true; sh child.sh ) | zenity --progress --pulsate --auto-close --text='Executing child.sh'
[[ "$?" != "0" ]] && exit 1
...
child.sh:
#!/bin/bash
exec >> /var/log/child.out
echo 'Now doing child.sh things..'
...
When doing something like:
sh child.sh | zenity --progress --pulsate --auto-close --text='Executing child.sh'
zenity never receives stdout from child.sh since it is being redirected from within child.sh. Even though it seems to be a bit of a hack, is using a subshell containing a 'true' + execution of child.sh acceptable? Or is there a better way to manage stdout?
I get that 'tee' is acceptable to use in this scenario, though I would rather not have to write out child.sh's logfile location each time I want to execute child.sh.
Your redirection exec > stdout.txt will lead to an error as soon as you try to read the file back:
$ exec > stdout.txt
$ echo hello
$ cat stdout.txt
cat: stdout.txt: input file is output file
You need an intermediary file descriptor.
$ exec 3> stdout.txt
$ echo hello >&3
$ cat stdout.txt
hello
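Applied to the zenity case, here is a sketch of child.sh that keeps a copy of its original stdout (the pipe to zenity) on file descriptor 3 before redirecting to the log; the file names come from the question and the progress line is only illustrative:
#!/bin/bash
exec 3>&1                    # FD 3 keeps pointing at the original stdout (the zenity pipe)
exec >> /var/log/child.out   # from here on, plain echo goes to the log
echo 'Now doing child.sh things..'       # written to child.out
echo '# Working on child.sh tasks' >&3   # written to the zenity pipe as a progress text update
With this, progress lines still reach zenity even though the rest of the output goes to the log.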

How to pass an argument in a bash pipe from the terminal

I have a bash script, shown below, in a file called test.sh:
#!/usr/bin/env bash
echo $1
echo "execution done"
When I execute this script using
Case-1
./test.sh "started"
started
execution done
the output is shown properly.
Case-2
If I execute it with
bash test.sh "started"
I get the output
started
execution done
But I would like to execute this using a cat or wget command with arguments.
For example:
Q1
cat test.sh |bash
Or using a command
Q2
wget -qO - "url contain bash" |bash
So in Q1 and Q2, how do I pass arguments?
Something similar to what is shown in this GitHub repo:
https://github.com/creationix/nvm
Please refer to its installation script.
$ bash <(curl -Ls url_contains_bash_script) arg1 arg2
Explanation:
$ echo -e 'echo "$1"\necho "done"' >test.sh
$ cat test.sh
echo "$1"
echo "done"
$ bash <(cat test.sh) "hello"
hello
done
$ bash <(echo -e 'echo "$1"\necho "done"') "hello"
hello
done
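If you do want to keep the pipe from Q1 and Q2, bash's -s option makes it read the script from stdin while still accepting positional parameters; a sketch reusing the test.sh above (the wget URL is the question's placeholder):
$ cat test.sh | bash -s -- "hello"
hello
done
$ wget -qO - "url contain bash" | bash -s -- arg1 arg2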
You don't need to pipe to bash; bash is already the shell running in your terminal.
If I have a script and have to use cat, this is what I'd do:
cat script.sh > file.sh; chmod 755 file.sh; ./file.sh arg1 arg2 arg3
script.sh is the source script. You can replace that call with anything you want.
This has security implications, though: you are running arbitrary code in your shell, especially with wget, where the code comes from a remote location.

Read full stdin until EOF when stdin comes from `cat` - bash

I'm trying to read the full stdin into a variable:
script.sh
#!/bin/bash
input=""
while read line
do
echo "$line"
input="$input""\n""$line"
done < /dev/stdin
echo "$input" > /tmp/test
When I run ls | ./script.sh or almost any other command, it works fine.
However, it doesn't work when I run cat | ./script.sh, enter my message, and then hit Ctrl-C to exit cat.
Any ideas?
I would stick to the one-liner
input=$(cat)
Of course, Ctrl-D should be used to signal end-of-file.
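A minimal version of script.sh built around that one-liner, with the paths taken from the question:
#!/bin/bash
input=$(cat)                  # read everything from stdin until EOF
printf '%s\n' "$input" > /tmp/test
With cat | ./script.sh, finish your input with Ctrl-D; Ctrl-C sends SIGINT to the whole pipeline and kills the script before it can write the file.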

Bash script: how to get the whole command line which ran the script

I would like to run a bash script and be able to see the command line used to launch it:
sh myscript.sh arg1 arg2 1> output 2> error
in order to know if the user used the "std redirection" '1>' and '2>', and therefore adapt the output of my script.
Is it possible with built-in variables?
Thanks.
On Linux and some Unix-like systems, /proc/self/fd/1 and /proc/self/fd/2 are symlinks to wherever your standard streams point. Using readlink, we can check whether they were redirected by comparing them to the parent process's file descriptors.
We will, however, not use self but $$, because the command substitution $(readlink /proc/self/fd/1) spawns a subshell, so self would no longer refer to the current bash script but to that subshell.
$ cat test.sh
#!/usr/bin/env bash
#errRedirected=false
#outRedirected=false
parentStderr=$(readlink /proc/"$PPID"/fd/2)
currentStderr=$(readlink /proc/"$$"/fd/2)
parentStdout=$(readlink /proc/"$PPID"/fd/1)
currentStdout=$(readlink /proc/"$$"/fd/1)
[[ "$parentStderr" == "$currentStderr" ]] || errRedirected=true
[[ "$parentStdout" == "$currentStdout" ]] || outRedirected=true
echo "$0 ${outRedirected:+>$currentStdout }${errRedirected:+2>$currentStderr }$#"
$ ./test.sh
./test.sh
$ ./test.sh 2>/dev/null
./test.sh 2>/dev/null
$ ./test.sh arg1 2>/dev/null # You will lose the argument order!
./test.sh 2>/dev/null arg1
$ ./test.sh arg1 2>/dev/null >file ; cat file
./test.sh >/home/camusensei/file 2>/dev/null arg1
$
Do not forget that the user can also redirect to a 3rd file descriptor which is open on something else...!
Not really possible. You can check whether stdout and stderr are pointing to a terminal: [ -t 1 -a -t 2 ]. But if they do, it doesn't necessarily mean they weren't redirected (think >/dev/tty5). And if they don't, you can't distinguish between stdout and stderr being closed and them being redirected. And even if you know for sure they are redirected, you can't tell from the script itself where they point after redirection.
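For illustration, that -t check could be used like this (a sketch only; as noted, it tells you whether a descriptor is a terminal, not whether the user redirected it):
#!/usr/bin/env bash
# Report each descriptor's status on the other stream, so the message
# survives the very redirection being tested.
if [ -t 1 ]; then
    echo "stdout is a terminal" >&2
else
    echo "stdout is not a terminal (redirected, piped, or closed)" >&2
fi
if [ -t 2 ]; then
    echo "stderr is a terminal"
else
    echo "stderr is not a terminal (redirected, piped, or closed)"
fi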

Command prompt appearing during execution of a bash script

This is a simplified version of a script I use:
In its simplified version, it should read the file input line by line, print each line to standard output, and also write it to a file named log.
input file:
asas
haha
asha
hxa
The script (named simple):
#!/bin/bash
FILE=input
logfile="log"
exec > >(tee "$logfile") # redirect the output to a file but keep it on stdout
exec 2>&1
DONE=false
until $DONE; do
read || DONE=true
[[ ! $REPLY ]] && continue #checks if a line is empty
echo "----------------------"
echo $REPLY
done < "$FILE"
echo "----------------------"
echo ">>> Finished"
The output (on console):
-bash-3.2$ ./simple
-bash-3.2$ ----------------------
asas
----------------------
haha
----------------------
asha
----------------------
hxa
----------------------
>>> Finished
At this time I need to press enter to terminate the script. Notice that a command prompt -bash-3.2$ showed up during execution.
I checked that those lines are to blame:
exec > >(tee "$logfile") # redirect the output to a file but keep it on stdout
exec 2>&1
Without them the output is as expected:
-bash-3.2$ ./simple
----------------------
asas
----------------------
haha
----------------------
asha
----------------------
hxa
----------------------
>>> Finished
-bash-3.2$
What is more I don't need to press enter to terminate the script.
Unfortunately, I need to save the output both to the console (stdout) and to a log file.
How can this be fixed?
You can use tee on the echo lines directly.
For example:
[ben@lappy ~]$ echo "echo and save to log" | tee -a example.log
echo and save to log
[ben@lappy ~]$ cat example.log
echo and save to log
The -a argument to tee will append to the log.
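Applied to the script in the question, that per-line approach would look roughly like this inside the loop (only the echo lines change; logfile is the variable from the question):
echo "----------------------" | tee -a "$logfile"
echo "$REPLY" | tee -a "$logfile"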
If you just need the script to pause and wait for user input, use read:
read -r -p "Press Enter to continue..."
What's happening is that tee "$logfile" is being run asynchronously. When you use process substitution like that, the main script doesn't wait for the process to exit.
So the until loop runs, the main script exits, the shell prints the prompt, and then tee prints its output.
You can see this more easily with:
echo Something > >(sleep 5; cat)
You'll get a command prompt, and then 5 seconds later Something will appear.
There was a thread about this behavior in comp.unix.shell a couple of years ago.
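One common way to avoid the asynchronous process substitution altogether (a sketch, not taken from that thread) is to pipe a command group to tee, so the script only finishes once tee does:
#!/bin/bash
FILE=input
logfile="log"
{
    DONE=false
    until $DONE; do
        read || DONE=true
        [[ ! $REPLY ]] && continue    # skip empty lines
        echo "----------------------"
        echo "$REPLY"
    done < "$FILE"
    echo "----------------------"
    echo ">>> Finished"
} 2>&1 | tee "$logfile"    # the script waits for the pipeline, so no early prompt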
