Difference between "echo 'hello'; ls" and "echo 'hello' && ls" in bash?

I wonder what the difference between
"echo 'hello'; ls"
and
"echo 'hello' && ls"
is? They both seem to do the same thing.

"echo 'hello' && ls" means : execute "ls" if "echo 'hello'" runs successfully. To understand what is "successful" in bash. Try this :
bash> cd /
bash> echo $?
If the previous command ran successfully, you should see 0.
After that, try this:
bash> asdfdf
bash> echo $?
You should see a non-zero value between 1 and 255. This means the previous command didn't run successfully.
On the other hand, "echo 'hello'; ls" means execute "ls" whether "echo 'hello'" runs successfully or not.
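A quick way to see the difference for yourself (an illustrative session; false is a standard command that always fails):
$ false && echo ran
$ echo $?
1
$ false; echo ran
ran
With &&, echo never runs because false failed; with ;, it runs regardless.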

The && is the logical AND operator. The idea in its use in command1 && command2 is that command2 is only evaluated/run if command1 was successful. So here ls will only be run if the echo command returned success (which will always be the case here, but you never know ;-). You could also write this as:
if echo 'hello'
then
    ls
fi
The semicolon just delimits two commands. So you could also write echo 'hello' ; ls as:
echo 'hello'
ls
Thus ls will also be executed even when echo fails.
BTW, successful in this context means that the program exited with status 0 (e.g. via exit(0)); the $? shell variable tells you the return status of the last executed command.
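As an aside (an illustration, not part of the answer above): || is the complement of &&, running the second command only when the first one fails, which is why constructs like command || handle-error work:
$ false || echo 'first command failed'
first command failed
$ true || echo 'this is never printed'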

To supplement DarkDust's answer:
So here ls will only be run if the echo command returned success (which will always be the case here, but you never know ;-).
Well, not always. For example, the standard output stream may be unwritable:
:; ( echo "foo" && touch /tmp/succeeded) >/dev/full
bash: echo: write error: No space left on device
:; ls -l /tmp/succeeded
ls: cannot access /tmp/succeeded: No such file or directory

Here, they do the same thing.
But take another example:
cp file target && ls
cp file target; ls
In the first case, ls would only be executed if cp succeeds.
In the second case, no matter if cp fails or succeeds, ls will always be executed.
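To make this concrete, an illustrative session in a directory containing only notes.txt (a hypothetical file; the error text is from GNU cp):
$ cp missing.txt copy.txt && ls
cp: cannot stat 'missing.txt': No such file or directory
$ cp missing.txt copy.txt; ls
cp: cannot stat 'missing.txt': No such file or directory
notes.txt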

How can I prevent a bash script with set -e from exiting when ls fails?

I have a bash script with -e option set, which fails the whole script on the very first error.
In the script, I am trying to do an ls on a directory, but that path may or may not exist. If the path does not exist, the ls command fails, and since the -e flag is set, the whole script exits.
Is there a way by which I can prevent the script from failing?
As a side note, I have tried the trick of doing set +e before the command and set -e after it, and it works. But I am looking for a better solution.
You can "catch" the error using || and a command guaranteed to exit with 0 status:
ls $PATH || echo "$PATH does not exist"
Since the compound command succeeds whether or not $PATH exists, set -e is not triggered and your script will not exit.
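For example (an illustrative run; the error text is from GNU ls):
$ set -e
$ ls /no/such/dir || echo "/no/such/dir does not exist"
ls: cannot access '/no/such/dir': No such file or directory
/no/such/dir does not exist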
To suppress the error silently, you can use the true command:
ls $PATH || true
To execute multiple commands, you can use one of the compound commands:
ls $PATH || { command1; command2; }
or
ls $PATH || ( command1; command2 )
Just be sure that nothing inside the compound command fails either. One benefit of the second form is that you can turn off immediate-exit mode inside the subshell without affecting it in the current shell:
ls $PATH || ( set +e; do-something-that-might-fail )
Another option is to use trap to run a handler when the shell exits (the EXIT condition):
trap 'echo "ls failed" ; some_rescue_action' EXIT
ls /non_exist
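Note that the EXIT trap also fires on a normal exit, so you may want to check the final status inside the handler. A sketch along those lines (an elaboration of the idea above, with some_rescue_action kept as a placeholder):
#!/usr/bin/env bash
set -e
on_exit() {
    status=$?                      # the script's final exit status
    if [ "$status" -ne 0 ]; then
        echo "ls failed with status $status" >&2
        # some_rescue_action
    fi
}
trap on_exit EXIT
ls /non_exist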
One solution would be to test for the existence of the folder first:
function myLs() {
    LIST=""
    folder=$1
    [ -z "$folder" ] && folder="."            # default to the current directory
    [ -d "$folder" ] && LIST=$(ls "$folder")  # only run ls if the folder exists
    echo "$LIST"
}
This way, bash won't fail if $folder does not exist.
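A hypothetical usage session (the directory name is illustrative):
$ myLs /no/such/dir

$ echo $?
0
The function prints an empty line and returns 0, because echo, the last command in the function, succeeds even when the directory test fails.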

Bash script: how to get the whole command line which ran the script

I would like to run a bash script and be able to see the command line used to launch it:
sh myscript.sh arg1 arg2 1> output 2> error
in order to know whether the user used the "std redirection" operators '1>' and '2>', and adapt the output of my script accordingly.
Is it possible with built-in variables?
Thanks.
On Linux and some Unix-like systems, /proc/self/fd/1 and /proc/self/fd/2 are symlinks to wherever your standard streams are pointing. Using readlink, we can query whether they were redirected by comparing them to the parent process's file descriptors.
We will, however, not use self but $$, because the command substitution $(readlink /proc/self/fd/1) would run in a subshell, so self would no longer refer to the current bash script but to the subshell.
$ cat test.sh
#!/usr/bin/env bash
#errRedirected=false
#outRedirected=false
# Compare this script's fds with its parent's: if a stream's target
# differs, the user redirected it when launching the script.
parentStderr=$(readlink /proc/"$PPID"/fd/2)
currentStderr=$(readlink /proc/"$$"/fd/2)
parentStdout=$(readlink /proc/"$PPID"/fd/1)
currentStdout=$(readlink /proc/"$$"/fd/1)
[[ "$parentStderr" == "$currentStderr" ]] || errRedirected=true
[[ "$parentStdout" == "$currentStdout" ]] || outRedirected=true
echo "$0 ${outRedirected:+>$currentStdout }${errRedirected:+2>$currentStderr }$@"
$ ./test.sh
./test.sh
$ ./test.sh 2>/dev/null
./test.sh 2>/dev/null
$ ./test.sh arg1 2>/dev/null # You will lose the argument order!
./test.sh 2>/dev/null arg1
$ ./test.sh arg1 2>/dev/null >file ; cat file
./test.sh >/home/camusensei/file 2>/dev/null arg1
$
Do not forget that the user can also redirect to a third file descriptor which is open on something else entirely!
Not really possible. You can check whether stdout and stderr are pointing to a terminal: [ -t 1 -a -t 2 ]. But if they do, it doesn't necessarily mean they weren't redirected (think >/dev/tty5). And if they don't, you can't distinguish between stdout and stderr being closed and them being redirected. And even if you know for sure they are redirected, you can't tell from the script itself where they point after redirection.
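For illustration, a minimal sketch of that terminal test (check.sh is a hypothetical name; two [ tests are used instead of the obsolescent -a):
#!/bin/sh
if [ -t 1 ] && [ -t 2 ]; then
    echo "stdout and stderr are both terminals"
else
    echo "at least one of them is redirected, piped, or closed"
fi
Running ./check.sh at an interactive prompt takes the first branch; ./check.sh > out takes the second.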

Why does echo $? after a pipeline always return "0"?

I have noticed this behavior, but I don't know why:
cat abc | echo $?
Even if abc does not exist, the above command still returns 0. Does anyone know why?
The reason why it must be this way is that a pipeline is made of processes running simultaneously. cat's exit code can't possibly be passed to echo as an argument because arguments are set when the command begins running, and echo begins running before cat has finished.
echo doesn't take input from stdin, so echo on the right side of a pipe character is always a mistake.
UPDATE:
Since it is now clear that you are asking about a real problem, not just misunderstanding what you saw, I tried it myself. I get what I think is the correct result (1) from a majority of shells I tried (dash, zsh, pdksh, posh, and bash 4.2.37) but 0 from bash 4.1.10 and ksh (Version JM 93u+ 2012-02-29).
I assume the change in bash's behavior between versions is intentional, and the 4.1.x behavior is considered a bug. You'd probably find it in the changelog if you looked hard enough. Don't know about ksh.
csh and tcsh (with $status in place of $?) also say 0, but I bet nobody cares about that.
People with bigger shell collections are invited to test:
for sh in /bin/sh /bin/ksh /bin/bash /bin/zsh ...insert more shells here...; do
    echo -n "$sh "
    $sh -c 'false;true|echo $?'
done
It does not have anything to do with cat abc, but with the previous command you executed: the status you get when doing cat abc | echo $? tells you whether the command you ran before it was successful or not.
From man bash:
Special Parameters
? - Expands to the exit status of the most recently executed foreground pipeline.
So when you do:
cat abc | echo $?
The echo $? refers to the previous command you used, not cat abc.
Example
$ cat a
hello
$ echo $?
0
$ cat aldsjfaslkdfj
cat: aldsjfaslkdfj: No such file or directory
$ echo $?
1
So
$ cat a
hello
$ cat a | echo $?
0
$ cat aldsjfaslkdfj
cat: aldsjfaslkdfj: No such file or directory
$ cat a | echo $?
1
echo $? gives the exit status of the command you executed before the pipeline, not that of the piped command. So you can get 0 from echo $? even if the command on the left of the pipe failed.
You pipe the output from 'cat abc' to 'echo $?', which is not what you want.
You want to echo the exit code of 'cat':
cat abc; echo $?
is what you want. Or simply write it in two lines if you can.
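As a bash-specific aside (not covered in the answers above): if you do want the exit status of the command on the left side of a pipe, bash records the status of every pipeline member in the PIPESTATUS array:
$ cat abc | wc -l
cat: abc: No such file or directory
0
$ echo "${PIPESTATUS[@]}"
1 0
The 1 is cat's exit status and the 0 is wc's.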

Getting exit code of last shell command in another script

I am trying to beef up my notify script. The way the script works is that I put it after a long-running shell command, and then all sorts of notifications get invoked after the long-running command finishes.
For example:
sleep 100; my_notify
It would be nice to get the exit code of the long-running command. The problem is that calling my_notify creates a new process that does not have access to the $? variable.
Compare:
~ $: ls nonexisting_file; echo "exit code: $?"; echo "PPID: $PPID"
ls: nonexisting_file: No such file or directory
exit code: 1
PPID: 6203
vs.
~ $: ls nonexisting_file; my_notify
ls: nonexisting_file: No such file or directory
exit code: 0
PPID: 6205
The my_notify script has the following in it:
#!/bin/sh
echo "exit code: $?"
echo "PPID: $PPID"
I am looking for a way to get the exit code of the previous command without changing the structure of the command too much. I am aware that if I changed it to work more like time, e.g. my_notify longrunning_command..., my problem would be solved, but I actually like that I can tack it onto the end of a command, and I fear the complications of this second approach.
Can this be done or is it fundamentally incompatible with the way that shells work?
My shell is Z shell (zsh), but I would like it to work with Bash as well.
You'd really need to use a shell function in order to accomplish that. For a simple script like that it should be pretty easy to have it working in both zsh and bash. Just place the following in a file:
my_notify() {
    echo "exit code: $?"
    echo "PPID: $PPID"
}
Then source that file from your shell startup files. Since the function runs within your interactive shell, you may want to use $$ rather than $PPID.
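A hypothetical session, assuming the function above was saved in ~/.my_notify.sh (the file name is illustrative):
$ source ~/.my_notify.sh
$ ls nonexisting_file; my_notify
ls: nonexisting_file: No such file or directory
exit code: 1
PPID: (the parent of your interactive shell)
Because my_notify is now a function running inside the current shell, $? still holds the exit code of ls when the function reads it.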
It is incompatible. $? only exists within the current shell; if you want it available in subprocesses then you must copy it to an environment variable.
The alternative is to write a shell function that uses it in some way instead.
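A sketch of the environment-variable approach (RC is an illustrative name): assignment prefixes are expanded before the command runs, so $? can be handed to the child process:
$ ls nonexisting_file; RC=$? my_notify
with the my_notify script reading the variable instead of $?:
#!/bin/sh
echo "exit code: ${RC:-unknown}"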
One method to implement this could be to use a heredoc in a master script which generates your my_notify script:
#!/bin/bash
# Remove any previously generated script.
if [ -f my_notify ]; then
    rm -f my_notify
fi

# Capture the exit status of the long-running command.
retval=$(ls non_existent_file &> /dev/null; echo $?)
ppid=$PPID
echo "retval=$retval"
echo "ppid=$ppid"

# The heredoc delimiter must be unquoted so that $retval and $ppid
# are expanded now, while the my_notify script is being generated.
cat >> my_notify << EOF
#!/bin/bash
echo "exit code: $retval"
echo "PPID = $ppid"
EOF

sh my_notify
You can refine this script for your purpose.

"set -e" in shell and command substitution

In shell scripts, set -e is often used to make them more robust by stopping the script when any command executed by the script exits with a non-zero exit code.
It's usually easy to specify that you don't care about some of the commands succeeding by adding || true at the end.
The problem appears when you actually care about the return value, but don't want the script to stop on non-zero return code, for example:
output=$(possibly-failing-command)
if [ 0 == $? -a -n "$output" ]; then
    ...
else
    ...
fi
Here we want to both check the exit code (thus we can't use || true inside the command substitution) and get the output. However, if the command in the command substitution fails, the whole script stops due to set -e.
Is there a clean way to prevent the script from stopping here without unsetting -e and setting it back afterwards?
Yes: inline the command substitution in the if statement.
#!/bin/bash
set -e
if ! output=$(possibly-failing-command); then
    ...
else
    ...
fi
Command Fails
$ ( set -e; if ! output=$(ls -l blah); then echo "command failed"; else echo "output is -->$output<--"; fi )
/bin/ls: cannot access blah: No such file or directory
command failed
Command Works
$ ( set -e; if ! output=$(ls -l core); then echo "command failed"; else echo "output is: $output"; fi )
output is: -rw------- 1 siegex users 139264 2010-12-01 02:02 core
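If you also need the actual exit code rather than just success/failure, a small variation works under set -e (a sketch; possibly-failing-command is still a placeholder):
#!/bin/bash
set -e
if output=$(possibly-failing-command); then
    status=0
else
    status=$?    # the command's exit status; set -e is not triggered here
fi
echo "status=$status"
Because the assignment is the condition of the if, a failure does not abort the script, and $? at the start of the else branch holds the failing command's exit status.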
