In bash, in this specific case, echo behaves like so:
$ bash -c 'echo "a\nb"'
a\nb
but in zsh the same command turns out very differently:
$ zsh -c 'echo "a\nb"'
a
b
and fwiw in fish, because I was curious:
$ fish -c 'echo "a\nb"'
a\nb
I did realize that I can run:
$ zsh -c 'echo -E "a\nb"'
a\nb
But now I am worried that I'm about to stumble into more gotchas on such a basic operation. (Hence my investigation into fish: if I'm going to have to make changes at such a low level for zsh, why not go all the way and switch to something that is upfront about being so different?)
I couldn't find any documentation clarifying this echo difference between bash and zsh, or any page directly listing the differences, so can someone here list them out? And maybe direct me to a broader set of potentially impactful gotchas, of which this would be one, to watch for when making the switch?
In general, prefer printf for consistent results.
If you need a predictable, consistent echo implementation, you can override it with your own function.
This will behave the same regardless of the shell:
echo() { printf '%s\n' "$*"; }
echo "a\nb"
echo is good and portable only for printing literal strings that end with a newline; it's not good for anything more complex. You can read more about it in this answer and in the ShellCheck documentation here.
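As a small illustration (not from the linked answers), a value that merely looks like an echo option can vanish entirely, while printf handles it fine:
$ var='-n'
$ echo "$var"          # bash's echo treats -n as an option and prints nothing
$ printf '%s\n' "$var"
-n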
Even though, according to POSIX, each implementation has to understand these character sequences without any additional options, you cannot rely on that. As you already noticed, in Bash for example echo 'a\nb' produces a\nb, but that can be changed with the xpg_echo shell option:
$ echo 'a\nb'
a\nb
$ shopt -s xpg_echo
$ echo 'a\nb'
a
b
Regarding "And maybe direct me to any broader set of potentially impactful gotchas when making the switch, that would cover this case?":
Notice that the inconsistency between different echo implementations can manifest itself not only in the shell itself but also in places where the shell is used indirectly, for example in a Makefile. I once came across a Makefile that looked like this:
all:
	@echo "target\tdoes this"
	@echo "another-target\tdoes this"
make uses /bin/sh to run these commands, so if /bin/sh is a symlink to bash on your system, what you get is:
$ make
target\tdoes this
another-target\tdoes this
If you want portability in shell, use printf. This:
printf 'a\nb\n'
should produce the same output in most shells.
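As a quick sketch of why printf is handy here: it reuses its format string for each remaining argument, so a single call can print one argument per line in any POSIX shell:
$ printf '%s\n' a b 'c d'
a
b
c d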
echo is a shell builtin, so its output depends on the shell you call it from: zsh's echo interprets escape sequences (\n, \t, \v, \r, and so on, as well as ANSI escape sequences) by default, whereas bash's echo does not.
In bash, you'll have to supply the -e flag to have escape sequences like \n interpreted:
#Input:
echo "[text-to-print]\n[text-to-print-on-newline]"
#Output:
[text-to-print]\n[text-to-print-on-newline]
#Input:
echo -e "[text-to-print]\n[text-to-print-on-newline]"
#Output:
[text-to-print]
[text-to-print-on-newline]
Using zsh, the interpreter does the escape sequence interpretation by itself, and the output is the same regardless of the -e flag:
#Input:
echo "[text-to-print]\n[text-to-print-on-newline]"
#Output:
[text-to-print]
[text-to-print-on-newline]
#Input:
echo -e "[text-to-print]\n[text-to-print-on-newline]"
#Output:
[text-to-print]
[text-to-print-on-newline]
That explains the discrepancy you're seeing between shell interpreters.
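If you actually want the two shells to agree, one option (a sketch, assuming a reasonably recent zsh) is to turn off escape interpretation on the zsh side, either per call with -E or globally with the BSD_ECHO option:
$ echo -E 'a\nb'      # -E disables escape interpretation for this one call
a\nb
$ setopt BSD_ECHO     # makes zsh's echo behave like BSD/bash echo by default
$ echo 'a\nb'
a\nb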
Here is a simple test case script which behaves differently in zsh vs bash when I run it with $ source test_script.sh from the command line. I don't know why there is a difference, given that my shebang clearly states that I want bash to run my script, other than the fact that which is a built-in in zsh and a program in bash. (FYI: the shebang directory is where my bash program lives, which may not be the same as yours; I installed a new version using Homebrew.)
#!/usr/local/bin/bash
if [ "$(which ls)" ]; then
    echo "ls command found"
else
    echo "ls command not found"
fi
if [ "$(which foo)" ]; then
    echo "foo command found"
else
    echo "foo command not found"
fi
I run this script with source ./test_script.sh from both zsh and bash.
Output in zsh:
ls command found
foo command found
Output in bash:
ls command found
foo command not found
My understanding is that by default, test or [ ] (which are the same thing) evaluates a string to true if it's not empty/null. To illustrate:
zsh:
$ which foo
foo not found
bash:
$ which foo
$
Moreover, if I redirect standard error in zsh like this:
$ which foo 2> /dev/null
foo not found
zsh still seems to send foo not found to standard output, which is why (I am guessing) my test case passed for both under zsh: the expansion of "$(which xxx)" returned a non-empty string in both cases (e.g. /some/directory and foo not found). Does zsh ALWAYS return a string?
Lastly, if I remove the double quotes (e.g. $(which xxx)), zsh gives me an error. Here is the output:
ls command found
test_script.sh:27: condition expected: not
I am guessing zsh wanted me to use [ ! "$(which xxx)" ], but I don't understand why. It never gave that error when running in bash (and isn't this supposed to run in bash anyway?!).
Why isn't my script using bash? Why is something as trivial as this not working? I understand how to make it work fine in both using the -e option, but I simply want to understand why this is all happening. It's driving me bonkers.
There are two separate problems here.
First, the proper command to use is type, not which. Like you note, the command which is a zsh built-in, whereas in Bash, it will execute whatever which command happens to be on your system. There are many variants with different behaviors, which is why POSIX opted to introduce a replacement instead of trying to prescribe a particular behavior for which -- then there would be yet one more possible behavior, and no way to easily root out all the other legacy behaviors. (One early common problem was with a which command which would examine the csh environment, even if you actually used a different shell.)
Secondly, examining a command's string output is a serious antipattern, because strings differ between locales ("not found" vs. "nicht gefunden" vs. "ei löytynyt" vs. etc etc) and program versions -- the proper solution is to examine the command's exit code.
if type ls >/dev/null 2>&1; then
    echo "ls command found"
else
    echo "ls command not found"
fi
if type foo >/dev/null 2>&1; then
    echo "foo command found"
else
    echo "foo command not found"
fi
(A related antipattern is to examine $? explicitly. There is very rarely any need to do this, as it is done naturally and transparently by the shell's flow control statements, like if and while.)
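For example (a hedged sketch; the pattern and file name are placeholders), compare the antipattern with the idiomatic form:
# Antipattern: checking $? by hand
grep -q pattern somefile
if [ $? -eq 0 ]; then
    echo "found"
fi
# Idiomatic: let if test the command's exit status directly
if grep -q pattern somefile; then
    echo "found"
fi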
Regarding quoting, the shell performs whitespace tokenization and wildcard expansion on unquoted values, so if $string is command not found, the expression
[ $string ]
without quotes around the value evaluates to
[ command not found ]
which looks to the shell like the string "command" followed by some cruft which isn't syntactically valid.
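A quick illustrative transcript (in bash; the exact error text may vary by version):
$ string="command not found"
$ [ $string ]
bash: [: too many arguments
$ [ "$string" ] && echo "non-empty"
non-empty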
Lastly, as we uncovered in the chat session (linked from the comments), the OP was confused about the precise meaning of source, and ended up running the Bash script in a separate process instead (./test-script instead of source ./test-script). For the record, when you source a file, you cause your current shell to read and execute it; in this setting, the script's shebang line is simply a comment, and is completely ignored by the shell.
I have a bash variable:
A="test; ls"
I want to use it as part of a call:
echo $A
I'm expecting it to be expanded into:
echo test; ls
However, it is expanded to:
echo "test;" "ls"
How is it possible to achieve what I want? The only solution I can think of is this:
bash -c "echo $A"
Maybe there is something more elegant?
If you're building compound commands, running redirections, etc., then you need to use eval:
A="test; ls"
eval "$A"
However, it's far better not to do this. A typical use case that follows good practices looks like this:
my_cmd=( something --some-arg "another argument with spaces" )
if [[ $foo ]]; then
    my_cmd+=( --foo="$foo" )
fi
"${my_cmd[@]}"
Unlike the eval version, this can only run a single command line, and will run it exactly as-given -- meaning that even if foo='$(rm -rf /)' you won't get your hard drive wiped. :)
If you absolutely must use eval, or are forming a shell command to be used in a context where it will necessarily be shell-evaluated (for instance, passed on a ssh command line), you can achieve a hybrid approach using printf %q to form command lines safe for eval:
printf -v cmd_str 'ls -l %q; exit 1' "some potentially malicious string"
eval "$cmd_str"
See BashFAQ #48 for more details on the eval command and why it should be used only with great care, or BashFAQ #50 for a general discussion of pitfalls and best practices around programmatically constructing commands. (Note that bash -c "$cmd_str" is equivalent to eval in terms of security impact, and requires the same precautions to use safely).
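For instance, a hedged sketch of the ssh case (the host name and directory are made up for illustration):
# Build a remote command with the argument safely quoted for re-parsing,
# then hand the whole string to the remote shell.
printf -v remote_cmd 'mkdir -p %q' "/tmp/dir with spaces"
ssh remote.example.com "$remote_cmd"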
Don't use echo; just put the whole command in the variable:
A="echo test; ls"
$A
This command works fine:
$ bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer)
However, I don't understand how exactly stable is passed as a parameter to the shell script that is downloaded by curl. That's the reason why I fail to achieve the same functionality from within my own shell script - it gives me ./foo.sh: 2: Syntax error: redirection unexpected:
$ cat foo.sh
#!/bin/sh
bash -s stable < <(curl -s https://raw.github.com/wayneeseguin/rvm/master/binscripts/rvm-installer)
So, the questions are: how exactly does this stable param get to the script, why are there two redirects in this command, and how do I change this command to make it work inside my script?
Regarding the "redirection unexpected" error:
That's not related to stable, it's related to your script using /bin/sh, not bash. The <() syntax is unavailable in POSIX shells, which includes bash when invoked as /bin/sh (in which case it turns off nonstandard functionality for compatibility reasons).
Make your shebang line #!/bin/bash.
Understanding the < <() idiom:
To be clear about what's going on -- <() is replaced with a filename which refers to the output of the command which it runs; on Linux, this is typically a /dev/fd/## type filename. Running < <(command), then, is taking that file and directing it to your stdin... which is pretty close to the behavior of a pipe.
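You can see the substitution directly (the exact /dev/fd number varies, and on systems without /dev/fd it may be a named pipe instead):
$ echo <(true)
/dev/fd/63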
To understand why this idiom is useful, compare this:
read foo < <(echo "bar")
echo "$foo"
to this:
echo "bar" | read foo
echo "$foo"
The former works, because the read is executed by the same shell that later echoes the result. The latter does not, because the read is run in a subshell that was created just to set up the pipeline and then destroyed, so the variable is no longer present for the subsequent echo.
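As an aside, bash also offers lastpipe, which runs the last element of a pipeline in the current shell when job control is off (as it is in scripts); with it, the pipe version can be made to work, though the process-substitution idiom is more portable across bash versions:
# sketch for a non-interactive bash script
shopt -s lastpipe
echo "bar" | read foo
echo "$foo"    # prints bar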
Understanding bash -s stable:
bash -s indicates that the script to run will come in on stdin. All arguments, then, are fed to the script in the $@ array ($1, $2, etc), so stable becomes $1 when the script fed in on stdin is run.
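A minimal sketch of the same mechanism without curl (the here-document stands in for the downloaded installer):
$ bash -s stable extra <<'EOF'
echo "first arg: $1, second arg: $2"
EOF
first arg: stable, second arg: extra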
I have two variables in bash that together make up the name of another one, and I want to expand it, but I don't know how to do it.
I have:
echo $sala
a
echo $i
10
and I want to expand ${a10}, using something of the form ${$sala$i}, but apparently the {} escape the $ signs.
There are a few ways, with different advantages and disadvantages. The safest way is to save the complete parameter name in a single parameter, and then use indirection to expand it:
tmp="$sala$i" # sets $tmp to 'a10'
echo "${!tmp}" # prints the parameter named by $tmp, namely $a10
A slightly simpler way is a command like this:
eval echo \${$sala$i}
which will run eval with the arguments echo and ${a10}, and therefore run echo ${a10}. This way is less safe in general — its behavior depends a bit more chaotically on the values of the parameters — but it doesn't require a temporary variable.
Use eval:
eval "echo \${$sala$i}"
Or put the value in another variable:
result=$(eval "echo \${$sala$i}")
The usual answer is eval:
sala=a
i=10
a10=37
eval echo "\$$sala$i"
This echoes 37. You can use "\${$sala$i}" if you prefer.
Beware of eval, especially if you need to preserve spaces in argument lists. It is vastly powerful, and vastly confusing. It will work with old shells as well as Bash, which may or may not be a merit in your eyes.
You can do it via indirection:
$ a10=blah
$ sala=a
$ i=10
$ ref="${sala}${i}"
$ echo $ref
a10
$ echo ${!ref}
blah
However, if you have indexes like that... an array might be more appropriate:
$ declare -a a
$ i=10
$ a[$i]="test"
$ echo ${a[$i]}
test
I often find Bash syntax very helpful, e.g. process substitution like in diff <(sort file1) <(sort file2).
Is it possible to use such Bash commands in a Makefile? I'm thinking of something like this:
file-differences:
	diff <(sort file1) <(sort file2) > $@
In my GNU Make 3.80 this gives an error, since it uses /bin/sh instead of bash to execute the commands.
From the GNU Make documentation,
5.3.2 Choosing the Shell
------------------------
The program used as the shell is taken from the variable `SHELL'. If
this variable is not set in your makefile, the program `/bin/sh' is
used as the shell.
So put SHELL := /bin/bash at the top of your makefile, and you should be good to go.
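For the question's example, a minimal sketch of such a Makefile might look like this (file1 and file2 are placeholders, and the recipe line must start with a tab):
SHELL := /bin/bash

file-differences:
	diff <(sort file1) <(sort file2) > $@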
BTW: You can also do this for one target, at least for GNU Make. Each target can have its own variable assignments, like this:
all: a b
a:
	@echo "a is $$0"
b: SHELL:=/bin/bash   # HERE: this is setting the shell for b only
b:
	@echo "b is $$0"
That'll print:
a is /bin/sh
b is /bin/bash
See "Target-specific Variable Values" in the documentation for more details. That line can go anywhere in the Makefile, it doesn't have to be immediately before the target.
You can call bash directly, using the -c flag:
bash -c "diff <(sort file1) <(sort file2) > $#"
Of course, you may not be able to redirect to the variable $#, but when I tried to do this, I got -bash: $#: ambiguous redirect as an error message, so you may want to look into that before you get too into this (though I'm using bash 3.2.something, so maybe yours works differently).
One way that also works is putting it this way in the first line of your target:
your-target: $(eval SHELL:=/bin/bash)
	@echo "here shell is $$0"
If portability is important you may not want to depend on a specific shell in your Makefile. Not all environments have bash available.
You can call bash directly within your Makefile instead of using the default shell:
bash -c "ls -al"
instead of:
ls -al
There is a way to do this without explicitly setting your SHELL variable to point to bash. This can be useful if you have many makefiles since SHELL isn't inherited by subsequent makefiles or taken from the environment. You also need to be sure that anyone who compiles your code configures their system this way.
If you run sudo dpkg-reconfigure dash and answer 'no' to the prompt, your system will not use dash as the default shell. It will then point to bash (at least in Ubuntu). Note that using dash as your system shell is a bit more efficient though.
It's not a direct answer to the question, but makeit is a limited Makefile replacement with bash syntax, and it can be useful in some cases (I'm the author):
rules can be defined as bash functions
auto-completion feature
The basic idea is to have a while loop at the end of the script:
while [ $# != 0 ]; do
    if [ "$(type -t $1)" == 'function' ]; then
        $1
    else
        exit 1
    fi
    shift
done
https://asciinema.org/a/435159
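As an illustration of the dispatch pattern only (the script and function names here are made up, not part of makeit itself):
#!/bin/bash
# tasks.sh -- each task is a bash function; the trailing loop dispatches CLI args to them
build() { echo "building..."; }
clean() { echo "cleaning..."; }

while [ $# != 0 ]; do
    if [ "$(type -t "$1")" == 'function' ]; then
        "$1"
    else
        echo "unknown task: $1" >&2
        exit 1
    fi
    shift
done
Running ./tasks.sh build clean would then print "building..." followed by "cleaning...".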