Suppose for the sake of simplicity that I am working with bash and zsh. In bash, my .bash_profile puts ~/bash/bin in the PATH, and in zsh, my .zshrc puts ~/zsh/bin in the PATH. Now, suppose I have two executables at ~/bash/bin/foobar and ~/zsh/bin/foobar. If I run command -v foobar, it should return one of the two, depending on whether I am working in bash or zsh. My question is this: is it possible, in a bash script, to determine what command -v foobar would output in zsh, or vice versa?
I'm not confident that
#!/bin/bash
zsh -c 'command -v foobar'
would give me the output of ~/zsh/bin/foobar in this case.
The command
zsh -c 'command -v foobar'
does not process ~/.zshrc, so you can't expect that file to have any effect on this command. You could do a
zsh -i -c 'command -v foobar'
to force .zshrc to be processed, but this does not necessarily mean that you would see a different directory here. For instance, assume that in your bash, the PATH is set to /usr/bin:$HOME/bash/bin, and your .zshrc sets the PATH by doing a
PATH=$PATH:$HOME/zsh/bin
In this case, even zsh -i -c ... would still show $HOME/bash/bin/foobar as the match, because $HOME/bash/bin still precedes $HOME/zsh/bin in the resulting PATH.
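If the goal is just to see what an interactive zsh would report, here is a minimal sketch (assuming your PATH manipulation lives in ~/.zshrc) run from bash:

#!/bin/bash
# Ask an interactive zsh (which reads ~/.zshrc) for its PATH and its foobar match
zsh -i -c 'echo "$PATH"; command -v foobar'

Comparing the printed PATH with bash's own $PATH makes it clear which directory wins in each shell.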
I have a line of code that works fine in my terminal:
for i in *.mp4; do echo ffmpeg -i "$i" "${i/.mp4/.mp3}"; done
Then I put the exact same line of code in a script myscript.sh:
#!/bin/sh
for i in *.mp4; do echo ffmpeg -i "$i" "${i/.mp4/.mp3}"; done
However, now I get an error when running it:
$ sh myscript.sh
myscript.sh: 2: myscript.sh: Bad substitution
Based on other questions I tried changing the shebang to #!/bin/bash, but I get the exact same error. Why can't I run this script?
TL;DR: Since you are using Bash-specific features, your script has to run with Bash and not with sh:
$ sh myscript.sh
myscript.sh: 2: myscript.sh: Bad substitution
$ bash myscript.sh
ffmpeg -i bar.mp4 bar.mp3
ffmpeg -i foo.mp4 foo.mp3
See Difference between sh and Bash. To find out which sh you are using: readlink -f $(which sh).
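For example, on a Debian or Ubuntu system (a hypothetical session; the exact path varies by distribution):

$ readlink -f "$(which sh)"
/bin/dash

If your sh turns out to be dash, a minimal POSIX shell, that explains the error: dash does not implement the ${i/.mp4/.mp3} substitution.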
The best way to ensure a bash-specific script always runs correctly
The best practices are to both:
Replace #!/bin/sh with #!/bin/bash (or whichever other shell your script depends on).
Run this script (and all others!) with ./myscript.sh or /path/to/myscript.sh, without a leading sh or bash.
Here's an example:
$ cat myscript.sh
#!/bin/bash
for i in *.mp4
do
echo ffmpeg -i "$i" "${i/.mp4/.mp3}"
done
$ chmod +x myscript.sh # Ensure script is executable
$ ./myscript.sh
ffmpeg -i bar.mp4 bar.mp3
ffmpeg -i foo.mp4 foo.mp3
(Related: Why ./ in front of scripts?)
The meaning of #!/bin/sh
The shebang tells the system which interpreter to use to run a script. This allows you to specify #!/usr/bin/python or #!/bin/bash so that you don't have to remember which script is written in which language.
People use #!/bin/sh when they only use a limited set of features (defined by the POSIX standard) for maximum portability. #!/bin/bash is perfectly fine for user scripts that take advantage of useful bash extensions.
/bin/sh is usually a symlink to either a minimal POSIX-compliant shell or to a standard shell (e.g. bash). Even in the latter case, #!/bin/sh may fail because bash runs in compatibility mode, as explained in the man page:
If bash is invoked with the name sh, it tries to mimic the startup behavior of historical versions of sh as closely as possible, while conforming to the POSIX standard as well.
The meaning of sh myscript.sh
The shebang is only used when you run ./myscript.sh, /path/to/myscript.sh, or when you drop the extension, put the script in a directory in your $PATH, and just run myscript.
If you explicitly specify an interpreter, that interpreter will be used. sh myscript.sh will force it to run with sh, no matter what the shebang says. This is why changing the shebang is not enough by itself.
You should always run the script with its preferred interpreter, so prefer ./myscript.sh or similar whenever you execute any script.
Other suggested changes to your script:
It is considered good practice to quote variables ("$i" instead of $i). Quoted variables prevent problems if the file name stored in them contains whitespace characters.
I like that you use advanced parameter expansion. I suggest using "${i%.mp4}.mp3" (instead of "${i/.mp4/.mp3}"), since ${parameter%word} only removes the suffix at the end of the value, whereas ${i/.mp4/.mp3} replaces the first occurrence anywhere (so a file named foo.mp4.backup would be renamed to foo.mp3.backup).
The ${var/x/y} construct is not POSIX. In your case, where you just remove a string at the end of a variable and tack on another string, the portable POSIX solution is to use
#!/bin/sh
for i in *.mp4; do
ffmpeg -i "$i" "${i%.mp4}.mp3"
done
or even shorter, ffmpeg -i "$i" "${i%4}3".
The definitive reference for these constructs is the chapter on Parameter Expansion in the POSIX shell specification.
Perl 6's shell sends commands to the "shell" but doesn't say what that is. I consistently get bash on my machine but I don't know if I can rely on that.
$ perl6 -e 'shell( Q/echo $SHELL/ )'
/bin/bash
$ csh
% perl6 -e 'shell( Q/echo $SHELL/ )'
/bin/bash
% zsh
$ perl6 -e 'shell( Q/echo $SHELL/ )'
/bin/bash
That's easy enough on Unix when it's documented, but what about cmd.exe or PowerShell on Windows (or bash if it's installed)? I figure it's cmd.exe, but a documented answer would be nice.
Looking at the source, Rakudo just calls /bin/sh -c on non-Windows systems and uses %*ENV<ComSpec> /c on Windows.
dash (installed as /bin/sh on many systems) doesn't set $SHELL, nor should it. $SHELL isn't the name of the parent process; it's the name of the shell that should be used when an interactive shell is desired.
To find the name of the shell actually running your command (its parent process, from the point of view of anything it spawns), you could use one of the following on some systems:
echo "$0"
or
# Command line
perl -e'$ppid=getppid(); @ARGV="/proc/$ppid/cmdline"; CORE::say "".<>'
or
# Program file
perl -e'$ppid=getppid(); CORE::say readlink("/proc/$ppid/exe")'
You'll find you'll get /bin/sh in all cases.
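To check from Perl 6 itself (a quick sketch, assuming a Unix-like system), ask the spawned shell for its own name instead of for $SHELL:

$ perl6 -e 'shell( Q/echo "$0"/ )'
/bin/sh

Unlike $SHELL, $0 is set by the shell that actually interprets the command string, so it reflects the shell being used rather than your login shell.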
I'm trying to see what the output of a command would be if I were in a login shell, without having to go into a login shell. I've tried several variations of
zsh --login -c "alias"
But none of my aliases get shown; are --login and -c incompatible?
To test the difference between zsh --login -c "alias" and a normal login shell, you can/should add the -x option to see what the shell is up to.
When I run zsh -x --login -c "alias", then it processes /etc/zprofile.
When I run zsh -x --login, then it processes /etc/zprofile and /etc/zshrc.
I don't normally use zsh, so I don't have any personalized profile or startup file for it, but it seems plausible that it might look for (but, in my case, not find) ~/.zprofile and ~/.zshrc too.
I created trivial versions of those files:
$ echo "echo in .zprofile" > ~/.zprofile
$ echo "echo in .zshrc" > ~/.zshrc
and sure enough, they're processed. Further, the -c command with --login processed the .zprofile but did not process the .zshrc file.
Thus, using -c "alias" after the --login suppresses the processing of /etc/zshrc and ~/.zshrc. If you want those executed even so, you need to use something like:
zsh --login -c "[ -f /etc/zshrc ] && . /etc/zshrc; [ -f ~/.zshrc ] && . ~/.zshrc; alias"
Using -x to debug login processing is often informative.
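Since zsh's xtrace prefix includes the name of the file currently being executed, you can filter the trace to see which startup files a given invocation reads (a sketch; which files exist will vary from system to system):

$ zsh -x --login -c "alias" 2>&1 | grep -E 'zprofile|zshrc'

With -c you should only see zprofile lines in the output; the zshrc lines appear only for the plain zsh -x --login invocation.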
It's nice that modern shells provide a command line option to induce login processing. I still have a program (which I don't use any more) that runs a login shell the old-fashioned way, by adding a - before the shell name in argv[0]. Thus, running -ksh would trigger login processing; the login program would run the login shell with the - at the start.
Here is a test:
$ bash -c "pgrep -f novalidname"
$ sh -c "pgrep -f novalidname"
11202
Why is pgrep giving output when run from sh? (As far as I can see, there are no processes on my computer named novalidname.)
It's probably a timing issue and pgrep finds itself, as you're issuing it with -f and novalidname is present in the command line. Try with -l to confirm.
The actual explanation:
Regardless of flags, pgrep never returns its own PID.
If you execute bash -c with a simple command, bash will exec the command directly rather than forking a redundant child process to run it in. Consequently, bash -c "pgrep -f blah" replaces the bash process with a pgrep process. If that pgrep process is the only process whose command line includes blah, then pgrep will not display any PIDs (as per the first point).
dash does not perform the above optimization (zsh and ksh do). So if, on your system, sh is implemented with dash, then sh -c "pgrep -f blah" results in two processes -- the sh process and its pgrep child -- both of which contain blah in their command lines. pgrep will not report itself, but it will report its parent.
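You can watch this difference directly; here is a sketch based on the explanation above (assuming a Linux pgrep that supports -a for printing full command lines, and that sh is dash on this system; the PIDs are illustrative):

$ bash -c 'sleep 31' &
$ pgrep -af 'sleep 31'
12346 sleep 31

$ sh -c 'sleep 32' &
$ pgrep -af 'sleep 32'
12350 sh -c sleep 32
12351 sleep 32

In the bash case only the exec'd sleep matches; in the dash case the lingering sh parent matches too, just as it matches novalidname in the question.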
That's one thing (pgrep finding itself); see also:
$ ps ax | grep novalidname
Here the grep process usually shows up as well (it does for me on Ubuntu, under bash).
The other thing is what /bin/sh is bound to.
On most Linux distros /bin/sh is a symlink to the default system shell, which is often bash but can be another shell (on Debian and Ubuntu it is dash).
The time difference that lets grep/pgrep catch itself may be introduced by resolving the symlink (hm, odd), or another shell may be bound to /bin/sh that executes things slightly differently than bash, causing the delay needed for the process to show up in pgrep.
Also, bash will first try to source ~/.bashrc and load its history, while /bin/sh will do whatever it does. In .bashrc, pgrep could be defined as an alias in some other way, which may also account for the difference.
To see where /bin/sh points to do:
$ readlink -e /bin/sh
Or just run sh to see what will show up. :D
I can't seem to find the difference between a script run two different ways.
Here's the script (named test.sh):
#! /bin/bash
printf "%b\n" "\u5A"
When the script is sourced:
. test.sh
> Z ## Result I want ##
When the script is run:
./test.sh
> \u5A ## Result I get ##
I want the run script to give the results of the sourced script... what setting do I need to set/change?
You are probably getting different versions of printf; the shell you are sourcing the script from is probably a /bin/sh, not a Bash proper?
Shouldn't you be using \x instead of \u? printf "%b\n" "\x5A" works fine in both cases for me.
(Totally different idea here, so I'm posting it as another answer.)
Try running these at the command line:
builtin printf "%b\n" "\u5A"
/usr/bin/env printf "%b\n" "\u5A"
printf is both a shell builtin and an executable, and you may be getting different ones depending on whether you source or run the script. To find out, insert this in the script and run it each way:
type printf
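For reference, bash's type -a lists every match, builtin first, which makes it obvious when a builtin is shadowing an external binary (a hypothetical session; the external path varies by system):

$ type -a printf
printf is a shell builtin
printf is /usr/bin/printf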
While you're at it, you may as well insert this line too:
echo $SHELL
That will reveal if you're getting different shells, per tripleee.
HAHA!!! I finally traced down the problem! Read ahead if interested (leave the page if not).
These are the only commands that will translate \u properly:
. ./test.sh ## Sourcing the script, hash-bang = #! /bin/sh
. ./test.bash ## Sourcing the script, hash-bang = #! /bin/bash
./test ## Running the script with no hash-bang
All of the following produce identical results in that they do NOT translate \u:
./test.sh ## Script is run from an interactive shell but in a non-interactive shell
## test.sh has first line: #! /bin/sh
/bin/sh -c "./test.sh" ## Running the script in a non-interactive sh shell
/bin/sh -lc "./test.sh" ## Running the script in a non-interactive, login sh shell
/bin/sh -c ". ./test.sh" ## Sourcing the file in a non-interactive sh shell
/bin/sh -lc ". ./test.sh" ## Sourcing the file in a non-interactive, login sh shell
## test.bash has first line: #! /bin/bash
/bin/bash -c "./test.bash" ## Running the script in a non-interactive bash shell
/bin/bash -lc "./test.bash" ## Running the script in a non-interactive, login bash shell
/bin/bash -c ". ./test.bash" ## Sourcing the file in a non-interactive bash shell
/bin/bash -lc ". ./test.bash" ## Sourcing the file in a non-interactive, login bash shell
## And from tripleee (thanks btw):
/bin/sh --norc; . ./test.sh ## Sourcing from an interactive sh shell without the ~/.bashrc file read
/bin/bash --norc; . ./test.bash ## Sourcing from an interactive bash shell without the ~/.bashrc file read
The only way to get proper translation is to run the script without a hash-bang... and I finally figured out why! Without a hash-bang my system chooses the default shell, which btw is NOT /bin/bash... it turns out to be /opt/local/bin/bash... two different versions of bash!
Finally, I removed the OS X /bin/bash [v3.2.48(1)] and replaced it with the MacPorts /opt/local/bin/bash [v4.2.10(2)], and now running the script works! It actually solved about 10-15 other problems I've had (like ${var,,}, read -sN1 char, complete -EC "echo ' '", and a host of other commands I have scattered throughout my scripts, ~/.bashrc and ~/.profile). Honestly, I really should have noticed when my scripts using associative arrays suddenly crapped out on me... how stupid can I get!?
I've been using bash v4 for a looong time now, and my Lion upgrade went and downgraded bash back to v3 (get with the program, Apple!)... ugh, I feel so ashamed! Everyone still using bash v3, upgrade!! bash v4 has many, many beautiful upgrades over version 3. Type bash --version to see what version you are running. One advantage is that bash can now translate \uHEX into Unicode!
Try removing the space in the first line; I seem to recall that can cause problems. Offhand I'd guess that because of that space, you're not getting bash, but sh.
Glad you solved it. Still, you might be looking for a portable solution.
Assuming you are always using the same format string, we can just discard it and use something like this:
printf () {
    # Discard the format string
    shift
    # Convert each remaining \uXXXX argument to the corresponding Unicode character
    perl -CSD -le 'print map { s/^\\u//; chr(hex($_)) } @ARGV' "$@"
}
Edit to add: You would simply add this function definition at the beginning of your existing script, overriding the builtin printf. Obviously, if you also use printf for other stuff, this special-purpose replacement isn't good enough.
You could rename the function to uprintf or something, still. It merely translates a sequence of hex codes to the corresponding Unicode characters, discarding any \u prefix.
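For instance, with the function above defined (a sketch; output assumes a UTF-8 terminal):

$ printf "%b\n" "\u5A" "\u263A"
Z☺

The format string is discarded, and each remaining argument is stripped of its \u prefix and converted from a hexadecimal code point to the corresponding character, all printed on one line.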