When I use zsh and execute the following command:
$ echo "\`"
preexec: parse error
`
If I switch back to bash, it works fine.
preexec is a hook that runs before the command: "pre-exec"ution. My hunch is you've got a prompt or a zsh framework like oh-my-zsh that is choking on the "`" character.
preexec
Executed just after a command has been read and is about to be
executed. If the history mechanism is active (and the line was not
discarded from the history buffer), the string that the user typed is
passed as the first argument, otherwise it is an empty string. The
actual command that will be executed (including expanded aliases) is
passed in two different forms: the second argument is a single-line,
size-limited version of the command (with things like function bodies
elided); the third argument contains the full text that is being
executed.
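For reference, here is a minimal preexec hook (a sketch for diagnosis, not a fix) that just prints the three arguments described above, which can help show what your framework's hook is receiving:
preexec() {
  print -r -- "typed:    $1"
  print -r -- "one-line: $2"
  print -r -- "full:     $3"
}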
Related
So I generally create job files with a list of commands in them. Then I execute one like so:
cat jobFile | while read a; do $a; done
Which always works in bash. However, I've just started working on a Mac, which apparently uses zsh, and there this command fails with "no such file" errors. I've tested the job file by running a few lines from it manually, so the file itself should be fine.
I've found questions on zsh's read, but they tend to be about reading from variables, e.g. $a=('a' 'b' 'c') or echo $a.
Thank you for your answers!
In bash, unquoted parameter expansions always undergo word-splitting, so if a="foo bar", then $a expands to two words, foo and bar. As a command, this means running the command foo with an argument bar.
In zsh, parameter expansions do not undergo word-splitting by default, which means the same expansion $a would produce a single word foo bar, treated as the name of the command to execute.
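A quick way to see the difference (a minimal sketch you can paste into either shell):
a="foo bar"
$a   # bash: runs the command foo with the argument bar
     # zsh:  "command not found: foo bar" -- the whole string is the command name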
In either case, relying on parameter expansion to "parse" a shell command is fragile; in addition to word-splitting, the expansion is subject to pathname expansion (globbing), and you are limited to simple commands and their arguments. No pipes, lists (&&, ||), or redirections allowed, as everything will be treated as the command name and a sequence of arguments.
What you want in both shells is to simply treat your job file as a shell script, which can be executed in the current shell using the . command:
. jobFile
Why are you executing it in such a cumbersome way? Assuming jobFile is a file holding a sequence of bash commands, you can simply run it as
bash jobFile
If it contains a sequence of zsh commands, you can likewise run it as
zsh jobFile
If you follow this approach, I would however reflect in the name of the job file, what shell it is intended for, i.e.
bash jobFile.bash
zsh jobFile.zsh
and, if you write a job file so that it is supposed to be compatible with either shell, I would name it jobFile.sh.
When I originally posted this question, I misworded it, producing a different (but still reasonable) question, which was correctly answered here.
The following is the correct version of the question I originally wanted to ask.
In one of my Bash scripts, there's a point where I have a variable SCRIPT which contains the /path/to/an/exe which, when executed, outputs a line to be executed.
What my script ultimately needs to do is execute that line. Therefore the last line of the script is
$($SCRIPT)
so that $SCRIPT is expanded to /path/to/an/exe, and $(/path/to/an/exe) executes the executable and gives back the line to be executed, which is then executed.
However, running shellcheck on the script generates this error:
In setscreens.sh line 7:
$($SCRIPT)
^--------^ SC2091: Remove surrounding $() to avoid executing output.
For more information:
https://www.shellcheck.net/wiki/SC2091 -- Remove surrounding $() to avoid e...
Is there a way I can rewrite that $($SCRIPT) in a more appropriate way? eval does not seem to be of much help here.
If the script outputs a shell command line to execute, the correct way to do that is:
eval "$("$SCRIPT")"
$($SCRIPT) would only happen to work if the command can be completely evaluated using nothing but word splitting and pathname expansion, which is generally a rare situation. If the program instead outputs e.g. grep "Hello World" or cmd > file.txt then you will need eval or equivalent.
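For example, assuming a hypothetical $SCRIPT whose output is grep "Hello World" file.txt:
$($SCRIPT)         # word-splits into: grep  "Hello  World"  file.txt
                   # so grep searches for the pattern "Hello in the files World" and file.txt
eval "$($SCRIPT)"  # the quotes are parsed; grep searches for Hello World in file.txt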
You can keep it simple by setting the command to be executed as the positional parameters of your shell and executing those:
set -- "$SCRIPT"
and now run the result obtained from the expansion of SCRIPT by doing the following on the command line:
"$@"
This works even when the output of the SCRIPT expansion contains multiple words, e.g. custom flags that need to be passed. Since this runs in your current interactive shell, make sure the command to be run is not vulnerable to code injection. As an extra precaution you can run the command within a subshell, so that your parent environment is not affected, by doing ( "$@" ; )
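As a sketch of the idea, with a hypothetical multi-word command standing in for the expansion:
set -- ls -l /tmp   # pretend these words came from the SCRIPT expansion
"$@"                # runs: ls -l /tmp
( "$@" ; )          # the same, isolated in a subshell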
Or use a shellcheck disable=SCnnnn directive to silence the warning, and take the occasion to document the explicit intention, rather than evading detection by hiding behind an intermediate variable or argument array.
#!/usr/bin/env bash
# shellcheck disable=SC2091 # Intentional execution of the output
"$("$SCRIPT")"
Disabling shellcheck with a comment clarifies the intent and signals that the questionable code is not an error but an informed design choice.
You can do it in two steps:
command_from_SCRIPT=$($SCRIPT)
$command_from_SCRIPT
and shellcheck reports it as clean.
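Note that this still relies on plain word splitting, so the caveat from above applies (a hedged sketch, again assuming a hypothetical $SCRIPT):
command_from_SCRIPT=$($SCRIPT)   # suppose this yields: grep "Hello World" file.txt
$command_from_SCRIPT             # the quotes are NOT parsed; grep sees "Hello and World" as separate arguments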
I run the following shell command:
nvim +CheckHealth +'w ~/Desktop/file.txt' +qall
This calls nvim (Neovim) and tells it to run three commands in succession:
CheckHealth to check for common errors; its report opens in a buffer.
w ~/Desktop/file.txt to save that same buffer to a file.
qall to close all buffers.
I’m trying to run this from ruby, using system. If I run it as a single argument, it works fine:
system("nvim +CheckHealth +'w ~/Desktop/file.txt' +qall")
However, it fails (it runs but does not output a file) if run as multiple arguments:
system("nvim", "+CheckHealth", "+'w ~/Desktop/file.txt'", "+qall")
What am I doing wrong? Note I am not asking for workarounds. I have a workaround, which is to run it as a single argument. My question is: why doesn't it work when run as multiple arguments? What am I misunderstanding about system?
When you use the single argument version of system:
system("nvim +CheckHealth +'w ~/Desktop/file.txt' +qall")
You're launching a shell and handing it that whole string:
nvim +CheckHealth +'w ~/Desktop/file.txt' +qall
to execute. That means that everything in that string will be interpreted by the shell; in particular, the shell will be handling the single quotes in +'w ~/Desktop/file.txt' and by the time vim gets to parse its argument list, it sees three arguments that look like this:
+CheckHealth
+w ~/Desktop/file.txt
+qall
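You can watch the shell perform that quote removal yourself:
$ printf '%s\n' +'w ~/Desktop/file.txt'
+w ~/Desktop/file.txt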
In the multi-argument version of system:
system("nvim", "+CheckHealth", "+'w ~/Desktop/file.txt'", "+qall")
no shell will be launched (a good thing, as you don't have to worry about shell command injection and escaping), so the single quotes in the +w argument won't be removed by the shell. That means that vim sees these arguments:
+CheckHealth
+'w ~/Desktop/file.txt'
+qall
Presumably vim isn't happy with the single quotes in the second argument.
Executive summary:
The single-argument version of system uses a shell to parse the command line; the multi-argument version doesn't use a shell at all.
The single quotes in +'w ~/Desktop/file.txt' are there to keep the shell from treating that as two arguments; they're not there for vim.
If you're using the multi-argument version of system (which you should be doing), then you'd say:
system("nvim", "+CheckHealth", "+w ~/Desktop/file.txt", "+qall")
and not have to worry about quoting and escaping things to get past the shell.
Consider this little shell script.
# Save the first command line argument
cmd="$1"
# Execute the command specified in the first command line argument
out=$($cmd)
# Do something with the output of the specified command
# Here we do a silly thing, like make the output all uppercase
echo "$out" | tr -s "a-z" "A-Z"
The script executes the command specified as the first argument, transforms the output obtained from that command and prints it to standard output. This script may be executed in this manner.
sh foo.sh "echo select * from table"
This does not do what I want. It may print something like the following,
$ sh foo.sh "echo select * from table"
SELECT FILEA FILEB FILEC FROM TABLE
if fileA, fileB and fileC are present in the current directory.
From a user's perspective, this command is reasonable. The user has quoted the * in the command line argument, so the user doesn't expect the * to be globbed. But my script astonishes the user by using this argument in a command substitution, which causes the * to be globbed, as seen in the above output.
I want the output to be the following instead.
SELECT * FROM TABLE
The entire text in cmd actually comes from command line arguments to the script, so I would like to preserve any * symbols present in the argument without globbing them.
I am looking for a solution that works for any POSIX shell.
One solution I have come up with is to disable globbing with set -o noglob just before the command substitution. Here is the complete code.
# Save the first command line argument
cmd="$1"
# Execute the command specified in the first command line argument
set -o noglob
out=$($cmd)
# Do something with the output of the specified command
# Here we do a silly thing, like make the output all uppercase
echo "$out" | tr -s "a-z" "A-Z"
This does what I expect.
$ sh foo.sh "echo select * from table"
SELECT * FROM TABLE
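(One detail I noticed: set -o noglob stays in effect for the rest of the script, so if later code relies on globbing, it has to be re-enabled right after the substitution:)
set -o noglob
out=$($cmd)
set +o noglob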
Apart from this, is there any other concept or trick (such as a quoting mechanism) I need to be aware of to disable globbing only within a command substitution, without having to use set -o noglob?
I am not against set -o noglob. I just want to know if there is another way. You know, globbing can be disabled for normal command line arguments just by quoting them, so I was wondering if there is anything similar for command substitution.
If I understand correctly, you want the user to provide a shell command as a command-line argument, which will be executed by the script, and is expected to produce an SQL string, which will be processed (upper-cased) and echoed to stdout.
The first thing to say is that there is no point in having the user provide a shell command that the script just blindly executes. If the script applied some kind of modification/preprocessing of the command before it executed it then perhaps it could make sense, but if not, then the user might as well execute the command himself and pass the output to the script as a command-line argument, or via stdin.
But that being said, if you really want to do it this way, then there are two things that need to be said. Firstly, this is the proper form to use:
out=$(eval "$cmd");
A fairly advanced understanding of the shell grammar and expansion rules would be required to fully understand the rationale for using the above syntax, but basically executing $cmd and executing eval "$cmd" have subtle differences that render the $cmd form inappropriate for executing a given shell command string.
Just to give some detail that will hopefully clarify the above point, there are seven kinds of expansion that are performed by the shell in the following order when processing input: (1) brace expansion, (2) tilde expansion, (3) parameter and variable expansion, (4) arithmetic expansion, (5) command substitution, (6) word splitting, and (7) pathname expansion. Notice that variable expansion happens somewhat in the middle of that sequence, and thus the variable-expanded shell command (which was provided by the user) will not receive the benefit of the prior expansion types. Other issues are that leading variable assignments, pipelines, and command list tokens will not be executed correctly under the $cmd form, because they are parsed and processed prior to variable expansion (actually prior to all expansions) as well.
By running the command through eval, properly double-quoted, you ensure that the full shell parsing/processing/execution algorithm will be applied to the shell command string that was given by the user of your script.
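A small sketch of that difference:
cmd='x=1; echo $x'
$cmd          # fails: the shell looks for a command literally named "x=1;"
eval "$cmd"   # prints 1: the assignment, the semicolon, and the expansion are all parsed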
The second thing to say is this: If you try the above proper form in your script, you will find that it has not solved your problem. You will still get SELECT FILEA FILEB FILEC FROM TABLE as output.
The reason is this: Since you've decided you want to accept an arbitrary shell command from the user of your script, it is now the user's responsibility to properly quote all metacharacters that may be embedded in that piece of code. It does not make sense for you to accept a shell command as a command-line argument, but somehow change the processing rules for shell commands so that certain metacharacters will no longer be metacharacters when the given shell command is executed. Actually, you could do something like that, perhaps using set -o noglob as you discovered, but then that must become a contract between the script and the user of the script; the user must be made aware of exactly what the precise processing rules will be when the command is executed so that he can properly use the script.
Under this design, the user could call the script as follows (notice the extra layer of quoting for the shell command string evaluation; could alternatively backslash-escape just the asterisk):
$ sh foo.sh "echo 'select * from table'";
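or, using the backslash-escape variant mentioned above:
$ sh foo.sh 'echo select \* from table';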
I'd like to return to my earlier comment about the overall design; it doesn't really make sense to do it this way. It makes more sense to take the text-to-process itself, not a shell command that is expected to produce the text-to-process.
Here is how that could be done:
## take the text-to-process via a command-line argument
sql="$1";
## process and echo it
echo "$sql"| tr a-z A-Z;
(I also removed the -s option of tr, which really doesn't make sense here.)
Notice that the script is simpler now, and usage is also simpler:
$ sh foo.sh 'select * from table';
When I'm interacting with the shell or writing a bash script I can do:
somecmd "some
arg"
Say now that I want to do the same in vim command-line mode:
:!somecmd "some<Enter>arg"
obviously won't work: as soon as I press <Enter> the command is executed. But neither do the following:
:!somecmd "some<C-V><Enter>arg"
:!somecmd "some<C-V>x0Aarg"
The first one inserts a carriage return instead of a line feed, which would be the right character. The second one will break the command in two, trying to execute somecmd "some<C-V> first and then arg", both of which fail miserably.
I guess I could work around this using some echo -e command substitution, or by embedding $'\n', but is it possible to type it directly in vim's command line? I don't fully understand why the "some<C-V>x0Aarg" form doesn't work while $'some\narg' does. Is vim parsing the string prior to shell evaluation?
Well, I've found the answer myself, but I'm leaving it here for future reference anyway. The documentation of :! states:
A newline character ends {cmd}, what follows is interpreted as a following ":" command. However, if there is a backslash before the newline it is removed and {cmd} continues. It doesn't matter how many backslashes are before the newline, only one is removed.
So you (I) should type "some\<C-V>x0Aarg" instead of "some<C-V>x0Aarg".
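In other words, the working command line is:
:!somecmd "some\<C-V>x0Aarg"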
Plus, I could have done it using the system() function instead of the :! command:
:call system("somecmd 'some<C-V>x0Aarg'")