This question already has answers here:
Getting ‘Command not found’ errors when there is no space after the opening square bracket [duplicate]
(2 answers)
How do I compare two string variables in an 'if' statement in Bash? [duplicate]
(12 answers)
Closed 5 years ago.
I am new to bash and am trying to use the if statement, so I tried this short piece of code:
#!/bin/bash
if ["lol" = "lol"];
then
echo "lol"
fi
And I get the following error:
./script.sh: line 2: [lol: command not found
I tried other combinations, like:
#!/bin/bash
if ["lol" == "lol"];
then
echo "lol"
fi
but I still get errors, so what would be the correct formulation?
Thank you
Although the problem was solved in a comment above, let me give you a bit more information.
For Bash, [ is a command, and there is nothing special about that command. Bash parses the following arguments as data and operators. Since these are normal arguments to a normal command, they need to be separated by spaces.
If you express a test as ["lol" = "lol"], Bash reads this as it would read any command, by performing word splitting and expansions. This gets rid of quotes, and what is left after that is [lol = lol]. Of course, [lol is not a valid command, so you get the error message you saw.
You can test this with another command. For instance, type l"s" at the command line, and you will see Bash execute ls.
You would not write ls/ (without the space), so for the same reason you cannot write [a=b] either.
Please note that ] is simply a closing argument that the [ command expects. It has no purpose in itself; requiring it is simply a design choice. The test command is entirely equivalent to [, aside from not needing the closing bracket.
One last word... [ is a builtin in Bash (meaning a command that is part of Bash itself and executes without launching a separate process, not a separate program), but on many systems you will also find an executable named [. Try which [ at the command line on your system; it will probably be there.
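Putting it together, the working version of the script just needs spaces around the bracket arguments:

```shell
#!/bin/bash
# [ is a command, so its arguments (including the closing ]) must be
# separated by spaces; = compares strings inside [ ].
if [ "lol" = "lol" ]; then
    echo "lol"
fi
```

Running it prints lol, as intended.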
My company has a tool that dynamically generates commands to run based on an input JSON. It works very well when all arguments to the compiled command are single words, but it is failing when we attempt multi-word args. Here is a minimal example of how it fails.
# Print and execute the command.
print_and_run() { local command=("$@")
  if [[ ${command[0]} == "time" ]]; then
    echo "Your command: time ${command[@]:1}"
    time ${command[@]:1}
  fi
}
# How print_and_run is called in the script
print_and_run time docker run our-container:latest $generated_flags
# Output
Your command: time docker run our-container:latest subcommand --arg1=val1 --arg2="val2 val3"
Usage: our-program [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...
Try 'our-program --help' for help.
Error: No such command 'val3"'.
But if I copy the printed command and run it myself it works fine (I've omitted docker flags). Shelling into the container and running the program directly with these arguments works as well, so the parsing logic there is solid (It's a python program that uses click to parse the args).
Now, I have a working solution that uses eval, but my entire team jumped down my throat at that suggestion. I've also proposed a solution using delimiter characters for multi-word arguments, but that was shot down as well.
No other solutions proposed by other engineers have worked either. So can I ask someone to perhaps explain why val3 is being treated as a separate command, or to help me find a solution to get bash to properly evaluate the dynamically determined command without using eval?
Your command after expanding $generated_flags is:
print_and_run time docker run our-container:latest subcommand --arg1=val1 --arg2="val2 val3"
Your specific problem is that in --arg2="val2 val3" the quotes are literal, not syntactical, because quotes are processed before variables are expanded. This means --arg2="val2 and val3" are being split into two separate arguments. Then, I assume, docker is trying to interpret val3" as some kind of docker command because it's not part of any argument, and it's throwing out an error because it doesn't know what that means.
Normally you'd fix this via an array to properly maintain the string boundary.
generated_flags=( "subcommand" "--arg1=val1" "--arg2=val2 val3" )
print_and_run time docker run our-container:latest "${generated_flags[@]}"
This will maintain --arg2=val2 val3 as a single argument as it gets passed into print_and_run, then you just have to expand your command array correctly inside the function (make sure to quote the expansion).
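A sketch of the function with the expansions quoted. The printf '%q ' in the echo is an addition of mine so the displayed command shows the quoting unambiguously, and printf here stands in for the real docker invocation:

```shell
#!/bin/bash
# Quoted expansions preserve each argument, including ones with spaces.
print_and_run() {
    local command=("$@")
    if [[ ${command[0]} == "time" ]]; then
        # %q prints each word shell-quoted, for display purposes only.
        echo "Your command: time $(printf '%q ' "${command[@]:1}")"
        time "${command[@]:1}"
    fi
}

generated_flags=( "subcommand" "--arg1=val1" "--arg2=val2 val3" )
# printf '[%s]\n' prints one bracketed line per argument it receives,
# standing in for the docker command:
print_and_run time printf '[%s]\n' "${generated_flags[@]}"
```

The last line of output is [--arg2=val2 val3], confirming the multi-word argument survived as a single word.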
The question is:
why val3 is being treated as a separate command
Unquoted variable expansions undergo word splitting and filename expansion. Word splitting splits the result of the variable expansion on spaces, tabs and newlines, breaking it into separate "words".
a="something else"
$a # results in two "words"; 'something' and 'else'
It is irrelevant what you put inside the variable value or how many quotes or escape sequences you put inside. Every run of consecutive spaces splits it into words. Quotes " ' and escapes \ are parsed when they are part of the input line, not when they are part of the result of an unquoted expansion.
help me find a solution to
Write a parser that will actually parse the command and split it according to the rules that you want to use, and then execute the command split into separate words. For example, a very crude such parser is included in xargs:
$ echo " 'quotes quotes' not quotes" | xargs printf "'%s'\n"
'quotes quotes'
'not'
'quotes'
For example, python has shlex.split which you can just use, and at the same time introduce python which is waaaaay easier to manage than badly written Bash scripts.
tool that dynamically generates commands to run based on an input json
Overall, the proper way forward would be to upgrade the tool to generate a JSON array that represents the words of the command to be executed. Then you can just execute that array of words, which is, again, trivial to do properly in Python with json and subprocess.run, and will require some gymnastics with jq, read and Bash arrays in shell.
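As a sketch of the shell-side gymnastics, assuming jq is installed and using a hard-coded JSON array as a stand-in for the tool's output:

```shell
#!/bin/bash
# Read a JSON array of command words into a bash array, one element per
# word, then execute it. Caveat: splitting on newlines works only as long
# as no word itself contains a newline.
json='["echo", "--arg2=val2 val3"]'
readarray -t cmd < <(jq -r '.[]' <<<"$json")
"${cmd[@]}"
```

Here cmd ends up with exactly two elements, so the space inside "val2 val3" never causes an extra split.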
Check your scripts with shellcheck.
In bash, you can sort-of do multi-line comments, like this:
: '
echo "You will never see this message :)"
'
But why does it only work out like that? If you do it without a space after the colon, an error occurs. And also, if I did what I did above with the echo in apostrophes, it still would not be read by the machine.
: '
echo 'You will also never see this message :D'
'
And also without anything around it:
: '
echo You will never see these messages :(
How does this work, and why did everything I look up about multiline comments in bash tell me there wasn't such a thing?
Colon : is a built-in bash command that essentially does nothing.
From the bash documentation:
Do nothing beyond expanding arguments and performing redirections. The return status is zero.
So you can think of : as being a like any other bash command which you can pass arguments to. So it's not a comment, but it sort of works like one because it and every argument passed to it is a no-op. You could accomplish the same thing by creating a "comment" function:
comment () { :; }
comment echo 'You will never see this message'
The space is required after : because without the space, the whole thing becomes the command name. For example, if you run:
:echo 'You will never see this message'
Bash sees that as running a command called :echo with the argument 'You will never see this message'. It returns an error because there is no such command.
The second part is just how bash handles unmatched quotes. Bash will continue to read input until a matching quote is encountered. So in your multi-line example, you are passing one argument to the : command (in the first example) or passing multiple arguments (in the second example).
This isn't a comment. But it has no effect, so it seems like a comment. But it is parsed and evaluated by bash: you can introduce syntax errors if you use this incorrectly.
Understanding some of the basic building blocks of shell syntax and some built-in commands will help make sense of this.
The shell (such as bash) reads commands, figures out the command name and the arguments from the user input, and then runs the command with the arguments.
For example:
echo hi
Is parsed by the shell as the command echo with 1 argument hi.
Generally, the shell splits things based on spaces/tabs, which is why it parses echo hi as two things. You can use single quotes and double quotes to tell it to parse things differently:
echo 'foo bar' baz 'ignore me'
is parsed by the shell as the command echo with arguments foo bar, baz and ignore me. Notice that the quotes aren't part of the arguments, they are parsed and removed by bash.
Another piece of the puzzle is the builtin : command. man : will tell you that this command does nothing. It parses arguments and performs redirections, but does nothing by itself.
That means when you enter this:
: 'echo hi'
Bash parses it as the command : with the argument echo hi. Then the command is run, and the argument is simply ignored; nothing inside the quotes is executed. That has no effect, so it feels like a comment (but it really isn't, unlike #, which is a comment character).
: '
echo 'You will also never see this message :D'
'
Is parsed by bash as the command : with the arguments \necho You (that is, a newline, echo and a space, joined to You in a single word because the closing quote touches You), will, also, never, see, this, message, and :D\n (the :D joined with the final quoted newline). Then bash runs this command. That does nothing, so it again behaves mostly like a comment.
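To see that : really is evaluated rather than skipped, note that expansions in its arguments still run; a minimal demonstration (the temporary file name is arbitrary):

```shell
#!/bin/bash
# Command substitution inside the arguments to : still executes, so this
# "comment" has a side effect -- unlike a real # comment, which would not.
tmpfile=$(mktemp -u)
: "not really a comment $(touch "$tmpfile")"
ls "$tmpfile"    # the file now exists
rm -f "$tmpfile"
```

A real # comment on the same line would leave no file behind, because # stops parsing entirely.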
When posting this question originally, I totally misworded it, obtaining another, reasonable but different question, which was correctly answered here.
The following is the correct version of the question I originally wanted to ask.
In one of my Bash scripts, there's a point where I have a variable SCRIPT which contains the /path/to/an/exe which, when executed, outputs a line to be executed.
What my script ultimately needs to do, is executing that line to be executed. Therefore the last line of the script is
$($SCRIPT)
so that $SCRIPT is expanded to /path/to/an/exe, and $(/path/to/an/exe) executes the executable and gives back the line to be executed, which is then executed.
However, running shellcheck on the script generates this error:
In setscreens.sh line 7:
$($SCRIPT)
^--------^ SC2091: Remove surrounding $() to avoid executing output.
For more information:
https://www.shellcheck.net/wiki/SC2091 -- Remove surrounding $() to avoid e...
Is there a way I can rewrite that $($SCRIPT) in a more appropriate way? eval does not seem to be of much help here.
If the script outputs a shell command line to execute, the correct way to do that is:
eval "$("$SCRIPT")"
$($SCRIPT) would only happen to work if the command can be completely evaluated using nothing but word splitting and pathname expansion, which is generally a rare situation. If the program instead outputs e.g. grep "Hello World" or cmd > file.txt then you will need eval or equivalent.
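A self-contained sketch of this: the generated script below is a stand-in for /path/to/an/exe, and it prints a command line containing quotes that only eval can honor:

```shell
#!/bin/bash
# Create a stand-in executable that prints a command line to be executed.
SCRIPT=$(mktemp)
cat > "$SCRIPT" <<'EOF'
#!/bin/sh
echo 'printf "%s\n" "Hello World"'
EOF
chmod +x "$SCRIPT"

# eval gives the printed line a full round of shell parsing, so the
# quotes around "Hello World" are honored as quotes, not literal text.
eval "$("$SCRIPT")"
rm -f "$SCRIPT"
```

With plain $($SCRIPT) instead of eval, printf would receive "Hello and World" as separate mangled arguments.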
You can make it simple by setting the command to be executed as a positional argument in your shell and execute it from the command line
set -- "$SCRIPT"
and now run the result obtained by the expansion of SCRIPT, by doing the following on the command line.
"$#"
This works in case your output from SCRIPT contains multiple words, e.g. custom flags that need to be run. Since this is run in your current interactive shell, ensure the command to be run is not vulnerable to code injection. You could take one extra step of caution and run your command within a sub-shell, so that your parent environment is not affected, by doing ( "$@" ; )
Or use a shellcheck disable=SCnnnn directive to disable the warning and take the occasion to comment on the explicit intention, rather than evade detection by cloaking behind an intermediate variable or arguments array.
#!/usr/bin/env bash
# shellcheck disable=SC2091 # Intentional execution of the output
"$("$SCRIPT")"
By disabling shellcheck with a comment, it clarifies the intent and signals that the questionable code is not an error but an informed implementation design choice.
You can do it in two steps:
command_from_SCRIPT=$($SCRIPT)
$command_from_SCRIPT
and it's clean in shellcheck
This question already has answers here:
How do you pass on filenames to other programs correctly in bash scripts?
(3 answers)
Closed 7 years ago.
I am attempting to parse the parameters sent to shell script. For example the values sent to the script are as follows:
-to someone@somewhere.com -a file1.txt file2.txt "new file.txt"
I can parse the string so that I get -a as my operator, but I want to reformat the parameter part file1.txt file2.txt "new file.txt" so that it looks like 'file1.txt' 'file2.txt' 'new file.txt' so that I can pass it down to the zip utility.
Right now I am using the following to parse the parameter, but it is not getting me the results I want. It is close but not quite right.
for file in `echo $PARM`
do
FILE_LIST="$FILE_LIST '"$file"'"
done
This gives me 'file1.txt' 'file2.txt' 'new' 'file.txt'. How can I rework the above code to give me what I want?
Thank you
First, you need to understand the sequence of operations when the shell parses a command line. Here's a partial list: first, it interprets quotes and escapes, then removes them (after they've had their effects), then expands any variable references (and similar things like backquote expressions), word-splits and wildcard-expands the expanded variable values, then finally treats the result of all of that as a command and its arguments.
This has two important implications for what you're trying to do: by the time your script receives its arguments, they no longer have quotes; the quotes have had their effect (new file.txt is a single argument rather than two), but the quotes themselves are gone. Also, putting quotes in a variable is useless, because by the time the variable gets expanded and the quotes are part of the command line, it's too late for them to do anything useful -- they aren't parsed as quotes, they're just passed on to the command as part of the argument (which you don't want).
Fortunately, the answer is easy (and Stephen P summarized it in his comment): put double-quotes around all variable references. This prevents the word-splitting and wildcard-expansion phases from messing with their values, which means that whatever was passed to your script as a single argument (e.g. new file.txt) gets passed on as a single argument. If you need to pass on all of your arguments, use "$@". If you need to pass on only some, you can either use shift to get rid of the options and then "$@" will pass on the remaining ones, or use e.g. "${@:4}" to pass on all arguments starting at #4, or "${@:4:3}" to pass on three arguments starting at #4.
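A sketch of that last case; print_args is a hypothetical stand-in for the zip call, printing one bracketed line per argument it receives so the boundaries are visible:

```shell
#!/bin/bash
# Each bracketed line is one argument exactly as the callee receives it.
print_args() { printf '[%s]\n' "$@"; }

# Simulate the script's arguments:
set -- -to someone@somewhere.com -a file1.txt file2.txt "new file.txt"

# Quoted "${@:4}" passes on arguments 4..end with boundaries intact:
print_args "${@:4}"
```

This prints three lines, with [new file.txt] staying a single argument; no manual re-quoting is needed.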
I'm looking for a way (other than ".", '.', \.) to use bash (or any other Linux shell) while preventing it from parsing parts of the command line. The problem seems to be unsolvable:
How to interpret special characters in command line argument in C?
In theory, a simple switch would suffice (e.g. -x ... telling that the
string ... won't be interpreted), but it apparently doesn't exist. I wonder whether there is a workaround, hack or idea for solving this problem. The original problem is a script|alias for a program taking YouTube URLs (which may contain special characters (&, etc.)) as arguments. This problem is even more difficult: expanding "$1" while preventing the shell from interpreting the expanded string -- essentially, expanding "$1" without interpreting its result.
Use a here-document:
myprogramm <<'EOF'
https://www.youtube.com/watch?v=oT3mCybbhf0
EOF
If you wrap the starting EOF in single quotes, bash won't interpret any special chars in the here-doc.
Short answer: you can't do it, because the shell parses the command line (and interprets things like "&") before it even gets to the point of deciding your script/alias/whatever is what will be run, let alone the point where your script has any control at all. By the time your script has any influence in the process, it's far too late.
Within a script, though, it's easy to avoid most problems: wrap all variable references in double-quotes. For example, rather than curl -o $outputfile $url you should use curl -o "$outputfile" "$url". This will prevent the shell from applying any parsing to the contents of the variable(s) before they're passed to the command (/other script/whatever).
But when you run the script, you'll always have to quote or escape anything passed on the command line.
Your spec still isn't very clear. As far as I can tell, the problem is that you want to completely reinvent how the shell handles arguments. So… you'll have to write your own shell. The basics aren't even that difficult. Here's pseudo-code:
while true:
print prompt
read input
command = (first input)
args = (argparse (rest input))
child_pid = fork()
if child_pid == 0: // We are inside child process
exec(command, args) // See variety of `exec` family functions in posix
else: // We are inside parent process and child_pid is actual child pid
wait(child_pid) // See variety of `wait` family functions in posix
Your question basically boils down to how that "argparse" function is implemented. If it's just an identity function, then you get no expansion at all. Is that what you want?
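A runnable bash sketch of that loop, where "argparse" is plain whitespace splitting (i.e. the identity on each word), so nothing is ever expanded; the here-document stands in for interactive input:

```shell
#!/bin/bash
# Minimal shell loop: split each input line only on whitespace, then run
# the words directly. Because no eval is used, $VARS, quotes and & reach
# the command as literal text.
while IFS= read -r input; do
    read -ra words <<<"$input"
    [ "${#words[@]}" -eq 0 ] && continue
    "${words[@]}"
done <<'EOF'
echo $HOME is not expanded
EOF
```

The output is the literal string $HOME is not expanded, demonstrating that this "shell" performs no variable expansion at all.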