Why is : ' ' a comment in Bash? [closed]

In bash, you can sort-of do multi-line comments, like this:
: '
echo "You will never see this message :)"
'
But why does it only work like that? If you do it without a space after the colon, an error occurs. And even if I repeat what I did above with the echo message in apostrophes, it still is not read by the machine:
: '
echo 'You will also never see this message :D'
'
And also without anything around it:
: '
echo You will never see these messages :(
How does this work, and why does everything I look up about multiline comments in bash say there is no such thing?

Colon : is a built-in bash command that essentially does nothing.
From the bash documentation:
Do nothing beyond expanding arguments and performing redirections. The return status is zero.
So you can think of : as being like any other bash command that you can pass arguments to. It's not a comment, but it sort of works like one, because the command and every argument passed to it amount to a no-op. You could accomplish the same thing by creating a "comment" function:
comment () { :; }
comment echo 'You will never see this message'
The space is required after : because without the space, the whole thing becomes the command name. For example, if you run:
:echo 'You will never see this message'
Bash sees that as running a command called :echo with the argument 'You will never see this message'. It returns an error because there is no such command.
The second part is just how bash handles unmatched quotes. Bash will continue reading input until it finds the matching quote. So in your multi-line examples, you are passing one argument to the : command (in the first example) or passing multiple arguments (in the second example).
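To see what actually happens, you can trace execution with set -x (a minimal sketch; the exact xtrace formatting varies between bash versions):
set -x
: '
echo "You will never see this message :)"
'
set +x
The trace shows a single : command receiving one multi-line string argument, which it then ignores.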

This isn't a comment, but it has no effect, so it seems like one. It is still parsed and evaluated by bash, though: you can introduce syntax errors if you use it incorrectly.
Understanding some of the basic building blocks of shell syntax and some built-in commands will help make sense of this.
The shell (such as bash) reads commands, figures out the command name and the arguments from the user input, and then runs the command with the arguments.
For example:
echo hi
Is parsed by the shell as the command echo with 1 argument hi.
Generally, the shell splits things on spaces and tabs, which is why it parses echo hi as two things. You can use single quotes and double quotes to tell it to parse things differently:
echo 'foo bar' baz 'ignore me'
is parsed by the shell as the command echo with arguments foo bar, baz and ignore me. Notice that the quotes aren't part of the arguments, they are parsed and removed by bash.
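You can make those word boundaries visible with printf, which applies its format string to each argument in turn:
printf '[%s]\n' 'foo bar' baz 'ignore me'
This prints [foo bar], [baz] and [ignore me] on separate lines, exposing exactly three arguments with the quotes already stripped.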
Another piece of the puzzle is the builtin : command. man : will tell you that this command does nothing. It expands arguments and performs redirections, but does nothing by itself.
That means when you enter this:
: 'echo hi'
Bash parses it as the command : with the single argument echo hi. Then the command is run and the argument is simply ignored. That has no effect, so it feels like a comment (but it really isn't, unlike #, which is a true comment character).
: '
echo 'You will also never see this message :D'
'
Is parsed by bash as the command : with the arguments \necho You (that is, a newline character followed by echo You, because the quoted part glues onto the word that follows it), will, also, never, see, this, message, and finally :D\n (that is, :D glued to a quoted newline character). Then bash runs this command. That does nothing, so it again behaves mostly like a comment.
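If you want to verify that parse yourself, substitute a command that prints its arguments (a sketch; args is a hypothetical helper, not a standard command):
args() { printf '<%s>\n' "$@"; }
args '
echo 'You will also never see this message :D'
'
Each argument is printed between angle brackets on its own line, which makes the glued-together words and the embedded newlines visible.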


Execute command that results from execution of a script whose name is in a variable

When posting this question originally, I completely misworded it, producing a different but still reasonable question, which was correctly answered here.
The following is the correct version of the question I originally wanted to ask.
In one of my Bash scripts, there's a point where I have a variable SCRIPT which contains the /path/to/an/exe which, when executed, outputs a line to be executed.
What my script ultimately needs to do is execute that resulting line. Therefore the last line of the script is
$($SCRIPT)
so that $SCRIPT is expanded to /path/to/an/exe, and $(/path/to/an/exe) executes the executable and gives back the line to be executed, which is then executed.
However, running shellcheck on the script generates this error:
In setscreens.sh line 7:
$($SCRIPT)
^--------^ SC2091: Remove surrounding $() to avoid executing output.
For more information:
https://www.shellcheck.net/wiki/SC2091 -- Remove surrounding $() to avoid e...
Is there a way I can rewrite that $($SCRIPT) in a more appropriate way? eval does not seem to be of much help here.
If the script outputs a shell command line to execute, the correct way to do that is:
eval "$("$SCRIPT")"
$($SCRIPT) would only happen to work if the command can be completely evaluated using nothing but word splitting and pathname expansion, which is generally a rare situation. If the program instead outputs e.g. grep "Hello World" or cmd > file.txt then you will need eval or equivalent.
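As a quick illustration (a sketch; emit stands in for your "$SCRIPT"):
emit() { printf '%s\n' 'grep "Hello World" log.txt'; }
printf 'Hello World\n' > log.txt
$(emit)        # fails: word splitting keeps the literal quotes, so grep searches for '"Hello' in files 'World"' and log.txt
eval "$(emit)" # works: the emitted line goes through full shell parsing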
You can make it simple by setting the command to be executed as a positional argument in your shell and executing it from the command line:
set -- "$SCRIPT"
and now run the result obtained from the expansion of SCRIPT by doing the following on the command line:
"$#"
This works when the output of SCRIPT contains multiple words, e.g. custom flags that need to be passed. Since this runs in your current interactive shell, ensure the command to be run is not vulnerable to code injection. As an extra precaution, you could run the command within a sub-shell, so that your parent environment is not affected, by doing ( "$@" ; )
Or use a shellcheck disable=SCnnnn directive to silence the warning, and take the occasion to comment on the explicit intention, rather than evading detection by hiding behind an intermediate variable or arguments array.
#!/usr/bin/env bash
# shellcheck disable=SC2091 # Intentional execution of the output
"$("$SCRIPT")"
Disabling shellcheck with a comment clarifies the intent and signals that the questionable code is not an error but an informed, deliberate design choice.
You can do it in two steps:
command_from_SCRIPT=$($SCRIPT)
$command_from_SCRIPT
and shellcheck reports it as clean.

Script takes only first part of double quotes

Yesterday I asked a similar question about escaping double quotes in env variables, but it didn't solve my problem (probably because I didn't explain it well enough), so I would like to be more specific.
I'm trying to run a script (which I know is written in Perl), but I have to use it as a black box because of a permissions issue (so I don't know how the script works). Let's call this script script_A.
I'm trying to run a basic command in the shell: script_A -arg "date time".
If I run it from the command line, it works fine, but if I try to use it from a bash or Perl script (for example using the system operator), it takes only the first part of the string that was sent as an argument. In other words, it fails with the following error: '"date' is not valid.
Example to specify a little bit more:
If I run from the command line (works fine):
> script_A -arg "date time"
If I run it from (for example) a Perl script, it fails:
my $args = $ENV{SOME_ENV}; # Assume that SOME_ENV has '-arg "date time"'
my $cmd = "script_A $args";
system($cmd");
I think that the problem comes from the environment variable, but I can't use single quotes while defining the env variable. For example, I can't use the following method:
setenv SOME_ENV '-arg "date time"'
Because it fails with the following error: '"date' is not valid.
Also, I tried to use the following method:
setenv SOME_ENV "-arg '"'date time'"'"
Although now the env variable will contain:
echo $SOME_ENV
> -arg 'date time' # should be -arg "date time"
Another note: escaping with \" also fails in the shell (I tried it).
Any suggestions on how to locate the reason for the error and how to solve it?
The $args, obtained from %ENV as you show, is a string.
The problem is in what happens to that string as it is manipulated before the arguments are passed to the program, which needs to receive the strings -arg and date time.
If the program is executed in a way that bypasses the shell, as in your example, then the whole -arg "date time" is passed to it as its first argument. This is clearly wrong, as the program expects -arg and then another string for its value (date time).
If the program were executed via the shell, which happens when there are shell metacharacters in the command line (not the case in your example), then the shell would break the string into words, except for the quoted part; this is how it works from the command line. That can be enforced with
system('/bin/tcsh', '-c', $cmd);
This is the most straightforward fix, but I can't honestly recommend involving the shell just for argument parsing. Also, you are then in the game of layered quoting and escaping, which can get rather involved and tricky. For one, if things aren't right, the shell may end up breaking the command into the words -arg, "date, time"
How you set the environment variable works
> setenv SOME_ENV '-arg "date time"'
> perl -wE'say $ENV{SOME_ENV}' #--> -arg "date time" (so it works)
which I believe has always worked this way in [t]csh.
Then, in a Perl script: parse this string into the -arg and date time strings, and have the program executed in a way that bypasses the shell (if the shell isn't needed by the command):
my @args = $ENV{SOME_ENV} =~ /(\S+)\s+"([^"]+)"/; #"
my @cmd = ('script_A', @args);
system(@cmd) == 0 or die "Error with system(@cmd): $?";
This assumes that SOME_ENV's first word is always the option's name (-arg) and that all the rest is always the option's value, under quotes. The regex extracts the first word, as consecutive non-space characters, and, after spaces, everything in quotes.† These are the program's arguments.
In the system LIST form, the program that is the first element of the list is executed without using a shell, and the remaining elements are passed to it as arguments. Please see system for more on this, and also for the basics of how to investigate failure by looking into the $? variable.
It is in principle advisable to run external commands without the shell. However, if your command needs the shell, then make sure that the string is escaped just right to preserve the quotes.
Note that there are modules that make it easier to use external commands. A few, from simple to complex: IPC::System::Simple, Capture::Tiny, IPC::Run3, and IPC::Run.
I must note that that's an awkward environment variable; is there any way to organize things otherwise?
† To make this work for non-quoted arguments as well (-arg date), make the quote optional:
my @args = $ENV{SOME_ENV} =~ /(\S+)\s+"?([^"]+)/;
where I have now left out the (unnecessary) closing quote for simplicity.

Bash, if statement issue [duplicate]

I am new to bash and trying to use the if statement, so I tried this short piece of code:
#!/bin/bash
if ["lol" = "lol"];
then
echo "lol"
fi
And I get the following error:
./script.sh: line 2: [lol: command not found
I tried other combinations, like:
#!/bin/bash
if ["lol" == "lol"];
then
echo "lol"
fi
but I still get errors, so what would be the correct formulation?
Thank you
Although the problem was solved in a comment above, let me give you a bit more information.
For Bash, [ is a command, and there is nothing special about that command. Bash parses the following arguments as data and operators. Since these are normal arguments to a normal command, they need to be separated by spaces.
If you express a test as ["lol" = "lol"], Bash reads this as it would read any command, by performing word splitting and expansions. This gets rid of quotes, and what is left after that is [lol = lol]. Of course, [lol is not a valid command, so you get the error message you saw.
You can test this with another command. For instance, type l"s" at the command line, and you will see Bash execute ls.
You would not write ls/ (without the space), so for the same reason you cannot write [a=b] either.
Please note that ] is simply a closing argument that the [ command expects. It has no purpose of its own, and requiring it is simply a design choice. The test command is entirely equivalent to [, aside from not needing the closing bracket.
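A small demonstration of that equivalence:
test "lol" = "lol" && echo "equal"
[ "lol" = "lol" ] && echo "equal"
Both lines print equal; the only difference is the trailing ].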
One last word: [ is a builtin in Bash (a command that is part of Bash itself and executes without launching a separate process or program), but on many systems you will also find an executable named [. Try which [ at the command line on your system; it will probably be there.
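For completeness, the question's script works once the brackets are separated by spaces:
#!/bin/bash
if [ "lol" = "lol" ]; then
    echo "lol"
fi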

Echo-ing an environment variable returns string literal rather than environment variable value

I have two bash scripts. The first listens to a pipe "myfifo" for input and executes the input as a command:
fifo_name="myfifo"
[ -p $fifo_name ] || mkfifo $fifo_name;
while true
do
    if read line; then
        $line
    fi
done <"$fifo_name"
The second passes a command 'echo $SET_VAR' to the "myfifo" pipe:
command='echo $SET_VAR'
command_to_pass="echo $command"
$command_to_pass > myfifo
As you can see, I want to pass 'echo $SET_VAR' through the pipe. In the listener process, I've set a $SET_VAR environment variable. I expect the output of the command 'echo $SET_VAR' to be 'var_value', which is the value of the environment variable SET_VAR.
Running the first (the listener) script in one bash process and then passing a command via the second in another process gives the following result:
$SET_VAR
I expected "var_value" to be printed. Instead, the string literal $SET_VAR is printed. Why is this the case?
Before I get to the problem you're reporting, I have to point out that your loop won't work. The while true part (without a break somewhere in the loop) will run forever. It'll read the first line from the file, loop, try to read a second line (which fails), loop again, try to read a third line (also fails), loop again, try to read a fourth line, etc... You want the loop to exit as soon as the read command fails, so use this:
while read line
do
    # something I'll get to
done <"$fifo_name"
The other problem you're having is that the shell expands variables (i.e. replaces $var with the value of the variable var) partway through the process of parsing a command line, and when it's done that it doesn't go back and re-do the earlier parsing steps. In particular, if the variable's value included something like $SET_VAR it doesn't go back and expand that, since it's just finished the bit where it expands variables. In fact, the only thing it does with the expanded value is split it into "words" (based on whitespace), and expand any filename wildcards it finds -- no variable expansions happen, no quote or escape interpretation, etc.
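A minimal demonstration of that ordering, runnable directly in a shell:
SET_VAR=var_value
line='echo $SET_VAR'
$line    # prints the literal text: $SET_VAR
The $SET_VAR inside the variable's value is never expanded, because variable expansion has already happened by the time $line is replaced with its contents.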
One possible solution is to tell the shell to run the parsing process twice, with the eval command:
while read line
do
    eval "$line"
done <"$fifo_name"
(Note that I used double-quotes around "$line" -- this prevents the word splitting and wildcard expansion I mentioned from happening before eval goes through the normal parsing process. If you think of your original code half-parsing the command in $line, without double-quotes it gets one and a half-parsed, which is weird. Double-quotes suppress that half-parsing stage, so the contents of the variable get parsed exactly once.)
However, this solution comes with a big warning, because eval has a well-deserved reputation as a bug magnet. eval makes it easy to do complex things without quite understanding what's going on, which means you tend to get scripts that work great in testing, then fail incomprehensibly later. And in my experience, when eval looks like the best solution, it probably means you're trying to solve the wrong problem.
So, what're you actually trying to do? If you're just trying to execute the lines coming from the fifo as shell commands, then you can use bash "$fifo_name" to run them in a subshell, or source "$fifo_name" to run them in the current shell.
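In that case the whole loop collapses to something like this (a sketch reusing the question's fifo setup):
fifo_name="myfifo"
[ -p "$fifo_name" ] || mkfifo "$fifo_name"
bash "$fifo_name"    # each line written to the fifo runs as a command in a child bash
Use source "$fifo_name" instead if the commands should affect the current shell.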
BTW, the script that feeds the fifo:
command='echo $SET_VAR'
command_to_pass="echo $command"
$command_to_pass > myfifo
Is also a disaster waiting to happen. Putting commands in variables doesn't work very well in the shell (I second chepner's recommendation of BashFAQ #50: I'm trying to put a command in a variable, but the complex cases always fail!), and putting a command to print another command in a variable is just begging for trouble.
bash, by its nature, reads commands from stdin. You can simply run:
bash < myfifo

How to input a comment on csh?

In bash, I use # to enter a comment, even in an interactive session.
bash-3.2$ #
bash-3.2$ #
bash-3.2$ #
bash-3.2$
csh reports an error for this. How can I enter a comment in an interactive csh session? In other words, I am looking for a way to write a comment in csh that is 100% sure to work.
root@freebsd9:~ # #
#: Command not found.
root@freebsd9:~ # # 3e
#: Command not found.
root@freebsd9:~ # #
#: Command not found.
root@freebsd9:~ #
Interactive csh or tcsh doesn't do comments. The # character introduces a comment only in a script. (This is unlike the behavior of sh and its derivatives, such as bash.) Quoting the csh man page (from Solaris 9, one of the remaining systems where csh is not just a symlink to tcsh):
When the shell's input is not a terminal, the character #
introduces a comment that continues to the end of the input line.
Its special meaning is suppressed when preceded by a \ or enclosed in
matching quotes.
The point, I think, is that interactive commands don't need comments.
If you're using tcsh, you can do something similar with the built-in : command, which does nothing:
% : 'This is not a comment, but it acts like one.'
(where % represents the shell prompt and : is the command). Quoting the argument is a good idea; otherwise, though the command is not executed, it can have some effect:
% : This will create the file "oops.txt" > oops.txt
Note that since : is a command, it must be followed by a space.
The : command was originally introduced in a very early version of the Bourne shell, or perhaps even before that.
However, the /bin/csh version of the : command does not permit any arguments, making it useless as a comment replacement:
csh% : 'This will not work.'
:: Too many arguments
csh%
(I didn't realize that when I initially posted this answer. I must have tested it with tcsh rather than a true csh.)
Since : doesn't work in pure csh, the next best solution is probably to use echo and redirect the output:
csh% echo 'This is not a comment, but it acts like one.' > /dev/null
Obligatory link: http://www.perl.com/doc/FMTEYEWTK/versus/csh.whynot
For reference, I'd like to note my current workaround: using echo.
#!/bin/tcsh
echo "This is comment line."
echo "But still, beware... Because `expression` is still being evaluated.
This is the best way I could find.
To use # for interactive comments in tcsh or csh, this seems to work in most cases:
alias '#' 'echo \!* >> /dev/null'
It can be run by the user interactively or placed in a .tcshrc or .cshrc configuration file as appropriate.
(The alias can obviously be named differently as you might desire; normal restrictions apply.)
Note: as Keith Thompson noted above, this will still give errors if your comment includes redirection characters such as > or >>, but used this way it does not appear to actually create an undesired redirect file. Surrounding the comment in single quotes (') is still a workaround.
Check whether your script has Unix (LF) line endings; csh has problems with Windows (CRLF) line endings.
