I’m trying to build a command string that passes a “-e” flag and another variable into another base script called as a subroutine, and I’ve run into a strange problem: I’m losing the “-e” portion of the string when I pass it into the subroutine. I’ve created a couple of examples which illustrate the issue; any help?
This works as you would expect:
$echo "-e $HOSTNAME"
-e ops-wfm
This does NOT work; we lose the “-e” because it is interpreted as a special qualifier.
$myFlag="-e $HOSTNAME"; echo $myFlag
ops-wfm
Adding the “\” escape character doesn’t work either; I get the string back with the "\" still in front:
$myFlag="\-e $HOSTNAME"; echo $myFlag
\-e ops-wfm
How can I prevent -e being swallowed?
Use double-quotes:
$ myFlag="-e $HOSTNAME"; echo "${myFlag}"
-e myhost.local
I use ${var} rather than $var out of habit as it means that I can add characters after the variable without the shell interpreting them as part of the variable name.
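A quick illustration (the variable names here are just made up for the example):
$ file=report
$ echo "$file_backup.tar"      # the shell looks up a variable named 'file_backup', which is unset
.tar
$ echo "${file}_backup.tar"    # the braces mark where the variable name ends
report_backup.tar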
echo may not be the best example here. Most Unix commands will accept -- to mark no more switches.
$ var='-e .bashrc' ; ls -l -- "${var}"
ls: -e .bashrc: No such file or directory
Well, you could put your variable in quotes:
echo "$myFlag"
...making it equivalent to your first example, which, as you say, works just fine.
I wrote a little bash script to make a "shell" that prints the date before and after a command is executed:
while true
do date
printf "Prompt: "
read x
date
$x #Solution: this should be eval "$x"
done
The problem is that this "shell" doesn't recognize variables in $x, so for instance echo $PWD outputs $PWD instead of the current directory. A related consequence is that cd does not work. How to fix/work around this? Thanks.
EDIT: Solved. Thanks everyone :-)
LATE EDIT: Actually the above isn't quite the solution; I also needed to add the -r option to the read command, to prevent backslash sequences from being interpreted prematurely. Furthermore, the double quotes appear to be superfluous after all in this case (since the pre-pass would expand $x and "$x" to the same thing unless $x is empty, and eval does the same thing with no arguments as it does with an empty string), but I suppose it never hurts to always include the quotes as a matter of security and good style.
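For reference, the loop with both fixes applied looks like this:
while true
do date
printf "Prompt: "
read -r x       # -r keeps backslashes in the input literal
date
eval "$x"       # lets the shell expand variables and run builtins like cd
done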
If you're just trying to print out the date before and after, you can do that with PROMPT_COMMAND=date in your .bashrc. This will print the date before your command prompt (meaning that it will also functionally come after your command, since you'll get a new prompt after the command finishes). It will be on a separate line from your PS1, but you could add it to your PS1 instead if you'd rather have it there.
If you're trying to type in a literal echo $PWD and have x give the value of that, you can do eval "$x", but most people will recommend against using eval because anyone can type anything, like sudo rm -rf / --no-preserve-root.
If you're trying to set x to be the name of a variable, i.e., x=PWD, and you want to get the value of PWD, you can do ${!x}. This is called "indirection".
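For example (the printed path is just whatever your current directory happens to be):
$ x=PWD
$ echo "${!x}"     # indirection: expands to the value of the variable named by $x
/home/user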
We started using shellcheck to check our scripts for errors/warnings.
Now the most common warning we see in all our scripts is unquoted variables.
Is there any script to correct those simple warnings/errors?
I have the command below, which I use to change $VAR to ${VAR}:
sed -i -r 's:\$([_a-zA-Z?][_a-zA-Z0-9]*):${\1}:g' <scriptname>
I modified it as follows,
sed -i -r 's:\$([_a-zA-Z?][_a-zA-Z0-9]*):"${\1}":g' <scriptname>
The above command works fine when variables are unquoted, but when they are already quoted, e.g. "$VAR", it changes them to ""${VAR}"".
Any suggestions on whether to continue doing it with sed, or would it be better to write a script to do it?
Edit carefully.
When you write echo "This is example ${var} in the middle of the line" you do not want to put quotes around ${var}.
You should put all variables (except PATH, PWD and some other system vars) in lowercase.
You might want to add some mappings in .vimrc that will execute your first or second sed command line using F4 or F5 (something like . ! ~/bin/make_my_var), making the editing easier. In make_my_var you can add logic for lowercasing the vars when they are not in a list of exceptions (a rough sketch follows below).
And (edited):
You might want some more standards, perhaps use a styleguide.
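A minimal sketch of what such a make_my_var filter could look like, reusing the sed from the question (it reads stdin and writes stdout so vim can use it as a filter; the lowercasing and exception-list logic is left out):
#!/bin/bash
# make_my_var - wrap bare $VAR references in ${...}; extend with your own exception list
sed -r 's:\$([_a-zA-Z?][_a-zA-Z0-9]*):${\1}:g'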
I used this command in my Bash Shell:
printf $VAR1 >> `printf $VAR2`
and it normally worked. But when I write this into the script file and run it in Shell, it does not work. File "script.sh" contains this:
#!/bin/bash
printf $VAR1 >> `printf $VAR2`
and the output in Shell is:
script.sh: line2: `printf $VAR2`: ambiguous redirect
I don't know how this is possible, because the command is absolutely the same. And of course, I run the script on the same system and in the same Shell window.
Thank you for your help.
There are 3 points worth addressing here:
Shell variables vs. environment variables:
Scripts (unless invoked with . / source) run in a child process that only sees the parent [shell]'s environment variables, not its regular shell variables.
This is what likely happened in the OP's case: $VAR1 and $VAR2 existed as regular shell variables, but not environment variables, so script script.sh didn't see them.
Therefore, for a child process to see a parent shell's shell variables, the parent must export them first, as a result of which they (also) become environment variables: export VAR1=... VAR2=...
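A quick way to see the difference (VAR1/VAR2 are just placeholders here):
$ VAR1=foo              # plain shell variable: not inherited by child processes
$ export VAR2=bar       # environment variable: inherited
$ bash -c 'echo "VAR1=$VAR1 VAR2=$VAR2"'
VAR1= VAR2=bar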
Bash's error messages relating to output redirection (>, >>):
If the filename argument to an output redirection is an unquoted command substitution (`...`, or its modern equivalent, $(...)), i.e., the output from a command, Bash reports the error ambiguous redirect in the following cases:
The command output has embedded whitespace, i.e., contains more than one word.
The command output is empty, which is what likely happened in the OP's case.
As an aside: In this case, the error message's wording is unfortunate, because there's nothing ambiguous about a missing filename - it simply cannot work, because files need names.
It is generally advisable to double-quote command substitutions (e.g., >> "$(...)") and also variable references (e.g., "$VAR2"): this will allow you to return filenames with embedded whitespace, and, should the output be unexpectedly empty, you'll get the (slightly) more meaningful error message No such file or directory.
Not double-quoting a variable reference or command substitution subjects its value to so-called shell expansions: further, often unintended, interpretation by the shell.
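To make the two error modes concrete (the exact message wording may differ slightly across Bash versions):
$ target=""                  # simulate an empty variable / command-substitution result
$ echo hi >> $target         # unquoted and empty
bash: $target: ambiguous redirect
$ echo hi >> "$target"       # quoted and empty
bash: : No such file or directory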
The wisdom of using a command substitution to generate a filename:
Leaving aside that printf $VAR2 is a fragile way to print the value of variable $VAR2 in general (the robust form again involves double-quoting: printf "$VAR2", or, even more robustly, to rule out inadvertent interpretation of escape sequences in the variable value, printf %s "$VAR2"), there is no good reason to employ a command substitution to begin with if all that's needed is a variable's value:
>> "$VAR2" is enough to robustly specify the value of variable $VAR2 as the target filename.
I tried this on my Mac (10.11.1) in a terminal window and it worked fine.
Are you sure your default shell is bash?
echo $SHELL
Did you use export to set your shell vars?
$ export VAR1="UselessData"
$ export VAR2="FileHoldingUselessData"
$ ./script.sh
$ cat FileHoldingUselessData
UselessData$
However... I think echo does a better job here, since with printf the output terminates at the first space. So:
$ cat script.sh
#!/bin/bash
echo $VAR1 >> `printf $VAR2`
$ ./script.sh
$ cat FileHoldingUselessData
Some Useless Data
Which leads me to believe you might want to just use echo instead of printf altogether:
#!/bin/bash
echo $VAR1 >> `echo $VAR2`
I've read the man page for echo, and it tells me that the -e parameter will allow an escaped character, such as an escaped n for newline, to have its special meaning. When I type the command
$ echo -e 'foo\nbar'
into an interactive bash shell, I get the expected output:
foo
bar
But when I use this same command in a script (I've tried this command character for character as a test case), I get the following output:
-e foo
bar
It's as if echo is interpreting the -e as a parameter (because the newline still shows up), yet it also interprets the -e as a string to echo. What's going on here? How can I prevent the -e showing up?
You need to use #!/bin/bash as the first line in your script. If you don't, or if you use #!/bin/sh, the script will be run by the Bourne shell and its echo doesn't recognize the -e option. In general, it is recommended that all new scripts use printf instead of echo if portability is important.
In Ubuntu, sh is provided by a symlink to /bin/dash.
Different implementations of echo behave in annoyingly different ways. Some don't take options (i.e. will simply echo -e as you describe) and automatically interpret escape sequences in their parameters. Some take flags, and don't interpret escapes unless given the -e flag. Some take flags, and interpret different escape sequences depending on whether the -e flag was passed. Some will cause you to tear your hair out if you try to get them to behave in a predictable manner... oh, wait, that's all of them.
What you're probably seeing here is a difference between the version of echo built into bash vs /bin/echo or maybe vs. some other shell's builtin. This bit me when Mac OS X v10.5 shipped with a bash builtin echo that echoed flags, unlike what all my scripts expected...
In any case, there's a solution: use printf instead. It always interprets escape sequences in its first argument (the format string). The problems are that it doesn't automatically add a newline (so you have to remember to do that explicitly), and it also interprets % sequences in its first argument (it is, after all, a format string). Generally, you want to put all the formatting stuff in the format string, then put variable strings in the rest of the arguments so you can control how they're interpreted by which % format you use to interpolate them into the output. Some examples:
printf "foo\nbar\n" # this does what you're trying to do in the example
printf "%s\n" "$var" # behaves like 'echo "$var"', except escapes will never be interpreted
printf "%b\n" "$var" # behaves like 'echo "$var"', except escapes will always be interpreted
printf "%b\n" "foo\nbar" # also does your example
Use
alias echo='/usr/bin/echo'
to force 'echo' to invoke coreutils' echo, which interprets the '-e' parameter.
Try this:
import subprocess

def bash_command(cmd):
    subprocess.Popen(['/bin/bash', '-c', cmd])

code = "abcde"
# you can use echo options such as -e
bash_command('echo -e "' + code + '"')
Source: http://www.saltycrane.com/blog/2011/04/how-use-bash-shell-python-subprocess-instead-binsh/
VAR="-e xyz"
echo $VAR
This prints xyz, for some reason. I don't seem to be able to find a way to get a string to start with -e.
What is going on here?
The answers that say to put $VAR in quotes are only correct by side effect. That is, when put in quotes, echo(1) receives a single argument of -e xyz, and since that is not a valid option string, echo just prints it out. It is a side effect as echo could just as easily print an error regarding malformed options. Most programs will do this, but it seems GNU echo (from coreutils) and the version built into bash simply echo strings that start with a hyphen but are not valid argument strings. This behaviour is not documented so it should not be relied upon.
Further, if $VAR contains a valid echo option argument, then quoting $VAR will not help:
$ VAR="-e"
$ echo "$VAR"
$
Most GNU programs take -- as an argument to mean no more option processing — all the arguments after -- are to be processed as non-option arguments. bash echo does not support this so you cannot use it. Even if it did, it would not be portable. echo has other portability issues (-n vs \c, no -e).
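You can see that bash's builtin echo just prints the -- instead of treating it as an end-of-options marker:
$ VAR="-e xyz"
$ echo -- $VAR
-- -e xyz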
The correct and portable solution is to use printf(1).
printf "%s\n" "$VAR"
The variable VAR contains -e xyz; if you access the variable via an unquoted $, the -e is interpreted as a command-line option for echo. Note that the content of $VAR is not wrapped in "" automatically.
Use echo "$VAR" to fix your problem.
In zsh, you can use a single dash (-) before your arguments. This ensures that no following arguments are interpreted as options.
% VAR="-e xyz"
% echo - $VAR
-e xyz
From the zsh docs:
echo [ -neE ] [ arg ... ]
...
Note that for standards compliance a double dash does not
terminate option processing; instead, it is printed directly.
However, a single dash does terminate option processing, so the
first dash, possibly following options, is not printed, but
everything following it is printed as an argument.
The single dash behaviour is different from other shells.
Keep in mind this behavior is specific to zsh.
Try:
echo "$VAR"
instead.
(-e is a valid option for echo - this is what causes this phenomenon).