Why is bash using single and double quotes literally? - bash

I have a situation in Bash I've never encountered before and don't know how to resolve. I installed bash on Alpine Linux (Docker container) and for some reason environment variables defined with quotes keep the quotes literally.
MY_PATH="/home/my/path"
> cd $MY_PATH
Result
bash: cd: "/home/my/path": No such file or directory
> echo $MY_PATH
Result
"/home/my/path"
Now if you try it without quotes it works
MY_PATH=/home/my/path
> cd $MY_PATH
Result
bash-4.4# (path changed)
> echo $MY_PATH
Result
/home/my/path
I've never seen this before, as I expect bash to gobble up the outer quotes; I'm not even sure what to search for to resolve this.
To fully qualify the scenario let me point out that:
Using Docker with an Alpine (3.8) image
Installing Bash 4 on Alpine that usually defaults to ash shell
Update
This is starting to look like a Docker issue. I'm using the env_file in Docker Compose to push environment variables to a container, and it looks like it's literally copying the quotes (" => \").
Thanks to bishop's comment suggesting I try od -x.
container.env
#!/usr/bin/env bash
MY_PATH="/home/my/path"
Then inside the Alpine 3.8 container running env
MY_PATH="/home/my/path"
Update 2
Looks like there was a bug filed around this that was closed, but it apparently isn't fixed. Is it because I'm the only one in the universe still using Docker Toolbox?

https://docs.docker.com/compose/env-file/
These syntax rules apply to the .env file:
Compose expects each line in an env file to be in VAR=VAL format.
Lines beginning with # are processed as comments and ignored.
Blank lines are ignored.
There is no special handling of quotation marks. This means that they are part of the VAL.
In particular, the env file is not a shell script and not seen by bash (your #!/usr/bin/env bash line is treated as a comment and ignored).
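In other words, the fix on the Compose side is to drop the shebang line and the quotes from the env file entirely; a minimal sketch:
container.env:
MY_PATH=/home/my/path
Then inside the container:
$ echo "$MY_PATH"
/home/my/path
$ cd "$MY_PATH"   # works now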

Related

How to inject a bash script with a dollar sign ($) in terraform?

I have a simple bash script that does something like the following:
#!/bin/bash
a=$(curl -s http://metadata/endpoint/internal)
echo "$a - bar"
(this is just a simplification). Note the use of the two $ signs to execute a command and resolve a variable.
Using Terraform, I want to write this file to a GCP instance during startup. Per Terraform's instructions, I'm attempting to avoid using the File Provisioner. Using the metadata_startup_script field in google_compute_instance, I am including the script so that I can write it to a particular location.
E.g.
metadata_startup_script = <<-EOF
#!/bin/bash -xe
sudo tee /etc/myservice/serv.conf > /dev/null <<EOI
${file("${path.module}/scripts/simple_bash.sh")}
EOI
EOF
Terraform is interpolating the $ in the inner script somewhere (either while loading it into metadata_startup_script, or while writing the script out to disk).
Whatever I use to try to escape the interpolation, it still fails to write correctly. For example, I have tried (in the inner script):
echo "\$a - bar"
echo "${“$”}a - bar"
echo "$$a - bar"
According to the terraform docs, I'm supposed to use $$, but when I do it in the above, I get:
echo "1397a - bar"
All of which fail to replicate the original script.
I’m just looking for the exact bash script, as written, to be written to disk.
My goal would be to do the above without extra escape sequences (as detailed here - Escaping dollar sign in Terraform) so that I can continue to run the original script (for debugging purposes).
I would also prefer not to build a packer image with the original script in it.
Thanks!
I don't think it's Terraform eating your variable interpolations here, because Terraform only understands ${ (a dollar sign followed by a brace) as starting an interpolation, whereas your example contains only $a.
However, you do seem to be embedding one bash script inside another, so it seems plausible to me that the outer bash is resolving your $a before the inner bash gets a chance to look at it. If so, you can use the literal variant of bash heredoc syntax, as described in answers to How to cat <<EOF >> a file containing code?, so that the outer bash will take the content as literal and leave it to the inner bash to evaluate.
metadata_startup_script = <<-EOF
#!/bin/bash -xe
sudo tee /etc/myservice/serv.conf > /dev/null <<'EOI'
${file("${path.module}/scripts/simple_bash.sh")}
EOI
EOF
Notice that I wrote <<'EOI' instead of <<EOI, following the guidance from that other question in combination with the main Bash documentation on "here documents" (bold emphasis mine):
This type of redirection instructs the shell to read input from the current source until a line containing only word (with no trailing blanks) is seen. All of the lines read up to that point are then used as the standard input (or file descriptor n if n is specified) for a command.
The format of here-documents is:
[n]<<[-]word
here-document
delimiter
No parameter and variable expansion, command substitution, arithmetic expansion, or filename expansion is performed on word. If any part of word is quoted, the delimiter is the result of quote removal on word, and the lines in the here-document are not expanded. If word is unquoted, all lines of the here-document are subjected to parameter expansion, command substitution, and arithmetic expansion, the character sequence \newline is ignored, and \ must be used to quote the characters \, $, and `.
If the redirection operator is <<-, then all leading tab characters are stripped from input lines and the line containing delimiter. This allows here-documents within shell scripts to be indented in a natural fashion.
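To see the same behavior in isolation, outside of Terraform, here is a small bash-only sketch (the file names are arbitrary):
a=outer
# Unquoted delimiter: the outer shell expands $a while writing the file
cat > /tmp/expanded.sh <<EOI
echo "$a - bar"
EOI
# Quoted delimiter: the body is taken literally and $a survives intact
cat > /tmp/literal.sh <<'EOI'
echo "$a - bar"
EOI
cat /tmp/expanded.sh   # echo "outer - bar"
cat /tmp/literal.sh    # echo "$a - bar"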
If your machine image is configured to run cloud-init at startup -- this is often but not always what is responsible for executing metadata_startup_script -- you may be able to achieve a similar effect without so much Bash scripting indirection by using Cloud Config YAML instead of a shell script directly.
For example, if your intent is only to write the content of that file into the designated location in the filesystem, you could potentially follow the Writing out arbitrary files example:
metadata_startup_script = <<-EOF
#cloud-config
${yamlencode({
write_files = [
{
encoding = "b64"
content = filebase64("${path.module}/scripts/simple_bash.sh")
path = "/etc/myservice/serv.conf"
owner = "root:root"
permissions = "0644"
},
]
})}
EOF
Cloud-init evaluates its modules at various points in the startup lifecycle. The Write Files module used here is specified to run once on the first boot of an instance, which matches how Cloud-init would typically treat a naked shell script too.
I do not think your issue is related to TF interpolation. I think you have problems because of normal bash interpolation, as it's bash that is going to try to resolve $ in your /etc/myservice/serv.conf while writing its content.
The regular solution is to use 'EOI', not EOI:
metadata_startup_script = <<-EOF
#!/bin/bash -xe
sudo tee /etc/myservice/serv.conf > /dev/null <<'EOI'
${file("${path.module}/scripts/simple_bash.sh")}
EOI
EOF

Converting a BASH script to run on SH (via BusyBox)

I have an Asus router running a recent version of FreshTomato - that comes with BusyBox.
I need to run a script that was made with BASH in mind - it is an adaptation of this script - but it fails to run with this error: line 41: syntax error: bad substitution
Checking the script with shellcheck.net yields these errors:
Line 41:
for optionvarname in ${!foreign_option_*} ; do
^-- SC3053: In POSIX sh, indirect expansion is undefined.
^-- SC3056: In POSIX sh, name matching prefixes are undefined.
Line 42:
option="${!optionvarname}"
^-- SC3053: In POSIX sh, indirect expansion is undefined.
These are the lines that are causing problems:
for optionvarname in ${!foreign_option_*} ; do # line 41
option="${!optionvarname}" # line 42
# do some stuff with $option...
done
If my understanding is correct, the original script simply does something with all variables whose names start with foreign_option_.
However, as far as I could determine, both ${!foreign_option_*} and ${!optionvarname} constructs are BASH-specific and not POSIX compliant, so there is no direct "bash to sh" code conversion possible.
I have tried to create a /bin/bash symlink that points to busybox, but I got the Read-only file system error.
So, how can I get this script to run on my router? I see only two options, but I can't figure out how to implement either:
Make BusyBox interpret the script as BASH instead of SH - can I use a specific shebang for this?
Seems like the fastest option, but only if BusyBox has a "complete" implementation of BASH
Alter the script code to not use BASH specifics.
This is safer, but since there is no "collect all variables starting with X" for SH, how can I do it?
how can I get this script to run on my router?
That's easy, either:
install bash on your router or
port the script to busybox/posix compatible shell.
Make BusyBox interpret the script as BASH instead of SH - can I use a specific shebang for this?
That doesn't make sense. Busybox comes with the ash shell interpreter, and bash is bash. Bash can interpret bash extensions; ash can't. You can't "make busybox interpret bash" - cars don't fly, planes do that. If you wanted to make a car fly, you would have to add wings to it and effectively turn it into a plane. The answer to "make BusyBox interpret the script as BASH instead of SH" would be: patch busybox and implement all the bash extensions in it.
A shebang is used to run a file under a different interpreter. Using #!/bin/bash would invoke bash, which is unrelated to busybox; busybox wouldn't be involved at all.
how can I do it?
Decide on an unrealistically high maximum, iterate over variables named foreign_option_{1...some_max}, and for each one check whether it is set; if it is not set, skip to the next iteration, otherwise continue with the script body.
for i in $(seq 100); do
    optionvarname="foreign_option_${i}"
    # https://stackoverflow.com/questions/3601515/how-to-check-if-a-variable-is-set-in-bash
    if eval "[ -z \"\${${optionvarname}+x}\" ]"; then continue; fi
    # ... do some stuff with the variable named by "$optionvarname" ...
done
With enough luck, maybe you can use the output of set. The following will fail if any variable's value contains a newline followed by a string that matches the pattern:
for optionvarname in $(set | grep -o '^foreign_option_[0-9]\+=' | sed 's/=//'); do
Indirect expansion can be easily replaced by eval:
eval "option=\"\$${optionvarname}\""
If you really cannot install Bash on that router, here is one possible workaround, which seems to work for me in BusyBox on a QNAP NAS:
foreign_option_one=1
foreign_option_two=2
for x in one two; do
    opt_var=foreign_option_${x}
    eval "opt_value=\$$opt_var"
    echo "$opt_var = $opt_value"
done
(But you will probably encounter more problems with moving a Bash script to busybox, so you might want to first consider alternatives like replacing the router)

Reading lines in a text file by Bash Script

I'm going to read lines in a text file:
$SSH_PRIVATE_FILE="address"
I would like to read and evaluate each line so that it assigns a value to the already defined SSH_PRIVATE_FILE.
The following is the Dockerfile's contents:
ARG SSH_PRIVATE_FILE
COPY build-params build-params
RUN while IFS='' read -r line || [ -n "$line" ]; do\
echo "Text read from file: $line";\
eval `$line`;\
done < "build-params"
RUN echo $SSH_PRIVATE_FILE
UPDATED
But it returns this error: /bin/sh: 1: $SSH_PRIVATE_FILE="~/.ssh/id_rsa": not found
Bourne-type shells have a built-in mechanism to read the contents of a text file and evaluate each line, the . directive. (GNU bash has the same functionality under the name source, but this is not part of the POSIX shell standard and some very-light-weight shells in Docker base images don’t support it.) At a shell level, what you’ve written is equivalent to
. ./build-params
However, each Dockerfile RUN line runs a separate container with a separate shell with a clean shell environment, so this turns out to be a pretty bad way to set environment variables. The Dockerfile ENV directive works better.
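A quick sketch of the difference, assuming build-params contains plain VAR=value lines:
# Each RUN step gets a fresh shell, so a variable sourced here...
RUN . ./build-params && echo "inside this RUN: $SSH_PRIVATE_FILE"
# ...is already gone in the next step
RUN echo "next RUN: $SSH_PRIVATE_FILE"
# ENV persists across later build steps and into the final image
ENV SSH_PRIVATE_FILE=/ssh/id_rsa
RUN echo "now visible: $SSH_PRIVATE_FILE"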
Furthermore, since you’re writing the Dockerfile, you have complete control over the filesystem layout inside the Dockerfile, and you don’t really need the locations of things inside the Docker container to be parametrizable. In the case of things like credentials, you’d use the docker run -v option to inject things into the container. If I needed a setting like this, I might make my Dockerfile say
ENV SSH_PRIVATE_FILE=/ssh/id_rsa
and then actually launch the container as
docker run -v $HOME/.ssh:/ssh ...
and not make this a build-time option at all.
Just a wild guess, but I'd try putting a space before every \ at the end of each line.

source a file containing environment variables including a "dash" character in bash?

I'm using bash and have a file called x.config that contains the following:
MY_VAR=Something1
ANOTHER=Something2
To load these as environment variables I just use source:
$ source x.config
But this doesn't work if MY_VAR is called MY-VAR:
MY-VAR=Something1
ANOTHER=Something2
If I do the same thing I get:
x.config:1: command not found: MY-VAR=Something1
I've tried escaping - and a lot of other things but I'm stuck. Does anyone know a workaround for this?
A pure bash workaround that might work for you is to re-run the script using env to set the environment. Add this to the beginning of your script.
if [[ ! -v myscript_env_set ]]; then
export myscript_env_set=1
readarray -t newenv < x.config
exec env "${newenv[#]}" "$0" "$#"
fi
# rest of the script here
This assumes that x.config doesn't contain anything except variable assignments. If myscript_env_set is not in the current environment, put it there so that the next invocation skips this block. Then read the assignments into an array to pass to env. Using exec replaces the current process with another invocation of the script, but with the desired variables in the environment.
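Note that even after the re-exec, bash still cannot expand $MY-VAR as a shell variable; the script would have to read it through something like printenv, for example:
# after the re-exec block above:
value=$(printenv 'MY-VAR')
echo "MY-VAR is $value"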
A dash (-) in an environment variable name is not portable and, as you noticed, will cause a lot of problems. You can't set such names from bash. Fix the application you want to invoke.
That being said, if you can't change the target app, you can do this from python:
#!/usr/bin/python
import os

with open('x.config') as f:
    for line in f:
        name, value = line.strip().split('=')
        os.environ[name] = value

os.system('/path/to/your/app')
This is a very simplistic config reader, and for a more complex syntax you might want to use ConfigParser.
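As an aside, if the goal is only to launch a single program with the dash-named variable set, env(1) can usually do it directly, since the environment itself does not restrict names the way the shell does:
env 'MY-VAR=Something1' ANOTHER=Something2 /path/to/your/app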

How to accommodate spaces in a variable in a bash shell script?

Hopefully this should be a simple one... Here is my test.sh file:
#!/bin/bash
patch_file="/home/my dir/vtk.patch"
cmd="svn up \"$patch_file\""
$cmd
Note the space in "my dir". When I execute it,
$ ./test.sh
Skipped '"/home/my'
Skipped 'dir/vtk.patch"'
I have no idea how to accommodate the space in the variable and still execute the command. But executing the following directly in the bash shell works without a problem.
$ svn up "/home/my dir/vtk.patch" #WORKS!!!
Any suggestions will be greatly appreciated! I am using bash from Cygwin on Windows.
Use eval $cmd, instead of plain $cmd
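For example, a sketch of that suggestion applied to the script in the question:
patch_file="/home/my dir/vtk.patch"
cmd="svn up \"$patch_file\""
eval $cmd   # eval re-parses the string, so the embedded quotes keep the path as one argument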
Did you try escaping the space?
As a rule UNIX shells don't like non-standard characters in file names or folder names. The normal way of handling this is to escape the offending character. Try:
patch_file="/home/my\ dir/vtk.patch"
Note the backslash.
