Can't Escape $ (Dollar Sign) in Environment Variable in Dockerfile - bash

I am trying to put a password in a command in a Dockerfile. The problem is that the password has a $ in it, so the variable gets evaluated, but since the $ ends up in the value, that part gets evaluated again afterwards.
The Dockerfile is called from a bash script, so to get the password into the file I did something like:
read -p "Input Username:" Username
read -s -p "Input Password:" Password
docker build --build-arg Username=$Username --build-arg Password=$Password...
And in the Dockerfile I have:
ARG Username
ARG Password
Then within the Dockerfile, I have tried this:
curl -u "$Username:$Password"
It gets expanded to something like
curl -u "Username:Pass12$34"
Which doesn't work, as it then tries to evaluate $34 as a variable. I tried using single quotes, but then the variable wasn't evaluated at all. Any help is appreciated.
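As a side note, a quick check outside Docker (a minimal sketch; Pass12$34 stands in for the real password) shows that a single round of shell expansion leaves the embedded $34 alone, which suggests the value is being interpolated twice somewhere between the build script and the RUN line:
#!/bin/bash
# single quotes keep the $ literal when assigning the stand-in value
Password='Pass12$34'
# one round of expansion: $Password becomes Pass12$34, and $34 is NOT expanded again
echo "got: $Password"   # prints: got: Pass12$34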

Related

How can I assign the output of a mongosh command to a bash variable

I want to assign the output of a mongo command (i.e. database names) to a bash array variable, but I am not sure how to go about it.
I am getting an error:
dump.sh
Starting-BACKUP
dump.sh
./dump.sh: line 16: declare: `–a': not a valid identifier
./dump.sh: line 24: mongo: command not found
Database backup was successful
when I attempt using dump.sh below:
#!/bin/bash
declare –a databases=()
databases=$(mongo --quiet --uri="mongodb://root:mypassword@mongodb-prom/admin" --authenticationDatabase admin --eval="show dbs;" | cut -d " " --field 1)
echo $databases
Ordinarily I am able to get a listing of the databases when I kubectl into the pod with the following steps:
$ kubectl exec -it mongodb-prom-xxxxxxxxx-dddx4 -- sh
$ mongo
> use admin
> db.auth('root','mypassword')
> show dbs
admin 0.000GB
platforms 0.000GB
users 0.000GB
I am not sure why the mongo command is not being recognized, because the same script is able to execute the mongodump command below:
mongodump --uri="<uri_here>" --authenticationDatabase admin --gzip --archive=/tmp/"<variable_here>".gz
UPDATE: This is the associated Dockerfile. My expectation is that both mongo and mongodump should work by default in a mongo container, but it seems only mongodump is working for now.
FROM mongo
WORKDIR /opt/backup/
WORKDIR /usr/src/configs
COPY dump.sh .
RUN chmod +x dump.sh
My two issues:
Is my syntax correct for the variable assignment (I suspect it's not)?
How should I properly declare the array variable to avoid the warnings?
NB: Mongo tooling is already installed in the container and actually works for mongodump.
You don't need declare -a. Simply putting the value inside () in the assignment will make it an array.
To get all the elements of an array, you have to use ${variable[@]}. $variable is equivalent to ${variable[0]} and just retrieves the first element.
databases=($(mongo --quiet --uri="mongodb://root:mypassword@mongodb-prom/admin" --authenticationDatabase admin --eval="show dbs;" | cut -d " " --field 1))
echo "${databases[@]}"
With bash v4+, mapfile is available, combined here with Process Substitution.
mapfile -t databases < <(mongo --quiet --uri="mongodb://root:mypassword@mongodb-prom/admin" --authenticationDatabase admin --eval="show dbs;" | cut -d " " --field 1)
Now check the value of the databases array
declare -p databases
Your code has
declare -a databases=()
but
databases=$(...)
is not an array; the $(...) construct is Command Substitution.
See also Bash Arrays
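As a self-contained illustration (a sketch using printf in place of the mongo pipeline, so the database names below are just made-up examples), compare the two assignments with declare -p:
#!/bin/bash
# scalar assignment: the whole output is captured as one string
databases=$(printf '%s\n' admin platforms users)
declare -p databases      # shows a single string with embedded newlines
# array assignment: wrapping the substitution in ( ) word-splits it into elements
databases=($(printf '%s\n' admin platforms users))
declare -p databases      # declare -a databases=([0]="admin" [1]="platforms" [2]="users")
echo "${databases[@]}"    # prints: admin platforms users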

How can I script a Docker command into a 'single word' binary? Using bash script?

When I install something like nmap (even from APT), I can't get it to execute correctly, so I like to go the container route. Instead of typing:
docker run --rm -it instrumentisto/nmap -A -T4 scanme.nmap.org
I figured maybe I could script it out, but nothing I've learned or found on Google, YouTube, etc. has helped so far... Can somebody lend a hand? I need to know how to get Bash to execute a command with args:
execute like:
./nmap.sh -A -T4 -Pn x.x.x.x
#!/bin/bash
echo docker run --rm -it instrumentisto/nmap $1 $2 $3 $4 $5
but how to get bash to run this instead of just echoing it, I don't know. Thanks ahead!
Two solutions: create an alias, create a script.
With an alias
The command you write is replaced with the value of the alias, so
alias nmap="docker run --rm -it instrumentisto/nmap"
nmap -A -T4 -Pn x.x.x.x
# executes docker run --rm -it instrumentisto/nmap -A -T4 -Pn x.x.x.x
Aliases are not persistent, so you will have to store the alias in some bash config (generally ~/.bashrc).
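For example, to make it stick (assuming your interactive shell reads ~/.bashrc):
# append the alias to your bash config so new shells pick it up
echo 'alias nmap="docker run --rm -it instrumentisto/nmap"' >> ~/.bashrc
# reload the config in the current shell
source ~/.bashrc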
With a script
#!/bin/bash
set -Eeuo pipefail
docker run --rm -it instrumentisto/nmap "$@"
"$#" will forward all the arguments provided to the script directly to the command. The quotes are important, if you call your script with quoted values like ./nmap "something with spaces", that's one argument, it needs to be kept as one argument.
Bonus: With a function
Just like with the script, you need to forward arguments when writing functions; and just like aliases, they are not persistent, so you have to store them in your bash config:
nmap() {
  docker run --rm -it instrumentisto/nmap "$@"
}

Sending list of filenames in current directory as a single string to a Docker container's STDIN

I have a Docker container and need to pass a single string (an array of strings would work too, but as far as I know this is not possible) containing the names of the files present in the current directory.
My first strategy to get the filenames inside the container was running the command docker build -t doc-validate . && docker run doc-validate (printf "%s " $(ls -p | grep -v /)), to send the output of (printf "%s " $(ls -p | grep -v /)) directly into the container's STDIN. However, I get zsh: bad pattern: (printf %s Dockerfile returned as an error; it seems to me that the container or the shell is trying to get 'Dockerfile' executed somewhere, and I don't know why this happens, as running this same printf command directly in the terminal works as expected (printing only file names in the current directory).
My second approach was trying to send these filenames as an environment variable to the container. Running PROJECT_FILES=(printf "%s " $(ls -p | grep -v /)) in the terminal works as expected: $PROJECT_FILES outputs these filenames. However, if I try to pass it directly into the container like this: docker build -t doc-validate . && docker run --env PROJECT_FILES=(printf "%s " $(ls -p | grep -v /)) doc-validate I get the same zsh: bad pattern: PROJECT_FILES=(printf %s Dockerfile as an error.
It seems to me that the commands are failing before even entering the container, since 'zsh:' appears in the error output; however, I don't see why running them standalone in the terminal works and why passing them as parameters to the container is being such a headache.
How can I get the output of running (printf "%s " $(ls -p | grep -v /)) on my local machine as a value accessible inside the container?
Dockerfile:
FROM node:16
WORKDIR /moduleRoot
COPY package.json /moduleRoot
RUN npm install
COPY . /moduleRoot/
# Authorize execution of entrypoint bash script
RUN chmod +x /moduleRoot/docker-container-entrypoint.sh
# these values below will be set later, just register them explicitly here
ENV PROJECT_FILES=""
CMD [ "/moduleRoot/docker-container-entrypoint.sh" ]
Bash entrypoint (docker-container-entrypoint.sh file in CMD):
#!/bin/bash
echo "Args are: $#"
echo "PROJECT_FILES=$PROJECT_FILES"
The first thing is that you are using the node:16 image, and according to its entrypoint it will call node to execute anything you pass as an argument.
In your Dockerfile you are setting a CMD and not changing the ENTRYPOINT, so when you execute the container it will try to execute the bash script using node, and you must be getting an error like this:
/moduleRoot/docker-container-entrypoint.sh:2
echo "Args are: $#"
^^^^^^^^^^^^^^
SyntaxError: Unexpected string
at Object.compileFunction (node:vm:353:18)
at wrapSafe (node:internal/modules/cjs/loader:1039:15)
at Module._compile (node:internal/modules/cjs/loader:1073:27)
at Object.Module._extensions..js (node:internal/modules/cjs/loader:1138:10)
at Module.load (node:internal/modules/cjs/loader:989:32)
at Function.Module._load (node:internal/modules/cjs/loader:829:14)
at Function.executeUserEntryPoint [as runMain] (node:internal/modules/run_main:76:12)
at node:internal/main/run_main_module:17:47
If your bash script is there only to help you troubleshoot, then you can use an inline here-string to pass the arguments you need:
» docker run -t --rm doc-validate <<< echo $(ls -a1)
Args are: docker-container-entrypoint.sh Dockerfile package.json
PROJECT_FILES=
On the other hand, if you need the bash script then you'll have to replace the CMD with an ENTRYPOINT. Another thing worth noting is that the ls in some minimal images may not support the -p flag.
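For reference, that swap might look roughly like this (a sketch based on the question's Dockerfile, not a tested drop-in):
FROM node:16
WORKDIR /moduleRoot
COPY package.json /moduleRoot
RUN npm install
COPY . /moduleRoot/
RUN chmod +x /moduleRoot/docker-container-entrypoint.sh
ENV PROJECT_FILES=""
# ENTRYPOINT (exec form) overrides the base image's entrypoint, so the bash
# script itself receives any arguments appended to docker run
ENTRYPOINT [ "/moduleRoot/docker-container-entrypoint.sh" ]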
Maybe this question and its answers will help if you are using a CI pipeline with Jenkins: How to mount docker volume with jenkins docker container?

Save output of bash command from Dockerfile after Docker container was launched

I have a Dockerfile with the ubuntu image as a base.
FROM ubuntu
ARG var_name
ENV env_var_name=$var_name
ENTRYPOINT ["/bin/bash", "-c", "echo $env_var_name"]
I expect from this:
executing a simple bash script, which takes an environment variable from the user's keyboard input and outputs this value after running the docker container. This part works.
(the part where I have a problem) saving the values of the environment variables to a file, so that after every run of docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME I can see a list of the values entered from the keyboard.
My idea for part 2 was something like
docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME > /directory/tosave/values.txt. That works, but only the last value is saved, not a list of values.
How can I change the Dockerfile to save the values to a file which Docker will see, and from which it will read and output the values after running? Maybe I shouldn't use ENTRYPOINT?
I appreciate any possible help. I'm stuck.
To emphasize: both outputting and saving the environment variables is expected.
Like @lojza hinted at, > overwrites files whereas >> appends to them, which is why your command is clobbering the file instead of adding to it. So you could fix it with this:
docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME >> /directory/tosave/values.txt
Or using tee(1):
docker run --rm -e env_var_name=%valueOfVar% IMAGE-NAME | tee -a /directory/tosave/values.txt
To clarify though, the docker container is not writing to values.txt, your shell is what is redirecting the output of the docker run command to the file. If you want the file to be written to by docker you should mount a file or directory into it the container using -v and redirect the output of the echo there. Here's an example:
FROM ubuntu
ARG var_name
ENV env_var_name=$var_name
ENTRYPOINT ["/bin/bash", "-c", "echo $env_var_name | tee -a /data/values.txt"]
And then run it like so:
$ docker run --rm -e env_var_name=test1 -v "$(pwd):/data:rw" IMAGE-NAME
test1
$ docker run --rm -e env_var_name=test2 -v "$(pwd):/data:rw" IMAGE-NAME
test2
$ ls -l values.txt
-rw-r--r-- 1 root root 12 May 3 15:11 values.txt
$ cat values.txt
test1
test2
One more thing worth mentioning: echo $env_var_name prints the value of the environment variable whose name is literally env_var_name. For example, if you run the container with -e env_var_name=PATH it would print the literal string PATH and not the value of your $PATH environment variable. This does seem to be the desired outcome, but I thought it was worth explicitly spelling out.
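For instance, reusing the example image above (IMAGE-NAME is still the question's placeholder):
$ docker run --rm -e env_var_name=PATH -v "$(pwd):/data:rw" IMAGE-NAME
PATH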

Prompting for MySQLDump password in a bash script?

I'm trying to write a bash script that runs a mysqldump command that uses the -p flag. This flag prompts the user for a password, which works as expected when the command is run in the shell directly, but the prompt does not appear when it is run from a script.
#!/usr/bin/env
ssh user@domain.com 'mysqldump -u mysqluser -p --databases foo | bzip2' > ~/temp/foo-dump.sql.bz2
Now I could embed the password in the script or pass it as an argument, but I really want the script to prompt the user for the password so the password doesn't show up in my scripts repo or in my bash history.
Anyone have any idea on how to accomplish this?
This should do the trick:
read -p "mysql password: " PASS && ssh user#domain.com 'mysqldump -u mysqluser -p'$PASS' --databases foo | bzip2' > foo-dump.sql.bz2 ; PASS=""
In this case, you will first enter the mysql password, and then be prompted for the ssh password. Note that the mysql password will not be hidden, i.e., someone can read it over your shoulder. If you want to avoid that, use the flag -s
read -s -p "mysql password: " PASS && ...
Note also that there mustn't be any space between the "p" (in -p for password) and the quotation mark for the password variable.
Also, your shebang is not specifying which interpreter to use, which might be a problem. I'd suggest you use #!/bin/bash or #!/usr/bin/env bash.
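Putting those pieces together, the whole script might look something like this (a sketch reusing the question's placeholder host, user, and database names):
#!/usr/bin/env bash
# -s hides the typed password; it never ends up in the repo or in bash history
read -s -p "mysql password: " PASS && echo
ssh user@domain.com 'mysqldump -u mysqluser -p'"$PASS"' --databases foo | bzip2' > ~/temp/foo-dump.sql.bz2
PASS=""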
