Weird behavior of calling docker run from a bash script

I'm trying to run docker run from a bash script, and docker says:
"is not a docker command"
If I print the docker command line before calling docker, copy it to the clipboard, and paste it into a terminal, it works fine!
Here is the command in the bash script:
local args="run ${nw_param} ${opts} --name ${img} ${repository}/${img}:${tag}"
docker ${args}
The current echo of the args string is:
run --net=ehvb-network -d --restart=always --name my-module my-private-registry:5000/my-module:0.0.1-1555334810
Again, when I copy this string to the clipboard and paste it into the command line, it works fine.
I use Debian stretch. My script uses bash (#!/bin/bash).
When I remove ${opts} it runs from the script. opts currently contains "-d --restart=always". When I use only -d or only --restart=always it works fine, but when I use both together it fails.
I also tried defining opts like this:
opts='--restart=always -d'
Then the message from docker is:
docker: Error response from daemon: invalid restart policy 'always -d', but the printed message contains:
opts: --restart=always -d
Something is stripping the --restart= prefix.

The problem was that I used variables coming from other commands in my script (like curl, ps, etc.). All of these variables ended with a carriage return (\r), so when I inserted them into the docker parameter string, the \r came along with them. I needed to append
| sed 's/\r//'
to all of these commands.
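A minimal sketch of the failure and the fix, assuming a hypothetical tag value captured from a command whose output ends in \r:

#!/bin/bash
# Simulate a variable captured from a command (curl, ps, ...) whose value ends in \r.
tag=$(printf '0.0.1-1555334810\r')
# Strip the carriage return before building the docker argument string;
# tr -d '\r' works just as well as sed 's/\r//' here.
tag=$(printf '%s' "$tag" | sed 's/\r//')
docker run -d --restart=always --name my-module "my-private-registry:5000/my-module:${tag}"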

Related

Crontab can't execute script directly in container environment

I present the following dilemma: when I execute my script manually it works fine, for example:
docker exec -ti backup_subversion sh -c "/tmp/my_script.sh"
But when I schedule the process, this line is simply skipped.
I have tried executing just a touch command, and it too is ignored.
I have tried executing as root: same problem.
I have tried executing in another docker environment: same problem.
My OS is CentOS 7.
For example, here is the part of the script that fails:
#!/bin/bash
# Create a container.
docker run -d --name=backup_subversion \
-v /subversion/dump:/var/dump \
--net my_network my_server.domaine.com/subversion/billy:1.9
# I copy a script.
docker cp tools_subversion_dump.sh backup_subversion:/tmp
# This line is ignored when executed from crontab.
docker exec -ti backup_subversion sh -c "/tmp/tools_subversion_dump.sh"
Thank you in advance for your answers because it's a mystery to me.
It's probably because you used the -it options, which only apply to an interactive shell rather than the non-interactive environment cron provides, as referenced in the question.
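A sketch of the cron-safe variant, reusing the container from the question: drop -t (and -i, which is not needed here either) so no TTY is requested:

# No TTY is allocated, so this also runs from crontab.
docker exec backup_subversion sh -c "/tmp/tools_subversion_dump.sh"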

Correctly passing arguments to docker entrypoint

I have a super dumb script
$ cat script.sh
cat <<EOT > entrypoint.sh
#!/bin/bash
echo "$#"
EOT
docker run -it --rm -v $(pwd)/entrypoint.sh:/root/entrypoint.sh --entrypoint /root/entrypoint.sh bash:4 Hello World
But when I run the script I get a strange error:
$ sh script.sh
standard_init_linux.go:207: exec user process caused "no such file or directory"
Why doesn't the script print Hello World?
standard_init_linux.go:207: exec user process caused "no such file or directory"
The above error means one of the following:
1. Your script actually doesn't exist. This isn't likely with your volume mount, but it doesn't hurt to run the container without the entrypoint: just open a shell with the same volume mount and list the file to be sure it's there (see the sketch after this list). It's possible for the volume mount to fail on desktop versions of docker where the directory isn't shared to the docker VM, and you end up with empty folders created inside the container instead of your file being mounted. When checking from inside another container, also make sure you have execute permissions on the script.
2. If it's a script, the first line pointing to the interpreter is invalid. Make sure that command exists inside the container. E.g. alpine containers typically do not ship with bash, and you need to use /bin/sh instead. This is the most common issue that I see.
3. If it's a script, similar to the above, make sure your first line has linux linefeeds. A windows linefeed adds an extra \r to the name of the command being run, which won't be found on the linux side.
4. If the command is a binary, it can refer to a missing library. I often see this with "statically" compiled go binaries that didn't have CGO disabled and have links to libc appear when importing networking libraries.
5. If you use json formatting to run your command, I often see this error with invalid json syntax. This doesn't apply to your use case, but may be helpful to others googling this issue.
This list is pulled from a talk I gave at last year's DockerCon: https://sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#59
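A quick sketch of the first two checks from the list above, assuming the image and mount from the question:

# 1. Bypass the entrypoint to confirm the file really is mounted.
docker run --rm -v $(pwd)/entrypoint.sh:/root/entrypoint.sh bash:4 ls -l /root/entrypoint.sh
# 2. Print the shebang line and verify that the interpreter exists in the image.
docker run --rm -v $(pwd)/entrypoint.sh:/root/entrypoint.sh bash:4 head -1 /root/entrypoint.sh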
First of all:
Request
docker run -it --rm bash:4 which bash
Output
/usr/local/bin/bash
So
#!/bin/bash
Should be changed to
#!/usr/local/bin/bash
And
docker run -it --rm -v $(pwd)/entrypoint.sh:/root/entrypoint.sh --entrypoint /root/entrypoint.sh bash:4 Hello World
Gives you
Hello World
Update
Code
cat <<EOT > entrypoint.sh
#!/bin/bash
echo "$#"
EOT
Should be fixed as below: the heredoc must escape \$# so the outer shell doesn't expand it while writing entrypoint.sh, and both shebangs locate bash via /usr/bin/env so they work wherever bash is installed:
#!/usr/bin/env bash
cat <<EOT > entrypoint.sh
#!/usr/bin/env bash
echo "\$#"
EOT
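With both fixes in place, running the script should produce the expected output (same invocation as in the question):

$ sh script.sh
Hello World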

Strange character docker exec

I'm trying to create a script to run a docker cluster.
At one point in my script I want to copy some files from the container to my local machine, so I create the CONTAINER_WORKDIR variable.
CONTAINER_WORKDIR=`docker exec -it jmeter-master /bin/pwd`
The value stored in CONTAINER_WORKDIR is:
/usr/local/apache-jmeter-3.2/bin
The problem is that there is a strange character at the end of this variable. Try executing the line below:
echo "docker cp jmeter-master:$CONTAINER_WORKDIR/output.csv ."
My expected result is
docker cp jmeter-master:/usr/local/apache-jmeter-3.2/bin/output.csv .
But the real output is:
/output.csv .ter-master:/usr/local/apache-jmeter-3.2/bin
The pwd or the docker exec command is returning a carriage-return character.
Is there a way to remove this character from the CONTAINER_WORKDIR variable?
The script that executes what I presume must be
CONTAINER_WORKDIR=$(docker exec -it jmeter-master /bin/pwd)
may have been written with an editor that stores text files with DOS line endings (like Notepad++).
Run dos2unix on that script, or use
$ tr -d '\r' <script >script-new
to fix it.
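If the stray \r comes from the docker exec output itself rather than from the script file, a sketch of an in-script fix consistent with the other answers here: strip the carriage return while capturing the variable, and drop -t, since an allocated TTY emits CRLF line endings:

# Capture pwd without a TTY and strip any carriage return.
CONTAINER_WORKDIR=$(docker exec jmeter-master /bin/pwd | tr -d '\r')
echo "docker cp jmeter-master:$CONTAINER_WORKDIR/output.csv ."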

Docker exec - Write text to file in container

I want to write a line of text to a text file INSIDE a running docker container. Here's what I've tried so far:
docker exec -d app_$i eval echo "server.url=$server_url" >> /home/app/.app/app.config
Response:
/home/user/.app/app.config: No such file or directory
Second try:
cfg_add="echo 'server.url=$server_url' >> /home/user/.app/app.config"
docker exec -i app_$i eval $cfg_add
Response:
exec: "eval": executable file not found in $PATH
Any ideas?
eval is a shell builtin, whereas docker exec requires an external utility to be called, so using eval is not an option.
Instead, invoke a shell executable in the container (bash) explicitly, and pass it the command to execute as a string, via its -c option:
docker exec "app_$i" bash -c "echo 'server.url=$server_url' >> /home/app/.app/app.config"
By using a double-quoted string to pass to bash -c, you ensure that the current shell performs string interpolation first, whereas the container's bash instance then sees the expanded result as a literal, as part of the embedded single-quoted string.
As for your symptoms:
/home/user/.app/app.config: No such file or directory was reported because the redirection you intended to happen in the container actually happened in your host's shell. Since directory /home/user/.app apparently doesn't exist in your host's filesystem, the command failed fundamentally, before your host's shell even attempted to execute the command (bash aborts command execution if an output redirection cannot be performed).
Thus, even though your first command also contained eval, its use didn't surface as a problem until your second command, which actually did get executed.
exec: "eval": executable file not found in $PATH happened, because, as stated, eval is not an external utility, but a shell builtin, and docker exec can only execute external utilities.
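As an aside, a sketch of an alternative that sidesteps the quoting subtleties: pass the value through the environment with docker exec -e, so that the container's bash performs the expansion (names reused from the question):

# Single quotes keep the host shell from expanding $SERVER_URL;
# the container's bash expands it instead.
docker exec -e SERVER_URL="$server_url" "app_$i" bash -c 'echo "server.url=$SERVER_URL" >> /home/app/.app/app.config'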
Additionally:
If you need to write text from outside the container, this also works:
(docker exec -i container sh -c "cat > c.sql") < c.sql
This will pipe your input into the container. Of course, this would also work for plain text (no file). It is important to leave off the -t parameter.
See https://github.com/docker/docker/pull/9537
UPDATE (in case you just need to copy files, not parts of files):
Docker v17.03 has docker cp which copies between the local fs and the container: https://docs.docker.com/engine/reference/commandline/cp/#usage
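For reference, a minimal docker cp sketch in both directions, reusing the container name from the example above:

docker cp c.sql container:/tmp/c.sql    # host -> container
docker cp container:/tmp/c.sql ./c.sql  # container -> host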
Try using a heredoc:
(docker exec -i container sh -c "cat > /test/iplist") << EOF
10.99.154.146
10.99.189.247
10.99.189.250
EOF

Running a script inside a docker container using shell script

I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh) which is already inside the container from the above shell script (e.g. cd /path/to/test.sh && ./test.sh).
How can I do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
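Applied to this question, that might look like the following (the path is assumed from the question's example):

docker exec mycontainer /bin/sh -c "cd /path/to && ./test.sh"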
I was searching for an answer to this same question and found that ENTRYPOINT in the Dockerfile was the solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This will also work (untested) for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me:
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your docker image and source the script in your .bashrc. Don't forget that the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice.)
I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (Got rid of reopening a shell; see the update notice.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue to open up a new shell for every change I now do the following:
In the Dockerfile itself I add RUN echo "source /scripts/bashrc" > /root/.bashrc. Inside that bashrc I export the scripts directory to the PATH. The scripts directory now contains multiple files instead of one, and I can call all of the scripts directly without having to open a sub shell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
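A sketch of the whole arrangement, with /scripts/bashrc as a hypothetical file name matching the update above:

# /scripts/bashrc -- sourced by /root/.bashrc in every new shell:
export PATH="$PATH:/scripts"
# On the host, start the container with the scripts directory mounted:
docker run -it -v $PWD/scripts:/scripts $my_docker_build /bin/bash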
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line, or to be used in a script), then you can use
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entry points too. You will be able to use multiple CMD instructions:
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances, you can do this:
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
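For illustration, with a hypothetical mylocal.sh that does nothing but echo "received: $1 $2 $3", bash -s reads the script from stdin and binds arg1 arg2 arg3 to the positional parameters:

docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
# prints: received: arg1 arg2 arg3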
