Running a script inside a docker container using shell script - bash

I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh) that is already inside the container from the above shell script (e.g. cd /path/to && ./test.sh).
How to do that?

You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
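Putting both steps together, a minimal sketch of the host-side setup script could look like this (names are taken from the question; -d -t keeps bash running in the background so the host script can continue):
#!/bin/bash
# Start the container detached; -t keeps the bash process alive.
docker run -d -t -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
# Run the script that already exists inside the container.
docker exec mycontainer /bin/bash -c "cd /path/to && ./test.sh"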

Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
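Applied to the question, that could look like (the path is the placeholder from the question):
docker exec mycontainer /bin/sh -c "cd /path/to && ./test.sh"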

I was searching for an answer to this same question and found that an ENTRYPOINT in the Dockerfile solved it for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
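For context, a minimal hypothetical Dockerfile using this pattern might look like the following (the base image and script names are illustrative):
FROM ubuntu:bionic
COPY my-script.sh my-script2.sh /
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash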

In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This will (didn't test) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
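If the script only exists on the host, a sketch of the same idea with a bind mount (paths are placeholders, untested):
$ docker run --rm -v "$PWD/script.sh:/tmp/script.sh:ro" ubuntu:bionic /bin/bash /tmp/script.sh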

This command worked for me
cat local_file.sh | docker exec -i container_name bash

You could also mount a local directory into your Docker image and source the script in your .bashrc. Don't forget the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice.)
I'm using this solution to be able to update the script outside of the Docker instance. This way I don't have to rerun the image when changes occur, I just open a new shell. (Got rid of reopening a shell; see the update notice.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
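As noted above, anything picked up by sourcing has to be defined as functions; a tiny illustrative example of such a mounted file (the name and contents are hypothetical):
# $PWD/helpers.sh on the host, visible as /scripts/helpers.sh in the container
backup_db() {
    echo "running backup..."   # placeholder for the real commands
}
Inside the container you can then source /scripts/helpers.sh (for example from .bashrc) and call backup_db directly.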
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To avoid having to open a new shell for every change, I now do the following:
In the Dockerfile itself I add RUN echo "/scripts/bashrc" > /root/.bashrc. Inside the zshrc I export the scripts directory to the path. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a subshell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.

Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or to be used in a script), then you can use
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'

Have a look at entrypoints too; you will be able to use multiple CMD commands:
https://docs.docker.com/engine/reference/builder/#/entrypoint

If you want to run the same command on multiple instances you can do this :
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
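A variation of the same idea, as a sketch: loop over whatever is currently running instead of a hard-coded list.
for id in $(docker ps -q); do
    docker exec "$id" /bin/bash -c "service sshd start"
done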

This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
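On the receiving side, mylocal.sh sees those values as ordinary positional parameters; a trivial illustrative script:
#!/bin/bash
# mylocal.sh -- piped into `bash -s arg1 arg2 arg3` above
echo "got: $1 $2 $3"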

Related

Crontab can't execute script directly in container environment

I present to you the following dilemma: I execute my script manually and it works well, for example:
docker exec -ti backup_subversion sh -c "/tmp/my_script.sh"
But when I attempt to schedule the process, this line is just skipped.
I have tried to execute just a touch command and it too is ignored.
I have tried to execute it as root: same problem.
I have tried to execute it in another Docker environment: same problem.
My OS is CentOS 7.
Here, for example, is the part of the script that fails:
#!/bin/bash
# Create a container.
docker run -d --name=backup_subversion \
-v /subversion/dump:/var/dump \
--net my_network my_server.domaine.com/subversion/billy:1.9
# I copy a script.
docker cp tools_subversion_dump.sh backup_subversion:/tmp
# This line is ignore since crontab exec.
docker exec -ti backup_subversion sh -c "/tmp/tools_subversion_dump.sh"
Thank you in advance for your answers because it's a mystery to me.
It's probably because you used the -it options, which only apply to an interactive shell rather than the pseudo one used in scripts, as referenced in the question.
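Following that reasoning, a sketch of a cron-friendly version of the failing line is simply the same command without the TTY flags:
# no terminal is attached under cron, so drop -t (and -i is not needed here either)
docker exec backup_subversion sh -c "/tmp/tools_subversion_dump.sh"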

Entering text into a docker container via ssh from bash file

What I am trying to do is set up a local development database, and to save everyone from having to go through all the steps I thought it would be useful to create a script.
What I have below stops once it is in the terminal, which looks like:
output
./dbSetup.sh
hash of container 0d1b182aa6f1
/ #
At which point I have to manually enter exit.
script
#!/bin/bash
command=$(docker ps | grep personal)
set $command
echo "hash of container ${1}"
docker exec -it ${1} sh
Is there a way I can inject a command via a script into a dockers container terminal?
In order to execute commands inside a container, you can use something like this:
docker exec -ti my_container sh -c "echo a && echo b"
More information available at: https://docs.docker.com/engine/reference/commandline/exec/
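Applied to the script in the question, a sketch of dbSetup.sh that runs its setup non-interactively instead of leaving you at a prompt (the echo commands are placeholders for the real database setup):
#!/bin/bash
command=$(docker ps | grep personal)
set $command
echo "hash of container ${1}"
# run the setup commands and return, instead of opening an interactive shell
docker exec ${1} sh -c "echo a && echo b"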
Your script finds a running Docker container and opens a shell to it. The "-it" makes it interactive and allocates a tty, which is why it continues to wait for input, e.g. "exit". If the plan is to execute some commands to initialize a local development database, I'd recommend looking at building an image with a Dockerfile instead, i.e. once you figure out the commands to run, they would become RUN commands, and the container started by docker run would expose a local development database.
If you really want some commands to run within the shell after it is started and to maintain the session, then depending on the base image you might be able to mount a bash profile that has the required commands, e.g. -v db_profile:/etc/profile.d where db_profile is a folder with the shell scripts you want to run. To get them to run you'd exec sh -l so that the login startup scripts are sourced.
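A sketch of that approach, with placeholder names:
# db_profile/ on the host holds the *.sh init scripts
docker run -d --name local_db -v "$PWD/db_profile:/etc/profile.d" some_db_image
# a login shell (-l) sources /etc/profile.d/*.sh on startup, depending on the base image
docker exec -it local_db sh -l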

How to execute the entrypoint of a Docker image at each "exec" command?

After trying to test Dockerfiles with Dockerspec, I finally had an issue I can't resolve properly.
The problem comes, I think, from Docker itself; if I understand its process, an entrypoint is only executed at run time, but if the container stays started and I launch an "exec" command in it, the entrypoint is not called again.
I think this is the intended behavior.
But if the entrypoint is a "gosu" script which precedes all my commands, it's a problem...
Example
"myImage" has this Entrypoint :
gosu 1000:1000 "$#"
If I launch : docker run -it myImage id -u
The output is "1000".
If I start a container : docker run -it myImage bash
In this container, id -u outputs "1000".
But if I start a new command in this container, it starts a new shell, and does not execute the Entrypoint, so : docker exec CONTAINER_ID id -u
Output "0", because the new shell is started as "root".
It there a way to execute each time the entrypoint ?
Or re-use the shell open ?
Or a better way to do that ?
Or, maybe I haven't understand anything ? ;)
Thanks !
EDIT
After reading the solutions proposed here, I understand that the problem is not how Docker works but how Serverspec works with it; my goal is to directly test a command as a docker run argument, but Serverspec starts a container and tests commands with docker exec.
So the best solution is to find out how to get the stdout of the docker run executed by Serverspec.
But for my personal use case, the best solution is maybe to not use gosu but the --user flag :)
If your goal is to run the docker exec with a specific user inside the container, you can use the --user option.
docker exec --user myuser container-name [... your command here]
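For the check in the question, a quick sketch (the container name is a placeholder):
# run the exec'd command as uid/gid 1000, mirroring what the gosu entrypoint does on docker run
docker exec --user 1000:1000 CONTAINER_ID id -u   # prints 1000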
If you want to run gosu every time, you can specify that as the command with docker exec:
docker exec container-name gosu 1000:1000 [your actual command here]
In my experience, the best way to encapsulate this into something easily reusable is with a .sh script (or a .cmd file on Windows).
Drop this into a file in your local folder... maybe gs for example.
#! /bin/sh
docker exec container-name gosu 1000:1000 "$@"
Give it execute permissions with chmod +x gs and then run it with ./gs from the local folder.
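Usage is then the same as a plain docker exec, just shorter, for example:
./gs id -u        # runs id -u inside the container as uid 1000
./gs ls -l /tmp   # any other command works the same way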

use docker exec in bash script

I have a bash script that is supposed to use "docker exec" to run other bash scripts that are installed in different Docker containers. Although each command works correctly when started manually, the script stops after the execution of the first docker exec command.
Example:
#!/bin/bash
...
docker exec -it mysql_container /scripts/import_database.sh ## Script stops here...
docker exec -it web_container /scripts/copy_doc_root.sh
...
What am I missing? ;)
Thanks for your help!
David
Use docker exec -d since you neither want a terminal nor an interactive session.
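Following that suggestion, a sketch of the adjusted script; note that -d detaches, so both scripts run in the background rather than one after the other:
#!/bin/bash
docker exec -d mysql_container /scripts/import_database.sh
docker exec -d web_container /scripts/copy_doc_root.sh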

Automatically enter only running docker container

In the cloud, I have multiple instances, each running a container with a different random name, e.g.:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5dc97950d924 aws_beanstalk/my-app:latest "/bin/sh -c 'python 3 hours ago Up 3 hours 80/tcp, 5000/tcp, 8080/tcp jolly_galileo
To enter them, I type:
sudo docker exec -it jolly_galileo /bin/bash
Is there a command or can you write a bash script to automatically execute the exec to enter the correct container?
"the correct container"?
To determine what is the "correct" container, your bash script would still need either the id or the name of that container.
For example, I have a function in my .bashrc:
deb() { docker exec -u git -it $1 bash; }
That way, I would type:
deb jolly_galileo
(it uses the account git, but you don't have to)
Here's my final solution. It edits the instance's .bashrc if it hasn't been edited yet, prints out docker ps, defines the dock function, and enters the container. A user can then type "exit" if they want to access the raw instances, and "exit" again to quit ssh.
commands:
  bashrc:
    command: if ! grep -Fxq "sudo docker ps" /home/ec2-user/.bashrc; then echo -e "dock() { sudo docker exec -it $(sudo docker ps -lq) bash; } \nsudo docker ps\ndock" >> /home/ec2-user/.bashrc; fi
As VonC indicated, you usually have to write some shell scripting of your own if you find yourself doing something repetitive. I made a tool myself here which works if you have Bash 4+.
Install
wget -qO- https://raw.githubusercontent.com/Pithikos/dockerint/master/docker_autoenter >> ~/.bashrc
Then you can enter a container by simply typing the first letters of the container.
$> docker ps
CONTAINER ID IMAGE ..
807b1e7eab7e ubuntu ..
18e953015fa9 ubuntu ..
19bd96389d54 ubuntu ..
$> 18
root@18e953015fa9:/#
This works by taking advantage of the function command_not_found_handle introduced in Bash 4. If a command is not found, the script will try and see if what you typed is a container and if it is, it will run docker exec <container> bash.
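For illustration, a minimal sketch of that mechanism (not the actual code of the linked tool): resolve the typed prefix to a running container ID and exec into it.
command_not_found_handle() {
    local id
    id=$(docker ps -q | grep "^$1" | head -n1)   # does any running container ID start with what was typed?
    if [ -n "$id" ]; then
        docker exec -it "$id" bash
    else
        echo "bash: $1: command not found" >&2
        return 127
    fi
}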
