Command in docker run and inside docker not behaving similarly

I have a script that loads a database into RAM and prints the address of the first data item into a file (db_REGISTER), and I want to run it inside a Docker container. The script works fine when it is launched from a bash shell after starting the container with -it:
$ docker run -it --env-file $FILE -v $wkp:/app dev_machine
$$ /app/scripts/loadBase.sh
db_REGISTER
<some random number>
However, when I launch the same script with docker run directly, the script runs but the address printed is always 0, and I cannot use the database afterwards.
$ docker run -it --env-file $FILE -v $wkp:/app dev_machine /app/scripts/loadBase.sh
db_REGISTER
0
Does that mean that the second command does not have access to a persistent address in RAM? What should I do to correct that?
EDIT: After some advice, I tried tweaking the --ipc setting. Using --ipc="host" made it work, so I guess this was a shared-memory problem.
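For reference, a sketch of the invocation that ended up working, i.e. the failing command from above with the --ipc flag added. --ipc="host" makes the container share the host's IPC namespace (System V shared memory, semaphores and message queues), which is why the shared-memory address becomes usable:
$ docker run -it --ipc="host" --env-file $FILE -v $wkp:/app dev_machine /app/scripts/loadBase.sh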

You could try mounting your scripts directory inside the Docker container and setting it as the working directory before executing the command:
docker run -it --env-file $FILE -v $PWD/scripts:/t -w /t dev_machine loadBase.sh

Related

Windows docker, mount a volume and run a script in it

I'm working with the servercore Windows Docker image for CI purposes (i.e. building).
I'm fine building the image, running it with the repo mounted as a volume, keeping it listening on a specific port, and then launching exec "mountedFolder/script.bat":
docker build . -t build_image
docker run -d -t -p 8585:9090 -v $PATH:c:/repo build_image > id.txt
set /p id=<id.txt
docker exec %id% "build.bat"
But is there any way to launch the script (cd-ing into its folder before launching it) in the run command itself? Something like:
docker run --rm -v $PATH:c:/repo build_image /PATH/to/BAT/build.bat
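One hedged sketch of how that could look, assuming cmd.exe is available in the image and that build.bat lives under the mounted repo (the paths below are placeholders), is to combine the volume mount with -w to set the working directory, similar to the -w usage shown earlier:
docker run --rm -v $PATH:c:/repo -w c:/repo/PATH/to/BAT build_image cmd /c build.bat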

How to use bash commands alongside Docker restart policies?

In a ROS project, I have the following bash script that I use to run a docker container:
#!/bin/bash
source ~/catkin_ws/devel/setup.bash
rosnode kill some_ros_node
roslaunch supporting_ros_package launch_file.launch &
docker run -it \
--restart=always \
--privileged \
--net=host \
my_image:latest \
/bin/bash -c \
"
roslaunch my_package my_launch_file.launch
"
export containerId=$(docker ps -l -q)
However, what I'd like is for the bash commands preceding the docker run command to also be re-run on the host machine (not within the container) every time the container restarts, especially when the machine boots up.
How might I achieve this?
There are a few ways I can think of doing this:
Add this script to a system service (see this answer regarding adding a system service); a minimal sketch follows below.
Add this script into another container that is also set to restart always, but mount the Docker socket into this other container (like this).
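For the first option, here is a minimal sketch of a systemd unit, assuming the script above is saved as /usr/local/bin/run_ros_container.sh (the path and unit name are hypothetical). Note that the -it flags in the script would likely have to be dropped when it runs non-interactively under systemd:
# /etc/systemd/system/ros-container.service (hypothetical)
[Unit]
Description=Run host-side ROS commands and the container
After=docker.service
Requires=docker.service

[Service]
Type=simple
ExecStart=/usr/local/bin/run_ros_container.sh
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target

Enable it with:
sudo systemctl enable --now ros-container.service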

Piping docker run container ID to docker exec

In my development, I find myself issuing a docker run command followed by a docker exec command on the resulting container ID quite frequently. It's a little annoying to have to copy/paste the container ID between commands, so I was trying to pipe the container ID into my docker exec command.
Here's my example command.
docker run -itd image | xargs -i docker exec -it {} bash
This starts the container, but then I get the following error.
the input device is not a TTY
Does anyone have any idea how to get around this?
Edit: I also forgot to mention I have an ENTRYPOINT defined and cannot override that.
Do this instead:
ID=$(docker run -itd image) && docker exec -it $ID bash
Because xargs executes its arguments without allocating a new tty.
If you just want to "bash" into the container, you do not have to pass the container ID around. You can simply run:
docker run -it --rm <image> /bin/bash
For example, if we take the ubuntu base image
docker run -it --rm ubuntu /bin/bash
root@f80f83eec0d4:/#
from the documentation
-t : Allocate a pseudo-tty
-i : Keep STDIN open even if not attached
--rm : Automatically remove the container when it exits
The command /bin/bash overwrites the default command that is specified with the CMD instruction in the Dockerfile.
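As a small illustration of that last point (the demo image name and Dockerfile are hypothetical):
# Dockerfile
FROM ubuntu
CMD ["echo", "hello from CMD"]

$ docker build -t demo .
$ docker run --rm demo
hello from CMD
$ docker run --rm -it demo /bin/bash
root@<container-id>:/#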

Dockerfile: how to get a bash command line after start?

This question is not a duplicate, because I want to obtain an interactive shell without running with the -it flags.
I'm taking my first steps with Docker to create images for internal use only.
I start from this environment_full.df:
FROM ubuntu:16.04
ENTRYPOINT ["/bin/bash"]
I then build
docker rmi environment:full
docker build -t environment:full -f environment.df .
Then run
docker run environment:full
Running docker images -a, I see my image:
REPOSITORY TAG IMAGE ID CREATED SIZE
environment full aa91bbd39167 4 seconds ago 129 MB
So I run it
docker run environment:full
I see nothing happening ....
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5847c0a18f30 environment:full "/bin/bash" 21 seconds ago Exited (0) 20 seconds ago admiring_mirzakhani
Also
$ docker run environment:full -ti
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
root@aa768a585f33:/# exit
I'd like to have the Ubuntu prompt, as if I were in an SSH connection, and without the user having to enter the -i or -t flags.
How can I achieve this?
bash won't run at all if stdin is closed. If you don't provide the -i flag, bash will simply exit immediately. So when you...
docker run environment:full
...bash exits immediately, and so your container exits. You would see it if you ran docker ps -a, which shows containers that have stopped.
bash won't give you an interactive prompt if it's not attached to a tty. So if you were to run...
docker run -i environment:full
...you would get a bash shell, but with no prompt, or job control, or other features. You need to provide -t for Docker to allocate a tty device.
You can't get what you want without providing both the -i and -t options on the command line.
An alternative would be to set up an image that runs an ssh daemon, and have people ssh into the container. Instead of behaving "as if I was in an SSH connection", it would actually be an SSH session.
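If you go down that route, a rough sketch of such an image could look like the following (the package setup and root password are placeholders; the details depend on your distribution and security requirements):
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y openssh-server && mkdir -p /var/run/sshd
RUN echo 'root:changeme' | chpasswd && \
    sed -i 's/#\?PermitRootLogin .*/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]

Build and connect:
$ docker build -t environment:ssh .
$ docker run -d -p 2222:22 environment:ssh
$ ssh root@localhost -p 2222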
Also, note that this:
docker run environment:full -ti
Is not the same as this:
docker run -it environment:full
The former will run bash -ti inside a container, while the latter passes the -i and -t options to docker run.

How can I run a docker container and commit the changes once a script completes?

I want to set up a cron job to run a set of commands inside a docker container and then commit the changes to the docker image. I'm able to run the container as a daemon and get the container ID using this command:
CONTAINER_ID=$(sudo docker run -d my-image /bin/sh -c "sleep 10")
but I'm having trouble with the second part--committing the changes to the image once the sleep 10 command completes. Is there a way for me to tell when the docker container is about to be killed and run another command before it is?
EDIT: As an alternative, is there a way to trigger ctrl-p-q via a shell script in the container to leave the container running but return to the host?
There are the following ways to persist container data:
Docker volumes
Docker commit
a) Create a container from the ubuntu image and run a bash terminal:
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal, install curl:
# apt-get update
# apt-get install curl
c) Exit the container terminal
# exit
d) Take note of your container ID by executing the following command:
$ docker ps -a
e) Save the container as a new image:
$ docker commit <container_id> new_image_name:tag_name(optional)
f) Verify that you can see your new image and that curl is installed:
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
Run it in the foreground, not as a daemon. When it ends, the script that launched it takes back control and can commit/push the result.
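A minimal sketch of that approach, reusing my-image from the question (the container name my-job is just a placeholder):
#!/bin/bash
# Runs in the foreground, so docker run returns only after "sleep 10" finishes
docker run --name my-job my-image /bin/sh -c "sleep 10"
# The container has exited at this point, so its filesystem state can be committed
docker commit my-job my-image:latest
# Clean up the stopped container
docker rm my-job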
I didn't find any of these answers satisfying, as my goal was to 1) launch a container, 2) run a setup script, and 3) capture/store the state after setup, so I can instantly run various scripts against that state later. And all in a local, automated, continuous integration environment (e.g. scripted and non-interactive).
Here's what I came up with (I run this in the Travis CI install section) for setting up my test environment:
#!/bin/bash
# Run a docker with the env boot script
docker run ubuntu:14.04 /path/to/env_setup_script.sh
# Get the container ID of the last run docker (above)
export CONTAINER_ID=`docker ps -lq`
# Commit the container state (returns an image_id with sha256: prefix cut off)
# and write the IMAGE_ID to disk at ~/.docker_image_id
(docker commit $CONTAINER_ID | cut -c8-) > ~/.docker_image_id
Note that my base image was ubuntu:14.04 but yours could be any image you want.
With that setup, I can now run any number of scripts (e.g. unit tests) against this snapshot (for Travis, these go in my script section), e.g.:
docker run `cat ~/.docker_image_id` /path/to/unit_test_1.sh
docker run `cat ~/.docker_image_id` /path/to/unit_test_2.sh
Try this if you want to auto-commit all running containers. Put it in a cron job or something similar, if that helps:
#!/bin/bash
for i in $(docker ps -q); do docker commit -m "commit new change" "$i"; done
