Windows docker, mount a volume and run a script in it - windows

I'm working with the servercore Windows docker image for CI purposes (i.e. building).
I'm fine with building the image, running it with the repo mounted as a volume, keeping it listening on a specific port, and then launching "mountedFolder/script.bat" via docker exec:
docker build . -t build_image
docker run -d -t -p 8585:9090 -v $PATH:c:/repo build_image > id.txt
set /p id=<id.txt
docker exec %id% "build.bat"
But is there any way to launch the script (cd-ing into its folder first) from the run command itself? Something like
docker run --rm -v $PATH:c:/repo build_image /PATH/to/BAT/build.bat
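One way that should work, keeping the placeholders from above, is a minimal sketch like the following: the -w flag sets the working directory inside the container (the cd you ask about), and cmd /c makes sure the batch file is run through the Windows shell:
docker run --rm -v $PATH:c:/repo -w c:/repo/PATH/to/BAT build_image cmd /c build.bat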

Related

How to use bash commands alongside Docker restart policies?

In a ROS project, I have the following bash script that I use to run a docker container:
#!/bin/bash
source ~/catkin_ws/devel/setup.bash
rosnode kill some_ros_node
roslaunch supporting_ros_package launch_file.launch &
docker run -it \
--restart=always \
--privileged \
--net=host \
my_image:latest \
/bin/bash -c \
"
roslaunch my_package my_launch_file.launch
"
export containerId=$(docker ps -l -q)
However, what I'd like is for the bash commands preceding the docker run command to also re-run on the host machine (not within the container) every time the container restarts (especially when the machine boots up).
How might I achieve this?
There are a few ways I can think of doing this:
Add this script to a system service (see this answer regarding adding a system service); a sketch of this approach follows this list.
Add this script into another container that is also set to restart always, but mount the docker socket into this other container (see this answer).
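A minimal sketch of the system-service route, assuming the script above is saved at a hypothetical path /usr/local/bin/run_ros_container.sh, is executable, uses absolute paths (when run as a service, ~ no longer points at your user's home), and drops the -it flags from docker run (there is no TTY when the script runs as a service):
sudo tee /etc/systemd/system/ros-container.service <<'EOF'
[Unit]
Description=Host-side ROS setup and container start
Requires=docker.service
After=docker.service network-online.target

[Service]
# hypothetical path to the script shown above
ExecStart=/usr/local/bin/run_ros_container.sh

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now ros-container.service
Note that this only re-runs the host-side commands at boot; if Docker's restart policy restarts the container while the machine stays up, the second (docker-socket) approach is the one that covers it.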

Jenkins console does not show the output of a command run on a docker container

I'm running the below command to execute my tests on a docker container:
sudo docker exec -i 6d49272f772c bash -c "mvn clean install test"
The above command runs on Jenkins in an Execute shell step, but the Jenkins console does not show the logs for the test execution.
I had a similar problem with docker start (which is similar to docker exec). I used the -i option and it would work fine outside Jenkins, but the console in Jenkins didn't show any output from this command. I replaced -i with -a similar to the following:
sudo docker container create -it --name container-name some-docker-image some-command
sudo docker container start -a container-name
sudo docker container rm -f container-name
docker exec doesn't have a -a option, so possibly removing the -i option would work too (since you are not interacting with the container in Jenkins). If that doesn't work, you can convert to the commands above and achieve similar results, with standard out being captured.
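For reference, the question's command with the -i flag dropped would simply be:
sudo docker exec 6d49272f772c bash -c "mvn clean install test"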

command in docker run and inside docker not behaving similarly

I have a script that loads a database into RAM and prints the address of the first data item into a file (db_REGISTER), and I want to run it inside a docker container. This script works fine when it is launched inside a bash session after starting the container with -it:
$ docker run -it --env-file $FILE -v $wkp:/app dev_machine
$$ /app/scripts/loadBase.sh
db_REGISTER
<some random number>
However, when I launch the same script with docker run directly, the script works but the address printed is always 0, and I cannot use the database afterwards.
$ docker run -it --env-file $FILE -v $wkp:/app dev_machine /app/scripts/loadBase.sh
db_REGISTER
0
Does that mean that the second command does not have access to a persistent address in RAM? What should I do to correct that?
EDIT: After some advice, I tried tweaking the --ipc setting. Using --ipc="host" made it work. I guess this was a problem of shared memory.
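For the record, the working invocation from that EDIT would look like this (same placeholders as above):
docker run -it --ipc="host" --env-file $FILE -v $wkp:/app dev_machine /app/scripts/loadBase.sh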
You could try to mount your file inside the Docker container before executing the command:
docker run -it --env-file $FILE -v $PWD/scripts:/t -w /t dev_machine loadBase.sh

How can I run a docker container and commit the changes once a script completes?

I want to set up a cron job to run a set of commands inside a docker container and then commit the changes to the docker image. I'm able to run the container as a daemon and get the container ID using this command:
CONTAINER_ID=$(sudo docker run -d my-image /bin/sh -c "sleep 10")
but I'm having trouble with the second part: committing the changes to the image once the sleep 10 command completes. Is there a way for me to tell when the docker container is about to be killed and run another command before it is?
EDIT: As an alternative, is there a way to trigger ctrl-p-q via a shell script in the container to leave the container running but return to the host?
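As a minimal sketch (not one of the answers below), the standard docker wait command blocks until a container exits, which covers the "run another command once it finishes" part:
CONTAINER_ID=$(sudo docker run -d my-image /bin/sh -c "sleep 10")
sudo docker wait "$CONTAINER_ID"              # blocks until the container stops
sudo docker commit "$CONTAINER_ID" my-image   # then snapshot its filesystem as the image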
There are the following ways to persist container data:
Docker volumes
Docker commit
a) create container from ubuntu image and run a bash terminal.
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal install curl
# apt-get update
# apt-get install curl
c) Exit the container terminal
# exit
d) Take a note of your container id by executing the following command:
$ docker ps -a
e) save container as new image
$ docker commit <container_id> new_image_name:tag_name(optional)
f) verify that you can see your new image with curl installed.
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
Run it in the foreground, not as a daemon. When it ends, the script that launched it takes control and commits/pushes it.
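A minimal sketch of that approach (the container and image names are just placeholders):
sudo docker run --name build-job my-image /bin/sh -c "sleep 10"   # foreground: returns when the command finishes
sudo docker commit build-job my-image                             # snapshot the stopped container back into the image
sudo docker rm build-job                                          # clean up the stopped container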
I didn't find any of these answers satisfying, as my goal was to 1) launch a container, 2) run a setup script, and 3) capture/store the state after setup, so I can instantly run various scripts against that state later. And all in a local, automated, continuous integration environment (e.g. scripted and non-interactive).
Here's what I came up with (I run this in the Travis-CI install section) for setting up my test environment:
#!/bin/bash
# Run a docker with the env boot script
docker run ubuntu:14.04 /path/to/env_setup_script.sh
# Get the container ID of the last run docker (above)
export CONTAINER_ID=`docker ps -lq`
# Commit the container state (returns an image_id with sha256: prefix cut off)
# and write the IMAGE_ID to disk at ~/.docker_image_id
(docker commit $CONTAINER_ID | cut -c8-) > ~/.docker_image_id
Note that my base image was ubuntu:14.04 but yours could be any image you want.
With that setup, I can now run any number of scripts (e.g. unit tests) against this snapshot (for Travis, these are in my script section):
docker run `cat ~/.docker_image_id` /path/to/unit_test_1.sh
docker run `cat ~/.docker_image_id` /path/to/unit_test_2.sh
Try this if you want to auto-commit all containers that are running. Put it in a cron job or something, if that helps:
#!/bin/bash
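# commit every running container (docker ps output minus the header line)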
for i in `docker ps|tail -n +2|awk '{print $1}'`; do docker commit -m "commit new change" $i; done

Docker run/start/exec?

Hi, I have built and installed the ziftrCoin wallet on an ubuntu image.
8084e9de3c23 ubuntu:latest "/bin/bash" 25 hours ago Up About a minute 0.0.0.0:10332->10332/tcp ziftrCoin
The problem is that ziftrcoind closes after I exit the container.
If I try to run docker exec -it ziftrCoin /root/64/./ziftrcoind, the program starts but I get connected to the container. Same problem if I exit.
So how do I update/edit the COMMAND so that I start the container with "/root/64/./ziftrcoind" and not "/bin/bash"?
UPDATE
If I build and run it, I can't get it to stay open:
docker run -d ziftr
252554f38c2a41bdd29875bcb6ab7b6bbe98522e16828b1f8b06d8899bc5134c
docker run -it ziftr
ZiftrCOIN server starting
FROM ubuntu
MAINTAINER Krister Johansson <hello#nodejs.how>
WORKDIR /var/ziftrCoin
RUN apt-get update
RUN apt-get install -y wget
RUN wget "https://d19y4lldx7po3t.cloudfront.net/assets/downloads/0.9.3/ziftrcoin-0.9.3-linux64.tar.gz"
RUN tar -xvzf ziftrcoin-0.9.3-linux64.tar.gz
RUN rm ziftrcoin-0.9.3-linux64.tar.gz
ADD ./src/ziftrcoin.conf /root/.ziftrcoin/ziftrcoin.conf
EXPOSE 10332 11332
CMD ["64/./ziftrcoind"]
For Docker, when the process with pid 1 (inside the container) quits, the container will quit too (and kill all other processes that were running in it). This is what happens to you, as /bin/bash is the process with pid 1. What you need to do is make the ziftrcoind process pid 1.
You did not provide a Dockerfile or a docker run command, but I assume you run something like docker run ziftrcoin (where ziftrcoin would be the name of the image you built) and you don't have a CMD in your Dockerfile.
The idea would be either to give docker a default command, using CMD in your Dockerfile or give it the command to run when issuing the docker run.
Let's see how the Dockerfile would look:
FROM ubuntu
RUN # … Install ziftrcoind
CMD ["/root/64/./ziftrcoind"]
If you build this image, the default command when running it will be /root/64/./ziftrcoind instead of /bin/bash. You could also do docker run ziftrcoin /root/64/./ziftrcoind to achieve the same effect.
As Kevan Ahlquist commented, if you want to run it in the background, you can use the -d flag: docker run -d ziftrcoin (with or without the command, depending on whether you have the CMD in your Dockerfile or not).
Problem found!
I had daemon=1 in ziftrcoin.conf; after removing it, it worked! (With daemon=1, ziftrcoind forks into the background, so the container's pid 1 exits and the container stops.)
Uploaded it to git:
https://github.com/nodejshow/docker-ziftrcoind

Resources