I'm writing a script where I execute the following command:
docker exec -ti keycloak11_service-keycloak_1 /opt/jboss/keycloak/bin/standalone.sh \
-Djboss.socket.binding.port-offset=151 \
-Dkeycloak.migration.action=export \
-Dkeycloak.migration.provider=singleFile \
-Dkeycloak.migration.realmName=demo \
-Dkeycloak.migration.usersExportStrategy=REALM_FILE \
-Dkeycloak.migration.file=/keycloak/realm-export.json
This exports the file realm-export.json from my running Keycloak instance. After the execution I want to scp it to another server. The problem is that when I execute the command with -ti, a standalone Keycloak instance is started and my script gets stuck once it prints: Listening on port ...
As the export operation is finished at that point, I could kill the process or interrupt it (i.e. Ctrl+C). Is there a way to do this from a script?
Put your commands in a Dockerfile and run the container in detached mode with docker run -d. You could also use a bind mount to expose a host directory inside the container; that way the exported file is stored on the host and you can access it later.
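A rough sketch of that idea, assuming a Keycloak image named my-keycloak-image and a target user@backup-host (both placeholders); the export lands in a bind-mounted host directory, so the script can wait for it and then scp it:

docker run -d --name keycloak-export \
  -v "$PWD/export:/keycloak" \
  my-keycloak-image \
  /opt/jboss/keycloak/bin/standalone.sh \
    -Djboss.socket.binding.port-offset=151 \
    -Dkeycloak.migration.action=export \
    -Dkeycloak.migration.provider=singleFile \
    -Dkeycloak.migration.realmName=demo \
    -Dkeycloak.migration.usersExportStrategy=REALM_FILE \
    -Dkeycloak.migration.file=/keycloak/realm-export.json

# Crude readiness check: wait until the export file shows up on the host,
# then stop the container and copy the file to the other server.
until [ -s "$PWD/export/realm-export.json" ]; do sleep 2; done
docker rm -f keycloak-export
scp "$PWD/export/realm-export.json" user@backup-host:/some/target/dir/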
In a ROS project, I have the following bash script that I use to run a docker container:
#!/bin/bash
source ~/catkin_ws/devel/setup.bash
rosnode kill some_ros_node
roslaunch supporting_ros_package launch_file.launch &
docker run -it \
--restart=always \
--privileged \
--net=host \
my_image:latest \
/bin/bash -c \
"
roslaunch my_package my_launch_file.launch
"
export containerId=$(docker ps -l -q)
However, what I'd like is for the bash commands preceding the docker run command to also re-run on the host machine (not within the container) every time the container restarts (especially when the machine boots up).
How might I achieve this?
There are a few ways I can think of doing this:
Add this script as a system service (see this answer on how to add a systemd service); a minimal sketch of such a unit follows below.
Add this script into another container that is also set to restart always, and mount the Docker socket into that other container (see this answer for how to do that).
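A minimal sketch of the first option, assuming the script above is saved as /usr/local/bin/start_ros_container.sh (a hypothetical path) and made executable:

sudo tee /etc/systemd/system/ros-container.service > /dev/null <<'EOF'
[Unit]
Description=Host-side ROS setup plus the Docker container
After=docker.service network-online.target
Requires=docker.service

[Service]
Type=simple
# Note: the -it flags in the docker run command may need to be dropped,
# since no terminal is attached when systemd starts the script.
ExecStart=/usr/local/bin/start_ros_container.sh
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now ros-container.service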
Let's assume I run the following command inside a script:
#!/usr/bin/env bash
docker run --name mydb --rm -e POSTGRES_PASSWORD=kgalli -e POSTGRES_USER=kgalli -p "9999:5432" -v $PWD/db:/opt -d postgres
When I then run the following command to create a database it works fine.
docker exec -e PGPASSWORD=kgalli mydb psql -U kgalli -d template1 -c "CREATE DATABASE kgalli_test WITH OWNER kgalli ENCODING 'UTF8' LC_COLLATE = 'en_US.utf8' LC_CTYPE = 'en_US.utf8';"
However, when I add this line to the script above, so that the script not only starts the postgres server but also creates the database, it fails.
I do not really understand why I get the following error:
psql: could not connect to server: No such file or directory
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
I know I can instruct the docker postgres image to create a database on start, but that is actually not what I want to achieve. I am just using this as an example to understand the problem.
When you're running it in a script, it's most likely just happening too quickly. The docker run … command returns immediately, and then docker exec … is attempting to use PostgreSQL while the database server is still starting up. You need to wait for it to be ready before creating the extra database.
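A minimal sketch of such a wait, reusing the container name and credentials from the question (pg_isready ships with the official postgres image):

# Poll until the server inside the container accepts connections.
until docker exec mydb pg_isready -U kgalli > /dev/null 2>&1; do
  sleep 1
done

docker exec -e PGPASSWORD=kgalli mydb psql -U kgalli -d template1 \
  -c "CREATE DATABASE kgalli_test WITH OWNER kgalli ENCODING 'UTF8' LC_COLLATE = 'en_US.utf8' LC_CTYPE = 'en_US.utf8';"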
That said, the postgres image has functionality in its entrypoint script to run custom initialization scripts. You can put your CREATE DATABASE … statement into an .sql (or .sh) file and mount it into /docker-entrypoint-initdb.d in the container. The postgres container will automatically run it once the database server is ready.
The docs for this seem to have disappeared, but you can see the implementation in docker-entrypoint.sh.
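As a sketch of that approach (the host directory name is arbitrary; note that init scripts only run on a fresh, empty data directory):

mkdir -p "$PWD/initdb"
cat > "$PWD/initdb/10-create-test-db.sql" <<'EOF'
CREATE DATABASE kgalli_test WITH OWNER kgalli ENCODING 'UTF8'
  LC_COLLATE = 'en_US.utf8' LC_CTYPE = 'en_US.utf8';
EOF

docker run --name mydb --rm \
  -e POSTGRES_PASSWORD=kgalli -e POSTGRES_USER=kgalli \
  -p "9999:5432" \
  -v "$PWD/initdb:/docker-entrypoint-initdb.d" \
  -d postgres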
Using docker run, you start a new container; using docker exec, you execute a command in an already running container.
The docker run command first creates a writeable container layer over the specified image, and then starts it using the specified command.
The docker exec command runs a new command in a running container.
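For illustration, a quick contrast (the image and container name here are placeholders, not from the question):

docker run -d --name web nginx       # creates a brand-new container from the nginx image and starts it
docker exec web ls /etc/nginx        # runs an additional command inside the already running "web" container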
If the container is paused, then the docker exec command will fail with an error
$ docker pause test
test
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1ae3b36715d2 ubuntu:latest "bash" 17 seconds ago Up 16 seconds (Paused) test
$ docker exec test ls
FATA[0000] Error response from daemon: Container test is paused, unpause the container before exec
$ echo $?
1
(ref.1)
(ref.2)
Running the below command to execute my tests in a docker container:
sudo docker exec -i 6d49272f772c bash -c "mvn clean install test"
The above command runs on Jenkins (executing bash), but the Jenkins console does not show the logs for the test execution.
I had a similar problem with docker start (which is similar to docker exec). I used the -i option and it would work fine outside Jenkins, but the console in Jenkins didn't show any output from this command. I replaced -i with -a similar to the following:
sudo docker container create -it --name container-name some-docker-image some-command
sudo docker container start -a container-name
sudo docker container rm -f container-name
The docker exec command doesn't have an -a option, so possibly just removing the -i option would work too (since you are not interacting with the container in Jenkins). If that doesn't work, you can convert to the commands above and achieve a similar result, with standard out being captured.
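A sketch of that simpler variant, using the container ID from the question:

# Same test command, just without -i; stdout/stderr from the exec are still attached to the console.
sudo docker exec 6d49272f772c bash -c "mvn clean install test"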
This question is not a duplicate, because I want to obtain an interactive shell without running with the -it flags.
I'm taking my first steps with Docker, creating images for internal use only.
I start from this environment_full.df:
FROM ubuntu:16.04
ENTRYPOINT ["/bin/bash"]
I then build
docker rmi environment:full
docker build -t environment:full -f environment.df .
Then run
docker run environment:full
Running docker images -a I see my image:
REPOSITORY TAG IMAGE ID CREATED SIZE
environment full aa91bbd39167 4 seconds ago 129 MB
So I run it
docker run environment:full
I see nothing happening ....
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5847c0a18f30 environment:full "/bin/bash" 21 seconds ago Exited (0) 20 seconds ago admiring_mirzakhani
Also
$ docker run environment:full -ti
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
root@aa768a585f33:/# exit
I'd like to get the Ubuntu prompt, as if I were in an SSH session, without the user having to pass the -i or -t flags.
How can I achieve this?
bash won't run at all if stdin is closed. If you don't provide the -i flag, bash will simply exit immediately. So when you...
docker run environment:full
...bash exits immediately, and so your container exits. You would see it if you ran docker ps -a, which shows containers that have stopped.
bash won't give you an interactive prompt if it's not attached to a tty. So if you were to run...
docker run -i environment:full
...you would get a bash shell, but with no prompt, or job control, or other features. You need to provide -t for Docker to allocate a tty device.
You can't get what you want without providing both the -i and -t options on the command line.
An alternative would be to set up an image that runs an ssh daemon, and have people ssh into the container. Instead of behaving "like if I was in a SSH connection", it would actually be an ssh session.
Also, note that this:
docker run environment:full -ti
Is not the same as this:
docker run -it environment:full
The former will run bash -ti inside a container, while the latter passes the -i and -t options to docker run.
I want to set up a cron job to run a set of commands inside a docker container and then commit the changes to the docker image. I'm able to run the container as a daemon and get the container ID using this command:
CONTAINER_ID=$(sudo docker run -d my-image /bin/sh -c "sleep 10")
but I'm having trouble with the second part--committing the changes to the image once the sleep 10 command completes. Is there a way for me to tell when the docker container is about to be killed and run another command before it is?
EDIT: As an alternative, is there a way to trigger ctrl-p-q via a shell script in the container to leave the container running but return to the host?
There are the following ways to persist container data:
Docker volumes
Docker commit
a) Create a container from the ubuntu image and run a bash terminal:
$ docker run -i -t ubuntu:14.04 /bin/bash
b) Inside the terminal, install curl:
# apt-get update
# apt-get install curl
c) Exit the container terminal
# exit
d) Take note of your container ID by executing the following command:
$ docker ps -a
e) Save the container as a new image:
$ docker commit <container_id> new_image_name:tag_name(optional)
f) Verify that you can see your new image, with curl installed:
$ docker images
$ docker run -it new_image_name:tag_name bash
# which curl
/usr/bin/curl
Run it in the foreground, not as a daemon. When it ends, the script that launched it takes back control and can commit/push the result.
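A minimal sketch of that idea, reusing the image and command from the question (the container name and target tag are placeholders):

# Runs in the foreground; the script continues only after the command finishes.
docker run --name my-image-run my-image /bin/sh -c "sleep 10"
docker commit my-image-run my-image:updated
docker rm my-image-run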
I didn't find any of these answers satisfying, as my goal was to 1) launch a container, 2) run a setup script, and 3) capture/store the state after setup, so I can instantly run various scripts against that state later. And all in a local, automated, continuous integration environment (e.g. scripted and non-interactive).
Here's what I came up with (and I run this in Travis-CI install section) for setting up my test environment:
#!/bin/bash
# Run a docker with the env boot script
docker run ubuntu:14.04 /path/to/env_setup_script.sh
# Get the container ID of the last run docker (above)
export CONTAINER_ID=`docker ps -lq`
# Commit the container state (returns an image_id with sha256: prefix cut off)
# and write the IMAGE_ID to disk at ~/.docker_image_id
(docker commit $CONTAINER_ID | cut -c8-) > ~/.docker_image_id
Note that my base image was ubuntu:14.04 but yours could be any image you want.
With that setup, I can now run any number of scripts (e.g. unit tests) against this snapshot (for Travis, these are in my script section). For example:
docker run `cat ~/.docker_image_id` /path/to/unit_test_1.sh
docker run `cat ~/.docker_image_id` /path/to/unit_test_2.sh
Try this if you want to auto-commit all running containers. Put it in a cron job or something similar, if that helps:
#!/bin/bash
for i in $(docker ps -q); do docker commit -m "commit new change" "$i"; done
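For example, a hypothetical crontab entry that runs the snippet above every hour, assuming it is saved as /usr/local/bin/auto_commit_containers.sh and marked executable:

# Hypothetical path; adjust to wherever you saved the script above.
0 * * * * /usr/local/bin/auto_commit_containers.sh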