Docker-compose on WSL2 cannot start terminator in container - Windows

Good day to everyone,
I seem to have a problem with docker-compose and WSL2 on Windows 10.
I am running Docker on Ubuntu 20.04 in WSL2 on Windows 10. For some reason, if I run the Docker image with this command:
sudo docker run --rm -it --network host -e DISPLAY -v ${HOME}/.config/terminator:/home/user1/.config/terminator -v /tmp/.X11-unix:/tmp/.X11-unix -v ${PWD}/.bashrc_local:/home/user1/.bashrc_local -e QT_X11_NO_MITSHM=1 --privileged hsp/ros2-bench-test:r1Sim2
Bash runs as expected and I can start terminator and other GUI-based software.
But if I use docker-compose I get this error:
sudo docker-compose up
Creating network "docker_compose_default" with the default driver
Creating terminator ... done
Creating docker_compose_yarp-ros2-image_1 ... done
Attaching to terminator, docker_compose_yarp-ros2-image_1
terminator |
terminator | (terminator:20724): dbind-WARNING **: 07:58:24.948: Couldn't register with accessibility bus: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
terminator | Unable to connect to DBUS Server, proceeding as standalone
terminator |
terminator | ** (terminator:20724): WARNING **: 07:58:25.089: Binding '<Control><Alt>a' failed!
terminator | Unable to bind hide_window key, another instance/window has it.
terminator | Traceback (most recent call last):
terminator | File "/usr/bin/terminator", line 133, in <module>
terminator | TERMINATOR.layout_done()
terminator | File "/usr/lib/python3/dist-packages/terminatorlib/terminator.py", line 329, in layout_done
terminator | terminal.spawn_child()
terminator | File "/usr/lib/python3/dist-packages/terminatorlib/terminal.py", line 1500, in spawn_child
terminator | result, self.pid = self.vte.spawn_sync(Vte.PtyFlags.DEFAULT,
terminator | gi.repository.GLib.GError: g-io-error-quark: Failed to execute child process “/bin/bash”: Failed to fdwalk: Operation not permitted (14)
terminator exited with code 1
docker_compose_yarp-ros2-image_1 exited with code 0
The docker-compose.yml is the following:
version: "3.7"

x-base: &base
  image: hsp/ros2-bench-test:r1Sim2
  environment:
    - DISPLAY=${DISPLAY}
    - XAUTHORITY=/home/user1/.Xauthority
    - QT_X11_NO_MITSHM=1
    - LIBGL_ALWAYS_INDIRECT=0
    - YARP_COLORED_OUTPUT=1
  volumes:
    - "/tmp/.X11-unix:/tmp/.X11-unix:rw"
    - "/etc/hosts:/etc/hosts"
    - "/home/elandini/.gitconfig:/home/user1/.gitconfig"
    - ".bashrc_local:/home/user1/.bashrc_local"
    - "/home/elandini/.config/terminator:/home/user1/.config/terminator"
  network_mode: host
  ipc: host
  pid: host
  security_opt:
    - apparmor:unconfined

services:
  # Images
  yarp-ros2-image:
    image: hsp/ros2-bench-test:r1Sim2
    build:
      dockerfile: Dockerfile
      target: ros2CtrlDefault
      context: .
  terminator:
    <<: *base
    container_name: terminator
    command: terminator -g /home/user1/.config/terminator/config
I cannot see the error in the docker-compose.yaml file, but I am quite new to docker-compose and so it may be really trivial.
EDIT
Thanks to ste93 for the answer. With privileged: true everything works.
Does anybody know a way to avoid giving privileges to the container and still make this work?

If you put privileged: true in the docker compose it should work.
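A narrower option than full privileges may be to relax only the seccomp profile for the terminator service. The Failed to fdwalk: Operation not permitted error is typically traced to the default seccomp profile denying a syscall that GLib's process spawning uses, so this sketch (not a guaranteed fix) disables just that one filter:

```yaml
terminator:
  <<: *base
  container_name: terminator
  security_opt:
    # security_opt here replaces the list from the x-base anchor,
    # so apparmor:unconfined is repeated from above
    - apparmor:unconfined
    - seccomp:unconfined   # relax only seccomp instead of full privileged: true
  command: terminator -g /home/user1/.config/terminator/config
```

Updating Docker (and libseccomp on the host) so the default profile allows the syscall is another way to avoid both privileged: true and seccomp:unconfined.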


How do I bind a volume in Windows docker?

I'm running Windows Server 2019 with the latest Docker.
I need to start a Windows container and bind the host's C: to Z: in the container, but this does not work:
docker run -v c:\:z:\ -it XXX cmd.exe
What's the correct syntax?
EDIT
Here's what I've tried
PS C:\Users\Administrator> docker run --mount 'type="bind",source="C:\",target="Z:\"' -it mcr.microsoft.com/windows/nanoserver:1809 cmd.exe
invalid argument "type=bind,source=C:\",target=Z:\"" for "--mount" flag: parse error on line 1, column 19: bare " in non-quoted-field
See 'docker run --help'.
PS C:\Users\Administrator> docker run --mount type=bind,source=C:\,target=Z:\ -it mcr.microsoft.com/windows/nanoserver:1809 cmd.exe
docker: Error response from daemon: hcsshim::CreateComputeSystem 9b4e6759c82a071453bf4449f18dbbb2bd90511651c146a6e561a45771e0548c: The parameter is incorrect.
PS C:\Users\Administrator>
Did you try the --mount syntax?
Using PowerShell:
docker run --mount 'type="bind",source="C:\",target="Z:\"' myimage:latest
Or, without quotes:
docker run --mount type=bind,source=C:\,target=Z:\ myimage:latest
I just got this to work:
docker run -p 80:80 -v //e/testdata/:/opt/testdata imagetag
On host windows: e:\testdata
mapped in container: /opt/testdata
To access e:\testdata one needs to put a double slash before the drive letter, and no colon there. I am mapping into a linux container so that is a normal unix style path. The software inside the container was able to read and write the windows files.
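That path translation can also be scripted; this is an illustrative helper (the variable names are mine, not from the post) that turns a Windows path into the //drive form described above:

```shell
# Illustrative: convert a Windows path like E:\testdata to //e/testdata
winpath='E:\testdata'
drive=$(printf '%s' "$winpath" | cut -c1 | tr 'A-Z' 'a-z')   # drive letter, lowercased
rest=$(printf '%s' "$winpath" | cut -c3- | tr '\\' '/')      # drop the colon, flip slashes
printf '//%s%s\n' "$drive" "$rest"                           # prints //e/testdata
```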

/bin/sh: No such file or directory when setting a docker-compose entrypoint

I have a container that runs a database migration (source):
FROM golang:1.12-alpine3.10 AS downloader
ARG VERSION
RUN apk add --no-cache git gcc musl-dev
WORKDIR /go/src/github.com/golang-migrate/migrate
COPY . ./
ENV GO111MODULE=on
ENV DATABASES="postgres mysql redshift cassandra spanner cockroachdb clickhouse mongodb sqlserver firebird"
ENV SOURCES="file go_bindata github github_ee aws_s3 google_cloud_storage godoc_vfs gitlab"
RUN go build -a -o build/migrate.linux-386 -ldflags="-s -w -X main.Version=${VERSION}" -tags "$DATABASES $SOURCES" ./cmd/migrate
FROM alpine:3.10
RUN apk add --no-cache ca-certificates
COPY --from=downloader /go/src/github.com/golang-migrate/migrate/build/migrate.linux-386 /migrate
ENTRYPOINT ["/migrate"]
CMD ["--help"]
I want to integrate it into a docker-compose and make it dependent on the Postgres database service. However, since I have to wait until the database is fully initialised I have to wrap the migrate command in a script and thus replace the entrypoint of the migration container. I'm using the wait-for script to poll the database, which is a pure shell (not bash) script and should thus work in an alpine container.
This is how the service is defined in the docker-compose:
services:
database:
# ...
migration:
depends_on:
- database
image: migrate/migrate:v4.7.0
volumes:
- ./scripts/migrations:/migrations
- ./scripts/wait-for:/wait-for
entrypoint: ["/bin/sh"]
command: ["./wait-for database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test@database:5432/test?sslmode=disable", "-verbose", "up"]
Running docker-compose up on this fails with
migration_1 | /bin/sh: can't open './wait-for database:5432': No such file or directory
Running the migrate container by itself with
docker run -it --entrypoint /bin/sh -v $(pwd)/scripts/wait-for:/wait-for migrate/migrate:v4.7.0
does work flawlessly, the script is there and can be run with /bin/sh ./wait-for.
So why does it fail as part of the docker-compose?
If you read the error message carefully, you will see that the file that cannot be found is not ./wait-for, it is ./wait-for database:5432. This is consistent with your input file, where that whole string is given as the first element of the command list:
command: ["./wait-for database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test@database:5432/test?sslmode=disable", "-verbose", "up"]
It's unclear to me what you actually want instead, since the working alternatives presented do not seem to be fully analogous, but possibly it's
command: ["./wait-for", "database:5432", "--", "./migrate", "-path", "/migrations", "-database", "postgres://test:test@database:5432/test?sslmode=disable", "-verbose", "up"]
Running the migrate container by itself does work flawlessly
When you run it like:
docker run -it --entrypoint /bin/sh -v $(pwd)/scripts/wait-for:/wait-for migrate/migrate:v4.7.0
entrypoint /bin/sh is executed.
When you run it using docker-compose:
entrypoint (/bin/sh) + command (./wait-for database:5432 ...) is executed.
./wait-for database:5432 as a whole stands for an executable that will run, and it can't be found; that's why you get the error No such file or directory.
Try specifying an absolute path to wait-for in command:, and split ./wait-for database:5432 into "./wait-for", "database:5432".
It's possible that splitting alone will be enough.
As an alternative you can follow CMD syntax docs and use different command syntax without array: command: ./wait-for database:5432 ...
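Putting the answers together, a corrected service definition might look like this (a sketch using the same image and mounts as in the question, with absolute paths so the working directory doesn't matter):

```yaml
migration:
  depends_on:
    - database
  image: migrate/migrate:v4.7.0
  volumes:
    - ./scripts/migrations:/migrations
    - ./scripts/wait-for:/wait-for
  entrypoint: ["/bin/sh"]
  # each argument is its own list element; wait-for execs everything after "--"
  command: ["/wait-for", "database:5432", "--",
            "/migrate", "-path", "/migrations",
            "-database", "postgres://test:test@database:5432/test?sslmode=disable",
            "-verbose", "up"]
```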
ENTRYPOINT ["/bin/sh"] is not enough, you also need the -c argument.
Example (testing a docker-compose.yml with docker-compose run --rm MYSERVICENAMEFROMTHEDOCKERCOMPOSEFILE bash here):
entrypoint: ["/bin/sh"]
Throws:
/bin/sh: 0: cannot open bash: No such file
ERROR: 2
And some wrong syntax examples like
entrypoint: ["/bin/sh -c"]
(wrong!)
or
entrypoint: ["/bin/sh, -c"]
(wrong!)
throw errors:
starting container process caused: exec: "/bin/sh, -c": stat /bin/sh, -c: no such file or directory: unknown
ERROR: 1
starting container process caused: exec: "/bin/sh -c": stat /bin/sh -c: no such file or directory: unknown
ERROR: 1
In docker-compose or a Dockerfile, for a shell entrypoint you need the -c argument.
This is right:
entrypoint: "/bin/sh -c"
or:
entrypoint: ["/bin/sh", "-c"]
The -c flag makes clear that the next argument is a command string to be executed by the shell, rather than the path of a script to run; without it, /bin/sh treats its first argument as a file. You can read that between the lines at What is the difference between CMD and ENTRYPOINT in a Dockerfile?.
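The difference can be seen without Docker at all: /bin/sh FILE runs FILE as a script, while /bin/sh -c STRING parses STRING as a command line.

```shell
# /bin/sh <file> interprets its argument as the path of a script to run:
printf 'echo from-script\n' > /tmp/demo.sh
sh /tmp/demo.sh            # prints: from-script

# /bin/sh -c <string> interprets its argument as a command line:
sh -c 'echo from-string'   # prints: from-string
```

This mirrors the compose situation: entrypoint ["/bin/sh"] plus a command list passes the first element as a file path, while entrypoint ["/bin/sh", "-c"] passes it as a command string.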

Docker: RUN touch doesn't create file

While trying to debug a RUN statements in my Dockerfile, I attempted to redirect output to a file in a bound volume (./mongo/log).
To my surprise, I was unable to create files via the RUN command, or to pipe the output of another command to a file using the redirection/append operators (>, >>). I was, however, able to perform said task by logging into the running container via docker exec -ti mycontainer /bin/sh and issuing the command from there.
Why is this happening? How can I touch a file in the Dockerfile, or redirect output to a file or to the console from which the Dockerfile is run?
Here is my Dockerfile:
FROM mongo:3.4
#Installing NodeJS
RUN apt-get update && \
apt-get install -y curl && \
curl -sL https://deb.nodesource.com/setup_6.x | bash - && \
apt-get install -y nodejs
#Setting Up Mongo
WORKDIR /var/www/smq
COPY ./mongo-setup.js mongo-setup.js
##for testing
RUN touch /var/log/node.log && \
    node --help > /var/log/node.log 2>&1
##this was the command to debug
#RUN node mongo-setup.js > /var/log/mongo-setup.log 2> /var/log/mongo-setup.error.log
Here an excerpt from my docker-compose.yml:
mongodb:
build:
context: ./
dockerfile: ./mongodb-dockerfile
container_name: smqmongodb
volumes:
- /var/lib/mongodb/data
- ./mongo/log/:/var/log/
- ../.config:/var/www/.config
You are doing this during your build:
RUN touch /var/log/node.log && \
    node --help > /var/log/node.log 2>&1
The file /var/log/node.log is created and fixed immutably into the resulting image.
Then you run the container with this volume mount:
volumes:
- ./mongo/log/:/var/log/
Whatever is in ./mongo/log/ is mounted as /var/log in the container, which hides whatever was there before (from the image). This is the thing that's making it look like your touch didn't work (even though it probably worked fine).
You're thinking about this backward - your volume mount doesn't expose the container's version of /var/log externally - it replaces whatever was there.
Nothing you do in Dockerfile (build) will ever show up in an external mount.
Instead of RUN node mongo-setup.js > /var/log/mongo-setup.log 2> /var/log/mongo-setup.error.log within the container, what if you just say RUN node mongo-setup.js?
Docker recommends using docker logs. Like so:
docker logs container-name
To accomplish what you're after (seeing the mongo setup logs), you can split the container's stdout and stderr and send them to separate files:
me@host:~$ docker logs foo > stdout.log 2>stderr.log
me@host:~$ cat stdout.log
me@host:~$ cat stderr.log
Also, refer to the docker logs documentation

How to set hosts in docker for mac

When I used Docker before, I could use docker-machine ssh default to set hosts in the Docker machine's /etc/hosts, but in Docker for Mac I can't do that, because it doesn't have an accessible VM.
So, the problem is: how do I set hosts in Docker for Mac?
I want my secondary domain to point to another IP.
I found a solution, use this command
screen ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/tty
Now, edit the /etc/hosts in the Docker VM.
To exit screen, use Ctrl + a + d.
Here's how I do it with a bash script so the changes persist between Docker for Mac restarts.
cd ~/Library/Containers/com.docker.docker/Data/database
git reset --hard
DFM_HOSTS_FILE="com.docker.driver.amd64-linux/etc/hosts"
if [ ! -f ${DFM_HOSTS_FILE} ]; then
echo "appending host to DFM /etc/hosts"
echo -e "xxx.xxx.xxx.xxx\tmy.special.host" > ${DFM_HOSTS_FILE}
git add ${DFM_HOSTS_FILE}
git commit -m "add host to /etc/hosts for dns lookup"
fi
You can automate it via this script; running it at startup or login time will save you the trouble.
#!/bin/sh
# host entry -> '10.4.1.4 dockerregistry.senz.local'
# 1. run debian image
# 2. check host entry exists in /etc/hosts file
# 3. if not exists add it to /etc/hosts file
docker run --name debian -it --privileged --pid=host debian nsenter \
-t 1 -m -u -n -i sh \
-c "if ! grep -q dockerregistry.senz.local /etc/hosts; then echo -e '10.4.1.4\tdockerregistry.senz.local' >> /etc/hosts; fi"
# sleep 2 seconds
# remove stopped debian container
sleep 2
docker rm -f debian
I have created a blog post with more information about this topic.
https://medium.com/@itseranga/set-hosts-in-docker-for-mac-2029276fd448
You have to create a docker-compose.yml file. This file goes in the same directory as your Dockerfile.
For example, I use this docker-compose.yml file:
version: '2'
services:
  app:
    hostname: app
    build: .
    volumes:
      - ./:/var/www/html
    working_dir: /var/www/html
    depends_on:
      - db
      - cache
    ports:
      - 80:80
  cache:
    image: memcached:1.4.27
    ports:
      - 11211:11211
  rabbitmq:
    image: rabbitmq:latest
    ports:
      - 5672:5672
  db:
    image: postgres:9.5.3
    ports:
      - 5432:5432
    environment:
      - TZ=America/Mazatlan
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=restaurantcore
      - POSTGRES_USER=rooms
      - POSTGRES_PASSWORD=rooms
The ports are bound to the corresponding ports of your Docker host machine.

Running Docker Commands with a bash script inside a container

I'm trying to automate deployment with webhooks to the Docker hub based on this tutorial. One container runs the web app on port 80. On the same host I run another container that listens for post requests from the docker hub, triggering the host to update the webapp image. The post request triggers a bash script that looks like this:
echo pulling...
docker pull my_username/image
docker stop img
docker rm img
docker run --name img -d -p 80:80 my_username/image
A test payload successfully triggers the script. However, the container logs the following complaints:
pulling...
app/deploy.sh: line 4: docker: command not found
...
app/deploy.sh: line 7: docker: command not found
It seems that the bash script does not access the host implicitly. How to proceed?
Stuff I tried but did not work:
when firing up the listener container I added the host IP like this based on the docs:
HOSTIP=`ip -4 addr show scope global dev eth0 | grep inet | awk '{print $2}' | cut -d / -f 1`
docker run --name listener --add-host=docker:${HOSTIP} -e TOKEN="test654321" -d -p 5000:5000 mjhea0/docker-hook-listener
Similarly, I substituted the --add-host option with --add-host=dockerhost:$(ip route | awk '/docker0/ { print $NF }') based on this suggestion.
Neither the docker binary nor the docker socket will be present in a container by default (why would it?).
You can solve this fairly easily by mounting the binary and socket from the host when you start the container e.g:
$ docker run -v $(which docker):/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock debian docker --version
Docker version 1.7.0, build 0baf609
You seem to be a bit confused about how Docker works; I'm not sure exactly what you mean by "access the host implicitly" or how you think it would work. Think of a container as an isolated and ephemeral machine, completely separate from your host, something like a fast VM.
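In docker-compose terms, the same binary and socket mounts could be sketched like this (the service name and image follow the question; whether you need to mount the client binary at all depends on what the image already contains):

```yaml
listener:
  image: mjhea0/docker-hook-listener
  ports:
    - "5000:5000"
  environment:
    - TOKEN=test654321
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock   # lets the container talk to the host daemon
    - /usr/bin/docker:/usr/bin/docker:ro          # host docker client; path is an assumption
```

Note that mounting the socket gives the container full control over the host's Docker daemon, which is effectively root access to the host.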
