I have a couple of Docker images I've built for this and that; one for a scanner program, another for a browser, etc. Once I had them working, I created .desktop files that execute bash run scripts I've written to start a container from each image.
My question is: is there a way to run the .desktop file without a terminal window showing up? I've tried a couple of approaches with no success.
For instance, I've tried:
[Desktop Entry]
Name=gscan2pdf
Icon=gscan2pdf.png
Exec=gnome-terminal -e "/home/hildy/Documents/repos/docker/gscan2pdf/run_gscan.sh"
Type=Application
Terminal=false
As well as:
[Desktop Entry]
Name=gscan2pdf
Icon=gscan2pdf.png
Exec="/home/hildy/Documents/repos/docker/gscan2pdf/run_gscan.sh"
Type=Application
Terminal=true
Both of these execute the script just fine, of course; I'd just like it better if the application launched without a terminal window appearing first.
Host System:
CentOS 7 - Gnome 3 Desktop
One example of a run script:
#!/bin/bash
HOST_UID=$(id -u)
HOST_GID=$(id -g)
XSOCK=/tmp/.X11-unix &&
XAUTH=/tmp/.docker.xauth &&
touch $XAUTH &&
xauth nlist :0 | sed -e 's/^..../ffff/' | xauth -f $XAUTH nmerge - &&
#These are only run the first time a container is run from the image
#docker run -e NEW_USER="${USER}" -e NEW_UID="${HOST_UID}" -e NEW_GID="${HOST_GID}" hildy/gscan2pdf:v1
#LAST_CONTAINER=$(docker ps -lq) &&
#docker commit "${LAST_CONTAINER}" hildy/gscan2pdf:v1
docker run \
-ti \
--user $USER \
--privileged \
-v /dev/bus/usb:/dev/bus/usb \
-v $XAUTH:$XAUTH -v $XSOCK:$XSOCK -v /home/$USER:/home/$USER \
-e XAUTHORITY=$XAUTH -e DISPLAY \
--entrypoint "" hildy/gscan2pdf:v1 gscan2pdf &>/dev/null
I have found an answer to my question. The issue was that the command to run the container included the -i option for an interactive terminal. @sneep was correct in the comments on the question when he said "It should work with Terminal=false." His technique of adding a line to the script to create a log file is also great; I will certainly use it in the future, and it helped me diagnose the issue.
I can also confirm that replacing -it with -d for detached mode, as suggested by @Oleg Skylar, works.
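For reference, here is a minimal sketch of that logging technique (the exact line sneep suggested is not in the original post, so the log path and wording here are assumptions), placed near the top of run_gscan.sh:
# Hypothetical debugging addition: append all further output of this script
# to a log file so launcher failures can be inspected afterwards.
exec >> /home/hildy/run_gscan.log 2>&1
echo "run_gscan.sh started at $(date)"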
Amended Docker command for the run script:
docker run \
-t \
--user $USER \
--privileged \
-v /dev/bus/usb:/dev/bus/usb \
-v $XAUTH:$XAUTH -v $XSOCK:$XSOCK -v /home/$USER:/home/$USER \
-e XAUTHORITY=$XAUTH -e DISPLAY \
--entrypoint "" hildy/gscan2pdf:v1 gscan2pdf &>/dev/null
Amended .desktop file:
[Desktop Entry]
Name=gscan2pdf
Icon=gscan2pdf.png
Exec=/home/hildy/Documents/repos/docker/gscan2pdf/run_gscan.sh
Type=Application
Terminal=false
StartupNotify=true
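As a closing note that is not part of the original answer: to install such a launcher, the run script needs to be executable and the .desktop file usually goes in the per-user applications directory (the .desktop filename below is an assumption):
chmod +x /home/hildy/Documents/repos/docker/gscan2pdf/run_gscan.sh
cp gscan2pdf.desktop ~/.local/share/applications/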
I know that technically host networking isn't supported on macOS (see https://docs.docker.com/network/host/):
The host networking driver only works on Linux hosts, and is not supported on Docker Desktop for Mac, Docker Desktop for Windows, or Docker EE for Windows Server.
However, it does actually seem to work. E.g. this works just fine:
docker run \
--name local-mysql \
-e MYSQL_ROOT_PASSWORD=foo \
-e MYSQL_DATABASE=baz \
--network="host" \
-d mysql:latest
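A quick way to confirm that the container really is attached to the host network (this check is not part of the original post) is to inspect its network mode:
# Prints "host" when the container uses host networking
docker inspect -f '{{ .HostConfig.NetworkMode }}' local-mysql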
However, when I try to specify the host networking conditionally with a bash variable, it doesn't work, and I can't make sense of it. Consider the following test.sh:
#!/bin/bash
echo "Test 1"
docker rm -f local-mysql
docker run \
--name local-mysql \
-e MYSQL_ROOT_PASSWORD=foo \
-e MYSQL_USER=master \
-e MYSQL_PASSWORD=bar \
-e MYSQL_DATABASE=baz \
--network="host" \
-d mysql:latest
docker ps
sleep 5
echo "Test 2"
export NETWORKING='--network="host"'
docker rm -f local-mysql
docker run \
--name local-mysql \
-e MYSQL_ROOT_PASSWORD=foo \
-e MYSQL_USER=master \
-e MYSQL_PASSWORD=bar \
-e MYSQL_DATABASE=baz \
${NETWORKING} \
-d mysql:latest
docker ps
This yields:
% ./test.sh
Test 1
local-mysql
6bbd68f0564943b8fb66ed37f1e639b54719bdb3b88b4e13aeef0a11cae4090b
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6bbd68f05649 mysql:latest "docker-entrypoint.s…" Less than a second ago Up Less than a second local-mysql
Test 2
local-mysql
e286028ef9a1a27f4226beb60e766cc163c289239ba506f63a71a35adbc73ef3
docker: Error response from daemon: network "host" not found.
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
I.e. when I hard-code --network=host into the docker command, the container starts fine, but the exact same parameter supplied via an environment variable fails with network "host" not found.
I'm honestly not sure whether this is a failure of bash or of docker, but I can't actually figure out what's going wrong.
-- EDIT --
Changing
export NETWORKING='--network="host"'
to
export NETWORKING='--network=host'
works. And for my purposes right now that's enough. But just to be thorough... Why? The working example has quotes in the value (--network="host"), so why does the shell expansion break the non-working example? What if I wanted something like --network="my host"?
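In the failing example above, the unquoted ${NETWORKING} expands to the single word --network="host" with the double quotes as literal characters, so docker looks for a network literally named "host", quotes included. A brief sketch of the usual bash workaround for arguments that contain spaces (this is not from the original post, and the "my host" network name is purely hypothetical) is to use an array, which preserves each argument exactly:
#!/bin/bash
# Hypothetical: pass --network=my host to docker as one intact argument.
NETWORKING=(--network="my host")
docker rm -f local-mysql
docker run \
  --name local-mysql \
  -e MYSQL_ROOT_PASSWORD=foo \
  -e MYSQL_DATABASE=baz \
  "${NETWORKING[@]}" \
  -d mysql:latest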
I am trying to set up crontab to run two docker containers on system startup/reboot. The line I use to do this after entering the command crontab -e is:
@reboot sh folder_b/run_docker_containers.bash
The script run_docker_containers.bash has the following contents:
#!/bin/bash
# Run containers based on setup_image and main_image
sudo bash /home/user/folder_a/run_setup_docker_container.bash
sudo bash /home/user/folder_b/run_main_docker_container.bash
The scripts run_setup_docker_container.bash and run_main_docker_container.bash both have the following contents (where docker_image is setup_image and main_image, respectively):
#!/bin/bash
/snap/bin/docker run \
--rm \
--detach \
--privileged \
--net=host \
--device /dev/bus/usb \
docker_image:latest \
/bin/bash -c\
"
*SOME COMMANDS*
"
export containerId=$(/snap/bin/docker ps -l -q)
However, the containers are not run when the script is executed on reboot. I can prove that cron finds and runs folder_b/run_docker_containers.bash, because when I add the following line to it, the new file is created after a reboot.
touch proof_that_crontab_has_done_something.txt
It seems that crontab cannot find the scripts run_setup_docker_container.bash and run_main_docker_container.bash. Any ideas where I'm going wrong?
If you want to execute a shell script with sudo rights, I would recommend using root's crontab:
sudo crontab -e
Your personal crontab should not be able to start a shell with sudo rights unless you have made some unusual modifications.
Also, use the absolute path to the script:
@reboot /...../folder_b/run_docker_containers.bash
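Putting both suggestions together, a minimal sketch of the root crontab entry (assuming the outer script lives at /home/user/folder_b/ as the question suggests; the log file path is an invention for illustration):
# sudo crontab -e, then add on one line:
@reboot /bin/bash /home/user/folder_b/run_docker_containers.bash >> /home/user/cron_reboot.log 2>&1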
I am trying to use a bash script to run a docker container and then print a message. However, the finished message is printed whilst the container is still running; I can exec into it and see PID 1 and multiple other processes.
How can I force the docker run command to complete first?
docker run --name registr \
-v ~/v1:/v1 \
-v ~/logging.yaml:/root/logging.yaml \
-v ~/.aws:/root/.aws \
-v ~/luigi.cfg:/root/luigi.cfg \
-v ~/params:/root/params \
-p 8082:8082 \
simonm3/registr
echo "docker finished"
The docker image has CMD ["python", "/root/worker/start.py"]
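For reference, one generic way to block until a container finishes (this is not from the original post) is to start it detached and then call docker wait, which returns only when the container stops:
docker run -d --name registr \
    -v ~/v1:/v1 \
    -v ~/logging.yaml:/root/logging.yaml \
    -v ~/.aws:/root/.aws \
    -v ~/luigi.cfg:/root/luigi.cfg \
    -v ~/params:/root/params \
    -p 8082:8082 \
    simonm3/registr
# docker wait blocks until the named container exits, then prints its exit code
docker wait registr
echo "docker finished"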
I am struggling to run my cucumber tests from a Docker image.
Here is my setup:
I use OSX with XQuartz to run an X11 session
I use an Ubuntu 14 Vagrant image for development where I forward my X11 session
I am trying to run a docker image with Firefox that will use my XQuartz session for display
So far, I managed to start Firefox with the following setup:
# Dockerfile
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y firefox
# Replace 1000 with something appropriate ;)
RUN export uid=1000 gid=1000 && \
mkdir -p /home/developer && \
echo "developer:x:${uid}:${gid}:Developer,,,:/home/dev:/bin/bash" >> /etc/passwd && \
echo "developer:x:${uid}:" >> /etc/group && \
echo "developer ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/developer && \
chmod 0440 /etc/sudoers.d/developer && \
chown ${uid}:${gid} -R /home/developer
USER developer
ENV HOME /home/developer
CMD /usr/bin/firefox
I can start Firefox with --net=host from my Vagrant machine:
docker build -t firefox .
docker run --net=host -ti --rm -e DISPLAY=$DISPLAY -v $HOME/.Xauthority:/home/developer/.Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix:rw firefox:latest
But this is not ideal because I can't link other containers to it in my docker-compose.yml file. Ideally, I would like to run my container without --net=host, like this:
docker build -t firefox .
docker run -ti --rm -e DISPLAY=$DISPLAY -v $HOME/.Xauthority:/home/developer/.Xauthority -v /tmp/.X11-unix:/tmp/.X11-unix:rw firefox:latest
But I get the following error:
error: XDG_RUNTIME_DIR not set in the environment.
Error: cannot open display: localhost:10.0
Please help :)
You could simply use elgalu/docker-selenium instead of dealing with a problem that is already solved and maintained for you:
docker run --rm -ti --net=host --pid=host --name=grid \
-e SELENIUM_HUB_PORT=4444 -e TZ="US/Pacific" \
-v /dev/shm:/dev/shm --privileged elgalu/selenium
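Once the grid is up (with --net=host it listens directly on the host), the tests can be pointed at it. A small sketch of waiting for the hub before running anything; the status URL is the standard Selenium Grid endpoint, and the port comes from SELENIUM_HUB_PORT above:
# Wait until the Selenium hub responds, then the cucumber tests can target it
until curl -s http://localhost:4444/wd/hub/status > /dev/null; do
  sleep 1
done
echo "Selenium grid is ready at http://localhost:4444/wd/hub"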
If you need advanced features, such as a dashboard with video recording or a live preview, you can use Zalenium and start it with:
curl -sSL https://raw.githubusercontent.com/dosel/t/i/p | bash -s start -i
This works:
# echo 1 and exit:
$ docker run -i -t image /bin/bash -c "echo 1"
1
# exit
# echo 1 and return shell in docker container:
$ docker run -i -t image /bin/bash -c "echo 1; /bin/bash"
1
root@4c064f2554de:/#
Question: How could I source a file into the shell? (this does not work)
$ docker run -i -t image /bin/bash -c "source <(curl -Ls git.io/apeepg) && /bin/bash"
# content from http://git.io/apeepg is sourced and shell is returned
root@4c064f2554de:/#
In my case, I use a RUN source command in a Dockerfile (which runs via /bin/bash once /bin/sh is relinked to it) to install nvm for node.js.
Here is an example.
FROM ubuntu:14.04
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
...
...
RUN source ~/.nvm/nvm.sh && nvm install 0.11.14
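An alternative sketch, not part of the original answer (it assumes a Docker version that supports the SHELL instruction, and that nvm was installed in one of the elided earlier steps): switch the build shell to bash instead of relinking /bin/sh:
FROM ubuntu:14.04
# Make bash the shell used by all following RUN instructions, so `source` works
SHELL ["/bin/bash", "-c"]
RUN source ~/.nvm/nvm.sh && nvm install 0.11.14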
I wanted something similar, and expanding a bit on your idea, came up with the following:
docker run -ti --rm ubuntu \
bash -c 'exec /bin/bash --rcfile /dev/fd/1001 \
1002<&0 \
<<<$(echo PS1=it_worked: ) \
1001<&0 \
0<&1002'
--rcfile /dev/fd/1001 will use that file descriptor's contents instead of .bashrc
1002<&0 saves stdin
<<<$(echo PS1=it_worked: ) puts PS1=it_worked: on stdin
1001<&0 moves this stdin to fd 1001, which we use as rcfile
0<&1002 restores the stdin that we saved initially
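A simpler variant of the same --rcfile idea (this sketch is mine, not part of the answer above, and the PS1 value is just an example) writes the rcfile to a temporary path inside the container instead of juggling file descriptors:
docker run -ti --rm ubuntu \
  bash -c 'echo "PS1=it_worked: " > /tmp/rc && exec bash --rcfile /tmp/rc'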
You can use .bashrc in interactive containers:
RUN curl -O git.io/apeepg.sh && \
echo 'source apeepg.sh' >> ~/.bashrc
Then just run as usual with docker run -it --rm some/image bash.
Note that this will only work with interactive containers.
I don't think you can do this, at least not right now. What you could do is modify your image, and add the file you want to source, like so:
FROM image
ADD my-file /my-file
RUN ["source", "/my-file", "&&", "/bin/bash"]