To which device is the output of a GitLab CI pipeline written?

I have the following statement in my .gitlab-ci.yml:
( docker-compose up & ) | ( tee /dev/tty & ) | grep -m 1 "Compiled successfully"
It should show the output of docker-compose up in the web terminal and wait for a specific string indicating that the containers are ready.
But /dev/tty fails with the error: tee: /dev/tty: No such device or address
The output of tty is not a tty. How do I find out where the output is actually written? The GitLab runner runs on Ubuntu 18.04.2.
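For reference, you can check from inside the job script where file descriptor 1 actually points (a sketch; on a Linux runner /proc exposes this):
# Print what stdout is attached to; on a CI runner this is typically
# a pipe rather than a terminal device:
readlink /proc/self/fd/1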

I've solved this using:
- docker-compose up -d
- docker-compose logs -f &
This will keep outputting the logs of docker-compose in the foreground.
Note that this will interleave the output of your containers with that of any subsequent commands in your .gitlab-ci.yml.
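For context, a fuller job script might look like this (a sketch only; the readiness string comes from the question, and run-tests.sh is a hypothetical follow-up step):
- docker-compose up -d
# Stream container logs into the job output for the rest of the script:
- docker-compose logs -f &
# Block until the readiness string appears; backgrounding the logs process
# in a subshell lets grep return on the first match:
- ( docker-compose logs -f & ) | grep -m 1 "Compiled successfully"
- ./run-tests.sh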

Related

How to start docker containers and keep them alive during script execution on macOS

I have a shell script that tries to start two docker containers in a for loop. The script should not continue the rest of its execution until it has detected the output "Service will run on port" on stdout.
The following code works fine on Linux:
for i in "${functionsToStart[@]}"
do
  echo "Starting ${i}"
  (bash start-server.sh) | grep -q "Service will run on port"
done
# more commands
...
On macOS, however, Docker runs inside a virtual machine (Docker Desktop), and the grep never matches.
When I try to run this as a sub process:
(bash start-server.sh &) | grep -q "Service will run on port"
The grep matches fine, but it also kills my subprocess and therefore the container as well.
I need the containers to keep running for the rest of the script execution. How do I do this on macOS?
To anyone struggling with this issue on macOS: the start-server.sh script mentioned above is the script that starts my docker container. The answer here was to use the correct options when executing the docker run command from within the script:
docker run -itd
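A minimal sketch of how this fits together; the image name, port, and container name below are placeholders, not from the original script:
# Inside start-server.sh: -d detaches the container so its lifetime is not
# tied to the script's stdout pipeline; -it keeps stdin and a tty allocated.
docker run -itd --name "service-$i" -p 8080:8080 my-service-image
# In the outer loop: wait for readiness on the container's logs instead;
# once grep matches, only the logs process dies, not the container.
( docker logs -f "service-$i" 2>&1 & ) | grep -q "Service will run on port"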

Send commands directly to a running process and indirectly (e.g. with tail)

I am currently building a docker project for running a Minecraft Spigot server.
To achieve this, I need to be able to run commands in the running shell (when using docker run -it d3strukt0r/spigot) and indirectly with docker exec <name> console <command>. Unfortunately, I'm not too fond of the bash language.
Currently, I am able to send commands indirectly, which is great when being detached. I got this with:
_console_input="/app/input.buffer"
# Clear the console buffer
true >"$_console_input"
# Start the main application: tail -f keeps the buffer file open and forwards
# every line appended to it, while tee echoes the stream to /dev/console
echo "[....] Starting Minecraft server..."
tail -f "$_console_input" | tee /dev/console | $(command -v java) $JAVA_OPTIONS -jar /app/spigot.jar --nogui "$@"
And when running the console command, all it does is the following:
echo "$#" >>/app/input.buffer
The code can be found here
Does someone know a way of how to be able to now add the functionality to directly enter commands?
USE CASE ONE: A user may run attached using docker run
docker run -it --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user should definitely be able to use the console as he is used to (when running java -jar spigot.jar).
If he has a second console open he can also send a command with:
docker exec spigot console "time set day"
USE CASE TWO: A user may run detached using docker run -d
docker run -d --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
In this case, the user is only able to send commands indirectly.
docker exec spigot console "time set day"
USE CASES THREE AND FOUR: A user may use docker-compose (see use case two; it's basically the same)
You could make a script that acts like a mini-shell, reading from stdin and writing to /app/input.buffer. Set it as the container's CMD so it runs by default. Put it in the same directory as your Dockerfile and make sure it's executable.
interactive_console
#!/bin/sh
# Prompt with '$ ', read one command per line from stdin, and append each
# line to the buffer file that tail -f is following.
while IFS= read -rp '$ ' command; do
  printf '%s\n' "$command"
done >> /app/input.buffer
Dockerfile
COPY interactive_console /usr/bin
CMD interactive_console
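With that in place, both paths work (usage sketch, reusing the commands from the use cases above):
# Attached (use case one): the default CMD drops you into the mini-shell,
# so typed commands land in /app/input.buffer and reach the server.
docker run -it --name spigot -p 25565:25565 -e EULA=true d3strukt0r/spigot:nightly
# Detached (use cases two to four): the indirect path is unchanged.
docker exec spigot console "time set day"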

Docker: Exec a command on spigot console running in a container

To summarize the problem I have:
I want to execute a command on the Minecraft console that is running in the container, as if I had attached to it in interactive mode, but without the need to attach to it:
docker attach container_name
command
detach_from_container
Like running docker exec, but it should put the command on the stdin of the program running inside the container, as docker attach does.
I'm simply looking for a one-liner that does the same, like in this question.
Edit:
echo 'say test' | docker attach <container id>
Gives the Error:
the input device is not a TTY
Edit2:
after removing the -t flag on the container, like in this post,
echo 'say test' | docker attach <container id>
the command reaches the server, as the log reveals, but after it executes I am stuck at a blank input because the attach somehow doesn't terminate.
If I then press Ctrl+C twice, the container stops...
Edit3:
I am running these commands on the Docker host, and I want to execute the command in the Spigot Minecraft server running inside the container.
Apparently, you can use a named pipe to do this, as shown here: https://stackoverflow.com/a/26765590/2926055
# in the Docker container
$ mkfifo myfifo
$ java -jar minecraft_server.jar nogui < myfifo
# via your `docker exec`
$ echo 'say test' > myfifo
As noted, be careful you don't accidentally send an EOF character.
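One way to avoid that (my own suggestion, not part of the linked answer) is to park a long-lived dummy writer on the pipe, so the reader never sees EOF when an individual echo closes its end; sleep infinity assumes GNU coreutils:
# in the Docker container
$ mkfifo myfifo
$ sleep infinity > myfifo &   # holds a write end open so EOF never arrives
$ java -jar minecraft_server.jar nogui < myfifo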

Log to syslog / console in macOS from Docker container?

I set my Docker container to send logs to the host's logs on CentOS 7 (with --log-driver syslog). I'd like to replicate this on macOS (Sierra), but the messages don't seem to show up anywhere.
$ docker run --log-driver syslog -it busybox sh
/ # logger "Hello world!"
/ # exit
And:
$ sudo cat /var/log/system.log | grep "Hello world"
Password:
$
What configuration is necessary so that Docker system logging from any arbitrary container appears in a log file on macOS?
I can view this kind of default system logging if I do not configure log-driver, but Ruby's syslog implementation must log differently.
$ docker run --log-driver syslog -it centos /bin/bash
# yum install ruby -y
# ruby -e "require 'syslog/logger'; log = Syslog::Logger.new 'my_program'; log.info 'this line will be logged via syslog(3)'"
# exit
$ sudo tail -n 10000 /var/log/system.log | grep "syslog(3)"
$
It depends on how you are logging your message.
As mentioned in "Better ways of handling logging in containers" by Daniel Walsh:
One big problem with standard docker containers is that any service that writes messages to syslog or directly to the journal get dropped by default.
Docker does not record any logs unless the messages are written to STDOUT/STDERR. There is no logging service running inside of the container to catch these messages.
So a simple echo to STDOUT should end up in syslog, as illustrated by the chentex/random-logger image.
From Docker for Mac / Log and Troubleshooting, you can check directly if you see any logs after your docker run:
To view Docker for Mac logs at the command line, type this command in a terminal window or your favorite shell.
$ syslog -k Sender Docker
2017:
Check the content of ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log.
Syslog driver was added in PR 11458
2022:
Brice mentions in the comments:
In Docker Desktop for macOS 4.x the logs are now here:
$HOME/Library/Containers/com.docker.docker/Data/log/
# e.g.
$HOME/Library/Containers/com.docker.docker/Data/log/vm/console.log
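To follow those logs directly (using the path quoted above):
tail -f "$HOME/Library/Containers/com.docker.docker/Data/log/vm/console.log"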

Is it possible to view docker-compose logs in the output window running in Windows?

docker-compose on Windows cannot be run in interactive mode.
ERROR: Interactive mode is not yet supported on Windows.
Please pass the -d flag when using `docker-compose run`.
When running docker-compose in detached mode, little is displayed to the console, and the only output shown by docker-compose logs appears to be:
Attaching to
which obviously isn't very useful.
Is there a way of accessing these logs for transient containers?
I've seen that it's possible to change the docker daemon's logging to use a file (without the ability to select the log location). Following this approach, I could log to the predefined location and then run a copy script to move the files to a mounted volume so they persist before the container is torn down. This doesn't sound ideal.
The solution I've currently gone with (also not ideal) is to wrap the shell script parameter in a dynamically created proxy script that logs all output to the mounted volume.
tempFile=myproxy.sh
echo '#!/bin/bash' > "$tempFile"
echo 'do.the.thing.sh > /data/logs/log.txt 2>&1' >> "$tempFile"
echo 'echo finished >> /data/logs/log.txt' >> "$tempFile"
Which then I'd call
docker-compose run -d doTheThing $tempFile
instead of
docker-compose run -d doTheThing do.the.thing.sh
docker-compose logs doTheThing
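For what it's worth, the same proxy script can be generated a little more readably with a heredoc (a sketch with the same behavior as the echo version above):
tempFile=myproxy.sh
cat > "$tempFile" <<'EOF'
#!/bin/bash
do.the.thing.sh > /data/logs/log.txt 2>&1
echo finished >> /data/logs/log.txt
EOF
chmod +x "$tempFile"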
