I set my Docker container to send logs to the host's syslog on CentOS 7 (with --log-driver syslog). I'd like to replicate this on macOS (Sierra), but the log messages don't seem to show up anywhere.
$ docker run --log-driver syslog -it busybox sh
/ # logger "Hello world!"
/ # exit
And:
$ sudo cat /var/log/system.log | grep "Hello world"
Password:
$
What configuration is necessary to make it possible for any Docker system logging command for any arbitrary container to appear in a log file on macOS?
I can see this kind of output in the default system logging if I do not configure a log driver. But Ruby's syslog implementation must log differently.
$ docker run --log-driver syslog -it centos /bin/bash
# yum install ruby -y
# ruby -e "require 'syslog/logger'; log = Syslog::Logger.new 'my_program'; log.info 'this line will be logged via syslog(3)'"
# exit
$ sudo tail -n 10000 /var/log/system.log | grep "syslog(3)"
$
It depends on how you are logging your message.
As mentioned in "Better ways of handling logging in containers" by Daniel Walsh:
One big problem with standard docker containers is that any service that writes messages to syslog or directly to the journal gets dropped by default.
Docker does not record any logs unless the messages are written to STDOUT/STDERR. There is no logging service running inside the container to catch these messages.
So a simple echo should end up in syslog, as illustrated by the chentex/random-logger image.
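The distinction can be sketched without Docker at all: a log driver captures only what the container's process writes to stdout/stderr, while syslog(3) messages go to a socket a plain container does not provide. A minimal simulation of the capture side, where the subshell stands in for the container:

```shell
# Only what the "container" writes to stdout/stderr reaches the
# capturing side (as Docker's log drivers do); a write to a syslog
# socket would never appear in this stream.
captured=$(sh -c 'echo "to stdout"; echo "to stderr" 1>&2' 2>&1)
echo "$captured"
```

Both lines show up in the captured stream, which is exactly the stream a log driver such as syslog forwards on.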
From Docker for Mac / Logs and Troubleshooting, you can check directly whether you see any logs after your docker run:
To view Docker for Mac logs at the command line, type this command in a terminal window or your favorite shell.
$ syslog -k Sender Docker
2017:
Check the content of ~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/log.
The syslog driver was added in PR 11458.
2022:
Brice mentions in the comments:
In Docker Desktop for macOS 4.x the logs are now here:
$HOME/Library/Containers/com.docker.docker/Data/log/, e.g.
$HOME/Library/Containers/com.docker.docker/Data/log/vm/console.log
Related
I have the following statement in my .gitlab-ci.yml:
( docker-compose up & ) | ( tee /dev/tty & ) | grep -m 1 "Compiled successfully"
It should show the output of docker-compose up in the web terminal and wait for a certain string indicating that the containers are ready.
But /dev/tty fails with the error: tee: /dev/tty: No such device or address
The output of tty is "not a tty". How do I find out where the output is actually written? The GitLab runner runs on Ubuntu 18.04.2.
I've solved this using:
- docker-compose up -d
- docker-compose logs -f &
This will keep outputting the logs of docker-compose in the foreground.
Notice this will generate mixed output of both your containers and of any subsequent commands your .gitlab-ci.yml contains.
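The wait-for-a-marker half of the original pattern can be sketched without a running Compose project; here a plain subshell stands in for docker-compose logs -f, and the marker string is the same arbitrary example as above:

```shell
# grep -m 1 exits as soon as the marker line appears, so a CI job can
# proceed the moment the containers report readiness. The subshell
# below plays the role of the streaming `docker-compose logs -f`.
marker="Compiled successfully"
ready=$( (echo "starting..."; echo "$marker"; echo "later output") | grep -m 1 "$marker" )
echo "$ready"
```

Because grep exits on the first match, the pipeline terminates even though the log stream would otherwise keep going, which is what makes this usable as a wait condition.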
I need to connect to a remote Windows server (shared drive) from a Go API hosted in Alpine Linux. I tried TCP, SSH, and FTP; none of them worked. Any suggestions or ideas on how to tackle this?
Before proceeding with debugging the Go code, some "unskilled labour" is needed inside the container to ensure the prerequisites are met:
samba client is installed and daemons are running;
the target name gets resolved;
there are no connectivity issues (routing, firewall rules, etc);
you have share access permissions;
mounting remote volume is allowed for the container.
Connect to the container:
$ docker ps
$ docker exec -it container_id /bin/bash
The Samba daemons are running:
$ pgrep smbd
$ pgrep nmbd
Make sure you use the right name format in your code and on the command line:
UNC notation => \\server_name\share_name
URL notation => smb://server_name/share_name
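As a small illustrative helper (the server and share names are placeholders), the UNC form can be translated into the //server/share form that smbclient and mount.cifs expect:

```shell
# Convert a UNC path to the slash form used by smbclient/mount.cifs.
# The path below is a placeholder, not a real server.
unc='\\server_name\share_name'
smb_path=$(printf '%s' "$unc" | tr '\\' '/')
echo "$smb_path"   # //server_name/share_name
```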
Target name is resolvable
$ nslookup server_name.domain_name
$ nmblookup netbios_name
$ ping server_name
Samba shares are visible
$ smbclient -L //server [-U user] # list of shares
and accessible (ls, get, put commands provide expected output here)
$ smbclient //server/share
> ls
Try to mount the remote share as suggested by @cwadley (mount could be prohibited by default in a Docker container):
$ sudo mount -t cifs -o username=geeko,password=pass //server/share /mnt/smbshare
For investigation purposes you might use the Samba docker container available at GitHub, or even deploy your application in it since it contains Samba client and helpful command line tools:
$ sudo docker run -it -p 139:139 -p 445:445 -d dperson/samba
After you get this working at the Docker level, you could easily reproduce this in Kubernetes.
You might do the checks from within the running Pod in Kubernetes:
$ kubectl get deployments --show-labels
$ LABEL=label_value; kubectl get pods -l app=$LABEL -o custom-columns=POD:metadata.name,CONTAINER:spec.containers[*].name
$ kubectl exec pod_name -c container_name -- ping -c1 server_name
Having got it working on the command line in Docker and Kubernetes, you should be able to get your program code working as well.
Also, there is a really thoughtful discussion on Stack Overflow regarding the Samba topic:
Mount SMB/CIFS share within a Docker container
Windows shares use the SMB protocol. There are a couple of Go libraries for using SMB, but I have never used them so I cannot vouch for their utility. Here is one I Googled:
https://github.com/stacktitan/smb
Another option would be to ensure that the Windows share is mounted on the Linux host filesystem using cifs. Then you could just use the regular Go file utilities:
https://www.thomas-krenn.com/en/wiki/Mounting_a_Windows_Share_in_Linux
Or, you could install something like Cygwin on the Windows box and run an SSH server. This would allow you to use SCP:
https://godoc.org/github.com/tmc/scp
I'm running Docker Toolbox on VirtualBox on Windows 10.
I'm having an annoying issue where if I docker exec -it mycontainer sh into a container to inspect things, the shell will abruptly exit back to the host shell at random while I'm typing commands. Some experimenting reveals that pressing two letters at the same time (as is common when touch typing) causes the exit.
The container will still be running.
Any ideas what this is?
More details
Here's a minimal docker image I'm running inside. Essentially, I'm trying to deploy kubernetes clusters to AWS via kops, but because I'm on Windows, I have to use a container to run the kops commands.
FROM alpine:3.5
#install aws-cli
RUN apk add --no-cache \
bind-tools \
python \
python-dev \
py-pip \
curl
RUN pip install awscli
#install kubectl
RUN curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
RUN chmod +x ./kubectl
RUN mv ./kubectl /usr/local/bin/kubectl
#install kops
RUN curl -LO https://github.com/kubernetes/kops/releases/download/$(curl -s https://api.github.com/repos/kubernetes/kops/releases/latest | grep tag_name | cut -d '"' -f 4)/kops-linux-amd64
RUN chmod +x kops-linux-amd64
RUN mv kops-linux-amd64 /usr/local/bin/kops
I build this image:
docker build -t mykube .
I run this in the working directory of the project I'm trying to deploy:
docker run -dit -v "${PWD}":/app mykube
I exec into the shell:
docker exec -it $containerid sh
Inside the shell, I start running AWS commands as per here.
Here's some example output:
##output of previous dig command
;; Query time: 343 msec
;; SERVER: 10.0.2.3#53(10.0.2.3)
;; WHEN: Wed Feb 14 21:32:16 UTC 2018
;; MSG SIZE rcvd: 188
##me entering a command
/ # aws s3 mb s3://clus
##shell exits abruptly to host shell while I'm writing
DavidJ@DavidJ-PC001 MINGW64 ~/git-workspace/webpack-react-express (master)
##container is still running
$ docker ps --all
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
37a341cfde83 mykube "/bin/sh" 5 minutes ago Up 3 minutes gifted_bhaskara
##nothing in docker logs
$ docker logs --details 37a341cfde83
A more useful update
Adding the -D flag gives an important clue:
$ docker -D exec -it 04eef8107e91 sh -x
DEBU[0000] Error resize: Error response from daemon: no such exec
/ #
/ #
/ #
/ #
/ # sdfsdfjskfdDEBU[0006] [hijack] End of stdin
DEBU[0006] [hijack] End of stdout
Also, I've ascertained that what specifically is causing the issue is pressing two letters at the same time (which is quite common when I'm touch typing).
There appears to be a GitHub issue for this here, though that one is for Docker for Windows, not Docker Toolbox.
This issue appears to be a bug with Docker and Windows. See the GitHub issue here.
As a workaround, prefix your docker exec command with winpty, which comes with Git Bash.
eg.
winpty docker exec -it mycontainer sh
Check the USER you are logged in as when doing a docker exec -it yourContainer sh.
Its .bashrc, .bash_profile, or .profile might include a command which would explain why the session abruptly quits.
Check also the logs associated to that container (docker logs --details yourContainer) in order to see if that closed session generated anything in stderr.
Reasons I can think of for a process to be killed in your container include:
Pid 1 exiting in the container. This would cause the container to go into a stopped state, but a restart policy could have restarted it. See your docker container inspect output to see if this is happening. This is the most common cause I've seen.
Out of memory on the OS, where the kernel would then kill processes. View your system logs and dmesg to see if this is happening.
Exceeding the container memory limit, where docker would kill the container, possibly restarting it depending on your policy. You would again view docker container inspect but the status will have different details.
Process being killed on the host, potentially by a security tool.
Perhaps a selinux or apparmor policy being violated.
Networking issues. Never encountered it myself, but since docker is a client / server design, there's a potential for a network disconnect to drop the exec session.
The server itself is failing, and you'd see various logs in syslog / dmesg indicating problems it can't recover from.
I built an image with Python installed, plus a Python application: a Hello, World! program that just prints "Hello, World!" on the screen. Dockerfile:
FROM python:2-onbuild
CMD ["python", "./helloworld.py"]
In the console I execute:
docker run xxx/zzz
I can see the Hello, World! output. Now I am trying to execute the same application using a task on ECS. I have already pushed the image to Docker Hub.
How can I see the output Hello, World!? Is there a way to see that my container runs correctly?
docker logs <container id> will show you all the output of the container run. If you're running it on ECS, you'll probably need to set DOCKER_HOST=tcp://ip:port for the host that ran the container.
To view the logs of a Docker container in real time, use the following command:
docker logs -f <CONTAINER>
The -f or --follow option will show live log output. If the container is stopped, it will fetch the logs it produced up to that point.
Maybe, besides tracing the logs, it is a better idea to enter the container with:
docker exec -it CONTAINER_ID /bin/sh
and investigate your process from inside.
You can log in onto your container instance and do, for example, a docker ps there.
This guide describes how to connect to your container instance:
http://docs.aws.amazon.com/AmazonECS/latest/developerguide/troubleshooting.html#instance-connect
You can use basic output redirection to a file.
Whatever command you have running in your Dockerfile, at the end of the command put >> /root/file.txt
So...
RUN ifconfig >> /root/file.txt
RUN curl google.com >> /root/file.txt
Then all you need to do is log in to the container and run cat /root/file.txt to see exactly what was on screen. It may also be possible to copy the file from the container to the host (e.g. with docker cp).
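The same idea can be tried outside Docker first; this sketch uses mktemp instead of a fixed /root path:

```shell
# Append each command's output to a file, then read the file back,
# mirroring the `>> /root/file.txt` trick from the Dockerfile.
log=$(mktemp)
echo "first command output"  >> "$log"
echo "second command output" >> "$log"
collected=$(cat "$log")
echo "$collected"
rm -f "$log"
```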
I know it is weird to use multiple terminals in the docker container.
My purpose is to test some commands and finally build a Dockerfile from these commands.
So I need to use multiple terminals, say, two: one runs some commands, the other is used to test those commands.
If I use a real machine, I can ssh it to use multiple terminals, but in docker, how can I do this?
Maybe the solution is to run docker with CMD /bin/bash, and in that bash, use screen?
EDIT
In my situation, one shell runs a server program and the other runs a client program to test the server. Because the server and client programs are compiled together, the default link method in Docker is not suitable.
The Docker way would be to run the server in one container and the client in another. You can use links to make the server visible from the client, and you can use volumes to make the files on the server available from the client. If you really want two terminals to the same container, nothing stops you from using ssh. I tested this sshd Docker image:
from: https://docs.docker.com/examples/running_ssh_service/
# sshd
#
# VERSION 0.0.1
FROM ubuntu:14.04
MAINTAINER Thatcher R. Peskens "thatcher@dotcloud.com"
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
You need to base this image on your image (or the other way around) to get all the functionality together. After you have built and started your container, you can get its IP using
docker inspect <id or name of container>
From the Docker host you can now ssh in as root with the password from the Dockerfile. Now you can spawn as many ssh clients as you want. I tested with:
while true; do echo "test" >> tmpfile; sleep 1; done
from one client and
tail -f tmpfile
from another
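Collapsed into one non-blocking script (tail -n instead of tail -f, so it terminates), the writer/reader demo looks like this:

```shell
# One script plays both terminals: the loop is the writer side,
# tail is the reader side catching up on what was written.
tmp=$(mktemp)
for i in 1 2 3; do echo "test $i" >> "$tmp"; done   # writer terminal
last=$(tail -n 1 "$tmp")                            # reader terminal
echo "$last"
rm -f "$tmp"
```

In the actual two-terminal setup you would keep tail -f running in the second ssh session so it follows the file as the first session appends to it.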
If I understand the problem correctly, you can use nsenter.
Assuming you have a running container named nginx (with nginx started), run the following command from the host:
nsenter -m -u -i -n -p -t `docker inspect --format {{.State.Pid}} nginx`
This will start a program in the namespaces of the given PID (the default is $SHELL).
You can run more than one shell by issuing the command more than once (from the host). Then you can run any binary that exists in the given container, or tail, rm, etc. its files. For example, tail the log file of nginx.
Further information can be found in the nsenter man page.
If you want to just play around, you can run sshd in your image and explore it the way you are used to:
docker run -d -p 22 your_image /usr/sbin/sshd -D
When you are done with your explorations, you can proceed to create Dockerfile as usual.