docker-compose ignores DOCKER_HOST - bash

I am attempting to run 3 Docker images, MySQL, Redis and a project of mine on Bash for Windows (WSL).
To do that I have to connect to the Docker engine running on Windows, specifically on tcp://localhost:2375. I have appended the following line to .bashrc:
export DOCKER_HOST=tcp://127.0.0.1:2375
I can successfully run docker commands like docker ps or docker run hello-world but whenever I cd into my project directory and run
sudo docker-compose up --build to load the images and spin up the containers I get an error:
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
I know that if I use the -H argument I can supply the address, but I'd rather find a more permanent solution. For some reason docker-compose seems to ignore the DOCKER_HOST environment variable and I can't figure out why.

Your problem is sudo. It is a completely different program from your shell and does not pass along the exported environment unless you specifically tell it to. You can either add the following line to your /etc/sudoers (or /etc/sudoers.d/docker):
Defaults env_keep += DOCKER_HOST
Or you can just pass it directly to the command line:
sudo DOCKER_HOST=$DOCKER_HOST docker-compose up --build
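For example, a minimal sketch of adding that entry as a sudoers drop-in and checking its syntax (the file name docker matches the path mentioned above):
# Keep DOCKER_HOST in the environment when running commands through sudo
echo 'Defaults env_keep += "DOCKER_HOST"' | sudo tee /etc/sudoers.d/docker
sudo chmod 0440 /etc/sudoers.d/docker
# Validate the sudoers syntax before relying on it
sudo visudo -cf /etc/sudoers.d/docker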

Setting DOCKER_HOST tells every docker command-line invocation to use the HTTP API instead of the default local socket.
The HTTP API is not enabled by default:
$ sudo cat /lib/systemd/system/docker.service | grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
You can add -H tcp://127.0.0.1:2375 to turn on the HTTP API on localhost only. Usually, though, you turn on the API for remote servers with -H tcp://0.0.0.0:2375 (do this only behind a proper firewall).
So you need to change the ExecStart line in /lib/systemd/system/docker.service to:
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375 --containerd=/run/containerd/containerd.sock
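After editing the unit file, systemd has to be reloaded and Docker restarted for the new -H flag to take effect; a minimal sketch (the /version endpoint is just a quick way to confirm the API answers):
# Reload unit files and restart the daemon
sudo systemctl daemon-reload
sudo systemctl restart docker
# Confirm the HTTP API is reachable on the new port
curl http://127.0.0.1:2375/version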

Related

How do I prevent root access to my docker container

I am working on hardening our docker images, which I already have a bit of a weak understanding of. With that being said, the current step I am on is preventing the user from running the container as root. To me, that says "when a user runs 'docker exec -it my-container bash', he shall be an unprivileged user" (correct me if I'm wrong).
When I start up my container via docker-compose, the start script that is run needs to be as root since it deals with importing certs and mounted files (created externally and seen through a volume mount). After that is done, I would like the user to be 'appuser' for any future access. This question seems to match pretty well what I'm looking for, but I am using docker-compose, not docker run: How to disable the root access of a docker container?
This seems to be relevant, as the startup command differs from, say, Tomcat's. We are running a Spring Boot application that we start up with a simple 'java -jar jarFile', and the image is built using Maven's dockerfile-maven-plugin. With that being said, should I be changing the user to an unprivileged user before running that, or still after?
I believe changing the user inside of the Dockerfile instead of the start script will do this... but then it will not run the start script as root, thus blowing up on calls that require root. I had messed with using ENTRYPOINT as well, but could have been doing it wrong there. Similarly, using "user:" in the yml file seemed to make the start.sh script run as that user instead of root, so that wasn't working.
Dockerfile:
FROM parent/image:latest
ENV APP_HOME /apphome
ENV APP_USER appuser
ENV APP_GROUP appgroup
# Folder containing our application, i.e. jar file, resources, and scripts.
# This comes from unpacking our maven dependency
ADD target/classes/app ${APP_HOME}/
# Primarily just our start script, but some others
ADD target/classes/scripts /scripts/
# Need to create a folder that will be used at runtime
RUN mkdir -p ${APP_HOME}/data && \
chmod +x /scripts/*.sh && \
chmod +x ${APP_HOME}/*.*
# Create unprivileged user
RUN groupadd -r ${APP_GROUP} && \
useradd -g ${APP_GROUP} -d ${APP_HOME} -s /sbin/nologin -c "Unprivileged User" ${APP_USER} && \
chown -R ${APP_USER}:${APP_GROUP} ${APP_HOME}
WORKDIR $APP_HOME
EXPOSE 8443
CMD /opt/scripts/start.sh
start.sh script:
#!/bin/bash
# setup SSL, modify java command, etc
# run our java application
java -jar "boot.jar"
# Switch users to always be unprivileged from here on out?
# Whatever "hardening" wants... Should this be before starting our application?
exec su -s "/bin/bash" $APP_USER
app.yml file:
version: '3.3'
services:
  app:
    image: app_image:latest
    labels:
      c2core.docker.compose.display-name: My Application
      c2core.docker.compose.profiles: a_profile
    volumes:
      - "data_mount:/apphome/data"
      - "cert_mount:/certs"
    hostname: some-hostname
    domainname: some-domain
    ports:
      - "8243:8443"
    environment:
      - some_env_vars
    depends_on:
      - another-app
    networks:
      a_network:
        aliases:
          - some-network
networks:
  a_network:
    driver: bridge
volumes:
  data_mount:
  cert_mount:
docker-compose shell script:
docker-compose -f app.yml -f another-app.yml "$@"
What I would expect is that anyone trying to access the container internally will be doing so as appuser and not root. The goal is to prevent someone from messing with things they shouldn't (i.e. docker itself).
What is happening is that the script will change users after the app has started (proven via an echo command), but it doesn't seem to be maintained. If I exec into it, I'm still root.
As David mentions, once someone has access to the docker socket (either via API or with the docker CLI), that typically means they have root access to your host. It's trivial to use that access to run a privileged container with host namespaces and volume mounts that let the attacker do just about anything.
When you need to initialize a container with steps that run as root, I do recommend gosu over something like su since su was not designed for containers and will leave a process running as the root pid. Make sure that you exec the call to gosu and that will eliminate anything running as root. However, the user you start the container as is the same as the user used for docker exec, and since you need to start as root, your exec will run as root unless you override it with a -u flag.
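As a rough sketch (assuming gosu is installed in the image, and reusing APP_USER, APP_GROUP, and APP_HOME from the Dockerfile above), the start script could do its root-only setup and then drop privileges like this:
#!/bin/bash
set -e
# Root-only setup: import certs, fix ownership of the mounted volumes, etc.
chown -R "${APP_USER}:${APP_GROUP}" "${APP_HOME}/data"
# Drop privileges and replace this shell with the application process,
# so nothing is left running as root afterwards
exec gosu "${APP_USER}" java -jar "${APP_HOME}/boot.jar"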
There are additional steps you can take to lock down docker in general:
Use user namespaces. These are defined on the entire daemon, and require that you destroy all containers and pull images again, since the UID mapping affects the storage of image layers. The user namespace offsets the UIDs used by docker so that root inside the container is not root on the host, while inside the container you can still bind to low-numbered ports and run administrative activities (a minimal daemon configuration sketch follows this list).
Consider authz plugins. Open Policy Agent and Twistlock are two that I know of, though I don't know if either would allow you to restrict the user of a docker exec command. They likely require that you give users a certificate to connect to docker rather than giving them direct access to the docker socket, since the socket doesn't include any user details in the API requests it receives.
Consider rootless docker. This is still experimental, but since docker is not running as root, it has no access back to the host to perform root activities, mitigating many of the issues seen when containers are run as root.
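For the user-namespace option, a minimal sketch of the daemon-level setting (assuming Docker is managed by systemd):
# Enable user-namespace remapping for the whole daemon
# (this overwrites daemon.json; merge by hand if you already have settings there)
sudo tee /etc/docker/daemon.json <<'EOF'
{
  "userns-remap": "default"
}
EOF
sudo systemctl restart docker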
You intrinsically can't prevent root-level access to your container.
Anyone who can run any Docker command at all can always run any of these three commands:
# Get a shell, as root, in a running container
docker exec -it -u 0 container_name /bin/sh
# Launch a new container, running a root shell, on some image
docker run --rm -it -u 0 --entrypoint /bin/sh image_name
# Get an interactive shell with unrestricted root access to the host
# filesystem (cd /host/var/lib/docker)
docker run --rm -it -v /:/host busybox /bin/sh
It is generally considered best practice to run your container as a non-root user, either with a USER directive in the Dockerfile or running something like gosu in an entrypoint script, like what you show. You can't prevent root access, though, in the face of a privileged user who's sufficiently interested in getting it.
When the container is normally run from one host, there are some steps you can take.
Make sure it is not run from another host by looking for a secret in a directory mounted from the accepted host.
Change the .bashrc of the users on the host so that they start the container as soon as they log in. When your users need to do other things on the host, give them an account without docker access and let them sudo to a special user with docker access (or use a startdocker script with a setuid flag).
Start the container with a script that you made and hardened yourself, something like startserver.
#!/bin/bash
settings() {
# Add mount dirs. The homedir in the docker will be different from the one on the host.
mountdirs="-v /mirrored_home:/home -v /etc/dockercheck:/etc/dockercheck:ro"
usroptions="--user $(id -u):$(id -g) -v /etc/passwd:/etc/passwd:ro"
usroptions="${usroptions} -v/etc/shadow:/etc/shadow:ro -v /etc/group:/etc/group:ro"
}
# call function that fills special variables
settings
image="my_image:latest"
docker run -ti --rm ${usroptions} ${mountdirs} -w $HOME --entrypoint=/bin/bash "${image}"
Adding a variable like --env HOSTSERVER=${host} won't help with hardening; on another server one can simply add --env HOSTSERVER=servername_that_will_be_checked.
When the user logs in to the host, startserver will be called and the container started. After the call to startserver, add exit to the .bashrc.
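A minimal sketch of what the tail of such a .bashrc could look like (assuming the hardened script above is installed as /usr/local/bin/startserver, a path chosen here only for illustration):
# Drop the user straight into the container, then log them out when it exits
/usr/local/bin/startserver
exit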
Not sure if this works, but you can try it. Allow sudo access for the user/group with a limited set of commands, so that the sudo configuration only allows executing docker-cli. Create a shell script named docker-cli whose content runs the docker command, e.g. docker "$@". In this script, check the arguments and force the user to provide the --user or -u switch when executing docker's exec or attach command, and also validate that the user does not pass -u root. E.g.
sudo docker-cli exec -it containerid sh (fails)
sudo docker-cli exec -u root ... (fails)
sudo docker-cli exec -u mysql ... (passes)
You can even limit which docker commands a user can run inside this shell script.
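A rough sketch of such a wrapper (the name docker-cli and the exact checks are assumptions, not a complete hardening solution; attach is left out since docker attach has no --user flag):
#!/bin/bash
# docker-cli: forwards to docker but refuses root shells via exec
set -euo pipefail
if [[ "${1:-}" == "exec" ]]; then
    args="$*"
    # Require an explicit non-root user
    if [[ "$args" != *"--user"* && "$args" != *"-u "* ]]; then
        echo "docker-cli: exec requires an explicit non-root user (-u/--user)" >&2
        exit 1
    fi
    if [[ "$args" == *"--user root"* || "$args" == *"--user=root"* || "$args" == *"-u root"* || "$args" == *"-u 0"* ]]; then
        echo "docker-cli: exec as root is not allowed" >&2
        exit 1
    fi
fi
exec docker "$@"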

Connecting to windows shared drive from kubernetes using go

I need to connect to a Windows remote server (shared drive) from a Go API hosted in Alpine Linux. I tried TCP, SSH, and FTP, but none of them worked. Any suggestions or ideas on how to tackle this?
Before proceeding with debugging the Go code, you will need to do some "unskilled labour" within the container in order to ensure the prerequisites are met:
samba client is installed and daemons are running;
the target name gets resolved;
there are no connectivity issues (routing, firewall rules, etc);
there are share access permissions;
mounting remote volume is allowed for the container.
Connect to the container:
$ docker ps
$ docker exec -it container_id /bin/bash
Samba daemons are running:
$ smbd status
$ nmbd status
You use the right name format in your code and command lines:
UNC notation => \\server_name\share_name
URL notation => smb://server_name/share_name
Target name is resolvable
$ nslookup server_name.domain_name
$ nmblookup netbios_name
$ ping server_name
Samba shares are visible
$ smbclient -L //server [-U user] # list of shares
and accessible (ls, get, put commands provide expected output here)
$ smbclient //server/share
> ls
Try to mount the remote share as suggested by @cwadley (mount could be prohibited by default in a Docker container):
$ sudo mount -t cifs -o username=geeko,password=pass //server/share /mnt/smbshare
For investigation purposes you might use the Samba docker container available at GitHub, or even deploy your application in it since it contains Samba client and helpful command line tools:
$ sudo docker run -it -p 139:139 -p 445:445 -d dperson/samba
After you get this working at the Docker level, you could easily reproduce this in Kubernetes.
You might do the checks from within the running Pod in Kubernetes:
$ kubectl get deployments --show-labels
$ LABEL=label_value; kubectl get pods -l app=$LABEL -o custom-columns=POD:metadata.name,CONTAINER:spec.containers[*].name
$ kubectl exec pod_name -c container_name -- ping -c1 server_name
Having got it working in command line in Docker and Kubernetes, you should get your program code working also.
Also, there is a really thoughtful discussion on StackOverflow regarding the Samba topic:
Mount SMB/CIFS share within a Docker container
Windows shares use the SMB protocol. There are a couple of Go libraries for using SMB, but I have never used them so I cannot vouch for their utility. Here is one I Googled:
https://github.com/stacktitan/smb
Other options would be to ensure that the Windows share is mounted on the Linux host filesystem using cifs. Then you could just use the regular Go file utilities:
https://www.thomas-krenn.com/en/wiki/Mounting_a_Windows_Share_in_Linux
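As a rough sketch of that host-level mount (server, share, mount point, and credentials file are placeholders; mount.cifs comes from the cifs-utils package):
# One-off mount of the Windows share on the Linux host
sudo mkdir -p /mnt/winshare
sudo mount -t cifs -o credentials=/etc/smb-credentials,uid=$(id -u) //server_name/share_name /mnt/winshare
# Equivalent /etc/fstab entry for a persistent mount
# //server_name/share_name  /mnt/winshare  cifs  credentials=/etc/smb-credentials,uid=1000  0  0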
Or, you could install something like Cygwin on the Windows box and run an SSH server. This would allow you to use SCP:
https://godoc.org/github.com/tmc/scp

Problem in executing a shell script present on host using docker exec

I'm trying to execute a script on the master node of AWS EMR cluster. The intention is to create a new conda env and link it to jupyter. I'm following this doc from AWS. Problem is, whatever be the content of the script, I'm getting the same error: bash: /home/hadoop/scripts/bootstrap.sh: No such file or directory while executing sudo docker exec jupyterhub bash /home/hadoop/scripts/bootstrap.sh. I've made sure the sh file is in the correct location.
But if I copy the bootstrap.sh file inside the container, and then run the same docker exec cmd, it's working fine. What am I missing here? I've tried with a simple script with the following entries, but it throws the same error:
#!/bin/bash
echo "Hello"
The doc clearly says:
Kernels are installed within the Docker container. The easiest way to
accomplish this is to create a bash script with installation commands,
save it to the master node, and then use the sudo docker exec
jupyterhub script_name command to run the script within the jupyterhub
container.
The docker exec command runs a command within the container's namespaces. One of those namespaces is the filesystem. So unless the command is part of the image, written into the container directly, or you have mounted a host volume to map a host directory into the container, you won't be able to execute it. A host volume could look like:
docker run --name your_container -v /host/scripts:/container/scripts $your_image
docker exec -it your_container /container/scripts/test.sh
That host volume could be the same path on both the host and the container.
If it is a shell script, you could use I/O redirection, e.g.:
docker exec -i $container_id /bin/bash <local_script.sh
but be aware that you cannot do interactive stuff this way since the script content has replaced your terminal as stdin. This works because the shell inside the container is just processing commands from stdin.
Other than those scenarios, I don't know what to tell you other than the documentation from AWS appears to be wrong.

Docker on AWS - Environment Variables not inheriting from host to container

I have a script (hosted on GitHub) that does the following:
Creates an EC2 instance on AWS
Saves the local IP (private IP address) as an environment variable $LOCALIP
Installs Docker (official repo)
Updates the base instance (Ubuntu 16.04 LTS)
Pulls a custom image of mine
Runs said image with -e LOCALIP, trying to pass the host's environment variable to the container (I have also tried -e LOCALIP=$LOCALIP)
However, when I docker exec into the container on that instance and run echo $LOCALIP it displays nothing. Running env shows me that LOCALIP is there, but nothing is set against it.
If I destroy the container and remake it using the exact same line from the original script (with -e LOCALIP=$LOCALIP) it works. I need this process automated, however, and some additional help would be greatly appreciated.
Essentially sudo docker run -dit -e LOCALIP -p 1099:1099 -p 50000:50000 screamingjoypad/armada-server /bin/bash is not sharing the hosts LOCALIP variable.
UPDATE
Trying the suggestions below, I added the line source /etc/bash.bashrc to my script, but this still does not work. I'm still getting a blank when running echo $LOCALIP in the container...
The problem is because of this line of your shell script:
echo "export LOCALIP=$(hostname -i)" >> /etc/bash.bashrc
After this command is executed, the export does not take effect in the current session; it only takes effect after your next login.
To make the LOCALIP environment variable take effect immediately, add this line after the echo "export ... command:
source /etc/bash.bashrc
I believe I've solved it. I'm now using the AWS metadata to harvest the private IP address using the following addition to the script:
sudo docker run -dit -e LOCALIP=$(curl http://169.254.169.254/latest/meta-data/local-ipv4) -p 1099:1099 -p 50000:50000 screamingjoypad/armada-server /bin/bash
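If the instance requires IMDSv2, the same value can be fetched with a session token first; a sketch under that assumption:
# Request a short-lived metadata token, then fetch the private IP with it
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
LOCALIP=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" http://169.254.169.254/latest/meta-data/local-ipv4)
sudo docker run -dit -e LOCALIP="$LOCALIP" -p 1099:1099 -p 50000:50000 screamingjoypad/armada-server /bin/bash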

How to use multiple terminals in the docker container?

I know it is weird to use multiple terminals in the docker container.
My purpose is to test some commands and finally build a dockerfile from these commands.
So I need to use multiple terminals, say, two. One is running some commands, the other is used to test that commands.
If I use a real machine, I can ssh it to use multiple terminals, but in docker, how can I do this?
Maybe the solution is to run docker with CMD /bin/bash, and in that bash, use screen?
EDIT
In my situation, one shell runs a server program and the other runs a client program to test the server program. Because the server program and client program are compiled together, the default link method in docker is not suitable.
The docker way would be to run the server in one container and the client in another. You can use links to make the server visible from the client, and you can use volumes to make the files on the server available from the client (a sketch of that two-container approach follows the ssh example at the end of this answer). If you really want to have two terminals into the same container, there is nothing stopping you from using ssh. I tested this docker server:
from: https://docs.docker.com/examples/running_ssh_service/
# sshd
#
# VERSION 0.0.1
FROM ubuntu:14.04
MAINTAINER Thatcher R. Peskens "thatcher@dotcloud.com"
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
You need to base this image on your image, or the other way around, to get all the functionality together. After you have built and started your container you can get its IP using
docker inspect <id or name of container>
From the docker host you can now ssh in as root with the password from the Dockerfile. Now you can spawn as many ssh clients as you want. I tested with:
while true; do echo "test" >> tmpfile; sleep 1; done
from one client and
tail -f tmpfile
from another
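A minimal sketch of the two-container approach mentioned at the top of this answer (image and volume names are placeholders; a user-defined network is used here in place of legacy links):
# Shared network and volume so the client can reach the server and see its files
docker network create app_net
docker volume create app_data
# Server container, in one terminal
docker run -d --name server --network app_net -v app_data:/data my_server_image
# Client container, in a second terminal; the server is reachable by the name "server"
docker run -it --rm --network app_net -v app_data:/data my_client_image /bin/bash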
If I understand the problem correctly, you can use nsenter.
Assuming you have a running container named nginx (with nginx started), run the following command from the host:
nsenter -m -u -i -n -p -t `docker inspect --format {{.State.Pid}} nginx`
This will start a program in the given namespaces of that PID (by default $SHELL).
You can run more than one shell by issuing the command more than once (from the host). Then you can run any binary that exists in the given container, or tail, rm, etc. files. For example, tail the log file of nginx.
Further information can be found in the nsenter man page.
If you want to just play around, you can run sshd in your image and explore it the way you are used to:
docker run -d -p 22 your_image /usr/sbin/sshd -D
When you are done with your explorations, you can proceed to create Dockerfile as usual.
