I have:
Windows 10
Docker for Windows with WSL2 integration (using Ubuntu 20.04)
I created a simple tftpd server to run in a Linux (Ubuntu 16.04) container and exposed port 69. Here is the Dockerfile:
FROM ubuntu:16.04
RUN apt-get update && \
apt-get install --no-install-recommends -y \
tftpd-hpa \
nfs-kernel-server && \
# Clean rootfs
apt-get clean all && \
apt-get autoremove -y && \
apt-get purge && \
rm -rf /var/lib/{apt,dpkg,cache,log}
# Export the TFTP server port
EXPOSE 69/udp
WORKDIR /
VOLUME ["/var/lib/tftpboot"]
VOLUME ["/nfs"]
VOLUME ["/etc/dhcp"]
VOLUME ["/etc/default"]
COPY entrypoint.sh /entrypoint.sh
# Set correct entrypoint permission
RUN chmod u+x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]
The script entrypoint.sh just starts the tftpd-hpa service on start-up and then waits until Ctrl+C is pressed:
#!/bin/sh
# Start the TFTP server, then block so the container keeps running
service tftpd-hpa start
echo "Started..."
while true; do sleep 1; done
I spin up a container using a PowerShell script like this:
docker run --rm -ti --privileged `
-p 69:69/udp `
-v ${PWD}/tftp:/var/lib/tftpboot `
-v ${PWD}/nfs:/nfs `
-v ${PWD}/etc/exports:/etc/exports `
-v ${PWD}/etc/default/nfs-kernel-server:/etc/default/nfs-kernel-server `
-v ${PWD}/etc/network/interfaces:/etc/network/interfaces `
nfs_tftp_server:latest;
As you can see, I am mapping port 69 in the container to port 69 on the host. When I run netstat -an | find "UDP" in a command window, I do see entries for port 69, which seems to indicate the port mapping is okay. I also checked the status of the tftpd-hpa service in the container, and it is running.
I changed the Firewall settings to allow inbound and outbound traffic on port 69. I also updated the firewall to allow the Trivial File Transfer Protocol App (TFTP.EXE) to communicate through it.
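For reference, a rough PowerShell equivalent of the firewall changes I made (the rule names and the TFTP.EXE path are my own guesses, not exactly what I clicked through in the UI):
New-NetFirewallRule -DisplayName "TFTP UDP 69 In" -Direction Inbound -Protocol UDP -LocalPort 69 -Action Allow
New-NetFirewallRule -DisplayName "TFTP UDP 69 Out" -Direction Outbound -Protocol UDP -LocalPort 69 -Action Allow
New-NetFirewallRule -DisplayName "TFTP client app" -Direction Outbound -Program "C:\Windows\System32\TFTP.EXE" -Action Allow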
The problem is that when I try a TFTP transfer:
tftp 0.0.0.0 GET test.txt
it just hangs and times out, even though the file does exist.
Why doesn't it work? When I try the same from my Ubuntu virtual machine, it works, so this must be related to Docker on Windows.
Related
I want to create a bash script that installs all required software to run Docker, creates a new image, and then runs all required processes in a container. My bash script looks like this:
#! /bin/sh
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo groupadd docker
sudo gpasswd -a $USER docker
docker pull ros:indigo-robot
docker build -t myimage .
docker run --name myimage-cont -dit myimage
And the Dockerfile:
FROM ros:indigo-robot
RUN apt-get update && apt-get install -y \
git \
ros-indigo-ardrone-autonomy
I am new to Docker and do not know best practices, but what I need to achieve is running 3 different processes at the same time:
- roscore
- rosrun ardrone_autonomy ardrone_driver
- rostopic pub ardrone/takeoff std_msgs/Empty "{}" --once
I was able to achieve it 'manually' by opening 3 terminals and executing docker exec myimage-cont... commands (see the sketch below). However, what I need is for these to run automatically once I execute my bash script. What is the best way to do it?
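For reference, the manual approach was essentially this (my reconstruction; the question truncates the exact docker exec commands, so the invocations below are an assumption):
# terminal 1
docker exec -it myimage-cont roscore
# terminal 2
docker exec -it myimage-cont rosrun ardrone_autonomy ardrone_driver
# terminal 3
docker exec -it myimage-cont rostopic pub ardrone/takeoff std_msgs/Empty "{}" --once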
I have this Dockerfile:
FROM chekote/gulp:latest
USER root
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y sudo libltdl-dev
ARG dockerUser='my-user-name'
ARG group='docker'
# create the group if it does not exist
RUN if ! grep -q -E "^$group:" /etc/group; then groupadd $group; fi
# create the user if it does not exist
RUN if ! grep -q -E "^$dockerUser:" /etc/passwd; then useradd -c 'Docker image creator' -m -s '/bin/bash' -g $group $dockerUser; fi
# add the user to the group (in case the user already existed and was not created by the line above)
RUN usermod -a -G ${group} ${dockerUser}
# set default user that runs the container
USER ${dockerUser}
I build it this way:
docker build --tag my-gulp:latest .
and finally run by script this way:
#!/bin/bash
image="my-gulp:latest";
workDir='/home/gulp/project';
docker run -it --rm \
-v $(pwd):${workDir} \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
${image} /bin/bash
That logs me into the docker container properly, but when I want to list images
docker images
or try to pull an image
docker pull hello-world:latest
I get this error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.38/images/json: dial unix /var/run/docker.sock: connect: permission denied
How can I create a docker image from chekote/gulp:latest so that I can use docker inside it without this error?
Or is the error caused by a wrong docker run command?
A quick way to avoid that: add your user to the docker group.
sudo gpasswd -a $USER docker
Then set the proper permissions.
sudo setfacl -m "user:$USER:rw" /var/run/docker.sock
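You can verify that the ACL took effect with getfacl (illustrative; the exact output varies per system):
getfacl /var/run/docker.sock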
You should be good from there. (Note that the group change from gpasswd only takes effect in new login sessions, while the setfacl change applies immediately.)
The permission matching happens only on numeric user ID and group ID. If the socket file is mode 0660 and owned by user ID 0 and group ID 32, and you're calling it as a user with user ID 1000 and group IDs 1000 and 16, it doesn't matter if one /etc/group file names gid 32 as docker and the other one names gid 16 the same; the numeric gids are different and you can't access the file. Also, since the actual numeric gid of the Docker group will vary across systems, this isn't something you can bake into the Dockerfile.
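To see the numeric IDs in play, you can compare the socket's owner and group with your own IDs (illustrative commands, not specific to any distribution):
# on the host: the socket's numeric owner, group, and mode
stat -c 'uid=%u gid=%g mode=%a' /var/run/docker.sock
# as the calling user: your effective uid and gids
id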
Many Docker images just run as root; if they do, they can access a bind-mounted Docker socket file regardless of its permissions.
If you run as a non-root user, you can use the docker run --group-add option to add a (numeric) gid to the effective user; it doesn't specifically need to be mentioned in the /etc/group file. On a Linux host you might run:
docker run --group-add $(stat -c '%g' /var/run/docker.sock) ...
You wouldn't usually install sudo in a Dockerfile: it doesn't work well for non-interactive programs, you usually don't do a whole lot in interactive shells because of the ephemeral nature of containers, and you can always use docker exec -u 0 to get a root shell. Installing some non-root user, though, is often considered a best practice. You could reduce the Dockerfile to:
FROM node:8
RUN apt-get update
# Trying to use the host's `docker` binary may not work well
RUN apt-get install -y docker.io
# Install the single node tool you need
RUN npm install -g gulp
# Get your non-root user
RUN adduser --disabled-password --gecos '' myusername
# Normal Dockerfile bits
WORKDIR ...
COPY ...
RUN gulp
USER myusername
CMD ["npm", "run", "start"]
(That Docker base image has a couple of things that don't really match Docker best practices, and doesn't seem to be updated routinely; I'd just use the standard node image as a base and add the one build tool you need on top of it.)
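Putting those pieces together, a build-and-run sequence might look like this (a sketch, reusing the tag and project path from the question and dropping the /usr/bin/docker mount, since the docker CLI is installed in the image above):
docker build --tag my-gulp:latest .
docker run -it --rm \
  --group-add $(stat -c '%g' /var/run/docker.sock) \
  -v $(pwd):/home/gulp/project \
  -v /var/run/docker.sock:/var/run/docker.sock \
  my-gulp:latest /bin/bash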
Open a terminal and run this command:
sudo chmod 666 /var/run/docker.sock
Let me know the results. (Note that this makes the Docker socket readable and writable by every user on the host.)
You need the --privileged flag with your docker run command.
By the way, you can just use the official docker-in-docker image from Docker for this kind of use case:
https://asciinema.org/a/24707
https://hub.docker.com/_/docker/
The error has nothing to do with the docker pull or docker image subcommands; rather, you need to call the docker command as a user with write access to the docker socket (for example, by being root, using sudo, or by being in the docker group).
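In practice that means one of the following (illustrative):
sudo docker images                 # run the command via sudo
sudo usermod -aG docker $USER      # or add yourself to the docker group,
                                   # then log out and back in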
I am a beginner in Docker. I have installed docker-ce on my Ubuntu 18.04 machine using the command sudo apt install docker-ce.
As part of a tutorial, I am trying to establish a connection between containers by executing the series of commands below.
The following command opens ports 1234/4321 to listen to traffic inside/outside of the containers I am going to use:
root@ghost-SVE9999CNS:/home/ghost# docker run --rm -ti -p 1234:1234 -p 4321:4321 --name echo-server ubuntu:18.04 bash
Now, I want to run netcat commands within the container's bash terminal:
root@xxxyyyyzzzz12:/# nc -lp 1234 | nc -lp 4321
Once I invoke the above command from my terminal, it gives "nc: command not found" errors:
bash: nc: command not found
bash: nc: command not found
I have since done a fair amount of research and never found an official Docker solution for this problem.
Could anyone please help me with installing netcat inside the container?
I've tried commands like the following:
apt-get install netstat
apt-get install nc
But, no luck.
nc is not installed by default in the ubuntu:18.04 image, so you have to install it:
apt-get update && apt-get install -y netcat
apt-get update is necessary to first update the list of packages (when the container is started, this list is empty). Once that's done, you can run nc -lp 1234 from the container.
To test that everything works as you expect, you can then (see the example after this list):
- run from a shell (on your host) something like telnet container_ip 1234 or telnet localhost 1234 (since the ports have been forwarded)
- type something
- look at the container output to see what you typed in your host shell
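For example, a minimal round trip might look like this (a sketch; names and typed text are placeholders):
# inside the container
apt-get update && apt-get install -y netcat
nc -lp 1234
# on the host, in a second shell (the port is forwarded)
telnet localhost 1234
# anything you type now appears in the container's output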
It is not necessary to use ubuntu:18.04 to follow the tutorial; you can use ubuntu:14.04 for example, in which nc is installed by default:
docker run --rm -ti -p 1234:1234 -p 4321:4321 --name echo-server ubuntu:14.04 bash
Here are my docker run command and my Dockerfile. Is there a reason why it requires -t, and why it isn't working on ECS? I also don't understand what -t does, so an explanation of that would be appreciated too. Thanks for any help.
This is just a basic container that connects to my RDS instance and runs WordPress. I don't have any plugins, and Shapely is the theme I'm using.
The command: docker run -t --name wordpress -d -p 80:80 dockcore/wordpress
FROM ubuntu
# apt-get clean all
RUN apt-get -y update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install unzip wget mysql-client mysql-server apache2 libapache2-mod-php7.0 pwgen python-setuptools vim-tiny php7.0-mysql php7.0-lda
RUN rm -fr /var/cache/*  # files not needed
ADD wordpress.conf /etc/apache2/sites-enabled/000-default.conf
# Wordpress install
RUN wget -P /var/www/html/ https://wordpress.org/latest.zip
RUN unzip /var/www/html/latest.zip -d /var/www/html/
RUN rm -fr /var/www/html/latest.zip
# Copy the wp config file
RUN cp /var/www/html/wordpress/wp-config-sample.php /var/www/html/wordpress/wp-config.php
# Expose web port
EXPOSE 80
# wp config for database
RUN sed -ie 's/database_name_here/wordpress/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/username_here/root/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/password_here/password/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/localhost/wordpressrds.xxxxxxxxxxxxxx.ap-southeast-2.rds.amazonaws.com:3306/g' /var/www/html/wordpress/wp-config.php
RUN rm -fr /var/www/html/wordpress/wp-content/themes/*
RUN rm -fr /var/www/html/wordpress/wp-content/plugins/*
ADD /shapely /var/www/html/wordpress/wp-content/themes/
# Start apache on boot
RUN echo "service apache2 start" >> ~/.bashrc
I see a couple of problems. First of all, your container should never require -t in order to run unless it is a temporary container that you plan to interact with using a shell. Background containers shouldn't require an interactive TTY interface; they just run in the background autonomously.
Second, in your Dockerfile I see a lot of RUN statements, which are basically the build-time commands for setting up the initial state of the container, but you don't have any CMD statement.
You need a CMD, which is the process that actually kicks off when you run the container. RUN statements execute only once, during docker build, and their results are saved into the container image. When you run the container, it starts from the state set up by those RUN statements, and the CMD statement starts the running process.
So it looks like the last RUN in your Dockerfile should be a CMD, since the Apache server is the long-running process that you want to run with the container state you previously set up using all those RUN statements.
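For example, the final RUN echo "service apache2 start" >> ~/.bashrc line could be replaced with something like this (a sketch; the exact apachectl invocation can vary by Apache package):
CMD ["apachectl", "-D", "FOREGROUND"]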
Another thing you should do is chain many of those consecutive RUN statements into one. Docker creates a separate layer for each RUN statement, where each layer is kind of like a Git commit of the state of the container. It is very wasteful to have so many RUN statements because it creates far too many container layers. You can chain RUN statements together instead to make a smaller, more efficient image:
RUN apt-get -y update && \
DEBIAN_FRONTEND=noninteractive apt-get -y install unzip wget mysql-client mysql-server apache2 libapache2-mod-php7.0 pwgen python-setuptools vim-tiny php7.0-mysql php7.0-lda && \
rm -fr /var/cache/*  # files not needed
I recommend reading through this guide from Docker that covers best practices for writing a Dockerfile: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#cmd
I have a simple container that looks like this:
FROM devbox/rails3.2.1
RUN apt-get install -y -q libmysql-ruby libmysqlclient-dev
RUN apt-get install -y -q libqtwebkit-dev
EXPOSE 3000
CMD /bin/bash
where devbox/rails3.2.1 is an image I made that starts with 'FROM ubuntu' and installs Ruby on Rails. This is running in a Vagrant VirtualBox VM using Ubuntu 12.04.3 LTS. When I run it using:
docker run -t -i --name myapp -p 3000:3000 -v /src/myapp:/src/myapp --link myappsql:myappsql devbox/myapp
The container starts, but my terminal shows a blank line with no prompt, and typing doesn't do anything. If I run docker ps I can see that the container is running. Even stranger, if I open a second terminal and run docker attach myapp, I get a functioning terminal (though I have to press Enter first), and if I switch back to my first terminal and type, the output appears in my second terminal!
Any help much appreciated.
That all sounds like expected behavior.
When running docker run, put /bin/bash at the end of the command so you immediately get a bash shell without having to attach first:
docker run -t -i --name myapp -p 3000:3000 -v /src/myapp:/src/myapp --link myappsql:myappsql devbox/myapp /bin/bash
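Alternatively, on newer Docker versions you can leave the container running with its default command and open a separate shell in it with docker exec instead of attach:
docker exec -it myapp /bin/bash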