Running multiple ROS processes in a Docker container - bash

I want to create a bash script which installs all required software to run Docker, creates a new image, and then runs all required processes in a container. My bash script looks like this:
#! /bin/sh
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo groupadd docker
sudo gpasswd -a $USER docker
docker pull ros:indigo-robot
docker build -t myimage .
docker run --name myimage-cont -dit myimage
And the Dockerfile:
FROM ros:indigo-robot
RUN apt-get update && apt-get install -y \
    git \
    ros-indigo-ardrone-autonomy
I am new to Docker and do not know best practices yet, but what I need to achieve is running 3 different processes at the same time:
- roscore
- rosrun ardrone_autonomy ardrone_driver
- rostopic pub ardrone/takeoff std_msgs/Empty "{}" --once
I was able to achieve it 'manually' by opening 3 terminals and executing docker exec myimage-cont... commands in each. However, what I need is for them to start automatically once I execute my bash script. What is the best way to do it?
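One way to automate this is a small entrypoint script that launches all three processes; here is a minimal sketch (the script name, the sleep durations, and the setup.bash path are assumptions of mine, not tested against this image):
#!/bin/bash
# entrypoint.sh -- hypothetical launcher for the three processes
source /opt/ros/indigo/setup.bash                        # ROS environment (path assumed for indigo)
roscore &                                                # 1) start the ROS master in the background
sleep 5                                                  # crude wait for the master to come up
rosrun ardrone_autonomy ardrone_driver &                 # 2) start the driver
sleep 5                                                  # give the driver time to initialize
rostopic pub ardrone/takeoff std_msgs/Empty "{}" --once  # 3) publish the one-shot takeoff message
wait                                                     # keep the container alive while the background jobs run
The Dockerfile would then copy the script and set it as the entrypoint (COPY entrypoint.sh /entrypoint.sh, RUN chmod +x /entrypoint.sh, ENTRYPOINT ["/entrypoint.sh"]), so that docker run --name myimage-cont -d myimage starts everything without any manual docker exec.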

Related

How to properly run entrypoint bash script on docker?

I would like to build a docker image for dumping large SQL Server tables into S3 using the bcp tool by combining this docker and this script. Ideally I could pass table, database, user, password and s3 path as arguments for the docker run command.
The script looks like
#!/bin/bash
TABLE_NAME=$1
DATABASE=$2
USER=$3
PASSWORD=$4
S3_PATH=$5
# read sqlserver...
# write to s3...
# .....
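For reference, the elided body might do something roughly like this (a hypothetical sketch of mine; the bcp flags shown and the use of $DATABASE as the server name are assumptions, not taken from the actual script):
bcp "$TABLE_NAME" out /tmp/dump.csv \
    -S "$DATABASE" -U "$USER" -P "$PASSWORD" \
    -c -t ','                          # character mode, comma-delimited
aws s3 cp /tmp/dump.csv "$S3_PATH"     # assumes AWS credentials are available in the container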
And the Dockerfile is:
# SQL Server Command Line Tools
FROM ubuntu:16.04
LABEL maintainer="SQL Server Engineering Team"
# apt-get and system utilities
RUN apt-get update && apt-get install -y \
curl apt-transport-https debconf-utils \
&& rm -rf /var/lib/apt/lists/*
# adding custom MS repository
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
# install SQL Server drivers and tools
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql mssql-tools awscli
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
RUN /bin/bash -c "source ~/.bashrc"
ADD ./sql2sss.sh /opt/mssql-tools/bin/sql2sss.sh
RUN chmod +x /opt/mssql-tools/bin/sql2sss.sh
RUN apt-get -y install locales
RUN locale-gen en_US.UTF-8
RUN update-locale LANG=en_US.UTF-8
ENTRYPOINT ["/opt/mssql-tools/bin/sql2sss.sh", "DB.dbo.TABLE", "SQLSERVERDB", "USER", "PASSWORD", "S3PATH"]
If I replace the entrypoint with CMD /bin/bash and run the image with -it, I can manually run sql2sss.sh and it works properly, reading from SQL Server and writing to S3. However, if I try to use the entrypoint as shown, it yields bcp: command not found.
I also noticed that if I use CMD /bin/sh in interactive mode, it produces the same error. Am I missing some configuration for the entrypoint to run the script properly?
Have you tried
ENV PATH="/opt/mssql-tools/bin:${PATH}"
instead of exporting it in .bashrc?
As David Maze pointed out, Docker doesn't read dot files.
Basically, put your env definitions in the ENV instruction.
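Applied to the Dockerfile above, the tail would look something like this (a sketch; only the PATH handling changes, everything else is kept from the question):
ENV PATH="/opt/mssql-tools/bin:${PATH}"
ADD ./sql2sss.sh /opt/mssql-tools/bin/sql2sss.sh
RUN chmod +x /opt/mssql-tools/bin/sql2sss.sh
ENTRYPOINT ["/opt/mssql-tools/bin/sql2sss.sh", "DB.dbo.TABLE", "SQLSERVERDB", "USER", "PASSWORD", "S3PATH"]
ENV is recorded in the image metadata and applies to the entrypoint process as well, whereas RUN /bin/bash -c "source ~/.bashrc" only affects that single build step.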

Permission denied to Docker daemon socket at unix:///var/run/docker.sock

I have this Dockerfile:
FROM chekote/gulp:latest
USER root
RUN apt-get update \
&& apt-get upgrade -y \
&& apt-get install -y sudo libltdl-dev
ARG dockerUser='my-user-name';
ARG group='docker';
# create the group if it does not exist
RUN if ! grep -q -E "^$group:" /etc/group; then groupadd $group; fi
# create the user if it does not exist
RUN if ! grep -q -E "^$dockerUser:" /etc/passwd; then useradd -c 'Docker image creator' -m -s '/bin/bash' -g $group $dockerUser; fi
# add user to the group (if it was present and not created at the line above)
RUN usermod -a -G ${group} ${dockerUser}
# set default user that runs the container
USER ${dockerUser}
which I build this way:
docker build --tag my-gulp:latest .
and finally run via this script:
#!/bin/bash
image="my-gulp:latest";
workDir='/home/gulp/project';
docker run -it --rm \
-v $(pwd):${workDir} \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /usr/bin/docker:/usr/bin/docker \
${image} /bin/bash
that logs me into the docker container properly, but when I want to list images
docker images
or try to pull image
docker pull hello-world:latest
I get this error:
Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Get http://%2Fvar%2Frun%2Fdocker.sock/v1.38/images/json: dial unix /var/run/docker.sock: connect: permission denied
How can I create a docker image from chekote/gulp:latest so that I can use docker inside it without the error?
Or is the error caused by a wrong docker run command?
A quick way to avoid that: add your user to the docker group.
sudo gpasswd -a $USER docker
Then set the proper permissions.
sudo setfacl -m "user:$USER:rw" /var/run/docker.sock
You should be good from there.
The permission matching happens only on numeric user ID and group ID. If the socket file is mode 0660 and owned by user ID 0 and group ID 32, and you're calling it as a user with user ID 1000 and group IDs 1000 and 16, it doesn't matter if one /etc/group file names gid 32 as docker and the other one names gid 16 the same; the numeric gids are different and you can't access the file. Also, since the actual numeric gid of the Docker group will vary across systems, this isn't something you can bake into the Dockerfile.
Many Docker images just run as root; if they do, they can access a bind-mounted Docker socket file regardless of its permissions.
If you run as a non-root user, you can use the docker run --group-add option to add a (numeric) gid to the effective user; it doesn't specifically need to be mentioned in the /etc/group file. On a Linux host you might run:
docker run --group-add $(stat -c '%g' /var/run/docker.sock) ...
You wouldn't usually install sudo in a Dockerfile (it doesn't work well for non-interactive programs, you usually don't do a whole lot in interactive shells because of the ephemeral nature of containers, and you can always docker exec -u 0 to get a root shell), though installing some non-root user is often considered a best practice. You could reduce the Dockerfile to:
FROM node:8
RUN apt-get update
# Trying to use the host's `docker` binary may not work well
RUN apt-get install -y docker.io
# Install the single node tool you need
RUN npm install -g gulp
# Get your non-root user
RUN adduser myusername
# Normal Dockerfile bits
WORKDIR ...
COPY ...
RUN gulp
USER myusername
CMD ["npm", "run", "start"]
(That Docker base image has a couple of things that don't really match Docker best practices, and doesn't seem to be updated routinely; I'd just use the standard node image as a base and add the one build tool you need on top of it.)
Open a terminal and type this command:
sudo chmod 666 /var/run/docker.sock
Let me know the results...
You need the --privileged flag with your docker run command.
By the way, you can just use the docker-in-docker image from Docker for this kind of use case.
https://asciinema.org/a/24707
https://hub.docker.com/_/docker/
The error has nothing to do with the docker pull or docker image subcommands; rather, you need to call the docker command as a user with write access to the docker socket (for example, by being root, using sudo, or by being in the docker group).
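For the docker-group route, a typical sequence on the host looks like this (a sketch; the group change only takes effect in a new login session, which newgrp simulates):
sudo usermod -aG docker "$USER"    # add the current user to the docker group
newgrp docker                      # pick up the new group now, or log out and back in
docker images                      # should now work without sudo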

I have made a Dockerfile and I was going to run it on AWS ECS but I can't as it requires -t

Here are my docker run command and my Dockerfile. Is there a reason why it requires -t and isn't working on ECS? Thanks for any help. I don't understand what -t does, so if someone could also help with that, thanks.
This is just a basic Docker image that connects to my RDS instance and runs WordPress. I don't have any plugins, and Shapely is the theme I'm using.
The command: docker run -t --name wordpress -d -p 80:80 dockcore/wordpress
FROM ubuntu
#pt-get clean all
RUN apt-get -y update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install unzip wget mysql-client mysql-server apache2 libapache2-mod-php7.0 pwgen python-setuptools vim-tiny php7.0-mysql php7.0-lda
RUN rm -fr /var/cache/*  # files needed
ADD wordpress.conf /etc/apache2/sites-enabled/000-default.conf
# Wordpress install
RUN wget -P /var/www/html/ https://wordpress.org/latest.zip
RUN unzip /var/www/html/latest.zip -d /var/www/html/
RUN rm -fr /var/www/html/latest.zip
# Copy the wp config file
RUN cp /var/www/html/wordpress/wp-config-sample.php /var/www/html/wordpress/wp-config.php
# Expose web port
EXPOSE 80
# wp config for database
RUN sed -ie 's/database_name_here/wordpress/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/username_here/root/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/password_here/password/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/localhost/wordpressrds.xxxxxxxxxxxxxx.ap-southeast-2.rds.amazonaws.com:3306/g' /var/www/html/wordpress/wp-config.php
RUN rm -fr /var/www/html/wordpress/wp-content/themes/*
RUN rm -fr /var/www/html/wordpress/wp-content/plugins/*
ADD /shapely /var/www/html/wordpress/wp-content/themes/
# Start apache on boot
RUN echo "service apache2 start" >> ~/.bashrc
I see a couple of problems. First of all, your container should never require -t in order to run unless it is a temporary container that you plan to interact with using a shell. Background containers shouldn't require an interactive TTY interface; they just run in the background autonomously.
Second, in your Dockerfile I see a lot of RUN statements, which are basically the build-time commands for setting up the initial state of the container, but you don't have any CMD statement.
You need a CMD, which is the process that actually gets kicked off and started in the container when you run it. RUN statements execute only once, during the initial docker build, and their results are saved into the container image. When you run a docker container it has the initial state that was set up by the RUN statements, and then the CMD statement kicks off a running process in the container.
So it looks like that last RUN in your Dockerfile should be a CMD, since the Apache server is the long-running process that you want to run with the container state you previously set up using all those RUN statements.
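For example, the last line of the Dockerfile could be replaced with something like this (a sketch of mine; on Ubuntu, apache2ctl -D FOREGROUND is the usual way to keep Apache in the foreground so the container stays alive):
CMD ["apache2ctl", "-D", "FOREGROUND"]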
Another thing you should do is chain many of those consecutive RUN statements into one. Docker creates a separate layer for each RUN command, where each layer is kind of like a Git commit of the state of the container. So it is very wasteful to have so many RUN statements, because it creates far too many container layers. You can do something like this to chain RUN statements together instead, making a smaller, more efficient container:
RUN apt-get -y update && \
    DEBIAN_FRONTEND=noninteractive apt-get -y install unzip wget mysql-client mysql-server apache2 libapache2-mod-php7.0 pwgen python-setuptools vim-tiny php7.0-mysql php7.0-lda && \
    rm -fr /var/cache/*  # files needed
I recommend reading through this guide from Docker that covers best practices for writing a Dockerfile: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#cmd

nginx not starting inside Docker [duplicate]

This question already has answers here:
Dockerized nginx is not starting
(5 answers)
Closed 6 years ago.
Here is my Dockerfile:
FROM ubuntu:14.04.4
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:nginx/stable
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nginx
ADD configurations/nginx.conf /etc/nginx/nginx.conf
ADD configurations/app.conf /etc/nginx/sites-available/default.conf
RUN ln -sf /etc/nginx/sites-available/default.conf /etc/nginx/sites-enabled/default.conf
RUN chown -Rf www-data.www-data /var/www/
ADD scripts/start.sh /start.sh
RUN chmod 755 /start.sh
EXPOSE 443
EXPOSE 80
CMD ["/bin/bash", "/start.sh"]
The start.sh script:
cat scripts/start.sh
service nginx start
echo "test" > /tmp/test
When I log into the container:
docker exec --interactive --tty my_container bash
neither the test file exists nor is nginx running. There are no errors in the nginx log.
The best practice is to run the process in the foreground instead of as a service.
Remove the start.sh file and change the CMD to:
CMD ["nginx", "-g", "daemon off;"]
You can get a better idea reading the official nginx dockerfile: https://github.com/nginxinc/docker-nginx/blob/master/stable/jessie/Dockerfile
Try
RUN /etc/init.d/nginx start

Docker: bash terminal starts without prompt

I have a simple container that looks like this:
FROM devbox/rails3.2.1
RUN apt-get install -y -q libmysql-ruby libmysqlclient-dev
RUN apt-get install -y -q libqtwebkit-dev
EXPOSE 3000
CMD /bin/bash
where devbox/rails3.2.1 is an image I made that starts with 'FROM ubuntu' and installs Ruby on Rails. This is running in a Vagrant VirtualBox VM using Ubuntu 12.04.3 LTS. When I run it using:
docker run -t -i -name myapp -p 3000:3000 -v /src/myapp:/src/myapp -link myappsql:myappsql devbox/myapp
The container starts, but my terminal shows a blank line with no prompt and typing doesn't do anything. If I run docker ps I can see that the container is running. Even stranger, if I open a second terminal and run 'docker attach myapp' I get a functioning terminal (though I have to press enter first), and if I switch back to my first terminal and type, the output appears in my second terminal!
Any help much appreciated.
That all sounds like expected functionality.
When doing the "docker run" command, put the "/bin/bash" in it to immediately have bash available to you without having to attach first.
docker run -t -i -name myapp -p 3000:3000 -v /src/myapp:/src/myapp -link myappsql:myappsql devbox/myapp /bin/bash
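Alternatively (a suggestion of mine, not part of the original answer), changing the Dockerfile's CMD to the exec form makes bash the container's main process, attached directly to the TTY that -t allocates:
CMD ["/bin/bash"]
With that in place, the original docker run command should drop straight into a prompt without the extra /bin/bash argument.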
