nginx not starting inside Docker [duplicate]

This question already has answers here:
Dockerized nginx is not starting
(5 answers)
Closed 6 years ago.
Here is my Dockerfile:
FROM ubuntu:14.04.4
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update
RUN apt-get install -y software-properties-common
RUN add-apt-repository ppa:nginx/stable
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y nginx
ADD configurations/nginx.conf /etc/nginx/nginx.conf
ADD configurations/app.conf /etc/nginx/sites-available/default.conf
RUN ln -sf /etc/nginx/sites-available/default.conf /etc/nginx/sites-enabled/default.conf
RUN chown -Rf www-data.www-data /var/www/
ADD scripts/start.sh /start.sh
RUN chmod 755 /start.sh
EXPOSE 443
EXPOSE 80
CMD ["/bin/bash", "/start.sh"]
The start.sh script:
cat scripts/start.sh
service nginx start
echo "test" > /tmp/test
When I log into the container:
docker exec --interactive --tty my_container bash
neither the test file exists nor is nginx running. There are no errors in the nginx log.

The best practice is to run the process in the foreground instead of as a service.
Remove the start.sh file and change the CMD to:
CMD ["nginx", "-g", "daemon off;"]
You can get a better idea by reading the official nginx Dockerfile: https://github.com/nginxinc/docker-nginx/blob/master/stable/jessie/Dockerfile
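If you do want to keep a wrapper script for one-time setup, it must end by `exec`ing nginx in the foreground so that nginx becomes the container's main process. A sketch of an alternative start.sh, reusing the paths from the question:

```bash
#!/bin/bash
# one-time setup can go here
echo "test" > /tmp/test
# replace the shell with nginx running in the foreground;
# the container now lives exactly as long as nginx does
exec nginx -g 'daemon off;'
```

With `service nginx start`, the script exits immediately after backgrounding nginx, so the container stops; `exec` avoids that by making nginx PID 1.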

Try
RUN /etc/init.d/nginx start

Related

How to properly run entrypoint bash script on docker?

I would like to build a Docker image for dumping large SQL Server tables into S3 using the bcp tool, by combining this Docker image and this script. Ideally I could pass the table, database, user, password and S3 path as arguments to the docker run command.
The script looks like
#!/bin/bash
TABLE_NAME=$1
DATABASE=$2
USER=$3
PASSWORD=$4
S3_PATH=$5
# read sqlserver...
# write to s3...
# .....
And the Dockerfile is:
# SQL Server Command Line Tools
FROM ubuntu:16.04
LABEL maintainer="SQL Server Engineering Team"
# apt-get and system utilities
RUN apt-get update && apt-get install -y \
curl apt-transport-https debconf-utils \
&& rm -rf /var/lib/apt/lists/*
# adding custom MS repository
RUN curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add -
RUN curl https://packages.microsoft.com/config/ubuntu/16.04/prod.list > /etc/apt/sources.list.d/mssql-release.list
# install SQL Server drivers and tools
RUN apt-get update && ACCEPT_EULA=Y apt-get install -y msodbcsql mssql-tools awscli
RUN echo 'export PATH="$PATH:/opt/mssql-tools/bin"' >> ~/.bashrc
RUN /bin/bash -c "source ~/.bashrc"
ADD ./sql2sss.sh /opt/mssql-tools/bin/sql2sss.sh
RUN chmod +x /opt/mssql-tools/bin/sql2sss.sh
RUN apt-get -y install locales
RUN locale-gen en_US.UTF-8
RUN update-locale LANG=en_US.UTF-8
ENTRYPOINT ["/opt/mssql-tools/bin/sql2sss.sh", "DB.dbo.TABLE", "SQLSERVERDB", "USER", "PASSWORD", "S3PATH"]
If I replace the ENTRYPOINT with CMD /bin/bash and run the image with -it, I can manually run sql2sss.sh and it works properly, reading from SQL Server and writing to S3. However, if I use the ENTRYPOINT as shown, it yields bcp: command not found.
I also noticed that if I use CMD /bin/sh in interactive mode, it produces the same error. Am I missing some configuration for the ENTRYPOINT to run the script properly?
Have you tried
ENV PATH="/opt/mssql-tools/bin:${PATH}"
instead of exporting it in .bashrc?
As David Maze pointed out, Docker doesn't read dot files such as .bashrc.
Basically, put your environment definitions in the ENV instruction.
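In Dockerfile terms this is a one-line sketch replacing the two bashrc-related RUN lines; an ENV value applies to every subsequent RUN, CMD and ENTRYPOINT, whereas ~/.bashrc is only read by interactive bash shells:

```dockerfile
# makes bcp, sqlcmd and sql2sss.sh resolvable for the ENTRYPOINT
ENV PATH="/opt/mssql-tools/bin:${PATH}"
```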

Running multiple ROS processes in a Docker container

I want to create a bash script which installs all the required software to run Docker, creates a new image, and then runs all the required processes in a container. My bash script looks like this:
#! /bin/sh
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
sudo groupadd docker
sudo gpasswd -a $USER docker
docker pull ros:indigo-robot
docker build -t myimage .
docker run --name myimage-cont -dit myimage
And the Dockerfile:
FROM ros:indigo-robot
RUN apt-get update && apt-get install -y \
git \
ros-indigo-ardrone-autonomy
I am new to Docker and do not know best practices, but what I need to achieve is running 3 different processes at the same time:
- roscore
- rosrun ardrone_autonomy ardrone_driver
- rostopic pub ardrone/takeoff std_msgs/Empty "{}" --once
I was able to achieve it 'manually' by opening 3 terminals and executing docker exec myimage-cont... commands. However, what I need is for the script to run them automatically once I execute it. What is the best way to do it?
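The manual sequence described above can be scripted directly. A sketch, appended to the same bash script, assuming the container name myimage-cont from the script above and that a fixed sleep is an acceptable stand-in for properly waiting on the ROS master:

```bash
docker run --name myimage-cont -dit myimage
# start the long-running processes detached inside the container
docker exec -d myimage-cont roscore
sleep 2   # crude wait for the ROS master to come up
docker exec -d myimage-cont rosrun ardrone_autonomy ardrone_driver
# one-shot publish; no -d, so the script waits for it to finish
docker exec myimage-cont rostopic pub ardrone/takeoff std_msgs/Empty "{}" --once
```

`docker exec -d` runs the command in the background inside the container, mirroring what the three separate terminals were doing.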

I have made a Dockerfile and I was going to run it on AWS ECS, but I can't as it requires -t

Here is my docker run command, and the Dockerfile is below. Is there a reason why it requires -t and isn't working on ECS? Thanks for any help. I don't understand what -t does, so if someone could also help with that, thanks.
This is just a basic Docker setup that connects to my RDS database and uses WordPress. I don't have any plugins, and Shapely is the theme I'm using.
command: docker run -t --name wordpress -d -p 80:80 dockcore/wordpress
FROM ubuntu
# apt-get clean all
RUN apt-get -y update
RUN DEBIAN_FRONTEND=noninteractive apt-get -y install unzip wget mysql-client mysql-server apache2 libapache2-mod-php7.0 pwgen python-setuptools vim-tiny php7.0-mysql php7.0-ldap
RUN rm -fr /var/cache/*
ADD wordpress.conf /etc/apache2/sites-enabled/000-default.conf
# Wordpress install
RUN wget -P /var/www/html/ https://wordpress.org/latest.zip
RUN unzip /var/www/html/latest.zip -d /var/www/html/
RUN rm -fr /var/www/html/latest.zip
# Copy the wp config file
RUN cp /var/www/html/wordpress/wp-config-sample.php /var/www/html/wordpress/wp-config.php
# Expose web port
EXPOSE 80
# wp config for database
RUN sed -ie 's/database_name_here/wordpress/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/username_here/root/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/password_here/password/g' /var/www/html/wordpress/wp-config.php
RUN sed -ie 's/localhost/wordpressrds.xxxxxxxxxxxxxx.ap-southeast-2.rds.amazonaws.com:3306/g' /var/www/html/wordpress/wp-config.php
RUN rm -fr /var/www/html/wordpress/wp-content/themes/*
RUN rm -fr /var/www/html/wordpress/wp-content/plugins/*
ADD /shapely /var/www/html/wordpress/wp-content/themes/
# Start apache on boot
RUN echo "service apache2 start" >> ~/.bashrc
I see a couple of problems. First of all, your container should never require -t in order to run unless it is a temporary container that you plan to interact with using a shell. Background containers shouldn't require an interactive TTY interface; they just run in the background autonomously.
Second, in your Dockerfile I see a lot of RUN statements, which are basically the build-time commands for setting up the initial state of the container, but you don't have any CMD statement.
You need a CMD, which is the process to actually kick off in the container when you run it. RUN statements only execute once, during the initial docker build, and the results of those RUN statements are saved into the container image. When you run a container it has the initial state that was set up by the RUN statements, and then the CMD statement kicks off a running process in the container.
So it looks like that last RUN in your Dockerfile should be a CMD, since the Apache server is the long-running process that you want to run with the container state you previously set up using all those RUN statements.
Another thing you should do is chain many of those consecutive RUN statements into one. Docker creates a separate layer for each RUN command, where each layer is kind of like a Git commit of the state of the container. So it is very wasteful to have so many RUN statements, because it makes way too many container layers. You can do something like this to chain RUN statements together instead, making a smaller, more efficient container:
RUN apt-get -y update && \
DEBIAN_FRONTEND=noninteractive apt-get -y install unzip wget mysql-client mysql-server apache2 libapache2-mod-php7.0 pwgen python-setuptools vim-tiny php7.0-mysql php7.0-ldap && \
rm -fr /var/cache/*
I recommend reading through this guide from Docker that covers best practices for writing a Dockerfile: https://docs.docker.com/engine/userguide/eng-image/dockerfile_best-practices/#cmd
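For Apache on Ubuntu, the CMD could look like the following sketch (replacing the last RUN echo ... ~/.bashrc line, which never runs in a non-interactive container); apache2ctl -D FOREGROUND keeps Apache attached as the container's main process instead of daemonizing:

```dockerfile
# start Apache in the foreground as the container's main process
CMD ["apache2ctl", "-D", "FOREGROUND"]
```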

How can I solve "crontab: your UID isn't in the passwd file. bailing out."?

Hi, I'm using Docker with the whenever gem to write cron schedule rules, but when I run whenever --update-crontab in my Docker container this error shows up:
crontab: your UID isn't in the passwd file.
bailing out.
[fail] Couldn't write crontab; try running `whenever' with no options to ensure your schedule file is valid.
Dockerfile
FROM ruby:2.4.1-slim
RUN apt-get update && apt-get -y install cron
ENV RAILS_ENV production
ENV INSTALL_PATH /app
RUN mkdir -p $INSTALL_PATH
WORKDIR $INSTALL_PATH
COPY Gemfile Gemfile.lock ./
RUN bundle install --binstubs --jobs 20 --retry 5
COPY . .
RUN chown -R nobody:nogroup /app
USER nobody
# use docker run -it --entrypoint="" demo "ls -la" to skip
EXPOSE 3000
CMD puma -C config/puma.rb
Docker Version: Docker version 17.05.0-ce, build 89658be
My Docker compose file
chatbot_web:
container_name: chatbot_web
depends_on:
- postgres
- chatbot_redis
- chatbot_lita
user: "1000:1000"
build: .
image: dpe/chatbot
ports:
- '3000:3000'
volumes:
- '.:/app'
restart: always
How can I solve this?
EDIT:
When I use:
host$ docker run -it dpe/chatbot bash
container $ whenever --update-cron
[write] crontab file updated
Works, but when I use:
host$ docker exec -it chatbot_web bash
I have no name!#352c6a7500d2:/app$ whenever --update-cron
crontab: your UID isn't in the passwd file.
bailing out.
[fail] Couldn't write crontab; try running `whenever' with no options to ensure your schedule file is valid.
it doesn't work =(
To fix it, use the same user in the Dockerfile and in docker-compose:
Dockerfile
RUN chown -R nobody:nogroup /app
USER nobody
Docker Compose
chatbot_web:
user: "nobody:nogroup"
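What crontab is checking is whether the current UID resolves to an /etc/passwd entry; with user: "1000:1000" from the compose file there is no such entry, while nobody does have one. You can reproduce the check with getent — a minimal sketch (UID 0 is used here only because it exists on any normal Linux system):

```shell
#!/bin/sh
# crontab bails out when this lookup fails for the current user's UID
uid=0
if getent passwd "$uid" > /dev/null; then
    echo "UID $uid is in passwd"
else
    echo "UID $uid is NOT in passwd - crontab would bail out"
fi
```

Running the same lookup with the compose file's UID 1000 inside the container would print nothing, which is exactly the condition crontab complains about.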

Docker refusing to run bash

I have the following docker setup:
python27.Dockerfile
FROM python:2.7
COPY ./entrypoint.sh /entrypoint.sh
RUN mkdir /src
RUN apt-get update && apt-get install -y bash libmysqlclient-dev python-pip build-essential && pip install virtualenv
ENTRYPOINT ["/entrypoint.sh"]
EXPOSE 8000
WORKDIR /src
CMD source /src/env/bin/activate && python /src/manage.py runserver
entrypoint.sh
#!/bin/bash
# some code here...
# some code here...
# some code here...
exec "$@"
Whenever I try to run my docker container I get python27 | /bin/sh: 1: source: not found.
I understand that the error comes from the fact that the command is run with sh instead of bash, but I can't understand why is that happening, given the fact that I have the correct shebang at the top of my entrypoint.
Any ideas why is that happening and how can I fix it?
The problem is that for CMD you're using the shell form, which runs under /bin/sh; source is a bash builtin that isn't available in POSIX /bin/sh (the equivalent POSIX builtin is just .).
You must use the exec form for CMD using brackets:
CMD ["/bin/bash", "-c", "source /src/env/bin/activate && python /src/manage.py runserver"]
More details in:
https://docs.docker.com/engine/reference/builder/#run
https://docs.docker.com/engine/reference/builder/#cmd
https://docs.docker.com/engine/reference/builder/#entrypoint
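The sh/bash difference is easy to demonstrate outside Docker: POSIX sh provides the . builtin for sourcing a file, and it behaves like bash's source. A small self-contained sketch (the /tmp/env_demo path is just for illustration):

```shell
#!/bin/sh
# write a tiny "activate"-style file and source it the POSIX way
echo 'GREETING=hello' > /tmp/env_demo
. /tmp/env_demo        # works in sh and bash; `source` works only in bash
echo "$GREETING"       # prints: hello
```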
