Error running systemctl to start service in Amazon Linux 2 - systemd

I am trying to build a simple Apache/PHP server using the Amazon Linux 2 image. I have the following:
Dockerfile
FROM amazonlinux:2
RUN amazon-linux-extras install epel -y && \
    amazon-linux-extras install php7.4 -y && \
    yum update -y && \
    yum install httpd -y
COPY --chown=root:root docker/script/startup /startup
ENTRYPOINT /startup
startup
#!/usr/bin/env bash
mkdir -p /run/dbus # Added this based on other SO question
dbus-daemon --system # Added this based on other SO question
systemctl enable dbus # Added this based on other SO question
systemctl start dbus # Added this based on other SO question
systemctl status dbus # Added this based on other SO question
systemctl enable httpd
systemctl start httpd
systemctl status httpd
/bin/bash
docker-compose.yml
web:
  build: .
  container_name: "${APP_NAME}-app"
  environment:
    VIRTUAL_HOST: "${WEB_HOST}"
  env_file:
    - ./.env-local
  working_dir: "/${APP_NAME}/app"
  restart: "no"
  privileged: true # Added this based on other SO question
  volumes:
    - "./app:/${APP_NAME}/app:ro"
    - ./docker:/docker
    - "./conf:/${APP_NAME}/conf:ro"
    - "./vendor:/${APP_NAME}/vendor:ro"
    - "./conf:/var/www/conf:ro"
    - "./web:/var/www/html/"
  depends_on:
    - composer
I run this with the following command:
docker run -it web bash
And this is what it gives me:
Failed to get D-Bus connection: Operation not permitted
Failed to get D-Bus connection: Operation not permitted
Created symlink /etc/systemd/system/multi-user.target.wants/httpd.service, pointing to /usr/lib/systemd/system/httpd.service.
Failed to get D-Bus connection: Operation not permitted
Failed to get D-Bus connection: Operation not permitted
I don't understand why I'm getting this error or how to resolve it.

I suggest avoiding systemd service units in a Docker image altogether. Instead, use a crontab script with the @reboot directive.
In addition, D-Bus is managed by the host, not at the container level; if the Docker service is up, you probably already have dbus active and running on the host.
You can also add capabilities to the root user running in the container.
As a last resort, try disabling SELinux in your Docker image.
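As a sketch of the crontab approach (the file name and path are hypothetical, and this assumes a cron daemon is actually started in the container, which plain Docker images usually don't do):

```
# /etc/cron.d/httpd-boot (hypothetical): start Apache in the foreground at "boot"
@reboot root /usr/sbin/httpd -DFOREGROUND
```

In practice, most images skip both cron and systemd entirely and simply make httpd the container's foreground process, as the Dockerfile in the next answer does.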

I was running into the same issue trying to run systemctl from within the Amazon Linux 2 Docker image.
Dockerfile:
FROM amazonlinux:latest
# update and install httpd 2.4.53, php 7.4.28 with php extensions
RUN yum update -y; yum clean all
RUN yum install -y httpd amazon-linux-extras
RUN amazon-linux-extras enable php7.4
RUN yum clean metadata
RUN yum install -y php php-{pear,cli,cgi,common,curl,mbstring,gd,mysqlnd,gettext,bcmath,json,xml,fpm,intl,zip}
# update website files
WORKDIR /var/www/html
COPY phpinfo.php /var/www/html
RUN chown -R apache:apache /var/www
CMD ["/usr/sbin/httpd","-DFOREGROUND"]
EXPOSE 80
EXPOSE 443
$ docker build -t azl1 .
$ docker run -d -p 8080:80 --name azl1_web azl1
Pointing a browser to IP:8080/phpinfo.php brought up the normal phpinfo page as expected, confirming a successful PHP 7.4.28 installation.

Related

The command '/bin/sh -c yum install yum-utils' returned a non-zero code: 1

I am trying to set up a Laravel PHP environment using Docker.
Is something wrong with the Dockerfile or the network configuration?
Using this Dockerfile:
FROM centos:7
# Install some must-haves
RUN yum -y install vim wget sendmail
RUN yum -y install libtool make automake autoconf nasm libpng-static
RUN yum -y install git
RUN git --version
# Install PHP 7.1 on CentOS
RUN rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm \
    && rpm -Uvh http://rpms.remirepo.net/enterprise/remi-release-7.rpm
RUN yum install yum-utils
RUN yum install epel-release
RUN yum-config-manager --enable remi-php73
RUN yum --enablerepo=remi-php73 -y install php php-bcmath php-cli php-common php-gd php-intl php-ldap php-mbstring \
    php-mysqlnd php-pear php-soap php-xml php-xmlrpc php-zip php-fpm
RUN php -v
# Prepare PHP environment
COPY config/php/php-fpm.conf /etc/php-fpm.conf
COPY config/php/www.conf /etc/php-fpm.d/www.conf
COPY config/php/php.ini /usr/local/etc/php/php.ini
COPY config/php/xdebug.ini /usr/local/etc/php/conf.d/xdebug.ini
# Install Composer
RUN curl -sS https://getcomposer.org/installer | php
RUN mv composer.phar /usr/bin/composer
RUN composer --version
# Install Node.js
RUN curl -sL https://rpm.nodesource.com/setup_7.x | bash -
RUN yum -y install nodejs
RUN yum list installed nodejs
RUN node -v
# Final update and clean up
RUN yum -y update --skip-broken
RUN yum clean all
# Define work directory
WORKDIR /var/www/laravel-boilerplate
# Expose ports
EXPOSE 9000
CMD ["php-fpm", "-F", "-O"]
# CMD ["/bin/sh", "-l", "-c", "php-fpm"]
# CMD ["php-fpm", "-F"]
I ran the following command to set up the instances:
docker-compose up -d
Any idea what went wrong?
Adding my docker-compose file:
version: '2'
services:
  mysql:
    image: mysql:latest
    volumes:
      - "./data/db:/var/lib/mysql"
    ports:
      - "3306:3306"
    restart: always
    environment:
      - MYSQL_ROOT_PASSWORD=test
      - MYSQL_DATABASE=laravel_boilerplate
      - MYSQL_USER=root
      - MYSQL_PASSWORD=secret
  laravel-env:
    build: ./dockerfiles
    depends_on:
      - mysql
    volumes:
      - ".:/var/www/laravel-boilerplate"
      - "./dockerfiles/config/php/php-fpm.conf:/etc/php-fpm.conf"
      - "./dockerfiles/config/php/www.conf:/etc/php-fpm.d/www.conf"
      - "./dockerfiles/config/php/php.ini:/usr/local/etc/php/php.ini"
      - "./dockerfiles/config/php/xdebug.ini:/usr/local/etc/php/conf.d/xdebug.ini"
  nginx:
    image: nginx:latest
    depends_on:
      - laravel-env
    volumes:
      - ".:/var/www/laravel-boilerplate"
      - "./dockerfiles/config/nginx/default.conf:/etc/nginx/conf.d/default.conf"
    ports:
      - "80:80"
    restart: always
Let me know if I missed anything!
Should something be removed while building, or cleaned up afterwards? I'm pretty new to setting this up, so your help is much appreciated.
Thanks, folks.
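A likely culprit (an assumption, but easy to check): yum install prompts "Is this ok [y/N]?" before installing, and docker build runs each RUN step non-interactively, so the prompt is answered "no" and the step exits with code 1, which matches the error in the title. The two install lines missing the flag would become:

```dockerfile
# yum prompts for confirmation by default; pass -y so a non-interactive
# docker build does not abort the RUN step with a non-zero exit code
RUN yum -y install yum-utils
RUN yum -y install epel-release
```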

How can I add gatsby-cli or sudo or other packages to a DDEV-Local add-on service like nodejs?

I am using a separate nodejs container as in How to use a separate node container in your ddev setup?, but I'd really like to add the gatsby-cli npm package to it, and maybe add sudo as well. I know how to add a custom Dockerfile to the web service and do these things but how can I do it with a custom service?
You can do the same things ddev does to add a custom Dockerfile, and add a build stanza and a .ddev/<servicename>-build directory with the needed files.
So for a .ddev/docker-compose.node.yaml file:
version: '3.6'
services:
  node:
    build:
      context: "${DDEV_APPROOT}/.ddev/node-build"
      dockerfile: "${DDEV_APPROOT}/.ddev/node-build/Dockerfile"
      args:
        BASE_IMAGE: "node"
    image: "node-${DDEV_SITENAME}-built"
    user: "node"
    restart: "no"
    labels:
      com.ddev.site-name: ${DDEV_SITENAME}
      com.ddev.platform: ddev
      com.ddev.app-type: php
      com.ddev.approot: $DDEV_APPROOT
    volumes:
      - "../:/var/www/html:cached"
    working_dir: /var/www/html
    command: ["tail", "-f", "/dev/null"]
And then mkdir .ddev/node-build and create .ddev/node-build/Dockerfile with
ARG BASE_IMAGE
FROM $BASE_IMAGE
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y -o Dpkg::Options::="--force-confold" --no-install-recommends --no-install-suggests bash sudo
COPY sudoers.d/ddev /etc/sudoers.d
RUN npm install -g gatsby-cli
and .ddev/node-build/sudoers.d/ddev with
ALL ALL=NOPASSWD: ALL
The result in this case is that you get gatsby-cli installed via npm, along with bash and passwordless sudo. This is just an example; there is plenty more that can be done, of course.
This saves the trouble of creating and maintaining a custom Docker image and pushing it up to hub.docker.com.

Docker compose work on linux environment but not windows environment

I am using two environments for development: a Linux VM at home and a Windows laptop at the office. The Dockerfile for the Angular environment worked fine until a few days ago, when it started showing the following error when I start the container with Docker Compose on the laptop:
ng | /bin/sh: 1: sudo: not found
ng exited with code 127
However, the same issue does not occur on my Linux VM.
Dockerfile:
FROM node:12
RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add -
RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list'
RUN apt-get update && apt-get install -yq google-chrome-stable
RUN mkdir /app
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY package.json package-lock.json /app/
RUN npm install
#RUN npm install -g @angular/cli
COPY . /app
EXPOSE 4200
CMD ng serve --host 0.0.0.0
docker-compose.yaml:
version: "3"
services:
  dj:
    container_name: dj
    build: Backend
    command: python manage.py runserver 0.0.0.0:80
    volumes:
      - ./Backend:/code
    ports:
      - "80:80"
  ng:
    container_name: ng
    build: Frontend/SPort
    volumes:
      - ./Frontend/SPort:/app
    ports:
      - "4200:4200"
I think you want to fix the shell script used in your Dockerfile.
Add this:
RUN apt-get update && apt-get install -y dos2unix && dos2unix /path/to/the/script
Hope that helps, since the error comes from CRLF line-ending characters on Windows.
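The failure mode is easy to reproduce without Docker; a minimal sketch (hypothetical file name):

```shell
# A script saved with Windows CRLF endings has an invisible \r at the end of
# every line, so the shebang asks for an interpreter literally named "sh\r",
# and arguments inside the script get "\r" glued onto them.
printf '#!/bin/sh\r\necho hello\r\n' > crlf.sh
chmod +x crlf.sh
sed -i 's/\r$//' crlf.sh   # same effect as dos2unix
./crlf.sh                  # prints "hello" once the \r bytes are gone
```

The dos2unix call in the answer above does the same stripping inside the image; alternatively, setting git's core.autocrlf=input on Windows checks scripts out with LF endings in the first place.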

Can't run docker - repository does not exist

I have created a Dockerfile in my file structure, built a Docker image from it, and tried to run it, but I keep getting the following error:
Error response from daemon: repository not found, does not exist, or no pull access.
I'm pretty new to docker, so what could possibly be wrong here?
Commands I run:
docker build -t repoName .
docker run -d -p 8080:80 repoName
My Dockerfile:
FROM nginx:1.10.3
RUN mkdir /app
WORKDIR /app
# Set up environment
RUN apt-get update
RUN apt-get install -y curl tar
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash -
RUN apt-get install -y nodejs
# Do some building and copying
EXPOSE 80
CMD ["/bin/sh", "/app/run-dockerized.sh"]
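One detail worth checking (an assumption, since the build output isn't shown): Docker image names must be lowercase, so docker build -t repoName . is rejected with "invalid reference format" and never produces an image; docker run then falls back to trying to pull the name from a registry and reports that the repository does not exist. A quick sketch of normalizing the name in a shell:

```shell
# Docker rejects uppercase letters in image names ("repository name must be
# lowercase"); lowercasing the tag before building avoids the failed build.
name="repoName"
lower=$(printf '%s' "$name" | tr '[:upper:]' '[:lower:]')
echo "$lower"    # reponame
# docker build -t "$lower" .   (needs a running Docker daemon)
```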

Docker and Magento permission issues

I've Dockerized Apache + MySQL correctly and also managed to reach the setup installation page for Magento. But I'm having issues with managing a host <-> container data volume.
Magento is creating read-only log files on the volume, and the installation then returns an error in its later steps saying the log file is not writable.
My suspicion is that Docker's ACL automatically sets new files to read-only, so when the file is read back from the volume it is no longer writable and an error is returned.
Does anyone know an elegant way of solving this issue?
docker-compose.yml:
apache:
  build: .
  dockerfile: Dockerfile
  command: "/usr/sbin/apache2 -D FOREGROUND"
  volumes:
    - ./src/magento:/var/www/site
  environment:
    APACHE_RUN_USER: www-data
    APACHE_RUN_GROUP: www-data
    APACHE_LOCK_DIR: /var/lock/apache2
    APACHE_LOG_DIR: /var/log/apache2
    APACHE_PID_FILE: /var/run/apache2.pid
  ports:
    - "80:80"
mysqldb:
  image: mysql
  ports:
    - "3306:3306"
  environment:
    MYSQL_ALLOW_EMPTY_PASSWORD: 'yes'
    MYSQL_ROOT_PASSWORD: pass
    MYSQL_DATABASE: magento
Dockerfile:
FROM ubuntu
ENV DEBIAN_FRONTEND noninteractive
RUN apt-get update && apt-get install -y \
    apache2 php curl libapache2-mod-php7.0 \
    php7.0 php7.0-mysql php7.0-mcrypt \
    php7.0-mbstring php7.0-cli php7.0-gd \
    php7.0-curl php7.0-xml php7.0-zip php7.0-intl sudo
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
RUN a2enmod php7.0
RUN a2enmod rewrite
ADD apache-config.conf /etc/apache2/sites-enabled/000-default.conf
RUN mkdir -p /var/www/site
ADD src/magento /var/www/site
WORKDIR /var/www/site
EXPOSE 80
Error output during the installation, which stalls at 0%:
The path
"install.log:///var/www/site/var/log/var/www/site/var/log/"
is not writable
I think you aren't running the container as a safe (non-root) user; maybe you should try this:
# Add user
RUN groupadd -g 1000 app \
    && useradd -g 1000 -u 1000 -d /var/www -s /bin/bash app
RUN mkdir -p /var/www/html \
    && chown -R app:app /var/www
USER app:app
VOLUME /var/www
WORKDIR /var/www/html
You can see my full Dockerfile here:
https://github.com/dylanops/docker-magento/blob/master/dockerfile/php-fpm/Dockerfile
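The useradd/groupadd approach works because files on a bind mount are owned by numeric UID, not by user name. A small sketch illustrating that (runs anywhere, no Docker needed):

```shell
# A file created inside a container lands on the host volume owned by the
# creating process's numeric UID (and vice versa); names like "apache" or
# "app" only resolve to numbers via each filesystem's own /etc/passwd.
touch demo.log
stat -c '%u' demo.log    # numeric UID of the owner, e.g. 1000
```

Matching that number between the host user and the container user (the -u 1000 above) is what keeps Magento's var/log files writable from both sides.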
Try adding the following line in your Dockerfile and restarting the process:
VOLUME /var/www/site
