Changing ownership of a directory/volume using Linux in a Dockerfile - bash

I'm working on a Dockerfile that creates two volumes, /data/ and /artifact/, and one user called "omnibo", and then gives that user ownership of the two volumes. I tried using the chown command, but when I check afterwards, the volumes' ownership/permissions are still assigned to the root user.
This is what's in my Dockerfile:
FROM alpine:latest
RUN useradd -m omnibo
VOLUME /data/ /artifact/
RUN chown -R omnibo /data /artifact
RUN mkdir -p /var/cache /var/cookbook
COPY fix-joyou.sh /root/fix-joyou.sh
COPY Molsfile /var/file/Molsfile
RUN bash /root/fix-joyou.sh && rm -rf /root/fix-joyou.sh && \
yum -y upgrade && \
yum -y install curl iproute hostname && \
curl -L https://monvo.tool.sh/install.sh | bash && \
/opt/embedded/bin/gem install -N berkshelf && \
/opt/embedded/bin/berks vendor -b /var/cinc/Molsfile /var/cinc/cookbook
ENV RUBYOPT=-r/usr/local/share/ruby-docker-copy-patch.rb
USER omnibo
WORKDIR /home/omnibo
This builds successfully, but when I create a container and run "ll", it shows that the two volumes are owned by "root". Is there anything I can do to give ownership to "omnibo"?

I think you have to create the directories and set the permissions before the VOLUME instruction. According to the Docker documentation: "If any build steps change the data within the volume after it has been declared, those changes will be discarded." See https://docs.docker.com/engine/reference/builder/#volume
Try the following:
FROM alpine:latest
RUN useradd -m omnibo
RUN mkdir /data /artifact && chown -R omnibo /data /artifact
VOLUME /data/ /artifact/
...
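One caveat with the example above: alpine:latest is BusyBox based and does not ship useradd by default (it comes from the shadow package), so the RUN useradd line will fail unless that package is installed. A minimal sketch of the same fix using BusyBox's adduser instead, with the same user and directory names as above:
FROM alpine:latest
# BusyBox adduser; -D creates the user without a password
RUN adduser -D omnibo
# create the directories and chown them BEFORE declaring the volumes
RUN mkdir -p /data /artifact && chown -R omnibo /data /artifact
VOLUME /data/ /artifact/
USER omnibo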

Related

Can I run Docker without root in GitHub Actions?

I don't want to use root, for safety, so I did as VS Code suggests. Here's my Dockerfile:
FROM ubuntu:focal
# non root user (https://code.visualstudio.com/remote/advancedcontainers/add-nonroot-user)
ARG USERNAME=dev
ARG USER_UID=1000
ARG USER_GID=$USER_UID
# Create the user
RUN groupadd --gid $USER_GID $USERNAME \
&& useradd --uid $USER_UID --gid $USER_GID -m $USERNAME \
&& apt-get update \
&& apt-get install -y sudo \
&& echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
&& chmod 0440 /etc/sudoers.d/$USERNAME
USER $USERNAME
I pass the current directory in my GitHub workflow:
docker run -u dev -v $PWD:/home/dev/project project /bin/bash -c "./my_script.sh"
but my_script.sh fails to create a directory with permission problems. I also tried docker run -u $USER ... but it does not find the user runner inside the container.
One option is to run with root: docker run -u root ..., but is there a better way? I tried passing docker run -u dev ... but I also get Permission Denied.
&& echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
&& chmod 0440 /etc/sudoers.d/$USERNAME
Those two lines defeat the entire reason for not running your container as root. It's a passwordless escalation to root, making the user effectively the same as having full root access.
docker run -u dev -v $PWD:/home/dev/project project /bin/bash -c
"./my_script.sh"
but my_script.sh fails to create a directory with permission problems.
Host volumes are mounted with the same uid/gid (unless using user namespaces). So you need the uid/gid of the user inside the container to match the host directory uid/gid permissions.
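For example, a quick way to compare the two sides (plain shell, nothing GHA-specific assumed here):
ls -nd "$PWD"    # numeric uid/gid that owns the bind-mounted directory on the host
id -u; id -g     # uid/gid the process inside the container needs to match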
I also tried docker run -u $USER ... but it does not find the user
runner inside the container.
If you specify a username, it looks for that username inside the container's /etc/passwd. You can instead specify the uid/gid like:
docker run -u "$(id -u):$(id -g)" ...
Make sure the directory already exists on the host (which will be the case for $PWD) and that the user has write access to that directory (which it should if you haven't done anything unusual in GHA).
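Putting it together, the run step could look something like the following sketch (the image name project and script path are taken from the question; -w just sets the working directory to the mount):
docker run -u "$(id -u):$(id -g)" -v "$PWD:/home/dev/project" -w /home/dev/project project /bin/bash -c "./my_script.sh"
Because the uid/gid now match the runner's user, anything the script creates under the mount is also writable and owned correctly on the host.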

Why does my bash script not run properly on every Docker container that starts up?

My Dockerfile copies an init.sh script to the container.
# DOCKERFILE
FROM ubuntu:latest
# a bunch of installation commands
COPY init.sh /
ENTRYPOINT bash init.sh
EXPOSE 80
And I have a docker-compose file with 2 services:
Service1: This service is being scaled.
Service2: Database
I have it so that when the Service1 container starts up, this script will run.
#!/bin/bash
# script
# Missing files directory
if [[ ! -e /var/www/drupal/sites/default/files ]]; then
mkdir /var/www/drupal/sites/default/files
chmod a+w /var/www/drupal/sites/default/files
fi
# Missing settings file
cp /var/www/drupal/sites/default/default.settings.php /var/www/drupal/sites/default/settings.php
chmod a+w /var/www/drupal/sites/default/settings.php
# Install Drush & Install Drupal
cd /var/www/drupal && composer require --dev drush/drush
cd /var/www/drupal && vendor/bin/drush site-install standard \
--db-url=mysql://root:random@mariadb:3306/drupaldb -y \
--site-name=ExampleWebsite \
--account-name=random \
--account-pass=random
# Post-Installation Steps
chmod go-w /var/www/drupal/sites/default/settings.php
chmod go-w /var/www/drupal/sites/default
cd /var/www/drupal && vendor/bin/drush cache-rebuild
/usr/sbin/apache2ctl -D FOREGROUND
However, when I start the containers with --scale (docker-compose up -d --scale Service1=5), some of the containers run the script properly on startup but some don't. For the ones that don't, I have to go into the container and run the script manually; then it's fine.
Shouldn't all the containers be identical and run the same script properly?
Instead, I have to go into some of the containers manually and run the script.

How to install Oracle instantclient on a custom Jenkins agent

I am struggling to install an Oracle driver and use it in my Jenkins builds.
My goal is this:
I would like to run Cypress e2e tests in a Jenkins pipeline. Cypress should be able to execute Oracle statements to prepare the database for some tests.
Therefore I created a script with the npm package 'oracledb'. This works fine on my Windows machine.
But in the Jenkins pipeline, I cannot download the needed instantclient driver files.
So I started to create my own Jenkins agent. This agent is a Docker image built from 'cypress/browsers:node14.17.0-chrome91-ff89', which then downloads and unzips the instantclient drivers.
So far so good, but when I execute a task on that agent, I cannot find or use the driver that the agent should provide.
So my question is: how can I provide an instantclient installation on a Jenkins agent so that my builds can use it?
Dockerfile:
FROM cypress/browsers:node14.17.0-chrome91-ff89
USER root
ENV JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8
ENV LANG=en_GB.UTF-8
RUN mkdir -p /home/jenkins && \
chown -R 1001:0 /home/jenkins && \
chmod -R g+w /home/jenkins
RUN ls /home -all
RUN mkdir -p /opt/oracle && \
chown -R 1001:0 /opt/oracle && \
chmod -R g+w /opt/oracle
WORKDIR /opt/oracle
RUN apt-get update && apt-get install -y libaio1 wget unzip
RUN wget https://download.oracle.com/otn_software/linux/instantclient/instantclient-basiclite-linuxx64.zip && \
unzip instantclient-basiclite-linuxx64.zip && rm -f instantclient-basiclite-linuxx64.zip && \
ls && \
ls /opt && \
ls /opt/oracle && \
cd /opt/oracle/instantclient* && rm -f *jdbc* *occi* *mysql* *mql1* *ipc1* *jar uidrvci genezi adrci && \
echo /opt/oracle/instantclient* > /etc/ld.so.conf.d/oracle-instantclient.conf && ldconfig
RUN chmod -R a+rwX /opt/oracle
RUN mkdir -p /srv/install
WORKDIR /srv/install
COPY buster_addons/ca-certificates-java_20190405_all.deb /srv/install/ca-certificates-java_20190405_all.deb
COPY buster_addons/fonts-dejavu-extra_2.37-1_all.deb /srv/install/fonts-dejavu-extra_2.37-1_all.deb
COPY buster_addons/java-common_0.71_all.deb /srv/install/java-common_0.71_all.deb
COPY buster_addons/libpcsclite1_1.8.24-1_amd64.deb /srv/install/libpcsclite1_1.8.24-1_amd64.deb
COPY buster_addons/libatk-wrapper-java_0.38.0-1_all.deb /srv/install/libatk-wrapper-java_0.38.0-1_all.deb
COPY buster_addons/libatk-wrapper-java_0.33.3-22_all.deb /srv/install/libatk-wrapper-java_0.33.3-22_all.deb
COPY buster_addons/libatk-wrapper-java-jni_0.33.3-22_amd64.deb /srv/install/libatk-wrapper-java-jni_0.33.3-22_amd64.deb
COPY buster_addons/openjdk-11-jre_11.0.7+10-3_deb10u1_amd64.deb /srv/install/openjdk-11-jre_11.0.7+10-3_deb10u1_amd64.deb
COPY buster_addons/openjdk-11-jdk_11.0.7+10-3_deb10u1_amd64.deb /srv/install/openjdk-11-jdk_11.0.7+10-3_deb10u1_amd64.deb
COPY buster_addons/openjdk-11-jre-headless_11.0.7+10-3_deb10u1_amd64.deb /srv/install/openjdk-11-jre-headless_11.0.7+10-3_deb10u1_amd64.deb
COPY buster_addons/openjdk-11-jdk-headless_11.0.7+10-3_deb10u1_amd64.deb /srv/install/openjdk-11-jdk-headless_11.0.7+10-3_deb10u1_amd64.deb
RUN dpkg -i /srv/install/java-common_0.71_all.deb && \
dpkg -i /srv/install/libpcsclite1_1.8.24-1_amd64.deb && \
dpkg -i --force-all /srv/install/openjdk-11-jre-headless_11.0.7+10-3_deb10u1_amd64.deb && \
dpkg -i /srv/install/ca-certificates-java_20190405_all.deb && \
dpkg -i /srv/install/libatk-wrapper-java_0.38.0-1_all.deb && \
dpkg -i /srv/install/openjdk-11-jre_11.0.7+10-3_deb10u1_amd64.deb && \
dpkg -i /srv/install/libatk-wrapper-java-jni_0.33.3-22_amd64.deb && \
dpkg -i /srv/install/libatk-wrapper-java_0.33.3-22_all.deb && \
dpkg -i /srv/install/fonts-dejavu-extra_2.37-1_all.deb && \
dpkg -i /srv/install/openjdk-11-jdk-headless_11.0.7+10-3_deb10u1_amd64.deb && \
dpkg -i /srv/install/openjdk-11-jdk_11.0.7+10-3_deb10u1_amd64.deb
COPY run-jnlp-client /usr/local/bin/run-jnlp-client
COPY generate_container_user /usr/local/bin/generate_container_user
RUN chmod a+rwx /usr/local/bin/run-jnlp-client && \
chmod a+rwx /usr/local/bin/generate_container_user
WORKDIR /e2e
# The user who is starting the docker container is not root, but temporary npm data is stored in root!
RUN chmod -R a+rwX /e2e
RUN mkdir -p /tmp && \
chown -R 1001:0 /tmp
RUN ls /tmp -all
# Run the Jenkins JNLP client
ENTRYPOINT ["/usr/local/bin/run-jnlp-client"]
The JNLP stuff is from here: https://github.com/openshift/jenkins/blob/master/slave-base/contrib/bin/run-jnlp-client

Execute .sh file on start of Docker container and let container run

I have a Docker container that serves a web server. On every startup of the container, I want to execute a little shell script. The script that has to be executed has only one statement.
/var/www/html/app/Console/cake schema update -y
To achieve this, I created a .sh file called schemaupdate.sh, which I copy into the /etc/init.d folder of the container via the Dockerfile. Furthermore, I make it executable and register it for startup.
COPY schemaupdate.sh /etc/init.d/schemaupdate.sh
RUN chmod 755 /etc/init.d/schemaupdate.sh
RUN update-rc.d schemaupdate.sh defaults
The file is successfully copied into the container. However, the script is not executed when the container starts. When I call the sh file manually, everything runs fine.
How can I achieve that the file/statement is executed on each startup of a container? It is important that the script runs at startup and that the container (so the web server) then keeps running! The script only does a small update check, and after the check the web server should keep going.
The container is Debian based. Here is the initial Dockerfile.
#start with base Image from php
FROM php:7.3-apache
#install system dependencies and enable PHP modules
RUN apt-get update && apt-get install -y \
libicu-dev \
libpq-dev \
libmcrypt-dev \
mysql-client \
git \
zip \
unzip \
&& rm -r /var/lib/apt/lists/* \
&& docker-php-ext-configure pdo_mysql --with-pdo-mysql=mysqlnd \
&& docker-php-ext-install \
intl \
mbstring \
pcntl \
pdo_mysql \
pdo_pgsql \
pgsql \
opcache
# zip \
# mcrypt \
#configure imap for mails
RUN apt-get update && \
apt-get install -y \
libc-client-dev libkrb5-dev && \
rm -r /var/lib/apt/lists/*
RUN docker-php-ext-configure imap --with-kerberos --with-imap-ssl && \
docker-php-ext-install -j$(nproc) imap
#install mcrypt
RUN apt-get update \
&& apt-get install -y libmcrypt-dev \
&& rm -rf /var/lib/apt/lists/* \
&& pecl install mcrypt-1.0.2 \
&& docker-php-ext-enable mcrypt
#install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer
#set our application folder as an environment variable
ENV APP_HOME /var/www/html
#change uid and gid of apache to docker user uid/gid
RUN usermod -u 1000 www-data && groupmod -g 1000 www-data
#change the web_root to cakephp /var/www/html/webroot folder
#RUN sed -i -e "s/html/html\/webroot/g" /etc/apache2/sites-enabled/000-default.conf
# enable apache module rewrite
RUN a2enmod rewrite
#copy source files and run composer
#COPY src/ /var/www/html
#COPY src/ $APP_HOME
# install all PHP dependencies
#RUN composer install --no-interaction
#SET Volume
VOLUME /var/www/html/
#change ownership of our applications
RUN chown -R www-data:www-data $APP_HOME
#SET ENV VARIABLES
COPY schemaupdate.sh /etc/init.d/schemaupdate.sh
RUN chmod 755 /etc/init.d/schemaupdate.sh
RUN update-rc.d schemaupdate.sh defaults
EXPOSE 80
/etc/init.d/ isn't relevant. Containers aren't full-blown operating systems with a heavyweight SysV init-style startup sequence; they run a single command, and that's it.
You should either add the command as a RUN statement in the Dockerfile so its results are baked into the image, or you should have it called directly by the container's CMD or ENTRYPOINT directive.
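If the update really has to run on every container start (rather than once at build time), one common pattern is a small wrapper entrypoint that runs the one-off command and then execs the real server process. A sketch, assuming the paths from the question and a hypothetical script name docker-entrypoint.sh:
#!/bin/bash
set -e
# run the schema update on every start
/var/www/html/app/Console/cake schema update -y
# hand PID 1 over to Apache so the container keeps running
exec /usr/sbin/apachectl -D FOREGROUND
and in the Dockerfile:
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["docker-entrypoint.sh"]
Using exec means Apache receives signals directly, so docker stop shuts it down cleanly.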
I finally used the ENTRYPOINT. I deleted the COPY, chmod and update-rc.d lines. The ENTRYPOINT looks like the following.
ENTRYPOINT [ "sh", "-c", "/var/www/html/app/Console/cake schema update -y && /usr/sbin/apachectl -D FOREGROUND"]
It first runs the update statement. After that has finished (so terminated), apachectl is called to keep the web server running.

File permission problems in a Dockerfile for a Ruby application

Hello, I'm trying to build a Ruby app using the same user id and group id in the container as on the host, i.e. 1000.
I run into permission problems but I can't figure out why.
Here is the error that I get:
There was an error while trying to write to `/home/appuser/myapp/Gemfile.lock`.
It is likely that you need to grant write permissions for that path.
The command '/bin/sh -c bundle install' returned a non-zero code: 23
Here is my Dockerfile:
# Dockerfile
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN groupadd -r -g 1000 appuser
RUN useradd -r -m -u 1000 -g appuser appuser
USER appuser
RUN mkdir /home/appuser/myapp
WORKDIR /home/appuser/myapp
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . ./
If you want a pure Docker solution, try this:
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
# Reduce layers by grouping related commands into single RUN steps
RUN groupadd -r -g 1000 appuser && \
useradd -r -m -u 1000 -g appuser appuser
# Setting workdir will also create the dir if it doesn't exist, so no need to mkdir
WORKDIR /home/appuser/myapp
# Copy everything over in one go
COPY . ./
# This line should fix your issue
# (give the user ownership of their home dir and make Gemfile.lock writable)
# Must still be root for this to work
RUN chown -R appuser:appuser /home/appuser/ && \
chmod +w /home/appuser/myapp/Gemfile.lock
USER appuser
RUN bundle install
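A quick way to sanity-check the result (myapp is just a hypothetical image tag here):
docker build -t myapp .
docker run --rm myapp id                          # should start with uid=1000(appuser) gid=1000(appuser)
docker run --rm myapp ls -la /home/appuser/myapp  # files should be owned by appuser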
It might be a better idea to fix the permissions on your host system with something like this:
sudo chmod g+w Gemfile.lock
