Hello, I'm trying to create a Ruby app using the same user ID and group ID in the container as on the host, i.e. 1000.
I run into permission problems, but I can't figure out why.
Here is the error that I get:
There was an error while trying to write to `/home/appuser/myapp/Gemfile.lock`.
It is likely that you need to grant write permissions for that path.
The command '/bin/sh -c bundle install' returned a non-zero code: 23
Here is my Dockerfile:
# Dockerfile
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN groupadd -r -g 1000 appuser
RUN useradd -r -m -u 1000 -g appuser appuser
USER appuser
RUN mkdir /home/appuser/myapp
WORKDIR /home/appuser/myapp
COPY Gemfile Gemfile.lock ./
RUN bundle install
COPY . ./
If you want a pure Docker solution, try this:
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
# Reduce layers by grouping related commands into single RUN steps
RUN groupadd -r -g 1000 appuser && \
useradd -r -m -u 1000 -g appuser appuser
# Setting workdir will also create the dir if it doesn't exist, so no need to mkdir
WORKDIR /home/appuser/myapp
# Copy everything over in one go
COPY . ./
# This line should fix your issue
# (give the user ownership of their home dir and make Gemfile.lock writable)
# Must still be root for this to work
RUN chown -R appuser:appuser /home/appuser/ && \
chmod +w /home/appuser/myapp/Gemfile.lock
USER appuser
RUN bundle install
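As an aside, the root cause is that COPY writes files owned by root by default, which is why appuser cannot write Gemfile.lock during bundle install. On Docker 17.09 or newer you can likely drop the separate chown/chmod step by letting COPY set the owner directly; a minimal sketch, not tested against your build:
FROM ruby:2.5
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
RUN groupadd -r -g 1000 appuser && \
    useradd -r -m -u 1000 -g appuser appuser
WORKDIR /home/appuser/myapp
# --chown makes the copied files belong to appuser from the start
COPY --chown=appuser:appuser . ./
USER appuser
RUN bundle install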
It might be a better idea to fix the permissions on your host system with something like this:
sudo chmod g+w Gemfile.lock
I'm working on a Dockerfile that creates two volumes called /data/ and /artifact/ and one user called "omnibo", and then assigns that user ownership of and permissions on these two volumes. I tried using the chown command, but when I check afterwards, the volumes' permissions/ownership are still assigned to the root user.
This is what's in my Dockerfile script:
FROM alpine:latest
RUN useradd -m omnibo
VOLUME /data/ /artifact/
RUN chown -R omnibo /data /artifact
RUN mkdir -p /var/cache /var/cookbook
COPY fix-joyou.sh /root/joyou.sh
COPY Molsfile /var/file/Molsfile
RUN bash /root/fix-joyou.sh && rm -rf /root/fix-joyou.sh && \
yum -y upgrade && \
yum -y install curl iproute hostname && \
curl -L https://monvo.tool.sh/install.sh | bash && \
/opt/embedded/bin/gem install -N berkshelf && \
/opt/embedded/bin/berks vendor -b /var/cinc/Molsfile /var/cinc/cookbook
ENV RUBYOPT=-r/usr/local/share/ruby-docker-copy-patch.rb
USER omnibo
WORKDIR /home/omnibo
This script runs successfully when creating the container, but when doing "ll" it shows that these two volumes are still assigned to "root". Is there anything I can do to give ownership to "omnibo"?
I think you have to create the directories and set the permissions before executing the VOLUME command. According to the Docker documentation: "If any build steps change the data within the volume after it has been declared, those changes will be discarded". See https://docs.docker.com/engine/reference/builder/#volume
Try the following:
FROM alpine:latest
# alpine ships BusyBox adduser rather than useradd
RUN adduser -D omnibo
RUN mkdir /data /artifact && chown -R omnibo /data /artifact
VOLUME /data/ /artifact/
...
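To confirm the ownership sticks, you could build the image and list the directories from a throwaway container (the tag voltest is just a placeholder):
docker build -t voltest .
docker run --rm voltest ls -ld /data /artifact
Both directories should now show omnibo as the owner; anything written to them after the VOLUME line would have been discarded, which is why the order matters.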
I am writing a Dockerfile where one of its dependencies can only be installed when a home directory exists, but how do I set something like that up?
ARG BUILD_FROM=raspbian/stretch:latest
FROM $BUILD_FROM
RUN apt-get -qq update \
&& apt-get -qq install -y --no-install-recommends \
apt-transport-https \
apt-utils \
dirmngr \
gnupg-curl \
mpg123 \
supervisor \
unzip \
curl \
git \
wget \
python3 \
&& pip3 install -U setuptools && pip3 install utils \
&& pip3 install -r requirements.txt
requirements.txt
slugify
google-api-python-client
oauth2client
esptool
This can only be installed using pip3 install --user slugify, which requires a home directory, which I can't set up.
You can either create the directory manually with a line like
RUN mkdir /home/slugify
or, if you need a full user with permissions etc., you can run
RUN adduser slugify
Read more at the manpage https://manpages.debian.org/jessie/adduser/adduser.8.sv.html
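A minimal sketch of how that could look in the Dockerfile from the question (assuming pip3 is actually available in the image; the user name slugify is arbitrary):
# Debian's adduser prompts interactively by default, so silence the prompts
RUN adduser --disabled-password --gecos "" slugify
USER slugify
# HOME is normally taken from /etc/passwd for the active user; pip3 install --user
# then puts packages under /home/slugify/.local
RUN pip3 install --user slugify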
I think there should be a user home set by default.
However, you can change it using usermod -d /newhome/username username:
RUN mkdir /homedir
RUN usermod -d /homedir root
Maybe the program just needs the HOME environment variable. Note that the assignment has to directly precede the command (no semicolon) so that it ends up in the command's environment:
RUN HOME=/homedir <install command>
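Keep in mind that a variable assigned inside RUN only exists for that single layer's shell. If later steps also need it, Dockerfile's ENV persists it for the rest of the build and at runtime:
ENV HOME=/homedir
RUN <install command>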
I have a Docker container that serves a webserver. On every startup of the container, I want to execute a little shell script. The script to be executed has only one statement:
/var/www/html/app/Console/cake schema update -y
To achieve this, I created a .sh file called schemaupdate.sh, which I copy into the /etc/init.d folder of the Docker container via the Dockerfile. Furthermore, I make it executable and register it for startup.
COPY schemaupdate.sh /etc/init.d/schemaupdate.sh
RUN chmod 755 /etc/init.d/schemaupdate.sh
RUN update-rc.d schemaupdate.sh defaults
The file is successfully copied into the container. However, the script is not executed when the Docker container starts. When I call the sh file manually, everything runs fine.
How can I achieve that the file/statement is executed on each startup of a container? It is important that the script runs at startup and that the container (so the webserver) continues to run: the script only performs a little update check, and after the check the webserver keeps on going.
The container is a Debian-based container. Here is the initial Dockerfile.
#start with base Image from php
FROM php:7.3-apache
#install system dependencies and enable PHP modules
RUN apt-get update && apt-get install -y \
libicu-dev \
libpq-dev \
libmcrypt-dev \
mysql-client \
git \
zip \
unzip \
&& rm -r /var/lib/apt/lists/* \
&& docker-php-ext-configure pdo_mysql --with-pdo-mysql=mysqlnd \
&& docker-php-ext-install \
intl \
mbstring \
pcntl \
pdo_mysql \
pdo_pgsql \
pgsql \
opcache
# zip \
# mcrypt \
#configure imap for mails
RUN apt-get update && \
apt-get install -y \
libc-client-dev libkrb5-dev && \
rm -r /var/lib/apt/lists/*
RUN docker-php-ext-configure imap --with-kerberos --with-imap-ssl && \
docker-php-ext-install -j$(nproc) imap
#install mcrypt
RUN apt-get update \
&& apt-get install -y libmcrypt-dev \
&& rm -rf /var/lib/apt/lists/* \
&& pecl install mcrypt-1.0.2 \
&& docker-php-ext-enable mcrypt
#install composer
RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/bin/ --filename=composer
#set our application folder as an environment variable
ENV APP_HOME /var/www/html
#change uid and gid of apache to docker user uid/gid
RUN usermod -u 1000 www-data && groupmod -g 1000 www-data
#change the web_root to cakephp /var/www/html/webroot folder
#RUN sed -i -e "s/html/html\/webroot/g" /etc/apache2/sites-enabled/000-default.conf
# enable apache module rewrite
RUN a2enmod rewrite
#copy source files and run composer
#COPY src/ /var/www/html
#COPY src/ $APP_HOME
# install all PHP dependencies
#RUN composer install --no-interaction
#SET Volume
VOLUME /var/www/html/
#change ownership of our applications
RUN chown -R www-data:www-data $APP_HOME
#SET ENV VARIABLES
COPY schemaupdate.sh /etc/init.d/schemaupdate.sh
RUN chmod 755 /etc/init.d/schemaupdate.sh
RUN update-rc.d schemaupdate.sh defaults
EXPOSE 80
/etc/init.d/ isn't relevant here. Containers aren't full-blown operating systems with a heavyweight SysV init-style startup sequence; they run a single command, that's it.
You should either add the command as a RUN statement in the Dockerfile so its results are baked into the image, or have it called directly by the container's CMD or ENTRYPOINT directive.
I finally used the ENTRYPOINT. I deleted the COPY, chmod and update-rc.d lines. The ENTRYPOINT looks like the following:
ENTRYPOINT [ "sh", "-c", "/var/www/html/app/Console/cake schema update -y && /usr/sbin/apachectl -D FOREGROUND" ]
It first runs the update statement. After this has finished (so terminated), apachectl is called to keep the webserver running.
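If the inline form gets unwieldy, the same behaviour fits in a small wrapper script; the file name docker-entrypoint.sh below is just a convention, not something the image requires:
#!/bin/sh
# one-off update check, then hand PID 1 over to Apache so it receives signals
/var/www/html/app/Console/cake schema update -y
exec /usr/sbin/apachectl -D FOREGROUND
with, in the Dockerfile:
COPY docker-entrypoint.sh /usr/local/bin/docker-entrypoint.sh
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
ENTRYPOINT ["/usr/local/bin/docker-entrypoint.sh"]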
First, we have a Docker network like so:
docker network create cdt-net
Then I have this bash script, which will start a Selenium server:
cd $(dirname "$0")
./node_modules/.bin/webdriver-manager update
./node_modules/.bin/webdriver-manager start
The above bash script is called by this Dockerfile:
FROM openjdk:latest
RUN apt-get update && \
apt-get -y install sudo
RUN mkdir -p /root/cdt-webdriver
WORKDIR /root/cdt-webdriver
COPY start-selenium-server.sh .
ENTRYPOINT ["/bin/bash", "/root/cdt-webdriver/start-selenium-server.sh"]
I would build it like so:
docker build -t cdt-selenium .
and then run it like so:
docker run --network=cdt-net --name cdt-selenium -d cdt-selenium
The problem that I am having is that even though everything runs cleanly with no errors, other processes in the same Docker network cannot talk to the Selenium server.
On the other hand, if I create a Selenium server using a pre-existing image, like so:
docker run -d --network=cdt-net --name cdt-selenium selenium/standalone-firefox:3.4.0-chromium
then things work as expected, and I can connect to the Selenium server from other processes in the Docker network.
Does anyone know what might be wrong with my bash script or Dockerfile? Perhaps my manually created Selenium server is not listening on the right host?
Here is the complete Dockerfile for reference:
FROM openjdk:latest
RUN apt-get update && \
apt-get -y install sudo
RUN sudo apt-get install -y curl
RUN sudo apt-get install -y apt-utils
RUN sudo apt-get -y update
RUN sudo apt-get -y upgrade
RUN sudo apt-get purge nodejs npm
RUN curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
RUN sudo apt-get install -y nodejs
RUN echo "before nodejs => $(which nodejs)"
RUN echo "before npm => $(which npm)"
RUN sudo ln -s `which nodejs` /usr/bin/node || echo "ignore error"
RUN mkdir -p /root/cdt-webdriver
WORKDIR /root/cdt-webdriver
COPY start-selenium-server.sh .
RUN rm -rf node_modules > /dev/null 2>&1
RUN npm init -f || echo "ignore non-zero exit code" > /dev/null 2>&1
RUN npm install webdriver-manager > /dev/null 2>&1
ENTRYPOINT ["/bin/bash", "/root/cdt-webdriver/start-selenium-server.sh"]
You should use -d only when your Docker image runs fine; before that, use -it.
Change your webdriver-manager to a global install:
RUN npm install -g webdriver-manager > /dev/null 2>&1
ENTRYPOINT ["/bin/bash", "/root/cdt-webdriver/start-selenium-server.sh"]
Also change your start-selenium-server.sh to:
webdriver-manager update
webdriver-manager start
And use the following to run it and check whether there are any issues:
docker run --network=cdt-net --name cdt-selenium -it cdt-selenium
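If the container starts cleanly but still seems unreachable, you can probe it from another container on the same network; this assumes the server listens on Selenium's default port 4444 (curlimages/curl is just a convenient image that ships curl):
docker run --rm --network=cdt-net curlimages/curl -s http://cdt-selenium:4444/wd/hub/status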
I'm trying to set up Vagrant virtual machines to support my learning through Seven Databases in Seven Weeks. I'm provisioning software using basic shell scripts which perform the appropriate actions within a sudo environment. However, I'm using the vagrant user to run the tutorials, and would like the provisioning to install the appropriate Node/npm modules as vagrant, rather than through sudo.
My current npm command is the last line in this provisioning script, but the module is unavailable when vagrant tries to execute Node scripts.
apt-get update
apt-get -y install build-essential
apt-get -y install tcl8.5
wget http://redis.googlecode.com/files/redis-2.6.0-rc3.tar.gz
tar xzf redis-2.6.0-rc3.tar.gz
cd redis-2.6.0-rc3
make
make install
make test
mkdir /etc/redis
mv redis.conf /etc/redis/redis.conf
sed -i.bak 's/127.0.0.1/0.0.0.0/g' /etc/redis/redis.conf
sed -i.bak 's/daemonize no/daemonize yes/g' /etc/redis/redis.conf
sed -i.bak 's/dir .\//dir \/var\/lib\/redis/g' /etc/redis/redis.conf
cd src/
wget https://raw.github.com/gist/1053791/880a4a046e06028e160055406d02bdc7c57f3615/redis-server
mv redis-server.1 /etc/init.d/redis-server
mv redis-cli /etc/init.d/redis-cli
chmod +x /etc/init.d/redis-server
sed -i.bak 's/DAEMON=\/usr\/bin\/redis-server/DAEMON=\/usr\/local\/bin\/redis-server/g' /etc/init.d/redis-server
useradd redis
mkdir -p /var/lib/redis
mkdir -p /var/log/redis
chown redis.redis /var/lib/redis
chown redis.redis /var/log/redis
update-rc.d redis-server defaults
/etc/init.d/redis-server start
cd /etc/init.d/
echo ./redis-cli
echo http://blog.hemantthorat.com/install-redis-2-6-on-ubuntu/
apt-get -y install python-software-properties python g++ make
add-apt-repository -y ppa:chris-lea/node.js
apt-get update
apt-get -y install nodejs
npm install hiredis redis csv
Simply set privileged to false in your Vagrantfile, like this:
Vagrant.configure(VAGRANTFILE_API_VERSION) do |config|
...
config.vm.provision :shell, privileged: false, path: "script.sh"
...
end
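After editing the Vagrantfile, re-run the provisioners against the existing machine to pick up the change:
vagrant provision
# or, if the VM isn't running yet
vagrant up --provision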
The shell provisioner runs as the root user. If you wish to run as the vagrant user, you can do something like this:
sudo -u vagrant npm install hiredis redis
...or, for multiple lines (note the explicit shell: sudo needs a command to run, and this feeds the here-document to it):
sudo -u vagrant bash << EOF
[...]
npm install hiredis
npm install redis
EOF
Maybe use npm install -g to install it globally in the VM?
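Concretely, the global variant of the script's last line would be something like the following; note that whether Node resolves globally installed modules from require() depends on NODE_PATH, so this may need an extra environment tweak:
npm install -g hiredis redis csv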
If the provisioning fails because sudo insists on a TTY, you can disable the requiretty default in sudoers:
sed -i 's/.*requiretty$/Defaults !requiretty/' /etc/sudoers