Permission Denied for cp command on Bitnami Nginx Docker Image

I want to serve a static website using Bitnami's Nginx base image. I have a multi-stage Dockerfile as follows:
# build stage
FROM node:lts-alpine as build-stage
COPY ./ /app
WORKDIR /app
COPY .npmrc .npmrc
RUN npm install && npm run build
# Production stage
FROM bitnami/nginx:1.16 as production-stage
COPY --from=build-stage --chown=1001 /app/dist /app
COPY nginx.conf /opt/bitnami/nginx/conf/nginx.conf
COPY --chown=1001 entrypoint.sh /
RUN chmod +x /entrypoint.sh
CMD ["/entrypoint.sh"]
I use that entrypoint.sh to replace some file content with environment variables, like so:
#!/bin/bash
function join_by { local IFS="$1"; shift; echo "$*"; }
vars=$(env | grep VUE_APP_ | awk -F = '{print "$"$1}')
vars=$(join_by ' ' $vars)
for file in /app/js/app.*;
do
### THIS LINE THROWS AN ERROR ###
cp $file $file.tmpl
envsubst "$vars" < $file.tmpl > $file
rm $file.tmpl
done
exec "$#"
The cp command throws an error:
cp: cannot create regular file '/app/js/app.042ea3b0.js.tmpl': Permission denied
As you can see, I have copied both the dist files and entrypoint.sh with --chown=1001 (the default user in the Bitnami image), but to no avail.
Is it because the image's /app folder is exposed as a volume by default? How can I copy and then modify a file I have placed in the image?
P.S.: It runs in an OpenShift environment.

The Bitnami image performs some actions in its postunpack.sh script, which is called from the Dockerfile. One of the actions performed by that script is configuring permissions, because the user running nginx is a non-root user. You can try implementing something similar for your needs; see the sketch below.
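A rough sketch of doing something similar in your own image (the /app path comes from your Dockerfile; the exact permission bits are an assumption, in the spirit of what the postunpack.sh script does for Bitnami's own directories):
# let any member of the file's group write the generated assets
RUN chmod -R g+rwX /app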

It turned out to be a result of OpenShift's behavior, stated here:
How can I enable an image to run as a set user ID?:
When an application is deployed it will run as a user ID unique to the project it is running in. This overrides the user ID which the application image defines it wants to be run as.
...
The best solution is to build the application image so it can be run as an arbitrary user ID.
So, instead of copying the files and changing their owner (chown), the files' access permissions (chmod) must be set appropriately.
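A minimal sketch of that approach for the Dockerfile above (the chgrp 0 / chmod g=u convention is the one recommended in the OpenShift guidelines; the paths are taken from the question):
COPY --from=build-stage /app/dist /app
COPY entrypoint.sh /
# Give the root group (GID 0) the same rights as the owner: the arbitrary
# UID that OpenShift assigns always belongs to GID 0, so it can read and
# write these files regardless of its numeric value.
RUN chgrp -R 0 /app /entrypoint.sh && \
    chmod -R g=u /app /entrypoint.sh && \
    chmod +x /entrypoint.sh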

Add bash to Docker image [duplicate]

This seems like a basic issue, but I couldn't find any answers so far.
When using ADD/COPY in a Dockerfile and running the image on Linux, the default permission of the copied file is 644, and its owner appears to be root.
However, when running the image, a non-root user starts the container; that user cannot execute a file copied with 644 permissions, and if the file is executed at ENTRYPOINT, the container fails to start with a permission denied error.
I read in one of the posts that COPY/ADD in Docker 17.09+ supports --chown, but in my case I don't know which non-root user will be starting the container, so I cannot set the ownership to that user.
I also saw another workaround: ADD/COPY the files to a temporary location and use RUN to copy them from there to the actual folder, like I am doing below. But this approach doesn't work either, as the final image doesn't have the files in /opt/scm.
#Installing Bitbucket and setting variables
WORKDIR /tmp
ADD atlassian-bitbucket-${BITBUCKET_VERSION}.tar.gz .
COPY bbconfigupdater.sh .
#Copying Entrypoint script which will get executed when container starts
WORKDIR /tmp
COPY entrypoint.sh .
RUN ls -lrth /tmp
WORKDIR /opt/scm
RUN pwd && cp /tmp/bbconfigupdater.sh /opt/scm \
&& cp /tmp/entrypoint.sh /opt/scm \
&& cp -r /tmp/atlassian-bitbucket-${BITBUCKET_VERSION} /opt/scm \
&& chgrp -R 0 /opt/ \
&& chmod -R 755 /opt/ \
&& chgrp -R 0 /scm/bitbucket \
&& chmod -R 755 /scm/bitbucket \
&& ls -lrth /opt/scm && ls -lrth /scmdata
Any help is appreciated in figuring out how I can get my entrypoint script copied to the desired path with execute permissions set.
The default file permission is whatever the file permission is in your build context from where you copy the file. If you control the source, then it's best to fix the permissions there to avoid a copy-on-write operation. Otherwise, if you cannot guarantee the system building the image will have the execute bit set on the files, a chmod after the copy operation will fix the permission. E.g.
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh
A better option with newer versions of docker (and which didn't exist when this answer was first posted) is to use the --chmod flag (the permissions must be specified in octal at last check):
COPY --chmod=0755 entrypoint.sh .
You do not need to know who will run the container. The user inside the container is typically configured by the image creator (using USER) and doesn't depend on the user running the container from the docker host. When the user runs the container, they send a request to the docker API which does not track the calling user id.
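For illustration, a minimal sketch of how an image creator pins the container user at build time (the user name appuser is made up):
FROM alpine
# create an unprivileged user and make it the default for containers
RUN adduser -D appuser
USER appuser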
The only time I've seen the host user matter is if you have a host volume and want to avoid permission issues. If that's your scenario, I often start the entrypoint as root, run a script called fix-perms to align the container uid with the host volume uid, and then run gosu to switch from root back to the container user.
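As a rough skeleton of that root-then-drop pattern (fix-perms is a helper script you would have to provide yourself, gosu is a separate tool that must be installed in the image, and the user name, volume path, and fix-perms flags here are all illustrative):
#!/bin/sh
set -e
if [ "$(id -u)" = "0" ]; then
    # illustrative call: align appuser's uid with the owner of the host volume
    fix-perms -r -u appuser /data
    # drop from root to the container user and re-run this same entrypoint
    exec gosu appuser "$0" "$@"
fi
exec "$@"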
A --chmod flag was added to the ADD and COPY instructions in Docker CE 20.10, so you can now do:
COPY --chmod=0755 entrypoint.sh .
To be able to use it, you need to enable BuildKit:
# enable buildkit for docker
DOCKER_BUILDKIT=1
# enable buildkit for docker-compose
COMPOSE_DOCKER_CLI_BUILD=1
Note: it seems not to be documented at this time; see this issue.
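Alternatively, set the variable inline for a single build (the image tag here is a placeholder):
DOCKER_BUILDKIT=1 docker build -t myimage .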

Where should I put an input file in a Docker environment?

I am a newbie in Docker. I set up the Docker environment in WSL2 (Windows 10 Home). I do not wish to use VS Code, for the sake of a simpler setup; I would rather use the Ubuntu terminal. When I try to compile my LaTeX file (my_input.tex) with a Docker image (https://hub.docker.com/r/weichuntsai/texlive-small), it complains that there is no such tex file.
docker run --name mylatex -dt -v /home/myname:/home weichuntsai/texlive-small:1.1.0
When I send the following command in the terminal, it complains that there is no corresponding file:
txrun my_input.tex xelatex
although I created this tex file in the home (~, or /home/myname) directory.
Sending ls returns only tex.mf, without showing my_input.tex, unfortunately.
Sending pwd returns root for some reason. I have no idea why it returns root, not home/myname.
It may be due to my insufficient understanding of Docker, but I appreciate your kind advice on this question.
N.B. I have learned that Docker images are located in /var/lib/docker. To change this directory, one must stop the Docker daemon with sudo service docker stop and then edit /etc/docker/daemon.json:
{
"data-root": "/to/some/path"
}
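So the full sequence would look something like this (assuming a service-based init, as the quoted command does; the new path is a placeholder):
sudo service docker stop
# add the "data-root" entry shown above to /etc/docker/daemon.json
sudo service docker start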
Checking the Dockerfile of your image shows that its working directory is /root: https://hub.docker.com/r/weichuntsai/texlive-small/dockerfile
Just mount your home directory to the container's /root:
docker run --name mylatex -dt -v /home/myname:/root weichuntsai/texlive-small:1.1.0
or, inside the container, change to home with cd /home.
Alternatively, mount your home directory to some other path:
docker run --name mylatex -dt -v "/home/me":"/file" weichuntsai/texlive-small:1.1.0
and then access your file like:
docker exec -it mylatex bash
cd /file
ls

Bash commands are ignored in my ENTRYPOINT Docker bash script

I have a few libraries that I have compiled on my machine, and I want to copy their binaries into my Docker container. At first, I tried to use the COPY and ADD commands in my Dockerfile:
# Installing zeromq
WORKDIR /${home}/${user}/master-wheel
COPY ${PWD}/libzmq ./libzmq
COPY ${PWD}/cppzmq ./cppzmq
WORKDIR /${home}/${user}/master-wheel/libzmq/binaries
ADD * /
WORKDIR /${home}/${user}/master-wheel/cppzmq/binaries
ADD * /
Note that the directories and files do exist, and upon entering my created container I can see that the copied directories libzmq and cppzmq are there; I can manually copy all the binaries to the root /. However, for some reason the Dockerfile doesn't copy them, and I can't figure out what the problem is.
Then I decided to do the copying inside my ENTRYPOINT script instead, which looks like this:
#!/bin/bash
#set -e
#set -u
echo "==> Executing master image entrypoint ..."
echo "-> Setting up"
cp -r /home/ed/master-wheel/libzmq/binaries/* /
cp -r /home/ed/master-wheel/cppzmq/binaries/* /
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib
ldconfig
echo "==> Container ready"
exec "$#"
Everything executes except the two cp commands. I tried the same cp command from my container's bash terminal and it worked.
What could be the problem?
EDIT:
This part of the file works and the binaries do really get copied to the root directory:
# Installing libfreespace
WORKDIR /${home}/${user}/master-wheel
COPY ${PWD}/libfreespace ./libfreespace
WORKDIR /${home}/${user}/master-wheel/libfreespace/binaries
COPY * /
EDIT 2:
It seems that if I do something like this:
WORKDIR /${home}/${user}/master-wheel
COPY ${PWD}/libzmq ./libzmq
COPY ${PWD}/cppzmq ./cppzmq
WORKDIR /${home}/${user}/master-wheel/libzmq/binaries/usr/
ADD * /usr/
WORKDIR /${home}/${user}/master-wheel/cppzmq/binaries/usr/
ADD * /usr/
It works.

Result of .sh script RUN from Dockerfile is not saved to image

After RUN ["./run.sh"], a folder produced by run.sh is visible from inside the script but is lost once Docker continues to the next step.
Expected behaviour:
I would like to have access to the public/ folder, which is generated by the run.sh script.
Dockerfile
...
RUN mkdir -p /opt/site
WORKDIR /opt/site
VOLUME /opt/site
COPY . .
RUN ["chmod", "+x", "./run.sh"]
RUN ["./run.sh"]
RUN pwd
RUN ls
RUN ls public
FROM nginx
COPY --from=build-stage /opt/site/public /usr/share/nginx/html
Script
#!/usr/bin/env bash
rm -rf public/ node_modules/ node_modules/.bin/ package-lock.json yarn.lock
npm install
ls
touch newfile.txt
npm run build
ls
Output of ls from inside the run.sh script after the build. The public folder is present:
...
Generated public/sw.js, which will precache 6 files, totaling 197705 bytes.
info Done building in 44.842 sec
ls
Dockerfile
config
gatsby-config.js
gatsby-node.js
newfile.txt
node_modules
package-lock.json
package.json
postcss.config.js
public
run.sh
src
static
tailwind.css
tailwind.js
Output of ls run from the Dockerfile. The public folder is missing, and trying to interact with it fails:
Removing intermediate container 1692fb171673
---> 474d83267ccb
Step 10/14 : RUN pwd
---> Running in 7c351b151904
/opt/site
Removing intermediate container 7c351b151904
---> bae37da8b513
Step 11/14 : RUN ls
---> Running in 384daf575cae
Dockerfile
config
gatsby-config.js
gatsby-node.js
package-lock.json
package.json
postcss.config.js
run.sh
src
static
tailwind.css
tailwind.js
Removing intermediate container 384daf575cae
---> 1f6743a4adc1
Step 12/14 : RUN ls public
---> Running in 7af84c5d72a0
ls: cannot access public: No such file or directory
The command '/bin/sh -c ls public' returned a non-zero code: 2
ERROR: Job failed: exit code 2
You've created a volume with the selected directory:
VOLUME /opt/site
When defined in an image, a volume will get created for every container created from that image. If you do not specify a source for the volume (which you cannot at build time), docker will create an anonymous volume. And with both a named and anonymous volume, docker will initialize the contents to that of the image at that location.
The result of a RUN command is the following:
create a temporary container
that temporary container runs your requested command and verifies the exit code before continuing
if successful, docker captures the result of a diff between the image and container. This is mainly the container-specific read/write filesystem layer. However, it does not include any external volumes.
This behaviour is documented by docker:
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
My standard recommendation is to remove any volume definition from the Dockerfile. If you need a volume, define it at runtime with something like a docker compose file. This allows the image to be extended, and prevents anonymous volumes from cluttering the filesystem.
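For example, drop the VOLUME /opt/site line from the Dockerfile and attach a named volume when you start the container instead (the image and volume names here are placeholders):
# the named volume is created on first use and initialized from the image
docker run -d -v site-data:/opt/site my-site-image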

Bash file in Docker isn't executing

I want my bash file to run whenever I run the Docker image. First I created the Dockerfile inside a new directory, say demo; in that directory demo I created a new directory home, and in that directory I have my bash file, testfile.sh.
Here is my Dockerfile:
FROM ubuntu
MAINTAINER Aman Kh
COPY . /home
CMD /home/testfile.sh
On building it with the command sudo docker build -t amankh99/hello ., the following output was received:
Sending build context to Docker daemon 3.584kB
Step 1/4 : FROM ubuntu
---> 0458a4468cbc
Step 2/4 : MAINTAINER Aman Kh
---> Using cache
---> 98fbe31ed233
Step 3/4 : COPY . /home
---> Using cache
---> 7e52ff3439e2
Step 4/4 : CMD /home/testfile.sh
---> Using cache
---> 1d2660df6387
Successfully built 1d2660df6387
Successfully tagged amankh99/hello:latest
But when I run it with the command
sudo docker run --name test -it amankh99/hello
it says:
/bin/sh: 1: /home/testfile.sh: not found
After building successfully, why is it unable to find the file?
I want to push this image to Docker Hub so that I can run it with a simple run command, just as we run hello-world (sudo docker run hello-world), and have my bash file executed. What changes can I make in the Dockerfile to achieve this?
OP's Description
I created the Dockerfile inside a new directory say demo and in that directory demo I created a new directory home and in that directory I’ve my bash file - testfile.sh.
So according to your description
demo/
+--- Dockerfile
+--- home/
     +--- testfile.sh
You need to COPY the home directory into /home:
FROM ubuntu
COPY home /home
CMD /home/testfile.sh
If you do COPY . /home, your home/testfile.sh will be copied to /home/home/testfile.sh.
If you want to copy only your testfile.sh, then do this:
COPY home/testfile.sh /home/
Either it can find the file but does not know what to do with it, because the interpreter of your script (the #!... on the first line of your script) is not present in your Docker image; or it cannot find the file because it was not copied where you expect.
You can verify this by passing /bin/bash as the final argument to your docker run command, then running ls -l /home/testfile.sh and/or /home/testfile.sh at the prompt.
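Concretely, using the image from the question:
sudo docker run -it amankh99/hello /bin/bash
# then, at the container prompt:
ls -l /home/testfile.sh
/home/testfile.sh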
