Result of .sh script RUN from Dockerfile is not saved to image

After RUN ["./run.sh"], a folder produced by run.sh is visible from inside the script but lost once Docker continues.
Expected behaviour:
I would like to have access to the public/ folder, which is generated by the run.sh script.
Dockerfile
...
RUN mkdir -p /opt/site
WORKDIR /opt/site
VOLUME /opt/site
COPY . .
RUN ["chmod", "+x", "./run.sh"]
RUN ["./run.sh"]
RUN pwd
RUN ls
RUN ls public
FROM nginx
COPY --from=build-stage /opt/site/public /usr/share/nginx/html
Script
#!/usr/bin/env bash
rm -rf public/ node_modules/ node_modules/.bin/ package-lock.json yarn.lock
npm install
ls
touch newfile.txt
npm run build
ls
ls from inside the run.sh script after build. The public folder is present.
...
Generated public/sw.js, which will precache 6 files, totaling 197705 bytes.
info Done building in 44.842 sec
ls
Dockerfile
config
gatsby-config.js
gatsby-node.js
newfile.txt
node_modules
package-lock.json
package.json
postcss.config.js
public
run.sh
src
static
tailwind.css
tailwind.js
ls from inside the Dockerfile. The public folder is missing and trying to interact with it leads to failure.
Removing intermediate container 1692fb171673
---> 474d83267ccb
Step 10/14 : RUN pwd
---> Running in 7c351b151904
/opt/site
Removing intermediate container 7c351b151904
---> bae37da8b513
Step 11/14 : RUN ls
---> Running in 384daf575cae
Dockerfile
config
gatsby-config.js
gatsby-node.js
package-lock.json
package.json
postcss.config.js
run.sh
src
static
tailwind.css
tailwind.js
Removing intermediate container 384daf575cae
---> 1f6743a4adc1
Step 12/14 : RUN ls public
---> Running in 7af84c5d72a0
ls: cannot access public: No such file or directory
The command '/bin/sh -c ls public' returned a non-zero code: 2
ERROR: Job failed: exit code 2

You've created a volume with the selected directory:
VOLUME /opt/site
When defined in an image, a volume will be created for every container created from that image. If you do not specify a source for the volume (which you cannot do at build time), docker will create an anonymous volume. With both named and anonymous volumes, docker initializes the contents to that of the image at that location.
The result of a RUN command is the following:
create a temporary container
that temporary container runs your requested command and verifies the exit code before continuing
if successful, docker captures the result of a diff between the image and the container. This is mainly the container-specific read/write filesystem layer, and it does not include any external volumes.
This behaviour is documented by docker:
Changing the volume from within the Dockerfile: If any build steps change the data within the volume after it has been declared, those changes will be discarded.
My standard recommendation is to remove any volume definition from the Dockerfile. If you need a volume, define it at runtime with something like a docker compose file. This allows the image to be extended, and prevents anonymous volumes from cluttering the filesystem.
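For instance, the VOLUME line can simply be deleted from the Dockerfile above, and if a volume is still wanted at runtime it can be declared in a compose file instead. A minimal sketch (the service name, image name, and volume name here are illustrative, not from the original post):

```yaml
version: '3'
services:
  site:
    image: my-site-image   # image built from the Dockerfile, without VOLUME
    volumes:
      - site-data:/opt/site   # named volume attached only at runtime
volumes:
  site-data:
```

Because the volume is only attached when the container runs, the `RUN ["./run.sh"]` step during the build writes to the normal image filesystem and the public/ folder survives into later stages.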

Related

Does WORKDIR create a directory?

These are the first 2 lines of a Dockerfile:
FROM node:12
WORKDIR /code
Here are some things I don't understand:
From docker's documentation I know that the 2nd line sets the working directory to /code.
Where does this process occur?
Does it happen when docker runs the second line of the Dockerfile, while creating the image?
If /code doesn't exist, does it get created by docker?
Where will /code be created? In the root directory of the image?
The Dockerfile WORKDIR directive
... sets the working directory.... If the WORKDIR doesn’t exist, it will be created even if it’s not used in any subsequent Dockerfile instruction.
I occasionally see SO questions that RUN mkdir a directory before switching WORKDIR to it. Since WORKDIR will create the directory, this isn't necessary.
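A quick sketch of that point (base image arbitrary): the nested path does not exist in node:12, and WORKDIR creates it without any mkdir.

```dockerfile
FROM node:12
# /code/app does not exist in the base image; WORKDIR creates it
WORKDIR /code/app
# pwd during the build prints /code/app
RUN pwd
```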
All paths in a Dockerfile are always inside the image, except for the source paths for COPY and ADD instructions, which are inside the build context directory on the host. Absolute paths like /code will be directly inside the root directory in the image, following normal Unix conventions.
You can run temporary containers off of your image to examine this, even if the Dockerfile isn't complete yet.
host$ docker build -t my-image .
host$ docker run --rm my-image ls -l /
host$ docker run --rm -it my-image /bin/sh
0123456789ab# ls -l /
0123456789ab# exit
(This will always work, assuming the image includes core tools like sh and ls. docker exec requires the container to be running first; while you're refining the Dockerfile this may not be possible yet.)
The WORKDIR path will be created inside the container.
To test this you can open a shell in your container.
Steps:
docker exec -it <container-id> sh
ls (here you will see the WORKDIR contents)
If you want to view the intermediate image layers from your custom image:
docker image inspect <image-name>
If WORKDIR is not specified, the default working directory is /.
More info at the following link,
https://www.geeksforgeeks.org/docker-workdir-instruction/

Where should I put input file in docker environment?

I am a newbie in Docker. I set up the Docker environment in WSL2 (Windows 10 Home). I do not wish to use VS Code, for simpler implementation; I would rather use the Ubuntu terminal. When I try to compile my LaTeX file (my_input.tex) with a Docker image (https://hub.docker.com/r/weichuntsai/texlive-small), it complains that there is no such tex file.
docker run --name mylatex -dt -v /home/myname:/home weichuntsai/texlive-small:1.1.0
When I send the following command in the terminal, it complains that there is no corresponding file:
txrun my_input.tex xelex, although I created this tex file in the home
(~, or /home/myname) directory.
Sending ls returns tex.mf only, without showing my_input.tex, unfortunately.
Sending pwd returns /root for some reason. I have no idea why it returns /root, not /home/myname.
It may be due to my insufficient understanding of Docker, but I would appreciate your kind advice on that question.
N.B. I have since learned that Docker images are located in /var/lib/docker.
To change this directory, one must stop the Docker daemon with
sudo service docker stop, then edit /etc/docker/daemon.json:
{
"data-root": "/to/some/path"
}
Checking the Dockerfile of your image shows that the working directory is /root: https://hub.docker.com/r/weichuntsai/texlive-small/dockerfile
Just mount your home directory to the container's root home:
docker run --name mylatex -dt -v /home/myname:/root weichuntsai/texlive-small:1.1.0
Or, inside the container, change to the home directory with cd /home.
Alternatively, mount your home to any other path inside the container:
docker run --name mylatex -dt -v "/home/me":"/file" weichuntsai/texlive-small:1.1.0
and then you can access your file like:
docker exec -it mylatex bash
cd /file
ls

Permission Denied for cp command on Bitnami Nginx Docker Image

I want to serve a static website using Bitnami's Nginx base image. I have a multi-stage Dockerfile as follows:
# build stage
FROM node:lts-alpine as build-stage
COPY ./ /app
WORKDIR /app
COPY .npmrc .npmrc
RUN npm install && npm run build
# Production stage
FROM bitnami/nginx:1.16 as production-stage
COPY --from=build-stage --chown=1001 /app/dist /app
COPY nginx.conf /opt/bitnami/nginx/conf/nginx.conf
COPY --chown=1001 entrypoint.sh /
RUN chmod +w /entrypoint.sh
CMD ["/entrypoint.sh"]
I use that entrypoint.sh to replace some file content with environment variables like:
#!/bin/bash
function join_by { local IFS="$1"; shift; echo "$*"; }
vars=$(env | grep VUE_APP_ | awk -F = '{print "$"$1}')
vars=$(join_by ' ' $vars)
for file in /app/js/app.*;
do
### T H I S L I N E T H R O W S E R R O R ###
cp $file $file.tmpl
envsubst "$vars" < $file.tmpl > $file
rm $file.tmpl
done
exec "$@"
On cp command it throws an error:
cp: cannot create regular file '/app/js/app.042ea3b0.js.tmpl': Permission denied
As you see, I have copied both the dist files and the entrypoint.sh with --chown=1001 (the default user in the Bitnami image), but to no avail.
Is it because the image folder app is exposed by a volume by default? How can I copy and modify that file I have moved into the image?
P.S: It runs in an OpenShift environment.
The Bitnami image performs some actions in the postunpack.sh script, which is called from the Dockerfile. One of the actions performed by the script configures permissions, because the user running nginx is a non-root user. You can try implementing something similar for your needs.
It turned out to be a result of Openshift's behavior stated here:
How can I enable an image to run as a set user ID?:
When an application is deployed it will run as a user ID unique to the project it is running in. This overrides the user ID which the application image defines it wants to be run as.
...
The best solution is to build the application image so it can be run as an arbitrary user ID.
So, instead of copying the files and modifying their owner (chown), the access levels (chmod) of the files must be set appropriately.
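A sketch of that chmod-based fix, assuming the OpenShift convention that the arbitrary user ID always belongs to the root group (GID 0): the usual idiom is to make group permissions mirror the owner's with chmod g=u. The paths below are a throwaway stand-in for the image's /app tree, not the original project.

```shell
# Simulate the image's /app tree with a throwaway directory
mkdir -p /tmp/app-demo/js
echo 'console.log("app")' > /tmp/app-demo/js/app.js
chmod 644 /tmp/app-demo/js/app.js

# In a Dockerfile this would be: RUN chgrp -R 0 /app && chmod -R g=u /app
# chmod g=u copies the owner's permission bits to the group, so any UID
# running with GID 0 can create and modify the .tmpl files
chmod -R g=u /tmp/app-demo

stat -c '%A' /tmp/app-demo/js/app.js   # -rw-rw-r--
```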

bash file in docker isn't executing

I want my bash file to run whenever I run the docker image. Firstly, I created the Dockerfile inside a new directory, say demo; in that directory demo I created a new directory home, and in that directory I have my bash file, testfile.sh.
Here is my docker file -
FROM ubuntu
MAINTAINER Aman Kh
COPY . /home
CMD /home/testfile.sh
On building it with command - sudo docker build -t amankh99/hello .
the following output was received -
Sending build context to Docker daemon 3.584kB
Step 1/4 : FROM ubuntu
—> 0458a4468cbc
Step 2/4 : MAINTAINER Aman Kh
—> Using cache
—> 98fbe31ed233
Step 3/4 : COPY . /home
—> Using cache
—> 7e52ff3439e2
Step 4/4 : CMD /home/testfile.sh
—> Using cache
—> 1d2660df6387
Successfully built 1d2660df6387
Successfully tagged amankh99/hello:latest
But when I run it with command
sudo docker run --name test -it amankh99/hello
it says
bin/sh: 1: /home/testfile.sh: not found
After having built successfully, why is it unable to find the file?
I want to commit this container as an image and push it to Docker Hub, so that when I run it with a simple run command, as we run hello-world (sudo docker run hello-world), my bash file gets executed. What changes can I make in the Dockerfile to achieve this?
OP's Description
I created the Dockerfile inside a new directory say demo and in that directory demo I created a new directory home and in that directory I’ve my bash file - testfile.sh.
So according to your description
demo/
+--- Dockerfile
+--- home/
+--- testfile.sh
You need to COPY home directory into /home
FROM ubuntu
COPY home /home
CMD /home/testfile.sh
If you do this COPY . /home, your home/testfile.sh will be copied to /home/home/testfile.sh
If you want copy only your testfile.sh, then do this
COPY home/testfile.sh /home/
Either it can find the file but does not know what to do with it, because the interpreter of your script (the #!... on the first line of your script) is not present in your docker image; or it cannot find the file because it was not copied.
You can verify this by passing /bin/bash as the final argument to your docker run command, then running ls -l /home/testfile.sh and/or /home/testfile.sh at the prompt.

How to fix "sh: 0: Can't open start.sh" in docker file?

I have created a docker image which contains the following CMD:
CMD ["sh", "start.sh"]
When I run the docker image I use the following command inside a Makefile
docker run --rm -v ${PWD}:/selenium $(DOCKER_IMAGE)
which copies the files from the current (host-)directory to the docker's /selenium folder. The files include files for selenium tests, as well as the file start.sh. But after the container has started, I get immediately the error
"sh: 0: Can't open start.sh"
Maybe the host volume is mounted inside docker after the command has been run? Anything else that can explain this error, and how to fix it?
Maybe there is a way to run more than one command inside docker to see whats going on? Like
CMD ["ls", ";", "pwd", ";", "sh", "start.sh"]
Update
when I use the following command i the Dockerfile
CMD ["ls"]
I get the error
ls: cannot open directory '.': Permission denied
Extra information
Docker version 1.12.6
Entrypoint: WORKDIR /work
You're mounting your volume to the /selenium folder in your container. Therefore the start.sh file isn't going to be in your working directory; it's going to be in /selenium. You want to mount your volume to a selenium folder inside your working directory, then make sure the command references this new path.
If you use docker-compose the YAML-file to run the container would look something like this:
version: '3'
services:
  start:
    image: ${DOCKER_IMAGE}
    command: sh selenium/start.sh
    volumes:
      - .:/work/selenium
If you try to perform each step manually, using docker run with bash:
docker exec -it (container name) /bin/bash
it will be easier and quicker to look at the errors, and you can change the permissions and see where the file is located before running the .sh file and trying again.
Check the permission using ls -l.
Give the permission 777 using sudo chmod 777 file_name.
Repeat for other files you might find.
