Why is this Dockerfile not running a shell script? - shell

I made a file named Dockerfile with the following contents:
FROM ******.dkr.ecr.us-east-1.amazonaws.com/centos-base:7
COPY echo_hello.sh /echo_hello.sh
RUN bash /echo_hello.sh
When I build this using the following command:
docker build -f Dockerfile .
I get the following output:
Step 1 : FROM ******.dkr.ecr.us-east-1.amazonaws.com/centos-base:7
---> 9ab68a0dd16a
Step 2 : COPY echo_hello.sh /echo_hello.sh
---> Using cache
---> e7d541f5cf53
Step 3 : RUN bash /echo_hello.sh
---> Running in 4b5518faab28
hello world
hello world
.......
But, when I then begin to run it using the following command:
docker run -it d2cc33b16e8f
The hello world output doesn't appear; instead it shows me an error about the command to run to start the application.
Where am I going wrong in this?

You will need to change your Dockerfile to the following:
FROM *******.dkr.ecr.us-east-1.amazonaws.com/centos-base:7
COPY echo_hello.sh /echo_hello.sh
RUN chmod u+x /echo_hello.sh
CMD /echo_hello.sh
This page explains the Dockerfile instructions pretty well:
http://goinbigdata.com/docker-run-vs-cmd-vs-entrypoint/

RUN commands are executed at build time (when you do a docker build). What you need is CMD or ENTRYPOINT. Either of these sets the executable that is run when the container starts (via docker run).
Your Dockerfile should look similar to what @Rezney has in his answer.
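To see the difference in practice, a quick sketch using the corrected Dockerfile above (the image tag echo-hello is just an illustrative name):
docker build -t echo-hello .    # RUN chmod u+x /echo_hello.sh executes now, while the image is built
docker run --rm echo-hello      # CMD /echo_hello.sh executes now, printing "hello world" when the container starts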

Related

How to run SQL scripts against a MariaDB docker container

I am trying to create a MariaDB instance in Docker and then run all the files in a directory against it. I know my script works when I execute it manually after the Dockerfile has run, but when I put the script into the Dockerfile itself, the build reports a 127 error. I have tried putting the call to mysqld inside the script, but that has not fixed the issue.
Dockerfile
FROM mariadb:10.4.11-bionic
ENV MYSQL_ROOT_PASSWORD test
ENV MYSQL_DATABASE mydatabase
COPY . /usr/src
WORKDIR /usr/src
RUN script_runner.sh test
EXPOSE 3306
CMD ["mysqld"]
script_runner.sh
files=`ls start_script | grep ^'do'`
for script in $files
do
mysql -u root --password=$1 < `pwd`/start_script/$script
done
docker-compose.yml
...
mariadb:
  build:
    context: ./mariaDB
  restart: always
  ports:
    - "3306:3306"
  volumes:
    - "/var/lib/mysql:/var/lib/mysql"
    - "/srv/docker/sockets/mariadb.container.sock:/var/run/mysqld/mysqld"
file system
-repo/
--docker-compose.yml
--mariadb/
---Dockerfile
---script_runner.sh
---start_script/
----do-release.sql
error
Building mariadb
Step 1/8 : FROM mariadb:10.4.11-bionic
---> bc20d5f8d0fe
Step 2/8 : ENV MYSQL_ROOT_PASSWORD test
---> Running in 5987d662632b
Removing intermediate container 5987d662632b
---> e40256430e39
Step 3/8 : ENV MYSQL_DATABASE mydatabase
---> Running in a865ef21cdcc
Removing intermediate container a865ef21cdcc
---> dc5997996fef
Step 4/8 : COPY . /usr/src
---> 5314d67545bb
Step 5/8 : WORKDIR /usr/src
---> Running in 4643fe58e44e
Removing intermediate container 4643fe58e44e
---> 88e7901d501a
Step 6/8 : RUN script_runner.sh test
---> Running in 502ab4fddbb8
/bin/sh: 1: script_runner.sh: not found
ERROR: Service 'mariadb' failed to build: The command '/bin/sh -c script_runner.sh test' returned a non-zero code: 127
Your script_runner.sh file needs executable permissions, and because the build's working directory is not on the PATH it also has to be invoked with an explicit path, otherwise the shell reports it as not found (exit code 127). Set the executable bit either in your local folder (chmod 7xx) before building, or inside the Dockerfile:
COPY . /usr/src
WORKDIR /usr/src
RUN chmod 711 script_runner.sh
RUN ./script_runner.sh test
I agree with the comment that says you need to make the script_runner.sh script executable and that would likely be the cause of the 127 return code. However I think you'll find it still won't work even after making it executable.
At the time RUN script_runner.sh test gets executed, the MySQL server won't actually be running yet inside the build container. So even once the script is executable, I think you'll see that it returns a "cannot connect to mysql sock" type error; you'd have to start mysqld as part of your Docker build. Also, remember that docker build executes each RUN statement by itself and commits a layer of the disk changes after that command finishes. If you simply issue a RUN mysqld (or similar), either the mysqld process will start and block forever (assuming it doesn't daemonize), or, if you tell it to daemonize, it will start, and as soon as it moves to the background Docker will commit the layer and then execute your script_runner.sh in a new container where mysqld is no longer running.
If you want to do this you have two options:
Combine the two into a single RUN statement, such as RUN mysqld_safe && script_runner.sh (sketched below)
Have your script_runner.sh start mysqld before it tries to execute the mysql client.
Either way should work and would (I think) end up with the same single layer of disk changes. The Docker documentation touches on this in its best-practices recommendations (although there the goal is to minimize the number of layers for performance reasons):
https://docs.docker.com/develop/develop-images/dockerfile_best-practices/#minimize-the-number-of-layers
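For option 1, note that mysqld_safe never exits on its own, so in practice the server has to be started in the background and polled before the scripts run, all inside one RUN instruction. A rough sketch only; it assumes the mariadb base image from the question (which ships the mysqladmin client) and glosses over data-directory initialization, which the official image normally performs at container start:
RUN chmod +x script_runner.sh
# start the server in the background, wait until it answers, load the scripts, then shut it down
RUN mysqld_safe --skip-networking & \
    until mysqladmin ping --silent; do sleep 1; done; \
    ./script_runner.sh test && mysqladmin shutdown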

The docker image build is stuck

I need to build a Windows docker image for a Windows app, on Windows 10 with Docker Desktop installed on a VM, but the build process gets stuck while running the installer of this app. The log looks something like the below:
PS C:\docker> docker image build . -t app
Sending build context to Docker daemon 144.1MB
Step 1/6 : FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
---> 9b87edf03093
Step 2/6 : COPY . /app
---> ac4b1124d856
Step 3/6 : WORKDIR /app
---> Running in cf4bd2345d26
Removing intermediate container cf4bd2345d26
---> d4f28097afd9
Step 4/6 : RUN .\ELSA1.0_2.6.6.243.exe
---> Running in b9356f975aa6
(And the build is stuck here for several hours until I terminate it)
The Dockerfile is:
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8 # this is a base image because asp.net 3.5 is a prerequisite for the app
COPY . /app
WORKDIR /app
RUN .\ELSA1.0_2.6.6.243.exe # I am stuck here!
I have tried to do something in the base image like this, but it seems that I cannot do anything:
PS C:\docker> docker container run -it mcr.microsoft.com/dotnet/framework/aspnet:4.8
Service 'w3svc' has been stopped
Service 'w3svc' started
Are there any good ideas for debugging this issue? By the way, the installer works normally on Windows 10.
You should never execute a command in a RUN statement that does not terminate. In your build log you start an .exe installer in a RUN instruction; if that installer never exits (for example because it waits for interactive input), it will keep your docker build process stuck. The same thing would happen if you executed RUN npm start; it would hang the build process.
Put your executable in ENTRYPOINT or CMD instead.
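For example, if the .exe is meant to be the long-running process of the container rather than a one-shot installer, a sketch along these lines keeps it out of the build phase entirely (the C:\app path is an assumption based on the COPY and WORKDIR in the question):
FROM mcr.microsoft.com/dotnet/framework/aspnet:4.8
COPY . /app
WORKDIR /app
# runs when the container starts, not at build time; note the escaped backslashes required by exec form
CMD ["C:\\app\\ELSA1.0_2.6.6.243.exe"]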
Another thing that can be an issue in such cases:
Considerations for using CMD with Windows
On Windows, file paths specified in the CMD instruction must use
forward slashes or have escaped backslashes \. The following are
valid CMD instructions:
# exec form
CMD ["c:\\Apache24\\bin\\httpd.exe", "-w"]
# shell form
CMD c:\\Apache24\\bin\\httpd.exe -w
You can read further about CMD on Windows here.
However, the following format without the proper slashes will not work:
CMD c:\Apache24\bin\httpd.exe -w

Cannot write into ~/.m2 in docker maven container

Why aren't files written to /root/.m2 in the maven:3 docker image persistent during the build?
A simple dockerfile:
FROM maven:3-jdk-8
RUN touch /root/.m2/testfilehere && ls -a /root/.m2
RUN ls -a /root/.m2
CMD ["bash"]
Produces the following output.
$ docker build -t test --no-cache .
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM maven:3-jdk-8
---> 42e3884987fb
Step 2 : RUN touch /root/.m2/testfilehere && ls -a /root/.m2
---> Running in 1c1dc5e9f082
.
..
testfilehere
---> 3da352119c4d
Removing intermediate container 1c1dc5e9f082
Step 3 : RUN ls -a /root/.m2
---> Running in df506db8c1dd
.
..
---> d07cc155b20e
Removing intermediate container df506db8c1dd
Step 4 : RUN stat /root/.m2/testfilehere
---> Running in af44f30aafe5
stat: cannot stat ‘/root/.m2/testfilehere’: No such file or directory
The command '/bin/sh -c stat /root/.m2/testfilehere' returned a non-zero code: 1
The file created by the first command is gone when the intermediate container exits.
Also, this does not happen with the ubuntu image, just maven.
edit: using ADD hostfile /root/.m2/containerfile does work as a workaround, but is not what I want.
The maven docker image has an entrypoint:
ENTRYPOINT ["/usr/local/bin/mvn-entrypoint.sh"]
When the container starts, the entrypoint copies files from /usr/share/maven/ref into ${MAVEN_CONFIG} and erases your file.
You can see the script executed on startup by following this link:
https://github.com/carlossg/docker-maven/blob/33eeccbb0ce15440f5ccebcd87040c6be2bf9e91/jdk-8/mvn-entrypoint.sh
That's because /root/.m2 is defined as a VOLUME in the image. When a container runs with a volume, the volume storage is not part of the UnionFS - so its data is not stored in the container's writable layer:
A data volume is a specially-designated directory within one or more containers that bypasses the Union File System.
The first RUN command creates a file in the volume, but that's in an intermediary container with its own volume. The file isn't saved to the image layer because it's in a volume.
The second RUN command is running in a new intermediary container which has its own volume. There's no content in the volume from the base image, so the volume is empty in the new container.
If you want to pre-populate the volume with data, you need to do it in the Dockerfile as you've seen.
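You can confirm the volume declaration on the image itself; a quick check using the image from the question:
docker image inspect --format '{{json .Config.Volumes}}' maven:3-jdk-8
# lists the paths declared as volumes, including /root/.m2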
There is documentation for this in the docker-maven repository:
https://github.com/carlossg/docker-maven#packaging-a-local-repository-with-the-image
COPY settings.xml /usr/share/maven/ref/
When you run the Docker image, settings.xml will appear in /root/.m2.
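The same mechanism works for whole directories, so you can bake dependencies into the image as well; a sketch (the repository/ directory next to the Dockerfile is an assumed layout):
FROM maven:3-jdk-8
# everything under /usr/share/maven/ref is copied into ${MAVEN_CONFIG} (/root/.m2) by the entrypoint at container start
COPY settings.xml /usr/share/maven/ref/
COPY repository/ /usr/share/maven/ref/repository/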

Docker Hello World - oci runtime error

I am trying to understand Docker, and I have a very simple Dockerfile at ~/dockerfiles/test on my OS X machine.
FROM scratch
RUN echo "Hello world" > ~/helloworld.txt
CMD ["cat", "~/helloworld.txt"]
When I try to build an image for this file like
docker build -t simple .
I get an error during the build process.
Error Output
Sending build context to Docker daemon 2.048 kB
Step 1 : FROM scratch
--->
Step 2 : RUN echo "Hello world" > ~/helloworld.txt
---> Running in fc772fd39d45
oci runtime error: exec: "/bin/sh": stat /bin/sh: no such file or directory
Any pointers on why I am facing this issue?
You start FROM scratch (the empty image), so there is no shell and no cat binary in the filesystem at all.
The shell form of RUN executes its command through /bin/sh -c, and /bin/sh does not exist in a scratch image, which is exactly what the error says; a dynamically linked cat would also be missing the libraries it needs.
Note: the doc "Creating a simple base image using scratch" shows that a scratch image contains nothing at all, not even a shell.
As BMitch comments below:
The quick fix to the problem is changing FROM to something like FROM debian:latest, or even FROM busybox:latest if size matters
The image commonly used for that is alpine:
FROM alpine:3.4
The image is only 5 MB and has access to a package repository that is much more complete than those of other BusyBox-based images.
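With that change, a version of the original Dockerfile that builds and runs could look like this (a sketch; note that ~ is not expanded in the exec form of CMD, so an absolute path is used instead):
FROM alpine:3.4
RUN echo "Hello world" > /helloworld.txt
CMD ["cat", "/helloworld.txt"]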

How to override the CMD command in the docker run line

Based on the Docker documentation:
https://docs.docker.com/reference/builder/#cmd
you should be able to override the CMD command.
Dockerfile:
RUN chmod +x /srv/www/bin/* & chmod -R 755 /srv/www/app
RUN pip3 install -r /srv/www/app/pip-requirements.txt
EXPOSE 80
CMD ["/srv/www/bin/gunicorn.sh"]
the docker run command is:
docker run --name test test/test-backend
I tried
docker run --name test test --cmd ["/srv/www/bin/gunicorn.sh"]
docker run --name test test cmd ["/srv/www/bin/gunicorn.sh"]
But the console shows this error:
System error: exec: "cmd": executable file not found in $PATH
The right way to do it is to drop the cmd ["..."] part and just append the command:
docker run --name test test/test-backend /srv/www/bin/gunicorn.sh
The Dockerfile uses the CMD instruction, which provides defaults for the executing container.
The line below will execute the script /srv/www/bin/gunicorn.sh because it is already provided as the default value in the CMD instruction of your Dockerfile; internally it is executed as /bin/sh -c /srv/www/bin/gunicorn.sh at runtime.
docker run --name test test/test-backend
Now, if you want to run something else, just add it at the end of the docker run command. The line below runs bash instead:
docker run --name test test/test-backend /bin/bash
Ref: Dockerfile Best Practices
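If you are unsure what default an image was built with, you can inspect it before overriding; a quick check using the image name from the question:
docker inspect --format '{{json .Config.Cmd}}' test/test-backend
# prints the default CMD baked into the image, here the gunicorn.sh script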
For those using docker-compose:
docker-compose run [your-service-name-here-from-docker-compose.yml] /srv/www/bin/gunicorn.sh
In my case I'm reusing the same docker service to run the development and production builds of my react application:
docker-compose run my-react-app npm run build
This will run my webpack.config.production.js and build the application into dist. But yes, to answer the original question, you can override the CMD on the CLI.
Try this in your Dockerfile:
CMD ["sh", "-c", "command && bash"]
