I have a shell script, script.sh, that writes some lines to a file:
#!/usr/bin/env bash
printf "blah
blah
blah
blah\n" | sudo tee file.txt
Now in my Dockerfile, I add this script and run it, then attempt to add the generated file.txt:
ADD script.sh .
RUN chmod 755 script.sh && ./script.sh
ADD file.txt .
When I do the above, I just get an error referring to the ADD file.txt . command:
lstat file.txt: no such file or directory
Why can't docker locate the file that my shell script generates?
Where would I be able to find it?
When you RUN chmod 755 script.sh && ./script.sh, it actually executes the script inside the Docker container (i.e., in a Docker layer).
When you ADD file.txt . you are trying to add a file from your local filesystem into the Docker container (i.e., into a new Docker layer).
You can't do that, because file.txt doesn't exist on your computer.
In fact, you already have this file inside Docker: try docker run --rm -ti mydockerimage cat file.txt and you should see its contents displayed.
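A minimal sketch of the same Dockerfile, relying on the file inside the image instead of re-adding it:
ADD script.sh .
RUN chmod 755 script.sh && ./script.sh
# file.txt now exists in the image layer created above, so later
# build steps and containers can use it directly, e.g.:
RUN cat file.txt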
It's because Docker loads the entire context of the directory (where your Dockerfile is located) to the Docker daemon at the beginning. From the Docker docs:
The build is run by the Docker daemon, not by the CLI. The first thing a build process does is send the entire context (recursively) to the daemon. In most cases, it’s best to start with an empty directory as context and keep your Dockerfile in that directory. Add only the files needed for building the Dockerfile.
Since your text file was not available in the context at the beginning, you get that error message. If you still want that text file to be added to the Docker image, you can call the docker build command from the same script file. Modify script.sh:
#!/usr/bin/env bash
printf "blah
blah
blah
blah\n" | sudo tee <docker-file-directory>/file.txt
docker build --tag yourtag <docker-file-directory>
And modify your Dockerfile to just add the generated text file:
ADD file.txt .
.. <rest of the Dockerfile instructions>
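With that change, building and verifying might look like this (yourtag as in the script above):
./script.sh
docker run --rm yourtag cat file.txt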
Related
I am a docker beginner. I have used this SO post to run a shell script with docker run and this works fine. However, what I am trying to do is to apply my shell script to a file that lives in my current working directory, where Dockerfile and script are.
My shell script - given a file as an argument, return its name and the number of lines:
#!/bin/bash
echo $1
wc -l $1
Dockerfile:
FROM ubuntu
COPY ./file.sh /
CMD /bin/bash file.sh
then build and run:
docker build -t test .
docker run -ti test /file.sh text_file
This is what I get:
text_file
wc: text_file: No such file or directory
I'm left clueless why the second line doesn't work, why the file can't be found. I don't want to copy my text_file to the container. Ideally, I'd like to run my script from docker container on any file in my current working directory.
Any help will be much appreciated.
Thanks!!
You're building your Docker image containing the script /file.sh. Still, your Docker container does not contain (or know about) the file text_file which you're passing as an argument.
In order to make it known to your Docker container, you have to mount it when running the container.
docker run --rm -it -v "$PWD"/text_file:/text_file test /file.sh /text_file
In order to check for other files, you just have to swap text_file in both the mount and the argument.
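For example, with a hypothetical other_file in the current directory:
docker run --rm -it -v "$PWD"/other_file:/other_file test /file.sh /other_file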
Notes
In addition to Docker volume mounts, I might suggest some more improvements to spice up your image.
In order to run a script, you don't have to use ubuntu as your base image. You might be fine with alpine or the even more focused bash image. And don't forget to use tags in order to enforce the exact same behavior over time.
You can set your script as the ENTRYPOINT of your Dockerfile. Then you're only specifying the file name (text_file in this case) as the command.
When mounting files, you can change the name of the file inside the container. Therefore, you can simplify your script by just mounting the file to test at the exact same place every time you run the container.
FROM alpine:3.10
WORKDIR /tmp
COPY file.sh /usr/local/bin/wordcount
ENTRYPOINT ["/usr/local/bin/wordcount"]
CMD ["file"]
(The exec form is needed so that the CMD argument is actually passed to the entrypoint; also note that alpine does not ship bash, so the script's shebang should be #!/bin/sh.)
Then,
docker run --rm -it -v "PWD"/text_file:/tmp/file test
will do the job.
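For example, counting the lines of a different host file (report.txt is a hypothetical name) only changes the mount source:
docker run --rm -it -v "$PWD"/report.txt:/tmp/file test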
I have a super dumb script
$ cat script.sh
cat <<EOT > entrypoint.sh
#!/bin/bash
echo "$#"
EOT
docker run -it --rm -v $(pwd)/entrypoint.sh:/root/entrypoint.sh --entrypoint /root/entrypoint.sh bash:4 Hello World
But when I run the script I get a strange error:
$ sh script.sh
standard_init_linux.go:207: exec user process caused "no such file or directory"
Why doesn't the script print Hello World?
standard_init_linux.go:207: exec user process caused "no such file or directory"
The above error means one of:
Your script actually doesn't exist. This isn't likely with your volume mount, but it doesn't hurt to run the container without the entrypoint: just open a shell with the same volume mount and list the file to be sure it's there. It's possible for the volume mount to fail on desktop versions of Docker where the directory isn't shared to the Docker VM, in which case you end up with empty folders created inside the container instead of your file being mounted. When checking from inside another container, also make sure you have execute permissions on the script.
If it's a script, the first line pointing to the interpreter is invalid. Make sure that command exists inside the container. E.g. alpine containers typically do not ship with bash and you need to use /bin/sh instead. This is the most common issue that I see.
If it's a script, similar to the above, make sure your first line has Linux linefeeds. A Windows linefeed adds an extra \r to the name of the command being run, which won't be found on the Linux side (a quick check is sketched after this list).
If the command is a binary, it can refer to a missing library. I often see this with "statically" compiled go binaries that didn't have CGO disabled and have links to libc appear when importing networking libraries.
If you use json formatting to run your command, I often see this error with invalid json syntax. This doesn't apply to your use case, but may be helpful to others googling this issue.
This list is pulled from a talk I gave at last year's DockerCon: https://sudo-bmitch.github.io/presentations/dc2018/faq-stackoverflow.html#59
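For the shebang and line-ending cases above, a quick check from inside the same image might look like this (a sketch; the bash:4 image provides /bin/sh via its alpine base):
docker run --rm -it -v $(pwd)/entrypoint.sh:/root/entrypoint.sh --entrypoint /bin/sh bash:4
# then, inside that shell:
head -n1 /root/entrypoint.sh      # the interpreter named here must exist in the image
od -c /root/entrypoint.sh | head  # \r before \n indicates Windows line endings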
First of all:
Request
docker run -it --rm bash:4 which bash
Output
/usr/local/bin/bash
So
#!/bin/bash
Should be changed to
#!/usr/local/bin/bash
And
docker run -it --rm -v $(pwd)/entrypoint.sh:/root/entrypoint.sh --entrypoint /root/entrypoint.sh bash:4 Hello World
Gives you
Hello World
Update
Code
cat <<EOT > entrypoint.sh
#!/bin/bash
echo "$#"
EOT
Should be fixed as
#!/usr/bin/env bash
cat <<EOT > entrypoint.sh
#!/usr/bin/env bash
echo "\$#"
EOT
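An equivalent fix is to quote the heredoc delimiter, which disables expansion inside the heredoc, so no escaping is needed:
cat <<'EOT' > entrypoint.sh
#!/usr/bin/env bash
echo "$@"
EOT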
When I execute the following command (which moves all files with the .txt and .sbreaks extension to another folder):
sudo docker exec name mv xyz/data/outputs/*.{sbreaks,txt} <>/data/spare
I get the following error:
mv: cannot stat ‘xyz/data/outputs/*.sbreaks’: No such file or directory
mv: cannot stat ‘xyz/data/outputs/*.txt’: No such file or directory
But, when I go into docker via sudo docker exec -it name bash and execute the same command: mv xyz/data/outputs/*.{sbreaks,txt} xyz/data/spare, it executes fine.
What am I doing wrong here?
PS: Both local and the Docker container are ubuntu environments
That is because the * is expanded by a shell program (i.e. bash). (Psst, this is a typical interview question.)
So pass your command to a shell and let it launch the mv for you:
sudo docker exec name bash -c 'mv xyz/data/outputs/*.{sbreaks,txt} .......'
When you do docker exec container some_program some_param, Docker searches for some_program and executes it directly without doing anything extra, passing some_param as a parameter (a star, in your case). mv expects real file names, not a literal *.
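You can see the difference directly (a sketch using your paths; it assumes the pattern matches nothing on the host, so the host shell passes it through literally):
sudo docker exec name echo xyz/data/outputs/*.txt
# prints the literal pattern: no shell inside the container expanded it
sudo docker exec name bash -c 'echo xyz/data/outputs/*.txt'
# prints the matching file names: bash inside the container expanded the glob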
I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh) which is already inside the container, from the above shell script (e.g. cd /path/to && ./test.sh).
How to do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
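Arguments can be passed the same way, for example (hypothetical arguments):
docker exec mycontainer /path/to/test.sh arg1 arg2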
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
I was searching for an answer to this same question and found that an ENTRYPOINT in the Dockerfile was the solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
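A minimal sketch of such a Dockerfile (the COPY and chmod lines are assumptions about where the scripts come from):
FROM ubuntu:bionic
COPY my-script.sh my-script2.sh /
RUN chmod +x /my-script.sh /my-script2.sh
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash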
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This should (untested) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your Docker image and source the script in your .bashrc. Don't forget that the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update below.)
I'm using this solution to be able to update the script outside of the Docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (Got rid of reopening a shell; see the update below.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue to open up a new shell for every change I now do the following:
In the Dockerfile itself I add RUN echo "/scripts/bashrc" > /root/.bashrc. Inside that bashrc I export the scripts directory to the path. The scripts directory now contains multiple files instead of one, so I can directly call all scripts without having to open a subshell on every change.
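A minimal sketch of that setup, with illustrative paths (note the leading dot so the file is sourced rather than executed):
# in the Dockerfile:
RUN echo ". /scripts/bashrc" > /root/.bashrc
# in scripts/bashrc on the host, mounted to /scripts:
export PATH="/scripts:$PATH"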
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or to be useful in a script), then you can use:
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entry points too; an ENTRYPOINT lets you combine a fixed command with CMD arguments.
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances you can do this :
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
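For example, a hypothetical mylocal.sh that uses those values:
#!/bin/bash
echo "got $# arguments: $1 $2 $3"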
I am experiencing strange behavior when building a Dockerfile (in https://github.com/Krijger/es-nagios-docker). Basically, I add a file in order to append its contents to a file in the image:
ADD es-command /tmp/
RUN cat tmp/es-command >> /opt/nagios/etc/objects/commands.cfg
The problem is that, while /tmp/es-command is present in the resulting image, the commands.cfg file was not changed.
As a prelude to the accepted answer: my Dockerfile extends cpuguy83/nagios, which defines /opt/nagios/etc as a volume.
Good to see the sample code; it reveals the root cause.
Your Docker image is based on cpuguy83/nagios, whose Dockerfile is at https://github.com/cpuguy83/docker-nagios/blob/master/Dockerfile
There you can see that the /opt/nagios/etc directory is declared as a VOLUME:
VOLUME ["/opt/nagios/var", "/opt/nagios/etc", "/opt/nagios/libexec", "/var/log/apache2", "/usr/share/snmp/mibs"]
Changes made to a volume directory during a build are not committed to the next image layer by your build.
And that is the reason you can see your changes when you enter the container but lose them when it exits.
Here is how I use it:
ls ./
configure.sh
commands.cfg
cat configure.sh
#!/bin/bash
script_path=$( cd "$( dirname "$0" )" && pwd )
cp ${script_path}/commands.cfg /opt/nagios/etc/objects/
# start the nagios container; its volumes hold the live configuration
docker run -d --name nagios cpuguy83/nagios
# mount the current directory at /tmp, attach the nagios volumes, and run
# configure.sh to copy commands.cfg into the shared volume
docker run --rm -v $(pwd):/tmp --volumes-from nagios --entrypoint /tmp/configure.sh cpuguy83/nagios