"Error: No such container:path:" in shell script only - bash

I am trying to copy a folder out of a container using docker cp, but I am running into an unexpected issue: the command works perfectly outside of a shell script yet fails when run from the script.
For example: copy_indices.sh
for x in "${find_container_id_arr[#]}"; do
CONTAINER_NAME="${x}"
CONTAINER_ID=$(docker ps -aqf "name=${x}")
idx_name=$(docker exec -it "$CONTAINER_ID" ls -1 /usr/share/elasticsearch/data/nodes/0/indices)
docker cp "$CONTAINER_ID":/usr/share/elasticsearch/data/nodes/0/indices/"$idx_name" "$ALL_INDICES"/"$idx_name"
done
I determine the container ID using CONTAINER_ID=$(docker ps -aqf "name=${x}"), find the name of the folder I need using idx_name=$(docker exec -it "$CONTAINER_ID" ls -1 /usr/share/elasticsearch/data/nodes/0/indices), and then copy it to the host filesystem: docker cp "$CONTAINER_ID":/usr/share/elasticsearch/data/nodes/0/indices/"$idx_name" "$ALL_INDICES"/"$idx_name"
My issue is that every command evaluates and runs as expected when not put inside this script. I can run docker cp <my_container>:/usr/share/elasticsearch/data/nodes/0/indices/<index_name> ./all_indices/<index_name> by hand and the target folder is indeed found and copied onto the host.
Once these commands are inside the script, however, I get an "Error: No such container:path:" error and I can't pinpoint what is going wrong: the mentioned path does exist in the container, and the container is correct, as I tested the "final" command that is supposed to be executed (the docker cp one) on its own.
What could be the reason these commands suddenly stop working when put in a shell script?
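For reference, a hedged sketch of the same loop with one likely culprit worked around: command substitution over docker exec -t captures TTY output, which ends in a carriage return, so "$idx_name" can silently become "indexname\r" and produce exactly this kind of "No such container:path" error. The variable names are the question's own; this is an illustration, not a confirmed diagnosis.
for x in "${find_container_id_arr[@]}"; do
    CONTAINER_ID=$(docker ps -aqf "name=${x}")
    # no -t here: a pseudo-TTY would make the captured listing end in \r
    idx_name=$(docker exec "$CONTAINER_ID" ls -1 /usr/share/elasticsearch/data/nodes/0/indices)
    idx_name=${idx_name//$'\r'/}   # strip any stray carriage returns, just in case
    docker cp "$CONTAINER_ID":/usr/share/elasticsearch/data/nodes/0/indices/"$idx_name" "$ALL_INDICES"/"$idx_name"
done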

Related

Run a shell script with arguments on any given file with docker run

I am a docker beginner. I have used this SO post to run a shell script with docker run and this works fine. However, what I am trying to do is to apply my shell script to a file that lives in my current working directory, where Dockerfile and script are.
My shell script - given a file as an argument, return its name and the number of lines:
#!/bin/bash
echo $1
wc -l $1
Dockerfile:
FROM ubuntu
COPY ./file.sh /
CMD /bin/bash file.sh
then build and run:
docker build -t test .
docker run -ti test /file.sh text_file
This is what I get:
text_file
wc: text_file: No such file or directory
I'm left clueless as to why the second line doesn't work and why the file can't be found. I don't want to copy my text_file to the container. Ideally, I'd like to run my script from the docker container on any file in my current working directory.
Any help will be much appreciated.
Thanks!!
You're building your Docker image containing the script /file.sh. Still, your Docker container does not contain (or know about) the file text_file which you're passing as an argument.
In order to make it known to your Docker container, you have to mount it when running the container.
docker run --rm -it -v "$PWD"/text_file:/text_file test /file.sh /text_file
In order to check for other files, you just have to swap text_file in both the mount and the argument.
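For example, to count lines in another file from your current directory (other_file is just a placeholder name):
docker run --rm -it -v "$PWD"/other_file:/other_file test /file.sh /other_file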
Notes
In addition to Docker volume mounts, I might suggest some more improvements to spice up your image.
In order to run a script, you don't have to use ubuntu as your base image. You might be fine with alpine, or the even more focused bash image. And don't forget to use tags in order to enforce the exact same behavior over time.
You can set your script as the ENTRYPOINT of your Dockerfile. Then you only specify the file to process (text_file in that case) as your command.
When mounting files, you can change the name of the file in your container. Therefore, you can simplify your script and just mount the file to process at the exact same path every time you run the container.
FROM alpine:3.10
WORKDIR /tmp
COPY file.sh /usr/local/bin/wordcount
ENTRYPOINT ["/usr/local/bin/wordcount"]
CMD ["file"]
Then,
docker run --rm -it -v "PWD"/text_file:/tmp/file test
will do the job.
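The script itself can then shrink accordingly. A sketch, assuming the Dockerfile above: the file is always mounted at /tmp/file (the WORKDIR), and CMD passes that name as $1.
#!/bin/sh
# file.sh, simplified: the file to count always lives at ./file in the
# WORKDIR (/tmp); CMD supplies the name "file" as $1 by default
echo "${1:-file}"
wc -l "${1:-file}"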

Unable to get any Docker Entrypoint from script working without continuous restarts

I'm having trouble understanding or seeing some working version of using a bash script as an Entrypoint for a Docker container. I've been trying numerous things for about 5 hours now.
Even following this official Docker blog post, using a bash script as an entrypoint still doesn't work.
Dockerfile
FROM debian:stretch
COPY docker-entrypoint.sh /usr/local/bin/
RUN ln -s /usr/local/bin/docker-entrypoint.sh / # backwards compat
ENTRYPOINT ["docker-entrypoint.sh"]
CMD ["postgres"]
docker-entrypoint.sh
#!/bin/bash
set -e

if [ "$1" = 'postgres' ]; then
    chown -R postgres "$PGDATA"

    if [ -z "$(ls -A "$PGDATA")" ]; then
        gosu postgres initdb
    fi

    exec gosu postgres "$@"
fi

exec "$@"
build.sh
docker build -t test .
run.sh
docker service create \
--name test \
test
Despite many efforts, I can't seem to get Dockerfile using an Entrypoint as a bash script that doesn't continuously restart and fail repeatedly.
My understanding is that exec "$@" was supposed to keep the container from immediately exiting, but I'm not sure if that's dependent on some other process within the script failing.
I've tried using a docker-entrypoint.sh script that simply looked like this:
#!/bin/bash
exec "$#"
And since that also failed, I think it rules out something going wrong inside the script as the cause of the failure.
What's also frustrating is there are no logs, either from docker service logs test or docker logs [container_id], and I can't seem to find anything useful in docker inspect [container_id].
I'm having trouble understanding everyone's confidence in exec "$@". I don't want to resort to using something like tail -f /dev/null or passing a command at docker run. I was hoping there would be some consistent, reliable way that a docker-entrypoint.sh script could be used to start services, one that I could also use with docker run for other things, but even with Docker's official blog and countless questions here and blog posts on other sites, I can't seem to get a single example to work.
I would really appreciate some insight into what I'm missing here.
$@ is just a string of the command line arguments. You are providing none, so it is executing a null string. That exits and will kill the docker container. However, the exec command will always exit the running script - it destroys the current shell and starts a new one; it doesn't keep it running.
What I think you want to do is keep calling this script in kind of a recursive way. To actually have the script call itself, the line would be:
exec $0
$0 is the name of the bash file (or function name, if in a function). In this case it would be the name of your script.
Also, I am curious about your desire not to use tail -f /dev/null. Creating a new shell over and over as fast as the script can go is not more performant. I am guessing you want this script to run over and over just to check your if condition.
In that case, a while(1) loop would probably work.
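For what it's worth, a sketch of the kind of loop that suggestion has in mind (purely illustrative; the next answer explains the standard exec "$@" pattern that makes it unnecessary):
#!/bin/bash
# illustrative only: keep the container's main process alive by looping
while true; do
    sleep 60   # re-check whatever condition matters, then sleep again
done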
What you show, in principle, should work, and is one of the standard Docker patterns.
The interaction between ENTRYPOINT and CMD is pretty straightforward. If you provide both, then the main container process is whatever ENTRYPOINT (or docker run --entrypoint) specifies, and it is passed CMD (or the command at the end of docker run) as arguments. In this context, ending an entrypoint script with exec "$@" just means "replace me with the CMD as the main container process".
So, the pattern here is
Do some initial setup, like chowning a possibly-external data directory; then
exec "$#" to run whatever was passed as the command.
In your example there are a couple of things worth checking; it won't run as shown.
Whatever you provide as the ENTRYPOINT needs to obey the usual rules for executable commands: if it's a bare command, it must be in $PATH; it must have the executable bit set in its file permissions; if it's a script, its interpreter must also exist in the image; if it's a binary, it must be statically linked or all of its shared library dependencies must already be in the image. For your script, you might need to make it executable if it isn't already:
RUN chmod +x /usr/local/bin/docker-entrypoint.sh
The other thing with this setup is that (definitionally) if the ENTRYPOINT exits, the whole container exits, and the Bourne shell set -e directive tells the script to exit on any error. In the artifacts in the question, gosu isn't a standard part of the debian base image, so your entrypoint will fail (and your container will exit) trying to run that command. (That won't affect the very simple case though.)
Finally, if you run into trouble running a container under an orchestration system like Docker Swarm or Kubernetes, one of your first steps should be to run the same container, locally, in the foreground: use docker run without the -d option and see what it prints out. For example:
% docker build .
% docker run --rm c5fb7da1c7c1
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"docker-entrypoint.sh\": executable file not found in $PATH": unknown.
ERRO[0000] error waiting for container: context canceled
% chmod +x docker-entrypoint.sh
% docker build .
% docker run --rm f5a239f2758d
/usr/local/bin/docker-entrypoint.sh: line 3: exec: postgres: not found
(Using the Dockerfile and short docker-entrypoint.sh from the question, and using the final image ID from docker build . in those docker run commands.)

Problem in executing a shell script present on host using docker exec

I'm trying to execute a script on the master node of an AWS EMR cluster. The intention is to create a new conda env and link it to jupyter. I'm following this doc from AWS. The problem is that, whatever the content of the script, I get the same error: bash: /home/hadoop/scripts/bootstrap.sh: No such file or directory while executing sudo docker exec jupyterhub bash /home/hadoop/scripts/bootstrap.sh. I've made sure the sh file is in the correct location.
But if I copy the bootstrap.sh file into the container and then run the same docker exec command, it works fine. What am I missing here? I've tried with a simple script containing only the following, but it throws the same error:
#!/bin/bash
echo "Hello"
The doc clearly says:
Kernels are installed within the Docker container. The easiest way to
accomplish this is to create a bash script with installation commands,
save it to the master node, and then use the sudo docker exec
jupyterhub script_name command to run the script within the jupyterhub
container.
The docker exec command runs a command within the container's namespaces. One of those namespaces is the filesystem. So unless the command is part of the image, written into the container directly, or you have mounted a host volume to map a host directory into the container, you won't be able to execute it. A host volume could look like:
docker run -v /host/scripts:/container/scripts --name your_container $your_image
docker exec -it your_container /container/scripts/test.sh
That host volume could be the same path on both the host and the container.
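Applied to the question's paths, and assuming you control how the jupyterhub container is started, the same-path approach would look something like:
docker run -v /home/hadoop/scripts:/home/hadoop/scripts --name jupyterhub $your_image
sudo docker exec jupyterhub bash /home/hadoop/scripts/bootstrap.sh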
If it is a shell script, you could use I/O redirection, e.g.:
docker exec -i $container_id /bin/bash <local_script.sh
but be aware that you cannot do interactive stuff this way since the script content has replaced your terminal as stdin. This works because the shell inside the container is just processing commands from stdin.
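For the question's bootstrap script, that would be (jupyterhub being the container name used in the question):
sudo docker exec -i jupyterhub /bin/bash < /home/hadoop/scripts/bootstrap.sh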
Other than those scenarios, I don't know what to tell you other than the documentation from AWS appears to be wrong.

Cannot run script added to existing docker container

I have a container that is running with no issues. I added a bash script to complement a couple of other scripts already in the container. The docker image copies 2 scripts to /usr/local/bin and they can be run with docker exec container-name existingscript.
I added my own script to the same directory, and when running the same command I get an error that exec cannot run the script: no file or directory, script not located in $PATH. I checked the path and, sure enough, /usr/local/bin is listed. I checked permissions and the script is 755.
I then open an interactive shell with docker exec -it mycontainer bash and run /usr/local/bin/myscript and it runs with no problem.
Why can I not run the script from outside the container like I can the other two (that were included in the image)? All three have almost the same functions and do not use any special programs: one lists files, one adds files, one reads the file.
The base is Ubuntu.
EDIT: Found where I was running into the issue. Provided the answer in case anyone else happens to make the same mistake.
EDIT-2: So the script that came with the docker image to perform a couple of common functions calls the image, not the container, so adding my scripts to the container had no effect on it, which is why I kept getting the no file or directory error.
The line in the script in question was:
docker run --rm -v "$(pwd)/config":/path/to/file -ti image_name:latest" mynewscript $#
Of course that ran against the image and NOT the container.
Once I noticed that I tried running it with exec instead of run and it ran without error, like so:
docker exec -it container_name mynewscript
The reason is "/usr/local/bin" not in your script's $PATH, you can use /usr/local/bin/myscript explicitly in your script. Or export $PATH first in the script.
While I was adding snippets to help explain the issue I found the problem and the solution.
So I access the scripts inside the container from the host with another script that does different things based on a switch case. The scripts are called against the docker image and not the container, so the script I added does not actually exist in the image.
I modified the script to call the container instead of the image and it works as expected.

Running a script inside a docker container using shell script

I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh) which is already inside the container from the above shell script (e.g. cd /path/to && ./test.sh).
How to do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
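Applied to the question's own example, that becomes:
docker exec mycontainer /bin/sh -c "cd /path/to && ./test.sh"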
I was searching for an answer to this same question and found that an ENTRYPOINT in the Dockerfile was the solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This will (didn't test) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your docker container and source the script in your .bashrc. Don't forget the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice.)
I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image if changes occur, I just open a new shell. (Got rid of reopening a shell - see the update notice)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
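If you go the .bashrc route described above, you could source a mounted script from inside the container; myfunctions.sh is a hypothetical file name here.
echo 'source /scripts/myfunctions.sh' >> /root/.bashrc   # run once inside the container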
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue to open up a new shell for every change I now do the following:
In the dockerfile itself I add RUN echo "source /scripts/bashrc" > /root/.bashrc. Inside that bashrc I export the scripts directory to the path. The scripts directory now contains multiple files instead of one, so now I can directly call all scripts without having to open a sub shell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or for use in a script), then you can use
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entry points too. You will be able to use multiple CMD
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances you can do this :
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
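For completeness, a sketch of what such a local script might look like; mylocal.sh and the argument names come from the line above, and bash -s reads the script from stdin while arg1, arg2, arg3 become the positional parameters.
#!/bin/bash
# mylocal.sh -- piped into the container's bash via -s
echo "running inside container: $(hostname)"
echo "first argument:  $1"
echo "second argument: $2"
echo "third argument:  $3"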
