Docker container - "Docker run" append bash command - bash

I'm executing the following command:
sudo docker run IMAGE bash ~/commands.sh
where IMAGE is my Docker image and commands.sh is a script inside the container.
When starting the container with "docker run", I want to execute the script, but it doesn't work. I get the following exit status:
Exited (127) Less than a second ago
This exit status means the command was not found.
Can you tell me where my mistake is?

I would assume that your local bash (running on the host system) expands the ~ to your host home directory before the argument ever reaches the container, so bash inside the container is handed a path that doesn't exist there. Have you tried using an absolute path here?
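For example, if the script sits in root's home directory inside the image (that path is an assumption; adjust it to wherever the image actually stores commands.sh):
sudo docker run IMAGE bash /root/commands.sh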

Related

"Error: No such container:path:" in shell script only

I am trying to copy a folder out of a container using docker cp, but I am running into an unexpected issue: the command works perfectly on its own yet fails when run from a shell script.
For example: copy_indices.sh
for x in "${find_container_id_arr[#]}"; do
CONTAINER_NAME="${x}"
CONTAINER_ID=$(docker ps -aqf "name=${x}")
idx_name=$(docker exec -it "$CONTAINER_ID" ls -1 /usr/share/elasticsearch/data/nodes/0/indices)
docker cp "$CONTAINER_ID":/usr/share/elasticsearch/data/nodes/0/indices/"$idx_name" "$ALL_INDICES"/"$idx_name"
done
I determine the container ID using CONTAINER_ID=$(docker ps -aqf "name=${x}"), find the name of folder I need using idx_name=$(docker exec -it "$CONTAINER_ID" ls -1 /usr/share/elasticsearch/data/nodes/0/indices) and then copy it on the host filesystem: docker cp "$CONTAINER_ID":/usr/share/elasticsearch/data/nodes/0/indices/"$idx_name" "$ALL_INDICES"/"$idx_name"
My issue is that every command evaluates and runs as expected when not put inside this script. I can run the command docker cp <my_container>:/usr/share/elasticsearch/data/nodes/0/indices/<index_name> ./all_indices/<index_name> and the target folder is indeed found and copied onto the host.
Once these commands are inside a script, however, I get an "Error: No such container:path:" error, and I can't pinpoint what is going wrong: the mentioned path does exist in the container, and the container is the correct one, as I verified by running the "final" command (the docker cp one) by hand.
What could be the reason these commands suddenly stop working when put in a shell script?
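A hedged guess, going only by the flags shown above: docker exec -t allocates a pseudo-TTY, and a TTY turns the captured output into CRLF-terminated lines, so $idx_name silently ends in a carriage return and the container path built from it no longer matches anything. A minimal sketch of the loop without that pitfall (same paths and variables as above):
for x in "${find_container_id_arr[@]}"; do
    CONTAINER_ID=$(docker ps -aqf "name=${x}")
    # no -t: plain pipe output, so no carriage returns land in the substitution
    idx_name=$(docker exec "$CONTAINER_ID" ls -1 /usr/share/elasticsearch/data/nodes/0/indices | tr -d '\r')
    docker cp "$CONTAINER_ID":/usr/share/elasticsearch/data/nodes/0/indices/"$idx_name" "$ALL_INDICES"/"$idx_name"
done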

Problem in executing a shell script present on host using docker exec

I'm trying to execute a script on the master node of an AWS EMR cluster. The intention is to create a new conda env and link it to Jupyter. I'm following this doc from AWS. The problem is that, whatever the content of the script, I get the same error: bash: /home/hadoop/scripts/bootstrap.sh: No such file or directory when executing sudo docker exec jupyterhub bash /home/hadoop/scripts/bootstrap.sh. I've made sure the .sh file is in the correct location on the host.
But if I copy the bootstrap.sh file into the container and then run the same docker exec command, it works fine. What am I missing here? I've tried a trivial script with the following contents, and it throws the same error:
#!/bin/bash
echo "Hello"
The doc clearly says:
Kernels are installed within the Docker container. The easiest way to accomplish this is to create a bash script with installation commands, save it to the master node, and then use the sudo docker exec jupyterhub script_name command to run the script within the jupyterhub container.
The docker exec command runs a command within the container's namespaces. One of those namespaces is the filesystem. So unless the command is part of the image, written into the container directly, or you have mounted a host volume to map a host directory into the container, you won't be able to execute it. A host volume could look like:
docker run -v /host/scripts:/container/scripts --name your_container $your_image
docker exec -it your_container /container/scripts/test.sh
That host volume could be the same path on both the host and the container.
If it is a shell script, you could use I/O redirection, e.g.:
docker exec -i $container_id /bin/bash <local_script.sh
but be aware that you cannot do interactive stuff this way since the script content has replaced your terminal as stdin. This works because the shell inside the container is just processing commands from stdin.
Other than those scenarios, I don't know what to tell you other than the documentation from AWS appears to be wrong.
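One more option, consistent with what the asker already observed: copy the script into the container first, then exec it there. A short sketch using the names from the question (the /tmp destination is an arbitrary choice):
docker cp /home/hadoop/scripts/bootstrap.sh jupyterhub:/tmp/bootstrap.sh
docker exec jupyterhub bash /tmp/bootstrap.sh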

Running shell script using Docker image

Input:
- There is Windows machine with Docker Toolbox installed.
- There is a shell script file baz.sh which calls py2dsc-deb.
Problem: py2dsc-deb is not available on Windows.
If I understand correctly, I can pull some Linux distro image from the Docker repository, create a container from it, and then execute the shell script file; it will run py2dsc-deb and do its job.
I have pulled:
debian    stretch-slim    3ad21    3 weeks ago    55.3MB
Now:
How do I run my script using debian, something like docker exec mycontainer /path/to/test.sh?
Running docker run --rm debian:stretch-slim does nothing. Isn't it supposed to run the Debian distro at the docker-machine IP?
I have tried to keep the container up using docker run -it debian:stretch-slim /bin/bash, then run the script using docker exec 1ef5b ./build.sh, but I'm getting:
$ docker exec 745 ./build.sh
rpc error: code = 2 desc = oci runtime error: exec failed: container_linux.go:247: starting container process caused "exec: \"./build.sh\": stat ./build.sh: no such file or directory"
Does this mean I can't run an external script and always have to pass it into the Docker container first?
You can execute a bash command inside your container by typing
docker exec -ti -u <username> <container_name> bash -c "cd /path/to/ && ./test.sh"
Let's say your container name is test_buildbox, you are root, and your script lives at /bin/test.sh. You can call the script by typing
docker exec -ti -u root test_buildbox bash -c "cd /bin/ && ./test.sh"
Please also check that your .sh scripts have the correct line endings (<LF>) if you built the Docker image on Windows.
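As for getting the script into the container in the first place: since the script lives on the Windows host, a hedged sketch is to bind-mount the directory containing it and run it in one shot (the /work mount point is an arbitrary name, and the image still needs py2dsc-deb installed for the script to succeed; with Docker Toolbox the host directory must also be under a path shared with the VM, typically below C:\Users):
docker run --rm -v "$(pwd)":/work -w /work debian:stretch-slim bash ./baz.sh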

How to check if the docker image has all the files?

Is there a way to check whether a Docker image contains all the files that the Dockerfile copies over, i.e. to verify that the image is built as configured in the Dockerfile? My situation is that the image builds successfully, but when I try running it, Docker complains that it can't find some file or other and the container fails to run, so I can't exec into it.
docker inspect doesn't help, since it does not report on the files inside the image. Is there some other method?
You can run a shell based on that image:
docker run -it <image-name> bash
There you can search for files as in any shell.
If there is no bash in the image, use sh:
docker run -it <image-name> sh
If the image has an odd entrypoint, override it:
docker run -it --entrypoint sh <image-name>
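For example, once inside that shell (the filename is hypothetical; substitute whatever the container complained about):
find / -name 'missing-file.conf' 2>/dev/null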
You can also look at the image history and check whether all the required files were added at image-build time:
docker image history --no-trunc [image_name] > [file_name]
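If the container won't start at all, one more hedged option is to list the image's filesystem without running it: create a container from the image (created, not started) and export it as a tar stream. The container name temp_inspect is arbitrary:
docker create --name temp_inspect <image-name>
docker export temp_inspect | tar -tf - | grep <file-you-expect>
docker rm temp_inspect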

Cannot run script added to existing docker container

I have a container that is running with no issues. I added a bash script to complement a couple of other scripts already in the container. The Docker image copies two scripts to /usr/local/bin, and they can be run with docker exec container-name existingscript.
I added my own script to the same directory, and when running the same command I get an error that exec cannot run the script: no file or directory, script not located in $PATH. I checked the path, and sure enough, /usr/local/bin is listed. I checked permissions, and the script is 755.
I then opened an interactive shell with docker exec -it mycontainer bash and ran /usr/local/bin/myscript, and it ran with no problem.
Why can I not run the script from outside the container like I can the other two (that were included in the image)? All three have almost the same functions and do not use any special programs: one lists files, one adds files, one reads a file.
The base is Ubuntu.
EDIT: Found where I was running into the issue. Provided the answer in case anyone else happens to make the same mistake.
EDIT 2: The script that came with the Docker image to perform a couple of common functions runs against the image, not the container, so adding my script to the container had no effect on it; that is why I kept getting the no file or directory error.
The line in the script in question was:
docker run --rm -v "$(pwd)/config":/path/to/file -ti image_name:latest mynewscript "$@"
Of course that ran against the image and NOT the container.
Once I noticed that, I tried running it with exec instead of run, and it ran without error, like so:
docker exec -it container_name mynewscript
The reason may be that /usr/local/bin is not in the $PATH used where the script is invoked; you can call /usr/local/bin/myscript explicitly by its full path, or export an extended $PATH first in the script.
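For example, invoking the script by its absolute path sidesteps any $PATH lookup (container and script names taken from the question):
docker exec mycontainer /usr/local/bin/myscript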
While I was adding snippets to help explain the issue, I found the problem and the solution.
I access the scripts inside the container from the host through a wrapper script that dispatches different actions via a case statement. That wrapper ran the scripts against the Docker image, not the container, so the script I had added did not actually exist in the image being run.
I modified the script to call the container instead of the image and it works as expected.
