Running a bash script from alpine based docker - bash

I have a Dockerfile containing:
FROM alpine
COPY script.sh /script.sh
CMD ["./script.sh"]
and a script.sh (with executable permission):
#!/bin/bash
echo "hello world from script file"
when I run
docker run --name testing fff0e5c81ca0
where fff0e5c81ca0 is the id after building, I get an error
standard_init_linux.go:195: exec user process caused "no such file or directory"
So how can I solve it?

To run a bash script in an alpine based image, you need to do one of the following:
Install bash:
RUN apk add --update bash
Use #!/bin/sh in the script instead of #!/bin/bash
Doing either one of these two is enough (you can also do both).
Or, as @Maroun suggested in a comment, you can change your CMD to execute your bash script:
CMD ["sh", "./script.sh"]

Your Dockerfile may look like this:
FROM openjdk:8u171-jre-alpine3.8
COPY script.sh /script.sh
CMD ["sh", "./script.sh"]

Related

Run a simple shell script before running CMD command in Dockerfile

I have a Dockerfile and the last command is
CMD ["/opt/startup.sh"]
Now I have another shell script, replacevariables.sh, and I want to execute the following command in my Dockerfile:
sh replacevariables.sh ${app_dir} dev
How can I execute this command? It is a simple script that basically replaces some characters of the files in ${app_dir}. What can the solution be, given that all the documentation I see suggests running only one sh script?
You can use a Docker ENTRYPOINT to support this. Consider the following Dockerfile fragment:
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh replacevariables.sh
ENTRYPOINT ["./entrypoint.sh"]
# Same as above
CMD ["/opt/startup.sh"]
The ENTRYPOINT becomes the main container process, and it is passed the CMD as arguments. So your entrypoint can do the first-time setup and then run the special shell command exec "$@" to replace itself with the command it was given.
#!/bin/sh
./replacevariables.sh "${app_dir}" dev
exec "$#"
Even if you're launching some alternate command in your container (docker run --rm -it yourimage bash to get a debugging shell, for example), this will only replace the "command" part, so bash becomes the "$@" in the script, and you still do the first-time setup before launching the shell.
The important caveats are that ENTRYPOINT must use the JSON-array form (CMD can be a bare string that gets wrapped in /bin/sh -c, but that breaks this setup for ENTRYPOINT) and that you only get one ENTRYPOINT. If you already have an ENTRYPOINT (many SO questions seem to like naming an interpreter there), move it into the start of CMD (CMD ["python3", "./script.py"]).
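For that last caveat, a minimal before/after sketch (python3 and ./script.py are just the placeholder names from above) might be:
# before: the interpreter occupies ENTRYPOINT
#   ENTRYPOINT ["python3"]
#   CMD ["./script.py"]
# after: the wrapper owns ENTRYPOINT and the interpreter moves into CMD
ENTRYPOINT ["./entrypoint.sh"]
CMD ["python3", "./script.py"]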

Run sh script from docker container

I built a docker image with the Dockerfile below:
FROM node:8.9.4
ADD setENV.sh /usr/local/bin/setENV.sh
RUN chmod +x /usr/local/bin/setENV.sh
CMD [ "/bin/bash" "usr/local/bin/setENV.sh" ]
The setENV script is:
#!/bin/sh
echo "PORT=${PORT:-1234}" >> .env
echo "PORT_SERVICE=${PORT_SERVICE:-8888}" > .env
echo "HOST_SERVICE=${HOST_SERVICE:-1234}" > .env
I build the image as:
docker image build -t my-node .
And then I run the image as:
docker run -it my-node bash
But the script is not executed.
From inside the container I run the script as:
/bin/bash usr/local/bin/setENV.sh
And it works fine.
Note that I am using docker for windows.
The command
docker run -it my-node bash
just runs bash.
To run the CMD, you have to do
docker run -it my-node
However, note that your container will immediately exit because there is nothing to do after writing to the file. So to see the result, you would need to add cat .env or something to setENV.sh.
I think the last line should be
CMD [ "/bin/bash", "usr/local/bin/setENV.sh" ]
You missed a comma.
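Putting both points together, a corrected version might look like the sketch below; the trailing cat .env inside setENV.sh is only a suggestion so you can see the result before the container exits:
FROM node:8.9.4
ADD setENV.sh /usr/local/bin/setENV.sh
RUN chmod +x /usr/local/bin/setENV.sh
CMD [ "/bin/bash", "/usr/local/bin/setENV.sh" ]
and, as the last line of setENV.sh:
cat .env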

Can't start bash script on ubuntu docker container

It's been a few days now, but I really can't understand how to run a bash script correctly in ubuntu/xenial64 using docker. Any clarification will be much appreciated.
I created a Dockerfile like this
FROM ubuntu:16.04
COPY setup.sh /setup.sh
RUN chmod +x /setup.sh
ENTRYPOINT [ "/setup.sh" ]
The error returned is: standard_init_linux.go:195: exec user process caused "no such file or directory"
But why? If I run ls, the file is correctly placed at the root. I also tried using CMD ["/setup.sh"]. My script file has a shebang: #!/bin/bash.

Dockerfile CMD instruction will exit the container just after running it

I want to set up some configuration when my container starts; for this I am using shell scripts. But my container exits as soon as my script ends. I have tried the -d flag / detached mode, but it will never run in detached mode.
Below is my Dockerfile
FROM ubuntu:14.04
ADD shell.sh /usr/local/bin/shell.sh
RUN chmod 777 /usr/local/bin/shell.sh
CMD /usr/local/bin/shell.sh
Below is my shell script
#!/bin/bash
echo Hello-docker
Run without any flag
docker run hello-docker
This prints 'Hello-docker' on my console and exits.
Run with -itd flags
docker run -itd hello-docker
and as per my console output below, this time it also exits soon. :(
The difference I saw is in the COMMAND section: when I run other images, the COMMAND section shows "/bin/bash" and they continue running in detached mode.
And when I run my image in a container with the shell script, the COMMAND section shows "/bin/sh -c /usr/loca" and it exits.
I want the container to run until I stop it manually.
EDIT:
After adding an ENTRYPOINT instruction to the Dockerfile, it will not execute my shell script. :(
FROM ubuntu:14.04
ADD shell.sh /usr/local/bin/shell.sh
RUN chmod 777 /usr/local/bin/shell.sh
CMD /usr/local/bin/shell.sh
ENTRYPOINT /bin/bash
As per docker documentation here
CMD will be overridden when running the container with alternative arguments, so if I run the docker image with some arguments as below, the CMD instruction will not be executed. :(
sudo docker run -it --entrypoint=/bin/bash <imagename>
A docker container will run as long as the CMD from your Dockerfile takes.
In your case your CMD consists of a shell script containing a single echo. So the container will exit after completing the echo.
You can override CMD, for example:
sudo docker run -it --entrypoint=/bin/bash <imagename>
This will start an interactive shell in your container instead of executing your CMD. Your container will exit as soon as you exit that shell.
If you want your container to remain active, you have to ensure that your CMD keeps running. For instance, by adding the line while true; do sleep 1; done to your shell.sh file, your container will print your hello message and then do nothing any more until you stop it (using docker stop in another terminal).
You can open a shell in the running container using docker exec -it <containername> bash. If you then execute command ps ax, it will show you that your shell.sh is still running inside the container.
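For reference, a shell.sh modified along those lines could be as small as this sketch:
#!/bin/bash
echo Hello-docker
# keep the container alive until it is stopped manually
while true; do sleep 1; done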
Finally, after some experiments, I got my best result as below.
There is nothing wrong with my Dockerfile; as shown below, it is correct.
FROM ubuntu:14.04
ADD shell.sh /usr/local/bin/shell.sh
RUN chmod 777 /usr/local/bin/shell.sh
CMD /usr/local/bin/shell.sh
What I did to get the expected result is simply add one more command (/bin/bash) to my shell script file as below, and voila, everything works the way I want.
#!/bin/bash
echo "Hello-docker" > /usr/hello.txt
/bin/bash
You can also modify your first Dockerfile, replacing
CMD /usr/local/bin/shell.sh
by
CMD /usr/local/bin/shell.sh ; sleep infinity
That way, your script does not terminate, and your container stays running.
CMD bash -c '/path/to/start.sh'; bash
Try
CMD /bin/bash -c 'MY_COMMAND_OR_SHELL_SCRIPT; /bin/bash'
An attempt to explain @lanni654321's answer: the sh shell is the default in a Dockerfile. You must call the bash shell explicitly to start bash with .bashrc; many commands also need RUN /bin/bash -c '...' in the same way as in the CMD above, since the sh shell is often not enough. If you add 'bash' at the end of the CMD, the container does not exit because that shell is still running.
See “/bin/sh: 1: MY_COMMAND: not found” for an error caused by sh and solved by bash.
I think you will usually not need this. You can just use RUN /bin/bash -c '...'; in my case, this could do anything that can be done in a base image before you go into the varying details in docker-compose to start the containers.
But none of that is needed if you just want a container that keeps running without exiting. Just run
docker run -dit --name MY_CONTAINER MY_IMAGE:latest
and then
docker exec -it MY_CONTAINER /bin/bash
and you should be in the bash of the container, and it should not exit.
Or if the exit happens during docker-compose, use
command: bash -c "MY_COMMAND --wait"
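For illustration, a minimal docker-compose.yml fragment using that override could look like this (my_service, MY_IMAGE and MY_COMMAND are placeholders):
services:
  my_service:
    image: MY_IMAGE:latest
    command: bash -c "MY_COMMAND --wait"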
At the end of your start shell script, append a line such as
tail -f /dev/null or /bin/bash
to make sure your shell script finishes and then keeps a process running, so that the docker container does not shut down. Don't forget to give start.sh execute permission with "chmod +x".
Here is a demo:
#!/bin/bash
cp /root/supervisor/${RUN_SERVICE}.ini /etc/supervisor/conf.d/
sleep 1
service supervisor start
/bin/bash
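The same demo with the tail -f /dev/null variant instead of a trailing /bin/bash would look roughly like this:
#!/bin/bash
cp /root/supervisor/${RUN_SERVICE}.ini /etc/supervisor/conf.d/
sleep 1
service supervisor start
# block forever so the container stays up
tail -f /dev/null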

Running a script inside a docker container using shell script

I am trying to create a shell script for setting up a docker container. My script file looks like:
#!/bin/bash
docker run -t -i -p 5902:5902 --name "mycontainer" --privileged myImage:new /bin/bash
Running this script file will run the container in a newly invoked bash.
Now I need to run a script file (test.sh), which is already inside the container, from the above shell script (e.g. cd /path/to/test.sh && ./test.sh).
How can I do that?
You can run a command in a running container using docker exec [OPTIONS] CONTAINER COMMAND [ARG...]:
docker exec mycontainer /path/to/test.sh
And to run from a bash session:
docker exec -it mycontainer /bin/bash
From there you can run your script.
Assuming that your docker container is up and running, you can run commands as:
docker exec mycontainer /bin/sh -c "cmd1;cmd2;...;cmdn"
I was searching for an answer to this same question and found that ENTRYPOINT in the Dockerfile was the solution for me.
Dockerfile
...
ENTRYPOINT /my-script.sh ; /my-script2.sh ; /bin/bash
Now the scripts are executed when I start the container, and I get the bash prompt after the scripts have been executed.
In case you don't want (or have) a running container, you can call your script directly with the run command.
Remove the interactive tty arguments -i -t and use this:
$ docker run ubuntu:bionic /bin/bash /path/to/script.sh
This should (I didn't test it) also work for other scripts:
$ docker run ubuntu:bionic /usr/bin/python /path/to/script.py
This command worked for me
cat local_file.sh | docker exec -i container_name bash
You could also mount a local directory into your docker image and source the script in your .bashrc. Don't forget that the script has to consist of functions unless you want it to execute on every new shell. (This is outdated; see the update notice.)
I'm using this solution to be able to update the script outside of the docker instance. This way I don't have to rerun the image if changes occur; I just open a new shell. (Got rid of reopening a shell - see the update notice.)
Here is how you bind your current directory:
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash
Now your current directory is bound to /scripts of your docker instance.
(Outdated)
To save your .bashrc changes commit your working image with this command:
docker commit $container_id $my_docker_build
Update
To solve the issue to open up a new shell for every change I now do the following:
In the Dockerfile itself I add RUN echo "/scripts/bashrc" > /root/.bashrc. Inside that bashrc I export the scripts directory to the path. The scripts directory now contains multiple files instead of one. Now I can directly call all scripts without having to open a sub shell on every change.
BTW you can define the history file outside of your container too. This way it's not necessary to commit on a bash change anymore.
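As a rough sketch of that workflow (the explicit source is my reading of the RUN echo line above, and the paths are placeholders):
# Dockerfile: load the mounted scripts in every root shell
RUN echo "source /scripts/bashrc" >> /root/.bashrc
# host: start the container with the current directory mounted at /scripts
docker run -it -v $PWD:/scripts $my_docker_build /bin/bash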
Thomio's answer is helpful, but it expects the script to exist inside the image. If you have a one-off script that you want to run/test inside a container (from the command line or for use in a script), then you can use
$ docker run ubuntu:bionic /bin/bash -c '
echo "Hello there"
echo "this could be a long script"
'
Have a look at entry points too; they let you combine multiple commands with CMD:
https://docs.docker.com/engine/reference/builder/#/entrypoint
If you want to run the same command on multiple instances you can do this :
for i in c1 dm1 dm2 ds1 ds2 gtm_m gtm_sl; do docker exec -it $i /bin/bash -c "service sshd start"; done
This is old, and I don't have enough reputation points to comment. Still, I guess it is worth sharing how one can generalize Marvin's idea to allow parameters.
docker exec -i mycontainer bash -s arg1 arg2 arg3 < mylocal.sh
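To make the parameter passing concrete: bash -s reads the script from standard input while still accepting the arguments that follow, so a hypothetical mylocal.sh that just echoes its arguments would see them as the usual positional parameters:
#!/bin/bash
echo "first:  $1"
echo "second: $2"
echo "third:  $3"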
