Export functions from bash and run them from the command line - bash

I have a bash script in a file server.sh:
#!/usr/bin/env bash
function start {
  docker-compose up -d --build && docker exec php bash -c "composer install; vendor/bin/phinx migrate" && \
  docker exec web bash -c "cd web; npm install; pm2 start node_modules/react-scripts/scripts/start.js --name web"
}
function stop {
  docker-compose down
}
export -f start stop
I want to call these functions from the command line, such as:
$./server.sh start
$./server.sh stop
Is this possible? Right now it doesn't do anything.

Your script ignores its command line arguments, so passing it start or stop is pointless.
The only thing it does is to define (and export, for some reason) two functions, so running it in a separate shell does nothing.
What you can do is source the script in the current shell:
. ./server.sh
Then you will have two functions available that you can run:
start
and
stop
(both in the current shell).
If you want it to work differently, you'll have to redesign your shell script.

You cannot use export -f start stop like this: export -f only makes the functions visible to child processes of the shell that defined them, not to the shell you type commands into.
Here is a good thread explaining how to use it:
https://unix.stackexchange.com/questions/22796/can-i-export-functions-in-bash
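For illustration, export -f does make a function visible to child bash processes, just not to your interactive shell (a quick sketch):
. ./server.sh      # defines and exports start/stop in the current shell
bash -c 'start'    # works: the child bash inherits the exported function
./server.sh start  # still does nothing, since the script ignores its arguments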
If you wish to call your start/stop functions from the command line, you will have to expose them like this:
#!/usr/bin/env bash
function start {
  docker-compose up -d --build && docker exec php bash -c "composer install; vendor/bin/phinx migrate" && \
  docker exec web bash -c "cd web; npm install; pm2 start node_modules/react-scripts/scripts/start.js --name web"
}
function stop {
  docker-compose down
}
if [[ "$1" == "start" ]]; then
  start
fi
# [... same idea for the stop one ...]
And then call it like $ ./server.sh start
This is just a basic example; there are more efficient ways to manage the arguments.
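For instance, a case statement handles the dispatch and rejects unknown arguments in one place (a minimal sketch using the same functions):
case "$1" in
  start) start ;;
  stop)  stop ;;
  *)     echo "Usage: $0 {start|stop}" >&2; exit 1 ;;
esac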
Hope this gives you some insights.

Related

docker run entrypoint with multiple commands

How can I have an entrypoint in a docker run which executes multiple commands?
Something like:
docker run --entrypoint "echo 'hello' && echo 'world'" ... <image>
The image I'm trying to run already has an entrypoint set in the Dockerfile, so a solution like the following doesn't seem to work, because it looks like my commands are ignored and only the original entrypoint is executed:
docker run ... <image> bash -c "echo 'hello' && echo 'world'"
In my use case I must use the docker run command. Solutions which change the Dockerfile are not acceptable, since it is not in my hands.
As a style point, this gets vastly easier if your image has a CMD that can be overridden. If you only need to run one command with no initial setup, make it be the CMD and not the ENTRYPOINT:
CMD ./some_command # not ENTRYPOINT
If you need to do some initial setup and then launch the main command, make the ENTRYPOINT be a shell script that ends with the special instruction exec "$@". The CMD will be passed into it as parameters, and this line replaces the shell script with that command.
#!/bin/sh
# entrypoint.sh
... do first time setup, run database migrations, set variables ...
exec "$#"
# Dockerfile
...
ENTRYPOINT ["./entrypoint.sh"] # MUST be JSON-array syntax
CMD ./some_command # as before
If you do these things, then you can use your initial docker run form. This will replace the CMD but leave the ENTRYPOINT intact. In the wrapper-script case, your alternate command will be run as the exec "$@" command, so all of the first-time setup will be done first.
# Assuming the image correctly honors the CMD
docker run ... \
image-name \
sh -c 'echo "foo is $FOO" && echo "bar is $BAR"'
If you really can't do this, you can override the docker run --entrypoint. This runs instead of the image's entrypoint (if you want the image's entrypoint you have to run it yourself), and the syntax is awkward:
# Run a shell command instead of the entrypoint
docker run ... \
--entrypoint /bin/sh \
image-name \
-c 'echo "foo is $FOO" && echo "bar is $BAR"'
Note that the --entrypoint option comes before the image name, and its arguments come after the image name.

Docker run to execute script in mount without exiting container automatically?

I have a simple bash script 'test.sh' in the root of the mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when i try to mount folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
The above run command starts the container, but it exits automatically.
I am looking for a docker run command that starts the container , mounts the folder and then executes the bash script which is in the mount-folder without exiting the container.
(** don't want to go with the docker exec command, as it is not suitable for my use case for other reasons)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details:
The image that I am using is RStudio Server from rocker, and the container runs on an AWS Ubuntu machine.
Edit:
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/, but even then the container exits.
Docker containers shut down automatically when their main process exits. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could add tail -f /dev/null as the last command in your bash script instead so that the script will never halt unless it is told to do so.
When you do docker run [options] image_name [cmd], the command you specify becomes the command for the container and replaces any CMD specified in the Dockerfile (that's why adding CMD tail -f /dev/null doesn't do anything). If you ran your container without the /home/rstudio/test.sh at the end, it should stay running.
The solution would be to update your script to add the tail command at the end.
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'

Source script on interactive shell inside Docker container

I want to open an interactive shell which sources a script to use the bitbake environment on a repository that I bind mount:
docker run --rm -it \
--mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repoistory \
my_image /bin/bash -c "cd /mnt/bb_repoistory/oe-core && source build/conf/set_bb_env.sh"
The problem is that the -it argument does not seem to have any effect, since the shell exits right after executing cd /mnt/bb_repoistory/oe-core && source build/conf/set_bb_env.sh
I also tried this:
docker run --rm -it \
--mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repoistory \
my_image /bin/bash -c "cd /mnt/bb_repoistory/oe-core && source build/conf/set_bb_env.sh && bash"
This spawns an interactive shell, but none of the macros defined in set_bb_env.sh are available.
Would there be a way to provide a tty with the script properly sourced?
The -it flag conflicts with the command to run, in that you're telling docker to create the pseudo-terminal (pty) and then running a command in that terminal (bash -c ...). When that command finishes, the run is done.
What some people have done to work around this is to only have export variables in their sourced environment, and the last command would be exec bash. But if you need aliases or other items that aren't inherited like that, then your options are a bit more limited.
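A minimal sketch of that pattern, assuming set_bb_env.sh only exports variables (exported variables survive into the replacement shell; aliases and functions do not):
docker run --rm -it \
  --mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repoistory \
  my_image /bin/bash -c 'cd /mnt/bb_repoistory/oe-core && . build/conf/set_bb_env.sh && exec bash'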
Instead of running the source in a parent shell, you could run it in the target shell. If you modified your .bash_profile to include the following line:
[ -n "$DOCKER_LOAD_EXTRA" -a -r "$DOCKER_LOAD_EXTRA" ] && source "$DOCKER_LOAD_EXTRA"
and then had your command be:
... /bin/bash -c "cd /mnt/bb_repoistory/oe-core && DOCKER_LOAD_EXTRA=build/conf/set_bb_env.sh exec bash"
that may work. This tells your .bash_profile to load this file when the env variable is already set, but not otherwise. (There can also be the -e flag on the docker command line, but I think that sets it globally for the entire container, which is probably not what you want.)
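If you did try the -e variant instead, it would look something like this (hypothetical; note the absolute path, since no cd happens before the profile is read):
docker run --rm -it \
  --mount type=bind,source=$(MY_PATH),destination=/mnt/bb_repoistory \
  -e DOCKER_LOAD_EXTRA=/mnt/bb_repoistory/oe-core/build/conf/set_bb_env.sh \
  my_image /bin/bash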

Run inline command with pipe in docker container [duplicate]

I'm trying to run MULTIPLE commands like this.
docker run image cd /path/to/somewhere && python a.py
But this gives me "No such file or directory" error because it is interpreted as...
"docker run image cd /path/to/somewhere" && "python a.py"
It seems that some ESCAPE characters like "" or () are needed.
So I also tried
docker run image "cd /path/to/somewhere && python a.py"
docker run image (cd /path/to/somewhere && python a.py)
but these didn't work.
I have searched the Docker Run Reference but have not found any hints about ESCAPE characters.
To run multiple commands in docker, use /bin/bash -c and a semicolon ;
docker run image_name /bin/bash -c "cd /path/to/somewhere; python a.py"
If command2 (python) should be executed if and only if command1 (cd) returned a zero (no error) exit status, use && instead of ;
docker run image_name /bin/bash -c "cd /path/to/somewhere && python a.py"
You can do this a couple of ways:
Use the -w option to change the working directory:
-w, --workdir="" Working directory inside the container
https://docs.docker.com/engine/reference/commandline/run/#set-working-directory--w
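For example, using the question's paths (a sketch):
docker run -w /path/to/somewhere image python a.py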
Pass the entire argument to /bin/bash:
docker run image /bin/bash -c "cd /path/to/somewhere; python a.py"
You can also pipe commands inside the Docker container with bash -c "<command1> | <command2>", for example:
docker run img /bin/bash -c "ls -1 | wc -l"
But without invoking the shell inside the container, the pipe is interpreted by your local shell, so the second command runs on the host against the container's output.
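To make the difference concrete:
docker run img ls -1 | wc -l                  # pipe handled by your local shell; wc -l runs on the host
docker run img /bin/bash -c "ls -1 | wc -l"   # whole pipeline runs inside the container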
bash -c works well if the commands you are running are relatively simple. However, if you're trying to run a long series of commands full of control characters, it can get complex.
I successfully got around this by piping my commands into the process from the outside, i.e.
cat script.sh | docker run -i <image> /bin/bash
Just to make a proper answer from @Eddy Hernandez's comment, which is very correct since Alpine comes with ash, not bash.
The question now refers to Starting a shell in the Docker Alpine container, which implies using sh or ash or /bin/sh or /bin/ash.
Based on the OP's question:
docker run image sh -c "cd /path/to/somewhere && python a.py"
If you want to store the result in a file outside the container, on your local machine, you can do something like this:
RES_FILE=$(readlink -f /tmp/result.txt)
touch "${RES_FILE}"  # make sure the host path exists as a file, or Docker will create a directory there
docker run --rm -v ${RES_FILE}:/result.txt img bash -c "grep root /etc/passwd > /result.txt"
The result of your commands will be available in /tmp/result.txt in your local machine.
For anyone else who came here looking to do the same with docker-compose, you just need to prepend bash -c and enclose multiple commands in quotes, joined together with &&.
So in the OP's example: docker-compose run image bash -c "cd /path/to/somewhere && python a.py"
If you don't mind the commands running in a subshell, put a set of parentheses around them inside the shell invocation:
docker run image sh -c '(cd /path/to/somewhere && python a.py)'
TL;DR;
$ docker run --entrypoint /bin/sh image_name -c "command1 && command2 && command3"
A concern regarding the accepted answer: nobody has mentioned that docker run image_name /bin/bash -c just appends the command to the entrypoint. Some popular images are smart enough to process this correctly, but some are not.
Imagine the following Dockerfile:
FROM alpine
ENTRYPOINT ["echo"]
If you try building it as echo and running:
$ docker run echo /bin/sh -c date
You will get your command appended to the entrypoint, so the result would be echo "/bin/sh" "-c" "date", which just prints /bin/sh -c date instead of running it.
Instead, you need to override the entrypoint:
$ docker run --entrypoint /bin/sh echo -c date
Docker run reference
In case it's not obvious, if a.py always needs to run in a particular directory, create a simple wrapper script which does the cd and then runs the script.
In your Dockerfile, replace
CMD ["python", "a.py"]
or whatever with
CMD ["/wrapper"]
and create a script wrapper in your root directory (or wherever it's convenient for you) with contents like
#!/bin/sh
set -e
cd /path/to/somewhere
python a.py
In many situations, perhaps also consider rewriting a.py so that it doesn't need a wrapper. Either make it os.chdir() where it needs to be, or have it look for its data files in a directory you configure in its environment or similar.

Docker kill not working when executed in shell script

The following works fine when running the commands manually line by line in the terminal:
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
docker stop test
docker rm test
But when I run it as a shell script, the Docker container is neither stopped nor removed.
#!/usr/bin/env bash
set -e
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
docker stop test
docker rm test
How can I make it work from within a shell script?
If you use set -e, the script will exit when any command fails, i.e. when a command's return code is != 0. This means that if your start, exec, or stop fails, you will be left with the container still there.
You can remove the set -e but you probably still want to use the return code for the go test command as the overall return code.
#!/usr/bin/env bash
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
rc=$?
docker stop test
docker rm test
exit $rc
Trap
Using set -e is actually quite useful and catches a lot of issues that are silently ignored in most scripts. A slightly more complex solution is to use a trap to run your clean up steps on EXIT, which means set -e can be used.
#!/usr/bin/env bash
set -e
# Set a default return code
RC=2
# Cleanup
function cleanup {
  echo "Removing container"
  docker stop test || true
  docker rm -f test || true
  exit $RC
}
trap cleanup EXIT
# Test steps
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
RC=$?
