Commands after docker run don't execute in bash script

I have a bash script as follows:
#!/bin/bash
if [ $1 = "first" ]
then
cd /Users/sulekahelmini/Documents/fyp/fyp_work/demo/target && docker build . -t suleka96/factorial
fi
docker run --rm --name factorialorialContainer -p 8080:8080 -e JAVA_OPTIONS="$(cat /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/flags.txt)" suleka96/factorial:latest
sleep 3
#run test
cd /Users/sulekahelmini/Documents/fyp/apache-jmeter-5.2.1/bin && sh jmeter -n -t /Users/sulekahelmini/Documents/fyp/jmeter_scripts/factorial.jmx -l /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_results.jtl
#convert result to csv
cd /Users/sulekahelmini/Documents/fyp/apache-jmeter-5.2.1/bin && ./JMeterPluginsCMD.sh --generate-csv /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/agg_test.csv --input-jtl /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/jmeter_results.jtl --plugin-type AggregateReport
docker stop factorialorialContainer
When I run this script using:
sudo ./microwise.sh two
It starts the container and prints the Spring framework startup output and other information in the terminal. The problem is that the next two lines (running the JMeter test and converting the results to CSV) after docker run don't get executed.
What am I doing wrong?

This is because your container is still running in the foreground, so the script blocks on docker run until the container exits. Add the -d flag to docker run so it detaches from the console and runs the container in the background.
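For reference, a sketch of the corrected line, using the same options as in the question with -d added:
docker run -d --rm --name factorialorialContainer -p 8080:8080 -e JAVA_OPTIONS="$(cat /Users/sulekahelmini/Documents/fyp/fyp_work/MLscripts/flags.txt)" suleka96/factorial:latest
The rest of the script then continues while the container runs in the background, and the docker stop at the end still works because the container is named.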

Docker run to execute script in mount without exiting container automatically?

I have a simple bash script 'test.sh' in the root of the mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when I try to mount the folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
The above run command starts the container, but it exits immediately.
I am looking for a docker run command that starts the container, mounts the folder, and then executes the bash script in the mounted folder without exiting the container.
(I don't want to use docker exec, as it is not suitable for my use case for other reasons.)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details:
The image I am using is RStudio Server from rocker, and the container runs on an AWS Ubuntu machine.
Edit:
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/, but the container still exits.
A Docker container stops automatically as soon as its main process exits, even when run in detached mode. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could instead add tail -f /dev/null as the last command in your bash script, so that the script never halts unless it is told to do so.
When you do docker run [options] image_name [cmd], the command you specify becomes the command for the container and replaces the command specified in the Dockerfile (that's why adding CMD tail -f /dev/null doesn't do anything). If you run your container without the /home/rstudio/test.sh at the end, it should stay running.
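As a quick illustration (a sketch reusing the image and script from the question):
# CMD from the Dockerfile is used only when no command is passed:
docker run -d image_name                        # runs CMD tail -f /dev/null, stays up
docker run -d image_name /home/rstudio/test.sh  # runs test.sh instead; exits when it finishes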
The solution is to update your script to add the tail command at the end:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'
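After starting it this way, the container should stay up; a quick way to confirm (a sketch, using the --name container from the question):
docker ps --filter name=container   # the container should be listed as Up
docker logs container               # shows any output from controller.R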

Export function from bash and run it through command line

I have a bash script in a server.sh file:
#!/usr/bin/env bash
function start {
docker-compose up -d --build && docker exec php bash -c "composer install; vendor/bin/phinx migrate" && \
docker exec web bash -c "cd web; npm install; pm2 start node_modules/react-scripts/scripts/start.js --name web"
}
function stop {
docker-compose down
}
export -f start stop
I want to call these functions from the command line, such as:
$./server.sh start
$./server.sh stop
Is this possible? Right now it doesn't do anything.
Your script ignores its command-line arguments, so passing it start or stop is pointless.
The only thing it does is define (and export, for some reason) two functions, so running it in a separate shell does nothing.
What you can do is source the script in the current shell:
. ./server.sh
Then you will have two functions available that you can run:
start
and
stop
(both in the current shell).
If you want it to work differently, you'll have to redesign your shell script.
You cannot use export -f start stop like this.
Here is a good thread explaining how to use it:
https://unix.stackexchange.com/questions/22796/can-i-export-functions-in-bash
If you wish to call your start/stop functions from the command line, you will have to expose them like this:
#!/usr/bin/env bash
function start {
docker-compose up -d --build && docker exec php bash -c "composer install; vendor/bin/phinx migrate" && \
docker exec web bash -c "cd web; npm install; pm2 start node_modules/react-scripts/scripts/start.js --name web"
}
function stop {
docker-compose down
}
if [[ "$1" == "start" ]]; then
start
fi
# [... same idea for the stop one ...]
And then call it like $ ./server.sh start
This is just an example; there are more efficient ways to manage arguments.
Hope this gives you some insights.
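For completeness, here is one such pattern: a minimal sketch using a case statement as the dispatcher (the function bodies are abbreviated from the question):
#!/usr/bin/env bash
function start {
docker-compose up -d --build
}
function stop {
docker-compose down
}
case "$1" in
start) start ;;
stop) stop ;;
*) echo "Usage: $0 {start|stop}" >&2; exit 1 ;;
esac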

bash script to run tests in multiple docker-compose environments

I need some help writing a script to automate a fairly simple process of running tests on several docker-compose environments on a Windows host.
This is the manual process that I would like to automate:
Open a docker quickstart terminal
Set up 3 identical environments: docker-compose -p test1 up -d && docker-compose -p test2 up -d && docker-compose -p test3 up -d
Open 2 more docker terminals, then run one of the following in each:
docker-compose -p test1 run app ./node_modules/gulp/bin/gulp.js cuc-reports
docker-compose -p test2 run app ./node_modules/gulp/bin/gulp.js cuc-not-reports1
docker-compose -p test3 run app ./node_modules/gulp/bin/gulp.js cuc-not-reports2
When all tests complete, tear down: docker-compose -p test1 down && docker-compose -p test2 down && docker-compose -p test3 down
I'm stuck pretty much at the beginning. I can open a docker machine shell but can't get it to change directories in order to execute step 2. I tried the following:
#!/bin/bash
src=$PWD/../../
cd "C:\Program Files\Docker Toolbox"
"C:\Program Files\Git\bin\bash.exe" --login -i "C:\Program Files\Docker Toolbox\start.sh" cd $src && docker-compose -p test1 up -d && docker-compose -p test2 up -d && docker-compose -p test3 up -d
However, the cd $src is not executed, which causes the subsequent commands to fail.
Trying to generalize the things I think I need in order to run this script, I might summarize as follows:
How can I pass commands to be executed once the docker shell loads (such as "cd ...")?
How can I open multiple independent (docker) shells from the root shell and wait for them to finish executing their commands?
I intended to write the script for git-bash on windows, which is my preference, but suggestions for a windows batch script are also welcome.
Well, it wasn't so hard in the end. It's pretty messy and I'm sure it could be improved, but it does what I wanted (in the end I'm running 2 docker environments, not 3, as it performs better with 4 cores). If anyone is interested in where I got all this weirdness, just ask and I'll cite some sources. Remember, this answer is for Windows:
#!/bin/bash
cd "$(dirname "$0")/../../"
start bash -c 'docker-compose -p test1 up -d; sleep 3s; docker exec -i $(docker-compose -p test1 ps ros | grep -m 1 ros | cut -d " " -f1 ) ./node_modules/gulp/bin/gulp.js cucumber1; docker-compose -p test1 down; bash'
start bash -c 'docker-compose -p test2 up -d; sleep 3s; docker exec -i $(docker-compose -p test2 ps ros | grep -m 1 ros | cut -d " " -f1 ) ./node_modules/gulp/bin/gulp.js cucumber2; docker-compose -p test2 down; bash'
sleep 5s
start bash -c "docker stats $(docker ps | awk '{if(NR>1) print $NF}')"

Docker kill not working when executed in shell script

The following works fine when running the commands manually line by line in the terminal:
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
docker stop test
docker rm test
But when I run it as a shell script, the Docker container is neither stopped nor removed.
#!/usr/bin/env bash
set -e
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
docker stop test
docker rm test
How can I make it work from within a shell script?
If you use set -e, the script will exit when any command fails, i.e. when a command's return code is non-zero. This means that if your start, exec or stop fails, you will be left with the container still there.
You can remove the set -e, but you probably still want to use the return code of the go test command as the script's overall return code.
#!/usr/bin/env bash
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
rc=$? # capture the exit code of go test before stop/rm overwrite $?
docker stop test
docker rm test
exit $rc
Trap
Using set -e is actually quite useful and catches a lot of issues that are silently ignored in most scripts. A slightly more complex solution is to use a trap to run your cleanup steps on EXIT, which means set -e can still be used.
#!/usr/bin/env bash
set -e
# Set a default return code
RC=2
# Cleanup
function cleanup {
echo "Removing container"
docker stop test || true
docker rm -f test || true
exit $RC
}
trap cleanup EXIT
# Test steps
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
RC=$?

Bash script to get into a running container and then run another bash script from that container

I have a shell script which runs as follows:
image_id=$(docker ps -a | grep postgres | awk -F' ' '{print $1}')
full_id=$(docker ps -a --no-trunc -q | grep $image_id)
docker exec -i -t $full_id bash
When I run this from the base Linux OS, I expect to actually enter the postgres container, which is a running container. But the issue is that the shell script hangs on the 3rd line, at the docker exec step.
My end goal is to use the bash script to enter a running postgres container and run another bash script inside that container.
However, when I run the same command from the command line, it works fine and gets me into the postgres container.
Please help; I have spent hours and hours trying to solve this with no progress.
Thanks again
Your setup is a bit more complex than it needs to be.
docker ps can filter containers directly with the --filter= option:
docker ps --no-trunc --quiet --filter="ancestor=postgres"
You can also give containers a --name when you run them, which is less fraught with danger than the script you are attempting:
docker run --detach --name postgres_whatever postgres
docker exec -ti postgres_whatever bash
I'm not sure that your script is hanging, as opposed to sitting there waiting for input. Try running a command directly:
Using naming
exec_test.sh
#!/usr/bin/env bash
docker exec postgres_whatever echo "I have run the test"
When run
$ ./exec_test.sh
I have run the test
Without naming
exec_filter_test.sh
#!/usr/bin/env bash
id=$(docker ps --no-trunc --quiet --filter="ancestor=postgres")
[ -z "$id" ] && echo "no id" && exit 1
docker exec "${id}" echo "I have run the test"
When run
$ ./exec_filter_test.sh
I have run the test
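If the end goal is to run a whole script inside the container rather than a single command, one option (a sketch; inner.sh is a hypothetical local script) is to feed it to a shell in the container over stdin:
#!/usr/bin/env bash
# inner.sh is hypothetical; substitute the script you want to run in the container
docker exec -i postgres_whatever bash < inner.sh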
