Docker Check if DB is Running - bash

entrypoint.sh contains various cqlsh commands that require Cassandra. Without something like script.sh, cqlsh commands fail because Cassandra doesn't have enough time to start. When I execute the following locally, everything appears to work properly. However, when I run via Docker, script.sh never finishes. In other words, $status never changes from 1 to 0.
Dockerfile
FROM cassandra
RUN apt-get update && apt-get install -y netcat
RUN mkdir /dir
ADD ./scripts /dir/scripts
RUN /bin/bash -c 'service cassandra start'
RUN /bin/bash -c '/dir/scripts/script.sh'
RUN /bin/bash -c '/dir/scripts/entrypoint.sh'
script.sh
#!/bin/bash
set -e
cmd="$@"
status=$(nc -z localhost 9042; echo $?)
echo $status
while [ $status != 0 ]
do
sleep 3s
status=$(nc -z localhost 9042; echo $?)
echo $status
done
exec $cmd
Alternatively, I could do something like until cqlsh -e 'some code'; do .., as noted here for psql, but that doesn't appear to work for me. Wondering how best to approach the problem.

You're misusing the RUN command in your Dockerfile. It's not for starting services; it's for making filesystem changes in your image. The reason $status never changes is that each RUN step executes in its own temporary container, so the Cassandra service started in one RUN step is no longer running in the next.
You should add service cassandra start and /dir/scripts/entrypoint.sh to your script.sh file, and make that the CMD that's executed by default:
Dockerfile
CMD ["/bin/bash", "-c", "/dir/scripts/script.sh"]
script.sh
#!/bin/bash
set -e
# NOTE: I removed your `cmd` processing in favor of invoking entrypoint.sh
# directly.
# Start Cassandra before waiting for it to boot.
service cassandra start
status=$(nc -z localhost 9042; echo $?)
echo "$status"
while [ "$status" != 0 ]
do
sleep 3s
status=$(nc -z localhost 9042; echo $?)
echo "$status"
done
exec /dir/scripts/entrypoint.sh
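The poll-and-exec pattern above can also be written without netcat; a minimal sketch using bash's built-in /dev/tcp (the function name wait_for_port and the default timeout are illustrative):

```shell
#!/bin/bash
# wait_for_port HOST PORT [TIMEOUT] - poll until a TCP port accepts
# connections, using bash's /dev/tcp so netcat isn't required.
# Returns 0 once the port is open, 1 after TIMEOUT seconds.
wait_for_port() {
  local host=$1 port=$2 timeout=${3:-60} waited=0
  # The subshell opens (and immediately closes) a probe connection.
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    sleep 1
    waited=$((waited + 1))
    if [ "$waited" -ge "$timeout" ]; then
      return 1
    fi
  done
}
# e.g.: wait_for_port localhost 9042 60 && exec /dir/scripts/entrypoint.sh
```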

Related

How to run shell script via kubectl without interactive shell

I am trying to export a configuration from a service called keycloak using a shell script. export.sh is run from the pipeline.
The script connects to the k8s cluster and runs the commands there.
So far everything goes okay and the export works perfectly.
But when I try to exit from the k8s cluster with exit, the shell script ends immediately and control moves back to the pipeline host instead of staying on the remote machine.
Running the command from the pipeline
ssh -t ubuntu@example1.com 'bash' < export.sh
export.sh
#!/bin/bash
set -x
set -e
rm -rf /tmp/realm-export
if [ $(ps -ef | grep "keycloak.migration.action=export" | grep -v grep | wc -l) != 0 ]; then
echo "Another export is currently running"
exit 1
fi
kubectl -n keycloak exec -it keycloak-0 bash
mkdir /tmp/export
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Djboss.socket.binding.port-offset=100
rm /tmp/export/master-*
exit
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export
exit
exit
scp ubuntu@example1.com:/tmp/realm-export/* ./configuration2/realms/
After the first exit the whole shell script stops and the remaining commands don't run; it won't stay on ubuntu@example1.com.
Are there any solutions?
Run the commands inside the pod without an interactive shell, using a heredoc.
Note that the delimiter is 'EOF', not EOF: quoting it prevents variable expansion in the current shell, so /tmp/export/master-* in the inner script expands inside the pod as you expect.
kubectl -n keycloak exec -it keycloak-0 bash <<'EOF'
<put your codes here, which you type interactively>
EOF
export.sh
#!/bin/bash
set -x
set -e
rm -rf /tmp/realm-export
if [ $(ps -ef | grep "keycloak.migration.action=export" | grep -v grep | wc -l) != 0 ]; then
echo "Another export is currently running"
exit 1
fi
# the suggested code.
kubectl -n keycloak exec -it keycloak-0 bash <<'EOF'
<put your codes here, which you type interactively>
EOF
mkdir /tmp/export
/opt/jboss/keycloak/bin/standalone.sh -Dkeycloak.migration.action=export -Dkeycloak.migration.provider=dir -Dkeycloak.migration.dir=/tmp/export -Dkeycloak.migration.usersExportStrategy=DIFFERENT_FILES -Djboss.socket.binding.port-offset=100
rm /tmp/export/master-*
kubectl -n keycloak cp keycloak-0:/tmp/export /tmp/realm-export
scp ubuntu@example1.com:/tmp/realm-export/* ./configuration2/realms/
Whether or not scp succeeds, the script will then exit.
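The effect of the quoted delimiter can be demonstrated locally; a small sketch (the variable name is illustrative):

```shell
#!/bin/bash
# With an unquoted delimiter, the *local* shell expands $name before the
# inner bash ever sees it; with <<'EOF' the text is passed literally and
# the inner shell does the expansion itself.
name="outer"
unquoted=$(bash <<EOF
echo "$name"
EOF
)
quoted=$(bash <<'EOF'
name="inner"
echo "$name"
EOF
)
echo "$unquoted"   # expanded by the outer shell
echo "$quoted"     # expanded inside the inner bash
```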

Linux script run with run-this-one doesn't work with docker

I have a command that runs in a cronjob, and I want to make sure it's not already being executed. I achieve that by running it as run-one [command] (man-page).
If I want to cancel the already-running command and force the new one to run, I run it as run-this-one [command].
At least that is what I expected, but if the command runs a docker container, the other process seems to be terminated (but isn't): the terminal shows Terminated, yet keeps showing the output of the command running in the container (and the commands after the container finishes are not executed). In this case, the command run with run-this-one is not executed, which is not expected.
Example:
/path/to/file.sh
#!/bin/bash
set -eou pipefail
echo "sleep started..." >&2
docker run --rm alpine /bin/sh -c 'echo "sleep started inside..." && sleep 5 && echo "sleep ended inside..."'
echo "sleep ended..." >&2
If I run sudo run-one /path/to/file.sh in one terminal and then, before it finishes, run the same command in another terminal, the second command is not executed, as expected, and the first command ends successfully.
Terminal1:
user@host:/path$ sudo run-one /path/to/file.sh
sleep started...
sleep started inside...
sleep ended inside...
sleep ended...
user@host:/path$
Terminal2:
user@host:/path$ sudo run-one /path/to/file.sh
user@host:/path$
But if I run sudo run-one /path/to/file.sh in one terminal and then, before it finishes, run sudo run-this-one /path/to/file.sh in another, the run-this-one command is not executed, which is not expected. The first terminal shows Terminated and returns to the prompt, but the container created by the first command keeps running and its output continues to appear there.
Terminal1:
user@host:/path$ sudo run-one /path/to/file.sh
sleep started...
sleep started inside...
Terminated
user@host:/path$ sleep ended inside...
# terminal doesn't show new input from the keyboard, but I can run commands after
Terminal2:
user@host:/path$ sudo run-this-one /path/to/file.sh
user@host:/path$
It works if the file is changed to:
/path/to/file.sh
#!/bin/bash
set -eou pipefail
echo "sleep started..." >&2
sleep 5
echo "sleep ended..." >&2
The script above with docker was just an example; in my case it's different, but the problem is the same, and it occurs whether or not the container is run with -it.
Does anyone know why this is happening? Is there a (not very complex and not very hackish) solution to this problem? I've executed the above commands in Ubuntu 20.04 inside a VirtualBox machine (with Vagrant).
Update (2021-07-15)
Based on @ErikMD's comment and @DannyB's answer, I put a trap and a cleanup function to remove the container, as can be seen in the script below:
/path/to/test
#!/bin/bash
set -eou pipefail
trap 'echo "[error] ${BASH_SOURCE[0]}:$LINENO" >&2; exit 3;' ERR
RED='\033[0;31m'
NC='\033[0m' # No Color
function error {
msg="$(date '+%F %T') - ${BASH_SOURCE[0]}:${BASH_LINENO[0]}: ${*}"
>&2 echo -e "${RED}${msg}${NC}"
exit 2
}
file="${BASH_SOURCE[0]}"
command="${1:-}"
if [ -z "$command" ]; then
error "[error] no command entered"
fi
shift;
case "$command" in
"cmd1")
function cleanup {
echo "cleaning $command..."
sudo docker rm --force "test-container"
}
trap 'cleanup; exit 4;' ERR
args=( "$file" "cmd:unique" )
echo "$command: run-one ${args[*]}" >&2
run-one "${args[@]}"
;;
"cmd2")
function cleanup {
echo "cleaning $command..."
sudo docker rm --force "test-container"
}
trap 'cleanup; exit 4;' ERR
args=( "$file" "cmd:unique" )
echo "$command: run-this-one ${args[*]}" >&2
run-this-one "${args[@]}"
;;
"cmd:unique")
"$file" "cmd:container"
;;
"cmd:container")
echo "sleep started..." >&2
sudo docker run --rm --name "test-container" alpine \
/bin/sh -c 'echo "sleep started inside..." && sleep 5 && echo "sleep ended inside..."'
echo "sleep ended..." >&2
;;
*)
echo -e "${RED}[error] invalid command: $command${NC}"
exit 1
;;
esac
If I run /path/to/test cmd1 (run-one) and /path/to/test cmd2 (run-this-one) in another terminal, it works as expected (the cmd1 process is stopped and removes the container, and the cmd2 process runs successfully).
If I run /path/to/test cmd2 in 2 terminals, it also works as expected (the 1st cmd2 process is stopped and removes the container, and the 2nd cmd2 process runs successfully).
But it's not perfect: in the 2 cases above, the 2nd process sometimes stops with an error before the 1st removes the container (this occurs intermittently, probably due to a race condition).
And it gets worse: if I run /path/to/test cmd1 in 2 terminals, both commands fail, although the 1st cmd1 should run successfully (it fails because the 2nd cmd1 removes the container in its cleanup).
I tried moving the cleanup into the cmd:unique command instead (removing it from the other 2 places), so that only the single running process would call it, but oddly the cleanup is not called there, even though the trap is also defined there.
Just to simplify your question, I would use this command to reproduce the problem:
run-one docker run --rm -it alpine sleep 10
As can be seen, with either run-one or run-this-one, the behavior is definitely not the desired one.
Since the command creates a process managed by docker, I suspect that the run-one set of tools is not the right tool for the job, since docker containers should not be killed with pkill, but rather with docker kill.
One relatively easy solution is to embrace the way docker wants you to kill containers, and create your own short run-one scripts that handle docker properly.
run-one-docker.sh
#!/usr/bin/env bash
if [[ "$#" -lt 2 ]]; then
echo "Usage: ./run-one-docker.sh NAME COMMAND"
echo "Example: ./run-one-docker.sh temp alpine sleep 10"
exit 1
fi
name="$1"
command=("${@:2}")
container_is_running() {
[ "$( docker container inspect -f '{{.State.Running}}' "$1" 2> /dev/null)" == "true" ]
}
if container_is_running "$name"; then
echo "$name is already running, aborting"
exit 1
else
docker run --rm -it --name "$name" "${command[@]}"
fi
run-this-one-docker.sh
#!/usr/bin/env bash
if [[ "$#" -lt 2 ]]; then
echo "Usage: ./run-this-one-docker.sh NAME COMMAND"
echo "Example: ./run-this-one-docker.sh temp alpine sleep 10"
exit 1
fi
name="$1"
command=("${@:2}")
container_is_running() {
[ "$( docker container inspect -f '{{.State.Running}}' "$1" 2> /dev/null)" == "true" ]
}
if container_is_running "$name"; then
echo "killing old $name"
docker kill "$name" > /dev/null
fi
docker run --rm -it --name "$name" "${command[@]}"
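If the run-one semantics (rather than run-this-one) are all you need, flock(1) is another way to guarantee a single instance without pkill; a minimal sketch (the lock file path and wrapper name are illustrative, and it does not handle killing the old container):

```shell
#!/bin/bash
# run_once COMMAND... - run COMMAND only if no other invocation currently
# holds the lock; with -n, flock fails immediately instead of waiting.
lockfile=/tmp/run-once-demo.lock
run_once() {
  flock -n "$lockfile" -c "$*"
}
```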

CMD does not run if used after ENTRYPOINT

I have the following docker file
FROM confluentinc/cp-kafka-connect:5.3.1
RUN apt-get update && apt-get -y install cron
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/name/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN chmod +x /usr/share/kafka-connect-script/plugins-config.sh
ENTRYPOINT [ "./etc/confluent/docker/run" ]
CMD ["/usr/share/kafka-connect-script/plugins-config.sh"]
And the bash file is this:
#!/bin/bash
#script to configure kafka connect with plugins
# export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
# export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=60
echo "Waiting for Kafka Connect to start listening on localhost" >> log.log
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT" >> log.log
while [[ $(eval $curl_command) -eq 000 && $sleep_second_counter -lt $max_seconds_to_wait ]]
do
echo "In" >> log.log
echo -e $date " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter" >> log.log
echo "Going to sleep for $sleep_second seconds" >> log.log
sleep $sleep_second
echo "Finished sleeping" >> log.log
((sleep_second_counter+=$sleep_second))
echo "Finished counter" >> log.log
done
echo "Out" >> log.log
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
/bin/bash
The entrypoint gets called correctly, but CMD is never invoked.
I also tried to follow the solution given in CMD doesn't run after ENTRYPOINT in Dockerfile, but I did not understand it.
Could someone explain what is wrong here?
What I am trying to accomplish
I am trying to have a single docker container image that starts the kafka-connect server (ENTRYPOINT) and then configures the plugins via a bash file (CMD). The requirement is that the same sequence of steps runs every time the container restarts.
CMD is run after ENTRYPOINT: its items are appended as arguments to the ENTRYPOINT command, like parameters after a function invocation, in the same command line.
In your case you want two different commands to run sequentially. You can add them to a startup_script.sh whose content is:
#!/bin/bash
./etc/confluent/docker/run & # run in background not to get stuck in here
/usr/share/kafka-connect-script/plugins-config.sh # apply configuration
sleep 100000000 # keep this script alive; if it exits, the container stops
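A variant of the same idea avoids the arbitrary sleep by waiting on the background server process, so the container exits when the server does; a runnable sketch with stand-in commands (the real paths from the question are replaced so it works anywhere):

```shell
#!/bin/bash
# The background job stands in for ./etc/confluent/docker/run,
# and the variable assignment stands in for plugins-config.sh.
(sleep 1; echo "server finished") &
server_pid=$!
config_done="yes"          # configuration would happen here
wait "$server_pid"         # blocks until the "server" exits
server_exit=$?
```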

How to run command conditionally in docker compose

I want to run a command conditionally in docker-compose, because the first time someone runs this application they have to run the migrate command before the Django application works properly. But if their container has already run migrate, there is no need to run it again.
So this is the command that checks whether migrate has already been run:
if [[ -z $(python3 zeus/manage.py showmigrations | grep '\[ \]')]]; then
echo 'no need to migrate'
else
echo 'need to migrate'
fi
This is my docker-compose.
version: '3'
services:
db:
image: postgres
web:
command: >
bash -c "if [[ -z $(python3 zeus/manage.py showmigrations | grep '\[ \]')]]; then
echo 'no need to migrate'
else
echo 'need to migrate'
fi && python3 zeus/manage.py runserver 0.0.0.0:8000
"
But this error occurs:
ERROR: Invalid interpolation format for "build" option in service
"web": "bash -c "if [[ -z $(python3 zeus/manage.py showmigrations | grep '\[ \]')]]; then
echo 'no need to migrate' else echo 'need to migate' fi
&& python3 zeus/manage.py runserver 0.0.0.0:8000""
Any idea?
Edit
The migration check runs fine when I execute it in a normal bash shell.
I think docker-compose can't parse the $(python3 manage.py .....) part.
Try this:
version: '3'
services:
db:
image: postgres
web:
command: bash -c "if [[ -z $$(python3 zeus/manage.py showmigrations | grep '\\[ \\]') ]]; then
echo 'no need to migrate';
else
echo 'need to migrate';
fi && python3 zeus/manage.py runserver 0.0.0.0:8000"
There were three problems: you need to escape the escape character \, add a second $ so Compose doesn't try to interpolate the command substitution, and add a space before the last ]].
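The spacing and quoting fixes can be checked locally; a small sketch of the same test with the showmigrations output replaced by sample text (check is an illustrative name):

```shell
#!/bin/bash
# grep '\[ \]' matches pending migrations in showmigrations output;
# note the space before ]] and the quoted command substitution.
check() {
  if [[ -z "$(printf '%s\n' "$1" | grep '\[ \]')" ]]; then
    echo "no need to migrate"
  else
    echo "need to migrate"
  fi
}
```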
Try to avoid writing complicated scripts in docker-compose.yml, especially if they're for normal parts of your application setup.
A typical pattern is to put this sort of setup in an entrypoint script. That script ends with the shell command exec "$@". In a Docker context, that tells it to replace itself with the command (from a Dockerfile CMD statement or Docker Compose command:). For your example this could look like
#!/bin/sh
if [ -z "$(python3 zeus/manage.py showmigrations | grep '\[ \]')" ]; then
echo 'no need to migrate'
else
echo 'need to migrate'
fi
exec "$@"
Then in your Dockerfile, copy this file in and specify it as the ENTRYPOINT; leave your CMD that runs your application unmodified.
COPY entrypoint.sh /app
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]
CMD python3 zeus/manage.py runserver 0.0.0.0:8000
The ENTRYPOINT statement must be the JSON-array form and must not have an explicit sh -c wrapper in it.
If you want to verify that things have gotten set up correctly, you can run
docker-compose run web sh
and you will get a shell at the point where exec "$@" is: after your migrations and other setup have run, but instead of your main server process.
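The entrypoint pattern is easy to try outside Docker; a sketch that writes a similar entrypoint to a temporary file and runs it (the path and messages are illustrative):

```shell
#!/bin/bash
# The entrypoint does its setup, then exec replaces it with whatever
# arguments it received - exactly what Docker passes from CMD.
cat > /tmp/entrypoint_demo.sh <<'EOF'
#!/bin/sh
echo "setup done"
exec "$@"
EOF
chmod +x /tmp/entrypoint_demo.sh
out=$(/tmp/entrypoint_demo.sh echo "main command ran")
```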

Starting multiple services using shell script in Dockerfile

I am creating a Dockerfile to install and start the WebLogic 12c services using startup scripts at docker run. I pass a shell script to the CMD instruction that executes the startWeblogic.sh and startNodeManager.sh scripts. But when I log in to the container, only the first script, startWeblogic.sh, has started; the second never runs, which is obvious from the docker logs.
The same script, executed manually inside the container, starts both services. What is the right instruction for running the script so that it starts multiple processes in a container without the container exiting?
What am I missing in this script and in the Dockerfile? I know a container should run only one process, but, in a dirty way, how do I start multiple services for an application like WebLogic, which has a nameserver, node manager, and managed server, and creates managed domains and machines? The managed server can only be started when the WebLogic nameserver is running.
Script: startscript.sh
#!/bin/bash
# Start the first process
/u01/app/oracle/product/wls122100/domains/verdomain/bin/startWebLogic.sh -D
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start my_first_process: $status"
exit $status
fi
# Start the second process
/u01/app/oracle/product/wls122100/domains/verdomain/bin/startNodeManager.sh -D
status=$?
if [ $status -ne 0 ]; then
echo "Failed to start my_second_process: $status"
exit $status
fi
while sleep 60; do
ps aux |grep "Name=adminserver" |grep -q -v grep
PROCESS_1_STATUS=$?
ps aux |grep node |grep -q -v grep
PROCESS_2_STATUS=$?
# If the greps above find anything, they exit with 0 status
# If they are not both 0, then something is wrong
if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
echo "One of the processes has already exited."
exit 1
fi
done
The Dockerfile (truncated):
RUN unzip $WLS_PKG
RUN $JAVA_HOME/bin/java -Xmx1024m -jar /u01/app/oracle/$WLS_JAR -silent -responseFile /u01/app/oracle/wls.rsp -invPtrLoc /u01/app/oracle/oraInst.loc > install.log
RUN rm -f $WLS_PKG
RUN . $WLS_HOME/server/bin/setWLSEnv.sh && java weblogic.version
RUN java weblogic.WLST -skipWLSModuleScanning create_basedomain.py
WORKDIR /u01/app/oracle
CMD ./startscript.sh
docker build and run commands:
docker build -f Dockerfile-weblogic --tag="weblogic12c:startweb" /var/dprojects
docker run -d -it weblogic12c:startweb
docker exec -it 6313c4caccd3 bash
Please use supervisord for running multiple services in a docker container. It will make the whole process more robust and reliable.
Run supervisord -n as your CMD command and configure all your services in /etc/supervisord.conf.
Sample conf would look like:
[program:WebLogic]
command=/u01/app/oracle/product/wls122100/domains/verdomain/bin/startWebLogic.sh -D
stderr_logfile = /var/log/supervisord/WebLogic-stderr.log
stdout_logfile = /var/log/supervisord/WebLogic-stdout.log
autorestart=unexpected
[program:NodeManager]
command=/u01/app/oracle/product/wls122100/domains/verdomain/bin/startNodeManager.sh -D
stderr_logfile = /var/log/supervisord/NodeManager-stderr.log
stdout_logfile = /var/log/supervisord/NodeManager-stdout.log
autorestart=unexpected
It will handle all the things you are trying to do with a shell script.
Hope it helps!
