I need to send a curl request to the container itself while it is starting.
But after nginx starts, no further commands run, because nginx stays in the foreground.
Example of entrypoint.sh:
echo "Starting memcached"
memcached memcache -m 1024 -d
service memcached restart
echo "Starting php-fpm"
php-fpm -D
echo "Starting Nginx"
nginx -g 'daemon off;'
# !! this part is not working !!
check_robots=$(wp wpc check_robots)
echo "Starting check robots"
if [ "$check_robots" != "Robots checked successfully!" ]; then
echo "ERROR: Robots check failed"
exit 1
fi
exec "$@"
Not sure if it would work, but launching nginx into the background directly could be a solution, like so:
nginx -g 'daemon off;' &
I haven't tested this in a Docker entrypoint, though.
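A minimal, self-contained sketch of that idea, with `sleep 1` standing in for `nginx -g 'daemon off;'` and a hard-coded string standing in for `$(wp wpc check_robots)`: background the long-running service, run the one-shot check, then `wait` so the entrypoint keeps the container alive.

```shell
#!/bin/sh
# Background the service that would otherwise block the script.
# In the real entrypoint this would be: nginx -g 'daemon off;' &
sleep 1 &
service_pid=$!

# Run the post-start check. In the real entrypoint:
# check_robots=$(wp wpc check_robots)
check_robots="Robots checked successfully!"
if [ "$check_robots" != "Robots checked successfully!" ]; then
    echo "ERROR: Robots check failed"
    exit 1
fi
echo "Robots check passed"

# Re-attach to the service so the container stays up while it runs.
wait "$service_pid"
```

The `wait` at the end is what keeps PID 1 alive once the check has finished; without it the script (and the container) would exit while nginx is still running.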
I have the following Dockerfile:
FROM confluentinc/cp-kafka-connect:5.3.1
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/KarthikDuggirala/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
# COPY script and make it executable
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN ["chmod", "+x", "/usr/share/kafka-connect-script/plugins-config.sh"]
#entrypoint
ENTRYPOINT [ "./usr/share/kafka-connect-script/plugins-config.sh" ]
and the following bash script
#!/bin/bash
#script to configure kafka connect with plugins
#export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
#export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=30
echo "Waiting for Kafka Connect to start listening on localhost"
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT"
while [[ $(eval $curl_command) -eq 000 ]]
do
echo "In"
echo -e $date " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter"
echo "Going to sleep for $sleep_second seconds"
# sleep $sleep_second
echo "Finished sleeping"
# ((sleep_second_counter+=$sleep_second))
echo "Finished counter"
done
echo "Out"
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
I run the container and use docker logs to see what's happening. I expect the script to run and wait until Kafka Connect has started, but after a few seconds the script (or something else, I don't know what) hangs and I no longer see any console output.
I am a bit lost as to what is wrong, so I need some guidance on what I am missing, or whether this is simply not the correct approach.
What I am trying to do
I want to wait for Kafka Connect to start and then run this curl command:
curl -X POST -H "Content-Type: application/json" --data '{"name":"","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' http://localhost:8083/connectors
PS: I cannot use docker-compose for this, since in some places I have to use docker run.
The problem here is that the ENTRYPOINT runs when the container starts and prevents the default CMD from running: the script loops forever waiting for the server to come up, and the server never starts because the CMD never runs.
You need to do one of the following:
start the Kafka Connect server in your ENTRYPOINT and run your script as the CMD, or run your script outside the container.
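One way to sketch the first option in the Dockerfile (untested; this assumes `/etc/confluent/docker/run` is the stock entrypoint of the confluentinc base image, which starts Kafka Connect):

```dockerfile
FROM confluentinc/cp-kafka-connect:5.3.1
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN chmod +x /usr/share/kafka-connect-script/plugins-config.sh
# Launch Kafka Connect in the background, run the wait-and-configure
# script, then re-attach to the server process.
CMD /etc/confluent/docker/run & \
    /usr/share/kafka-connect-script/plugins-config.sh && \
    wait
```

This way the server the script polls for is actually started, so the wait loop can terminate.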
For testing, I try to run the gcloud dev server inside Docker with this command:
sudo /usr/local/gcloud/google-cloud-sdk/bin/java_dev_appserver.sh --disable_update_check --port=8888 --help /app/target/qdacity-war/ 2>&1 | sudo tee /app/logs/devserver.log > /dev/null &
To check if the devserver has started successfully, I use this script:
#!/bin/bash
# This script waits until the port 8888 is open.
SERVER=localhost
PORT=8888
TIMEOUT=180
TIME_INTERVAL=2
PORT_OPEN=1
PORT_CLOSED=0
time=0
isPortOpen=0
while [ $time -lt $TIMEOUT ] && [ $isPortOpen -eq $PORT_CLOSED ];
do
# Connect to the port
(echo > /dev/tcp/$SERVER/$PORT) >/dev/null 2>&1
if [ $? -ne 0 ]; then
isPortOpen=$PORT_CLOSED
else
isPortOpen=$PORT_OPEN
fi
time=$(($time+$TIME_INTERVAL))
sleep $TIME_INTERVAL
done
if [ $isPortOpen -eq $PORT_OPEN ]; then
echo "Port is open after ${time} seconds."
# Give the server more time to properly start
sleep 10
else
echo "Reached the timeout (${TIMEOUT} seconds). The port ${SERVER}:${PORT} is not available."
exit 1
fi
After running all the tests, I just got back:
Reached the timeout (180 seconds). The port localhost:8888 is not available.
I couldn't find out whether there were problems starting the devserver or with querying the port.
Does anyone have an idea or solution?
Thanks!
By default the server accepts only localhost/loopback traffic, so you're unable to access it remotely.
Please try adding --address=0.0.0.0
(link) to your java_dev_appserver.sh command.
Example
Used a variant of Google's HelloWorld sample.
Ran this with mvn appengine:run (to confirm it works and build the WAR).
Then /path/to/bin/java_dev_appserver.sh ./target/hellofreddie-0.1 (to confirm it works with the local development server).
Then used Google's Cloud SDK container image (link), mounted the previously generated WAR directory into it, and ran the server on :9999:
docker run \
--interactive \
--tty \
--publish=9999:9999 \
--volume=${PWD}/target:/target \
google/cloud-sdk \
/usr/lib/google-cloud-sdk/bin/java_dev_appserver.sh \
--address=0.0.0.0 \
--port=9999 \
./target/hellofreddie-0.1
I am able to curl the endpoint:
curl \
--silent \
--location \
--write-out "%{http_code}" \
--output /dev/null \
localhost:9999
returns 200
And running your script, adjusted with PORT=9999, returns:
Port is open after 2 seconds.
I'm using Docker with Rancher v1.6, setting up a Nextcloud stack.
I would like to use a dedicated container for running cron tasks every 15 minutes.
The "normal" Nextcloud Docker image can simply use the following:
entrypoint: |
bash -c 'bash -s <<EOF
trap "break;exit" SIGHUP SIGINT SIGTERM
while /bin/true; do
su -s "/bin/bash" -c "/usr/local/bin/php /var/www/html/cron.php" www-data
echo $$(date) - Running cron finished
sleep 900
done
EOF'
(Pulled from this GitHub post)
However, the Alpine-based image does not include bash, so this cannot be used.
I found this script in the list of examples:
#!/bin/sh
set -eu
exec busybox crond -f -l 0 -L /dev/stdout
However, I cannot seem to get that working from my docker-compose.yml file.
I don't want to use an external file; I want the script entirely inside docker-compose.yml, to make preparation and changes a bit easier.
Thank you!
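An untested sketch of an sh-only variant for the Alpine image, translating the bash block above to busybox /bin/sh (the paths and the www-data user are carried over from the bash version and may differ in your image):

```yaml
entrypoint: |
  /bin/sh -c '
  trap "exit" HUP INT TERM
  while true; do
    su -s "/bin/sh" -c "/usr/local/bin/php /var/www/html/cron.php" www-data
    echo "$$(date) - Running cron finished"
    sleep 900
  done'
```

The `$$` is Compose escaping for a literal `$`, as in the bash version above.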
I have a starting docker script here:
#!/usr/bin/env bash
set -e
echo '>>> Get old container id'
CID=$(sudo docker ps --all | grep "web-client" | awk '{print $1}')
echo $CID
echo '>>> Stopping and deleting old container'
if [ "$CID" != "" ];
then
sudo docker stop $CID
sudo docker rm $CID
fi
echo '>>> Starting new container'
sudo docker pull my-example-registry.com:5050/web-client:latest
sudo docker run --name=web-client -p 8080:80 -d my-example-registry.com:5050/web-client:latest
The problem is that this script behaves improperly: it deletes the old container every time it is run, even when there is no newer image.
The "starting new container" section pulls the most recent image. Here is an example of the docker pull output when the local image is up to date:
Status: Image is up to date for
my-example-registry:5050/web-client:latest
Is there any way to improve my script by adding a condition: before anything else, check via docker pull whether the local image is the most recent version available on the registry. Only if a newer image was pulled should the script stop and delete the old container and docker run the newly pulled image.
In this script, how do I parse the status to check whether the local image corresponds to the most recent one available on the registry?
Maybe a docker command can do the trick, but I didn't manage to find a useful one.
Check the pull output for the string "Image is up to date" to know whether the local image was already current. Note that an exit inside a ( ... ) subshell would only exit the subshell, so use an if block:
if sudo docker pull my-example-registry.com:5050/web-client:latest | grep -q "Image is up to date"; then
    echo "Already up to date. Exiting..."
    exit 0
fi
So change your script to:
#!/usr/bin/env bash
set -e
if sudo docker pull my-example-registry.com:5050/web-client:latest | grep -q "Image is up to date"; then
    echo "Already up to date. Exiting..."
    exit 0
fi
echo '>>> Get old container id'
CID=$(sudo docker ps --all | grep "web-client" | awk '{print $1}')
echo $CID
echo '>>> Stopping and deleting old container'
if [ "$CID" != "" ];
then
sudo docker stop $CID
sudo docker rm $CID
fi
echo '>>> Starting new container'
sudo docker run --name=web-client -p 8080:80 -d my-example-registry.com:5050/web-client:latest
Simply use docker-compose and you can remove all of the above:
docker-compose pull && docker-compose up
This pulls the image if a newer one exists, and up only recreates the container if it actually has a newer image; otherwise it does nothing.
If you're using docker compose, here's my solution: I put my latest docker-compose.yml into an image right after pushing all of the images that docker-compose.yml references.
The server runs this as a cron job:
#!/usr/bin/env bash
docker login --username username --password password
if (( $? > 0 )) ; then
echo 'Failed to login'
exit 1
fi
# Grab latest config, if the image is different then we have a new update to make
pullContents=$(docker pull my/registry:config-holder)
if (( $? > 0 )) ; then
echo 'Failed to pull image'
exit 1
fi
if echo "$pullContents" | grep -q "Image is up to date" ; then
echo 'Image already up to date'
exit 0
fi
cd /srv/www/
# Grab latest docker-compose.yml that we'll be needing
docker run -d --name config-holder my/registry:config-holder
docker cp config-holder:/home/docker-compose.yml docker-compose-new.yml
docker stop config-holder
docker rm config-holder
# Use new yml to pull latest images
docker-compose -f docker-compose-new.yml pull
# Stop server
docker-compose down
# Replace old yml file with our new one, and spin back up
mv docker-compose-new.yml docker-compose.yml
docker-compose up -d
Config holder dockerfile:
FROM bash
# This image exists just to hold the docker-compose.yml, so that when remote-updating, the server can pull it, get the latest docker-compose file, and then pull the images it references
COPY docker-compose.yml /home/docker-compose.yml
# Ensures that the image is subtly different every time we deploy. This is required because we want the server to detect that this image has changed, to trigger a new deployment
RUN bash -c "touch random.txt; echo $(echo $RANDOM | md5sum | head -c 20) >> random.txt"
# Wait forever
CMD exec bash -c "trap : TERM INT; sleep infinity & wait"
I am trying to create a bash utility script to check whether the Docker daemon is running on my server.
Is there a better way of checking whether the Docker daemon is running than parsing output like this?
ps -ef | grep docker
root 1250 1 0 13:28 ? 00:00:04 /usr/bin/dockerd --selinux-enabled
root 1598 1250 0 13:28 ? 00:00:00 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc
root 10997 10916 0 19:47 pts/0 00:00:00 grep --color=auto docker
I would like to create a bash shell script that checks whether my Docker daemon is running. If it is running, do nothing; if it is not, start the Docker daemon.
My pseudocode is something like the following. I am thinking of parsing the output of ps -ef, but I would like to know if there is a more efficient way.
if(docker is not running)
run docker
end
P.S.
I am no Linux expert; I just need this utility for my own environment.
I made a little script (macOS) to ensure Docker is running by checking the exit code of docker stats.
#!/bin/bash
#Open Docker, only if is not running
if (! docker stats --no-stream ); then
# On Mac OS this would be the terminal command to launch Docker
open /Applications/Docker.app
#Wait until Docker daemon is running and has completed initialisation
while (! docker stats --no-stream ); do
# Docker takes a few seconds to initialize
echo "Waiting for Docker to launch..."
sleep 1
done
fi
#Start the Container..
This works for me on Ubuntu:
$ systemctl status docker
You have a utility called pgrep on almost all Linux systems.
You can just do:
pgrep -f docker > /dev/null || echo "starting docker"
Replace the echo command with your docker starting command.
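For example (a sketch; the start command in the comment is an assumption and depends on your init system; `-x dockerd` matches the daemon's process name exactly, which avoids the false positives `-f docker` can give):

```shell
# pgrep's exit status drives the branch: 0 if a matching process exists.
if pgrep -x dockerd >/dev/null; then
    echo "docker daemon already running"
else
    echo "starting docker"    # e.g. sudo systemctl start docker
fi
```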
if curl -s --unix-socket /var/run/docker.sock http://localhost/_ping >/dev/null 2>&1
then
echo "Running"
else
echo "Not running"
fi
Ref: Docker API v1.28
The following works on macOS, and on Windows if Git Bash is installed. On macOS, open /Applications/Docker.app starts the Docker daemon. I haven't seen anything similar for Windows, however.
## check docker is running at all
## based on https://stackoverflow.com/questions/22009364/is-there-a-try-catch-command-in-bash
{
## will throw an error if the docker daemon is not running and jump
## to the next code chunk
docker ps -q
} || {
echo "Docker is not running. Please start docker on your computer"
echo "When docker has finished starting up press [ENTER] to continue"
read
}
You can simply run:
docker version > /dev/null 2>&1
The exit code of that command is stored in $?, so you can check: if it is 0, Docker is running.
docker version exits 1 if the daemon is not running. If other issues are encountered, such as Docker not being installed at all, the exit code will vary.
But at the end of the day, if Docker is installed and the daemon is running, the exit code will be 0.
The > /dev/null redirects stdout to /dev/null, and the 2>&1 then redirects stderr to the same place, silencing the output no matter what the result of the execution was.
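Wrapped as a reusable check, the same idea looks like this (a sketch; the function name is made up, and the script prints one of two messages depending on the exit code):

```shell
# True (exit 0) only when the docker CLI can reach a running daemon.
docker_daemon_up() {
    docker version >/dev/null 2>&1
}

if docker_daemon_up; then
    echo "docker daemon is running"
else
    echo "docker daemon is not running"
fi
```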
You could also just check for the existence of /var/run/docker.pid.
Following #madsonic, I went for the following
#!/bin/bash
if (! docker stats --no-stream 2>/dev/null); then
# On Mac OS this would be the terminal command to launch Docker
open /Applications/Docker.app
echo -n "Waiting for Docker to launch"
sleep 1
# Wait until Docker daemon is running and has completed initialisation
while (! docker stats --no-stream >/dev/null 2>&1); do
# Docker takes a few seconds to initialize
echo -n "."
sleep 1
done
fi
echo
echo "Docker started"
A function could look like this:
isRunning() {
    ps -ef | grep "[d]ocker" | awk '{print $2}'
}
It prints the PIDs of any running docker processes.
I created a script to start, stop, and restart a mongodb server.
You only need to change some paths inside the script, and it should also work for you:
Script
I'm sure you want to start the Docker daemon, so here's the command to start it before executing your docker run statement:
sudo systemctl start docker