Jenkins shell command: wait for docker container to be downloaded, up, and running - shell

I'm using the Docker steps below to bring up a Selenium grid.
My query is how to gracefully wait until a particular container is downloaded, up, and running.
docker run -ti -m 150M --memory-swap 300M --cpu-shares=104 -d -p 4444:4444 --name selenium-hub -e GRID_BROWSER_TIMEOUT=15000 selenium/hub
sleep 10
for i in {1..2}
do
echo "Starting Node: $i"
docker run -ti -m 750M --memory-swap 900M --cpu-shares=460 -d --link selenium-hub:hub -v /dev/shm:/dev/shm selenium/node-chrome
sleep 5
done
Is there a better way of avoiding sleep, as sometimes the container download takes longer?
After the job is done, I stop and remove all the containers in order to do a fresh start for the new job.
Thanks & Regards,
Vikram

You can call the Selenium service with curl and check the result, then loop in a while until the COUNT value is different from zero:
COUNT=$(curl -s localhost:4444 | grep -c 403)   # -s silences curl; count lines containing 403
while [ "$COUNT" -eq 0 ]
do
    sleep 1
    COUNT=$(curl -s localhost:4444 | grep -c 403)
done
Regards
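An alternative that avoids grepping the response body is to poll the hub's status endpoint until it answers (a sketch, assuming the selenium/hub image serves /wd/hub/status on the published port 4444, which current images do):
#!/bin/bash
# Wait, up to a timeout, for the Selenium hub to answer its status endpoint.
TIMEOUT=120
elapsed=0
until curl -sSf http://localhost:4444/wd/hub/status > /dev/null; do
    sleep 2
    elapsed=$((elapsed + 2))
    if [ "$elapsed" -ge "$TIMEOUT" ]; then
        echo "Selenium hub did not become ready within ${TIMEOUT}s" >&2
        exit 1
    fi
done
echo "Selenium hub is up after ${elapsed}s"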

Related

How to connect to netcat running in docker container?

I'm looking for a small REST server to send requests to and execute a scenario.
I found it here:
Create a minimum REST Web Server with netcat nc
I'm trying to execute this small REST server.
Below are the Dockerfile and bash script.
Dockerfile
FROM debian
ADD ./rest.sh /rest.sh
RUN apt-get update \
&& DEBIAN_FRONTEND=noninteractive \
&& apt-get install -y net-tools netcat curl \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& chmod +x /rest.sh
EXPOSE 80
CMD /rest.sh
rest.sh
#!/bin/bash
/bin/rm -f out
/usr/bin/mkfifo out
trap "rm -f out" EXIT
while true
do
/bin/cat out | /bin/nc -l 80 > >( # parse the netcat output, to build the answer redirected to the pipe "out".
export REQUEST=
while read line
do
line=$(echo "$line" | tr -d '[\r\n]')
if /bin/echo "$line" | grep -qE '^GET /' # if line starts with "GET /"
then
REQUEST=$(echo "$line" | cut -d ' ' -f2) # extract the request
elif [ "x$line" = x ] # empty line / end of request
then
HTTP_200="HTTP/1.1 200 OK"
HTTP_LOCATION="Location:"
HTTP_404="HTTP/1.1 404 Not Found"
# call a script here
# Note: REQUEST is exported, so the script can parse it (to answer 200/403/404 status code + content)
if echo $REQUEST | grep -qE '^/echo/'
then
printf "%s\n%s %s\n\n%s\n" "$HTTP_200" "$HTTP_LOCATION" $REQUEST ${REQUEST#"/echo/"} > out
elif echo $REQUEST | grep -qE '^/date'
then
date > out
elif echo $REQUEST | grep -qE '^/stats'
then
vmstat -S M > out
elif echo $REQUEST | grep -qE '^/net'
then
ifconfig > out
else
printf "%s\n%s %s\n\n%s\n" "$HTTP_404" "$HTTP_LOCATION" $REQUEST "Resource $REQUEST NOT FOUND!" > out
fi
fi
done
)
done
docker build -t ncimange .
docker run -d -i -p 80:80 --name ncrest ncimange
docker container ls
IMAGE COMMAND CREATED STATUS PORTS NAMES
ncimange "/bin/sh -c /rest.sh" 8 seconds ago Up 2 seconds 0.0.0.0:80->80/tcp ncrest
docker ps
IMAGE COMMAND CREATED STATUS PORTS NAMES
ncimange "/bin/sh -c /rest.sh" 41 seconds ago Up 34 seconds 0.0.0.0:80->80/tcp ncrest
docker logs ncrest
empty
From host:
curl -i http://127.0.0.1:80/date
curl: (56) Recv failure: Connection reset by peer
From container:
docker exec -it ncrest /bin/bash
netstat -an|grep LISTEN
tcp 0 0 0.0.0.0:41783 0.0.0.0:* LISTEN
curl -i http://127.0.0.1:80/date
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
curl -i http://127.0.0.1:41783/date
curl: (56) Recv failure: Connection reset by peer
How can I connect to the netcat REST docker container?
You are installing the "wrong" netcat. Debian has two netcat packages: netcat-traditional and netcat-openbsd, and they are slightly different. The netcat package is an alias of netcat-traditional.
For example, in your case your nc command should be nc -l -p 80, because nc -l 80 will only work with netcat-openbsd.
tl;dr: install netcat-openbsd instead of netcat if you wish to use your script unmodified.
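With that change, the install line in the Dockerfile above would become (a sketch of just the swapped package):
# netcat-openbsd provides the nc variant that accepts `nc -l 80` as used in rest.sh
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive apt-get install -y net-tools netcat-openbsd curl \
 && apt-get clean \
 && rm -rf /var/lib/apt/lists/* \
 && chmod +x /rest.sh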

Cannot reach port of gcloud appengine devserver

For testing, I try to run the gcloud devserver inside docker with this command:
sudo /usr/local/gcloud/google-cloud-sdk/bin/java_dev_appserver.sh --disable_update_check --port=8888 --help /app/target/qdacity-war/ 2>&1 | sudo tee /app/logs/devserver.log > /dev/null &
To check if the devserver has started successfully, I use this script:
#!/bin/bash
# This script waits until the port 8888 is open.
SERVER=localhost
PORT=8888
TIMEOUT=180
TIME_INTERVAL=2
PORT_OPEN=1
PORT_CLOSED=0
time=0
isPortOpen=0
while [ $time -lt $TIMEOUT ] && [ $isPortOpen -eq $PORT_CLOSED ];
do
# Connect to the port
(echo > /dev/tcp/$SERVER/$PORT) >/dev/null 2>&1
if [ $? -ne 0 ]; then
isPortOpen=$PORT_CLOSED
else
isPortOpen=$PORT_OPEN
fi
time=$(($time+$TIME_INTERVAL))
sleep $TIME_INTERVAL
done
if [ $isPortOpen -eq $PORT_OPEN ]; then
echo "Port is open after ${time} seconds."
# Give the server more time to properly start
sleep 10
else
echo "Reached the timeout (${TIMEOUT} seconds). The port ${SERVER}:${PORT} is not available."
exit 1
fi
After running all the tests, I just got back:
Reached the timeout (180 seconds). The port localhost:8888 is not available.
I couldn't find out if there were any problems starting the devserver or querying the port.
Does anyone have an idea or solution?
Thanks!
By default, the dev server only accepts localhost/loopback traffic, so you're unable to access it from outside the container.
Please try adding --address=0.0.0.0
(link) to your java_dev_appserver.sh command.
Example
Used a variant of Google's HelloWorld sample.
Ran this with mvn appengine:run (to confirm it works and build the WAR).
Then /path/to/bin/java_dev_appserver.sh ./target/hellofreddie-0.1 (to confirm it works with the local development server).
Then used Google's Cloud SDK container image (link), mounted the previously generated WAR directory into it, and ran the server on :9999:
docker run \
--interactive \
--tty \
--publish=9999:9999 \
--volume=${PWD}/target:/target \
google/cloud-sdk \
/usr/lib/google-cloud-sdk/bin/java_dev_appserver.sh \
--address=0.0.0.0 \
--port=9999 \
./target/hellofreddie-0.1
Am able to curl the endpoint:
curl \
--silent \
--location \
--write-out "%{http_code}" \
--output /dev/null \
localhost:9999
returns 200
And, running your scripts adjusted with PORT=9999 returns:
Port is open after 2 seconds.

Looping over arguments in bash array for docker commands?

I seem to be stuck here. I'm attempting to write a bash function that starts x number of docker containers, with an array that holds the exposed ports for each app. I don't want to loop over the array, just the commands, while referencing the array to get the value. The function looks like this:
#!/bin/bash
declare -a HOSTS=( ["app1"]="8002"
["app2"]="8003"
["app3"]="8008"
["app4"]="8009"
["app5"]="8004"
["app6"]="8007"
["app7"]="8006" )
start() {
for app in "$#"; do
if [ "docker ps|grep $app" == "$app" ]; then
docker stop "$app"
fi
docker run -it --rm -d --network example_example \
--workdir=/home/docker/app/src/projects/"$app" \
--volume "${PWD}"/example:/home/docker/app/src/example \
--volume "${PWD}"/projects:/home/docker/app/src/projects \
--volume "${PWD}"/docker_etc/example:/etc/example \
--volume "${PWD}"/static:/home/docker/app/src/static \
--name "$app" --hostname "$app" \
--publish "${HOSTS["$app"]}":"${HOSTS["$app"]}" \
example ./manage.py runserver 0.0.0.0:"${HOSTS[$app]}";
echo "$app"
done
}
And I want to pass arguments like so:
./script.sh start app1 app2 app4
Right now it isn't echoing the app so that points towards the for loop being declared incorrectly...could use some pointers on this.
This line:
if [ "docker ps|grep $app" == "$app" ];
doesn't do what you want. It looks like you mean to say:
if [ "$(docker ps | grep "$app")" == "$app" ];
but you could fail to detect two copies of the application running, and you aren't looking for the application as a word (so if you look for rm you might find perform running and think rm was running).
You should consider, therefore, using:
if docker ps | grep -w -q "$app"
then …
fi
This runs the docker command and pipes the result to grep, and reports on the exit status of grep. The -w makes grep match the value of "$app" only as a whole word, and -q makes it quiet, so grep only reports success (exit status 0) if it found at least one matching line, and failure (non-zero exit status) otherwise.
docker ps -f lets you conveniently check programmatically whether a particular container is running.
for app in "$@"; do    # "$@" expands to the arguments; "$#" is only their count
    if docker ps -q -f name="$app" | grep -q .; then
        docker stop "$app"
    fi
    :
Unfortunately, docker ps does not set its exit code (at least not in the versions I have available -- I think it has been fixed in some development version after 17.06 but I'm not sure) so we have to use an ugly pipe to grep -q . to check whether the command produced any output. The -q flag just minimizes the amount of stuff it prints (it will print just the container ID instead of a bunch of headers and columnar output for each matching container).
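Putting both fixes together, a sketch of the function (trimmed to the essential flags; note that string keys like "app1" also require an associative array, declared with declare -A rather than declare -a):
#!/bin/bash
declare -A HOSTS=( ["app1"]="8002" ["app2"]="8003" ["app4"]="8009" )   # shortened for the example

start() {
    for app in "$@"; do                                # the function's arguments, e.g. app1 app2 app4
        if docker ps -q -f name="$app" | grep -q .; then
            docker stop "$app"                         # stop an already-running copy first
        fi
        docker run -it --rm -d --network example_example \
            --name "$app" --hostname "$app" \
            --publish "${HOSTS[$app]}":"${HOSTS[$app]}" \
            example ./manage.py runserver 0.0.0.0:"${HOSTS[$app]}"
        echo "$app"
    done
}

"$1" "${@:2}"   # dispatch, so that `./script.sh start app1 app2 app4` calls start app1 app2 app4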

Bash parse docker status to check if local image is up to date

I have a starting docker script here:
#!/usr/bin/env bash
set -e
echo '>>> Get old container id'
CID=$(sudo docker ps --all | grep "web-client" | awk '{print $1}')
echo $CID
echo '>>> Stopping and deleting old container'
if [ "$CID" != "" ];
then
sudo docker stop $CID
sudo docker rm $CID
fi
echo '>>> Starting new container'
sudo docker pull my-example-registry.com:5050/web-client:latest
sudo docker run --name=web-client -p 8080:80 -d my-example-registry.com:5050/web-client:latest
The fact is this script gives an improper result: it deletes the old container every time the script is run.
The "starting new container" section will pull the most recent image. Here is an example output of docker pull when the local image is up to date:
Status: Image is up to date for
my-example-registry:5050/web-client:latest
Is there any way to improve my script by adding a condition: before anything, check via docker pull whether the local image is the most recent version available on the registry. Then, if a newer image was pulled, proceed with stopping and deleting the old container and running the newly pulled image.
In this script, how can I parse the status to check whether the local image corresponds to the most up-to-date one available on the registry?
Maybe a docker command can do the trick, but I didn't manage to find a useful one.
Check the output of docker pull for the string "Image is up to date" to know whether the local image was already current. If it was, there is nothing to redeploy, so exit early. Note that an exit inside a (...) subshell only leaves the subshell, so use an if (or a { ...; } group) instead:
if sudo docker pull my-example-registry.com:5050/web-client:latest | grep -q "Image is up to date"; then
    echo 'Already up to date. Exiting...'
    exit 0
fi
So change your script to:
#!/usr/bin/env bash
set -e
if sudo docker pull my-example-registry.com:5050/web-client:latest | grep -q "Image is up to date"; then
    echo 'Already up to date. Exiting...'
    exit 0
fi
echo '>>> Get old container id'
CID=$(sudo docker ps --all | grep "web-client" | awk '{print $1}')
echo $CID
echo '>>> Stopping and deleting old container'
if [ "$CID" != "" ];
then
sudo docker stop $CID
sudo docker rm $CID
fi
echo '>>> Starting new container'
sudo docker run --name=web-client -p 8080:80 -d my-example-registry.com:5050/web-client:latest
Simply use docker-compose and you can remove all of the above.
docker-compose pull && docker-compose up
This will pull the image if it exists, and up will only recreate the container if there is actually a newer image; otherwise it will do nothing.
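As a sketch, the whole script above then reduces to something like this (assuming a docker-compose.yml defining the web-client service sits in the project directory; the path is a placeholder):
#!/usr/bin/env bash
set -e
cd /path/to/compose/project     # placeholder: wherever docker-compose.yml lives
docker-compose pull             # fetch a newer image if the registry has one
docker-compose up -d            # recreates the container only when its image changed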
If you're using docker-compose, here's my solution: I put my latest docker-compose.yml into an image right after I've pushed all of the needed images that are in docker-compose.yml.
The server runs this as a cron job:
#!/usr/bin/env bash
docker login --username username --password password
if (( $? > 0 )) ; then
echo 'Failed to login'
exit 1
fi
# Grab latest config, if the image is different then we have a new update to make
pullContents=$(docker pull my/registry:config-holder)
if (( $? > 0 )) ; then
echo 'Failed to pull image'
exit 1
fi
if echo $pullContents | grep "Image is up to date" ; then
echo 'Image already up to date'
exit 0
fi
cd /srv/www/
# Grab latest docker-compose.yml that we'll be needing
docker run -d --name config-holder my/registry:config-holder
docker cp config-holder:/home/docker-compose.yml docker-compose-new.yml
docker stop config-holder
docker rm config-holder
# Use new yml to pull latest images
docker-compose -f docker-compose-new.yml pull
# Stop server
docker-compose down
# Replace old yml file with our new one, and spin back up
mv docker-compose-new.yml docker-compose.yml
docker-compose up -d
Config holder dockerfile:
FROM bash
# This image exists just to hold the docker-compose.yml, so when updating remotely the server can pull this image, grab the latest docker-compose file, and then pull the images it references
COPY docker-compose.yml /home/docker-compose.yml
# Ensures that the image is subtly different every time we deploy. This is required because we want the server to see that this image has changed, to trigger a new deployment
RUN bash -c "touch random.txt; echo $(echo $RANDOM | md5sum | head -c 20) >> random.txt"
# Wait forever
CMD exec bash -c "trap : TERM INT; sleep infinity & wait"
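For completeness, the publishing side of that workflow might look like this (a sketch; the Dockerfile name is an assumption, while the image tag matches the snippet above):
# Rebuild and push the config-holder image after pushing the service images
docker build -t my/registry:config-holder -f Dockerfile.config-holder .
docker push my/registry:config-holder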

How to check if docker daemon is running?

I am trying to create a bash utility script to check if the docker daemon is running on my server.
Is there a better way of checking if the docker daemon is running on my server other than running code like this?
ps -ef | grep docker
root 1250 1 0 13:28 ? 00:00:04 /usr/bin/dockerd --selinux-enabled
root 1598 1250 0 13:28 ? 00:00:00 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc
root 10997 10916 0 19:47 pts/0 00:00:00 grep --color=auto docker
I would like to create a bash shell script that will check if my docker daemon is running. If it is running, then do nothing, but if it is not, then start the docker daemon.
My pseudocode is something like this. I am thinking of parsing the output of ps -ef, but I would just like to know if there is a more efficient way of implementing my pseudocode.
if(docker is not running)
run docker
end
P.S.
I am no Linux expert; I just need this utility for my own environment.
I made a little script (macOS) to ensure Docker is running by checking the exit code of docker stats.
#!/bin/bash
#Open Docker, only if is not running
if (! docker stats --no-stream ); then
# On Mac OS this would be the terminal command to launch Docker
open /Applications/Docker.app
#Wait until Docker daemon is running and has completed initialisation
while (! docker stats --no-stream ); do
# Docker takes a few seconds to initialize
echo "Waiting for Docker to launch..."
sleep 1
done
fi
#Start the Container..
This works for me on Ubuntu
$ systemctl status docker
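For use inside a script rather than interactively, a non-interactive variant (a sketch, assuming a systemd-based distribution):
# Start the docker daemon only if systemd does not report it as active
if ! systemctl is-active --quiet docker; then
    sudo systemctl start docker
fi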
You have a utility called pgrep on almost all Linux systems.
You can just do:
pgrep -f docker > /dev/null || echo "starting docker"
Replace the echo command with your docker starting command.
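For example (a sketch; the start command assumes a systemd-based system):
# If no dockerd process is found, start the service
pgrep -x dockerd > /dev/null || sudo systemctl start docker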
if curl -s --unix-socket /var/run/docker.sock http://localhost/_ping > /dev/null 2>&1
then
echo "Running"
else
echo "Not running"
fi
Ref: Docker api v1.28
The following works on macOS, and on Windows if Git Bash is installed. On macOS, open /Applications/Docker.app would start the docker daemon. I haven't seen anything similar for Windows, however.
## check docker is running at all
## based on https://stackoverflow.com/questions/22009364/is-there-a-try-catch-command-in-bash
{
## will throw an error if the docker daemon is not running and jump
## to the next code chunk
docker ps -q
} || {
echo "Docker is not running. Please start docker on your computer"
echo "When docker has finished starting up press [ENTER} to continue"
read
}
You can simply:
docker version > /dev/null 2>&1
The exit code of that command will be stored in $?, so you can check it: if it's 0, then docker is running.
docker version will exit with 1 if the daemon is not running. If other issues are encountered, such as docker not being installed at all, the exit code will vary.
But at the end of the day, if docker is installed and the daemon is running, the exit code will be 0.
The 2>&1 redirects stderr to stdout and > /dev/null redirects stdout to /dev/null, practically silencing the output no matter what the result of the execution was.
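As a usage sketch matching the pseudocode in the question (the start command is an assumption for systemd systems):
if ! docker version > /dev/null 2>&1; then
    # Daemon not reachable (or docker not installed); try to start it
    sudo systemctl start docker
fi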
You could also just check for the existence of /var/run/docker.pid.
Following #madsonic, I went for the following
#!/bin/bash
if (! docker stats --no-stream 2>/dev/null); then
# On Mac OS this would be the terminal command to launch Docker
open /Applications/Docker.app
echo -n "Waiting for Docker to launch"
sleep 1
# Wait until Docker daemon is running and has completed initialisation
while (! docker stats --no-stream >/dev/null 2>&1); do
# Docker takes a few seconds to initialize
echo -n "."
sleep 1
done
fi
echo
echo "Docker started"
A function could look like this:
isRunning() {
    # print the PIDs of any running docker processes ([d]ocker avoids matching the grep itself)
    ps -ef | grep "[d]ocker" | awk '{print $2}'
}
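It could then be used like this (a sketch; the start command again assumes systemd):
if [ -z "$(isRunning)" ]; then
    sudo systemctl start docker
fi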
I created a script to start, stop, and restart a mongodb server.
You only need to change some paths inside the script, and it will also work for you:
Script
I'm sure you want to start the docker daemon, so here's the command to start it before executing your docker run statement:
sudo systemctl start docker
