How do I generate reports in Jenkins if my shell script fails? Is there a plugin? - shell

I have a Jenkins job that deploys to my CentOS machine by running a docker-compose file. This is how my shell script looks:
#!/bin/sh
# Post steps for deployment
# Navigate to deployment scripts
cd /deployment/scripts/v1.0.784
#Execute the uninstall script
./dit_undeploy_all.sh
set +e
#Remove all docker images and containers
docker container rm $(docker ps -a -q) -f
docker rmi $(docker images -a -q)
docker volume rm $(docker volume ls -q)
set -e
#Remove and clear out the folder structure
rm -rf *.*
#Gets all the latest files from Artifactory by reading from the input file
wget -B https://artifactory.gue.com -i /deployment/scripts/inputFile.txt
# Give execute access
chmod +x *.*
# Execute docker compose file to get all the latest containers
./dit_deploy_all.sh
#Add wait time for the services to be up and running
sleep 60s #Wait 60 sec
# Need to update the URL
./dit_create_policies.sh
#Verify URL Status Code of 200
cd /deployment/scripts
sleep 60s #Wait time 60s
./verifyHttpCode
The script ./verifyHttpCode does the following:
#!/bin/bash
while read -r LINE; do
  curl -o /dev/null --silent --head --write-out '%{http_code}' "$LINE"
  echo " $LINE"
done < url-list.txt
So basically, after the deployment it verifies the HTTP status code. What's the equivalent of TestNG for shell scripts that I can use in Jenkins to verify the HTTP status code and generate reports?

Have you tried this plugin: Audit To Database (Audit2DB)?
https://wiki.jenkins.io/display/JENKINS/Audit+To+Database+Plugin
It logs all the required job details into a database; when you want to create a report, it fetches them from the DB.
In this case you'll need to fail the job (a non-zero exit) on script failure. You can also set $http_code as an environment variable on completion of your job and log it into the DB as well.
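If the goal is TestNG-style pass/fail reporting rather than audit logging, another option is to have the verification script emit a JUnit-style XML file and publish it with Jenkins' built-in JUnit report support. A minimal sketch, assuming the same url-list.txt input (the results.xml path is just an example):
#!/bin/bash
# Sketch: probe each URL and write a JUnit-style XML report that a
# "Publish JUnit test result report" post-build step can consume.
report=results.xml
fail=0
{
  echo '<?xml version="1.0" encoding="UTF-8"?>'
  echo '<testsuite name="http-checks">'
  while read -r url; do
    code=$(curl -o /dev/null --silent --head --write-out '%{http_code}' "$url")
    if [ "$code" = "200" ]; then
      echo "  <testcase name=\"$url\"/>"
    else
      echo "  <testcase name=\"$url\"><failure message=\"HTTP $code\"/></testcase>"
      fail=1
    fi
  done < url-list.txt
  echo '</testsuite>'
} > "$report"
exit $fail   # a non-zero exit also marks the Jenkins build as failed
Pointing the JUnit publisher at results.xml then gives a per-URL pass/fail history across builds.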

Related

Docker run to execute script in mount without exiting container automatically?

I have a simple bash script 'test.sh' in the root of the mounted folder:
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
However, when I try to mount the folder and start the container with docker run as follows:
docker run -d -p 8000:8787 -e ROOT=true -e DISABLE_AUTH=true --name container -v mount-folder/:/home/rstudio/ image_name /home/rstudio/test.sh
the above run command starts the container, but it exits automatically.
I am looking for a docker run command that starts the container, mounts the folder, and then executes the bash script in the mount-folder without exiting the container.
(I don't want to go with docker exec, as it is not suitable for my use case for other reasons.)
Dockerfile:
FROM rocker/rstudio:4.0.2
# some RUN commands to install necessary R packages
EXPOSE 8787
CMD tail -f /dev/null
Other details:
The image I am using is RStudio Server from rocker, and the container runs on an AWS Ubuntu machine.
Edit:
I have also tried adding CMD tail -f /dev/null at the end of the Dockerfile, as suggested in http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/; even then the container exits.
Docker containers shut down automatically when their main process exits, regardless of detached mode. I think this article proposes a nice solution:
http://bigdatums.net/2017/11/07/how-to-keep-docker-containers-running/
You could add tail -f /dev/null as the last command in your bash script instead so that the script will never halt unless it is told to do so.
When you do docker run [options] image_name [cmd], the command you specify becomes the command for the container and replaces the command specified in the Dockerfile (that's why adding CMD tail -f /dev/null doesn't do anything). If you ran your container without the /home/rstudio/test.sh at the end, it should stay running.
The solution would be to update your script to add the tail command at the end.
#!/bin/bash
Rscript -e "source('/home/rstudio/mount-folder/src/controller.R')";
exec tail -f /dev/null
If you can't update that script, you could instead add it to the command being passed to the container, with something like:
docker run [options] image_name bash -c '/home/rstudio/test.sh && exec tail -f /dev/null'
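A note on this form: because of the &&, the container will also exit if test.sh itself fails, which makes failures visible in docker ps -a; use ; instead if the container should stay up regardless. The exec means tail replaces the shell as the container's main process rather than leaving an idle shell in between.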

Running a bash script after the kafka-connect Docker container is up and running

I have the following Dockerfile
FROM confluentinc/cp-kafka-connect:5.3.1
ENV CONNECT_PLUGIN_PATH=/usr/share/java
# JDBC-MariaDB
RUN wget -nv -P /usr/share/java/kafka-connect-jdbc/ https://downloads.mariadb.com/Connectors/java/connector-java-2.4.4/mariadb-java-client-2.4.4.jar
# SNMP Source
RUN wget -nv -P /tmp/ https://github.com/KarthikDuggirala/kafka-connect-snmp/releases/download/0.0.1.11/kafka-connect-snmp-0.0.1.11.tar.gz
RUN mkdir /tmp/kafka-connect-snmp && tar -xf /tmp/kafka-connect-snmp-0.0.1.11.tar.gz -C /tmp/kafka-connect-snmp/
RUN mv /tmp/kafka-connect-snmp/usr/share/kafka-connect/kafka-connect-snmp /usr/share/java/
# COPY script and make it executable
COPY plugins-config.sh /usr/share/kafka-connect-script/plugins-config.sh
RUN ["chmod", "+x", "/usr/share/kafka-connect-script/plugins-config.sh"]
#entrypoint
ENTRYPOINT [ "./usr/share/kafka-connect-script/plugins-config.sh" ]
and the following bash script
#!/bin/bash
#script to configure kafka connect with plugins
#export CONNECT_REST_ADVERTISED_HOST_NAME=localhost
#export CONNECT_REST_PORT=8083
url=http://$CONNECT_REST_ADVERTISED_HOST_NAME:$CONNECT_REST_PORT/connectors
curl_command="curl -s -o /dev/null -w %{http_code} $url"
sleep_second=5
sleep_second_counter=0
max_seconds_to_wait=30
echo "Waiting for Kafka Connect to start listening on localhost"
echo "HOST: $CONNECT_REST_ADVERTISED_HOST_NAME , PORT: $CONNECT_REST_PORT"
while [[ $(eval $curl_command) -eq 000 ]]
do
  echo "In"
  echo -e $date " Kafka Connect listener HTTP state: " $(eval $curl_command) " (waiting for 200) $sleep_second_counter"
  echo "Going to sleep for $sleep_second seconds"
  # sleep $sleep_second
  echo "Finished sleeping"
  # ((sleep_second_counter+=$sleep_second))
  echo "Finished counter"
done
echo "Out"
nc -vz $CONNECT_REST_ADVERTISED_HOST_NAME $CONNECT_REST_PORT
I run the container and use docker logs to see what's happening, expecting the script to run and wait until Kafka Connect has started. But after a few seconds the script (or something, I don't know what) hangs, and I don't see any console prints anymore.
I am a bit lost as to what is wrong, so I need some guidance on what I am missing, or whether this is not the correct way to do it.
What I am trying to do
I want to have logic that waits for Kafka Connect to start and then runs the curl command:
curl -X POST -H "Content-Type: application/json" --data '{"name":"","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' http://localhost:8083/connectors
PS: I cannot use the docker-compose way to do it, since in some places I have to use docker run.
The problem here is that your ENTRYPOINT runs when the container starts and prevents the image's default CMD from running, so Kafka Connect itself is never launched; your script then loops forever waiting for a server that will never come up.
You need to do one of the following:
Start the Kafka Connect server in your ENTRYPOINT and run your script as the CMD, or run your script outside the container.
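To follow the first suggestion, one approach is a wrapper script that launches the image's stock entrypoint in the background and only then polls the REST port. A rough sketch, assuming /etc/confluent/docker/run is the original launcher for this cp-kafka-connect tag (verify that for your image) and using a placeholder connector name:
#!/bin/bash
# Sketch: start Kafka Connect via the image's original launcher, then configure it.
/etc/confluent/docker/run &

url="http://${CONNECT_REST_ADVERTISED_HOST_NAME}:${CONNECT_REST_PORT}/connectors"
until [[ $(curl -s -o /dev/null -w '%{http_code}' "$url") -eq 200 ]]; do
  echo "Waiting for Kafka Connect at $url ..."
  sleep 5
done

# "snmp-source" is a placeholder connector name
curl -X POST -H "Content-Type: application/json" \
  --data '{"name":"snmp-source","config":{"connector.class":"com.github.jcustenborder.kafka.connect.snmp.SnmpTrapSourceConnector","topic":"fm_snmp"}}' \
  "$url"

wait   # keep the container alive as long as the Connect process runs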

Not getting build failure status even when the build does not run successfully (cloud-build remote builder)

Cloud Build is not showing the build failure status.
I created my own remote-builder, which scps all files from /workspace to my instance and runs the build using gcloud compute ssh -- COMMAND
remote-builder
#!/bin/bash
USERNAME=${USERNAME:-admin}
REMOTE_WORKSPACE=${REMOTE_WORKSPACE:-/home/${USERNAME}/workspace/}
GCLOUD=${GCLOUD:-gcloud}
KEYNAME=builder-key
ssh-keygen -t rsa -N "" -f ${KEYNAME} -C ${USERNAME} || true
chmod 400 ${KEYNAME}*
cat > ssh-keys <<EOF
${USERNAME}:$(cat ${KEYNAME}.pub)
EOF
${GCLOUD} compute scp --compress --recurse \
$(pwd)/ ${USERNAME}@${INSTANCE_NAME}:${REMOTE_WORKSPACE} \
--ssh-key-file=${KEYNAME}
${GCLOUD} compute ssh --ssh-key-file=${KEYNAME} \
${USERNAME}@${INSTANCE_NAME} -- ${COMMAND}
Below is an example of the code that runs the build (cloudbuild.yaml):
steps:
- name: gcr.io/$PROJECT_ID/remote-builder
  env:
  - COMMAND="docker build -t [image_name]:[tagname] -f Dockerfile ."
During docker build, a step inside the Dockerfile failed and the errors show up in the log, but the status shows SUCCESS.
Can anyone help me resolve this?
Thanks in advance.
Try adding
|| exit 1
at the end of your docker command. Alternatively, you might just need to change the entrypoint to 'bash' and run the script manually.
To confirm: the first part was the run-on.sh script, and the second part was your cloudbuild.yaml, right? I assume you trigger the build manually via the UI and/or REST API?
I wrote all the docker commands in a bash script and added the error-handling code below to it.
handle_error() {
  echo "FAILED: line $1, exit code $2"
  exit 1
}
trap 'handle_error $LINENO $?' ERR
It works!
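Combining the two answers, the script executed on the remote instance might look like this sketch (the docker build line is a placeholder for your actual commands):
#!/bin/bash
set -euo pipefail   # abort on the first failing command

handle_error() {
  echo "FAILED: line $1, exit code $2"
  exit 1
}
trap 'handle_error $LINENO $?' ERR

# placeholder for the real build steps
docker build -t my-image:latest -f Dockerfile .
Since ssh (and therefore gcloud compute ssh) returns the remote command's exit status, the non-zero exit propagates back to remote-builder and Cloud Build reports FAILURE.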

Unable to Find Entrypoint For Nextcloud (Alpine-based Version) For a Cron Container

I'm using Docker with Rancher v1.6, setting up a Nextcloud stack.
I would like to use a dedicated container for running cron tasks every 15 minutes.
The "normal" Nextcloud Docker image can simply use the following:
entrypoint: |
  bash -c 'bash -s <<EOF
  trap "break;exit" SIGHUP SIGINT SIGTERM
  while /bin/true; do
    su -s "/bin/bash" -c "/usr/local/bin/php /var/www/html/cron.php" www-data
    echo $$(date) - Running cron finished
    sleep 900
  done
  EOF'
(Pulled from this GitHub post)
However, the Alpine-based image does not have bash, and so it cannot be used.
I found this script in the list of examples:
#!/bin/sh
set -eu
exec busybox crond -f -l 0 -L /dev/stdout
However, I cannot seem to get that working with my docker-compose.yml file.
I don't want to use an external file; I'd rather have the script entirely in the docker-compose.yml file, to make preparation and changes a bit easier.
Thank you!
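For reference, an untested sketch of the same loop in POSIX sh, inlined in docker-compose.yml the same way as the bash version above (the php path and the www-data user match the stock image, but verify them for the Alpine tag):
entrypoint: |
  /bin/sh -c '
  trap "exit" HUP INT TERM
  while true; do
    su -s /bin/sh -c "php -f /var/www/html/cron.php" www-data
    echo "$$(date) - Running cron finished"
    sleep 900
  done'
Alternatively, the busybox crond script from the examples can be inlined the same way as a one-liner: /bin/sh -c 'exec busybox crond -f -l 0 -L /dev/stdout'.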

Bash parse docker status to check if local image is up to date

I have a starting docker script here:
#!/usr/bin/env bash
set -e
echo '>>> Get old container id'
CID=$(sudo docker ps --all | grep "web-client" | awk '{print $1}')
echo $CID
echo '>>> Stopping and deleting old container'
if [ "$CID" != "" ];
then
sudo docker stop $CID
sudo docker rm $CID
fi
echo '>>> Starting new container'
sudo docker pull my-example-registry.com:5050/web-client:latest
sudo docker run --name=web-client -p 8080:80 -d my-example-registry.com:5050/web-client:latest
The fact is, this script gives an improper result: it deletes the old container every time it is run.
The "starting new container" section will pull the most recent image. Here is an example of the docker pull output if the local image is up to date:
Status: Image is up to date for my-example-registry:5050/web-client:latest
Is there any way to improve my script by adding a condition: before anything else, check via docker pull whether the local image is the most recent version available on the registry. Only if a newer version was pulled, proceed with stopping and deleting the old container and docker run the newly pulled image.
In this script, how do I parse the status to check whether the local image corresponds to the most up-to-date version available on the registry?
Maybe a docker command can do the trick, but I didn't manage to find a useful one.
Check for the string "Image is up to date" in the docker pull output to know whether the local image was already current. Note that an exit inside a (...) subshell only leaves the subshell, not the script, so an if block is the reliable way to stop early:
if sudo docker pull my-example-registry.com:5050/web-client:latest | grep "Image is up to date"; then
  echo 'Already up to date. Exiting...'
  exit 0
fi
So change your script to:
#!/usr/bin/env bash
set -e
if sudo docker pull my-example-registry.com:5050/web-client:latest | grep "Image is up to date"; then
  echo 'Already up to date. Exiting...'
  exit 0
fi
echo '>>> Get old container id'
CID=$(sudo docker ps --all | grep "web-client" | awk '{print $1}')
echo $CID
echo '>>> Stopping and deleting old container'
if [ "$CID" != "" ];
then
sudo docker stop $CID
sudo docker rm $CID
fi
echo '>>> Starting new container'
sudo docker run --name=web-client -p 8080:80 -d my-example-registry.com:5050/web-client:latest
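A note on the design: grepping docker pull's human-readable output is fragile, since the wording can change between Docker versions. A sturdier sketch compares the local image ID before and after the pull:
#!/usr/bin/env bash
set -e
image=my-example-registry.com:5050/web-client:latest
# image ID before the pull (empty if the image is not present yet)
before=$(sudo docker image inspect --format '{{.Id}}' "$image" 2>/dev/null || true)
sudo docker pull "$image"
after=$(sudo docker image inspect --format '{{.Id}}' "$image")
if [ "$before" = "$after" ]; then
  echo 'Already up to date. Exiting...'
  exit 0
fi
# ...stop, remove and restart the container as above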
Simply use docker-compose and you can remove all of the above:
docker-compose pull && docker-compose up
This will pull the image, if it exists, and up will only recreate the container if it actually has a newer image; otherwise it will do nothing.
If you're using docker-compose, here's my solution: I put my latest docker-compose.yml into an image right after I've pushed all of the images that docker-compose.yml needs.
The server runs this as a cron job:
#!/usr/bin/env bash
docker login --username username --password password
if (( $? > 0 )); then
  echo 'Failed to login'
  exit 1
fi
# Grab latest config; if the image is different then we have a new update to make
pullContents=$(docker pull my/registry:config-holder)
if (( $? > 0 )); then
  echo 'Failed to pull image'
  exit 1
fi
if echo "$pullContents" | grep "Image is up to date"; then
  echo 'Image already up to date'
  exit 0
fi
cd /srv/www/
# Grab latest docker-compose.yml that we'll be needing
docker run -d --name config-holder my/registry:config-holder
docker cp config-holder:/home/docker-compose.yml docker-compose-new.yml
docker stop config-holder
docker rm config-holder
# Use new yml to pull latest images
docker-compose -f docker-compose-new.yml pull
# Stop server
docker-compose down
# Replace old yml file with our new one, and spin back up
mv docker-compose-new.yml docker-compose.yml
docker-compose up -d
Config holder dockerfile:
FROM bash
# This image exists just to hold the docker-compose.yml, so when updating remotely the server can pull this, grab the latest docker-compose file, and then pull the images it references
COPY docker-compose.yml /home/docker-compose.yml
# Ensures that the image is subtly different every time we deploy. This is required because we want the server to see that this image has changed, to trigger a new deployment
RUN bash -c "touch random.txt; echo $(echo $RANDOM | md5sum | head -c 20) >> random.txt"
# Wait forever
CMD exec bash -c "trap : TERM INT; sleep infinity & wait"
