MAVEN: How to run something BEFORE each plugin execution

I have the following script, which works. It starts a Docker MySQL container and initializes the database before running several executions of the Liquibase plugin that each do different things.
function die { mvn docker:stop ; exit 1; }
function initializeDatabase {
docker exec -it mysql-1 mysql -uroot -proot \
-e "DROP USER IF EXISTS 'myuser'#'%';" \
-e "DROP DATABASE IF EXISTS mydb;" \
-e "CREATE DATABASE mydb;" \
-e "CREATE USER 'myuser'#'%' IDENTIFIED WITH mysql_native_password BY 'mypassword';" \
-e "GRANT ALL PRIVILEGES ON mydb.* TO 'myuser'#'%';"
}
mvn docker:start && sleep 10
initializeDatabase || die
mvn liquibase:update@execution1 || die
initializeDatabase || die
mvn liquibase:update@execution2 || die
initializeDatabase || die
mvn liquibase:update@execution3 || die
initializeDatabase || die
mvn liquibase:update@execution4 || die
initializeDatabase || die
mvn liquibase:update@execution5 || die
initializeDatabase || die
mvn liquibase:update@execution6 || die
mvn docker:stop
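Incidentally, since the initialize/update pattern repeats, the middle of the script could be condensed into a loop over the execution ids (same commands as above, nothing new):
for exec_id in execution1 execution2 execution3 execution4 execution5 execution6; do
  initializeDatabase || die
  mvn liquibase:update@${exec_id} || die
done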
It occurs to me that I could have the docker start happen during the pre-integration-test phase, run all the liquibase executions during the integration-test phase, and have the docker stop happen in the post-integration-test phase. However, I don't know how to initialize the database as the root user before each execution of the liquibase plugin.
Any thoughts on how I could do this so that it would just happen seamlessly as part of the maven build lifecycle? Thanks!

Related

Not getting a build failure status even when the build run is not successful (cloud-build remote builder)

Cloud Build is not showing a build failure status.
I created my own remote-builder, which copies all files from /workspace to my instance with scp and runs the build using gcloud compute ssh -- COMMAND.
remote-builder
#!/bin/bash
USERNAME=${USERNAME:-admin}
REMOTE_WORKSPACE=${REMOTE_WORKSPACE:-/home/${USERNAME}/workspace/}
GCLOUD=${GCLOUD:-gcloud}
KEYNAME=builder-key
ssh-keygen -t rsa -N "" -f ${KEYNAME} -C ${USERNAME} || true
chmod 400 ${KEYNAME}*
cat > ssh-keys <<EOF
${USERNAME}:$(cat ${KEYNAME}.pub)
EOF
${GCLOUD} compute scp --compress --recurse \
$(pwd)/ ${USERNAME}@${INSTANCE_NAME}:${REMOTE_WORKSPACE} \
--ssh-key-file=${KEYNAME}
${GCLOUD} compute ssh --ssh-key-file=${KEYNAME} \
${USERNAME}@${INSTANCE_NAME} -- ${COMMAND}
Below is an example of the code to run the build (cloudbuild.yaml):
steps:
- name: gcr.io/$PROJECT_ID/remote-builder
  env:
  - COMMAND="docker build -t [image_name]:[tagname] -f Dockerfile ."
During the docker build inside the Dockerfile it failed and showed errors in the log, but the status still showed SUCCESS.
Can anyone help me resolve this?
Thanks in advance.
Try adding
|| exit 1
at the end of your docker command. Alternatively, you might just need to change the entrypoint to 'bash' and run the script manually.
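Following that suggestion, the COMMAND in the cloudbuild.yaml above would become (image name and tag are still placeholders):
- COMMAND="docker build -t [image_name]:[tagname] -f Dockerfile . || exit 1"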
To confirm -- the first part was the run-on.sh script, and the second part was your cloudbuild.yaml right? I assume you trigger the build manually via UI and/or REST API?
I wrote all the docker commands in a bash script and added the error-handling code below to it.
handle_error() {
echo "FAILED: line $1, exit code $2"
exit 1
}
trap 'handle_error $LINENO $?' ERR
It works!
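For reference, a minimal sketch of such a wrapper script (the docker build command and image name are only examples):
#!/bin/bash
# Abort the Cloud Build step with a non-zero exit code on the first failing command
handle_error() {
  echo "FAILED: line $1, exit code $2"
  exit 1
}
trap 'handle_error $LINENO $?' ERR

# Example build command; any non-zero exit triggers the trap above
docker build -t [image_name]:[tagname] -f Dockerfile .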

Organization of commands in makefile

I have the following command which runs integration tests:
run-it:
	docker-compose -f docker-compose-tests.yml pull && \
	docker-compose -f docker-compose-tests.yml up --build & \
	sleep 150s && dotnet test --filter TestCategory=Integration.Tests $(SOLUTION) ; docker-compose -f docker-compose-tests.yml down
I want to:
1. pull all containers
2. run docker compose in the background
3. run the integration tests 150 seconds later.
But it looks like the tests start running before the pull command has completed.
I want step 1 to run first, then step 2 to start, and step 3 to start 150 seconds after step 2 has started.
Before worrying about make, you should ensure that you can run the commands correctly from the shell prompt. In this case you're misunderstanding how the background token & works; it applies to the entire preceding portion of the command line. Basically, you're running the equivalent of this:
( docker-compose -f docker-compose-tests.yml pull && \
docker-compose -f docker-compose-tests.yml up --build ) & \
sleep 150s && ...
Since you've put both the pull and the up into the background, the sleep and the tests start running concurrently with the pull.
Try adding some parens to put the backgrounded process into a subshell:
docker-compose -f docker-compose-tests.yml pull && \
( docker-compose -f docker-compose-tests.yml up --build & ) && \
sleep 150s && ...
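Putting that back into the makefile, the whole target might look like this (a sketch; the compose file, test filter, and $(SOLUTION) variable are taken from the question, and the recipe lines must be tab-indented):
run-it:
	docker-compose -f docker-compose-tests.yml pull && \
	( docker-compose -f docker-compose-tests.yml up --build & ) && \
	sleep 150s && dotnet test --filter TestCategory=Integration.Tests $(SOLUTION) ; \
	docker-compose -f docker-compose-tests.yml down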

How to write a Bash script that uses 3-4 commit keywords to launch different types of automated UI tests in CircleCI?

Previously it was working for regression and sanity tests.
Inside the config.yml file:
- run: if (git log --format=%B -n 1 $CIRCLE_SHA1) | grep -iqF release; then cd <company_folder> && mvn clean && mvn test -e -X -Dwebdriver.chrome.driver=../../../chromedriver -Dsurefire.suiteXmlFiles=regression.xml; else cd <company_folder> && mvn clean && mvn test -e -X -Dwebdriver.chrome.driver=../../../chromedriver -Dsurefire.suiteXmlFiles=smoke.xml; fi
When we added a case for the 'debug' commit keyword, all builds started failing.
- run: if (git log --format=%B -n 1 $CIRCLE_SHA1) | grep -iqF release; then cd <company_folder> && mvn clean && mvn test -e -X -Dwebdriver.chrome.driver=../../../chromedriver -Dsurefire.suiteXmlFiles=regression.xml; elif (git log --format=%B -n 1 $CIRCLE_SHA1) | grep -iqF debug; then cd <company_folder> && mvn clean && mvn test -e -X -Dwebdriver.chrome.driver=../../../chromedriver -Dsurefire.suiteXmlFiles=debug.xml; else cd <company_folder> && mvn clean && mvn test -e -X -Dwebdriver.chrome.driver=../../../chromedriver -Dsurefire.suiteXmlFiles=smoke.xml; fi
We want one 'Debug' commit keyword for debugging failed tests, because we don't want to wait an hour until all the tests complete.
Please help.
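The same selection logic is easier to read (and to extend with more keywords) as a small script that the run step calls. A sketch, assuming the same suite XML files, chromedriver path, and the <company_folder> placeholder from the question:
#!/bin/bash
set -e
COMMIT_MSG=$(git log --format=%B -n 1 "$CIRCLE_SHA1")

# Pick the suite XML file based on a keyword in the commit message
if echo "$COMMIT_MSG" | grep -iqF release; then
  SUITE=regression.xml
elif echo "$COMMIT_MSG" | grep -iqF debug; then
  SUITE=debug.xml
else
  SUITE=smoke.xml
fi

cd <company_folder>
mvn clean
mvn test -e -X -Dwebdriver.chrome.driver=../../../chromedriver -Dsurefire.suiteXmlFiles="$SUITE"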

Return a result value from shell script for tests in Docker container to Jenkins CI

I want to use Jenkins CI to execute my automated end2end tests. My tests run with Nightwatch.js, and I want to run them in a self-created Docker container via a shell script. The shell script runs perfectly on my local machine, but when I start it from Jenkins CI it has to return a value indicating whether the tests passed or failed: if they pass, the Jenkins job has to pass too, and when the tests in the Docker container fail, the Jenkins job has to fail as well.
Here is my current shell script:
#run the end2end tests headless in docker container from local with remote repository
#parameter: $1 folder on local machine to copy in project sources (with cucumber reports after test execution)
# (IMPORTANT: You have to ensure that the given folder path is in the file sharing paths in Docker configuration!!!)
# $2 git url with credentials to the GitLab repo (e.g. https://<username>:<password>@gitlab.com/hrsinnolab/e2e-web-tests.git)
# $3 defines the browser to run (scripts in package.json)
# $4 branch to run the tests against (optional; if empty, 'master' is used by default)
#examples:
# run the tests with chrome against the branch 'NIKITA-1234'
#./runTestsLocal /Users/me/e2e-tests/reports/ https://me:mypassword@gitlab.com/hrsinnolab/e2e-web-tests.git test-chrome NIKITA-1234
# run the tests with firefox against the default 'master'
#./runTestsLocal /Users/me/e2e-tests/reports/ https://me:mypassword@gitlab.com/hrsinnolab/e2e-web-tests.git test-firefox
docker_image=grme/nightwatch-chrome-firefox:0.0.2
echo "------ stop all Docker containers ------" \
&& (docker stop $(docker ps -a -q) || echo "------ all Docker containers are still stopped ------") \
&& echo "------ remove all Docker containers ------" \
&& (docker rm $(docker ps -a -q) || echo "------ all Docker containers are still removed ------") \
&& echo "------ pull Docker image '"$docker_image"' from Docker Cloud ------" \
&& docker pull "$docker_image" \
&& echo "------ start Docker container from image ------" \
&& docker run -d -t -i -v $1:/my_tests/ "$docker_image" /bin/bash \
&& echo "------ execute end2end tests on Docker container ------" \
&& docker exec -it $(docker ps --format "{{.Names}}") bash -c \
"rm -Rf /my_tests/project \
&& git clone $2 /my_tests/project \
&& cd /my_tests/project \
&& git checkout $4 \
&& npm install \
&& npm install -y nightwatch-cucumber#7.1.10 \
&& npm install -y chromedriver#2.30.1 \
&& npm install -y geckodriver#1.7.1 \
&& npm install -y cucumber-html-reporter#2.0.3 \
&& npm install -y multiple-cucumber-html-reporter#0.2.0 \
&& xvfb-run --server-args='-screen 0 1600x1200x24' npm run $3" \
&& echo "------ cleanup all temporary files ------" \
&& rm -Rf $1/project/tmp-* \
&& rm -Rf $1/project/.com.google* \
&& echo "------ stop all Docker containers again ------" \
&& (docker stop $(docker ps -a -q) || echo "------ all Docker containers are still stopped ------") \
&& echo "------ remove all Docker containers again ------" \
&& (docker rm $(docker ps -a -q) || echo "------ all Docker containers are still removed ------")
How can I return a value to give the Jenkins job information about the test status? I know this may be difficult, because I execute a shell script which starts a Docker container and runs the tests inside that container.
The only thing you might need from my answer below could be:
exit $?
$? gives you the return value of your previous command/script.
Full Answer:
I have a similar setup for executing the Nightwatch tests in a Docker container inside Jenkins:
RC=0
docker exec -i ${DOCKER_NIGHTWATCH} node_modules/.bin/nightwatch --suiteRetries 1 || RC=$?
exit $RC
The container is obviously started beforehand with -d and a sleep.
But actually I don't think it is good practice to rely on an exit code from a shell script when executing the tests. What I also did for this job is start the container with a -v ${WORKSPACE}/reports:/nightwatch/reports argument and add the Post-build Action "Publish JUnit test result report" with the reports folder specified. This gives you better information about your system's health.
Hope that helps you! :)
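A condensed sketch of that setup (the image name, container name, and sleep duration are made up; the volume mount and exec command follow the answer above):
# Start the test container beforehand, detached, with the reports folder mounted
# back into the Jenkins workspace so the JUnit publisher can pick up the results
docker run -d --name nightwatch -v ${WORKSPACE}/reports:/nightwatch/reports my-nightwatch-image sleep 3600

# Run the tests and capture their exit code without aborting the script
RC=0
docker exec -i nightwatch node_modules/.bin/nightwatch --suiteRetries 1 || RC=$?

# Clean up and hand the test result back to Jenkins
docker rm -f nightwatch
exit $RC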

Docker kill not working when executed in shell script

The following works fine when running the commands manually line by line in the terminal:
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
docker stop test
docker rm test
But when I run it as a shell script, the Docker container is neither stopped nor removed.
#!/usr/bin/env bash
set -e
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
docker stop test
docker rm test
How can I make it work from within a shell script?
If you use set -e, the script will exit when any command fails, i.e. when a command's return code is != 0. This means that if your start, exec, or stop fails, you will be left with the container still there.
You can remove the set -e, but you probably still want to use the return code of the go test command as the overall return code.
#!/usr/bin/env bash
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
rc=$?
docker stop test
docker rm test
exit $rc
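With that change the script's exit status mirrors the go test result, so a caller or CI step can check it directly, for example (the script name here is hypothetical):
./run-go-tests.sh
echo "go tests exited with $?"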
Trap
Using set -e is actually quite useful and catches a lot of issues that are silently ignored in most scripts. A slightly more complex solution is to use a trap to run your cleanup steps on EXIT, which means set -e can stay enabled.
#!/usr/bin/env bash
set -e
# Set a default return code
RC=2
# Cleanup
function cleanup {
echo "Removing container"
docker stop test || true
docker rm -f test || true
exit $RC
}
trap cleanup EXIT
# Test steps
docker create -it --name test path
docker start test
docker exec test /bin/sh -c "go test ./..."
RC=$?  # only reached if the tests passed; on failure, set -e exits immediately and the trap reports the default RC
