TeamCity build to fail if number of failed tests is more than N% of total tests

Is it possible to configure a TeamCity build to fail if the number of failed tests is more than N% of the passed tests in that build?
I've only discovered an option to fail based on a metric comparing the current build against some previous one.
But I need to fail a build based on the failed/passed test ratio within the same build.

It can be done by using the TeamCity REST API to get the failed/passed counts, plus configuration parameters and service messages for controlling the build status:
turn off 'at least one test failed' in the build's failure conditions
add threshold (%), tc_url and tc_token as build parameters
add a new build step using the script below (or equivalent)
#!/bin/bash
# read the threshold from the TeamCity build parameter
threshold=%threshold%
# get test counts for this build from the TeamCity REST API
# (%teamcity.build.id% resolves to the current build's id)
curl -s -X GET "%tc_url%/app/rest/builds/%teamcity.build.id%/testOccurrences?locator=count:1,status:failed,status:passed" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer %tc_token%" \
  -o test_results.json
# read the counts (default to 0 when a field is absent)
passed=$(jq '.passed // 0' test_results.json)
failed=$(jq '.failed // 0' test_results.json)
# calculate the failure percentage with integer arithmetic,
# guarding against division by zero when no tests ran
total=$((passed + failed))
failPercent=0
if [ "$total" -gt 0 ]; then
  failPercent=$((100 * failed / total))
fi
# fail the build via a service message if the threshold is exceeded
if [ "$failPercent" -gt "$threshold" ]; then
  echo "##teamcity[buildStatus status='FAILURE' text='$failPercent% of tests failed (>$threshold%)']"
fi

Mark Gitlab CI stage as passed based on condition in job log

I have a bash script that executes a Dataflow job, like so:
./deploy.sh
python3 main.py \
--runner DataflowRunner \
--region us-west1 \
--job_name name \
--project project \
--autoscaling_algorithm THROUGHPUT_BASED \
--max_num_workers 10 \
--environment prod \
--staging_location gs staging loc \
--temp_location temp loc \
--setup_file ./setup.py \
--subnetwork subnetwork \
--experiments use_network_tags=internal-ssh-server
So I use GitLab CI to run this:
.gitlab-ci.yml
Deploy Prod:
  stage: Deploy Prod
  environment: production
  script:
    - *setup-project
    - pip3 install --upgrade pip;
    - pip install -r requirements.txt;
    - chmod +x deploy.sh
    - ./deploy.sh
  only:
    - master
So now my code runs and logs both in the GitLab pipeline and in the logs viewer in Dataflow. What I want is that once GitLab sees JOB_STATE_RUNNING, it marks the pipeline as passed and stops outputting logs to GitLab. Maybe there's a way to do this in the bash script? Or can it be done in GitLab CI?
GitLab doesn't have this capability as a feature, so your only option is to script the solution.
Something like this should work to monitor the output and exit once that text is encountered.
python3 myscript.py > logfile.log &
( tail -f -n0 logfile.log & ) | grep -q "JOB_STATE_RUNNING"
echo "Job is running. We're done here"
exit 0
reference: https://superuser.com/a/900134/654263
You'd need to worry about some other things in the script, but that's the basic idea. GitLab unfortunately has no success condition based on trace output.
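If you'd rather avoid the detached tail process from the snippet above, a polling variant of the same idea can work too. This is a sketch, not the answer's exact approach; the log file name, marker string, and timeout are assumptions:

```shell
# Poll a log file until a marker line appears, or give up after a
# deadline (in seconds).
wait_for_marker() {
  local logfile=$1 marker=$2 deadline=$3 waited=0
  while ! grep -q "$marker" "$logfile" 2>/dev/null; do
    sleep 1
    waited=$((waited + 1))
    if [ "$waited" -ge "$deadline" ]; then
      return 1   # timed out
    fi
  done
  return 0       # marker found
}

# hypothetical usage: start the job, then wait up to 10 minutes
# python3 myscript.py > logfile.log 2>&1 &
# wait_for_marker logfile.log "JOB_STATE_RUNNING" 600 && exit 0
```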
A little late, but I hope this helps someone. The way I solved it was to create two scripts.
build.sh
#!/bin/bash
# This helps submit the beam dataflow job using nohup and parse the results for job submission
nohup stdbuf -oL bash ~/submit_beam_job.sh &> ~/output.log &
# Wait for log file to appear before checking on it
sleep 5
# Count to make sure the while loop is not stuck
cnt=1
while ! grep -q "JOB_STATE_RUNNING" ~/output.log; do
echo "Job has not started yet, waiting for it to start"
cnt=$((cnt+1))
if [[ $cnt -gt 5 ]]; then
echo "Job submission taking too long please check!"
exit 1
fi
# For other error on the log file, checking the keyword
if grep -q "Errno" ~/output.log || grep -q "Error" ~/output.log; then
echo "Error submitting Dataflow job, please check!!"
exit 1
fi
sleep 30
done
The submit_beam_job.sh script looks like this:
#!/bin/bash
# This submits individual beam jobs for each topics
export PYTHONIOENCODING=utf8
# Beam pipeline for carlistingprimaryimage table
python3 ~/dataflow.py \
--input_subscription=projects/data-warehouse/subscriptions/cars \
--window_size=2 \
--num_shards=2 \
--runner=DataflowRunner \
--temp_location=gs://bucket/beam-python/temp/ \
--staging_location=gs://bucket/beam-python/binaries/ \
--max_num_workers=5 \
--project=data-warehouse \
--region=us-central1 \
--gcs_project=data-warehouse \
--setup_file=~/setup.py

Git post-receive hook, send curl commit message to Discord Webhook

I am trying to post an informational message in our Discord every time someone pushes to master.
I have a post-receive bash script looking like this:
#!/bin/bash
while read oldrev newrev ref
do
if [[ $ref =~ .*/master$ ]];
then
tail=$(git log -1 --pretty=format:'%h %cn: %s%b' $newrev)
url='https://discordapp.com/api/webhooks/validapikey'
curl -i \
-H "Accept: application/json" \
-H "Content-Type:application/json" \
-X POST \
--data '{"content": "'$tail'"}' $url
fi
done
If I output tail to a file I get the expected string
6baf5 user: last commit message
but the message does not get posted on Discord.
If I replace $tail with "hello" it gets posted.
3 suggestions:
a) -d "{\"content\": \"${tail}\"}"
b) You can write this in the same language your project is in, like Python or NodeJS, which is always better than bash (use the same name and make it executable)
c) To avoid having to maintain this on each dev machine, you can version this logic inside your repo using https://pypi.org/project/hooks4git or any other tool that provides git hook management.
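Expanding on suggestion (a): the underlying problem is that the commit message can contain quotes or newlines that break the hand-built JSON. One way to sidestep that entirely (a sketch, assuming jq is available on the server) is to let jq build the payload:

```shell
# Build the JSON payload with jq so special characters in the commit
# message are escaped correctly (the message here is just an example).
tail=$(printf '6baf5 user: a "quoted" message\nwith a newline')
payload=$(jq -n --arg content "$tail" '{content: $content}')
echo "$payload"
# then post it, where $url is the webhook URL from the script above:
# curl -H "Content-Type: application/json" -X POST -d "$payload" "$url"
```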

If proxy is down, get a new one

I'm writing my first bash script:
LANG="en_US.UTF8" ; export LANG
PROXY=$(shuf -n 1 proxy.txt)
export https_proxy=$PROXY
RUID=$(php -f randuid.php)
curl --data "mydata${RUID}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt"
This script also uses curl, but if the proxy is down it gives me this error:
failed to connect PROXY:PORT
How can I make the bash script run again, so it can get another proxy address from proxy.txt?
Thanks in advance
Run it in a loop until the curl succeeds, for example:
export LANG="en_US.UTF8"
while true; do
PROXY=$(shuf -n 1 proxy.txt)
export https_proxy=$PROXY
RUID=$(php -f randuid.php)
curl --data "mydata${RUID}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt" && break
done
Notice the && break at the end of the curl command.
That is, if the curl succeeds, break out of the infinite loop.
If you have multiple curl commands and you need all of them to succeed,
then chain them all together with &&, and add the break after the last one:
curl url1 && \
curl url2 && \
break
Lastly, as @Inian pointed out,
you could use the --proxy flag to pass a proxy URL to curl without the extra step of setting https_proxy, for example:
curl --proxy "$(shuf -n 1 proxy.txt)" --data "mydata${RUID}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt"
Also note that due to the randomness, the same proxy may come up more than once before you find one that works.
To avoid that, you could iterate over the shuffled proxies instead of using an infinite loop:
export LANG="en_US.UTF8"
shuf proxy.txt | while read -r proxy; do
ruid=$(php -f randuid.php)
curl --proxy "$proxy" --data "mydata${ruid}" --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt" && break
done
I also lowercased your user-defined variables,
as capitalization is not recommended for those.
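One more caveat: if every proxy in the list is down, the shuf | while loop above simply ends without a ticket. A bounded variant that reports that case explicitly might look like this (a sketch; try_proxy just mirrors the curl call above, and the URL is the question's placeholder):

```shell
# One attempt with a given proxy; mirrors the curl invocation above.
try_proxy() {
  curl --proxy "$1" --silent --fail --data "mydata${ruid}" \
    --user-agent "myuseragent" https://myurl.com/url -o "ticket.txt"
}

# Try each proxy read from stdin once; succeed on the first that works.
fetch_with_proxies() {
  local proxy
  while read -r proxy; do
    try_proxy "$proxy" && return 0
  done
  echo "no working proxy found" >&2
  return 1
}

# usage: shuf proxy.txt | fetch_with_proxies || exit 1
```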
I know I accepted @janos' answer, but since I can't edit his, I'm going to add this:
response=$(curl --proxy "$proxy" --silent --write-out "\n%{http_code}\n" https://myurl.com/url)
status_code=$(echo "$response" | sed -n '$p')
html=$(echo "$response" | sed '$d')
case "$status_code" in
200) echo 'Working!'
;;
*)
echo 'Not working, trying again!'
exec "$0" "$@"
;;
esac
This will run my script again if it returns a non-200 status code (e.g. the 503 I was getting), which is what I wanted :)
And with @janos' code it will run again if the proxy is not working.
Thank you everyone, I achieved what I wanted.

Triggering builds of dependent projects in Travis CI

We have our single page javascript app in one repository and our backend server in another. Is there any way for a passing build on the backend server to trigger a build of the single page app?
We don't want to combine them into a single repository, but we do want to make sure that changes to one don't break the other.
Yes, it is possible to trigger another Travis job after a first one succeeds. You can use the trigger-travis.sh script.
The script's documentation tells how to use it -- set an environment variable and add a few lines to your .travis.yml file.
It's possible, yes, and it's also possible to wait for the related build's result.
I discovered trigger-travis.sh from the previous answer, but before that I had implemented my own solution (for full working source code, cf. the pending pull request PR196 and the live result).
References
Based on the Travis API v3 documentation:
trigger a build: triggering-builds
get build information: resource/builds
You will need a Travis token, and to set it up as a secret environment variable in the Travis portal.
Following these docs, I was able to trigger a build and wait for it.
1) make .travis_hook_qa.sh
(extract) - to trigger a new build:
REQUEST_RESULT=$(curl -s -X POST \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Travis-API-Version: 3" \
-H "Authorization: token ${QA_TOKEN}" \
-d "$body" \
https://api.travis-ci.org/repo/${QA_SLUG}/requests)
(it's the trigger-travis.sh equivalent). You can customize the build definition via $body.
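The $body variable isn't shown in the extract; per the Travis v3 "triggering-builds" doc referenced above, one possible shape is the following (the branch and message values here are placeholders, not from the original answer):

```shell
# Minimal request body for the Travis v3 trigger-a-build endpoint
# (POST /repo/{slug}/requests).
body='{
  "request": {
    "branch": "master",
    "message": "triggered by upstream build"
  }
}'
echo "$body"
```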
2) make .travis_wait_build.sh
(extract) - to wait for a just-created build and get its info:
BUILD_INFO=$(curl -s -X GET \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Travis-API-Version: 3" \
-H "Authorization: token ${QA_TOKEN}" \
https://api.travis-ci.org/repo/${QA_SLUG}/builds?include=build.state\&include=build.id\&include=build.started_at\&branch.name=master\&sort_by=started_atdesc\&limit=1 )
BUILD_STATE=$(echo "${BUILD_INFO}" | grep -Po '"state":.*?[^\\]",'|head -n1| awk -F "\"" '{print $4}')
BUILD_ID=$(echo "${BUILD_INFO}" | grep '"id": '|head -n1| awk -F'[ ,]' '{print $8}')
You will have to wait until your timeout expires or the expected final state is reached.
Reminder: possible Travis build states are created|started (and then) passed|failed

I need to execute a Curl script on Jenkins so as to check the status URL

The script should check the HTTP status code for the URL and show an error when the status code doesn't match the expected one, e.g. 200.
In Jenkins, if this script fails, the build should fail and a mail should be triggered through the post-build procedure.
Another interesting feature of curl is its -f/--fail option. If set, it tells curl to fail on any HTTP error, i.e. curl will have an exit code different from 0 if the server's response status code was not 1xx/2xx/3xx, i.e. if it was 4xx or above, so
curl --silent --fail "http://www.example.org/" >/dev/null
or (equivalently):
curl -sf "http://www.example.org/" >/dev/null
would have an exit code of 22 rather than 0, if the URL could not be found or if some other HTTP error occurred. See man curl for a description of curl's various exit codes.
You can use a simple shell command, as referred to in this answer:
curl -s -o /dev/null -w "%{http_code}" http://www.example.org/
The build will fail if the following shell script is added as a build step:
response=$(curl -s -o /dev/null -w "%{http_code}\n" http://www.example.org/)
if [ "$response" != "200" ]
then
exit 1
fi
exit 1 will mark the build as failed.
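The same check can be wrapped in a small function, for example to also accept redirects and other non-200 success codes if you want a looser rule. This is a sketch, not from the original answer; the URL in the usage note is a placeholder:

```shell
# Return 0 for 2xx/3xx responses, 1 (and a message) otherwise.
check_url() {
  local code
  code=$(curl -s -o /dev/null -w "%{http_code}" "$1")
  case "$code" in
    2??|3??) echo "OK ($code)"; return 0 ;;
    *) echo "Unhealthy status: $code"; return 1 ;;
  esac
}

# usage in the Jenkins shell step:
# check_url "http://www.example.org/" || exit 1
```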
Jenkins also has the HTTP Request Plugin that can trigger HTTP requests.
For example this is how you can check response status and content:
def response = httpRequest "http://httpbin.org/response-headers?param1=${param1}"
println('Status: '+response.status)
println('Response: '+response.content)
You could try:
response=`curl -k -s -X GET --url "<url_of_the_request>"`
echo "${response}"
How about passing the URL at run time using curl in a bash script:
URL=www.google.com
curl --location --request GET "$URL"
How can we pass the URL at runtime?
