Trigger remote Jenkins job with a specific build id - bash

I have a pipeline that triggers multiple builds on a remote Jenkins job.
The problem is that when multiple builds are triggered at the same time, the following command:
JENKINS_BUILD_ID=$(curl -k --silent --netrc-file "$HOME/.netrc" "${JENKINS_URL}/job/${JOB_NAME}/api/json" | grep -Po '"nextBuildNumber":\K\d+')
returns the same build number for all of them (for example, 125 as the next build id).
In the Jenkins job configuration, I set the build name to:
${ENV,var="WORKFLOW_NAME"}
But in the API, to get logs, I need to use that build number (for example 125). Is there a way to use a parameter in the API (jenkins_job/WORKFLOW_NAME instead of the default build number), where WORKFLOW_NAME is the parameter passed with the remote trigger:
curl -k -X POST --netrc-file "$HOME/.netrc" "${JENKINS_URL}/job/${JOB_NAME}/buildWithParameters" \
-F WORKFLOW_NAME="$WORKFLOW_NAME"
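One way to avoid racing on nextBuildNumber altogether is to capture the queue item URL that Jenkins returns in the Location header of the buildWithParameters response, and then poll that queue item until the build actually starts and gets its number. A rough sketch, reusing the JENKINS_URL/JOB_NAME/.netrc setup from above (the 5-second poll interval is arbitrary and there is no timeout handling here):
# Trigger the build and capture the queue item URL from the Location header
QUEUE_URL=$(curl -k --silent -i -X POST --netrc-file "$HOME/.netrc" \
  "${JENKINS_URL}/job/${JOB_NAME}/buildWithParameters" \
  -F WORKFLOW_NAME="$WORKFLOW_NAME" \
  | grep -i '^Location:' | tr -d '\r' | awk '{print $2}')
# Poll the queue item until it exposes the started build's number
JENKINS_BUILD_ID=""
until [ -n "$JENKINS_BUILD_ID" ]; do
  sleep 5
  JENKINS_BUILD_ID=$(curl -k --silent --netrc-file "$HOME/.netrc" "${QUEUE_URL}api/json" \
    | grep -Po '"executable":\{[^}]*"number":\K\d+' || true)
done
echo "Triggered build number: $JENKINS_BUILD_ID"
Each concurrent trigger gets its own queue item, so the build numbers no longer collide.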

Related

How to generate a composite HTML report for distributed load testing in JMeter?

Context: I am running a JMeter load test on a distributed system with a 1:2 master-slave ratio, using the following command:
jmeter -n -t "home/jmeterscripts/EventGridScript.jmx" -R slave1:1099,slave2:1099 -l "home/jmeterscripts/Result.csv" -e -o "home/jmeterscripts/HTMLReports"
Will the results be written to the same report.html? I am asking because I am getting an error about Result.csv already being present from slave2 while reporting. How do I handle this? I didn't find a similar post.
You're getting this message because home/jmeterscripts/Result.csv already exists. If you don't need the results file from the previous run, add the -f command-line argument to your command line:
jmeter -n -t "home/jmeterscripts/EventGridScript.jmx" -R slave1:1099,slave2:1099 -f -l "home/jmeterscripts/Result.csv" -e -o "home/jmeterscripts/HTMLReports"
-f, --forceDeleteResultFile
force delete existing results files and web report folder if present before starting the test
The results are not stored on the slaves: the slaves send test metrics to the master, and the master collects the statistics from all of them, so no matter how many slaves you have you always get a single .jtl results file and a single HTML Reporting Dashboard.
More information: How to Perform Distributed Testing in JMeter
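As a side note, if you already have a consolidated results file and only want to (re)build the dashboard, JMeter can generate it from an existing CSV/JTL without re-running the test. The paths below reuse the ones from the question, and the output folder name is just an example; it must be empty or non-existent:
# Generate the HTML dashboard from an existing results file
jmeter -g "home/jmeterscripts/Result.csv" -o "home/jmeterscripts/HTMLReports2"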

JMeter Distributed Runner not able to Generate the Consolidated Reports

When I run a simple standalone JMeter script from the command line as below
jmeter -n -t your_script.jmx
This generates a CSV file which contains all the data related to the execution.
However, when the same JMeter file is executed for distributed load testing with multiple JMeter server IP addresses, which simulate the given number of users against the target server, the jmeter.csv file is not generated (although the command runs successfully).
The command I have used for distributed execution is
jmeter -n -t script.jmx -R IP_address1, IP_address2,...
Now, I should get a consolidated jmeter.csv file from this execution, but I am not getting one.
The same is the case with the JMeter API DistributedRunner class: we are not getting the consolidated jmeter.csv file and reports.
This command:
jmeter -n -t your_script.jmx
does not generate any CSV file; you need to add the -l command-line argument and provide the desired results file location, like:
jmeter -n -t your_script.jmx -l jmeter.csv
The same applies for distributed testing:
jmeter -n -t script.jmx -R IP_address1,IP_address2 -l jmeter.csv
If you provide the -l command-line argument but still aren't getting any results, most probably your script execution fails somewhere on the remote slaves. Follow the checklist below to get to the bottom of the failure:
Inspect the jmeter.log file on the master machine and jmeter-server.log on the remote slaves; if something goes wrong, you will most probably find the cause in the log files
Make sure that the JRE version is the same on the master and the slaves
Make sure that the JMeter version is the same on the master and the slaves; it's recommended to use the latest JMeter version where possible
If the test relies on any of the JMeter Plugins, make sure to install them all on all slave machines. The plugins can be installed using the JMeter Plugins Manager
If your test is using the CSV Data Set Config, you will need to copy the CSV file to all slaves manually
If your test needs any additional JMeter Properties, you will need to supply them via the -J or -D command-line arguments on all the machines, or via the -G command-line argument on the master (see the example below)
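A minimal illustration of the last point, assuming placeholder host names and a "threads" property that the test plan reads via __P(threads):
# -G sends the property to all remote slaves when the test is started from the master
jmeter -n -t script.jmx -R IP_address1,IP_address2 -Gthreads=50 -l jmeter.csv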

JMeter Master CommandLine Run is not passing the updated values from -J flag to the slaves

I have a single master and 5 slave agents. I am starting my test using the command line option from the master by specifying the slave machines using the -R option.
$JMETER_HOME/current/bin/jmeter -n -t test.jmx -R host1,host2 -l testresult.jtl -Jthreads=$THREADS -Jrampup=$RAMPUP -Jtestduration=$TESTDURATION -JENV=$ENV -e -o ./testreport
I see that the new values that are passed in the command line using the -J switch are not getting applied when the test plan is transferred to the slave machines. Slaves are using only the hardcoded values in the JMX.
According to the JMeter documentation on Overriding Properties Via The Command Line:
-J[prop_name]=[value]
defines a local JMeter property.
-G[prop_name]=[value]
defines a JMeter property to be sent to all remote servers.
So, you need to use the -G flag for a JMeter property to be sent to all remote servers.
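Applied to the command from the question, that would look roughly like this (keeping the same variables; whether ENV also needs to stay as a local -J property on the master depends on how your plan uses it):
$JMETER_HOME/current/bin/jmeter -n -t test.jmx -R host1,host2 -l testresult.jtl -Gthreads=$THREADS -Grampup=$RAMPUP -Gtestduration=$TESTDURATION -GENV=$ENV -e -o ./testreport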

TeamCity: Disable build trigger for all TeamCity projects

I would like to ask: is there any way to disable the build triggers for all TeamCity projects by running a script?
I have a scenario where I need to disable all the build triggers to prevent builds from running. This is because sometimes I need to perform upgrades on the build agent machines that take more than one day.
I do not wish to manually click the Disable button for every build trigger in every TeamCity project. Is there a way to automate this process?
Thanks in advance.
Use the TeamCity REST API.
Assuming your TeamCity is deployed at http://dummyhost.com and you have enabled guest access with the system admin role (otherwise switch from guestAuth to httpAuth in the URL and specify a user and password in the request; details are in the documentation), you can do the following:
Get all build configurations
GET http://dummyhost.com/guestAuth/app/rest/buildTypes/
For each build configuration get all triggers
GET http://dummyhost.com/guestAuth/app/rest/buildTypes/id:***YOUR_BUILD_CONFIGID***/triggers/
For each trigger disable it
PUT http://dummyhost.com/guestAuth/app/rest/buildTypes/id:***YOUR_BUILD_CONFIGID***/triggers/***YOUR_TRIGGER_ID***/disabled
See full documentation here
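A rough bash sketch of those three steps (guestAuth as above; the grep-based XML parsing is only illustrative, a more robust script would use a proper XML/JSON parser):
TEAMCITY_HOST=http://dummyhost.com
# 1. List all build configuration ids
for BT in $(curl --silent "$TEAMCITY_HOST/guestAuth/app/rest/buildTypes/" | grep -Po '(?<=buildType id=")[^"]*'); do
  # 2. List all trigger ids for this build configuration
  for TRIGGER in $(curl --silent "$TEAMCITY_HOST/guestAuth/app/rest/buildTypes/id:$BT/triggers/" | grep -Po '(?<=trigger id=")[^"]*'); do
    # 3. Disable the trigger
    curl --silent --request PUT "$TEAMCITY_HOST/guestAuth/app/rest/buildTypes/id:$BT/triggers/$TRIGGER/disabled" \
      --header "Content-Type: text/plain" --data "true"
  done
done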
You can pause the build queue. See this video. This way you needn't touch the build configurations at all; you're just bringing the TC to a halt.
For agent-specific upgrades, it's best to disable only the agent you're working on. See here.
Neither of these is "by running a script" as you asked, but I take it you were only asking for a scripted solution to avoid a lot of GUI clicking.
Another solution might be to simply disable the agent, so no more builds will run.
Here is a bash script to bulk-pause all (not yet paused) build configurations matching a project name pattern via the TeamCity REST API:
TEAMCITY_HOST=http://teamcity.company.com
CREDS="-u domain\user:password"
# List all build configurations, keep those whose project name matches the regex
# and that are not already paused, extract their ids, then pause each of them
curl $CREDS --request GET "$TEAMCITY_HOST/app/rest/buildTypes/" \
  | sed -r "s/(<buildType)/\n\\1/g" | grep "Project Name Regex" \
  | grep -v 'paused="true"' | grep -Po '(?<=buildType id=")[^"]*' \
  | xargs -I {} curl -v $CREDS --request PUT "$TEAMCITY_HOST/app/rest/buildTypes/id:{}/paused" --header "Content-Type: text/plain" --data "true"
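To resume the builds afterwards, the same paused endpoint accepts "false"; for a single configuration it would look like this (MyConfigId is a placeholder, TEAMCITY_HOST and CREDS as above):
curl $CREDS --request PUT "$TEAMCITY_HOST/app/rest/buildTypes/id:MyConfigId/paused" --header "Content-Type: text/plain" --data "false"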

"'resource' is not recognized as an internal or external command" while POSTing using Web Service API in SonarQube 5.1.2

I am using SonarQube 5.1.2 on Windows 7 Professional. I am using the Web Service API over cURL 7.32.0 (x86_64-pc-win32). I want to upload sonar.exclusions and a few more such properties for a specific project using POST.
I use curl -u admin:admin -X POST http://localhost:9512/api/properties/?id=sonar.exclusions -v -T "D:\sonar-exclusions.xml" and I am able to POST it as a global sonar.exclusions.
Whereas if I use resource to post it to a specific project with the command curl -u admin:admin -X POST http://localhost:9512/api/properties/?id=sonar.exclusions&resource=org.myProject:myProject -v -T "D:\sonar-exclusions.xml" I get the error:
{"err_code":200,"err_msg":"property created"}'resource' is not recognized as an internal or external command, operable program or batch file
What's going wrong with the resource parameter here?
The problem is with the & in the URL: it's interpreted by your command line prompt as: run this command:
curl -u admin:admin -X POST http://localhost:9512/api/properties/?id=sonar.exclusions
and then run this command:
resource=org.myProject:myProject -v -T "D:\sonar-exclusions.xml"
The first one returns {"err_code":200,"err_msg":"property created"} while the second one is bound to fail with:
'resource' is not recognized as an internal or external command, operable program or batch file
You should either escape the & or simply put the URL between "quotes".
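For example, quoting the URL is enough on Windows cmd (alternatively the & can be escaped as ^&); the project key here is the one from the question:
curl -u admin:admin -X POST "http://localhost:9512/api/properties/?id=sonar.exclusions&resource=org.myProject:myProject" -v -T "D:\sonar-exclusions.xml"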
